
The AI Regulation and what to expect from it?

 

1. Introduction.

Artificial intelligence has literally become the new reality of the third millennium: it is now permanently present as a technological solution in numerous devices, businesses and social services, and its "face" peeks out from industry and medicine as well. All this necessitates the construction of an adequate legislative framework not only at the national but also at the multinational level, with a clear awareness of the complexity of the legal and technological problems involved, as well as of the underlying human, philosophical, social and purely pragmatic arguments "for" and "against" that should be reflected in the normative framework in question. I am happy to note that Europe has responded quickly to this new challenge, and work on comprehensive, pan-European legislative planning started several years ago.

In December 2023, European Union parliamentarians agreed on a sweeping new act to regulate artificial intelligence - one of the world's first comprehensive attempts to limit the use of a rapidly developing technology with far-reaching societal and economic implications. The Regulation laying down harmonized rules on artificial intelligence (also called the "Artificial Intelligence Act" or, from the English, the "AI Act") set a new global benchmark for the EU member states, which are determined to lay the legal foundations for the potential benefits of the technology while guarding against the possible risks associated with it, such as the automation of jobs, the spread of misinformation online, and threats to national security. The Regulation is currently going through several final steps of legal agreement and fine-tuning in the European Parliament, but the existing political agreement means that its main guidelines are already set.

At this stage, the work of European parliamentarians is focused on defining the riskiest uses of artificial intelligence by companies and governments, including those for law enforcement and for the management of key services such as water and energy. The agreement reached and embedded in the texts of the Regulation also means that the makers of the largest general-purpose artificial intelligence systems, such as those powering the chatbot ChatGPT, will face new transparency requirements. Chatbots and software that create manipulated images - so-called "deep fakes" - will be required to clearly indicate that what people see is generated by artificial intelligence, according to EU officials and earlier drafts of the law.

On the other hand, it should be emphasized that, for security reasons, the use of AI software to recognize individuals for forensic purposes by the police, as well as for security purposes by governments, will be limited to certain exceptions related to the safety of citizens and national security. Specific sanctions for violating the Regulation are also foreseen, and companies that breach its provisions can be fined up to 7% of their global sales. Although the Regulation has been hailed as a regulatory breakthrough for the world's cutting-edge technology, questions remain about its effectiveness. Many of the legal norms are expected to enter into force only after an anticipated vacatio legis of 12 to 24 months - a period that will, on the one hand, give a significant boost to machine learning and to the development of artificial intelligence and, on the other, allow an accurate and clear awareness and understanding of the legal issues involved. That is why, literally until the last minute of the negotiations, European politicians fought over the wording of the Regulation and over how to strike the best balance between promoting innovation and the need to anticipate all possible indirect legal and technological harms.

The consensus reached on the main texts of the Regulation laying down harmonized rules on artificial intelligence took three days of negotiations in Brussels, which began on 5 December 2023 with an initial twenty-two-hour session of the European Parliament. The final agreement was not immediately disclosed, as talks were expected to continue behind the scenes to finalize technical details, which could delay final adoption. The final vote is expected in 2024 and must be held in the Parliament and in the Council, which consists of representatives of the 27 member states of the Union.

It is fair to note that the regulation of artificial intelligence in fact gained urgent relevance only after the launch of ChatGPT in 2022, whose functionality and applications opened a large-scale discussion about the possible consequences and turned it into a global sensation, demonstrating the technological achievements of artificial intelligence in the 21st century. The economic impact of introducing the technology into industry, business, social services, civil society and robotics is predicted to be worth trillions of dollars, as artificial intelligence is expected to change the global understanding of the "economy". On the subject, Jean-Noël Barrot, France's minister for digital technologies, stated that today technological dominance precedes economic and political dominance - a fact that is more than obvious.

 

2. Background of the legislative process.

As I already noted, on December 8, 2023 the European Parliament officially reached a preliminary agreement with the Council on the Regulation laying down harmonized rules on artificial intelligence (the "AI Act"). With this agreement, the European Union in fact introduces the world's first comprehensive legislative act on artificial intelligence. This Regulation, remarkable for its legal and technological reach, is currently in the final stages of its adoption, as it still has to be formally ratified by the Parliament and the Council. By its legal nature, the AI Regulation aims to safeguard the well-being and fundamental rights of EU citizens, setting a precedent for the global governance of AI.

In April 2021, the European Commission launched for the first time the debate on proposals for a European Union regulatory framework concerning artificial intelligence ("AI"). The main objective was to classify different AI systems according to the risk they would pose to users. The different levels of risk entail more or less extensive regulation, which, as a practical approach, corresponds to how the technologies are actually used. Once formally adopted, the AI Regulation will be the world's first set of legal rules governing AI technologies. Below I would like to present in full the adoption process, the content and the potential issues related to the AI Regulation.

 

2.1. The Regulation on artificial intelligence and its goals.

The AI Regulation is a legal framework that governs the placing on the market and the use of AI in the EU in various scenarios. Its main objective is to ensure the proper functioning of the EU's internal market by establishing consistent standards for AI systems across EU member states. The proposal fulfills the political commitment of the President of the European Commission, Ursula von der Leyen, who announced in her political guidelines for the 2019-2024 term, "A Union that strives for more", that the Commission would propose legislation for a coordinated European approach to the human and ethical aspects of AI. Following this announcement, on 19 February 2020 the Commission published the White Paper on Artificial Intelligence - "A European approach to excellence and trust". The White Paper sets out policy options on how to achieve the dual objective of promoting the uptake of AI and addressing the risks associated with certain applications of these technologies.

The Regulation's task is to propose a legal framework for trustworthy AI and to fulfill the second objective of creating an ecosystem of trust among EU citizens in relation to the new technologies. The proposal is based on EU values and fundamental rights and aims to give individuals and other users the confidence to adopt solutions based on AI technologies, while encouraging entrepreneurs to develop such solutions. The basic concept of the Regulation is that AI should be a tool at the disposal of people and a driving force for the good of society, with the ultimate goal of achieving greater human well-being. Legal rules concerning AI marketed in the Union, or otherwise affecting persons in the Union, should therefore be human-centred, so that people can be confident that the technology is used in a safe and lawful manner, including with respect for the fundamental rights of the individual.

Following the publication of the White Paper, the Commission initiated a broad consultation with stakeholders. It met with great interest from a large number of stakeholders, who generally supported regulatory intervention to address both the challenges and the concerns raised by the growing use of AI.

In practice, the Regulation is the first comprehensive piece of legislation to address AI risks through a set of rights and obligations, as well as requirements and sanctions, designed to protect the health, safety and fundamental rights of citizens in the EU and beyond; in other words, this legislation is expected to influence global AI governance internationally.

The European Parliament's priority in this context is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly, and that the rights of European citizens are safeguarded. To prevent unpredictable, indirect results, the European legislator stipulates that AI systems should be overseen by people rather than left solely to automated software. In many countries a debate has also opened on the philosophy of the algorithms behind artificial intelligence and on how to align them with the basic principles of humanity, justice and legality. In this line of thinking, the European Parliament has adopted a uniform, technology-neutral definition of AI that can be applied, in an abstract sense, to all future AI systems.

 

2.2. The adoption process.

Originally proposed by the European Commission in April 2021 [1], the common approach to artificial intelligence was adopted by the Council in 2022 [2], and on 14 June 2023 Members of the European Parliament adopted Parliament's negotiating position on the Artificial Intelligence Regulation. Final discussions on the final form of the document are currently taking place between the EU member states, the Parliament and the Council within the framework of the so-called "trilogue" (tripartite negotiations).

The tentative agreement was formally reached on 9 December 2023 between the Council and the Parliament, with the only step remaining in the legislative process being for Parliament's Internal Market and Civil Liberties Committees to vote jointly on the agreement at an upcoming meeting. All of this logically means that the European legislation on artificial intelligence will finally be adopted in early 2024, which is expected to happen before the European Parliament elections in June. Its adoption will be followed by a transitional period (vacatio legis) of at least 18 months before the AI Regulation starts to apply in full. However, there are still significant differences in the proposals, especially those concerning detailed technical provisions, as well as those regarding the legal definitions of some processes and their implementation.

 

3. Dynamics of the legislative process.

The AI Regulation covers systems that are "placed on the market, put into service or already in use in the EU" [3]. This means, at an abstract level, that this pan-European normative act will apply not only to businesses, developers and natural persons - users in the EU, but also to global suppliers who sell, or otherwise make available to EU users, their own technologies functioning through artificial intelligence, or the results of those technologies, with three exceptions: AI developed for military purposes; research AI; and free and open-source AI systems and components - a term that is not yet clearly defined.

In this context, the provisions of the AI Regulation have several main objectives:

- to define the risks posed by AI;

- to define the risk categories;

- to establish clear requirements and obligations for systems operating through AI and their suppliers;

- to propose assessment and enforcement mechanisms targeting AI systems;

- to propose a structure for the management of systems working with artificial intelligence at European and national level.

In summary, it can be said that the main objective of the AI Regulation is to establish obligations for providers and users depending on the level of risk posed by artificial intelligence, with these levels divided into four risk categories:

 

3.1. Artificial intelligence systems with an unacceptable level of risk to human safety. Systems that pose this level of risk are as follows:

- systems concerning cognitive-behavioral manipulation of people or specific vulnerable groups;

- social rating systems: classifying people based on behavior, socioeconomic status or personal characteristics;

- real-time and remote biometric identification systems, such as facial recognition;

- biometric categorization systems that use sensitive characteristics (e.g. political, religious or philosophical beliefs, sexual orientation, race);

- systems for non-targeted extraction of images of persons from the Internet or recordings from video surveillance to create databases for facial recognition;

- emotion recognition systems at the workplace and in educational institutions;

- artificial intelligence systems that manipulate people's behavior to bypass their free will;

- artificial intelligence systems used to manipulate people's vulnerability (due to their age, disability, social or economic situation).

All such systems that contain, or work through, artificial intelligence as a method of evaluating information are prohibited [4]. Naturally, fines are also foreseen for participation in prohibited practices in the field of AI. Such conduct can result in a fine of up to €40 million or up to 7% of the company's global annual turnover, whichever is higher. A principle of proportionality of fines is also introduced, which will take into account the market position of suppliers of goods and services powered by artificial intelligence, suggesting that there could be more flexible rules for start-ups. The amount of the fines is still subject to discussion in the tripartite consultations.
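To make the "whichever is higher" rule concrete, the following is a minimal illustrative sketch in Python, assuming the figures cited above (€40 million or 7% of global annual turnover); since the final amounts are still under discussion in the trilogue, the constants are placeholders rather than settled law.

def prohibited_practice_fine_ceiling(global_annual_turnover_eur: float) -> float:
    # Ceiling of a fine for prohibited AI practices, using the draft figures
    # discussed above: EUR 40 million or 7% of global annual turnover,
    # whichever is higher. Both constants are assumptions taken from the draft.
    FIXED_CAP_EUR = 40_000_000
    TURNOVER_SHARE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * global_annual_turnover_eur)

# Example: a supplier with EUR 2 billion in global annual turnover faces a
# ceiling of EUR 140 million, since 7% of turnover exceeds EUR 40 million.
print(prohibited_practice_fine_ceiling(2_000_000_000))  # 140000000.0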

 

3.2. High-risk systems that have a negative impact on safety or fundamental rights. Systems that present this level of risk are:

- Artificial intelligence systems that are used in products covered by EU product safety legislation. This includes toys, aviation, automobiles, medical devices and elevators;

- artificial intelligence systems falling into eight specific areas that will need to be registered in an EU database.

The Regulation subjects these high-risk systems to an authorization regime, with their impact assessed both before and after they are placed on the market. This evaluation will include rigorous testing, documentation of data quality, and an accountability framework that includes human oversight. Providers of high-risk AI will have to register their technologies in an EU database managed by the Commission before they can be placed on the market. Non-EU suppliers will need to have an authorized representative in the EU to demonstrate compliance and to carry out post-market surveillance of such technologies.

 

3.3. Systems with limited risk. Systems representing this level of risk are:

- Artificial intelligence systems that generate or manipulate images, audio and video content, and chatbots.

Such systems are permitted, subject to minimal technological transparency measures that allow users to make informed choices. After interacting with an application, the user can decide whether to continue using it. Users should also be informed when they are interacting with this type of artificial intelligence. The transparency requirements comply with the following rules:

- disclosure of the fact that the available content is generated by AI;

- designing the AI model to prevent the generation of illegal content [5];

- publication of summaries of copyrighted data used for training.

 

3.4. Artificial intelligence systems with minimal (low) risk. Systems representing this level of risk are:

- generative artificial intelligence systems (e.g. ChatGPT). These systems are not subject to any specific legal or technological restrictions, as their use generally brings only benefits. The current development of ChatGPT indicates that it is used mainly for collecting and analyzing information and cannot, by itself, produce a scientific contribution from the compiled databases; speculation to the contrary is unfounded. However, the use of artificial intelligence in science is yielding better and better results precisely in analysis, and one of the latest achievements on the subject is the discovery of new cancer drugs with the help of artificial intelligence.

 

4. Conclusion.

Despite the initial skepticism towards technologies and systems containing artificial intelligence, my humble opinion, as someone who deals professionally with innovation, is that they mark the future. Admittedly, early in the legislative process a number of leading European business leaders withdrew their support for the EU's proposed AI Regulation, claiming that it could harm EU competitiveness and lead to an outflow of investment. The motive behind these objections was the view that the draft rules go too far, particularly in regulating generative artificial intelligence models - the technology behind popular platforms such as ChatGPT.

Research by the European Parliament itself has refuted this view, with parliamentarians such as René Repasi declaring that there is no cause for concern, since the European market of over 450 million users is too attractive for AI providers to bypass, and too important for Parliament not to regulate.

On the subject, MEPs also envisioned legal instruments on the basis of which EU citizens can file complaints about artificial intelligence systems and receive explanations of decisions based on high-risk artificial intelligence systems that significantly affect their basic civil rights. Members of the European Parliament also sought to reshape the role of the EU Office for Artificial Intelligence, which will strictly monitor the implementation of the Regulation.

Naturally, the Regulation on artificial intelligence also provides for exemptions for law enforcement authorities when using artificial intelligence. Here, a number of safeguards and narrow exceptions are provided for the use of biometric identification systems (BIS) in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorization and to strictly defined lists of offences. Remote biometric identification via artificial intelligence will be used strictly for the targeted search of a person convicted of, or suspected of, committing a serious crime, and such law enforcement actions will be subject to strict conditions, with their use limited in time and place, for the purposes of:

- targeted search for victims (kidnapping, trafficking, sexual exploitation);

- preventing a specific and present terrorist threat;

- locating or identifying a person suspected of committing any of the specific crimes listed in the Regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization, crimes against the environment).

Having all these tightly regulated exceptions to the general principles of respect for basic civil rights, transparency and accountability when using artificial intelligence technologies gives me confidence that these technologies are indeed the future of fairer law enforcement, public services, research and development, healthcare and research into severe clinical problems (including remote surgery), industrial and household robotics, genetics and high-tech agriculture. My impatience for the adoption of the Regulation is huge, driven also by professional considerations - for example, how it will be applied in the not-so-innovative Bulgarian "justice" system, which still believes, for instance, that a domain name cannot be the subject of a trademark and takes no steps towards changing that "view" in the third millennium. This very "justice" may very soon be assisted by artificial intelligence, and who knows - tomorrow, for this very reason, there may logically be no judges at all, because artificial intelligence will not make such a mistake, but will simply analyze a legislative act and render a decision! The same will happen, in my opinion, to notaries and private bailiffs, whose work can easily be written into a logical AI algorithm. I am personally in favor!

 

 

 

Author: Mr. Atanas Kostov - attorney at law

 

 

[1] The original document, the "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union", Brussels, 21.4.2021, 2021/0106 (COD), can be found at the following link: eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206;

[2] See "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union", Brussels, 25.11.2022, 14954/22, link: data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf;

[3] For more details on the subject, see the "Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain legislative acts of the Union", Brussels, 21.4.2021, 2021/0106 (COD), link: eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206.

[4] Certain exceptions to this rule are also allowed – for example, regarding remote biometric identification systems, where the identification takes place after a significant delay. They will be permitted to be used only for the purpose of prosecuting serious crimes, but only after the express permission of the court.

[5] For example, when using artificial intelligence systems such as chatbots, users must be aware that they are interacting with a machine so that they can make an informed decision to continue or withdraw.