The European Union has agreed the world’s first law governing the development and application of artificial intelligence.

The new Regulation was agreed by co-legislators from the Parliament and the Council after a gruelling 36-hour negotiating session, though technical work is still ongoing. Concessions were made on law enforcement’s use of real-time biometric systems and on transparency requirements for foundation models.

Brussels – Though technical work remains to be completed, the European Union has reached its agreement. The co-legislators of the EU Parliament and Council agreed on the EU Artificial Intelligence Act, the first law of its kind in history, following a 36-hour marathon that began on Wednesday, December 6, ran over two rounds, and ended late on Friday evening, December 8. “The purpose of this regulation is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability against high-risk artificial intelligence, while stimulating innovation and making Europe a leader in the field,” reads the statement from the European Parliament, the institution that pushed hardest for democratic protections during the negotiations in the face of strong pressure from the 27 governments.

From left: Spanish Secretary of State for Digitization and Artificial Intelligence Carme Artigas, European Commissioner for the Internal Market Thierry Breton, and the European Parliament’s co-rapporteurs Brando Benifei (S&D) and Dragoș Tudorache (Dec. 8, 2023)

 

After waiting more than two and a half years to see her cabinet’s proposal endorsed by the co-legislators, European Commission President Ursula von der Leyen exulted, calling the European Union’s Artificial Intelligence Act a “world first, a unique legal framework for the development of AI that can be trusted and for the security and fundamental rights of people and businesses.” “A commitment we made in our political guidelines and that we have kept,” von der Leyen added. Nevertheless, the provisional agreement between the Council and the Parliament does not conclude the legislative procedure. Weeks of technical work will be needed to finalise the Regulation’s details before the text is submitted to the two co-legislators for final approval. The world’s first artificial intelligence law will enter into force two years after it is formally adopted and published in the EU Official Journal.

Since the larger body of provisions, including the requirements on high-risk AI systems, will only apply at the end of a transitional period, the European Commission launched the AI Pact on November 16 to encourage industry to begin implementing the Act’s requirements before the legal deadline, particularly for generative AI systems, and in advance of the June 2024 European elections.

To increase transparency and foster greater confidence, participating companies will sign commitment statements backed by specific ongoing or planned measures, which the EU Commission will publicise. A call for expressions of interest has been issued ahead of stakeholder discussions on draft concepts and best practices, expected in the first half of 2024. The key organisations of the Pact will be invited to publicly announce their initial commitments once the Artificial Intelligence Act is formally approved.

Categorization and bans on artificial intelligence

 


The compromise agreement maintains a horizontal level of protection, with a risk scale governing AI applications on four levels: minimal, limited, high, and unacceptable. Minimal-risk systems would be subject to very light transparency requirements, such as disclosing that content was generated by AI. Before placing a product on the market, the provider of a high-risk system would have to undergo an assessment of its potential impact on fundamental rights, including registration in a dedicated EU database and the preparation of data specifications and technical documentation demonstrating the product’s conformity.

The agreement prohibits the following: cognitive behaviour manipulation systems; government “social scoring”; untargeted collection of facial images from the Internet or CCTV footage to create facial recognition databases; emotion recognition in the workplace and educational institutions; biometric categorization to infer sensitive data (political, religious, philosophical, or sexual orientation); and some instances of predictive policing for individuals. These practices are deemed unacceptable.

Exceptions for law enforcement

This chapter was a source of tension during the protracted deliberations, since it is where the greatest modifications to the Commission’s proposal were agreed. Among the most significant is the emergency procedure, which allows law enforcement agencies to deploy a high-risk artificial intelligence tool that has not passed the evaluation process, coupled with a specific mechanism for the protection of fundamental rights.

Certain exceptions apply even to the use of real-time remote biometric identification systems in public spaces, “subject to judicial authorization and for strictly defined lists of offences.” “Post-remote” use would be restricted to the targeted search of a person convicted of, or suspected of committing, a serious crime. Real-time use, “limited in time and location,” would be permitted to locate or identify a person suspected of specific crimes, such as terrorism, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, and environmental crimes, as well as for targeted searches of victims (of kidnapping, trafficking, or sexual exploitation).

Governance and foundation models

Image created by an artificial intelligence following the instructions “robot making a speech at the EU Parliament.”
The agreement’s wording has been updated with new provisions covering scenarios in which artificial intelligence (AI) systems can be used for a wide range of tasks (so-called “general purpose AI”) and in which AI technology is subsequently incorporated into another system that poses a significant risk. To “oversee the most advanced models, help promote testing standards and practices, and enforce common rules across member states,” the European Commission is setting up an AI Office. The AI Office will in turn receive technical assistance from an advisory group of stakeholders, including representatives of academia, small and medium-sized businesses, industry, and civil society.
Given the vast array of tasks artificial intelligence systems can perform (generating video, text, images, and computer code, natural-language conversation, and computation) and the rapid expansion of their capabilities, “high-impact” foundation models, a kind of generative artificial intelligence trained on broad, generalised, unlabelled data, will need to comply with a series of transparency obligations before being placed on the market: from drawing up technical documentation to publishing detailed summaries of the training content, all while complying with EU copyright rules.

Innovation and Sanctions

To foster innovation, regulatory sandboxes (controlled test environments for artificial intelligence) will allow novel systems to be developed, tested, and validated, even in real-world conditions. The agreement also provides for “limited and clearly specified” supportive measures and exemptions to shield smaller businesses from pressure by dominant market players and to reduce their administrative burden.

And finally, fines. Any natural or legal person may lodge a complaint with the appropriate market surveillance authority over non-compliance with the EU Artificial Intelligence Act. A company that breaches the Regulation will be fined either a predetermined amount or a percentage of its global annual turnover in the previous financial year, whichever is higher: 35 million euros, or 7% of turnover, for using prohibited applications; 15 million euros, or 3%, for violating the Act’s obligations; and 7.5 million euros, or 1.5%, for supplying incorrect information. More proportionate caps will apply to startups and small and medium-sized businesses.
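The “whichever is higher” rule is simple arithmetic. As an illustrative sketch (the tier names and the function are my own labels; only the figures come from the text above), the fine ceiling could be computed like this:

```python
# Illustrative sketch, not legal advice: the Act sets fines as the HIGHER of a
# fixed amount and a percentage of the previous year's global annual turnover.
# Amounts follow the figures quoted in the article; tier names are hypothetical.

FINE_TIERS = {
    "prohibited_application": (35_000_000, 0.07),   # 35 M EUR or 7%
    "obligation_violation":   (15_000_000, 0.03),   # 15 M EUR or 3%
    "incorrect_information":  (7_500_000, 0.015),   # 7.5 M EUR or 1.5%
}

def maximum_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the fine ceiling: whichever of the two figures is higher."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)

# A company with 1 billion EUR turnover using a prohibited application:
# max(35 M, 7% of 1 000 M) = 70 M EUR
print(maximum_fine("prohibited_application", 1_000_000_000))  # → 70000000.0
```

Note how the fixed amount acts as a floor for smaller companies: for a firm with 100 million euros of turnover, 1.5% is only 1.5 million, so supplying incorrect information would still be capped at the fixed 7.5 million euros.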