EU Commissioner Thierry Breton added: “The AI Act is the result of years of preparation, consultation and negotiation, including the historic 38-hour final trilogue in December. Throughout the process, we resisted the special interests and lobbyists who called for large-scale AI models to be excluded from the regulation. The result is a balanced, risk-based and future-proof regulation.” Breton emphasized that the AI Act creates the necessary transparency and ensures that developers share information with the many SMEs along the value chain. “The AI Act will be a launch pad for EU start-ups leading the global race for trustworthy AI. It will enable European citizens and businesses to use AI ‘made in Europe’ safely and confidently.”
Aim of the new regulation
The aim of the new rules is to promote trustworthy AI in Europe and beyond by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks posed by very powerful and impactful AI models.
The regulation was approved by MEPs with 523 votes in favor, 46 against and 49 abstentions.
Risk-based approach
The new rules will be applied directly and in the same way in all Member States, based on a future-proof definition of AI. They follow a risk-based approach:
Minimal risk
The vast majority of AI systems fall into the minimal risk category. Minimal risk applications such as AI-powered recommender systems or spam filters will not be subject to obligations, as these systems pose minimal or no risk to citizens’ rights or safety. Companies may, however, voluntarily commit to additional codes of conduct for these AI systems.
High risk
AI systems that are deemed high-risk must meet stringent requirements, including risk mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight and high levels of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.
Examples of high-risk AI systems
Examples of such high-risk AI systems include certain critical infrastructure, e.g. in the water, gas and electricity sectors, medical devices, systems used to determine access to educational institutions or to recruit individuals, or certain systems used in law enforcement, border control, administration of justice and democratic processes. In addition, biometric identification, categorization and emotion recognition systems are also considered high-risk.
Unacceptable risk
AI systems that pose a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behavior to circumvent the user’s free will, such as voice-assisted toys that encourage dangerous behavior by minors, or systems that enable “social scoring” by governments or companies, as well as certain predictive policing applications. In addition, some applications of biometric systems will be prohibited, e.g. systems for detecting emotions in the workplace and some systems for categorizing individuals or for real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
Specific transparency risk
When using AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content must be labeled as such, and users must be informed if biometric categorization or emotion recognition systems are used. In addition, providers must design their systems so that synthetic audio, video, text and image content can be labeled in a machine-readable format and recognized as artificially generated or manipulated.
Penalties
Companies that do not comply with the rules will face fines: up to €35 million or 7% of global annual turnover for violations involving banned AI applications, up to €15 million or 3% for violations of other obligations, and up to €7.5 million or 1.5% for supplying incorrect information.
General purpose AI
The AI Act introduces dedicated rules for general-purpose AI models, which will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations regarding risk management and the monitoring of serious incidents, as well as conducting model evaluations and adversarial testing. These new obligations will be operationalized through codes of practice developed by industry, academia, civil society and other stakeholders together with the Commission.
New Office for Artificial Intelligence
In terms of governance, the relevant national market surveillance authorities will oversee the implementation of the new rules at national level, while the establishment of a new European AI Office within the European Commission will ensure coordination at European level. The new AI Office will also oversee the implementation and enforcement of the new rules for general-purpose AI models.
Together with the national market surveillance authorities, the AI Office will be the first body in the world to enforce binding rules on artificial intelligence and is therefore expected to become an international reference point. For general-purpose models, a scientific panel of independent experts will play a central role by issuing warnings on systemic risks and contributing to the classification and testing of models.
International level
To promote rules for trustworthy AI at international level, the European Union will continue to participate in fora such as the G7, the OECD, the Council of Europe, the G20 and the UN. Just recently, we supported the G7 leaders’ agreement under the Hiroshima AI Process on international guiding principles and a voluntary code of conduct for advanced AI systems.
Further steps
The regulation is still subject to a final review by lawyers and linguists and is expected to be finally adopted before the end of the legislative term (under the so-called corrigendum procedure). The law must also be formally approved by the Council.
The AI Act will enter into force 20 days after its publication in the Official Journal and will be fully applicable two years later, with some exceptions: prohibitions will take effect after six months, the governance rules and the obligations for general-purpose AI models will apply after 12 months, and the rules for AI systems embedded in regulated products will apply after 36 months.
To facilitate the transition to the new legal framework, the Commission has launched the AI Pact, a voluntary initiative to support future implementation, and calls on AI developers from Europe and beyond to comply with the key obligations of the AI Act ahead of time.
– – – – – –
Further links
👉 https://commission.europa.eu/index_en
👉 Coordinated plan for artificial intelligence
Photo: European Commission