
European Commission: AI law enters into force

August 1, 2024. The world’s first comprehensive regulation on artificial intelligence, the European Artificial Intelligence Act (AI Act), comes into force. The AI Act aims to ensure that AI developed and used in the EU is trustworthy and that people’s fundamental rights are protected. Most of the provisions of the AI Act will apply from August 2, 2026.

Photo: European Commission

Contact info

Silicon Saxony

Marketing, Communication and Public Relations

Manfred-von-Ardenne-Ring 20 F

Phone: +49 351 8925 886

Fax: +49 351 8925 889

redaktion@silicon-saxony.de


Executive Vice-President Margrethe Vestager said: “AI has the potential to transform the way we work and live and promises huge benefits for citizens, our society and the European economy. The European approach to technology puts people at the center and ensures that everyone’s rights are respected. With the AI law, the EU has taken an important step to ensure that the introduction of AI technology complies with EU rules in Europe.”

EU approach: product safety, risk-based

The AI Act introduces a forward-looking definition of AI, based on a product safety and risk-based approach in the EU:

  • Minimal risk: Most AI systems, such as AI-powered recommender systems and spam filters, fall into this category. These systems are not subject to obligations under the AI Act due to their minimal risk to the rights and safety of citizens. Companies can adopt additional codes of conduct on a voluntary basis.
  • Specific transparency risk: AI systems such as chatbots must make it clear to users that they are interacting with a machine. Certain AI-generated content (including deep fakes) must be labeled as such. Users must be informed when biometric categorization or emotion recognition systems are used. In addition, providers must design their systems so that synthetic audio, video, text and image content can be labeled in a machine-readable format and recognized as artificially generated or manipulated.
  • High risk: AI systems that are classified as high-risk must meet strict requirements. These include risk mitigation systems, high quality data sets, activity logging, detailed documentation, clear user information, human oversight and a high level of robustness, accuracy and cyber security. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include, for example, AI systems used in personnel selection, to assess whether someone is eligible for a loan, or to operate autonomous robots.
  • Unacceptable risk: AI systems that pose a clear threat to the fundamental rights of humans will be banned. This includes AI systems or applications that manipulate human behavior to circumvent the free will of users, such as toys that encourage minors to engage in dangerous behavior through voice control, systems that enable “social rating” by governments or companies, and certain predictive policing applications. In addition, some applications of biometric systems will be prohibited, such as systems for detecting emotions in the workplace and some systems for categorizing individuals or for real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
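The four risk tiers above can be summarized as a small lookup table. The following is a hypothetical sketch: the tier names, the `EXAMPLES` mapping and the `is_banned` helper are my own labels paraphrasing the Act's categories and the examples given in this article, not terminology from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk categories, lowest to highest."""
    MINIMAL = 1        # no new obligations; voluntary codes of conduct
    TRANSPARENCY = 2   # disclosure duties (chatbots, labeled AI-generated content)
    HIGH = 3           # strict requirements, documentation, human oversight
    UNACCEPTABLE = 4   # banned outright

# Example systems from this article mapped to their tiers:
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "recommender system": RiskTier.MINIMAL,
    "chatbot": RiskTier.TRANSPARENCY,
    "deep fake generator": RiskTier.TRANSPARENCY,
    "credit scoring": RiskTier.HIGH,
    "personnel selection": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

def is_banned(system: str) -> bool:
    """Only the unacceptable-risk tier is prohibited outright."""
    return EXAMPLES[system] is RiskTier.UNACCEPTABLE

print(is_banned("social scoring"))  # True
```

The point of the tiered design is that obligations scale with risk: most systems fall into the minimal tier and face no new duties, while only the top tier is banned.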

In addition to this system, the AI Act also introduces regulations for so-called general purpose AI models. These are highly powerful AI models that are designed for a variety of tasks – such as creating texts that read as if they had been written by humans. General purpose AI models are increasingly being used as components of AI applications. The AI Act will ensure transparency along the value chain and address potential systemic risks of the best performing models.

Application and enforcement of AI rules

Member states have until August 2, 2025 to designate the national competent authorities that will monitor the application of the rules for AI systems and carry out market surveillance measures.

The Commission’s Artificial Intelligence Office (AI Office) will be the main body for implementing the AI Act at EU level and for enforcing the rules on general purpose AI models.

Three advisory bodies will support the AI Office:

  • The European Artificial Intelligence Council will ensure consistent application of the AI Act across all EU Member States and act as the main body for cooperation between the Commission and Member States.
  • A scientific panel of independent experts will provide technical advice and contribute to enforcement. In particular, this panel may alert the AI Office to risks associated with general purpose AI models.
  • The AI Office may also seek advice from an advisory forum made up of a wide range of stakeholders.

Violations of the regulations can result in hefty fines

Companies that do not comply with the regulations will face fines. The fines can amount to up to 7 percent of annual global turnover for violations of the ban on AI applications, up to 3 percent for violations of other obligations and up to 1.5 percent for providing false information.
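As a rough illustration of how these caps scale with company size (the turnover figure, category labels and `max_fine` helper are hypothetical; the article states only the upper percentage limits):

```python
# Upper fine limits under the AI Act, in percent of annual global turnover
# (illustrative sketch; the category names are my own shorthand).
FINE_CAPS_PERCENT = {
    "banned_ai_practice": 7.0,   # violations of the ban on AI applications
    "other_obligation": 3.0,     # violations of other obligations
    "false_information": 1.5,    # providing false information
}

def max_fine(annual_global_turnover: float, violation: str) -> float:
    """Return the maximum possible fine for a given violation category."""
    return annual_global_turnover * FINE_CAPS_PERCENT[violation] / 100

# A hypothetical company with 200 million euros in annual global turnover:
print(max_fine(200_000_000, "banned_ai_practice"))  # 14000000.0
```

Because the caps are tied to global turnover rather than fixed amounts, the maximum exposure grows with the size of the company.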

Next steps

Most of the provisions of the AI Act will apply from August 2, 2026. However, the bans on AI systems that pose an unacceptable risk already apply after six months, and the rules for so-called general-purpose AI models after twelve months.

AI Pact

In order to bridge the transition period until full implementation, the Commission has launched the AI Pact. This initiative invites AI developers to voluntarily adopt the key obligations of the AI Act ahead of the legal deadlines.

Guidelines

The Commission is also developing guidelines to define and clarify how the AI Act should be implemented and to facilitate co-regulatory tools such as standards and codes of conduct. The Commission has launched a call for expressions of interest to participate in the development of the first general AI Code of Conduct and a multi-stakeholder consultation to give all stakeholders the opportunity to comment on the first Code of Conduct under the AI Act.

Background

On December 9, 2023, the Commission welcomed the political agreement on the AI Act. On January 24, 2024, the Commission launched a package of measures to support European start-ups and SMEs in the development of trustworthy AI. On May 29, 2024, the Commission presented the AI Office. On July 9, 2024, the amended EuroHPC JU Regulation came into force, enabling the establishment of AI factories. This allows dedicated AI supercomputers to be used for the training of General Purpose AI (GPAI) models.

The Joint Research Center’s (JRC) continuous independent, evidence-based research has been fundamental to shaping the EU’s AI policy and ensuring its effective implementation.

– – – – – –

Further links

👉 https://commission.europa.eu/index_de 
👉 European Artificial Intelligence Act
👉 Artificial Intelligence – Questions and Answers
👉 European Artificial Intelligence Agency

