
European Parliament: Law on artificial intelligence adopted

March 13, 2024. Parliament has given the green light to the Artificial Intelligence Act. It is intended to ensure safety and respect for fundamental rights while promoting innovation. MEPs adopted the regulation by 523 votes to 46, with 49 abstentions. Parliament and the Council agreed on the text in December 2023.



The new rules aim to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI systems. At the same time, they are intended to boost innovation and ensure that the EU takes a leading role in this area. The regulation sets out certain obligations for AI systems, depending on the potential risks and impacts.

Prohibited applications

The new rules prohibit certain AI applications that threaten citizens' rights. These include biometric categorization based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition systems in the workplace and in schools, as well as social scoring using AI, will also be prohibited in future. Predictive policing based solely on profiling or on assessing a person's characteristics is likewise banned, as is the use of artificial intelligence to manipulate people's behavior or exploit their vulnerabilities.

Exceptions for law enforcement agencies

In principle, the use of remote biometric identification systems by law enforcement agencies is prohibited. However, there are narrowly defined exceptions set out in detail. Real-time remote identification is permitted only if strict safeguards are observed, including limits on time and place, and special judicial or administrative authorization must be obtained in advance. Such systems may be used, for example, to search specifically for a missing person or to prevent a terrorist attack. The subsequent use of AI systems for remote identification is considered high-risk and requires judicial authorization linked to a criminal offence.

Obligations for high-risk systems

Other high-risk AI systems are also subject to certain obligations, as they can pose a significant threat to health, safety, fundamental rights, the environment, democracy and the rule of law. AI systems used in areas such as critical infrastructure, education and training, or employment are classified as high-risk. AI systems used for essential private and public services – for example in healthcare or banking – as well as in certain areas of law enforcement and in connection with migration and border management, justice and democratic processes (for example, to influence elections) are also considered high-risk. Such systems must assess and reduce risks, keep usage logs, be transparent and accurate, and be subject to human oversight. In future, the public will have the right to lodge complaints about AI systems and to receive explanations of decisions based on high-risk AI systems that affect their rights.

Transparency requirements

General-purpose AI systems and the models on which they are based must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. Additional requirements will apply in future to the more powerful models that could pose systemic risks: model evaluations must be carried out, systemic risks assessed and mitigated, and incidents reported.

In addition, artificially generated or edited images or audio and video content (so-called deepfakes) must be clearly labeled as such in the future.

Measures to promote innovation and SMEs

Regulatory sandboxes must be set up in the Member States, and testing under real-world conditions must be made possible. These must be accessible to small and medium-sized enterprises and start-ups so that they can develop and train innovative AI systems before placing them on the market.

Quotes

During Tuesday’s plenary debate, Internal Market Committee co-rapporteur Brando Benifei (S&D, IT) said: “Finally, we have the world’s first binding law on artificial intelligence to reduce risks, create opportunities, fight discrimination and ensure transparency. Thanks to the Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The European Artificial Intelligence Office will now be set up to help companies comply with the rules before they come into force. We have ensured that the development of artificial intelligence puts people and European values first.”

Co-rapporteur of the Civil Liberties Committee Dragos Tudorache (Renew, RO) said: “The EU has delivered. We have managed to link the concept of artificial intelligence to the fundamental values that underpin our societies. However, we still have a long way to go, far beyond the AI Act. Artificial intelligence will force us to rethink the social contract – a contract that lies at the heart of our democracies, our education systems, our labor markets and in the way we wage war. The AI Act is a starting point for a new model of governance built on technology. We must now focus on putting this law into practice.”

Next steps

The regulation will now undergo a final review by legal and language experts. It is expected to be formally adopted before the end of the legislative period under the so-called corrigendum procedure. The Council also still has to formally adopt the new rules.

The regulation will enter into force 20 days after its publication in the Official Journal of the EU and, with a few exceptions, will be fully applicable 24 months later. The exceptions: bans on prohibited practices will apply six months after entry into force, codes of practice nine months after, the rules on general-purpose artificial intelligence, including governance, 12 months after, and the obligations for high-risk systems 36 months after entry into force.

Background

The Artificial Intelligence Act is a direct response to the citizens' proposals from the Conference on the Future of Europe (COFOE), in particular proposal 12(10) on strengthening the EU's competitiveness in strategic sectors; proposal 33(5) on a safe and trustworthy society, including countering disinformation and ensuring that humans remain ultimately in control; proposal 35 on fostering digital innovation, (3) while ensuring human oversight and (8) the trustworthy and responsible use of AI, with safeguards and transparency; and proposal 37(3) on using AI and digital tools to improve citizens' access to information, including for people with disabilities.

– – – – – –

Further links

👉 www.europarl.europa.eu 

Photo: pixabay
