
European AI Act – “blueprint for trustworthy AI” or “draft law with many unanswered questions”?

March 21, 2024. On March 13, the European Parliament gave its final blessing to the “AI Act”. EU Commission President Ursula von der Leyen rejoiced shortly afterwards: “It (editor’s note: the law) will benefit Europe’s fantastic pool of talent and create a blueprint for trustworthy AI around the world.” EU Commissioner Thierry Breton added, not without pride: “Throughout the process, we have resisted the special interests and lobbyists calling for large-scale AI models to be excluded from the regulation. The result is a balanced, risk-based and future-proof regulation.” So, was everything done right? For the time being, AI experts are satisfied, as a comment by Anita Klingel, Senior Lead Specialist at PD – Berater der öffentlichen Hand GmbH, shows: “I see a huge opportunity because, in my opinion, we won’t be able to score points with technical superiority anyway. We won’t win this race,” she says in an interview with ZDF. “Fast and high-performance is what you get from Elon Musk. You get thorough and clean in the EU.” So far, so true. But only the next steps taken by the member states will show what effect the new law will actually have.


The next steps for implementing the AI Act

“The regulation is now undergoing a final review by legal and linguistic experts. It is likely to be adopted before the end of the parliamentary term as part of the so-called rectification procedure. The Council also still has to formally adopt the new rules. The regulation will enter into force 20 days after its publication in the Official Journal of the EU and, with a few exceptions, will be fully applicable 24 months after its entry into force. The exceptions are bans on so-called ‘prohibited practices’, which will apply six months after entry into force, codes of conduct (which will apply nine months after entry into force), rules on general-purpose artificial intelligence, including governance (12 months after entry into force), and obligations for high-risk systems (36 months after entry into force)”, explains the European Parliament. What sounds complicated is actually quite simple: it is now up to the EU member states to adapt these requirements to their national context and integrate them into their own legislation in line with the EU’s specifications.
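For a rough sense of this timetable, the staggered deadlines can be expressed as a simple date calculation. The sketch below uses a hypothetical entry-into-force date and the month offsets quoted above by the European Parliament; it is an illustration, not an official compliance schedule.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date a whole number of months after d, clamping the day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical entry-into-force date (20 days after publication in the
# Official Journal); the actual date depends on when the text is published.
entry_into_force = date(2024, 8, 1)

# Offsets in months, as quoted by the European Parliament.
milestones = {
    "Bans on prohibited practices": 6,
    "Codes of conduct": 9,
    "Rules on general-purpose AI, incl. governance": 12,
    "Full applicability (general rule)": 24,
    "Obligations for high-risk systems": 36,
}

for rule, offset in milestones.items():
    print(f"{add_months(entry_into_force, offset)}  {rule}")
```

Running the sketch prints one deadline per milestone, which makes the staggering easy to see at a glance.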

Criticism of Germany, combined with clear demands

And this is where things could get “funny” again. “They say that the best argument for regulation is that it applies throughout Europe,” explained Daniel Abbou, Managing Director of the KI Bundesverband, before immediately adding: “Unfortunately, we are already seeing a tendency for Germany to be 120 percent strict.” A position paper from the association sums up its key demands. Bitkom likewise sees the transition from a European framework to national law as the biggest challenge, or indeed danger: “The AI Act provides an EU-wide regulatory framework for artificial intelligence, but leaves many crucial questions unanswered. For Germany, the focus must now be on a legally secure and innovation-friendly implementation. The German government must not repeat the mistakes of the General Data Protection Regulation and tighten the national regulatory corset so tightly that companies lack the freedom to innovate. The aim must be to create the conditions for German companies and start-ups to be able to compete on an equal footing with the strong international players in artificial intelligence.”

Does the law go too far or not far enough?

“On an equal footing” is the key phrase. After all, the competition in Asia and North America is already far ahead of most European players – exceptions such as ALEPH ALPHA – AI for Enterprises and Governments confirm the rule. It remains to be seen whether the intra-European regulation sought here is actually beneficial in this environment, especially given the bureaucracy it requires and the resulting increase in compliance and verification costs. Other nations, above all China and the USA, are far more relaxed and willing to take risks in this area. Many things that have long been possible there, or are exploited by state actors without scruple, are to be banned in Europe. In particular, certain forms of facial recognition that evaluate the emotions of employees or enable so-called “social scoring” are rightly banned in Europe. China has long been using this technology to monitor its citizens in public spaces and evaluate their behavior using a social scoring system. “Correct behavior” makes it easier to find a place at university or get a loan, for example. And yet, on precisely these critical points, the European rules do not go far enough for consumer protection and human rights organizations. They suspect that, despite all the restrictions, the AI law could pave the way for a new form of mass surveillance and data retention. Even penalties of up to 35 million euros or seven percent of annual turnover will not change anything if the regulation contains loopholes and legal ambiguities. The position paper of the KI Bundesverband is also worth reading on this point.
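To put the penalty ceiling in perspective: for the most serious violations, the Act takes the higher of the two amounts, so for large corporations it is the seven-percent rule, not the fixed sum, that bites. A minimal sketch of that arithmetic, using an invented turnover figure:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper fine limit for the most serious violations: EUR 35 million
    or 7 percent of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Invented example: a company with EUR 2 billion annual turnover faces
# a ceiling of EUR 140 million, four times the fixed cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```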

What does the AI Act actually regulate?

Many of the AI Act’s provisions and approaches still leave room for interpretation, or even legal challenge. An article by Netzpolitik looks at the most serious ones. Even the classification of AI systems into risk groups is by no means as clear-cut and well-defined as it might appear at first glance. After all, what exactly constitutes an AI system whose risk is assessed as unacceptable, high, limited or low? And how does the further development of a system influence its classification?

Here is a brief explanation of the current groupings (a short code sketch of the resulting taxonomy follows the list):

Low risk

In principle, providers of applications at the lowest risk level have little to fear. This applies, for example, to spam filters in email inboxes or AI applications in video games. Companies that use such systems can, however, voluntarily commit to additional codes of conduct.

Limited risk

AI applications with limited risk include systems such as ChatGPT, Bard, Midjourney, DALL-E and the like. So that neither their users nor people who merely encounter their output are deceived or misled, these applications must in future clearly disclose that the generated texts, images, videos and other content are the work of an artificial intelligence and not of a human being.

High risk

A high risk, according to the AI Act, is posed by applications that could significantly affect the well-being of the general public. This applies, for example, to AI systems used in critical infrastructure, healthcare or justice, where artificial intelligence can put livelihoods or even lives at risk. AI systems classified as high-risk must meet strict requirements, including risk mitigation systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight and a high level of robustness, accuracy and cybersecurity. The European Commission estimates that five to 15 percent of all AI applications will be affected by these stricter requirements.

Unacceptable risk

AI systems that pose a clear threat to people’s fundamental rights will be banned. This includes AI systems or applications that manipulate human behavior to circumvent the user’s free will, such as voice-assisted toys that encourage dangerous behavior by minors, or systems that enable “social scoring” by governments or companies, as well as certain predictive policing applications. In addition, some applications of biometric systems will be banned, e.g. systems for detecting emotions in the workplace and some systems for categorizing people or for real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
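Taken together, the four tiers form a simple taxonomy: each tier maps to a bundle of obligations, from none at all to an outright ban. The sketch below models that mapping; the example systems and obligation labels are paraphrased from the descriptions above, not quoted from the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, as summarized above."""
    LOW = "low"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Obligations per tier, paraphrased from the article (not legal text).
OBLIGATIONS = {
    RiskTier.LOW: ["voluntary codes of conduct only"],
    RiskTier.LIMITED: ["disclose that content is AI-generated"],
    RiskTier.HIGH: [
        "risk mitigation system",
        "high-quality data sets",
        "activity logging",
        "detailed documentation",
        "clear user information",
        "human oversight",
        "robustness, accuracy and cybersecurity",
    ],
    RiskTier.UNACCEPTABLE: ["banned outright"],
}

# Illustrative examples from the article, not an official classification.
EXAMPLES = {
    "email spam filter": RiskTier.LOW,
    "general-purpose chatbot (e.g. ChatGPT)": RiskTier.LIMITED,
    "AI in critical infrastructure, healthcare or justice": RiskTier.HIGH,
    "social scoring by governments or companies": RiskTier.UNACCEPTABLE,
}

for system, tier in EXAMPLES.items():
    print(f"{system} -> {tier.value}: {'; '.join(OBLIGATIONS[tier])}")
```

A real classification of course depends on the concrete use case and, as noted above, further development of a system can shift it between tiers.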

Conclusion

Since 2021, the European Union has been working on the regulation of artificial intelligence. In December 2023, the various EU bodies – the European Parliament, the Council of the European Union and the European Commission – finally reached an agreement, and the AI Act was formally adopted in March 2024. As the first law of its kind in the world, the AI Act regulates the development and use of AI systems. Depending on the type of system and its potential uses, it divides these systems into four risk groups, each subject to bureaucratic and regulatory requirements of varying stringency – up to and including a ban on systems that pose an unacceptable risk to society, the state and the community of states. The impact of the AI Act ultimately depends on the member states and how they integrate its requirements into their own legislation. The mistakes of the past, e.g. in the implementation of the GDPR, must not be repeated here. If this succeeds and the member states agree on uniform rules and implementation, the AI Act can serve as a model for large parts of the world. For small and medium-sized companies in particular, however, the law does not make AI development any easier. The bureaucratic documentation requirements and technical regulations above all will entail considerable additional personnel, financial and time costs. What impact this has on the economic viability, and therefore the marketability, of systems developed in the EU will become apparent in the coming years. The fear remains that the gap between AI companies in Asia, the USA and Europe will continue to widen.

– – – – – –

Further links

👉 EU information page on the AI Act
👉 EU podcast on the European AI Act

