EU Passes Landmark AI Bill

Yesterday, the EU reached a deal on its landmark AI bill. In the process, it’s racing ahead of the US:

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal-risk applications such as AI-enabled recommender systems or spam filters will benefit from a free pass and an absence of obligations, as these systems present only minimal or no risk to citizens’ rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk. 

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors or systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.

Companies not complying with the rules will be fined.

I’ll give my commentary in a moment. But first, I’ll serve up the comments of Anurag Gurtu, CPO of StrikeReady:

The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

The European Union’s deal on the landmark AI bill marks a significant moment in the global conversation about the regulation of artificial intelligence. This ambitious legislation, which seeks to classify AI risks, enforce transparency, and penalize noncompliance, demonstrates the EU’s proactive stance in addressing the complexities of AI technologies.

The Act’s focus on monitoring and oversight, especially for high-risk applications, could set a new global standard for AI regulation. While it aims to balance protection and innovation, the Act will require tech companies operating in the EU to adapt significantly, potentially reshaping global AI development and deployment strategies.

This legislation also raises critical discussions about the balance between innovation and ethical considerations in AI. While Europe is taking a lead, it will be interesting to see how other regions, particularly the U.S., respond to this development. Will they follow suit with similar regulations, or will they take a different path?

Moreover, the Act’s implications on open-source AI models, which are exempt from certain restrictions, could stimulate interesting shifts in the AI industry, potentially favoring open-source approaches.

However, there are concerns about the potential impact on innovation and the competitive edge of European AI companies. While the Act aims to ensure safety and ethical standards, it’s crucial that it doesn’t stifle the innovative potential of AI.

This development is a significant step in the global dialogue on AI governance and sets the stage for further international discussions on how best to manage this rapidly evolving technology.

Classifying risk, combined with knowing that the EU will not be afraid to drop the ban hammer on any company that tries to skirt the rules, is sure to be effective. Other countries need to copy this approach so that AI is sufficiently regulated and risk is minimized.
