The EU Passes Draft Legislation To Govern AI

The news is out today that the EU Parliament has moved one step closer to putting legislation into force to govern AI:

The European parliament approved rules aimed at setting a global standard for the technology, which encompasses everything from automated medical diagnoses to some types of drone, AI-generated videos known as deepfakes, and bots such as ChatGPT.

MEPs will now thrash out details with EU countries before the draft rules – known as the AI act – become legislation.

“AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any ‘pause button’. On the contrary, it is about acting fast and taking responsibility,” said Thierry Breton, the European commissioner for the internal market.

A rebellion by centre-right MEPs in the EPP political grouping over an outright ban on real-time facial recognition on the streets of Europe failed to materialise, with a number of politicians attending Silvio Berlusconi’s funeral in Italy.

The final vote was 499 in favour and 28 against with 93 abstentions.

Craig Burland, CISO at Inversion6, had this comment in relation to this news:

Let the debate begin! Similar to data privacy years ago, the EU has just taken a position at the far end of the spectrum to frame the parameters of the discussion. Putting aside the many challenges of enforcement as well as the ubiquitous use of AI in modern technology projects, the EU has documented intriguing concepts centered on ensuring the validity of the content and proper use cases. Contrast this with Google’s pronouncement last week that focused primarily on protecting the technology itself.  What was announced today will shift and transition as the debate plays out in the media and behind closed doors. But, in planting this flag, the EU has started what will be a fascinating dialog that affects businesses and individuals alike.

I’m honestly not sure how this will shake out. But given the EU’s track record with influential regulations like GDPR, this draft legislation is likely to shape the discussion about AI and how it should be used. Thus everyone needs to pay attention to this.

UPDATE: Eduardo Azanza, CEO at Veridas, adds this:

“The passing of the Artificial Intelligence Act is a significant moment and should not be underestimated at all. For technologies such as AI and biometrics to ever be successful, it is essential that there is trust from businesses and the wider public.

It’s critical that we have established agreed standards and deliverables to ensure that AI and collected biometric data are used responsibly and ethically. There must be clearly defined responsibilities and chains of accountability for all parties, as well as a high degree of transparency for the processes involved. 

As the UK and US look to introduce their own Artificial Intelligence Act, it is essential they work with the EU to define minimum global standards – only then can we guarantee the ethical use of AI and biometrics.

Ultimately, it’s businesses’ duty to responsibly and ethically use AI technology, as its capability to replicate human abilities raises huge concerns. Organizations need to be conducting periodic diagnoses on the ethical principles of AI. Confidence in AI security technology must be based on transparency and compliance with legal, technical, and ethical standards.”

UPDATE #2: Ani Chaudhuri, CEO at Dasera, had this comment:

European Union lawmakers have taken a decisive step in shaping the future of artificial intelligence by adopting the E.U. AI Act. This landmark legislation challenges the power of American tech giants and sets unprecedented restrictions on AI usage. This move is long overdue as it prioritizes data security and protects individuals from potential harm caused by unchecked AI systems.

The E.U. AI Act introduces essential guardrails to prevent deploying AI systems that pose an “unacceptable level of risk.” By banning tools like predictive policing and social scoring systems, the legislation safeguards against intrusive and discriminatory practices. Furthermore, it limits high-risk AI applications, such as those that could influence elections or jeopardize people’s health.

One significant aspect of the legislation is its focus on generative AI, including systems like ChatGPT. Requiring content generated by such systems to be labeled and mandating the publication of summaries of copyrighted data used for training promotes transparency and protects intellectual property rights. These measures address growing concerns and ensure responsible AI development.

While some voices express concern over the potential impact on AI development and adoption, the European Parliament’s determination to lead the global dialogue on responsible AI should be applauded.  European lawmakers have proactively developed comprehensive AI legislation that accounts for evolving technologies and potential risks.

The E.U.’s commitment to data privacy, tech competition, and social media regulation aligns with its ambitious AI regulations. This cohesive framework ensures that European companies adhere to high standards, promoting consumer trust and privacy. It also strengthens Europe’s position as the global tech regulator, setting precedents that will shape international tech policies.

As Europe leads in establishing AI standards, the United States must step up its efforts to keep pace. Congress must pass comprehensive legislation addressing AI and online privacy. Falling behind Europe risks hindering innovation and surrendering the opportunity to lead the global debate on AI governance.

We believe that responsible AI development should be a global endeavor. As Europe sets the bar, it is incumbent upon the United States to catch up and play an active role in shaping AI policies. We can strike the right balance and ensure AI benefits society by fostering innovation while safeguarding individual rights.

While concerns and challenges exist, the E.U. AI Act represents a significant step toward building a responsible and secure AI ecosystem. Europe’s commitment to protecting individuals and upholding data security sets an example for the world. As the AI landscape continues to evolve, we must embrace robust regulations that foster trust, innovation, and global cooperation.
