This week, the UK hosted the AI Safety Summit in Bletchley Park where 28 countries, including the US, the UK, China, six EU member states, Brazil, Nigeria, Israel and Saudi Arabia, signed the Bletchley Declaration, an agreement establishing shared responsibility for the opportunities, risks and needs for global action on systems that pose urgent and dangerous risks.
“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” reads a public statement published by the UK Department for Science, Innovation and Technology.
The declaration lays out the first two steps of their agenda for addressing ‘frontier AI’ risk:
- Identify shared concerns for AI safety risks by building a “scientific and evidence-based understanding of the risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”
- Build respective risk-based policies to ensure safety in light of identified risks, collaborating “while recognizing our approaches may differ based on national circumstances and applicable legal frameworks.” This includes: increased transparency by developers, tools for safety testing and evaluation metrics, and developing relevant public sector capabilities and scientific research.
Ted Miracco, CEO, Approov Mobile Security had this comment:
“The Bletchley Declaration demonstrates a more proactive approach by governments, signaling a possible lesson learned from past failures to regulate social media giants. By addressing AI risks collectively, nations aim to stay ahead of tech behemoths, recognizing the potential for recklessness. This commitment to collaboration underscores some determination to safeguard the future by shaping responsible AI development and mitigating potential harms.
“We all certainly harbor doubts regarding the ability of governments and legal systems to match the speed and avarice of the tech industry, but the Bletchley Declaration signifies a crucial departure from the laissez-faire approach witnessed with social media companies. We should applaud the proactive effort of these governments to avoid idle passivity and assertively engage in shaping AI’s trajectory, while prioritizing public safety and responsible governance over unfettered market forces.”
Emily Phelps, Director, Cyware adds this comment:
“Recognizing that AI-driven risks cross borders, it is imperative for countries to join forces, ensuring that advancements in AI are accompanied by safety measures that protect all societies equally. The focus on a scientific and evidence-based approach to understanding these risks will enhance our collective intelligence and response capabilities. While the nuances of national circumstances will lead to varied approaches, the shared commitment to transparency, rigorous testing, and bolstered public sector capabilities is a reassuring move towards a safer AI-driven future for everyone.”
It’s a good thing in my mind that there’s cross border collaboration on AI as the potential for it to help mankind is great. But the potential for it to harm mankind is also great. Thus rules, boundaries and limitations need to be wrapped around it so that the latter does not happen.
New Secure AI System Guidelines Agreed To By 18 Countries
Posted in Commentary with tags AI on November 27, 2023 by itnerdThe US, UK, among 16 other countries have jointly released secure AI system guidelines based on the principle that it should be secure by design:
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.
Anurag Gurtu , Chief Product Officer, StrikeReady had this comment:
The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.
The fact that 18 countries agreed on a common set of principals is great. The thing is that more nations have to do the same thing. Otherwise you may still have AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.
UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:
“While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
Leave a comment »