The US, UK, among 16 other countries have jointly released secure AI system guidelines based on the principle that it should be secure by design:
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.
Anurag Gurtu , Chief Product Officer, StrikeReady had this comment:
The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.
The fact that 18 countries agreed on a common set of principals is great. The thing is that more nations have to do the same thing. Otherwise you may still have AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.
UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:
“While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
UPDATE #2: Josh Davies, Principal Technical Manager, Fortra adds this:
The AI arms race and rapid adoption of open AI systems* have created concerns in the cyber security sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third party users. These guidelines look to secure the design, development, and deployment of AI which will help reduce the likelihood of this type of attack.
As systems and nation states are increasingly interdependent on each other, global buy in is crucial. We have already seen how collective security is important, otherwise threats are allowed to grow, become more sophisticated, and attack global targets. Ransomware criminal families are a prime example. This levels the playing field by homogenising guidance across national states and limiting a race to the bottom with AI tech.
The guidelines recommend the use of red teaming. Red teaming surfaces the gaps in systems, and security strategies, and ties them directly to an impact. The AI Executive Order also mandates red teaming to identify flaws and vulnerabilities in AI systems. Mandating red teaming future proofs these guidelines (and other regulations) as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace governments can legislate. It’s an indirect way of saying you need to make sure that your security strategies are always up to date, because if not, attackers will surely find and expose your gaps. This is important as we have seen other security regulations quickly become outdated and redundant as controls cannot be agreed upon and updated at the pace required to achieve good security.
Will we see adoption? Or does it just serve to re-assure the public that AI issues are being considered? What is the consequence of not following the guidance? I would hope to see soft enforcement through the exclusion of organisations that cannot show adherence to guidance from government or B2B collaborations.
Without any punitive measures, a cynic would say organizations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy in on reporting flaws and issues, removing the ‘black box’ nature of AI which some executives have hid behind, and opening up these leaders to the court of public opinion if there is evidence they were aware of a flaw and did not take appropriate action, resulting in a compromise and/or data breach.
These guidelines are a step in the right direction. They pull together key AI stakeholders, from nation states and industry, and call for collaboration and consideration of the security of AI. Hopefully this is a continued theme, as we’ve seen with the United States AI executive order, and that AI systems are developed responsibly, without stifling innovation and adoption.
My personal opinion is that the real value we might see from such collaboration will be when we do see a large-scale AI compromise. Hopefully the involved parties are brave enough to lift the lid on what happened so everyone can learn how to be better prepared, and we can define further guidance (preferably as a requirement) beyond just secure build practices and a general monitoring requirement. But this is a good start.
Is it ground breaking? In my opinion, no. Security teams should already be looking to apply the principles outlined to any technological development. This has taken long standing DevSecOps principles and applied them to AI. I would expect it will have the most impact on startups entering the space, i.e. those without an existing level of security maturity.
*open source data sets, i.e. the internet, not OpenAI the company
Fact: Despite What Some Say, NameDrop Is Safe
Posted in Commentary with tags Apple on November 28, 2023 by itnerdSome warnings have recently appeared that claim that Apple’s NameDrop feature that appeared in iOS 17 and allows you to share information when you bring two iPhones or Apple Watches together isn’t safe. Specifically, police departments in Pennsylvania, Ohio, Oklahoma (these are Facebook links) and other places posted similar Facebook messages warning about the privacy risk of NameDrop. Specifically, that any miscreant can bring their phone next to yours and get your contact info.
The fact is, this is completely inaccurate. Here’s why:
In other words, you would not only know that someone is trying to get your contact info, but you would have to authorize the sharing of contact info. The ability to share contact info without your knowledge simply doesn’t exist. And that shouldn’t be a shock to anyone given how Apple tends to roll when it comes to security and privacy.
Having said that, if you really want to turn off NameDrop because you’re concerned about this feature, here’s how you do it:
But honestly, this who NameDrop is a risk thing is overblown and inaccurate. NameDrop is safe and the police departments who are freaking out about it are doing so for no reason. Until someone shows up with some actual evidence based on demonstrable facts, you should move on to paying attention to something that actually matters.
Leave a comment »