The US and UK, along with 16 other countries, have jointly released secure AI system development guidelines based on the principle that AI should be secure by design:
This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.
This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.
Anurag Gurtu, Chief Product Officer at StrikeReady, had this comment:
The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.
The fact that 18 countries agreed on a common set of principles is great. But more nations need to follow suit. Otherwise you may still end up with AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.
UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:
“While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”
UPDATE #2: Josh Davies, Principal Technical Manager, Fortra adds this:
The AI arms race and rapid adoption of open AI systems* have created concerns in the cyber security sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third party users. These guidelines look to secure the design, development, and deployment of AI which will help reduce the likelihood of this type of attack.
As systems and nation states become increasingly interdependent, global buy-in is crucial. We have already seen how important collective security is; otherwise threats are allowed to grow, become more sophisticated, and attack global targets. Ransomware criminal families are a prime example. This levels the playing field by homogenising guidance across nation states and limiting a race to the bottom with AI tech.
The guidelines recommend the use of red teaming. Red teaming surfaces the gaps in systems and security strategies, and ties them directly to an impact. The AI Executive Order also mandates red teaming to identify flaws and vulnerabilities in AI systems. Mandating red teaming future-proofs these guidelines (and other regulations), as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace governments can legislate. It’s an indirect way of saying you need to make sure your security strategies are always up to date, because if not, attackers will surely find and expose your gaps. This is important, as we have seen other security regulations quickly become outdated and redundant because controls cannot be agreed upon and updated at the pace required to achieve good security.
Will we see adoption? Or does it just serve to reassure the public that AI issues are being considered? What is the consequence of not following the guidance? I would hope to see soft enforcement through the exclusion of organisations that cannot show adherence to the guidance from government or B2B collaborations.
Without any punitive measures, a cynic would say organizations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy-in on reporting flaws and issues, removing the ‘black box’ nature of AI that some executives have hidden behind – and opening those leaders up to the court of public opinion if there is evidence they were aware of a flaw and did not take appropriate action, resulting in a compromise and/or data breach.
These guidelines are a step in the right direction. They pull together key AI stakeholders, from nation states and industry, and call for collaboration and consideration of the security of AI. Hopefully this is a continued theme, as we’ve seen with the United States AI executive order, and that AI systems are developed responsibly, without stifling innovation and adoption.
My personal opinion is that the real value we might see from such collaboration will be when we do see a large-scale AI compromise. Hopefully the involved parties are brave enough to lift the lid on what happened so everyone can learn how to be better prepared, and we can define further guidance (preferably as a requirement) beyond just secure build practices and a general monitoring requirement. But this is a good start.
Is it groundbreaking? In my opinion, no. Security teams should already be looking to apply the principles outlined to any technological development. This takes long-standing DevSecOps principles and applies them to AI. I would expect it to have the most impact on startups entering the space, i.e. those without an existing level of security maturity.
*open source data sets, i.e. the internet, not OpenAI the company
Elon Musk’s Lawsuit Against Media Matters Has Resulted In Him Being Introduced To The Streisand Effect
Posted in Commentary with tags Twitter on November 27, 2023 by itnerd

First some background. Here’s a definition of the Streisand effect:
The Streisand effect is an unintended consequence of attempts to hide, remove, or censor information, where the effort instead backfires by increasing awareness of that information. It is named after American singer and actress Barbra Streisand, whose attempt to suppress the California Coastal Records Project’s photograph of her cliff-top residence in Malibu, California, taken to document California coastal erosion, inadvertently drew far greater attention to the heretofore obscure photograph in 2003.
Now here’s how it applies to Elon Musk. His lawsuit against Media Matters for exposing antisemitic posts on Twitter being served up beside ads from big-name advertisers, who then pulled their ads from Twitter, has basically resulted in the Streisand effect coming into play, according to TechDirt:
in making a big deal out of this and filing one of the worst SLAPP suits I’ve ever seen, all while claiming that Media Matters “manipulated” things (even as the lawsuit admits that it did no such thing), it is only begging more people to go looking for ads appearing next to terrible content.
And they’re finding them. Easily.
As the DailyDot pointed out, a bunch of users started looking around and found that ads were being served next to the tag #HeilHitler and “killjews” among other neo-Nazi content and accounts.
SLAPP stands for “strategic lawsuit against public participation”, by the way. But I digress. The point is that he’s adding to the reasons that Media Matters is going to win this lawsuit. The fact is that what they said is true, and evidence of antisemitism and Nazi posts is easily found if you go looking for it. And you apparently don’t have to try all that hard to find it. The only lawsuit that’s going to be even easier to win than this one is the dBrand vs. Casetify lawsuit. The fact is that Elon is going to get pwned in court as well as in the court of public opinion at the rate he’s going. Thus, if he were smart, he’d make this go away and do something more than go on the apology tour that he’s planning. But as has been proven recently, he’s not smart. Which is why this will be one more thing that hurts him.