HYAS Infosec is pleased to share that research from HYAS Labs, the research arm of HYAS, is being cited and utilized by contributors to and framers of the European Union’s AI Act.
The AI Act is widely viewed as a cornerstone initiative that is helping shape the trajectory of AI governance, with the United States’ policies and considerations soon to follow.
AI Act researchers and framers assert that the Act reflects a specific conception of AI systems, viewing them as non-autonomous statistical software with potential harms primarily stemming from datasets. The researchers view the concept of “intended purpose,” drawing inspiration from product safety principles, as a fitting paradigm and one that has significantly influenced the initial provisions and regulatory approach of the AI Act.
However, these researchers also see a substantial gap in the AI Act concerning AI systems devoid of an intended purpose, a category that encompasses General-Purpose AI Systems (GPAIS) and foundation models.
HYAS’ work on AI-generated malware, specifically BlackMamba and its more sophisticated, fully autonomous cousin EyeSpy, is helping advance the understanding of AI systems devoid of an intended purpose, including GPAIS and the unique challenges GPAIS pose to cybersecurity.
HYAS research is proving important both for the development of proposed policies and for addressing the real-world challenge of fully autonomous, intelligent malware, a rising dilemma that cannot be solved by policy alone.
HYAS is providing researchers with tangible examples of GPAIS gone rogue. BlackMamba, the proof of concept cited in the research paper “General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole,” by Claire Boine and David Rolnick, exploited a large language model to synthesize polymorphic keylogger functionality on the fly, dynamically modifying otherwise benign code at runtime, all without any command-and-control infrastructure to deliver or verify the malicious keylogger functionality.
EyeSpy, the more advanced (and more dangerous) proof of concept from HYAS Labs, is fully autonomous AI-synthesized malware that uses artificial intelligence to make informed decisions, conduct cyberattacks, and continuously morph to avoid detection. The challenges posed by an entity such as EyeSpy, capable of autonomously assessing its environment, selecting its targets and tactics, strategizing, and self-correcting until successful, all while dynamically evading detection, were highlighted at the recent Cyber Security Expo 2023 in presentations such as “The Red Queen’s Gambit: Cybersecurity Challenges in the Age of AI.”
In response to the nuanced challenges posed by GPAIS, the EU Parliament has proactively proposed provisions within the AI Act to regulate these complex models. The significance of these proposed measures cannot be overstated and will help to further refine the AI Act and sustain its continued usefulness in the dynamic landscape of AI technologies.
Additional Resources:
“General Purpose AI systems in the AI Act: trying to fit a square peg into a round hole” https://www.bu.edu/law/files/2023/09/General-Purpose-AI-systems-in-the-AI-Act.pdf. Paper submitted by Claire Boine, Research Associate at the Artificial and Natural Intelligence Toulouse Institute and in the Accountable AI in a Global Context Research Chair at University of Ottawa, researcher in AI law, and CEO of Successif, and David Rolnick, Assistant Professor in CS at McGill and Co-Founder of Climate Change AI, to WeRobot 2023.
News – European Parliament – The European Union’s AI Act: https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Future of Life Institute – “General Purpose AI and the AI Act” (What are general purpose AI systems? Why regulate general purpose AI systems?) https://artificialintelligenceact.eu/wp-content/uploads/2022/05/General-Purpose-AI-and-the-AI-Act.pdf
Towards Data Science – “AI-powered Monopolies and the New World Order – How AI’s reliance on data will empower tech giants and reshape the global order” https://towardsdatascience.com/ai-powered-monopolies-and-the-new-world-order-1c56cfc76e7d
“The Red Queen’s Gambit: Cybersecurity Challenges in the Age of AI” presented by Lindsay Thorburn at Cyber Security Expo 2023 https://www.youtube.com/watch?v=Z2GsZHCXc_c
HYAS Blog: “Effective AI Regulation Requires Adaptability and Collaboration” https://www.hyas.com/blog/effective-ai-regulation-requires-adaptability-and-collaboration
CISA Official Argues ‘Patch Faster, Fix Faster’ Is A Failed Model
Posted in Commentary with tags CISA on December 4, 2023 by itnerd
A top cybersecurity official at CISA said that addressing computer security vulnerabilities by finding and patching flaws is a fundamentally broken model in need of being overhauled, and called on technology providers to “take accountability” for the security of their customers.
“To say that our solution to cybersecurity is, at least in part, patch faster, fix faster, that is a failed model. It is a model that does not account for the capability and the acceleration of the adversaries who we’re up against,” said Eric Goldstein, executive assistant director for cybersecurity at CISA, at an event held by the nonprofit International Information System Security Certification Consortium.
Goldstein argued that meaningful gains in computer security will require a “philosophical shift” taking the burden away from school districts, water utilities, and small businesses and putting it on the technology providers.
“What we’re seeing today, we believe, is systematic cost transference from technology providers who make decisions to design products a certain way to customers, who then have to bear the burden to patch, to mitigate, to respond. It doesn’t make sense to us, at least as applied to smaller organizations that really can’t bear that burden,” Goldstein continued.
Goldstein also expressed optimism that AI can assist in finding and fixing vulnerabilities in legacy code, discovering tactics, techniques, and procedures used by malicious hackers, and writing secure code.
Troy Batterberry, CEO and founder of EchoMark, had this comment:
“Eric Goldstein is correct. Software as a Service (SaaS) vendors have an unrealized potential to help address this big problem. SaaS vendors can already remove the burden of customers having to periodically patch information systems. SaaS vendors can also take on more direct accountability for breaches in their systems. Through more “security conscious” configuration settings such as requiring Multi-Factor Authentication (MFA), more defense in depth technologies, architectures, and monitoring (including Artificial Intelligence), SaaS vendors can preclude a vast majority of breaches from happening in the first place. In many cases, the knowhow already exists. Those involved simply need a nudge.
“This has happened before in other industries. Somewhat analogous to how federal and state governments have required standards and compliance with transportation safety for decades, it is time for governments to impose effective cybersecurity regulations on both SaaS vendors and the organizations that utilize them on our information superhighways. This includes the rapid phaseout of insecure legacy systems, which are too often the “wide open door” that lets hackers in.”
Mike Barker, CCO of HYAS Infosec, followed with this:
“I absolutely agree with Eric Goldstein’s perspective on the need for a paradigm shift in cybersecurity. It’s high time we move beyond the reactive ‘patch and fix’ approach. Holding technology providers accountable for security is a crucial step towards a more robust defense.
“Excitingly, Goldstein’s optimism about leveraging AI aligns perfectly with the evolving threat landscape. AI can play a pivotal role in proactively identifying vulnerabilities and enhancing overall cybersecurity resilience.
“I look forward to a future where technology providers lead the charge in security, embracing innovative solutions to stay one step ahead of adversaries.”
David Ratner, CEO of HYAS Infosec, concluded with this:
“While I agree with Goldstein that technology providers need to be accountable and responsible, there is also another fundamental shift required. Gone are the days when one could be confident of keeping bad actors out of one’s environment; instead, organizations need to shift their thinking from a pure-prevention strategy to one of operational resiliency. They need to implement appropriate levels of visibility and controls, because everyone will at one time or another be breached, and they need to ensure that when it happens to them, the breach can be identified, isolated, and addressed before it spreads and causes financial, reputational, and other damage.”
Defence is best done in layers. Patching is one layer, but other layers are needed to make sure that your organization stays safe.