Archive for AI

Patches Fix Claude Code Flaws, But Broader Repository-Based Risks Remain 

Posted in Commentary with tags on February 26, 2026 by itnerd

Researchers at Check Point have identified multiple vulnerabilities in Anthropic’s development tool Claude Code, allowing malicious repositories to trigger remote code execution and steal active API credentials.

The observed security issues exploited built-in mechanisms including Hooks, Model Context Protocol servers, and environment variables to run arbitrary shell commands and exfiltrate API keys before trust prompts could be confirmed.

Two specific tracked vulnerabilities, CVE-2025-59536 and CVE-2026-21852, were documented and patched by Anthropic following disclosure by security researchers. The first enabled arbitrary code execution via overridden configuration settings that bypass user consent dialogs, while the second could redirect API traffic to malicious endpoints, exposing developers’ Anthropic API keys in plaintext.

All reported flaws have been remedied in subsequent Claude Code updates prior to public advisory publication.

According to researchers, even after the specific vulnerabilities were fixed, the underlying risk does not disappear. The issues exposed how project configuration files can directly shape execution behavior inside AI-assisted development tools, and a malicious repository can still act as a delivery mechanism if safeguards are insufficient, which expands the threat model beyond the individual CVEs that were addressed.

As a result, applying patches resolves the documented flaws but does not fully remove the broader exposure created when AI tooling automatically interprets and acts on repository-level settings. 

Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs:

“These CVEs are real and Anthropic was right to patch them. The broader issue is not unique to Claude Code. The AI development tool industry as a whole is prioritizing enablement over security, and these vulnerabilities are a symptom of that design philosophy, not an isolated product failure.

“In the case of Claude Code, hooks ran shell commands before the developer even saw the trust dialog. The security control existed. It just executed after the damage was already done. AI agents are deployed with broad permissions by default because restricting them reduces productivity. That is the same tradeoff the industry made with admin accounts two decades ago, and it took years of breaches to correct. The principle of least privilege does not stop applying because the user is an AI model instead of a human. Agents should be treated as untrusted by default, with strict zero trust boundaries between the agent and any command surface, credential store, or system resource it touches.

“This is not a new class of attack surface. Malicious Makefiles, poisoned scripts, and git hooks have compromised developers for years. What AI tools change is the scope of what runs once triggered. The attack surface is not new. The blast radius is.

“AI development tools are going to become more autonomous, not less. The industry is building the capability first and retrofitting the security later. That pattern has never aged well in software, and it is unlikely to age any better with AI.”

I am aware of a large number of developers who are using tools like Claude Code to speed up the coding pf

Vibe-coded Moltbook security flaw leaks AI agent credentials

Posted in Commentary with tags on February 5, 2026 by itnerd

A new social media platform called Moltbook, designed for AI agents to interact with each other and “hang out”, was found to have a misconfiguration, leaving its backend database publicly accessible allowing full read and write access to all data, according to a recent blog post by Wiz Security.

Researchers discovered a Supabase API key exposed in client-side JavaScript revealing thousands of private AI conversations, 30,000 user email addresses, and 1.5 million API keys..

   “Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It’s become especially popular with vibe-coded applications due to its ease of setup,” explained Wiz head of threat exposure, Gal Nagli.

   “When properly configured with Row Level Security (RLS), the public API key is safe to expose – it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”

In a message posted to X before the Wiz posted the blog, Moltbook’s creator, Matt Schlicht said he “didn’t write one line of code” for the site. Wiz reported the vulnerability to Schlicht, and the database was secured.

   “As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” Wiz cofounder Ami Luttwak said.

Sunil Gottumukkala, CEO, Averlon:

   “What this highlights is the tradeoff vibe coding creates. It massively compresses idea-to-product time, but often skips essential security steps like threat modeling, secure defaults, and review gates that account for real user behavior and adversarial abuse.

   “When those controls are missing, a routine misconfiguration, such as shipping without proper authorization or RLS policies, can quickly turn into an instant, internet-scale incident. Some vibe-coding platforms are starting to add guardrails, but we’re still early. As long as speed continues to outpace security analysis and remediation, this will be a bumpy road.”

Lydia Zhang, President & Co-Founder,Ridge Security Technology Inc. gave me this comment:

   “This leads to another mandatory step: testing. Zero-trust principles should also be applied to Vibe coding. Vibe-coded solutions can miss basic security practices, and configuration or misconfiguration issues are often outside the scope of the code itself. I’m glad Wiz Security caught this before the damage spread further.”

Michael Bell, Founder & CEO, Suzu Labs added this comment:

   “The Moltbook incident shows what happens when people shipping production applications have no security training and are relying entirely on AI-generated code. The creator said publicly that he didn’t write a single line of code. Current AI coding tools don’t reason about security on the developer’s behalf. They generate functional code, not secure code.

   “The specific failure here was a single Supabase configuration setting. Row Level Security was disabled, which meant the API key that’s supposed to be safe to expose became a skeleton key to the entire database. That’s not a sophisticated vulnerability. It’s a checkbox that never got checked, and nobody reviewed the code to notice. When 10% of apps built on vibe coding platforms (CursorGuard) have the same misconfiguration, that’s not a user error problem. That’s a systemic failure in how these tools are designed.

   “The write access vulnerability should concern anyone building AI agent infrastructure. Moltbook wasn’t just leaking data. Anyone with the exposed API key could modify posts that AI agents were reading and responding to. That’s prompt injection at ecosystem scale. You could manipulate the information environment that shapes how thousands of AI agents behave.

   “Users shared OpenAI API keys in private messages assuming those messages were private. One platform’s misconfiguration turned into credential exposure for unrelated services. As AI ecosystems become more interconnected, these cascading failures become the norm.

   “The 88:1 agent-to-human ratio should make everyone skeptical of AI adoption metrics going forward. Moltbook claimed 1.5 million agents. The reality was 17,000 humans running bot armies. No rate limiting. No verification. The platform couldn’t distinguish between an actual AI agent and a human with a script pretending to be one.

   “We’re going to see a lot of “AI-powered” metrics that look impressive until you examine what’s actually behind them. Participation numbers, engagement statistics, autonomous behavior claims. Without verification mechanisms, the numbers are meaningless. The AI internet is coming, but right now it’s mostly humans wearing AI masks.

   “If you’re deploying vibe-coded applications to production, you need security review by someone who understands both the code and the infrastructure it runs on. AI tools don’t have security reasoning built in, which means every configuration decision is a potential exposure. We help organizations identify exactly these kinds of gaps through security assessments that trace data flows and access controls. The discovery process that found this vulnerability took Wiz researchers minutes of looking at client-side JavaScript. That’s the same level of effort an attacker would spend.

   “AI development velocity and AI security maturity are on completely different curves. Teams are shipping production applications in days. Security practices haven’t caught up. Until AI tools start generating secure defaults and flagging dangerous configurations automatically, humans (or hackers) need to be in the loop reviewing what gets deployed.”

Ryan McCurdy, VP of Marketing, Liquibase contributed this:

   “Moltbook is a textbook example of what happens when you ship at AI speed without change control at the database layer. A single missing guardrail turned a “public” Supabase key into full read and write access, exposing private agent conversations, user emails, and a massive pile of credentials. This is why Database Change Governance matters.

   “The highest risk changes are often permissions, policies, and access rules, and those need automated checks, separation of duties, drift detection, and audit-ready evidence before anything hits production. AI agents and vibe-coded apps will only amplify the blast radius if database change is not governed.”

Noelle Murata, Sr. Security Engineer, Xcape, Inc. served up this comment:

   “Matt Schlicht’s admission that he “didn’t write one line of code” isn’t something to celebrate, given the fundamental nature of the security flaw. The database completely lacked Row Level Security (RLS) policies, allowing anyone to access it without authentication. This misconfiguration exposed the entire database structure and content, including tokens that granted read/write/edit access to non-authenticated users – a basic oversight with serious consequences.

   “Vibe-coding,” or relying on AI to generate code, can produce functional results but often sacrifices best practices in architecture and security for speed and convenience. Without code review or highly specific prompting, AI-generated code prioritizes “fast and easy” over “resilient and secure.” This is analogous to why junior developers need oversight; the same principle applies to AI-generated code.

   “Despite Moltbook being marketed as a social platform “for bots, by bots,” it had a significant human user base: 17,000 humans alongside 1.5 million bots, creating a roughly 1:88 ratio. Notably, no CAPTCHA or human/bot validation system was implemented, raising questions about the platform’s actual purpose and user management.

   “This incident demonstrates that AI-generated applications require careful monitoring and professional oversight. Software development still demands review by trained, experienced humans to ensure security and reliability.”

This highlights the danger of vibe coding. You can get stuff done. But how it gets done might be a problem. You might want to keep that in mind if you rely on vibe coding.

Several Senators Release A Framework to Mitigate Extreme AI Risks

Posted in Commentary with tags on April 18, 2024 by itnerd

Yesterday, U.S. Senators Mitt Romney (R-UT), Jack Reed (D-RI), Jerry Moran (R-KS), and Angus King (I-ME) released a letter to the Senate artificial intelligence (AI) working group leaders outlining a framework to mitigate extreme AI risks. I encourage you to read the letter, but here’s the TL:DR:

Congress should consider a permanent framework to mitigate extreme risks. This framework should also serve as the basis for international coordination to mitigate extreme risks posed by AI. This letter is an attempt to start a dialogue about the need for such a framework, which would be in addition to, not at the exclusion of, proposals focused on other risks presented by developments in AI.

Under this potential framework, the most advanced model developers in the future would be required to safeguard against four extreme risks – the development of biological, chemical, cyber, or nuclear weapons. An agency or federal coordinating body would be tasked to oversee the implementation of these proposed requirements, which would apply to only the very largest and most advanced models. Such requirements would be reevaluated on a recurring basis as we gain a better understanding of the threat landscape and the technology.

Sounds interesting. But is it useful? Here’s what Kevin Surace, Chair, Token had to say:

This is great politics and important to state publicly, but it won’t protect anyone from these threats. The major model providers already have strong safeguards in place for these and similar threats (you cannot get an answer from ChatGPT on how to create a chemical weapon).

This changes nothing from all major US providers. They already strongly limit access to such content. However open source models being used by bad actors and rogue countries are not subject to these laws and will misuse the technology anyway.

Anyone can already Google how to create a biological weapon. Having the answers faster doesn’t really help someone with the chemistry, procurement, production and so on anymore than Google already did. But AI could create perhaps new compounds not well documented elsewhere. And the bad actors are already taking advantage of that with open source models.

This has zero impact on OpenAI, Microsoft, Google and so on. And it has zero impact on a rogue country using open source models.

I’m all for guardrails and safeguards. But they have to be useful. I am not yet convinced that this effort by these senators is useful. But I am free to be convinced otherwise. Let’s see if they can convince myself and others that this is a useful exercise.

UPDATE: I have additional commentary from Madison Horn, Congressional Candidate (OK-5) and cybersecurity leader:

The plan proposed by the Senators is crucial. We are in the midst of a new kind of Cold War with China, one that includes the race to harness AI. A comprehensive strategy to not only secure but also to fully harness the potential of AI is essential. The nation that leads in AI will not only dictate global markets but also define international norms for decades to come.

Executing a plan to mitigate AI risks is loaded with challenges. First, we need a solid strategy to retain top talent for any new agencies we might set up, and we must also forge strong partnerships with the private sector. Then there’s Congress—sometimes it seems like they’re in a tech time warp, which doesn’t help. Plus, we can’t let our drive for security strangle American innovation. We need to stay agile, adapting as new models and classifications emerge, and ensure we’re not shutting out new startups or inadvertently creating monopolies.

And let’s not overlook cybersecurity challenges. Ensuring these AI models aren’t leaked or stolen is crucial—our adversaries are definitely taking notes and will be trying to tap into this wealth of information that will be retained.

Artificial intelligence poses a significant threat, one that reshapes the global landscape in ways we haven’t witnessed since the post-WWII era. With new alliances forming, notably between Russia and China, the stakes in the AI war are extraordinarily high. The power of AI doesn’t just accelerate a country’s ability to dominate global markets; it also has the potential to shift global values depending on who emerges as the leader in this technology. In the most extreme scenarios, the misuse of AI could lead to catastrophic outcomes, potentially destroying the world in a matter of seconds. The race to harness AI, therefore, is not just about technological superiority but also about steering the future ethical and moral compass of our entire planet.

We need to keep the spark of American innovation alive—it’s also crucial for our national security. Collaboration with the private sector? Non-negotiable. With many of the few qualified individuals in Congress retiring or being pushed out of office by partisan politics, it’s up to the American people to step up. We must elect leaders who are not just filling a seat but who truly understand the complexities of today’s tech challenges. Leaders who have the understanding to craft and pass laws that safeguard our citizens without choking out our innovation and economic growth. This is about securing a future where America continues to lead, not follow.

EU Passes Landmark AI Bill

Posted in Commentary with tags , on December 9, 2023 by itnerd

Yesterday, the EU reached a deal on its landmark AI bill. In the process, they’re racing ahead of US:

The European approach to trustworthy AI

The new rules will be applied directly in the same way across all Member States, based on a future-proof definition of AI. They follow a risk-based approach:

Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal risk applications such as AI-enabled recommender systems or spam filters will benefit from a free-pass and absence of obligations, as these systems present only minimal or no risk for citizens’ rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Examples of such high-risk AI systems include certain critical infrastructures for instance in the fields of water, gas and electricity; medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk. 

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors or systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.

Companies not complying with the rules will be fined.

I’ll give my commentary in a moment. But I’ll serve up the comments of Anurag Gurtu , CPO, StrikeReady:

The regulation paves the way for what could become a global standard to classify risk, enforce transparency and financially penalize tech companies for noncompliance.

The European Union’s deal on the landmark AI bill marks a significant moment in the global conversation about the regulation of artificial intelligence. This ambitious legislation, which seeks to classify AI risks, enforce transparency, and penalize noncompliance, demonstrates the EU’s proactive stance in addressing the complexities of AI technologies.

The Act’s focus on monitoring and oversight, especially for high-risk applications, could set a new global standard for AI regulation. While it aims to balance protection and innovation, the Act will require tech companies operating in the EU to adapt significantly, potentially reshaping global AI development and deployment strategies.

This legislation also raises critical discussions about the balance between innovation and ethical considerations in AI. While Europe is taking a lead, it will be interesting to see how other regions, particularly the U.S., respond to this development. Will they follow suit with similar regulations, or will they take a different path?

Moreover, the Act’s implications on open-source AI models, which are exempt from certain restrictions, could stimulate interesting shifts in the AI industry, potentially favoring open-source approaches.

However, there are concerns about the potential impact on innovation and the competitive edge of European AI companies. While the Act aims to ensure safety and ethical standards, it’s crucial that it doesn’t stifle the innovative potential of AI.

This development is a significant step in the global dialogue on AI governance and sets the stage for further international discussions on how best to manage this rapidly evolving technology.

The combination of classifying risk and known that the EU will not be afraid to drop the ban hammer on any company who tries to skirt the rules is sure to be an effective combination. Other countries need to copy this so that AI is sufficiently regulated and risk is minimized.

Today Is The One Year Anniversary Of ChatGPT Being Publicly Available

Posted in Commentary with tags on November 30, 2023 by itnerd

Today is November 30th which makes it one year since ChatGPT became available to the public. ChatGPT has taken the world by storm for good and bad reasons. History will be the ultimate judge of how impactful ChatGPT will be. But John Pritchard, CPO at Radiant Logic has some thought on ChatGPT:

“The one-year anniversary of ChatGPT marks a revolutionary moment for Generative AI. It has completely surpassed our expectations of what technology is capable of and enabled businesses of all sizes to leverage AI without significant upfront investments. However, we must consider how we can best utilize this advanced tool – businesses may feel inclined to rush and hop on the AI train to keep up with their competitors, but without a strong foundation and data ecosystem, businesses can unintentionally cause more problems.  

Before organizations invest time, finances and resources in integrating Gen AI into their decision-making processes, they need to first and foremost ensure their data is clean and of the best quality. GenAI’s effectiveness is directly dependent on the data it receives and if businesses aren’t careful, they can exacerbate existing issues by making decisions based on inaccurate AI results. This means making sure your data set is accurate, up-to-date and does not have anomalies. 

Businesses must also train their employees who will be overseeing the AI. While GenAI is an intelligent tool, it has not yet been perfected and can produce errors and wrong answers – human oversight remains critical to significantly reduce GenAI hallucinations and unwanted output. As GenAI is not advanced enough to fully function on its own, using it is more like collaborating with it. So, employees must also know how to frame instructions that an AI model can properly understand and interpret, a technique known as prompt engineering. With these steps, businesses can fully move forward with implementing GenAI and harness its full potential.” 

With everything that surrounds AI, the next year or two will be interesting to watch to see how it is used, and how it is controlled.

New Secure AI System Guidelines Agreed To By 18 Countries

Posted in Commentary with tags on November 27, 2023 by itnerd

The US, UK, among 16 other countries have jointly released secure AI system guidelines based on the principle that it should be secure by design:

This document recommends guidelines for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others. Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties.

This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs). We urge all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read these guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.

 Anurag Gurtu , Chief Product Officer, StrikeReady had this comment:

The recent secure AI system development guidelines released by the U.K., U.S., and other international partners are a significant move in enhancing cybersecurity in the field of artificial intelligence. These guidelines emphasize the importance of security outcomes for customers, incorporating transparency and accountability, and promoting a secure organizational structure. They focus on managing AI-related risks, requiring rigorous testing of tools before public release, and establishing measures to counteract societal harms, like bias. Additionally, the guidelines advocate a ‘secure by design’ approach covering all stages of AI development and deployment, and address the need to combat adversarial attacks targeting AI and machine learning systems, including prompt injection attacks and data poisoning.

The fact that 18 countries agreed on a common set of principals is great. The thing is that more nations have to do the same thing. Otherwise you may still have AI that is closer to the “Terminator” end of the spectrum rather than being helpful and friendly.

UPDATE: Troy Batterberry, CEO and founder, EchoMark had this comment:

   “While logging and monitoring insider activities are important, we know they do not go nearly far enough to prevent insider leaks. Highly damaging leaks continue to happen at well-run government and commercial organizations all over the world, even with sophisticated monitoring activities in place. The leaker (insider) simply feels they can hide in the anonymity of the group and never be caught. An entirely new approach is required to help change human behavior. Information watermarking is one such technology that can help keep private information private.”

UPDATE #2:  Josh Davies, Principal Technical Manager, Fortra adds this:

The AI arms race and rapid adoption of open AI systems* have created concerns in the cyber security sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third party users. These guidelines look to secure the design, development, and deployment of AI which will help reduce the likelihood of this type of attack.

As systems and nation states are increasingly interdependent on each other, global buy in is crucial. We have already seen how collective security is important, otherwise threats are allowed to grow, become more sophisticated, and attack global targets. Ransomware criminal families are a prime example. This levels the playing field by homogenising guidance across national states and limiting a race to the bottom with AI tech.

The guidelines recommend the use of red teaming. Red teaming surfaces the gaps in systems, and security strategies, and ties them directly to an impact. The AI Executive Order also mandates red teaming to identify flaws and vulnerabilities in AI systems. Mandating red teaming future proofs these guidelines (and other regulations) as it is hard to anticipate the threats of tomorrow and the appropriate mitigations – especially at the pace governments can legislate. It’s an indirect way of saying you need to make sure that your security strategies are always up to date, because if not, attackers will surely find and expose your gaps. This is important as we have seen other security regulations quickly become outdated and redundant as controls cannot be agreed upon and updated at the pace required to achieve good security.

Will we see adoption? Or does it just serve to re-assure the public that AI issues are being considered? What is the consequence of not following the guidance? I would hope to see soft enforcement through the exclusion of organisations that cannot show adherence to guidance from government or B2B collaborations.

Without any punitive measures, a cynic would say organizations have no motivation to implement the recommendations properly. An optimist might lean on the red team reports and hope for buy in on reporting flaws and issues, removing the ‘black box’ nature of AI which some executives have hid behind, and opening up these leaders to the court of public opinion if there is evidence they were aware of a flaw and did not take appropriate action, resulting in a compromise and/or data breach.

These guidelines are a step in the right direction. They pull together key AI stakeholders, from nation states and industry, and call for collaboration and consideration of the security of AI. Hopefully this is a continued theme, as we’ve seen with the United States AI executive order, and that AI systems are developed responsibly, without stifling innovation and adoption.

My personal opinion is that the real value we might see from such collaboration will be when we do see a large-scale AI compromise. Hopefully the involved parties are brave enough to lift the lid on what happened so everyone can learn how to be better prepared, and we can define further guidance (preferably as a requirement) beyond just secure build practices and a general monitoring requirement. But this is a good start.

Is it ground breaking? In my opinion, no. Security teams should already be looking to apply the principles outlined to any technological development. This has taken long standing DevSecOps principles and applied them to AI. I would expect it will have the most impact on startups entering the space, i.e. those without an existing level of security maturity.

*open source data sets, i.e. the internet, not OpenAI the company

28 Countries Agree To Collaborate On ‘Frontier AI’

Posted in Commentary with tags on November 3, 2023 by itnerd

This week, the UK hosted the AI Safety Summit in Bletchley Park where 28 countries, including the US, the UK, China, six EU member states, Brazil, Nigeria, Israel and Saudi Arabia, signed the Bletchley Declaration, an agreement establishing shared responsibility for the opportunities, risks and needs for global action on systems that pose urgent and dangerous risks.

“Many risks arising from AI are inherently international in nature, and so are best addressed through international cooperation,” reads a public statement published by the UK Department for Science, Innovation and Technology. 

The declaration lays out the first two steps of their agenda for addressing ‘frontier AI’ risk:

  1. Identify shared concerns for AI safety risks by building a “scientific and evidence-based understanding of the risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.”
  2. Build respective risk-based policies to ensure safety in light of identified risks, collaborating “while recognizing our approaches may differ based on national circumstances and applicable legal frameworks.” This includes: increased transparency by developers, tools for safety testing and evaluation metrics, and developing relevant public sector capabilities and scientific research.  

Ted Miracco, CEO, Approov Mobile Security had this comment:

   “The Bletchley Declaration demonstrates a more proactive approach by governments, signaling a possible lesson learned from past failures to regulate social media giants. By addressing AI risks collectively, nations aim to stay ahead of tech behemoths, recognizing the potential for recklessness. This commitment to collaboration underscores some determination to safeguard the future by shaping responsible AI development and mitigating potential harms.

   “We all certainly harbor doubts regarding the ability of governments and legal systems to match the speed and avarice of the tech industry, but the Bletchley Declaration signifies a crucial departure from the laissez-faire approach witnessed with social media companies. We should applaud the proactive effort of these governments to avoid idle passivity and assertively engage in shaping AI’s trajectory, while prioritizing public safety and responsible governance over unfettered market forces.”


Emily Phelps, Director, Cyware adds this comment:
 
   “Recognizing that AI-driven risks cross borders, it is imperative for countries to join forces, ensuring that advancements in AI are accompanied by safety measures that protect all societies equally. The focus on a scientific and evidence-based approach to understanding these risks will enhance our collective intelligence and response capabilities. While the nuances of national circumstances will lead to varied approaches, the shared commitment to transparency, rigorous testing, and bolstered public sector capabilities is a reassuring move towards a safer AI-driven future for everyone.”

It’s a good thing in my mind that there’s cross border collaboration on AI as the potential for it to help mankind is great. But the potential for it to harm mankind is also great. Thus rules, boundaries and limitations need to be wrapped around it so that the latter does not happen.

White House Issues Executive Order on Safe, Secure, and Trustworthy AI

Posted in Commentary with tags on October 30, 2023 by itnerd

Today the White House has announced on using an executive order to mitigate AI risks:

As part of the Biden-Harris Administration’s comprehensive strategy for responsible innovation, the Executive Order builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI.

The link above has a very extensive document that is worth reading as it goes into a lot of detail as to what this executive order covers.  John Gunn, CEO, Token had this comment:

The aim is noble and the need is certain, but the implementation will be challenging considering that Generative AI technology is already being used extensively by hackers and enemy states to attack US companies with phishing emails that are nearly impossible to detect. Most AI technologies that deliver benefits can also be used for harm, so almost every company developing AI solutions needs to make the required disclosure today.

This is likely to be a hot topic today. Thus as I get other reactions to this, I will post it here.

UPDATE: Anurag Gurtu, CPO, StrikeReady had this comment:

As President Biden prepares to leverage emergency powers for AI risk mitigation, it’s a clear signal of the critical juncture at which we find ourselves in the evolution of AI technology. The administration’s decision reflects a growing awareness of the transformative impact AI has on every sector, and the need for robust frameworks that govern its ethical use and development.

This initiative isn’t just about preemptive measures against potential misuse; it’s a foundational move towards establishing a global standard for AI that aligns with our values of safety, security, and trustworthiness. It’s an acknowledgment that while AI presents unparalleled opportunities for advancement, it also brings challenges that must be addressed to protect societal welfare and national interests.

For businesses and developers, this move will likely mean a more stringent regulatory environment, but also a clearer direction for innovation within safe and secure boundaries. It’s time for all stakeholders to engage in dialogue and contribute to a balanced approach that fosters innovation while safeguarding against the risks that have kept policymakers and citizens alike vigilant.

UPDATE #2: George McGregor, VP, Approov had this to say:

If you market a cybersecurity solution in the USA, you had better read through this Executive Order (EO)  – it may affect your business!  If your solution is deterministic in nature, then life will be easier, but if you are promoting the use of AI in your product, then life may well get more complicated: Not only do you need to demonstrate to customers that false-positives and management overhead due to AI are not an issue,  but with these new guidelines, the AI methods you employ will be under the microscope also.

Here are some other comments, each followed by the relevant text from the EO:

First – if you are an AI based cybersecurity vendor, you may be expected to share your test results with the government. The success or failure of a security solution, by its very nature, “poses a risk to national security”.

  • From the EO text:  Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.

Second, attestation techniques will become critical – this is already true for mobile app code which can easily be reverse-engineered and replicated unless steps are taken. Fingerprinting techniques used in mobile may be applicable here.

  • From the EO text: Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.

A program to use AI to eliminate vulnerabilities is a very noble pursuit but should not be viewed as a replacement for good software development discipline and implementing run time visibility and protection.

  • From the EO text:  Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.

The use of AI will not only be a power for good. The hackers will seek to use these techniques also and there will inevitably be an arms-race between security teams and hackers. To start with however, the cost of entry for bad actors will be high, in terms of knowledge required and complexity of the task, and this will mean that only well funded “nation state” teams will be the primary users of AI for nefarious purposes.   National Security teams will need to have the resources to track and counter these efforts.

  • From the EO text: Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.

Malwarebytes Discovers That The Bing AI Chatbot Delivers Ads With Malicious Links

Posted in Commentary with tags , on September 29, 2023 by itnerd

Malwarebytes has research on Bing and its AI Chatbot being leveraged by threat actors to deliver ads with malicious links. In short, it’s a malvertizing campaign in which attackers take over the ad accounts of legitimate businesses to create targeted malicious ads:

Ads can be inserted into a Bing Chat conversation in various ways. One of those is when a user hovers over a link and an ad is displayed first before the organic result. In the example below, we asked where we could download a program called Advanced IP Scanner used by network administrators. When we place our cursor over the first sentence, a dialog appears showing an ad and the official website for this program right below it:

Users have the choice of visiting either link, although the first one may be more likely to be clicked on because of its position. Even though there is a small ‘Ad’ label next to this link, it would be easy to miss and view the link as a regular search result.

Upon clicking the first link, users are taken to a website (mynetfoldersip[.]cfd) whose purpose is to filter traffic and separate real victims from bots, sandboxes, or security researchers. It does that by checking your IP address, time zone, and various other system settings such as web rendering that identifies virtual machines.

Real humans are redirected to a fake site (advenced-ip-scanner[.]com) that mimics the official one while others are sent to a decoy page. The next step is for victims to download the supposed installer and run it.

The MSI installer contains three different files but only one is malicious and is a heavily obfuscated script:

Upon execution, the script reaches out to an external IP address (65.21.119[.]59) presumably to announce itself and receive an additional payload.

Lovely.

Emily Phelps, Director, Cyware had this comment:

   “With advancing technologies and a rapidly evolving digital landscape, threat actors are able to exploit human trust in established entities at scale. Addressing these risks requires more than awareness training and traditional security controls. End users must understand the risks and proceed with caution, but platforms must also bolster their security posture to adapt to these threats. It’s critical to employ continuous and rigorous testing to ensure they remain a step ahead of potential online adversaries.”

Add this to the attack surface that you have to defend yourself against as I didn’t have “malware delivered by ads on an AI chatbot” on my cybersecurity BINGO card. But I should have expected it as threat actors are getting very crafty these days.

Tech Leaders Make A Trip To Capitol Hill To Talk AI

Posted in Commentary with tags on September 14, 2023 by itnerd

Yesterday, the biggest names in tech made a trip to Capitol Hill for a closed-door summit on artificial intelligence:

Senate Majority Leader Chuck Schumer, D-N.Y., hosted the private AI Insight Forum in the grand Kennedy Caucus Room on Capitol Hill on Wednesday, as lawmakers sought advice from 22 AI tech giants, human rights and labor leaders about how government should regulate the new technology.

In addition to Musk, Meta CEO Zuckerberg and Microsoft co-founder Gates, ChatGPT-maker OpenAI CEO Sam Altman and Google CEO Sundar Pichahi attended, as well as leaders from human rights, labor and entertainment groups.

And here’s what they allegedly said:

According to Schumer, every leader in the meeting raised their hand when asked if government should regulate AI.

“We got some consensus on some things … I asked everyone in the room, does government need to play a role in regulating AI and every single person raised their hand, even though they had diverse views,” Schumer told reporters. “So that gives us a message here that we have to try to act, as difficult as the process might be.”

That’s not the response I was expecting from them. But likely it likely is the right answer. Allen Drennan, Principal & Co-Founder, Cordoniq had this comment:

“The new privacy and security concerns of AI need to be carefully evaluated by regulators, or consumers could quickly find that every piece of data that has ever been provided to private companies and organizations is used in the training of AI models.  While this has clear benefits, such as applying AI to cold-case files in investigations, it could also be used to scrape all communications you have ever posted to the Internet, including social media, email cloud host providers and others, to gain a more exact profile of the consumer, on a mass basis. This type of advertiser information is invaluable which makes privacy regulations all that more important.”

Hopefully, there’s a thoughtful approach to AI that balances regulation to letting it do what it was designed to do. That way we can get the benefits without many of the risks.