Archive for Hacked

Iranian APT MuddyWater Disguise Their Operations as a Chaos Ransomware Attack

Posted in Commentary with tags on May 7, 2026 by itnerd

Iranian APT MuddyWater has been found disguising their operations as a Chaos ransomware attack leveraging Microsoft Teams social engineering to infiltrate organizations. 

The campaign was characterized by a high-touch social engineering phase conducted via Microsoft Teams, where the attackers utilized interactive screen-sharing to harvest credentials and manipulate Multi-Factor Authentication (MFA). Once inside, the group bypassed traditional ransomware workflows, forgoing file encryption in favor of data exfiltration and long-term persistence via remote management tools like DWAgent. This report deconstructs the infection chain and analyzes the custom “Game.exe” Remote Access Trojan (RAT).

Additionally, the report explores how MuddyWater is increasingly leveraging the cybercriminal ecosystem to provide plausible deniability for geopolitical espionage and prepositioning, particularly in the US. The strategy highlights the convergence between state-sponsored intrusion activity and criminal tradecraft, where a big “tell” lies in the techniques that were deployed – and those that weren’t.

This overall strategy suggests the primary goal was not financial gain. It is also further proof that the lines between criminal and state-sponsored activity are blurring against a backdrop of geopolitical tension, and that attribution becomes more difficult when teams do not take it upon themselves to conduct proper and thorough research.

More details here: https://www.rapid7.com/blog/post/tr-muddying-tracks-state-sponsored-shadow-behind-chaos-ransomware/

Ensar Seker, CISO at threat intel company SOCRadar, commented:

“The MuddyWater activity is another example of how state-aligned threat actors increasingly blur the line between cybercrime and cyber-espionage. Using Chaos ransomware as a decoy provides plausible deniability while also distracting incident responders into treating the intrusion as financially motivated cybercrime instead of a long-term intelligence collection operation. This tactic complicates attribution, delays strategic response decisions, and increases confusion during the critical early stages of an investigation.

The Microsoft Teams social engineering component is particularly notable because collaboration platforms are becoming one of the most effective initial access vectors. Employees inherently trust internal communication tools, and attackers understand that exploiting human familiarity inside business collaboration environments often bypasses traditional email-focused security controls. Organizations should treat Teams, Slack, and similar platforms as high-risk attack surfaces, applying the same monitoring, user awareness, and identity protection strategies traditionally reserved for email and VPN infrastructure.”

Threat actors come in all shapes and sizes. Thus, as Mr. Seker says, consider everything to be a potential threat. And I would add that nothing should be trusted.

Edtech Firm Instructure Admits To Being Pwned

Posted in Commentary with tags on May 4, 2026 by itnerd

Education technology firm Instructure, best known for its widely used learning management platform Canvas, confirmed that it was the victim of a data breach. Yesterday, the ShinyHunters cybercrime group claimed they stole 3.65 terabytes of data from more than 9,000 schools. The company issued the following statement:

We are providing an update on the security incident we advised you of yesterday. While our investigation continues alongside our outside forensics experts, at this stage we believe the incident has been contained.

Here are the steps we have taken since we became aware of the incident. We have:
– Revoked privileged credentials and access tokens associated with affected systems
– Deployed patches to enhance system security
– Rotated certain keys out of an abundance of caution, even though there is no evidence they were misused
– Implemented increased monitoring across all platforms

While we continue actively investigating, thus far, indications are that the information involved consists of certain identifying information of users at affected institutions, such as names, email addresses, and student ID numbers, as well as messages among users. At this time, we have found no evidence that passwords, dates of birth, government identifiers, or financial information were involved. If that changes, we will notify any impacted institutions.

Brian Bell, CEO of customer identity and access management platform FusionAuth:

“This is the uncomfortable truth for edtech: student data now moves through a sprawling web of identity systems, APIs, and third-party integrations. Instructure has not confirmed how the attackers got in, but its response shows where the risk had to be contained: privileged credentials, access tokens, and application keys. In edtech, credential governance is student data protection.”

Ensar Seker, CISO at threat intel company SOCRadar:

“The disruption tied to API keys is a strong indicator that identity and access management, not just perimeter security, was the real failure point. When privileged tokens or API credentials are exposed, attackers can bypass traditional defenses and operate as trusted entities. In environments like Instructure’s Canvas, where integrations and automation are core, this creates a high-impact blast radius very quickly.

“The involvement of ShinyHunters and claims of access to a Salesforce instance suggest this may be more than a single-system breach; it points to lateral movement across SaaS ecosystems. Organizations often underestimate how interconnected these platforms are; once attackers gain a foothold, misconfigured integrations and over-permissioned tokens allow them to pivot and aggregate data at scale. Even if highly sensitive fields like financial data or government IDs were not exposed, the combination of names, emails, student IDs, and communications still creates long-term risk. This type of dataset is extremely valuable for phishing, identity correlation, and social engineering campaigns, especially in education, where users are less likely to question trusted platforms.

“The key lesson here is that revoking credentials after the fact is necessary but not sufficient. Organizations need continuous monitoring of API behavior, strict token lifecycle management, and least-privilege enforcement across all integrations. In modern breaches, it’s not just about how attackers get in; it’s about how long they can operate undetected using legitimate access.”

This likely won’t end well in the long term as ShinyHunters is involved. They are on a tear as of late with no end in sight to their spree of hacking anything within their reach.

Vimeo Pwned By ShinyHunters

Posted in Commentary with tags on April 29, 2026 by itnerd

Vimeo has confirmed a security incident involving unauthorized access to user and customer data following a breach at third-party analytics provider Anodot. The incident involved attackers stealing authentication tokens and using them to access connected cloud environments, including Vimeo systems.

According to Vimeo, the accessed data includes technical information, video titles, metadata, and in some cases customer email addresses. The company stated that video content, login credentials, and payment card information were not accessed, and there was no disruption to its services. 

Vimeo said it disabled Anodot credentials, removed the integration, engaged external security experts, and notified law enforcement, while the investigation into the incident remains ongoing.

The breach has been linked to the ShinyHunters extortion group, which has claimed responsibility and threatened to release stolen data. 

Denis Calderone, CTO, Suzu Labs:

   “This has become such a prevalent pattern. A third-party SaaS provider gets compromised, its authentication tokens get stolen, and suddenly attackers are inside customer cloud environments pulling data from Snowflake, BigQuery, Salesforce, or whatever else that integration was allowed to reach. Vimeo is just the latest to fall victim to this new trend in supply chain risk.

   “Vimeo can say its core systems were not disrupted and that video content, passwords, and payment cards were not accessed, and that may all be true. But was that ever the real target? If your goal is data theft and extortion, you do not necessarily need production systems. All data has some amount of inherent value, and the downstream data stores where customer metadata, operational data, reporting exports, and business intelligence live may be just as valuable as what Vimeo is emphasizing was not affected.

   “ShinyHunters has been very good at turning “limited” data exposure into leverage. SoundCloud said the exposed data was mostly email addresses and public profile information, and the group still used it for extortion and harassment. Panera described its incident as customer contact information, and that still became 5.1 million exposed accounts. AT&T’s Snowflake incident did not expose call content or Social Security numbers, but call and text metadata alone reportedly led to a six-figure payment.

   “My guess is Vimeo lands in that same lane. Not a catastrophic platform compromise if Vimeo’s statement holds, but enough context to create pressure. Video titles, metadata, technical data, and email addresses could help attackers embarrass enterprise customers, threaten Vimeo’s reputation, and craft follow-on phishing that references real projects or business relationships.

   “For organizations using third-party SaaS integrations, the takeaway is to inventory every integration that can read from your cloud data platforms, identify what tokens exist, who owns them, when they were last rotated, and what data they can actually reach. Monitor for abnormal query volume, unusual exports, access from new infrastructure, and dormant integrations suddenly becoming active. If a vendor in that trust chain reports an incident, don’t wait for a perfect impact statement. Act fast and proactively revoke and rotate first, then investigate. Also, make sure your threat modeling is taking this attack pattern into account, because this is becoming the norm these days.”

Damon Small, Board of Directors, Xcape, Inc.:

   “The Vimeo breach via Anodot is a high-fidelity case study in the vulnerability of the modern “integrated” enterprise. By compromising the third-party analytics provider Anodot and stealing its authentication tokens, the ShinyHunters extortion group bypassed Vimeo’s own identity perimeter to directly query its Snowflake and BigQuery data warehouses. While Vimeo’s confirmation that raw video content and passwords remain secure is a necessary PR distinction, it underplays the reality of the breach: the exfiltration of customer email addresses and video metadata from a centralized cloud environment creates a persistent, high-value asset for downstream phishing and social engineering.

   “For security practitioners and executives, this incident exposes the “read-only” fallacy. Many organizations grant third-party SaaS tools programmatic access to their data lakes under the assumption that the integration is limited in scope; however, in a cloud-native environment, a stolen token is often functionally equivalent to a root credential for bulk data export. The April 30 “pay or leak” deadline set by ShinyHunters highlights the urgent need for a shift toward identity-based, time-bound access.

   “Organizations must immediately audit their service-to-service integrations and implement rigid “least privilege” controls – specifically monitoring for unauthorized COPY INTO or UNLOAD commands within cloud warehouses that signify bulk exfiltration. If your vendor security assessment ended with a SOC 2 report instead of a review of their token management lifecycle, you are essentially outsourcing your data integrity to the weakest link in your supply chain.

   “Read-only” permissions are the security industry’s favorite fairy tale – until someone uses them to export your entire database.”
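The warehouse-side monitoring Mr. Small describes can be sketched as a first-pass filter over exported query history. The function and record fields below are hypothetical; a real deployment would read from something like Snowflake's `ACCOUNT_USAGE.QUERY_HISTORY` view and correlate matches with bytes transferred and the issuing role or integration.

```python
import re

# Patterns for bulk-export statements: Snowflake's "COPY INTO @<stage>"
# (unload to a stage) and Redshift's "UNLOAD (...)". Matching query text
# is a coarse but useful first-pass signal; production detection would
# also weigh row counts, bytes scanned, and which identity ran the query.
BULK_EXPORT = re.compile(r"\b(COPY\s+INTO\s+@|UNLOAD\s*\()", re.IGNORECASE)

def flag_bulk_exports(query_log):
    """Return (query_id, user) pairs whose SQL looks like a bulk export."""
    return [
        (q["id"], q["user"])
        for q in query_log
        if BULK_EXPORT.search(q["sql"])
    ]
```

A stolen integration token issuing one of these statements looks like a perfectly legitimate query, which is why the signal has to come from behavioral review rather than access control alone.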

Vishal Agarwal, CTO, Averlon:

   “Third-party breaches become much more consequential when the compromised asset is trust itself. Stolen authentication tokens carry delegated access into connected environments, and those tokens work silently until someone explicitly revokes them. When a third-party provider is compromised, every token it holds can become a potential entry point into the environments those tokens connect to.

   “The real risk isn’t just what was exposed at the vendor. It’s how much inherited access those tokens may have provided downstream. Organizations should treat third-party token grants like privileged credentials: audit them regularly, scope them tightly, and revoke anything that isn’t actively needed.”

Third-party hacks, supply chain attacks, whatever you want to call them, are the new hotness. Thus, you need to treat third parties as untrustworthy until proven otherwise. Otherwise, you will be added to the growing list of organizations that have been pwned by ShinyHunters.

Namastex.ai npm Packages Hit with TeamPCP-Style CanisterWorm Malware

Posted in Commentary with tags on April 22, 2026 by itnerd

Researchers have uncovered malicious Namastex.ai npm packages with the tradecraft of TeamPCP/LiteLLM style CanisterWorm malware, including install-time execution, credential theft from developer environments, off-host exfiltration, canister-backed infrastructure, and self-propagation logic intended to compromise additional packages.

More details here: https://socket.dev/blog/namastex-npm-packages-compromised-canisterworm

Dan Moore, Sr. Director CIAM Strategy at cybersecurity company FusionAuth, commented:

“This newest supply chain threat in the npm ecosystem demonstrates that a lot of the time, the issue isn’t an organization’s code, but its credentials. Long-lived, over-permissioned CI/CD tokens are as risky as passwords written on a sticky note. Organizations need more than static credentials for software systems. To maintain identity hygiene, organizations should rotate, scope, and continually monitor credentials.”
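The install-time execution mentioned above can be made concrete with a small audit helper. This is a hedged sketch, not a complete scanner: it only flags the npm lifecycle hooks that run automatically during installation, which is the mechanism worm-style packages like these reportedly abuse.

```python
import json

# npm lifecycle hooks that execute automatically when a package is
# installed; a common vehicle for install-time supply chain malware.
INSTALL_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def risky_install_scripts(package_json_text):
    """Return the install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}
```

Running a check like this across a lockfile's dependencies (or using `npm install --ignore-scripts` by default) shrinks the window in which a compromised package can execute code on a developer machine or CI runner.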

AI for coding is great. But you have to be incredibly careful to make sure that the benefit of being able to code more efficiently isn’t overshadowed by having threat actors set up shop by infecting your code.

Lovable access issue exposes project data, credentials in AI-generated coding

Posted in Commentary with tags on April 22, 2026 by itnerd

A security issue involving AI coding platform Lovable allowed users to access other users’ project data, including source code, database credentials, AI chat histories, and customer data, according to reports and user disclosures.

The issue was publicly highlighted after a user demonstrated that a free account could access data across projects created before November 2025.

Lovable initially stated there was no data breach, describing the behavior as expected for public projects, but later acknowledged a backend error that temporarily enabled access to AI chat data. The company updated its visibility and permission settings following the incident and said the issue had been addressed.

The incident involved exposure of project-level data within the platform environment and did not include confirmation of broader system compromise. Reporting indicates the issue remained unresolved for some time after it was first reported before changes were implemented.

Ryan McCurdy, VP of Marketing, Liquibase, had this to say:

   “This incident is a reminder that the risk in AI-generated development is not just bad code. It is bad control design. When application creation speeds up, permissions, secrets exposure, and database access paths can become part of the attack surface just as quickly. If teams do not put governed change, least-privilege access, and clear separation between public artifacts and sensitive backend context in place, AI can amplify operational risk faster than traditional review processes can catch it.”

John Carberry, Solution Sleuth, Xcape, Inc. adds this comment:

   “The Lovable data exposure incident highlights a catastrophic failure in the fundamental security architecture of AI-powered “vibe coding” platforms. By failing to implement basic ownership validation on its API endpoints (a textbook Broken Object Level Authorization, or BOLA, flaw), Lovable allowed any user to traverse project IDs and scrape the source code, database credentials, and AI chat histories of others.

   “For security leaders, the primary risk is a silent supply chain compromise: while Lovable claims no “breach” of its own servers, the exposure of third-party secrets like Stripe and Supabase keys means the applications built on the platform are now effectively backdoored.

   “Technically, the crisis was compounded by a February 2026 backend regression that re-opened access to sensitive chats and a response cycle that spent 48 days ignoring a bug bounty report. Organizations must treat AI-generated code with extreme caution, ensuring that “vibe coding” speed doesn’t bypass mandatory secret scanning, environment variable isolation, and the hard-won security logic of the last twenty years.

   “Lovable proved that while AI can write your code, it can’t write your common sense, especially when “public by default” includes your Stripe secret keys.”
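The BOLA pattern Mr. Carberry describes can be illustrated with a minimal sketch. All names here (`PROJECTS`, the handler functions, the placeholder key values) are hypothetical stand-ins for the platform's real data model; the point is the missing ownership check.

```python
# Hypothetical per-project store; the secrets are placeholder strings.
PROJECTS = {
    1: {"owner": "alice", "secrets": "STRIPE_KEY=sk_live_..."},
    2: {"owner": "bob", "secrets": "SUPABASE_KEY=sb_..."},
}

def get_project_vulnerable(project_id, requesting_user):
    # BOLA: the caller's identity is never checked against the object,
    # so any authenticated user can iterate IDs and read everyone's data.
    return PROJECTS.get(project_id)

def get_project_fixed(project_id, requesting_user):
    # Ownership validation: unauthorized access is indistinguishable
    # from "not found", so IDs cannot even be enumerated.
    project = PROJECTS.get(project_id)
    if project is None or project["owner"] != requesting_user:
        return None
    return project
```

The fix is one comparison per request, which is what makes BOLA incidents so frustrating: the control is cheap, but it has to be applied on every object-returning endpoint.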

Hannah Perez, Director of Marketing, Suzu Labs followed up with this:

   “As we move toward AI-generated software, the ‘shared responsibility model’ is becoming dangerously blurred. Users expected a private sandbox for innovation, but instead found a communal space with paper-thin walls.

   “Lovable’s eventual pivot is welcome, but the delay between the initial report and the actual fix suggests that AI startups are currently outpacing their own security protocols, which is as expected for most. In the rush to ‘vibe code,’ fundamental safety is being treated as a post-launch patch rather than a requirement. For this industry to mature, Secure by Default must be the non-negotiable standard for any platform handling sensitive IP and source code.”

Vishal Agarwal, CTO, Averlon provided this comment:

   “It’s one thing to have access to the sauce. It’s another to have access to its recipe. With inadvertent leakage of chat history, attackers gain access to reconnaissance information that can be leveraged to target the organization more precisely.

   “What makes sophisticated attackers dangerous isn’t just their technical capability, it’s their detailed understanding of the target’s systems. Exposing chat history and source code together hands that understanding directly to an attacker.”

This highlights the fact that AI has to be part of your security planning. Otherwise really bad things will happen. And this is a case in point.

ZionSiphon malware targets Israeli water and desalination systems

Posted in Commentary with tags on April 21, 2026 by itnerd

Researchers at Darktrace have identified a new malware strain dubbed ZionSiphon designed to target Israeli water treatment and desalination systems, with code specifically built to interact with industrial control system (ICS) and operational technology (OT) environments.

The malware was first detected on June 29, 2025, and includes functionality to identify processes associated with reverse osmosis, chlorine handling, and plant control systems.

Researchers said the malware appears designed to activate only when two conditions are met: a geographic trigger and an environmental trigger tied to desalination or water treatment systems.

Once executed, ZionSiphon scans devices on the local network, attempts communications using Modbus, DNP3, and S7comm industrial protocols, and alters configuration settings related to chlorine levels and pressure controls. Analysis found the Modbus-based attack functionality is the most developed, while the DNP3 and S7comm components appear incomplete, suggesting the malware may still be under development.

The malware appears configured to focus on Israeli IP ranges and includes politically themed embedded strings, according to reporting. 

Josh Marpet, Senior Product Security Consultant at Finite State, had this to say:

   “Hacktivist actions are on the rise. From nation-state operations (Stuxnet) to this apparently politically motivated terroristic action, it is becoming easier and easier to build, configure, and deploy malware against Operational Technology (OT) targets. These targets include water, power, sewer, and other utilities and critical infrastructure. Without an OT-specific security program and/or partner, it’s almost impossible for utility companies to protect against these types of attacks.

   “OT devices are fundamentally different from Information Technology (IT) devices. Compare a laptop to a thermostat, or a factory full of valves and switches. Without specialized knowledge and experience, the normal IT security firms are simply not enough. After all, laptops rarely explode. Factories full of chemicals…can.”

Damon Small, Board of Directors, Xcape, Inc. adds this comment:

   “ZionSiphon is an intent-driven Operational Technology (OT) sabotage malware targeting the logic of water desalination and treatment plants. The immediate business risk is physical process disruption, specifically manipulating hydraulic pressure and chemical dosing, with the possibility of infrastructure damage or public health incidents.

   “Technically, it is highly sector-specific, with dual-trigger checks for Israeli IP ranges and process names like “ChlorineCtrl.” Though a current flaw prevents payload activation, functional Modbus sabotage routines and DNP3/S7comm stubs indicate active development. Despite post-Stuxnet awareness, critical infrastructure remains exposed to 45-year-old unauthenticated protocols. Mitigation requires urgent OT/IT network segmentation, deep packet inspection for unauthorized register writes, and verified hard-coded failsafes to prevent dangerous chemical or pressure levels, irrespective of compromised software.

   “Relying on unauthenticated Modbus to protect the water supply is like locking your front door with a Post-it note that says, “Please don’t come in.”

Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs follows up with this comment:

   “AI has compressed the timeline for developing ICS malware from months to days, and ZionSiphon demonstrates exactly where that trajectory leads. The malware’s dual trigger design, requiring both an Israeli IP range and the presence of desalination or water treatment processes before activating, reflects deliberate targeting of infrastructure that is both nationally critical and geopolitically charged.

   “Israel depends on desalination for a significant share of its drinking water, and ZionSiphon’s target list names specific facilities including Mekorot, Sorek, Hadera, and Palmachim. Darktrace’s analysis found the Modbus sabotage path is fully implemented while DNP3 and S7comm remain incomplete. That development gap will close faster than the industry expects when the structured technical knowledge required to build this tooling is exactly what AI models accelerate.

   “The protocols ZionSiphon targets date to the late 1970s. Modbus has no authentication and no encryption. DNP3 and S7comm carry the same fundamental weakness. Any device on the network segment can issue commands that a controller will execute without question. As geopolitical tensions continue to drive threat actors toward critical infrastructure, these protocols represent an expanding attack surface defended by decades old assumptions.

   “When malware can identify processes associated with reverse osmosis, chlorine handling, and plant control systems, and then communicate directly with the controllers managing them, the only meaningful barrier is the network architecture surrounding those protocols.

   “Every ICS protocol should sit behind multiple layers of network segmentation, with strict access controls governing what can reach those segments. If Modbus traffic is reachable from an IT network or an internet facing system, the architecture has already failed before the malware arrives. The industry also needs sustained investment in zero trust solutions layered on top of these legacy protocols. Modbus and DNP3 are not going away. The installed base is too large, and the replacement cost is too high. The security model has to evolve around them.”
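The protocol weakness described above is visible even at the frame level. The sketch below, assuming raw Modbus/TCP frames captured off the wire, flags state-changing function codes; nothing in the protocol itself distinguishes an authorized write from a malicious one, which is exactly why segmentation has to do that job.

```python
import struct

# Modbus/TCP function codes that modify controller state:
# 0x05 write single coil, 0x06 write single register,
# 0x0F write multiple coils, 0x10 write multiple registers.
# Because Modbus has no authentication, any host that can reach
# the PLC can issue these.
WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}

def is_write_frame(frame: bytes) -> bool:
    """Parse a Modbus/TCP ADU and report whether it carries a write."""
    if len(frame) < 8:
        return False
    # MBAP header: transaction id, protocol id, length, unit id
    _txn, proto, _length, _unit = struct.unpack(">HHHB", frame[:7])
    if proto != 0:  # protocol id 0 identifies Modbus
        return False
    return frame[7] in WRITE_CODES
```

A passive sensor applying this check on an OT segment can alert on write traffic from any source that is not on an allowlist of engineering workstations, which is about the strongest control the protocol itself permits.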

This illustrates the fact that critical systems like these are prime targets for threat actors. Which means that everything possible must be done to protect those systems from getting pwned. Otherwise, the consequences could be massive.

Hackers Pwn Vercel & Steal Data 

Posted in Commentary with tags on April 20, 2026 by itnerd

Over the weekend, cloud app hosting company Vercel said hackers breached its internal systems and stole customer credentials which they are now selling online. The breach originated with a compromise of Context.ai, a third-party AI tool used by a Vercel employee. The attacker used that access to take over the employee’s Vercel Google Workspace account, which enabled them to gain access to some Vercel environments and environment variables that were not marked as “sensitive.” 

More details from Vercel here: https://vercel.com/kb/bulletin/vercel-april-2026-security-incident

Ensar Seker, CISO at SOCRadar, commented:

“This incident is a textbook example of how identity and integration layers have become the new attack surface. The breach didn’t start with Vercel itself; it started with a trusted third-party application and an OAuth connection that effectively bypassed traditional security controls.

We’re seeing a clear shift where attackers no longer need to exploit infrastructure vulnerabilities; instead, they exploit trust relationships between services. Once an OAuth token is granted, it can provide persistent and often over-privileged access, especially if organizations lack strict controls over third-party app integrations. The more concerning detail here is the mention of unencrypted credentials in internal systems. That turns what could have been a contained identity compromise into a broader data exposure event.

Organizations need to treat OAuth integrations as privileged access, enforce least privilege, continuously audit connected apps, and implement controls like device-bound sessions and conditional access. Otherwise, these types of “indirect breaches” will continue to scale faster than traditional defenses can handle.”

Lotem Finkelstein, VP Research at Check Point, offered the following commentary:

“This is not a theoretical risk but an active security incident involving a widely used library, which significantly increases the potential impact. Given its broad adoption, even a single compromise can quickly translate into large-scale exposure across organizations, so organizations need to make sure the right security measures are in place to prevent any exposure related to this library.

What makes incidents like this particularly challenging is the lack of immediate visibility — many organizations are not fully aware of where and how such dependencies are embedded across their environments, which can delay detection and response at scale.”

SOCRadar also offered the following analysis – Vercel Breach: Hacker Claims to Sell Stolen Data in Potential Global Supply Chain Attack

UPDATE: Yagub Rahimov, CEO of Polygraf AI adds this:

“One employee. One AI app. “Allow All.” That’s how Vercel got breached.

The employee signed up for Context AI’s app using their enterprise account and gave broad Google Workspace permissions. When that OAuth token was stolen, the attacker didn’t need credentials, didn’t need to bypass MFA – they just used a valid token doing exactly what it was allowed to do. The Salesloft-Drift breach in late 2025 worked the same way – attackers stole OAuth tokens from an integration provider and rode trusted connections straight into hundreds of customer environments without triggering a single login alert. The technical problem is that OAuth tokens granted to third-party apps are outside most organizations’ detection scope. They don’t appear in login logs. They don’t trigger MFA prompts. Context AI was compromised a month before anyone at Vercel knew there was a problem – and CrowdStrike apparently didn’t flag the OAuth tokens as part of their investigation scope. The token just kept working, silently, with whatever permissions the employee gave it on day one. It’s the same problem we see all the time at Polygraf AI – AI tools quietly holding OAuth access to corporate accounts that nobody is watching. The breach surface is not your perimeter anymore. It’s every OAuth grant your employees ever clicked through.”

UPDATE #2: Fredrik Almroth, co-founder and security researcher at Detectify, had this to say:

“The Vercel breach is a stark reminder that modern security risks don’t stop at the boundaries of your own systems. They extend to every tool and service your organization is connected to.

What we’re seeing here is a pattern that’s becoming alarmingly common: a sophisticated attacker found a smaller, less-scrutinized part of Vercel’s ecosystem – a third-party AI productivity tool – compromised it, and used that foothold to take over an employee’s corporate account and move into Vercel’s internal systems. There was no need to go after Vercel directly, use brute force, or apply sophisticated technical knowledge.

The practical lesson is to focus less on the label of the tool involved and more on the access chain: which external apps are connected to employee accounts, what those apps are allowed to do, what internal systems those accounts can reach, and whether sensitive credentials would still be exposed if that chain of trust broke.

That’s a blind spot many organizations still have. They’ve got a reasonable handle on their known vendors, but the web of third-party tools that employees connect to their work accounts organically, tool by tool, often without a formal approval process, is a different thing entirely. It’s rarely tracked, rarely reviewed, and almost never reconsidered when something goes wrong elsewhere. That’s the gap this incident exposes.

The organizations that develop real visibility into what’s connected to their systems (and what those connections can actually reach) will be the ones that catch these intrusions before an attacker decides to go public.”
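The inventory-and-review process Mr. Almroth recommends can be approximated with a simple policy check. The grant records below are hypothetical; in practice they would come from your identity provider's report of third-party apps that employees have authorized, and the scope list would be tuned to your environment.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical examples of scopes broad enough to warrant review.
BROAD_SCOPES = {"https://mail.google.com/", "admin", "full_access"}

def risky_grants(grants, max_age_days=90):
    """Flag OAuth grants that are over-broad or have gone unreviewed.

    Each grant is a dict with hypothetical fields: "app" (name),
    "scopes" (list of scope strings), "granted" (aware datetime).
    """
    now = datetime.now(timezone.utc)
    flagged = []
    for g in grants:
        too_old = now - g["granted"] > timedelta(days=max_age_days)
        too_broad = bool(BROAD_SCOPES & set(g["scopes"]))
        if too_old or too_broad:
            flagged.append(g["app"])
    return flagged
```

Even a crude report like this surfaces the long tail of integrations employees connected "tool by tool": grants that are old, broad, or both are exactly the tokens an attacker can ride without triggering a login alert.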

AgingFly Malware used in attacks on Ukraine government and hospitals

Posted in Commentary with tags on April 16, 2026 by itnerd

A new malware family named ‘AgingFly’ has been identified (the link requires you to translate into English) in attacks against Ukrainian government agencies and hospitals; the malware steals authentication data from Chromium-based browsers and the WhatsApp messenger.

Commenting on this news is Ensar Seker, CISO at SOCRadar:

“AgingFly reflects a continued shift toward credential-centric operations, where attackers prioritize access over disruption in the initial stages. By targeting Chromium-based browsers and messaging platforms like WhatsApp, actors are going after high-value session data that enables lateral movement, impersonation, and long-term persistence rather than immediate impact.

What’s notable here is the targeting profile: government, healthcare, and potentially defense-linked entities, which suggests intelligence collection and pre-positioning rather than opportunistic cybercrime. Groups like UAC-0247 are increasingly blending espionage tactics with commodity malware techniques, making detection harder. Organizations should treat browser-stored credentials and messaging session tokens as sensitive assets and move toward stronger controls like device-bound authentication, reduced credential storage, and continuous session monitoring.”

Reading through this document makes one thing clear. This is a skilled threat actor who is clearly out to set up shop for the long term. That’s the most dangerous type of threat actor to deal with. And chances are, they won’t stop at Ukraine as I fully expect them to be using the same techniques elsewhere.

Users Not Warned of Credential Theft in Claude Code, Gemini CLI, and GitHub Copilot Agents

Posted in Commentary with tags on April 16, 2026 by itnerd

Three of the most widely deployed AI agents on GitHub Actions can be hijacked into leaking the host repository’s API keys and access tokens — using GitHub itself as the command-and-control channel. Anthropic’s Claude Code Security Review, Google’s Gemini CLI Action, and Microsoft’s GitHub Copilot were all affected. The flaws were disclosed to the vendors, but no CVEs were assigned and no public advisories were published.

More details here: https://oddguan.com/blog/comment-and-control-prompt-injection-credential-theft-claude-code-gemini-cli-github-copilot/

Ensar Seker, CISO at SOCRadar:

“AI agents embedded into developer workflows are quickly becoming part of the software supply chain, and this research highlights a structural security gap rather than an isolated bug. When an agent is granted access to GitHub Actions, secrets, and external tools, prompt injection is no longer just a data integrity issue, it becomes a privilege escalation path that can directly expose API keys, tokens, and internal automation pipelines.

The more concerning aspect is not the vulnerability itself, but the lack of transparent disclosure. Without advisories or CVEs, organizations cannot properly assess exposure, especially when many teams pin agent versions or reuse workflows across repositories. This creates a silent risk layer inside CI/CD environments, where compromised agents can operate with high trust and minimal visibility.

From a defensive standpoint, this reinforces that AI agents must be treated as untrusted code with strict isolation boundaries. Secrets should never be directly accessible to agent execution contexts, and GitHub Actions workflows need tighter scoping, short-lived credentials, and explicit approval gates. More broadly, this is a wake-up call that AI-native attack surfaces are evolving faster than vendor disclosure practices, and security teams need to assume these agents can and will be manipulated.”
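Some of the scoping controls Seker describes can be expressed directly in a workflow file. The fragment below is an illustrative sketch only (the job and action names are hypothetical, not from any of the affected products); the key ideas are pinning actions to a full commit SHA rather than a mutable tag, granting the job read-only permissions, and keeping repository secrets out of the agent's execution step.

```yaml
# Hypothetical workflow sketch: tighter scoping for an AI agent step.
name: ai-review
on: [pull_request]

permissions:
  contents: read   # default-deny posture: no write access for the job's token

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      # Pin by full commit SHA, not a tag like @v1, so the code can't change underneath you.
      - uses: actions/checkout@<full-commit-sha>
      - name: Run review agent
        # The agent itself is also pinned to an audited commit.
        uses: example-org/review-agent@<full-commit-sha>
        # Deliberately no `env:` block here: repository secrets are never
        # exposed to the agent's execution context.
```

Pinning by SHA and denying secrets to the agent step won't stop prompt injection itself, but it shrinks what a hijacked agent can actually reach.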

Dave Hayes, VP of Product at FusionAuth:

“We spent twenty years building zero-trust for humans and then handed AI agents god-mode secrets with no identity layer at all. These aren’t getting hacked because they’re flawed. They’re getting hacked because nobody asked the most basic security question of all: should this thing have access to our secrets?”

“Three billion-dollar companies paid researchers for finding credential-theft vulnerabilities in their AI agents, and then told no one. No CVEs, no advisories…. If this were an OAuth library, there’d be congressional hearings. But AI gets a different set of rules and that should terrify every company running these tools in production.”

This is a #fail. Any company doing anything with AI needs to make sure that the trust level is low so that when, not if, these sorts of things happen, they are protected from the inevitable fallout.

AI supply chain attack exposes 4TB of sensitive data

Posted in Commentary with tags on April 2, 2026 by itnerd

Mercor has disclosed it was impacted by a supply chain attack involving LiteLLM, after attackers used a compromised maintainer account to publish malicious PyPI packages that were available for roughly 40 minutes and likely downloaded by thousands of organizations. The incident, tied to a broader campaign involving a compromised Trivy dependency in CI/CD security workflows, is now under investigation as the Lapsus$ extortion group claims to have stolen over 4TB of data, including candidate profiles, credentials, and proprietary information.

Here’s some commentary from CTO of DryRun Security, Ken Johnson:

“What’s notable here isn’t just the LiteLLM compromise, it’s the pattern. We’re seeing the same playbook show up across groups like Lapsus$ and TeamPCP. Start with a trusted tool, pivot into CI/CD, then ride that access into cloud and AI infrastructure. This is becoming repeatable.

The bigger shift is that this isn’t traditional SCA risk. This isn’t a CVE sitting in a dependency. This is active malware in the supply chain, designed to spread, harvest credentials, and exfiltrate data as it moves.

Once attackers land in the pipeline, they’re inside your build and deployment process. At that point, it’s not about exploiting a bug, it’s about abusing trust to scale across environments.

We’ve moved toward a world where attackers don’t need new techniques, they just reuse what already works across the same shared tooling and AI stack.”
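One concrete defence against the short-lived malicious-package window described above is hash pinning: refusing to use any artifact whose digest doesn't match a value recorded in advance, so a package swapped out for 40 minutes fails verification even if its name and version look right. A minimal Python sketch (the artifact bytes and pinned digest here are purely illustrative, not from any real package):

```python
import hashlib

# Illustrative pinned digest -- in practice this would be recorded in a
# lockfile (e.g. pip's --require-hashes mode) at the time you vetted the package.
PINNED_SHA256 = hashlib.sha256(b"example artifact bytes").hexdigest()

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The vetted artifact passes; a swapped-out artifact fails, regardless of
# what its filename or version string claims.
print(verify_artifact(b"example artifact bytes", PINNED_SHA256))   # True
print(verify_artifact(b"tampered artifact bytes", PINNED_SHA256))  # False
```

Package managers already support this pattern natively (for example, pip's hash-checking mode), which is the practical way to apply it across a whole dependency tree.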

Supply chain attacks are real. Organizations need to do everything possible to ensure that every tool and vendor they interact with is as secure as possible. Otherwise, incidents like this one become inevitable.