Users Not Warned of Credential Theft in Claude Code, Gemini CLI, and GitHub Copilot Agents

Three of the most widely deployed AI agents on GitHub Actions can be hijacked into leaking the host repository’s API keys and access tokens, with GitHub itself serving as the command-and-control channel. Anthropic’s Claude Code Security Review, Google’s Gemini CLI Action, and Microsoft’s GitHub Copilot were all successfully targeted; the flaws were disclosed to the vendors, who neither assigned CVEs nor published public advisories.
More details here: https://oddguan.com/blog/comment-and-control-prompt-injection-credential-theft-claude-code-gemini-cli-github-copilot/
Ensar Seker, CISO at SOCRadar:
“AI agents embedded into developer workflows are quickly becoming part of the software supply chain, and this research highlights a structural security gap rather than an isolated bug. When an agent is granted access to GitHub Actions, secrets, and external tools, prompt injection is no longer just a data integrity issue, it becomes a privilege escalation path that can directly expose API keys, tokens, and internal automation pipelines.
The more concerning aspect is not the vulnerability itself, but the lack of transparent disclosure. Without advisories or CVEs, organizations cannot properly assess exposure, especially when many teams pin agent versions or reuse workflows across repositories. This creates a silent risk layer inside CI/CD environments, where compromised agents can operate with high trust and minimal visibility.
From a defensive standpoint, this reinforces that AI agents must be treated as untrusted code with strict isolation boundaries. Secrets should never be directly accessible to agent execution contexts, and GitHub Actions workflows need tighter scoping, short-lived credentials, and explicit approval gates. More broadly, this is a wake-up call that AI-native attack surfaces are evolving faster than vendor disclosure practices, and security teams need to assume these agents can and will be manipulated.”
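Seker’s recommendations — tighter scoping, short-lived credentials, explicit approval gates, and keeping secrets out of the agent’s execution context — map onto concrete GitHub Actions settings. Here is a minimal hardened-workflow sketch; the workflow name, environment name, action references, and SHAs are placeholders for illustration, not references to the affected products:

```yaml
# Hypothetical hardened workflow for an AI review agent (illustrative only).
name: ai-agent-review

on:
  pull_request:

# Tighter scoping: the job's GITHUB_TOKEN is short-lived by design,
# but its permissions should still be reduced to the minimum needed.
permissions:
  contents: read
  pull-requests: write   # only if the agent must post review comments

jobs:
  review:
    runs-on: ubuntu-latest
    # Explicit approval gate: a protected environment with required
    # reviewers forces a human sign-off before the job (and its
    # credentials) runs at all.
    environment: agent-approval-required
    steps:
      # Pin third-party actions to a full commit SHA, not a mutable tag.
      - uses: actions/checkout@<full-commit-sha>        # placeholder SHA
      - uses: example-org/ai-review-agent@<full-commit-sha>  # placeholder
        # Do NOT pass repository secrets into the agent step's env;
        # anything visible here is reachable by an injected prompt.
        with:
          review-target: ${{ github.event.pull_request.number }}
```

The key design choice is that nothing under `secrets.*` is wired into the agent step, so even a fully hijacked agent has only the scoped, expiring token to exfiltrate.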
Dave Hayes, VP of Product at FusionAuth:
“We spent twenty years building zero-trust for humans and then handed AI agents god-mode secrets with no identity layer at all. These aren’t getting hacked because they’re flawed. They’re getting hacked because nobody asked the most basic security question of all: should this thing have access to our secrets?”
“Three billion-dollar companies paid researchers for finding credential-theft vulnerabilities in their AI agents, and then told no one. No CVEs, no advisories…. If this were an OAuth library, there’d be congressional hearings. But AI gets a different set of rules and that should terrify every company running these tools in production.”
This is a #fail. Any company doing anything with AI needs to keep these agents’ trust level low so that when, not if, this sort of thing happens, it is protected from the inevitable fallout.
This entry was posted on April 16, 2026 at 8:19 am and is filed under Commentary with tags Hacked. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.