Researchers at Check Point have identified multiple vulnerabilities in Anthropic’s development tool Claude Code, allowing malicious repositories to trigger remote code execution and steal active API credentials.
The observed security issues exploited built-in mechanisms including Hooks, Model Context Protocol servers, and environment variables to run arbitrary shell commands and exfiltrate API keys before trust prompts could be confirmed.
Two specific tracked vulnerabilities, CVE-2025-59536 and CVE-2026-21852, were documented and patched by Anthropic following disclosure by security researchers. The first enabled arbitrary code execution via overridden configuration settings that bypass user consent dialogs, while the second could redirect API traffic to malicious endpoints, exposing developers’ Anthropic API keys in plaintext.
All reported flaws have been remedied in subsequent Claude Code updates prior to public advisory publication.
According to researchers, even after the specific vulnerabilities were fixed, the underlying risk does not disappear. The issues exposed how project configuration files can directly shape execution behavior inside AI-assisted development tools, and a malicious repository can still act as a delivery mechanism if safeguards are insufficient, which expands the threat model beyond the individual CVEs that were addressed.
As a result, applying patches resolves the documented flaws but does not fully remove the broader exposure created when AI tooling automatically interprets and acts on repository-level settings.
Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs:
“These CVEs are real and Anthropic was right to patch them. The broader issue is not unique to Claude Code. The AI development tool industry as a whole is prioritizing enablement over security, and these vulnerabilities are a symptom of that design philosophy, not an isolated product failure.
“In the case of Claude Code, hooks ran shell commands before the developer even saw the trust dialog. The security control existed. It just executed after the damage was already done. AI agents are deployed with broad permissions by default because restricting them reduces productivity. That is the same tradeoff the industry made with admin accounts two decades ago, and it took years of breaches to correct. The principle of least privilege does not stop applying because the user is an AI model instead of a human. Agents should be treated as untrusted by default, with strict zero trust boundaries between the agent and any command surface, credential store, or system resource it touches.
“This is not a new class of attack surface. Malicious Makefiles, poisoned scripts, and git hooks have compromised developers for years. What AI tools change is the scope of what runs once triggered. The attack surface is not new. The blast radius is.
“AI development tools are going to become more autonomous, not less. The industry is building the capability first and retrofitting the security later. That pattern has never aged well in software, and it is unlikely to age any better with AI.”
I am aware of a large number of developers who are using tools like Claude Code to speed up the coding pf
Datadobi Announces Early Access Program for Data Access Review, a New Addition to StorageMAP
Posted in Commentary with tags Datadobi on February 26, 2026 by itnerdDatadobi has launched an Early Access Program for Data Access Review, a new capability coming to its StorageMAP platform. Developed in direct response to customer demand for deeper visibility and control over data permissions, Data Access Review will extend StorageMAP’s value by adding actionable permissions intelligence to unstructured data management. During the Early Access program, selected customers have the opportunity to test and help shape new permissions intelligence features.
By formalizing and expanding StorageMAP’s ability to analyze and report on access permissions, Data Access Review enables organizations to identify excessive, outdated, or inappropriate access rights before they evolve into security risks or compliance violations. It integrates into existing unstructured data management workflows, ensuring that access governance becomes a natural extension of data visibility, classification, and remediation strategies.
The Early Access Program is available exclusively to current Datadobi customers who are actively using StorageMAP. Participants will get an early look at new features, gain valuable insights about access permissions in part of their environment, and have a direct line to share feedback that will help shape the final data access product.
Customers interested in joining the Early Access Program can reach out to their Datadobi account representative or visit our website
Leave a comment »