Flashpoint’s threat intelligence team has uncovered new details about DarkCloud, a rapidly spreading, commercially available infostealer that is reshaping the initial‑access landscape for cybercriminals.
DarkCloud is part of a growing wave of low‑cost, highly scalable infostealers that are lowering the barrier to enterprise compromise. First observed in 2022 and openly sold on Telegram and a clearnet storefront for as little as $30, DarkCloud gives even low‑skill threat actors the ability to harvest credentials at scale and gain enterprise‑wide access.
Flashpoint’s latest analysis reveals several concerning trends:
- DarkCloud is written in Visual Basic 6.0, a legacy language that helps it evade modern detection tools and signature‑based defenses.
- Its encryption and string‑obfuscation techniques make it harder for defenders to analyze and block.
- It is fully commercialized, with subscription tiers, active development, and a growing user base on Telegram—mirroring the professionalization of the cybercrime economy.
- Credential theft at scale enables attackers to pivot into ransomware, business email compromise, and long‑term espionage operations.
Flashpoint’s researchers warn that DarkCloud represents a broader shift: infostealers are now the dominant initial‑access vector in 2026, giving attackers a cheap, fast, and reliable way to infiltrate organizations.
Why this matters:
Infostealers like DarkCloud are no longer niche tools – they are becoming the backbone of modern cybercrime. With DarkCloud’s low cost, ease of access, and ability to bypass traditional defenses, organizations across every sector face heightened risk. Flashpoint’s analysis provides rare visibility into how these tools are built, sold, and deployed – and what security teams must do to defend against them.
Flashpoint can offer:
- Expert interviews with the analysts who dissected DarkCloud
- Insights into the commercialization of infostealers and the threat‑actor economy
- Guidance for CISOs on mitigating credential‑theft‑driven breaches
- Data from Flashpoint’s 2026 threat intelligence research
You can learn more here: Understanding the DarkCloud Infostealer | Flashpoint
Patches Fix Claude Code Flaws, But Broader Repository-Based Risks Remain
Posted in Commentary with tags AI on February 26, 2026 by itnerdResearchers at Check Point have identified multiple vulnerabilities in Anthropic’s development tool Claude Code, allowing malicious repositories to trigger remote code execution and steal active API credentials.
The observed security issues exploited built-in mechanisms including Hooks, Model Context Protocol servers, and environment variables to run arbitrary shell commands and exfiltrate API keys before trust prompts could be confirmed.
Two specific tracked vulnerabilities, CVE-2025-59536 and CVE-2026-21852, were documented and patched by Anthropic following disclosure by security researchers. The first enabled arbitrary code execution via overridden configuration settings that bypass user consent dialogs, while the second could redirect API traffic to malicious endpoints, exposing developers’ Anthropic API keys in plaintext.
All reported flaws have been remedied in subsequent Claude Code updates prior to public advisory publication.
According to researchers, even after the specific vulnerabilities were fixed, the underlying risk does not disappear. The issues exposed how project configuration files can directly shape execution behavior inside AI-assisted development tools, and a malicious repository can still act as a delivery mechanism if safeguards are insufficient, which expands the threat model beyond the individual CVEs that were addressed.
As a result, applying patches resolves the documented flaws but does not fully remove the broader exposure created when AI tooling automatically interprets and acts on repository-level settings.
Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs:
“These CVEs are real and Anthropic was right to patch them. The broader issue is not unique to Claude Code. The AI development tool industry as a whole is prioritizing enablement over security, and these vulnerabilities are a symptom of that design philosophy, not an isolated product failure.
“In the case of Claude Code, hooks ran shell commands before the developer even saw the trust dialog. The security control existed. It just executed after the damage was already done. AI agents are deployed with broad permissions by default because restricting them reduces productivity. That is the same tradeoff the industry made with admin accounts two decades ago, and it took years of breaches to correct. The principle of least privilege does not stop applying because the user is an AI model instead of a human. Agents should be treated as untrusted by default, with strict zero trust boundaries between the agent and any command surface, credential store, or system resource it touches.
“This is not a new class of attack surface. Malicious Makefiles, poisoned scripts, and git hooks have compromised developers for years. What AI tools change is the scope of what runs once triggered. The attack surface is not new. The blast radius is.
“AI development tools are going to become more autonomous, not less. The industry is building the capability first and retrofitting the security later. That pattern has never aged well in software, and it is unlikely to age any better with AI.”
I am aware of a large number of developers who are using tools like Claude Code to speed up the coding pf
Leave a comment »