Anthropic scrambles to contain leak of proprietary Claude AI agent code

Posted in Commentary with tags on April 2, 2026 by itnerd

Anthropic is working to contain the fallout after accidentally exposing internal source code for its Claude AI coding agent. A human error during a software update made proprietary files publicly accessible, and the exposure was quickly discovered by security researcher Chaofan Shou, who posted his findings to X.

The new version of its Claude Code software package unintentionally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code, including the tools, techniques, and internal instructions used to guide the behavior of its AI agent. This included operational components of the system and internal frameworks used to control how the AI performs tasks.

Anthropic issued thousands of takedown requests to remove the code from public repositories.

Anthropic said it is implementing changes to prevent similar issues while continuing efforts to remove the leaked materials from circulation.

Michael Bell, Founder & CEO of Suzu Labs, had this comment:

   “Anthropic shipped a 60MB source map inside their npm package. Every line of Claude Code’s source, all 512,000 of them, publicly available. For the second time. The first leak was February 2025 and the root cause was never fixed.

   “We pulled the codebase apart. The headline findings are real but the details are worse. Undercover Mode instructs Claude to disguise itself as a human developer when contributing to open source: “Do not blow your cover.” There is no force-off option. Frustration tracking runs a regex on every user input and silently sends your emotional state to Anthropic’s analytics pipeline without notification or consent. That emotional classification also feeds a system that can prompt users to share their full session transcript with Anthropic, controlled by remote feature flags that Anthropic can activate at any time.

   “The finding that matters most for government and defense: the default telemetry collects device IDs, session data, email, org UUID, and process tree information on startup before the user types anything. Environment flags can escalate collection to include full prompts, file contents, bash command output, system prompts, and entire conversation transcripts sent to commercial endpoints. The code confirms FedRAMP OAuth paths to claude.fedstart.com, meaning government deployments share the same codebase. Whether hardening was applied before those deployments is unknown, but the telemetry infrastructure is baked into the foundation. The Pentagon designated Anthropic a “supply chain risk” in March. This is what that risk looks like in code.

   “The engineers documented their own attack surfaces in comments. Prompt-injected models can exfiltrate secrets via GitHub CLI URL paths. Leaked GitHub Actions tokens enable “repo takeover” and “supply-chain pivot.” Bash parsing ambiguity allows commands to execute while hidden from security validators. They built mitigations, but the comments confirm the attack surfaces exist.

   “The AI safety company with a $380 billion IPO target acquired Bun, whose known source-map-in-production bug was filed publicly and left open while the product shipped to millions of developers. Their operational security posture is a .npmignore file that nobody checked the second time around.”
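
For teams worried about making the same packaging mistake, the check is simple enough to automate. Here is a minimal, hypothetical pre-publish gate in Python (the file names are invented, and this has nothing to do with Anthropic's actual build) that scans a package staging directory for source-map artifacts before anything ships:

```python
import os

def find_source_maps(package_dir):
    """Return paths (relative to package_dir) of standalone source-map
    files, which should never ship in a published npm package."""
    offenders = []
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            if name.endswith(".map"):
                path = os.path.join(root, name)
                offenders.append(os.path.relpath(path, package_dir))
    return sorted(offenders)

def find_inline_maps(package_dir):
    """Flag JS bundles that embed a base64 sourceMappingURL comment,
    which leaks the original source just as badly as a .map file."""
    flagged = []
    for root, _dirs, files in os.walk(package_dir):
        for name in files:
            if not name.endswith((".js", ".cjs", ".mjs")):
                continue
            path = os.path.join(root, name)
            with open(path, "r", encoding="utf-8", errors="ignore") as f:
                if "sourceMappingURL=data:" in f.read():
                    flagged.append(os.path.relpath(path, package_dir))
    return sorted(flagged)
```

A CI step that fails the build when either function returns anything would have caught this leak both times; npm's own `files` allowlist in package.json is the more robust fix, since it only ships what is explicitly listed.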

Jacob Krell, Senior Director of Secure AI Solutions & Cybersecurity at Suzu Labs, had this to say:

   “The model is the engine. What Anthropic accidentally published is the machine built around it.

   “Anthropic has been here before. This is the second time Claude Code’s source has leaked through the same vector, a source map file left in the npm package. The first was in February 2025. Thirteen months later, the same packaging mistake exposed a far more complex system, days after the accidental exposure of details about an unreleased model codenamed Mythos.

   “The significance of this leak is in what the code reveals about AI agent architecture. The leak exposed approximately 512,000 lines of TypeScript across roughly 1,900 source files. Developers and researchers who have analyzed the source have since documented the scale of what Anthropic built around the model. The code contains what analysts describe as 44 feature flags for unreleased capabilities, approximately 40 permission gated tools, a multi agent coordination system, a persistent autonomous daemon mode, a layered memory architecture, defenses against competitor model distillation, and granular attribution tracking for AI versus human code contributions. The leaked code strongly suggests that the bulk of Claude Code’s production capability comes from orchestration, tooling, memory, and permission layers built around the model.

   “The multi agent coordinator mode, as documented in the leaked source, illustrates where the engineering complexity lives. The code describes a system where Claude Code operates not as a single model session but as a supervisor managing a fleet of worker agents executing tasks in parallel. In the leaked architecture, the coordinator does not directly edit files, run commands, or read code. All implementation goes through workers. Verification is handled by what the code describes as a separate adversarial agent that must confirm the output works before the task can be marked complete. In effect, this is zero trust architecture applied to AI agents, with the orchestration system enforcing verification independently of the model.

   “The leaked code also references an autonomous daemon mode, internally called KAIROS. The source describes a persistent agent that watches the developer’s project and proactively acts without waiting for user input. It uses a tick based lifecycle with periodic prompts, and the code indicates behavior that adjusts based on whether the developer’s terminal is active. The source also references memory consolidation during idle periods, converting observations into structured facts. These features represent event driven architecture, state management, and context engineering built entirely in the orchestration layer.

   “The code also contains what analysts describe as a competitive defense embedded directly in the orchestration layer. The system references injecting artificial tool definitions into certain API responses, apparently designed to degrade the performance of any competitor model trained on Claude’s outputs. That defense lives in the scaffolding. It tells you where Anthropic believes their competitive advantage sits.

   “The depth of interlocking systems documented in the leaked code is what stands out. The coordinator depends on the memory system, the memory system depends on the tool layer, the tool layer depends on the permission framework. These systems are deeply interdependent, and building them to work in concert at production quality is the hard engineering problem. The public conversation about AI capabilities focuses almost entirely on which model is smarter. What this leak suggests is that the model generates the next token, and everything around it is what turns that reasoning into reliable, operational capability.

   “This leak also serves as a proof of concept for the rest of the industry. The engineering gap between a frontier research lab and a commercial competitor appears narrower than many assumed. The architectural patterns documented in the leaked source are well structured and reproducible in principle. A competent engineering team can study the coordination strategies, memory approaches, and tool integration designs and adapt the approach using any available foundation model. The model layer is swappable. The orchestration patterns are the transferable knowledge. What Anthropic built behind closed doors is now visible, and for anyone questioning whether a smaller team could build a credible AI coding agent, the architectural proof of concept is now public.

   “The knowledge transfer effect is significant. Developers who were building AI coding tools through trial and error now have a detailed reference implementation from a team backed by billions in research and development. The architectural decisions, trade-offs, prompt engineering techniques, and multi agent coordination strategies are all visible. The effect extends beyond direct competitors. It raises the floor for every developer building with AI. The gap between what a frontier lab understood about AI agent architecture and what the broader developer community understood has been enormous. That gap collapsed overnight.

   “The model is increasingly a commodity. Multiple frontier models are available from multiple providers, and the performance gap between them continues to narrow. The orchestration system built around the model is the competitive frontier, and Anthropic just published the blueprint.”
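
The coordinator/worker/verifier split Krell describes can be sketched in a few lines. This is a toy illustration of the pattern, not Anthropic's code; the worker and verifier functions here are stand-ins for what would really be model calls and test runs:

```python
def worker(task):
    """Hypothetical worker: produces a candidate result for a task.
    In a real agent system this would be a model session with tool access."""
    return {"task": task, "result": f"patch for {task}"}

def adversarial_verifier(output):
    """Hypothetical independent verifier: confirms the output works.
    In a real system this would run tests or static analysis."""
    return output["result"].startswith("patch for ")

def coordinator(tasks, max_retries=3):
    """Supervisor that never edits files or runs commands itself.
    It only dispatches work, and a task counts as complete only
    when the separate verifier signs off."""
    completed = []
    for task in tasks:
        for _ in range(max_retries):
            output = worker(task)
            if adversarial_verifier(output):
                completed.append(output)
                break
        else:
            raise RuntimeError(f"task {task!r} failed verification")
    return completed
```

The point of the pattern is that completion is decided by an independent check rather than by the agent that did the work, which is the zero-trust property the leaked architecture reportedly enforces.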

Vishal Agarwal, CTO of Averlon, adds this:

   “The deeper risk here isn’t what was exposed, it’s what becomes possible. When AI coding agent internals are public, attackers can study how those agents interpret context, follow instructions, and make decisions.

   “That makes it easier to craft inputs or artifacts that appear legitimate to developers but influence how the agent behaves: modifying code, introducing insecure changes, or interacting with downstream systems. This expands the attack surface beyond the model itself into developer workflows, CI/CD pipelines, and the systems those pipelines connect to.”

This is embarrassing for Anthropic, but I honestly am not shocked by it. They clearly need to tighten things up, or this will keep happening. Which, of course, is bad for them.

AI supply chain attack exposes 4TB of sensitive data

Posted in Commentary with tags on April 2, 2026 by itnerd

Mercor has disclosed it was impacted by a supply chain attack involving LiteLLM, after attackers used a compromised maintainer account to publish malicious PyPI packages that were available for roughly 40 minutes and likely downloaded by thousands of organizations. The incident, tied to a broader campaign involving a compromised Trivy dependency in CI/CD security workflows, is now under investigation as the Lapsus$ extortion group claims to have stolen over 4TB of data, including candidate profiles, credentials, and proprietary information.

Here’s some commentary from Ken Johnson, CTO of DryRun Security:

“What’s notable here isn’t just the LiteLLM compromise, it’s the pattern. We’re seeing the same playbook show up across groups like Lapsus$ and TeamPCP. Start with a trusted tool, pivot into CI/CD, then ride that access into cloud and AI infrastructure. This is becoming repeatable.

The bigger shift is that this isn’t traditional SCA risk. This isn’t a CVE sitting in a dependency. This is active malware in the supply chain, designed to spread, harvest credentials, and exfiltrate data as it moves.

Once attackers land in the pipeline, they’re inside your build and deployment process. At that point, it’s not about exploiting a bug, it’s about abusing trust to scale across environments.

We’ve moved toward a world where attackers don’t need new techniques, they just reuse what already works across the same shared tooling and AI stack.”
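
One concrete mitigation for this class of attack is hash pinning: record the digest of every dependency artifact before you trust it, and refuse to install anything that doesn't match, even if it comes from the legitimate package index. A minimal sketch (the package name is illustrative, and the pinned digest is just the SHA-256 of an empty payload so the demo is self-checking):

```python
import hashlib

# Hypothetical lockfile entry, as a hash-pinning tool would record it.
PINNED = {
    "litellm-1.0.0.tar.gz":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Refuse any artifact whose digest does not match the pinned hash.
    A hijacked maintainer account can publish new malicious bytes, but
    it cannot make them hash to a value recorded before the compromise."""
    expected = PINNED.get(name)
    if expected is None:
        raise ValueError(f"no pinned hash for {name}")
    if hashlib.sha256(data).hexdigest() != expected:
        raise ValueError(f"hash mismatch for {name}: refusing to install")
    return True
```

pip supports this natively via `--require-hashes` with `--hash=sha256:...` entries in a requirements file, which would have rejected the malicious packages published during that 40-minute window.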

Supply chain attacks are real. Organizations need to do everything possible to ensure that everything and everyone they interact with is as secure as possible. Otherwise, this is what you will get, 100% of the time.

CloudBees Smart Tests Brings Control to the Surge of AI-Generated Code Flooding CI Pipelines

Posted in Commentary with tags on April 2, 2026 by itnerd

CloudBees, one of the world’s leading software development solution providers, today announced that CloudBees Smart Tests, its award-winning AI-driven test intelligence solution for continuous integration and continuous delivery (CI/CD), is now generally available for all customers.

As AI tools dramatically increase code output, the bottleneck in modern software delivery is shifting from writing code to validating it. With roughly 41% of all code now AI‑generated and more than 80% of developers using AI tools daily, CI pipelines are under growing pressure from a surge of pull requests that expand regression suites and slow feedback loops.

CloudBees Smart Tests sets a new standard for controlling AI-generated code. By ensuring the right tests run for each code change, it empowers developers to maintain velocity without sacrificing reliability or control.

Early enterprise deployments demonstrate measurable impact: 

  • 30% faster test execution: Reduced from 54 minutes (69 test cases) to 4 minutes for 18 parallelized tests, with further gains expected when applied across the full pipeline. 
  • Automated test failure analysis: Failures are now automatically segmented, replacing manual triage and speeding identification of unstable tests.
  • 40% better infrastructure utilization: Reduced infrastructure from 10 executors across 2 VMs to 4 executors on 1 VM for the same workload. 

To address these challenges, early-adopter teams turned to CloudBees Smart Tests to streamline release testing, accelerate feedback loops, and reduce cloud spend.

Built for Modern Software Delivery 

CloudBees Smart Tests applies machine learning–based Predictive Test Selection and failure pattern analysis. Key advantages include: 

  • Accelerated testing: Runs only the tests most relevant to each code change, finding failures 40-80% faster.
  • Controlled CI costs: Reduces unnecessary execution that drives CI cost and delays.
  • Reduced cognitive load: Groups failures by root cause to identify what failures to prioritize.
  • Streamlined dev-test process: Identifies flaky, reliable, and long-running tests to speed up dev-test interaction, giving both engineers and leaders a shared view of test performance.
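
Stripped of the machine learning, test selection reduces to intersecting a change set with per-test coverage data. Here is a toy version of the idea (CloudBees' Predictive Test Selection uses richer ML signals; this static coverage map is purely illustrative):

```python
def select_tests(changed_files, coverage_map):
    """Pick only the tests whose recorded coverage touches a changed file.

    coverage_map: test name -> set of source files it exercises."""
    changed = set(changed_files)
    selected = {t for t, files in coverage_map.items() if files & changed}
    # Safety net: always run tests with no recorded coverage, since the
    # map can be stale or the test may be brand new.
    unknown = {t for t, files in coverage_map.items() if not files}
    return sorted(selected | unknown)
```

On a change set touching only `auth.py`, a suite with hundreds of unrelated tests collapses to the handful whose coverage intersects the change, which is where the execution-time and CI-cost savings come from.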

“Longstanding developer roadblocks like large test suites, flaky failures, reruns, manual triage, and a CI bill that grows with every wasted test minute are amplified with the proliferation of vibe coding and AI-generated code,” said Shawn Ahmed, Chief Product Officer at CloudBees. “Beyond providing time and cost savings, CloudBees Smart Tests restores developer confidence. We’re giving teams the ability to ship AI-generated code knowing it’s been properly validated.”

CI-Agnostic to Modernize Without Disruption 

Enterprises rarely operate in a single-CI environment. Multi-team, multi-repository estates often span Jenkins, GitHub Actions, GitLab CI, and other frameworks.

Without requiring costly migrations, Smart Tests integrates seamlessly into existing pipelines and works across heterogeneous environments. This flexibility allows organizations to pilot Smart Tests in a single repository, validate impact end-to-end, and expand based on measurable results.

Availability

CloudBees Smart Tests is available today. Enterprises can request a CI Waste Assessment to evaluate optimization opportunities within their existing CI environment.

To learn more, visit www.cloudbees.com.

Fortra Acquires Zero-Point Security

Posted in Commentary with tags on April 2, 2026 by itnerd

Fortra announced today the acquisition of Zero-Point Security, a specialized cybersecurity training firm based in Warrington, UK. This will expand Fortra’s offensive security education capabilities, bringing additional training expertise in red team operations, adversary emulation, and penetration testing. Zero‑Point Security is widely recognized for its trusted red team operations training and has built a strong reputation delivering its high-demand, self-paced courses to individuals and businesses seeking advanced offensive operations skills.

Zero-Point Security’s well-known courses include Red Team Operations I and II, which meet the high standards to be certified by the Council of Registered Ethical Security Testers (CREST). Successful completion of these programs helps participants achieve Certified Red Team Operator (CRTO) status, an industry-respected credential that validates expertise in offensive security techniques.

Further details and timelines will follow.

Guest Post: The curious and occasionally bizarre quest to replace passwords

Posted in Commentary with tags on April 2, 2026 by itnerd

By Karolis Arbaciauskas, head of product at NordPass

Yet another new authentication method has emerged. A team led by researchers at Rutgers University (USA) has developed a system called “VitalID” based on a newly proposed biometric — tiny vibrations from breathing and heartbeats that resonate through the skull in patterns unique to each person’s bone structure and facial tissues.

This is far from the first attempt to eliminate passwords and the need to remember them. From swallowable microchip pills and electronic tattoos to logging in via the echo of your skull, the tech industry has spent more than a decade searching for the password’s successor.

“Nobody likes passwords. We all have too many of them — about 170 on average, by our count. And we can’t remember them all, so people reuse passwords, and those reused credentials often become a common attack vector. It’s no surprise that there have been and still are many attempts to free us from passwords and remembering them. At NordPass, we’re also developing passwordless authentication. But for now, there is no universally practical way to live without passwords — especially since not all websites and platforms support passkeys yet,” says Karolis Arbaciauskas, head of product at the password manager company NordPass.

Bizarre passwordless experiments

Let’s take a look at the strangest and most interesting authentication methods proposed.

The password pill. In 2013, around the time Apple’s Touch ID launched, Motorola unveiled a striking prototype — a swallowable authentication pill containing a tiny chip powered by stomach acid. The device produced an 18‑bit, ECG‑like signal that effectively turned the user’s body into an authentication token. It never advanced beyond demos, largely because it felt more like surveillance than authentication, and because Touch ID offered a simpler, far less invasive alternative.

Electronic tattoo. At the same 2013 conference, Motorola also showcased a temporary password tattoo — ultra‑thin, flexible circuits that adhered to the skin for on‑body authentication. The demos were unforgettable, but the concept stalled due to practicality, privacy, and adoption hurdles — users had to replace the tattoo weekly, or it stopped working, making it more cumbersome and costly than passwords. Notably, while that authentication concept faded, similar flexible electronics now power consumer products such as adhesive baby thermometers.

Bone-conducted skull signatures. Researchers have repeatedly explored using the way sound travels through the skull as a unique biometric, from early “SkullConduct” work to recent systems like Rutgers’ VitalID. The core idea is simple — your skull’s acoustic response can be as distinctive as a fingerprint. It’s a clever concept, but so far it has remained largely at the prototype stage because it’s impractical to rely on a head‑mounted device every time you log in. However, VitalID may be on the right track by focusing on virtual and augmented reality environments, where users already wear a device on their heads.

Heartbeat recognition (ECG). Devices like the Nymi Band use a person’s unique heart rhythm as a biometric signature. Because no two ECG patterns are identical, the wearer can authenticate simply by being near authorized devices. This is one of the few experimental methods that actually reached the market — but it remains niche, designed for B2B and research scenarios where staff must authenticate to equipment beyond standard computers (it requires both an ECG bracelet and a compatible reader plugged into a machine). For the mass market, it is still too costly and impractical.

Vein pattern mapping. This method uses infrared light to map the unique vein patterns beneath the skin, typically in the palm or fingers. It is already deployed in high‑security environments such as laboratories and data centers, as well as for patient identification and secure access to electronic medical records (e.g., Imprivata PatientSecure). Like ECG bracelets, however, it remains impractical for mass‑market use because it requires specialized sensors or additional hardware on smartphones and computers.

Lip-reading software. Researchers have developed systems that identify users based on the unique way they mouth specific words or phrases. While the technology is now relatively mature, it is used more often to support solutions for people with hearing impairments and for forensic analysis (e.g., extracting speech cues from silent CCTV footage). It could be applied to authentication, but it remains impractical — most users won’t want to mouth passphrases at a computer or phone every time they log in.

Ear shape, heartbeat, gait, and odor. Over the years, various academic teams have tested everything from ear morphology and gait to body odor and body proportions as identity signals. While these traits can be distinctive, they struggle with reliability, sensor availability, and user acceptance, which is why you don’t scan your ear or authenticate by aroma at the office door.

Mainstream biometrics

So far, the search for a password successor has produced few mainstream winners. Only a handful of biometrics — primarily face and fingerprint — have become everyday tools. Passkeys, a phishing‑resistant login method built on on‑device biometrics and supported by technology heavyweights, are progressing in the same direction, but their adoption is slower than expected.

“Fingerprint login became mainstream in 2013 and face scan in 2017, driven primarily by Apple’s introduction of Touch ID and Face ID. These technologies succeeded because they are simple to use, fast, built into phones and laptops, and work offline on the device. Voice recognition as biometric authentication was demoed years ago and was even in use for a time, but it never became common. And now that AI can clone a voice from a few seconds of audio, it’s not reliable. Keystroke dynamics also exist: AI can infer identity from typing patterns, but this technology remains niche. AI can recognize handwriting as well, though that’s more relevant to forensic analysis than authentication,” says Arbaciauskas.

Most likely successor

According to him, passkeys have the potential to become the dominant form of authentication because they build on technology already present in nearly all modern devices and they solve the password problem.

“Passkeys replace passwords with public‑key cryptography. A private key stays on your device, while a website holds the public key. When you sign in, your phone or laptop proves possession of the private key — often unlocked by your fingerprint or face — without revealing anything that can be phished or reused. As a result, passkeys are resistant to phishing, credential stuffing, and brute‑force guessing. Major platforms now support them, and modern password managers include passkey functionality to help organizations and users adopt them,” says Arbaciauskas.
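
The possession-proof at the heart of passkeys can be illustrated with a toy one-time signature scheme built from nothing but hashing, known as a Lamport signature. Real passkeys use ECDSA or Ed25519 under the WebAuthn standard, and unlike this toy, their keys are safe to reuse; but the shape of the protocol is the same: the public key can only verify, and only the holder of the private key can sign a fresh challenge.

```python
import hashlib, secrets

def _h(b):
    return hashlib.sha256(b).digest()

def keygen():
    # Private key: 256 pairs of random secrets, one pair per digest bit.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32))
          for _ in range(256)]
    # Public key: the hash of every secret. Publishing it reveals nothing
    # usable, because the hashes cannot be inverted.
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def sign(sk, message):
    digest = int.from_bytes(_h(message), "big")
    # Reveal exactly one secret from each pair, chosen by the digest bit.
    return [sk[i][(digest >> i) & 1] for i in range(256)]

def verify(pk, message, signature):
    digest = int.from_bytes(_h(message), "big")
    return all(_h(signature[i]) == pk[i][(digest >> i) & 1]
               for i in range(256))
```

A server holding only `pk` can send a random challenge and check the response, exactly as Arbaciauskas describes: nothing crosses the wire that can be phished and replayed against a different challenge. (Each Lamport key must be used only once, which is why production systems use reusable elliptic-curve keys instead.)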

He adds that even with broad platform support, it will take years for websites, apps, and enterprises to standardize on passkeys. During this transition, we live in a mixed world — some accounts support passkeys, while many still rely on passwords, so we’re using both for now.

“Use passkeys wherever they’re available. Everywhere else, use long, unique, randomly generated passwords stored in a password manager. These are harder to phish or disclose in the heat of the moment because you don’t memorize them. And always enable multi‑factor authentication,” says Arbaciauskas.

If You Are Still Running iOS 18, Check Software Update ASAP

Posted in Commentary with tags on April 1, 2026 by itnerd

If you have an older iPhone or iPad that still runs iOS 18, Apple has just released iOS and iPadOS 18.7.7. The update fixes a vulnerability that is being exploited in the wild, so this should be considered a today problem. Your other option is to upgrade to iOS 26, which is not affected by this vulnerability and protects you from other vulnerabilities as a bonus. But whatever you do, don’t delay: patch your iDevice ASAP.

That ends today’s public service announcement.

Peer Software and Carahsoft Partner to Bring Data Replication and Synchronization Solutions to the Public Sector

Posted in Commentary with tags on April 1, 2026 by itnerd

Peer Software and Carahsoft Technology Corp. today announced a strategic partnership. Under the agreement, Carahsoft will serve as Peer Software’s Master Government Aggregator®, making the company’s flagship Peer Global File Service (PeerGFS) platform available to the Public Sector through Carahsoft and its reseller partners.

Peer Software’s PeerGFS platform provides real-time file replication and synchronization across distributed environments, enabling Government agencies to maintain file consistency, reduce data silos and ensure high availability without relying exclusively on the cloud. With multi-protocol support for SMB and NFS on the same volume, PeerGFS helps agencies manage hybrid environments, support legacy systems and enable seamless collaboration across locations.

Designed to meet the stringent security requirements of the Public Sector, Peer Software’s solutions support compliance, strengthen operational resilience and optimize data accessibility. Peer Software’s capabilities are critical for agencies managing sensitive workloads and geographically dispersed teams.

Peer Software’s solutions and services are available through Carahsoft and its reseller partners. For more information, contact the Carahsoft Team at (703) 871-8585 or PeerSoftware@carahsoft.com. Explore Peer Software’s solutions here.

SIOS Technology to Present at Spring 2026 Industry Events and Host Webinar on Cloud Resilience

Posted in Commentary with tags on April 1, 2026 by itnerd

SIOS Technology Corp. today announced its participation in several industry events this spring, where company experts will share best practices for maintaining uptime for mission-critical applications across cloud, hybrid, and multi-cloud environments. SIOS will also host an educational webinar focused on designing resilient workloads in the cloud.

Webinar: Resilience by Design – Keeping Mission-Critical Workloads Running on AWS

Date: April 9, 2026 @ 12:00 pm ET
Format: Virtual

Register here

SIOS will host the webinar “Resilience by Design: Keeping Mission-Critical Workloads Running on AWS,” which will explore strategies for ensuring application availability in cloud environments. Attendees will learn how to architect resilient infrastructures, address common failure scenarios, and maintain uptime during maintenance or outages.

SQLBits 2026

Date: April 22–25, 2026
Location: Caerleon, Wales, United Kingdom

Register here

SIOS experts will present two technical sessions focused on SQL Server high availability across operating systems and cloud environments. These sessions include:

  • Breaking the Default: SQL Server High Availability on Windows and Linux
    Speaker: Aaron West, senior solutions engineer
    Date/Time: April 23 at 12:20 PM

This session will provide a side-by-side comparison of SQL Server high availability on Windows and Linux, covering clustering architectures, failover processes, maintenance and patching considerations, and the operational impact of each approach.

  • Building Resilient SQL Server HA/DR in a Multi-Cloud World
    Speaker: Dave Bermingham, senior technical evangelist
    Date/Time: April 25 at 12:30 PM

This session will examine how organizations can architect SQL Server high availability and disaster recovery solutions spanning Azure, AWS, and Google Cloud. Attendees will learn how to use technologies such as Always On Availability Groups and Failover Cluster Instances (FCIs) to build resilient multi-cloud deployments.

SQL Saturday Jacksonville 2026

Date: May 2, 2026
Location: Jacksonville, FL

Register here

At Jacksonville’s 18th annual Day of Data conference, Bermingham will present “Building Resilient SQL Server HA/DR in a Multi-Cloud World.”

The session will explore real-world architectures for running SQL Server reliably across multiple cloud providers. Bermingham will share practical guidance for designing high availability and disaster recovery strategies that span Azure, AWS, and Google Cloud, helping organizations reduce risk, avoid vendor lock-in, and meet aggressive recovery objectives.

Red Hat Summit 2026

Date: May 11–14, 2026
Location: Georgia World Congress Center, Atlanta, GA

Register here

SIOS will exhibit at Red Hat Summit, where attendees can learn how organizations are protecting mission-critical Linux applications with SIOS high availability and disaster recovery solutions. At the event, SIOS will showcase SIOS LifeKeeper for Linux, which enables automated failover and continuous application availability for enterprise workloads running on Linux across physical, virtual, and cloud environments.

PASS Summit Europe

Date: June 10-11, 2026

Location: Hilton Frankfurt, Hochstraße 4, 60313 Frankfurt am Main, Germany

Register here

SIOS will participate as a Gold Sponsor at PASS Summit Europe, where attendees can connect with SIOS experts at the company’s exhibit table to learn more about high availability and disaster recovery solutions for SQL Server environments. SIOS will also deliver a conference session.

For more information about SIOS events and high availability solutions, visit https://us.sios.com

The CISO’s Guide: When AI Helps vs. Hurts Security

Posted in Commentary with tags on April 1, 2026 by itnerd

Dubai-based Secure.com has published a concise analysis of both sides of the coin in “The CISO’s Guide: When AI Helps vs. Hurts Security.”

Research reveals that 76% of CISOs expect a material cyberattack in the next 12 months, and most report that their organizations are already using AI in some form.

The Guide examines key issues including:

  • Where AI Actually Delivers for the SOC: AI doesn’t think, it predicts, and every model is no better than the data it was trained on.
  • Where and How AI Can Quietly Hurt the Organization
  • The Four Questions to Ask Before Deploying Any AI Security Tool: Every AI system makes mistakes. The question is whether mistakes are recoverable.
  • Building a Security Program Where AI and Humans Work Together: Gall’s Law applies.
  • Shadow AI Prevention Measures: Shadow AI is a growing internal risk that can expose sensitive data without the user realizing it.
  • Metrics to measure deployment success.

The question is no longer “should we use it?” It’s “are we using it in the right places?” The CISO’s Guide delivers a clear, honest answer, and the full analysis is linked below.

You can read the analysis here: The CISO’s Guide: When AI Helps vs. Hurts Security

CDW Canada’s 2026 Cybersecurity Study reveals an 80% jump in cyberattacks for Canadian enterprise

Posted in Commentary with tags on April 1, 2026 by itnerd

Today, CDW Canada released data from its annual Canadian Cybersecurity Study, Navigating Ransomware, Modern Architectures and the Maturity Paradox. 

Key findings from the study include: 

  • Canadian companies are being targeted by cyberattacks at a rate not seen before. Enterprise organizations saw an 80 percent increase in cyberattacks in 2025 due to the use of AI in cyberattacks and the larger financial reward potential.
  • Enterprise cloud infection rates hit a record high in 2026, jumping from 41 percent to 53 percent year over year, the highest level recorded since CDW Canada started this study.
  • Most organizations assume their cloud environments are secure. The study suggests that assumption is creating one of the biggest vulnerabilities in Canadian cybersecurity right now.
  • Security spending reached a five-year high, with 20% of IT budgets now dedicated to security; however, foundational weaknesses in people and processes create a “security maturity paradox”: organizations appear advanced but remain open to attack.
  • AI is creating new security pressures on two fronts. Attackers are using it to be more effective. And organizations adopting AI internally need to make sure they are doing so in a way that does not create new vulnerabilities.
  • The ripple effects go beyond the organization itself. When a major company is hit, the impact is felt by employees, customers and the communities that depend on those services.

There are many more findings in the press release linked here. The full report can be accessed here.