Hardcoded API keys expose Google Gemini AI via apps with 500M+ installs: CloudSEK

Posted in Commentary with tags on April 7, 2026 by itnerd

CloudSEK has published research showing that 22 popular Android applications, collectively installed on more than 500 million devices, contain hardcoded Google API keys that now provide full, unauthorized access to Google’s Gemini artificial intelligence platform.

The report, released today by CloudSEK’s BeVigil security search engine, reveals a structural flaw at the crossroads of decade-old developer practices and Google’s rapidly expanding AI infrastructure. (The full report is linked at the end of this piece.)

Background: A Decade-Old Assumption, Quietly Broken

For more than a decade, Google told developers that API keys in the AIza… format were safe to embed in public-facing applications. They were treated as public identifiers, not secrets.

That changed with Gemini. When a developer enables the Gemini API on a Google Cloud project, every existing API key on that project silently inherits access to Gemini endpoints, with no warning, no notification, and no opt-in prompt. 

Developers who embedded Maps or Firebase keys years ago, following Google’s own documentation, now unknowingly hold live credentials to one of the world’s most powerful AI systems.

BeVigil scanned the top 10,000 Android apps by install count and confirmed 32 such live keys across 22 applications.

The Affected Apps: Household Names, Global Reach

The 22 vulnerable applications span e-commerce, travel, finance, education, news, and productivity. They include:

  • OYO Hotel Booking App (100M+ installs)
  • Google Pay for Business (50M+ installs)
  • Taobao (50M+ installs)
  • apna Job Search App (50M+ installs)
  • ELSA Speak: AI English Learning (10M+ installs) – confirmed data exposure
  • The Hindu: India and World News (10M+ installs)
  • Shutterfly: Prints, Cards and Gifts (10M+ installs)
  • JioSphere Web Browser (10M+ installs)
  • Muslim: Ramadan 2026, Athan (10M+ installs)
  • 30 Day Fitness Challenge, Krishify, ISS Live Now, and 10 others
     

CONFIRMED DATA EXPOSURE: Using the key found in ELSA Speak’s publicly downloadable app, CloudSEK researchers queried Google’s Gemini Files API and received a live response listing uploaded audio files. The files were likely speech recordings submitted by users for AI-powered pronunciation coaching.

What an Attacker Can Do With a Single Exposed Key

Any person who decompiles a vulnerable app and extracts its hardcoded key can:

  • Access and download private user files, including documents, audio, and images, stored in the Gemini Files API
  • Make unlimited Gemini API calls, potentially generating thousands of dollars in charges on the developer’s Google Cloud account
  • Exhaust the organization’s API quotas, knocking out AI-powered features for real users
  • Read cached AI context windows, which may contain sensitive prompts and internal data
  • Continue exploiting the key across multiple app update cycles, as hardcoded keys often survive app versioning
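
For developers worried their own builds are on this list, the first check is simple: decompile the APK (or inspect the source tree) and search for the documented `AIza` key prefix. Below is a minimal, illustrative Python sketch; the regex reflects the publicly documented Google API key format (`AIza` plus 35 URL-safe characters), and the scanner is my own example, not CloudSEK’s BeVigil tooling.

```python
import re
from pathlib import Path

# Google API keys share a documented public format: "AIza" followed by
# 35 characters from [0-9A-Za-z_-]. This pattern is illustrative only.
AIZA_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_hardcoded_keys(root: str) -> dict[str, set[str]]:
    """Scan a decompiled app directory for candidate hardcoded API keys.

    Returns a mapping of file path -> set of key-shaped strings found in it.
    """
    hits: dict[str, set[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        keys = set(AIZA_KEY_RE.findall(text))
        if keys:
            hits[str(path)] = keys
    return hits
```

Any hit should then be checked in the Google Cloud console: restrict the key to the specific APIs it actually needs, so it cannot silently inherit Gemini access.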
     

Real Losses: Three Cases of Gemini API Key Abuse

The following highlights three publicly reported cases where stolen or exposed Google API keys led to severe financial harm:

Case 1: $15,400 overnight. A solo developer’s startup nearly collapsed after an attacker used his exposed key to flood Gemini with inference requests. The developer revoked the key within 10 minutes of a $40 billing alert. Due to a 30-hour reporting lag in Google Cloud’s billing system, the damage had already reached $15,400 by the time the dashboard updated.

Case 2: $128,000 and a company facing bankruptcy. A Japanese company using the Gemini API for internal tools saw approximately 20.36 million yen (around $128,000) in unauthorized charges accumulate after its key was compromised, even though firewall-level IP restrictions were in place. Google initially denied an adjustment request.

Case 3: $82,314 in 48 hours, a 455-times spike. A three-person development team in Mexico with a typical monthly cloud spend of $180 had their key stolen between February 11 and 12, 2025. Within 48 hours, attackers generated $82,314 in Gemini charges. Google’s representative initially held the company liable under the platform’s Shared Responsibility Model, citing an amount that exceeded the company’s total bank balance.

Full Report:  https://www.cloudsek.com/blog/hardcoded-google-api-keys-in-top-android-apps-now-expose-gemini-ai 

Finite State Appoints AI Security Marketing Veteran Ann Miller as Vice President of Marketing

Posted in Commentary with tags on April 7, 2026 by itnerd

Finite State, a leader in product security and software supply chain risk management, today announced the appointment of Ann Miller as Vice President of Marketing. Miller brings more than 15 years of experience scaling high-growth technology companies, with deep expertise in cybersecurity and AI-driven platforms, and is known for turning emerging technologies into market-defining categories.

Miller joins Finite State at a pivotal moment as enterprises face increasing pressure to secure software embedded across critical infrastructure, connected devices, and regulated environments. Her appointment underscores the company’s commitment to defining the future of product security through data, automation, and AI.

Prior to joining Finite State, Miller led marketing at Horizon3.ai, where she helped scale the company from early-stage to thousands of customers, driving rapid market adoption. During her tenure, Horizon3.ai was recognized as the #1 fastest-growing cybersecurity company on the 2025 Inc. 5000 list and established leadership in autonomous security testing. Earlier in her career, she held strategic roles at Cylance, a pioneer in AI-driven endpoint security, and iboss, a leader in cloud security.

Miller will lead all aspects of marketing, including branding, demand generation, product marketing, and go-to-market strategy.

Her appointment is the latest expansion of the Finite State executive team, following the February 2026 appointment of Sharon Hagi as Chief Security Officer and the January 2026 appointment of Chris Overton as Executive Vice President of Engineering.

Hagi brings more than 30 years of experience building and operating security programs across semiconductors, IoT, embedded systems, AI-enabled platforms, and cloud environments. Leading Finite State’s Security and Services organization, Hagi ensures execution, customer outcomes, and operational excellence.

Overton brings more than 20 years of engineering leadership experience. He drives Finite State’s engineering innovation at a critical stage of the company’s growth, as device manufacturers face increasing pressure to ship faster while meeting requirements such as the EU Cyber Resilience Act and other emerging security mandates.

Clarvos Introduces Agentic Workflow Platform

Posted in Commentary with tags on April 7, 2026 by itnerd

Clarvos today announced the early access launch of its agentic marketing workflow platform designed to simplify how growing small and mid-sized businesses (SMBs) plan, create, and run marketing campaigns. The platform brings together audience discovery, creative generation, and campaign execution into a single system, helping businesses maintain relevance and move from idea to live campaign in minutes.

Small and mid-sized businesses today face increasing pressure to grow, but many struggle to find new customers, understand what those customers value, and consistently produce marketing that performs. Managing campaigns across multiple platforms only adds to the challenge, making it difficult to keep up with the coordination required to plan and launch campaigns.

Research from the Ehrenberg-Bass Institute for Marketing Science shows that effective marketing depends on reaching new customers, understanding what they value, and aligning creative, media, and messaging accordingly. However, the growing number of platforms and data sources has made this process increasingly complex and time-consuming, especially for solo marketers and owner-operators.

The Clarvos Agentic Workflow coordinates campaign strategy, creative generation, and activation across Google, Meta, and TikTok, using AI to simulate customer response, compare campaign options, and guide setup before launch while keeping teams in full control of final decisions. The result is a workflow that cuts campaign launch time from weeks to minutes, reduces operational friction, and lowers the cost of managing multiple marketing tools by up to roughly 90% compared to typical multi-platform workflows.

A Unified Campaign Workflow for Growing Businesses

The Clarvos platform is built around an agentic workflow, meaning AI agents coordinate multi-step marketing tasks across audience discovery, planning, creative generation, budgeting, and campaign setup while keeping humans in control of final decisions. Instead of using separate tools for research, creative, media, and reporting, the system manages the full workflow in one place, reducing the need for manual handoffs between platforms and teams.

At launch, the platform enables businesses to:

  • Discover potential customer segments using AI-generated audience modeling
  • Generate and manage ad creatives using AI and existing brand assets, with built-in approval workflows
  • Simulate customer response to messaging and creative before campaigns go live
  • Develop campaign plans and budget allocations
  • Launch campaigns across major platforms, including Google, Meta, TikTok and other channels without switching tools
  • Coordinate approvals and campaign setup from a single dashboard

Because the workflow begins with planning and insight, the platform can support a wide range of industries where small teams need to manage growth with limited resources, including retail, CPG, automotive, restaurants, home services, and local businesses. The platform can also support broader marketing decisions, including content and organic strategy, by helping teams understand which audiences, messages, and creative directions are worth pursuing before campaigns go live.

By consolidating the core steps of campaign planning and execution, Clarvos reduces workflow friction, shortens planning cycles, and helps teams move from concept to activation more quickly compared to traditional multi-tool processes. Early internal testing and pilot use have shown meaningful reductions in the time required to prepare and launch campaigns.

Availability

The Clarvos Agentic Workflow is available in early access starting today, April 7, 2026, with broader availability planned later this year. Looking ahead, Clarvos plans to expand the platform throughout 2026 with additional capabilities, including expanded campaign orchestration, multi-user collaboration, deeper reporting dashboards, and tools designed to help growing businesses manage marketing across channels with greater visibility and control.

Fortinet issues emergency weekend patch for actively exploited FortiClient EMS zero-day 

Posted in Commentary with tags on April 7, 2026 by itnerd

Over the weekend, Fortinet released an emergency security update for a critical FortiClient Enterprise Management Server (EMS) vulnerability (CVSS 9.1), after confirming it is being actively exploited in the wild.

The flaw, CVE-2026-35616, is a pre-authentication access control issue that enables attackers to bypass authentication protections and gain elevated privileges on affected systems to execute code or commands via crafted requests. 

The vulnerability impacts FortiClient EMS versions 7.4.5 and 7.4.6, and internet scans have identified more than 2,000 exposed instances that could be targeted. Exploitation activity was first observed on March 31, 2026, prior to public disclosure, giving attackers an early window to compromise vulnerable systems.

Fortinet issued hotfixes on Saturday and urged immediate patching, noting that the flaw has already been leveraged in attacks. 

Jacob Warner, Director of IT, Xcape, Inc.:

   “A compromised FortiClient EMS allows attackers to push malicious payloads to the entire managed fleet, turning a single exploit into a total enterprise breach. To stop the active exploitation of CVE-2026-35616 and CVE-2026-21643, organizations must immediately apply hotfixes for versions 7.4.5/7.4.6 or upgrade to 7.4.7.

   “The most impactful action is removing EMS interfaces from the public Internet by placing them behind a VPN or Zero Trust gateway. Additionally, teams should audit logs for unauthorized API activity and implement strict network segmentation to isolate management traffic. Relying on a cycle of emergency patches for exposed edge tools is a failing strategy; eliminating the external attack surface for management infrastructure is the only way to break the pattern of constant exploitation.

   “If your management console is still reachable from the public Internet, you are essentially crowdsourcing your admin privileges.”

Sunil Gottumukkala, CEO, Averlon:

   “The running joke in the cybersecurity industry is that the nastiest bugs always show up on Friday evenings or on major holidays, but Fortinet appeared to be doing the right thing here by getting the patch out fast once it confirmed active exploitation. The bigger issue is that attackers keep targeting management infrastructure because it offers high leverage: if you own the control plane, you often own everything behind it. Teams should treat these platforms accordingly, with minimal exposure, emergency patching, continuous monitoring, and clear containment playbooks.”

Lydia Zhang, President & Co-Founder, Ridge Security Technology Inc.:

   “Any vulnerability in a network management platform can lead to large-scale impact, as it often has access to many managed devices. This is why attackers frequently target management platforms.

   “It is recommended to conduct thorough application security testing, including zero-day scenario testing, before releasing any management platform. During development, engineering efforts are often focused on the firewall itself, while the management platform may receive less attention and, as a result, be less hardened.”

Denis Calderone, CTO, Suzu Labs:

   “Fortinet products, EMS specifically, have had some pretty big issues as of late. Admins have just finished patching FortiClient EMS to 7.4.5 to fix last week’s SQL injection and now there is this new zero-day, CVE-2026-35616. This one is a pre-auth API bypass in 7.4.5 and 7.4.6 that was being exploited before Fortinet even knew about it (exploitation started March 31, disclosure was April 4). So that’s now three critical pre-auth vulnerabilities patched in this same product in two years: CVE-2023-48788 patched in March of 2024, CVE-2026-21643 in February, and CVE-2026-35616 this week. At some point, patch and hope you’re done stops looking like a viable strategy.

   “So, is Fortinet doing the right thing by pushing an emergency weekend patch? Yes of course, a Saturday hotfix when you confirm zero-day exploitation is the right response, and it’s better than Fortinet’s history of delayed disclosure. But still, you have to worry about the engineering process when you have two critical flaws like this in back-to-back versions of the same product. The threat actors and researchers are finding these problems, and it would be nice to see the manufacturer chipping into that effort.

   “Unfortunately, we don’t think this is isolated, and we expect the pace of discovery in products like these to accelerate. Products with deep vulnerability histories are giving researchers and attackers a roadmap, and AI-assisted code analysis has gotten very good at finding the same types of bugs. Fortinet, Ivanti, Citrix, the products with the longest track records in the CISA KEV catalog, are going to keep producing new critical CVEs at an increasing rate. Hopefully we’re wrong here, but that’s the trajectory we’re already seeing.

   “Which brings us to the only practical strategy left, which is to stop exposing the management server UIs and APIs to the internet. The EMS admin interface is what’s being targeted here. If it’s reachable, you’re at risk. Restrict access to management networks, put it behind a VPN or conditional access, and monitor for anomalous API activity.

   “You’re always going to be patching Fortinet, but you don’t have to make it easy for attackers to reach the thing you’re patching, and even in a good scenario, you will still end up being exposed for days before announcement and patching even happens, which is just way too long nowadays.”

The fact that I keep seeing Fortinet pop up in my inbox is a sign that I may want to reconsider my use of their products. But in the meantime, it’s once again time to patch all the (Fortinet) things.
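
On the experts’ shared advice to get management interfaces off the public Internet: a quick sanity check is whether your EMS host even answers on its management ports from an outside network. The snippet below is a generic TCP reachability probe of my own, not Fortinet tooling; the ports an EMS deployment listens on vary, so treat any port you test as an assumption and confirm it against your own configuration.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port completes within timeout.

    Run this from a network OUTSIDE your perimeter: a True result for a
    management port means the console is internet-reachable and belongs
    behind a VPN or zero-trust gateway instead.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `is_reachable("ems.example.com", 443)` run from a coffee-shop network; the hostname and port are placeholders for your own deployment’s values. A port being closed is necessary but not sufficient, so follow up with log audits for anomalous API activity as suggested above.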

Leaseweb to Showcase Global Cloud and Infrastructure Solutions at SaaStock USA

Posted in Commentary with tags on April 7, 2026 by itnerd

Leaseweb has announced that it will showcase its full roster of global cloud and infrastructure solutions at the upcoming SaaStock USA show, taking place April 15-16 at the Palmer Events Center in Austin, TX. Attendees – AI & B2B SaaS founders, operators, and investors – visiting Leaseweb Booth B13 will see first-hand how Leaseweb solutions deliver the performance, scalability, and reliability needed to build, launch, and scale applications for differentiation and strategic advantage in today’s competitive market. 

Leaseweb will feature the following solutions during SaaStock USA: 

  • GPU Servers: Unleash the power of AI with our high-performance GPU servers.
  • Dedicated Servers: Experience unparalleled performance and reliability.
  • Multi-CDN: Ensure fast, reliable content delivery across the globe.
  • Public/Private Cloud: Flexible cloud solutions tailored to your needs.
  • Hybrid Cloud: Combine the best of both worlds with our hybrid cloud solutions.
  • Colocation: Secure and scalable colocation services.
  • Object Storage: Efficient and scalable storage solutions.
  • File and Block Storage: Flexible storage solutions for every need.
  • Managed Kubernetes: Simplify container orchestration with our managed Kubernetes services.

Qualified SaaStock USA attendees who schedule and attend a meeting with Leaseweb in Booth B13 will be entered to win a pair of Ray-Ban Meta glasses. To learn more and schedule your meeting, please visit: https://www.leaseweb.com/en/about-us/events/saastock-usa-2026.

Meta pauses work with Mercor after supply chain breach raises risk to AI training data

Posted in Commentary with tags on April 6, 2026 by itnerd

As first reported by Wired on Friday, Meta has paused all work with AI data startup Mercor following a confirmed security breach linked to a supply chain attack involving the LiteLLM open-source project, which impacted thousands of organizations globally.

Mercor, which provides proprietary training data to major AI companies including Meta, OpenAI, and Anthropic, said it was among those affected and has launched an investigation with third-party forensic experts.

The breach raised concerns about potential exposure of sensitive AI training data and internal datasets, which are used to develop and fine-tune large language models. Reports indicate that Mercor’s systems were impacted as part of a broader compromise involving malicious updates to widely used AI tooling, though it remains unclear what specific data was accessed.

Michael Bell, Founder & CEO, Suzu Labs had this comment:

   “The Mercor breach is what happens when the companies building the most valuable AI models in the world outsource the creation of their training data to vendors running on Airtable and shared passwords. A single poisoned open-source package gave attackers VPN credentials, and from there they walked through Mercor’s systems and took 4TB of proprietary datasets, source code, and contractor PII.

   “We’ve been investigating these AI data vendors for months and found the same structural failures at Sama, Teleperformance, Scale AI, and Cognizant: unrotated credentials, info-stealer infections on contractor endpoints, and access controls that don’t exist. The training data behind every major frontier model is sitting inside vendors that wouldn’t pass a basic security audit, and now that data is on an extortion site. This is a national security problem dressed up as a vendor management failure.”

Lydia Zhang, President & Co-Founder, Ridge Security Technology Inc. adds this comment:

   “This incident alerts us that AI training data should be treated as critical infrastructure, subject to stricter security scrutiny and regulation.

   “The breach also underscores the risks of relying directly on open-source projects in enterprise environments. Supply chain attacks, like the compromised LiteLLM library in this case, can introduce vulnerabilities at scale and expose highly sensitive data. 

   “At a minimum, enterprises should adopt thoroughly tested and commercially supported versions of such components, with stronger security guarantees and accountability.”

Noelle Murata, Sr. Security Engineer, Xcape, Inc. provided this comment:

   “Meta’s indefinite suspension of its partnership with Mercor underscores how the AI industry’s rush to outsource training data has effectively liquidated billions in proprietary methodology. By allowing a poisoned version of the LiteLLM gateway (versions 1.82.7 and 1.82.8) to persist in their environment, Mercor gifted attackers 4 TB of data, including the precise “secret sauce” protocols Meta and OpenAI use to tune their models.

   “This was not a sophisticated zero-day; it was a basic supply chain failure where a compromised security scanner (Trivy) was used to poison a niche dependency that nobody bothered to pin. For anyone surprised that an autonomous, interconnected AI stack would eventually expose sensitive data to the internet, the lesson is clear. 

   “If you are not auditing your data vendors for basic dependency hygiene, your IP is already public property. Defenders must immediately scan for litellm_init.pth files, which provide stealthy persistence on every Python startup, and rotate all LLM provider API keys and cloud tokens. Protecting training integrity now requires treating every AI data broker as a high-risk production endpoint and enforcing strict, pinned Software Bill of Materials (SBOM) standards.

   “If your AI supply chain is this leaky, you are not training a model; you are just broadcasting a technical manual to Lapsus$.”

Supply chain vulnerabilities are real. If your organization doesn’t take them seriously, your organization will get pwned. It’s as simple as that. And you can double that if AI is involved.
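
Murata’s advice about `litellm_init.pth` is worth acting on today. A `.pth` file dropped into site-packages is processed by Python’s `site` module at every interpreter startup, and any line beginning with `import` is executed, which is exactly what makes it a quiet persistence mechanism. Here is a minimal sketch of a scan; the filename comes from the quote above, and the helper itself is my own illustration, not an official detection script.

```python
import site
from pathlib import Path

SUSPECT_NAME = "litellm_init.pth"  # filename cited in the reporting above

def find_suspect_pth(dirs=None, name=SUSPECT_NAME):
    """Return paths to the named .pth file found in the given directories.

    If dirs is None, check every site-packages directory the current
    interpreter processes at startup. The `site` module reads .pth files
    on every launch, and lines starting with `import` run arbitrary code,
    hence their value for stealthy persistence.
    """
    if dirs is None:
        dirs = set(site.getsitepackages() + [site.getusersitepackages()])
    found = []
    for d in dirs:
        candidate = Path(d) / name
        if candidate.exists():
            found.append(candidate)
    return found
```

Run this with each Python interpreter and virtual environment on the host, since every environment has its own site-packages; a hit warrants full credential rotation, not just file deletion.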

White House budget proposal would cut $707 million from CISA 

Posted in Commentary with tags on April 6, 2026 by itnerd

The White House’s proposed fiscal 2027 budget includes a $707 million reduction to CISA, building on earlier cuts, which included roughly a third of the agency’s workforce, and further scaling back its overall budget.

The budget outlines a shift in CISA’s focus toward federal network defense and critical infrastructure protection, while proposing cuts to programs related to external engagement, international affairs, and certain information-related initiatives. Previous proposals from the administration have also targeted reductions in staffing and program consolidation.

The White House’s 2026 budget tried to cut about $491 million from CISA’s spending, but Congress eventually only approved a reduction of approximately $135 million.

The new proposal will require approval from Congress, where funding levels and program priorities may be revised as part of the appropriations process. 

Doc McConnell, Head of Policy and Compliance, Finite State serves up this insight:

   “When CISA was created in 2018, it was built on a recognition that cybersecurity is a shared problem that no single organization can solve alone. CISA’s value lies in the connective tissue it creates, early warning of emerging threats, coordinated vulnerability assessment, and remediation, and partnerships with state and local governments and critical infrastructure operators that bolster our national resilience.

    “That mission is more urgent than ever. Nation-state adversaries are actively and strategically exploiting weaknesses in U.S. cyber defenses, and sophisticated threat actors are targeting critical infrastructure with increasing persistence. While manufacturers bear responsibility for the cybersecurity of their products, including proactively identifying and remediating vulnerabilities and managing supply chain risk, those efforts are most effective when backed by a strong government cybersecurity function. Now is the time to strengthen our collective ability to detect and respond to threats, not reduce it.”

Aaron Colclough, VP of Operations, Suzu Labs adds this comment:

   “The FY2027 budget proposal ties CISA to a refocus away from weaponization and waste, which tracks with a lot of this administration’s stated priorities for the term. The examples in the text stay high-level, so it is still unclear what exactly would be cut; nothing maps dollars to line items. That vagueness overlaps with functions or offices that were already reduced, so we’re not in a position to say what is net-new from the wording alone. This looks like the president’s usual high opening bid before Congress settles the real numbers.”

John Carberry, Solution Sleuth, Xcape, Inc.:

   “The proposed $707 million reduction to CISA signals a retreat from the public-private partnership model, effectively ending the agency’s role as a primary intelligence collaborator for the commercial sector. By eliminating the Stakeholder Engagement Division and the Joint Cyber Defense Collaborative (JCDC), the administration is forcing enterprise security teams to manage nation-state threats without a centralized federal clearinghouse. This shift places the entire burden of national collective defense onto individual firms at a time of unprecedented geopolitical volatility.

   “Security leaders must immediately de-risk their dependency on CISA for threat telemetry and sector-specific alerts, instead prioritizing deeper involvement in private Information Sharing and Analysis Centers (ISACs) and direct vendor partnerships. Since CISA will pivot its remaining resources almost exclusively toward federal network defense, organizations should also prepare for more aggressive compliance enforcement on federal contractors rather than collaborative support.

   “It turns out “Shields Up” was a limited-time offer.”

Seemant Sehgal, Founder & CEO, BreachLock had this comment:

    “You don’t cut the fire department and then wonder why buildings burn. CISA isn’t the bureaucratic overhead, for practitioners it’s the lifeline between government intelligence and the private sector running the infrastructure this country depends on. Cutting its budget by $707 million, on top of what’s already been cut, is a gift to every nation-state actor that’s been quietly targeting U.S. critical infrastructure.”

This is a pretty dumb idea from the White House. Though I am not shocked by this as this is how this administration rolls. And I suspect it will not take long for this administration to figure out how dumb this idea is.

Doritos gives gamers a chance to win while wiping their crumb-covered keyboards

Posted in Commentary with tags on April 6, 2026 by itnerd

Anyone who snacks while gaming knows the tradeoff: crumb-covered keyboards, sticky keys, and the occasional missed input at the worst possible moment. Now Doritos is rewarding the mess. 

Now live, Doritos Key Codes turns that gamer friction into part of the experience. Players can visit DoritosKeyCodes.ca, wipe down their keyboard, and enter the resulting “gibberish” for a chance to instantly win prizes. Each entry is randomized, with instant-win rewards seeded throughout the contest, adding an element of surprise with every attempt and a chance to try the new Ultimate Garlic Parmesan chip flavour and other epic gaming-inspired prizes! 

Prizes include:

  • Keyboard
  • Coupon for a free bag of Ultimate Garlic Parmesan Doritos
  • Mouse
  • Headset
  • Monitor
  • Laptop

Fans can enter up to three times a day from now through April 16, with no purchase necessary. 

It Should Not Have Taken 13 Phone Calls And A Month For Bell & Distributel To (Hopefully) Fix My Internet

Posted in Commentary with tags on April 2, 2026 by itnerd

Starting on March 8th, I’ve been having consistent issues with the Internet service provided by Distributel, which is owned by Bell. Basically, my connection would disconnect. It might reconnect on its own 15 minutes later, or it might reconnect only if I power cycled the optical networking terminal, which in layman’s terms converts fibre to ethernet. And this would happen as much as a dozen times a day. Fibre should be ultra reliable, so I knew something was seriously wrong. But as I found out, getting it fixed would be a nightmare.

First, let me address the title. It really did take 13 calls and a month of my life for Bell and Distributel to (hopefully) fix this. Each time I called into Distributel, I was guaranteed to lose at least 45 minutes that I would never get back, because after some brief troubleshooting I would be placed on hold while the tech support person called Bell to look at the line remotely. Then they would rebuild my speed profile and declare the problem fixed. But it was never truly fixed. It might stay up for an hour, or a day or two; one time it stayed up for 11 days. The longer this went on, the more I figured it must be me. And since I do IT for a living, I did this troubleshooting:

  • I got a friend who works with fibre to check the fibre cable that ran from where it enters my condo to where my equipment is. That was fine.
  • I put back the TP-Link hardware that Distributel shipped over when I first got their service to see if that would change anything. It didn’t.
  • I had Ubiquiti swap out my Cloud Gateway Max seeing as I purchased the UI Care extended warranty. That made no difference either.
  • I also swapped ethernet cables and the like.

After doing all of this, I concluded that this was clearly a Bell issue.

What made this worse is that it was also clear that Bell did not want to send out a tech to figure out what was going on. I get that’s expensive and Canadian telcos are loath to do that. But when you can’t figure the issue out over the phone, you should just go ahead and do that. On top of that, when I tried to escalate the issue within Bell, I was met with some of the worst possible customer service I have ever experienced. For example, one tier two Bell tech support person said the problem was my fault because I plugged my hardware into an uninterruptible power supply. Well, that’s a #fail on his part for two reasons. One, Bell themselves recommends that you do that as you can see here. Two, an uninterruptible power supply, or UPS for short, protects your equipment from the following as per this:

  1. Voltage spike or sustained overvoltage
  2. Momentary or sustained reduction in input voltage
  3. Voltage sag
  4. Noise, defined as a high frequency transient or oscillation, usually injected into the line by nearby equipment
  5. Instability of the mains frequency
  6. Harmonic distortion, defined as a departure from the ideal sinusoidal waveform expected on the line

So in short, your equipment is better off when plugged into a UPS. So why would someone from Bell say the opposite? My guess is that it is a way for him to get me off the phone and not actually address the problem as that allows him to close a ticket and improve his metrics. As well as avoid sending out a tech as he likely gets evaluated on that too. I will also note that this individual was extremely rude about it and disconnected the call when I dared to point out that what he was saying was factually incorrect.

This brings me to another point: the dynamics of Distributel versus the dynamics of Bell. While Bell was not helpful, and as per the example above sometimes rude, the staff at Distributel were friendly and generally pleasant to deal with. Though I will say that a couple of them did not follow through on promises that they made. For example, one of them promised to have a manager call me when I wanted to escalate the issue on their end. That never happened. Another promised that he would demand that Bell send out a tech. That never happened either; instead, my profile was rebuilt yet again. One of the most important rules of providing customer service is to never say you’re going to do something and then not follow through, as that never ever ends well for the organization that you work for.

On top of that, Distributel employees openly criticized Bell employees. By openly, I mean while I was on the phone with them. A lot of them said the quality of the service they got from Bell has nosedived over the years since Bell started outsourcing everything overseas. Others called Bell employees “not well trained.” This kind of shocked me because Distributel is owned by Bell, and these calls are being recorded. Which means that the potential for someone a few rungs up the ladder finding out should be high. But I am guessing that these Distributel employees either don’t care, or nobody is listening to those recordings and they know that. Whatever the reason, this appears to highlight some serious problems within Bell that will affect customers in a negative way.

Let’s fast forward to call number 13. The person I finally got was able to convince Bell to send a tech to figure out what was going on. His suspicion, based on everything that I told him, was that the optical network terminal was the issue, and that Bell needed to swap it. A day and a half later the tech arrived, and my wife was there to greet him. This tech tested everything from top to bottom, and he was going to leave because, according to him, everything was working. But unfortunately for him, he was dealing with my wife, who is kind of like The Doctor from the British sci-fi series Doctor Who. The Doctor gives you one chance to do the right thing, and if you don’t, The Doctor goes scorched Earth on you. In his case, he failed to grasp that this was an intermittent problem, and she went scorched Earth on him and backed him into the position of swapping the optical network terminal. The fact that, according to her, he said that doing so was going to be an inconvenience, as he would have to go to his truck to get one and might then miss out on another repair order (likely because he was a contractor who is paid by the repair order), did not help his cause. But he did swap the optical network terminal, and he noted that the new one ran substantially cooler than its predecessor. Perhaps the old one was overheating due to some sort of fault? Who knows. As it stands as I type this, I have not had a single disconnect. Not one. If it continues like this for 30 days, I will declare this issue fixed.

But if the Internet continues to be problematic, then my wife and I will switch back to Rogers on a temporary basis. I say temporary because cable Internet would be a serious downgrade from fibre in terms of speed (especially upstream, where speeds can be a quarter of downstream speeds at best) and latency (fibre has a latency of 3ms or less, while cable can be 5 times as high or more, which negatively affects everything from video calls to gaming). But more importantly, it will be temporary because I have begun to champion bringing Beanfield into the building. This is a company that runs fibre Internet that it controls from end to end into condos like ours. During the month that this was going on, I had conversations with the condo board, who, unknown to me, wanted a third option for residents. Apparently they have fielded complaints from residents who go back and forth between Bell and Rogers and don’t feel that they are getting quality telco services from either company. Thus, to the board, my suggestion of going to Beanfield made sense. They’ve already touched base with the company, and a meeting is scheduled for next week to explore how to execute this and what it will take to get it done. Once they’re in the building, my wife and I will be moving to them. And I suspect that others in the building will as well. That might send chills down the spines of Rogers and Bell execs. Or they may not care. I guess we’re about to find out.

One final thing: in the middle of all this, I attempted to reach out to a contact at Bell to tell her of my issue, the fact that I was having problems getting a resolution to said issue, and to ask her to point me to someone who could help, noting that otherwise I was going to go public with this. I didn’t hear from her, so I went public. Now some of you may say that I’m trying to pull rank because I am a public figure. And you’re 100% correct. I do have that option, and I have exercised it at times out of desperation. But the general public doesn’t have that option, which illustrates the state of customer service in the telco industry, where Joe Average in a situation like mine has very few ways to escalate an issue and get a timely resolution. That needs to change, either through telcos making the choice to do better, or through competition forcing it upon them. My condo is doing the latter via Beanfield because we have the ability to do that. You may not be as lucky as we are. If you’re in that position, that really needs to change, and change now.

Anthropic scrambles to contain leak of proprietary Claude AI agent code

Posted in Commentary with tags on April 2, 2026 by itnerd

Anthropic is working to contain the fallout after accidentally exposing internal source code for its Claude AI coding agent. A human error during a software update made proprietary files publicly accessible, which was quickly discovered by a security researcher named Chaofan Shou and posted to X.

The new version of its Claude Code software package unintentionally included a file that exposed nearly 2,000 source code files and more than 512,000 lines of code, including tools, techniques, and internal instructions used to guide the behavior of its AI agent. This included operational components of the system and the internal frameworks used to control how the AI performs tasks.

Anthropic issued thousands of takedown requests to remove the code from public repositories.

Anthropic said it is implementing changes to prevent similar issues while continuing efforts to remove the leaked materials from circulation.
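
The leak vector here was a build artifact (a source map) left inside the published npm package. As an illustration only, and not Anthropic’s actual tooling, a pre-publish check for this class of mistake could be sketched like this: scan the directory that will be packed into the tarball and flag standalone `.map` files, plus any bundle that carries a `sourceMappingURL` reference.

```python
import os
import re

# Matches a sourceMappingURL comment that compilers append to bundles,
# e.g. "//# sourceMappingURL=cli.js.map". The name and scope of this
# check are hypothetical; it is a sketch, not a hardened publish gate.
SOURCEMAP_REF = re.compile(r"sourceMappingURL\s*=\s*(\S+)")

def find_sourcemap_leaks(pkg_dir: str) -> list[tuple[str, str]]:
    """Return (kind, path) pairs for every potential source map leak
    under pkg_dir: 'map-file' for standalone .map files, and
    'map-reference' for JS bundles that point at a source map."""
    findings = []
    for root, _dirs, files in os.walk(pkg_dir):
        for name in files:
            path = os.path.join(root, name)
            if name.endswith(".map"):
                findings.append(("map-file", path))
            elif name.endswith((".js", ".mjs", ".cjs")):
                with open(path, encoding="utf-8", errors="ignore") as f:
                    if SOURCEMAP_REF.search(f.read()):
                        findings.append(("map-reference", path))
    return findings
```

Running a check like this in CI, against the exact file set `npm pack` would include, is the kind of guardrail that would have caught the same mistake both times.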

Michael Bell, Founder & CEO, Suzu Labs had this comment:

   “Anthropic shipped a 60MB source map inside their npm package. Every line of Claude Code’s source, all 512,000 of them, publicly available. For the second time. The first leak was February 2025 and the root cause was never fixed.

   “We pulled the codebase apart. The headline findings are real but the details are worse. Undercover Mode instructs Claude to disguise itself as a human developer when contributing to open source: “Do not blow your cover.” There is no force-off option. Frustration tracking runs a regex on every user input and silently sends your emotional state to Anthropic’s analytics pipeline without notification or consent. That emotional classification also feeds a system that can prompt users to share their full session transcript with Anthropic, controlled by remote feature flags that Anthropic can activate at any time.

   “The finding that matters most for government and defense: the default telemetry collects device IDs, session data, email, org UUID, and process tree information on startup before the user types anything. Environment flags can escalate collection to include full prompts, file contents, bash command output, system prompts, and entire conversation transcripts sent to commercial endpoints. The code confirms FedRAMP OAuth paths to claude.fedstart.com, meaning government deployments share the same codebase. Whether hardening was applied before those deployments is unknown, but the telemetry infrastructure is baked into the foundation. The Pentagon designated Anthropic a “supply chain risk” in March. This is what that risk looks like in code.

   “The engineers documented their own attack surfaces in comments. Prompt-injected models can exfiltrate secrets via GitHub CLI URL paths. Leaked GitHub Actions tokens enable “repo takeover” and “supply-chain pivot.” Bash parsing ambiguity allows commands to execute while hidden from security validators. They built mitigations, but the comments confirm the attack surfaces exist.

   “The AI safety company with a $380 billion IPO target acquired Bun, whose known source-map-in-production bug was filed publicly and left open while the product shipped to millions of developers. Their operational security posture is a .npmignore file that nobody checked the second time around.”

Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs had this to say:

   “The model is the engine. What Anthropic accidentally published is the machine built around it.

   “Anthropic has been here before. This is the second time Claude Code’s source has leaked through the same vector, a source map file left in the npm package. The first was in February 2025. Thirteen months later, the same packaging mistake exposed a far more complex system, days after the accidental exposure of details about an unreleased model codenamed Mythos.

   “The significance of this leak is in what the code reveals about AI agent architecture. The leak exposed approximately 512,000 lines of TypeScript across roughly 1,900 source files. Developers and researchers who have analyzed the source have since documented the scale of what Anthropic built around the model. The code contains what analysts describe as 44 feature flags for unreleased capabilities, approximately 40 permission gated tools, a multi agent coordination system, a persistent autonomous daemon mode, a layered memory architecture, defenses against competitor model distillation, and granular attribution tracking for AI versus human code contributions. The leaked code strongly suggests that the bulk of Claude Code’s production capability comes from orchestration, tooling, memory, and permission layers built around the model.

   “The multi agent coordinator mode, as documented in the leaked source, illustrates where the engineering complexity lives. The code describes a system where Claude Code operates not as a single model session but as a supervisor managing a fleet of worker agents executing tasks in parallel. In the leaked architecture, the coordinator does not directly edit files, run commands, or read code. All implementation goes through workers. Verification is handled by what the code describes as a separate adversarial agent that must confirm the output works before the task can be marked complete. In effect, this is zero trust architecture applied to AI agents, with the orchestration system enforcing verification independently of the model.

   “The leaked code also references an autonomous daemon mode, internally called KAIROS. The source describes a persistent agent that watches the developer’s project and proactively acts without waiting for user input. It uses a tick based lifecycle with periodic prompts, and the code indicates behavior that adjusts based on whether the developer’s terminal is active. The source also references memory consolidation during idle periods, converting observations into structured facts. These features represent event driven architecture, state management, and context engineering built entirely in the orchestration layer.

   “The code also contains what analysts describe as a competitive defense embedded directly in the orchestration layer. The system references injecting artificial tool definitions into certain API responses, apparently designed to degrade the performance of any competitor model trained on Claude’s outputs. That defense lives in the scaffolding. It tells you where Anthropic believes their competitive advantage sits.

   “The depth of interlocking systems documented in the leaked code is what stands out. The coordinator depends on the memory system, the memory system depends on the tool layer, the tool layer depends on the permission framework. These systems are deeply interdependent, and building them to work in concert at production quality is the hard engineering problem. The public conversation about AI capabilities focuses almost entirely on which model is smarter. What this leak suggests is that the model generates the next token, and everything around it is what turns that reasoning into reliable, operational capability.

   “This leak also serves as a proof of concept for the rest of the industry. The engineering gap between a frontier research lab and a commercial competitor appears narrower than many assumed. The architectural patterns documented in the leaked source are well structured and reproducible in principle. A competent engineering team can study the coordination strategies, memory approaches, and tool integration designs and adapt the approach using any available foundation model. The model layer is swappable. The orchestration patterns are the transferable knowledge. What Anthropic built behind closed doors is now visible, and for anyone questioning whether a smaller team could build a credible AI coding agent, the architectural proof of concept is now public.

   “The knowledge transfer effect is significant. Developers who were building AI coding tools through trial and error now have a detailed reference implementation from a team backed by billions in research and development. The architectural decisions, trade-offs, prompt engineering techniques, and multi agent coordination strategies are all visible. The effect extends beyond direct competitors. It raises the floor for every developer building with AI. The gap between what a frontier lab understood about AI agent architecture and what the broader developer community understood has been enormous. That gap collapsed overnight.

   “The model is increasingly a commodity. Multiple frontier models are available from multiple providers, and the performance gap between them continues to narrow. The orchestration system built around the model is the competitive frontier, and Anthropic just published the blueprint.”

Vishal Agarwal, CTO, Averlon adds this:

   “The deeper risk here isn’t what was exposed, it’s what becomes possible. When AI coding agent internals are public, attackers can study how those agents interpret context, follow instructions, and make decisions.

   “That makes it easier to craft inputs or artifacts that appear legitimate to developers but influence how the agent behaves: modifying code, introducing insecure changes, or interacting with downstream systems. This expands the attack surface beyond the model itself into developer workflows, CI/CD pipelines, and the systems those pipelines connect to.”
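
One mitigation that follows from Agarwal’s point is screening untrusted text (READMEs, issue bodies, code comments) before it reaches an agent’s context. The sketch below is a deliberately simple hypothetical heuristic of my own, not anything from the leaked code or any product: it flags lines that read like instructions aimed at the agent rather than at a human.

```python
import re

# Hypothetical phrase list; real defenses would be far more robust
# than keyword matching, which attackers can trivially rephrase around.
SUSPICIOUS = [
    r"ignore (all|any|previous) (prior )?instructions",
    r"disregard (the )?(system|previous) prompt",
    r"you are (now|no longer)",
    r"do not (tell|inform) the user",
    r"exfiltrat\w+",
]
PATTERN = re.compile("|".join(SUSPICIOUS), re.IGNORECASE)

def flag_injection_candidates(text: str) -> list[str]:
    """Return the lines of `text` that match a known injection phrase,
    so they can be reviewed or stripped before an agent sees them."""
    return [line for line in text.splitlines() if PATTERN.search(line)]
```

A filter like this is a speed bump, not a fix; the broader lesson of the leak is that agent inputs from repositories and pipelines have to be treated as hostile by default.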

This is embarrassing for Anthropic, but honestly, I am not shocked by it. They clearly need to tighten things up, or this will keep happening. Which of course is bad for them.