Namastex.ai npm Packages Hit with TeamPCP-Style CanisterWorm Malware

Posted in Commentary with tags on April 22, 2026 by itnerd

Researchers have uncovered malicious Namastex.ai npm packages exhibiting the tradecraft of TeamPCP/LiteLLM-style CanisterWorm malware, including install-time execution, credential theft from developer environments, off-host exfiltration, canister-backed infrastructure, and self-propagation logic intended to compromise additional packages.

More details here: https://socket.dev/blog/namastex-npm-packages-compromised-canisterworm

Dan Moore, Sr. Director CIAM Strategy at cybersecurity company FusionAuth, commented:

“This newest supply chain threat in the npm ecosystem demonstrates that a lot of the time, the issue isn’t an organization’s code, but its credentials. Long-lived, over-permissioned CI/CD tokens are as risky as passwords written on a sticky note. Organizations need more than just credentials protecting their software systems. In order to maintain identity hygiene, organizations should rotate, scope, and continually monitor credentials.”
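The rotate-and-scope advice can be made concrete. Below is a minimal, hypothetical sketch (the token structure, scope strings, and 15-minute TTL are illustrative assumptions, not anything FusionAuth or npm prescribes) showing the core idea: a CI/CD token that carries an explicit expiry and a narrow scope list, checked on every use, instead of a long-lived catch-all secret:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class CIToken:
    """Short-lived, narrowly scoped CI/CD token (illustrative only)."""
    subject: str
    scopes: frozenset        # e.g. {"publish:my-package"}
    expires_at: datetime

    def allows(self, scope: str) -> bool:
        # Reject expired tokens and any action the token wasn't issued for.
        now = datetime.now(timezone.utc)
        return now < self.expires_at and scope in self.scopes

def issue_token(subject: str, scopes: set, ttl_minutes: int = 15) -> CIToken:
    # A short TTL forces regular rotation; a stolen token ages out quickly.
    return CIToken(
        subject=subject,
        scopes=frozenset(scopes),
        expires_at=datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    )

token = issue_token("ci-pipeline", {"publish:my-package"})
print(token.allows("publish:my-package"))    # in scope, within TTL
print(token.allows("publish:other-package")) # out of scope: denied
```

The point of the sketch: a worm that steals this token gets minutes of access to one package, not indefinite access to everything the pipeline can touch.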

AI for coding is great. But you have to be incredibly careful to make sure that the benefit of being able to code more efficiently isn’t overshadowed by having threat actors set up shop by infecting your code.

Lovable access issue exposes project data, credentials in AI-generated coding

Posted in Commentary with tags on April 22, 2026 by itnerd

A security issue involving AI coding platform Lovable allowed users to access other users’ project data, including source code, database credentials, AI chat histories, and customer data, according to reports and user disclosures.

The issue was publicly highlighted after a user demonstrated that a free account could access data across projects created before November 2025.

Lovable initially stated there was no data breach, describing the behavior as expected for public projects, but later acknowledged a backend error that temporarily enabled access to AI chat data. The company updated its visibility and permission settings following the incident and said the issue had been addressed.

The incident involved exposure of project-level data within the platform environment and did not include confirmation of broader system compromise. Reporting indicates the issue remained unresolved for some time after it was disclosed, before changes were implemented.

Ryan McCurdy, VP of Marketing, Liquibase had this to say:

   “This incident is a reminder that the risk in AI-generated development is not just bad code. It is bad control design. When application creation speeds up, permissions, secrets exposure, and database access paths can become part of the attack surface just as quickly. If teams do not put governed change, least-privilege access, and clear separation between public artifacts and sensitive backend context in place, AI can amplify operational risk faster than traditional review processes can catch it.”

John Carberry, Solution Sleuth, Xcape, Inc. adds this comment:

   “The Lovable data exposure incident highlights a catastrophic failure in the fundamental security architecture of AI-powered “vibe coding” platforms. By failing to implement basic ownership validation on API endpoints, a textbook Broken Object Level Authorization (BOLA) flaw, Lovable allowed any user to traverse project IDs and scrape the source code, database credentials, and AI chat histories of others.

   “For security leaders, the primary risk is a silent supply chain compromise: while Lovable claims no “breach” of its own servers, the exposure of third-party secrets like Stripe and Supabase keys means the applications built on the platform are now effectively backdoored.

   “Technically, the crisis was compounded by a February 2026 backend regression that re-opened access to sensitive chats and a response cycle that spent 48 days ignoring a bug bounty report. Organizations must treat AI-generated code with extreme caution, ensuring that “vibe coding” speed doesn’t bypass mandatory secret scanning, environment variable isolation, and the hard-won security logic of the last twenty years.

   “Lovable proved that while AI can write your code, it can’t write your common sense, especially when “public by default” includes your Stripe secret keys.”
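The BOLA flaw Carberry describes comes down to an endpoint that looks up an object by ID without verifying who owns it. Here is a minimal, hypothetical Python sketch (the project store, field names, and functions are invented for illustration and are not Lovable's actual code) contrasting the vulnerable lookup with an ownership-checked one:

```python
# Hypothetical in-memory project store keyed by project ID.
PROJECTS = {
    "proj-1": {"owner": "alice", "source": "...", "secrets": "STRIPE_KEY=sk_live"},
    "proj-2": {"owner": "bob", "source": "...", "secrets": "SUPABASE_KEY=sb"},
}

class Forbidden(Exception):
    pass

def get_project_vulnerable(project_id: str, user: str) -> dict:
    # BOLA: any authenticated user can fetch any project just by
    # iterating IDs -- ownership is never checked.
    return PROJECTS[project_id]

def get_project_fixed(project_id: str, user: str) -> dict:
    # Object-level authorization: validate ownership (or an explicit
    # public/share flag) before returning the record.
    project = PROJECTS[project_id]
    if project["owner"] != user:
        raise Forbidden(f"{user} does not own {project_id}")
    return project
```

A real fix also has to account for deliberately public projects, shared access, and enforcement on every endpoint that touches the object, not just the primary read path.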

Hannah Perez, Director of Marketing, Suzu Labs followed up with this:

   “As we move toward AI-generated software, the ‘shared responsibility model’ is becoming dangerously blurred. Users expected a private sandbox for innovation, but instead found a communal space with paper thin walls.

   “Lovable’s eventual pivot is welcome, but the delay between the initial report and the actual fix suggests that AI startups are currently outpacing their own security protocols, which is as expected for most. In the rush to ‘vibe code,’ fundamental safety is being treated as a post-launch patch rather than a requirement. For this industry to mature, Secure by Default must be the non-negotiable standard for any platform handling sensitive IP and source code.”

Vishal Agarwal, CTO, Averlon provided this comment:

   “It’s one thing to have access to the sauce. It’s another to have access to its recipe. With inadvertent leakage of chat history, attackers gain access to reconnaissance information that can be leveraged to target the organization more precisely.

   “What makes sophisticated attackers dangerous isn’t just their technical capability, it’s their detailed understanding of the target’s systems. Exposing chat history and source code together hands that understanding directly to an attacker.”

This highlights the fact that AI has to be part of your security planning. Otherwise really bad things will happen. And this is a case in point.

Malicious Trading Site Drops “Needle Stealer” to Harvest Browser Data

Posted in Commentary with tags on April 22, 2026 by itnerd

Researchers have uncovered a new attack campaign using a previously documented malware loader to deliver a different threat: Needle Stealer, a data-stealing malware designed to quietly harvest sensitive information from infected devices, including browser data, login sessions, and cryptocurrency wallets. This time, attackers use a website promoting a tool called TradingClaw (tradingclaw[.]pro), which claims to be an AI-powered assistant for TradingView, a legitimate platform used by traders to analyze financial markets. The fake TradingClaw site is not part of TradingView, nor is it related to the legitimate startup tradingclaw[.]chat. Instead, it’s being used here as a lure to trick people into downloading malware.

More details can be found here: https://www.malwarebytes.com/blog/threat-intel/2026/04/malicious-trading-website-drop-malware-that-hands-over-your-browser-to-attackers  

Ensar Seker, CISO at SOCRadar, commented:

“This campaign reflects a growing shift where threat actors weaponize trust in legitimate platforms like TradingView by building highly convincing AI-themed lures around them. The use of “AI trading assistants” is particularly effective because it targets both curiosity and financial motivation, lowering user skepticism. What stands out here is the reuse of a known loader to deploy a different payload, which shows how modular and scalable modern malware operations have become.

More importantly, the focus on harvesting browser sessions and crypto wallets signals that attackers are prioritizing immediate monetization over persistence. Once session tokens are stolen, MFA becomes irrelevant, and accounts can be hijacked in real time. Organizations and individuals need to treat any third-party tool claiming integration with financial platforms as high risk unless it is directly verified.

This is not just malware delivery, it is identity compromise at scale disguised as innovation.”

This is scary, as it represents a big jump in terms of what threat actors can do. Thus you really need to be hyper aware of threats, as they can come from anywhere and pop up in the most unexpected places.

Guest Post: Mythos access by Discord group reveals real danger of AI-powered hacking

Posted in Commentary with tags on April 22, 2026 by itnerd

By Stefanie Schappert

A Discord group’s unauthorized access to Anthropic AI’s powerful Mythos model on Tuesday is doing more than raising questions about the guardrails around powerful AI cybersecurity tools.

It’s exposing a bigger problem for the cybersecurity industry: AI can now find flaws and exploit them so quickly that defenders may be the ones left truly exposed.

A group of AI-fueled Discord info-seekers – one of them linked to a third-party vendor of the AI startup – managed to access the highly gatekept cybersecurity defense system in February, on the same day as its debut.

The breach, carried out with a mixed bag of insider access, web-scouring bots, and some raw ingenuity, is triggering a fresh wave of alarm across an already spooked industry.

Ironically, as the Discord incident was unfolding, the Cloud Security Alliance – in a rapid-response briefing published days after Mythos was unveiled – warned that AI was accelerating vulnerability discovery faster than organizations could keep up, creating the perfect storm for defenders.

By finding thousands of flaws and zero-days across hundreds of software systems, Mythos has effectively shrunk the patch window defenders have relied on for years, from days to just a few hours.

If such capabilities are released in the wild and adopted by hackers, security teams will inevitably be tasked with building an entirely new playbook to help decide how to prioritize and fix what matters – and there’s still no guarantee they can stem the cyber bleeding.

More than 250 security leaders helped shape the briefing, which argues the challenge is no longer just finding flaws, but deciding which ones actually pose real risk – and fixing them before they can be turned into working exploits.

It’s a shift some security experts say the industry is still underestimating. The problem is no longer discovery alone. It is remediation, accountability, and whether defenders can keep up as AI moves from identifying vulnerabilities to showing how they can be exploited in the real world.

The Mythos moment may ultimately be less about a single powerful cybersecurity model and more about what happens in the shrinking window between finding a flaw and weaponizing it.

Anthropic’s answer, for now, is Project Glasswing – a tightly controlled effort to use Mythos to help secure critical software before comparable models become more widely available.

But even that highlights the larger issue at hand: the industry knows what is coming and is still scrambling to build that much-needed playbook in time to defend against larger threats, such as nation-state or ransomware attackers.

If a group of AI nerds could get into Mythos – allegedly without malicious intent – imagine the fallout if the next ones to slide through that door were actual criminals.

ABOUT THE EXPERT

Stefanie Schappert, a senior journalist at Cybernews, is an accomplished writer with an M.S. in cybersecurity, immersed in the security world since 2019. She has a decade-plus of experience in America’s #1 news market, working for Fox News, Gannett, Blaze Media, Verizon Fios1, and NY1 News. With a strong focus on national security, data breaches, trending threats, hacker groups, global issues, and women in tech, she is also a commentator for live panels, podcasts, radio, and TV. She earned the ISC2 Certified in Cybersecurity (CC) certification as part of the initial CC pilot program, has participated in numerous Capture-the-Flag (CTF) competitions, and took 3rd place in Temple University’s International Social Engineering Pen Testing Competition, sponsored by Google. She is a member of the Women’s Society of Cyberjutsu (WSC) and Upsilon Pi Epsilon (UPE), the International Honor Society for Computing and Information Disciplines.

OVHcloud and Alchemy enter strategic relationship 

Posted in Commentary with tags on April 22, 2026 by itnerd

OVHcloud and Alchemy today announced a strategic relationship. Together, the two companies will enable decentralized app and chain developers to benefit from Alchemy’s powerful suite of tools and Supernodes, Alchemy’s blockchain engine, on the secure, decentralized and high-performance foundation of OVHcloud’s cloud infrastructure.

The strategic relationship has already started to have an impact. The performance-price ratio offered by OVHcloud has enabled Alchemy to scale to new regions ahead of schedule, even in highly regulated markets, helping developers around the world to launch decentralized apps and chains faster. The OVHcloud platform seamlessly interconnects with Alchemy’s existing cloud infrastructure, including hyperscale offerings, giving Alchemy a truly multi-cloud environment. 

Earlier this year, Alchemy supported OVHcloud’s blockchain startup accelerator, helping to build an ecosystem where startups, enterprises, and partners co-innovated and worked to deliver the next generation of blockchain services at global scale.

Inside RAMP: What a leaked database reveals about Russia’s ransomware marketplace

Posted in Commentary with tags on April 22, 2026 by itnerd

Comparitech researchers have published an in-depth analysis of RAMP (Russian Anonymous Marketplace), a Russian-language cybercrime forum that operated from late 2021 until being seized by the FBI in January 2026.

Comparitech researchers gained exclusive access to a leaked database from RAMP, with the dump containing user records, forum threads, private messages, IP logs, and admin activity from November 2021 through January 2024.

In the analysis of this dump, the researchers have broken down details regarding the access market, the biggest listings, the affiliate splits, the criminal job market, the top vendors, the top buyers, and more. 

You can read the analysis here: https://www.comparitech.com/news/inside-ramp-what-a-leaked-database-reveals-about-russias-ransomware-marketplace/

National IT Service Providers Day Is Today

Posted in Commentary on April 22, 2026 by itnerd

With National IT Service Providers Day being today, I wanted to share a perspective that goes beyond the standard “keep systems running” narrative.

Jason Tierney, SVP of Managed Services at C3 Integrated Solutions had this to say:

“For defense contractors, the challenge today goes beyond traditional IT support. In real-world assessment scenarios, challenges can come up when internal IT must work with external compliance teams, including process assumptions, incorrect documentation and a lack of coordination during a formal third-party assessment. Many organizations are also navigating multiple compliance frameworks, each with its own language, requirements and techniques. Even seemingly minor admin or system changes right before an assessment can create real problems. 


In regulated environments, IT service providers are taking on a different role, with responsibility that extends beyond downtime and outages to the risk of not passing an assessment or annual re-attestation. Strong change management, close coordination and consistent compliance process documentation are critical to getting organizations to an assessment-ready state and helping them stay there.”
 

Jeff Cratty, VP of Cloud & Integration at Blue Mantis adds this:

“National IT Service Provider Day is a reminder that the right technology partner does more than keep systems running. The best providers help organizations assess where they are, strengthen security, modernize what matters most and manage change in ways that support business goals. They create a secure foundation for innovation and help teams move forward with greater clarity and confidence.

As companies navigate AI adoption, cloud transformation and rising operational demands, they need service providers that can connect strategy to execution, protect critical data, reduce risk, and stay engaged beyond deployment. That means identifying practical use cases, strengthening data governance and supporting internal teams through change.

When providers deliver that kind of guidance and accountability, they do more than solve technical challenges. They help businesses adapt faster and turn technology investments into measurable value.”

This link provides some suggestions on how you can say thanks to the people who keep your organization running. Trust me when I say that a thank you can go a long way for these people.

SafeBreach launches AI-driven CTEM to close the execution gap 

Posted in Commentary with tags on April 22, 2026 by itnerd

SafeBreach today announced the launch of its AI-powered Continuous Threat Exposure Management (CTEM) solution. This solution is designed to help organizations move beyond siloed security activities toward a complete, closed-loop CTEM program that continuously identifies, prioritizes, and remediates cyber risk at scale.

As enterprises struggle with challenges like AI-generated threats, tool fatigue, and alert overload, traditional reactive security measures are no longer sufficient. Organizations are increasingly turning to the five-phased CTEM framework developed by Gartner™ as a more proactive way to manage exposures, but this has historically required the manual integration of disparate tools, datasets and processes.

SafeBreach is changing that with a unified solution that operationalizes the full CTEM lifecycle. The solution is grounded in the SafeBreach Exposure Validation Platform, which provides the safe, scalable adversarial exposure validation (AEV) capabilities that underpin the entire CTEM framework. Building on this foundation, the SafeBreach Helm AI Agent unifies the platform’s AEV capabilities with data and insights from a customer’s existing security ecosystem to provide a complete 360-degree CTEM solution that ensures exposures are not only identified but continuously validated and resolved.

SafeBreach Helm accomplishes this with a specialized set of capabilities aligned to each CTEM stage. Users query Helm with simple, conversational prompts to initiate each CTEM phase:

  1. The Scoping Phase: SafeBreach Helm leverages contextual data from Threat Intelligence (TI) tools to identify critical assets, business priorities, and relevant segments of the attack surface.
  2. The Discovery Phase: SafeBreach Helm continuously aggregates and correlates exposure data across internal and external environments, using Vulnerability Management (VM) and External Attack Surface Management (EASM) tools.
  3. The Prioritization Phase: SafeBreach Helm uses asset context from the Discovery phase to precisely highlight the exposures that present the greatest risk, helping users cut through the noise. 
  4. The Validation Phase: SafeBreach Helm utilizes the breach and attack simulation (BAS) of SafeBreach Validate and the attack path validation of SafeBreach Propagate to confirm the exploitability of the highlighted exposures and map realistic attack paths using real-world adversary techniques.
  5. The Mobilization Phase: SafeBreach Helm uses SafeBreach’s AI Remediation technology to translate validated findings into actionable guidance that can be shared with Security Information and Event Management (SIEM); Security Orchestration, Automation, and Response (SOAR); and other workflow management and ticketing tools—including ServiceNow and Jira— to enable teams to remediate risk efficiently and effectively.

Key Offerings of the CTEM by SafeBreach Solution:

  • SafeBreach Helm: The AI CTEM Agent that unifies data from sources including AEV, TI, VM, EASM, SIEM, SOAR, and other workflow management and ticketing tools into a single, intelligent interface for proactive risk management.
  • AEV: The SafeBreach Exposure Validation Platform, which combines SafeBreach Validate to test control effectiveness and SafeBreach Propagate to reveal how adversaries could traverse environments to reach critical assets.
  • AI Remediation: Provides context-aware, AI-driven guidance and integrates with SIEM, SOAR, and ticketing systems to operationalize remediation workflows and accelerate risk reduction.
  • Breach Studio: Advanced capabilities to design custom attack scenarios, including a VS Code extension for environment-specific testing.
  • Exposure Hub (Upcoming): A centralized hub that correlates data from VM, EASM, and other tools to provide comprehensive visibility into the attack surface.

Built for large, distributed environments, the CTEM by SafeBreach solution empowers organizations to evolve from fragmented, reactive security practices to a unified, AI-driven CTEM program—grounded in proven AEV and elevated by SafeBreach Helm—to deliver continuous, measurable risk reduction aligned to real-world attacker behavior.

To learn more about the CTEM by SafeBreach solution or the SafeBreach Helm Agent: 

Read the recent blog about SafeBreach Helm

Today Is Earth Day

Posted in Commentary on April 22, 2026 by itnerd

Today is Earth Day, and Earth Day matters because the systems we’ve built, especially in tech, don’t just run in isolation. They draw power, consume resources, and scale globally, which means every decision we make at the infrastructure level has a real, cumulative impact on the world around us. The companies that take that seriously and design for efficiency, smarter data placement, and sustainable operations aren’t just being good citizens. They’re building more resilient, cost-effective, and future-proof IT environments that actually perform better under pressure.

Richard Copeland, CEO, Leaseweb USA and Marie-Pier Angers, Sales Director, Leaseweb Canada had this to say:

Richard Copeland, CEO, Leaseweb USA:

“From a tech and business perspective, I’d bet most people haven’t thought about Earth Day in terms of server utilization, but that’s exactly where this lives. You walk into most environments and what you find isn’t some cutting-edge, perfectly tuned system. It’s racks of infrastructure running at a fraction of their capacity, powered on, cooled, maintained, and barely doing anything. Then on the other end, you’ve got teams overcompensating in the cloud, spinning things up ‘just in case,’ because nobody wants to be the one who underbuilt. So you end up paying for excess on both sides. More machines than you need. More energy than you should be using. A lot of complexity layered on top of it.

When organizations step back and actually place workloads where they make sense, in infrastructure that’s designed to run efficiently at scale, things start to normalize. Utilization goes up. The number of systems required goes down. Cooling demand drops. You can see it in the power draw, you can see it in the monthly bill, and you can feel it operationally because everything is just simpler to run. That’s the part that doesn’t get enough attention. Sustainability in IT isn’t some separate initiative. It’s what naturally happens when you stop running inefficient environments and start treating infrastructure like something that should actually be optimized.”

Marie-Pier Angers, Sales Director, Leaseweb Canada: 

“Many IT environments are inefficient by design. Not because people are careless, but because they’re trying to solve for risk. So they overbuild. They duplicate. They leave capacity sitting there unused because it feels safer than coming up short. Then they layer in cloud on top of that, sometimes the right way, sometimes not, and suddenly you’ve got this sprawl of infrastructure that’s expensive to run and even harder to reason about. The environmental impact is just a byproduct of that inefficiency.

When you start running workloads in infrastructure that’s actually built for efficiency, where higher utilization is the goal, where resources are shared intelligently, and where you’re not defaulting to one model for everything, the math changes pretty quickly. Fewer machines doing more work. Less power required to run them. Less cooling to keep them stable. At the same time, better performance and more predictable costs. That’s why this isn’t a tradeoff conversation. The same decisions that make your environment easier to operate and cheaper to run are the ones that reduce your footprint. That’s the alignment most teams don’t realize is sitting right in front of them.”

Once Agentic Smartphones Act Without User Permission, What Could Go Wrong? 

Posted in Commentary with tags on April 21, 2026 by itnerd

When a smartphone’s AI agent can execute actions across apps, read messages, interpret meaning, pull data from various apps and act autonomously outside of the user’s knowledge or intent, outcomes can potentially go sideways very quickly.

For the last 15 years, smartphones have responded to their users’ commands. Now, Android 17 threatens this user interaction model and its inherent safety guardrails.

Agentic mobile’s risks are explained in “Android 17: Your Phone’s AI is Evolving to be More Autonomous,” a new analysis by Approov Senior Manager Joyce Kuo. The full analysis is embedded at the bottom of this post.

Here’s the upshot:

Android 17 represents a major step toward the agentic mobile model, in which a device can coordinate tasks across apps as a personal agent. The upside is convenience. The downside is a new class of risk where nothing is technically compromised, but the result is unpredictable and potentially quite wrong. Data may be exposed, actions may be triggered, and workflows may be executed based on manipulated or misunderstood context.

Kuo looks at this expansion of the mobile attack surface beyond traditional app boundaries and user interaction norms, and why existing protections like sandboxing and permissions won’t address this new layer of risk.

Android 17 represents more than just a UX update; it’s a fundamental security and architecture shift – for brands on mobile, for their developers, and for users.

The core issues are straightforward: when systems start acting on your behalf, potentially without your knowledge, how do you as a smartphone-using consumer prevent them from doing exactly what they may otherwise be allowed to do at the wrong time and for the wrong reasons? And how do brands and other app publishers (and their developers) contain these risks?