38 million customers impacted in ManoMano third-party data breach

Posted in Commentary with tags on February 27, 2026 by itnerd

ManoMano, a European online DIY and home improvement marketplace with 50 million visitors per month, is notifying customers about a significant data breach affecting an estimated 38 million individuals. The company discovered the unauthorized access in January 2026 and has linked it to one of its third-party customer service providers.

Although not confirmed, the compromised organization is rumored to be a customer support service provider whose Zendesk instance was breached. Investigations found that personal data from customer accounts and interactions was extracted by the attackers.

A threat actor using the alias “Indra” claimed responsibility on a hacker forum, alleging possession of roughly 37.8 million user records, over 900,000 service tickets, and over 13,000 attachments. The exposed information varied by individual and may include full names, email addresses, phone numbers, and the contents of customer service communications.

ManoMano stated that account passwords were not accessed and there is no evidence of data being altered within its internal systems. Upon discovering the incident, the company disabled the subcontractor’s access to customer data, strengthened access controls and monitoring, notified relevant authorities, and began informing potentially affected users with guidance on vigilance against phishing and other threats.

Noelle Murata, Sr. Security Engineer, Xcape, Inc.:

   “The data breach at ManoMano allowed the threat actor “Indra” to abscond with almost 38 million user records and close to a million service tickets. Although internal systems were unaffected, this highlights the inherent dangers associated with the “extended enterprise” model and reliance on third parties. This incident is believed to be connected to a broader exploitation of Zendesk. It underscores the sensitivity of customer support communications that frequently contain unmasked personal information and user behavior data.

   “The true prize lies not merely in contact details but also in the 13,000 pilfered attachments and service logs that provide the ideal blueprint for highly targeted phishing attacks. The primary threat isn’t necessarily account hijacking, but rather scams referencing actual past purchases or support interactions. Any communication purporting to be from a support representative should be viewed with suspicion.

   “Retailers should take this event as a strong impetus to enforce stringent vendor security protocols. This includes minimal data sharing, robust access controls, ongoing monitoring, and swift mechanisms to revoke third-party access when suspicious activity is detected.

   “When a contractor gets breached, the fallout belongs to you, not the subcontractor.”

Denis Calderone, CTO, Suzu Labs:

   “ManoMano wasn’t breached directly. Their outsourced customer support provider got compromised, and through that one access point attackers pulled millions of customer records and close to a million support tickets. This is the supply chain problem we keep talking about. You can lock your own house down all you want, but if your subcontractor leaves their door open, your data walks out through their environment.

   “What really caught our attention though is the support ticket data. People don’t think about what lives in support tickets. It’s not just names and emails. It’s conversations, order details, complaints, account issues, file attachments. That’s gold for social engineering. An attacker can reference your specific order, your specific complaint, and suddenly that phishing email doesn’t look like phishing anymore. It looks like a legitimate follow-up from customer support.

   “So, if you’re outsourcing customer support, ask yourself whether a single agent account on the provider’s side can export your entire customer database, and what export controls exist to minimize the blast radius from a breach such as this. If you don’t know the answers, that’s where you start.”
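Calderone’s question about export controls can be made concrete. The sketch below shows one way to cap how many customer records a single support-agent account can pull in a rolling 24-hour window; the cap value, class names, and `ExportCapExceeded` exception are all illustrative assumptions, not features of any real support platform.

```python
from collections import defaultdict
from datetime import datetime, timedelta

class ExportCapExceeded(Exception):
    """Raised when an agent account would exceed its export budget."""

class ExportGuard:
    """Hypothetical per-agent export rate limit over a 24-hour window."""

    def __init__(self, cap=500):  # cap tuned to normal agent workload
        self.cap = cap
        self._log = defaultdict(list)  # agent_id -> [(timestamp, count), ...]

    def record_export(self, agent_id, record_count, now=None):
        now = now or datetime.utcnow()
        window_start = now - timedelta(hours=24)
        # Drop entries that have aged out of the rolling window.
        self._log[agent_id] = [
            (ts, n) for ts, n in self._log[agent_id] if ts >= window_start
        ]
        used = sum(n for _, n in self._log[agent_id])
        if used + record_count > self.cap:
            raise ExportCapExceeded(
                f"agent {agent_id} would exceed {self.cap} records/24h"
            )
        self._log[agent_id].append((now, record_count))

guard = ExportGuard(cap=1000)
guard.record_export("agent-42", 400)
guard.record_export("agent-42", 500)
try:
    guard.record_export("agent-42", 200)  # 1100 > 1000: blocked
except ExportCapExceeded:
    print("export blocked")
```

A real deployment would enforce this server-side on the provider’s API, alert on near-cap activity, and pair it with per-field data minimization so tickets never contain unmasked personal data in the first place.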

Outsourcing saves cash, but it introduces a variety of dangers, and this is a big one. If I were an organization thinking of outsourcing something, this would make me think twice.

6,000 organisations scanned as UK vulnerability monitoring service cuts unresolved flaws by 75%

Posted in Commentary with tags on February 27, 2026 by itnerd

The UK government announced that its new Vulnerability Monitoring Service (VMS), a centralized platform continuously scanning internet-facing public sector systems for known weaknesses, has sharply reduced the time to fix serious flaws and the backlog of unresolved issues.

The service, which monitors around 6,000 public sector organizations, has helped cut unresolved security issues by roughly 75% and reduced the median time to fix critical vulnerabilities from about 50 days to approximately eight days.

Officials said the VMS detects around 1,000 different types of weaknesses each month and provides specific guidance to agencies on how to remediate them. Alongside this capability, the government is launching a dedicated “Cyber Profession” initiative to recruit, train, and retain cybersecurity experts, including a Cyber Resourcing Hub and a Cyber Academy to support long-term defensive capabilities across the public sector.

The UK government said these efforts are designed to protect public services from cyber-attacks and strengthen national cyber resilience. The announcement outlined plans for structured career pathways aligned with Cyber Security Council standards and emphasized improved detection, prioritization, and response across departments.

Denis Calderone, CTO, Suzu Labs:

   “Scanning 6,000 public sector organizations and cutting DNS fix times from 50 days to 8 is genuinely good news. Find it, assign it, track it, close it. That’s how vulnerability management should work. Worth noting though that the 84% number is specifically for domain-related issues. Other vulnerability types went from 53 days to 32, so closer to a 40% improvement. Still real progress, just not quite as dramatic.

   “The part that should give everyone pause is that these vulnerabilities were sitting across the public sector for years and nobody knew. NHS trusts, legal aid, ambulance services. Turning on a scanner and finding this much is a win, absolutely, but it also tells you just how blind these organizations were before. You can’t fix what you can’t see.

   “And this is why it kind of bugs me that the government exempted itself from the Cyber Security and Resilience Bill it’s putting on the private sector. You have to wonder what the numbers would look like if they pointed these same scanners at their own departments with actual legal obligations behind them.”

Rajeev Raghunarayan, Head of GTM, Averlon:

   “Reducing median remediation time from roughly 50 days to single digits across thousands of public sector organizations is meaningful progress. It shows that when vulnerability management is treated as an operational priority, measurable improvements follow.

   “At the same time, modern attack cycles move quickly. Even an eight-day exposure window can be significant. The real takeaway is not improved scanning alone, but operational follow through. Most organizations already have visibility into weaknesses. The challenge is translating findings into prioritized, accountable remediation and consistently shrinking the time between discovery and fix.”

Noelle Murata, Sr. Security Engineer, Xcape, Inc.:

   “The UK government’s implementation of the Vulnerability Monitoring Service (VMS) marks a significant move from reactive patching to proactive, centralized security management for 6,000 public sector organizations. This initiative drastically reduces the average time to fix critical vulnerabilities from fifty days to just eight, effectively eliminating the window of opportunity that state-sponsored attackers and ransomware groups exploit for initial access. The focus on DNS vulnerabilities is a key strategic choice, as these frequently overlooked misconfigurations are the main method used for covert redirection and data interception.

   “Complementing this technical solution is the new “Cyber Profession” initiative, which includes a Cyber Academy and a Resourcing Hub in Manchester, aiming to tackle the persistent skills shortage that has historically hindered public sector cybersecurity resilience. Crucially, the VMS approach reorients cybersecurity from a reactive “firefighting” mode to ongoing risk management. By combining this technical capacity with a structured “Cyber Profession” development program, the government is also addressing the human resource deficit that often undermines sustained resilience.

   “While scanning tools are essential, they don’t resolve vulnerabilities on their own; skilled professionals and clear accountability are what truly fix them. Other governments would benefit from observing this model. This includes mandatory, continuous scanning of Internet-facing assets, coordinated centrally but executed by individual agencies. Talent development programs that establish cybersecurity as a viable career path can close security gaps more effectively than any regulation or budget increase.

   “When governments treat patching speed as a national security metric, attackers lose their advantage: time.”

The UK government lately has been known to come up with some good ideas on the cybersecurity front. This is one of those good ideas because it forces those who are responsible for defending government networks to actually defend those networks in a way that reduces the attack surface.

The U.S. Financial Industry at the Epicenter of the Global Cybercrime Economy 

Posted in Commentary with tags on February 27, 2026 by itnerd

According to new SOCRadar threat intel, the U.S. financial sector now stands squarely at the center of the global cybercrime economy, enduring roughly half of all financial phishing attacks and nearly a quarter of all dark web threat activity.

Adversaries are now pivoting from basic software exploits to highly sophisticated, AI-driven crime waves, relentless BEC campaigns, and stealthy third-party supply chain infiltrations. 

In an analysis that can be read here, the SOCRadar research team has broken down how the U.S. financial sector is uniquely in the crosshairs for cyber criminals, what the dominant attack vectors are, and key steps that financial leaders should take to fortify their defenses. 

Key findings include: 

  1. The U.S. financial sector accounts for 23.52% of all finance-related dark web threat activity and 48.02% of global phishing activity. 
  2. Over 80% of dark web threat types are centered on exposing data and databases, with 74.49% of dark web posts involving selling these assets. 
  3. Dominant attack vectors targeting U.S. financial institutions include social engineering, BEC, and increasingly AI-powered exploits. 
  4. Third-party vendors remain critical vectors for systemic risk.

For full details, here is the analysis: https://socradar.io/blog/finance-industry-us-institutions-2026/

Flashpoint Analysis: Six‑Month Supply‑Chain Attack Targeting Notepad++ Users

Posted in Commentary with tags on February 27, 2026 by itnerd

Flashpoint’s threat intelligence team has published new analysis on a significant supply‑chain attack involving Notepad++, one of the world’s most widely used open‑source text editors. The compromise—quietly active for roughly six months—allowed threat actors to hijack the application’s update mechanism and deliver malicious executables to targeted users.

Flashpoint’s research breaks down how attackers gained unauthorized access to the hosting infrastructure supporting Notepad++ updates and selectively redirected update requests to attacker‑controlled servers. Instead of receiving legitimate installers, victims were served malicious payloads disguised as trusted updates. The attack did not exploit a vulnerability in Notepad++ code itself; it was an infrastructure‑level compromise that evaded detection for months.

Flashpoint’s analysis highlights several critical findings:

  • The compromise persisted from June through December 2025, affecting users who attempted to update during that window.
  • Attackers hijacked the update delivery pipeline, redirecting traffic from the legitimate Notepad++ server to malicious infrastructure.
  • The attack targeted select victims, suggesting a focused espionage or intelligence‑gathering operation rather than broad malware distribution.
  • The WinGUp updater lacked sufficient verification controls, enabling the delivery of malicious executables without triggering integrity checks.
  • No CVE was assigned, underscoring that the weakness was not in the application code but in the surrounding ecosystem.


This incident is a stark reminder that supply‑chain attacks increasingly target the infrastructure around trusted tools – not just their source code. With Notepad++ used globally by developers, IT teams, and enterprises, the attack demonstrates how a single compromised update path can create widespread risk. Flashpoint’s analysis provides rare visibility into the mechanics of the attack and offers actionable guidance for organizations to assess exposure and strengthen their software update pipelines.
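One practical takeaway from the missing verification controls is that update clients should check a downloaded payload against a digest or signature obtained out of band before ever executing it. The sketch below shows the digest half of that check; the file name and pinned digest are hypothetical, and in practice the expected value must come from a channel separate from the server delivering the payload (otherwise an attacker who controls the download path controls both).

```python
import hashlib
import hmac

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream a file through SHA-256 so large installers don't load into RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_installer(path: str, expected_hex: str) -> bool:
    # hmac.compare_digest gives a constant-time comparison.
    return hmac.compare_digest(sha256_of(path), expected_hex)

# Usage (hypothetical file and digest):
# if not verify_installer("npp_update.exe", PINNED_DIGEST):
#     raise RuntimeError("update rejected: digest mismatch")
```

A digest check alone only proves the payload matches a known value; signature verification against a vendor public key baked into the client is the stronger control, since it survives compromise of the distribution infrastructure entirely.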

You can get more details here: What to Know About the Notepad++ Supply-Chain Attack | Flashpoint

G+D and AWS collaborate on new cloud-based Remote eSIM Provisioning for consumer and IoT solutions

Posted in Commentary with tags on February 27, 2026 by itnerd

Global SecurityTech company Giesecke+Devrient (G+D) today launched a new cloud-based eSIM solution powered by Amazon Web Services (AWS). This new collaboration combines trusted digital security from G+D with best-in-class cloud agility and scale from AWS, enabling customers to confidently deploy and manage devices with eSIM connectivity worldwide. 

Under this new collaboration, G+D will transition eSIM workloads to the AWS cloud environment. This newly launched solution combines G+D’s focus on GSMA compliance and foundational security with AWS’ secure, high availability cloud infrastructure to deliver global provisioning and low-latency connectivity solutions across multiple geographies.

The growth of eSIM-only devices across global telecommunications markets during the last year has driven a significant shift towards industry adoption of eSIM for both consumer and IoT applications. Telecommunications operators increasingly need faster onboarding and elastic scaling of eSIM deployment, especially during peak demand periods. Ensuring end-to-end security is also critical for the rollout of SGP.32 (IoT eSIM) and SGP.22 (consumer eSIM) compatible devices.

This new agreement expands the collaboration between G+D and AWS. G+D is already deploying its SGP.32 eSIM technology for Amazon’s eero Signal device, marking one of the first commercial deployments of the technology. G+D will also offer its solutions via the AWS Marketplace, ensuring widespread accessibility to the technology for mobile carriers, enterprises, and IoT service providers.

MWC Barcelona 2026:

To find out more about how AWS and G+D are bringing eSIM to the cloud, check out the GSMA’s eSIM Summit, where G+D CEO Philipp Schulte and Jan Hofmeyr, Vice President for Telecommunications at AWS, will deliver the keynote “eSIM: From Fraud Risks to Cloud-Powered Security”. More information on the eSIM Summit and the keynote topics can be found here: eSim Summit -… Mar 4, 2026 12:30-14:30 | MWC Barcelona

First live 6G trial by Ericsson in Texas powers AI robotics and real-time video streaming

Posted in Commentary with tags on February 27, 2026 by itnerd

Ericsson today announced it has successfully completed the world’s first 6G pre-standard over-the-air (OTA) session, marking a major milestone towards commercial 6G networks and reinforcing U.S. leadership in next-generation wireless innovation.

This milestone was achieved on a pre-standard 6G system using a trusted, end-to-end architecture designed to be AI and cloud native. Conducted at Ericsson’s U.S. headquarters in Plano, Texas, the OTA session validates the readiness of key 6G building blocks. The demonstration features radio hardware, RAN Compute, software-defined air interfaces, and cloud platforms. Ericsson’s future-proof software architecture is deployable on multiple hardware platforms, including CPUs (central processing units) and GPUs (graphics processing units).

This achievement supports the U.S. government’s focus on 6G leadership, including early research, global standards, and forward-looking spectrum policy. 6G is critical infrastructure for national security, economic competitiveness, and AI-driven innovation, and Ericsson’s work directly supports those priorities by showing how future networks can deliver secure, high-performance, AI-native connectivity.

Why this matters

Specifically, the 6G trial proves two key capabilities to prepare future networks for AI: powering AI robotics with instant, reliable connections and processing for real‑time control; and enabling real‑time video streaming.  As AI expands beyond smartphones to power robotics, autonomous systems, immersive applications, and industrial automation, wireless infrastructure is becoming a critical layer of the AI stack. 6G networks will be designed to sense, compute, and adapt in real time, enabling consistent low latency, higher uplink capacity, and new classes of AI services that are not possible today.

Ericsson’s OTA milestone demonstrates that these capabilities are moving into system-level reality, positioning the U.S. ecosystem to shape global standards, drive innovation, and lead commercialization of 6G.

A long-term commitment to the United States

Ericsson has operated in the U.S. for more than 120 years and continues to expand its footprint across research, manufacturing, and operations. The company employs more than 6,000 people across the country and operates 12 R&D centers focused on AI, ASIC design, and antenna systems. Its U.S. headquarters in Plano, Texas, serves as a major hub for advanced wireless R&D, standards development, and customer engagement.

Ericsson also currently manufactures advanced 5G radios and RAN Compute systems at its 5G USA Smart Factory in Lewisville, Texas – one of the most advanced telecom manufacturing facilities in the country. Ericsson has invested more than USD 150 million in the factory and is the only manufacturer making telecom equipment at scale in the U.S. The highly automated, 300,000-square-foot facility supports more than 550 U.S. manufacturing jobs and strengthens secure, resilient domestic supply chains. As 6G technology matures, Ericsson plans to build on this U.S.-based manufacturing foundation to support future deployments.

Technical highlights

The system consists of a pre-standard 6G stack with:

  • Spectrum in the 7 GHz range (centimeter wave)
  • Carrier bandwidth of 400 MHz
  • Performance focus on optimized uplink, enhanced energy efficiency, and maximized spectral utilization

The demonstration leveraged Ericsson radios, baseband platforms, and cloud-native software, and strengthened ongoing contributions to global standard bodies, including 3GPP and Open RAN. Ericsson will continue expanding trials across additional spectrum bands, enabling AI-native capabilities, and collaborating with operators, chipset partners, and the broader ecosystem to accelerate 6G readiness.

Clicks Brings Communicator to New Markets with Localized Keyboard Layouts

Posted in Commentary with tags on February 27, 2026 by itnerd

Clicks Technology today announced expanded language support for Clicks Communicator, introducing new localized keyboard layouts that bring the communication-focused Android smartphone to more users globally. The update reflects stronger-than-expected demand since reservations opened earlier this year and reinforces Communicator’s role as a device purpose-built to help people communicate with confidence and take action on the go.

New keyboard layouts include French (AZERTY), German (QWERTZ), Korean and Arabic, enabling Communicator to better serve customers across Europe, the Middle East and Asia.

In response to strong global interest, Clicks extended the early bird window to March 15, giving customers in these markets the opportunity to take advantage of special pricing and bonuses.

Customers will configure their preferred keyboard layout, along with their Communicator color and back cover, closer to shipping.

Purpose-Built For Fast, Responsive Communications Over Time

Clicks also confirmed today that Communicator will be powered by the Dimensity 8300 (MT8883), a modern 4nm architecture that delivers a fast, responsive experience, with performance to spare.

As a smartphone purpose-built for doing, not doomscrolling, Communicator is designed to feel instant and responsive every time it’s picked up, empowering customers to take action on the go. Clicks selected the MT8883 platform to provide plenty of performance headroom while ensuring the experience continues to meet customer expectations over time.

The MT8883 platform also supports a long runway for Android and security updates, with software support planned through Android 20 and five years of security updates.

Pricing and Availability

Clicks Communicator will be available in Smoke, Clover, and Onyx at a special launch price of USD 499. Place your reservation before March 15 to lock in early bird pricing and priority access.

Clicks Communicator will begin shipping later this year.

Meet Clicks at MWC 2026
Clicks will be meeting with media, creators and partners at Mobile World Congress in Barcelona, March 2-5. Media interested in briefings or hands-on demos can contact press@clicks.tech.

Datadobi Announces Early Access Program for Data Access Review, a New Addition to StorageMAP 

Posted in Commentary with tags on February 26, 2026 by itnerd

Datadobi has launched an Early Access Program for Data Access Review, a new capability coming to its StorageMAP platform. Developed in direct response to customer demand for deeper visibility and control over data permissions, Data Access Review will extend StorageMAP’s value by adding actionable permissions intelligence to unstructured data management. During the Early Access program, selected customers have the opportunity to test and help shape new permissions intelligence features. 

By formalizing and expanding StorageMAP’s ability to analyze and report on access permissions, Data Access Review enables organizations to identify excessive, outdated, or inappropriate access rights before they evolve into security risks or compliance violations. It integrates into existing unstructured data management workflows, ensuring that access governance becomes a natural extension of data visibility, classification, and remediation strategies.  

The Early Access Program is available exclusively to current Datadobi customers who are actively using StorageMAP. Participants will get an early look at new features, gain valuable insights about access permissions in part of their environment, and have a direct line to share feedback that will help shape the final data access product. 

Customers interested in joining the Early Access Program can reach out to their Datadobi account representative or visit the Datadobi website.

Patches Fix Claude Code Flaws, But Broader Repository-Based Risks Remain 

Posted in Commentary with tags on February 26, 2026 by itnerd

Researchers at Check Point have identified multiple vulnerabilities in Anthropic’s development tool Claude Code that allowed malicious repositories to trigger remote code execution and steal active API credentials.

The observed security issues exploited built-in mechanisms including Hooks, Model Context Protocol servers, and environment variables to run arbitrary shell commands and exfiltrate API keys before trust prompts could be confirmed.

Two specific tracked vulnerabilities, CVE-2025-59536 and CVE-2026-21852, were documented and patched by Anthropic following disclosure by security researchers. The first enabled arbitrary code execution via overridden configuration settings that bypass user consent dialogs, while the second could redirect API traffic to malicious endpoints, exposing developers’ Anthropic API keys in plaintext.

All reported flaws have been remedied in subsequent Claude Code updates prior to public advisory publication.

According to researchers, even after the specific vulnerabilities were fixed, the underlying risk does not disappear. The issues exposed how project configuration files can directly shape execution behavior inside AI-assisted development tools, and a malicious repository can still act as a delivery mechanism if safeguards are insufficient, which expands the threat model beyond the individual CVEs that were addressed.

As a result, applying patches resolves the documented flaws but does not fully remove the broader exposure created when AI tooling automatically interprets and acts on repository-level settings. 
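Because the residual risk lives in repository-level configuration rather than in any single CVE, one mitigation is a quick triage pass over an unfamiliar repository before opening it in an AI-assisted tool, flagging files that can shape tool or build behavior. The sketch below does exactly that; the specific file and directory names are illustrative assumptions and should be adjusted to whatever configuration files your own tools actually honor.

```python
import os

# Files and directories that commonly carry executable or
# tool-configuring content. Illustrative list only.
SUSPECT_NAMES = {".env", "settings.json", "Makefile"}
SUSPECT_DIRS = {".claude", os.path.join(".git", "hooks"), ".vscode"}

def triage_repo(root: str) -> list[str]:
    """Return repo-relative paths worth reviewing before trusting the repo."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        # Everything under a suspect directory gets flagged wholesale.
        if any(rel == d or rel.startswith(d + os.sep) for d in SUSPECT_DIRS):
            flagged.extend(os.path.join(rel, f) for f in filenames)
            continue
        for f in filenames:
            if f in SUSPECT_NAMES:
                flagged.append(os.path.join(rel, f))
    return sorted(flagged)
```

A listing like this is not a verdict, only a reading list: the point, as the researchers note, is that these files execute or reconfigure things on your behalf, so they deserve human review before any agent is allowed to act on them.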

Jacob Krell, Senior Director: Secure AI Solutions & Cybersecurity, Suzu Labs:

   “These CVEs are real and Anthropic was right to patch them. The broader issue is not unique to Claude Code. The AI development tool industry as a whole is prioritizing enablement over security, and these vulnerabilities are a symptom of that design philosophy, not an isolated product failure.

   “In the case of Claude Code, hooks ran shell commands before the developer even saw the trust dialog. The security control existed. It just executed after the damage was already done. AI agents are deployed with broad permissions by default because restricting them reduces productivity. That is the same tradeoff the industry made with admin accounts two decades ago, and it took years of breaches to correct. The principle of least privilege does not stop applying because the user is an AI model instead of a human. Agents should be treated as untrusted by default, with strict zero trust boundaries between the agent and any command surface, credential store, or system resource it touches.

   “This is not a new class of attack surface. Malicious Makefiles, poisoned scripts, and git hooks have compromised developers for years. What AI tools change is the scope of what runs once triggered. The attack surface is not new. The blast radius is.

   “AI development tools are going to become more autonomous, not less. The industry is building the capability first and retrofitting the security later. That pattern has never aged well in software, and it is unlikely to age any better with AI.”

I am aware of a large number of developers who are using tools like Claude Code to speed up the coding process. If that includes you, make sure you’re running the latest version of these tools, and treat any repository you didn’t write yourself as untrusted until you’ve reviewed it.

$30 Infostealer “DarkCloud” Is Fueling a Surge in Enterprise Breaches

Posted in Commentary with tags on February 26, 2026 by itnerd

Flashpoint’s threat intelligence team has uncovered new details about DarkCloud, a rapidly spreading, commercially available infostealer that is reshaping the initial‑access landscape for cybercriminals.

DarkCloud is part of a growing wave of low‑cost, highly scalable infostealers that are lowering the barrier to enterprise compromise. First observed in 2022 and openly sold on Telegram and a clearnet storefront for as little as $30, DarkCloud gives even low‑skill threat actors the ability to harvest credentials at scale and gain enterprise‑wide access.

Flashpoint’s latest analysis reveals several concerning trends:

  • DarkCloud is written in Visual Basic 6.0, a legacy language that helps it evade modern detection tools and signature‑based defenses.
  • Its encryption and string‑obfuscation techniques make it harder for defenders to analyze and block.
  • It is fully commercialized, with subscription tiers, active development, and a growing user base on Telegram—mirroring the professionalization of the cybercrime economy.
  • Credential theft at scale enables attackers to pivot into ransomware, business email compromise, and long‑term espionage operations.

Flashpoint’s researchers warn that DarkCloud represents a broader shift: infostealers are now the dominant initial‑access vector in 2026, giving attackers a cheap, fast, and reliable way to infiltrate organizations.

Why this matters:
Infostealers like DarkCloud are no longer niche tools – they are becoming the backbone of modern cybercrime. With DarkCloud’s low cost, ease of access, and ability to bypass traditional defenses, organizations across every sector face heightened risk. Flashpoint’s analysis provides rare visibility into how these tools are built, sold, and deployed – and what security teams must do to defend against them.

Flashpoint can offer:

  • Expert interviews with the analysts who dissected DarkCloud
  • Insights into the commercialization of infostealers and the threat‑actor economy
  • Guidance for CISOs on mitigating credential‑theft‑driven breaches
  • Data from Flashpoint’s 2026 threat intelligence research

You can learn more here: Understanding the DarkCloud Infostealer | Flashpoint