Hacker reveals 6.8 billion emails online and warns victims “your data is public”

Posted in Commentary with tags on February 11, 2026 by itnerd

A user of a popular data leak forum posted a database, claiming it contains 6.8 billion unique email addresses collected from various data sources online. The user claims to have spent several months digging through online sources, many of which contain illegally obtained data.

“Two years ago, I obtained more than 3.3 billion unique email addresses. After a long break, I started this again and spent about 2 months extracting emails from various combos, ULP collections, logs, and databases and extracted 6,839,584,670 unique email addresses,” the post’s author, going by the moniker Adkka72424, said.

The Cybernews research team investigated the 150GB-strong dataset and here’s what they found:

  • The dataset did include over 6.8 billion lines of information, exactly as the post’s author said.
  • However, the team noted numerous invalid email addresses, which make the database much harder for amateur attackers to use: it requires time and effort to clean up before it can power large-scale attacks.
  • The team believes that after eliminating unusable emails and removing duplicates, the actual number of unique email addresses in the database could be significantly smaller, hovering around 3 billion.

While less than half the size initially claimed, several billion email addresses in a single database still represent a massive pool of ready-to-use targets for cybercriminals.
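To make that cleanup step concrete, here is a minimal sketch, assuming the dump is a plain text file with one address per line (a hypothetical leak.txt, not the Cybernews team’s actual tooling), that filters out obviously invalid entries and deduplicates the rest. At the scale described here an in-memory set would not fit, so a real pass would lean on external sorting (e.g. `sort -u`), but the logic is the same.

```python
import re

# Very loose check: one "@", a non-empty local part, and a dotted domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def clean_dump(path: str) -> set[str]:
    """Return the set of unique, plausibly valid addresses from a line-per-address dump."""
    unique: set[str] = set()
    with open(path, encoding="utf-8", errors="ignore") as fh:
        for line in fh:
            candidate = line.strip().lower()
            if EMAIL_RE.match(candidate):
                unique.add(candidate)
    return unique

if __name__ == "__main__":
    emails = clean_dump("leak.txt")  # hypothetical filename
    print(f"{len(emails):,} unique, plausibly valid addresses")
```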

For more information, here’s the full report: https://cybernews.com/security/massive-email-database-leak-billions-records/ 

The privacy costs of dating on your phone: 100+ dating apps analyzed

Posted in Commentary on February 11, 2026 by itnerd

Comparitech researchers published an in-depth study examining the data privacy practices of more than 100 popular dating apps. With an estimated 350 million people worldwide using dating platforms, the findings raise serious questions about how much personal data users are asked to give up in search of love. 

By analyzing each app’s Android manifest, the researchers found that dating apps request an average of just over 30 permissions at download, nearly eight of which are classified by Android as high-risk or “dangerous.”
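As a rough illustration of that kind of manifest analysis (not Comparitech’s actual tooling), the sketch below parses a decoded AndroidManifest.xml, hypothetically extracted with a tool such as apktool, and counts how many requested permissions fall into a small sample of Android’s “dangerous” category.

```python
import xml.etree.ElementTree as ET

# A small sample of permissions Android classifies as "dangerous" (not the full list).
DANGEROUS = {
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_EXTERNAL_STORAGE",
    "android.permission.WRITE_EXTERNAL_STORAGE",
    "android.permission.RECORD_AUDIO",
}

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def requested_permissions(manifest_path: str) -> list[str]:
    """List every <uses-permission> entry in a decoded AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    return [el.get(f"{ANDROID_NS}name", "") for el in root.iter("uses-permission")]

perms = requested_permissions("AndroidManifest.xml")  # decoded manifest, e.g. via apktool
dangerous = [p for p in perms if p in DANGEROUS]
print(f"{len(perms)} permissions requested, {len(dangerous)} in the dangerous sample list")
```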

Key findings include: 

  • The average app requests just over 30 permissions in total, 8 of which are classed as high-risk/“dangerous”
  • The most common dangerous permissions request access to the device’s camera, location data (precise geolocation or approximate location based on cell tower or Wi-Fi data), external storage read/write (data outside of the app, e.g. stored on the device), and audio recording
  • 24% of apps (24 apps out of 102) potentially violate Google’s privacy policy standards
  • The most common omission from privacy policies was the data retention period (not provided by 15 apps), followed by a clear policy on how users can delete their data (omitted, restricted, or unclearly defined by 11 apps)
  • The average app comes with 8.7 trackers, with one app (Zoosk) using 28
  • These apps have been downloaded over 1.2 billion times in total (based on each app’s download figure listed on Google Play)

For full details on the data privacy of these dating apps, the research can be read here: https://www.comparitech.com/news/the-privacy-costs-of-dating-on-your-phone-100-dating-apps-analyzed/

Kyndryl unveils Agentic AI workflow governance

Posted in Commentary with tags on February 11, 2026 by itnerd

Kyndryl today announced an innovative capability for creating policy-governed agentic AI workflows to enable enterprises to scale agentic AI across complex and highly regulated environments. Kyndryl’s policy as code capability translates customers’ organizational rules, regulatory requirements and operational controls into machine‑readable policies that govern how agentic AI workflows execute, to support consistent, auditable and trustworthy outcomes.

Customers want to reap the benefits of integrating agentic AI into their operations, but security, compliance and control challenges inhibit trusted deployment of AI agents. In fact, 31% of customers cite regulatory or compliance concerns as a primary barrier limiting their organization’s ability to scale recent technology investments.

Kyndryl’s policy as code capability addresses these concerns by defining operational boundaries and designing agent actions to remain explainable, reviewable and aligned with customer-defined business and regulatory requirements. This combination also helps reduce costs, accelerate decision-making, eliminate errors and power AI-native workflows within defined policy guardrails.

Policy as code is a critical element of the Kyndryl Agentic AI Framework, providing a logical enforcement layer that dynamically governs how AI agents execute, interact and operate across systems. Kyndryl’s approach to codifying compliance into enterprise workflows is strengthened by insights drawn from decades of operating complex enterprise environments and the nearly 190 million automations the company manages every month for these mission-critical systems. These operational foundations enable more reliable governance, improve agent explainability and reduce unexpected behaviors in production environments.
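Kyndryl has not published the internals of its policy as code engine, but the general pattern is straightforward to illustrate. The sketch below, using entirely hypothetical policy names and limits, shows a machine-readable policy gating an agent’s proposed actions into allow, escalate or deny decisions while writing an audit trail.

```python
import json
import time

# Hypothetical machine-readable policy: which actions an agent may take, and the limits on each.
POLICY = {
    "refund_payment": {"allowed": True, "max_amount": 500, "requires_human_above": 100},
    "delete_customer_record": {"allowed": False},
}

AUDIT_LOG = []

def authorize(agent: str, action: str, **params) -> str:
    """Return 'allow', 'escalate', or 'deny' for a proposed agent action, and log the decision."""
    rule = POLICY.get(action, {"allowed": False})
    if not rule.get("allowed"):
        decision = "deny"
    elif params.get("amount", 0) > rule.get("max_amount", float("inf")):
        decision = "deny"
    elif params.get("amount", 0) > rule.get("requires_human_above", float("inf")):
        decision = "escalate"  # route to a human supervisor
    else:
        decision = "allow"
    AUDIT_LOG.append({"ts": time.time(), "agent": agent, "action": action,
                      "params": params, "decision": decision})
    return decision

print(authorize("billing-agent", "refund_payment", amount=75))    # allow
print(authorize("billing-agent", "refund_payment", amount=250))   # escalate
print(authorize("billing-agent", "delete_customer_record"))       # deny
print(json.dumps(AUDIT_LOG, indent=2))                            # audit-by-design trail
```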

Embedding policy-governed agent workflows into business operations

Kyndryl policy as code enables governance of agentic workflows and is bolstered via differentiated capabilities, including:

  • Deterministic execution – Agents only execute actions permitted and enforced by pre-defined policies, reducing operational risk.
  • Eliminates hallucination impact – Guardrails block unpredictable or unauthorized actions along the workflow, eliminating operational impact of agentic hallucinations.
  • Audit-by-design transparency – Each agent action and decision is logged and explainable, supporting compliance and oversight.
  • Human supervision – Agents execute tasks aligned with established and testable policies that are observed via a dashboard to support consistent actions and decisions.

Kyndryl’s structured approach to managing agentic workflow execution supports controlled and safe deployment of policy-constrained autonomous agents in sectors such as financial operations, public services, supply chains and other mission-critical domains where reliability and predictability are essential.

Learn more by connecting with a Kyndryl Consult expert to design, implement and operate agentic AI solutions governed by the company’s enterprise-grade policies, oversight and compliance controls.

Adyen Launches ‘Personalize’ to Tailor Checkout Experiences in Real-Time

Posted in Commentary with tags on February 11, 2026 by itnerd

Adyen today announced the launch of Personalize, a new product within its Adyen Uplift payment optimization suite. Personalize allows businesses to adjust their checkout pages in real-time based on individual shopper preferences, making it easier for customers to pay while reducing processing costs for the merchant.

The addition of Personalize builds on the overall success of Adyen Uplift, which launched in January 2025. In its first year, Adyen Uplift helped businesses lower payment costs by 9.4% on eligible traffic while reducing false positives (blocking legitimate transactions) by 42% on average. Additionally, the 6,500+ businesses using Adyen Uplift saw an average increase of 1.19% in payment conversion rates above standard industry baselines, reaching up to 6% for some customers. These results stem from optimized routing and the prevention of unnecessary blocks triggered by inefficient risk configurations. The new Personalize product goes a step further, focusing on the early customer journey and routing shoppers to optimal payment methods to maximize both merchant savings and conversion rates.

Addressing checkout friction

Traditional online checkouts are often rigid, showing the same payment options and security steps to every shopper regardless of their history or preferences. This lack of flexibility is a leading cause of lost sales, with Adyen’s research showing that 37% of shoppers abandon a purchase if the process takes too long. Additionally, 72% of businesses report that high transaction fees continue to put significant pressure on their profit margins.

Personalize addresses these challenges by adding a Dynamic Identification layer to the checkout experience. By leveraging insights from trillions of dollars in transaction data and Adyen’s global banking infrastructure, businesses can now recognize shoppers and adapt the payment experience before they click ‘pay.’ This allows businesses to automatically order payment methods based on what a specific customer is most likely to use, creating a faster, more user-friendly experience that reduces abandoned carts.
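Adyen has not disclosed how Personalize ranks payment methods, but the basic idea can be sketched as ordering a shopper’s options by an estimated likelihood of use, breaking ties by the merchant’s processing cost. The figures and method names below are made up purely for illustration, not drawn from Adyen’s model.

```python
# Hypothetical data: per-shopper likelihood of using each method (e.g. from past behaviour)
# and the merchant's relative processing cost per method.
likelihood = {"ideal": 0.55, "visa": 0.30, "paypal": 0.10, "klarna": 0.05}
relative_cost = {"ideal": 0.2, "visa": 1.0, "paypal": 1.4, "klarna": 1.8}

def checkout_order(methods):
    # Most likely first; among equally likely methods, prefer the cheaper one for the merchant.
    return sorted(methods, key=lambda m: (-likelihood.get(m, 0), relative_cost.get(m, 99)))

print(checkout_order(["visa", "paypal", "ideal", "klarna"]))
# ['ideal', 'visa', 'paypal', 'klarna']
```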

Improving efficiency and security

Beyond speed, Personalize improves margins and security by highlighting cost-effective payment methods and identifying risk signals before a payment is even attempted. These optimizations, supported by detailed reporting, A/B testing capabilities, and configurable UI components, allow merchants to pinpoint friction and validate performance in real-time. As a result, early data shows businesses can improve conversion rates by up to 6% and lower transaction costs by up to 3%.

Turning payments into a strategic advantage

Results from initial pilots demonstrate how Personalize helps businesses manage transaction costs while improving the shopper experience. Hospitality tech platform Tebi saw a 4.26% cost saving alongside a 0.8% lift in checkout conversions. These results show that real-time checkout customization can protect margins without adding friction to the customer journey.

The Personalize module is available now to Adyen customers as part of Adyen Uplift. For more information, read here.

Hisense Designated World’s First Customer Centricity Lighthouse in TV Industry by World Economic Forum

Posted in Commentary with tags on February 11, 2026 by itnerd

Hisense announced that the Hisense Visual Technology Qingdao Factory in China has been recognized by the World Economic Forum (WEF) as a Customer Centricity Lighthouse, becoming the first and only such factory in the global television industry.

The designation was announced as part of the WEF’s Global Lighthouse Network, which recognizes industrial sites applying advanced digital technologies to improve customer value, speed-to-market and operational performance. The Customer Centricity Lighthouse designation represents a key milestone in Hisense’s human-centric digital transformation and intelligent manufacturing strategy.

Operating in a mature and highly competitive global TV market, the Hisense Visual Technology Qingdao Factory faced rapidly evolving consumer demand and increasing cost pressure. In response, the site undertook a comprehensive digital transformation, embedding artificial intelligence, big data, industrial simulation and large-scale virtual reality (VR) across new product R&D and manufacturing. As a result, the factory achieved a Net Promoter Score (NPS) of 84 per cent, reduced R&D cycles by 34 per cent, lowered material costs by 18 per cent and shortened new employee training time by 60 per cent. The cycle from capturing customer needs to translating them into product functions was reduced by 62 per cent, while production efficiency for 85-inch TVs improved to a 20-second manufacturing cycle.

This marks Hisense’s third Lighthouse designation within the WEF Global Lighthouse Network. Previously, Hisense Hitachi’s Huangdao factory was recognized as the world’s first Sustainability Lighthouse in the VRF sector and the industry’s only dual Lighthouse factory, underscoring Hisense’s leadership in AI-enabled sustainable manufacturing.

For more information, please visit hisense-canada.com

Forcepoint X-Labs Uncovers SmartScreen Evasion Campaign Abusing ScreenConnect for Persistent Remote Access

Posted in Commentary with tags on February 11, 2026 by itnerd

Authored by Mayur Sewani, Senior Security Researcher at Forcepoint X-Labs, the research describes:

A campaign in which a spoofed email impersonating the U.S. Social Security Administration delivers a malicious attachment designed for silent execution and privilege escalation.

The script disables Windows SmartScreen, removes the Mark-of-the-Web, and installs a legitimate ScreenConnect client that is then abused as a Remote Access Trojan (RAT) to maintain command-and-control access. 

Notably, the ScreenConnect client analyzed was signed with a certificate that had been explicitly revoked, underscoring how attackers are leveraging trusted tooling to evade detection. 

The compromised host ultimately establishes encrypted communications with a remote server linked to Iranian network infrastructure, enabling data exfiltration activity. 

Why This Matters

This research highlights a growing defensive challenge: attackers increasingly bypass traditional security controls by modifying system protections and repurposing legitimate IT management software. The findings reinforce the need for organizations to block revoked software, enforce strict RMM allowlists, and monitor for security-control tampering.
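By way of illustration (and not Forcepoint’s detection logic), the sketch below checks two of the artifacts involved on a Windows host: whether a downloaded file still carries its Mark-of-the-Web Zone.Identifier stream, and the commonly documented SmartScreenEnabled registry value. The file path is hypothetical and exact registry values vary by Windows version.

```python
import sys

def has_mark_of_the_web(path: str) -> bool:
    """True if the file still carries its Zone.Identifier alternate data stream (Windows/NTFS)."""
    try:
        with open(path + ":Zone.Identifier", encoding="utf-8", errors="ignore") as ads:
            return "ZoneId" in ads.read()
    except OSError:
        return False

def smartscreen_setting() -> str:
    """Read the commonly documented Explorer SmartScreen registry value, if present."""
    import winreg  # Windows only
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer")
        value, _ = winreg.QueryValueEx(key, "SmartScreenEnabled")
        return value
    except OSError:
        return "not set"

if __name__ == "__main__" and sys.platform == "win32":
    sample = r"C:\Users\Public\Downloads\invoice.pdf"  # hypothetical downloaded file
    print("MotW present:", has_mark_of_the_web(sample))
    print("SmartScreenEnabled:", smartscreen_setting())
```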

You can read the research here: ScreenConnect Attack: SmartScreen Bypass and RMM Abuse

AI Adoption Report from Nudge Security Reveals How Widespread AI Use Is Transforming Security Governance

Posted in Commentary with tags on February 11, 2026 by itnerd

Nudge Security, the leading innovator in SaaS and AI security governance, today announced the findings of its newest report, AI Adoption in Practice: What Enterprise Usage Data Reveals About Risk and Governance, which provides revealing insights into workforce AI adoption and usage patterns. The report found that AI use has moved beyond experimentation and general-purpose chat tools, and is now embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action.

The research report is based on anonymized and aggregated telemetry collected across Nudge Security customer environments. Rather than relying on surveys or self-reported usage, this analysis is grounded in direct observation of AI activity within enterprise environments. The percentages referenced below reflect the percentage of organizations using each tool, unless otherwise noted.

The report’s key findings include:

  • Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%
  • The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
  • Agentic tooling is emerging. Agent tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
  • Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with the organization’s productivity suite, as well as knowledge management systems, code repositories, and other tools.
  • Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 66.8% of prompt volume and Google Gemini for 29.6% (together 96.4%).
  • Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity.
  • Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).

AI governance in practice differs from this reality

AI governance has emerged as a top priority for security and risk leaders, but many programs remain narrowly focused on vendor approvals, acceptable use policies, or model-level risk. While necessary, these controls alone are insufficient. As this research illustrates, the most consequential AI risks now stem from how employees actually use AI tools day to day—what data they share, which systems AI is connected to, and how deeply AI is embedded into other tools and operational workflows. Understanding these intersections—between people, permissions, and platforms—is the foundation of effective AI security.
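Nudge Security has not published its detection logic, but the kind of sensitive-data event the report describes can be illustrated with a simple pattern-based scan over prompt text. The patterns below are a small, hypothetical sample; real detectors combine many more provider key formats plus entropy checks.

```python
import re

# Illustrative patterns only, not an exhaustive or production-grade set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token":   re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_secret": re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of secret patterns found in a prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this config: api_key = sk-test-123 and AKIAABCDEFGHIJKLMNOP"
print(scan_prompt(prompt))  # ['aws_access_key', 'generic_secret']
```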

To download the report, visit https://www.nudgesecurity.com/content/ai-adoption-in-practice.

Inside Gunra RaaS – Dark Web Affiliate Infiltration & Technical Dissection

Posted in Commentary with tags on February 11, 2026 by itnerd

CloudSEK’s threat intelligence team has just published an in-depth investigation into Gunra, a rapidly emerging Ransomware-as-a-Service (RaaS) operation that has formalized its affiliate recruitment on the dark web.

What makes this report significant is that their researchers successfully infiltrated the affiliate program, gaining access to:

  • The live RaaS management panel
  • Affiliate documentation (operator guide)
  • A functional ransomware locker sample for full reverse engineering
     

Key findings include:

  • Gunra operates a professionalized RaaS business model, lowering the barrier for cybercriminals through structured affiliate onboarding.
  • The locker uses a ChaCha20 + RSA-4096 hybrid encryption model, making decryption cryptographically infeasible without attacker-controlled private keys.
  • The malware executes fully offline, bypassing network-based detection during encryption.
  • It implements multi-threaded parallel encryption, enabling rapid filesystem-wide impact within minutes.
  • The ransomware performs surgical targeting, excluding system directories (C:\Windows, Program Files) to maintain operability and ensure ransom payment.
  • Embedded Tor payment infrastructure and hardcoded credentials streamline victim-to-operator communication.
  • Complete MITRE ATT&CK mapping and actionable IOCs are included for defenders.
     

This report provides rare insight into both the business infrastructure and technical core of a growing RaaS operation.

Full report: https://www.cloudsek.com/blog/inside-gunra-raas-from-affiliate-recruitment-on-the-dark-web-to-full-technical-dissection-of-their-locker 

Volume of OpenClaw public internet exposures spirals

Posted in Commentary with tags on February 10, 2026 by itnerd

In a report published yesterday, SecurityScorecard’s STRIKE threat intelligence team identified a widespread exposure problem affecting the OpenClaw open-source, vibe-coded AI agent platform, with more than 135,000 instances of the software publicly exposed to the internet. This is in addition to previously known vulnerabilities in the platform.

   “Our findings reveal a massive access and identity problem created by poorly secured automation at scale. Convenience-driven deployment, default settings, and weak access controls have turned powerful AI agents into high-value targets for attackers,” the STRIKE team wrote in the report.

OpenClaw’s bot extensions “skill store” had three high-risk CVEs attributed to it in recent weeks, and it has also been documented that its various skills can be cracked fairly easily, exposing API keys, credit card numbers, PII, and other data valuable to cybercriminals.

Just a few hours after the report was published, as the number of internet-facing OpenClaw instances associated with known threat actor IPs increased, the figures on STRIKE’s live OpenClaw threat dashboard climbed sharply: the number of identified vulnerable systems grew by 40,000, the number of RCE-vulnerable instances went from 12,812 to more than 50,000, and the number of instances linked to previously reported breaches jumped from 549 to over 53,000.

Researchers recommend that OpenClaw users immediately change the default network binding so the service listens only on localhost.

   “Out of the box, OpenClaw binds to `0.0.0.0:18789`, meaning it listens on all network interfaces, including the public internet. For a tool this powerful, the default should be `127.0.0.1` (localhost only). It isn’t,” STRIKE noted.
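The report excerpt does not show OpenClaw’s own configuration, but the difference between the two bindings is easy to demonstrate with a generic TCP listener:

```python
import socket

def start_listener(bind_addr: str, port: int = 18789) -> socket.socket:
    """Open a TCP listener. 127.0.0.1 is reachable only from the same machine;
    0.0.0.0 listens on every interface, including any public-facing ones."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((bind_addr, port))
    srv.listen()
    return srv

# Safe default for a local-only agent:
local_only = start_listener("127.0.0.1")
print("listening on", local_only.getsockname())

# The risky default the researchers describe (every interface):
# exposed = start_listener("0.0.0.0")
```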

Ryan McCurdy, VP of Marketing, Liquibase:

   “This is what automation at scale looks like when controls lag behind speed. Teams are moving fast but security and governance have to start with safe defaults, tight network exposure, and auditable access. Otherwise, the first misconfiguration becomes a repeatable incident pattern.”

Michael Bell, Founder & CEO, Suzu Labs:

   “135,000 OpenClaw instances are listening on the public internet right now. Most have no authentication. Most are running versions with known RCE vulnerabilities and public exploit code. The platform binds to all network interfaces by default, and the numbers tell you how many users changed that setting.

   “We just saw the same fundamental problem with Claude Desktop Extensions last week. AI agent platforms keep shipping with full system access and no trust boundaries. OpenClaw is what that looks like at scale. 78% of exposed instances haven’t applied the critical patches from January 29. Some are running on infrastructure previously linked to Kimsuky, APT28, and Salt Typhoon. And this isn’t hobbyists in garages. STRIKE found exposed instances in financial services, healthcare, government, and education.

   “A privileged service account with no password on an internet-facing server would get someone fired. An AI agent with the same access level and the same exposure is somehow a feature.”

John Carberry, Solution Sleuth, Xcape, Inc.:

   “The widespread exposure of over 175,000 OpenClaw instances serves as a stark warning about the perils of “vibe-coded” AI agents that prioritize ease of use over fundamental security. By defaulting to a 0.0.0.0:18789 binding, OpenClaw effectively opened the door for the public Internet to engage with potent autonomous agents holding direct access to sensitive API keys and PII.

   “This “convenience-first” approach has generated a vast, automated attack surface, with over 50,000 instances now confirmed vulnerable to Remote Code Execution (RCE). The rapid increase in systems connected to known threat actor IPs, observed within hours of the SecurityScorecard report, indicates that cybercriminals are leveraging the same speed of automation for weaponization as developers used for deployment. What’s particularly alarming is how swiftly AI tools designed for convenience can lead to widespread access and identity breaches when basic safeguards are absent.

   “For security teams, immediate action is imperative: limit network exposure by configuring listening IP Addresses to only those required, revoke and reissue all potentially compromised keys and secrets, scan for misconfigurations using tools like Nuclei or Shodan, scrutinize skill extensions for vulnerabilities, implement Zero Trust principles for AI infrastructure, and operate under the assumption of compromise for systems with default configurations.

   “In the long run, SOC teams must manage AI agents with the same rigor as any other privileged infrastructure, implementing robust default security settings, continuous monitoring, and adherence to the principle of least privilege.

   “If you don’t vibe-code your defaults to localhost, hackers will vibe off your information. In short, don’t use these inherently flawed software.”

Vibe coding is a thing. But based on this, perhaps it shouldn’t be. What are your thoughts on this? Please leave a comment and share what you think.

Abstract Security Blog: How a single compromised VM can quietly inherit cloud trust and move across Azure w/out touching the network

Posted in Commentary with tags on February 10, 2026 by itnerd

Abstract Security just published a blog this morning: Moving Laterally through Abuse of Managed Identities attached to VMs.  The blog was written by Abstract’s ASTRO research organization.

The research describes how to build a detection for managed identity abuse. Managed identities are essential to the proper functioning of an Azure environment, but when multiple resources are attached to a single managed identity, tracking how that identity is actually being used becomes difficult.

That is what opens the door to abuse, and detection approaches will vary by environment. For example, a script on a compromised VM could use the machine’s managed identity to access other resources, such as another virtual machine. The detection described in the blog is therefore a generalized way of spotting this class of managed identity abuse.
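To make the abuse path concrete: any process on an Azure VM can request a token for the VM’s managed identity from the instance metadata service (IMDS), which is exactly what a malicious script would do. The sketch below shows that call using the standard, Microsoft-documented mechanism; it is not Abstract’s detection logic. Benign automation and an attacker look identical at this step, which is why detections focus on what the token is used for afterward.

```python
import json
import urllib.request
from urllib.parse import urlencode

# Documented Azure Instance Metadata Service (IMDS) endpoint, reachable only from inside a VM.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token?"
    + urlencode({"api-version": "2018-02-01", "resource": "https://management.azure.com/"})
)

def get_managed_identity_token() -> str:
    """Request an ARM token for the VM's managed identity (the same call any script on the VM can make)."""
    req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["access_token"]

# A benign automation job and a malicious script look identical at this step; the difference shows
# up in what the token is then used for, which is why detections focus on the downstream ARM calls.
```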

You can read the blog post here: https://www.abstract.security/blog/moving-laterally-through-abuse-of-managed-identities-attached-to-vms