Mike Bell, Founder and CEO of Suzu Labs, has just published the research blog “The Company Reviewing Your Meta Glasses Footage Has a Security Problem.”
“Last week, Swedish journalists revealed that Meta sends video footage from Meta Ray-Ban smart glasses to human data annotators at Sama, a San Francisco-based outsourcing company that runs its annotation workforce out of Nairobi, Kenya. Workers described seeing footage of people in bathrooms, bedrooms, and intimate situations. The UK’s Information Commissioner opened a probe. The story dominated privacy news for days,” Bell said.
“Nobody asked the obvious follow-up question. How secure is Sama? We did. And the answer isn’t reassuring.”
Sama Credential Exposure on the Dark Web: Suzu Labs ran dark web intelligence against Sama’s corporate domain (sama.com) using its threat intelligence platform. Within the last 90 days alone, Suzu Labs identified 118 credential entries tied to sama.com circulating across Telegram channels, underground forums, and breach databases. The results were alarming: eighty-three of those entries included plaintext passwords.
Suzu Labs research reveals just how shaky Sama’s current (December 2025–February 2026) security posture is. “Most of these credentials didn’t come from some third-party breach where Sama employees happened to have accounts. Roughly 87% came from info-stealer malware logs. That means malware was running on machines used by people with sama.com email addresses, pulling credentials and session tokens directly off the endpoint. The stealer takes everything on the machine. It doesn’t filter by importance.”
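The kind of tally Suzu Labs describes (entries per domain, plaintext-password count, share attributable to stealer logs) can be sketched as a simple aggregation. This is a hypothetical illustration only: the records, field names ("source", "plaintext"), and schema are invented for the example and are not Suzu Labs’ actual data or pipeline.

```python
# Hypothetical sketch of summarizing leaked-credential entries for one domain.
# All records below are invented placeholders, not real breach data.
from collections import Counter

entries = [
    {"email": "a@sama.com", "source": "stealer_log", "plaintext": True},
    {"email": "b@sama.com", "source": "stealer_log", "plaintext": True},
    {"email": "c@sama.com", "source": "third_party_breach", "plaintext": False},
]

def summarize(entries, domain):
    """Count total entries, plaintext passwords, and stealer-log share for a domain."""
    scoped = [e for e in entries if e["email"].endswith("@" + domain)]
    by_source = Counter(e["source"] for e in scoped)
    return {
        "total": len(scoped),
        "plaintext": sum(1 for e in scoped if e["plaintext"]),
        "stealer_share": by_source["stealer_log"] / len(scoped) if scoped else 0.0,
    }

print(summarize(entries, "sama.com"))
```

With the placeholder records above, the stealer-log share is 2 of 3 entries; against a real feed the same aggregation would surface figures like the 87% Suzu Labs reports.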
The research also evaluates risks to AI training data and other Sama clients, and offers recommendations – for Meta, for Sama, and for every organization.
The Company Reviewing Your Meta Glasses Footage Has a Security Problem: https://suzulabs.com/suzu-labs-blog/the-company-reviewing-your-meta-glasses-footage-has-a-security-problem

CloudSEK Identifies 40,000+ Exposed US Industrial Systems Vulnerable to AI-Assisted Recon as Iranian-Aligned Groups Mobilise
Posted in Commentary with tags CloudSEK on March 6, 2026 by itnerd
CloudSEK researchers have documented how artificial intelligence has fundamentally collapsed the barrier to targeting industrial control systems, compressing what once required weeks of specialist knowledge into a five-minute reconnaissance workflow.
The findings come as the 28 February 2026 US-Israel strikes against Iran triggered the largest single-event activation of Iranian-aligned cyber actors ever documented, with over 60 hacktivist groups mobilising within hours – many without deep ICS expertise, but now equipped with AI tools that make that expertise unnecessary.
Key Findings
Why This Matters
The real shift is not in malware sophistication. It is in speed, scale, and accessibility. AI is enabling less technically mature actors to perform ICS reconnaissance that once required years of specialist knowledge.
In a conflict environment where over 60 groups are simultaneously activated and seeking accessible targets, AI compresses the cycle from intent to impact.
CloudSEK researchers reproduced the AI-assisted reconnaissance chain as a passive research exercise, mirroring the confirmed methodology. Following the same process, researchers identified multiple live, unauthenticated, internet-exposed ICS interfaces with direct operational impact potential.
CloudSEK notes that the passive nature of this research (standard HTTP requests against publicly indexed systems) is indistinguishable from what a threat actor would perform.
The cyber fallout from the Iran-US conflict is not limited to advanced state-linked operators. Loosely aligned hacktivists and proxy actors can now use AI-assisted workflows to identify and prioritise exposed industrial assets in real time, increasing the risk of opportunistic disruption to water treatment, energy distribution, fuel management, and manufacturing operations.
The same 28 February window also saw OpenAI confirm a partnership with the US Department of Defense, triggering a 295% spike in ChatGPT app uninstalls (Sensor Tower via TechCrunch). As commercial AI platforms face governance pressure around military use, threat actors migrate to unconstrained alternatives. The safety guardrails that limited CyberAv3ngers on ChatGPT in 2024 are a floor, not a ceiling.
Immediate Defensive Priorities
CloudSEK recommends that organisations urgently:
CloudSEK’s findings are based on passive reconnaissance of publicly indexed information and exposed web interfaces, without logging into or actively probing any system.
You can read the research here: AI, the Iran-US Conflict, and the Threat to US Critical Infrastructure | CloudSEK