Cybersecurity firm CloudSEK has published research showing that the infrastructure organisations use to train and deploy AI systems is dangerously exposed. The report focuses on MLOps platforms, the operational backbone of modern AI, and finds that leaked credentials and misconfigured deployments are handing adversaries quiet, persistent access to systems that were never designed with security in mind.
The timing matters. After US and Israeli forces struck Iranian nuclear and military sites on February 28, 2026, Iranian APT groups, including MuddyWater, APT34, APT33, and APT35, showed clear signs of heightened activity. But CloudSEK's analysts note that the footholds these groups hold inside Western defence, financial, and aviation networks were not built in response to that escalation. They were built before it.
What CloudSEK Found
In a 72-hour scan of public GitHub repositories and internet-facing infrastructure, the research team identified:
- Over 100 exposed credential instances tied to platforms including ClearML, MLflow, Kubeflow, Metaflow, ZenML, and Weights & Biases. Keys were hardcoded directly into source files, configuration scripts, and environment files that were left public.
- More than 80 MLOps deployments sitting open on the public internet with weak or no authentication. Basic scanning tools like Shodan and FOFA were enough to find them.
- Multiple platforms where anyone could create an account, walk into the dashboard, browse active projects, pull model artifacts, and access connected cloud storage credentials with no barriers at all.
None of this required exploiting a software vulnerability. It used the same interfaces that engineers use every day.
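Exposures like these are usually found with simple pattern scans over repository contents. Below is a minimal sketch of that idea in Python; the rule names and regexes are illustrative only (they are not CloudSEK's tooling, and real scanners such as gitleaks or trufflehog ship far larger rule sets):

```python
import re

# Illustrative patterns for the kinds of hardcoded secrets the report
# describes: cloud access keys and generic API tokens committed to
# source files. Real scanners use hundreds of rules plus entropy checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)\bapi[_-]?key\s*[=:]\s*['\"]([A-Za-z0-9_\-]{16,})['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_value) pairs found in `text`."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            # Use the capture group when the rule has one, else the
            # whole match (e.g. a bare AWS access key ID).
            value = match.group(1) if match.groups() else match.group(0)
            hits.append((name, value))
    return hits

if __name__ == "__main__":
    sample = 'API_KEY = "abc123def456ghi789jkl0"\n'
    for rule, value in scan_text(sample):
        # Print only a redacted prefix, mirroring responsible disclosure.
        print(f"{rule}: {value[:4]}... (redacted)")
```

A scan like this is cheap enough to run as a pre-commit hook, which is exactly why attackers running the same patterns against public GitHub find exposed keys within minutes of a push.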
Why MLOps Platforms Are Worth Targeting
MLOps platforms coordinate everything in an AI operation: training pipelines, model storage, cloud integrations, and execution agents that run around the clock. Getting inside one of these platforms gives an attacker far more than a data breach. It gives them four things:
- Dataset exfiltration: training data typically contains surveillance feeds, telemetry, and behavioural analytics. Studying it tells an adversary exactly what signals a model trusts and where its blind spots are.
- Model theft: downloaded model files can be analysed offline to reverse-engineer the decision logic behind AI systems used in targeting, surveillance, or autonomous operations.
- Training data poisoning: with write access to a pipeline, adversaries can subtly corrupt retraining inputs. The model degrades over time, with no forensic trace and no security alert.
- Execution environment abuse: MLOps workers trust instructions from the control plane. Attackers can use that trust to run arbitrary code inside the compute infrastructure connected to sensitive internal networks.
A Multi-Actor Threat Landscape
The MLOps threat does not sit with Iran alone. North Korea’s Lazarus Group and TraderTraitor have spent years hiding malicious packages inside npm and PyPI ecosystems, quietly compromising developer infrastructure at scale. Chinese APT groups have a direct strategic interest in understanding how Western militaries use AI-assisted decision-making. Russia, too, has been watching.
Proxy groups add further complexity. Hamas-affiliated MOLERATS, Hezbollah-linked operators, and Houthi-aligned actors have all been documented running cyber operations in parallel with kinetic activity, often targeting the same organisations their backers have in their sights.
The report’s sharpest point is about intent. These actors do not need to destroy an AI system. They need to make it unreliable. A targeting model whose thresholds shift through poisoned retraining data, an anomaly detector tuned to ignore a specific pattern: that is battlefield sabotage. It leaves no forensic trace, triggers no security alert, and has no obvious point of attribution.
The Security Gap No One Is Talking About
The core problem is not a software bug. It is a maturity gap. CI/CD systems and cloud IAM services have been hardened through more than a decade of real-world attack exposure. Most MLOps platforms have not. They were built to speed up model development, and security was rarely part of the original brief.
One finding stands out. Cloud storage credentials for AWS S3, Google Cloud Storage, and Azure Blob are routinely stored inside MLOps platform interfaces in a form that can simply be retrieved. Anyone who gets into the platform gets the keys to the cloud storage too. One breach becomes two.
What Organisations Should Do Now
CloudSEK lays out four immediate steps:
- Stop hardcoding credentials. API keys, access tokens, and cloud credentials have no place in source code or config files. Use a dedicated secrets manager and rotate regularly.
- Take MLOps platforms off the public internet. Enforce authentication, segment networks, and switch off open self-registration on any externally accessible instance.
- Drop static cloud storage keys in favour of short-lived, role-based credentials. It limits how far a compromise can spread.
- Treat MLOps like the critical infrastructure it is. Monitor access to datasets, models, and pipelines with the same rigour applied to CI/CD systems and cloud control planes.
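The first recommendation, removing secrets from source, can be as simple as resolving credentials at runtime. A minimal Python sketch (the variable name `MLFLOW_TRACKING_TOKEN` is a hypothetical example, not taken from the report):

```python
import os

def get_secret(name: str) -> str:
    """Fetch a secret from the runtime environment.

    In production this lookup would typically go to a dedicated secrets
    manager (HashiCorp Vault, AWS Secrets Manager, etc.) rather than a
    plain environment variable, and cloud access would use short-lived
    role-based credentials (e.g. STS AssumeRole) instead of static keys.
    The point is the same either way: the value is injected at deploy
    time and never appears in source code or committed config files.
    """
    value = os.environ.get(name)
    if value is None:
        # Fail loudly rather than falling back to a hardcoded default.
        raise RuntimeError(f"secret {name!r} is not set in the environment")
    return value

# Hypothetical usage: the deployment environment injects the token,
# so the repository never contains it.
# token = get_secret("MLFLOW_TRACKING_TOKEN")
```

Failing hard when the secret is absent matters: a silent fallback to a default or committed value is how hardcoded credentials creep back in.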
Note on Responsible Disclosure
This research was conducted using publicly accessible information. All validation was performed passively, with no modifications made to any systems, pipelines, datasets, or models. All sensitive details, including credential values and organizational identifiers, have been redacted.
Chinese Hackers Plant Digital Sleeper Cells in Telecom Backbone
Posted in Commentary with tags Hacked on March 26, 2026 by itnerd

Researchers at Rapid7 have uncovered evidence of an advanced China-nexus threat actor, Red Menshen, placing stealthy digital sleeper cells in telecommunications networks to carry out high-level espionage, including against government networks.
Rapid7 has a blog post on this here: https://www.rapid7.com/blog/post/tr-bpfdoor-telecom-networks-sleeper-cells-threat-research-report/
Lieutenant General Ross Coffman (U.S. Army, Ret.), who currently serves as President of Forward Edge-AI, provided the following comment:
“Chinese hackers caught deep in the backbone of telecommunications infrastructure are doing so for high-level espionage.
Anyone that’s surprised by this news should be embarrassed. This is not the end nor the beginning. We’re in a fight to protect our data. PWC technologies that protect data inflight need to be deployed across verticals to protect the US and the free world against China and other malicious actors.”
This shows how far threat actors are willing to go to execute whatever plans they have. This is crafty, stealthy, and dangerous. Defenders should bear that in mind.