Cybersecurity firm CloudSEK has published research showing that the infrastructure organisations use to train and deploy AI systems is dangerously exposed. The report focuses on MLOps platforms, the operational backbone of modern AI, and finds that leaked credentials and misconfigured deployments are handing adversaries quiet, persistent access to systems that were never designed with security in mind.
The timing matters. After US and Israeli forces struck Iranian nuclear and military sites on February 28, 2026, Iranian APT groups, including MuddyWater, APT34, APT33, and APT35, showed clear signs of heightened activity. But CloudSEK’s analysts note that the footholds these groups hold inside Western defence, financial, and aviation networks were not built in response to that escalation. They were built before it.
What CloudSEK Found
In a 72-hour scan of public GitHub repositories and internet-facing infrastructure, the research team identified:
- Over 100 exposed credential instances tied to platforms including ClearML, MLflow, Kubeflow, Metaflow, ZenML, and Weights & Biases. Keys were hardcoded directly into source files, configuration scripts, and environment files that were left public.
- More than 80 MLOps deployments sitting open on the public internet with weak or no authentication. Basic scanning tools like Shodan and FOFA were enough to find them.
- Multiple platforms where anyone could create an account, walk into the dashboard, browse active projects, pull model artifacts, and access connected cloud storage credentials with no barriers at all.
None of this required exploiting a software vulnerability. It used the same interfaces that engineers use every day.
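Exposures like these are trivially discoverable. A minimal sketch of the kind of pattern matching a credential scan might apply to repository files (the regexes below are illustrative examples, not CloudSEK's actual tooling; production scanners such as gitleaks or truffleHog use far larger rule sets plus entropy analysis):

```python
import re

# Illustrative patterns for two common credential formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(?:api[_-]?key|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in a file's contents."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((name, m.group(0)))
    return hits

# Example: a config fragment with hardcoded keys, the pattern the report describes
sample = (
    'aws_access_key_id = AKIAIOSFODNN7EXAMPLE\n'
    'api_key: "sk_live_abcdefghij1234567890"'
)
print(scan_text(sample))
```

Running the same logic across an organisation's public repositories is essentially what both the researchers and the attackers are doing.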
Why MLOps Platforms Are Worth Targeting
MLOps platforms coordinate everything in an AI operation: training pipelines, model storage, cloud integrations, and execution agents that run around the clock. Getting inside one of these platforms gives an attacker far more than a data breach. It gives them four things:
- Dataset exfiltration: training data typically contains surveillance feeds, telemetry, and behavioural analytics. Studying it tells an adversary exactly what signals a model trusts and where its blind spots are.
- Model theft: downloaded model files can be analysed offline to reverse-engineer the decision logic behind AI systems used in targeting, surveillance, or autonomous operations.
- Training data poisoning: with write access to a pipeline, adversaries can subtly corrupt retraining inputs. The model degrades over time, with no forensic trace and no security alert.
- Execution environment abuse: MLOps workers trust instructions from the control plane. Attackers can use that trust to run arbitrary code inside the compute infrastructure connected to sensitive internal networks.
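Silent artifact tampering is hard to catch after the fact, which is one reason integrity checks on model files matter. A minimal sketch of the idea, assuming a simple JSON manifest (this format is an illustration, not a specific platform's feature): record a SHA-256 digest when a model is published, and verify it before every deployment.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(artifact: Path, manifest: Path) -> None:
    """Record the artifact's digest at publish time."""
    manifest.write_text(json.dumps({"file": artifact.name, "sha256": sha256_of(artifact)}))

def verify(artifact: Path, manifest: Path) -> bool:
    """Recompute the digest at deploy time and compare against the manifest."""
    expected = json.loads(manifest.read_text())["sha256"]
    return sha256_of(artifact) == expected

# Example: publish a stand-in model file, then detect a single-byte modification
model = Path("model.bin")
model.write_bytes(b"\x00" * 1024)
manifest = Path("model.manifest.json")
write_manifest(model, manifest)
print(verify(model, manifest))           # artifact untouched
model.write_bytes(b"\x01" + b"\x00" * 1023)
print(verify(model, manifest))           # artifact was altered
```

A check like this only helps if the manifest itself lives outside the compromised platform, e.g. in a separately secured signing or attestation service.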
A Multi-Actor Threat Landscape
The MLOps threat does not sit with Iran alone. North Korea’s Lazarus Group and TraderTraitor have spent years hiding malicious packages inside npm and PyPI ecosystems, quietly compromising developer infrastructure at scale. Chinese APT groups have a direct strategic interest in understanding how Western militaries use AI-assisted decision-making. Russia, too, has been watching.
Proxy groups add further complexity. Hamas-affiliated MOLERATS, Hezbollah-linked operators, and Houthi-aligned actors have all been documented running cyber operations in parallel with kinetic activity, often targeting the same organisations their backers have in their sights.
The report’s sharpest point is about intent. These actors do not need to destroy an AI system. They need to make it unreliable. A targeting model whose thresholds shift through poisoned retraining data, an anomaly detector tuned to ignore a specific pattern: that is battlefield sabotage. It leaves no forensic trace, triggers no security alert, and has no obvious point of attribution.
The Security Gap No One Is Talking About
The core problem is not a software bug. It is a maturity gap. CI/CD systems and cloud IAM services have been hardened through more than a decade of real-world attack exposure. Most MLOps platforms have not. They were built to speed up model development, and security was rarely part of the original brief.
One finding stands out. Cloud storage credentials for AWS S3, Google Cloud Storage, and Azure Blob are routinely stored inside MLOps platform interfaces in a form that can simply be retrieved. Anyone who gets into the platform gets the keys to the cloud storage too. One breach becomes two.
What Organisations Should Do Now
CloudSEK lays out four immediate steps:
- Stop hardcoding credentials. API keys, access tokens, and cloud credentials have no place in source code or config files. Use a dedicated secrets manager and rotate regularly.
- Take MLOps platforms off the public internet. Enforce authentication, segment networks, and switch off open self-registration on any externally accessible instance.
- Drop static cloud storage keys in favour of short-lived, role-based credentials. It limits how far a compromise can spread.
- Treat MLOps like the critical infrastructure it is. Monitor access to datasets, models, and pipelines with the same rigour applied to CI/CD systems and cloud control planes.
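The first recommendation is the easiest to act on. A minimal sketch of the pattern, assuming credentials are injected via environment variables (a secrets manager SDK would slot into `require_secret` the same way; the variable name `MLOPS_API_KEY` is hypothetical):

```python
import os

def require_secret(name: str) -> str:
    """Resolve a credential at runtime and fail fast if it is missing,
    instead of falling back to a hardcoded default in source code."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; inject it from a secrets manager or the "
            "deploy environment, never commit it to the repository"
        )
    return value

# Anti-pattern from the report: API_KEY = "sk_live_..." committed to the repo.
# Instead, the deploy environment sets the variable and code resolves it at runtime.
os.environ.setdefault("MLOPS_API_KEY", "demo-value-for-illustration")  # set by CI/CD in practice
api_key = require_secret("MLOPS_API_KEY")
```

Failing loudly on a missing secret also makes accidental fallbacks to stale or shared credentials visible immediately.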
Note on Responsible Disclosure
This research was conducted using publicly accessible information. All validation was performed passively, with no modifications made to any systems, pipelines, datasets, or models. All sensitive details, including credential values and organizational identifiers, have been redacted.
For More Details, Read The Full Report
Spring forward with these must-have tech essentials from Samsung
Posted in Commentary with tags Samsung on March 30, 2026 by itnerd
Spring is a natural moment to refresh the devices Canadians rely on every day. Samsung’s latest Galaxy lineup introduces updated AI capabilities, performance upgrades, and deeper ecosystem integration across mobile, audio, wearables, and PC.
Here are a few standout devices, each defined by the core innovations driving them:
Galaxy S26 Series
Including Galaxy S26, S26+, and S26 Ultra, the latest S series is powered by Snapdragon® 8 Elite Gen 5 (3nm) and introduces expanded on-device AI. Features like Now Nudge enable context-aware assistance, Notification Intelligence prioritizes key alerts, and Circle to Search 3.0 supports multi-object recognition. Privacy Screen adds pixel-level display protection, while Nightography Video enhances low-light capture.
Galaxy Book6 Series
Including Galaxy Book6 and Galaxy Book6 Pro, the lineup combines Intel® Core™ Ultra processors with AI-driven productivity tools. The Pro model features a high-resolution AMOLED display with HDR support and variable refresh rate, alongside extended battery life and seamless continuity across Galaxy devices.
Galaxy Buds4 Series
Including Galaxy Buds4 and Galaxy Buds4 Pro, the series introduces upgraded 2-way speakers (Pro), 24-bit Hi-Fi sound, and adaptive noise control. AI integrations enable voice access to Gemini, Bixby, and Perplexity, with new head gesture controls offering hands-free call management.
Galaxy Watch8 Series
Including Galaxy Watch8 (40mm/44mm) and Galaxy Watch8 Classic (46mm), the series features a new 3nm chipset, expanded storage, and enhanced sensor capabilities. Updates include improved sleep analysis, activity tracking, and gesture controls, with the Classic model adding a rotating bezel and quick-access button.
Samsung Care+ provides coverage with unlimited repairs using Samsung-certified parts, free device replacement for loss, and worldwide repair support. Designed to maintain device performance and value over time, it offers an alternative to traditional carrier insurance with broader global coverage.
For a limited time, until April 2, Canadian customers can access launch offers including 25% off Samsung Care+ for Galaxy S26 Ultra and 15% off across Galaxy S26 and S26+, Galaxy Buds4 series, and Galaxy Book6 series.
More details are available at samsung.com/ca.