CloudSEK Raises $19 Million in Series B1 Funding to Scale Predictive Cybersecurity Platform

Posted in Commentary with tags on May 20, 2025 by itnerd

CloudSEK has raised $19 million across its Series A2 and B1 funding rounds. The round included participation from a mix of India- and US-based investors, such as MassMutual Ventures, Inflexor Ventures, Prana Ventures, Tenacity Ventures, and select strategic investors, including Commvault. Notably, Meeran Family (founders of Eastern Group), StartupXSeed, Neon Fund and Exfinity Ventures are among CloudSEK’s earlier backers and continue to support the company’s long-term vision.

Founded in 2015 by cybersecurity researcher-turned-entrepreneur Rahul Sasi, CloudSEK was created with a mission to build a safer digital future by proactively predicting and mitigating cyber threats. What began as a research-driven initiative has since evolved into one of the industry’s most trusted threat intelligence platforms—serving 250+ enterprises across banking, healthcare, technology, and the public sector.

The newly raised capital will fuel CloudSEK’s continued product innovation and global expansion, with a focus on advancing its AI models and platform integrations. Unlike traditional tools that respond after an incident, CloudSEK identifies Initial Attack Vectors (IAVs)—the earliest signs of a potential breach, such as leaked credentials, exposed APIs, or compromised vendors.

CloudSEK’s differentiated approach has resonated globally, earning the company a 4.8-star rating on Gartner Peer Insights across 195 reviews, making it one of the most recommended vendors in the cybersecurity space.

With this funding and a strategic investor on board, CloudSEK is doubling down on its vision to make predictive threat intelligence a global cybersecurity standard—empowering organizations to stay ahead of increasingly sophisticated threat actors.

Outpost24 Introduces AI-Powered Digital Risk Protection to Simplify and Expedite Threat Analysis

Posted in Commentary with tags on May 20, 2025 by itnerd

Outpost24, a leading provider of cyber risk management and threat intelligence solutions, today announced the addition of AI-enhanced summaries to the Digital Risk Protection (DRP) modules within its External Attack Surface Management (EASM) platform.

With Outpost24’s DRP modules, organizations are able to identify, monitor, and protect against threats before they can be exploited. DRP’s threat intelligence provides continuous scans for exposed credentials, brand impersonations, data leaks and more. While this is all valuable information to have, these DRP findings can be challenging and time-consuming for security teams to interpret. 

Leveraging a large language model (LLM), DRP jobs are enhanced to automatically generate a 25-word summary that replaces the original, complex DRP excerpts. This will help customers reduce decision-making time by:

  • Providing helpful content insights in an easily-understandable format
  • Translating foreign language threat information into English
  • Distilling threat intelligence into key areas of concern
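Outpost24 hasn't published implementation details, but the summarization step described above can be sketched roughly as follows: the raw finding is sent to a privately hosted LLM with an instruction to produce a concise English summary, and the result is capped at the 25-word limit. The function name and prompt below are illustrative assumptions, not Outpost24's actual code.

```python
from typing import Callable

def summarize_finding(finding: str, llm: Callable[[str], str], max_words: int = 25) -> str:
    """Ask a (privately hosted) LLM for a short English summary of a DRP
    finding, then enforce the word cap on whatever comes back."""
    prompt = (
        f"Summarize the following threat finding in at most {max_words} "
        f"English words, translating into English if necessary:\n\n{finding}"
    )
    summary = llm(prompt)  # the LLM call is injected, so data stays in-house
    words = summary.split()
    return " ".join(words[:max_words])
```

Injecting the LLM call as a parameter mirrors the article's point about a private LLM instance: the finding text never has to leave the organization's own deployment.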

Outpost24 is continuously researching and developing how to bring AI-enhanced functionality into its Attack Surface Management (ASM) platforms. The addition of AI-enhanced summaries now sits alongside the Domain Discovery AI feature already available in the EASM platform. 

Additionally, while DRP results are already publicly available by nature, Outpost24 is committed to ensuring that data is not further leaked to third parties. For this reason, AI summaries are powered by a private LLM instance.

To learn more about Outpost24’s EASM Platform with Digital Risk Protection modules, including the addition of AI-powered summaries, please click here.

Alluxio Enterprise AI 3.6 Accelerates Model Distribution, Optimizes Model Training Checkpoint Writing, and Enhances Multi-Tenancy Support

Posted in Commentary with tags on May 20, 2025 by itnerd

Alluxio today announced the release of Alluxio Enterprise AI 3.6, delivering breakthrough capabilities for model distribution, model training checkpoint writing optimization, and enhanced multi-tenancy support. This latest version enables organizations to dramatically accelerate AI model deployment cycles, reduce training time, and ensure seamless data access across cloud environments.

AI-driven organizations face increasing challenges as model sizes grow and inference infrastructures span multiple regions. Distributing large models from training to production environments introduces significant latency issues and escalating cloud costs, while lengthy checkpoint writing processes substantially slow down the model training cycle.

Alluxio Enterprise AI version 3.6 includes the following key features:

●      High-Performance Model Distribution – Alluxio Enterprise AI 3.6 leverages the Alluxio Distributed Cache to accelerate model distribution workloads. By placing a cache in each region, model files need only be copied from the Model Repository to the Alluxio Distributed Cache once per region rather than once per server. Inference servers can then retrieve models directly from the cache, with further optimizations including local caching on inference servers and memory pool utilization. Benchmarks demonstrate impressive throughput, with the Alluxio AI Acceleration Platform achieving 32 GiB/s, exceeding the 11.6 GiB/s of available network capacity by over 20 GiB/s.
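The per-region copy pattern described above can be sketched generically: each region holds one shared cache, and every inference server checks its own local copy, then the regional cache, before anything touches the repository. This is an illustrative sketch of the caching idea only; the class names are invented and this is not Alluxio's actual API.

```python
class RegionalModelCache:
    """One shared cache per region: the model repository is hit at most
    once per region for a given model, not once per server."""
    def __init__(self, fetch_from_repo):
        self.fetch_from_repo = fetch_from_repo
        self.cached = {}

    def get(self, model_id):
        if model_id not in self.cached:
            self.cached[model_id] = self.fetch_from_repo(model_id)
        return self.cached[model_id]


class InferenceServer:
    """Each server keeps a local copy but pulls from the regional cache,
    never directly from the remote repository."""
    def __init__(self, regional_cache):
        self.regional_cache = regional_cache
        self.local = {}

    def load_model(self, model_id):
        if model_id not in self.local:
            self.local[model_id] = self.regional_cache.get(model_id)
        return self.local[model_id]
```

With this layout, adding more inference servers in a region adds no extra load on the central repository, which is the source of the bandwidth savings the release notes describe.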

●      Fast Model Training Checkpoint Writing – Building on the CACHE_ONLY Write Mode introduced earlier, version 3.6 debuts the new ASYNC write mode, delivering up to 9 GB/s write throughput in 100 Gbps network environments. This enhancement significantly reduces the time needed for model training checkpoints by writing to the Alluxio cache instead of directly to the underlying file system, avoiding network and storage bottlenecks. With ASYNC write mode, checkpoint files are flushed to the underlying file system asynchronously to optimize training performance.
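The ASYNC idea (let training continue while a background worker persists the checkpoint to slower storage) can be illustrated with a small generic sketch. This is the common async-flush pattern, not Alluxio's implementation, and the class name is invented.

```python
import queue
import threading

class AsyncCheckpointWriter:
    """Accept checkpoint writes into a fast queue (standing in for the
    cache tier) and flush them to the slow backing store on a background
    thread, so the training loop is never blocked on storage I/O."""
    def __init__(self, backing_store: dict):
        self.backing_store = backing_store
        self.q = queue.Queue()
        self.worker = threading.Thread(target=self._flush, daemon=True)
        self.worker.start()

    def write(self, name: str, data: bytes) -> None:
        # Returns immediately; the slow persist happens asynchronously.
        self.q.put((name, data))

    def _flush(self) -> None:
        while True:
            item = self.q.get()
            if item is None:  # shutdown sentinel
                break
            name, data = item
            self.backing_store[name] = data  # stands in for slow storage I/O

    def close(self) -> None:
        # Drain remaining checkpoints, then stop the worker.
        self.q.put(None)
        self.worker.join()
```

The trade-off is the usual one for asynchronous writes: a crash between `write` and the background flush can lose the most recent checkpoint, which is why such systems typically still offer a synchronous mode for the final checkpoint.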

●      New Management Console – Alluxio 3.6 introduces a comprehensive web-based Management Console designed to enhance observability and simplify administration. The console displays key cluster information, including cache usage, coordinator and worker status, and critical metrics such as read/write throughput and cache hit rates. Administrators can also manage mount tables, configure quotas, set priority and TTL policies, submit cache jobs, and collect diagnostic information directly through the interface without command-line tools.

This release also introduces several enhancements for Alluxio administrators:

●      Multi-Tenancy Support – This release brings robust multi-tenancy capabilities through seamless integration with Open Policy Agent (OPA). Administrators can now define fine-grained role-based access controls for multiple teams using a single, secure Alluxio cache.

●      Multi-Availability Zone Failover Support – Alluxio Enterprise AI 3.6 adds support for data access failover in multi-Availability Zone architectures, ensuring high availability and stronger data access resilience.

●      Virtual Path Support in FUSE – The new virtual path support allows users to define custom access paths to data resources, creating an abstraction layer that masks physical data locations in underlying storage systems.

Availability 

Alluxio Enterprise AI version 3.6 is available for download here: https://www.alluxio.io/demo

Guest Post: Your AI Isn’t Safe: How LLM Hijacking and Prompt Leaks Are Fueling a New Wave of Data Breaches

Posted in Commentary with tags on May 20, 2025 by itnerd

A junior developer at a fast-growing fintech startup, racing to meet a launch deadline, copied an API key into a public GitHub repo. Within hours, the key was scraped, bundled with others, and traded on Discord to a shadowy network of digital joyriders. 

By the time the company’s CTO noticed the spike in usage, the damage was done: thousands of dollars in LLM compute costs, and a trove of confidential business data potentially exposed to the world.

I’m not hypothesizing. It’s a composite of what’s repeatedly happened in the first half of 2025.

In January, the AI world was rocked by breaches that feel less like the old “oops, someone left a database open” and more like a new genre of cyberattack. DeepSeek, a buzzy new LLM from China, had its keys stolen and saw 2 billion tokens vanish into the ether, used by attackers for who-knows-what. 

A few weeks later, OmniGPT, a widely used AI chatbot aggregator that connects users to multiple LLMs, suffered a major breach, exposing over 34 million user messages and thousands of API keys to the public. 

If you’re trusting these machines with your data, you’re now watching them betray that trust in real time.

The New Playbook: Steal the Mind, Not Just the Data

For years, we’ve worried about hackers stealing files or holding data for ransom. But LLM hijacking is something different – something weirder and more insidious. Attackers are after the very “brains” that power your apps, your research, your business. 

They are scraping GitHub, scanning cloud configs, even dumpster-diving in Slack channels for exposed API keys. Once they find one, they can spin up shadow networks, resell access, extract more information for lateral movement or simply run up service bills that would make any CFO faint. 

Take the DeepSeek case, where attackers used reverse proxies to cover their tracks, letting dozens of bad actors exploit the same stolen keys undetected. The result? You could wake up to a massive bill for unauthorized AI usage – and the nightmare scenario of your private data, whether personal or professional, being leaked across the internet.

But the plot thickens with system prompt leakage. System prompts – the secret scripts that tell a GPT how to behave – are supposed to be hidden from the end users. But with the right prompt, attackers can coax models into revealing these instructions, exposing the logic, rules, and sometimes even extremely sensitive information that keep your AI in check. Suddenly, the AI you thought you understood is playing by someone else’s rules.
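One pragmatic mitigation for prompt leakage is an output filter that refuses to return responses echoing known system-prompt content; seeding the prompt with a unique canary string makes leaks easy to spot. The sketch below is my own illustration of that approach, not a feature of any particular product.

```python
def guard_output(response: str, protected_snippets: list[str]) -> str:
    """Block a model response if it contains any protected snippet of the
    system prompt (case-insensitive), e.g. a canary string planted there."""
    lowered = response.lower()
    for snippet in protected_snippets:
        if snippet.lower() in lowered:
            return "[blocked: response echoed protected prompt content]"
    return response
```

A filter like this is a last line of defense, not a cure: determined attackers can ask the model to paraphrase or encode the prompt, which substring matching won't catch.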

Why This Should Scare Us All

We’re wiring LLMs into everything, everywhere, all at once. Customer service bots, healthcare, legal research, even the systems that write our code. With every new integration, the attack surface grows. But our security culture might still be stuck in the era of password123.

In the meantime, the underground market for LLM exploits is exploding. Stolen keys are traded on Discord like baseball cards. Prompt leakage tools are getting more sophisticated. Hackers are sprinting ahead. And the more autonomy we give these models, the more damage a breach can do. We’re in a battle for control, trust, and the very nature of automation.

Are We Moving Too Fast for Our Own Good?

Thinking of AI as “just another tool” is a mistake. You can’t just plug these systems in and hope to slap on security later, because LLMs aren’t predictable spreadsheets or file servers. They’re dynamic and increasingly autonomous – sometimes making decisions in ways even their creators can’t fully explain. 

Yet, in the hurry to ride the AI gold rush, most organizations are betting their futures on systems they barely understand, let alone know how to defend. Security has been left in the dust, and the cost of that gamble is only going up as LLMs get embedded deeper into everything from business operations to healthcare and finance.

If we don’t change course, we’re headed for a reckoning – lost dollars and, more importantly, trust. The next phase of AI adoption will depend on whether people believe these systems are safe, reliable, and worthy of the power we’re handing them. If we keep treating LLMs like black boxes, we’re inviting disaster.

What Needs to Change, Ideally, Yesterday

So, what do we do? Here’s my take:

  • Treat API keys like plutonium. Rotate them, restrict their scope, and keep them out of your codebase, chats and logs. If you’re still pasting keys into Slack, you’re asking for trouble.
  • Watch everything. Set up real-time monitoring for LLM usage. If your AI starts unexpectedly churning out tokens at 3 a.m., you want to know before your cloud bill explodes.
  • Don’t trust the model’s built-in guardrails. Add your own layers – filter user inputs and system outputs, always assume someone will try to trick your AI if it’s exposed to user input.
  • Red-team your own AI solutions. Try to break it before someone else does. 
  • Implement segregation through access controls. Don’t let your chatbot have the keys to your entire kingdom.
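Two of the items above, keeping keys out of the codebase and watching usage in real time, are cheap to start on. Here is a minimal sketch; the names, environment variable, and budget are illustrative, not a prescribed setup.

```python
import os

def get_api_key(name: str = "LLM_API_KEY") -> str:
    """Read the key from the environment instead of hardcoding it
    anywhere near source control, chats, or logs."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; refusing to fall back to a hardcoded key")
    return key

class UsageMonitor:
    """Raise an alert the moment token consumption in a window exceeds
    the expected budget, rather than waiting for the cloud bill."""
    def __init__(self, hourly_budget: int):
        self.hourly_budget = hourly_budget
        self.used = 0
        self.alerts = []

    def record(self, tokens: int) -> None:
        self.used += tokens
        if self.used > self.hourly_budget:
            self.alerts.append(
                f"LLM usage {self.used} tokens exceeds hourly budget {self.hourly_budget}"
            )
```

In production the alert would page someone or revoke the key automatically; the point is that the check runs on every request, so an attacker burning tokens at 3 a.m. trips it within a single billing window.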

And yes, a handful of vendors are starting to take these threats seriously. Platforms like Nexos.ai offer centralized monitoring and guardrails for LLM activity, while WhyLabs and Lasso Security are developing tools to detect prompt injection and other emerging model threats. None of these solutions are perfect, but together they signal a much-needed shift toward building real security into the generative AI ecosystem.

Your AI’s Brain Is Up for Grabs, Unless You Fight Back

It’s time to recognize that LLM hijacking and system prompt leakage aren’t sci-fi. This stuff is happening right now, and the next breach could be yours. AI is the new brain of your business, and if you’re not protecting it, someone else will take it for a joyride.

I’ve seen enough to know that “hope” isn’t a security strategy. The future of AI seems bright, but only if we get serious about its dark side now – before the next breach turns your optimism into regret.

ABOUT THE AUTHOR

Vincentas Baubonis is an expert in Full-Stack Software Development and Web App Security, with a specialized focus on identifying and mitigating critical vulnerabilities in IoT, hardware hacking, and organizational penetration testing. As Head of Security Research at Cybernews, he leads a team that has uncovered significant privacy and security issues affecting high-profile organizations and platforms such as NASA, Google Play, and PayPal. Under his leadership, the Cybernews team conducts over 7,000 pieces of research annually, publishing more than 600 studies each year that provide consumers and businesses with actionable insights on data security risks. 

Over 50% of top oil & gas firms hit by data breaches in last 30 days: Cybernews

Posted in Commentary with tags on May 20, 2025 by itnerd

A recent Cybernews analysis found that 94% of the world’s top 400 oil and gas companies have suffered at least one data breach to date. Over 50% of the analyzed oil and gas firms were breached in just the last 30 days.

Moreover, according to the Cybernews Business Digital Index, which grades businesses based on their online security measures, 69% of the companies received a cybersecurity score of D or F, and only 10% achieved an A grade.

Key research takeaways:

  • 94% of the largest oil and gas companies had experienced at least one breach, and over 50% were breached within the last 30 days. 
  • Nearly 7 in 10 oil and gas companies are in the high-risk category for cybersecurity, with 35% scoring an F and 34% a D. 
  • Asia-based companies had the lowest average score at 65; Europe and North America fared somewhat better, each with an average score of 74.
  • Credential hygiene is a major weak spot, especially in Asia, where 68% of companies reused previously compromised passwords.
  • Email security remains a critical weakness, affecting 48% of organizations worldwide.
  • 74% of companies have insecure server configurations.
  • Issues with SSL/TLS configuration were identified in 91% of organizations.
  • More than 80% of firms had corporate credentials stolen, while 38% of domains were susceptible to email spoofing attacks.

To read the full research, please click here.

Research Methodology

For this study, Cybernews researchers evaluated 391 companies in the oil and gas industry worldwide. The companies were selected from Companiesmarketcap’s “Largest Oil and Gas Companies by Market Cap” list. 

This report assesses cybersecurity risk across seven core dimensions: software patching, web application security, email security, system reputation, system hosting, SSL/TLS configuration, and data breach history.

The report’s full Methodology can be found here. It provides detailed information on how researchers conducted this analysis.

40,000+ iOS Apps Exploit Private Entitlements

Posted in Commentary with tags on May 20, 2025 by itnerd

Researchers are warning that hackers are increasingly targeting iOS devices through unvetted mobile apps, via methods like privilege escalation, the misuse of private APIs, and sideloading exploits that bypass Apple’s app review process entirely. More than 40,000 apps were found to be using private entitlements, with 800+ relying on private APIs. 

You can find out more here: https://zimperium.com/blog/preventing-malicious-mobile-apps-from-taking-over-ios-through-app-vetting

Erich Kron, security awareness advocate at KnowBe4, had this to say:

“Mobile devices are such an important part of our everyday lives, most of us can’t imagine living without them. They can be incredibly useful, especially with the use of so many great applications available. Unfortunately, people place a lot of trust in these application developers, and will even go out of their way to sidestep built-in security features to install potentially dangerous applications without considering the ramifications.

“The official app stores for most devices do a pretty good job vetting applications and removing or denying publication of those that are malicious or could be problematic, however even that is not foolproof. In some cases, the device owner is willing to bypass the safety features to install applications that seem especially useful or entertaining. Cybercriminals and bad actors take advantage of this desire and will work hard to market dangerous applications as useful, then use them to access bank accounts, steal passwords, and perform other dirty deeds. This can be especially problematic if the devices contain information from their employer or have access to the employers’ network.

“Individuals need to understand that official app stores are in place to protect them, and even with those officially approved applications, there have been issues where the application has turned out to be insecure, or malicious. Organizations should have policies in place to dissuade users from installing unofficial applications, and should ensure that mobile devices have controls in place to safeguard organizational information from potential bad actors.”

The best way to stay secure on the iOS platform is to only download apps from the App Store, and to be careful about which apps you choose to download even when they come from the App Store. That way, the threat actors behind schemes like these are far less effective.

NRS Breach Impacts 210,140 Harbin Clinic Patients

Posted in Commentary with tags on May 20, 2025 by itnerd

The personal information of 210,140 people was stolen in a July 2024 data breach at Nationwide Recovery Services (NRS), a debt collector working for Harbin Clinic. There is more info posted here.

Ensar Seker, CISO at SOCRadar had this to say:

“The Harbin Clinic (NRS) incident is a textbook example of the cascading risks and delayed fallout of third-party breaches in healthcare, where the real victims (patients) are too often left in the dark for far too long.

“This breach highlights the critical danger of delegated data stewardship without sufficient oversight. In this case, a cyberattack on Harbin Clinic’s third-party debt collection vendor, Nationwide Recovery Services (NRS), led to the exposure of highly sensitive health and financial information for hundreds of thousands of patients. But what makes this incident especially concerning is the timeline: the breach occurred in July 2024, yet patients are only being notified nearly a year later.

“Such delays are deeply problematic. They increase the window of exposure for fraud, identity theft, and social engineering attacks, while eroding public trust in how healthcare providers handle patient data. In regulated sectors like healthcare, data sharing doesn’t mean risk sharing stops at the vendor boundary. It’s the responsibility of the covered entity, in this case, Harbin Clinic, to ensure that any vendor handling PHI or financial data has clear contractual obligations for rapid breach reporting, data segregation, encryption, and continuous risk monitoring. This case also underscores a growing pattern where third-party breaches are compounded by slow response cycles, internal communication gaps, and often, outdated or manual incident response processes between partners. We must move toward a model of shared real-time threat visibility across the entire supply chain, along with zero-trust access models that limit how much data vendors can retain or access post-engagement.

Ultimately, healthcare organizations must treat third-party services, especially those handling debt, litigation, or estate matters, as high-risk extensions of their own environment. If they don’t, patients will continue to suffer the consequences of invisible vulnerabilities buried deep in the supply chain.”

Erich Kron, security awareness advocate at KnowBe4 follows with this:

“Unfortunately, this is a case of the true victims being left unaware and vulnerable by the organizations that were trusted to keep their data secure. While the data was lost by NRS, they had been hired by the clinic to perform a service using data the clinic provided to them. As unfortunate as it is that the data was lost in the first place, the failure to notify individuals whose data was compromised for such a long time leaves them open to potential fraud and identity theft. While NRS states there is no evidence to suggest there has been identity theft or fraud related to the incident, it can be extremely difficult to correlate attacks that may have happened specifically to this data dump. Information such as Social Security numbers, birth dates, and medical information generally does not have a shelf life, and this information could be used against the victims of this crime years or decades later.

“In today’s business world, data breaches are a real concern, and processes should be in place to quickly notify customers or employees impacted by the loss of data, along with a reasonable explanation of how to protect themselves now that their data is public.”

You’re only as secure as those you work with. Thus, you need to make sure that those you work with are as secure as possible, just as the NHS in the UK has started to demand from those it works with.

UK’s Legal Aid Has Been Pwned

Posted in Commentary with tags on May 19, 2025 by itnerd

Reports have surfaced that a “significant amount” of private data dating back to 2010, including details of domestic abuse victims, has been stolen from Legal Aid’s online system in an April breach.

More details here: https://www.gov.uk/government/news/legal-aid-agency-data-breach

Martin Jartelius, CISO at cybersecurity company Outpost24, commented:

“While described as “the latest in a line of attacks,” it’s important to note that the Legal Aid Agency (LAA) first detected the breach on 23 April 2025 and has been actively managing the incident since then. Under UK data protection laws, a notifiable personal data breach must be reported to the Information Commissioner’s Office (ICO) within 72 hours, unless it’s unlikely to pose a risk to individuals’ rights. If there’s a high risk, affected individuals must also be informed without undue delay. In this case, the public was not informed until 16 May—nearly three weeks later. While delays can sometimes be justified to assess the situation or support an organized investigation, this timeline falls well outside the expected reporting window.

“Given the sensitivity of the data involved and the scale of the breach, it’s now clear that individuals were placed at risk of further harm, including malicious targeting. Transparency and timely communication are essential—especially when public trust and personal safety are at stake.

“While the UK has recently faced attacks from groups like Scattered Spider, the Legal Aid Agency breach does not currently match their known pattern. This appears to be a targeted compromise of a digital platform, rather than a broader, hands-on infiltration and ransomware operation. This is of course based on the limited data published.”

The UK has started to focus more on upping its cybersecurity game, and this is an example of what I mean. But this breach shows that it has much more work to do on that front.

Commvault Extends Support to Red Hat OpenShift Virtualization Workloads for Enhanced, Cloud-Native Cyber Resilience 

Posted in Commentary with tags on May 19, 2025 by itnerd

Commvault, a leading provider of cyber resilience and data protection solutions for the hybrid cloud, today announced it is extending its Kubernetes protection to support virtual machines (VMs) running on Red Hat OpenShift Virtualization. This new capability enhances cyber resilience for organizations moving to modern application environments. 

Containerized workload adoption is rapidly growing: Gartner predicts 90% of G2000 companies will use container management tools by 2027, and the Containers as a Service (CaaS) market is forecast to hit nearly USD $44B by 2034. This surge makes integrated data protection and recoverability critical. Enterprises must mitigate downtime from ransomware and other disruptions while managing complex data protection across hybrid environments. Using disparate tools for VMs and containers can create overhead, duplicate efforts, and heighten risk. These are just some of the reasons a unified cyber resilience strategy is vital for protection against evolving threats, reducing complexity, streamlining operations, and lowering total cost of ownership (TCO). 

Commvault addresses this by enabling customers to automatically discover, protect, and recover VMs running on Red Hat OpenShift Virtualization alongside their containerized workloads, all through the Commvault Cloud platform. These capabilities can be particularly valuable for DevOps, SRE, IT/backup admins, and technology leaders (CIOs, CISOs, CTOs) that are managing cloud-native estates.  

For customers, this means: 

  • Robust Cyber Resilience: Commvault offers air-gapped and immutable backups with advanced recovery for VMs on Red Hat OpenShift Virtualization, enabling improved business continuity in the face of ransomware and other threats. 
  • Faster and More Flexible Recovery: Customers can restore VMs both in-place and out-of-place, including VM configurations, accelerating deployment and minimizing downtime. 
  • Unified Protection for Hybrid Workloads: Customers can simplify operations by managing both traditional and cloud-native workloads through a single platform, reducing tool sprawl and operational silos. 
  • Cost Savings and Operational Efficiency: Customers can eliminate the need for separate backup infrastructure or tools for VMs, lowering TCO and increasing administrative efficiency. 

Availability  

Commvault support for Red Hat OpenShift Virtualization will be available for early adopters in early summer and is targeted for general availability by early fall. Pricing is aligned with existing Commvault Kubernetes protection models.  

Starburst Announces Strategic Investment from Citi

Posted in Commentary with tags on May 19, 2025 by itnerd

Starburst, the data platform for apps and AI, today announced a strategic investment from Citi. 

Starburst’s platform enables organizations to unify access to distributed data, across cloud, on-premises, and hybrid environments, without the need for data duplication or complex migrations.

  • Starburst’s vision is to deliver cutting-edge AI and analytics solutions on an open, hybrid data lakehouse foundation. 
  • The investment strengthens the company’s momentum in enabling global enterprises to build secure, scalable, and intelligent data applications. 
  • By bringing AI “lakeside,” Starburst eliminates the traditional friction between data, governance, and AI. Starburst’s technology is used by 10 of the top 15 banks. 

Starburst continues to expand its reach into high-demand, regulated industries where AI is becoming a cornerstone of transformation.