Archive for March, 2026

Zalos gets $3.6M for its computer agents to help CFOs

Posted in Commentary with tags on March 24, 2026 by itnerd

Modern finance teams run on a fragmented stack of ERPs, CRMs, spreadsheets, email, and banking platforms that were never designed to talk to each other. APIs between these systems are often missing or incomplete, which means finance teams become the human API themselves, manually stitching data across systems to complete billing cycles, close the books, and produce the reporting their business depends on. Zalos was built on the belief that the next leap in productivity will not come from replacing that stack, but from agentic software that can operate it the same way humans do while understanding deep business context.

Today Zalos, the leader in Computer Agents for Finance Operations, announced a $3.6 million seed round to realize this vision. The funding round was led by 14 Peaks with participation from Cohen Circle, 20VC and notable angels*.

Computer Agents are the defining AI technology for 2026. 2023 was generative AI, 2024 brought multi-modal, and in 2025, AI learnt reasoning. Now AI will take over our computers. OpenAI and Anthropic have both moved into the space with generalist Computer Agents, but Zalos is purpose-built for finance operations, where the stakes of getting it wrong are categorically higher. Finance teams cannot operate on 90% accuracy; the agents need finance-specific skills, and every automated action must be logged in a format auditors can follow. The Computer Agent market is still in its early stages, comparable to where large language models were at GPT-3.5. Zalos’s purpose-built infrastructure and evaluation systems are designed to push reliability to the accuracy levels that CFOs need to automate finance operations at scale.

Zalos converts screen recordings of finance workflows into Computer Agents that log in, navigate screens, enter data, and check against controls across ERPs, Excel, email, and internal tools. The platform works inside NetSuite, Sage, and SAP S/4HANA today, with no heavy integrations required. Every agent action is captured in an auditable log, and the platform is built to enterprise security standards including SOC 2 Type II certification, enterprise single sign-on, role-based access controls, and on-premise deployment options. The most active client use cases include billing automation across multiple systems, month-end reconciliations, and cross-system KPI reporting across multiple ERP instances.

The company was founded by CEO William Fairbairn and CTO Hung Hoang after intersecting paths led them to the same conclusion. Fairbairn spent years at Agicap speaking with hundreds of CFOs, and heard the same frustration consistently: ERP implementations take more than twelve months, deliver limited upside when they go well, and carry real career risk when they go wrong. Hoang left Apple Pay after five years and became focused on Computer Agents specifically because they avoid the API problem that has stalled so many automation efforts in finance. The two began building Zalos last October after joining Y Combinator, with a focus on specialized agents that emulate how finance teams actually operate inside their tools.

The rise of reliable Computer Agents creates a third path, beyond ERP replacement and API integration: automation that sits on top of the existing stack and operates it as a human would. These agents are trained once with screen recordings; the process is then automated permanently, never taking a holiday, and running at a speed and consistency a person cannot match.

Looking ahead, Zalos plans to expand beyond the major midmarket ERPs where it already has customers and into enterprise ERPs and on-premise systems. By building a wide-reaching context graph across the finance stack, the company aims to help CFOs deploy a swarm of agents and drive a step-change in their finance team’s impact.

* Notable angels included: Mike Lenz (CFO Fedex), Ian Sutherland (CFO Tide), Long Dinh (CFO Ada), Nancy Casey (Global Vice President, Oracle, SAP), Paul Forster (Founder, Indeed), Henri Stern (Founder, Privacy), Ed Woodford (Founder, zerohash), James Beshara (Founder, Tilt Payments), Long Lu (Founder, Misa Accounting), Catherine Dahl (Founder, Beanworks Accounts Payable), Pablo Palafox (Founder, Happy Robot), Hasan Sukkar (Founder, 11x), Chris Smoak (Founder, Atrium), Ooshma Garg (Gobble), Minh Pham (Head of Browser Infra, Perplexity), Jon Langbert (Founder, Alight), Mandeep Singh (Founder, Trouva), Thai Duong (Founder, Calif), Ash Rush (Founder, Sterling Road), Jake Klamka (Founder Insight Data Science), Jonathan Meeks (Board, TA Associates).

EnGenius Brings AI-Powered Analytics and Sophisticated Cloud Management to Existing ONVIF Cameras

Posted in Commentary with tags on March 24, 2026 by itnerd

EnGenius Technologies Inc., a global leader in advanced connectivity and cloud-managed networking solutions, is pleased to announce the expansion of its AI-powered Network Video System (NVS) lineup with two tower-based SKUs designed to bring intelligent analytics, centralized cloud management, and enterprise reliability to existing ONVIF & RTSP camera deployments. This transformative solution brings AI intelligence to existing camera systems without the need for a full hardware replacement, significantly reducing upgrade costs, minimizing the risk of evidence loss, and accelerating investigations. The company also announced that its EnGenius EVS1004D has been honored with a Best of Show award at Integrated Systems Europe 2026, where industry judges recognized the platform’s innovation in AI-driven video surveillance and seamless cloud management designed to simplify enterprise security deployments.

The new lineup includes:

  • EVS1004D — Cloud Managed AI 4-Bay Network Video System Tower
  • EVS1002D — Cloud Managed AI 2-Bay Network Video System Tower

Both systems enable organizations to upgrade existing ONVIF-compatible cameras with advanced AI capabilities—without costly camera replacements—and support up to 16 non-AI channels, or a maximum of 4 channels when 2 AI-enabled cameras are included for intelligent, real-time video analysis.

Recognizing the stringent legal and regulatory compliance requirements faced by multi-site SMBs and enterprise organizations across the retail, hospitality, healthcare, education, and finance sectors, the EnGenius NVS Series delivers reliable, 24/7 video availability and playback. By combining edge-based storage with unified cloud management, the EVS Series provides a secure, scalable, and resilient surveillance ecosystem designed to meet the operational and compliance demands of modern, distributed environments.

Intelligent AI Upgrade for Existing Cameras

EnGenius Cloud Managed AI NVS platforms enhance third-party ONVIF or RTSP cameras with powerful edge and cloud-based intelligence. Supporting FHD to 4K resolutions, both tower models deliver 24/7 continuous recording, intelligent metadata-driven analytics, and centralized cloud management across single or multi-site deployments.

AI processing is performed locally while leveraging EnGenius Cloud AI for advanced search, alerts, and insights. Natural language search powered by multimodal AI/LLMs allows operators to locate critical video evidence using simple descriptions—dramatically reducing investigation time.

Two Tower Options for Flexible Deployments

Designed to fit a wide range of surveillance needs, both SKUs share a desktop tower housing optimized for professional environments:

  • EVS1004D (4-Bay Tower)

Provides enterprise-grade RAID-protected storage (RAID 1/5/6) for high availability and long-term video retention, ideal for larger or compliance-driven deployments.

  • EVS1002D (2-Bay Tower)

A compact, cost-efficient solution delivering centralized AI-enabled recording and analytics, with RAID 1–protected storage for added data reliability, for small to mid-size installations.

Both models feature:

  • Maximum video backup capacity: up to 30 channels with EnGenius AI cameras; 16 channels with third-party, non-AI-enabled cameras; or up to 4 channels when two AI-enabled cameras are used for intelligent, real-time video analysis.
  • 1× 10-Gigabit Ethernet + 1× Gigabit Ethernet ports
  • USB 3.0 ×4 and USB 2.0 ×1 connectivity
  • Supports 2.5″ or 3.5″ SATA 3 drives; includes 1× HDMI port and 1× Kensington lock slot
  • ONVIF Profile S and RTSP compatibility
  • Cloud-managed access anytime, anywhere

Secure, Bandwidth-Efficient, and Future-Ready

Security is built into every layer of the EnGenius AI NVS architecture. By transmitting AI metadata instead of continuous video streams, both systems significantly reduce WAN bandwidth usage—making them ideal for scalable, multi-location environments.

Flexible Video Backup Mechanism

Designed for multi-site enterprise environments, the EVS Series enables seamless video backup across distributed networks within the same organization to EnGenius NVS units. Featuring customizable retention policies, administrators can define recording duration or storage limits to align with legal, regulatory, and operational requirements.

Unified Cloud Management in a Single Ecosystem

Eliminating system silos, the EVS Series seamlessly integrates with all cameras within the EnGenius Cloud platform, enabling IT teams to centrally manage storage, video access, and device health from a single interface. This cloud-native architecture delivers streamlined monitoring and actionable insights—without the complexity of on-premises server deployments.

Designed for Every Industry

The EnGenius Cloud Managed AI NVS solutions are purpose-built for education, retail, hospitality, student housing, senior living, corporate offices, and warehousing, delivering actionable intelligence such as people and vehicle detection, tracking, counting, and real-time Cloud-AI alerts for incidents including bullying, fights, accidents, or restricted-area access.

Availability

The EnGenius Cloud Managed AI Network Video System Tower lineup—including the EVS1004D (4-bay) and EVS1002D (2-bay) models—will be available through EnGenius authorized resellers and distribution partners beginning in March 2026. For additional product specifications and purchasing information, visit: EnGenius AI NVS

Minimus to Launch Open Source Program, Bringing Hardened Images to Critical Infrastructure Projects 

Posted in Commentary with tags on March 24, 2026 by itnerd

Minimus today announced the Minimus Open Source Program, an initiative to help open source maintainers strengthen the security and integrity of their software supply chains. Eligible projects will receive access to Minimus secure container images, Software Bill of Materials (SBOM) generation and analysis, and threat intelligence tooling at no cost.

Open source software underpins a vast share of the world’s critical digital infrastructure, yet most maintainers lack access to the security tooling enterprises take for granted. This program aims to close that gap, putting modern supply chain security directly in the hands of the communities that need it most.

Projects accepted into the program can integrate Minimus images into their build pipelines, immediately reducing attack surface for their users. Maintainers will also gain visibility into dependencies and potential vulnerabilities through Minimus’s threat intelligence dashboard.

The Open Source Program builds on a period of rapid growth for Minimus. Since launching publicly at RSAC in April 2025, the company has grown revenue by 285%, expanded its Image Gallery to over 1,200 hardened container images, and shipped major new capabilities, including Image Creator, which enables enterprises to build and manage their own hardened images on the Minimus platform. Minimus images are now supported by major cloud security platforms, including Aqua Security, AWS, Google Cloud, Orca Security, Snyk, and Wiz.

The program is open to open source projects using an OSI-approved license that meet minimum project health criteria. Accepted projects receive:

  • Access to hardened, compliant images from the Minimus Image Gallery
  • Custom image creation, Helm charts, and automatically generated SBOMs
  • Real-time exploit intelligence to prioritize CVE remediation and patch efforts
  • Image updates in accordance with Minimus’ commercial SLAs

Applications open March 24, 2026. Open source maintainers can learn more and apply at minimus.io/open-source.

DH2i Launches DxEnterprise v26.0 and DxOperator v2

Posted in Commentary with tags on March 24, 2026 by itnerd

DH2i, a leading provider of always-secure and always-on IT solutions, today announced the general availability (GA) of DxEnterprise v26.0 and DxOperator v2, featuring significant enhancements to high availability (HA), disaster recovery (DR), and operational resilience for SQL Server deployments across Windows, Linux, and Kubernetes environments. Together, the releases introduce meaningful advances in availability group (AG) protection, security controls, observability, and automation for both traditional and containerized SQL Server deployments.

In today’s enterprises, a perfect storm has emerged where applications have become direct revenue channels, infrastructure complexity has increased while IT staffing has not, modernization initiatives are no longer optional, security and compliance requirements are tightening, and software update velocity has accelerated. Together, these forces expose the limits of traditional HA approaches. What once worked for small, static clusters no longer scales when SQL Server deployments span hybrid, multi-platform, and containerized environments that demand continuous availability, stronger safeguards, and higher levels of automation. DxEnterprise v26.0 and DxOperator v2 address these challenges head-on.

DxEnterprise v26.0 focuses on improving cluster resilience, visibility, and administrative confidence through enhanced monitoring, stronger safeguards against split-brain scenarios, expanded credential support, and platform modernization. DxOperator v2 extends those capabilities into Kubernetes environments, giving users greater control over scale, updates, and network configuration for SQL Server AGs running in containers.

What’s New in DxEnterprise v26.0 

Deeper SQL Server and Availability Group Intelligence

  • Database-level health monitoring is now enabled by default, allowing faster detection of issues affecting individual databases within an AG
  • Split-brain scenarios are prevented through automatic per-availability-group quorum enforcement, which demotes or shuts down replicas when quorum requirements are not met
  • Improved replica connectivity alerts provide real-time notification when replicas disconnect or when SQL Server replica configurations diverge from expected cluster state
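
The per-availability-group quorum enforcement described above follows the standard majority principle used to avoid split-brain. A minimal sketch of that principle (illustrative only — DxEnterprise's actual enforcement logic is proprietary; function names here are hypothetical):

```python
def has_quorum(reachable_replicas: int, total_replicas: int) -> bool:
    """Standard majority quorum: strictly more than half the replicas
    must be reachable for a node to consider itself in the majority."""
    return reachable_replicas > total_replicas // 2


def on_partition(reachable: int, total: int) -> str:
    """A replica that cannot see a majority demotes or shuts itself down
    rather than risk accepting writes on both sides of a network split."""
    return "remain-active" if has_quorum(reachable, total) else "demote-or-shutdown"


print(on_partition(2, 3))  # remain-active: 2 of 3 replicas is a majority
print(on_partition(1, 3))  # demote-or-shutdown: an isolated replica steps down
```

Note that with an even replica count (e.g. 2 of 4 reachable), neither side holds a strict majority, which is why production systems often add a witness or tiebreaker.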

Improved Security and Credential Resilience

  • Support for secondary SQL Server backup credentials enables automatic fallback if primary authentication fails, reducing downtime caused by credential changes or expirations
  • Administrative sessions are automatically disconnected when the cluster passkey changes, ensuring only authorized users with current credentials retain access
  • The DxAdmin user interface now includes clearer prompts, stronger validation, and improved feedback for passkey configuration

Greater Stability and Observability

  • Core monitoring services, including DxLMonitor, DxCMonitor, DxStorMonitor, and DxHealthMonitor, have received reliability and stability improvements to reduce unexpected restarts and improve overall cluster resilience
  • Basic anonymous telemetry is now available to help improve product quality and diagnostics, with opt-out configuration for customers who prefer not to participate

Platform and Usability Enhancements

  • DxEnterprise’s Linux version now runs on the .NET 8.0 runtime, delivering improved performance, security, and long-term support alignment
  • Virtual hosts can now be renamed using a new rename-vhost command, simplifying cluster management and reorganization
  • Additional safeguards prevent accidental overwriting of existing data stores during SQL Server high availability virtualization
  • Enhancements to DxCLI and DxPS improve command-line usability, including human-readable XML output and new PowerShell cmdlets
  • The DxCollect utility now includes expanded command-line options for more targeted diagnostics and log collection

What’s New in DxOperator v2 

Flexible Scaling Up and Down

  • Availability group clusters can now be expanded or reduced dynamically
  • Unlike the previous version, DxOperator v2 can safely de-configure and remove replicas from a running cluster, enabling true scale-down operations

Automated Rolling Updates

  • Administrators can automate rolling updates of SQL Server or DxEnterprise container images, allowing pods to be updated one at a time without manual intervention
  • Updates can also be performed manually when desired, giving operators full control over rollout strategy
  • DxOperator does not automatically check for new container versions, ensuring that administrators remain in control of when and how updates are applied

Advanced Network and Service Configuration

  • Flexible service templates allow load balancers and other network services to be fully specified and automatically deployed per availability group replica
  • This enables more consistent connectivity across different Kubernetes environments and cloud providers

Redesigned Custom Resource and StatefulSet Adoption

  • The custom resource definition has been redesigned for greater flexibility and now leverages Kubernetes StatefulSets
  • By delegating pod creation, storage allocation, and rolling upgrades to Kubernetes, DxOperator v2 simplifies internal logic while benefiting from native Kubernetes reliability and lifecycle management

DH2i’s DxEnterprise v26.0 and DxOperator v2 are now generally available (GA) – to learn more, please visit: https://dh2i.com/dxenterprise-high-availability/ and https://dh2i.com/dxoperator-sql-server-operator-for-kubernetes/ respectively. 

To dive even deeper, please join DH2i’s upcoming webinar: “High Availability, Simplified: What’s New in DxEnterprise v26 & DxOperator v2”, on April 16 at 12:00 pm EDT. Save your seat by registering here: https://dh2i.com/webinar-simplified-high-availability-solution/

The Infographic can be found here: https://dh2i.com/blog/v26-simplified-sql-server-high-availability/

Detectify launches IP Range Scanning to uncover hidden infrastructure before attackers do 

Posted in Commentary on March 24, 2026 by itnerd

Detectify today announced the launch of IP Range Scanning, a new capability designed to help organizations continuously discover and monitor entire blocks of IP addresses. The technology automates the identification of exposed infrastructure, helping security teams find forgotten assets and hidden risks before attackers exploit them.

Organizations across all sectors are sitting on forgotten IP addresses that have become primary entry points for modern cyberattacks. While millions have been spent securing public-facing websites, legacy tools often struggle with noise and stale data, leaving modern organizations with a massive, unmonitored blind spot. Recent research from Detectify highlights this gap, with SSH found on non-standard ports nearly as often as on port 22 (49.3% vs. 50.7%), indicating that organizations focused only on standard ports risk missing a substantial portion of exposed services.

This digital basement can be filled with orphaned servers, legacy hardware, and unauthorized shadow IT. To a security team, these assets are invisible. To a hacker, they are an unlocked window. Identifying assets across large IP blocks often results in fragmented data or noisy snapshots that fail to integrate with modern AppSec workflows. High-risk services like Redis and MongoDB are frequently exposed on raw IP addresses without associated domains, making them invisible to traditional tools.

Detectify’s IP Range Scanning prioritizes high-fidelity discovery across large network segments, giving security teams accurate, actionable visibility into previously overlooked assets and reducing blind spots at scale. With this release, customers can benefit from:

  • Onboarding entire CIDR blocks in seconds: Gain continuous visibility into the infrastructure behind their networks, from legacy systems to rapidly expanding environments.
  • Identifying hidden services: Uncover everything from remote desktops and databases to web applications, powered by Protocol Discovery that goes beyond simple port detection.
  • Bridging the gap to testing: When a web application is detected, Detectify automatically transitions to deep security testing, evaluating it against more than 922 quintillion payload-based permutations to uncover any potential for exploitation.
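
The discovery workflow described here — enumerating every address in a CIDR block and probing for services, including on non-standard ports — can be illustrated with a minimal sketch. This is a hypothetical example, not Detectify's implementation, which adds protocol fingerprinting, rate limiting, and continuous re-scanning:

```python
import ipaddress
import socket


def discover_services(cidr: str, ports: list[int], timeout: float = 0.5) -> dict:
    """Probe every host in a CIDR block for open TCP ports.

    Illustrative only: a TCP connect probe is the simplest form of
    service discovery; it misses filtered hosts and identifies nothing
    about the protocol actually listening on the port.
    """
    findings = {}
    for host in ipaddress.ip_network(cidr).hosts():
        open_ports = []
        for port in ports:
            try:
                # A successful connect means something is listening.
                with socket.create_connection((str(host), port), timeout=timeout):
                    open_ports.append(port)
            except OSError:
                pass  # closed, filtered, or unreachable
        if open_ports:
            findings[str(host)] = open_ports
    return findings


# Example: sweep a small block for SSH on both its standard port and a
# common alternative — per the research above, roughly half of exposed
# SSH sits on non-standard ports.
# discover_services("192.0.2.0/28", [22, 2222])
```

Only scan networks you own or are authorized to test; unsolicited port scanning may violate acceptable-use policies or law.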

For organizations operating their own networks, such as government agencies and other large enterprises, IP ranges are often among the least understood areas of the attack surface. The ability to scan entire IP blocks in the same way as domains provides a clearer, more comprehensive view of what is actually exposed. Continuous discovery of services and applications across these ranges helps security teams identify forgotten or unmanaged assets early, improving visibility and reducing the risk of overlooked weaknesses being exploited.

FBI Warns Of Iran-Linked Threat Actors Using Telegram For Attacks

Posted in Commentary with tags on March 23, 2026 by itnerd

The FBI has warned of Iran-linked Handala hackers using Telegram in malware attacks:

The Federal Bureau of Investigation (FBI) is releasing this FLASH to disseminate information on malicious cyber activity conducted by actors on behalf of the Government of Iran Ministry of Intelligence and Security (MOIS). Specifically, MOIS cyber actors are responsible for using Telegram as a command-and-control (C2) infrastructure to push malware targeting Iranian dissidents, journalists opposed to Iran, and other opposition groups around the world. This malware resulted in intelligence collection, data leaks, and reputational harm against the targeted parties. The FBI is releasing this information to maximize awareness of malicious Iranian cyber activity and provide mitigation strategies to reduce the risk of compromise.

Due to the elevated geopolitical climate of the Middle East and current conflict, the FBI is highlighting this MOIS cyber activity. The FBI assessed MOIS cyber actors are responsible for using Telegram as a C2 infrastructure to push malware targeting Iranian dissidents, journalists opposed to Iran, and other oppositional groups around the world. This FLASH warns network defenders and the public of continued malicious cyber activity by Iran MOIS cyber actors and outlines the tactics, techniques, and procedures (TTPs) used in this malware campaign.

Commenting on this news is Ensar Seker, CISO at SOCRadar

“The use of Telegram as command-and-control infrastructure is not surprising, it reflects a broader shift where threat actors deliberately blend malicious traffic into trusted, encrypted platforms. By leveraging a widely used application like Telegram, groups such as Handala significantly reduce the likelihood of detection, because security controls are often tuned to allow this traffic by default.

What makes this particularly concerning is the targeting profile. These operations are not opportunistic; they are highly intentional, focusing on journalists, dissidents, and opposition voices. This aligns with state-sponsored objectives, where cyber operations are used as an extension of intelligence gathering and influence campaigns rather than purely financial gain.

From a defensive standpoint, this highlights a critical gap: many organizations still rely too heavily on traditional indicators like IP blocking or domain reputation. When attackers operate inside legitimate platforms, defenders must shift toward behavioral detection, monitoring anomalies in application usage, data flows, and endpoint activity rather than trusting the platform itself.

The bigger implication is that encrypted messaging platforms are becoming dual-use infrastructure for both communication and covert operations. Security teams need to reassess their trust assumptions and implement visibility controls around sanctioned apps, including logging, anomaly detection, and strict access policies.

Ultimately, this is not about Telegram specifically, it’s about the normalization of “living off trusted services.” Organizations that fail to adapt to this model will continue to miss early-stage intrusions, especially those tied to advanced persistent threat actors with geopolitical motivations.”
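
The behavioral shift Seker describes — watching for anomalies in how a sanctioned app is used rather than trusting the platform itself — can be sketched with a simple baseline comparison. This is a hypothetical illustration of the general technique, not SOCRadar or FBI tooling:

```python
from statistics import mean, stdev


def flag_anomaly(baseline: list[float], current: float, z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates sharply from an app's baseline.

    Illustrative sketch: rather than blocking an allowed platform like
    Telegram outright, compare current usage (e.g. daily bytes sent to
    the service from one host) against that host's historical norm.
    """
    if len(baseline) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return current != mu
    # z-score: how many standard deviations from the norm?
    return abs(current - mu) / sigma > z_threshold


# Example: a host that normally sends ~5 MB/day to a messaging app
# suddenly sends 500 MB — a candidate C2 or exfiltration signal.
baseline_mb = [4.8, 5.1, 5.0, 4.9, 5.2]
print(flag_anomaly(baseline_mb, 500.0))  # True
```

Real deployments would track many signals per host (volume, timing, destinations, process lineage) and feed them into richer models, but the principle is the same: the platform is trusted, the behavior is not.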

This highlights the fact that warfare is different now because the battlefield has expanded to the cyber world. Thus you need to keep that in mind in order to keep your organization safe from this new generation of threats.

Ubiquiti Unifi Users Should Update Their Gear ASAP To Protect Themselves From Three Absolutely Critical Vulnerabilities

Posted in Commentary with tags on March 23, 2026 by itnerd

Users of Ubiquiti Unifi gear should be aware of CVE-2026-22557 which details a super critical vulnerability that can lead to account takeovers. This is what the CVE says:

A malicious actor with access to the network could exploit a Path Traversal vulnerability found in the UniFi Network Application to access files on the underlying system that could be manipulated to access an underlying account.

The issue is rated a 10/10, which makes this a today problem for Ubiquiti users. The company put out this advisory last week that kind of flew under the radar until it surfaced on Reddit, where it quickly became a thing, as the kids say.

There’s a second critical vulnerability that has surfaced as well. From the security advisory:

“An Authenticated NoSQL Injection vulnerability found in UniFi Network Application could allow a malicious actor with authenticated access to the network to escalate privileges.”

This one doesn’t have a score. But given that the flaw can escalate privileges, it’s bad. There’s one more vulnerability:

An Improper Input Validation vulnerability in UniFi Network Server may allow unauthorized access to an account if the account owner is socially engineered into clicking a malicious link.

This is being tracked as CVE-2026-22559 with a score of 8.8, which is bad. Not as bad as the first issue. But still bad.

All of these are fixed by updating the UniFi Network Server app on gateways and self-hosted systems to Version 10.1.89 or later. If you have auto update turned on, this might have already happened for you. But you should check to ensure that it has. For bonus points, you should strongly consider turning off remote access. That forces threat actors to actually be on your network to take advantage of a vulnerability. That’s not to say it would make you completely safe, but it reduces the attack surface a lot. That’s why I mentioned in my review of the Cloud Gateway Max that I would never, ever expose the administration of the device to the Internet.

In any case, it’s once again time to upgrade all the things.

Vigil: The First Open-Source AI SOC Built with an LLM-native Architecture

Posted in Commentary with tags on March 23, 2026 by itnerd

Security teams are trapped between proprietary AI SOC vendors that obscure model intelligence and open-source tools that haven’t kept up with agentic architectures. A new open source project, Vigil, launched at RSA today, changes that. Vigil enhances rather than obfuscates the transformative intelligence of rapidly advancing reasoning models, including Anthropic’s Claude.

Available immediately under an Apache 2.0 license, Vigil ships with 13 specialized AI agents, 30+ integrations, and 7,200+ detection rules spanning Sigma, Splunk, Elastic, and KQL formats. Additionally, Vigil includes four initial production-tested multi-agent workflows that tie together underlying capabilities to address common use cases in the SOC: incident response, investigation, threat hunting, and forensic analysis. Users can easily add integrations, custom rules, and agents, often as simply as checking a file into a designated repository.

Vigil’s architecture is pluggable and transparent. Teams bring their own enterprise model deployments, their own rule sets and other detection systems, and their own integrations for operational context, so Vigil can be applied to a particular environment quickly. As reasoning models improve, the architecture is structured so those advances surface directly in analyst-facing workflows rather than remaining buried in proprietary black boxes.


Vigil is one of a new wave of open source projects built in the agentic era. Contributors are welcome across product direction, module development, governance, and developer relations. Agentic red teaming projects are a natural fit. Vigil’s initial engineers have hands-on experience with Stanford’s Artemis and other frameworks and are keen to collaborate.

Built by Open-Source Security Veterans

The DeepTempo team initially built Vigil as a side project, then saw demand from users and partners, including professional services partners and research collaborators at Stanford and other educational institutions, for an open, simple-to-extend solution. Larger enterprises, national SOCs, and similar-scale organizations are already writing their own agentic SOC capabilities, and Vigil is a community in which they can collaborate on relevant components.

Open by Design

Vigil is vendor-independent. Contributors are welcome from across the security ecosystem, including AI SOC vendors, internal security teams, services organizations, open-source maintainers, and developers building on MCP and agentic frameworks. The Trail of Bits skills repository represents one natural area of collaboration, offering reusable building blocks for cyber-specific reasoning that Vigil is designed to interoperate with via clear Claude skills definitions. Projects like Cisco’s Foundation Sec-8 are candidates for first-class integration, alongside Claude and other advanced reasoning models.

Extending Vigil is simple: multi-agent workflows are defined in a single SKILL.md file, tool integrations use the open MCP standard, and detection rules can be contributed in any major format. Every MCP server in the security ecosystem is a potential Vigil integration. Every skill someone writes makes the platform more capable for everyone.

Availability and Community

Vigil is available now:

git clone --recurse-submodules https://github.com/deeptempo/vigil.git

cd vigil && ./start_web.sh

# Open http://localhost:6988 — your AI SOC is running.

Security practitioners, researchers, and developers interested in contributing, leading, or experimenting with Vigil are encouraged to connect with the maintainers via the GitHub repository or community Discord.

As AI systems grow more capable, security analysts need shared patterns, tools, and workflows to keep pace. DeepTempo released Vigil as open source to accelerate that learning, building a transparent, adaptable foundation for the next generation of security operations.

See Vigil at RSA Conference 2026

The team behind Vigil will be showcasing the project live at RSA Conference 2026 at Moscone North Expo Hall, Cribl Booth #6353. Visit the booth for live demos, contributor onboarding, and conversations with the Vigil maintainers.

SOCRadar Launches AI Agent Marketplace and Identity Intelligence

Posted in Commentary with tags on March 23, 2026 by itnerd

Today at RSA Conference 2026, SOCRadar launched its new AI Agent Marketplace, an integrated hub where organizations can browse, purchase, and deploy specialized autonomous AI agents tailored for specific cybersecurity tasks and use cases in the SOCRadar XTI Platform. This includes phishing detection, brand abuse protection, and dark web monitoring. By unbundling the traditional ‘all-in-one’ platform, this modular ecosystem liberates security teams from rigid, legacy software in favor of a precision-led approach. Organizations can easily select and deploy only the specific agents required for their unique use cases, with the granular controls and customization to perfectly fit high-precision workflows.

SOCRadar also introduced Identity and Access Intelligence capabilities to its Extended Threat Intelligence Platform to bridge the gap between internal identity security and external exposure. The new capabilities are designed to secure identity “blind spots” such as credential exposures detected in third-party SaaS environments, dark web marketplaces, and collaboration platforms.

Credentials are a hot commodity for opportunistic threat actors looking to launch identity-based attacks. According to IBM, approximately 388 million credentials were stolen in 2025 from just 10 top online platforms including Meta and Google. Additionally, data breaches have surged 475% over the past decade with adversaries moving faster and hitting harder. This has culminated in the 2025 global average cost of a data breach hitting $4.4 million.

SOCRadar is also launching a new Identity & Access Threat Intelligence AI Agent, which can analyze the data files associated with a compromised machine (e.g. session cookies, credentials, etc.) to help analysts quickly determine the source of a leak and generate a risk analysis report. This is the first of many AI Agents to be released as part of the new AI Agent Marketplace.

Key Features of SOCRadar’s Identity and Access Intelligence Capabilities

SOCRadar’s Identity and Access Intelligence capabilities leverage Identity-Related Risk Clarification to help customers understand risk and make faster decisions.

Clear Security Narratives allow analysts to easily visualize attack steps and system-level artifacts, translating raw data into clear, actionable security narratives. This includes:

Company Insights: Delivers contextualized visibility into an organization’s digital footprint and compromised users so customers learn which function, asset, and risk chain was exposed.

  • Enterprise Attack Surface Risk Profile: Maps externally exposed enterprise services and domains into categorized risk profiles so customers can associate risks and prioritize by potential blast radius.
  • Third-Party Service Credential Exposure: Reveals external SaaS providers where leaked or reused credentials are associated with your domain.
  • Customers can now understand not just that credentials were leaked, but which systems those credentials unlock and how they could enable lateral movement.

File Insights: Presents an interactive snapshot of a compromised endpoint and lets users review how credentials were exfiltrated and stored on disk by the stealer.

Tag Insights: Exposed artifacts are classified using descriptive tags to indicate their type and context. Sensitive data can be viewed at a glance within the attack flow and endpoint view.

The Cookie Analysis section filters and displays browser-stored cookies, with sorting by domain or cookie name. Customers can also assess potential for abuse by analyzing the secure-flag indicators and cookie entropy surfaced by the platform.
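SOCRadar does not specify how its entropy metric is computed, but a common approach is per-character Shannon entropy: long random-looking session tokens score high, while short human-readable preference cookies score low. A minimal sketch of that idea (the function name and sample values are illustrative, not SOCRadar's implementation):

```python
import math
from collections import Counter

def shannon_entropy(value: str) -> float:
    """Shannon entropy, in bits per character, of a string."""
    if not value:
        return 0.0
    counts = Counter(value)
    n = len(value)
    # H = -sum(p * log2(p)) over the character frequency distribution
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A random-looking session token has much higher entropy per character
# than a mundane, human-readable preference cookie.
session_token = "a9F3kQ7xZp1mW8rT2bN6vC4yH0sJ5dLg"
pref_cookie = "theme=dark"
print(shannon_entropy(session_token) > shannon_entropy(pref_cookie))  # True
```

High-entropy cookie values are more likely to be live session material worth prioritizing, which is why entropy is a useful triage signal when reviewing a stealer dump.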

Attack Flow Visualization: Reconstructs the end-to-end infection path, starting from the internet entry point and progressing through malware execution, system interaction, and endpoint compromise.

  • Customers can view the complete infection chain, including the stealer involved, its origin, where it executed on the victim machine, and what data was exfiltrated.

AI-Powered Analysis: Provides natural-language risk analysis that summarizes exposure, highlights prioritized threats, and offers remediation guidance for compromised identities. Customers see an auto-generated summary of infection severity, including device context, critical risks, and exposed identities, along with recommended remediation actions.

Meta AI agent incident exposes deeper agentic security gap

Posted in Commentary with tags on March 21, 2026 by itnerd

A recent incident at Meta shows how an AI agent provided guidance that led an engineer to unintentionally expose a large amount of sensitive internal data to employees for a short period of time.

While Meta confirmed the issue was contained and no external data was mishandled, the episode highlights a broader risk as AI agents become embedded in engineering workflows. These systems aren’t just generating suggestions; they’re influencing real actions inside environments that handle sensitive data.

Gidi Cohen, CEO & Co-founder, Bonfy.AI

“Meta’s incident is exactly what happens when you let agents loose on sensitive data without any real data-centric guardrails. This wasn’t some exotic AGI failure, it was a very simple pattern: an engineer asked an internal agent for help, the agent produced a “reasonable” plan, and that plan quietly exposed a huge amount of internal and user data to people who were never supposed to see it.

The problem is that neither the engineer nor the agent had any persistent notion of “who actually should see this data” beyond whatever happened to sit in a narrow context window at that moment. Traditional controls don’t help much here. Endpoint DLP, CASB, browser controls, even basic role-based permissions, none of them are watching the actual content as it moves through an agent’s reasoning steps and tool calls, especially when the agent is running as a system service in some framework.

Our view is simple: treat agents like very fast, very forgetful junior interns and make the data security layer smart enough to compensate. That means three things: constrain what data is even available to the agent via contextual labeling and grounding; give the agent a Bonfy MCP tool it can call inline to ask “is this safe to use or send in this context?” before it takes an action; and inspect what ultimately comes out of the workflow before it lands in email, chat, dashboards, or internal portals. In a Meta-style scenario, those controls would have either prevented the broad internal exposure entirely or at least shrunk the blast radius to something manageable.

As organizations “experiment at scale” with agents, the only sustainable path is to make agents first-class entities in the risk model and put the intelligence where it belongs: on the data that’s being read, composed, and shared, not just on the configuration screens of yet another AI tool.”
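The inline check Cohen describes, an agent asking "is this safe to use or send in this context?" before it acts, can be sketched as a deny-by-default guardrail around outbound actions. Everything here (the `Content` labels, the `is_safe_to_send` policy, and the function names) is a hypothetical illustration, not Bonfy's actual MCP API:

```python
# Hypothetical sketch of an inline data-safety gate for agent actions.
# Labels, policy rules, and names are illustrative, not Bonfy's real API.
from dataclasses import dataclass

@dataclass
class Content:
    text: str
    labels: set  # e.g. {"internal-only"} or {"public"}

def is_safe_to_send(content: Content, audience: str) -> bool:
    """Deny by default: internal-labeled data may not leave internal channels."""
    if "internal-only" in content.labels and audience != "internal":
        return False
    return True

def send(content: Content, audience: str) -> str:
    # The agent calls the safety check inline, before every outbound action,
    # instead of relying on whatever happens to be in its context window.
    if not is_safe_to_send(content, audience):
        return "blocked"
    return "sent"

print(send(Content("Q3 revenue draft", {"internal-only"}), "external"))  # blocked
print(send(Content("press release", {"public"}), "external"))            # sent
```

The point of the design is that the safety decision lives in a persistent policy attached to the data, not in the agent's transient reasoning, so a "reasonable"-looking plan still gets stopped at the moment it would overshare.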

The thing is that when you expose anything AI-driven to sensitive data, that data can get out there. Samsung banned internal AI usage for exactly that reason. Keep that in mind if you’re an organization that uses AI.