Peer Software and Carahsoft Partner to Bring Data Replication and Synchronization Solutions to the Public Sector

Posted in Commentary with tags on April 1, 2026 by itnerd

Peer Software and Carahsoft Technology Corp. today announced a strategic partnership. Under the agreement, Carahsoft will serve as Peer Software’s Master Government Aggregator®, making the company’s flagship Peer Global File Service (PeerGFS) platform available to the Public Sector through Carahsoft and its reseller partners.

Peer Software’s PeerGFS platform provides real-time file replication and synchronization across distributed environments, enabling Government agencies to maintain file consistency, reduce data silos and ensure high availability without relying exclusively on the cloud. With multi-protocol support for SMB and NFS on the same volume, PeerGFS helps agencies manage hybrid environments, support legacy systems and enable seamless collaboration across locations.

Designed to meet the stringent security requirements of the Public Sector, Peer Software’s solutions support compliance, strengthen operational resilience and optimize data accessibility. Peer Software’s capabilities are critical for agencies managing sensitive workloads and geographically dispersed teams.

Peer Software’s solutions and services are available through Carahsoft and its reseller partners. For more information, contact the Carahsoft Team at (703) 871-8585 or PeerSoftware@carahsoft.com. Explore Peer Software’s solutions here.

SIOS Technology to Present at Spring 2026 Industry Events and Host Webinar on Cloud Resilience

Posted in Commentary with tags on April 1, 2026 by itnerd

SIOS Technology Corp. today announced its participation in several industry events this spring, where company experts will share best practices for maintaining uptime for mission-critical applications across cloud, hybrid, and multi-cloud environments. SIOS will also host an educational webinar focused on designing resilient workloads in the cloud.

Webinar: Resilience by Design – Keeping Mission-Critical Workloads Running on AWS

Date: April 9, 2026 @ 12:00 pm ET
Format: Virtual

Register here

SIOS will host the webinar “Resilience by Design: Keeping Mission-Critical Workloads Running on AWS,” which will explore strategies for ensuring application availability in cloud environments. Attendees will learn how to architect resilient infrastructures, address common failure scenarios, and maintain uptime during maintenance or outages.

SQLBits 2026

Date: April 22–25, 2026
Location: Caerleon, Wales, United Kingdom

Register here

SIOS experts will present two technical sessions focused on SQL Server high availability across operating systems and cloud environments. These sessions include:

  • Breaking the Default: SQL Server High Availability on Windows and Linux
    Speaker: Aaron West, senior solutions engineer
    Date/Time: April 23 at 12:20 PM

This session will provide a side-by-side comparison of SQL Server high availability on Windows and Linux, covering clustering architectures, failover processes, maintenance and patching considerations, and the operational impact of each approach.

  • Building Resilient SQL Server HA/DR in a Multi-Cloud World
    Speaker: Dave Bermingham, senior technical evangelist
    Date/Time: April 25 at 12:30 PM

This session will examine how organizations can architect SQL Server high availability and disaster recovery solutions spanning Azure, AWS, and Google Cloud. Attendees will learn how to use technologies such as Always On Availability Groups and Failover Cluster Instances (FCIs) to build resilient multi-cloud deployments.

SQL Saturday Jacksonville 2026

Date: May 2, 2026
Location: Jacksonville, FL

Register here

At Jacksonville’s 18th annual data conference, Day of Data, Bermingham will present “Building Resilient SQL Server HA/DR in a Multi-Cloud World.”

The session will explore real-world architectures for running SQL Server reliably across multiple cloud providers. Bermingham will share practical guidance for designing high availability and disaster recovery strategies that span Azure, AWS, and Google Cloud, helping organizations reduce risk, avoid vendor lock-in, and meet aggressive recovery objectives.

Red Hat Summit 2026

Date: May 11–14, 2026
Location: Georgia World Congress Center, Atlanta, GA

Register here

SIOS will exhibit at Red Hat Summit, where attendees can learn how organizations are protecting mission-critical Linux applications with SIOS high availability and disaster recovery solutions. At the event, SIOS will showcase SIOS LifeKeeper for Linux, which enables automated failover and continuous application availability for enterprise workloads running on Linux across physical, virtual, and cloud environments.

PASS Summit Europe

Date: June 10-11, 2026

Location: Hilton Frankfurt, Hochstraße 4, 60313 Frankfurt am Main, Germany

Register here

SIOS will participate as a Gold Sponsor at PASS Summit Europe, where attendees can connect with SIOS experts at the company’s exhibit table to learn more about high availability and disaster recovery solutions for SQL Server environments. SIOS will also deliver a conference session.

For more information about SIOS events and high availability solutions, visit https://us.sios.com.

The CISO’s Guide: When AI Helps vs. Hurts Security

Posted in Commentary with tags on April 1, 2026 by itnerd

Dubai-based Secure.com has published a concise analysis of both sides of the coin in “The CISO’s Guide: When AI Helps vs. Hurts Security.”

With research revealing that 76% of CISOs expect a material cyberattack in the next 12 months, most report that their organizations are already using AI in some form.

The Guide examines key issues including:

  • Where AI Actually Delivers for the SOC: AI doesn’t think, it predicts, and every model is no better than the data it was trained on.
  • Where and How AI Can Quietly Hurt the Organization
  • The Four Questions to Ask Before Deploying Any AI Security Tool: Every AI system makes mistakes. The question is whether mistakes are recoverable.
  • Building a Security Program Where AI and Humans Work Together: Gall’s Law applies.
  • Shadow AI Prevention Measures: Shadow AI is a growing internal risk that can expose sensitive data without the user realizing it.
  • Metrics to measure deployment success.

The question is no longer “should we use it?” It’s “are we using it in the right places?” The CISO’s Guide delivers a clear, honest answer to that question; the full content is linked below.

You can read the analysis here: The CISO’s Guide: When AI Helps vs. Hurts Security

CDW Canada’s 2026 Cybersecurity Study reveals an 80% jump in cyberattacks for Canadian enterprises

Posted in Commentary with tags on April 1, 2026 by itnerd

Today, CDW Canada released data from its annual Canadian Cybersecurity Study, “Navigating Ransomware, Modern Architectures and the Maturity Paradox.”

Key findings from the study include: 

  • Canadian companies are being targeted by cyberattacks at a rate not seen before. Enterprise organizations saw an 80 percent increase in cyberattacks in 2025 due to the use of AI in cyberattacks and the larger financial reward potential.
  • Enterprise cloud infection rates hit a record high in 2026, jumping from 41 percent to 53 percent year over year, the highest level recorded since CDW Canada started this study.
  • Most organizations assume their cloud environments are secure. The study suggests that assumption is creating one of the biggest vulnerabilities in Canadian cybersecurity right now.
  • Security spending reached a five-year high, with 20% of IT budgets now dedicated to security; however, foundational weaknesses in people and processes create the “security maturity paradox,” making organizations appear advanced while leaving them open to attack.
  • AI is creating new security pressures on two fronts. Attackers are using it to be more effective. And organizations adopting AI internally need to make sure they are doing so in a way that does not create new vulnerabilities.
  • The ripple effects go beyond the organization itself. When a major company is hit, the impact is felt by employees, customers and the communities that depend on those services.

There are many more findings in the press release linked here. The full report can be accessed here.

CISA mandates federal patching of Citrix NetScaler flaw by Thursday

Posted in Commentary with tags on March 31, 2026 by itnerd

CISA has added a new Citrix NetScaler appliance vulnerability to its Known Exploited Vulnerabilities catalog and is giving federal agencies until Thursday to remediate the flaw.

The vulnerability (CVE-2026-3055) is caused by inadequate input validation and can be exploited by unauthenticated remote attackers to extract sensitive data from Citrix ADC or Citrix Gateway appliances configured as SAML identity providers.

Denis Calderone, CTO, Suzu Labs provided this comment:

   “Back in 2023 CISA, the FBI, and Australia’s ACSC put out a joint advisory related to CVE-2023-4966, CitrixBleed. That was the same class of vulnerability on the same product family as this new issue, CVE-2026-3055. The issues are memory leaks on NetScaler that let attackers steal session tokens and walk right past authentication, including MFA. We saw LockBit use it to devastating effect against ICBC, Boeing, and DP World, and now we’re looking at another critical memory disclosure flaw on NetScaler. Citrix themselves are warning that exploitation is likely once proof-of-concept code surfaces.

   “An out-of-bounds read on a device like this is particularly dangerous because of where NetScaler sits in the environment. It’s at the network boundary, handling authentication and session management.

   “NetScaler is often used to build a layer of abstraction between the untrusted, semi-trusted and fully trusted security zones within a network. When memory leaks on a device like that, what spills out isn’t random data. It’s potentially session tokens, authentication material, and credentials. These are the things that let attackers bypass every security control sitting behind it. That’s what made CitrixBleed so devastating, and this vulnerability has the same potential.

   “The one piece of good news is that this only affects NetScaler instances configured as a SAML Identity Provider, not default configurations. SOC teams should check right now: search your NetScaler config for ‘add authentication samlIdPProfile’. If it’s there, you’re in scope and you need to patch immediately. If you can’t patch today, consider whether you can disable SAML IDP functionality as a temporary mitigation. Citrix has 21 entries in the CISA KEV catalog at this point. Waiting to see if this gets exploited is not a strategy that has historically worked out with this vendor.”
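Calderone’s scope check lends itself to a quick script. The sketch below is hypothetical: it assumes you have pulled a copy of the appliance configuration (typically `/nsconfig/ns.conf`) through your normal backup process, and it simply looks for the SAML IdP profile directive he mentions.

```python
# Hypothetical scope check for CVE-2026-3055. Assumes you have fetched a copy
# of the NetScaler config (typically /nsconfig/ns.conf) via your backup process.
def netscaler_saml_idp_in_scope(config_text: str) -> bool:
    """True if the config defines a SAML IdP profile, which per the
    advisory puts the appliance in scope for this vulnerability."""
    return any(
        line.strip().lower().startswith("add authentication samlidpprofile")
        for line in config_text.splitlines()
    )
```

If this returns True for any appliance, patch immediately; if patching today isn’t possible, weigh disabling SAML IdP functionality as the temporary mitigation described above.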

Jacob Warner, Director of IT, Xcape, Inc. adds this comment:

   “Unpatched gateway appliances are the primary door for initial access brokers and nation-state actors, making this 48-hour remediation window a critical operational priority. This vulnerability allows unauthenticated attackers to bypass security boundaries and harvest credentials or session tokens, effectively turning your identity provider into a pivot point for lateral movement across the entire network. Organizations should immediately identify all Citrix ADC and Gateway instances acting as SAML IdPs and apply the vendor-provided firmware updates before the Thursday deadline.

   “If immediate patching is not feasible, security teams must evaluate whether to disable SAML functionality or place these appliances behind a restrictive VPN to reduce the attack surface. This is not a drill for the weekend; the inclusion in the KEV catalog confirms that active exploitation is already occurring in the wild.

   “Given the history of NetScaler vulnerabilities such as CitrixBleed, the blast radius of a successful exploit likely includes a full bypass of multi-factor authentication (MFA) for downstream applications. Priority should be placed on Internet-facing instances, followed by a comprehensive review of logs for unusual outbound traffic from these appliances.

   “I appreciate CISA giving us a Tuesday warning for a Thursday deadline, though I suspect the ‘unauthenticated remote attackers’ didn’t bother waiting for the official calendar invite.”

Rajeev Raghunarayan, Head of GTM, Averlon said this:

   “Most organizations measure response in terms of time to patch. The real gap is time to decision. Teams often know about a vulnerability, but they don’t know whether it actually matters in their environment.

   “We’ve seen environments with tens of thousands of vulnerabilities where only a handful created meaningful risk based on how they connected to critical systems, especially when identity infrastructure is involved. Without that clarity, everything looks urgent and ends up in the same queue.

   “The organizations moving fastest don’t need external deadlines to act. They can quickly determine what matters and treat those cases as incidents. Others rely on external signals like KEV listings to prioritize, rather than identifying that urgency internally.”

If your organization is affected by this, you need to patch ASAP because threat actors will not wait to exploit this.

Unit 42 researchers discover security flaw in Google Vertex AI Engine

Posted in Commentary with tags on March 31, 2026 by itnerd

Palo Alto Networks Unit 42 published new research on a security flaw in Google’s Vertex AI Engine.

Unit 42 researchers found that Google Cloud’s Vertex AI Engine is giving AI agents far too much access by default. This critical discovery highlights the challenges of applying foundational security standards in the AI era.

Key Takeaways:

  • Significant Insider Threat: The research details how Google Cloud’s Vertex AI Engine gives AI agents far too much access by default. The report reveals that a misconfigured or compromised AI agent deployed via Google Cloud Platform’s (GCP) Vertex AI Agent Engine can be weaponized to compromise an organization’s cloud environment. This level of access constitutes a significant security risk, transforming the AI agent from a helpful tool into a potential insider threat.
  • The Big Picture: The rapid deployment of AI agents introduces a whole new class of overprivileged insiders. This comes as 90% of organizations are already facing pressure to loosen access control to support AI-driven automation.
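One practical response to the overprivileged-agent problem is a least-privilege audit of the service accounts agents run as. The sketch below is hypothetical (the bindings shape mirrors the JSON returned by `gcloud projects get-iam-policy`; role names and the service account are illustrative) and flags broad project-level roles bound to an agent’s identity:

```python
# Hypothetical least-privilege audit: flag broad project-level roles bound to an
# AI agent's service account. The bindings shape mirrors the JSON output of
# `gcloud projects get-iam-policy` (a list of {"role": ..., "members": [...]}).
BROAD_ROLES = {"roles/owner", "roles/editor"}

def overprivileged_roles(bindings, agent_service_account):
    """Return the broad roles granted to the agent's service account."""
    member = f"serviceAccount:{agent_service_account}"
    return sorted(
        b["role"]
        for b in bindings
        if b["role"] in BROAD_ROLES and member in b.get("members", [])
    )
```

Any hit is a candidate for replacement with a narrowly scoped custom role before the agent goes to production.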

You can read the research here: http://unit42.paloaltonetworks.com/double-agents-vertex-ai

New Research Shows How Attackers Silently Disable AWS CloudTrail Without Triggering Alerts

Posted in Commentary with tags on March 31, 2026 by itnerd

The Abstract ASTRO research team has just published a blog entitled: How Attackers Disable CloudTrail Without Calling StopLogging or DeleteTrail.

Security teams rely heavily on AWS CloudTrail as a source of truth for detecting breaches, but new research shows attackers can quietly disable or degrade logging without ever touching the APIs most defenders monitor.

In a new technical deep dive, ASTRO uncovers how adversaries are bypassing traditional detections (like StopLogging or DeleteTrail) and instead using lesser-known AWS APIs to blind logging systems while they appear fully operational.

Key findings include:

  • Attackers can create “invisible activity zones” using PutEventSelectors, selectively excluding malicious actions from logs while CloudTrail continues to run normally.
  • CloudTrail Lake can be silently neutralized via APIs like StopEventDataStoreIngestion and DeleteEventDataStore, halting or destroying long-term forensic visibility.
  • Anomaly detection can be disabled outright by passing empty parameters to PutInsightSelectors, removing automated detection of suspicious behavior.
  • Critical guardrails can be dismantled through APIs like DeleteResourcePolicy and DeregisterOrganizationDelegatedAdmin, weakening cross-account protections.
  • The real risk is in the sequence: individually, these API calls look like routine maintenance—but chained together, they allow attackers to erase evidence and evade detection entirely.

The research also outlines detection strategies, including how to identify subtle parameter changes and—more importantly—how to correlate multiple low-signal events into high-confidence alerts, something most SIEMs struggle to do.
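A minimal form of that correlation can be sketched as follows. This is an illustrative sketch only, not ASTRO’s detection logic: it assumes event records carry an `"eventName"` field (as CloudTrail management events do), and the alert threshold is an arbitrary assumption.

```python
# Sketch of correlating low-signal CloudTrail management events into one
# higher-confidence alert. Individually these calls can look like routine
# maintenance; several distinct ones in the same window are suspicious.
LOGGING_DEGRADATION_APIS = {
    "PutEventSelectors",
    "PutInsightSelectors",
    "StopEventDataStoreIngestion",
    "DeleteEventDataStore",
    "DeleteResourcePolicy",
    "DeregisterOrganizationDelegatedAdmin",
}

def high_confidence_alert(events, threshold=2):
    """True when distinct logging-degradation calls in the window reach
    the threshold (threshold=2 is an assumed starting point)."""
    seen = {e.get("eventName") for e in events} & LOGGING_DEGRADATION_APIS
    return len(seen) >= threshold
```

In practice the windowing, identity attribution, and suppression of legitimate change-management activity are where most of the tuning effort goes.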

This has major implications for DFIR teams and cloud security programs: organizations may believe they have full visibility, while attackers are actively operating in blind spots.

You can read the blog entry here: https://www.abstract.security/blog/how-attackers-disable-cloudtrail-without-calling-stoplogging-or-deletetrail

Liquibase Unveils Change Intelligence and New Connectors for Governed Database Delivery 

Posted in Commentary with tags on March 31, 2026 by itnerd

Liquibase today unveiled Liquibase Change Intelligence and a new suite of Liquibase Secure Deployment Connectors, expanding how enterprises understand, govern, and operationalize database change across modern delivery environments.

The new capabilities are designed to help teams understand database changes, monitor delivery performance, identify risk earlier, resolve issues up to 95% faster, and centralize audit evidence, while extending governed database change into the systems where developers, DBAs, and change teams already work, including ServiceNow, GitHub, Harness, and Terraform.

The announcement addresses a persistent gap in enterprise delivery. While application and infrastructure changes have become more automated, observable, and standardized, database change still too often moves through ticket attachments, side-channel SQL, manual approvals, and inconsistent execution paths. The result is slower investigations, weaker auditability, and more risk around outages, data integrity, and compliance.

Change Intelligence helps teams see what changed and respond faster

Liquibase Change Intelligence is designed to give teams a clearer view of what changed, how changes are moving across environments, where drift is emerging, and what requires attention next.

It brings together deployment activity, environment-level change status, drift signals, policy outcomes, and operational history so teams can answer critical questions faster: What changed? Where did it fail? Which environments are out of sync? Is drift increasing? What needs to be fixed now?

When failures occur, Change Intelligence is designed to help teams investigate with greater speed and context through AI-driven analysis that identifies likely causes and provides remediation guidance. Instead of forcing teams to reconstruct events from scattered logs, tickets, and tribal knowledge, it gives them a more direct path from issue to understanding to action.

Change Intelligence is also designed to help organizations centralize audit evidence for what changed, who approved it, where it ran, and what happened. That gives engineering, security, and compliance teams a more structured and accessible record of database change activity, reducing reliance on screenshots, manual evidence gathering, and fragmented reporting.

New connectors extend governed database change into the tools teams already use

Liquibase also unveiled a new suite of Liquibase Secure Deployment Connectors designed to extend governed database change into the platforms many enterprises already use to plan, approve, and deliver work.

For teams using ServiceNow, the connector is designed to bring database change into the existing approval process so approved tickets can result in governed, auditable deployments instead of manual SQL execution and disconnected handoffs.

For teams using GitHub, the connector is designed to bring database change into the same pull request and workflow model already used for application code, adding policy checks, validation, and deployment history tied to commits and branches.

For teams using Harness, the connector is designed to preserve existing pipelines while adding stronger governance, centralized visibility, and compliance-grade auditability around database changes.

For teams using Terraform, the connector is designed to extend infrastructure as code to the database layer, connecting Liquibase Secure to Terraform-managed instances through existing pipelines while enforcing database policies, applying versioned changeSets, and maintaining a complete audit trail over time.
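For readers unfamiliar with the changeSet model the connectors build on: a Liquibase changelog records each schema change as a versioned, attributable unit that can be validated and audited once applied. A minimal illustrative fragment (the table and column names are hypothetical):

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">

    <!-- Each changeSet is identified by id + author and tracked once deployed. -->
    <changeSet id="20260331-01" author="dba-team">
        <addColumn tableName="orders">
            <column name="status" type="varchar(20)" defaultValue="NEW"/>
        </addColumn>
    </changeSet>
</databaseChangeLog>
```

It is this per-change identity and history that the connectors carry into ServiceNow tickets, GitHub pull requests, Harness pipelines, and Terraform runs.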

Together, the connectors are designed to remove one of the biggest barriers to stronger database governance: the belief that teams need to rebuild their workflows to get it. Instead, Liquibase is extending governed database change into the systems teams already use, while strengthening traceability, standardization, and audit evidence across the delivery lifecycle.

Built for a new era of AI, data integrity, and operational accountability

The new capabilities reflect a broader shift in how enterprises are thinking about AI readiness and operational risk.

As AI initiatives expand, more changes are being generated, reviewed, and pushed through delivery systems at higher speed and greater scale. But when database change remains inconsistent, weakly governed, or hard to trace, the resulting risk does not stay isolated at the database layer. It carries into applications, analytics, automation, and AI-driven systems.

By helping organizations better understand database changes, catch drift earlier, investigate failures faster, and centralize audit evidence, Liquibase is giving enterprises a stronger operational foundation for trusted applications, data products, and AI initiatives.

Availability

Liquibase Change Intelligence, Liquibase Secure Deployment Connectors, and related capabilities are expected to begin rolling out in fall 2026. Additional details will be shared closer to availability.

Ericsson to power majority of Virgin Media O2’s UK RAN network through major partnership extension

Posted in Commentary with tags on March 31, 2026 by itnerd

Ericsson will become Virgin Media O2’s primary radio access network (RAN) partner in a five-year partnership extension that will see Ericsson power the majority of the UK service provider’s nationwide radio network. By securing the majority of the radio network element of Virgin Media O2’s latest Mobile Transformation Plan, the partnership extension will be worth several hundred million euros to Ericsson across the five years.

Virgin Media O2’s Mobile Transformation Plan will deliver faster, more reliable mobile connectivity across the UK.

With Virgin Media O2’s mobile traffic more than doubling in the last five years alone, a key element of the network enhancement will focus on maximizing the capabilities of additional 5G mid-band spectrum acquired by Virgin Media O2 in 2025, to strengthen the service provider’s UK leadership in 5G Standalone (SA) connectivity.

The partnership extension is the latest development in Virgin Media O2’s Mobile Transformation Plan – with 2026 investments aimed at improving reliability, boosting capacity and widening coverage across its nationwide network.

The upgrade will feature the deployment of a wide range of Ericsson Radio System products, including advanced and energy-efficient multiband Massive MIMO radios – such as the AIR 3229 and the triple-band Radio 4486 – at both new and existing locations.

Ericsson AI and machine learning-based software will also be deployed to intelligently optimize network performance and efficiency in real time.

Network programmability and intelligence will help Virgin Media O2 to utilize the full capabilities of its 5G SA network, supporting advanced differentiated services through network slicing for application, enterprise and industry use cases.

The network upgrade will enable Virgin Media O2 to move more of its customer base to its 5G SA network, which is already available to 87 percent of the UK population. The partnership is also structured to support Virgin Media O2’s evolution to Cloud RAN and to scale into future 5G-Advanced.

The 2026 enhanced Ericsson-VMO2 partnership is the latest development in a productive longstanding relationship between the companies – which included the 2025 investment tranche of the Mobile Transformation Plan.

That scope included performance and capacity improvements through additional spectrum, network densification and small‑cell deployments, targeted upgrades at network hot spots (like stadiums and transport hubs), and extended coverage along railways, major roads, and previously underserved rural and coastal areas.

Hammerspace Announces FIPS 140-3 Validation

Posted in Commentary with tags on March 31, 2026 by itnerd

Hammerspace today announced support for FIPS 140-3 validated cryptography, enabling the Hammerspace Data Platform to be configured to meet the U.S. government standard for cryptographic security. This milestone positions Hammerspace to support deployments in federal, defense, healthcare, finance and other highly regulated environments. Integration into the Hammerspace Data Platform is planned for an upcoming release by the end of 2026.

By supporting FIPS 140-3 validated cryptography, Hammerspace meets key requirements for secure data protection in regulated environments and is advancing the integration of these capabilities into the Hammerspace Data Platform.

Security Enforced at the Data Layer for Consistent Control, Compliance and Data Sovereignty

Hammerspace delivers consistent, policy-driven orchestration, governance and protection across distributed environments, providing consistent control in multi-site and hybrid-cloud architectures. With the integration of FIPS 140-3 validated cryptography, the platform is designed to provide:

  • End-to-End Encryption with FIPS-Validated Security: Support for encrypting data in-flight and at-rest using FIPS 140-3 validated cryptographic modules, aligning with federal security requirements.
  • Built-In Data Protection and Ransomware Resilience: Immutable snapshots, clones and WORM capabilities to enable rapid recovery and protect against unauthorized modification or deletion.
  • Consistent Security Enforcement Across a Global Namespace: Centralized policy enforcement across the global namespace, ensuring consistent protection across sites, clouds and storage systems.
  • Unified Access Controls Across Protocols and Environments: Consistent access policies across file and object data, spanning NFS, SMB and S3.
  • Policy-Driven Data Governance, Sovereignty and Orchestration: Metadata-driven data placement policies to control where data resides, how it moves and how it is used in real time.

The Federal Information Processing Standard (FIPS) 140-3 is defined by the National Institute of Standards and Technology (NIST) and establishes stringent requirements for the design, implementation, and validation of cryptographic modules used to protect sensitive data. Validation requires independent testing by accredited laboratories and is mandatory for systems used by U.S. federal agencies and organizations operating under stringent compliance mandates.

Learn more about Hammerspace solutions for the public sector at https://hammerspace.com/public-sector/.