Archive for March, 2026

Multiply raises $9.5m for self-learning ads, reports 300%-500% pipeline increase for B2B companies

Posted in Commentary with tags on March 18, 2026 by itnerd

Multiply is the first AI-native media agency for B2B companies. All marketers know that in traditional advertising, campaigns start losing effectiveness the moment they launch. Creative gets stale and audiences tune out. Multiply calls this phenomenon “decaying ads.”

Today, the company emerged from stealth with $9.5 million in funding to introduce what it calls the next paradigm: Self-Learning Advertising, where ads use internal data to continuously get better on their own. The round was led by Mayfield, with participation from Sorenson Capital, Instacart Co-Founder Max Mullen, Google Head of Gemini and Google Labs Josh Woodward, and executives from HubSpot, Braze, Issuu, Brex, Sierra, and Common Room, among others.

Early customers report outsized impact in sales pipeline generated from ads. Vanta, a leader in security automation, which has raised over $500 million from Sequoia Capital and other top VCs, shared: “We’ve seen 770% more sales meetings, we build and test faster with their AI, and their team is strategic, hands-on, and operates as trusted partners.” Listen Labs, the leading AI customer research platform that has raised $100M, said LinkedIn has become its most efficient paid channel for new leads, with campaigns performing 5X above LinkedIn benchmarks. Across customers, the common thread is velocity, lead quality, and pipeline impact.

Multiply was founded by Matt Jayson, formerly at Google and Brex, and Ashish Warty, formerly SVP Engineering at HackerOne and engineering leader at Dropbox and Airship.

To tackle something this ambitious, Multiply couldn’t just build AI software. The company operates as a media agency staffed by expert strategists, who use Multiply’s proprietary AI to operate campaigns at speeds and with impact previously impossible.

Multiply’s Customer Insights AI Agent extracts real customer language from sales calls and uses it to personalize ads. The ICP Agent analyzes closed-won deals to refine targeting. The Quality Score Agent continuously tunes copy and keyword alignment. The Creative Design Agent refreshes images weekly. The A/B Testing Agent runs hundreds of experiments, quickly identifying winners and cutting losers. Ashish Warty, Co-founder and CTO of Multiply, says, “Together, these systems allow Multiply to iterate faster than any traditional agency model.”
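
The “identify winners, cut losers” step can be pictured as a simple experiment-pruning loop. A minimal sketch under stated assumptions (the thresholds, variant names, and numbers are invented for illustration; Multiply’s actual agents are proprietary):

```python
def prune_variants(results, min_impressions=1000, keep_top=3):
    """Rank ad variants by conversion rate and cut the losers.

    `results` maps variant name -> (impressions, conversions).
    Variants with too few impressions are kept and continue testing.
    """
    mature = {v: conv / imp for v, (imp, conv) in results.items()
              if imp >= min_impressions}
    immature = [v for v, (imp, _) in results.items() if imp < min_impressions]
    winners = sorted(mature, key=mature.get, reverse=True)[:keep_top]
    return sorted(winners + immature)

# Hypothetical experiment data: headline_b is cut as a clear loser,
# while headline_d has too little data to judge yet.
results = {
    "headline_a": (5000, 150),  # 3.0% conversion
    "headline_b": (5000, 40),   # 0.8% conversion
    "headline_c": (5000, 120),  # 2.4% conversion
    "headline_d": (200, 10),    # still learning
    "headline_e": (5000, 90),   # 1.8% conversion
}
print(prune_variants(results))  # ['headline_a', 'headline_c', 'headline_d', 'headline_e']
```

A production system would add statistical significance checks before cutting; the point here is only the shape of the loop.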

While Multiply launched first with Google and LinkedIn ads, the company says its infrastructure was designed for emerging AI-driven ad platforms like ChatGPT ads. Multiply is already helping its customers prepare for ChatGPT ads. All campaign learnings and experimentation systems can extend directly into new formats, including conversational and AI-driven advertising experiences.

Looking ahead, Multiply will expand into a full omni-channel ad buyer for B2B companies, enabling businesses to launch and optimize advertising across all major platforms from a single system. The roadmap includes expansion to additional channels, daily creative refresh, unified cross-channel attribution, and AI-driven budget allocation across ad channels to maximize pipeline impact. As new AI-powered advertising channels emerge, Multiply aims to help customers adopt them early while continuing to outperform across existing platforms.

Polygraf AI Launches Desktop Overlay as a Real-time AI Behavior Control Plane Across Enterprise Operations

Posted in Commentary with tags on March 18, 2026 by itnerd

Polygraf AI today announced the launch of its Desktop Overlay, a new product designed to provide continuous, real-time guidance for compliance operations and data protection directly at the user interface level, as a personal compliance assistant. Built for highly regulated industries and government agencies, the Desktop Overlay runs at the edge and preemptively warns users of sensitive data exposure while they are writing – before the data is sent to third-party models or external systems, or leaves device endpoints – and requires no integration.

As AI adoption accelerates across everyday workflows, organizations face a growing challenge: sensitive information is increasingly shared unintentionally through chat tools, AI assistants, email, and browser-based applications. Traditional Data Loss Prevention (DLP) tools rely on post-exposure audits, endpoint monitoring, or reactive controls that introduce friction and often fail to stop human error in the moment. Polygraf AI’s Desktop Overlay addresses this gap.

Data Protection at the Edge

Operating directly at the desktop interface, across all applications, the Overlay identifies and flags sensitive information within 100 milliseconds, as users type. Using intuitive color-coding, it highlights confidential data, such as employee IDs or contact information, in yellow, and critical regulatory data, like Social Security numbers, API keys, or protected health information, in red, providing immediate visual feedback so users can correct mistakes before data leaves the organization.
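
The two-tier color-coding idea can be sketched in plain pattern-matching terms. This is an illustration only: the EMP-NNNNN employee-ID format and the API-key pattern below are invented, and Polygraf’s actual detection relies on trained small language models rather than regexes:

```python
import re

# Illustrative two-tier rules: "red" for critical regulatory data,
# "yellow" for merely confidential data.
RULES = [
    ("red",    re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),    # SSN-like numbers
    ("red",    re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")),  # API-key-like tokens
    ("yellow", re.compile(r"\bEMP-\d{5}\b")),            # employee IDs (assumed format)
    ("yellow", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),  # email addresses
]

def classify(text):
    """Return (severity, matched_text) findings for a chunk of typed text."""
    findings = []
    for severity, pattern in RULES:
        for match in pattern.finditer(text):
            findings.append((severity, match.group()))
    return findings

print(classify("Ping jane.doe@example.com about EMP-10234, SSN 123-45-6789"))
```

An overlay would run a check like this on each keystroke batch and paint the matched spans yellow or red in place.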

Unlike legacy DLP systems, the Overlay does not wait for data to be transmitted or logged. It proactively highlights sensitive content in real time using Polygraf’s task-specific Small Language Models. These models run entirely within customer infrastructure, requiring as little as a 1.3 GHz CPU and 8GB of system RAM while consuming just 40-120MB of memory, giving organizations complete control, visibility, and auditability over AI interactions.

The result is a shift from reactive enforcement to continuous protection and education.

Additionally, with the Overlay, Polygraf provides real-time behavioral training for employees. Rather than blocking workflows or relying solely on annual compliance trainings, the Desktop Overlay serves as an always-on security coach. As employees see real-time highlighting across email, chat, AI tools, and internal systems, they develop a practical understanding of what constitutes sensitive information within their organization. Over time, this builds lasting security awareness while reducing accidental exposure. During pilot testing, customers saw up to a 72% decline in their DLP triggers within 4 weeks of Overlay adoption.

For organizations operating under SOC2, HIPAA, GDPR, NIST-RMF, or other compliance frameworks, the Overlay combines immediate safeguards with long-term improvements in workforce behavior. It enables productivity while strengthening governance.

This shift toward preemptive control is becoming imperative as organizations struggle to govern autonomous AI deployments. According to Gartner, “By 2027, 40% of agentic AI projects will be canceled due to high costs, unclear value, and inadequate controls.” Polygraf AI directly addresses the “controls” gap by embedding security into the user’s natural workflow, ensuring AI initiatives move from pilot to production safely.

Over the past year, Polygraf AI has expanded its footprint across the defense, financial services, insurance, and healthcare sectors, where data sovereignty and compliance are mission-critical. The company’s premise-agnostic AI Behavioral Usage Control Layer provides explainable, auditable controls that align with strict regulatory and operational requirements, offering organizations a practical alternative to opaque, cloud-dependent AI security tools.

With the launch of the Desktop Overlay, Polygraf extends its AI security platform directly to the individual user, embedding protection into daily workflows without disrupting productivity. The company will showcase the Desktop Overlay and its broader AI usage control platform during the RSAC Conference, where attendees can see how the technology protects AI interactions in real time across enterprise environments.

Source: Gartner Report, When AI Goes Rogue: Building Guardrails and Kill Paths for Agentic I&O, By Apurva Singh, February 2026. Gartner is a trademark of Gartner, Inc. and/or its affiliates.

Building the Future of Travel & Expense: New Innovations from the SAP Concur and Amex GBT Alliance

Posted in Commentary with tags on March 17, 2026 by itnerd

Today at SAP Concur Fusion, SAP Concur and American Express Global Business Travel (Amex GBT) are announcing new advancements in Complete by SAP Concur and Amex GBT, an AI-enabled, co-developed solution for travel booking, servicing, payments, and expensing. These updates mark an important milestone in building the framework and foundations for next-generation business travel between the two companies: expanding content access, strengthening service, and further integrating travel and expense into a more seamless and intelligent experience that enhances customer value.

Since announcing the strategic alliance in October 2025, adoption of Complete has continued to accelerate. Product and engineering teams from both companies have been focused on building the foundations to connect Amex GBT’s marketplace and servicing with Concur Travel & Expense to deliver features that simplify processes, increase visibility, and support smarter decision-making across the entire travel lifecycle.

Integrated Travel Support from Joule and Travel Counselor

An enterprise-integrated chat platform with automated hand-off to a live travel counselor for additional support is now in pilot for customers. This is the first of many features to go live for Complete customers. Layering in Joule in Q2 2026 will bring SAP’s agentic AI directly into the experience to help travelers find answers faster and with greater confidence.

Joule will be trained on Amex GBT’s most common inquiries and supplier marketplace data, enabling it to resolve frequent questions more effectively throughout the trip lifecycle. When a request requires additional support, the conversation can transition to a live travel counselor from Amex GBT without the traveler needing to leave Complete. This experience brings together the convenience of AI and the reassurance of human support, all in one place.

New Travel Manager Home Page

As the role of the Travel Manager continues to expand, bringing information together in one place helps reduce friction, improve efficiency, and keep teams focused on optimizing their travel program rather than managing multiple systems.

To support this need, Complete is introducing a new home page designed specifically for Travel Managers. This home page integrates SAP Concur and Amex GBT data and brings key travel and expense tools, insights, and controls into one central place, with a cleaner look and simpler navigation. Travel managers will have a single starting point to monitor programs, access reports, manage approvals, and connect to duty-of-care information.

Initially, the home page will provide essential insights, with more advanced analytics and reporting planned over time. In the future, it will deliver a comprehensive, real-time view of the travel program, including spend trends, savings opportunities, and policy compliance, to help travel managers make faster, more informed decisions and reduce leakage.

Expanded Content Across Air, Hotel and Ground

Content breadth is foundational to delivering a better travel experience, and Complete continues to expand access across key categories. The Amex GBT marketplace is being fully built into Complete and will provide access to all of Amex GBT’s Air, Hotel, and Ground content. This will include NDC, core GDS content, Booking.com and Expedia hotel content, and much more. Complete users also now have access to more than 80 rail providers. These live rail connections expand transportation options and support regional travel needs, particularly in regions where rail is a preferred or more sustainable choice.

SAP Concur and Amex GBT are also building the foundation for expanded NDC content with a single, unified implementation to lay the groundwork for symmetry across the SAP Concur and Amex GBT ecosystem. This will help provide customers with consistent, predictable access to NDC airline content without needing to worry about where or how that content is sourced.

Concur Expense and Amex GBT Egencia Integration

Additionally, Concur Expense is integrating with the Amex GBT Egencia solution, introducing enhanced e-receipt and itinerary data integration. This integration is enabling Egencia travel bookings to automatically flow into Concur Expense, with eligible items pre-populated and enriched with detailed itinerary and transaction data.

Travelers no longer need to manually enter booking details, making it easier to move from booking to expensing without re-entering information or manually uploading receipts. Finance and travel teams will also benefit from more timely, accurate expense data that supports stronger compliance, reporting, and visibility into travel spend.

The pilot integration is delivering a configurable, personalized, end-to-end digital travel experience for joint customers now and will be generally available in April 2026.

A Continuation of Their Shared Vision

Together, SAP Concur and Amex GBT are delivering:

  • A unified marketplace with access to 600+ airlines and 2 million+ properties, offering greater incentives and cost savings
  • Accelerated NDC releases and modern retailing capabilities
  • AI-powered tools trained on unmatched travel and expense data
  • Integrated traveler support through conversational, connected channels
  • Streamlined processes across booking, servicing, payments, and expensing

TMC Partner Program by SAP Concur and GBTNetwork

SAP Concur and GBTNetwork have announced the TMC Partner Program by SAP Concur and GBTNetwork. This joint program extends select benefits to our most trusted and capable mutual TMC partners—those who can bring this vision of superior travel marketplace content and travel technology of the future to customers and prospects. 

With this program launch, we’re also introducing two levels of partner capability and competency recognition – TMC Gold and TMC Silver – aligned to travel content, technology adoption, and delivery excellence. These levels are intended to help customers understand which TMCs have earned our highest level of trust to bring this vision to life.

Learn more about Complete from SAP Concur and Amex GBT here: https://www.concur.com/solutions/complete.

Penguin Solutions Selected by Deepgram to Enable Deployment of Optimized AI Inference Infrastructure for Enterprise Voice AI

Posted in Commentary with tags on March 17, 2026 by itnerd

Penguin Solutions today announced a strategic collaboration with Deepgram and Dell Technologies to architect and deploy a fully optimized, production-ready infrastructure aligned to Deepgram’s demanding enterprise voice AI requirements. By leveraging its unique expertise in designing, building, deploying, and managing AI infrastructure with Dell PowerEdge servers and Dell PowerScale storage optimized for AI workloads, Penguin Solutions delivered an optimal solution to support and enhance Deepgram’s innovative Speech-to-Text (STT), Text-to-Speech (TTS), and Voice Agent capabilities, while ensuring maximum reliability and performance.  

As enterprise adoption of generative AI accelerates, organizations must adhere to stricter service level agreements (SLAs), which require infrastructure that can ensure low latency and high concurrent usage. This Penguin-led deployment addresses these challenges by combining Deepgram’s innovative voice AI models with a purpose-built architectural design, a highly efficient deployment, and ongoing performance optimization.

Drawing on its extensive experience with HPC and AI infrastructure, Penguin Solutions ensures that the underlying infrastructure meets the specific demands of Deepgram’s neural networks. The architecture also incorporates Dell PowerScale storage and Dell PowerEdge XE7745 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which provide efficient inferencing that enables data-intensive voice applications to operate seamlessly in real-time environments.

The Deepgram-Penguin Solutions-Dell collaboration comprises a comprehensive approach for enterprises looking to modernize their customer and employee experiences. With Deepgram’s API-driven voice capabilities, Penguin Solutions’ AI services, and Dell’s powerful AI infrastructure, organizations can achieve highly accurate, real-time transcription and speech synthesis—all while maintaining strict data governance and control.

For those attending the NVIDIA GTC AI Conference and Expo, March 16-19, 2026, in San Jose, CA: learn more about this collaboration at Dell’s Booth #721 on March 17 at 3:30 p.m. during the session “Powering Enterprise Voice AI: Deepgram’s Agentic Solution,” presented by Penguin, Deepgram, and Dell. Attendees can also stop by Penguin Solutions’ booth #1031 to speak with an AI factory platform expert.

GhostPoster, and Why Browser Extensions Are Your Next Major Blind Spot

Posted in Commentary with tags on March 17, 2026 by itnerd

Browser extensions have quietly become one of the more dangerous and overlooked attack surfaces within the enterprise. Fortra Intelligence and Research Experts (FIRE) have released a new Browser Extension Threat Guide that breaks down why this risk is escalating and what security teams need to do now to close the gap.

This in‑depth guide covers:

  • A deep forensic analysis of the GhostPoster campaign, including staged payloads, obfuscation techniques, and real-world impact.
  • How modern extension malware evades EDR by hiding inside legitimate browser processes and abusing trusted APIs.
  • Actionable detection and threat hunting playbooks focused on manifest analysis, sideloading identification, and high‑risk behaviors.
  • Clear mitigation strategies, including extension governance, default‑deny controls, and browser-layer security recommendations.
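
As a concrete starting point for the manifest-analysis work described above, extension triage can begin with flagging high-risk permission combinations. A minimal sketch against Chrome-style Manifest V3 files (this particular risk list is an assumption, not taken from the Fortra guide):

```python
import json

# Permissions often treated as high-risk in extension threat hunting;
# the exact list is the author's assumption, not from the guide.
HIGH_RISK = {"webRequest", "cookies", "history", "tabs", "scripting",
             "nativeMessaging", "debugger", "clipboardRead"}

def assess_manifest(manifest_json):
    """Return risk flags for a Chrome-style Manifest V3 extension manifest."""
    manifest = json.loads(manifest_json)
    flags = sorted(set(manifest.get("permissions", [])) & HIGH_RISK)
    if "<all_urls>" in manifest.get("host_permissions", []):
        flags.append("host_access:<all_urls>")
    return flags

sample = json.dumps({
    "manifest_version": 3,
    "name": "Example Helper",
    "permissions": ["storage", "cookies", "scripting"],
    "host_permissions": ["<all_urls>"],
})
print(assess_manifest(sample))  # ['cookies', 'scripting', 'host_access:<all_urls>']
```

Real playbooks go much further (sideloading provenance, staged payload detection), but even this level of manifest review surfaces extensions that can read cookies and inject scripts into every site.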

If extensions aren’t already on your threat model, this guide will show you why they need to be. You can access it here: https://www.fortra.com/resources/guides/browser-extension-threat-guide

Chatbot logs and audio exposed in data breach at major department store chain

Posted in Commentary with tags on March 17, 2026 by itnerd

Cybersecurity researcher Jeremiah Fowler recently discovered three separate databases that were neither password-protected nor encrypted and contained a total of 3.7 million chat log transcripts, audio recordings, and text transcriptions of phone calls belonging to Sears Home Services.

The publicly exposed databases totaled over 4TB and contained:

  • 2,116,011 txt files that exposed names, phone numbers, physical addresses, and user-submitted personally identifiable information (PII).
  • 207,381 xlsx files and audio recordings totaling 415.2GB.
  • 1,442,577 audio recordings of customers and their text transcripts totaling 3.9TB.

Jeremiah’s detailed findings are published on the ExpressVPN blog here: https://www.expressvpn.com/blog/searshomeservices-data-exposed/.

Hammerspace Launches AI Data Platform Based on NVIDIA Reference Design 

Posted in Commentary with tags on March 17, 2026 by itnerd

Hammerspace announced today the general availability of its new AI Data Platform (AIDP) solution. AIDP is a turnkey approach that removes one of the biggest barriers preventing enterprise AI pilot projects from reaching production: the lack of seamless access to distributed enterprise datasets. It does this without creating new copies, performing slow migrations, or relying on manual preparation and curation, dramatically simplifying and securing the process of curating AI-ready data.

The Hammerspace AIDP meets enterprises where they are by allowing them to start making their existing data AI-ready using the infrastructure they already own, without deploying a separate AI storage system. By uniquely leveraging data in place, Hammerspace eliminates the need to purchase massive amounts of new flash just to house AI data. 

Solving the Primary Blockers to Enterprise AI Success

Eliminate Data Fragmentation. Identifying, gathering, organizing, and transforming unstructured data into an AI-ready format remains labor-intensive and highly manual. In most enterprises, the same work of finding the right data, enriching metadata, and shaping it into a form AI agents and models can use is repeated across teams, projects, and platforms because the data estate is fragmented. Hammerspace eliminates this fragmentation by providing a unified view across heterogeneous systems and automating the entire pipeline that produces AI-ready data for applications.

Skip Costly Mass Migrations. By enabling customers to use data in place, Hammerspace eliminates tedious migrations and the heavy manual work behind copy-first pipelines that consume human capital and stall initiatives. Instead of requiring a new AI storage buildout just to get started, the platform accelerates time to value and time to answer by making distributed data immediately usable for enterprise AI.

Reduce Data Copies. Hammerspace defeats data gravity by continuously cataloging distributed data in place, then using its Model Context Protocol (MCP) server to coordinate with NVIDIA and other AI tools and applications so only the data that’s needed moves, when it’s needed. With policy- and security-driven automation managing placement and flow end to end, vectors and source data stay continuously synchronized with consistent governance, compliance and performance. This allows pilot programs to scale cleanly into production with operational simplicity.

Image 1: The Hammerspace AI Data Platform: Seamless Access to Distributed Enterprise Datasets

Delivered and Validated by SHI, the Premier Experts in AI Transformation

SHI has been a key partner in the development and testing of the Hammerspace AI Data Platform solution, using its AI and Cyber Lab to quickly showcase the value and integrations across technologies for enterprise-scale AI factories.

Full End-to-End Solution on Cisco UCS with Secuvy DSPM

Hammerspace also delivers solutions that meet enterprise demands across the spectrum by combining best-of-breed technologies from its ecosystem partners. To provide organizations with a complete, validated and secure AI infrastructure, Hammerspace has established key partnerships and achieved major integration milestones.

All-in-One Orchestration: Hammerspace collapses as many as 15 disconnected tools for data discovery, cataloging, classification, policies and movement into a single orchestration layer, providing unified data insight, management and access. The platform is also the first to deliver a fully agentic data foundation, intelligently managing data placement and flow based on real-time demand.

  • NVIDIA Partnership: The Hammerspace AIDP is built on NVIDIA’s reference design, ensuring optimal performance and compatibility with accelerated computing platforms, including NVIDIA RTX PRO 6000 and RTX PRO 4500 Blackwell Server Edition GPUs. Using NVIDIA AI Enterprise software, including NIM microservices and NeMo Retriever, Hammerspace converges data management with data orchestration across heterogeneous storage to simplify and automate the data pipeline and deliver the security, governance and content indexing required for high-performance inference, retrieval-augmented generation (RAG) and agentic AI.
  • Secuvy DSPM Integration: The Hammerspace AIDP is integrated with Secuvy’s Data Security Posture Management (DSPM) technology, providing customers with an end-to-end solution that prepares and delivers AI-ready data while ensuring continuous security monitoring, compliance, and governance throughout the entire data pipeline.

Hardware Platform Flexibility: Hammerspace’s software-defined architecture provides the ultimate flexibility for the modern enterprise. Our AIDP can be delivered on a broad ecosystem of industry-leading hardware from partners including Cisco, Lenovo, and Supermicro. It seamlessly integrates with any server environment that meets performance specifications, ensuring organizations can leverage their preferred infrastructure without compromise.

Availability and More Information

Hammerspace will feature its AI Data Platform in Booth #7040 at NVIDIA’s GTC 2026, March 16-19, in San Jose, California. 

The solution is immediately available. Customers can contact their Hammerspace sales representative or authorized partners to operationalize their data for AI success.

Black Kite Introduces Open FAIR™-Based Risk Assessments

Posted in Commentary with tags on March 17, 2026 by itnerd

Black Kite today announced the release of Open FAIR™-Based Risk Assessments, which extends its CRQ capabilities to its AI-powered cyber assessment offering. Black Kite fully automates the calculation of probable financial impact in the event of a data breach, ransomware attack, or business disruption scenario using the industry-leading Open FAIR™ methodology, eliminating the complexity and manual effort typically associated with CRQ analysis. This latest release brings CRQ directly into the cyber risk assessment workflow, enabling customers to instantly calculate financial risk during onboarding and periodic risk reviews.

As the industry’s first provider to automate Cyber Risk Quantification (CRQ) for third-party risk management, Black Kite has long delivered real-time CRQ through its continuous monitoring offering. These insights help customers prioritize remediation efforts and vendor outreach, and clearly communicate risk and program success to executive and business stakeholders.

By introducing Open FAIR™-based risk quantification into the assessment workflow, customers can model onboarding decisions through “what-if” analysis. For example, they can simulate how sharing more or fewer records with a vendor impacts financial risk so that they can set clear vendor approval conditions. Additionally, customers are able to view real-time CRQ alongside assessment-based CRQ captured at onboarding and during periodic risk reviews to track how vendor risk is trending over time.

Customer key benefits include:

  • Turn risk decisions into business decisions: Instantly quantify a company’s financial risk during onboarding and annual assessments to inform vendor selection, renewal decisions, and even insurance underwriting.
  • Clearer vendor comparisons: Use a consistent financial risk language (e.g., “Are we willing to accept $10M vs. $2M of cyber risk in a ransomware scenario?”) to objectively compare vendors and select the best option.
  • Understand risk trends over time: Track how a vendor’s financial risk changes by comparing point-in-time CRQ from assessments with real-time CRQ from continuous monitoring to get a high-level understanding of vendor maturity, remediation progress, and the impact of outreach campaigns over time.
  • Model scenarios with full customization: Adjust model inputs to test different decision conditions, like onboarding a vendor only if data access is limited, and see how each scenario changes probable financial impact.
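
The record-sharing what-if described above can be pictured with a toy Monte Carlo in the spirit of FAIR’s frequency-times-magnitude decomposition. All parameters below are invented for illustration; real Open FAIR analyses use calibrated distributions, not these assumptions:

```python
import random
import statistics

def expected_annual_loss(event_prob, records_shared, cost_per_record,
                         trials=20000, seed=7):
    """Toy FAIR-style simulation: annual loss = event frequency x magnitude.

    Illustrative only: event probability, exposure fraction, and per-record
    cost are assumptions, not calibrated Open FAIR inputs.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        if rng.random() < event_prob:  # does a loss event occur this year?
            exposed = rng.uniform(0.1, 1.0) * records_shared
            losses.append(exposed * cost_per_record)
        else:
            losses.append(0.0)
    return statistics.mean(losses)

# What-if: full data sharing vs. limiting the vendor to 20% of records
full = expected_annual_loss(0.05, 1_000_000, 150)
limited = expected_annual_loss(0.05, 200_000, 150)
print(f"full access:    ${full:,.0f}")
print(f"limited access: ${limited:,.0f}")
```

Comparing the two expected-loss figures is exactly the kind of “$10M vs. $2M” conversation the vendor-comparison bullet above describes, just with placeholder numbers.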

Open FAIR™-Based Risk Assessments key features include:

  • Automated FAIR model population: Never start with a blank model; Open FAIR™ factors are automatically populated and enhanced by assessment responses, uploaded documentation, and insights from continuous monitoring.
  • Assessment-based private modeling: Run private, assessment-specific analysis to estimate probable financial risk impact at key moments, such as onboarding, renewal, or after a major outreach campaign.
  • Full customization: Customize exposure metrics and FAIR inputs across key scenarios or entirely custom scenarios to test different assumptions.

For more information, visit https://blackkite.com/platform/financial-impact.

DH2i Launches DxEnterprise v26.0 and DxOperator v2

Posted in Commentary with tags on March 17, 2026 by itnerd

DH2i today announced the general availability (GA) launch of DxEnterprise v26.0 and DxOperator v2, featuring significant enhancements to high availability (HA), disaster recovery (DR), and operational resilience for SQL Server deployments across Windows, Linux, and Kubernetes environments. Together, the releases introduce meaningful advances in availability group (AG) protection, security controls, observability, and automation for both traditional and containerized SQL Server deployments.

In today’s enterprises, a perfect storm has emerged where applications have become direct revenue channels, infrastructure complexity has increased while IT staffing has not, modernization initiatives are no longer optional, security and compliance requirements are tightening, and software update velocity has accelerated. Together, these forces expose the limits of traditional HA approaches. What once worked for small, static clusters no longer scales when SQL Server deployments span hybrid, multi-platform, and containerized environments that demand continuous availability, stronger safeguards, and higher levels of automation. DxEnterprise v26.0 and DxOperator v2 address these challenges head-on.

DxEnterprise v26.0 focuses on improving cluster resilience, visibility, and administrative confidence through enhanced monitoring, stronger safeguards against split-brain scenarios, expanded credential support, and platform modernization. DxOperator v2 extends those capabilities into Kubernetes environments, giving users greater control over scale, updates, and network configuration for SQL Server AGs running in containers.

What’s New in DxEnterprise v26.0

Deeper SQL Server and Availability Group Intelligence

  • Database-level health monitoring is now enabled by default, allowing faster detection of issues affecting individual databases within an AG
  • Automatic per-availability-group quorum enforcement prevents split-brain scenarios by demoting or shutting down replicas when quorum requirements are not met
  • Improved replica connectivity alerts provide real-time notification when replicas disconnect or when SQL Server replica configurations diverge from expected cluster state
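
The quorum idea behind split-brain prevention can be sketched as a simple majority rule (illustrative only; this is not DH2i’s actual algorithm):

```python
def enforce_quorum(replicas_visible, total_replicas, is_primary):
    """Illustrative majority-quorum rule for an availability group.

    A node that can see fewer than a strict majority of the AG's replicas
    steps aside, so two network partitions can never both host a writable
    primary at the same time.
    """
    majority = total_replicas // 2 + 1
    if replicas_visible < majority:
        return "demote" if is_primary else "shut down"
    return "ok"

# 5-replica AG split 3/2: the minority-side primary demotes itself,
# while a primary on the majority side keeps serving writes
print(enforce_quorum(replicas_visible=2, total_replicas=5, is_primary=True))  # demote
print(enforce_quorum(replicas_visible=3, total_replicas=5, is_primary=True))  # ok
```

The strict-majority threshold is what makes the guarantee work: at most one partition can ever contain a majority of replicas.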

Improved Security and Credential Resilience

  • Support for secondary SQL Server backup credentials enables automatic fallback if primary authentication fails, reducing downtime caused by credential changes or expirations
  • Administrative sessions are automatically disconnected when the cluster passkey changes, ensuring only authorized users with current credentials retain access
  • The DxAdmin user interface now includes clearer prompts, stronger validation, and improved feedback for passkey configuration
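
The secondary-credential behavior above amounts to a try-primary-then-fall-back pattern. A minimal sketch with hypothetical function and credential names (DxEnterprise’s real interface differs):

```python
def connect_with_fallback(connect, primary_cred, secondary_cred):
    """Try the primary backup credential; on an authentication failure,
    fall back to the secondary so the backup job still runs."""
    try:
        return connect(primary_cred)
    except PermissionError:
        return connect(secondary_cred)

def fake_connect(credential):
    """Stand-in for a backup-target login that rejects stale passwords."""
    if credential != "current-password":
        raise PermissionError("login failed")
    return "connected"

# The primary credential has expired, but the job still succeeds
print(connect_with_fallback(fake_connect, "expired-password", "current-password"))  # connected
```

This is why the feature reduces downtime: a rotated or expired primary password no longer blocks the operation outright.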

Greater Stability and Observability

  • Core monitoring services, including DxLMonitor, DxCMonitor, DxStorMonitor, and DxHealthMonitor, have received reliability and stability improvements to reduce unexpected restarts and improve overall cluster resilience
  • Basic anonymous telemetry is now available to help improve product quality and diagnostics, with opt-out configuration for customers who prefer not to participate

Platform and Usability Enhancements

  • DxEnterprise’s Linux version now runs on the .NET 8.0 runtime, delivering improved performance, security, and long-term support alignment
  • Virtual hosts can now be renamed using a new rename-vhost command, simplifying cluster management and reorganization
  • Additional safeguards prevent accidental overwriting of existing data stores during SQL Server high availability virtualization
  • Enhancements to DxCLI and DxPS improve command-line usability, including human-readable XML output and new PowerShell cmdlets
  • The DxCollect utility now includes expanded command-line options for more targeted diagnostics and log collection

What’s New in DxOperator v2

Flexible Scaling Up and Down

  • Availability group clusters can now be expanded or reduced dynamically
  • Unlike the previous version, DxOperator v2 can safely de-configure and remove replicas from a running cluster, enabling true scale-down operations
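
For illustration, scaling down with a Kubernetes operator typically means lowering a replica-count field in the custom resource and reapplying the manifest. The resource kind, API group, and field names below are hypothetical placeholders, not DxOperator’s documented schema:

```yaml
# Hypothetical custom resource sketch; the kind, API group, and
# field names are illustrative placeholders, not the documented schema.
apiVersion: dh2i.com/v2
kind: DxEnterpriseSqlAg
metadata:
  name: example-ag
spec:
  replicas: 3   # was 5; lowering the count asks the operator to
                # de-configure and remove two replicas from the
                # running availability group cluster
```

Reapplying the edited manifest with `kubectl apply -f` would then let the operator drive the scale-down safely.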

Automated Rolling Updates

  • Administrators can automate rolling updates of SQL Server or DxEnterprise container images, allowing pods to be updated one at a time without manual intervention
  • Updates can also be performed manually when desired, giving operators full control over rollout strategy
  • DxOperator does not automatically check for new container versions, ensuring that administrators remain in control of when and how updates are applied
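
A manual rolling update would follow the same edit-and-apply pattern: bump the image reference in the custom resource and let the operator replace pods one at a time. The field names here are likewise assumptions for illustration only:

```yaml
# Hypothetical fragment of the custom resource spec; field names
# and image tags are illustrative placeholders.
spec:
  sqlServerImage: mcr.microsoft.com/mssql/server:2022-latest  # updated tag
  dxEnterpriseImage: dh2i/dxe:latest                          # updated tag
```

Since DxOperator does not poll for new container versions, nothing changes until an administrator applies an edit like this.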

Advanced Network and Service Configuration

  • Flexible service templates allow load balancers and other network services to be fully specified and automatically deployed per availability group replica
  • This enables more consistent connectivity across different Kubernetes environments and cloud providers
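
As a concrete sketch of what a per-replica service template could produce, the standard Kubernetes `Service` below targets a single StatefulSet pod via the built-in `statefulset.kubernetes.io/pod-name` label; the names and port are illustrative:

```yaml
# Illustrative per-replica LoadBalancer Service; names are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: example-ag-0
spec:
  type: LoadBalancer
  selector:
    # Label the StatefulSet controller stamps on each pod, which
    # lets one Service address exactly one replica.
    statefulset.kubernetes.io/pod-name: example-ag-0
  ports:
    - name: sql
      port: 1433        # default SQL Server port
      targetPort: 1433
```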

Redesigned Custom Resource and StatefulSet Adoption

  • The custom resource definition has been redesigned for greater flexibility and now leverages Kubernetes StatefulSets
  • By delegating pod creation, storage allocation, and rolling upgrades to Kubernetes, DxOperator v2 simplifies internal logic while benefiting from native Kubernetes reliability and lifecycle management
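
Conceptually, delegating to StatefulSets means the operator reconciles its custom resource into something like the standard `apps/v1` object below, and Kubernetes then handles pod identity, per-replica storage, and one-pod-at-a-time upgrades. The names, image, and sizes are illustrative:

```yaml
# Sketch of the kind of StatefulSet an operator might generate;
# names, images, and sizes are illustrative placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-ag
spec:
  serviceName: example-ag
  replicas: 3
  selector:
    matchLabels:
      app: example-ag
  template:
    metadata:
      labels:
        app: example-ag
    spec:
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2022-latest
  updateStrategy:
    type: RollingUpdate          # Kubernetes replaces pods one at a time
  volumeClaimTemplates:          # Kubernetes allocates storage per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```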

DH2i’s DxEnterprise v26.0 and DxOperator v2 are now generally available.

Guest Post: How Meta and TikTok Turn User Rage into Revenue, While Pretending to Keep You Safe

Posted in Commentary with tags on March 16, 2026 by itnerd

By Jurgita Lapienytė, Editor-in-Chief at Cybernews 

A new BBC report revealed what we suspected all along – big tech platforms turn a blind eye to harmful content for the sake of profit. Platforms allow so-called borderline content – misogynistic, sexist, racist, conspiracy-driven – that is harmful yet legal.

According to the report, based on accounts from a dozen whistleblowers and insiders, Meta engineers were instructed to allow more borderline content to compete with TikTok. Meanwhile, TikTok is said to have prioritized several user complaints involving politicians to “avoid threats of regulation or bans.”

Unsurprisingly, big tech platforms denied any wrongdoing, insisting that they do not amplify harmful content.

Algorithms are allegedly designed to better understand user interests and needs, and cater to them accordingly. Unfortunately, most of what a user “wants” turns out to be conspiracy theories, AI slop, deepfakes, and pro-Nazi content. Or at least the algorithm seems to think so – because most of this is so-called ragebait content, designed to provoke a strong response from the user.

And since users engage with it, the algorithm is tricked into “thinking” this is what people want. The humans behind the algorithm surely understand this is not the case, but clicks translate to cash. So why would Big Tech saw off the branch it’s sitting on?

In 2024, Meta earned $16 billion, or 10% of its annual revenue, from scam ads and banned goods. The information comes not from a third-party analytics firm but from Meta’s own documents, proving that the tech giant is well aware of how much harm it can spread – and how much money it can make along the way.

While platforms and lawmakers take their sweet time debating what borderline content is, people are left to deal with the psychological fallout of social media addiction. From the inability to tell right from wrong or fake from real, loss of concentration, sleep, and even sense of self, to radicalization, depression, and self-harm – the consequences of companies toying with their algorithms to meet business goals are dire for humanity.

It’s not only our mental health that’s at stake. Adversaries, well aware of algorithmic logic, abuse it to spread misinformation and straightforward lies, sowing division to influence elections all over the world – making us wonder just how much harm performative compliance has already done to democracy.

ABOUT THE AUTHOR 

Jurgita Lapienytė is the Editor-in-Chief at Cybernews, where she leads a team of journalists and security experts dedicated to uncovering cyber threats through research, testing, and data-driven reporting. With a career spanning over 15 years, she has reported on major global events, including the 2008 financial crisis and the 2015 Paris terror attacks, and has driven transparency through investigative journalism. A passionate advocate for cybersecurity awareness and women in tech, Jurgita has interviewed leading cybersecurity figures and amplifies underrepresented voices in the industry. Recognized as the Cybersecurity Journalist of the Year and featured in Top Cyber News Magazine’s 40 Under 40 in Cybersecurity, she is a thought leader shaping the conversation around cybersecurity. Jurgita has been quoted internationally.

ABOUT CYBERNEWS

Cybernews is a globally recognized independent media outlet where journalists and security experts debunk cyber threats through research, testing, and data. Founded in 2019 in response to rising concerns about online security, the site covers breaking news, conducts original investigations, and offers unique perspectives on the evolving digital security landscape. Through white-hat investigative techniques, the Cybernews research team identifies and safely discloses cybersecurity threats and vulnerabilities, while the editorial team provides cybersecurity-related news, analysis, and opinions by industry insiders with complete independence. For more, visit www.cybernews.com.