FBI Says Hackers Stole $262M by Impersonating Bank Staff

Posted in Commentary with tags on November 25, 2025 by itnerd

The FBI has warned that cyber criminals are impersonating staff at financial institutions to steal money or information in Account Takeover (ATO) fraud schemes. Since January 2025, the FBI’s Internet Crime Complaint Center (IC3) has received more than 5,100 complaints reporting ATO fraud, with losses exceeding $262 million.

Details can be found here: https://www.ic3.gov/PSA/2025/PSA251125

Jim Routh, Chief Trust Officer at Saviynt, commented:

“The large majority of ATO accounts referenced in the FBI announcement occur through compromised credentials used by threat actors intimately familiar with the internal processes and workflows for money movement within financial institutions. The most effective controls to prevent these attacks are manual (phone calls for verification) and SMS messages for approval. The root cause continues to be the accepted use of credentials for cloud accounts despite having passwordless options available.”

If you want to protect yourself from a scam like this, the following link will help: Learn about the phony bank investigator scam

2026 Technology Predictions from Starburst

Posted in Commentary with tags on November 25, 2025 by itnerd

Here are some 2026 industry predictions from Justin Borgman, CEO and Co-founder of Starburst.

The Rise of Human-and-Machine-Centered Data Ecosystems – “We’re moving toward a world where data platforms won’t primarily serve people anymore; they’ll serve machines. The new consumers of data are AI agents, which will increasingly drive decisions, generate insights, and automate processes at speeds humans can’t match. These AI agents will require direct, governed, real-time access to all enterprise data to reason, generate, and act effectively. As AI agents become the primary consumers, enterprises must decide whether their data governance models empower or constrain them. This shift fundamentally changes everything about how we build and operate data infrastructure, from architecture and pipelines to governance and security, demanding a new approach that prioritizes machine-first accessibility without sacrificing trust or compliance.”

Hybrid AI Becomes the New Default – “The ‘cloud-everything’ era is coming to an end. Data gravity, sovereignty laws, and inference cost control are drivers for on-premises and model-to-data architectures. Enterprises are realizing that critical AI workloads need to remain close to their data, whether on-premises or in hybrid environments, to meet stringent requirements for performance, compliance, and data sovereignty. As a result, DevOps and data teams will increasingly build intelligent, governed ‘AI factories’ inside the enterprise, integrating AI pipelines directly with existing systems rather than relying solely on public cloud services. This approach ensures organizations can scale AI responsibly while maintaining control over sensitive information and operational efficiency.”

The Real Battle Moves Above the Data Format – “The last decade was about standardizing how we store data; the next is about standardizing how we trust it. With open table formats like Iceberg now widely adopted as the standard, the next competitive frontier isn’t the format itself. It’s the management of metadata, governance, and secure access. AI explainability depends on how well metadata is managed. Enterprise success will hinge on how effectively DevOps and data teams curate data catalogs, enforce policies, and provide federated access across diverse environments. Without unified metadata and policy, enterprises risk an AI compliance crisis. It’s no longer just about where the data lives; it’s about how intelligently it can be accessed, trusted, and leveraged to drive actionable outcomes.”
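
To make the metadata-and-policy point concrete, here is a minimal sketch of column-level access enforcement driven by catalog metadata. This is purely illustrative — the catalog layout, the role names, and `allowed_columns` are invented for this example and are not Starburst's (or any vendor's) API:

```python
# Hypothetical sketch: enforcing a column-level access policy from catalog
# metadata. CATALOG, POLICIES, and allowed_columns are illustrative names only.

CATALOG = {
    "sales.orders": {
        "columns": {
            "order_id": {"classification": "public"},
            "customer_email": {"classification": "pii"},
            "total": {"classification": "internal"},
        }
    }
}

# Policy: which data classifications each role may read.
POLICIES = {
    "analyst": {"public", "internal"},
    "data_steward": {"public", "internal", "pii"},
}

def allowed_columns(role: str, table: str) -> list[str]:
    """Return the columns of `table` that `role` may read, per catalog metadata."""
    readable = POLICIES.get(role, set())
    cols = CATALOG[table]["columns"]
    return [name for name, meta in cols.items()
            if meta["classification"] in readable]
```

The point of the sketch: once classifications live in unified metadata, the same policy check can be applied across every engine that federates access to the table, rather than being re-implemented per system.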

DevOps for Machines, Not Just Humans – “DevOps is evolving beyond its traditional focus on deploying applications. DevOps for machines means governing the real-time interaction between AI agents and enterprise data, with the same rigor once reserved for production apps. Modern teams will now treat data and AI pipelines as mission-critical workloads, ensuring that AI agents have real-time, governed access to enterprise data while maintaining reliability, security, and observability at scale. DevOps for machines is about managing the data-to-action lifecycle, not model training pipelines. Humans remain responsible for defining access, policy, and safety nets. For example, tomorrow’s DevOps teams will monitor not only application uptime, but also AI decision health to ensure agents operate within defined parameters. This evolution requires a new mindset: one where DevOps teams are responsible for orchestrating an ecosystem in which machines, not just humans, can operate safely, efficiently, and autonomously.”
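
The "AI decision health" idea above can be sketched as a guardrail check that runs before an agent's actions execute. Everything here is hypothetical — `Decision`, `GUARDRAILS`, and `decision_health` are illustrative names, not part of any real monitoring product:

```python
# Hypothetical sketch of an "AI decision health" check: verify that an agent's
# proposed actions stay within operator-defined parameters, escalating the rest.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str        # e.g. "scale_up", "restart_service"
    confidence: float  # model-reported confidence, 0..1
    blast_radius: int  # number of systems the action would touch

# Humans define the access, policy, and safety nets the agent must stay inside.
GUARDRAILS = {
    "allowed_actions": {"scale_up", "scale_down", "restart_service"},
    "min_confidence": 0.7,
    "max_blast_radius": 5,
}

def decision_health(decisions: list[Decision]) -> dict:
    """Split decisions into those safe to execute and those needing a human."""
    ok, escalate = [], []
    for d in decisions:
        within = (d.action in GUARDRAILS["allowed_actions"]
                  and d.confidence >= GUARDRAILS["min_confidence"]
                  and d.blast_radius <= GUARDRAILS["max_blast_radius"])
        (ok if within else escalate).append(d)
    return {"ok": ok, "escalate": escalate}
```

In this framing, the DevOps team's dashboard tracks the escalation rate the same way it tracks application uptime: a rising rate means agents are drifting outside their defined parameters.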

New Keepit research: Data sovereignty is becoming a frontline security issue

Posted in Commentary with tags on November 25, 2025 by itnerd

Keepit has released a new report — Data Sovereignty: Take Control of Your Data — along with expert commentary from CISO Kim Larsen that breaks down why sovereignty has moved beyond compliance and is now a core security and resilience concern. The report notes that many organizations believe sovereignty is a legal or CIO priority, but the research shows it is increasingly a security architecture challenge.

Key Themes: The research highlights several issues now directly impacting SOC, IR, and cyber-resilience teams:

  • Hyperscaler monoculture = single points of failure. 97% of cloud infrastructure sits with a handful of providers, creating systemic risk when outages or misconfigurations cascade across SaaS, identity, and backup platforms.
  • CLOUD Act + Schrems II = conflicting access rules. Security teams must defend information that may be legally accessible to foreign jurisdictions — even when stored in-region.
  • Hybrid warfare is targeting cloud identity and control planes.
    The report details growing APT activity against cloud identity providers and the risk of dependent ecosystems failing simultaneously.
  • Most SaaS backups rely on the same hyperscalers as production.
    Making “air-gapped” recovery impossible in many breach or outage scenarios.
  • Regulators are raising the bar on resilience.
    Under DORA, NIS2, BaFin, and CNIL/ANSSI guidance, CISOs must demonstrate independence, portability, and provable control — not just encryption and regional storage.

Why this is timely for security practitioners

  • Attackers are exploiting cross-cloud dependencies.
  • Resilience mandates are forcing redesigns of backup + identity strategy.
  • EU regulators are signaling that US-controlled clouds may not meet sovereignty requirements for healthcare, public sector, and critical infrastructure.
  • Organizations are reassessing “cloud-by-default” models and returning to hybrid or sovereign-cloud setups for high-value assets.

Report Download:
https://www.keepit.com/data-sovereignty-in-the-cloud/

Expert Commentary:
https://www.keepit.com/blog/data-sovereignty-report

TestDevLab Joins Xoriant to Expand Global Reach and Capabilities

Posted in Commentary with tags on November 25, 2025 by itnerd

TestDevLab, a Latvian software quality engineering company, today announced it is joining Xoriant, a ChrysCapital-owned global engineering and technology services company. The partnership combines TestDevLab’s 14 years of testing expertise with Xoriant’s broader engineering capabilities and international presence across 28 offices in the USA, Europe, and Asia. 

TestDevLab has built deep expertise in software quality engineering, offering services including test automation, performance testing, accessibility testing, audio and video quality analysis, competitive intelligence, and AI testing solutions. TestDevLab employs 500 professionals across Europe, primarily in the Baltics and North Macedonia, testing products used by more than five billion people daily.

TestDevLab will continue operating, retaining its name, brand, and leadership team. Co-founders Andrejs Frišfelds and Ervins Grīnfelds will remain as co-CEOs, and engineers will continue working with existing clients and project teams, with teams remaining intact. The company anticipates increased hiring as it gains access to Xoriant’s sales network and expanded market opportunities.

This acquisition follows Xoriant’s earlier acquisitions of FEXLE Services (September 2024), MapleLabs (February 2024), and Thoucentric (August 2023).

SUSE launches AI-Assisted Infrastructure at Scale

Posted in Commentary with tags on November 25, 2025 by itnerd

Today, SUSE launched AI-assisted infrastructure at scale. Benefits of SUSE’s AI-assisted infrastructure offering include:

  • Financial/Strategic:
    Cost Avoidance & Competitive Edge: The AI-assisted infrastructure drives down operational costs by providing intelligent, correlated visibility, minimizing knowledge silos, and automating context-aware maintenance. This allows high-value IT talent to accelerate strategic engineering, innovation, and digital transformation initiatives.
  • Risk/Security:
    Proactive Governance and Resilience: The environment uses correlated insight to proactively eliminate configuration drift and compliance gaps rather than detecting them reactively. This ensures continuous, auditable security postures, dramatically reducing critical incident frequency and minimizing the financial and reputational cost of downtime.
  • Operational Agility:
    Simplified Control at Scale: Complexity across hybrid environments is managed through simple, secure, natural language commands, allowing executive oversight and faster decision-making. Infrastructure becomes context-aware and automatically aligns with business policy, ensuring maximum availability and optimization for mission-critical applications (like SAP).
  • Practitioner:
    System administrators move from spending their time on manual, repetitive log analysis, patching, and compliance checks to focusing on strategic engineering and innovation. The context-aware infrastructure leverages correlated intelligence to instantly diagnose root causes, converting the environment into a self-healing, self-optimizing system where complexity is managed through simple, secure, natural language commands. Downtime is significantly reduced, and configuration drift is eliminated proactively.

For more details, here is a blog post for your reading pleasure. 

DHL partners with HappyRobot for AI efficiency

Posted in Commentary with tags on November 25, 2025 by itnerd

DHL Group is accelerating its enterprise-wide AI strategy through a new partnership between its contract logistics division, DHL Supply Chain, and the AI startup HappyRobot. The collaboration marks a significant step in deploying agentic AI to streamline operational communication and enhance both customer experience and employee engagement. 

DHL Supply Chain has already successfully utilized HappyRobot’s AI agents across several regions and use cases, including appointment scheduling, driver follow-up calls, and high-priority warehouse coordination. These agents autonomously handle phone and email interactions, enabling faster, more consistent, and scalable communication. 

Strategic AI deployment across DHL Supply Chain 

Current deployments already in use across DHL Supply Chain target hundreds of thousands of emails and millions of voice minutes annually. AI agents are supporting key workflows such as appointment scheduling, transport status calls, and high-priority warehouse coordination – helping teams manage operational communication at scale and with greater consistency. 

AI agents as a new operating model 

These implementations have already shown measurable impact – significantly reducing manual effort, increasing responsiveness, and enabling teams to focus on more strategic tasks and exception handling. By automating high-volume communication workflows, AI agents like those from HappyRobot are helping DHL deliver faster, more customer-centric services, while improving the work experience for employees and contributing to long-term workforce retention. 

HappyRobot’s platform enables fully autonomous AI agents to interact via phone, email, and messaging, while integrating seamlessly with DHL’s internal systems. DHL Group continues to expand its AI strategy across all divisions, and beyond the current pilots, further use cases are being tested.

CData Appoints Ken Yagen as Chief Product Officer

Posted in Commentary with tags on November 25, 2025 by itnerd

CData Software today announced the appointment of Ken Yagen as Chief Product Officer (CPO). Yagen will lead product strategy and engineering as CData scales its connectivity platform for enterprises deploying agentic AI internally and for software providers building AI into their products.

The appointment comes as CData experiences rapid growth in the AI connectivity space. With thousands of users already connecting enterprise data sources to AI systems through CData’s MCP Servers, and the recent launch of Connect AI—a managed Model Context Protocol (MCP) platform—Yagen’s leadership will accelerate the company’s product roadmap.

Advancing AI-Native Connectivity

Yagen joins CData as the company shapes the emerging category of AI-native connectivity. Connect AI provides the enterprise-scale infrastructure that AI systems and autonomous agents require: live, governed access to business systems combined with embedded system-level semantic intelligence that teaches AI the structure, relationships, and business logic native to each platform—transforming raw connectivity into operational fluency.

Yagen is an accomplished product management and technology leader with more than 25 years of experience driving innovation in enterprise software. Most recently at Warburg Pincus, he led AI and LLM initiatives across the firm’s portfolio companies, helping enterprises integrate emerging AI technologies into their business strategies. His career includes pivotal roles at MuleSoft, where he shaped product strategy for APIs and integration platforms that became foundational to modern enterprise architecture, as well as leadership positions at Box and Symphony, where he drove collaboration and enterprise SaaS innovation.

Dual Market Strategy: Enterprises and ISVs

Under Yagen’s leadership, CData will accelerate its dual go-to-market strategy, enabling both direct enterprise adoption and embedded use by independent software vendors (ISVs). Organizations are adopting CData’s managed MCP platform to standardize connectivity across departments and initiatives, while software providers are embedding CData’s connectivity into their products to deliver enterprise-ready AI capabilities without building integrations themselves.

US big banks hit by real estate fin-tech breach

Posted in Commentary with tags on November 24, 2025 by itnerd

On Saturday, real estate lender tech provider SitusAMC confirmed a November 12 cyberattack impacting sensitive personal information belonging to clients of hundreds of banks, including some of the nation’s biggest, such as JPMorgan Chase.

The data exposed was related to residential mortgages, the company said. JPMorgan Chase, Citi, and Morgan Stanley are among those that have been notified that their client data may have been taken. 

   “The incident is now contained and our services are fully operational. No encrypting malware was involved,” the statement reads.

   “We remain focused on analyzing any potentially affected data,” SitusAMC’s chief executive, Michael Franco said.

SitusAMC manages extensive sensitive data collected through mortgage applications, including Social Security numbers. The fintech also provides regulatory compliance services to ensure lenders’ loans meet state and federal requirements. As a result, a breach could expose highly confidential information about lenders and their real estate portfolios.

   “We remain committed to identifying those responsible and safeguarding the security of our critical infrastructure,” FBI Director Kash Patel said in a statement.

Michael Bell, Founder & CEO, Suzu Labs had this to say:

   “SitusAMC proves that Wall Street’s hundreds of millions spent on bank cybersecurity is irrelevant when a third-party vendor holding SSNs, mortgage applications, and regulatory compliance data gets compromised.

   “The attackers bypassed JPMorgan, Citi, and Morgan Stanley’s defenses entirely by hitting the shared services provider with access to all their customer data.

   “Pentesting offers a lens inside these third-party environments and the lack of controls protecting customer data is shocking. Organizations need to start auditing vendor security postures with the same rigor they apply to their own perimeters.”

Damon Small, Board of Directors, Xcape, Inc. follows with this:

   “The recent cyberattack on SitusAMC underscores the significant and widespread third-party risk that major US financial institutions like JPMorgan Chase, Citi, and Morgan Stanley are currently exposed to.

   “Despite claims of containment, the breach resulted in the confirmed exfiltration of highly sensitive residential mortgage data, including Social Security numbers and private real estate holdings, all valuable targets for identity theft.

   “This incident confirms that the security of financial service providers is only as strong as the weakest link within their specialized fintech supply chain. Under regulations like GLBA, banks are ultimately accountable for protecting client data across their entire vendor network, necessitating the immediate implementation of Zero Trust principles for all third-party access.

   “Banks should treat this breach as if client data has been exposed by immediately activating dark-web monitoring, placing fraud alerts, and closely monitoring for unauthorized changes of address and wire instructions within their mortgage and servicing systems.

   “Lenders also need to immediately rotate tokens and credentials for SitusAMC integrations, implement stricter least-privilege access controls, and enforce breach-notification service-level agreements and data minimization practices through contractual obligations.

   “Regulators will be expecting concrete evidence of third-party risk management, including vendor audits, immutable backups, and well-tested incident response playbooks that cover the entire lifecycle of loan origination, servicing, and secondary market data flows.

   “Wall Street learned the hard lesson again: In the modern financial supply chain, the security of a bank’s information assets is only as effective as the least-protected mortgage application.”

This latest supply chain attack is going to be bad given the type of data that is now out there. I feel sorry for anyone who is potentially affected as this will not end well for them at all.

101 Black Friday Apps Analyzed: What data privacy costs do Black Friday bargains come with?

Posted in Commentary with tags on November 24, 2025 by itnerd

This Black Friday, around half of us will reach for our smartphones to try and bag the latest deal, with 27 percent of people preferring to do this via a retailer’s app. 

But is there a privacy cost in trying to get the best deal via an app?

Today, Comparitech researchers have published a study looking at just this. By analyzing 101 of the most popular Black Friday apps, they have found out the exact data privacy cost these convenient bargains come with. 

Key findings include: 

  • The average app requests access to nearly 29 permissions in total, 8 of which are classed as high-level/”dangerous”
  • The most common dangerous permissions are ones that request access to the device’s camera, access location data (precise geolocation data or approximate location based on cell tower or Wi-Fi data), and read and write to external storage (data outside of the app, e.g. stored on the device)
  • 23% of apps (23 apps out of 101) potentially violate Google’s privacy policy standards
  • Common omissions from privacy policies included the data retention period (not provided by 8 apps) and a clear policy on how users can delete their data (omitted, restricted, or unclearly defined by 11 apps)
  • The average app comes with 7 trackers, with one app (Vinted) coming with 17
  • Collectively, these apps have been downloaded more than 7 billion times
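
To give a feel for the kind of analysis behind these numbers, here is a small sketch that counts how many of an app's requested permissions fall into Android's "dangerous" class. The `DANGEROUS` set below is only an illustrative subset of Android's real list, and the sample app's manifest is invented — neither is taken from the Comparitech study:

```python
# Sketch of the study's style of permission audit: count how many of an app's
# requested permissions are classed as "dangerous" by Android.
# DANGEROUS is an illustrative subset; the sample manifest is invented.

DANGEROUS = {
    "android.permission.CAMERA",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_EXTERNAL_STORAGE",
    "android.permission.WRITE_EXTERNAL_STORAGE",
}

def audit(requested: list[str]) -> dict:
    """Return the total permission count and the sorted dangerous subset."""
    dangerous = sorted(p for p in requested if p in DANGEROUS)
    return {"total": len(requested), "dangerous": dangerous}

sample_app = [
    "android.permission.INTERNET",             # normal-level
    "android.permission.CAMERA",               # dangerous
    "android.permission.ACCESS_FINE_LOCATION", # dangerous
]
report = audit(sample_app)
```

Scaled up across 101 real manifests, this is how you arrive at averages like "29 permissions requested, 8 of them dangerous."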

For full details, this research can be read here: https://www.comparitech.com/news/data-privacy-black-friday-apps/

2026 Predictions from SIOS Technology

Posted in Commentary with tags on November 24, 2025 by itnerd

Today’s 2026 predictions come from Cassius Rhue, VP of Customer Experience, SIOS Technology.

1) Cloud Computing

Hybrid and Multicloud Strategies Gain Momentum – “Hybrid and multicloud solutions have become a more proven option to help organizations balance performance, cost, and resilience while avoiding vendor lock-in. More enterprises will continue to consider and adopt hybrid and multicloud architectures in 2026. As a result, HA solutions that can seamlessly operate across diverse infrastructures will become indispensable to modern IT strategies.”

2) Cybersecurity

Cybersecurity Will Redefine the Role of High Availability – “The rising wave of cybersecurity threats is transforming how enterprises view HA clustering. In 2026, HA will not only be about achieving 99.99% uptime—it will also serve as a vital tool for maintaining security resilience. More organizations will use HA clusters to enable rapid, low-risk patching and updates, ensuring systems remain both highly available and protected against emerging threats.”
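
The patching pattern this prediction describes — using cluster failover to update nodes without taking the service down — can be illustrated with a toy simulation. No real cluster manager is involved here; the node names and steps are made up to show the shape of the technique:

```python
# Toy simulation of rolling patching on an HA cluster: drain one node at a
# time, patch it, rejoin it, and verify the service never loses all nodes.
# Illustrative only — not any vendor's tooling.

def rolling_patch(nodes: list[str]) -> list[str]:
    """Patch every node one at a time; return an event log."""
    active = set(nodes)  # nodes currently serving traffic
    log = []
    for node in nodes:
        active.discard(node)  # drain: failover moves workloads to peers
        assert active, "outage: no active nodes left"  # the HA invariant
        log.append(f"patched {node} with {len(active)} node(s) still serving")
        active.add(node)      # patched node rejoins the cluster
    return log
```

Note that the invariant check fails for a single-node "cluster" — which is exactly the point of the prediction: rapid, low-risk patching depends on having redundancy to fail over to.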

3) Data Management

High Availability Focuses on Ease of Use to Meet Growing IT Admin Needs – “As IT administrators and generalists are given increasing responsibility for managing complex high availability (HA) application environments, the demand for intuitive, automated HA solutions will surge. In 2026, IT teams will favor platforms that do not require specialized HA skills, minimize manual configuration and simplify cluster management. Vendors that prioritize ease of use, automation, and guided workflows will stand out as the market evolves toward accessibility for non-specialist admins.”

4) DevOps

DevOps Teams Will Increasingly Integrate High Availability Clustering into Application Planning to Reduce Deployment Risk – “Clustering tools with robust APIs, automation hooks, and real-time observability will allow rapid updates without interrupting production services. DevOps engineers will use clusters to test patches against active workloads, reducing the risk and degree of change. HA becomes a built-in feature of the delivery process—not an afterthought.”

5) AI / Machine Learning

Continuous Availability: The New Foundation for Trusted AI – “AI and ML workloads will run more frequently on distributed clusters and GPU-intensive systems, where downtime creates costly disruptions. In 2026, IT admins will demand high availability solutions that simplify complex AI stacks and expose full visibility into data, storage, and node health. Continuous availability becomes a prerequisite for AI reliability and trust.”

6) Application Performance Management (APM)

Observability Becomes Essential for Complex IT Environments – “As IT infrastructures expand across on-premises, cloud, hybrid, and multi-cloud environments, visibility into application performance, health, and the interdependencies of elements across the IT stack will become mission-critical. In 2026, observability will emerge as a key differentiator for HA solutions, allowing IT teams to identify and resolve issues before they impact uptime. The most successful HA platforms will provide deep insights across the full stack—from hardware to application layer.”

7) Virtualization

Consolidation of Virtual Application Environments Drives Up Complexity and Need for Easy-to-Manage HA – “As enterprises consolidate onto virtualized platforms, IT admins will manage more mission-critical workloads per host. HA clustering will provide automated and intelligent failover across hypervisors without requiring deep virtualization expertise. Growing cybersecurity pressures will drive adoption of cluster-based patch automation to protect large pools of VMs simultaneously. Virtualized environments won’t just run clusters—they will depend on them.”

8) Disaster Recovery

Growing Need for Automated Disaster Recovery – “By 2026, high availability and disaster recovery IT admins will expect clustering tools to support disaster recovery locations, automate failover, verify replication integrity, and give full visibility into the entire application stack—including networking, storage, and cloud resources. Frequent cyber incidents will force DR teams to apply patches and recover systems rapidly, with clusters minimizing downtime during failover. Disaster recovery becomes proactive, not reactive.”

“By 2026, IT admins will require clustering tools for high availability and disaster recovery (HA/DR) to support greater visibility into and control of failover operations and environments. The rapidly evolving landscape of hybrid cloud and multicloud environments will demand sophisticated solutions capable of providing full visibility into the entire application stack—including networking, storage, and cloud resources— while simultaneously helping advance organizational cybersecurity processes and posture.”