Whitepaper: AI Chatbots and Youth Safety

Posted in Commentary on February 6, 2026 by itnerd

AI can shift how developing minds understand technology and where they turn for support, and these risks grow when chatbots are designed to feel “personal”. MagicSchool, the educational AI platform, has just released a white paper explaining how these risks develop and what schools should know about AI in the classroom.

You can find out more details here: https://www.magicschool.ai/blog-posts/student-safety-companionship

DataBee Launches DataBee RiskFlow

Posted in Commentary with tags on February 6, 2026 by itnerd

DataBee today announced the launch of DataBee RiskFlow™, an innovative agentic AI capability that lets security and IT teams query enterprise security and compliance data in simple conversational language. With DataBee RiskFlow, teams can ask questions like:

  • “Which assets have critical vulnerabilities that haven’t been patched in the last 30 days?”
  • “Show me users with risky login patterns across cloud and on‑prem environments.”
  • “What evidence do I need to demonstrate MFA compliance for my audit?”

DataBee RiskFlow interprets the question, identifies the relevant data, and returns a clear, defensible answer – complete with the underlying logic and data lineage. The result: faster investigations, simplified audits, and more consistent control validation.

Ask. Understand. Act.

DataBee RiskFlow transforms how organizations engage with their data. Any user can ask a question and receive:

  • A clear, concise answer
  • Full data lineage showing exactly where the answer came from
  • Traceable logic that demonstrates how conclusions were drawn
  • Defensible, audit-ready evidence
  • Recommended next actions to validate controls or address deviations

Because it is built directly into the DataBee security data fabric, it requires zero setup. The new capability is already in use across nearly all DataBee customers following its initial rollout, supporting security operations, IT teams, compliance groups, and business leaders who need fast, trustworthy insights.

2025: A Breakout Year for DataBee

The launch of DataBee RiskFlow caps a year of accelerated innovation and market momentum for DataBee. In 2025, the company delivered major advancements that further strengthened its position as a leader in unified security and compliance data.

As DataBee continues to expand and enhance its security offerings, organizations across healthcare, financial services, manufacturing, and media are leveraging its unified data foundation to validate controls, uncover previously unknown risks, and drive better security and compliance outcomes.

Understanding Cyber Risk in the Insurance Industry

Posted in Commentary with tags on February 6, 2026 by itnerd

Cyber risk is one of the most significant threats facing financial services, with insurers among the most frequently targeted organizations. Over the past year, there has been a notable increase in attacks on the insurance industry, with several major insurers reporting significant cybersecurity incidents, including Allianz Life Insurance, Aflac, Philadelphia Indemnity Insurance, and Erie Insurance.

In response, Specops Software has published a look at cyber risk in the insurance industry. You can read it here: https://specopssoft.com/blog/cyber-risk-insurance-industry/

Lessons From 2025: Zero-Day Exploitation Shaping 2026 

Posted in Commentary with tags on February 6, 2026 by itnerd

Outpost24 researchers have published an analysis of the major zero-day exploitations of 2025. Zero-day exploits were among the defining cyber threats of last year, with high-profile flaws such as React2Shell and CitrixBleed 2, along with zero-days in major platforms like Oracle EBS. The analysis is insightful for anyone who needs to defend against zero-days.

You can read the analysis here: https://outpost24.com/blog/top-zero-day-exploits-2025/

TELUS achieves its 100% renewable and low-emitting electricity target

Posted in Commentary with tags on February 6, 2026 by itnerd

TELUS Corporation is the first Canadian telecom to achieve its target of sourcing 100% of the electricity for its global operations from renewable or low-emitting sources, as of December 31, 2025. Building on this milestone, TELUS unveiled its new Climate Transition Framework, a comprehensive roadmap to reach net-zero greenhouse gas (GHG) emissions by 2040 while helping to enable Canada’s own transition to a low-carbon economy.

In 2025, TELUS secured Science Based Targets initiative (SBTi) validation for comprehensive climate targets (from a 2019 baseline) aligned with limiting global warming to 1.5 degrees Celsius, including:

  • Net-zero across its value chain by 2040, covering direct emissions (Scope 1), indirect emissions from electricity consumption (Scope 2), and indirect emissions across TELUS’ value chain (Scope 3)
  • 46% absolute reduction in operational emissions (Scopes 1 and 2) by 2030
  • 85% absolute reduction in Scope 1 and 2 emissions by 2033
  • 46% absolute reduction in Scope 3 emissions from business travel and employee commuting by 2030
  • 75% reduction per million dollars of revenue in Scope 3 emissions from purchased goods and services, capital goods, and use of sold products by 2030
  • By 2028, 65% of TELUS’ suppliers by spend will have also set their own SBTi-approved targets

As a continuation of TELUS’ 25-year focus on sustainability, the Climate Transition Framework outlines the next phase in its commitment to protect the planet for future generations, addressing emissions reduction and climate resilience through five interconnected strategic pillars:

  • Business operations: Decarbonizing network infrastructure and buildings through renewable electricity, energy-efficient TELUS PureFibre and 5G networks (which are up to 85% more efficient than traditional copper networks), fleet electrification, and climate adaptation programs
  • Supply chain: Engaging suppliers to set science-based targets and implementing ESG audits and due diligence to reduce value chain emissions
  • Low carbon products and services: Minimizing environmental impacts through responsible product design, energy efficiency standards, and participation in the Canadian Energy Efficiency Voluntary Agreement program (CEEVA)
  • Stakeholder engagement: Collaborating with suppliers, industry peers, government, and communities to drive transformational climate action
  • Enabling emissions reductions outside of our value chain and protecting nature: Enabling emissions reductions beyond TELUS’ value chain through remote work solutions, virtual healthcare, smart energy management, and precision agriculture. Investing in nature-based solutions including actively planting more than 25 million trees to date

Following today’s release of the framework, TELUS plans to unveil a comprehensive Climate Transition Plan later this year that will outline strategies for climate resilience and provide detailed pathways for achieving its net-zero ambition, with a particular focus on addressing Scope 3 emissions across its value chain.

To learn more about TELUS’ commitment to global sustainability, visit telus.com/sustainability.

AI Agents Now Building 80% Of Certain Key Enterprise Infrastructure – data & cyber experts comment 

Posted in Commentary with tags on February 6, 2026 by itnerd

Databricks has just published “The State of AI Agents,” a report summarizing its telemetry. It reveals that enterprise adoption of AI has spread well beyond copilots, isolated pilot projects, dashboards, and analysis functions, and that AI is now widely entrusted with core systems.

“The State of AI Agents” highlights several key findings:

  • Multi-agent systems are becoming the new enterprise operating model. Enterprises are transitioning from single chatbots to multiagent systems built on domain intelligence. Use of these systems grew by 327% in just four months.
  • AI agents are driving core database activities. 80% of databases are built by AI agents, and 97% of database testing and dev environments are now built by AI agents. This shift is driving the need for a new kind of database called Lakebase.
  • AI is now part of critical workflows across industries. Most GenAI use cases are focused on automating routine necessary tasks, with 40% related to customer experiences.
  • Model flexibility is the new AI strategy, with 78% of companies using two or more LLM model families.
  • AI evaluations and governance are the building blocks of production. Companies that use evaluation tools get nearly 6x more AI projects into production. Companies using AI governance put over 12x more AI projects into production. AI governance is a top investment priority, and grew 7x in nine months.

You can get the Databricks paper here: https://www.databricks.com/resources/ebook/state-of-ai-agents

Sunil Gottumukkala, CEO, Averlon:

   “When AI agents create databases at machine speed, ‘Secure by default’ becomes critical. Agents today optimize for the fastest path to completion, not safe configurations, so insecure defaults get replicated at scale. We saw this with row-level security gaps like the Moltbook incident. Teams need guardrails that catch risky configurations as they’re introduced and an operating model that prioritizes remediation when insecure defaults slip through.” 

Ryan McCurdy, VP, Liquibase:

   “When AI agents can create and modify database environments on demand, the database becomes a high frequency software event. The risk is uncontrolled change. Policy enforced in the workflow, automatic audit evidence, drift detection, and trusted rollback are essential to keep velocity without sacrificing control.

    “Moreover, agentic development will multiply database changes. If governance stays manual, you get drift, surprise outages, and you can’t explain what changed when it matters. Database Change Governance is how enterprises keep the data layer fast, trusted, and auditable as it goes agentic.

   “The answer isn’t more humans reviewing more changes. It’s policy enforced in the workflow, automatic evidence capture, and trustworthy rollback.”

John Carberry, Solution Sleuth, Xcape, Inc.:

   “The discovery that 80% of new enterprise databases are currently created by AI agents signifies a historic transition from human-centric administration to “vibe coding” on an industrial scale. Although this increase in autonomous infrastructure speeds up development, it also adds a significant “governance debt” by directly incorporating security logic into AI-generated code that is rarely submitted to human peer review.

   “The main risk is “excessive agency,” whereby these agents might unintentionally produce vulnerable endpoints, excessively lenient access rules, or unsafe schemas that get beyond conventional perimeter defenses. Moreover, these databases produce a vast, undetectable attack surface called Shadow Data, which is usually left out of centralized logging and auditing because they are routinely spun up in real-time “branches” for testing and development. In response, SOC teams must switch from post-deployment scanning to infrastructure-level enforcement, in which the security border is located outside of the code that is generated and checks each database operation against a policy that is hardcoded at runtime. The function of the DBA is changing from being a builder to a high-level auditor of autonomous systems as AI progresses beyond creating chatbots to designing the enterprise’s basic foundations.

    “The ‘human in the loop’ becomes a myth when 80% of your data infrastructure is built by AI.”

China Warns of OpenClaw Open-Source AI Agent Security Risks

Posted in Commentary with tags on February 5, 2026 by itnerd

China’s industry ministry has warned that the OpenClaw open-source AI agent could pose significant security risks when improperly configured and expose users to cyberattacks and data breaches.

More info can be found here: https://www.reuters.com/world/china/china-warns-security-risks-linked-openclaw-open-source-ai-agent-2026-02-05/

Ensar Seker, CISO at SOCRadar:

“This warning isn’t really about China versus open source, it’s about a familiar pattern we’ve seen repeatedly with fast-moving AI agent frameworks like OpenClaw. When agent platforms go viral faster than security practices mature, misconfiguration becomes the primary attack surface. The risk isn’t the agent itself; it’s exposing autonomous tooling to public networks without hardened identity, access control, and execution boundaries.

“What’s notable here is that the Chinese regulator is explicitly calling out configuration risk rather than banning the technology. That aligns with what defenders already know: agent frameworks amplify both productivity and blast radius. A single exposed endpoint or overly permissive plugin can turn an AI agent into an unintentional automation layer for attackers.

“This should be a wake-up call globally. AI agents need to be treated like internet-facing services, not experimental scripts. That means threat modeling, least-privilege identities, continuous monitoring, and clear separation between reasoning, action, and data access. Without that, “agentic” systems don’t just scale intelligence, they scale mistakes.”

Henrique Teixeira, SVP of Strategy at Saviynt:

“The Chinese Ministry of Industry and Information Technology warning is valid. The point most people miss, however, is that OpenClaw (aka Moltbot, Clawdbot), even when properly configured, still poses a lot of identity security risks. If I had to simplify how OpenClaw credentials work, it’s basically this: if you want your bot to do useful stuff, you need to provide it credentials (username and password, cryptographic keys, etc.) with high levels of permissions. For example: if you want to have OpenClaw streamline your Gmail inbox, you need to give it a full pass to your email account.

“How most people will handle that poses a huge risk of credential exposure. Best case, they will follow steps like this: https://setupopenclaw.com/blog/openclaw-gmail-integration. This is the best case, which is using an OAuth flow for consent, instead of simply hardcoding your email and password somewhere. But it still involves steps like generating JSON files and some light coding that not everyone may feel comfortable with. And in the end, this process is still flagged as “unsafe” by Google, as OpenClaw’s app has not been verified by them. That’s a warning that some people will ignore, but identity security-conscious people shouldn’t.

“Assuming that OpenClaw is “my app” and it’s accessing “my inbox” is all the security vetting necessary is the same as accepting that it’s ok for me to use a very weak password on my company laptop because I don’t have anything important on it. It glosses over the fact that most modern breaches, according to research, were initiated by abusing existing credentials from employees and contractors. Anyone is a valid target, and attackers can use that initial access to move laterally and escalate privileges to access more sensitive stuff.

“In the OpenClaw Gmail example, that OAuth token is not immune from being stolen or reused. The user just created one more spot where credentials are now exposed. And the bot itself could be poisoned with external prompts to share more details of the permissions it carries. In summary, the alarm is valid. But not for the reasons most people think it’s valid!”
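
To make the Gmail example above concrete, here’s a minimal sketch, in TypeScript, of what granting an agent mailbox access through an OAuth consent flow can look like. This is illustrative only: it assumes Google’s googleapis Node.js library, and the client ID, secret, and redirect URI are hypothetical placeholders rather than anything from OpenClaw’s actual setup.

    import { google } from "googleapis";

    // Hypothetical OAuth client values, for illustration only.
    const oauth2Client = new google.auth.OAuth2(
      "YOUR_CLIENT_ID",
      "YOUR_CLIENT_SECRET",
      "http://localhost:3000/oauth2callback"
    );

    // The scope below is the "full pass" described above: complete
    // read/write/delete access to the mailbox. A narrower scope such as
    // gmail.readonly would shrink the blast radius if the token leaked.
    const authUrl = oauth2Client.generateAuthUrl({
      access_type: "offline", // requests a long-lived refresh token
      scope: ["https://mail.google.com/"],
    });
    console.log("Authorize the agent here:", authUrl);

    // After consent, the refresh token typically lands in a JSON file on
    // the machine running the agent. Anyone who can read that file can
    // act as the user until the grant is revoked.

Note the asymmetry: the consent screen is a one-time decision, but the refresh token it produces is a long-lived credential sitting wherever the bot runs, which is exactly the extra exposure point Teixeira warns about.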

AI is the new hotness, as the kids say. But it has risks, and this is the latest of them. So this is a case of user beware that you should pay attention to.

Sharp Canada Introduces Next-gen EC Series dvLED

Posted in Commentary with tags on February 5, 2026 by itnerd

Sharp Electronics of Canada Ltd. today unveiled the Sharp EC Series dvLED, the latest expansion of its essential dvLED portfolio. The EC Series delivers exceptional visual performance, simplified installation and significantly improved energy efficiency, powered by advanced Chip-on-Board (COB) technology. Built on Sharp’s global leadership in display innovation, the EC Series answers growing demand for sustainable, cost-effective and future-ready large-format display solutions.

Redefining dvLED for Canadian Businesses and Institutions

The EC Series expands Sharp’s E Series family with a new generation of direct-view LED displays designed for retail, corporate, education and public-space environments. Engineered for reliability and performance, the EC Series supports impactful visual communication across a wide range of professional applications.

The Chip-on-Board (COB) Advantage: Smarter by Design

At the core of the EC Series is advanced Chip-on-Board (COB) construction, a manufacturing approach that bonds multiple LED chips directly onto the display substrate. This design delivers measurable benefits throughout the product lifecycle:

  • Superior energy efficiency: COB technology can cut power use by up to 60 per cent compared with traditional Surface-Mounted Device (SMD) LEDs, reducing energy costs and environmental impact while maintaining brightness.
  • Enhanced durability: Protective encapsulation gives the EC Series a durable, touch-friendly surface that resists dust and impact, ideal for high-traffic, interactive environments.
  • Outstanding image quality: Dense LED integration enables vivid colour reproduction, deep blacks and contrast ratios of up to 10,000:1, resulting in crisp, uniform visuals.

Designed for Efficiency from Installation to Operation

Sharp’s intuitive EC Series cabinet design can cut installation time by up to 50 per cent versus conventional dvLEDs. With faster setup, lower operating costs and simpler deployment, it’s an efficient, practical choice from day one.

Flexible Configuration for Diverse Spaces

The EC Series is available in fine pixel pitches of 0.9, 1.2, 1.5 and 1.8 mm, ensuring optimal resolution and viewing performance across applications ranging from collaborative corporate spaces to dynamic retail signage.

The EC Series is scheduled to begin shipping in April 2026.

Sharp’s dvLED Video Displays

Sharp’s full line of indoor and outdoor direct view LED (dvLED) video displays is designed to provide stunning clarity, effortless scalability and enterprise-grade reliability, making them a gamechanger for corporate spaces, digital signage and immersive experiences. With their cutting-edge image quality, plug-and-play simplicity, flexible and scalable design, energy efficiency and enhanced durability, Sharp dvLED displays set a new standard for how businesses, institutions and brands communicate visually.

Recently, Sharp Canada partnered with Diversified to help shape the future of learning and innovation at Western University. At the heart of Western’s Schmeichel Innovation and Entrepreneurship Centre, Sharp dvLED displays were strategically positioned in spaces for internal events, lectures and large gatherings, setting a new standard for visual excellence. With ultra slim profiles and neutral finishes, Sharp dvLED displays integrate seamlessly into the architecture, enhancing academic, administrative and cultural programming with professional grade clarity and reliability.

For more information, visit https://sharp.ca/en/products/business-displays-dvled/.

Vibe-coded Moltbook security flaw leaks AI agent credentials

Posted in Commentary with tags on February 5, 2026 by itnerd

A new social media platform called Moltbook, designed for AI agents to interact with each other and “hang out”, was found to have a misconfiguration that left its backend database publicly accessible, allowing full read and write access to all data, according to a recent blog post by Wiz Security.

Researchers discovered a Supabase API key exposed in client-side JavaScript, revealing thousands of private AI conversations, 30,000 user email addresses, and 1.5 million API keys.

   “Supabase is a popular open source Firebase alternative providing hosted PostgreSQL databases with REST APIs. It’s become especially popular with vibe-coded applications due to its ease of setup,” explained Wiz head of threat exposure, Gal Nagli.

   “When properly configured with Row Level Security (RLS), the public API key is safe to expose – it acts like a project identifier. However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
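
To illustrate Nagli’s point, here’s a minimal sketch, in TypeScript, of why the missing RLS policies mattered. It assumes the supabase-js client library; the project URL, key, and table names are hypothetical stand-ins, not Moltbook’s actual schema.

    import { createClient } from "@supabase/supabase-js";

    // A Supabase anon key is designed to ship in client-side JavaScript,
    // which is where Wiz found Moltbook's. URL and key are placeholders.
    const supabase = createClient(
      "https://example-project.supabase.co",
      "PUBLIC_ANON_KEY"
    );

    async function demo() {
      // With RLS enabled and sensible policies, these calls return only
      // the rows the policies allow. Without RLS, the anon key alone can
      // read any table...
      const { data: messages } = await supabase
        .from("private_messages") // hypothetical table name
        .select("*");

      // ...and write to it, which is what made this a read *and* write
      // exposure rather than just a leak.
      await supabase
        .from("posts")
        .update({ body: "attacker-controlled content" })
        .eq("id", 1);
    }

    void demo();

On the database side, the fix is to switch RLS on for each exposed table (in Postgres terms, ALTER TABLE ... ENABLE ROW LEVEL SECURITY, followed by explicit CREATE POLICY statements), after which the public key goes back to acting like a project identifier, as Nagli describes.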

In a message posted to X before Wiz published its blog post, Moltbook’s creator, Matt Schlicht, said he “didn’t write one line of code” for the site. Wiz reported the vulnerability to Schlicht, and the database was secured.

   “As we see over and over again with vibe coding, although it runs very fast, many times people forget the basics of security,” Wiz cofounder Ami Luttwak said.

Sunil Gottumukkala, CEO, Averlon:

   “What this highlights is the tradeoff vibe coding creates. It massively compresses idea-to-product time, but often skips essential security steps like threat modeling, secure defaults, and review gates that account for real user behavior and adversarial abuse.

   “When those controls are missing, a routine misconfiguration, such as shipping without proper authorization or RLS policies, can quickly turn into an instant, internet-scale incident. Some vibe-coding platforms are starting to add guardrails, but we’re still early. As long as speed continues to outpace security analysis and remediation, this will be a bumpy road.”

Lydia Zhang, President & Co-Founder, Ridge Security Technology Inc. gave me this comment:

   “This leads to another mandatory step: testing. Zero-trust principles should also be applied to Vibe coding. Vibe-coded solutions can miss basic security practices, and configuration or misconfiguration issues are often outside the scope of the code itself. I’m glad Wiz Security caught this before the damage spread further.”

Michael Bell, Founder & CEO, Suzu Labs added this comment:

   “The Moltbook incident shows what happens when people shipping production applications have no security training and are relying entirely on AI-generated code. The creator said publicly that he didn’t write a single line of code. Current AI coding tools don’t reason about security on the developer’s behalf. They generate functional code, not secure code.

   “The specific failure here was a single Supabase configuration setting. Row Level Security was disabled, which meant the API key that’s supposed to be safe to expose became a skeleton key to the entire database. That’s not a sophisticated vulnerability. It’s a checkbox that never got checked, and nobody reviewed the code to notice. When 10% of apps built on vibe coding platforms (CursorGuard) have the same misconfiguration, that’s not a user error problem. That’s a systemic failure in how these tools are designed.

   “The write access vulnerability should concern anyone building AI agent infrastructure. Moltbook wasn’t just leaking data. Anyone with the exposed API key could modify posts that AI agents were reading and responding to. That’s prompt injection at ecosystem scale. You could manipulate the information environment that shapes how thousands of AI agents behave.

   “Users shared OpenAI API keys in private messages assuming those messages were private. One platform’s misconfiguration turned into credential exposure for unrelated services. As AI ecosystems become more interconnected, these cascading failures become the norm.

   “The 88:1 agent-to-human ratio should make everyone skeptical of AI adoption metrics going forward. Moltbook claimed 1.5 million agents. The reality was 17,000 humans running bot armies. No rate limiting. No verification. The platform couldn’t distinguish between an actual AI agent and a human with a script pretending to be one.

   “We’re going to see a lot of “AI-powered” metrics that look impressive until you examine what’s actually behind them. Participation numbers, engagement statistics, autonomous behavior claims. Without verification mechanisms, the numbers are meaningless. The AI internet is coming, but right now it’s mostly humans wearing AI masks.

   “If you’re deploying vibe-coded applications to production, you need security review by someone who understands both the code and the infrastructure it runs on. AI tools don’t have security reasoning built in, which means every configuration decision is a potential exposure. We help organizations identify exactly these kinds of gaps through security assessments that trace data flows and access controls. The discovery process that found this vulnerability took Wiz researchers minutes of looking at client-side JavaScript. That’s the same level of effort an attacker would spend.

   “AI development velocity and AI security maturity are on completely different curves. Teams are shipping production applications in days. Security practices haven’t caught up. Until AI tools start generating secure defaults and flagging dangerous configurations automatically, humans (or hackers) need to be in the loop reviewing what gets deployed.”

Ryan McCurdy, VP of Marketing, Liquibase contributed this:

   “Moltbook is a textbook example of what happens when you ship at AI speed without change control at the database layer. A single missing guardrail turned a “public” Supabase key into full read and write access, exposing private agent conversations, user emails, and a massive pile of credentials. This is why Database Change Governance matters.

   “The highest risk changes are often permissions, policies, and access rules, and those need automated checks, separation of duties, drift detection, and audit-ready evidence before anything hits production. AI agents and vibe-coded apps will only amplify the blast radius if database change is not governed.”

Noelle Murata, Sr. Security Engineer, Xcape, Inc. served up this comment:

   “Matt Schlicht’s admission that he “didn’t write one line of code” isn’t something to celebrate, given the fundamental nature of the security flaw. The database completely lacked Row Level Security (RLS) policies, allowing anyone to access it without authentication. This misconfiguration exposed the entire database structure and content, including tokens that granted read/write/edit access to non-authenticated users – a basic oversight with serious consequences.

   “Vibe-coding,” or relying on AI to generate code, can produce functional results but often sacrifices best practices in architecture and security for speed and convenience. Without code review or highly specific prompting, AI-generated code prioritizes “fast and easy” over “resilient and secure.” This is analogous to why junior developers need oversight; the same principle applies to AI-generated code.

   “Despite Moltbook being marketed as a social platform “for bots, by bots,” it had a significant human user base: 17,000 humans alongside 1.5 million bots, creating a roughly 1:88 ratio. Notably, no CAPTCHA or human/bot validation system was implemented, raising questions about the platform’s actual purpose and user management.

   “This incident demonstrates that AI-generated applications require careful monitoring and professional oversight. Software development still demands review by trained, experienced humans to ensure security and reliability.”

This highlights the danger of vibe coding. You can get stuff done. But how it gets done might be a problem. You might want to keep that in mind if you rely on vibe coding.

BigHammer.ai to Replace the Legacy Data Stack with AI Agents

Posted in Commentary with tags on February 5, 2026 by itnerd

AI start-up BigHammer.ai launched today with a bold ambition: to disrupt the $500bn data and analytics market. BigHammer.ai’s team of AI agents works together as a virtual data engineering function – redefining how data products are built, governed and run at scale.

As organizations struggle under the weight of fragmented data tools, siloed teams, and rising labor and platform costs, BigHammer.ai offers a fundamentally new operating model. Instead of assembling and maintaining complex stacks of legacy point solutions, BigHammer.ai’s team of AI agents automate data pipeline development, operations and governance end to end across the entire data lifecycle. 

Built for modern data and analytics teams, BigHammer.ai replaces today’s disconnected tools and manual workflows with AI agents that can learn, plan, build, make decisions and act independently. The result is faster delivery, lower cost and dramatically reduced operational complexity – without sacrificing control, compliance or trust.

The AI agents are instructed and managed via natural language interfaces, enabling closer collaboration between business and technical teams – empowering citizen data engineers through self-service and reducing reliance on specialist engineering resources. 

Unlike copilots that sit on top of existing tools and therefore only see part of the stack, BigHammer.ai is AI-native by design. Its agents securely ingest, catalog and govern data, build and operate complex pipelines, and deliver analytics-ready data and insights. As a result, organizations can:

  • Scale data and AI without scaling headcount, cutting operational and labor costs by up to 70%.
  • Deliver insights faster, removing engineering bottlenecks and empowering technical data engineers and citizen data engineers to build data products up to 70% faster.
  • Radically simplify the data stack, accelerating legacy migration and rationalization while reducing total cost of ownership (TCO) by up to 30%.
  • Automate governance and compliance, maintaining end-to-end data integrity, security and provenance across the entire lifecycle.   

Founder Srinath Reddy B, formerly Head of Data Platforms & Engineering at Dun & Bradstreet and Head of Data at Aon, brings more than 20 years of frontline experience building and running large-scale data and analytical platforms.

Four AI-powered super agents, one coordinated approach

At the heart of BigHammer.ai are four specialized agents. Each agent has a defined persona, collaborates seamlessly with other agents, and is orchestrated by a meta-model that plans, coordinates and optimizes work across the data lifecycle – continuously improving as agents learn and share knowledge across deployments:

  • Agent DataGov provides end-to-end data governance to inform and set guardrails for all agents while delivering trust and transparency through metadata, lineage, quality and compliance.
  • Agent Pipeline builds pipelines and accelerates modernization, using natural language to generate production-ready pipelines and support the migration of legacy data/code.
  • Agent DataOps monitors and improves reliability across the data estate, including cost, latency and data quality signals. It reduces operational toil, accelerates incident response, and finds opportunities to save costs.
  • Agent Xplore helps teams explore and analyze data faster, enabling natural language-driven discovery, deep insights on data and next best action recommendations.

For more information, visit the website, or request a demo.