Archive for December 1, 2025

Sumo Logic Expands Dojo AI to Transform Security Investigations with Expanded Agentic AI Capabilities

Posted in Commentary with tags on December 1, 2025 by itnerd

Sumo Logic today announced new advancements to Sumo Logic Dojo AI, its agentic AI platform for security operations. This expansion of Dojo AI introduces new agents, including SOC Analyst Agent, Knowledge Agent, and a Model Context Protocol (MCP) server. These new agents help security teams reduce alert fatigue, accelerate investigations, and streamline security workflows, allowing customers to focus on real threats and respond more effectively. These innovations will be on display at AWS re:Invent 2025, at Sumo Logic’s booth #1329.

Modern security operations centers (SOCs) face a perfect storm of complexity: growing alert volumes, fragmented tools, and pressure to respond faster than ever. Dojo AI brings intelligence and control to this frantic environment, combining agentic AI, log intelligence, and secure model integration to transform how investigations are conducted.

Launched earlier this year, Dojo AI is Sumo Logic’s agentic AI system for Intelligent Security Operations. Within the Dojo, agents ingest signals and develop context-aware responses, and this continuous feedback loop helps agents improve over time, become more resilient, and deliver higher-fidelity insights when deployed in production. Dojo AI is an enterprise-grade, agentic AI platform purpose-built for the modern SOC and gives security teams the ability to analyze the highest-value security issues facing their organization at any given moment.

Sumo Logic Dojo AI New Capabilities

  • SOC Analyst Agent (Beta) — The SOC Analyst Agent applies agentic AI reasoning to streamline triage and investigation. It delivers verdicts on alert severity, collects related activity, and presents clear context so analysts can quickly understand impact and scope. By filtering out noise and repetitive reviews, the agent lets analysts focus on real threats and potentially achieve faster, more consistent outcomes across teams.
  • Knowledge Agent — The Knowledge Agent provides immediate, AI-powered answers to “how-to” questions in natural language, reducing friction and accelerating onboarding. By asking Mobot — Dojo AI’s conversational interface — users receive straightforward, citable responses drawn from documentation and product knowledge, empowering efficient self-service and faster platform adoption.
  • Sumo Logic Model Context Protocol (MCP) Server (Prototype) — The Sumo Logic MCP Server extends Dojo AI into a connected, agentic ecosystem. It integrates customer-owned copilots, proprietary models, and third-party AI systems into the Dojo, allowing organizations to bring their own AI while maintaining Sumo Logic’s scale, consistency, and security. With unified access across integrated development environments (IDEs) and collaboration tools, customers can blend their unique AI innovation with Dojo AI’s operational intelligence to help future-proof their SecOps strategy.
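MCP itself is an open protocol built on JSON-RPC 2.0, so any client that speaks it could, in principle, connect to a server like the one described above. Below is a minimal sketch of the message framing an MCP client produces; the tool name (`search_logs`), its arguments, and the client name are hypothetical, used purely for illustration and not drawn from Sumo Logic’s actual API:

```python
import json


def jsonrpc_request(request_id: int, method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request string, the envelope MCP uses on the wire."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })


# An MCP session starts with a capability handshake...
init_msg = jsonrpc_request(1, "initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-copilot", "version": "0.1"},
})

# ...after which the client can list and invoke server-side tools.
# "search_logs" is a hypothetical tool name for illustration only.
call_msg = jsonrpc_request(2, "tools/call", {
    "name": "search_logs",
    "arguments": {"query": "error | count by _sourceCategory"},
})
```

The point of the protocol is that the same framing works regardless of which AI system sits on the client side, which is what allows a “bring your own AI” model.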

Availability

The SOC Analyst Agent and MCP Server are currently available in beta and prototype form, respectively, to select customers, with general availability planned for 2026. The Knowledge Agent is available today within the Sumo Logic platform.

Amazon Web Services (AWS) identified Sumo Logic as a Top 100 AI ISV, and we’re proud to present at AWS re:Invent 2025. For demonstrations and customer briefings, please visit Sumo Logic at Booth #1329. You can also see Sumo Logic at the following re:Invent sessions:

  • Scaling agent tools with AgentCore Gateway for enterprises, Mandalay Bay, Monday, Dec 1st, 11:30AM – 12:30PM PST
  • ISV Executive Forum on Agentic AI moderated by Carol Potts, The Venetian Theater, Monday, Dec 1st, 1:00PM – 6:30PM PST

Deepgram Brings Low-Latency Speech Recognition and TTS to Amazon Connect

Posted in Commentary with tags on December 1, 2025 by itnerd

Deepgram today announced integration of its enterprise-grade speech-to-text (STT) and text-to-speech (TTS) models with Amazon Connect and Amazon Lex, enabling real-time transcription, low-latency voice bots, and analytics within customers’ existing AWS environments.

With this launch, teams can use Deepgram’s models natively in Amazon Lex for natural conversational experiences and pair them with Amazon Connect to unlock real-time transcription, quality monitoring, and automation without heavy custom engineering for customer experience scenarios.

Real-time transcription and analytics in Amazon Connect enable live coaching, compliance monitoring, and automated workflows built on a documented integration pattern. Native STT and TTS support in Amazon Lex delivers ultra-low latency, natural-sounding voice experiences and accurate understanding in noisy, high-variance environments. The integration fits seamlessly into existing AWS operations, allowing customers to deploy inside their AWS environment, keep data within AWS, and streamline procurement through AWS Marketplace.

Deepgram’s integrations with Amazon Connect and Amazon Lex are available for customers building on AWS today, with live demonstrations planned at AWS re:Invent in Las Vegas, December 1–5, 2025, at Deepgram Booth #690.

Learn more and explore deployment resources here.

Deepgram Launches Streaming Speech, Text, and Voice Agents on Amazon SageMaker AI

Posted in Commentary with tags on December 1, 2025 by itnerd

Deepgram today announced native integration with Amazon SageMaker AI, delivering streaming, real-time speech-to-text (STT), text-to-speech (TTS), and the Voice Agent API as Amazon SageMaker AI real-time endpoints, with no custom pipelines or orchestration required. Teams can now build, deploy, and scale voice-powered applications inside their existing AWS workflows while maintaining the security and compliance benefits of their AWS environment.

Native streaming via Amazon SageMaker endpoints means no workarounds or hoops to jump through: just clean, real-time inference through the SageMaker API. The integration enables sub-second latency and enterprise-grade reliability for high-scale use cases like contact centers, trading floors, and live analytics.

Built to run on AWS, the solution supports streaming responses via InvokeEndpointWithResponseStream and keeps data within AWS. Customers can deploy Deepgram in their Amazon Virtual Private Cloud (Amazon VPC) or as a managed service, aligning with stringent data residency and compliance requirements.
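For a sense of what that streaming call looks like, here is a minimal boto3 sketch using the SageMaker Runtime `invoke_endpoint_with_response_stream` API, which returns partial results as events rather than waiting for the full inference. The endpoint name, content type, and payload below are hypothetical placeholders; the actual request format depends on how the Deepgram model is packaged behind the endpoint:

```python
def collect_stream_bytes(event_stream) -> bytes:
    """Concatenate the raw bytes from a SageMaker response stream.

    Each streamed event that carries data has a "PayloadPart" dict
    with a "Bytes" field; other event types are skipped.
    """
    chunks = []
    for event in event_stream:
        part = event.get("PayloadPart")
        if part:
            chunks.append(part["Bytes"])
    return b"".join(chunks)


def transcribe_streaming(endpoint_name: str, audio_payload: bytes) -> bytes:
    """Invoke a streaming SageMaker endpoint and gather the full response."""
    import boto3  # AWS SDK; requires credentials configured in the environment

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint_with_response_stream(
        EndpointName=endpoint_name,            # hypothetical endpoint name
        ContentType="application/octet-stream",  # depends on model packaging
        Body=audio_payload,
    )
    return collect_stream_bytes(response["Body"])


# Usage (placeholder endpoint, not a real deployment):
#   text = transcribe_streaming("deepgram-stt-endpoint", audio_bytes)
```

Consuming the response as a stream is what makes sub-second partial transcripts possible; a plain `invoke_endpoint` call would block until the whole inference completed.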

The integration is also backed by a strong relationship with AWS. Deepgram is an AWS Generative AI Competency Partner and has signed a multi-year Strategic Collaboration Agreement (SCA) with AWS to accelerate enterprise adoption.

The integration is available today to customers building on AWS, with live demonstrations planned at AWS re:Invent in Las Vegas, December 1–5, 2025, at Deepgram Booth #690. Learn more about our AWS partnership and technical implementation on Deepgram’s AWS partner page, and read the AWS blog: “Introducing bidirectional streaming for real-time inference on Amazon SageMaker AI.”