Deepgram’s Unfiltered Views on The Announcement From OpenAI

OpenAI just made an announcement titled "Introducing gpt-realtime and Realtime API updates for production voice agents," which can be found here: https://openai.com/index/introducing-gpt-realtime/

Scott Stephenson, CEO and Founder of Deepgram, would like to respectfully offer the following thoughts on this news:

“OpenAI’s new model shows progress, but the benchmarks make it clear: latency, turn-taking, and lack of control remain its Achilles’ heel in real conversations,” said Scott Stephenson, CEO and Founder, Deepgram. “When you measure what makes conversations actually work — speed, politeness, and turn-taking — Deepgram still leads the pack. The benchmarks confirm what users feel: conversations with Deepgram just flow more naturally.”

Stephenson continued, “Why does this matter? In real-world deployments, people don’t judge a voice agent by its feature set — they judge it by how the conversation feels. Latency and turn-taking aren’t technical footnotes; they’re the difference between a helpful interaction and a frustrating one. That’s why benchmarks that measure conversational flow, not just functionality, are the true indicator of readiness for production.”

Benchmarks That Back It Up 

  • #1 across all tests: Deepgram ranked highest under every VAQI weighting — equal, politeness-heavy, and latency-heavy.
  • More polite conversations: Fewest interruptions, meaning agents don't talk over users.
  • Faster responses: Sub-second average latency (0.85s) vs. OpenAI's 2.55s.
  • Smarter timing: Strong turn-taking with a competitive miss rate (0.427).
  • Consistent edge: Even when the benchmark weightings shifted priorities, the results held, and Deepgram stayed on top.

Source: VAQI Benchmark, August 2025
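The idea behind the three weightings above can be sketched as a simple weighted average of component scores. The actual VAQI formula is not given in this post, so the component names, scores, and weights below are illustrative assumptions only, not Deepgram's methodology:

```python
# Hypothetical sketch of a weighted voice-agent quality index.
# Component names, scores, and weights are illustrative placeholders;
# the real VAQI formula is not published in this post.

def quality_index(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized component scores (0-1, higher is better)."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] for k in weights) / total

# Placeholder component scores (NOT real benchmark data).
agent_scores = {"latency": 0.9, "politeness": 0.8, "turn_taking": 0.85}

# Three weighting schemes mirroring the ones named above.
weightings = {
    "equal":            {"latency": 1, "politeness": 1, "turn_taking": 1},
    "politeness-heavy": {"latency": 1, "politeness": 2, "turn_taking": 1},
    "latency-heavy":    {"latency": 2, "politeness": 1, "turn_taking": 1},
}

for name, w in weightings.items():
    print(f"{name}: {quality_index(agent_scores, w):.3f}")
```

Re-ranking agents under each scheme is what "results held even when priorities shifted" would mean in this framing: the leader's score stays highest under every weight vector.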

Deepgram published a blog today with further details: https://deepgram.com/learn/vaqi-openai-gpt-realtime-test-with-sensitivity-analysis
