
Oct 30, 2025
Product

Deepgram Flux: now available on the edge

Aidan Hornsby

We're excited to announce a major upgrade to Layercode: support for Deepgram Flux, the world's first transcription system built specifically for voice agents.

What is Deepgram Flux?

Traditional speech-to-text models are built for pure transcription, but voice agents need to understand the flow of conversation, not just capture words. To solve this, Deepgram built the world’s first production-ready Conversational Speech Recognition (CSR) model.

Deepgram's new Flux model targets one of the biggest pain points in voice AI: knowing when a user has actually finished speaking.

By embedding intelligent turn-taking directly into the recognition model itself, Flux helps eliminate awkward interruptions and robotic pauses in your agent conversations.

We've been blown away by Flux's improvements. In our testing:

  • ~260ms end-of-turn detection with context-aware turn handling

  • Nova-3 level transcription accuracy while adding real-time conversational intelligence

  • Native barge-in support for fluid, natural exchanges

  • Dramatically reduced end-to-end latency that makes conversations feel snappier and more human
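To make the turn-taking idea above concrete, here's a minimal sketch of the kind of event loop a voice agent runs on top of a conversational STT model. The event names and shapes here are hypothetical, for illustration only; they are not Layercode's or Deepgram's actual API.

```python
from dataclasses import dataclass

@dataclass
class SttEvent:
    """Hypothetical event from a conversational STT stream."""
    kind: str        # "transcript", "end_of_turn", or "barge_in"
    text: str = ""

def run_turn_loop(events):
    """Accumulate transcript fragments until the model signals end of
    turn, then emit the completed user turn. On a barge-in, the agent
    would stop its own audio and start capturing the new turn."""
    buffer, turns = [], []
    for event in events:
        if event.kind == "transcript":
            buffer.append(event.text)
        elif event.kind == "end_of_turn":
            turns.append(" ".join(buffer))
            buffer = []
        elif event.kind == "barge_in":
            # User spoke over the agent: discard any stale fragments
            # and begin a fresh turn.
            buffer = []
    return turns

events = [
    SttEvent("transcript", "what's the weather"),
    SttEvent("transcript", "in Paris"),
    SttEvent("end_of_turn"),
]
print(run_turn_loop(events))  # → ["what's the weather in Paris"]
```

The point of a CSR model is that the `end_of_turn` signal comes from the recognizer itself, informed by conversational context, rather than from a fixed silence timeout bolted on afterward.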

How Layercode makes Flux even better

Using Flux will notably reduce your agents' conversational latency on its own, and we're able to take things a step further.

By running Flux at Cloudflare's 330+ global locations, Layercode can process Flux inference at the network edge, delivering the lowest possible latency to your users anywhere in the world.

What does this mean for your agents? Layercode's sub-50ms audio processing latency combined with Flux's intelligent turn-taking helps agents respond with human-like timing, creating conversations that feel truly natural.
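Putting the figures above together, here is a back-of-the-envelope latency budget (not a measured benchmark) for the time between a user finishing speaking and the agent backend taking over:

```python
# Back-of-the-envelope budget (milliseconds) using the figures
# mentioned above; real values vary by network and workload.
audio_processing = 50   # Layercode edge audio processing (upper bound)
end_of_turn = 260       # Flux end-of-turn detection (~ value from testing)

speech_to_handoff = audio_processing + end_of_turn
print(speech_to_handoff)  # → 310
```

Roughly 310 ms of pipeline overhead before the agent backend even starts responding is well within the range where a reply feels immediate to the listener.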

Try it now: Simply switch to Flux from the STT model picker in your Layercode agent dashboard. No code changes required.

Other updates

New Features

  • Latency breakdowns: View latency breakdowns for all agent sessions (total latency, user endpoint latency, agent backend response latency, and TTS generation latency). Click into any session, then select 'View latency breakdown' under the session's 'Average latency'.

  • Cartesia Sonic-3 TTS: Added preview support for Cartesia's latest speech model

Improved Documentation

That’s it for this release. More soon!
