Guides · January 16, 2026 · 3 min read

How to build a voice agent with TypeScript & Next.js and deploy to Vercel in 5 minutes

A template that helps you skip the WebSocket debugging, VAD tuning, and latency issues. Deploy a voice AI agent to Vercel in minutes.

Stuart Bowness
@stuartbowness
[Image: Next.js voice agent deployed to Vercel]

Adding text-based chat to a Next.js app takes an afternoon. Adding voice? That can mean debugging WebSockets, tuning VAD, and playing whack-a-mole with latency issues across regions.

This template helps you skip all of those headaches.

Vercel's serverless architecture is a natural fit for voice AI when paired with the right infrastructure layer. Your Vercel API routes handle the AI logic. You receive text, you send text. Layercode handles everything else: audio transport, pipeline orchestration, voice activity detection, turn-taking, session management, and observability at the edge.

Webhooks auto-connect, API keys stay server-side, conversation history persists via Vercel KV, and the backend is under 50 lines of TypeScript.
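That backend boils down to text in, text out. Here's a minimal sketch of that shape — the payload fields are simplified placeholders for illustration, not Layercode's actual webhook contract (see the template's app/api/voice/route.ts for the real one):

```typescript
// Sketch of a minimal text-in, text-out webhook route handler.
// App Router route handlers use the standard Request/Response Web APIs.
export async function POST(req: Request): Promise<Response> {
  // Assumed payload shape for illustration: { text: string }
  const { text } = (await req.json()) as { text: string };

  // Your AI logic goes here; this stub just echoes the input.
  const reply = `You said: ${text}`;

  return Response.json({ type: "response", text: reply });
}
```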

Try a live demo →

What's Included

The template runs on Next.js 16 with the App Router and uses the Vercel AI SDK for LLM calls. Layercode handles the voice infrastructure: speech-to-text, text-to-speech, and audio streaming at the edge.

Features:

  • Real-time voice conversations
  • Streaming transcripts with partial updates as users speak
  • Speaking indicators showing who's talking
  • Microphone selection that persists across sessions
  • Message history via Vercel KV (in-memory fallback if you skip it)
  • Editable knowledge base in lib/knowledge.ts
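The KV-with-fallback pattern from the message history bullet is worth a closer look. A library-agnostic sketch, assuming illustrative function names (the template's actual exports may differ) and Vercel KV's standard env vars as the presence check:

```typescript
// Persist history in Vercel KV when configured, otherwise fall back
// to a process-local Map (fine for local dev, lost on cold starts).
type Message = { role: "user" | "assistant"; content: string };

const memoryStore = new Map<string, Message[]>();

// Vercel KV injects KV_REST_API_URL when the store is linked.
const hasKV = Boolean(process.env.KV_REST_API_URL);

export async function appendMessage(sessionId: string, msg: Message): Promise<void> {
  if (hasKV) {
    // With @vercel/kv you would push to a per-session list here, e.g.:
    // await kv.rpush(`history:${sessionId}`, JSON.stringify(msg));
    return;
  }
  const history = memoryStore.get(sessionId) ?? [];
  history.push(msg);
  memoryStore.set(sessionId, history);
}

export async function getHistory(sessionId: string): Promise<Message[]> {
  if (hasKV) {
    // Mirror read via kv.lrange(`history:${sessionId}`, 0, -1)
    return [];
  }
  return memoryStore.get(sessionId) ?? [];
}
```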

Deploy (in 5 minutes)

You'll need a Layercode account (free tier works), an OpenAI API key, and a Vercel account.

Step 1: One-Click Deploy

Hit the Deploy button. Vercel will clone the repository to your GitHub account and deploy it automatically.

Step 2: Configure Environment Variables

Add the following environment variables in your Vercel project settings:

LAYERCODE_API_KEY=your_layercode_api_key
OPENAI_API_KEY=your_openai_api_key
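If you'd rather fail fast than debug a silent 500 when a key is missing, a tiny guard helps. This helper is illustrative, not part of the template:

```typescript
// Read a required env var, throwing a clear error if it's unset.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing environment variable: ${name}`);
  }
  return value;
}

// Usage in a route: const apiKey = requireEnv("LAYERCODE_API_KEY");
```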

Step 3: Create a Pipeline

In the Layercode dashboard, create a new pipeline and point the webhook URL to your Vercel deployment:

https://your-app.vercel.app/api/voice

Step 4: Test It

Open your deployed app and click the microphone button. Start talking.

Customize the Agent

The agent's behavior is controlled by two files:

lib/knowledge.ts — Add product knowledge, FAQs, or any context you want the agent to have access to.

app/api/voice/route.ts — Modify the system prompt, add tool calls, or integrate with your own APIs.
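To make those two files concrete, here's a hypothetical shape for lib/knowledge.ts plus a helper that folds it into the system prompt. The facts and function name are made up; the template's actual structure may differ:

```typescript
// lib/knowledge.ts — facts the agent can draw on (illustrative content).
export const knowledge: string[] = [
  "Our support hours are 9am-5pm ET, Monday to Friday.",
  "The free tier includes a monthly usage allowance.",
];

// Build the system prompt consumed by the route handler.
export function buildSystemPrompt(): string {
  return [
    "You are a friendly voice assistant. Keep answers short and speakable.",
    "Use the following facts when relevant:",
    ...knowledge.map((fact) => `- ${fact}`),
  ].join("\n");
}
```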

Architecture

User's Browser → Layercode Edge → Your Vercel API Route
                    ↓
              Speech-to-Text
                    ↓
              Your AI Logic
                    ↓
              Text-to-Speech
                    ↓
              User's Browser

All audio processing happens at the edge, close to your users. Your API route only handles text, keeping serverless function execution time minimal.

What's Next

  • Add authentication with NextAuth.js
  • Connect to your own database for conversation history
  • Build custom tool calls for booking, CRM updates, or any backend action
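The template wires tools through the Vercel AI SDK; stripped of the SDK, the core dispatch idea looks like this. Tool names and arguments here are invented for illustration:

```typescript
// Library-agnostic sketch of dispatching a tool call the model requests.
type ToolCall = { name: string; args: Record<string, unknown> };

const tools: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  // Hypothetical booking tool; a real agent would call your booking API.
  async bookAppointment(args) {
    return `Booked a slot for ${String(args.date)}.`;
  },
};

export async function runToolCall(call: ToolCall): Promise<string> {
  const fn = tools[call.name];
  if (!fn) return `Unknown tool: ${call.name}`;
  return fn(call.args);
}
```

The same registry pattern extends to CRM updates or any backend action: add an entry to `tools`, and describe it to the model in your route's prompt or tool schema.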

Questions? Join our Discord community or check out the documentation.
