LiveKit Agents
Use Runway Characters with LiveKit Agents to build fully custom conversational experiences where you control the entire pipeline. Your agent handles speech-to-text, the language model, and text-to-speech. Runway provides the visual layer: audio in, avatar video out.
Before you start
You’ll need:
- A Runway API key
- A LiveKit Cloud project (or self-hosted LiveKit server)
- A Google Gemini API key (or another LLM/TTS provider)
- A preset ID (e.g. `cat-character`) or a custom Avatar ID from the Developer Portal
Install the plugin

```sh
pip install livekit-plugins-runway
```

```sh
npm install @livekit/agents-plugin-runway
```

Set the following in your `.env` file:

```sh
RUNWAYML_API_SECRET=...
LIVEKIT_URL=...
LIVEKIT_API_KEY=...
LIVEKIT_API_SECRET=...
GOOGLE_API_KEY=...
```
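Before starting the worker, it can help to fail fast when one of these keys is missing. A minimal sketch; the `missing_env` helper and `REQUIRED_KEYS` list are illustrative, not part of the plugin:

```python
import os

# The keys this guide's agent reads from .env.
REQUIRED_KEYS = [
    "RUNWAYML_API_SECRET",
    "LIVEKIT_URL",
    "LIVEKIT_API_KEY",
    "LIVEKIT_API_SECRET",
    "GOOGLE_API_KEY",
]

def missing_env(keys, environ=os.environ):
    """Return the names that are unset or empty, so startup can fail fast."""
    return [key for key in keys if not environ.get(key)]

# Example with a partial environment: only LIVEKIT_URL is set.
print(missing_env(REQUIRED_KEYS, {"LIVEKIT_URL": "wss://example.livekit.cloud"}))
```

Call `missing_env(REQUIRED_KEYS)` at the top of your worker and exit with a clear message if the list is non-empty, rather than letting a provider client fail later with an opaque auth error.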
Add AvatarSession to your agent

agent_worker.py

```python
from dotenv import load_dotenv
from livekit.agents import Agent, AgentServer, AgentSession, JobContext, cli
from livekit.plugins import google, runway

load_dotenv()

server = AgentServer()

@server.rtc_session()
async def entrypoint(ctx: JobContext):
    session = AgentSession(
        llm=google.realtime.RealtimeModel(voice="kore"),
    )

    avatar = runway.AvatarSession(
        preset_id="cat-character",
    )
    await avatar.start(session, room=ctx.room)

    await session.start(
        agent=Agent(instructions="Talk to me!"),
        room=ctx.room,
    )

    session.generate_reply(instructions="Say hello to the user.")

if __name__ == "__main__":
    cli.run_app(server)
```

agent_worker.ts

```ts
import { type JobContext, ServerOptions, cli, defineAgent, voice } from '@livekit/agents';
import * as google from '@livekit/agents-plugin-google';
import * as runway from '@livekit/agents-plugin-runway';
import { fileURLToPath } from 'node:url';

export default defineAgent({
  entry: async (ctx: JobContext) => {
    await ctx.connect();

    const session = new voice.AgentSession({
      llm: new google.beta.realtime.RealtimeModel({ voice: 'Kore' }),
    });

    const avatar = new runway.AvatarSession({
      presetId: 'cat-character',
    });
    await avatar.start(session, ctx.room);

    await session.start({
      agent: new voice.Agent({ instructions: 'Talk to me!' }),
      room: ctx.room,
      outputOptions: { syncTranscription: false },
    });

    session.generateReply({ instructions: 'Say hello to the user.' });
  },
});

cli.runApp(new ServerOptions({ agent: fileURLToPath(import.meta.url) }));
```

Use `avatar_id` / `avatarId` instead of `preset_id` / `presetId` to use a custom Character from the Developer Portal. See the LiveKit Runway plugin guide for the full list of `AvatarSession` parameters.
Test it
Open the LiveKit Agents Playground to preview your agent without building a frontend. Start a conversation and verify the avatar video track appears alongside your agent’s audio.
End sessions promptly
Runway bills realtime Character sessions while the Runway avatar worker is active. The plugin cancels the Runway realtime session during normal LiveKit job shutdown, so make sure your agent shutdown path runs when the user leaves, your agent disconnects, or your app ends the conversation.
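The plugin handles cancellation on normal shutdown; the point above is that your own teardown path must actually execute. As a generic pattern (not plugin API), a `try`/`finally` guarantees cleanup even when the conversation ends with an error:

```python
import asyncio

async def run_with_cleanup(main, cleanup):
    # The finally block runs even if main() raises or is cancelled,
    # so session teardown is never skipped.
    try:
        return await main()
    finally:
        await cleanup()

events = []

async def main():
    events.append("conversation")
    raise RuntimeError("user left abruptly")

async def cleanup():
    events.append("cleanup")  # where the Runway session would be ended

try:
    asyncio.run(run_with_cleanup(main, cleanup))
except RuntimeError:
    pass

print(events)  # both phases ran despite the error
```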
Set `max_duration` / `maxDuration` (seconds) in the `AvatarSession` constructor to cap session length. If the job is force-killed before cleanup runs, the Runway session can continue until this limit.
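For example, capping a session at ten minutes in the Python worker from the steps above:

```python
avatar = runway.AvatarSession(
    preset_id="cat-character",
    max_duration=600,  # seconds; hard stop even if cleanup never runs
)
```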
Handle startup errors
`AvatarSession.start()` can fail before the Character joins the LiveKit room, for example if the Runway project has insufficient credits or the session request is invalid. Catch startup errors in your agent and send an application-level message to your frontend so the user does not wait indefinitely for the avatar video track.
```python
try:
    await avatar.start(session, room=ctx.room)
except Exception as exc:
    print(f"failed to start Runway avatar: {exc}")
    raise
```

```ts
try {
  await avatar.start(session, ctx.room);
} catch (error) {
  console.error('failed to start Runway avatar', error);
  throw error;
}
```
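One way to surface the failure to the client is a small JSON payload over LiveKit's data channel. A sketch; the message shape and the `topic` name are application-defined assumptions, not part of the plugin:

```python
import json

def avatar_error_payload(reason: str) -> bytes:
    # Application-defined message the frontend can listen for and use to
    # fall back to an audio-only UI instead of waiting for the video track.
    return json.dumps({"type": "avatar_error", "reason": reason}).encode("utf-8")

print(avatar_error_payload("insufficient credits"))
```

In the `except` block above, the payload could then be published from the agent (hypothetical topic name): `await ctx.room.local_participant.publish_data(avatar_error_payload(str(exc)), topic="app-events")`.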