JavaScript SDK

Agents Platform SDK: deploy customized, interactive voice agents in minutes.

Installation

Install the package in your project through your package manager of choice.

npm install @elevenlabs/client
# or
yarn add @elevenlabs/client
# or
pnpm install @elevenlabs/client

Usage

This library is primarily meant for development in vanilla JavaScript projects, or as a base for libraries tailored to specific frameworks. It is recommended to check whether your specific framework has its own library. However, you can use this library in any JavaScript-based project.

Initialize conversation

First, initialize the Conversation instance:

import { Conversation } from '@elevenlabs/client';

const conversation = await Conversation.startSession(options);

This will kick off the connection (WebSocket or WebRTC, depending on the options) and start using the microphone to communicate with the ElevenLabs agent. Consider explaining and requesting microphone access in your app's UI before the Conversation kicks off:

// call after explaining to the user why the microphone access is needed
await navigator.mediaDevices.getUserMedia({ audio: true });

Session configuration

The options passed to startSession specify how the session is established. Conversations can be started with public or private agents.

Public agents

For agents that don’t require any authentication, you can start a conversation using just the agent ID and the connection type. The agent ID can be acquired through the ElevenLabs UI.

For public agents, you can use the ID directly:

const conversation = await Conversation.startSession({
  agentId: '<your-agent-id>',
  connectionType: 'webrtc', // 'websocket' is also accepted
});

Private agents

If the conversation requires authorization, you will need to add a dedicated endpoint to your server that requests either a signed URL (for the WebSocket connection type) or a conversation token (for WebRTC) from the ElevenLabs API and passes it back to the client.

Here’s an example for a WebSocket connection:

// Node.js server

app.get('/signed-url', yourAuthMiddleware, async (req, res) => {
  const response = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/get-signed-url?agent_id=${process.env.AGENT_ID}`,
    {
      method: 'GET',
      headers: {
        // Requesting a signed URL requires your ElevenLabs API key
        // Do NOT expose your API key to the client!
        'xi-api-key': process.env.ELEVENLABS_API_KEY,
      },
    }
  );

  if (!response.ok) {
    return res.status(500).send('Failed to get signed URL');
  }

  const body = await response.json();
  res.send(body.signed_url);
});

// Client

const response = await fetch('/signed-url', yourAuthHeaders);
const signedUrl = await response.text();

const conversation = await Conversation.startSession({
  signedUrl,
  connectionType: 'websocket',
});

Here’s an example for WebRTC:

// Node.js server

app.get('/conversation-token', yourAuthMiddleware, async (req, res) => {
  const response = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/token?agent_id=${process.env.AGENT_ID}`,
    {
      headers: {
        // Requesting a conversation token requires your ElevenLabs API key
        // Do NOT expose your API key to the client!
        'xi-api-key': process.env.ELEVENLABS_API_KEY,
      },
    }
  );

  if (!response.ok) {
    return res.status(500).send('Failed to get conversation token');
  }

  const body = await response.json();
  res.send(body.token);
});

Once you have the token, providing it to startSession will initiate the conversation using WebRTC.

// Client

const response = await fetch('/conversation-token', yourAuthHeaders);
const conversationToken = await response.text();

const conversation = await Conversation.startSession({
  conversationToken,
  connectionType: 'webrtc',
});

Optional callbacks

The options passed to startSession can also be used to register optional callbacks, as shown in the sketch after this list:

  • onConnect - handler called when the conversation websocket connection is established.
  • onDisconnect - handler called when the conversation websocket connection is ended.
  • onMessage - handler called when a new text message is received. These can be tentative or final transcriptions of the user's voice, or replies produced by the LLM. Primarily used for handling conversation transcription.
  • onError - handler called when an error is encountered.
  • onStatusChange - handler called whenever the connection status changes. The status can be connected, connecting, or disconnected (the initial status).
  • onModeChange - handler called when the conversation mode changes, e.g. when the agent switches from speaking to listening or the other way around.
  • onCanSendFeedbackChange - handler called when sending feedback becomes available or unavailable.

Not all client events are enabled by default for an agent. If you have enabled a callback but aren’t seeing events come through, ensure that your ElevenLabs agent has the corresponding event enabled. You can do this in the “Advanced” tab of the agent settings in the ElevenLabs dashboard.
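
As a rough sketch, callbacks are registered alongside the other session options. The handler bodies below are illustrative; consult the SDK's type definitions for the exact payload shapes:

const conversation = await Conversation.startSession({
  agentId: '<your-agent-id>',
  connectionType: 'webrtc',
  onConnect: () => console.log('Connected'),
  onDisconnect: () => console.log('Disconnected'),
  onMessage: (message) => console.log('Message:', message),
  onError: (error) => console.error('Error:', error),
  onStatusChange: (status) => console.log('Status:', status),
  onModeChange: (mode) => console.log('Mode:', mode),
});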

Return value

startSession returns a Conversation instance that can be used to control the session. The method will throw an error if the session cannot be established. This can happen if the user denies microphone access, or if the connection fails.
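
A minimal sketch of guarding against that:

let conversation;
try {
  conversation = await Conversation.startSession({
    agentId: '<your-agent-id>',
    connectionType: 'webrtc',
  });
} catch (error) {
  // e.g. the user denied microphone access, or the connection failed
  console.error('Could not start the conversation:', error);
}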

endSession

A method to manually end the conversation. It ends the conversation and disconnects from the server. Afterwards, the conversation instance is unusable and can be safely discarded.

await conversation.endSession();

getId

A method returning the conversation ID.

const id = conversation.getId();

setVolume

A method to set the output volume of the conversation. Accepts an object with a volume field between 0 and 1.

await conversation.setVolume({ volume: 0.5 });

getInputVolume / getOutputVolume

Methods that return the current input/output volume on a scale from 0 to 1 where 0 is -100 dB and 1 is -30 dB.

const inputVolume = await conversation.getInputVolume();
const outputVolume = await conversation.getOutputVolume();

sendFeedback

A method for sending binary feedback to the agent. The method accepts a boolean value, where true represents positive feedback and false represents negative feedback.

Feedback is always correlated to the most recent agent response and can be sent only once per response.

You can listen to onCanSendFeedbackChange to know if feedback can be sent at the given moment.

conversation.sendFeedback(true); // positive feedback
conversation.sendFeedback(false); // negative feedback
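
As a sketch of wiring this into a UI, you could enable and disable feedback buttons from the callback (assuming the callback receives an object with a canSendFeedback flag; thumbsUp and thumbsDown are hypothetical button elements):

const conversation = await Conversation.startSession({
  agentId: '<your-agent-id>',
  connectionType: 'webrtc',
  onCanSendFeedbackChange: ({ canSendFeedback }) => {
    // Only allow feedback while the SDK reports it can be sent
    thumbsUp.disabled = !canSendFeedback;
    thumbsDown.disabled = !canSendFeedback;
  },
});

thumbsUp.addEventListener('click', () => conversation.sendFeedback(true));
thumbsDown.addEventListener('click', () => conversation.sendFeedback(false));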

sendContextualUpdate

A method to send contextual updates to the agent. This can be used to inform the agent about user actions that are not directly related to the conversation, but may influence the agent’s responses.

conversation.sendContextualUpdate(
  "User navigated to another page. Consider it for next response, but don't react to this contextual update."
);

sendUserMessage

Sends a text message to the agent.

Can be used to let the user type in the message instead of using the microphone. Unlike sendContextualUpdate, this will be treated as a user message and will prompt the agent to take its turn in the conversation.

sendButton.addEventListener('click', () => {
  conversation.sendUserMessage(textInput.value);
  textInput.value = '';
});

sendUserActivity

Notifies the agent about user activity.

The agent will not attempt to speak for at least 2 seconds after the user activity is detected.

This can be used to prevent the agent from interrupting the user when they are typing.

textInput.addEventListener('input', () => {
  conversation.sendUserActivity();
});
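
Putting sendUserMessage and sendUserActivity together, a minimal text-chat input might look like this (textInput is a hypothetical text field):

textInput.addEventListener('input', () => {
  // keep the agent from speaking while the user is typing
  conversation.sendUserActivity();
});

textInput.addEventListener('keydown', (event) => {
  // send the message when the user presses Enter
  if (event.key === 'Enter' && textInput.value.trim() !== '') {
    conversation.sendUserMessage(textInput.value);
    textInput.value = '';
  }
});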

setMicMuted

A method to mute/unmute the microphone.

// Mute the microphone
conversation.setMicMuted(true);

// Unmute the microphone
conversation.setMicMuted(false);
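
For instance, a hypothetical mute button could toggle the state:

let micMuted = false;

muteButton.addEventListener('click', () => {
  micMuted = !micMuted;
  conversation.setMicMuted(micMuted);
  muteButton.textContent = micMuted ? 'Unmute' : 'Mute';
});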

changeInputDevice

Allows you to change the audio input device during an active voice conversation. This method is only available for voice conversations.

In WebRTC mode the input format and sample rate are hardcoded to pcm and 48000 respectively. Changing those values when changing the input device is a no-op.

const conversation = await Conversation.startSession({
  agentId: '<your-agent-id>',
  // Alternatively you can provide a device ID when starting the session
  // Useful if you want to start the conversation with a non-default device
  inputDeviceId: 'your-device-id',
});

// Change to a specific input device
await conversation.changeInputDevice({
  sampleRate: 16000,
  format: 'pcm',
  preferHeadphonesForIosDevices: true,
  inputDeviceId: 'your-device-id',
});

If the device ID is invalid, the default device will be used instead.

changeOutputDevice

Allows you to change the audio output device during an active voice conversation. This method is only available for voice conversations.

In WebRTC mode the output format and sample rate are hardcoded to pcm and 48000 respectively. Changing those values when changing the output device is a no-op.

const conversation = await Conversation.startSession({
  agentId: '<your-agent-id>',
  // Alternatively you can provide a device ID when starting the session
  // Useful if you want to start the conversation with a non-default device
  outputDeviceId: 'your-device-id',
});

// Change to a specific output device
await conversation.changeOutputDevice({
  sampleRate: 16000,
  format: 'pcm',
  outputDeviceId: 'your-device-id',
});

Device switching only works for voice conversations. If no specific deviceId is provided, the browser will use its default device selection. You can enumerate available devices using the MediaDevices.enumerateDevices() API.
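
As a sketch, you could enumerate the available output devices and switch to one of them (this assumes passing only outputDeviceId is sufficient; note that device labels are only populated after the user has granted media permissions):

const devices = await navigator.mediaDevices.enumerateDevices();
const audioOutputs = devices.filter((device) => device.kind === 'audiooutput');

if (audioOutputs.length > 0) {
  // Switch to the first reported output device
  await conversation.changeOutputDevice({
    outputDeviceId: audioOutputs[0].deviceId,
  });
}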

getInputByteFrequencyData / getOutputByteFrequencyData

Methods that return Uint8Arrays containing the current input/output frequency data. See AnalyserNode.getByteFrequencyData for more information.

These methods are only available for voice conversations. In WebRTC mode the audio is hardcoded to use pcm_48000, meaning any visualization using the returned data might show different patterns than WebSocket connections.
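
For example, a minimal visualization loop could poll the output frequency data on every animation frame (renderBars is a hypothetical drawing function):

function draw() {
  // Uint8Array with one 0-255 value per frequency bin
  const frequencyData = conversation.getOutputByteFrequencyData();
  renderBars(frequencyData);
  requestAnimationFrame(draw);
}

requestAnimationFrame(draw);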