Create an assistant

client.assistants.create(body: AssistantCreateParams, options?: RequestOptions): AssistantCreateResponse
POST /assistants

Create a new AI assistant with a custom prompt, voice, and behavior configuration. Assistants define how the AI agent behaves during calls, including the system prompt given to the LLM, the first sentence spoken, the voice provider and voice ID (Cartesia or ElevenLabs), and end-of-call behavior. Once created, reference the assistant by its ID when placing calls.

Parameters
body: AssistantCreateParams
name: string
prompt: string

The prompt to use for the call. This will be given to the LLM (gpt-4.1) as the system prompt.

background_sound?: "audio/office.ogg" | null

The background sound to play during the call. Useful to give the impression that your AI agent is in an office, in the street, or anywhere else you want.

calendly?: Calendly
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The event type ID representing the event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config?: CallRetryConfig

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

calling_windows: Array<CallingWindow>
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone?: string | null

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.
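Putting the fields above together, a sketch of a call_retry_config body (field names are from the schema above; the values are illustrative, not API defaults), along with a local check of the documented 'HH:mm' / 'H:mma' time formats:

```typescript
// Sketch of a call_retry_config object using the fields documented above.
const callRetryConfig = {
  calling_windows: [
    {
      calling_window_start_time: "09:00", // 24-hour 'HH:mm'
      calling_window_end_time: "6pm",     // 12-hour 'H:mma' is also accepted
      retry_delay_seconds: 3600,          // 1 hour between attempts
    },
  ],
  max_retry_attempts: 2,
  timezone: "Europe/Paris", // overrides detection from the phone number country code
};

// Matches either 'HH:mm' (24-hour) or 'H:mma' (12-hour, optional minutes),
// per the formats described above. Purely a client-side sanity check.
const timePattern =
  /^(?:[01]?\d|2[0-3]):[0-5]\d$|^(?:1[0-2]|[1-9])(?::[0-5]\d)?(?:am|pm)$/;
```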

end_of_call_sentence?: string

Optional message to say when the agent decides to end the call.

faq_items?: Array<FaqItem>

FAQ items to associate with this assistant. When provided, replaces all existing FAQ items.

answer: string
question: string
first_sentence?: string

The first sentence to use for the call. Depending on first_sentence_mode, it is either spoken verbatim or given to the LLM as an instruction.

first_sentence_delay_ms?: number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: 0
maximum: 9007199254740991
first_sentence_mode?: "generated" | "static" | "none"

How the first sentence should be handled. "generated" means the LLM will generate a response based on the first_sentence instruction. "static" means the first_sentence will be spoken exactly as provided. "none" means the agent will not speak first and will wait for the user.

One of the following:
"generated"
"static"
"none"
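The three modes can be summarized as follows. This helper is purely illustrative of the behavior described above, not Revox's actual implementation:

```typescript
type FirstSentenceMode = "generated" | "static" | "none";

// Illustrative mapping of each first_sentence_mode to the opening behavior
// documented above.
function openingBehavior(mode: FirstSentenceMode, firstSentence?: string): string {
  switch (mode) {
    case "generated":
      // The LLM generates an opener guided by the first_sentence instruction.
      return `LLM generates an opener from: ${firstSentence ?? "(no instruction)"}`;
    case "static":
      // The first_sentence text is spoken exactly as provided.
      return firstSentence ?? "";
    case "none":
      // The agent does not speak first and waits for the user.
      return "";
  }
}
```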
ivr_navigation_enabled?: boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model?: UnionMember0 { name, type } | UnionMember1 { openrouter_model_id, openrouter_provider, type }
One of the following:
UnionMember0 { name, type }
name: "gpt-4.1" | "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter (e.g. openai/gpt-4.1).

openrouter_provider: string

The provider to use from OpenRouter (e.g. nebius, openai, or azure).

type: "openrouter"

Use a model from OpenRouter.
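The llm_model field can be modeled as a discriminated union on type. A sketch, with field names copied from the schema above and example values taken from the docs' own examples:

```typescript
// Discriminated union mirroring the llm_model schema documented above.
type LlmModel =
  | { type: "dedicated-instance"; name: "gpt-4.1" | "ministral-3-8b-instruct" }
  | { type: "openrouter"; openrouter_model_id: string; openrouter_provider: string };

// Use a model hosted on a dedicated instance.
const dedicated: LlmModel = { type: "dedicated-instance", name: "gpt-4.1" };

// Or route the same request through OpenRouter instead.
const viaOpenRouter: LlmModel = {
  type: "openrouter",
  openrouter_model_id: "openai/gpt-4.1",
  openrouter_provider: "openai",
};
```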

max_call_duration_secs?: number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

structured_output_config?: Array<StructuredOutputConfig>

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | "enum" | "date" | "datetime"
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
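For example, a hypothetical structured_output_config extracting a contact email and an interest level (field names are from the schema above; the extracted fields themselves are made up for illustration):

```typescript
// Sketch of a structured_output_config array using the fields documented above.
const structuredOutputConfig = [
  {
    name: "email",
    type: "string",
    required: true,
    description: "Contact email captured during the call",
  },
  {
    name: "interest_level",
    type: "enum",
    required: false,
    description: "How interested the callee sounded",
    // enum_options is only meaningful when type is "enum".
    enum_options: ["low", "medium", "high"],
  },
];
```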
transfer_phone_number?: string | null

Phone number to transfer calls to when users request to speak to a human agent in E.164 format (e.g. +1234567890).
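E.164 numbers are a leading '+' followed by up to 15 digits. A quick client-side shape check before sending (a generic sketch, not part of the SDK):

```typescript
// Minimal E.164 shape check: '+', a non-zero leading digit, at most 15 digits total.
function looksLikeE164(phone: string): boolean {
  return /^\+[1-9]\d{1,14}$/.test(phone);
}
```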

voice?: Voice

The voice to use for the call. You can get the list of voices using the /voices endpoint.

id: string

The ID of the voice.

minLength: 1
provider: "cartesia" | "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed?: number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
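Because the valid speed range differs per provider, a client-side clamp can keep requested values in bounds (a sketch; the ranges come from the documentation above, the helper name is made up):

```typescript
type VoiceProvider = "cartesia" | "elevenlabs";

// Speed ranges documented above: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2.
const SPEED_RANGES: Record<VoiceProvider, [number, number]> = {
  cartesia: [0.6, 1.5],
  elevenlabs: [0.7, 1.2],
};

// Clamp a requested speed into the provider's documented range.
function clampSpeed(provider: VoiceProvider, speed: number): number {
  const [min, max] = SPEED_RANGES[provider];
  return Math.min(max, Math.max(min, speed));
}
```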
voicemail_message?: string | null

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

webhook_url?: string

The webhook URL to call when the call is completed.

Returns
AssistantCreateResponse { assistant }
assistant: Assistant
id: string
background_sound: "audio/office.ogg" | null

The background sound to play during the call. Useful to give the impression that your AI agent is in an office.

calendly: Calendly | null
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The event type ID representing the event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config: CallRetryConfig | null

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

calling_windows: Array<CallingWindow>
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone?: string | null

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.

created_at: unknown
end_of_call_sentence: string | null
first_sentence: string | null
first_sentence_delay_ms: number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: -9007199254740991
maximum: 9007199254740991
first_sentence_mode: "generated" | "static" | "none"
One of the following:
"generated"
"static"
"none"
ivr_navigation_enabled: boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model: UnionMember0 { name, type } | UnionMember1 { openrouter_model_id, openrouter_provider, type }
One of the following:
UnionMember0 { name, type }
name: "gpt-4.1" | "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter (e.g. openai/gpt-4.1).

openrouter_provider: string

The provider to use from OpenRouter (e.g. nebius, openai, or azure).

type: "openrouter"

Use a model from OpenRouter.

max_call_duration_secs: number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

name: string
organization_id: string
prompt: string
structured_output_config: Array<StructuredOutputConfig> | null

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | "enum" | "date" | "datetime"
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
transfer_phone_number: string | null

Phone number to transfer calls to when users request to speak to a human agent.

updated_at: unknown
voice: Voice | null
id: string

The ID of the voice.

minLength: 1
provider: "cartesia" | "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed?: number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
voicemail_message: string | null

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

webhook_url: string | null

The webhook URL to call when the call is completed.

faq_items?: Array<FaqItem>
answer: string
question: string
id?: string
needs_human_answer?: boolean
source?: "human" | "ai"
One of the following:
"human"
"ai"
pending_faq_count?: number

Create an assistant

import Revox from '@revoxai/sdk';

const client = new Revox({
  apiKey: process.env['REVOX_API_KEY'], // This is the default and can be omitted
});

const assistant = await client.assistants.create({ name: 'name', prompt: 'prompt' });

console.log(assistant.assistant);
{
  "assistant": {
    "id": "id",
    "background_sound": "audio/office.ogg",
    "calendly": {
      "connection_id": "connection_id",
      "event_type_id": "event_type_id"
    },
    "call_retry_config": {
      "calling_windows": [
        {
          "calling_window_end_time": "calling_window_end_time",
          "calling_window_start_time": "calling_window_start_time",
          "retry_delay_seconds": 1
        }
      ],
      "max_retry_attempts": 1,
      "timezone": "timezone"
    },
    "created_at": {},
    "end_of_call_sentence": "end_of_call_sentence",
    "first_sentence": "first_sentence",
    "first_sentence_delay_ms": -9007199254740991,
    "first_sentence_mode": "generated",
    "ivr_navigation_enabled": true,
    "llm_model": {
      "name": "gpt-4.1",
      "type": "dedicated-instance"
    },
    "max_call_duration_secs": 0,
    "name": "name",
    "organization_id": "organization_id",
    "prompt": "prompt",
    "structured_output_config": [
      {
        "name": "x",
        "required": true,
        "type": "string",
        "description": "description",
        "enum_options": [
          "string"
        ]
      }
    ],
    "transfer_phone_number": "transfer_phone_number",
    "updated_at": {},
    "voice": {
      "id": "x",
      "provider": "cartesia",
      "speed": 0.6
    },
    "voicemail_message": "voicemail_message",
    "webhook_url": "webhook_url",
    "faq_items": [
      {
        "answer": "answer",
        "question": "question",
        "id": "id",
        "needs_human_answer": true,
        "source": "human"
      }
    ],
    "pending_faq_count": 0
  }
}