
Create an assistant

POST/assistants

Create a new AI assistant with a custom prompt, voice, and behavior configuration. Assistants define how the AI agent behaves during calls, including the system prompt given to the LLM, the first sentence spoken, the voice provider and voice ID (Cartesia or ElevenLabs), and end-of-call behavior. Once created, reference the assistant by its ID when placing calls.
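A minimal request body needs only name and prompt; everything else documented below is optional. A sketch in Python (field names come from this reference; the values are illustrative):

```python
import json

# Illustrative create-assistant body: "name" and "prompt" are required
# (they are the only fields sent in the curl example below); the remaining
# fields shown here are optional.
payload = {
    "name": "Lead qualifier",                     # example value
    "prompt": "You are a polite agent who qualifies inbound leads.",
    "first_sentence": "Hi, thanks for calling!",  # optional
    "first_sentence_mode": "static",              # spoken exactly as written
    "max_call_duration_secs": 600,                # optional hard cap
}

body = json.dumps(payload)
```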

Body Parameters
name: string
prompt: string

The prompt to use for the call. This will be given to the LLM (gpt-4.1)

after_call_sms_prompt: optional string

Prompt / instructions for the after-call SMS. Supports {{variable}} placeholders. When null, no after-call SMS is sent.

background_sound: optional "audio/office.ogg"

Ambient background sound to play during the call. null/omitted disables it.

background_sound_volume: optional number

Volume of the ambient background sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
calendly: optional object { connection_id, event_type_id }
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The event type ID representing the event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config: optional object { allowed_days, call_twice_in_a_row, calling_windows, 2 more }

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

allowed_days: array of "monday" or "tuesday" or "wednesday" or 4 more

Days of the week when calls are allowed, in the recipient's timezone. Default: Monday through Friday.

One of the following:
"monday"
"tuesday"
"wednesday"
"thursday"
"friday"
"saturday"
"sunday"
call_twice_in_a_row: boolean

If true and max_retry_attempts >= 2, attempt #2 fires immediately (skipping retry_delay_seconds) when attempt #1 didn't reach a human. Calling-window/allowed-days checks still apply. Only affects the 1→2 transition. Default: false.

calling_windows: array of object { calling_window_end_time, calling_window_start_time, retry_delay_seconds }
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone: optional string

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.
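The calling-window strings above come in two formats. A hypothetical client-side normalizer (not part of the API, shown only to make the accepted formats concrete):

```python
from datetime import time

def parse_window_time(value: str) -> time:
    """Normalize a calling-window time string to a datetime.time.

    Accepts 'HH:mm' (24-hour, e.g. '17:00') or 12-hour forms like '6pm'
    or '9:30am' -- the formats documented for calling_window_start_time
    and calling_window_end_time.
    """
    s = value.strip().lower()
    if s.endswith(("am", "pm")):
        suffix, core = s[-2:], s[:-2]
        if ":" in core:
            hour, minute = (int(p) for p in core.split(":"))
        else:
            hour, minute = int(core), 0
        if suffix == "pm" and hour != 12:
            hour += 12                 # 6pm -> 18:00
        if suffix == "am" and hour == 12:
            hour = 0                   # 12am -> 00:00
        return time(hour, minute)
    hour, minute = (int(p) for p in s.split(":"))
    return time(hour, minute)
```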

custom_tools: optional array of object { body_template, description, headers, 5 more }

Custom API tools the assistant can call during conversations. Each tool defines an HTTP endpoint with variable substitution.

body_template: string

JSON body template for the request. Use quoted {{variable}} placeholders (e.g. "{{name}}") for dynamic values

description: string

Human-readable description of what the tool does, used by the LLM to decide when to call it

minLength: 1
headers: array of object { key, value }

HTTP headers to include in the request. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
input_schema: array of object { name, required, type, 2 more }

Schema defining the parameters the LLM should extract from the conversation to pass to this tool

name: string
minLength: 1
required: boolean
type: "string" or "number" or "boolean" or 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description: optional string
enum_options: optional array of string
method: "GET" or "POST" or "PUT" or 2 more

HTTP method to use when calling the API endpoint

One of the following:
"GET"
"POST"
"PUT"
"PATCH"
"DELETE"
name: string

Unique tool name in lowercase_snake_case (e.g. check_inventory)

minLength: 1
query_params: array of object { key, value }

Query string parameters appended to the URL. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
url: string

Full URL of the API endpoint. Supports {{variable}} placeholders for dynamic values

minLength: 1
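To make the placeholder mechanics concrete, here is a sketch of a custom tool definition plus a hypothetical local preview of {{variable}} substitution. The endpoint URL and parameter names are illustrative, and the API performs the real substitution server-side; the substitution rule shown is an assumption.

```python
import json
import re

# Sketch of a custom tool using the fields documented above; the URL and
# parameter names are illustrative.
tool = {
    "name": "check_inventory",
    "description": "Look up stock levels for a product the caller mentions",
    "method": "POST",
    "url": "https://example.com/api/inventory",
    "headers": [{"key": "X-Api-Key", "value": "{{api_key}}"}],
    "body_template": '{"sku": "{{sku}}", "quantity": "{{quantity}}"}',
    "input_schema": [
        {"name": "sku", "type": "string", "required": True},
        {"name": "quantity", "type": "number", "required": False},
    ],
}

def render_template(template: str, variables: dict) -> str:
    """Hypothetical preview of {{variable}} substitution (done server-side
    in reality): each {{name}} is replaced by the extracted value."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(variables.get(m.group(1), "")), template)

body = render_template(tool["body_template"], {"sku": "ABC-123", "quantity": 2})
# Because placeholders are quoted in body_template, the rendered body
# stays valid JSON: {"sku": "ABC-123", "quantity": "2"}
```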
email_notification_address: optional string

Email address to receive notifications when a call ends with a matching outcome.

format: email
email_notification_outcomes: optional array of "not_interested" or "interested" or "completed" or 3 more

Which call outcomes trigger an email notification. E.g. ["interested", "completed"].

One of the following:
"not_interested"
"interested"
"completed"
"requested_callback_later"
"requested_callback_new_number"
"do_not_contact"
end_of_call_sentence: optional string

Optional message to say when the agent decides to end the call.

faq_items: optional array of object { answer, question }

FAQ items to associate with this assistant. When provided, replaces all existing FAQ items.

answer: string
question: string
first_sentence: optional string

The first sentence for the call. Depending on first_sentence_mode, it is spoken exactly as written or used as an instruction for the LLM.

first_sentence_delay_ms: optional number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: 0
maximum: 9007199254740991
first_sentence_mode: optional "generated" or "static" or "none"

How the first sentence should be handled. "generated" means the LLM will generate a response based on the first_sentence instruction. "static" means the first_sentence will be spoken exactly as provided. "none" means the agent will not speak first and will wait for the user.

One of the following:
"generated"
"static"
"none"
from_phone_number: optional string

Override the default outbound phone number for calls placed with this assistant. Must be a phone number owned by the organization in E.164 format (e.g. +1234567890). When null, the organization's default phone number is used.

human_transfer_mode: optional "warm" or "cold"

When transfer_phone_number is set: "warm" (AI bridges) or "cold" (SIP REFER; trunk must allow REFER/PSTN). Omit or null when transfer is disabled.

One of the following:
"warm"
"cold"
ivr_navigation_enabled: optional boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model: optional object { name, type } or object { openrouter_model_id, openrouter_provider, type } or object { api_key, api_url, model_name, type } or object { provider, realtime_model_id, type, realtime_voice_id }
One of the following:
UnionMember0 = object { name, type }
name: "gpt-4.1" or "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 = object { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter, e.g. openai/gpt-4.1.

openrouter_provider: string

The provider to use from OpenRouter, e.g. nebius, openai, or azure.

type: "openrouter"

Use a model from OpenRouter.

UnionMember2 = object { api_key, api_url, model_name, type }
api_key: string

API key sent as Bearer token to the custom endpoint.

minLength: 1
api_url: string

Base URL for the OpenAI-compatible API, e.g. https://api.together.xyz/v1

format: uri
model_name: string

Model name as expected by the provider, e.g. meta-llama/llama-3-70b

minLength: 1
type: "custom"

OpenAI-compatible chat completions API (bring your own endpoint and key).

UnionMember3 = object { provider, realtime_model_id, type, realtime_voice_id }
provider: "openai" or "google"

The provider to use from Realtime, e.g. openai or google.

One of the following:
"openai"
"google"
realtime_model_id: string

The model ID to use from Realtime, e.g. gpt-4.1.

type: "realtime"

Use a model from Realtime.

realtime_voice_id: optional string

Output voice for the realtime provider (e.g. OpenAI: marin; Gemini: Puck).

minLength: 1
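One payload per llm_model variant, built from the examples given in this reference (the api_key value is a placeholder):

```python
# One example payload per llm_model union variant; model IDs and provider
# names are the examples used in this reference.
dedicated = {"type": "dedicated-instance", "name": "gpt-4.1"}

openrouter = {
    "type": "openrouter",
    "openrouter_model_id": "openai/gpt-4.1",
    "openrouter_provider": "openai",
}

custom = {
    "type": "custom",
    "api_url": "https://api.together.xyz/v1",  # OpenAI-compatible base URL
    "api_key": "YOUR_API_KEY",                 # placeholder
    "model_name": "meta-llama/llama-3-70b",
}

realtime = {
    "type": "realtime",
    "provider": "openai",
    "realtime_model_id": "gpt-4.1",
    "realtime_voice_id": "marin",              # optional output voice
}

variants = [dedicated, openrouter, custom, realtime]
```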
max_call_duration_secs: optional number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

max_duration_end_message: optional string

Optional message the agent will say, without being interruptible, when the call reaches its max duration. Kept short so it fits inside the farewell buffer. If not set, the call ends silently.

maxLength: 150
sms_enabled: optional boolean

Enable SMS tool during calls. When enabled, the agent can send SMS messages to the user on the call.

sms_template: optional string

Hardcoded SMS template to send during calls. When set, this exact text is sent instead of letting the agent generate the message. Supports {{variable}} placeholders.

structured_output_config: optional array of object { name, required, type, 2 more }

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength1
required: boolean
type: "string" or "number" or "boolean" or 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description: optional string
enum_options: optional array of string
structured_output_prompt: optional string

Custom prompt for structured data extraction. If not provided, a default prompt is used. Available variables: {{transcript}}, {{call_direction}}, {{user_phone_number}}, {{agent_phone_number}}.
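For example, a config that extracts an email address, a company-size bucket, and a callback date (the field names are illustrative):

```python
# Illustrative structured_output_config: each entry tells the extractor
# which field to pull from the transcript and what type it should have.
structured_output_config = [
    {"name": "email", "type": "string", "required": True,
     "description": "Email address the caller provided"},
    {"name": "company_size", "type": "enum", "required": False,
     "enum_options": ["1-10", "11-50", "51-200", "200+"]},
    {"name": "callback_date", "type": "date", "required": False,
     "description": "Date the caller asked to be called back"},
]
```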

thinking_sound: optional "city-ambience.ogg" or "forest-ambience.ogg" or "office-ambience.ogg" or 4 more

Audio clip to play while the agent is processing a response. One of the built-in LiveKit audio clips; null/omitted disables it.

One of the following:
"city-ambience.ogg"
"forest-ambience.ogg"
"office-ambience.ogg"
"crowded-room.ogg"
"keyboard-typing.ogg"
"keyboard-typing2.ogg"
"hold_music.ogg"
thinking_sound_probability: optional number

Probability [0..1] that the thinking sound plays on any given turn; otherwise the agent is silent while thinking.

minimum: 0
maximum: 1
thinking_sound_volume: optional number

Volume of the thinking sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
transfer_phone_number: optional string

Phone number to transfer calls to when users request to speak to a human agent in E.164 format (e.g. +1234567890).

voice: optional object { id, provider, speed, volume }

The voice to use for the call. You can get the list of voices using the /voices endpoint

id: string

The ID of the voice.

minLength: 1
provider: "cartesia" or "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed: optional number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
volume: optional number

Volume of the voice (Cartesia only). 0.5–2.0, default 1.0. Ignored for other providers.

minimum: 0.5
maximum: 2
voicemail_message: optional string

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

voicemail_sms_prompt: optional string

SMS message to send when the call reaches voicemail. Supports {{variable}} placeholders. When null, no SMS is sent on voicemail.

warm_transfer_summary_instructions: optional string

When using warm transfer: extra instructions for the supervisor handoff summary. If null or empty, the API uses the product default briefing when the call is loaded for the agent.

webhook_url: optional string

The webhook URL to call when the call is completed.

Returns
assistant: object { id, after_call_sms_prompt, background_sound, 36 more }
id: string
after_call_sms_prompt: string

Prompt / instructions for the after-call SMS. Supports {{variable}} placeholders. When null, no after-call SMS is sent.

background_sound: "audio/office.ogg"

Ambient background sound to play during the call. null disables it.

background_sound_volume: number

Volume of the ambient background sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
calendly: object { connection_id, event_type_id }
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The event type ID representing the event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config: object { allowed_days, call_twice_in_a_row, calling_windows, 2 more }

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

allowed_days: array of "monday" or "tuesday" or "wednesday" or 4 more

Days of the week when calls are allowed, in the recipient's timezone. Default: Monday through Friday.

One of the following:
"monday"
"tuesday"
"wednesday"
"thursday"
"friday"
"saturday"
"sunday"
call_twice_in_a_row: boolean

If true and max_retry_attempts >= 2, attempt #2 fires immediately (skipping retry_delay_seconds) when attempt #1 didn't reach a human. Calling-window/allowed-days checks still apply. Only affects the 1→2 transition. Default: false.

calling_windows: array of object { calling_window_end_time, calling_window_start_time, retry_delay_seconds }
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone: optional string

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.

created_at: unknown
custom_tools: array of object { body_template, description, headers, 5 more }
body_template: string

JSON body template for the request. Use quoted {{variable}} placeholders (e.g. "{{name}}") for dynamic values

description: string

Human-readable description of what the tool does, used by the LLM to decide when to call it

minLength: 1
headers: array of object { key, value }

HTTP headers to include in the request. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
input_schema: array of object { name, required, type, 2 more }

Schema defining the parameters the LLM should extract from the conversation to pass to this tool

name: string
minLength: 1
required: boolean
type: "string" or "number" or "boolean" or 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description: optional string
enum_options: optional array of string
method: "GET" or "POST" or "PUT" or 2 more

HTTP method to use when calling the API endpoint

One of the following:
"GET"
"POST"
"PUT"
"PATCH"
"DELETE"
name: string

Unique tool name in lowercase_snake_case (e.g. check_inventory)

minLength: 1
query_params: array of object { key, value }

Query string parameters appended to the URL. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
url: string

Full URL of the API endpoint. Supports {{variable}} placeholders for dynamic values

minLength: 1
email_notification_address: string

Email address to receive notifications when a call ends with a matching outcome.

format: email
email_notification_outcomes: array of "not_interested" or "interested" or "completed" or 3 more

Which call outcomes trigger an email notification. E.g. ["interested", "completed"].

One of the following:
"not_interested"
"interested"
"completed"
"requested_callback_later"
"requested_callback_new_number"
"do_not_contact"
end_of_call_sentence: string
first_sentence: string
first_sentence_delay_ms: number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: -9007199254740991
maximum: 9007199254740991
first_sentence_mode: "generated" or "static" or "none"
One of the following:
"generated"
"static"
"none"
from_phone_number: string

Override the default outbound phone number for calls placed with this assistant. When null, the organization's default phone number is used.

human_transfer_mode: "warm" or "cold"

Warm or cold transfer when transfer_phone_number is set; null when transfer is not configured.

One of the following:
"warm"
"cold"
ivr_navigation_enabled: boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model: object { name, type } or object { openrouter_model_id, openrouter_provider, type } or object { api_key, api_url, model_name, type } or object { provider, realtime_model_id, type, realtime_voice_id }
One of the following:
UnionMember0 = object { name, type }
name: "gpt-4.1" or "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 = object { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter, e.g. openai/gpt-4.1.

openrouter_provider: string

The provider to use from OpenRouter, e.g. nebius, openai, or azure.

type: "openrouter"

Use a model from OpenRouter.

UnionMember2 = object { api_key, api_url, model_name, type }
api_key: string

API key sent as Bearer token to the custom endpoint.

minLength: 1
api_url: string

Base URL for the OpenAI-compatible API, e.g. https://api.together.xyz/v1

format: uri
model_name: string

Model name as expected by the provider, e.g. meta-llama/llama-3-70b

minLength: 1
type: "custom"

OpenAI-compatible chat completions API (bring your own endpoint and key).

UnionMember3 = object { provider, realtime_model_id, type, realtime_voice_id }
provider: "openai" or "google"

The provider to use from Realtime, e.g. openai or google.

One of the following:
"openai"
"google"
realtime_model_id: string

The model ID to use from Realtime, e.g. gpt-4.1.

type: "realtime"

Use a model from Realtime.

realtime_voice_id: optional string

Output voice for the realtime provider (e.g. OpenAI: marin; Gemini: Puck).

minLength: 1
max_call_duration_secs: number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

max_duration_end_message: string

Optional message the agent will say, without being interruptible, when the call reaches its max duration. Kept short so it fits inside the farewell buffer. If null, the call ends silently.

maxLength: 150
name: string
organization_id: string
prompt: string
sms_enabled: boolean

Enable SMS tool during calls. When enabled, the agent can send SMS messages to the user on the call.

sms_template: string

Hardcoded SMS template to send during calls. When set, this exact text is sent instead of letting the agent generate the message. Supports {{variable}} placeholders.

structured_output_config: array of object { name, required, type, 2 more }

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength: 1
required: boolean
type: "string" or "number" or "boolean" or 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description: optional string
enum_options: optional array of string
structured_output_prompt: string

Custom prompt for structured data extraction. If not provided, a default prompt is used. Available variables: {{transcript}}, {{call_direction}}, {{user_phone_number}}, {{agent_phone_number}}.

thinking_sound: "city-ambience.ogg" or "forest-ambience.ogg" or "office-ambience.ogg" or 4 more

Audio clip to play while the agent is processing a response. One of the built-in LiveKit audio clips; null disables it.

One of the following:
"city-ambience.ogg"
"forest-ambience.ogg"
"office-ambience.ogg"
"crowded-room.ogg"
"keyboard-typing.ogg"
"keyboard-typing2.ogg"
"hold_music.ogg"
thinking_sound_probability: number

Probability [0..1] that the thinking sound plays on any given turn; otherwise the agent is silent while thinking.

minimum: 0
maximum: 1
thinking_sound_volume: number

Volume of the thinking sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
transfer_phone_number: string

Phone number to transfer calls to when users request to speak to a human agent.

updated_at: unknown
voice: object { id, provider, speed, volume }
id: string

The ID of the voice.

minLength: 1
provider: "cartesia" or "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed: optional number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
volume: optional number

Volume of the voice (Cartesia only). 0.5–2.0, default 1.0. Ignored for other providers.

minimum: 0.5
maximum: 2
voicemail_message: string

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

voicemail_sms_prompt: string

Prompt / instructions for the voicemail SMS. Supports {{variable}} placeholders. When null, no SMS is sent on voicemail.

warm_transfer_summary_instructions: string

Warm transfer only: instructions for the supervisor handoff summary; null when not configured or cold transfer.

webhook_url: string

The webhook URL to call when the call is completed.

faq_items: optional array of object { answer, question, id, 2 more }
answer: string
question: string
id: optional string
needs_human_answer: optional boolean
source: optional "human" or "ai"
One of the following:
"human"
"ai"
pending_faq_count: optional number

Create an assistant

curl https://www.getrevox.com/api/assistants \
    -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $REVOX_API_KEY" \
    -d '{
          "name": "name",
          "prompt": "prompt"
        }'
{
  "assistant": {
    "id": "id",
    "after_call_sms_prompt": "after_call_sms_prompt",
    "background_sound": "audio/office.ogg",
    "background_sound_volume": 0,
    "calendly": {
      "connection_id": "connection_id",
      "event_type_id": "event_type_id"
    },
    "call_retry_config": {
      "allowed_days": [
        "monday"
      ],
      "call_twice_in_a_row": true,
      "calling_windows": [
        {
          "calling_window_end_time": "calling_window_end_time",
          "calling_window_start_time": "calling_window_start_time",
          "retry_delay_seconds": 1
        }
      ],
      "max_retry_attempts": 1,
      "timezone": "timezone"
    },
    "created_at": {},
    "custom_tools": [
      {
        "body_template": "body_template",
        "description": "x",
        "headers": [
          {
            "key": "x",
            "value": "value"
          }
        ],
        "input_schema": [
          {
            "name": "x",
            "required": true,
            "type": "string",
            "description": "description",
            "enum_options": [
              "string"
            ]
          }
        ],
        "method": "GET",
        "name": "name",
        "query_params": [
          {
            "key": "x",
            "value": "value"
          }
        ],
        "url": "x"
      }
    ],
    "email_notification_address": "dev@stainless.com",
    "email_notification_outcomes": [
      "not_interested"
    ],
    "end_of_call_sentence": "end_of_call_sentence",
    "first_sentence": "first_sentence",
    "first_sentence_delay_ms": -9007199254740991,
    "first_sentence_mode": "generated",
    "from_phone_number": "from_phone_number",
    "human_transfer_mode": "warm",
    "ivr_navigation_enabled": true,
    "llm_model": {
      "name": "gpt-4.1",
      "type": "dedicated-instance"
    },
    "max_call_duration_secs": 0,
    "max_duration_end_message": "max_duration_end_message",
    "name": "name",
    "organization_id": "organization_id",
    "prompt": "prompt",
    "sms_enabled": true,
    "sms_template": "sms_template",
    "structured_output_config": [
      {
        "name": "x",
        "required": true,
        "type": "string",
        "description": "description",
        "enum_options": [
          "string"
        ]
      }
    ],
    "structured_output_prompt": "structured_output_prompt",
    "thinking_sound": "city-ambience.ogg",
    "thinking_sound_probability": 0,
    "thinking_sound_volume": 0,
    "transfer_phone_number": "transfer_phone_number",
    "updated_at": {},
    "voice": {
      "id": "x",
      "provider": "cartesia",
      "speed": 0.6,
      "volume": 0.5
    },
    "voicemail_message": "voicemail_message",
    "voicemail_sms_prompt": "voicemail_sms_prompt",
    "warm_transfer_summary_instructions": "warm_transfer_summary_instructions",
    "webhook_url": "webhook_url",
    "faq_items": [
      {
        "answer": "answer",
        "question": "question",
        "id": "id",
        "needs_human_answer": true,
        "source": "human"
      }
    ],
    "pending_faq_count": 0
  }
}
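The same request as the curl example, constructed with Python's standard library. The request is only built here, not sent; urlopen would send it.

```python
import json
import os
import urllib.request

# Python equivalent of the curl example above; REVOX_API_KEY is read from
# the environment as in the shell example.
payload = {"name": "name", "prompt": "prompt"}
req = urllib.request.Request(
    "https://www.getrevox.com/api/assistants",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('REVOX_API_KEY', '')}",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call and return the
# assistant object documented under Returns.
```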