
Update an assistant

client.assistants.update(id: string, body?: AssistantUpdateParams { after_call_sms_prompt, background_sound, background_sound_volume, 31 more }, options?: RequestOptions): AssistantUpdateResponse { assistant }
PATCH/assistants/{id}

Update one or more fields on an existing assistant. Supports partial updates — only the fields you include in the request body will be changed. You can update the prompt, voice, first sentence, name, or any other assistant property without affecting the rest of the configuration.
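For example, to change only the opening line, send just those fields; everything else is preserved. A minimal sketch (field names come from the parameters below; the voice ID is a placeholder):

```typescript
// Partial update: only the fields present in the body are changed.
// 'your-voice-id' is a placeholder; list real IDs via the /voices endpoint.
const body = {
  first_sentence: 'Hi, this is Alex calling from Acme.',
  first_sentence_mode: 'static' as const, // speak the sentence verbatim
  voice: { id: 'your-voice-id', provider: 'cartesia' as const },
};
// await client.assistants.update(assistantId, body);
```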

Parameters
id: string
body: AssistantUpdateParams { after_call_sms_prompt, background_sound, background_sound_volume, 31 more }
after_call_sms_prompt?: string | null

Prompt / instructions for the after-call SMS. Supports {{variable}} placeholders. When null, no after-call SMS is sent.

background_sound?: "audio/office.ogg" | null

Ambient background sound to play during the call. null/omitted disables it.

background_sound_volume?: number

Volume of the ambient background sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
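Taken together, the two background-sound fields might be set like this (a sketch; audio/office.ogg is the only documented clip):

```typescript
const body = {
  background_sound: 'audio/office.ogg' as const, // null or omitted disables it
  background_sound_volume: 0.3, // 0 = silent, 1 = max
};
```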
calendly?: Calendly | null
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The ID of the Calendly event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config?: CallRetryConfig

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

allowed_days: Array<"monday" | "tuesday" | "wednesday" | 4 more>

Days of the week when calls are allowed, in the recipient's timezone. Default: Monday through Friday.

One of the following:
"monday"
"tuesday"
"wednesday"
"thursday"
"friday"
"saturday"
"sunday"
call_twice_in_a_row: boolean

If true and max_retry_attempts >= 2, attempt #2 fires immediately (skipping retry_delay_seconds) when attempt #1 didn't reach a human. Calling-window/allowed-days checks still apply. Only affects the 1→2 transition. Default: false.

calling_windows: Array<CallingWindow>
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone?: string | null

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.
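Putting the retry fields together, a config that mirrors the documented defaults might look like this (a sketch; the timezone override is optional):

```typescript
const call_retry_config = {
  allowed_days: ['monday', 'tuesday', 'wednesday', 'thursday', 'friday'],
  call_twice_in_a_row: false, // attempt #2 waits retry_delay_seconds as usual
  calling_windows: [
    {
      calling_window_start_time: '10:00', // 'HH:mm' (24-hour) or 'H:mma' (12-hour)
      calling_window_end_time: '6pm',
      retry_delay_seconds: 7200, // 2 hours between attempts
    },
  ],
  max_retry_attempts: 3,
  timezone: 'America/New_York', // otherwise inferred from the phone number
};
```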

custom_tools?: Array<CustomTool> | null

Custom API tools the assistant can call during conversations. Each tool defines an HTTP endpoint with variable substitution.

body_template: string | null

JSON body template for the request. Use quoted {{variable}} placeholders (e.g. "{{name}}") for dynamic values

description: string

Human-readable description of what the tool does, used by the LLM to decide when to call it

minLength: 1
headers: Array<Header>

HTTP headers to include in the request. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
input_schema: Array<InputSchema>

Schema defining the parameters the LLM should extract from the conversation to pass to this tool

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
method: "GET" | "POST" | "PUT" | 2 more

HTTP method to use when calling the API endpoint

One of the following:
"GET"
"POST"
"PUT"
"PATCH"
"DELETE"
name: string

Unique tool name in lowercase_snake_case (e.g. check_inventory)

minLength: 1
query_params: Array<QueryParam>

Query string parameters appended to the URL. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
url: string

Full URL of the API endpoint. Supports {{variable}} placeholders for dynamic values

minLength: 1
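Putting the custom-tool fields together, a minimal GET tool might look like this (the endpoint URL and the {{sku}} / {{api_token}} variables are hypothetical):

```typescript
const customTool = {
  name: 'check_inventory', // unique, lowercase_snake_case
  description: 'Look up current stock for a product the caller asks about',
  method: 'GET' as const,
  url: 'https://example.com/api/inventory', // hypothetical endpoint
  headers: [{ key: 'Authorization', value: 'Bearer {{api_token}}' }],
  query_params: [{ key: 'sku', value: '{{sku}}' }], // filled from input_schema
  body_template: null, // GET requests carry no body
  input_schema: [
    {
      name: 'sku',
      type: 'string' as const,
      required: true,
      description: 'Product SKU mentioned by the caller',
    },
  ],
};
```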
email_notification_address?: string | null

Email address to receive notifications when a call ends with a matching outcome.

format: email
email_notification_outcomes?: Array<"not_interested" | "interested" | "completed" | 3 more> | null

Which call outcomes trigger an email notification. E.g. ["interested", "completed"].

One of the following:
"not_interested"
"interested"
"completed"
"requested_callback_later"
"requested_callback_new_number"
"do_not_contact"
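For instance, to be emailed only when a call ends well (the address is a placeholder):

```typescript
const body = {
  email_notification_address: 'sales@example.com', // placeholder address
  email_notification_outcomes: ['interested', 'completed'] as const,
};
```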
end_of_call_sentence?: string

Optional message to say when the agent decides to end the call.

faq_items?: Array<FaqItem>

FAQ items to associate with this assistant. When provided, replaces all existing FAQ items.

answer: string
question: string
first_sentence?: string

The first sentence to use for the call. This will be given to the LLM

first_sentence_delay_ms?: number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: 0
maximum: 9007199254740991
first_sentence_mode?: "generated" | "static" | "none"

How the first sentence should be handled. "generated" means the LLM will generate a response based on the first_sentence instruction. "static" means the first_sentence will be spoken exactly as provided. "none" means the agent will not speak first and will wait for the user.

One of the following:
"generated"
"static"
"none"
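The three modes behave quite differently; a quick sketch of each:

```typescript
// "static": the sentence is spoken exactly as written.
const staticOpening = {
  first_sentence_mode: 'static' as const,
  first_sentence: 'Hello, this is Acme Support. How can I help?',
};

// "generated": the sentence is an instruction; the LLM writes the opening line.
const generatedOpening = {
  first_sentence_mode: 'generated' as const,
  first_sentence: 'Greet the caller by name and mention their open order.',
};

// "none": the agent stays silent and waits for the user to speak first.
const silentOpening = { first_sentence_mode: 'none' as const };
```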
from_phone_number?: string | null

Override the default outbound phone number for calls placed with this assistant. Must be a phone number owned by the organization in E.164 format (e.g. +1234567890). When null, the organization's default phone number is used.

human_transfer_mode?: "warm" | "cold" | null

When transfer_phone_number is set: "warm" (AI bridges) or "cold" (SIP REFER; trunk must allow REFER/PSTN). Omit or null when transfer is disabled.

One of the following:
"warm"
"cold"
ivr_navigation_enabled?: boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model?: UnionMember0 { name, type } | UnionMember1 { openrouter_model_id, openrouter_provider, type } | UnionMember2 { api_key, api_url, model_name, type } | UnionMember3 { provider, realtime_model_id, type, realtime_voice_id }
One of the following:
UnionMember0 { name, type }
name: "gpt-4.1" | "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter, e.g. openai/gpt-4.1.

openrouter_provider: string

The upstream provider to route to on OpenRouter, e.g. nebius, openai, or azure.

type: "openrouter"

Use a model from OpenRouter.

UnionMember2 { api_key, api_url, model_name, type }
api_key: string

API key sent as Bearer token to the custom endpoint.

minLength: 1
api_url: string

Base URL for the OpenAI-compatible API, e.g. https://api.together.xyz/v1

format: uri
model_name: string

Model name as expected by the provider, e.g. meta-llama/llama-3-70b

minLength: 1
type: "custom"

OpenAI-compatible chat completions API (bring your own endpoint and key).

UnionMember3 { provider, realtime_model_id, type, realtime_voice_id }
provider: "openai" | "google"

The realtime provider to use, e.g. openai or google.

One of the following:
"openai"
"google"
realtime_model_id: string

The realtime model ID to use, e.g. gpt-4.1.

type: "realtime"

Use a model from Realtime.

realtime_voice_id?: string

Output voice for the realtime provider (e.g. OpenAI: marin; Gemini: Puck).

minLength: 1
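Each union member is discriminated by its type field. Sketches of the four variants (the API key and IDs are placeholders):

```typescript
// Hosted model on a dedicated instance.
const dedicated = { type: 'dedicated-instance' as const, name: 'gpt-4.1' as const };

// Routed through OpenRouter.
const openrouter = {
  type: 'openrouter' as const,
  openrouter_model_id: 'openai/gpt-4.1',
  openrouter_provider: 'openai',
};

// Bring-your-own OpenAI-compatible endpoint.
const custom = {
  type: 'custom' as const,
  api_url: 'https://api.together.xyz/v1', // any OpenAI-compatible base URL
  api_key: 'your-api-key', // placeholder; sent as a Bearer token
  model_name: 'meta-llama/llama-3-70b',
};

// Realtime provider.
const realtime = {
  type: 'realtime' as const,
  provider: 'openai' as const,
  realtime_model_id: 'gpt-4.1',
  realtime_voice_id: 'marin',
};
```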
max_call_duration_secs?: number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

max_duration_end_message?: string | null

Optional message the agent will say, without being interruptible, when the call reaches its max duration. Kept short so it fits inside the farewell buffer. If not set, the call ends silently.

maxLength: 150
name?: string
prompt?: string

The prompt to use for the call. This will be given to the LLM (gpt-4.1)

sms_enabled?: boolean

Enable SMS tool during calls. When enabled, the agent can send SMS messages to the user on the call.

sms_template?: string | null

Hardcoded SMS template to send during calls. When set, this exact text is sent instead of letting the agent generate the message. Supports {{variable}} placeholders.

structured_output_config?: Array<StructuredOutputConfig>

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
structured_output_prompt?: string | null

Custom prompt for structured data extraction. If not provided, a default prompt is used. Available variables: {{transcript}}, {{call_direction}}, {{user_phone_number}}, {{agent_phone_number}}.
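Combining the two fields, a sketch that extracts an email plus an enum preference (the field names are illustrative):

```typescript
const structured_output_config = [
  {
    name: 'email',
    type: 'string' as const,
    required: true,
    description: "The caller's email address, if they mention one",
  },
  {
    name: 'callback_preference',
    type: 'enum' as const,
    required: false,
    enum_options: ['morning', 'afternoon', 'evening'],
  },
];
```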

thinking_sound?: "city-ambience.ogg" | "forest-ambience.ogg" | "office-ambience.ogg" | 4 more | null

Audio clip to play while the agent is processing a response. One of the built-in LiveKit audio clips; null/omitted disables it.

One of the following:
"city-ambience.ogg"
"forest-ambience.ogg"
"office-ambience.ogg"
"crowded-room.ogg"
"keyboard-typing.ogg"
"keyboard-typing2.ogg"
"hold_music.ogg"
thinking_sound_probability?: number

Probability [0..1] that the thinking sound plays on any given turn; otherwise the agent is silent while thinking.

minimum: 0
maximum: 1
thinking_sound_volume?: number

Volume of the thinking sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
transfer_phone_number?: string | null

Phone number to transfer calls to when users request to speak to a human agent in E.164 format (e.g. +1234567890).
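The transfer fields work together; for example, a warm transfer setup (the number is the documented example value):

```typescript
const body = {
  transfer_phone_number: '+1234567890', // E.164 format
  human_transfer_mode: 'warm' as const, // the AI bridges the call and briefs the human
  warm_transfer_summary_instructions:
    "Lead with the caller's name and why they asked for a human.",
};
```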

voice?: Voice

The voice to use for the call. You can get the list of voices using the /voices endpoint

id: string

The ID of the voice.

minLength: 1
provider: "cartesia" | "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed?: number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
volume?: number

Volume of the voice (Cartesia only). 0.5–2.0, default 1.0. Ignored for other providers.

minimum: 0.5
maximum: 2
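A complete voice object might look like this (the ID is a placeholder; fetch real IDs from the /voices endpoint):

```typescript
const voice = {
  id: 'your-voice-id', // placeholder
  provider: 'cartesia' as const,
  speed: 1.1, // Cartesia accepts 0.6-1.5; ElevenLabs 0.7-1.2
  volume: 1.0, // Cartesia only; ignored for other providers
};
```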
voicemail_message?: string | null

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

voicemail_sms_prompt?: string | null

SMS message to send when the call reaches voicemail. Supports {{variable}} placeholders. When null, no SMS is sent on voicemail.
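For instance, to leave a message and follow up by text when voicemail is detected ({{first_name}} is a hypothetical variable):

```typescript
const body = {
  voicemail_message: "Hi, sorry we missed you. We'll try again later.",
  voicemail_sms_prompt:
    'Text {{first_name}} a short apology and ask for a good time to call back.',
};
```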

warm_transfer_summary_instructions?: string | null

When using warm transfer: extra instructions for the supervisor handoff summary. If null or empty, the API uses the product default briefing when the call is loaded for the agent.

webhook_url?: string

The webhook URL to call when the call is completed.

Returns
AssistantUpdateResponse { assistant }
assistant: Assistant { id, after_call_sms_prompt, background_sound, 36 more }
id: string
after_call_sms_prompt: string | null

Prompt / instructions for the after-call SMS. Supports {{variable}} placeholders. When null, no after-call SMS is sent.

background_sound: "audio/office.ogg" | null

Ambient background sound to play during the call. null disables it.

background_sound_volume: number

Volume of the ambient background sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
calendly: Calendly | null
connection_id: string

The connection ID representing the link between your Calendly account and Revox.

event_type_id: string

The ID of the Calendly event type to schedule (e.g. https://api.calendly.com/event_types/b2330295-2a91-4a1d-bb73-99e7707663d5).

call_retry_config: CallRetryConfig | null

Configuration for call retry behavior including time windows, delays, and max iterations. If not provided, defaults will be used.

allowed_days: Array<"monday" | "tuesday" | "wednesday" | 4 more>

Days of the week when calls are allowed, in the recipient's timezone. Default: Monday through Friday.

One of the following:
"monday"
"tuesday"
"wednesday"
"thursday"
"friday"
"saturday"
"sunday"
call_twice_in_a_row: boolean

If true and max_retry_attempts >= 2, attempt #2 fires immediately (skipping retry_delay_seconds) when attempt #1 didn't reach a human. Calling-window/allowed-days checks still apply. Only affects the 1→2 transition. Default: false.

calling_windows: Array<CallingWindow>
calling_window_end_time: string

End time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '17:00', '6pm'. Default: '18:00'.

calling_window_start_time: string

Start time for the calling window in the recipient's timezone (or timezone_override if provided). Format: 'HH:mm' (24-hour) or 'H:mma' (12-hour). Examples: '09:00', '10am'. Default: '10:00'.

retry_delay_seconds: number

Delay between retry attempts in seconds. Default: 7200 (2 hours).

exclusiveMinimum: 0
maximum: 9007199254740991
max_retry_attempts: number

Maximum number of call retry attempts. Default: 3.

exclusiveMinimum: 0
maximum: 9007199254740991
timezone?: string | null

Optional IANA timezone identifier to override the automatic timezone detection from phone number. If not provided, timezone is determined from the recipient's phone number country code. Examples: 'America/New_York', 'Europe/Paris'.

created_at: unknown
custom_tools: Array<CustomTool> | null
body_template: string | null

JSON body template for the request. Use quoted {{variable}} placeholders (e.g. "{{name}}") for dynamic values

description: string

Human-readable description of what the tool does, used by the LLM to decide when to call it

minLength: 1
headers: Array<Header>

HTTP headers to include in the request. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
input_schema: Array<InputSchema>

Schema defining the parameters the LLM should extract from the conversation to pass to this tool

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
method: "GET" | "POST" | "PUT" | 2 more

HTTP method to use when calling the API endpoint

One of the following:
"GET"
"POST"
"PUT"
"PATCH"
"DELETE"
name: string

Unique tool name in lowercase_snake_case (e.g. check_inventory)

minLength: 1
query_params: Array<QueryParam>

Query string parameters appended to the URL. Values support {{variable}} placeholders

key: string
minLength: 1
value: string
url: string

Full URL of the API endpoint. Supports {{variable}} placeholders for dynamic values

minLength: 1
email_notification_address: string | null

Email address to receive notifications when a call ends with a matching outcome.

format: email
email_notification_outcomes: Array<"not_interested" | "interested" | "completed" | 3 more> | null

Which call outcomes trigger an email notification. E.g. ["interested", "completed"].

One of the following:
"not_interested"
"interested"
"completed"
"requested_callback_later"
"requested_callback_new_number"
"do_not_contact"
end_of_call_sentence: string | null
first_sentence: string | null
first_sentence_delay_ms: number

Delay in milliseconds before speaking the first sentence. Default: 400.

minimum: -9007199254740991
maximum: 9007199254740991
first_sentence_mode: "generated" | "static" | "none"
One of the following:
"generated"
"static"
"none"
from_phone_number: string | null

Override the default outbound phone number for calls placed with this assistant. When null, the organization's default phone number is used.

human_transfer_mode: "warm" | "cold" | null

Warm or cold transfer when transfer_phone_number is set; null when transfer is not configured.

One of the following:
"warm"
"cold"
ivr_navigation_enabled: boolean

Enable IVR navigation tools. When enabled, the assistant can send DTMF tones and skip turns to navigate phone menus.

llm_model: UnionMember0 { name, type } | UnionMember1 { openrouter_model_id, openrouter_provider, type } | UnionMember2 { api_key, api_url, model_name, type } | UnionMember3 { provider, realtime_model_id, type, realtime_voice_id }
One of the following:
UnionMember0 { name, type }
name: "gpt-4.1" | "ministral-3-8b-instruct"
One of the following:
"gpt-4.1"
"ministral-3-8b-instruct"
type: "dedicated-instance"
UnionMember1 { openrouter_model_id, openrouter_provider, type }
openrouter_model_id: string

The model ID to use from OpenRouter, e.g. openai/gpt-4.1.

openrouter_provider: string

The upstream provider to route to on OpenRouter, e.g. nebius, openai, or azure.

type: "openrouter"

Use a model from OpenRouter.

UnionMember2 { api_key, api_url, model_name, type }
api_key: string

API key sent as Bearer token to the custom endpoint.

minLength: 1
api_url: string

Base URL for the OpenAI-compatible API, e.g. https://api.together.xyz/v1

format: uri
model_name: string

Model name as expected by the provider, e.g. meta-llama/llama-3-70b

minLength: 1
type: "custom"

OpenAI-compatible chat completions API (bring your own endpoint and key).

UnionMember3 { provider, realtime_model_id, type, realtime_voice_id }
provider: "openai" | "google"

The realtime provider to use, e.g. openai or google.

One of the following:
"openai"
"google"
realtime_model_id: string

The realtime model ID to use, e.g. gpt-4.1.

type: "realtime"

Use a model from Realtime.

realtime_voice_id?: string

Output voice for the realtime provider (e.g. OpenAI: marin; Gemini: Puck).

minLength: 1
max_call_duration_secs: number

The maximum duration of the call in seconds. This is the maximum time the call will be allowed to run.

max_duration_end_message: string | null

Optional message the agent will say, without being interruptible, when the call reaches its max duration. Kept short so it fits inside the farewell buffer. If null, the call ends silently.

maxLength: 150
name: string
organization_id: string
prompt: string
sms_enabled: boolean

Enable SMS tool during calls. When enabled, the agent can send SMS messages to the user on the call.

sms_template: string | null

Hardcoded SMS template to send during calls. When set, this exact text is sent instead of letting the agent generate the message. Supports {{variable}} placeholders.

structured_output_config: Array<StructuredOutputConfig> | null

The structured output config to use for the call. This is used to extract the data from the call (like email, name, company name, etc.).

name: string
minLength: 1
required: boolean
type: "string" | "number" | "boolean" | 3 more
One of the following:
"string"
"number"
"boolean"
"enum"
"date"
"datetime"
description?: string
enum_options?: Array<string>
structured_output_prompt: string | null

Custom prompt for structured data extraction. If not provided, a default prompt is used. Available variables: {{transcript}}, {{call_direction}}, {{user_phone_number}}, {{agent_phone_number}}.

thinking_sound: "city-ambience.ogg" | "forest-ambience.ogg" | "office-ambience.ogg" | 4 more | null

Audio clip to play while the agent is processing a response. One of the built-in LiveKit audio clips; null disables it.

One of the following:
"city-ambience.ogg"
"forest-ambience.ogg"
"office-ambience.ogg"
"crowded-room.ogg"
"keyboard-typing.ogg"
"keyboard-typing2.ogg"
"hold_music.ogg"
thinking_sound_probability: number

Probability [0..1] that the thinking sound plays on any given turn; otherwise the agent is silent while thinking.

minimum: 0
maximum: 1
thinking_sound_volume: number

Volume of the thinking sound (0 = silent, 1 = max).

minimum: 0
maximum: 1
transfer_phone_number: string | null

Phone number to transfer calls to when users request to speak to a human agent.

updated_at: unknown
voice: Voice | null
id: string

The ID of the voice.

minLength: 1
provider: "cartesia" | "elevenlabs"

The provider of the voice.

One of the following:
"cartesia"
"elevenlabs"
speed?: number

The speed of the voice. Range depends on provider: Cartesia 0.6–1.5, ElevenLabs 0.7–1.2. Default is 1.0.

minimum: 0.6
maximum: 1.5
volume?: number

Volume of the voice (Cartesia only). 0.5–2.0, default 1.0. Ignored for other providers.

minimum: 0.5
maximum: 2
voicemail_message: string | null

If set, when voicemail is detected the agent will speak this message then hang up; if null, hang up immediately.

voicemail_sms_prompt: string | null

Prompt / instructions for the voicemail SMS. Supports {{variable}} placeholders. When null, no SMS is sent on voicemail.

warm_transfer_summary_instructions: string | null

Warm transfer only: instructions for the supervisor handoff summary; null when not configured or cold transfer.

webhook_url: string | null

The webhook URL to call when the call is completed.

faq_items?: Array<FaqItem>
answer: string
question: string
id?: string
needs_human_answer?: boolean
source?: "human" | "ai"
One of the following:
"human"
"ai"
pending_faq_count?: number

Update an assistant

import Revox from '@revoxai/sdk';

const client = new Revox({
  apiKey: process.env['REVOX_API_KEY'], // This is the default and can be omitted
});

const assistant = await client.assistants.update('id', {
  name: 'name', // any subset of AssistantUpdateParams may be supplied
});

console.log(assistant.assistant);
{
  "assistant": {
    "id": "id",
    "after_call_sms_prompt": "after_call_sms_prompt",
    "background_sound": "audio/office.ogg",
    "background_sound_volume": 0,
    "calendly": {
      "connection_id": "connection_id",
      "event_type_id": "event_type_id"
    },
    "call_retry_config": {
      "allowed_days": [
        "monday"
      ],
      "call_twice_in_a_row": true,
      "calling_windows": [
        {
          "calling_window_end_time": "calling_window_end_time",
          "calling_window_start_time": "calling_window_start_time",
          "retry_delay_seconds": 1
        }
      ],
      "max_retry_attempts": 1,
      "timezone": "timezone"
    },
    "created_at": {},
    "custom_tools": [
      {
        "body_template": "body_template",
        "description": "x",
        "headers": [
          {
            "key": "x",
            "value": "value"
          }
        ],
        "input_schema": [
          {
            "name": "x",
            "required": true,
            "type": "string",
            "description": "description",
            "enum_options": [
              "string"
            ]
          }
        ],
        "method": "GET",
        "name": "name",
        "query_params": [
          {
            "key": "x",
            "value": "value"
          }
        ],
        "url": "x"
      }
    ],
    "email_notification_address": "dev@stainless.com",
    "email_notification_outcomes": [
      "not_interested"
    ],
    "end_of_call_sentence": "end_of_call_sentence",
    "first_sentence": "first_sentence",
    "first_sentence_delay_ms": 400,
    "first_sentence_mode": "generated",
    "from_phone_number": "from_phone_number",
    "human_transfer_mode": "warm",
    "ivr_navigation_enabled": true,
    "llm_model": {
      "name": "gpt-4.1",
      "type": "dedicated-instance"
    },
    "max_call_duration_secs": 0,
    "max_duration_end_message": "max_duration_end_message",
    "name": "name",
    "organization_id": "organization_id",
    "prompt": "prompt",
    "sms_enabled": true,
    "sms_template": "sms_template",
    "structured_output_config": [
      {
        "name": "x",
        "required": true,
        "type": "string",
        "description": "description",
        "enum_options": [
          "string"
        ]
      }
    ],
    "structured_output_prompt": "structured_output_prompt",
    "thinking_sound": "city-ambience.ogg",
    "thinking_sound_probability": 0,
    "thinking_sound_volume": 0,
    "transfer_phone_number": "transfer_phone_number",
    "updated_at": {},
    "voice": {
      "id": "x",
      "provider": "cartesia",
      "speed": 0.6,
      "volume": 0.5
    },
    "voicemail_message": "voicemail_message",
    "voicemail_sms_prompt": "voicemail_sms_prompt",
    "warm_transfer_summary_instructions": "warm_transfer_summary_instructions",
    "webhook_url": "webhook_url",
    "faq_items": [
      {
        "answer": "answer",
        "question": "question",
        "id": "id",
        "needs_human_answer": true,
        "source": "human"
      }
    ],
    "pending_faq_count": 0
  }
}