Closed
Description
tensorzero/tensorzero-core/src/endpoints/openai_compatible/types/chat_completions.rs
Lines 54 to 68 in 437ba75
```rust
pub struct OpenAICompatibleAssistantMessage {
    pub content: Option<Value>,
    pub tool_calls: Option<Vec<OpenAICompatibleToolCall>>,
}

#[derive(Clone, Debug, Deserialize, PartialEq)]
#[serde(tag = "role")]
#[serde(rename_all = "lowercase")]
pub enum OpenAICompatibleMessage {
    #[serde(alias = "developer")]
    System(OpenAICompatibleSystemMessage),
    User(OpenAICompatibleUserMessage),
    Assistant(OpenAICompatibleAssistantMessage),
    Tool(OpenAICompatibleToolMessage),
}
```
When the LLM assistant issues multiple tool calls concurrently, the tool results that follow must also be an array of tool messages, one per call, each with a matching `tool_call_id`. Otherwise the provider returns an error like:
```json
{
  "error": {
    "message": "An assistant message with 'tool_calls' must be followed by tool messages responding to each 'tool_call_id'. The following tool_call_ids did not have response messages: call_5lCYn0RkWOzv6sWEqhWAs0k1",
    "type": "invalid_request_error",
    "param": "messages.[5].role",
    "code": null
  }
}
```

Currently this is reproducible with gpt-5 and claude 4; grok seems fine with multiple tool result sections.
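To illustrate the rule the error message enforces, here is a minimal sketch (not TensorZero code; the helper name and example payloads are hypothetical) of the message shape OpenAI-compatible APIs expect when an assistant turn makes parallel tool calls, plus a small check for unmatched `tool_call_id`s:

```python
# Hypothetical helper: find tool_call_ids from assistant messages that have
# no corresponding "tool" response message later in the conversation.
def unanswered_tool_calls(messages):
    pending = set()
    for msg in messages:
        if msg["role"] == "assistant":
            for call in msg.get("tool_calls") or []:
                pending.add(call["id"])
        elif msg["role"] == "tool":
            pending.discard(msg["tool_call_id"])
    return pending

# An assistant turn issuing two parallel tool calls...
messages = [
    {"role": "user", "content": "What's the weather in Paris and Tokyo?"},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_a", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}},
        {"id": "call_b", "type": "function",
         "function": {"name": "get_weather", "arguments": '{"city": "Tokyo"}'}},
    ]},
    # ...must be followed by one tool message per tool_call_id.
    # Here only call_a is answered, which reproduces the API error above:
    {"role": "tool", "tool_call_id": "call_a", "content": "18C, sunny"},
]

print(unanswered_tool_calls(messages))  # {'call_b'}
```

Appending a second tool message with `"tool_call_id": "call_b"` makes the set empty, i.e. a request the strict providers (gpt-5, claude 4) would accept.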