fix(provider): use snake_case for thinking param with OpenAI-compatible APIs #10109
Conversation
|
Thanks for your contribution! This PR doesn't have a linked issue. All PRs must reference an existing issue. Please see CONTRIBUTING.md for details. |
|
The following comment was made by an LLM, it may be inaccurate: Potential Duplicate/Related PRs Found
- PR #8900: feat(opencode): add copilot specific provider to properly handle copilot reasoning tokens
- PR #5531: Feature/OpenAI compatible reasoning
- PR #8359: feat(opencode): add auto model detection for OpenAI-compatible providers

These PRs address related functionality around provider-specific parameter handling and reasoning tokens, but your PR (10109) appears to be the specific fix for the snake_case conversion issue with OpenAI-compatible APIs. |
|
Which API are you using? I don't think this is the case for all providers. |
|
Thanks for reporting on our behalf |
|
Correct, we hit this when using LiteLLM to proxy Anthropic thinking models and trying to switch thinking modes with c-t. |
fix(provider): use snake_case for thinking param with OpenAI-compatible APIs

When connecting to OpenAI-compatible proxies (like LiteLLM) with Claude/Anthropic models, the thinking parameter must use snake_case (budget_tokens) to match the OpenAI API spec. The @ai-sdk/anthropic package handles camelCase → snake_case conversion automatically, but @ai-sdk/openai-compatible passes parameters as-is without conversion.

This fix detects Claude/Anthropic models using the openai-compatible SDK and sends budget_tokens (snake_case) instead of reasoningEffort, as sketched below. Also updated maxOutputTokens() to handle both camelCase (budgetTokens) and snake_case (budget_tokens) for proper token limit calculation.

Fixes thinking mode when using Claude via LiteLLM or other OpenAI-compatible proxies.
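A minimal sketch of the shape of such a detection, assuming hypothetical names (`thinkingParams`, the `sdk` discriminator, and the `/claude/i` check are illustrative, not OpenCode's actual internals):

```ts
// Hypothetical sketch: pick the thinking-parameter spelling based on which
// SDK serves the model. Names are illustrative, not OpenCode's internals.
type ThinkingParams =
  | { thinking: { type: "enabled"; budgetTokens: number } } // camelCase: @ai-sdk/anthropic converts it
  | { thinking: { type: "enabled"; budget_tokens: number } }; // snake_case: openai-compatible sends as-is

function thinkingParams(
  sdk: "anthropic" | "openai-compatible",
  modelId: string,
  budget: number,
): ThinkingParams {
  const isClaude = /claude/i.test(modelId);
  if (sdk === "openai-compatible" && isClaude) {
    // @ai-sdk/openai-compatible forwards provider options verbatim, so the
    // wire format must already be snake_case.
    return { thinking: { type: "enabled", budget_tokens: budget } };
  }
  // @ai-sdk/anthropic converts camelCase -> snake_case itself.
  return { thinking: { type: "enabled", budgetTokens: budget } };
}
```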
Hi, the fix is for Claude models accessed via OpenAI-compatible proxies like LiteLLM, OpenRouter, Portkey and more.

Before:
```json
{ "thinking": { "type": "enabled", "budgetTokens": 16000 } }
```
❌ OpenAI-compatible endpoints don't convert camelCase → snake_case

After (fix):
```json
{ "thinking": { "type": "enabled", "budget_tokens": 16000 } }
```
✅ Correct format for the OpenAI-compatible API spec |
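One way to pin that wire format down in a test; a hypothetical sketch using Bun's built-in test runner, not one of the PR's actual tests:

```ts
import { expect, test } from "bun:test";

// Hypothetical payload builder mirroring the fix: snake_case for the
// openai-compatible path. Not the actual OpenCode implementation.
const claudeViaProxyParams = (budget: number) => ({
  thinking: { type: "enabled", budget_tokens: budget },
});

test("Claude via openai-compatible proxy serializes budget_tokens", () => {
  const body = JSON.stringify(claudeViaProxyParams(16000));
  expect(body).toContain('"budget_tokens":16000');
  expect(body).not.toContain("budgetTokens");
});
```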
|
it's gross, but it's really annoying that OpenAI-compatible providers all do this inconsistently; should re-evaluate this |
|
Problem with this is it completely breaks this config: I am expecting reasoning effort to be sent, and instead I get that. The user's decision on max tokens is not even respected; it resulted in 1 token set as max. |
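For illustration only, a hypothetical config of the shape the comment describes (the provider name, model name, and exact option keys are assumptions, not the commenter's actual file):

```json
{
  "provider": {
    "litellm": {
      "models": {
        "gpt-5": {
          "options": {
            "reasoningEffort": "high",
            "maxOutputTokens": 32000
          }
        }
      }
    }
  }
}
```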
|
I'll take a look at this today, apologies. I'm gonna be more stingy with PRs now |
|
@Chesars I'm going to revert this and we can handle it per provider instead. I don't wanna force something across the board |


What does this PR do?
Fixes thinking mode when using Claude/Anthropic models via OpenAI-compatible proxies like LiteLLM.
Summary:
The @ai-sdk/openai-compatible package passes parameters as-is without converting to snake_case. OpenAI-compatible APIs expect budget_tokens (snake_case) but OpenCode was sending budgetTokens (camelCase).

Changes:
- maxOutputTokens(): now supports both budgetTokens and budget_tokens for proper token limit calculation (see the sketch after this list)

Compatibility:
- Direct Anthropic API (@ai-sdk/anthropic): no changes, continues using camelCase
- Non-Claude models via openai-compatible: no changes, continue using reasoningEffort

Tests: Added 8 new unit tests
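A minimal sketch of that dual-key handling, assuming hypothetical option and limit shapes (the subtraction is illustrative, not OpenCode's exact budget math):

```ts
// Hypothetical sketch of reading the thinking budget from either spelling,
// so the max-output-token calculation works for both SDK paths.
interface ThinkingOptions {
  budgetTokens?: number; // camelCase: @ai-sdk/anthropic path
  budget_tokens?: number; // snake_case: openai-compatible path
}

function thinkingBudget(opts?: ThinkingOptions): number {
  return opts?.budgetTokens ?? opts?.budget_tokens ?? 0;
}

function maxOutputTokens(modelLimit: number, thinking?: ThinkingOptions): number {
  // Reserve the thinking budget out of the model's output limit, whichever
  // key spelling the caller used. Illustrative logic, not OpenCode's exact math.
  return Math.max(modelLimit - thinkingBudget(thinking), 1);
}
```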