Extend the OpenAI API spec to include reasoning content blocks in chat completion responses (OpenAI-compatible inference endpoint) #3578
Replies: 4 comments 5 replies
-
Hi @d8rt8v - the OpenAI spec doesn't have fields for raw reasoning, so we currently don't return it in the OpenAI-compatible endpoint. You'll have to use the standard /inference endpoint instead.
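For illustration, the native endpoint separates reasoning into its own content block alongside the answer. The shape below is a sketch (IDs and field values are made up), not verbatim gateway output:

```json
{
  "inference_id": "00000000-0000-0000-0000-000000000000",
  "content": [
    {
      "type": "thought",
      "text": "Shorter wavelengths scatter more strongly (Rayleigh scattering)..."
    },
    {
      "type": "text",
      "text": "The sky appears blue because sunlight scatters off air molecules..."
    }
  ]
}
```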
-
Hi @d8rt8v @GabrielBianconi, please check my PR #3746.
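For context (and not necessarily what that PR implements): some OpenAI-compatible servers, such as DeepSeek's API and vLLM, expose reasoning through a non-standard reasoning_content field on the assistant message. A chat completion extended along those lines might look like:

```json
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The sky appears blue because sunlight scatters off air molecules...",
        "reasoning_content": "Shorter wavelengths scatter more strongly (Rayleigh scattering)..."
      },
      "finish_reason": "stop"
    }
  ]
}
```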
-
Hi @GabrielBianconi! Just wanted to check whether you're planning to implement this feature. Using T0 as an LLM gateway is a bit of a struggle without reasoning support via the OAI endpoints.
-
@d8rt8v upgrading this to an issue, thank you for the feedback.
-
Hello, when making a curl request to an OAI-compatible endpoint like /openai/v1/chat/completions, I only receive the final assistant message. However, with the /inference endpoint, I’m able to see reasoning traces from the model.
Is there any way to enable reasoning output when using the /chat/completions endpoint, or is this strictly not supported?
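Roughly, the two calls look like this (the localhost base URL and my_function are placeholders; the model name follows the tensorzero::function_name:: convention, adjust for your setup):

```bash
# OpenAI-compatible endpoint: only the final assistant message comes back
curl -X POST http://localhost:3000/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tensorzero::function_name::my_function",
    "messages": [{"role": "user", "content": "Why is the sky blue?"}]
  }'

# Native inference endpoint: reasoning traces show up in the response
curl -X POST http://localhost:3000/inference \
  -H "Content-Type: application/json" \
  -d '{
    "function_name": "my_function",
    "input": {"messages": [{"role": "user", "content": "Why is the sky blue?"}]}
  }'
```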
Thanks!