Migrate model preset #7542
Conversation
💡 Codex Review
codex-rs/core/src/sandboxing/assessment.rs, lines 16 to 18 in 801e2f4:

```rust
use codex_protocol::ConversationId;
use codex_protocol::config_types::ReasoningEffort as ReasoningEffortConfig;
use codex_protocol::models::ContentItem;
```
The sandbox assessment module still imports ReasoningEffort from codex_protocol::config_types, but this enum was removed from that module in this commit (it now lives under openai_models). As a result the codex-core crate no longer compiles because the import at this location cannot be resolved. Update the import to the new module so builds succeed again.
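A minimal, self-contained sketch of the breakage and the fix (module names mirror the review; the exact paths in the real crate are an assumption here). It shows the import pointed at the item's new home, plus an optional re-export that would keep the old path compiling during a migration:

```rust
mod codex_protocol {
    pub mod openai_models {
        // After the migration, the enum lives here.
        #[derive(Debug, Clone, Copy, PartialEq, Eq)]
        pub enum ReasoningEffort {
            Low,
            Medium,
            High,
        }
    }

    pub mod config_types {
        // Optional compatibility shim: re-export from the new module so the
        // old `config_types::ReasoningEffort` path keeps resolving.
        pub use super::openai_models::ReasoningEffort;
    }
}

// The corrected import targets the new module directly.
use codex_protocol::openai_models::ReasoningEffort as ReasoningEffortConfig;

fn main() {
    let effort = ReasoningEffortConfig::Medium;
    // Thanks to the re-export, both paths name the same type.
    assert_eq!(effort, codex_protocol::config_types::ReasoningEffort::Medium);
    println!("import resolves: {:?}", effort);
}
```

Whether to keep a re-export or update every call site is a judgment call; a re-export avoids churn but hides the move.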
```rust
description: "Latest Codex-optimized flagship for deep and fast reasoning.".to_string(),
default_reasoning_effort: ReasoningEffort::Medium,
supported_reasoning_efforts: &[
    ReasoningEffortPreset {
        effort: ReasoningEffort::Low,
        description: "Fast responses with lighter reasoning",
    },
    ReasoningEffortPreset {
        effort: ReasoningEffort::Medium,
        description: "Balances speed and reasoning depth for everyday tasks",
    },
supported_reasoning_efforts: vec![
    ReasoningEffortPreset {
        effort: ReasoningEffort::High,
        description: "Maximizes reasoning depth for complex problems",
        description: "Maximizes reasoning depth for complex problems".to_string(),
```
Default model omits medium/low reasoning options
For the default gpt-5.1-codex-max preset the default_reasoning_effort remains Medium, but the supported_reasoning_efforts list now only contains High and XHigh. This means the default effort is no longer in the supported set and lower-effort choices disappear from the advertised model list, even though Medium is still configured as the default. Clients consuming these presets will be unable to pick the prior default effort for the main model. Please restore the missing efforts or realign the default with the supported list.
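The invariant the review describes can be guarded with a small unit check. The sketch below uses hypothetical stand-in types (not the real codex-rs definitions) to show the "default effort must appear in the supported list" rule and how the reported state violates it:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ReasoningEffort {
    Low,
    Medium,
    High,
    XHigh,
}

struct ReasoningEffortPreset {
    effort: ReasoningEffort,
}

struct ModelPreset {
    default_reasoning_effort: ReasoningEffort,
    supported_reasoning_efforts: Vec<ReasoningEffortPreset>,
}

// The invariant: a preset's default effort must be one of its supported efforts.
fn default_is_supported(p: &ModelPreset) -> bool {
    p.supported_reasoning_efforts
        .iter()
        .any(|s| s.effort == p.default_reasoning_effort)
}

fn main() {
    // Mirrors the reported state: default Medium, but only High/XHigh supported.
    let broken = ModelPreset {
        default_reasoning_effort: ReasoningEffort::Medium,
        supported_reasoning_efforts: vec![
            ReasoningEffortPreset { effort: ReasoningEffort::High },
            ReasoningEffortPreset { effort: ReasoningEffort::XHigh },
        ],
    };
    assert!(!default_is_supported(&broken));

    // Restoring the missing efforts satisfies the invariant again.
    let fixed = ModelPreset {
        default_reasoning_effort: ReasoningEffort::Medium,
        supported_reasoning_efforts: vec![
            ReasoningEffortPreset { effort: ReasoningEffort::Low },
            ReasoningEffortPreset { effort: ReasoningEffort::Medium },
            ReasoningEffortPreset { effort: ReasoningEffort::High },
        ],
    };
    assert!(default_is_supported(&fixed));
    println!("default-effort invariant checks pass");
}
```

A check like this in the presets module's tests would have caught the bad merge automatically.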
this is true
Bad merge. Thanks codex
codex-rs/core/src/rollout/policy.rs (outdated)
```rust
| EventMsg::Warning(_)
| EventMsg::TaskStarted(_)
| EventMsg::TaskComplete(_)
| EventMsg::AvailableModels(_)
```
Suggested change:

```diff
-| EventMsg::AvailableModels(_)
+| EventMsg::ListModelsResponse(_)
```
pakrym-oai left a comment:
Let's add a test for the new OP.
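A hedged sketch of the kind of test being requested: submit the new op and expect the corresponding event back. The types below are stand-ins built from the names in this PR (`Op::ListModels`, `EventMsg::AvailableModels`), not the real codex-rs API, and the model name is illustrative only:

```rust
#[derive(Debug, PartialEq)]
enum Op {
    ListModels,
}

#[derive(Debug, PartialEq)]
enum EventMsg {
    AvailableModels(Vec<String>),
}

// Stand-in for the submission path: a ListModels op yields an
// AvailableModels event carrying the advertised presets.
fn submit(op: Op) -> EventMsg {
    match op {
        Op::ListModels => {
            EventMsg::AvailableModels(vec!["gpt-5.1-codex-max".to_string()])
        }
    }
}

fn main() {
    // The test shape: send the op, destructure the event, assert on payload.
    let EventMsg::AvailableModels(models) = submit(Op::ListModels);
    assert!(!models.is_empty());
    assert!(models.contains(&"gpt-5.1-codex-max".to_string()));
    println!("ListModels returned {} model(s)", models.len());
}
```

In the real crate this would live alongside the other op/event round-trip tests and drive the actual submission loop rather than a stub.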
Summary of changes:
- Added `openai_models` in `core` and moved `PRESETS` under it
- Moved `ModelPreset`, `ModelUpgrade`, and `ReasoningEffortPreset` to `protocol`
- Added `Op::ListModels` and `EventMsg::AvailableModels`

Next steps:
- Update `app-server` and `tui` to use the introduced Op