Remove legacy ModelInfo and merge it with ModelFamily #7748
Conversation
@codex review this
💡 Codex Review
Here are some automated review suggestions for this pull request.
```diff
  let config = Self {
      model,
      review_model,
-     model_context_window,
-     model_auto_compact_token_limit,
+     model_context_window: cfg.model_context_window,
+     model_auto_compact_token_limit: cfg.model_auto_compact_token_limit,
```
Preserve derived defaults for model context limits
Loading config no longer derives `model_context_window` and `model_auto_compact_token_limit` from the selected model family; these fields now mirror only the raw TOML/profile values. In the provided fixtures (e.g., the o3, gpt3, and gpt5 profiles in `create_test_fixture`), those settings are omitted, so `Config::load_from_base_config_with_overrides` now returns `None` for both fields where the tests and callers previously received model-specific defaults (200_000/180_000 for o3, 16_385/14_746 for gpt-3.5, 272_000/244_800 for gpt-5.1). This will break the precedence tests around lines 2923–3235 and removes context-window metadata from telemetry for default profiles. Please restore deriving these defaults from the model family when the user has not overridden them.
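A minimal sketch of the fallback being requested, assuming a hypothetical `find_family_for_model` helper and `ModelFamily` fields named `context_window` and `auto_compact_token_limit` (the real codex-rs names may differ):

```rust
// Hypothetical: prefer the user's explicit TOML/profile values, falling
// back to the model family's defaults when they are unset.
let family = find_family_for_model(&model); // assumed helper, returns Option<ModelFamily>
let model_context_window = cfg
    .model_context_window
    .or_else(|| family.as_ref().and_then(|f| f.context_window));
let model_auto_compact_token_limit = cfg
    .model_auto_compact_token_limit
    .or_else(|| family.as_ref().and_then(|f| f.auto_compact_token_limit));
```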
```rust
    (context_window * 9) / 10
}

fn finalize_auto_compact_default(mut self) -> Self {
```
Can you make `auto_compact_token_limit` a getter instead?
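One shape that suggestion could take, sketched with assumed `Option<u64>` fields and the 9/10 ratio from the diff above; this is illustrative, not the actual codex-rs API:

```rust
impl Config {
    /// Effective auto-compact limit: the user's explicit setting when present,
    /// otherwise 90% of the context window, computed on demand rather than
    /// finalized into a stored field at load time.
    pub fn auto_compact_token_limit(&self) -> Option<u64> {
        self.model_auto_compact_token_limit
            .or_else(|| self.model_context_window.map(|cw| (cw * 9) / 10))
    }
}
```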
pakrym-oai left a comment
very nice!
```diff
  review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
- model_context_window: Some(272_000),
- model_auto_compact_token_limit: Some(244_800),
+ model_context_window: None,
```
Do we have coverage for this?
Mostly interested in ensuring `model_auto_compact_token_limit` is set correctly for well-known models.
codex/codex-rs/core/tests/suite/compact.rs
Lines 454 to 466 in b519267
```rust
#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
async fn multiple_auto_compact_per_task_runs_after_token_limit_hit() {
    skip_if_no_network!();
    let server = start_mock_server().await;
    let codex = test_codex()
        .build(&server)
        .await
        .expect("build codex")
        .codex;
    // user message
```
This should cover it, although it's not very precise.
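A more precise check could assert the derived defaults directly for a well-known model. The `find_family_for_model` helper and field names below are assumptions; the expected numbers are the o3 defaults quoted earlier in this thread:

```rust
#[test]
fn derives_auto_compact_limit_for_well_known_models() {
    // Assumed lookup API; asserts the family-level defaults directly rather
    // than exercising the full compaction flow end to end.
    let family = find_family_for_model("o3").expect("o3 is a known model");
    assert_eq!(family.context_window, Some(200_000));
    assert_eq!(family.auto_compact_token_limit, Some(180_000));
}
```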
This is a step towards removing the need to know `model` when constructing config. First, we don't need to know `model_info`; we just respect the value if the user has already set it. Next step: we won't need to know `model` unless the user has explicitly set it in `config.toml`.
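In that end state, config would store only what the user wrote, deferring model-family lookups to the call sites that need them. A rough sketch with assumed types:

```rust
// Hypothetical shape of the end state: no model-family lookup at
// construction time; every field mirrors config.toml/profile values only.
struct Config {
    model: Option<String>,                       // None unless set in config.toml
    model_context_window: Option<u64>,           // raw user value, no derived default
    model_auto_compact_token_limit: Option<u64>, // raw user value, no derived default
}
```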