Codex should work out of the box for most users, but when you want to tailor it to your workflow, a wide range of configuration options is available.
Codex configuration file
The configuration file for Codex is located at ~/.codex/config.toml.
To access the configuration file from the Codex IDE extension, click the gear icon in the top right corner of the extension, then click Codex Settings > Open config.toml.
This configuration file is shared between the CLI and the IDE extension and can be used to configure things like the default model, approval policies, sandbox settings or MCP servers that Codex should have access to.
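For example, a small config.toml drawing on the options described below might look like this (the values are illustrative):

```toml
model = "gpt-5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```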
High level configuration options
Codex provides a wide range of configuration options. Some of the most commonly changed settings are:
Default model
Pick which model Codex uses by default in both the CLI and IDE.
Using config.toml:
model = "gpt-5"
Using CLI arguments:
codex --model gpt-5
Model provider
Select the backend provider referenced by the active model. Be sure to define the provider in your config first.
Using config.toml:
model_provider = "ollama"
Using CLI arguments:
codex --config model_provider="ollama"
Approval prompts
Control when Codex pauses to ask before running generated commands.
Using config.toml:
approval_policy = "on-request"
Using CLI arguments:
codex --ask-for-approval on-request
Sandbox level
Adjust how much filesystem and network access Codex has while executing commands.
Using config.toml:
sandbox_mode = "workspace-write"
Using CLI arguments:
codex --sandbox workspace-write
Reasoning depth
Tune how much reasoning effort the model applies when supported.
Using config.toml:
model_reasoning_effort = "high"
Using CLI arguments:
codex --config model_reasoning_effort="high"
Command environment
Restrict or expand which environment variables are forwarded to spawned commands.
Using config.toml:
[shell_environment_policy]
include_only = ["PATH", "HOME"]
Using CLI arguments:
codex --config shell_environment_policy.include_only='["PATH","HOME"]'
Profiles
Profiles bundle a set of configuration values so you can jump between setups without editing config.toml each time. They currently apply to the Codex CLI.
Define profiles under [profiles.<name>] in config.toml and launch the CLI with codex --profile <name>:
# Top-level defaults
model = "gpt-5-codex"
approval_policy = "on-request"

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"

[profiles.lightweight]
model = "gpt-4.1"
approval_policy = "untrusted"
Running codex --profile deep-review uses the gpt-5-pro model with high reasoning effort and never pauses for approval. Running codex --profile lightweight uses the gpt-4.1 model with the untrusted approval policy. To make one profile the default, add profile = "deep-review" at the top level of config.toml; the CLI will load that profile unless you override it on the command line.
Values resolve in this order: explicit CLI flags (like --model) override everything, profile values come next, then root-level entries in config.toml, and finally the CLI’s built-in defaults. Use that precedence to layer common settings at the top level while letting each profile tweak just the fields that need to change.
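To make the precedence concrete, consider this sketch of a config.toml (keys taken from the examples above):

```toml
model = "gpt-5"            # root-level default

[profiles.deep-review]
model = "gpt-5-pro"        # overrides the root-level model
```

Running codex --profile deep-review resolves model to gpt-5-pro (the profile beats the root level), while codex --profile deep-review --model gpt-5-codex resolves it to gpt-5-codex (an explicit flag beats everything else).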
Feature flags
Optional and experimental capabilities are toggled via the [features] table in config.toml. If Codex emits a deprecation warning mentioning a legacy key (such as experimental_use_exec_command_tool), move that setting into [features] or launch the CLI with codex --enable <feature>.
[features]
streamable_shell = true # enable the streamable exec tool
web_search_request = true # allow the model to request web searches
# view_image_tool defaults to true; omit to keep defaults
Supported features
| Key | Default | Stage | Description |
|---|---|---|---|
unified_exec | false | Experimental | Use the unified PTY-backed exec tool |
streamable_shell | false | Experimental | Use the streamable exec-command/write-stdin pair |
rmcp_client | false | Experimental | Enable OAuth support for streamable HTTP MCP servers |
apply_patch_freeform | false | Beta | Include the freeform apply_patch tool |
view_image_tool | true | Stable | Include the view_image tool |
web_search_request | false | Stable | Allow the model to issue web searches |
experimental_sandbox_command_assessment | false | Experimental | Enable model-based sandbox risk assessment |
ghost_commit | false | Experimental | Create a ghost commit each turn |
enable_experimental_windows_sandbox | false | Experimental | Use the Windows restricted-token sandbox |
Omit feature keys to keep their defaults.
Legacy booleans such as experimental_use_exec_command_tool, experimental_use_unified_exec_tool, include_apply_patch_tool, and similar experimental_use_* entries are deprecated; migrate them to the matching [features].<key> flag to avoid repeated warnings.
Enabling features quickly
- In config.toml: add feature_name = true under [features].
- CLI one-time: codex --enable feature_name.
- Multiple flags: codex --enable feature_a --enable feature_b.
- Disable explicitly by setting the key to false in config.toml.
Advanced configuration
Custom model providers
Define additional providers and point model_provider at them:
model = "gpt-4o"
model_provider = "openai-chat-completions"
[model_providers.openai-chat-completions]
name = "OpenAI using Chat Completions"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"
wire_api = "chat"
query_params = {}
[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
[model_providers.mistral]
name = "Mistral"
base_url = "https://api.mistral.ai/v1"
env_key = "MISTRAL_API_KEY"
Add request headers when needed:
[model_providers.example]
http_headers = { "X-Example-Header" = "example-value" }
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
Azure provider & per-provider tuning
[model_providers.azure]
name = "Azure"
base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
[model_providers.openai]
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
Model reasoning, verbosity, and limits
model_reasoning_summary = "none" # disable summaries
model_verbosity = "low" # shorten responses on Responses API providers
model_supports_reasoning_summaries = true # force reasoning on custom providers
model_context_window = 128000 # override when Codex doesn't know the window
model_max_output_tokens = 4096 # cap completion length
model_verbosity applies only to providers using the Responses API; Chat Completions providers will ignore the setting.
Approval policies and sandbox modes
Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access). See Sandbox & approvals for deeper examples.
approval_policy = "untrusted" # other options: on-request, on-failure, never
sandbox_mode = "workspace-write"
[sandbox_workspace_write]
exclude_tmpdir_env_var = false # allow $TMPDIR
exclude_slash_tmp = false # allow /tmp
writable_roots = ["/Users/YOU/.pyenv/shims"]
network_access = false # opt in to outbound network
Disable sandboxing entirely (use only if your environment already isolates processes):
sandbox_mode = "danger-full-access"
Rules (preview)
A .rules file lets you define fine-grained rules that govern Codex’s behavior, such as identifying commands that Codex is allowed to run outside the sandbox.
For example, suppose you created the file ~/.codex/rules/default.rules with the following contents:
# Rule that allows commands that start with `gh pr view` to run outside
# the sandbox for Codex's "shell tool."
prefix_rule(
    # The prefix to match.
    pattern = ["gh", "pr", "view"],
    # The action to take when Codex requests to run a matching command.
    decision = "allow",
    # `match` and `not_match` are optional "inline unit tests" where you can
    # provide examples of commands that should (or should not) match this rule,
    # respectively. The .rules file will fail to load if these tests fail.
    match = [
        "gh pr view 7888",
        "gh pr view --repo openai/codex",
        "gh pr view 7888 --json title,body,comments",
    ],
    not_match = [
        # Does not match because the `pattern` must be an exact prefix.
        "gh pr --repo openai/codex view 7888",
    ],
)
A prefix_rule() lets you pre-approve, prompt, or block commands before Codex runs them using the following options:
- pattern (required) is a non-empty list where each element is either a literal (e.g., "pr") or a union of literals (e.g., ["view", "list"]) that defines the command prefix to be matched by the rule. When Codex's shell tool considers a command to run (which internally can be thought of as a list of arguments for execvp(3)), it compares the start of that argument list with the elements of pattern.
  - Use a union to express alternatives for an individual argument. For example, pattern = ["gh", "pr", ["view", "list"]] would allow both gh pr view and gh pr list to run outside the sandbox.
- decision (defaults to "allow") sets the strictness; Codex applies the most restrictive decision when multiple rules match (forbidden > prompt > allow).
  - allow means the command should be run automatically outside the sandbox: the user will not be consulted.
  - prompt means the user will be prompted to allow each individual invocation of a matching command. If approved, the command will be run outside the sandbox.
  - forbidden means the request will be rejected automatically without notifying the user.
- match and not_match (default to []) act like tests that Codex validates when it loads your policy.
Codex loads every *.rules file under ~/.codex/rules at startup; when you whitelist a command in the TUI, it appends a rule to ~/.codex/rules/default.rules so future runs can skip the prompt.
Note that the input language for a .rules file is Starlark. Its syntax is similar to Python's, but it is designed to be a safe, embeddable language that can be interpreted without side effects (such as touching the filesystem). Starlark affordances such as list comprehensions make it possible to build up rules dynamically.
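Because .rules files are Starlark, rules can be generated programmatically. The sketch below assumes, as in the example above, that calling prefix_rule() registers the rule; the particular gh subcommands chosen here are a hypothetical illustration:

```starlark
# Generate one allow rule per read-only `gh pr` subcommand.
[
    prefix_rule(
        pattern = ["gh", "pr", sub],
        decision = "allow",
    )
    for sub in ["view", "list", "diff"]
]
```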
Finally, to test how a policy applies to a command without editing files, you can use the CLI helper:
$ codex execpolicy check --pretty --rules ~/.codex/rules/default.rules -- gh pr view 7888 --json title,body,comments
{
  "matchedRules": [
    {
      "prefixRuleMatch": {
        "matchedPrefix": [
          "gh",
          "pr",
          "view"
        ],
        "decision": "allow"
      }
    }
  ],
  "decision": "allow"
}
Pass multiple --rules flags to combine files and add --pretty for formatted JSON. The rules system is still in preview, so syntax and defaults may change.
Shell environment templates
shell_environment_policy controls which environment variables Codex passes to any subprocess it launches (for example, when running a tool-command the model proposes). Start from a clean slate (inherit = "none") or a trimmed set (inherit = "core"), then layer on excludes, includes, and overrides to avoid leaking secrets while still providing the paths, keys, or flags your tasks need.
[shell_environment_policy]
inherit = "none"
set = { PATH = "/usr/bin", MY_FLAG = "1" }
ignore_default_excludes = false
exclude = ["AWS_*", "AZURE_*"]
include_only = ["PATH", "HOME"]
Patterns are case-insensitive globs (*, ?, [A-Z]); ignore_default_excludes = false keeps the automatic KEY/SECRET/TOKEN filter before your includes/excludes run.
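The filter order described above can be modeled in a few lines of Python. This is an illustrative sketch, not Codex's implementation: it applies the exclude globs first, then the include_only whitelist, then the explicit set overrides, matching names case-insensitively as described.

```python
import fnmatch

def apply_policy(env, include_only=None, exclude=(), set_vars=None):
    """Model of shell_environment_policy filtering (illustration only)."""
    def matches(name, patterns):
        # Patterns are case-insensitive globs (*, ?, [A-Z]).
        return any(fnmatch.fnmatchcase(name.upper(), p.upper()) for p in patterns)

    kept = {k: v for k, v in env.items() if not matches(k, exclude)}
    if include_only is not None:
        kept = {k: v for k, v in kept.items() if matches(k, include_only)}
    kept.update(set_vars or {})  # `set` entries always win
    return kept

env = {"PATH": "/usr/bin", "HOME": "/home/me", "AWS_SECRET_KEY": "x"}
print(apply_policy(env, include_only=["PATH", "HOME"], exclude=["AWS_*"]))
# -> {'PATH': '/usr/bin', 'HOME': '/home/me'}
```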
MCP servers
See the dedicated MCP guide for full server setups and toggle descriptions. Below is a minimal STDIO example using the Context7 MCP server:
[mcp_servers.context7]
command = "npx"
args = ["-y", "@upstash/context7-mcp"]
Observability and telemetry
Enable OpenTelemetry (Otel) log export to track Codex runs (API requests, SSE/events, prompts, tool approvals/results). Disabled by default; opt in via [otel]:
[otel]
environment = "staging" # defaults to "dev"
exporter = "none" # set to otlp-http or otlp-grpc to send events
log_user_prompt = false # redact user prompts unless explicitly enabled
Choose an exporter:
[otel]
exporter = { otlp-http = { endpoint = "https://otel.example.com/v1/logs", protocol = "binary", headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" } } }
[otel]
exporter = { otlp-grpc = { endpoint = "https://otel.example.com:4317", headers = { "x-otlp-meta" = "abc123" } } }
If exporter = "none" Codex records events but sends nothing. Exporters batch asynchronously and flush on shutdown. Event metadata includes service name, CLI version, env tag, conversation id, model, sandbox/approval settings, and per-event fields (see Config reference table below).
Notifications
Use notify to trigger an external program whenever Codex emits supported events (today: agent-turn-complete). This is handy for desktop toasts, chat webhooks, CI updates, or any side-channel alerting that the built-in TUI notifications don’t cover.
notify = ["python3", "/path/to/notify.py"]
Example notify.py (truncated) that reacts to agent-turn-complete:
#!/usr/bin/env python3
import json, subprocess, sys

def main() -> int:
    notification = json.loads(sys.argv[1])
    if notification.get("type") != "agent-turn-complete":
        return 0
    title = f"Codex: {notification.get('last-assistant-message', 'Turn Complete!')}"
    message = " ".join(notification.get("input-messages", []))
    subprocess.check_output([
        "terminal-notifier",
        "-title", title,
        "-message", message,
        "-group", "codex-" + notification.get("thread-id", ""),
        "-activate", "com.googlecode.iterm2",
    ])
    return 0

if __name__ == "__main__":
    sys.exit(main())
Place the script somewhere on disk and point notify to it. For lighter in-terminal alerts, toggle tui.notifications instead.
Personalizing the Codex IDE Extension
In addition to configuring the underlying Codex agent through your config.toml file, you can also configure the way you use the Codex IDE extension.
To see the list of available configuration options, click the gear icon in the top right corner of the extension and then click IDE settings.
To define your own keyboard shortcuts to trigger Codex or add something to the Codex context, you can click the gear icon in the top right corner of the extension and then click Keyboard shortcuts.
Configuration options
| Key | Type / Values | Details |
|---|---|---|
model | string | Model to use (e.g., `gpt-5-codex`). |
model_provider | string | Provider id from `model_providers` (default: `openai`). |
model_context_window | number | Context window tokens available to the active model. |
model_max_output_tokens | number | Maximum number of tokens Codex may request from the model. |
approval_policy | untrusted | on-failure | on-request | never | Controls when Codex pauses for approval before executing commands. |
sandbox_mode | read-only | workspace-write | danger-full-access | Sandbox policy for filesystem and network access during command execution. |
sandbox_workspace_write.writable_roots | array<string> | Additional writable roots when `sandbox_mode = "workspace-write"`. |
sandbox_workspace_write.network_access | boolean | Allow outbound network access inside the workspace-write sandbox. |
sandbox_workspace_write.exclude_tmpdir_env_var | boolean | Exclude `$TMPDIR` from writable roots in workspace-write mode. |
sandbox_workspace_write.exclude_slash_tmp | boolean | Exclude `/tmp` from writable roots in workspace-write mode. |
notify | array<string> | Command invoked for notifications; receives a JSON payload from Codex. |
instructions | string | Reserved for future use; prefer `experimental_instructions_file` or `AGENTS.md`. |
mcp_servers.<id>.command | string | Launcher command for an MCP stdio server. |
mcp_servers.<id>.args | array<string> | Arguments passed to the MCP stdio server command. |
mcp_servers.<id>.env | map<string,string> | Environment variables forwarded to the MCP stdio server. |
mcp_servers.<id>.env_vars | array<string> | Additional environment variables to whitelist for an MCP stdio server. |
mcp_servers.<id>.cwd | string | Working directory for the MCP stdio server process. |
mcp_servers.<id>.url | string | Endpoint for an MCP streamable HTTP server. |
mcp_servers.<id>.bearer_token_env_var | string | Environment variable sourcing the bearer token for an MCP HTTP server. |
mcp_servers.<id>.http_headers | map<string,string> | Static HTTP headers included with each MCP HTTP request. |
mcp_servers.<id>.env_http_headers | map<string,string> | HTTP headers populated from environment variables for an MCP HTTP server. |
mcp_servers.<id>.enabled | boolean | Disable an MCP server without removing its configuration. |
mcp_servers.<id>.startup_timeout_sec | number | Override the default 10s startup timeout for an MCP server. |
mcp_servers.<id>.tool_timeout_sec | number | Override the default 60s per-tool timeout for an MCP server. |
mcp_servers.<id>.enabled_tools | array<string> | Allow list of tool names exposed by the MCP server. |
mcp_servers.<id>.disabled_tools | array<string> | Deny list applied after `enabled_tools` for the MCP server. |
features.unified_exec | boolean | Use the unified PTY-backed exec tool (experimental). |
features.streamable_shell | boolean | Switch to the streamable exec command/write-stdin tool pair (experimental). |
features.rmcp_client | boolean | Enable the Rust MCP client to unlock OAuth for HTTP servers (experimental). |
features.apply_patch_freeform | boolean | Expose the freeform `apply_patch` tool (beta). |
features.view_image_tool | boolean | Allow Codex to attach local images via the `view_image` tool (stable; on by default). |
features.web_search_request | boolean | Allow the model to issue web searches (stable). |
features.experimental_sandbox_command_assessment | boolean | Enable model-based sandbox risk assessment (experimental). |
features.ghost_commit | boolean | Create a ghost commit on each turn (experimental). |
features.enable_experimental_windows_sandbox | boolean | Run the Windows restricted-token sandbox (experimental). |
experimental_use_rmcp_client | boolean | Deprecated; replace with `[features].rmcp_client` or `codex --enable rmcp_client`. |
model_providers.<id>.name | string | Display name for a custom model provider. |
model_providers.<id>.base_url | string | API base URL for the model provider. |
model_providers.<id>.env_key | string | Environment variable supplying the provider API key. |
model_providers.<id>.wire_api | chat | responses | Protocol used by the provider (defaults to `chat` if omitted). |
model_providers.<id>.query_params | map<string,string> | Extra query parameters appended to provider requests. |
model_providers.<id>.http_headers | map<string,string> | Static HTTP headers added to provider requests. |
model_providers.<id>.env_http_headers | map<string,string> | HTTP headers populated from environment variables when present. |
model_providers.<id>.request_max_retries | number | Retry count for HTTP requests to the provider (default: 4). |
model_providers.<id>.stream_max_retries | number | Retry count for SSE streaming interruptions (default: 5). |
model_providers.<id>.stream_idle_timeout_ms | number | Idle timeout for SSE streams in milliseconds (default: 300000). |
model_reasoning_effort | minimal | low | medium | high | Adjust reasoning effort for supported models (Responses API only). |
model_reasoning_summary | auto | concise | detailed | none | Select reasoning summary detail or disable summaries entirely. |
model_verbosity | low | medium | high | Control GPT-5 Responses API verbosity (defaults to `medium`). |
model_supports_reasoning_summaries | boolean | Force Codex to send reasoning metadata even for unknown models. |
model_reasoning_summary_format | none | experimental | Override the format of reasoning summaries (experimental). |
shell_environment_policy.inherit | all | core | none | Baseline environment inheritance when spawning subprocesses. |
shell_environment_policy.ignore_default_excludes | boolean | Keep variables containing KEY/SECRET/TOKEN before other filters run. |
shell_environment_policy.exclude | array<string> | Glob patterns for removing environment variables after the defaults. |
shell_environment_policy.include_only | array<string> | Whitelist of patterns; when set only matching variables are kept. |
shell_environment_policy.set | map<string,string> | Explicit environment overrides injected into every subprocess. |
project_doc_max_bytes | number | Maximum bytes read from `AGENTS.md` when building project instructions. |
project_doc_fallback_filenames | array<string> | Additional filenames to try when `AGENTS.md` is missing. |
profile | string | Default profile applied at startup (equivalent to `--profile`). |
profiles.<name>.* | various | Profile-scoped overrides for any of the supported configuration keys. |
history.persistence | save-all | none | Control whether Codex saves session transcripts to history.jsonl. |
history.max_bytes | number | Reserved for future use; currently not enforced. |
file_opener | vscode | vscode-insiders | windsurf | cursor | none | URI scheme used to open citations from Codex output (default: `vscode`). |
otel.environment | string | Environment tag applied to emitted OpenTelemetry events (default: `dev`). |
otel.exporter | none | otlp-http | otlp-grpc | Select the OpenTelemetry exporter and provide any endpoint metadata. |
otel.log_user_prompt | boolean | Opt in to exporting raw user prompts with OpenTelemetry logs. |
tui | table | TUI-specific options such as enabling inline desktop notifications. |
tui.notifications | boolean | array<string> | Enable TUI notifications; optionally restrict to specific event types. |
hide_agent_reasoning | boolean | Suppress reasoning events in both the TUI and `codex exec` output. |
show_raw_agent_reasoning | boolean | Surface raw reasoning content when the active model emits it. |
chatgpt_base_url | string | Override the base URL used during the ChatGPT login flow. |
experimental_instructions_file | string (path) | Experimental replacement for built-in instructions instead of `AGENTS.md`. |
experimental_use_exec_command_tool | boolean | Deprecated; use `[features].unified_exec` or `codex --enable unified_exec`. |
projects.<path>.trust_level | string | Mark a project or worktree as trusted (only `"trusted"` is recognized). |
tools.web_search | boolean | Deprecated; use `[features].web_search_request` or `codex --enable web_search_request`. |
tools.view_image | boolean | Deprecated; use `[features].view_image_tool` or `codex --enable view_image_tool`. |
forced_login_method | chatgpt | api | Restrict Codex to a specific authentication method. |
forced_chatgpt_workspace_id | string (uuid) | Limit ChatGPT logins to a specific workspace identifier. |