Quick start
The CLI integrates automatically with the model providers listed below; no extra configuration is needed beyond installing the relevant provider package.
Install provider packages
Each model provider requires its corresponding LangChain integration package. To keep the application lightweight, these are available as optional extras when installing the CLI:
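For example, with uv (the `[ollama]` extra and the `--with` form are the patterns referenced later on this page; each provider has its own extra name):

```shell
# Install the CLI with a provider extra in one step
uv tool install 'deepagents-cli[ollama]'

# Or add a provider package to an existing install
uv tool install deepagents-cli --with langchain-anthropic
```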
Set credentials
Store API keys in `~/.deepagents/.env` so they're available across all projects, or export them in your shell. Some providers use other credentials (for example, Vertex AI uses `GOOGLE_CLOUD_PROJECT` plus ADC). See the table below for the required variable per provider. You can also scope credentials to the CLI with the `DEEPAGENTS_CLI_` prefix.
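As a sketch (key values are placeholders, and the exact shape of the `DEEPAGENTS_CLI_`-prefixed variable name is an assumption):

```shell
# Persist keys across projects in the shared env file
mkdir -p ~/.deepagents
echo 'ANTHROPIC_API_KEY=sk-ant-...' >> ~/.deepagents/.env

# Or export for the current shell session only
export OPENAI_API_KEY=sk-...

# Scope a credential to the CLI via the DEEPAGENTS_CLI_ prefix (assumed form)
export DEEPAGENTS_CLI_OPENAI_API_KEY=sk-...
```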
Provider reference
The Deep Agents CLI is built in Python, so use the Python provider reference docs.

Model routers and proxies
Model routers like OpenRouter and LiteLLM provide access to models from multiple providers through a single endpoint. Use the dedicated integration packages for these services:

| Router | Package |
|---|---|
| OpenRouter | `langchain-openrouter` |
Switching models
To switch models in the CLI, either:

- Use the interactive model switcher with the `/model` command. This displays available models sourced from each installed LangChain provider package's model profiles. Not all models appear here; if yours is missing, pass the model name directly (e.g. `/model gpt-5.4`). See Which models appear in the switcher for details.
- Specify a model name directly as an argument, e.g. `/model gpt-5.4`. You can use any model supported by the chosen provider, regardless of whether it appears in the list from option 1. The model name is passed through to the API request.
- Specify the model at launch via `--model`.
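For example (assuming the installed executable is named `deepagents`; the model name is illustrative, in the same `provider:model` form used by `/model --default`):

```shell
# Launch with an explicit model instead of the resolved default
deepagents --model anthropic:claude-opus-4-6
```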
Which models appear in the switcher
The `/model` selector dynamically builds its list from installed provider packages. Expand below for the full criteria and troubleshooting.
How the switcher builds its model list
The interactive `/model` selector builds its list dynamically; it is not a hardcoded list baked into the CLI. A model appears in the switcher when all of the following are true:

- The provider package is installed. Each provider (e.g. `langchain-anthropic`, `langchain-openai`) must be installed alongside `deepagents-cli`, either as an install extra (e.g. `uv tool install 'deepagents-cli[ollama]'`) or added later with `uv tool install deepagents-cli --with <package>`. If a package is missing, its entire provider section is absent from the switcher.
- The model has a profile with `tool_calling` enabled. The CLI requires tool-calling support, so models without `tool_calling: true` in their profile are excluded. This is the most common reason a model is missing from the list. For providers that don't bundle profiles (see the Provider reference table), you can define one in `config.toml`. A profile is not strictly required for the model to appear in the switcher; adding it to the `models` list also works and is simpler. A profile is useful when you want the CLI to know the model's context window and capabilities for features like auto-summarization. See Profile overrides for all overridable fields.
- The model accepts and produces text. Models whose profile explicitly sets `text_inputs` or `text_outputs` to `false` (e.g. embedding or image-generation models) are excluded.
Models listed in `config.toml` under `[models.providers.<name>].models` bypass the profile filter; they always appear in the switcher regardless of profile metadata. This is the recommended way to add models that are missing from the list.

Troubleshooting missing models
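A minimal sketch of such an entry (the provider table name must match your installed provider; the model name here is hypothetical):

```toml
# ~/.deepagents/config.toml
# Models listed here always appear in the switcher, bypassing profile checks
[models.providers.ollama]
models = ["my-custom-model"]
```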
| Symptom | Likely cause | Fix |
|---|---|---|
| Entire provider missing from switcher | Provider package not installed | Install the package (e.g. `uv tool install deepagents-cli --with langchain-groq`) |
| Provider shown but specific model missing | Model profile has `tool_calling: false` or no profile exists | Add the model to `[models.providers.<name>].models` in `config.toml`, or use `/model <provider>:<model>` directly |
| Provider shows ⚠ "missing credentials" | API key env var not set | Set the credential env var from the Provider reference table |
| Provider shows ? "credentials unknown" | Provider uses non-standard auth that the CLI can't verify | Credentials may still work; try switching to the model. If auth fails, check the provider's docs |
Setting a default model
You can set a persistent default model that will be used for all future CLI launches:

- Via model selector: Open `/model`, navigate to the desired model, and press `Ctrl+S` to pin it as the default. Pressing `Ctrl+S` again on the current default clears it.
- Via command: `/model --default provider:model` (e.g., `/model --default anthropic:claude-opus-4-6`)
- Via config file: Set `[models].default` in `~/.deepagents/config.toml` (see Configuration).
- From the shell:

To clear the default:

- From the shell:
- Via command: `/model --default --clear`
- Via model selector: Press `Ctrl+S` on the currently pinned default model.
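For the config-file route, a minimal sketch (the model name is illustrative, in the `provider:model` form used by `/model --default`):

```toml
# ~/.deepagents/config.toml
[models]
default = "anthropic:claude-opus-4-6"
```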
Model resolution order
When the CLI launches, it resolves which model to use in the following order:

1. `--model` flag: always wins when provided.
2. `[models].default` in `~/.deepagents/config.toml`: the user's intentional long-term preference.
3. `[models].recent` in `~/.deepagents/config.toml`: the last model switched to via `/model`. Written automatically; never overwrites `[models].default`.
4. Environment auto-detection: falls back to the first available startup credential, checked in order: `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GOOGLE_API_KEY`, `GOOGLE_CLOUD_PROJECT` (Vertex AI).

Auto-detection applies only when no model has been specified via `--model`, `/model`, or saved defaults (`[models].default` / `[models].recent`).
Advanced configuration
For detailed configuration of provider params, profile overrides, custom base URLs, compatible APIs, arbitrary providers, and lifecycle hooks, see Configuration.

