Primary graph assembly module for Deep Agents.
Provides create_deep_agent, the main entry point for constructing a fully
configured Deep Agent with planning, filesystem, subagent, and summarization
middleware.
Default base system prompt for every Deep Agent.
When a caller passes system_prompt to create_deep_agent, the custom prompt
is prepended and this base prompt is appended. When system_prompt is None,
this is used as the sole system prompt.
Extract the provider-native model identifier from a chat model.
Providers do not agree on a single field name for the identifier. Some use
model_name, while others use model. Reading the serialized model config
lets us inspect both without relying on reflective attribute access.
Extract the provider name from a chat model instance.
Uses the model's _get_ls_params method. The base BaseChatModel
implementation derives ls_provider from the class name, and all major
providers override it with a hardcoded value (e.g. "anthropic").
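The class-name fallback can be approximated as follows. This is a simplified sketch of the base behavior, not the actual `BaseChatModel` code:

```python
def derive_provider_from_class(class_name: str) -> str:
    """Approximate the base fallback: drop a leading 'Chat' and lowercase.

    Simplified sketch; the real base implementation may differ in detail.
    """
    return class_name.removeprefix("Chat").lower()
```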
Resolve a model string to a BaseChatModel.
If model is already a BaseChatModel, returns it unchanged.
String models are resolved via init_chat_model. OpenAI models
(prefixed with openai:) default to the Responses API.
OpenRouter models include default app attribution headers unless overridden
via OPENROUTER_APP_URL / OPENROUTER_APP_TITLE env vars.
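The override behavior can be sketched like this. The default URL and title values here are hypothetical placeholders, not the module's real defaults; `HTTP-Referer` and `X-Title` are OpenRouter's attribution header names:

```python
import os

def openrouter_attribution_headers(
    default_url: str = "https://example.com/deep-agents",  # hypothetical default
    default_title: str = "Deep Agents",                    # hypothetical default
) -> dict[str, str]:
    """Build attribution headers, letting env vars override the defaults."""
    return {
        "HTTP-Referer": os.environ.get("OPENROUTER_APP_URL", default_url),
        "X-Title": os.environ.get("OPENROUTER_APP_TITLE", default_title),
    }
```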
Create a SummarizationMiddleware with model-aware defaults.
Computes trigger, keep, and truncation settings from the model's profile (or uses fixed-token fallbacks) and returns a configured middleware.
Get the default model for Deep Agents.
Used as a fallback when model=None is passed to create_deep_agent.
Requires ANTHROPIC_API_KEY to be set in the environment.
Create a Deep Agent.
By default, this agent has access to the following tools:
- write_todos: manage a todo list
- ls, read_file, write_file, edit_file, glob, grep: file operations
- execute: run shell commands
- task: call subagents

The execute tool allows running shell commands if the backend implements
SandboxBackendProtocol. For non-sandbox backends, the execute tool will
return an error message.
Backend that stores files in agent state (ephemeral).
Uses LangGraph's state management and checkpointing. Files persist within a conversation thread but not across threads. State is automatically checkpointed after each agent step.
Reads and writes go through LangGraph's CONFIG_KEY_READ /
CONFIG_KEY_SEND so that state updates are queued as proper channel
writes rather than returned as files_update dicts.
Protocol for pluggable memory backends: a single, unified interface.
Backends can store files in different locations (state, filesystem, database, etc.) and provide a uniform interface for file operations.
All file data is represented as dicts with the following structure::
    {
        "content": str,      # Text content (utf-8) or base64-encoded binary
        "encoding": str,     # "utf-8" for text, "base64" for binary data
        "created_at": str,   # ISO format timestamp
        "modified_at": str,  # ISO format timestamp
    }
Specification for an async subagent running on a remote Agent Protocol server.
Async subagents connect to any Agent Protocol-compliant server via the LangGraph SDK. They run as background tasks that the main agent can monitor and update.
Compatible with LangGraph Platform (managed) and self-hosted servers.
Authentication for LangGraph Platform is handled automatically by the SDK
via environment variables (LANGGRAPH_API_KEY, LANGSMITH_API_KEY, or
LANGCHAIN_API_KEY). For self-hosted servers, pass custom auth via
headers.
Middleware for async subagents running on remote Agent Protocol servers.
This middleware adds tools for launching, monitoring, and updating
background tasks on remote Agent Protocol servers. Unlike the synchronous
SubAgentMiddleware, async subagents return immediately with a task ID,
allowing the main agent to continue working while subagents execute.
Works with any Agent Protocol-compliant server — LangGraph Platform (managed) or self-hosted (e.g. a FastAPI server implementing the Agent Protocol spec).
Task IDs are persisted in the agent state under async_tasks so they
survive context compaction/offloading and can be accessed programmatically.
Middleware for providing filesystem and optional execution tools to an agent.
This middleware adds filesystem tools to the agent: ls, read_file, write_file,
edit_file, glob, and grep.
Files can be stored using any backend that implements the BackendProtocol.
If the backend implements SandboxBackendProtocol, an execute tool is also added
for running shell commands.
This middleware also automatically evicts large tool results to the file system when they exceed a token threshold, preventing context window saturation.
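The eviction logic can be sketched roughly as below. The threshold value, token approximation, and path scheme are all illustrative assumptions, not the middleware's actual behavior:

```python
def maybe_evict_tool_result(result: str, threshold_tokens: int = 2_000):
    """Return (message_for_context, files_to_write).

    Token count is approximated as len(result) // 4; the threshold and
    path scheme here are illustrative assumptions.
    """
    approx_tokens = len(result) // 4
    if approx_tokens <= threshold_tokens:
        return result, {}
    path = "/large_tool_results/result.txt"  # hypothetical path scheme
    notice = f"Tool result (~{approx_tokens} tokens) written to {path}."
    return notice, {path: result}
```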
Middleware for loading agent memory from AGENTS.md files.
Loads memory content from configured sources and injects into the system prompt.
Supports multiple sources that are combined together.
Middleware to patch dangling tool calls in the message history.
A single access rule for filesystem operations.
Rules are evaluated in declaration order. The first matching rule's
mode is applied. If no rule matches, the call is allowed (permissive
default).
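The first-match evaluation can be sketched with glob-style patterns. The pattern syntax and function name are assumptions for illustration:

```python
from fnmatch import fnmatch

def resolve_mode(rules: list[tuple[str, str]], path: str) -> str:
    """Apply the first matching rule; default to 'allow' if none match.

    Sketch only: rules are (glob_pattern, mode) pairs, an assumed shape.
    """
    for pattern, mode in rules:
        if fnmatch(path, pattern):
            return mode
    return "allow"
```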
Middleware for loading and exposing agent skills to the system prompt.
Loads skills from backend sources and injects them into the system prompt using progressive disclosure (metadata first, full content on demand).
Skills are loaded in source order with later sources overriding earlier ones.
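The override semantics amount to an ordered dict merge, sketched below (function name and skill-dict shape are assumptions):

```python
def merge_skill_sources(sources: list[dict[str, dict]]) -> dict[str, dict]:
    """Merge skills in source order; later sources override same-named skills."""
    merged: dict[str, dict] = {}
    for skills in sources:
        merged.update(skills)
    return merged
```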
A pre-compiled agent spec.
The runnable's state schema must include a 'messages' key.
This is required for the subagent to communicate results back to the main agent.
When the subagent completes, the final message in the 'messages' list will be
extracted and returned as a ToolMessage to the parent agent.
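The extraction step can be sketched with plain dicts. The real implementation works with LangChain message objects; the dict shapes here are simplifications:

```python
def final_message_as_tool_message(state: dict, tool_call_id: str) -> dict:
    """Extract the subagent's last message for the parent agent.

    Plain-dict sketch; 'messages' is the required key in the state schema.
    """
    final = state["messages"][-1]
    return {"role": "tool", "tool_call_id": tool_call_id,
            "content": final["content"]}
```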
Specification for an agent.
When using create_deep_agent, subagents automatically receive a default middleware
stack (TodoListMiddleware, FilesystemMiddleware, SummarizationMiddleware, etc.) before
any custom middleware specified in this spec.
Middleware for providing subagents to an agent via a task tool.
This middleware adds a task tool to the agent that can be used to invoke subagents.
Subagents are useful for handling complex tasks that require multiple steps, or tasks
that require a lot of context to resolve.
A chief benefit of subagents is that they can handle multi-step tasks, and then return a clean, concise response to the main agent.
Subagents are also great for different domains of expertise that require a narrower subset of tools and focus.