```python
create_agent(
    model: str | BaseChatModel,
    tools: Sequence[BaseTool | Callable[..., Any] | dict[str, Any]] | None = None,
    system_prompt: str | SystemMessage | None = None,
    middleware: Sequence[AgentMiddleware[StateT_co, ContextT]] = (),
    response_format: ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None = None,
    state_schema: type[AgentState[ResponseT]] | None = None,
    context_schema: type[ContextT] | None = None,
    checkpointer: Checkpointer | None = None,
    store: BaseStore | None = None,
    interrupt_before: list[str] | None = None,
    interrupt_after: list[str] | None = None,
    debug: bool = False,
    name: str | None = None,
    cache: BaseCache[Any] | None = None,
)
```

| Name | Type | Default |
|---|---|---|
| `model`* | `str \| BaseChatModel` | required |
| `tools` | `Sequence[BaseTool \| Callable[..., Any] \| dict[str, Any]] \| None` | `None` |
| `system_prompt` | `str \| SystemMessage \| None` | `None` |
| `middleware` | `Sequence[AgentMiddleware[StateT_co, ContextT]]` | `()` |
| `response_format` | `ResponseFormat[ResponseT] \| type[ResponseT] \| dict[str, Any] \| None` | `None` |
| `state_schema` | `type[AgentState[ResponseT]] \| None` | `None` |
| `context_schema` | `type[ContextT] \| None` | `None` |
| `checkpointer` | `Checkpointer \| None` | `None` |
| `store` | `BaseStore \| None` | `None` |
| `interrupt_before` | `list[str] \| None` | `None` |
| `interrupt_after` | `list[str] \| None` | `None` |
| `debug` | `bool` | `False` |
| `name` | `str \| None` | `None` |
| `cache` | `BaseCache[Any] \| None` | `None` |

**model** (`str | BaseChatModel`, required): The language model for the agent. Can be a string identifier (e.g., `"anthropic:claude-sonnet-4-5-20250929"`) or a `BaseChatModel` instance. See the Models docs for more information, including the full list of supported model strings.
Creates an agent graph that calls tools in a loop until a stopping condition is met.
For more details on using create_agent,
visit the Agents docs.
The agent node calls the language model with the messages list (after applying
the system prompt). If the resulting AIMessage
contains tool_calls, the graph will then call the tools. The tools node executes
the tools and adds the responses to the messages list as
ToolMessage objects. The agent node then calls
the language model again. The process repeats until no more tool_calls are present
in the response. The agent then returns the full list of messages.
Example:

```python
from langchain.agents import create_agent


def check_weather(location: str) -> str:
    """Return the weather forecast for the specified location."""
    return f"It's always sunny in {location}"


graph = create_agent(
    model="anthropic:claude-sonnet-4-5-20250929",
    tools=[check_weather],
    system_prompt="You are a helpful assistant",
)
inputs = {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
for chunk in graph.stream(inputs, stream_mode="updates"):
    print(chunk)
```

**tools** (`Sequence[BaseTool | Callable[..., Any] | dict[str, Any]] | None`, default `None`): A list of tools, dicts, or callables.
If None or an empty list, the agent will consist of a model node without a
tool calling loop.
See the Tools docs for more information.

**system_prompt** (`str | SystemMessage | None`, default `None`): An optional system prompt for the LLM.
Can be a str (which will be converted to a SystemMessage) or a
SystemMessage instance directly. The system message is added to the
beginning of the message list when calling the model.

**middleware** (`Sequence[AgentMiddleware[StateT_co, ContextT]]`, default `()`): A sequence of middleware instances to apply to the agent.
Middleware can intercept and modify agent behavior at various stages.
See the Middleware docs for more information.
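As a schematic illustration only (the real base class and hook names are defined in the Middleware docs, not here), a middleware conceptually exposes hooks that run around stages of the agent loop:

```python
# Schematic stand-in, NOT the real AgentMiddleware interface: the hook name
# below is hypothetical and only illustrates where a middleware can observe
# or modify agent behavior before each model call.
class LoggingMiddleware:
    def before_model(self, state: dict) -> None:  # hypothetical hook name
        print(f"About to call model with {len(state['messages'])} messages")
```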

**response_format** (`ResponseFormat[ResponseT] | type[ResponseT] | dict[str, Any] | None`, default `None`): An optional configuration for structured responses.
Can be a ToolStrategy, ProviderStrategy, or a Pydantic model class.
If provided, the agent will handle structured output during the conversation flow.
Raw schemas will be wrapped in an appropriate strategy based on model capabilities.
See the Structured output docs for more information.
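For instance, a Pydantic model class can be passed as the raw schema; a sketch, assuming `pydantic` is installed (the `WeatherReport` schema is made up for illustration):

```python
from pydantic import BaseModel


class WeatherReport(BaseModel):
    """Hypothetical structured-output schema."""

    location: str
    forecast: str


# The raw class is wrapped in an appropriate strategy based on model
# capabilities, per the description above:
# agent = create_agent(model="...", tools=[...], response_format=WeatherReport)
```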

**state_schema** (`type[AgentState[ResponseT]] | None`, default `None`): An optional TypedDict schema that extends AgentState.
When provided, this schema is used instead of AgentState as the base
schema for merging with middleware state schemas. This allows users to
add custom state fields without needing to create custom middleware.
Generally, it's recommended to extend the state schema via middleware instead, so that custom fields stay scoped to the hooks and tools that use them.
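A self-contained sketch of the idea (a stand-in base is used here instead of the real `AgentState` so the snippet runs on its own):

```python
from typing import TypedDict


# Stand-in for AgentState, which normally provides the messages field.
class BaseState(TypedDict):
    messages: list


# Custom schema adding one extra state field on top of the base state.
class CustomState(BaseState, total=False):
    user_name: str


# agent = create_agent(model="...", tools=[...], state_schema=CustomState)
```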

**context_schema** (`type[ContextT] | None`, default `None`): An optional schema for runtime context.

**checkpointer** (`Checkpointer | None`, default `None`): An optional checkpoint saver object.
Used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).
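With a checkpointer configured, each invocation identifies its conversation with a `thread_id` so state persists within that thread; a sketch (`"conversation-1"` is a made-up id):

```python
# Each thread_id names one conversation whose state the checkpointer
# persists across invocations.
config = {"configurable": {"thread_id": "conversation-1"}}
# agent.invoke({"messages": [{"role": "user", "content": "hi"}]}, config)
```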

**store** (`BaseStore | None`, default `None`): An optional store object.
Used for persisting data across multiple threads (e.g., multiple conversations / users).

**interrupt_before** (`list[str] | None`, default `None`): An optional list of node names to interrupt before.
Useful if you want to add a user confirmation or other interrupt before taking an action.

**interrupt_after** (`list[str] | None`, default `None`): An optional list of node names to interrupt after.
Useful if you want to return directly or run additional processing on an output.
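For example, to pause for human approval before any tool runs, the tools node can be listed here (a sketch; resuming from an interrupt also requires a checkpointer):

```python
# Pause execution just before the tools node so a human can review the
# pending tool calls before they execute.
interrupt_before = ["tools"]
# agent = create_agent(model="...", tools=[...],
#                      interrupt_before=interrupt_before, checkpointer=...)
```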

**debug** (`bool`, default `False`): Whether to enable verbose logging for graph execution.
When enabled, prints detailed information about each node execution, state updates, and transitions during agent runtime. Useful for debugging middleware behavior and understanding agent execution flow.

**name** (`str | None`, default `None`): An optional name for the CompiledStateGraph.
This name will be automatically used when adding the agent graph to another graph as a subgraph node - particularly useful for building multi-agent systems.

**cache** (`BaseCache[Any] | None`, default `None`): An optional BaseCache instance to enable caching of graph execution.