Return the input type for the parser.
Parse a list of candidate model Generation objects into a specific format.
Async parse a single string model output into some structure.
Parse the output of an LLM call with the input prompt for context.
Instructions on how the LLM output should be formatted.
Return dictionary representation of output parser.
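The methods above (parsing raw text, exposing format instructions, and serializing the parser) can be sketched in plain Python. This is an illustrative stand-in, not the real LangChain `BaseOutputParser` API; the class name and `dict()` payload here are invented for the example.

```python
class CommaSeparatedListParser:
    """Illustrative stand-in for a LangChain-style output parser
    (not the real BaseOutputParser interface)."""

    def parse(self, text: str) -> list[str]:
        # Parse a single string model output into some structure:
        # here, a list of trimmed items.
        return [item.strip() for item in text.split(",")]

    def get_format_instructions(self) -> str:
        # Instructions on how the LLM output should be formatted.
        return "Respond with a comma-separated list, e.g. `a, b, c`."

    def dict(self) -> dict:
        # Dictionary representation of the parser (hypothetical schema).
        return {"type": "comma_separated_list"}


parser = CommaSeparatedListParser()
print(parser.parse("red, green, blue"))  # ['red', 'green', 'blue']
```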
The name of the Runnable. Used for debugging and tracing.
Input type.
Output type.
The type of input this Runnable accepts specified as a Pydantic model.
Output schema.
List configurable fields for this Runnable.
Get the name of the Runnable.
Get a Pydantic model that can be used to validate input to the Runnable.
Get a JSON schema that represents the input to the Runnable.
Get a Pydantic model that can be used to validate output of the Runnable.
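Input and output types for a parser can be recovered from its generic type parameters. The sketch below shows the general technique with only the standard library; it is a simplified illustration of the idea, not LangChain's actual implementation, and `SimpleParser` is an invented name.

```python
from typing import Generic, TypeVar, get_args, get_origin

T = TypeVar("T")


class SimpleParser(Generic[T]):
    @property
    def OutputType(self) -> type:
        # Walk the generic bases of the concrete subclass and pull out
        # the type argument bound to SimpleParser[T].
        for base in type(self).__orig_bases__:
            if get_origin(base) is SimpleParser:
                return get_args(base)[0]
        raise TypeError("Could not infer output type")


class StrParser(SimpleParser[str]):
    """A parser whose output type is inferred as `str`."""


print(StrParser().OutputType)  # <class 'str'>
```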
Extract text content from model outputs as a string.
Converts model outputs (such as AIMessage or AIMessageChunk objects) into plain
text strings. It's the simplest output parser and is useful when you need string
responses for downstream processing, display, or storage.
Supports streaming, yielding text chunks as they're generated by the model.
Example:

    from langchain_core.output_parsers import StrOutputParser
    from langchain_openai import ChatOpenAI

    model = ChatOpenAI(model="gpt-4o")
    parser = StrOutputParser()

    # Get string output from a model
    message = model.invoke("Tell me a joke")
    result = parser.invoke(message)
    print(result)  # plain string

    # With streaming - use transform() to process a stream
    stream = model.stream("Tell me a story")
    for chunk in parser.transform(stream):
        print(chunk, end="", flush=True)

Get a JSON schema that represents the output of the Runnable.