LangChain Reference
langsmith.client.Client.evaluate
Method · Since v0.2

evaluate

evaluate(
  self,
  target: Union[TARGET_T, Runnable, EXPERIMENT_T, tuple[EXPERIMENT_T, EXPERIMENT_T]],
  /,
  data: Optional[DATA_T] = None,
  evaluators: Optional[Union[Sequence[EVALUATOR_T], Sequence[COMPARATIVE_EVALUATOR_T]]] = None,
  summary_evaluators: Optional[Sequence[SUMMARY_EVALUATOR_T]] = None,
  metadata: Optional[dict] = None,
  experiment_prefix: Optional[str] = None,
  description: Optional[str] = None,
  max_concurrency: Optional[int] = 0,
  num_repetitions: int = 1,
  blocking: bool = True,
  experiment: Optional[EXPERIMENT_T] = None,
  upload_results: bool = True,
  error_handling: Literal['log', 'ignore'] = 'log',
  **kwargs: Any,
) -> Union[ExperimentResults, ComparativeExperimentResults]

Used in Docs

  • How to add evaluators to an existing experiment (Python only)
  • How to define a code evaluator
  • How to define an LLM-as-a-judge evaluator
  • How to evaluate an LLM application
  • How to evaluate with repetitions

Evaluate a target system on a given dataset.

Parameters

target* : Union[TARGET_T, Runnable, EXPERIMENT_T, tuple[EXPERIMENT_T, EXPERIMENT_T]]
  The target system or experiment(s) to evaluate. Can be a function that takes a dict and returns a dict, a LangChain Runnable, an existing experiment ID, or a two-tuple of experiment IDs.

data : Optional[DATA_T], default=None
  The dataset to evaluate on. Can be a dataset name, a list of examples, or a generator of examples.

evaluators : Optional[Union[Sequence[EVALUATOR_T], Sequence[COMPARATIVE_EVALUATOR_T]]], default=None
  A list of evaluators to run on each example. The evaluator signature depends on the target type.

summary_evaluators : Optional[Sequence[SUMMARY_EVALUATOR_T]], default=None
  A list of summary evaluators to run on the entire dataset. Should not be specified if comparing two existing experiments.

metadata : Optional[dict], default=None
  Metadata to attach to the experiment.

experiment_prefix : Optional[str], default=None
  A prefix to provide for your experiment name.

description : Optional[str], default=None
  A free-form text description for the experiment.

max_concurrency : Optional[int], default=0
  The maximum number of concurrent evaluations to run. If None, no limit is set; if 0, no concurrency.

num_repetitions : int, default=1
  The number of times to run the evaluation. Each item in the dataset will be run and evaluated this many times.

blocking : bool, default=True
  Whether to block until the evaluation is complete.

experiment : Optional[EXPERIMENT_T], default=None
  An existing experiment to extend. If provided, experiment_prefix is ignored. For advanced usage only. Should not be specified if target is an existing experiment or a two-tuple of experiments.

upload_results : bool, default=True
  Whether to upload the results to LangSmith.

error_handling : Literal['log', 'ignore'], default='log'
  How to handle individual run errors. 'log' will trace the runs with the error message as part of the experiment; 'ignore' will not count the run as part of the experiment at all.

**kwargs : Any
  Additional keyword arguments to pass to the evaluator.
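As a minimal usage sketch: the dataset name, target function, and evaluator names below are illustrative (not from this reference), and the keyword-based evaluator signatures (`inputs` / `outputs` / `reference_outputs`) are assumed. Actually running the experiment requires a configured LangSmith client and API key, so the `evaluate` call itself is shown commented out.

```python
# Hypothetical target: takes a dict of inputs, returns a dict of outputs.
def target(inputs: dict) -> dict:
    return {"answer": inputs["question"].strip().upper()}

# Hypothetical row-level evaluator: compares the target's outputs
# against the reference outputs stored on the dataset example.
def exact_match(inputs: dict, outputs: dict, reference_outputs: dict) -> bool:
    return outputs["answer"] == reference_outputs["answer"]

# Hypothetical summary evaluator: scores the experiment as a whole,
# receiving the outputs and reference outputs for every example.
def accuracy(outputs: list, reference_outputs: list) -> float:
    matches = [o["answer"] == r["answer"] for o, r in zip(outputs, reference_outputs)]
    return sum(matches) / len(matches)

# With LANGSMITH_API_KEY set, the call would look roughly like:
#
#     from langsmith import Client
#
#     client = Client()
#     results = client.evaluate(
#         target,
#         data="my-qa-dataset",          # dataset name, list of examples, or generator
#         evaluators=[exact_match],
#         summary_evaluators=[accuracy],
#         experiment_prefix="exact-match-sketch",
#         max_concurrency=4,             # up to 4 examples evaluated at once
#     )
```

Because `upload_results` defaults to True, a run like this would create a new experiment in LangSmith named with the given prefix; passing a two-tuple of experiment IDs as `target` instead switches `evaluate` into comparative mode and returns `ComparativeExperimentResults`.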