client.prompts.log(...)
-
-
-
Log to a Prompt.
You can use the query parameters
version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
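For example, a sketch of logging against a deployed version by Environment instead of passing Prompt details inline. The Environment name "staging" and the message content are illustrative, and the helper assumes an already-configured Humanloop client:

```python
def log_against_environment(client):
    # Log to whichever Prompt version is deployed to the (hypothetical)
    # "staging" Environment, rather than passing Prompt details in the body.
    return client.prompts.log(
        environment="staging",
        path="persona",
        messages=[{"role": "user", "content": "What really happened at Roswell?"}],
        inputs={"person": "Trump"},
        output="Well, folks, nobody really knows.",
        save=True,
    )
```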
-
-
-
import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.log(
    path="persona",
    prompt={
        "model": "gpt-4",
        "template": [
            {
                "role": "system",
                "content": "You are {{person}}. Answer questions as this person. Do not break character.",
            }
        ],
    },
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    inputs={"person": "Trump"},
    created_at=datetime.datetime.fromisoformat(
        "2024-07-19 00:29:35.178000+00:00",
    ),
    provider_latency=6.5931549072265625,
    output_message={
        "content": "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it's a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They're unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it's something big, very very big. Tremendous, in fact.",
        "role": "assistant",
    },
    prompt_tokens=100,
    output_tokens=220,
    prompt_cost=1e-05,
    output_cost=0.0002,
    finish_reason="stop",
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
run_id:
typing.Optional[str] — Unique identifier for the Run to associate the Log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
output_message:
typing.Optional[ChatMessageParams] — The message returned by the provider.
-
prompt_tokens:
typing.Optional[int] — Number of tokens in the prompt used to generate the output.
-
output_tokens:
typing.Optional[int] — Number of tokens in the output generated by the model.
-
prompt_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the prompt.
-
output_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the output.
-
finish_reason:
typing.Optional[str] — Reason the generation finished.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptLogRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
'none' means the model will not call any tool and will instead generate a message; this is the default when no tools are provided as part of the Prompt. 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt. 'required' means the model must call one or more of the provided tools. {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
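As a sketch, the forced-function form of tool_choice is a plain dictionary; the tool name get_weather below is hypothetical:

```python
# The three string options for tool_choice.
TOOL_CHOICE_OPTIONS = ("none", "auto", "required")

# Forcing a specific function: both the outer "type" and the nested
# "function" dict are required; "get_weather" is a hypothetical tool name.
tool_choice = {
    "type": "function",
    "function": {"name": "get_weather"},
}
```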
-
prompt:
typing.Optional[PromptKernelRequestParams] — Details of your Prompt. A new Prompt version will be created if the provided details are new.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompt_log_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.update_log(...)
-
-
-
Update a Log.
Update the details of a Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_log(
    id="id",
    log_id="log_id",
)
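A fuller call might backfill usage details once the provider response has been parsed. This is a sketch: the field values are illustrative, and the helper assumes a configured client:

```python
def backfill_usage(client, prompt_id, log_id):
    # Fill in token counts, costs and the finish reason on an existing Log.
    # All numeric values here are illustrative.
    return client.prompts.update_log(
        id=prompt_id,
        log_id=log_id,
        prompt_tokens=100,
        output_tokens=220,
        prompt_cost=1e-05,
        output_cost=0.0002,
        finish_reason="stop",
    )
```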
-
-
-
id:
str — Unique identifier for Prompt.
-
log_id:
str — Unique identifier for the Log.
-
output_message:
typing.Optional[ChatMessageParams] — The message returned by the provider.
-
prompt_tokens:
typing.Optional[int] — Number of tokens in the prompt used to generate the output.
-
output_tokens:
typing.Optional[int] — Number of tokens in the output generated by the model.
-
prompt_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the prompt.
-
output_cost:
typing.Optional[float] — Cost in dollars associated with the tokens in the output.
-
finish_reason:
typing.Optional[str] — Reason the generation finished.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptLogUpdateRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
'none' means the model will not call any tool and will instead generate a message; this is the default when no tools are provided as part of the Prompt. 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt. 'required' means the model must call one or more of the provided tools. {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.call_stream(...)
-
-
-
Call a Prompt.
Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop.
You can use the query parameters
version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
-
-
-
import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.call_stream(
    version_id="string",
    environment="string",
    path="string",
    id="string",
    messages=[
        {
            "content": "string",
            "name": "string",
            "tool_call_id": "string",
            "role": "user",
            "tool_calls": [
                {
                    "id": "string",
                    "type": "function",
                    "function": {"name": "string"},
                }
            ],
        }
    ],
    prompt={"model": "string"},
    inputs={"string": {"key": "value"}},
    source="string",
    metadata={"string": {"key": "value"}},
    start_time=datetime.datetime.fromisoformat(
        "2024-01-15 09:30:00+00:00",
    ),
    end_time=datetime.datetime.fromisoformat(
        "2024-01-15 09:30:00+00:00",
    ),
    source_datapoint_id="string",
    trace_parent_id="string",
    user="string",
    prompts_call_stream_request_environment="string",
    save=True,
    provider_api_keys={
        "openai": "string",
        "ai_21": "string",
        "mock": "string",
        "anthropic": "string",
        "bedrock": "string",
        "cohere": "string",
        "openai_azure": "string",
        "openai_azure_endpoint": "string",
    },
    num_samples=1,
    return_inputs=True,
    logprobs=1,
    suffix="string",
)
for chunk in response:
    print(chunk)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptsCallStreamRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
'none' means the model will not call any tool and will instead generate a message; this is the default when no tools are provided as part of the Prompt. 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt. 'required' means the model must call one or more of the provided tools. {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
-
prompt:
typing.Optional[PromptKernelRequestParams] — Details of your Prompt. A new Prompt version will be created if the provided details are new.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompts_call_stream_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams] — API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
num_samples:
typing.Optional[int] — The number of generations.
-
return_inputs:
typing.Optional[bool] — Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
-
logprobs:
typing.Optional[int] — Include the log probabilities of the top n tokens in the provider_response.
-
suffix:
typing.Optional[str] — The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.call(...)
-
-
-
Call a Prompt.
Calling a Prompt calls the model provider before logging the request, responses and metadata to Humanloop.
You can use the query parameters
version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.call(
    version_id="prv_Wu6zx1lAWJRqOyL8nWuZk",
    path="persona",
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    inputs={"person": "Trump"},
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
messages:
typing.Optional[typing.Sequence[ChatMessageParams]] — The messages passed to the provider chat endpoint.
-
tool_choice:
typing.Optional[PromptsCallRequestToolChoiceParams] — Controls how the model uses tools. The following options are supported:
'none' means the model will not call any tool and will instead generate a message; this is the default when no tools are provided as part of the Prompt. 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt. 'required' means the model must call one or more of the provided tools. {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
-
prompt:
typing.Optional[PromptKernelRequestParams] — Details of your Prompt. A new Prompt version will be created if the provided details are new.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
prompts_call_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
provider_api_keys:
typing.Optional[ProviderApiKeysParams] — API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.
-
num_samples:
typing.Optional[int] — The number of generations.
-
return_inputs:
typing.Optional[bool] — Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.
-
logprobs:
typing.Optional[int] — Include the log probabilities of the top n tokens in the provider_response.
-
suffix:
typing.Optional[str] — The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list(...)
-
-
-
Get a list of all Prompts.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.list(
    size=1,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)
-
-
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Prompts to fetch.
-
name:
typing.Optional[str] — Case-insensitive filter for Prompt name.
-
user_filter:
typing.Optional[str] — Case-insensitive filter for users in the Prompt. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[ProjectSortBy] — Field to sort Prompts by.
-
order:
typing.Optional[SortOrder] — Direction to sort by.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
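Combining the filters above, a sketch that collects all Prompts whose name matches a case-insensitive filter; the default filter string and page size are illustrative, and the helper assumes a configured client:

```python
def list_matching_prompts(client, name_filter="assistant"):
    # The response is iterable, so a comprehension drains every item.
    return [item for item in client.prompts.list(name=name_filter, size=50)]
```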
-
-
client.prompts.upsert(...)
-
-
-
Create a Prompt or update it with a new version if it already exists.
Prompts are identified by their
ID or their path. The parameters (i.e. the prompt template, temperature, model, etc.) determine the versions of the Prompt. If you provide a commit message, the new version will be committed; otherwise it will be uncommitted. If you try to commit an already-committed version, an exception will be raised.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.upsert(
    path="Personal Projects/Coding Assistant",
    model="gpt-4o",
    endpoint="chat",
    template=[
        {
            "content": "You are a helpful coding assistant specialising in {{language}}",
            "role": "system",
        }
    ],
    provider="openai",
    max_tokens=-1,
    temperature=0.7,
    top_p=1.0,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    other={},
    tools=[],
    linked_tools=[],
    commit_message="Initial commit",
)
-
-
-
model:
str — The model instance used, e.g. gpt-4. See supported models.
-
path:
typing.Optional[str] — Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Prompt.
-
endpoint:
typing.Optional[ModelEndpoints] — The provider model endpoint used.
-
template:
typing.Optional[PromptRequestTemplateParams] — The template contains the main structure and instructions for the model, including input variables for dynamic values.
For chat models, provide the template as a ChatTemplate (a list of messages), e.g. a system message followed by a user message with an input variable. For completion models, provide a prompt template as a string.
Input variables should be specified with double curly bracket syntax: {{input_name}}.
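To illustrate the substitution, which Humanloop performs for you at call time, here is a minimal stand-in for {{input_name}} replacement; it is not the SDK's actual templating engine:

```python
template_content = "You are a helpful coding assistant specialising in {{language}}"

def render(content: str, inputs: dict) -> str:
    # Naive double-curly-bracket substitution; the real engine is server-side.
    for name, value in inputs.items():
        content = content.replace("{{" + name + "}}", str(value))
    return content

rendered = render(template_content, {"language": "Python"})
```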
-
provider:
typing.Optional[ModelProviders] — The company providing the underlying model service.
-
max_tokens:
typing.Optional[int] — The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.
-
temperature:
typing.Optional[float] — What sampling temperature to use when making a generation. Higher values mean the model will be more creative.
-
top_p:
typing.Optional[float] — An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
-
stop:
typing.Optional[PromptRequestStopParams] — The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.
-
presence_penalty:
typing.Optional[float] — Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.
-
frequency_penalty:
typing.Optional[float] — Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.
-
other:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Other parameter values to be passed to the provider call.
-
seed:
typing.Optional[int] — If specified, the model will make a best effort to sample deterministically, but it is not guaranteed.
-
response_format:
typing.Optional[ResponseFormatParams] — The format of the response. Only {"type": "json_object"} is currently supported for chat.
-
tools:
typing.Optional[typing.Sequence[ToolFunctionParams]] — The tool specification that the model can choose to call if Tool calling is supported.
-
linked_tools:
typing.Optional[typing.Sequence[str]] — The IDs of the Tools in your organization that the model can choose to call if Tool calling is supported. The default deployed version of that tool is called.
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Additional fields to describe the Prompt. Helpful to separate Prompt versions from each other with details on how they were created or used.
-
commit_message:
typing.Optional[str] — Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.get(...)
-
-
-
Retrieve the Prompt with the given ID.
By default, the deployed version of the Prompt is returned. Use the query parameters
version_id or environment to target a specific version of the Prompt.
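For example, a sketch of retrieving the version deployed to a specific Environment; the Environment name "production" is illustrative, and the helper assumes a configured client:

```python
def get_production_version(client, prompt_id):
    # Retrieve the version deployed to the (hypothetical) "production"
    # Environment instead of the default deployed version.
    return client.prompts.get(id=prompt_id, environment="production")
```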
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.get(
    id="pr_30gco7dx6JDq4200GVOHa",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
typing.Optional[str] — A specific Version ID of the Prompt to retrieve.
-
environment:
typing.Optional[str] — Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.delete(...)
-
-
-
Delete the Prompt with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.delete(
    id="pr_30gco7dx6JDq4200GVOHa",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.move(...)
-
-
-
Move the Prompt to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.move(
    id="pr_30gco7dx6JDq4200GVOHa",
    path="new directory/new name",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
path:
typing.Optional[str] — Path of the Prompt including the Prompt name, which is used as a unique identifier.
-
name:
typing.Optional[str] — Name of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list_versions(...)
-
-
-
Get a list of all the versions of a Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_versions(
    id="pr_30gco7dx6JDq4200GVOHa",
    status="committed",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
status:
typing.Optional[VersionStatus] — Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.
-
evaluator_aggregates:
typing.Optional[bool] — Whether to include Evaluator aggregate results for the versions in the response.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.commit(...)
-
-
-
Commit a version of the Prompt with a commit message.
If the version is already committed, an exception will be raised.
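Since committing an already-committed version raises, a defensive wrapper might catch the error. The exact exception type is not specified in this reference, so this sketch catches broadly:

```python
def commit_or_skip(client, prompt_id, version_id, message):
    # Returns the commit result, or None if the version was already
    # committed (or any other error was raised by the SDK).
    try:
        return client.prompts.commit(
            id=prompt_id,
            version_id=version_id,
            commit_message=message,
        )
    except Exception:
        return None
```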
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.commit(
    id="pr_30gco7dx6JDq4200GVOHa",
    version_id="prv_F34aba5f3asp0",
    commit_message="Reiterated point about not discussing sentience",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
version_id:
str — Unique identifier for the specific version of the Prompt.
-
commit_message:
str — Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.set_deployment(...)
-
-
-
Deploy Prompt to an Environment.
Set the deployed version for the specified Environment. This Prompt will be used for calls made to the Prompt in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.set_deployment(
    id="id",
    environment_id="environment_id",
    version_id="version_id",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
environment_id:
str — Unique identifier for the Environment to deploy the Version to.
-
version_id:
str — Unique identifier for the specific version of the Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.remove_deployment(...)
-
-
-
Remove deployed Prompt from the Environment.
Remove the deployed version for the specified Environment. This Prompt will no longer be used for calls made to the Prompt in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.remove_deployment(
    id="id",
    environment_id="environment_id",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
environment_id:
str — Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Prompt.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_environments(
    id="pr_30gco7dx6JDq4200GVOHa",
)
-
-
-
id:
str — Unique identifier for Prompt.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.prompts.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Prompt.
An activated Evaluator will automatically be run on all new Logs within the Prompt for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_monitoring(
    id="pr_30gco7dx6JDq4200GVOHa",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)
-
-
-
id:
str — Unique identifier for Prompt.
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]] — Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]] — Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.tools.log(...)
-
-
-
Log to a Tool.
You can use the query parameters
version_id or environment to target an existing version of the Tool. Otherwise, the default deployed version will be chosen. Instead of targeting an existing version explicitly, you can pass Tool details in the request body. In this case, we will check whether the details correspond to an existing version of the Tool; if they do not, we will create a new version. This is helpful when you are storing or deriving your Tool details in code.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.log(
    path="math-tool",
    tool={
        "function": {
            "name": "multiply",
            "description": "Multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        }
    },
    inputs={"a": 5, "b": 7},
    output="35",
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Tool to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Tool.
-
start_time:
typing.Optional[dt.datetime] — When the logged event started.
-
end_time:
typing.Optional[dt.datetime] — When the logged event ended.
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
tool_log_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
tool:
typing.Optional[ToolKernelRequestParams] — Details of your Tool. A new Tool version will be created if the provided details are new.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.tools.update(...)
-
-
-
Update a Log.
Update the details of a Log with the given ID.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update( id="id", log_id="log_id", )
-
-
-
id:
str — Unique identifier for the Tool.
-
log_id:
strβ Unique identifier for the Log.
-
output:
typing.Optional[str]β Generated output from your model for the provided inputs. Can beNoneif logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list(...)
-
-
-
Get a list of all Tools.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.tools.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Tools to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Tool name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Tool. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[ProjectSortBy]β Field to sort Tools by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.upsert(...)
-
-
-
Create a Tool or update it with a new version if it already exists.
Tools are identified by the
ID or their path. The name, description and parameters determine the versions of the Tool.
If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.
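The commit behavior can be sketched as a two-step flow: upsert without a commit message to get an uncommitted version, then commit it explicitly. The helper below only assembles the function payload; the commented client calls assume a configured client, and the attribute names on the returned object are assumptions, not confirmed SDK fields.

```python
# Sketch: create an uncommitted Tool version, then commit it later.
# Only the payload construction below is concrete; the client calls
# (commented out) assume a valid API key.

def build_tool_function(name: str, description: str, properties: dict, required: list) -> dict:
    """Assemble the JSON Schema `function` spec expected by tools.upsert."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }

function_spec = build_tool_function(
    "multiply",
    "Multiply two numbers",
    {"a": {"type": "number"}, "b": {"type": "number"}},
    ["a", "b"],
)

# tool = client.tools.upsert(path="math-tool", function=function_spec)  # no commit_message: uncommitted
# client.tools.commit(id=tool.id, version_id=tool.version_id, commit_message="Reviewed and approved")
```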
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.upsert( path="math-tool", function={ "name": "multiply", "description": "Multiply two numbers", "parameters": { "type": "object", "properties": {"a": {"type": "number"}, "b": {"type": "number"}}, "required": ["a", "b"], }, }, commit_message="Initial commit", )
-
-
-
path:
typing.Optional[str] — Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Tool.
-
function:
typing.Optional[ToolFunctionParams]β Callable function specification of the Tool shown to the model for tool calling.
-
source_code:
typing.Optional[str]β Code source of the Tool.
-
setup_values:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Values needed to set up the Tool, defined in JSON Schema format: https://json-schema.org/
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Additional fields to describe the Tool. Helpful to separate Tool versions from each other with details on how they were created or used.
-
tool_type:
typing.Optional[FilesToolType]β Type of Tool.
-
commit_message:
typing.Optional[str]β Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.get(...)
-
-
-
Retrieve the Tool with the given ID.
By default, the deployed version of the Tool is returned. Use the query parameters
version_id or environment to target a specific version of the Tool.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.get( id="tl_789ghi", )
-
-
-
id:
strβ Unique identifier for Tool.
-
version_id:
typing.Optional[str]β A specific Version ID of the Tool to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.delete(...)
-
-
-
Delete the Tool with the given ID.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.delete( id="tl_789ghi", )
-
-
-
id:
strβ Unique identifier for Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.move(...)
-
-
-
Move the Tool to a different path or change the name.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.move( id="tl_789ghi", path="new directory/new name", )
-
-
-
id:
strβ Unique identifier for Tool.
-
path:
typing.Optional[str]β Path of the Tool including the Tool name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Tool, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list_versions(...)
-
-
-
Get a list of all the versions of a Tool.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_versions( id="tl_789ghi", status="committed", )
-
-
-
id:
strβ Unique identifier for the Tool.
-
status:
typing.Optional[VersionStatus]β Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.commit(...)
-
-
-
Commit a version of the Tool with a commit message.
If the version is already committed, an exception will be raised.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.commit( id="tl_789ghi", version_id="tv_012jkl", commit_message="Initial commit", )
-
-
-
id:
strβ Unique identifier for Tool.
-
version_id:
strβ Unique identifier for the specific version of the Tool.
-
commit_message:
strβ Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.set_deployment(...)
-
-
-
Deploy Tool to an Environment.
Set the deployed version for the specified Environment. This version of the Tool will be used for calls made to the Tool in this Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.set_deployment( id="tl_789ghi", environment_id="staging", version_id="tv_012jkl", )
-
-
-
id:
strβ Unique identifier for Tool.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.remove_deployment(...)
-
-
-
Remove deployed Tool from the Environment.
Remove the deployed version for the specified Environment. This version of the Tool will no longer be used for calls made to the Tool in this Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.remove_deployment( id="tl_789ghi", environment_id="staging", )
-
-
-
id:
strβ Unique identifier for Tool.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Tool.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.list_environments( id="tl_789ghi", )
-
-
-
id:
strβ Unique identifier for Tool.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.tools.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Tool.
An activated Evaluator will automatically be run on all new Logs within the Tool for monitoring purposes.
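For example, monitoring can be moved from one Evaluator version to another in a single request. A minimal sketch, assuming placeholder IDs; the client call is left commented since it requires a configured client:

```python
# Sketch: switch monitoring of a Tool from one Evaluator version to
# another in a single request. Both version IDs are placeholders.

old_version = {"evaluator_version_id": "evv_old0000000"}
new_version = {"evaluator_version_id": "evv_new0000000"}

# client.tools.update_monitoring(
#     id="tl_789ghi",
#     activate=[new_version],
#     deactivate=[old_version],
# )
```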
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.tools.update_monitoring( id="tl_789ghi", activate=[{"evaluator_version_id": "evv_1abc4308abd"}], )
-
-
-
id:
str
-
activate:
typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams] ]β Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams] ]β Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list(...)
-
-
-
List all Datasets.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.datasets.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Datasets to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Dataset name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Dataset. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[ProjectSortBy]β Field to sort Datasets by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.upsert(...)
-
-
-
Create a Dataset or update it with a new version if it already exists.
Datasets are identified by the
ID or their path. The datapoints determine the versions of the Dataset.
By default, the new Dataset version will be set to the list of Datapoints provided in the request. You can also create a new version by adding or removing Datapoints from an existing version by specifying action as add or remove respectively. In this case, you may specify the version_id or environment query parameters to identify the existing version to base the new version on. If neither is provided, the default deployed version will be used.
If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.
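Appending Datapoints to an existing deployed version can be sketched as follows; the path, Environment name, and Datapoint contents are illustrative, and the client call is left commented since it requires a configured client:

```python
# Sketch: add Datapoints to an existing Dataset version rather than
# replacing it. The payload is plain data; the client call (commented)
# assumes a configured Humanloop client and an existing Dataset.

new_datapoints = [
    {
        "messages": [{"role": "user", "content": "How do I rotate an API key?"}],
        "target": {"response": "Generate a new key, update your services, then revoke the old one."},
    }
]

# client.datasets.upsert(
#     path="datasets/support-queries",   # illustrative path
#     datapoints=new_datapoints,
#     action="add",                      # keep the target version's Datapoints too
#     environment="production",          # base the new version on the deployed one
#     commit_message="Append one support question",
# )
```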
Humanloop also deduplicates Datapoints. If you try to add a Datapoint that already exists, it will be ignored. If you intentionally want to add a duplicate Datapoint, you can add a unique identifier to the Datapoint's inputs such as
{_dedupe_id: <unique ID>}.
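A minimal sketch of salting a Datapoint's inputs with a unique _dedupe_id so an intentional duplicate survives deduplication (the helper function is illustrative, not part of the SDK):

```python
import uuid

# Sketch: intentionally add a duplicate Datapoint by giving its inputs a
# unique _dedupe_id, so Humanloop's deduplication treats it as distinct.

def with_dedupe_id(datapoint: dict) -> dict:
    """Return a copy of the Datapoint whose inputs carry a unique _dedupe_id."""
    inputs = dict(datapoint.get("inputs", {}))
    inputs["_dedupe_id"] = str(uuid.uuid4())
    return {**datapoint, "inputs": inputs}

duplicate = with_dedupe_id({"inputs": {"question": "What is Humanloop?"}})

# client.datasets.upsert(path="datasets/faq", datapoints=[duplicate], action="add")
```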
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.upsert( path="datasets/support-queries", datapoints=[ { "messages": [ { "role": "user", "content": "How do i manage my organizations API keys?\n", } ], "target": { "response": 'Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on "Organization Settings."\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled "API Keys" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n - **Generate New API Key:** Click on the "Generate New Key" button if you need a new API key.\n - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the "Revoke" or "Delete" button.\n - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the "Copy" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?' }, }, { "messages": [ { "role": "user", "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?", } ], "target": { "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?" }, }, ], commit_message="Add two new questions and answers", )
-
-
-
datapoints:
typing.Sequence[CreateDatapointRequestParams] — The Datapoints to create this Dataset version with. Modify the action field to determine how these Datapoints are used.
-
version_id:
typing.Optional[str] — ID of the specific Dataset version to base the created Version on. Only used when action is "add" or "remove".
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed Version to base the created Version on. Only used when action is "add" or "remove".
-
path:
typing.Optional[str] — Path of the Dataset, including the name. This locates the Dataset in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Dataset.
-
action:
typing.Optional[UpdateDatesetAction] — The action to take with the provided Datapoints.
- If "set", the created version will only contain the Datapoints provided in this request.
- If "add", the created version will contain the Datapoints provided in this request in addition to the Datapoints in the target version.
- If "remove", the created version will contain the Datapoints in the target version except for the Datapoints provided in this request.
If "add" or "remove", one of the version_id or environment query parameters may be provided.
-
attributes:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Additional fields to describe the Dataset. Helpful to separate Dataset versions from each other with details on how they were created or used.
-
commit_message:
typing.Optional[str]β Message describing the changes made. If provided, a committed version of the Dataset is created. Otherwise, an uncommitted version is created.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.get(...)
-
-
-
Retrieve the Dataset with the given ID.
Unless
include_datapoints is set to true, the response will not include the Datapoints. Use the List Datapoints endpoint (GET /{id}/datapoints) to efficiently retrieve Datapoints for a large Dataset.
By default, the deployed version of the Dataset is returned. Use the query parameters version_id or environment to target a specific version of the Dataset.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.get( id="ds_b0baF1ca7652", version_id="dsv_6L78pqrdFi2xa", include_datapoints=True, )
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
typing.Optional[str]β A specific Version ID of the Dataset to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
include_datapoints:
typing.Optional[bool] — If set to true, include all Datapoints in the response. Defaults to false. Consider using the paginated List Datapoints endpoint instead.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.delete(...)
-
-
-
Delete the Dataset with the given ID.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.delete( id="id", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.move(...)
-
-
-
Move the Dataset to a different path or change the name.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.move( id="id", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
path:
typing.Optional[str]β Path of the Dataset including the Dataset name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Dataset, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_datapoints(...)
-
-
-
List all Datapoints for the Dataset with the given ID.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.datasets.list_datapoints( id="ds_b0baF1ca7652", size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page)
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
typing.Optional[str]β A specific Version ID of the Dataset to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
page:
typing.Optional[int]β Page number for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Datapoints to fetch.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_versions(...)
-
-
-
Get a list of the versions for a Dataset.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_versions( id="ds_b0baF1ca7652", status="committed", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
status:
typing.Optional[VersionStatus]β Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.
-
include_datapoints:
typing.Optional[typing.Literal["latest_committed"]] — If set to 'latest_committed', include the Datapoints for the latest committed version. Defaults to None.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.commit(...)
-
-
-
Commit a version of the Dataset with a commit message.
If the version is already committed, an exception will be raised.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.commit( id="ds_b0baF1ca7652", version_id="dsv_6L78pqrdFi2xa", commit_message="initial commit", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
version_id:
strβ Unique identifier for the specific version of the Dataset.
-
commit_message:
strβ Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.upload_csv(...)
-
-
-
Add Datapoints from a CSV file to a Dataset.
This will create a new committed version of the Dataset with the Datapoints from the CSV file.
If either
version_id or environment is provided, the new version will be based on the specified version, with the Datapoints from the CSV file added to the existing Datapoints in the version. If neither version_id nor environment is provided, the new version will be based on the version of the Dataset that is deployed to the default Environment.
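Assembling a CSV in memory and uploading it might look like the sketch below; the column names are illustrative (check your Dataset's expected schema), and the upload call is commented since it needs a configured client:

```python
import csv
import io

# Sketch: assemble a small CSV in memory and upload it as a new committed
# Dataset version. Column names here are illustrative.

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["question", "answer"])
writer.writerow(["How do I reset my password?", "Use the 'Forgot password' link."])
csv_bytes = buffer.getvalue().encode("utf-8")

# client.datasets.upload_csv(
#     id="ds_b0baF1ca7652",
#     file=io.BytesIO(csv_bytes),
#     commit_message="Add password-reset FAQ",
# )
```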
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.upload_csv( id="id", commit_message="commit_message", )
-
-
-
id:
strβ Unique identifier for the Dataset
-
file:
core.File — See core.File for more documentation.
-
commit_message:
strβ Commit message for the new Dataset version.
-
version_id:
typing.Optional[str]β ID of the specific Dataset version to base the created Version on.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed Version to base the created Version on.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.set_deployment(...)
-
-
-
Deploy Dataset to Environment.
Set the deployed version for the specified Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.set_deployment( id="ds_b0baF1ca7652", environment_id="staging", version_id="dsv_6L78pqrdFi2xa", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.remove_deployment(...)
-
-
-
Remove deployed Dataset from Environment.
Remove the deployed version for the specified Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.remove_deployment( id="ds_b0baF1ca7652", environment_id="staging", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.datasets.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Dataset.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.datasets.list_environments( id="id", )
-
-
-
id:
strβ Unique identifier for Dataset.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.log(...)
-
-
-
Submit Evaluator judgment for an existing Log.
Creates a new Log. The evaluated Log will be set as the parent of the created Log.
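A minimal sketch of the request payload, assuming a boolean Evaluator; the exact judgment shape depends on the Evaluator's return type, and the parent ID is a placeholder:

```python
# Sketch: attach an Evaluator judgment to an existing Log. The parent_id
# and the boolean judgment value are placeholders.

log_payload = {
    "parent_id": "log_abc123",   # the evaluated Log
    "judgment": True,            # shape depends on the Evaluator's return type
    "marked_completed": True,
}

# client.evaluators.log(**log_payload)
```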
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.log( parent_id="parent_id", )
-
-
-
parent_id:
str — Identifier of the evaluated Log. The newly created Log will have this one set as its parent.
-
version_id:
typing.Optional[str]β ID of the Evaluator version to log against.
-
environment:
typing.Optional[str]β Name of the Environment identifying a deployed version to log to.
-
path:
typing.Optional[str] — Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Evaluator.
-
start_time:
typing.Optional[dt.datetime]β When the logged event started.
-
end_time:
typing.Optional[dt.datetime]β When the logged event ended.
-
output:
typing.Optional[str]β Generated output from the LLM. Only populated for LLM Evaluator Logs.
-
created_at:
typing.Optional[dt.datetime]β User defined timestamp for when the log was created.
-
error:
typing.Optional[str]β Error message if the log is an error.
-
provider_latency:
typing.Optional[float]β Duration of the logged event in seconds.
-
stdout:
typing.Optional[str]β Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Raw request sent to provider. Only populated for LLM Evaluator Logs.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider. Only populated for LLM Evaluator Logs.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β The inputs passed to the prompt template.
-
source:
typing.Optional[str]β Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]]β Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str]β Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str]β The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str]β End-user ID related to the Log.
-
create_evaluator_log_request_environment:
typing.Optional[str]β The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool]β Whether the request/response payloads will be stored on Humanloop.
-
judgment:
typing.Optional[CreateEvaluatorLogRequestJudgmentParams]β Evaluator assessment of the Log.
-
marked_completed:
typing.Optional[bool] — Whether the Log has been manually marked as completed by a user.
-
spec:
typing.Optional[CreateEvaluatorLogRequestSpecParams]
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list(...)
-
-
-
Get a list of all Evaluators.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) response = client.evaluators.list( size=1, ) for item in response: print(item) # alternatively, you can paginate page-by-page for page in response.iter_pages(): print(page)
-
-
-
page:
typing.Optional[int]β Page offset for pagination.
-
size:
typing.Optional[int]β Page size for pagination. Number of Evaluators to fetch.
-
name:
typing.Optional[str]β Case-insensitive filter for Evaluator name.
-
user_filter:
typing.Optional[str]β Case-insensitive filter for users in the Evaluator. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[ProjectSortBy]β Field to sort Evaluators by
-
order:
typing.Optional[SortOrder]β Direction to sort by.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.upsert(...)
-
-
-
Create an Evaluator or update it with a new version if it already exists.
Evaluators are identified by the
ID or their path. The spec provided determines the version of the Evaluator.
If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.upsert( path="Shared Evaluators/Accuracy Evaluator", spec={ "arguments_type": "target_required", "return_type": "number", "evaluator_type": "python", "code": "def evaluate(answer, target):\n return 0.5", }, commit_message="Initial commit", )
-
-
-
spec:
EvaluatorRequestSpecParams
-
path:
typing.Optional[str] — Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str]β ID for an existing Evaluator.
-
commit_message:
typing.Optional[str]β Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.get(...)
-
-
-
Retrieve the Evaluator with the given ID.
By default, the deployed version of the Evaluator is returned. Use the query parameters
version_id or environment to target a specific version of the Evaluator.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.get( id="ev_890bcd", )
-
-
-
id:
strβ Unique identifier for Evaluator.
-
version_id:
typing.Optional[str]β A specific Version ID of the Evaluator to retrieve.
-
environment:
typing.Optional[str]β Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.delete(...)
-
-
-
Delete the Evaluator with the given ID.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.delete( id="ev_890bcd", )
-
-
-
id:
strβ Unique identifier for Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.move(...)
-
-
-
Move the Evaluator to a different path or change the name.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.move( id="ev_890bcd", path="new directory/new name", )
-
-
-
id:
strβ Unique identifier for Evaluator.
-
path:
typing.Optional[str]β Path of the Evaluator including the Evaluator name, which is used as a unique identifier.
-
name:
typing.Optional[str]β Name of the Evaluator, which is used as a unique identifier.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list_versions(...)
-
-
-
Get a list of all the versions of an Evaluator.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.list_versions( id="ev_890bcd", )
-
-
-
id:
strβ Unique identifier for the Evaluator.
-
status:
typing.Optional[VersionStatus]β Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.
-
evaluator_aggregates:
typing.Optional[bool]β Whether to include Evaluator aggregate results for the versions in the response
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.commit(...)
-
-
-
Commit a version of the Evaluator with a commit message.
If the version is already committed, an exception will be raised.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.commit( id="ev_890bcd", version_id="evv_012def", commit_message="Initial commit", )
-
-
-
id:
str — Unique identifier for Evaluator.
-
version_id:
strβ Unique identifier for the specific version of the Evaluator.
-
commit_message:
strβ Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.set_deployment(...)
-
-
-
Deploy Evaluator to an Environment.
Set the deployed version for the specified Environment. This Evaluator will be used for calls made to the Evaluator in this Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.set_deployment( id="ev_890bcd", environment_id="staging", version_id="evv_012def", )
-
-
-
id:
strβ Unique identifier for Evaluator.
-
environment_id:
strβ Unique identifier for the Environment to deploy the Version to.
-
version_id:
strβ Unique identifier for the specific version of the Evaluator.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.remove_deployment(...)
-
-
-
Remove deployed Evaluator from the Environment.
Remove the deployed version for the specified Environment. This Evaluator will no longer be used for calls made to the Evaluator in this Environment.
-
-
-
from humanloop import Humanloop client = Humanloop( api_key="YOUR_API_KEY", ) client.evaluators.remove_deployment( id="ev_890bcd", environment_id="staging", )
-
-
-
id:
strβ Unique identifier for Evaluator.
-
environment_id:
strβ Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions]β Request-specific configuration.
-
-
client.evaluators.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Evaluator.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.list_environments(
    id="ev_890bcd",
)
-
-
-
id:
str — Unique identifier for Evaluator.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluators.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Evaluator.
An activated Evaluator will automatically be run on all new Logs within the Evaluator for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.update_monitoring(
    id="id",
)
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]] — Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]] — Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
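The minimal example above only passes id. Activation and deactivation can be combined in a single call; the following is a hedged sketch, assuming the deactivate items take the same evaluator_version_id mapping shape as the documented activate items. All IDs are placeholders, and the function is not executed here (it requires a valid API key and the humanloop package).

```python
# activate/deactivate items are plain mappings keyed by evaluator_version_id,
# matching the Sequence[...ItemParams] shapes documented above. Placeholder IDs.
ACTIVATE = [{"evaluator_version_id": "evv_012def"}]
DEACTIVATE = [{"evaluator_version_id": "evv_345ghi"}]


def swap_monitoring(api_key: str, evaluator_id: str) -> None:
    """Activate one monitoring Evaluator version while deactivating another."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    client.evaluators.update_monitoring(
        id=evaluator_id,
        activate=ACTIVATE,
        deactivate=DEACTIVATE,
    )
```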
-
client.flows.log(...)
-
-
-
Log to a Flow.
You can use query parameters
version_id, or environment, to target an existing version of the Flow. Otherwise, the default deployed version will be chosen.
-
-
-
import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.log(
    log_id="medqa_experiment_0001",
    id="fl_6o701g4jmcanPVHxdqD0O",
    flow={
        "attributes": {
            "prompt": {
                "template": "You are a helpful assistant helping with medical anamnesis",
                "model": "gpt-4o",
                "temperature": 0.8,
            },
            "tool": {
                "name": "retrieval_tool_v3",
                "description": "Retrieval tool for MedQA.",
                "source_code": "def retrieval_tool(question: str) -> str:\n    pass\n",
            },
        }
    },
    inputs={
        "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath."
    },
    output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.",
    trace_status="incomplete",
    start_time=datetime.datetime.fromisoformat(
        "2024-07-08 22:40:35+00:00",
    ),
    end_time=datetime.datetime.fromisoformat(
        "2024-07-08 22:40:39+00:00",
    ),
)
-
-
-
version_id:
typing.Optional[str] — A specific Version ID of the Flow to log to.
-
environment:
typing.Optional[str] — Name of the Environment identifying a deployed version to log to.
-
run_id:
typing.Optional[str] — Unique identifier for the Run to associate the Log to.
-
path:
typing.Optional[str] — Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Flow.
-
start_time:
typing.Optional[dt.datetime] — The start time of the Trace. Will be updated if a child Log with an earlier start time is added.
-
end_time:
typing.Optional[dt.datetime] — The end time of the Trace. Will be updated if a child Log with a later end time is added.
-
output:
typing.Optional[str] — Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.
-
created_at:
typing.Optional[dt.datetime] — User-defined timestamp for when the log was created.
-
error:
typing.Optional[str] — Error message if the log is an error.
-
provider_latency:
typing.Optional[float] — Duration of the logged event in seconds.
-
stdout:
typing.Optional[str] — Captured log and debug statements.
-
provider_request:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw request sent to the provider.
-
provider_response:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Raw response received from the provider.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the prompt template.
-
source:
typing.Optional[str] — Identifies where the model was called from.
-
metadata:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — Any additional metadata to record.
-
source_datapoint_id:
typing.Optional[str] — Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.
-
trace_parent_id:
typing.Optional[str] — The ID of the parent Log to nest this Log under in a Trace.
-
user:
typing.Optional[str] — End-user ID related to the Log.
-
flow_log_request_environment:
typing.Optional[str] — The name of the Environment the Log is associated to.
-
save:
typing.Optional[bool] — Whether the request/response payloads will be stored on Humanloop.
-
log_id:
typing.Optional[str] — The identifier for the Log. If not specified, a default ID will be generated. This allows additional Logs to be appended to the trace without waiting for Humanloop to return an ID.
-
flow:
typing.Optional[FlowKernelRequestParams] — Flow used to generate the Trace.
-
trace_status:
typing.Optional[TraceStatus] — Status of the Trace. When a Trace is marked as complete, no more Logs can be added to it. Monitoring Evaluators will only run on complete Traces. If you do not intend to add more Logs to the Trace after creation, set this to complete.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
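The log_id and trace_parent_id parameters above can be combined to build a trace without waiting for Humanloop to return IDs. Below is a hedged sketch, assuming prompts.log accepts the same trace_parent_id parameter as flows.log; the paths, inputs, and the client-chosen log ID are illustrative, and the function is not executed here (it requires a valid API key).

```python
# Client-supplied Log ID so child Logs can reference the trace immediately.
FLOW_LOG_ID = "medqa_trace_0001"


def log_trace_with_child(api_key: str) -> None:
    """Open a Flow trace, then nest a child Prompt Log under it."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    # Parent: a Flow Log left incomplete so more child Logs can be added later.
    client.flows.log(
        log_id=FLOW_LOG_ID,
        path="Personal Projects/MedQA Flow",
        inputs={"question": "placeholder question"},
        trace_status="incomplete",
    )
    # Child: a Prompt Log nested under the trace via trace_parent_id.
    client.prompts.log(
        path="Personal Projects/MedQA Prompt",
        messages=[{"role": "user", "content": "placeholder question"}],
        output="placeholder answer",
        trace_parent_id=FLOW_LOG_ID,
    )
```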
-
client.flows.get(...)
-
-
-
Retrieve the Flow with the given ID.
By default, the deployed version of the Flow is returned. Use the query parameters
version_id or environment to target a specific version of the Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.get(
    id="fl_6o701g4jmcanPVHxdqD0O",
)
-
-
-
id:
str — Unique identifier for Flow.
-
version_id:
typing.Optional[str] — A specific Version ID of the Flow to retrieve.
-
environment:
typing.Optional[str] — Name of the Environment to retrieve a deployed Version from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.delete(...)
-
-
-
Delete the Flow with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.delete(
    id="fl_6o701g4jmcanPVHxdqD0O",
)
-
-
-
id:
str — Unique identifier for Flow.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.move(...)
-
-
-
Move the Flow to a different path or change the name.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.move(
    id="fl_6o701g4jmcanPVHxdqD0O",
    path="new directory/new name",
)
-
-
-
id:
str — Unique identifier for Flow.
-
path:
typing.Optional[str] — Path of the Flow including the Flow name, which is used as a unique identifier.
-
name:
typing.Optional[str] — Name of the Flow.
-
directory_id:
typing.Optional[str] — Unique identifier for the Directory to move the Flow to. Starts with dir_.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.list(...)
-
-
-
Get a list of Flows.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.flows.list(
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Flows to fetch.
-
name:
typing.Optional[str] — Case-insensitive filter for Flow name.
-
user_filter:
typing.Optional[str] — Case-insensitive filter for users in the Flow. This filter matches against both email address and name of users.
-
sort_by:
typing.Optional[ProjectSortBy] — Field to sort Flows by.
-
order:
typing.Optional[SortOrder] — Direction to sort by.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.upsert(...)
-
-
-
Create or update a Flow.
Flows can also be identified by the
ID or their path. If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.upsert(
    path="Personal Projects/MedQA Flow",
    attributes={
        "prompt": {
            "template": "You are a helpful medical assistant helping with medical anamnesis. Answer {{question}}",
            "model": "gpt-4o",
            "temperature": 0.8,
        },
        "tool": {
            "name": "retrieval_tool_v3",
            "description": "Retrieval tool for MedQA.",
            "source_code": "def retrieval_tool(question: str) -> str:\n    pass\n",
        },
    },
    commit_message="Initial commit",
)
-
-
-
attributes:
typing.Dict[str, typing.Optional[typing.Any]] — A key-value object identifying the Flow Version.
-
path:
typing.Optional[str] — Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.
-
id:
typing.Optional[str] — ID for an existing Flow.
-
commit_message:
typing.Optional[str] — Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
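Putting upsert together with the commit and set_deployment endpoints documented further below gives a simple release flow. This is a hedged sketch, assuming the upsert response exposes id and version_id fields; the path, attributes, commit message, and environment ID are illustrative, and the function is not executed here (it requires a valid API key).

```python
COMMIT_MESSAGE = "Tune temperature"  # illustrative commit message


def release_flow(api_key: str, environment_id: str) -> str:
    """Upsert an uncommitted Flow version, commit it, then deploy it."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    # 1. Create (or update to) a new, uncommitted version.
    flow = client.flows.upsert(
        path="Personal Projects/MedQA Flow",
        attributes={"prompt": {"model": "gpt-4o", "temperature": 0.8}},
    )
    # 2. Commit the version so it can no longer change.
    client.flows.commit(
        id=flow.id,
        version_id=flow.version_id,
        commit_message=COMMIT_MESSAGE,
    )
    # 3. Point the Environment at the committed version.
    client.flows.set_deployment(
        id=flow.id,
        environment_id=environment_id,
        version_id=flow.version_id,
    )
    return flow.version_id
```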
-
client.flows.update_log(...)
-
-
-
Update the status, inputs, and output of a Flow Log.
Marking a Flow Log as complete will trigger any monitoring Evaluators to run. Inputs and output (or error) must be provided in order to mark it as complete.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.update_log(
    log_id="medqa_experiment_0001",
    inputs={
        "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath."
    },
    output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.",
    trace_status="complete",
)
-
-
-
log_id:
str — Unique identifier of the Flow Log.
-
trace_status:
TraceStatus — Status of the Trace. When a Trace is marked as complete, no more Logs can be added to it. Monitoring Evaluators will only run on completed Traces.
-
inputs:
typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] — The inputs passed to the Flow Log.
-
output:
typing.Optional[str] — The output of the Flow Log. Provide None to unset the existing output value. Provide either this or error.
-
error:
typing.Optional[str] — The error message of the Flow Log. Provide None to unset the existing error value. Provide either this or output.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
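Since output and error are mutually exclusive, a failed trace is closed by providing error instead of output. A hedged sketch (the log_id is whatever you passed to, or received from, flows.log; the function is not executed here, as it requires a valid API key):

```python
def fail_trace(api_key: str, log_id: str, message: str) -> None:
    """Mark a Flow trace complete with an error instead of an output."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    client.flows.update_log(
        log_id=log_id,
        inputs={"question": "placeholder question"},
        error=message,  # provide either error or output, never both
        trace_status="complete",  # triggers any monitoring Evaluators
    )
```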
-
client.flows.list_versions(...)
-
-
-
Get a list of all the versions of a Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.list_versions(
    id="fl_6o701g4jmcanPVHxdqD0O",
    status="committed",
)
-
-
-
id:
str — Unique identifier for Flow.
-
status:
typing.Optional[VersionStatus] — Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.
-
evaluator_aggregates:
typing.Optional[bool] — Whether to include Evaluator aggregate results for the versions in the response.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.commit(...)
-
-
-
Commit a version of the Flow with a commit message.
If the version is already committed, an exception will be raised.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.commit(
    id="fl_6o701g4jmcanPVHxdqD0O",
    version_id="flv_6o701g4jmcanPVHxdqD0O",
    commit_message="RAG lookup tool bug fixing",
)
-
-
-
id:
str — Unique identifier for Flow.
-
version_id:
str — Unique identifier for the specific version of the Flow.
-
commit_message:
str — Message describing the changes made.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.set_deployment(...)
-
-
-
Deploy Flow to an Environment.
Set the deployed version for the specified Environment. This Flow will be used for calls made to the Flow in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.set_deployment(
    id="fl_6o701g4jmcanPVHxdqD0O",
    environment_id="staging",
    version_id="flv_6o701g4jmcanPVHxdqD0O",
)
-
-
-
id:
str — Unique identifier for Flow.
-
environment_id:
str — Unique identifier for the Environment to deploy the Version to.
-
version_id:
str — Unique identifier for the specific version of the Flow.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.remove_deployment(...)
-
-
-
Remove deployed Flow from the Environment.
Remove the deployed version for the specified Environment. This Flow will no longer be used for calls made to the Flow in this Environment.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.remove_deployment(
    id="fl_6o701g4jmcanPVHxdqD0O",
    environment_id="staging",
)
-
-
-
id:
str — Unique identifier for Flow.
-
environment_id:
str — Unique identifier for the Environment to remove the deployment from.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.list_environments(...)
-
-
-
List all Environments and their deployed versions for the Flow.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.list_environments(
    id="fl_6o701g4jmcanPVHxdqD0O",
)
-
-
-
id:
str — Unique identifier for Flow.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.flows.update_monitoring(...)
-
-
-
Activate and deactivate Evaluators for monitoring the Flow.
An activated Evaluator will automatically be run on all new "completed" Logs within the Flow for monitoring purposes.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.update_monitoring(
    id="fl_6o701g4jmcanPVHxdqD0O",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)
-
-
-
id:
str
-
activate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams]] — Evaluators to activate for Monitoring. These will be automatically run on new Logs.
-
deactivate:
typing.Optional[typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams]] — Evaluators to deactivate. These will not be run on new Logs.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.directories.list()
-
-
-
Retrieve a list of all Directories.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.list()
-
-
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.directories.create(...)
-
-
-
Creates a Directory.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.create()
-
-
-
name:
typing.Optional[str] — Name of the directory to create.
-
parent_id:
typing.Optional[str] — ID of the parent directory. Starts with dir_.
-
path:
typing.Optional[str] — Path to create the directory in, relative to the root directory. If the path does not exist, it will be created. Includes name, e.g. path/to/directory.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.directories.get(...)
-
-
-
Fetches a directory by ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.get(
    id="id",
)
-
-
-
id:
str — String ID of directory. Starts with dir_.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.directories.delete(...)
-
-
-
Delete the Directory with the given ID.
The Directory must be empty (i.e. contain no Directories or Files).
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.delete(
    id="id",
)
-
-
-
id:
str — Unique identifier for Directory. Starts with dir_.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.directories.update(...)
-
-
-
Update the Directory with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.update(
    id="id",
)
-
-
-
id:
str — Unique identifier for Directory. Starts with dir_.
-
name:
typing.Optional[str] — Name to set for the directory.
-
parent_id:
typing.Optional[str] — ID of the parent directory. Specify this to move directories. Starts with dir_.
-
path:
typing.Optional[str] — Path to move the directory to, relative to the root directory. Specify this to move directories. Includes name, e.g. path/to/directory.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.files.list(...)
-
-
-
Get a paginated list of files.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.list()
-
-
-
page:
typing.Optional[int] — Page offset for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of files to fetch.
-
name:
typing.Optional[str] — Case-insensitive filter for file name.
-
type:
typing.Optional[typing.Union[FileType, typing.Sequence[FileType]]] — List of file types to filter for.
-
environment:
typing.Optional[str] — Case-sensitive filter for files with a deployment in the specified environment. Requires the environment name.
-
sort_by:
typing.Optional[ProjectSortBy] — Field to sort files by.
-
order:
typing.Optional[SortOrder] — Direction to sort by.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.list(...)
-
-
-
Retrieve a list of Evaluations for the specified File.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.evaluations.list(
    file_id="pr_30gco7dx6JDq4200GVOHa",
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
file_id:
str — Filter by File ID. Only Evaluations for the specified File will be returned.
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Evaluations to fetch.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.create(...)
-
-
-
Create an Evaluation.
Create a new Evaluation by specifying the File to evaluate, and a name for the Evaluation. You can then add Runs to this Evaluation using the
POST /evaluations/{id}/runs endpoint.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create(
    evaluators=[{"version_id": "version_id"}],
)
-
-
-
evaluators:
typing.Sequence[CreateEvaluationRequestEvaluatorsItemParams] — The Evaluators used to evaluate.
-
file:
typing.Optional[FileRequestParams] — The File to associate with the Evaluation. This File contains the Logs you're evaluating.
-
name:
typing.Optional[str] — Name of the Evaluation to help identify it. Must be unique within the associated File.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.add_evaluators(...)
-
-
-
Add Evaluators to an Evaluation.
The Evaluators will be run on the Logs generated for the Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_evaluators(
    id="id",
    evaluators=[{"version_id": "version_id"}],
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
evaluators:
typing.Sequence[AddEvaluatorsRequestEvaluatorsItemParams] — The Evaluators to add to this Evaluation.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.remove_evaluator(...)
-
-
-
Remove an Evaluator from an Evaluation.
The Evaluator will no longer be run on the Logs in the Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.remove_evaluator(
    id="id",
    evaluator_version_id="evaluator_version_id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
evaluator_version_id:
str — Unique identifier for Evaluator Version.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.get(...)
-
-
-
Get an Evaluation.
This includes the Evaluators associated with the Evaluation and metadata about the Evaluation, such as its name.
To get the Runs associated with the Evaluation, use the
GET /evaluations/{id}/runs endpoint. To retrieve stats for the Evaluation, use the GET /evaluations/{id}/stats endpoint.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get(
    id="ev_567yza",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.delete(...)
-
-
-
Delete an Evaluation.
The Runs and Evaluators in the Evaluation will not be deleted.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.delete(
    id="ev_567yza",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.list_runs_for_evaluation(...)
-
-
-
List all Runs for an Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.list_runs_for_evaluation(
    id="id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.create_run(...)
-
-
-
Create an Evaluation Run.
Optionally specify the Dataset and version to be evaluated.
Humanloop will automatically start generating Logs and running Evaluators where
orchestrated=true. If you are generating Logs yourself, you can set orchestrated=false and then generate and submit the required Logs via the API.
If dataset and version are provided, you can set use_existing_logs=True to reuse existing Logs, avoiding generating new Logs unnecessarily. Logs that are associated with the specified Version and have source_datapoint_id referencing a datapoint in the specified Dataset will be associated with the Run.
To keep updated on the progress of the Run, you can poll the Run using the
GET /evaluations/{id}/runs endpoint and check its status.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create_run(
    id="id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
dataset:
typing.Optional[CreateRunRequestDatasetParams] — Dataset to use in this Run.
-
version:
typing.Optional[CreateRunRequestVersionParams] — Version to use in this Run.
-
orchestrated:
typing.Optional[bool] — Whether the Run is orchestrated by Humanloop. If True, Humanloop will generate Logs for the Run; dataset and version must be provided. If False, a Log for the Prompt/Tool should be submitted by the user via the API.
-
use_existing_logs:
typing.Optional[bool] — If True, the Run will be initialized with existing Logs associated with the Dataset and Version. If False, the Run will be initialized with no Logs. Can only be set to True when both dataset and version are provided.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
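The non-orchestrated workflow described above can be sketched end-to-end: create the Run with orchestrated=False, attach Logs you generated yourself, then read the aggregate stats. This is a hedged sketch, assuming the create_run response exposes an id field; the add_logs_to_run and get_stats endpoints are documented below, and the function is not executed here (it requires a valid API key).

```python
from typing import List, Sequence


def run_external_eval(api_key: str, evaluation_id: str, log_ids: Sequence[str]):
    """Create a non-orchestrated Run, attach externally generated Logs, read stats."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    # Humanloop will not generate Logs for this Run; we submit them ourselves.
    run = client.evaluations.create_run(
        id=evaluation_id,
        orchestrated=False,
    )
    client.evaluations.add_logs_to_run(
        id=evaluation_id,
        run_id=run.id,
        log_ids=list(log_ids),
    )
    # Aggregate stats per Run, including Evaluator statistics.
    return client.evaluations.get_stats(id=evaluation_id)
```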
-
client.evaluations.add_existing_run(...)
-
-
-
Add an existing Run to the specified Evaluation.
This is useful if you want to compare the Runs in this Evaluation with an existing Run that exists within another Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_existing_run(
    id="id",
    run_id="run_id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
run_id:
str — Unique identifier for Run.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.remove_run(...)
-
-
-
Remove a Run from an Evaluation.
The Logs and Versions used in the Run will not be deleted. If this Run is used in any other Evaluations, it will still be available in those Evaluations.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.remove_run(
    id="id",
    run_id="run_id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
run_id:
str — Unique identifier for Run.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.update_evaluation_run(...)
-
-
-
Update an Evaluation Run.
Specify
control=true to use this Run as the control Run for the Evaluation. You can cancel a running/pending Run, or mark a Run that uses external or human Evaluators as completed.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.update_evaluation_run(
    id="id",
    run_id="run_id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
run_id:
str — Unique identifier for Run.
-
control:
typing.Optional[bool] — If True, this Run will be used as the control in the Evaluation. Stats for other Runs will be compared to this Run. This will replace any existing control Run.
-
status:
typing.Optional[EvaluationStatus] — Used to set the Run to cancelled or completed. Can only be used if the Run is currently pending or running.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.add_logs_to_run(...)
-
-
-
Add the specified Logs to a Run.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_logs_to_run(
    id="id",
    run_id="run_id",
    log_ids=["log_ids"],
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
run_id:
str — Unique identifier for Run.
-
log_ids:
typing.Sequence[str] — The IDs of the Logs to add to the Run.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.get_stats(...)
-
-
-
Get Evaluation Stats.
Retrieve aggregate stats for the specified Evaluation. This includes the number of generated Logs for each Run and the corresponding Evaluator statistics (such as the mean and percentiles).
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get_stats(
    id="id",
)
-
-
-
id:
str — Unique identifier for Evaluation.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.evaluations.get_logs(...)
-
-
-
Get the Logs associated to a specific Evaluation.
This returns the Logs associated with all Runs within the Evaluation.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get_logs(
    id="id",
)
-
-
-
id:
str — String ID of evaluation. Starts with ev_ or evr_.
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Logs to fetch.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.logs.list(...)
-
-
-
List all Logs for the given filter criteria.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.logs.list(
    file_id="file_123abc",
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
-
-
-
file_id:
str — Unique identifier for the File to list Logs for.
-
page:
typing.Optional[int] — Page number for pagination.
-
size:
typing.Optional[int] — Page size for pagination. Number of Logs to fetch.
-
version_id:
typing.Optional[str] — If provided, only Logs belonging to the specified Version will be returned.
-
version_status:
typing.Optional[VersionStatus] — If provided, only Logs belonging to Versions with the specified status will be returned.
-
search:
typing.Optional[str] — If provided, only Logs that contain the provided string in their inputs and output will be returned.
-
metadata_search:
typing.Optional[str] — If provided, only Logs that contain the provided string in their metadata will be returned.
-
start_date:
typing.Optional[dt.datetime] — If provided, only Logs created after the specified date will be returned.
-
end_date:
typing.Optional[dt.datetime] — If provided, only Logs created before the specified date will be returned.
-
include_parent:
typing.Optional[bool] — If true, include the full parent Log in the response. Only applicable when retrieving Evaluator Logs.
-
in_trace_filter:
typing.Optional[typing.Union[bool, typing.Sequence[bool]]] — If true, return Logs that are associated to a Trace; if false, return Logs that are not associated to a Trace.
-
sample_n:
typing.Optional[int] — If provided, only a random sample of approximately N Logs will be returned.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
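The date and sampling filters above combine naturally; here is a hedged sketch that fetches an approximate sample of the last week's Logs for a File. The file ID is a placeholder, and the listing function is not executed here (it requires a valid API key).

```python
import datetime


def window_start(days: int) -> datetime.datetime:
    """Timezone-aware start of a lookback window, suitable for start_date."""
    return datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=days)


def sample_recent_logs(api_key: str, file_id: str):
    """Return an approximate random sample of the last week's Logs."""
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    return list(
        client.logs.list(
            file_id=file_id,
            start_date=window_start(7),  # only Logs created in the last 7 days
            sample_n=100,                # approximately 100 randomly sampled Logs
        )
    )
```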
-
client.logs.delete(...)
-
-
-
Delete Logs with the given IDs.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.logs.delete(
    id="string",
)
-
-
-
id:
typing.Optional[typing.Union[str, typing.Sequence[str]]] — Unique identifiers for the Logs to delete.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-
client.logs.get(...)
-
-
-
Retrieve the Log with the given ID.
-
-
-
from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.logs.get(
    id="prv_Wu6zx1lAWJRqOyL8nWuZk",
)
-
-
-
id:
str — Unique identifier for Log.
-
request_options:
typing.Optional[RequestOptions] — Request-specific configuration.
-
-