
Reference

Prompts

client.prompts.log(...)

πŸ“ Description

Log to a Prompt.

You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.

πŸ”Œ Usage

import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.log(
    path="persona",
    prompt={
        "model": "gpt-4",
        "template": [
            {
                "role": "system",
                "content": "You are {{person}}. Answer questions as this person. Do not break character.",
            }
        ],
    },
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    inputs={"person": "Trump"},
    created_at=datetime.datetime.fromisoformat(
        "2024-07-19 00:29:35.178000+00:00",
    ),
    provider_latency=6.5931549072265625,
    output_message={
        "content": "Well, you know, there is so much secrecy involved in government, folks, it's unbelievable. They don't want to tell you everything. They don't tell me everything! But about Roswell, it’s a very popular question. I know, I just know, that something very, very peculiar happened there. Was it a weather balloon? Maybe. Was it something extraterrestrial? Could be. I'd love to go down and open up all the classified documents, believe me, I would. But they don't let that happen. The Deep State, folks, the Deep State. They’re unbelievable. They want to keep everything a secret. But whatever the truth is, I can tell you this: it’s something big, very very big. Tremendous, in fact.",
        "role": "assistant",
    },
    prompt_tokens=100,
    output_tokens=220,
    prompt_cost=1e-05,
    output_cost=0.0002,
    finish_reason="stop",
)
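
If you already know which version you want to log against, you can target it directly instead of passing Prompt details. A minimal sketch, reusing the client constructed above (the version ID is hypothetical):

client.prompts.log(
    version_id="prv_1234567890",
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    output="No comment.",
)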

βš™οΈ Parameters

version_id: typing.Optional[str] β€” A specific Version ID of the Prompt to log to.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

run_id: typing.Optional[str] β€” Unique identifier for the Run to associate the Log to.

path: typing.Optional[str] β€” Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Prompt.

output_message: typing.Optional[ChatMessageParams] β€” The message returned by the provider.

prompt_tokens: typing.Optional[int] β€” Number of tokens in the prompt used to generate the output.

output_tokens: typing.Optional[int] β€” Number of tokens in the output generated by the model.

prompt_cost: typing.Optional[float] β€” Cost in dollars associated with the tokens in the prompt.

output_cost: typing.Optional[float] β€” Cost in dollars associated with the tokens in the output.

finish_reason: typing.Optional[str] β€” Reason the generation finished.

messages: typing.Optional[typing.Sequence[ChatMessageParams]] β€” The messages passed to the provider chat endpoint.

tool_choice: typing.Optional[PromptLogRequestToolChoiceParams]

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.
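
For example, a sketch of forcing a specific function call when logging (the tool name lookup_ufo_files is hypothetical and assumes a matching tool is defined on the Prompt):

client.prompts.log(
    path="persona",
    tool_choice={"type": "function", "function": {"name": "lookup_ufo_files"}},
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
)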

prompt: typing.Optional[PromptKernelRequestParams] β€” Details of your Prompt. A new Prompt version will be created if the provided details are new.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

output: typing.Optional[str] β€” Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

prompt_log_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated with.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.update_log(...)

πŸ“ Description

Update a Log.

Update the details of a Log with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_log(
    id="id",
    log_id="log_id",
)
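
A fuller sketch, assuming you created a skeleton Log earlier and now want to attach the provider response (the log ID is hypothetical):

client.prompts.update_log(
    id="pr_30gco7dx6JDq4200GVOHa",
    log_id="log_1234567890",
    output_message={
        "role": "assistant",
        "content": "Well, folks, nobody really knows.",
    },
    output_tokens=12,
    finish_reason="stop",
)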

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

log_id: str β€” Unique identifier for the Log.

output_message: typing.Optional[ChatMessageParams] β€” The message returned by the provider.

prompt_tokens: typing.Optional[int] β€” Number of tokens in the prompt used to generate the output.

output_tokens: typing.Optional[int] β€” Number of tokens in the output generated by the model.

prompt_cost: typing.Optional[float] β€” Cost in dollars associated with the tokens in the prompt.

output_cost: typing.Optional[float] β€” Cost in dollars associated with the tokens in the output.

finish_reason: typing.Optional[str] β€” Reason the generation finished.

messages: typing.Optional[typing.Sequence[ChatMessageParams]] β€” The messages passed to the provider chat endpoint.

tool_choice: typing.Optional[PromptLogUpdateRequestToolChoiceParams]

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

output: typing.Optional[str] β€” Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.call_stream(...)

πŸ“ Description

Call a Prompt.

Calling a Prompt calls the model provider before logging the request, response, and metadata to Humanloop.

You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.

πŸ”Œ Usage

import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.call_stream(
    version_id="string",
    environment="string",
    path="string",
    id="string",
    messages=[
        {
            "content": "string",
            "name": "string",
            "tool_call_id": "string",
            "role": "user",
            "tool_calls": [
                {
                    "id": "string",
                    "type": "function",
                    "function": {"name": "string"},
                }
            ],
        }
    ],
    prompt={"model": "string"},
    inputs={"string": {"key": "value"}},
    source="string",
    metadata={"string": {"key": "value"}},
    start_time=datetime.datetime.fromisoformat(
        "2024-01-15 09:30:00+00:00",
    ),
    end_time=datetime.datetime.fromisoformat(
        "2024-01-15 09:30:00+00:00",
    ),
    source_datapoint_id="string",
    trace_parent_id="string",
    user="string",
    prompts_call_stream_request_environment="string",
    save=True,
    provider_api_keys={
        "openai": "string",
        "ai_21": "string",
        "mock": "string",
        "anthropic": "string",
        "bedrock": "string",
        "cohere": "string",
        "openai_azure": "string",
        "openai_azure_endpoint": "string",
    },
    num_samples=1,
    return_inputs=True,
    logprobs=1,
    suffix="string",
)
for chunk in response:
    print(chunk)

βš™οΈ Parameters

version_id: typing.Optional[str] β€” A specific Version ID of the Prompt to log to.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

path: typing.Optional[str] β€” Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Prompt.

messages: typing.Optional[typing.Sequence[ChatMessageParams]] β€” The messages passed to the provider chat endpoint.

tool_choice: typing.Optional[PromptsCallStreamRequestToolChoiceParams]

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

prompt: typing.Optional[PromptKernelRequestParams] β€” Details of your Prompt. A new Prompt version will be created if the provided details are new.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

prompts_call_stream_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated with.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

provider_api_keys: typing.Optional[ProviderApiKeysParams] β€” API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: typing.Optional[int] β€” The number of generations.

return_inputs: typing.Optional[bool] β€” Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: typing.Optional[int] β€” Include the log probabilities of the top n tokens in the provider_response.

suffix: typing.Optional[str] β€” The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.call(...)

πŸ“ Description

Call a Prompt.

Calling a Prompt calls the model provider before logging the request, response, and metadata to Humanloop.

You can use the query parameters version_id or environment to target an existing version of the Prompt. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Prompt details in the request body. In this case, we will check whether the details correspond to an existing version of the Prompt; if they do not, we will create a new version. This is helpful when you are storing or deriving your Prompt details in code.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.call(
    version_id="prv_Wu6zx1lAWJRqOyL8nWuZk",
    path="persona",
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    inputs={"person": "Trump"},
)

βš™οΈ Parameters

version_id: typing.Optional[str] β€” A specific Version ID of the Prompt to log to.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

path: typing.Optional[str] β€” Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Prompt.

messages: typing.Optional[typing.Sequence[ChatMessageParams]] β€” The messages passed to the provider chat endpoint.

tool_choice: typing.Optional[PromptsCallRequestToolChoiceParams]

Controls how the model uses tools. The following options are supported:

  • 'none' means the model will not call any tool and instead generates a message; this is the default when no tools are provided as part of the Prompt.
  • 'auto' means the model can decide to call one or more of the provided tools; this is the default when tools are provided as part of the Prompt.
  • 'required' means the model must call one or more of the provided tools.
  • {'type': 'function', 'function': {'name': <TOOL_NAME>}} forces the model to use the named function.

prompt: typing.Optional[PromptKernelRequestParams] β€” Details of your Prompt. A new Prompt version will be created if the provided details are new.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

prompts_call_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated with.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

provider_api_keys: typing.Optional[ProviderApiKeysParams] β€” API keys required by each provider to make API calls. The API keys provided here are not stored by Humanloop. If not specified here, Humanloop will fall back to the key saved to your organization.

num_samples: typing.Optional[int] β€” The number of generations.

return_inputs: typing.Optional[bool] β€” Whether to return the inputs in the response. If false, the response will contain an empty dictionary under inputs. This is useful for reducing the size of the response. Defaults to true.

logprobs: typing.Optional[int] β€” Include the log probabilities of the top n tokens in the provider_response.

suffix: typing.Optional[str] β€” The suffix that comes after a completion of inserted text. Useful for completions that act like inserts.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.list(...)

πŸ“ Description

Get a list of all Prompts.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.prompts.list(
    size=1,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Prompts to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for Prompt name.

user_filter: typing.Optional[str] β€” Case-insensitive filter for users in the Prompt. This filter matches against both email address and name of users.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort Prompts by.

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.upsert(...)

πŸ“ Description

Create a Prompt or update it with a new version if it already exists.

Prompts are identified by the ID or their path. The parameters (i.e. the prompt template, temperature, model etc.) determine the versions of the Prompt.

If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.upsert(
    path="Personal Projects/Coding Assistant",
    model="gpt-4o",
    endpoint="chat",
    template=[
        {
            "content": "You are a helpful coding assistant specialising in {{language}}",
            "role": "system",
        }
    ],
    provider="openai",
    max_tokens=-1,
    temperature=0.7,
    top_p=1.0,
    presence_penalty=0.0,
    frequency_penalty=0.0,
    other={},
    tools=[],
    linked_tools=[],
    commit_message="Initial commit",
)

βš™οΈ Parameters

model: str β€” The model instance used, e.g. gpt-4. See supported models.

path: typing.Optional[str] β€” Path of the Prompt, including the name. This locates the Prompt in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Prompt.

endpoint: typing.Optional[ModelEndpoints] β€” The provider model endpoint used.

template: typing.Optional[PromptRequestTemplateParams]

The template contains the main structure and instructions for the model, including input variables for dynamic values.

For chat models, provide the template as a ChatTemplate (a list of messages), e.g. a system message, followed by a user message with an input variable. For completion models, provide a prompt template as a string.

Input variables should be specified with double curly bracket syntax: {{input_name}}.
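
For example, a minimal sketch of a completion-style template passed as a string (the path, model, and endpoint value here are hypothetical):

client.prompts.upsert(
    path="Personal Projects/Summariser",
    model="gpt-3.5-turbo-instruct",
    endpoint="complete",
    template="Summarise the following text in one sentence:\n\n{{text}}",
)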

provider: typing.Optional[ModelProviders] β€” The company providing the underlying model service.

max_tokens: typing.Optional[int] β€” The maximum number of tokens to generate. Provide max_tokens=-1 to dynamically calculate the maximum number of tokens to generate given the length of the prompt.

temperature: typing.Optional[float] β€” What sampling temperature to use when making a generation. Higher values mean the model will be more creative.

top_p: typing.Optional[float] β€” An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

stop: typing.Optional[PromptRequestStopParams] β€” The string (or list of strings) after which the model will stop generating. The returned text will not contain the stop sequence.

presence_penalty: typing.Optional[float] β€” Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the generation so far.

frequency_penalty: typing.Optional[float] β€” Number between -2.0 and 2.0. Positive values penalize new tokens based on how frequently they appear in the generation so far.

other: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Other parameter values to be passed to the provider call.

seed: typing.Optional[int] β€” If specified, the model will make a best effort to sample deterministically, but it is not guaranteed.

response_format: typing.Optional[ResponseFormatParams] β€” The format of the response. Only {"type": "json_object"} is currently supported for chat.

tools: typing.Optional[typing.Sequence[ToolFunctionParams]] β€” The tool specification that the model can choose to call if Tool calling is supported.

linked_tools: typing.Optional[typing.Sequence[str]] β€” The IDs of the Tools in your organization that the model can choose to call if Tool calling is supported. The default deployed version of that tool is called.

attributes: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Additional fields to describe the Prompt. Helpful to separate Prompt versions from each other with details on how they were created or used.

commit_message: typing.Optional[str] β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.get(...)

πŸ“ Description

Retrieve the Prompt with the given ID.

By default, the deployed version of the Prompt is returned. Use the query parameters version_id or environment to target a specific version of the Prompt.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.get(
    id="pr_30gco7dx6JDq4200GVOHa",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

version_id: typing.Optional[str] β€” A specific Version ID of the Prompt to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.delete(...)

πŸ“ Description

Delete the Prompt with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.delete(
    id="pr_30gco7dx6JDq4200GVOHa",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.move(...)

πŸ“ Description

Move the Prompt to a different path or change the name.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.move(
    id="pr_30gco7dx6JDq4200GVOHa",
    path="new directory/new name",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

path: typing.Optional[str] β€” Path of the Prompt including the Prompt name, which is used as a unique identifier.

name: typing.Optional[str] β€” Name of the Prompt.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.list_versions(...)

πŸ“ Description

Get a list of all the versions of a Prompt.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_versions(
    id="pr_30gco7dx6JDq4200GVOHa",
    status="committed",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

status: typing.Optional[VersionStatus] β€” Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.

evaluator_aggregates: typing.Optional[bool] β€” Whether to include Evaluator aggregate results for the versions in the response.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.commit(...)

πŸ“ Description

Commit a version of the Prompt with a commit message.

If the version is already committed, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.commit(
    id="pr_30gco7dx6JDq4200GVOHa",
    version_id="prv_F34aba5f3asp0",
    commit_message="Reiterated point about not discussing sentience",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

version_id: str β€” Unique identifier for the specific version of the Prompt.

commit_message: str β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.set_deployment(...)

πŸ“ Description

Deploy Prompt to an Environment.

Set the deployed version for the specified Environment. This Prompt will be used for calls made to the Prompt in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.set_deployment(
    id="id",
    environment_id="environment_id",
    version_id="version_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

environment_id: str β€” Unique identifier for the Environment to deploy the Version to.

version_id: str β€” Unique identifier for the specific version of the Prompt.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.remove_deployment(...)

πŸ“ Description

Remove deployed Prompt from the Environment.

Remove the deployed version for the specified Environment. This Prompt will no longer be used for calls made to the Prompt in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.remove_deployment(
    id="id",
    environment_id="environment_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

environment_id: str β€” Unique identifier for the Environment to remove the deployment from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.list_environments(...)

πŸ“ Description

List all Environments and their deployed versions for the Prompt.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.list_environments(
    id="pr_30gco7dx6JDq4200GVOHa",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Prompt.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.prompts.update_monitoring(...)

πŸ“ Description

Activate and deactivate Evaluators for monitoring the Prompt.

An activated Evaluator will automatically be run on all new Logs within the Prompt for monitoring purposes.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.prompts.update_monitoring(
    id="pr_30gco7dx6JDq4200GVOHa",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)
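
To turn an Evaluator off again, pass it under deactivate instead; a minimal sketch, assuming deactivate items take the same shape as activate items:

client.prompts.update_monitoring(
    id="pr_30gco7dx6JDq4200GVOHa",
    deactivate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)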

βš™οΈ Parameters

id: str

activate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams] ] β€” Evaluators to activate for Monitoring. These will be automatically run on new Logs.

deactivate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams] ] β€” Evaluators to deactivate. These will not be run on new Logs.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Tools

client.tools.log(...)

πŸ“ Description

Log to a Tool.

You can use the query parameters version_id or environment to target an existing version of the Tool. Otherwise, the default deployed version will be chosen.

Instead of targeting an existing version explicitly, you can pass Tool details in the request body. In this case, we will check whether the details correspond to an existing version of the Tool; if they do not, we will create a new version. This is helpful when you are storing or deriving your Tool details in code.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.log(
    path="math-tool",
    tool={
        "function": {
            "name": "multiply",
            "description": "Multiply two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        }
    },
    inputs={"a": 5, "b": 7},
    output="35",
)

βš™οΈ Parameters

version_id: typing.Optional[str] β€” A specific Version ID of the Tool to log to.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

path: typing.Optional[str] β€” Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Tool.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

output: typing.Optional[str] β€” Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

tool_log_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated with.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

tool: typing.Optional[ToolKernelRequestParams] β€” Details of your Tool. A new Tool version will be created if the provided details are new.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.update(...)

πŸ“ Description

Update a Log.

Update the details of a Log with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.update(
    id="id",
    log_id="log_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for the Tool.

log_id: str β€” Unique identifier for the Log.

output: typing.Optional[str] β€” Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.list(...)

πŸ“ Description

Get a list of all Tools.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.tools.list(
    size=1,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int] β€” Page offset for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Tools to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for Tool name.

user_filter: typing.Optional[str] β€” Case-insensitive filter for users in the Tool. This filter matches against both email address and name of users.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort Tools by.

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.upsert(...)

πŸ“ Description

Create a Tool or update it with a new version if it already exists.

Tools are identified by the ID or their path. The name, description and parameters determine the versions of the Tool.

If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.upsert(
    path="math-tool",
    function={
        "name": "multiply",
        "description": "Multiply two numbers",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
    commit_message="Initial commit",
)

βš™οΈ Parameters

path: typing.Optional[str] β€” Path of the Tool, including the name. This locates the Tool in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Tool.

function: typing.Optional[ToolFunctionParams] β€” Callable function specification of the Tool shown to the model for tool calling.

source_code: typing.Optional[str] β€” Code source of the Tool.

setup_values: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Values needed to set up the Tool, defined in JSON Schema format: https://json-schema.org/

attributes: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Additional fields to describe the Tool. Helpful to separate Tool versions from each other with details on how they were created or used.

tool_type: typing.Optional[FilesToolType] β€” Type of Tool.

commit_message: typing.Optional[str] β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.get(...)

πŸ“ Description

Retrieve the Tool with the given ID.

By default, the deployed version of the Tool is returned. Use the query parameters version_id or environment to target a specific version of the Tool.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.get(
    id="tl_789ghi",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

version_id: typing.Optional[str] β€” A specific Version ID of the Tool to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.delete(...)

πŸ“ Description

Delete the Tool with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.delete(
    id="tl_789ghi",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.move(...)

πŸ“ Description

Move the Tool to a different path or change the name.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.move(
    id="tl_789ghi",
    path="new directory/new name",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

path: typing.Optional[str] β€” Path of the Tool including the Tool name, which is used as a unique identifier.

name: typing.Optional[str] β€” Name of the Tool, which is used as a unique identifier.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.list_versions(...)

πŸ“ Description

Get a list of all the versions of a Tool.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.list_versions(
    id="tl_789ghi",
    status="committed",
)

βš™οΈ Parameters

id: str β€” Unique identifier for the Tool.

status: typing.Optional[VersionStatus] β€” Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.

evaluator_aggregates: typing.Optional[bool] β€” Whether to include Evaluator aggregate results for the versions in the response.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.commit(...)

πŸ“ Description

Commit a version of the Tool with a commit message.

If the version is already committed, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.commit(
    id="tl_789ghi",
    version_id="tv_012jkl",
    commit_message="Initial commit",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

version_id: str β€” Unique identifier for the specific version of the Tool.

commit_message: str β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.set_deployment(...)

πŸ“ Description

Deploy Tool to an Environment.

Set the deployed version for the specified Environment. This Tool will be used for calls made to the Tool in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.set_deployment(
    id="tl_789ghi",
    environment_id="staging",
    version_id="tv_012jkl",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

environment_id: str β€” Unique identifier for the Environment to deploy the Version to.

version_id: str β€” Unique identifier for the specific version of the Tool.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.remove_deployment(...)

πŸ“ Description

Remove deployed Tool from the Environment.

Remove the deployed version for the specified Environment. This Tool will no longer be used for calls made to the Tool in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.remove_deployment(
    id="tl_789ghi",
    environment_id="staging",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

environment_id: str β€” Unique identifier for the Environment to remove the deployment from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.list_environments(...)

πŸ“ Description

List all Environments and their deployed versions for the Tool.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.list_environments(
    id="tl_789ghi",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Tool.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.tools.update_monitoring(...)

πŸ“ Description

Activate and deactivate Evaluators for monitoring the Tool.

An activated Evaluator will automatically be run on all new Logs within the Tool for monitoring purposes.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.tools.update_monitoring(
    id="tl_789ghi",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)

βš™οΈ Parameters

id: str

activate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams] ] β€” Evaluators to activate for Monitoring. These will be automatically run on new Logs.

deactivate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams] ] β€” Evaluators to deactivate. These will not be run on new Logs.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Datasets

client.datasets.list(...)

πŸ“ Description

List all Datasets.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.datasets.list(
    size=1,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

page: typing.Optional[int] β€” Page offset for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Datasets to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for Dataset name.

user_filter: typing.Optional[str] β€” Case-insensitive filter for users in the Dataset. This filter matches against both email address and name of users.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort Datasets by.

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.upsert(...)

πŸ“ Description

Create a Dataset or update it with a new version if it already exists.

Datasets are identified by the ID or their path. The datapoints determine the versions of the Dataset.

By default, the new Dataset version will be set to the list of Datapoints provided in the request. You can also create a new version by adding Datapoints to, or removing Datapoints from, an existing version, by specifying action as add or remove respectively. In this case, you may specify the version_id or environment query parameters to identify the existing version to base the new version on. If neither is provided, the default deployed version will be used.

If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.

Humanloop also deduplicates Datapoints. If you try to add a Datapoint that already exists, it will be ignored. If you intentionally want to add a duplicate Datapoint, you can add a unique identifier to the Datapoint's inputs such as {_dedupe_id: <unique ID>}.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.upsert(
    path="datasets/support-queries",
    datapoints=[
        {
            "messages": [
                {
                    "role": "user",
                    "content": "How do i manage my organizations API keys?\n",
                }
            ],
            "target": {
                "response": 'Hey, thanks for your questions. Here are steps for how to achieve: 1. Log in to the Humanloop Dashboard \n\n2. Click on "Organization Settings."\n If you do not see this option, you might need to contact your organization admin to gain the necessary permissions.\n\n3. Within the settings or organization settings, select the option labeled "API Keys" on the left. Here you will be able to view and manage your API keys.\n\n4. You will see a list of existing API keys. You can perform various actions, such as:\n     - **Generate New API Key:** Click on the "Generate New Key" button if you need a new API key.\n     - **Revoke an API Key:** If you need to disable an existing key, find the key in the list and click the "Revoke" or "Delete" button.\n     - **Copy an API Key:** If you need to use an existing key, you can copy it to your clipboard by clicking the "Copy" button next to the key.\n\n5. **Save and Secure API Keys:** Make sure to securely store any new or existing API keys you are using. Treat them like passwords and do not share them publicly.\n\nIf you encounter any issues or need further assistance, it might be helpful to engage with an engineer or your IT department to ensure you have the necessary permissions and support.\n\nWould you need help with anything else?'
            },
        },
        {
            "messages": [
                {
                    "role": "user",
                    "content": "Hey, can do I use my code evaluator for monitoring my legal-copilot prompt?",
                }
            ],
            "target": {
                "response": "Hey, thanks for your questions. Here are steps for how to achieve: 1. Navigate to your Prompt dashboard. \n 2. Select the `Monitoring` button on the top right of the Prompt dashboard \n 3. Within the model select the Version of the Evaluator you want to turn on for monitoring. \n\nWould you need help with anything else?"
            },
        },
    ],
    commit_message="Add two new questions and answers",
)
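
To append to an existing version rather than replace it, set action to "add" and target the base version; a minimal sketch (the IDs reuse example values from elsewhere in this reference):

client.datasets.upsert(
    id="ds_b0baF1ca7652",
    version_id="dsv_6L78pqrdFi2xa",
    action="add",
    datapoints=[
        {
            "messages": [
                {"role": "user", "content": "How do I rotate an API key?"}
            ],
            "target": {
                "response": "Revoke the old key and generate a new one from Organization Settings."
            },
        }
    ],
    commit_message="Add one new question",
)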

βš™οΈ Parameters

datapoints: typing.Sequence[CreateDatapointRequestParams] β€” The Datapoints to create this Dataset version with. Modify the action field to determine how these Datapoints are used.

version_id: typing.Optional[str] β€” ID of the specific Dataset version to base the created Version on. Only used when action is "add" or "remove".

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed Version to base the created Version on. Only used when action is "add" or "remove".

path: typing.Optional[str] β€” Path of the Dataset, including the name. This locates the Dataset in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Dataset.

action: typing.Optional[UpdateDatesetAction]

The action to take with the provided Datapoints.

  • If "set", the created version will only contain the Datapoints provided in this request.
  • If "add", the created version will contain the Datapoints provided in this request in addition to the Datapoints in the target version.
  • If "remove", the created version will contain the Datapoints in the target version except for the Datapoints provided in this request.

If "add" or "remove", one of the version_id or environment query parameters may be provided.

attributes: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Additional fields to describe the Dataset. Helpful to separate Dataset versions from each other with details on how they were created or used.

commit_message: typing.Optional[str] β€” Message describing the changes made. If provided, a committed version of the Dataset is created. Otherwise, an uncommitted version is created.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.get(...)

πŸ“ Description

Retrieve the Dataset with the given ID.

Unless include_datapoints is set to true, the response will not include the Datapoints. Use the List Datapoints endpoint (GET /{id}/datapoints) to efficiently retrieve Datapoints for a large Dataset.

By default, the deployed version of the Dataset is returned. Use the query parameters version_id or environment to target a specific version of the Dataset.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.get(
    id="ds_b0baF1ca7652",
    version_id="dsv_6L78pqrdFi2xa",
    include_datapoints=True,
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

version_id: typing.Optional[str] β€” A specific Version ID of the Dataset to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

include_datapoints: typing.Optional[bool] β€” If set to true, include all Datapoints in the response. Defaults to false. Consider using the paginated List Datapoints endpoint instead.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.delete(...)

πŸ“ Description

Delete the Dataset with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.delete(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.move(...)

πŸ“ Description

Move the Dataset to a different path or change the name.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.move(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

path: typing.Optional[str] β€” Path of the Dataset including the Dataset name, which is used as a unique identifier.

name: typing.Optional[str] β€” Name of the Dataset, which is used as a unique identifier.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.list_datapoints(...)

πŸ“ Description

List all Datapoints for the Dataset with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.datasets.list_datapoints(
    id="ds_b0baF1ca7652",
    size=1,
)
for item in response:
    print(item)
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    print(page)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

version_id: typing.Optional[str] β€” A specific Version ID of the Dataset to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Datapoints to fetch.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.list_versions(...)

πŸ“ Description

Get a list of the versions for a Dataset.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.list_versions(
    id="ds_b0baF1ca7652",
    status="committed",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

status: typing.Optional[VersionStatus] β€” Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.

include_datapoints: typing.Optional[typing.Literal["latest_committed"]] β€” If set to 'latest_committed', include the Datapoints for the latest committed version. Defaults to None.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.commit(...)

πŸ“ Description

Commit a version of the Dataset with a commit message.

If the version is already committed, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.commit(
    id="ds_b0baF1ca7652",
    version_id="dsv_6L78pqrdFi2xa",
    commit_message="initial commit",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

version_id: str β€” Unique identifier for the specific version of the Dataset.

commit_message: str β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.upload_csv(...)

πŸ“ Description

Add Datapoints from a CSV file to a Dataset.

This will create a new committed version of the Dataset with the Datapoints from the CSV file.

If either version_id or environment is provided, the new version will be based on the specified version, with the Datapoints from the CSV file added to the existing Datapoints in the version. If neither version_id nor environment is provided, the new version will be based on the version of the Dataset that is deployed to the default Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.upload_csv(
    id="id",
    # assumption: core.File accepts a standard binary file handle
    file=open("datapoints.csv", "rb"),
    commit_message="commit_message",
)

βš™οΈ Parameters

id: str β€” Unique identifier for the Dataset

file: core.File β€” See core.File for more documentation.

commit_message: str β€” Commit message for the new Dataset version.

version_id: typing.Optional[str] β€” ID of the specific Dataset version to base the created Version on.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed Version to base the created Version on.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.set_deployment(...)

πŸ“ Description

Deploy Dataset to Environment.

Set the deployed version for the specified Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.set_deployment(
    id="ds_b0baF1ca7652",
    environment_id="staging",
    version_id="dsv_6L78pqrdFi2xa",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

environment_id: str β€” Unique identifier for the Environment to deploy the Version to.

version_id: str β€” Unique identifier for the specific version of the Dataset.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.remove_deployment(...)

πŸ“ Description

Remove deployed Dataset from Environment.

Remove the deployed version for the specified Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.remove_deployment(
    id="ds_b0baF1ca7652",
    environment_id="staging",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

environment_id: str β€” Unique identifier for the Environment to remove the deployment from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.datasets.list_environments(...)

πŸ“ Description

List all Environments and their deployed versions for the Dataset.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.datasets.list_environments(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Dataset.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Evaluators

client.evaluators.log(...)

πŸ“ Description

Submit Evaluator judgment for an existing Log.

Creates a new Log. The evaluated Log will be set as the parent of the created Log.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.log(
    parent_id="parent_id",
)
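A fuller sketch of submitting a judgment (hypothetical parent Log ID; judgment=True assumes a boolean-returning Evaluator, since the accepted judgment type depends on the Evaluator's return type):

client.evaluators.log(
    parent_id="log_abc123",  # hypothetical ID of the evaluated Log
    path="Shared Evaluators/Accuracy Evaluator",
    judgment=True,  # assumed boolean judgment for a boolean-returning Evaluator
)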

βš™οΈ Parameters

parent_id: str β€” Identifier of the evaluated Log. The newly created Log will have this one set as parent.

version_id: typing.Optional[str] β€” ID of the Evaluator version to log against.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

path: typing.Optional[str] β€” Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Evaluator.

start_time: typing.Optional[dt.datetime] β€” When the logged event started.

end_time: typing.Optional[dt.datetime] β€” When the logged event ended.

output: typing.Optional[str] β€” Generated output from the LLM. Only populated for LLM Evaluator Logs.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider. Only populated for LLM Evaluator Logs.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider. Only populated for LLM Evaluator Logs.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

create_evaluator_log_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated to.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

judgment: typing.Optional[CreateEvaluatorLogRequestJudgmentParams] β€” Evaluator assessment of the Log.

marked_completed: typing.Optional[bool] β€” Whether the Log has been manually marked as completed by a user.

spec: typing.Optional[CreateEvaluatorLogRequestSpecParams]

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.list(...)

πŸ“ Description

Get a list of all Evaluators.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.evaluators.list(
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

page: typing.Optional[int] β€” Page offset for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Evaluators to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for Evaluator name.

user_filter: typing.Optional[str] β€” Case-insensitive filter for users in the Evaluator. This filter matches against both email address and name of users.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort Evaluators by

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.upsert(...)

πŸ“ Description

Create an Evaluator or update it with a new version if it already exists.

Evaluators are identified by the ID or their path. The spec provided determines the version of the Evaluator.

If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.upsert(
    path="Shared Evaluators/Accuracy Evaluator",
    spec={
        "arguments_type": "target_required",
        "return_type": "number",
        "evaluator_type": "python",
        "code": "def evaluate(answer, target):\n    return 0.5",
    },
    commit_message="Initial commit",
)

βš™οΈ Parameters

spec: EvaluatorRequestSpecParams

path: typing.Optional[str] β€” Path of the Evaluator, including the name. This locates the Evaluator in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Evaluator.

commit_message: typing.Optional[str] β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.get(...)

πŸ“ Description

Retrieve the Evaluator with the given ID.

By default, the deployed version of the Evaluator is returned. Use the query parameters version_id or environment to target a specific version of the Evaluator.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.get(
    id="ev_890bcd",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

version_id: typing.Optional[str] β€” A specific Version ID of the Evaluator to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.delete(...)

πŸ“ Description

Delete the Evaluator with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.delete(
    id="ev_890bcd",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.move(...)

πŸ“ Description

Move the Evaluator to a different path or change the name.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.move(
    id="ev_890bcd",
    path="new directory/new name",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

path: typing.Optional[str] β€” Path of the Evaluator including the Evaluator name, which is used as a unique identifier.

name: typing.Optional[str] β€” Name of the Evaluator, which is used as a unique identifier.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.list_versions(...)

πŸ“ Description

Get a list of all the versions of an Evaluator.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.list_versions(
    id="ev_890bcd",
)

βš™οΈ Parameters

id: str β€” Unique identifier for the Evaluator.

status: typing.Optional[VersionStatus] β€” Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.

evaluator_aggregates: typing.Optional[bool] β€” Whether to include Evaluator aggregate results for the versions in the response

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.commit(...)

πŸ“ Description

Commit a version of the Evaluator with a commit message.

If the version is already committed, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.commit(
    id="ev_890bcd",
    version_id="evv_012def",
    commit_message="Initial commit",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

version_id: str β€” Unique identifier for the specific version of the Evaluator.

commit_message: str β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.set_deployment(...)

πŸ“ Description

Deploy Evaluator to an Environment.

Set the deployed version for the specified Environment. This Evaluator will be used for calls made to the Evaluator in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.set_deployment(
    id="ev_890bcd",
    environment_id="staging",
    version_id="evv_012def",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

environment_id: str β€” Unique identifier for the Environment to deploy the Version to.

version_id: str β€” Unique identifier for the specific version of the Evaluator.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.remove_deployment(...)

πŸ“ Description

Remove deployed Evaluator from the Environment.

Remove the deployed version for the specified Environment. This Evaluator will no longer be used for calls made to the Evaluator in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.remove_deployment(
    id="ev_890bcd",
    environment_id="staging",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

environment_id: str β€” Unique identifier for the Environment to remove the deployment from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.list_environments(...)

πŸ“ Description

List all Environments and their deployed versions for the Evaluator.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.list_environments(
    id="ev_890bcd",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluators.update_monitoring(...)

πŸ“ Description

Activate and deactivate Evaluators for monitoring the Evaluator.

An activated Evaluator will automatically be run on all new Logs within the Evaluator for monitoring purposes.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluators.update_monitoring(
    id="id",
)
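The activate payload takes the same shape as in the Flows example later in this reference; for instance, to turn on a monitoring Evaluator version (IDs reused from the examples above):

client.evaluators.update_monitoring(
    id="ev_890bcd",
    activate=[{"evaluator_version_id": "evv_012def"}],
)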

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluator.

activate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams] ] β€” Evaluators to activate for Monitoring. These will be automatically run on new Logs.

deactivate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams] ] β€” Evaluators to deactivate. These will not be run on new Logs.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Flows

client.flows.log(...)

πŸ“ Description

Log to a Flow.

You can use query parameters version_id, or environment, to target an existing version of the Flow. Otherwise, the default deployed version will be chosen.

πŸ”Œ Usage

import datetime

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.log(
    log_id="medqa_experiment_0001",
    id="fl_6o701g4jmcanPVHxdqD0O",
    flow={
        "attributes": {
            "prompt": {
                "template": "You are a helpful assistant helping with medical anamnesis",
                "model": "gpt-4o",
                "temperature": 0.8,
            },
            "tool": {
                "name": "retrieval_tool_v3",
                "description": "Retrieval tool for MedQA.",
                "source_code": "def retrieval_tool(question: str) -> str:\n    pass\n",
            },
        }
    },
    inputs={
        "question": "Patient with a history of diabetes and hypertension presents with chest pain and shortness of breath."
    },
    output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.",
    trace_status="incomplete",
    start_time=datetime.datetime.fromisoformat(
        "2024-07-08 22:40:35+00:00",
    ),
    end_time=datetime.datetime.fromisoformat(
        "2024-07-08 22:40:39+00:00",
    ),
)

βš™οΈ Parameters

version_id: typing.Optional[str] β€” A specific Version ID of the Flow to log to.

environment: typing.Optional[str] β€” Name of the Environment identifying a deployed version to log to.

run_id: typing.Optional[str] β€” Unique identifier for the Run to associate the Log to.

path: typing.Optional[str] β€” Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Flow.

start_time: typing.Optional[dt.datetime] β€” The start time of the Trace. Will be updated if a child Log with an earlier start time is added.

end_time: typing.Optional[dt.datetime] β€” The end time of the Trace. Will be updated if a child Log with a later end time is added.

output: typing.Optional[str] β€” Generated output from your model for the provided inputs. Can be None if logging an error, or if creating a parent Log with the intention to populate it later.

created_at: typing.Optional[dt.datetime] β€” User defined timestamp for when the log was created.

error: typing.Optional[str] β€” Error message if the log is an error.

provider_latency: typing.Optional[float] β€” Duration of the logged event in seconds.

stdout: typing.Optional[str] β€” Captured log and debug statements.

provider_request: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw request sent to provider.

provider_response: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Raw response received from the provider.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the prompt template.

source: typing.Optional[str] β€” Identifies where the model was called from.

metadata: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” Any additional metadata to record.

source_datapoint_id: typing.Optional[str] β€” Unique identifier for the Datapoint that this Log is derived from. This can be used by Humanloop to associate Logs to Evaluations. If provided, Humanloop will automatically associate this Log to Evaluations that require a Log for this Datapoint-Version pair.

trace_parent_id: typing.Optional[str] β€” The ID of the parent Log to nest this Log under in a Trace.

user: typing.Optional[str] β€” End-user ID related to the Log.

flow_log_request_environment: typing.Optional[str] β€” The name of the Environment the Log is associated to.

save: typing.Optional[bool] β€” Whether the request/response payloads will be stored on Humanloop.

log_id: typing.Optional[str] β€” The identifier for the Log. If not specified, a default ID will be generated. This allows additional Logs to be appended to the trace without waiting for Humanloop to return an ID.

flow: typing.Optional[FlowKernelRequestParams] β€” Flow used to generate the Trace.

trace_status: typing.Optional[TraceStatus] β€” Status of the Trace. When a Trace is marked as complete, no more Logs can be added to it. Monitoring Evaluators will only run on complete Traces. If you do not intend to add more Logs to the Trace after creation, set this to complete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.get(...)

πŸ“ Description

Retrieve the Flow with the given ID.

By default, the deployed version of the Flow is returned. Use the query parameters version_id or environment to target a specific version of the Flow.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.get(
    id="fl_6o701g4jmcanPVHxdqD0O",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

version_id: typing.Optional[str] β€” A specific Version ID of the Flow to retrieve.

environment: typing.Optional[str] β€” Name of the Environment to retrieve a deployed Version from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.delete(...)

πŸ“ Description

Delete the Flow with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.delete(
    id="fl_6o701g4jmcanPVHxdqD0O",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.move(...)

πŸ“ Description

Move the Flow to a different path or change the name.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.move(
    id="fl_6o701g4jmcanPVHxdqD0O",
    path="new directory/new name",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

path: typing.Optional[str] β€” Path of the Flow including the Flow name, which is used as a unique identifier.

name: typing.Optional[str] β€” Name of the Flow.

directory_id: typing.Optional[str] β€” Unique identifier for the Directory to move Flow to. Starts with dir_.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.list(...)

πŸ“ Description

Get a list of Flows.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.flows.list(
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Flows to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for Flow name.

user_filter: typing.Optional[str] β€” Case-insensitive filter for users in the Flow. This filter matches against both email address and name of users.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort Flows by

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.upsert(...)

πŸ“ Description

Create or update a Flow.

Flows are identified by their ID or path.

If you provide a commit message, then the new version will be committed; otherwise it will be uncommitted. If you try to commit an already committed version, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.upsert(
    path="Personal Projects/MedQA Flow",
    attributes={
        "prompt": {
            "template": "You are a helpful medical assistant helping with medical anamnesis. Answer {{question}}",
            "model": "gpt-4o",
            "temperature": 0.8,
        },
        "tool": {
            "name": "retrieval_tool_v3",
            "description": "Retrieval tool for MedQA.",
            "source_code": "def retrieval_tool(question: str) -> str:\n    pass\n",
        },
        "commit_message": "Initial commit",
    },
)

βš™οΈ Parameters

attributes: typing.Dict[str, typing.Optional[typing.Any]] β€” A key-value object identifying the Flow Version.

path: typing.Optional[str] β€” Path of the Flow, including the name. This locates the Flow in the Humanloop filesystem and is used as a unique identifier. For example: folder/name or just name.

id: typing.Optional[str] β€” ID for an existing Flow.

commit_message: typing.Optional[str] β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.update_log(...)

πŸ“ Description

Update the status, inputs, and output of a Flow Log.

Marking a Flow Log as complete will trigger any monitoring Evaluators to run. Inputs and output (or error) must be provided in order to mark it as complete.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.update_log(
    log_id="medqa_experiment_0001",
    inputs={
        "question": "Patient with a history of diabetes and normal tension presents with chest pain and shortness of breath."
    },
    output="The patient is likely experiencing a myocardial infarction. Immediate medical attention is required.",
    trace_status="complete",
)

βš™οΈ Parameters

log_id: str β€” Unique identifier of the Flow Log.

trace_status: TraceStatus β€” Status of the Trace. When a Trace is marked as complete, no more Logs can be added to it. Monitoring Evaluators will only run on completed Traces.

inputs: typing.Optional[typing.Dict[str, typing.Optional[typing.Any]]] β€” The inputs passed to the Flow Log.

output: typing.Optional[str] β€” The output of the Flow Log. Provide None to unset existing output value. Provide either this or error.

error: typing.Optional[str] β€” The error message of the Flow Log. Provide None to unset existing error value. Provide either this or output.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.list_versions(...)

πŸ“ Description

Get a list of all the versions of a Flow.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.list_versions(
    id="fl_6o701g4jmcanPVHxdqD0O",
    status="committed",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

status: typing.Optional[VersionStatus] β€” Filter versions by status: 'uncommitted', 'committed'. If no status is provided, all versions are returned.

evaluator_aggregates: typing.Optional[bool] β€” Whether to include Evaluator aggregate results for the versions in the response

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.commit(...)

πŸ“ Description

Commit a version of the Flow with a commit message.

If the version is already committed, an exception will be raised.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.commit(
    id="fl_6o701g4jmcanPVHxdqD0O",
    version_id="flv_6o701g4jmcanPVHxdqD0O",
    commit_message="RAG lookup tool bug fixing",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

version_id: str β€” Unique identifier for the specific version of the Flow.

commit_message: str β€” Message describing the changes made.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.set_deployment(...)

πŸ“ Description

Deploy Flow to an Environment.

Set the deployed version for the specified Environment. This Flow will be used for calls made to the Flow in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.set_deployment(
    id="fl_6o701g4jmcanPVHxdqD0O",
    environment_id="staging",
    version_id="flv_6o701g4jmcanPVHxdqD0O",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

environment_id: str β€” Unique identifier for the Environment to deploy the Version to.

version_id: str β€” Unique identifier for the specific version of the Flow.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.remove_deployment(...)

πŸ“ Description

Remove deployed Flow from the Environment.

Remove the deployed version for the specified Environment. This Flow will no longer be used for calls made to the Flow in this Environment.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.remove_deployment(
    id="fl_6o701g4jmcanPVHxdqD0O",
    environment_id="staging",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

environment_id: str β€” Unique identifier for the Environment to remove the deployment from.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.list_environments(...)

πŸ“ Description

List all Environments and their deployed versions for the Flow.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.list_environments(
    id="fl_6o701g4jmcanPVHxdqD0O",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.flows.update_monitoring(...)

πŸ“ Description

Activate and deactivate Evaluators for monitoring the Flow.

An activated Evaluator will automatically be run on all new "completed" Logs within the Flow for monitoring purposes.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.flows.update_monitoring(
    id="fl_6o701g4jmcanPVHxdqD0O",
    activate=[{"evaluator_version_id": "evv_1abc4308abd"}],
)

βš™οΈ Parameters

id: str β€” Unique identifier for Flow.

activate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestActivateItemParams] ] β€” Evaluators to activate for Monitoring. These will be automatically run on new Logs.

deactivate: typing.Optional[ typing.Sequence[EvaluatorActivationDeactivationRequestDeactivateItemParams] ] β€” Evaluators to deactivate. These will not be run on new Logs.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Directories

client.directories.list()

πŸ“ Description

Retrieve a list of all Directories.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.list()

βš™οΈ Parameters

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.directories.create(...)

πŸ“ Description

Creates a Directory.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.create()
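To create a nested directory in one call, you can instead pass a path (hypothetical value; per the path parameter below, missing intermediate directories are created):

client.directories.create(
    path="Personal Projects/MedQA",  # hypothetical path; created if it does not exist
)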

βš™οΈ Parameters

name: typing.Optional[str] β€” Name of the directory to create.

parent_id: typing.Optional[str] β€” ID of the parent directory. Starts with dir_.

path: typing.Optional[str] β€” Path to create the directory in, relative to the root directory. If the path does not exist, it will be created. Includes name, e.g. path/to/directory.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.directories.get(...)

πŸ“ Description

Fetches a directory by ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.get(
    id="id",
)

βš™οΈ Parameters

id: str β€” String ID of directory. Starts with dir_.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.directories.delete(...)

πŸ“ Description

Delete the Directory with the given ID.

The Directory must be empty (i.e. contain no Directories or Files).

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.delete(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Directory. Starts with dir_.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.directories.update(...)

πŸ“ Description

Update the Directory with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.directories.update(
    id="id",
)
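For instance, to move a directory by path (hypothetical ID and destination):

client.directories.update(
    id="dir_123abc",  # hypothetical Directory ID
    path="Personal Projects/Archive/MedQA",  # new location, including the name
)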

βš™οΈ Parameters

id: str β€” Unique identifier for Directory. Starts with dir_.

name: typing.Optional[str] β€” Name to set for the directory.

parent_id: typing.Optional[str] β€” ID of the parent directory. Specify this to move directories. Starts with dir_.

path: typing.Optional[str] β€” Path to move the directory to, relative to the root directory. Specify this to move directories. Includes name, e.g. path/to/directory.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Files

client.files.list(...)

πŸ“ Description

Get a paginated list of files.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.files.list()
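The filters below can be combined; for example, a name filter with a page size (hypothetical values):

client.files.list(
    name="MedQA",  # case-insensitive filter for file name
    size=10,
)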

βš™οΈ Parameters

page: typing.Optional[int] β€” Page offset for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of files to fetch.

name: typing.Optional[str] β€” Case-insensitive filter for file name.

type: typing.Optional[typing.Union[FileType, typing.Sequence[FileType]]] β€” List of file types to filter for.

environment: typing.Optional[str] β€” Case-sensitive filter for files with a deployment in the specified environment. Requires the environment name.

sort_by: typing.Optional[ProjectSortBy] β€” Field to sort files by

order: typing.Optional[SortOrder] β€” Direction to sort by.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Evaluations

client.evaluations.list(...)

πŸ“ Description

Retrieve a list of Evaluations for the specified File.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.evaluations.list(
    file_id="pr_30gco7dx6JDq4200GVOHa",
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page

βš™οΈ Parameters

file_id: str β€” Filter by File ID. Only Evaluations for the specified File will be returned.

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Evaluations to fetch.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.create(...)

πŸ“ Description

Create an Evaluation.

Create a new Evaluation by specifying the File to evaluate, and a name for the Evaluation. You can then add Runs to this Evaluation using the POST /evaluations/{id}/runs endpoint.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create(
    evaluators=[{"version_id": "version_id"}],
)
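A fuller sketch, assuming FileRequestParams accepts a File ID (hypothetical values; the File ID is reused from the list example above):

client.evaluations.create(
    evaluators=[{"version_id": "evv_012def"}],
    file={"id": "pr_30gco7dx6JDq4200GVOHa"},  # assumed shape for FileRequestParams
    name="MedQA accuracy evaluation",
)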

βš™οΈ Parameters

evaluators: typing.Sequence[CreateEvaluationRequestEvaluatorsItemParams] β€” The Evaluators used to evaluate.

file: typing.Optional[FileRequestParams] β€” The File to associate with the Evaluation. This File contains the Logs you're evaluating.

name: typing.Optional[str] β€” Name of the Evaluation to help identify it. Must be unique within the associated File.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.add_evaluators(...)

πŸ“ Description

Add Evaluators to an Evaluation.

The Evaluators will be run on the Logs generated for the Evaluation.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_evaluators(
    id="id",
    evaluators=[{"version_id": "version_id"}],
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

evaluators: typing.Sequence[AddEvaluatorsRequestEvaluatorsItemParams] β€” The Evaluators to add to this Evaluation.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.remove_evaluator(...)

πŸ“ Description

Remove an Evaluator from an Evaluation.

The Evaluator will no longer be run on the Logs in the Evaluation.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.remove_evaluator(
    id="id",
    evaluator_version_id="evaluator_version_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

evaluator_version_id: str β€” Unique identifier for Evaluator Version.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.get(...)

πŸ“ Description

Get an Evaluation.

This includes the Evaluators associated with the Evaluation and metadata about the Evaluation, such as its name.

To get the Runs associated with the Evaluation, use the GET /evaluations/{id}/runs endpoint. To retrieve stats for the Evaluation, use the GET /evaluations/{id}/stats endpoint.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get(
    id="ev_567yza",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.delete(...)

πŸ“ Description

Delete an Evaluation.

The Runs and Evaluators in the Evaluation will not be deleted.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.delete(
    id="ev_567yza",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.list_runs_for_evaluation(...)

πŸ“ Description

List all Runs for an Evaluation.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.list_runs_for_evaluation(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.create_run(...)

πŸ“ Description

Create an Evaluation Run.

Optionally specify the Dataset and version to be evaluated.

If orchestrated=true, Humanloop will automatically start generating Logs and running Evaluators. If you are generating Logs yourself, you can set orchestrated=false and then generate and submit the required Logs via the API.

If dataset and version are provided, you can set use_existing_logs=True to reuse existing Logs, avoiding generating new Logs unnecessarily. Logs that are associated with the specified Version and have source_datapoint_id referencing a datapoint in the specified Dataset will be associated with the Run.

To keep updated on the progress of the Run, you can poll the Run using the GET /evaluations/{id}/runs endpoint and check its status.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.create_run(
    id="id",
)
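A fuller sketch, reusing earlier Logs instead of generating new ones (hypothetical IDs; the dataset and version parameter shapes are assumed to accept a version_id key):

client.evaluations.create_run(
    id="ev_567yza",
    dataset={"version_id": "dsv_6L78pqrdFi2xa"},  # assumed shape for CreateRunRequestDatasetParams
    version={"version_id": "prv_123abc"},  # hypothetical Prompt version ID
    orchestrated=False,  # we will generate and submit the Logs ourselves
    use_existing_logs=True,  # requires both dataset and version to be provided
)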

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

dataset: typing.Optional[CreateRunRequestDatasetParams] β€” Dataset to use in this Run.

version: typing.Optional[CreateRunRequestVersionParams] β€” Version to use in this Run.

orchestrated: typing.Optional[bool] β€” Whether the Run is orchestrated by Humanloop. If True, Humanloop will generate Logs for the Run; dataset and version must be provided. If False, a log for the Prompt/Tool should be submitted by the user via the API.

use_existing_logs: typing.Optional[bool] β€” If True, the Run will be initialized with existing Logs associated with the Dataset and Version. If False, the Run will be initialized with no Logs. Can only be set to True when both dataset and version are provided.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.add_existing_run(...)

πŸ“ Description

Add an existing Run to the specified Evaluation.

This is useful if you want to compare the Runs in this Evaluation with an existing Run that exists within another Evaluation.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_existing_run(
    id="id",
    run_id="run_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

run_id: str β€” Unique identifier for Run.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.remove_run(...)

πŸ“ Description

Remove a Run from an Evaluation.

The Logs and Versions used in the Run will not be deleted. If this Run is used in any other Evaluations, it will still be available in those Evaluations.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.remove_run(
    id="id",
    run_id="run_id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

run_id: str β€” Unique identifier for Run.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.update_evaluation_run(...)

πŸ“ Description

Update an Evaluation Run.

Specify control=true to use this Run as the control Run for the Evaluation. You can cancel a running/pending Run, or mark a Run that uses external or human Evaluators as completed.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.update_evaluation_run(
    id="id",
    run_id="run_id",
)
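For example, to promote a Run to the control for the Evaluation (placeholder IDs from the example above):

client.evaluations.update_evaluation_run(
    id="id",
    run_id="run_id",
    control=True,  # stats for other Runs will be compared to this one
)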

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

run_id: str β€” Unique identifier for Run.

control: typing.Optional[bool] β€” If True, this Run will be used as the control in the Evaluation. Stats for other Runs will be compared to this Run. This will replace any existing control Run.

status: typing.Optional[EvaluationStatus] β€” Used to set the Run to cancelled or completed. Can only be used if the Run is currently pending or running.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.add_logs_to_run(...)

πŸ“ Description

Add the specified Logs to a Run.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.add_logs_to_run(
    id="id",
    run_id="run_id",
    log_ids=["log_ids"],
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

run_id: str β€” Unique identifier for Run.

log_ids: typing.Sequence[str] β€” The IDs of the Logs to add to the Run.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.get_stats(...)

πŸ“ Description

Get Evaluation Stats.

Retrieve aggregate stats for the specified Evaluation. This includes the number of generated Logs for each Run and the corresponding Evaluator statistics (such as the mean and percentiles).

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get_stats(
    id="id",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Evaluation.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.evaluations.get_logs(...)

πŸ“ Description

Get the Logs associated to a specific Evaluation.

This returns the Logs associated to all Runs within the Evaluation.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.evaluations.get_logs(
    id="id",
)

βš™οΈ Parameters

id: str β€” String ID of evaluation. Starts with ev_ or evr_.

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Logs to fetch.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

Logs

client.logs.list(...)

πŸ“ Description

List all Logs for the given filter criteria.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
response = client.logs.list(
    file_id="file_123abc",
    size=1,
)
for item in response:
    yield item
# alternatively, you can paginate page-by-page
for page in response.iter_pages():
    yield page
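The filters below compose; for example, restricting to a date window and a content search (hypothetical values):

import datetime

response = client.logs.list(
    file_id="file_123abc",
    start_date=datetime.datetime(2024, 7, 1),  # only Logs created after this date
    end_date=datetime.datetime(2024, 8, 1),  # only Logs created before this date
    search="myocardial",  # matched against inputs and output
)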

βš™οΈ Parameters

file_id: str β€” Unique identifier for the File to list Logs for.

page: typing.Optional[int] β€” Page number for pagination.

size: typing.Optional[int] β€” Page size for pagination. Number of Logs to fetch.

version_id: typing.Optional[str] β€” If provided, only Logs belonging to the specified Version will be returned.

version_status: typing.Optional[VersionStatus] β€” If provided, only Logs belonging to Versions with the specified status will be returned.

search: typing.Optional[str] β€” If provided, only Logs that contain the provided string in its inputs and output will be returned.

metadata_search: typing.Optional[str] β€” If provided, only Logs that contain the provided string in its metadata will be returned.

start_date: typing.Optional[dt.datetime] β€” If provided, only Logs created after the specified date will be returned.

end_date: typing.Optional[dt.datetime] β€” If provided, only Logs created before the specified date will be returned.

include_parent: typing.Optional[bool] β€” If true, include the full parent Log in the response. Only applicable when retrieving Evaluator Logs.

in_trace_filter: typing.Optional[typing.Union[bool, typing.Sequence[bool]]] β€” If true, return Logs that are associated to a Trace; if false, return Logs that are not associated to a Trace.

sample_n: typing.Optional[int] β€” If provided, only a random sample of approximately N Logs will be returned.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.logs.delete(...)

πŸ“ Description

Delete Logs with the given IDs.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.logs.delete(
    id="string",
)

βš™οΈ Parameters

id: typing.Optional[typing.Union[str, typing.Sequence[str]]] β€” Unique identifiers for the Logs to delete.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.

client.logs.get(...)

πŸ“ Description

Retrieve the Log with the given ID.

πŸ”Œ Usage

from humanloop import Humanloop

client = Humanloop(
    api_key="YOUR_API_KEY",
)
client.logs.get(
    id="prv_Wu6zx1lAWJRqOyL8nWuZk",
)

βš™οΈ Parameters

id: str β€” Unique identifier for Log.

request_options: typing.Optional[RequestOptions] β€” Request-specific configuration.