- @traceable decorator: recommended for most cases
- trace context manager: Python only
- RunTree API: explicit, low-level control
If you’re using an LLM provider or agent framework with a built-in LangSmith integration, refer to the integrations overview instead.
Prerequisites
Before tracing, set the following environment variables:

- LANGSMITH_TRACING=true: enables tracing. Set this to toggle tracing on and off without changing your code.
- LANGSMITH_API_KEY: your LangSmith API key.
- By default, LangSmith logs traces to a project named default. To log to a different project, set LANGSMITH_PROJECT. For more details, refer to Log traces to a specific project.
LANGSMITH_TRACING controls the @traceable decorator and the trace context manager. To override it at runtime without changing environment variables, use tracing_context(enabled=True/False) (Python) or pass tracingEnabled directly to traceable (JS/TS). RunTree objects are not affected by either control; they always send data to LangSmith when posted.

Use @traceable / traceable
Apply @traceable (Python) or traceable (TypeScript) to any function to make it a traced run. LangSmith handles context propagation across nested calls automatically.
The following example traces a simple pipeline: run_pipeline calls format_prompt to build the messages, invoke_llm to call the model, and parse_output to extract the result.
Each function is individually traced, and because they’re called from within run_pipeline (also traced), LangSmith automatically nests them as child runs. invoke_llm uses run_type="llm" to mark it as an LLM call so LangSmith can render token counts and latency correctly:
run_pipeline trace with format_prompt, invoke_llm, and parse_output as nested child runs.
When you wrap a sync function with traceable (e.g., formatPrompt in the previous example), use the await keyword when calling it to ensure the trace is logged correctly.

Use the trace context manager (Python only)
In Python, you can use the trace context manager to log traces to LangSmith. This is useful in situations where:
- You want to log traces for a specific block of code.
- You want control over the inputs, outputs, and other attributes of the trace.
- It is not feasible to use a decorator or wrapper.
- Any or all of the above.
The trace context manager is compatible with the traceable decorator and the wrap_openai wrapper, so you can use them together in the same application.
The following example shows all three used together. wrap_openai wraps the OpenAI client so its calls are traced automatically. my_tool uses @traceable with run_type="tool" and a custom name to appear correctly in the trace. chat_pipeline itself is not decorated; instead, ls.trace wraps the call, letting you pass the project name and inputs explicitly and set outputs manually via rt.end():
Use the RunTree API
Another, more explicit way to log traces to LangSmith is via the RunTree API. This API gives you more control over your tracing: you manually create runs and child runs to assemble your trace. You still need to set your LANGSMITH_API_KEY, but LANGSMITH_TRACING is not necessary for this method.
This method is not recommended for most use cases; manually managing trace context is error-prone compared to @traceable, which handles context propagation automatically.
Example usage
You can extend the utilities explained in the previous section to trace any code. The following code shows some example extensions. Trace any public method in a class:

Ensure all traces are submitted before exiting
LangSmith performs tracing in a background thread to avoid blocking your production application. This means your process may end before all traces are successfully posted to LangSmith. Here are some options for ensuring all traces are submitted before exiting your application.

Use the LangSmith SDK
If you are using the LangSmith SDK standalone, you can use the flush method before exit:
Use LangChain
If you are using LangChain, refer to the LangChain tracing guide. If you prefer a video tutorial, check out the Tracing Basics video from the Introduction to LangSmith Course.

