This document covers the multi-provider LLM integration system that enables workflows to execute requests across 30+ models from 15+ providers through a unified interface. The system provides provider abstraction, model capability detection, cost calculation, and tool calling orchestration.
For execution context and workflow orchestration, see Workflow Execution Engine. For tool configuration and parameter handling, see Tool Integration & Execution. For streaming response mechanics, see Streaming & Response Handling.
The provider system implements a plugin architecture where each LLM provider (OpenAI, Anthropic, Google, etc.) adheres to a common ProviderConfig interface while handling provider-specific API formats internally.
Sources: apps/sim/providers/types.ts40-51 apps/sim/providers/index.ts1-21 apps/sim/executor/handlers/agent/agent-handler.ts34-96
All providers implement the ProviderConfig interface that standardizes execution across different LLM APIs:
| Component | Type | Description |
|---|---|---|
| id | ProviderId | Unique provider identifier (e.g., 'openai', 'anthropic', 'deepseek') |
| name | string | Human-readable provider name |
| models | string[] | List of supported model identifiers |
| defaultModel | string | Default model used when none is specified |
| executeRequest | Function | Main execution method; accepts a ProviderRequest and returns a ProviderResponse or StreamingExecution |
The ProviderRequest object encapsulates all parameters required for an LLM call, including advanced reasoning and formatting options.
Sources: apps/sim/providers/types.ts140-182 apps/sim/providers/index.ts23-114
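The contract in the table above can be sketched as follows. The request/response shapes are simplified from the full types in apps/sim/providers/types.ts; this is an illustrative sketch, not the exact definitions.

```typescript
// Simplified sketch of the ProviderConfig contract (not the full types).
type ProviderId = string;

interface ProviderRequest {
  model: string;
  messages: { role: 'system' | 'user' | 'assistant'; content: string }[];
  temperature?: number;
}

interface ProviderResponse {
  content: string;
  tokens?: { input: number; output: number };
}

interface ProviderConfig {
  id: ProviderId;       // e.g. 'openai', 'anthropic', 'deepseek'
  name: string;         // human-readable provider name
  models: string[];     // supported model identifiers
  defaultModel: string; // fallback when no model is specified
  executeRequest: (req: ProviderRequest) => Promise<ProviderResponse>;
}
```

Because every provider satisfies the same contract, the executor can dispatch to any of them without knowing provider-specific API details.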
The system uses getProviderFromModel() to automatically route requests to the correct implementation based on model naming patterns or explicit registration in PROVIDER_DEFINITIONS.
Sources: apps/sim/providers/utils.ts203-249 apps/sim/providers/models.ts73-87
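Pattern-based routing can be sketched like this. The patterns and fallback below are illustrative assumptions; the real getProviderFromModel also consults explicit registrations in PROVIDER_DEFINITIONS.

```typescript
// Illustrative model-name → provider routing (patterns are assumptions).
const MODEL_PATTERNS: [RegExp, string][] = [
  [/^gpt-|^o\d/, 'openai'],
  [/^claude-/, 'anthropic'],
  [/^gemini-/, 'google'],
  [/^deepseek-/, 'deepseek'],
];

function getProviderFromModel(model: string): string {
  for (const [pattern, provider] of MODEL_PATTERNS) {
    if (pattern.test(model.toLowerCase())) return provider;
  }
  return 'openai'; // hypothetical fallback; the real system checks registrations
}
```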
The system also supports providers whose available models are discovered dynamically at runtime rather than hard-coded:
| Provider | Discovery Method | Update Function |
|---|---|---|
| Ollama | Queries /api/tags endpoint at initialization | updateOllamaProviderModels() |
| vLLM | Queries /v1/models endpoint with auth | updateVLLMProviderModels() |
| OpenRouter | Fetches real-time model catalog from API | updateOpenRouterProviderModels() |
Sources: apps/sim/providers/utils.ts152-167 apps/sim/providers/ollama/index.ts31-53
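Ollama-style discovery can be sketched as a fetch against the /api/tags endpoint named in the table above. Everything beyond the endpoint path (response shape, failure handling) is an assumption for illustration.

```typescript
// Sketch of dynamic model discovery against Ollama's /api/tags endpoint.
// Response shape and error handling are illustrative assumptions.
interface OllamaTag {
  name: string;
}

function parseOllamaTags(body: { models: OllamaTag[] }): string[] {
  return body.models.map((m) => m.name);
}

async function updateOllamaProviderModels(
  baseUrl = 'http://localhost:11434'
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return []; // keep the existing list if discovery fails
  return parseOllamaTags(await res.json());
}
```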
The AgentBlockHandler orchestrates the transition from workflow state to LLM response. It handles permission checks via validateModelProvider, tool formatting, and optional memory persistence.
Sources: apps/sim/executor/handlers/agent/agent-handler.ts49-114 apps/sim/providers/index.ts34-114
Model capabilities are defined in PROVIDER_DEFINITIONS, allowing the UI and execution engine to adapt to specific model features like reasoning effort, thinking levels, or native structured outputs.
The AgentBlock uses capability detection from providers/utils.ts to conditionally render configuration fields:
| Model Feature | UI Behavior | Example Models |
|---|---|---|
reasoningEffort | Shows dropdown with values from definition | gpt-5, o1, o3 |
verbosity | Shows dropdown with values from definition | gpt-5.4, gpt-5.2 |
thinking | Shows dropdown with thinking levels | claude-3-7-sonnet, gemini-2.0-flash-thinking |
temperature | Shows slider with dynamic min/max | Most non-reasoning models |
Sources: apps/sim/blocks/blocks/agent.ts154-316 apps/sim/providers/models.ts29-52 apps/sim/providers/utils.ts15-40
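Capability-driven field rendering can be sketched as a lookup into per-model capability metadata. The capability names follow the table above; the definition shape and sample entries are simplified assumptions, not the actual PROVIDER_DEFINITIONS contents.

```typescript
// Sketch of capability detection gating which config fields the UI renders.
// Capability values below are illustrative, not real model metadata.
interface ModelCapabilities {
  reasoningEffort?: string[];               // dropdown values
  thinking?: string[];                      // thinking levels
  temperature?: { min: number; max: number }; // slider bounds
}

const CAPABILITIES: Record<string, ModelCapabilities> = {
  o1: { reasoningEffort: ['low', 'medium', 'high'] },
  'claude-3-7-sonnet': {
    thinking: ['none', 'low', 'high'],
    temperature: { min: 0, max: 1 },
  },
};

// Returns the configuration fields the agent block should render for a model.
function visibleFields(model: string): string[] {
  return Object.keys(CAPABILITIES[model] ?? {});
}
```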
Agents support tool calling through a unified interface that handles provider-specific formats (e.g., OpenAI's tool_calls vs Anthropic's tool_use).
- Each tool carries a usageControl setting ('auto', 'force', or 'none'); tools set to 'none' are ignored during formatting. apps/sim/executor/handlers/agent/agent-handler.ts185-188
- Block tools are converted to ProviderToolConfig objects using transformBlockTool. apps/sim/executor/handlers/agent/agent-handler.ts221-230
- The provider loop runs calls via executeTool and feeds results back to the LLM until a final response is reached. apps/sim/providers/openai/index.ts14-40

Sources: apps/sim/executor/handlers/agent/agent-handler.ts178-210 apps/sim/providers/utils.ts35-43 apps/sim/providers/openai/index.ts14-40
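The normalization across provider formats can be sketched as below. The tool_calls and tool_use field names follow the public OpenAI and Anthropic APIs; the internal ToolCall shape is an assumption for illustration.

```typescript
// Sketch of normalizing provider-specific tool calls into one internal shape.
interface ToolCall {
  name: string;
  args: Record<string, unknown>;
}

// OpenAI's tool_calls carry JSON-encoded argument strings.
function fromOpenAI(tc: { function: { name: string; arguments: string } }): ToolCall {
  return { name: tc.function.name, args: JSON.parse(tc.function.arguments) };
}

// Anthropic's tool_use blocks carry already-parsed argument objects.
function fromAnthropic(block: { name: string; input: Record<string, unknown> }): ToolCall {
  return { name: block.name, args: block.input };
}
```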
The system calculates costs based on token usage and model-specific pricing tiers defined in PROVIDER_DEFINITIONS.
The calculateCost function in providers/utils.ts computes input and output costs by multiplying usage by the pricing per 1M tokens. apps/sim/providers/utils.ts4-7
Sources: apps/sim/providers/utils.ts4-7 apps/sim/providers/models.ts111-124 apps/sim/executor/handlers/router/router-handler.ts144-163
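The per-1M-token arithmetic described above can be sketched as follows. The pricing numbers in the test are illustrative, not real rates.

```typescript
// Sketch of cost calculation: pricing is expressed in USD per 1M tokens.
interface Pricing {
  input: number;  // USD per 1M input tokens
  output: number; // USD per 1M output tokens
}

function calculateCost(pricing: Pricing, inputTokens: number, outputTokens: number) {
  const input = (inputTokens / 1_000_000) * pricing.input;
  const output = (outputTokens / 1_000_000) * pricing.output;
  return { input, output, total: input + output };
}
```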
The system supports multiple tiers of API key management, resolving a key from more than one possible source per request.
Sources: apps/sim/providers/utils.ts12-13 apps/sim/providers/utils.test.ts50-171 apps/sim/providers/azure-openai/index.ts67-71
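A common pattern for tiered key resolution looks like the sketch below. This is a hypothetical illustration of the idea, not the actual resolution order or sources used by the system.

```typescript
// Hypothetical sketch of tiered API key resolution: prefer a key supplied
// with the request, then fall back to a server-configured key.
function resolveApiKey(
  userKey: string | undefined,
  serverKey: string | undefined
): string {
  const key = userKey ?? serverKey;
  if (!key) throw new Error('No API key available for provider');
  return key;
}
```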
For more detailed information on specific components of the AI integration system, see the related pages on model definitions (PROVIDER_DEFINITIONS) and feature detection.