This document provides a comprehensive reference of all LLM providers integrated into the Sim platform. It covers provider configurations, authentication methods, model capabilities, pricing structures, and the provider abstraction layer that enables unified execution across 30+ different LLM services.
For information about the provider execution system and request dispatching, see 5.1 Provider System Architecture. For model capability detection and selection logic, see 5.3 Model Registry & Capabilities.
The platform uses a unified provider abstraction that normalizes differences between LLM APIs into a consistent interface. All providers implement the ProviderConfig interface and return responses in a standardized format.
The following diagram illustrates how different block handlers interact with the provider system to execute model requests.
Diagram: Provider Request Dispatching Flow
Sources: apps/sim/providers/types.ts65-75 apps/sim/providers/openai/index.ts10-38 apps/sim/providers/ollama/index.ts23-53 apps/sim/providers/vllm/index.ts30-79
The ProviderConfig interface defines the contract all providers must implement:
| Property | Type | Description |
|---|---|---|
| id | string | Unique provider identifier (e.g., 'openai', 'mistral') |
| name | string | Human-readable provider name |
| description | string | Provider description |
| version | string | Provider implementation version |
| models | string[] | List of supported model identifiers |
| defaultModel | string | Default model when none specified |
| initialize | () => Promise&lt;void&gt; | Optional async initialization (e.g., for local model discovery) |
| executeRequest | (request: ProviderRequest) => Promise&lt;ProviderResponse \| StreamingExecution&gt; | Main execution method |
Sources: apps/sim/providers/types.ts65-75 apps/sim/providers/ollama/index.ts31-53 apps/sim/providers/vllm/index.ts38-79
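A trimmed sketch of how this contract might look in practice, with a registry that dispatches requests to the right provider. The helper names (registerProvider, executeWithProvider) and the reduced field set are illustrative, not the actual Sim API:

```typescript
// Minimal illustrative types; the real ProviderRequest/ProviderResponse
// shapes in apps/sim/providers/types.ts carry more fields.
interface ProviderRequest {
  model: string
  messages: { role: string; content: string }[]
}

interface ProviderResponse {
  content: string
  model: string
}

// Trimmed version of the ProviderConfig contract described above.
interface ProviderConfig {
  id: string
  models: string[]
  defaultModel: string
  executeRequest: (req: ProviderRequest) => Promise<ProviderResponse>
}

// Hypothetical registry keyed by provider id.
const providers = new Map<string, ProviderConfig>()

function registerProvider(p: ProviderConfig): void {
  providers.set(p.id, p)
}

// Dispatch a request to a provider, falling back to its default model
// when the caller does not specify one.
async function executeWithProvider(
  providerId: string,
  req: ProviderRequest
): Promise<ProviderResponse> {
  const provider = providers.get(providerId)
  if (!provider) throw new Error(`Unknown provider: ${providerId}`)
  const model = req.model || provider.defaultModel
  return provider.executeRequest({ ...req, model })
}
```

Because every provider satisfies the same contract, block handlers never need provider-specific branching at the call site.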
The platform supports a wide range of provider integrations, from global cloud services to local inference servers.
| Provider ID | Name | Category | Primary Implementation File |
|---|---|---|---|
openai | OpenAI | Cloud | apps/sim/providers/openai/index.ts10-38 |
anthropic | Anthropic | Cloud | apps/sim/providers/anthropic.ts |
google | Google Gemini | Cloud | apps/sim/providers/gemini/core.ts1-38 |
azure-openai | Azure OpenAI | Enterprise | apps/sim/providers/azure-openai/index.ts48-71 |
mistral | Mistral AI | Cloud | apps/sim/providers/mistral/index.ts29-39 |
deepseek | DeepSeek | Cloud | apps/sim/providers/deepseek/index.ts25-35 |
xai | xAI (Grok) | Cloud | apps/sim/providers/xai/index.ts30-40 |
groq | Groq | Cloud | apps/sim/providers/groq/index.ts25-35 |
cerebras | Cerebras | Cloud | apps/sim/providers/cerebras/index.ts26-36 |
ollama | Ollama | Local | apps/sim/providers/ollama/index.ts23-53 |
vllm | vLLM | Self-Hosted | apps/sim/providers/vllm/index.ts30-79 |
openrouter | OpenRouter | Gateway | apps/sim/providers/openrouter/index.ts65-75 |
Sources: apps/sim/providers/models.ts1-100
The Azure OpenAI provider handles both the standard Chat Completions API and the modern OpenAI Responses API. It extracts deployment names and API versions directly from the provided Azure endpoint URL.
Diagram: Azure API Selection Logic
Sources: apps/sim/providers/azure-openai/index.ts48-71 apps/sim/providers/azure-openai/utils.ts18-23
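A sketch of the endpoint-parsing idea, assuming the standard Azure OpenAI URL layout (`/openai/deployments/<deployment>/...?api-version=...`). The function name parseAzureEndpoint is illustrative, not the actual Sim helper:

```typescript
// Extract the deployment name and API version from an Azure OpenAI
// endpoint URL. Azure chat-completions URLs typically look like:
//   https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-06-01
function parseAzureEndpoint(endpoint: string): {
  deployment: string | null
  apiVersion: string | null
} {
  const url = new URL(endpoint)
  const match = url.pathname.match(/\/deployments\/([^/]+)/)
  return {
    deployment: match ? match[1] : null,
    apiVersion: url.searchParams.get('api-version'),
  }
}
```

Parsing these values from the URL means users only configure a single endpoint string rather than separate deployment and version fields.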
The Gemini implementation is unique in how it handles multi-tool execution. Unlike OpenAI-style providers that often loop, Gemini requires all function calls from a single response to be executed together and sent back in a specific message structure.
- extractAllFunctionCallParts collects every function call emitted in a single response so they can be executed together.
- Tool results must be returned as objects satisfying the google.protobuf.Struct requirement, enforced via ensureStructResponse.

Sources: apps/sim/providers/gemini/core.ts89-130 apps/sim/providers/google/utils.ts30-135
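Gemini's all-at-once tool handling can be sketched as follows. The types and helper names here are illustrative stand-ins for the Sim implementation; the message shape (one user turn carrying all functionResponse parts) follows the Gemini API's expectations:

```typescript
// One functionCall part as it appears inside a Gemini candidate's content.
interface FunctionCallPart {
  functionCall: { name: string; args: Record<string, unknown> }
}

type ToolExecutor = (name: string, args: Record<string, unknown>) => unknown

// Gemini requires tool responses shaped like google.protobuf.Struct,
// i.e. a JSON object; wrap bare values under a "result" key.
function wrapAsStruct(value: unknown): Record<string, unknown> {
  return typeof value === 'object' && value !== null && !Array.isArray(value)
    ? (value as Record<string, unknown>)
    : { result: value }
}

// Execute every function call from one response and build the single
// follow-up message Gemini expects, rather than replying per call.
function buildFunctionResponseMessage(
  calls: FunctionCallPart[],
  execute: ToolExecutor
) {
  return {
    role: 'user',
    parts: calls.map((part) => ({
      functionResponse: {
        name: part.functionCall.name,
        response: wrapAsStruct(execute(part.functionCall.name, part.functionCall.args)),
      },
    })),
  }
}
```

Sending the responses back one at a time, OpenAI-style, would violate Gemini's expected turn structure when the model emits parallel calls.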
These providers include an initialize() method to dynamically discover available models from the local or remote server at runtime.
- Ollama: queries ${OLLAMA_HOST}/api/tags and updates the useProvidersStore with the discovered models.
- vLLM: queries ${baseUrl}/v1/models, authenticating with an optional VLLM_API_KEY.

Sources: apps/sim/providers/ollama/index.ts31-53 apps/sim/providers/vllm/index.ts38-79
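The Ollama half of this discovery step might be sketched like so. The /api/tags route is Ollama's real model-listing endpoint; the function names and the fallback behavior are assumptions for illustration:

```typescript
// Shape of the relevant part of Ollama's /api/tags response.
interface OllamaTagsResponse {
  models: { name: string }[]
}

// Flatten the /api/tags payload into a list of model identifiers.
function parseOllamaTags(payload: OllamaTagsResponse): string[] {
  return payload.models.map((m) => m.name)
}

// Query the local Ollama server at initialize() time. If the server is
// not running, fall back to an empty model list instead of failing.
async function discoverOllamaModels(host: string): Promise<string[]> {
  try {
    const res = await fetch(`${host}/api/tags`)
    if (!res.ok) return []
    return parseOllamaTags((await res.json()) as OllamaTagsResponse)
  } catch {
    return []
  }
}
```

Discovering models at runtime means locally pulled models appear in the UI without any static registry changes.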
OpenRouter acts as a unified gateway. The implementation includes logic to detect if a model supports native structured outputs (using json_schema) or if it requires falling back to prompt-based instructions.
Sources: apps/sim/providers/openrouter/index.ts36-85
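The capability check and its fallback could be sketched as follows, assuming OpenRouter-style per-model supported-parameter lists; the names and the exact fallback prompt are illustrative:

```typescript
// Per-model capability metadata, as exposed by a gateway's model listing.
interface ModelCapabilities {
  supportedParameters: string[]
}

function supportsStructuredOutputs(caps: ModelCapabilities): boolean {
  return caps.supportedParameters.includes('structured_outputs')
}

// Choose between native json_schema enforcement and a prompt-based fallback.
function buildResponseFormat(
  caps: ModelCapabilities,
  schema: Record<string, unknown>
): { responseFormat?: object; promptSuffix?: string } {
  if (supportsStructuredOutputs(caps)) {
    // Native path: ask for strict JSON-schema conformance.
    return {
      responseFormat: { type: 'json_schema', json_schema: { schema, strict: true } },
    }
  }
  // Fallback path: inline the schema into the prompt instructions.
  return {
    promptSuffix: `Respond only with JSON matching this schema: ${JSON.stringify(schema)}`,
  }
}
```

The fallback keeps structured-output blocks working on models that predate native json_schema support, at the cost of weaker guarantees.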
Most providers share common logic for handling tools and "Forced Tool Usage" via the prepareToolsWithUsageControl utility.
| Feature | Description |
|---|---|
| Tool Choice | Normalizes auto, none, or specific function forcing across providers. |
| Max Iterations | Providers like Groq, Mistral, and DeepSeek implement a MAX_TOOL_ITERATIONS loop (defaulting to 10) to handle multi-step reasoning. |
| Streaming Tools | Some providers (like OpenAI/Azure) support streaming while tools are present, while others (like Mistral) fall back to non-streaming when tools are active. |
Sources: apps/sim/providers/utils.ts36-40 apps/sim/providers/mistral/index.ts109-134 apps/sim/providers/groq/index.ts96-114 apps/sim/providers/deepseek/index.ts88-113
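The bounded iteration loop from the table above can be sketched as follows. MAX_TOOL_ITERATIONS matching the described default of 10 comes from the source; the step callback and return shape are illustrative assumptions:

```typescript
// Cap on multi-step tool reasoning, matching the default described above.
const MAX_TOOL_ITERATIONS = 10

interface StepResult {
  toolCalls: string[] // tools the model asked to run; empty means it is done
  content: string
}

// Repeatedly let the model respond; while it keeps requesting tools, run
// them and loop, stopping at MAX_TOOL_ITERATIONS to bound the chain.
async function runToolLoop(
  step: (iteration: number) => Promise<StepResult>
): Promise<{ content: string; iterations: number }> {
  let last: StepResult = { toolCalls: [], content: '' }
  for (let i = 1; i <= MAX_TOOL_ITERATIONS; i++) {
    last = await step(i)
    if (last.toolCalls.length === 0) {
      return { content: last.content, iterations: i }
    }
    // ...in a real provider, tool results are appended to the
    // conversation here before the next model call...
  }
  // Cap reached: return the last partial answer instead of looping forever.
  return { content: last.content, iterations: MAX_TOOL_ITERATIONS }
}
```

The hard cap protects against models that keep emitting tool calls indefinitely, trading completeness for bounded latency and cost.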
All providers use the calculateCost utility to determine execution expenses based on usage metadata returned by the APIs.
- Gemini: adds thoughtsTokenCount (reasoning tokens) to output tokens to ensure accurate billing.
- Responses include TimeSegment objects, separating "model" time from "tool" execution time.

Sources: apps/sim/providers/google/utils.ts142-154 apps/sim/providers/utils.ts15-21 apps/sim/providers/types.ts31-32
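A minimal sketch of the cost calculation, including the Gemini reasoning-token adjustment. The pricing numbers and field names here are made up for illustration; real per-model rates live in the platform's model registry:

```typescript
// Illustrative per-model pricing in USD per one million tokens.
interface ModelPricing {
  inputPerMillion: number
  outputPerMillion: number
}

interface Usage {
  promptTokens: number
  completionTokens: number
  thoughtsTokens?: number // Gemini reasoning tokens, billed as output
}

// Compute the USD cost of one request from usage metadata.
function calculateCost(pricing: ModelPricing, usage: Usage): number {
  // Gemini reports reasoning tokens separately; fold them into output
  // so "thinking" is billed at the output rate.
  const outputTokens = usage.completionTokens + (usage.thoughtsTokens ?? 0)
  return (
    (usage.promptTokens / 1_000_000) * pricing.inputPerMillion +
    (outputTokens / 1_000_000) * pricing.outputPerMillion
  )
}
```

Without the thoughtsTokens adjustment, reasoning-heavy Gemini requests would be systematically under-billed.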