The open-source platform to build AI agents and run your agentic workforce. Connect 1,000+ integrations and LLMs to orchestrate agentic workflows.
Design agent workflows visually on a canvas—connect agents, tools, and blocks, then run them instantly.
Leverage Copilot to generate nodes, fix errors, and iterate on flows directly from natural language.
Upload documents to a vector store and let agents answer questions grounded in your specific content.
Cloud-hosted: sim.ai
```bash
npx simstudio
```

Docker must be installed and running on your machine.
| Flag | Description |
|---|---|
| `-p, --port <port>` | Port to run Sim on (default `3000`) |
| `--no-pull` | Skip pulling latest Docker images |
For local Docker builds, use the local compose file:
```bash
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.local.yml up -d --build
```

For a cloud or production-style deployment, use the published images:
```bash
git clone https://github.com/simstudioai/sim.git && cd sim
docker compose -f docker-compose.prod.yml up -d
```

OpenCode is opt-in. By default the OpenCode block stays hidden, so the base Sim UX and deployment path remain unchanged.
Quick deploy paths:
To keep OpenCode disabled (the default):
- Do not set `NEXT_PUBLIC_OPENCODE_ENABLED`.
- Do not set any `OPENCODE_*` variables.
- Do not use `docker-compose.opencode.yml` or `docker-compose.opencode.local.yml`.
- Start Sim with the normal upstream flow.
To run OpenCode locally via the compose overlay:
- Set `NEXT_PUBLIC_OPENCODE_ENABLED=true`.
- Set `OPENCODE_SERVER_USERNAME` and `OPENCODE_SERVER_PASSWORD`.
- Set `OPENCODE_REPOSITORY_ROOT=/app/repos` unless you intentionally changed the runtime root.
- Set `OPENCODE_REPOS` to one or more HTTPS repository URLs.
- Set at least one provider key such as `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, or `GOOGLE_GENERATIVE_AI_API_KEY`.
- Set `GIT_USERNAME` and `GIT_TOKEN` or `GITHUB_TOKEN` if any repository is private.
- For host-side `next dev`, also set `OPENCODE_BASE_URL=http://127.0.0.1:4096`.
- Start with `docker compose -f docker-compose.local.yml -f docker-compose.opencode.local.yml up -d --build`.
To use an external OpenCode server:
- Set `NEXT_PUBLIC_OPENCODE_ENABLED=true`.
- Set `OPENCODE_BASE_URL` to the external OpenCode server.
- Set `OPENCODE_SERVER_USERNAME` and `OPENCODE_SERVER_PASSWORD` to the credentials expected by that server.
- Set `OPENCODE_REPOSITORY_ROOT` to the same worktree root used by the external OpenCode deployment.
- Set `OPENCODE_REPOS` to the repository catalog you expect the runtime to clone or expose.
- Ensure the external runtime already has at least one provider key configured.
- Ensure the external runtime can clone private repositories with the right git credentials if needed.
- Verify `/global/health` and one real prompt before exposing the block to users.
Minimum setup:
```bash
cp apps/sim/.env.example apps/sim/.env
```

Then add these values to `apps/sim/.env`:
```bash
NEXT_PUBLIC_OPENCODE_ENABLED=true
OPENCODE_REPOSITORY_ROOT=/app/repos
OPENCODE_SERVER_USERNAME=opencode
OPENCODE_SERVER_PASSWORD=change-me
OPENCODE_REPOS=https://github.com/octocat/Hello-World.git
# Pick at least one provider key that OpenCode can use
GEMINI_API_KEY=your-gemini-key
# or OPENAI_API_KEY=...
# or ANTHROPIC_API_KEY=...
```

If you want private repositories:
```bash
# Generic HTTPS or Azure Repos
GIT_USERNAME=your-user-or-email
GIT_TOKEN=your-token-or-pat
# Optional GitHub-only fallback
GITHUB_TOKEN=your-github-token
```

Important:
- The `OpenCode` block remains hidden unless `NEXT_PUBLIC_OPENCODE_ENABLED=true` is set on the Sim app.
- `docker compose` reads environment from the shell, not from `apps/sim/.env` automatically.
- If you want the app and the OpenCode runtime to use the same credentials, load that file before starting compose:
```bash
set -a
source apps/sim/.env
set +a
docker compose -f docker-compose.local.yml -f docker-compose.opencode.local.yml up -d --build
```

Local vs production behavior:
- `docker-compose.local.yml` - remains unchanged
- `docker-compose.opencode.local.yml` - adds OpenCode locally without changing the base local compose file
  - publishes `OPENCODE_PORT` to the host so `next dev` on the host can talk to OpenCode
  - defaults `OPENCODE_REPOSITORY_ROOT=/app/repos`
  - defaults `OPENCODE_SERVER_USERNAME=opencode`
  - defaults `OPENCODE_SERVER_PASSWORD=dev-opencode-password` if you do not set one explicitly
- `docker-compose.prod.yml` - contains the upstream-style base deployment only
- `docker-compose.opencode.yml` - adds the `opencode` service as a production overlay
  - builds the OpenCode runtime locally from this repository instead of requiring an official Sim-hosted image
  - injects the required `NEXT_PUBLIC_OPENCODE_ENABLED` and `OPENCODE_*` variables into `simstudio`
  - keeps OpenCode internal to the Docker network with `expose`, not a published host port
  - defaults `OPENCODE_REPOSITORY_ROOT=/app/repos`
  - requires `OPENCODE_SERVER_PASSWORD` to be set explicitly before `docker compose` starts
Production deploy command:
```bash
docker compose -f docker-compose.prod.yml -f docker-compose.opencode.yml up -d --build
```

For local hot reload with `next dev` on the host, also set this in `apps/sim/.env`:

```bash
OPENCODE_BASE_URL=http://127.0.0.1:4096
```

Then start only the optional OpenCode runtime:

```bash
docker compose -f docker-compose.local.yml -f docker-compose.opencode.local.yml up -d --build opencode
```

Without that override, host-side Next.js cannot reliably reach the Docker service alias.
Notes:
- If `OPENCODE_REPOS` is empty, `opencode` still starts but no repositories are cloned.
- Repositories are cloned into `${OPENCODE_REPOSITORY_ROOT:-/app/repos}/<repo-name>`.
- Private Azure Repos must use `https` plus `GIT_USERNAME` and `GIT_TOKEN`; the container will not prompt interactively for passwords.
- `GOOGLE_GENERATIVE_AI_API_KEY` is optional; the optional overlays map it automatically from `GEMINI_API_KEY` if not set.
- If you prefer to run OpenCode in separate infrastructure, skip the overlays and point Sim at that deployment with `OPENCODE_BASE_URL`, `OPENCODE_SERVER_USERNAME`, `OPENCODE_SERVER_PASSWORD`, and the matching `OPENCODE_REPOSITORY_ROOT`.
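As a quick illustration of the clone layout, the following sketch derives the on-disk path for one repository URL. The naming rule (last URL path segment minus a trailing `.git`) is my reading of the `<repo-name>` convention above; verify it against docker/opencode/README.md.

```shell
# Sketch: derive the clone path for a repository URL (assumed naming rule)
root="${OPENCODE_REPOSITORY_ROOT:-/app/repos}"
repo_url="https://github.com/octocat/Hello-World.git"

# <repo-name> = last URL path segment, minus a trailing ".git"
repo_name=$(basename "${repo_url%.git}")   # -> Hello-World
clone_path="${root}/${repo_name}"

echo "$clone_path"
```

With the default root unset, this prints `/app/repos/Hello-World`.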
Basic verification after startup:
```bash
curl -u "opencode:change-me" http://127.0.0.1:4096/global/health
```

If you changed the username, password, or port, use those values instead.
See docker/opencode/README.md for service-specific verification steps and runtime behavior.
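One detail from the compose notes above is worth a concrete demonstration: `docker compose` only sees variables that are exported, and a plain `source` (or POSIX `.`) of `apps/sim/.env` does not export them; wrapping it in `set -a` / `set +a` does. A throwaway sketch using a temporary file and a dummy value, not your real `.env`:

```shell
# Start from a clean slate so a pre-existing export cannot skew the demo
unset DEMO_USER

# Throwaway env file with a dummy value (not a real credential)
cat > /tmp/demo.env <<'EOF'
DEMO_USER=opencode
EOF

# Plain "." sets a shell variable but does NOT export it, so a child
# process (such as docker compose) cannot see it:
. /tmp/demo.env
in_child_plain=$(sh -c 'printf %s "${DEMO_USER-}"')

# "set -a" marks every variable assigned while it is active for export:
set -a
. /tmp/demo.env
set +a
in_child_exported=$(sh -c 'printf %s "${DEMO_USER-}"')

echo "plain:    '${in_child_plain}'"
echo "exported: '${in_child_exported}'"
rm -f /tmp/demo.env
```

The first child process sees an empty value; the second sees `opencode`.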
Run Sim with local AI models using Ollama - no external APIs required:
```bash
# Start with GPU support (automatically downloads the gemma3:4b model)
docker compose -f docker-compose.ollama.yml --profile setup up -d

# For CPU-only systems:
docker compose -f docker-compose.ollama.yml --profile cpu --profile setup up -d
```

Wait for the model to download, then visit http://localhost:3000. Add more models with:

```bash
docker compose -f docker-compose.ollama.yml exec ollama ollama pull llama3.1:8b
```

If Ollama is running on your host machine, use `host.docker.internal` instead of `localhost`:
```bash
OLLAMA_URL=http://host.docker.internal:11434 docker compose -f docker-compose.prod.yml up -d
```

On Linux, use your host's IP address or add `extra_hosts: ["host.docker.internal:host-gateway"]` to the compose file.
Sim supports vLLM for self-hosted models. Set `VLLM_BASE_URL` and optionally `VLLM_API_KEY` in your environment.
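For example, a minimal `.env` fragment might look like the following. The host, port, and the `/v1` suffix are assumptions on my part (vLLM's OpenAI-compatible server defaults to port 8000 and serves under `/v1`); check them against your vLLM launch flags and Sim's environment variables reference.

```shell
# Hypothetical values for a vLLM server running on the same host
VLLM_BASE_URL=http://127.0.0.1:8000/v1
VLLM_API_KEY=your-vllm-key   # optional; only needed if the server enforces --api-key
```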
- Open VS Code with the Remote - Containers extension
- Open the project and click "Reopen in Container" when prompted
- Run `bun run dev:full` in the terminal or use the `sim-start` alias
- This starts both the main application and the realtime socket server
Requirements: Bun, Node.js v20+, PostgreSQL 12+ with pgvector
- Clone and install:

```bash
git clone https://github.com/simstudioai/sim.git
cd sim
bun install
```

- Set up PostgreSQL with pgvector:

```bash
docker run --name simstudio-db -e POSTGRES_PASSWORD=your_password -e POSTGRES_DB=simstudio -p 5432:5432 -d pgvector/pgvector:pg17
```

Or install manually via the pgvector guide.
- Configure environment:

```bash
cp apps/sim/.env.example apps/sim/.env
cp packages/db/.env.example packages/db/.env
# Edit both .env files to set DATABASE_URL="postgresql://postgres:your_password@localhost:5432/simstudio"
```

If you want to use the OpenCode workflow block while running `next dev` on the host, also set these in `apps/sim/.env`:
```bash
NEXT_PUBLIC_OPENCODE_ENABLED=true
OPENCODE_BASE_URL=http://127.0.0.1:4096
OPENCODE_SERVER_USERNAME=opencode
OPENCODE_SERVER_PASSWORD=change-me
OPENCODE_REPOS=https://github.com/octocat/Hello-World.git
GEMINI_API_KEY=your-gemini-key
```

Then export the same environment before starting the OpenCode container so the app and Docker use identical credentials:
```bash
set -a
source apps/sim/.env
set +a
docker compose -f docker-compose.local.yml -f docker-compose.opencode.local.yml up -d --build opencode
```

- Run migrations:

```bash
cd packages/db && bunx drizzle-kit migrate --config=./drizzle.config.ts
```

- Start development servers:
```bash
bun run dev:full         # Starts both Next.js app and realtime socket server
bun run dev:full:webpack # Same, but using Webpack instead of Turbopack
```

Or run separately: `bun run dev` (Next.js/Turbopack), `cd apps/sim && bun run dev:webpack` (Next.js/Webpack), and `cd apps/sim && bun run dev:sockets` (realtime).
Copilot is a Sim-managed service. To use Copilot on a self-hosted instance:
- Go to https://sim.ai → Settings → Copilot and generate a Copilot API key
- Set the `COPILOT_API_KEY` environment variable in your self-hosted `apps/sim/.env` file to that value
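The resulting line in `apps/sim/.env` looks like this (the key below is a placeholder; use the value you generated on sim.ai):

```shell
# Placeholder: paste the Copilot API key generated at sim.ai → Settings → Copilot
COPILOT_API_KEY=your-copilot-api-key
```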
See the environment variables reference for the full list, or apps/sim/.env.example for defaults.
- Framework: Next.js (App Router)
- Runtime: Bun
- Database: PostgreSQL with Drizzle ORM
- Authentication: Better Auth
- UI: Shadcn, Tailwind CSS
- State Management: Zustand
- Flow Editor: ReactFlow
- Docs: Fumadocs
- Monorepo: Turborepo
- Realtime: Socket.io
- Background Jobs: Trigger.dev
- Remote Code Execution: E2B
We welcome contributions! Please see our Contributing Guide for details.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Made with ❤️ by the Sim Team


