
codex-lb

Load balancer for ChatGPT accounts. Pool multiple accounts, track usage, manage API keys, view everything in a dashboard.

Screenshots: dashboard, accounts, settings, and login (light and dark themes).

Features

  • Account Pooling: load balance across multiple ChatGPT accounts
  • Usage Tracking: per-account tokens, cost, and 28-day trends
  • API Keys: per-key rate limits by token, cost, window, and model
  • Dashboard Auth: password plus optional TOTP
  • OpenAI-compatible: works with Codex CLI, OpenCode, and any OpenAI client
  • Auto Model Sync: available models are fetched from upstream

Quick Start

# Docker (recommended)
docker volume create codex-lb-data
docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

# or uvx
uvx codex-lb

Open localhost:2455 → Add account → Done.

Remote Setup

When accessing the dashboard remotely for the first time, a bootstrap token is required to set the initial password.

Auto-generated (default): On first startup (no password configured), the server generates a one-time token and prints it to logs:

docker logs codex-lb
# ============================================
#   Dashboard bootstrap token (first-run):
#   <token>
# ============================================

Open the dashboard → enter the token + new password → done. The token is shared across replicas and remains valid until a password is set. In multi-replica setups, replicas must share the same encryption key (the Helm chart default) for restart recovery to work.

Manual token: To use a fixed token instead, set the env var before starting:

docker run -d --name codex-lb \
  -e CODEX_LB_DASHBOARD_BOOTSTRAP_TOKEN=your-secret-token \
  -p 2455:2455 -p 1455:1455 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

Local access (localhost) bypasses bootstrap entirely; no token is needed.

Client Setup

Point any OpenAI-compatible client at codex-lb. If API key auth is enabled, pass a key from the dashboard as a Bearer token.

Client              Endpoint                                   Config
OpenAI Codex CLI    http://127.0.0.1:2455/backend-api/codex    ~/.codex/config.toml
OpenCode            http://127.0.0.1:2455/v1                   ~/.config/opencode/opencode.json
OpenClaw            http://127.0.0.1:2455/v1                   ~/.openclaw/openclaw.json
OpenAI Python SDK   http://127.0.0.1:2455/v1                   Code
OpenAI Codex CLI / IDE Extension

~/.codex/config.toml:

model = "gpt-5.3-codex"
model_reasoning_effort = "xhigh"
model_provider = "codex-lb"

[model_providers.codex-lb]
name = "OpenAI"  # required: enables remote /responses/compact
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
supports_websockets = true
requires_openai_auth = true # required for codex app

Optional: enable native upstream WebSockets for Codex streaming while keeping codex-lb pooling:

export CODEX_LB_UPSTREAM_STREAM_TRANSPORT=websocket

auto is the default and uses native WebSockets for native Codex headers or for models that prefer them. You can also change this in the dashboard under Settings → Routing → Upstream stream transport.

Note: Codex itself does not currently expose a stable documented wire_api = "websocket" provider mode. If you want to experiment on the Codex side, the current CLI exposes under-development feature flags:

[features]
responses_websockets = true
# or
responses_websockets_v2 = true

These flags are experimental and do not replace wire_api = "responses".

If upstream websocket handshakes must use environment proxies in your deployment, set CODEX_LB_UPSTREAM_WEBSOCKET_TRUST_ENV=true. By default websocket handshakes connect directly to match Codex CLI's native transport.

With API key auth:

[model_providers.codex-lb]
name = "OpenAI"
base_url = "http://127.0.0.1:2455/backend-api/codex"
wire_api = "responses"
env_key = "CODEX_LB_API_KEY"
supports_websockets = true
requires_openai_auth = true # required for codex app
export CODEX_LB_API_KEY="sk-clb-..."   # key from dashboard
codex

Verify WebSocket transport

Use a one-off debug run:

RUST_LOG=debug codex exec "Reply with OK only."

Healthy websocket signals:

  • CLI logs contain connecting to websocket and successfully connected to websocket
  • codex-lb logs show WebSocket /backend-api/codex/responses
  • codex-lb logs do not show fallback POST /backend-api/codex/responses for the same run

If you run codex-lb behind a reverse proxy, make sure it forwards WebSocket upgrades.
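For example, with nginx the relevant upgrade headers might look like this (a minimal sketch; the location path and upstream address are illustrative, not prescribed by codex-lb):

```nginx
# Forward WebSocket upgrades to codex-lb (illustrative paths and addresses)
location /backend-api/codex/ {
    proxy_pass http://127.0.0.1:2455;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```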

Migrating from direct OpenAI: codex resume filters by model_provider; old sessions won't appear until you re-tag them:

# JSONL session files (all versions)
# (BSD/macOS sed shown; on GNU/Linux use `sed -i` without the empty string)
find ~/.codex/sessions -name '*.jsonl' \
  -exec sed -i '' 's/"model_provider":"openai"/"model_provider":"codex-lb"/g' {} +

# SQLite state DB (>= v0.105.0, creates ~/.codex/state_*.sqlite)
sqlite3 ~/.codex/state_5.sqlite \
  "UPDATE threads SET model_provider = 'codex-lb' WHERE model_provider = 'openai';"
OpenCode

Important: Use the built-in openai provider with a baseURL override, not a custom provider with @ai-sdk/openai-compatible. Custom providers use the Chat Completions API, which drops reasoning/thinking content. The built-in openai provider uses the Responses API, which properly preserves encrypted_content and multi-turn reasoning state.

Before starting, make sure any existing OpenAI credentials are cleared from ~/.local/share/opencode/auth.json. You can clean the entry with this one-liner:

jq 'del(.openai)' ~/.local/share/opencode/auth.json > auth.json.tmp && mv auth.json.tmp ~/.local/share/opencode/auth.json

~/.config/opencode/opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "openai": {
      "options": {
        "baseURL": "http://127.0.0.1:2455/v1",
        "apiKey": "{env:CODEX_LB_API_KEY}"
      },
      "models": {
        "gpt-5.4": {
          "name": "GPT-5.4",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 1050000, "output": 128000 }
        },
        "gpt-5.3-codex": {
          "name": "GPT-5.3 Codex",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.1-codex-mini": {
          "name": "GPT-5.1 Codex Mini",
          "reasoning": true,
          "options": { "reasoningEffort": "high", "reasoningSummary": "detailed" },
          "limit": { "context": 272000, "output": 65536 }
        },
        "gpt-5.3-codex-spark": {
          "name": "GPT-5.3 Codex Spark",
          "reasoning": true,
          "options": { "reasoningEffort": "xhigh", "reasoningSummary": "detailed" },
          "limit": { "context": 128000, "output": 65536 }
        }
      }
    }
  },
  "model": "openai/gpt-5.3-codex"
}

This overrides the built-in openai provider's endpoint to point at codex-lb while keeping the Responses API code path that handles reasoning properly.

export CODEX_LB_API_KEY="sk-clb-..."   # key from dashboard
opencode
OpenClaw

~/.openclaw/openclaw.json:

{
  "agents": {
    "defaults": {
      "model": { "primary": "codex-lb/gpt-5.4" },
      "models": {
        "codex-lb/gpt-5.4": { "params": { "cacheRetention": "short" } },
        "codex-lb/gpt-5.4-mini": { "params": { "cacheRetention": "short" } },
        "codex-lb/gpt-5.3-codex": { "params": { "cacheRetention": "short" } }
      }
    }
  },
  "models": {
    "mode": "merge",
    "providers": {
      "codex-lb": {
        "baseUrl": "http://127.0.0.1:2455/v1",
        "apiKey": "${CODEX_LB_API_KEY}",   // or "dummy" if API key auth is disabled
        "api": "openai-responses",
        "models": [
          {
            "id": "gpt-5.4",
            "name": "gpt-5.4 (codex-lb)",
            "contextWindow": 1050000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          },
          {
            "id": "gpt-5.4-mini",
            "name": "gpt-5.4-mini (codex-lb)",
            "contextWindow": 400000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          },
          {
            "id": "gpt-5.3-codex",
            "name": "gpt-5.3-codex (codex-lb)",
            "contextWindow": 400000,
            "contextTokens": 272000,
            "maxTokens": 4096,
            "input": ["text"],
            "reasoning": false
          }
        ]
      }
    }
  }
}

Set the env var or replace ${CODEX_LB_API_KEY} with a key from the dashboard. If API key auth is disabled, local requests can omit the key, but non-local requests are still rejected until proxy authentication is configured.

OpenAI Python SDK
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:2455/v1",
    api_key="sk-clb-...",  # from dashboard, or any non-empty string if auth is disabled
)

response = client.chat.completions.create(
    model="gpt-5.3-codex",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)

API Key Authentication

API key auth is disabled by default. In that mode, only local requests to the protected proxy routes can proceed without a key; non-local requests are rejected until proxy authentication is configured. Enable it in Settings → API Key Auth on the dashboard when clients connect remotely or through Docker, VM, or container networking that appears non-local to the service.

When enabled, clients must pass a valid API key as a Bearer token:

Authorization: Bearer sk-clb-...
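As a sketch of what that header looks like from plain Python (stdlib only; the /v1/models path is an assumption based on the OpenAI-compatible surface, and actually sending the request requires a running codex-lb):

```python
import urllib.request

# Build a request carrying the API key as a Bearer token.
# Sending it needs a live codex-lb; here we only construct it.
req = urllib.request.Request(
    "http://127.0.0.1:2455/v1/models",
    headers={"Authorization": "Bearer sk-clb-..."},
)
print(req.get_header("Authorization"))  # Bearer sk-clb-...
```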

The protected proxy routes covered by this setting are:

  • /v1/* (except /v1/usage, which always requires a valid key)
  • /backend-api/codex/*
  • /backend-api/transcribe

Creating keys: Dashboard → API Keys → Create. The full key is shown only once at creation. Keys support optional expiration, model restrictions, and rate limits (tokens / cost per day / week / month).

Configuration

Environment variables with CODEX_LB_ prefix or .env.local. See .env.example. SQLite is the default database backend; PostgreSQL is optional via CODEX_LB_DATABASE_URL (for example postgresql+asyncpg://...).
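For example, a minimal .env.local switching to PostgreSQL might look like this (the credentials and host are placeholders; see .env.example for the authoritative list of options):

```shell
# .env.local: illustrative values only
CODEX_LB_DATABASE_URL=postgresql+asyncpg://codex:changeme@db:5432/codex_lb
```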

Dashboard authentication modes

codex-lb supports three dashboard auth modes via environment variables:

  • CODEX_LB_DASHBOARD_AUTH_MODE=standard: built-in dashboard password with optional TOTP from the Settings page.
  • CODEX_LB_DASHBOARD_AUTH_MODE=trusted_header: trust a reverse-proxy auth header such as Authelia's Remote-User, but only from CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS. Built-in password/TOTP remain available as an optional fallback, and password/TOTP management still requires a fallback password session.
  • CODEX_LB_DASHBOARD_AUTH_MODE=disabled: fully bypass dashboard auth. Use only behind network restrictions or external auth. Built-in password/TOTP management is disabled in this mode.

trusted_header mode also requires:

CODEX_LB_FIREWALL_TRUST_PROXY_HEADERS=true
CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS=172.18.0.0/16
CODEX_LB_DASHBOARD_AUTH_PROXY_HEADER=Remote-User

If the trusted header is missing and no fallback password is configured, the dashboard fails closed and shows a reverse-proxy-required message instead of loading the UI.

Docker examples

Authelia / trusted header

docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -e CODEX_LB_DASHBOARD_AUTH_MODE=trusted_header \
  -e CODEX_LB_DASHBOARD_AUTH_PROXY_HEADER=Remote-User \
  -e CODEX_LB_FIREWALL_TRUST_PROXY_HEADERS=true \
  -e CODEX_LB_FIREWALL_TRUSTED_PROXY_CIDRS=172.18.0.0/16 \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

Hard override / no app-level dashboard auth

docker run -d --name codex-lb \
  -p 2455:2455 -p 1455:1455 \
  -e CODEX_LB_DASHBOARD_AUTH_MODE=disabled \
  -v codex-lb-data:/var/lib/codex-lb \
  ghcr.io/soju06/codex-lb:latest

For Helm, pass the same values through extraEnv.

Data

Environment    Path
Local / uvx    ~/.codex-lb/
Docker         /var/lib/codex-lb/

Backup this directory to preserve your data.
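One common way to snapshot the Docker volume is a throwaway container that tars it out to the host. This is a sketch using the standard Docker volume-backup pattern; the archive name and paths are illustrative:

```shell
# Archive the codex-lb-data volume to ./codex-lb-backup.tar.gz (illustrative)
docker run --rm \
  -v codex-lb-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/codex-lb-backup.tar.gz -C /data .
```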

Kubernetes

helm install codex-lb oci://ghcr.io/soju06/charts/codex-lb \
  --set postgresql.auth.password=changeme \
  --set config.databaseMigrateOnStartup=true \
  --set migration.schemaGate.enabled=false
kubectl port-forward svc/codex-lb 2455:2455

Open localhost:2455 → Add account → Done.

The Helm chart auto-configures HTTP /responses owner handoff for multi-replica installs using a headless-service DNS name per pod. The default cluster domain is cluster.local; set Helm clusterDomain if your cluster uses a different suffix. Override config.sessionBridgeAdvertiseBaseUrl only if pods must be reached through a different internal address.

For external database, production config, ingress, observability, and more see the Helm chart README.

Development

# Docker
docker compose watch

# Local
uv sync && cd frontend && bun install && cd ..
uv run fastapi run app/main.py --reload        # backend :2455
cd frontend && bun run dev                     # frontend :5173

Contributors ✨

Thanks go to these wonderful people (emoji key):

Soju06 (💻 ⚠️ 🚧 🚇)
Jonas Kamsker (💻 🐛 🚧)
Quack (💻 🐛 🚧 🎨)
Jill Kok, San Mou (💻 ⚠️ 🚧 🐛)
PARK CHANYOUNG (📖 💻 ⚠️)
Choi138 (💻 🐛 ⚠️)
LYA⚚CAP⚚OCEAN (💻 ⚠️)
Eugene Korekin (💻 🐛 ⚠️)
jordan (💻 🐛 ⚠️)
DOCaCola (🐛 ⚠️ 📖)
JoeBlack2k (💻 🐛 ⚠️)
Peter A. (📖 💻 🐛)
Hannah Markfort (💻 ⚠️)
mws-weekend-projects (💻 ⚠️)
Quang Do (💻 ⚠️)
Anand Aiyer (🐛 💻 ⚠️)
defin85 (💻 🐛 ⚠️)
Jacky Fong (💻 🐛 💬 🚧 ⚠️)
flokosti96 (💻 ⚠️)
Woonggi Min (💻 ⚠️)
Yigit Konur (🐛 💻)
Ruben (💻 ⚠️ 🐛)
Steve Santacroce (💻 ⚠️ 🐛)
Hugh Do (💻 ⚠️)
Hubert Salwin (💻 ⚠️)
Teemu Koskinen (📖)
Yu Peng Zheng (📖 💻)
embogomolov (💻 ⚠️)
Renat Sharipov (💻 ⚠️)
Liu Rui (📖 💻 ⚠️ 🐛)
OverHash (💻 ⚠️)
Kazet (💻 ⚠️)
Bala Kumar (💻 ⚠️ 🤔)
ihazgithub (💻 ⚠️)
Temirkhan (💻 ⚠️ 📖 🐛)
tobwen (💻 ⚠️ 🐛)
Rio (💻 🐛 ⚠️)
Mika (💻 📖 ⚠️)
Darafei Praliaskouski (💻 📖 ⚠️ 🐛)
Maxim Feofilov (💻 ⚠️)
JeffKandt (⚠️ 👀)
klaascommerce (💻 ⚠️)
ozpool (🤔 📖 💻 ⚠️)
Manu (⚠️ 👀)
Wojtek Majewski (⚠️)

This project follows the all-contributors specification. Contributions of any kind welcome!