# --- Required (one of these) ---
# Option A: Anthropic API key — used by claude-agent-sdk for all coding agents
ANTHROPIC_API_KEY=sk-ant-api03-...
# Option B: Claude Code subscription OAuth token (run `claude setup-token` to get it)
# Uses your Pro/Max subscription credits instead of API billing
# CLAUDE_CODE_OAUTH_TOKEN=sk-ant-oat01-...
# Option C: Open-source models (DeepSeek, Qwen, Llama, MiniMax, etc.)
# Access 200+ open-source and proprietary models via OpenRouter, OpenAI, or Google
# Configure one or more API keys:
# OpenRouter (recommended - 200+ models including DeepSeek, Qwen, Llama, MiniMax)
# OPENROUTER_API_KEY=sk-or-v1-...
# OpenAI API-platform billing for OpenAI models and Codex api_key mode.
# OPENAI_API_KEY=sk-...
# Codex CLI runtime auth. Values:
#   auto     Use OPENAI_API_KEY when set; otherwise use local Codex login.
#   chatgpt  Use ChatGPT Free/Plus/Pro/Team login. Run `codex login` on the
#            host, keep OPENAI_API_KEY unset for this process, and Docker will
#            mount ~/.codex into both swe-planner and swe-fast.
#   api_key  Use OpenAI API-platform billing. Set OPENAI_API_KEY=sk-...
# SWE_CODEX_AUTH_MODE=auto
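The three auth-mode values above resolve roughly as sketched below. This is an illustrative reading of the comments, not SWE-AF's actual code; the helper name and return values are made up for the example.

```python
import os

# Sketch of SWE_CODEX_AUTH_MODE resolution as described above.
# Helper name and return values are illustrative only.
def resolve_codex_auth(env=os.environ):
    mode = env.get("SWE_CODEX_AUTH_MODE", "auto")
    if mode == "api_key":
        return "api_key"      # bill via OPENAI_API_KEY
    if mode == "chatgpt":
        return "chatgpt"      # reuse the host's `codex login` (~/.codex mount)
    # auto: prefer the API key when present, else fall back to local login
    return "api_key" if env.get("OPENAI_API_KEY") else "local_login"
```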
# Google Gemini
# GOOGLE_API_KEY=...
# Z.AI / Zhipu AI (GLM-4.7, GLM-4.5, etc.) — direct access without OpenRouter
# Get your key at https://z.ai/manage-apikey/apikey-list
# ZHIPU_API_KEY=...
# Note: the ANTHROPIC_API_KEY from above can be reused for Claude models via OpenCode

# --- Optional: Web search (open runtime) ---
#
# Enables opencode's built-in `websearch` and `webfetch` tools so coding
# and review agents can look up external documentation, library APIs,
# error messages, and version/deprecation status during a build.
#
# Both vars must be set. They reach the opencode subprocess via the
# parent-env propagation in agentfield's run_cli — no SWE-AF wiring is
# needed beyond setting them on the deployment.
#
# Get an Exa key at https://exa.ai/
# OPENCODE_ENABLE_EXA=1
# EXA_API_KEY=...

# --- Optional: GitHub integration ---
# GitHub personal access token (repo scope) — for draft PR creation
# Only needed if using repo_url + enable_github_pr
GH_TOKEN=ghp_...

# --- Optional: AgentField server ---
# Override the control plane URL (default: http://control-plane:8080 in Docker,
# http://localhost:8080 when running bare metal)
# Set this to use an already-running control plane instead of Docker's:
# AGENTFIELD_SERVER=http://host.docker.internal:8080
# AGENTFIELD_SERVER=http://localhost:8080
# Node ID for this agent (default: swe-planner)
# NODE_ID=swe-planner
# Port the agent listens on (default: 8003)
# PORT=8003

# --- Optional: Model configuration ---
# Default runtime when callers don't pass a `runtime` in the request config.
# Lets the deployer pick the runtime once instead of every caller threading
# a config through. Falls back to claude_code if unset; an invalid value is
# logged as a warning and ignored.
# SWE_DEFAULT_RUNTIME=claude_code # or: open_code, codex
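The fallback behavior described above can be sketched as follows; the helper name is illustrative, not SWE-AF's actual code, but the rules match the comments: unset falls back to claude_code, an invalid value warns and is ignored.

```python
import logging
import os

VALID_RUNTIMES = {"claude_code", "open_code", "codex"}

# Sketch of SWE_DEFAULT_RUNTIME resolution as documented above.
def default_runtime(env=os.environ):
    value = env.get("SWE_DEFAULT_RUNTIME", "")
    if not value:
        return "claude_code"
    if value not in VALID_RUNTIMES:
        logging.warning("invalid SWE_DEFAULT_RUNTIME %r; ignoring", value)
        return "claude_code"
    return value
```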
# Default model when callers don't pass `models` in the request config.
# Applies to all 16 agent roles for whichever runtime is active. Caller
# config (`models.default` or per-role keys) overrides this. Set this on
# the deployment to pin a model without code changes — e.g. swap from
# minimax-m2.5 to a newer release. Empty / unset → use the runtime's
# baked-in defaults.
# SWE_DEFAULT_MODEL=openrouter/minimax/minimax-m2.6

# Runtime/model selection is configured via API request config (V2):
# {
#   "runtime": "claude_code" | "open_code" | "codex",
#   "models": {
#     "default": "sonnet" | "provider/model-id",
#     "coder": "provider/model-id",
#     "qa": "provider/model-id"
#   }
# }
#
# Runtime mapping:
# claude_code -> Claude backend
# open_code -> OpenCode backend
# codex -> OpenAI Codex CLI backend
#
# Legacy keys are removed: ai_provider, preset, model, and all *_model fields.
#
# Example open runtime request config:
# {"runtime": "open_code", "models": {"default": "deepseek/deepseek-chat"}}
#
# Example Claude runtime request config:
# {"runtime": "claude_code", "models": {"default": "sonnet", "coder": "opus"}}
#
# Example Codex runtime request config:
# {"runtime": "codex", "models": {"default": "gpt-5.3-codex"}}
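The V2 request configs above can also be assembled programmatically. A minimal sketch: the helper and its validation are illustrative, only the "runtime"/"models" shape comes from the schema documented above.

```python
VALID_RUNTIMES = {"claude_code", "open_code", "codex"}

# Build a V2 request config as documented above (helper is illustrative).
def request_config(runtime, default_model, **per_role):
    if runtime not in VALID_RUNTIMES:
        raise ValueError(f"unknown runtime: {runtime!r}")
    return {"runtime": runtime,
            "models": {"default": default_model, **per_role}}

cfg = request_config("open_code", "deepseek/deepseek-chat",
                     coder="minimax/minimax-m2.5")
```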

# Available open runtime model IDs (format: provider/model-name):
#   deepseek/deepseek-chat         # DeepSeek via OpenRouter
#   minimax/minimax-m2.5           # MiniMax M2.5 via OpenRouter
#   qwen/qwen-2.5-72b-instruct     # Qwen via OpenRouter
#   openai/gpt-4o                  # GPT-4o via OpenAI
#   anthropic/claude-sonnet-4      # Claude via Anthropic
#   zhipuai-coding-plan/glm-4.7    # GLM-4.7 via Z.AI direct (set ZHIPU_API_KEY)
#   openrouter/z-ai/glm-5          # GLM-5 via OpenRouter (set OPENROUTER_API_KEY)