Knowledge as Code
Knowledge as Code applies software engineering practices to knowledge management. Plain-text canonical files, automated verification, and a single source that produces HTML, JSON APIs, and MCP servers — with no database, no vendor, and no silent decay.
What it is
Six interrelated properties define the pattern. Adopt them together and your knowledge base stops drifting, stops requiring babysitting, and starts behaving like the rest of your stack.
1 Plain-text canonical
Human-readable, version-controlled files. No database. No vendor lock-in. What you commit is the source of truth.
2 Self-healing
Automated verification detects when knowledge has drifted from reality and flags discrepancies as issues for human review.
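A minimal sketch of that idea in plain Node, assuming each entry pairs a stored claim with a probe that re-checks it against the live source (the names `findDrift`, `claim`, and `verify` are illustrative, not the actual verifier API):

```javascript
// Sketch: detect entries whose stored claim no longer matches reality.
// `claim` is what the knowledge base says; `verify()` re-checks the source.
function findDrift(entries) {
  return entries
    .map((entry) => ({ entry, actual: entry.verify() }))
    .filter(({ entry, actual }) => entry.claim !== actual)
    .map(({ entry, actual }) => ({
      id: entry.id,
      expected: entry.claim,
      actual, // becomes the body of a reviewable issue, not a silent edit
    }));
}

// Example data: one entry has drifted since it was written.
const entries = [
  { id: "gpt-vision", claim: "supported", verify: () => "supported" },
  { id: "legacy-plan", claim: "available", verify: () => "discontinued" },
];

console.log(findDrift(entries));
// one drifted entry: { id: "legacy-plan", expected: "available", actual: "discontinued" }
```

The design point is the return shape: drift becomes structured data for a human to review, never an automatic overwrite.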
3 Multi-output
One source compiles to HTML, JSON APIs, MCP servers, and SEO pages. Write once; serve anywhere an agent or human looks.
4 Zero-dependency
Builds with language built-ins only. Nothing breaks after years of inactivity. The repo you clone today still builds in 2036.
5 Git-native
Git is the collaboration layer, the audit trail, and the deployment trigger. No additional CMS or workflow tool required.
6 Ontology-driven
Vendor-neutral taxonomies map to specific implementations. Swap the data; the structure — and the consumers — keep working.
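As a sketch of what ontology-driven means in practice: the taxonomy lives in one config file and the data merely points into it. The YAML below is illustrative, not the template's actual project.yml schema:

```yaml
# Hypothetical ontology config; field names are illustrative.
entities:
  capability:          # vendor-neutral primary anchor, e.g. "image-generation"
    role: primary
  product:             # container, e.g. a plan or app that ships capabilities
    role: container
  vendor:              # authority that owns products
    role: authority
mappings:
  implements:          # product -> capability; swap the data, keep the shape
    from: product
    to: capability
```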
Why it matters
Traditional documentation decays silently. Wikis rot. Notion pages go stale. A landing page you published eighteen months ago now quietly lies to your users. Knowledge as Code resists this through verification cascades — multiple models cross-check data weekly and surface discrepancies as reviewable changes rather than corrupted prose.
For devtool builders & DevRel
Capability matrices, plan comparisons, compat tables, and changelogs stay honest without a human auditing them every sprint. Ship the structured data your docs site, your API, and your customers' agents all consume from a single commit.
For AI & agent builders
Agents need queryable, well-typed, citation-friendly knowledge. A KaC repo emits a JSON API and an MCP server alongside its HTML — your LLMs read the same source your users do, with provenance baked in.
For open-source maintainers
No platform to outlive your project. No service to pay for. A KaC knowledge base is a normal repo: forkable, archivable, auditable, and durable long after whichever hosting vendor you picked pivots or shuts down.
A reference implementation
AITool.watch is a maintained registry of AI capabilities, implementations, and products — built entirely on the Knowledge as Code pattern. It tracks 18 capabilities across 66 implementations spanning ChatGPT, Claude, Gemini, Copilot, Grok, and Perplexity.
aitool.watch
Capability-first taxonomy. Each entry carries "what counts" and "what doesn't count" boundaries, plan-tier requirements, and launch dates. The site, JSON API, and MCP server all regenerate from a folder of markdown files every time someone merges a PR.
It sits alongside sibling specs like Graceful Boundaries (machine-readable error responses) and Skill Provenance (portable version identity for agent skills) — all open patterns in the same family.
See the pattern in action
Most people don't understand Knowledge as Code until they see both halves at once — the built site a visitor experiences, and the plain markdown files a contributor edits. The trick is that they live at the same URL.
/demo/ The built site
A complete Knowledge as Code site generated from the example data in this very repo. Homepage, entity detail pages, a coverage matrix, a comparison tool, a JSON API, an RSS feed, and an MCP server — all from 7 markdown files.
Open the live demo →
.md The source markdown
This is what a contributor writes. YAML frontmatter for metadata, Markdown prose for the body. Git diffs are human-readable. No database, no CMS, no migrations.
- data/examples/frameworks/iso-27001.md
- data/examples/requirements/access-control.md
- data/examples/organizations/iso.md
- project.yml — the ontology config
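The shape of such a file is simple. A hypothetical entry (contents invented for illustration, not copied from the repo):

```markdown
---
id: iso-27001
type: framework
title: ISO/IEC 27001
authority: iso        # cross-reference, resolved by id at build time
---

An international standard for information security management systems.
Prose below the frontmatter becomes the page body; the frontmatter
becomes the JSON API record.
```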
Click a markdown file above, then open the matching page in the demo. The left-hand column is the input. The right-hand column is the output. Everything in between is a single node scripts/build.js command.
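A toy version of that command, using only Node built-ins, shows the single-source, multi-output shape (the real scripts/build.js is more involved; `parseEntity` and `emit` are invented names):

```javascript
// Toy build step: one markdown source in, HTML and JSON out.
function parseEntity(raw) {
  // Split "---\nyaml\n---\nbody" into frontmatter and prose.
  const [, front, body] = raw.split(/^---$/m);
  const meta = Object.fromEntries(
    front
      .trim()
      .split("\n")
      .map((line) => line.split(": ").map((part) => part.trim()))
  );
  return { meta, body: body.trim() };
}

function emit(entity) {
  return {
    html: `<article><h1>${entity.meta.title}</h1><p>${entity.body}</p></article>`,
    json: JSON.stringify(entity.meta),
  };
}

const raw = `---
id: iso-27001
title: ISO 27001
---
An information security management standard.`;

const out = emit(parseEntity(raw));
```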
Built with Knowledge as Code
Three production reference sites running the pattern today. Each is a normal GitHub repo — clone it, read the markdown, rebuild the site. Nothing is hidden behind a CMS.
aitool.watch
A plain-English reference for AI capabilities, plans, constraints, and implementations across ChatGPT, Claude, Gemini, Copilot, Grok, and Perplexity.
everyailaw.com
A global reference to AI regulation, obligations, and compliance deadlines for GRC, CISO, CAIO, and legal teams.
meetings.snapsynapse.com
Virtual meeting software mapped to facilitation primitives, with timeline and compatibility matrix.
Get started
The reference template has no installation step. Clone it, run the build, and open the generated site. Everything else is editing markdown.
```shell
# 1. Clone the template
git clone https://github.com/snapsynapse/knowledge-as-code-template.git
cd knowledge-as-code-template

# 2. Build the site + JSON API (no npm install required)
node scripts/build.js

# 3. Validate cross-references any time
node scripts/validate.js
```
1. Define your ontology. Edit project.yml to describe the entities your domain cares about — primary anchors, containers, authorities, and mappings.
2. Add data as markdown. Drop one file per entity into data/examples/ with YAML frontmatter. Cross-references resolve by id.
3. Build and publish. node scripts/build.js emits docs/ — HTML, JSON API, and MCP server. Push to GitHub Pages or any static host.
4. Wire up verification. Schedule the verifier to re-check claims against their sources and open issues when drift appears.
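The verification step can be as small as one scheduled workflow. The sketch below assumes a scripts/verify.js entry point and uses the GitHub CLI to file the issue; both are assumptions, since the template does not prescribe either:

```yaml
# .github/workflows/verify.yml (hypothetical; script path is an assumption)
name: verify-knowledge
on:
  schedule:
    - cron: "0 6 * * 1"   # weekly re-check
jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: node scripts/verify.js --report drift.md
      - name: Open an issue when drift is found
        if: failure()
        run: gh issue create --title "Knowledge drift detected" --body-file drift.md
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```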