Gallery image captions:
- Live Bloch sphere visualization rendered inside every relevant concept node.
- Ask a question, grow a knowledge graph — every node delivers a personalized explanation and AI-generated video.
- Real quantum circuits running on Amazon Braket, with live code, gate diagrams, and AI-generated video.
- Declare your background, get explanations tailored to your field — update anytime.
- End-to-end AWS architecture powering QuantumBridge — Bedrock, Braket, DynamoDB, and Nova Reel working together.
QuantumBridge
Inspiration
Last year, Sundar Pichai said that quantum computing today is where AI was in 2015. Back then, most people couldn't explain what a neural network was. Now AI is everywhere. Quantum is on that same trajectory - but there's a bottleneck. McKinsey and Avery Fairbank report a 3:1 gap between quantum job openings and qualified candidates. Less than half of quantum roles were expected to be filled by 2025. Over 80% of university students say quantum concepts feel inaccessible.
The shortage isn't a lack of interest. It's that quantum education is still locked behind walls of physics jargon and linear algebra. We asked a simple question: if AI tools are making everything else easier to learn, why not use them to make quantum computing approachable for everyone?
That's QuantumBridge.
What It Does
QuantumBridge is an AI-powered learning platform that teaches quantum computing through the lens of what you already know. You provide your background - Computer Science, Physics, Finance, Music, whatever - and every explanation gets reframed for you. A music student learns superposition through harmonics. A finance student learns entanglement through correlated assets. Same concept, completely different entry point.
Each concept becomes a node on an interactive knowledge graph. As you learn, your graph grows - a visual map of your quantum journey that you can pan, zoom, and navigate. Every node includes:
- Personalized markdown explanations with domain-specific analogies
- Real quantum circuit simulations executed on Amazon Braket's LocalSimulator
- Interactive visualizations: 3D Bloch spheres, SVG circuit diagrams, animated wave interference patterns, probability distribution charts
- AI-generated educational videos
- Follow-up questions that branch into new nodes
When you're ready, the system generates a quiz from everything you've explored - testing retention across your entire knowledge graph.
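To make the circuit results concrete: the canonical first demo is the Bell-state circuit (a Hadamard followed by a CNOT), which measures as "00" half the time and "11" half the time. The sketch below reproduces that statevector math in plain Python, with no SDK required, purely as an illustration of what a LocalSimulator run computes:

```python
# Plain-Python statevector sketch of the Bell-state circuit (H on qubit 0,
# then CNOT). Illustrative only - a real run would use the Braket SDK.
import math

def bell_state_probabilities():
    # Start in |00>: amplitudes indexed as [00, 01, 10, 11]
    amps = [1.0, 0.0, 0.0, 0.0]
    # Hadamard on qubit 0 (the most-significant qubit here):
    # |00> -> (|00> + |10>) / sqrt(2)
    h = 1.0 / math.sqrt(2)
    amps = [h * amps[0] + h * amps[2],
            h * amps[1] + h * amps[3],
            h * amps[0] - h * amps[2],
            h * amps[1] - h * amps[3]]
    # CNOT with qubit 0 as control: swaps the |10> and |11> amplitudes
    amps[2], amps[3] = amps[3], amps[2]
    # Measurement probabilities are the squared magnitudes
    labels = ["00", "01", "10", "11"]
    return {l: round(a * a, 3) for l, a in zip(labels, amps) if a * a > 1e-9}

print(bell_state_probabilities())  # {'00': 0.5, '11': 0.5}
```

With the actual SDK, the equivalent is roughly `Circuit().h(0).cnot(0, 1)` run on `LocalSimulator()` with a shot count, which returns the same distribution as measurement probabilities.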
Two Learning Modes - and Why Both Exist
Quantum computing is genuinely hard. Forcing everyone down the same path is exactly why quantum education has a dropout problem. We built two modes because effective learning requires both structure and freedom.
Pipeline Mode (Structured) offers a curated curriculum across three difficulty levels: beginner, intermediate, and advanced. Each level has a predefined sequence of topics that build on each other in pedagogically sound order. This is for learners starting from zero who need guardrails - you don't know what you don't know yet, so the system decides what comes next. You pick a level, the backend generates nodes one by one, each personalized to your background. This mode eliminates the cognitive overhead of deciding what to learn so you can focus entirely on understanding.
Explore Mode (Unstructured) gives you a blank canvas and a question box. Ask anything. A node appears with a full personalized explanation and suggests further questions. Click one, a new node spawns - connected to the first. Your knowledge graph grows organically, driven by curiosity. This mirrors how experts actually learn: by pulling threads, not reading textbooks front to back.
The two modes complement each other. A beginner starts with Pipeline to build a foundation, then switches to Explore to dive deeper into what fascinated them. An experienced developer skips Pipeline entirely and asks "How does Shor's algorithm actually break RSA?" Both paths produce the same node format, feed into the same graph, and generate the same quizzes.
How We Built It
Frontend
The frontend is a Next.js 16 application (React 19, App Router) built around an interactive knowledge graph powered by React Flow. The architecture follows a contract-based, feature-isolated pattern designed for parallel development - each feature (Explore, Pipeline, Quiz) is a self-contained module with its own API layer, components, and barrel exports. Features never import each other's internals; cross-feature communication happens through props composed at the page level. The UI component library uses shadcn/ui (Base Nova style) for primitives, built on Base UI and styled with Tailwind CSS 4.
Design system - A "Notion-meets-scientific-notebook" aesthetic: warm parchment background (#f9f7f4), white cards, dark ink text, and amber (#d97706) as the accent. Three Google Fonts - Lora (serif display), Inter (body), JetBrains Mono (code/labels) - injected at runtime via a FontInjector component that appends CSS custom properties to the document head. All design tokens (colors, shadows, radii, font stacks, panel widths) are CSS variables, making the entire system re-themeable.
Explore view (empty state) - A centered landing page with the QuantumBridge logo (inline SVG), a headline, pipeline level buttons (color-coded green/blue/purple for beginner/intermediate/advanced with topic counts), a search input with an "Ask" button, and suggested question chips ("What is superposition?", "Explain qubits simply", "What is quantum tunnelling?"). The background uses a dot-grid pattern via CSS radial-gradient.
Explore view (graph state) - A full-screen three-panel layout: a collapsible 248px sidebar on the left showing a hierarchical tree of all nodes with depth-based indentation and a "Quiz me" button; the React Flow canvas in the center; and a 500px slide-in document panel on the right when a node is selected. A floating input bar sits at the bottom center for follow-up questions. A loading pill appears at the top during node generation.
Knowledge graph canvas - Uses a custom BFS tree layout algorithm with 320px horizontal and 220px vertical spacing, siblings centered around their parent. Nodes are color-coded by depth using a 5-color palette (amber → sky → emerald → violet → rose) that cycles. Each node card renders a colored left accent strip, a depth label ("Root" or "Depth N"), the title in serif, and a 2-line truncated explanation. Edges connect parent-child relationships. Includes pan/zoom controls and a color-coded minimap.
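The layout described above can be sketched as a simple recursive pass that places each node, then spreads its children one level deeper and centers them under it. This is a simplification (it does not resolve overlap between sibling subtrees, and the orientation is illustrative), not the project's actual algorithm:

```python
# Simplified tree layout sketch: children are spread horizontally with
# fixed spacing and centered under their parent. Illustrative only.
H_SPACING, V_SPACING = 320, 220

def layout(tree, node="root", x=0.0, depth=0, pos=None):
    """tree: dict mapping node -> list of children. Returns {node: (x, y)}."""
    if pos is None:
        pos = {}
    pos[node] = (x, depth * V_SPACING)
    children = tree.get(node, [])
    if children:
        # Width of the children row, centered on the parent's x.
        width = (len(children) - 1) * H_SPACING
        start = x - width / 2
        for i, child in enumerate(children):
            layout(tree, child, start + i * H_SPACING, depth + 1, pos)
    return pos

tree = {"root": ["a", "b", "c"], "b": ["b1", "b2"]}
pos = layout(tree)
print(pos["root"], pos["a"], pos["c"])  # (0.0, 0) (-320.0, 220) (320.0, 220)
```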
Node document panel - The richest component. Renders the full answer as styled markdown (react-markdown + remark-gfm with custom components for headings, code blocks, tables, blockquotes), a "plain language" explanation in italic serif with a colored left border, interactive visualizations, Braket circuit results as probability bar charts with ket notation (|0⟩, |1⟩), a collapsible Python code viewer with a live "Run code" button that calls the Braket execution API and displays results inline, AI-generated video playback, follow-up question buttons (filtered to exclude already-used ones), connected child node links, and source citations. Parent breadcrumb navigation at the top.
Four visualization types - 3D Bloch spheres rendered with Plotly.js (showing state vector, axes, and translucent sphere surface); SVG quantum circuit diagrams supporting H, X, Y, Z, CNOT, and Measure gates with qubit wire labels; canvas-based wave interference animations with configurable wave count and phase difference; and SVG probability bar charts with animated bar growth and axis labels. An inference engine scans node content for keywords (superposition → Bloch sphere, gates → circuit diagram, tunneling → interference) to auto-select the right visualization when the backend doesn't specify one.
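The inference engine's keyword scan amounts to an ordered rule table like the sketch below. The keyword lists here are illustrative guesses at the mapping the text describes, not the engine's actual rules:

```python
# Sketch of keyword-based visualization inference: first matching rule
# wins, with a fallback default. Keyword lists are illustrative.
RULES = [
    ("bloch_sphere", ("superposition", "bloch", "qubit state")),
    ("circuit_diagram", ("gate", "cnot", "hadamard", "circuit")),
    ("wave_interference", ("interference", "tunneling", "wave")),
    ("probability_chart", ("probability", "measurement", "distribution")),
]

def infer_visualization(content: str, default: str = "probability_chart") -> str:
    text = content.lower()
    for viz, keywords in RULES:
        if any(k in text for k in keywords):
            return viz
    return default

print(infer_visualization("Superposition lets a qubit hold two amplitudes."))
# bloch_sphere
print(infer_visualization("The CNOT gate entangles two qubits."))
# circuit_diagram
```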
Quiz view - Full-screen interface that generates multiple-choice questions from the session's knowledge graph. Each question shows four options; selecting one immediately reveals whether it's correct (amber highlight) or incorrect (red), with unselected options fading to 45% opacity. A score counter ("3/5 correct") appears once all questions are answered.
Session management - A React Context provider auto-creates a profile on first visit (POSTs to /profile/), persists the session to localStorage, and supports mid-session profile editing through a modal with name, domain, and additional context fields. Learners can change their background and get re-personalized explanations without losing their graph. A user profile menu (amber avatar with initials) provides quick access to profile details and editing.
State persistence - The knowledge graph nodes are persisted to localStorage on every change, so refreshing the page doesn't lose your learning progress. Session data (sessionId + profile) is stored separately.
Backend
The backend is a FastAPI application (Python) with a clean contract-based architecture mirroring the frontend's isolation pattern. All service boundaries are defined as Python Protocol interfaces in contracts/ - IContentService, IPlannerService, IBraketService, IRAGService, IQuizService, IOrchestratorService - and a service registry in core/dependencies.py swaps between real and mock implementations based on a single USE_MOCKS environment variable. This means the entire backend can run locally with no AWS credentials.
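The contract/registry pattern can be sketched with one of the named boundaries. The Protocol name mirrors the write-up; the implementations and method signature below are illustrative, not the project's real code:

```python
# Sketch of the Protocol-contract + registry pattern: one environment
# variable flips the whole backend to mocks so it runs without AWS
# credentials. Method bodies are illustrative.
import os
from typing import Protocol

class IQuizService(Protocol):
    def generate_quiz(self, node_titles: list[str]) -> list[dict]: ...

class MockQuizService:
    def generate_quiz(self, node_titles: list[str]) -> list[dict]:
        return [{"question": f"What is {t}?", "options": ["A", "B", "C", "D"]}
                for t in node_titles]

class BedrockQuizService:
    def generate_quiz(self, node_titles: list[str]) -> list[dict]:
        raise NotImplementedError("would call Amazon Bedrock here")

def get_quiz_service() -> IQuizService:
    if os.getenv("USE_MOCKS", "true").lower() == "true":
        return MockQuizService()
    return BedrockQuizService()

svc = get_quiz_service()
print(svc.generate_quiz(["Superposition"])[0]["question"])
```

Because `IQuizService` is a structural Protocol, neither implementation needs to inherit from it; callers depend only on the contract, which is what makes the real/mock swap a one-line change.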
Feature modules live in features/, each containing its own router.py, service.py, schemas.py, and sometimes prompts.py. A router auto-discovery system (core/discovery.py) scans the features directory at startup and registers any module that exports a FastAPI router - adding a new feature requires zero manual wiring.
The core of the system is a 6-step orchestration pipeline that runs for every question:
Planning - A Claude model on Amazon Bedrock analyzes the concept and the learner's profile to decide what media types are appropriate. Should this topic include a Braket circuit demo? An animation? A video? The planner returns structured media flags.
RAG Retrieval - The question is sent to a Bedrock Knowledge Base backed by OpenSearch Serverless and Titan Embeddings. The knowledge base indexes ~24 documents scraped from 11 GitHub repositories (Qiskit textbook, Amazon Braket examples, Microsoft QuantumKatas) plus curated sources. Top-k relevant chunks are returned as grounding context.
Content Generation - Claude receives the learner's profile (name, domain, additional context), the RAG chunks, conversation history, and media flags, then generates a personalized answer with domain-specific analogies, a plain-language explanation, follow-up questions, and optionally Braket circuit code.
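Assembling that generation prompt might look roughly like the sketch below. The template wording and field names are hypothetical; the project's real templates live in each feature's prompts.py and will differ:

```python
# Illustrative sketch of content-generation prompt assembly from the
# learner profile, RAG chunks, and planner media flags. Not the real template.
def build_prompt(question: str, profile: dict, rag_chunks: list[str],
                 media_flags: dict) -> str:
    context = "\n\n".join(rag_chunks)
    flags = ", ".join(k for k, v in media_flags.items() if v) or "none"
    return (
        f"You are teaching {profile['name']}, whose background is "
        f"{profile['domain']}. Use analogies from that field.\n\n"
        f"Grounding context:\n{context}\n\n"
        f"Requested media: {flags}\n\n"
        f"Question: {question}\n"
        "Return a technical answer, a plain-language analogy, and "
        "three follow-up questions."
    )

prompt = build_prompt(
    "What is entanglement?",
    {"name": "Ada", "domain": "Finance"},
    ["Entangled qubits exhibit correlated measurement outcomes."],
    {"braket_demo": True, "video": False},
)
print("Finance" in prompt)  # True
```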
Braket Execution - If the planner flagged the concept for a circuit demo, the LLM-generated Python code is executed in a sandboxed subprocess using Amazon Braket's LocalSimulator. The executor writes the code to a temp file, runs it with a configurable timeout (default 30s), captures stdout as JSON probability distributions, and handles syntax errors, runtime failures, and timeouts gracefully. Results come back as measurement probabilities (e.g., {"00": 0.5, "11": 0.5}).
Animation Resolution - The system maps the concept to a frontend animation type. On the backend, the planner sets the has_animation flag; on the frontend, an inference engine scans the node content for keywords (superposition → Bloch sphere, gates → circuit diagram, interference → wave animation) to select the right visualization.
Video Generation - If flagged, an async job is dispatched to fal.ai's LTX Video 2.3 Fast model with a prompt derived from the concept. The video is generated in ~10-15 seconds, uploaded to S3, and the frontend polls for completion every 10 seconds.
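The sandboxed-execution step (temp file, subprocess, timeout, JSON parsing) can be sketched with the standard library alone. This is a minimal illustration of the pattern, assuming the generated code prints a JSON distribution to stdout; the real executor adds more hardening:

```python
# Sketch of the sandboxed circuit executor: write LLM-generated code to a
# temp file, run it in a subprocess with a timeout, parse stdout as JSON.
import json
import os
import subprocess
import sys
import tempfile

def run_circuit_code(code: str, timeout_s: float = 30.0) -> dict:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return {"error": "timeout"}
    finally:
        os.unlink(path)
    if proc.returncode != 0:  # syntax errors, import failures, crashes
        last = proc.stderr.strip().splitlines()[-1] if proc.stderr.strip() else "failed"
        return {"error": last}
    try:
        return {"probabilities": json.loads(proc.stdout)}
    except json.JSONDecodeError:
        return {"error": "output was not valid JSON"}

demo = 'import json; print(json.dumps({"00": 0.5, "11": 0.5}))'
print(run_circuit_code(demo))  # {'probabilities': {'00': 0.5, '11': 0.5}}
```

Running in a separate process means a hung or crashing snippet takes down only the subprocess, never the FastAPI server.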
Session state is persisted in DynamoDB - each session stores the learner's profile and the full history of processed nodes, which feeds into both context accumulation for Explore mode and quiz generation.
MCP Server
We built an MCP (Model Context Protocol) server that wraps the entire backend as a tool server with 12 tools - profile management, explore, pipeline execution, circuit running, quiz generation, video generation, RAG retrieval, and media planning. This means the same backend that powers the web UI can be accessed from any MCP-compatible client: Kiro, Claude Desktop, Cursor. Same orchestration pipeline, same personalization, completely different interface. The server supports both stdio and SSE transport.
AWS Services
Seven AWS services power the platform:
| Service | Role |
|---|---|
| Amazon Bedrock (Claude Sonnet) | Content generation, media planning, quiz generation across three LLM agents |
| Bedrock Knowledge Bases + OpenSearch Serverless | Vector-based RAG retrieval with Titan Embeddings |
| Amazon Braket SDK (LocalSimulator) | Quantum circuit execution - real quantum simulation, no cloud hardware costs |
| DynamoDB | Session persistence (profiles + node history) |
| S3 | Knowledge base document storage + generated video hosting |
External: fal.ai for fast video generation (~10-15s per clip).
Challenges We Ran Into
Making quantum intuitive across domains. Translating quantum mechanics into explanations that genuinely make sense for a music student vs. a finance student vs. a CS major required careful prompt engineering. Each node has two explanation layers - a technical answer and a plain-language analogy - and the RAG context grounds the LLM so it doesn't hallucinate physics.
Sandboxed circuit execution. LLM-generated Python code is inherently unpredictable. The Braket executor runs code in isolated subprocesses with timeouts, captures both stdout and stderr, and handles syntax errors, import failures, and infinite loops without crashing the server.
Knowledge base construction. We scraped 11 GitHub repositories and curated 10+ documents into a coherent knowledge base. Getting retrieval accuracy right - so the RAG chunks actually help the LLM rather than confuse it - required iteration on chunking strategy and embedding configuration.
Graph layout at scale. A knowledge graph that's beautiful with 3 nodes can become unreadable at 30. The BFS tree layout algorithm centers siblings around parents and spaces nodes to minimize overlap, but this remains an area for improvement.
Two interfaces, one backend. The web UI and MCP server both hit the same orchestration pipeline, but they have fundamentally different interaction patterns (streaming UI updates vs. tool call/response). Keeping them in sync while the backend evolved rapidly during the hackathon was a constant coordination challenge.
Accomplishments We're Proud Of
We built a working 6-step AI orchestration pipeline that generates personalized quantum education content end-to-end. Real quantum circuits run on Amazon Braket and produce actual measurement probabilities that render as interactive visualizations. The RAG system genuinely adapts explanations to different backgrounds - it's not just prompt templating, it's grounded retrieval. The MCP server means the entire platform is accessible from any compatible IDE. And the contract-based architecture on both frontend and backend meant we could develop features in parallel without stepping on each other's code.
What We Learned
Building multi-agent AI systems with Amazon Bedrock taught us that orchestration design matters more than individual prompt quality. Working with Braket's LocalSimulator showed us that quantum simulation is surprisingly accessible when you remove the infrastructure barrier. Implementing RAG with Bedrock Knowledge Bases revealed how much retrieval quality depends on source curation - garbage in, garbage out applies to vector stores too. And building an MCP server from scratch gave us a deep appreciation for how protocol design shapes developer experience.
The biggest lesson: personalization in education isn't a nice-to-have. When a finance student sees entanglement explained through correlated assets instead of spin states, the concept clicks in seconds instead of hours. The 3:1 quantum talent gap isn't a talent problem - it's a translation problem.
What's Next for QuantumBridge
Real quantum hardware. The architecture already supports it - swap LocalSimulator for a Braket managed device and circuits run on actual quantum computers. The timeout and error handling infrastructure is already in place.
Adaptive assessment. The current quiz system generates static MCQs. We want quizzes that adapt difficulty based on performance and target the specific concepts a learner struggled with.
Collaborative learning. Share your knowledge graph with classmates. Compare how different backgrounds lead to different exploration paths through the same material.
Expanded knowledge base. More sources, research papers, and quantum algorithm implementations. The RAG pipeline is designed to scale - adding documents is just uploading to S3.
Career pathways. Connect the learning graph to quantum job requirements. Show learners exactly which concepts they need to master for specific roles, and how close they are.
Built With
- amazon-web-services
- bedrock
- dynamodb
- fastapi
- javascript
- mcp
- next.js
- python
- rag
- react
- typescript
- vector-embeddings


