A multi-agent route optimization system using Microsoft Agent Framework with Group Chat pattern. Automatically redistributes delivery routes when a driver becomes unavailable.
```
             ┌──────────────┐
             │ Orchestrator │ ◄── Azure OpenAI (GPT-4o)
             └───────┬──────┘
        ┌────────────┼────────────┐
        ▼            ▼            ▼
  ┌──────────┐ ┌───────────┐ ┌──────────┐
  │  Route   │ │ Schedule  │ │  Driver  │
  │ Analyzer │ │ Optimizer │ │ Assigner │
  └──────────┘ └───────────┘ └──────────┘
```
Agents: RouteAnalyzer (identifies affected routes) → ScheduleOptimizer (plans redistribution) → DriverAssigner (matches drivers) → Orchestrator (synthesizes plan)
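The repo's actual wiring uses Microsoft Agent Framework's Group Chat pattern; the sketch below only approximates the same sequential handoff using the plain `openai` SDK's `AzureOpenAI` client, with illustrative agent names and prompts, so the shape of the pipeline is clear:

```python
import os
from openai import AzureOpenAI  # assumes `pip install openai`

# Endpoint/key come from environment; the deployment name below is a placeholder.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Illustrative stand-ins for the four agents' system prompts.
AGENTS = [
    ("RouteAnalyzer", "List the routes and stops affected by the unavailable driver."),
    ("ScheduleOptimizer", "Propose how to redistribute the affected stops."),
    ("DriverAssigner", "Match the redistributed stops to available drivers."),
    ("Orchestrator", "Synthesize the prior messages into one final plan."),
]

def optimize(driver_id: str, reason: str) -> str:
    """Run each agent in turn, threading one shared transcript through all of them."""
    transcript = [{"role": "user", "content": f"Driver {driver_id} is unavailable: {reason}"}]
    for name, instructions in AGENTS:
        reply = client.chat.completions.create(
            model="gpt-4o",  # your Azure OpenAI deployment name
            messages=[{"role": "system", "content": f"You are {name}. {instructions}"}]
            + transcript,
        )
        # Label each turn so later agents can attribute prior contributions.
        transcript.append(
            {"role": "assistant", "content": f"[{name}] {reply.choices[0].message.content}"}
        )
    return transcript[-1]["content"]  # the Orchestrator's synthesized plan
```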
Prerequisites:
- Azure Developer CLI (azd)
- Python 3.11+
- Azure subscription with Azure OpenAI access
Deploy:

```bash
azd auth login
azd env new dev
azd env set AZURE_LOCATION eastus2
azd up
```

A web-based UI is included at `src/frontend/index.html` for interacting with the optimization system.
Features:
- Visual agent workflow progress
- Real-time chat interface showing agent conversations
- Driver selection dropdown
- Markdown-rendered agent responses
Running locally: Open `src/frontend/index.html` in a browser after configuring `config.local.js` with your API endpoint.

After deployment: The postprovision hook generates `config.local.js` with your Azure Function URL.
| Endpoint | Method | Description |
|---|---|---|
| `/api/health` | GET | Health check |
| `/api/optimize-routes` | POST | Trigger optimization: `{"driver_id": "D001", "reason": "..."}` |
| `/api/drivers` | GET | List all drivers |
| `/api/routes/{driver_id}` | GET | Get a driver's route details |
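As a usage example, this triggers an optimization run from Python using only the standard library; the base URL is a placeholder (locally it is the `func start` address, after deployment your Function App URL from `config.local.js`):

```python
import json
import urllib.request

BASE_URL = "http://localhost:7071"  # placeholder: substitute your Function App URL

payload = {"driver_id": "D001", "reason": "Called in sick"}
req = urllib.request.Request(
    f"{BASE_URL}/api/optimize-routes",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))  # the synthesized redistribution plan
```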
Local development:

```bash
cd src/functions
python -m venv .venv && .venv\Scripts\activate   # macOS/Linux: source .venv/bin/activate
pip install -r requirements.txt
# Update local.settings.json with AZURE_OPENAI_ENDPOINT
func start
```
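A minimal `local.settings.json` for the step above might look like this; the first two keys are the standard Azure Functions local settings, only `AZURE_OPENAI_ENDPOINT` is specific to this project, and the endpoint value shown is a placeholder:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AZURE_OPENAI_ENDPOINT": "https://<your-openai-resource>.openai.azure.com/"
  }
}
```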
Project structure:

```
├── azure.yaml          # azd configuration
├── infra/              # Bicep IaC (OpenAI, Functions, Monitoring)
├── src/
│   ├── functions/      # Python Azure Functions (API + Agents)
│   └── frontend/       # Web UI (HTML + Tailwind CSS)
└── hooks/              # azd lifecycle hooks
```
Azure OpenAI (GPT-4o) • AI Foundry Hub & Project • Azure Functions • Application Insights • Storage Account
Azure OpenAI usage is logged to Log Analytics. See Azure OpenAI Monitoring Queries for KQL queries to track:
- Token usage - Prompt and completion tokens over time
- API calls - Total calls per model, streaming vs non-streaming
- Latency - Per request and per model (avg, max, P95)
- Request details - Individual request inspection, slow request detection
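A representative latency query, assuming diagnostic settings route Azure OpenAI request logs to the workspace's `AzureDiagnostics` table (see the linked queries for the repo's exact versions):

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Category == "RequestResponse"
| summarize calls = count(), avgMs = avg(DurationMs), p95Ms = percentile(DurationMs, 95)
    by OperationName
| order by calls desc
```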
Cleanup:

```bash
azd down
```

License: MIT