The smarter subtitle translator. It reads your SRT, groups sequential lines for context, and uses GPT to produce translations that actually sound human.
⚡ Get Started • ✨ How It Works • 🎮 API Usage • ⚙️ Configuration • 🚀 Why This Slaps
context-aware-srt-translation is the translator your subtitles deserve. Stop feeding GPT one line at a time and getting robotic, disconnected results. This service groups sequential subtitle lines together, giving the AI the context it needs to understand the conversation and produce translations that actually flow naturally.
| Context Windows | Concurrent Processing | Auto Fallback |
|---|---|---|
| 3 lines translated together | Parallel chunk translation | OpenAI → DeepL seamlessly |
How it works:
- You: POST your SRT file to the API
- Service: Groups lines into context windows, translates concurrently (see the sketch after this list)
- Result: Natural translations that respect conversational flow
- Bonus: Full statistics on what happened
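The "translates concurrently" step can be pictured as a semaphore-bounded fan-out: each context window becomes one translation call, and the calls run in parallel up to a cap. A minimal sketch of that pattern with `asyncio` (function and variable names are illustrative, not the repo's actual code):

```python
import asyncio

MAX_CONCURRENT_REQUESTS = 10  # mirrors the env var of the same name

async def translate_chunk(chunk: list[str]) -> list[str]:
    # Placeholder for one GPT call that translates a whole context window.
    await asyncio.sleep(0.1)  # stand-in for network latency
    return [f"[tr] {line}" for line in chunk]

async def translate_all(chunks: list[list[str]]) -> list[list[str]]:
    # Bound the fan-out so we do not hammer the API.
    semaphore = asyncio.Semaphore(MAX_CONCURRENT_REQUESTS)

    async def bounded(chunk: list[str]) -> list[str]:
        async with semaphore:
            return await translate_chunk(chunk)

    # gather() keeps results in the original chunk order.
    return await asyncio.gather(*(bounded(c) for c in chunks))

if __name__ == "__main__":
    demo = [["Hello.", "How are you?"], ["I am fine.", "Thanks!"]]
    print(asyncio.run(translate_all(demo)))
```

The real service wires this into a FastAPI handler; the point here is only that windows travel in parallel, capped by `MAX_CONCURRENT_REQUESTS`.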
Line-by-line translation is a vibe-killer. Context windows make other methods look ancient.
| ❌ Line-by-Line (Pain) | ✅ Context Windows (Glory) |
|---|---|
| "I think we should..." → "Sanırım biz..."<br>"...go there tomorrow" → "...yarın oraya git"<br>Disconnected. Robotic. Wrong verb forms. | ["I think we should...", "...go there tomorrow"] → ["Bence yarın oraya...", "...gitmeliyiz"]<br>Connected. Natural. Correct grammar. |
The difference is context. When GPT sees the full thought, it understands the sentence structure, maintains speaker tone, and produces translations humans would actually write.
```bash
git clone https://github.com/yigitkonur/context-aware-srt-translation-gpt.git
cd context-aware-srt-translation-gpt

python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

cp .env.example .env
# Add your OpenAI API key (required)
# Add DeepL API key (optional fallback)

python run.py
```

The API is now live at http://localhost:8000 🚀
Instead of translating each subtitle line individually (which loses context), this service groups sequential lines:
```
┌─────────────────────────────────────────────────┐
│ Traditional:  Line 1 → Translate → Output 1     │
│               Line 2 → Translate → Output 2     │
│               Line 3 → Translate → Output 3     │
│               ❌ No context between lines        │
└─────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────┐
│ Context Window:                                  │
│   [Line 1, Line 2, Line 3] → Translate Together  │
│                      ↓                           │
│   [Output 1, Output 2, Output 3]                 │
│                                                  │
│   ✅ AI sees the full picture                    │
└─────────────────────────────────────────────────┘
```
This allows GPT to:

- Maintain speaker continuity → Same character, same voice
- Preserve conversation flow → Questions match answers
- Handle split sentences → "I think..." + "...we should go" = coherent thought
- Respect cultural context → Idioms translated appropriately
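The windowing step itself is simple: split the parsed subtitle lines into groups of `CONTEXT_WINDOW_SIZE`, translate each group as one unit, then map the outputs back to their original cues. A minimal sketch of that idea (illustrative only, not the repo's `srt_parser.py` or `translator.py`):

```python
from typing import Callable

CONTEXT_WINDOW_SIZE = 3  # same default as the config table below

def make_windows(lines: list[str], size: int = CONTEXT_WINDOW_SIZE) -> list[list[str]]:
    """Group sequential subtitle lines into fixed-size context windows."""
    return [lines[i:i + size] for i in range(0, len(lines), size)]

def translate_srt_lines(lines: list[str],
                        translate_window: Callable[[list[str]], list[str]]) -> list[str]:
    """Translate window by window, then flatten back to one line per cue."""
    out: list[str] = []
    for window in make_windows(lines):
        translated = translate_window(window)   # one model call per window
        assert len(translated) == len(window)   # keep 1:1 cue alignment
        out.extend(translated)
    return out

# Toy usage: a fake "translator" that just tags each line.
print(translate_srt_lines(
    ["I think we should...", "...go there tomorrow", "Sounds good."],
    lambda w: [f"[tr] {s}" for s in w],
))
```

In practice you never call anything like this yourself; you just POST the SRT content to the API, as in the example below: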
```bash
curl -X POST "http://localhost:8000/subtitle-translate" \
  -H "Content-Type: application/json" \
  -d '{
    "srt_content": "1\n00:00:01,000 --> 00:00:04,000\nHello, how are you?\n\n2\n00:00:05,000 --> 00:00:08,000\nI am doing great, thanks!",
    "source_language": "en",
    "target_language": "tr"
  }'
```

Response:

```json
{
  "translated_srt_content": "1\n00:00:01,000 --> 00:00:04,000\nMerhaba, nasılsın?\n\n2\n00:00:05,000 --> 00:00:08,000\nÇok iyiyim, teşekkürler!",
"status": "success",
"error_message": null,
"stats": {
"total_sentences": 2,
"translated_sentences": 2,
"failed_sentences": 0,
"success_rate": 100.0,
"openai_calls": 1,
"deepl_calls": 0,
"elapsed_seconds": 1.23
}
}
```
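The same request from Python, using the `requests` package (not bundled with the service; any HTTP client works):

```python
import requests

srt = (
    "1\n00:00:01,000 --> 00:00:04,000\nHello, how are you?\n\n"
    "2\n00:00:05,000 --> 00:00:08,000\nI am doing great, thanks!"
)

resp = requests.post(
    "http://localhost:8000/subtitle-translate",
    json={
        "srt_content": srt,
        "source_language": "en",
        "target_language": "tr",
    },
    timeout=120,  # long files can take a while to translate
)
resp.raise_for_status()

data = resp.json()
print(data["stats"]["success_rate"])        # e.g. 100.0
print(data["translated_srt_content"][:80])  # start of the translated SRT
```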
# {"status": "healthy", "version": "2.0.0"}All settings via environment variables:
| Variable | Default | Description |
|---|---|---|
| `OPENAI_API_KEY` | – | Required. Your OpenAI API key |
| `DEEPL_API_KEY` | – | Optional fallback service |
| `OPENAI_MODEL` | `gpt-4o-mini` | Model for translations |
| `OPENAI_TEMPERATURE` | `0.3` | Lower = more consistent |
| `CONTEXT_WINDOW_SIZE` | `3` | Lines per translation chunk |
| `MAX_CONCURRENT_REQUESTS` | `10` | Parallel API calls |
| `LOG_LEVEL` | `INFO` | Logging verbosity |
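Reading these boils down to plain environment lookups with defaults; a minimal sketch (the actual `src/config.py` may be structured differently, e.g. with Pydantic settings):

```python
import os

# Required
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]            # raises KeyError if missing

# Optional, with the defaults from the table above
DEEPL_API_KEY = os.getenv("DEEPL_API_KEY")                # None -> fallback disabled
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
OPENAI_TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0.3"))
CONTEXT_WINDOW_SIZE = int(os.getenv("CONTEXT_WINDOW_SIZE", "3"))
MAX_CONCURRENT_REQUESTS = int(os.getenv("MAX_CONCURRENT_REQUESTS", "10"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
```

The project layout below shows where this configuration logic sits (`src/config.py`).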
```
src/
├── config.py             # Environment configuration
├── models.py             # Pydantic request/response models
├── srt_parser.py         # SRT parsing & reconstruction
├── translator.py         # Main orchestration logic
├── main.py               # FastAPI application
└── services/
    ├── base.py            # Service interface
    ├── openai_service.py  # OpenAI implementation
    └── deepl_service.py   # DeepL fallback
```
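The `services/` layout hints at the fallback design: both providers implement one small interface, and the orchestrator tries OpenAI first and retries with DeepL on failure. A simplified sketch of that shape (class and method names are illustrative, not copied from the repo):

```python
from abc import ABC, abstractmethod
from typing import Optional

class TranslationService(ABC):
    """Common interface implemented by the OpenAI and DeepL backends."""

    @abstractmethod
    async def translate(self, lines: list[str], source: str, target: str) -> list[str]:
        ...

async def translate_with_fallback(primary: TranslationService,
                                  fallback: Optional[TranslationService],
                                  lines: list[str], source: str, target: str) -> list[str]:
    """Try the primary service; on any error, retry once with the fallback."""
    try:
        return await primary.translate(lines, source, target)
    except Exception:
        if fallback is None:
            raise
        return await fallback.translate(lines, source, target)
```

Keeping each provider behind one interface is what makes the OpenAI → DeepL switch invisible to the rest of the pipeline.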
Interactive docs available when running:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
```bash
# Setup
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# Run tests
pytest tests/ -v

# Run with hot reload
python run.py
```

| Problem | Solution |
|---|---|
| OpenAI rate limit | Reduce MAX_CONCURRENT_REQUESTS |
| DeepL not working | Check DEEPL_API_KEY is set correctly |
| Translations cut off | Increase OPENAI_MAX_TOKENS |
| Wrong language codes | Use ISO 639-1 codes: en, tr, de, fr, etc. |
Built with 🔥 because line-by-line subtitle translation is a crime against cinema.
MIT © Yiğit Konur