
Conversation

@slin1237 (Collaborator)

Overview

Introduce a chat template implementation in the SGL Router tokenizer, which now fully supports loading templates from multiple sources, matching the behavior of SGLang's existing tokenizer manager.

Implementation Details

1. Template Loading Priority

When a chat template path is explicitly provided, it overrides any existing template in the tokenizer:

// Load tokenizer with custom chat template - overrides any built-in template
let tokenizer = HuggingFaceTokenizer::from_file_with_chat_template(
    "tokenizer.json",
    Some("custom_template.jinja")
)?;

2. Template Sources

Our implementation supports loading templates from the following sources (a minimal resolution sketch follows the list):

  1. tokenizer_config.json (automatic) - Default behavior when no custom template is specified
  2. .jinja files (explicit) - When a custom template path is provided, it overrides the default
  3. Programmatic setting - Templates can be set after tokenizer creation
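
As a rough sketch of this resolution order (the helper name is illustrative; tokenizer_config.json stores the template under the standard chat_template key):

use std::fs;
use serde_json::Value;

/// Resolve the template source: an explicit .jinja path wins over
/// whatever tokenizer_config.json provides.
fn resolve_chat_template(
    tokenizer_config_path: &str,
    explicit_template_path: Option<&str>,
) -> Option<String> {
    // 1. An explicitly provided .jinja file always takes precedence.
    if let Some(path) = explicit_template_path {
        return fs::read_to_string(path).ok();
    }
    // 2. Otherwise fall back to tokenizer_config.json, if it exists.
    let raw = fs::read_to_string(tokenizer_config_path).ok()?;
    let config: Value = serde_json::from_str(&raw).ok()?;
    config.get("chat_template")?.as_str().map(str::to_owned)
}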

3. Key Features Implemented

a. Loading from Custom .jinja File

pub fn from_file_with_chat_template(
    file_path: &str, 
    chat_template_path: Option<&str>
) -> Result<Self>

b. Setting Template After Creation

pub fn set_chat_template(&mut self, template: String)

c. Factory Functions

// With custom template
create_tokenizer_with_chat_template(tokenizer_path, Some(template_path))

// Without custom template (uses tokenizer_config.json if available)
create_tokenizer_from_file(tokenizer_path)

4. Usage Examples

Example 1: Load with Custom Template

use sglang_router_rs::tokenizer::Tokenizer;

// Load tokenizer with custom chat template
let tokenizer = Tokenizer::from_file_with_chat_template(
    "path/to/tokenizer.json",
    Some("path/to/custom_template.jinja")
)?;

Example 2: Override Template After Loading

use sglang_router_rs::tokenizer::huggingface::HuggingFaceTokenizer;

// Load tokenizer
let mut tokenizer = HuggingFaceTokenizer::from_file("tokenizer.json")?;

// Any valid Jinja2 template string works; a minimal ChatML-style one:
let custom_template_string = String::from(
    "{% for message in messages %}<|im_start|>{{ message.role }}\n{{ message.content }}<|im_end|>\n{% endfor %}",
);

// Override template (like Python's tokenizer.chat_template = template)
tokenizer.set_chat_template(custom_template_string);

Example 3: Apply Chat Template

use sglang_router_rs::tokenizer::chat_template::ChatMessage;

let messages = vec![
    ChatMessage::system("You are helpful"),
    ChatMessage::user("Hello"),
];

let formatted = tokenizer.apply_chat_template(&messages, true)?;
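
The boolean argument presumably mirrors add_generation_prompt in Hugging Face's apply_chat_template, appending the opening tokens of an assistant turn. With a ChatML-style template (purely illustrative; the actual output depends on the loaded template), the rendered string would look roughly like:

<|im_start|>system
You are helpful<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant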


Co-authored-by: Chang Su <chang.s.su@oracle.com>
@slin1237 requested a review from ByronHsu as a code owner on August 20, 2025 00:07
@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @slin1237, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive chat template support to the SGL Router's Rust tokenizer, enabling it to load and apply chat templates from various sources, including tokenizer_config.json files and custom Jinja files. This enhancement ensures the Rust tokenizer's behavior is consistent with SGLang's existing tokenizer manager, providing greater flexibility in handling conversational AI models.

Highlights

  • New Chat Template Module: A dedicated chat_template.rs module has been added, providing core functionalities for defining chat messages (ChatMessage) and processing them using Jinja2 templates via ChatTemplateProcessor.
  • Flexible Template Loading: The HuggingFaceTokenizer now supports loading chat templates from tokenizer_config.json by default, or from a specified .jinja file, with the latter taking precedence.
  • Programmatic Template Control: A new set_chat_template method allows developers to dynamically set or override the chat template after the tokenizer has been initialized.
  • Updated Tokenizer Factory: The tokenizer factory functions have been extended to incorporate the new chat template loading logic, simplifying tokenizer creation with template support.
  • Expanded Test Coverage: New unit and integration tests have been added to thoroughly validate the chat template processing and loading mechanisms, including tests for different template styles (e.g., Llama, ChatML).

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces chat template support for the SGLang Router Tokenizer, a valuable addition that aligns its functionality with the existing SGLang tokenizer manager. The implementation is well-structured, with clear separation of concerns into a new chat_template module and good integration into the existing tokenizer factory and HuggingFace tokenizer implementation. The test coverage for the new functionality is also comprehensive, covering various template styles and loading mechanisms.

I've identified a couple of opportunities for performance improvements in the chat_template.rs file, primarily related to avoiding repeated work in template processing. These are detailed in the review comments.

Comment on lines +68 to +78 (severity: medium)

The current implementation of apply_chat_template creates a new minijinja::Environment and parses the template on every call. This is inefficient as it involves repeated work for a template that doesn't change.

For better performance, I recommend refactoring ChatTemplateProcessor to parse the template only once during its initialization. You could store the compiled template or the minijinja::Environment instance within the ChatTemplateProcessor struct.

This would likely involve (a rough sketch follows):

  1. Changing ChatTemplateProcessor::new to return a Result and perform the one-time parsing.
  2. Modifying HuggingFaceTokenizer to store an Option<ChatTemplateProcessor> instead of Option<String>, and initializing it when the tokenizer is loaded.
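
A minimal sketch of that refactor (names are illustrative; assumes minijinja's add_template_owned, which requires the loader feature):

use minijinja::Environment;

/// Compile the template once at construction; later calls only render.
pub struct ChatTemplateProcessor {
    env: Environment<'static>,
    bos_token: Option<String>,
    eos_token: Option<String>,
}

impl ChatTemplateProcessor {
    /// Parsing happens here exactly once, so template errors surface at
    /// load time instead of on every apply_chat_template call.
    pub fn new(
        template: String,
        bos_token: Option<String>,
        eos_token: Option<String>,
    ) -> Result<Self, minijinja::Error> {
        let mut env = Environment::new();
        // add_template_owned takes ownership of the source string.
        env.add_template_owned("chat", template)?;
        Ok(Self { env, bos_token, eos_token })
    }

    /// Render messages with the precompiled template.
    pub fn apply<M: serde::Serialize>(&self, messages: &[M]) -> Result<String, minijinja::Error> {
        let tmpl = self.env.get_template("chat")?;
        tmpl.render(minijinja::context! {
            messages => messages,
            bos_token => self.bos_token.as_deref().unwrap_or_default(),
            eos_token => self.eos_token.as_deref().unwrap_or_default()
        })
    }
}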

Comment on lines +95 to +96 (severity: medium)

There's an unnecessary clone of the bos_token and eos_token strings here. You can avoid this by passing a string slice (&str) to the context, which minijinja supports. Using as_deref is more efficient as it avoids allocating a new string if the token is present.

Suggested change:

- bos_token => self.bos_token.clone().unwrap_or_default(),
- eos_token => self.eos_token.clone().unwrap_or_default()
+ bos_token => self.bos_token.as_deref().unwrap_or_default(),
+ eos_token => self.eos_token.as_deref().unwrap_or_default()

@slin1237 added the enhancement (New feature or request) and router labels on Aug 20, 2025
@slin1237 force-pushed the router-tokenizer-05 branch from 9acc1c7 to 33fbef7 on August 20, 2025 00:18
@slin1237 force-pushed the router-tokenizer-05 branch from 33fbef7 to 715c119 on August 20, 2025 01:29
@zhyncs merged commit 5fbad30 into main on Aug 20, 2025 (28 of 29 checks passed)
@zhyncs deleted the router-tokenizer-05 branch on August 20, 2025 03:14
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request on Sep 8, 2025 (Co-authored-by: Chang Su <chang.s.su@oracle.com>)
@slin1237 mentioned this pull request on Sep 11, 2025