Conversation
@timothygao8710 (Collaborator) commented Oct 6, 2025

Summary by CodeRabbit

  • Documentation
    • Added comprehensive guide for mixed-precision serving in disaggregated deployments. Includes setup prerequisites, configuration examples, and workflows demonstrating how to combine different quantization levels across compute-bound and memory-bound workers for optimized performance.

Description

This PR updates the Disaggregated Serving README to document how to run context and generation servers at mixed precisions using TensorRT Model Optimizer (ModelOpt). An example usage for BF16 prefill + FP8 decode is included.

Test Coverage

Performance and accuracy tests were run for FP8 and NVFP4 with the Llama 3.1 8B base model on GSM8K and MMLU, using one context server and one generation server. Example:

FP16/16 (2xH100):
"results": { "gsm8k": { "alias": "gsm8k", "exact_match,strict-match": 0.7558756633813495, "exact_match_stderr,strict-match": 0.011832404674077592, "exact_match,flexible-extract": 0.7839272175890827, "exact_match_stderr,flexible-extract": 0.011336531489638858 } },

"start_time": 34733.35253426, "end_time": 35711.569743744, "total_evaluation_time_seconds": "978.2172094840062"

FP16/8 (2xH100):
"results": { "gsm8k": { "alias": "gsm8k", "exact_match,strict-match": 0.7354056103108415, "exact_match_stderr,strict-match": 0.012150554001563231, "exact_match,flexible-extract": 0.7626990144048522, "exact_match_stderr,flexible-extract": 0.011718409178739442 } },
"start_time": 33387.524966359, "end_time": 34123.906708314, "total_evaluation_time_seconds": "736.3817419550032"

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • [x] Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
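
For reference, a few example invocations composed from the flags documented above (the stage and GPU names are the placeholder examples from the help text):

```
/bot run
/bot run --disable-fail-fast
/bot run --stage-list "A10-PyTorch-1"
/bot run --test-backend "pytorch, cpp"
```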

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.


## Mixed Precision Context and Generation

In disaggregated serving, the context (prefill) workers and generation (decode) workers have different performance characteristics: prefill workers are more compute-bound while decode workers are more memory-bound. Therefore, it may be beneficial to run prefill workers at higher precision. Running these workers at different precisions also makes it possible to interpolate between the performance/compute trade-offs of different quantization levels.
Collaborator

No need to specify prefill/decode, the terms are not used in the rest of the document.

Collaborator Author

Changed!


Collaborator

prefill workers are more compute-bound while decode workers are more memory-bound.

Use context / generation as well here. Remove 'more'

Collaborator Author

Changed!
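
To make the workflow under discussion concrete, here is a minimal sketch assembled from the pieces quoted elsewhere in this PR: a checkpoint quantized with `--kv_cache_qformat none` (quoted later in this review) and the `--server_role`/`--metadata_server_config_file` launch pattern noted in the CodeRabbit comment below. The quantization script name, paths, and model IDs are illustrative assumptions, not verbatim from the README:

```bash
# 1. Quantize with TensorRT Model Optimizer, leaving the KV cache dtype
#    unquantized so context and generation workers share the same KV cache
#    dtype (a stated prerequisite). The script name and flags other than
#    --kv_cache_qformat follow ModelOpt's PTQ example and are assumptions.
python hf_ptq.py --pyt_ckpt_path meta-llama/Llama-3.1-8B \
  --qformat fp8 --kv_cache_qformat none --export_path ./llama-3.1-8b-fp8

# 2. Start an FP8 context worker and a BF16 generation worker, both
#    registering with the same metadata server.
trtllm-serve ./llama-3.1-8b-fp8 \
  --server_role CONTEXT --metadata_server_config_file ./metadata_config.yaml &
trtllm-serve meta-llama/Llama-3.1-8B \
  --server_role GENERATION --metadata_server_config_file ./metadata_config.yaml &

# 3. Launch the disaggregated server over the registered workers.
trtllm-serve disaggregated -m ./metadata_config.yaml
```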

@juney-nvidia added the "Community want to contribute" (PRs initiated from Community) and "Community Engagement" (help/insights needed from community) labels Oct 7, 2025
@timothygao8710 (Collaborator Author)

The full results for the MMLU and GSM8K tests using 16/16, 16/8, 8/8, 16/4, 4/4, and 4/16 precisions for Llama 3.1 8B are available here: https://docs.google.com/spreadsheets/d/1jyG4WFsrPSoBISw72wioJjgLJZL2Yqe7uRHoFIpyfcE/edit?usp=sharing

@timothygao8710 marked this pull request as ready for review October 16, 2025 21:08
@timothygao8710 requested review from a team as code owners October 16, 2025 21:08
@coderabbitai bot (Contributor) commented Oct 16, 2025

📝 Walkthrough

Added documentation section to disaggregated README describing mixed-precision serving, explaining compute vs. memory-bound worker trade-offs, and providing prerequisites and example workflows for quantized checkpoints with identical KV cache dtypes. Duplicate content was added.

Changes

  • examples/disaggregated/README.md (Mixed-Precision Documentation): Added new "Mixed Precision Context and Generation" section with rationale for compute- vs. memory-bound workers, prerequisites for enabling mixed precision, and example workflow commands for BF16 generation and FP8 context. Content appears duplicated in two locations.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Rationale: Documentation-only changes with straightforward additions. Review involves verifying content accuracy, checking for the noted duplication, and ensuring command snippets are correct. No logic or structural code changes to evaluate.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Title Check ✅ Passed — The pull request title "[None] [doc] Add Mixed Precision Context and Generation section to Disagg" directly and accurately summarizes the main change in the PR. It follows the repository's required template format with a ticket identifier, type designation, and concise summary. The title clearly reflects that this is a documentation change adding a section about mixed-precision serving to the disaggregated deployment README, which matches the actual changeset described in the raw summary.
  • Description Check ✅ Passed — The pull request description is substantially complete and follows the template structure provided in the repository. It includes a clear Description section explaining the purpose of the changes (updating the README for mixed-precision context and generation servers with BF16 prefill and FP8 decode examples), a comprehensive Test Coverage section with detailed performance metrics and evaluation results from GSM8K and MMLU tests, and the full PR Checklist with an explicit checkmark indicating the author reviewed the appropriate items. While the description could potentially be more detailed about the specific prerequisites and example workflow mentioned in the raw summary, the core required information is present and adequate.
  • Docstring Coverage ✅ Passed — No functions found in the changes. Docstring coverage check skipped.

@coderabbitai bot (Contributor) left a comment
Actionable comments posted: 1

🧹 Nitpick comments (1)
examples/disaggregated/README.md (1)

203-260: Consider consolidating duplication with the Dynamic scaling section.

The new "Mixed Precision Context and Generation" section overlaps significantly with the "Dynamic scaling" section below (lines 261–286) in terms of the workflow and command patterns shown. Both sections demonstrate:

  • Starting context servers with --server_role CONTEXT and --metadata_server_config_file
  • Starting generation servers with --server_role GENERATION and --metadata_server_config_file
  • Launching the disaggregated server with -m ./metadata_config.yaml

This duplication could confuse readers about whether the features are prerequisites for each other or independent capabilities. Consider clarifying the relationship or consolidating the examples.

Review the overlap between lines 235–253 and lines 273–279 to determine if one example can reference the other or if the sections should be restructured.

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2b8722b and 269122d.

📒 Files selected for processing (1)
  • examples/disaggregated/README.md (1 hunks)
🧰 Additional context used
🪛 LanguageTool
examples/disaggregated/README.md

[grammar] ~209-~209: There might be a mistake here.
Context: ...le mixed precision serving, you'll need: 1. A quantized checkpoint created with [Ten...

(QB_NEW_EN)


[grammar] ~210-~210: There might be a mistake here.
Context: ...t created with TensorRT Model Optimizer 2. The original unquantized checkpoint 3. B...

(QB_NEW_EN)


[grammar] ~211-~211: There might be a mistake here.
Context: ...) 2. The original unquantized checkpoint 3. Both checkpoints must use the same KV ca...

(QB_NEW_EN)

🔇 Additional comments (1)
examples/disaggregated/README.md (1)

205-205: Verify past feedback has been addressed.

The text correctly uses "context workers" and "generation workers" terminology (not prefill/decode) and omits the "more" modifier ("compute-bound" not "more compute-bound"), which aligns with feedback from the previous review round. ✓


### Example (BF16 Gen, FP8 Ctx)

A quantized checkpoint can be created `--kv_cache_qformat none`.
Contributor

⚠️ Potential issue | 🟡 Minor

Complete the sentence at line 216.

The sentence is incomplete: "A quantized checkpoint can be created --kv_cache_qformat none." is missing a preposition or connector between the verb and the flag.

-A quantized checkpoint can be created `--kv_cache_qformat none`.
+A quantized checkpoint can be created with `--kv_cache_qformat none`.
📝 Committable suggestion


Suggested change
A quantized checkpoint can be created `--kv_cache_qformat none`.
A quantized checkpoint can be created with `--kv_cache_qformat none`.
🤖 Prompt for AI Agents
In examples/disaggregated/README.md around line 216, the sentence "A quantized
checkpoint can be created `--kv_cache_qformat none`." is missing a connector;
update the line to include a preposition such as "with" or "using" (for example:
"A quantized checkpoint can be created with `--kv_cache_qformat none`.") to make
the sentence grammatically correct and clear.
