
Conversation

@b8zhong (Collaborator) commented on Nov 4, 2025

Don't merge this yet

Rebase #9526

python3 -m sglang.launch_server \
  --model-path=nvidia/Llama-4-Scout-17B-16E-Instruct-FP4 \
  --tp=8 \
  --quantization modelopt_fp4 \
  --moe-runner-backend=flashinfer_cutlass \
  --attention-backend trtllm_mha \
  --trust-remote-code \
  --mem-fraction-static=0.7 \
  --context-length=131072 \
  --kv-cache-dtype=fp8_e4m3 \
  --model-loader-extra-config '{"enable_multithread_load": true, "num_threads": 8}'
# Also tried the triton attention backend; no difference.

python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 500

100%|██████████| 1319/1319 [00:21<00:00, 62.48it/s]
Accuracy: 0.884
Invalid: 0.001
Latency: 21.276 s
Output throughput: 6298.596 token/s

This is too low: MMLU Pro needs to be >= 0.74. It appears to be an accuracy issue in the flashinfer_cutlass MoE path.

lm_eval --model local-chat-completions \
  --model_args model=nvidia/Llama-4-Scout-17B-16E-Instruct-FP4,base_url=http://localhost:30000/v1/chat/completions,num_concurrent=512,timeout=999999,max_gen_toks=2048 \
  --tasks mmlu_pro --batch_size 512 --apply_chat_template --num_fewshot 0
|       Tasks       |Version|    Filter    |n-shot|  Metric   |   |Value |   |Stderr|
|-------------------|------:|--------------|-----:|-----------|---|-----:|---|-----:|
|mmlu_pro           |    2.0|custom-extract|      |exact_match|↑  |0.7239|±  |0.0040|
| - biology         |    2.1|custom-extract|     0|exact_match|↑  |0.8703|±  |0.0126|
| - business        |    2.1|custom-extract|     0|exact_match|↑  |0.7592|±  |0.0152|
| - chemistry       |    2.1|custom-extract|     0|exact_match|↑  |0.7694|±  |0.0125|
| - computer_science|    2.1|custom-extract|     0|exact_match|↑  |0.7439|±  |0.0216|
| - economics       |    2.1|custom-extract|     0|exact_match|↑  |0.8033|±  |0.0137|
| - engineering     |    2.1|custom-extract|     0|exact_match|↑  |0.6047|±  |0.0157|
| - health          |    2.1|custom-extract|     0|exact_match|↑  |0.7164|±  |0.0158|
| - history         |    2.1|custom-extract|     0|exact_match|↑  |0.6168|±  |0.0249|
| - law             |    2.1|custom-extract|     0|exact_match|↑  |0.4841|±  |0.0151|
| - math            |    2.1|custom-extract|     0|exact_match|↑  |0.8275|±  |0.0103|
| - other           |    2.1|custom-extract|     0|exact_match|↑  |0.6851|±  |0.0153|
| - philosophy      |    2.1|custom-extract|     0|exact_match|↑  |0.6152|±  |0.0218|
| - physics         |    2.1|custom-extract|     0|exact_match|↑  |0.7883|±  |0.0113|
| - psychology      |    2.1|custom-extract|     0|exact_match|↑  |0.7657|±  |0.0150|

| Groups |Version|    Filter    |n-shot|  Metric   |   |Value |   |Stderr|
|--------|------:|--------------|------|-----------|---|-----:|---|-----:|
|mmlu_pro|      2|custom-extract|      |exact_match|↑  |0.7239|±  | 0.004|

BF16 baseline commands and results (H200):

python3 -m sglang.launch_server \
  --model-path=/opt/dlami/nvme/models/Llama-4-Scout-17B-16E-Instruct/ \
  --tp=8 \
  --trust-remote-code \
  --mem-fraction-static=0.7 \
  --context-length=131072 \
  --attention-backend=fa3 \
  --model-loader-extra-config '{"enable_multithread_load": true, "num_threads": 8}'
python3 benchmark/gsm8k/bench_sglang.py --num-shots 8 --num-questions 1319 --parallel 500
100%|██████████| 1319/1319 [00:32<00:00, 41.15it/s]
Accuracy: 0.918
Invalid: 0.000
Latency: 32.415 s
Output throughput: 4169.418 token/s
|       Tasks       |Version|    Filter    |n-shot|  Metric   |   |Value |   |Stderr|
|-------------------|------:|--------------|-----:|-----------|---|-----:|---|-----:|
|mmlu_pro           |    2.0|custom-extract|      |exact_match|↑  |0.7490|±  |0.0039|
| - biology         |    2.1|custom-extract|     0|exact_match|↑  |0.8773|±  |0.0123|
| - business        |    2.1|custom-extract|     0|exact_match|↑  |0.7858|±  |0.0146|
| - chemistry       |    2.1|custom-extract|     0|exact_match|↑  |0.8110|±  |0.0116|
| - computer_science|    2.1|custom-extract|     0|exact_match|↑  |0.7683|±  |0.0209|
| - economics       |    2.1|custom-extract|     0|exact_match|↑  |0.8187|±  |0.0133|
| - engineering     |    2.1|custom-extract|     0|exact_match|↑  |0.6502|±  |0.0153|
| - health          |    2.1|custom-extract|     0|exact_match|↑  |0.7421|±  |0.0153|
| - history         |    2.1|custom-extract|     0|exact_match|↑  |0.6457|±  |0.0245|
| - law             |    2.1|custom-extract|     0|exact_match|↑  |0.5232|±  |0.0151|
| - math            |    2.1|custom-extract|     0|exact_match|↑  |0.8342|±  |0.0101|
| - other           |    2.1|custom-extract|     0|exact_match|↑  |0.7154|±  |0.0149|
| - philosophy      |    2.1|custom-extract|     0|exact_match|↑  |0.6493|±  |0.0214|
| - physics         |    2.1|custom-extract|     0|exact_match|↑  |0.8006|±  |0.0111|
| - psychology      |    2.1|custom-extract|     0|exact_match|↑  |0.7870|±  |0.0145|

| Groups |Version|    Filter    |n-shot|  Metric   |   |Value|   |Stderr|
|--------|------:|--------------|------|-----------|---|----:|---|-----:|
|mmlu_pro|      2|custom-extract|      |exact_match|↑  |0.749|±  |0.0039|

@gemini-code-assist (Contributor) commented:
Summary of Changes

Hello @b8zhong, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses critical issues related to loading Llama 4 models, especially when utilizing FP4 quantization and Mixture-of-Experts (MoE) layers. The primary goal is to ensure accurate and efficient loading of quantized weights and scales, which was previously leading to suboptimal performance as indicated by the provided benchmarks. The changes introduce specialized handling for MoE layer components, improve integration with Flashinfer kernels, and refine weight permutation logic for rotary embeddings, ultimately aiming to stabilize and enhance the model's performance with FP4 quantization.

Highlights

  • Specialized MoE Weight Loading: Introduced new methods for handling w13 weight scales and refined the logic for loading expert weights and scales, particularly for FP4/FP8 quantized models, ensuring proper sharding and transposition.
  • Enhanced Flashinfer MoE Integration: Added support for apply_router_weight_on_input in Flashinfer Cutlass MoE for topK=1 and implemented explicit dtype alignment for Flashinfer inputs to ensure compatibility and correct operation (a hedged sketch of the topK=1 idea follows this list).
  • Improved Q/K Weight Permutation: Updated the rotary embedding weight permutation logic to correctly handle various quantization dtypes, including uint8 for FP4 packed weights and float8_e4m3fn for FP8 block scales.
  • Refactored Llama4 Weight Loading: A comprehensive overhaul of the load_weights method in Llama4ForCausalLM to manage complex weight loading scenarios for Mixture-of-Experts (MoE) and quantized models more effectively.
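To make the topK=1 routing highlight concrete, here is a minimal, hedged sketch of the idea behind apply_router_weight_on_input; the helper name, tensor shapes, and exact casts are illustrative assumptions, not code from this PR.

import torch

def fold_router_weight_into_input(
    hidden_states: torch.Tensor,  # [num_tokens, hidden_dim]
    topk_weights: torch.Tensor,   # [num_tokens, top_k]
    top_k: int,
):
    # Folding the routing weight into the expert input is only exact when each
    # token is routed to a single expert (top_k == 1).
    assert top_k == 1
    # Scale the tokens before they enter the expert MLPs...
    hidden_states = hidden_states * topk_weights.to(hidden_states.dtype)
    # ...and neutralize the weights so the MoE output is not scaled a second time.
    topk_weights = torch.ones_like(topk_weights)
    return hidden_states, topk_weights

The design point is simply to avoid double scaling: once the weight has been applied to the input, downstream code must treat the routing weight as 1.0.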

@b8zhong added the run-ci label on Nov 4, 2025
@gemini-code-assist bot left a comment:


Code Review

This pull request introduces fixes for loading Llama 4 FP4 models, which appears to address performance issues observed in benchmarks. The core changes involve a significant refactoring of the weight loading logic in llama4.py and mllama4.py to correctly handle permutations and formats of expert and attention weights. Additionally, there are improvements in handling CUDA graph capture with empty inputs and aligning data types for FlashInfer kernels. My review focuses on improving code maintainability by removing redundant code blocks and adhering to Python's standard style guidelines for imports. The core logic of the fix seems sound and well-targeted.
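As a rough illustration of the FlashInfer dtype-alignment point, the sketch below shows the general pattern; the specific tensors and target dtypes used in this PR may differ, so treat the choices here as assumptions.

import torch

def align_flashinfer_inputs(hidden_states: torch.Tensor, router_logits: torch.Tensor):
    # Hypothetical helper (assumed dtypes, not taken from the PR): cast inputs
    # to the dtypes the kernel expects instead of relying on whatever dtype the
    # caller happens to pass in.
    if hidden_states.dtype not in (torch.bfloat16, torch.float16):
        hidden_states = hidden_states.to(torch.bfloat16)
    if router_logits.dtype != torch.float32:
        router_logits = router_logits.to(torch.float32)
    return hidden_states, router_logits

Keeping input dtypes explicit also keeps CUDA graph capture and replay consistent, since the captured kernel always sees the same dtype. The individual review comments follow.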

Comment on python/sglang/srt/models/llama4.py (in _handle_expert_scale_params):

        num_experts: int,
        loaded_params: set,
    ) -> bool:
        import re

Severity: medium

The import re statement is located inside the _handle_expert_scale_params method. According to the PEP 8 style guide, imports should be placed at the top of the file. Moving this import to the beginning of python/sglang/srt/models/llama4.py will improve code style and consistency.

Comment on lines +722 to +730
else:
    for expert_id in range(num_experts):
        weight_loader(
            param,
            weight_chunk[expert_id],
            param_name,
            shard_id=shard_id,
            expert_id=expert_id,
        )

Severity: medium

The else block here is identical to the elif block above it. This redundancy can make the code harder to maintain. If the logic is indeed the same, the else block can be removed to simplify the code.
