Conversation

@wu6u3tw wu6u3tw commented Jun 26, 2025

[nvbugs/5302040] feat. Add whisper support (Bert Attention on SM100 and GPTAttention for cross attention on SM100)

Description

  • Add a BERT attention plugin for Blackwell (SM100):
    implemented with fmhaDispatcher

  • Add same-seq_len causal-mask support to GPT attention for cross attention:
    fix the mask type and the mask handling in the plugin layer
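For context, the "same seq_len" causal mask can be sketched in a few lines of pure Python. This is an illustrative sketch only, not the TensorRT-LLM implementation: when the query and key sequence lengths match, query position i may attend only to key positions j <= i; the general form below shifts the allowed region by the length difference.

```python
def causal_mask(q_len: int, kv_len: int) -> list[list[int]]:
    """Illustrative causal mask: 1 = attend, 0 = masked.

    When q_len == kv_len (the "same seq_len" case in this PR), query
    position i may attend to key positions j <= i. For unequal lengths
    the allowed region is shifted by the length difference.
    """
    offset = kv_len - q_len
    return [[1 if j <= i + offset else 0 for j in range(kv_len)]
            for i in range(q_len)]

print(causal_mask(3, 3))  # → [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
```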


Test Coverage

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--disable-fail-fast --skip-test --stage-list "A10-1, xxx" --gpu-type "A30, H100_PCIe" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests. Will also run the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-[Post-Merge]-1, xxx".

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md.
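The subcommand surface above can be sketched as a small argparse parser. The flag names mirror the help text; the parser itself is a hypothetical illustration of the command grammar, not the bot's actual implementation.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the /bot command grammar described above.
    parser = argparse.ArgumentParser(prog="/bot")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="Launch build/test pipelines.")
    run.add_argument("--disable-fail-fast", action="store_true")
    run.add_argument("--skip-test", action="store_true")
    run.add_argument("--stage-list", type=str)
    run.add_argument("--gpu-type", type=str)
    run.add_argument("--add-multi-gpu-test", action="store_true")
    run.add_argument("--only-multi-gpu-test", action="store_true")
    run.add_argument("--disable-multi-gpu-test", action="store_true")
    run.add_argument("--post-merge", action="store_true")
    run.add_argument("--extra-stage", type=str)

    sub.add_parser("kill", help="Kill all running builds for the pull request.")

    skip = sub.add_parser("skip", help="Skip testing for the latest commit.")
    skip.add_argument("--comment", required=True)

    sub.add_parser("reuse-pipeline", help="Reuse a previous pipeline.")
    return parser

args = build_parser().parse_args(
    ["run", "--disable-fail-fast", "--gpu-type", "A30, H100_PCIe"])
print(args.command, args.disable_fail_fast, args.gpu_type)
# → run True A30, H100_PCIe
```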

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since insufficient care and validation can break top of tree.

Summary by CodeRabbit

  • Bug Fixes

    • Improved GPU compatibility for BERT/GPT attention across architectures.
    • Limited use of custom/packed attention masks to older GPUs (SM < 100), preventing issues on newer devices.
    • Removed an unnecessary hardware guard, enabling the attention plugin on additional GPUs.
  • Refactor

    • Internal attention kernel dispatch and initialization updated for more robust execution and support checks.
    • No public API changes; behavior remains the same for supported configurations.
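The SM-based gating described above can be sketched as a simple predicate. The helper name is hypothetical; the real checks live in cpp/tensorrt_llm/common/attentionOp.cpp.

```python
def use_custom_mask_path(sm_version: int) -> bool:
    # Per the summary above, custom/packed attention masks are limited to
    # pre-Blackwell architectures (SM < 100); on SM100 the attention op
    # takes the fmhaDispatcher-based path instead. Hypothetical helper,
    # not the actual TensorRT-LLM code.
    return sm_version < 100

for sm in (80, 90, 100):
    print(sm, use_custom_mask_path(sm))
```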

@wu6u3tw wu6u3tw changed the title [https://nvbugspro.nvidia.com/bug/5302040] feat. Add whisper support (Bert Attention on SM100 and GPTAttention for cross attention on SM100) [nvbugs/5302040] feat. Add whisper support (Bert Attention on SM100 and GPTAttention for cross attention on SM100) Jun 26, 2025
@nvzhihanj (Collaborator)

/bot run

@nvzhihanj (Collaborator)

@PerkzZheng @kaiyux can you review or suggest the right PIC to review this PR?
This is to enable Whisper submission for MLPerf 5.1 B200 - adding support for SM100

@tensorrt-cicd (Collaborator)

PR_Github #10072 [ run ] triggered by Bot

@nvzhihanj (Collaborator)

@wu6u3tw please run the pre-commit hook locally and invoke /bot run in a comment.

@tensorrt-cicd (Collaborator)

PR_Github #10072 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #7436 completed with status: 'FAILURE'

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from 7c9106a to 04ee26d Compare June 27, 2025 21:55
@wu6u3tw (Contributor, Author)

wu6u3tw commented Jun 27, 2025

/bot run

@nvzhihanj (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #10192 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #10192 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #7524 completed with status: 'FAILURE'


@PerkzZheng PerkzZheng left a comment


Other changes LGTM. BTW, are there any tests added yet to guard this (Whisper)?

@wu6u3tw (Contributor, Author)

wu6u3tw commented Jun 30, 2025

Other changes LGTM. BTW, are there any tests added yet to guard this (Whisper)?

I have not added any tests yet. I can add them in a separate follow-up PR.

@wu6u3tw (Contributor, Author)

wu6u3tw commented Jul 2, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #10698 [ run ] triggered by Bot

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch 2 times, most recently from 6282efe to 122f4ca Compare July 2, 2025 20:21
@tensorrt-cicd (Collaborator)

PR_Github #10698 [ run ] completed with state FAILURE
/LLM/main/L0_MergeRequest_PR pipeline #7908 completed with status: 'ABORTED'

@wu6u3tw (Contributor, Author)

wu6u3tw commented Jul 3, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #10728 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #10728 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #7922 completed with status: 'FAILURE'

@nvzhihanj (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #11097 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #11097 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #8204 completed with status: 'FAILURE'

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from 044e809 to d6d5da4 Compare July 16, 2025 02:53

@PerkzZheng PerkzZheng left a comment


Others LGTM.

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from dbc0d52 to c83108d Compare August 12, 2025 17:41
@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run --disable-fail-fast

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from c83108d to a7d5a27 Compare August 12, 2025 17:44
@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #15013 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15013 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11340 completed with status: 'FAILURE'

@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch 2 times, most recently from 86feec2 to 02b2804 Compare August 12, 2025 22:09
@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #15018 [ run ] triggered by Bot

Signed-off-by: tinyinl <tinyinl@nvidia.com>
	modified:   cpp/tensorrt_llm/common/attentionOp.cpp
	modified:   cpp/tensorrt_llm/plugins/bertAttentionPlugin/bertAttentionPlugin.cpp
	modified:   tensorrt_llm/functional.py
	modified:   tensorrt_llm/layers/attention.py

Signed-off-by: tinyinl <tinyinl@nvidia.com>
Signed-off-by: tinyinl <tinyinl@nvidia.com>
Signed-off-by: tinyinl <tinyinl@nvidia.com>
@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from 02b2804 to 28f84e2 Compare August 12, 2025 23:07
@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run --disable-fail-fast


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aeae1ae and 28f84e2.

📒 Files selected for processing (1)
  • cpp/tensorrt_llm/common/attentionOp.cpp (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh}: In C++, close namespaces with a comment naming the namespace (e.g., } // namespace foo)
Prefer const/constexpr variables over #define for constants
Declare variables const if not modified after initialization
Use Allman brace style in C++
C++ filenames use lowerCamelCase and must be case-insensitively unique within a build target
C++ type names use UpperCamelCase
Local variables, methods, and namespaces use lowerCamelCase
Global non-static variables not in anonymous namespace use gPrefix lowerCamelCase (e.g., gExample)
Static globals or globals in anonymous namespaces use sPrefix lowerCamelCase
Locally visible static variables start with 's' (e.g., static std::once_flag sFlag;)
Member variables use mPrefix lowerCamelCase; public members may omit but are encouraged to use 'm'
Constants (enums, global/static/function-scope magic numbers) use kPREFIXED_UPPER_SNAKE (e.g., kDIGIT_NUM)
If macros are unavoidable, use UPPER_SNAKE_CASE (prefer constants over #define)
Constructor parameter that conflicts with a public member name gets trailing underscore (foo_)
Literal suffixes should be uppercase (e.g., 1234L not 1234l)
C++: use spaces only; indent 4 spaces
Run clang-format (LLVM style) before submitting; wrap lines at 120 characters
If formatting must be bypassed, use // clang-format off/on around the section
Prefer smart pointers; use unique_ptr for sole ownership, shared_ptr for shared; weak_ptr only in exceptional cases
Do not use deprecated pre-C++11 smart pointers
Use C++ style comments; avoid C comments except special inline cases; prefer // single-line
Capitalize and punctuate full-sentence comments
Follow Doxygen rules: use //! for comments and //!< for members in C++
Disable code with #if/#endif and mnemonic conditions; avoid commented-out code; avoid dead code
Do not throw exceptions across library boundaries
Use least-forceful casts; avoid removing const/volatile; avoid C-style and functional casts (except constructors); p...

Files:

  • cpp/tensorrt_llm/common/attentionOp.cpp
**/*.{cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

**/*.{cpp,cxx,cc,cu}: Avoid literal values except for 0, nullptr, true, false; use named constexpr for other literals
Place semicolon of empty for/while loop on a new line
Always use brace-delimited bodies for switch/while/do-for/if/else
Use inline C comments in argument lists when parameter meaning is unclear (e.g., /* checkForErrors = */ false)
Do not use assignment in subexpressions (e.g., if (x = y) ... is forbidden)
Switch on enums should enumerate all values and omit default to catch new values at compile time
Structure switch statements; prohibit fallthrough except between empty cases; each case ends with break or throw; return at end of case not allowed; put break inside braces for compound case
Prefer anonymous namespaces over static for internal linkage of functions
Every defined function must be called at least once (no unused methods)

Files:

  • cpp/tensorrt_llm/common/attentionOp.cpp
**/*.{h,hpp,hxx,hh,cuh,cpp,cxx,cc,cu}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Parameter names must be consistent between declarations and definitions

Files:

  • cpp/tensorrt_llm/common/attentionOp.cpp
**/*.{cpp,cxx,cc,cu,h,hpp,hxx,hh,cuh,py}

📄 CodeRabbit Inference Engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • cpp/tensorrt_llm/common/attentionOp.cpp
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd (Collaborator)

PR_Github #15025 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15018 [ run ] completed with state ABORTED

Signed-off-by: tinyinl <tinyinl@nvidia.com>

clean

Signed-off-by: tinyinl <tinyinl@nvidia.com>

clean

Signed-off-by: tinyinl <tinyinl@nvidia.com>

clean

Signed-off-by: tinyinl <tinyinl@nvidia.com>
@wu6u3tw wu6u3tw force-pushed the dev-tinyinl-whisper_v2 branch from 5ea8b09 to 0a60361 Compare August 12, 2025 23:27
@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run --disable-fail-fast

@tensorrt-cicd (Collaborator)

PR_Github #15029 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15025 [ run ] completed with state ABORTED

@wu6u3tw (Contributor, Author)

wu6u3tw commented Aug 12, 2025

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #15031 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #15029 [ run ] completed with state ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #15031 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #11353 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@nvzhihanj nvzhihanj merged commit 6c52bb0 into NVIDIA:main Aug 13, 2025
5 checks passed
6 participants