
[None][chore] AutoDeploy: Set nanov3 and superv3 configs to use flashinfer ssm #11183

Merged: galagam merged 2 commits into NVIDIA:main from nv-auto-deploy:gagam/enable_fi_ssm_for_nanov3 on Feb 4, 2026

Conversation

@galagam (Collaborator) commented Feb 2, 2026

Description

Follow-up to #10867.
Enable the flashinfer_ssm backend in examples/auto_deploy/nano_v3.yaml.

Perf boost for AutoDeploy: the charts were generated with #11194 applied. Without it, triton SSM performance is even worse.
[Image: performance comparison chart]

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with the Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
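
As an illustration, a developer could post PR comments such as the following. These flag combinations are assembled from the options documented above; the actual stages and GPU types available depend on the repository's Jenkins setup:

```
/bot run
/bot run --disable-fail-fast --gpu-type "H100_PCIe"
/bot run --stage-list "A10-PyTorch-1"
/bot skip --comment "config-only change, CI not required"
```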

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Summary by CodeRabbit

  • Chores
    • Added cached SSM attention configuration examples to deployment templates, enabling support for flashinfer backend optimization in example configurations.

Review comment thread on examples/auto_deploy/nano_v3.yaml (outdated)
@galagam galagam changed the title [None][chore] AutoDeploy: Set nanov3 config to use flashinfer ssm [None][chore] AutoDeploy: Set nanov3 and superv3 configs to use flashinfer ssm Feb 2, 2026
@galagam galagam marked this pull request as ready for review February 2, 2026 20:20
@galagam galagam requested review from a team as code owners February 2, 2026 20:20
@coderabbitai bot (Contributor) commented Feb 2, 2026

📝 Walkthrough


These changes add a new transform entry named insert_cached_ssm_attention with flashinfer_ssm backend to two auto-deployment configuration files.

Changes

Cohort: Configuration Updates
File(s): examples/auto_deploy/nano_v3.yaml, examples/auto_deploy/super_v3.yaml
Summary: Added a new insert_cached_ssm_attention transform with the flashinfer_ssm backend. nano_v3.yaml includes enabled: true, while super_v3.yaml omits the stage and enabled fields.
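
From the file summary above, the added entry plausibly takes a shape like the following sketch. The transform name and backend value come from this PR's summary; the top-level transforms key and the stage value shown are assumptions for illustration, not verified against the repository:

```yaml
# Sketch of the transform entry described in this PR's summary.
# "insert_cached_ssm_attention" and "backend: flashinfer_ssm" are from the PR;
# the "transforms" key and the "stage" value are assumed for illustration.
transforms:
  insert_cached_ssm_attention:
    stage: cache_init        # assumed stage name; super_v3.yaml omits this field
    backend: flashinfer_ssm  # switch SSM kernels from the triton backend
    enabled: true            # present in nano_v3.yaml; omitted in super_v3.yaml
```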

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Description check ⚠️ Warning: PR description lacks required sections: title format missing, Test Coverage section empty, and PR checklist items not properly evaluated for relevance. Resolution: add a proper PR title following the [type] format, explicitly list test coverage (or explain why none is needed), and review/uncheck checklist items irrelevant to configuration-only changes.

✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title accurately describes the main change, enabling the flashinfer SSM backend configuration in the nanov3 and superv3 AutoDeploy configs.
  • Docstring Coverage ✅ Passed: No functions found in the changed files; docstring coverage check skipped.


Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>
@galagam galagam force-pushed the gagam/enable_fi_ssm_for_nanov3 branch from 23a4ed3 to c3f70f0 Compare February 3, 2026 04:56
@galagam (Collaborator, Author) commented Feb 3, 2026

/bot run

@tensorrt-cicd (Collaborator): PR_Github #34587 [ run ] triggered by Bot. Commit: c3f70f0

@tensorrt-cicd (Collaborator): PR_Github #34587 [ run ] completed with state SUCCESS. Commit: c3f70f0
/LLM/main/L0_MergeRequest_PR pipeline #26690 completed with status: 'SUCCESS'

@galagam galagam enabled auto-merge (squash) February 3, 2026 18:32
@galagam (Collaborator, Author) commented Feb 4, 2026

@NVIDIA/trt-llm-doc-owners please help review

@galagam galagam merged commit 767b8dc into NVIDIA:main Feb 4, 2026
5 checks passed
SchumiDing pushed a commit to SchumiDing/TensorRT-LLM that referenced this pull request Feb 6, 2026
…infer ssm (NVIDIA#11183)

Signed-off-by: Gal Hubara Agam <96368689+galagam@users.noreply.github.com>

Labels: none yet
Projects: none yet
4 participants