
Add AMDGPU execution provider support#2093

Open
aditya-dl wants to merge 1 commit into microsoft:main from aditya-dl:amd/dev/adilohia/amdgpu_support

Conversation

@aditya-dl

Add AMD GPU (MIGraphX) execution provider support to ONNX Runtime GenAI. The provider is exposed as "amdgpu" to users and maps to the MIGraphX EP in ONNX Runtime internally.

Changes:

  • Create src/amdgpu/session_options.{h,cpp} with an AppendExecutionProvider that tries the V2 plugin path, then falls back to the V1 legacy API
  • Add provider name normalization ("amdgpu" -> "AMDGPU") and register in the dispatch table
  • Enable graph capture for AMDGPU to allow compiled graph reuse during token generation
  • Add static input shape padding (prompt_gen_ flag) so the EP avoids recompilation on varying prompt lengths. Gated behind NeedsStaticInputShapes() to only activate for AMDGPU, not other EPs
  • Fix input_ids padding to handle both int32 and int64 element types, and correct per-row copy for batch_size > 1
  • Fix position_ids padding to prevent out-of-bounds read on next_tokens when tensor shape is padded to max_length for batch_size > 1

Configuration:
"provider_options": [{ "amdgpu": {} }] or: config.append_provider("amdgpu")

Known limitations:

  • Beam search not supported (requires past_present_share_buffer=true which requires num_beams=1)
  • Inputs allocated on CPU; the MIGraphX EP handles CPU<->GPU transfers internally
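
The V2-then-V1 fallback described in the first bullet can be sketched as a try/catch pattern. This is an illustrative mock, not the real OGA code: the function names (AppendWithFallback, and the callables standing in for the V2 plugin path and the V1 legacy AppendExecutionProvider call) are hypothetical.

```cpp
// Sketch of the "try V2 plugin path, fall back to V1 legacy API" pattern.
// The callables stand in for the real ORT session-option calls.
#include <functional>
#include <stdexcept>
#include <string>

// Attempt the V2 plugin registration first; if it throws (e.g. the plugin
// library is not registered), fall back to the V1 legacy append call.
// Returns which path succeeded, for illustration.
inline std::string AppendWithFallback(const std::function<void()>& try_v2,
                                      const std::function<void()>& append_v1) {
  try {
    try_v2();
    return "v2";
  } catch (const std::exception&) {
    append_v1();
    return "v1";
  }
}
```

The point of the ordering is that the plugin (V2) path is preferred when available, while older ORT builds that only expose the legacy MIGraphX registration still work.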

@microsoft-github-policy-service
Contributor

@aditya-dl please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

@aditya-dl force-pushed the amd/dev/adilohia/amdgpu_support branch from c25173d to f185537 on April 21, 2026 00:15

#include "../models/session_options.h"

namespace Generators::AMDGPUExecutionProvider {
Contributor


Why name the provider as AMDGPU instead of MIGraphX?

Author


Moving forward, we're renaming the EP to AMDGPU Execution Provider. Internally, OGA maps "AMDGPU" to "MIGraphX" when communicating with ORT, so there are no ORT-side changes needed. Users configure "amdgpu": {} in their genai_config.json or call config.append_provider("amdgpu").
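
The mapping the reply describes (user-facing "amdgpu" normalized in OGA, "MIGraphX" used only when talking to legacy ORT) can be sketched as two small functions. These names are hypothetical illustrations, not the actual OGA symbols.

```cpp
// Sketch of the provider-name mapping: users write "amdgpu"; OGA canonicalizes
// it, and only the legacy (V1) ORT path sees the old "MIGraphX" name.
#include <algorithm>
#include <cctype>
#include <string>

// Normalize the user-supplied provider key ("amdgpu", "AmdGpu", ...) to the
// canonical name used in the dispatch table; other providers pass through.
inline std::string NormalizeProviderName(const std::string& name) {
  std::string lowered = name;
  std::transform(lowered.begin(), lowered.end(), lowered.begin(),
                 [](unsigned char c) { return std::tolower(c); });
  return lowered == "amdgpu" ? std::string("AMDGPU") : name;
}

// Map the canonical name to what the legacy V1 ORT API expects.
inline std::string ToLegacyOrtName(const std::string& canonical) {
  return canonical == "AMDGPU" ? std::string("MIGraphX") : canonical;
}
```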

Collaborator

@baijumeswani Apr 24, 2026


Why not rename it inside ORT as well? Having multiple names is a problem because multiple components are execution-provider aware (including Foundry Local and the Foundry Local catalog). It would be problematic if the name appeared on the surface as AMDExecutionProvider but a query to OrtGetEpDevices returned MIGraphXExecutionProvider.
If the name needs to change, it should change in ORT as well.

Comment thread src/models/model.cpp
for (int i = 0; i < inputs_.size(); i++) {
std::string input_name = input_names_[i];

if (input_name == "input_ids") {
Contributor


A boolean called is_prompt_ already exists for detecting the prefill stage when updating the input ids.

// For beam search
if (is_prompt_ && state_.params_->search.num_beams > 1) {
  int row_size = static_cast<int>(shape_[1]);
  for (int b = 0; b < shape_[0]; b++) {
    int in_offset = (b / state_.params_->search.num_beams) * row_size;
    int out_offset = b * row_size;
    data_span.subspan(out_offset, row_size).CopyFrom(new_tokens.subspan(in_offset, row_size));
  }
} else {
  data_span.CopyFrom(new_tokens);
}

Can we set the correct input ids when constructed rather than doing it here right before the model runs?

Author


The current approach pads in State::Run() because the padding needs to be coordinated across input_ids, position_ids, and logits: all three must be padded to the same max_length for the shapes to stay consistent. Moving input_ids padding to DefaultInputIDs::Update() would require scattering the padding logic across three separate classes (input_ids, position_inputs, logits) rather than keeping it centralized.
Also, is_prompt_ is currently used for beam search token duplicating - overloading it for static shape padding could be confusing.
That said, if you feel the architectural separation is important, we can refactor to move each padding into its respective Update() function. Let me know your thoughts.

Comment thread src/models/logits.cpp
shape_[1] = new_kv_length;
if (state_.prompt_gen_)
{
shape_[1] = state_.params_->search.max_length;
Contributor


Typically, we have used enabling or disabling graph capture to decide on static vs dynamic shapes. Can we use that rather than introducing new boolean feature flags such as prompt_gen_?

Author


We considered using use_graph_capture but it's also true for DML and WebGPU. Gating the static input padding behind use_graph_capture would cause DML/WebGPU to also pad inputs to max_length during prompt processing, which they don't need. They handle static shapes through their own mechanisms (DML has ep.dml.enable_graph_capture session config, WebGPU checks enableGraphCapture provider option). The prompt_gen_ flag, gated behind NeedsStaticInputShapes(), ensures the padding only activates for AMDGPU without affecting other EPs.
Let me know if my understanding is right.
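
The gating logic this reply argues for can be sketched as a predicate: graph capture alone is too broad a signal, since DML and WebGPU also capture graphs, so padding activates only when the EP itself requires static input shapes. The struct and function names here are hypothetical sketches, not OGA's real API.

```cpp
// Sketch: decide whether to pad inputs to max_length. Graph capture is
// necessary but not sufficient; DML and WebGPU capture graphs yet handle
// static shapes via their own mechanisms, so only AMDGPU opts in.
#include <string>

struct EpInfo {
  std::string name;
  bool use_graph_capture;
};

inline bool NeedsStaticInputShapes(const EpInfo& ep) {
  return ep.name == "AMDGPU";
}

inline bool ShouldPadToMaxLength(const EpInfo& ep) {
  return ep.use_graph_capture && NeedsStaticInputShapes(ep);
}
```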

@aditya-dl force-pushed the amd/dev/adilohia/amdgpu_support branch from f185537 to 16ef498 on April 21, 2026 17:13
keys.emplace_back(option.first.c_str());
values.emplace_back(option.second.c_str());
}
session_options.AppendExecutionProvider("MIGraphX", keys.data(), values.data(), keys.size());
Collaborator


How come the execution provider has a different name when using append execution provider v1 vs v2?

Comment thread src/ort_genai_c.cpp Outdated
Comment on lines +1091 to +1096
// Map AMDGPU to MIGraphX for ORT compatibility
const char* ort_name = registration_name;
if (std::string_view(registration_name) == "AMDGPUExecutionProvider") {
ort_name = "MIGraphXExecutionProvider";
}
Ort::RegisterExecutionProviderLibrary(&(Generators::GetOrtEnv()), ort_name, fs::path(library_path).c_str());
Collaborator


As I mentioned above, this has consequences on other layers in the stack (such as foundry local and foundry local catalog). Please unify the naming.

@aditya-dl
Author

@baijumeswani Thank you for the feedback on naming consistency. We understand the concern.

A larger AMDGPU execution provider effort is underway. We are open-sourcing the plugin EP that will use the AMDGPU name consistently across the stack. This PR is the initial OGA-side integration to support that direction.

The MIGraphX name will be retained in legacy ORT for backward compatibility. Once the plugin EP is available, the naming will be unified end-to-end.

@aditya-dl force-pushed the amd/dev/adilohia/amdgpu_support branch from 16ef498 to 67b74e9 on April 28, 2026 20:14