Add AMDGPU execution provider support #2093
Conversation
Force-pushed from c25173d to f185537
#include "../models/session_options.h"

namespace Generators::AMDGPUExecutionProvider {
Why name the provider as AMDGPU instead of MIGraphX?
Moving forward, we're renaming the EP to AMDGPU Execution Provider. Internally, OGA maps "AMDGPU" to "MIGraphX" when communicating with ORT, so there are no ORT-side changes needed. Users configure "amdgpu": {} in their genai_config.json or call config.append_provider("amdgpu").
Why not rename it inside ORT as well? Having multiple names is a problem because there are multiple components that are execution-provider aware (including Foundry Local and the Foundry Local catalog). It would be problematic if, on the surface, the name appeared to be AMDGPUExecutionProvider but a query to OrtGetEpDevices returned MIGraphXExecutionProvider.
If the name needs to change, it should change in ORT as well.
for (int i = 0; i < inputs_.size(); i++) {
  std::string input_name = input_names_[i];

  if (input_name == "input_ids") {
A boolean called is_prompt_ already exists for detecting the prefill stage when updating the input ids.
onnxruntime-genai/src/models/input_ids.cpp, lines 84 to 94 in f185537
Can we set the correct input ids when constructed rather than doing it here right before the model runs?
The current approach pads in State::Run() because the padding needs to be coordinated across input_ids, position_ids, and logits: all three must be padded to the same max_length for the shapes to stay consistent. Moving the input_ids padding into DefaultInputIDs::Update() would scatter the padding logic across three separate classes (input_ids, position_inputs, logits) rather than keeping it centralized.
Also, is_prompt_ is currently used for beam search token duplication; overloading it for static-shape padding could be confusing.
That said, if you feel the architectural separation is important, we can refactor to move each padding into its respective Update() function. Let me know your thoughts.
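For illustration, a minimal standalone sketch of the padding involved is below. The helper name PadRowsToMaxLength and the pad value are placeholders; the actual change applies the same idea to input_ids, position_ids, and logits inside State::Run() so all three end up at the same max_length.

```cpp
// Sketch only: pad a [batch_size, seq_len] row-major buffer out to
// [batch_size, max_length]. Templated on the element type so the same helper
// covers int32 and int64 input_ids; every name here is illustrative.
#include <algorithm>
#include <cstddef>
#include <vector>

template <typename T>
std::vector<T> PadRowsToMaxLength(const std::vector<T>& values, size_t batch_size,
                                  size_t seq_len, size_t max_length, T pad_value) {
  std::vector<T> padded(batch_size * max_length, pad_value);
  for (size_t row = 0; row < batch_size; ++row) {
    // Per-row copy so each sequence lands in its own padded slice when batch_size > 1.
    std::copy(values.begin() + row * seq_len,
              values.begin() + (row + 1) * seq_len,
              padded.begin() + row * max_length);
  }
  return padded;
}
```

Calling one helper like this from State::Run() for each of the three tensors is what "centralized" means above; pushing the logic into each class's Update() would duplicate it in three places.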
shape_[1] = new_kv_length;
if (state_.prompt_gen_) {
  shape_[1] = state_.params_->search.max_length;
Typically, we have used enabling or disabling graph capture to decide on static vs dynamic shapes. Can we use that rather than introducing new boolean feature flags such as prompt_gen_?
We considered using use_graph_capture, but it is also enabled for DML and WebGPU. Gating the static input padding behind use_graph_capture would cause DML/WebGPU to also pad inputs to max_length during prompt processing, which they don't need; they handle static shapes through their own mechanisms (DML has the ep.dml.enable_graph_capture session config, WebGPU checks the enableGraphCapture provider option). The prompt_gen_ flag, gated behind NeedsStaticInputShapes(), ensures the padding only activates for AMDGPU without affecting other EPs.
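As a rough sketch of that gating (the enum and provider lookup below are stand-ins, not the real OGA types):

```cpp
// Illustrative only: static-shape padding is keyed off a dedicated query instead
// of use_graph_capture, so DML/WebGPU keep their existing graph-capture handling.
enum class ProviderKind { CPU, CUDA, DML, WebGPU, AMDGPU };

bool NeedsStaticInputShapes(ProviderKind provider) {
  // Only the AMDGPU (MIGraphX) EP needs inputs padded to max_length so the
  // compiled graph can be reused across prompt lengths.
  return provider == ProviderKind::AMDGPU;
}

// Mirroring the diff above; prompt_gen_ would only be set when
// NeedsStaticInputShapes() returned true:
//   shape_[1] = new_kv_length;
//   if (state_.prompt_gen_)
//     shape_[1] = state_.params_->search.max_length;
```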
Let me know if my understanding is right.
Force-pushed from f185537 to 16ef498
  keys.emplace_back(option.first.c_str());
  values.emplace_back(option.second.c_str());
}
session_options.AppendExecutionProvider("MIGraphX", keys.data(), values.data(), keys.size());
How come the execution provider has a different name when using append execution provider v1 vs v2?
// Map AMDGPU to MIGraphX for ORT compatibility
const char* ort_name = registration_name;
if (std::string_view(registration_name) == "AMDGPUExecutionProvider") {
  ort_name = "MIGraphXExecutionProvider";
}
Ort::RegisterExecutionProviderLibrary(&(Generators::GetOrtEnv()), ort_name, fs::path(library_path).c_str());
As I mentioned above, this has consequences on other layers in the stack (such as foundry local and foundry local catalog). Please unify the naming.
@baijumeswani Thank you for the feedback on naming consistency; we understand the concern. A larger AMDGPU execution provider effort is underway: we are open-sourcing a plugin EP that will use the AMDGPU name consistently across the stack, and this PR is the initial OGA-side integration to support that direction. The MIGraphX name will be retained in legacy ORT for backward compatibility, and once the plugin EP is available the naming will be unified end-to-end.
Add AMD GPU (MIGraphX) execution provider support to ONNX Runtime
GenAI. The provider is exposed as "amdgpu" to users and maps to the
MIGraphX EP in ONNX Runtime internally.
Changes:
- Create src/amdgpu/session_options.{h,cpp} with AppendExecutionProvider
that tries the V2 plugin path, then falls back to the V1 legacy API (see
the sketch after this summary)
- Add provider name normalization ("amdgpu" -> "AMDGPU") and register
in the dispatch table
- Enable graph capture for AMDGPU to allow compiled graph reuse
during token generation
- Add static input shape padding (prompt_gen_ flag) so the EP avoids
recompilation on varying prompt lengths. Gated behind
NeedsStaticInputShapes() to only activate for AMDGPU, not other EPs
- Fix input_ids padding to handle both int32 and int64 element types,
and correct per-row copy for batch_size > 1
- Fix position_ids padding to prevent out-of-bounds read on
next_tokens when tensor shape is padded to max_length for
batch_size > 1
Configuration:
"provider_options": [{ "amdgpu": {} }]
or: config.append_provider("amdgpu")
Known limitations:
- Beam search not supported (requires past_present_share_buffer=true
which requires num_beams=1)
- Inputs allocated on CPU; the MIGraphX EP handles CPU<->GPU
transfers internally
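For context on the V1 vs V2 naming question above, the fallback order described in the change list might look roughly like this sketch. TryRegisterPluginEp is a purely hypothetical stand-in for the V2 plugin registration path, and SessionOptionsLike stands in for the session-options wrapper used in this repo; the fallback line reuses the V1 call quoted earlier in the review.

```cpp
// Sketch of the V2-then-V1 ordering only; all names below are illustrative.
#include <string>
#include <utility>
#include <vector>

template <typename SessionOptionsLike>
bool TryRegisterPluginEp(SessionOptionsLike& session_options,
                         const std::vector<std::pair<std::string, std::string>>& options);  // hypothetical V2 path

template <typename SessionOptionsLike>
void AppendAmdGpuProvider(SessionOptionsLike& session_options,
                          const std::vector<std::pair<std::string, std::string>>& options) {
  if (TryRegisterPluginEp(session_options, options))
    return;  // V2 plugin path succeeded

  // V1 legacy fallback: the generic AppendExecutionProvider call from this PR,
  // using the ORT-side name "MIGraphX" that "amdgpu" is mapped to.
  std::vector<const char*> keys, values;
  for (const auto& option : options) {
    keys.emplace_back(option.first.c_str());
    values.emplace_back(option.second.c_str());
  }
  session_options.AppendExecutionProvider("MIGraphX", keys.data(), values.data(), keys.size());
}
```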
Force-pushed from 16ef498 to 67b74e9