
Centralize cuda_arch capability definition#202

Open
Copilot wants to merge 6 commits into master from copilot/sub-pr-199

Conversation

Contributor

Copilot AI commented Mar 12, 2026

Wait for:

cuda_arch=75 was duplicated across 5 package specs in cuda/spack.yaml and tf/spack.yaml, making it hard to retarget GPU architecture.

Changes

  • spack-environment/packages.yaml: Added cuda_arch=75 as a conditional requirement in packages: all: require with when: '+cuda', so it applies to all CUDA-enabled packages without affecting CPU-only packages.
  • cuda/spack.yaml and tf/spack.yaml: Drop inline cuda_arch=75 from acts, arrow, celeritas, py-torch, and py-tensorflow specs.
```yaml
# spack-environment/packages.yaml
packages:
  all:
    require:
    - when: '+cuda'
      any_of: [cuda_arch=75, '@:']
```

To target a different GPU architecture (currently Compute Capability 7.5 / Turing: T4, RTX 2xxx, Quadro RTX), change cuda_arch=75 in this single location.
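For example, retargeting to Ampere-class GPUs (Compute Capability 8.6, e.g. RTX 3xxx / A40) would then be a one-line edit in the same file (a sketch; only the value changes):

```yaml
# spack-environment/packages.yaml
packages:
  all:
    require:
    - when: '+cuda'
      any_of: [cuda_arch=86, '@:']
```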



Copilot AI changed the title from [WIP] Address feedback on centralizing cuda_arch capability definition to Centralize cuda_arch capability definition Mar 12, 2026
Contributor

@wdconinc wdconinc left a comment


Looks sensible. Maybe there's a risk this will enable cuda_arch and therefore +cuda for packages where we currently don't explicitly enable +cuda, but would that be a bad thing?

Base automatically changed from pr/arrow_cuda to master March 12, 2026 23:12
@wdconinc wdconinc marked this pull request as ready for review March 12, 2026 23:13
Copilot AI review requested due to automatic review settings March 12, 2026 23:13
Contributor

Copilot AI left a comment


Pull request overview

This PR centralizes the CUDA compute capability setting for the Spack CUDA-related environments by introducing a shared cuda_arch.yaml include and removing per-spec cuda_arch=75 pins.

Changes:

  • Add a new spack-environment/cuda_arch.yaml and include it from CUDA/TensorFlow environments.
  • Remove explicit cuda_arch=75 from CUDA-enabled specs in spack-environment/cuda/spack.yaml.
  • Remove explicit cuda_arch=75 from the TensorFlow CUDA spec in spack-environment/tf/spack.yaml.
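The wiring implied by these changes could look roughly like the following (the actual contents of cuda_arch.yaml and the relative include path are not shown in this PR, so both are assumptions):

```yaml
# spack-environment/cuda_arch.yaml (assumed contents)
packages:
  all:
    require:
    - when: '+cuda'
      any_of: [cuda_arch=75, '@:']
```

```yaml
# spack-environment/cuda/spack.yaml (assumed include stanza)
spack:
  include:
  - ../cuda_arch.yaml
```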

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| spack-environment/tf/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on TensorFlow. |
| spack-environment/cuda/spack.yaml | Includes centralized CUDA arch config; drops per-spec cuda_arch=75 on multiple CUDA specs. |
| spack-environment/cuda_arch.yaml | New shared config intended to define CUDA architecture in one place. |


@wdconinc
Contributor

wdconinc commented Apr 4, 2026

Appears to propagate the cuda_arch correctly now, e.g. https://github.com/eic/containers/actions/runs/23980557567/job/69944240377?pr=202#step:15:581.

@wdconinc wdconinc enabled auto-merge (squash) April 4, 2026 14:47
Copilot AI and others added 3 commits April 13, 2026 22:12
Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: wdconinc <4656391+wdconinc@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 14, 2026 03:12
@wdconinc wdconinc force-pushed the copilot/sub-pr-199 branch from a6e30a4 to 84f177a Compare April 14, 2026 03:12
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.

Comment on lines 21 to 25

```yaml
    - build_system=cmake
    - build_type=Release
    - when: '+cuda'
      spec: cuda_arch=75
    require:
```

Copilot AI Apr 14, 2026


The PR description says cuda_arch=75 is added as a conditional requirement (and the file header notes the preferred require: any_of: [...] pattern for optional variants), but the change adds a conditional spec: cuda_arch=75 entry under packages:all:prefer and does not mirror it in packages:all:require. This may not have the intended effect and diverges from the established pattern used for other global constraints (e.g., build_system / build_type use require: any_of: [...]). Consider moving/duplicating this into require using the existing any_of: [cuda_arch=75, '@:'] style so it applies consistently while remaining a no-op for packages that don't support cuda_arch.
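The distinction drawn here can be sketched as follows (surrounding keys assumed; `prefer` expresses a soft preference the concretizer may override, while `require` is a hard constraint):

```yaml
packages:
  all:
    # As currently written in the PR: a conditional preference
    prefer:
    - when: '+cuda'
      spec: cuda_arch=75
    # As suggested: a hard requirement that is a no-op ('@:' matches
    # any version) for packages without a cuda_arch variant
    require:
    - when: '+cuda'
      any_of: [cuda_arch=75, '@:']
```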

Comment on lines +10 to +13

```diff
   - acts +cuda
   - actsvg
   - afterburner
-  - arrow +cuda cuda_arch=75
+  - arrow +cuda
```

Copilot AI Apr 14, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This environment still hard-codes cuda_arch=75 for g4adept later in the same specs: list (g4adept +cuda cuda_arch=75). If the goal is to centralize the CUDA architecture, consider removing the remaining inline cuda_arch from g4adept as well (or documenting why it needs to stay explicitly set).
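Dropping that remaining pin would, for example, be a one-line change in the specs list (sketch):

```diff
-  - g4adept +cuda cuda_arch=75
+  - g4adept +cuda
```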
