
fix the llm test failing issue + add back cu126/cu128 #4118

Merged
lanluo-nvidia merged 3 commits into release/2.11 from lluo/fix_2.11_issue_1 on Mar 5, 2026
Conversation

@lanluo-nvidia
Collaborator

Description

AttributeError: module 'torch._inductor.fx_passes' has no attribute 'reinplace'

Fixes # (issue)

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

@meta-cla meta-cla bot added the cla signed label Mar 5, 2026
@lanluo-nvidia lanluo-nvidia changed the base branch from main to release/2.11 March 5, 2026 18:22
@github-actions github-actions bot added labels: component: lowering (Issues re: The lowering / preprocessing passes), component: core (Issues re: The core compiler), component: api [Python] (Issues re: Python API), component: dynamo (Issues relating to the `torch.compile` or `torch._dynamo.export` paths) Mar 5, 2026

@github-actions github-actions bot left a comment


There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/passes/_FakeTensorUpdater.py	2026-03-05 18:38:18.298241+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/lowering/passes/_FakeTensorUpdater.py	2026-03-05 18:38:55.444799+00:00
@@ -18,10 +18,11 @@
from torch.utils._ordered_set import OrderedSet

# Try to import reinplace module - may not be available in all PyTorch builds
try:
    import importlib
+
    reinplace_module = importlib.import_module("torch._inductor.fx_passes.reinplace")
    _generalized_scatter = getattr(reinplace_module, "_generalized_scatter", None)
except Exception:
    _generalized_scatter = None

@@ -165,11 +166,14 @@
            # or operator.getitem which is used when returning multiple
            # tensors from an op.
            return node.op == "call_function" and (
                isinstance(node.target, torch._ops.OpOverload)
                or node.target is operator.getitem
-                or (_generalized_scatter is not None and node.target is _generalized_scatter)
+                or (
+                    _generalized_scatter is not None
+                    and node.target is _generalized_scatter
+                )
            )

        to_process = OrderedSet[int]()
        for node in self.graph.nodes:
            # NB: Be very careful about skipping nodes (via continues) here


@lanluo-nvidia lanluo-nvidia changed the title fix the llm test failing issue fix the llm test failing issue + add back cu126/cu128 Mar 5, 2026
@lanluo-nvidia lanluo-nvidia marked this pull request as ready for review March 5, 2026 21:35
@lanluo-nvidia lanluo-nvidia merged commit e11eaa3 into release/2.11 Mar 5, 2026
121 checks passed

Labels

  • cla signed
  • component: api [Python] Issues re: Python API
  • component: core Issues re: The core compiler
  • component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths
  • component: lowering Issues re: The lowering / preprocessing passes

1 participant