
Conversation

@jiqing-feng
Contributor

Since calling contiguous() can have a negative impact on CPU performance, ResNet should only use it in training mode.
Fix #12975

Signed-off-by: jiqing-feng <[email protected]>
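
For context, a minimal, purely illustrative sketch of the kind of gating described above; `ResBlock` and its layers are made up for this example and are not the actual diffusers ResNet code:

```python
import torch
import torch.nn as nn


class ResBlock(nn.Module):
    """Toy residual block; illustrative only, not diffusers' ResnetBlock2D."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x)
        # Only force a contiguous memory layout while training; skipping the
        # extra copy in eval mode avoids the reported CPU slowdown.
        if self.training:
            h = h.contiguous()
        return x + h


block = ResBlock(4).eval()
with torch.no_grad():
    out = block(torch.randn(1, 4, 8, 8))  # no contiguous() call in eval mode
```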
Member

@sayakpaul left a comment


Thanks!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@sayakpaul
Member

The failing test passes on main.

But with this PR, it fails. Could you please check?

Signed-off-by: jiqing-feng <[email protected]>
@jiqing-feng
Contributor Author

jiqing-feng commented Jan 15, 2026

The behaviour change has little impact on some models' precision; relaxing the tolerance from 1e-4 to 2e-4 should be enough.
Hi @sayakpaul. Please re-run the CI. Thanks!
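
To make the tolerance change concrete, here is a hedged, self-contained illustration with made-up tensors (not the actual test) of what relaxing `atol`/`rtol` from 1e-4 to 2e-4 means:

```python
import torch

torch.manual_seed(0)
expected = torch.randn(4, 8)
# Simulate a small, uniform numerical drift of 1.5e-4 caused by the layout change.
actual = expected + 1.5e-4

# The old tolerance can reject this drift for small-magnitude elements:
# torch.testing.assert_close(actual, expected, atol=1e-4, rtol=1e-4)

# The relaxed tolerance accepts it while still bounding the absolute error at 2e-4.
torch.testing.assert_close(actual, expected, atol=2e-4, rtol=2e-4)
```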

@jiqing-feng requested a review from sayakpaul, January 15, 2026 05:07
Signed-off-by: jiqing-feng <[email protected]>
@sayakpaul
Member

But we need the tests to pass on our CI.

@jiqing-feng
Contributor Author

I will check the failing tests.

@jiqing-feng
Contributor Author

Hi @sayakpaul. The failed test seems to be a network issue. Please check it. Thanks!

@jiqing-feng
Contributor Author

Hi @sayakpaul. Please re-trigger the CI. Thanks!

@jiqing-feng
Contributor Author

jiqing-feng commented Jan 16, 2026

It's so weird: the CI already passed on commit aa65c5c but failed after updating the branch.

@sayakpaul, do you have any clue about it?

@jiqing-feng
Contributor Author


Hi @sayakpaul. Would you please re-run the CI to see if the error disappears? Thanks!

@jiqing-feng
Contributor Author

I see the LoRA tests also failed in #13003.

 @require_hf_hub_version_greater("0.26.5")
 @require_transformers_version_greater("4.47.1")
-def test_save_load_dduf(self, atol=1e-4, rtol=1e-4):
+def test_save_load_dduf(self, atol=1e-3, rtol=1e-3):
Member


We shouldn't change it at the main test level. If a model test is failing, we should override the method in the corresponding test class and relax the tolerance there.
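
A hedged sketch of that suggestion; the class names below are placeholders, not necessarily the exact diffusers test classes:

```python
class PipelineTesterMixin:
    def test_save_load_dduf(self, atol=1e-4, rtol=1e-4):
        # Shared implementation keeps the stricter default tolerance.
        ...


class AnimateDiffPipelineFastTests(PipelineTesterMixin):
    def test_save_load_dduf(self, atol=1e-3, rtol=1e-3):
        # Relax the tolerance only for the affected pipeline by overriding
        # the method and delegating to the shared implementation.
        super().test_save_load_dduf(atol=atol, rtol=rtol)
```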



Development

Successfully merging this pull request may close these issues.

AnimateDiffPipeline performance regression on CPU.
