Conversation

@roystgnr
Member

@roystgnr roystgnr commented Jan 4, 2024

I'm still trying to figure out how to make this easy to use, but I'd like to at least test that what I've got so far still passes CI.

@lindsayad
Member

Does "do not merge" also mean "do not review"?

@moosebuild

Job Coverage on 7c8c1d1 wanted to post the following:

Coverage

|        | ffa3a3 (Total) | #3759 7c8c1d (Total) | +/-    | New    |
|--------|----------------|----------------------|--------|--------|
| Rate   | 62.35%         | 62.36%               | +0.02% | 98.67% |
| Hits   | 68275          | 68328                | +53    | 74     |
| Misses | 41235          | 41236                | +1     | 1      |

Diff coverage report

Full coverage report

This comment will be updated on new commits.

@roystgnr
Member Author

roystgnr commented Jan 4, 2024

> Does "do not merge" also mean "do not review"?

It's already bad enough to need fixing, huh? ;-)

Nah, please jump in as you see fit. I've got more I'm wanting to add here, but if there are flaws in the foundation then it's probably better to fix them before I put more layers on top.

Member

@lindsayad lindsayad left a comment


Just looking at the header this seems like a great foundation to me

@roystgnr roystgnr force-pushed the mesh_preparation_finer branch from 7c8c1d1 to fc6ee94 Compare June 21, 2024 15:16
@roystgnr roystgnr force-pushed the mesh_preparation_finer branch from fc6ee94 to e39fc75 Compare December 5, 2025 21:17
@roystgnr
Member Author

roystgnr commented Dec 5, 2025

This might actually be in shape to merge soon. My remaining concerns:

  1. The additional internal testing I've added might trigger dbg and/or devel failures downstream, in tests of code that doesn't properly prepare a mesh before solving (so it'll fail the new assertions) but either doesn't rely on the unprepared data or just compares to a gold solution that mis-relied on it (so it's not already failing tests). I'll add every recipe I think might be affected to the CI here.

  2. The additional internal testing I've added might be too expensive in dbg mode, where it tests preparation validity before every assembly. I think we can afford that (prepare_for_use() is much cheaper if we don't repartition, which we don't when checking validity). If we see CI timeouts then I'll try moving my testing from assembly methods to solve() methods. If we still see timeouts after that then I'll remove the intensive testing entirely for now.

@moosebuild

moosebuild commented Dec 11, 2025

Job Coverage, step Generate coverage on 7143591 wanted to post the following:

Coverage

|        | 502d42 (Total) | #3759 714359 (Total) | +/-    | New    |
|--------|----------------|----------------------|--------|--------|
| Rate   | 65.30%         | 65.33%               | +0.02% | 79.00% |
| Hits   | 77566          | 77751                | +185   | 331    |
| Misses | 41209          | 41270                | +61    | 88     |

Diff coverage report

Full coverage report

Warnings

  • New line coverage rate 79.00% is less than the suggested 90.0%

This comment will be updated on new commits.

This wasn't failing the test, because the kernel here didn't need the
unprepared data, but we'd like to simply assert that we're not
assembling or solving on unprepared meshes.

Otherwise the mesh still thinks it's fully prepared even when it isn't,
and we can't assert valid_is_prepared() on it.

We need things like neighbor pointers, and in general we'd like to
assert that we're never assembling or solving on unprepared meshes.

We're trying to allow meshes to keep track of "half-prepared" status,
for the sake of efficiency in large chains or webs of mesh generators,
but at the very least user assembly kernels should be able to rely on a
mesh being prepared for use - they're what we mean by "use"!

We can rebuild point locators or element caches, but we don't want to
use an already-built invalid cache by accident.

We're making that pure virtual now (at least in non-deprecated builds).

This is actually a big behavior change, considering it led to failed
assertions in a half dozen of our own examples.  Let's give users some
time to get up to speed.

We can have those on a ghosted boundary element.

This has been buggy since I wrote it in 2008.  And yet it took a
double-refinement test on a distributed mesh plus an extra layer of
dbg-mode internal consistency checks to catch the bug.

If we're skipping partitioning, we shouldn't mark ourselves as
partitioned if we're not, and we shouldn't expect that we're marked as
partitioned.

I haven't actually seen this fail yet, but it does seem like the sort of
thing we should be checking.

I went to double-check that this was the macro I wanted, and the
behavior did match my recollection but didn't match my comment.
@roystgnr roystgnr force-pushed the mesh_preparation_finer branch from cc7fd1a to 9e6f0db Compare January 15, 2026 19:28
Figuring out neighbor pointers in distributed infinite element
generation is hard enough that I want to just make find_neighbors do
most of the work, but it'll be preliminary work that we can't safely
assert parallel consistency on.

We never do parallel sweeps with infinite elements enabled in CI, but I
did one and after a little work it passes.

Otherwise the nonlinear solver can fail to converge, depending on
parallel n_processors() and partitioning.

We're going to need an extra find_neighbors() call after mesh
redistribution to handle a corner case that unpacking neighbor
information doesn't, so let's make that call as cheap as possible.

This fixed a corner case for me, IIRC with unpacking of element sets
where we had a non-remote neighbor link whose backlinks were fine but
whose descendants' backlinks weren't.

There's a corner case where a processor can receive 2 ancestor elements
that are neighbors, but from source processors each of which only sees
one of the two semilocally.  This is necessary to fix that.
This was referenced Jan 29, 2026
