Update fe forcing #340

Merged
alperaltuntas merged 14 commits into NCAR:dev/ncar from mnlevy1981:update_fe_forcing
Mar 10, 2026
Conversation

@mnlevy1981 (Collaborator)
marbl-ecosys/MARBL#464 updates MARBL's iron sediment forcing scheme, which will change what forcing fields MOM6 needs to provide to MARBL

Instead of just fesedflux and feventflux, updates to MARBL from UCI now add a
fesedfluxred forcing field.
Added an optional num_pass argument to MOM_initialize_tracer_from_Z(), which is
passed down to horiz_interp_and_extrap_tracer() and eventually to
fill_miss_2d() [where the optional argument was already part of the API but was
never used]. I tested this update by setting num_pass=0 for the MARBL tracers
and verifying that the smoothing step was not run, but that was just a test to
see whether the smoothing was responsible for some unexpected tracer values in
the MARBL initial conditions -- in general, we do want to smooth the MARBL
tracers after they are initialized.
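The optional-argument threading described above can be sketched as follows. This is a minimal illustration, not the actual MOM6 signatures: the real routines take many more arguments, and the dummy names here are placeholders.

```fortran
! Sketch: forwarding an optional num_pass argument down a call chain.
! Signatures are simplified; only num_pass and fill_miss_2d come from the PR.
subroutine MOM_initialize_tracer_from_Z(tr, num_pass)
  real,    intent(inout)        :: tr(:,:,:)
  integer, intent(in), optional :: num_pass  ! number of smoothing passes

  ! In Fortran, an absent optional argument may be forwarded directly to
  ! another optional dummy; it remains "not present" in the callee.
  call horiz_interp_and_extrap_tracer(tr, num_pass=num_pass)
end subroutine MOM_initialize_tracer_from_Z

subroutine horiz_interp_and_extrap_tracer(tr, num_pass)
  real,    intent(inout)        :: tr(:,:,:)
  integer, intent(in), optional :: num_pass

  call fill_miss_2d(tr(:,:,1), num_pass=num_pass)
end subroutine horiz_interp_and_extrap_tracer

subroutine fill_miss_2d(field, num_pass)
  real,    intent(inout)        :: field(:,:)
  integer, intent(in), optional :: num_pass
  integer :: npass

  npass = 4                          ! hypothetical default pass count
  if (present(num_pass)) npass = num_pass
  ! ... run npass smoothing iterations (num_pass=0 skips smoothing) ...
end subroutine fill_miss_2d
```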
There was an issue where smoothing the MARBL tracer initial conditions led to
inconsistent tracer values for tracers associated with MARBL's autotrophs.
For each autotroph, if any tracer (C, N, P, etc.) is 0, we want all of that
autotroph's tracers to be 0... but the smoothing algorithm in
MOM_initialize_tracer_from_Z() would sometimes leave one tracer at 0 while
introducing a nonzero value in a different tracer at the same location. The
autotroph_tracer_consistency_enforce() subroutine on the MARBL interface
enforces this consistency.

This subroutine is called whenever any MARBL tracer is initialized with
MOM_initialize_tracer_from_Z(); if all the tracers are read in from a restart
file, it is skipped. When it runs, the following message is written to the
log:

Enforcing consistency across autotroph tracer initial conditions
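A minimal sketch of the consistency rule described above, for illustration only: the array layout, argument names, and loop structure are assumptions, not the actual subroutine in the MARBL interface.

```fortran
! Sketch: wherever any one of an autotroph's tracers is zero, zero them all.
! The tracer-index layout (last dimension) and auto_inds argument are
! illustrative assumptions.
subroutine autotroph_tracer_consistency_enforce(tr, auto_inds)
  real,    intent(inout) :: tr(:,:,:,:)   ! i, j, k, tracer index
  integer, intent(in)    :: auto_inds(:)  ! indices (C, N, P, ...) for one autotroph
  integer :: i, j, k, n

  do k = 1, size(tr,3)
    do j = 1, size(tr,2)
      do i = 1, size(tr,1)
        ! If any of this autotroph's tracers is exactly zero at this point,
        ! zero out the full set so the autotroph is consistently absent.
        if (any(tr(i,j,k,auto_inds) == 0.)) then
          do n = 1, size(auto_inds)
            tr(i,j,k,auto_inds(n)) = 0.
          enddo
        endif
      enddo
    enddo
  enddo
end subroutine autotroph_tracer_consistency_enforce
```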
Added autotroph_tracer_consistency_enforce() to the dummy MARBL interface so
MARBL_tracers.F90 can still be built without MARBL.
MARBL_tracers_stock() was writing a list of all tracers from all processors,
filling up cesm.log unnecessarily
We decided to update the MARBL interface instead of playing with how MOM6
smooths the MARBL ICs, so we don't need to modify src/framework or
src/initialization in this PR.

Also changed some ".eq." to "==" in MARBL_tracers.F90
@mnlevy1981 (Collaborator, Author)

With ESCOMP/MOM_interface#231 and ESCOMP/FMS#7, this passes aux_mom_MARBL and pr_mom (the latter includes baseline comparisons to baselines generated with the head of dev/ncar); this is ready for final review / merging

@alperaltuntas (Member) left a comment

Below is a minor suggestion. Aside from that, have you run the DIMCSL test? It tends to time out because reading in the JRA datastreams takes so long. Unless you were able to run it successfully, I suggest running DIMCSL tests with CORE2, both with and without MARBL.

character(len=200) :: feventflux_file !< name of [netCDF] file containing iron vent flux
character(len=200) :: fesedflux_file !< name of [netCDF] file containing iron sediment flux
character(len=200) :: fesedfluxred_file !< name of [netCDF] file containing reduced iron sediment flux
@alperaltuntas (Member)

Optional: While it's probably unlikely to hit the character limit of 200, I suggest using deferred-length character strings, which are more robust and memory-efficient.
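For reference, the deferred-length alternative would look something like this (the file names assigned below are hypothetical, used only to show that the length adjusts automatically):

```fortran
! Deferred-length (allocatable) character strings: the length is set at
! assignment time, so no fixed limit such as len=200 is needed.
character(len=:), allocatable :: fesedflux_file
character(len=:), allocatable :: fesedfluxred_file

fesedflux_file    = "fesedflux_input.nc"     ! hypothetical file name
fesedfluxred_file = "fesedfluxred_input.nc"  ! storage resizes on assignment
```

Deferred-length components require Fortran 2003 allocatable-string semantics, which all compilers used for MOM6/CESM support.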

@mnlevy1981 (Collaborator, Author)

@alperaltuntas -- I opened #410 to note that it would be nice to switch to deferred-length character strings, but I'd like to put that off until a future PR. I also forgot to mention that the DIMCSL tests in pr_mom did not have enough walltime; I have resubmitted two tests (G_JRA and G1850MARBL_JRA) with an 8-hour walltime request, but they've been in the queue all day. Hopefully they'll run over the weekend and I can report back on Monday. Both are running on my branch; neither will have a baseline to compare to, but the rest of the pr_mom tests show that non-MARBL compsets are not changing answers.

@mnlevy1981 (Collaborator, Author)

DIMCSL tests ran over the weekend; the good news is that the G_JRA test passed:

PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_1_Tp_base
PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_2_Lp_base
PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_3_Hp_base
PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_4_Zp_base
PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_5_Rp_base
PASS DIMCSL_Ld1.TL319_t232.G_JRA.derecho_intel COMPARE_dimscale_6_Qp_base

Unfortunately, there are a few issues with the MARBL compset:

FAIL DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_1_Tp_base
FAIL DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_2_Lp_base
FAIL DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_3_Hp_base
PASS DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_4_Zp_base
PASS DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_5_Rp_base
PASS DIMCSL_Ld1.TL319_t232.G1850MARBL_JRA.derecho_intel COMPARE_dimscale_6_Qp_base

@alperaltuntas and @klindsay28 -- I think it's important that we fix this before the CESM3 release, but I'm okay creating a new issue ticket about this test failure and tackling it in a future PR if that's okay with the two of you. Otherwise I'll fix it in this PR.

@mnlevy1981 (Collaborator, Author)

I am currently running a DIMCSL test with MARBL out of the current dev/ncar; I added DEBUG=TRUE to user_nl_mom for the control case as well as the three failing tests. My current thinking is that we should wait to see these test results -- if the failure is already on dev/ncar, we can fix it later (and I may have checksum output that helps find the problem(s)), but if it was introduced in this PR, we should fix it here as well.

@alperaltuntas (Member)

Thanks @mnlevy1981. My experience is that CORE2 DIMCSL tests run much faster than JRA tests, since datm doesn't have to slowly read through the relatively large JRA datastreams, but I understand if CORE2 testing is not meaningful in your case.

@mnlevy1981 (Collaborator, Author)

@alperaltuntas I've verified that the DIMCSL test is failing for MARBL compsets in dev/ncar, so it is not related to this PR. I'll open a separate issue ticket to discuss details of that failure, but I think this one is okay to bring in as-is, and we'll fix the dimensional scaling in a future PR.

@alperaltuntas (Member)

@mnlevy1981 Seems like there is at least one build failure in CI tests, but otherwise, LGTM.

CI was complaining about dust_ratio**-0.9 ("Unary operator following arithmetic
operator (use parentheses)"), so I took the message's advice and changed it to
dust_ratio**(-0.9).
@mnlevy1981 (Collaborator, Author)

> @mnlevy1981 Seems like there is at least one build failure in CI tests, but otherwise, LGTM.

Fixed! Sorry about that; the first few times I looked through the failed test log I had trouble finding the actual cause of the failure (gfortran wants dust_ratio**(-0.9) instead of dust_ratio**-0.9). Tests are passing now!

@klindsay28 (Collaborator) left a comment

LGTM

@alperaltuntas alperaltuntas merged commit e8b17da into NCAR:dev/ncar Mar 10, 2026
52 checks passed