[MITgcm-support] Controlling MNC output

Gus Correa gus at ldeo.columbia.edu
Mon Mar 26 13:18:15 EDT 2018

Hi Taylor,

Piggybacking on Jody's suggestions:

1) My understanding is that deltaTClock (in seconds)
controls the model time step
(other than the momentum time step, which deltaTMom controls).
It is the model's fundamental heartbeat,
and in most setups it boils down to the same value as deltaT,
deltaTtracer, etc.
(MITgcm insiders, please correct me if I am wrong.)
deltaTClock does not control the output frequency.

2) You could run the spinup phase first,
with a large value of dumpFreq (in seconds),
say, greater than or equal to the model endTime, to avoid output
during spinup (if that is really what you want).
[Having some output during spinup would let you QC the spinup.]
The endTime or nTimeSteps in the data namelist file
will tell the model when to stop.

Then run again, picking up at the end of the spinup phase,
this time with a smaller dumpFreq,
to output what you need post-spinup.

You need to update nIter0 (in data)
for this second phase of the run, to match the final
iteration of the spinup phase.
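For concreteness, the relevant PARM03 entries in data might look like
this for the two phases (the numbers are purely illustrative; a 1-day
time step and a 3600-step spinup are assumptions, not a recommendation):

```
# Phase 1, spinup (no snapshot output until endTime):
 &PARM03
  nIter0=0,
  deltaT=86400.,
  endTime=311040000.,    # 3600 days at deltaT=86400 s (~10 years)
  dumpFreq=311040000.,   # >= endTime, so nothing is dumped during spinup
 &

# Phase 2, pick up from the spinup and dump monthly:
 &PARM03
  nIter0=3600,           # final iteration of the spinup (endTime/deltaT)
  deltaT=86400.,
  endTime=622080000.,
  dumpFreq=2592000.,     # 30 days, in seconds
 &
```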

The same technique can be applied if you're using the
diagnostics package, by changing the values in data.diagnostics
between the two phases.
BTW, the diagnostics package allows finer-grained control.
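As a sketch of the phase-shift idea Jody mentions, a data.diagnostics
entry along these lines would delay the first write until well into the
run (the field, file name, and numbers are placeholders; check the
frequency/timePhase conventions against your MITgcm version):

```
 &DIAGNOSTICS_LIST
  fields(1,1)  = 'THETA   ',
  fileName(1)  = 'diagsT',
  frequency(1) = 2592000.,    # write every 30 days (seconds)
  timePhase(1) = 311040000.,  # but only starting ~10 years into the run
 &
```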

3) In my experience with the MITgcm,
to avoid having jobs run for extremely long periods of time
(sometimes doing the wrong thing),
and to allow more flexibility for changes such
as the one you mention,
one can handle these things in at least two ways, both somewhat laborious:

A) If you're running on a cluster with a job scheduler, you could
write a wrapper job submission script
(in bash, tcsh, perl, python, whatever) that:
A1) updates nIter0 (in data) after each model run ends (i.e., after
mpirun returns), and
A2) then resubmits the job (with "qsub $0",
or the equivalent command for the job scheduler you use).
Most job schedulers are configured to allow jobs to resubmit themselves 
this way.

B) If you're running on a workstation or standalone machine without
a job scheduler, you could do the steps in item A) manually.
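A minimal sketch of the wrapper in A), in bash. The launch and resubmit
lines are placeholders for whatever your cluster uses, and the pickup
numbering assumes the default pickup.NNNNNNNNNN.data file pattern:

```shell
#!/bin/bash
# Sketch of a self-resubmitting wrapper (item A above).
# The model launch line and the scheduler command are placeholders --
# adapt them to your executable and queueing system.

advance_niter0() {
    # Set nIter0 in the "data" namelist (in directory $1) to the
    # iteration number of the newest pickup file, so that the next
    # run segment restarts where this one stopped.
    local rundir=$1
    local last_iter
    last_iter=$(ls "$rundir"/pickup.*.data 2>/dev/null \
        | sed 's/.*pickup\.0*\([0-9][0-9]*\)\.data/\1/' \
        | sort -n | tail -1)
    [ -n "$last_iter" ] || return 0   # no pickup yet: leave data alone
    sed -i "s/^ *nIter0 *=.*/ nIter0=${last_iter},/" "$rundir/data"
}

# Typical use, after one model segment finishes:
#   mpirun -np 4 ./mitgcmuv    # run one segment (placeholder launch line)
#   advance_niter0 .           # point nIter0 at the newest pickup
#   qsub "$0"                  # resubmit yourself (sbatch for Slurm, etc.)
```

Between resubmissions you can also swap in a different data or
data.diagnostics, which is how the two-phase (spinup / post-spinup)
scheme in item 2) can be automated.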

This way you can gracefully break your run into two phases (spinup and 
"steady state"), and adjust the various data* namelists according to 
your needs.

This also has the advantage of allowing you to QC partial results along 
the way, avoiding long and costly runs with mistaken setups.

I hope this helps,
Gus Correa

On 03/26/2018 12:07 PM, Jody Klymak wrote:
> Hi Taylor,
> If you just use the state-dump files, I don’t know of a way to only 
> starts saving after N years, except to stop the run after saving a 
> pickup, and then restart with the pickup.
> If you use the diagnostics package, then you can set the “phase” of the 
> save to be many years in the future and the saves won’t start until that 
> point.
> Cheers,  Jody
>> On Mar 26, 2018, at 8:58 AM, Taylor Shropshire <tas14j at my.fsu.edu> wrote:
>> Hello,
>> I am attempting to save computational resources by having the model 
>> write output files only after a predetermined amount of spin up. I see 
>> variables like deltaTClock and dumpFreq but can't seem to figure out 
>> how to make the model write output files only after a few years of a 
>> simulation have been run.
>> Thanks,
>> Taylor
>> *Taylor Shropshire
>> PhD Candidate - Oceanography*
>> *Center for Ocean - Atmospheric Prediction Studies*
>> *Florida State University*
>> _______________________________________________
>> MITgcm-support mailing list
>> MITgcm-support at mitgcm.org
>> http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support
