[MITgcm-devel] offline_exf_seaice

Martin Losch Martin.Losch at awi.de
Tue May 27 10:14:46 EDT 2014


Hi Jean-Michel,

I am afraid I was not very clear about my plans.

I planned to leave seaice_itd as it is. But it does not contain any horizontal physics/advection/ridging. 

I wanted to add one or two experiments to offline_exf_seaice that would use the ITD code with some ridging and advection (and thermodynamics). This requires that the existing experiments in this directory all run with SEAICE_ITD defined. As far as I can see, this is possible: I can set MULTDIM=nITD=1 and get the same results, except for the one place with seaice_area and 4-5 digits of agreement. I agree that I'll have to dig further into that.
I can even have MULTDIM=nITD = 5 (or some other number), set the runtime parameter SEAICE_multDim = 1, and get the same agreement (after I have replaced all loop boundaries 1,nITD with 1,SEAICE_multDim). I like this because it makes the code a little more transparent and moves it towards being the same with and without SEAICE_ITD (which I think makes sense).
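The loop-boundary change I have in mind is schematically something like the following. This is only an illustrative sketch, not the actual pkg/seaice source; everything except the names nITD and SEAICE_multDim is made up for the example:

```fortran
C     Illustrative sketch only: heff and the loop body are invented;
C     the real category loops live in pkg/seaice (e.g. seaice_growth).
      PROGRAM ITD_LOOP_SKETCH
      IMPLICIT NONE
C     nITD fixes the array dimensions at compile time
C     (as in SEAICE_SIZE.h) ...
      INTEGER, PARAMETER :: nITD = 5
      INTEGER SEAICE_multDim, k
      REAL*8 heff(nITD)

      heff(:) = 0.d0
C     ... but the category loop runs to the runtime parameter
C     SEAICE_multDim (with SEAICE_multDim <= nITD), so that
C     SEAICE_multDim = 1 reproduces the single-category behaviour
C     even when the code is compiled with nITD > 1.
      SEAICE_multDim = 1
      DO k = 1, SEAICE_multDim
         heff(k) = heff(k) + 1.d0
      ENDDO
      PRINT *, heff
      END
```

With this pattern the same executable covers both the SEAICE_multDim = 1 and the multi-category tests, which is the point of converting the 1,nITD loop bounds.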

Now I understand that there are parts of the code that are replaced by the ITD code (in seaice_growth), and that is maybe not what we want, because the original code would not be tested anymore.

Alternatively, I could add a totally new verification experiment, e.g., seaice_offline_itd, that tests itd+advection+ridging. This experiment could use the same configuration as offline_exf_seaice.

What do you think?

Martin

On May 27, 2014, at 3:43 PM, Jean-Michel Campin <jmc at ocean.mit.edu> wrote:

> Hi Martin,
> 
> Regarding testing the ITD code: we talked (Torge, Patrick and me)
> some time ago how to move forward, and at this time, we thought
> that we would start with something similar to 1D_ocean_ice_column
> (because it was "ready to check-in"), and it's what we currently have
> in experiment seaice_itd;
> but the plan was to quickly move to a "real" test (with advection)
> that would be more or less similar to offline_exf_seaice.
> 
> In what you propose (because SEAICE_ITD & nITD are set at compile time),
> we would lose some of the current offline_exf_seaice test (pkg/seaice
> thermodynamics without ITD: input.thermo & input.dyn_lsr) and I am not 
> sure that it's really what we want now.
> And we will be left with seaice_itd test experiment that does not
> test much of the ITD code (no advection, no ridging, EXCH calls
> irrelevant).
> 
> And regarding your 2 questions:
>> the table header that testreport generates: 
> yes, it does not fit well when it's a non standard ocean dynamics test;
> you should rely on tr_checklist.
>> 2. Are these differences described above typical?
> Nothing to worry about all the exf_*del2 (it requires valid overlaps,
> and exf is not really supposed to provide these, for all fields
> in EXF_MONITOR).
> I am more concerned about changes in:
>> seaice_area_mean & seaice_area_sd (5 & 4 digits ?).
> the fact it comes back to the same numbers after it=30 would be surprising
> for an ocean/atmos dynamics test, but with an off-line seaice thermo-only test,
> I am not sure it means that "everything is OK".
> 
> Cheers,
> Jean-Michel
> 
> On Tue, May 27, 2014 at 01:50:17PM +0200, Martin Losch wrote:
>> Hi Jean-Michel,
>> 
>> I have a specific testreport question. I want to create a test for the ITD code with advection etc., ideally something like offline_exf_seaice. Rather than coming up with a new directory, my plan is to modify the code and offline_exf_seaice so that we can use it for testing the code with both SEAICE_multDim = 1 and larger values; that should be possible.
>> 
>> The first hurdle (after removing code that resets useThSice in seaice_readparms, defining SEAICE_ITD, and setting nITD=1 in SEAICE_SIZE.h) is that testreport for thermo gives only 5 and 4 digits of agreement for the mean and s.d. of U, but really the difference appears in seaice_area_mean, seaice_area_sd, and seaice_area_del2, only once, at timestep 30. Before and after that, everything is perfect except for exf_hflux_del2 and exf_sflux_del2 at timestep 0. All other experiments look perfect.
>> 
>> Two questions: 
>> 1. Am I right that the table header that testreport generates cannot be trusted entirely and that I should really have a look at tr_checklist instead? 
>> 2. Are the differences described above typical? I mean, I find it hard to believe that the results are precise to 16 digits except for one variable at one timestep in the middle of the test (not the beginning or the end). Should I just ignore that and update the output file?
>> 
>> Martin
>> _______________________________________________
>> MITgcm-devel mailing list
>> MITgcm-devel at mitgcm.org
>> http://mitgcm.org/mailman/listinfo/mitgcm-devel
> 



