[MITgcm-devel] big improvement with SEAICE_CGRID
Jean-Michel Campin
jmc at ocean.mit.edu
Tue May 19 10:49:35 EDT 2009
Hi Martin,
Thanks for your comments.
I switched off all the SEAICEadvXXX flags (SEAICEadvHeff, SEAICEadvArea,
and the others, to be on the safe side), and it's working fine.
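For reference, the corresponding runtime flags in data.seaice look roughly
like this (only the two flags named above are certain here; the remaining
SEAICEadv* flags would be listed the same way):

 &SEAICE_PARM01
  SEAICEadvHeff = .FALSE.,
  SEAICEadvArea = .FALSE.,
 &
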
My guess is that an EXCH is being applied to a non-shared array,
which works fine multi-threaded without MPI, but does not
work when both are combined. And as you reminded me, SEAICE_CGRID
changes what is exchanged and where.
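To illustrate what I mean (a schematic only, with placeholder array names,
not actual pkg/seaice code): an exchange on a routine-local array is
thread-private, so the other threads never see it and the overlap update
cannot complete; the array has to live in a shared COMMON block instead:

C     Thread-private work array -- each thread has its own copy,
C     so EXCH cannot fill its overlaps once MPI and threads combine.
      _RL locFld(1-OLx:sNx+OLx,1-OLy:sNy+OLy,nSx,nSy)
      CALL EXCH_XY_RL( locFld, myThid )
C     Safe pattern: shrFld declared in a COMMON block (header file),
C     hence shared by all threads of the process.
      CALL EXCH_XY_RL( shrFld, myThid )
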
Just one comment:
I have #undef SEAICE_ALLOW_DYNAMICS and SEAICEuseDYNAMICS=.FALSE.,
but SEAICE_DYNSOLVER is still called from seaice_model, and
is doing something. Maybe we could check what is really needed
when SEAICEuseDYNAMICS=F and skip part of the code in that case.
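Something along these lines in seaice_model.F would do it (just a sketch;
I am assuming the existing call takes myTime, myIter, myThid):

C     Only run the dynamics solver when it is actually requested:
      IF ( SEAICEuseDYNAMICS ) THEN
        CALL SEAICE_DYNSOLVER( myTime, myIter, myThid )
      ENDIF
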
Cheers,
Jean-Michel
On Tue, May 19, 2009 at 09:09:38AM +0200, Martin Losch wrote:
> I am glad to hear that the c-grid seaice is soo much better (o:
>
> How do I reproduce this problem? What options for testreport and what
> machine configuration, number of cpus, which compiler?
>
> The SEAICE_CGRID option replaces the call to dynsolver (and then lsr and
> ostres) with seaice_dynsolver (and seaice_lsr or seaice_evp, and then
> seaice_ocean_stress; further, there are new subroutines
> seaice_calc_strainrates and seaice_calc_viscosities, which are not called
> for the B-grid code). The forcing is treated differently
> (seaice_get_dynforcing). Then there is an averaging of ice velocities
> from B-grid to C-grid points in seaice_advdiff and an additional
> exchange, which is of course removed with SEAICE_CGRID. Since in the
> offline experiment, the solver is not called, I would start looking
> there (seaice_advdiff).
>
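A schematic of the call structure described in the preceding paragraph
(not verbatim pkg/seaice code; argument lists are assumed):

#ifdef SEAICE_CGRID
C     C-grid path: seaice_dynsolver, then seaice_lsr (or seaice_evp),
C     then seaice_ocean_stress; forcing via seaice_get_dynforcing.
      CALL SEAICE_DYNSOLVER( myTime, myIter, myThid )
#else
C     B-grid path: dynsolver, then lsr and ostres.
      CALL DYNSOLVER( myTime, myIter, myThid )
#endif
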
> Currently, I cannot even update the code because of CVS-problems, so I
> can't try to reproduce this problem.
>
> Martin
>
>
> On May 19, 2009, at 6:09 AM, Jean-Michel Campin wrote:
>
>> Hi,
>>
>> I think I have to report on this huge improvement (10^60)
>> I noticed when switching on SEAICE_CGRID:
>>
>> It started with those MPI+MTH tests, all 4 lab_sea tests
>> get stuck somewhere, and today I noticed that
>> offline_exf_seaice.seaicetd has the same problem (whereas
>> the standard offline_exf_seaice, which does not use any seaice,
>> passes the MPI+MTH test fine).
>> The option file SEAICE_OPTIONS.h from offline_exf_seaice/code
>> is out of date, so I started switching to an up-to-date version,
>> keeping most options just #undef. The only one I turned on
>> (compared to the old version) is "SEAICE_CGRID".
>> And guess what ?
>> The results improve by 60 orders of magnitude !
>> Before, HEFF was reaching ~10^120 after 1 iteration (and stopping
>> before the end of the 2nd one), but now it's only 10^61, and can
>> even go through the 2nd iteration, with only a moderate increase
>> (in log scale) at the end of the 2nd iter (10^62).
>>
>> More seriously, any advice on where to start looking ?
>> (works well with MPI alone, MTH alone, but not with both)
>>
>> Thanks,
>> Jean-Michel