[MITgcm-devel] big improvement with SEAICE_CGRID
Martin Losch
Martin.Losch at awi.de
Tue May 19 03:09:38 EDT 2009
I am glad to hear that the c-grid seaice is soo much better (o:
How do I reproduce this problem? Which options for testreport, what
machine configuration, how many CPUs, and which compiler?
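For reference, I would try something along these lines (this is only
my guess at the invocation; the optfile is a placeholder and the
-mpi/-mth/-command combination is an assumption on my part, so please
correct it to whatever you actually ran):

  cd verification
  ./testreport -t 'lab_sea offline_exf_seaice' -mpi -mth \
      -of ../tools/build_options/linux_amd64_gfortran \
      -command 'mpirun -np TR_NPROC ./mitgcmuv'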
The SEAICE_CGRID option replaces the call to dynsolver (and then lsr
and ostres) with seaice_dynsolver (and seaice_lsr or seaice_evp,
followed by seaice_ocean_stress); in addition there are new
subroutines seaice_calc_strainrates and seaice_calc_viscosities,
which are not called in the B-grid code. The forcing is also treated
differently (seaice_get_dynforcing). Finally, the B-grid code
averages the ice velocities from B-grid to C-grid points in
seaice_advdiff and does an additional exchange; both are of course
removed with SEAICE_CGRID. Since the solver is not called in the
offline experiment, I would start looking there, i.e. in
seaice_advdiff.
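Schematically, the branching looks something like this (just a sketch
of the call structure as I read it, with argument lists omitted; the
comments only summarize what I wrote above, this is not the actual
code):

#ifdef SEAICE_CGRID
C     C-grid dynamics: strain rates and viscosities are computed in
C     their own routines, and the solver is either LSR or EVP;
C     the forcing is treated differently, too (SEAICE_GET_DYNFORCING)
      CALL SEAICE_DYNSOLVER( ... )
C       -> SEAICE_CALC_STRAINRATES, SEAICE_CALC_VISCOSITIES,
C          SEAICE_LSR or SEAICE_EVP
      CALL SEAICE_OCEAN_STRESS( ... )
#else
C     B-grid dynamics
      CALL DYNSOLVER( ... )
C       -> LSR, OSTRES
#endif
C     advection/diffusion; without SEAICE_CGRID this is also where the
C     ice velocities are averaged from B-grid to C-grid points, with
C     an additional exchange
      CALL SEAICE_ADVDIFF( ... )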
Currently, I cannot even update the code because of CVS problems, so
I cannot try to reproduce this myself.
Martin
On May 19, 2009, at 6:09 AM, Jean-Michel Campin wrote:
> Hi,
>
> I think I have to report on this huge improvement (10^60)
> I noticed when switching on SEAICE_CGRID:
>
> It started with those MPI+MTH tests: all 4 lab_sea tests
> get stuck somewhere, and today I noticed that
> offline_exf_seaice.seaicetd has the same problem (whereas
> the standard offline_exf_seaice, which does not use any seaice,
> passes the MPI+MTH test).
> The option file SEAICE_OPTIONS.h from offline_exf_seaice/code
> is out of date, so I started to switch to an up-to-date version,
> keeping most options #undef'ed. The only one I turned on
> (compared to the old version) is "SEAICE_CGRID".
> And guess what?
> The results improve by 60 orders of magnitude!
> Before, HEFF was reaching ~10^120 after 1 iteration (and the run was
> stopping before the end of the 2nd one), but now it only reaches
> ~10^61, and it even gets through the 2nd iteration, with only a
> moderate increase (on a log scale) at the end of the 2nd iter (10^62).
>
> More seriously, any advice on where to start looking?
> (it works well with MPI alone and with MTH alone, but not with both)
>
> Thanks,
> Jean-Michel
> _______________________________________________
> MITgcm-devel mailing list
> MITgcm-devel at mitgcm.org
> http://mitgcm.org/mailman/listinfo/mitgcm-devel