[MITgcm-devel] big improvement with SEAICE_CGRID
Jean-Michel Campin
jmc at ocean.mit.edu
Wed May 20 11:18:10 EDT 2009
Hi Martin,
With the common blocks I added, all the lab_sea tests and
offline_exf_seaice.seaicetd now pass with MPI+MTH (see
http://mitgcm.org/testing/results/2009_05/tr_danton_20090519_0/summary.txt).
On Wed, May 20, 2009 at 11:28:50AM +0200, Martin Losch wrote:
> Hi Jean-Michel,
>
> thanks for the changes in pkg/seaice; the renaming in advect/diffus was
> long overdue, although I never use (or even look at) these routines anymore.
>
> I learned this:
> "Whenever a field is exchanged, put it into a (at least local) common
> block."
> Is this a new programming rule for the MITgcm (or even more generally)
> in order not to break MPI+MTH?
There is little chance that we can get around this with EXCH1.
But with Chris, we could change EXCH2 so that it works
with MPI+MTH on non-shared variables.
Part of the reason why we don't have many exchanges applied to
non-shared variables is that, if I remember correctly, even without MPI,
the old Cube-Exch from EXCH1 does not work multi-threaded for
non-shared variables (it was used for multi-threaded runs before
we added EXCH2).
>
> If so, what about fld3d in ADVECT.F?
ADVECT is now only applied to variables that are in common blocks,
either HEFF, AREA, or fld3d from seaice_advdiff.F:
> COMMON / SEAICE_ADVDIFF_LOCAL / uc, vc, fld3d
So it should be fine.
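For illustration, the general pattern looks roughly like this (take it
as a sketch, not as the actual pkg/seaice code: the EXCH_XY_RL name,
the _RL macro and the SIZE.h dimensions just follow the usual MITgcm
conventions, and the common-block and field names are made up):

C     Sketch only: a field that gets exchanged sits in a (possibly
C     local) common block, so that all threads of a process see the
C     same storage; a routine-local array is not guaranteed to be
C     shared, and the exchange can then hang under MPI+MTH.
      _RL fldLoc(1-OLx:sNx+OLx,1-OLy:sNy+OLy,nSx,nSy)
      COMMON / MY_PKG_LOCAL_FLD / fldLoc
C     ... fill fldLoc inside the bi,bj tile loops, then:
      CALL EXCH_XY_RL( fldLoc, myThid )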
Cheers,
Jean-Michel
>
> Martin
>
> On May 19, 2009, at 4:49 PM, Jean-Michel Campin wrote:
>
>> Hi Martin,
>>
>> Thanks for your comments.
>> I switched off all the SEAICEadvXXX flags (SEAICEadvHeff, SEAICEadvArea
>> + the others, to be on the safe side), and it's working fine.
>> My guess is that there are EXCH calls applied to non-shared arrays,
>> which work fine multi-threaded without MPI, but not with both.
>> And as you reminded me, SEAICE_CGRID has some effect on what and
>> where things are exchanged.
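(For reference, switching these off is just a matter of run-time flags
in data.seaice; the snippet below is only illustrative, listing the two
flags named above; the remaining SEAICEadvXXX flags would be set the
same way:)

 &SEAICE_PARM01
 SEAICEadvHeff = .FALSE.,
 SEAICEadvArea = .FALSE.,
 &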
>>
>> Just one comment:
>> I have #undef SEAICE_ALLOW_DYNAMICS and SEAICEuseDYNAMICS=.FALSE.,
>> but SEAICE_DYNSOLVER is still called from seaice_model, and
>> is doing something. Maybe we could check what is really needed
>> when SEAICEuseDYNAMICS=.FALSE. and skip some parts of the code in
>> that case.
>>
>> Cheers,
>> Jean-Michel
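To illustrate the kind of guard meant above, something along these
lines in seaice_model.F would do (only a sketch; the argument list of
SEAICE_DYNSOLVER is assumed here, not copied from the code):

C     Sketch of the suggested run-time guard (not the actual code):
C     skip the dynamics solver entirely when it is switched off,
C     instead of calling it and letting it do something anyway.
      IF ( SEAICEuseDYNAMICS ) THEN
        CALL SEAICE_DYNSOLVER( myTime, myIter, myThid )
      ENDIF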
>>
>> On Tue, May 19, 2009 at 09:09:38AM +0200, Martin Losch wrote:
>>> I am glad to hear that the c-grid seaice is soo much better (o:
>>>
>>> How do I reproduce this problem? Which options for testreport, what
>>> machine configuration, how many CPUs, which compiler?
>>>
>>> The SEAICE_CGRID option replaces the call to dynsolver (and then lsr
>>> and ostres) with seaice_dynsolver (and seaice_lsr or seaice_evp, and
>>> then seaice_ocean_stress); further, there are the new subroutines
>>> seaice_calc_strainrates and seaice_calc_viscosities, which are not
>>> called by the B-grid code. The forcing is also treated differently
>>> (seaice_get_dynforcing). Then there is an averaging of ice velocities
>>> from B-grid to C-grid points in seaice_advdiff, plus an additional
>>> exchange, both of which are of course removed with SEAICE_CGRID.
>>> Since the solver is not called in the offline experiment, I would
>>> start looking there (seaice_advdiff).
>>>
>>> Currently, I cannot even update the code because of CVS problems,
>>> so I can't try to reproduce this problem.
>>>
>>> Martin
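For orientation, the branching Martin describes amounts to something
like the following in the sea-ice model (only a sketch: the routine
names are taken from his description, the argument lists are assumed):

C     Sketch of the compile-time selection (illustrative only):
#ifdef SEAICE_CGRID
C     C-grid path: new dynamics solver, then ocean stress
      CALL SEAICE_DYNSOLVER   ( myTime, myIter, myThid )
      CALL SEAICE_OCEAN_STRESS( myTime, myIter, myThid )
#else
C     B-grid path: old dynsolver and ostres
      CALL DYNSOLVER( myTime, myIter, myThid )
      CALL OSTRES   ( myTime, myIter, myThid )
#endif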
>>>
>>>
>>> On May 19, 2009, at 6:09 AM, Jean-Michel Campin wrote:
>>>
>>>> Hi,
>>>>
>>>> I think I have to report on this huge improvement (10^60)
>>>> I noticed when switching on SEAICE_CGRID:
>>>>
>>>> It started with the MPI+MTH tests: all 4 lab_sea tests
>>>> get stuck somewhere, and today I noticed that
>>>> offline_exf_seaice.seaicetd has the same problem (whereas
>>>> the standard offline_exf_seaice, which does not use any seaice,
>>>> passes the MPI+MTH test without trouble).
>>>> The option file SEAICE_OPTIONS.h from offline_exf_seaice/code
>>>> is out of date, so I started to switch to an up-to-date version,
>>>> keeping most options just #undef'ed. The only one I turned on
>>>> (compared to the old version) is "SEAICE_CGRID".
>>>> And guess what?
>>>> The results improve by 60 orders of magnitude!
>>>> Before, HEFF was reaching ~10^120 after 1 iteration (and the run
>>>> stopped before the end of the 2nd one); now it only reaches 10^61,
>>>> and can even go through the 2nd iteration, with only a moderate
>>>> increase (on a log scale) at the end of the 2nd iteration (10^62).
>>>>
>>>> More seriously, any advice on where to start looking?
>>>> (It works well with MPI alone and with MTH alone, but not with both.)
>>>>
>>>> Thanks,
>>>> Jean-Michel