[MITgcm-devel] [MITgcm-cvs] MITgcm/pkg/seaice CVS Commit

Martin Losch Martin.Losch at awi.de
Tue Sep 27 03:56:48 EDT 2016


Hi Jean-Michel,

should I also add an "eedata.mth" to offline_exf_seaice.dyn_jfnk? Or 
would you rather test this first on some of your platforms privately?
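
For reference, a minimal eedata.mth along the lines of the other
verification experiments would look something like this (the thread
counts are only an example and have to match the tile layout of the
experiment, i.e. nSx must be a multiple of nTx and nSy a multiple of
nTy):

# minimal "eedata.mth" for a multi-threaded test (example values)
 &EEPARMS
 nTx=1,
 nTy=2,
 &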

Martin

On 09/26/2016 06:06 PM, Martin Losch wrote:
> Hi Jean-Michel,
>
> I guess it is OK, because eps1 is only set for its=0, and by the time it is used all threads have written to it. But you are right: to be safe it should also be indexed with myThid, I'll do that tomorrow. Or I'll replace it with the formal parameter eps, which is passed to the routine all the time anyway.
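
Roughly, the two options look like this (a sketch with an invented
common-block name; the right-hand side only stands for whatever the
routine actually computes at its=0):

C     option (a): one copy of eps1 per thread in the local common block
      _RL eps1(MAX_NO_THREADS)
      COMMON /FGMRES_EPS1_SKETCH/ eps1
C     set only on the first pass (RHS is just a placeholder)
      IF ( its .EQ. 0 ) eps1(myThid) = eps*ro
C     ... later, the convergence test reads the thread's own copy:
C         IF ( ro .LE. eps1(myThid) ) ...
C     option (b): drop eps1 from the common block altogether and work
C     directly with the formal parameter eps that is passed in anyway
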
>
> Martin
>
>> On 26 Sep 2016, at 16:18, Jean-Michel Campin <jmc at mit.edu> wrote:
>>
>> Hi Martin,
>>
>> I was assuming that using MAX_NO_THREADS & myThid (instead of nSx,nSy & bi,bj) would
>> save some bi,bj loops. And indeed your latest version is simpler (fewer bi,bj loops).
>>
>> I am still unsure about "eps1", which is in a (local) common block but without myThid
>> (+ I get lost with all these "goto" statements).
>> Are you sure that it's safe with multiple threads?
>>
>> Cheers,
>> Jean-Michel
>>
>> On Sun, Sep 25, 2016 at 09:35:59PM +0200, Martin Losch wrote:
>>> Hi Jean-Michel,
>>> Thank you,
>>> I will try that, but maybe this is true for all variables that have bi,bj now, e.g. hh, c, s (they do not depend on the tile), so that I don't have to loop over bi,bj at all? You can see that I don't really understand how this threading works.
>>>
>>> Martin
>>>
>>>> On 23.09.2016, at 21:14, Jean-Michel Campin <jmc at mit.edu> wrote:
>>>>
>>>> Hi Martin,
>>>>
>>>> My impression is that any MPI proc was computing the right thing, so there is no need for a global sum.
>>>> However, since a few tile-independent variables were stored in a common block (to avoid a "save"
>>>> statement) and updated & used in the same routine, all threads were doing that, but
>>>> not in the right order, so the multi-threaded results were not right.
>>>> So I guess the global sum is not necessary; we just need to store in the common block
>>>> one value per thread, so that threads will not interfere with each other.
>>>>
>>>> Does this make sense ?
>>>>
>>>> Maybe changing
>>>> _RL rs(4*imax+1,nSx,nSy)    to   _RL rs(4*imax,MAX_NO_THREADS)
>>>> and changing
>>>>    rs(i,bi,bj)             to   rs(i,myThid)
>>>> would be a little clearer (since they are not really tile dependent)?
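
To make that concrete, a self-contained sketch of the per-thread
pattern (routine name, common-block name and the imax value are
invented for the example; _RL, MAX_NO_THREADS and myThid come from
the usual MITgcm headers):

      SUBROUTINE FGMRES_THREAD_SKETCH( myThid )
C     Sketch only (names invented): a tile-independent work array
C     gets one column per thread, indexed by myThid, so that threads
C     cannot overwrite each other's values in the shared common block.
      IMPLICIT NONE
#include "SEAICE_OPTIONS.h"
#include "EEPARAMS.h"
      INTEGER myThid
C     imax is only a placeholder size for the example
      INTEGER imax
      PARAMETER ( imax = 50 )
C     one column per thread instead of one per tile (bi,bj)
      _RL rs(4*imax+1,MAX_NO_THREADS)
      COMMON /FGMRES_SKETCH_LOCAL/ rs
      INTEGER i
C     each thread reads and writes only its own column
      DO i = 1, 4*imax+1
       rs(i,myThid) = 0. _d 0
      ENDDO
      RETURN
      END

Since every thread only ever touches its own column, the threads
cannot race on this array.
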
>>>>
>>>> Cheers,
>>>> Jean-Michel
>>>>
>>>>> On Fri, Sep 23, 2016 at 09:27:42AM +0200, Martin Losch wrote:
>>>>> Hi Jean-Michel, Chris,
>>>>>
>>>>> I found a way to change seaice_fgmres.F so that it passes all tests for configurations that are accessible to me (without MPI, with MPI, with multithreading). I haven't tried the combination of MPI and multithreading yet.
>>>>>
>>>>> Could you do me a favor and have a look at code lines 559-570? There I replaced "ro=abs(rs(i1))" by a global sum and "ro=abs(ro/(nSy*nPy*nSx*nPx))". All threads compute the same thing (at least in my tests), but I need to reduce "rs" to one scalar number "ro" that is used as the termination criterion for the loop. Computing the average over all threads and processes in the crude way I have done makes sure that I always have the correct value for ro in all threads. I am a little uncertain if this is OK.
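
For what it is worth, a sketch of what those lines amount to, using
the standard tile-wise global sum GLOBAL_SUM_TILE_RL; rsTile and
roSum are invented names, and rs, i1, ro and myThid are assumed to be
declared as in seaice_fgmres.F:

      _RL rsTile(nSx,nSy), roSum
      INTEGER bi, bj
C     copy the (redundant) per-tile values into a tile array
      DO bj = myByLo(myThid), myByHi(myThid)
       DO bi = myBxLo(myThid), myBxHi(myThid)
        rsTile(bi,bj) = rs(i1,bi,bj)
       ENDDO
      ENDDO
C     sum over all tiles and processes (this includes a barrier)
      CALL GLOBAL_SUM_TILE_RL( rsTile, roSum, myThid )
C     every tile holds the same value, so dividing by the total
C     number of tiles recovers that value on every thread
      ro = ABS( roSum/DBLE(nSx*nSy*nPx*nPy) )
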
>>>>>
>>>>> Also, I am not sure why the code has worked before for multiple MPI processes. Probably because rs(i1) was always computed correctly and was available (although redundantly on each MPI process), but with multithreading not all rs(i1,bi,bj) are available to all threads, and I cannot just copy, e.g., ro=abs(rs(i1,1,1)). Do I understand this correctly?
>>>>>
>>>>> Martin
>>>>>
>>>>>> On 23 Sep 2016, at 09:15, Martin Losch <mlosch at forge.csail.mit.edu> wrote:
>>>>>>
>>>>>> Update of /u/gcmpack/MITgcm/pkg/seaice
>>>>>> In directory forge:/tmp/cvs-serv16065/pkg/seaice
>>>>>>
>>>>>> Modified Files:
>>>>>>   seaice_fgmres.F
>>>>>> Log Message:
>>>>>> make seaice_fgmres.F fit for multithreading; this first attempt
>>>>>> appears to be a bit of a hack but it does not affect the results for 1
>>>>>> cpu with and without multithreading and with more than one cpu (without
>>>>>> multithreading)
>>>>>>

-- 
Martin Losch
Alfred Wegener Institute for Polar and Marine Research
Postfach 120161, 27515 Bremerhaven, Germany;
Tel./Fax: ++49(0471)4831-1872/1797



