[MITgcm-support] OpenMP Model Code.
Chris Hill
cnh at mit.edu
Tue Oct 7 13:47:36 EDT 2003
John,
That's great. What hardware and system software setup are you using?
Also what rev/checkpoint of the code are you working from?
Chris
> -----Original Message-----
> From: mitgcm-support-bounces at mitgcm.org
> [mailto:mitgcm-support-bounces at mitgcm.org] On Behalf Of John Stark
> Sent: Tuesday, October 07, 2003 1:37 PM
> To: mitgcm-support at mitgcm.org
> Subject: [MITgcm-support] OpenMP Model Code.
>
>
> Hi,
>
> Does anyone else use the threaded version of the model,
> rather than MPI? I have recently set up the code to use
> OpenMP directives instead of the processor-specific ones
> currently in the code. The code runs at the same speed as
> it does with the shared-memory MPICH, though :(.
>
> If anyone wants to do the same, I'd recommend the following
> modifications to the code:
>
> 1. main.F , MAIN_PDIRECTIVES1.h, MAIN_PDIRECTIVES2.h
> Replace the loop DO I=1,nThreads with an OMP parallel
> section. This allows the use of OMP BARRIERs within the code,
> whereas a DO loop does not. The use of a loop is misleading
> anyway, since the model never actually uses it as a loop; it
> is only a construct for setting up the threads on some
> platforms (more detail below).
>
> 2. barrier.F , bar2.F
>
> The whole contents of the subroutines barrier and bar2 can
> become a single C$OMP BARRIER (a sketch is given in the
> detail section below).
>
>
> 3. Some architectures (Itanium and Opteron) seem to have low
> stack limits, which cause seg. faults with a reasonable model
> size. To get around this I had to create thread-private
> common blocks in the following files: thermodynamics.F,
> dynamics.F and impldiff.F. These routines use very large
> local arrays (see below). This seems to be an issue
> regardless of the stack limits set with 'ulimit -s' or
> 'export MP_STACKSIZE=...'.
>
> 4. exch_rl.F and exch_rs.F need an additional _BARRIER at the
> start to ensure all threads have finished writing the array
> before it is exchanged (sketch in the detail section below).
>
> 5. ini_masks_etc.F
> The _BEGIN_MASTER( myThid ) and _END_MASTER should be
> commented out, since otherwise barriers in the underlying mds...
> routines will cause the threads to get out of sync and the
> code to crash (sketch in the detail section below).
>
> 6. mdsio_writefield.F
> Needs a _BARRIER and _BEGIN_MASTER( myThid ) at the start and an
> _END_MASTER at the end (sketch in the detail section below).
>
>
> Some more detail...
>
>
> [ At the top of main.F ]
> #ifdef USE_OMP_THREADING
> c external routine returning the thread number from OpenMP
> integer OMP_GET_THREAD_NUM
> external OMP_GET_THREAD_NUM
> #endif
>
> [ main.F ]
> Replace the DO I=1,nThreads loop with myThid=1 to handle the
> case with no threads.
>
> [ MAIN_PDIRECTIVES1.h ]
> #ifdef USE_OMP_THREADING
> C$OMP PARALLEL
> C$OMP& SHARED(nThreads) ,
> C$OMP& PRIVATE(I, myThid)
> c we could use the NUM_THREADS(nThreads) clause instead,
> C but it does not seem to be supported on all systems.
> call OMP_SET_NUM_THREADS(nThreads)
>
> c Model uses thread ids from 1,2... whereas openmp uses 0,1...
> myThid = OMP_GET_THREAD_NUM() + 1
> #define CODE_IS_THREADED
> #endif
>
> [ MAIN_PDIRECTIVES2.h ]
> #ifdef USE_OMP_THREADING
> C$OMP END PARALLEL
> #endif
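>
> [ barrier.F , bar2.F ]
> A minimal sketch of what each routine reduces to (I'm assuming the
> existing SUBROUTINE BARRIER( myThid ) interface; bar2.F is the same
> apart from the routine name):
>
>       SUBROUTINE BARRIER( myThid )
>       IMPLICIT NONE
>       INTEGER myThid
> c All of the old hand-coded synchronisation goes; each thread just
> c waits here until every thread has arrived.
> C$OMP BARRIER
>       RETURN
>       END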
>
>
> [thermodynamics.F:]
> #ifdef LESS_STACK_VARS
> common / thermo_work_S / xA, yA, maskUp
> common / thermo_work_L / uTrans, vTrans, rTrans, fVerT, fVerS
> & , fVerTr1, phiHyd, rhokm1, rhok, phiSurfX, phiSurfY,
> & KappaRT, KappaRS, sigmaX, sigmaY, sigmaR
> C$OMP THREADPRIVATE( /thermo_work_S/ , /thermo_work_L/)
> #endif
>
> [dynamics.F:]
> #ifdef LESS_STACK_VARS
> common / dynamics_rl / fVerU, fVerV, phiHyd, rhokm1, rhok,
> & phiSurfX, phiSurfY, KappaRU, KappaRV
> c$OMP THREADPRIVATE( / dynamics_rl / )
> #endif
>
>
> [impldiff.F:]
> #ifdef LESS_STACK_VARS
> common / impldiff_rl / gYnm1 , a , b , c , bet , gam
> C$OMP THREADPRIVATE( / impldiff_rl / )
> #endif
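>
> [ exch_rl.F (and likewise exch_rs.F) ]
> A sketch of the addition only; it goes right after the declarations,
> before any data is copied into the overlap regions:
>
> c Make sure every thread has finished writing its part of the array
> c before any thread starts the exchange.
>       _BARRIER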
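>
> [ ini_masks_etc.F ]
> A sketch of the change around the output calls; the WRITE_FLD call
> shown here is only illustrative of the mds-based writes in that
> routine:
>
> c Was:
> c     _BEGIN_MASTER( myThid )
> c     CALL WRITE_FLD_XYZ_RS( 'hFacC', ' ', hFacC, 0, myThid )
> c     _END_MASTER( myThid )
> c Now all threads call the write routine, which does its own
> c master-only I/O and contains barriers that every thread must reach:
>       CALL WRITE_FLD_XYZ_RS( 'hFacC', ' ', hFacC, 0, myThid )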
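>
> [ mdsio_writefield.F ]
> A sketch of the bracketing around the existing body of the routine
> (the body itself is left unchanged):
>
> c Wait for all threads, then let only the master thread do the I/O.
>       _BARRIER
>       _BEGIN_MASTER( myThid )
> c ... existing body of the routine ...
>       _END_MASTER( myThid )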
>
> --
> ------------------------------------------
> John Stark
> Applications Programmer
> Southampton Oceanography Centre
> Tel. +44 (0)23 8059 6571
> e-mail jods at soc.soton.ac.uk
> ------------------------------------------
> _______________________________________________
> MITgcm-support mailing list
> MITgcm-support at mitgcm.org
> http://dev.mitgcm.org/mailman/listinfo/mitgcm-support
>