[MITgcm-support] MPI scaling of coupled aim-ocn runs

David Ferreira dfer at mit.edu
Mon Dec 7 16:17:41 EST 2009


Hi Andrew,
> I have just begun setting up a coupled run based on the code in the 
> verification/cpl_aim+ocn directory.  By editing SIZE.h's I have set up 
> runs with different combinations of processes (1 ocn / 1 atm ; 2 ocn / 
> 1 atm ; 1 ocn / 2 atm ; 2 ocn / 2 atm ; 1 ocn / 3 atm) and the fastest 
> run seems to be the one with 1 ocn / 2 atm.
When running the coupled model, I usually use 18 or 24 processors: 1 
coupler / 6 ocn / 12 atm, or 1 coupler / 12 ocn / 12 atm. There is 
little or no benefit to using more CPUs.
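
For reference, the split is set by the domain-decomposition parameters
in each component's SIZE.h: nPx*nPy is the number of MPI processes for
that component, and (sNx*nSx*nPx) x (sNy*nSy*nPy) must reproduce the
global grid. Here is a minimal sketch of that section for an ocean
SIZE.h, assuming the 32x32x6 cubed-sphere grid of the verification
experiment split over 2 processes (the tile sizes, overlaps and Nr
below are illustrative, not the actual verification values):

C     Domain-decomposition section of SIZE.h (sketch).
C     Assumes the cs32 grid: six 32x32 faces laid out as a
C     192 x 32 global domain; here nPx*nPy = 2 MPI processes.
      INTEGER sNx, sNy, OLx, OLy, nSx, nSy, nPx, nPy
      INTEGER Nx, Ny, Nr
      PARAMETER (
     &           sNx =  32,
     &           sNy =  32,
     &           OLx =   4,
     &           OLy =   4,
     &           nSx =   3,
     &           nSy =   1,
     &           nPx =   2,
     &           nPy =   1,
     &           Nx  = sNx*nSx*nPx,
     &           Ny  = sNy*nSy*nPy,
     &           Nr  =  15 )

With nPx*nPy = 2 the ocean executable then runs on 2 MPI ranks; the
atmosphere's SIZE.h is set up the same way, independently.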

> We are eventually hoping to use this code to do multiple thousand-year 
> runs for the last glaciation (at some point will attempt coupling with 
> Tarasov's glacial system model).  Does anyone know what the best 
> configuration of processes/threads is for running the coupled model?  
> I haven't tried multithreading yet, only MPI.  Also, is there a way to 
> tell how well synchronized the atmosphere and ocean processes are?  
> Does it happen that one ends up waiting for the other if the ratio of 
> the time steps is off?
Yes, it happens: usually the ocean waits for the atmosphere to finish, 
but it can be the reverse if, for example, the ocean carries lots of 
tracers.
You can find out which component waits for the other by looking at the 
time each one spends in the coupler, given by the 
"CPL_EXPORT-IMPORT  [FORWARD_STEP]" entry in the timing summary at the 
end of each component's standard output.
At this point, there is no alternative to the ocean and atmosphere 
exchanging information every ocean time-step.
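
If it helps, something like the following compares that timer between
the two components (a sketch: the section name is the one quoted
above, but the STDOUT file names and run-directory layout are
assumptions about how the coupled job was launched):

  # Compare the time each component spends in the coupler exchange.
  # Paths are hypothetical -- point them at your atm and ocn run dirs.
  grep -A 3 "CPL_EXPORT-IMPORT" ocn/STDOUT.0000 atm/STDOUT.0000

Whichever component shows the larger wall-clock time in that section
is the one sitting in the coupler waiting for the other.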

> Many thanks,
>
> Andrew
>
>
>
> -- 
> Andrew Keats
> NSERC Postdoctoral Fellow
> Department of Physics and Physical Oceanography
> Memorial University of Newfoundland
>
>
>
>