[MITgcm-support] Independent Tiling

Jean-Michel Campin jmc at ocean.mit.edu
Fri Mar 21 01:26:35 EDT 2014


Hi Jody,

Thank you very much for all these useful suggestions.
I would like to take this opportunity to say how much
the MITgcm users appreciate your input on this list.

And just to clarify:
> After several hours with Jean-Michel yesterday, the problem is still there!
I made a few suggestions about simple tests to help figure out
where the problem is, but have not received any report yet.

I am sorry if you feel that some pieces of information are not getting
to the list, but I cannot do much about this.

Cheers,
Jean-Michel

On Thu, Mar 20, 2014 at 01:22:54PM -0700, Jody Klymak wrote:
> Hi Chris,
> 
> What is plotted?  How are you forcing this?  What do you expect your solution to look like?  
> 
> Just for fun, did you try tiling in y and just having one tile in x?  
> 
> What if you start with an initial condition that varies clearly in x?  Does this still dominate?  i.e. can you ever get a solution that is different in each tile?  
> 
> When you say this occurs for each tile no matter how many tiles you used, did you try keeping the physical dimensions of the problem the same and increasing the number of tiles, i.e. reducing dx or sNx?  If you do that, is it still one node of this pattern per tile, or do the patterns look the same in physical space?  
> 
> That will tell you whether your standing wave is physical or not.  There is no reason a big standing wave might not be important in a long channel, and if you keep Lx=sNx*nPx*dx the same, adding more tiles in x will just give you more nodes of your standing wave.
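> 
> For example (just a sketch; these numbers are made up, so substitute whatever is in your own SIZE.h and data files): if you currently run with
> 
>      sNx = 60, nPx = 4, delX = 5.E3,
> 
> try
> 
>      sNx = 30, nPx = 8, delX = 5.E3,
> 
> so that Lx = sNx*nPx*delX stays the same.  If you then see one node per tile again (8 nodes instead of 4), the wave is a tiling artifact; if the pattern is unchanged in physical space, it is physical.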
> 
> > We did find, however, that this issue is not with the exchange, boundaries, or forcing files...
> 
> Hmmmm... Well, it's either those or your MPI compiler...
> 
> Cheers,   Jody
> 
> 
> > The system seems to always want to set up a standing wave of wavenumber k = Npx: this has happened for 4-7 zonal tiles (all that I've tried) and completely drowns out any other signal. Attached is a figure of one of these modes for 6 processors. 
> > 
> > With just 1 processor, of course, there is no mode like this and things look normal. Perhaps this is a known numerical issue? The problem presents itself with only the gfd package on. Hoping someone has some advice. 
> > 
> > Still stumped, 
> > 
> > Chris
> > 
> > ---
> > Christopher Horvat -- www.chrv.at
> > 
> > 
> > 
> > On Wed, Mar 19, 2014 at 8:09 AM, Chris Horvat <horvat at fas.harvard.edu> wrote:
> > Oops, replied to the wrong email.
> > 
> > To update this support thread: exp2 appears to output as expected, but even with a clean build, a clean code directory, and the newest checkout, the issue persists in my runs, even when using a modified data file from exp2.
> > 
> > Topography/boundary conditions don't appear to be the culprit, as they are shown as expected by the model (i.e. depth.* and rbc_* are as expected). Pretty stumped.
> > 
> > Chris
> > 
> > 
> > On Mar 19, 2014 8:00 AM, "Chris Horvat" <horvat at fas.harvard.edu> wrote:
> > Hey Jean-Michel, can I drop by your office today? I'll be at the noon seminar at MIT, but can meet any other time. I really need to get this issue sorted out (i.e. find my dumb mistake).
> > 
> > Chris
> > 
> > On Mar 17, 2014 12:44 PM, "Jean-Michel Campin" <jmc at ocean.mit.edu> wrote:
> > Hi Chris,
> > 
> > I would suggest starting with one of the simple verification experiments
> > (e.g., exp2) using a clean, freshly downloaded version of the code.
> > If the problem with MPI shows up again there, then you could report the
> > full sequence of commands you typed and what you are getting (log files
> > and output).
> > Otherwise, we will know that the problem is in your customised set-up
> > and we should investigate that.
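> > 
> > For example, something along these lines from a fresh checkout (just a
> > sketch; add your usual genmake2 options, and make sure nPx*nPy in the
> > SIZE.h you compile with matches the -np value you run with):
> > 
> >    cd verification/exp2/build
> >    ../../../tools/genmake2 -mpi -mods=../code
> >    make depend
> >    make
> >    cd ../run
> >    ln -s ../input/* .
> >    mpirun -np 2 ../build/mitgcmuv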
> > 
> > Cheers,
> > Jean-Michel
> > 
> > On Mon, Mar 17, 2014 at 12:07:10PM -0400, Chris Horvat wrote:
> > > Hi Jody and Martin,
> > >
> > > I was hoping that you would both be right, but I am in fact compiling with
> > > MPI turned on. STDOUT reports
> > >
> > > nTiles =    1 ; /* Total no. tiles per process ( = nSx*nSy ) */
> > > (PID.TID 0000.0001)   nProcs =   12 ; /* Total no. processes ( = nPx*nPy ) */
> > > (PID.TID 0000.0001) nThreads =    1 ; /* Total no. threads per process ( = nTx*nTy ) */
> > > (PID.TID 0000.0001) usingMPI =    T ; /* Flag used to control whether MPI is in use */
> > >
> > >
> > > and a few lines later...
> > >
> > >
> > > ======= Starting MPI parallel Run =========
> > >
> > >
> > > I've attached STDOUT.0000 to this email. I'm also submitting jobs with
> > >
> > > mpirun -np 12 ../build/mitgcmuv > xx 2>errors.txt
> > >
> > >
> > > Any thoughts?
> > >
> > >
> > > On Mar 14, 2014 1:21 AM, "Martin Losch" <Martin.Losch at awi.de> wrote:
> > >
> > > > Chris,
> > > >
> > > > in order to compile with MPI you need to specify "-mpi" at the genmake2
> > > > step:
> > > >
> > > > genmake2 -mpi (+all the options that you normally use)
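> > > > 
> > > > and then do a full rebuild, for example (a sketch, with your usual options
> > > > in place of the "..."):
> > > > 
> > > >    make Clean     (or start again from an empty build directory)
> > > >    genmake2 -mpi ...
> > > >    make depend
> > > >    make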
> > > >
> > > > Martin
> > > >
> > > > On Mar 14, 2014, at 2:34 AM, Klymak Jody <jklymak at uvic.ca> wrote:
> > > >
> > > > > Hi Chris.  Did you compile with MPI turned on?   Cheers.  Jody
> > > > >
> > > > > Sent from my iPhone
> > > > >
> > > > > On Mar 13, 2014, at 9:40, Chris Horvat <horvat at fas.harvard.edu> wrote:
> > > > >
> > > > >> Hi everyone,
> > > > >>
> > > > >> I've recently "upgraded" my simulations to a larger domain, and am now
> > > > >> tiling the domain and spreading execution over several CPUs. At the moment,
> > > > >> however, all of the tiles are producing the exact same output, so it
> > > > >> appears that the model is just simulating each small tile as its own
> > > > >> channel.
> > > > >>
> > > > >> This appears to be the case even when all packages are turned off. I've
> > > > >> attached my data file and SIZE.h to this email; any suggestions?
> > > > >>
> > > > >>
> > > > >> Chris
> > > > >>
> > > > >> <data>
> > > > >> <SIZE.h>
> > 
> > <6mode.tiff>
> 
> --
> Jody Klymak    
> http://web.uvic.ca/~jklymak/
> 

> _______________________________________________
> MITgcm-support mailing list
> MITgcm-support at mitgcm.org
> http://mitgcm.org/mailman/listinfo/mitgcm-support



