[MITgcm-devel] [MITgcm-cvs] MITgcm/eesupp/src CVS Commit

Menemenlis, Dimitris (3248) Dimitris.Menemenlis at jpl.nasa.gov
Mon Sep 30 20:25:43 EDT 2013


Hi Jean-Michel, Chris sent some code that will use process 0 and MPI
for reading the parameter files.  I will test it, then check it in.
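For reference, a minimal sketch of the read-on-process-0-and-broadcast pattern in Fortran/MPI; this is only an illustration of the general idea, not necessarily how Chris's code is structured, and the file name, unit number, and buffer size are made up:

```fortran
C     Sketch only: process 0 reads the parameter file into a
C     character buffer, then broadcasts it so the other ranks
C     never have to touch the disk.
      PROGRAM READ_AND_BCAST
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER myRank, mpiRC
      INTEGER nBytes
      PARAMETER ( nBytes = 65536 )
      CHARACTER*(nBytes) buf
      CALL MPI_INIT( mpiRC )
      CALL MPI_COMM_RANK( MPI_COMM_WORLD, myRank, mpiRC )
      IF ( myRank .EQ. 0 ) THEN
C       only process 0 opens and reads the file
        OPEN( 11, FILE='eedata', STATUS='OLD' )
C       ... read the file contents into buf ...
        CLOSE( 11 )
      ENDIF
C     every rank receives the same copy of the buffer
      CALL MPI_BCAST( buf, nBytes, MPI_CHARACTER, 0,
     &                MPI_COMM_WORLD, mpiRC )
      CALL MPI_FINALIZE( mpiRC )
      END
```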

Regarding having all procs reading eedata, the following
options were suggested by the NAS folks:
1- pre-process eedata to remove the comment lines;
2- use "!" for comments instead of "#"
3- use the /tmp directories associated with each node

We already tried option 1 and it worked fine, because the problem is with
writing scratch files to the same directory, not with reading the parameter files.
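For context on option 2: standard Fortran namelist input already treats everything after a "!" as a comment, so an eedata written like the fragment below could be read directly with a namelist READ, with no scratch-file preprocessing of "#" lines (the variable names and values here are illustrative only):

```
! illustrative eedata fragment using "!" comments
 &EEPARMS
 nTx=1,     ! threads in x
 nTy=1,     ! threads in y
 &
```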

Dimitris Menemenlis

On Sep 30, 2013, at 6:06 AM, Jean-Michel Campin wrote:

> Hi Dimitris,
> 
> On Sun, Sep 29, 2013 at 07:21:34PM +0000, Menemenlis, Dimitris (3248) wrote:
>> Jean-Michel thanks for taking a look.
>> I am still experimenting with this on lustre and pleiades
>> so will wait until it works before I modify again.
>> But I will definitely follow your advice below,
>> or revert to the old version if I don't succeed.
>> 
>> Two questions:
>> 
>> 1- Do we need to write scratch2 to disk?
>> Seems to be identical to original parameter files.
> 
> scratch2 seems to be used to report on STDOUT. My impression (but I 
> could be wrong; maybe Chris remembers why it's coded this way)
> is that, since what is written to STDOUT (scratch2) is processed the 
> same way (through a temp scratch copy) as what is really loaded (scratch1),
> this also allows one to check (by looking at STDOUT) that the temp scratch
> write/read works.
> 
>> 2- Is it OK to use
>> CALL MPI_BARRIER( MPI_COMM_MODEL, mpiRC )
>> in open_copy_data_file.F?
> I think it's what you want (no thread barrier but an MPI barrier),
> and, as I can see, you already made this change.
> 
> Just a detail (no effect since, as I mentioned, S/R OPEN_COPY_DATA_FILE
> is generally called from the master thread only): I would limit the MPI call
> to the master thread only (my experience is that it does not work very well to 
> allow all threads to do MPI calls), which means the _END_MASTER / _BEGIN_MASTER
> inside #ifdef SINGLE_DISK_IO could be removed.
> 
>> Sorry I thought this would be a trivial change
>> but seems like I opened a small can of worms.
>> I should not have checked in code so carelessly.
>> 
>> Dimitris Menemenlis
> 
> There is no problem here. I think that it would be nice to find a way to
> continue to have all procs reading eedata (I am not concerned about the other
> data* files, just eedata), but if this prevents running your set-up, we clearly 
> need to do something.
> 
> Cheers,
> Jean-Michel
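
As a sketch of Jean-Michel's suggestion above: keep the MPI barrier inside the master-only section instead of releasing it with _END_MASTER / _BEGIN_MASTER. The macro names, MPI_COMM_MODEL, and the CPP guard are taken from the thread; the exact placement inside open_copy_data_file.F is illustrative:

```fortran
C     Sketch: only the master thread issues the MPI barrier;
C     the _END_MASTER/_BEGIN_MASTER pair around the call (which
C     would let all threads reach it) is removed.
#ifdef ALLOW_USE_MPI
      _BEGIN_MASTER( myThid )
      CALL MPI_BARRIER( MPI_COMM_MODEL, mpiRC )
      _END_MASTER( myThid )
#endif
```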
