[MITgcm-devel] cs510 on IBM p690
Matthew Mazloff
mmazloff at MIT.EDU
Wed Feb 14 12:32:48 EST 2007
Hi Dimitris,
When using single-CPU I/O, defining DISABLE_MPI_READY_TO_RECEIVE can
really speed up the model when you have a large configuration (mine
is 8% faster). And it works fine as long as there is not much writing
at each time step. Each processor sends its data to the I/O processor
and then keeps going...there is no waiting for processor 0 to be
ready. The model waits at the end of each time step, however, for
all processors to finish, so the buffer does not get overloaded.
But if one is writing an excessive amount of data at a single time
step, the buffer can overflow. I find that when I am using the
divided adjoint with sea-ice there are a bunch of stores that all
happen at the same time step and the model crashes; then I have to
undefine DISABLE_MPI_READY_TO_RECEIVE. Maybe you turned it off when
you were writing a large amount of output for a visualization?
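For what it's worth, here is a rough sketch of the two patterns I
mean, in plain MPI C (this is not the actual MITgcm eesupp code, and
the function names and tags are made up for illustration). With the
handshake a worker blocks until the I/O rank says it is ready; with
DISABLE_MPI_READY_TO_RECEIVE the worker just posts its send and keeps
computing, and the end-of-step synchronization is what normally keeps
the buffers in check:

/* Toy sketch (plain MPI C, not MITgcm code) of the two send patterns. */
#include <mpi.h>

#define N 4
#define TAG_READY 1
#define TAG_DATA  2

/* Handshake version: wait for a "ready" token from the I/O rank, then send. */
static void send_with_handshake(const double *tile, int io_rank)
{
    int ready;
    MPI_Recv(&ready, 1, MPI_INT, io_rank, TAG_READY,
             MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Send(tile, N, MPI_DOUBLE, io_rank, TAG_DATA, MPI_COMM_WORLD);
}

/* No-handshake version: post the send and go straight back to computing;
 * the request is completed at the end of the time step. */
static void send_no_handshake(const double *tile, int io_rank, MPI_Request *req)
{
    MPI_Isend(tile, N, MPI_DOUBLE, io_rank, TAG_DATA, MPI_COMM_WORLD, req);
}

int main(int argc, char **argv)
{
    int rank, size;
    double tile[N] = {0};
    MPI_Request req = MPI_REQUEST_NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* I/O rank: hand out "ready" tokens, then collect one tile per worker. */
        int ready = 1;
        double recvbuf[N];
        for (int src = 1; src < size; src++)
            MPI_Send(&ready, 1, MPI_INT, src, TAG_READY, MPI_COMM_WORLD);
        for (int src = 1; src < size; src++)
            MPI_Recv(recvbuf, N, MPI_DOUBLE, src, TAG_DATA,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    } else {
        send_with_handshake(tile, 0);        /* default behaviour */
        /* send_no_handshake(tile, 0, &req);    what defining the flag switches to */
    }

    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* End-of-step synchronization that normally keeps buffers from piling up. */
    MPI_Barrier(MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}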
A smart thing to try would be to have processor 0 be solely the I/O
processor. In other words, give it a tile with no wet points so that
all it does is read and write...this may speed up the entire model.
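The idea is something like the toy sketch below (again plain MPI C,
not the real MITgcm tiling/exchange machinery; the loop structure and
names are hypothetical): rank 0 never touches the model state, it
only drains and writes the workers' tiles each step while everyone
else computes.

/* Toy sketch of a dedicated I/O rank (hypothetical layout, not MITgcm). */
#include <mpi.h>
#include <stdio.h>

#define N 4
#define TAG_DATA 2
#define NSTEPS 3

int main(int argc, char **argv)
{
    int rank, size;
    double buf[N];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    for (int step = 0; step < NSTEPS; step++) {
        if (rank == 0) {
            /* Dedicated I/O rank: no model work, just receive and write. */
            for (int src = 1; src < size; src++) {
                MPI_Recv(buf, N, MPI_DOUBLE, src, TAG_DATA,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("step %d: got tile from rank %d\n", step, src);
            }
        } else {
            /* Worker rank: advance the model (stand-in), then ship the tile. */
            for (int i = 0; i < N; i++) buf[i] = rank + step;
            MPI_Send(buf, N, MPI_DOUBLE, 0, TAG_DATA, MPI_COMM_WORLD);
        }
        MPI_Barrier(MPI_COMM_WORLD);  /* end-of-step sync */
    }

    MPI_Finalize();
    return 0;
}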
-Matt
On Feb 14, 2007, at 12:19 PM, Dimitris Menemenlis wrote:
>> So I haven't really been following...but, on occasion, I have run
>> into a few
>> problems with #define DISABLE_MPI_READY_TO_RECEIVE
>> Try undefining it for a run...this will prevent MPI buffer overflow
>
> Matt, you are reading my mind. That was the flag I was trying to
> remember when I wrote my previous message. But I checked and it
> is off by default in the cube-sphere setup, so that's not the cause
> of Martin's problem. More likely something along the lines of
> Chris' last message.
>
> Now if only I could remember why I defined
> DISABLE_MPI_READY_TO_RECEIVE for the cube sphere three months ago
> and then undefined it two months ago...
>
> D.