[MITgcm-devel] I/O

Menemenlis, Dimitris (329C) Dimitris.Menemenlis at jpl.nasa.gov
Fri Aug 11 15:47:32 EDT 2017


Hi Martin, I agree that asyncio is invasive and configuration-specific and that
what you suggest would be an improvement in terms of usability and portability.
Bron Nelson has cleaned up his asyncio code somewhat compared to what
is checked into MITgcm_contrib, but I have not had time to test it, and the code
remains invasive and configuration-specific.

The useSingleCpuIO flag is definitely not very efficient as the core count increases.
Way back (http://ecco2.org/manuscripts/2007/Hill_etal_07_SciProg.pdf)
Chris and I used MITgcm's capability to run in a mixed memory model
to force the model to do I/O from one core per shared-memory set of processors.
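(For reference, useSingleCpuIO is just a runtime switch; if I remember correctly it
goes in the PARM01 group of the "data" namelist, something like:

  &PARM01
   useSingleCpuIO=.TRUE.,
  &

With that set, global fields are gathered onto a single process for reading and
writing, which is presumably why it stops scaling at high core counts.)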

For asyncio we reserve extra CPU cores that do nothing but I/O.
For example, say we run an MITgcm configuration that
requires 19023 cores and submit the job with "mpiexec -n 20400 mitgcmuv".
This sets aside 1377 cores just for doing I/O.  During initialization,
asyncio spreads these 1377 cores across all of the nodes that
MITgcm is using for computation.  If the 20400 cores come from
1020 nodes with 20 cores each, the 1377 I/O cores will be distributed
as 1 core per node on 663 nodes and 2 cores per node on 357 nodes.
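(To make the bookkeeping explicit, here is a minimal Python sketch of the split,
not the actual asyncio code; distribute_io_ranks is just a made-up name:

  def distribute_io_ranks(total_ranks, compute_ranks, cores_per_node):
      io_ranks = total_ranks - compute_ranks   # 20400 - 19023 = 1377
      nodes = total_ranks // cores_per_node    # 20400 // 20 = 1020
      base = io_ranks // nodes                 # at least 1 I/O core on every node
      extra = io_ranks % nodes                 # 357 nodes carry one extra I/O core
      # map: I/O cores per node -> number of such nodes
      return {base: nodes - extra, base + 1: extra}

  print(distribute_io_ranks(20400, 19023, 20))  # -> {1: 663, 2: 357}

so every node ends up with at least one I/O core, and the remainder is spread
one extra core per node until it is used up.)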

Dimitris

On Aug 11, 2017, at 12:26 AM, Martin Losch <Martin.Losch at awi.de> wrote:

Hi Dimitris,

one of the reasons why I suggested this is that the stuff in code-async seems very invasive and configuration-specific to me, whereas what I suggest should work without too many changes in the code (but I am not so sure about that).
But, honestly, I don't really understand how "code-async" works. Do you reserve extra node(s) for this, or do you reserve extra CPUs on nodes that are already used by the model run? If it is the latter, it is almost exactly what I had in mind, and I should probably stay away from it, because it is too involved (given my limited understanding of this).

Martin
