[MITgcm-support] Better than expected HPC Scaling

Edward Doddridge edward.doddridge at utas.edu.au
Wed Oct 14 18:07:43 EDT 2020


Thanks Matt and Dimitris.

48 cores is definitely a small number for a job like this, but I wasn’t pushing the memory limits: these cores all have plenty of memory (4 GB per core). As for sharing the node, it’s a good thought, but each node has 48 cores (which is why I went in multiples of 48). I also tried to keep the tiles as square as possible. They weren’t always perfect, but the 480 core run actually had slightly squarer tiles than the 384 core run, and the 768 core run had perfectly square tiles.
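
For what it’s worth, here is a quick sketch of the kind of decomposition check I mean. The nPx x nPy splits it picks for the 600x800 grid are illustrative only; the actual SIZE.h settings for each run may have been different.

NX, NY = 600, 800   # horizontal grid of the test configuration

def squarest_split(ncores):
    """Return (nPx, nPy, sNx, sNy, aspect) for the squarest exact
    decomposition of the grid over ncores processes."""
    best = None
    for npx in range(1, ncores + 1):
        if ncores % npx or NX % npx:
            continue
        npy = ncores // npx
        if NY % npy:
            continue
        snx, sny = NX // npx, NY // npy
        aspect = max(snx, sny) / min(snx, sny)
        if best is None or aspect < best[4]:
            best = (npx, npy, snx, sny, aspect)
    return best

for ncores in (48, 96, 384, 480, 768):
    npx, npy, snx, sny, aspect = squarest_split(ncores)
    print(f"{ncores:4d} cores: {npx}x{npy} procs -> "
          f"{snx}x{sny} tiles (aspect {aspect:.2f})")

Picking the squarest exact split gives aspect ratios of 1.0 at 48 and 768 cores, 2.0 at 384 and 1.6 at 480, which is the pattern I described above, though the runs may not have used exactly these splits.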

“Another possibility is that this is within the machine noise”
That’s certainly a possibility. I haven’t rerun all of the tests, but I reran a couple and the timings were very similar. The variation wasn’t enough to pull the curve below the ideal scaling curve, so I don’t think this is noise; it seems to me that there is structure in the signal.

“Are you doing any I/O?”
I’m not doing much I/O. I set it to output a few monthly mean fields, but that was all. Using the timing breakdown in STDOUT, the time spent in “DO_THE_MODEL_IO” scales pretty well with the core count (see attached).
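
In case it’s useful, this is roughly how I pulled those numbers out of STDOUT. The exact layout of the timing summary can differ between setups, so the string patterns below are assumptions to check against your own STDOUT.0000.

import sys

def io_wallclock(stdout_path):
    """Return the wall-clock seconds reported for DO_THE_MODEL_IO in a
    MITgcm STDOUT file, or None if the section is not found."""
    in_section = False
    with open(stdout_path) as f:
        for line in f:
            if 'Seconds in section' in line:
                in_section = 'DO_THE_MODEL_IO' in line
            elif in_section and 'Wall clock time' in line:
                # the last token on the line should be the seconds
                try:
                    return float(line.split()[-1].replace('D', 'E'))
                except ValueError:
                    return None
    return None

# usage: python io_time.py run_048/STDOUT.0000 run_096/STDOUT.0000 ...
for path in sys.argv[1:]:
    print(path, io_wallclock(path))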

Cheers,
Ed


Dimitris Menemenlis menemenlis at jpl.nasa.gov
Tue Oct 13 00:38:33 EDT 2020

I like the theory of better cache utilization initially, from 48 to ~250 processors, then degrading at higher processor counts because of communications (and, to a lesser extent, the extra grid cells and hence extra computation in the overlap regions).
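
As a rough illustration of the overlap point, the fraction of extra cells computed in the halo grows quickly as the tiles shrink. The halo width of 4 and the tile shapes below are assumptions (they depend on the advection scheme and on the actual SIZE.h choices).

OLX = OLY = 4   # assumed overlap width; depends on the advection scheme

def halo_overhead(snx, sny, olx=OLX, oly=OLY):
    """Extra cells in the overlap region as a fraction of the interior
    cells the tile is actually responsible for."""
    interior = snx * sny
    padded = (snx + 2 * olx) * (sny + 2 * oly)
    return (padded - interior) / interior

# illustrative tile shapes from exact splits of the 600x800 grid
for ncores, (snx, sny) in [(48, (100, 100)), (384, (25, 50)),
                           (480, (25, 40)), (768, (25, 25))]:
    print(f"{ncores:4d} cores, {snx}x{sny} tiles: "
          f"{100 * halo_overhead(snx, sny):.0f}% extra cells in the halo")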

Are you doing any I/O?

> On Oct 12, 2020, at 9:24 PM, Matthew Mazloff <mmazloff at ucsd.edu> wrote:
>
> Hi Ed
>
> It depends on the machine you are running on. But it's only slightly better at 250 cores than at 48, and 48 does seem a bit low for a job of that size. Maybe you were pushing the memory limit? It is odd, but I suspect you are on a machine with very fast interconnects and I/O, and 250 is just more efficient.
>
> Another possibility is that this is within the machine noise. Or that you were sharing a node when you ran the 48 and 96 core jobs. Or that your tiles were very rectangular for the 48 and 96 core jobs, so you had more overlap.
>
> Matt
>
>


From: Edward Doddridge <edward.doddridge at utas.edu.au>
Date: Tuesday, 13 October 2020 at 15:00
To: "mitgcm-support at mitgcm.org" <mitgcm-support at mitgcm.org>
Subject: Better than expected HPC Scaling

Hi MITgcmers,

As part of an HPC bid I need to provide some scaling information for MITgcm on their cluster. The test configuration is a reentrant channel of 600x800x50 grid points, using just the ocean component and some idealised forcing fields. As I increased the core count from 48 to 384 the model scaled better than the theoretical ideal (see attached figure). I’m not complaining that it ran faster, but I was surprised. Any thoughts about what would cause this sort of behaviour? I wondered if it might be something to do with the tiles not fitting in the cache for the low core count simulations. The bid might be more convincing if I can give a plausible explanation for why the model scales better than ideal.
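
To put a rough number on the cache idea: the 3D working set per core shrinks from tens of MB at 48 cores to a few MB at 768, which is when it starts to approach the per-core share of L2/L3 on typical nodes. The sketch below is only a back-of-envelope estimate, and the field count is an assumption.

NX, NY, NZ = 600, 800, 50   # test-configuration grid
N_3D_FIELDS = 20            # assumed number of full 3D arrays touched per timestep
BYTES_PER_VALUE = 8         # real*8

for ncores in (48, 96, 384, 480, 768):
    cells_per_core = NX * NY * NZ / ncores
    footprint_mb = cells_per_core * N_3D_FIELDS * BYTES_PER_VALUE / 1e6
    print(f"{ncores:4d} cores: ~{footprint_mb:6.1f} MB of 3D state per core")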

Cheers,
Ed



Edward Doddridge
Research Associate and Theme Leader
Australian Antarctic Program Partnership (AAPP)
Institute for Marine and Antarctic Studies (IMAS)
University of Tasmania (UTAS)

doddridge.me


[Attachments scrubbed by the list archive: an HTML copy of this message, image001.png, and Screen Shot 2020-10-15 at 09.02.33.png]

