[MITgcm-support] [EXTERNAL] Better than expected HPC Scaling

Dimitris Menemenlis menemenlis at jpl.nasa.gov
Tue Oct 13 00:38:33 EDT 2020


I like the theory of better cache utilization initially, from 48 to ~250 processors, then degrading at higher processor counts because of communication costs (and, to a lesser extent, the extra grid cells and hence extra computations in the overlap regions).
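
For a rough sense of scale, here is a back-of-envelope sketch (Python) of the per-core working set for the 600x800x50 test domain; the number of 3-D state arrays and the cache sizes mentioned below are illustrative assumptions, not measured values:

# Back-of-envelope: per-core working set for a 600 x 800 x 50 domain.
# n_fields is an assumed count of 3-D double-precision state arrays.
nx, ny, nz = 600, 800, 50
n_fields = 30
bytes_per_val = 8

for ncores in (48, 96, 192, 250, 384):
    pts_per_core = nx * ny * nz / ncores
    mb = pts_per_core * n_fields * bytes_per_val / 1e6
    print("%4d cores: ~%6.1f MB per core" % (ncores, mb))

Under those assumptions the working set falls from ~120 MB per core at 48 cores to ~15-30 MB at 250-384 cores, i.e. it only starts to approach last-level-cache size at the higher core counts, which is when superlinear speedup would appear.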

Are you doing any I/O?

> On Oct 12, 2020, at 9:24 PM, Matthew Mazloff <mmazloff at ucsd.edu> wrote:
> 
> Hi Ed
> 
> It depends on the machine you are running on. But it's only slightly better at 250 than at 48, and 48 does seem a bit low for a job of that size. Maybe you were pushing max memory? It is odd, but I suspect you are on a machine that has very fast interconnects and I/O, and 250 is just more efficient.
> 
> Another possibility is that this is within the machine noise. Or that you were sharing a node when you ran the 48- and 96-core jobs. Or that your tiles were very rectangular for the 48- and 96-core jobs, so you had proportionally more overlap (see the sketch below).
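
A quick way to see the tile-shape effect: for equal-area tiles, the halo (overlap) cells grow as tiles become elongated. A minimal sketch, assuming a 600x800 horizontal grid on 48 cores and an overlap width of 3; the two decompositions below are hypothetical:

# Halo overhead for equal-area tiles of different shape (overlap w=3).
def halo_fraction(sx, sy, w=3):
    # extra overlap cells relative to the tile interior
    return ((sx + 2*w) * (sy + 2*w) - sx * sy) / (sx * sy)

print(halo_fraction(100, 100))  # 6 x 8 decomposition, 100 x 100 tiles: ~0.12
print(halo_fraction(25, 400))   # 24 x 2 decomposition, 25 x 400 tiles: ~0.26

So a strip-like decomposition does roughly twice the redundant halo work of a near-square one, even though both use 48 tiles.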
> 
> Matt
> 
> 
>> On Oct 12, 2020, at 9:00 PM, Edward Doddridge <edward.doddridge at utas.edu.au> wrote:
>> 
>> Hi MITgcmers,
>>  
>> As part of an HPC bid I need to provide some scaling information for MITgcm on their cluster. The test configuration is a reentrant channel, 600x800x50 grid points, using just the ocean component and some idealised forcing fields. As I increased the core count from 48 to 384, the model scaled better than the theoretical ideal (see attached figure). I’m not complaining that it ran faster, but I was surprised. Any thoughts about what would cause this sort of behaviour? I wondered if it might be something to do with the tiles not fitting in the cache for the low core count simulations. The bid might be more convincing if I can give a plausible explanation for why the model scales better than ideal.
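
To make "better than theoretical" concrete: with wall-clock time t(n) on n cores, speedup relative to the 48-core baseline is t(48)/t(n), and parallel efficiency is that speedup divided by n/48; superlinear scaling means efficiency above 1. A minimal sketch with made-up timings (the real numbers are in the attached scaling.pdf), purely to show the calculation:

# Speedup and efficiency relative to a 48-core baseline.
# Timings are hypothetical placeholders (seconds of wall-clock time).
base = 48
t = {48: 1000.0, 96: 480.0, 192: 230.0, 384: 125.0}
for n in sorted(t):
    speedup = t[base] / t[n]
    efficiency = speedup / (n / base)
    print("%4d cores: speedup %5.2f, efficiency %4.2f" % (n, speedup, efficiency))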
>>  
>> Cheers,
>> Ed
>>  
>>  
>> Edward Doddridge
>> Research Associate and Theme Leader
>> Australian Antarctic Program Partnership (AAPP)
>> Institute for Marine and Antarctic Studies (IMAS)
>> University of Tasmania (UTAS)
>>  
>> doddridge.me
>> 
>> 
>> <scaling.pdf>_______________________________________________
>> MITgcm-support mailing list
>> MITgcm-support at mitgcm.org
>> http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support
> _______________________________________________
> MITgcm-support mailing list
> MITgcm-support at mitgcm.org
> http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support


