[MITgcm-support] Reducing Runtime for High Resolution Model
Dimitris Menemenlis
dmenemenlis at gmail.com
Sun Mar 2 20:53:45 EST 2025
I am assuming that you have already done the obvious: split your 556x3396 domain into smaller tiles and increased the number of processes you run on, so as to reduce the computational load per process?
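For reference, a minimal SIZE.h sketch of one such decomposition. The 139x283 tile size and the 4x12 process grid (48 MPI processes) are illustrative choices that happen to divide the domain evenly (4*139 = 556, 12*283 = 3396), not a recommendation tuned to any particular machine:

C     Hypothetical decomposition: 48 MPI processes, one 139x283
C     tile per process, covering the full 556x3396 domain.
C     OLx/OLy = 4 is a wide overlap, enough for biharmonic
C     operators; a smaller overlap may suffice for your setup.
      INTEGER sNx, sNy, OLx, OLy, nSx, nSy, nPx, nPy, Nx, Ny, Nr
      PARAMETER (
     &           sNx = 139,
     &           sNy = 283,
     &           OLx =   4,
     &           OLy =   4,
     &           nSx =   1,
     &           nSy =   1,
     &           nPx =   4,
     &           nPy =  12,
     &           Nx  = sNx*nSx*nPx,
     &           Ny  = sNy*nSy*nPy,
     &           Nr  =  32 )

Any factorization that divides 556 and 3396 works; the trade-off is communication overhead (many small tiles) versus load per process (few large tiles).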
> On Feb 25, 2025, at 5:11 AM, Nadav Mantel <nadav.mantel at mail.huji.ac.il> wrote:
>
> I am attempting to run a high-resolution model of the Gulf of Aqaba. We are trying to simulate a desalination plant's long-term effects, but addMass on a coarse grid does not capture the salt plume: injecting it at a single point only slightly increases the salinity of one large cell. We therefore increased the horizontal grid resolution to 60x60 meters, with 32 depth levels at depths ranging from 5 meters to 255 meters. Due to the high resolution, the CFL condition requires a maximum timestep of 30 sec before the model crashes. We want to run a hydrostatic simulation for two years, which will result in a very long simulation.
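For context, the back-of-envelope advective CFL limit that produces a number like this is roughly

    deltaT <= C * dx / |u|
           ~  0.5 * 60 m / (1 m/s)  =  30 s

where the CFL number C ~ 0.5 and the peak current |u| ~ 1 m/s are assumed, illustrative values rather than numbers taken from the setup described above.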
>
> Does anyone have any tips on how to reduce runtime? We tried to increase the timestep by changing the viscA4Max and viscAhMax values and also by setting useFullLeith, but they didn't really help.
>
> delX = 556*60,
> delY = 3396*60,
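For anyone trying to reproduce the viscosity experiment mentioned above, the relevant runtime parameters live in data/PARM01. A minimal sketch, with illustrative values only (not Nadav's actual settings); the caps must respect the stability limits set by dx = 60 m and deltaT = 30 s, which are of order dx^2/deltaT for the Laplacian cap and dx^4/deltaT for the biharmonic cap:

 # Illustrative Leith viscosity settings; values are assumptions,
 # not taken from the setup described in the message above.
 &PARM01
  useFullLeith = .TRUE.,
  viscC2Leith  = 2.,
  viscC2LeithD = 2.,
  viscAhMax    = 15.,
  viscA4Max    = 1.E4,
 &

Note that raising the viscosity caps mainly buys stability margin; it does not relax the advective CFL constraint itself, which is why such changes often do not allow a larger timestep.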