<div dir="ltr">Hi Robin,<div><br></div><div>Not an answer to your question but if your simulations are non-hydrostatic on regular Cartesian grids then you might be able to run Oceananigans.jl on a GPU: <a href="https://github.com/CliMA/Oceananigans.jl">https://github.com/CliMA/Oceananigans.jl</a></div><div><br></div><div>It uses mostly the same finite volume algorithm as MITgcm but with a more efficient pressure solver since it focuses on regular Cartesian grids. It's more for ocean process studies, i.e. simulation patches of ocean or incompressible fluid. Unfortunately, Oceananigans.jl does not run simulations on the sphere so it may not be suitable for your work.</div><div><br></div><div>But if you think it might benefit your work, I'm more than happy to help out with setting up your problem in Oceananigans.jl and we can try running it on your GPU.</div><div><br></div><div>Cheers,</div><div>Ali</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Aug 4, 2020 at 12:50 PM Robin Despouys <<a href="mailto:robin.despouys@gmail.com">robin.despouys@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Hello Dear Community, <div><br></div><div>I would like to know if there is any current work on trying to use Graphics Processing Units in order to accelerate the simulations on personal desktop. </div><div><br></div><div>I just discovered this incredible work, MITgcm, thanks to a friend who is doing simulations for his PhD. We managed to run his simulations on my Computer but due to the limitations of my CPU 4 cores, 4 Ghz I theoretically have 64 GFLOPS on best case scenario, and according to the specs of my GPU I could theoretically have 7 TFLOPS (again best case scenario). </div><div><br></div><div>So today we started a simulation using MPI and 4 cores, my friend says it will take roughly 4 days. So with my simple supposition and ignoring all the drawbacks and limitations due to the synchronisation of highly multithreaded computation we could run this simulation 100 times faster which would reduce the computation time to ~ 1 hour.</div><div><br></div><div>I know that we could simply use MPI with a cluster of machines or with a super-calculator.</div><div>I can't help but think : that there might be a way to harness the power of GPUs by using an implementation of the OpenCL standard in C and make it "callable" through the wrapper without having to change the numerical input models. </div><div><br></div><div>I am very new to FORTRAN and I have a reasonable understanding of C but at the moment I have a very approximate knowledge of the Software Architecture of MITGcm. Then... maybe what I am asking for is impossible.</div><div><br></div><div>But if it is possible! Perhaps some of you could point me at the parts I should focus on what/where should I update/add some code. (Yup it is a very broad question^_^). </div><div><br></div><div>Best regards, </div><div><br></div><div>Robin</div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div><br></div></div>