[MITgcm-support] Wrapper bindings to OpenCL C library - The possibilities of GPGPU?

Robin Despouys robin.despouys at gmail.com
Wed Aug 5 04:25:17 EDT 2020


Hello,

@ Ali, the Oceananigans.jl project is very cool and active. This is a very
promising resource. I think we might be able to run part of the
simulations with this tool. I am not entirely sure, since I am neither a
mathematician nor a data scientist. If we do use Oceananigans.jl we
might contact you in the future for a little help in case we struggle. Even
if it's not an answer to my question, it helps us. Thanks a lot :).

@ Jody, very relevant questions. I learned two or three interesting things
;)

Best regards,
Robin

On Tue, Aug 4, 2020 at 8:15 PM, Ali Ramadhan <alir at mit.edu> wrote:

> Hi Jody,
>
> Thank you for your interest! Perhaps opening a GitHub issue on the
> repository to discuss further (and make the discussion public) would be
> better but to answer your questions:
>
> 1) Yes so right now it's constant grid spacing only (Δx, Δy, Δz can be
> different of course). We have a branch that adds a vertically stretched
> Cartesian grid, i.e. variable vertical grid spacing, but it still needs to
> be fully tested. We currently have no plans to add grids other than
> constant and vertically stretched as we can still use a fast/efficient
> pressure solver based on FFTs with one stretched dimension. But with two or
> three stretched dimensions we would have to implement an iterative solver,
> e.g. a conjugate gradient method as in MITgcm, which would be much slower. And for
> now we're not tackling problems that require variable spacings in all three
> dimensions so we haven't thought about supporting it yet.
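> As a rough illustration of why one stretched dimension keeps the pressure
> solve direct (a minimal NumPy sketch, not Oceananigans code; the grid
> sizes, stretching function, and boundary conditions here are made up):
> after an FFT in the uniform, periodic x direction, the Poisson problem
> decouples into one tridiagonal system per wavenumber along the stretched
> z direction, each solvable directly. With two or three stretched
> dimensions that decoupling is lost, hence the need for an iterative solver.

```python
import numpy as np

nx, nz, Lx = 64, 32, 2 * np.pi

# Vertical grid stretched toward z = 0: interfaces, centers, spacings.
zf = np.linspace(0, 1, nz + 1) ** 1.5      # cell interfaces
zc = 0.5 * (zf[:-1] + zf[1:])              # cell centers
dzf = np.diff(zf)                          # cell heights
dzc = np.diff(zc)                          # center-to-center spacings

x = np.linspace(0, Lx, nx, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)

# Some right-hand side f(x, z); FFT along the uniform, periodic x direction.
f = np.cos(x)[:, None] * np.cos(np.pi * zc)[None, :]
fh = np.fft.fft(f, axis=0)

# One small tridiagonal solve per x wavenumber (dense solve here for
# brevity; a real implementation would use the Thomas algorithm).
ph = np.zeros_like(fh)
for i, k in enumerate(kx):
    A = np.zeros((nz, nz), dtype=complex)
    for j in range(nz):
        lo = 1.0 / (dzc[j - 1] * dzf[j]) if j > 0 else 0.0       # coupling to j-1
        hi = 1.0 / (dzc[j] * dzf[j]) if j < nz - 1 else 0.0      # coupling to j+1
        A[j, j] = -(lo + hi) - k**2        # Neumann (insulating) top/bottom
        if j > 0:
            A[j, j - 1] = lo
        if j < nz - 1:
            A[j, j + 1] = hi
    if i == 0:
        A[0, 0] -= 1.0  # pin the k = 0 mode: pressure is defined up to a constant
    ph[i] = np.linalg.solve(A, fh[i])

p = np.real(np.fft.ifft(ph, axis=0))  # pressure on the stretched grid
```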
>
> 2) Not officially in that there's no API to specify topography but we've
> been using the forcing functions API to implement a continuous forcing
> immersed boundary method to add topography. Basically you strongly relax
> the velocity to zero inside the topography. We haven't been tackling
> problems with topography yet, so we haven't fleshed out how we handle it,
> but we're hoping to add some tests to see whether this immersed boundary
> method can work well in oceanographic settings.
>
> As an example, this PR adds a test case for viscous flow around a
> cylinder: https://github.com/CliMA/Oceananigans.jl/pull/693
>
> These are the pertinent lines (the PR needs a bit of updating, as the API
> has changed and, we think, improved):
> https://github.com/CliMA/Oceananigans.jl/pull/693/files#diff-7331d4b1c4f866ed7dfe0952578bf1cfR18-R27
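> To make the idea concrete, here is a minimal toy sketch in plain NumPy
> (this is not the Oceananigans forcing-function API; the mask shape, time
> step, and relaxation time scale tau are made-up numbers): inside a solid
> mask, the forcing relaxes the velocity toward zero on a time scale tau
> much shorter than the flow's.

```python
import numpy as np

def immersed_forcing(u, solid_mask, tau):
    """Continuous forcing: damp velocity toward zero inside the solid."""
    return -solid_mask * u / tau

nx, dt, tau = 100, 1e-4, 1e-3
x = np.linspace(0, 1, nx)
solid = ((x > 0.4) & (x < 0.6)).astype(float)  # a 1D stand-in for topography
u = np.ones(nx)                                # uniform initial velocity

# Forward-Euler steps with only the immersed-boundary forcing acting:
for _ in range(1000):
    u = u + dt * immersed_forcing(u, solid, tau)

# u is driven essentially to zero inside the solid, unchanged outside.
```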
>
> We're hoping to slowly expand the scope of what Oceananigans can do as we
> tackle more complicated problems but for now trying to stick to
> configurations where we can leverage the GPU using solvers we've already
> implemented.
>
> That said, we're more than happy to discuss new features. Sometimes
> they're not difficult to add, and sometimes they can be implemented using
> an existing API.
>
> Cheers,
> Ali
>
>
> On Tue, Aug 4, 2020 at 1:44 PM Jody Klymak <jklymak at uvic.ca> wrote:
>
>> Hi Ali,
>>
>> Thanks for the link - that looks pretty fun.
>>
>> Happy to discuss somewhere more appropriate if you send me there, but a
>> couple of questions:
>>
>> 1) Are grids regular in that you can’t specify variable grid spacing
>> (even if it is still Cartesian)?
>>
>> 2) It seems maybe no topography?
>>
>> I think both 1 and 2 make it really hard to use GPUs or spectral methods
>> in general, but I could very well be ignorant of recent advances.
>>
>> Thanks,   Jody
>>
>> On 4 Aug 2020, at 10:27, Ali Ramadhan <alir at mit.edu> wrote:
>>
>> Hi Robin,
>>
>> Not an answer to your question but if your simulations are
>> non-hydrostatic on regular Cartesian grids then you might be able to run
>> Oceananigans.jl on a GPU: https://github.com/CliMA/Oceananigans.jl
>>
>> It uses mostly the same finite volume algorithm as MITgcm but with a more
>> efficient pressure solver since it focuses on regular Cartesian grids. It's
>> more for ocean process studies, i.e. simulating patches of ocean or
>> incompressible fluid. Unfortunately, Oceananigans.jl does not run
>> simulations on the sphere so it may not be suitable for your work.
>>
>> But if you think it might benefit your work, I'm more than happy to help
>> out with setting up your problem in Oceananigans.jl and we can try running
>> it on your GPU.
>>
>> Cheers,
>> Ali
>>
>> On Tue, Aug 4, 2020 at 12:50 PM Robin Despouys <robin.despouys at gmail.com>
>> wrote:
>>
>>> Hello Dear Community,
>>>
>>> I would like to know if there is any current work on trying to use
>>> Graphics Processing Units in order to accelerate the simulations on
>>> a personal desktop.
>>>
>>> I just discovered this incredible work, MITgcm, thanks to a friend who
>>> is doing simulations for his PhD. We managed to run his simulations on my
>>> computer, but my CPU is limited: with 4 cores at 4 GHz I theoretically
>>> have 64 GFLOPS in the best case, whereas according to its specs my GPU
>>> could theoretically reach 7 TFLOPS (again, best case).
>>>
>>> So today we started a simulation using MPI and 4 cores; my friend says
>>> it will take roughly 4 days. Under my simple assumption, ignoring all the
>>> drawbacks and limitations due to synchronising highly multithreaded
>>> computation, we could run this simulation about 100 times faster, which
>>> would reduce the computation time to ~1 hour.
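>>> As a quick script, the back-of-envelope estimate above (peak numbers
>>> only; the FLOPs-per-cycle factor is an assumption, and real GPU speedups
>>> for memory-bandwidth-bound codes like ocean models are usually far below
>>> the peak-FLOPS ratio):

```python
# Peak-throughput comparison using the figures from the message; the
# 4 FLOPs/cycle/core factor is an assumption (SIMD/FMA details vary).
cpu_gflops = 4 * 4.0 * 4.0         # 4 cores x 4 GHz x ~4 FLOPs/cycle = 64 GFLOPS
gpu_gflops = 7000.0                # 7 TFLOPS peak

speedup = gpu_gflops / cpu_gflops  # ~109x
cpu_days = 4.0
gpu_hours = cpu_days * 24.0 / speedup

print(f"speedup ~{speedup:.0f}x, runtime ~{gpu_hours:.1f} h")  # ~109x, ~0.9 h
```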
>>>
>>> I know that we could simply use MPI with a cluster of machines or with a
>>> supercomputer.
>>> I can't help but think that there might be a way to harness the power
>>> of GPUs by using an implementation of the OpenCL standard in C and making
>>> it "callable" through the wrapper without having to change the numerical
>>> input models.
>>>
>>> I am very new to Fortran and I have a reasonable understanding of C, but
>>> at the moment I have only a very approximate knowledge of the software
>>> architecture of MITgcm. So... maybe what I am asking for is impossible.
>>>
>>> But if it is possible, perhaps some of you could point me at the parts I
>>> should focus on, i.e. what/where I should update or add some code. (Yup,
>>> it is a very broad question ^_^).
>>>
>>> Best regards,
>>>
>>> Robin
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> MITgcm-support mailing list
>>> MITgcm-support at mitgcm.org
>>> http://mailman.mitgcm.org/mailman/listinfo/mitgcm-support
>>>
>>
>>
>> --
>> Jody Klymak
>> http://ocean-physics.seos.uvic.ca/~jklymak/
>>
>>
>>
>>
>>
>>
>>
>

