[MITgcm-support] MITgcm-support Digest, Vol 197, Issue 15

Martin Losch Martin.Losch at awi.de
Wed Nov 27 04:24:34 EST 2019


Hi David,

I’d be interested to know what parameter values you chose in the end for SEAICE_LSRrelaxU and SEAICE_LSRrelaxV. If these values are very small, the update in the linear solver practically vanishes (the update looks like this: uIce = uIce + SEAICE_LSRrelaxU*(uNew-uIce)), so you do not solve the linear system but instead operate with nearly unchanged ice drift. I am not surprised that you can suppress any numerical instability in this way, but it comes at the cost of not solving the linear problem. Or do you actually reach convergence?
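
For illustration, a made-up scalar example (not MITgcm code; only the update formula mirrors the one above): with a relaxation factor much smaller than one, the iterate barely moves and the residual hardly decreases, no matter how many iterations you take.

program relax_demo
  ! sketch of the relaxed update u = u + w*(uNew - u) for a scalar equation (a+c)*u = b
  implicit none
  real*8, parameter :: a = 2.d0, c = 1.d0, b = 3.d0   ! exact solution u = 1
  real*8 :: u, uNew, w, res
  integer :: j, k
  do j = 1, 2
    if ( j .eq. 1 ) w = 0.9d0     ! typical underrelaxation
    if ( j .eq. 2 ) w = 0.01d0    ! orders of magnitude smaller
    u = 0.d0
    do k = 1, 50
      uNew = ( b - c*u )/a        ! unrelaxed solver update (plays the role of uNew above)
      u    = u + w*( uNew - u )   ! relaxed update, cf. SEAICE_LSRrelaxU
    end do
    res = abs( (a + c)*u - b )
    print '(a,f5.2,a,es10.3)', ' w =', w, ',  residual after 50 iterations:', res
  end do
end program relax_demo

With w = 0.9 the residual is essentially zero after 50 iterations; with w = 0.01 it is still of order one, i.e. the linear system is simply not solved.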

Zhang and Hibler (1997) (which is the basis of seaice_lsr) describe an over-relaxation method (LSOR) where LSRrelaxU/V are > 1 (but not by much). This can really speed up the convergence of the linear problem, but it does not work with parallelized code, so by default we actually use under-relaxation (LSRrelaxU/V < 1), and as far as I know Jinlun Zhang does the same in his parallel code. In my experience this works quite stably (we run configurations with grid spacings down to tens of meters), and so far any instabilities have been associated with other problems (or real bugs in the code). I sincerely hope that this is also the case for your setup. If you can come up with an example that I can run with the current code, I am happy to help with debugging.

In Mari’s case the NaNs go away as soon as the forcing is clean (i.e., does not contain any NaNs anymore).

Martin 

> On 27. Nov 2019, at 08:14, David Vishny <davidvish at gmail.com> wrote:
> 
> Hi Mari,
> 
> I did manage to solve my problem, and I believe it was caused by numerical instability in the LSR sea ice dynamics solver. I played around with the relaxation parameters SEAICE_LSRrelaxU and SEAICE_LSRrelaxV until I found values that stabilized the LSR solver. Strangely, I needed to make these parameters smaller by a few orders of magnitude to obtain stability, but I don’t think I caused any new problems by making this change. From what I have read online about relaxation parameters, their values only affect the rate at which the algorithm converges, as long as those values fall within the range of stability. Furthermore, my computation did not slow down too much as a result.
> 
> So perhaps you could play around with the relaxation parameters, and see if this solves your problem?
> 
> Cheers,
> David
>> On Nov 26, 2019, at 11:00 AM, mitgcm-support-request at mitgcm.org wrote:
>> Hi again,
>> 
>> The source of this problem (NaN in the solution) was NaNs already present in the wind forcing files (so that's something to check before really digging into the code, as I did). In this context, I recommend that atmospheric forcing fields (from reanalysis) be defined over the entire model domain. In particular, wind forcing may be averaged to velocity grid points in pkg/exf, so that masking your atmospheric data with a land-sea mask can lead to unexpected results near the coasts.
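>>
>> As a made-up illustration of what can go wrong near a coast (this is a sketch, not the pkg/exf code; the actual interpolation depends on your grid settings): a cell-centered zonal wind that is masked with NaN over land is averaged to the C-grid u-points, and the wet u-point right at the coast inherits the NaN.
>>
>> program mask_demo
>>   use ieee_arithmetic, only: ieee_value, ieee_quiet_nan
>>   implicit none
>>   integer, parameter :: nx = 6
>>   real*8 :: uwC(nx), uwU(2:nx)
>>   integer :: i
>>   uwC(:)   = 10.d0                              ! 10 m/s over the ocean
>>   uwC(1:2) = ieee_value(0.d0, ieee_quiet_nan)   ! land cells masked with NaN
>>   do i = 2, nx
>>     uwU(i) = 0.5d0*( uwC(i-1) + uwC(i) )        ! average to u-points
>>   end do
>>   print '(a,5f8.2)', ' wind at u-points:', (uwU(i), i=2,nx)
>> end program mask_demo
>>
>> The u-point at i=3 becomes NaN although that cell is ocean; defining the forcing over the entire model domain avoids this.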
>> 
>> Martin
>> 
>>> On 22. Nov 2019, at 16:07, Martin Losch <Martin.Losch at awi.de> wrote:
>>> 
>>> Dear Mari,
>>> 
>>> If it happens in the first time step, the problem shouldn't be too hard to find, but without the complete setup I cannot do much. Can you provide a minimal configuration (not to this list, but somewhere to download) that reproduces your problem?
>>> In the meantime, can you set debugLevel = 5 to get more output, and also dumpFreq = 1. (or anything smaller than deltaT)? Then you might be able to figure out where exactly the NaNs appear first. Often the sea ice code yields NaNs because it's called at the beginning of the time step and picks up NaNs from the previous time step or from the initialization (if there are any).
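>>>
>>> Something like this in "data" should do it (from memory, so please double-check the namelist groups):
>>>
>>>  &PARM01
>>>   debugLevel = 5,
>>> # (rest of PARM01 unchanged)
>>>  &PARM03
>>>   dumpFreq   = 1.,
>>> # (rest of PARM03 unchanged)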
>>> 
>>> Martin
>>> 
>>> PS: The following points may not be related to your specific problem, but I comment on them anyway:
>>> - you use insufficient forcing files (downward radiation and specific humidity are missing, although they are required by the flags in EXF_OPTIONS.h); this can lead to very large heat fluxes.
>>> 
>>> precipfile        = 'EmP.bin',
>>> implies the wrong sign (I don't know what's in EmP, but I assume E minus P, i.e. evaporation minus precipitation); precipfile expects positive values for precipitation.
>>> 
>>> # this turns off useful checks (e.g. whether heat fluxes are ridiculous, which would usually lead to the model blowing up)
>>> useExfCheckRange  = .FALSE.,
>>> 
>>> # don't set these unless you know what you are doing; they default to the global 'deltaT' anyway
>>> SEAICE_deltaTtherm = 200.,
>>> SEAICE_deltaTdyn   = 200.,
>>> 
>>> From your parameters I estimate a CFL number of about 0.04 for a u-velocity of 1 m/s (u*deltaT/dx = 1 m/s * 200 s / 4900 m), so that should be OK for the first few time steps, but when you have a large adjustment problem, this may lead to instability.
>>> 
>>> # as a rule of thumb, the ratio of two adjacent dz should not exceed 1.4 (for numerical accuracy), i.e. dz(k+1)/dz(k) < 1.4
>>> delZ=5,10,10,10,25,15*50,4*200,6*400,
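>>>
>>> A quick stand-alone check of this list (just a sketch; the dz values are copied from the line above):
>>>
>>> program check_dz
>>>   implicit none
>>>   real*8 :: dz(30)
>>>   integer :: k
>>>   dz(1:5)   = (/ 5.d0, 10.d0, 10.d0, 10.d0, 25.d0 /)
>>>   dz(6:20)  = 50.d0     ! 15*50
>>>   dz(21:24) = 200.d0    ! 4*200
>>>   dz(25:30) = 400.d0    ! 6*400
>>>   do k = 1, 29
>>>     if ( dz(k+1)/dz(k) .gt. 1.4d0 ) print '(a,i3,a,f5.2)', &
>>>        ' k =', k, ',  dz(k+1)/dz(k) =', dz(k+1)/dz(k)
>>>   end do
>>> end program check_dz
>>>
>>> This flags the transitions at k = 1, 4, 5, 20 and 24, all with ratios of 2 or more.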
>>> 
>>> # why do you do that? That can mask problems
>>> checkIniTemp=.false.,
>>> 
>>> The grid you are using will have strongly elongated (non-square) cells, because dx = 0.08333 * cos(lat) * pi/180 * 6371 km = 4.9 km, while dy = 0.08333 * pi/180 * 6371 km = 9.2 km. It can be difficult to find viscosity parameters that damp enough in the y-direction but are small enough in the x-direction to avoid numerical instability due to too-large viscosity. I would use a grid with nearly square cells if possible (i.e. scale dy by cos(lat) as well).
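>>>
>>> (In the following sketch the latitude of 58N is only my guess, chosen because it reproduces the 4.9 km; plug in the values of your own domain.)
>>>
>>> program grid_spacing
>>>   implicit none
>>>   real*8, parameter :: pi = 3.14159265358979d0, rSphere = 6371.d3
>>>   real*8 :: dLon, dLat, lat, dx, dy
>>>   dLon = 0.08333d0              ! grid spacing in longitude (degrees)
>>>   dLat = 0.08333d0              ! grid spacing in latitude (degrees)
>>>   lat  = 58.d0                  ! assumed latitude, not from your files
>>>   dy   = dLat*pi/180.d0 * rSphere                        ! ~9.2 km at any latitude
>>>   dx   = dLon*pi/180.d0 * rSphere * cos(lat*pi/180.d0)   ! ~4.9 km at 58N
>>>   print '(a,2f8.2)', ' dx, dy in km:', dx/1.d3, dy/1.d3
>>> end program grid_spacing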
>>> 
>>> 
>>> 
>>>> On 21. Nov 2019, at 17:58, Mari Fjalstad Jensen <Mari.F.Jensen at uib.no> wrote:
>>>> 
>>>> Dear Martin and David (or others)
>>>> 
>>>> Did you ever solve the problem below? I am having the same problem: NaNs after one time step when sea ice dynamics are enabled. The simulation runs fine when sea ice dynamics are turned off, or if there is no wind forcing.
>>>> 
>>>> In addition, I recently switched from a Cartesian domain with smooth, simple bathymetry (flat bottom + sloping sides) to a spherical-polar domain with realistic bathymetry. In the previous setup, the sea ice package was working fine in the presence of wind and sea ice dynamics.
>>>> 
>>>> Attached are data* and *OPTIONS files
>>>> 
>>>> Cheers and thanks,
>>>> Mari
>>>> 
>>>> 
>>>> 
>>>> On 24/08/2019 08.55, Martin Losch wrote:
>>>>> Hi David,
>>>>> I would need more details to be able to help: the configuration (code directory, namelist files, model version), and what exactly you changed relative to a working configuration.
>>>>> 
>>>>> Martin
>>>>> 
>>>>>> On 24. Aug 2019, at 00:07, David Vishny <davidvish at gmail.com> wrote:
>>>>>> 
>>>>>> To whom it may concern,
>>>>>> 
>>>>>> I recently changed the grid configuration in my model setup, and now the model outputs all NaNs after a single time step if sea ice dynamics are enabled, even though there is no sea ice to begin with. If I disable sea ice dynamics, it appears that no problems occur. Thus, my advisor Malte Jansen and I believe the LSR solver is producing NaNs.
>>>>>> 
>>>>>> I know one parameter related to the LSR solver is the LSR error. For any given grid configuration, could the solver crash due to an LSR error that is either too high or too low? Are there any other parameters I should be playing around with that are related to the LSR solver?
>>>>>> 
>>>>>> 
>>>>>> Thanks,
>>>>>> 
>>>>>> David Vishny
> 


