[MITgcm-support] SSH blow-up at Orlanski BC with small deltaT
Martin.Losch at awi.de
Mon Jul 22 10:32:13 EDT 2019
Did you get an answer to this? Were you able to figure something out yourself?
The usual procedure is to turn off the suspicious code (in this case Orlanski) and see if the problem persists. With a grid spacing of 1 km you should get away with time steps larger than 2 s, at least after the initial adjustment, unless you excite fast waves.
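As a sanity check, the relevant CFL limits for a 1 km grid can be estimated on the back of an envelope. A minimal sketch in Python; note that u_max and the depth H are assumed placeholder values, not numbers from this thread:

```python
# Rough CFL-based time-step estimates for a 1 km grid (illustrative only;
# u_max, g, and H are assumptions, not values from this model setup).
import math

dx = 1000.0         # horizontal grid spacing [m]
u_max = 1.0         # assumed maximum advective velocity [m/s]
g, H = 9.81, 500.0  # gravity [m/s^2], assumed water depth [m]

dt_advective = dx / u_max   # advective CFL limit
c_ext = math.sqrt(g * H)    # external gravity-wave speed
dt_external = dx / c_ext    # matters only with an explicit free surface

print(f"advective limit:     dt < {dt_advective:.0f} s")
print(f"external-wave limit: dt < {dt_external:.1f} s")
```

Both estimates are far above 2 s, which is the point: on this grid the time step should not need to be that small unless something else (fast waves, the boundary code) is exciting the instability.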
Maybe you can use a sponge layer (either the obcs version or with rbcs) to damp the waves that you may have.
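For the obcs version, the sponge is configured in data.obcs. A hypothetical fragment (the namelist groups and parameter names are from pkg/obcs, but the thickness and relaxation time scales below are illustrative assumptions, not tuned values for this setup):

```
 &OBCS_PARM01
 useOBCSsponge = .TRUE.,
 &

 &OBCS_PARM03
 spongeThickness = 10,
 Urelaxobcsbound = 43200.,
 Vrelaxobcsbound = 43200.,
 Urelaxobcsinner = 864000.,
 Vrelaxobcsinner = 864000.,
 &
```

The sponge relaxes the velocity field toward the prescribed boundary values over spongeThickness grid points, with a short relaxation time at the boundary and a longer one at the inner edge.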
> On 7. Jun 2019, at 00:52, Yilang Xu <yxu at whoi.edu> wrote:
> Hi everyone,
> The subject title sounds crazy, but I did encounter this problem. I set up a high-resolution experiment (1-2 km horizontal grid spacing, 10 m vertical) and initialized it with linear stratification. The boundaries are Orlanski, with the net flow balanced.
> Initially I used deltaT=60s, but the code blew up with NaNs due to the CFL condition. I then decreased deltaT to 30s; the run came to a normal end, but SSH increased/decreased to unreasonable values at the Orlanski boundaries. I then used deltaT=5s, and SSH showed smaller anomalies but still blew up at the boundaries.
> Decreasing deltaT therefore seemed like the right way to ensure proper SSH behavior, and this did work for some cases with other kinds of initial stratification. So I further decreased deltaT to 2s, but then I got much larger SSH blow-ups at the boundaries than in the case with deltaT=5s.
> According to this old post http://mailman.mitgcm.org/pipermail/mitgcm-support/2013-January/008122.html , this might be related to viscAhGrid, which my model already uses and which is converted to a viscosity as below:
> viscAh = 0.25*L**2*viscAhGrid/deltaT
> The formula above shows that decreasing deltaT increases viscAh. What confuses me is that decreasing deltaT can give either better or worse results. I assume this is related to viscAhGrid, but there might be other SSH instability issues that matter (Orlanski, etc).
> I have included the data file below. I would really appreciate any help with this problem.
> # ====================
> # | Model parameters |
> # ====================
> # Continuous equation parameters
> Tref = 50*-1.9,
> Sref = 34.0000, 34.0612, 34.1224, 34.1837, 34.2449,
> 34.3061, 34.3673, 34.4286, 34.4898, 34.5510,
> 34.6122, 34.6735, 34.7347, 34.7959, 34.8571,
> 34.9184, 34.9796, 35.0408, 35.1020, 35.1633,
> 35.2245, 35.2857, 35.3469, 35.4082, 35.4694,
> 35.5306, 35.5918, 35.6531, 35.7143, 35.7755,
> 35.8367, 35.8980, 35.9592, 36.0204, 36.0816,
> 36.1429, 36.2041, 36.2653, 36.3265, 36.3878,
> 36.4490, 36.5102, 36.5714, 36.6327, 36.6939,
> 36.7551, 36.8163, 36.8776, 36.9388, 37.0000,
> # viscAh=10.0,
> HeatCapacity_cp = 3974.0,
> convertFW2Salt = 33.4,
> implicitDiffusion = .TRUE.,
> implicitViscosity = .TRUE.,
> saltVertAdvScheme = 33,
> # useCDScheme = .TRUE.,
> # Elliptic solver parameters
> # Time stepping parameters
> cAdjFreq = 0.,
> # tauCD = 400000.,
> # Gridding parameters
> ygOrigin = -80.0,
> # Input datasets
> # hydrogThetaFile='temp_3d_end_state.bin',
> # hydrogSaltFile='salt_3d_end_state.bin',
> # uVelInitFile='u_3d_end_state.bin',
> # vVelInitFile='v_3d_end_state.bin',
> MITgcm-support mailing list
> MITgcm-support at mitgcm.org