[MITgcm-support] seaice model

Dimitris Menemenlis menemenlis at jpl.nasa.gov
Tue Apr 27 20:40:16 EDT 2004


Hi Martin, just going over your list of questions.  It will take some
time to address them all adequately, but here are some suggestions.
Don't feel obliged to try all of them.  Just to get a discussion going.

> I realize that this may not be a perfect domain, because it ends at
> 80degree N and S, and it is very coarse (4degx4deg, 15 layers 50 to
> 690m) but just to try it out.

There exists a coarse-resolution set-up on the cubed sphere that
includes the Arctic Ocean that you can use.  It's 32x32 x 6 faces
x 15 levels, so the number of horizontal grid cells is 6144, as
opposed to 3600 for the 4x4.

There are two flavors of ice model on this grid.  One is thermodynamics
only, the Winton formulation.  That's the default package of
verification/global_ocean.cs32x15, and Jean-Michel is the expert.

The other is the dynamic/thermodynamic pkg/seaice, which is
essentially Hibler (1979-80) based plus snow; this is the package you
have been using.  An example configuration for it lives under
MITgcm_contrib/high_res_cube/README_ice

It'd be interesting to compare the two thermodynamic formulations
sometime, and also the impact, if any, of the dynamics, but that
hasn't been done in detail yet.

> - I have to reduce the time step from 2 days (without seaice) to 12h to 
> make the model run more than a few time steps (why is that?).

What happens if you turn the sea-ice package off, and just
use the pkg/exf bulk formulae with your forcing?  Do you still
need to reduce the time step?
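
To test that quickly, something like this in data.pkg should do it
(a sketch, assuming the standard useEXF/useSEAICE flags of the
PACKAGES namelist):

 # excerpt of data.pkg: bulk formulae on, sea ice off
 &PACKAGES
  useEXF=.TRUE.,
  useSEAICE=.FALSE.,
 &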

> - the model blows up after 6-12 months because of a sudden violation of
> the vertical CFL criterion, depending on the convection scheme I use.
> - with KPP the model blows up fastest; convective adjustment or
> implicit diffusion makes it last longer.
> - ice builds up and melts properly according to the seasons, but ice
> thicknesses are huge (?): HEFF goes up to 20m at isolated places.
> - this is my preliminary analysis of what's going on:
> in winter sea ice builds up, and there is a huge net fresh-water flux
> out of the ocean (up to 8m/day in monthly averages!); as a
> consequence surface salinities go up to 500 PSU, which leads to strong
> horizontal pressure gradients, strong horizontal velocity divergences,
> and therefore to strong vertical velocities, so that eventually the
> vertical CFL criterion is violated and ... booooom. The preferred areas
> for this process are the eastern Weddell Sea (whatever is resolved),
> the Norwegian Sea, if you can call it that, and Hudson Bay (yes, it's
> in there). I guess the problem is that these areas are quite shallow in
> the model (only a few grid cells), so that the high salinities
> cannot be convected or advected away. But the main problem is probably
> related to the strong freezing rates of a few meters a day.

Freezing rates of a few meters a day don't sound right.
Is your ncep_tair.bin file in deg C or K?
If the former, then you would need to add
 exf_offset_atemp=273.15
to the data.exf file.
More generally, can you check that your forcing files are roughly
in the ranges defined in pkg/exf/exf_fields.h?
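
For example, if the air temperature is indeed in deg C, the relevant
bit of data.exf would look roughly like this (a sketch: the namelist
group name and the atempfile entry are assumptions about your set-up;
exf_offset_atemp is the essential line):

 # excerpt of data.exf: ncep_tair.bin holds deg C, pkg/exf wants K
 &EXF_NML_02
  atempfile        = 'ncep_tair.bin',
  exf_offset_atemp = 273.15,
 &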

> PS. I also tried the identical configuration on my Mac with g77, and
> got a segmentation fault in the seaice model, in one of the loops of
> the LSR solver. Clearly a compiler optimization problem, because
> reducing the optimization from -O3 to -O1 (no optimization; the
> documentation on g77 is sparse) made the problem go away. I had a look
> at the code and saw a lot of divisions. I replaced all occurrences of
> /CSUICE(i,j,bi,bj) in the code by *RECIP_CSUICE(i,j,bi,bj) (a new
> field initialized to 1/CSUICE in seaice_init) and the segmentation
> fault went away. It may be worth thinking about changing this
> throughout the code (CSTICE is another candidate). What do you think
> about this? I can volunteer to do this, but I am afraid that it might
> break results in the verification experiments because optimization
> is affected.

I think that's a good idea.  Could you implement this and check it in,
or send me the modifications and I'll check them in and update the
verification output files accordingly, if needed.
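
For concreteness, the change would look roughly like this (a sketch:
the common-block name SEAICE_RECIP and the zero-guard are my
assumptions; the bounds copy the CSUICE declaration in SEAICE.h):

 C     In SEAICE.h, next to CSUICE:
       _RL RECIP_CSUICE(1-Olx:Snx+Olx,1-Oly:Sny+Oly,Nsx,Nsy)
       COMMON /SEAICE_RECIP/ RECIP_CSUICE

 C     In seaice_init.F, once CSUICE has been set:
       DO bj=myByLo(myThid),myByHi(myThid)
        DO bi=myBxLo(myThid),myBxHi(myThid)
         DO j=1-Oly,Sny+Oly
          DO i=1-Olx,Snx+Olx
           IF ( CSUICE(i,j,bi,bj) .NE. 0. _d 0 ) THEN
            RECIP_CSUICE(i,j,bi,bj) = 1. _d 0 / CSUICE(i,j,bi,bj)
           ELSE
            RECIP_CSUICE(i,j,bi,bj) = 0. _d 0
           ENDIF
          ENDDO
         ENDDO
        ENDDO
       ENDDO

 C     Then in lsr.F and friends, each  x/CSUICE(i,j,bi,bj)
 C     becomes                          x*RECIP_CSUICE(i,j,bi,bj)

That way the divides happen once at initialization instead of inside
the solver loops, which is also what seemed to upset g77 at -O3.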

> What are the policies regarding the sea-ice model? Did you aim at
> using as little information from the main code as possible, e.g. grid
> and masking information? I noticed that the mask maskC is not used
> anywhere, so that, for example, after calling the seaice model, theta
> is no longer zero over land (doesn't matter, but is ugly), because it
> is not masked in growth.F.

I agree that it's ugly.  I need to fix that.  What I really would like
to see is some values over land like -9999, so they can be used for
plotting purposes, without having to load a mask separately.
What do you think?
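
In the meantime, something along these lines at the end of growth.F
would at least keep theta clean over land (a sketch: it re-applies the
ocean model's maskC to the surface level only; writing a flag value
like -9999 instead of zero is the alternative I mention above):

 C     Re-apply the ocean land mask to surface theta after the
 C     ice thermodynamics have updated it
       DO j=1,Sny
        DO i=1,Snx
         IF ( maskC(i,j,1,bi,bj) .EQ. 0. ) THEN
          theta(i,j,1,bi,bj) = 0. _d 0
         ENDIF
        ENDDO
       ENDDO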

> All grid parameters are computed in seaice_init.F from the grid
> parameters of the ocean model, instead of adopting the ocean model's
> grid parameters directly (I know it's a different grid, but still all
> fields could be applied properly). Is this intentional, in the sense
> that you didn't want to change the sea ice model too much or didn't
> want any interference, or is it just because it was quicker and less
> error-prone that way (which I could perfectly understand, because adi
> and lsr are difficult to read, and I wouldn't like to have to modify
> these routines)?

A bit of both.  This is work in progress.  The eventual objective is to
rewrite the ice model for generalized curvilinear coordinates on the
C-grid, so that it matches the ocean model exactly.  This is one of the
things that Jinlun is working on.

P.S. Don't use adi.  Only lsr works properly.  Maybe we should
get rid of adi altogether.

> in subroutine growth.F: what does the variable
> WATR(1-Olx:Snx+Olx,1-Oly:Sny+Oly,Nsx,Nsy),
> defined in SEAICE.h and part of common block /SALT_WATER/, do? It's an
> identical copy of SEAICE_SALT, as far as I can see, and it's not used
> anywhere else in the code.

Very good point; I will remove it.

D.

-- 
Dimitris Menemenlis <menemenlis at jpl.nasa.gov>
Jet Propulsion Lab, California Institute of Technology
MS 300-323, 4800 Oak Grove Dr, Pasadena CA 91109-8099
tel: 818-354-1656;  fax: 818-393-6720



