[MITgcm-devel] seaice
Martin Losch
mlosch at awi-bremerhaven.de
Wed Feb 15 15:00:28 EST 2006
Hi Jinlun,
I cannot claim that I am absolutely right about the indices, but as I
said before, for spherical coordinates they would not matter at all!
I had hoped that by replacing all grid information with the generic
grid information of the MITgcm, the general curvilinear formulation
would be there already. Terms such as tan(phi) are set to zero in
model/src/ini_curvilinear_grid.F so that the metric terms of the
spherical formulation vanish in Cartesian and general coordinates.
Probably that was a little naive. I'll go through LSR and DYNSOLVER
again and try to figure these things out.
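To illustrate what I mean by generic grid information: a metric term
written with the GRID.h arrays vanishes automatically wherever
tanPhiAtU is zero. Here is a rough, untested sketch (tanPhiAtU and
recip_rSphere are the names as I remember them from GRID.h and
PARAMS.h; mT, uFld, vFld are placeholders):

      DO j=1,sNy
       DO i=1,sNx
C     u*v*tan(phi)/r metric term of the spherical momentum equations;
C     ini_curvilinear_grid.F sets tanPhiAtU to zero, so on cubed-sphere
C     and Cartesian grids this term drops out without any special code
        mT(i,j) = uFld(i,j)*vFld(i,j)
     &           *tanPhiAtU(i,j,bi,bj)*recip_rSphere
       ENDDO
      ENDDO
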
The same is true for advect.F and diffus.F: if you have a look at my
(still commented) flux form versions, you'll see that these are in
general curvilinear coordinates and of the same order of accuracy as
the rest of the finite-volume MITgcm.
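For reference, this is roughly the kind of flux form I mean, in the
spirit of the rest of the finite-volume code (only an untested sketch;
fZon, fMer, gHeff, uIce, vIce, hEff and the loop ranges are
placeholders, and on a spherical lat-lon grid dxG and dyG reduce to
DXUICE*CSUICE and DYUICE):

C     thickness fluxes through the western and southern cell faces
      DO j=1-Oly+1,sNy+Oly
       DO i=1-Olx+1,sNx+Olx
        fZon(i,j) = uIce(i,j)*dyG(i,j,bi,bj)
     &             *HALF*( hEff(i,j)+hEff(i-1,j) )
        fMer(i,j) = vIce(i,j)*dxG(i,j,bi,bj)
     &             *HALF*( hEff(i,j)+hEff(i,j-1) )
       ENDDO
      ENDDO
C     flux divergence divided by the cell area gives the tendency
      DO j=1-Oly+1,sNy+Oly-1
       DO i=1-Olx+1,sNx+Olx-1
        gHeff(i,j) = -recip_rA(i,j,bi,bj)
     &              *( fZon(i+1,j)-fZon(i,j)
     &               + fMer(i,j+1)-fMer(i,j) )
       ENDDO
      ENDDO
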
Martin
On Feb 15, 2006, at 6:50 PM, Jinlun Zhang wrote:
> Hi Martin,
>
> I have to admit that I am surprised that you find quite a few
> indexing inconsistencies, but thanks for the effort and for watching
> my back. I'll take a look at the changes you have made and a second
> look at the existing code. I just roughly checked one piece of the
> code that you are trying to modify.
>
>> +HALF*ZETAMEAN(I,J,bi,bj)
>> 3 *((UICEC(I+1,J+1,bi,bj)
>> 3 -UICEC(I-1,J+1,bi,bj))*RECIP_CSUICE(I,J+1,bi,bj)
>> 3 -(UICEC(I+1,J-1,bi,bj)-UICEC(I-1,J-1,bi,bj))
>> 3 *RECIP_CSUICE(I,J-1,bi,bj))*DELXY(I,J,bi,bj))
>
> > I have to admit that I find it hard to recognize this term
> > somewhere in eq9b in your 1997-JGR paper (with Hibler), but
> > nevertheless this looks to me like zeta*(d/dx)*(d/dy)U with
> > DELXY=HALF/(DXU*DYU). These are central differences, but shouldn't
> > the (for variable DXU,DYU) correct form be
> > zetamean*(
> >   (u(i+1,j+1)-u(i-1,j+1))/(dxu(i,j+1)+dxu(i-1,j+1))
> > - (u(i+1,j)-u(i-1,j))/(dxu(i,j)+dxu(i-1,j))
> > )/(dyu(i,j)+dyu(i,j-1))
> The existing code is strictly based on spherical coordinates. If
> you use what you are thinking of using, you are essentially doing
> the generalized curvilinear thing, and the code would be based
> partially on spherical and partially on (not quite correct)
> curvilinear, which would likely degrade the model accuracy to 1st
> order instead of 2nd order. For the curvilinear formulation,
> please see Zhang and Rothrock, 2003 MWR. Note that the code is
> going to be changed into a full-blown curvilinear system if I have
> the opportunity (which I would love) to do that sometime later.
>
> Cheers, Jinlun
>
> Martin Losch wrote:
>
>> Jinlun,
>>
>> the current advection will certainly not be removed and will be
>> the default. If there are new advection schemes, they will be
>> additions that can be selected at runtime.
>>
>> Independently of the advection schemes I will do this:
>> First, I will check in new versions of lsr.F, advect.F,
>> dynsolver.F, and diffus.F that will NOT change the lab_sea
>> verification experiment (which is on a spherical grid), but will
>> change cubed-sphere results. These new versions will no longer
>> need SEAICE_GRID.h, but will rely completely on the grid information
>> that is defined in GRID.h, so that all hacks for the cubed sphere
>> in seaice_init.F should be obsolete.
>>
>> The next step is a little hairy and I don't have to do it today,
>> but sooner or later I would like to do it:
>> - use the global variable rSphere for RADIUS. This will change the
>> results (I have no idea why).
>> - Replace a few formulations in advect.F and diffus.F by a flux
>> form, which is used in all (?) other parts of the MITgcm. This
>> will make sure that advection and diffusion are also consistent
>> with cubed-sphere grids (I am not so sure about that now). I have
>> already included these formulations in advect.F and diffus.F, but
>> they are commented out. They should be accurate to the same
>> order as the original schemes, but they do not give exactly the
>> same results in lab_sea (they are slightly different).
>> PLEASE help me by having a look at the code that I will check in
>> now and tell me if you agree with the (commented)
>> discretizations (dxG=DXUICE*CSUICE, dyG=DYUICE on a spherical
>> lat-lon grid).
>>
>> When I worked on LSR (you may not recognize it in its present
>> form), I found a few places where the indices did not convince
>> me. Again, on a spherical grid, correcting these indices did not
>> change the results (mostly things like CSUICE(I-1,J) instead of
>> CSUICE(I,J), which doesn't matter on a spherical grid). These
>> things should be corrected.
>> But there were other places where I did not understand the
>> discretization, and I am afraid that on a cubed-sphere grid it
>> may not be correct (but I am not sure). Here is an example:
>>
>>> +HALF*ZETAMEAN(I,J,bi,bj)
>>> 3 *((UICEC(I+1,J+1,bi,bj)
>>> 3 -UICEC(I-1,J+1,bi,bj))*RECIP_CSUICE(I,J+1,bi,bj)
>>> 3 -(UICEC(I+1,J-1,bi,bj)-UICEC(I-1,J-1,bi,bj))
>>> 3 *RECIP_CSUICE(I,J-1,bi,bj))*DELXY(I,J,bi,bj))
>>
>> I have to admit that I find it hard to recognize this term
>> somewhere in eq9b in your 1997-JGR paper (with Hibler), but
>> nevertheless this looks to me like zeta*(d/dx)*(d/dy)U with
>> DELXY=HALF/(DXU*DYU). These are central differences, but
>> shouldn't the (for variable DXU,DYU) correct form be
>> zetamean*(
>>   (u(i+1,j+1)-u(i-1,j+1))/(dxu(i,j+1)+dxu(i-1,j+1))
>> - (u(i+1,j)-u(i-1,j))/(dxu(i,j)+dxu(i-1,j))
>> )/(dyu(i,j)+dyu(i,j-1))
>> ????
>> If dxu and dyu are constant, the formulations are the same, but
>> for variable dxu, dyu they are not. In the right-hand sides (FXY)
>> this is easy to see and to fix, but for the left-hand sides (the
>> coefficients a, b, c) I may have missed quite a few occurrences
>> (mostly because I had a hard time identifying the terms in 9a and
>> 9b). I'll keep looking, but could you also have a look at that
>> when you find the time?
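>>
>> In code, what I have in mind is something like this (only a sketch,
>> untested; dxu and dyu stand for whatever the appropriate local grid
>> spacings are, and dudxN/dudxS/term are just temporaries):
>>
>>       dudxN = ( UICEC(I+1,J+1,bi,bj)-UICEC(I-1,J+1,bi,bj) )
>>      &       /( dxu(I,J+1)+dxu(I-1,J+1) )
>>       dudxS = ( UICEC(I+1,J  ,bi,bj)-UICEC(I-1,J  ,bi,bj) )
>>      &       /( dxu(I,J  )+dxu(I-1,J  ) )
>>       term  = ZETAMEAN(I,J,bi,bj)*( dudxN - dudxS )
>>      &       /( dyu(I,J)+dyu(I,J-1) )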
>>
>> Martin
>>
>>
>> On Feb 14, 2006, at 6:34 PM, Jinlun Zhang wrote:
>>
>>> Martin and all,
>>>
>>> I wouldn't be against using a (proven) better advection scheme.
>>> But we should be cautious about replacing the existing one. It
>>> is a well-tested scheme (with 2nd-order accuracy), not only
>>> useful for the existing 2-category ice model, but particularly
>>> stable with multi-category models that simulate ridging
>>> processes and are likely to be implemented in MITgcm sometime
>>> later. I have used various advection schemes in the past; a few
>>> of them blew up the ridging calculation because they mishandled
>>> deformation. I would go for 'if it is not broken,
>>> don't fix it.' But just in case a new scheme is to be used,
>>> let's keep this version as an option, if not as the standard.
>>>
>>> Cheers, Jinlun
>>>
>>> chris hill wrote:
>>>
>>>> Martin,
>>>>
>>>> I took a quick skim through the code. You could try setting
>>>> advFac to 0. That would mean any convergence in the ice
>>>> velocity will cause ice to pile up (heff gets bigger) and
>>>> divergence will cause it to thin (heff gets smaller). That seems
>>>> like it would make sense to me. As you have it now, a divergent
>>>> ice flow field may end up producing more ice (since with advFac
>>>> = 1 heff will have a source term that balances the flow
>>>> divergence/convergence). However, I think we need to get
>>>> Jinlun's expert thoughts, as the end result is tied up with how
>>>> the solver works with heff, uice and vice, whether uice and
>>>> vice can be non-divergent etc...
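>>>>
>>>> Schematically, the advFac term amounts to
>>>>   advFac = 0 :  d(heff)/dt = -div( u*heff )
>>>>                 (flux form, conservative)
>>>>   advFac = 1 :  d(heff)/dt = -div( u*heff ) + heff*div(u)
>>>>                            = -u . grad(heff)
>>>>                 (advective form, heff only carried along)
>>>> so with advFac = 0 convergent ice motion piles ice up locally,
>>>> while with advFac = 1 the flow divergence does not change the
>>>> local thickness.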
>>>>
>>>> Chris
>>>> Martin Losch wrote:
>>>>
>>>>> GAD:
>>>>>
>>>>> I have tried using dst3fl (without any explicit diffusion). It
>>>>> has run stably for 20 years now, but now my ice (instead of
>>>>> being too little) is growing endlessly (37 m in the western
>>>>> Weddell Sea), so I suspect that I am making severe
>>>>> mistakes: Could someone who understands gad_calc_rhs have a
>>>>> look at the attached routine and tell me if I have done
>>>>> things right? In particular, in the update of HEFF, is it
>>>>> correct to have advFac=1?:
>>>>>
>>>>>> DO j=1-Oly,sNy+Oly-1
>>>>>> DO i=1-Olx,sNx+Olx-1
>>>>>> HEFF(i,j,1,bi,bj)=HEFF(i,j,3,bi,bj) + DELTT *
>>>>>> & maskC(i,j,kSurface,bi,bj)*recip_rA(i,j,bi,bj)
>>>>>> & *( (fZon(i+1,j)-fZon(i,j))
>>>>>> & +(fMer(i,j+1)-fMer(i,j))
>>>>>> & -localT(i,j)*( (uTrans(i+1,j)-uTrans(i,j))
>>>>>> & +(vTrans(i,j+1)-vTrans(i,j))
>>>>>> & )*advFac
>>>>>> & )
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>>>>
>>>>> On Feb 14, 2006, at 3:57 PM, chris hill wrote:
>>>>>
>>>>>> Martin,
>>>>>>
>>>>>> These all sound good.
>>>>>> Technically it should be possible to use gad to do the
>>>>>> explicit part of ice advection. Jinlun may have comments on
>>>>>> whether this makes sense algorithmically and on interactions
>>>>>> with the implicit parts of the ice dynamics.
>>>>>>
>>>>>> Chris
>>>>>> Martin Losch wrote:
>>>>>>
>>>>>>> Hi Dimitris,
>>>>>>> I am probably playing too much with the seaice model right
>>>>>>> now, but here are a few points that I'd like to make (and
>>>>>>> maybe check them in, even if they break lab_sea):
>>>>>>> 1. I am now sure that there is a bug in advect.F that does
>>>>>>> NOT affect lat-lon-grid simulations, but WILL affect cubed-
>>>>>>> sphere simulations and all other irregular grid
>>>>>>> simulations. It's basically an indexing error (see my
>>>>>>> previous email). I think I will just fix that.
>>>>>>> 2. I would like to replace all DXTICE, DYTICE, SINEICE, CSTICE,
>>>>>>> etc. with the proper combination of variables dxF, dxG, etc.
>>>>>>> from GRID.h. This will --- at least as far as I can see
>>>>>>> --- make sure that the grid information is correct and the
>>>>>>> same grid parameters that are used for the ocean are used
>>>>>>> for seaice.
>>>>>>> Since I want to use the seaice model on a cubed-sphere grid,
>>>>>>> I do care about this. However, this will change the
>>>>>>> lab_sea and very likely (more dramatically) any cubed-sphere
>>>>>>> set-up that you may have (I am currently playing
>>>>>>> with global_ocean.cs32x15 + seaice). Will I get your OK?
>>>>>>> 3. Advection schemes: for properties such as volume and
>>>>>>> fractional area, the advection scheme should not produce
>>>>>>> negative (or positive) overshoots. A 2nd-order central
>>>>>>> difference scheme does that (e.g., it can produce negative
>>>>>>> thicknesses). The scheme in advect.F is 2nd-order central
>>>>>>> difference, but I don't understand the time stepping
>>>>>>> scheme, so it may be OK. Nevertheless, I naively think a
>>>>>>> positive scheme may be better, but it is no longer
>>>>>>> conservative, e.g. 2nd order with flux limiter (e.g.,
>>>>>>> Hunke's CSIM5 uses MPDATA) or DST3FL, which I use routinely
>>>>>>> for geochemical tracers. The nice thing is that all of these
>>>>>>> schemes are there (in generic_advdiff), one just needs to
>>>>>>> pick one. But again, I do not understand the time stepping
>>>>>>> in advect.F (nor do I fully understand how gad_calc_rhs
>>>>>>> works): I have tried dst3fl and I even got it to work, but
>>>>>>> only halfway. If I am not mistaken, the DST schemes look as
>>>>>>> if they are explicit in time, that is, h(n+1) = h(n) +
>>>>>>> gh(n)*deltaT. I can compute gh(n), but for that I need to
>>>>>>> know what the different time levels are, e.g.:
>>>>>>> HEFF(:,:,1,:,:) = current time level?
>>>>>>> HEFF(:,:,2,:,:) = do I need these?
>>>>>>> HEFF(:,:,3,:,:) = ?
>>>>>>> Or do I just update HEFF(:,:,1,:,:) in advect.F?
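>>>>>>>
>>>>>>> I.e., schematically what I would like to do is (just a sketch,
>>>>>>> assuming HEFF(:,:,1,:,:) is the current time level and gHeff
>>>>>>> is the explicit tendency coming out of the gad machinery):
>>>>>>>
>>>>>>>  DO j=1,sNy
>>>>>>>   DO i=1,sNx
>>>>>>>    HEFF(i,j,1,bi,bj) = HEFF(i,j,1,bi,bj) + DELTT*gHeff(i,j)
>>>>>>>   ENDDO
>>>>>>>  ENDDO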
>>>>>>> Martin
>>>>>>> On Feb 14, 2006, at 2:07 PM, Dimitris Menemenlis wrote:
>>>>>>>
>>>>>>>> Martin and Jinlun, I am out of my depth when it comes to
>>>>>>>> advection schemes. Is there a reason for changing the
>>>>>>>> scheme that is there already in pkg/seaice?
>>>>>>>>
>>>>>>>> For the cubed-sphere grid right now, I assumed that the grid is
>>>>>>>> rectangular near the Poles (CS*ICE=1, TNG*ICE=0). This
>>>>>>>> was a quick fix to get going but it is not exact. So
>>>>>>>> maybe that explains why you get different numerical values?
>>>>>>>>
>>>>>>>> Regarding coastal sflux from seaice. One does expect
>>>>>>>> coastal regions around Antarctica to be ice/salt
>>>>>>>> factories, but maybe too much salt is being rejected.
>>>>>>>> Carl recently sent me some slides and the Ph.D. thesis of
>>>>>>>> Dirk Notz:
>>>>>>>> http://ecco.jpl.nasa.gov/~dimitri/Notz/talk_MPI16112005.pdf
>>>>>>>> http://ecco.jpl.nasa.gov/~dimitri/Notz/PhD_thesis_Dirk.pdf
>>>>>>>> suggesting there is considerable uncertainty regarding how
>>>>>>>> much salt is rejected during sea ice creation.
>>>>>>>>
>>>>>>>> Dimitris
>>>>>>>
>>
>
>
> --
>
> Jinlun Zhang
> Polar Science Center, Applied Physics Laboratory
> University of Washington, 1013 NE 40th St, Seattle, WA 98105-6698
>
> Phone: (206)-543-5569; Fax: (206)-616-3142
> zhang at apl.washington.edu
> http://psc.apl.washington.edu/pscweb2002/Staff/zhang/zhang.html
>