[MITgcm-support] rbcs advice
Jody Klymak
jklymak at uvic.ca
Tue Mar 4 09:35:30 EST 2014
Hi Martin,
Thanks, great suggestions. I think you are right about the file names too, but I wanted to check. The logic for calculating initer0 wasn't crystal clear to me.
Probably the big reads are slowish, but they don't happen that often if I understand the code correctly. Every 1800 or 3600 s with 10-s time steps is probably going to be acceptable. Plus it'll be at about the same time it's *writing* more data than that, so hopefully the filesystem is up for the challenge!
Cheers, Jody
On Mar 3, 2014, at 23:58, Martin Losch <Martin.Losch at awi.de> wrote:
> Hi Jody,
> I am not sure if I understand all of the issues, but here’s my 5-cent worth:
>
> - Can you reduce the file size by using single precision input (in data: readBinaryPrec = 32)? This would mean that all of your input files (topography, initial conditions, surface forcing, etc.) need to be converted to single precision, but it would save you 50% of the disk space, if you can live with the loss of precision. For smaller files, there’s always OBCS, but that comes with a different set of issues (as you know).
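The double-to-single conversion Martin suggests can be sketched as below; the helper and file names are hypothetical, and it assumes the files are big-endian, as MITgcm binary input usually is.

```python
# Hypothetical helper (not from MITgcm) that rewrites a float64 input
# file as float32, matching readBinaryPrec = 32.  Assumes big-endian
# files on disk, which is the usual MITgcm convention.
import array
import sys

def to_single(src, dst):
    a = array.array('d')
    with open(src, 'rb') as f:
        a.frombytes(f.read())
    if sys.byteorder == 'little':
        a.byteswap()              # big-endian file -> native order
    b = array.array('f', a)       # truncate each value to single precision
    if sys.byteorder == 'little':
        b.byteswap()              # native order -> big-endian file
    with open(dst, 'wb') as f:
        f.write(b.tobytes())
```

Each value shrinks from 8 to 4 bytes, which is where the 50% disk saving comes from.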
>
> - I can imagine that reading 6GB chunks from 150-300 GB files can be slow, but that would be my only concern with the single files.
>
> - I haven’t used this feature of RBCS yet, but from looking at the code (rbcs_fields_load.F), it looks to me as if the same counter variables initer0 and initer1 are used for reading the record number from a large file and for generating the name of the individual files:
> IF ( rbcsSingleTimeFiles ) THEN
>   IL = ILNBLNK( relaxTFile )
>   WRITE(fullName,'(2a,i10.10)') relaxTFile(1:IL),'.',initer0
>   CALL READ_REC_XYZ_RS( fullName, rbct0, 1, myIter, myThid )
>   WRITE(fullName,'(2a,i10.10)') relaxTFile(1:IL),'.',initer1
>   CALL READ_REC_XYZ_RS( fullName, rbct1, 1, myIter, myThid )
> ELSE
>   CALL READ_REC_XYZ_RS( relaxTFile, rbct0, intime0, myIter, myThid )
>   CALL READ_REC_XYZ_RS( relaxTFile, rbct1, intime1, myIter, myThid )
> ENDIF
>
> So I think that your file names (in your example) should look like this:
> Uforce.0000000000
> Uforce.0000000001
> Uforce.0000000002
> …
> Uforce.0000000024 (or 48)
> completely independent of your time step. It’s just the large file with 24 records in it, split into 24 individual files with the record number in the name.
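The Fortran edit descriptor i10.10 in the WRITE statement above zero-pads the record counter to ten digits; the same names can be generated like this (a sketch; the Uforce prefix is from Jody's example):

```python
# Reproduce the fullName construction from rbcs_fields_load.F:
# prefix, a dot, then the record counter zero-padded to 10 digits.
def rbcs_name(prefix, rec):
    return "%s.%10.10d" % (prefix, rec)

print(rbcs_name("Uforce", 24))   # -> Uforce.0000000024
```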
>
> Martin
>
> On Mar 4, 2014, at 1:30 AM, Jody Klymak <jklymak at uvic.ca> wrote:
>
>>
>> Hi all,
>>
>> I have been using the RBCS package to force a coarse domain with internal waves, and that works very well. Now I want to nest a smaller domain inside the larger one. I'm fine with a one-way nest (I don't expect the small scale will substantially impact the large scale).
>>
>> My issue is how to deal with RBCS. Each field in the domain, stored as doubles, will be 6 GB per time level. I'd like to have at least 24 forcing records over two tidal cycles (48 would be better), so we are talking about 150-300 GB files for relaxTFile etc.
>>
>> Is that OK, or should I be saving the files separately, using rbcsSingleTimeFiles = .TRUE.?
>>
>> If I set rbcsSingleTimeFiles = .TRUE., then I am unsure how to specify the iteration number. I found the discussion in the manual vague. By "iteration" do we mean the iteration number of the saves? Or the model iteration number? Or something else?
>>
>> i.e. suppose I have
>>
>> deltaT=12.
>> deltaTrbcs=3600.
>>
>> and I have snapshots I want to use for forcing at t = 0, 3600, 7200, etc.
>>
>> I'd think I need to set
>>
>> rbcsForcingOffset=-1800.
>> and the files would be:
>> Uforce.0000000000
>> Uforce.0000000001
>> Uforce.0000000002
>>
>> or would they be
>> Uforce.0000000000
>> Uforce.0000003600
>> Uforce.0000007200
>>
>> Finally, what do I set rbcsIter0 to?
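Under the interpretation in Martin's reply above (records numbered 0, 1, 2, ... independent of deltaT), the record-bracketing arithmetic can be sketched as follows. The period and offset are taken from the example in this message; the convention that record 0 sits at time rbcsForcingOffset is an assumption for illustration, not verified against the RBCS code.

```python
# Sketch: which two forcing records straddle model time t, assuming
# records are spaced deltaTrbcs apart with record 0 at time
# rbcsForcingOffset (an assumed convention, not checked in the code).
deltaTrbcs = 3600.0
rbcsForcingOffset = -1800.0

def bracketing_records(t):
    i0 = int((t - rbcsForcingOffset) // deltaTrbcs)
    return i0, i0 + 1

print(bracketing_records(0.0))   # records 0 and 1 straddle t = 0
```

The time step deltaT never enters: only the forcing period and offset determine which records are read.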
>>
>> Thanks for any help. Feel free to ignore this if there is no penalty to just making a single big forcing file.
>>
>> Cheers, Jody
>>
>>
>>
>>
>>
>> --
>> Jody Klymak
>> http://web.uvic.ca/~jklymak/
>>
>>
>>
>>
>>
>>
>> _______________________________________________
>> MITgcm-support mailing list
>> MITgcm-support at mitgcm.org
>> http://mitgcm.org/mailman/listinfo/mitgcm-support