
Websites:

http://mitgcm.org/

http://mitgcm.org/public/r2_manual/latest/

Getting MITgcm source code:

export CVSROOT=':pserver:cvsanon@mitgcm.org:/u/gcmpack'


#To get MITgcm through CVS, first register with the MITgcm CVS server using the following command (the CVS password is: cvsanon)

cvs login

#You only need to do a cvs login once. To obtain the latest sources type:

cvs co -P MITgcm
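
If you later want to refresh an existing checkout to the latest sources, a standard CVS update from the top of the source tree is enough (this assumes the checkout above was done in your home directory):

cd $HOME/MITgcm
cvs update -d -P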

Installation on abel:

Make sure you have downloaded the MITgcm source code; let's assume your code is in $HOME/MITgcm:

cd $HOME/MITgcm

export ROOTDIR=$HOME/MITgcm

export PATH=$ROOTDIR/tools:$PATH

module load netcdf.intel
module switch openmpi.intel/1.6.1 openmpi.intel/1.8
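
As an optional sanity check, verify that the MITgcm tools are now found on your PATH:

which genmake2
# should point to $ROOTDIR/tools/genmake2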


Once you have set up your environment, you are ready to set up a use case and run MITgcm. All MITgcm examples and use cases are in the verification directory. Feel free to test another example, closer to what you wish to run in "real life". As a first example, we will compile and run exp2:

cd $HOME/MITgcm/verification/exp2/build

genmake2 -mpi -mods=../code/ -of=optfile.sh


where optfile.sh contains settings specific to abel. You need to create this file; it should contain:

#!/bin/bash
export LANG=en_US.UTF-8
export LC_ALL=en_US
module load netcdf.intel
module switch openmpi.intel/1.6.1 openmpi.intel/1.8
module load jasper
module load ncl/6.1.0
export FC=mpif90
export F90C=mpif90
export CC=mpicc
export DEFINES='-DWORDLENGTH=1 -D_BYTESWAPIO'
#
export NETCDF=/cluster/software/VERSIONS/netcdf.intel-4.2.1.1
export HDF5=/cluster/software/VERSIONS/hdf5-1.8.9_intel
export NCARG_ROOT=/cluster/software/VERSIONS/ncl-6.1.0
export JASPERLIB=/cluster/software/VERSIONS/jasper-1.900.1/lib
export JASPERINC=/cluster/software/VERSIONS/jasper-1.900.1/include/jasper
export LIBS="-L${NETCDF}/lib -lnetcdff -lnetcdf"


optfile.sh should be placed in your build directory (or, if you place it somewhere else, the full path should be provided when calling genmake2).


Then to compile:

make depend

make >& compile.log


If everything went well, your build directory should contain the MITgcm executable (called mitgcmuv).
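
To quickly confirm that the build produced the executable, and to look for problems if it did not, you can run (from the build directory):

ls -l mitgcmuv
grep -i error compile.log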


If you need to install MITgcm on another machine, please contact drift@geo.uio.no


Running small test cases on abel:

Once compiled, the mitgcmuv executable needs to be moved to the run directory:


mv mitgcmuv ../run

Before running MITgcm, you need to prepare your input files. All these input files are located in the input directory (exp2/input), but when running, they must be available in the run directory. Instead of copying all the input files into run, we create symbolic links:

cd ../run
ln -s ../input/* .


You are now ready to run mitgcmuv. Small configurations (most tutorial examples) can be run interactively:

./mitgcmuv


A successful run ends with:

NORMAL END


Check STDOUT.0000 and search for "Execution ended Normally".

tail STDOUT.0000

(PID.TID 0000.0001) //            Min. Y spins =     1000000000
(PID.TID 0000.0001) //          Total. Y spins =              0
(PID.TID 0000.0001) //            Avg. Y spins =       0.00E+00
(PID.TID 0000.0001) // o Thread number: 000001
(PID.TID 0000.0001) //            No. barriers =          10092
(PID.TID 0000.0001) //      Max. barrier spins =              1
(PID.TID 0000.0001) //      Min. barrier spins =              1
(PID.TID 0000.0001) //     Total barrier spins =          10092
(PID.TID 0000.0001) //      Avg. barrier spins =       1.00E+00
PROGRAM MAIN: Execution ended Normally
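
Rather than scrolling through STDOUT.0000, you can also grep for the success message directly:

grep "Execution ended Normally" STDOUT.0000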


For this experiment (exp2), several output files are created. By default, at least for this experiment, outputs are in MITgcm binary format, and you'll get several files for each variable. Let's take T (Temperature):

ls T.*

T.0000000000.001.001.data  T.0000000000.002.002.data  T.0000000024.002.001.data  T.0000000026.001.002.data
T.0000000000.001.001.meta  T.0000000000.002.002.meta  T.0000000024.002.001.meta  T.0000000026.001.002.meta
T.0000000000.001.002.data  T.0000000024.001.001.data  T.0000000024.002.002.data  T.0000000026.002.001.data
T.0000000000.001.002.meta  T.0000000024.001.001.meta  T.0000000024.002.002.meta  T.0000000026.002.001.meta
T.0000000000.002.001.data  T.0000000024.001.002.data  T.0000000026.001.001.data  T.0000000026.002.002.data
T.0000000000.002.001.meta  T.0000000024.001.002.meta  T.0000000026.001.001.meta  T.0000000026.002.002.meta


Meta files (*.meta) are metadata files, i.e. text files containing information about the corresponding data files (*.data). The *.data files are binary files containing the Temperature values.

T.[timestep].[X].[Y].meta

T.[timestep].[X].[Y].data


Where timestep is the model timestep at which the variable was written.

X, Y correspond to the subgrids as defined in code/SIZE.h

In our example (check code/SIZE.h), we have 2 x 2 subgrids, which is why we have 4 files per variable for a given timestep.
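
As a quick check, you can count the tiles written for one variable at a given timestep (using the files listed above):

ls T.0000000026.*.data | wc -l
# 4 files, i.e. one per subgrid (2 x 2)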


Getting MITgcm netCDF outputs:

It is recommended to generate netCDF outputs instead of MITgcm binary outputs. It will make it easier to analyze and visualize your outputs and it is more portable.


For this, you need to enable netCDF when compiling MITgcm.  Several examples are meant to create netCDF outputs instead of standard MITgcm binary outputs (global_ocean.90x40x15, etc.) but let's assume we want to generate netCDF outputs for our example exp2.

In our exp2 example, we first clean any previous compilation in build directory:

cd exp2

rm -rf build

mkdir build

cd build


And then we recompile with the -enable=mnc option:

genmake2 -mpi -enable=mnc -mods=../code/ -of=optfile.sh

make depend

make >& compile.log


(don't forget to create optfile.sh...).


When running, netCDF (MNC) outputs need to be activated:

cd exp2/input

cp ../../global_ocean.90x40x15/input/data.mnc .

cp ../../global_ocean.90x40x15/input/data.diagnostics .


Please note that all these files can be customized (see the MITgcm documentation). Here we simply copy existing files.


And edit data.pkg where you add "useMNC=.TRUE.," i.e.:

# Packages
 &PACKAGES
  useMNC=.TRUE.,
 &

This option indicates that you wish to generate netCDF outputs at runtime (by default, binary outputs are generated even if you have compiled your code with netCDF support).

When running mitgcmuv, netCDF outputs are generated in a subdirectory; in our example it is called mnc_test_0001 (this is what is specified in data.mnc; you can change it to a more meaningful name!).

As before, you'll get one netCDF file per subgrid (tile), but this time each netCDF file contains several variables and several timesteps. To make them easier to analyse, use python and gluemncbig:

module load python/anaconda

gluemncbig -o grid.nc mnc_test_0001/grid.*.nc
gluemncbig -o state.nc mnc_test_0001/state.*.nc
gluemncbig -o phiHyd.nc mnc_test_0001/phiHyd.*.nc

Then the resulting files can be viewed for instance with ncview:

module load ncview

ncview state.nc
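
If you just want to list the variables, dimensions and timesteps in the merged file without opening a viewer, ncdump can be used (it comes with the netCDF tools; this assumes the netcdf.intel module is already loaded as above):

ncdump -h state.nc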


Full documentation for MNC is available at [1]



Running large cases on abel:

When running large cases, and more generally for your research, it is best to use netCDF outputs. In addition, you cannot run "interactively": when logged in on abel, a 30-minute CPU time limit is applied. It is also likely you will run MITgcm with MPI, using several processors.


Create a job command file

Create a script (or job command file) where all the resources you need to run MITgcm are specified. Let's call it MITgcm.job

#!/bin/bash
# Job name:
#SBATCH --job-name=run_MITgcm
#
# Project:
#SBATCH --account=XXXXX
#
# Wall clock limit:
#SBATCH --time=24:0:0
#
# Max memory usage per core (MB):
#SBATCH --mem-per-cpu=4G
#
#SBATCH --ntasks=64
###

# Set up job environment: DO NOT CHANGE
export LANG=en_US.UTF-8
export LC_ALL=en_US
source /cluster/bin/jobsetup

ulimit -l unlimited


module load netcdf.intel
module switch openmpi.intel/1.6.1 openmpi.intel/1.8


# NOTE cd to where you have MITgcm executable
cd $HOME/MITgcm/verification/exp2/run

mpirun ./mitgcmuv > mitgcmuv.out

Please note that you need to adjust the account (use your Notur account if you have one, otherwise your UiO account), the time, and ntasks (the number of processors required for your MITgcm configuration).

Adjust ntasks

The number of tasks you need depends on your MITgcm configuration: it can be found in the code/SIZE.h file and is nPx * nPy.
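
A quick way to check these values is to grep for them in your configuration; the path below follows the exp2 example used earlier and should be adapted to your own set-up:

grep -E "nPx|nPy" $HOME/MITgcm/verification/exp2/code/SIZE.h
# exp2 uses a 2 x 2 decomposition (see above), so --ntasks=4 would be needed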

Submit and monitor your job command file

Submit your job

sbatch MITgcm.job

Monitor your job

squeue -u $USER
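
Once the job has started, you can follow the model output in the file the mpirun line redirects to (mitgcmuv.out in the run directory used in the script above):

tail -f $HOME/MITgcm/verification/exp2/run/mitgcmuv.out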

Troubleshooting:

No python/anaconda modulefile available

If python/anaconda is not available on abel, this is likely because you need to set up your local environment properly:

Create a directory called privatemodules in your home directory:

mkdir $HOME/privatemodules


Then create (or edit) the $HOME/.modulerc file (please note the "dot" in front of the filename). This file must contain the special modulefile token "#%Module1.0" on the first line, followed by one or more module commands:

#%Module1.0

set version 1.0
module load use.own


and any personal module files in this privatemodules directory will become available for you to list, load or unload.


For python/anaconda, create a subdirectory called python:

mkdir $HOME/privatemodules/python


And place a modulefile called anaconda in this new python directory (a sketch is given below). If you have any problems, please contact drift@geo.uio.no for help.
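
As an illustration only, the modulefile can be created with a shell here-document; the anaconda installation path below is a placeholder and must be replaced with the actual location on abel:

cat > $HOME/privatemodules/python/anaconda <<'EOF'
#%Module1.0
## placeholder path: point this at the real anaconda installation
prepend-path PATH /path/to/anaconda/bin
EOF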
