Essential components

Introduction

The NEMO source code is written in Fortran 2008, and some of its prerequisite tools and libraries are already included in the download.
These include the AGRIF mesh refinement library, the FCM build system, the PPR polynomial reconstruction library and the IOIPSL library for parts of the output.

System prerequisites

The following should be provided natively by your system; if not, they need to be installed from the official repositories:

  • A Unix-like machine (e.g. a Linux distribution or macOS)

  • subversion (svn) for version control of XIOS sources

  • git for version control of NEMO sources

  • Perl interpreter

  • Fortran compiler (ifort, gfortran, pgfortran, ftn, …)

  • Message Passing Interface (MPI) implementation (e.g. OpenMPI or MPICH).

  • Network Common Data Form (NetCDF) library with its underlying Hierarchical Data Format (HDF) library

Note

By default, NEMO requires MPI-3. However, this requirement can be relaxed by using one of the following work-arounds (an example of activating such a key at build time is sketched after the list):
  • Activate the key_mpi2 preprocessor key at compile time. This will allow you to run the model using MPI-2, but keep in mind that you will lose some performance benefits.

  • Activate the key_mpi_off preprocessor key at compile time. This will allow you to run the model only on a single process (no MPI parallelization) and you will not be able to use XIOS.
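
For illustration, such a key is added at build time with the makenemo script described later in this guide; in this sketch the configuration name MY_GYRE_SERIAL is purely hypothetical:

# Hypothetical example: build a GYRE-based configuration without MPI parallelization
./makenemo -m 'auto' -r GYRE -n 'MY_GYRE_SERIAL' add_key 'key_mpi_off'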

Specifics for NetCDF and HDF

In order to take full advantage of the XIOS IO-server (one_file option, i.e. combining all of your output into one file), HDF (C library) and NetCDF (C and Fortran libraries) must be compiled with MPI support. To do this (an illustrative build sequence is sketched after the list):

  • You need to compile these libraries with the same version of the MPI implementation that both NEMO and XIOS will be compiled and linked with (see below).

  • When compiling the HDF library, you need to use the --enable-parallel option when calling configure:

    $ ./configure --enable-parallel ...
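
As an illustration only, a parallel build of these libraries with MPI compiler wrappers might look like the following sketch; installation prefixes, version numbers and any site-specific module setup are placeholders that will differ on your system:

# Sketch only: build HDF5 and NetCDF with MPI support (paths/versions are placeholders)

# HDF5 (C library), with parallel I/O enabled
cd hdf5-x.y.z
CC=mpicc ./configure --enable-parallel --prefix=$HOME/local
make && make install

# NetCDF-C, built against the parallel HDF5
cd ../netcdf-c-x.y.z
CC=mpicc CPPFLAGS="-I$HOME/local/include" LDFLAGS="-L$HOME/local/lib" \
    ./configure --enable-netcdf-4 --prefix=$HOME/local
make && make install

# NetCDF-Fortran, built against NetCDF-C with the same MPI compilers
cd ../netcdf-fortran-x.y.z
CC=mpicc FC=mpif90 CPPFLAGS="-I$HOME/local/include" LDFLAGS="-L$HOME/local/lib" \
    ./configure --prefix=$HOME/local
make && make install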
    

Note

For XIOS you need to use NetCDF-4. NetCDF-3 can still be used in NEMO if you do not wish to use XIOS.

Caution

The output files created by XIOS are NetCDF-4, not NetCDF4-classic, and are therefore incompatible with NetCDF-3 software. In order to handle any XIOS output, you need software that is compatible with true NetCDF-4 files (e.g. ncview, Matlab, Python). If you would like to use other software that is not compatible with NetCDF-4, you can convert your XIOS output into NetCDF4-classic format with one of the following commands:
$ cdo -f nc4c copy infile outfile

or

$ ncks -7 infile outfile
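
You can check which format a given file uses with the ncdump utility shipped with NetCDF:

$ ncdump -k outfile     # prints e.g. "netCDF-4", "netCDF-4 classic model" or "classic"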

Install XIOS library

With the sole exception of running NEMO without MPI (in which case output options are limited to the default minimum), diagnostic outputs from NEMO are handled by the third-party XIOS library.

For more details, refer to the section on configuring XIOS outputs below.

Instructions on how to install XIOS can be found on its wiki.

Note

Prior to NEMO version 4.2.0, the XIOS 2.5 release was recommended. However, versions 4.2.0 and beyond utilise some newer features of XIOS2, and users will need to upgrade to the trunk version of XIOS2. Note that version 5.0 also supports the use of XIOS3 by activating "key_xios3" (in this case you cannot use the tiling capability).

XIOS2 trunk can be checked out with:

$ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS2/trunk

If you are not planning on using the tiling capabilities (i.e. ln_tile = .false., always) then there will be performance and robustness gains to be had by using XIOS3 instead. This can be used with the same XML files as XIOS2 or you can choose to make changes to the XML files to enable new features. To illustrate some of the new options, v5.0 includes a demonstrator configuration which is described in XIOS3 demonstrator. XIOS3 trunk can be checked out with:

$ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS3/trunk

but remember to always run with ln_tile = .false. and to compile NEMO with key_xios3 defined.
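
Whichever version you choose, XIOS is then compiled with its own make_xios script from the checked-out directory; a minimal sketch, where my_arch stands for an XIOS arch file you have prepared for your machine:

cd trunk                               # the checked-out XIOS sources
./make_xios --arch my_arch --job 8     # compile XIOS using 8 parallel processes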

If you find problems at this stage, support can be found by subscribing to the XIOS mailing list and sending a mail message to it.

Download and install the NEMO code

Checkout the NEMO source

There are several ways to obtain the NEMO source code. Users who are not familiar with git and simply want a fixed code to compile and run can download a tarball from the 5.0 release site.

Users who are familiar with git and likely to use it to manage their own local branches and modifications, can clone the repository at the release tag:

git clone --branch 5.0 https://forge.nemo-ocean.eu/nemo/nemo.git nemo_5.0

Experienced developers who may wish to experiment with other branches or more recent code than the release (perhaps with a view to returning developments to the system), can clone the main repository:

git clone https://forge.nemo-ocean.eu/nemo/nemo.git
cd nemo

and then, to work with v5.0, either switch to the tagged release or the head of branch_5.0. The latter will contain any important bug-fixes since the release of 5.0 and will form the basis of any future tagged 5.0.X point releases:

git switch --detach 5.0
or
git switch branch_5.0

Note: use the second option if you intend to work on user-contributed bug fixes.
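
In either case, the version of the code you have checked out can be confirmed with standard git commands, for example:

git describe --tags         # typically reports 5.0 at the tag, or 5.0-<n>-g<hash> on branch_5.0
git branch --show-current   # empty for a detached checkout, branch_5.0 otherwise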

Description of main NEMO directories

arch      Compilation settings
cfgs      Reference configurations
ext       Dependencies included (AGRIF, FCM, PPR & IOIPSL)
mk        Compilation scripts
src       NEMO codebase
tests     Test cases
tools     Utilities to {pre,post}process data
sct       PSyclone code transformation scripts
sette     SETTE, a code testing framework

Install PSyclone in a Python virtual environment (if required)

It is recommended to install PSyclone in a Python virtual environment inside the parent directory of the local NEMO repository (using Python’s venv module):

$ python -m venv --upgrade-deps ./PSyclone
$ source ./PSyclone/bin/activate
$ python -m pip install psyclone==2.5.0 --require-virtualenv

PSyclone is rapidly evolving, and the NEMO build system will be kept compatible with a recent version of PSyclone (for example, see the file exclusions applied by the wrapper script mk/sct_psyclone.sh during builds with PSyclone processing activated); hence a particular PSyclone version is recommended for each release. For NEMO v5.0, the recommended PSyclone version is 2.5.0.

Inside the virtual environment, the psyclone command should now be available:

$ psyclone --version
$ psyclone --help

Specific guidance on the use of PSyclone with NEMO is given in the PSyclone section in the developers part of this guide. Additional material may also be found in the rst files located in the sct subdirectory.

Set up your architecture configuration file

All compiler options in NEMO are controlled using files in ./arch/arch-'my_arch'.fcm, where my_arch is the name you use to refer to your computing environment.

Note

You can use build_arch-auto.sh to automatically set up your arch file:
cd arch
./build_arch-auto.sh

If you want further help on how to use this functionality, run: ./build_arch-auto.sh -h

The build_arch-auto.sh script will create an arch file called arch-auto.fcm

Alternatively, you can copy, rename and edit a configuration file from an architecture similar to your own. You will need to set appropriate values for all of the variables in the file. In particular, the variables %NCDF_HOME, %HDF5_HOME and %XIOS_HOME should be set to the installation directories used for the XIOS installation. For example:

%NCDF_HOME    /usr/local/path/to/netcdf
%HDF5_HOME    /usr/local/path/to/hdf5
%XIOS_HOME    /home/$( whoami )/path/to/xios-trunk
%OASIS_HOME   /home/$( whoami )/path/to/oasis

and, if the use of PSyclone is anticipated, its installation directory will also need to be set. For example:

%PSYCLONE_HOME       /path/to/Python/virtual/environment/PSyclone

Note

At this point the Python virtual environment can be deactivated (run: deactivate). The base path of the PSyclone installation added to the architecture file is sufficient for subsequent use of PSyclone provided the base python3 installation is available in the user’s environment.

Preparing an experiment

Create and compile a new configuration

The main script used to {re}compile and create an executable is called makenemo; it is located at the root of the working copy. It identifies the routines you need from the source code, builds the makefile and runs it. As an example, compile a MY_GYRE configuration from GYRE, which can be found in the cfgs directory (more information on GYRE can be found in the NEMO Reference Manual). The following example uses the ‘auto’ arch file (if you used the automatic build):

./makenemo -h # This is for help
./makenemo -m 'auto' -r GYRE -n 'MY_GYRE'

Then at the end of the configuration compilation, the MY_GYRE directory will have the following structure.

Directory   Purpose
BLD         BuiLD folder: target executable, libraries, preprocessed routines, …
EXP00       Run folder: link to executable, namelists, *.xml and IOs
EXPREF      Files under version control only for official configurations
MY_SRC      Your new routines or your modified copies of NEMO sources
WORK        Links to all Fortran routines that you will compile

After successful execution of the makenemo command, the executable called nemo is available in the EXP00 directory.

Viewing and changing list of active CPP keys

A CPP key is used to activate/deactivate certain parts of the code at the pre-compilation stage.

For a given configuration located in the cfgs directory (here called MY_CONFIG), the list of active CPP keys can be found in ./cfgs/'MY_CONFIG'/cpp_MY_CONFIG.fcm

This text file can be edited by hand or via makenemo to change the list of active CPP keys. Once changed, NEMO needs to be recompiled for the change to be taken into account. Note that most NEMO configurations will need the key_xios key for IOs. MPI parallelism is activated by default; use key_mpi_off to compile without MPI. For example, using makenemo:

./makenemo -m 'auto' -r 'MY_CONFIG' add_key 'key_mykey1 key_mykey2' del_key 'key_notwanted'
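
For reference, editing the file by hand amounts to changing its single fppkeys line, which lists the active keys; the keys shown in this illustrative example are arbitrary:

$ cat ./cfgs/MY_CONFIG/cpp_MY_CONFIG.fcm
bld::tool::fppkeys key_xios key_top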

Configure XIOS outputs

XIOS allows for efficient management of diagnostic outputs. This page gives a basic introduction to using XIOS with NEMO. Additional information is available at the XIOS wiki and in the NEMO Reference Manual.

Use of XIOS for NEMO IOs is activated using the pre-compiler key key_xios.

XIOS is controlled by means of XML input files that should be copied to your model run directory before running the model. Examples of these files can be found in the reference configurations’ subdirectories (./cfgs). The XIOS executable expects to find a file called iodef.xml in the model run directory. To improve readability, NEMO has the following ‘include’ statements in the iodef.xml file:

  • field_def_nemo-oce.xml (potential output variable definition for physics)

  • field_def_nemo-ice.xml (potential output variable definition for ice)

  • field_def_nemo-pisces.xml (potential output variable definition for biogeochemistry)

  • domain_def.xml and axis_def.xml (horizontal and vertical grid information)

All these files are available in the ./cfgs/SHARED directory.

Note

Most users will not modify the above xml files unless they want to add new diagnostics to the NEMO code.
Instead, the user defines the selection of output files of interest to them in the file_def_nemo-oce/ice/pisces.xml files.
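
As a sketch of what such a selection could look like (the grouping, output frequency and field shown here are illustrative only; consult the field_def files and the reference file_def examples for the actual identifiers):

<!-- Illustrative file_def_nemo-oce.xml fragment: request monthly means of sea surface temperature -->
<file_definition type="multiple_file">
  <file_group id="1m" output_freq="1mo">
    <file id="file1" name_suffix="_grid_T" description="ocean T grid variables">
      <field field_ref="sst" name="tos" />
    </file>
  </file_group>
</file_definition>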

XIOS can be used along with NEMO in two different modes:

Detached Mode

In detached mode, the XIOS executable runs on separate cores from the NEMO model. This is the recommended method for using XIOS for realistic model runs. To use this mode, set using_server to true at the bottom of the iodef.xml file:

<variable id="using_server" type="boolean">true</variable>

Make sure there is a copy of (or link to) your XIOS executable in the working directory, and allocate processors to XIOS in your job submission script.
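
As a sketch only (process counts and the exact MPMD launch syntax depend on your MPI implementation and batch system), a detached-mode launch might look like:

# Illustrative: 40 NEMO processes plus 4 dedicated XIOS server processes
mpirun -n 40 ./nemo : -n 4 ./xios_server.exe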

Attached Mode

In attached mode, XIOS runs on each of the cores used by NEMO. This method is less efficient than the detached mode but can be more convenient for testing or for small configurations. To activate this mode, simply set using_server to false in the iodef.xml file.

Note

For both of these modes, you can choose between the "one_file" and "multiple_file" options. With the former, output is collected and collated to directly produce one single file for your domain. With the latter, you will have as many output files as NEMO processes (if in attached mode) or XIOS processes (if in detached mode).

More makenemo options

makenemo has several other options that can control which source files are selected and the operation of the build process itself.

Output of makenemo -h
Usage:
------
./makenemo -[aru] CONFIG -m ARCH [-[dehjntv] ...] [{list_key,clean,clean_config}]
                                                  [{add_key,del_key} ...]

Mandatory
   -m    Computing architecture (./arch), FCM file describing the compilation settings

   and one of the following option (use 'all' arg to list available items)

   -r    Reference configuration (./cfgs), proven with long-term support
   -a    Academic test case (./tests), ready-to-use configuration with no support over time
   -u    Scripted remote configuration (see ./tests/rmt_cfgs.txt)

Optional
   -d    New set of sub-components (subfolders from ./src directory)
   -e    Path for alter patch  location (default: 'MY_SRC' in configuration folder)
   -h    Print this help
   -j    Number of processes to compile (0: dry run with no build)
   -n    Name for new configuration
   -s    Path for alter source location (default: 'src' root directory)
   -t    Path for alter build  location (default: 'BLD' in configuration folder)
   -v    Level of verbosity ([0-3])

Examples
   ¤ Configuration creation
        Build          : ./makenemo         -[aru] ... [...]
        Copy           : ./makenemo -n ...  -[aru] ... [...]
   ¤ Configuration management
        List CPP keys  : ./makenemo -n ... list_key
        Add-Remove keys: ./makenemo -n ... add_key '...' del_key '...'
        Fresh start    : ./makenemo -n ... clean
        Removal        : ./makenemo -n ... clean_config

These options can be useful for maintaining several code versions with only minor differences, but they should be used sparingly. Note, however, the -j option, which should be used more routinely to speed up the build process. For example:

./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' -j 8

will compile with up to 8 processes simultaneously.

Default behaviour

The first time you call makenemo, you need the -m option to specify the architecture configuration file (compiler and its options, routines and libraries to include); for subsequent compilations, it is assumed that you will be using the same compiler.

Tools used during the process

The various bash scripts used by makenemo (for instance, to create the WORK directory) are located in the mk subdirectory. In most cases, there should be no need for user intervention in these scripts. Occasionally, incomplete builds can leave the environment in an indeterminate state. If problems are experienced with subsequent attempts, then running:

./makenemo -m 'my_arch' -r 'MY_GYRE' clean

will prepare the directories for a fresh attempt and remove any intermediate files that may be causing issues.

The reference configurations that may be provided to the -r argument of makenemo are listed in the cfgs/ref_cfgs.txt file:

AGRIF_DEMO OCE ICE TOP NST
AMM12 OCE
C1D_PAPA OCE
GYRE_BFM OCE TOP
GYRE_PISCES OCE TOP
ORCA2_OFF_PISCES OCE TOP OFF
ORCA2_OFF_TRC OCE TOP OFF
ORCA2_SAS_ICE OCE ICE NST SAS
ORCA2_ICE_PISCES OCE TOP ICE NST ABL
ORCA2_ICE_ABL OCE ICE ABL
SPITZ12 OCE ICE
WED025 OCE ICE

User-added configurations will be listed in cfgs/work_cfgs.txt

Running the model

Once makenemo has run successfully, a symbolic link to the nemo executable is available in ./cfgs/MY_CONFIG/EXP00. For the reference configurations, the EXP00 folder also contains the initial input files (namelists, *.xml files for the IOs, …). If the configuration needs other input files, they have to be placed here.

cd 'MY_CONFIG'/EXP00
mpirun -n $NPROCS ./nemo   # $NPROCS is the number of processes
                           # mpirun is your MPI wrapper