Revision as of 11:08, 27 June 2017

ESyS-Particle is Open Source software for particle-based numerical modelling. The software implements the Discrete Element Method (DEM), a widely used technique for modelling processes involving large deformations, granular flow and/or fragmentation.

Online documentation:


ESyS-Particle is available on wessel (the main server at the Department of Geosciences) and on the UiO HPC system called abel.

To check which version is available:

module avail esysparticle
------------------------------------- /cluster/etc/modulefiles ------------------------------------------------
esysparticle/2.1            esysparticle/2.2.2          esysparticle/2.2.2_patch    esysparticle/2.3.3(default)

To set up your environment:

module load esysparticle

This loads the default version of ESyS-Particle. Please note that the default version may vary from one machine to another. Therefore, we suggest you always specify the version you wish to load:

module load esysparticle/2.3.3

Please note that more recent versions are available on our server (wessel).

Running small test cases on wessel:

Wessel is a very small server available to all users at the Department of Geosciences. A total of 24 processors is available, but the machine is meant for interactive use, so only "small" simulations (modest memory, CPU usage and number of processors) should be run on wessel.

As a general rule, never use more than 8 processors or more than 8 GB of memory. If you need more resources, please use abel (contact us if you need further advice on how to access abel).

All the examples from the ESyS-Particle Tutorial are available on GitHub at

For instance, to run the first example from the tutorial (bingle.py) on wessel, using 2 processors:

module load esysparticle

mpirun -np 2 esysparticle bingle.py

Analysing ESyS-Particle outputs:

- VisIt: VisIt is an open-source, interactive, scalable visualization, animation and analysis tool.

- ParaView: ParaView is an open-source, multi-platform data analysis and visualization application. ParaView is available on wessel, abel, cruncher and viz2 (NorStore remote visualization servers).

- Python (available on all platforms).

On some platforms, you need to load the corresponding modulefile to set up your environment before using these packages:

module load paraview
module load visit
module load python

On wessel, a default version of ParaView is installed and there is no need to load a paraview module.
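As a starting point for the Python option, here is a minimal post-processing sketch. The file format is an assumption for illustration: it supposes you configured your simulation to dump plain-text snapshots with one particle per row as "x y z radius"; adapt the column layout to whatever your own output actually contains.

```python
# Minimal sketch of post-processing an ESyS-Particle snapshot with plain Python.
# ASSUMED format: each data row holds "x y z radius" for one particle.

def read_particles(path):
    """Parse one snapshot file into a list of (x, y, z, radius) tuples."""
    particles = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip headers and blank lines
            x, y, z, r = (float(v) for v in fields[:4])
            particles.append((x, y, z, r))
    return particles

def mean_radius(particles):
    """Average particle radius over one snapshot."""
    return sum(p[3] for p in particles) / len(particles)

if __name__ == "__main__":
    import os
    import tempfile

    # Tiny synthetic snapshot standing in for a real output file.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as fh:
        fh.write("0.0 0.0 0.0 1.0\n1.0 2.0 3.0 3.0\n")
        path = fh.name
    parts = read_particles(path)
    print(len(parts), mean_radius(parts))  # prints: 2 2.0
    os.remove(path)
```

The same loop extends naturally to per-timestep statistics (e.g. size distributions) by iterating over a series of snapshot files.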

Running large simulations on abel:

When running large cases, and more generally for your research, it is best to use HPC resources. On most HPC systems, you cannot run "interactively" for more than a CPU-time limit of about 30 minutes. It is also likely that you will run ESyS-Particle with MPI, using several processors.

You can also use what we call "interactive login" to access compute nodes on abel. See the interactive logins documentation.

Create a job command file

Create a script (or job command file) where all the resources you need to run ESyS-Particle are specified. Let's call it esysparticle.job:

# Job name:
#SBATCH --job-name=run_esysparticle
# Project (change it to your NOTUR or UiO project):
#SBATCH --account=XXXXX
# Wall clock limit (to be adjusted!):
#SBATCH --time=24:0:0
# Max memory usage per core (to be adjusted!):
#SBATCH --mem-per-cpu=4G
# Number of processors (MPI tasks, to be adjusted!):
#SBATCH --ntasks=64
# Set up job environment: DO NOT CHANGE
export LANG=en_US.UTF-8
export LC_ALL=en_US
source /cluster/bin/jobsetup
ulimit -l unlimited
module load esysparticle

# Match the number of MPI processes to --ntasks
# (replace myscript.py with your own ESyS-Particle script):
mpirun -np $SLURM_NTASKS esysparticle myscript.py

Please note that you need to adjust the account (use your NOTUR account if you have one, otherwise your UiO project), the time limit and ntasks (the number of processors required for your ESyS-Particle simulation).

Adjust ntasks

The number of tasks you need depends on your ESyS-Particle configuration.
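Before submitting, it is worth sanity-checking that the resources you request actually fit on the cluster's nodes. This is a hypothetical back-of-the-envelope helper, not part of ESyS-Particle or SLURM; it just multiplies the --ntasks and --mem-per-cpu values from your job file:

```python
# Hypothetical sizing check (not an ESyS-Particle or SLURM tool):
# total memory a job requests is ntasks x mem-per-cpu.

def total_memory_gb(ntasks, mem_per_cpu_gb):
    """Total memory requested by a SLURM job, in GB."""
    return ntasks * mem_per_cpu_gb

# The sample job file above asks for 64 tasks at 4 GB per core:
print(total_memory_gb(64, 4))  # prints: 256
```

If the total exceeds what the nodes you target can provide, reduce ntasks or mem-per-cpu (or spread the job over more nodes) before submitting.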

Submit/monitor your job command file

Submit your job

sbatch esysparticle.job

Monitor your job

squeue -u $USER

For more information, see the documentation for the batch system on abel.

