Category:ESysParticle

From mn/geo/geoit
Latest revision as of 11:16, 27 June 2017

ESyS-Particle is Open Source software for particle-based numerical modelling. The software implements the Discrete Element Method (DEM), a widely used technique for modelling processes involving large deformations, granular flow and/or fragmentation.
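The core DEM loop — compute contact forces from particle overlaps, then integrate Newton's equations of motion — can be illustrated in a few lines. The following is a toy 1-D sketch of the method's idea with a linear contact spring; it is not ESyS-Particle's actual implementation:

```python
# Toy 1-D DEM sketch (NOT ESyS-Particle code): two equal spheres on a
# line, linear elastic contact force, symplectic Euler time integration.

def simulate(x1, x2, v1, v2, r=0.5, k=1000.0, m=1.0, dt=1e-4, steps=20000):
    """Advance two 1-D particles; the spring force acts only on overlap."""
    for _ in range(steps):
        overlap = (r + r) - (x2 - x1)      # positive when the spheres touch
        f = k * overlap if overlap > 0 else 0.0
        a1, a2 = -f / m, f / m             # equal and opposite contact force
        v1 += a1 * dt                      # update velocities first ...
        v2 += a2 * dt
        x1 += v1 * dt                      # ... then positions (symplectic)
        x2 += v2 * dt
    return x1, x2, v1, v2
```

With two equal unit masses launched at each other, the elastic contact reverses both velocities while conserving total momentum, which is the behaviour a DEM bounce should show.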

Online documentation:

https://launchpad.net/esys-particle

Availability:

ESys-Particle is available on wessel (the main server at the Department of Geosciences) and on the UiO HPC system called abel.

To check which version is available:


module avail esysparticle
------------------------------------- /cluster/etc/modulefiles ------------------------------------------------
esysparticle/2.1            esysparticle/2.2.2          esysparticle/2.2.2_patch    esysparticle/2.3.3(default)


To set up your environment:


module load esysparticle

This loads the default version of ESys-Particle. Please note that the default version may vary from one machine to another, so we suggest you always specify the version you wish to load:


module load esysparticle/2.3.3

Please note that more recent versions are available on our server (wessel).


Running small test cases on wessel:

Wessel is a very small server available to all users at the Department of Geosciences. A total of 24 processors is available, but the machine is meant for interactive access, so only "small" simulations (in terms of memory, CPU usage and number of processors) should be run on wessel.

As a general rule, never use more than 8 processors or more than 8 GB of memory. If you need more resources, please use abel (contact drift@geo.uio.no if you need further advice on how to access abel).

All the examples from the ESysParticle Tutorial (https://launchpad.net/esys-particle/2.3/2.3.1/+download/ESyS-Particle_Tutorial.pdf) are available on GitHub at https://github.com/annefou/ESys-Particle/tree/master/examples.

For instance, to run the first example bingle.py (https://github.com/annefou/ESys-Particle/blob/master/examples/esysparticles/bingle.py) on wessel using 2 processors:

module load esysparticle

mpirun -np 2 esysparticle bingle.py

Analysing ESys-Particle outputs:

- VisIt (https://wci.llnl.gov/simulation/computer-codes/visit/): an open-source, interactive, scalable visualization, animation and analysis tool.

- ParaView (https://www.paraview.org/): an open-source, multi-platform data analysis and visualization application. ParaView is available on wessel, abel, cruncher and viz2 (NorStore remote visualization servers).

- Python (available on all platforms).


On some platforms, you may need to load the corresponding modulefile to set up your environment before using these packages:

module load paraview
module load visit
module load python

On wessel, a default version of ParaView is installed, so there is no need to load a paraview module.

Running large simulations on abel:

When running large cases, and more generally for your research, it is best to use HPC resources. On most HPC systems, you cannot run "interactively" for more than 30 minutes of CPU time. It is also likely that you will run ESys-Particle with MPI, using several processors.

You can also use what we call "interactive login" to access compute nodes on abel. See the interactive logins documentation: http://www.uio.no/english/services/it/research/hpc/abel/help/user-guide/interactive-logins.html


Create a job command file

Create a script (or job command file) specifying all the resources you need to run ESys-Particle. Let's call it esysparticle.job:

#!/bin/bash
# Job name:
#SBATCH --job-name=run_esysparticle
#
# Project (change it to your NOTUR or uio project):
#SBATCH --account=XXXXX
#
# Wall clock limit (to be adjusted!):
#SBATCH --time=24:0:0
#
# Max memory usage per core (MB):
#SBATCH --mem-per-cpu=4G
#
# Adjust the number of processors (MPI tasks):
#SBATCH --ntasks=2
#
# Set up job environment: DO NOT CHANGE
export LANG=en_US.UTF-8 
export LC_ALL=en_US 
source /cluster/bin/jobsetup
ulimit -l unlimited
module load esysparticle

cp gravity_cube.py $SCRATCH
chkfile *.png

cd $SCRATCH
mpirun -np 2 esysparticle gravity_cube.py


Please note that you need to adjust account (use your Notur account if you have one, or your uio account), time and ntasks (the number of processors required for your ESys-Particle simulation).

Adjust ntasks

The number of tasks you need depends on your ESys-Particle configuration.
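If the --ntasks directive and the mpirun -np value disagree, the job either oversubscribes or wastes cores. One way to keep them in sync (a sketch, relying on the SLURM_NTASKS environment variable that SLURM sets for batch jobs) is to let the script derive -np from the allocation:

```shell
# Fragment for esysparticle.job: SLURM sets $SLURM_NTASKS to the value
# of --ntasks, so mpirun always matches the requested allocation.
mpirun -np "$SLURM_NTASKS" esysparticle gravity_cube.py
```

With this fragment, changing --ntasks is the only edit needed when you scale the job up or down.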

Submit/monitor your job command file

Submit your job

sbatch esysparticle.job

Monitor your job

squeue -u $USER


For more information on the batch system on abel, see the abel user-guide documentation.

Troubleshooting:
