ESyS-Particle is Open Source software for particle-based numerical modelling. The software implements the Discrete Element Method (DEM), a widely used technique for modelling processes involving large deformations, granular flow and/or fragmentation.
1. Online documentation
2. Availability
3. Running small test cases on wessel
4. Analysing ESys-Particle outputs
5. Running large simulation on abel
6. Troubleshooting
ESys-Particle is available on wessel (the main server at the Department of Geosciences) and on the UiO HPC system called abel.
To check which version is available:
module avail esysparticle

------------------------------ /cluster/etc/modulefiles ------------------------------
esysparticle/2.1            esysparticle/2.2.2
esysparticle/2.2.2_patch    esysparticle/2.3.3(default)
To set-up your environment:
module load esysparticle
This loads the default version of ESys-Particle. Please note that the default version may vary from one machine to another. Therefore, we suggest you always specify the version you wish to load:

module load esysparticle/2.3.3
Please note that more recent versions are available on our server (wessel).
Running small test cases on wessel:
Wessel is a very small server available to all users at the Department of Geosciences. A total of 24 processors is available, but the machine is meant for interactive access, so only "small" simulations (small in both memory and CPU usage, including the number of processors) should be run on wessel.
As a general rule, never use more than 8 processors or more than 8 GB of memory. If you need more resources, please use abel (contact email@example.com if you need further advice on how to access abel).
All the examples from the ESysParticle Tutorial are available on github at https://github.com/annefou/ESys-Particle/tree/master/examples.
For instance, to run the first example bingle.py on wessel using 2 processors:

module load esysparticle
mpirun -np 2 esysparticle bingle.py
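The reason for -np 2 here: per the ESysParticle Tutorial, a run uses one master process plus the worker processes defined by mpiDimList in the driver script (treat the exact layout as an assumption and check your own script). A minimal sketch of the arithmetic:

```python
# Sketch: compute the mpirun -np value for an ESys-Particle run.
# Assumption (from the ESysParticle Tutorial): the simulation uses one
# master process plus the workers given by mpiDimList in the driver
# script, so -np = nx * ny * nz + 1.
def required_np(mpi_dim_list):
    nx, ny, nz = mpi_dim_list
    return nx * ny * nz + 1

# bingle.py uses mpiDimList=[1, 1, 1]: one worker plus one master.
print(required_np([1, 1, 1]))  # -> 2
```

If you increase the worker grid in your script, adjust the mpirun -np value (and, on abel, the SLURM ntasks setting) to match.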
Analysing ESys-Particle outputs:
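ESys-Particle can write simulation data to plain-text files during a run (for example via its field saver modules). The exact format depends on which saver you configure; the snippet below is only a sketch that assumes a whitespace-separated file with one particle per line and columns x, y, z, radius (a hypothetical layout, so check your own output files first):

```python
# Sketch: load particle data from a hypothetical whitespace-separated
# output file with columns x, y, z, radius (one particle per line).
# Adjust the column layout to match your actual ESys-Particle output.
def read_particles(path):
    particles = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 4:
                continue  # skip headers, comments and blank lines
            x, y, z, radius = map(float, fields[:4])
            particles.append((x, y, z, radius))
    return particles
```

From there the data can be inspected or plotted with standard Python tools (e.g. matplotlib) on your own machine rather than on wessel or abel.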
Running large simulation on abel:
When running large cases, and more generally for your research, it is best to use HPC resources. On most HPC systems you cannot run "interactively" for more than about 30 minutes of CPU time. It is also likely you will run ESys-Particle with MPI, using several processors.
Create a job command file
Create a script (or job command file) in which all the resources you need to run ESys-Particle are specified. Let's call it esysparticle.job:
#!/bin/bash
# Job name:
#SBATCH --job-name=run_esysparticle
#
# Project (change it to your NOTUR or uio project):
#SBATCH --account=XXXXX
#
# Wall clock limit (to be adjusted!):
#SBATCH --time=24:0:0
#
# Max memory usage per core (MB):
#SBATCH --mem-per-cpu=4G
#
# Adjust the number of processors (MPI tasks):
#SBATCH --ntasks=64
#
# Set up job environment: DO NOT CHANGE
export LANG=en_US.UTF-8
export LC_ALL=en_US
source /cluster/bin/jobsetup
ulimit -l unlimited

module load esysparticle
Please note that you need to adjust account (use your NOTUR account if you have one, or uio), time and ntasks (the number of processors required for your ESys-Particle simulation). The number of tasks you need depends on your ESys-Particle configuration.
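The job script above stops after loading the module; you still need a line that actually launches the simulation. A sketch, to be appended at the end of esysparticle.job (the script name mysimulation.py is a placeholder for your own driver script):

```shell
# Launch the simulation; SLURM sets $SLURM_NTASKS to the --ntasks
# value requested above. Replace mysimulation.py with your own script.
mpirun -np $SLURM_NTASKS esysparticle mysimulation.py
```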
Submit/monitor your job command file
Submit your job

sbatch esysparticle.job
Monitor your job
squeue -u $USER
For more information on the batch system on abel, see the UiO documentation for the Abel cluster.