Category:ESysParticle
Online documentation:
https://launchpad.net/esys-particle
Availability:
Running small test cases on wessel:
Analysing ESys-Particle outputs:
Running large simulation on abel:
When running large cases, and more generally for your research, it is best to use HPC resources. On most HPC systems you cannot run "interactively" for more than about 30 minutes of CPU time. You will also most likely run ESys-Particle with MPI, using several processors.
Create a job command file
Create a script (a job command file) in which all the resources you need to run ESys-Particle are specified. Let's call it esysparticle.job:
#!/bin/bash
# Job name:
#SBATCH --job-name=run_esysparticle
#
# Project (change it to your NOTUR or UiO project):
#SBATCH --account=XXXXX
#
# Wall clock limit (to be adjusted!):
#SBATCH --time=24:0:0
#
# Max memory usage per core:
#SBATCH --mem-per-cpu=4G
#
# Adjust the number of processors (MPI tasks):
#SBATCH --ntasks=64
#
# Set up job environment: DO NOT CHANGE
export LANG=en_US.UTF-8
export LC_ALL=en_US
source /cluster/bin/jobsetup
ulimit -l unlimited

module load esysparticle
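The script above only sets up the environment; you still need a final line that launches the simulation. A hedged sketch, assuming your own Python driver script is named mysim.py (a placeholder) and using the esysparticle launcher provided by the module:

```shell
# Append to the end of esysparticle.job.
# mysim.py is a placeholder for your own ESys-Particle driver script;
# the esysparticle wrapper takes care of starting the MPI processes.
esysparticle mysim.py
```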
Please note that you need to adjust the account (use your NOTUR account if you have one, or your UiO project), the wall clock time, and ntasks (the number of processors required for your ESys-Particle simulation).
Adjust ntasks
The number of tasks you need depends on your ESys-Particle configuration.
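As an illustration (a sketch, not taken from this page): in an ESys-Particle driver script the domain decomposition is typically set via LsmMpi(numWorkerProcesses=..., mpiDimList=[nx, ny, nz]), where numWorkerProcesses = nx*ny*nz, and the job usually needs one extra MPI task for the master (controller) process. A small helper to compute the matching SLURM --ntasks:

```python
# Hedged sketch: required_ntasks is an illustrative helper, not part of
# the ESys-Particle API. It assumes numWorkerProcesses equals the product
# of the mpiDimList subdivisions, plus one task for the master process.
def required_ntasks(mpi_dim_list):
    """Return the SLURM --ntasks needed for a given mpiDimList."""
    nx, ny, nz = mpi_dim_list
    return nx * ny * nz + 1

print(required_ntasks([4, 4, 4]))  # a 4x4x4 decomposition needs 65 tasks
```

If the computed value differs from the --ntasks in your job script, adjust the script before submitting.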
Submit/monitor your job command file
Submit your job
sbatch esysparticle.job
Monitor your job
squeue -u $USER
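Beyond squeue, a few standard SLURM commands are handy while the job runs; a sketch, where 123456 is a placeholder for the job ID printed by sbatch:

```shell
# Long listing: job state, time used, and node count for your jobs.
squeue -u "$USER" -l

# Full scheduler record for one job (123456 is a placeholder job ID).
scontrol show job 123456

# Cancel the job if something went wrong.
scancel 123456
```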
For more information on the batch system on abel, see the Abel user documentation.
Troubleshooting: