Submit R jobs on Abel

This is an example of an R submit script for use on the Abel cluster. Abel usage is documented at UiO's HPC pages.


For normal off-line, non-parallel R runs on any Linux server, you can simply do, e.g.:

module load R
R CMD BATCH R_program.r
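
Note that R CMD BATCH writes all console output of the run to a .Rout file next to the script, so you can check the result afterwards (assuming the R_program.r file name from above):

tail R_program.r.Rout   # console output of the run, including any error messages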

For large jobs, the following SLURM batch script should set you up on the Abel cluster:

#!/bin/bash
# Job name:
#SBATCH --job-name=R
#
# Project (change to your own project account):
#SBATCH --account=geofag
#
# Wall clock limit:
#SBATCH --time=1000:0:0
#
# Max memory usage per core (MB):
#SBATCH --mem-per-cpu=2000M
#
# Number of cores (one task running 12 OpenMP threads):
#SBATCH --cpus-per-task=12
#

##Setup job environment
source /cluster/bin/jobsetup

#Load R module

module load R

#Set the number of OpenMP threads and load MPI

export OMP_NUM_THREADS=12
module load R openmpi.gnu
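#
# Note: mpirun -np 1 starts a single R process; the parallelism comes from
# the 12 OpenMP threads set above (used e.g. by threaded numerical libraries)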
mpirun -np 1 R CMD BATCH --no-save --no-restore R_program.r
#End of script
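
Once the script is saved to a file (here assumed to be called R_job.slurm; the name is arbitrary), submit and monitor it with the standard SLURM tools:

sbatch R_job.slurm      # submit the job; prints the job ID
squeue -u $USER         # list your queued and running jobs
scancel <jobid>         # cancel a job by its ID if needed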