Abel use old
Get onto Abel

ssh abel.uio.no

Getting a single CPU for 11 hrs

qlogin --account nn9244k --nodes 1 --ntasks-per-node 1

Same, for 24 hrs

qlogin --account nn9244k --nodes 1 --ntasks-per-node 1 --time 24:00:00


NOTE you are sharing the node with others, do not use more than the number of CPUs you asked for
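
Many tools also take an explicit thread count; setting it to match your allocation is the easiest way to stay within your share. For example, GNU sort has a --parallel option (the file name is just a placeholder):

sort --parallel=1 bigfile.txt > bigfile.sorted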

NOTE a pipeline of Unix commands may use one CPU per command:

grep something somefile | sort | uniq -c

This may use three CPUs!
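
If you need to run such a pipeline, one option is to ask for one task per command in the pipe, e.g.:

qlogin --account nn9244k --nodes 1 --ntasks-per-node 3 --time 24:00:00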


Getting a whole node with 16 CPUs and 64 GB RAM:

qlogin --account nn9244k --nodes 1 --ntasks-per-node 16 --time 24:00:00

Even though each node has 16 CPUs, due to hyperthreading, you can run up to 32 processes simultaneously.
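
For example, on a whole node you could run several independent commands in the background and wait for them all to finish (the file names are just placeholders):

for f in sample1.fq sample2.fq sample3.fq; do
    gzip "$f" &    # the & runs each command in the background
done
wait    # block until all background jobs are done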


You have a large work area available as well:

echo $SCRATCH
cd $SCRATCH
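
A typical pattern is to copy your input data there and run the job from the scratch area (the file name is just a placeholder):

cp ~/mydata.fastq $SCRATCH/
cd $SCRATCH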

Using

squeue -u <your username>

will tell you the job ID; the work area is

/work/jobID.d
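
For example, if squeue reports job ID 1234567 (a made-up number here), the work area would be:

cd /work/1234567.d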

NOTE all data in this area is deleted once you log out
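
So copy anything you want to keep back to your home directory before logging out (the file name is just a placeholder):

cp $SCRATCH/results.txt ~/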


Quitting:

logout (or Ctrl-D)