Abel use old
From mn/bio/cees-bioinf
Revision as of 08:18, 13 December 2012
Get onto Abel:
ssh abel.uio.no
Getting a single CPU for 11 hrs:
qlogin --account nn9244k --nodes 1 --ntasks-per-node 1
The same, but for 24 hrs:
qlogin --account nn9244k --nodes 1 --ntasks-per-node 1 --time 24:00:00
Getting a whole node with 16 CPUs and 64 GB RAM:
qlogin --account nn9244k --nodes 1 --ntasks-per-node 16 --time 24:00:00
Even though each node has 16 CPUs, you can run up to 32 processes simultaneously thanks to hyperthreading.
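One common way to use those slots is to launch several independent tasks in the background and wait for all of them. A minimal sketch (the task here just writes a line; replace the subshell body with your real program):

```shell
# Start several independent tasks in the background, then wait for all.
rm -f results.txt
for i in 1 2 3 4; do
    ( echo "task $i done" >> results.txt ) &
done
wait                      # returns once every background task has exited
wc -l < results.txt       # prints 4, one line per finished task
```

With a whole node and hyperthreading you could raise the loop count to 32 before tasks start competing for CPU.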
You have a large work area available as well:
echo $SCRATCH
cd $SCRATCH
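A typical scratch workflow is: copy input to $SCRATCH, do the work there, and copy results back before logging out. A sketch, with a stand-in command for the real analysis ($SCRATCH is set by the queue system; the temporary-directory fallback is only so the sketch runs outside the cluster):

```shell
# Stage input to the scratch area, run there, copy results back.
SCRATCH=${SCRATCH:-$(mktemp -d)}       # fallback only for trying this outside a job
echo "example input" > input.txt
cp input.txt "$SCRATCH/"
cd "$SCRATCH"
tr a-z A-Z < input.txt > result.txt    # stand-in for the real analysis
cp result.txt "$OLDPWD/"               # copy results back before logging out
```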
Using
squeue -u your_username
will tell you the job ID; the work area is
/work/jobID.d
NOTE: all data in this area is deleted once you log out.
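To illustrate the naming scheme, the work directory is just the job ID with a ".d" suffix under /work. In a real session the ID comes from squeue as shown above; the value here is hypothetical so the sketch runs anywhere:

```shell
# Hypothetical job ID; on the cluster, read it from squeue instead.
jobid=1234567
workdir=/work/${jobid}.d
echo "$workdir"          # prints /work/1234567.d
```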
Quitting:
logout (or Ctrl-D)