= Introduction =
We have been given a large allocation on Abel for computational work. This page explains how to get access and start using the resources. All use of Abel needs to draw CPU hours from an allocation.

'''Mailing list'''
  
 
If you're not already on it, subscribe to the appropriate mailing list. We use this list to distribute information on the use of the CEES HPC resources - both our own nodes and the CPU allocation on Abel. See [https://wiki.uio.no/mn/bio/cees-bioinf/index.php/Main_Page#Mailing_lists the main wiki page], then come back here.
  
= Getting access to CPU hours =
Fill out this form:
[https://www.notur.no/notur/sites/drupal.uninett.no.notur/files/User-account-application-0613.pdf https://www.notur.no/notur/sites/drupal.uninett.no.notur/files/User-account-application-0613.pdf]
  
NOTES

11. I would like an account on the following resources: '''abel'''
  
12. Start date (yyyy-mm-dd): '''use&nbsp;today's date''' End date (yyyy-mm-dd):'''&nbsp;when your project/contract ends''' (don't worry, we can extend access beyond that if needed)
  
13. Existing project (format nn****k for Notur):* '''NN9244K'''
  
14. Notur/NorStore user account (if you already have one): '''N/A'''<br/>Otherwise provide preferred / local user name: _____________________ (max. 8 chars) -->'''Please fill out your UiO user name'''
  
15. If you want to use a grid certificate (GSI), provide the distinguished name (DN): '''N/A'''
  
Name of the project manager:* '''Kjetill Jakobsen'''
  
Give the form to Kjetill Jakobsen for submission. Ask Lex for help if needed.
  
= Using Abel =
  
== Interactive login ==
  
See also the [http://www.uio.no/english/services/it/research/hpc/abel/help/user-guide/interactive-logins.html interactive logins section of the Abel user guide].
 
<pre>ssh abel.uio.no
</pre>
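
If your local user name differs from your UiO user name, specify it explicitly (a sketch; replace ''uio-username'' with your own UiO user name):

<pre>ssh uio-username@abel.uio.no
</pre>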
Getting a single CPU for 11 hours:

<pre>qlogin --account nn9244k --nodes 1 --ntasks-per-node 1
</pre>
Same, but for 24 hours:

<pre>qlogin --account nn9244k --nodes 1 --ntasks-per-node 1 --time 24:00:00</pre>

NOTE: you are ''sharing the node with others'', so do not use more than the number of CPUs you asked for.
  
NOTE: a pipeline of Unix commands may use one CPU per command:
<pre>grep something somefile | sort | uniq -c</pre>
This may use three CPUs!
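
If you are unsure how many CPUs your commands are using, you can list your own processes on the node (a minimal sketch using standard ps options):

<pre># show the PID, CPU usage and command name of your own processes
ps -u $USER -o pid,pcpu,comm
</pre>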
  
Getting a whole node with 16 CPUs and 64 GB RAM:
 
<pre>qlogin --account nn9244k --nodes 1 --ntasks-per-node 16 --time 24:00:00
</pre>
Even though each node has 16 CPUs, due to hyperthreading you can run '''up to 32 processes''' simultaneously.
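
For example, one simple way to fill the slots you asked for is to start processes in the background and wait for them all to finish (a sketch; ''sample_*.fastq'' and ''count_reads.sh'' are hypothetical names):

<pre># start one background process per input file, then wait for all of them
for f in sample_*.fastq
do
    ./count_reads.sh "$f" > "$f.counts" &
done
wait
</pre>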
 
  
You have a large work area available as well:  
<pre>echo $SCRATCH
cd $SCRATCH
</pre>
Using

<pre>squeue -u your_username
</pre>
will tell you your job ID; the work area is
 
<pre>/work/jobID.d
</pre>
'''NOTE''': all data in this area is '''deleted''' once you log out.
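
A typical pattern is therefore to copy your input to the scratch area, run the analysis there, and copy the results back before logging out (a sketch; ''reads.fastq'' and ''results/'' are hypothetical names):

<pre># work on the fast scratch area and save the results before logging out
cp ~/reads.fastq $SCRATCH/
cd $SCRATCH

# ... run your analysis here, writing output to results/ ...

cp -r results ~/
</pre>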
  
Quitting:
 
<pre>logout (or ctrl-d)
</pre>
== SLURM scripts ==
Information coming; until then, see the [http://www.uio.no/english/services/it/research/hpc/abel/help/user-guide/queue-system.html queue system section of the Abel user guide].
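
In the meantime, a minimal batch script might look like this (a sketch only; ''myjob'', ''myprogram'' and the resource values are placeholders - check the queue system documentation for site-specific requirements before relying on it):

<pre>#!/bin/bash
# minimal SLURM batch script sketch
#SBATCH --job-name=myjob
#SBATCH --account=nn9244k
#SBATCH --time=24:00:00
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=4G

# your commands go here
./myprogram input.txt > output.txt
</pre>

Submit it with

<pre>sbatch myjob.slurm</pre>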
  
 
  
== Temporary, fast access disk space on Abel ==
  
From the [http://www.uio.no/english/services/it/research/hpc/abel/newsletters/abel-newsletter-3-2013.html#toc3 Abel newsletter #3]:
<blockquote>'''Update on Abel scratch file-system usage'''<br/>While a job runs, it has access to a temporary scratch directory on the shared file system /work. The directory is individual for each job, is automatically created, and is deleted when the job finishes (or gets requeued). There is no backup of this directory. The name of the directory is stored in the environment variable $SCRATCH, which is set within the job script. If your job is I/O intensive, we strongly recommend copying its work files to $SCRATCH and running the program there.<br/>Sometimes, one needs to use a file for several jobs, or have it available some time after the job finishes. To accommodate this need, we have now created a directory /work/users/$USER for each user, where $USER is the user's user name. The purpose of the directory is to stage files that are needed by more than one job. Files in this directory are automatically deleted after a certain time (currently 45 days). There is no backup of files in /work/users/.<br/></blockquote>
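
For example, to stage a reference file that several jobs will read (a sketch; ''reference.fasta'' is a hypothetical file name):

<pre># stage a file used by several jobs; it is deleted automatically after ~45 days
mkdir -p /work/users/$USER
cp ~/reference.fasta /work/users/$USER/
</pre>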
