User manual cod nodes old

From mn/bio/cees-bioinf
Revision as of 09:50, 15 June 2012 by Alexajo@uio.no (talk | contribs) (First version)


This document describes how to use the high-performance compute nodes of the cod group at CEES. We have the following resources available:

cod1: 24 CPUs, 128 GB of RAM, ~1 TB disk space
cod2: 24 CPUs, 128 GB of RAM, ~1.3 TB disk space
cod3: 64 CPUs, 512 GB of RAM, ~XX TB disk space
cod4: 64 CPUs, 512 GB of RAM, ~XX TB disk space

Getting access

Ask Lex or Karin

Mailing list

If you're not already on it, get subscribed to the cees-hpc mailing list: https://sympa.uio.no/bio.uio.no/subscribe/cees-hpc

If you intend to use one of the nodes for an extended period of time, please send an email to this list.

Logging in

ssh username@cod1.titan.uio.no

When you are on the UiO network, it is enough to write:

ssh cod1
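As an optional convenience (not part of the original manual), you can add an entry to your ~/.ssh/config so that the short name also works from outside the UiO network; `username` is a placeholder for your UiO username:

```
Host cod1
    HostName cod1.titan.uio.no
    User username
```

With this entry, `ssh cod1` connects to the full hostname from anywhere.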

All nodes have titan and your titan home area mounted, so all files located in /projects are available on the cod nodes. In addition, the nodes have local disks, currently:

/data --> for permanent files, e.g. input to your program
/work --> working area for your programs

A few important things to note:
- reading and writing data to/from /data and /work is much faster and more efficient than to/from /projects
- data on /projects is backed up by USIT, but NOT data on /data and /work

This leads to the following strategy for choosing which disk to use:
- for a quick command, you may use /projects
- for a long-running program, or one that generates a lot of data over a long time, use /data and /work
- having your program write a lot of data over a long time to a file on /projects causes problems for the backup system, as the file may change during backup
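The strategy above can be sketched as a small helper function. This is a hypothetical sketch, not an official script: `run_staged`, the `staged` subfolder, and the ability to override the disk locations via LOCAL_DISK and WORK_DISK are all assumptions introduced here for illustration.

```shell
#!/bin/sh
# run_staged: hypothetical helper sketching the recommended workflow.
# It copies an input file from a (backed-up) source directory to the fast
# local disk, runs the given command on it with output on the local work
# area, and copies the result back to the source directory when done.
# LOCAL_DISK and WORK_DISK default to the cod-node disks but can be overridden.
run_staged() {
    src_dir=$1; input=$2; shift 2
    local_dir=${LOCAL_DISK:-/data/$USER}/staged
    work_dir=${WORK_DISK:-/work/$USER}/staged
    mkdir -p "$local_dir" "$work_dir"
    cp "$src_dir/$input" "$local_dir/"            # stage input on local disk
    "$@" "$local_dir/$input" > "$work_dir/$input.out"
    cp "$work_dir/$input.out" "$src_dir/"         # result back to backed-up area
}
```

For example, `run_staged /projects/454data/in_progress/myproject reads.fasta my_program` would stage reads.fasta locally, run the (hypothetical) my_program on it, and copy the output back so it is covered by the backup.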

TIP: using screen

After starting a long-running job, you cannot close the terminal window, or the job will be cancelled. Instead, run the job from within a 'screen': type

screen

You have now started a 'new' terminal:
- start your job
- press ctrl-a d, that is, the CTRL key together with the 'a' key, followed by the 'd' key

You are now back in the terminal where you started. You can close this terminal; the one behind the screen still exists and continues running. To get back into the now hidden one, type

screen -rd

CEES project data

The CEES projects are organised as follows: /projects/454data is the main area (at some point this will change names). Access is for those in the seq454 unix user group (soon to change name to ceesdata). Please check that files and folders you create have the right permissions:

chgrp -R seq454 yourfolder
chmod -R 770 yourfolder
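The two permission commands can be wrapped in a small helper, sketched below. This is a hypothetical convenience, not part of the manual: the `fix_perms` name and the GROUP override (defaulting to seq454) are assumptions introduced for illustration.

```shell
#!/bin/sh
# fix_perms: hypothetical helper applying the recommended permissions to a
# folder and everything inside it. GROUP defaults to seq454 but can be
# overridden, e.g. for testing or for a future group name.
fix_perms() {
    folder=$1
    chgrp -R "${GROUP:-seq454}" "$folder"   # give the group ownership
    chmod -R 770 "$folder"                  # rwx for owner and group, none for others
}
```

Usage: `fix_perms yourfolder` after creating new files under /projects/454data.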

It is possible to restrict access to a subgroup of those that are in the seq454 group; please ask Lex/Karin.

Folders in /projects/454data:

projects --> finished projects; data belonging to an (accepted) publication
databases --> permanent files, e.g. reference genomes
in_progress --> the main working area. Here you store data for unfinished projects
bin --> programs and scripts are located here
lib --> libraries needed by certain programs
src --> source files for some of the programs in the bin folder
exchange --> for exchanging files with non-members
bioportal --> used by the NSC
cees --> will be removed at a later date
scripts --> to be removed (empty)
utsp --> to be removed
www_docs_old --> to be removed
runs --> to be removed

It is recommended to put the /projects/454data/bin folder in your PATH. Include the following line in your ~/.bash_login file:

export PATH=/projects/454data/bin:$PATH

Note to SLURM users

If you are used to submitting jobs through a SLURM script, note that this will not work on the cod nodes. Here you have to give the command directly on the command line.

Job scripts

You can use a job script: collect a set of commands in an executable file and run it with

source yourcommands.sh
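A minimal job script might look like the sketch below. The analysis commands are hypothetical placeholders (commented out); only the bookkeeping lines actually run.

```shell
#!/bin/sh
# yourcommands.sh -- minimal job script sketch for the cod nodes.
# Run it with: source yourcommands.sh
echo "job started on $(hostname) at $(date)"

# Hypothetical analysis steps -- replace with your own commands:
# my_aligner /data/$USER/input.fasta > /work/$USER/alignments.txt
# my_summarizer /work/$USER/alignments.txt > /work/$USER/summary.txt

cores=$(nproc)                      # number of CPUs available on this node
echo "this node has $cores cpus"
echo "job finished at $(date)"
```

Combine this with screen (see the tip above) so the job survives closing your terminal.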