NCMM CryoEM Computing platform


This is the (growing) documentation for the software installed on the NCMM computational nodes.

Aristotle/CryoMP Installation Documentation ( intaristotle.internal.biotek/192.168.8.109 )

RT ticket #3135527

List of Installed Software

  • ccp4/4.7.0
  • chimera/1.13.1
  • ctffind4/4.1.10
  • eman2/2.2
  • external
    • openssl/1.0.2o
  • frealign/9.11
  • gautomatch/0.53
  • gctf/1.18
  • modules/4.2.0
  • mpich/3.0.4
  • motioncor2/1.2.1
  • nvidia/
    • cuda/{ 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 }
  • openmpi/3.1.2
  • phenix/1.14-3260
  • relion/{ 2.1, 3.0b }
  • sphire/1.1
  • xchimera/0.8

Top Directory ( /lsc )

Top directory for all installed software is /lsc

Source Files ( in case you need to re-compile something )

Source for all installed programs can be found under /lsc/sources
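
For example, to see the layout (the package and version names below are just illustrations of the scheme):

ls /lsc/sources          # one directory per package
ls /lsc/sources/relion   # one subdirectory per version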

Dependencies for interactive 3D applications

VirtualGL: Running accelerated 3D graphics remotely via OpenGL

Located under /opt/VirtualGL
Loaded by default

VirtualGL is an open source toolkit that gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration.

It is used to display 3D applications on a laptop that does not have a powerful 3D graphics card: instead of letting the laptop render the 3D application, VirtualGL uses the (more powerful) graphics cards of the server to pre-render the application and sends back the result.

This is very useful for 3D applications such as XChimera/Chimera, PyMOL, and the like.

To take full advantage of VirtualGL, you need to be connected to the NCMM internal network, not the UiO network.

Open the command-line/terminal of your choice and type

vglconnect intaristotle.internal.biotek

vglconnect will connect you to the internal interface of aristotle and set up the environment for executing a graphical application.

Let’s say we need to run chimera. So, we load the module

module load chimera/1.13.1

and then we use vglrun to run the chimera application

vglrun -d $DISPLAY chimera

The “-d $DISPLAY” argument is there just in case something has corrupted your local environment.
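
To sanity-check that rendering really happens on the server, VirtualGL ships a small test application (the paths assume the stock layout under /opt/VirtualGL; glxinfo comes from the mesa-utils package):

vglrun /opt/VirtualGL/bin/glxspheres64        # reports the renderer in use and a frame rate
vglrun glxinfo | grep -i "opengl renderer"    # should name the server GPU, not a software renderer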

How was each piece of software compiled (flags, switches)

ccp4

cd /lsc/sources/ccp4/4.7.0
mkdir -p /lsc/ccp4/4.7.0
cp -r /lsc/sources/ccp4/4.7.0/* /lsc/ccp4/4.7.0
cd /lsc/ccp4/4.7.0
./BINARY.setup
# the interactive setup asks a long series of questions; answer them as prompted. Welcome to scientific applications.

chimera

cd /lsc/sources/chimera/1.13.1
./chimera-1.13.1-linux_x86_64.bin #self-extracts into two files
# answer questions
> Enter install location: /lsc/chimera/1.13.1
> Install desktop menu (icon has to be done by user)? no 
> Install symbolic link to chimera executable for command line use in which directory? [hit Enter for default (0)]: 0
# installer copies files into destination
> Installation is done; press return.

ctffind4

cd /lsc/sources/ctffind/4.1.10/ctffind-4.1.10
./configure --prefix=/lsc/ctffind4/4.1.10 --enable-latest-instruction-set
make -j 64 all
make -j 64 install

eman2

cd /lsc/sources/eman2/2.2/
./eman2.22.linux64.sh
> EMAN2 will now be installed into this location:
> [/root/EMAN2] >>> /lsc/eman2/2.2
# installer does the rest of the work, re-installing a bunch of python modules
#   via Anaconda, side-stepping the built-in package managers entirely
# *sigh* scientific computation, alright
> Do you wish the installer to prepend the EMAN2 install location
> to PATH in your /root/.bashrc ? [yes|no]
> [no] >>> no
> You may wish to edit your .bashrc to prepend the EMAN2 install location to PATH:
> export PATH=/lsc/eman2/2.2/bin:$PATH
# covered in the relevant environment module

external/openssl

cd /lsc/sources/openssl/openssl-1.0.2o/
./config --prefix=/lsc/external/openssl/1.0.2o
make -j 64
make -j 64 install
#done

frealign

cd /lsc/sources/frealign/9.11/frealign_v9.11
mkdir -p /lsc/frealign/9.11
cp -r /lsc/sources/frealign/9.11/frealign_v9.11/* /lsc/frealign/9.11
# ignore the ./INSTALL file; it does nothing
# you just need to add the relevant bin path for the application to work
# the relevant module covers the details
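
A minimal sketch of what the frealign module effectively does (the exact path handling lives in the module itself):

export PATH=/lsc/frealign/9.11/bin:$PATH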

gautomatch

cd /lsc/sources/gautomatch/Gautomatch_v0.53
mkdir -p /lsc/gautomatch/0.53
cp -r /lsc/sources/gautomatch/Gautomatch_v0.53/* /lsc/gautomatch/0.53

gctf

cd /lsc/sources/gctf/1.18
mkdir -p /lsc/gctf/1.18
cp -r /lsc/sources/gctf/1.18/* /lsc/gctf/1.18
# environment module takes care of loading up CUDA 8.0 (required), adding the bin path, and changing the LD_PRELOAD path

modules

cd /lsc/sources/modules/4.2.0
./configure --prefix=/lsc/modules --enable-doc-install --enable-example-modulefiles --enable-compat-version --enable-versioning --enable-quarantine-support --enable-auto-handling 
# since --enable-versioning is turned on, the module will autoversion itself and place all its files under
# /lsc/modules/4.2.0
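
To bootstrap the module command in a fresh shell (path assumed from the autoversioned layout above):

source /lsc/modules/4.2.0/init/bash   # defines the "module" shell function
module avail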

mpich

Installed from the default CentOS 7 packages.
The environment module was altered to make sure that it conflicts with the openmpi modules.
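
In practice the conflict looks like this (a sketch; the exact error text depends on the modules version):

module load mpich
module load openmpi   # refused with a conflict error until you "module unload mpich"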

motioncor2

cd /lsc/sources/motioncor2/1.2.1
mkdir -p /lsc/motioncor2/1.2.1/bin
cp -r /lsc/sources/motioncor2/1.2.1/* /lsc/motioncor2/1.2.1
ln -s /lsc/motioncor2/1.2.1/MotionCor2_1.2.1-Cuda80 /lsc/motioncor2/1.2.1/bin/motioncor2
# environmental module takes care of setting up the bin path 

nvidia/driver

cd /lsc/sources/nvidia/drivers/396.54;
./NVIDIA-Linux-x86_64-396.54.run

nvidia/cuda

CUDA frameworks installed:

  • 8.0-GA1
  • 8.0-GA2
  • 9.0
  • 9.1
  • 9.2

The default CUDA framework for all applications is 8.0-GA1.

The NVIDIA driver installed is 396.54 (short-term support as of 2018-10-31), as per above.

The default driver bundled with CUDA 8.0-GA1 is 375.64 (not installed, obviously).

When loading different CUDA versions, the NVIDIA driver remains the same. Things seem to work, but this may require more detailed testing in the future.

All CUDA installations require gcc 4.8 (hence the choice of CentOS).

All mentioned versions are currently installed, but we regard 8.0-GA1 as the base release, because some software demands 8.0-GA1 and has no alternatives.
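
A quick way to confirm which CUDA version a fresh shell picks up (assuming the module is named nvidia/cuda, matching the list above):

module load nvidia/cuda   # resolves to the default, 8.0-GA1
nvcc --version            # should report release 8.0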

Installations:

8.0-GA1

cd /lsc/sources/nvidia/cuda/8.0-GA1 ;
./cuda_8.0.44_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA1 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA1/samples --run-nvidia-xconfig --tmpdir=/tmp

8.0-GA2

cd /lsc/sources/nvidia/cuda/8.0-GA2 ;
./cuda_8.0.61_375.26_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA2 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA2/samples --run-nvidia-xconfig --tmpdir=/tmp

9.0

cd /lsc/sources/nvidia/cuda/9.0 ;
./cuda_9.0.176_384.81_linux-run --silent --toolkit=/lsc/nvidia/cuda/9.0 --samples --samplespath=/lsc/nvidia/cuda/9.0/samples --run-nvidia-xconfig --tmpdir=/tmp

9.1

cd /lsc/sources/nvidia/cuda/9.1 ;
./cuda_9.1.85_387.26_linux --silent --toolkit=/lsc/nvidia/cuda/9.1 --samples --samplespath=/lsc/nvidia/cuda/9.1/samples --run-nvidia-xconfig --tmpdir=/tmp

9.2

cd /lsc/sources/nvidia/cuda/9.2 ;
./cuda_9.2.148_396.37_linux --silent --toolkit=/lsc/nvidia/cuda/9.2 --samples --samplespath=/lsc/nvidia/cuda/9.2/samples --run-nvidia-xconfig --tmpdir=/tmp

openmpi

cd /lsc/sources/openmpi/3.1.2;
./configure --prefix=/lsc/openmpi/3.1.2 --enable-binaries --enable-mpi-fortran --with-cuda=/lsc/nvidia/cuda/8.0-GA1 --with-devel-headers

Default: 3.1.2

The default version loaded is 3.1.2, the stable release as of 2018-10-31.
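
Since the build was configured --with-cuda, you can confirm the installation is CUDA-aware with ompi_info:

module load openmpi
ompi_info --parsable --all | grep mpi_built_with_cuda_support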

phenix

cd /lsc/sources/phenix/1.14/phenix-installer-1.14-3260-intel-linux-2.6-x86_64-centos6 ;
./install --prefix=/lsc/phenix/1.14-3260 --openmp --makedirs 

relion

2.1

cd /lsc/sources/relion/2.1-mpich ;
mkdir build;
cd build;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/2.1 ..
make -j 64
make -j 64 install

3.0b

cd /lsc/sources/relion/3.0b-mpich ;
mkdir build;
cd build;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0b ..
make -j 64
make -j 64 install

sphire

cd /lsc/sources/sphire/1.1 ;
./sphire_1_1_linux.sh
# answer the installer's questions; install under /lsc/sphire/1.1

How is each piece of software run ( really basic: just the front-end UI, text or graphics )

ccp4/4.7.0

chimera/1.13.1

module load chimera
chimera

ctffind4/4.1.10

module load ctffind4
ctffind4

eman2/2.2

module load eman2
eman2.py

external

openssl/1.0.2o

Nothing to see here; this is just a support library.

frealign/9.11

module load frealign
frealign

gautomatch/0.53

module load gautomatch
gautomatch

gctf/1.18

module load gctf
gctf

mpich/3.0.4

module load mpich
mpirun

motioncor2/1.2.1
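
Via the motioncor2 symlink created during installation:

module load motioncor2
motioncor2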

nvidia/

driver

modinfo nvidia

cuda/{ 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 }

There is really nothing you can do with the CUDA libraries directly

You can only verify that a given version of CUDA works, using the following for loop in bash:

for version in 8.0-GA1 8.0-GA2 9.0 9.1 9.2; 
do
    module switch nvidia/cuda/$version; # press q to quit the nbody simulation below
    /lsc/nvidia/cuda/$version/samples/5_Simulations/nbody/nbody -hostmem -numdevices=$(lspci | grep -i nvidia | grep -i vga | wc -l); # n-body gravitational attraction simulation
done;
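
A lighter-weight check than the nbody run, assuming the bundled samples were built, is deviceQuery:

/lsc/nvidia/cuda/8.0-GA1/samples/1_Utilities/deviceQuery/deviceQuery # lists each GPU and the CUDA versions it reports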

openmpi/3.1.2

module load openmpi
mpirun -np 64 date

phenix/1.14-3260

module load phenix
phenix

relion/{ 2.1, 3.0b }

for version in 2.1 3.0b; 
do
    module switch relion/$version;
    /lsc/relion/$version/bin/relion; # launches the RELION GUI
done;

sphire/1.1

module load sphire
sphire

xchimera/0.8

not yet available   

Environment Modules

To see what environment modules are available:

module avail

To load a module:

module load <module>

To see the already loaded modules:

module list

To unload a module:

module unload <module>
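
Two more subcommands used elsewhere on this page:

module switch relion/2.1 relion/3.0b   # swap one loaded module for another
module show gctf/1.18                  # inspect exactly what a module sets (paths, conflicts, LD_PRELOAD)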

Git Repo

The environment module files are kept in a git repository and pushed to the nodes via Ansible + git.