NCMM CryoEM Computing platform
The Luecke Group CryoEM data processing platform consists of the following main servers:
- intaristotle.internal.biotek GPU server: 64 Intel Broadwell cores, 128 GB of RAM, 4 x NVIDIA GTX 1080 GPU cards, 6 TB of SSD scratch space
- perun.uio.no file server: 28 Intel Broadwell cores, 32 GB of RAM, 212 TB of local disk space used as a file server for the storage of CryoEM data. The machine is also used as a gateway to the internal GPU/CPU processing nodes.
All nodes have 10 Gigabit Ethernet connectivity, providing dedicated NAS/NFS disk space between the file server and the current (and future) GPU compute nodes.
Contents
- 1 Aristotle/CryoMP Installation Documentation ( intaristotle.internal.biotek/192.168.8.109 )
- 2 Live computational resource usage information (cpu and memory)
- 3 List of Installed Software
- 4 Top Directory ( /lsc )
- 5 How was each piece of software compiled (flags, switches)
- 5.1 ccp4
- 5.2 chimera
- 5.3 cistem
- 5.4 cryolo
- 5.5 ctffind4
- 5.6 eman2
- 5.7 external/fftw
- 5.8 external/openssl
- 5.9 frealign
- 5.10 gautomatch
- 5.11 gctf
- 5.12 imod
- 5.13 mpich
- 5.14 motioncor2
- 5.15 nvidia/driver
- 5.16 nvidia/cuda
- 5.17 How is each piece of software run ( really basic, just front UI, text or graphics)
- 5.17.1 ccp4/7.0.074
- 5.17.2 chimera/1.13.1
- 5.17.3 ctffind4/4.1.10
- 5.17.4 eman2/2.2
- 5.17.5 external
- 5.17.6 frealign/9.11
- 5.17.7 gautomatch/0.53
- 5.17.8 gctf/1.18
- 5.17.9 mpich/3.0.4
- 5.17.10 motioncor2/1.2.1
- 5.17.11 nvidia/
- 5.17.12 openmpi/3.1.2
- 5.17.13 phenix/1.14-3260
- 5.17.14 relion/{ 2.1, 3.0b }
- 5.17.15 sphire/1.1
- 5.17.16 xchimera/0.8
- 5.18 Environmental Modules
- 5.19 Running Test Jobs on Aristotle to test the software
Aristotle/CryoMP Installation Documentation ( intaristotle.internal.biotek/192.168.8.109 )
Live computational resource usage information (cpu and memory)
List of Installed Software
Application | Version on intaristotle.internal.biotek server | Status | Comments
--- | --- | --- | ---
adxv | 1.9.13 | Done | -
ccp4 | 7.0.074 | Done | -
chimera | 1.13.1 | Done | -
cistem | 1.0.0-beta | Done | -
ctffind4 | 4.1.10 | Done | -
cryosparc | 2.9 | Done | Documentation
eman2 | 2.2, 2.3, 2.3cd1 | Done | -
fftw, double precision | 2.1.5 | Done | -
fftw, single precision | 2.1.5 | Done | -
fftw, double precision | 3.3.8 | Done | -
fftw, single precision | 3.3.8 | Done | -
imod | 4.9.12 | Done | -
openssl | 1.0.2 | Done | -
frealign | 9.11 | Done | -
gautomatch | 0.56 | Done | -
gctf | 1.18 | Done | -
modules | 4.2.0 | Done | -
motioncor2 | 1.2.1 | Done | -
mpich | 3.0.4 | Done | -
nvidia/cuda | 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 | Done | -
openmpi | 3.1.3[+cuda] | Done | -
openmpi | 4.0.0[+cuda] | Done | -
phenix | 1.15.2-3472 | Done | -
relion | 3.0.4, 3.0.6 | Done | -
scipion | 2.0 | Done | -
sphire | 1.2 | Done | -
xchimera | 0.8 | Done | -
xds | 2020-03-31 | Done | -
Top Directory ( /lsc )
Top directory for all installed software is /lsc
Source Files ( in case you need to re-compile something )
Source for all installed programs can be found under /lsc/sources
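The install tree follows an app/version naming convention (e.g. /lsc/ctffind4/4.1.10), which the environment modules rely on. The sketch below is illustrative and self-contained: it builds a mock tree in a temporary directory and enumerates app/version pairs the same way `ls -d /lsc/*/*/` would enumerate the real installs.

```shell
# Illustrative sketch: enumerate <app>/<version> installs. A mock tree in a
# temp dir stands in for /lsc so the example is self-contained.
root=$(mktemp -d)
mkdir -p "$root/ctffind4/4.1.10" "$root/relion/3.0.4" "$root/relion/3.0.6"

# one "app/version" line per installed version (trailing slash stripped)
inventory=$(cd "$root" && ls -d */*/ | sed 's:/$::' | sort)
echo "$inventory"

rm -rf "$root"
```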
Dependencies for interactive 3D applications
VirtualGL: Running accelerated 3D graphics remotely via OpenGL
Located under /opt/VirtualGL
Loaded by default
VirtualGL is an open source toolkit that gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration.
It is used to display 3D applications on a laptop that does not have a powerful 3D graphics card: Instead of letting the laptop render the 3D application, VirtualGL uses the (more powerful) graphics cards of the server to pre-render the application and send back the result.
This app is very useful in 3D applications like XChimera/Chimera, PyMol and the like
In order to take full advantage of this application, you need to be connected to the NCMM internal network, and not UiO.
Open the command-line/terminal of your choice and type
vglconnect intaristotle.internal.biotek
vglconnect will connect you to the internal interface of aristotle and set up the environment for executing a graphical application.
Let’s say we need to run chimera. So, we load the module
module load chimera/1.13.1
and then we use vglrun to run the chimera application
vglrun -d $DISPLAY chimera
The “-d $DISPLAY” argument is there just in case something has corrupted your local environment.
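As a minimal sketch (the guard function name is our own, not part of VirtualGL): vglrun renders on the server's GPUs but still needs a DISPLAY pointing back at your vglconnect session, so a small wrapper can check for one before launching anything.

```shell
# Hypothetical guard around vglrun: refuse to launch a 3D app when no
# DISPLAY is set (i.e. no vglconnect session is active).
vgl_ready() {
    [ -n "${DISPLAY:-}" ]
}

if vgl_ready; then
    echo "DISPLAY=$DISPLAY - safe to vglrun"
else
    echo "no DISPLAY - run vglconnect intaristotle.internal.biotek first"
fi
```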
How was each piece of software compiled (flags, switches)
ccp4
cd /lsc/sources/ccp4/7.0.074/
mkdir -p /lsc/ccp4/7.0.074/
cp -r /lsc/sources/ccp4/7.0.074/* /lsc/ccp4/7.0.074/
cd /lsc/ccp4/7.0.074/
./BINARY.setup
arp/warp
module load ccp4/7.0.074
/lsc/ccp4/7.0.074/start
cd /lsc/sources/ccp4/arp_warp_8.0
./install.sh
chimera
cd /lsc/sources/chimera/1.13.1
./chimera-1.13.1-linux_x86_64.bin   # self-extracts into two files, then asks questions
> Enter install location: /lsc/chimera/1.13.1
> Install desktop menu (icon has to be done by user)? no
> Install symbolic link to chimera executable for command line use in which directory? [hit Enter for default (0)]: 0
# installer copies files into destination
> Installation is done; press return.
cistem
cd /lsc/sources/cistem/1.0/cistem-1.0.0-beta
./configure --prefix=/lsc/cistem/1.0
make
make install
cryolo
1.2.1
cd /lsc/sources/cryolo/1.2.1/
scl enable rh-python36 bash
conda create --name cryolo -c anaconda python=3.6 pyqt=5 cudnn=7.1.2
conda activate cryolo
conda install numpy==1.14.5
pip install ./cryolo-1.2.1.tar.gz
1.3.6
cd /lsc/sources/cryolo
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_3_6/cryolo-1.3.6.tar.gz
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_BM_V1_2_2/cryoloBM-1.2.2.tar.gz
tar zxvf cryolo-1.3.6.tar.gz
ln -s cryolo-1.3.6 1.3.6
conda env remove --name cryolo   # due to conda, cryolo cannot support previous installations
conda create --prefix /lsc/sphire/1.3.6/.conda/envs/ -c anaconda python=3.6 pyqt=5 cudnn=7.1.2 cython
source activate cryolo
conda install numpy==1.15.4
pip install cryolo-1.3.6.tar.gz[gpu]
pip install cryoloBM-1.2.2.tar.gz
To activate:
conda activate /lsc/sphire/1.3.6/.conda/envs/cryolo
1.4.0
cd /lsc/sources/cryolo
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_4_0/cryolo-1.4.0.tar.gz
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_BM_V1_2_3/cryoloBM-1.2.3.tar.gz
tar zxvf cryolo-1.4.0.tar.gz
ln -s cryolo-1.4.0 1.4.0
conda create --prefix /lsc/cryolo/1.4.0 -c anaconda python=3.6 pyqt=5 cudnn=7.1.2 cython
conda activate /lsc/cryolo/1.4.0
conda install numpy==1.15.4
pip install cryolo-1.4.0.tar.gz[gpu]
pip install cryoloBM-1.2.3.tar.gz
To activate:
conda activate /lsc/cryolo/1.4.0
ctffind4
cd /lsc/sources/ctffind/4.1.10/ctffind-4.1.10
./configure --prefix=/lsc/ctffind4/4.1.10 --enable-latest-instruction-set
make -j 64 all
make -j 64 install
eman2
cd /lsc/sources/eman2/2.2/
./eman2.22.linux64.sh
> EMAN2 will now be installed into this location:
> [/root/EMAN2] >>> /lsc/eman2/2.2
# installer does the rest of the work, re-installing a bunch of python modules via anaconda
> Do you wish the installer to prepend the EMAN2 install location
> to PATH in your /root/.bashrc ? [yes|no]
> [no] >>> no
> You may wish to edit your .bashrc to prepend the EMAN2 install location to PATH:
> export PATH=/opt/eman/2.2/bin:$PATH
# PATH handling is covered in the relevant environment module
external/fftw
2.1.5
cd /lsc/sources/fftw/2.1.5/fftw-2.1.5
./configure --prefix=/lsc/fftw/2.1.5 --enable-threads --enable-mpi --enable-i386-hacks
make
make install
3.3.8
cd /lsc/sources/fftw/3.3.8/fftw-3.3.8
./configure CC=/usr/bin/gcc-4.8.5 --prefix=/lsc/fftw/3.3.8 --enable-single --enable-sse --enable-sse2 --enable-avx --enable-avx2 --enable-avx-128-fma --enable-generic-simd128 --enable-generic-simd256 --enable-fma --enable-mpi --enable-threads --with-g77-wrappers --with-combined-threads
make
make install
external/openssl
cd /lsc/sources/openssl/openssl-1.0.2o/
./config --prefix=/lsc/external/openssl/1.0.2o
make -j 64
make -j 64 install
# done
frealign
cd /lsc/sources/frealign/9.11/frealign_v9.11
mkdir -p /lsc/frealign/9.11
cp -r /lsc/sources/frealign/9.11/frealign_v9.11/* /lsc/frealign/9.11
# Ignore the ./INSTALL file; it does nothing.
# You just need to add the relevant bin path for the application to work;
# the relevant module covers the details.
gautomatch
cd /lsc/sources/gautomatch/Gautomatch_v0.53
mkdir /lsc/gautomatch/0.53
cp -r /lsc/sources/gautomatch/Gautomatch_v0.53 /lsc/gautomatch/0.53
gctf
cd /lsc/sources/gctf/1.18
mkdir /lsc/gctf/1.18
cp -r /lsc/sources/gctf/1.18/* /lsc/gctf/1.18
# the environment module takes care of loading CUDA 8.0 (required),
# adding the bin path, and changing the LD_PRELOAD path
modules
cd /lsc/sources/modules/4.2.0
./configure --prefix=/lsc/modules --enable-doc-install --enable-example-modulefiles --enable-compat-version --enable-versioning --enable-quarantine-support --enable-auto-handling
# since --enable-versioning is turned on, the module will autoversion itself
# and place all its files under /lsc/modules/4.2.0
imod
Follow instructions as described in RT 3509886
mpich
Default version as shipped with CentOS 7. The environmental module was altered to make sure that it conflicts with the openmpi modules.
motioncor2
cd /lsc/sources/motioncor2/1.2.1
mkdir -p /lsc/motioncor2/1.2.1/bin
cp -r /lsc/sources/motioncor2/1.2.1/* /lsc/motioncor2/1.2.1
ln -s /lsc/motioncor2/1.2.1/MotionCor2_1.2.1-Cuda80 /lsc/motioncor2/1.2.1/bin/motioncor2
# the environmental module takes care of setting up the bin path
nvidia/driver
cd /lsc/sources/nvidia/drivers/396.54; ./NVIDIA-Linux-x86_64-396.54.run
Update: To work around the issue with creating a remote OpenGL context (X11 forwarding over ssh), I upgraded the driver to 415.23, but had no luck. In nvidia-smi, the new driver reports that it is CUDA 10 capable, which is fine, as it can still use the CUDA 8 run-time environment; NVIDIA guarantees binary compatibility.
nvidia/cuda
CUDA frameworks installed:
- 8.0-GA1
- 8.0-GA2
- 9.0
- 9.1
- 9.2
The default CUDA framework for all applications is 8.0-GA1.
The NVIDIA driver installed is 396.54 (short-term support as of 2018-10-31, as per above).
The default driver bundled with CUDA 8.0-GA1 is 375.64 (not installed, obviously).
When loading different CUDA versions, the NVIDIA driver remains the same. Things seem to work, but this may require additional detailed testing in the future.
All CUDA installations require gcc 4.8 (hence the reason we went with CentOS).
All of the versions above are currently installed, but we regard 8.0-GA1 as the base release, because some software demands 8.0-GA1 and has no alternatives.
Installations:
8.0-GA1
cd /lsc/sources/nvidia/cuda/8.0-GA1 ; ./cuda_8.0.44_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA1 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA1/samples --run-nvidia-xconfig --tmpdir=/tmp
8.0-GA2
cd /lsc/sources/nvidia/cuda/8.0-GA2 ; ./cuda_8.0.61_375.26_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA2 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA2/samples --run-nvidia-xconfig --tmpdir=/tmp
9.0
cd /lsc/sources/nvidia/cuda/9.0 ; ./cuda_9.0.176_384.81_linux-run --silent --toolkit=/lsc/nvidia/cuda/9.0 --samples --samplespath=/lsc/nvidia/cuda/9.0/samples --run-nvidia-xconfig --tmpdir=/tmp
9.1
cd /lsc/sources/nvidia/cuda/9.1 ; ./cuda_9.1.85_387.26_linux --silent --toolkit=/lsc/nvidia/cuda/9.1 --samples --samplespath=/lsc/nvidia/cuda/9.1/samples --run-nvidia-xconfig --tmpdir=/tmp
9.2
cd /lsc/sources/nvidia/cuda/9.2 ; ./cuda_9.2.148_396.37_linux --silent --toolkit=/lsc/nvidia/cuda/9.2 --samples --samplespath=/lsc/nvidia/cuda/9.2/samples --run-nvidia-xconfig --tmpdir=/tmp
openmpi
cd /lsc/sources/openmpi/3.1.2
./configure --prefix=/lsc/openmpi/3.1.2 --enable-binaries --enable-mpi-fortran --with-cuda=/lsc/nvidia/cuda/8.0-GA1 --with-devel-headers
Default: 3.1.2
The default version loaded is 3.1.2, the stable release as of 2018-10-31.
phenix
cd /lsc/sources/phenix/1.14/phenix-installer-1.14-3260-intel-linux-2.6-x86_64-centos6
./install --prefix=/lsc/phenix/1.14-3260 --openmp --makedirs
relion
2.1
cd /lsc/sources/relion/2.1-mpich
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/2.1 ..
make -j 64
make -j 64 install
3.0b
cd /lsc/sources/relion/3.0b-mpich
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0b ..
make -j 64
make -j 64 install
3.0.4
cd /lsc/sources/relion/3.0.4
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0.4 -DGUI=ON -DCUDA=ON -DCudaTexture=ON -DFORCE_OWN_TBB=ON -DFORCE_OWN_FLTK=ON -DFORCE_OWN_FFTW=ON -DCUDA_ARCH=61 -DBUILD_SHARED_LIBS=ON ..
make -j 64
make install
3.0.6
cd /lsc/sources/relion/3.0.6
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0.6 -DGUI=ON -DCUDA=ON -DCudaTexture=ON -DFORCE_OWN_TBB=ON -DFORCE_OWN_FLTK=ON -DFORCE_OWN_FFTW=ON -DCUDA_ARCH=61 -DBUILD_SHARED_LIBS=ON ..
make -j 64
make install
List of CUDA architectures here : https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
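For reference, -DCUDA_ARCH=61 above corresponds to the compute capability of the GTX 1080 cards in intaristotle (Pascal, sm_61). A small illustrative helper; only the GTX 1080 entry reflects this setup, the rest are common examples, not hardware present here:

```shell
# Map a GPU model name to the CMake CUDA_ARCH value used in the builds above.
cuda_arch_for() {
    case "$1" in
        *"GTX 1080"*) echo 61 ;;  # Pascal - the cards in intaristotle
        *"GTX 980"*)  echo 52 ;;  # Maxwell (example)
        *"V100"*)     echo 70 ;;  # Volta (example)
        *)            echo unknown ;;
    esac
}

cuda_arch_for "GeForce GTX 1080"   # prints 61, matching -DCUDA_ARCH=61
```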
scipion
2.0
Cloning/Getting software
mkdir -p /lsc/scipion/2.0
git clone https://github.com/I2PC/scipion.git
cd scipion
Dependencies
yum install -y gcc gcc-c++ cmake java-1.8.0-openjdk-devel.x86_64 libXft-devel.x86_64 openssl-devel.x86_64 libXext-devel.x86_64 libxml++.x86_64 libquadmath-devel.x86_64 libxslt.x86_64 openmpi-devel.x86_64 gsl-devel.x86_64 libX11.x86_64 gcc-gfortran.x86_64 git
Configuring
./scipion config
scipion installp -p ~/scipion-em-relion
sphire
1.1
cd /lsc/sources/sphire/1.1
./sphire_1_1_linux.sh
# follow the questions; install under /lsc/sphire/1.1
How is each piece of software run ( really basic, just front UI, text or graphics)
ccp4/7.0.074
chimera/1.13.1
module load chimera
chimera
ctffind4/4.1.10
module load ctffind4
ctffind4
eman2/2.2
module load eman
eman2.py
external
openssl/1.0.2o
Nothing to see here; this is just a support library.
frealign/9.11
module load frealign
frealign
gautomatch/0.53
module load gautomatch
gautomatch
gctf/1.18
module load gctf
gctf
mpich/3.0.4
module load mpich
mpirun
motioncor2/1.2.1
nvidia/
driver
modinfo nvidia
cuda/{ 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 }
There is really nothing you can do with the CUDA libraries directly. You can only verify that each version of CUDA works, using the following bash loop:
for version in 8.0-GA1 8.0-GA2 9.0 9.1 9.2; do
    module switch nvidia/cuda/$version
    # press q to quit the nbody simulation below
    # (n-body gravitational attraction simulation)
    /lsc/nvidia/cuda/$version/samples/5_Simulations/nbody/nbody -hostmem -numdevices=$(lspci | grep -i nvidia | grep -ci vga)
done && module purge
openmpi/3.1.2
module load openmpi
mpirun -np 64 date
phenix/1.14-3260
module load phenix phenix
relion/{ 2.1, 3.0b }
for version in 2.1 3.0b; do
    module switch relion/$version
    /lsc/relion/$version/bin/relion
done
sphire/1.1
module load sphire sphire
xchimera/0.8
not yet available
Environmental Modules
Download
Download from here:
http://modules.sourceforge.net/
Installation
cd /lsc/sources/modules/4.2.0
./configure --prefix=/lsc/modules/4.2.0 --enable-doc-install --enable-example-modulefiles --enable-compat-version --enable-versioning --enable-quarantine-support --enable-auto-handling --with-tclsh=/usr/bin/tclsh --with-pager=/usr/bin/less && make && make install
This will place a symbolic link from
/etc/profile.d/modules.sh
to
/lsc/modules/4.2.0/init/bash
The configure log is under
compat/config.log
in case you need to see what went wrong.
In the event you need to re-initialize the modules subsystem for any reason, run the following as root:
ln -sf /lsc/modules/4.2.0/init/bash /etc/profile.d/modules.sh
A soft link is preferred, as hard links do not work across filesystems
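The re-initialization step can be exercised safely: ln -sf is idempotent, so running it again simply repoints the link. A self-contained demo using a mock tree in a temp dir (the paths mirror the real ones above):

```shell
# Demo of the ln -sf re-init pattern against a mock /etc and /lsc tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/lsc/modules/4.2.0/init" "$tmp/etc/profile.d"
echo '# modules init' > "$tmp/lsc/modules/4.2.0/init/bash"

# first init, then a repeat run: -f replaces the existing link in place
ln -sf "$tmp/lsc/modules/4.2.0/init/bash" "$tmp/etc/profile.d/modules.sh"
ln -sf "$tmp/lsc/modules/4.2.0/init/bash" "$tmp/etc/profile.d/modules.sh"

target=$(readlink "$tmp/etc/profile.d/modules.sh")
echo "$target"
```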
How to use
To see what environment modules are available:
module avail
To load a module:
module load <module>
To see the already loaded modules:
module list
To unload a module:
module unload <module>
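Under the hood, loading one of the modules above mostly amounts to prepending the install's bin directory (and library paths) to your environment. A hedged, self-contained sketch; fake_module_load and the temp layout are illustrative, not the real modulefiles:

```shell
# What 'module load ctffind4/4.1.10' effectively does: put the versioned
# bin directory at the front of PATH. Mocked with a temp dir here.
root=$(mktemp -d)
mkdir -p "$root/ctffind4/4.1.10/bin"

fake_module_load() {        # usage: fake_module_load <app>/<version>
    PATH="$root/$1/bin:$PATH"
    export PATH
}

fake_module_load ctffind4/4.1.10
echo "$PATH" | cut -d: -f1   # first PATH entry is now the versioned bin dir
```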
Git Repo
Module files are pushed to the nodes via Ansible + git.
Running Test Jobs on Aristotle to test the software
Relion
As per the Relion 3.0 tutorial:
You will need the following test sets:
ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_data.tar
ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_precalculated_results.tar.gz
These test sets are already downloaded and untarred under intaristotle.internal.biotek:/lsc/relion/test_data
Load the version you need:
module load relion/$version
and wait for RELION to open on your X server.
3D Classification
Follow the instructions in the tutorial regarding 3D classification