The [https://www.med.uio.no/ncmm/english/groups/luecke-group/index.html Luecke Group] CryoEM data processing platform consists of three main servers:

* intaristotle.internal.biotek GPU server: 64 [https://ark.intel.com/products/91766/Intel-Xeon-Processor-E5-2683-v4-40M-Cache-2-10-GHz- Intel Broadwell cores], 128 GB of RAM, 4 x [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ NVIDIA GTX 1080 GPU cards], 6 TB of SSD scratch space
* lueckec2.internal.biotek VM server: 52 [https://www.amd.com/en/products/epyc AMD EPYC cores], 230 GB of RAM, 5 TB of local disk space, on loan for estimating large-memory CPU jobs
* perun.uio.no file server: 28 [https://ark.intel.com/products/91754/Intel-Xeon-Processor-E5-2680-v4-35M-Cache-2-40-GHz- Intel Broadwell cores], 32 GB of RAM, 212 TB of local disk space used as a file server for the storage of CryoEM data. The machine is also used as a gateway to the internal GPU/CPU processing nodes.

All nodes have 10 Gigabit Ethernet connectivity for dedicated NAS/NFS disk space between the file server and the current (and future) GPU compute nodes.
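
A quick way to confirm that a compute node actually sees the perun storage over NFS (the mount point below is illustrative; check <code>/etc/fstab</code> on the node for the real one):

<pre># list NFS mounts on the node; the perun export should appear here
mount -t nfs,nfs4
# illustrative mount point -- substitute the real one from /etc/fstab
df -h /storage</pre>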

= Aristotle/CryoMP Installation Documentation ( intaristotle.internal.biotek/192.168.8.109 ) =

RT ticket #3135527

= Live computational resource usage information (cpu and memory) =

{{#widget:Image|url=http://panoptis.uio.no/S/F}}

= Software Installed, List of =

{| class="wikitable" style="text-align: center;"
|-
| '''Application''' || '''Version on intaristotle.internal.biotek server''' || '''Status''' || '''Comments'''
|-
| [https://www.scripps.edu/tainer/arvai/adxv.html adxv] || 1.9.13 || Done || -
|-
| [http://www.ccp4.ac.uk/ ccp4] || 7.0.074 || Done || -
|-
| [https://www.cgl.ucsf.edu/chimera/ chimera] || 1.13.1 || Done || -
|-
| [https://cistem.org/ cistem] || 1.0.0-beta || Done || -
|-
| [http://grigoriefflab.janelia.org/ctffind4 ctffind4] || 4.1.10 || Done || -
|-
| [http://prometheus.uio.no:3900/ cryosparc] || 2.9 || Done || [https://wiki.uio.no/medicin/ncmm/IT/index.php/Prometheus.uio.no Documentation]
|-
| [http://www.msg.ucsf.edu/local/programs/eman/ eman2] || 2.2, 2.3, 2.3cd1 || Done || -
|-
| [http://www.fftw.org/ fftw, double precision] || 2.1.5 || Done || -
|-
| [http://www.fftw.org/ fftw, single precision] || 2.1.5 || Done || -
|-
| [http://www.fftw.org/ fftw, double precision] || 3.3.8 || Done || -
|-
| [http://www.fftw.org/ fftw, single precision] || 3.3.8 || Done || -
|-
| [https://bio3d.colorado.edu/imod/download.html#Development imod] || 4.9.12 || Done || -
|-
| [https://www.openssl.org/ openssl] || 1.0.2 || Done || -
|-
| [http://grigoriefflab.janelia.org/frealign frealign] || 9.11 || Done || -
|-
| [https://www.mrc-lmb.cam.ac.uk/kzhang/ gautomatch] || 0.56 || Done || -
|-
| [https://www.mrc-lmb.cam.ac.uk/kzhang/ gctf] || 1.18 || Done || -
|-
| [https://modules.readthedocs.io/en/latest/ modules] || 4.2.0 || Done || -
|-
| [http://msg.ucsf.edu/em/software/motioncor2.html motioncor2] || 1.2.1 || Done || -
|-
| [https://www.mpich.org/ mpich] || 3.0.4 || Done || -
|-
| [https://developer.nvidia.com/cuda-zone nvidia/cuda] || 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 || Done || -
|-
| [https://www.open-mpi.org/ openmpi] || 3.1.3[+cuda] || Done || -
|-
| [https://www.open-mpi.org/ openmpi] || 4.0.0[+cuda] || Done || -
|-
| [https://www.phenix-online.org/ phenix] || 1.15.2-3472 || Done || -
|-
| [https://www2.mrc-lmb.cam.ac.uk/relion/index.php?title=Main_Page relion] || 3.0.4, 3.0.6 || Done || -
|-
| [https://github.com/I2PC/scipion scipion] || 2.0 || Done || -
|-
| [http://sphire.mpg.de/ sphire] || 1.2 || Done || -
|-
| [https://www.rbvi.ucsf.edu/chimerax/ xchimera] || 0.8 || Done || -
|-
| [http://xds.mpimf-heidelberg.mpg.de/ xds] || 2020-03-31 || Done || -
|}

= Top Directory ( /lsc ) =

Top directory for all installed software is <code>/lsc</code>

= Source Files ( in case you need to re-compile something ) =

Source for all installed programs can be found under <code>/lsc/sources</code>
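
The convention, as the recipes below show, is one directory per application per version:

<pre># installed trees live under /lsc/<application>/<version> ...
ls -d /lsc/ccp4/7.0.074 /lsc/eman2/2.2 /lsc/relion/3.0.6
# ...and the corresponding sources under /lsc/sources/<application>/<version>
ls -d /lsc/sources/ccp4/7.0.074 /lsc/sources/relion/3.0.6</pre>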

= Dependencies for interactive 3D applications =

== VirtualGL: Running accelerated 3D graphics remotely via OpenGL ==

* Located under /opt/VirtualGL
* Loaded by default

VirtualGL is an open source toolkit that gives any Unix or Linux remote display software the ability to run OpenGL applications with full 3D hardware acceleration.

It is used to display 3D applications on a laptop that does not have a powerful 3D graphics card: instead of letting the laptop render the 3D application, VirtualGL uses the (more powerful) graphics cards of the server to pre-render the application and sends back the result.

This is very useful for 3D applications like XChimera/Chimera, PyMol and the like.

To take full advantage of VirtualGL, you need to be connected to the NCMM internal network, not the UiO network.

Open the command line/terminal of your choice and type

<pre>vglconnect intaristotle.internal.biotek</pre>

vglconnect will connect you to the internal interface of aristotle and set up the environment for executing a graphical application.

Let's say we need to run chimera. First, we load the module

<pre>module load chimera/1.13.1</pre>

and then we use vglrun to run the chimera application

<pre>vglrun -d $DISPLAY chimera</pre>

The "-d $DISPLAY" argument is there just in case something has corrupted your local environment.
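
To verify that server-side rendering works at all, VirtualGL ships a small demo renderer (a minimal smoke test; the path assumes the /opt/VirtualGL install mentioned above):

<pre># should open a window of spinning spheres rendered on aristotle's GPUs
vglrun /opt/VirtualGL/bin/glxspheres64</pre>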

= How was each piece of software compiled (flags, switches) =

== ccp4 ==

<pre>cd /lsc/sources/ccp4/7.0.074/
mkdir -p /lsc/ccp4/7.0.074/
cp -r /lsc/sources/ccp4/7.0.074/* /lsc/ccp4/7.0.074/
cd /lsc/ccp4/7.0.074/
./BINARY.setup</pre>

=== arp/warp ===

<pre>module load ccp4/7.0.074
/lsc/ccp4/7.0.074/start
cd /lsc/sources/ccp4/arp_warp_8.0
./install.sh</pre>

== chimera ==

<pre>cd /lsc/sources/chimera/1.13.1
./chimera-1.13.1-linux_x86_64.bin # self-extracts into two files
# answer questions
> Enter install location: /lsc/chimera/1.13.1
> Install desktop menu (icon has to be done by user)? no
> Install symbolic link to chimera executable for command line use in which directory? [hit Enter for default (0)]: 0
# installer copies files into destination
> Installation is done; press return.</pre>


== cistem ==

<pre>cd /lsc/sources/cistem/1.0/cistem-1.0.0-beta
./configure --prefix=/lsc/cistem/1.0
make
make install</pre>

== cryolo ==

=== 1.2.1 ===

<pre>cd /lsc/sources/cryolo/1.2.1/
scl enable rh-python36 bash
conda create --name cryolo -c anaconda python=3.6 pyqt=5 cudnn=7.1.2
conda activate cryolo
conda install numpy==1.14.5
pip install ./cryolo-1.2.1.tar.gz</pre>

=== 1.3.6 ===

<pre>cd /lsc/sources/cryolo
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_3_6/cryolo-1.3.6.tar.gz
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_BM_V1_2_2/cryoloBM-1.2.2.tar.gz
tar zxvf cryolo-1.3.6.tar.gz
ln -s cryolo-1.3.6 1.3.6
conda env remove --name cryolo # due to conda, cryolo cannot coexist with previous installations
conda create --prefix /lsc/sphire/1.3.6/.conda/envs/ -c anaconda python=3.6 pyqt=5 cudnn=7.1.2 cython
source activate cryolo
conda install numpy==1.15.4
pip install cryolo-1.3.6.tar.gz[gpu]
pip install cryoloBM-1.2.2.tar.gz</pre>

To activate: <pre>conda activate /lsc/sphire/1.3.6/.conda/envs/cryolo</pre>

=== 1.4.0 ===

<pre>cd /lsc/sources/cryolo
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_V1_4_0/cryolo-1.4.0.tar.gz
wget ftp://ftp.gwdg.de/pub/misc/sphire/crYOLO_BM_V1_2_3/cryoloBM-1.2.3.tar.gz
tar zxvf cryolo-1.4.0.tar.gz
ln -s cryolo-1.4.0 1.4.0
conda create --prefix /lsc/cryolo/1.4.0 -c anaconda python=3.6 pyqt=5 cudnn=7.1.2 cython
conda activate /lsc/cryolo/1.4.0
conda install numpy==1.15.4
pip install cryolo-1.4.0.tar.gz[gpu]
pip install cryoloBM-1.2.3.tar.gz</pre>

To activate: <pre>conda activate /lsc/cryolo/1.4.0</pre>
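
To check that one of these environments actually works, activate it and ask a crYOLO entry point for its help text (a minimal check; <code>cryolo_predict.py</code> is the predictor script installed by the pip package):

<pre>conda activate /lsc/cryolo/1.4.0
cryolo_predict.py --help   # should print the predictor's usage text</pre>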

== ctffind4 ==

<pre>cd /lsc/sources/ctffind/4.1.10/ctffind-4.1.10
./configure --prefix=/lsc/ctffind4/4.1.10 --enable-latest-instruction-set
make -j 64 all
make -j 64 install</pre>

== eman2 ==

<pre>cd /lsc/sources/eman2/2.2/
./eman2.22.linux64.sh
> EMAN2 will now be installed into this location:
> [/root/EMAN2] >>> /lsc/eman2/2.2
# installer does the rest of the work, re-installing a bunch of python modules via anaconda
> Do you wish the installer to prepend the EMAN2 install location
> to PATH in your /root/.bashrc ? [yes|no]
> [no] >>> no
> You may wish to edit your .bashrc to prepend the EMAN2 install location to PATH:
> export PATH=/opt/eman/2.2/bin:$PATH
# covered in the relevant environment module</pre>

== external/fftw ==

=== 2.1.5 ===

<pre>cd /lsc/sources/fftw/2.1.5/fftw-2.1.5
./configure --prefix=/lsc/fftw/2.1.5 --enable-threads --enable-mpi --enable-i386-hacks
make
make install</pre>

=== 3.3.8 ===

<pre>cd /lsc/sources/fftw/3.3.8/fftw-3.3.8
./configure CC=/usr/bin/gcc-4.8.5 --prefix=/lsc/fftw/3.3.8 --enable-single --enable-sse --enable-sse2 --enable-avx --enable-avx2 --enable-avx-128-fma --enable-generic-simd128 --enable-generic-simd256 --enable-fma --enable-mpi --enable-threads --with-g77-wrappers --with-combined-threads
make
make install</pre>

== external/openssl ==

<pre>cd /lsc/sources/openssl/openssl-1.0.2o/
./config --prefix=/lsc/external/openssl/1.0.2o
make -j 64
make -j 64 install
# done</pre>

== frealign ==

<pre>cd /lsc/sources/frealign/9.11/frealign_v9.11
mkdir -p /lsc/frealign/9.11
cp -r /lsc/sources/frealign/9.11/frealign_v9.11/* /lsc/frealign/9.11
# ignore the ./INSTALL file, it does nothing;
# you just need to add the relevant bin path for the application to work.
# The relevant module covers the details.</pre>

== gautomatch ==

<pre>cd /lsc/sources/gautomatch/Gautomatch_v0.53
mkdir /lsc/gautomatch/0.53
cp -r /lsc/sources/gautomatch/Gautomatch_v0.53 /lsc/gautomatch/0.53</pre>

== gctf ==

<pre>cd /lsc/sources/gctf/1.18
mkdir /lsc/gctf/1.18
cp -r /lsc/sources/gctf/1.18/* /lsc/gctf/1.18
# the environment module takes care of loading cuda 8.0 (required),
# adding the bin path, and changing the LD_PRELOAD path</pre>

== modules ==

<pre>cd /lsc/sources/modules/4.2.0
./configure --prefix=/lsc/modules --enable-doc-install --enable-example-modulefiles --enable-compat-version --enable-versioning --enable-quarantine-support --enable-auto-handling
# since --enable-versioning is turned on, the module will autoversion itself
# and place all its files under /lsc/modules/4.2.0</pre>
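
A quick check that the gctf module really does what the comment above says (assuming the module follows the <code>gctf/1.18</code> naming used elsewhere on this page):

<pre>module load gctf/1.18
echo $LD_PRELOAD   # should point at the cuda 8.0 libraries
which gctf         # should resolve to the /lsc/gctf/1.18 bin path</pre>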

== imod ==

<pre>Follow instructions as described in RT 3509886</pre>

== mpich ==

The default version is the one shipped with CentOS 7. The environment module was altered to make sure that it conflicts with the openmpi modules.

== motioncor2 ==

<pre>cd /lsc/sources/motioncor2/1.2.1
mkdir -p /lsc/motioncor2/1.2.1/bin
cp -r /lsc/sources/motioncor2/1.2.1/* /lsc/motioncor2/1.2.1
ln -s /lsc/motioncor2/1.2.1/MotionCor2_1.2.1-Cuda80 /lsc/motioncor2/1.2.1/bin/motioncor2
# the environment module takes care of setting up the bin path</pre>

== nvidia/driver ==

<pre>cd /lsc/sources/nvidia/drivers/396.54
./NVIDIA-Linux-x86_64-396.54.run</pre>

Update: to combat the issue with creating a remote OpenGL context (X11 forwarding over ssh), I upgraded the driver to 415.23, but had no luck. In nvidia-smi, the new driver [https://stackoverflow.com/questions/53422407/different-cuda-versions-shown-by-nvcc-and-nvidia-smi reports that it is CUDA 10 capable], which is fine, as it can still use the CUDA 8 run-time environment: [https://docs.nvidia.com/deploy/cuda-compatibility/index.html NVIDIA guarantees binary compatibility].

== nvidia/cuda ==

CUDA frameworks installed:

* 8.0-GA1
* 8.0-GA2
* 9.0
* 9.1
* 9.2

The default CUDA framework for all applications is 8.0-GA1.

The NVIDIA driver installed is 396.54, short-term support as of 2018-10-31, as per above.

The default driver shipped with CUDA 8.0-GA1 is 375.64 (not installed, obviously).

When loading different CUDA versions, the NVIDIA driver remains the same. Things seem to work, but this may require additional detailed testing in the future.

All CUDA installations require gcc 4.8 (hence the reason we went with CentOS).

All mentioned versions are currently installed, but we regard 8.0-GA1 as the base release, because some software demands 8.0-GA1 and has no alternatives.

Installations:

=== 8.0-GA1 ===

<pre>cd /lsc/sources/nvidia/cuda/8.0-GA1 ;
./cuda_8.0.44_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA1 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA1/samples --run-nvidia-xconfig --tmpdir=/tmp</pre>

=== 8.0-GA2 ===

<pre>cd /lsc/sources/nvidia/cuda/8.0-GA2 ;
./cuda_8.0.61_375.26_linux-run --silent --toolkit=/lsc/nvidia/cuda/8.0-GA2 --samples --samplespath=/lsc/nvidia/cuda/8.0-GA2/samples --run-nvidia-xconfig --tmpdir=/tmp</pre>

=== 9.0 ===

<pre>cd /lsc/sources/nvidia/cuda/9.0 ;
./cuda_9.0.176_384.81_linux-run --silent --toolkit=/lsc/nvidia/cuda/9.0 --samples --samplespath=/lsc/nvidia/cuda/9.0/samples --run-nvidia-xconfig --tmpdir=/tmp</pre>

=== 9.1 ===

<pre>cd /lsc/sources/nvidia/cuda/9.1 ;
./cuda_9.1.85_387.26_linux --silent --toolkit=/lsc/nvidia/cuda/9.1 --samples --samplespath=/lsc/nvidia/cuda/9.1/samples --run-nvidia-xconfig --tmpdir=/tmp</pre>

=== 9.2 ===

<pre>cd /lsc/sources/nvidia/cuda/9.2 ;
./cuda_9.2.148_396.37_linux --silent --toolkit=/lsc/nvidia/cuda/9.2 --samples --samplespath=/lsc/nvidia/cuda/9.2/samples --run-nvidia-xconfig --tmpdir=/tmp</pre>
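
Each toolkit can be sanity-checked by loading its module and asking the compiler for its version (module names as used in the nbody loop further down):

<pre>module switch nvidia/cuda/9.2
nvcc --version   # should report "release 9.2"</pre>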

== openmpi ==

<pre>cd /lsc/sources/openmpi/3.1.2
./configure --prefix /lsc/openmpi/3.1.2 --enable-binaries --enable-mpi-fortran --with-cuda=/lsc/nvidia/cuda/8.0-GA1 --with-devel-headers
make -j 64
make -j 64 install</pre>

=== Default: 3.1.2 ===

The default version loaded is 3.1.2, the stable release as of 2018-10-31.
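
Since the build is CUDA-aware (<code>--with-cuda</code> above), it is worth checking that the support was actually compiled in (a minimal sketch; <code>ompi_info</code> is Open MPI's own introspection tool):

<pre>module load openmpi
ompi_info | grep -i cuda   # look for the CUDA support lines
mpirun -np 4 hostname      # trivial smoke test on 4 slots</pre>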

== phenix ==

<pre>cd /lsc/sources/phenix/1.14/phenix-installer-1.14-3260-intel-linux-2.6-x86_64-centos6 ;
./install --prefix=/lsc/phenix/1.14-3260 --openmp --makedirs</pre>

== relion ==

=== 2.1 ===

<pre>cd /lsc/sources/relion/2.1-mpich ;
mkdir build ;
cd build ;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/2.1 ..
make -j 64
make -j 64 install</pre>

=== 3.0b ===

<pre>cd /lsc/sources/relion/3.0b-mpich ;
mkdir build ;
cd build ;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0b ..
make -j 64
make -j 64 install</pre>

=== 3.0.4 ===

<pre>cd /lsc/sources/relion/3.0.4 ;
mkdir build ;
cd build ;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0.4 -DGUI=ON -DCUDA=ON -DCudaTexture=ON -DFORCE_OWN_TBB=ON -DFORCE_OWN_FLTK=ON -DFORCE_OWN_FFTW=ON -DCUDA_ARCH=61 -DBUILD_SHARED_LIBS=ON ..
make -j 64
make install</pre>

=== 3.0.6 ===

<pre>cd /lsc/sources/relion/3.0.6 ;
mkdir build ;
cd build ;
cmake -DCMAKE_INSTALL_PREFIX=/lsc/relion/3.0.6 -DGUI=ON -DCUDA=ON -DCudaTexture=ON -DFORCE_OWN_TBB=ON -DFORCE_OWN_FLTK=ON -DFORCE_OWN_FFTW=ON -DCUDA_ARCH=61 -DBUILD_SHARED_LIBS=ON ..
make -j 64
make install</pre>

List of CUDA architectures here: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
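
For the record, <code>-DCUDA_ARCH=61</code> above matches the GTX 1080 cards (compute capability 6.1). This can be confirmed with the deviceQuery sample (assuming the CUDA samples installed earlier have been built):

<pre>/lsc/nvidia/cuda/8.0-GA1/samples/1_Utilities/deviceQuery/deviceQuery | grep 'CUDA Capability'</pre>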

== scipion ==

=== 2.0 ===

==== Cloning/Getting software ====

<pre>mkdir -p /lsc/scipion/2.0
git clone https://github.com/I2PC/scipion.git
cd scipion</pre>

==== Dependencies ====

<pre>yum install -y gcc gcc-c++ cmake java-1.8.0-openjdk-devel.x86_64 libXft-devel.x86_64 openssl-devel.x86_64 libXext-devel.x86_64 libxml++.x86_64 libquadmath-devel.x86_64 libxslt.x86_64 openmpi-devel.x86_64 gsl-devel.x86_64 libX11.x86_64 gcc-gfortran.x86_64 git</pre>

==== Configuring ====

<pre>./scipion config
scipion installp -p ~/scipion-em-relion</pre>

== sphire ==

=== 1.1 ===

<pre>cd /lsc/sources/sphire/1.1 ;
./sphire_1_1_linux.sh
# follow questions, install under /lsc/sphire/1.1</pre>

= How is each piece of software run ( really basic, just front UI, text or graphics ) =

== ccp4/7.0.074 ==

== chimera/1.13.1 ==

<pre>module load chimera
chimera</pre>

== ctffind4/4.1.10 ==

<pre>module load ctffind4
ctffind4</pre>

== eman2/2.2 ==

<pre>module load eman
eman2.py</pre>

== external ==

=== openssl/1.0.2o ===

Nothing to see here, this is just a support library.

== frealign/9.11 ==

<pre>module load frealign
frealign</pre>

== gautomatch/0.53 ==

<pre>module load gautomatch
gautomatch</pre>

== gctf/1.18 ==

<pre>module load gctf
gctf</pre>

== mpich/3.0.4 ==

<pre>module load mpich
mpirun</pre>

== motioncor2/1.2.1 ==

== nvidia/ ==

=== driver ===

<pre>modinfo nvidia</pre>

=== cuda/{ 8.0-GA1, 8.0-GA2, 9.0, 9.1, 9.2 } ===

There is really nothing you can do with the CUDA libraries directly.

You can only verify that each version of CUDA works with the following for loop in bash:

<pre>for version in 8.0-GA1 8.0-GA2 9.0 9.1 9.2;
do
    module switch nvidia/cuda/$version; # press q to quit the nbody simulation below
    # n-body gravitational attraction simulation, run on as many GPUs as lspci finds
    /lsc/nvidia/cuda/$version/samples/5_Simulations/nbody/nbody -hostmem -numdevices=$(lspci | grep -i nvidia | grep -ci vga);
done && module purge;</pre>

== openmpi/3.1.2 ==

<pre>module load openmpi
mpirun -np 64 date</pre>

== phenix/1.14-3260 ==

<pre>module load phenix
phenix</pre>

== relion/{ 2.1, 3.0b } ==

<pre>for version in 2.1 3.0b;
do
    module switch relion/$version;
    /lsc/relion/$version/bin/relion
done;</pre>

== sphire/1.1 ==

<pre>module load sphire
sphire</pre>

== xchimera/0.8 ==

not yet available

= Environmental Modules =

To see what environment modules are available:

<pre>module avail</pre>

To load a module:

<pre>module load <module></pre>

To see the already loaded modules:

<pre>module list</pre>

To unload a module:

<pre>module unload <module></pre>
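
A typical session, using module names from this page:

<pre>module avail                # lists e.g. relion/3.0.4, nvidia/cuda/9.2, chimera/1.13.1
module load relion/3.0.4
module list                 # shows relion/3.0.4 plus anything it pulled in
module unload relion/3.0.4</pre>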

= Git Repo =

== Pushing them to the nodes via Ansible + git ==

= Running Test Jobs on Aristotle to test the software =

== Relion ==

As per the [https://hpc.nih.gov/apps/RELION/relion30_tutorial.pdf Relion 3.0 tutorial], you will need the following test sets:

<pre>ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_data.tar
ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_precalculated_results.tar.gz</pre>

These test sets are already downloaded and untarred under <code>intaristotle.internal.biotek:/lsc/relion/test_data</code>.
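
For reference, fetching and unpacking the tutorial data on a fresh machine would look like this (not needed on aristotle, where the data is already in place):

<pre>mkdir -p /lsc/relion/test_data && cd /lsc/relion/test_data
wget ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_data.tar
wget ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion30_tutorial_precalculated_results.tar.gz
tar xvf relion30_tutorial_data.tar
tar zxvf relion30_tutorial_precalculated_results.tar.gz</pre>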

Run <code>module load relion/</code>'''$version''' for the version you need and wait for relion to open on your X server.

=== 3D Classification ===

Follow the instructions in the tutorial regarding 3D classification.