Computing System Administration
=== Installing ROOT with Jupyter notebook capability ===

*ROOT is in the EPEL repository (at UiO/FI this should already be set up)
*<tt>sudo yum install root root-notebook python3-root python3-devel python3-jupyroot</tt> (an admin/superuser needs to do this)
*<tt>pip3 install --user jupyter jupyterlab metakernel</tt> (this you can do as a normal user)
**This will create ~/.local - later you can move the directory somewhere else and softlink to it, e.g. <tt>ln -s /scratch/.local ~/.local</tt>
*Make sure ~/.local/bin is in your PATH

=== Getting ATLAS releases to work on RHEL5 x86_64 nodes ===

*For pacman to work you need to install an older 32-bit openssl: <tt>rpm -ivh http://ftp.scientificlinux.org/linux/scientific/51/i386/SL/openssl097a-0.9.7a-9.i386.rpm</tt>
*To satisfy some packages, especially event generation: <tt>yum install libgfortran-4.1.2-44.el5.i386</tt>
*You need to install blas: <tt>yum install blas.i386</tt> and <tt>yum install blas-devel.i386</tt>
*To get /usr/lib/libreadline.so.4 (also needed for 32-bit Linux) do <tt>rpm -ivh http://linuxsoft.cern.ch/cern/slc5X/i386/SL/compat-readline43-4.3-3.i386.rpm</tt>
*You need 32-bit gcc 3.4 (also needed for 32-bit Linux): <tt>yum install compat-gcc-34</tt>
*You need 32-bit g++ 3.4 (also needed for 32-bit Linux): <tt>yum install compat-gcc-34-c++</tt>
*You may need to install 32-bit popt: <tt>yum install popt.i386</tt>
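The PATH requirement above can be checked (and fixed for the current session) with a small snippet; adding the export line to your ~/.bashrc makes it permanent:

```shell
#!/bin/sh
# Check whether ~/.local/bin (where pip3 --user puts scripts) is on PATH;
# prepend it for the current session if it is missing.
case ":$PATH:" in
  *":$HOME/.local/bin:"*)
    echo "~/.local/bin already in PATH" ;;
  *)
    PATH="$HOME/.local/bin:$PATH"; export PATH
    echo "~/.local/bin prepended to PATH" ;;
esac
```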
 
 
 
<br>
 
 
 
=== Getting the I686-SLC5-GCC43 kits to validate (and run) on RHEL5  ===
 
 
 
*Install the pacman kit for gcc43, see https://twiki.cern.ch/twiki/bin/view/Atlas/RPMCompatSLC5#Run_SLC4_32bit_binaries_on_SLC5
 
*You probably need to <tt>ln -s /usr/lib/libg2c.so.0.0.0 /usr/lib/libg2c.so </tt>
 
*You probably need to <tt>ln -s /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libgcc_s_32.so /lib/libgcc_s_32.so</tt>
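The two symlinks above can be created defensively, so nothing happens when the target library is absent or the link already exists (paths as given above; creating links under /usr/lib and /lib requires root):

```shell
#!/bin/sh
# Create a compatibility symlink only if the source file exists
# and the destination is not already present.
link_if_present() {
    src=$1; dst=$2
    if [ -e "$src" ] && [ ! -e "$dst" ]; then
        ln -s "$src" "$dst" && echo "linked $dst"
    else
        echo "skipped $dst"
    fi
}
link_if_present /usr/lib/libg2c.so.0.0.0 /usr/lib/libg2c.so
link_if_present /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libgcc_s_32.so /lib/libgcc_s_32.so
```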
 
 
 
=== Installing root and enabling the xrootd service (needed for proof)  ===
 
 
 
*On RHEL5 x86_64 systems (gcc 4.3 is too new for the UiO desktops):
 
 
 
<tt>tar -xvzf /mn/kvant/hep/linux/root/root_v5.22.00.Linux-slc5-gcc3.4.tar.gz -C /opt</tt>
 
 
 
*Assume the standard setup is on the default system (e.g. scalar.uio.no) and that $node is the node you want to install on. If you are logged in on $node, the ssh commands are of course not needed.
 
 
 
<tt>scp /etc/sysconfig/xrootd $node:/etc/sysconfig/ <br>scp /etc/xpd.cf $node:/etc/<br>scp /etc/rc.d/init.d/xrootd $node:/etc/rc.d/init.d/ <br>ssh $node chkconfig --add xrootd<br>ssh $node chkconfig --level 35 xrootd on </tt>
 
 
 
*Note that xrootd runs under the <tt>read</tt> account (as of May 2009)
 
 
 
<tt>ssh $node mkdir /var/log/xrootd <br>ssh $node chown read:fysepf /var/log/xrootd</tt>
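Taken together, the distribution steps above can be scripted per node. This is only a sketch: the node names are placeholders, and <tt>RUN=echo</tt> keeps it a dry run until you set <tt>RUN=</tt> (empty) to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of distributing the xrootd configuration to worker nodes.
# The node names below are hypothetical examples; RUN=echo only prints
# the commands, set RUN= (empty) to actually run them.
RUN=echo
for node in node01.uio.no node02.uio.no; do
    $RUN scp /etc/sysconfig/xrootd $node:/etc/sysconfig/
    $RUN scp /etc/xpd.cf $node:/etc/
    $RUN scp /etc/rc.d/init.d/xrootd $node:/etc/rc.d/init.d/
    $RUN ssh $node chkconfig --add xrootd
    $RUN ssh $node chkconfig --level 35 xrootd on
    $RUN ssh $node mkdir /var/log/xrootd
    $RUN ssh $node chown read:fysepf /var/log/xrootd
done
```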
 
 
 
*Edit /etc/xpd.cf and restart xrootd to add a worker node.

*Redistribute /etc/xpd.cf as well (a simpler but reliable system for this has yet to be found).
 
 
 
<tt>/sbin/service xrootd start </tt>
 
 
 
<br>
 
 
 
=== Installing VirtualBox  ===
 
 
 
*Download and install the appropriate rpm from http://download.virtualbox.org/virtualbox
 
*Remove the new line with vboxusers from /etc/group
 
*Add the host's username to the vboxusers group, e.g.
 
 
 
<tt>echo vboxusers:x:15522:esbenlu &gt;&gt; /etc/group<br>echo vboxusers:x:15522:esbenlu &gt;&gt; /etc/group.local </tt>
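It is safer to rehearse the group edit on a scratch copy before touching /etc/group. The sketch below simulates the two steps above (drop the line the rpm added, append the site's line); the rpm's GID of 973 is an invented example, while 15522/esbenlu are the values from above:

```shell
#!/bin/sh
# Rehearse the /etc/group edit on a temporary copy.
GROUPFILE=$(mktemp)
# Simulate a group file after the VirtualBox rpm added its own vboxusers
# line (the GID 973 here is a made-up example).
printf 'root:x:0:\nvboxusers:x:973:\n' > "$GROUPFILE"
grep -v '^vboxusers:' "$GROUPFILE" > "$GROUPFILE.new"   # drop the rpm's line
echo 'vboxusers:x:15522:esbenlu' >> "$GROUPFILE.new"    # add site GID + user
RESULT=$(cat "$GROUPFILE.new")
echo "$RESULT"
rm -f "$GROUPFILE" "$GROUPFILE.new"
```

Once the result looks right, apply the same two steps to /etc/group and /etc/group.local as root.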
 
 
 
*Finalize the vbox setup:
 
 
 
<tt>/etc/init.d/vboxdrv setup </tt>
 
 
 
<br>
 
 
 
=== Installing ARC client on RHEL5 boxes (as superuser) ===
 
 
 
*See http://download.nordugrid.org/repos.html
 
*A typical /etc/yum.repos.d/nordugrid.repo:
 
 
 
<tt>[nordugrid]<br>name=NorduGrid - $basearch - stable<br>baseurl=http://download.nordugrid.org/repos/redhat/$releasever/$basearch/stable<br>enabled=1<br>gpgcheck=1<br>gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid</tt>
 
 
 
*<tt>yum groupinstall "ARC Client"</tt> to install the client s/w
 
*<tt>yum install nordugrid-arc-ca-utils</tt> to install the missing Certificate Authority utilities
 
*<tt>/opt/nordugrid/sbin/grid-update-crls</tt> to jumpstart the first update of the [http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci803160,00.html Certificate Revocation Lists] - after that there is a cron that does the job automatically.
 
 
 
=== Installing an ATLAS software release  ===
 
 
 
*Decide where to install the software - this is the "repository".
 
*The install script is here: http://koherens.uio.no/atlas/install/installPacmanKit.sh. Download it somewhere and chmod u+x it.
 
*A typical installation looks like
 
 
 
<tt>mkdir {root of the software repository}/atlas/software/releases/15.2.0<br>cd {root of the software repository}/atlas/software/releases/15.2.0</tt>
 
 
 
*If you downloaded the install script into {root of the software repository}/atlas/software/releases/ then you do
 
 
 
<tt>../installPacmanKit.sh 15.2.0 I686-SLC4-GCC34-OPT {runtimedir} </tt>
 
 
 
*The <tt>{runtimedir}</tt> is set in <tt>/etc/arc.conf</tt> (normal users should create <tt>/<something>/APPS/HEP</tt> and use <tt>{runtimedir}=/<something></tt>). The last folder of <something> is often called runtime; for Maiken's installation, for example, <tt><something></tt> is <tt>/scratch2/maikenp/atlas/runtime</tt>. The I686-SLC4-GCC34-OPT argument downloads a kit built for that platform (the only one fully supported for the moment - SLC4 is RHEL4/CentOS 4-compatible; kits for x86_64 and RHEL5 are under development).
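Putting the steps above together, a sketch of a full installation. REPO_ROOT and RUNTIME_DIR are placeholders you must set for your site, and <tt>RUN=echo</tt> keeps it a dry run until you set <tt>RUN=</tt> (empty):

```shell
#!/bin/sh
# Dry-run sketch of installing ATLAS release 15.2.0. REPO_ROOT and
# RUNTIME_DIR below are placeholders, not real paths; set RUN= (empty)
# to actually execute the commands.
RUN=echo
REPO_ROOT=/path/to/repository     # placeholder: root of the software repository
RUNTIME_DIR=/path/to/runtime      # placeholder: {runtimedir} from /etc/arc.conf
RELEASE=15.2.0
$RUN mkdir -p $REPO_ROOT/atlas/software/releases/$RELEASE
$RUN cd $REPO_ROOT/atlas/software/releases/$RELEASE
$RUN ../installPacmanKit.sh $RELEASE I686-SLC4-GCC34-OPT $RUNTIME_DIR
```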
 
 
 
*If your OS is 64-bit there may be some missing 32-bit libraries; my "favorites" are from the blas, libgfortran and libf2c packages. The first thing I will do is run a kit validation, e.g. <tt>./KVsubmit nordugris.uit.no 15.2.0</tt>, which will send a grid job to validate the kit (<tt>KVsubmit</tt> is in http://koherens.uio.no/atlas/install/KVsubmit if you want to try it yourself).
 
 
 
*Occasionally the database release is updated after a kit is packaged. In that case you have to download a new DBRelease. Check the [http://atlas-computing.web.cern.ch/atlas-computing/projects/releases/status/ release notes] to find out which one should be downloaded. In the directory where you installed the release (i.e. SITEROOT) do, e.g. for DBRelease 7.2.1:
 
<tt>pacman  -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease:DBRelease-7.2.1</tt>
 
 
 
=== ATLAS VO Registration and Management  ===
 
 
 
[https://lcg-voms.cern.ch:8443/vo/atlas/vomrs?path=/RootNode&action=execute This site] is used for registering ATLAS VO grid resource users and for managing information about their affiliation with the ATLAS VO and their permissions regarding use of grid resources.<br>
 
 
 
=== Installation of a new release of DQ2Clients ===
 
 
 
* Set your umask to 022 before installing, or nobody else will be able to read the files or execute any commands!
 
* The typical install command is <tt>pacman -trust-all-caches -allow tar-overwrite -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/ddm/releases/pacman/cache:DQ2Clients</tt>
 
* Additional documentation is [https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients here] - there is also a [https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2ClientsHowTo howto].
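The umask advice above is easy to verify: with umask 022, newly created files come out world-readable (mode 644).

```shell
#!/bin/sh
# Demonstrate that umask 022 yields world-readable files.
umask 022
f=$(mktemp -u)                    # unused temporary file name
touch "$f"                        # created as 0666 & ~022 = 0644
PERMS=$(ls -l "$f" | cut -c1-10)
echo "$PERMS"                     # -rw-r--r--
rm -f "$f"
```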
 
