Computing System Administration

=== Getting ATLAS releases to work on RHEL5 x86_64 nodes  ===
 
 
 
 
 
 
*For pacman to work you need to install an older 32-bit openssl: <tt>rpm -ivh http://ftp.scientificlinux.org/linux/scientific/51/i386/SL/openssl097a-0.9.7a-9.i386.rpm</tt>
 
*To satisfy some packages, especially event generation: <tt>yum install libgfortran-4.1.2-44.el5.i386</tt>
 
*You need to install blas: <tt>yum install blas.i386</tt> and <tt>yum install blas-devel.i386</tt>
 
*To get /usr/lib/libreadline.so.4 (also needed for 32-bit Linux) do <tt>rpm -ivh http://linuxsoft.cern.ch/cern/slc5X/i386/SL/compat-readline43-4.3-3.i386.rpm</tt>
 
*You need 32-bit gcc 3.4 (also needed for 32-bit Linux): <tt>yum install compat-gcc-34</tt>
 
*You need 32-bit g++ 3.4 (also needed for 32-bit Linux): <tt>yum install compat-gcc-34-c++</tt>
 
*You may need to install 32-bit popt: <tt>yum install popt.i386</tt>
 
*In order to analyze luminosity blocks in real data libxml2-devel is needed: <tt>yum install libxml2-devel</tt>
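
A quick way to check that the 32-bit pieces listed above are actually in place (a minimal sketch; the package names are the ones from the list, adjust to your repository versions):

<pre># List installed versions/architectures of the packages mentioned above
for pkg in openssl097a libgfortran blas blas-devel compat-readline43 \
           compat-gcc-34 compat-gcc-34-c++ popt libxml2-devel; do
    rpm -q --queryformat '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' "$pkg" || echo "MISSING: $pkg"
done
# The 32-bit readline compatibility library should now exist
ls -l /usr/lib/libreadline.so.4
</pre>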
 
 
 
=== Getting the I686-SLC5-GCC43 kits to validate (and run) on RHEL5  ===
 
 
 
*Install the pacman kit for gcc43, see https://twiki.cern.ch/twiki/bin/view/Atlas/RPMCompatSLC5#Run_SLC4_32bit_binaries_on_SLC5
 
*You probably need to <tt>ln -s /usr/lib/libg2c.so.0.0.0 /usr/lib/libg2c.so </tt>
 
*You probably need to <tt>ln -s /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libgcc_s_32.so /lib/libgcc_s_32.so</tt>
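
A small sketch that creates the two compatibility links only if they are missing, and then verifies that both resolve (paths as given above):

<pre># Create the compat symlinks only if they do not already exist
[ -L /usr/lib/libg2c.so ] || ln -s /usr/lib/libg2c.so.0.0.0 /usr/lib/libg2c.so
[ -L /lib/libgcc_s_32.so ] || \
    ln -s /usr/lib/gcc/x86_64-redhat-linux/3.4.6/libgcc_s_32.so /lib/libgcc_s_32.so
# ls -L follows the links and fails if a target is missing
ls -lL /usr/lib/libg2c.so /lib/libgcc_s_32.so
</pre>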
 
 
 
=== Installing root and enabling the xrootd service (needed for proof)  ===
 
 
 
*On RHEL5 x86_64 systems (the gcc 4.3 build is too new for UiO desktops), unpack the gcc 3.4 build:
 
<pre>tar -xvzf /mn/kvant/hep/linux/root/root_v5.22.00.Linux-slc5-gcc3.4.tar.gz -C /opt
 
</pre>
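
The tarball unpacks into /opt/root. The environment still has to be set up before use; a minimal sketch, assuming that install location:

<pre>export ROOTSYS=/opt/root
export PATH=$ROOTSYS/bin:$PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH
</pre>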
 
*Assume the standard setup is already on a default system (e.g. scalar.uio.no) and that $node is the node you want to install on. If you are logged on to $node directly, the ssh'ing is of course not needed:
 
<pre>scp /etc/sysconfig/xrootd $node:/etc/sysconfig/
 
scp /etc/xpd.cf $node:/etc/
 
scp /etc/rc.d/init.d/xrootd $node:/etc/rc.d/init.d/
 
ssh $node chkconfig --add xrootd
 
ssh $node chkconfig --level 35 xrootd on
 
</pre>
 
*Note: xrootd runs under the read account (as of May 2009):
 
<pre>ssh $node mkdir /var/log/xrootd
 
ssh $node chown read:fysepf /var/log/xrootd
 
</pre>
 
*Edit /etc/xpd.cf and restart xrootd to add a worker node.
 
*Redistribute /etc/xpd.cf as well (we still have to find a simpler but reliable mechanism for this; a loop sketch follows below).
 
<pre>/sbin/service xrootd start
 
</pre>
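
Until a better distribution mechanism is in place, a loop along these lines can push the updated configuration to the worker nodes and restart the service (the node names are purely illustrative):

<pre># Hypothetical worker node list - substitute the real ones
for node in node01.uio.no node02.uio.no; do
    scp /etc/xpd.cf $node:/etc/
    ssh $node /sbin/service xrootd restart
done
</pre>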
 
<br>
 
 
 
=== Installing VirtualBox  ===
 
 
 
*Download and install the appropriate rpm from http://download.virtualbox.org/virtualbox
 
*Remove the vboxusers line that the installation added to /etc/group
 
*Add the host's username to the vboxusers group, e.g.
 
<pre>echo vboxusers:x:15522:esbenlu >> /etc/group
echo vboxusers:x:15522:esbenlu >> /etc/group.local
 
</pre>
 
*Finalize the vbox setup:
 
<pre>/etc/init.d/vboxdrv setup
 
</pre>
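
A quick check that the setup succeeded (the username esbenlu is the one from the example above):

<pre># The kernel module should be loaded after "vboxdrv setup"
lsmod | grep vboxdrv
# The user should now be a member of vboxusers
getent group vboxusers
id esbenlu | grep vboxusers
</pre>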
 
<br>
 
 
 
=== Installing ARC client on RHEL5 boxes (as superuser)  ===
 
 
 
*See http://download.nordugrid.org/repos.html
 
*Set up the extra packages for Enterprise Linux ([http://www2.usit.uio.no/it/unix/linux/doc/epel.html epel]):
 
<pre>rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm</pre>
 
*Typical /etc/yum.repos.d/nordugrid.repo:
 
<pre>[nordugrid]
name=NorduGrid - $basearch - stable
 
baseurl=http://download.nordugrid.org/repos/redhat/$releasever/$basearch/stable
 
enabled=1
 
gpgcheck=1
 
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid
 
</pre>
 
*<tt>yum groupinstall "ARC Client"</tt> to install the client s/w
 
*<tt>yum install nordugrid-arc-ca-utils</tt> to install the missing Certificate Authority utilities
 
*<tt>/opt/nordugrid/sbin/grid-update-crls</tt> to jumpstart the first update of the [http://searchsecurity.techtarget.com/sDefinition/0,,sid14_gci803160,00.html Certificate Revocation Lists] - after that there is a cron that does the job automatically.
 
*<tt>yum install lfc-devel lfc-python</tt> to get the LFC bindings needed by the ATLAS DQ2 commands (this should cure the "python LFC exception [Python bindings missing]" error).
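
A couple of quick post-install checks (file locations follow the standard grid-security layout; adjust if your site differs):

<pre># CRL files (*.r0) should appear once grid-update-crls has run
ls /etc/grid-security/certificates/*.r0 | head
# The LFC python bindings should now import cleanly
python -c "import lfc; print 'LFC bindings OK'"
</pre>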
 
 
 
=== Installing an ATLAS software release (NEW 03.02.2010)  ===
 
 
 
*Decide where to install the software - this is "swdir" (the full sequence is collected into one sketch after this list).
 
*Decide where to install the runtime scripts, this is "rtedir".
 
*Download the install scripts and <tt>chmod a+x</tt> them afterwards
 
**http://www-f9.ijs.si/atlas/grid/script/AtlasInstall.sh
 
**http://www-f9.ijs.si/atlas/grid/script/sw-mgr
 
*Install an appropriate gcc432 compiler (use "uname -m" to find your architecture). You can't use the i686 gcc kit on an x86_64 system! When installed in this way, the ATLAS release(s) you install later will refer to the compiler implicitly - you shouldn't need to source it yourself unless you are doing non-ATLAS program development. Note that SLC4 is no longer the default platform and will soon be discontinued (though if you must have older releases you will find SLC4 kits).
 
**<tt>./AtlasInstall.sh --release 4.3.2 --project gcc --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software</tt>
 
**<tt>./AtlasInstall.sh --release 4.3.2 --project gcc --arch X86_64-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software</tt>
 
**<tt>./AtlasInstall.sh</tt> with no arguments will give you help.
 
*Install the kit you want (x86_64 kits are not yet validated for physics). The i686 kit will work on both i686 and x86_64 systems. Beware that the DBRelease needs to be modern enough. Check the release notes for your release in http://atlas-computing.web.cern.ch/atlas-computing/projects/releases/status/ (and be warned, these pages are written by hand and the DBRelease is often wrong).
 
**Example: <tt>./AtlasInstall.sh --release 15.6.3 --project AtlasOffline --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1</tt>
 
*If you need a production cache the next step is:
 
**<tt>./AtlasInstall.sh --release 15.6.3.X --project AtlasProduction --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1</tt> (here 15.6.3.X stands for the cache version number)
 
**If you want a Tier0 cache instead just change the project to AtlasTier0.
 
*If your OS is x86_64 there may be some missing 32-bit libraries, my "favorites" are from the blas, blas-devel, libgfortran and libf2c packages. See [[Computing System Administration#Getting_ATLAS_releases_to_work_on_RHEL5_x86_64_nodes]] and the [https://twiki.cern.ch/twiki/bin/view/Atlas/RPMCompatSLC5 ATLAS wiki] for more details.
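
For reference, the whole sequence above collected into one sketch (directories and release numbers are just the examples used above; substitute your own):

<pre># Example values - substitute your own directories and release numbers
swdir=/my/atlas/software
rtedir=/my/runtime

# Fetch the install scripts and make them executable
wget http://www-f9.ijs.si/atlas/grid/script/AtlasInstall.sh
wget http://www-f9.ijs.si/atlas/grid/script/sw-mgr
chmod a+x AtlasInstall.sh sw-mgr

# Compiler kit - pick the arch that matches "uname -m"
./AtlasInstall.sh --release 4.3.2 --project gcc --arch I686-SLC5-GCC43-OPT \
    --rtedir $rtedir --swdir $swdir

# The release itself, with a matching DBRelease
./AtlasInstall.sh --release 15.6.3 --project AtlasOffline --arch I686-SLC5-GCC43-OPT \
    --rtedir $rtedir --swdir $swdir --dbrelease 8.5.1
</pre>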
 
 
 
=== ATLAS VO Registration and Management  ===
 
 
 
[https://lcg-voms.cern.ch:8443/vo/atlas/vomrs?path=/RootNode&action=execute This site] is used for registering ATLAS VO grid users and for managing information about their affiliation with the ATLAS VO and their permissions regarding use of grid resources.<br>
 
 
 
=== Setting up VOMS for US ATLAS certificates  ===
 
 
 
Create a file
 
<pre>/etc/grid-security/vomsdir/atlas/vo.racf.bnl.gov.lsc
 
</pre>
 
The file currently needs to be named
 
<pre>/etc/grid-security/vomsdir/atlas/vo02.racf.bnl.gov.lsc
 
</pre>
 
The file name is determined from the output of "voms-proxy-info -uri". It is best to create _both_ files for now; the mismatch is due to a misconfiguration of the BNL VOMS server, and the admins have been informed (10.03.10).
 
 
 
The contents:
 
<pre>/DC=org/DC=doegrids/OU=Services/CN=vo.racf.bnl.gov
 
/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
 
</pre>
 
Then you can stop bothering with the certificate for racf.bnl (which should be in the lcg-vomscerts package and which expires once a year).
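
A small sketch that writes both .lsc files in one go (contents exactly as above):

<pre>mkdir -p /etc/grid-security/vomsdir/atlas
for f in vo.racf.bnl.gov.lsc vo02.racf.bnl.gov.lsc; do
    printf '%s\n%s\n' \
        '/DC=org/DC=doegrids/OU=Services/CN=vo.racf.bnl.gov' \
        '/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1' \
        > /etc/grid-security/vomsdir/atlas/$f
done
</pre>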
 
 
 
=== Installation of a new release of DQ2Clients  ===
 
 
 
*Set umask to 022 first, or nobody else will be able to read the installed files or execute the commands!
 
*A typical install command is given below; a sketch combining it with the umask step follows.
 
 
 
<tt>pacman -trust-all-caches -allow tar-overwrite -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/ddm/releases/pacman/cache:DQ2Clients</tt>
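
The install command combined with the umask step, as one sketch (the pacman setup path is site specific - the example path is the one used in the DBRelease section below):

<pre># umask first, so the installed files are world-readable
umask 022
# pacman setup script - adjust the path to your site
source /shared/SOFTWARE/APPS/HEP/pacman/pacman-3.29/setup.sh
pacman -trust-all-caches -allow tar-overwrite \
    -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/ddm/releases/pacman/cache:DQ2Clients
</pre>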
 
 
 
*Additional documentation is [https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Clients here] - there is also a [https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2ClientsHowTo howto].
 
 
 
=== Getting COMPHEP to work on 64-bit RHEL5  ===
 
<pre>yum install compat-gcc-34-g77 g2clib-devel
 
cd /usr/lib64; ln -s libg2c.so.0.0.0 libg2c.so
 
</pre>
 
The latter assumes that a similar fix for /usr/lib (32-bit) has already been done (see above).
 
 
 
=== Installing a new DBRelease in an existing athena release  ===
 
 
 
*Set up pacman, e.g. <tt>/shared/SOFTWARE/APPS/HEP/pacman/pacman-3.29/setup.sh </tt>
 
*<tt>cd</tt> to the release directory, e.g. <tt>/shared/SOFTWARE/APPS/HEP/atlas/releases/15.6.9 </tt>
 
*Install the new DBRelease, e.g. <tt>pacman -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease/:DBRelease-10.7.1 </tt>
 
*Make sure the ownership of the updated softlink DBRelease/current and of e.g. DBRelease/10.7.1 is correct.
 
*And you might as well check that DBRelease/current points to the DBRelease you intended (see the sketch below).
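
The same steps collected into one sketch (paths and version numbers are the examples from the list above):

<pre>source /shared/SOFTWARE/APPS/HEP/pacman/pacman-3.29/setup.sh
cd /shared/SOFTWARE/APPS/HEP/atlas/releases/15.6.9
pacman -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease/:DBRelease-10.7.1
# Check ownership and that the "current" link points where you intended
ls -ld DBRelease/10.7.1 DBRelease/current
readlink DBRelease/current
</pre>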
 
  
 
=== Installing ROOT with Jupyter notebook capability ===
 

*ROOT is in the EPEL repository (at UiO/FI this should already be set up)
*<tt>sudo yum install root root-notebook python3-root python3-devel python3-jupyroot</tt> (an admin/superuser needs to do this)
*<tt>pip3 install --user jupyter jupyterlab metakernel</tt> (this you can do as a normal user)
**This will create ~/.local - later you can move the directory somewhere else and softlink to it, e.g. <tt>ln -s /scratch/.local ~/.local</tt>
*Make sure ~/.local/bin is in your PATH
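
Once the packages are installed, a notebook server can be started like this (assuming the default ~/.local location from the pip3 step above):

<pre># make the user-level jupyter installation visible
export PATH=$HOME/.local/bin:$PATH
# start a Jupyter server with the ROOT C++ kernel available
root --notebook
</pre>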