Computing System Administration

Getting ATLAS releases to work on RHEL5 x86_64 nodes

  • For pacman to work you need to install an older 32-bit openssl: rpm -ivh http://ftp.scientificlinux.org/linux/scientific/51/i386/SL/openssl097a-0.9.7a-9.i386.rpm
  • To satisfy some packages, especially for event generation: yum install libgfortran-4.1.2-44.el5.i386
  • You need to install blas: yum install blas.i386 and yum install blas-devel.i386
  • To get /usr/lib/libreadline.so.4 (also needed for 32-bit Linux) do rpm -ivh http://linuxsoft.cern.ch/cern/slc5X/i386/SL/compat-readline43-4.3-3.i386.rpm
  • You need 32-bit gcc 3.4 (also needed for 32-bit Linux): yum install compat-gcc-34
  • You need 32-bit g++ 3.4 (also needed for 32-bit Linux): yum install compat-gcc-34-c++
  • You may need to install 32-bit popt: yum install popt.i386
  • In order to analyze luminosity blocks in real data, libxml2-devel is needed: yum install libxml2-devel (all of the above is collected in a single sketch after this list)
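For convenience, here are the same commands in one go. This is just a sketch of the steps above; it assumes the RPM URLs are still reachable and that yum can resolve the i386 packages:

# 32-bit dependencies for ATLAS releases on a RHEL5 x86_64 node
rpm -ivh http://ftp.scientificlinux.org/linux/scientific/51/i386/SL/openssl097a-0.9.7a-9.i386.rpm
rpm -ivh http://linuxsoft.cern.ch/cern/slc5X/i386/SL/compat-readline43-4.3-3.i386.rpm
yum -y install libgfortran-4.1.2-44.el5.i386 blas.i386 blas-devel.i386 \
    compat-gcc-34 compat-gcc-34-c++ popt.i386 libxml2-devel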

Getting the I686-SLC5-GCC43 kits to validate (and run) on RHEL5

Installing root and enabling the xrootd service (needed for proof)

  • On RHEL5 x86_64 systems, unpack the gcc 3.4 build (the gcc 4.3 builds are too new for the UiO desktops):
tar -xvzf /mn/kvant/hep/linux/root/root_v5.22.00.Linux-slc5-gcc3.4.tar.gz -C /opt
  • Assume the standard setup exists on a reference system (e.g. scalar.uio.no) and that $node is the node you want to install on. If you are logged in on $node directly, the ssh'ing is of course not needed:
scp /etc/sysconfig/xrootd $node:/etc/sysconfig/
scp /etc/xpd.cf $node:/etc/
scp /etc/rc.d/init.d/xrootd $node:/etc/rc.d/init.d/
ssh $node chkconfig --add xrootd
ssh $node chkconfig --level 35 xrootd on
  • Note that xrootd runs under the read account (as of May 2009):
ssh $node mkdir /var/log/xrootd 
ssh $node chown read:fysepf /var/log/xrootd
  • Edit /etc/xpd.cf and restart xrootd to add a worker node.
  • Redistribute /etc/xpd.cf as well (we still need a simpler but reliable system for this):
/sbin/service xrootd start
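A quick sanity check after starting the service (1094 is the default xrootd port; adjust if your xpd.cf overrides it):

/sbin/service xrootd status
netstat -tlnp | grep 1094    # xrootd should be listening here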


Installing VirtualBox

echo vboxusers:x:15522:esbenlu >> /etc/group
echo vboxusers:x:15522:esbenlu >> /etc/group.local
  • Finalize the vbox setup:
/etc/init.d/vboxdrv setup
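To verify that the kernel module is loaded and the group entry is in place (group and user names from the example above):

lsmod | grep vboxdrv      # module should be listed after a successful setup
getent group vboxusers    # should show the vboxusers entry added above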


Installing ARC client on RHEL5 boxes (as superuser)

rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
  • Typical /etc/yum.repos.d/nordugrid:
[nordugrid]
name=NorduGrid - $basearch - stable
baseurl=http://download.nordugrid.org/repos/redhat/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=http://download.nordugrid.org/RPM-GPG-KEY-nordugrid
  • yum groupinstall "ARC Client" to install the client s/w
  • yum install nordugrid-arc-ca-utils to install the missing Certificate Authority utilities
  • /opt/nordugrid/sbin/grid-update-crls to jumpstart the first update of the Certificate Revocation Lists - after that a cron job does this automatically.
  • yum install lfc-devel lfc-python to get LFC, needed by the ATLAS DQ2 commands (this should cure "python LFC exception [Python bindings missing]").
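A minimal smoke test of the installed client (a sketch; it assumes a valid user certificate in ~/.globus):

grid-proxy-init    # create a short-lived proxy from your user certificate
ngstat -a          # should run without errors (lists your grid jobs, if any)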

Installing an ATLAS software release (NEW 03.02.2010)

  • Decide where to install the software - this is "swdir".
  • Decide where to install the runtime scripts, this is "rtedir".
  • Download the install scripts and chmod a+x them afterwards
  • Install an appropriate gcc432 compiler kit (use "uname -m" to find your architecture). You cannot use the i686 gcc kit on an x86_64 system! When installed this way, the ATLAS release(s) you install later will refer to the compiler implicitly - you should not need to source it yourself unless you are doing non-ATLAS program development. Note that SLC4 is no longer the default platform and will soon be discontinued (though SLC4 kits are still available if you need older releases).
    • ./AtlasInstall.sh --release 4.3.2 --project gcc --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software
    • ./AtlasInstall.sh --release 4.3.2 --project gcc --arch X86_64-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software
    • ./AtlasInstall.sh with no arguments will give you help.
  • Install the kit you want (x86_64 kits are not yet validated for physics). The i686 kit will work on both i686 and x86_64 systems. Beware that the DBRelease needs to be modern enough. Check the release notes for your release in http://atlas-computing.web.cern.ch/atlas-computing/projects/releases/status/ (and be warned, these pages are written by hand and the DBRelease is often wrong).
    • Example: ./AtlasInstall.sh --release 15.6.3 --project AtlasOffline --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1
  • If you need a production cache, the next step is:
    • ./AtlasInstall.sh --release 15.6.3 --project AtlasProduction --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1
    • If you want a Tier0 cache instead, just change the project to AtlasTier0.
  • If your OS is x86_64 there may be some missing 32-bit libraries; my "favorites" are from the blas, blas-devel, libgfortran and libf2c packages. See Getting ATLAS releases to work on RHEL5 x86_64 nodes above and the ATLAS wiki for more details. A consolidated example session is sketched below.
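Putting the steps together, a typical session on an x86_64 RHEL5 box might look like this (paths as in the examples above; fetching AtlasInstall.sh itself is not repeated here):

# compiler kit matching the OS architecture (uname -m says x86_64)
./AtlasInstall.sh --release 4.3.2 --project gcc --arch X86_64-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software
# the validated i686 release with a matching DBRelease
./AtlasInstall.sh --release 15.6.3 --project AtlasOffline --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1
# optionally a production cache on top (use AtlasTier0 for a Tier0 cache)
./AtlasInstall.sh --release 15.6.3 --project AtlasProduction --arch I686-SLC5-GCC43-OPT --rtedir /my/runtime --swdir /my/atlas/software --dbrelease 8.5.1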

ATLAS VO Registration and Management

This site is used for registering ATLAS VO grid resource users and for managing information about their affiliation with the ATLAS VO and their permissions regarding the use of grid resources.

Setting up VOMS for US ATLAS certificates

Create a file

/etc/grid-security/vomsdir/atlas/vo.racf.bnl.gov.lsc

The file currently needs to be named

/etc/grid-security/vomsdir/atlas/vo02.racf.bnl.gov.lsc

The file name is determined from the output of "voms-proxy-info -uri". For now, better create _both_ files; this is due to a misconfiguration of the BNL VOMS server. The admins have been informed (10.03.10).

The contents:

/DC=org/DC=doegrids/OU=Services/CN=vo.racf.bnl.gov
/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
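A short sketch that creates both files with exactly this content (paths as above; run as root):

for f in vo.racf.bnl.gov vo02.racf.bnl.gov; do
  cat > /etc/grid-security/vomsdir/atlas/$f.lsc <<'EOF'
/DC=org/DC=doegrids/OU=Services/CN=vo.racf.bnl.gov
/DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
EOF
done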

Then you can stop bothering with the certificate for racf.bnl (which should be in the lcg-vomscerts package and which expires once a year).

Installation of a new release of DQ2Clients

  • Set umask to 022 first, or nobody else will be able to read the installed files or execute any of the commands!
  • Typical install command is

pacman -trust-all-caches -allow tar-overwrite -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/ddm/releases/pacman/cache:DQ2Clients

  • Additional documentation, including a howto, is available on the ATLAS DQ2 pages.
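A sketch of a complete session (the pacman setup path is taken from the DBRelease section below; adjust to your installation, and the target directory is hypothetical):

umask 022                                                      # see the warning above
source /shared/SOFTWARE/APPS/HEP/pacman/pacman-3.29/setup.sh   # pacman setup
cd /where/to/install                                           # hypothetical target directory
pacman -trust-all-caches -allow tar-overwrite -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/project/ddm/releases/pacman/cache:DQ2Clients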

Getting COMPHEP to work on 64-bit RHEL5

yum install compat-gcc-34-g77 g2clib-devel
cd /usr/lib64; ln -s libg2c.so.0.0.0 libg2c.so

The latter assumes that a similar fix for /usr/lib (32-bit) has already been done (see above).
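For reference, both fixes together with a check that the links resolve (the 32-bit line assumes the compat libg2c is installed under /usr/lib with the same version number; adjust if not):

ln -s /usr/lib/libg2c.so.0.0.0 /usr/lib/libg2c.so        # 32-bit (see above)
ln -s /usr/lib64/libg2c.so.0.0.0 /usr/lib64/libg2c.so    # 64-bit
ls -lL /usr/lib/libg2c.so /usr/lib64/libg2c.so           # both should resolve without errors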

Installing a new DBRelease in an existing athena release

  • Set up pacman, e.g. source /shared/SOFTWARE/APPS/HEP/pacman/pacman-3.29/setup.sh
  • cd to the release directory, e.g. /shared/SOFTWARE/APPS/HEP/atlas/releases/15.6.9
  • Install the new DBRelease, e.g. pacman -get http://atlas.web.cern.ch/Atlas/GROUPS/DATABASE/pacman4/DBRelease/:DBRelease-10.7.1
  • Make sure the ownership of the updated softlink DBRelease/current and of e.g. DBRelease/10.7.1 is correct.
  • And you might as well check that DBRelease/current points to the DBRelease you intended (see the check below)...
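A quick check, with the paths from the example above:

cd /shared/SOFTWARE/APPS/HEP/atlas/releases/15.6.9
ls -ld DBRelease/current DBRelease/10.7.1   # verify ownership
readlink DBRelease/current                  # should print 10.7.1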

Installing ROOT with Jupyter notebook capability

  • ROOT is in the EPEL repository (at UiO/FI this should already be set up)
  • sudo yum install root root-notebook python3-root python3-devel python3-jupyroot (an admin/superuser needs to do this)
  • pip3 install --user jupyter jupyterlab metakernel (this you can do as a normal user)
    • This will create ~/.local - later you can move the directory somewhere else and softlink to it, e.g. ln -s /scratch/.local ~/.local
  • Make sure ~/.local/bin is in your PATH
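With everything in place you can start a ROOT-enabled notebook; "root --notebook" launches a Jupyter server with the ROOT C++ kernel:

export PATH=$HOME/.local/bin:$PATH   # only if ~/.local/bin is not already in PATH
root --notebook                      # opens Jupyter with the ROOT kernel available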