ND Cloud stuff


Frontier/Squid configuration at sites

  • From release 15.4.0 onwards, Oracle database access goes via Squid and Frontier servers. This is configured in siteconfig.sh, which is expected to be located in the root directory of the Atlas kits installed at a site. For the moment only a single Squid is accepted by the main European Frontier server at FZK; PIC and BNL act as backup Frontier servers. As more Squids are deployed in the ND/NDGF cloud, sites can be configured to use whichever Squids are nearby network-wise, in order to balance the load. The current default FRONTIER/SQUID setup is shown below; the FRONTIER_SERVER string lists the FZK server first, then the PIC and BNL backups, and finally the local proxy (COOLPAYLOADPATH is described in a later section).

[read@scalar ~]$ cat /shared/SOFTWARE/APPS/HEP/atlas/siteconfig.sh

#
#......ATLAS Site configuration for grid.uio.no
#

export FRONTIER_SERVER="(serverurl=http://atlassq1-fzk.gridka.de:8021/fzk)(serverurl=http://atlfrontier.pic.es:3128/pic-frontier)(serverurl=http://squid-frontier.usatlas.bnl.gov:23128/frontieratbnl)(proxyurl=http://db-atlas-squid.ndgf.org:3128)"
export FRONTIER_LOG_LEVEL=debug 
export COOLPAYLOADPATH="/usit/cargo/fysepf/atlas/software/CoolPayloadFiles"
  • The intention is for NDGF to set up aliases db-atlas-squidNN.ndgf.org with NN=01, 02, etc., and to have these registered with FZK (email the registration request to "hn-atlas-dbops &at& cern.ch"). The current list of squid servers behind the alias is shown below; a quick way to check that a proxy and the Frontier server respond is sketched after this list.
bash-3.2$ host db-atlas-squid.ndgf.org
db-atlas-squid.ndgf.org has address 129.240.15.83
db-atlas-squid.ndgf.org has address 129.240.85.59
db-atlas-squid.ndgf.org has address 194.249.156.126
bash-3.2$ host 129.240.15.83
83.15.240.129.in-addr.arpa domain name pointer db-atlas-squid.titan.uio.no.
bash-3.2$ host 129.240.85.59
59.85.240.129.in-addr.arpa domain name pointer scalar.uio.no.
bash-3.2$ host 194.249.156.126
126.156.249.194.in-addr.arpa domain name pointer dcache.ijs.si.
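
  • A quick way to check that a Squid proxy and the FZK Frontier server respond is to send a plain HTTP request through the proxy with curl. This is only a sketch: the host and port values are taken from siteconfig.sh above, but the "/Frontier" servlet path is an assumption based on typical Frontier deployments and may need adjusting. Any HTTP status code coming back indicates that the proxy chain is alive.

# Hedged reachability check of the Squid/Frontier chain.  Host and port
# values are copied from the FRONTIER_SERVER string above; the /Frontier
# servlet suffix is an assumption, not taken from this page.
curl -s -o /dev/null -w "HTTP %{http_code}\n" \
     --proxy http://db-atlas-squid.ndgf.org:3128 \
     "http://atlassq1-fzk.gridka.de:8021/fzk/Frontier"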

Access to COOL Payload Files (Conditions data)

  • See here and here for information about access to conditions data stored in COOL/POOL files.
  • COOLPAYLOADPATH should be set to the directory where the POOL conditions files are stored. This must be a shared file system visible to the worker nodes (like the software repository, which on some sites is large enough to also host the COOL conditions data). A simple python script that downloads the conditions files and builds the Pool File Catalog for them is here. It should be executed periodically; a sensible frequency is not yet known, so twice a day is a reasonable starting point (a hedged cron sketch is given at the end of this section).
  • In order to download the files a valid VOMS proxy for the "atlas" VO is needed, as well as the DQ2 client tools. The python binding to LFC must be installed for dq2-get to work properly, and LFC versions older than 1.7.2.5 are known to cause problems.
    • It is not yet decided how to implement this in a sustainable, automatic or semi-automatic system. One possibility is to map the Atlas production or software manager role to a local user with write access to the COOLPAYLOADPATH and keep it updated with regular grid jobs.
  • As of 25.10.09 there are ~150 GB of COOL conditions data. This is likely to grow to ~1 TB or so in the coming years.
  • For athena jobs running on the worker nodes to find the COOL conditions data, the PoolFileCatalog.xml in COOLPAYLOADPATH needs to be linked as PoolCat_comcond.xml and PoolCat_oflcond.xml in a poolcond subdirectory of the job's run directory, as done in the runtime environment script below, e.g.
[read@scalar CoolPayloadFiles]$ cat /shared/SOFTWARE/runtime/APPS/HEP/ATLAS-15.5.0
#!/bin/bash

# Runtime environment scripts are called (bash source)
# from NorduGrid ARC with argument 0,1 or 2.
# First call with argument "0" is made before the batch
# job submission script is written.
# Second call is made with argument "1" just prior to execution of the
# user specified executable.
# Third "clean-up" call is made with argument "2" after the user
# specified executable has returned.
# No argument is assumed to mean the script is called interactively.

if test -z "$1"; then
  shift
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/pacman-3.29/setup.sh
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/cmtsite/setup.sh -tag=15.5.0
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/AtlasOffline/15.5.0/AtlasOfflineRunTime/cmt/setup.sh ' '
  source /shared/SOFTWARE/APPS/HEP/atlas/siteconfig.sh
  mkdir -pv poolcond
  if test -e $COOLPAYLOADPATH/PoolFileCatalog.xml; then
      ln -s $COOLPAYLOADPATH/PoolFileCatalog.xml poolcond/PoolCat_comcond.xml
      ln -s $COOLPAYLOADPATH/PoolFileCatalog.xml poolcond/PoolCat_oflcond.xml
  fi
elif [ "$1" == 1 ]
then
  shift
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/pacman-3.29/setup.sh
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/cmtsite/setup.sh -tag=15.5.0
  source /shared/SOFTWARE/APPS/HEP/atlas/15.5.0-I686-SLC4-GCC34-OPT/AtlasOffline/15.5.0/AtlasOfflineRunTime/cmt/setup.sh ' '
  source /shared/SOFTWARE/APPS/HEP/atlas/siteconfig.sh
  mkdir -pv poolcond
  if test -e $COOLPAYLOADPATH/PoolFileCatalog.xml; then
      ln -s $COOLPAYLOADPATH/PoolFileCatalog.xml poolcond/PoolCat_comcond.xml
      ln -s $COOLPAYLOADPATH/PoolFileCatalog.xml poolcond/PoolCat_oflcond.xml
  fi
fi
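
  • A quick interactive sanity check of the runtime script above (a sketch; run it in a scratch directory on a node that mounts the shared software and COOLPAYLOADPATH areas): sourcing the script with no argument runs the interactive branch, which sets up the release and creates the poolcond links.

# Source the ATLAS-15.5.0 runtime script shown above with no argument
# and check that the PoolFileCatalog links were created.
cd "$(mktemp -d)"
source /shared/SOFTWARE/runtime/APPS/HEP/ATLAS-15.5.0
ls -l poolcond/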
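
  • The periodic download mentioned earlier could be wired up roughly as follows. This is only a sketch: the wrapper name update_cool_payload.sh, the log file and the dataset name COND.PLACEHOLDER.DATASET are hypothetical stand-ins, and the actual download and catalogue construction are done by the python script referred to above (not reproduced here).

#!/bin/bash
# update_cool_payload.sh -- hypothetical wrapper for the periodic update.
# Needs the DQ2 client tools and a valid VOMS proxy for the atlas VO.
source /shared/SOFTWARE/APPS/HEP/atlas/siteconfig.sh

# Refuse to run without a proxy valid for at least another hour.
voms-proxy-info --exists --valid 1:00 || { echo "no valid atlas proxy" >&2; exit 1; }

cd "$COOLPAYLOADPATH" || exit 1

# Placeholder dataset; the real conditions datasets and the rebuild of
# PoolFileCatalog.xml are handled by the python script mentioned above.
dq2-get COND.PLACEHOLDER.DATASET

# Crontab entry to run this twice a day (06:00 and 18:00), matching the
# tentative frequency suggested in the text:
#   0 6,18 * * *  /path/to/update_cool_payload.sh >> /var/log/cool_payload_update.log 2>&1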