Instructions on how to set up a machine to run the LSC segment database

Introduction

This page documents the setup of a machine which is going to run the segment database and the associated servers (such as LDBDServer and LSCsegFindServer). These instructions cover both Linux and Solaris machines.

System Administration Prerequisites

The following steps need to be performed by the administrator of the machine as the root user:

  1. Obtain a license to use the DB2 software.

    The IBM DB2 database software is available free of charge for non-profit academic use through the IBM Academic Initiative Program. To obtain the software, go to http://www.ibm.com/university and click on "Membership" in the menu on the left of the page, then click on the "Apply now" link at the right of the page, or you can follow this link directly to the registration page. If you do not already have one, you will need to register for a universal IBM ID and password. This is a simple matter of providing your email address and contact details.
    After you register, you can sign up for the Academic Initiative program. Follow the links on the sign-up page or follow this link directly to the Academic Initiative Registration Page. You will be asked to provide some cursory information about your institution, position and intended use of the software by choosing from a sequence of pop-up menus on the application page. Once you agree to the license, you can click submit and IBM will process your request. It says that this may take up to 5 days, but I received my reply within 24 hours.
    When you have received the reply from IBM, you can use your IBM ID to access the software catalog and obtain DB2. One caveat is that IBM requests a short, yearly summary of what you used DB2 for. This appears to be for PR purposes, rather than any kind of proposal-type statement.

  2. Download the IBM DB2 database software.

    Once you have your IBM ID, you may download the DB2 software. Go to http://www.ibm.com/university and follow the link to the IBM Academic Initiative Student Software Catalog. Click on the "Access the Software Catalog" link and enter your ID and password when prompted. Under the section called "WebSphere, DB2, Lotus, and Tivoli" click the "Download now" link and then click the "Enterprise Level" link on the Downloads page. This will list all the DB2 software.
    You will need to download IBM DB2 Universal Database Enterprise Server Edition (ESE) V8.2 for either Solaris or Intel Linux, depending on your platform. The following links should take you directly to the download pages, if you have trouble navigating the web site:


    Download the latest FixPak for the DB2 software from the DB2 support download page. Select either Solaris or Intel Linux from the "Choose your platform" menu and click "Go." Download the latest version of the Regular FixPak. At the time of writing, this is FixPak 10.

  3. Install the IBM Java RPM

    On Linux systems, you will also need to download the Java Developer Kit 1.4.2 from the IBM Java download page. This is not necessary on Solaris machines. Make sure you download the SDK RPM (e.g. IBMJava2-SDK-1.4.2-1.0.i386.rpm). Also, it is necessary to create a link for Java2 1.3.1 due to hard-coded paths in some DB2 utilities. Run the following commands as user root to properly install IBM Java2 1.4.2.
    rpm -ivh IBMJava2-SDK-1.4.2-1.0.i386.rpm
    cd /opt
    ln -s IBMJava2-142 IBMJava2-131
    You can now install DB2.

  4. Install the IBM DB2 Database Software.

    Install the IBM DB2 software by uncompressing the installation tar file you downloaded above (dese82so.zip for Solaris, dese82ui.zip for 32-bit Linux, or dese82u6.zip for 64-bit Linux). Change to the top-level directory of the DB2 installation files and run the command
    ./db2_install
    to start the installer script. When the installer prompts for a product code, enter
    DB2.ESE
    When the installation completes, run the following command to install the license file.
    /opt/IBM/db2/V8.1/adm/db2licm -a ./db2/license/db2ese.lic
    You can now safely remove the installation files.
    To verify that a license is installed successfully, issue the command
    db2licm -l

  5. Install DB2 FixPak.

    Extract the archive, uncompressing as necessary, into a temporary directory such as /tmp. Change into that directory and type
    ./installFixPak -y
    to update the DB2 installation. After the update completes, you can safely remove the extracted files.

  6. Create an account for LDBD.

    Create an account which will run the LDBD services; the account will typically be called ldbd. On the Lab gateway machines, a home directory is created under /usr1 and a data directory is created under /export, i.e.
    [root@ldas-cit]# mkdir /usr1/ldbd
    [root@ldas-cit]# mkdir /export/ldbd
    [root@ldas-cit]# chown ldbd:ldbd /usr1/ldbd /export/ldbd
    The ldbd user should use bash as the login shell.
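    If the ldbd account does not already exist, it can be created before running the mkdir and chown commands above. A minimal sketch for a Linux machine (the groupadd/useradd options shown here are assumptions; follow your site's UID and group policy):
    [root@ldas-cit]# groupadd ldbd
    [root@ldas-cit]# useradd -g ldbd -d /usr1/ldbd -s /bin/bash ldbd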

  7. Create an account for the DB2 administration server.

    Create an account which will run the DB2 administration server; the account will typically be called ldbdadm. On the Lab gateway machines, the home directory is created under /usr1. The ldbdadm user should use bash as the login shell.
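    If the ldbdadm account does not already exist, a minimal sketch for creating it on a Linux machine (the options shown are assumptions; follow your site's UID and group policy):
    [root@ldas-cit]# groupadd ldbdadm
    [root@ldas-cit]# useradd -g ldbdadm -m -d /usr1/ldbdadm -s /bin/bash ldbdadm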

  8. Create a Database Instance for LDBD.

    Create a database instance for LDBD by running the following command as root
    /opt/IBM/db2/V8.1/instance/db2icrt -s ese -u ldbd ldbd
    This will create a database instance that the user ldbd can use.

  9. Create a DB2 Administration Server.

    Create a database administration server by running the following command as root
     /opt/IBM/db2/V8.1/instance/dascrt -u ldbdadm
    This will create an administration server account that can be used for federation and replication.
  10. Increase the Number of Semaphores

    If the database is running on Solaris 10, then the default number of semaphores needs to be increased. Run the commands
    prctl -n project.max-sem-ids -v 4096 -r -i project 3
    prctl -n project.max-shm-ids -v 4096 -r -i project 3
    prctl -n project.max-msg-ids -v 4096 -r -i project 3
    projmod -a -K "project.max-sem-ids=(priv,4096,deny)" default
    projmod -a -K "project.max-shm-ids=(priv,4096,deny)" default
    projmod -a -K "project.max-msg-ids=(priv,4096,deny)" default
    
    This will update the number of semaphores immediately and be persistent after a reboot.
This is all that root needs to do. The rest of the instructions below should be done as the unprivileged user ldbd.

Set up the LDBD Account

First log into the machine as the user ldbd.

  1. On a Lab Solaris gateway machine, you will need to add the following lines to ~/.bash_profile
    LD_LIBRARY_PATH=/ldcg/lib:${LD_LIBRARY_PATH}
    PATH=/ldcg/bin:${PATH}
    export LD_LIBRARY_PATH PATH
    so that programs such as cvs and python are available. Linux Fedora Core systems already have these programs in the path so this step is not necessary.

  2. Add the following lines to the ~/.bash_profile
    if [ -f ${HOME}/sqllib/db2profile ]; then
        . ${HOME}/sqllib/db2profile
    fi
    This script sets up the environment variables needed for DB2.

  3. Log out and log back in to source the profile.

  4. The ldbd user needs to be able to access at least the LDG Client bundle. If you have the LDG Client or Server installed system wide on the machine that will run LDBD, add the lines that source the system LDG setup.sh script to the ldbd user's .bash_profile file. If you do not already have the LDG Client or Server installed, install the LDG Client in the ldbd user's home directory. The following instructions install the Solaris LDG Client. Linux users should follow the instructions on the LDG client install web page. On a Solaris machine follow the instructions below to install the LSC Data Grid client for Solaris in the ldbd user's home directory:
    • Download pacman 3.12.1
    • Unpack it with
      tar zxf pacman-3.12.1.tar.gz
      Set up the shell to use pacman:
      cd pacman-3.12.1
      source setup.sh
      A line saying where pacman is installed should be echoed to the terminal.
    • Create a directory for the install by running the commands:
      cd ~
      mkdir ldg
      cd ldg
      pacman -get http://www.ldas-sw.ligo.caltech.edu/grid/LSC-DataGrid-Client-4.0.1-Solaris:LSC-DataGrid-Client-Solaris.pacman
      If you are asked if you want to trust the cache, answer y to continue.
    • Add the lines
      if [ -f ${HOME}/ldg/setup.sh ]; then
        source ${HOME}/ldg/setup.sh
      fi
      to ~/.bash_profile. There is no need to log out and back in yet; you will do this after the next step.

  5. Install the software required to run LDBD services. Create a directory for this software and run pacman to obtain it:
    mkdir ~/ldbddeps
    cd ~/ldbddeps
    
    For 32 bit, download:
    pacman -get http://www.ldas-sw.ligo.caltech.edu/grid/LDBD-Dependencies-1.0:LDBD-Dependencies.pacman
    For 64 bit, download:
    pacman -get http://www.gravity.phy.syr.edu/grid/LDBD-Dependencies-1.0:LDBD-Dependencies.pacman
    If you are asked if you want to trust the cache, answer y to continue.

  6. Once the install has finished, add the lines
    if [ -f ${HOME}/ldbddeps/setup.sh ]; then
      source ${HOME}/ldbddeps/setup.sh
    fi
    to the ldbd user's .bash_profile script and log out and log back in to make sure your environment is correctly set up to find the LDG Client and the LDBD dependencies. To check the environment is properly set up, run the commands:
    python -c "from pyGlobus import io"
    python -c "import mx.ODBC.DB2 as mxdb"
    Both commands should complete without errors.

  7. Set up glue. On a Linux machine with LSCSoft installed, you may use the installation in /opt/lscsoft. On a lab gateway Solaris machine, you may use the installation in /ldcg by adding the lines
    GLUE_LOCATION=/ldcg/glue
    export GLUE_LOCATION
    if [ -f ${GLUE_LOCATION}/etc/glue-user-env.sh ] ; then
      source ${GLUE_LOCATION}/etc/glue-user-env.sh
    fi
    to the file ~/.bash_profile and log out and log back in.

    If you do not have glue installed on the machine, you may check it out from CVS and use it from there by running the commands
    mkdir -p ~/src
    cd ~/src
    cvs -d :pserver:anonymous@gravity.phys.uwm.edu:2402/usr/local/cvs/lscsoft co glue
    cd glue
    python setup.py install --home=${HOME}/glue
    You should then set GLUE_LOCATION to ${HOME}/glue in your .bash_profile.
  8. Create data directories for the servers:
    mkdir -p /export/ldbd/var/log
    mkdir -p /export/ldbd/var/run
    mkdir -p /export/ldbd/etc/grid-security/ldbdserver
    mkdir -p /export/ldbd/etc/grid-security/lscsegfindserver

    Create symbolic links in /export/ldbd/etc/grid-security to /usr1/ldbd/ldg-4.0/globus/share/certificates/*.conf.* files. For example:
    cd /export/ldbd/etc/grid-security
    ln -s /usr1/ldbd/ldg-4.0/globus/share/certificates/globus-host-ssl.conf.1c3f2ca8 globus-host-ssl.conf
    ln -s /usr1/ldbd/ldg-4.0/globus/share/certificates/globus-user-ssl.conf.1c3f2ca8 globus-user-ssl.conf
    ln -s /usr1/ldbd/ldg-4.0/globus/share/certificates/grid-security.conf.1c3f2ca8 grid-security.conf
    
    Without these symbolic links the next step will not work.
  9. Request service certificates for the servers by running the following two commands. Replace ldas-cit.ligo.caltech.edu with the FQDN of the host you are installing on:
    grid-cert-request -host ldas-cit.ligo.caltech.edu -dir /export/ldbd/etc/grid-security/ldbdserver -ca 1c3f2ca8 -service ldbd
    grid-cert-request -host ldas-cit.ligo.caltech.edu -dir /export/ldbd/etc/grid-security/lscsegfindserver -ca 1c3f2ca8 -service lscsegfind
    This will create four files called
    /export/ldbd/etc/grid-security/ldbdserver/ldbdcert_request.pem
    /export/ldbd/etc/grid-security/ldbdserver/ldbdkey.pem
    /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindcert_request.pem
    /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindkey.pem
    Mail the two files named ldbdcert_request.pem and lscsegfindcert_request.pem to Phil Ehrens (or your CA with responsibility for signing host and service certificates). He will mail you back two files called ldbdcert.pem and lscsegfindcert.pem which should be copied to
    /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem
    and
    /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindcert.pem
    If there are existing certificate files in these directories, overwrite them with the new ones.
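    Optionally, you can inspect the signed certificates with openssl to confirm that the subject and validity dates look correct before proceeding:
    openssl x509 -in /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem -noout -subject -dates
    openssl x509 -in /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindcert.pem -noout -subject -dates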

  10. Make sure the permissions on the grid certificates and keys are correct by running
    chmod 644 /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem
    chmod 644 /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindcert.pem
    chmod 600 /export/ldbd/etc/grid-security/ldbdserver/ldbdkey.pem
    chmod 600 /export/ldbd/etc/grid-security/lscsegfindserver/lscsegfindkey.pem

Set Up and Create the Database

  1. Run the following commands to set up and start DB2:
    db2set DB2COMM=TCPIP
    db2set DB2AUTOSTART=YES
    db2 update database manager configuration using svcename 50002
    db2 update database manager configuration using diaglevel 4   
    db2 terminate
    db2start
    All commands should complete without errors, and after the last command completes you should see a message similar to
    10/03/2005 19:23:37     0   0   SQL1063N  DB2START processing was successful.

  2. Create directories to hold the databases and backups:
    mkdir -p /export/ldbd/var/db2
    mkdir -p /export/ldbd/var/db2/backup

  3. Create a database. The segment database should be called seg_lho at Hanford, seg_llo at Livingston, and so on. Additional databases may be created, if desired. All commands here are shown for database creation at Hanford. For other sites, replace lho with the appropriate letters. To create the segment database, run the command
    db2 create database seg_lho on /export/ldbd/var/db2 alias seg_lho
    This command may take a while; on successful completion it will print the message
    DB20000I  The CREATE DATABASE command completed successfully.
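    Optionally, you can confirm that the database was created and cataloged by listing the local database directory; the output should contain an entry for the SEG_LHO alias:
    db2 list database directory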
  4. Obtain the SQL which defines the database tables. The SQL is stored in the glue CVS, but is not installed when glue is installed. If you have checked out your own copy of glue, as above, you will have the tables. Otherwise, you will need to obtain the glue source code to get the table definitions. If you do not already have the glue source, check it out of CVS by running
    mkdir -p ~/src
    cd ~/src
    cvs -d :pserver:anonymous@gravity.phys.uwm.edu:2402/usr/local/cvs/lscsoft co glue
    cd glue
    The next two steps should be run in the directory containing the DB2 SQL tables, so
    cd ~/src/glue/src/conf/db2
    to change into this directory.

  5. Each database site must have a unique creator_db number to ensure that federation and replication work correctly. Run the perl script change_creator_db.pl to change the creator id to the correct number listed in the file creator_db_table.txt. The possible sites are llo, lho, cit, dev, test, mit, uwm and psu. For example, to set the correct id for Hanford, run the script with the argument lho:
    ./change_creator_db.pl lho

  6. Create the database tables by running the create_base script in glue
    ./create_base seg_lho
    During the setup of the tables, you will see several messages that look like
    DB21034E  The command was processed as an SQL statement because it was not a 
    valid Command Line Processor command.  During SQL processing it returned:
    SQL0556N  An attempt to revoke a privilege from "PUBLIC" was denied because 
    "PUBLIC" does not hold this privilege.  SQLSTATE=42504
    These are a result of revoking grant permissions on newly created tables and can be safely ignored.

  7. Configure the database for online backups. To enable online backups, the LOGRETAIN and USEREXIT configuration settings must be set to RECOVERY and ON. Run the commands below for the database at your site (the blocks for all three primary databases are shown):
    db2 force application all
    db2 update db cfg for seg_lho using logretain recovery
    db2 update db cfg for seg_lho using userexit on
    db2 backup database seg_lho to /export/ldbd/var/db2/backup with 2 buffers buffer 1024
    db2 force application all
    db2 update db cfg for seg_llo using logretain recovery
    db2 update db cfg for seg_llo using userexit on
    db2 backup database seg_llo to /export/ldbd/var/db2/backup with 2 buffers buffer 1024
    db2 force application all
    db2 update db cfg for seg_cit using logretain recovery
    db2 update db cfg for seg_cit using userexit on
    db2 backup database seg_cit to /export/ldbd/var/db2/backup with 2 buffers buffer 1024
    This tells DB2 to allow users to have full access to a database while the backup is running.
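    Optionally, you can confirm that archive logging is enabled and that the offline backup was recorded (shown for the Hanford database; adjust the database name for your site):
    db2 get db cfg for seg_lho | grep -i logretain
    db2 list history backup all for seg_lho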

Set Up Q Replication for Observatory and Tier 1 Sites

The Hanford, Livingston and Caltech databases should be configured for low-latency peer-to-peer replication. This is achieved by setting up IBM Q Replication between the databases. Data published into one of the three databases is then replicated to the others via WebSphere Message Queues. The following steps describe how this is configured. Note that this configuration is only necessary for the three primary databases. If replication to Tier 2 centers is desired, they should be configured for read-only replication from the Caltech database.

This part of the instructions has only been tested on Solaris, since the peer-to-peer databases run on the gateway machines at Caltech and the observatories.

  1. Install IBM WebSphere MQ software.

    WebSphere MQ provides an award-winning messaging backbone for deploying an enterprise service bus as the connectivity layer of a service-oriented architecture. Apparently. You need to be root to install the MQM package.
    1. Download the following packages from IBM:
    2. Create a user called mqm and a group mqm. Add the user ldbd to the mqm group.
    3. Unzip the Global Security Kit 6 package gs6bas.zip, cd into the resulting directory and install the GSK with
      pkgadd -d .
    4. Unzip the base MQ 5.3 package named mqs531so.zip and change into the resulting directory. Read the license file and install by typing
      ./mqlicence.sh
      pkgadd -d .
      When prompted, enter the number of the MQM package (2). Of the options presented, install packages 1,2,3,4. Say no when prompted for DCE and yes to all the other questions the installer asks.
    5. Determine the number of processors on the machine with
      /usr/sbin/psrinfo
      count the number of lines (processors) and run the following commands, changing 4 to the number of processors on your machine.
      cd /opt/mqm/bin
      ./setmqcap 4
    6. Install the FixPak. Extract the file U802142.gskit.tar.Z, change into the resulting directory and run
      pkgadd -d mqm-U802142.img
      Choose option (2) for MQM and answer yes to all the questions the installer asks.

  2. Configure the message queues on each of the three servers.

    The code below should be executed at each of the three sites. It performs the following steps:
    • Create a message queue manager.
    • Convert the PEM encoded ldbd service certificate and key into a PKCS12 format file which can be imported into Web Sphere MQ.
    • Create a key database containing the DOE grid root certificates and the service certificate. This will be used by WebSphere MQ to authenticate to its peers.
    • Start the message queue manager.
    • Create the necessary local queues, remote queues and channels. The channels are configured to use SSL encryption and peer authentication and authorization.
    • Start the message queue listener process.
    • Start the message queue channels.
    To accomplish these tasks, the following code should be run at each of the sites. The password string xxxxxxxx should be replaced with the desired key database password. This does not have to be particularly secure.
    1. On the machine ldas.ligo-wa.caltech.edu run the commands
      export JAVA_HOME=/opt/mqm/ssl
      crtmqm QM1
      openssl pkcs12 -export -out ~/ibmwebspheremqqm1.p12 -name "ibmwebspheremqqm1" -inkey /export/ldbd/etc/grid-security/ldbdserver/ldbdkey.pem -in /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem
      gsk6cmd -keydb -create -db /var/mqm/qmgrs/QM1/ssl/key.kdb -pw xxxxxxxx -type cms -expire 365 -stash
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM1/ssl/key.kdb -pw xxxxxxxx -label "ESnet Root CA 1" -file ~/ldg/globus/share/certificates/d1b603c3.0 -format ascii
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM1/ssl/key.kdb -pw xxxxxxxx -label "DOEgrids CA 1" -file ~/ldg/globus/share/certificates/1c3f2ca8.0 -format ascii
      gsk6cmd -cert -import -file ~/ibmwebspheremqqm1.p12 -pw xxxxxxxx -type pkcs12 -target /var/mqm/qmgrs/QM1/ssl/key.kdb -target_pw xxxxxxxx -target_type cms
      strmqm QM1
      runmqsc QM1 < ~/src/glue/src/conf/q_replication/lho/mq_setup_lho.mqs
      runmqlsr -t tcp -m QM1 &> /export/ldbd/var/log/mqlsr.out </dev/null &
      runmqsc QM1 << EOF
      start channel (QM1_TO_QM2)
      start channel (QM1_TO_QM3)
      end
      EOF
    2. On the machine ldas.ligo-la.caltech.edu run the commands
      export JAVA_HOME=/opt/mqm/ssl
      crtmqm QM2
      openssl pkcs12 -export -out ~/ibmwebspheremqqm2.p12 -name "ibmwebspheremqqm2" -inkey /export/ldbd/etc/grid-security/ldbdserver/ldbdkey.pem -in /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem
      gsk6cmd -keydb -create -db /var/mqm/qmgrs/QM2/ssl/key.kdb -pw xxxxxxxx -type cms -expire 365 -stash
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM2/ssl/key.kdb -pw xxxxxxxx -label "ESnet Root CA 1" -file ~/ldg/globus/share/certificates/d1b603c3.0 -format ascii
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM2/ssl/key.kdb -pw xxxxxxxx -label "DOEgrids CA 1" -file ~/ldg/globus/share/certificates/1c3f2ca8.0 -format ascii
      gsk6cmd -cert -import -file ~/ibmwebspheremqqm2.p12 -pw xxxxxxxx -type pkcs12 -target /var/mqm/qmgrs/QM2/ssl/key.kdb -target_pw xxxxxxxx -target_type cms
      strmqm QM2
      runmqsc QM2 < ~/src/glue/src/conf/q_replication/llo/mq_setup_llo.mqs
      runmqlsr -t tcp -m QM2 &> /export/ldbd/var/log/mqlsr.out </dev/null &
      runmqsc QM2 << EOF
      start channel (QM2_TO_QM1)
      start channel (QM2_TO_QM3)
      end
      EOF
    3. On the machine ldas-cit.ligo.caltech.edu run the commands
      export JAVA_HOME=/opt/mqm/ssl
      crtmqm QM3
      openssl pkcs12 -export -out ~/ibmwebspheremqqm3.p12 -name "ibmwebspheremqqm3" -inkey /export/ldbd/etc/grid-security/ldbdserver/ldbdkey.pem -in /export/ldbd/etc/grid-security/ldbdserver/ldbdcert.pem
      gsk6cmd -keydb -create -db /var/mqm/qmgrs/QM3/ssl/key.kdb -pw xxxxxxxx -type cms -expire 365 -stash
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM3/ssl/key.kdb -pw xxxxxxxx -label "ESnet Root CA 1" -file ~/ldg/globus/share/certificates/d1b603c3.0 -format ascii
      gsk6cmd -cert -add -db /var/mqm/qmgrs/QM3/ssl/key.kdb -pw xxxxxxxx -label "DOEgrids CA 1" -file ~/ldg/globus/share/certificates/1c3f2ca8.0 -format ascii
      gsk6cmd -cert -import -file ~/ibmwebspheremqqm3.p12 -pw xxxxxxxx -type pkcs12 -target /var/mqm/qmgrs/QM3/ssl/key.kdb -target_pw xxxxxxxx -target_type cms
      strmqm QM3
      runmqsc QM3 < ~/src/glue/src/conf/q_replication/cit/mq_setup_cit.mqs
      runmqlsr -t tcp -m QM3 &> /export/ldbd/var/log/mqlsr.out </dev/null &
      runmqsc QM3 << EOF
      start channel (QM3_TO_QM1)
      start channel (QM3_TO_QM2)
      end
      EOF
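    After the channels have been started at all three sites, you can optionally check their status from any of the queue managers with an MQSC display command; once the remote ends are up, the channels should report a RUNNING status. For example, on the QM1 host:
      runmqsc QM1 << EOF
      display chstatus(*)
      end
      EOF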

  3. Configure the Q Capture and Apply programs for the databases
    1. Make directories for the capture and apply log files. On each server run the commands:
      mkdir /export/ldbd/var/db2/apply
      mkdir /export/ldbd/var/db2/capture
      
      to create directories where the capture and apply programs will store their log files.
    2. Configure the Capture and Apply Control Tables and the Q Subscriptions. The following SQL scripts are provided in the glue CVS under the path src/conf/q_replication/lho and so on. They create the control tables for replication between sites and determine which data tables should be replicated. Run the following commands to create and populate the tables:
      1. ldas.ligo-wa.caltech.edu
        cd ~/src/glue/src/conf/q_replication/lho
        db2 connect to seg_lho
        db2 -tf cap_ctrl_lho_1.sql
        db2 -tf app_ctrl_lho_1.sql
        db2 -tf map_cit_to_lho_2.sql
        db2 -tf map_lho_to_cit_1.sql
        db2 -tf map_lho_to_llo_1.sql
        db2 -tf map_llo_to_lho_2.sql
        db2 -td# -f create_q_subs_1.sql
        db2 connect reset
        
      2. ldas.ligo-la.caltech.edu
        cd ~/src/glue/src/conf/q_replication/llo
        db2 connect to seg_llo
        db2 -tf cap_ctrl_llo_1.sql
        db2 -tf app_ctrl_llo_1.sql
        db2 -tf map_cit_to_llo_2.sql
        db2 -tf map_lho_to_llo_2.sql
        db2 -tf map_llo_to_cit_1.sql
        db2 -tf map_llo_to_lho_1.sql
        db2 -td# -f create_q_subs_2.sql
        db2 connect reset
        
      3. ldas-cit.ligo.caltech.edu
        cd ~/src/glue/src/conf/q_replication/cit
        db2 connect to seg_cit
        db2 -tf cap_ctrl_cit_1.sql 
        db2 -tf app_ctrl_cit_1.sql 
        db2 -tf map_cit_to_lho_1.sql
        db2 -tf map_cit_to_llo_1.sql
        db2 -tf map_lho_to_cit_2.sql
        db2 -tf map_llo_to_cit_2.sql
        db2 -td# -f create_q_subs_3.sql
        db2 connect reset
        
  4. Start the Capture and Apply Processes on the servers

    Run the following commands:
    1. ldas.ligo-wa.caltech.edu
      export CAP_PATH=/export/ldbd/var/db2/capture
      export APP_PATH=/export/ldbd/var/db2/apply
      asnqcap capture_server=seg_lho capture_schema=ASN CAPTURE_PATH=${CAP_PATH} 1>> ${CAP_PATH}/asnqcap.stdout 2>> ${CAP_PATH}/asnqcap.stderr < /dev/null &
      asnqapp apply_server=seg_lho apply_schema=ASN APPLY_PATH=${APP_PATH} 1>> ${APP_PATH}/asnqapp.stdout 2>> ${APP_PATH}/asnqapp.stderr < /dev/null &
      
    2. ldas.ligo-la.caltech.edu
      export CAP_PATH=/export/ldbd/var/db2/capture
      export APP_PATH=/export/ldbd/var/db2/apply
      asnqcap capture_server=seg_llo capture_schema=ASN CAPTURE_PATH=/export/ldbd/var/db2/capture 1>> ${CAP_PATH}/asnqcap.stdout 2>> ${CAP_PATH}/asnqcap.stderr < /dev/null &
      asnqapp apply_server=seg_llo apply_schema=ASN APPLY_PATH=/export/ldbd/var/db2/apply 1>> ${APP_PATH}/asnqapp.stdout 2>> ${APP_PATH}/asnqapp.stderr < /dev/null &
      
    3. ldas-cit.ligo.caltech.edu
      export CAP_PATH=/export/ldbd/var/db2/capture
      export APP_PATH=/export/ldbd/var/db2/apply
      asnqcap capture_server=seg_cit capture_schema=ASN CAPTURE_PATH=/export/ldbd/var/db2/capture 1>> ${CAP_PATH}/asnqcap.stdout 2>> ${CAP_PATH}/asnqcap.stderr < /dev/null &
      asnqapp apply_server=seg_cit apply_schema=ASN APPLY_PATH=/export/ldbd/var/db2/apply 1>> ${APP_PATH}/asnqapp.stdout 2>> ${APP_PATH}/asnqapp.stderr < /dev/null &
      

  5. Send the Start signal to Q replication

    Finally, start the replication processes by running the following commands at LHO, not LLO or CIT. This triggers the replication to start at all the databases.
    cd ~/src/glue/src/conf/q_replication
    db2 connect to seg_lho
    db2 -tf start_1.sql
    db2 -tf loaddone_1.sql
    db2 -tf loaddone_2.sql
    db2 -tf start_2.sql
    db2 connect reset
    

You should check the apply and capture stdout files for error messages, but everything should now be running.
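For example, a rough check is to search the capture and apply output files (the file names follow the redirections used in the previous step) for error strings:
    grep -i error /export/ldbd/var/db2/capture/asnqcap.std*
    grep -i error /export/ldbd/var/db2/apply/asnqapp.std*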

Configure the LDBD and LSCsegFind Servers

  1. In the grid-security directories for each server create a file called grid-mapfile which contains the DN of each user allowed to access the server. Each DN should appear on a separate line in the file. The full path to each file should be:
    /export/ldbd/etc/grid-security/ldbdserver/grid-mapfile
    /export/ldbd/etc/grid-security/lscsegfindserver/grid-mapfile
    If you need to add more users to the grid-mapfile for each server, simply add the extra lines to the file. There is no need to restart the server. Note: any user in the ldbdserver grid-mapfile has write access to the database; users in the lscsegfindserver grid-mapfile only have read access.
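    As an illustration, a grid-mapfile for the ldbdserver might look like the following; the DNs shown are placeholders and should be replaced with the actual certificate subjects of your users:
    /DC=org/DC=doegrids/OU=People/CN=Jane Example 123456
    /DC=org/DC=doegrids/OU=People/CN=A N Other 654321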

  2. Copy the default configuration files from glue and edit them so that they are appropriate for your installation paths:
    cp ${GLUE_LOCATION}/etc/ldbdserver.ini /export/ldbd/etc
    cp ${GLUE_LOCATION}/etc/lscsegfindserver.ini /export/ldbd/etc

Start the servers

  1. Start the LDBD server with
    ldbdd -d -c /export/ldbd/etc/ldbdserver.ini

  2. Start the LSCsegFind server with
    ldbdd -d -c /export/ldbd/etc/lscsegfindserver.ini
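    To confirm that both servers are running, you can check for the ldbdd processes and inspect the log files written under /export/ldbd/var/log (the exact log file names are set in the .ini files):
    ps -ef | grep ldbdd
    ls -l /export/ldbd/var/log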

Enable segment publishing

If the segment publishing script is to be run as the user ldbd then no further setup is required. It is common, however, for the segment publishing to be performed by a different user, for example the user grid. If this is the case, the following steps must be followed to allow this publishing user to insert state segments into the database.

  1. If the publishing user name is grid, no further setup of the database is required, as the create_base script run earlier gives the user grid all the necessary permissions. In this case skip to the next step.

    If the publishing user is not called grid then the following commands must be run as the user ldbd to allow the publishing user to connect.
    • Set an environment variable to the name of the publishing user. In this example we use seguser; change this as appropriate for your system:
      export PUBLISHING_USER=seguser
    • Now run the following commands to give the publishing user access to the necessary tables:
      db2 connect to seg_lho
      db2 grant select,insert on table state_segment to user ${PUBLISHING_USER}
      db2 grant select,insert on table segment_lfn_map to user ${PUBLISHING_USER}
      db2 grant select,insert on table segment_def_map to user ${PUBLISHING_USER}
      db2 grant select,insert on table segment to user ${PUBLISHING_USER}
      db2 grant select,insert on table segment_definer to user ${PUBLISHING_USER}
      db2 grant select,insert on table lfn to user ${PUBLISHING_USER}
      db2 grant select,insert,update on table process to user ${PUBLISHING_USER}
      db2 terminate
    • Optionally, you may also wish to revoke the insert privileges from the grid user, which were granted by the create_base script, by running the following commands:
      db2 connect to seg_lho
      db2 revoke insert on table state_segment from user grid
      db2 revoke insert on table segment_lfn_map from user grid
      db2 revoke insert on table segment_def_map from user grid
      db2 revoke insert on table segment from user grid
      db2 revoke insert on table segment_definer from user grid
      db2 revoke insert on table lfn from user grid
      db2 revoke insert,update on table process from user grid
      db2 terminate
      This is the only configuration that needs to be performed as the database user.

  2. The remaining steps should be performed as the segment publishing user, e.g. the user grid.

    First add the following lines to the user's .bash_profile script to set the DB2 path and the database instance name in the user's environment:
    LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/opt/IBM/db2/V8.1/lib
    export LD_LIBRARY_PATH
    PATH=${PATH}:/opt/IBM/db2/V8.1/bin
    export PATH
    DB2INSTANCE=ldbd
    export DB2INSTANCE
    Once this is done, log out and log back in to set up the environment.

  3. The user should install the LDBD Dependencies package, which contains the necessary ODBC libraries for connecting to the database. The instructions are identical to those above for the ldbd user and are repeated here for convenience:
    • Download pacman 3.12.1
    • Unpack it with
      tar zxf pacman-3.12.1.tar.gz
      Set up the shell to use pacman:
      cd pacman-3.12.1
      source setup.sh
      A line saying where pacman is installed should be echoed to the terminal.
    • Create a directory for this software and run pacman to obtain it:
      mkdir ~/ldbddeps
      cd ~/ldbddeps
      pacman -get http://www.ldas-sw.ligo.caltech.edu/grid/LDBD-Dependencies-1.0:LDBD-Dependencies.pacman
      If you are asked if you want to trust the cache, answer y to continue.

    • Once the install has finished, add the lines
      if [ -f ${HOME}/ldbddeps/setup.sh ]; then
        source ${HOME}/ldbddeps/setup.sh
      fi
      to the user's .bash_profile script and log out and log back in to make sure your environment is correctly set up to find the LDBD dependencies.

  4. Create aliases for the tables so the grid user can access them directly, without needing to prefix them with the instance name. Run the following commands, replacing GRID with the publishing user name, if necessary:
    db2 connect to seg_lho
    db2 -x "select 'create alias GRID.'||rtrim(name)||' for '||rtrim(creator)||'.'||ltrim(name)||';' from sysibm.systables where creator = 'LDBD'" > crtalias.sql
    db2 -tvf crtalias.sql
    db2 terminate

  5. Set up the publishing user to use the same version of glue as the ldbd user. Add the following lines to the publishing user's .bash_profile, changing the value of GLUE_LOCATION as appropriate
    GLUE_LOCATION=/ldcg/glue
    export GLUE_LOCATION
    if [ -f ${GLUE_LOCATION}/etc/glue-user-env.sh ] ; then
      source ${GLUE_LOCATION}/etc/glue-user-env.sh
    fi

The segment publishing user is now set up to write state segments into the database.
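As a final sanity check, the publishing user can verify database connectivity and the table aliases with a simple query (shown for the Hanford database; the count will be zero on a freshly created database):
    db2 connect to seg_lho
    db2 "select count(*) from state_segment"
    db2 terminate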

$Id$