Generating the S3 MACHO DAX
- Build and install LALApps by following the LALApps README, which is
available from the LALApps home page. This will require you to
install some prerequisite software and to build LAL; the README explains
both steps. You should build the prerequisite software from source, as this
is the only method that has been validated.
- Assume that you installed LALApps into the directory
LAL_LOCATION. Change directories to
LAL_LOCATION/bin and make sure that in that
directory you have
Also check that in the directory LAL_LOCATION/lib/python you have
- Add the directory LAL_LOCATION/lib/python
to your PYTHONPATH environment variable so that the
modules inspiral.py and pipeline.py can be found.
- Add the directory LAL_LOCATION/bin to your
PATH environment variable so that the binaries
can be found.
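For example, assuming LALApps was installed under $HOME/lalapps (a hypothetical location; substitute your actual LAL_LOCATION), the two environment variables can be set in Bourne-shell syntax like this:

```shell
# Hypothetical install location; replace with your actual LAL_LOCATION.
LAL_LOCATION=$HOME/lalapps

# Prepend the LALApps Python modules and binaries to the search paths.
export PYTHONPATH=$LAL_LOCATION/lib/python:$PYTHONPATH
export PATH=$LAL_LOCATION/bin:$PATH
```

Add these lines to your shell startup file (e.g. ~/.bashrc) if you want them set in every session; csh-family shells use setenv instead.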
- The S3 MACHO search consists of three separate DAX, one for each
interferometer: L1, H1, and H2. Create a directory in which you will
generate the DAX and copy in the parameter files
These files are configuration files for lalapps_inspiral_pipe, the executable that generates the DAX. lalapps_inspiral_pipe will be run three times, once on each configuration file, to generate the three DAX.
- In your working directory make a directory called
cache_files. In that directory put the files
These calibration cache files point to the location of the calibration frames and are parsed during the generation of the DAX/DAG to determine the actual calibration files needed as part of the workflow.
The paths in these calibration cache files are correct on the machines hydra.phys.uwm.edu and contra.phys.uwm.edu. If you want to generate a DAX on your own machine, you will need to download the tarball S3_V02_CAL.tar.gz, which contains the S3 calibration frames, and edit the URLs in the calibration cache files to point to wherever you untar those frames.
Eventually this mechanism will be replaced by calls to LSCdataFind or the like, but for now these files need to exist.
- In your working directory put the three files
These files are listed in the configuration files as the files containing the lists of segments to analyze. This is the data we need to analyze for the S3 search.
- Install the latest version of the LSC DataGrid
The LSC DataGrid Client 3.x is necessary for two reasons. First, during generation of the DAX the program LSCdataFind must be run to determine, using metadata such as GPS times, which logical file names (LFNs) are part of the workflow.
Second, the GriPhyN Virtual Data System (VDS), otherwise known as Pegasus, is included in the LSC DataGrid, and you will need the VDS tools to convert the DAX to a DAG.
IMPORTANT NOTE: You really want version 3.x of the LSC DataGrid Client, which has the latest LSCdataFind and VDS; neither is included in version 2.
- In your environment set the variable
LSC_DATAFIND_SERVER. You can usually use the server
at UWM, dataserver.phys.uwm.edu.
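In Bourne-shell syntax (csh users would use setenv instead), that is:

```shell
# Point LSCdataFind at the UWM data server.
export LSC_DATAFIND_SERVER=dataserver.phys.uwm.edu
```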
- Make sure LSCdataFind is in your PATH. Usually you want to
You should also run LSCdataFind once with the --ping option to make sure you are able to authenticate to the server.
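A quick sanity check, sketched below, first confirms the binary is on your PATH and then pings the server with the --ping option mentioned above (authenticating will also require a valid grid proxy):

```shell
# Confirm LSCdataFind is on PATH before attempting to contact the server.
if command -v LSCdataFind >/dev/null 2>&1; then
    # --ping tests that you can authenticate to $LSC_DATAFIND_SERVER.
    LSCdataFind --ping && status=ok || status=auth-failed
else
    status=missing
fi
echo "LSCdataFind check: $status"
```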
- In your working directory generate the three DAX using the commands:
./lalapps_inspiral_pipe --dax --template-bank --inspiral --config-file l1_s3_macho.ini --log-path /people/skoranda
./lalapps_inspiral_pipe --dax --template-bank --inspiral --config-file h1_s3_macho.ini --log-path /people/skoranda
./lalapps_inspiral_pipe --dax --template-bank --inspiral --config-file h2_s3_macho.ini --log-path /people/skoranda
Replace /people/skoranda with a directory on your system.
This will generate the three DAX files:
which describe the workflow for the S3 MACHO search.