LSC-Virgo Burst Analysis Working Group


Code Changes in S5 BlockNormal Event Generation


Between the S2 and S5 FirstYear data analyses, the BlockNormal event generation has had only one change that affected the selection of events and another that impacted the later coincidence processing. The remaining changes were implementation improvements, making the code more flexible, faster, and able to scale to the much larger S5 data set. These changes are detailed below.

A narrative of the S2 code implementation was prepared for the S2 review (S2_BNETG_Implementation.pdf). A narrative of the S5 FirstYear code implementation has now been prepared for the S5 review (S5_BNETG_Implementation.pdf).

The Penn State group uses two revision control systems. A local Subversion repository (PSUBurst) is used for development and testing. The code is published to the 'matapps' CVS repository for use in LSC data analysis. This change document covers the BN_2007-07-09 code release. The archived data-conditioning code can be viewed through this web interface.

Changes in calculations

Relative excess power for event thresholding

For the S3 analysis, we changed from thresholding separately on the mean or variance of each block (function evtsigma.m) to a combined quantity termed 'relative excess power' (function evtpwrsigma.m). If a block of time-series data has mean μ and variance ν, and the background is characterized by mean μ₀ and variance ν₀, then the relative excess power (PE) is defined as

PE = [(μ − μ₀)² + (ν − ν₀)] / (μ₀² + ν₀)

This is similar to (χ²/N) − 1 for a block of N samples. A Boolean switch was added to evnts.m to select between the old and new thresholding. Since then, we have only used the new thresholding.

Changed routines: evnts.m

New routines: evtpwrsigma.m, evtsigma.m
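The relative-excess-power statistic above can be sketched as follows. This is an illustrative Python rendering, not the actual evtpwrsigma.m MATLAB code; the function name and interface are hypothetical.

```python
import numpy as np

def relative_excess_power(block, mu0, nu0):
    """Relative excess power PE of a data block against the background.

    block     : 1-D array of time-series samples
    mu0, nu0  : background mean and variance

    PE = [(mu - mu0)^2 + (nu - nu0)] / (mu0^2 + nu0)
    (Hypothetical sketch of the statistic computed in evtpwrsigma.m.)
    """
    mu = np.mean(block)
    nu = np.var(block, ddof=1)  # sample variance of the block
    return ((mu - mu0) ** 2 + (nu - nu0)) / (mu0 ** 2 + nu0)
```

A block whose mean and variance match the background gives PE = 0; thresholding on PE therefore selects blocks with excess power in either the mean or the variance.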

Variance-weight time centroid

To allow us to tighten our coincidence windows (and improve detection efficiency on long-duration bursts), during S5 we implemented a variance-weighted time centroid. The implementation and testing of this was detailed on the S5 Coincidence Window Selection web page. Assuming tᵢ represents the time of each of N samples and yᵢ is the whitened DARM_ERR signal for each sample, we defined

τ(0) = Σ tᵢ / N
τ(1) = Σ tᵢ yᵢ / Σ yᵢ = Σ tᵢ yᵢ / (N μ)
τ(2) = Σ tᵢ (yᵢ − μ)² / Σ (yᵢ − μ)² = Σ tᵢ (yᵢ − μ)² / ((N − 1) ν)

This is similar to the hrss-weighted centroid used for calibrated signals, but is applicable to the ADC output, which can have large offsets. We use the τ(2) value as our new event centroid. This change had no effect on the event generation and candidate selection. It was designed to be compatible with the existing event file format, using fields that were no longer in use.

Changed routines: cp, evnts, wrevnts, readEvents
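As a concrete illustration, the three centroids above could be computed as follows. This is a Python sketch of the definitions (the production code is MATLAB, and the function name here is hypothetical).

```python
import numpy as np

def time_centroids(t, y):
    """Compute the three time centroids of a block.

    t : sample times t_i
    y : whitened DARM_ERR samples y_i

    Returns (tau0, tau1, tau2); tau2 is the variance-weighted centroid
    adopted as the new event centroid. (Illustrative sketch only.)
    """
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    mu = np.mean(y)
    tau0 = np.sum(t) / len(t)                 # unweighted centroid
    tau1 = np.sum(t * y) / np.sum(y)          # amplitude-weighted centroid
    w = (y - mu) ** 2                         # variance weights (y_i - mu)^2
    tau2 = np.sum(t * w) / np.sum(w)          # variance-weighted centroid
    return tau0, tau1, tau2
```

Because the weights (yᵢ − μ)² subtract the block mean, τ(2) is insensitive to a large DC offset in the ADC output, which is why it is usable on raw DARM_ERR where an hrss-style amplitude weighting is not.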

Changes in implementation

CPU time reduction in refine step

During the S4 BlockNormal analysis, the average CPU time for event generation was found to be much higher than in the S2 and S3 runs. An investigation detailed on the BNETG CPU Usage web page traced the cause to the longer duration of stationary segments (itself a sign of better noise stationarity): the additional CPU time scaled quadratically with the duration of the stationary segment. This dependence was isolated to the change-point refinement step. As detailed on the Refine Fixes web page, the refine code was re-written to avoid continually resizing the change-point array; instead, change-points to be discarded are simply tagged and removed in a single pass at the end. This restored a linear dependence of CPU time on the length of the stationary segment. The code change did not alter which change-points were discarded.

Changed routines: refine
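The tag-then-remove pattern described above can be sketched as follows. This is a schematic Python stand-in for the rewritten refine logic, not the actual refine.m code; the function and parameter names are hypothetical.

```python
def refine_changepoints(changepoints, keep_test):
    """Discard failing change-points without repeatedly resizing the array.

    Rather than deleting each rejected change-point as it is found
    (which costs O(n) per deletion, O(n^2) overall), every change-point
    is first tagged keep/discard, then the rejected ones are removed in
    a single final pass, for O(n) total cost.
    (Schematic illustration of the refine rewrite described above.)
    """
    discard = [not keep_test(cp) for cp in changepoints]      # tagging pass
    return [cp for cp, d in zip(changepoints, discard) if not d]  # one removal pass
```

The key property, as noted above, is that the set of surviving change-points is identical to the old implementation's; only the cost of removing the rejects changes.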

Configuration flags for processing

To support detailed control of the data conditioning and event generation, all the parameters were moved to the BNETG configuration file. The event-generation code uses the rdbnetgconfig function to read in the MATLAB binary file of structures and to provide defaults for all the flags. The state of the flags and parameters for each processing step (data conditioning, epoch creation, stationary-segment determination and change-point analysis) is written to the log. The stationary-segment and change-point routine interfaces were changed to pass in these parameters.

Changed routines: BNETG, rdbnetgconfig, wrssegs

New routines: prbnconfig, prssconfig, prcpconfig
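The defaults-plus-overrides behaviour of rdbnetgconfig can be sketched as a simple merge. This Python fragment is illustrative only; the real code reads a MATLAB binary file of structures, and the parameter names shown are invented for the example.

```python
def load_bnetg_config(user_config, defaults):
    """Merge user-supplied BNETG parameters over a full set of defaults.

    Mirrors the role of rdbnetgconfig: every flag and parameter receives
    a default value, so downstream processing steps never encounter a
    missing setting. (Python sketch; keys are hypothetical.)
    """
    config = dict(defaults)     # start from complete defaults
    config.update(user_config)  # user settings override defaults
    return config
```

Logging the resulting merged configuration for each processing step, as the text describes, then gives a complete record of the parameters actually used in a run.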

Lock-indexed sub-directories for event, epoch and stat-segment files

The long duration of the S5 run results in over 500 lock segments during event generation. Since files are created for each of three IFOs and each of nine frequency bands, the number of epoch, stationary-segment and event files is too large for single directories. We implemented a lock-indexed sub-directory scheme for these files. A group of 'set' routines creates filenames and sub-directories as needed; a corresponding group of 'get' routines checks for lock-indexed sub-directories and returns the path to the file.

Changed routines: BNETG, rdepochs, wrssegs

New routines: setepochfile, getepochfile, setssfile, getssfile, seteventfile, geteventfile
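A 'set'-style routine in this scheme might look like the following. The directory layout and filename pattern here are illustrative assumptions, not the actual BlockNormal convention used by seteventfile/geteventfile.

```python
import os

def set_event_file(base_dir, lock_index, ifo, band, create=True):
    """Build (and optionally create) a lock-indexed path for an event file.

    Files are grouped into one sub-directory per lock segment so that no
    single directory has to hold every (IFO x band x lock) combination.
    (Hypothetical layout; the real naming convention may differ.)
    """
    subdir = os.path.join(base_dir, "lock%04d" % lock_index)
    if create:
        os.makedirs(subdir, exist_ok=True)  # 'set' routines create as needed
    return os.path.join(subdir, "events_%s_band%d.mat" % (ifo, band))
```

A matching 'get' routine would perform the same path construction without the makedirs call, checking for the sub-directory's existence instead.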

Termination string and validation of event files

For robust pipeline operation, each step must validate the successful completion of the previous step. To accommodate this, a standard comment string was added to the end of every event file (it is not written if a job aborts). A valideventfile function was added to check for this termination string in an event file.

Changed routines: BNETG, calcLiveTime

New routines: valideventfile, checkeventfiles
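The validation check can be sketched as follows. This is an illustrative Python version of what valideventfile does; the actual termination string used in the event files is not given in this document, so the one below is an invented placeholder.

```python
# Placeholder terminator; the real comment string in BlockNormal event
# files is not specified here and will differ.
TERMINATION_STRING = "% END OF EVENT FILE"

def valid_event_file(path):
    """Return True if the event file ends with the termination comment.

    A file lacking the terminator is treated as the product of an
    aborted job and rejected by downstream pipeline steps.
    (Sketch of valideventfile's role, not its actual implementation.)
    """
    try:
        with open(path) as f:
            lines = [line.rstrip("\n") for line in f if line.strip()]
    except OSError:
        return False  # unreadable or missing file is invalid
    return bool(lines) and lines[-1] == TERMINATION_STRING
```

Because the terminator is only written on normal completion, a single end-of-file check is enough for each step to confirm its predecessor finished cleanly.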

$Id: BNETG_Code_Changes.html,v 1.3 2007/11/27 22:00:45 kathorne Exp $