LSC-Virgo Burst Analysis Working Group

S5 X-Pipeline Review Summary

REVIEW PROCEDURE

  1. CODE REVIEW

    For the code walk-through, we will use m2html to set up web pages similar to those Keith has used for the BurstMDC review. (The X-Pipeline Trac is very good for looking at code, but it does not supply clickable links from each function to the other functions it calls, the way m2html does.) A minimal m2html invocation is sketched after this procedure list.

    The obvious order is to follow the technical document, reviewing the core MATLAB functions first:

     xdetection.m
     xcondition.m
     xtimefrequencymap.m
     xclustertfmap.m
     xsupercluster.m
    
    We would then study the scripts that set up the MATLAB jobs and do the post-processing:
     grb.py
     xmakegrbwebpage.m
    
    Of course, there are many other functions called by these, and we'll want to look at them too; the above just seem the obvious places to start.

  2. TESTS

    The code review will undoubtedly turn up some issues or dodgy-looking code that will require testing. (We've had a few suggestions already.) Jones and Sutton will try to do these tests as they arise, so that by the time we finish the code walk-through most of the tests are already done.

  3. GRB SEARCH RESULTS

    We'll want to study the results of the ~140 GRBs analysed by X-Pipeline. (We will probably have already looked at a few GRBs in parallel with the code walk-through, to help us understand better what the code is doing.) This will probably turn up more questions and requests for tests. (Jones has already set up the S5 report page.)

  4. CLEAN UP / OTHER STUFF

    The review of code and results will likely lead to requests for changes and bug fixes. We'll need to re-check any code that has been changed, and the new results it produces.
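
As a footnote to step 1: generating the cross-linked walk-through pages with m2html might look roughly like the following. This is a sketch only; the source directory 'xpipeline/share' is a hypothetical path, and the options should be adapted to the actual X-Pipeline checkout.

      % Sketch: build cross-linked HTML pages for the code walk-through.
      % The directory 'xpipeline/share' is hypothetical; point 'mfiles'
      % at the actual X-Pipeline source checkout.
      addpath('m2html');                      % put the m2html toolbox on the path
      m2html('mfiles',    'xpipeline/share', ...
             'htmldir',   'xpipeline-doc',  ...
             'recursive', 'on',             ...  % descend into subdirectories
             'global',    'on');                 % cross-link functions across directories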

REVIEW STATUS - "Major" Issues

The review has turned up the following issues that require additional study or discussion:

  1. [PENDING] Data conditioning. A bug in the PSD estimation was found and fixed, and various changes were made to the code in the course of investigating the bug. We need to review and test the new xcondition.m.
    • Response: Gareth posted a study of the performance of xcondition for whitening data (elog page). This showed no problems with the whitening. The significant bug fixes and changes to xcondition affect the PSD estimation, not the whitening, so we do not expect the whitening performance to change. Nevertheless, Gareth will rerun the study with the updated xcondition code. (A schematic example of the whitening operation is given after this list.)
  2. [RESOLVED] Overwhitening of data ... probably too significant a change for this search.
    • Response: Agreed!
  3. [RESOLVED] DQ flags issue entangled with injection issue (into on-source or off-source).
    • Response: Prefer to use on-source injections -- no tricky accounting of deadtimes, and no changes needed to post-processing.
    • Resolution: Final tuning and box-opening based on on-source injections. Veto deadtime accounted for "automatically" by efficiencies not going to 100%.
  4. [RESOLVED] Change code to compute plus-energy rather than total-energy for H1H2.
    • Response: Bug fix checked in and H1H2 GRB results regenerated (twiki).
  5. [RESOLVED] Investigate the effect of calibration errors on the search.
    • Response: A comprehensive investigation was done; see the burst twiki here and here. The end results are that (a) we will quote hrss limits from linear interpolation, rather than using the FlareFit interpolation, and (b) the SGC 150/1000 Hz limits will be rescaled by 1.10/1.25 to account for calibration, Monte Carlo, and other sources of error.
  6. [JOLIEN?] Quoting Jolien: "I'd like to understand the whole analysis from a bird's-eye perspective. I'd like to know exactly what you're using the xpipeline to compute, how you interpret results, what kinds of upper limits you're planning to set, how you are dealing with systematic errors, if you plan to do anything beyond per-GRB statistics (e.g., do you intend to do a population study?) etc."
    • Response: The analysis is as detailed in the technical document (PDF) and the review readiness presentation (PDF). Here are the changes and specific choices made since the review document and technical document were written:
      1. We select a "detection threshold" of 0.95 (this is in the review readiness presentation); i.e., the loudest on-source event must be louder than 95% of the background loudest events in order for that GRB to be subject to follow-up analysis with the detection checklist.
      2. Coherent consistency cut: The median-tracking cut is used for all GRBs, tuned as described in the document. For H1-H2-only GRBs an additional linear ratio cut, Inull/Enull ≥ 1.2, is also applied; this extra cut was found to be effective at killing very loud glitches, and safe for injections. For other networks and likelihood types no fixed ratio cut is used, because it was found not to be safe in general and did not improve the expected limits at either the 50th or 95th percentile of the background. (A schematic version of the ratio cut is given after this list.)
      3. Tuning: In preliminary studies, the consistency-test thresholds were tuned using the median loudest surviving background event in the tuning set as the dummy on-source event. This gives the optimal tuning in the sense of the best expected upper limit. We have since switched to using the 95th-percentile loudest surviving event as the dummy on-source event; this means tuning the cuts to kill as much of the tail of loud events as possible. This will give somewhat worse upper limits on average, but should give a better detection probability. See this page for more details.
      4. Upper limit statement: It was proposed at the March F2F meeting that we report lower limits on the distance to the source, assuming it emits some fixed amount of energy E_GW isotropically in gravitational waves. A physical distance is more meaningful to an astrophysicist than an hrss value, and it is consistent with how the CBC GRB results will be presented. We plan to quote the distance limits for E_GW = 0.01 M⊙c²; this is approximately the isotropic-equivalent energy emitted in the 100-200 Hz band by an on-axis 1.4-1.4 M⊙ binary. (The hrss-to-distance conversion is sketched after this list.)
      5. Population statement: Three statistical/population tests were performed in the S2-S3-S4 paper:
        • The binomial test: We perform the binomial test as in the S2-S3-S4 paper. We use the 7 GRBs with the most significant on-source events for the test; this is the round number closest to 5% of the sample (137 GRBs). (A sketch of this test is given after this list.)
        • Wilcoxon rank-sum test: We do not perform this test. It is not applicable since it requires that the background distribution be the same for each GRB. (Note that the CBCers plan to do this in their paper.)
        • Maximum-likelihood population study: We do not perform this test.
      6. We do not distinguish between short and long GRBs in this search. We also do not use other information (redshift, fluence, etc.), only the sky position and trigger time. Note that the CBC group is doing a specialized short GRB search.
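
To make the data-conditioning issue (major issue 1) concrete, the following is a minimal sketch of whitening a data segment against an estimated power spectrum. It is illustrative only and is not the actual xcondition.m algorithm; X-Pipeline's PSD estimator and normalization conventions differ in detail. (pwelch and hann require the Signal Processing Toolbox.)

      % Minimal whitening sketch (NOT the actual xcondition.m algorithm).
      fs = 4096;                       % sample rate [Hz]
      x  = randn(16*fs, 1);            % stand-in for a conditioned data segment
      % One-sided PSD estimate of the segment.
      [Sxx, f] = pwelch(x, hann(fs), fs/2, fs, fs);
      % Interpolate the PSD onto the FFT bin frequencies of the full
      % segment, folding bins above the Nyquist frequency.
      N  = length(x);
      fk = (0:N-1)' * fs / N;
      fk(fk > fs/2) = fs - fk(fk > fs/2);
      S  = interp1(f, Sxx, fk);
      % Whiten: divide each Fourier amplitude by sqrt(S*fs/2), the
      % normalization that maps unit-variance white noise onto itself.
      xw = real(ifft(fft(x) ./ sqrt(S * fs / 2)));
      % Basic check: the whitened data should have standard deviation ~1.
      fprintf('whitened std = %.3f (expect ~1)\n', std(xw));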
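
The H1-H2 linear ratio cut in point 2 of the response to issue 6 is a simple per-event threshold. A schematic version with made-up event energies (the names Inull and Enull follow the text; this is not X-Pipeline code):

      % Schematic H1-H2 linear ratio cut (stand-in numbers, not real events).
      % Enull: coherent null-stream energy; Inull: its incoherent counterpart.
      % A real GW signal cancels in the null stream, so Inull >> Enull,
      % while glitches cluster near Inull ~ Enull.
      Inull = [50.0  3.0  120.0  8.0]';
      Enull = [10.0  2.9   30.0  7.5]';
      keep  = (Inull ./ Enull) >= 1.2;   % keep events consistent with a signal
      fprintf('events surviving the ratio cut: %s\n', mat2str(find(keep)'));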
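
For the distance limits in point 4, the conversion from an hrss upper limit to a distance lower limit follows from assuming isotropic emission of energy E_GW at a single frequency f0, for which E_GW = (pi^2 c^3 / G) D^2 f0^2 hrss^2. The numbers below are stand-ins, not S5 results:

      % Hedged sketch: hrss upper limit -> distance lower limit, assuming
      % isotropic emission of energy E_GW at a single frequency f0.
      G    = 6.674e-11;             % Newton's constant [m^3 kg^-1 s^-2]
      c    = 2.998e8;               % speed of light [m/s]
      Msun = 1.989e30;              % solar mass [kg]
      Mpc  = 3.086e22;              % megaparsec [m]
      EGW  = 0.01 * Msun * c^2;     % assumed isotropic GW energy [J]
      f0   = 150;                   % waveform central frequency [Hz]
      hrss = 1e-22;                 % stand-in 90% hrss upper limit [Hz^-1/2]
      D    = sqrt(G * EGW / c^3) / (pi * f0 * hrss);
      fprintf('distance lower limit: %.1f Mpc\n', D / Mpc);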
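
Finally, the binomial test of point 5 asks whether the tail of smallest per-GRB p-values is consistent with background. A hedged sketch of one standard formulation follows (binocdf requires the Statistics Toolbox; the exact statistic used in the S2-S3-S4 paper may differ in detail):

      % Hedged sketch of the low-p-value binomial test.  pvals(i) is the
      % probability that background alone yields an on-source event at
      % least as loud as that observed for GRB i.
      N     = 137;                  % number of GRBs in the sample
      k     = 7;                    % tail size, ~5% of the sample
      pvals = sort(rand(N,1));      % stand-in p-values; use the real ones
      Pmin  = 1;
      for i = 1:k
          % Probability that >= i of N uniform p-values lie at or below pvals(i).
          Pmin = min(Pmin, 1 - binocdf(i-1, N, pvals(i)));
      end
      fprintf('binomial test statistic: %.3g\n', Pmin);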

REVIEW STATUS - "Minor" Issues

The review has turned up a number of minor issues such as checks to be done and small changes to codes (typically adding error checking or making corrections to help information). These issues and their current status are listed here.

$Id: summary.html,v 1.15 2009/05/15 11:51:43 psutton Exp $