LSC-Virgo Burst Analysis Working Group

Review Committee Meeting Monday 17 July 2006 09:00 PST / 12:00 EST

Minutes: Monday 17 July 2006 09:00 PST / 12:00 EST

Agenda and Contact Info

Agenda

  1. Discussion of draft of S4 untriggered burst search [ PDF ]
  2. Continue discussion of the S2-S3-S4 GRB-GWB search [ HTML ]

Contact Info

AccuConference teleconferencing service:
   Phone: 1-800-704-9896, participant code: 038621#
   International callers ++1-404-920-6472 with same code

Minutes

  1. S4-untriggered
    • Keith Riles' comments on the paper:
      • p6, top of page: H1-H2 excess left hanging -- needs more comment
      • p13 before bullets: "similar sensitivities" statement not really accurate -- needs a more nuanced statement
      • p15: "efficiency points near zero and unity excluded" -- was is the origin of this? It is more than in S3 where fits were skewed by problems in estimation of uncertainty of these points. Here, the points near zero efficiency are just poorly modelled by the sigmoid, and to get a nice fit near 50% and 90% they need to be dropped from fit. Needs a bit more detail.
      • p17, first (incomplete) sentence: get rid of parenthetical comment.
    • What else needs to be done?
      • Double-check the fits. Keith did this before and has code to do it. He is willing to check them (roughly) if Laura provides a detailed description.
      • Discussion of hrss and energy.
      • Calibration: a 10% uncertainty is used. Is this OK?
      • Possible lingering item from WaveBurst review: need to double check the symlet decomposition. Michele might have already done this.
      • CorrPower review. Keith Thorne is doing this. Question about how much has been done already by Katherine Rawlins [ HTML ]. Is this up-to-date? How much more needs to be done?
      • Theoretical calculation of sensitivities for WaveBurst. There are Gaussian-noise simulations (is there a report?) and semi-analytical calculations due to Sergey(?). Goal is to have some analytical way of estimating what sensitivity the search should have given a threshold. (Note: the online BurstMon FOM is approaching the "optimum" FOM as detector stability has improved, which suggests that predictions based on Gaussian noise are becoming increasingly relevant. But this is probably not very important for predicting sensitivity.)
      • Livetime accounting: the paper has a mysterious comment about whether the loss due to a particular cut is quoted relative to the initial amount of data or to the data remaining after the preceding cuts. Erik will sort this out.
      • All numbers need to be cross-referenced to the technical document to make sure that they are explained there.
      • KleineWelle code review. Is it needed? Lindy, Alessandra, and John Z. were close to completing it.
      • Other remaining things to check:
        • Is all code in CVS and tagged appropriately?
        • Are there any pre/post processing scripts that we need to check? Are they all in CVS and tagged appropriately?
        • Is there enough documentation so that the entire search can be repeated?
        • Bias in the time reconstructed by WaveBurst (which arises because the response function is not used): can this cause problems? Is it entirely modelled by the MDCs?
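
  Sketch for the efficiency-fit check above: the analysis fits a sigmoid to the measured detection efficiency versus injection amplitude and reads off the 50% and 90% points. The fit function actually used in the paper may differ; the functional form, parameter names, and exclusion thresholds below are illustrative assumptions only (Python/scipy).

    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(h, h50, slope):
        # Generic empirical efficiency model: tends to 0 for small h, to 1 for large h.
        return 1.0 / (1.0 + (h50 / h) ** slope)

    def fit_efficiency(hrss, eff, lo=0.05, hi=0.95):
        # Drop efficiency points near zero and unity, which the simple sigmoid
        # models poorly, before fitting (cf. the p15 comment above).
        hrss = np.asarray(hrss, dtype=float)
        eff = np.asarray(eff, dtype=float)
        keep = (eff > lo) & (eff < hi)
        popt, _ = curve_fit(sigmoid, hrss[keep], eff[keep],
                            p0=[np.median(hrss[keep]), 2.0])
        h50, slope = popt
        h90 = h50 * 9.0 ** (1.0 / slope)  # amplitude at 90% efficiency
        return h50, h90
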
  2. GRB: Discussion of Isabel's simulations illustrating binomial test method of population detection [ HTML ]
    • Look at 114 GRBs from S5. Inject linearly polarized sine-Gaussians with random amplitudes roughly 10 times smaller than hrss90. Compute maxcc (which is the same as the no-injection maxcc unless the cross-correlation of the injection exceeds it).
    • Find that the binomial test can detect this population of weak signals.
    • Need to compare the binomial test applied to the loudest 25% of events against:
      • binomial test with all events
      • binomial test for loudest event only
      • rank-sum statistic
      Goal is to (i) establish that the binomial test is well-motivated (i.e., that choosing something between "all events" and "loudest event" makes sense, at least for some population distribution), and (ii) establish the "complementarity" between the binomial test and the rank-sum test (as claimed in the paper). The latter is important because if there are to be several hypothesis tests, the multiplicity needs to be justified. (A sketch of the binomial statistic follows this item.)
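
  Sketch of the binomial statistic discussed above: a generic implementation of a binomial test over the loudest fraction of events, not taken from Isabel's code; the per-GRB p-values are assumed to come from comparing each maxcc against its off-source (background) distribution.

    import numpy as np
    from scipy.stats import binom

    def binomial_statistic(pvalues, fraction=0.25):
        # Sort single-event p-values; for each of the loudest `fraction` of
        # events, compute the chance probability of getting at least that many
        # events with p-values at least that small out of n trials.  The test
        # statistic is the smallest such tail probability.
        p = np.sort(np.asarray(pvalues, dtype=float))
        n = len(p)
        n_tail = max(1, int(np.ceil(fraction * n)))
        tails = [binom.sf(i, n, p[i]) for i in range(n_tail)]  # P(X >= i+1)
        return min(tails)

  Setting fraction=1.0 or fraction=1.0/len(pvalues) gives the "all events" and "loudest event only" variants listed above; the rank-sum comparison could use, e.g., scipy.stats.mannwhitneyu on on-source versus off-source cross-correlation values.
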
  3. AOB
    • Soumya has nearly finished the redshift distribution investigation. Will report over the next week.
    • Luca has found that the sensitivity vs. Q is nearly constant in power units [ HTML ]. Will discuss in the future.
$Id: minutes-2006-07-17.html,v 1.4 2006/07/17 21:57:55 jolien Exp $