This page details the follow-ups we have performed on the outstanding issues in the S2 BlockNormal review (PDF final draft), as well as their status in our S5 work.
Problem in Whitening Filters
We first developed a metric (the Rayleigh statistic) and used it to test the performance of each step of our data conditioning. The expectation is that each step (Kalman line filter, line regression filters, and whitening filters) should improve the whiteness of the data. We studied the performance on all playgrounds in S2 (web page S2 Data-Conditioning Performance on Playgrounds) and S3 (web page S3 Data Conditioning Performance). The playgrounds are relevant because they are used to create the filters; if the data conditioning fails on those, it will not do better on the full data set. These studies identified which steps in which frequency bands were failing. The worst problem was in the Band 4 filtering in S3. We investigated in detail (web page Band 4 Filtering Study and Fixes) what was causing the problem. The root cause was that the 60Hz harmonic regression filters were of too high an order (90-150); the calibration line regression filters (which worked) were only of order 4.
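As a concrete illustration, the Rayleigh statistic can be computed per frequency bin as the ratio of the standard deviation to the mean of the spectral amplitude across many short segments; for stationary Gaussian noise it clusters near sqrt(4/pi - 1) ~ 0.52, while larger values flag non-stationarity and smaller values flag coherent lines. This is only a sketch of the idea; the segment length and lack of windowing here are simplifying assumptions, not our production pipeline:

```python
import numpy as np

def rayleigh_statistic(data, seg_len, n_segs):
    """Per-bin Rayleigh statistic: sigma/mu of the spectral amplitude
    across segments.  ~0.52 for stationary Gaussian noise."""
    segs = data[: seg_len * n_segs].reshape(n_segs, seg_len)
    amps = np.abs(np.fft.rfft(segs, axis=1))      # amplitude spectra, one row per segment
    return amps.std(axis=0) / amps.mean(axis=0)   # per-frequency-bin sigma/mu

rng = np.random.default_rng(0)
white = rng.standard_normal(64 * 256)             # simulated white Gaussian data
r = rayleigh_statistic(white, seg_len=256, n_segs=64)
# For white Gaussian noise the per-bin values cluster near 0.52.
```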
For S5, we developed a technique to track the Rayleigh-statistic performance of the regression filters as a function of filter order (web page S5 Regress Filter Selection). The filter order is only increased until the Rayleigh statistic reaches a plateau. We tried regressing the 60Hz harmonics, but we never found low-order filters that worked well, so we only regress calibration lines in S5.
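The order-selection rule can be sketched as follows: evaluate a whiteness score at increasing orders and stop once the improvement per step drops below a tolerance. The `score` callable and the toy numbers are hypothetical stand-ins for the Rayleigh-statistic measurement:

```python
def select_filter_order(score, orders, tol=0.01):
    """Increase filter order until the whiteness score plateaus:
    stop at the first order whose improvement over the previous one
    falls below `tol`, and return the previous (smaller) order."""
    best_order, best_score = orders[0], score(orders[0])
    for order in orders[1:]:
        s = score(order)
        if best_score - s < tol:       # lower score = whiter; gain too small
            return best_order
        best_order, best_score = order, s
    return best_order

# Toy score with diminishing returns: improvement plateaus after order 4.
toy = {1: 0.90, 2: 0.70, 3: 0.60, 4: 0.55, 5: 0.548, 6: 0.547}
order = select_filter_order(lambda n: toy[n], sorted(toy))
# order == 4
```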
Regression Channel Glitch Testing
We did not develop a test for this. However, in S5 we are only using injection channels to remove calibration lines (normal and photon calibrator); we are not doing any regression filtering with the PEM channels. The low-order regression filters ensure that any glitches in these high-coherence channels will be filtered only at a level matching their strength in DARM_ERR. In fact, the regression filters might remove some of these glitches that couple into DARM_ERR.
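A minimal sketch of the low-order regression idea, assuming a simple least-squares FIR fit of the witness (injection) channel onto the target. The channel construction, sampling rate, and order-4 choice here are illustrative, not the production filter design:

```python
import numpy as np

def regress_out(target, witness, order):
    """Fit a short FIR filter (least squares) mapping the witness
    channel onto the target, and subtract the predicted coupling.
    A low order suffices for a narrow calibration line and avoids
    over-fitting witness-channel noise into the residual."""
    X = np.column_stack([np.roll(witness, k) for k in range(order)])
    X[:order, :] = 0.0                     # zero the wrapped-around samples
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ coef

rng = np.random.default_rng(1)
t = np.arange(4096) / 1024.0               # 4 s at 1024 Hz (illustrative)
line = np.sin(2 * np.pi * 60.0 * t)        # 60 Hz line seen by the witness
witness = line + 0.01 * rng.standard_normal(t.size)
target = 0.5 * line + rng.standard_normal(t.size)   # line couples at 0.5
resid = regress_out(target, witness, order=4)
# The 60 Hz power in `resid` is far below that in `target`.
```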
History Effects on Event Detection
As mentioned in the final S2 report, we analyzed the performance of BlockNormal
on two MDC sets (web page Signal Pile-Up Studies
). Both sets had the same average injection spacing (120s), but one
set (BH3) used Poisson sampling, so the spacing ranged down to 10s, while the other
set (BH4) forced spacing to be centered on 120s. An early version of WaveBurst had
performed poorly on BH3 (and was fixed to resolve that issue). The BlockNormal performance
was essentially identical on both MDC sets. We did some detailed tests (same web page) that showed there might be an issue with injections only 15s apart; however, no metric was determined to push this further. We note that all current Burst Group MDC sets use
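For illustration, the two spacing schemes can be mimicked as follows. The function name, the 10s floor, and the jitter width are assumptions for this sketch, not the actual MDC generation code:

```python
import numpy as np

def injection_times(n, mean_spacing=120.0, poisson=True,
                    min_gap=10.0, jitter=10.0, seed=0):
    """Illustrative injection-time generator: Poisson sampling gives
    exponential gaps with a floor (BH3-like), while the forced scheme
    keeps every gap centered on the mean (BH4-like)."""
    rng = np.random.default_rng(seed)
    if poisson:
        gaps = min_gap + rng.exponential(mean_spacing - min_gap, size=n)
    else:
        gaps = mean_spacing + rng.uniform(-jitter, jitter, size=n)
    return np.cumsum(gaps)

bh3_like = injection_times(1000, poisson=True)    # gaps range down to ~10 s
bh4_like = injection_times(1000, poisson=False)   # gaps stay near 120 s
```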
As noted in the final S2 report, we did some initial studies on S2 data (web page Change-Point Uniformity Studies). These showed that the uniformity issue disappeared once we applied non-zero thresholds on the blocks to define events. During S5, we examined this issue in more detail when we tuned the BlockNormal thresholds: ρ² on change-points and PE on blocks between change-points. As shown in our background rate tuning (web page S5 Background Tuning - Epoch 2), we found that the smooth statistical behavior broke down if we tried too high a ρ² threshold.
We explored this further in our BlockNormal methods paper (PDF BlockNormal CQG Paper). The root cause was that too high a ρ² threshold resulted in so low a change-point rate over the iteration interval that the assumption of Gaussian statistics was invalid. As long as one keeps the ρ² threshold low enough, the blocks above the event threshold have the expected Poisson spacing distribution.
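One simple check of the expected Poisson spacing is the coefficient of variation (CV) of the inter-event gaps: exponential gaps give CV ≈ 1, while overly regular spacing gives CV well below 1 and clustering (pile-up) pushes it above 1. This is a sketch of such a check, not the test actually used in the tuning:

```python
import numpy as np

def gap_cv(event_times):
    """Coefficient of variation of inter-event gaps.  For a Poisson
    process the gaps are exponential, so the CV is ~1; CV << 1 means
    regular spacing, CV >> 1 means clustering."""
    gaps = np.diff(np.sort(event_times))
    return float(gaps.std() / gaps.mean())

rng = np.random.default_rng(2)
poisson_events = np.cumsum(rng.exponential(scale=120.0, size=2000))
regular_events = np.arange(2000) * 120.0
# gap_cv(poisson_events) ~ 1.0, gap_cv(regular_events) == 0.0
```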
There is a different but related issue with event pile-up at the beginning and end of each epoch. This is mostly due to ringing in the filters, as well as an excess of change-points in BlockNormal. We studied this in S2 (web page Fiducial Cut Study). The problem is confined to within 0.25 s of an edge. We solve it by extending the epoch edges by 1 second in each direction, then removing all events in that padded second during clustering.
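The padding-and-cut scheme can be sketched as follows; the helper names are hypothetical, and only the 1-second pad mirrors the description above:

```python
def analysis_interval(epoch_start, epoch_end, pad=1.0):
    """Pad the epoch by `pad` seconds on each side before filtering,
    so filter ringing and excess change-points land in the padding."""
    return epoch_start - pad, epoch_end + pad

def fiducial_cut(event_times, epoch_start, epoch_end):
    """Keep only events inside the original epoch; events in the
    padded second at either edge are dropped during clustering."""
    return [t for t in event_times if epoch_start <= t <= epoch_end]

start, end = analysis_interval(100.0, 200.0)   # (99.0, 201.0)
events = [99.3, 100.5, 150.0, 199.9, 200.8]
kept = fiducial_cut(events, 100.0, 200.0)      # [100.5, 150.0, 199.9]
```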
Frequency Band Non-Overlap
We have not done any studies yet regarding this (as it would require special MDCs). However, we typically get good performance on sine-Gaussians that are close to band edges.