Vetoing techniques (the χ² time/frequency test)

In the single-phase case, suppose that one of our optimal chirp filters T
is triggered with a large SNR at time t_0. We suppose
that the signal s which was responsible for this trigger may be written
in either the time or the frequency domain as

(6.24.105)   s(t) = A \, T(t - t_0) + n(t)  ⟺  \tilde s(f) = A \, e^{-2\pi i f t_0} \, \tilde T(f) + \tilde n(f),

where A is an unknown amplitude and n(t) is the detector noise.

We assume that we have identified what is believed to be the ``correct'' template T, by the procedure already described of maximizing the SNR over arrival time and template, and have used this to estimate A. We assume that t_0 has been determined exactly (a good approximation, since it can be estimated to high accuracy). Our goal is to construct a statistic which will indicate whether our estimate of A and identification of T are credible.

We will denote
the signal value at time offset t_0 by the real number z:

(6.24.106)   z \equiv (s, T) = 4 \, \mathrm{Re} \int_0^\infty df \, \tilde s(f) \, \tilde T^*(f) \, e^{2\pi i f t_0} / S_h(f),

where S_h(f) is the one-sided noise power spectrum. The template is normalized so that

(6.24.107)   (T, T) = 4 \int_0^\infty df \, |\tilde T(f)|^2 / S_h(f) = 1,

which makes z a Gaussian random variable with unit variance and mean ⟨z⟩ = A. Now partition the frequency range into p disjoint subintervals

(6.24.108)   \Delta f_1 = (0, f_1), \quad \Delta f_2 = (f_1, f_2), \quad \ldots, \quad \Delta f_p = (f_{p-1}, \infty).

The boundaries f_1 < f_2 < ⋯ < f_{p−1} are defined by the condition that the expected chirp signal contributes equally to each interval:

(6.24.109)   4 \int_{\Delta f_j} df \, |\tilde T(f)|^2 / S_h(f) = 1/p, \quad j = 1, \ldots, p.
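As an illustration, the band boundaries can be computed numerically from the equal-contribution condition (6.24.109). The sketch below is pure Python with toy stand-ins for the real inputs: a Newtonian-chirp weight |T̃(f)|²/S_h(f) ∝ f^(−7/3) and a white noise spectrum are assumptions for illustration, not GRASP's actual models. It accumulates the discretized weight and places an edge each time another 1/p of the total has been collected:

```python
def equal_power_bands(freqs, weight, p):
    """Split the frequency range into p intervals carrying equal shares of
    the total weight |T(f)|^2 / S_h(f) df -- a discrete analogue of the
    condition that each band contributes 1/p of the expected SNR^2."""
    df = freqs[1] - freqs[0]
    total = sum(weight) * df
    edges = [freqs[0]]
    acc = 0.0
    for f, w in zip(freqs, weight):
        acc += w * df
        if len(edges) < p and acc >= total * len(edges) / p:
            edges.append(f)
    edges.append(freqs[-1])
    return edges

# Toy stand-ins (assumptions, not GRASP's models): a Newtonian chirp has
# |T(f)|^2 proportional to f^(-7/3); take white noise S_h(f) = const.
freqs = [40.0 + 0.1 * k for k in range(20000)]   # 40 Hz to ~2040 Hz
weight = [f ** (-7.0 / 3.0) for f in freqs]
edges = equal_power_bands(freqs, weight, p=8)
print(edges)
```

Because the weight falls steeply with frequency, most of the p bands are narrow and crowd toward the low-frequency end, where the chirp's SNR accumulates; this is what makes the test sensitive to triggers whose power is distributed differently from a chirp's.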

Now, define a set of p signal values, one for each frequency interval:

(6.24.110)   z_j \equiv 4 \, \mathrm{Re} \int_{\Delta f_j} df \, \tilde s(f) \, \tilde T^*(f) \, e^{2\pi i f t_0} / S_h(f), \quad j = 1, \ldots, p,

so that z = z_1 + ⋯ + z_p, and from these a set of residuals which by construction sum to zero:

(6.24.111)   \Delta z_j \equiv z_j - z/p.

If the signal is exactly of the assumed form (6.24.105), then by virtue of (6.24.109) each z_j is Gaussian with mean ⟨z_j⟩ = A/p and variance 1/p.
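In discrete form the band decomposition is linear, so the z_j necessarily recombine into the full signal value z, and the Δz_j sum to zero. A minimal numerical sketch (the white noise spectrum, the artificial unit-norm template, and the amplitude A = 5 are all illustrative assumptions, not GRASP inputs):

```python
import cmath
import math
import random

random.seed(0)

# Toy discretized strain and template in the frequency domain.
n_bins, df, p = 4096, 0.5, 8
template = [cmath.exp(-2j * math.pi * (k / n_bins)) / math.sqrt(n_bins)
            for k in range(n_bins)]
strain = [5.0 * t + complex(random.gauss(0, 1), random.gauss(0, 1))
          for t in template]          # signal with amplitude A = 5, plus noise

# Total signal value: z = 4 Re sum strain * conj(template) df  (S_h = 1)
z = 4 * df * sum((s * t.conjugate()).real for s, t in zip(strain, template))

# Band-by-band signal values z_j over p slices of the bins.  For this flat
# |template|^2 and white noise, the equal-power bands are just equal
# numbers of bins.
width = n_bins // p
z_bands = [4 * df * sum((s * t.conjugate()).real
                        for s, t in zip(strain[j * width:(j + 1) * width],
                                        template[j * width:(j + 1) * width]))
           for j in range(p)]

dz = [zj - z / p for zj in z_bands]       # residuals Delta z_j
print(abs(sum(z_bands) - z), abs(sum(dz)))  # both vanish up to roundoff
```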

To characterize the χ² statistic, we will need the probability distribution
of the Δz_j. Because each of the z_j is a sum over different
(non-overlapping) frequency bins, they are independent Gaussian random
variables with unknown mean values μ_j (under the hypothesis that the
trigger is a true chirp, μ_j = A/p). Their a-priori probability
distribution is

(6.24.113)   p(z_1, \ldots, z_p) = (p/2\pi)^{p/2} \prod_{j=1}^p \exp\!\left[ -\frac{p}{2} (z_j - \mu_j)^2 \right].

[Note that in this expression and the following ones, all integrals are from −∞ to ∞.] This may be used to find a closed form for the distribution of the residuals,

(6.24.114)   p(\Delta z_1, \ldots, \Delta z_{p-1}) = \int dz_1 \cdots dz_p \; p(z_1, \ldots, z_p) \prod_{j=1}^{p-1} \delta(\Delta z_j - z_j + z/p).

To do so, introduce new coordinates: let

(6.24.115)   u \equiv z_1 + z_2 + \cdots + z_p

(6.24.116)   v_j \equiv z_j - u/p, \quad j = 1, \ldots, p-1,

which can be inverted to yield

(6.24.117)   z_j = v_j + u/p \quad (j = 1, \ldots, p-1),
             z_p = u/p - v_1 - v_2 - \cdots - v_{p-1}.

The Jacobian of this coordinate transformation is:

(6.24.118)   \frac{\partial(z_1, \ldots, z_p)}{\partial(v_1, \ldots, v_{p-1}, u)} = 1.

The integral may now be written as

(6.24.119)   p(\Delta z_1, \ldots, \Delta z_{p-1}) = (p/2\pi)^{p/2} \int du \, \exp\!\left[ -\frac{p}{2} \sum_{j=1}^p (z_j - \mu_j)^2 \right],

where the z_j are expressed in terms of u and v_j = Δz_j via (6.24.117). A few moments of algebra shows that the exponent may be expressed in terms of the new integration variables as

(6.24.120)   \sum_{j=1}^p (z_j - \mu_j)^2 = \sum_{j=1}^p (\Delta z_j)^2 + (u - A)^2/p,

with Δz_p ≡ −(Δz_1 + ⋯ + Δz_{p−1}) and μ_j = A/p. The remaining Gaussian integral over u is elementary and gives the closed form

(6.24.121)   p(\Delta z_1, \ldots, \Delta z_{p-1}) = \sqrt{p} \, (p/2\pi)^{(p-1)/2} \exp\!\left[ -\frac{p}{2} \sum_{j=1}^p (\Delta z_j)^2 \right].

This probability distribution arises because we do not know the true mean value of z_j, which is A/p, but can only estimate it using the actual measured value of z. Similar problems arise whenever the mean of a distribution is not known but must be estimated from the data (problem 14-7 of [24]). This probability distribution is ``as close as you can get to Gaussian'' subject to the constraint that the sum of the Δz_j must vanish. It is significant that this probability density function is completely independent of A, which means that the statistical properties of the Δz_j do not depend upon whether a signal is present or not.

The individual Δz_j have identical means and variances, which may be
easily calculated from the probability distribution function just derived.
For example, the mean is zero:
⟨Δz_j⟩ = 0.
To calculate the variance, insert (Δz_j)² into the expectation value
defined by that distribution.
One finds

(6.24.122)   \langle (\Delta z_j)^2 \rangle = (p-1)/p^2.

Define the statistic

(6.24.123)   \chi^2 \equiv p \sum_{j=1}^p (\Delta z_j)^2.

The expected value of χ² is trivial to calculate:

(6.24.124)   \langle \chi^2 \rangle = p \sum_{j=1}^p \langle (\Delta z_j)^2 \rangle

(6.24.125)   = p \cdot p \cdot (p-1)/p^2 = p - 1.
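The expectation value ⟨χ²⟩ = p − 1 is easy to confirm by simulation. The sketch below is an assumption-level model rather than GRASP code: it draws the z_j directly as independent Gaussians with mean A/p and variance 1/p, as in the derivation, and averages χ² with and without a signal present:

```python
import random

random.seed(1)

def chisq_trial(p, A):
    """One simulated trigger: independent z_j with mean A/p and variance 1/p,
    as assumed in the text; returns chi^2 = p * sum (Delta z_j)^2."""
    z_bands = [random.gauss(A / p, (1.0 / p) ** 0.5) for _ in range(p)]
    z = sum(z_bands)
    return p * sum((zj - z / p) ** 2 for zj in z_bands)

p, trials = 8, 20000
means = {A: sum(chisq_trial(p, A) for _ in range(trials)) / trials
         for A in (0.0, 10.0)}   # no signal, and a loud signal
print(means)  # both averages should come out close to p - 1 = 7
```

That the two averages agree illustrates the point made above: the distribution of the Δz_j, and hence of χ², is independent of the signal amplitude A.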

To obtain the distribution of χ² itself, it is now easy to do the integral over the constrained coordinate, and having done this, we are left with a spherically-symmetric integral over the remaining Δz_j:

(6.24.126)   P(\chi^2 > \chi_0^2) = \frac{\Gamma\!\left( (p-1)/2, \; \chi_0^2/2 \right)}{\Gamma\!\left( (p-1)/2 \right)} \equiv Q\!\left( (p-1)/2, \; \chi_0^2/2 \right),

where the angular part of the integral contributes the volume V_{p−1} of a unit-radius sphere in p − 1 dimensions and the radial part gives the incomplete gamma function Q. This is the same function that describes the likelihood in the traditional χ² test: (6.24.126) is precisely the survival probability of a χ² statistic with p − 1 degrees of freedom.

In practice (based on CIT 40-meter data) breaking up the frequency range into a modest number of intervals p provides a very reliable veto for rejecting events that trigger an optimal filter, but which are not themselves chirps. The expected value is ⟨χ²⟩ = p − 1, so if the measured χ² lies far out on the tail of the distribution (6.24.126), one can conclude that the likelihood that a given trigger is actually due to a chirp is correspondingly small; rejecting or vetoing such events will only reduce the ``true event'' rate by that same small probability. However, in practice this veto eliminates almost all other events that trigger an optimal filter: a noisy event that stimulates a binary chirp filter typically has a χ² value many times larger than p − 1!
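The survival probability Q((p − 1)/2, χ₀²/2) is easy to evaluate numerically. The sketch below implements the regularized upper incomplete gamma function in the standard Numerical Recipes style (series expansion for small x, continued fraction otherwise); the choice p = 8 in the demonstration loop is an illustrative assumption, not a value taken from the text:

```python
import math

def gser(a, x, eps=1e-12, itmax=500):
    """Series expansion for the regularized lower incomplete gamma P(a, x)."""
    ap, term, total = a, 1.0 / a, 1.0 / a
    for _ in range(itmax):
        ap += 1.0
        term *= x / ap
        total += term
        if abs(term) < abs(total) * eps:
            break
    return total * math.exp(-x + a * math.log(x) - math.lgamma(a))

def gcf(a, x, eps=1e-12, itmax=500):
    """Continued-fraction evaluation of Q(a, x) (modified Lentz method)."""
    tiny = 1e-300
    b = x + 1.0 - a
    c, d = 1.0 / tiny, 1.0 / b
    h = d
    for i in range(1, itmax):
        an = -i * (i - a)
        b += 2.0
        d = an * d + b
        d = tiny if abs(d) < tiny else d
        c = b + an / c
        c = tiny if abs(c) < tiny else c
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return math.exp(-x + a * math.log(x) - math.lgamma(a)) * h

def gammq(a, x):
    """Regularized upper incomplete gamma function Q(a, x)."""
    if x < a + 1.0:
        return 1.0 - gser(a, x)
    return gcf(a, x)

p = 8   # illustrative number of frequency intervals (an assumption)
for chisq in (7.0, 20.0, 50.0):
    # Single-phase veto probability Q((p-1)/2, chisq/2):
    print(chisq, gammq((p - 1) / 2.0, chisq / 2.0))
```

The table it prints shows how rapidly the survival probability falls once χ² exceeds its expected value p − 1, which is why a tail cut costs so little of the true-event rate.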

The previous analysis for the ``single-phase'' case assumes that we have
found the correct template describing the signal. In searching for
a binary inspiral chirp however, the signal is a linear combination of
the two different possible phases:

(6.24.127)   s(t) = A \, T_0(t - t_0) + B \, T_{\pi/2}(t - t_0) + n(t),

and the amplitudes A and B are unknown. The reader might well wonder why we can't simply construct a single properly normalized template as

(6.24.128)   T(t) = \frac{A \, T_0(t) + B \, T_{\pi/2}(t)}{\sqrt{A^2 + B^2}}

and apply the single-phase test to it. The difficulty is that A and B can only be estimated from the measured filter outputs themselves, so this template is not available in advance of the measurement.

The description and characterization of the χ² test for the two-phase
case is similar to the single-phase case. For the two-phase case, the signal
value is a complex number

(6.24.130)   z \equiv (s, T_0) + i \, (s, T_{\pi/2}),

where the two phases of the template are normalized and orthogonal:

(6.24.131)   (T_0, T_0) = (T_{\pi/2}, T_{\pi/2}) = 1, \qquad (T_0, T_{\pi/2}) = 0,

so that the mean value of z is

(6.24.132)   \langle z \rangle = A + iB.

As before, define one complex signal value for each of the p frequency intervals,

(6.24.133)   z_j \equiv (s, T_0)_j + i \, (s, T_{\pi/2})_j,

where (·,·)_j denotes the inner product with the frequency integral restricted to Δf_j. These satisfy

(6.24.134)   z = z_1 + z_2 + \cdots + z_p,

and define the complex residuals

(6.24.135)   \Delta z_j \equiv z_j - z/p.

The real and imaginary parts of each Δz_j behave exactly as in the single-phase case, so that

(6.24.136)   \langle \Delta z_j \rangle = 0, \qquad \langle |\Delta z_j|^2 \rangle = 2(p-1)/p^2.

The statistic is now defined with an absolute value squared,

(6.24.137)   \chi^2 \equiv p \sum_{j=1}^p |\Delta z_j|^2,

and has an expectation value which is twice as large as in the single-phase case:

(6.24.138)   \langle \chi^2 \rangle = 2(p-1).

The calculation of the distribution function of χ² is similar to the single-phase case (but with twice the number of degrees of freedom) and gives the incomplete Γ-function

(6.24.140)   P(\chi^2 > \chi_0^2) = \frac{\Gamma(p-1, \; \chi_0^2/2)}{\Gamma(p-1)} \equiv Q(p-1, \; \chi_0^2/2).

This is precisely the distribution of a χ² statistic with 2p − 2 degrees of freedom: each of the p complex variables Δz_j has 2 degrees of freedom, and there are two constraints, since the sums of both the real and imaginary parts of the Δz_j vanish. In fact, since the expectation value of a χ² statistic is just its number of degrees of freedom,

(6.24.141)   \langle \chi^2 \rangle = 2p - 2,

the relationship between the single-phase and two-phase statistics may be obtained by comparing equations (6.24.126) and (6.24.140), giving

(6.24.142)   P_{\rm two\;phase}(\chi^2 > \chi_0^2) = P_{\rm single\;phase}(\chi^2 > \chi_0^2) \big|_{p \to 2p-1},

since the replacement p → 2p − 1 takes the first argument (p − 1)/2 of the incomplete gamma function into p − 1.
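The two-phase result can be checked with the same kind of Monte Carlo as before (again an assumption-level model rather than GRASP code): draw complex z_j whose real and imaginary parts are independent Gaussians with means A/p and B/p and variance 1/p, and verify that the average of χ² = p Σ |Δz_j|² is near 2(p − 1):

```python
import random

random.seed(2)

def two_phase_chisq(p, A, B):
    """One simulated two-phase trigger: complex z_j with independent real and
    imaginary parts of mean A/p and B/p and variance 1/p (the assumptions of
    the text); returns chi^2 = p * sum |Delta z_j|^2."""
    sig = (1.0 / p) ** 0.5
    z_bands = [complex(random.gauss(A / p, sig), random.gauss(B / p, sig))
               for _ in range(p)]
    z = sum(z_bands)
    return p * sum(abs(zj - z / p) ** 2 for zj in z_bands)

p, trials = 8, 20000
mean = sum(two_phase_chisq(p, 3.0, 4.0) for _ in range(trials)) / trials
print(mean)   # should be close to 2 * (p - 1) = 14
```

As in the single-phase case, the result does not depend on the signal amplitudes A and B, only on the number of frequency intervals p.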