Wiener (optimal) filtering

Suppose that the detector output is a dimensionless strain $h(t)$. (In Section  we show how to construct this quantity for the CIT 40-meter prototype interferometer, using the recorded digital data stream.) We denote by $s(t)$ the waveform of the signal (i.e., the chirp) which we hope to find, hidden in detector noise, in the signal stream $h(t)$. Since we would like to know about chirps which start at different possible times $t_0$, we'll take $s(t) = N\,T(t-t_0)$, where $T(t-t_0)$ is the canonically normalized waveform of a chirp which enters the sensitivity band of the interferometer at time $t_0$. The constant $N$ quantifies the strength of the signal we wish to extract as compared to an otherwise identical signal of canonical strength (we will discuss how this canonical normalization is defined shortly). In other words, $T$ contains all the information about the chirp we are searching for apart from the arrival time and the strength, which are given by $t_0$ and $N$ respectively. For the moment, we will ignore the fact that the chirps come in two different phase ``flavors''.

We will construct a signal $S$, which is a number defined by

S \equiv \int_{-\infty}^{\infty} dt\, h(t)\, Q(t),   (6.14.59)

where $Q(t)$ is a filter function which we are free to choose. We use the Fourier transform conventions of () and (), in terms of which we can write the signal as

S = \int_{-\infty}^{\infty} df\, \tilde h(f)\, \tilde Q^*(f).   (6.14.60)

This final expression gives the signal value written in the frequency domain, rather than in the time domain.

Now we can ask about the expected value of $S$, which we denote
$\langle S \rangle$. This is the average of $S$ over an ensemble of
detector output streams $h(t) = n(t) + N\,T(t-t_0)$, each one of which contains an identical
chirp signal but a different realization of the noise $n(t)$:

\langle S \rangle = N \int_{-\infty}^{\infty} dt\, T(t-t_0)\, Q(t)   (6.14.61)

= N \int_{-\infty}^{\infty} df\, e^{2\pi i f t_0}\, \tilde T(f)\, \tilde Q^*(f).   (6.14.62)

The expectation value of the noise $n(t)$ clearly vanishes by definition, so the noise makes no contribution to $\langle S \rangle$. The expected value of the squared noise $\mathcal{N}^2$, where $\mathcal{N} \equiv S - \langle S \rangle$, is non-zero, however. It may be calculated from the (one-sided) strain noise power spectrum of the detector $S_h(f)$, which is defined by

\langle \tilde n(f)\, \tilde n^*(f') \rangle = \frac{1}{2}\, S_h(|f|)\, \delta(f-f')   (6.14.64)

and has the property that

\langle n^2(t) \rangle = \int_0^{\infty} df\, S_h(f).   (6.14.65)

There is a nice way to write the formulae for the expected signal and the expected noise-squared. We introduce an ``inner product'' defined for any pair of (complex) functions $A(f)$ and $B(f)$. The inner product is a complex number denoted by $(A,B)$ and is defined by

(A,B) \equiv \int_{-\infty}^{\infty} df\, S_h(|f|)\, A(f)\, B^*(f).   (6.14.66)

Because $S_h$ is positive, this inner product has the property that $(A,A) \ge 0$ for all functions $A$, vanishing if and only if $A = 0$. This inner product is what a mathematician would call a ``positive definite norm''; it has all the properties of an ordinary dot product of vectors in three-dimensional Cartesian space.

In terms of this inner product, we can now write the expected signal, and the expected
noise-squared, as

\langle S \rangle = N\, \bigl( e^{2\pi i f t_0}\, \tilde T / S_h ,\; \tilde Q \bigr)   (6.14.68)

\langle \mathcal{N}^2 \rangle = \frac{1}{2}\, (\tilde Q, \tilde Q).   (6.14.69)

To maximize the ratio of the expected signal to the r.m.s. expected noise,

\rho \equiv \langle S \rangle / \langle \mathcal{N}^2 \rangle^{1/2},   (6.14.70)

the Schwarz inequality tells us that $\tilde Q(f)$ must be proportional to $e^{2\pi i f t_0}\, \tilde T(f)/S_h(|f|)$; we choose

\tilde Q(f) = 2\, e^{2\pi i f t_0}\, \tilde T(f) / S_h(|f|),   (6.14.71)

where the factor of $2$ is a normalization convention whose convenience will become apparent below.

Going back to the definition (6.14.59) of our signal $S$, you will notice that the signal for ``arrival time offset'' $t_0$ is given by

S(t_0) = 2 \int_{-\infty}^{\infty} df\, e^{-2\pi i f t_0}\, \tilde h(f)\, \tilde T^*(f) / S_h(|f|).   (6.14.72)

Given a template $T$ and the signal stream $h$, the signal values $S(t_0)$ can be easily evaluated for any choice of arrival time $t_0$ by means of a Fourier transform (or FFT, in numerical work). Thus, it is not really necessary to construct a different filter for each possible arrival time; one can filter the data for all possible choices of arrival time with a single FFT.

The signal-to-noise ratio for this optimally-chosen filter can be
determined by substituting the optimal filter (6.14.71) into
equations (6.14.68) and (6.14.69), obtaining

\rho = \langle S \rangle / \langle \mathcal{N}^2 \rangle^{1/2} = N \left[ 4 \int_0^{\infty} df\, |\tilde T(f)|^2 / S_h(f) \right]^{1/2}.   (6.14.73)

The canonical normalization of the chirp waveform $T$ is defined by demanding that

4 \int_0^{\infty} df\, |\tilde T(f)|^2 / S_h(f) = 1,   (6.14.74)

so that $\rho = N$. With this normalization, the expected value of the squared noise is

\langle \mathcal{N}^2 \rangle = \frac{1}{2}\, (\tilde Q, \tilde Q) = 4 \int_0^{\infty} df\, |\tilde T(f)|^2 / S_h(f)   (6.14.75)

= 1.   (6.14.76)

For example, consider a system composed of two $1.4\,M_\odot$ masses in circular orbit. Let us normalize the template so that the canonical-normalization condition is satisfied. This normalization corresponds to placing the binary system at some distance. For the purpose of discussion, suppose that this distance is 15 megaparsecs (i.e., choosing $T$ to be the strain produced by an optimally-oriented pair of $1.4\,M_\odot$ stars at a distance of 15 megaparsecs). If we then detect a signal with a signal-to-noise ratio $\rho = 30$, this corresponds to detecting an optimally-oriented source at a distance of half a megaparsec. Note that the normalization we have chosen has the r.m.s. noise $\langle \mathcal{N}^2 \rangle^{1/2} = 1$, and therefore the signal and signal-to-noise values are equal.

The functions `correlate()` and `productc()` are designed to
perform this type of optimal filtering. We document these routines in
the following section and in Section , then provide a simple
example of an optimal filtering program.

There is an additional complication, arising from the fact that the
gravitational radiation from a binary inspiral event is a linear
combination of two possible orbital phases, as may be seen by reference
to equations () and (). Thus, the
strain produced in a detector is a linear combination of two waveforms,
corresponding to each of the two possible ($0^\circ$ and $90^\circ$)
orbital phases:

s(t) = A\, T_0(t - t_0) + B\, T_{90}(t - t_0).   (6.14.77)

In the optimal filtering, we are now searching for a pair of amplitudes
$A$ and $B$ rather than just a single amplitude. One can easily do
this by choosing a filter function which corresponds to a complex-valued signal
in the time-domain:

\tilde Q(f) = 2\, e^{2\pi i f t_0} \left[ \tilde T_0(f) + i\, \tilde T_{90}(f) \right] / S_h(|f|).   (6.14.78)

Note that $T_0$ and $T_{90}$ are only exactly orthogonal in the adiabatic limit, where they each have many cycles in any frequency interval over which the noise power spectrum changes significantly. In this limit they satisfy the orthonormality relations

\int_{-\infty}^{\infty} df\, \tilde T_0(f)\, \tilde T_{90}^*(f) / S_h(|f|) = 0, \qquad \int_{-\infty}^{\infty} df\, \frac{|\tilde T_0(f)|^2}{S_h(|f|)} = \int_{-\infty}^{\infty} df\, \frac{|\tilde T_{90}(f)|^2}{S_h(|f|)} = \frac{1}{2}.   (6.14.79)

Also note that the filter function does not correspond to a real filter in the time domain, since $\tilde Q(-f) \ne \tilde Q^*(f)$, so that the signal

S(t_0) = 2 \int_{-\infty}^{\infty} df\, e^{-2\pi i f t_0}\, \tilde h(f) \left[ \tilde T_0(f) + i\, \tilde T_{90}(f) \right]^* / S_h(|f|)   (6.14.80)

is a complex-valued function of the lag $t_0$. We define the noise as before, by $\mathcal{N} \equiv S - \langle S \rangle$. Its mean-squared modulus is

\langle |\mathcal{N}|^2 \rangle = \frac{1}{2}\, (\tilde Q, \tilde Q) = 2,   (6.14.81)

where we have made use of the orthonormality relation (6.14.79). This value is twice as large as the expected noise-squared in the case of a single-phase waveform considered previously.

The expected signal at zero lag is

\langle S \rangle = 2 \int_{-\infty}^{\infty} df\, \left[ A\, \tilde T_0(f) + B\, \tilde T_{90}(f) \right] \left[ \tilde T_0(f) + i\, \tilde T_{90}(f) \right]^* / S_h(|f|)   (6.14.82)

= 2A \int_{-\infty}^{\infty} df\, \frac{|\tilde T_0(f)|^2}{S_h(|f|)} - 2iB \int_{-\infty}^{\infty} df\, \frac{|\tilde T_{90}(f)|^2}{S_h(|f|)}   (6.14.83)

= A - iB.   (6.14.84)

Writing the two amplitudes in terms of an overall amplitude $N$ and a phase $\phi$, as $A = N\cos\phi$ and $B = N\sin\phi$, the modulus of the expected signal is

|\langle S \rangle| = (A^2 + B^2)^{1/2}   (6.14.85)

= N,   (6.14.86)

so the expected signal-to-noise ratio is

|\langle S \rangle| / \langle |\mathcal{N}|^2 \rangle^{1/2} = N / \sqrt{2}.   (6.14.87)

Another way to understand these two different choices of normalization, and to understand why the conventional choice of normalization is $\langle |\mathcal{N}|^2 \rangle = 2$, is that conventionally one treats the two-phase case in the same way as the single-phase case, but regards the template $T$ as a function of a phase parameter, $T_\phi = \cos\phi\, T_0 + \sin\phi\, T_{90}$. For any fixed $\phi$, the corresponding noise $\mathcal{N}_\phi$ has rms value one, but the statistic $|\mathcal{N}|$ has rms value $\sqrt{2}$.

The attentive reader will notice, with our choice of filter and signal normalizations, that we have lost a factor of $\sqrt{2}$ in the signal-to-noise ratio compared to the case where we were searching for only a single phase of waveform. The expected signal strength in the presence of a $0^\circ$-phase signal is the same as in the single-phase case, but the expected (noise)$^2$ has been doubled. This is because of the additional uncertainty associated with our lack of information about the relative contributions of the two orbital phases. In other words, if we know in advance that a waveform is composed entirely of the zero-degree orbital phase, then the expectation value of the signal-to-noise ratio, determined by equation (), would be given by $N$. However, if we need to search for the correct linear combination of the two possible phase waveforms, then the expectation value of the signal-to-noise ratio is reduced to $N/\sqrt{2}$. Nevertheless, as we will see in the next section, this reduction in signal-to-noise ratio does not significantly affect our ability to detect signals with a given false alarm rate.