Next: Comparison of signal detectability Up: GRASP Routines: Gravitational Radiation Previous: Example: compare_chirps program   Contents

Wiener (optimal) filtering

The technique of optimal filtering is well studied and well understood; it can be used to search for characteristic signals (in our case, chirps) buried in detector noise. To establish notation, we begin this section with a brief review of the optimal filtering technique.

Suppose that the detector output is a dimensionless strain $h(t)$. (In Section [*] we show how to construct this quantity for the CIT 40-meter prototype interferometer, using the recorded digital data stream.) We denote by ${C}(t)$ the waveform of the signal (i.e., the chirp) which we hope to find, hidden in detector noise, in the signal stream $h(t)$. Since we would like to know about chirps which start at different possible times $t_0$, we take ${C}(t) = \alpha T(t-t_0)$, where $T(t)$ is the canonically normalized waveform of a chirp which enters the sensitivity band of the interferometer at time $t=0$. The constant $\alpha$ quantifies the strength of the signal we wish to extract, relative to an otherwise identical signal of canonical strength (we will discuss shortly how this canonical normalization is defined). In other words, $T(t)$ contains all the information about the chirp we are searching for apart from the arrival time and the strength, which are given by $t_0$ and $\alpha$ respectively. For the moment, we ignore the fact that chirps come in two different phase ``flavors''.

We will construct a signal $S$, which is a number defined by

\begin{displaymath}
S = \int_{-\infty}^\infty dt \; h(t) Q(t),
\end{displaymath} (6.14.59)

where $Q(t)$ is an optimal filter function in the time domain, which we will shortly determine in a way that maximizes the signal-to-noise ratio $S/N$ (or SNR). We will assume that $Q$ is a real function of time.

We use the Fourier transform conventions of ([*]) and ([*]), in terms of which we can write the signal $S$ as

\begin{eqnarray*}
S & = & \int_{-\infty}^\infty dt \int_{-\infty}^\infty df \int_{-\infty}^\infty df' \;
{\rm e}^{-2 \pi i f t+2 \pi i f' t} \, \tilde h(f) \tilde Q^* (f') \\
& = & \int_{-\infty}^\infty df \int_{-\infty}^\infty df' \;
\delta(f-f') \, \tilde h(f) \tilde Q^* (f') \\
& = & \int_{-\infty}^\infty df \; \tilde h(f) \tilde Q^* (f).
\end{eqnarray*} (6.14.60)

This final expression gives the signal value $S$ written in the frequency domain, rather than in the time domain.

Now we can ask about the expected value of $S$, which we denote $\langle S \rangle$. This is the average of $S$ over an ensemble of detector output streams, each one of which contains an identical chirp signal $C(t)$ but different realizations of the noise:

\begin{displaymath}
h(t) = C(t) + n(t).
\end{displaymath} (6.14.61)

So for each realization, $C(t)$ is exactly the same function, but $n(t)$ varies from one realization to the next. We will assume that the noise has zero mean value, and that its phases are randomly distributed, so that $\langle \tilde n(f) \rangle=0$. We can then take the expectation value of the signal in the frequency domain, obtaining
\begin{displaymath}
\langle S \rangle = \int_{-\infty}^\infty df \; \langle \tilde h(f) \rangle \tilde Q^*(f) = \int_{-\infty}^\infty df \; \tilde C(f) \tilde Q^* (f).
\end{displaymath} (6.14.62)

We now define the noise $N$ to be the difference between the signal value and its mean for any given element of the ensemble:
\begin{displaymath}
N \equiv S-\langle S \rangle = \int_{-\infty}^\infty df \; \tilde n(f) \tilde Q^* (f).
\end{displaymath} (6.14.63)

The expectation value of $N$ clearly vanishes by definition, so $\langle N \rangle=0$. The expected value of $N^2$ is non-zero, however. It may be calculated from the (one-sided) strain noise power spectrum of the detector $S_h(f)$, which is defined by
\begin{displaymath}
\langle \tilde n(f) \tilde n^*(f') \rangle = {1 \over 2} S_h(\vert f\vert) \delta(f-f'),
\end{displaymath} (6.14.64)

and has the property that
\begin{displaymath}
\langle n^2(t) \rangle = \int_0^\infty S_h(f) \; df.
\end{displaymath} (6.14.65)
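This property has a simple discrete check. The sketch below is pure Python, not GRASP code (GRASP itself is written in C and uses FFTs); a naive $O(N^2)$ DFT stands in for an FFT, and unit-variance white Gaussian noise stands in for real detector noise, with all parameters made up for illustration. It estimates the one-sided spectrum from a periodogram and verifies that its integral over positive frequencies reproduces $\langle n^2(t)\rangle$:

```python
import cmath, math, random

def dft(x):
    """Naive O(N^2) discrete Fourier transform (stands in for an FFT)."""
    N = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * j * k / N) for j in range(N))
            for k in range(N)]

random.seed(1)
N, dt, navg = 64, 1.0, 8
df = 1.0 / (N * dt)

Sh = [0.0] * (N // 2 + 1)   # one-sided periodogram estimate of S_h(f_k)
msq = 0.0                   # time-domain mean square <n^2(t)>
for _ in range(navg):
    n = [random.gauss(0.0, 1.0) for _ in range(N)]
    msq += sum(v * v for v in n) / N
    nt = dft(n)
    for k in range(N // 2 + 1):
        fac = 1.0 if k in (0, N // 2) else 2.0   # DC and Nyquist bins appear once
        Sh[k] += fac * dt / N * abs(nt[k]) ** 2
Sh = [s / navg for s in Sh]
msq /= navg

# discrete analogue of <n^2(t)> = \int_0^\infty S_h(f) df
integral = sum(s * df for s in Sh)
```

By Parseval's theorem the identity holds exactly for each realization; averaging over realizations merely reduces the scatter of the spectral estimate around the true (here flat) spectrum.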

We can now find the expected value of $N^2$, by squaring equation ([*]), taking the expectation value, and using ([*]), obtaining
\begin{eqnarray*}
\langle N^2 \rangle & = & \int_{-\infty}^\infty df \int_{-\infty}^\infty df' \;
\tilde Q^*(f) \langle \tilde n(f) \tilde n^*(f') \rangle \tilde Q(f') \\
& = & {1 \over 2} \int_{-\infty}^\infty df \; S_h(\vert f\vert) \vert\tilde Q(f) \vert^2 \\
& = & \int_{0}^\infty df \; S_h(f) \vert\tilde Q(f) \vert^2.
\end{eqnarray*} (6.14.66)

There is a nice way to write the formulae for the expected signal and the expected noise-squared. We introduce an ``inner product" defined for any pair of (complex) functions $A(f)$ and $B(f)$. The inner product is a complex number denoted by $(A,B)$ and is defined by
\begin{displaymath}
(A,B) = \int_{-\infty}^\infty df \; A(f) B^*(f) S_h(\vert f\vert).
\end{displaymath} (6.14.67)

Because $S_h$ is positive, this inner product has the property that $(A,A) \ge 0$ for all functions $A(f)$, vanishing if and only if $A=0$. This inner product is what a mathematician would call ``positive definite''; it has all the properties of an ordinary dot product of vectors in three-dimensional Cartesian space.
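In numerical work the integral becomes a weighted sum over frequency bins. A minimal sketch of the inner product and the two properties just quoted (the arrays and sample spectrum below are made up purely for illustration):

```python
def inner(A, B, Sh, df):
    """Discrete analogue of (A,B) = \\int df A(f) B*(f) S_h(|f|)."""
    return sum(a * b.conjugate() * s for a, b, s in zip(A, B, Sh)) * df

# made-up frequency-domain samples and a positive toy spectrum
A = [1 + 2j, 0.5 - 1j, -0.3 + 0j]
B = [0.2 + 0j, 1j, 1.0 - 0.5j]
Sh = [1.0, 2.0, 0.5]
df = 0.25

aa = inner(A, A, Sh, df)   # real and positive, as the text asserts
ab = inner(A, B, Sh, df)
ba = inner(B, A, Sh, df)   # equals the complex conjugate of ab
```

The positivity of $(A,A)$ is what lets the inner product play the role of a squared length in the Schwarz-inequality argument below.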

In terms of this inner product, we can now write the expected signal, and the expected noise-squared, as

\begin{displaymath}
\langle S \rangle = \left({\tilde C \over S_h},\tilde Q\right)
\quad {\rm and} \quad
\langle N^2 \rangle = {1 \over 2} \left(\tilde Q, \tilde Q\right).
\end{displaymath} (6.14.68)

(Note that whenever $S_h$ appears inside the inner product, it refers to the function $S_h(\vert f\vert)$ rather than $S_h(f)$.) Now the question is, how do we choose the optimal filter function $Q$ so that the expected signal is as large as possible, and the expected noise-squared is as small as possible? The answer is easy! Recall that the Schwarz inequality for inner products asserts that
\begin{displaymath}
\vert (A,B) \vert^2 \le (A,A)(B,B),
\end{displaymath} (6.14.69)

the two sides being equal if (and only if) $A$ is proportional to $B$. So, to maximize the signal-to-noise ratio
\begin{displaymath}
\left( {S \over N} \right)^2 = {\langle S \rangle^2 \over \langle N^2 \rangle}
= {2 \left( {\tilde C \over S_h},\tilde Q \right)^2 \over \left( \tilde Q, \tilde Q \right)},
\end{displaymath} (6.14.70)

we choose
\begin{displaymath}
\tilde Q(f) \propto {\tilde C(f) \over S_h(\vert f\vert)} = \alpha \,
{\tilde T(f) \over S_h(\vert f\vert)} \; {\rm e}^{2 \pi i f t_0}.
\end{displaymath} (6.14.71)

The signal-to-noise ratio defined by equation ([*]) is normalized in a way that is generally accepted and used. Note that the definition is independent of the normalization of the optimal filter $\tilde Q$, since that quantity appears quadratically in both the numerator and denominator. However, if we wish to speak about ``signal'' values rather than signal-to-noise values, then the normalization of $\tilde Q$ is relevant. If we choose the constant of proportionality to be $2 \alpha^{-1}$ (so that the factor of $\alpha$ cancels and the prefactor is simply $2$, for reasons we will discuss shortly), then we can express the template in terms of the canonical waveform,
\begin{displaymath}
\tilde{Q}(f)=2 \> \frac{\tilde{T}(f)}{S_h(\vert f\vert)} \, {\rm e}^{2\pi i f t_0}.
\end{displaymath} (6.14.72)

Going back to the definition of the signal $S$, you will notice that for ``arrival time offset'' $t_0$ it is given by
\begin{eqnarray*}
S & = & \int_{-\infty}^\infty df \; \tilde h(f) \tilde Q^* (f) \\
& = & 2 \int_{-\infty}^\infty df \; { \tilde h(f) \tilde T^* (f)
\over S_h(\vert f\vert)} \; {\rm e}^{- 2 \pi i f t_0}.
\end{eqnarray*} (6.14.73)

Given a template $\tilde T$ and the signal $\tilde h$, the signal values can be easily evaluated for any choice of arrival times $t_0$ by means of a Fourier transform (or FFT, in numerical work). Thus, it is not really necessary to construct a different filter for each possible arrival time; one can filter data for all possible choices of arrival time with a single FFT.
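The one-FFT trick can be sketched as follows (pure Python, with a naive DFT in place of an FFT; the windowed-sinusoid ``template'', white spectrum, and arrival offset are all made-up stand-ins for a real chirp, a real noise curve, and a real arrival time):

```python
import cmath, math

def ft(x, sign=+1):
    """Naive DFT; sign=+1 matches the convention h~(f) = \\int h(t) e^{2 pi i f t} dt."""
    N = len(x)
    return [sum(x[j] * cmath.exp(sign * 2j * math.pi * j * k / N) for j in range(N))
            for k in range(N)]

N = 64
dt, df = 1.0, 1.0 / 64
Sh = [1.0] * N                               # toy white spectrum (assumption)

# made-up template: a windowed oscillation standing in for T(t)
T = [math.exp(-((j - 16.0) / 6.0) ** 2) * math.sin(0.5 * j) for j in range(N)]
m0 = 17                                      # true arrival offset, in samples
h = [T[(j - m0) % N] for j in range(N)]      # noise-free data: shifted template

Ht, Tt = ft(h), ft(T)
prod = [2.0 * Ht[k] * Tt[k].conjugate() / Sh[k] for k in range(N)]
# one inverse transform evaluates S(t_0) at every possible lag at once
S = [v * df for v in ft(prod, -1)]
best = max(range(N), key=lambda m: S[m].real)
```

Because the data here are just the template delayed by $m_0$ samples, the real part of $S$ attains its maximum at lag $m_0$: a single inverse transform recovers the arrival time.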

The signal-to-noise ratio for this optimally-chosen filter can be determined by substituting the optimal filter ([*]) into equation ([*]), obtaining

\begin{displaymath}
\left( {S \over N} \right)^2 =
2 \int_{-\infty}^\infty df \; {\vert \tilde C(f) \vert^2 \over S_h(\vert f\vert)}
= 2 \alpha^2 \left(\frac{\tilde{T}}{S_h(\vert f\vert)},\frac{\tilde{T}}{S_h(\vert f\vert)}\right).
\end{displaymath}

You will notice that the signal-to-noise ratio $S/N$ in ([*]) is independent of the overall normalization of the filter $Q$: if we make $Q$ bigger by a factor of ten, both the expected signal and the expected noise increase by exactly the same amount. For this reason, we can specify the normalization of the filter as we wish. Furthermore, it is obvious from ([*]) that normalizing the optimal filter is equivalent to specifying the normalization of the canonical signal waveform. It is traditional (for example in Cutler and Flanagan [21]) to choose
\begin{displaymath}
\left(\frac{\tilde{T}}{S_h(\vert f\vert)},\frac{\tilde{T}}{S_h(\vert f\vert)}\right) = \frac{1}{2}.
\end{displaymath} (6.14.74)
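Numerically, this convention simply fixes the overall scale of the template. A sketch, using a made-up Gaussian ``template'' and toy smooth spectrum (this is illustrative, not GRASP's own normalization code):

```python
import cmath, math

def ft(x):
    """Naive DFT with the e^{+2 pi i f t} sign convention used in the text."""
    N = len(x)
    return [sum(x[j] * cmath.exp(2j * math.pi * j * k / N) for j in range(N))
            for k in range(N)]

N = 64
df = 1.0 / N                                      # dt = 1
# symmetric toy spectrum, modelling S_h(|f|)
Sh = [1.0 + 0.05 * min(k, N - k) for k in range(N)]
Tt = ft([math.exp(-((j - 20.0) / 5.0) ** 2) for j in range(N)])

# rescale so that the discrete analogue of (T/S_h, T/S_h) equals 1/2
norm2 = sum(abs(Tt[k]) ** 2 / Sh[k] for k in range(N)) * df
Tt = [math.sqrt(0.5 / norm2) * v for v in Tt]

check = sum(abs(v) ** 2 / s for v, s in zip(Tt, Sh)) * df   # now 1/2
```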

With this normalization, the expected value of the squared noise is
\begin{displaymath}
\langle N^2 \rangle = {1 \over 2} (\tilde Q,\tilde Q) = {1 \over 2}
\left(2 \frac{\tilde{T}}{S_h(\vert f\vert)},2 \frac{\tilde{T}}{S_h(\vert f\vert)}
\right) = 1
\end{displaymath} (6.14.75)

and the signal-to-noise ratio takes the simple form
\begin{displaymath}
\left(\frac{S}{N}\right)^2 = \alpha^2.
\end{displaymath} (6.14.76)

This adjustment of the filter normalization can be thought of as moving the (fictitious) astrophysical system emitting the chirp template closer to us or farther away. Because the metric strain $h$ falls off as $1/{\rm distance}$, the measured signal strength $S$ is then a direct measure of the inverse distance.

For example, consider a system composed of two 1.4 $M_\odot$ masses in circular orbit. Let us normalize the filter $\tilde T$ so that equation ([*]) is satisfied. This normalization corresponds to placing the binary system at some distance. For the purpose of discussion, suppose that this distance is 15 megaparsecs (i.e., choosing $T(t)$ to be the strain produced by an optimally-oriented two $\times$ 1.4 $M_\odot$ system at a distance of 15 megaparsecs). If we then detect a signal with a signal-to-noise ratio $S/N=30$, this corresponds to detecting an optimally-oriented source at a distance of half a megaparsec. Note that the normalization we have chosen has r.m.s. noise $\sqrt{\langle N^2 \rangle}= 1$, and therefore the signal and signal-to-noise values are equal.

The functions correlate() and productc() are designed to perform this type of optimal filtering. We document these routines in the following section and in Section [*], then provide a simple example of an optimal filtering program.

There is an additional complication, arising from the fact that the gravitational radiation from a binary inspiral event is a linear combination of two possible orbital phases, as may be seen by reference to equations ([*]) and ([*]). Thus, the strain produced in a detector is a linear combination of two waveforms, corresponding to each of the two possible ($0^\circ$ and $90^\circ$) orbital phases:

\begin{displaymath}
h(t) = \alpha T_{0}(t) + \beta T_{90}(t) + n(t).
\end{displaymath} (6.14.77)

Here the subscripts $0$ and $90$ label the two possible orbital phases; the constants $\alpha$ and $\beta$ depend upon the distance to the source (and the normalization of the templates) and the orientation of the source relative to the detector. Thus $ T_{0}(t) $ denotes the (suitably normalized) function $h_c (t)$ given by equation ([*]) and $ T_{90}(t) $ denotes the (suitably normalized) function $h_s(t)$ given by equation ([*]).

In the optimal filtering, we are now searching for a pair of amplitudes $\alpha$ and $\beta$ rather than just a single amplitude. One can easily do this by choosing a filter function which corresponds to a complex-valued signal in the time-domain:

\begin{displaymath}
\tilde Q(f) = 2 \>
{\tilde T_{0}(f) - i \tilde T_{90}(f) \over S_h(\vert f\vert)} \; {\rm e}^{2 \pi i f t_0}.
\end{displaymath} (6.14.78)
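As a toy check that this single complex filter recovers both amplitudes at once, the sketch below builds the two quadratures as pure tones (which are exactly orthogonal over a full period), normalizes each per the convention of equation (6.14.74), and filters a noise-free signal. Every waveform, spectrum, and amplitude here is made up for illustration:

```python
import cmath, math

def ft(x):
    """Naive DFT matching the convention h~(f) = \\int h(t) e^{2 pi i f t} dt (dt = 1)."""
    N = len(x)
    return [sum(x[j] * cmath.exp(2j * math.pi * j * k / N) for j in range(N))
            for k in range(N)]

N = 64
df = 1.0 / N
Sh = [1.0] * N    # toy white spectrum (assumption)

# two quadratures: pure tones at bin 5, exactly orthogonal over N samples
T0t = ft([math.cos(2 * math.pi * 5 * j / N) for j in range(N)])
T90t = ft([math.sin(2 * math.pi * 5 * j / N) for j in range(N)])

def normalize(Tt):
    """Enforce the convention (T/S_h, T/S_h) = 1/2 of Eq. (6.14.74)."""
    n2 = sum(abs(v) ** 2 / s for v, s in zip(Tt, Sh)) * df
    return [math.sqrt(0.5 / n2) * v for v in Tt]

T0t, T90t = normalize(T0t), normalize(T90t)

alpha, beta = 0.7, -1.3                                        # made-up amplitudes
ht = [alpha * a + beta * b for a, b in zip(T0t, T90t)]         # noise-free data
Qt = [2 * (a - 1j * b) / s for a, b, s in zip(T0t, T90t, Sh)]  # Eq. (6.14.78), t_0 = 0

S = sum(h * q.conjugate() for h, q in zip(ht, Qt)) * df        # recovers alpha + i beta
half_qq = 0.5 * sum(abs(q) ** 2 * s for q, s in zip(Qt, Sh)) * df  # <|N|^2> = 2
```

The result is $\alpha + i\beta$: the real part of $S$ measures the $0^\circ$ amplitude and the imaginary part the $90^\circ$ amplitude, while $\frac{1}{2}(\tilde Q,\tilde Q)$ reproduces the expected noise-squared of 2 derived below.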

We will assume that the individual filters for each polarization are normalized by the convention just described, and that they are orthogonal:
\begin{displaymath}
\left( {\tilde T_{0} \over S_h} , {\tilde T_{0} \over S_h} \right) =
\left( {\tilde T_{90} \over S_h} , {\tilde T_{90} \over S_h} \right) = {1 \over 2},
\qquad
\left( {\tilde T_{0} \over S_h} , {\tilde T_{90} \over S_h} \right) = 0.
\end{displaymath} (6.14.79)

Note that $T_{0}$ and $T_{90}$ are only exactly orthogonal in the adiabatic limit where they each have many cycles in any frequency interval $df$ in which the noise power spectrum $S_h(f)$ changes significantly. Also note that the filter function $\tilde Q(f)$ does not correspond to a real filter $Q(t)$ in the time domain, since $\tilde Q(-f) \ne \tilde Q^*(f)$, so that the signal
\begin{displaymath}
S(t_0) = \left( { \tilde h \over S_h}, \tilde Q \right)
\end{displaymath} (6.14.80)

is a complex-valued function of the lag $t_0$. We define the noise as before, by $N= S - \langle S \rangle$. Its mean-squared modulus is
\begin{eqnarray*}
\langle \vert N \vert ^2 \rangle & = & {1 \over 2} (\tilde Q,\tilde Q) \\
& = & 2 \left( {\tilde T_{0} -i \tilde T_{90} \over S_h} ,
{\tilde T_{0} -i \tilde T_{90} \over S_h} \right) \\
& = & 2 \left[ \left( {\tilde T_{0} \over S_h} , {\tilde T_{0} \over S_h} \right)
+ \left( {\tilde T_{90} \over S_h} , {\tilde T_{90} \over S_h} \right) \right] = 2,
\end{eqnarray*} (6.14.81)

where we have made use of the orthonormality relation ([*]). This value is twice as large as the expected noise-squared in the case of a single phase waveform considered previously.

The expected signal at zero lag $t_0=0$ is

\begin{displaymath}
\langle S \rangle = \left( { \langle \tilde h \rangle \over S_h} , \tilde Q \right)
= 2 \left( {\alpha \tilde T_{0} + \beta \tilde T_{90} \over S_h} ,
{\tilde T_{0} -i \tilde T_{90} \over S_h} \right) =
\alpha + i \beta.
\end{displaymath} (6.14.82)

Hence the signal-to-noise ratio is
\begin{displaymath}
{\langle S \rangle \over \sqrt{\langle \vert N\vert^2 \rangle} } = \frac{1}{\sqrt{2}}
(\alpha + i \beta).
\end{displaymath} (6.14.83)

In the absence of a signal ($\langle S \rangle=0$), the expected value of the square of this quantity (from the definition of $N$) is unity:
\begin{displaymath}
{\langle \vert S\vert^2 \rangle \over \langle \vert N\vert^2 \rangle} = 1.
\end{displaymath} (6.14.84)

In the presence of a signal, the squared signal-to-noise ratio is
\begin{displaymath}
{\vert \langle S \rangle \vert^2 \over \langle \vert N\vert^2 \rangle} = \frac{1}{2}
(\alpha^2 + \beta^2).
\end{displaymath} (6.14.85)

In the case discussed previously, for a single-phase signal, we pointed out that there was general agreement on the definition of the signal-to-noise value. In the present case (a complex or two-phase signal) there is no such agreement. The definition given above is the one used by most experimenters: it is a quantity whose square has expected value unity in the absence of a signal. However, another definition often used in this subject is
\begin{displaymath}
\left( {S \over N} \right)_{\rm Cutler\ and\ Flanagan} =
\max_{t_0} \vert S(t_0) \vert =
\sqrt{2} \left( {S \over N} \right)_{\rm GRASP}.
\end{displaymath} (6.14.86)

Note that because ${S(t_0)}$ is complex, we maximize the modulus. This is a quantity whose expected squared value, in the absence of a signal, is 2. To avoid confusion in the future, we will use a different symbol for this quantity, and define
\begin{displaymath}
\rho \equiv
\left( {S \over N} \right)_{\rm Cutler\ and\ Flanagan} =
\left( {S \over N} \right)_{\rm Thorne} =
\sqrt{2} \left( {S \over N} \right)_{\rm GRASP}.
\end{displaymath} (6.14.87)

This quantity is equal to the signal value alone (rather than the signal value divided by the expected noise).

Another way to understand these two different choices of normalization, and to understand why the conventional choice of normalization is $\rho$, is that conventionally one treats the two-phase case in the same way as the single-phase case, but regards ${S \over N}$ as a function of a phase parameter $\theta = \arctan(\beta/\alpha)$. For any fixed $\theta$, ${S \over N}(\theta)$ has rms value one, but the statistic $\max_\theta {S \over N} (\theta)$ has rms value $\sqrt{2}$.

The attentive reader will notice that, with our choice of filter and signal normalizations, we have lost a factor of $\sqrt{2}$ in the signal-to-noise ratio compared to the case where we were searching for only a single phase of the waveform. The expected signal strength in the presence of a 0-phase signal is the same as in the single-phase case, but the expected (noise)${}^2$ has been doubled. This is because of the additional uncertainty associated with our lack of information about the relative contributions of the two orbital phases. In other words, if we know in advance that a waveform is composed entirely of the zero-degree orbital phase, then the expectation value of the signal-to-noise ratio, determined by equation ([*]), would be given by $\langle S \rangle/N = \sqrt{2} \alpha$. However, if we need to search for the correct linear combination of the two possible phase waveforms, then the expectation value of the signal-to-noise ratio is reduced to $\langle S \rangle/N = \alpha$. As we will see in the next section, this reduction in signal-to-noise ratio does not significantly affect our ability to detect signals with a given false alarm rate.

Bruce Allen 2000-11-19