My favorite ten-line computer program
August 13, 2012
I write a lot of software, but here is my favorite ten-line program:
float b = 1.0f-a;
float sx = 1.0f, sy = a;
float yi = 0.0f;
y[0] = yi = sy*yi+sx*x[0];
for (int i=1; i<n-1; ++i)
  y[i] = yi = a*yi+b*x[i];
sx /= 1.0f+a; sy /= 1.0f+a;
y[n-1] = yi = sy*yi+sx*x[n-1];
for (int i=n-2; i>=0; --i)
  y[i] = yi = a*yi+b*y[i];
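The ten lines assume that the parameter a, the length n, and the arrays x and y are already declared. Wrapped in a method (the class and method names below are mine, not from the original), the program might be used like this:

```java
import java.util.Arrays;

public class ExpSmooth {

    // The ten-line program, wrapped as a method.
    // a is the smoothing parameter, x the input, y the output;
    // x and y may be the same array (in-place filtering).
    static void smooth(float a, int n, float[] x, float[] y) {
        float b = 1.0f-a;
        float sx = 1.0f, sy = a;
        float yi = 0.0f;
        y[0] = yi = sy*yi+sx*x[0];
        for (int i=1; i<n-1; ++i)
            y[i] = yi = a*yi+b*x[i];
        sx /= 1.0f+a; sy /= 1.0f+a;
        y[n-1] = yi = sy*yi+sx*x[n-1];
        for (int i=n-2; i>=0; --i)
            y[i] = yi = a*yi+b*y[i];
    }

    public static void main(String[] args) {
        // A constant sequence is already as smooth as can be; because
        // the weights sum to one, smoothing leaves it unchanged.
        float[] x = new float[10];
        Arrays.fill(x, 3.0f);
        smooth(0.932f, x.length, x, x);  // in-place
        System.out.println(Arrays.toString(x));  // all values remain (nearly) 3.0
    }
}
```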
Simon Luo, a Mines graduate student, recently helped to remove some clutter in a slightly longer version that I wrote a while ago. Yes, we are cheating a little bit by putting two statements on one line, but our program still looks clean, with plenty of symmetry that makes us feel good. Besides, a real cheater would write this code in a one-line program!
This program is valid in the programming languages Java, C++, and (with a few minor changes) C. The program inputs are a float parameter a and a sequence of n float values x[i], for i = 0, 1, ..., n-1. The program output is a sequence of float values y[i] with the same length n.
Before I explain why this is my favorite ten-line program, let's first look at an example of what it does. In the following figure, the input sequence x (red) was recorded in a lab experiment. We suspect that the rapid fluctuations are noise, because they do not fit any reasonable model of the signal we were expecting. The output sequence y (black) computed with our ten-line program better approximates the expected signal.
A noisy input sequence (red) after smoothing (black) with the ten-line program.
Each output value y[i] is a weighted average of input values x[j]. The weights are proportional to a^|i-j|, so that (for 0 <= a <= 1) they decrease exponentially with increasing |i-j|. In this example, the parameter a = 0.932.
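For output samples far from the ends, the normalized weights are w[k] = ((1-a)/(1+a))*a^|k|, where k = i-j; the normalization constant (1-a)/(1+a) is not stated explicitly above, but follows from summing the two-sided geometric series. A quick sketch (in Java, with names of my own choosing) that checks these weights sum to one:

```java
public class ExpWeightsCheck {
    public static void main(String[] args) {
        double a = 0.932;  // the parameter used in the example above
        // Interior weights of the two-sided exponential filter:
        // w[k] = (1-a)/(1+a) * a^|k|.
        double c = (1.0-a)/(1.0+a);
        double sum = c;                  // the k = 0 term
        for (int k=1; k<=2000; ++k)      // the +k and -k terms
            sum += 2.0*c*Math.pow(a, k);
        System.out.println(sum);         // approaches 1 as more terms are added
    }
}
```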
In effect, our ten-line program is a smoothing filter. The weights are normalized so that they sum to one, which means that when this filter is applied to a sequence with constant input value x[i] = constant (already as smooth as can be), the output values will be the same: y[i] = constant.
The following figure illustrates weights for the exponential filter and two alternative smoothing filters with comparable widths.
 Impulse responses of exponential (black), Gaussian (blue) and boxcar (red) smoothing filters. For these filters, each output sample is a weighted average of nearby input samples; shown here are the weights.
All three of these smoothing filters can be implemented efficiently, with computational cost that is independent of their width. But efficient (recursive) implementations of both the Gaussian and boxcar filters require more complex programs, especially when carefully handling the ends of input and output sequences.
A less efficient (non-recursive) but simpler implementation of the boxcar filter is often used, perhaps because it more obviously computes a moving average. But the abrupt transition from nonzero weight to zero weight makes no sense for such an average, and it is hard to beat the simplicity of our ten-line program.
To summarize, the exponential filter
- gives more weight to input samples located (in time or space) near the output sample than to input samples located farther away
- guarantees that all weights are non-negative, unlike many recursive implementations of Gaussian filters
- can be applied in-place, so that output values y[i] replace input values x[i] stored in the same array, which is especially useful when filtering large arrays of arrays that consume a lot of memory
- requires only six floating-point operations (multiplies and adds) per output sample y[i], and this computational cost is independent of the extent of smoothing
- can be implemented with a simple ten-line computer program that includes careful treatment of the ends of input and output sequences
Careful treatment of the ends is important. When averaging input values to compute output values near the ends, our ten-line program listed above implicitly assumes that input values beyond the ends are equal to the end values. This zero-slope assumption is often appropriate.
Another common assumption is that input values beyond the ends are zero, and the figure below shows how this zero-value assumption may be inappropriate, as the decrease in the output sequence near time zero seems inaccurate.
A noisy input sequence (red) after smoothing (black) with the recursive two-sided exponential filter. Input values off the ends have been assumed to be zero, which seems unreasonable for the left end where times are nearly zero.
But suppose that the data we are smoothing are known to have zero mean. In this case, the zero-value assumption may be more appropriate, and our ten-line program becomes
float b = 1.0f-a;
float sx = b, sy = a;
float yi = 0.0f;
y[0] = yi = sy*yi+sx*x[0];
for (int i=1; i<n-1; ++i)
  y[i] = yi = a*yi+b*x[i];
sx /= 1.0f+a; sy /= 1.0f+a;
y[n-1] = yi = sy*yi+sx*x[n-1];
for (int i=n-2; i>=0; --i)
  y[i] = yi = a*yi+b*y[i];
Can you see the difference? It is not obvious, but look carefully at the second line. That's it! A simple change to the initialization of one variable switches our treatment of the ends from zero-slope to zero-value.
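To see the effect of that one changed initialization, we can run both versions on a constant sequence (a sketch in Java; the method and flag names are mine). The zero-slope version preserves the constant, while the zero-value version decays toward zero near the ends, as the figures above suggest:

```java
import java.util.Arrays;

public class EndEffects {

    // The ten-line program, with the end treatment selected by zeroSlope.
    static void smooth(float a, boolean zeroSlope, float[] x, float[] y) {
        int n = x.length;
        float b = 1.0f-a;
        float sx = zeroSlope ? 1.0f : b;  // the only difference between the two versions
        float sy = a;
        float yi = 0.0f;
        y[0] = yi = sy*yi+sx*x[0];
        for (int i=1; i<n-1; ++i)
            y[i] = yi = a*yi+b*x[i];
        sx /= 1.0f+a; sy /= 1.0f+a;
        y[n-1] = yi = sy*yi+sx*x[n-1];
        for (int i=n-2; i>=0; --i)
            y[i] = yi = a*yi+b*y[i];
    }

    public static void main(String[] args) {
        float[] x = new float[50];
        Arrays.fill(x, 1.0f);
        float[] ys = new float[50], yv = new float[50];
        smooth(0.932f, true,  x, ys);  // zero-slope ends
        smooth(0.932f, false, x, yv);  // zero-value ends
        // Zero-slope preserves the constant; zero-value decays near the ends.
        System.out.println("zero-slope y[0] = " + ys[0]);
        System.out.println("zero-value y[0] = " + yv[0]);
    }
}
```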
One final tip. The relationship between the extent of smoothing and the parameter a is not at all intuitive, so I rarely specify this parameter directly. Instead, I compute the parameter a to obtain an exponential filter that (for low frequencies) is comparable to a Gaussian filter with a specified half-width sigma (measured in samples), using:
float ss = sigma*sigma;
float a = (1.0f+ss-sqrt(1.0f+2.0f*ss))/ss;
So, in practice, this adds two lines to our ten-line program.
And if you would rather think in terms of the integer half-width m of a boxcar filter, then (again, for low frequencies) the half-width sigma of the comparable Gaussian filter can be computed using:
float sigma = sqrt(m*(m+1)/3.0f);
For the boxcar filter with half-width m = 10 displayed in the figure above, these expressions yield sigma = 6.06 and a = 0.792.
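Putting the two expressions together (a small check in Java; the class name is mine), we can reproduce those numbers:

```java
public class SmoothParams {
    public static void main(String[] args) {
        int m = 10;  // integer half-width of the boxcar filter
        // Half-width of the comparable Gaussian filter ...
        float sigma = (float)Math.sqrt(m*(m+1)/3.0f);
        // ... and the corresponding exponential-filter parameter a.
        float ss = sigma*sigma;
        float a = (1.0f+ss-(float)Math.sqrt(1.0f+2.0f*ss))/ss;
        System.out.printf("sigma = %.2f, a = %.3f%n", sigma, a);
        // prints: sigma = 6.06, a = 0.792
    }
}
```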