Using Gaussian and sinc filters on elevation data

To begin, let me state that I am not an expert, or even that knowledgeable, about signal processing. My math skills are just enough to understand the concepts involved in using and selecting a particular filter. This is merely a collection of a couple of days' worth of reading and playing around. Much of the knowledge I absorbed came from the Wikipedia articles on the subjects, conversations with my friend Joe (a PhD student in AI at Oregon State), and one helpful EE professor at Oregon State. Living in a college town has benefits: I knocked on the doors of several professors who specialize in digital signal processing, and found a friendly one with 10 minutes of free time :)

This exploration of filters is an attempt to solve an accuracy problem when modeling elevation data for bicycle rides on http://ridewithgps.com. There are two scenarios for elevation data: before a user ever goes on a bike ride, they can plot out the route on a Google map, and an elevation profile is generated from the USGS-provided National Elevation Dataset. The second scenario occurs when a user comes back from a ride and uploads the logged data from their GPS unit. This log file includes elevation data, which, in the case of the Garmin 305, comes from a fairly accurate barometric altimeter. The two datasets are inaccurate in different ways. The USGS data is highly accurate for the points in the dataset; however, those points are spaced 10 meters apart. As a result, asking for an elevation point on a road that runs along a steep hillside may return a point that is 20 meters above or below the actual requested location. Over the course of plotting a longer route, the total can be off by 2000 feet in some cases! There are ways to improve this, and I am exploring interpolation when querying the elevation dataset DEMs, but that is another topic for another day.

The logged elevation data has a different problem: the Garmin saves the raw data, which has some variation up and down due to sensor accuracy. As a result, if you ride along a perfectly flat road for 50 miles (zero elevation gain and loss), the logged data will jitter up and down by 0.5 to 1 meter or so. When calculating the elevation gain and loss of this flat ride, we can come up with upwards of 1000 feet of gain and loss! So, in order to accurately calculate a ride's (or a planned route's) elevation gain and loss, we must smooth out these jitters.
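
To see why that happens, here is a rough sketch (not the actual ridewithgps code) of the naive gain calculation: sum every positive difference between consecutive elevation points. With noisy data, every upward wiggle counts as climbing, which is where the phantom gain comes from.

# Naive elevation gain: the sum of all positive deltas between consecutive points.
def naive_gain(eles)
  eles.each_cons(2).inject(0.0) do |gain, (a, b)|
    delta = b - a
    delta > 0 ? gain + delta : gain
  end
end

# A perfectly flat 50 mile ride, logged with roughly +/- 0.5 meters of jitter
flat_with_jitter = Array.new(800) { 100.0 + (rand - 0.5) }
puts naive_gain(flat_with_jitter) # a surprisingly large total for a dead flat road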

Above is an example of two datasets with a filter applied to each. Keep in mind this is a zoomed-in section of the dataset, so the jaggedness is exaggerated. The green line is the raw USGS data, and you can see how jittery it is. The red line running through it is a sinc-filtered dataset, with a maximum window size of 21 (it can shrink dynamically) and a frequency of 0.04. The jagged purple line is raw elevation data taken from my Garmin 305 log. The altimeter has a step size of 0.4805 meters and an accuracy of something like +/- 1 meter. As you can see, it is a much cleaner dataset than the USGS data. The blue line running through it is the sinc-filtered data. When calculating the gain/loss of the filtered data, we get a much more accurate representation of the real gain and loss. Even though the USGS data looks like it is a long way off from the actual data, the gain and loss numbers average out to be fairly close to reality, especially when graphing a longer route.

Now onto the details! The three filters I am about to describe all use a nearly identical programming implementation. They are window filters, which means you take a neighborhood of points surrounding the central point that you would like to filter, and use them to generate the new central point value. The simplest of these is a central moving average. With a CMA, you average all the points in your window and assign the result to the central point. This is a quick and easy filter that produces decent results. More complex filters use the same concept, but instead of taking a simple average of all points in the neighborhood, we take a weighted average, where the weights of the surrounding points are generated by different mathematical functions. As a result, with the same code I can merely choose a different function to generate a “kernel” with different weights. This kernel is just a set of n weights, where n is the length of my window. So, for a central moving average, the weights would be equal: with a 5-point window, each weight in the kernel would be 0.20. For something like a Gaussian filter, the 5-point kernel would look something like [0.01, 0.15, 0.3, 0.15, 0.01], where those made-up values are points on the Gaussian function.
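
To make the kernel idea concrete, here is a minimal sketch (apply_kernel is just an illustrative helper, and the edge handling is simplified compared to the real filter further down): the filtering step is the same weighted average no matter which kernel you plug in.

# A 5-point central moving average kernel: all neighbors weighted equally.
cma_kernel = Array.new(5, 1.0 / 5) # => [0.2, 0.2, 0.2, 0.2, 0.2]

# Apply any kernel to the point at index i: a weighted average of its neighborhood.
# Here the window is simply clamped at the array edges, which is cruder than the
# dynamic windowing used in the full filter below.
def apply_kernel(values, i, kernel)
  half = kernel.length / 2
  kernel.each_with_index.inject(0.0) do |sum, (weight, k)|
    j = (i - half + k).clamp(0, values.length - 1)
    sum + values[j] * weight
  end
end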

I explored both a sinc filter and a Gaussian filter when working with my elevation data. There are more complex and accurate techniques available, but they are serious overkill for the problem I am solving. Additionally, the more powerful signal processing techniques require regularly sampled datasets, which we do not have (explained below). In order to use those, we would need to fill in our datasets with interpolated data, which starts to get messy.

You may recognize the common bell-shaped curve, which is apparent when you graph the kernel created by the Gaussian function:

The Gaussian filter outputs a “weighted average” of each point's (or pixel's, since filtering is typically used in image manipulation and recognition) neighborhood, with larger weights toward the central point. These weights are generated by the function shown above, where x is the offset from the center of the kernel we are creating and sigma is the standard deviation. The central point in the kernel has the strongest weight. What does all this mean? Simply put, a Gaussian filter provides gentler smoothing and preserves edges better than a mean filter. Intuitively this makes sense: we are saying the immediately neighboring points are much more relevant when identifying the real value of the current point, and as points get further from the current point their values become less relevant. The filter is tuned by changing the window size and the standard deviation, and the standard deviation is chosen to match the standard deviation of the noise around the underlying signal.
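
The Gaussian kernel generator itself is left out of the code at the end of this post, but as a rough sketch (not the code running on the site), a drop-in replacement following the same size + 1 convention as create_kernel below could look like this:

# Sketch of a Gaussian kernel: weights follow exp(-x^2 / (2 * sigma^2)), where x is
# the offset from the center of the window, normalized so the weights sum to 1.
def gaussian_kernel(size, sigma)
  half = size / 2
  weights = (0..size).map do |i|
    x = i - half
    Math.exp(-(x**2) / (2.0 * sigma**2))
  end
  sum = weights.inject(0.0) { |s, w| s + w }
  weights.map { |w| w / sum }
end

gaussian_kernel(4, 1.0) # => 5 weights, peaked at the center, summing to 1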

The sinc filter is implemented identically, though we use a different function to generate our kernel of weights. In its raw mathematical form, this filter is considered an ideal low-pass filter, with a perfect cutoff. Without going into much detail about a topic I don't know much about, the sinc filter performs better when dealing with elevation sets that have disparate accuracies. This is important, since I need to smooth both the altimeter-recorded data (accurate and clean) and the USGS-provided data (inaccurate and noisy). The sinc filter is tuned by changing the window size (like the Gaussian) and a frequency value, f. Electrical engineers choose f based on some logic, but I do not know how to apply that logic to my dataset (since it is not an actual signal), so instead I found a good value of f by trial and error.

The graph of a sinc function:

The sinc function used in the filter is a windowed sinc: the sinc function multiplied by a window function (a Hamming window in the code below), an approach developed by some smart EEs in the 1960s. For more details, check out the excellent intro at dspguide.com.

Selecting different standard deviations (or cutoff frequencies) and window sizes will change the profile of the kernel, and thus the resulting smoothing of your data. If you have very tight sampling (your data points are close together), a larger window size can be beneficial. If your data points are spaced far apart, then a smaller window size is needed to avoid giving weight to less relevant (far away and unrelated) surrounding points. This presents an interesting problem for the data I am attempting to smooth: I do not have a regular sample rate for these elevation points. Many GPS units have a “smart recording” feature, so they only record data points when necessary. If I am riding along a straight line with no change in heading or elevation, the Garmin 305 puts down something close to 1 point every 16 seconds. As I make a turn, my logging rate approaches 1 point per second. Similarly, when drawing a route on the Google map, the returned lat/lon points are irregularly spaced. With this in mind, we need to consider a dynamic window size. If we link the window size to a distance rather than a number of points, we can achieve a more consistent weighting of relevant points with respect to our horizontal scale. This can go as far as dropping filtering entirely when the distance between points is large, by using a window size of 1.

Below is the code, in Ruby, for the sinc filter. The Gaussian is left up to the reader, but as sketched above, it is a fairly simple drop-in replacement.

class SincFilter
  def initialize(start_n = 10, win_width = 200, freq = 0.04)
    @start_n = start_n # initial window size in points; shrunk dynamically based on point density
    @win_width = win_width # maximum width of the window in meters
    @freq = freq # cutoff frequency, found by trial and error
  end

  def filter(dists, eles)
    sincify(eles, dists)
  end


  private

  # If the kernel weights do not sum to 1, every filtered value gets scaled down a bit,
  # so this is a cheap way to normalize the weights so they sum to 1.
  def normalize_kernel(kern)
    norm = kern.inject(0.0) { |res, val| res + val }
    kern.map { |val| val / norm }
  end

  # The windowed-sinc kernel: sinc weights shaped by a Hamming window.
  # Note: returns size + 1 weights, centered at index size / 2.
  def create_kernel(size)
    return [1] if size == 1
    kern = []

    (0..size).each do |i|
      # window function term
      kern[i] = 0.54 - 0.46 * Math.cos(2 * Math::PI * i / size)
      x = i - size / 2
      if x == 0
        # limit of sin(2*pi*freq*x) / x as x approaches 0
        kern[i] *= 2 * Math::PI * @freq
      else
        kern[i] *= Math.sin(2 * Math::PI * @freq * x) / x
      end
    end

    kern
  end

  # Now we calculate a weighted average for each point and its neighbors,
  # using our kernel values as the weights.
  def sincify(eles, dists)
    kern = create_kernel(@start_n)
    result = []

    eles.each_with_index do |point, i|
      n = @start_n
      z = n / 2
      s = i - z
      e = i + z
      ks = 0
      ke = n

      #pick our start and end index such that it is within our array bounds
      while s < 0 do s += 1; ks += 1 end
      while e >= eles.length do e -= 1; ke -= 1 end

      # Dynamically shrink the window until it spans no more than @win_width meters,
      # trimming whichever side sticks out farther from the center point i.
      dist = dists[e] - dists[s]
      while dist > @win_width do
        unless s + 1 > i or e - i > i - s
          s += 1
          ks += 1
        end
        unless e - 1 < i or e - i < i - s
          e -= 1 
          ke -= 1
        end
        dist = dists[e] - dists[s]
      end

      neighbors = []
      # Renormalize the kernel slice: if the weights do not sum to 1, the filtered point shrinks.
      used_kern = normalize_kernel(kern[ks..ke])
      (s..e).each { |j| neighbors << eles[j] unless j < 0 or j >= eles.length }

      val = 0
      neighbors.each_with_index { |pt, k| val += pt * used_kern[k] }
      result << (val * 10000.0).round / 10000.0 # cheap rounding; no use for more than 4 decimal places
    end
    return result
  end
end
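
Usage looks something like this; the arrays are made up, but dists is expected to be cumulative distance along the route in meters, matched one-to-one with the elevations in eles:

# Cumulative distance (meters) and raw elevation (meters) for a short stretch of road.
dists = [0, 35, 70, 120, 180, 230, 300, 370, 430, 500]
eles  = [100.0, 100.4, 99.9, 100.6, 101.2, 100.8, 101.5, 102.1, 101.7, 102.4]

filter = SincFilter.new(6, 200, 0.04) # smaller window than the default, since this sample is short
smoothed = filter.filter(dists, eles)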