**The Matlab code used for the examples is available for FREE at the end of the post**
Every day thousands of kids across the world practice interpolation, and they do it for fun. The best part? They don’t even know it’s called interpolation; to them it’s just a game called Connect-the-Dots. You might remember it from your childhood. A teacher or parent would give you a white sheet of paper filled with dots numbered 1 to N, like the one shown in Figure 1 (thanks supercoloring.com). The goal is to recreate the image by sequentially connecting the dots. Children typically connect the dots in the most efficient manner: with a straight line. Once all the dots are connected, it’s easy to see the original image, as demonstrated in Figure 2.
What is Interpolation?
Interpolation is the process of estimating data between two or more samples. The dots in a Connect-the-Dots game can be viewed as samples taken from an illustration’s outline. The straight line drawn between two points represents an estimate of the missing data. This post lists several different interpolation methods and some of their applications.
Linear Interpolation
Linear interpolation uses two samples to estimate the missing data by finding the line that intersects both points. The missing data is estimated to lie on that line. The children in the opening example use linear interpolation to great effect. The equation for linear interpolation can easily be derived from the equation of a line and is

$$y(n+\mu) = (1-\mu)\,x(n) + \mu\,x(n+1)$$

where $y$ is the interpolated value, $x(n)$ and $x(n+1)$ are the two samples, and $\mu$ is the fractional sample time between them. There are a couple of important observations. First, $y$ is the output of a linear combination of samples, i.e. the input samples are only scaled and added instead of being raised to some power. As such, $y$ can be considered the output of a linear low-pass filter whose coefficients are $1-\mu$ and $\mu$. Second, the filter coefficients depend only on the desired sampling instant, not the samples themselves. If the desired sampling instant does not change, then the coefficients remain the same.
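As a quick worked example (the numbers are chosen purely for illustration and are not from the post’s figures): with samples $x(n) = 2$ and $x(n+1) = 6$ and a fractional sample time of $\mu = 0.25$,

$$y = (1 - 0.25)\cdot 2 + 0.25 \cdot 6 = 1.5 + 1.5 = 3.$$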
The above equation finds a single interpolated value, but there are many occasions where you want a higher-resolution dataset. We accomplish this by upsampling (inserting zeros in between samples) and low-pass filtering. The low-pass filter is found by sweeping the desired sampling instant and computing the filter coefficients. For example, suppose we wish to upsample our dataset by four. We’d insert 3 zeros in between all of the samples and sweep $\mu$ from 0 to 1 in increments of 0.25, as shown in Table 1.
| $\mu$ | $1-\mu$ (coefficient of $x(n)$) | $\mu$ (coefficient of $x(n+1)$) |
|---|---|---|
| 0 | 1 | 0 |
| 0.25 | 0.75 | 0.25 |
| 0.5 | 0.5 | 0.5 |
| 0.75 | 0.25 | 0.75 |
| 1 | 0 | 1 |
By offsetting the filter coefficients by the desired interpolant’s index (0,1,2,3 or 4) and combining them, we get the upsampled, linear interpolation filter shown in Figure 3.
This filter can now be used on the upsampled data set.
Linear Interpolation Shortcut
Instead of using the equation above to compute the filter coefficients, you can create the triangular filter by convolving two rects. Personally, this is my preferred method, and it can be accomplished by following the steps below; a short Octave/Matlab sketch follows the list.
- Upsample your data using an integer factor N.
- Create a rect whose width is the same as the upsample factor, e.g. if your upsample factor is four, create a rect that is four taps wide.
- Convolve the rect with itself to create your triangle pulse. This will produce a triangle pulse that is 2*N-1 taps wide.
- Scale the triangle pulse so that the largest tap is one.
- Filter your upsampled data from step one with your triangle pulse.
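Here is a rough Octave/Matlab sketch of those steps (the variable names and the test signal are my own, not the downloadable script from the post):

```matlab
% Linear interpolation via the triangle-filter shortcut
N  = 4;                           % upsample factor
n  = 0:19;
x  = sin(2*pi*0.05*n);            % example low-rate signal

xu = zeros(1, N*length(x));       % step 1: insert N-1 zeros between samples
xu(1:N:end) = x;

r  = ones(1, N);                  % step 2: rect as wide as the upsample factor
h  = conv(r, r);                  % step 3: triangle pulse, 2*N-1 taps wide
h  = h / max(h);                  % step 4: scale so the largest tap is one

y  = conv(xu, h);                 % step 5: filter the zero-stuffed data
y  = y(N : N + N*length(x) - 1);  % remove the filter's N-1 samples of delay
```

The resulting h should match the upsampled linear interpolation filter shown in Figure 3.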
Figure 4 illustrates linear interpolation on a sine wave. The top plot is a sine wave sampled at two times the Nyquist sampling rate where the red asterisks represent the samples. The bottom plot is the linearly interpolated approximation to the sine wave only using the red samples. As you can see, linear interpolation didn’t render the most accurate results.
Linear Interpolator Use Case
Linear interpolation is simple and computationally cheap. It’s great when only coarse approximations are needed or if your signal is already highly oversampled. It’s extremely quick and uses the fewest samples. That being said, for many applications linear interpolation doesn’t produce accurate enough results.
Piecewise Polynomial Interpolation
Piecewise polynomial interpolation is similar to linear interpolation; in fact, linear interpolation is just a specific form of piecewise polynomial interpolation. The difference is that instead of using a 1st-degree polynomial (linear) to estimate the missing data, you use an N-degree polynomial. Odd degrees are by far the most common, with 3rd-degree polynomials (also known as cubic interpolators) used most often. This is because odd-degree polynomials use an even number of samples to solve for the unknowns, allowing half of the samples to come before the desired interpolation point and half to come after it. Because cubic interpolators are so common, we’ll show what that looks like. The equation for a cubic polynomial is

$$y(\mu) = c_3\mu^3 + c_2\mu^2 + c_1\mu + c_0$$
Notice there are four unknowns: $c_3$, $c_2$, $c_1$, and $c_0$. The problem setup is the same as for the linear interpolation shown earlier, only now four samples are needed to solve for the four unknowns. Because inverting a 4×4 matrix is much more difficult and time-consuming, I’m just going to skip the algebra and give you the equation to find an interpolated value using a cubic interpolator.
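One common form of this equation, assuming a Lagrange (polynomial) fit through the four surrounding samples $x(n-1)$, $x(n)$, $x(n+1)$, and $x(n+2)$, with $0 \le \mu \le 1$ measured from $x(n)$, is

$$y(n+\mu) = -\frac{\mu(\mu-1)(\mu-2)}{6}\,x(n-1) + \frac{(\mu+1)(\mu-1)(\mu-2)}{2}\,x(n) - \frac{(\mu+1)\mu(\mu-2)}{2}\,x(n+1) + \frac{(\mu+1)\mu(\mu-1)}{6}\,x(n+2)$$

Setting $\mu = 0$ or $\mu = 1$ returns $x(n)$ or $x(n+1)$ exactly, which is a quick sanity check on the coefficients.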
Just as with the linear interpolator, the cubic interpolator can be viewed as a low-pass filter whose coefficients are the terms multiplying the $x$ samples. Sweeping $\mu$ from 0 to 1 in increments of 0.25 creates the filter response (the filter coefficients for a signal upsampled by 4) shown in Figure 5.
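As a sketch of that sweep (using the Lagrange-style coefficients above; this is my own illustration, not the post’s script): each value of $\mu$ gives one set of four coefficients, and running each set as an FIR filter over the original samples and interleaving the outputs is equivalent, up to a small delay, to zero-stuffing and applying the composite filter described above.

```matlab
% Cubic interpolation by sweeping the fractional offset mu
L  = 4;                                % upsample factor
n  = 0:19;
x  = sin(2*pi*0.05*n);                 % example low-rate signal

mu = (0:L-1)/L;                        % mu = 0, 0.25, 0.5, 0.75
y  = zeros(L, length(x));
for k = 1:L
    m = mu(k);
    % coefficients multiplying x(n-1), x(n), x(n+1), x(n+2)
    c = [-m*(m-1)*(m-2)/6,  (m+1)*(m-1)*(m-2)/2, ...
         -(m+1)*m*(m-2)/2,  (m+1)*m*(m-1)/6];
    y(k, :) = filter(fliplr(c), 1, x); % one polyphase branch per offset
end
y = y(:).';  % interleave branches -> upsampled output (delayed by ~2 input samples)
```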
As an example, I have taken a low-resolution sampled sine wave, upsampled it by 30, and applied the corresponding cubic interpolation filter. The results are shown in Figure 6.
As can be seen from the figure, the cubic interpolator does a better job of approximating the sine wave. You can see that the interpolated curve is rounded, unlike the linear approximation. Obviously, this still isn’t a very good approximation, but for some applications it works well. As you might have guessed, if you wanted a better approximation to the sine wave, you could keep increasing the polynomial degree, e.g. use a 5th- or 9th-degree polynomial.
Piecewise Polynomial Interpolator Use Case
Transmitters contain a local oscillator that mixes a baseband signal up to the correct center frequency. Receivers also contain a local oscillator to mix the received signal back down to baseband. However, because we live in the real world, the two oscillators are never at exactly the same frequency. They’ll be close to each other, but not perfectly the same. As a result, the samples generated by the analog-to-digital converter (ADC) are almost never at the desired sampling instant. Luckily, we can do some clever math to find the optimal sampling instant and then use cubic interpolation, or a higher-order polynomial filter, to find the desired sample.
Windowed Sinc Interpolation
So what is the best interpolating filter? The answer, derived from sampling theory (which is beyond the scope of this post), is the sinc interpolator. In fact, for a band-limited signal sampled at or above the Nyquist rate, a sinc interpolator would perfectly recreate the original signal. So why don’t we just use a sinc interpolator? Well, for a few reasons. The equation for the sinc function is

$$\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x}$$

where $x$ extends from $-\infty$ to $+\infty$. As far as I know, no computing device has been made that can compute an infinite number of points for a given filter. Furthermore, most real-life datasets we want to analyze have a starting or ending point. Thus, using the perfect sinc interpolator is physically impossible.
However, instead of trying to do the impossible, you could just truncate the function after it has sufficiently approached zero. Depending on the application, that might still require more samples than we wish. The other option is to truncate the sinc function and then apply a window to it so that spectral leakage is minimized. The post on DSP Related, found here, does a really good job of explaining this concept.
Using a windowed sinc is actually pretty easy to do. The first step is to sample the sinc function at intervals of one over the upsample factor, i.e. at $x = n/L$ for an upsample factor $L$. The next step is to take your favorite window and apply it to the sampled sinc function. I’ve shown the results of this in Figure 7.
The next step is to convolve the upsampled data set with the windowed sinc filter. I’ve shown the results of doing this in Figure 8. Here the upsample factor is 30 and I’ve selected 6 zero crossings for my sinc interpolator. As can be clearly seen, the sinc interpolator almost perfectly reproduces the sampled sine wave.
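Below is a hedged Octave/Matlab sketch of the windowed-sinc interpolator described above; the Hamming window, six zero crossings, upsample factor of 30, and test signal are choices made for illustration:

```matlab
% Windowed-sinc interpolation
L  = 30;                              % upsample factor
Z  = 6;                               % zero crossings on each side of the peak
n  = 0:19;
x  = sin(2*pi*0.05*n);                % example low-rate signal

t  = (-Z*L : Z*L) / L;                % sinc argument sampled every 1/L
h  = sin(pi*t) ./ (pi*t);             % sinc function ...
h(t == 0) = 1;                        % ... with the 0/0 point fixed to 1
M  = length(h);
w  = 0.54 - 0.46*cos(2*pi*(0:M-1)/(M-1));  % Hamming window
h  = h .* w;                          % windowed sinc (peak tap equals 1)

xu = zeros(1, L*length(x));           % zero-stuff by the upsample factor
xu(1:L:end) = x;
y  = conv(xu, h, 'same');             % interpolated, upsampled signal
```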
Windowed Sinc Interpolator Use Case
Windowed sinc interpolators commonly find application in fractional delay filters. Such filters are used when a dataset needs to be delayed by a fraction of a sample period. For example, the delay-and-sum time-domain adaptive beamformer works by carefully delaying the received signals from multiple antennas. The amount each signal is delayed will differ from one signal to the next and will almost always be some fraction of the sampling period. Thus a windowed sinc interpolator is needed. After appropriately delaying the signals, they are added together, creating nulls and gains in desired directions. See here for more info on adaptive beamformers.
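A minimal sketch of such a windowed-sinc fractional delay filter (the 0.3-sample delay, 21 taps, and Hamming window are assumptions for illustration):

```matlab
% Delay a signal by a fraction of a sample using a windowed, shifted sinc
d = 0.3;                               % desired fractional delay in samples
M = 21;                                % filter length (odd)
k = -(M-1)/2 : (M-1)/2;                % tap indices centered on zero
t = k - d;                             % shift the sinc by the fractional delay
h = sin(pi*t) ./ (pi*t);               % shifted sinc (t is never exactly 0 here)
w = 0.54 - 0.46*cos(2*pi*(0:M-1)/(M-1));   % Hamming window
h = h .* w;

n = 0:63;
x = sin(2*pi*0.05*n);
y = conv(x, h, 'same');                % approximately x delayed by d samples
```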
Fourier-based Interpolation
Chad Spooner from the cyclostationary.blog website suggested that we include a Fourier-based method of interpolation. Fourier-based interpolation is a pretty cool idea. We know that zero padding a signal in the time domain produces a more finely sampled spectrum in the frequency domain. Conversely, by zero padding in the frequency domain, we increase the sample rate in the time domain.
Interpolation using the Fourier-based method can be accomplished by the following steps:
- Transform the original time-domain signal to the Fourier domain using an N-point FFT, where N is the number of data samples
- Insert N*(L-1) zeros in the middle of the Fourier-domain dataset, where L is the upsample factor
- Apply an N*L-point inverse FFT to the upsampled Fourier-domain dataset
- Scale the new time-domain signal by L
The last step, scaling by the upsample factor, is needed because the energy contained in the signal is spread across more points.
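Here is a minimal Octave/Matlab sketch of those steps, assuming a real signal with an even number of samples; the only twist is that the Nyquist bin is split between the two halves so the spectrum stays conjugate-symmetric (the signal and upsample factor are my own choices):

```matlab
% Fourier-based interpolation: zero-pad the spectrum, inverse FFT, rescale
L  = 4;                               % upsample factor
n  = 0:15;
x  = sin(2*pi*0.1*n);                 % example signal, N = 16 samples
N  = length(x);

X  = fft(x);                          % step 1: N-point FFT
% step 2: insert zeros in the middle of the spectrum; the Nyquist bin
% X(N/2+1) is split in half so the zero-padded spectrum stays symmetric
Xu = [X(1:N/2), X(N/2+1)/2, zeros(1, N*(L-1)-1), X(N/2+1)/2, X(N/2+2:N)];
xu = ifft(Xu);                        % step 3: N*L-point inverse FFT
xu = L * real(xu);                    % step 4: scale by the upsample factor
```

Octave and Matlab also ship an interpft function that performs the same Fourier-domain interpolation, e.g. xu = interpft(x, L*length(x)).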
An example of zero padding is shown in Figure 9. The top plot shows the FFT of the original dataset sampled at 4 Hz (the same dataset shown in the top plot of Figure 10). Note that the x-axis is the index of the frequency-domain values without performing an fftshift; this is the ordering produced by Octave’s or Matlab’s FFT function.
The second step is shown in the bottom plot of Figure 9. Here the frequency-domain data has been zero-padded in the middle. It’s important that the dataset remain conjugate-symmetric so the interpolated time-domain signal stays real.
Finally, the last two steps are to take the inverse FFT of the upsampled frequency-domain dataset and scale by the upsample factor, the results of which are shown in Figure 10. Notice that the approximation is similar to the windowed sinc interpolation.
Fourier-based Interpolator Use Case
The time-domain interpolation methods mentioned above all allow for streaming-based processing; in other words, as each new sample comes in, a new interpolated value can be produced. However, with the Fourier-domain interpolation method described above, a block-processing approach must be taken. This means buffering N samples before interpolating, which requires more memory. Some applications do not require real-time operation and can handle the increased latency. The advantage is very accurate interpolation. And if N is a power of 2, you can take advantage of highly optimized FFT algorithms.
One thing to note: the discrete Fourier transform assumes the signal is exactly periodic over the number of samples used in the transform. If it is not, you will see Gibbs’ phenomenon produce errors at the beginning and end of the interpolated values.
Conclusion
This post has gone over the most basic interpolators; there are many others. Interpolators are used in many digital signal processing applications and are something every DSP engineer should be familiar with. So, whether you are computing the optimal sample for a receiver or delaying a signal in time, it’s all about connecting the dots.