Understand the convolution theorem

Peter Barrett Bryan
4 min read · Jul 21, 2023


Learn the intuitions behind the convolution theorem visually!

Demonstration of the convolution theorem. Image by author.

What is the convolution theorem?

From Wikipedia…

In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.

In imaging settings, that means that convolving a kernel with an image is equivalent to taking the Fourier transforms of both, multiplying them point-wise (a Hadamard product), and transforming the result back to pixel space. So, more simply:

The Fourier transform of the convolution of two functions is equal to the product of their individual Fourier transforms
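
If you want to convince yourself numerically before looking at any pictures, here is a minimal NumPy sketch (mine, not from the original post) that checks the theorem on small random arrays. One caveat: the discrete Fourier transform pairs with circular convolution, so the pixel-space side below uses wrap-around indexing; the array size is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
image = rng.standard_normal((n, n))
kernel = rng.standard_normal((n, n))

# Circular convolution computed directly from its definition:
# conv[y, x] = sum over (i, j) of image[i, j] * kernel[(y - i) % n, (x - j) % n]
i, j = np.indices((n, n))
spatial = np.array(
    [[np.sum(image * kernel[(y - i) % n, (x - j) % n]) for x in range(n)] for y in range(n)]
)

# Convolution theorem: the FT of the convolution equals the pointwise
# (Hadamard) product of the individual FTs.
lhs = np.fft.fft2(spatial)
rhs = np.fft.fft2(image) * np.fft.fft2(kernel)
print(np.allclose(lhs, rhs))  # True
```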

Why a Gaussian kernel?

We could use any kernel for this demo, but Gaussians have a couple of nice properties. First, I bet you already know what a Gaussian looks like. Second, the effect of a Gaussian convolution is relatively straightforward: an image gets smoother (i.e., blurrier) as a function of the Gaussian’s standard deviation. Third — and most important for this demo — the Fourier transform (FT) of a Gaussian is itself a Gaussian!
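
That last property is easy to verify. The sketch below (my own, with an arbitrary choice of standard deviation and grid size) samples a 1D Gaussian and compares its FFT magnitude against the closed-form result: a Gaussian with standard deviation 1 / (2πσ) in cycles per sample. Narrow in pixel space means wide in frequency, and vice versa.

```python
import numpy as np

sigma = 2.0                  # standard deviation in samples (arbitrary choice)
x = np.arange(-128, 128)
gaussian = np.exp(-x**2 / (2 * sigma**2))

# FFT magnitude, with the zero-frequency bin shifted to the center of the array.
spectrum = np.abs(np.fft.fftshift(np.fft.fft(np.fft.ifftshift(gaussian))))

# Closed form: the FT of a Gaussian with std sigma is a Gaussian with
# std 1 / (2 * pi * sigma) in ordinary frequency (cycles per sample).
freqs = np.fft.fftshift(np.fft.fftfreq(len(x)))
expected = sigma * np.sqrt(2 * np.pi) * np.exp(-((2 * np.pi * sigma * freqs) ** 2) / 2)

print(np.max(np.abs(spectrum - expected)))  # tiny: the spectrum is itself a Gaussian
```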

How to read the visualization

Let’s start with just the first frame…

Demonstration of the convolution theorem, low standard deviation Gaussian. Image by author.

In the first subplot, I show a sample image in pixel space. In the second subplot, the two-dimensional FT of that image.

In the third subplot, I’ve plotted a 2D Gaussian kernel with a realllllly small standard deviation. In the fourth, the FT of the kernel.

Now things get interesting! If we convolve the kernel (3) with the image (1), we get the fifth subplot. Since the standard deviation is so small, the effect is barely visible. The image is just a tiny bit smoother. Finally, in the sixth subplot, I’ve taken the FT of the smoothed image (5). That is, I’ve plotted the FT (6) of the original image (1) convolved with the Gaussian kernel (3).
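
If you would like to reproduce a frame like this yourself, here is roughly how I would do it. This is my own sketch rather than the author's plotting code: it stands in SciPy's ascent test image for the sample image (scipy.datasets.ascent, which needs the optional pooch dependency; swap in any grayscale array you like), and the log scaling of the spectra is an assumption about how the FT panels are displayed.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.datasets import ascent          # stand-in sample image
from scipy.ndimage import gaussian_filter  # Gaussian convolution in pixel space

def log_spectrum(img):
    """Log-magnitude 2D FT, shifted so the zero-frequency bin sits at the center."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

image = ascent().astype(float)
sigma = 1.0  # small std: barely-visible smoothing; crank this up for heavy blur

# Build an explicit Gaussian kernel the same size as the image so subplots 3 and 4
# are directly comparable to subplots 1 and 2.
rows, cols = np.indices(image.shape)
cy, cx = image.shape[0] // 2, image.shape[1] // 2
kernel = np.exp(-((rows - cy) ** 2 + (cols - cx) ** 2) / (2 * sigma**2))
kernel /= kernel.sum()

blurred = gaussian_filter(image, sigma)  # subplot 5: the image convolved with the kernel

panels = [
    (image, "1: image"),
    (log_spectrum(image), "2: FT of image"),
    (kernel, "3: Gaussian kernel"),
    (log_spectrum(kernel), "4: FT of kernel"),
    (blurred, "5: convolved image"),
    (log_spectrum(blurred), "6: FT of convolved image"),
]
fig, axes = plt.subplots(2, 3, figsize=(12, 8))
for ax, (panel, title) in zip(axes.ravel(), panels):
    ax.imshow(panel, cmap="gray")
    ax.set_title(title)
    ax.axis("off")
plt.show()
```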

Building intuitions

Right off the bat, there are some features we can pick out. The FT of the kernel is white almost everywhere but drops off in the corners. In Fourier space, the highest-frequency content is far from the origin. If that sounds vaguely familiar but you need a refresher, give this a read:

It is visually clear that subplot 6 is the pointwise product of subplots 2 and 4. Things are starting to click! The slight blurriness of subplot 5 is there because convolving with the Gaussian kernel removed the highest-frequency content from the image.

Now, let’s crank up the standard deviation of the Gaussian in our kernel.

Demonstration of the convolution theorem, high standard deviation Gaussian. Image by author.

Intuitively, a larger standard deviation in our kernel will result in a blurrier image, and this is reflected in the FTs! The FT of our kernel is now only white near the origin. Subplot 6 is still the pointwise product of subplots 2 and 4. Convolving the Gaussian kernel with our image removed all the frequency content except the lowest frequencies (nearest the origin).
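
One way to make "all but the lowest frequencies" concrete is to measure how much spectral energy survives outside a small low-frequency disk as the kernel widens. A quick sketch of my own, using white noise (so the unblurred spectrum starts out roughly flat) and an arbitrary frequency cutoff:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))  # white noise: roughly flat spectrum to start

def high_freq_energy_fraction(img, cutoff=0.1):
    """Fraction of spectral energy at frequencies beyond `cutoff` cycles per pixel."""
    power = np.abs(np.fft.fft2(img)) ** 2
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    return power[np.hypot(fy, fx) > cutoff].sum() / power.sum()

for sigma in (0.5, 2.0, 8.0):
    print(sigma, round(high_freq_energy_fraction(gaussian_filter(image, sigma)), 4))
# Larger sigma -> the kernel's FT narrows around the origin -> less high-frequency
# energy survives the convolution.
```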

Putting it all together

Demonstration of the convolution theorem. Image by author.

Now, let’s gradually ratchet up the standard deviation and watch the effect continuously in Fourier space.

Why is this useful?

This is undeniably a neat party trick, but what does it mean practically? Well, first, it means that we can implement convolutions as FTs plus pointwise multiplication. Convolutions are expensive! In some settings, it is quicker to transform the image and the kernel, multiply them pointwise, and invert the result. That acceleration can be a big deal with large images and/or kernels.
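
To get a rough sense of the payoff, the sketch below (mine, with arbitrary sizes) compares SciPy's direct 2D convolution against its FFT-based implementation. For a large kernel, the FFT route is typically orders of magnitude faster while producing the same result.

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve
from timeit import timeit

rng = np.random.default_rng(0)
image = rng.standard_normal((512, 512))
kernel = rng.standard_normal((64, 64))  # large kernel: direct convolution scales badly

def direct():
    return convolve2d(image, kernel, mode="same")

def via_fft():
    return fftconvolve(image, kernel, mode="same")

print(np.allclose(direct(), via_fft()))          # same answer, up to floating-point error
print("direct :", timeit(direct, number=1), "s")
print("via FFT:", timeit(via_fft, number=1), "s")
```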

More generally, though, it means that if we can solve a problem in one domain, we have an answer in the other. This is immensely valuable when solving differential and integral equations.

Thanks for reading! I hope you come away with a more intuitive grasp of the convolution theorem.

