
How to use Upsample for upsampling with PyTorch

December 28, 2021 by Chris

Within computer vision, upsampling is a relatively common practice these days. Whereas Convolutional layers and Pooling layers make inputs smaller - that is, they downsample the inputs - we sometimes want to perform the opposite as well.

This is called Upsampling, and in today's tutorial you're going to learn how you can perform upsampling with the PyTorch deep learning library.

Upsampling is commonly used within encoder-decoder architectures and within Generative Adversarial Networks, such as StyleGAN.

In today's tutorial, we will take a look at three different things:

  1. What upsampling involves. Conceptually, and very briefly, we're taking a look at what happens when an image is upsampled.
  2. The PyTorch Upsample layer. We take a look at how upsampling is implemented within PyTorch, one of today's leading deep learning libraries.
  3. A PyTorch based Upsample example. You will also move from theory into practice, by learning how to perform upsampling within PyTorch by means of an example.

Are you ready? Let's take a look 😎

What is upsampling?

Here's the Wikipedia explanation of upsampling:

When upsampling is performed on a sequence of samples of a signal or other continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a higher rate (or density, as in the case of a photograph).

Wikipedia (2004)

In other words: you have an input, in computer vision frequently an image, that has a specific size. For example, you have an MNIST image that is 28 x 28 pixels and has one color channel. That is, a grayscale image.

Instead of 28x28 pixels, you want the image to be 56x56 pixels. This is when, in the words of the Wikipedia page, you will need to produce an approximation as if you'd sampled at a higher rate or density. In other words, if you imagine one MNIST sample to be a photograph, upsampling approximates what that photograph would have looked like if it had been taken with better, higher-resolution equipment.

If you cannot distinguish between the approximation and the true image, upsampling has succeeded. As you will see next, there are multiple interpolation algorithms for upsampling - but let's take a look at a use case for upsampling first.
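To make interpolation concrete, here is a minimal sketch using torch.nn.functional.interpolate, the functional counterpart of the Upsample layer covered below. The tiny 2x2 "image" is purely illustrative:

import torch
import torch.nn.functional as F

# A tiny 2x2 "image" with batch and channel dimensions first: shape (1, 1, 2, 2)
x = torch.tensor([[[[1., 2.],
                    [3., 4.]]]])

# Nearest-neighbor interpolation simply repeats each pixel...
print(F.interpolate(x, scale_factor=2, mode='nearest'))

# ...while bilinear interpolation blends neighboring pixels into smoother transitions
print(F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False))

Nearest-neighbor upsampling turns every pixel into a 2x2 block of identical values, while the bilinear output contains in-between values - the same trade-off applies when upsampling full images.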

Upsampling use: encoder-decoder architectures

Below, you can see the architecture of the StyleGAN generative adversarial network. The mapping network on the left produces a so-called latent vector, which is subsequently used in the synthesis network that produces the output picture.

The synthesis network consists of a number of blocks that each produce an image at a specific resolution, which the next block then increases even further. For example, in the picture above you see a 4 x 4 resolution for the first block, followed by an 8 x 8 pixel resolution, all the way to a 1024 x 1024 pixels image size.

Between each pair of blocks, upsampling takes place. After the last adaptive instance normalization element in a block, an upsample step increases the current output to the resolution expected by the next block. Using a Convolutional layer, the next block then learns important features from the upsampled input, to which noise and styles are added for randomness and control in the image synthesis process.

Read the StyleGAN article for a deep dive into that specific GAN, but hopefully this makes it clear how upsampling can be used within your neural network! :)
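As a rough illustration of this pattern - upsample first, then learn features at the new resolution with a convolution - here is a minimal sketch. The channel count and activation are illustrative choices, not StyleGAN's actual configuration:

import torch
from torch import nn

# Upsample-then-convolve pattern between generator blocks;
# layer widths here are illustrative, not StyleGAN's
block = nn.Sequential(
  nn.Upsample(scale_factor=2, mode='nearest'),    # e.g. 4x4 -> 8x8
  nn.Conv2d(512, 512, kernel_size=3, padding=1),  # learn features at the new resolution
  nn.LeakyReLU(0.2)
)

x = torch.randn(1, 512, 4, 4)
print(block(x).shape)  # torch.Size([1, 512, 8, 8])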

PyTorch Upsample layer

In PyTorch, upsampling is built into the torch.nn.Upsample class, a layer that can simply be added to your neural network:

Upsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D (volumetric) data.

PyTorch (n.d.)

In other words, it works with 1D, 2D and 3D data.
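As a quick illustration (the tensor shapes here are arbitrary), the same layer handles temporal, spatial and volumetric inputs, as long as a batch and a channel dimension precede the spatial dimensions:

import torch
from torch import nn

up = nn.Upsample(scale_factor=2, mode='nearest')

# 1D (temporal): (batch, channels, width)
print(up(torch.randn(1, 1, 8)).shape)        # torch.Size([1, 1, 16])
# 2D (spatial): (batch, channels, height, width)
print(up(torch.randn(1, 1, 8, 8)).shape)     # torch.Size([1, 1, 16, 16])
# 3D (volumetric): (batch, channels, depth, height, width)
print(up(torch.randn(1, 1, 8, 8, 8)).shape)  # torch.Size([1, 1, 16, 16, 16])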

The Upsample layer is made available in the following way:

torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None)

Configurable attributes

These attributes can be configured:

  - size: the target output size, e.g. (56, 56) for a 2D input.
  - scale_factor: a multiplier for the spatial dimensions, e.g. 2 to double height and width. You specify either size or scale_factor, not both.
  - mode: the interpolation algorithm: 'nearest', 'linear', 'bilinear', 'bicubic' or 'trilinear'. Default: 'nearest'.
  - align_corners: if True, the corner pixels of the input and output tensors are aligned; only has an effect for the 'linear', 'bilinear', 'bicubic' and 'trilinear' modes. Default: False.
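For example (the tensors here are arbitrary stand-ins), size and scale_factor are two equivalent ways to go from 28x28 to 56x56:

import torch
from torch import nn

x = torch.randn(1, 1, 28, 28)

# A fixed target size...
up_by_size = nn.Upsample(size=(56, 56), mode='nearest')
# ...or a multiplier for the spatial dimensions
up_by_factor = nn.Upsample(scale_factor=2, mode='nearest')

print(up_by_size(x).shape)    # torch.Size([1, 1, 56, 56])
print(up_by_factor(x).shape)  # torch.Size([1, 1, 56, 56])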

PyTorch Upsample example

The example below shows you how you can use upsampling in a 2D setting, with images from the MNIST dataset.

It contains multiple parts: the imports, a simple nn.Module that wraps the Upsample layer, and a runtime part that feeds MNIST samples through the module and visualizes the result:

import os
import torch
from torch import nn
from torchvision.datasets import MNIST
from torch.utils.data import DataLoader
from torchvision import transforms
import matplotlib.pyplot as plt

class UpsampleExample(nn.Module):
  '''
    Simple example for upsampling
  '''
  def __init__(self):
    super().__init__()
    self.layers = nn.Sequential(
      nn.Upsample(size=(56, 56), mode='nearest')
    )


  def forward(self, x):
    '''Forward pass'''
    return self.layers(x)


if __name__ == '__main__':

  # Prepare MNIST
  dataset = MNIST(os.getcwd(), download=True, transform=transforms.ToTensor())
  trainloader = DataLoader(dataset, batch_size=10, shuffle=True, num_workers=1)

  # Initialize the upsample_example
  upsample_example = UpsampleExample()

  # Iterate over the DataLoader for training data
  for i, data in enumerate(trainloader, 0):

    # Get a batch of inputs; shape (batch_size, 1, 28, 28)
    inputs, targets = data

    # Keep the batch before upsampling
    before_upsampling = inputs

    # Perform the forward pass and take the first upsampled sample
    after_upsampling = upsample_example(before_upsampling)[0]

    # Visualize the first sample before and after upsampling
    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(before_upsampling[0].reshape((28, 28)))
    ax2.imshow(after_upsampling.reshape((56, 56)))
    plt.show()

After upsampling, this is what the inputs look like:

On the left, you can see the image before upsampling. On the right, the image after upsampling.

You can see that the image content stayed pretty much the same - but from the axes, you can see that it became bigger.

From 28x28 pixels (the default shape of an MNIST sample), the image is now 56x56 pixels.

Successfully upsampled with PyTorch! :D
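Because we used mode='nearest', every output pixel simply repeats the nearest input pixel, which is why the upsampled image looks just as blocky as the original. If you prefer smoother output, you can swap in a different interpolation mode - a minimal variation on the layer used above:

import torch
from torch import nn

# Bilinear interpolation blends neighboring pixels instead of repeating them,
# producing smoother (less blocky) upsampled images
smooth_upsample = nn.Upsample(size=(56, 56), mode='bilinear', align_corners=False)

x = torch.randn(1, 1, 28, 28)  # a stand-in for an MNIST batch
print(smooth_upsample(x).shape)  # torch.Size([1, 1, 56, 56])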

References

PyTorch. (n.d.). Upsample — PyTorch 1.10.1 documentation. https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html

Wikipedia. (2004, December 23). Upsampling. Wikipedia, the free encyclopedia. Retrieved December 28, 2021, from https://en.wikipedia.org/wiki/Upsampling
