Visualizing Keras model inputs with Activation Maximization

Traditionally, deep neural networks have been black boxes. The mantra “you feed them data and you get a working model, but you cannot explain how it works” is still very common today. Fortunately, however, developments in the field of machine learning have resulted in explainable AI, by means of visualizing the internals of machine learning models.

In this blog, we’ll take a look at a technique called activation maximization, which can be used to visualize the ‘perfect inputs’ for your deep neural network. It includes example implementations for Keras classification models using the keras-vis library.

However, we’ll start with some rationale – why visualize model internals in the first place? What is activation maximization and how does it work? And what is keras-vis?

Subsequently, we move on to coding examples for two Keras CNNs: one trained on the MNIST dataset, the other trained on the CIFAR10 dataset. Finally, we’ll wrap up our post by looking at what we created. Let’s go! 😎

The code for this post is also available on GitHub.

Why visualize model internals?

Over the past few years, we have seen quite a few AI breakthroughs. With the ascent of deep neural networks since 2012 have come self-driving cars, the usage of AI in banking, and so on. However, humans still don’t trust AI models entirely – and rightfully so, as AI has, for example, been shown to exhibit gender bias. It is really important to visualize the neural network, which has so far been a black box (Gehrmann et al., 2019), because:

  • Users give up their agency, or autonomy and control, over the processes automated by machine learning.
  • Users are forced to trust models that have been shown to be biased.
  • Similarly, users have no choice but to rely on these very models.

Fortunately, various approaches for studying a model’s internals – with respect to how it works – have emerged over the past few years. Activation Maximization is one of them, and can be used to generate images of the ‘best input’ for some class. We’ll take a look at it intuitively now.

Activation Maximization explained intuitively

During the supervised training process, your neural network learns by adapting its weights, step by step, based on the error generated when the training data is fed forward.

Suppose you’re training a classifier. During training, you thus have fixed model inputs and fixed model outputs for these inputs (since your training samples will always have a corresponding class number), while the weights are dynamic. They are adapted continuously (Valverde, 2018) in order to generate a model that performs well.

Now think the other way around. Suppose that you have finished training a classifier. How do you know that it was trained correctly? Firstly, you can take a look at the loss value, but this does not tell you everything. Rather, you would want to see what the model thinks belongs to every class. So, if you’re using the MNIST dataset of handwritten digits, you might be interested in what the model thinks is the best visual representation of, say, class ‘4’. This is hopefully a visualization that somewhat (or, perhaps even better, greatly) resembles an actual 4.

This is what activation maximization can do: you visualize what a class in your trained model looks like by inverting the process mentioned above. This time, the weights and the desired output are constant, and the input is modified iteratively until the activation of the neurons that yield the target class is maximized (Valverde, 2018). Since only the best possible image maximizes the activation of these neurons, you’ll find what the model thinks it sees when it predicts that class.
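
To make this a bit more concrete before we dive into keras-vis: below is a minimal, hypothetical sketch of this idea in plain Keras backend code, assuming a trained model instance and the TF1-era, graph-mode Keras used throughout this post; the function name and hyperparameters are illustrative. keras-vis implements the same principle, with extra regularization, under the hood:

# Minimal sketch of activation maximization – NOT the keras-vis implementation.
# Assumes a trained Keras `model` and a TF1-style (graph-mode) backend.
from keras import backend as K
import numpy as np

def maximize_class_input(model, class_index, steps=100, step_size=1.):
    # Loss: the activation of the output neuron for the target class
    loss = model.output[0, class_index]
    # Gradient of that activation with respect to the model *input*
    grads = K.gradients(loss, model.input)[0]
    # Normalize the gradient for a more stable ascent
    grads = grads / (K.sqrt(K.mean(K.square(grads))) + K.epsilon())
    # Function mapping an input image to (loss, gradient)
    iterate = K.function([model.input], [loss, grads])

    # Start from random noise, then repeatedly nudge the input in the
    # direction that increases the target activation: gradient ascent
    input_img = np.random.random((1,) + model.input_shape[1:])
    for _ in range(steps):
        _, grads_value = iterate([input_img])
        input_img += step_size * grads_value
    return input_img[0]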

In the case of the ‘4’ mentioned above, that would be something like this:

Visualizing Keras model inputs: say hi to keras-vis

Fortunately, for engineers who use Keras in their deep learning projects, there is a toolkit out there that adds activation maximization to Keras: keras-vis (link). Since it integrates with Keras quite well, this is the toolkit of our choice. As we’ll be creating actual models, we’ll next take a look at the software dependencies you need to install in order to run them. Additionally, we’ll take a closer look at installing keras-vis itself, which is slightly more involved than you might expect.

What you’ll need to run the models

We’ll do two things in this tutorial:

  • Create a Keras model (based on a model we created before);
  • Visualize the network’s inputs with keras-vis.

We hence need the following dependencies (an example install command follows the list):

  • Keras, the deep learning framework of our choice;
  • One of the Keras backends – TensorFlow, Theano or CNTK – where TensorFlow is preferred, given its deep integration with Keras today;
  • Python, preferably version 3.6+;
  • Keras-vis, for generating the input visualizations with activation maximization;
  • Matplotlib, for converting these visualizations into actual plots.
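
Assuming a working Python 3.6+ environment, everything except keras-vis can be installed with a single pip command – package names are the usual PyPI ones, and versions are left unpinned here:

pip install tensorflow keras matplotlib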

Installing keras-vis

Now, installing keras-vis. As said before, it’s a bit more difficult than simply running pip install keras-vis, due to this error:

ImportError: cannot import name 'imresize'

Especially when you run recent versions of TensorFlow and Keras, this error makes sense: SciPy, which is used under the hood, has deprecated imresize and it no longer exists in up-to-date installations. This initially looks bad, since it renders keras-vis useless – however… it seems that the PyPI package simply wasn’t updated! pip installs version 0.4.1:

pip uninstall keras-vis
  Successfully uninstalled keras-vis-0.4.1

…while version 0.5.0 is the most recent version.

Hence, if you install it directly from GitHub, with pip install https://github.com/raghakot/keras-vis/archive/master.zip, the most recent version will be installed, and keras-vis will work with more recent versions of Keras.

>pip install https://github.com/raghakot/keras-vis/archive/master.zip
Collecting https://github.com/raghakot/keras-vis/archive/master.zip
  Downloading https://github.com/raghakot/keras-vis/archive/master.zip
     \ 58.1MB 819kB/s
Building wheels for collected packages: keras-vis
  Building wheel for keras-vis (setup.py) ... done
Successfully built keras-vis
Installing collected packages: keras-vis
Successfully installed keras-vis-0.5.0
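
As an alternative workaround – untested here, and the version pin is illustrative – you could instead keep keras-vis 0.4.1 and install an older SciPy release that still ships imresize, since it was only removed in SciPy 1.3.0:

pip install scipy==1.2.1

Installing the most recent keras-vis from GitHub is the cleaner fix, though.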

Visualizing Keras CNN MNIST inputs

Let’s now create an example of visualizing inputs with activation maximization!

… a simple but very fun one indeed: we’ll be visualizing the ‘best’ inputs for the Keras MNIST CNN created in another blog post. This means that our code will consist of two parts:

  • The Keras MNIST CNN, which can be replaced by your own Keras code, as long as it has some model instance.
  • The activation maximization visualization code.

Open up your file explorer, navigate to some directory, and create a file. You can name it as you like, but activation_maximization_mnist.py seems to be a good choice for us today, so if you’re uninspired perhaps just choose that one.

Keras CNN

We’ll first add the code for the Keras CNN that we’ll visualize. Since this code was already explained here, and an explanation would only distract us from the actual goal of this blog post, I’d like to refer you to that post if you wish to understand the CNN code in more detail.

'''
  Visualizing how layers represent classes with keras-vis Activation Maximization.
'''

# =============================================
# Model to be visualized
# =============================================
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras import activations

# Model configuration
img_width, img_height = 28, 28
batch_size = 250
no_epochs = 25
no_classes = 10
validation_split = 0.2
verbosity = 1

# Load MNIST dataset
(input_train, target_train), (input_test, target_test) = mnist.load_data()

# Reshape data based on channels first / channels last strategy.
# This is dependent on whether you use TF, Theano or CNTK as backend.
# Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
if K.image_data_format() == 'channels_first':
    input_train = input_train.reshape(input_train.shape[0], 1, img_width, img_height)
    input_test = input_test.reshape(input_test.shape[0], 1, img_width, img_height)
    input_shape = (1, img_width, img_height)
else:
    input_train = input_train.reshape(input_train.shape[0], img_width, img_height, 1)
    input_test = input_test.reshape(input_test.shape[0], img_width, img_height, 1)
    input_shape = (img_width, img_height, 1)

# Parse numbers as floats
input_train = input_train.astype('float32')
input_test = input_test.astype('float32')

# Scale the grayscale data into the [0, 1] range
input_train = input_train / 255
input_test = input_test / 255

# Convert target vectors to categorical targets
target_train = keras.utils.to_categorical(target_train, no_classes)
target_test = keras.utils.to_categorical(target_test, no_classes)

# Create the model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(no_classes, activation='softmax', name='visualized_layer'))

# Compile the model
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

# Fit data to model
model.fit(input_train, target_train,
          batch_size=batch_size,
          epochs=no_epochs,
          verbose=verbosity,
          validation_split=validation_split)

# Generate generalization metrics
score = model.evaluate(input_test, target_test, verbose=0)
print(f'Test loss: {score[0]} / Test accuracy: {score[1]}')

One difference, though

Do note that one thing is different, though: we added a name attribute, with the value visualized_layer, to our final Dense layer.

We need this name in order to tell the keras-vis code which model layer must be visualized.

You can literally choose any layer you wish – so if you want to understand in more detail what certain layers think classes look like, you can add names to other layers as well (see the sketch below). However, since we’re interested in what the ‘best’ input for a certain class output looks like, we add the name attribute to the final layer.
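
As a quick sketch of that idea – hypothetical: the layer name conv_layer and the filter index are illustrative choices, and this snippet is not meant to be pasted into the model above as-is:

# Name an intermediate layer while building a model...
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu', name='conv_layer'))

# ...then, after training, visualize what maximally activates e.g. its 8th filter
from vis.visualization import visualize_activation
from vis.utils import utils
conv_index = utils.find_layer_idx(model, 'conv_layer')
conv_visualization = visualize_activation(model, conv_index, filter_indices=8)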

Activation Maximization code

Imports

We next add the imports: the most important one is visualize_activation from keras-vis, in order to apply activation maximization. Secondly, we import utils from the same toolkit, which allows us to map the layer name (visualized_layer) to the layer index easily. Thirdly, and finally, we import Matplotlib, for actually outputting the visualizations.

# =============================================
# Activation Maximization code
# =============================================
from vis.visualization import visualize_activation
from vis.utils import utils
import matplotlib.pyplot as plt

Preparations

# Find the index of the layer to be visualized (named above)
layer_index = utils.find_layer_idx(model, 'visualized_layer')

# Swap softmax with linear
model.layers[layer_index].activation = activations.linear
model = utils.apply_modifications(model)  

Next, we prepare our visualization code by doing two things:

  • With utils, we find the layer index of the layer to be visualized.
  • We swap the Softmax activation function in our trained model – common for multiclass classification problems – for the linear activation function; apply_modifications then rebuilds the model so that this change actually takes effect. Why this is necessary can be seen in the section on Softmax below: since you’re essentially looking backwards, from outputs and fixed weights to inputs, you need a free path from outputs back to inputs. Softmax disturbs this free path by transforming your model data in intricate ways, which makes the activation maximizations no longer understandable to humans. You don’t want this – so swap Softmax for linear.

Visualization

Finally, we add code for visualization – which is essentially a loop over the classes in our model, a call to visualize_activation per class, and Matplotlib code to generate a plot.

# Numbers to visualize
numbers_to_visualize = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]

# Visualize
for number_to_visualize in numbers_to_visualize:
  visualization = visualize_activation(model, layer_index, filter_indices=number_to_visualize)
  plt.imshow(visualization[..., 0])
  plt.title(f'MNIST target = {number_to_visualize}')
  plt.show()
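
As an optional variation – a sketch under the same assumptions as the loop above – you could render all ten classes in a single figure instead of one window per class:

# Render all ten class visualizations in one 2 x 5 grid
fig, axes = plt.subplots(2, 5, figsize=(12, 5))
for number_to_visualize, ax in zip(numbers_to_visualize, axes.flat):
  visualization = visualize_activation(model, layer_index, filter_indices=number_to_visualize)
  ax.imshow(visualization[..., 0])
  ax.set_title(f'MNIST target = {number_to_visualize}')
  ax.axis('off')
plt.tight_layout()
plt.show()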

Class visualizations

Now, let’s take a look at what happens when we run our model. Obviously, it will first train for 25 epochs, likely achieving a very high accuracy in the range of 99%. Subsequently, it will start outputting the plots one by one.

And surprisingly, they are quite interpretable for humans! Take a look at the 0, the 2 or the 8. Really cool! 😎

Sharpening the input visualizations

But can they get sharper? Yes!

Add an input_range to visualize_activation, like this:

visualization = visualize_activation(model, layer_index, filter_indices=number_to_visualize, input_range=(0., 1.))

…and you will get these results, which speak for themselves 😊

I’m really impressed!

By playing with the input_range, you can visualize the inputs from different angles – in the sense that different aspects of the image get visualized. For example, compare the 8 with input range \((0., 1.)\) above with the one with range \((0., 2.)\) below:

It’s thus perhaps a good idea to always generate activation maximization plots with various input ranges, in order to capture the model’s inputs from various points of view. To avoid retraining before every such experiment, it may also be a good idea to save your model now and then, so you can easily reload it every time.
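
A possible sketch of both ideas – the input ranges, the class index and the file name are illustrative choices:

# Sweep several input ranges for one class and plot them side by side
input_ranges = [(0., 1.), (0., 2.), (0., 5.)]
fig, axes = plt.subplots(1, len(input_ranges), figsize=(12, 4))
for (low, high), ax in zip(input_ranges, axes):
  visualization = visualize_activation(model, layer_index, filter_indices=8,
                                       input_range=(low, high))
  ax.imshow(visualization[..., 0])
  ax.set_title(f'input_range = ({low}, {high})')
plt.show()

# Save the trained model once, so later visualization runs can reload it
# instead of retraining
model.save('mnist_cnn_vis.h5')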

Why swapping Softmax is necessary
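
Intuitively, the problem is that Softmax couples all class outputs:

\[ \text{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}} \]

Gradient ascent can therefore increase the Softmax output for class \(i\) just as easily by suppressing the logits \(z_j\) of all other classes as by increasing \(z_i\) itself, so the generated input tends to encode ‘not the other classes’ rather than the target class. With a linear activation, maximizing the output simply means maximizing \(z_i\) directly.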

You already saw what happens when you don’t swap Softmax for linear. However, for the sake of completeness, this is what you’ll get for every class when you don’t swap Softmax:

Visualizing Keras CNN CIFAR10 inputs

Let’s now see what happens when we perform the same operation with the CIFAR10 dataset. We train the same model, once for 25 epochs and once for 100 epochs, and hope that our visualizations somewhat resemble the objects in the dataset.

This is a random selection from CIFAR10:

This is the code used for CIFAR10 visualization. It is really similar to the MNIST one above, so take a look there for explanations:

'''
  Visualizing how layers represent classes with keras-vis Activation Maximization.
'''

# =============================================
# Model to be visualized
# =============================================
import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras import activations

# Model configuration
img_width, img_height = 32, 32
batch_size = 250
no_epochs = 100
no_classes = 10
validation_split = 0.2
verbosity = 1

# Load CIFAR10 dataset
(input_train, target_train), (input_test, target_test) = cifar10.load_data()

# Reshape data based on channels first / channels last strategy.
# This is dependent on whether you use TF, Theano or CNTK as backend.
# Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
if K.image_data_format() == 'channels_first':
    input_train = input_train.reshape(input_train.shape[0], 3, img_width, img_height)
    input_test = input_test.reshape(input_test.shape[0], 3, img_width, img_height)
    input_shape = (3, img_width, img_height)
else:
    input_train = input_train.reshape(input_train.shape[0], img_width, img_height, 3)
    input_test = input_test.reshape(input_test.shape[0], img_width, img_height, 3)
    input_shape = (img_width, img_height, 3)

# Parse numbers as floats
input_train = input_train.astype('float32')
input_test = input_test.astype('float32')

# Scale the pixel data into the [0, 1] range
input_train = input_train / 255
input_test = input_test / 255

# Convert target vectors to categorical targets
target_train = keras.utils.to_categorical(target_train, no_classes)
target_test = keras.utils.to_categorical(target_test, no_classes)

# Create the model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dense(no_classes, activation='softmax', name='visualized_layer'))

# Compile the model
model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adam(),
              metrics=['accuracy'])

# Fit data to model
model.fit(input_train, target_train,
          batch_size=batch_size,
          epochs=no_epochs,
          verbose=verbosity,
          validation_split=validation_split)

# Generate generalization metrics
score = model.evaluate(input_test, target_test, verbose=0)
print(f'Test loss: {score[0]} / Test accuracy: {score[1]}')

# =============================================
# Activation Maximization code
# =============================================
from vis.visualization import visualize_activation
from vis.utils import utils
import matplotlib.pyplot as plt

# Find the index of the to be visualized layer above
layer_index = utils.find_layer_idx(model, 'visualized_layer')

# Swap softmax with linear
model.layers[layer_index].activation = activations.linear
model = utils.apply_modifications(model)  

# Classes to visualize
classes_to_visualize = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
classes = {
  0: 'airplane',
  1: 'automobile',
  2: 'bird',
  3: 'cat',
  4: 'deer',
  5: 'dog',
  6: 'frog',
  7: 'horse',
  8: 'ship',
  9: 'truck'
}

# Visualize
for number_to_visualize in classes_to_visualize:
  visualization = visualize_activation(model, layer_index, filter_indices=number_to_visualize, input_range=(0., 1.))
  plt.imshow(visualization)
  plt.title(f'CIFAR10 target = {classes[number_to_visualize]}')
  plt.show()

Visualizations at 25 epochs

At 25 epochs, it’s possible to detect the shapes of the objects very vaguely – I think this is especially visible for the automobiles, deer, horses and trucks.

Visualizations at 100 epochs

At 100 epochs, the model specified above is overfitting quite severely – but nevertheless, these are the visualizations:

Primarily, they have become ‘sharper’ – but not necessarily more detailed. Detail is already questionable for a 32×32 pixel image in the first place, but this also shows that you should not expect magic to happen, despite the possible advantages of methods like activation maximization.

Summary

In this blog post, we studied what Activation Maximization is and how you can visualize the ‘best inputs’ for your CNN’s classes with keras-vis and Keras. Activation Maximization helps you understand what happens within your model, which may in turn help you find hidden biases that – when removed – can really improve the applicability of your machine learning model.

I hope you’ve learnt something today – for me, it was really interesting to see how it’s possible to visualize the model’s black box! 😊 If you have any questions, remarks, or other comments, feel free to leave a comment below 👇 I will try to respond as soon as possible.

Thanks for reading MachineCurve and happy engineering! 😎

The code for this post is also available on GitHub.

References

Kotikalapudi, Raghavendra and contributors. (2017). Github / keras-vis. Retrieved from https://github.com/raghakot/keras-vis

Valverde, J. M. (2018, June 18). Introduction to Activation Maximization and implementation in Tensorflow. Retrieved from http://laid.delanover.com/introduction-to-activation-maximization-and-implementation-in-tensorflow/

Gehrmann, S., Strobelt, H., Krüger, R., Pfister, H., & Rush, A. M. (2019). Visual Interaction with Deep Learning Models through Collaborative Semantic Inference. IEEE Transactions on Visualization and Computer Graphics, 1-1. doi:10.1109/tvcg.2019.2934595
