Extract the activation maps of your Keras models
Short code snippets and useful examples showing how to get the activations of each layer of a Keras model.
-> Works for any kind of model (recurrent, convolutional, residual...). Not only for images!
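Under the hood, the idea is simply to ask the Keras backend to evaluate every layer's output tensor on the same inputs. Below is a minimal sketch of that idea, assuming Keras 2.x with the TensorFlow backend and a single-input Sequential model; the actual, more general implementation is in read_activations.py.

```python
import keras.backend as K

def layer_activations(model, x):
    """Return a list of numpy arrays, one per layer, for the inputs x."""
    # One output tensor per layer (InputLayers are implicit in Sequential models).
    outputs = [layer.output for layer in model.layers]
    # Build a single backend function: model input (+ learning phase) -> all layer outputs.
    fetch = K.function([model.input, K.learning_phase()], outputs)
    # Learning phase 0 = test mode, so Dropout behaves as it would at inference time.
    return fetch([x, 0.])
```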
Example of MNIST
Shapes of the activations (one sample) on Keras CNN MNIST:
----- activations -----
(1, 26, 26, 32)
(1, 24, 24, 64)
(1, 12, 12, 64)
(1, 12, 12, 64)
(1, 9216)
(1, 128)
(1, 128)
(1, 10) # softmax output!
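For reference, a network along the lines of the standard Keras MNIST CNN example produces exactly these shapes. This is a reconstruction from the shapes above, not a copy of the model actually defined in model_train.py.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))  # -> (None, 26, 26, 32)
model.add(Conv2D(64, (3, 3), activation='relu'))                           # -> (None, 24, 24, 64)
model.add(MaxPooling2D(pool_size=(2, 2)))                                  # -> (None, 12, 12, 64)
model.add(Dropout(0.25))                                                   # -> (None, 12, 12, 64)
model.add(Flatten())                                                       # -> (None, 9216)
model.add(Dense(128, activation='relu'))                                   # -> (None, 128)
model.add(Dropout(0.5))                                                    # -> (None, 128)
model.add(Dense(10, activation='softmax'))                                 # -> (None, 10)
```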
Shapes of the activations (batch of 200 samples) on Keras CNN MNIST:
----- activations -----
(200, 26, 26, 32)
(200, 24, 24, 64)
(200, 12, 12, 64)
(200, 12, 12, 64)
(200, 9216)
(200, 128)
(200, 128)
(200, 10)
Activation map of CONV1 of LeNet
Activation map of FC1 of LeNet
Activation map of Softmax of LeNet. Yes, it's a seven!
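The figures referenced above are grids of feature maps. As a rough illustration (not code from this repository), such a grid could be rendered with matplotlib like this, where activation is one entry of the returned list, e.g. of shape (1, 26, 26, 32):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_feature_maps(activation, cols=8):
    """Plot every channel of a (1, height, width, channels) activation as a grid."""
    n_channels = activation.shape[-1]
    rows = int(np.ceil(n_channels / float(cols)))
    for i in range(n_channels):
        plt.subplot(rows, cols, i + 1)
        plt.imshow(activation[0, :, :, i], cmap='gray')
        plt.axis('off')
    plt.show()
```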
The function for visualizing the activations is in the script read_activations.py
Inputs:
- model: Keras model
- model_inputs: Model inputs for which we want to get the activations (for example, 200 MNIST images)
- print_shape_only: If set to False, will print the entire activation arrays (might be very verbose!)
- layer_name: Will retrieve the activations of a specific layer, if the name matches one of the existing layers of the model.
Outputs:
- returns a list with, for each layer (in order of definition), its corresponding activations.
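Assuming the function exposed by read_activations.py is called get_activations (check the script for the exact name), a call matching the inputs described above would look like this:

```python
from read_activations import get_activations  # function name assumed; see read_activations.py

# model: a trained Keras model; x_test[:200]: a batch of 200 MNIST images
# shaped (200, 28, 28, 1) and scaled to [0, 1].
activations = get_activations(model, x_test[:200], print_shape_only=True)

# Activations of a single layer, selected by name (the name must match one of the
# layers in model.summary(); 'conv2d_1' is just an example).
first_conv = get_activations(model, x_test[:200], print_shape_only=True, layer_name='conv2d_1')
```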
I provide a simple example to show how it works with the MNIST model. I separated the training from the visualization because, if both were done sequentially, we would have to re-train the model every time we wanted to visualize the activations. Not very practical! Here are the main steps (a code sketch of the flow follows the list):
Running python model_train.py will:
- define the model
- if no checkpoints are detected:
  - train the model
  - save the best model in checkpoints/
- load the model from the best checkpoint
- read the activations
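Here is a condensed sketch of that train-or-load logic, assuming the best model is stored as a single HDF5 file in checkpoints/ (the file name and hyper-parameters are illustrative; the real logic lives in model_train.py):

```python
import os
from keras.callbacks import ModelCheckpoint
from keras.models import load_model

def train_or_load(model, x_train, y_train, x_val, y_val, ckpt='checkpoints/best.h5'):
    """Train only if no checkpoint exists yet, then return the best saved model."""
    if not os.path.exists(ckpt):
        os.makedirs(os.path.dirname(ckpt), exist_ok=True)
        model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
        # Keep only the weights with the best validation loss.
        saver = ModelCheckpoint(ckpt, monitor='val_loss', save_best_only=True)
        model.fit(x_train, y_train, validation_data=(x_val, y_val),
                  epochs=12, batch_size=128, callbacks=[saver])
    return load_model(ckpt)
```

The returned model can then be passed, together with a batch of test images, to the activation-reading function described above.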
model_multi_inputs_train.py contains a very simple example showing how to visualize activations for models with multiple inputs.
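For models with several inputs, model_inputs is simply a list of arrays, one per input. A rough sketch with a toy two-input model (the real example is in model_multi_inputs_train.py, and get_activations is again an assumed name):

```python
import numpy as np
from keras.layers import Input, Dense, concatenate
from keras.models import Model
from read_activations import get_activations  # function name assumed

# A toy functional model with two inputs.
a = Input(shape=(10,))
b = Input(shape=(20,))
h = Dense(8, activation='relu')(concatenate([a, b]))
out = Dense(1, activation='sigmoid')(h)
model = Model(inputs=[a, b], outputs=out)

# With multiple inputs, model_inputs is a list with one array per model input.
batch = [np.random.random((200, 10)), np.random.random((200, 20))]
activations = get_activations(model, batch, print_shape_only=True)
```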