By Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim (Google Brain; University of California, Berkeley)
This blog post is the fourth part of the Explainable Artificial Intelligence (XAI) series; refer to the previous posts (link). This post discusses saliency mapping techniques, their performance, and evaluation metrics.
Saliency Mapping
(Image source: Analytics India Magazine)
Abstract
Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Many saliency methods have been proposed, often guided by visual appeal on image data. The paper proposes an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. The authors found that relying solely on visual assessment can be misleading. Through extensive experiments, they show that some existing saliency methods are independent both of the model and of the data-generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either the data or the model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, or debugging the model. The authors interpret their findings through an analogy with edge detection in images, a technique that requires neither training data nor a model. Theory for a linear model and a single-layer convolutional neural network supports the experimental findings.
Methods
In the formal setup, an input is a vector $x \in \mathbb{R}^d$. A model describes a function $S : \mathbb{R}^d \to \mathbb{R}^C$, where $C$ is the number of classes in the classification problem. An explanation method provides an explanation map $E : \mathbb{R}^d \to \mathbb{R}^d$ that maps inputs to objects of the same shape.
The paper briefly describes the explanation methods that were examined; the supplementary materials contain an in-depth overview of these methods (see the paper). The goal is not to exhaustively evaluate all prior explanation methods, but rather to highlight how the proposed tests apply to several cases of interest.
Gradient
The gradient explanation for an input $x$ is $E_{\text{grad}}(x) = \partial S / \partial x$. The gradient quantifies how much a change in each input dimension would change the prediction $S(x)$ in a small neighborhood around the input.
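To make this concrete, here is a minimal PyTorch sketch of the gradient explanation (not the authors' code; `model` is a hypothetical classifier assumed to return a batch of logits of shape (1, C)):

```python
import torch

def gradient_saliency(model, x, target_class):
    """Vanilla gradient explanation: dS_c(x)/dx for a single input."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)                    # S(x), assumed shape (1, C)
    logits[0, target_class].backward()   # backprop the class score S_c
    return x.grad.detach()               # explanation has the same shape as x
```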
Gradient ⊙ Input
Another form of explanation is the element-wise product of the input and the gradient, denoted $x \odot \frac{\partial S}{\partial x}$, which can address "gradient saturation" and reduce visual diffusion.
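Reusing the hypothetical `gradient_saliency` sketch above, the element-wise product is a one-liner:

```python
def gradient_times_input(model, x, target_class):
    # Element-wise product x ⊙ dS_c/dx
    return x.detach() * gradient_saliency(model, x, target_class)
```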
Integrated Gradients (IG)
IG also addresses gradient saturation by summing over scaled versions of the input. IG for an input $x$ is defined as $E_{\text{IG}}(x) = (x - \bar{x}) \odot \int_0^1 \frac{\partial S(\bar{x} + \alpha(x - \bar{x}))}{\partial x}\, d\alpha$, where $\bar{x}$ is a "baseline input" that represents the absence of a feature in the original input $x$.
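In practice the integral is approximated with a Riemann sum over interpolation steps. A minimal sketch, assuming the same hypothetical `model` interface as above:

```python
def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Riemann-sum approximation of E_IG(x)."""
    accumulated = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        # Interpolate between the baseline x̄ and the input x
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        model(point)[0, target_class].backward()
        accumulated += point.grad
    return (x - baseline) * accumulated / steps   # (x − x̄) ⊙ average gradient
```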
Guided Backpropagation (GBP)
GBP builds on the "DeConvNet" explanation method and corresponds to the gradient explanation where negative gradient entries are set to zero while backpropagating through a ReLU unit.
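One way to prototype this in PyTorch is with backward hooks that clamp negative gradients at every ReLU; a rough sketch under the assumption that the model uses `torch.nn.ReLU` modules (real implementations handle more layer types):

```python
def guided_backprop(model, x, target_class):
    """Gradient explanation with negative gradients zeroed at each ReLU."""
    hook = lambda mod, grad_in, grad_out: (torch.clamp(grad_in[0], min=0.0),)
    handles = [m.register_full_backward_hook(hook)
               for m in model.modules() if isinstance(m, torch.nn.ReLU)]
    try:
        saliency = gradient_saliency(model, x, target_class)
    finally:
        for h in handles:
            h.remove()                   # restore normal backpropagation
    return saliency
```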
Guided GradCAM.
Introduced by Selvaraju et al., GradCAM explanations correspond to the gradient of the class score (logit) with respect to the feature map of the last convolutional unit of a DNN. For pixel-level granularity, GradCAM can be combined with Guided Backpropagation through an element-wise product.
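A rough sketch of the combination, assuming `conv_layer` is the network's last convolutional module with NCHW feature maps (the gradient pooling and upsampling follow the usual GradCAM recipe, not the authors' code):

```python
import torch.nn.functional as F

def guided_grad_cam(model, conv_layer, x, target_class):
    """Coarse GradCAM map multiplied element-wise with guided backprop."""
    feats, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(x)[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)          # pooled gradients
    cam = torch.relu((weights * feats[0]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode='bilinear',
                        align_corners=False)                   # upsample to input size
    return cam * guided_backprop(model, x, target_class)       # pixel-level map
```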
SmoothGrad (SG)
SG seeks to alleviate noise and visual diffusion in saliency maps by averaging over explanations of noisy copies of an input. For a given explanation map $E$, SmoothGrad is defined as $E_{sg}(x) = \frac{1}{N} \sum_{i=1}^{N} E(x + g_i)$, where the noise vectors $g_i \sim \mathcal{N}(0, \sigma^2)$ are drawn i.i.d. from a normal distribution.
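A minimal sketch that wraps any explanation function (here the hypothetical `gradient_saliency` from above):

```python
def smoothgrad(explain_fn, model, x, target_class, n=25, sigma=0.1):
    """Average an explanation over n noisy copies of the input."""
    total = torch.zeros_like(x)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)   # g_i ~ N(0, sigma^2)
        total += explain_fn(model, noisy, target_class)
    return total / n
```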
Model Randomization Test
For the model randomization test, the authors randomize the weights of a model starting from the top layer and proceeding successively all the way to the bottom layer, destroying the learned weights from the top layers down. The explanation produced by the randomized network is then compared to the one obtained with the model's original weights. Below is the evolution of saliency masks from different methods for a demo image from the ImageNet dataset and the Inception v3 model.
Figure: Cascading randomization on Inception v3 (ImageNet). The figure shows the original
explanations (first column) for the Junco bird. Progression from left to right indicates complete
randomization of network weights (and other trainable variables) up to that ‘block’ inclusive. We
show images for 17 blocks of randomization. Coordinate (Gradient, mixed_7b) shows the gradient
explanation for the network in which the top layers starting from Logits up to mixed_7b have been
reinitialized. The last column corresponds to a network with completely reinitialized weights.
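A sketch of the cascading randomization loop, assuming a list of layer names ordered from the logits down; `explain_fn` can be any of the saliency sketches above:

```python
import copy

def cascading_randomization(model, layer_names_top_down, explain_fn, x, y):
    """Reinitialize one layer at a time, top to bottom, and
    recompute the explanation after each step."""
    model = copy.deepcopy(model)                 # keep the trained model intact
    modules = dict(model.named_modules())
    maps = []
    for name in layer_names_top_down:
        for p in modules[name].parameters():
            torch.nn.init.normal_(p, std=0.01)   # destroy this layer's learned weights
        maps.append(explain_fn(model, x, y))     # explanation vs. randomized net
    return maps
```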
Data Randomization Test
In the data randomization test, the authors permute the training labels and train a model on the randomized training data. A model achieving high training accuracy on the randomized data is forced to memorize the permuted labels without being able to exploit the original structure in the data. Saliency masks from the model trained on random labels are then compared with those from a model trained on the true labels; a sketch of the label permutation follows, and examples on MNIST and Fashion MNIST appear below.
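The permutation itself is a one-line change to the training pipeline; a sketch assuming a plain NumPy array of integer labels (the `train` call in the comment is hypothetical):

```python
import numpy as np

def permute_labels(labels, seed=0):
    """Randomly permute training labels for the data randomization test."""
    rng = np.random.default_rng(seed)
    return labels[rng.permutation(len(labels))]

# train_random = train(model_fn, images, permute_labels(labels))  # memorization run
```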
Figure: Explanations for a model trained on true labels vs. a model trained on random labels. Top Left: Absolute-value visualization of masks for digit 0 from the MNIST test set for a CNN. Top Right: Saliency masks for digit 0 from the MNIST test set for a CNN, shown in diverging color. Bottom Left: Spearman rank correlation (with absolute values) bar graph for saliency methods, comparing the similarity of explanations derived from a model trained on random labels and one trained on real labels. Bottom Right: Spearman rank correlation (without absolute values) bar graph for saliency methods for an MLP. See the appendix for corresponding figures for the CNN, and for an MLP on Fashion MNIST.
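The rank-correlation comparison shown in the figure can be reproduced with SciPy; a minimal sketch, with the absolute-value variant controlled by a flag:

```python
import numpy as np
from scipy.stats import spearmanr

def mask_similarity(mask_a, mask_b, use_abs=True):
    """Spearman rank correlation between two flattened saliency masks."""
    a, b = np.ravel(mask_a), np.ravel(mask_b)
    if use_abs:
        a, b = np.abs(a), np.abs(b)   # compare magnitudes only
    return spearmanr(a, b).correlation
```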
Conclusion
The goal of the experimental methodology is to give researchers guidance in assessing the scope of model explanation methods. The authors envision these tests serving as sanity checks in the design of new model explanations. The results show that visual inspection of explanations alone can favor methods that produce compelling pictures but lack sensitivity to the model and the data-generating process. Invariances in explanation methods give a concrete way to rule out a method's adequacy for certain tasks.