
XAI: Sanity Checks for Saliency Maps

-By Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
Google Brain, University of California Berkeley



This blog post is the fourth part of the Explainable Artificial Intelligence (XAI) series; refer to the previous posts in the series for background. This post discusses saliency mapping techniques, their performance, and the metrics used to evaluate them.


Saliency Mapping

source: Analytics India Magazine

The saliency map approach is exemplified by the occlusion procedure of Zeiler et al., where a network is repeatedly queried with portions of the input occluded, producing a map of which parts of the input actually influence the network output.
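A minimal sketch of such an occlusion procedure, assuming a hypothetical PyTorch classifier `model` that returns logits and an image tensor `x` of shape (1, C, H, W); the patch size, stride, and zero baseline are illustrative choices:

```python
import torch

def occlusion_map(model, x, target_class, patch=16, stride=8, baseline=0.0):
    """Slide a patch of `baseline` values over the input and record the drop in
    the target-class score; larger drops mean the occluded region mattered more."""
    model.eval()
    with torch.no_grad():
        base_score = model(x)[0, target_class].item()
        _, _, H, W = x.shape
        heat = torch.zeros(H, W)
        for top in range(0, H - patch + 1, stride):
            for left in range(0, W - patch + 1, stride):
                occluded = x.clone()
                occluded[:, :, top:top+patch, left:left+patch] = baseline
                score = model(occluded)[0, target_class].item()
                heat[top:top+patch, left:left+patch] += base_score - score
    return heat
```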

Abstract
Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. The paper proposes an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. The authors find that reliance solely on visual assessment can be misleading. Through extensive experiments they show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either the data or the model, such as finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, or debugging the model. They interpret the findings through an analogy with edge detection in images, a technique that requires neither training data nor a model. Theory for the case of a linear model and a single-layer convolutional neural network supports the experimental findings.


Methods


In the formal setup, an input is a vector x ∈ R^d. A model describes a function S : R^d → R^C, where C is the number of classes in the classification problem. An explanation method provides an explanation map E : R^d → R^d that maps inputs to objects of the same shape.
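As a small illustration of this setup, the model and the explanation map can be written as plain function types (a sketch in PyTorch-style Python; the alias names are my own):

```python
from typing import Callable
import torch

# S : R^d -> R^C maps an input vector to C class scores.
Model = Callable[[torch.Tensor], torch.Tensor]
# E : R^d -> R^d maps an input to an explanation of the same shape.
ExplanationMap = Callable[[torch.Tensor], torch.Tensor]
```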

The paper briefly describes the explanation methods that were examined; the supplementary materials contain an in-depth overview of these methods (see the paper). The goal is not to exhaustively evaluate all prior explanation methods, but rather to highlight how the proposed sanity checks apply to several cases of interest.

Gradient

The gradient explanation for an input x is E_grad(x) = ∂S/∂x. The gradient quantifies how much a change in each input dimension would change the prediction S(x) in a small neighborhood around the input.
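A short sketch of this gradient explanation in PyTorch, assuming the same hypothetical `model` and a chosen `target_class` whose logit plays the role of S(x):

```python
import torch

def gradient_explanation(model, x, target_class):
    """E_grad(x) = dS/dx for the selected class score."""
    x = x.detach().clone().requires_grad_(True)
    score = model(x)[0, target_class]
    grad, = torch.autograd.grad(score, x)
    return grad
```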

Gradient ⊙ Input

Another form of explanation is the element-wise product of the input and the gradient, denoted x ⊙ ∂S/∂x, which can address “gradient saturation” and reduce visual diffusion.
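Reusing the hypothetical `gradient_explanation` sketch above, the element-wise product is a one-line variant:

```python
def gradient_times_input(model, x, target_class):
    """E(x) = x ⊙ dS/dx, the element-wise product of input and gradient."""
    return x * gradient_explanation(model, x, target_class)
```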

Integrated Gradients (IG) 

IG also addresses gradient saturation by summing over scaled versions of the input. IG for an input x is defined as E_IG(x) = (x − x̄) ⊙ ∫₀¹ ∂S(x̄ + α(x − x̄))/∂x dα, where x̄ is a “baseline input” that represents the absence of a feature in the original input x.
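A sketch of IG that approximates the path integral with a simple Riemann sum; the all-zeros baseline and the number of steps are assumptions, not values prescribed here:

```python
import torch

def integrated_gradients(model, x, target_class, baseline=None, steps=50):
    """E_IG(x) = (x - x̄) ⊙ ∫₀¹ ∂S(x̄ + α(x − x̄))/∂x dα, approximated over `steps` points."""
    if baseline is None:
        baseline = torch.zeros_like(x)  # x̄: assumed "absence of feature" baseline
    total_grad = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        score = model(point)[0, target_class]
        grad, = torch.autograd.grad(score, point)
        total_grad += grad
    return (x - baseline) * total_grad / steps
```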

Guided Backpropagation (GBP) 

GBP builds on the “DeConvNet” explanation method and corresponds to the gradient explanation where negative gradient entries are set to zero while backpropagating through a ReLU unit. 
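One common way to approximate GBP is to clamp negative gradients at every ReLU with a backward hook; a sketch assuming the model uses `nn.ReLU` modules (not the functional form) and non-in-place ReLUs:

```python
import torch
import torch.nn as nn

def guided_backprop(model, x, target_class):
    """Gradient explanation with negative gradient entries zeroed at each ReLU."""
    def clamp_grad(module, grad_input, grad_output):
        # grad_input already carries the usual ReLU mask; clamping removes
        # the entries that would otherwise be negative.
        return (torch.clamp(grad_input[0], min=0.0),)

    handles = [m.register_full_backward_hook(clamp_grad)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    try:
        x = x.detach().clone().requires_grad_(True)
        score = model(x)[0, target_class]
        grad, = torch.autograd.grad(score, x)
    finally:
        for h in handles:
            h.remove()
    return grad
```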

Guided GradCAM

Introduced by Selvaraju et al., GradCAM explanations correspond to the gradient of the class score (logit) with respect to the feature map of the last convolutional unit of a DNN. For pixel-level granularity, GradCAM can be combined with Guided Backpropagation through an element-wise product.
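A sketch of the GradCAM half of this, assuming access to the network's last convolutional module (passed in as a hypothetical `conv_layer`); multiplying the upsampled map element-wise with the guided-backprop map above would give Guided GradCAM:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_class, conv_layer):
    """Weight the chosen layer's feature maps by the pooled gradient of the class logit."""
    activations, gradients = [], []
    fwd = conv_layer.register_forward_hook(lambda m, inp, out: activations.append(out))
    bwd = conv_layer.register_full_backward_hook(lambda m, gin, gout: gradients.append(gout[0]))
    try:
        model.zero_grad()
        model(x)[0, target_class].backward()
    finally:
        fwd.remove()
        bwd.remove()
    acts, grads = activations[0].detach(), gradients[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)           # per-channel importance
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))  # coarse relevance map
    return F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
```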

SmoothGrad (SG) 

SG seeks to alleviate noise and visual diffusion in saliency maps by averaging over explanations of noisy copies of an input. For a given explanation map E, SmoothGrad is defined as E_sg(x) = (1/N) Σ_{i=1}^{N} E(x + g_i), where the noise vectors g_i ~ N(0, σ²) are drawn i.i.d. from a normal distribution.
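A sketch of SG that wraps any of the explanation functions above; the noise level σ and sample count N are illustrative defaults, and in practice σ is often scaled to the input range:

```python
import torch

def smoothgrad(explain, model, x, target_class, n=25, sigma=0.15):
    """E_sg(x) = (1/N) Σ E(x + g_i) with g_i ~ N(0, σ²)."""
    total = torch.zeros_like(x)
    for _ in range(n):
        noisy = x + sigma * torch.randn_like(x)
        total += explain(model, noisy, target_class)
    return total / n

# Example: smoothgrad(gradient_explanation, model, x, target_class)
```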



Model Randomization Test

In the model randomization test, the authors randomize the weights of a model starting from the top layer and proceeding successively all the way to the bottom layer. This procedure destroys the learned weights from the top layers down to the bottom ones. The explanation obtained from the network with randomized weights is then compared to the one obtained with the model’s original weights. Below is the evolution of saliency masks from different methods for a demo image from the ImageNet dataset and the Inception v3 model.


Figure: Cascading randomization on Inception v3 (ImageNet). The figure shows the original explanations (first column) for the Junco bird. Progression from left to right indicates complete randomization of network weights (and other trainable variables) up to that ‘block’ inclusive. We show images for 17 blocks of randomization. Coordinate (Gradient, mixed_7b) shows the gradient explanation for the network in which the top layers starting from Logits up to mixed_7b have been reinitialized. The last column corresponds to a network with completely reinitialized weights.
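A sketch of this cascading randomization test, assuming a PyTorch `model` and any `explain(model, x, target_class)` function such as the gradient sketch above; re-initializing modules in reverse definition order is used here as a stand-in for "top to bottom":

```python
import copy
import torch.nn as nn

def cascading_randomization(model, explain, x, target_class):
    """Re-initialize weights from the top layer down, recomputing the explanation
    after each step; maps that barely change are insensitive to the learned weights."""
    randomized = copy.deepcopy(model)
    maps = [explain(randomized, x, target_class)]        # original (trained) weights
    layers = [m for m in randomized.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    for layer in reversed(layers):                       # approximate top -> bottom order
        layer.reset_parameters()                         # destroy this layer's learned weights
        maps.append(explain(randomized, x, target_class))
    return maps
```

Comparing each map in `maps` to the first one, for example by Spearman rank correlation as in the paper, quantifies how quickly the explanation degrades as randomization cascades downward.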

Data Randomization Test

In the data randomization test, the authors permute the training labels and train a model on the randomized training data. A model achieving high training accuracy on the randomized training data is forced to memorize the randomized labels without being able to exploit the original structure in the data. Saliency masks for a model trained on random labels are then compared with those for a model trained on the true labels. Examples on MNIST and Fashion MNIST are shown below.


Figure: Explanation for a true model vs. a model trained on random labels. Top Left: Absolute value visualization of masks for digit 0 from the MNIST test set for a CNN. Top Right: Saliency masks for digit 0 from the MNIST test set for a CNN shown in diverging color. Bottom Left: Spearman rank correlation (with absolute values) bar graph for saliency methods. We compare the similarity of explanations derived from a model trained on random labels and one trained on real labels. Bottom Right: Spearman rank correlation (without absolute values) bar graph for saliency methods for an MLP. See the appendix for corresponding figures for the CNN and MLP on Fashion MNIST.
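A sketch of the label-permutation step of this test (the model training itself is omitted; `x_train` and `y_train` are hypothetical training arrays):

```python
import numpy as np

def randomize_labels(y, seed=0):
    """Permute the training labels so a model that fits them must memorize noise."""
    rng = np.random.default_rng(seed)
    return rng.permutation(y)

# y_random = randomize_labels(y_train)
# Train a fresh model on (x_train, y_random) to high *training* accuracy, then
# compare its saliency masks against those of the model trained on the true labels.
```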


Conclusion


The goal of the experimental methodology is to give researchers guidance in assessing the scope of model explanation methods. The authors envision these tests serving as sanity checks in the design of new model explanations. The results show that visual inspection alone can favor methods that provide compelling pictures but lack sensitivity to the model and the data generating process. Invariance of an explanation method under the randomization tests gives a concrete way to rule out its adequacy for certain tasks.
