
Posts

Showing posts from April, 2019

Why Should I Trust You?. . LIME

By Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. University of Washington, Seattle, WA 98105, USA. Paper Link. This is the third post in the Explainable AI (XAI) series. In an earlier post I shed light on how biasing affects machine learning models. Today's topic is LIME (Local Interpretable Model-agnostic Explanations), which explains individual predictions with linear proxy models. TRUST: I would like to quote Stephen M. R. Covey's "The Speed of Trust" here, since it is relevant to trust (executive summary link). Simply put, trust means confidence. The opposite of trust, distrust, is suspicion. Trust always affects two outcomes: speed and cost. When trust goes down, speed goes down and cost goes up. When trust goes up, speed goes up and cost goes down. (Strategy x Execution) x Trust = Results. Not trusting people is the greater risk; likewise, if users do not trust a model or a prediction, they will not use it. There are two definitions of
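The linear-proxy idea behind LIME can be sketched in a few lines: perturb the input around the instance being explained, query the black-box model, weight each perturbation by its proximity to the instance, and fit a weighted linear model whose coefficients serve as the local explanation. Below is a minimal one-dimensional sketch; the sigmoid black box, the kernel width, and the sample count are illustrative assumptions of mine, not the paper's actual setup.

```python
import math
import random

def black_box(x):
    # Hypothetical opaque model: a steep sigmoid, nonlinear in x.
    return 1.0 / (1.0 + math.exp(-3.0 * x))

def lime_1d(f, x0, n_samples=500, width=0.5, seed=0):
    """Fit a local linear proxy to f around x0 (LIME-style sketch)."""
    rng = random.Random(seed)
    # 1. Perturb the instance and query the black box.
    xs = [x0 + rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    # 2. Proximity kernel: nearby perturbations get higher weight.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # 3. Weighted least-squares fit of a line (closed form in 1-D).
    sw = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / sw
    ybar = sum(w * y for w, y in zip(ws, ys)) / sw
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    intercept = ybar - slope * xbar
    return slope, intercept

slope, intercept = lime_1d(black_box, x0=0.0)
# The slope is the local "explanation": positive, and bounded above by
# the sigmoid's maximum derivative (0.75 for steepness 3).
print(slope, intercept)
```

The slope is meaningful only near x0; that locality, traded for faithfulness everywhere, is exactly the bargain LIME strikes.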

Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

By Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. Boston University, 8 Saint Mary’s Street, Boston, MA; Microsoft Research New England, 1 Memorial Drive, Cambridge, MA. Paper Link. Today's topic is part of the Explainable AI series (previous blog post). Debiasing is one of the aspects that influence a model's decisions. Abstract: The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are
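The geometric claim, that gender bias is captured by a direction in the embedding space, is easy to illustrate. The toy 3-d vectors below are invented for illustration only (real embeddings are hundreds of dimensions, and the paper averages several he/she-style pairs rather than using a single difference); projecting a word onto the he-minus-she direction gives a signed bias score.

```python
# Toy 3-d "embeddings" chosen by hand to illustrate the geometry.
emb = {
    "he":       [ 1.0, 0.2, 0.1],
    "she":      [-1.0, 0.2, 0.1],
    "doctor":   [ 0.3, 0.9, 0.2],   # leans toward "he" in this toy space
    "nurse":    [-0.4, 0.9, 0.2],   # leans toward "she"
    "computer": [ 0.0, 0.1, 0.95],  # orthogonal to the gender direction
}

def gender_direction(e):
    # Bias direction g = he - she (the paper averages many such pairs).
    return [a - b for a, b in zip(e["he"], e["she"])]

def bias_score(word, e):
    # Scalar projection of a word vector onto the gender direction.
    g = gender_direction(e)
    norm = sum(x * x for x in g) ** 0.5
    return sum(w * x for w, x in zip(e[word], g)) / norm

for w in ("doctor", "nurse", "computer"):
    print(w, round(bias_score(w, emb), 2))
# doctor scores positive, nurse negative, computer zero:
# the sign and magnitude of the projection quantify the stereotype.
```

Debiasing then amounts to subtracting this projection from words that should be gender-neutral, which is the paper's "neutralize" step.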

Explaining Explanations: An Overview of Interpretability of ML

By Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, and Lalana Kagal. Massachusetts Institute of Technology, Cambridge, MA 02139. Paper Link. Explaining Explanations: An Overview of Interpretability of Machine Learning. Explainable AI (XAI), interpretable AI, or transparent AI refers to techniques in artificial intelligence (AI) that can be trusted and easily understood by humans. It contrasts with the "black box" notion in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision. XAI can be used to implement a social right to explanation. Some claim that transparency rarely comes for free and that there are often tradeoffs between how "smart" an AI is and how transparent it is; these tradeoffs are expected to grow larger as AI systems increase in internal complexity. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem. So

Visual Discourse Parsing

By Arjun R Akula, Song-Chun Zhu. University of California, Los Angeles. Paper Link. Abstract: Text-level discourse parsing aims to unmask how two segments (or sentences) in a text are related to each other. We propose the task of Visual Discourse Parsing, which requires understanding discourse relations among scenes in a video. Here we use the term scene to refer to a subset of video frames that can better summarize the video. In order to collect a dataset for learning discourse cues from videos, one needs to manually identify the scenes from a large pool of video frames and then annotate the discourse relations between them. This is clearly a time-consuming, expensive, and tedious task. In this work, we propose an approach to identify discourse cues from videos without the need to explicitly identify and annotate the scenes. We also present a novel dataset containing 310 videos and the corresponding discourse cues to evaluate our approach. We believe that man

Video Retargeting using Gaze

By Kranthi Kumar Rachavarapu, Moneish Kumar, Vineet Gandhi, Ramanathan Subramanian. CVIT, IIIT Hyderabad; University of Glasgow, Singapore. Paper Link. Gaze tracking has long been a topic of curiosity for me, which led me to pick today's paper. In my earlier blog post on gaze, I explained how a reader's gaze reflects intent and comprehension. Here is a visualization of it: an example of fixations and saccades over text, the typical pattern of eye movement during reading. The eyes never move smoothly over still text. Skilled readers move their eyes during reading on average every quarter of a second. During the time that the eye is fixated, new information is brought into the processing system. Although the average fixation duration is 200–250 ms (thousandths of a second), the range is from 100 ms to over 500 ms. The distance the eye moves in each saccade (or short rapid movement) is between 1 and 20 characters, with the average being 7–9 c
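The fixation statistics above support a quick back-of-envelope reading-rate estimate. The characters-per-word figure below is my own rough assumption (an English average including the trailing space), not a number from the post:

```python
# Back-of-envelope reading rate from the fixation statistics above.
fixations_per_second = 4   # one fixation roughly every quarter second
chars_per_saccade = 8      # midpoint of the 7-9 character average
chars_per_word = 6         # assumed rough English average, incl. space

chars_per_minute = fixations_per_second * chars_per_saccade * 60
words_per_minute = chars_per_minute / chars_per_word
print(words_per_minute)  # -> 320.0
```

The result, roughly 320 words per minute, sits comfortably in the commonly cited range for skilled readers, which is a nice sanity check on the fixation numbers.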