
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings

By Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, and Adam Kalai

Boston University, 8 Saint Mary’s Street, Boston, MA
Microsoft Research New England, 1 Memorial Drive, Cambridge, MA


Today's topic is part of Explainable AI (see the previous blog post). Debiasing is one of the aspects that influence a model's decisions.
Abstract
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to “debias” the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
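To make the "direction in the word embedding" mentioned above a bit more concrete, here is a minimal sketch of one simple way to approximate a gender direction from definitional pairs and to measure how strongly other words project onto it. The vectors below are made up purely for illustration (real experiments use the 300-dimensional Google News embeddings), and the paper itself uses PCA over several definitional pairs rather than this simple average.

```python
import numpy as np

# Toy 4-dimensional embeddings, invented for illustration only.
# Real embeddings (e.g. the 300-d word2vec vectors trained on Google News)
# would be loaded from a pretrained model instead.
emb = {
    "she":          np.array([ 0.9, 0.1, 0.3, 0.2]),
    "he":           np.array([-0.8, 0.1, 0.3, 0.2]),
    "woman":        np.array([ 0.8, 0.2, 0.5, 0.1]),
    "man":          np.array([-0.7, 0.2, 0.5, 0.1]),
    "receptionist": np.array([ 0.4, 0.6, 0.1, 0.3]),
    "engineer":     np.array([-0.3, 0.5, 0.2, 0.4]),
}

def normalize(v):
    return v / np.linalg.norm(v)

# Approximate the gender direction by averaging differences of
# definitional pairs (the paper uses PCA over several such pairs).
pairs = [("she", "he"), ("woman", "man")]
g = normalize(sum(normalize(emb[a] - emb[b]) for a, b in pairs))

# A word's "direct bias" here is simply its cosine projection onto g.
for w in ("receptionist", "engineer"):
    bias = float(np.dot(normalize(emb[w]), g))
    print(f"{w:>13s}: projection onto gender direction = {bias:+.2f}")
```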

How does an AI system become biased and therefore discriminate against some people? And how do we try to reduce or eliminate this effect in our AI systems? Let's start with an example. A group at Microsoft found the remarkable result that when an AI learns from text on the Internet, it can learn unhealthy stereotypes. To their credit, they also proposed technical solutions for reducing the amount of bias in this type of AI system. Here's what they found. By having an AI read text on the Internet, it can learn about words, and you can ask it to reason about analogies. So, you can quiz the AI system: now that you've read all this text on the Internet, complete the analogy, man is to woman as father is to what? The AI will output the word mother, which reflects the way these words are typically used on the Internet. If you ask it, man is to woman as king is to what? Then the same AI system will say king is to queen, which again seems reasonable relative to the way these words are used on the Internet. The researchers also found the following result: if you ask it, man is to computer programmer as woman is to what? That same AI system would output the answer, woman is to homemaker. I think this answer is really unfortunate. A less biased answer would be to say, woman is to computer programmer.



If we want our AI system to understand that men and women can equally be computer programmers, just as men and women can equally be homemakers, then we would like it to output man is to computer programmer as woman is to computer programmer, and also man is to homemaker as woman is to homemaker. How does an AI system learn to become biased like this from data? Let's dive a bit more into the technical details. The way an AI system stores words is as a set of numbers. So, let's say the word man is stored, or we sometimes say represented, as the two numbers 1,1. The way an AI system comes up with these numbers is through statistics of how the word man is used on the Internet. The specific process for how these numbers are computed is quite complex, and I won't go into that here. But these numbers represent the typical usage of these words. In practice, an AI might have hundreds or thousands of numbers to store a word, but I'm going to use just two numbers here to keep the example simpler.



Let's plot these on a chart. The word man I'm going to plot at the position 1,1 in the figure on the right. By looking at the statistics of how the phrase computer programmer is used on the Internet, the AI will have a different pair of numbers, say 3,2, to store or represent the phrase computer programmer. Similarly, by looking at how the word woman is used, it will come up with a different pair of numbers, say 2,3, to store or represent the word woman. When you ask the AI system to compute the analogy above, man is to computer programmer as woman is to what, the AI system will construct a parallelogram that looks like this. It will ask, what is the word associated with the position 4,4? Because it thinks that is the answer to this analogy. One way to think about this mathematically is that the AI thinks the relationship of man to computer programmer is that you start from the word man, go two steps to the right, and one step up. So, to find the answer for woman is to what, you also go two steps to the right and one step up. Unfortunately, when these numbers are derived from text on the Internet, the way the word homemaker is used causes it to be placed at the position 4,4, which is why the AI system comes up with this biased analogy.
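Here is a minimal sketch of this parallelogram reasoning in code, using the toy 2-dimensional vectors from the example above plus a couple of invented vocabulary entries. The analogy answer is simply the stored word closest to the point computer programmer - man + woman; real systems use hundreds of dimensions and search over the whole vocabulary.

```python
import numpy as np

# Toy 2-d vectors from the example above, plus a hypothetical vocabulary
# with "homemaker" placed at (4, 4) as described in the text.
emb = {
    "man":                 np.array([1.0, 1.0]),
    "woman":               np.array([2.0, 3.0]),
    "computer programmer": np.array([3.0, 2.0]),
    "homemaker":           np.array([4.0, 4.0]),
    "mother":              np.array([2.5, 4.5]),
}

def analogy(a, b, c, vocab):
    """a is to b as c is to ? -> the word whose vector is closest to b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return min(candidates, key=lambda w: np.linalg.norm(candidates[w] - target))

# man : computer programmer :: woman : ?
print(analogy("man", "computer programmer", "woman", emb))  # -> "homemaker"
```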



AI systems are already making important decisions today, and will continue to do so in the future, so bias matters. For example, there was a company using AI for hiring that found its hiring tool discriminated against women. This is clearly unfair, and so the company shut down its tool. Second, there are also some facial recognition systems that seem to work more accurately for light-skinned than for dark-skinned individuals. If an AI system is trained primarily on data from lighter-skinned individuals, then it will be more accurate for that category of individuals. To the extent that these systems are used in, for example, criminal investigations, this can create a very biased and unfair effect for dark-skinned individuals. So, many face recognition teams today are working hard to ensure that their systems do not exhibit this type of bias. There have also been AI or statistical loan approval systems that wound up discriminating against some minority ethnic groups and quoted them higher interest rates. Banks have been working to diminish or eliminate this type of bias in their approval systems. Finally, it's important that AI systems do not contribute to the toxic effect of reinforcing unhealthy stereotypes.

For example, if an eight-year-old girl goes to an image search engine and searches for Chief Executive Officer, and she sees only pictures of men, or sees no one who looks like her either by gender or ethnicity, we don't want her to be discouraged from pursuing a career that might someday lead her to be the Chief Executive Officer of a large company. Because of these issues, the AI community has put a lot of effort into combating bias. For example, we're starting to have better and better technical solutions for reducing bias in AI systems. Simplifying the description a little bit, researchers have found that when an AI system learns a lot of different numbers with which to store words, there are a few numbers that correspond to the bias. If you zero out those numbers, just set them to zero, then the bias diminishes significantly. A second solution is to use less biased or more inclusive data. For example, if you are building a face-recognition system and make sure to include data from multiple ethnicities and all genders, then your system will be less biased and more inclusive.
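The "zero out those numbers" idea corresponds roughly to the neutralize step described in the paper: for gender-neutral words, subtract the component of the word vector that lies along the gender direction. A minimal sketch, with a made-up unit direction and word vector purely for illustration:

```python
import numpy as np

def neutralize(vec, g):
    """Remove the component of vec that lies along the unit gender direction g."""
    return vec - np.dot(vec, g) * g

# Example with an invented 2-d direction and word vector.
g = np.array([1.0, 0.0])          # pretend the bias lives entirely on axis 0
receptionist = np.array([0.7, 0.4])
debiased = neutralize(receptionist, g)
print(debiased)                   # -> [0.  0.4]
print(np.dot(debiased, g))        # -> 0.0, the projection onto g is now zero
```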

Second, many AI teams are subjecting their systems to better transparency and auditing processes, so that we can constantly check what types of bias, if any, these AI systems are exhibiting, recognize the problem if it exists, and then take steps to address it. For example, many face recognition teams are systematically checking how accurate their system is on different subsets of the population, to check whether it is more or less accurate on dark-skinned versus light-skinned individuals. Having transparent systems as well as systematic auditing processes increases the odds that we will at least quickly spot a problem, in case there is one, so that we can fix it.

Having a diverse workforce will also help reduce bias. If you have a diverse workforce, then the individuals in your workforce are more likely to spot different problems, and maybe they'll help make your data more diverse and more inclusive in the first place. By having more unique points of view as we're building AI systems, I think there's hope that all of us can create less biased applications. AI systems are making really important decisions today, and so the bias, or potential for bias, is something we must pay attention to and work to diminish. One thing that makes me optimistic is that we actually have better ideas today for reducing bias in AI than for reducing bias in humans. So, while we should never be satisfied until all AI bias is gone, and it will take quite a bit of work to get there, if we can take AI systems that start off with a level of bias similar to humans, because they learned from humans, and cut down the bias from there through technical solutions or other means, then as a society the decisions we make, whether through humans or through AI, can hopefully become more fair and less biased.


From the paper: Ten possible word pairs to define gender, ordered by word frequency, along with agreement with two sets of 100 words solicited from the crowd, one with definitional and one with stereotypical gender associations. For each set of words, comprising the most frequent 50 female and 50 male crowd suggestions, the accuracy is shown for the corresponding gender classifier based on which word is closer to a target word; e.g., the she-he classifier predicts a word is female if it is closer to she than to he. With roughly 80-90% accuracy, the gender pairs predict the gender of both stereotypes and definitionally gendered words solicited from the crowd.
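The she-he classifier described in this caption is just a nearest-of-two check. A minimal sketch with invented toy vectors (the paper's experiments use the actual Google News embeddings):

```python
import numpy as np

# Toy vectors, invented for illustration; real experiments use the
# 300-d Google News word2vec embeddings.
emb = {
    "she":     np.array([ 0.9, 0.2]),
    "he":      np.array([-0.9, 0.2]),
    "sister":  np.array([ 0.7, 0.5]),
    "brother": np.array([-0.6, 0.5]),
}

def gender_by_pair(word, female_word="she", male_word="he"):
    """Predict 'female' if the target word's vector is closer to she than to he."""
    d_female = np.linalg.norm(emb[word] - emb[female_word])
    d_male = np.linalg.norm(emb[word] - emb[male_word])
    return "female" if d_female < d_male else "male"

for w in ("sister", "brother"):
    print(w, "->", gender_by_pair(w))
```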
