
Can Artificial Intelligence Be Sexist/Racist/Biased?

October 26, 2018

Written by Gabby Chia

It's the spookiest time of the year with October coming to an end, and what better time to look at something both amazing and scary: Artificial Intelligence (AI). AI is closer than you think. In fact, you're probably holding it. From speech recognition engines to self-driving cars, this hotly debated technology is all around us. Some focus on the benefits of artificial intelligence, like having your Google Home turn off your lights and make your coffee, but setting aside the robots-taking-over-the-world argument, what are the pitfalls of AI? Could it express the same racism, sexism, and biases as humans?

Source: CNBC

Can AI be sexist?

AI has been shown to be capable of reproducing human errors. The human brain relies on mental shortcuts called heuristics, which can produce systematic biases; one example is the representativeness heuristic. This heuristic contributes to probability errors that are common in humans, especially when we are given limited information. A famous example from Tversky and Kahneman is the Linda problem: participants are given a description of a woman and are then asked which of two options is more likely.

They are told, “Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which is more probable? (1) Linda is a bank teller; (2) Linda is a bank teller and active in the feminist movement.”

Tversky and Kahneman found that most people choose option 2. However, the question asks for the more probable answer, and it is always more likely that Linda is a bank teller than that she is a bank teller and active in the feminist movement. In other words, the first option says she is a bank teller but does not exclude the possibility that she belongs to other groups as well, such as feminists; the second option restricts her to the smaller group of people who are both feminists and bank tellers. What does this all mean, and how does it relate to AI's bias? Well, unlike us, AI is much better at weighing probability. But, like us, what happens when it doesn't have all the information it needs to make those judgments?
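To see why option 2 can never be the more probable one, here's a minimal sketch of the conjunction rule. The probabilities themselves are invented purely for illustration:

```python
# Conjunction rule: P(A and B) <= P(A) for any two events.
# The numbers below are invented purely for illustration.
p_bank_teller = 0.05            # P(Linda is a bank teller)
p_feminist_given_teller = 0.30  # P(feminist | bank teller)

# The probability of the conjunction is the product, so it can
# never exceed the probability of either event on its own.
p_both = p_bank_teller * p_feminist_given_teller

print(f"P(bank teller)              = {p_bank_teller:.3f}")  # 0.050
print(f"P(bank teller AND feminist) = {p_both:.3f}")         # 0.015
assert p_both <= p_bank_teller  # holds no matter what numbers you pick
```

No matter what numbers you plug in, the conjunction can never come out ahead, which is exactly the rule most participants violate.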

Researchers working on AI have found that a similar sort of fallacy can and does occur in machines. When shown photos of people in a kitchen, regardless of the subject's actual gender, an image-labelling AI would classify the subject as a woman, and it would default to "man" for people hunting or playing sports. Not only would it reproduce the bias in its training data, it would amplify it. It even went so far as to classify kitchen-related items as relating to women: it automatically associates "woman" with "kitchen", even when that label doesn't apply.
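This isn't the researchers' actual measurement, but the dynamic can be sketched with a toy calculation (all numbers invented): if a label co-occurs with a context some of the time in the training data, a model that uses that context as a shortcut can end up predicting the label even more often than the data warrants.

```python
# Toy illustration of bias amplification (all numbers invented).
# Suppose "cooking" images co-occur with the label "woman" in 66%
# of the training data. A model that leans on kitchen context as a
# shortcut can end up predicting "woman" even more often than that.

training_cooccurrence = 0.66  # fraction of cooking images labelled "woman"

# Pretend predictions on 100 test images of people cooking, where the
# model has learned the shortcut "kitchen context => woman":
predictions = ["woman"] * 84 + ["man"] * 16

predicted_rate = predictions.count("woman") / len(predictions)
amplification = predicted_rate - training_cooccurrence

print(f"training co-occurrence : {training_cooccurrence:.0%}")  # 66%
print(f"model prediction rate  : {predicted_rate:.0%}")         # 84%
print(f"bias amplification     : {amplification:+.0%}")         # +18%
```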

Source: iManage

Similarly, researchers from Carnegie Mellon University and the International Computer Science Institute created a tool called AdFisher to see how Google's ad system targets job seekers depending on their gender. They found that an ad for high-paying executive jobs was shown to users listed as men 1,852 times, but only 318 times to users listed as women: nearly six times fewer.
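A quick check of that ratio, using the counts reported in the study:

```python
# Reported AdFisher impression counts for the high-paying executive ad.
impressions_men = 1852
impressions_women = 318

ratio = impressions_men / impressions_women
print(f"Men saw the ad {ratio:.1f}x as often as women.")  # ~5.8x

# A rough fairness proxy (counts only, not per-user rates): the share
# of impressions shown to one group relative to the other.
print(f"Impression ratio, women/men: {impressions_women / impressions_men:.2f}")  # ~0.17
```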

Why could such a phenomenon be reproduced in an entirely man-made mechanism? For that plain reason: it is entirely man-made. The lack of women's perspectives in the development of AI is a huge issue that could lead to unfavourable repercussions if the female perspective continues to be ignored. This is not to say it is the male programmer's fault for being male; the fault lies with social constructs that create barriers for women who strive to pursue STEM careers.

Source: TED Talk

Can AI be racist?

A woman's perspective is not the only one missing from programming. AI has been shown to reproduce racial bias as well: not in the same sense as human racism, which develops from an individual's beliefs, but because of the same issue behind sexist robots, the lack of perspectives.

Joy Buolamwini gives an inspiring TED Talk on her frustrating relationship with facial recognition software that could not register her face until she put on a white mask. This caused problems during her computing science undergraduate degree, where she worked with social robots. The goal was to get a robot to play peek-a-boo, but Buolamwini hit a wall: the robot could not recognize her face. She calls this phenomenon the "coded gaze", a play on the widely discussed "male gaze" from conversations about sexism. She decided to challenge this issue by creating a website and working on her own algorithm to fight bias in machine learning, which you can learn more about here.

Source: Twitter user @jackyalcine

Due to the complexity of the human brain, it is incredibly difficult and demanding to simulate the full range of human processes. However, even our brains use shortcuts. Heuristics, which we mentioned earlier, are only one of them. Programmers use similar shortcuts to replicate human recognition, and what better shortcut than stereotyping, right? Before you get too confused: stereotyping is widely assumed to be synonymous with sexism, racism, and so on, but it is really a general (and very complex) categorization mechanism, and AI has not been doing great with it so far. A notorious example is when Google Photos mistakenly classified Black people as gorillas. This incredibly discriminatory problem has at least one upside: it resurfaced the issue of underrepresentation in programming. AI learns through data, algorithms, and experience; but how will it learn to distinguish a wide range of faces if they are not represented during the programming stage?
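A back-of-the-envelope sketch of why this matters (the numbers are invented): when one group makes up only a sliver of the test data, a system can fail badly for that group while still posting an impressive overall score.

```python
# Back-of-the-envelope sketch (invented numbers): a headline accuracy
# figure can hide near-total failure on an underrepresented group.

share_a, share_b = 0.95, 0.05  # group shares in the test set
acc_a, acc_b = 0.99, 0.50      # per-group accuracy

overall = share_a * acc_a + share_b * acc_b
print(f"overall accuracy: {overall:.1%}")  # ~96.6% -- looks great
print(f"group B accuracy: {acc_b:.0%}")    # 50% -- a coin flip
```

If nobody evaluates the system per group, that coin-flip performance never shows up in the headline number.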

Can AI be biased?

AI can't really be biased, per se, since it cannot "want" to believe something; it just, sort of, does. Even saying it can be programmed to believe would be difficult to confirm, since belief suggests some kind of agency in thinking. However, programmers can reproduce certain values through the machine; those values would not belong to the robot, but to the programming. Until we create a fully self-aware, learning AI, that is.

Source: Seattle Times

How is this different from the sexism or racism argument, you might be wondering? Well, let's assume that the programmers tried their hardest to keep the AI unbiased, yet it continues to exhibit unbalanced values. The programmers may have inadvertently favoured their own biases and programmed them in; with gender and race, by contrast, the issue is what is missing rather than what was put in.

Biases are influenced by fear, anger, passion, and other emotions beyond logic. Computation can only take us so far in this respect: how would one quantify humiliation, for example? Or, one step further, the concept of schadenfreude, defined as "the experience of pleasure, joy, or self-satisfaction that comes from learning of or witnessing the troubles, failures, or humiliation of another"? Additionally, AI has difficulty weighing something like importance. AI treats all mistakes as equal, but of course they aren't. For example, asking Siri to send a text saying "get here fast" is much more problematic when it's sent to 911 than when it's sent to the wrong person in your contact list. If AI cannot differentiate the significance of the two mistakes, it will have difficulty learning value judgement. In this sense, it could be said to be unbiased, since it treats the situations the same, but not in a way that benefits us.
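In machine-learning terms, one way to address this is cost-sensitive weighting: rather than counting every error as one mistake, each kind of error is assigned its own weight. A minimal sketch, with invented costs and hypothetical error names:

```python
# Minimal sketch of cost-sensitive error weighting.
# The error names and costs below are invented for illustration.

error_costs = {
    "text_wrong_contact": 1.0,     # mildly embarrassing
    "text_911_by_accident": 50.0,  # potentially serious
}

def total_cost(mistakes):
    """Sum the weighted cost of a list of observed mistakes."""
    return sum(error_costs[m] for m in mistakes)

# Two systems, each making exactly two mistakes:
system_a = ["text_wrong_contact", "text_wrong_contact"]
system_b = ["text_wrong_contact", "text_911_by_accident"]

print(total_cost(system_a))  # 2.0  -- same error *count*...
print(total_cost(system_b))  # 51.0 -- ...very different severity
```

Choosing those weights is, of course, exactly the kind of value judgement the paragraph above says machines struggle with; a human still has to decide that texting 911 by accident is fifty times worse.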

Source: Twitter

It is incredibly difficult to verify whether AI matches every aspect of human cognition, and given the programming field's lack of diversity and our incomplete knowledge of the brain, problems may only surface once a system is deployed.

In retrospect, your robot costume in 6th grade was a little bit scarier than you thought, wasn't it? You might be thinking: what can we do to minimize sexism, racism, and bias in AI programming before it's too late? The simplest solution is to find ways to recruit and retain diverse people in technical fields, especially computer engineering and computing science. This would widen the range of perspectives going into programming and help eliminate the "coded gaze", the "male gaze", and any other problematic gazes that arise from discriminatory practices. Encourage young girls to pursue STEM careers as much as you would young boys, to ensure that women and other minority groups do not face discrimination from technology in the future! Our society is working towards ever more complex AI, so let's make sure we hear from as many voices as possible to keep the doom and gloom on the big screen, rather than on the screens we keep in our pockets.

If you're interested in learning more about AI, you might want to check out our podcast with roboticist Angelica Lim here. Or get involved with Joy Buolamwini's fight against algorithmic bias by testing out software here.