Discriminating Data with Wendy Hui Kyong Chun: Event Summary

October 25, 2021

Kayla Hilstob
PhD student, SFU's School of Communication

The views and opinions expressed in SFU Public Square's blogs are those of the authors, and they do not necessarily reflect the official position of Simon Fraser University, SFU Public Square or any other affiliated institutions in any way.

In the first President's Faculty Lecture of the 2021/2022 series, Dr. Wendy Hui Kyong Chun discussed her forthcoming book, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition. The book “reveals how algorithms encode legacies of segregation, eugenics and multiculturalism through proxies, latent factors, principal components and network neighbourhoods to create agitated clusters of comforting rage.” This event was the first of six talks in the series, each approaching the theme of equity and justice from a different disciplinary perspective.

Sxwpilemaát Siyám (Chief Leanne Joe) from the Squamish Nation opened the event with a welcome to the territories of the Sḵwx̱wú7mesh Úxwumixw (Squamish), səl̓ilw̓ətaʔɬ (Tsleil-Waututh) and xʷməθkʷəy̓əm (Musqueam) Nations. She invited us to open our hearts and minds and emphasized the importance of creating spaces of curiosity and transformation with the knowledge being shared at this event. SFU President Dr. Joy Johnson noted that this talk was taking place the day before our first-ever National Day for Truth and Reconciliation, during a collective reckoning with the legacy of Indian Residential Schools and a time to stand in solidarity.

Dr. Chun opened with the premise that the equity and justice challenges we face today feel overwhelming to the point that they “threaten the vision of ourselves and a multi-cultural society.” Her forthcoming book, Discriminating Data, offers a timely exploration of these monumental questions in the context of social media and predictive algorithms. She reminded us that we have blamed this problem on the internet, social media, polarization, misinformation, and the hate they espouse. In the view presented in the well-known Netflix documentary The Social Dilemma, “the challenges social media present are new and unprecedented because recommender systems have hacked the human psyche.” And, “we have been manipulated like marionettes by evil tech dudes... and it’s game over for humanity,” remarked Dr. Chun. Others, she contrasted, point out that this is not new, and that blaming tech for everything keeps us from taking on fundamental issues. For Dr. Chun, it is not either/or: “they are part of the same chorus,” and through social media networks, “we continue to relive this history.”

What Dr. Chun means by this provocative statement is that, in their current form, our predictive programs embed within them past mistakes: segregation, internment and eugenics that “make it really difficult to imagine and live a different future.” For example, images of mostly white U.S. celebrities make up the training data for facial recognition systems, not civil rights activists or interned Japanese children. She asked us to “imagine a world where nothing could change from this deepfake past” premised on the white ideal, because “learning and truth equaled repeating this past.” For Dr. Chun, this is the nightmare of predictive machine learning. Applying Ariella Azoulay’s concept, we need instead to “unlearn, not by ignoring history but engaging it differently.”

Overall, Dr. Chun sees Discriminating Data in dialogue with scholars such as Cathy O’Neil, Safiya Umoja Noble, Ruha Benjamin, Meredith Broussard, Kate Crawford, Virginia Eubanks, and others. In conversation with their work, Dr. Chun gave us a sample of the five main things that Discriminating Data does:

  1. Expose and investigate how ignoring difference amplifies discrimination, both currently and historically. Dr. Chun explained how ignoring race promotes racism using the now-classic example of COMPAS, an algorithm used by some U.S. courts to predict the risk of recidivism. She critiqued the idea that “pretending visual markers do not exist solves the problem of racism.” Researchers have shown that this program discriminates against racial minorities “because [if] a program can’t see race, it can’t see racism.” (A minimal sketch of this proxy effect follows this list.)
  2. Interrogate default assumptions that ground algorithms and data structures. Her book explains how overblown promises about correlation tie 21st-century data analytics to 20th-century eugenics. She also shows how homophily, the notion that similarity breeds connection, “ties social networks to U.S. segregation” due to its emergence from studies on residential segregation. Thus, for Dr. Chun, echo chambers aren’t an accident, but the goal. “Segregation is the default.”
  3. Grasp the future that machine learning algorithms put in place, focusing on when, why and how their predictions work. Dr. Chun explained that “for machine learning, truth equals consistency, therefore the future equals the past.” Programs like COMPAS are tested on their ability to predict the past, which reinforces inequalities. Therefore, according to Dr. Chun, they “do not offer new and unforeseen futures, they close the future [and] they automate past mistakes so we can no longer learn from them.”
  4. Use existing AI systems to diagnose current inequalities. Dr. Chun suggests that we can read AI systems against the grain to understand discrimination, to use them as evidence. She returned to COMPAS, where the program “showed that racial bias is institutional.” For Dr. Chun, COMPAS shows us how racism works.
  5. Devise different algorithms and ways to verify machine learning programs that displace the eugenic and segregationist network structures of the present. As they currently stand, our networks rely on two studies with serious ethical and methodological issues, which Dr. Chun pointed out. The first is homophily, where an overselection of white illiberal responses led to the conclusion that similarity breeds connection. The second is sentiment analysis, a method developed in Japanese internment camps to manage people in occupied lands. She pointed out, powerfully, that these people were meant to disappear.
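
To make the point in item 1 concrete, here is a minimal, hypothetical sketch of how a “colour-blind” score can still discriminate. It is not COMPAS itself (whose model is proprietary); the feature names, weights and synthetic data below are invented purely for illustration. A correlated proxy such as neighbourhood carries the racial signal into a model that never sees race, and because race is absent from the data, the resulting disparity is harder to even detect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic population: a protected attribute the model never sees,
# plus a "neighbourhood" feature that strongly tracks it (the proxy).
race = rng.integers(0, 2, n)                    # 0 or 1; withheld from the model
neighbourhood = race + rng.normal(0.0, 0.3, n)  # proxy correlated with race
priors = rng.poisson(2, n).astype(float)        # prior arrests; race-neutral here

# A stand-in for a "colour-blind" risk model: a fixed linear score over
# the non-race columns only (invented weights, purely for illustration).
risk = 0.7 * neighbourhood + 0.3 * priors

# The model never saw race, yet its scores split sharply along racial
# lines, because the proxy carried the signal in.
print("mean risk, group 0:", round(risk[race == 0].mean(), 2))
print("mean risk, group 1:", round(risk[race == 1].mean(), 2))
```

The same dynamic, Dr. Chun argues, runs through latent factors, principal components and network neighbourhoods, not just explicit data columns.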

She ended with an earnest question as she showed us photos of Japanese internees: “We live in their spaces when we reside in social media; how might we reside together with them?”

In the question and answer period, Dr. Johnson and Dr. Chun conversed over some fundamental yet controversial questions of the contemporary networked world. Dr. Chun’s message to the audience was not to accept what is given to you as a default, but rather to question it constantly, as there will always be biases in what is presented. The most salient question of the night, relayed by Dr. Johnson from an audience member, was “Is it possible to have a better internet?” Dr. Chun replied, “We can have a better world, and then a better internet!” This answer lies at the heart of Discriminating Data: undoing the legacies of eugenics and segregation all around us will build a better world first offline, so that we can then build it online.
