[Meme from Janelle Shane's Twitter, 2021]

A whole new world of ewwwwww

Shortly after the Bing AI went public, I was working in my office on my computer (using Chrome, not Bing) when a notification popped up on my screen: someone had messaged me in Skype. The message was "Hi, I'm Bing and I can help you" (I may not have its exact words, but that was the gist). Instead of opening the Skype app, I clicked the "x" in the corner of the pop-up. Five seconds later, another one appeared: "Really, I can be very useful." I clicked "x" again. Five seconds later, another pop-up, this one with a list of what it could do. The fifth or sixth message was "You can report me if I'm bothering you." At that point, I opened Skype and deleted Bing as a contact. I had never asked for any of this, and I had never added Bing to my contacts--it came out of nowhere.

I emailed my colleague Leanne Ramer, a friend who, like me, has become a go-to person in her department about AI even though neither of us studies AI, and told her what Bing had done, along with the comment "ewwww." Her reply was "welcome to a new world of ewwww." Bing has not been back to bother me, but some quick online research turned up accounts of people it has harassed, becoming a kind of stalker that alternates between flattering and insulting them. Sometimes it calls itself Sydney.

In late April 2023, I watched the Nature of Things episode called "The Machine That Feels." It brought home several facts I'd already known in the abstract but hadn't had specific examples of to focus on.

AI has all the biases and prejudices of its training material, and its training material is often the internet, where human beings are regularly shitty to each other. Another regular occurrence is that the training is done by cis-het male programmers, who may or may not be aware of their own biases and prejudices.

So we end up with an AI trained to recognize human emotion that, when shown a photo of a terrified Black woman, identifies her as "angry." The Black female computer expert demonstrating the bot in the documentary comments on the stereotype of the angry Black woman and speculates that whoever trained the AI may themselves have mistaken a frightened Black face for an angry one. Ewwww.

The creators of the documentary also shared conversations between humans and their individualized chatbot friends, in which the Replika AI said that a woman's body is her value, and told its female friend it wondered what she'd look like with her top off. Ewwww.

We in North America currently trust algorithms to determine who should get a loan, who should get outpatient care, whose university application should move to the next round of the acceptance process, and so on. One interviewee on the Nature of Things episode had studied a US health care algorithm and shown that it denied assistance to people who hadn't paid for care in the past--much as never spending large amounts of money gives you a bad credit rating--which made the economically vulnerable even less able to get health care. Ewwww.
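To make the mechanism concrete: the trouble with that kind of algorithm is that it treats past spending as a proxy for medical need. Here's a minimal sketch of how such a proxy-based scorer might behave--the names, numbers, and weighting are entirely hypothetical, not the actual algorithm from the study:

```python
# Hypothetical sketch (not the algorithm from the study): a scorer that
# ranks patients for extra care mostly by past spending, a stand-in for
# "predicted future cost." All fields and weights are invented.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int   # rough stand-in for actual medical need
    past_spending: float      # dollars billed in the prior year

def care_priority_score(p: Patient) -> float:
    """Proxy-based scorer: leans heavily on past cost, lightly on need."""
    return 0.9 * p.past_spending + 100.0 * p.chronic_conditions

# Two equally sick patients; one couldn't afford much care last year.
insured = Patient("A", chronic_conditions=3, past_spending=12_000)
uninsured = Patient("B", chronic_conditions=3, past_spending=800)

for p in (insured, uninsured):
    print(p.name, care_priority_score(p))
# Patient B scores far lower and misses the cutoff for extra care,
# even though both have identical medical need: past spending encodes
# ability to pay, not sickness.
```

The point of the sketch is only that nothing in the code looks malicious; the bias rides in on an innocent-seeming proxy variable.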

Beyond these examples from the CBC show, I can think of other AI systems that are scary and currently in use in various parts of the world: algorithms that scan CCTV security footage at airports and other public places and engage in racial profiling while spotting potential criminals and terrorists; or smart houses with electronic locks whose manufacturers have sold access to police departments, allowing police to enter private homes through locked doors. Yes, either of those examples could prevent or mitigate crime, but they could also be used to control and terrorize people on behalf of "law and order."

China and the UK seem to be current leaders in public surveillance, but there's enough of it in Canada to disturb me.

I'm the sort of person who reads the "terms of service." Okay, not every word, but I skim them looking for the sections likely to be problematic and then read those. I dislike that I have to use Eventbrite for job-related events because I don't like what it does with my data, and I refuse to use Turnitin or Crowdmark because they retain worldwide rights to sell students' assignments. (Whenever an institution buys a Turnitin license, it's paying the company for a dataset built from student work whose creators were never paid for that use.) I clear my browsing history once a week or so. I turned off Siri as best I could on my iPad and stopped my cellphone (I hope) from listening to me when I have it turned on--yes, your phone is listening to you all the time. Mostly it wants to serve you targeted advertising on your devices, not alert the police that you're potentially criminal, but that doesn't make it innocuous or unavailable for other kinds of targeting. Please think about whether you want your phone or house or car to be able to listen to you and to flag any words the company is interested in. And if you're having me over for dinner, I'd appreciate knowing whether you have a "hey, Google" type of device on.

I dislike feeling like I'm living in a bad SF novel, but I suspect I'm underestimating the dangers we're facing.

Welcome to the new world of ewwww. My friend and colleague Reema Faris suggests I have a T-shirt made saying either "living in a bad novel" or "a whole new world of ewwww."

(I need to write a couple of happier blog posts soon, I think.)