Once upon a time there was a virtual assistant named Ms. Dewey, a comely librarian played by Janina Gavankar who helped you with your queries on Microsoft’s first attempt at a search engine. Ms. Dewey was launched in 2006, with over 600 recorded lines of dialogue. She was ahead of her time in several respects, but one particularly overlooked example was captured by information scientist Miriam Sweeney in her 2013 doctoral dissertation, where she detailed the gendered and racialized implications of Dewey’s responses. They included lines like, “Hey, if you can get into your computer, you can do whatever you want to me.” Searching for “blow jobs” played a clip of her eating a banana, while typing in terms like “ghetto” prompted her to rap with lyrics including gems like “No, goldtooth, ghetto-fabulous mutha-fucker BEEP steps to this piece of [ass] BEEP.” Sweeney breaks down the obvious: Dewey was designed to cater to a white, straight male user. Blogs at the time praised Dewey’s flirtation, after all.
Ms. Dewey was shut down by Microsoft in 2009, but later critics — myself included — identified a similar pattern of bias in the way some users interacted with virtual assistants like Siri or Cortana. When Microsoft engineers revealed they had programmed Cortana to firmly rebuff sexual requests or advances, there was seething outrage on Reddit. One much-upvoted post read, “Are these fucking people serious?! ‘Her’ entire goal is to do what people tell her to do! Hey, bitch, add this to my calendar… The day Cortana becomes an ‘independent woman’ is the day that software becomes fucking useless.” Criticism of such behavior flourished, including from your humble correspondent.
Now, amid the pushback against ChatGPT and its ilk, the pendulum has swung hard, and we are warned against empathizing with these things. This is a point I made following the LaMDA AI fiasco last year: a bot doesn’t have to be intelligent for us to anthropomorphize it, and that fact will be exploited by profiteers. I stand by that warning. But some have gone further, suggesting that past criticisms of people who abused their virtual assistants look naïve in retrospect. Maybe the men who repeatedly called Cortana a “bitch” were onto something!
It may shock you to learn that this is not the case. Not only were past criticisms of AI abuse correct, but they anticipated the more dangerous digital landscape we face today. The real reason the criticism has shifted from “people are too mean to bots” to “people are too nice to them” is that the political economy of AI has suddenly and radically changed, and with it, the selling points of tech companies. Where bots were once sold to us as perfect servants, they will now be sold to us as our best friends. But in each case, the pathological response to each generation of bots has implicitly required us to humanize them. Bot owners have always weaponized our worst and best impulses alike.
A counter-intuitive truth about violence is that, although dehumanizing, it actually forces the abuser to see you as human. It’s a grim reality, but everyone from war criminals to pub goons enjoys, to some degree, the thought of their victims feeling pain. Dehumanization is not the inability to see someone as human, but the desire to see someone as less than human and to act accordingly. So, on some level, it’s precisely the degree to which people mistake their virtual assistants for real human beings that encourages them to abuse them. It wouldn’t be fun otherwise. This brings us to the present moment.
The previous generation of AI assistants were sold to us as perfect servants — a sophisticated sound system, or perhaps Majel Barrett’s Starship Enterprise computer. Yielding, omniscient, always ready to serve. The new chatbot search engines carry some of the same associations, but as they evolve, they will increasingly be sold to us as our new confidants, even our new therapists.