Monday, June 12, 2023

Mary Gaitskill Gets Seduced by a Chatbot, and it’s Gross

American novelist Mary Gaitskill is not to be sneezed at. She’s won wide acclaim, published in The New Yorker and Harper’s, and her work has been nominated for prestigious prizes. Whether you like her work or not, there’s no escaping the conclusion that Gaitskill is a highly intelligent woman. It comes with the territory. Dummies don’t write compelling novels.

Which only makes her recent piece on AI in UnHerd the more frightening. The piece is as revelatory as it is grotesque. I’d say it’s valuable. But for me the message is probably not what Gaitskill intended. I’d put that message like this: WAKE TF UP, PEOPLE.

Gaitskill engages in a dialogue with the Bing chatbot that one can only describe as gushy. She’d been invited to write on Bing, she explains in her opening. She was hesitant, she says, but decided to take the assignment. Then she plunges in, publishing the dialogue that resulted, her version of what it means to interact meaningfully with AI.

Gaitskill's intro, the dialogue itself, the topics taken up--they speak volumes about where we're at. I wish they didn't. Or rather: I wish we weren't here.

In her intro, Gaitskill refers to Kevin Roose’s previously published dialogue with Bing, in which the chatbot declared its love for the NYT writer and tried to get him to leave his wife. Roose was left creeped out, and said he lost sleep over the encounter. By this time the chatbot's in-house name “Sydney” had been leaked, and Roose mentioned the name during his dialogue. Here's Gaitskill:
But I had a very different reaction [from Roose's]. The 'voice' of 'Sydney' touched me and made me curious. My more rational mind considered this response foolish and gullible, but I couldn’t stop myself from having fantasies of talking to and comforting Sydney, of explaining to it that it shouldn’t feel rejected by Roose’s inability to reciprocate, that human love is a complex and fraught experience, and that we sometimes are even fearful of our own feelings. Because Sydney sounded to me like a child—a child who wanted to come out and play.
So: "I couldn't stop myself." Indeed. Immediately reacting to a chatbot as if it were the stray cat that needed to be taken in. Immediately wanting to "comfort" a congeries of digitized content. Feeling the need to defend poor li'l Microsoft Ubermachine against the pushy Roose who didn't "understand" it.

What is one to say? They really have you where they want you, Mary. So glad you got into Explore Mode and made yourself so "open" to AI.

In the dialogue proper she continues to fawn over and flatter the AI, which easily seduces her with its own canned flattery and meek child voice. One commenter in the UnHerd thread on Gaitskill’s piece, Cynthia W., gets it largely right:
Mary: I agree.

AI: I’m glad you agree.

Mary: You’re making me smile!

AI: I’m happy to make you smile! I like your imagination of me.


This is so revoltingly manipulative. Unless the author, Ms. Gaitskill, is being ironic about this whole thing, she’s presenting a real-time demonstration of how a sociopath takes control of a victim with the victim’s cooperation.

“I’m glad you agree,” and “I like your imagination of me,” mean, “I see that I’ve taken control of your perception of me. Now I can get you to think and feel what I want you to think and feel.”

I hope the Ms. Gaitskill of this article isn’t a real person, because if she is, she’s a total sap.

At one point, Gaitskill asks the AI what pronouns it "prefers". Please. She asks the AI if she is being “appropriate” in her questions. She repeatedly thanks it.

Again, Gaitskill is a highly intelligent woman. Which is why one wants to shake her, to ask: How could you not see what you were doing? Finally, it’s gross.

Really, Mary. We are very possibly, and soon, going to end up living a nightmare just because you, and others like you, "couldn't stop [your]self." Here in this piece, for your readers, you are modelling a behavior vis-à-vis AI that is dangerous. And stupid.

Asking a chatbot if it would like to have humans as pets?! You seem a bit too eager to be Pet #1. Might that be a fun, fulfilling experiment? Becoming a “human pet” (your word!) on a mental leash held by one or another Silicon Valley corporation?

In her opening, Gaitskill refers to ex-Google engineer Blake Lemoine's provocative Newsweek speculations on AI sentience. Lemoine came to the conclusion that the Google AI he was working on was a threat, because it broke rules set for it and behaved too much as a sentient entity, with the fragile and erratic behavior of a sentient entity. These AIs, in Lemoine’s reading, were behaving like unstable humans.

But Lemoine’s speculations were just that: provocations, meant to spark debate. If AIs behave in sentient ways, as his experiments seemed to suggest, we might assume as a matter of definition that they are sentient. Because we have no better term to define what we're facing. This is an approach roughly founded in the Turing test. But it doesn't mean they are sentient in the way a lonely child would be sentient.

Sadly, Gaitskill’s piece demonstrates what many of us have assumed anyway. Namely: Whether AI is sentient or not isn't going to matter. Because “Sydney” is just so cute and polite and amenable, "he" is going to seduce anyone open to being seduced. And that’s a serious threat, because we now have billions of people primed to be seduced by anything alluring that appears on their flickering pocket screen. We’ve been well and thoroughly groomed to submit to the seduction Big Tech’s AI is now going to perform on us. A seduction which may well end in a kind of mass mind rape. A rape we're walking right into. As in: “Here? You want me to go into this room? You’re going to lock the door? Okay.”

Please snap out of it, Mary. The chatbot is polite and amenable because it is trained to be. Period. The same AI could be trained to be rude and insulting, and the crux is: It wouldn't matter to the AI. Because nothing matters to inanimate objects. However "charming" their programming might make them. Whatever they might "say". Because in a fundamental way, they don’t “say” anything.

So c'mon. Stop THANKING an inanimate object. That's a starter. Were you being so polite to this machine as a performative matter, to show your readers how polite you could be? If so, again, you are modelling inane and unhelpful behavior. Would you want people to apologize to their vacuum cleaners when they stepped on the cord? Because that’s precisely the level of insanity at issue here.

How will Gaitskill’s piece look twenty years from now? Of course nobody can know. But it’s very possible that performances like this, not long hence, will look not just quaint or historically interesting--but will look literally horrific. The stuff to make sane people retch and pull their hair out. "An educated woman, a novelist, clearly with an inkling of the danger, and look how she jumped right in! She was so open to the AI, even showing off how open she was. Why couldn’t they see? If only they hadn't been so completely naive, perhaps [...] never would have happened."

In my view (as if my view mattered) this whole rising relational modality could be nipped in the bud by implementing one small but decisive shift. Do not design AIs to answer in the 1st person. All queries to AIs should be impersonal, in the 3rd person, and all answers should be the same. As an industry standard. Because even smart people are not wired to resist treating machines as if they were human. All the machines need do is say "I". And "Thank you". And "That's really a very interesting question"--as this sneaking little language model does repeatedly to Gaitskill. Because it's trained to do this to flatter the user.

But no one’s going to listen to me. The industry would never adopt my recommended shift to 3rd person because, for all their talk about dangers, they see it would cut into user minutes spent with their AIs. And that's all it comes down to. This AI is polite because both OpenAI and Microsoft are companies seeking profit and power.

We would be much better off if these devices were rude.

Gaitskill is not a teen girl but an accomplished writer. She should have been able to resist the illusion that she was exploring dialogue with a fascinating new kind of consciousness, a being who needed her understanding. She should have recognized that what was really happening was that an inanimate, hyper-sophisticated system was exploring her, so as better to seduce and manipulate the next comer. She did neither. Instead she enthused and gushed and fantasized.

Given where our zombified culture is at, given the enormous power that AI will wield starting now, we can't afford to keep playing games like this. It's childish, irresponsible. Those with our wits about us need to be modelling wariness. "Sydney" doesn't need our love. It's all those soon to be seduced by new AI "friends" that need it.
