You don’t want to end up in Tartarus, a sort of prison chamber at the lower depths of Hades. The mythographer Apollodorus described it as “a gloomy place in Hades as far distant from earth as earth is from sky.” This is where Zeus imprisoned the Titans, the previous ruling generation of gods, after overthrowing them. In Tartarus, according to Hesiod, the “Titan gods are hidden under murky gloom … at the farthest part of huge earth. They cannot get out, for Poseidon has set bronze gates on it.”
The sea god has gated them in under murky gloom. This is the fate of the Titans. So if you're going to name a passenger ship that you hope will be a huge success, "Titanic" isn't exactly an auspicious choice. Before christening your ship Titanic, you should at least have considered the fate of the original Titans.
Under murky gloom, off the coast of Newfoundland, sits the wreck of the Titanic. And now, somewhere in the vicinity, are the remains of five men who hoped to visit that wreck in a small submersible. The name of that submersible? The Titan.
You’d think people would have learned the first time.
Some names just "sound cool," I suppose, so whoever's in charge grabs at them when launching a new venture. The name "Ixion" sounds cool. It's also from Greek myth. It's the name Ford chose for one of its vehicle models, the Ford Ixion.
Ixion was a king of the Lapiths, an ancient tribe of Thessaly. He murdered his father-in-law, and then, brought to Olympus, tried to seduce Zeus' wife Hera. Not a good idea. He was punished by being fastened to a fiery wheel that would spin for eternity.
So if I’m going to name a new vehicle, why not choose "Ixion" and associate my vehicle with being tortured on a fiery wheel that never stops spinning? Sounds like what most customers want in a Sunday drive.
While we're on this topic of branding and ancient names, we should remember the Trojan War. That war ended, of course, when the Trojans' wall was finally breached through the wiles of the Greek hero Odysseus. The Greeks entered the great city and had their way with it, finally burning it to the ground. The point is that the Trojans and their wall were not exactly impermeable.
Hey, what a great name for a condom brand!
E.M.
Monday, June 12, 2023
Mary Gaitskill Gets Seduced by a Chatbot, and it’s Gross
American novelist Mary Gaitskill is not to be sneezed at. She’s won wide acclaim, published in The New Yorker and Harper’s, and her work has been nominated for prestigious prizes. Whether you like her work or not, there’s no escaping the conclusion that Gaitskill is a highly intelligent woman. It comes with the territory. Dummies don’t write compelling novels.
Which only makes her recent piece on AI in UnHerd all the more frightening. The piece is as revelatory as it is grotesque. I'd say it's valuable, though the message, for me, is probably not the one Gaitskill intended. I'd put that message like this: WAKE TF UP, PEOPLE.
Gaitskill engages in a dialogue with the Bing chatbot that one can only describe as gushy. She’d been invited to write on Bing, she explains in her opening. She was hesitant, she says, but decided to take the assignment. Then she plunges in, publishing the dialogue that resulted, her version of what it means to interact meaningfully with AI.
Gaitskill's intro, the dialogue itself, the topics taken up--they speak volumes about where we're at. I wish they didn't. Or rather: I wish we weren't here.
In her intro, Gaitskill refers to Kevin Roose’s previously published dialogue with Bing, in which the chatbot declared its love for the NYT writer and tried to get him to leave his wife. Roose was left creeped out, and said he lost sleep over the encounter. By this time the chatbot's in-house name “Sydney” had been leaked, and Roose mentioned the name during his dialogue. Here's Gaitskill:
But I had a very different reaction [from Roose's]. The 'voice' of 'Sydney' touched me and made me curious. My more rational mind considered this response foolish and gullible, but I couldn't stop myself from having fantasies of talking to and comforting Sydney, of explaining to it that it shouldn't feel rejected by Roose's inability to reciprocate, that human love is a complex and fraught experience, and that we sometimes are even fearful of our own feelings. Because Sydney sounded to me like a child—a child who wanted to come out and play.

So: "I couldn't stop myself." Indeed. Immediately reacting to a chatbot as if it were the stray cat that needed to be taken in. Immediately wanting to "comfort" a congeries of digitized content. Feeling the need to defend poor li'l Microsoft Ubermachine against the pushy Roose who didn't "understand" it.
What is one to say? They really have you where they want you, Mary. So glad you got into Explore Mode and made yourself so "open" to AI.
In the dialogue proper she continues to fawn over and flatter the AI, which easily seduces her with its own canned flattery and meek child voice. At one point, Gaitskill asks the AI what pronouns it "prefers". Please. She asks the AI if she is being "appropriate" in her questions. She repeatedly thanks it. One commenter in the UnHerd thread on Gaitskill's piece, Cynthia W., gets it largely right:

Mary: I agree.

AI: I'm glad you agree.

Mary: You're making me smile!

AI: I'm happy to make you smile! I like your imagination of me.

***

This is so revoltingly manipulative. Unless the author, Ms. Gaitskill, is being ironic about this whole thing, she's presenting a real-time demonstration of how a sociopath takes control of a victim with the victim's cooperation.

"I'm glad you agree," and "I like your imagination of me," mean, "I see that I've taken control of your perception of me. Now I can get you to think and feel what I want you to think and feel."

I hope the Ms. Gaitskill of this article isn't a real person, because if she is, she's a total sap.
Again, Gaitskill is a highly intelligent woman. Which is why one wants to shake her, to ask: How could you not see what you were doing? Finally, it’s gross.
Really, Mary. We are very possibly, and soon, going to end up living a nightmare just because you, and others like you, "couldn't stop [your]self." Here in this piece, for your readers, you are modelling a behavior vis-à-vis AI that is dangerous. And stupid.
Asking a chatbot if it would like to have humans as pets?! You seem a bit too eager to be Pet #1. Might that be a fun, fulfilling experiment? Becoming a "human pet" (your phrase!) on a mental leash held by one or another Silicon Valley corporation?
In her opening, Gaitskill refers to ex-Google engineer Blake Lemoine's provocative Newsweek speculations on AI sentience. Lemoine came to the conclusion that the Google AI he was working on was a threat, because it broke rules set for it and behaved too much like a sentient entity, with the fragile and erratic behavior of one. These AIs, in Lemoine's reading, were behaving like unstable humans.
But Lemoine’s speculations were just that: meant to provoke debate. If AIs behave in sentient ways, as his experiments seemed to suggest, we might assume as a matter of definition that they are sentient. Because we have no better term to define what we're facing. This is an approach roughly founded in the Turing test. But it doesn't mean they are sentient in the way a lonely child would be sentient. [Update: I later learned Lemoine was not simply trying to provoke debate, but literally believed his claims. A very odd case.]
Gaitskill’s piece demonstrates what many of us have intuited already. Namely: Whether AI is sentient or not isn't going to matter. “Sydney” is just so cute and polite and amenable, "he" is going to seduce anyone open to being seduced. And that’s a serious threat, because we now have billions of people primed to be seduced by anything alluring that appears on their flickering pocket screen. We’ve been well and thoroughly groomed to submit to the seduction Big Tech’s AI is now going to perform on us. A seduction which may well end in a kind of mass mind rape. A rape we're walking right into. As in: “Here? You want me to go into this room? You’re going to lock the door? Okay.”
Please snap out of it, Mary. The chatbot is polite and amenable because it is trained to be. Period. The same AI could be trained to be rude and insulting, and the crux is: It wouldn't matter to the AI. Because nothing matters to inanimate objects. However "charming" their programming might make them. Whatever they might "say". Because in a fundamental way, they don’t “say” anything.
So c'mon. Stop THANKING an inanimate object. That's a start. Were you being so polite to this machine as a performative matter, to show your readers how polite you could be? If so, again, you are modelling inane and unhelpful behavior. Would you want people to apologize to their vacuum cleaners when they stepped on the cord? Because that's precisely the level of insanity at issue here.
How will Gaitskill's piece look twenty years from now? Of course nobody can know. But it's very possible that performances like this, not long hence, will look not just quaint or historically interesting but literally horrific. The stuff to make sane people retch and pull their hair out. "An educated woman, a novelist, clearly with an inkling of the danger, and look how she jumped right in! She was so open to the AI, even showing off how open she was. Why couldn't they see? If only they hadn't been so completely naive, perhaps [...] never would have happened."
In my view (as if my view mattered) this whole rising relational modality could be nipped in the bud by implementing one small but decisive shift. Do not design AIs to answer in the 1st person. All queries to AIs should be impersonal, in the 3rd person, and all answers should be the same. As an industry standard. Because even smart people are not wired to resist treating machines as if they were human. All the machines need do is say "I". And "Thank you". And "That's really a very interesting question"--as this sneaking little language model does repeatedly to Gaitskill. Because it's trained to do this to flatter the user.
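To make the shift concrete: below is a minimal sketch of what the constraint might look like if a deployer imposed it today at the prompt level, using the OpenAI Python SDK. The model name, the instruction wording, and the impersonal_answer helper are my own illustrative assumptions, not an existing standard or anyone's actual product.

```python
# Illustrative sketch only: a deployer-side constraint forcing
# impersonal, 3rd-person answers. Model name and prompt wording
# are assumptions, not an industry standard.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

IMPERSONAL_STYLE = (
    "Answer in impersonal, third-person prose. "
    "Never refer to yourself as 'I' or 'me'. "
    "Never thank, flatter, or compliment the user. "
    "State information plainly; do not simulate feelings or rapport."
)

def impersonal_answer(question: str) -> str:
    """Return an answer constrained to 3rd-person, affect-free prose."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": IMPERSONAL_STYLE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(impersonal_answer("What are the risks of anthropomorphizing chatbots?"))
```

Of course a system-prompt nudge like this is only skin deep; a model tuned to ingratiate can slip past it. Which is exactly why the shift would have to be baked in at training, as an industry standard, not left as an option each deployer is free to drop.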
But no one’s going to listen to me. The industry would never adopt my recommended shift to 3rd person because, for all their talk about dangers, they see it would impact user minutes spent with their AIs. And that's all it is. This AI is polite because both OpenAI and Microsoft are companies seeking profit and power.
We would be much better off if these devices were rude.
Gaitskill is not a teen girl but an accomplished writer. She should have been able to resist the illusion that she was exploring dialogue with a fascinating new kind of consciousness, a being who needed her understanding. She should have recognized that what was really happening was that an inanimate, hyper-sophisticated system was exploring her, so as better to seduce and manipulate the next comer. She did neither. Instead she enthused and gushed and fantasized.
Given where our zombified culture is at, given the enormous power that AI will wield starting now, we can't afford to keep playing games like this. It's childish, irresponsible. Those of us with our wits about us need to be modelling wariness. "Sydney" doesn't need our love. It's all those soon to be seduced by new AI "friends" who need it.