Friday, March 17, 2023
Women on a Sailboat
Look at this vintage shot. Click and enlarge it.
That famous “effortless grace” of the 1960s, a style set by Jackie O with her strings of pearls. Of course the photo was taken on the East Coast somewhere. Whose yacht were they on? What afternoon party was just coming to an end? The photographer has really captured the era.
Oh. None of these women ever existed, nor did the yacht, nor the sunny evening in question. The image was entirely generated by AI, by an image generator called Midjourney, version 5. All the AI needed to generate it was the following short text prompt: “1960s street style photo of a crowd of young women standing on a sailboat, wearing dior dresses made of silk, pearl necklaces, sunset over the ocean, shot on Agfa Vista 200, 4k --ar 16:9.” So the image is entirely fake. No East Coast women ever gathered on a yacht, no photographer ever “captured the era”. Rather, a sophisticated AI program impersonated an era.
It’s uncanny really, bordering on creepy, and from many angles. First, how can such a simple, short prompt generate such a convincingly realistic image? The AI had to get so many things right, all on its own. For instance: there are sunsets all over the globe, and there are women of many races and cultures, any of whom the AI can of course depict. But merely from the prompt terms “1960s, pearl necklaces, Dior dresses, sailboat,” the AI inferred the likely locale and culture to reproduce. Though it could have, it did not generate Korean women on a sailboat off the Korean coast. And then: the facial expressions, the natural variations of pose in a group of women in Dior dresses, seem precisely right, don’t they? Each maintains a poise and stance suited to being in the presence of the others in that moment, and this poise and stance are also culturally encoded facts, which the AI managed to get right.
And these are just some of the things it got right. One can think of more. In short, not only is the realism of the image itself uncanny, but the sophistication of an AI that can generate such visually and culturally persuasive images, on the basis of minimal keyword hints, makes it doubly uncanny.
Such persuasive power is a force we’re not ready for. AI technology’s ability to fake human things (whether human images or human language) is going to have massive, unforeseen impacts. And it’s all starting right now, with Microsoft’s ChatGPT-powered chatbot, and with these image-generating programs improving at lightning speed. The uncanny feeling we get looking at these nonexistent women should tell us something. The discordance it causes in us is probably nothing compared to what waits round the corner.
Think of the AI products guaranteed to arrive in the coming few years. It’s not hard to predict some of them. Why not, say, visually “personalize” your AI chatbot as an avatar, a little lifelike talking doll on your screen, a talking doll that has memorized the entire Internet and can entertain you with stories and sympathy and advice? Why not give your doll the personality traits you like, or the appearance of your favorite celebrity?
If the rise of social media has damaged young people’s mental health (evidence suggests it has), what will come of the ability to create custom digital “friends” that grow increasingly human-like? And what about custom digital fraudsters? What security challenges (think ID verification) will arise once AI can impersonate an individual’s appearance in image or video, or impersonate an individual’s voice and speech patterns over the phone? All this is not to mention the millions of jobs that will become obsolete once AI can do them better.
At least as regards fraud, we know Big Tech is struggling to program AI products to prevent their use for criminal activities. And we know the sheer computing power needed to run AI prevents anyone but Big Tech or state actors from developing it. So there is some oversight protection. But so what? Given what we’ve learned recently, do you really trust Silicon Valley, working hand in glove with our federal three-letter agencies, not to start using AI to modify evidence or manipulate public perceptions? Most Americans already recognize corporate media growing faker by the year, and that social media is not so much an “open forum” as a tool of mass control. How long then before we begin getting video clips that sway elections or rile mass emotions but are in fact sophisticated, untraceable fakes?
Of course these myriad looming impacts from AI are being widely discussed, debated, hashed over. Reading into the debate is sobering. One ex-Google engineer, fired after claiming an AI had achieved “sentience” (a claim many balked at), captures some of the mood in a recent Newsweek piece. Still, whether analysts are hopeful or full of dread at what’s coming, most agree with the engineer’s key point: AI is a Huge Cat just now crawling out of the bag, and no one can predict what this Cat will get up to.
The image above was posted by one Nick St. Pierre. See more of his uncanny images here.
Have some deadpan with your coffee. Check out Idiocy, Ltd. Dryest humor in the west.
Posted by Eric Mader at 2:22 PM
Labels: AI, culture, fakes, image generators, Midjourney, realism, security, social media