Wednesday, September 6, 2023

The Pipe-Dream of Strong AI; The Nightmare of “Weak” AI


Are “thinking machines” possible? Will AI develop to a point where it surpasses human IQ, after which, improving itself, it will advance so far beyond human thinking that we won’t even be able to comprehend what it’s trying to tell us? Will AI “take over the world”?

There are very good reasons to see AI as a threat, but based on our best understanding of what we mean by “thinking,” the answer to all these questions is probably no. We’re never going to reach Ray Kurzweil’s “Singularity”, and AI is never going to make scientific or technological breakthroughs.

As a friend in data security puts it: “AI could watch Newton’s apple fall millions of times over but could never take the next step and theorize gravity. If you think it will, it means you don’t understand how AI works.”

I’m a newbie in this area, though less so in philosophy and linguistics, and I know enough about AI to grasp the point. But my friend was referring to large language models (LLMs), the kind of AI that underlies ChatGPT. Will his point still hold once AI programmers push in other directions?

To understand why his point will likely prove true no matter what programmers get up to—well, that requires a bit of effort. But if you're interested in such questions, a great place to start is Paul Folbrecht’s quick summation, “Why Strong AI is a Logical Impossibility”. Folbrecht presents two key arguments, and his strategy for bringing the reader into the harder argument (based on Gödel’s incompleteness theorems) is spot on. He conveys the gist with no wasted words. Read it.

Since Folbrecht does the work so well, I won’t rehash the arguments here. Based on these arguments, and a few related ones, I too doubt we’re entering an era of truly “thinking machines”. What I wonder, however, is whether it will really matter. Because I’m convinced we’re entering a very perilous era either way.

That AI will never be able to think in anything strongly analogous to the way we think may in fact make little difference. Sure, it will matter in the long run, given that AI won’t be making scientific breakthroughs. But in the short run? No. Because the real threat is not the one envisioned in 20th-century sci-fi. It’s not that AI will take over. Rather, it’s that governments or other elites will use AI to control mass populations, finally achieving immunity to citizen resistance.

This is the actual threat, and like it or not, it’s all too viable. AI will never have to attain “thinking” capabilities to be the perfect tool for implementing total state control. The technologies already available are beyond anything Stalin or Hitler ever dreamed of. And is the American population ready to resist encroachments on our liberty from AI-enabled state bureaucrats? From monomaniacal ideologues using “safety” or “progress” as buzzwords to gain power?

Hardly. We are far from ready. Much of the American population actually seems primed for just such power grabs. Which is perhaps no accident. And it’s this America that will negotiate its future with AI-wielding bureaucrats? It’s a depressing thought.

Making our situation yet more perilous is something I’ve written about recently: our human susceptibility to AI simulations of personality, our hardwired tendency to assume that anything that says “I” and can string sentences together is actually an “I”. That this delusion will be exploited by those who seek to corral and control us is certain. Consider the case I lay out.

As it melds with social and other media, and as it’s incorporated into ever more humanoid robots, so-called “weak” AI is going to insinuate itself into our culture and reach into our lives in ways that even social media could not. It won’t matter that this AI can’t think. Within a few short years it will be powerful enough to “work wonders”.

Paul Folbrecht, whose article doesn’t address these questions, would likely agree. Strong AI is almost certainly not on the horizon. But the AI that is on the horizon is an immensely dangerous tool, especially given our current political and social order, softened up as it is by social media and our willingness to give up privacy for the slightest convenience.

With Americans now entirely transparent to Big Tech, weak and distracted by circus diversions and identity politics, and with an over-the-top cult of “safety” dominating public discourse, our culture looks something like the opening pages of a User’s Manual explaining how best to politically weaponize AI.

"Mass control will be easier to establish if you begin with a population like this: ..."

QED: Idiocy, Ltd.



2 comments:

James said...

What is intelligence? In my opinion, intelligence is the ability to solve problems. Currently, AI can answer questions, but it cannot solve problems.

Eric Mader said...

Yes, one answers a question by providing information or data. But to solve a problem, one must think. *Providing data* and *thinking* are very different.

The "AI" we have is amazingly sophisticated at accessing and arranging information in a human-like manner, but the idea that it will *jump* to the next level and begin thinking--that is a HUGE leap, and the very nature of machines suggests it can't happen.