The Corner

AI: A Thinking Machine, Not So Much

The humanoid robot Ameca at the AI for Good Global Summit in Geneva, Switzerland, July 6, 2023. (Pierre Albouy/Reuters)

Writing for the Wall Street Journal, Andy Kessler looks at the belief that AI will turn into some sort of “thinking machine.” That’s a belief that has conjured up thoughts of Skynet and quickened the pulses of regulators everywhere.

Some extracts:

Sometime soon, the digirati will declare that artificial-intelligence machines have passed the Turing test and thus the era of superintelligence and sentient computers has arrived. The promised land is artificial general intelligence: AGI. Don’t fall for it. Your cranial cavity’s inner voice and self-awareness explain why.

In 1950 computing pioneer Alan Turing proposed a simple “Imitation Game” test to answer the question, “Can machines think?” If an interrogator blindly connected to a machine and a human can’t tell the difference based on their answers, then the machine can think. Turing thought that by 2000 machines would be able to imitate humans 70% of the time after five minutes of discussion. He then brushed off his own analysis by saying, “The original question ‘Can machines think?’ I believe to be too meaningless to deserve discussion.” Instead, the Turing test simply measured if machines could fool humans. Look up the verb “ape.” . . .

What’s really needed is solid definitions of thinking, intelligence and sentience. Computers are already better than humans at many tasks. Unless you’re Rain Man, spreadsheets can add rows of numbers faster than you can. Uber can outperform dispatchers. In 2016 Google showed off a computer that beat humans at the game Go. In 1997 IBM’s Deep Blue beat grandmaster . . . Garry Kasparov in chess by calculating several hundred million potential moves per second. IBM’s Watson even won the TV game “Jeopardy!” Impressive.

But these are finite systems. Let’s call them two-dimensional. . . .

Computing power, stresses Kessler, wins “in worlds of defined rules.” But:

 [L]ife doesn’t have rules. Humans are 3D, or 4D, or of limitless depth. We have almost infinite choices bound only by moral and religious codes that are often ignored anyway. We have laws to maintain order, but most people have free will to make decisions. A University of Leicester researcher estimates humans make more than 35,000 conscious decisions every day. To emulate humans, a computer would have to compute more than 10 to the 100,000th power moves (roughly 35,000 factorial). Even astronomers don’t think that big. . . .
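Kessler's order-of-magnitude figure is easy to check. A minimal sketch (my addition, not part of the column) uses Stirling-style log-gamma arithmetic to express 35,000 factorial as a power of ten:

```python
import math

# Sanity check on Kessler's estimate that 35,000! exceeds 10^100,000.
# log10(n!) = lgamma(n + 1) / ln(10), which avoids computing the huge factorial directly.
digits = math.lgamma(35_000 + 1) / math.log(10)
print(f"35,000! is roughly 10^{digits:,.0f}")
# The result is on the order of 10^143,000 -- comfortably "more than 10 to the 100,000th power."
```

So, if anything, the column's figure understates the combinatorial explosion it describes.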

Critics dismiss generative AI and ChatGPT as “autocompletion” or worse, a “stochastic parrot.” It’s way more, finding patterns among thousands of words at a time. But the actual smarts of AI come from the human logic embedded between words and sentences. That’s enough to emulate rudimentary reasoning. And AI’s true power has yet to be fully harnessed. But thinking? Nah. I’m with Turing.

As am I.

For some strange reason, I suspect that Andrew Orlowski, writing for Spiked, feels the same way:

In 2023, the policy elites became immersed in a giant work of collaborative science fiction. Both the White House and Whitehall are now gripped by fear of a technology that doesn’t exist and may never exist – namely, a form of god-like artificial intelligence, or artificial general intelligence (AGI). . . .

Speculation about killer AI is a bit like QAnon for posh people. It is a collaborative metafiction, where people compete to envision ever wilder doomsday scenarios. . . .

None of this is to deny that AI is going to cause problems, above all when it comes to jobs. As Kessler writes, “AI’s trajectory is amazing and will outpace humans in many areas.” And those areas will stretch far higher up the jobs pyramid than has been the case with most earlier technological innovations. The idea that the jobs lost to AI will one day be replaced may be true, but what happens in the interlude, particularly an interlude in which many of the newly jobless discover that their previously highly skilled, high-status jobs no longer exist?

I wrote about this in a 2016 article for NR concerned with automation rather than AI, and my conclusions were not cheerful. That remains the case.
