Skepticism In the Age of Artificial Intelligence
Kyle Polich
January 14, 2024
Kyle Polich is a podcaster whose talk was titled “Skepticism in the Age of Artificial Intelligence.”
He began with a re-titling: “A Staircase to the Moon? The Path to AGI.” AGI stands for “Artificial General Intelligence,” which would resemble the kind of intelligence we humans have, and which current AI does not possess. Polich’s metaphor was that to go to the Moon, you could build a staircase, which would get you part-way but would soon prove the wrong approach, as opposed to building rockets. He posed the question of whether current AI efforts are staircase-equivalents for getting to AGI.
A main theme was elucidating why AI has progressed as much as it has in just the last few years. Polich said it was really a matter of solving math problems: what current AI systems basically do is turn language problems into math problems they are capable of solving. An AI is trained for this by exposing it to a vast amount of diverse human-generated text, from books that have been put online, Wikipedia, general internet crawls, etc. Its basic task is then predicting what word comes next in a sequence. It does this by converting words into numbers and figuring out language patterns, that is, which words most typically go together. By testing itself on guessing the missing word in a sequence, then checking whether it was right, done a zillion times, it gets pretty good at predicting the next word. That’s all ChatGPT is really doing when asked to compose, say, a sonnet about peanut butter in the style of Allen Ginsberg.
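The convert-words-to-numbers-and-predict idea can be sketched with a toy bigram model in Python. This is a hypothetical miniature for illustration only (the corpus and function names are invented here); real systems like ChatGPT use neural networks trained on billions of examples, not simple counts.

```python
from collections import Counter, defaultdict

# A tiny stand-in for the "vast amount of human-generated text".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Convert words into numbers: assign each distinct word a numeric ID.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}

# Learn a language pattern: count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(vocab["cat"])         # the numeric ID assigned to "cat"
print(predict_next("the"))  # "cat", since it follows "the" most often here
```

Scaled up from word-pair counts to patterns over long contexts, with the counts replaced by learned neural-network weights, this is the core task Polich described.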
But this, again, is not Artificial General Intelligence. In this context Polich discussed the “Turing Test,” in which a human judge converses with an unseen partner and tries to determine whether it is another human or a machine. Polich deemed this an “unbiased double-blind test,” and posited that today’s AIs have not passed it.
He also introduced the term ASI, “Artificial Super Intelligence,” which would be able to do, well, everything, unlike any human. There’s some concern that such an entity could wreak havoc on our economies, our cultures, etc. But Polich deemed that threat far distant, expressing more concern about misuse of AI in human hands, for example to wage bioterrorism or to create fake news, even personalized for the recipient. Yet he cited nuclear weapons, which we’ve had for decades without using them.
He meanwhile put forth a list of ways in which AI differs importantly from humans. For example, no embodiment: many researchers posit that human consciousness is not just a matter of what goes on in our brains, but also of their connection with our bodies. Similarly, AI has no pain, no emotions, no motivations. However, those could be emergent properties arising from neuron-like information processing; indeed, many theorize that that’s how our own consciousness originates.
Polich concluded by voicing his view that, with AI, we’re not engaged in building a staircase to the Moon — we’re building a rocket.