Is AI More Threatening Than North Korean Missiles?

In this April 30, 2015, file photo, Tesla Motors CEO Elon Musk unveils the company's newest products in Hawthorne, Calif. (Ringo H.W. Chiu / AP)

One of Tesla CEO Elon Musk's companies, the nonprofit start-up OpenAI, built a bot that last week defeated some of the world's top gamers in an international video game (e-sport) tournament with a multimillion-dollar pot of prize money.

We're getting very good, it seems, at making machines that can outplay us at our favorite pastimes. Machines dominate Go, Jeopardy, Chess and — as of now — at least some video games.

Instead of crowing over the win, though, Musk is sounding the alarm. Artificial Intelligence, or AI, he argued last week, poses a far greater risk to us now than even North Korean warheads.

No doubt Musk's latest pronouncements make for good advertising copy. What better way to drum up interest in a product than to announce that, well, it has the power to destroy the world?

But is it true? Is AI a greater threat to mankind than the threat posed to us today by an openly hostile, well-armed and manifestly unstable enemy?

AI means at least three things.

First, it means machines that are faster, stronger and smarter than us, machines that may one day soon, HAL-like, come to make their own decisions and make up their own values and, so, even to rule over us, just as we rule over the cows. This is a very scary thought, not least when you consider how we have ruled over the cows.

Second, AI means really good machines for doing stuff. I used to have a coffee machine that I'd set with a timer before going to bed; in the morning I'd wake up to the smell of fresh coffee. My coffee maker was a smart, or at least smart-ish, device. Most of the smart technologies, the AIs, in our phones, and airplanes, and cars, and software programs — including the ones winning tournaments — are pretty much like this. Only more so. They are vastly more complicated and reliable but they are, finally, only smart-ish. The fact that some of these new systems "learn," and that they come to be able to do things that their makers cannot do — like win at Go or Dota — is really beside the point. A steam hammer can do what John Henry can't but, in the end, the steam hammer doesn't really do anything.

Third, AI is a research program. I don't mean a program in high-tech engineering. I mean, rather, a program investigating the nature of the mind itself. In 1950, the great mathematician Alan Turing published a paper in a philosophy journal in which he argued that by the year 2000 we would find it entirely natural to speak of machines as intelligent. But more significantly, working as a mathematician, he had devised a formal system for investigating the nature of computation that showed, as philosopher Daniel Dennett puts it in his recent book, that you can get competence (the ability to solve problems) without comprehension (by merely following blind rules mechanically). It was not long before philosopher Hilary Putnam would hypothesize the mind is a Turing Machine (and a Turing Machine just is, for all intents and purposes, what we call a computer today). And, thus, the circle closes. To study computational minds is to study our minds, and to build an AI is, finally, to try to reverse engineer ourselves.

Now, Type 3 AI, this research program, is alive and well, a continuing chapter in our intellectual history of genuine excitement and importance. This is so even though Putnam's original hypothesis is wildly implausible (and was given up by Putnam himself decades ago). To give just one example: the problem of the inputs and the outputs. A Turing Machine works by performing operations on inputs. For example, it might erase a 1 on a cell of its tape and replace it with a 0. The whole method depends on being able to give a formal specification of a finite number of inputs and outputs. We can see how that goes for 1s and 0s. But what are the inputs, and what are the outputs, for a living animal, let alone a human being? Can we give a finite list, specified in formal terms, of everything we can perceive, let alone everything we can do?
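The mechanics described here can be made concrete in a few lines of code. What follows is a minimal sketch in Python (purely illustrative; the rule table is a made-up example, not anything from Turing's paper) of a machine that erases the 1s on its tape by blindly following a lookup table of (state, symbol) rules. The machine is competent at its tiny task, yet comprehension appears nowhere: every step is just a table lookup, which is Dennett's point.

```python
# Minimal Turing machine sketch (illustrative only; the rule table below is hypothetical).
# Each rule maps (state, symbol) -> (symbol to write, head move, next state).

def run_turing_machine(tape, rules, state="scan", blank="_"):
    """Apply the rule table to the tape until the machine reaches the HALT state."""
    head = 0
    while state != "HALT":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)      # extend the tape with the written symbol
        head += 1 if move == "R" else -1
    return tape

# A machine that erases every 1 (writes 0), moves right, and halts on a blank cell.
rules = {
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "0"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "HALT"),
}

print(run_turing_machine(list("1101"), rules))  # ['0', '0', '0', '0', '_']
```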

And there are other problems, too. To mention only one: We don't understand how the brain works. And this means that we don't know that the brain functions, in any sense other than metaphorical, like a computer.

Type 1 AI, the nightmare of machine dominance, is just that, a nightmare, or maybe (for the capitalists making the gizmos) a fantasy. Depending on what we learn pursuing the philosophy of AI, and as luminaries like John Searle and the late Hubert Dreyfus have long argued, it may be an impossible fiction.

Whatever our view on this, there can be no doubt that the advent of smart, rather than smart-ish, machines, the sort of machines that might actually do something intelligent on their own initiative, is a long way off. Centuries off. The threat of nuclear war with North Korea is both more likely and more immediate than this.

Which does not mean, though, that AI gives us no real cause for alarm. But if it does, we need to turn our attention to Type 2 AI: the smart-ish technologies that are everywhere in our world today. The danger here is not posed by the technologies themselves. They aren't out to get us, and they are not going to be out to get us any time soon. The danger, rather, is our increasing dependence on them. We have created a technosphere in which we are beholden to technologies and processes that we do not understand. I don't just mean that you and I don't understand them: no one person can. It's all gotten too complicated. It takes a whole team — or maybe a university — to understand adequately all the mechanisms that enable, for example, air traffic control, or drug manufacture, or the production and maintenance of satellites, or the electricity grid, not to mention your car.

Now this is not a bad thing in itself. We are not isolated individuals all alone, and we never have been. We are social animals — and it is fine and good that we should depend on each other and on our collective.

But are we rising to the occasion? Are we tending our collective? Are we educating our children and organizing our means of production to keep ourselves safe and self-reliant and moving forward? Are we taking on the challenges that, to some degree, are of our own making? How to feed 7 billion people in a rapidly warming world?

Or have we settled? Too many of us, I fear, have taken up a "user" attitude to the gear of our world. We are passive consumers. Like the child who thinks chickens come from supermarkets, we are hopelessly alienated from how things work.

And if we are, then what are we going to do if some clever young person somewhere — maybe a young lady in North Korea — writes a program to turn things off? This is a serious and immediately pressing danger.


Alva Noë is a philosopher at the University of California, Berkeley, where he writes and teaches about perception, consciousness and art. He is the author of several books, including his latest, Strange Tools: Art and Human Nature (Farrar, Straus and Giroux, 2015). You can keep up with more of what Alva is thinking on Facebook and on Twitter: @alvanoe

Copyright 2021 NPR. To see more, visit https://www.npr.org.

Alva Noë is a contributor to the NPR blog 13.7: Cosmos and Culture. He is a writer and philosopher who works on the nature of mind and human experience.