Fear of the Future: Artificial Intelligence

“A man is a god in ruins,” wrote Ralph Waldo Emerson in the 19th century.

Ever since people contemplated the existence of a divine dimension—a belief that dates back at least to the earliest stages of Homo sapiens, or even earlier with the Neanderthals—there has been a split between the human condition and the eternal. As humans, it is our curse and our blessing to be aware of our own mortality and to suffer the loss of loved ones—and, in a broader sense, the predicament of others. This dual nature, part animal and part demigod, is at the very core of our existence, and it pulls us in opposite directions. Unless, of course, science can do something about it.

It’s been 200 years since Mary Shelley published Frankenstein; or, The Modern Prometheus, a novel that remains as relevant today as it was then. The dream of defeating death through cutting-edge science is at least that old; alchemists would claim it’s much older. In Shelley’s time, the potentially death-defying science was the discovery—by Luigi Galvani and Alessandro Volta—that electric currents passing through nerves make muscles twitch. (For details, see chapters 37 and 38 of A Tear at the Edge of Creation.)

Today’s cutting-edge, potentially death-defying science combines digital technology, machine intelligence, and genetic engineering. There are different ways to think about the rise of AI (artificial intelligence) and how it will impact society and the individual; the same is true of genetic engineering. Here, I will focus on AI and on why we should all be concerned—not about the death-defying ramifications some transhumanists desire, but about its impact on the world we live in. With AI, we may very well be redefining the meaning of our own humanity.

The world is a tangle of wired information. Modern society is completely dependent on computers to function; we rely on them for everything from buying goods and watching videos to paying bills and storing information. Case in point: this essay is being written on a laptop, will be emailed to my editor, and will be published online. It has no existence in material reality (understood here as the world of outside objects) unless someone decides to print it. Cars, electric grids, airports, trains, the banking system, and hospitals all depend on computers. So do you, of course.

Smarter by the day

Now, parallel to this ever-growing global digitalization and codependence, computers are becoming smarter by the day. They are taking over the world, quite literally. As they become smarter, they take on ever more complex tasks, to the point of displacing humans from traditional jobs—from high-precision microsurgery and medical diagnostics to factory assembly lines, digging in dangerous mines, working in highly radioactive environments, and other robotized kinds of work. Soon, with self-driving vehicles, it will be the turn of truck drivers, school-bus drivers, taxi drivers, and train engineers, with a massive impact on the job market, potentially creating vast unemployment and a worldwide need for professional retraining.

For now, at least, computers and automated machines are taking over the world because we want them to. The worry, and the reason for fear, is that this situation may change. It is possible that these machines will eventually become autonomous, capable of taking their functioning and actions into their own hands—well, their own circuits, anyway. In other words, one day computers may take over the world. This is the argument of Oxford philosopher Nick Bostrom in his book Superintelligence, the concern driving Elon Musk’s anti-AI crusade and Stephen Hawking’s fears, and a worry shared by many others.

Key to this discussion is what we mean by intelligence and autonomy. There is the AI of the future, the sci-fi scary one, and the AI of the present. Is today’s AI truly “intelligent”? If so, how? That’s a thorny issue for many reasons, the first being that we don’t really have a universally accepted definition of intelligence.

The AI acronym is all over the place: everywhere, you see ads featuring the latest in AI applications for your business and individual needs. You hear the phrase “machine learning” all the time, implying that it is possible to teach computer programs to learn as they go, becoming more efficient with experience. We see this, at a modest scale, with home devices like Google Home or Amazon’s Echo and Alexa, or with the music provider Spotify and others, as they get to “know” your tastes and needs over time. At a higher level, computers can beat chess and Go masters, in the latter case developing strategies that baffled even their own programmers. Google’s AlphaGo program seemed to be creating something new on the fly.
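To make the point less abstract, much of this everyday “learning” can be surprisingly mundane. The sketch below is a hypothetical toy in Python, not any actual product’s system: it merely tallies which genres a listener plays and recommends more of the same, so its “knowledge” of your taste is nothing more than an updated count.

```python
from collections import Counter

class ToyRecommender:
    """A deliberately simple stand-in for a music service's 'taste' model."""

    def __init__(self):
        self.play_counts = Counter()  # how often the user has played each genre

    def record_play(self, genre: str) -> None:
        # "Learning" here is nothing more than updating a tally.
        self.play_counts[genre] += 1

    def recommend(self) -> str:
        if not self.play_counts:
            return "something popular"  # no data yet, so fall back to a default
        # Suggest more of whatever the user has played the most.
        return self.play_counts.most_common(1)[0][0]

# The more you listen, the more the program "knows" your taste.
listener = ToyRecommender()
for genre in ["jazz", "jazz", "bossa nova", "jazz"]:
    listener.record_play(genre)
print(listener.recommend())  # -> "jazz"
```

Real recommendation engines are far more sophisticated, of course, but the underlying logic is the same: accumulate data about your behavior and adjust the output accordingly, with no understanding of what the music means to you.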

Even so, those applications are far from the level of intelligence that could pose an existential threat to humanity. If anything, they reflect the intelligence of their human programmers, tricking users into depending more and more on their services. At the present level, machine “intelligence” simply means programs that serve the goals of their human designers, mostly to goad consumers into spending more money on their products, or to scan vast amounts of data in search of consumer or criminal information. (Machine learning is also a key tool in scientific research—for example, in data mining algorithms—but that is less the focus of our argument here.)

Current AI systems are focused on the specific tasks assigned to them by their programmers and users. They still serve humanity’s needs. Can the situation change? This is where we step into murky waters. We don’t know what it means for a machine to develop autonomy, to have self-awareness, or to have a will to perpetuate its own existence. Current machines dubbed AI are a far cry from HAL 9000, the super-advanced AI in Stanley Kubrick’s 2001: A Space Odyssey, which decides that the humans aboard are too incompetent to establish contact with aliens and sets out to kill them and take over the mission.

Still, to blindly move forward with AI research “because we can” is reckless and irresponsible. Long before we are able to build such an AI machine—if that is even possible—lower-level AI can cause serious social displacement, redefining the job market and creating vast unemployment. Before we start fearing for the survival of our species due to a HAL-like takeover, we should be creating safeguards ensuring that the machines we invent are here to help humanity, not to tear it apart.


The second in an occasional series in which Gleiser addresses different sources of fear we are facing, or will face, as a species.

Templeton Prize winner Marcelo Gleiser is a professor of natural philosophy, physics, and astronomy at Dartmouth College.