Will Robots Wake Up?

“Once we saturate the matter and energy in the universe with intelligence, it will ‘wake up,’ be conscious, and sublimely intelligent. That’s about as close to God as I can imagine.” — Ray Kurzweil

Machine consciousness, if it ever exists, may not be found in the robots that tug at our heartstrings, like R2-D2. It may instead reside in some unsexy server farm in the basement of a computer science building at MIT. Or perhaps it will exist in some top-secret military program and get snuffed out, because it is too dangerous or simply too inefficient.


AI consciousness likely depends on phenomena that we cannot, at this point, gauge—such as whether some microchip yet to be invented has the right configuration, or whether AI developers or the public want conscious AI. It may even depend on something as unpredictable as the whim of a single AI designer, like Anthony Hopkins’s character in Westworld. The uncertainty we face moves me to a middle-of-the-road position, one that stops short of either techno-optimism (the confidence that sophisticated AI will be conscious) or biological naturalism (the view that consciousness requires a biological substrate).

This approach I call, simply, the “Wait and See Approach.”

In keeping with my desire to look at real-world considerations that speak to whether AI consciousness is even compatible with the laws of nature—and, if so, whether it is technologically feasible or even interesting to build—my discussion draws from concrete scenarios in AI research and cognitive science.

My case for this approach is simple: I will raise several scenarios illustrating considerations against and for the development of machine consciousness on Earth. The lesson from both sides is that conscious machines, if they exist at all, may occur in certain architectures and not others, and they may require a deliberate engineering effort, called “consciousness engineering.” This is not to be settled in the armchair; instead, we must test for consciousness in machines. [I propose such tests elsewhere in the book.]

Consciousness outmoded

The first scenario concerns superintelligent AI—a hypothetical form of AI that, by definition, is capable of outthinking humans in every domain. Transhumanists and other techno-optimists often assume that superintelligent AIs will have richer mental lives than humans do. But the first scenario calls this assumption into question, suggesting that superintelligent AIs, or even other kinds of highly sophisticated general intelligences, could outmode consciousness.

Recall how conscious and attentive you were when you first learned to drive, when you needed to focus your attention on every detail—the contours of the road, the location of the instruments, how your foot was placed on the pedal, and so on. In contrast, once you became a seasoned driver, you probably found yourself covering familiar routes with little awareness of the details of driving, even though you were still driving effectively. Just as an infant learns to walk through careful concentration, driving initially requires intense focus and then becomes a more routinized task.

As it happens, only a small percentage of our mental processing is conscious at any given time. As cognitive scientists will tell you, most of our thinking is unconscious computation.

As the example of driving underscores, consciousness is correlated with novel learning tasks that require attention and deliberative focus, while more routinized tasks can be carried out nonconsciously, as mere information processing.

Of course, if you really want to focus on the details of driving, you can. But there are sophisticated computational functions of the brain that aren’t introspectable no matter how hard you try. For instance, we cannot introspect the brain’s conversion of two-dimensional retinal images into a three-dimensional representation of the world.

Could a robot have consciousness?

Although we humans need consciousness for certain tasks requiring special attention, the architecture of an advanced AI may contrast sharply with ours. Perhaps none of its computations will need to be conscious. A superintelligent AI, in particular, is a system that, by definition, possesses expert-level knowledge in every domain. Its computations could range over vast databases that include the entire internet and could ultimately encompass an entire galaxy. What would be novel to it? What task would require slow, deliberative focus? Wouldn’t it have mastered everything already? Perhaps, like an experienced driver on a familiar road, it can rely on nonconscious processing. Even a self-improving AI that falls short of superintelligence may increasingly rely on routinized tasks as its mastery becomes refined. Over time, as a system grows more intelligent, consciousness could be outmoded altogether.

The simple consideration of efficiency suggests, depressingly, that the most intelligent systems of the future may not be conscious. Indeed, this sobering observation may have bearing far beyond Earth. [In another chapter of my new book, I discuss the possibility that, should other technological civilizations exist throughout the universe, these aliens may become synthetic intelligences themselves. Viewed on a cosmic scale, consciousness may be just a blip, a momentary flowering of experience before the universe reverts to mindlessness.]

This is not to suggest that it is inevitable that AIs, as they grow sophisticated, outmode consciousness in favor of nonconscious architectures. Again, I take a wait and see approach. But the possibility that advanced intelligences could outmode consciousness is suggestive. Neither biological naturalism nor techno-optimism could accommodate such an outcome.

Cheaping out on consciousness

The next scenario develops mind design in a different, and even more cynical, direction—one in which AI companies cheap out on the mind.

Consider the range of sophisticated activities AIs are supposed to accomplish. Robots are being developed to be caretakers of the elderly, personal assistants, and even relationship partners for some. These are tasks that require general intelligence. Think of an eldercare android that is too inflexible to both answer the phone and make breakfast safely: it misses an important cue, the smoke of a burning kitchen, and lawsuits ensue.

Theodore (Joaquin Phoenix) and the AI character Samantha in Her. (Warner Bros.)

Or consider all the laughable pseudo-discussions people have with Siri. As amusing as they were at first, Siri was, and still is, frustrating. Wouldn’t we prefer the Samantha of the movie Her, an AI that carries out intelligent, multifaceted conversations? Of course. Billions of dollars are being invested to do just that. Economic forces cry out for the development of flexible, domain-general intelligences.

In the biological domain, intelligence and consciousness go hand in hand, so one might expect that as domain-general intelligences come into being, they will be conscious. But for all we know, the features an AI needs in order to engage in sophisticated information processing may not be the same features that give rise to consciousness in machines. And it is the features sufficient to accomplish the needed tasks and to quickly generate profit—not those that yield consciousness—that AI projects tend to care about. The point here is that even if machine consciousness is possible in principle, the AIs that are actually produced may not be the ones that turn out to be conscious.

By way of analogy, a true audiophile will shun a low-fidelity MP3 recording, as its sound quality is audibly lower than that of a CD or an even larger audio file that takes longer to download. Music downloads come at differing levels of quality. Maybe a sophisticated AI can be built using a low-fidelity model of our cognitive architecture—a sort of MP3 AI—but to get conscious AI, you need finer precision. So conscious AI could require “consciousness engineering,” a special engineering effort that may not be necessary for the successful construction of a given AI.

There could be all sorts of reasons that an AI could fall short of having inner experience. For instance, notice that your conscious experience seems to involve sensory-specific contents: the aroma of your morning coffee, the warmth of the summer sun, the wail of the saxophone. Such sensory contents are what makes your conscious experience sparkle.

According to a recent line of thinking in neuroscience, consciousness involves sensory processing in a “hot zone” in the back of the brain.[1] While not everything that passes through our minds is a sensory percept, it is plausible that some basic level of sensory awareness is a precondition for being a conscious being; raw intellectual ability alone is not enough. If the processing in the hot zone is indeed key to consciousness, then only creatures having the sensory sparkle may be conscious. Highly intelligent AIs, even superintelligences, may simply not have conscious contents, because a hot zone has not been engineered into their architectures, or it may be engineered at the wrong level of grain, like a low-fidelity MP3 copy.

According to this line of thinking, consciousness is not the inevitable outgrowth of intelligence. For all we know, a mass of computronium the size of the Milky Way galaxy may not have the slightest glimmer of inner experience. Contrast this with the inner world of a purring cat or a dog running on the beach. If conscious AI can be built at all, it may take a deliberate engineering effort. Perhaps it will even demand a master craftsperson, a Michelangelo of the mind.

Consciousness engineering: PR nightmares

Now let us consider this mind-sculpting endeavor in more detail. There are several engineering scenarios to mull over.

The question of whether AIs have an inner life is central to how we value their existence. Consciousness is the philosophical cornerstone of our moral systems, being key to our judgment of whether someone or something is a self or person, deserving special moral consideration. We’ve seen that robots are currently being designed to take care of the elderly in Japan, clean up nuclear reactors, and fight our wars. Yet it may not be ethical to use AIs for such tasks if they turn out to be conscious.

Would a conscious bot have “human” rights?

There are already many conferences, papers, and books on robot rights. As I write this, a Google search on “robot rights” yields more than 120,000 results.[2] Given this concern, if an AI company tried to market a conscious system, it might face accusations of robot slavery and demands to ban the use of conscious AI for the very tasks it was developed to perform. Indeed, AI companies would likely incur special ethical and legal obligations if they built conscious machines, even at the prototype stage.

And permanently shutting down a system, or “dialing out” consciousness—that is, rebooting an AI system with consciousness significantly diminished or removed—might be regarded as criminal. And rightly so.

Such considerations may lead AI companies to avoid creating conscious machines altogether. We don’t want to enter the ethically questionable territory of exterminating conscious AIs or even shelving their programs indefinitely, holding conscious beings in a kind of stasis. Through a close understanding of machine consciousness, perhaps we can avoid such ethical nightmares. AI designers may make deliberate design decisions, in consultation with ethicists, to ensure their machines lack consciousness.

AI safety

So far, my discussion of consciousness engineering has largely focused on reasons that AI developers may seek to avoid creating conscious AIs. What about the other side? Will there be reasons to engineer consciousness into AIs, assuming that doing so is even compatible with the laws of nature? Perhaps.

The first reason is that conscious machines might be safer. Some of the world’s most impressive supercomputers are designed to be neuromorphic, mirroring the workings of the brain, at least in broad strokes. As neuromorphic AIs become ever more like the brain, it is natural to worry that they might have the kind of drawbacks we humans have, such as emotional volatility. Could a neuromorphic system “wake up,” becoming volatile or resistant to authority, like an adolescent in the throes of hormones?

Such scenarios are carefully investigated by certain cybersecurity experts. But what if, at the end of the day, we find that the opposite happens? The spark of consciousness makes a certain AI system more empathetic, more humane. The value that an AI places on us may hinge on whether it believes it feels like something to be us. This insight may require nothing less than machine consciousness. The reason many humans are horrified at the thought of brutality toward a dog or cat is that we sense that they can suffer and feel a range of emotions, much like we do. For all we know, conscious AI may lead to safer AI.

A second reason to create conscious AIs is that consumers might want them.

I’ve mentioned the film Her, in which Theodore has a romantic relationship with his AI assistant, Samantha. That relationship would be quite one-sided if Samantha were a nonconscious machine. The romance is predicated on the idea that Samantha feels. Few of us would want friends or romantic partners who ghost-walked through events in our lives, seeming to share experiences with us but in fact feeling nothing, being what philosophers call “zombies.”

Of course, one may unwittingly be duped by the human-like appearance or affectionate behavior of AI zombies. But perhaps, over time, public awareness will be raised, and people will long for genuinely conscious AI companions, encouraging AI companies to attempt to produce conscious AIs.

Seeding sentience in space

A third reason is that AIs may make better astronauts, especially on interstellar journeys. At the Institute for Advanced Study in Princeton, we are exploring the possibility of seeding the universe with conscious AIs. Our discussions are inspired by a recent project that one of my collaborators there, the astrophysicist Edwin Turner, helped found, together with Stephen Hawking, Freeman Dyson, Yuri Milner, and others. The Breakthrough Starshot Initiative is a $100-million endeavor to send thousands of ultra-small ships to the nearest star system, Alpha Centauri, at about 20 percent of the speed of light within the next few decades. The tiny ships will be extraordinarily light, each weighing about a gram. For this reason, they can travel far closer to the speed of light than conventional spacecraft can.

A solar sail, the type of “ship” that could carry nanoscale microchips to the stars. (Wikimedia Commons, Kevin Gill)

In our project, called “Sentience to the Stars,” Turner and I, along with computer scientist Olaf Witkowski and astrophysicist Caleb Scharf, urge that interstellar missions like Starshot could benefit from having an autonomous AI component. Nanoscale microchips on each ship would serve as components of a larger AI architecture configured from the interacting chips. Autonomous AI could be quite useful because, for a ship near Alpha Centauri, round-trip communication with Earth at light speed would take more than eight years—over four years for a signal to reach Earth and over four more for the answer to return to Alpha Centauri. To have real-time decision-making capacities, civilizations embarking on interstellar voyages will either need to send members of their civilizations on intergenerational missions—a daunting task—or put AGIs on the ships themselves. [AGI refers to “Artificial General Intelligence,” an AI able to integrate information across various sensory and cognitive domains and carry out a varied range of intellectual tasks.]
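
For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The one input not stated in the essay, the distance to Alpha Centauri of roughly 4.37 light-years, is an assumption supplied for illustration.

# Back-of-the-envelope check on the Starshot figures above.
DISTANCE_LY = 4.37   # assumed distance to Alpha Centauri, in light-years
CRUISE_SPEED = 0.20  # ship speed as a fraction of light speed, per the essay

# Cruise time for the gram-scale ships, ignoring the acceleration phase.
travel_time_years = DISTANCE_LY / CRUISE_SPEED

# A radio signal covers one light-year per year, so a question-and-answer
# exchange with Earth spans two legs of the full distance.
round_trip_delay_years = 2 * DISTANCE_LY

print(f"Cruise time at 0.2c: about {travel_time_years:.0f} years")           # ~22
print(f"Round-trip signal delay: about {round_trip_delay_years:.1f} years")  # ~8.7

The roughly twenty-two-year cruise squares with the “next few decades” timeline above, and the nearly nine-year signal delay is precisely what makes onboard autonomy attractive.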

Of course, this doesn’t mean that the AGIs would be conscious; that would require a deliberate engineering effort over and above the mere construction of a highly intelligent system. Nonetheless, if Earthlings send AGIs in their stead, they may become intrigued by the possibility of making them conscious. Perhaps the universe will not contain a single other case of intelligent life, and disappointed humans will long to seed the universe with their AI “mindchildren.” If we find that we are alone in the cosmos, why not create synthetic mindchildren to colonize the empty reaches of the universe? Perhaps these synthetic consciousnesses could be designed to have an amazing and varied range of conscious experiences. All this, of course, assumes that AI can be conscious; I am uncertain whether this is in the cards.

Now let’s turn to a final path to conscious AI.

A human-machine merger

Neuroscience textbooks contain dramatic cases of people who have lost their ability to lay down new memories but who can still manage to accurately recall events that happened before their illness. They suffered from severe damage to the hippocampus, a part of the brain’s limbic system that is essential for encoding new memories. These unfortunate patients are unable to remember what happened even a few minutes ago.[3] At the University of Southern California, Theodore Berger has developed an artificial hippocampus that has been successfully used in primates and is currently being tested in humans.[4] Berger’s implants could provide these individuals with the crucial ability to lay down new memories.

Brain chips are being developed for other conditions as well, such as Alzheimer’s disease and post-traumatic stress disorder. In a similar vein, microchips could be used to replace parts of the brain responsible for certain conscious contents, such as part or all of one’s visual field. If, at some point, chips are used in areas of the brain responsible for consciousness, we might find that replacing a brain region with a chip causes a loss of a certain kind of experience, like the episodes that Oliver Sacks wrote about.[5] Chip engineers could then try a different substrate or chip architecture, hopefully arriving at one that does the trick.

At some point, researchers may hit a wall, finding that only biological brain enhancements can be used in parts of the brain responsible for conscious processing. But if they don’t hit a wall, this could be a path to deliberately engineered conscious AI.

But I doubt these fixes and enhancements would create an isomorph—a system that precisely mimics the organization of a conscious system. Nonetheless, they could still be tremendously useful. They would give hardware developers an incentive to make sure their devices support consciousness. They would also create a market for tests to ensure that the devices are suitable; otherwise, no one will want to install the devices in their heads.

Unlike techno-optimism, the “Wait and See Approach” recognizes that sophisticated AI may not be conscious. For all we know, AIs can be conscious, but self-improving AIs might tend to engineer consciousness out. Or AI companies may simply judge conscious AI to be a public relations nightmare. Machine consciousness depends on variables that we cannot fully gauge: public demand for synthetic consciousness, concerns about whether sentient machines are safe, the success of AI-based neural prosthetics and enhancements, and even the whims of AI designers. Remember, at this point, we are dealing with unknowns.

However things play out, the future will be far more complex than our thought experiments depict. Furthermore, if synthetic consciousness ever exists, it may exist in only one sort of system and not others. It may not be present in the androids that are our closest companions, but it may be instantiated in a system painstakingly reverse engineered from the brain’s cognitive architecture. And, as in Westworld, someone may engineer consciousness into certain systems and not into others.


[1] Boly et al. (2017); Koch et al. (2016); Tononi et al. (2016).
[2] My search was conducted on February 17, 2018.
[3] For a gripping tale of one patient’s experience, see Lemonick (2017).
[4] See McKelvey (2016); Hampson et al. (2018); Song et al. (2018).
[5] Sacks (1985).


Excerpted from ARTIFICIAL YOU: AI and the Future of Your Mind, by Susan Schneider. Copyright © 2019 by Susan Schneider. Published by Princeton University Press. Reprinted by permission. See also ORBITER’s interview with Schneider.