“Some People Swear Their Houseplants Are Conscious”

Michael Graziano, a professor of psychology and neuroscience at Princeton University, is one of the world’s top experts on consciousness. His groundbreaking 2013 book, Consciousness and the Social Brain, theorized that the brain machinery that attributes awareness to others also attributes it to oneself.

Last year, Graziano released another book, The Spaces Between Us: A Story of Neuroscience, Evolution, and Human Nature (Oxford University Press), examining (among other things) how our brains develop the notion of “personal space” and boundaries.

With such magnificent brains, certainly humans stand unique and distinct from all others, right? Well, yes . . . and no. Many in the animal kingdom share some of these traits with us, though perhaps not with as much complexity and sophistication.

We talked to Graziano about consciousness, personal space, and human uniqueness.

ORBITER: How would you define consciousness?

Michael Graziano: It’s very tricky. One reason there’s so much difficulty studying it is that nobody really knows what it is, and there are so many different definitions of it.

I think the way most scientists study it today is a little different from the way it was studied 50 years ago. People used to think about consciousness as simply “all the stuff in my mind”—all my thoughts, emotions, and perceptions of the world, memories, sense of self, and so on.

But today, scholars tend to think all of that is just the content, and what we really want to understand is how you get to be conscious of the content. On that view, consciousness is the subjective experience of anything at all. Think of it like this: you could be presented with something seemingly trivial, like the color red or a spot on a screen. But we don’t just process it like a computer; we have a subjective internal experience of it. That’s the essence of consciousness.

Bottom line: Consciousness is the ability to have a subjective experience that seems to go beyond merely processing the information.

The old way of thinking about consciousness is that it’s just a hard drive full of data.

It’s a very close metaphor, actually. I suspect one of the reasons why the scientific and philosophical approach has shifted is because of computer technology. It used to be that memory and decision making and visual processing were huge mysteries, but now we actually know how to do those things. Computers can make decisions. They have memory. They can process images. All those things that used to be mysterious are now, from an engineering point of view, more routine. That’s highlighted the fact that there’s still this deeper question that isn’t answered yet, which is, “How is it that we also have this extra subjective experience?” And we can’t manage to copy that yet in computers.

Is that essentially self-awareness? We’re aware that we’re here, that we’re thinking, that we have feelings. But we can’t program a computer to –

Self-awareness is just a category of consciousness, of being conscious of a particular kind of information. We don’t have a good grasp of how we have a subjective experience of something while the computer doesn’t.

What is it about human consciousness that distinguishes us from other species: say, a chimpanzee, an aardvark, or even a single-celled organism?

My view would be that, as humans, we feel there’s some kind of nonphysical essence inside of us, the “what it feels like” to look at the world and to look at ourselves. What I suspect, and I think a growing number of neuroscientists are beginning to suspect, is that we’re really talking about a “self-model,” something the brain builds that merely has properties that sound kind of magical. A self-model, like any other internal model, is information put together in the brain. The brain generally doesn’t build terribly accurate models; it builds quick-and-dirty, useful descriptions of itself and of things in the world.

Consciousness, as we understand it, may be essentially a kind of quick-and-dirty self-model. It’s easier to think of ourselves as having a kind of “spirit” in the head than to think of ourselves as a massively complicated collection of neurons processing information. Then we can begin to ask which animals have that kind of self-model and which ones don’t. Generally that would only apply to things with brains, right? A single-celled organism does process information at a complex level, but there’s no indication that it’s building a self-model. You have to go pretty far up the scale for that. But how far up? That’s not so clear.

I would guess most mammals, at least, and maybe birds, have some version of this kind of self-description. I’m not sure beyond that. I would say all those animals can be conscious of things, in the sense that they can have this “internal experience.” But the range of things of which they can be conscious is probably a lot more limited, because they don’t have the intellectual capacity to think the complicated thoughts we think.

Some people might see consciousness and the soul as similar things. Any thoughts on that?

As a scientist, I’d say humans are predisposed to believe in things like souls because, as I said, the brain builds these simple models, bundles of information that describe things in the world. That’s what we do. That’s what the brain does. But these models are never accurate.

To give an example, consider the brain’s model of “white.” You look at white light, and the visual system has this model that’s been honed over millions of years of evolution. In that model, white is pure brightness and the absence of any color. But that’s wrong, because we now know that white is a mixture of all colors. The brain has taken a shortcut and built a descriptive model that’s not right. But it’s good enough, and we get by with it.

So we come predisposed with these very simplified models. Instead of understanding our minds as a trillion-stranded sculpture made out of information that’s constantly changing, the model we build of ourselves is, “I have a soul. I have this unified, energy-like essence inside me that can understand things and make decisions.”

I think we come predisposed to a lot of these religious beliefs, including the almost universal religious belief in a soul that lives on past the death of the body. But we’re just looking at the brain’s simple models, honed over millions of years of evolution, about what we are.

Could that be the thing that makes us uniquely human—this predisposition to build such a model, to believe that we have a soul or inner being?

Like us, a chimp sees white as pure luminance without any color, because chimps have the same organization to their visual system. And like us, I think they also see themselves as being actuated by some invisible force inside them that’s essentially nonphysical. Their brains are doing the same simple trick as our brains. So in that sense, yeah, I think chimps believe in a soul. (FWIW, Jane Goodall agrees.)

But the difference, of course, is that we attach it to these cultural and much more elaborated ideas. So I don’t think a chimp is going around saying, “Oh, I have a soul. Therefore, it will live beyond my body, and I will go to heaven,” or something like that. I don’t think chimps have that sophistication. But I think the raw experience, so to speak, the self-perception, that there’s something in me that looks out at the world, the something that’s kind of invisible and has no physical substance . . . I think that’s got to be the same between us and a chimp.

You’ve come up with something called attention schema theory. Is that unique to humans?

Well, it fits right into what I’ve been talking about. The attention schema is just the brain’s model of its own attention, its way of understanding how it processes information. The theory certainly applies to people, and it probably applies to most mammals, maybe birds. So it’s not really a theory that’s uniquely about people.

But there is one interesting thing that people do more than other animals, as far as I know: We don’t just think about our own consciousness. We think about consciousness in others too. In fact, that may be the main way that we humans use it. I mean, talking to you right now, I am not just talking to a telephone. I have this strong sense that there’s a person who is conscious on the other side of the line. And that’s how we interact with each other. That’s the basis of all our social intelligence, our social cognition—we have this ability to paint consciousness onto the things around us. So what we’re doing, what I’m doing, is building a model of a mind and attributing it to you.

I think many animals do this, but in a less sophisticated way. I mean, the zebra has to look at the lion and think, “Is he aware of me? Is he conscious of me?” And the cat looks at the light spot from the laser pointer and thinks, “Oh, my God, it’s alive!” But I think people do this to an extreme. I mean, we attribute consciousness to everything. Children do this with their stuffed animals, and some people swear their houseplants are conscious. And the other day, I got mad at my computer. So we all have this incredibly strong tendency to promiscuously attribute consciousness to empty spaces and objects around us. I think that exaggeration of that ability may be a human thing.

I hear you about getting mad at the computer! Speaking of, do you think we’ll reach a point in artificial intelligence where we can create consciousness in a machine?

I think we’re getting close to that. But again, here’s the difference: I think we’re beginning to understand consciousness from an engineering perspective. If I am really an information-processing machine that claims to be conscious, and I make that claim on the basis of information that’s been programmed into me, in an automatic and reflexive way such that I don’t even realize it’s a construct, then yes, I think that’s a form of consciousness. And I think we’re very close to being able to engineer that.

But when most people think of a conscious machine, they’re thinking of something like C-3PO, who’s not only conscious but verbally capable. He can walk around pretty well. He has vast amounts of knowledge, including social knowledge. He’s an incredibly capable machine across vast ranges of domains. And he appears to be conscious, right? That level of capability may be very far in the distant future. But I think we’re very close to being able to build the basics of the same algorithms that make us conscious.
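To make that engineering framing concrete, here is a toy sketch from the editors; it is not Graziano’s code or his actual model, and every name in it is invented. The cartoon idea: an agent’s machinery selects a signal to attend to, but the agent can only describe itself through a simplified self-model, so when asked what is happening it reports an “experience” rather than the mechanics.

```python
# Toy illustration of a self-model that reports "experience."
# All names are invented; this is a cartoon, not a theory.

class Agent:
    def __init__(self):
        # The actual machinery: signals competing for processing.
        self.signals = {"red spot": 0.9, "background hum": 0.2}
        # The quick-and-dirty self-model the agent builds of that
        # machinery. It leaves out the mechanics entirely.
        self.self_model = {}

    def attend(self):
        # The machinery selects the strongest signal.
        target = max(self.signals, key=self.signals.get)
        # The self-model describes the selection in simplified,
        # almost "magical" terms: not neurons, just experience.
        self.self_model = {
            "attending_to": target,
            "description": "a subjective experience of the " + target,
        }

    def report(self):
        # Asked what is going on, the agent consults its model of
        # itself, so it reports the simplified description. It has
        # no access to the fact that the claim is a construct.
        return "I am having " + self.self_model["description"] + "."

agent = Agent()
agent.attend()
print(agent.report())  # I am having a subjective experience of the red spot.
```

The point is the one Graziano makes above: the claim of subjective experience comes from the model the system consults about itself, not from some extra ingredient.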

Does that notion scare you? Or excite you? Or a little of both? Maybe I’ve watched too many sci-fi movies with robots going rogue and taking over the world.

Many people think that if you make a machine conscious, it’s like the machine “wakes up” from being asleep, and now it’s capable of acting on its own. It has autonomy, and knowledge about itself, separate from the world. But actually, none of that’s true, because machines, as they are right now, already have those things. They can make decisions on their own. AI is already perfectly capable of killing people effectively without the consciousness part. So what I –

But only because we’ve programmed it to do so, correct? Not because it “decided” to.

Well, machines that can make decisions on their own are already here, including machines that can learn over time. But what the machine lacks is that it doesn’t claim, “Oh, and in addition to doing all of that, I also have a subjective experience of it. And there’s an essence inside of me that feels.” That’s what it’s lacking, and that’s the consciousness part, right?

So my first point is that it’s not clear that adding consciousness to a machine necessarily makes it more dangerous. All the dangers are already there. You want machines that know what a conscious agent is, so they can at least recognize people as also being conscious and not merely obstacles. So maybe there’s some benefit, but I really don’t know.

Can the machine that becomes “conscious” also be programmed to know the difference between good and evil?

Good question. Morality is another example of content, right? You can build a machine that’s programmed, “Don’t kill,” or, “Don’t behave this way,” or, “Do behave that way.” I mean, that’s a kind of morality. But then there’s a separate question of having a subjective experience of what you’re doing. So, in a sense, the question of morality and the question of consciousness are not really the same question. The old way of thinking about consciousness—everything that’s flooding through my mind right now—includes morality as a part of that flooding. But the newer approach is that there’s this one essential quality that we can separate out—the experiential quality—that kind of pulls consciousness away from the morality question.

It’s still a huge question. I mean, I think we’re screwed. The technology is moving way faster than we can keep track of it. And with or without consciousness, we may well be in trouble with our machines.

I hear you. On another note, you’ve written about “personal space.” Is that uniquely human?

Personal space. Definitely not uniquely human. Personal space is this buffer zone around the body that is really a defensive zone, where you don’t want other people or other things. It’s your safety buffer. It was actually first studied in animals by Heini Hediger, who directed the Zürich Zoo in the 1950s and came up with the idea of personal space and flight zones. A zebra, for example, doesn’t just run when it sees a lion; it runs only when the lion enters its flight zone. And then the zebra moves away just far enough to reinstate that buffer zone around it.

The idea of personal space, and the brain mechanisms behind it, have been studied intensively since then. There are neurons in the brain that are like radar: they detect when objects get within a certain range of the body, and then they become highly active, feeding into mechanisms that cause movement, that make you cringe or duck away or avoid. The brain has this beautiful system, a really complicated reflex: sensory inputs tell you when objects are looming too close, and motor outputs shape your behavior so that you don’t crash into them. It’s a very basic mechanism. It’s been studied in monkeys, and it’s probably present in a lot of other animals as well. So personal space is something that’s shared across mammals, anyway.
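As a rough illustration of that reflex, here is a minimal sketch from the editors, with invented coordinates and radii; it is a cartoon of the behavior, not a model of the actual neurons. An object inside a fixed buffer radius triggers a movement directly away from it; outside the buffer, nothing fires.

```python
import math

# Toy sketch of the "radar" reflex: an object entering the buffer
# zone around the body triggers movement that reinstates the buffer.
BUFFER_RADIUS = 1.5  # size of the defensive zone, arbitrary units

def avoidance_vector(body, obj):
    """Return a unit direction to move away, or None if no reflex fires."""
    dx, dy = body[0] - obj[0], body[1] - obj[1]
    dist = math.hypot(dx, dy)
    if dist >= BUFFER_RADIUS:
        return None          # object outside the buffer: stay quiet
    if dist == 0:
        return (1.0, 0.0)    # overlapping: flee in an arbitrary direction
    return (dx / dist, dy / dist)  # move directly away from the object

print(avoidance_vector((0, 0), (5, 0)))  # None: lion outside the flight zone
print(avoidance_vector((0, 0), (1, 0)))  # (-1.0, 0.0): duck away
```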

The lion-and-zebra example makes total sense, because it’s a self-protection thing. But what about when a member of the same species gets too close—like that annoying guy at the office who’s all in your face? Is that unique to humans?

No. Many animals have this personal space thing with respect to other members of the same species. For example, if you look at birds on a wire, even when it’s crowded, they still space themselves just enough to maintain “personal space.” And actually, it’s the same mechanism as with the zebra and the lion. The difference is that, if you were afraid of me, you would never let me get that close; your personal space would expand out to a safe distance. That’s what the zebra’s doing with the lion. But where the threat is lower, the personal space doesn’t extend out as far.

Take the case of two members of the same species, let’s say humans. We’re not going to physically hurt each other; we’re friendly. But you still start getting uncomfortable if I get too close. And with a romantic partner, your personal space shrinks so much that you’ll let someone get right up on you. So it’s titrated by the level of anxiety and the threat level of the object.
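Continuing the editors’ sketch from above, that titration could be pictured as the buffer radius scaling with a threat estimate; the scaling and the numbers here are invented for illustration.

```python
# Extending the toy sketch: the buffer radius expands or shrinks
# with perceived threat. The scaling and numbers are invented.
BASE_RADIUS = 1.5  # arbitrary units

def personal_space_radius(threat_level):
    """Map a 0-to-1 threat estimate to a buffer radius."""
    return BASE_RADIUS * (0.1 + 10 * threat_level)

print(personal_space_radius(0.9))  # a lion: ~13.65 units, a huge flight zone
print(personal_space_radius(0.3))  # an in-your-face coworker: ~4.65 units
print(personal_space_radius(0.0))  # a trusted partner: ~0.15, nearly nothing
```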

Anything else to add on this topic of being uniquely human?

(Laughs.) Well, of course, almost everything I’ve said is, “Well, we’re not really unique.”

But you know, we are very brainy. We’re just incredibly smart creatures. But that smartness seems to ride on top of basic processes that are similar across species. So with respect to the consciousness topic, I see continuity between us and other animals. But on the other hand, we’re the only animals that are potentially about to build conscious computers. So that’s pretty uniquely human.


Watch this TED-Ed video in which Graziano gives an animated lesson on consciousness.


Managing Editor, ORBITER magazine