Your Brain, AI, and the Future

It is 2045. Today, you are out shopping. Your first stop is the Center for Mind Design. As you walk in, a large menu stands before you. It lists brain enhancements with funky names.

“Hive Mind” is a brain chip allowing you to experience the innermost thoughts of your loved ones. “Zen Garden” is a microchip for Zen master-level meditative states. “Human Calculator” gives you savant-level mathematical abilities. What would you select . . .

So begins the introduction to Susan Schneider’s new book, Artificial You: AI and the Future of Your Mind (Princeton University Press). (Read an exclusive excerpt for ORBITER.)

Schneider, an Associate Professor of Philosophy and Cognitive Science at the University of Connecticut, has been thinking, speaking, and writing about artificial intelligence for some time.

She wraps up many of those thoughts in this new book, a fascinating read about some of the technological enhancements that might be just around the corner. The Brain Foundation calls Schneider “the Sarah Connor of philosophy as she ponders the role of science fiction and thought experiments to help understand uploading, time travel, superintelligence, the Singularity, a new approach to the computational theory of mind, consciousness, and physicalism.”

Schneider, who holds both the NASA-Baruch Blumberg Chair and a Distinguished Scholar Chair with the Library of Congress and NASA, is the first to admit she doesn’t have all the answers to what the future might look like.

Heck, we don’t even have all the questions yet. But ORBITER decided to give it a try in this interview.

ORBITER: How did you get into the field of philosophy and cognitive science?

Susan Schneider: It was a sort of roundabout route. I did an exchange program in Budapest, Hungary, my junior year, and they assigned a lot of philosophy, particularly people like Michel Foucault and the sociologist Erving Goffman. And then I went back to UC Berkeley, where I was actually an economics major. I took a course with the famous philosopher Donald Davidson about philosophy of mind, which fascinated me.

I started to think deeply about the nature of mind, and I started to take graduate cognitive science courses while I was an undergrad. I decided to study philosophy and cognitive science. I love to think and learn about science, as well as the humanities, and the areas that I’m most interested in touch on how we make value judgments about emerging technologies. Which, to me, is really the intersection of science and philosophy.

I also did a few years of research at the Center of Theological Inquiry in Princeton and learned about the debates over the nature of life in the field of astrobiology. Theologians are amazingly open-minded about the big questions.

Let’s talk about your new book, starting with the basics. How do you define artificial intelligence?

It would be synthetic intelligence, defined broadly to include both domain-specific systems, like AlphaGo, and hypothetical general intelligences that, like biological systems, combine information from different sensory modalities and have more sophisticated cognitive lives. I use it as a sort of generic term. People tend to overuse the term, applying it just to algorithms in general. I’m fine with that; I do the same. If someone’s talking about the AI behind Facebook, I’m happy to talk that talk as well.

Algorithms being slightly different from true intelligence, you mean?

Right. Well, there’s a big issue about what intelligence is. People in the cognitive science community talk about the intelligence of trees and slime molds, believe it or not. Astrobiologists talk about the intelligence of a planet. So we can’t anthropomorphize intelligence. I think it’s important that as we look at artificial intelligence, as AI develops, it may not actually be like us.

A key topic in your book is whether or not AI can achieve consciousness. How do you define consciousness?

It’s the felt quality of experience. There’s something it feels like to be you, so when you smell the scent of lavender or hear the sound of a saxophone, there’s a felt quality to your experience. Consciousness is what makes life so wonderful and at times so painful. It’s what makes us sentient.

Do we want artificial intelligence to have consciousness? And why or why not?

That’s a really important question. Even if we could create conscious AI, it’s not clear we would even want to. We have to think about the purpose that the AI is supposed to be achieving. For example, suppose we’re creating robots for military operations. Do we really want a conscious being sent into a war zone? I’m not even saying it’s a good idea to use robots in war zones; I’m just saying that’s a case where the creation of a sentient being who would suffer could be deeply problematic. On the other hand, some people might argue that we should, because it would make the creature more compassionate.

Consider a different case: What about our household robots? We are building androids to take care of the elderly. The ones being built in Hiroshi Ishiguro’s lab are highly humanlike, practically indistinguishable from humans. Do we want these creatures to be conscious? I mean, science fiction has done a lovely job of depicting the moral quandaries that could occur, right?

Definitely.

Films like I, Robot, for instance, depict the oppression of sentient beings. I think people have a sense that, if we build a conscious robot, it deserves special moral consideration, alongside other sentient beings. So, do you really want to purchase a sentient being and have it work for you? That would be akin to slavery.

Sophia (Hanson Robotics)

On the other hand, I think it’s very easy to misunderstand whether an AI is conscious. For instance, look at Sophia of Hanson Robotics. A lot of people anthropomorphized Sophia, overestimating her level of intelligence because she looked very human. People assumed there was something emotional going on inside of her, that it felt like something to be her. I call this the “cute and fluffy fallacy.” If we build robots that are fluffy and cute and look human, or look like nonhuman animals, people will believe that it feels like something to be those creatures. We’re making inferences based on our experience, but the realm of AI will introduce new challenges. So I don’t know that it’s a good idea to make human-looking AIs, because I think it’s just going to encourage confusion.

Though the two words are similar, there’s a difference between consciousness and conscience. I think I’d want a robot with a conscience, particularly in the military: “No, I won’t kill that civilian.” So, the question is, can you have a conscience—with empathy and compassion—without being conscious?

It’s an important question. When I was an undergraduate, learning this stuff for the first time, I kind of conflated the two. It may be that, to have a conscience, you do need to be a conscious being. But those are indeed separate things; for example, there are conscious beings, like sociopaths, who lack a conscience.

I think your question gets to the issue of safety—if we deploy a military bot or we have somebody taking care of our grandmother, what sort of moral programming should that creature have? And if it wasn’t a conscious being, could we trust it to act appropriately in all kinds of circumstances? This is an issue that’s being carefully researched. AI safety is a major concern. If we build intelligent systems that actually exceed us in intelligence across all sorts of domains, how do we control them? Millions of dollars have been poured into AI safety, trying to figure out what sort of algorithms we code into the machine. Can we hardwire a machine to be ethical? There are all sorts of strategies that people are trying out.

It’s tricky, because there’s not an agreement in the field of ethics about what a right action is. There are classic debates between the Kantians and utilitarians—debates about how we should decide what is right. But even without agreement in ethics, the challenge is to make robots that behave correctly, and it is super hard. Is consciousness required? We just don’t know. I think we would have to run tests to determine if we did have a conscious robot, and understand the ramifications if it is conscious.

Then you’d get into all kinds of issues like human rights, er, I mean robot rights. And if it were conscious, could you “deprogram” it to remove that consciousness?

Or if there’s a prototype AI that we believe is conscious, then, at the R&D stage, how do we treat it? Is it even justifiable to dial down the level of consciousness, if you will, or turn the machine off permanently? These are really important questions. That being said, I don’t think it’ll be easy to build a conscious machine. I think we’ll have general intelligences that are impressive well before we’re capable of creating conscious machines. I think people are far too optimistic about machine consciousness.

I’m thinking of the movie I, Robot, where the robots become sentient and then rebel against humans. I once heard someone say that if we do end up building robots that become sentient, it’s our own fault for programming them that way. Therefore, in theory, there should never be a scenario where robots turn against us.

To me, it doesn’t really matter whose “fault” it is. AI safety is so important. Our lack of foresight . . . How would I put it? We didn’t understand the ethical implications of nuclear technology as we were developing it. And there are lots of technologies where we didn’t consider the ethics until after the technology was developed. That’s something that should give us pause, right? We have to do our best to understand the potential repercussions of AI, but at the same time, there are inevitably going to be unknowns. So all we can do is do our best.

Let’s talk about transhumanism and brain enhancements—the integration of computer technology and our human bodies. My mom has Alzheimer’s, and I think it’d be great to put a computer implant in her brain to help her remember things. I can’t imagine an ethical issue with that. But when you go beyond that, and think about the possibilities, things get dicey and weird and maybe unethical. Elon Musk wants to put the internet in our brains, and others want to upload our brains onto a hard drive in the pursuit of immortality. The mind reels.

Yes. I recently wrote about Musk’s Neuralink in The New York Times and Financial Times. Musk’s aspirations are to get us to upgrade our brains in a transhumanist way, sort of like Ray Kurzweil, where you’re actually replacing brain tissue with microchips until eventually, you merge with the AIs themselves. It’s this idea of a mind-machine merger, where we become essentially artificial intelligences.

A 3D rendering of the hippocampus

You started by saying it would be wonderful to help people like your mom. I think everybody agrees that the use of AI or nanotechnology or biological therapies in the brain to help people who are missing a working hippocampus, who are locked-in, who have Alzheimer’s, Parkinson’s—I mean, it’s wonderful, and it can’t come fast enough. These are therapeutic uses of brain technology, whether it be artificial intelligence technologies or biological technologies.

Right now, they’ve got an artificial hippocampus that’s being tested in humans at Ted Berger’s lab. It’s meant to help people who’ve had damage to their hippocampus. Now suppose that, as we age, they develop an artificial hippocampus that functions better than our biological one did. It’d be just like LASIK, right? When my father had cataract surgery, they threw in LASIK too. So, people who go in for surgeries in the future, they might ask, “Hey, why not get a brain enhancement too?” What do we want to achieve? It’s an important question, but I think it’s difficult to answer.

Give it a shot.

In my book, I talk about a future place called the Center for Mind Design—sort of like a store where there’s a menu of enhancements. You’re excited about the possibility of maybe getting savant-like mathematical or musical skills, or enhancing your attention. You have to ask: How far can you push it and still be yourself? Because if you overhaul your brain in radical ways and maybe even offload some of your mental activities to microchips, there’s a chance that that “new” being may not even really be you, but someone else entirely.

I think we have to be super careful. I mean, I’m not at all critical of the use of AI technology in the brain to help people who have had some sort of devastating illness. At Neuralink right now, the hope is that some of the new technology could be used by locked-in patients. But enhancing a normal patient who doesn’t have a disorder? I think it’s very dangerous.

And then there are issues involving privacy of thought, right? Cyberpunk, a genre in science fiction, has long depicted scenarios where people actually lose access to their own thoughts. Imagine a mother who uploads many of her memories to the cloud—and then loses her subscription and can’t remember her child’s early years. Or imagine not being able to remember your parents because you’ve offloaded those memories and you can’t pay for your subscription to your own memory. We already see these kinds of issues with our photographs being stored in the cloud. All kinds of nefarious things could happen if AI goes in the head.

If there were a Center for Mind Design, with all these options, you’d have to wrestle with them. I’d love to be able to push a button and know how to play the piano. But that would be cheating, right? At the same time, it’d be nice to remove things from my head that cause anxiety or depression. The possibilities are literally mind boggling.

Yeah. I’m a transhumanist myself. I think people should be free from unwanted psychology, so if somebody has lived a life of painful depression and wants to tweak their brain chemistry, more power to them. But these are individual decisions, and people need to understand that the nature of the self and mind are vexing philosophical questions with no easy answers. So if one wants to radically alter their cognitive abilities or perceptual abilities, the end result may not be them, according to leading theories of personal identity in the realm of metaphysics.

I’m all about consumer education. I think that, if these brain enhancement technologies go up for FDA approval in the United States, the FDA should demand that the public understand the issues involving the nature of the person. These are philosophical issues, deciding whether to enhance your brain. It’s not just a matter of the safety of the implants.

But I support people’s right to enhance. And I hope that medical technology frees us from illness and our eventual aging and demise. That’s a very desirable end goal, right?

Well, now we’re getting into immortality. I suppose someday we’ll be able to upload our brains to live on forever on a hard drive after our bodies die. And some –

I oppose that. I think that that is hogwash. Because I don’t think it would be you. If your brain was destroyed, and the data was coded into a computer and then downloaded into a robot, that creature might have your personality traits and may even recall all kinds of events from your life, but it wouldn’t be you. Your consciousness wouldn’t magically travel to the robot’s head.

I’m inclined to agree, but can you explain why my consciousness would not go with my upload?

Well, you should read my book, because I list the reasons. I use the example of Kim Suozzi, a young lady who was a neuroscience major and was planning to go into the field when she found out she had a brain tumor. She had her brain cryogenically frozen at Alcor, and they were talking about uploading her brain. In the book, I talk about whether it could actually be her if that upload is ever created. I think uploading, by the way, is not going to happen any time soon.

Let’s move a little bit beyond the philosophical and into the theological. I believe I have a soul and it is somehow connected to consciousness and my brain. What are some of the possible theological objections to the notion of brain enhancement?

Who knows if the soul would survive radical enhancement? If you walked into the Center for Mind Design, and you walked out with all sorts of microchips in your brain and a different personality and so on, and you were a religious person . . .

Can the soul be “uploaded”?

I just think it would be a mistake to not consult your religious adviser on such things, to ask questions about the soul. I’m not a religious adviser, but I would imagine that these are the sorts of issues that could come up as we begin to enhance our brains. Similarly, as we create AIs that look very humanlike, or that the public believes may be conscious, there are questions there about whether robots would have souls.

It’ll be interesting to see the development of theological positions on this. My hope is that we develop tests for consciousness that go beyond surface appearance—true standards for understanding if and when a machine is conscious. If it turns out that something’s conscious, then to make it work for us would be akin to slavery. But, at the same time, if we make a mistake and say that something is conscious when it’s not, we’re giving it legal protection that we normally afford to conscious beings. So we wouldn’t want to wrongly claim that something’s conscious when it’s not.

Final question: When you think about AI and the future of our minds, are you a) excited, b) scared, or c) a little of both? Or d), none of the above?

I am optimistic because I think that, through public dialogue, people will understand better the nuances of what these technologies can and can’t achieve. People will be sensitive to issues involving whether AIs are minded or are conscious, and similarly whether we could radically overhaul our mental lives and still be the same person. I think it’s easy to raise these issues with the public. I’m seeing a lot of sensitivity. I’m also seeing sensitivity with the AI community as well.

Sounds like you’re cautiously optimistic.

Yeah. Cautiously optimistic. There’s a lot of effort right now to build safe AI. There are institutes working on AI safety that are doing such nice work, like the Future of Humanity Institute at Oxford and the Future of Life Institute at MIT. And I know that members of Congress are very concerned about AI. I’m meeting with them in November, and I’ll be in DC all next year, working with people there on these issues.
