Part one of a three-part series. In each part, I will examine one uniquely human characteristic that is central to our flourishing and explore its relevance to contemporary and future technologies. This part describes the uniquely human penchant for sociality and explores the social-technological landscape with an eye toward the social robots of the future.
What does it mean for a human being to flourish? And how can technology help with that—particularly biotechnology and artificial intelligence?
Humans are biophysical animals whose capacities for sociality, culture, and self-reflection set us apart from all other organisms. Distinctively human flourishing emerges when those elements of our nature are authentically fulfilled at the deepest levels, resulting in rich social connections, day-to-day engagement and joy, and a deep sense of meaning and purpose.
Unpacking all of human flourishing would be the work of a lifetime, so for this article, I’ll focus on the central element of human sociality.
The Italian writer Luciano de Crescenzo put it memorably: “We are each of us angels with only one wing, and we can only fly embracing one another.” Indeed, the importance of concepts such as friendship, social support, social connection, healthy familial ties, bonding, attachment, and partnership is widely evident in the sociological, psychological, and public health literatures. The ties that bind us together form a lattice that supports our growth, health, and longevity; thus, humans naturally seek fulfilling relationships—with each other, of course, but also with pets, rudimentary artificial intelligence (AI) systems, and perhaps someday with fully competent social robots.
Humans are uniquely social
Somewhere between 2 million and 200,000 years ago, a seemingly insignificant primate in the ancestral lineage of Homo sapiens gradually developed the most massive mammalian brain (as a percentage of body mass) that the world had ever seen. Over that span, brain size nearly tripled, from around 500 cm3 (about one pint—the size of a chimpanzee brain) to around 1400 cm3 (about three pints). The brain is a very greedy organ, metabolically speaking (it consumes about 20 percent of your body’s energy while comprising just two percent of its mass), so for our human ancestors to survive and reproduce, there had to be considerable evolutionary payoffs for this costly investment.
There are many explanations for the evolutionary advantage granted by our large brains, but a prime contender is the social brain hypothesis. In short, the idea is that the computationally intensive tasks of socializing with increasingly large numbers of fellow humans—while holding complex mental simulations of social dynamics in place (reputations and who knows what about whom)—drove the evolution of the human brain. As a result, humans display unmatched abilities to communicate and cooperate with each other, which has allowed us to exploit environmental resources and outcompete other species to an unprecedented degree.
Consequently, humans are the only species that can achieve flexible cooperation at scale. Sure, bees may be able to cooperate in huge numbers on relatively simple tasks (exploiting food resources, moving nests), and dolphins and chimps can cooperate on more complicated pursuits with known others (patrolling territory, challenging dominance hierarchies), but only humans can cooperate with perfect strangers on highly complex tasks and at huge scales (trading securities on global stock exchanges, carrying out religious rituals at holy sites, attending an academic conference where 1,000 strangers sit quietly at close quarters to listen to a lecture—conditions no chimpanzee would tolerate). These advanced capacities to socialize and cooperate are key to all major human achievements and are crucial for sustaining human well-being, particularly in the face of major global challenges.
Sociality is key to flourishing
Most of us know the particular misery of loneliness, isolation, or social ostracism: hardly a state anybody would call “flourishing.” Loneliness can also affect our biology and longevity. Professor Steve Cole has demonstrated that people who feel lonely show increased expression of genes related to inflammation: the body essentially prepares itself to respond to injury because, evolutionarily, our social well-being was crucial to our safety and survival. When loneliness persists, it can have long-term health consequences on par with known carcinogens. Professors Julianne Holt-Lunstad and Timothy Smith estimate that loneliness harms health as much as smoking 15 cigarettes per day. And another study found that the least socially connected people tended to die more than two years earlier than the most connected, even when controlling for other factors.
Unfortunately, various personal, societal, and cultural forces have conspired to make this one of the most socially disconnected times in history. Social networks have shrunk: since 1985, the average participant in the General Social Survey has gone from having three social confidants to two, and more people than ever before (25 percent in 2004) report having no trusted relationships at all.
Perhaps surprisingly, in my own research and in nationally representative surveys, we find that elderly populations in the United States, on average, are not lonelier than younger groups. AARP surveys consistently show that, even controlling for other factors, adults become less lonely as they age past middle age, as measured by the UCLA Loneliness Scale. Specifically, 43 percent of adults aged 45 to 49 reported loneliness, compared with 25 percent of those over 70.
But on average, and particularly among the youngest adults, we are more socially disconnected than ever, due to a confluence of factors. On the sociocultural side, this might be because we tend to uproot from our hometown communities to attend college or seek work elsewhere; we change jobs more frequently, so we don’t prioritize workplace relationships; and economic necessity and cultural ideologies drive us to work so much that we don’t prioritize the process of building long-term relationships. John Cacioppo, a leading expert on loneliness, says that loneliness isn’t just “the physical absence of other people—it’s the sense that you’re not sharing anything that matters with anyone else. If you have lots of people around you—perhaps even a husband or wife, or a family, or a busy workplace—but you don’t share anything that matters with them, then you’ll still be lonely.”
On the technological side, we now spend massive amounts of our time in digital worlds both for work and recreation, which leaves a smaller slice of time for quality, in-person relationship building. And even when we are socializing online, it’s rarely high-quality social connection. Indeed, some research shows that online social interaction does not substitute for offline connection with respect to feelings of loneliness.
In sum, a lack of social connection is usually detrimental to our health, our happiness, and our ability to engage in any meaningful life pursuits—which is why psychologist Martin Seligman lists positive relationships as one of the five key elements of flourishing. Our social nature is evident in our biology, anatomy, language, and psychology, and in every culture.
So, could the social robots of tomorrow fulfill our need for healthy socializing, and thus for flourishing?
The role of social robots
Social robots (machines built with faces, voices, or bodies, intended to elicit social behaviors from humans, such as natural conversation) are already playing surprising roles in our lives. They can soothe our infants and educate our children. In the future, they will serve as our friends, romantic and sexual partners, and even our entertainers. And they will comfort and care for us as we grow old and die. It’s easy to imagine that advanced capabilities in these domains are just decades away at most.
On the hardware side, we will engineer ever more lifelike skin, musculature, and coordination of movement. Optical and gyroscopic sensors on real humans combined with machine learning will analyze natural human movement and instantiate it in robots. The software that powers the voice and personality of these robots will advance far beyond Siri and Alexa.
Ray Kurzweil and Nick Bostrom predict a future of artificial superintelligence (ASI)—well explained with cute amateur graphics by Tim Urban—in which machine intelligence will vastly exceed humanity’s in every major domain (as opposed to the narrow AI of today, in which machines can match or outperform us only in very narrow domains). (For the sake of focus over breadth, I will largely sidestep the possibilities and perils of ASI here; parts two and three of this series will deal more with the inevitable hard-wired integration between human brains and these robots, as well as dreams of human immortality.)
But let’s take these predictions at face value and perform a thought experiment regarding the distant future. Let’s assume that, at the very least, a humanoid robot with artificial intelligence is indistinguishable from a genuine human in its self-presentation: a fully convincing human simulacrum. Though it was manufactured, and though there’s no internal human anatomy, from all external perspectives it’s just another human. It can navigate social complexities as well as the best of us; it can be programmed to remember and forget; we can even engineer in human foibles and personality flaws for authenticity’s sake. It may even be conscious and have free will.
They may seem convincing, but . . .
Such robots would undoubtedly revolutionize society and have myriad beneficial effects on many individuals and groups. That said, I still think even the most convincing humanoid robot will fall short of fulfilling certain deep social functions that humans fulfill for each other, for the following reasons.
Such robots will lack a true human ontogeny. No parents conceived and gave birth to them. They didn’t grow and develop. They weren’t loved and cherished and worried over as they grew older, bigger, and more sophisticated. Even if the outcome were the same as a human adult (a thoroughly convincing simulacrum), knowing this lack of a human developmental trajectory would give us pause when we consider what to make of this thing in front of us.
Their status as “objects” separates them from humans, practically and philosophically. The “owner” of such a robot would be just that: somebody with legal and practical power superiority, which is not the relationship we have with fellow humans. There’s a lot to be worked out in the space of how to control our future superintelligent creations, but at the very least, responsible creators will do their best to ensure that humans retain the power to control or destroy their creations. (Although if we decide they are conscious, they would certainly be worthy at least of the rights we afford to other conscious animals, if not significantly more.)
This inherent inequality will eventually betray the artificiality of the relationship. This was played out very well in the plot of “Be Right Back,” an early episode of the sci-fi series Black Mirror: Following the unexpected death of her beloved partner, and deep in a vulnerable period of mourning, our protagonist is prompted by an online service to chat with a simulation of the deceased. Eventually she orders an entire simulacrum of her partner, and he is so convincing that she is lured into accepting him as her partner. The spell is broken when she realizes that she could very well order the robot to throw itself off a cliff, with the robot displaying emotional protest only if that’s what its human owner requests.
Robots don’t have the individual identity and frailty that characterize the human experience. They are manufactured products that can be reproduced in identical form thousands of times. If some accident destroys your robot, it is no more consequential than accidentally destroying a backed-up laptop. A fresh order to the factory, a re-upload of its software and memories, and it would be as if it had never been destroyed at all.
And even if we grant that they are conscious (a big “if”—see more on consciousness in part three of this series), we would wonder whether their experience of pain, without genuine human biology and physiology underneath it, is truly the same as ours. Though many people will be fooled, some will never accept deathbed consolation and companionship from an entity that was not born and that does not age or die, at least not in the way we accept it from a beloved human partner. We may fear death and contemplate what happens to us afterward, but we gain comfort in making this journey with our fellow, fragile human companions.
Shared acts of human drama
To return to human flourishing, I suggest that a deep aspect of our social nature lies in the shared acts of the great human drama that comprise our lives: birth, growth, frailty, and death. We are each unique, fragile, and mortal, and this shared human condition is essential to authentic human relationships. This condition is what makes our hearts go out to the families of victims of a disaster; it’s what moves us to forgiveness and compassion when we find out the coworker acting like a jerk just learned of his cancer diagnosis; it’s what makes childbirth one of the most miraculous and joyous moments in life; and it’s crucial to any genuine altruism and self-sacrificing love that we can give or receive.
These types of human experiences, arising through our sociality, put us most closely in touch with our most basic social nature and highest human capacities. None of them are possible with robots that don’t share these aspects of our nature.
Now, of course, we could also program robots to express the fears or ambivalences that we do, but knowing their artificial nature would expose those statements as inauthentic, making them ring hollow. It’s also true that if you didn’t know it were a robot, these objections would not hold weight from a practical perspective, though we might still object that it is morally wrong to deceive humans about the true nature of their robotic companions, regardless of the consequences. We might even create robots that only live once, that have a relationship to mortality similar to ours. But that would be by choice of human designers and engineers. Death, for such simulacra, would only ever be optional.
For these reasons, I think even the most perfect instantiations of artificial humans (if such beings are even possible) will fall short of fulfilling the deepest and most meaningful aspects of our social nature. We flourish most fully when we know and love authentic fellow humans, recognizing in each other both a uniqueness and a commonality that are distinctive to the human condition.
 Humphrey, N. The Inner Eye. (Oxford University Press, 2002).
 Cole, S. W. et al. Social regulation of gene expression in human leukocytes. Genome Biol. 8, R189 (2007).
 Holt-Lunstad, J. & Smith, T. B. Social Relationships and Mortality. Soc. Personal. Psychol. Compass 6, 41–53 (2012).
 Berkman, L. F. & Syme, S. L. Social networks, host resistance, and mortality: a nine-year follow-up study of Alameda County residents. Am. J. Epidemiol. 109, 186–204 (1979).
 McPherson, M., Smith-Lovin, L. & Brashears, M. E. Social Isolation in America: Changes in Core Discussion Networks over Two Decades. Am. Sociol. Rev. 71, 353–375 (2006).
 Crane, S. M. Social Connection, Loneliness, and Social Burdens as Experienced by Participants in the WELL for Life Project. (Stanford University, 2019).
 Thayer, C. & Anderson, G. O. Loneliness and social connections: A national survey of adults 45 and older. (AARP, 2018). doi:10.26419/res.00246.001
 Hari, J. Lost Connections: Why You’re Depressed and How to Find Hope. (Bloomsbury USA, 2018).
 Yao, M. Z. & Zhong, Z. Loneliness, social contacts and Internet addiction: A cross-lagged panel study. Comput. Human Behav. 30, 164–170 (2014).
 Seligman, M. E. P. Flourish: A Visionary New Understanding of Happiness and Well-being. (Free Press, 2011).
 Kurzweil, R. How to Create a Mind: The Secret of Human Thought Revealed. (Viking, 2012).
 Bostrom, N. Superintelligence: Paths, Dangers, Strategies. (Oxford University Press, 2014).
Steven Michael Crane (@stevenmcrane) is a researcher in Anthropology and Neurobiology at Stanford University working on the interdisciplinary project: The Boundaries of Humanity: Humans, Animals, and Machines in the Age of Biotechnology.