A question to ponder: Should astronomers beam messages toward other possible civilizations with the hope of announcing our existence and, perhaps, initiating contact?
Depending on your perspective, the fate of the world could hang on the answer. But even raising the question opens the door to a fundamental question for human beings:
Are ethics universal?
The search for extraterrestrial intelligence (SETI) has been going on since Frank Drake first trained a giant radio telescope at nearby stars looking for signals from an alien civilization. But in the 60 years since Drake took that step, a number of astronomers (including Drake) have done more than just listen.
They have also sent messages to the stars.
While almost no one has argued with the passive act of just listening that makes up SETI, the topic of messaging extraterrestrial intelligences (called METI) has been fraught with heated disagreements.
METI proponents argue that sending targeted messages to cosmic locations is the best way to begin conversations with other civilizations. After all, if they don’t know we’re here, they won’t be inclined to send us any messages. Think of the benefits, says the pro-METI crowd, of having a conversation with a more advanced civilization than our own. We could learn about fantastic new medical technologies or machines that could take us to the stars.
The anti-METI crowd thinks this is a foolhardy and even dangerous idea. They counter that no one knows what other civilizations will be like in terms of their relations with neighbors. Perhaps all successful civilizations are essentially warlike. And if that’s the case, then by beaming out messages announcing our presence, we are essentially like sheep letting the wolves know our exact location.
The anti-METI crowd is also not moved by arguments claiming we’ve been sending out our location via leaked TV, radio, and radar signals for almost a century. They argue that the leaked radiation is too weak to be detected unless someone is looking right at us with very sensitive equipment. Sending a directed message with a powerful radio telescope, however, is like sending out a cosmic Bat-Signal.
The stuff of sci-fi
Science fiction has given us some compelling visions in support of the cautious “there may be wolves among the stars” anti-METI argument. Greg Bear’s fantastic 1987 novel The Forge of God tells the story of Earth being visited by seemingly friendly aliens who were, in fact, just a front for another species whose intent was to wipe out potential interstellar threats before they arose. By the end of the story, Earth has been entirely destroyed.
More recently, Chinese author Cixin Liu’s series The Three-Body Problem presented readers with a “Dark Forest” theory of interstellar sociology. According to Liu, since there is no way to know in advance whether other civilizations are hostile, the safest course of action is to remain hidden and pre-emptively destroy any civilization that makes itself visible.
When I reflect on the question of METI, I find myself inclined toward caution. I certainly agree with the anti-METI crowd that astronomers should try to come to some kind of agreement about these efforts before anyone just goes ahead and beams a “howdy” to the stars.
But what I find most interesting in this discussion is the central question about ethics and its cosmic manifestations. There has been a long tradition in thinking about exo-civilizations of equating advanced technology with advanced ethics.
Carl Sagan often took this perspective. If a species had developed enormous technological capacities, then its capacity for destruction was so great that it must have worked out ways to live peacefully, or it wouldn’t still exist. It’s a compelling and hopeful argument, but I never really bought it.
For me, it’s not a question of whether aliens will be peaceful or warlike. Instead, I’m more intrigued by the notion that this dichotomy won’t even appear in their thinking.
One of the most difficult aspects of thinking clearly about other civilizations in the universe is recognizing just how completely that thinking is constrained. We are creatures with a very specific culture and biology, bound to a very specific evolutionary history that emerged on one particular planet. While some basic aspects of that biology may be dictated by physics and chemistry, the further you go up the ladder of complexity, the less likely you are to find rules or laws that will be shared among different species on different worlds. And once you get to sociology and culture—the domain of ethics—looking for universals seems pretty hopeless.
So, our thinking about other civilizations and their ethics (warlike, peaceful, etc.) is constrained because there are no constraints. We can’t use the categories that seem so natural to us because it’s not clear that there’s anything natural (i.e., universal) about them.
That is why, with no ethics to guide my thinking about alien ethics, I would argue that an ounce of caution is worth a pound of destroyed planet.