In 2010, John Brockman and the Edge Foundation held a conference entitled “The New Science of Morality.” I attended along with Roy Baumeister, Paul Bloom, Joshua D. Greene, Jonathan Haidt, Marc Hauser, Joshua Knobe, Elizabeth Phelps, and David Pizarro. Some of our conversations have now been published (along with many interesting essays) in a book entitled Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction.
John Brockman and Harper Collins have given me permission to reprint my edited remarks here.
What I intended to say today has been pushed around a little bit by what has already been said and by a couple of sidebar conversations. That is as it should be, no doubt. But if my remarks are less linear than you would hope, blame that—and the jet lag.
I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors:
The first project is to understand what people do in the name of “morality.” We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a “science of morality.” This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating.
For most scientists, this project seems to exhaust all the legitimate points of contact between science and morality—that is, between science and judgments of good and evil and right and wrong. But I think there are two other projects that we could concern ourselves with, which are arguably more important.
The second project would be to actually get clearer about what we mean, and should mean, by the term “morality,” understanding how it relates to human well-being altogether, and to use this new discipline to think more intelligently about how to maximize human well-being. Of course, philosophers may think that this begs some of the important questions, and I’ll get back to that. But I think this is a distinct project, and it’s not purely descriptive. It’s a normative project. The question is, how can we think about moral truth in the context of science?
The third project is a project of persuasion: How can we persuade all of the people who are committed to silly and harmful things in the name of “morality” to change their commitments and to lead better lives? I think that this third project is actually the most important project facing humanity at this point in time. It subsumes everything else we could care about—from arresting climate change, to stopping nuclear proliferation, to curing cancer, to saving the whales. Any effort that requires that we collectively get our priorities straight and marshal our time and resources would fall within the scope of this project. To build a viable global civilization we must begin to converge on the same economic, political, and environmental goals.
Obviously the project of moral persuasion is very difficult—but it strikes me as especially difficult if you can’t figure out in what sense anyone could ever be right or wrong about questions of morality or about questions of human values. Understanding right and wrong in universal terms is Project Two, and that’s what I’m focused on.
There are impediments to thinking about Project Two: the main one being that most right-thinking, well-educated, and well-intentioned people—certainly most scientists and public intellectuals, and I would guess, most journalists—have been convinced that something in the last 200 years of intellectual progress has made it impossible to actually speak about “moral truth.” Not because human experience is so difficult to study or the brain too complex, but because there is thought to be no intellectual basis from which to say that anyone is ever right or wrong about questions of good and evil.
My aim is to undermine this assumption, which is now the received opinion in science and philosophy. I think it is based on several fallacies and double standards and, frankly, on some bad philosophy. The first thing I should point out is that, apart from being untrue, this view has consequences.
In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said that it couldn’t be done—for this would be to merely foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematoria of Auschwitz still smoking.
But, of course, it has long been obvious that we need to converge, as a global civilization, in our beliefs about how we should treat one another. For this, we need some universal conception of right and wrong. So in addition to just not being true, I think skepticism about moral truth actually has consequences that we really should worry about.
Definitions matter. And in science we are always in the business of framing conversations and making definitions. There is nothing about this process that condemns us to epistemological relativism or that nullifies truth claims. We define “physics” as, loosely speaking, our best effort to understand the behavior of matter and energy in the universe. The discipline is defined with respect to the goal of understanding how matter behaves.
Of course, anyone is free to define “physics” in some other way. A Creationist physicist could come into this room and say, “Well, that’s not my definition of physics. My physics is designed to match the Book of Genesis.” But we are free to respond to such a person by saying, “You know, you really don’t belong at this conference. That’s not ‘physics’ as we are interested in it. You’re using the word differently. You’re not playing our language game.” Such a gesture of exclusion is both legitimate and necessary. The fact that the discourse of physics is not sufficient to silence such a person, the fact that he cannot be brought into our conversation and subdued on our terms, does not undermine physics as a domain of objective truth.
And yet, on the subject of morality, we seem to think that the possibility of differing opinions is a deal breaker. The fact that someone can come forward and say that his morality has nothing to do with human flourishing—that it depends upon following shariah law, for instance—the very fact that such a position can be articulated proves that there’s no such thing as moral truth. Morality, therefore, must be a human invention. But this is a fallacy.
We have an intuitive physics, but much of our intuitive physics is wrong with respect to the goal of understanding how matter and energy behave in this universe. I am saying that we also have an intuitive morality, and much of our intuitive morality may be wrong with respect to the goal of maximizing human flourishing—and with reference to the facts that govern the well-being of conscious creatures, generally.
So I will argue, briefly, that the only sphere of legitimate moral concern is the well-being of conscious creatures. I’ll say a few words in defense of this assertion, but I think the idea that it has to be defended is the product of several fallacies and double standards that we’re not noticing. I don’t know that I will have time to expose all of them, but I’ll mention a few.
Thus far, I’ve introduced two things: the concept of consciousness and the concept of well-being. I am claiming that consciousness is the only context in which we can talk about morality and human values. Why is consciousness not an arbitrary starting point? Well, what’s the alternative? Just imagine someone coming forward claiming to have some other source of value that has nothing to do with the actual or potential experience of conscious beings. Whatever this is, it must be something that cannot affect the experience of anything in the universe, in this life or in any other.
If you put this imagined source of value in a box, I think what you would have in that box would be—by definition—the least interesting thing in the universe. It would be—again, by definition—something that cannot be cared about. Any other source of value will have some relationship to the experience of conscious beings. So I don’t think consciousness is an arbitrary starting point. When we’re talking about right and wrong, and good and evil, and about outcomes that matter, we are necessarily talking about actual or potential changes in conscious experience.
I would further add that the concept of “well-being” captures everything we can care about in the moral sphere. The challenge is to have a definition of well-being that is truly open-ended and can absorb everything we care about. This is why I tend not to call myself a “consequentialist” or a “utilitarian,” because traditionally, these positions have bounded the notion of consequences in such a way as to make them seem very brittle and exclusive of other concerns—producing a kind of body count calculus that only someone with Asperger’s could adopt.
Consider the Trolley Problem: If there just is, in fact, a difference between pushing a person onto the tracks and flipping a switch—perhaps in terms of the emotional consequences of performing these actions—well, then this difference has to be taken into account. Or consider Peter Singer’s Shallow Pond problem: We all know that it would take a very different kind of person to walk past a child drowning in a shallow pond, out of concern for getting his suit wet, than it takes to ignore an appeal from UNICEF. It says much more about you if you can walk past that pond. If we were all this sort of person, there would be terrible ramifications as far as the eye can see. It seems to me, therefore, that the challenge is to get clear about what the actual consequences of an action are, about what changes in human experience are possible, and about which changes matter.
In thinking about a universal framework for morality, I now think in terms of what I call a “moral landscape.” Perhaps there is a place in hell for anyone who would repurpose a cliché in this way, but the phrase, “the moral landscape” actually captures what I’m after: I’m envisioning a space of peaks and valleys, where the peaks correspond to the heights of flourishing possible for any conscious system, and the valleys correspond to the deepest depths of misery.
To speak specifically of human beings for the moment: any change that can effect a change in human consciousness would lead to a translation across the moral landscape. So changes to our genome, and changes to our economic systems—and changes occurring on any level in between that can affect human well-being for good or for ill—would translate into movements within this space of possible human experience.
A few interesting things drop out of this model: Clearly, it is possible, or even likely, that there are many peaks on the moral landscape. To speak specifically of human communities: perhaps there is a way to maximize human flourishing in which we follow Peter Singer as far as we can go, and somehow train ourselves to be truly dispassionate to friends and family, without weighting our children’s welfare more than the welfare of other children, and perhaps there’s another peak where we remain biased toward our own children, within certain limits, while correcting for this bias by creating a social system which is, in fact, fair. Perhaps there are a thousand different ways to tune the variable of selfishness versus altruism, to land us on a peak on the moral landscape.
However, there will be many more ways to not be on a peak. And it is clearly possible to be wrong about how to move from our present position to the nearest available peak. This follows directly from the observation that whatever conscious experiences are possible for us are a product of the way the universe is. Our conscious experience arises out of the laws of nature, the states of our brain, and our entanglement with the world. Therefore, there are right and wrong answers to the question of how to maximize human flourishing in any moment.
This becomes incredibly easy to see when we imagine there being only two people on earth: we can call them Adam and Eve. Ask yourself, are there right and wrong answers to the question of how Adam and Eve might maximize their well-being? Clearly there are. Wrong answer number one: they can smash each other in the face with a large rock. This will not be the best strategy to maximize their well-being.
Of course, there are zero-sum games they could play. And yes, they could be psychopaths who might utterly fail to collaborate. But, clearly, the best responses to their circumstance will not be zero-sum. The prospects of their flourishing and finding deeper and more durable sources of satisfaction will only be exposed by some form of cooperation. And all the worries that people normally bring to these discussions—like deontological principles or a Rawlsian concern about fairness—can be considered in the context of our asking how Adam and Eve can navigate the space of possible experiences so as to find a genuine peak of human flourishing, regardless of whether it is the only peak. Once again, multiple, equivalent but incompatible peaks still allow for a realistic space in which there are right and wrong answers to moral questions.
One thing we must not get confused about is the difference between answers in practice and answers in principle. Needless to say, fully understanding the possible range of experiences available to Adam and Eve represents a fantastically complicated problem. And it gets more complicated when we add another 7 billion people to the experiment. But I would argue that it’s not a different problem; it just gets more complicated.
By analogy, consider economics: Is economics a science yet? Apparently not, judging from the last few years. Maybe economics will never get better than it is now. Perhaps we’ll be surprised every decade or so by something terrible, and we’ll be forced to concede that we’re blinded by the complexity of our situation. But to say that it is difficult or impossible to answer certain problems in practice does not even slightly suggest that there are no right and wrong answers to these problems in principle.
The complexity of economics would never tempt us to say that there are no right and wrong ways to design economic systems, or to respond to financial crises. Nobody will ever say that it’s a form of bigotry to criticize another country’s response to a banking failure. Just imagine how terrifying it would be if the smartest people around all more or less agreed that we had to be nonjudgmental about everyone’s view of economics and about every possible response to a global economic crisis.
And yet that is exactly where we stand as an intellectual community on the most important questions in human life. I don’t think you have enjoyed the life of the mind until you have witnessed a philosopher or scientist talking about the “contextual legitimacy” of the burka, or of female genital excision, or any of these other barbaric practices that we know cause needless human misery. We have convinced ourselves that somehow science is by definition a value-free space and that we can’t make value judgments about beliefs and practices that needlessly derail our attempts to build happy and sane societies.
The truth is, science is not value-free. Good science is the product of our valuing evidence, logical consistency, parsimony, and other intellectual virtues. And if you don’t value those things, you can’t participate in the scientific conversation. I’m saying we need not worry about the people who don’t value human flourishing, or who say they don’t. We need not listen to people who come to the table saying, “You know, we want to cut the heads off adulterers at half-time at our soccer games because we have a book dictated by the Creator of the universe which says we should.” In response, we are free to say, “Well, you appear to be confused about everything. Your ‘physics’ isn’t physics, and your ‘morality’ isn’t morality.” These are equivalent moves, intellectually speaking. They are born of the same entanglement with real facts about the way the universe is. In terms of morality, our conversation can proceed with reference to facts about the changing experiences of conscious creatures. It seems to me to be just as legitimate, scientifically, to define “morality” in this way as it is to define “physics” in terms of the behavior of matter and energy. But most people engaged in the scientific study of morality don’t seem to realize this.