Brain Science and Human Values

September 9, 2008

[Note: This is a response to Jonathan Haidt’s target article, What Makes People Vote Republican?]

The human brain is an engine of belief. Our minds continually consume, produce, and attempt to reconcile propositions about ourselves and the world that purport to be true: Iran is seeking to acquire nuclear weapons; human beings are contributing to global climate change; I actually look better with gray hair. What must a brain do to believe such propositions? This question marks the intersection of many fields: psychology, neuroscience, philosophy, economics, political science, and even jurisprudence. Understanding belief at the level of the brain is the main focus of my current research, using functional magnetic resonance imaging (fMRI).

Belief encompasses two domains that have been traditionally divided in our discourse. We believe propositions about facts, and these acts of cognition subsume almost every effort we make to get at the truth—in science, history, journalism, etc. But we also form beliefs about values: judgments about morality, meaning, personal goals, and life’s larger purpose. While they differ in certain respects, these types of belief share some important features.

Both types of belief make tacit claims about normativity: claims not merely about how we human beings think and behave, but about how we should think and behave. Factual beliefs like “water is two parts hydrogen and one part oxygen” and ethical beliefs like “cruelty is wrong” are not expressions of mere preference. To really believe a proposition (whether about facts or values) is also to believe that one has accepted it for legitimate reasons. It is, therefore, to believe that one is in compliance with a variety of norms (i.e., that one is sane, rational, not lying to oneself, not overly biased, etc.). When we really believe that something is factually true or morally good, we also believe that another person, similarly placed, should share our conviction.

Despite the remonstrations of people like Jonathan Haidt and Richard Shweder, science has long been in the values business. Scientific validity is not the result of scientists abstaining from making value judgments; it is the result of scientists making their best effort to value principles of reasoning that reliably link their beliefs to reality, through valid chains of evidence and argument. The answer to the question, “What should I believe, and why should I believe it?” is generally a scientific one: Believe a proposition because it is well supported by theory and evidence; believe it because it has been experimentally verified; believe it because a generation of smart people have tried their best to falsify it and failed; believe it because it is true (or seems so). This is a norm of cognition as well as the epistemic core of any scientific mission statement.

But what about meaning and morality? Here we appear to move from questions of truth—which have long been in the domain of science if they are to be found anywhere—to questions of goodness. How should we live? Is it wrong to lie? If so, why and in what sense? Which personal habits, uses of attention, modes of discourse, social institutions, economic systems, governments, etc. are most conducive to human well-being? It is widely imagined that science cannot even pose, much less answer, questions of this sort.

Jonathan Haidt appears to exult in this pessimism. He doubts that anyone can justifiably make strong, realistic claims about right and wrong, or good and evil, because he has observed that human beings tend to make moral judgments on the basis of emotion, justify these judgments with post hoc reasoning, and stick to their guns even when their post hoc reasoning demonstrably fails. As he says in one of his earlier papers, when asked to justify their emotional reactions to certain moral (and pseudo-moral) dilemmas, people are often “morally dumbfounded.” He reports that subjects often “stutter, laugh, and express surprise at their inability to find supporting reasons, yet they would not change their initial judgments…” But couldn’t the same be said of people’s failures to solve logical puzzles? I think it would be fair to say that the Monty Hall problem leaves many of its victims “logically dumbfounded.” Which is to say that even when a person gets the gist of why he should switch doors, he often cannot shake his initial intuition that each door represents a 50 percent chance of success. This reliable failure of human reasoning is just that—a failure of reasoning. It does not suggest that there isn’t a single correct answer to the Monty Hall problem. While it might seem the height of arrogance to say it, the people who actually understand the Monty Hall problem really do hold the “logical high ground.”
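The counterintuitive arithmetic of the Monty Hall problem is not a matter of taste: it can be checked directly. As a minimal illustration (a simulation sketch of my own, not part of the original essay), the two strategies can be compared over many trials:

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running `monty_hall(True)` converges on a win rate of about 2/3, while `monty_hall(False)` converges on about 1/3: the intuition that each remaining door offers a 50 percent chance is simply, demonstrably wrong.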

As a counterpoint to the prevailing liberal opinion that morality is a system of “prescriptive judgments of justice, rights, and welfare pertaining to how people ought to relate to each other,” Haidt asks us to ponder mysteries of the following sort: “But if morality is about how we treat each other, then why did so many ancient texts devote so much space to rules about menstruation, who can eat what, and who can have sex with whom?” Interesting question. Are these the same ancient texts that view slavery as morally unproblematic? It would seem so. Perhaps slavery has no moral implications after all—could Abolition have been just another instance of liberal bias?—otherwise, surely these ancient texts would have something of substance to say about it. Or, following Haidt’s initial logic, why not ask, “if physics is just a system of laws which explains the structure of the universe in terms of mass and energy, why do so many ancient texts devote so much space to immaterial influences and miraculous acts of God?” Why indeed.

Haidt is, of course, right to worry that liberals may not always “hold the moral high ground.” In a recent study of moral reasoning, subjects were asked to judge whether it was morally correct to sacrifice the life of one person to save one hundred, while being given subtle clues as to the races of the people involved. Conservatives proved less biased by race than liberals and, therefore, more even-handed. It turns out that liberals were very eager to sacrifice a white person to save one hundred non-whites, but not the other way around, all the while maintaining that considerations of race had not entered into their thinking. Observations of this sort are useful in revealing the biasing effect of ideology—even the ideology of fairness.

Haidt often writes, however, as if there were no such thing as moral high ground. At the very least, he seems to believe that science will never be able to judge higher from lower. He admonishes us to get it into our thick heads that many of our neighbors “honestly prefer the Republican vision of a moral order to the one offered by Democrats.” Yes, and many of them honestly prefer the Republican vision of cosmology, wherein it is still permissible to believe that the big bang occurred less than ten thousand years ago. These same people tend to prefer Republican doubts about biological evolution and climate change. There are names for this type of “preference,” one of the more polite being “ignorance.” What scientific purpose is served by avoiding this word at all costs?

Haidt appears to consider it an intellectual virtue to adopt, uncritically, the moral categories of his subjects. But where is it written that everything that people do or decide in the name of “morality” deserves to be considered part of its subject matter? A majority of Americans believe that the Bible provides an accurate account of the ancient world (as well as accurate prophecies of the future). Many millions of Americans also believe that a principal cause of cancer is “repressed anger.” Happily, we do not allow these opinions to anchor us when it comes time to have serious discussions about history and oncology.

Much of humanity is clearly wrong about morality—just as much of humanity is wrong about physics, biology, history, and everything else worth understanding. If, as I believe, morality is a system of thinking about (and maximizing) the well-being of conscious creatures like ourselves, many people’s moral concerns are frankly immoral.

Does forcing women and girls to wear burqas make a positive contribution to human well-being? Does it make happier boys and girls? More compassionate men? More confident and contented women? Does it make for better relationships between men and women, between boys and their mothers, or between girls and their fathers? I would bet my life that the answer to each of these questions is “no.” So, I think, would many scientists. And yet, most scientists have been trained to think that such judgments are mere expressions of cultural bias. Very few of us seem willing to admit that simple, moral truths increasingly fall within the purview of our scientific worldview. I am confident that this period of reticence will soon come to an end.

Unless human well-being is perfectly random, or equally compatible with any events in the world or state of the brain, there will be scientific truths to be known about it. These truths will, inevitably, force us to draw clear distinctions between ways of thinking and living, judging some to be better or worse, more or less true to the facts, and more or less moral.

Of course, questions of human well-being run deeper than any explicit code of morality. Morality—in terms of consciously held precepts, social contracts, notions of justice, etc.—is a relatively recent invention. Such conventions require, at a minimum, language and a willingness to cooperate with strangers, and this takes us a stride or two beyond the Hobbesian “state of nature.” But prior to the emergence of explicit notions of right and wrong, the concept of well-being still applies. Whatever behaviors served to mitigate the internecine misery of our ancestors would fall within the scope of this analysis. To simplify matters enormously: (1) genetic changes in the brain gave rise to social emotions, moral intuitions, and language… (2) which produced increasingly complex cooperative behavior, the keeping of promises, concern about one’s reputation, etc… (3) which became the basis for cultural norms, laws, and social institutions whose purpose has been to render this growing system of cooperation durable in the face of countervailing forces.

Some version of this progression has occurred in our case, and each step represents an undeniable enhancement of our personal and collective well-being. Of course, catastrophic regressions are always possible. We could, either by design or negligence, employ the hard-won fruits of civilization, and the emotional and social leverage of millennia of biological and cultural evolution, to immiserate ourselves more fully than unaided Nature ever could. Imagine a global North Korea, where the better part of a starving humanity serves as slaves to a lunatic with bouffant hair: this might, in fact, be worse than a world filled merely with warring Australopithecines. What would “worse” mean in this context? Just what our (liberal?) intuitions suggest: more painful, less fulfilling, more conducive to fear and despair, etc. While it will never be feasible to compare such counterfactual states of the world, that does not mean that there are no experiential facts of the matter to be compared.

Haidt is, of course, right to notice that emotions have primacy in many respects—and the way in which feeling drives judgment is surely worthy of study. It does not follow, however, that there are no right and wrong answers to questions of morality. Just as people are often less than rational when claiming to be rational, they are often less than moral when claiming to be moral. We know from many lines of converging research that our feeling of reasoning objectively, in concordance with compelling evidence, is often an illusion. This is especially obvious in split-brain research, when the left hemisphere’s “interpreter” finds itself sequestered, and can be enticed to simply confabulate by way of accounting for right-hemisphere behavior. This does not mean, however, that dispassionate reasoning, scrupulous attention to evidence, and awareness of the ever-present possibility of self-deception are not cognitive skills that human beings can acquire. And there is no reason to expect that all cultures and sub-cultures value these skills equally.

If there are objective truths about human well-being—if kindness, for instance, is generally more conducive to happiness than cruelty is—then there seems little doubt that science will one day be able to make strong and precise claims about which of our behaviors and uses of attention are morally good, which are neutral, and which are bad. At a time when only 28 percent of Americans will admit the truth of evolution, while 58 percent imagine that a belief in God is necessary for morality, it is a truism to say that our culture is not prepared to think critically about the changes to come.
