In this episode of the Waking Up Podcast, Sam Harris talks about atheism, artificial intelligence, rape, public speaking, meditation, consciousness, free will, intellectual honesty, and other topics.

Audio transcript:

Welcome to the Waking Up podcast. For today’s episode, I’ve decided to do an “Ask Me Anything” podcast. I solicited questions on Twitter, and got some hundreds of them, so I’ll do my best to answer as many as I can over the next hour or so. These will, by definition, not be on a theme.

How does the struggle of atheists for acceptance compare with that of women, blacks, gays, etc.? How long until true equality arrives?

Well, I’m not sure I would want to draw any strict analogy between the civil rights struggles of blacks, gays, and women and that of atheists. While atheism as a political identity is more or less a nonstarter in American politics—which is to say that you cannot have a political career, or at least no reasonable expectation of one, while being out of the closet as an atheist—atheists are, nevertheless, disproportionately well-educated, well-off financially, and powerful. Far more than 5% of the people you meet in journalism, academia, or Silicon Valley are atheists. This is just my impression. I don’t know of any scientific polling that’s been done on this question apart from among scientists, where the vast majority are nonbelievers, and the proportion of nonbelievers only increases among the most successful and influential scientists. But I’m reasonably confident that when you’re in the company of wealthy, connected, powerful people—internet billionaires, movie stars, and the people who are running major financial and academic institutions—you are disproportionately in the presence of atheists.

While I’m as eager as anyone to see atheism get its due—or rather, to see reason and common sense get their due—in our political discourse, I don’t think it’s fair to say that atheists have the same kind of civil rights problem that blacks, gays, and women have had in our society.

Of course, in the Muslim world, things change entirely, because to be exposed as an atheist is in many places to live under a death sentence. That’s a problem that the civilized world has to address.

What is your view on laws that prevent people from refusing to hire on the basis of religion?

Well, here, I’m sure I’m going to stumble into yet another controversy. I tend to take a libertarian view of questions of this kind. I think people should be free to embarrass themselves publicly, to destroy their reputations, to be boycotted—so if you want to open a restaurant that serves only redheaded people, I think you should be free to do that. If you only want to serve people who are six feet tall, you should be able to do that. And, by extension, if you only want to serve Muslims, or whites, or Jews, if you want a club that excludes everyone but yourself, I think you should be free to do all of these things, and people should be free to write about you, to picket in front of your store, or clubhouse, or restaurant.

I think law is too blunt an instrument to correct this kind of bias—and this is not to disregard all of the gains we’ve made for civil rights based on the laws. At this point, I think we should probably handle these things through conversation and reputation management rather than legislate who businesses have to hire or serve. If the social attitudes of a business are egregious and truly out of step with those of the community, then it will pay the price. It’s only because 50 years ago, the attitudes of the community were so unenlightened that we needed rather heavy-handed laws to ram through a sane and compassionate social agenda. Some might argue that we’re still in that situation. I think we are less so by the hour and, at a certain point, I think the law is the wrong mechanism to enforce positive social attitudes.

Of course, my enemies will summarize this as “Sam Harris thinks that it should be legal to discriminate against blacks, and gays, and women.”

Can you say something about Artificial Intelligence (AI) and your concerns about it?

This is a very interesting topic. The question of how to build artificial intelligence that isn’t going to destroy us is something that I’ve only begun to pay attention to, and it is a deep and consequential problem.

I went to a conference in Puerto Rico focused on this issue organized by the Future of Life Institute. I was brought there by a friend, Elon Musk—who, no doubt, many of you have heard of. Elon had recently said that he thought AI was the greatest threat to human survival, perhaps greater than nuclear weapons. Many people considered that an incredibly hyperbolic statement.

Knowing Elon, and knowing how close to the details he’s apt to be, I took it as a very interesting diagnosis of a problem, but I wasn’t quite sure what I thought about it, because I hadn’t spent much time focusing on the progress we’ve been making in AI or its implications.

So I went to this conference in San Juan, held by and for the people who are closest to doing this work. It was not open to the public—I think I was maybe one of two or three interlopers there who hadn’t been invited. What was fascinating was that this was a collection of people who ranged from the very worried, like Elon and others who felt that we had to find some way to pull the brakes, to those who were doing the work most energetically and wanted to convince the others not to worry about having to pull the brakes. What I had heard outside this conference—what you hear on, say, Edge.org or in general discussions about the prospects of making real breakthroughs in artificial intelligence—was a timeframe of 50-100 years before anything terribly scary or terribly interesting is going to happen. At this conference, that was almost never the case. Everyone who was trying to assure the others that they were doing this research as safely as possible was still conceding that a timeframe of 5-10 years admitted of rather alarming progress.

When I came back from that conference, the Edge question for 2015 just happened to be on the topic of AI, so I wrote a short piece distilling my view. Perhaps I’ll just read that; it won’t take too long and hopefully won’t bore you:

It seems increasingly likely that we will one day build machines that possess superhuman intelligence. We need only continue to produce better computers—which we will, unless we destroy ourselves or meet our end some other way. We already know that it is possible for mere matter to acquire “general intelligence”—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.

It is often said that the near-term goal is to build a machine that possesses “human level” intelligence. But unless we specifically emulate a human brain—with all its limitations—this is a false goal. The computer on which I am writing these words already possesses superhuman powers of memory and calculation. It also has potential access to most of the world’s information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence (AGI) will exceed human performance on every task for which it is considered a source of “intelligence” in the first place. Whether such a machine would necessarily be conscious is an open question. But conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now the subject of much colorful speculation.

So, just to make things perfectly clear here: All you have to grant to get your fears up and running is that we will continue to make progress in hardware and software design (unless we destroy ourselves some other way) and that there’s nothing magical about the wetware we have running inside of our heads (that is, an intelligent machine could be built of other material). Once you grant those two things, which I think everyone who has thought about the problem will grant—I actually can’t imagine a scientist not granting that 1) we’re going to make progress in computer design, unless something terrible happens and 2) that there’s nothing magical about biological material where intelligence is concerned—once you’ve granted those two propositions, you’ll now be hard-pressed to find some handhold with which to resist your slide into real concern about where this is all going.

So, back to the text:

One way of glimpsing the coming risk is to imagine what might happen if we accomplished our aims and built a superhuman AGI that behaved exactly as intended. Such a machine would quickly free us from drudgery and even from the inconvenience of doing most intellectual work. What would follow under our current political order? There is no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance. Once we built the perfect labor-saving device, the cost of manufacturing new devices would approach the cost of raw materials. Absent a willingness to immediately put this new capital at the service of all humanity, a few of us would enjoy unimaginable wealth, and the rest would be free to starve. Even in the presence of a truly benign AGI, we could find ourselves slipping back to a state of nature, policed by drones.

And what would the Russians or the Chinese do if they learned that some company in Silicon Valley was about to develop a superintelligent AGI? This machine would, by definition, be capable of waging war—terrestrial and cyber—with unprecedented power. How would our adversaries behave on the brink of such a winner-take-all scenario? Mere rumors of an AGI might cause our species to go berserk.

It is sobering to admit that chaos seems a probable outcome even in the best-case scenario, in which the AGI remained perfectly obedient. But of course we cannot assume the best-case scenario. In fact, “the control problem”—the solution to which would guarantee obedience in any advanced AGI—appears quite difficult to solve.

Imagine, for instance, that we build a computer that is no more intelligent than the average team of researchers at Stanford or MIT—but, because it functions on a digital timescale, it runs a million times faster than the minds that built it. Set it humming for a week, and it would perform 20,000 years of human-level intellectual work. What are the chances that such an entity would remain content to take direction from us? And how could we confidently predict the thoughts and actions of an autonomous agent that sees more deeply into the past, present, and future than we do?

The fact that we seem to be hastening toward some sort of digital apocalypse poses several intellectual and ethical challenges. For instance, in order to have any hope that a superintelligent AGI would have values commensurate with our own, we would have to instill those values in it (or otherwise get it to emulate us). But whose values should count? Should everyone get a vote in creating the utility function of our new colossus? If nothing else, the invention of an AGI would force us to resolve some very old (and boring) arguments in moral philosophy.


Perhaps I don’t need to spell this out any further, but it’s interesting that, once you imagine having to build values into a superintelligent AGI, you then realize that you need to get straight about what you think is good. I think the advent of this technology would cut through moral relativism like a laser: Who is going to want to engineer into this thing the values of theocracy or traditional religious authoritarianism? Do you want to build homophobia and intolerance toward free speech into a machine that makes tens of thousands of years of human level intellectual progress every week? I don’t think so.

Even designing self-driving cars presents ethical problems that we need to get straight about. Any self-driving car needs some algorithm with which to rank order bad outcomes. If you want a car that will avoid a child who dashes into the road in front of it, perhaps by driving up onto the sidewalk, you also want a car that will avoid the people on the sidewalk—or that will preferentially hit a mailbox instead of a baby carriage.

So, you need some intelligent sorting of outcomes here. These are moral decisions.
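To make the idea concrete, here is a purely illustrative sketch of what such a rank-ordering might look like. The outcome labels and cost values are invented for illustration and are not drawn from any real autonomous-vehicle system—the point is only that someone must choose the numbers, and that choice is a moral decision.

```python
# Hypothetical sketch of "rank order bad outcomes."
# All outcome labels and costs are invented; lower cost = less bad.
OUTCOME_COST = {
    "hit_mailbox": 1,
    "hit_empty_sidewalk_curb": 2,
    "hit_baby_carriage": 100,
    "hit_pedestrian": 100,
}

def choose_maneuver(available_outcomes):
    """Pick the maneuver whose outcome has the lowest assigned cost."""
    return min(available_outcomes, key=lambda o: OUTCOME_COST[o])

# A car forced to choose between a baby carriage and a mailbox
# should prefer the mailbox under this cost table:
print(choose_maneuver(["hit_baby_carriage", "hit_mailbox"]))  # hit_mailbox
```

Notice that the table above already encodes ethical commitments—for instance, that a pedestrian and a baby carriage are weighted equally, and that property always loses to people. Whoever writes that table has answered a moral question, whether they meant to or not.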

Do you want a car that is unbiased with respect to the age and size of people, or the color of their skin? Would you like a car that was more likely to run over white people than people of color? That might seem like a peculiar question, but if you run psychological tests—trolley-problem tests—on liberals, this is the one psychological experiment I’m aware of where liberals reliably come out looking worse than conservatives. If you test them on whether they’d be willing to sacrifice one life to save five, or one life to save a hundred, and you give subtle clues as to the color of the people involved—if you say that LeBron belongs to the Harlem Boys’ Choir, and there’s some scenario under which he can be sacrificed to save Chip and his friends who study music at Juilliard—they simply won’t take a consequentialist approach to the problem. They will not sacrifice a black life to save any number of white lives. Whereas, if you reverse the variables, they will sacrifice a white life to save black lives all day long. Conservatives, strangely, are unbiased in this paradigm—which is to say, colorblind.

Do we like bias here? Do you want a self-driving car that preferentially avoids people of color? You have to decide. You must build it one way or the other.

So this is an interesting phenomenon: Technology is going to force us to admit to ourselves that we know right from wrong in a way that many people imagine isn’t possible.

Okay, back to the text:

However, a true AGI would probably acquire new values, or at least develop novel—and perhaps dangerous—near-term goals. What steps might a superintelligence take to ensure its continued survival or access to computational resources? Whether the behavior of such a machine would remain compatible with human flourishing might be the most important question our species ever asks.

The problem, however, is that only a few of us seem to be in a position to think this question through. Indeed, the moment of truth might arrive amid circumstances that are disconcertingly informal and inauspicious: Picture ten young men in a room—several of them with undiagnosed Asperger’s—drinking Red Bull and wondering whether to flip a switch. Should any single company or research group be able to decide the fate of humanity? The question nearly answers itself.

And yet it is beginning to seem likely that some small number of smart people will one day roll these dice. And the temptation will be understandable. We confront problems—Alzheimer’s disease, climate change, economic instability—for which superhuman intelligence could offer a solution. In fact, the only thing nearly as scary as building an AGI is the prospect of not building one. Nevertheless, those who are closest to doing this work have the greatest responsibility to anticipate its dangers. Yes, other fields pose extraordinary risks—but the difference between AGI and something like synthetic biology is that, in the latter, the most dangerous innovations (such as germline mutation) are not the most tempting, commercially or ethically. With AGI the most powerful methods (such as recursive self-improvement) are precisely those that entail the most risk.

We seem to be in the process of building a God. Now would be a good time to wonder whether it will (or even can) be a good one.

 

I guess I should probably explain this final notion of “recursive self-improvement.” The idea is that, once you build an AGI that is superhuman, the way it will truly take off is if it is given, or develops, an ability to improve its own code. Just imagine something that could make centuries of human-level intellectual progress in minutes—improving itself. Not only learning more, but learning more about how to learn, and improving its ability to learn. Then you have an exponential takeoff function where this thing stands in relation to us intellectually the way we stand to chickens, and sea urchins, and snails.
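The arithmetic behind that takeoff is worth seeing. Here is a toy sketch—the rates and the improvement factor are arbitrary assumptions, not a model of any real system—showing why a system that improves its own rate of improvement compounds exponentially, while a fixed-rate system only accumulates linearly:

```python
# Toy illustration of why self-improvement compounds.
# All rates are invented; only the shape of the curves matters.

def fixed_rate_progress(cycles, rate=1.0):
    """Progress made by a system whose capability never improves."""
    return rate * cycles

def self_improving_progress(cycles, rate=1.0, improvement=0.5):
    """Progress made by a system that also improves the improver each cycle."""
    total = 0.0
    for _ in range(cycles):
        total += rate
        rate *= 1 + improvement  # each cycle raises the rate for the next
    return total

print(fixed_rate_progress(20))        # 20.0 units of progress
print(self_improving_progress(20))    # thousands of units from the same cycles
```

After 20 cycles the fixed-rate system has made 20 units of progress; the self-improving one, with even a modest 50% gain per cycle, has made several thousand. That gap between linear and exponential is the whole worry.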

Now this might sound like a crazy thing to worry about—but it isn’t. The only assumptions are that we will continue to make progress and that there’s nothing magical about biological substrate where intelligence is concerned. Again, I’m agnostic as to whether or not such a machine would, by definition, be “conscious.” Let’s assume it’s not conscious—so what, we’re still talking about something that will have the functional power of a god, whether or not the lights are on.

Perhaps you all got more than you wanted from me on that topic…

I like you, but as an atheist, I find statism to be a dangerous form of religion and I won’t paint a billion people as barbarians.

Okay, well there are two axes to grind there.

This whole business about “statism” I find profoundly uninteresting. This is a separate conversation about the problems of U.S. foreign policy, the problems of bureaucracy, the problems of the tyranny of the majority, or the tyranny of empowered minorities (oligarchy)—these are all topics worth thinking about. But to compare a powerful state per se with the problem of religion is to make a hash of everything that’s important to talk about here. And the idea that we could do without a powerful state at this point is just preposterous.

If you’re an anarchist, you’re either fifty or a hundred years before your time (notwithstanding what I just said about artificial intelligence), or you’re an imbecile. We need the police, we need the fire department, we need people to pave our roads. We can’t privatize all that stuff, and privatizing it would beget its own problems.

So, whenever I hear someone say, “You worship the religion of the State,” I know I’m in the presence of someone who isn’t ready for a conversation about religion, and isn’t ready to talk about the degree to which we rely, and are wise to rely, on the powers of a well-functioning government. In so far as our government doesn’t function well, then we have to change it. We have to resist its overreach into our lives. But behind this concern about statism is always some confusion about the problem of religion.

This person ends his almost-question with “I won’t paint a billion people as barbarians.” Well, neither will I. Again, when I criticize Islam, I’m criticizing the doctrine of Islam—and in so far as people adhere to it to the letter, then I get worried.

There will be much more on this topic when I publish my book with Maajid Nawaz. I originally said that was happening in June, but that’s unfortunately been pushed back to October, as it’s still hard to publish a physical book, apparently. You’ll have your fill of our thoughts about how to reform Islam when that comes out.

What do you think of the recent attacks on you and Ayaan from Cenk Uygur of “The Young Turks”?

Well, I guess I’ve ceased to think about it. I pushed back briefly, saying on Twitter that my three hours with Cenk had obviously been a waste of time. It appears to have been a waste of time, at least for him. I think many people got some benefit from listening to us go round and round and get wrapped around the same axle for three hours.

Actually, it wasn’t entirely a waste of time for him. I heard from a former employee there that it was literally the most profitable interview they’ve ever put on the show. I don’t know what he made on that interview, and I don’t begrudge him making money on his show, obviously, but I feel that Cenk now systematically acts in bad faith on this topic. He has made no effort to accurately represent my views. Again, it’s child’s play to pick a single sentence from something I’ve said or written and to hew to a misinterpretation of that sentence and attack me.

I think the thing I’ve finally realized here—and this is not just a problem with Cenk; it’s a problem with all the usual suspects and all of their followers on Twitter, and I’ve just reluctantly begun to accept it—is that when someone hates you, they take so much pleasure from hating you that it’s impossible to correct a misunderstanding, because that would force your opponent to relinquish some of the pleasure he’s taking in hating you. This is an attitude I think we’re all familiar with to some degree: once you’re convinced that someone is a total asshole, you’ve lost any sense that you should give him the benefit of the doubt. Then, when you see one more transgression—another thing that confirms whatever attitude in him you hate, whether he’s homophobic, or racist, or doesn’t believe in climate change—you’re not inclined to second-guess it; you’re not inclined to read between the lines. In fact, if someone shows you that this new offense isn’t what it seemed, you can be slow to admit that.

This is not totally foreign to me, I notice this in myself, and it’s something that I do my best to shed. I think it’s an extremely unflattering quality of mind—this is not where I want to be caught standing. But my opponents seem to be always standing there, and that makes conversation impossible.

How did you become such a good public speaker? I have a speech class this Fall and I’m sick about it.

Well, I certainly wouldn’t claim that I’m such a good public speaker—I think, at best, I’m an adequate one.

As I wrote on my blog a couple of years ago in an article entitled “The Silent Crowd,” I really did have a problem with this—I was really terrified to speak publicly early in life and overcame it rather quickly just by doing it. Meditation was helpful, but meditation is insufficient for this sort of thing. You have to do the thing you’re afraid of; you can’t just get yourself into some kind of position of confidence beforehand and then hope to do it without any anxiety. You have to be willing to actually feel the anxiety—and what is anxiety? It is just a sensation of energy in the body. It has no content, really. It has no philosophical content. It need not have any psychological content. It’s like indigestion—you wouldn’t read a pattern of painful sensation in your abdomen after a bad meal and imagine that it says something negative about you as a person. This is a negative experience that is peripheral to your identity, but something about anxiety suggests that it lands more at the core of who we are—you’re a fearful person. But you need not have this relationship with anxiety. Anxiety is a hormonal cascade that you can be willing to feel and even become interested in, and it need not be the impediment to doing the thing that you’re anxious about doing. Not at all.

I go into this in more detail on my blog, but this is something to just get over. It’s worth pushing past this and not caring whether you appear anxious while doing it. Just do your thing, and you’ll eventually realize that you can do it happily. Some people are natural speakers, they’re natural performers, this is what they’re comfortable doing: they love to do it; they’re loose; they have access to the full bandwidth of their personality in that space. I’m not that way, it doesn’t come naturally to me, but I’m happy that I’ve fooled at least you. 

If you think I’m a good public speaker, that suggests that I have something interesting to say. If you pay close attention, you’ll see that I just kind of drone on in a monotone and my lack of style is, to some degree, a necessity because I want to approach public speaking very much as a conversation. I get uncomfortable anytime my pattern of speech departs too much from what it would be in a conversation with one person at a dinner table. Now, if you’re standing in front of a thousand people, it’s going to depart somewhat—it’s just the nature of the situation—but I try to be as conversational as possible, and when I’m not, or when someone else isn’t, it begins to strike me as dishonest.

Yet, I will grant you that the performance aspect of public speaking allows for what many people consider the best examples of oratory. If you listen to Martin Luther King Jr., he is so far from a natural speech pattern—it is pure performance. Just imagine sitting at a dinner party across from someone who was speaking to you the way MLK spoke in his speeches. You would know that you were in the presence of a madman. It would be intolerable. It would be terrifying. So that distance between what is normal in conversation and what is dramaturgical in a public speech—I don’t want to traverse that too far. I’m not comfortable doing it, and I actually tend to find it suspect as a member of the audience.

What is really entailed in Dzogchen meditation? Is it the loss of the ‘I’ that is the self, or does it go beyond that?

Well, traditionally speaking, it goes beyond that in certain ways, but I think the core point is what is called non-dual awareness: to lose the sense of subject-object awareness in the present moment, and to just rest as open, centerless consciousness and to fully relax into whatever is arising without hope and fear, without praise and blame, without grasping at the pleasant or pushing away the unpleasant. It’s a kind of mindfulness, but it’s mindfulness of there being nothing at all to grasp at as “self.” Selflessness is the core insight—but they don’t tend to talk about selflessness, they talk about nonduality.

Any suggestions or advice if I want to do a two-year silent meditation retreat?

Don’t do it by yourself—you really need guidance if you’re going to go on a retreat of any significant length. Find a meditation center where they’re doing a practice that you really want to do. Find a teacher you really admire and who you trust, and then follow his or her instructions.

A couple more questions about meditation:

Why do we do it sitting up? If having a straight back is so valuable, why not do it lying down?

Well, you can do it lying down, it’s just harder. We’re so deeply conditioned to fall asleep lying down that most people find that meditation is just a precursor to a nap in that case. But it can be a very nice nap.

If you’re injured or just tired of sitting, then lying down is certainly a reasonable thing to attempt, but most people find that it is harder to stay awake and people often have a problem with sleepiness even while sitting up. So that’s the reason.

I haven’t read any of your books, but want to soon. Does your view that there’s no free will give you sympathy for your enemies?

Yes, it does. I’ve talked about this a little bit—it is an antidote to hatred. I have a long list of people whom I really would hate if I thought they could behave differently than they do. Now, occasionally, I’m taken in by the illusion that they could and should be behaving differently, but when I have my wits about me, I realize that I’m dealing with people who are going to do what they’re going to do, that my efforts to talk sense into them are going to be as ineffectual as they will be, and that there’s really no place to stand from which this was going to be other than what it is. So it really is an antidote to hating some of the dangerously deluded and impossibly smug people I have the misfortune of colliding with on a regular basis.

Can the form of human consciousness be distinguished from its contents or are the two identical?

That’s an interesting question. In so far as I understand it—there are a couple of different ways I can interpret what you said there—I think human consciousness clearly has a form that is both conscious and unconscious. When you’re talking about the contents of consciousness, you’re talking about what is actually appearing before the light of consciousness—that is, what is available to attention in each moment, what can be noticed. But there’s much that can’t be noticed which is structuring what can.

So the contents are dependent upon unconscious processes which are noticeably human, in that the contents they deliver are human. For instance, an example I often cite is our ability to understand and produce language—the ability to follow grammatical rules, to notice when they’re broken. All of these processes are unconscious, and yet this is not something that dogs do; it’s not something that chimps do; we’re the only species we know of that does it, and all of this gets tuned in a very particular way in each person’s case. For instance, I’m totally insensitive to the grammatical rules of Japanese. When Japanese is spoken in my presence, I don’t hear much of anything linguistic.

So the difference between being an effortless parser of meaning and syntax in English, and being little better than a chimpanzee in the presence of Japanese—that difference is, again, unconscious, yet it determines the contents of consciousness. So there are both conscious and unconscious ways in which consciousness in our case is demonstrably human, and I don’t really think you can talk about the “humanness” of consciousness beyond that, because, to me, consciousness is simply the fact that it is like something to have an experience of the world—the fact that there’s a qualitative character to anything. That’s consciousness. And if our computers ever acquire that, well, then our computers will be conscious.

What’s your opinion of the rise of the new Nationalist Right in Europe and the issue of Islam there?

There’s a very unhappy linkage there: the Nationalist Right has an agenda beyond resisting the immigration of Muslims, but clearly we have a kind of fascism playing both sides of the board there, and that’s a very unhappy situation—and a recipe for disaster, ultimately.

I think the problem of Islam in Europe is of deep concern now—probably especially so in France, although it’s bad in many countries. You have a level of radicalization and disinclination to assimilate on the part of far too many people, and it’s a problem unlike the situation in the United States for reasons that are purely a matter of historical accident. But I think it’s a cause of great concern, and, as I said in that article on fascism, it is of double concern that liberals are sleepwalking on this issue and that to even express a concern about Islam in Europe gets you branded as a “right-winger” or a “nationalist” or a “xenophobe,” because these are the only people who have been articulating the problem until now, with a few notable exceptions like Ayaan Hirsi Ali, Douglas Murray, and Maajid Nawaz whom I’ve mentioned a lot recently. So it’s not all fascists who are talking about the problem of jihadism and Islamism in Europe, but for the most part, liberals have been totally out to lunch on this topic, and one wonders what it will take for them to come round.

Lots of questions here—apologies for getting to but the smallest fraction of them; there appear to be hundreds….

What charity organization do you think is doing the best work?

There are two charities that I frequently give money to: Doctors Without Borders and St. Jude Children’s Research Hospital. Both do amazing work for which there really is no substitute. When people use any of the affiliate links on my website—say, you see a blog post where I’m interviewing an author and I link to his book—if you buy his book or anything else on Amazon through that affiliate link, then 50% of the royalty goes to charity, generally to Doctors Without Borders or St. Jude.

When you’re helping people in refugee camps in Africa, or close to the site of a famine, natural disaster, or civil war, or you’re doing pioneering research on pediatric cancer and never turning any child away at your hospital for want of funds, it’s hard to see a better allocation of money than either of those two projects.

I reject religion entirely but I’m curious how you know with complete certainty that there is no God. What proof do you have?

Well, this has the burden of proof reversed. It’s not that I have proof that there is no God—I can’t prove that there is no Apollo, or Zeus, or Isis, or Shiva. These are all gods who might exist, but, of course, there’s no good evidence that they do, and there are many signs that they are all the products of literature. When you’re looking at the Mythology shelf at the bookstore, you’re perusing the graveyard of dead gods. And the God of Abraham has exactly that status.

So it’s not that I can prove that he doesn’t exist, it’s that it’s obvious that there’s no proof attesting to his existence and, when you look at the kind of wishful thinking that has propped up faith for millennia, there’s every reason to believe that a culture of faith is a culture of deception for children and of self-deception on the part of the adults. The one thing I can say with certainty is that these books show no sign of being authored by an omniscient intelligence, and that really is the only thing one need be certain about to torpedo Judaism, Christianity, and Islam. The Bible and the Quran are deeply inadequate books on every level: scientifically, historically, medically, aesthetically, ethically, spiritually, contemplatively. These are just not the best books we have on any topic, and they should be if they were written by the creator of the Universe.

This is Russell’s teapot argument: can you prove that there is not a china teapot circling the sun between Mars and Earth? No, you can’t prove that. But is there any reason to think that such a teapot exists? No, and the burden of proof is, of course, on the one who asserts this seemingly outrageous truth claim.

Is militant atheism the right way to tackle growing religious bigotry? Calm reason or arrogant atheism?

I don’t think “militant” atheism is at all arrogant. What you have actually witnessed is the consequence of being calmly reasonable in the face of religious superstition and demagoguery—you immediately get back charges of arrogance. There’s nothing arrogant about saying that the Quran expresses a thorough intolerance of infidels and is, therefore, a divisive book. That’s just a fact. That is a calmly reasonable thing to say about the Quran, and the same thing goes for anything else that I, or Richard Dawkins, or Christopher Hitchens, or Daniel Dennett, or Ayaan Hirsi Ali, or any other friend or colleague of mine has said about religion.

Now, we all probably have our moments where we might seem arrogant, but the general tenor of what we’ve said and written has been not arrogant, but worried. There’s an urgency that comes through in all of our work because the situation on the ground is outrageous. It is just astonishing that we’re living in a world where we have to spend any time at all thinking about these things.

My entire career, on this topic, I view as a massive opportunity cost. I mean, really, it is insane to have to argue for the right of gays to marry in the United States in the year 2015. Happily we seem to be on the verge of winning that battle, but it boggles the mind that this has to be argued for, much less the teaching of evolution in school. The fact that anyone has to argue against the intrusion of intelligent design is an outrage and a massive forfeiture of time and energy. It’s totally understandable that many smart people just don’t want to go near this topic because it is, on some level, a waste of time. That’s not to say that it’s not useful—I think it’s incredibly useful, and I wouldn’t do this if I didn’t think it was important. But whenever I move on to a fundamentally different topic that is just interesting and fun to think about and useful to put out into the world, I feel a kind of fresh air come into the room that I don’t feel when I’m having to talk about Christianity or Islam and the obvious harms that these ideologies continue to manufacture moment by moment in the world. These religions are engines of stupidity and division, and anything good you think is coming out of them can be had for better reasons elsewhere. That should be obvious, but it isn’t, so I continue to talk about these things. But, again, the attitude is one of real concern and certainty on specific points. Yes, it is clear that evolution has occurred. Yes, it is clear that every sentence that someone like Jerry Coyne has to speak in defense of evolution is galling. So, when Dawkins gets accused of being arrogant, this is a trope that religious people resort to when they don’t actually have an argument.

Some of your critics like to paint you as a philosophically illiterate scientist. Please speak about your relationship to philosophy.

That’s always an interesting one, because I actually consider myself a philosopher, much to the consternation of my critics and, no doubt, some academic philosophers and graduate students. While I’m usually described as a “scientist” or “neuroscientist,” and occasionally refer to myself this way, the truth is that most of my work has been philosophical. My interest in the brain has always been philosophical—I never thought that I was going to cure Alzheimer’s with my research. The focus of my research has been on the nature of human consciousness, and human values, and how our growing understanding of ourselves through science will change our conception of what we are as subjective creatures and change our view about what is worth wanting in this world, and how we should enshrine these values in public policy and public institutions. So, it’s in the philosophy of mind, moral philosophy, and metaethics, thus far, that I’ve tried to make a contribution, and I have just taken a route through neuroscience, in part, to express those interests.

The truth is that I consider my defects as an intellectual to be far more in the area of science than in philosophy, at this point. There’s so much more to know in science. There’s so much more to be wrong about definitively. In philosophy, I feel like I have more or less all the tools I need—that’s not to say that I’ve read everything or am even aware of everything, even in my areas of interest—but I’m not continually having my ignorance pointed out to me by my critics in philosophy. This is true on the philosophical topic of free will. Dan Dennett and I don’t agree on free will, but there’s nothing Dan has said about our disagreement that has made me think that I have made elementary philosophical errors—to the contrary. Whereas in science, I can freely admit that I am ignorant of most of what is now well understood. There is such an explosion of knowledge about which one can be ignorant—about which one must be ignorant, given the speed with which scientific knowledge is accumulating.

So, while there is no boundary in principle between science and philosophy—which is to say that scientific questions can be viewed as philosophical, and philosophical questions can be viewed as scientific, depending on how one chooses to address them—the fields are different, and they require different tools. That’s neither a strength nor a weakness of philosophy, it’s just that you don’t require the same base of knowledge to arrive at sound philosophical positions or to detect the flaws in another philosopher’s arguments.

So, for better or worse, I do consider myself a philosopher as well as a scientist—a neurophilosopher, a moral philosopher, a philosopher of mind—because this is what I’m doing, I am philosophizing a lot of the time. I consider my lack of a PhD in philosophy a non-issue, and I would consider someone’s lack of a PhD in any discipline to which he or she is making a valid contribution a non-issue. There are physicists who don’t have PhDs in physics. There are computer scientists who don’t have graduate degrees of any kind. Your intellectual credibility is based on the credibility of the work you produce. There are many people with PhDs in philosophy who I think are doing terrible philosophy—who I think are wrong on almost every question they touch.

So, you’re as good as your last sentence, as far as I am concerned. And if your last sentence didn’t make sense, then you are fit to be pilloried by even a lowly freshman, and a string of Nobel prizes will not inoculate you against that embarrassment. That’s the way that the boundary between knowledge and ignorance, and authority and mere intellectual imposture, should be policed.

This connects, actually, to another question I saw on Twitter.

If you were going to criticize Sam Harris, what do you think the most valid criticism is intellectually?

It relates to some very real deficits I have as a scientist and thinker. My mathematical and computational (computer science) backgrounds, generally, are incredibly weak. I often fantasize about dropping 90% of what I’m doing and going back and relearning mathematics from the ground up, and computer science as well. But this is where you worry that taking any significant amount of time to correct for your deficits is, in the big picture, a waste of time—and that you should actually be simply honing your strengths and digging one or at most a few wells as deeply as possible and letting other people do the work that you’re not competent to do. But I do fantasize about going back to school to fill in some of these gaps. Who knows, I may yet do that.

I have condensed a variety of questions on a related theme:

You’re often attacked for taking quotes out of context—out of the Quran, in particular, but scripture, generally—and therefore misconstruing them or misrepresenting their meaning, and yet you complain incessantly about the people who do this to you. Is that fair?

Well, I never do that in a way that I think misrepresents the context. It’s not that you can’t take a quote out of context—that is what it is to offer a quotation of an author’s work; otherwise you would have to reproduce the full work. So there’s nothing wrong with quoting people, but when you select a quote which is guaranteed to misrepresent the context—one that’s guaranteed to be misunderstood by readers who read only that quote—and you do this knowingly, and you do it against the protests of the author, then you’re up to something intellectually disreputable and unethical.

When I take a quote out of the Quran that makes it seem like the Quran demonizes infidels, I do that because the Quran, on balance, demonizes infidels. That is absolutely the central, unambiguous message of the Quran. The central message of the Quran is not to have compassion for all of the ignorant people in the world; it is a sustained expression of revulsion for unbelievers, and it recommends this attitude to all Muslims: you should hate the infidel, fear the infidel, not befriend the infidel, above all, don’t be an infidel for then you will spend eternity in hellfire, and this punishment will be much deserved. To say that this message is divisive is to state an objective fact and in no sense distorts the more global message of the Quran.

Now, there are other messages in the Quran. There are other quotes that can be cherry-picked that, in fact, do misrepresent its basic content. The more benign quotes, unfortunately, are the more misleading ones. A quote like, “There is no compulsion in religion,” which Muslim apologists always trot out on occasions like this, does in fact misrepresent the central thrust of the Quran’s message which is, above all, get this straight—you have to believe in God and Muhammad as his Prophet, and the Quran as the perfect word, otherwise the worst possible fate awaits you. That is the central message of the Quran.

So, I’m not aware of having ever misrepresented the context when I have quoted from the Quran or any other scripture, and insofar as I have, and these errors are pointed out to me, I will be very quick to correct them. But, of course, the same cannot be said about my critics. When my critics pull inflammatory quotations of mine out of context, they are doing it for the express reason of misleading their readers about the context. The most egregious example of this recently is a quote that’s being thrown around by the usual suspects, where I say that rape is “perfectly natural.” That quote is being used to suggest that I see no moral problem with rape. It’s amazing that anyone could imagine that this reading survives even a glance at the context. Of course, I said that rape was “natural” in the very midst of making an argument that you cannot base notions of right and wrong and good and evil merely on what is natural, because the worst things about us—rape, tribal violence, etc.—are perfectly natural. No one would ever move from the observation that rape is a natural phenomenon, that it occurs in humans and orangutans and dolphins, to the claim that we shouldn’t do everything we can to prevent rape and punish rapists in the civilized world. In fact, we’re civilized to the degree that we do this.

The other rape quote comes from an interview I did about ten years ago, where I said that if I could wave a magic wand and rid the world of religion or rape, I “would not hesitate to get rid of religion.” Now, that’s a provocative line, and I was aware that it was provocative at the time. I made it perfectly clear in that interview, however, just how harmful religion is—I was not in the least minimizing the harm of rape.

First of all, if you want to understand my thinking here, just imagine how many rapes I attribute to religion. Think of the child rape scandal in the Catholic Church; think of all the rapes that are born of “arranged marriages” in the Muslim world; think of all the punitive rapes that happen in those cultures; think of the use of rape as a weapon of war—not because soldiers just feel like raping people, but as a strategy to destroy a community given how taboo rape is within that community—this happened to the Bosnian Muslims. The Serbs were raping Bosnian Muslims strategically, as a tool of war, simply because of how fully it would destroy the community once these women had to admit that they had been raped, and many of them got pregnant. The stigma around rape is entirely born of religion in that culture, and in many others. Think of the honor killings that occur in the Muslim world as a result of a woman or girl getting raped, and then add to that balance everything else that is wrong with religion: all of the religious wars, all of the jihadist terrorism we’ve seen in our lifetime, all of the future liabilities of people who are expecting an apocalypse, the resistance to lifesaving medical research, the preaching of the sinfulness of condom use in sub-Saharan Africa where people are dying of AIDS. The list of negative effects attributable to religion that include rape and worse is practically endless. Again, that is not at all to minimize the significance of rape—I think it’s just about the worst thing that happens. But that “just about” allows for many things that are worse. Crucifying children, as ISIS has done, is worse; burying them alive is worse.

If you want to appreciate the intellectual dishonesty of my critics, please understand that these are people who are actively spreading the idea that I see no ethical problem with rape. When I complain about being quoted out of context, this is what I’m complaining about. I’m complaining about the intentional use of (usually accurate) quotations to misrepresent a larger discussion. Again, this is not an accident. It’s not that people like Reza Aslan, Glenn Greenwald, Max Blumenthal, Cenk Uygur, and Murtaza Hussain, and the rest of this now growing list of malicious critics don’t understand what I’m saying in context; it’s not that I’m such a bad writer that I can’t make my meaning clear. No, they view it as fair to pull misleading quotes—a practice that someone has dubbed “quote mining”—out of context for the purpose of defaming their author. It should go without saying that this is not journalism, and it’s not scholarship, and it’s not ethical.

But this is the sort of thing that happens, and there really is no defense against it. Pointing people back to the context doesn’t work if they already find you such a revolting character based on how you’ve been successfully slimed that they won’t read you.

And, needless to say, people who do this use your complaining about their behavior against you. Cenk Uygur of the Young Turks, when he’s criticized by his erstwhile fans for quoting me out of context and talking about my wanting to execute a nuclear first strike on the Muslim world, says, “Oh yeah, yeah, I know, everyone’s always misrepresenting Sam Harris and it hurts his feelings.” This is Cenk’s journalistic policy with respect to me now on the Young Turks.

I think it would be fascinating—I don’t know who the right opponent would be here, somebody like Glenn Greenwald—to attempt to have a dialogue the main agenda of which was to do a postmortem on how the conversation went so fully into the ditch. The subtext of this conversation would not be to win the debate on one point or another, but to try to diagnose the problem of conversation across ideological lines—why is it so hard to communicate effectively on this topic? Why does one side or the other get so fully hijacked that they can’t even give a charitable reading of their opponent’s views—which is to say, they can’t even take the time to understand what is being said before they react to it? This is a massive problem in public discourse, and I have a front-row seat for this travesty every time I see the reaction to something I publish. But it seems to me that it would be interesting, with the right person, to figure out how to have a conversation in such a way that polarization and bad faith are overcome. Perhaps I’m being uncharitable, but I don’t think any of the more interesting interlocutors here (or the more consequential ones) are up to it. I think it would be a disaster. So put that in the “pipedream” category of future projects.

This brings me, rather naturally, to another question I saw on Twitter.

I’m worried you’re spending too much time on defense. Have you ever considered completely ignoring misrepresentations?

Well, yeah, for a very long time that’s what I did. When Chris Hedges was spreading this lie about me that I advocate a nuclear first strike against the Muslim world and want to kill hundreds of millions of people, I ignored it because I felt that what I had written was absolutely clear in context, and I felt that answering these crazy charges gave them more credibility. But, I discovered, when I finally started paying attention, that this didn’t work. Ignoring the misrepresentations just allowed them to flourish, and my silence on this issue was taken by many people as confirmation of the charges. The truth is, there’s just no winning this game because, as the question implies, the more time you spend defending yourself, the more boring you become, both to yourself and to those people who actually understand what your views are in the first place.

So I will now run the following experiment: I will no longer mention any of these people ever again, as I vowed never to mention the name of the serial plagiarist, pseudo-atheist lunatic who has been trolling me on Twitter and was given a very prominent platform by Cenk and the other people at the Young Turks network. As far as I can tell, I’ve kept that vow. I’m now extending this to Reza Aslan, Glenn Greenwald, Chris Hedges, and the other people whom I have mentioned, to your boredom and mine, all too frequently. I will never mention any of these people again. I issue this with a few caveats—if anything too consequential happens, I might have to respond to it, whether on my behalf or someone else’s. But that aside, you will never hear another word from me about these people for the rest of our lives.

Who argues against your position on religion honestly?

Well, that’s an interesting question. It’s certainly not the liberal/secular/atheist apologists for faith—the “accommodationists,” the people who think we’re simply being “arrogant” or “uncivil” to criticize religion and, therefore, want us to shut up without ever really dealing with the substance of our arguments. I think the honest criticism comes from true believers, people who really believe that they have valid evidence for the truth of one or another religious doctrine. They’re almost certainly wrong, and their definition of what constitutes valid evidence is certainly in need of revision. But they’re often connecting the dots in an intellectually honest way, albeit a flawed way, and honestly arguing from that point of view. They’re people who think their experience in prayer, and the change in their lives born of it, proves that Christianity, for instance, is true—that Jesus, the son of God, is invisibly present in their lives and changing their experience for the better, etc. I don’t consider all claims of this sort dishonest. Some of them are confused; some people are using experience to make claims about the Cosmos that are illegitimate, but there’s no intellectual dishonesty in this.

Intellectual dishonesty is different from scientific ignorance or logical errors. Intellectual dishonesty is when someone really should know better—when he clearly has the tools but isn’t using them, even though he uses them in other contexts. Someone like Francis Collins is being intellectually dishonest in how he justifies his Christianity. The dishonesty here is in his unwillingness to acknowledge the patently emotional basis for his views—he’s attached to the consolation he gets from these views; he wants certain propositions to be true because of the way these “truths” make him feel. That is the quintessence of the unscientific attitude. Motivated reasoning is a problem; it is a way of failing to be in contact with reality; it is a way of fooling yourself. Richard Feynman’s famous line is something like, “Science is the art of not fooling yourself, and you are the easiest person to fool.” Every scientist has to know this about him or herself—you are the easiest one to fool. You have to get out of your own way in order to think clearly about the nature of reality, and what you must remove to think scientifically is a dogmatic commitment to maintaining certain beliefs despite the evidence against them, or in the absence of compelling evidence. Someone like Francis Collins has not done that. He’s rebranded his wishful thinking as “faith,” and yet he’s a scientist and claims to square his religiosity with his science—that’s his intellectual dishonesty.

The dishonesty I often see in nonbelievers is born of their fundamental skepticism that anyone actually believes anything, especially when these doubts align with their political attitudes or political correctness. You get a very blinkered perspective here. There are feminists who are attacking Ayaan Hirsi Ali as a bigot: They’re worried about the treatment of women in Silicon Valley, and yet they’re completely indifferent to Ayaan’s background and to her present efforts to make life better for millions of women living in intolerable conditions because of the doctrine of Islam, because of political Islam, because of theocracy. That’s intellectual dishonesty, and it’s more a liberal problem than a conservative one at this point, which depresses me to no end.

How often should we be aware of the illusion of free will? Should it serve more as a reflective function rather than happening in real time?

Well, for me, a direct awareness of the illusoriness of free will—the very clear sense that the notion of free will doesn’t name anything in my experience—is more or less coincident with a moment of mindfulness or a moment of meditation, where I’m clearly aware of how thoughts, intentions, and desires and their subsequent actions arise spontaneously. Something was not there a moment ago and then, suddenly, it’s there. All of one’s mental life, even the most voluntary behavior, has this character when you look at it in a fine-grained way.

But I think the most important understanding of it is reflective—certainly the most important ethical implications are borne of reflecting on this truth about us. It’s just the understanding that people are operating on the basis of everything that has made them who they are and that they are not agents in the deepest possible sense.

This brings to mind an email I received recently which revealed the sometimes surprising consequences of drilling down on these philosophical topics. This is the kind of response that is surprising and gratifying on a topic like free will, which would seem on its face to be purely of academic interest.

So here’s the email:

Sometime in the Fall of 2012, I read your book Free Will. It positively impacted me and helped me cope with my mother’s suicide differently. After her death, I felt the same way Christopher Hitchens felt about his Mom, Yvonne, in the first chapter of Hitch-22. I wonder if in some way I could have thwarted my mom’s death. Suicide, unlike other ways of dying, is odd to cope with. I’m tempted to say harder, but that’s unfair. So far, all I’ve come up with is the perceived voluntary aspect of suicide, the feelings of rejection, abandonment, and guilt that follow, and the wish to tell her I loved her and to hear her say that she loved me too. No other book about suicide or depression has come close to helping me understand my Mom and other people; nothing has ever helped me grapple with the difficult questions I’ve asked myself about the nature of the world. Nothing has ever made me feel more hopeful and subdued the feelings I mentioned above like your book Free Will.

What I’m trying to say is thank you.

That’s a total surprise. When I wrote about free will, I didn’t think I was anywhere near this kind of emotional territory. But there are many things like this, where I can see that this is one of the consequences of thinking clearly on these sorts of topics. And this project is not derivative of any criticism of religion per se; it’s just the result of thinking about the nature of the human mind, the nature of human behavior, and trying to live with the consequences of reasoning honestly on those topics.

Apologies again for not getting to more of your questions. I am longwinded, as you no doubt know, and there were so many of them. Please let me know if Q&A podcasts like this are useful to you. I won’t do them if they’re not. You can do that on Twitter or by email.

Thanks again for listening. Until next time….

April 25, 2015