An article by Kate Murphy in the New York Times discusses a recent controversy in the field of fMRI over statistics. Although Murphy correctly observes that flawed methods of data analysis are a problem in neuroimaging, she falsely implies that our 2009 study of the neural correlates of belief employed the methods in question. Here is the letter that Mark S. Cohen, the senior author on that paper, sent to the Times.—SH
Wherein Sam and Joe talk for 4.5 hours…
Known as “Mad Max” for his unorthodox ideas and passion for adventure, Max Tegmark is a professor of physics whose scientific interests range from precision cosmology to the ultimate nature of reality, all explored in his new popular book Our Mathematical Universe. Tegmark has published more than two hundred technical papers and been featured in dozens of science documentaries. His work with the Sloan Digital Sky Survey on galaxy clustering shared the first prize in Science magazine’s “Breakthrough of the Year: 2003.” For more information about his work, please visit his MIT website and the Future of Life Institute.
It seems increasingly likely that we will one day build machines that possess superhuman intelligence. We need only continue to produce better computers—which we will, unless we destroy ourselves or meet our end some other way. We already know that it is possible for mere matter to acquire “general intelligence”—the ability to learn new concepts and employ them in unfamiliar contexts—because the 1,200 cc of salty porridge inside our heads has managed it. There is no reason to believe that a suitably advanced digital computer couldn’t do the same.
It is often said that the near-term goal is to build a machine that possesses “human level” intelligence. But unless we specifically emulate a human brain—with all its limitations—this is a false goal. The computer on which I am writing these words already possesses superhuman powers of memory and calculation. It also has potential access to most of the world’s information. Unless we take extraordinary steps to hobble it, any future artificial general intelligence (AGI) will exceed human performance on every task for which it is considered a source of “intelligence” in the first place. Whether such a machine would necessarily be conscious is an open question. But conscious or not, an AGI might very well develop goals incompatible with our own. Just how sudden and lethal this parting of the ways might be is now the subject of much colorful speculation.
Meditation and the Nature of the Self: A Conversation Between Sam Harris and Dan Harris at the Rubin Museum
I’d like to begin, once again, by congratulating Ryan Born for winning our essay contest. The points he raised certainly merit a response. Also, I should alert readers to a change in the expected format of this debate: Originally, I had planned to have an extended conversation with the winning author, with Russell Blackford serving as both moderator and commentator. In the end, this design proved unworkable—and it was not for want of trying on our parts. I know I speak for both Ryan and Russell when I say that our failure to produce an acceptable text was frustrating. However, rather than risk boring and confusing readers with our hairsplitting and backtracking, we’ve elected to simply publish Russell’s “Judge’s Report” and Ryan’s essay, followed by my response, given here.—SH