The morality of destroying the earth

 
tgwaty
 
Total Posts:  10
Joined  06-06-2016
 
 
 
05 March 2017 05:42
 

Suppose I said that I want to destroy all human life.  Most people would say that such an act is extremely evil. It is comic-book movie villain evil. Age of Ultron.  There may be moral gray areas, but this ain’t it.

Now consider this hypothetical scenario. There is an island with a population of one million people. It is isolated. There is nowhere to go. It is going to be destroyed, say, by a volcano. But a radical technological solution is found to prevent the destruction. The problem is that it requires the torture of a little girl: her continuous, severe torture over years, until she finally dies.

This is a tough moral decision for some. But I bet some of you have an immediate emotional response against sacrificing the girl. It is to those of you that I write. To others, I suggest modifying the scenario with a more extreme sacrifice of innocents, or fewer people to save, until you too come out against the sacrifice of innocents.

But the situation in which I, Ultron, hold my finger on the button to destroy the earth is worse. Far worse. Staving off the destruction of earth does require the torture of innocents unto death… innocents that are being tortured, abused, starved—often by their parents—right now as you read this. Thousands of them at least. Maybe hundreds of thousands. Then there are millions of people whose lives are simply miserable. It may be hundreds of millions. Every suicide is a person in misery, but not every miserable human being commits suicide. There are at least 20 million people living in the wretched condition of slavery today.

Take a moment to Google “parents abused starved child to death”. Or image search “birth defects”. Or, my personal favorite, the story of “Genie”, the girl locked in a room, tied to a potty chair for over ten years without being spoken to.

The scale of human suffering that you ignore while you sip your morning latte is as unimaginable as astronomical distances. It is not a numbers game. You cannot grasp what I am getting at by pondering the millions of the Holocaust. You have to rub your nose into an individual life of misery to “get” it.

“But my life is good. The lives of most of the people I know are good. The lives of many are good.” Yes, but is that worth it? Is the goodness of your lives worth all that suffering? Because there is no separating them. It is the nature of humanity. To continue the human race is to continue all that suffering.  If you had your finger on Ultron’s button and you chose to spare the earth, it means that to you, the good lives are worth all that suffering.  To me, that is like saying that of course plantation slave labor was necessary, since without it the gentry could never have had such good lives.

“Ah, but in the last analogy the gentry are causing the slaves suffering. I am not causing the world’s suffering.”

I don’t see that it matters. Your continued existence necessitates all that suffering whether you will it or not.

[ Edited: 05 March 2017 06:09 by tgwaty]
 
Ola
 
Total Posts:  1040
Joined  12-07-2016
 
 
 
05 March 2017 06:06
 

And my non-existence, if I were to pop off today, would make no difference to any of that suffering. Not. One. Jot.

 
tgwaty
 
Total Posts:  10
Joined  06-06-2016
 
 
 
05 March 2017 06:09
 
Ola - 05 March 2017 06:06 AM

And my non-existence, if I were to pop off today, would make no difference to any of that suffering. Not. One. Jot.

Right. That’s why we need to destroy the whole earth. Who’s with me?

 
Ola
 
Total Posts:  1040
Joined  12-07-2016
 
 
 
05 March 2017 06:12
 
tgwaty - 05 March 2017 06:09 AM
Ola - 05 March 2017 06:06 AM

And my non-existence, if I were to pop off today, would make no difference to any of that suffering. Not. One. Jot.

Right. That’s why we need to destroy the whole earth. Who’s with me?

To destroy the most advanced life form that we know of rather than work out an alternative plan (to end the torture and misery) seems a bit defeatist to me.

 
EN
 
Total Posts:  19025
Joined  11-03-2007
 
 
 
05 March 2017 07:33
 
tgwaty - 05 March 2017 06:09 AM
Ola - 05 March 2017 06:06 AM

And my non-existence, if I were to pop off today, would make no difference to any of that suffering. Not. One. Jot.

Right. That’s why we need to destroy the whole earth. Who’s with me?

I’ll leave that to you.  Right now I’m focused on more local situations.  The closer something is to me, the more power I have to make a difference.  Today, I want the Baylor women’s basketball team to destroy Kansas State, and I’m rooting for them with all my power.  I also want to disrupt President Trump’s plan to build a wall along the Texas border, because it would ruin something that I love. Beyond that, I’m not inclined to get involved in destruction.

 
no_profundia
 
Total Posts:  324
Joined  14-07-2016
 
 
 
05 March 2017 09:12
 

Actually, this is an interesting question. It is something I have wondered about Sam’s moral philosophy. I am only familiar with it from watching him give talks about it (I have not read his book yet) so I am hoping someone who has read his book can perhaps respond to this question and provide more detail about Sam’s position.

But here is the question:

Sam seems to believe that morality is scientific or he seems to challenge the ought/is distinction and he claims that moral questions are amenable to science. He also seems to be a consequentialist who sees the primary goal of morality to be to maximize well-being. Sam talks about a “moral landscape” and he seems to view it as sort of a two-dimensional line with peaks and troughs and he says we should move toward the peaks and away from the troughs and science can tell us how to do this (if this is an overly simplistic overview of Sam’s position I would love to have it corrected and deepened).

However, there are a number of ways you could formulate a consequentialist goal focused on well-being. You could say the goal is to maximize well-being. In this case you might wind up with a scenario where it would be moral to increase the population as much as possible even if most people live rather paltry lives. As long as they experience some well-being you will increase the well-being in the world. Even if some innocent people suffer terribly there will be more well-being in the world than there would be if we destroyed all life.

Another way to formulate it would be: we should reduce suffering as much as possible. But, if we formulate it that way, then it seems like destroying all life on earth would be the moral thing to do. You could eliminate all suffering on earth.

We could also try to combine the two in some way and say we need to try to balance the suffering in the world with the well-being and shoot for some equilibrium point where we have the maximum well-being consistent with a certain level of acceptable suffering and we could choose any level of acceptable suffering we wanted.

So, there are multiple (and potentially infinite) ways to formulate the same moral position and which formulation we choose leads to radically different solutions or answers to the question “What is moral?” My question is: How can science tell us which is the correct way to formulate the consequentialist goal? What does science have to tell us about which formulation is correct?

It seems to me to get out of this dilemma you need to have a notion of values that is more complex than Sam’s troughs and peaks notion of well-being and I am not convinced that science has much to tell us about what multi-dimensional combination of values we should adopt (perhaps suffering is valuable in some cases?). But perhaps I am simplifying Sam’s position based on the fact that I have not read his book.

Addendum: I want to elaborate a bit on what I mean by a multi-dimensional value space. We can think of it as analogous to evaluating a meal. Sam’s position seems to me to be one-dimensional. The notion of peaks and troughs means we are evaluating a moral position based on a simple question “How much well-being is there?” It would be like evaluating a meal on a single dimension “How spicy is it?”

But when we have a meal we evaluate it on multiple dimensions. We are not simply interested in how spicy it is. Where we are in “taste space” depends on a ton of factors. Similarly, I think where we are in “value space” is based on a number of dimensions (Jonathan Haidt argues we evaluate moral positions based on six dimensions but I think it could potentially be more than that).

If we want to know if a meal is good we need to evaluate it on all relevant dimensions (and there will be meals that are equally “good” that are in very different places in taste space - some will be very spicy, some will not be spicy at all, etc.). If we collapse the question into “How spicy is it?” we are going to wind up with absurd conclusions (the spicier the better). I think the conclusions above: that we should increase population to its absolute maximum or we should destroy all life on earth are a result of collapsing moral questions to a single dimension.

The problem is: once we increase the dimensions of evaluation I am not sure science can tell us much about what trade-offs to make or what points within value space are the “best” (except perhaps in very general ways, where positions are absolutely dominated on all, or multiple really important dimensions).

[ Edited: 05 March 2017 09:48 by no_profundia]
 
 
Jan_CAN
 
Total Posts:  1138
Joined  21-10-2016
 
 
 
05 March 2017 09:47
 

Morality cannot be put into a scientific equation or hypothesis.  We must understand the world we live in by means of the sciences, but science will not answer all of our social and moral needs, nor should it.  There are no easy answers to most moral questions – these must be answered by ethicists, philosophers, artists, scientists, by all of us.  Trying to find the answer to a moral question using science alone could lead to inhumane conclusions, and a world I would not want to live in.

 
 
GAD
 
Total Posts:  15575
Joined  15-02-2008
 
 
 
05 March 2017 10:20
 

I accept suffering as the cost of living as does everyone who chooses life over death.

 
 
Cheshire Cat
 
Total Posts:  667
Joined  01-11-2014
 
 
 
06 March 2017 13:08
 

“Only what we have lost forever do we possess forever. Only when we have drunk from the river of darkness can we truly see. Only when our legs have rotted off can we truly dance. As long as there is death, there is hope.”
– Brother Theodore

 
 
Antisocialdarwinist
 
Total Posts:  5773
Joined  08-12-2006
 
 
 
06 March 2017 14:35
 
no_profundia - 05 March 2017 09:12 AM

I want to elaborate a bit on what I mean by a multi-dimensional value space. We can think of it as analogous to evaluating a meal. Sam’s position seems to me to be one-dimensional. The notion of peaks and troughs means we are evaluating a moral position based on a simple question “How much well-being is there?” It would be like evaluating a meal on a single dimension “How spicy is it?”

You’re ignoring what Harris claims to have discovered about the purpose of morality, that it is to maximize the well-being of conscious creatures. If the purpose of meals was to maximize spice, then it would be appropriate to evaluate a meal according to the single dimension of spiciness. Since the purpose of morality is to maximize the well-being of conscious creatures, it’s appropriate to evaluate a moral position according to the single dimension of well-being.

But actually, I think a better analogy would be to compare well-being with taste instead of spiciness. Suppose the purpose of meals was to maximize taste. It would then be appropriate to evaluate a meal according to the single dimension of taste. The problem is, taste—unlike spiciness—is subjective. And so is the maximum well-being of conscious creatures. Harris tries to get around this by focusing on individual well-being. But even if there is an objective measure of individual well-being that can in theory be quantified with a tricorder-like Well-Being-O-Meter, there is no way to objectively quantify aggregate well-being. And it should be clear that the purpose of morality is to maximize aggregate well-being, not individual well-being. (In terms of maximizing WBCC, the answer to the OP’s hypothetical question hinges on the value or importance of the little girl’s well-being relative to everyone else’s—a subjective preference.) Therefore, there is no way for science to determine the moral landscape or human values.

 
 
no_profundia
 
Total Posts:  324
Joined  14-07-2016
 
 
 
06 March 2017 17:04
 
Antisocialdarwinist - 06 March 2017 02:35 PM
no_profundia - 05 March 2017 09:12 AM

I want to elaborate a bit on what I mean by a multi-dimensional value space. We can think of it as analogous to evaluating a meal. Sam’s position seems to me to be one-dimensional. The notion of peaks and troughs means we are evaluating a moral position based on a simple question “How much well-being is there?” It would be like evaluating a meal on a single dimension “How spicy is it?”

You’re ignoring what Harris claims to have discovered about the purpose of morality, that it is to maximize the well-being of conscious creatures. If the purpose of meals was to maximize spice, then it would be appropriate to evaluate a meal according to the single dimension of spiciness. Since the purpose of morality is to maximize the well-being of conscious creatures, it’s appropriate to evaluate a moral position according to the single dimension of well-being.

But actually, I think a better analogy would be to compare well-being with taste instead of spiciness. Suppose the purpose of meals was to maximize taste. It would then be appropriate to evaluate a meal according to the single dimension of taste. The problem is, taste—unlike spiciness—is subjective. And so is the maximum well-being of conscious creatures. Harris tries to get around this by focusing on individual well-being. But even if there is an objective measure of individual well-being that can in theory be quantified with a tricorder-like Well-Being-O-Meter, there is no way to objectively quantify aggregate well-being. And it should be clear that the purpose of morality is to maximize aggregate well-being, not individual well-being. (In terms of maximizing WBCC, the answer to the OP’s hypothetical question hinges on the value or importance of the little girl’s well-being relative to everyone else’s—a subjective preference.) Therefore, there is no way for science to determine the moral landscape or human values.

Interesting. I have to give a very quick response because I am off to class.

First, how did Sam “discover” the purpose of morality? Is this a scientific discovery or does science only get started after we accept this premise (according to Sam)?

Second, my own suspicion is there is no such thing as “well-being”. The brain is extremely complex and is capable of being in more states than we could map and it would be very difficult to categorize those states based on their similarities (and there would be more than one way to do it). No doubt we group some of those states together and give them rough names (“happy”, “sad”, etc.) but these are very rough and I think the analysis I applied to value would apply to these brain states. We evaluate them on multiple dimensions and not a single dimension. Sometimes I think I prefer “profound sadness” to “shallow happiness”. Profound and shallow are metaphors, and I am not sure what they refer to, but I don’t think we evaluate our brain states on a single dimension.

Even if we have just those two dimensions “profound/shallow” and “sad/happy” and we put them on a scale, it is not clear what it means to “maximize well-being.” How much depth of experience do we trade for more happiness? And, of course, I am sure there are many more dimensions than this.

I would love to hear more about Harris’s position though or your own position. Take care.

 
 
Antisocialdarwinist
 
Total Posts:  5773
Joined  08-12-2006
 
 
 
06 March 2017 19:49
 
no_profundia - 06 March 2017 05:04 PM
Antisocialdarwinist - 06 March 2017 02:35 PM
no_profundia - 05 March 2017 09:12 AM

I want to elaborate a bit on what I mean by a multi-dimensional value space. We can think of it as analogous to evaluating a meal. Sam’s position seems to me to be one-dimensional. The notion of peaks and troughs means we are evaluating a moral position based on a simple question “How much well-being is there?” It would be like evaluating a meal on a single dimension “How spicy is it?”

You’re ignoring what Harris claims to have discovered about the purpose of morality, that it is to maximize the well-being of conscious creatures. If the purpose of meals was to maximize spice, then it would be appropriate to evaluate a meal according to the single dimension of spiciness. Since the purpose of morality is to maximize the well-being of conscious creatures, it’s appropriate to evaluate a moral position according to the single dimension of well-being.

But actually, I think a better analogy would be to compare well-being with taste instead of spiciness. Suppose the purpose of meals was to maximize taste. It would then be appropriate to evaluate a meal according to the single dimension of taste. The problem is, taste—unlike spiciness—is subjective. And so is the maximum well-being of conscious creatures. Harris tries to get around this by focusing on individual well-being. But even if there is an objective measure of individual well-being that can in theory be quantified with a tricorder-like Well-Being-O-Meter, there is no way to objectively quantify aggregate well-being. And it should be clear that the purpose of morality is to maximize aggregate well-being, not individual well-being. (In terms of maximizing WBCC, the answer to the OP’s hypothetical question hinges on the value or importance of the little girl’s well-being relative to everyone else’s—a subjective preference.) Therefore, there is no way for science to determine the moral landscape or human values.

Interesting. I have to give a very quick response because I am off to class.

First, how did Sam “discover” the purpose of morality? Is this a scientific discovery or does science only get started after we accept this premise (according to Sam)?

Second, my own suspicion is there is no such thing as “well-being”. The brain is extremely complex and is capable of being in more states than we could map and it would be very difficult to categorize those states based on their similarities (and there would be more than one way to do it). No doubt we group some of those states together and give them rough names (“happy”, “sad”, etc.) but these are very rough and I think the analysis I applied to value would apply to these brain states. We evaluate them on multiple dimensions and not a single dimension. Sometimes I think I prefer “profound sadness” to “shallow happiness”. Profound and shallow are metaphors, and I am not sure what they refer to, but I don’t think we evaluate our brain states on a single dimension.

Even if we have just those two dimensions “profound/shallow” and “sad/happy” and we put them on a scale, it is not clear what it means to “maximize well-being.” How much depth of experience do we trade for more happiness? And, of course, I am sure there are many more dimensions than this.

I would love to hear more about Harris’s position though or your own position. Take care.

Harris claims that maximizing WBCC is the purpose of morality. He defines “science” loosely, but the bottom line is that the purpose of morality is a fact, not a premise. If you disagree, then give an example of some behavior that A) has moral implications; and B) for which the purpose of encouraging or discouraging said behavior—by labeling it right or wrong—is something other than maximizing the well-being of conscious creatures.

I won’t try to argue the second point. Harris’s claim that individual well-being not only exists, but can be objectively quantified seems a little dubious to me, too. But Harris is a neuroscientist, so I grant him a certain amount of leeway there. My point is that even if we accept his claim, it still doesn’t get us to a moral landscape. For that we need aggregate well-being, which can only be quantified subjectively.

 
 
Jefe
 
Total Posts:  5949
Joined  15-02-2007
 
 
 
07 March 2017 07:42
 
Antisocialdarwinist - 06 March 2017 07:49 PM

For that we need aggregate well-being, which can only be quantified subjectively.

I would say that objective evidence and facts can only be used to gauge morality when measured against the subjective judgements of the moralizers.