Utilitarian Ethics vs. Rights Based Ethics - Page 3 - Politics Forum.org | PoFo

By ThomasNewton
#13629180
Good post pugsville! Indeed, pugsville points out one of the other problems of Utilitarianism: the entire system is basically one big appeal-to-consequences argument. Not only is this fallacious, but even if it weren't, it would be an impossible system, because it is impossible to know the outcome of an action before it resolves. The situation pugsville described is a good example of this.
By lucky
#13629236
ThomasNewton wrote:appeal to consequences argument. Not only is this fallacious

The wiki article you quoted is about a different kind of reasoning: "a statement X is true because X being true would be nice". That would indeed be a fallacy.

But when it comes to choosing one action/policy or another, we have a very different kind of argument: "I can do X or Y. I think doing X will lead to better outcomes than Y, so I choose X." There is nothing fallacious about it, it's how rational choices are made.

ThomasNewton wrote:an impossible system because it is impossible to know the outcome of an action before it resolves.

It is possible to predict outcomes to some extent. I can't be completely sure if boarding a bus will take me to my destination or get me killed, but that does not mean it's impossible for me to approximate outcomes and probabilities and make a choice based on that. I take the bus because I think the likely outcome is going to be arriving at my planned destination.
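The reasoning in the bus example amounts to a toy expected-utility calculation; here is a minimal sketch of it. All probabilities and utility numbers below are purely illustrative assumptions, not anything from the thread.

```python
# A toy sketch of choosing by expected outcome rather than certain knowledge.
# All probabilities and utility values are illustrative assumptions.

def expected_utility(outcomes):
    """Sum probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Taking the bus: very likely to arrive (+10), very unlikely to crash (-1000).
take_bus = [(0.999, 10), (0.001, -1000)]
# Staying home: a certain outcome, but nothing gained.
stay_home = [(1.0, 0)]

# A rational chooser picks the action with the higher expected utility.
choice = ("take the bus"
          if expected_utility(take_bus) > expected_utility(stay_home)
          else "stay home")
```

Even with a catastrophic outcome in the mix, the expected utility of boarding stays positive under these assumed numbers, which is the point: approximate probabilities are enough to make a choice.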

I still have not understood what the alternative proposed procedure for making decisions is. And if the alternative decision-making approach is claimed to be superior: by what measure?
Last edited by lucky on 15 Feb 2011 02:12, edited 7 times in total.
By El Gilroy
#13629238
Utilitarianism is generally superior, but it can also be abused to greatly undesirable effect if it's used towards the wrong ends.
By lucky
#13629247
ThomasNewton wrote:It would make more people happy if 5 people waiting for organs were saved than if one person died, therefore we should kill organ donors/anyone off the street for their organs.

Not really. If health services were allowed to hunt for organ donors on the street unpunished, that would make life for almost everybody a lot more difficult, and subsequently it would not "make more people happy". Everybody would be in hiding avoiding the streets, or else would organize into herds fighting each other. Peaceful cooperation would be very hard.
Last edited by lucky on 15 Feb 2011 02:20, edited 1 time in total.
By ThomasNewton
#13629252
lucky wrote:The wiki article you quoted is about a different kind of reasoning: "a statement X is true because X being true would be nice". That would indeed be a fallacy.

But when it comes to choosing one action/policy or another, we have a very different kind of argument: "I can do X or Y. I think doing X will lead to nice things in the future, so I choose X." There is nothing fallacious about it, it's how rational choices are made.


This is a good point, lucky, good job! However, we are not talking about how people think; we are talking about what is morally right. You actually had a great example of a very simple thought scenario there. However, in that scenario, even if you choose X you cannot say "I can do X or Y. I think doing X will lead to nice things in the future, so X is the morally right thing to do." Again, this is what Utilitarianism says, and as you said yourself, that would be an appeal to consequences fallacy.

lucky wrote:It is possible to know outcomes to some extent. I can't be completely sure if boarding a bus will take me to my destination or get me killed, but that does not mean it's impossible for me to approximate outcomes and probabilities and make a choice based on that. I take the bus because I think the likely outcome is going to be arriving at my planned destination.


Also well stated, lucky, but this is where linguistics in philosophy becomes important. Again you've come up with a good simple situation to demonstrate the concept, so I will follow your example. In your last statement you are correct in saying that you can think of a likely outcome. However, you are only estimating a probability; you cannot be assured of the outcome to any extent. For instance, say the bus broke down 10 minutes before you arrived at the stop. Even though you still think it is likely for you to get on the bus at the usual time, it has actually become unlikely without your knowledge.

You can say people can predict a probable outcome of their actions, but it is impossible to know the outcomes.

This is why Utilitarianism is an impossible system to apply. It is impossible to know any of the consequences of an action before they resolve, let alone all of them.
By lucky
#13629256
ThomasNewton wrote:However, we are not talking about how people think, we are talking about what is morally right.

Well, I don't really see a difference. My morality is a description of what my goals are, and that's what's guiding my decisions. If I have to choose between X and Y, I first approximate the outcomes of doing X and of doing Y (the consequentialism part), and then evaluate which outcome I like better (the morality part).

ThomasNewton wrote:You can say people can predict a probable outcome of their actions, but it is impossible to know the outcomes. This is why Utilitarianism is an impossible to apply system.

Only by your own definition of the word "utilitarianism". It does not mean that one does what will turn out to have the best outcomes; it only means that one does what one predicts to have the best outcomes. The first indeed would not make any sense, since there is only one actual outcome, so it's impossible to compare alternative realized outcomes.
By El Gilroy
#13629258
The ClockworkRat wrote:If utilitarian ethics are used towards the wrong ends then they aren't utilitarian ethics; they're amoral acts being disguised as utilitarian.

"wrong" by your definition may not be "wrong" by my judgement.
By The Clockwork Rat
#13629265
Yes, and? I don't think many people can honestly deny that moral senses vary between people. Utilitarianism just provides a type of framework on which to work. Possible contradictions arising in utilitarian ethics can be resolved more easily and satisfactorily than those arising in the Liberal ethics framework, which is inherently contradictory.
By Obversity
#13629267
We have a bone to pick, Mr Newton.

Firstly, an appeal to consequences is NOT a fallacy in moral discussion; it's only a fallacy when applied to propositions with a truth value. An appeal to consequences usually takes the form of "this conclusion is preferable, thus this conclusion is true", for example "God must exist, or the world has no meaning; thus I conclude that god exists".

You're correct in that it's impossible to know the outcome of an action before it resolves, and this is a major problem in the application of Utilitarianism -- but as I've said before, it doesn't disprove the theory, it just makes it hard to practice. There's also a fairly obvious, if not perfect, solution to the problem: it is often possible to calculate the likely result of an action by looking at similar actions in similar scenarios. So for politics, we can study historical and current governments and systems, take the paradigms and policies that tended to work best, and then edit or improve them to fit the culture or the technological level.

Unless you assume happiness is moral you have to show happiness is moral


Happiness is intrinsically good, and sadness is intrinsically bad. They're facts about our psychology and our nature, and they're also analytic facts. No one will say that happiness is bad in itself, and few will say that happiness is not good. Similarly, no one will say that depression is good, or that depression is not bad. If someone were to say "I like being sad", natural language begs you to accuse them of a contradiction, or of misunderstanding the terms they're using. If they 'like' it, then they must think or feel that it's good in some way.

depression is a mental disorder wherein people are incapable of feeling happiness for just one example.


I'm not sure I agree with this, so I'd like a reference at the very least. I've been both depressed and suicidal, however I was still able to feel at least momentarily happy -- it was just a very rare feeling.

The only way death is bad is if it causes other people to be sad.


This is short sighted. Killing someone reduces any possible future utility. So yes, while death itself may have a utility value of 0, in the act of killing, you've probably done wrong by removing the potential for utility.

2) We live in a universe of scarcity. Since there exists only a limited amount of resources the opportunity cost of one individual using a resource affects everyone else in existence by denying them that resource. Therefore existence has an overall negative Utility.


I don't accept this. You make it sound as if resources are absolutely finite, which they are not. Since we're speaking in terms of the 'universe', I'll posit this: since there is a mass-energy equivalence, no resource is ever completely destroyed, and thus there's potential, with the right technology perhaps, to recover any 'used' resources.

Additionally, your conclusion, that existence has an overall negative utility, does not follow, or I'm not understanding your 'utility' evaluation. Let's create a scenario. We have 10 people in a small environment, with enough resources to last them till well after their own lives -- enough for them to have children, and for, let's say, two more generations. Now let's apply your logic. Since using these resources now will deny them to a future generation, this environment cannot possibly allow more happiness than unhappiness.

If this is an analogous application of your logic, then I don't understand it.

Also, do you believe that your ultimate conclusion, that we should destroy ourselves, is necessarily bad? If yes, then why? Is this an assumption? Or do you have reasons? And if you have reasons, are they at all consequential? -- and if so, could they not have been in the initial premises?

As for the patients in need of organs, there is not nearly enough context to make a decision. Are these the last people alive? If so, then sacrificing the one person might be best, though that completely depends upon the existing relationships between the people. If not, then it would not be justified unless the five people were of particular importance. Such a policy would have knock-on effects that can't be ignored: people would live in fear of being taken and having family and friends taken, and this would massively reduce the happiness of society. And if it was only for organ donors, then no one would sign up as an organ donor, and we would thus have a massive, unfulfilled market for them, which would cause many more people to die from not having organ replacements, and might even spawn criminal organisations and black markets centred around kidnapping and organ harvesting, further reducing happiness.
By El Gilroy
#13629270
The ClockworkRat wrote:Yes, and? I don't think many people can honestly argue that moral senses can vary between people. Utilitarianism just provides a type of framework on which to work. Possible contradictions arising in utilitarian ethics can be resolved more easily and satisfactorily than those arising in the Liberal ethics framework, which is inherently contradictory.


Let me explain my thought.

A society governed by utilitarian principles will be more efficient in pushing its agendas than one relying on solid rights. Depending on the agenda, a society's goals (peace, prosperity, wealth, growth, greed, power, ethnic purity, whatnot) may be more or less desirable. If a society is largely savage and malevolent, then utilitarian policies will likely cause more harm to minorities than if said society's options were limited by solid (I hate to say "inalienable") rights of those otherwise affected.

Put differently, ethics are a means to an end; and even if the end is simply called "happiness" - which most will agree is a good thing - then one majority's happiness can nonetheless be the corresponding minority's nightmare.

Is my point visible? :hmm:
By The Clockwork Rat
#13629285
Aye, though I believe that repressing minorities in that manner is psychologically damaging to a population, simply because of how we are. For example, to invoke Godwin, the Nazis had to keep their extermination and concentration camps secret from the majority of the population because it would demoralise them. Basically, when it comes to country-sized populations, one group's nightmare is another group's nightmare.
By ThomasNewton
#13629313
El Gilroy again brought up the relativistic fallacies of utilitarianism, which is another excellent point. Good job El Gilroy!

And it's great to see you back, Obversity. I caught your introduction in the Lobby and was hoping to get to talk to you again. It would be great to see a comment from you in the Lobby; I mostly want to make sure there is no ill will between us.

Obversity wrote:Firstly, an appeal to consequences is NOT a fallacy in moral discussion; it's only a fallacy when applied to propositions with a truth value. An appeal to consequences usually takes the form of "this conclusion is preferable, thus this conclusion is true", for example "God must exist, or the world has no meaning; thus I conclude that god exists".


Lucky, a poster before you, brought this up as well. However, I either disagree or use a different application of it. Do we both agree that under Utilitarianism the thought process concerning morality is "I think it is good to do X, therefore X is good"? I don't know how else to describe this without calling it begging the question or an appeal to consequences. I also can't even begin to describe the implications this has if it were actually applied. Are people held morally accountable for consequences they did not foresee (if a hammer you sold at a garage sale is later used to kill someone, are you guilty of murder)? Or, on the other hand, are people never accountable as long as they have good intentions ("I thought killing that baby would make the world better!")?

But again, like you said these are problems with application and may not pertain to whether it is true or not.

Obversity wrote:You're correct in that it's impossible to know the outcome of an action before it resolves, and this is a major problem in the application of Utilitarianism -- but as I've said before, it doesn't disprove the theory, it just makes it hard to practice.


Now this is a fallacy. Again, this may have no bearing on whether it is true or not, but if it is impossible to know the outcome of your actions, it does not make Utilitarianism "hard" to practice; it makes it impossible. You cannot say it is impossible to know the outcomes of an action but Utilitarianism is still possible to practice, because knowing the outcomes beforehand is essential to practicing Utilitarianism. Even if it is true that you can predict a likely result (which may be possible for a single outcome, but which I find impossible for all outcomes: once an action is made it begins a causal chain, and elementary chaos theory shows there's no possible way to predict results down a causal chain), then people can only try to practice Utilitarianism; they can never succeed.

Obversity wrote:Happiness is intrinsically good, and sadness is intrinsically bad. They're facts about our psychology and our nature, and they're also analytic facts. No one will say that happiness is bad in itself, and few will say that happiness is not good.


Stoicism says that happiness is bad. Or rather, the Stoics are opposed to "passion," which is one definition of happiness. How do you define happiness, Obversity? Assume that I have never been happy and explain it to me. Or rather, how do you show that your happiness and my happiness are the same thing? I'm sorry to hear you went through a period of depression; I hope you are better now. Even if it is not impossible to feel happiness when depressed, surely you agree that people suffering from depression feel happiness less than others (that is the nature of the problem)? Are they therefore less moral than other people as well?

Obversity wrote:We have 10 people in a small environment, with enough resources to last them till well after their own lives -- enough for them to have children, and for, let's say, two more generations. Now let's apply your logic. Since using these resources now will deny them to a future generation, this environment cannot possibly allow more happiness than unhappiness.


That is a great example, thank you Obversity! You also said earlier, "Killing someone reduces any possible future utility. So yes, while death itself may have a utility value of 0, in the act of killing, you've probably done wrong by removing the potential for utility." Assuming removing the potential for utility is synonymous with negative utility, the scarcity of the universe does exactly this. Everyone on that island is going to painfully starve to death, and unless you can get positive utility out of that, the better option is to kill everyone as quickly as possible to reach 0 (in real-world terms, the United States and Russia have enough VX nerve gas to kill every living thing on the planet, if a cult of people thinking this way got into the right positions).

Obversity wrote:As for the patients in need of organs, there is not nearly enough context to make a decision.


In real life there will never be enough context either, because again the consequences of actions remain unknown. Here's another example of a problem with Utilitarianism: arbitrary choice. You're at a vending machine; what do you buy? If you do not make the purchase with the most utility, you are morally wrong. Again, call it a problem with applicability instead of veracity, but this is an instance where someone would deserve to be punished for choosing pretzels instead of crackers.
By ThomasNewton
#13629460
:lol: Great to see a good comment from you, Paradigm. I do think the good news is that if one were to find a morally correct deontology I believe it logically follows that morally right outcomes would result. The problem with real life is finding a morally correct deontology and being able to act upon it. In instances of doubt about the morality of a situation I certainly think expected consequence should be considered but cannot act as the only standard by which morality is judged. St. Thomas Aquinas had a good theory about morally ambiguous situations (whether it was right to kill in self-defense) in his work.
By Obversity
#13629482
Ill will? -- Of course not! Those who I consider my closest friends are those who are willing to debate vigorously with me. I'm told I often sound angry when discussing philosophy. Perhaps this proclivity is evident in writing as well. If so, my apologies. More than anything, I'm enjoying myself, and this spat of ours.

About the appeal to consequences:

If you can somehow turn the statement "X action produces good consequences, thus X action is good" into "I think X is good to do, therefore X is good", then by all means do so. I'm having trouble, though.


About happiness:

Happiness is an emotion, and an ineffable one at that. I'm not sure it can be described, per se, without resorting to synonyms. Science may have something to say about this matter. For example, it might be able to describe brain states that are often present when people say that they're 'happy'.

Are they therefore less moral than other people as well?


This could be an equivocation over the term 'moral', depending on how you meant it. If I am sad, I am not immoral, and similarly if I am happy, I am not necessarily moral.

A moral person would be one who tends to act with the intention of promoting happiness, and avoiding or negating unhappiness. I'm not entirely decided over accountability under a utilitarian scheme. In fact, I'm not even sure if it's rational, under utilitarianism, to hold someone accountable in the traditional sense.

Under utilitarianism, you would lock up criminals for three reasons (that I can think of): to deter others from such consequentially bad behaviour; to stop them personally from doing it again during their time; and to rehabilitate them, if possible.

As for your example of the hammer, you're thinking in terms of accountability, when you need to think in terms of consequences. There's no consequential reason to punish you for selling them the hammer.

The baby example is interesting. Did Hitler actually think he was doing the right thing? Did Lenin and Stalin? But the crux, once again, is the terminology. It matters not whether the mother -- or Hitler -- is guilty, or whether they actually did the crime; what matters is whether punishing them will produce better consequences than not punishing them.


About your chaos theory point:

I agree that it's impossible to know precisely the consequences of your actions. However we can still practice a poor-man's utilitarianism, by doing our best to predict, and giving our causal reasoning, with historical examples to back our case.

Your argument that it's "impossible" to practice is dubious. You wouldn't say that it's impossible to live life at all, would you? But that's what you do every day of your life: you predict the consequences of your actions, then you act. Utilitarianism may be impossible to practice perfectly, but I fail to see what's wrong with that.


Scarcity and negative utility:

The idea of 'potential utility' being equal to 'utility' is questionable. It's not really the same thing, or equivalent, but I'm not sure how to grade it.

I still think you're missing something in the example, though. Let's say that the ten people on the island are happy 80% of the time, and unhappy 20% of the time, each. This gives them a utility ratio of 1:4 negative to positive, rather than 1:1, that is, your '0 utility' score. Let's say their children are the same, as are their grandchildren, and any future generations that can survive on the resources as well. Now, every generation born on the island has grown up knowing that there's limited resources, and that at some date, they will die out. This makes them accustomed to the idea that they and their friends might eventually die before their time. So instead of that, they decide on euthanasia -- that when the last of their supplies runs out, they'll spend a night celebrating, then all kill themselves.

Where's the negative utility in this scenario? As long as more people are happy, then the utility value will not be negative. There's nothing 'necessary' about the negative utility when there's finite resources; it's up to the psychologies of the people who exist. Happiness isn't inseparably linked to resources, nor does it depend on them.
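The ratio claim above can be checked with simple arithmetic. The 80/20 split comes from the post itself; the +1/-1 scoring, the ten islanders, and the three-generation horizon are illustrative assumptions for the sketch.

```python
# Each islander is happy 80% of the time and unhappy 20% (numbers from the post).
happy_fraction, unhappy_fraction = 0.8, 0.2

# The negative-to-positive ratio cited in the post: 0.2/0.8 = 1:4, not 1:1.
ratio = unhappy_fraction / happy_fraction  # 0.25, i.e. 1:4

# Scoring happiness +1 and unhappiness -1 (an assumption), net utility per
# person is positive, so the island's total stays positive as well.
net_per_person = happy_fraction - unhappy_fraction
total = net_per_person * 10 * 3  # ten islanders, three generations (assumed)
```

Whether the 'potential utility' of unborn generations should also enter the sum is exactly the point under dispute; the sketch only shows that finite resources do not by themselves force the total negative.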


About the patients:

In real life, there is always more context than you've given me. Your final example is amusing though, and quite relevant to an earlier point.

Yes, you would be morally wrong for buying the pretzel if, for example, you know that you don't like pretzels, and that this pretzel will actually make you sad. However only by deontological standards does someone inherently deserve punishment for doing something morally wrong.


Maybe one day we'll create artificially intelligent mind-reading vending machines with an interest in punitive justice and moral theory, so we can ask them their opinion on the matter.
By grassroots1
#13629485
The problem is in absolutely accepting either ethical system. Morality can only be determined based on particular circumstances; any attempt to create a universal rule will inevitably encounter problems.
By El Gilroy
#13629611
I just want to clarify: I am in no way opposed to utilitarian methods; but I distrust contemporary societies and very much doubt that good would come of trusting politicians, judges, officials or just plain people to use utilitarianism towards desirable ends.
By Eran
#13629684
Unless I missed it, nobody seems to have mentioned one of the biggest problems with Utilitarianism - the impossibility of inter-personal comparisons of happiness.

Utilitarianism, aimed at some variation of "maximizing happiness", implicitly assumes that happiness can, at least in principle, be summed up over multiple people. Worse yet, that an increase in the happiness of one person can be compared with a decrease in the happiness of another person.

This is not just difficult. It is totally impossible, and with it the entire Utilitarian project.
