Utilitarianism - Moral or immoral?
At what point does utilitarian philosophy become "moral" or "immoral"?

Imagine there's a kid playing on some train tracks. The train's racing towards him at full speed, you're a meter away from the kid, and you have only three seconds to act. The reasonable thing to do seems to be to push the kid out of the path of the train in order to save his life. Now, the push, in itself, is an immoral thing to do, as it qualifies as assault, but the kid's alive because of the push. Most of us would agree that the action is justified by the end result in this instance.

What about a subject like taxation? From an anarcho-capitalist's point of view, taxation is immoral, as it takes the product of one's labour without the worker's consent. From a socialist's point of view, however, taxation is moral, because the wealth is redistributed to those on lower incomes.

With the train scenario, there doesn't appear to be an alternative to pushing the child out of the way. Perhaps you could take the child's hand and happily skip across the train tracks together, but with only three seconds to act, you're not going to take that risk.

Is the taxation scenario comparable to the train scenario? Well, no. There are many alternatives to taxation that would avoid infringing one's right to liberty, so does that make utilitarianism immoral in this instance? I would say so.

So it appears that utilitarianism's moral basis lies in whether there are alternatives in a given scenario, and it is this that ultimately determines the morality of the action.

But let's go further... let's say you're stranded on a desert island, and the only things to eat are rabbits. Now you have no alternatives: you know that killing and eating the rabbits is the only way you're going to survive. We can agree that it's probably right to eat the rabbits, but is the act in itself moral? No, because you're ending the life of an animal that wants to live. Yet you have no alternative if your goal is survival, so is utilitarianism moral in this instance?

Discuss.
Originally Posted by Mallymkun View Post
At what point does utilitarian philosophy become "moral" or "immoral"?

Imagine there's a kid playing on some train tracks. The train's racing towards him at full speed, you're a meter away from the kid, and you have only three seconds to act. The reasonable thing to do seems to be to push the kid out of the path of the train in order to save his life. Now, the push, in itself, is an immoral thing to do, as it qualifies as assault, but the kid's alive because of the push. Most of us would agree that the action is justified by the end result in this instance.

What about a subject like taxation? From an anarcho-capitalist's point of view, taxation is immoral, as it takes the product of one's labour without the worker's consent. From a socialist's point of view, however, taxation is moral, because the wealth is redistributed to those on lower incomes.

With the train scenario, there doesn't appear to be an alternative to pushing the child out of the way. Perhaps you could take the child's hand and happily skip across the train tracks together, but with only three seconds to act, you're not going to take that risk.

Is the taxation scenario comparable to the train scenario? Well, no. There are many alternatives to taxation that would avoid infringing one's right to liberty, so does that make utilitarianism immoral in this instance? I would say so.

So it appears that utilitarianism's moral basis lies in whether there are alternatives in a given scenario, and it is this that ultimately determines the morality of the action.

I think you've used the wrong examples to prove your hypothesis. Your taxation example highlights moral relativism, not utilitarianism.

Obviously, though, we can examine taxation through the lens of utilitarianism. It's worth defining the term at this point: utilitarianism is a prescriptive ethical theory which states that we should take the actions that maximise well-being for as many sentient beings as possible.

Not paying your taxes is an inherently selfish thing to do. It benefits only yourself. Paying your taxes benefits society. There's much more total utility in paying your taxes than there is in not paying your taxes.

Originally Posted by Mallymkun View Post
But let's go further... let's say you're stranded on a desert island, and the only things to eat are rabbits. Now you have no alternatives: you know that killing and eating the rabbits is the only way you're going to survive. We can agree that it's probably right to eat the rabbits, but is the act in itself moral? No, because you're ending the life of an animal that wants to live. Yet you have no alternative if your goal is survival, so is utilitarianism moral in this instance?

In this example, it depends on how much one values the sentience of rabbits (we don't know what their sentience is actually worth, so we have to make a value judgement). Utilitarianism states that we ought to maximise the utility of all sentient beings, so the question really is 'just how sentient are rabbits?'. Consciousness is a spectrum, with humans (as far as we know) at the top and things like plants at the bottom. Where do rabbits fall on that scale, and how does that rank against the life of a fully conscious human? We currently don't have the science to figure this out.

Another thing to consider is that if you don't eat the rabbits, you'll die. Your death would impact your family and your friends. Their well-being would suffer as a result of your death. Does their emotional suffering (which could manifest physically in dark ways) outweigh the lives of the few rabbits you ate?
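To make that trade-off concrete, here's a toy version of the ledger. Every number in it is an invented assumption, not a measured fact; the point is that the verdict hangs entirely on the moral weight you assign to a rabbit.

```python
# Toy utilitarian ledger for the desert-island case. All weights are
# invented assumptions; the argument turns on RABBIT_WEIGHT, which
# nobody actually knows how to measure.

HUMAN_WEIGHT = 1.0    # one human life as the unit of moral weight
RABBIT_WEIGHT = 0.05  # assumed: a rabbit counts for 5% of a human
GRIEVERS = 6          # family and friends affected by your death
GRIEF_COST = 0.3      # assumed utility each griever loses to grief

rabbits_eaten = 20

# Utility lost by eating: the rabbits' lives.
cost_of_eating = rabbits_eaten * RABBIT_WEIGHT           # 1.0

# Utility lost by starving: your life plus your loved ones' suffering.
cost_of_starving = HUMAN_WEIGHT + GRIEVERS * GRIEF_COST  # 2.8

print("eat" if cost_of_eating < cost_of_starving else "starve")  # eat
```

Push RABBIT_WEIGHT above 0.14 and the same arithmetic says starve, which is exactly why the whole thing reduces to 'just how sentient are rabbits?'.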

I'd come down on the side of the argument that says it's ethically okay under utilitarianism to eat some rabbits on a desert island to survive.
Originally Posted by Mallymkun View Post
At what point does utilitarian philosophy become "moral" or "immoral"?

All utilitarianism does is have people aim to maximise well-being for as many beings as possible. I don't think that this prescription could ever be immoral, unless it were corrupted. For example, animal activists could claim that the lives of ants are just as important as the lives of humans. We intuitively know this not to be true: nobody is having an emotional breakdown over stepping on an ant, but they might after hitting some kid with their car. The lives of humans are more important; we're worth more because of our level of consciousness (our capacity to feel and think). People can corrupt utilitarianism by ignoring this. But again, that'd be a corrupted form of utilitarianism, not the real thing.

So no, I don't think utilitarianism could ever be considered immoral. Maximising well-being/minimising suffering is a great ethical position to hold.
Originally Posted by Ele View Post
I think you've used the wrong examples to prove your hypothesis. Your taxation example highlights moral relativism, not utilitarianism.

You're right, I should have mentioned consequentialism, the broader family that utilitarianism falls under: the idea that "the end justifies the means", which is what I'm really focused on. Utilitarianism goes further and suggests that the consequence defines the morality of the action. For the sake of discussion, I'll just say that I'm really drawn to deontological ethics and moral absolutism.

Originally Posted by Ele View Post
Not paying your taxes is an inherently selfish thing to do. It benefits only yourself. Paying your taxes benefits society. There's much more total utility in paying your taxes than there is in not paying your taxes.

Yes, that's textbook utilitarianism and what I was looking for. You believe that the maximisation of utility justifies pointing a gun at someone and demanding their help, "or else". What I'm interested in is whether taxation would still be moral if there were no alternatives to it. But first, we need to make it clear whether or not you believe that taking someone's money without their consent is a moral thing to do; then we can carry on with this conversation. I'm expecting you to say that it isn't, but that it's justified by the maximisation of utility, making it, overall, a moral thing. Feel free to correct me if I'm wrong, though.

Originally Posted by Ele View Post
In this example, it depends on how much one values the sentience of rabbits (we don't know what their sentience is actually worth, so we have to make a value judgement). Utilitarianism states that we ought to maximise the utility of all sentient beings, so the question really is 'just how sentient are rabbits?'. Consciousness is a spectrum, with humans (as far as we know) at the top and things like plants at the bottom. Where do rabbits fall on that scale, and how does that rank against the life of a fully conscious human? We currently don't have the science to figure this out.

That's the thing: one has to assume that we all value rabbits at around the same level. In order to do that, we would have to take Schopenhauer's belief that compassion is the basis of morality and apply it to the hypothesis. When we do that, we end up with a creature we value more highly than plant life, bugs and insects, but lower than humans. Therefore, does the intent (survival) justify the killing of the rabbit? It's an immoral thing to do, but if we take your point about our friends and family being affected by our death, then we're maximising utility by eating the rabbit.

Originally Posted by Ele View Post
I'd come down on the side of the argument that says it's ethically okay under utilitarianism to eat some rabbits on a desert island to survive.

I would say it's unethical on the basis of the action.
Originally Posted by Mallymkun View Post
Yes, that's textbook utilitarianism and what I was looking for. You believe that the maximisation of utility justifies pointing a gun at someone and demanding their help, "or else".

I wouldn't put it like that; people would get the wrong idea.

I would say that torture can be justified under utilitarianism though (e.g. imagine a terrorist has time-sensitive information that could save numerous lives and is refusing to reveal that information).

Originally Posted by Mallymkun View Post
What I'm interested in is whether taxation would still be moral if there were no alternatives to it. But first, we need to make it clear whether or not you believe that taking someone's money without their consent is a moral thing to do; then we can carry on with this conversation. I'm expecting you to say that it isn't, but that it's justified by the maximisation of utility, making it, overall, a moral thing. Feel free to correct me if I'm wrong, though.

My answer remains the same: taxation is moral, regardless of alternatives. We pay tax because governments run social services and build infrastructure that benefits their citizens. In other words, we pay tax to ensure the perpetuation of the 'social safety net' (maximum utility).

I tend to be a utilitarian on most things, so I've no issue with taxation (other than when governments tax things to influence behaviour, like cigarettes in Australia). The uncorrupted purpose of tax is to raise revenue, not to alter the behaviour of citizens.

Originally Posted by Mallymkun View Post
I would say it's unethical on the basis of the action.

What do you mean by that? Eating rabbits is an inherently immoral act?

The wanton killing/torture of an animal is unethical, sure. For the purpose of survival, though?

I think it's interesting to note that utilitarianism is sometimes used to justify vegetarianism/veganism. You've probably heard of Peter Singer. Think of all the factory farms and all the millions upon millions of animals suffering shit conditions all their lives before they're finally slaughtered. That's a whole lot of suffering, yet because it's non-human and because it's necessary to keep our supermarkets stocked with beef and chicken, we turn a blind eye to it.

Now, I'm not a vegetarian or vegan because I like my chicken nuggets, but once cloned meat comes along (thereby killing off factory farms), I'll certainly make the switch.
Originally Posted by Ele View Post
So no, I don't think utilitarianism could ever be considered immoral. Maximising well-being/minimising suffering is a great ethical position to hold.

Say there is a disease that is highly contagious (90% of the populace would be infected in an outbreak), extremely painful to those who contract it, has a high mortality rate, and severely cripples any survivors; there is no cure or vaccine. You've successfully isolated it, there are no known outbreaks, and it does not exist naturally, but it could easily arise through the evolution of a current natural disease.

If this disease were to evolve into natural existence, you can be reasonably certain that 90% of the world's population would be devastated by it. The only way to develop a cure is to infect individuals so that the disease's effects on the body can be properly examined. However, nobody is willing to volunteer for the testing required, and animal experimentation is not capable of producing a cure for humans. In an outbreak, it is reasonable to assume that much of the medical and scientific community would itself be crippled, limiting effective responses, so waiting until then to develop a cure or vaccine is risky.

Would it be morally right to infect 1,000 people unwillingly with this terrible disease to develop a cure? What about 10,000? 100,000? What if these "volunteers" are prisoners? What if it can only infect children and only manifests in adults, so you'd need to infect children? What if this disease only has a low chance of evolving naturally?


Utilitarianism could be a dangerous approach to this type of problem. Is it morally acceptable to sacrifice a few for the good of the many? What if the few is only one less than the many? What if those few are unwilling? What if those few are particularly vulnerable in society? What if the good of the many is a theoretical possibility that may never arise? Utilitarianism doesn't care if you kill 1,000 to save 1,001, because 1,000 < 1,001. But what if it's not certain that you'll save the 1,001, or what if it turns out you could have saved all 2,001? What is the acceptable threshold of sacrifice?

Furthermore, who benefits from this added happiness? Let's say there are two people: one who has, on a scale of 100, a happiness of 10, and another who has a happiness of 89. Under utilitarianism, reducing the happiness of the person at 10 to 0 in order to raise the happiness of the person at 89 to 100 is morally right. But what if we attach stories to these numbers? Say 10 is a homeless, miserable man soon to die of natural causes, while 89 is an affluent, successful sadist. If 89 kills 10, bringing 10's happiness to 0 but letting 89 reach the pinnacle of happiness, is this morally acceptable?
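To spell out the arithmetic behind my objection, here's a minimal sketch using the numbers above. It shows that a bare "maximise the total" rule can't see how the total is distributed, or how it was reached.

```python
# Toy illustration of the aggregation problem: a raw total is blind
# to how happiness is distributed and to how the new total was reached.

before = {"homeless man": 10, "sadist": 89}
after  = {"homeless man": 0,  "sadist": 100}  # the sadist kills him

print(sum(before.values()))  # 99
print(sum(after.values()))   # 100

# A pure "maximise the total" rule prefers `after`, since 100 > 99,
# even though the extra utility was produced by a murder.
```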

I know there are variations of utilitarianism that would say the last example is immoral, but that's why it's dangerous to assume, as a blanket rule, that utilitarianism is morally right. There are some cases, however rare or implausible, where utilitarianism could condone arguably immoral actions.
Originally Posted by Oracle View Post
Would it be morally right to infect 1,000 people unwillingly with this terrible disease to develop a cure? What about 10,000? 100,000? What if these "volunteers" are prisoners? What if it can only infect children and only manifests in adults, so you'd need to infect children? What if this disease only has a low chance of evolving naturally?

Yes to everything. Utilitarianism was developed to answer questions like these.
Originally Posted by Oracle View Post
There are some cases, however rare or implausible, where utilitarianism could condone arguably immoral actions.

Let's flip this. By not infecting those 1,000 people (whatever the number) and not pursuing a cure, you're effectively dooming 90% of the planet (billions upon billions) through your inaction. You could argue that inaction doesn't hold the same 'moral weight' that action does, but I reckon that's a bunch of baloney (think about how you'd feel if you did nothing and some kid died as a result of something you could've prevented).

Dooming billions upon billions of people to die vs. dooming 1,000. Which action is ultimately more moral? They certainly don't carry equal moral weight.

Don't come back at me with 'Well, what if we had to infect 3 billion people instead of 1,000', because that's a little nuts. Come up with a realistic example if you're going to go down that road. If you can't think of a realistic example, then that should indicate that these 'weeds' in utilitarianism may apply in theory but not in practice.
Originally Posted by Ele View Post
Yes to everything. Utilitarianism was developed to answer questions like these.

Let's flip this. By not infecting those 1,000 people (whatever the number) and not pursuing a cure, you're effectively dooming 90% of the planet (billions upon billions) through your inaction. You could argue that inaction doesn't hold the same 'moral weight' that action does, but I reckon that's a bunch of baloney (think about how you'd feel if you did nothing and some kid died as a result of something you could've prevented).

Dooming billions upon billions of people to die vs. dooming 1,000. Which action is ultimately more moral? They certainly don't carry equal moral weight.

Don't come back at me with 'Well, what if we had to infect 3 billion people instead of 1,000', because that's a little nuts. Come up with a realistic example if you're going to go down that road. If you can't think of a realistic example, then that should indicate that these 'weeds' in utilitarianism may apply in theory but not in practice.

That's why I include uncertainty. If the outcome is billions possibly dying, is it still right to sacrifice a thousand? What if that possibility never comes around? Is the action suddenly immoral because the sacrifice was for naught? Having sacrificed 1,000 people for no measurable benefit to the billions, you've now performed an entirely immoral act under utilitarianism.

It's at this uncertainty that utilitarianism falters. No matter how you slice it, if you plan for a possibility and make a sacrifice, and that possibility never arises, it's near impossible to defend the morality of the sacrifice, because you've sacrificed happiness and received less in return.
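The split can be made precise: it's ex ante expected utility versus ex post realised utility. Here's a toy calculation, with every figure an invented assumption chosen only to show the shape of the problem.

```python
# Expected-utility framing of the sacrifice-under-uncertainty problem.
# All figures below are invented assumptions.

P_OUTBREAK = 0.10      # assumed chance the disease ever evolves
WORLD = 8_000_000_000  # assumed world population
MORTALITY = 0.90       # fraction devastated in an uncured outbreak
SACRIFICED = 1_000     # people infected now to develop the cure

expected_deaths_wait = P_OUTBREAK * WORLD * MORTALITY  # 720,000,000
expected_deaths_cure = SACRIFICED                      #       1,000

print(expected_deaths_cure < expected_deaths_wait)     # True: infect

# Ex ante, the sacrifice wins for any P_OUTBREAK above about
# 1,000 / (8e9 * 0.9), roughly one in seven million. Ex post, if the
# outbreak never comes, the realised ledger reads 1,000 lives lost
# for zero realised benefit, which is my point.
```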
Originally Posted by Oracle View Post
That's why I include uncertainty.

If the outcome is billions possibly dying, is it still right to sacrifice a thousand? What if that possibility never comes around? Is the action suddenly immoral because the sacrifice was for naught? Having sacrificed 1,000 people for no measurable benefit to the billions, you've now performed an entirely immoral act under utilitarianism.

Sure. In this extreme fantasy example of your own making, utilitarianism messes up.

But does it, actually? See below.
Originally Posted by Oracle
No matter how you slice it, if you plan for a possibility and make a sacrifice, and that possibility never arises, it's near impossible to defend the morality of the sacrifice, because you've sacrificed happiness and received less in return.

How are we meant to know that an outbreak would never happen? That's impossible for anyone to know. There would still, therefore, be great value in possessing the cure and in having infected those people. The utility might not be immediate, but I'm certain the world would sleep a whole lot better knowing they have a cure for that psycho-crazy disease that would otherwise torture and slaughter 90% of them.
Originally Posted by Ele View Post
Sure. In this extreme fantasy example of your own making, utilitarianism messes up.

But does it, actually? See below.

How are we meant to know that an outbreak would never happen? That's impossible for anyone to know. There would still, therefore, be great value in possessing the cure and in having infected those people. The utility might not be immediate, but I'm certain the world would sleep a whole lot better knowing they have a cure for that psycho-crazy disease that would otherwise torture and slaughter 90% of them.

Alright, let's advance this case. What if the disease's existence is not known to the general public? They would sleep no worse for the lack of a cure, because they don't know the disease exists. In this situation, is it still moral to go through with the development of the cure?

Let's also posit another scenario while we're at it. Say there's a train track that splits in two directions, and you're at the switch. Two twins, identical in genetics, upbringing and social standing, are standing one on each track. A train is approaching. Killing either twin results in the same net change in happiness. Is there a moral choice to be made? Is the choice inherently immoral because neither option is good? Are you forced to make an immoral choice because every choice is the same net loss of happiness?

I understand that utilitarianism is well-intentioned, but there are circumstances where the decision that makes everyone happy is not necessarily the best decision, and circumstances where the best decision changes depending on how far forward you look.

For example, there's a whole other level of utilitarianism that tries to factor in the impact choices will have on future generations' happiness, but how far ahead can you honestly predict? Fossil fuels make a lot of modern society easier to deal with, but overuse of them will inevitably cause problems for future generations. Under utilitarianism, whose happiness is worth more? The people who exist now, and would greatly appreciate the easy energy source, or the people of tomorrow, who have yet to exist but may suffer from the effects of carbon emissions? Do rights to happiness extend to future people? Do rights to happiness no longer apply to people who have died? Should wills hold any value, if the dead no longer have any right to happiness?
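The standard way to formalise that trade-off is to discount future utility, and the discount rate itself is the value judgement nobody can measure. A toy sketch with invented numbers:

```python
# Toy discounted-utility comparison for the fossil-fuel question.
# Every figure is an invented assumption; the verdict hinges on the
# discount rate, which is exactly the prediction problem above.

DISCOUNT = 0.97    # each future generation counts 3% less
BENEFIT_NOW = 100  # utility of cheap energy for the present generation
HARM_PER_GEN = 10  # utility each future generation loses to emissions
GENERATIONS = 40   # how far forward we bother to look

discounted_harm = sum(HARM_PER_GEN * DISCOUNT**g
                      for g in range(1, GENERATIONS + 1))

print(round(discounted_harm, 1))      # 227.7
print(discounted_harm > BENEFIT_NOW)  # True: the future outweighs today

# Drop DISCOUNT to 0.5 and the same arithmetic says burn away
# (total future harm falls to about 10 < 100). Neither rate is
# measurably the "right" one.
```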
Originally Posted by Oracle View Post
Alright, let's advance this case. What if the disease's existence is not known to the general public? They would sleep no worse for the lack of a cure, because they don't know the disease exists. In this situation, is it still moral to go through with the development of the cure?

Yes. Whether the public is aware or not, at that stage it would be prudent to go through with development: the people in the know are still 'reasonably certain' of the plague's imminent arrival.

Even after the expected date of the plague's arrival passes and nothing happens, nobody has any way of knowing that the plague will never arrive (this was my main point).

Originally Posted by Oracle View Post
Let's also posit another scenario while we're at it. Say there's a train track that splits in two directions, and you're at the switch. Two twins, identical in genetics, upbringing and social standing, are standing one on each track. A train is approaching. Killing either twin results in the same net change in happiness. Is there a moral choice to be made? Is the choice inherently immoral because neither option is good? Are you forced to make an immoral choice because every choice is the same net loss of happiness?

There is no moral choice to be made. The consequences are the same regardless of whether you choose action or inaction. There is no immoral choice either (other than pulling down your pants and jacking off).

Originally Posted by Oracle View Post
For example, there's a whole other level of utilitarianism that tries to factor in the impact choices will have on future generations' happiness, but how far ahead can you honestly predict?

I don't claim to support that particular concept within utilitarianism.