At what point does utilitarian philosophy become "moral" or "immoral"?
Imagine there's a kid playing on some train tracks. The train is racing towards him at full speed, you're a metre away from the kid, and you have only 3 seconds to act. The reasonable thing to do seems to be to push the kid out of the train's path in order to save his life. Now, the push, in itself, is an immoral thing to do, as it qualifies as assault, but the kid's alive because of the push. Most of us would agree that the action is justified by the end result in this instance.
What about a subject like taxation? From an anarcho capitalist's point of view, taxation would be immoral as it is the taking of one's labour without the consent of the worker. However, from a socialist's point of view, taxation is moral because the wealth is redistributed to those with a lower income.
With the train scenario, there doesn't appear to be an alternative to pushing the child out of the way. Perhaps you could hold the child's hand and happily skip across the train tracks together, but with only 3 seconds to spare you're not going to take that risk.
Is the taxation scenario comparable to the train scenario? Well, no. There are many alternatives to taxation that would prevent the infringement of one's right to liberty, so does that make the utilitarian choice immoral in this instance? I would say so.
So it appears that the morality of a utilitarian action hinges on whether there are alternatives in the given scenario.
But let's go further... let's say you're stranded on a desert island, and the only things to eat are rabbits. Now, you have no alternatives, and you know that killing and eating the rabbits is the only way you're going to survive. We can agree that it's probably right to eat the rabbits, but is the act in itself moral? No, because you're ending the life of an animal that wants to live, but you have no alternative if your goal is survival, so is utilitarianism moral in this instance?
At what point does utilitarian philosophy become "moral" or "immoral"?
I think you've used the wrong examples to prove your hypothesis. In your taxation example, you're highlighting the process of moral relativity, not utilitarianism.
Not paying your taxes is an inherently selfish thing to do. It benefits only yourself. Paying your taxes benefits society. There's much more total utility in paying your taxes than there is in not paying your taxes.
In this example, it depends on how much one values the sentience of rabbits (we don't know what their sentience is actually worth, so we have to make a value judgement). Utilitarianism states that we ought to be maximising the utility of all sentient beings, so the question really is 'just how sentient are rabbits?'. Consciousness is a spectrum, with humans (as far as we know) at the top and things like plants at the bottom. Where do rabbits place on that scale, and how does that rank against the life of a fully conscious human? We currently don't have the science to figure this out.
I'd come down on the side of the argument that says it's ethically okay under utilitarianism to eat some rabbits on a stranded island to survive.
Yes, that's textbook utilitarianism and what I was looking for. You believe that the maximisation of utility justifies pointing a gun at someone and demanding their help - "or else".
What I'm interested in is the idea that if there were no alternatives to taxation, then would taxation be moral? But first, we need to make it clear whether or not you believe that taking someone's money (without their consent) is a moral thing to do, then we can carry on with this conversation. I'm expecting you to say that it isn't, but it's justified by the maximisation of utility, making it, overall, a moral thing. Feel free to correct me if I'm wrong though.
So no, I don't think utilitarianism could ever be considered immoral. Maximising well-being/minimising suffering is a great ethical position to hold.
Would it be morally right to infect 1,000 people unwillingly with this terrible disease to develop a cure? What about 10,000? 100,000? What if these "volunteers" are prisoners? What if it can only infect children and only manifests in adults, so you'd need to infect children? What if this disease only has a low chance of evolving naturally?
There are some cases, however rare or implausible, where utilitarianism could condone arguably immoral actions.
Yes to everything. Utilitarianism was developed to answer questions like these.
Let's flip this. By not infecting those 1000 people (whatever the number) and not pursuing a cure, you're effectively dooming 90% of the planet (billions upon billions) by your inaction. You could make an argument that inaction doesn't hold the same 'moral weight' that action does, but I reckon that's a bunch of baloney (think about how you'd feel if you did nothing and some kid died as a result of something you could've prevented).
Dooming billions upon billions of people to die vs. dooming 1000. Which action is ultimately more moral? They certainly don't carry equal moral weight.
Don't come back at me with 'Well what if we had to infect 3 billion people instead of 1000', because that's a little nuts. Come up with a realistic example if you're going to go down that road. If you can't think of a realistic example, then that should indicate that these 'weeds' in utilitarianism may apply in theory but not in practice.
That's why I include uncertainty.
If the outcome is a possible billions dying, is it still right to sacrifice a thousand? What if that possibility never comes around? Is the action suddenly immoral because the sacrifice was for naught? Since you've now sacrificed 1,000 people for no measurable benefit to the billions, you've just performed an entirely immoral act under utilitarianism.
No matter how you slice it, if you plan for a possibility and make a sacrifice, and that possibility never arises, it's near impossible to defend the morality of the sacrifice, because you've sacrificed happiness and received less in return.
Sure. In this extreme fantasy example of your making, utilitarianism messes up.
But does it, actually..? See below.
How are we meant to know that an outbreak would never happen? That's an impossibility for anyone to know. There would still, therefore, be great value in possessing the cure and having infected those people. The utility might not be immediate, but I'm certain the world would sleep a whole lot better with the knowledge that they have a cure to that psycho-crazy disease that would torture and slaughter 90% of them.
Alright, let's advance this case. What if this disease's existence is not known to the general public? They would sleep no worse knowing the cure doesn't exist because they do not know that the disease exists. In this situation, is it still moral to go through with the development of the cure?
Let's also posit another scenario while we're at it. Say there's a train track that splits in two directions, and you're at the helm of the track switch. You see two twins, identical in genetics, upbringing, and social standing, standing separately on each of the tracks. A train is approaching. Killing either twin results in the same net change in happiness. Is there a moral choice to be made? Is the choice inherently immoral because neither option is good? Are you forced to make an immoral choice because every option results in the same net loss of happiness?
For example, there's a complete other level of utilitarianism that tries to factor in the impact choices will have on future generations' happiness, but how far can you honestly predict it?