Originally Posted by Ele
So no, I don't think utilitarianism could ever be considered immoral. Maximising well-being/minimising suffering is a great ethical position to hold.
Say there is a disease that is highly contagious (90% of the populace would be infected in an outbreak), extremely painful to those who contract it, has a high mortality rate, severely cripples any survivors, and has no cure or vaccine. You've successfully isolated it: there are no known outbreaks, and it does not exist naturally, but it could easily arise through the evolution of an existing natural disease.
If this disease were to evolve into natural existence, you can be reasonably certain it would devastate 90% of the world's population. The only way to develop a cure is to study the disease's effects on the bodies of infected individuals. However, nobody is willing to volunteer for the testing required, and animal experimentation cannot produce a cure for humans. Once an outbreak begins, it is reasonable to assume that most of the medical and scientific community would itself be crippled, limiting any effective response, so waiting until then to develop a cure or vaccine is risky.
Would it be morally right to infect 1,000 unwilling people with this terrible disease in order to develop a cure? What about 10,000? 100,000? What if these "volunteers" are prisoners? What if the disease can only be contracted in childhood but only manifests in adulthood, so you'd need to infect children? What if the disease has only a low chance of evolving naturally in the first place?
Utilitarianism could be a dangerous approach to this type of problem. Is it morally acceptable to sacrifice a few for the good of the many? What if the few number only one fewer than the many? What if the few are unwilling? What if the few are particularly vulnerable members of society? What if the good of the many is a theoretical possibility that may never arise? Utilitarianism doesn't care if you kill 1,000 to save 1,001, because 1,000 < 1,001. But what if it's not certain that you'll save the 1,001, or not certain that you couldn't have saved all 2,001? What is the acceptable threshold of sacrifice?
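To put numbers on that threshold question, here is a rough sketch (the success probability is invented purely for illustration) of how the "kill 1,000 to save 1,001" trade looks once the rescue is uncertain:

```python
# Toy expected-value version of "kill 1,000 to save 1,001".
# The success probability is made up; the point is the break-even threshold.

killed = 1_000        # people sacrificed with certainty
saved = 1_001         # people saved *if* the plan works
p_success = 0.99      # assumed chance the plan actually works

expected_lives_saved = p_success * saved

print(f"Certain deaths:         {killed:,}")
print(f"Expected lives saved:   {expected_lives_saved:,.1f}")

# A strict utilitarian endorses the sacrifice only while the expectation
# exceeds the certain cost; the break-even success probability is
# 1,000 / 1,001, i.e. roughly 99.9%.
break_even = killed / saved
print(f"Break-even probability: {break_even:.4f}")
print("Verdict:", "sacrifice the 1,000" if expected_lives_saved > killed else "don't")
```

With a margin of only one life, even a 1% chance of failure flips the verdict, which is exactly why the "acceptable threshold" question has no tidy answer.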
Furthermore, who benefits from this added happiness? Let's say there are two people: one who, on a scale of 100, has a happiness of 10, and another who has a happiness of 89. Under utilitarianism, taking the happiness of the person at 10 and reducing it to 0 in order to raise the happiness of the person at 89 to 100 is morally right, because the total rises from 99 to 100. But what if we attach these numbers to people and actions? Say 10 is a homeless, miserable person soon to die of natural causes, while 89 is an affluent, successful sadist. If 89 kills 10, bringing 10's happiness to 0 but letting 89 reach the pinnacle of happiness, is this morally acceptable?
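For what it's worth, the arithmetic behind that example is trivial to write out (a quick sketch that treats the two happiness scores as the only inputs):

```python
# The two-person happiness example, in numbers.
# "Total utility" here is simply the sum of the two happiness scores (0-100 scale).

before = {"victim": 10, "sadist": 89}
after  = {"victim": 0,  "sadist": 100}   # the sadist kills the victim

total_before = sum(before.values())   # 10 + 89 = 99
total_after  = sum(after.values())    #  0 + 100 = 100

print(f"Total happiness before: {total_before}")   # 99
print(f"Total happiness after:  {total_after}")    # 100

# Because 100 > 99, a calculus that only sums happiness scores the killing
# as an improvement, which is exactly the objection being raised here.
```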
I know there are variations of utilitarianism that would call the last example immoral, but that's exactly why it's dangerous to assume, as a blanket rule, that utilitarianism is morally right. There are some cases, however rare or implausible, where utilitarianism could condone arguably immoral actions.