More Than Trolley Problems
In recent months, I’ve read a few pieces concerning utilitarianism, effective altruism, consequentialism, and related ideas. These pieces always seem to misunderstand key ideas, so I wanted to offer clarity.
We can divide moral philosophy into three fields: applied, normative, and metaethics. Applied moral philosophy analyzes specific ethical issues like abortion and torture, while the normative field constructs generalized ethical frameworks that one could apply to a wide range of issues. Metaethics abstracts further and discusses the nature of morality itself.
For some quick background, the most discussed normative frameworks are deontology and consequentialism. The former involves following a set of rules, usually related to personal duties. Consequentialism, on the other hand, categorizes good and evil based on the results of an action. As a subset of consequentialism, utilitarianism specifies the right action to be the one that promotes the most good. Thinkers from both camps will agree on almost all real-world issues, but they’ll differ in their reasoning. While a deontologist might condemn lying or stealing on its own merits, a consequentialist will highlight the harms that manifest from them.
I’ve noticed an uptick in normative discussion over the past few years. This could stem from The Good Place, or I could be imagining a trend where none exists. I don’t mind these discussions, but we should remember that most of our ethical discussions occur in the realm of applied moral philosophy. There, we don’t assume an overarching normative framework from our interlocutors. I’ve argued in favor of abortion, free speech, and accepting the annoyances of work on this blog, and none of these pieces mention deontology or consequentialism.
By definition, you make more universal arguments when you don’t presume a normative framework. If you’re making the consequentialist, feminist, or libertarian case for X, you’re losing everyone who doesn’t hold the chosen framework. Of course, there’s nothing wrong with narrowcasting. Andrew Sullivan’s conservative case for gay marriage made a positive impact on our politics. However, Sullivan’s success relied on the fact that many Americans respect conservative views on family. I doubt that’s the case for normative moral philosophy. As far as I can tell, the average person doesn’t commit to a single moral framework for all cases. I don’t even know if one needs to accept a single framework. Maybe it’s fine to mix and match, though that’s a question for metaethics. Ultimately, most people probably don’t enter ethical debates with strong consequentialist or deontological priors. If your argument depends on one of those frameworks, most people will tune it out.
Induction Works Because Induction Works
Some readers may find the concept of a moral truth bewildering. How can we prove an ethical proposition? As strange as it may sound, we prove ethical propositions the same way we prove anything else. Let’s start with a classic philosophy 101 argument:
Premise 1: Socrates is a man
Premise 2: All men are mortal
Conclusion: Socrates is mortal.
If the premises hold, the conclusion must hold. The premises do hold, so we conclude that Socrates is mortal. Moral arguments function the same way; the only difference is that a moral argument reaches a normative conclusion. In other words, the conclusion tells us the way the world should be instead of describing how it is. Since we need to reach a normative conclusion, we will need at least one normative premise. Here’s another simple argument about our favorite Greek gadfly:
Premise 1: Socrates is a man
Premise 2: People should not murder men
Conclusion: People should not murder Socrates
We must accept the conclusion if premises one and two hold. In that sense, this argument works just like the one above.
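The validity of both syllogisms can even be checked mechanically. Here’s a minimal sketch in Lean (the predicate names are my own, purely illustrative): the descriptive and the normative argument instantiate the same logical form, so the same one-line proof works for each.

```lean
-- Illustrative sketch; predicate names are assumptions, not anyone's official formalization.
variable (Person : Type)
variable (Man Mortal ShouldNotBeMurdered : Person → Prop)
variable (socrates : Person)

-- Descriptive: Socrates is a man; all men are mortal; so Socrates is mortal.
example (h1 : Man socrates) (h2 : ∀ p, Man p → Mortal p) : Mortal socrates :=
  h2 socrates h1

-- Normative: swap in a normative predicate; the proof is identical.
example (h1 : Man socrates) (h2 : ∀ p, Man p → ShouldNotBeMurdered p) :
    ShouldNotBeMurdered socrates :=
  h2 socrates h1
```

Nothing about the proof cares whether the conclusion is descriptive or normative; only the premises differ.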
Readers may feel that the second premise constitutes an assumption rather than something we can prove with math or a scientific experiment. That’s true, and Socrates himself could tell you that not everyone accepts it. However, the same necessity of assumptions applies to math and science themselves. Mathematical proofs rely on axioms. Mathematicians can prove calculus theorems once they assume these axioms, but they can’t prove the axioms themselves [1]. You’re just going to have to assume that a + b = b + a for all real numbers.
But, hold on, don’t we know that a + b = b + a from experience? We’ve built bridges and airplanes with this assumption, so it appears pretty robust. Furthermore, any programmer could code a simulation that checks whether a + b = b + a for a wide range of real numbers, and the two sides would match every time. Are we still reliant on assumptions?
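That simulation takes only a few lines; here’s a sketch (the function and sample names are mine). A finite random search can confirm the pattern every time without ever proving it for all reals.

```python
import random

def commutes(a: float, b: float) -> bool:
    """Check whether a + b == b + a for one pair of numbers."""
    return a + b == b + a

# Spot-check a large random sample. A finite check like this can never
# prove the claim for *all* reals -- which is exactly the point below.
random.seed(0)
samples = [(random.uniform(-1e12, 1e12), random.uniform(-1e12, 1e12))
           for _ in range(100_000)]
print(all(commutes(a, b) for a, b in samples))  # → True
```

The two sides match on every pair we try, yet the check says nothing about the pair we didn’t try.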
We are. Let’s construct an empirical argument that a + b = b + a for all real numbers.
Premise 1: The scientific method [2] shows that a + b = b + a for all real numbers
Premise 2: If the scientific method shows that a proposition is true, then that proposition is true
Conclusion: a + b = b + a for all real numbers is true.
Premise 2 seems obvious, but it’s still something we need to assume. Sure, we see evidence for the scientific method all around us, but how do we know that science will work in the future? Maybe Descartes’ evil demon constructed a world where scientific inquiry functioned for all of history up until this second, and will only fail us hereafter. I don’t think that’s the case, but I can’t offer any definitive proof against it. We just have to assume that tomorrow’s world will function like today’s. I feel pretty safe in that assumption, but it remains an assumption. Similarly, I could run a simulation showing that a + b = b + a for a quadrillion real numbers, but I can’t know that the quadrillion-and-first wouldn’t break the pattern.
Wait, didn’t the first section argue that we shouldn’t base our arguments on assumptions? Not quite. I said that an ideal moral argument shouldn’t presume that its readers hold a particular normative framework. I didn’t say that an argument should involve no assumptions. That’s impossible.
With that in mind, I’ll discuss a more interesting example of a moral argument. When discussing the nuclear bombings of Japan, you’ve probably read something like this:
Premise 1: We should not target non-combatants in war
Premise 2: The nuclear bombings targeted non-combatants
Conclusion: We should not have dropped the nuclear bombs
This is the last time I’ll say this, I promise: if the premises are true, we must accept the conclusion. To reject the conclusion, one would have to show that at least one premise fails. Bombing proponents might hold that the bombings prevented a far bloodier invasion, and that it’s acceptable to kill civilians if doing so saves a large number of lives. In that case, they would reject premise one. Other proponents might argue that we can’t draw a compelling combatant/non-combatant distinction when an entire country devotes itself to the war effort. They would reject premise two. I’m not going to litigate this issue here, but I hope this example shows how moral arguments work just like arguments about everything else. One side constructs a well-formed argument, while its opponents attempt to poke holes in the premises. Moral arguments aren’t special.
No Utility Monsters in the Shallow Pond
In 1971, philosopher Peter Singer presented the “shallow pond” thought experiment, and I will paraphrase it here. Imagine that you’re walking home and you see a drowning child. The pond is shallow enough that it poses no threat to your life, though it would muddy your shoes and force you to buy a new pair. Would it be morally acceptable for you to let the kid drown? Most of us wouldn’t think so. It seems pretty reprehensible to let someone die to prevent a minor inconvenience. Yet, most of us do something similar every day. We could donate to life-saving charities right now, and doing so would only sacrifice some meaningless material comforts. If one thinks we’re morally obligated to save the kid in the shallow pond, we’re also morally obligated to save people through donations.
I’ve seen many premise-and-conclusion reconstructions of this argument, but they usually overcomplicate it or insert irrelevant details. Here’s my best shot:
Premise 1: If you can save lives without sacrificing anything important, you’re obligated to do so.
Premise 2: Donating money to high-impact charities would save lives without sacrificing anything important.
Conclusion: You’re morally obligated to donate money to high-impact charities
You’ll often see this argument discussed in the context of consequentialism or utilitarianism. I remember one article connecting this to Soma injections in Brave New World and utility monsters. Some philosophy textbooks and websites place this argument in a utilitarianism section, and Singer himself endorses utilitarianism. However, this argument (or, at least, my reconstruction of his argument) doesn’t mention utilitarianism. Maybe, one second after rescuing the child, a meteor strikes the child’s head and kills him. The same argument would apply, even though saving the child didn’t result in any positive consequences. The shallow pond lies in the field of applied moral philosophy. It’s not a consequentialist argument. Hence, there’s also no reductio ad absurdum where everyone’s hooked on Soma. You could accept the first premise due to a commitment to the greater good, but you could also accept it out of a sense of duty to your fellow humans.
Once we remove the utilitarian red herring, we can see that the shallow pond argument doesn’t demand that we sacrifice all our worldly pleasures for the greater good. It doesn’t ask you to sacrifice your basic health and safety, nor does it ask you to become a slave to the Against Malaria Foundation. We could imagine alternative versions where cursed pond water makes those who enter it lose their life savings, healthcare coverage, or basic freedom. In all those cases, we probably wouldn’t condemn someone who stayed out. The shallow pond also doesn’t ask much of the poor, for whom the second premise wouldn’t apply.
I haven’t read a compelling objection to the shallow pond argument. Rejecting premise one will probably send your morality down a bad path. In my experience, most of the counterarguments abstract the shallow pond to a discussion about consequentialism or utilitarianism, which misses the point. Other counterarguments point towards the physical distance or national boundaries involved, compared to the immediacy of the shallow pond. I don’t see how either of those could matter. The thought experiment seems to work if I imagine the pond sitting past a wormhole that connects me to Australia. I also don’t understand the objection that it’s too demanding. Online donations demand much less of me than slavery abolition did of antebellum plantation owners.
I think many people who write about these topics rank highly on openness to experience, meaning they enjoy discussing abstract topics. That’s mostly a positive trait, but we should remember that abstracting a problem doesn’t help us solve it. Of course, enjoy your debates about deontology, consequentialism, and even virtue ethics. Just remember that some of the most important moral arguments don’t involve them.
[1] At least, not in real analysis. There might be some more abstract study where mathematicians can prove these axioms. Even then, those proofs would rely on logical assumptions.
[2] I’m using “the scientific method” as shorthand for randomized controlled trials, natural experiments, statistical simulations, etc.