Lisa Tessman’s When Doing the Right Thing is Impossible offers an engaging and accessible exploration of the complex philosophical issues surrounding moral dilemmas and moral failure. Are there genuine moral conflicts? Is it true that in some situations a moral agent cannot help but fail? Tessman offers her own answer – yes, in some situations, moral failure is unavoidable – while guiding readers through the debates surrounding these questions, clarifying the various positions sympathetically and carefully.
Part of what makes the book so immediately gripping is the case study it begins with in Chapter One: Tessman focuses on the case of Memorial Medical Center in New Orleans during Hurricane Katrina. During the storm, the hospital was full of patients as well as a number of community members, off-duty staff, and their families who were seeking shelter. In the aftermath, the hospital was seriously compromised—the air conditioning stopped working, the water became unsafe to drink or wash with, toilets stopped working, and medications ran low. Some critically ill patients were evacuated by helicopter, but many more were not. As time went on, exhausted staff and volunteers made mistakes, and doctors and administrators made pressured judgment calls, including the decision to go into lockdown and post armed guards to keep out desperate people (seen as potential looters) trying to get in. After people had been trapped in the hospital for four days, the backup generators failed and the situation worsened further. Staff, nurses, and doctors acted heroically: they pumped oxygen by hand for patients who had required ventilators, and devised IV drips that did not require electricity. But, in desperation, they began to make decisions about the order of evacuation, planning to leave for last those who were either sickest and therefore least likely to survive evacuation, or too large and unwieldy to move. When it became clear, given complex and horrifying circumstances, that not everyone would be evacuated, some doctors and nurses, according to some accounts, gave injections of morphine and other drugs to hasten the deaths of those who would be left behind.
The question Tessman raises from this case is the one at the heart of the book: are there situations in which what agents are morally required to do is something that is impossible to do? In the case of the doctors and nurses, we might think both that they were morally required not to leave patients in the hospital to suffer and die, and at the same time that deciding to give patients who might survive a drug to hasten their deaths was morally reprehensible. Or, put differently, we might think they were morally required to save their patients but also that saving their patients was impossible—they were morally required to do something they could not do. Giving a philosophical account of how this can be the case is complicated, given some fundamental commitments that shape much of philosophical ethics. For one thing, as Tessman notes, if we agree with her that there can be impossible moral requirements, then we must be willing to contradict the Kantian principle "ought implies can" (Tessman 16).
Chapter two takes up the question of whether moral dilemmas can in fact exist. Tessman distinguishes moral conflicts (i.e., situations in which there is a moral requirement to do both A and B and one cannot do both A and B) from moral dilemmas (i.e., situations in which there is a moral requirement to do both A and B, one cannot do both, and neither ceases to be a moral requirement as a result of the conflict). In the case of dilemmas, even if an agent judges that moral requirement A overrides moral requirement B, B does not stop being a requirement. Failing to do B, even in order to successfully do A which the agent judged to be more important, would still be a moral failure. As Tessman notes, many philosophers believe that moral dilemmas do not exist. She canvasses two “anti-dilemma” philosophical positions: those who argue that there are no moral conflicts at all (the ‘no-conflict approach’), and those who argue that there are no moral conflicts that count as dilemmas (the ‘conflict resolution approach’). Tessman ultimately rejects both approaches, the first by rejecting the principle that ‘ought always implies can’, and the second by arguing that there are some cases of conflicting moral requirements where both requirements are non-negotiable and neither can be overridden or cancelled. As she concludes, “as long as some moral requirements are non-negotiable, then if there are conflicts among these kinds of requirements, there will also be dilemmas” (42).
The third chapter of the book considers how to distinguish between negotiable and non-negotiable moral requirements. Those that are negotiable may conflict without producing circumstances of moral failure: we can simply prioritize the more important, non-negotiable requirements over those that are less important. But where non-negotiable requirements conflict, moral agents can find themselves failing no matter which requirement they fulfill. Tessman distinguishes different kinds of moral requirements—there is a plurality of kinds of moral values (47), and because not all moral values are of the same kind, they cannot always substitute for one another. In some cases, the value of an action can be replaced by some other value. In other cases, an action's value is irreplaceable. In particular, there are cases of some values which, if sacrificed, could never be replaced. Tessman gives the example of the murder of someone you love. Nothing can substitute for what you have lost: it is a loss of an irreplaceable value. Irreplaceability seems to be one component of what makes some moral requirements non-negotiable, but not the only component, since some irreplaceable losses are not substantial enough to count as non-negotiable moral requirements (e.g., a child losing a beloved balloon may be an irreplaceable loss but not one her parent is obligated to prevent at all costs). As Tessman argues, drawing on Gowans (1994) and Nussbaum (2011), the more serious, non-negotiable requirements are those whose fulfillment nothing can substitute or compensate for, and those that provide what is of deepest value in human lives.
In the fourth chapter, Tessman turns to the work of empirical moral psychologists to consider further how moral agents could judge themselves to be facing non-negotiable moral requirements. Contrary to the common philosophical assumption that processes of moral judgment are chiefly processes of reasoning, Tessman surveys Jonathan Haidt and Joshua Greene's work on dual process models of moral judgment, which consider how judgments of non-negotiable requirements might more standardly occur through automatic, unconscious, intuitive processes. The difference between reaching moral judgments through intuitive processes and reaching them through reasoning processes suggests a way of distinguishing negotiable and non-negotiable requirements: the alarm bell emotions that accompany the intuitive process of moral judgment may be part of what marks certain moral requirements as non-negotiable. As Tessman writes, "If you see a vulnerable person in danger, for instance, and this immediately provokes an 'I must protect!' alarm bell, then you'll experience the moral requirement indicated by this 'I must' as non-negotiable" (Tessman 76).
In chapter five, Tessman considers the evolutionary development of moral practice more broadly. Focusing on multilevel selection theory, Tessman explains how traits tied to abilities to cooperate and be altruistic (specifically, the traits of individuals in groups who successfully practiced alloparenting) have been seen as more likely to be passed on (82). Having gone into detail on the view and the common misconceptions to be avoided, Tessman highlights this as a plausible evolutionary explanation of why morality in general would have evolved, and proceeds to concentrate on evolutionary explanations for the specific experiences of intuitively judging that we are required to do something. In brief, she notes that we have good evolutionary explanations for why humans rely on system 1 (the quick, intuitive system of the dual system models) associating certain perceptions (e.g., of an object that looks like feces) with certain feelings (e.g., yuck) and behavior (e.g., do not eat) (94-95). Moral responses (e.g., "vulnerable person in danger/empathic fear for the person/protect the person!") could have evolved similarly (Tessman 98). The presence of such moral responses does not ensure that they are ones we can enact, of course, since we cannot always protect the person. In such cases, we may face impossible moral requirements.
Tessman turns in chapter six to a consideration of second-order judgments about our first-order judgments of what action is morally required. In particular, she argues that a specific class of first-order judgments arrived at by intuition should not be subjected to verification by a reasoning process, because to do so would actually undermine the value expressed in the first-order judgment. For instance, a parent’s first-order automatic judgment that they must stop their toddler from running into a busy street should not be subjected to second-order evaluation (Tessman 108). To do anything other than stop the child is unthinkable. As Tessman writes,
“If intuitively judging some actions to be unthinkable is part of what constitutes loving someone, and if judgments of unthinkability preclude double-checking our intuition through reasoning, then in order to love in this way we’ll have to trust some of our intuitive judgments about what actions are required or prohibited, and to do so without relying on any reasoning about them” (Tessman 109).
On Tessman’s view, this amounts to both a first order intuitive judgment (i.e., one must stop the child), and a second order intuitive judgment [i.e., “Don’t think the unthinkable by reasoning about what to do in this case!” (110)], at the same time. In some cases, agents might face either doing the unthinkable (e.g., not protecting one’s child) or doing the impossible (e.g., lifting a 5000-pound car off of them). In these sorts of tragic cases, one cannot help but fail.
In chapter seven, Tessman continues the consideration of unthinkable actions, but now in contexts of relationships beyond those of intimates. In some cases of moral action, it can be wrong to arrive at a moral judgment via controlled reasoning rather than an affect-laden, automatic, intuitive process. For instance, we should not need to think very hard about whether to save a child's life. In the words of Bernard Williams, one can have "one thought too many." As Tessman writes, "We should treat other people as beings whom it is unthinkable to do certain things to, and the mark of our finding these things to be unthinkable is that we don't have to reason, or find justification, in order to grasp that we mustn't do them" (Tessman 133). While recognizing and honoring sacred values can be at the core of much of the most important moral action, doing so can also be dangerous—we must also remain attentive to the values that conflict with them.
In chapter eight, Tessman concludes that, contra constructivists and theorists who think we should reach consistency among moral values by processes of reflective equilibrium, morality is not necessarily made up of a consistent set of judgments reached by reasoning. In fact, the set of our moral values is often arrived at through an automatic, intuitive process, and may persistently contain conflicting values. We should not respond to the existence of conflicting values with an attempt to tidy them up and produce a unified, consistent set.
One of the most important dimensions of Tessman’s work in this text as I see it is its reflection on why the possibility of unavoidable moral failure can feel so troubling to moral agents, despite the fact that such situations are not rare in our moral lives. As Tessman notes,
“It’s very distressing to think that, due to something completely outside of your own control, you might be caught in a situation in which you’re inevitably going to have to commit a moral wrongdoing. Perhaps we like to think that we can control how morally good or bad we are. If there are dilemmas, then even if we always try to do the right thing, we might end up with no right thing that we can do…We can expect our moral lives to be less clean than we might have previously imagined because we might fail in ways that we never would have, if only it were always in our control to avoid moral failure…The point of recognizing the phenomenon of unavoidable moral failure isn’t to identify more things that people can be blamed for. Instead, the main point is to acknowledge how difficult moral life can be” (Tessman 2017, 29-30, 159-160).
Her reflection on the allure of control in our moral lives will be helpful both at the level of philosophical developments in moral psychology and at the level of on-the-ground, first-person moral experience.
This is an excellent book not only for philosophers working in ethics and moral psychology, but also for a much broader audience. It would be ideal for use in ethics classes, and is accessible enough for readers beyond academic contexts who are interested in understanding the complexities surrounding these all-too-familiar experiences. Throughout, Tessman helpfully frames questions to the reader (e.g., “are you willing to agree that…?”), making the text especially engaging, and facilitating its use in classrooms and discussion groups. The book manages to maintain what is deeply relatable about situations in which moral agents cannot succeed, while introducing readers to the philosophical controversy surrounding such situations.
Department of Philosophy and Women and Gender Studies
Gowans, Christopher. 1994. Innocence Lost: An Examination of Inescapable Moral Wrongdoing. New York: Oxford University Press.
Greene, Joshua, Brian Sommerville, Leigh Nystrom, John Darley, and Jonathan Cohen. 2001. “An fMRI Investigation of Emotional Engagement in Moral Judgment.” Science 293 (5537): 2105-2108.
Haidt, Jonathan. 2001. “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review 108 (4): 814-834.
Nussbaum, Martha. 2011. Creating Capabilities: The Human Development Approach. Cambridge, MA: Harvard University Press.