Making moral decisions: The conflict between morality and utility

Today we visit the HMS Belfast and the Churchill War Rooms. The Belfast is not my province – I expect to learn something about group dynamics in war from my co-leader – but the war rooms, the site where so many vital decisions were made, will bring the question of moral decision making to life.

The question we demand of murderers and war leaders alike is: how could you do that? Of the SS soldier: How could you cold-bloodedly murder dozens of Jews as they stood in front of you and pleaded for their lives? Of Churchill or Eisenhower or other wartime generals: How could you give the order to bomb Dresden knowing how many would die? How could you order soldiers into this battle or that one knowing how many would become casualties?

To get some insight into what is going on in their minds, we have ordinary people spend some time in an fMRI scanner, reading stories and making decisions while we measure the flow of blood in their brains. One dilemma is usually “the trolley”, a classic of the field:

“A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Should you turn the trolley in order to save five people at the expense of one?”

Another would be “the crying baby” (and likely to have an emotional impact on parents and non-parents alike):

“Enemy soldiers have taken over your village. They have orders to kill all remaining civilians. You and some of your townspeople have sought refuge in the cellar of a large house. Outside, you hear the voices of soldiers who have come to search the house for valuables.

“Your baby begins to cry loudly. You cover his mouth to block the sound. If you remove your hand from his mouth, his crying will summon the attention of the soldiers who will kill you, your child, and the others hiding out in the cellar. To save yourself and the others, you must smother your child to death.

“Is it appropriate for you to smother your child in order to save yourself and the other townspeople?”

There are many differences between these stories*, but the key one here is that “the trolley” is impersonal, and “the crying baby” is so very personal, defined by researchers as a “me hurt you” scenario. “The trolley” still involves “hurt you”, but the “me” has been removed – given critical space by the fact that you flip a switch to change the outcome of events, instead of using your bare hands to carry them out start to finish.

This distinction between personal and impersonal is to some extent the difference between the leader, making decisions on a map rather like a game of chess, and the soldier on the ground actually killing another human being. This distinction is played out in people’s decisions – it’s much easier to decide to flip the switch in the trolley than to decide to smother the baby – and in their brains, where thought and emotion compete in two distinct but connected networks. There is not much competition in the impersonal situation; our emotional brains tend not to get drawn in, thanks to the distance of the switch, so it’s easy to make the utilitarian decision to sacrifice one and save five. The personal “crying baby” dilemma, on the other hand, will evoke strong activity in emotional regions of the brain, which must be overridden by non-emotional regions for you to decide to smother one child to save the town.

Not all people – not all leaders – have such an easy time of supposedly impersonal decisions, though. There are some leaders who take their actions quite personally, whose decisions weigh heavily on them in the lives they cost, and others who can brush it off. The question then becomes, assuming that you do feel the weight of responsibility and must make the hard decision, how are you doing it? Or if you are a Jewish mother, trying to shush a crying baby as you hide from the Nazis, how do you decide?

Essentially, to make the tough decision we must override our emotional response in favor of a more abstract “greater good”. This is accomplished by two regions of the brain: the anterior cingulate cortex (ACC), which detects the conflict between emotion and abstract good, and then gives a boost to the dorsolateral prefrontal cortex (dlPFC), the region of the brain that helps us maintain a goal (to make the right decision, and save as many people as possible), manipulate information in working memory (think out the consequences of different courses of action) and make a final decision.

Now imagine the person in the fMRI scanner imagining the crying baby, or the woman in hiding living it. We know that when people say no, don’t smother the baby, they tend to say it very quickly, and have less activity in their ACC and dlPFC than people who ultimately say yes, smother the baby for the greater good. The emotional reaction comes first, and if it will carry the day it does so quickly; the utilitarian reaction comes more slowly, courtesy of the prefrontal cortex and intense efforts of working memory that manage to override the emotional response.

To some extent, then, you want a leader who is not emotional; perhaps one who has a mild form of damage to another part of the brain called the ventromedial prefrontal cortex, because these patients tend to make more utilitarian judgments. You might argue that we want our leaders to be human, and feel the impact of the choices they make – but we wouldn’t want them to be paralyzed by fear, nor would we want them to be so empathetic that they lose sight of the big picture. The Nazis might well have won if Allied generals had not been willing to lose a few battles to win the war.

A strong dlPFC and excellent cognitive control might become the better marker of a good leader, because such a leader could have the emotional response but still be able to override it. We might choose leaders who are naturally more reflective, or put them in situations where we push them to be more reflective, because reflection helps us make utilitarian decisions. On the other hand, weaker emotional responses might be more reliable, because the dlPFC is subject to ego depletion, and utilitarian judgments suffer when we have other things on our minds (officially called cognitive load), or when the prefrontal cortex is impaired by sleep deprivation or stress, which are all but guaranteed in wartime.

As for myself, sitting here in a reflective frame of mind writing about the science of moral decision making, it is easy to say “yes, smother the baby” and “yes, flip the switch”. Given my visceral aversion to setting traps for mice in my apartment just a few months ago, however, I wouldn’t want to be in Churchill’s position, or any soldier’s, of deciding to kill, or even ordering others to kill… although I probably could, if the circumstances were right. Modern psychology and our ability to peer inside the brain have taken our understanding of the human mind quite far, but the gulf to understanding and predicting one individual’s decision is still quite large.

*There are other differences between these two specific dilemmas: the number of people (five versus one), the innocence (a child versus adults), the relationship (strangers versus your own child). The research so far has tried to wash these out – subjects in these studies considered many scenarios in the fMRI scanner, and all of these differences were subsumed under the big personal vs. impersonal one. These questions of relationship and number are worth exploring, but in some sense they don’t matter. The Nazis who ran death camps such as Auschwitz seemed to have no problem killing children as well as parents; in one interview, I believe at Auschwitz again, an SS soldier was pressed about killing children, since they could not have been responsible for any of the grievances he had listed, and he shrugged and said, essentially, that they were still Jews.

 

Greene, J. D., Nystrom, L. E., Engell, A. D., Darley, J. M., & Cohen, J. D. (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44, 389-400.
