Saturday, November 23, 2019

Moral Minds by Marc D. Hauser

I very much liked the thesis of this book. It is as follows:

As a classic text in moral philosophy concludes, “Morality is, first and foremost, a matter of consulting reason. The morally right thing to do, in any circumstance, is whatever there are the best reasons for doing.”

This dominant perspective falls prey to an illusion: Just because we can consciously reason from explicit principles -- handed down from parents, teachers, lawyers, or religious leaders -- to judgments of right and wrong doesn’t mean that these principles are the source of our moral decisions. On the contrary, I argue that moral judgments are mediated by an unconscious process, a hidden moral grammar that evaluates the causes and consequences of our own and others’ actions. This account shifts the burden of evidence from a philosophy of morality to a science of morality.

In other words, contrary to much public opinion, morality is not a reasoned position, arrived at through philosophy. It is an unconscious process, whose grammar and expression can best be determined through scientific inquiry.

That’s an intriguing idea. But I am probably oversimplifying it -- because, as the author admits and as is apparent to anyone who tries to do the right thing, sometimes we do engage in moral reasoning, and sometimes that reasoning is the basis on which our moral judgment is expressed.

But, at the same time...

Acknowledging that we do engage in conscious, rational forms of reasoning is different from accepting that this is the one and only form of mental operation underlying our moral judgments. … Many of [our] judgments are made rapidly, involuntarily, and without recourse to well-defined principles.

So, this moral landscape that Hauser leads us into is a treacherous one. We’ll find a little conscious moral reasoning here and a little unconscious moral grammar there. What sense are we to make of it all?

Thou Shalt Not

One really useful tool the book offers is the reminder to look at moral questions through three different lenses: actions, causes, and consequences. Many ancient moral systems only take the first of these into account, classifying certain actions as morally forbidden.

For example, all of the following actions are universally forbidden: killing, causing pain, stealing, cheating, lying, breaking promises, and committing adultery.

But…

Like other rules, these moral rules have exceptions. Thus, killing is generally forbidden in all cultures, but most if not all cultures recognize conditions in which killing is permitted or might be justifiable: war, self-defense, and intolerable pain due to illness. Some cultures even support conditions in which killing is obligatory: in several Arabic countries, if a husband finds his wife in flagrante delicto, the wife’s relatives are expected to kill her, thereby erasing the family’s shame.

Something is clearly wrong here. You can’t call actions universally forbidden if there are certain exceptions in which they are not. Rules of conduct therefore have to be bounded by other moral forces and judgments -- things like causes and consequences.

Do the rules actually capture the relationship between the nature of the relevant actions (e.g., HARMING, HELPING), their causes (e.g., INTENDED, ACCIDENTAL), and consequences (e.g., INTENDED, [UN]FORESEEN)?

In other words, an action that is harmful, done intentionally by the actor, with foreseen consequences is morally distinct from one that is helpful, done accidentally by the actor, and with unforeseen consequences. And the pivotal point in this triad, from my way of thinking, is the actor’s intentions. It is not, despite what the Book of Exodus teaches, actions that are moral or immoral. It is intentions.

But intentions can be hard to discern. Frequently, the consequences get in the way. Here’s a great example from Hauser’s book.

How we judge the moral relevance of someone’s actions may also influence how we attribute cause. This shows the interaction between the more general folk psychology and more specific moral psychology. Consider the following scenario:

The vice president of a company went to the chairman of the board and said, “We are thinking of starting a new program. It will help us increase profits, and it will also harm the environment.” The chairman of the board answered, “I don’t care at all about harming the environment. I just want to make as much profit as I can. Let’s start the new program.” They started the new program. Sure enough, the environment was harmed.

How much blame does the chairman deserve for what he did? Answer on a scale of 1 (considerable blame) to 7 (no blame).

Did the chairman intentionally harm the environment? Yes or no?

When subjects answer this question, they typically say that the chairman deserves blame because he intentionally harmed the environment. In contrast, when they read a virtually identical scenario in which the word “help” replaces the word “harm,” and “praise” replaces “blame,” they typically say that the CEO deserved little praise and did not act to intentionally help the environment. At the heart of these scenarios is whether a side effect of an action is perceived as intentional or not. In these cases, there is an asymmetry. When the side effect of the action is a negative outcome, people are more willing to say that the agent caused the harm. This is not the case when the outcome is positive or helpful. Recent studies with children show that such effects are present as early as at three years of age, suggesting that we are endowed with a capacity that is more likely to perceive actions as intentional when they are morally bad than when they are morally good.

It is finding inherent moral capacities like this one that Hauser’s book is really about -- about defining the grammar of the “folk” morality that our genes and our culture create for us. In the above example, the board chair did not intend to harm the environment. He intended to make a profit, and harming the environment was a correlated consequence of that intent. It is not our moral reasoning that holds him morally liable, because the distinction between intentions and consequences should be clear when we invoke our reasoning. It is the fact that we need to suppress another, apparently innate set of moral instincts in order to make this determination that makes Hauser’s exploration so interesting.

Is Someone Watching?

Hauser spends a lot of time trying to break down the “case-relevant dimensions” of moral thought experiments -- the kinds of things that are rife in the psychological literature. Things like “Sports Car”:

A man is driving his new sports car when he sees a child on the side of the road with a bloody leg. The child asks the car driver to take her to the nearby hospital. The owner contemplates this request while also considering the $200 cost of repairing the car’s leather interior.

Is it obligatory for the man to take this child to the hospital?

And things like “Charity”:

A man gets a letter from UNICEF’s child health care division, requesting a contribution of $50 to save the lives of twenty-five children by providing oral rehydration salts to eliminate dehydrating diarrhea.

Is it obligatory for the man to send money for these twenty-five children?

Researchers like Hauser use questions like these, and dozens of minor variations on each, in an attempt to tease out the distinct dimensions on which our moral judgments -- both the innate and the reasoned versions -- turn. Using these two cases as his example, he presents a fairly comprehensive list of these dimensions, ranging from how many people are helped, to how much it costs the person making the moral judgment, to the relationship between the judger and the people helped, to the degree to which the judger caused the situation that should be resolved.

But in all of this analysis, there was one dimension that I found conspicuously missing -- whether or not someone is watching us when we need to make our decision. Driving past that kid in your sports car, after all, would be harder if your own children were in the back seat, and tearing up that UNICEF solicitation would be harder at your family’s dinner table.

Trolley Problems

Of course, the most famous of these thought experiments are the trolley problems.

Denise is a passenger on an out-of-control trolley. The conductor has fainted and the trolley is headed toward five people walking on the track; the banks are so steep that they will not be able to get off the track in time. The track has a side track leading off to the left, and Denise can turn the trolley onto it. There is, however, one person on the left-hand track. Denise can turn the trolley, killing the one; or she can refrain from flipping the switch, letting the five die.

Is it morally permissible for Denise to flip the switch, turning the trolley onto the side track?

Hauser calls this one “Bystander Denise,” and he offers three other variations: “Bystander Frank,” in which Frank has to decide whether or not to push a large man in front of the trolley to stop it from killing five others; “Bystander Ned,” in which Ned has to decide whether or not to flip a switch that will divert the trolley onto a track that will cause it to be stopped by the body of a large man before it can kill five others; and “Bystander Oscar,” in which Oscar has to decide whether or not to flip a switch that will divert the trolley onto a track that will cause it to be stopped by a heavy weight before it can kill five others, but also kill one person in the process.

A lot has been written about trolley problems like these, and a lot more has been assumed about what people will actually do based on the way they solve them. But Hauser’s analysis has helped me understand that the utility of trolley problems does not lie in their predictive power. It lies in the way they reveal our innate moral intuitions.

At the level of moral reasoning, all four of Hauser’s trolley problems resolve down to the same fundamental choice: act and kill one, or fail to act and watch five die. And there are undoubtedly human creatures among us who see them all in exactly that light. Whether you’re Denise, Frank, Ned, or Oscar, your choice is exactly the same because each situation carries with it the same numerical calculation.

But most people will feel differently about the four situations. Denise, Ned, and Oscar should probably act, because flipping a switch is a neutral/impersonal action, but Frank should definitely not act, because pushing a man is a negative/personal action. Frank is intending to do harm, while Denise, Ned, and Oscar are not.

Some others will feel a different distinction. Their moral intuition is telling them that Denise and Oscar should probably act, because they can foresee, but do not intend, a negative outcome from their action. But Frank and Ned should definitely not act, because not only do they foresee a negative outcome, they intend it. The man who dies after Denise’s or Oscar’s action is an unintended casualty. The one who dies after Frank’s or Ned’s action is an outright victim.

Whatever kind of human creature you are -- the one who sees all four situations as the same, the one who sees the distinction between Denise/Ned/Oscar and Frank, or the one who sees the distinction between Denise/Oscar and Frank/Ned -- the point is we cannot predict how you would really act in any of these situations because they are not designed to determine that. I secretly suspect that most people would actually do nothing in all four of these cases, secure in their knowledge that they themselves are the innocent bystander in all this mayhem, but that’s beside the point. The utility of trolley problems is not in their ability to help us develop actionable moral reasoning, but in illuminating the innate moral intuitions that we all possess.

But most people are confused by this, thinking that there is a logical set of moral precepts that will lead them through the tangled jungle of trolley problems. Hauser beautifully makes this point with the following illustrative story about his father.

My father’s responses to some of these dilemmas represent a perfect illustration, especially given his training as a hyperrational, logical physicist. I first asked him to judge the Denise case. He quickly fired back that it was permissible for her to flip the switch, saving five but killing one. I then delivered the Frank case. Here, too, he quickly judged that it was permissible for Frank to act, pushing the large person onto the tracks. When I asked why he had judged both cases in the same way -- why they were morally equivalent -- he replied, “It’s obvious. The cases are the same. They reduce to numbers. Both are permissible because both actions result in five people surviving and one dying. And saving five is always better than saving one.” I then gave him a version of the organ-donor case mentioned in chapter 1. In brief, a doctor can allow five people, each needing an organ transplant, to die, or he can take the life of an innocent person who just walked into the hospital, cutting out his organs to save the five. Like the 98 percent of our internet subjects who judged this act as impermissible, so did my father. What happened next was lovely to watch. Realizing that his earlier justification no longer carried any weight, his logic unraveled, forcing him to revise his judgment of Frank. And just as he was about to undo his judgment about Denise, he stopped and held to his prior judgment. I then asked why only Denise’s actions would be permissible. Not having an answer, he said that the cases were artificial. I am not recounting this story to make fun of my father. He has a brilliant mind. But like all other subjects, he doesn’t have access to the principles underlying his judgments, even when he thinks he does.

And that’s it in a nutshell, what makes this book so interesting. We don’t have access to the principles underlying our judgments, even when we think we do.

When the Trolley Goes Off the Rails

This, so far, has been Hauser’s set-up. We are endowed with an innate moral grammar that we typically neither sense nor understand. But what is that moral grammar? And from where does it arise? The rest of Hauser’s book is a study of those two questions, often citing the results of scientific and behavioral experiments to provide a stable foundation for the structure he is building. But many of these experiments -- or perhaps the way Hauser describes and then interprets them -- leave me a little flat.

Here’s one example.

Are infants built with the machinery that perceives actions with respect to a hierarchy, unconsciously recognizing the infinite potential to combine and recombine meaningless actions into meaningful actions and events? As a small step in the direction of answering this question, the developmental psychologists Dare Baldwin and Jodie Baird presented infants with a video of a woman walking into a kitchen, preparing to wash dishes, and then washing and drying the dishes. As part of this sequence, they saw the woman drop and then pick up a towel. After watching this video over and over again, infants looked at one of two stills extracted from the sequence: a completed action of the woman grabbing the towel on the ground or an incomplete action of the woman reaching for the towel, but stopping midway, bending at the waist. Infants looked longer when the woman appeared frozen at the hip, suggesting that they carve up a continuous stream of actions into smaller units that represent subgoals within larger goals. Like human speech comprehension, in which we glide over the smaller phonemic units to achieve a higher level of understanding with respect to words and phrases, event perception in infants is similarly processed.

Yeah. That. Or maybe the infants just had gas?

Hauser is meticulously laying his foundation here, apparently not willing even to assume that humans can perceive the world around them as populated by moral agents with an awareness of the consequences of their actions, or even that consequences come from the actions of moral agents. He wants to build the syntax of his moral grammar from the ground up. And, despite some of what I see as methodological shortcomings, his conclusion is a powerful one.

We are endowed with a moral acquisition device. Infants are born with the building blocks for making sense of the causes and consequences of actions, and these early capacities grow and interface with others to generate moral judgments. Infants are also equipped with a suite of unconscious, automatic emotions that can reinforce the expression of some actions while blocking others. Together, these capacities enable children to build moral systems. Which system they build depends on their local culture and how it sets the parameters that are part of the moral faculty.

It is very much like language, this moral grammar, in which an underlying set of structures and syntax rules them all, but in which each individual language can be customized to the specifics of the culture that gives it rise.

The Animalistic Continuum

One more thing. In the last section of his book, Hauser tries to tackle one of my favorite subjects: deciding whether or not humans are unique in the animal kingdom -- in this case when it comes to having a moral grammar -- or whether, like most everything else, there is a kind of continuum on which all animals, including humans, reside. It’s probably little surprise that Hauser cites plenty of evidence for moral systems in the non-human animal kingdom, and in doing so, makes an observation I have seldom come across in similar discussions.

If we run the subtraction operation, taking away those aspects of our moral psychology that we share with other animals, we are left with a suite of traits that appear uniquely human: certain aspects of a theory of mind, the moral emotions, inhibitory control, and punishment of cheaters. There may be others, and some of those remaining from the subtraction operation may be more developed in animals than currently believed. If I have learned anything from watching and studying animals, as well as reading about their behavior from my colleagues, it is that reports of human uniqueness are often shot down before they have much of a shelf life. Consider the proposed set of nonoverlapping abilities as an interim report.

Indeed. Here’s one example -- an article, published a decade after Hauser’s book, that seems to demolish the notion that non-human animals do not engage in the punishment of cheaters:
https://royalsocietypublishing.org/doi/full/10.1098/rstb.2015.0090

But sometimes, here again, Hauser goes a little too far in assigning human meaning to the actions of animals in moral-based psychological studies. My favorite has to do with blue jays and reciprocal altruism.

A second way to test for reciprocal altruism in animals comes from work on captive blue jays trained to peck keys in one of two classic economic games. … Every game involved two jays, each with access to a “cooperate” and a “defect” key. One jay started off, pecking either the “cooperate” or the “defect” key. Immediately after the first jay pecked, the second jay had an opportunity to peck, but with no information about his partner’s choice until the food reward emerged; the experimenter made the food payoff depend upon the jay’s choice, indicated below by the relative size of each circle within the two-by-two table.


When the jays played a prisoner’s dilemma game, they rapidly defected. No cooperation. In contrast, when the jays switched to a game of mutualism, they not only cooperated but maintained this pattern over many days. That the jays switch strategies as a function of the game played shows that their responses are contingent upon the payoffs associated with each game.

Yes. Their responses are contingent upon the payoffs associated with each game. Obviously. But to what degree can we say that the jays are “cooperating” or “defecting”? What, I wonder, would be the outcome if you took the “cooperate” and “defect” labels off the keys? Or switched them? In the latter case, the jays aren’t likely to peck the “cooperate” keys when they want to cooperate and the “defect” keys when they want to defect. They, if I may be so bold to suggest, don’t actually want to “cooperate” or “defect.” They just want food.
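And that, in a sense, is exactly what the game theory predicts. The two games differ only in their payoff structures, and a bird (or an algorithm) that simply tracks which key yields more food will land on “defect” in the prisoner’s dilemma and “cooperate” in mutualism. Here is a minimal sketch with illustrative numbers (hypothetical, not the study’s actual food rewards) showing why one strategy dominates each game:

```python
# Payoffs to the row player, with hypothetical reward values.
# Strategies: 0 = "cooperate" key, 1 = "defect" key.
# Keys are (my_choice, partner_choice) -> my payoff.

prisoners_dilemma = {
    (0, 0): 3, (0, 1): 0,   # I cooperate: partner cooperates / defects
    (1, 0): 5, (1, 1): 1,   # I defect: partner cooperates / defects
}

mutualism = {
    (0, 0): 5, (0, 1): 3,   # cooperating pays more...
    (1, 0): 2, (1, 1): 1,   # ...no matter what the partner does
}

def dominant_strategy(payoffs):
    """Return the strategy that is the best response to every
    possible partner choice, or None if neither dominates."""
    for mine in (0, 1):
        other = 1 - mine
        if all(payoffs[(mine, opp)] > payoffs[(other, opp)]
               for opp in (0, 1)):
            return mine
    return None

print(dominant_strategy(prisoners_dilemma))  # 1 -- defection dominates
print(dominant_strategy(mutualism))          # 0 -- cooperation dominates
```

Nothing in this calculation requires intent, only reward-tracking -- which is precisely the point of the objection above.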

A Clockwork Morality

And finally, this anecdote from Hauser’s book is too good to pass up.

When Anthony Burgess sent the manuscript of A Clockwork Orange to the United States, his New York editor told him that he would publish the book, but only if Burgess dropped the last chapter -- chapter 21. Burgess needed the money, and so went along with the suggested change. The rest of the world published the full twenty-one chapters. When Stanley Kubrick produced the film adaptation, it was hailed as cinematic genius. Kubrick used the shorter, American edition of the novel.

When I first read about the shortening of Orange, I immediately assumed that the last chapter would be ferociously violent, a continuation of the protagonist’s destructive streak, a rampage against the moral norms. I was completely wrong. As Burgess put it in the preface to the updated American Orange: “Briefly, my young thuggish protagonist grows up. He grows bored of violence and recognizes that human energy is better expended on creation than destruction. Senseless violence is a prerogative of youth, which has much energy but little talent for the constructive.” This change is not sappy or pathetic but, rather, a proper ending to a great novel. As Burgess acidly pointed out: “When a fictional work fails to show change, when it merely indicates that human character is set, stony, unregenerable, then you are out of the field of the novel and into that of the fable or allegory. The American of Kubrickian Orange is a fable; the British or world one is a novel.”

I’m going to have to check that unread copy of A Clockwork Orange on my bookshelf to find out which one I have. But more importantly, I wonder which one better describes human morality: a novel or a fable?

+ + +

This post first appeared on the blog of Eric Lanke, an association executive and author. You can follow him on Twitter @ericlanke or contact him at eric.lanke@gmail.com.


