Note: This post is the one hundred and fifth in a series about government and commercial ethics. Click
here for the full listing of the series. The
first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs, which inspired this series.
This week the topic is the book, "
The Righteous Mind" by Jonathan Haidt. Having read the book, I highly recommend the
NY Times review of it as an excellent summary.
Note that we have encountered the work of Jonathan Haidt before, albeit indirectly, in
this earlier post, which discussed an essay by Steven Pinker written as a reaction to Haidt's work.
Today's post covers the first of Haidt's three main arguments, that:
1) People don't reason their way to decisions about what is moral, but instead have instinctive reactions and then rationalize those reactions after the fact. Haidt likens the rational, conscious part of the brain to a rider sitting on an elephant (the intuitive part of the brain, which makes the instinctive moral judgement) and argues that the rider has little control.
In fact, Haidt argues that the primary function of the rational part of the brain is not to conduct dispassionate analysis, but rather to come up with reasons to support whatever instinctive judgment the elephant (the intuitive part of the brain) has already come up with. Haidt recounts a number of experiments which support his thesis, including one where people were hypnotized to have negative associations with certain code words, and then read passages describing moral violations, some of which included the code word and some of which didn't. The researchers found that passages containing a negative code word led to stronger negative reactions from the readers. To their surprise, even a story which didn't describe any moral transgression at all, and simply said either that Dan "tries to take" topics that appeal to both professors and students in order to stimulate discussion, or the same thing but worded so that Dan "often picks" topics, led about a third of respondents who saw their code word to morally condemn Dan. When the researchers asked people to explain their reaction, those who reacted negatively to Dan said things like, "Dan is a popularity-seeking snob" or "I don't know, it just seems like he's up to something."
Haidt argues that the intuitive part of our brain is always active, instantly judging everything and everyone we come across as favourable or unfavourable, and then the 'rational' part of our brain steps in to provide reasonable-sounding arguments to support this position. In one study (echoed in the news recently), researchers found that people who were more intelligent were able to come up with more reasons to support whatever position they held, but greater intelligence did not help at all in coming up with reasons for the opposing point of view. In other words, being smarter just makes you better able to rationalize your own intuitive reactions, not better able to understand other opinions.
Haidt's view is that, in evolutionary terms, it was more important for people to be able to maintain their social reputation (by explaining their actions, constructing arguments to support their gut reactions, and so on) than it was for them to come to accurate conclusions about what was true.
Haidt does allow for some capacity of the rational part of the brain to do more than just support the intuitive part. He cites a study in which people who were forced to wait two minutes before responding to a stimulus were less likely to simply go with their gut reaction and more likely to come to a reasoned conclusion. But mostly Haidt is pessimistic about the ability of individuals to question their own biases or challenge their own intuitive reactions and beliefs. He believes that we need other people to challenge us, and that society needs a back and forth between people of different viewpoints, so that people are exposed to multiple perspectives and have a chance to update their opinions based on competing arguments rather than just constantly searching out more supporting evidence for what they already believe.
In the last chapter of the first section of the book, Haidt has a list of bullet points summarizing the argument so far: that we care obsessively about our reputation; that conscious reasoning is like a press secretary that argues on our behalf, not a scientist searching for truth; and that reasoning can take us to almost any conclusion because we ask "Can I believe it?" about things we want to believe and "Must I believe it?" about things we don't.
But I wanted to focus on his last bullet point, which is as follows:
"In moral and political matters we are often groupish, rather than selfish. We deploy our reasoning skills to support our team, and to demonstrate commitment to our team."
Atrios expresses this in characteristically pithy fashion, as "
It's tribal." and further notes that, "Policy preferences mostly aren't about narrow personal economic considerations, even for the rich."
It's interesting that Haidt focuses on the political realm, home to the guardian syndrome, which is filled with interpersonal ethics such as 'be loyal', as compared to the commercial syndrome, where the duty to other people is pretty much limited to not screwing them over (forgoing force and fraud). In this he is echoing some of the earlier works we have encountered, such as
Hans Ritschl, Howard Margolis and
Plato.
Disappointingly, Haidt does not really delve into the question of how or why commercial activity or science might lack the groupishness or tribalism that (in Haidt's view) is present in morality and in politics, or why politics in particular sees this tribal behaviour.
Labels: ethics, Jonathan Haidt, rationality, self interest