Crawl Across the Ocean

Tuesday, November 02, 2010

69. Selfishness, Altruism and Rationality, Part 2

Note: This post is the sixty-ninth in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

This week's topic is a continuation of last week's post on the book, Selfishness, Altruism and Rationality, by Howard Margolis

In the last post, we talked about how Margolis explained the failings of the traditional rational choice theory to explain human behaviour in situations where self-interest conflicted with group interest. This is a pretty common observation, generally taken for granted outside of economic circles, but Margolis goes a step further and proposes an alternative model of human motivation.

Margolis calls his model the Fair Share model and it is based on the notion that people feel a desire to 'contribute their fair share' to the public welfare. He describes the underlying motivation of people in this model as follows:

"The larger the share of my resources that I have spent unselfishly, the more weight I give to my selfish interests in allocating marginal resources. On the other hand, the larger the benefit I can confer on the group compared with the benefit from spending marginal resources on myself, the more I will tend to act unselfishly."


Margolis imagines that a person (who he calls 'Smith') contains two separate components, 'G-Smith' who values (Smith's perception of) the general welfare, and 'S-Smith' who values only Smith's personal welfare. Smith stays in equilibrium by adjusting the level of his spending on the public interest so that the marginal value of more public spending by Smith (to G-smith) equals the marginal value of more selfish spending (to S-Smith).
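To make this equilibrium idea concrete, here is a minimal numerical sketch. The functional forms and numbers are my own illustrative assumptions (not Margolis's equations): both components have diminishing marginal returns, and the weight on S-Smith's interests rises with the share of the budget Smith has already spent on the group.

```python
from math import sqrt

# A minimal sketch of the G-Smith / S-Smith equilibrium idea. Functional forms
# and numbers are illustrative assumptions, not Margolis's own equations.
BUDGET = 100.0

def marginal_group_value(g):          # G-Smith's marginal value of group spending
    return 1.0 / sqrt(g + 1.0)

def marginal_selfish_value(s):        # S-Smith's marginal value of selfish spending
    return 1.0 / sqrt(s + 1.0)

def selfish_weight(g):                # the more already given, the more selfish interests weigh
    return 1.0 + g / BUDGET

def imbalance(g):
    """Gap between the two weighted marginal values; zero at Smith's equilibrium split."""
    return abs(marginal_group_value(g) - selfish_weight(g) * marginal_selfish_value(BUDGET - g))

g_star = min((i * BUDGET / 10000 for i in range(10001)), key=imbalance)
print(f"equilibrium: {g_star:.1f} spent on the group, {BUDGET - g_star:.1f} spent selfishly")
```

With these assumptions Smith ends up contributing roughly a third of the budget to the group and keeping the rest - a substantial 'fair share' without giving everything away, which is the basic flavour of the model.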

Margolis argues that, from an evolutionary point of view, it would be easier for this sort of limited, 'fair share' altruism to be maintained over time, because it would be less vulnerable to being exploited by selfish people than an unlimited altruism that didn't keep track of how much a person had already sacrificed their personal interests for the public good.

Margolis further notes that,
"The notion that human beings might have the kind of dual preference structure posited by this study is very old, going back at least to Plato's distinction between man as a private individual and man as citizen."


In chapter 6, Margolis goes into more detail on how his model differs from the classical rational choice model, and helpfully unpacks some of the assumptions which are embedded within the rational choice model (but often go unstated or unnoticed):

1. Smith can be treated as narrowly self-interested
2. Smith's utility function is a goods function (i.e. he only cares about what goods people possess, not how they got them or what role he played in determining the allocation)
3. Smith chooses in conformity with the principle of consumer sovereignty (i.e. Smith thinks what's best for society is that everybody gets what they want, as opposed to Smith having a vision of what's best for society which might conflict with what other people want).
4. Smith has only one utility function (as opposed to having one for his own interests and one for the social interest).

As Margolis explains, economists, if confronted with these assumptions, might deny that they are a necessary part of the model, but after their denial they will go right back to building models and making predictions that only make sense if those assumptions hold.

He also explains how, in the marketplace, where the public interest and private interest are in alignment (subject to all the caveats we have discussed in this series), the difference between the predictions of his 'fair share' model and a traditional rational choice model that posits self-interested behaviour by all participants is not that big. It is primarily in political situations where the differences will be clearer, because there the contrast between private and social objectives is sharper.

If you've had the same struggles as I have over the years trying to pin down how economists come to the (often wrong-headed) conclusions they do, this chapter is a must read. Margolis is that rare bird who knows enough economics to be able to explain things clearly using the language of economics but has still retained enough common sense to be interested in models of people as they actually are as opposed to making unrealistic assumptions so as to have a model that is easier to work with mathematically.

---
Later on, Margolis talks about how his fair share model explains why people might act differently in different circumstances, pursuing their own interest in one and the public interest in another,
"If I am a producer facing reasonably competitive markets (even the experience of Ford in trying to promote safety features in automobiles in the mid 1950-s is instructive here), then I will scarcely be in a position to do anything very different than produce what the market seems to want. Even if my choices affect only me and my customers, I will not have any customers to benefit unless I offer them things they want at a price they are willing to pay. If there are external effects (environmental side-effects), the dilemma is even worse.

However, if I am in a senior position in my government, my decisions on public matters often affect society in a large way. This being so, there will be no necessary inconsistency between my behaving as a narrow profit maximizer (to a good approximation) as a private businessman; as a rather casual decision maker, as a voter, and as a very serious decision maker, working very hard and feeling great personal responsibility for the social effects of my decisions as a high public official.

...I wish to say enough here to indicate why [James] Buchanan's and [Gordon] Tullock's 'paradox of bifurcated man' seems, from the [Fair Share model] view, to reflect a mistaken assumption that an internally consistent model could not account for a disposition for the same individual to behave in a very public spirited way in some circumstances and as a profit-maximizing economic man in other contexts."


Now this just seems like common sense to me, but then consider the surprise of the experimenters who found, exactly as Margolis and his model would have predicted, that when they ran the exact same Prisoner's Dilemma experiment on the same people and changed only the name (in one case the 'Wall Street Game', in the other the 'Community Game'), people behaved very differently. As the abstract states,
"The results of these studies showed that the relevant labeling manipulations exerted far greater impact on the players’ choice to cooperate versus defect—both in the first round and overall—than anticipated by the individuals who had predicted their behavior." (emphasis added).


My one disagreement with Margolis in the passage above is that, after stressing the role played by competition in preventing public-interested behaviour in the marketplace, he fails to note how the lack of competition in the public sector is an essential component of allowing the pursuit of the public interest there (although, to be fair, it is somewhat implied in the text).

Finally, Margolis offers some interesting speculation on how caste might be partially explained by the fair share model. In the early stages of society, people who are more disposed towards public action would be more willing to undertake key tasks such as organizing irrigation schemes or a defensive army. In successful societies, these actions lead to large gains for the whole society, and those who were among the early organizers of the action would claim some of that gain for themselves, leading to greater wealth and influence. But under the fair share model, the more wealth you have, the more resources you will donate to the public interest. Richer people have more resources to donate in the first place, plus they donate a higher proportion of their resources, so there is a positive feedback whereby people with more power put more effort into the public realm which gets them more power in return and so on.

As Margolis says,
"As generations pass, the resulting division between those who manage and defend the state (often enough at real personal cost and risk) and those who labor comes to seem to accord with the natural order. What gives that presumption special potency is that there is some substance - something more than a self-serving myth - in the presumption that the noble and commoner are motivated in different ways. In terms of [the Fair Share model], that presumption is false at its root [because all people have an interest in their own welfare and the public welfare] but nevertheless consistent with observed behaviour. Our modern colloquial usage of words like 'noble' and 'peasant' is an anachronism but not necessarily a libel."


If you like economics but generally find economists irritating, and you are interested in the public welfare and the interaction between the two topics, this is a great book.


Tuesday, October 26, 2010

68. Selfishness, Altruism and Rationality, Part 1

Note: This post is the sixty-eighth in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

This week's topic is the book, Selfishness, Altruism and Rationality, by Howard Margolis

Margolis' goal in this book is to extend the economic theory of rational choice so that it covers political situations as well as economic ones.

He opens the book with a quote from James Coleman which eloquently outlines the problem, while also covering our now familiar choice between two versions of self-interest,
"Classical economic theory always assumes that the individual will 'act in his interest'; but it never examined carefully the entity to which 'his' refers. Often, as when households are taken as the unit for income and consumption, it is implicitly assumed that 'the family' or 'the household' is the entity whose interest is being maximized. Yet this is without theoretical foundation, merely a convenient but slipshod device. In this case, as in many others (e.g. when a man is willing to contribute much, even his life, to national defense, rather than use a strategy that will push the cost onto others), men act as if the 'his' referred to some entity larger than themselves. That is, they appear to act in terms, not of their own interest, but of the interest of a collectivity or even of another person. Indeed, if they did not do so, the basis for society could hardly exist.

Yet how can this be reconciled with the narrow premise of individual interest ... we could simply solve the problem by fiat, letting 'his' refer to whatever entity the individual appeared to act in the interest of. This would obviously make the theory trivially true, and never disconfirmable. A more adequate solution is one which states the conditions under which the entity in whose interests he acts will be something other than himself."


We saw in the last post that James Buchanan was willing to settle for a theory that based human motivation solely on the desire for material gain, arguing that the desire for material gain is always present to some degree in people.

But Margolis isn't willing to settle so easily,
"A satisfactory theory of social choice requires a model of individual choice that is consistent with the way human beings are observed to behave. Yet, even after a generation of work on the problem of applying the economic 'rational choice' perspective to social choice, often leading to striking results, this fundamental problem remains unresolved. We still lack a model that accommodates (without fudging) such obvious observations as that citizens bother to vote and do not always cheat when no one is looking. A resolution of this difficulty can be expected to require some departure from conventional assumptions."


Margolis goes on to indicate that, in his opinion, the main difference between situations which can be modelled fruitfully using the traditional model and situations requiring a new model is that the former are economic in nature whereas the latter are political in nature (echoes of Mancur Olson specifically indicating that his theories on collective action only applied to economic groups, not groups formed for non-economic reasons).

Says Margolis,
"This classical model is profoundly shaped by its root concern with the problems of the marketplace. But in politics we are dealing with goods allocated largely through some coercive process, not through voluntary market transactions; and political 'goods' (such as justice) are often inherently unmarketable. Nonmarket effects (externalities) which are aberrations - market failures, which one seeks to correct - for most economists are the central feature of political life for political scientists.

We can expect that Samuelson's notion of public goods (which can best be understood as a generalization of the notion of externalities) would play a central role in any viable formal theory of politics, and indeed that is the case. It is not too strong a statement to say that societies, and hence politics exist because public goods exist."


Margolis spends a chapter illustrating his argument that the classical rational choice model fails to handle political situations via a series of three examples:

* Voting
* Repeated Prisoner's Dilemmas
* Public Goods

In the case of voting, the rational choice model fails to explain why people might go to the trouble of voting even when they know their vote won't affect the outcome.

In the case of the repeated Prisoner's Dilemma, the model fails to explain why people will generally cooperate even though on any given iteration they could gain by defecting against the other player in the dilemma.

In the case of public goods, the model fails to explain why people will make contributions to things that are publicly available to everyone. Margolis asks us to imagine a hypothetical man named Smith who is planning a $10 donation to his favourite charity. The classical economic model says that Smith would do this because he wants the charity to have $10 more available to it than it does currently.

But now imagine Smith finds out that someone else has just donated $10 to the charity. Under the classical model, Smith, realizing that his favourite charity is now $10 richer just as he wanted it to be, no longer feels a need to make a donation.

Of course in reality there may be some relationship between how much money a charity has raised and how much people contribute, but it is nowhere near this strong a relationship. Clearly there must be something more to Smith's motivation than simply wanting the charity to be $10 richer, but the classical model has no answer to what that might be.

Margolis argues that there are two altruistic motivations that need to be taken into consideration. We have an altruistic motivation based on wanting other people to have more, and an altruistic motivation based on wanting to contribute our fair share (what Margolis calls 'participation').

Margolis also notes that all of his examples are prisoner's dilemma type situations, which is not surprising since the Prisoner's Dilemma is the formalization of situations where what is in the self-interest of participants is opposed to the group interest.

In the next post we will look at the solution that Margolis proposes in order to create a model of rational choice that can model human behaviour accurately in the case of prisoner's dilemma / public goods type situations.


Tuesday, August 10, 2010

63. The Stag Hunt

Note: This post is the sixty-third in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

Just a short post this week, more of an intro to next week's post than anything else. I covered most of this ground back here, but I wanted to formally include it in the series on ethics.

We've talked a lot here about the Prisoner's Dilemma, but another type of interaction / game that comes up when talking about ethics is the 'Stag Hunt.'

Wikipedia summarizes the Stag Hunt as follows:

"In game theory, the stag hunt is a game which describes a conflict between safety and social cooperation. Other names for it or its variants include "assurance game", "coordination game", and "trust dilemma". Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare. Each player must choose an action without knowing the choice of the other. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. An individual can get a hare by himself, but a hare is worth less than a stag. This is taken to be an important analogy for social cooperation.

The stag hunt differs from the Prisoner's Dilemma in that there are two Nash equilibria: when both players cooperate and both players defect. In the Prisoners Dilemma, however, despite the fact that both players cooperating is Pareto efficient, the only Nash equilibrium is when both players choose to defect."


Here's an example:

                    Adam
                 Stag      Hare
Eve    Stag     [2,2]     [0,1]
       Hare     [1,0]     [1,1]

Unlike in the Prisoner's Dilemma where Adam's best choice would be to defect (hunt Hare) no matter what Eve does, in this case, Adam's response depends on what Eve is doing. If Eve is cooperating (hunting Stag) then it makes sense for Adam to hunt stag (cooperate as well). If Eve isn't going to cooperate, then Adam shouldn't cooperate either.
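As a quick check on that reasoning, here is a small script (my own illustration, reading the payoff table above with entries as [Eve, Adam]) that tests every pure strategy profile for the 'no player can gain by deviating alone' condition:

```python
from itertools import product

# Pure-strategy Nash equilibrium check for the Stag Hunt table above.
# Payoffs are (Eve, Adam); Eve picks the row, Adam picks the column.
ACTIONS = ["Stag", "Hare"]
PAYOFFS = {
    ("Stag", "Stag"): (2, 2),
    ("Stag", "Hare"): (0, 1),
    ("Hare", "Stag"): (1, 0),
    ("Hare", "Hare"): (1, 1),
}

def is_nash(eve, adam):
    eve_pay, adam_pay = PAYOFFS[(eve, adam)]
    # Neither player can gain by deviating unilaterally.
    eve_ok = all(PAYOFFS[(alt, adam)][0] <= eve_pay for alt in ACTIONS)
    adam_ok = all(PAYOFFS[(eve, alt)][1] <= adam_pay for alt in ACTIONS)
    return eve_ok and adam_ok

for eve, adam in product(ACTIONS, ACTIONS):
    if is_nash(eve, adam):
        print(f"Nash equilibrium: Eve={eve}, Adam={adam}, payoffs={PAYOFFS[(eve, adam)]}")
```

It reports exactly two equilibria: both hunt stag, or both hunt hare.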

You can see how this dynamic sets up the two equilibria that Wikipedia mentioned:

1) A 'good' equilibrium where hunters catch deer, the most valuable game animal in the forest. Because the deer is elusive, catching it requires cooperation between the hunters.

2) A 'bad' equilibrium where the hunters don't cooperate, and are not able to catch the deer so they catch rabbits instead, which can be caught without cooperation, but are not as tasty and meaty as deer. Mmm, venison.

In the 'bad' equilibrium, the hunters know they could do better by working together to catch a deer, but because nobody can catch the deer on his own, and because nobody can be sure that others will help if he goes after the deer, it is safer to just catch rabbits rather than go off alone after the deer, get no help, and end up with nothing.


The description of the Stag Hunt - where you should cooperate if the other person does and defect if they don't - may also sound reminiscent of the Tit for Tat strategy that performed so well in the repeated Prisoner's Dilemma tournaments that Robert Axelrod described in 'The Evolution of Cooperation'. This isn't a coincidence - having a Prisoner's Dilemma repeat over time, and having the participants switch to defection if the other player defects, transforms the payouts from a Prisoner's Dilemma into the payouts from a Stag Hunt.

Basically, what happens is that the gain a person gets from betraying the other player in the first Prisoner's Dilemma is more than offset by the losses that follow because the other player is never again willing to cooperate with you. Taking this potential future loss into consideration, it becomes in your best interest to cooperate now - if you expect the other player to cooperate.
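Here is a rough numerical sketch of that transformation. The Prisoner's Dilemma payoffs (T=5, R=3, P=1, S=0) and the continuation probability are assumed values for illustration, and the 'conditional cooperator' simply defects forever after being betrayed:

```python
# A sketch of how repeating the Prisoner's Dilemma turns the one-shot payoffs
# into Stag Hunt-like payoffs. The PD payoffs (T > R > P > S) and the
# continuation probability w are illustrative assumptions, not values from the book.
T, R, P, S = 5, 3, 1, 0     # temptation, reward, punishment, sucker
w = 0.9                     # probability the game continues another round ("shadow of the future")

# Long-run payoffs when each player is either a conditional cooperator
# (cooperate until betrayed, then defect forever) or an unconditional defector.
both_cooperate   = R / (1 - w)             # mutual cooperation every round
defect_vs_coop   = T + w * P / (1 - w)     # one-time temptation, then mutual defection
coop_vs_defector = S + w * P / (1 - w)     # exploited once, then mutual defection
both_defect      = P / (1 - w)             # mutual defection every round

print(f"cooperate vs cooperator: {both_cooperate:.1f}")
print(f"defect    vs cooperator: {defect_vs_coop:.1f}")
print(f"cooperate vs defector:   {coop_vs_defector:.1f}")
print(f"defect    vs defector:   {both_defect:.1f}")

# Stag Hunt structure: cooperating is the best reply to a cooperator,
# defecting is the best reply to a defector -- two equilibria, as in the table above.
print("best reply to a cooperator is to cooperate:", both_cooperate > defect_vs_coop)
print("best reply to a defector is to defect:     ", both_defect > coop_vs_defector)
```

Whether cooperating is your best reply now depends entirely on what you expect the other player to do - exactly the structure of the Stag Hunt.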

This last part is the rub with the Stag Hunt, and you can see why it is also sometimes known as the 'Assurance Game' - if you could only assure the other player that you were going to cooperate (maybe by signing a contract, shaking hands, or by maintaining a high seller rating on ebay, etc.) then it would be in their interest to also cooperate.

In a way, the Stag Hunt is like a stepping stone on the road from the hopeless one-time Prisoner's Dilemma style interaction, to an outcome of mutually beneficial cooperation.

It's not just the possibility of the Prisoner's Dilemma repeating that can transform the interaction into a Stag Hunt; a moral principle that places merit on being 'nice' in the sense that Axelrod used it - starting off by cooperating with people, and only stopping cooperation if the other player betrays you first - could also change the payoffs in the Prisoner's Dilemma so that they resemble the Stag Hunt instead.

More on the Stag Hunt next week.


Tuesday, August 03, 2010

62. The Evolution of Cooperation (part 2 of 2)

Note: This post is the sixty-second in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

This post is a continuation of last week's post which began discussing The Evolution of Cooperation by Robert Axelrod and covered the first 6 chapters.

In chapter 7 Axelrod provides advice for how people should act if they are in a position where, rather than accepting that they are in a Prisoner's Dilemma and trying to choose the best action based on their situation, they can try to change the situation.

Axelrod notes that while in most cases people will want to influence the situation to encourage cooperation, there are some cases (such as when businesses collude, or in the original Prisoner's Dilemma when the interrogator was trying to get the Prisoners to rat on each other) where the goal will be to discourage cooperation.

So the following advice is for encouraging cooperation in the Prisoner's Dilemma - if you want to discourage cooperation, do the opposite.

1. Enlarge the shadow of the future.

What this means is that interactions between people should be structured so that the same people meet repeatedly rather than meeting a different person every time. It might also mean setting things up so that interactions are more frequent and spaced closer together in time. Basically any change that increases the potential risk of suffering retaliation if you choose to defect rather than cooperate with someone.

2. Change the payoffs

The bigger the payoff for defecting, the greater the temptation to do so. So reducing the payoff from defection or increasing the payoff from cooperation makes cooperation more likely. For example, the possibility of an audit helps encourage people to do their taxes accurately (a rough expected-value illustration follows this list).

3. Teach people to care about each other

This is really just another way to change the payoffs. If you care for the other person's wellbeing as much as your own, then there can never really be a Prisoner's Dilemma. And every increase in empathy reduces the element of dilemma in the situation.

4. Teach reciprocity

If people turn the other cheek and forgive each other 490 times or more, then this just allows people who exploit the suckers to prosper. If people repay kindness with kindness but also defection with defection, then this will help keep down the population of defectors and exploiters.

5. Improve recognition abilities

If you don't recognize the person who defected on you last time around, you won't be able to exact your revenge this time around. And if you can tell ahead of time whether someone looks like a cooperator or a defector, you are much better off than if you go in blind.
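The expected-value illustration promised under point 2: how the possibility of an audit changes what defection (cheating on your taxes) is worth. All the numbers are made up for the example.

```python
# The arithmetic behind "change the payoffs": an audit changes the expected
# value of cheating. All numbers here are illustrative assumptions.
savings_from_cheating = 1000.0   # tax avoided by under-reporting
fine_if_caught = 5000.0          # penalty paid on top of the tax if audited

for audit_probability in [0.05, 0.25]:
    expected_gain = ((1 - audit_probability) * savings_from_cheating
                     - audit_probability * fine_if_caught)
    print(f"audit probability {audit_probability:.0%}: "
          f"expected gain from cheating {expected_gain:+.0f}")
# At a 5% audit rate cheating still pays in expectation; at 25% it does not --
# which is the sense in which changing the payoffs encourages cooperation.
```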


In Chapter 8, Axelrod looks at some ways in which a basic scenario of randomly occurring interactions can be modified. Four different types of 'structure' are considered.

1) Stereotyping - stereotyping means that you judge people based on some easily observable characteristic such as the colour of their skin. Axelrod points out that these sorts of stereotypes can be self-reinforcing. If a blue person expects to be poorly treated by a green person (and vice-versa) then they have no reason to cooperate and will defect against each other. This mutual defection then reinforces the notion that those blue/green people never cooperate. While this has negative consequences for everyone in that opportunities for cooperation are missed, it has particularly negative consequences for whichever stereotyped group is in the minority, since they will face a lack of cooperation from a majority of the population.

2) Reputations - Axelrod points out that it is good to have a reputation as a bully (someone who will respond to any defection with a very heavy retaliation) since that will scare people into cooperating with you. The hazard of establishing this sort of reputation is that you must pass up the opportunities for cooperation that would come from forgiving people their past transgressions. And if more than one person is trying to establish a reputation as a bully, this can lead to a long series of defections until one gets the upper hand.

3) Government - The government needs to design the payoff structure such that most of the citizens will comply on their own. Punishment is generally reserved for making an example of the few people who do defect and reassuring everyone else that people are not getting away with breaking the laws. Once enough people begin to ignore a particular law - because they don't respect its legitimacy and the punishment/reward payoffs aren't set correctly - enforcing compliance becomes extremely difficult, because the cost of physically coercing a large percentage of the population is simply too high, and the law tends to break down (see Prohibition, the War on Drugs, marijuana).

4) Territory - Axelrod segues from talking about government to talking about territory by noting that, 'an interesting characteristic of governments that has not yet been taken into account is that they are based upon specific territories.'

Axelrod develops a formal territorial model in which each participant in the situation has four neighbours, one to the North, South, East and West. Each round, a participant plays a repeated Prisoner's Dilemma against the four neighbours and is assigned a score based on their combined result against those four neighbours. Then for the next round, if a participant finds that one of their neighbours did better than they did, they switch to using their neighbour's strategy. In this way, successful strategies can spread throughout the population.

Axelrod finds that in the territorial model, it is at least as hard (or harder) for a new strategy to invade a population using a given strategy (as compared to a model where people meet randomly). If a strategy is stable (resistant to invasion by other strategies) in a population that mixes randomly, it will be stable as well in a population that is organized territorially.

Another finding was that in a territorial model, where people imitate their best neighbour, strategies that do really well in some situations and poorly in others tend to do better than they would in a model where people mix randomly. The territorial nature of the model means that you end up with a greater diversity of strategies in certain areas, and this allows the inconsistent strategy to thrive and convert its neighbours in the areas where conditions are suitable, while dying away rapidly where conditions are unsuitable.
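Here is a small simulation in the spirit of the territorial model described above. It is a simplified sketch under my own assumptions - a wrapping grid, only two strategies (Tit for Tat and Always Defect), standard Prisoner's Dilemma payoffs and a fixed number of rounds per pairing - not Axelrod's actual implementation:

```python
import random

# A minimal sketch of a territorial imitation model: each cell plays a repeated
# Prisoner's Dilemma against its four neighbours, then copies its best-scoring
# neighbour if that neighbour did better. Parameters are illustrative assumptions.
SIZE, ROUNDS, GENERATIONS = 10, 20, 10
T, R, P, S = 5, 3, 1, 0

def tit_for_tat(opponent_history):
    return "C" if not opponent_history or opponent_history[-1] == "C" else "D"

def always_defect(opponent_history):
    return "D"

def score_pair(strat_a, strat_b):
    """Total payoff to strategy A over a repeated Prisoner's Dilemma."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(ROUNDS):
        a, b = strat_a(hist_b), strat_b(hist_a)
        total += {"CC": R, "CD": S, "DC": T, "DD": P}[a + b]
        hist_a.append(a)
        hist_b.append(b)
    return total

def neighbours(x, y):
    """Four neighbours to the North, South, East and West (grid wraps around)."""
    return [((x - 1) % SIZE, y), ((x + 1) % SIZE, y),
            (x, (y - 1) % SIZE), (x, (y + 1) % SIZE)]

random.seed(1)
grid = {(x, y): random.choice([tit_for_tat, always_defect])
        for x in range(SIZE) for y in range(SIZE)}

for _ in range(GENERATIONS):
    scores = {cell: sum(score_pair(grid[cell], grid[n]) for n in neighbours(*cell))
              for cell in grid}
    # Each participant imitates its best-scoring neighbour if that neighbour did better.
    new_grid = {}
    for cell in grid:
        best = max(neighbours(*cell), key=lambda n: scores[n])
        new_grid[cell] = grid[best] if scores[best] > scores[cell] else grid[cell]
    grid = new_grid

tft_count = sum(1 for s in grid.values() if s is tit_for_tat)
print(f"Tit for Tat holds {tft_count} of {SIZE * SIZE} cells after {GENERATIONS} generations")
```

Each generation, every cell plays its four neighbours and then imitates its best-scoring neighbour if that neighbour did better - the mechanism by which successful strategies spread territorially.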


Chapter 9 is a conclusion which basically just recaps everything that has come before.

The quick summary of Axelrod's results is that cooperation based on reciprocation (e.g. tit for tat style behaviour) can get started and can thrive in a wide variety of environments, and that it can withstand attempted 'invasions' by uncooperative strategies. Furthermore, the people doing the cooperating don't have to be friends, they don't have to possess foresight and they don't have to be rational. These characteristics might help, but they are not necessary, as shown by the evolution of cooperation between enemy soldiers on the front in WWI, cooperation between bacteria and so on. Furthermore, neither altruistic behaviour nor a central authority is required to maintain cooperation (at least in Axelrod's model where people still retain the capacity to retaliate after someone has defected against them).
Axelrod notes that Prisoner's Dilemma situations ...

As always, there is far more to the book than I covered here. The Evolution of Cooperation is noteworthy for the author's clear prose and its thorough take on one particular type of interaction, the repeated Prisoner's Dilemma with two participants. I recommend that anyone interested enough in the topic to have made it to the end of this post read it for themselves (if they haven't already).


Tuesday, July 27, 2010

61. The Evolution of Cooperation (part 1 of 2)

Note: This post is the sixty-first in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

The Evolution of Cooperation, by Robert Axelrod, is perhaps the most famous book ever written on the Prisoner's Dilemma, and possibly on Game Theory in general.

Reading it again, for the first time in a long time, I could see why it is so popular - it manages to cover a lot of ground with very clear, accessible prose.

The Evolution of Cooperation starts off by recounting a famous game theory tournament. Participants were invited to submit a strategy or 'rule' that would play a Prisoner's Dilemma against strategies submitted by other people. The strategies were paired up against each other in turn and played a repeated Prisoner's Dilemma against each other for a certain number of rounds. The goal was to achieve the highest possible point total, adding up across the matches against all the other strategies.

Recall that the nature of the Prisoner's Dilemma is such that, no matter what action your opponent takes, you will maximize your own total by defecting rather than cooperating. But by changing the situation from a single game to a repeated game, and by allowing participants to retain a memory of what happened before and by allowing them to clearly identify who they were playing against, the tournament introduced a strong signalling element into the Dilemma.

The tournament was won by the simplest strategy submitted, a strategy known as 'Tit For Tat.' Tit for Tat starts off by cooperating (Axelrod refers to strategies that start by cooperating as 'nice' strategies), and then each round it just reacts to what the strategy it is matched up with did the previous round. If the strategy it is playing against defected on the last round, Tit for Tat defects this round, and if it cooperated on the last round, Tit for Tat cooperates this round.
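To show the mechanics of such a tournament, here is a toy round-robin. The three rules are my own illustrative stand-ins rather than the actual entries, and the payoffs are the standard Prisoner's Dilemma values:

```python
from itertools import combinations

# A toy round-robin in the spirit of the tournament described above. The rules
# are illustrative stand-ins, not the actual entries; payoffs are standard PD values.
T, R, P, S = 5, 3, 1, 0
ROUNDS = 200

def tit_for_tat(opponent_history):
    return "C" if not opponent_history or opponent_history[-1] == "C" else "D"

def tit_for_two_tats(opponent_history):
    # Only defects after two consecutive defections by the opponent.
    return "D" if opponent_history[-2:] == ["D", "D"] else "C"

def always_defect(opponent_history):
    return "D"

def match(rule_a, rule_b):
    """Repeated Prisoner's Dilemma; returns the pair of total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        a, b = rule_a(hist_b), rule_b(hist_a)
        pay_a, pay_b = {"CC": (R, R), "CD": (S, T), "DC": (T, S), "DD": (P, P)}[a + b]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

rules = [tit_for_tat, tit_for_two_tats, always_defect]
totals = {rule.__name__: 0 for rule in rules}
for rule_a, rule_b in combinations(rules, 2):
    score_a, score_b = match(rule_a, rule_b)
    totals[rule_a.__name__] += score_a
    totals[rule_b.__name__] += score_b

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name}: {total}")
```

Even in this tiny field, the 'nice' rules finish ahead of Always Defect; the real tournaments had dozens of entries, but the same logic - doing well by eliciting cooperation rather than by beating any one opponent - is what carried Tit for Tat to victory.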

After the results of the first tournament were published, a second one with more entries was held, but Tit for Tat again turned out to be the winner.

Strategies aren't fixed over time, and people might change their approach if they see another approach that is working better. Or those using a poor strategy might die out (or get fired) and be replaced by someone with a better strategy. Or some people may simply decide to try a new approach that they thought up. Through these sorts of mechanisms, the distribution of strategies, or rules, being used in the population can evolve over time.

An evolutionarily stable strategy is one that, even if everybody in a population is using it, can't be invaded by some other strategy designed to take advantage of it. Axelrod notes that a population where everybody defects is evolutionarily stable, because it is not possible for anyone playing any sort of cooperative strategy to invade (they never meet anyone who will reciprocate their cooperation). But even a small cluster of cooperators can invade a much larger population of defectors if the conditions are right (because they will do well enough cooperating with each other to offset their poor results against the defectors).

But the converse is not true. A population where everybody plays a nice strategy like Tit for Tat can't be invaded by an 'Always Defect' strategy, because the Tit for Tat players will do better playing each other than an 'Always Defect' invader will do playing against them. This is a hopeful result (for those who like to see cooperation) since it suggests that a cooperative equilibrium is more stable than an equilibrium of mutual defection, and that even a small group of cooperators can sometimes thrive in a sea of defectors.
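An illustrative version of the cluster argument two paragraphs back (with assumed payoffs and an assumed probability w that any given pairing continues another round) looks like this:

```python
# A small cluster of Tit for Tat players enters a population of Always Defect.
# Payoffs and the continuation probability w are assumptions for the example.
T, R, P, S = 5, 3, 1, 0
w = 0.9   # probability of meeting the same partner for another round

v_tft_vs_tft = R / (1 - w)              # cluster members cooperate with each other forever
v_tft_vs_alld = S + w * P / (1 - w)     # exploited once by a native, then mutual defection
v_alld_vs_alld = P / (1 - w)            # natives mostly meet other defectors

# p is the share of a cluster member's interactions that are with other cluster
# members. The cluster can invade once its average score beats the natives'.
for p in [0.01, 0.05, 0.10, 0.25]:
    cluster_score = p * v_tft_vs_tft + (1 - p) * v_tft_vs_alld
    verdict = "invades" if cluster_score > v_alld_vs_alld else "fails"
    print(f"p = {p:.2f}: cluster {cluster_score:5.1f} vs natives {v_alld_vs_alld:5.1f} -> {verdict}")
```

With these numbers, a cluster whose members have even about 5% of their interactions with each other already out-scores the surrounding defectors.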

Based on the results of the tournaments, and the success of Tit for Tat, Axelrod offers the following suggested courses of action for doing well in a repeated Prisoner's Dilemma type situation:

1) Don't be envious

As we saw before, envy can transform an absolute gain into a relative loss and a positive sum situation into a zero-sum situation. A common theme throughout the book is the distinction between absolute gains, made possible by the non zero-sum nature of the Prisoner's Dilemma, and zero-sum situations where only relative gains are possible.

2) Don't be the first to defect

'Nice' rules, which don't defect first, will do well when playing with each other. This means that 'mean' rules, which defect first, will end up with lower scores against 'nice' opponents than 'nice' rules do.

3) Reciprocate both cooperation and defection

A failure to reciprocate cooperation leads to unnecessary defection on both sides. A failure to reciprocate defection (by defecting in return the next round) leads to being taken advantage of.

4) Don't be too clever

Unlike in a zero-sum game where you don't want your opponent to have any advantage, in a Prisoner's Dilemma it is important that those who are willing to cooperate recognize that you are willing to cooperate as well. Tit for Tat is a simple rule that helps other rules understand what they are dealing with and act accordingly. And since the best plan when facing Tit for Tat is to cooperate, rules will generally cooperate when they figure out that is the rule their opponent is using.

* * *

Moving along, Chapter 4 shows that friendship is not necessary for cooperation to develop by recounting the story of the 'live and let live' system that developed in the trenches during World War I where enemy units would cooperate by not killing each other, while facing off with each other across the same piece of ground for months at a time.

Chapter 5 shows that even creatures with very limited intelligence (e.g. bacteria) can engage in cooperation in Prisoner's Dilemma type situations. It also theorizes that the cooperation born from Kin Selection (the notion that it makes sense for us to evolve so that we are willing to make sacrifices for those we share genes with) might have provided a foothold of cooperation that could have spread into the sort of reciprocal tit for tat cooperation that would extend across larger groups of people, regardless of whether they are related or not.

I'll cover the rest of 'The Evolution of Cooperation' and talk about some of the implications of the ideas covered in it in next week's post.


Tuesday, June 08, 2010

56. Keystone Economics

Note: This post is the fifty-sixth in a series. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs.

----
"A chain is only as strong as its weakest link"

Proverb



I was driving across the prairies a couple of weeks ago (long story) and I noticed the sign on the Manitoba border claimed that Manitoba was the 'keystone' province, and it got me thinking.

Wikipedia notes that,
"The term [keystone] is used figuratively to refer to the central supporting element of a larger structure, such as a theory or an organization, without which the whole structure would collapse,"
while also noting that the actual meaning of keystone is
"the architectural piece at the crown of a vault or arch which marks its apex, locking the other pieces into position."


It was the literal meaning that reminded me of an old figurative use of the word, one we encountered a few posts back from David Hume,
"The happiness and prosperity of mankind, arising from the social virtue of benevolence and its subdivisions, may be compared to a wall, built by many hands, which still rises by each stone that is heaped upon it, and receives increase proportional to the diligence and care of each workman. The same happiness, raised by the social virtue of justice and its subdivisions, may be compared to the building of a vault, where each individual stone would, of itself, fall to the ground; nor is the whole fabric supported but by the mutual assistance and combination of its corresponding parts."


Imagine a prisoner's dilemma type situation with more than two participants. For the sake of example, let's say 10 people. But the cooperative benefit is only gained if all 10 people cooperate. If even just one person defects, then the whole effort of everyone else is wasted. You could imagine a game where 10 people choose to put money into a collective pot and if everyone contributes, the money is doubled, but any person is able to choose to take what the others have contributed instead of contributing themselves. Are there situations like this in real life? Well, in a military battle, even a single traitor can have a disastrous impact on his/her erstwhile allies. That may be one reason why treason is considered the most serious of crimes.

Or consider another type of prisoner's dilemma. In this one, the cooperative benefit is proportional to how many people cooperate. So if, say, 7 out of 10 people cooperate, then there will be some benefit, but not as much as if all 10 did. But now imagine that this dilemma is repeated over and over, and that people can see what the others are doing. After the first instance where the 3 defectors take advantage of the 7 cooperators, it seems likely that some of the 7 will cease to cooperate. As the number of cooperators drops, the number of people taking advantage of the remaining cooperators grows larger, and the pressure grows for everyone to defect.
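A small sketch of that unraveling dynamic (the tolerance thresholds, the multiplier and the payoffs are all made-up numbers for illustration) might look like this:

```python
# 10 players, a benefit proportional to the number of cooperators, and cooperators
# who give up once the number of defectors exceeds their personal tolerance.
# All numbers are illustrative assumptions.
N_PLAYERS = 10
MULTIPLIER = 2.0          # each contribution of 1 becomes 2, shared among all players

# A player cooperates as long as the number of defectors last round was no more
# than their tolerance; a tolerance of -1 means they always defect.
tolerances = [-1, -1, -1, 2, 3, 3, 4, 5, 6, 8]

defectors_last_round = 0
for round_number in range(1, 9):
    cooperators = sum(1 for t in tolerances if t >= defectors_last_round)
    pot_share = cooperators * MULTIPLIER / N_PLAYERS
    print(f"round {round_number}: {cooperators} cooperate, "
          f"cooperator payoff {pot_share - 1:+.1f}, defector payoff {pot_share:+.1f}")
    defectors_last_round = N_PLAYERS - cooperators
```

Cooperation starts at 7 out of 10 and collapses within a few rounds, as each wave of disillusioned cooperators pushes the next, more tolerant group past its own threshold.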

In 'The Efficient Society' Joseph Heath gave the example of littering as a situation where if there is no litter, then people feel embarrassed to litter themselves, but if they are surrounded by litter dropped by other people, the situation reverses and they feel embarrassed to be the sucker carrying their litter to the garbage instead of just dropping it.

These sorts of situations resemble a hand on a clock face that has two possible equilibria: one with the hand pointing up towards 12, which is unstable because any perturbation of the hand (i.e. defection by people in the dilemma) will cause it to fall towards the other equilibrium, with the hand pointing down towards 6. The equilibrium at 6 is stable because, even if you give the hand a little push, it will return to 6 due to gravity. Similarly, even if a few people try to start a move towards cooperation in the group prisoner's dilemma, unless they can get everyone involved, the effort is likely to fail.

Now, if you assume that people are hardwired to defect in prisoner's dilemma type situations then you might see the problem here as one of how to change the incentives of the situation so that it is in people's self-interest to cooperate. If, on the other hand, you believe that there are 2 (or more) types of people and that some people are inclined to cooperate while others are inclined to defect, you might see the question as being: how do the cooperators keep the defectors in line? This calls to mind another quote we encountered a while back, this time from Hans Ritschl,
"This understanding of the fundamental power of the communal spirit leads to a meaningful explanation of coercion in the state economy. Coercion is a means of assuring the full effectiveness of the communal spirit, which is not equally developed in all members of the community. Coercion forces the individual to act as if he were inspired by communal spirit. Coercion is only the outer clasp and fastening of the community, but if communal spirit be lacking, coercion can replace it only in part."



The main point of this post is that there are situations where the best outcome can only be achieved if everyone (or very close to everyone) is on board. In these situations, giving people freedom of choice means nothing more than allowing defectors to frustrate the desires of cooperators to achieve a better outcome. Now, you could say (if you had a very good memory) that I'm just rehashing the points Tom Slee made in his book that we covered back near the start of this series - given a prisoner's dilemma type structure, giving people a choice leads to inferior outcomes. And that's true, but what I wanted to emphasize this time around was three things:

1) That the repetition over time of a Prisoner's Dilemma situation can mean that even in situations which don't necessarily have an all or nothing outcome at first, there might be an all or nothing outcome over time
2) In a multi-participant dilemma, full cooperation and monopoly behaviour are equivalent descriptions.
3) If a large group wants to achieve cooperation over time, given different behaviour types regarding defection/cooperation, it is likely to be necessary for an element of coercion or punishment to be employed by the cooperators against the defectors

This is different than simple economies of scale where a larger effort is more productive (per amount of effort) than a smaller effort. This is not a case of natural monopoly as much as it is a case of necessary or efficient monopoly.


Tuesday, April 13, 2010

48. Self-Interest, Weak Reciprocity and Strong Reciprocity

Note: This post is the forty-eighth in a series. Click here for the full listing of the series.

The ideas in this post are generally taken from the book (a collection of essays) 'Moral Sentiments and Material Interests: The Foundation of Cooperation in Economic Life.' (edited by Herbert Gintis, Samuel Bowles, Robert Boyd and Ernst Fehr)

The central theme of the book is the notion of 'strong reciprocity,' but before defining 'strong reciprocity', I should cover the alternatives.

Most economic and biological theory these days starts from a premise that people are self-interested in the sense that given two options they will choose whichever one gives them the highest payout without being concerned about what payments everyone else gets.

The primary support for this viewpoint is the notion that if someone were to sacrifice their own good for someone else's, this would negatively impact their ability to reproduce, so it would not be a successful evolutionary strategy - i.e. 'nice guys finish last (in terms of number of descendants with their genes)'

Robert Axelrod is well known for his book 'The Evolution of Cooperation', in which he invoked 'the shadow of the future', showing that when people play a repeated prisoner's dilemma against each other, the most successful strategy is often 'tit-for-tat' - a strategy which cooperates with people who also cooperate and defects against people who defect. In the field of biology, Robert Trivers came to a similar conclusion: that what looked like altruistic behaviour (e.g. cooperation in a prisoner's dilemma) could actually benefit the person being altruistic, under the right circumstances (i.e. where there might be a later payback to this 'altruism').

The authors of 'Moral Sentiments and Material Interests: The Foundation of Cooperation in Economic Life.' refer to this self-interested cooperation in repeated Prisoner's Dilemmas as 'weak reciprocity' - weak because it is conditional on certain circumstances holding such that a person believes it to be in their long term interest to behave in a cooperative manner.

The book presents results from a number of experiments that demonstrate that, even in situations where people have no reasonable expectation of benefiting from their actions, a significant percentage of the population is predisposed to cooperation (with those who also cooperate) and will go out of their way to punish people who defect. In their words,
"Strong reciprocity is a predisposition to cooperate with others, and to punish (at personal cost, if necessary) those who violate the norms of cooperation, even when it is implausible to expect that these costs will be recovered at a later date."


The authors also address the notion that this behaviour could be 'an error' due to the fact that people just aren't used to dealing with this sort of non-repeated prisoner's dilemma type situation so they behave like strong reciprocators 'by accident'. As the authors note, from the standpoint of predicting behaviour, this is a distinction without a difference. They also note that the behaviour observed is quite stable over time and doesn't seem to go away as people get more and more experience playing the game in the experiments conducted.

One of the examples of an experiment conducted is an 'ultimatum game' where one player offers the second player some portion of $100. The second player can then accept the offer, meaning that the second player gets what was offered and the first player gets the rest, or reject it, meaning that both players get nothing. The self-interest theory suggests that the first player will keep as much as possible while still offering the second player something (e.g. offering $1 and keeping $99) and that the second player will accept this offer (because $1 is better than nothing).

Of course, as you might expect (if you are not an economist!), it turns out that people don't behave that way at all, with $50 being the most common offer, and people frequently rejecting offers that are less than $50, just to punish the first player for the ungenerous offer.
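Here is a sketch of why even a purely self-interested first player should not offer $1 once second players behave the way the experiments describe. The rejection thresholds below are made-up illustrative values, not data from the book:

```python
# Expected proposer earnings in an ultimatum game against responders who reject
# offers below a personal threshold. Thresholds are illustrative assumptions.
responder_thresholds = [0, 10, 20, 25, 30, 30, 40, 40, 50, 50]   # minimum offer each will accept
POT = 100

def expected_proposer_payoff(offer):
    accepted = sum(1 for threshold in responder_thresholds if offer >= threshold)
    acceptance_rate = accepted / len(responder_thresholds)
    return acceptance_rate * (POT - offer)

for offer in [1, 20, 40, 50]:
    print(f"offer {offer:>2}: proposer expects {expected_proposer_payoff(offer):.1f}")
```

Facing responders who punish ungenerous offers, the lopsided offer has a lower expected payoff than something close to an even split.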

The authors report the results from a cross-cultural study they did of behaviour in the ultimatum game. What they found was that, although none of the 15 societies they studied resembled the 'self-interest' model, there was a wide variety of behaviour patterns that seemed to vary based on the way each society operated. For example, in the Lamalera whaling society, the average offer by the first players was 57%, reflecting the fact that when the Lamalera people make a large catch, there is a meticulous distribution system set up so that everybody gets a 'fair share'. Meanwhile, the Hadza, a group of small scale foragers, generally made lower offers and had high rejection rates, reflecting the fact that although they do share meat, there is generally a lot of conflict involved in the process (e.g. people trying to hide their catches).

* * *

One of the things the authors (unsurprisingly) don't get into is whether strong reciprocity is more consistent with guardian or commercial type activities. But given that a key component of strong reciprocity is to 'take vengeance', and that it also seems to be connected with solving prisoner's dilemmas and public goods problems and with cooperation in large groups, it seems logical that strong reciprocity leans more towards guardian type situations than market exchange situations.

Later on in the book, the authors note that strong reciprocity can work to make communities a powerful force, but that it has a negative effect in that the effectiveness of community depends on the ability to include 'us' and exclude 'them' leading to a tendency of communities to 'be exclusive' - a member of the list of guardian ethics. At the same time, the authors note that a weakness of communities is the difficulty with achieving scale of operations, although they do not consider the potential use of hierarchy to overcome this weakness.

* * *

An interesting result of the experiments is that rather than there being a single type of person (as assumed in most models), it seems that people exist along a range - from a sizable portion of the population (~25% in some studies) that does behave like the standard economic model of self-interest suggests, through people who exhibit what seems like weak reciprocity, to a large number of people who exhibit strong reciprocity, and even further to a small part of the population who seem to be dedicated cooperators and will cooperate in almost all situations, even when their generosity is being taken advantage of repeatedly (i.e. in terms of the original prisoner's dilemma, they won't rat on their cellmate no matter how many times their cellmate gets off easy by ratting them out).

Given this heterogeneity in people's dispositions toward cooperation, we can see that it is a tricky problem to design societies and systems which allow the cooperative types to benefit from cooperation without being exploited by the self-interested types.

It reminds me of something I've previously quoted from Hans Ritschl's 'Community and Market Economy:'
"This understanding of the fundamental power of the communal spirit leads to a meaningful explanation of coercion in the state economy. Coercion is a means of assuring the full effectiveness of the communal spirit, which is not equally developed in all members of the community. Coercion forces the individual to act as if he were inspired by communal spirit. Coercion is only the outer clasp and fastening of the community, but if communal spirit be lacking, coercion can replace it only in part."


Even more so than usual, I am only scratching the surface of this densely packed book, which I highly recommend to anyone interested in the question of human cooperation.


Tuesday, December 22, 2009

32. Moral Conditions of Economic Efficiency, Part 3

Note: This post is the thirty-second in a series. Click here for the full listing of the series.

Chapter 4 of 'The Moral Conditions of Economic Efficiency', by Walter Schultz, takes on the notion that even though strict rational egoists may not be able to achieve economic efficiency immediately, their behaviour will settle into an efficient pattern as they (strictly rationally) adopt rules that prevent their selfishness from keeping them from achieving efficient outcomes.

For example, back here, I quoted a Washington Post article which read,
"[Alan] Greenspan had an unusual take on market fraud, Born recounted: "He explained there wasn't a need for a law against fraud because if a floor broker was committing fraud, the customer would figure it out and stop doing business with him."


Schultz first argues that, although coordination type situations (e.g. choosing which side of the road to drive on) enable strict rational egoists to form rules that are to everyone's benefit, exchange is not a coordination type situation. The reason is that a coordination situation allows everyone to achieve an optimal result, whereas in exchange, each person's best option is to get what the other person is offering without parting with anything themselves, by way of force or fraud if necessary. But it's not possible for both parties to come out ahead on exchange by cheating each other, so this makes exchange a collective action or Prisoner's Dilemma type problem.
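A quick way to see the contrast Schultz is drawing is to compare the two kinds of games directly. The payoff numbers below are my own illustrative values:

```python
# A coordination game (which side of the road to drive on) versus exchange
# modelled as a Prisoner's Dilemma. Payoffs are illustrative; each entry is
# (row player, column player).
coordination = {                      # both driving on the same side is all anyone wants
    ("left", "left"): (1, 1), ("left", "right"): (-1, -1),
    ("right", "left"): (-1, -1), ("right", "right"): (1, 1),
}
exchange = {                          # honest trade pays, but cheating an honest partner pays more
    ("honest", "honest"): (2, 2), ("honest", "cheat"): (-1, 3),
    ("cheat", "honest"): (3, -1), ("cheat", "cheat"): (0, 0),
}

def gain_from_deviating(game, good_profile, deviation):
    """How much the row player gains by unilaterally deviating from the good outcome."""
    _, col_action = good_profile
    return game[(deviation, col_action)][0] - game[good_profile][0]

print("coordination: gain from deviating =", gain_from_deviating(coordination, ("left", "left"), "right"))
print("exchange:     gain from deviating =", gain_from_deviating(exchange, ("honest", "honest"), "cheat"))
# Negative in the coordination game (the convention enforces itself),
# positive in the exchange game (hence the need for rules against force and fraud).
```

In the coordination game the convention enforces itself; in the exchange game each party gains by cheating an honest partner, which is why Schultz treats exchange as a collective action problem rather than a coordination problem.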

Next, Schultz argues that 'the shadow of the future', i.e. concerns about what might happen in the future, will not cause strict rational egoists to refrain from force and fraud. I'm not sure I quite follow Schultz's argument on this point, so I'll quote him,
"we have already shown that strict rational egoists will always choose the best feasible means to achieve their most highly valued social state, so when similar situations emerge [in the future] inefficient outcomes result."


As best I can tell, Schultz is arguing that strict rational egoists are not capable of prudence in the sense of weighing the benefits of theft/fraud now against the benefits of cooperation over the long run. Schultz could make the case that this sort of prudence is itself a moral rule that does not belong in our sketch of the strict rational egoist but he doesn't make this argument explicitly.

The question of whether (strict) self-interest leads to cooperative behavior in collective action problems that are repeated is one that has been much studied, and I will likely come back to it later on in the series. For now, I'll simply note that despite morals against force and fraud, and the presence of a government that will punish you if you are caught in such activities, we are far from eliminating these behaviours entirely. So the 'shadow of the future' (as game theorists refer to the effect where concerns about future results influence present decisions) may help some, but it seems incapable of playing the role Greenspan imagined it playing, where no rules against fraud are necessary.

In chapter 5, Schultz discusses externalities. He defines externalities as follows: "An externality is an uncompensated cost or benefit that may be intentional, accidental or incidental."

...and clarifies that...

"Acts of theft and fraud directly affect the well-being of consumers and exemplify intentional externalities. Harm resulting from negligence or from an accident exemplifies an accidental externality. Externalities also include incidental effects of the acts of production and consumption."

He goes on to comment that, "To assume that all externalities are absent and that every agent behaves competitively is to set aside the role of morality. The system of moral constraints presented in Chapter 6 secures competitive behaviour and eliminates intentional externalities but makes no provision for the internalization of accidental and incidental externalities."

Schultz then claims that:

1) A system of moral normative constraints precludes externalities due to intentional consequences of nonmarket action.

2) A system of moral normative constraints and conventions rectifies accidental and incidental externalities.

3) Moral normative constraints and conventions coordinate expectations and thereby reduce transaction costs

4) Moral normative constraints are the logical limits of the commodification of desire.

Schultz explains the first 3 points: "We have established the first claim. Claims (2) and (3) are based on the general goals of tort law, property law and contract law, respectively, and have been established."

To be honest, it wasn't clear to me how claims 2 and 3 have been established, but never mind.

Schultz says no more on the first 3 points and devotes the rest of the chapter to an explanation of point 4, arguing that the desires of people that we recognize in calculating the effects of externalities are limited by the rights that people need to have in order to secure economic efficiency. In other words, my desire to have you as a slave is not recognized as a valid preference, since if you don't have the autonomy to make your own decisions we won't achieve the same efficiency that we might have (because you can't pursue your preferences properly if you are my slave).


---

In chapter 6, Schultz sets out what he sees as the moral conditions of economic efficiency. Note that where I might say, for example, that people need to follow a moral rule to 'be honest', Schultz instead says, using the same example, that people have a 'right to true information' and that people also have a moral incentive to respect that right. It amounts to the same thing, as far as I can tell.

The Moral Conditions of Economic Efficiency per Schultz:

1) Property Rights - meaning that people can't mess with your stuff and you can do what you want with your stuff.

2) Right to True Information (that is relevant to a potential exchange) - meaning that you shouldn't tell your car insurance company that your car is just for personal use, when really you drive to work and back every day.

3) A right to welfare - Schultz recognizes that given a choice between stealing and starving, people will and should choose to steal, because the right to life takes precedence over the efficiency based rights. Plus Schultz makes an insurance argument (that seems a bit out of place) that it is more efficient for basic welfare to be assured centrally than for everyone to self-insure against deprivation.

4) A right to autonomy - without autonomy, people can't make exchanges that match their preferences, so autonomy is a precondition for trade as we understand it even being possible.

Schultz also notes that we need some mechanism by which people are held accountable for their behaviour in respecting these rights, as well as a set of conventions for setting prices, along with conventions and normative constraints for commodifying desire and for rectifying the results of accidental and incidental externalities.

---

I realize that this post doesn't really show all that clearly how Schultz gets to his final requirements, but that's likely because it wasn't all that clear to me reading the book.


Wednesday, July 22, 2009

20. Morals By Agreement, Chapter 3: Strategy (Game Theory)

Note: This post is the twentieth in a series. Click here for the full listing of the series.

This is the second of what should be a few posts on the book Morals by Agreement, by David Gauthier.

In the last post, we covered chapter 2 of Morals by Agreement, where Gauthier sketched out his view of what it meant for people to behave 'rationally' in situations which didn't involve other people who might themselves be trying to act rationally and whose actions might depend on our actions and vice-versa.

So in chapter 3, Gauthier extends his model of rational behaviour to cover what he refers to as 'strategic interaction' - that is, situations where people are interacting with each other, rather than acting on their own. Basically, in the terms I used earlier in this series, he is moving from rational actions to rational transactions.

The formal branch of knowledge that studies transactions is Game Theory. The most famous game in Game Theory is the Prisoner's Dilemma, which I introduced earlier in the series here.

For more background on game theory, here is the Wikipedia entry on game theory and here is the excellent Stanford Encyclopedia of Philosophy entry on game theory.

When it comes to game theory, examples are the way to go to gain understanding.

Consider a simple game, where Harold and Kumar are trying to meet up at a local restaurant for a meal, but they are not in communication with each other. However, they both know that there are only two restaurants in town, White Castle and Black Castle. Furthermore, they know that Black Castle is closed.

                             Harold
                      White          Black
Kumar     White       [5,5]          [3,0]
          Black       [0,3]          [2,2]

The best outcome is if they both meet at the White Castle (5 for both). For both Harold and Kumar, the next best option is if they go to White Castle and the other person goes to Black Castle - they don't get to meet up, but at least they can eat. For both players, the third best is to meet at Black Castle and the worst option is to go to Black Castle while the other person has gone to White Castle (alone AND hungry).

Gauthier defines rational behaviour in transactions, or 'strategic interaction', as follows:

A) Each person's choice must be a rational response (i.e. utility maximizing) to the choices she expects the others to make.
B) Each person must expect every other person's choice to satisfy A.
C) Each person must believe her choice and expectations to be reflected in the expectations of every other person.

So in the example above, Harold figures that if Kumar goes to White Castle, he (Harold) is better off going as well. But even if Kumar doesn't go to White Castle, Harold is still better off going to White Castle. So condition A sends Harold to White Castle. Condition B tells Harold that Kumar will follow a similar logic and also end up at the White Castle.

The outcome where Harold and Kumar both go to White Castle is what is known in game theory as an equilibrium outcome. What this means is that, given that both Harold and Kumar are choosing to go to White Castle, there is no reason for either of them to unilaterally change their choice. Compare that with the situation where Harold is going to White Castle and Kumar is going to Black Castle - this is not an equilibrium because in this situation, Kumar would be better off to change his strategy.
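
To make the equilibrium check concrete, here is a minimal sketch in Python (my own construction, not anything from Gauthier's book; the function name and payoff layout are just for illustration). It finds the pure-strategy equilibria of a small game by testing whether either player could do better by unilaterally switching.

# Minimal sketch: find the pure-strategy equilibria of a two-player game by
# checking whether either player gains from a unilateral deviation.
# Payoff pairs are (row player, column player), matching the [Kumar, Harold]
# convention in the table above.

def pure_equilibria(payoffs, row_strategies, col_strategies):
    equilibria = []
    for i, r in enumerate(row_strategies):
        for j, c in enumerate(col_strategies):
            row_payoff, col_payoff = payoffs[i][j]
            # No other row gives the row player a strictly better payoff...
            row_stays = all(payoffs[k][j][0] <= row_payoff for k in range(len(row_strategies)))
            # ...and no other column gives the column player a strictly better payoff.
            col_stays = all(payoffs[i][k][1] <= col_payoff for k in range(len(col_strategies)))
            if row_stays and col_stays:
                equilibria.append((r, c))
    return equilibria

# The restaurant game: rows are Kumar's choices, columns are Harold's.
white_castle_game = [
    [(5, 5), (3, 0)],   # Kumar goes to White Castle
    [(0, 3), (2, 2)],   # Kumar goes to Black Castle
]

print(pure_equilibria(white_castle_game, ["White", "Black"], ["White", "Black"]))
# -> [('White', 'White')]

As expected, the only outcome that survives the check is both players heading to White Castle.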

Now let's look at a different type of game. Consider the question of which side of the road to drive on. For now, imagine that only two people, Adam and Eve, live on a road; they need to agree on which side of the road to drive on, and they both own British cars that were designed for driving on the left.

                             Eve
                      Left             Right
Adam      Left        [2,2]            [-10,-10]
          Right       [-10,-10]        [1,1]

Note that there are two equilibrium outcomes in this game: one where both drive on the left and one where both drive on the right. Even though both Adam and Eve are better off if they both drive on the left, if for some reason they are currently both driving on the right, neither has an incentive to unilaterally change their strategy. Only by working together could they shift from the sub-optimal equilibrium to the optimal one. Unsurprisingly, this type of game is known as a coordination game, where the coordination needed has two parts: 1) making sure that both people pick the same outcome and 2) making sure the equilibrium they end up in is the optimal one.
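
Running the (hypothetical) pure_equilibria sketch from the restaurant example above on this game shows the point: both 'everyone drives left' and 'everyone drives right' survive the unilateral-deviation check, even though one of them is better for both drivers.

# Reusing the pure_equilibria() sketch from the restaurant example above.
driving_game = [
    [(2, 2), (-10, -10)],     # Adam drives on the left
    [(-10, -10), (1, 1)],     # Adam drives on the right
]

print(pure_equilibria(driving_game, ["Left", "Right"], ["Left", "Right"]))
# -> [('Left', 'Left'), ('Right', 'Right')]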

But what if we change the game slightly, so that Adam has a car designed for driving on the left and has always driven on the left, so he strongly prefers the left side? Meanwhile, Eve has a car that is designed to drive on the right, but she just got her license, so she is less attached to driving on one particular side.

                             Eve
                      Left             Right
Adam      Left        [5,2]            [-10,-10]
          Right       [-10,-10]        [2,3]

Again there are two equilibria, and Adam and Eve need to coordinate to make sure they drive on the same side of the road. But the situation is complicated now by the fact that Adam prefers the 'drive on the left' equilibrium while Eve prefers the 'drive on the right' equilibrium.

This now becomes a bargaining problem, one that has been much studied and argued over in the history of game theory. I bring it up here because Gauthier himself brings it up in chapter 3 - it will be useful to him later on in the book.


Finally, I won't go over the Prisoner's Dilemma again, but it is worth noting (as Gauthier does) that in the Prisoner's Dilemma, the equilibrium that Gauthier's rules for rational behaviour lead to is different from the (Pareto) optimal outcome. In other words, if people each pursue their own utility maximization, it will lead to sub-optimal outcomes: there are possibilities to make people better off without making anyone worse off, but those possibilities are placed out of reach by people's self-interested behaviour.
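
The same kind of check makes this visible with a standard Prisoner's Dilemma. Using made-up payoffs (higher is better) and the pure_equilibria sketch from earlier in this post, the only equilibrium is mutual defection, even though mutual cooperation would leave both players better off.

# Illustrative Prisoner's Dilemma payoffs (higher is better), reusing the
# pure_equilibria() sketch from the Harold and Kumar example.
prisoners_dilemma = [
    [(3, 3), (0, 5)],   # row player cooperates
    [(5, 0), (1, 1)],   # row player defects
]

print(pure_equilibria(prisoners_dilemma, ["Cooperate", "Defect"], ["Cooperate", "Defect"]))
# -> [('Defect', 'Defect')]   (but (Cooperate, Cooperate) is the Pareto-superior outcome)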

Gauthier will argue that morality consists of the constraints necessary to generate optimal outcomes for 'rational' people.


Thursday, June 11, 2009

15. The Logic of Collective Action

Note: This post is the fifteenth in a series. Click here for the full listing of the series.

The Logic of Collective Action, by Mancur Olson, is a pretty straightforward book, at heart. There is one primary argument which is at once profound and straightforward:

Almost all group activity has a prisoner's dilemma type structure.

The argument is as follows:
1) People form groups to further their interests
2) The interests that the group is intended to further are non-excludable* in the sense that the benefits will accrue to all group members, regardless of their contribution towards achieving them.
3) People have an incentive to 'free ride' by getting the benefits of group membership while not contributing personally towards the achievement of group goals.

* Olson calls these public goods but acknowledges that the 'non-rival' component of the standard definition of public goods is not all that relevant to his argument

For example:

1) a labour union is formed to further its members' (the workers') interests.
2) the goals that the union pursues (higher wages, better working conditions, etc.) will benefit all union members.
3) Maintaining an effective union takes work (dues need to be paid to support it financially, people have to be willing to go on strike and forego their paycheck if necessary, etc.) and workers have an incentive to try to gain the benefits from the union without contributing towards achieving its objectives.

The notion is that if other people are going to cooperate (i.e. contribute towards group goals), then the best approach for a person is to defect or free-ride (i.e. not contribute towards group goals). And if other people aren't going to contribute towards group goals then you are still best off to also not contribute.

This is equivalent to the original Prisoner's Dilemma scenario where regardless of what your fellow prisoner does, the best option is to rat them out and confess.

Olson is primarily arguing against those who assume that group members always have individual interests that are aligned with the group interest.

Olson figures that in small enough groups, the members may each get enough benefit for it to be worth it to contribute (although contribution levels will still be less than the optimal or efficient level), but for larger groups there are unlikely to be any contributions (or any group) at all.

Taking the example of labour unions, Olson notes that historically unions started from small groups and only later merged into bigger organizations.

For large or 'latent' groups, as Olson calls them, to maintain themselves, they either need to make membership in the group mandatory to prevent free riding, or they need to provide a specific benefit to group members that justifies their membership beyond the public benefit. This is why unions want a closed shop and why professional associations (e.g. bar associations) want the same thing.

It is also why small groups of people with a common interest are often more powerful than large groups with a common interest. Olson notes that business lobbyists are effective within a particular industry, where there are typically only a handful of players, but lobby groups that represent business as a whole are weak because the individual members have little incentive to support the general business lobby group.

One way around this dilemma, Olson notes, is for large groups to use a federated structure, much in the way that most large unions are organized with locals and umbrella groups. Another similar option, that Olson doesn't mention, but I want to throw in so I don't forget about it later, is to use a hierarchical structure, where there are small groups that report to a single superior, and then a small group of superiors who report up to another higher superior, and so on.

Naturally, the larger the group, the less likely it is that you can count on everyone else to contribute, which in turn makes you less likely to contribute yourself; so the group goals get harder and harder to achieve as the group gets larger.

On the topic of large groups, Olson notes that even the state, which can appeal to patriotism and which provides large economic benefits to its citizens by virtue of its existence, cannot rely on voluntary contributions and instead must compel people to pay taxes.

Unfortunately, Olson does not pursue this further to inquire why, under a democracy where voters could presumably elect those politicians promising to eliminate taxes, taxes still remain, or to consider the possibility that a significant number of people are happy to pay taxes as long as they know that everybody else is paying a fair share as well - meaning that the compulsory nature of taxation might only be down to a small group of anti-social people who would try to free-ride. Or, in other words, the ironic prospect that it might only be due to the presence of libertarians that we need a coercive state.

We can see the limitations of Olson's concerns about the impossibility of collective action in large groups more clearly from a passage on page 64,
"Some critics may protest that even if social pressure does not exist in the large or latent group, it does not follow that the completely selfish or profit maximizing behaviour, which the concept of latent groups apparently assumes, is necessarily significant either; people might even in the absence of social pressure act in a selfless way. But this criticism of the concept of the latent group is not relevant, for that concept does not necessarily assume the selfish, profit-maximizing behaviour that economists usually find in the marketplace. The concept of the large or latent group offered here holds true whether behaviour is selfish or unselfish, so long as it is strictly speaking "rational".

...

"A man who tried to hold back a flood with a pail would probably be considered more of a crank than a saint, even by those he was trying to help. It is no doubt possible infinitesimally to lower the level of a river in a flood with a pail ... but the effect is imperceptible, and those who sacrifice themselves in the interest of imperceptible improvement may not even receive the praise normally due selfless behaviour."


It's hard to imagine a poorer choice of analogy. Anyone who's ever lived near a river, and even most who haven't, probably appreciates that you don't hold back a flood with a pail (what would you do with the water in your pail?), you hold back a flood with sandbags. I mention the word 'sandbag' and what mental image comes into your head? If you're like me it's the image of a large group of people working day and night often with the help of many selfless volunteers to build walls of sandbags to prevent a river from flooding a community - exactly what Olson is arguing will not happen.

Moving on, there's a very interesting passage on page 100 where Olson quotes at length from Hans Ritschl in 'Community Economy and Market Economy',
"The fatherland and mother tongue make us all brethren together. Anyone is welcome to the exchange society who obeys its regulations. But to the national community belong only the men and women of the same speech, of the same ilk, the same mind ... Through the veins of society streams the one, same money; through those of the community, the same blood...

Any individualist conception of "the State" is a gross aberration ... [and] nothing but a blind ideology of shopkeepers and hawkers.

The State economy serves the satisfaction of communal needs ... If the State satisfies needs which are purely individual, or groups of individual needs which can technically be met otherwise than jointly, it does so for the sake of revenue only.

In the free market economy the economic self-interest of the individual reigns supreme and the almost sole factor governing relations is the profit motive, in which the classical theory of the free market economy was appropriately and securely anchored. This is not changed by the fact that more economic units, such as those of associations, cooperatives or charities, may have inner structures where we find motivations other than self-interest. Internally, love or sacrifice, solidarity or generosity may be determining: but irrespective of their inner structures and the motives embodied therein, the market relations of economic units with each other are always governed by self-interest.

In the exchange society, then, self-interest alone regulates the relations of the members; by contrast, the state economy is characterized by communal spirit within the community. Egotism is replaced by the spirit of sacrifice, loyalty and communal spirit ... This understanding of the fundamental power of the communal spirit leads to a meaningful explanation of coercion in the state economy. Coercion is a means of assuring the full effectiveness of the communal spirit, which is not equally developed in all members of the community.

The objective collective needs tend to prevail. Even the party stalwart who moves into responsible government office undergoes factual compulsion and spiritual change which makes a statesman out of a party leader ... There is not a single German statesman of the last 12 years .. who escaped compliance with this law."


To which Olson comments,

"Ritschl's argument is exactly the opposite of the approach in this book. He assumes a curious dichotomy in the human psyche such that self-interest rules supreme in all transactions among individuals, whereas self-sacrifice knows no bounds in the individual's relationship to the state and to the many types of private organizations."


Personally, I was instantly struck by the echoes of Jane Jacobs' 'Systems of Survival' in Ritschl's passage, with his references to an exclusive state sector with a spirit of loyalty, sacrifice, coercion and communal spirit, in contrast with a cosmopolitan commercial sector open to all and characterized by pursuit of self-interest. Not to mention Olson's response describing Ritschl's 'curious dichotomy'!

Thinking of Jacobs helps us find the passage in Olson's work that reconciles his views with the 'exactly opposite' views of Ritschl - footnote 17 on page 61, which says,

"In addition to monetary and social incentives, there are also erotic incentives, psychological incentives, moral incentives and so on. To the extent that any of these types of incentives leads a latent [large] group to obtain a collective good, it could only be because they are or can be used as 'selective incentives," i.e., because they distinguish between those individuals who support action in the common interest and those who do not. Even in the case where moral attitudes determine whether or not people will act in a group-oriented way, the crucial factor is that the moral reaction serves as a "selective incentive." If the sense of guilt, or the destruction of self-esteem, that occurs when a person feels he has forsaken his moral code, affected those who had contributed toward the achievement of a group good, as well as those who had not, the moral code could not help to mobilize a latent group.

To repeat: the point is that moral attitudes could mobilize a latent group only to the extent they provided selective incentives. The adherence to a moral code that demands the sacrifices needed to obtain a collective good therefore need not contradict any of the analysis in this study; indeed this analysis shows the need for such a moral code or for some other selective incentive." (my italics)


So on page 100, Olson is baffled by Ritschl's argument that people will abide by a collectivist moral code when working in collective activities and a self-interested moral code when working in market activities, suggesting that this was the opposite of his argument.

But back on page 61, Olson came out and said that one way to overcome the problems of collective action was to employ a special moral code for collective activities which puts the group interest ahead of the selfish interest - something that his theory demonstrates the need for!


I'll close this post with a passage from David Hume's 'A Treatise of Human Nature' that Olson, in footnote 53 (many of the most important parts of the book are in the footnotes), says was pointed out to him by John Rawls,

"There is no quality in human nature which causes more fatal errors in our conduct, than that which leads us to prefer whatever is present to the distant and remote, and makes us desire objects more according to their situation than their intrinsic value. Two neighbours may agree to drain a meadow, which they possess in common; because it is easy for them to know each others mind; and each must perceive, that the immediate consequence of his failing in his part, is, the abandoning the whole
project. But it is very difficult, and indeed impossible, that a thousand persons should agree in any such action; it being difficult for them to concert so complicated a design, and still more difficult for them to execute it; while each seeks a pretext to free himself of the trouble and expence, and would lay the whole burden on others. Political society easily remedies both these inconveniences. Magistrates find an immediate interest in the interest of any considerable part of their subjects. They need consult no body but themselves to form any scheme for the promoting of that interest. And as the failure of any one piece in the execution is
connected, though not immediately, with the failure of the whole, they prevent that failure, because they find no interest in it, either immediate or remote. Thus bridges are built; harbours opened; ramparts raised; canals formed; fleets equiped; and armies disciplined every where, by the care of government, which, though composed of men subject to all human infirmities, becomes, by one of the finest and most subtle inventions imaginable, a composition, which is, in some measure, exempted
from all these infirmities."


Monday, May 25, 2009

12. Self-Interest (part 1)

Note: This post is the twelfth in a series. Click here for the full listing of the series.

Here’s a question which sounds straightforward but can be hard to nail down – what does it mean for someone to act in their ‘self interest’?


In trying to research the answer to this question, the best answers I found were these two similar blog posts on the topic (by Danny Shahar and Alonzo Fyfe).

Here, Danny explains the typical economist's usage of the phrase 'self-interest',

"I am told that within the discipline of economics, what it means to say that a person "acted in her own self-interest" is that a person "acted according to her own interests." The idea here is that all action demonstrates preference, and that this necessarily means that the actor preferred the action that was taken to all other actions. So if I jump on a grenade in order to save my friends, what I have demonstrated is that I preferred to jump on the grenade over all other alternatives that I considered, and it's fair to say that I wanted to jump on the grenade; that out of all available alternatives, the one I consider the best is the one where I jump on the grenade so that my friends live. I'm down with that.

When I jump on the grenade because I want to save my friends, I take it to be uncontroversial that I do so according to my own interests. How could it be otherwise? And if what we mean by "self-interest" is simply that I act according to my own interests, then yes, my jumping on the grenade is self-interested."


Of course if you're like me, or most people I suspect, you don't associate the phrase 'self-interest' with jumping on a grenade to save the lives of your friends.

As Danny points out, there is a dramatic difference between the economic usage of the expression and the English language usage of the phrase, and this causes trouble.

On the one hand, the economic usage takes self-interest to mean any interest that the self has; on the other, in normal English the phrase means taking an interest in oneself, or taking an action where the object of that action is yourself.

Danny notes,
"So if my sister were sick, I might go get her some medicine. To say that my getting the medicine is "self-interested" would mean, to the lay person, that I get the medicine in order to promote some self-directed end. That is, I get the medicine because, perhaps, I am happier when my sister is not sick, or my sister is irritating when she's sick, or there's a cute pharmacist who will think I'm sweet for taking care of my sick sister. The lay-person, then, would call "non-self-interested" or "selfless" an interest with an object which does not directly involve the actor. So I act selflessly if the reason I go get the medicine is that I value my sister's health for its own sake, and am willing to take on the costs necessary to promote her health."

To the economist, however, all action is self-interested, since every action you take was presumably taken for a reason, and that reason reflects your interest in taking that particular action.

Alonzo describes the two different meanings, as follows:

"(a) "Interests in self" [the typical 'laymans' usage of the term]

(b) "Interests of self" [the economic usage of the term]"


Sometimes people will muddy the waters further by referring to (a) as 'narrow' self-interest and (b) as 'enlightened' self-interest, but this isn't really helpful.

Alonzo describes the confusion caused by the two conflicting meanings as follows:

"I am going to assert that most people who hear or read the phrase, 'rational self-interest' immediately call to mind the narrower 'interests in the self' definition. To make matters worse, the 'rational self-interest' theorist often asserts this same definition. Then the listener/reader starts to raise all sorts of objections to this 'interests in the self' concept. In responding to this, the 'rational self-interest' theorist equivocates. He switches to the concept of 'interests of the self' to defend himself from objections to the 'interests in the self' concept, claiming that this is what he meant all along. Yet, when asked for a specific definition, the defender of rational self-interest goes right back to using the 'interests in the self' definition.

After which, the listener walks away mumbling to himself, 'those guys are nuts.'"


Danny makes a similar objection, and further notes that in the economist's conception of the term, the possibility of altruistic or selfless acts has been defined out of existence, which is not helpful since there are a wide range of actions that are routinely categorized by people as being 'selfless'.

He also notes that economists themselves, being English speakers before they were Economists, are prone to confusing the two meanings themselves.

There is some usefulness in the economic meaning of the term, since it can remind us that just because people are in a group or organization of some sort doesn't mean that they suddenly take on the motivations of the group for their actions - they are still 'self-interested' in the sense that they follow their own reasons in deciding what to do. But this is a pretty marginal usefulness compared to the much richer distinction made in the regular English usage, in which self-interested acts are made with regard to the effect on the self, and selfless acts are made with regard to the effect on someone else.

So for this series of posts (and in general) I will attempt to use the phrase 'self-interested' solely in the English sense of meaning actions with regard to the effect on the self, i.e. not selfless. If I find the need to use the economic definition, I'll make it explicit what I am referring to.

In order to make this all a bit clearer, let's return to the example of the Prisoner's Dilemma.

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated both prisoners, visit each of them to offer the same deal. If one testifies (defects) for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full 10-year sentence. If both remain silent, both prisoners are sentenced to only one year in jail for a minor charge. If each betrays the other, each receives a five-year sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?


If we assume that the Prisoners are self-interested, meaning that they place no weight on what happens to the other Prisoner, then the payoffs are as follows (each pair of brackets represents the jail time of Prisoner 1 followed by that of Prisoner 2):

                                      Prisoner 2
                               No Confession    Confess
Prisoner 1   No Confession         [1,1]        [10,0]
             Confess               [0,10]       [5,5]

If both prisoners are purely self-interested and don't care about the other prisoner, then we will end up with both confessing and they both serve 5 years.

Now consider what happens if both Prisoners are purely selfless, in the sense that they care 100% about the other Prisoner, and care nothing about their own fate. Now the payoffs look like the following:

                                      Prisoner 2
                               No Confession    Confess
Prisoner 1   No Confession         [1,1]        [0,10]
             Confess               [10,0]       [5,5]


This time, both prisoners refuse to confess, each hoping that the other will confess and so get away with no sentence. Their own refusals prevent this, however, and we end up with neither confessing, so they both serve one year.

Finally, let's say that both prisoners apply a fairness rule which says that all people are valued equally, so they give equal weight to their own potential jail time and the other prisoner's potential jail time. Now the payoffs look like the following:

                                      Prisoner 2
                               No Confession    Confess
Prisoner 1   No Confession         [1,1]        [5,5]
             Confess               [5,5]        [5,5]

Here, the values in the brackets represent the average sentence given to the prisoners (since they weight each prisoner the same, a sentence of 10 years to one and 0 to the other is equivalent to 5 each), and we can see that both prisoners have a clear motivation not to confess.
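
To see how the weighting drives the result, here is a small self-contained sketch (again in Python, my own construction; the names and the weight parameter w are just for illustration). It rescores the original jail times by how much weight each prisoner places on the other's sentence, then checks which outcomes neither prisoner would unilaterally move away from.

# Jail times for (prisoner 1 choice, prisoner 2 choice) -> (P1 years, P2 years)
jail = {
    ("silent", "silent"):   (1, 1),
    ("silent", "confess"):  (10, 0),
    ("confess", "silent"):  (0, 10),
    ("confess", "confess"): (5, 5),
}

def perceived_cost(own, other, w):
    # w = 0: purely self-interested; w = 1: purely selfless; w = 0.5: equal weight
    return (1 - w) * own + w * other

def equilibria(w):
    choices = ["silent", "confess"]
    results = []
    for c1 in choices:
        for c2 in choices:
            y1, y2 = jail[(c1, c2)]
            cost1 = perceived_cost(y1, y2, w)
            cost2 = perceived_cost(y2, y1, w)
            # Prisoner 1 stays put if no alternative lowers his perceived cost...
            p1_stays = all(
                perceived_cost(jail[(alt, c2)][0], jail[(alt, c2)][1], w) >= cost1
                for alt in choices
            )
            # ...and likewise for prisoner 2.
            p2_stays = all(
                perceived_cost(jail[(c1, alt)][1], jail[(c1, alt)][0], w) >= cost2
                for alt in choices
            )
            if p1_stays and p2_stays:
                results.append((c1, c2))
    return results

for w in (0, 1, 0.5):
    print(w, equilibria(w))
# 0   -> [('confess', 'confess')]
# 1   -> [('silent', 'silent')]
# 0.5 -> [('silent', 'silent'), ('confess', 'confess')]
#        (with equal weighting every deviation leaves the perceived cost at 5,
#         so mutual confession survives as a weak equilibrium, but staying
#         silent is clearly the better of the two outcomes)

Note that the third case matches the table above: once the sentences are averaged, every cell except the mutual-silence one looks the same (5 years each), so nobody gains anything by confessing.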

Oddly enough, if you change the rules a little so that if one prisoner confesses and the other doesn't, the jail time for the one who doesn't confess is 20 years, then you get the following:

                                      Prisoner 2
                               No Confession    Confess
Prisoner 1   No Confession         [1,1]        [10,10]
             Confess               [10,10]      [5,5]

This has now become a coordination game: whether the prisoners confess or don't confess, the main thing is that they both pick the same option.

I'm not sure if that has any significance, I just thought it was kind of odd.

To sum up, I'll refer to self-interested or selfish actions as those which, in making the decision, place weight solely or primarily on the consequences for the self, with altruistic or selfless actions referring to those which take into account the consequences for others as well.
