Crawl Across the Ocean

Wednesday, June 27, 2012

107. The Righteous Mind, Part 4

Note: This post is the one hundred and seventh in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

This week the topic is the book, "The Righteous Mind" by Jonathan Haidt. Having read the book, I highly recommend the NY Times review of it as an excellent summary.

Note that we have encountered the work of Jonathan Haidt before, albeit indirectly, in this earlier post which discussed an essay by Steven Pinker, which was written as a reaction to Haidt's work.

Today's post covers the third of Haidt's three main arguments, that people are 90% chimp and 10% bee, meaning that people are a mix of self-interested and group-interested.

In this section of the book, Haidt eventually defines 'moral systems' as,
"interlocking sets of values, virtues, norms, practices, identities, institutions, technologies and evolved psychological mechanisms that work together to suppress or regulate self-interest and make cooperative societies possible."

This definition follows a lengthy argument by Haidt trying to justify the notion that people could have evolved to be cooperative in nature (10% bee), which he no doubt felt was necessary because this is a controversial position to take in the current academic environment. But I didn't need any convincing on that score, so I'm not going to dwell on that aspect of this section of the book.

Instead, let's look more closely at the definition of moral systems spelled out by Haidt.

The first point to make is that it seems a bit narrow, denying the possible existence of morality outside of a social context. I tend to agree more with Francis Fukuyama, who, if we recall from an earlier post in the series, stated that,
"The capacity for hard work, frugality, rationality, innovativeness, and openness to risk are all entrepreneurial virtues that apply to individuals and could be exercised by Robinson Crusoe on his proverbial desert island. But there is also a set of social virtues, like honesty, reliability, cooperativeness and a sense of duty to others, that are essentially social in nature."
More recently, we saw that Ayn Rand spokesman John Galt made the point even more clearly,
"You who prattle that morality is social and that man would need no morality on a desert island - it is on a desert island that he would need it most. Let him try to claim ... that he will collect a harvest tomorrow by devouring his stock seed today - and reality will wipe him out, as he deserves."
Ayn Rand, in a point echoed more recently by Joseph Heath, emphasized further that there are situations (e.g. corporations that might mutually benefit from cooperative price-fixing) where the pursuit of self-interest at the expense of cooperative behaviour is the best course of action. True, Haidt might argue that this falls under the 'regulation' of self-interest, but it doesn't seem like this sort of thing is what he had in mind.

More generally, Haidt suggests the existence of moral systems, denies that he is a moral relativist who believes that all moral systems are equally valid, and identifies two distinct moral casts of mind present in the population ("WEIRD" and "Normal"), but he never makes any real attempt to categorize what sorts of moral systems might exist or what might make one moral system superior to another.

This is not really a criticism; Haidt has already covered a lot of ground. It's just that he goes quite far and then mostly stops short of addressing the questions I've been pursuing in this series: why do some moral systems apply in some contexts and not others, what makes one moral system superior to another, and so on.

Haidt spends a chapter explaining his viewpoint (which I mostly share) that religion may be wrong (supernaturally speaking) but is nevertheless useful (here on earth) because it helps bind communities together and support cooperative efforts (such as feeding the poor or inquisiting heretics). Throughout the third section of the book he emphasizes that our groupish behaviour typically applies only to whichever group we identify with, not to the human race as a whole, but that experimental results have shown that as people become more groupish in a situation, the increased love for the in-group outweighs any increased hate for out-groups.

Haidt briefly (page 266) seems to suggest that religion is beneficial to trade, "In the medieval world, Jews and Muslims excelled in long-distance trade in part because their religions helped them create trustworthy relationships and enforceable contracts." However, he doesn't go on to note that Christians were certainly quite religious during the medieval period as well, or that in modern times, the nations with the highest standard of living tend to be the least religious. Similarly, he doesn't spend any time on the relationship between religion and scientific inquiry. Coming from an Irish background, I can see that religion supports social cohesion, but I might take some convincing that greater religiosity coincides with greater commercial trade.

---

Putting the three different sections of the book together, Haidt has presented three reasons why we struggle to agree on what is right: 1) We have instinctive moral reactions to situations that we rationalize, rather than coming to a rational conclusion based on disinterested reasoning, 2) different people have different sets of moral instincts, and 3) by our nature we are tribal, in the sense that we define ourselves and choose our actions based on the groups that we belong to, not just on our individual situation.

Earlier on in the book, Haidt suggested that rather than aiming for some grand rational argument that would teach us how to all act morally (as he thought Plato was engaged in in The Republic) we should instead try to design society in such a manner that we would naturally behave in a moral manner (which is what Plato actually was engaged in in The Republic). But where Plato set out an elaborate scheme for disentangling two sets of people to follow two distinct moral systems, according to their nature, Haidt has little to offer beyond suggesting that U.S. congressmen should bring their families with them to Washington rather than leaving them at home, so that there is more socializing across party lines. But despite the lack of solutions offered, it's an interesting book that just might change the way you think about how you think, so it's worth a read.


Tuesday, June 14, 2011

91. Another View on the Evolution of Cooperation

Note: This post is the ninety-first in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

I happened upon an interesting article the other day by Daron Acemoglu.

Acemoglu points out that researchers often use coordination models to study the level of cooperation in society because these models allow for multiple equilibria - i.e. one with cooperation, one without.1

"Why do similar societies end up with different social norms, and why and how social norms sometimes change? A common approach to answering these questions is to use coordination games, which have multiple equilibria corresponding to different self-fulfilling patterns of behaviour and rationalise the divergent social norms as corresponding to these equilibria. For example, it can be an equilibrium for all agents to be generally trusting of each other over time, while it is also an equilibrium for no agent to trust anybody else in society. We can then associate the trust and no-trust equilibria with different social norms."


As he goes on to point out, this isn't a very dynamic analysis, in the sense that it doesn't answer the questions of why or how we get from one equilibrium to another.

"Simply ascribing different norms to different equilibria has several shortcomings, however. First, it provides little insight about why particular social norms and outcomes emerge in some societies and not in others. Second, it is similarly silent about why and how some societies are able to break away from a less favourable (e.g., no trust) equilibrium. Third, it also does not provide a conceptual framework for studying how leadership by some individuals can help change social norms."


I didn't spring for the $5 required to download the full paper, but from the article it seems like one mechanism posited by Acemoglu for society to move from one equilibrium to another is if a 'prominent' person influences other people with their own behaviour.

"A particularly important form of history in our analysis is the past actions of "prominent" agents who have greater visibility (for example because of their social station or status). Their actions matter for two distinct but related reasons. First, the actions of prominent agents, impact the payoffs of the other agents who directly interact with them. Second, and more importantly, because prominent agents are commonly observed, they help coordinate expectations in society. For example, following a dishonest or corrupt behaviour by a prominent agent, even future generations who are not directly affected by this behaviour become more likely to act similarly for two reasons; first, because they will be interacting with others who were directly affected by the prominent agent's behaviour and who were thus more likely to have followed suit; and second, because they will realise that others in the future will interpret their own imperfect information in light of this type of behaviour. The actions of prominent agents may thus have a contagious effect on the rest of society."


What strikes me, coming back to the discussion about coordination, is all the words we have that, in the right context, mean the same thing: coordination, cooperation, correlation, collaboration, etc. Naturally, the trick with a coordination problem is to somehow coordinate everyone's behaviour. A hierarchical structure can create a monopoly in which one entity/person controls all, thus greatly simplifying the problem of getting everyone to sing from the same songbook. When putting leviathan in charge isn't feasible or desired, it becomes trickier to get a bunch of independent actors to coordinate on a particular outcome.

The 'prominent' person is like a soft version of the leviathan - not forcing everyone to go along, merely setting a good or bad example and hoping the ripples of that behaviour are enough to 'tip' society from one equilibrium to another. I didn't read the paper so I shouldn't really comment, but the notion that something like JFK asking people what they can do for their country is going to lead to a widespread change in behaviour is hard for me to swallow. To me it seems more likely that levels of cooperation will be driven by a combination of history (as Acemoglu acknowledges) and changes in fundamental factors like technology (e.g. the medium is the message) and the natural environment (along the lines that I discussed in my last post).
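Still, to make the 'tipping' mechanism concrete, here's a toy model of my own devising (emphatically not the model from Acemoglu's paper, which, again, I haven't read): agents play a trust coordination game, and each agent's expectation of others is a weighted average of last round's behaviour and the visible example of a single prominent agent. All the numbers are invented for illustration.

    LAMBDA = 0.6      # hypothetical weight agents put on the prominent agent's example
    THRESHOLD = 0.5   # trusting is a best response if expected trust exceeds this

    def run(prominent_trusts, initial_fraction, rounds=30):
        fraction = initial_fraction
        for _ in range(rounds):
            # expectations blend last round's behaviour with the prominent example
            expectation = (1 - LAMBDA) * fraction + LAMBDA * prominent_trusts
            # everyone best-responds to their expectation
            fraction = 1.0 if expectation > THRESHOLD else 0.0
        return fraction

    # starting from the no-trust equilibrium, a visibly trustworthy prominent
    # agent can tip society to the trust equilibrium; a corrupt one leaves it stuck
    print(run(prominent_trusts=1.0, initial_fraction=0.0))  # 1.0
    print(run(prominent_trusts=0.0, initial_fraction=0.0))  # 0.0

The point of the toy is just that the multiplicity of equilibria is what gives the prominent agent their leverage - they don't change anyone's payoffs here, only everyone's expectations.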


***
1Note: The Stag Hunt, which we discussed back here, is an example of a game theory model with more than one equilibrium.


Tuesday, August 17, 2010

64. Stag Hunting & Correlation

Note: This post is the sixty-fourth in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

Note: This post is a continuation from last week's post on the Stag Hunt.

This week, I'm going to talk about the book, "The Stag Hunt and The Evolution of Social Structure," by Brian Skyrms.

Skyrms argues that the Stag Hunt is a better model of the challenge of human cooperation than the Prisoner's Dilemma. Whereas the Prisoner's Dilemma only has one equilibrium solution (everyone defects), the Stag Hunt has two possible equilibria: everyone cooperates (hunts stag) or everyone defects (hunts hare). He argues that this better reflects the notion of a social contract which can either not exist (state of nature) or exist (society).

The primary theme of the book is Skyrms' argument that the key to achieving cooperation in the stag hunt is correlation. By correlation he means that those who are disposed to stag hunting (cooperation) must find a way to work together and avoid the hare hunters (defectors). This is a similar point to the one Axelrod made, but Skyrms goes into more detail. Note that, implicit in this argument is an assumption that there are two sorts of people out there, stag hunters (cooperators) and hare hunters (defectors).

The first method of improving correlation that Skyrms looks at is location.

One of the interesting aspects of this book is the rich use that Skyrms makes of biological examples to show that cooperation is far from a distinctly human characteristic. For example, his opening to the chapter on location,

"One strain of E Coli bacteria produces a poison , to which it is immune, that kills competing strains. It takes resources to produce the poison, and the strain that produces it pays a cost in reproduction for the privilege of killing competitors. If the poisoner strain evolved from a more peaceful strain of E Coli, how did it get started? A few mutant poisoners would cause little change to the average fitness of a large peaceful group ... If a few mutant poisoners are added to a well stirred culture of peaceful E Coli, the mutants are gradually eliminated.

But when the same experiment is performed on agar plates rather than in a well-stirred solution, the poisoners can invade and eventually take over the population. ... I won't tell the full story here, but I hope that I have told enough to illustrate the importance of spatial structure, location, and local interaction for evolutionary dynamics."


It requires energy to move matter from one place to another, so all else being equal, people (and animals) don't move more than they have to. If people who are willing to cooperate rather than defect (stag hunters) are clustered rather than dispersed at random, then they will tend to have more interactions with each other - interactions where their cooperative behaviour will pay off. This is pretty much common sense - you can imagine that a group of, say, rebel soldiers is more dangerous if they are all located in one area than if they are thinly scattered across a country - but it's still interesting to see some of the underlying theory that explains why a cluster of like-minded people (or bacteria) can be so effective.

Skyrms' approach throughout the book is to run simulations where he takes a set of agents and lets them interact over time, playing either Stag Hunt or some other game against each other. There are different dynamics allowing the situation to change over time. Sometimes Skyrms has people choose a strategy which will be the best response to the sort of person they expect to encounter (if they expect a hare hunter, they will hunt hare too, and the same for stag hunting). Other times Skyrms has people choose to imitate the strategy of those around them that are performing the best.

In running simulations to test the influence of location on the spread (or demise) of cooperative behaviour in a population, Skyrms tried one-dimensional setups (everybody is in a line) and two-dimensional setups (people interact on a grid), and also tried both a 'best response' type of strategy evolution and an 'imitate the best' type of strategy evolution. What he found was that the two-dimensional model and the 'imitate the best' approach had the most favourable outcomes for cooperation (stag hunting).

But in general, the effect of taking location into consideration (having people interact with their neighbours), rather than just assuming that people interact randomly with each other, was that simulations with location as a factor were more favourable to the evolution of cooperation. Note that this is consistent with Axelrod's findings as well.
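For the curious, here's a stripped-down version of the kind of grid simulation Skyrms describes - the payoffs and parameters are my own inventions, not his - with agents playing the Stag Hunt against their four neighbours and then imitating the best-scoring strategy in their neighbourhood.

    SIZE = 20
    STAG, HARE = 1, 0

    def payoff(me, other):
        # assumed payoffs: hare hunters get 3 no matter what; stag pays 4,
        # but only if the partner also hunts stag
        if me == HARE:
            return 3
        return 4 if other == STAG else 0

    def neighbours(r, c):
        # four nearest neighbours on a grid that wraps around at the edges
        return [((r - 1) % SIZE, c), ((r + 1) % SIZE, c),
                (r, (c - 1) % SIZE), (r, (c + 1) % SIZE)]

    # start with a sea of hare hunters and one small cluster of stag hunters
    grid = [[HARE] * SIZE for _ in range(SIZE)]
    for r in range(8, 12):
        for c in range(8, 12):
            grid[r][c] = STAG

    for generation in range(50):
        # everyone plays the stag hunt with each of their neighbours
        score = [[sum(payoff(grid[r][c], grid[nr][nc])
                      for nr, nc in neighbours(r, c))
                  for c in range(SIZE)] for r in range(SIZE)]
        # 'imitate the best': adopt the highest-scoring strategy among
        # yourself and your neighbours (ties go to stag here)
        new_grid = [row[:] for row in grid]
        for r in range(SIZE):
            for c in range(SIZE):
                candidates = [(score[r][c], grid[r][c])]
                candidates += [(score[nr][nc], grid[nr][nc])
                               for nr, nc in neighbours(r, c)]
                new_grid[r][c] = max(candidates)[1]
        grid = new_grid

    stag_count = sum(cell for row in grid for cell in row)
    print(f"stag hunters after 50 generations: {stag_count} of {SIZE * SIZE}")

Whether the cluster spreads or shrinks depends on the payoffs and the tie-breaking, which is really Skyrms' point: the interesting action is in the dynamics, not in the one-shot game.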


The second method of improving correlation between cooperators that Skyrms looks at is signalling (which we discussed a few posts back).

He describes how a myxobacterium known as Myxococcus xanthus engages in coordinated attacks on larger microbial prey, by making use of quorum signalling - a signal that tells the bacteria when there are enough of them gathered in one place to launch a successful attack. As Skyrms notes, "Myxococcus xanthus has solved the problem of the Stag Hunt."

On the topic of signalling, Skyrms explains how situations where the provision of a signal can lead to mutual benefit result in quick convergence to a mutually recognized set of signals - even when there is no communication between parties (other than the signals) and there aren't any initial cues to suggest what each signal might mean to either side.

Even in cases where there are parties that might benefit from sending out a false signal, and it does not cost anything to send out a false signal (what game theorists call 'cheap talk'), the possibility of sending signals can still lead to more cooperative outcomes than might be achieved if signals weren't available to be used.
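Skyrms' convergence result is easy to reproduce in miniature. Here's a sketch of a two-state sender-receiver ('Lewis signalling') game with a simple reinforcement learning rule - the sizes and the learning rule are my choices, not anything specific from the book.

    import random

    STATES = SIGNALS = ACTS = 2
    ROUNDS = 10_000

    # 'urn' weights: the sender maps states to signals, the receiver maps
    # signals to acts; all start out uniform (no initial cues about meaning)
    sender = [[1.0] * SIGNALS for _ in range(STATES)]
    receiver = [[1.0] * ACTS for _ in range(SIGNALS)]

    def draw(weights):
        return random.choices(range(len(weights)), weights=weights)[0]

    successes = 0
    for _ in range(ROUNDS):
        state = random.randrange(STATES)
        signal = draw(sender[state])
        act = draw(receiver[signal])
        if act == state:                 # both parties benefit only on a match
            sender[state][signal] += 1   # reinforce whatever just worked
            receiver[signal][act] += 1
            successes += 1

    print(f"success rate over the whole run: {successes / ROUNDS:.2f}")

Run it a few times and the success rate typically climbs well above the 50% you'd get by chance, with the two sides converging on a shared 'meaning' for each signal despite never communicating about anything other than the signals themselves.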


The third method of correlating cooperative behaviour that Skyrms looks at is through allowing the structure of interaction itself to change. If we let people choose for themselves who to interact with, then stag hunters will typically quickly learn to interact with other stag hunters. Allowing this sort of self-selection is a powerful tool that allows the cooperators to self-select into a club whose members benefit much more than any of the hare hunters that are excluded.

Skyrms concludes by summarizing the results of the various simulations that make up most of the book, and what they tell us about how stag hunting (cooperation) can arise in a population of hare hunters (defectors) over time (given that interactions between people follow a stag hunt type model),

"Over time there is some low level of experimentation with Stag Hunting. Eventually a small group of Stag Hunters comes to interact largely or exclusively with each other. This can come to pass through pure chance and the passage of time in a situation of interaction with neighbors. Or it can happen more rapidly when stag hunters find each other by means of fast interaction dynamics. The small group of stag hunters prospers and can spread by reproduction or by imitation. This process is facilitated if reproduction or imitation neighborhoods are larger than interaction neighborhoods. As a local culture of stag hunting spreads, it can even maintain itself in the unfavorable environment of a large random-mixing population by the device of signalling."


Tuesday, July 27, 2010

61. The Evolution of Cooperation (part 1 of 2)

Note: This post is the sixty-first in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

The Evolution of Cooperation is the title of perhaps the most famous book on the Prisoner's Dilemma, and possibly Game Theory in general, ever written - by Robert Axelrod.

Reading it again, for the first time in a long time, I could see why it is so popular - it manages to cover a lot of ground with very clear, accessible prose.

The Evolution of Cooperation starts off by recounting a famous game theory tournament. Participants were invited to submit a strategy or 'rule' that would play a Prisoner's Dilemma against strategies submitted by other people. The strategies would be paired up against each other in turn and would play a repeated Prisoner's Dilemma against each other for a certain number of rounds. The goal was to achieve the highest possible point total, adding up across the matches against all the other strategies.

Recall that the nature of the Prisoner's Dilemma is such that, no matter what action your opponent takes, you will maximize your own total by defecting rather than cooperating. But by changing the situation from a single game to a repeated game, and by allowing participants to retain a memory of what happened before and by allowing them to clearly identify who they were playing against, the tournament introduced a strong signalling element into the Dilemma.

The tournament was won by the simplest strategy submitted, a strategy known as 'Tit For Tat.' Tit for Tat starts off by cooperating (Axelrod refers to strategies that start by cooperating as 'nice' strategies), and then each round it simply reacts to what the strategy it is matched up with did the previous round. If the strategy it is playing against defected on the last round, Tit for Tat defects this round, and if it cooperated on the last round, Tit for Tat cooperates this round.

After the results of the first tournament were published, a second one with more entries was held, but Tit for Tat again turned out to be the winner.
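For anyone curious about the mechanics, here's a miniature round-robin in the spirit of Axelrod's tournament - only three rules, a made-up match length, and self-play omitted, so it's an illustration rather than a re-creation.

    import itertools

    # standard Prisoner's Dilemma payoffs: mutual cooperation pays 3 each,
    # mutual defection 1 each, and a defector scores 5 off a cooperator (who gets 0)
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    ROUNDS = 200  # assumed match length

    def tit_for_tat(my_moves, their_moves):
        # 'nice': cooperates first, then copies the opponent's last move
        return their_moves[-1] if their_moves else 'C'

    def always_defect(my_moves, their_moves):
        return 'D'

    def grudger(my_moves, their_moves):
        # cooperates until the opponent defects once, then never forgives
        return 'D' if 'D' in their_moves else 'C'

    strategies = {'Tit for Tat': tit_for_tat,
                  'Always Defect': always_defect,
                  'Grudger': grudger}
    totals = {name: 0 for name in strategies}

    for (n1, s1), (n2, s2) in itertools.combinations(strategies.items(), 2):
        h1, h2 = [], []
        for _ in range(ROUNDS):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            h1.append(m1)
            h2.append(m2)
        for a, b in zip(h1, h2):
            p1, p2 = PAYOFF[(a, b)]
            totals[n1] += p1
            totals[n2] += p2

    for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(name, score)

Always Defect 'wins' both of its matches in the sense of outscoring its opponent head to head, but finishes last overall - the nice rules rack up so many points cooperating with each other that beating your opponent turns out to be the wrong goal (which is exactly Axelrod's 'don't be envious' point below).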

Strategies aren't fixed over time, and people might change their approach if they see another approach that is working better. Or those using a poor strategy might die out (or get fired) and be replaced by someone with a better strategy. Or some people may simply decide to try a new approach that they thought up. Through these sorts of mechanisms, the distribution of strategies, or rules, being used in the population can evolve over time.

An evolutionarily stable strategy is one that, even if everybody in a population is using it, can't be invaded by some other strategy designed to take advantage of it. Axelrod notes that a population where everybody defects is evolutionarily stable because it is not possible for anyone playing any sort of cooperative strategy to invade (because they never meet anyone who will reciprocate their cooperation). But even a small cluster of cooperators can invade a much larger population of defectors if the conditions are right (because they will do well enough cooperating with each other to offset their poor results against the defectors).

But the converse is not true. A population where everybody plays a nice strategy like Tit for Tat can't be invaded by an 'Always Defect' strategy, because the Tit for Tats will do better playing each other than the 'Always Defect's will do playing with each other. This is a hopeful result (for those who like to see cooperation) since it suggests that a cooperative equilibrium is more stable than a defective one and that even a small group of cooperators can sometimes thrive in a sea of defectors.
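A quick back-of-the-envelope version of the cluster argument, using the standard payoffs from the sketch above (5, 3, 1 and 0); the match length and the degree of clustering are numbers I've made up.

    m = 10    # interactions per pairing
    p = 0.5   # assumed fraction of a clustered TFT player's matches
              # that are against fellow TFT players

    tft_vs_tft = 3 * m              # mutual cooperation every round
    tft_vs_alld = 0 + 1 * (m - 1)   # exploited once, mutual defection thereafter
    alld_vs_alld = 1 * m            # mutual defection every round

    tft_score = p * tft_vs_tft + (1 - p) * tft_vs_alld
    alld_score = alld_vs_alld       # a rare defector almost always meets defectors

    print(tft_score, alld_score)    # 19.5 vs 10.0: the cluster can invade

Even meeting fellow cooperators only half the time, the clustered Tit for Tats comfortably out-earn the surrounding defectors, which is why clustering matters so much.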

Based on the results of the tournaments, and the success of Tit for Tat, Axelrod offers the following suggested courses of action for doing well in a repeated Prisoner's Dilemma type situation:

1) Don't be envious

As we saw before, envy can transform an absolute gain into a relative loss and a positive sum situation into a zero-sum situation. A common theme throughout the book is the distinction between absolute gains, made possible by the non zero-sum nature of the Prisoner's Dilemma, and zero-sum situations where only relative gains are possible.

2) Don't be the first to defect

'Nice' rules, which don't defect first, will do well when playing with each other. This means that 'mean' rules, which defect first, will end up with lower scores against 'nice' opponents than 'nice' rules do.

3) Reciprocate both cooperation and defection

A failure to reciprocate cooperation leads to unnecessary defection on both sides. A failure to reciprocate defection (by defecting in return the next round) leads to being taken advantage of.

4) Don't be too clever

Unlike in a zero-sum game where you don't want your opponent to have any advantage, in a Prisoner's Dilemma it is important that those who are willing to cooperate recognize that you are willing to cooperate as well. Tit for Tat is a simple rule that helps other rules understand what they are dealing with and act accordingly. And since the best plan when facing Tit for Tat is to cooperate, rules will generally cooperate when they figure out that is the rule their opponent is using.

* * *

Moving along, Chapter 4 shows that friendship is not necessary for cooperation to develop by recounting the story of the 'live and let live' system that developed in the trenches during World War I where enemy units would cooperate by not killing each other, while facing off with each other across the same piece of ground for months at a time.

Chapter 5 shows that even creatures with very limited intelligence (e.g. bacteria) can engage in cooperation in Prisoner's Dilemma type situations. It also theorizes that the cooperation born from Kin Selection (the notion that it makes sense for us to evolve so that we are willing to make sacrifices for those we share genes with) might have provided a foothold of cooperation that could have spread into the sort of reciprocal tit for tat cooperation that would extend across larger groups of people, regardless of whether they are related or not.

I'll cover the rest of 'The Evolution of Cooperation' and talk about some of the implications of the ideas covered in it in next week's post.


Tuesday, July 20, 2010

60. Signalling

Note: This post is the sixtieth in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

------

"Buying bread from a man in Brussels
He was six foot four and full of muscles
I said, 'Do you speak-a my language?'
He just smiled and gave me a vegemite sandwich"

In a world where everyone behaves the same way (e.g. the world of most economic models), being able to tell one person from another isn't all that useful. But as we saw a few posts back, and as common sense also indicates, people vary, with some people being more prone to cooperative behaviour than others.

If we imagine ourselves wandering around a world filled with a mix of those who are willing to cooperate with us, and those who will pretend to cooperate just to take advantage of us, then the importance of knowing who you can trust becomes obvious.

One way is to learn by experience. If you can remember who you've dealt with in the past, then you can shun those who've defected against you and only deal with people you've had successful dealings with before. But this still leaves a problem of what to do with people you are meeting for the first time, or people who you are dealing with in a new situation, or even people who defected on you before, but claim to have turned over a new leaf.

In these cases, we tend to look for signs that indicate we can trust someone. Maybe a common language or culture, or skin colour or alma mater, or a certain manner of dress or hairstyle, or a certain level of courteousness, or a certain credit score, or even a certain food choice (e.g. vegemite sandwich).

Thinking more generally, even someone's past behaviour could just be considered another type of signal to take into consideration.

Ideally, the signs that we look for will be hard to fake, since otherwise defectors might just try to pass themselves off as cooperators. Barring the use of memory altering technology, past experience with a person can be very hard to fake, which makes past experience with someone one of the best signals of their potential willingness to cooperate in the future.

Given a perfect ability to differentiate cooperators from defectors, even a group of cooperators as small as just 2 people could outperform a society full of defectors. But given a complete inability to differentiate cooperators from defectors, and lacking a way of keeping track of the results of prior dealings with people, then cooperators would become helpless against defectors - and the defectors will likely take advantage of the cooperators until eventually there are no cooperators left.
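To put toy numbers on that (standard Prisoner's Dilemma payoffs; the population share is invented), compare the expected payoff per interaction for a rare cooperator who cooperates blindly against one who can perfectly recognize fellow cooperators:

    R, S, P, T = 3, 0, 1, 5   # reward, sucker's payoff, punishment, temptation
    q = 0.02                  # assumed fraction of cooperators in the population

    blind = q * R + (1 - q) * S               # cooperate with everyone: 0.06
    discriminating = q * R + (1 - q) * P      # cooperate only with cooperators: 1.04

    defector_vs_blind = q * T + (1 - q) * P   # 1.08: defectors out-earn blind cooperators
    defector_vs_discriminating = P            # 1.00: now the cooperators out-earn them

    print(blind, discriminating, defector_vs_blind, defector_vs_discriminating)

With no ability to discriminate, the rare cooperators earn almost nothing while the defectors feed on them; with perfect recognition, the ordering flips and even 2% of the population can out-earn the defectors around them.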

As we covered a while back, this distinction is what led David Gauthier to assume, in Morals By Agreement that people pursuing cooperation would be able to perfectly differentiate cooperators from defectors.

This is all pretty straightforward common sense, but as this series goes along and I talk more about how cooperative behaviour can evolve (or go extinct) in various circumstances, it will be useful to keep some basic points in mind, such as the importance of signalling cooperative intentions and remembering past interactions with people in sustaining cooperation.

* * *

A few other notes on the topic of signalling:

It's worth mentioning that signalling is only necessary when one party to a transaction has information that another party lacks (the information that will be signalled) so, from an economic perspective, situations requiring signalling are an example of one of the effects of asymmetric information, a topic which we discussed here a while back.

Of course, barring an ability to read minds or predict the future, you never really know for sure what the other person is going to do, so in that sense every transaction involves asymmetric information, which is a bit hard on economic theories which rely on the absence of asymmetric information as a key assumption.

* * *

Signalling problems can also be tied back to game theory via various 'signalling games' which investigate what messages might be sent and received between players under varying circumstances.

* * *

Some concepts related to the idea of signalling:

Cheap Talk - Signs that can be made with little effort and thus don't signify much. For example, if I ask you, 'Are you going to screw me over?' and you say, 'No', your answer would qualify as cheap talk, since it doesn't cost you much to make that statement. Or if cooperators tried to identify one another by wearing a yellow shirt, then anyone could wear a yellow shirt and pretend to be a cooperator. This notion is generally captured in the expression, 'actions speak louder than words' - since actions typically have a higher cost than words do.


Common Knowledge - Something that everybody knows, and everybody knows that everybody knows, and everybody knows that everybody knows that everybody knows, and so on. The Stanford Encyclopedia of Philosophy provides an example:

"A proposition A is mutual knowledge among a set of agents if each agent knows that A. Mutual knowledge by itself implies nothing about what, if any, knowledge anyone attributes to anyone else. Suppose each student arrives for a class meeting knowing that the instructor will be late. That the instructor will be late is mutual knowledge, but each student might think only she knows the instructor will be late. However, if one of the students says openly 'Peter told me he will be late again,' then each student knows that each student knows that the instructor will be late, each student knows that each student knows that each student knows that the instructor will be late, and so on, ad infinitum. The announcement made the mutually known fact common knowledge among the students."


Tuesday, June 01, 2010

55. One Thing Leads to Another

Note: This post is the fifty-fifth in a series. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs.

----

In a world where everyone is the same and they all just pursue their own self-interest with no regard for what happens to other people, the question of what happens when people with different preference types interact doesn't arise.

But my last post raised the possibility that people might have different 'temperaments' with respect to how they personally are affected by the fate of the people they deal with. As Alan anticipated in the comments on the last post, if there are different sorts of people potentially out there, then a natural question is to try and see what happens if the different types interact with one another and how a collection of different types of people might change over time.

One of the best known ways that a population makeup can change over time is via evolutionary dynamics. People who are more 'successful' with their actions will have more children than those who are less successful, meaning that, over time, more successful strategies will come to dominate.

A common debate in the social sciences is then whether unselfish behaviour can sustain itself over time, given that selfish people might be able to take advantage of the unselfishness of the altruists. It's true that a group of unselfish people will likely outperform a group of selfish people, but then won't the unselfish group fall victim to selfishness from within? The answer is (as usual) 'it depends', but I won't get into the details any more in this particular post (with a 2-hour episode of Wipeout on tonight, time for posting is limited!).

Evolutionary dynamics are not the only way for a population makeup to change over time. Imitation works too. The Czech Republic (for example) isn't a capitalist country because it was outbred by capitalist countries; it's capitalist (arguably, at least) because the population decided to imitate what they felt was a more successful method of doing things. At a personal level, people will imitate what they see other people doing around them if they feel those people are successful (see also Bubble, Housing).

A third option is migration. If people are able to move from one society to another, their movements will alter the distribution of preference types within each society. A constant migration of unselfish types to an unselfish society might offset a trend towards successful acts of selfishness within that society, for example.

A fourth mechanism is that the people themselves do not change, but their relative strength of influence does. Maybe an unselfish society contains only one selfish person, but if that person uses their unchecked greed as a means to taking control of the whole society, then the society could change dramatically despite nobody changing their particular nature.

No doubt there are other mechanisms by which the makeup of preferences in a society can change over time - I can't think of any at the moment, but feel free to point them out in the comments.

This is all pretty abstract, but the point is that it would theoretically be possible to model or simulate various ways in which a society of people with different preference types might evolve over time, applying different mechanisms by which behaviours might spread or change or change in influence over time. There are folks out there who have undertaken this sort of work, and I'll cover some of their efforts in future posts.
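In the meantime, here's a bare-bones illustration of what such a model can look like - all the payoffs are invented - with three preference types whose population shares evolve by discrete replicator dynamics (a rule that can be read either as differential reproduction or as imitation of the successful).

    TYPES = ['selfish', 'reciprocator', 'altruist']

    def fitness(shares):
        selfish, recip, altru = shares
        # invented payoffs: selfish types gain by exploiting altruists,
        # reciprocators do better the more reciprocators there are, and
        # altruists do well with reciprocators but are hurt by selfish types
        return [1.0 + 2.0 * altru,
                1.0 + 1.5 * recip,
                1.0 + 1.5 * recip - 1.0 * selfish]

    shares = [0.1, 0.1, 0.8]  # assumed starting mix
    for generation in range(200):
        f = fitness(shares)
        avg = sum(s * fi for s, fi in zip(shares, f))
        # types with above-average fitness grow; below-average types shrink
        shares = [s * fi / avg for s, fi in zip(shares, f)]

    for t, s in zip(TYPES, shares):
        print(f"{t}: {s:.2f}")

Change the invented payoffs and you can get almost any story you like - selfish takeover, stable mixes, or cooperative dominance - which is exactly why the details of who interacts with whom, and who imitates whom, matter so much in this literature.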


Tuesday, May 04, 2010

51. Types of Cooperation

Note: This post is the fifty-first in a series. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs.


The topic of this post is an essay by Joseph Heath, 'The Benefits of Cooperation' (pdf).

Heath posits that there are 5 different ways that people can benefit by working together:
1) Economies of Scale
2) Benefits from trade
3) Risk sharing
4) Self-binding
5) Information sharing

Heath notes that the first theorem of welfare economics demonstrates that competitive markets (which achieve benefits by allowing people to trade with one another) will achieve a perfectly efficient outcome, but that the assumptions behind the theorem rule out all other forms of cooperation:

"The 'invisible hand theorem' (or 'first fundamental theorem of welfare economics'), shows that the competitive equilibrium of a market economy will be Pareto-optimal as long as certain 'standard' conditions obtain. The list is quite long, but includes inter alia constant returns to scale, individuals with well-behaved utility functions, symmetric information, a complete set of futures markets, and in cases of
uncertainty, a complete set of insurance markets.

In other words, the theorem shows that markets achieve perfect efficiency, so long as every other mechanism of cooperative benefit is excluded from consideration – either by assuming that no such benefits are possible, or that all such benefits are freely available. To see how uninformative this is, consider how we would respond
to someone who proposed a model for the 'optimal' production of scientific knowledge, based upon the assumption that both material resources and labor were available in unlimited supply and at zero cost. Whatever its technical merits, such a model would give us very little assistance with real-life policy questions."


Heath goes on to point out how circumstances often force us to choose between pursuing different types of benefits. In the case of economies of scale, there is an obvious conflict between having a large number of firms to make a market more competitive, and having a smaller number of firms so that the bigger firms can achieve greater economies of scale. It's worth noting that, due to network externalities, any organization that physically extends across geographic territory (i.e. power grid, sewer system, road network) tends to be operated as a monopoly, with the economies of scale overwhelming the gains from competition (and the additional trade it would bring) in this particular case.

The story (conflict between benefits from trade vs. benefits from other forms of cooperation) is similar with risk sharing. Heath gives the example of the enclosure of the commons that occurred in England prior to the industrial revolution. Allowing individuals to take ownership of parts of the commons created far more opportunity for specialization and trade to take place by ensuring that anyone who took the trouble to improve or develop a piece of land could expect to reap most of the benefits from their efforts, rather than sharing them with everyone else in the commons. But the commons system provided a great benefit by insulating individuals from the uncertainties of having their fortunes entirely dependent on one piece of land. Splitting the land into privately owned chunks removed the benefit that arises from reducing uncertainty for individuals by diversifying risks across a large group of people.

By self-binding, Heath means situations where we want to pre-emptively prevent ourselves from taking actions we know we will regret (due to hyperbolic discounting) and enlist others to aid us. The thought reminds me of this passage from Plato's Republic,

"Tell me then, O thou heir of the argument, what did Simonides say, and according to you truly say, about justice?

He said that the repayment of a debt is just, and in saying so he appears to me to be right.

I should be sorry to doubt the word of such a wise and inspired man, but his meaning, though probably clear to you, is the reverse of clear to me. For he certainly does not mean, as we were just now saying, that I ought to return a deposit of arms or of anything else to one who asks for it when he is not in his right senses; and yet a deposit cannot be denied to be a debt.

True.

Then when the person who asks me is not in his right mind I am by no means to make the return?

Certainly not.

When Simonides said that the repayment of a debt was justice, he did not mean to include that case?

Certainly not; for he thinks that a friend ought always to do good to a friend and never evil.

You mean that the return of a deposit of gold which is to the injury of the receiver, if the two parties are friends, is not the repayment of a debt,—that is what you would imagine him to say?

Yes."


On the topic of self-binding, Heath makes the same analogy between borrowing and substance abuse that I did in last week's post,
"Economist David Laibson has argued that financial innovation, by increasing the overall liquidity of assets, has made it increasingly difficult for individuals to create 'golden eggs.' The difference between savings and checking accounts has become purely nominal; the introduction of ATMs has meant that everyone has access to their money at all hours (and so withdrawing a fixed amount at the beginning of the week can no longer be used as a self-control mechanism); reverse mortgages allow people to drain the asset value of their homes; and, of course, consumer credit has rendered the practice of 'saving up' for a major purchase almost obsolete. This sort of 'easy money' is a mixed blessing to consumers, in the same way that a 24-hour beer store is a mixed blessing to the alcoholic."


Again, making it easier to complete trades conflicts with a different form of cooperative benefit.

----
What this notion of 5 different forms of cooperative benefits means with regard to the two syndromes identified by Jacobs is not entirely clear to me. The view I'm drifting towards is that the guardian syndrome represents cases where perfect monopoly is required and the commercial syndrome represents something close to the perfect competition ideal.

Cases where economies of scale are all that matter would seem to lead to a monopoly, which would fall under the guardian syndrome. Cases where the benefits of risk sharing far outweigh the gains from trade would be similar. Self-binding relies on a hierarchical relationship (the one who is binding themselves submits to the authority of the person who will bind them) such as we find in the guardian syndrome, and measures to encourage the generation of new information (intellectual property) often seem to do so by guaranteeing a monopoly to the producer of the information.

Cases where gains from trade need to be balanced against one of these other sources of cooperative benefit would seem to involve a grey area of competition mixed with monopoly in some measure. Perhaps it's not surprising that these areas tend to be an ongoing source of controversy.


Tuesday, April 13, 2010

48. Self-Interest, Weak Reciprocity and Strong Reciprocity

Note: This post is the forty-eighth in a series. Click here for the full listing of the series.

The ideas in this post are generally taken from the book (a collection of essays) 'Moral Sentiments and Material Interests: The Foundation of Cooperation in Economic Life.' (edited by Herbert Gintis, Samuel Bowles, Robert Boyd and Ernst Fehr)

The central theme of the book is the notion of 'strong reciprocity,' but before defining 'strong reciprocity', I should cover the alternatives.

Most economic and biological theory these days starts from a premise that people are self-interested in the sense that given two options they will choose whichever one gives them the highest payout without being concerned about what payments everyone else gets.

The primary support for this viewpoint is the notion that if someone was to sacrifice their own good for someone else's, this would negatively impact their ability to reproduce, so this would not be a successful evolutionary strategy - i.e. 'nice guys finish last (in terms of number of descendants with their genes)'.

Robert Axelrod is well known for his book, 'The Evolution of Cooperation,' in which he invoked 'the shadow of the future,' showing that in situations where people were playing a repeated prisoner's dilemma against each other, the most successful strategy was often 'tit-for-tat' - a strategy which involved cooperating in the prisoner's dilemma against people who also cooperated and defecting against people who defected. In the field of biology, Robert Trivers came to a similar conclusion: that what looked like altruistic behaviour (e.g. cooperation in a prisoner's dilemma) could actually benefit the person being altruistic, under the right circumstances (i.e. where there might be a later payback to this 'altruism').

The authors of 'Moral Sentiments and Material Interests: The Foundation of Cooperation in Economic Life.' refer to this self-interested cooperation in repeated Prisoner's Dilemmas as 'weak reciprocity' - weak because it is conditional on certain circumstances holding such that a person believes it to be in their long term interest to behave in a cooperative manner.

The book presents results from a number of experiments that demonstrate that, even in situations where people have no reasonable expectation of benefiting from their actions, a significant percentage of the population is predisposed to cooperation (with those who also cooperate) and will go out of their way to punish people who defect. In their words,
"Strong reciprocity is a predisposition to cooperate with others, and to punish (at personal cost, if necessary) those who violate the norms of cooperation, even when it is implausible to expect that these costs will be recovered at a later date."


The authors also address the notion that this behaviour could be 'an error' due to the fact that people just aren't used to dealing with this sort of non-repeated prisoner's dilemma type situation so they behave like strong reciprocators 'by accident'. As the authors note, from the standpoint of predicting behaviour, this is a distinction without a difference. They also note that the behaviour observed is quite stable over time and doesn't seem to go away as people get more and more experience playing the game in the experiments conducted.

One of the examples of an experiment conducted is an 'ultimatum game' where one player offers the second player some portion of $100. The second player can then accept the offer, meaning that the second player gets what was offered and the first player gets the rest, or reject it, meaning that both players get nothing. The self-interest theory suggests that the first player will keep as much as they can while still offering the second player something (e.g. keeping $99 and offering $1) and that the second player will accept this offer (because $1 is better than nothing).

Of course, as you might expect (if you are not an economist!), it turns out that people don't behave that way at all, with $50 being the most common offer, and people frequently rejecting offers that are less than $50, just to punish the first player for the ungenerous offer.
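To see why the lowball strategy backfires once rejections are on the table, here's a little sketch; the mix of rejection thresholds is invented, but the logic is general.

    import random

    POT = 100

    def expected_proposer_payoff(offer, thresholds):
        # responders accept only if the offer meets their personal threshold
        accepted = sum(1 for t in thresholds if offer >= t)
        return (POT - offer) * accepted / len(thresholds)

    # assumed population: a few responders accept anything, most insist
    # on something in the 20-50% range
    thresholds = [random.choice([1, 20, 30, 40, 50]) for _ in range(1000)]

    for offer in [1, 10, 30, 50]:
        print(offer, round(expected_proposer_payoff(offer, thresholds), 1))

Against a population like this, the 'rational' $1 offer earns less than half of what a $30-$50 offer does - so once you know you're facing strong reciprocators, generous offers are the self-interested play.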

The authors report the results from a cross-cultural study they did of behaviour in the ultimatum game. What they found was that, although none of the 15 societies they studied resembled the 'self-interest' model, there was a wide variety of behaviour patterns that seemed to vary based on the way each society operated. For example, in the Lamalera whaling society, the average offer by the first players was 57%, reflecting that when the Lamalera people make a large catch there is a meticulous distribution system set up so that everybody gets a 'fair share'. Meanwhile, the Hadza, a group of small-scale foragers, generally made lower offers and had high rejection rates, reflecting the fact that although they did share meat, there was generally a lot of conflict involved in the process (e.g. people trying to hide their catches).

* * *

One of the things the authors (unsurprisingly) don't get into is whether strong reciprocity is more consistent with guardian or commercial type activities. But given that a key component of strong reciprocity is to 'take vengeance' and that it also seems to be connected with solving prisoner's dilemmas and public goods problems and cooperation in large groups, it seems logical that strong reciprocity leans more towards guardian type situations rather than market exchange situations.

Later on in the book, the authors note that strong reciprocity can work to make communities a powerful force, but that it has a negative effect in that the effectiveness of community depends on the ability to include 'us' and exclude 'them' leading to a tendency of communities to 'be exclusive' - a member of the list of guardian ethics. At the same time, the authors note that a weakness of communities is the difficulty with achieving scale of operations, although they do not consider the potential use of hierarchy to overcome this weakness.

* * *

An interesting result of the experiments is that rather than there being a single type of person (as assumed in most models), it seems that people exist along a range - from a sizable portion of the population (~25% in some studies) that does behave like the standard economic model of self-interest suggests, through people who exhibit what seems like weak reciprocity, to a large number of people who exhibit strong reciprocity, and even further to a small part of the population who seem to be dedicated cooperators, willing to cooperate in almost all situations, even when their generosity is being taken advantage of repeatedly (i.e. in terms of the original prisoner's dilemma - they won't rat on their cellmate no matter how many times their cellmate gets off easy by ratting them out).

Given this heterogeneity in people's dispositions toward cooperation, we can see that it is a tricky problem to design societies and systems which allow the cooperative types to benefit from cooperation without being exploited by the self-interested types.

It reminds me of something I've previously quoted from Hans Ritschl's 'Community and Market Economy':
"This understanding of the fundamental power of the communal spirit leads to a meaningful explanation of coercion in the state economy. Coercion is a means of assuring the full effectiveness of the communal spirit, which is not equally developed in all members of the community. Coercion forces the individual to act as if he were inspired by communal spirit. Coercion is only the outer clasp and fastening of the community, but if communal spirit be lacking, coercion can replace it only in part."


Even more so than usual, I am only scratching the surface of this densely packed book, which I highly recommend to anyone interested in the question of human cooperation.


Tuesday, October 27, 2009

27. Ethics and Words

Note: This post is the twenty-seventh in a series. Click here for the full listing of the series.

"Don't tarry in the Marshes," Orddu, while from within the cottage Taran heard loud and angry noises. "Else you may regret your foolish boldness, or bold foolishness, whichever1"



In this post, I'm going to describe a behaviour between two people and then ask you to think about what word you would use to describe this behaviour. Simple enough?

First case: Two people work together to achieve something that benefits them both that they couldn't do on their own.

Second case: Two people work together to find a place to live, where they can share living expenses

Third case: Two people work together to fight off a bear that attacks them

Fourth case: Two business executives from different companies work together to prevent prices from dropping in their industry due to an unproductive price war

Fifth case: A home inspector and a home owner work together to reach an agreement that benefits them both more than issuing a citation for a violation of local bylaws would.

Sixth case: A businessman reaches a deal with the head of an invading army not to raise trouble as long as the invading troops don't disrupt his business.

Seventh case: Two anarchists work together to form a plot to kill all the members of the Canadian government

Eighth case: Two Taliban soldiers work together to ambush and kill a group of Canadian soldiers.


-----

The first case, two people working together to achieve something they both benefit from, generally, as far as I know, goes by the name 'cooperation' and is generally held to be a 'good thing' or virtuous.

But while all 8 cases involve two people working together for mutual gain, not all would typically go by the same name.

Cases 2 and 3 are still standard cooperation.

But case 4 would normally go by the name 'collusion' which is considered unethical and is illegal in many places/contexts.

Case 5 typically goes by the name 'corruption' or 'bribe-taking' and is also considered unethical.

Case 6 goes by the name 'collaboration' and is even more unethical.

Case 7 might go by the name 'conspiracy' and is (arguably) the most unethical of all.

Finally, case 8 seems similar to case 7, but here I suspect that we would normally be back to using the phrase 'cooperation' since there is no ethical condemnation of the act because it is understood that, in war, attempting to kill the enemy is what you are supposed to do.

---

In Systems of Survival, after listing out the ethics in the guardian and commercial syndromes, Jane Jacobs explains the absence from the lists of some typical ethical values,
"Where's cooperation, courage, moderation, mercy, common sense, foresight, judgment, perseverance, faith, energy, patience, wisdom? I omitted these because they're esteemed across the board, in all kinds of work."



But based on the 8 cases I've listed above, I can't agree that the simple act of cooperation is universally esteemed, unless we include that esteem as part of the definition of cooperation.

When it comes to ethical values, there is both the denotation (what behaviour is described by the value) and the connotation (whether that behaviour is considered good or bad) to consider2.

In the extreme case, a word like 'good' is all connotation, no denotation.

Interestingly, even though 'cooperate' has a strong positive connotation, such that a different word is used for 'bad' cooperation, its opposite, 'competition', does not have a strong connotation. Whether in a good sense, 'our business is a lot more competitive than it used to be', or in a bad sense, 'Bobby needs to learn to not be so competitive with the other children', the same word is routinely used (although it's interesting to note that a quick review of the thesaurus shows that most of the synonyms for 'competitive' carry negative connotations - 'aggressive', 'antagonistic', 'combative', etc.).

This leaves open the question of whether there actually are any behaviours that are universally supported, or just words with strong positive connotations such as 'wisdom'. Even something as universally admired as perseverance gets recast as stubbornness when it seems that no good will come from the perseverance. In more severe cases (i.e. when perseverance is not combined with moderation), it might even start to be referred to as obsessiveness. A Google search of the word 'perseverance' finds nothing but praiseworthy behaviour, but a Google search of 'perseverance' and 'obsessive' brings up a gallery of mental disorders and destructive behaviour patterns.

Anyway, I'm sure that was all no-longer-fashionable hat to linguists and ethicists, but it's new to me, at least in terms of my awareness of the extent to which ethical terms contain a mix of both a descriptive and a positive/negative component. We need to be careful not to assume that a certain behaviour is universally praiseworthy just because it goes by a less common name in its non-praiseworthy context.

---
1from Taran Wanderer, by Lloyd Alexander

2This is true for a lot of words, of course, but it applies particularly to ethical values.


Wednesday, August 05, 2009

22. Morals by Agreement: Cooperation and Bargaining

Note: This post is the twenty-second in a series. Click here for the full listing of the series.

This is the fourth of what should be a few posts on the book Morals by Agreement, by David Gauthier.

--


The clip above is 'Opportunities' by the Pet Shop Boys. The chorus says, "I've got the brains, you've got the looks, let's make lots of money" and this is a pretty good summary of chapter 5 of Morals by Agreement.

Of course, in Gauthier's language this translates into, "we become aware of each other as potential co-operators in the production of an increased supply of goods, and this awareness enables us to realize new benefits"

Gauthier's argument is that, given the opportunity to work together to better (or not worsen) their position, people are rational to do so. I agree not to play my tuba late at night and you agree not to mow your lawn early in the morning. Or maybe we form a joint partnership (in order to make lots of money). Or maybe I agree to give you my wallet and you agree not to shoot me*, etc.

Now, I guess to most people it probably seems fairly obvious that people make agreements of this sort all the time and it makes sense to do so, but you have to remember that Gauthier is starting from his notion of the rational human as one that does nothing but maximize its own interest in the present, with no ability to work together with others in a cooperative fashion.

The problem is that, unlike in the perfectly competitive market where people could (per Gauthier's theory) reach optimal outcomes by strictly pursuing maximization of their self-interest, situations where cooperation would be beneficial are situations where the market has 'failed'. In these situations, simplistic pursuit of self-interest by all parties will lead to sub-optimal outcomes, because it fails to consider superior 'joint strategies' in which the people in a situation choose their strategies together (by bargaining to reach agreement) rather than separately.

You can think of a basketball team that has a better chance of scoring if they follow the play their coach drew up for them than if they all just do their own thing and hope for the best.

Or, thinking back to the original Prisoner's Dilemma: if the prisoners were able to make a binding agreement with each other not to confess, this would open up a new, optimal (from the prisoners' standpoint) solution to their dilemma that was unavailable when they could each only maximize their own self-interest.
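
To make this concrete, here's a minimal Python sketch (the jail terms are my own illustrative numbers, not Gauthier's) showing how the ability to adopt a joint strategy changes the outcome:

    # Jail time in years, indexed by (my_move, partner_move); lower is better.
    # (Illustrative numbers only.)
    YEARS = {
        ('quiet',   'quiet'):   1,
        ('quiet',   'confess'): 10,
        ('confess', 'quiet'):   0,
        ('confess', 'confess'): 5,
    }
    MOVES = ['quiet', 'confess']

    def best_reply(partner_move):
        # The move that minimizes my jail time, given my partner's move.
        return min(MOVES, key=lambda m: YEARS[(m, partner_move)])

    # Individual maximization: confessing is the best reply to everything.
    dominant = [m for m in MOVES if all(best_reply(p) == m for p in MOVES)]
    print("dominant strategy:", dominant)                          # ['confess']
    print("no agreement:", YEARS[('confess', 'confess')], "years each")

    # A binding agreement lets the pair pick the joint strategy that
    # minimizes their combined jail time instead.
    joint = min(((a, b) for a in MOVES for b in MOVES),
                key=lambda pair: YEARS[pair] + YEARS[(pair[1], pair[0])])
    print("binding agreement:", joint, "->", YEARS[joint], "years each")

Maximizing individually, both prisoners land on five years each; allowed to bind themselves to a joint strategy, they land on one year each.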

Gauthier argues that in this context, rational behaviour requires that people agree to constrain their self-interest maximization for a while in order to secure a share of the benefits of cooperation.

Gauthier explains that, in the bargaining phase where the spoils of (the future) cooperation are being divided up, people are still acting to maximize their self-interest as best they can under the circumstances. But the bargain that is reached is an agreement to not maximize self-interest while carrying out the terms of the bargain.

So, I maximize my self-interest as I push for the smallest concession possible when agreeing not to play my tuba late at night ('OK, I won't play after midnight, but I won't agree not to play between 11 and midnight', etc.), but when midnight strikes, I go against my self-interest by keeping the terms of the agreement rather than just going ahead and playing my tuba anyway.

This question of whether the rational Gauthier people will actually stick to their agreements is one of the stickiest that Gauthier faces and is the subject of chapter 6.

I should also mention that Gauthier spends much of the chapter explaining his bargaining theory, which holds that if people have an equal willingness and ability to drive a hard bargain, they will end up making an equal bargain (one where everybody gains a roughly equal share of the spoils of cooperation).
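
(Gauthier's formal version of this claim is his principle of 'minimax relative concession'. The Python sketch below is only a toy illustration of that idea under assumed numbers - a cooperative surplus of 100 and no-agreement baselines of zero - not his actual derivation.)

    # Toy minimax-relative-concession bargain over an assumed surplus of 100.
    SURPLUS = 100
    BASELINE = {'A': 0, 'B': 0}           # payoff each gets with no agreement
    CLAIM = {'A': SURPLUS, 'B': SURPLUS}  # the most each could hope for

    def relative_concession(party, payoff):
        # Fraction of the maximum possible concession the party has given up.
        return (CLAIM[party] - payoff) / (CLAIM[party] - BASELINE[party])

    # Pick the split (x to A, SURPLUS - x to B) that minimizes the larger
    # of the two relative concessions.
    best = min(range(SURPLUS + 1),
               key=lambda x: max(relative_concession('A', x),
                                 relative_concession('B', SURPLUS - x)))
    print("A gets", best, "/ B gets", SURPLUS - best)  # 50 / 50

With symmetric baselines and claims, the split that minimizes the larger concession is the equal one - the 'equal bargain' described above.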


---
* You might think that the bargain where I give up my wallet in return for my life is not a fair one and should count differently than the others somehow - Gauthier agrees, but defers discussion of the bargaining position of the parties until chapter 7.


Sunday, July 03, 2005

The Efficient Society: Cooperation vs. Competition

Note: This post is a continuation of this post and this post.

Second note: This post doesn't really add much new to this excellent post from Andrew Spicer from over 2 years ago (which you really should read), but I wanted to write it all out in my own words to get a grasp on the ideas involved.



Chapters 3 through 5 of 'The Efficient Society' all revolve around the concept of the Prisoner's Dilemma1.

From the Wikipedia article on the Prisoner's Dilemma:

"The classical prisoner's dilemma (PD) is as follows:

Two suspects A, B are arrested by the police. The police have insufficient evidence for a conviction, and having separated both prisoners, visit each of them and offer the same deal: if one turns King's Evidence against the other and the other remains silent, the silent accomplice receives the full 10-year sentence and the betrayer goes free. If both stay silent, the police can only give both prisoners 6 months for a minor charge. If both betray each other, they receive a 2-year sentence each.

Assume both prisoners are completely selfish and their only goal is to minimise their own jail terms. Each prisoner has two options: to cooperate with his accomplice and stay quiet, or to betray his accomplice and give evidence. The outcome of each choice depends on the choice of the accomplice. However, neither prisoner knows the choice of his accomplice. Even if they were able to talk to each other, neither could be sure that they could trust the other.

Now, let's assume our protagonist prisoner is rationally working out his best move. If his partner stays quiet, his best move is to betray as he then walks free instead of receiving the minor sentence. If his partner betrays, his best move is still to betray, as by doing it he receives a relatively lesser sentence than staying silent. At the same time, the other prisoner thinking rationally would also have arrived at the same conclusion and therefore will betray. Thus, in a game of PD played only once by two rational players both will betray each other and the world will become a place for monsters. Betrayal is their only rational choice.

However, if only they could arrive at a conspiracy, if only they could be sure that the other player would not betray, they would both have stayed silent and achieved a better result. However, such a conspiracy cannot exist, as it is vulnerable to the treachery of selfish individuals, which we assumed our prisoners to be. Therein lies the true beauty and the maddening paradox of the game.

If only they could both cooperate, they would both be better off; however, from a game theorist's point of view, their best play is not to cooperate but to betray. This treacherous quality of the deceptively simple game has inspired libraries full of books, made it one of the most popular examples of game theory and made some people appeal for banning studies on the game.

If reasoned from the perspective of the optimal interest of the group (of two prisoners), the correct outcome would be for both prisoners to cooperate with each other, as this would reduce the total jail time served by the group to one year total. Any other decision would be worse for the two prisoners considered together. However by each following their selfish interests, the two prisoners each receive a lengthy sentence."
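
As a quick sanity check, here's a short Python sketch using the jail terms from the quote; it confirms that betrayal is the best reply to either move, even though mutual silence minimizes the pair's total time served:

    # Jail time in months, using the quote's numbers: 6 months if both stay
    # silent, 2 years if both betray, betrayer free / silent partner 10 years.
    MONTHS = {
        ('silent', 'silent'): 6,
        ('silent', 'betray'): 120,
        ('betray', 'silent'): 0,
        ('betray', 'betray'): 24,
    }
    MOVES = ['silent', 'betray']

    for other in MOVES:
        best = min(MOVES, key=lambda me: MONTHS[(me, other)])
        print(f"if the other prisoner plays {other!r}, my best move is {best!r}")

    # Total time served by the pair under each symmetric outcome:
    print("both silent:", 2 * MONTHS[('silent', 'silent')], "months total")
    print("both betray:", 2 * MONTHS[('betray', 'betray')], "months total")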


Clearly, the outcome of the dilemma depends upon where on the axis running from perfect cooperation to perfect competition the people involved happen to be. If people always cooperate then prisoner's dilemmas have no impact at all. If people always compete then the prisoner's dilemma becomes inescapable no matter how obviously damaging it becomes.

Now consider the following two prisoner's dilemmas (both taken from 'The Efficient Society').

Dilemma #1:
"Imagine a situation in which there are no police or security guards, and no official rules to prevent you from doing what you want. Everyone will therefore be responsible for his or her own protection. In order to guarantee your personal security, you might reasonably decide to carry a weapon. Let's say you have a choice between carrying a knife and carrying a gun. Your potential assailant will have the same choice. So if you do get attacked and need to defend yourself, the set of possible scenarios looks something like the following (in descending order of desirability):

1. You have a gun; your opponent has a knife.
2. You both have knives.
3. You both have guns.
4. You have a knife; your opponent has a gun.

Both of you will try to get your own number one, and as a result you will both get number three. Everyone will carry a gun. Thus the desire to achieve greater personal security has the effect of increasing the deadliness of any eventual conflict (and thus the overall level of insecurity)."


Dilemma #2:
Suppose you and your competitor are each holding substantial quantities of some good. If you both charge six dollars per unit, buyers will purchase 100 units total, probably 50 from each of you. However, at a price of five dollars, buyers would be willing to purchase 120 units. Suppose that your cost is three dollars per unit. Here are the possible scenarios:

1. You lower prices; your competitor doesn't. Sold: 120 @ $5. Profit: $240 [120 * (5-3)]
2. Neither of you lowers prices. Sold: 50 @ $6. Profit: $150 [50 * (6-3)]
3. Both of you lower prices. Sold: 60 @ $5. Profit: $120 [60 * (5-3)]
4. Your competitor lowers prices; you don't. Sold: 0. Profit: $0.

Whether you are chasing after windfall profits or simply trying not to get frozen out of the market, your best strategy is to lower prices. The same goes for your competitor. As a result, you will both lower prices, and so instead of making profits of $150 each, you will make only $120 each. It is clearly not in your interest to pursue this competition, but you don't have much choice. ... That is why they say every free market is but a failed cartel.
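
The profit figures above can be reproduced directly from the stated assumptions (a $3 unit cost, demand of 100 units at $6 and 120 units at $5). A short Python sketch:

    # Reproducing Heath's profit table from the quoted assumptions.
    COST = 3
    DEMAND = {6: 100, 5: 120}  # units buyers take at each market price

    def my_profit(my_price, rival_price):
        market_price = min(my_price, rival_price)
        units = DEMAND[market_price]
        if my_price > rival_price:
            my_units = 0           # undercut: frozen out of the market
        elif my_price < rival_price:
            my_units = units       # undercutting: take the whole market
        else:
            my_units = units // 2  # matching prices: split the market
        return my_units * (my_price - COST)

    for mine in (6, 5):
        for rival in (6, 5):
            print(f"I charge ${mine}, rival charges ${rival}: "
                  f"profit ${my_profit(mine, rival)}")

Lowering the price is the best reply whether or not the rival lowers ($240 beats $150, and $120 beats $0) - exactly the dominant-strategy logic of the first dilemma.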


Note that from the perspective of the person involved in the dilemma, these two situations are identical. In both cases they will benefit by cooperating and suffer by competing. But from the perspective of society as a whole the two cases are quite different: in the first case cooperation is beneficial (less deadly violence is good for society) but in the second case competition is beneficial (lower prices are good for society).

So what does an efficient society do? Clearly, an efficient society encourages cooperation in the first type of dilemma but encourages competition in the second kind.

---

So, chapter 3 of the Efficient Society looks at the role of government in solving dilemmas with negative impacts on society. Because the government has, by definition, a monopoly in any given territory, if the government takes responsibility for some activity then we can ensure that there is no competition within that particular territory for that particular activity. Here we see why the monopoly on the use of force is so fundamental to government. Not only is competition in the use of force (violent crime/civil war) the most devastating prisoner's dilemma problem we face, but unless the government controls the use of force, it has no power to enforce its monopoly in any other area.

The above examples and Chapter 3 may give the misleading impression that we always need government to intervene in cases where cooperation is beneficial. In fact, for the vast majority of these situations, government intervention is not required, and we rely on our sense of what is right to cooperate rather than having the government force it on us.

In chapter 4, Heath looks at situations where moral suasion replaces the role of government in helping us to cooperate with each other,
"the practice of lining up is itself an efficiency promoting institution. The reason we line up for things is that any kind of rationed access to a good generates a prisoner's dilemma. Everyone has an incentive to push his or her way to the front, but when everyone does this, the mob scene that ensues results in everyone getting through more slowly. This is especially true in the case of emergency exits. When I was young, it always seemed strange to me that parents and teachers expected us to line up to escape from burning buildings. The reason, I later discovered, is that more people can escape in a given time if the exit is conducted in an orderly fashion. This is not much comfort to those at the back of the line, and so the rule requires fairly powerful moral incentives in order to have any force."

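Here's a deliberately crude toy model of the emergency-exit case (the congestion numbers are mine, not Heath's): pushers get out first, but every pusher slows the exit for everyone.

    N = 10  # people trying to get out of the building

    def seconds_per_person(pushers):
        # Assumed congestion: each pusher adds half a second per exit.
        return 1.0 + 0.5 * pushers

    def report(pushers):
        pace = seconds_per_person(pushers)
        if pushers > 0:
            avg_pusher = (pushers + 1) / 2  # pushers exit first
            print(f"{pushers} pushers: avg pusher out at {avg_pusher * pace:.1f}s")
        if pushers < N:
            avg_orderly = pushers + (N - pushers + 1) / 2  # the rest file out
            print(f"{pushers} pushers: avg orderly out at {avg_orderly * pace:.1f}s")

    for k in (0, 1, N):  # nobody pushes, one defector, everybody pushes
        report(k)

In this toy, a lone pusher escapes in 1.5 seconds instead of an expected 5.5, so pushing always tempts; but when all ten push, the average exit takes 33 seconds - six times worse than the orderly 5.5. That is the prisoner's dilemma structure Heath is pointing at, and why the rule needs moral force behind it.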

The fact that morals supporting cooperation have always been a bulwark against society falling apart into a million prisoner's dilemmas helps explain why trade, in which competition is promoted over cooperation, has throughout history been consistently regarded as a base or shameful occupation.

In chapter 5, Heath looks at prisoner's dilemmas which have positive outcomes for society and shows how markets, by facilitating competition, make it impossible for sellers to get out of their prisoner's dilemmas.

The one thing I would take issue with Heath on here is his statement,
"In point of fact, the central "discovery" that led to the development of the market economy is not that one can do without morality but that one can permit a selective decline in morality without having society fall apart."


In fact, I would argue, based on 'Systems of Survival' by Jane Jacobs, that the market system does not depend on a decline in morality so much as on the use of a second set of (equally valid) morals - ones specifically suited to prisoner's dilemmas with positive outcomes for society. Instead of looking at prisoner's dilemmas, Jacobs tracks the two systems of morals back to 'taking' and 'trading' as our two means of making a living - but clearly there is considerable overlap between activities based on 'taking' and prisoner's dilemmas with negative outcomes for society, and a similar overlap between trading activities and prisoner's dilemmas with a positive outcome for society2.

Jacobs tracks 'commercial' morality back to the ancient Greeks and through societies such as the Romans, Japanese, Indians and medieval Europeans, which were effectively divided into classes or castes, with one caste living by 'taking' / cooperative morality and one caste living by commercial / trading / competitive morality.

Somewhere along the line, for some reason, the commercial set of morals became greatly strengthened in Europe coming out of the Middle Ages, leading to the expansion of markets, the Enlightenment, the Industrial Revolution, etc. etc.

Which brings us back to chapter 2. In chapter 2, Heath suggested that our move to a society where the state stayed out of questions of values was a result of improved weapons technology, which made disagreements much more deadly than in the past. At this point, I am more inclined to think that the shrinking role of the state has more to do with the increased role of commercial/competitive morality in our society than with the Reformation, the invention of gunpowder or the European civil wars. I suspect that Hobbes, with his social contract, was simply reflecting changes in society which had already taken place, rather than actually having much of an impact on the course of events himself.

For instance, Jacobs tracks most of our current individual rights back to the development of the medieval European Custom of Merchants, noting that, "So many of what we call civil rights are actually rights to make contracts as equals" and that, "Another variety of sexism has often denied homosexuals the benefit of contractual law". Of course, none of this disputes the truth of what Heath wrote in chapter 2, just the source of the changes he described.

And of course I don't really have any answer to explain how it was that in Europe at that particular time commercial morality became so much stronger and more influential than it ever had anywhere else. I suppose Heath's guess is as good as any.

Well, my head hurts, so I think I'm going to leave it at that for now. The amazing thing is that there is actually still a lot more stuff in 'The Efficient Society' that I want to discuss (the usefulness of bureaucracies, problems with GDP as a measurement tool, the need for global government and the challenge presented by an information economy, among others), but I'll have to save that for future posts.



------
1As Andrew points out in his post,
"the more applicable generalization is that of the problem of suboptimization. As defined on the Principia Cybernetica web site:

Optimizing the outcome for a subsystem will [in many cases] not optimize the outcome for the system as a whole. This intrinsic difficulty may degenerate into the "tragedy of the commons": the exhaustion of shared resources because of competition between the subsystems."



2To see why, note that when you take for a living you are basically relying on nature's bounty (or the work of others), and when you take something for yourself, nobody else can take the same thing. So taking amounts to a zero-sum game.

When trading, however, it can be assumed that because both people make the exchange voluntarily, both must benefit, so the outcome of a trade is a net gain.

Now imagine introducing competition into a taking activity and into a trading activity. In the zero sum world of taking, increased competition accomplishes nothing other than to make everybody work harder (and to fight more), so an optimal strategy in a taking environment is to cooperate rather than to compete. In the positive sum world of trading, however, competition makes everybody work harder which (because it is a positive sum game) leads to a better outcome for everybody - assuming that everybody competes within the rules of the trading game, sticking to voluntary exchange rather than using force.

This is why when people who take for a living start to compete with each other it is generally referred to as treason, while when people who trade for a living stop competing with each other it is known as collusion.

Of course, trading can occur and be beneficial even without competitive markets, but competitive markets help a lot because it is the competition which creates the prisoner's dilemma which ensures that prices are set efficiently and which motivates traders to work hard.
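
To put rough numbers on this (mine, not Jacobs'), here's a small Python sketch contrasting extra competitive effort in a zero-sum 'taking' world with the same effort in a positive-sum 'trading' world:

    PIE = 20         # total value at stake (assumed)
    EFFORT_COST = 2  # cost of each unit of competitive effort (assumed)

    def taking_outcome(effort_a, effort_b):
        # Zero-sum: the pie is fixed; effort only shifts shares and burns cost.
        share_a = PIE * effort_a / (effort_a + effort_b)
        return (share_a - EFFORT_COST * effort_a,
                (PIE - share_a) - EFFORT_COST * effort_b)

    def trading_outcome(effort_a, effort_b):
        # Positive-sum: each unit of effort adds 6 units of value (assumed),
        # and the traders split the enlarged pie.
        pie = PIE + 6 * (effort_a + effort_b)
        return (pie / 2 - EFFORT_COST * effort_a,
                pie / 2 - EFFORT_COST * effort_b)

    print("taking, low effort:  ", taking_outcome(1, 1))   # (8.0, 8.0)
    print("taking, high effort: ", taking_outcome(3, 3))   # (4.0, 4.0)
    print("trading, low effort: ", trading_outcome(1, 1))  # (14.0, 14.0)
    print("trading, high effort:", trading_outcome(3, 3))  # (22.0, 22.0)

In the taking world, mutual escalation leaves both sides worse off (4 each instead of 8), so restraint - cooperation - is the sensible norm; in the trading world, the same escalation leaves both better off (22 instead of 14).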


-----
A couple of other related posts on this topic:

Andrew Spicer describes in detail an interesting (and surprising) prisoner's dilemma involving trucks passing other vehicles on the highway.

I write about when competition is good and when it is bad in the context of industrial subsidies, as per Systems of Survival.
