Crawl Across the Ocean

Tuesday, June 14, 2011

91. Another View on the Evolution of Cooperation

Note: This post is the ninety-first in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

I happened upon an interesting article the other day by Daron Acemoglu.

Acemoglu points out that researchers often use coordination models to study the level of cooperation in society because these models allow for multiple equilibria - i.e. one with cooperation and one without.1

"Why do similar societies end up with different social norms, and why and how social norms sometimes change? A common approach to answering these questions is to use coordination games, which have multiple equilibria corresponding to different self-fulfilling patterns of behaviour and rationalise the divergent social norms as corresponding to these equilibria. For example, it can be an equilibrium for all agents to be generally trusting of each other over time, while it is also an equilibrium for no agent to trust anybody else in society. We can then associate the trust and no-trust equilibria with different social norms."


As he goes on to point out, this isn't a very dynamic analysis, in the sense that it doesn't answer the questions of why or how we get from one equilibrium to another.

"Simply ascribing different norms to different equilibria has several shortcomings, however. First, it provides little insight about why particular social norms and outcomes emerge in some societies and not in others. Second, it is similarly silent about why and how some societies are able to break away from a less favourable (e.g., no trust) equilibrium. Third, it also does not provide a conceptual framework for studying how leadership by some individuals can help change social norms."
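To make the multiple-equilibria idea concrete, here is a minimal sketch (my own illustration with made-up payoffs, not Acemoglu's model) of a two-player trust game in which both 'everyone trusts' and 'no one trusts' are self-sustaining:

```python
# payoffs[row_action][col_action] = (row player's payoff, column player's payoff)
# Mutual trust pays best, but unilateral trust is punished, so "nobody
# trusts" is also self-sustaining. Numbers are illustrative only.
payoffs = {
    ("trust", "trust"): (3, 3),
    ("trust", "distrust"): (0, 1),
    ("distrust", "trust"): (1, 0),
    ("distrust", "distrust"): (1, 1),
}
actions = ["trust", "distrust"]

def is_equilibrium(a, b):
    """Neither player gains by unilaterally switching actions."""
    row_now, col_now = payoffs[(a, b)]
    row_best = all(payoffs[(alt, b)][0] <= row_now for alt in actions)
    col_best = all(payoffs[(a, alt)][1] <= col_now for alt in actions)
    return row_best and col_best

equilibria = [(a, b) for a in actions for b in actions if is_equilibrium(a, b)]
print(equilibria)  # both (trust, trust) and (distrust, distrust) survive
```

Because unilateral trust pays worse than mutual distrust, neither equilibrium can be escaped by one agent acting alone - which is exactly why the question of how societies move between them is interesting.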


I didn't spring for the $5 required to download the full paper, but from the article it seems that one mechanism Acemoglu posits for society to move from one equilibrium to another is a 'prominent' person influencing other people with their own behaviour.

"A particularly important form of history in our analysis is the past actions of "prominent" agents who have greater visibility (for example because of their social station or status). Their actions matter for two distinct but related reasons. First, the actions of prominent agents, impact the payoffs of the other agents who directly interact with them. Second, and more importantly, because prominent agents are commonly observed, they help coordinate expectations in society. For example, following a dishonest or corrupt behaviour by a prominent agent, even future generations who are not directly affected by this behaviour become more likely to act similarly for two reasons; first, because they will be interacting with others who were directly affected by the prominent agent's behaviour and who were thus more likely to have followed suit; and second, because they will realise that others in the future will interpret their own imperfect information in light of this type of behaviour. The actions of prominent agents may thus have a contagious effect on the rest of society."


What strikes me, coming back to the discussion about coordination, is all the words we have that, in the right context, mean the same thing: coordination, cooperation, correlation, collaboration, etc. Naturally, the trick with a coordination problem is to somehow coordinate everyone's behaviour. A hierarchical structure can create a monopoly in which one entity/person controls all, thus greatly simplifying the problem of getting everyone to sing from the same songbook. When putting leviathan in charge isn't feasible or isn't desired, then it becomes trickier to get a bunch of independent actors to coordinate on a particular outcome.

The 'prominent' person is like a soft version of the leviathan - not forcing everyone to go along, merely setting a good or bad example and hoping the ripples of that behaviour are enough to 'tip' society from one equilibrium to another. I didn't read the paper so I shouldn't really comment, but the notion that something like JFK asking people what they can do for their country could lead to a widespread change in behaviour is hard for me to swallow. To me it seems more likely that levels of cooperation will be driven by a combination of history (as Acemoglu acknowledges) and changes in fundamental factors like technology (e.g. the medium is the message) and the natural environment (along the lines that I discussed in my last post).
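As a toy illustration of the 'contagion' mechanism (entirely my own sketch with made-up parameters, not the model from the paper), imagine agents who copy the majority behaviour they sample, plus a prominent agent whose action is seen by everyone and so carries extra weight:

```python
import random

random.seed(1)

def step(population, prominent_action, prominent_weight=30):
    """Each agent samples 10 peers plus the (heavily weighted) prominent
    agent, then adopts the majority behaviour (1 = cooperate, 0 = defect)."""
    new_pop = []
    for _ in population:
        sample = random.choices(population, k=10) + [prominent_action] * prominent_weight
        new_pop.append(round(sum(sample) / len(sample)))  # majority rule
    return new_pop

pop = [1] * 100                          # start in the cooperative equilibrium
for _ in range(20):
    pop = step(pop, prominent_action=0)  # prominent agent behaves badly
print(sum(pop))                          # cooperation has collapsed to 0
```

With the prominent agent's action weighted this heavily, the whole population tips to defection in a single step, which is of course an exaggeration - the interesting (and harder) cases are where the prominent agent's influence is only marginally decisive.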


***
1 Note: The Stag Hunt, which we discussed back here, is an example of a game theory model with more than one equilibrium.


Tuesday, December 14, 2010

75. The Strategy of Conflict Part 1, Deception and Tradition

Note: This post is the seventy-fifth in a series about government and commercial ethics. Click here for the full listing of the series. The first post in the series has more detail on the book 'Systems of Survival' by Jane Jacobs which inspired this series.

This week's post is about the book, "The Strategy of Conflict" by Thomas Schelling.

Writing in the 1960s, Schelling was concerned that the field of game theory was too focussed on zero-sum games (games where one person's gain is another's loss - think chess). He proposed a continuum of games, with a pure zero-sum game at one end, a pure coordination game (where both people gain if they make the right choices and both lose if they don't - think charades) at the other, and 'mixed-motive' games in the middle. At the zero-sum end of the spectrum, participants in the game have negatively correlated outcomes (what's good for me is bad for you), while at the coordination end, they have positively correlated outcomes (what's good for me is good for you as well).

Schelling recognized that in the zero-sum game, deception and secrecy were the order of the day, while in the coordination game, open, forthright communication (honesty) was the key to success.

In the coordination game, Schelling offers a list of examples of how people will find a way to coordinate even when they can't communicate directly with one another:

"Name 'heads' or 'tails.' If you and your partner name the same, you both win a prize.

Circle one of the numbers listed in the line below. You win if you all succeed in circling the same number.

7 100 13 261 99 555

..."


These are the first two in a list of examples showing that when people need to agree on something without communicating, they focus on whatever they think will be the most obvious element to everyone ('heads' because it is written first, '7' because it is the first number in the list, and, in a later example, dividing a territory along a river because it is the most notable feature of the landscape).

Later on, Schelling introduces games where the two participants must divide something between themselves. Attacking the other participant can lead to a gain for yourself, but reduces the total amount to be divided. The participants overall do best when they can identify some agreeable way to divide the pie without fighting, but individual participants do best if the agreement is made to suit them.

Schelling theorizes that in situations of this nature, tradition can play a powerful role in providing a focal point that people can agree on. Any attempt to break with tradition re-opens all the contention for position of the various parties involved and can lead to conflict and poorer results for all unless a new tradition can be quickly established1.

Says Schelling, "We have now rigged the game so that the players must bargain their way to an outcome, either vocally or by the successive moves that they make, or both. They must find ways of regulating their behaviour, communicating their intentions, letting themselves be led to some meeting of minds, tacit or explicit, to avoid mutual destruction of potential gains. The 'incidental details' may facilitate the players' discovery of expressive behaviour patterns; and the extent to which the symbolic contents of the game - the suggestions and connotations - suggest compromises, limits and regulations should be expected to make a difference.

It should, because it can be a help to both players not to limit themselves to the abstract structure of the game in their search for stable, mutually nondestructive, recognizable patterns of movement. The fundamental psychic and intellectual process is that of participating in the creation of traditions."

In 'Systems of Survival', Jane Jacobs, while not using the language of game theory, expressed a similar speculation about the role of tradition: "I suspect one reason revolutionary governments have become cruel so easily and swiftly after ascendancy is that they've lost the brakes of tradition."

If there wasn't the same potential in the situation for destructive conflict, then agreement, and tradition, wouldn't need to carry the same premium, but in situations where there is potential for a costly back and forth battle, it's better to reach some agreement than none at all - and the bargaining can come down to who can identify a 'traditional' settlement that favours their interests.

That's as far as I've got so far in 'The Strategy of Conflict' but even if there is nothing else interesting in the rest of the book, it's been worthwhile.

----
1 For an example of how angry people can get over even a trivial break in tradition, consider the reallocation of a small amount of downtown Vancouver traffic right-of-way from cars to bicycles. This break with the 'all cars, all the time' tradition enraged people, leading to what one of the few sane articles written about the change accurately described as 'an outpouring of spectacular gibberish'. The gibberish makes more sense if you understand the enraged car drivers as worrying that once tradition has been broken in this fashion, who knows where it will lead or when it will stop.


Tuesday, December 22, 2009

32. Moral Conditions of Economic Efficiency, Part 3

Note: This post is the thirty-second in a series. Click here for the full listing of the series.

Chapter 4 of 'The Moral Conditions of Economic Efficiency', by Walter Schultz, takes on the notion that even though strict rational egoists may not be able to achieve economic efficiency immediately, their behaviour will settle into an efficient pattern as they (strictly rationally) adopt rules that prevent their selfishness from keeping them from achieving efficient outcomes.

For example, back here, I quoted a Washington Post article which read,
"[Alan] Greenspan had an unusual take on market fraud, Born recounted: "He explained there wasn't a need for a law against fraud because if a floor broker was committing fraud, the customer would figure it out and stop doing business with him."


Schultz first argues that, although coordination-type situations (e.g. choosing which side of the road to drive on) enable strict rational egoists to form rules that are to everyone's benefit, exchange is not a coordination-type situation. The reason is that a coordination situation allows everyone to achieve an optimal result, whereas in exchange, each person's best option is to get what the other person is offering without parting with anything themselves - by way of force or fraud if necessary. But it's not possible for both parties to come out ahead by cheating each other, so exchange is a collective action, or Prisoner's Dilemma, type of problem.
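Schultz's structural point can be illustrated with a toy payoff table (the numbers are my own, for illustration only): whatever the other party does, 'take without giving' pays better, which is the defining feature of a Prisoner's Dilemma rather than a coordination game.

```python
# payoffs[(my_action, your_action)] = my payoff; illustrative numbers only
payoffs = {
    ("honest", "honest"): 2,   # both gain from trade
    ("honest", "cheat"): -1,   # I hand over my goods and get nothing back
    ("cheat", "honest"): 3,    # I take yours and keep mine
    ("cheat", "cheat"): 0,     # no trade happens
}

# Whatever the other party does, cheating pays better...
assert payoffs[("cheat", "honest")] > payoffs[("honest", "honest")]
assert payoffs[("cheat", "cheat")] > payoffs[("honest", "cheat")]
# ...yet mutual honesty beats mutual cheating: the Prisoner's Dilemma signature.
assert payoffs[("honest", "honest")] > payoffs[("cheat", "cheat")]
print("exchange has the Prisoner's Dilemma structure")
```

In a pure coordination game the best outcome for each player is also an equilibrium; here the mutually best outcome (honest, honest) is exactly the one that self-interest undermines.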

Next, Schultz argues that 'the shadow of the future', i.e. concerns about what might happen in the future, will not cause strict rational egoists to refrain from force and fraud. I'm not sure I quite follow Schultz's argument on this point, so I'll quote him,
"we have already shown that strict rational egoists will always choose the best feasible means to achieve their most highly valued social state, so when similar situations emerge [in the future] inefficient outcomes result."


As best I can tell, Schultz is arguing that strict rational egoists are not capable of prudence in the sense of weighing the benefits of theft/fraud now against the benefits of cooperation over the long run. Schultz could make the case that this sort of prudence is itself a moral rule that does not belong in our sketch of the strict rational egoist but he doesn't make this argument explicitly.

The question of whether (strict) self-interest leads to cooperative behaviour in repeated collective action problems has been much studied, and I will likely come back to it later in the series. For now, I'll simply note that despite morals against force and fraud, and the presence of a government that will punish you if you are caught in such activities, we are far from eliminating these behaviours entirely. So the 'shadow of the future' (as game theorists refer to the effect whereby concerns about future results influence present decisions) may help some, but it seems incapable of playing the role Greenspan imagined, where no rules against fraud are necessary.
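For what it's worth, there is a standard textbook way to formalise the 'shadow of the future' in a repeated Prisoner's Dilemma (this is the general grim-trigger condition from repeated game theory, not Schultz's argument): cooperation is sustainable only if players weight the future heavily enough.

```python
# With one-shot payoffs T > R > P (temptation to cheat, reward for mutual
# cooperation, punishment for mutual defection), cooperating forever yields
# R/(1-d) while cheating once yields T now plus P/(1-d) discounted, where d
# is the discount factor. Cooperation wins iff d >= (T - R) / (T - P).

def min_discount_factor(T, R, P):
    """Smallest weight on future payoffs at which cooperating forever
    beats cheating once and then being punished forever."""
    return (T - R) / (T - P)

# Using the illustrative exchange payoffs T=3, R=2, P=0:
print(min_discount_factor(T=3, R=2, P=0))  # 1/3
```

The point cuts both ways: the future can sustain cooperation, but only under conditions (repeated interaction, patient players, detectable cheating) that Greenspan's floor brokers don't necessarily satisfy.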

In chapter 5, Schultz discusses externalities. He defines externalities as follows: "An externality is an uncompensated cost or benefit that may be intentional, accidental or incidental."

...and clarifies that...

"Acts of theft and fraud directly affect the well-being of consumers and exemplify intentional externalities. Harm resulting from negligence or from an accident exemplifies an accidental externality. Externalities also include incidental effects of the acts of production and consumption."

He goes on to comment that, "To assume that all externalities are absent and that every agent behaves competitively is to set aside the role of morality. The system of moral constraints presented in Chapter 6 secures competitive behaviour and eliminates intentional externalities but makes no provision for the internalization of accidental and incidental externalities."

Schultz then claims that:

1) A system of moral normative constraints precludes externalities due to intentional consequences of nonmarket action.

2) A system of moral normative constraints and conventions rectifies accidental and incidental externalities.

3) Moral normative constraints and conventions coordinate expectations and thereby reduce transaction costs

4) Moral normative constraints are the logical limits of the commodification of desire.

Schultz explains the first 3 points: "We have established the first claim. Claims (2) and (3) are based on the general goals of tort law, property law and contract law, respectively, and have been established."

To be honest, it wasn't clear to me how claims 2 and 3 have been established, but never mind.

Schultz says no more on the first three points and devotes the rest of the chapter to an explanation of point 4, arguing that the desires of people that we recognize in calculating the effects of externalities are limited by the rights that people need to have in order to secure economic efficiency. In other words, my desire to have you as a slave is not recognized as a valid preference, since if you don't have the autonomy to make your own decisions, we won't achieve the same efficiency that we might have (because you can't pursue your preferences properly if you are my slave).


---

In chapter 6, Schultz sets out what he sees as the moral conditions of economic efficiency. Note that where I might say, for example, that people need to follow a moral rule to 'be honest', Schultz instead says, using the same example, that people have a 'right to true information' and that people also have a moral incentive to respect that right. It amounts to the same thing, as far as I can tell.

The Moral Conditions of Economic Efficiency per Schultz:

1) Property Rights - meaning that people can't mess with your stuff and you can do what you want with your stuff.

2) Right to True Information (that is relevant to a potential exchange) - meaning that you shouldn't tell your car insurance company that your car is just for personal use, when really you drive to work and back every day.

3) A right to welfare - Schultz recognizes that, given a choice between stealing and starving, people will and should choose stealing, because the right to life takes precedence over the efficiency-based rights. Plus Schultz makes an insurance argument (one that seems a bit out of place) that it is more efficient for basic welfare to be assured centrally than for everyone to self-insure against deprivation.

4) A right to autonomy - without autonomy, people can't make exchanges that match their preferences, so autonomy is a precondition for trade as we understand it even being possible.

Schultz also notes that we need some mechanism by which people are held accountable for respecting these rights, as well as conventions for setting prices, and conventions and normative constraints for commodifying desire and for rectifying the results of accidental and incidental externalities.

---

I realize that this post doesn't really show all that clearly how Schultz gets to his final requirements, but that's likely because it wasn't all that clear to me reading the book.


Wednesday, August 05, 2009

22. Morals by Agreement: Cooperation and Bargaining

Note: This post is the twenty-second in a series. Click here for the full listing of the series.

This is the fourth of what should be a few posts on the book Morals by Agreement, by David Gauthier.

--


The clip above is 'Opportunities' by the Pet Shop Boys. The chorus says, "I've got the brains, you've got the looks, let's make lots of money" and this is a pretty good summary of chapter 5 of Morals by Agreement.

Of course, in Gauthier's language this translates into, "we become aware of each other as potential co-operators in the production of an increased supply of goods, and this awareness enables us to realize new benefits"

Gauthier's argument is that, given the opportunity to work together to better (or not worsen) their position, people are rational to do so. I agree not to play my tuba late at night and you agree not to mow your lawn early in the morning. Or maybe we form a joint partnership (in order to make lots of money). Or maybe I agree to give you my wallet and you agree not to shoot me*, etc.

Now, I guess to most people it probably seems fairly obvious that people make agreements of this sort all the time and it makes sense to do so, but you have to remember that Gauthier is starting from his notion of the rational human as one that does nothing but maximize its own interest in the present, with no ability to work together with others in a cooperative fashion.

The problem is that, unlike the perfectly competitive market, where people could (per Gauthier's theory) reach optimal outcomes by strictly pursuing their self-interest, situations where cooperation would be beneficial are ones where the market has 'failed': simplistic pursuit of self-interest by all parties leads to sub-optimal results because it fails to consider superior 'joint strategies', in which the people in a situation choose their strategies together (by bargaining to reach agreement) rather than separately.

You can think of a basketball team that has a better chance of scoring if they follow the play their coach drew up for them than if they all just do their own thing and hope for the best.

Or if we think back to the original Prisoner's Dilemma, if the prisoners were able to make a binding agreement with each other not to confess, then this would open up a new, optimal (from the prisoners' standpoint) solution to their dilemma that was unavailable when they were only able to each maximize their own self-interest.

Gauthier argues that in this context, rational behaviour requires that people agree to constrain their self-interest maximization for a while in order to secure a share of the benefits of cooperation.

Gauthier explains that, in the bargaining phase where the spoils of (the future) cooperation are being divided up, people are still acting to maximize their self-interest as best they can under the circumstances. But the bargain that is reached is an agreement to not maximize self-interest while carrying out the terms of the bargain.

So, I maximize my self-interest as I push for the smallest concession possible when agreeing not to play my tuba late at night ('OK, I won't play after midnight, but I won't agree not to play between 11 and midnight', etc.), but when midnight strikes, I go against my self-interest by keeping the terms of the agreement rather than just going ahead and playing my tuba anyway.

This question of whether the rational Gauthier people will actually stick to their agreements is one of the stickiest that Gauthier faces and is the subject of chapter 6.

I should also mention that Gauthier spends much of the chapter explaining his bargaining theory, which holds that if people have an equal willingness and ability to drive a hard bargain, they will end up making an equal bargain (one in which everybody gains a roughly equal share of the spoils of cooperation).
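As a rough numerical illustration of what an 'equal bargain' means (my own toy example, not Gauthier's formal apparatus, which he develops under the name 'minimax relative concession'):

```python
# Two equally tough bargainers keep their no-agreement payoffs and split
# the cooperative surplus equally. Numbers below are made up.
def equal_bargain(alone_a, alone_b, joint_total):
    """Each party keeps what they could get alone; the surplus from
    cooperating is divided equally between them."""
    surplus = joint_total - alone_a - alone_b
    return alone_a + surplus / 2, alone_b + surplus / 2

# Working alone we'd make 10 and 20; the partnership makes 50,
# so there is a surplus of 20 to divide.
print(equal_bargain(10, 20, 50))  # (20.0, 30.0)
```

Note that 'equal' here means an equal share of the gains from cooperation, not an equal final total - the party with the better fallback position still ends up ahead.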


---
* You might think that the bargain where I give up my wallet in return for my life is not a fair one and should count differently than the others somehow - Gauthier agrees, but defers discussion of the bargaining position of the parties until chapter 7.


Wednesday, July 22, 2009

20. Morals By Agreement, Chapter 3: Strategy (Game Theory)

Note: This post is the twentieth in a series. Click here for the full listing of the series.

This is the second of what should be a few posts on the book Morals by Agreement, by David Gauthier.

In the last post, we covered chapter 2 of Morals by Agreement, where Gauthier sketched out his view of what it meant for people to behave 'rationally' in situations which didn't involve other people who might themselves be trying to act rationally and whose actions might depend on our actions and vice-versa.

So in chapter 3, Gauthier extends his model of rational behaviour to cover what he refers to as 'strategic interaction' - that is, situations where people are interacting with each other rather than acting on their own. Basically, in the terms I used earlier in this series, he is moving from rational actions to rational transactions.

The formal branch of knowledge that studies transactions is Game Theory. The most famous game in Game Theory is the Prisoner's Dilemma, which I introduced earlier in the series here.

For more background on game theory here is the Wikipedia entry on game theory and here is the excellent Stanford encyclopedia of philosophy entry on game theory.

When it comes to game theory, examples are the way to go to gain understanding.

Consider a simple game, where Harold and Kumar are trying to meet up at a local restaurant for a meal, but they are not in communication with each other. However, they both know that there are only two restaurants in town, White Castle and Black Castle. Furthermore, they know that Black Castle is closed.

                    Harold
                 White      Black
Kumar   White    [5,5]      [3,0]
        Black    [0,3]      [2,2]

The best outcome is if they both meet at the White Castle (5 for both). For both Harold and Kumar, the next best option is if they go to White Castle and the other person goes to Black Castle - they don't get to meet up, but at least they can eat. For both players, the third best is to meet at Black Castle and the worst option is to go to Black Castle while the other person has gone to White Castle (alone AND hungry).

Gauthier defines rational behavior in transactions, or 'strategic interaction' as follows:

A) Each person's choice must be a rational response (i.e. utility maximizing) to the choices she expects the others to make.
B) Each person must expect every other person's choice to satisfy A.
C) Each person must believe her choice and expectations to be reflected in the expectations of every other person.

So in the example above, Harold figures that if Kumar goes to White Castle, he (Harold) is better off going as well. But even if Kumar doesn't go to White Castle, Harold is still better off going to White Castle. So condition A sends Harold to White Castle. Condition B tells Harold that Kumar will follow a similar logic and also end up at the White Castle.

The outcome where Harold and Kumar both go to White Castle is what is known in game theory as an equilibrium outcome. What this means is that, given that both Harold and Kumar are choosing to go to White Castle, there is no reason for either of them to unilaterally change their choice. Compare that with the situation where Harold is going to White Castle and Kumar is going to Black Castle - this is not an equilibrium because in this situation, Kumar would be better off to change his strategy.
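The equilibrium reasoning above can be checked mechanically. Here is a small brute-force sketch (my own code, using the payoffs from the table, listed as Kumar's payoff then Harold's) that tests every outcome for whether either player could gain by unilaterally switching:

```python
from itertools import product

# Payoffs from the restaurant game: (Kumar's payoff, Harold's payoff)
payoffs = {
    ("White", "White"): (5, 5),
    ("White", "Black"): (3, 0),
    ("Black", "White"): (0, 3),
    ("Black", "Black"): (2, 2),
}
castles = ["White", "Black"]

def equilibria():
    """Outcomes where neither player gains by unilaterally switching."""
    found = []
    for kumar, harold in product(castles, castles):
        k_now, h_now = payoffs[(kumar, harold)]
        k_ok = all(payoffs[(alt, harold)][0] <= k_now for alt in castles)
        h_ok = all(payoffs[(kumar, alt)][1] <= h_now for alt in castles)
        if k_ok and h_ok:
            found.append((kumar, harold))
    return found

print(equilibria())  # [('White', 'White')] -- the only equilibrium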

Now let's look at a different type of game. Consider the question of what side of the road to drive on. For now, imagine that only two people, Adam and Eve, live on a road; they need to agree on which side to drive on, and they both own British cars designed for driving on the left.

                    Eve
                 Left         Right
Adam    Left     [2,2]        [-10,-10]
        Right    [-10,-10]    [1,1]

Note that there are two equilibrium outcomes in this game: one where both drive on the left and one where both drive on the right. Even though both Adam and Eve are better off if they both drive on the left, if for some reason they are currently both driving on the right, neither has an incentive to unilaterally change their strategy. Only by working together could they shift from the sub-optimal equilibrium to the optimal one. Unsurprisingly, this type of game is known as a coordination game, where the coordination needed has two parts: 1) making sure that both people pick the same outcome, and 2) making sure the equilibrium they end up in is the optimal one.
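Running a brute-force check on the driving game (again my own sketch, using the payoffs from the table above, listed as Adam's payoff then Eve's) confirms that there are two equilibria, one of which is better for both players:

```python
# Payoffs from the driving game: (Adam's payoff, Eve's payoff)
payoffs = {
    ("Left", "Left"): (2, 2),
    ("Left", "Right"): (-10, -10),
    ("Right", "Left"): (-10, -10),
    ("Right", "Right"): (1, 1),
}
sides = ["Left", "Right"]

# An outcome is an equilibrium if neither driver gains by switching alone.
eqs = []
for adam in sides:
    for eve in sides:
        a_now, e_now = payoffs[(adam, eve)]
        if all(payoffs[(alt, eve)][0] <= a_now for alt in sides) and \
           all(payoffs[(adam, alt)][1] <= e_now for alt in sides):
            eqs.append((adam, eve))
print(eqs)  # [('Left', 'Left'), ('Right', 'Right')]
```

Both equilibria are self-enforcing once reached, which is exactly why the players need coordination, not just individual rationality, to land on the better one.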

But what if we change the game slightly, so that Adam has a car designed for driving on the left, and he has always driven on the left, so he strongly prefers it. Meanwhile, Eve has a car designed for driving on the right, but she just got her license, so she is less attached to driving on one particular side.

                    Eve
                 Left         Right
Adam    Left     [5,2]        [-10,-10]
        Right    [-10,-10]    [2,3]

Again there are two equilibria, and Adam and Eve need to coordinate to make sure they drive on the same side of the road. But the situation is now complicated by the fact that Adam prefers the 'drive on the left' equilibrium while Eve prefers the 'drive on the right' equilibrium.

This now becomes a bargaining problem, one that has been much studied and argued over in the history of game theory. The reason I bring it up here is because Gauthier himself brings it up in chapter 3 - because it will be useful to him later on in the book.


Finally, I won't go over the Prisoner's Dilemma again, but it is worth noting (as Gauthier does) that in the Prisoner's Dilemma, the equilibrium that Gauthier's rules for rational behaviour lead to is different from the (Pareto) optimal outcome. In other words, if people pursue their own utility maximization, they will reach sub-optimal outcomes: there are possibilities to make people better off without making anyone worse off, but these possibilities are placed out of reach by people's self-interested behaviour.
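The gap between equilibrium and optimum is easy to verify with standard illustrative Prisoner's Dilemma numbers (expressed here as years NOT spent in jail, so bigger is better):

```python
# payoffs[(my_action, your_action)] = (my payoff, your payoff)
payoffs = {
    ("quiet", "quiet"): (3, 3),     # both stay silent: light sentences
    ("quiet", "confess"): (0, 5),
    ("confess", "quiet"): (5, 0),
    ("confess", "confess"): (1, 1), # where self-interest lands them
}

# Confessing strictly dominates for each prisoner...
assert payoffs[("confess", "quiet")][0] > payoffs[("quiet", "quiet")][0]
assert payoffs[("confess", "confess")][0] > payoffs[("quiet", "confess")][0]
# ...so both end up at (confess, confess), even though (quiet, quiet)
# makes both strictly better off: the equilibrium is not Pareto optimal.
assert all(q > c for q, c in zip(payoffs[("quiet", "quiet")],
                                 payoffs[("confess", "confess")]))
print("equilibrium (1, 1) is Pareto-dominated by (3, 3)")
```

This is the gap Gauthier's morality-as-constraint is meant to close: a binding rule against confessing would put the (3, 3) outcome back within reach.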

Gauthier will argue that morality consists of the constraints necessary to generate optimal outcomes for 'rational' people.
