The Waltz of Reason, page 28
Here is another example, dating back 200 years, to the time of Manchester Capitalism: Long before the first trade unions, workers spontaneously formed “sick clubs.” They paid into a common account to help each other out in case of illness. (An aside: there seems to be no kind soul, whether in Montenegro or in Manchester, who multiplies the contributions by three before sharing them out. This multiplication factor, however, mirrors the fact that someone in need, like the thunderstruck shepherd or the ailing mill hand, profits from the gift far more than the donor loses.)
A minor variant of the mutual aid game is the common good game: I can decide, as before, whether to contribute my $5 or not. The sum of all contributions is multiplied by some factor, say three again, and constitutes the common (or public) good. It is shared among all players, irrespective of whether they contributed or not. (The difference with the mutual aid game is that in the common good game, I receive a share from my own contribution in return.)
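The arithmetic behind the free rider's advantage can be sketched in a few lines of code. The stake of $5 and the factor of three are the ones used above; the group size of four is an assumption for illustration:

```python
def common_good_payoff(contributes, others_contributing, group_size,
                       stake=5, factor=3):
    """Payoff for one player in the common good game: the pot (all
    contributions times the factor) is shared equally among all
    players, contributors and free riders alike."""
    pot = factor * stake * (others_contributing + int(contributes))
    return pot / group_size - (stake if contributes else 0)

# In a group of 4 where the 3 others contribute:
print(common_good_payoff(True, 3, 4))   # 10.0
print(common_good_payoff(False, 3, 4))  # 11.25
```

Whatever the others do, free riding pays $1.25 more than contributing (the $5 stake minus the $3.75 that flows back from one's own contribution): the social dilemma in miniature.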
The common good game is a stylized version of many such “games” in real life. Here are some examples: Young couples may decide to take turns escorting their toddlers to the playground. Some of the parents may be free riders who regularly manage to skip their turn. Our sturdy ancestors may have joined forces for a mammoth hunt. The free riders were those who followed the maxim of never being closest to the mammoth. If all were to act like this, the hunt would never succeed. The defense of a fortification requires cooperation. A free rider hiding behind the others benefits from a success just as much as the others, but puts all at risk. The cleaning of a communal kitchen is a less adventurous pastime; but it, too, offers plenty of scope for a social dilemma.
The tragedy of the commons is proverbial. The commons are grazing land belonging to the whole village. These grounds were frequently overexploited and thereby ruined: for if a farmer sends an extra head of cattle to the commons, then its milk and meat benefit him alone, whereas the cost to the grassland is borne by all. Nowadays, there are not many commons left. The “common goods” include clean air, rich fishing grounds, and public transportation. They always offer scope for exploitation by free riders.
What happens in the game lab? Hundreds of experiments have studied mutual aid or common good games, in many variations. Frequently, players are not restricted to contributing either a full share or nothing, but can choose any amount between, say, $0 and $20. In the first round, the outcome is usually what one would expect from the donation game: some contribute more, some less, and the average amount is about half of the full contribution. From then on, round after round, the contributions almost invariably decline.
Are the players learning to be selfish? Are they imitating those who gain more, namely the free riders? Or are they simply fed up with being exploited?
This experiment has been repeated in many places (Copenhagen, Minsk, Samara, Chengdu, Riyadh, etc.). There is considerable geographic variability, interesting for students of ethnic prejudice. Yet the overall trend is clear: contributions decline round after round. The game offers less and less prospect of a gain. The social trap closes with a snap.
Retaliation and the War of All Against All
The remedy seems obvious. The free riders need to be punished. In economic experiments of the public good or mutual aid type, this can be achieved by a simple variation of the game.
Each round now consists of two phases. Phase one is the former game: players decide whether to contribute or not. Phase two offers the players an opportunity to punish the exploiters in their group. The free riders are sanctioned: they have to pay a fine. This fine does not land in the accounts of the players who penalized the free riders. On the contrary, those players must themselves pay a fee for imposing the punishment. Fees and fines are collected by the experimenters.
In the jargon of game labs, this type of sanctioning is named peer punishment: players impose penalties on the free riders, at a cost for themselves. Indeed, punishing someone is usually expensive, in real life: it costs time and energy, and comes at a risk, since punished players are apt to retaliate, rather than meekly conform. Sanctioning is a costly business, as we learn frequently from the political news.
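A minimal sketch of the two-phase accounting, with illustrative values for the fine and the fee (experiments typically make the fine larger than the fee):

```python
def peer_punishment_round(contributions, punish_matrix, stake=5,
                          factor=3, fine=3, fee=1):
    """One round of the common good game followed by peer punishment.

    contributions: list of booleans, one per player.
    punish_matrix[i][j]: True if player i punishes player j.
    Fees and fines simply vanish (collected by the experimenters).
    """
    n = len(contributions)
    pot = factor * stake * sum(contributions)
    payoffs = [pot / n - (stake if c else 0) for c in contributions]
    for i in range(n):
        for j in range(n):
            if punish_matrix[i][j]:
                payoffs[i] -= fee    # punishing is costly for the punisher
                payoffs[j] -= fine   # the punished player pays a fine
    return payoffs

# Three players; player 2 free rides and is punished by player 0:
print(peer_punishment_round([True, True, False],
                            [[False, False, True],
                             [False, False, False],
                             [False, False, False]]))  # [4.0, 5.0, 7.0]
```

Note that the lone punisher ends up with the smallest payoff, and the punished free rider still earns the most: a single sanction is not much of a deterrent.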
Despite these drawbacks, the effect of peer punishment is quite remarkable, as was shown in a much-touted game lab experiment run by Ernst Fehr and Simon Gächter. They had their subjects play, for the first six rounds, the usual public good game, without punishment. As a result, we see what we expect: in the first round, players invested on average some 50 percent of their game money in the common good. From then on, contributions declined, round after round.
Then, after six rounds, the players were offered the possibility to punish free riders. Immediately, contributions jumped up. They were larger than in the initial round—and this, even before the first punishment had been meted out. Better still, contributions increased in the following rounds (see Figure 12.4). In the end, almost everyone cooperated to the full, and hardly anyone needed to be punished.
Figure 12.4. Contributions to the public good game without and with punishment.
This result is surprising. Profit maximization should lead to a second-order social dilemma. Indeed, the effect of peer punishment benefits all players, by increasing the average contributions. But the cost of punishing free riders is borne by the individual punisher. Why not simply contribute to the common good, and leave the task of punishing free riders to the other players? (This strategy can obviously arise only if more than two players are involved in the game.) Whoever acts in such a way is a second-order free rider. If all players adopt this option, there will be no punishment, and consequently the first-order free riders—those who do not contribute to the public good—will take over. Indeed, they have nothing to fear.
One possible solution to the conundrum would be to also allow the punishment of second-order free riders. This, however, makes third-order free riding possible, and raises the specter of an infinite regress.
Let us leave these theoretical objections aside for the moment. It is well documented that in most experiments, many players are willing, and even eager, to engage in the costly punishment of free riders. Some do it with pleasure. One might suspect that they reckon with a long-term effect of their sanctions: they expect that free riders, once punished, will reform, and will thereafter sagely contribute in the following rounds. Yet such an expectation cannot explain everything. Some experiments are arranged in such a way that the groups are newly formed for each round. This ploy guarantees that players never meet their previous co-players, and the players are informed of it. They may possibly reform free riders by punishing them, but they know that they will never meet them again, and thus will never benefit from their own (costly) decision to punish. Yet they punish, and do so with passion. In most of the usual game lab experiments, boredom prevails: but introduce punishment, and the interest quickens perceptibly.
Q and A sessions after games of public good with punishment show that the motivation to reform free riders by penalties comes only second, at best. The foremost impulse is simply revenge. Players are irked at being exploited by free riders and want to retaliate. The cost of punishment plays a minor role. Revenge seems to be a very natural drive. It is irrational to a high degree. Small children kick the door they banged against.
Vindictiveness is usually viewed as base and destructive. However, the experiments by Fehr and Gächter suggest that it plays a positive role in an economy. It is remarkable, by the way, that vengeance is mostly described in economic terms, and indeed even as bookkeeping. “Wir rechnen noch ab!” (German: “We’ll settle accounts yet!”). “It’s payback time!” “Il va me payer cher!” (French: “He will pay me dearly!”). “Un règlement de compte” (French: “a settling of accounts”). Similar idioms occur in many languages.
The need to retaliate is obviously a deep-seated drive. “Revenge is sweet,” as they say. We enjoy revenge even vicariously, at second hand: countless films and novels deal with vengeance, and entertain the millions.
As any good Darwinist knows, pleasurable drives usually have some survival value. This makes one ask how we profit from vengefulness.
The most probable reason is that if it becomes known that we are prone to retaliate, others will think twice about slighting us. Just as with indirect reciprocity, reputation plays a key role in punishment. Anger and indignation are loud: they broadcast something. The lowliest gangster demands respect. You cannot treat me that way. I will not take this from you.
We are entering a problem zone here. In the early lab games on public good with punishment, one player sanctions another, and this is it. Basta. The punished player takes it literally sitting down. Such a situation is completely unnatural. The punished players are unlikely to meekly accept the sanction. They want to hit back. As soon as the rules allow for penalized players to retaliate, costly vendettas spring up, even in the anonymous, almost clinical environment of the game lab, where “to punish” means merely to reduce the modest sums of money on the players’ accounts. In real life, the spiral of destruction can be murderous.
As the philosopher John Locke noted in his Two Treatises of Government from 1689: “Such resistance [to punishment] many times makes the punishment dangerous, and frequently destructive, for those who attempt it.”
Figure 12.5. John Locke (1632–1704) feared passion.
Modern experimental economists were highly surprised, and some even scandalized, when they observed asocial punishment: exploiters punish cooperators, in retaliation or even as a preemptive measure, to intimidate them. Such reactions can indeed lead to a war of everyone against everyone. Let us listen to what Thomas Hobbes has to say:
[So that] in the nature of man, we find three principal causes of quarrel. First, competition; secondly, diffidence; thirdly, glory. The first makes men invade for gain; the second, for safety; and the third, for reputation.
What Hobbes names “glory” is the wish to be respected. “Diffidence” is fear, which leads to aggression and to preemptive strikes meant to forestall the enemy. Competition, finally, is based on selfishness. We find all these causes of quarrel in the game theory model. Selfishness undermines cooperation; fear leads to attack; the wish for respect prevents any yielding. Hobbes stresses that “war” does not consist only of actual battle, but of being ready for it. The mere preparedness is ruinous.
Hobbes continued:
In such condition there is no place for industry, because the fruit thereof is uncertain, and consequently no culture on earth; no navigation… no arts; no letters; no society; and, which is worst of all, continual fear, and danger of violent death; and the life of man, solitary, poor, nasty, brutish and short.
And short! What a blessing that the torture of such a life lasts only for a limited time.
To take the law into our own hands, as in peer punishment, means anarchy. On Earth, we find it only in those corners that authority cannot reach: in the legend-crusted Wild West, among nomads, in ill-run jails—or in the fiction that baroque philosophers agreed to call the “state of nature.”
To quote Locke and Two Treatises of Government again:
For everyone in that state [of nature] being both judge and executioner of the law of nature, men being partial to themselves, passion and revenge is very apt to carry them too far, and with too much heat, in their own cases.
Onward with Locke:
It is this makes them so willingly give up everyone his single power of punishing, to be exercised by such alone, as shall be appointed among them; and by such rules as the community, or those authorized by them to that purpose, shall agree on.
Here is the social contract. Players submit to an authority (a sheriff, a lord, a police force). This step too can be mimicked by a stylized game that is a variation on the mutual aid game.
Each round consists of three stages. In the first stage, players may contribute to a punishment pool, or not; in the second stage, players may contribute to the mutual aid funds (the common good), or not; in the third stage, finally, the free riders—namely those who failed to contribute to the punishment pool or the mutual aid funds—are punished. The punishment is all the more severe the better the punishment pool is endowed.
This game was first introduced by the Japanese psychologist Toshio Yamagishi. It works well, unsurprisingly enough: most players cooperate. The punishment pool is the equivalent of a police force. The better that force is equipped, the more likely free riders will be spotted. Players are required to pay for the police, and thus cover the cost of punishment up front, before the mutual aid game is even played, and hence before it is known whether there will be any free riders to be punished.
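A sketch of the three-stage accounting. The pool contribution of $2 and the fine schedule (two dollars of fine per dollar in the pool) are assumptions for illustration; what matters is that the fine grows with the pool's endowment:

```python
def pool_punishment_round(pool_contribs, good_contribs, stake=5,
                          factor=3, pool_cost=2, fine_per_unit=2):
    """One three-stage round: punishment pool, common good, sanctions.

    Anyone who skipped either the pool or the common good is fined;
    the fine grows with the pool's endowment, mimicking a
    better-equipped police force (all parameters are illustrative).
    """
    n = len(pool_contribs)
    pool = pool_cost * sum(pool_contribs)
    pot = factor * stake * sum(good_contribs)
    payoffs = []
    for paid_pool, contributed in zip(pool_contribs, good_contribs):
        p = pot / n
        if paid_pool:
            p -= pool_cost
        if contributed:
            p -= stake
        if not (paid_pool and contributed):   # free rider of first or second order
            p -= fine_per_unit * pool         # severity scales with the pool
        payoffs.append(p)
    return payoffs

print(pool_punishment_round([True, True, True], [True, True, True]))
# [8.0, 8.0, 8.0]
print(pool_punishment_round([True, True, False], [True, True, True]))
# [8.0, 8.0, 2.0]
```

Unlike peer punishment, the pool also catches the second-order free rider in the second example, who contributed to the common good but not to the police.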
This so-called pool punishment has considerable advantages, compared with peer punishment. It is more objective and less personal, thus making retaliation less likely. Moreover, it allows the spotting and punishment of the second-order free riders (those who contribute to the common good, but not to the punishing). However, pool punishment has a serious drawback. If all players cooperate, round after round, the police have nothing to do. Nevertheless, the police must be paid; and such a tax reduces the economic advantage of the mutual aid. By contrast, the costs of peer punishment arise only when needed. Moreover, establishing a punishment pool requires communication and coordination, whereas peer punishment needs no more than a lone, vengeful soul.
The Importance of Hunting Hares
Evolutionary game theory allows us to study cooperation and the social contract in mathematical models, which are simple thought experiments. Let us consider fictitious populations of players who can opt between various strategies. From time to time, a randomly chosen sample of players engages in a game. Players accumulate more or less payoff; the amount depends on their strategy, and on what the other members in their sample are doing. Occasionally, the players can adapt, by switching to another strategy, preferentially one that is doing better. Players interact only within their current sample, but they can imitate anyone in the community. Thus, the toy population evolves by social learning—a myopic, payoff-driven adaptation.
If the game is a mutual aid game, pure and simple, without punishment of any kind, then cooperation is doomed. It is just as doomed as in the two-player version, which is the donation game. Free riders always do better. They are imitated preferentially, and eventually make up the whole population.
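The social learning dynamics just described can be mimicked by a minimal imitation rule. The sketch below uses "imitate the better", one simple update rule among several in the literature; the payoff values are illustrative:

```python
import random

def social_learning_step(population, payoff_of):
    """A randomly chosen player compares her payoff with that of a
    random role model and copies the model's strategy if it earns
    more: myopic, payoff-driven imitation."""
    i, j = random.sample(range(len(population)), 2)
    if payoff_of(population[j]) > payoff_of(population[i]):
        population[i] = population[j]

# In the compulsory game, free riders always earn more, so
# imitation drives the contributors out:
random.seed(1)
population = ['contributor', 'contributor', 'free rider', 'free rider']
earnings = {'contributor': 5, 'free rider': 10}   # illustrative payoffs
for _ in range(500):
    social_learning_step(population, earnings.get)
print(population)   # free riders have taken over
```

Since a free rider never earns less than a contributor here, every imitation event points the same way, and the population ends up uniform.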
The plot line changes in a surprising way if the players in the sample are invited to play the mutual aid game, but are left free to pull out. They are not obliged to take part. They can decline, stand aside, and do something else instead, some activity whose payoff does not depend on others. Philosophy buffs will recognize that extra option as nothing other than hunting hares, in Rousseau’s parable.
With this third alternative, we have defined the so-called voluntary mutual aid game. The players selected in the random sample have three strategies at their disposal:
1. Don’t participate.
2. Participate and contribute.
3. Participate, but don’t contribute.
If you have chosen the third option, you are exploiting those who chose the second—the contributors. This is something that the nonparticipants—those who go for the first option—don’t do: they rely only on themselves. We assume that the payoff, for the nonparticipants, lies somewhere between the payoff obtained in the mutual aid game if all participants contribute, and the payoff if no one contributes, which is zero. A technical point must be added: a single would-be participant volunteering for the mutual aid game cannot play all by himself or herself, but must hunt hare. Mutual aid needs several participants, who each decide, independently, whether to contribute or not.
The three strategies in the voluntary mutual aid game cyclically supersede each other, in a way reminiscent of the Rock-Paper-Scissors game. As children across the world know, Rock beats Scissors, Scissors beats Paper, and Paper beats Rock.
In the same cyclic vein, a population of nonparticipants (1) will be invaded by contributors (2), who will be overcome by defectors (3), who will in their turn yield to nonparticipants (1) (Figure 12.6).
Indeed, if enough players are willing to participate and to contribute, they do well. More and more hare hunters will imitate them. They participate and contribute whenever they are offered an opportunity. Once enough of them are around, the free riders cash in on the suckers. With each additional free rider, however, participation becomes less alluring, for good and bad alike. In the end, those who don’t participate will do better. Nobody will want to play the mutual aid game any longer. This standstill persists until, by chance, a handful of players are sampled who want to participate and to contribute. Cooperation booms right away; but it takes only a short while until free riders spread. And so it goes on: long periods when nobody wants to play, interspersed with bursts of contribution, which quickly turn out to be bubbles because free riders undermine the game. Whenever one strategy dominates in the population, another is set to take over.
Figure 12.6. A Rock-Paper-Scissors cycle.
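The cycle can be checked directly on the payoffs. In the sketch below, the stake and factor are those of the earlier games; the loner payoff of 4 is an assumption (any value strictly between 0, the all-defector payoff, and 10, the all-cooperator payoff, produces the same cycle):

```python
def payoff(me, others, stake=5, factor=3, loner_payoff=4):
    """Payoff of the focal player in one voluntary mutual aid game.

    'loner' stands aside and hunts hare; the game takes place only
    if at least two players participate."""
    participants = [s for s in [me] + others if s != 'loner']
    if me == 'loner' or len(participants) < 2:
        return loner_payoff
    pot = factor * stake * participants.count('cooperator')
    return pot / len(participants) - (stake if me == 'cooperator' else 0)

# Each strategy outdoes the previous resident, Rock-Paper-Scissors style:
print(payoff('cooperator', ['cooperator', 'loner', 'loner']))  # 10.0 beats the loners' 4
print(payoff('defector', ['cooperator'] * 3))   # 11.25 beats the cooperators' 10.0
print(payoff('loner', ['defector'] * 3))        # 4 beats the defectors' 0.0
```

A lone volunteer also receives the loner payoff, since the game needs at least two participants, matching the technical point above.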
The voluntary mutual aid game can be viewed as a blend of the two best-known social dilemmas: the Stag Hunt and the Prisoner’s Dilemma. Put together, they weaken the springs of the social trap. Admittedly, long-term cooperation is not achieved. Yet, short-term bursts of cooperation recur.
Things improve even more when a punishment option is introduced into the game. Which form of punishment? As may be expected, peer punishment proves less stable than pool punishment, since it can be subverted by second-order free riders. But if the game is voluntary, free riders can always be overcome in the end. (See Figure 12.7.) This is in striking contrast to the compulsory game, where the free rider trap cannot be escaped. The voluntary aspect of the commitment (players participate because they hope to do better) is not a polite bow to democratic feelings: it is an essential ingredient of the strategic ploy.
