Monday, August 29, 2011

Cheaters and chumps: game theorists offer a surprising insight into the evolution of fair play

Since well before the time of Dostoyevsky, people have thought about crime, punishment, and their interconnections. Why punish society's miscreants? To change, reform, or rehabilitate them? To deter potential wrongdoers? To make the victims and punishers feel better? Research by Ernst Fehr and Simon Gachter, published in the January 10, 2002, issue of the eminent science journal Nature, shows an unappealing aspect of social behavior in action, as well as the unexpected good that can come of it.
Whether you are a diplomat or a negotiator, an economist or a war strategist or just an ordinary person navigating the shoals of everyday life, sometimes you have to decide whether to behave cooperatively with other individuals, be they partners, competitors, or outright opponents. The same necessity arises among certain social animals in the wild. Just to pick one example, classic work by Gerald Wilkinson, of the University of Maryland, has shown that female vampire bats are continually confronted with strategic choices. After drinking the blood of prey species (such as cattle), the females fly back to large communal nests, where they feed the baby bats by disgorging the blood into their mouths. The females must choose: Do they feed only their own young, their own plus those of close relatives, or everyone's? And should the decision depend on what all the other bats are doing?
These questions of altruism, reciprocity, and competition are grist for the mill in game theory, a branch of mathematics applied to human behavior. Participants in game-theory experiments play pared-down games, with varying degrees of communication among the players, and are given differing rewards for differing outcomes. Players must decide when to cooperate and when--to use a highly technical game-theory term--to "cheat." Game theory gets taught in all sorts of academic programs. And it turns out that social animals, even without M.B.A.'s, have often evolved strategies for deciding when to cooperate and when to cheat. According to Joan E. Strassmann, of Rice University, even social bacteria have evolved optimal strategies for stabbing each other in the back.
Suppose you have an ongoing game, a round-robin tournament that involves two participants playing against each other in each round. The rules of the game are such that if both cooperate with each other, they both get a reward. And if both cheat, they both do poorly. On the other hand, if one cheats and the other cooperates, the cheater gets the biggest possible reward, and the cooperator loses big-time. Another condition is that the players in the tournament can't communicate with one another and therefore cannot work out some sort of collective strategy. Given these constraints, the only logical course is to avoid being a sucker and to cheat every time. Now suppose some players nonetheless figure out methods of cooperating. If enough of them do so--and especially if the cooperators can somehow quickly find one another--cooperation would soon become the better strategy. To use the jargon of evolutionary biologists who think about such things, it would drive noncooperation into extinction.
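For readers who like to see that structure laid bare, here is a minimal sketch in Python of the kind of payoff scheme the paragraph describes. The specific numbers are illustrative assumptions of mine, not values from the article; all that matters is the ordering (cheating a cooperator pays best, mutual cooperation next, mutual cheating worse, being the lone cooperator worst), which is what makes cheating the "logical" move when nobody can coordinate.

    # A minimal sketch of the payoff structure described above.
    # The numbers are illustrative assumptions; only their ordering matters.
    PAYOFFS = {
        # (my move, opponent's move) -> my payoff
        ("cooperate", "cooperate"): 3,   # both rewarded
        ("cheat",     "cheat"):     1,   # both do poorly
        ("cheat",     "cooperate"): 5,   # cheater gets the biggest reward
        ("cooperate", "cheat"):     0,   # the cooperator loses big-time
    }

    def payoff(my_move: str, their_move: str) -> int:
        """Return my payoff for one round of the game."""
        return PAYOFFS[(my_move, their_move)]

    # With no communication, cheating dominates: whatever the opponent does,
    # the cheater comes out ahead of the cooperator.
    for their_move in ("cooperate", "cheat"):
        assert payoff("cheat", their_move) > payoff("cooperate", their_move)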
Get cooperation going among a group of individuals, and the group is eventually going to be in great shape. But whoever starts that trend (the first to spontaneously introduce cooperation) is going to be mathematically disadvantaged forever after. This might be termed the what-a-chump scenario. In an every-bacterium-for-himself world, when one addled soul does something spontaneously cooperative, all the other bacteria in the colony chortle, "What a chump!" and go back to competing--now one point ahead of that utopian dreamer. In this situation, a random act of altruism doesn't pay.
Yet systems of reciprocal altruism do emerge in various social species, even among us humans. Thus, the central question in game theory is: What circumstances bias a system toward cooperation?
One well-studied factor that biases toward cooperation is genetic relatedness. Familial ties are the driving force behind a large proportion of cooperative behaviors in animals. For example, individuals of some social insect species display such an outlandishly high degree of cooperation and altruism that most of them forgo the chance to reproduce and instead aid another individual (the queen) to do so. The late W.D. Hamilton, one of the giants of science, revolutionized thinking in evolutionary biology by explaining such cooperation in terms of the astoundingly high degree of relatedness among an insect colony's members. And a similar logic runs through the multitudinous, if less extreme, examples of cooperation among relatives in plenty of other social species, such as packs of wild dogs that are all sisters and cousins and that regurgitate food for one another's pups.
Another way to jump-start cooperation is to make the players feel related. This fostering of pseudokinship is a human specialty. All sorts of psychological studies have shown that when you arbitrarily divide a bunch of people into competing groups (the way kids in summer camp are stuck into, say, the red team and the blue team), even when you make sure they understand that their grouping is arbitrary, they'll soon begin to perceive shared and commendable traits among themselves and a distinct lack of them on the other side. The military exploits this tendency to the extreme, keeping recruits in cohesive units from basic training to frontline battle and making them feel so much like siblings that they're more likely to perform the ultimate cooperative act. And the flip side, pseudospeciation, is exploited in those circumstances as well: making the members of the other side seem so different, so unrelated, so un-human, that killing them barely counts.
One more way of facilitating cooperation in game-theory experiments is to have participants play repeated rounds with the same individuals. By introducing this prospect of a future, you introduce the potential for payback, for someone to be retaliated against by the person she cheated in a previous round. This is what deters cheaters. It's why reciprocity rarely occurs in species without cohesive social groups: no brine shrimp will lend another shrimp five dollars if, by next Tuesday, when the loan is to be repaid, the debtor will be long gone. And this is why reciprocity also demands a lot of social intelligence--if you can't tell one brine shrimp from another, it doesn't do you any good even if the debtor is still around next Tuesday. Zoologist Robin Dunbar, based at University College London, has shown that among the social primates, the bigger the social group (that is, the more individuals you have to keep track of), the larger the relative size of the brain. Of related interest is the finding that vampire bats, which wind up feeding one another's babies in a complex system involving vigilance against cheaters, have among the largest brains of any bat species.
An additional factor that biases toward cooperation in games is "open book" play--that is, a player facing someone in one round of a game has access to the history of that opponent's gaming behavior. In this scenario, the same individuals needn't play against each other repeatedly in order to produce cooperation. Instead, in what game theorists call sequential altruism, cooperation comes from the introduction of reputation. This becomes a pay-it-forward scenario, in which A is altruistic to B, who is then altruistic to C, and so on.
So game theory shows that at least three things facilitate the emergence of cooperation: playing with relatives or pseudorelatives, repeated rounds with the same individual, and open-book play. And this is where Fehr and Gachter's new study, a "public goods experiment," comes in. The authors set up a game in which all the rules seemed to be stacked against the emergence of cooperation. In a "one-shot, perfect-stranger" design, two individuals played each round, and while there were many rounds to the game, no one ever played against the same person twice. Moreover, all interactions were anonymous: no chance of getting to know cheaters by their reputations.

Here's the game. Each player of the pair begins with a set amount of money, say $5. Each puts any part or all of that $5 into a mutual pot, without knowing how much the other player is investing. Then a dollar is added to the pot, and the sum is split evenly between the two. So if both put in $5, they each wind up with $5.50 ($5 + $5 + $1, divided by 2). But suppose the first player puts in $5 and the second holds back, putting in only $4? The first player gets $5 at the end ($5 + $4 + $1, divided by 2), while the cheater gets $6 ($5 + $4 + $1, divided by 2--plus that $1 that was held back). Suppose the second player is a complete creep and puts in nothing. The first player has a loss, getting $3 ($5 + $0 + $1, divided by 2), while the second player gets $8 ($5 + $0 + $1, divided by 2--plus the $5 held back). The cheater always prospers.
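If you want to check the arithmetic, here is a minimal sketch in Python of the payoff rule exactly as described above. The function and variable names are my own; the numbers it prints are the ones worked out in the paragraph.

    # Payoff arithmetic for the game as described: each player gets half of
    # (both contributions plus the $1 bonus), plus whatever was held back.
    ENDOWMENT = 5.0  # each player starts with $5
    BONUS = 1.0      # a dollar is added to the pot before the even split

    def payoffs(contribution_a: float, contribution_b: float):
        """Return (player A's payoff, player B's payoff) for one round."""
        share = (contribution_a + contribution_b + BONUS) / 2
        return (share + (ENDOWMENT - contribution_a),
                share + (ENDOWMENT - contribution_b))

    print(payoffs(5, 5))  # (5.5, 5.5) -- both cooperate fully
    print(payoffs(5, 4))  # (5.0, 6.0) -- the mild cheater comes out ahead
    print(payoffs(5, 0))  # (3.0, 8.0) -- the complete creep prospers most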
But here's the key element in the game: Players make their investment decisions anonymously, but once the decisions are made, they find out the results and discover whether the other player cheated. At this point, a wronged player can punish the cheater. You can fine the cheater by taking away some money, as long as you're willing to give up the same amount yourself. In other words, you can punish a cheater if you're willing to pay for the opportunity.
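Continuing the sketch above, here is the punishment rule as the article states it: every dollar of fine costs the punisher a dollar too. The function name and the $2 fine in the example are arbitrary illustrations of mine.

    def punish(punisher_payoff: float, cheater_payoff: float, fine: float):
        """Deduct `fine` from the cheater, at an equal cost to the punisher."""
        return punisher_payoff - fine, cheater_payoff - fine

    # In the worst case above, the wronged player ended the round with $3 and
    # the complete creep with $8. Spending $2 on revenge leaves them at $1 and $6.
    print(punish(3.0, 8.0, 2.0))  # (1.0, 6.0)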
The first interesting finding is that cooperation--which in the narrowly defined realm of this particular game means simply the steady absence of cheating--emerges even with the one-shot, perfect-stranger design. Cheaters stop cheating when punished.

Now comes the really interesting part. The authors showed that everyone jumps at the chance to punish the cheater, even when it means that the punisher will incur a cost. And remember the one-shot, perfect-stranger design: punishing brings no benefit to the punisher. Because the two players never play together again, there's no possibility that punishment will teach the cheater not to mess with you. And because of the anonymous design, the opportunity to punish doesn't warn other players about the cheater. Embedded in the open-book setting, by contrast, is an incentive to pay for the chance to conspicuously punish: you hope that other players do the same, thereby putting the mark of Cain on an untrustworthy future opponent. And various social animals will pay a great deal, in terms of energy expenditure and risk of injury, to punish open-book cheaters (one way to encourage this in an open-book world is to use the approach of certain military academies whose honor codes punish those who fail to punish cheaters). But here the act of punishing is as anonymous as was the act of cheating.
In Fehr and Gachter's game, no good can come to the punisher from being punitive, but people avidly do it anyway. Why? Simply out of the desire for revenge. The authors show that the more flagrant the cheaters are (in terms of how disproportionately they have held back their contributions), the more others will pay to punish them. This is true even of newly recruited players, unsavvy about any of the game's subtleties.
Think about how weird this is. If people were willing to be spontaneously cooperative even if it meant a cost to themselves, this would catapult us into a system of stable cooperation in which everyone profits. Think peace, harmony, Lennon's "Imagine" playing as the credits roll. But people aren't willing to do this. Establish instead a setting in which people can incur costs to themselves by punishing cheaters, in which the punishing doesn't bring them any direct benefit or lead to any direct civic good--and they jump at the chance. And then, indirectly, an atmosphere of stable cooperation just happens to emerge from a rather negative emotion: desire for revenge. And this finding is particularly interesting, given how many of our societal unpleasantries--perpetrated by the jerk who cuts you off in traffic on the crowded freeway, the geek who concocts the next fifteen-minutes-of-fame computer virus--are one-shot, perfect-stranger interactions.
People will pay for the chance to punish, but not to do good. If I were a Vulcan researching social behavior on Earth, this would seem to be an irrational mess. But for a social primate, it makes perfect, if ironic, sense. Social good emerges as the mathematical outcome of a not particularly attractive social trait. I guess you just have to take what you can get.
Robert Sapolsky is a professor of biology and neurology at Stanford University and author of A Primate's Memoir (Scribner, 2001).
COPYRIGHT 2002 Natural History Magazine, Inc.
