Introduction

Human Altruism and Ultrasociality

“Are humans egoists or altruists?” “What is their motivation for cooperation with other humans?” “Do they always pursue their selfish interests, or are they, at least sometimes, considerate of other people’s needs?”

These questions have posed a perennial challenge for philosophers. Prominent answers include Hobbes’ pessimistic view of the state of nature: “a war […] of every man against every man” [1] – the exact opposite of cooperation. Locke, on the other hand, argued that the state of nature is governed by reason, which should always lead us to realize that we are God’s children and therefore should not harm each other, as doing so would mean harming God’s own belongings [2]. Locke’s Christian stance was later opposed by Rousseau, who hypothesized that in the state of nature humans would mostly evade each other and were only forced to live together in modern societies; this breeds envy and pride, because humans begin comparing themselves to others as soon as they live too closely together [3]. Hume, finally, claimed that humans cannot be understood except as social beings [4]. The state of nature, he claims, is that of groups of rational human beings having to solve problems of coordination and cooperation, for which they developed, step by step, institutions such as law and justice.

In the following, we present contemporary approaches to the question of why humans cooperate so frequently. We restrict our view to two major domains: evolutionary biology and experimental (or behavioral) economics. These disciplines provide empirically informed perspectives on how human cooperativeness might have evolved and in which forms it appears in modern humans.

Today, countless empirical studies show that humans – unlike other animals [5] – cooperate willingly in a wide variety of situations. Sometimes, they even give when they can expect no reward whatsoever. Among the most prominent experimental settings in which humans display altruism – i.e., behavior with a negative net balance for the acting person – are dictator and ultimatum games.

In a dictator game, a first subject – the dictator – is given an amount of money and told that there is a second person – the receiver – to whom they can give any fraction of the money they just received. The receiver will be informed of the dictator’s decision, but has no possibility of reacting to that decision at all. Dictators are informed that the receiver will not be given any other information – particularly not about who the dictator is – so that there is absolutely no possibility of reciprocation.

Theories which understand humans as rational utility maximizers – such as the classical homo economicus approach (see e.g., [6, 7]) – predict that in this situation, the only rational option for dictators is to keep the money entirely for themselves because any amount they give to the receiver is lost, with no prospect of any possible future gain resulting from their benevolence.

And yet, people of all kinds of cultural backgrounds readily share some of the money they received as dictators with anonymous, unknown, and absent receivers [8]. Although the proportion of money shared varies with different cultural backgrounds, the phenomenon of altruistic sharing seems ubiquitous.

Ultimatum games are set up like the dictator game, differing only in the receiver’s option to either accept or reject the fraction of money offered. If they accept, the money is split in exactly the way the first person offered. If they reject the offer, both subjects receive nothing. Here again, rational choice theories predict different behavior than is actually observed. It would be rational for the second person to accept any offer greater than zero because “a bird in the hand is worth two in the bush” – receiving any amount greater than zero is better than getting nothing, which is the only other option.

Yet again, throughout all cultures studied, many subjects reject offers they perceive as too low [8]. In Western cultures, for example, these are offers below roughly 25% of the endowment. By rejecting, subjects altruistically punish those who make offers below a threshold of what is perceived as fair. The exact value of this threshold again varies between cultures and individuals [9].

It is our everyday experience that the people with whom we have anonymous one-shot interactions usually do not try to betray or cheat us. Instead, we frequently observe acts of kindness and generosity. We are used to this, although costly acts of genuine altruism still tend to astonish us. From a theoretical point of view, altruism is even more astounding. Many theorists through the ages have conceived of our world – for human and nonhuman animals alike – as an endless competition for resources (e.g., [1, 10, 11]). Whoever manages to attain stable, exclusive, and secure access to more valuable goods, services, resources, mates, etc. than others is able to shape the future of the group, i.e., to spread ideas, norms, values, and – most importantly in the long run – genes.

Biological Perspectives

The measure of success from an evolutionary perspective is fitness. Genes that frequently increase the fitness of their carrier in comparison to the carrier’s competitors are – on average (!) – copied more frequently into the next generation and thus slowly spread through the population. Note that fitness is always a relative measure: fitness of X in comparison to Y under circumstances Z. Therefore, we can speak of adaptive, nonadaptive, and dysfunctional (genetic) traits. Trait X may well be an adaptive solution to a problem posed by circumstances Z1 while being nonadaptive (neutral to fitness) or even dysfunctional (fitness reducing) under circumstances Z2 or Z3. Keeping this in mind, we easily understand that behavioral traits promoted by natural selection will seldom be inflexible. Instead, behavioral adaptations – just like all other genetically heritable traits – mostly come as norms of reaction to certain frequently encountered problems posed by a species’ environment of evolutionary adaptedness (EEA).

Now, why is altruistic behavior astonishing from the evolutionary perspective? In many situations where two unrelated organisms could cooperate for a common benefit, they face a classical prisoner’s dilemma: they could create fitness benefits for both of them if they cooperated, but each of them could gain even greater fitness benefits by defecting while the other cooperated. In game-theoretical terms, defection is the dominant strategy in the prisoner’s dilemma. Even if a population of cooperators somehow appeared, it would quickly be invaded by defectors as soon as a single individual switched to a defective strategy, for example, through a small genetic mutation. Unconditional cooperation does not represent an evolutionarily stable strategy (ESS; [12]). Evolution, it seems, imposes restrictions that promote “genetic egoism” – i.e., defective strategies for fitness-relevant, competitive situations with a prisoner’s dilemma structure.
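
To make the dilemma concrete, here is a minimal sketch in Python (the payoff values are ours and purely illustrative; any values with temptation > reward > punishment > sucker’s payoff lead to the same conclusion):

```python
# One-shot prisoner's dilemma with illustrative payoffs (T > R > P > S).
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker's payoff

# payoff[(my_move, their_move)] from the row player's perspective
payoff = {
    ("C", "C"): R, ("C", "D"): S,
    ("D", "C"): T, ("D", "D"): P,
}

# Defection strictly dominates: whatever the partner does, "D" pays more.
for their_move in ("C", "D"):
    assert payoff[("D", their_move)] > payoff[("C", their_move)]
print("Defection is the dominant strategy in a single encounter.")
```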

Kin Selection

Of course, this is only one part of the evolutionary picture. There are several ways in which different sorts of cooperative strategies can evolve even under the restrictions of genetic egoism.

First, genetically related individuals have been excluded from our short account above. Hamilton [13–15] and others [16] pointed out that calculating an individual’s fitness must include not only that particular individual’s own reproductive success (its direct or Darwinian fitness) but also the reproductive success of its genetic relatives (its indirect fitness). This is achieved by defining inclusive fitness = direct fitness + indirect fitness. Direct fitness equals the reproductive success of an individual. Indirect fitness is given by the reproductive success of its genetic relatives, weighted by the respective coefficient of relatedness r, which, for example, is 0.5 for parent-offspring relationships or full siblings, 0.25 for grandparent-grandchild relationships or half siblings, etc. Thus, we get (Fig. 15.1).

Fig. 15.1

$$W_X^{\text{inclusive}} = R_X + \sum_i r_i \, R_i$$

The inclusive fitness of individual X is given as the reproductive success of X (R_X) plus the weighted sum of the reproductive success of X’s relatives (R_i)

This definition of fitness as inclusive fitness explains why seemingly altruistic acts among relatives can be understood as selfish acts from the genes’ point of view. A strategy can promote an individual’s fitness by increasing either its direct or its indirect fitness. Whenever a strategy leads to a gain in indirect fitness that is greater than its cost (its reduction of direct fitness), natural selection will favor that strategy. This relation is expressed in Hamilton’s famous inequality (Fig. 15.2).

Fig. 15.2

$$r \cdot B > C$$

Hamilton’s rule. B is the benefit a strategy yields for an individual’s kin, r is the coefficient of relatedness, and C is the cost of the strategy to the individual

Whenever this inequality is satisfied, kin selection may guide an evolutionary process. A prominent example of a strategy which evolved in this manner is the so-called “helper at the nest” behavior. In many species, in meager times, some of the offspring stay with their parents to help raise their siblings while never reproducing themselves. This behavior is obviously very costly in terms of direct fitness for the helpers, but since their siblings, who usually carry about half of the helper’s genes, go on to reproduce, it is nevertheless adaptive: the helpers “make the best out of a bad situation.” In richer times, though, when helpers have good chances of achieving higher fitness by reproducing on their own, their behavior changes, just as Hamilton’s rule predicts [17].
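
As a minimal illustration of Hamilton’s rule (all numbers invented for the example), one can check when helping at the nest pays in terms of inclusive fitness:

```python
def helping_is_adaptive(r: float, benefit: float, cost: float) -> bool:
    """Hamilton's rule: helping kin is favored when r * B > C."""
    return r * benefit > cost

# Meager times: forgoing 1 offspring of one's own (C = 1) enables 3 extra
# siblings (B = 3), each sharing half of the helper's genes (r = 0.5).
print(helping_is_adaptive(r=0.5, benefit=3, cost=1))  # True -> stay and help
# Richer times: independent breeding becomes more valuable, say C = 2.
print(helping_is_adaptive(r=0.5, benefit=3, cost=2))  # False -> breed instead
```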

Hamilton’s concept of kin selection can explain cooperation and even altruism between genetically related individuals by reducing it to individual fitness calculations. At least in the human case, however, we observe broader forms of cooperation than this. So are there evolutionary scenarios in which not only relatives but also genetically unrelated individuals can individually benefit from cooperation? This question is especially interesting in the human case, because current demographic results from anthropology lead to the conclusion that our ancestral subsistence groups of hunter-gatherers indeed cooperated extensively among nonkin [18]. Long-term reciprocity within groups composed of kin and nonkin is the rule, not the exception. Since humans very likely lived as hunter-gatherers for most of the (evolutionarily relevant) time, these results are highly relevant for all explanations of cooperative behavior.

Mutualism

The most obvious scenario in which cooperation can evolve is when cooperation yields greater benefits than defection. Defection is not always the dominant strategy; cooperation can be dominant in situations which do not have the structure of a prisoner’s dilemma. In game theory, these are called “win-win games” (see, e.g., [19]). Here both interaction partners benefit from (unconditional) cooperation in terms of direct fitness, simply because defection would be costlier than cooperation. This form of cooperation is called mutualism [20]. A simple example of such situations among nonhuman animals is the formation of groups and herds: in a herd, every individual reduces its average risk of predation. Another example is male lions, which form small coalitions to take over prides of females. Every male in such a coalition increases its chance of reproduction through cooperation; none of them would be better off alone.

Despite such clear-cut situations in which no one can achieve a benefit by defecting, there are more intricate scenarios which can lead to cooperation among nonrelatives. One class of such scenarios is represented by biological markets. Here, although both parties can benefit from cooperation, both parties also have incentives for defection and even deception. In game-theoretical terms, these would be “trust games,” “stag hunts,” and other coordination games. Biological markets exist where individuals can choose from a group of potential social or sexual interaction partners. Adaptations to the nontrivial problems posed by such freedom of choice encompass abilities to assess potential partners and one’s own “current market value,” as well as the capacity to weigh current options against future prospects [20]. Examples of biological markets [21] include the relationship of cleaner fish and reef fish [22], chimpanzees’ exchange of grooming and other services [23], and of course many aspects of the various mating systems – bonobos, for example, trade food for sex [24].

To sum up, mutualism can evolve rather easily, and has indeed evolved frequently, because all interaction partners benefit from unconditional cooperative strategies. In biological markets, cooperation can evolve when problems of coordination are solved. To achieve this, strategies are needed that use mechanisms and heuristics for detecting and analyzing both the risks and the opportunities of cooperation. A huge variety of such strategies has evolved in nonhuman animals.

Reciprocity

When the appropriate game-theoretical model for social interactions changes from coordination games to the class of genuine dilemmas, called “tragic games” [19], cooperative strategies still – at least theoretically – have a chance to evolve. Trivers [25] proposed a model of reciprocally altruistic strategies which can achieve a net fitness benefit in a “society” of strategies similar to themselves. Probably the most prominent theoretical study on this issue is Axelrod’s computer-based tournament, in which various strategies for playing the repeated prisoner’s dilemma competed with each other [26]. The striking result was that tit-for-tat (TFT), a clear-cut, simple, and cooperative strategy, outcompeted all its rivals, although most rival strategies were noncooperative and hence presented an unfavorable environment for “nice” strategies. TFT always cooperates in its first move with a new interaction partner. In subsequent encounters, it simply mirrors the partner’s choice from their previous interaction. In this manner, TFT “rewards” cooperation by continuing to cooperate and “punishes” defection by defecting on the next occasion. TFT’s main advantages are: (1) it cannot be exploited by defective strategies, and (2) whenever two instances of TFT or any other cooperative strategies meet, they reap the cooperative optimum. Thus, TFT can thrive in a variety of social environments and may even become evolutionarily stable, meaning that under certain circumstances no strategy can invade a population of TFT players (see [26] for details and proofs).
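
The following sketch (not Axelrod’s original tournament code; payoffs are illustrative) shows both advantages in a repeated prisoner’s dilemma:

```python
# Payoffs (mine, theirs) for each pair of moves, with T > R > P > S.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, other_history):
    # Cooperate on the first move, then mirror the partner's last move.
    return "C" if not other_history else other_history[-1]

def always_defect(own_history, other_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): the cooperative optimum
print(play(tit_for_tat, always_defect))  # (9, 14): TFT loses only round one
```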

Although there are strong theoretical arguments underlining the power of TFT (and some improved versions of TFT, e.g., “contrite tit-for-tat”; [27]), there is still little evidence that reciprocal altruism has actually evolved in nonhuman animals [20]. TFT-like strategies are cognitively demanding. They require at least the capacities to (1) recognize interaction partners, (2) remember the behavior of interaction partners over time, and (3) control momentary affective impulses in order to achieve a later goal (see also [28]). In addition, TFT can only flourish when the probability of meeting interaction partners again is sufficiently high. These conditions, it seems, are not met frequently in nature. A simpler conditional strategy, “Pavlov,” which partially overcomes these problems of TFT, has been investigated by Nowak and Sigmund [29]. Pavlov, or “win-stay, lose-shift,” continues to play one option as long as it leads to success and switches to the alternative as soon as “unsatisfactory” results, i.e., low payoffs, are obtained. This strategy is very robust and even able to outcompete TFT, because it can exploit unconditional cooperators and does not run into defective “dead ends” like TFT does when one interaction partner accidentally defects.
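
A sketch of the Pavlov rule under the same illustrative payoffs; its recovery from a single accidental defection is what lets it escape TFT’s defection “dead end”:

```python
# My payoff for each (my_move, their_move) pair; same values as above.
MY_PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def pavlov(own_history, other_history):
    """Win-stay, lose-shift: repeat after R or T, switch after P or S."""
    if not own_history:
        return "C"
    last_payoff = MY_PAYOFF[(own_history[-1], other_history[-1])]
    if last_payoff >= 3:                   # "satisfactory" payoff: win-stay
        return own_history[-1]
    return "D" if own_history[-1] == "C" else "C"   # lose-shift

# After one player accidentally defects, two Pavlov players pass through a
# single round of mutual defection (both earn P, a "loss") and then both
# shift back to cooperation. Against an unconditional cooperator, however,
# an accidental defection earns T (a "win"), so Pavlov keeps defecting and
# exploits its partner - the two properties described above.
```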

The major constraint on the evolution of reciprocally cooperative strategies is that they require a sufficiently high probability of encountering their interaction partners again. It can be proved that this probability must be greater than the cost-to-benefit ratio of cooperating in order for direct reciprocity to have an evolutionary chance – see [30]. For modern human societies, this requirement is not met frequently: we commonly face so-called “one-shot anonymous” encounters, but behave cooperatively in these as well. Therefore, direct reciprocity can only be regarded as a partial solution to the puzzle of human cooperativeness.
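
Stated as a formula (in the notation common in this literature: w the probability of a further encounter, b the benefit and c the cost of a cooperative act), direct reciprocity can be favored only if

$$w > \frac{c}{b}$$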

Green-Beard Altruism

Another explanation of the evolution of altruistic behavior uses slightly more intricate preconditions than the TFT-reciprocity approach. Imagine a population in which altruists have the ability to recognize other altruists and regularly band together with them for cooperative enterprises which benefit all members of those subgroups dominated by altruism. Dawkins labeled such a mechanism – a genetic trait recognizing and preferentially treating copies of itself in other phenotypes – the “green-beard effect” [31]. An additional assumption is that members of these altruist groups reap higher fitness benefits than the members of groups in which a majority of individuals act selfishly. Under these circumstances, the proportion of altruists in a population can grow to a fairly high level, although altruists are regularly exploited when they meet nonaltruists. This scenario has been proposed by Sober and Wilson [32] and has received much criticism ever since, mostly because they claimed to have made a case for the existence of a group selection mechanism.

But, as Gildenhuys [33] clarifies, if we understand the ability to form groups of biased composition by recognizing others with similar prosocial tendencies as an individual-level trait – which it is – then we can use this model to understand how altruism can stabilize on the population level while being exploited on the individual level. As long as bands of altruists regroup from time to time and have much higher rates of reproductive success than other groups, altruistic traits can spread in a population – even if, within the altruist-biased groups, a minority of egoists free rides at the altruists’ expense, so that the proportion of altruists in those groups slowly decreases.
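
A minimal numerical sketch of this dynamic (all numbers invented for illustration) shows how the altruist share can fall within every single group and yet rise in the population as a whole:

```python
# Two groups, (altruists, egoists): one biased toward altruists, one not.
groups = [(90, 10), (10, 90)]

def reproduce(altruists, egoists):
    # Altruists pay a cost (0.5) but raise everyone's fecundity in
    # proportion to their local share; egoists enjoy the benefit for free.
    n = altruists + egoists
    benefit = 3.0 * altruists / n
    return altruists * (1.0 + benefit - 0.5), egoists * (1.0 + benefit)

new_groups = [reproduce(a, e) for a, e in groups]
before = sum(a for a, e in groups) / sum(a + e for a, e in groups)
after = sum(a for a, e in new_groups) / sum(a + e for a, e in new_groups)

for (a0, e0), (a1, e1) in zip(groups, new_groups):
    print(f"within-group altruist share: {a0/(a0+e0):.2f} -> {a1/(a1+e1):.2f}")
print(f"population altruist share:   {before:.2f} -> {after:.2f}")
# Within each group the altruist share falls, yet the population-wide share
# rises (0.50 -> 0.66), because the altruist-biased group grows much faster.
# Periodic regrouping must then restore the biased composition.
```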

Since we do not know whether the preconditions for this evolutionary path are met in our species or in others (see “Assortment”), green-beard altruism remains a theoretically possible explanation for the evolution of human cooperativeness. One of the most important preconditions is that defectors must be unable to grow convincing imitations of these green beards (the recognizable indicators of the altruistic trait) and thus invade cooperative groups. In consequence, green beards must be forgery-proof, honest signals in order to guide the evolution of cooperativeness.

Handicap Altruism/Costly Signaling

Another evolutionary mechanism, based on signaling, may play an important role [34]: altruistic behavior might not be directly fitness enhancing in itself. Rather, it might be a publicly displayed signal of a hidden quality (e.g., parental skill) of those individuals – the senders – who act altruistically.

The classical example of the logic of the handicap principle is the peacock’s tail [35]. Darwin himself explained its evolution through sexual selection – citing increased female preference for males with impressive tails as the proximate cause – but could not give an ultimate explanation for the hens’ preference. Almost exactly 100 years later, Zahavi argued that impressive, brilliant tail plumes pose a self-inflicted handicap for males and are thus a costly and honest signal of their genetic quality, making it ultimately profitable for females to use them as a criterion for mate choice: only parasite-free peacocks are able to produce such splendid colors, which indicates a good immune system, which in turn indicates health and thus fitness.

Costly signals show that the sender can afford their costs – they pose a self-inflicted handicap. The signal indicates clearly that the sender has a surplus of resources. Applied to cooperation, this logic results in the simple conclusion that you can only afford to give if you have enough to give. Giving and sharing in this sense are both costly and forgery-proof, or “honest.” These honest signals then benefit their senders indirectly, because the senders become more attractive future interaction partners for those receiving the signals. Thus, the senders indirectly enhance their fitness through publicly displayed altruistic behavior, because they can count on reaping future benefits from their increased attractiveness as social partners and mates.

A thorough theoretical application of the handicap principle to cooperativeness has been carried out by Gintis et al. [34]. They provide a detailed game-theoretical analysis of the relatively broad conditions under which handicap altruism can prosper and stabilize in a population. It is remarkable that – to our knowledge – the potential explanatory power of this approach has not led to more experimental investigation in this direction. One of the few commonly reported observations of a possible influence of signaling on cooperativeness is the increased willingness to give when others are watching (e.g., [36]; see also “Reputation” and [37]).

Cultural Group Selection

Yet another evolutionary approach to human cooperativeness puts forward the idea that humans’ unique level of prosociality is a consequence of humans’ unique cultural abilities. Such a level of prosociality is paralleled only by eusocial animals like bees, ants, and naked mole rats – all of which, unlike humans, live in genetically closely related populations. Henrich [38] presents the argument of this approach as follows: Altruistic individuals are beneficial for their group, but every member of that group always has an incentive to free ride on the altruists’ efforts rather than reciprocating them. Therefore, even if groups consisting purely of altruists existed, they would always be vulnerable to invasion by free riders, despite possessing a higher fitness than mixed groups or groups consisting purely of free riders. According to Henrich [38], genetically anchored altruistic traits can never stabilize, but will always be selected against, because of the rather dynamic flow of individuals between groups in human populations. In technical terms, without further mechanisms in place, within-group selection is always (much) stronger than between-group selection (see [16, 39]). This condition prevents the evolutionary spread of group-beneficial traits. Given this, every approach to explaining human prosociality must state how humans could overcome this evolutionary barrier. Henrich and others (e.g., [40]) argue that the most viable solution to this problem is cultural group selection. Cultural group selection must not be misunderstood as a mechanism equivalent to natural selection. Rather, cultural group selection for cooperativeness means that groups in which altruistic behavioral traits are prevalent reap higher fitness benefits than groups without these traits, and thus grow faster. “Extinction” of groups in cultural group selection does not necessarily mean that all individuals in a group die; these individuals may instead disperse into other groups or adopt other behavioral traits.

Now, how does this approach solve the within-group/between-group selection problem? The spread of behavioral traits via cultural transmission has, Henrich argues, one crucial property: it is biased towards within-group conformity. It is a well-known phenomenon that humans – but also many other species – adapt their behavior to what they perceive as the common behavior in their group. Humans, though, do this in a manner unparalleled by any other species [41]. Combined with the evolution of a punishment mechanism for those who do not conform to group behavior (e.g., “shunning”), between-group selection can become stronger than within-group selection because within-group differences are mostly leveled out by the combination of these mechanisms [42]. Punishment of nonconformist behavior has evolutionary benefits apart from the stabilization of altruistic traits, mainly through solving coordination problems, so it might have evolved before the spread of genuine prosocial behavior [43]. According to Henrich [38], the combination of punishment of nonconformist behavior and the preferential cultural learning of behavior common in a group then constitute an environment stable enough to enable the spread of genetic prosocial traits.
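
A simplified sketch of conformist-biased copying (our toy rule, not a specific published model) shows how quickly within-group variation is leveled out:

```python
import random

def conformist_step(group, bias=0.8):
    """Each agent copies the group's majority trait with probability
    `bias`, otherwise keeps its own trait."""
    share_c = group.count("C") / len(group)
    majority = "C" if share_c >= 0.5 else "D"
    return [majority if random.random() < bias else trait
            for trait in group]

random.seed(1)
group = ["C"] * 60 + ["D"] * 40   # mixed group, cooperators in the majority
for _ in range(5):
    group = conformist_step(group)
print(group.count("C"), "of", len(group), "agents now cooperate")
# Within-group variation all but disappears, while a neighboring group with
# a defector majority would converge on defection - preserving the
# between-group differences on which cultural group selection acts.
```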

Unlike the other approaches outlined above, cultural group selection models specify reasons why human prosociality is unparalleled in other species. The approach is also supported by the finding that prosociality in humans varies significantly between cultures (see, e.g., [44–46]), which it does not in other species [38]. On the other hand, it relies on assumptions about the strength of cultural learning mechanisms that need further empirical investigation.

Examples of Cooperation in Primates and in Bacteria

Before we concentrate exclusively on human cooperation in the next section, we will discuss two examples from the animal kingdom – biofilm production in bacteria and cooperative hunting in chimpanzees. Both are particularly interesting: bacteria show cooperation on a very basic level, whereas chimpanzees are the closest living relatives of humans.

In general, we should expect bacteria to have high levels of cooperativeness since they reproduce asexually. In consequence, their degree of kinship is very high, sometimes up to r = 1. In contrast to humans and chimpanzees, however, bacteria, of course, have no cognitive abilities, no social networks, and no goals in a superordinate sense.

One interesting feature of most bacteria (99%) is the production of biofilms. Advantages of biofilms include high resistance against antibiotics and physical forces, as well as the opening up of new ecological niches [47, 48]. However, biofilms are costly in terms of both resources and energy [48]. The more individuals contribute to the biofilm, the stronger it gets. Individuals who do not contribute to its production, i.e., free riders, reap the benefits but do not share the costs. Thus, biofilm production is a common-pool resource problem.

How is this very basic form of cooperation sustained? One important prerequisite is the spatial separation of cooperating populations and free riders. Separation can be partly the effect of the biofilm itself since it creates a new niche but also the effect of active resistance by cooperators [49]. If the exclusion of free-riding bacteria cannot be sustained, biofilm production will quickly deteriorate – the so-called tragedy of the commons – since free riders have clear fitness advantages [50, 51].

What can be learned from this? First, cooperation does not necessarily depend on complex cognitive abilities. Second, selection can favor cooperation given spatial separation (i.e., effective exclusion of free riders) and low mutation rates. Third, although free riders do have lower costs in direct comparison, individuals in a cooperative group will usually outperform them. In conclusion, cooperation exists even on a very basic level, without elaborate preconditions.

Chimpanzees, on the other hand, do possess highly developed cognitive skills and social networks. Some groups are successful cooperative hunters, while others seem unable to coordinate such complex behavior. In general, cooperative hunting is rare in the animal kingdom, since under most circumstances it is not an evolutionarily stable strategy: free riders constantly threaten to destroy the common-pool resource (the cooperative hunt) by not hunting but still taking their share of the prey afterwards.

In chimpanzees, the motivation to engage in hunting is egoistic, since chimpanzees seem to be neither altruistic nor mutualistic [52]. Preconditions for forming a hunting group include a common history of food sharing [53], sufficiently small differences in age and dominance, and strong kinship relations [54]. Which individuals make up a group and which habitat the group lives in also matter, since these factors determine (among other things) the cost-benefit ratio for each individual [55–57].

The hunt itself is an extremely complex 3-D coordination game where experienced hunters take key roles. It takes up to 20 years to become a proficient hunter, to anticipate possible escape routes of the prey, and to know which role to play. The contribution of each individual during the hunt is duly noted and is accounted for when the prey is distributed [58].

Again, what can be learned from cooperative hunting in chimpanzees? Complex social rules, roles, and proportional fairness are not unique to humans. It takes years and considerable cognitive abilities to solve such notoriously hard coordination problems. Solutions like this are unstable – as soon as one greedy and dominant individual resorts to snatching more than its share, other skilled hunters stop hunting in the group and hunt on their own [58]. Thus, one of the key elements for cooperation to come into existence is an advantageous individual cost-benefit ratio.

These insights about cooperation in nonhuman creatures, together with the theoretical explanations above, lead us to the question of what human cooperation really depends on. The field of experimental economics has shed much light on this question during the past decades. We will now turn to that evidence.

Experimental Perspectives

Today, there is a veritable “industry” of conducting experiments on cooperation-related questions. Thousands of different experiments have been carried out. Although they have shed much light on many questions, a consensus – or a theoretical model able to explain cooperation across diverse circumstances – is not to be expected soon. However, there are some robust results that recur again and again, even in very different settings and cultures. This section focuses on these results, particularly those known to reliably enhance cooperation in humans.

One “work horse” of experimental game theory is the public goods game (PGG). These games are repeated n-person prisoners’ dilemmas. Each subject in a group receives an endowment of tokens and may either keep it (private pool) or invest part or even all of it in a public pool. The experimenter usually doubles the amount paid into the public pool (to simulate the enhanced efficiency of cooperation in a group) and pays back an equal share of that amount to all subjects, irrespective of their contributions. One extreme outcome is that all players behave egoistically (i.e., invest nothing in the common pool), so that no public good is produced. The other extreme consists in all players behaving altruistically and investing everything into the pool, which means that the social optimum is reached. The social optimum always remains susceptible to free riding, whereas the all-egoistic extreme is the Nash equilibrium predicted by game theory. Although there are other famous games like the dictator and ultimatum games [59, 60], we will focus on PGGs in the next sections.
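
A minimal sketch of the payoff logic of a single PGG round (group size, endowment, and multiplier are illustrative choices of ours):

```python
def pgg_payoffs(contributions, endowment=20, multiplier=2.0):
    """Each player keeps (endowment - contribution) and receives an equal
    share of the doubled public pool, regardless of own contribution."""
    share = multiplier * sum(contributions) / len(contributions)
    return [endowment - c + share for c in contributions]

print(pgg_payoffs([20, 20, 20, 20]))  # social optimum: everyone earns 40.0
print(pgg_payoffs([0, 20, 20, 20]))   # free rider earns 50.0, others 30.0
print(pgg_payoffs([0, 0, 0, 0]))      # Nash equilibrium: everyone keeps 20.0
```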

Flexible Strategies of Humans

The first interesting aspect of human cooperative behavior is that it is highly adaptable to circumstances. Although there seem to be recurring types of players (“pure altruists,” “pure defectors,” “conditional cooperators” [61]), humans switch strategies swiftly when settings change [62]. In one study, 27% of complete free riders (i.e., subjects who contribute nothing to the public good in PGGs) switched to full cooperation when they moved from an institution without punishment to one with punishment. Moreover, 70% of all subjects who switched from a punishment institution to one without reduced their contributions. Strategic changes also depend on the perception of others (“How altruistic are they?” “Can they be trusted?”), on the setting (“Is it fair to all or asymmetrical?”), and on motivation. Conditional cooperators, in particular, make their contributions dependent on the contributions of others (more on conditional cooperators, e.g., in [61, 63, 64]).

However, it is far from easy to identify the respective strategies of subjects. One reason is that subjects, when asked, are often unable to produce a coherent strategy. In addition, introspective reports are notoriously unreliable. In consequence, only actual contributions may be used to deduce the strategy behind them.

Let us begin with one of the most robust results in PGGs. On average, almost worldwide, subjects contribute around 50% of their endowment to the public pool in the first period (see Fig. 15.3 below). This surprisingly high mean contribution decays rapidly to 0–20% by the last period. The last period usually shows an additional sharp decline, since players are well aware that defecting then can no longer cost them future benefits.

Fig. 15.3

Typical decay of contributions in public goods games (Data with permission from [65], our chart)

Parameters Influencing Cooperation Levels

There has been much research on measures that influence cooperation. Less research has been done on the effects of age, gender, educational level, and socioeconomic background; the evidence concerning them is sparse and conflicting. However, increasing age seems to correlate slightly with higher cooperation levels [66–68].

Furthermore, group size does not have a negative influence on cooperation levels [69], although this has been repeatedly posited theoretically (e.g., [70]). Anonymous cooperation, however, does have a clearly negative effect [71] (see Section “Reputation”).

There is evidence that humans do not learn from failed cooperation [69] and that the amount of possible earnings (in some settings, up to three monthly wages may be earned for a few hours of play) does not influence altruistic behavior [72].

It is unclear why important social variables like gender or educational level have little or no influence on cooperative behavior. The scarcity of clear indications may, however, simply be due to missing data. The result that the possibility of very high earnings does not substantially alter behavior is even more surprising. Cultural influences, in contrast, do change cooperative behavior [73].

Communication

The largest increase in cooperation levels can be achieved by introducing the possibility of communication in laboratory experiments. Across many studies, cooperation is boosted by allowing individuals to communicate with others (see, e.g., the surveys of [74, 75], and [66]). Contribution levels of up to 98% of the endowment can be reached, compared to 47% without communication [76]. If the settings are more realistic, e.g., in common-pool resource games, the difference is even greater: in a study by Ostrom [77], for example, efficiency reaches the social optimum – an increase of 65% compared to the baseline without communication. A further advantage of communication is that it costs almost nothing, compared to mechanisms like punishment (see “Punishment”).

This has important consequences for economic theory, since even promises which cannot be enforced by any sanctions – so-called cheap talk – are often kept and are meant sincerely in about 80% of cases in such and similar situations [78].

When communicating, subjects first focus on what the best strategy for the group is, that is, they try to figure out what situation they are in. Depending on the settings, this seems to be surprisingly hard for many of them. In consequence, a single person in the group who is able to explain the best strategy may foster cooperation just by pointing out the social optimum to the others. Second, subjects do in fact try to agree on a strategy – which in most treatments is difficult, since the actual strategic choice is independent of previous promises. Nonetheless, people do get emotional when defectors undercut their efforts to cooperate. A notable problem with communication is that subjects have substantially more trouble reaching a unanimous agreement than a mere majority vote [77], which may lead to suboptimal outcomes in some groups.

Punishment

Punishment is another important mechanism to increase cooperation. Not only is it ubiquitous in our societies (courts, police, etc.), but it has also been researched extensively in laboratory settings.

Punishment means that subjects receive the option to invest part of their endowment to impose fines on other subjects. The typical ratio is one token of cost to three tokens of fine: if, for example, three tokens are invested in punishment, the punished subject loses nine tokens. Figure 15.4 below shows that punishment can lead to high and stable contributions, whereas without it, the typical decay sets in.
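
A sketch of how the punishment stage changes payoffs, using the 1:3 ratio just described (the group and its numbers are otherwise invented):

```python
COST_TO_FINE = 3  # each token spent on punishment destroys three tokens

def apply_punishment(payoffs, punishments):
    """punishments[i][j] = tokens player i spends on punishing player j."""
    out = list(payoffs)
    for i, row in enumerate(punishments):
        for j, spent in enumerate(row):
            out[i] -= spent                 # the punisher pays the cost
            out[j] -= COST_TO_FINE * spent  # the punished pays the fine
    return out

# Three players; player 2 free rode, and player 0 spends 3 tokens on a fine.
print(apply_punishment([30, 30, 50], [[0, 0, 3], [0, 0, 0], [0, 0, 0]]))
# -> [27, 30, 41]; total earnings fall from 110 to 98 - the efficiency loss
#    discussed below, since fined tokens are destroyed, not redistributed.
```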

Fig. 15.4

Comparison of average contributions in a PGG with punishment and without punishment (pun); subjects in protocol A played the treatment with punishment first and then the treatment without; protocol B vice versa (Data with permission from [79], our chart)

It is remarkable to what extent the possibility to punish defectors increases contributions – bringing them close to the social optimum (see also [80]). However, a few points complicate such results. First, sanctions depend on their effectiveness, i.e., “How much do I have to invest to punish another player?” [80–82]. Second, free riders are not the only ones who are punished: there is a significant amount of punishment directed at high contributors, so-called antisocial punishment [83]. Moreover, there is counter-punishment [84], i.e., those who punish are punished in revenge by those they punished. Both, of course, are highly detrimental to efficiency, i.e., the amount of money remaining after all costs (investments in punishment and fines received) are subtracted.

Taken together, efficiency in punishment treatments is often lower than in treatments without punishment (e.g., [65, 82]), since the costs for both the punisher and the punished have to be subtracted from the overall earnings.

However, efficiency may be better in the long run, since expenditures for punishment decrease drastically from the first rounds to later rounds (Fig. 15.5, [85]). This can be seen in real-world common-pool resource systems, too. There, punishment is not only delegated from individuals to a group (a council, the local jurisdiction, etc.), but sanctions are typically graduated: the first violation of group norms often results in a very mild disciplinary measure or even a mere reprimand, while repeated offenses are met with sanctions of increasing severity.

Fig. 15.5

Comparison of earnings in treatments with (P) and without punishment (N) in short-term (10 periods) and long-term (50 periods) PGGs (Data with permission from [85], our chart)

Reputation

Reputation is another very effective way to increase cooperation levels. The image of a person, a company, or a nation is an important asset. Again, reputation building is ubiquitous in our societies – see, for example, Amazon, eBay, or any other Internet platform where reputation is a decisive factor for sales. For companies, the right image can be worth billions of dollars – think of Apple versus BP. In the laboratory, cooperation levels remain high (95% versus 53% of the endowment in the baseline [71]) if subjects are allowed to build up a reputation by giving generously in reciprocity games (see also [86]). Results from other experiments and real-world data suggest that people behave more cooperatively, with less rule breaking and defection, when they know or suspect that they are being observed [36]. The cues for this can be very subtle – for example, an ornamental eye on a box.

Reputation is especially important in situations where the other party is unknown – hence its importance in trade and on the Internet in particular. Building up a good reputation is a time-consuming, costly, long-term effort. A reputation for being a trustworthy and reliable cooperative partner is probably a costly signal (see [87]). On the other hand, it can be used to attract other cooperators and thus be useful in the long run.

Assortment

Assortment refers to individuals actively seeking out other high cooperators and avoiding free riders. This presupposes the ability to discriminate between cooperators and free riders: supposing cooperativeness is a stable disposition in individuals’ behavioral repertoires, can humans reliably discriminate between potential cooperators and defectors?

Indeed, Frank et al. [88] found that subjects were able to predict their interaction partners’ decisions in a prisoner’s dilemma game better than chance when they were allowed to interact with them for 30 min before the actual game was played. In a study by Fetchenhauer et al. [89], subjects achieved a comparable quality of prediction when presented with 20-s silent video clips of target persons and asked to assess how these would decide in a dictator game.

The general problems of assortment approaches are that (1) recognition of cooperativeness must be reliably better than chance and (2) defectors who mimic signals of cooperativeness must be ruled out (see [90]). As long as defectors are somehow able to sneak into cooperative groups, they will reap higher benefits and flourish, thus leading to the decay of cooperation.

In humans, assortment with other cooperative persons might be a mechanism readily used whenever the possibility exists. In a study by Page [91], subjects had to pay a small fee to be allowed to switch groups. After every third round, subjects could rank their fellow group members, the first rank indicating the most preferred partner in future interactions. Correct information about all individuals’ contributions in past rounds was available. Subjects frequently used this option: 94% of all subjects ranked at least once, and a surprisingly high 79% took part in every ranking round. In consequence, free riders ended up in groups with other free riders and little payoff. At the other end, the most sought-after cooperators played in highly profitable groups near the social optimum. On average, contributions rose by around 30% compared to a baseline without this combination of assortment and reputation-building mechanisms.

Apart from the mechanisms discussed above, there is a host of other parameters known to influence cooperation significantly as well.

Framing Effects

It is well known that humans are sensitive to context. There is an extensive literature in cognitive psychology about framing effects (see e.g., [92, 93]). Most heuristics are suited only for a certain, specific environment [94]. It is therefore no surprise that even very subtle manipulations in the wording of instructions in experimental games, for example, mentioning the word “fair,” increase contributions by about 20% [75]! Other studies find large differences just by switching “someone” to “matched partner” in the instructions [95].

In real settings, this effect has been demonstrated for a substantial number of subjects (32,961) [96]. Students at the University of Zürich have to re-register each semester. On the respective form, there are two checkboxes for donations to two public funds at their university – one for foreign students and one for students in financial difficulties. Until 1998, the situation was as follows:

Before the winter semester of 1998, students received two invoices and had to choose between the two; one with the amount of the compulsory tuition fee on it, and the other with the amount of the tuition fee plus the amount due for contributions to both Funds.

After the year 1998, the text changed so that students:

[…] have to tick boxes to decide if they want to donate money to one or the other Fund, to both or to neither of the Funds. 1 month later, they receive an invoice with the compulsory tuition fee plus the chosen amount for the Social Funds. ([96], p. 77)

From an economic standpoint, these two decisions are completely identical. The results, however, are not: students’ contributions to both funds rose from 44% to 62%.

If framing indeed plays a major role, then all laboratory settings are deficient in some way. Since they are not real, and subjects know that they are being tested in an artificial setting, their behavior could deviate significantly from their usual choices in the real world. This suspicion is supported by the few studies that link laboratory experiments to real-world environments [77, 97, 98]. Some researchers even found completely divergent behavior [99].

Another aspect that has to be taken into account is that humans seem to have an appreciation of fairness. Whatever subjects perceive as fair (in most cultures, a 50:50 division), strategies apparently have to be fair in some way to be accepted. Worldwide, around 71% of all subjects offer between 40% and 50% of the endowment in ultimatum games; only about 4% offer less than 20% [60, 100]. The importance of fairness is also underlined by modern political philosophy, where fairness is one of the central concepts [101].

Additional Influences

Besides the mechanisms and parameters mentioned, there are many other factors influencing cooperation levels; we will restrict ourselves to two important ones. The first is known as the marginal per capita return (MPCR): how much of their investment in the public pool do players get back? Not surprisingly, the higher the return, the higher the willingness to contribute in the first place [66, 69, 102, 103]. In addition, the percentage of free riders is lower with a higher MPCR.
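
Written out (our summary notation, not a quotation from the studies cited): with endowment e, contribution c_i, pool multiplier m, and group size n, each player earns

$$\pi_i = e - c_i + \underbrace{\frac{m}{n}}_{\text{MPCR}} \sum_{j=1}^{n} c_j$$

so contributing a token costs 1 but returns only MPCR = m/n < 1 to the contributor, while full contribution remains the social optimum as long as m > 1.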

The second factor has been described as “integration into markets” or “level of globalization.” There is a high correlation between proximity to markets, experience in trading goods, and trust in strangers in market transactions on the one hand, and the willingness to contribute to public goods on the other [59, 104]. One study found that the probability of contributing to the public pool (the global pool in this particular experiment) was 77% for the “most globalized” individuals (from the USA), but only 17% for the “least globalized” (Iran) [104].

There is one further very important result, from [89], whose relevance for game theory has apparently been underestimated. This study used an iterated prisoner’s dilemma game but also tested a sequential version, in which the decision of the first player (A) was known to the second player (B). Here the prediction is clear: if a subject knows that player A cooperated, then defection is the best option. Quite surprisingly, however, 61% of American, 73% of Korean, and 75% of Japanese subjects (playing B) cooperated after A had cooperated. This is not due to confusion, since 88% of Japanese and 100% of American and Korean subjects defected in reaction to a prior defection.

It is difficult to interpret these results, but one suggestion from the authors is that humans have a cognitive mechanism they call “social exchange heuristic”: “The SEH is a robust psychological mechanism which makes people seek mutual cooperation in social exchange.” ([96], p. 87). This, of course, does not answer the question of why and how this mechanism could evolve in the first place.

Conclusion

The question as to why humans are such a cooperative species, unparalleled among nonhuman animals, has been under interdisciplinary investigation for more than two decades. The formal toolbox of evolutionary game theory has enabled researchers to map possible pathways along which cooperative strategies might have evolved. This area of research is very much alive and constantly expanding in numerous directions. Models and simulations of the evolutionary effects of different mechanisms for amplifying and sustaining cooperation abound. For recent reviews and classifications of this vast literature, see [30] and [105]. The literature on theoretical pathways for the evolution of cooperation is complemented by experimental economics; for overviews, see, for example, [63] and [80]. In our article, we have tried to review the most general and well-established findings of both fields. (Side note: it is encouraging to see how fruitful and successful interdisciplinary research can be. Cooperation research is propelled by results from such diverse disciplines as theoretical and evolutionary biology, experimental economics, behavioral ecology, evolutionary psychology, anthropology, and philosophy, among others.)

However, cooperation research still lacks a synthesis of generally acknowledged results. Many possible pathways for the evolution of cooperation have been widely accepted, yet no consensus has been reached on which path human evolution actually took. Likewise, no consensus has been established on the ranking, scope, logic, and ultimate ends of the psychological mechanisms that make humans act altruistically in the experiments mentioned above.

Another very prominent open question in cooperation research is: “What exactly started human cooperativeness?” There is a good understanding of the developmental paths along which a critical mass of conditional cooperators could have thrived and stabilized at high rates in ancestral human populations, but theories diverge on the question of where this critical mass came from. Suggestions encompass cultural group selection [38, 80], cooperative breeding [106], and side effects of shared intentionality [107], to name but the most prominent. While some problems continue to puzzle researchers, others have been solved and have given rise to new, more specific questions.

Cross-References

David Hume and the Scottish Enlightenment

Moral Implications of Rational Choice Theories

Scientific Study of Morals