1 Introduction

Cooperative behaviour is defined as paying a cost to give a greater benefit to one or more other people. Since the benefit is greater than the cost, cooperation increases the total payoff of the group of all people involved in the interaction. For this reason, cooperative behaviour is considered by social scientists to be one of the key ingredients of a successful society (Boyd et al. 2003; Fehr and Gächter 2002; Fehr and Fischbacher 2003; Tomasello et al. 2005; Nowak 2006; Rand and Nowak 2013; Perc et al. 2017). Moreover, while individual cooperation increases social well-being, collective cooperation increases personal well-being as well: each individual within a society of cooperators is better off than each individual within a society of defectors.

Yet, since cooperation is individually costly, it often breaks down. Consequently, one of the most important research programmes across the social sciences seeks ways to promote and sustain cooperative behaviour. In this article, I review the literature on this topic.

Obviously, this field of research is enormous, and, in the limited space of these pages, I can only scratch its surface. To compensate, I include numerous references where the interested reader can find more detailed information. I hope that this review can serve as a useful starting point for social scientists, policy makers, and leaders interested in how to promote cooperative behaviour.

2 Models of Cooperation

Cooperation is formally studied through social dilemmas: strategic interactions in which N > 1 individuals choose between two or more actions, among which one benefits the group and one benefits the individual. This tension between self-interest and collective interest is what defines a social dilemma. Moving from the general to the particular, social scientists have defined different social dilemmas meant to conceptualise cooperative behaviour in different prototypical circumstances. In this article, I focus on the two most-studied social dilemmas: the prisoner’s dilemma and the public goods game.

2.1 Cooperation Between Two Agents: The Prisoner’s Dilemma

In the prisoner’s dilemma there are two players, each of whom has two available strategies, cooperate or defect. Cooperators pay a cost c to give a greater benefit b to the other player. Defectors pay no cost and generate no benefit.

The conflict between individual and collective interest follows from the assumption b > c. If both individuals cooperate, each pays the cost of cooperation but also enjoys its benefit, so each gets b – c. However, each individual has an incentive to deviate and collect the full payoff b, saving their own cost of cooperation while keeping the benefit of the other’s cooperative act. If both individuals reason this way, they both end up with 0, which is smaller than b – c, the payoff they would have obtained had they both resisted the temptation to defect.
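
To make the payoff structure concrete, here is a minimal sketch in Python (the function name and the values b = 3, c = 1 are mine, chosen purely for illustration):

```python
def pd_payoffs(coop1, coop2, b=3.0, c=1.0):
    """Payoffs of a one-shot prisoner's dilemma.

    A cooperator pays a cost c to confer a benefit b (with b > c)
    on the other player; a defector pays nothing and gives nothing.
    """
    payoff1 = (b if coop2 else 0.0) - (c if coop1 else 0.0)
    payoff2 = (b if coop1 else 0.0) - (c if coop2 else 0.0)
    return payoff1, payoff2

print(pd_payoffs(True, True))    # (2.0, 2.0): mutual cooperation, b - c each
print(pd_payoffs(False, True))   # (3.0, -1.0): unilateral defection earns the full b
print(pd_payoffs(False, False))  # (0.0, 0.0): mutual defection, worse than b - c
```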

There are also other ways to formalise cooperative behaviour between two agents, including the traveller’s dilemma (Basu 1994), Bertrand competition (Bertrand 1883), and the centipede game (Binmore 1987). While I acknowledge the importance of these models, in this article I focus on the prisoner’s dilemma, because the theoretical mechanisms and the experimental regularities discussed below have been developed and tested mainly using the prisoner’s dilemma; they are expected to work, in a similar fashion, in the other two-player social dilemmas.

2.2 Cooperation Among N Agents: The Public Goods Game

The most popular N-player social dilemma is the public goods game, which is meant to conceptualise situations in which a group of individuals get to decide how much to contribute to a common project.

Formally, in the public goods game, each of N individuals decides how much, if any, of an initial endowment e to contribute to the public good. Let ci be the contribution of individual i; the total amount contributed is then c1 + … + cN. Let a be the “marginal return of cooperation”; the payoff of individual i is then e – ci + a*(c1 + … + cN), that is, i receives the portion of the endowment that she decided to keep plus a proportion a of the common good. The marginal return of cooperation a is assumed to be greater than 1/N and smaller than 1. This assumption guarantees that the public goods game is a social dilemma: the collective interest would be maximised if all individuals contributed their whole endowment, but each individual has an incentive to deviate and keep the whole endowment.
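
As an illustration, the following sketch (with parameter values of my own choosing, not taken from any particular experiment) computes the payoffs of the linear public goods game for N = 4 and a = 0.5 and makes the free-rider incentive visible:

```python
def pgg_payoffs(contributions, endowment=10.0, a=0.5):
    """Payoffs of a linear public goods game: each player keeps
    endowment - c_i and receives a share a of the total contribution.
    The game is a social dilemma when 1/N < a < 1."""
    total = sum(contributions)
    return [endowment - ci + a * total for ci in contributions]

# Four players with a = 0.5, so 1/4 < a < 1: a genuine dilemma.
print(pgg_payoffs([10, 10, 10, 10]))  # full cooperation: 20.0 each
print(pgg_payoffs([0, 10, 10, 10]))   # the free rider earns 25.0; the others 15.0
print(pgg_payoffs([0, 0, 0, 0]))      # full defection: 10.0 each
```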

There are also other ways to conceptualise cooperation among N individuals, such as the N-player prisoner’s dilemma and the piecewise linear-then-constant public goods game, among others (threshold public goods game, resource dilemma, volunteer’s dilemma, etc.). Although in this article I do not focus on these games, it is worth defining the first two, because doing so sheds light on the different effects that group size can have on cooperative behaviour depending on the social dilemma.

In the public goods game as defined above, there is an underlying assumption that the individual return for full cooperation increases linearly with the number of individuals: if all individuals cooperate, each of them gets e*a*N, which grows linearly with N. In some practical contexts, however, this assumption is unrealistic. To study these situations, one can consider social dilemmas in which the linear relationship between group size and individual return for full cooperation is replaced with other assumptions. Here, I discuss two prototypical cases. One is the N-player prisoner’s dilemma. Over the years, several definitions of this game have been proposed at various levels of generality (e.g., Hamburger 1973; Carroll 1988). Here, I define an N-player prisoner’s dilemma to be any N-player social dilemma in which the individual return for full cooperation is constant in the number of players. This game is useful to formalise situations in which the individual benefit of cooperation does not depend on the number of players, but all players are still needed to cooperate (Yao and Darwen 1994; Grujić et al. 2012; Barcelo and Capraro 2015). Another practically relevant N-player social dilemma is the piecewise linear-then-constant public goods game, in which the return of cooperation increases linearly until a certain group size N0 and then becomes constant. This conceptualises situations in which the production of the public good reaches a plateau due to natural limits on production (Yang et al. 2013; Capraro and Barcelo 2015).
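
The following sketch (again with hypothetical parameter values) contrasts how the individual return for full cooperation scales with group size in the three games just defined:

```python
def full_coop_return_linear_pgg(n, e=10.0, a=0.5):
    # Linear public goods game: if all n players contribute e,
    # each earns e * a * n, growing linearly with n.
    return e * a * n

def full_coop_return_n_player_pd(n, b=3.0, c=1.0):
    # N-player prisoner's dilemma (as defined above): the individual
    # return for full cooperation, b - c, does not depend on n.
    return b - c

def full_coop_return_piecewise_pgg(n, n0=5, e=10.0, a=0.5):
    # Piecewise linear-then-constant public goods game: the return
    # grows linearly up to group size n0 and then plateaus.
    return e * a * min(n, n0)

for n in (2, 5, 10, 20):
    print(n,
          full_coop_return_linear_pgg(n),      # 10, 25, 50, 100
          full_coop_return_n_player_pd(n),     # always 2
          full_coop_return_piecewise_pgg(n))   # 10, 25, 25, 25
```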

3 Iterated or Non-anonymous Interactions

Most of our everyday interactions are repeated or take place with people who are not completely anonymous. For example, we may interact with a friend of a friend, or with a company that has been recommended to us. In these cases, cooperation can evolve even among self-interested agents, according to five fundamental rules summarised by Nowak (2006) for the prisoner’s dilemma. I review these rules below, together with the experimental evidence for each. Most of the experimental work discussed here is drawn from the review by Rand and Nowak (2013), which I recommend for further details.

3.1 Kin Selection

Kin selection explains cooperation between relatives. The general assumption of the theory is that, if r is the probability of sharing a gene, then an individual receives not only their own payoff, but also a proportion r of the other’s payoff. Applied to the prisoner’s dilemma, it follows that, if r*b > c, then it becomes individually optimal to cooperate. This condition is known as Hamilton’s rule, after the pioneering work of the biologist William D. Hamilton (e.g., Hamilton 1964). The experimental evidence in support of this rule is, however, scarce, mainly because it is difficult to isolate the effect of genetic relatedness from other factors that usually accompany it, such as long-term relationships and the possibility of future interactions. Despite these technical difficulties, Madsen et al. (2007) were able to analyse data from two different cultures while controlling for three potential confounds: generational effects, sexual attraction, and reciprocity. In doing so, they found that people behaved in accordance with Hamilton’s rule.
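
As a worked example, Hamilton’s rule can be checked directly; the values of b and c below are hypothetical, while the relatedness coefficients are the standard ones for full siblings and first cousins:

```python
def hamilton_rule_holds(r, b, c):
    """Hamilton's rule: cooperating with a relative is individually
    optimal, in inclusive-fitness terms, when r * b > c."""
    return r * b > c

# With b = 3 and c = 1, cooperation is favoured between full siblings
# (r = 0.5, since 1.5 > 1) but not between first cousins (r = 0.125).
print(hamilton_rule_holds(0.5, 3.0, 1.0))    # True
print(hamilton_rule_holds(0.125, 3.0, 1.0))  # False
```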

3.2 Direct Reciprocity

Direct reciprocity explains the evolution of cooperation in repeated interactions between the same individuals. In this case, I can cooperate with you today to receive the benefit of your cooperation tomorrow. However, this leads to a cooperative equilibrium only when the probability of another encounter is large enough that the future benefit of cooperation, discounted by this probability, exceeds the present cost of cooperation. Formally, if w is the probability of another encounter, one needs w*b > c. Experimental studies have indeed shown that the rate of cooperation in indefinitely iterated social dilemmas increases when the probability of future encounters increases (Roth and Murnighan 1978; Murnighan and Roth 1983; Duffy and Ochs 2009; Dal Bó and Fréchette 2011; Fudenberg et al. 2012).
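
A back-of-the-envelope sketch illustrates where the condition w*b > c comes from, under the simplifying assumptions that the opponent reciprocates (cooperates as long as I do) and that a single defection ends cooperation for good; the parameter values are hypothetical:

```python
def expected_payoff_reciprocate(b, c, w):
    # Both players cooperate in every round; play continues with
    # probability w, so the expected number of rounds is 1 / (1 - w)
    # and each player expects (b - c) / (1 - w) in total.
    return (b - c) / (1.0 - w)

def expected_payoff_defect(b):
    # Defecting against a reciprocator yields the full benefit b in
    # the first round and nothing thereafter.
    return b

b, c = 3.0, 1.0
for w in (0.1, 1 / 3, 0.6):
    # Reciprocation outperforms defection exactly when w * b > c,
    # i.e., here, when w > 1/3.
    print(f"w = {w:.2f}: reciprocate = {expected_payoff_reciprocate(b, c, w):.2f},"
          f" defect = {expected_payoff_defect(b):.2f}")
```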

3.3 Indirect Reciprocity

In the case of repeated interactions with rematching after each round, people may become more likely to cooperate when they have information about others’ reputations. In its simplest form, reputation coincides with behaviour in the previous interaction. In this case, people can selectively cooperate with those who cooperated in the previous round. Anticipating this, people may become more inclined to cooperate from the first round. Several experiments have indeed shown that people tend to cooperate with those who have cooperated in the past and that the presence of a reputational mechanism can promote and sustain cooperation (Bolton et al. 2005; Milinski et al. 2006; Rockenbach and Milinski 2006; Seinen and Schram 2006; Rand et al. 2009; Pfeiffer et al. 2012). That people assign value to knowing others’ behaviour is also shown by the considerable time they invest in acquiring such information (Dunbar et al. 1997; Sommerfeld et al. 2007). Indeed, it can be shown that cooperation can be supported by indirect reciprocity only if the probability of knowing someone’s reputation is greater than c/b (Nowak and Sigmund 1998).
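
The threshold of Nowak and Sigmund (1998) lends itself to an equally simple check (the values of q, b, and c below are hypothetical):

```python
def indirect_reciprocity_supports_cooperation(q, b, c):
    """Indirect reciprocity can sustain cooperation only if the
    probability q of knowing a partner's reputation exceeds c / b."""
    return q > c / b

# With b = 3 and c = 1, reputations must be known more than a third
# of the time.
print(indirect_reciprocity_supports_cooperation(0.5, 3.0, 1.0))  # True
print(indirect_reciprocity_supports_cooperation(0.2, 3.0, 1.0))  # False
```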

3.4 Network Reciprocity

Most human interactions are not random, but structured. Network reciprocity explains the evolution of cooperation on graphs, where nodes represent actors and edges represent interactions between them. The idea is that, if interactions are structured, clusters of cooperators can protect themselves from invasion by defectors. This, however, requires that the ratio b/c be large enough. A simple rule that holds on many graphs and under several strategy-updating mechanisms is b/c > k, where k is the average degree of the graph (Ohtsuki et al. 2006). Rand et al. (2014a) showed experimentally that cooperation can indeed evolve on graphs satisfying this rule. When the rule is not satisfied, the rate of cooperation in structured populations is typically the same as in well-mixed populations (Grujić et al. 2010; Traulsen et al. 2010; Suri and Watts 2011; Gracia-Lázaro et al. 2012; Grujić et al. 2012a). Some work has also explored the evolution of cooperation on dynamic networks, where people can break old links and create new ones after each interaction. It has been found that people tend to break links with defectors and create links with cooperators, leading to an additional increase in the rate of cooperation compared to static networks, both in mathematical models (Bilancini and Boncinelli 2009; Bilancini et al. 2018) and in economic experiments (Fehl et al. 2011; Rand et al. 2011; Wang et al. 2012).
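
The rule of Ohtsuki et al. (2006) can be illustrated as follows; the graph and the parameter values are mine, chosen for illustration:

```python
def average_degree(edges, n_nodes):
    # Average degree of an undirected graph given as an edge list:
    # each edge contributes to the degree of two nodes.
    return 2 * len(edges) / n_nodes

def network_reciprocity_favours_cooperation(b, c, k):
    # Ohtsuki et al.'s (2006) rule: cooperation is favoured on a
    # graph of average degree k when b / c > k.
    return b / c > k

# A cycle on 4 nodes has average degree k = 2, so b/c must exceed 2.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
k = average_degree(cycle, n_nodes=4)
print(network_reciprocity_favours_cooperation(3.0, 1.0, k))  # True: 3 > 2
print(network_reciprocity_favours_cooperation(1.5, 1.0, k))  # False: 1.5 < 2
```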

3.5 Group Selection

When there is group selection, that is, competition between groups, groups of cooperators might outperform groups of defectors, leading to the evolution of cooperation (Richerson et al. 2016). A mathematically simple necessary condition for the evolution of cooperation can be derived assuming rare group splitting and weak selection: if n is the maximum group size and m the number of groups, cooperation may evolve only if b/c > 1 + n/m (Traulsen and Nowak 2006). To the best of my knowledge, this specific formula has not been tested experimentally. However, there is abundant evidence that intergroup competition can lead to the evolution of intragroup cooperation (Erev et al. 1993; Gunnthorsdottir and Rapoport 2006; Puurtinen and Mappes 2009), even when there is no monetary prize associated with winning the competition (Tan and Bolle 2007; Böhm and Rockenbach 2013).
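
This condition, too, can be illustrated with a simple check on hypothetical parameter values:

```python
def group_selection_may_favour_cooperation(b, c, n, m):
    """Necessary condition of Traulsen and Nowak (2006), assuming
    rare group splitting and weak selection: b / c > 1 + n / m,
    where n is the maximum group size and m the number of groups."""
    return b / c > 1.0 + n / m

# With b/c = 3, cooperation may evolve with many small groups ...
print(group_selection_may_favour_cooperation(3.0, 1.0, n=10, m=10))  # True: 3 > 2
# ... but not with a few large ones.
print(group_selection_may_favour_cooperation(3.0, 1.0, n=50, m=10))  # False: 3 < 6
```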

3.6 Cost, Benefit, and Group Size

Beyond the five mechanisms above, there are also structural changes to the social dilemmas themselves that might promote the evolution of cooperation by facilitating the application of one of the five rules. Two such changes are straightforward. Indeed, the five mathematical conditions described above imply that, when b increases or c decreases, the evolution of cooperation becomes easier, in the sense that the set of values of the other parameters for which cooperation can evolve grows larger. Experimentally, the fact that cooperative behaviour in iterated prisoner’s dilemmas depends positively on b and negatively on c was observed as early as the book by Rapoport and Chammah (1965). Similar findings have also been reported in the iterated public goods game, where an increase in the marginal return of cooperation a is typically associated with an increase in cooperative behaviour, both with partner- and with random-rematching (Gunnthorsdottir et al. 2007).

For social dilemmas with N > 2 players, there is another parameter that may affect cooperative behaviour: group size. However, the effect of group size on cooperation depends on the type of social dilemma: in the iterated public goods game, larger groups tend to cooperate more (Isaac et al. 1994), whereas, in the iterated prisoner’s dilemma, larger groups tend to cooperate less (Grujić et al. 2012b). The intuition behind this result is that, in the prisoner’s dilemma, the individual return for full cooperation is constant as group size increases, so it becomes more and more difficult to obtain the same payoff, which may work as an incentive to defect; in the public goods game, by contrast, the individual return for full cooperation increases linearly with group size, which might incentivise people to cooperate, despite a potentially larger absolute number of defectors.

3.7 Punishment and Reward

Numerous studies using iterated social dilemmas have shown that the presence of punishment or reward tends to increase cooperative behaviour (Yamagishi et al. 1986; Ostrom et al. 1992; Fehr and Gächter 2000; Rand et al. 2009), despite the presence of occasional anti-social punishers, that is, people who punish cooperative behaviour (Herrmann et al. 2008). In other words, if people interact knowing that they might be punished or rewarded for their behaviour, they tend to cooperate more. This happens because, on average, people punish defectors and reward cooperators. This finding can be interpreted as a form of reciprocity (direct or indirect, depending on whether the punisher/rewarder is affected by the choice of the defector/cooperator) and can therefore be seen as a way to apply two of the five rules of cooperation in practice; indeed, institutional punishment of defectors is perhaps the oldest known way to promote cooperative behaviour within a society.

4 One-Shot and Anonymous Interactions

In one-shot and anonymous games, the standard theory of rational, payoff-maximising behaviour predicts that people never cooperate. However, behavioural experiments have repeatedly shown that some people cooperate even in these contexts (Rapoport and Chammah 1965). Usually, the structural changes that promote cooperation in iterated games (while maintaining the anonymity of the interactions) also promote cooperation in one-shot games. Specifically, decreasing the cost of cooperation (Engel and Zhurakhovska 2016) or increasing its benefit (Capraro et al. 2014) promotes cooperative behaviour; increasing the size of the group promotes cooperative behaviour in the public goods game, but reduces it in the N-player prisoner’s dilemma (Barcelo and Capraro 2015). In one-shot anonymous games and in iterated games with random rematching, there is also some research on the effect of group size on cooperation in the piecewise linear-then-constant public goods game, but the results are mixed: one study found an inverted-U relationship, such that intermediate-size groups were the most cooperative (Capraro and Barcelo 2015), but a subsequent study failed to replicate this finding and found a positive effect of group size on cooperation (Pereda et al. 2019).

The presence of punishment or reward also increases cooperation in one-shot games (Capraro et al. 2016; Capraro and Barcelo 2021a). Moreover, cues suggesting that the interaction may not be anonymous or one-shot can increase cooperative behaviour. For example, information about other participants’ behaviour can increase cooperation via conditional cooperation (Fischbacher et al. 2001; Kocher et al. 2008).

However, this does not explain why people cooperate in one-shot and anonymous social dilemmas. In this section, I review the main frameworks that have been proposed in the last decade.

4.1 Social Heuristics

One prominent account contends that people internalise strategies that are useful in their everyday life and apply them as heuristics when they happen to face novel situations. Most real-life interactions are not one-shot and anonymous but repeated or non-anonymous: they happen with friends or colleagues, or with individuals about whose past behaviour we have some information. In these contexts, people may learn that cooperative behaviour pays off in the long run, especially when its cost is low or its benefit is high (or in groups of a particular size). People might then internalise these strategies as heuristics and apply them in one-shot and anonymous games. This framework takes the name of the Social Heuristics Hypothesis (Rand et al. 2014b).

Over the last decade, scholars have sought to test this framework experimentally. The idea behind the experimental approach is the following: if cooperation in one-shot and anonymous games is driven by heuristics, then experimental manipulations aimed at increasing reliance on heuristics should promote cooperative behaviour. The non-trivial experimental challenge is how to promote the use of heuristics in the laboratory. Scholars have developed four techniques: time pressure, ego depletion, cognitive load, and conceptual primes of intuition. I review them below:

  • When people have little time to think about the details of a decision problem, they might be more likely to rely on general heuristics. Therefore, putting people under time pressure might increase their reliance on heuristics.

  • When people are depleted of their self-control, they might lose the ability to work through the details of the decision problem at hand and, therefore, become more likely to use general heuristics. Self-control can be depleted through an ego depletion task, such as the Stroop task or the e-hunting task, or, in general, any task that requires the use of self-control.

  • When people’s working memory is taxed by a concurrent task, their ability to engage in the complex reasoning needed to evaluate the situation they are facing might be reduced as well, making them more likely to follow simple heuristics. Working memory can be loaded using cognitive load tasks, such as keeping in mind a long sequence of numbers (typically seven).

  • Conceptual primes of intuition refer to a class of nudges that promote reliance on intuitive thinking. These nudges can be implicit or explicit. An example of an implicit nudge: before playing a social dilemma, people are given a set of letters that they can use to form words, and these words are related to intuitive decision making (e.g., “intuition”, “emotion”, or “quick”). An explicit nudge could be to ask people to follow their intuition or their emotions, or to write about a time in their life when following their intuition worked out well.

It is important to note that none of these methods is perfect. Time pressure has been criticised because the time allowed is usually too long to eliminate reflective reasoning and isolate quick heuristics (Libet 2009; Soon et al. 2008). Ego depletion may even do something fundamentally different from deactivating reflective reasoning and activating reflexive reactions; moreover, the very assumption that self-control draws on a limited resource has come under scrutiny (Inzlicht et al. 2014). Cognitive load tasks might interact with the primary task, while conceptual primes, especially explicit ones, may generate experimenter demand effects (Rand 2016).

Aware of the limitations of these experimental manipulations, scholars have turned to meta-analytic techniques to find out whether, overall (i.e., putting all the studies together, regardless of the cognitive manipulation used), intuition favours cooperative behaviour. An early meta-analysis found a positive effect of promoting intuition on cooperation (Rand 2016). This result was later challenged by a second meta-analysis, which found a null effect (Kvarven et al. 2019), which was in turn criticised by a third meta-analysis that replicated the original positive effect (Rand 2019). The debate about whether promoting intuition increases cooperative behaviour is still ongoing (see Capraro (2019) for a review). However, one result has been consistently found in all meta-analyses: explicit primes of emotion increase cooperative behaviour. For example, one explicit message (shown to participants before making their decision) used to prime reliance on emotion is the following:

Sometimes people make decisions by using feeling and relying on their emotion. Other times, people make decisions by using logic and relying on their reason.

Many people believe that emotion leads to good decision-making. When we use feelings, rather than logic, we make emotionally satisfying decisions.

Please make your transfer decision by relying on emotion, rather than reason.

This prime was initially introduced by Levine et al. (2018) and shown to increase cooperative behaviour in the prisoner’s dilemma. More recently, it has also been applied to other contexts. For example, it has been shown to reduce speciesism, which one can interpret as a form of cooperation between humans and non-human animals (Caviola and Capraro 2020). However, this very same prime has also been shown to reduce intentions to wear a face mask during the COVID-19 pandemic (Barcelo and Capraro 2021). This raises an issue that I think deserves attention: while previous research has focused on “general cooperation” using stylised games, it is possible that particular forms of cooperation, especially those we are unfamiliar with, require reflective reasoning.

Another important point to reflect upon is that the aforementioned work concerns the effect of “general emotions” on cooperation. Specific emotions may affect cooperation in different ways. For example, Polman and Kim (2013) found that inducing anger decreases cooperation in the public goods game, while inducing disgust increases it. Motro et al. (2017) found that inducing anger decreases cooperation in a prisoner’s dilemma, but only when the other participant was angry as well. Chierchia et al. (2021) found that inducing fear increased cooperation compared to inducing anger, but neither condition differed from the control.

4.2 Moral Preferences

Another proposed account is the moral preferences hypothesis, according to which cooperative behaviour in one-shot and anonymous interactions is primarily driven by moral preferences for doing the right thing (Capraro and Perc 2021). This account is not mutually exclusive with the social heuristics hypothesis, because personal norms – internal standards about what is right or wrong – may come from the internalisation of behaviours learned in everyday interactions.

The experimental evidence on the moral preferences hypothesis began with a paper by Biziou-van Pol et al. (2015), which found that cooperative behaviour in the one-shot prisoner’s dilemma correlates positively with honest behaviour in a sender–receiver game in which the sender can send a dishonest message that increases both his payoff and that of the receiver. The honest choice in this game can therefore be considered a measure of moral preferences beyond monetary outcomes, and its correlation with cooperation suggests that cooperative behaviour is partly driven by such preferences. A subsequent article by Kimbrough and Vostroknutov (2016) showed that cooperative behaviour in the public goods game correlates with norm-following in a task where subjects, moving a virtual person on a computer screen using a keyboard, decide how long to wait at a virtual traffic light showing red, losing time and money, in a context where they incur no negative payoff if they simply cross the road disregarding the light. The decision in this norm-following task is anonymous, has nothing to do with social interactions, and has no material consequences for anyone else. Therefore, although Kimbrough and Vostroknutov (2016) do not mention personal norms in their paper, I think it is fair to consider their task a measure of people’s propensity to follow their personal norms. In this light, their results may be interpreted as further support for the view that cooperative behaviour in one-shot and anonymous social dilemmas is partly driven by moral preferences. Capraro and Rand (2018) then demonstrated that cooperative behaviour in the prisoner’s dilemma correlates with the moral choice in the trade-off game, regardless of the frame of the trade-off game, providing additional evidence that cooperative behaviour is partly driven by moral preferences beyond monetary outcomes. More recently, Bašić and Verrina (2021) provided evidence that personal norms are complementary to social norms in predicting cooperative behaviour, and Catola et al. (2021) showed that both social and personal norms correlate with cooperative behaviour in the public goods game, but that the predictive power of personal norms is higher.

This line of research suggests that nudges that make personal norms salient might be effective at increasing cooperative behaviour. However, to the best of my knowledge, only two papers have explored this question.

Capraro et al. (2019) explored the effect of two norm-nudges, one based on the injunctive norm, and one based on the personal norm. Specifically, before playing a prisoner’s dilemma, participants answered one of the following questions:

Personal norm question: “What do you personally think is the morally right thing to do in this situation?”.

Injunctive norm question: “What do you think your society considers to be the morally right thing to do in this situation?”.

After answering this question, participants played a one-shot prisoner’s dilemma. The results showed that both questions increased cooperative behaviour compared to a baseline in which participants made their decision without being asked any question. Moreover, the effects of the two nudges were similar in size.

Mieth et al. (2021) tested the effect of moral labels on cooperative behaviour in the prisoner’s dilemma. They found that labelling the available options with morally loaded words, such as “I cooperate” vs. “I cheat”, increases cooperation, compared to using neutral labels “Option A” vs. “Option B”.

Thus, these two studies provide evidence that making personal norms salient tends to increase cooperative behaviour. However, making the injunctive norm salient seems to increase cooperative behaviour to a similar extent, suggesting that injunctive norms may also play an important role in determining one-shot and anonymous cooperation. It is possible that different people are nudged by different norms; for example, people high in internalised moral identity may react to personal norm-nudges, while people high in symbolised moral identity may react to injunctive norm-nudges. In general, studying the role of potential moderators could be a promising route for future research. Moreover, perhaps surprisingly, I could not find any work testing the effect of making the descriptive norm salient. One can imagine that this would also increase cooperative behaviour via conditional cooperation, but it is an open question whether this is actually the case and, if so, how the magnitude of this effect compares with that of the other norm-nudges.

5 Conclusion

In this article, I reviewed the main mechanisms and interventions known to promote cooperative behaviour in social dilemma games, summarised in Table 1. I also highlighted some open questions that I hope can be answered in future work, collected in Table 2.

Table 1. Summary of the mechanisms and interventions that are known to increase cooperative behaviour in social dilemmas.
Table 2. Open questions.