1 Introduction

Complex and distributed systems are often modeled by means of game dynamics in which the participants (players) act spontaneously, typically striving to maximize their own payoff. Such selfish behavior often results in a so-called (pure Nash) equilibrium which, roughly speaking, corresponds to a situation in which no player has an incentive to change her current strategy.

Consider the natural scenario in which people interact on a (social) network and take decisions based both on their personal interests and on what their friends decided. Situations of this sort are often modeled by means of games that are played locally by the nodes of some graph (see, e.g., [14] and [13, Chap. 19]). For example, players may have to choose between two alternatives (strategies), and each strategy becomes more valuable if friends also choose it (perhaps it is easier to agree than to disagree, or it is better to adopt the same technology for working rather than different ones).

In many cases, an extremely simple procedure for converging to an equilibrium is the so-called best-response dynamics, in which at each step one player revises her strategy so as to maximize her own payoff (and the others stay put). These dynamics work in more general settings (not only in network games), where convergence to an equilibrium is proven via a potential argument (every move reduces the value of a global function, called the potential). Games of this nature are called potential games and they are used to model a variety of situations. Interestingly, this argument fails as soon as two or more players move at the same time.
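
As an illustration, here is a minimal Python sketch (all names and the toy two-player game are our own, not from the paper) of sequential best-response dynamics: one player moves at a time, every accepted move strictly improves the mover's payoff, and the loop stops at an equilibrium.

```python
# Minimal sketch of sequential best-response dynamics on a toy two-player
# symmetric coordination game (payoff 1 on agreement, 0 otherwise).
STRATEGIES = ('B', 'W')

def payoff(player, profile):
    """Payoff of `player` under `profile` (same for both players here)."""
    return 1 if profile[0] == profile[1] else 0

def best_response(player, profile):
    """A strategy of `player` maximizing her payoff, others kept fixed."""
    return max(STRATEGIES, key=lambda s: payoff(
        player, profile[:player] + (s,) + profile[player + 1:]))

def sequential_best_response(profile):
    """One player at a time revises her strategy; each accepted move
    increases the mover's payoff, hence decreases the potential."""
    while True:
        for player in range(len(profile)):
            new = (profile[:player] + (best_response(player, profile),)
                   + profile[player + 1:])
            if payoff(player, new) > payoff(player, profile):
                profile = new
                break          # exactly one move per step
        else:
            return profile     # no player can improve: an equilibrium

print(sequential_best_response(('B', 'W')))  # here: ('W', 'W')
```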

In this work we study a natural variant of best-response dynamics in which we relax the requirement that one player at a time moves. That is, players now become active independently with some probability, and all active players revise their strategy according to the best-response rule (or, more generally, any better-response rule). This is the same as before except that simultaneous moves are allowed. Specifically, we study the convergence time of these dynamics when players play a “local” potential game on a network: (1) each player interacts only with her neighbors, meaning that the strategies of the non-neighbors do not affect the payoff of this player, and (2) locally the game is a potential game (see Sect. 2 for more details and formal definitions).

Simple examples show that convergence is impossible if two players are always active (move all the time), and that the time to converge can be made arbitrarily long if they become active at almost every step. At the other extreme, if the probability of becoming active is too small, then the dynamics also take a long time to converge since almost all the time nothing happens. The trade-off is between having sufficiently many active players and, at the same time, not too many neighboring players moving simultaneously.

1.1 Our Contribution

We investigate how the convergence time depends on the probabilities of becoming active and on the degree of the network. This is also motivated by the search for simple dynamics that the players can easily implement without global knowledge of the network (they only need to know how many neighbors they have) and without complex reasoning (they still myopically better-respond). We first show that for the symmetric coordination game, the convergence time is polynomial whenever the probability of being active is slightly below the inverse of the maximum degree of the network (Theorem 2 and Corollary 1). This generalizes to arbitrary potential games on graphs, where every node plays a possibly different potential game with each of its neighbors, and the maximum degree is replaced by a weighted maximum degree (see Theorem 6). These results hold whenever each active player uses a better response (not necessarily the best response). Finally, we prove a lower bound saying that, in general, the probabilities of becoming active must depend on the degree, for otherwise the convergence time is exponential with high probability (Theorem 5 and Corollary 2). Note that this holds even in the simplest scenario of symmetric coordination games.

Our upper bounds can be seen as a probabilistic version of the potential argument (under certain conditions, the potential decreases in expectation at every step by some fixed amount). To the best of our knowledge, this is the first study on the convergence time of these natural variants of best-response dynamics. Prior studies (see the next section) either focus on sufficient conditions that guarantee convergence to Nash equilibria, or consider noisy best-response dynamics, whose equilibria can differ from those of best-response dynamics.

We note that the general upper bound necessarily depends on the maximum value of the potential, as these games include max-cut games which are PLS-complete [30]: for such games, no centralized algorithm for computing a Nash equilibrium in time polynomial in the number of players is known, and these games are hard precisely when the potential can assume arbitrarily large values. Obviously, one cannot hope that simple distributed dynamics do better than the best centralized procedure.

1.2 Related Work

Several works study convergence to Nash equilibria for simple variants of best-response dynamics. A first line of research concerns the ability to converge to a Nash equilibrium when the strict schedule of the moves of the players (one player at a time) is relaxed: [10] prove that any “separable” schedule guarantees convergence to a Nash equilibrium. Other works study the convergence time of specific dynamics with limited simultaneous moves: [19] introduce a “local” coordination mechanism for congestion games (which are equivalent to potential games [24]), while [15] show that with limited simultaneous moves the dynamics quickly reach a state whose cost is not too far from that of the worst Nash equilibrium; fast convergence can also be achieved in certain linear congestion games if approximate equilibria are considered [9].

Another well-studied variant of best-response dynamics is that of noisy or logit (response) dynamics [1, 6, 7], where players’ responses are probabilistic and determined by a noise parameter (as the noise tends to zero, players select best responses almost surely, while for high noise they respond at random). These dynamics turn out to behave differently from “deterministic” best-response dynamics in many respects. In the original logit dynamics of [6, 7], where one randomly chosen player moves at a time, they essentially rest on a subset of the potential minimizers. When the players’ schedule is relaxed, this property is lost and additional conditions on the game are required [1, 2, 10, 18, 26]. Our independent better-response dynamics can be seen as an analog of the independent dynamics of [1] for logit response.

Potential games on graphs (a proper subclass of potential games) are well studied because of their many applications. In physics, ferromagnetic systems are modeled as noisy best-response dynamics on lattice graphs in which every player (node) plays a coordination game with each neighbor (see, for example, [23] and Chap. 15 of [22]). The version in which the coordination game is asymmetric (i.e., coordinating on one strategy is more profitable than on another) is used to model the diffusion of new technologies [21, 25] and opinions [17] in social networks. Finally, potential games on graphs (every node plays some potential game with each neighbor) characterize the class of potential games for which the equilibria of noisy best-response dynamics with all players updating simultaneously can be “easily” computed [3]. The convergence time of best-response dynamics for games on graphs is studied in [12, 17]: among other results, [12] show that a polynomial number of steps suffices when the same game is played on all edges and the number of strategies is constant. Analogous results are proven for finite opinion games in [17]. Finally, [4] characterize the class of potential games which are also graphical games [20], where the potential can be decomposed into the sum of potentials of “maximal” cliques of an underlying graph. Graphical games have been studied in several works (see, e.g., [5, 8, 11, 28]). The class of local interaction potential games [3] is the restriction in which the potential can be decomposed into pairwise (edge) potential games. In this work we deal precisely with this class of games. Since this class includes the so-called max-cut games, which are known to be PLS-complete [30], it is considered unlikely that an equilibrium can be computed efficiently, even by a centralized procedure.

Our dynamics are similar to the \(\alpha \)-synchronous dynamics in cellular automata [16]. In particular, the case of the symmetric coordination game corresponds to the majority rule on general graphs [29] (where each cellular automaton tries to switch to the majority state of its neighbors, and stays put in case of ties). The present work can be seen as a first study of \(\alpha \)-synchronous dynamics on general graphs for the update rules that arise from best responses to potential games played with neighbors.

2 Model (Local Interaction Potential Games)

Intuitively speaking, we consider a network (graph) where each node is a player who repeatedly plays with her neighbors. We assume that a two-player potential game (defined below) is associated to each edge of the graph. Each player must play the same strategy in all the games associated to her incident edges, and her payoff is the sum of the payoffs obtained in each of these games. We also assume finite strategy sets, i.e., each player chooses her strategy from a finite set.

Symmetric Coordination Game. One of the simplest (potential) games is the symmetric coordination game, where each player chooses color B or W (for black or white) and her payoff is 1 if the players agree on their strategies, and 0 otherwise (see Fig. 1a, where the two numbers are the payoffs for the row and the column player, respectively).

Fig. 1. Examples of two-player games and potential function.

General Potential Games. In a general game, we have n players, each of whom chooses one color (strategy); the combination \(c=(c_1,\ldots ,c_n)\) of all colors gives each player u some payoff \(PAY_u(c)\). In a potential game, whenever the payoff of a single player improves by some amount, a global function P, called the potential, decreases by the same amount: for any player u and any two configurations c and \(c'\) which differ only in u’s strategy, it holds that

$$\begin{aligned} PAY_u(c') - PAY_u(c) = P(c) - P(c') . \end{aligned}$$
(1)

A configuration c is a (pure Nash) equilibrium if no player u can improve her payoff, that is, the quantity above is negative or zero for all \(c'=(c_1,\ldots ,c_u',\ldots ,c_n)\). Conversely, c is not an equilibrium if there is a player u who can improve her payoff (\(PAY_u(c') - PAY_u(c)>0\)), in which case \(c_u'\) is called a better response (to the strategies c). A best response is a better response maximizing this improvement over the possible strategies of the player. Potential games possess the following nice feature: a configuration c is an equilibrium if and only if no player can improve (i.e., decrease) the potential function by changing her current strategy. In a general (two-player) potential game the payoffs of the two players need not coincide, and the potential function is therefore not symmetric (see the example in Fig. 1c).
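
As a quick sanity check, the following sketch (our own naming) verifies identity (1) exhaustively for the symmetric coordination game of Fig. 1a, taking as potential the indicator that the two players disagree.

```python
from itertools import product

def pay(u, c):
    """Payoff in the symmetric coordination game: 1 on agreement."""
    return 1 if c[0] == c[1] else 0

def potential(c):
    """Candidate potential: 1 if the two players disagree, 0 otherwise."""
    return 0 if c[0] == c[1] else 1

# Equation (1): the payoff gain of a unilateral deviation equals the
# potential drop, for every configuration, player, and deviation.
for c in product('BW', repeat=2):
    for u in (0, 1):
        for s in 'BW':
            c2 = tuple(s if i == u else c[i] for i in range(2))
            assert pay(u, c2) - pay(u, c) == potential(c) - potential(c2)
print("identity (1) verified for all unilateral deviations")
```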

Local Interaction Potential Games [3]. In a local interaction potential game the potential function can be decomposed into the sum of two-player potentials, one for each edge of the network G:

$$\begin{aligned} P(c) = \sum _{uv \in E(G)} P_{uv}(c_u,c_v) . \end{aligned}$$
(2)

There is no edge between two players if their strategies do not affect each other’s payoffs (the corresponding potential is constant and can be ignored). This definition captures the following natural class of games on networks: each edge corresponds to some potential game, and the payoff of a player is the sum of the payoffs of the games played with her neighbors. Note that a player chooses one strategy to be played in all these games.
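
In code, the decomposition (2) is just a sum over edges; the sketch below (our own naming) evaluates the global potential when every edge carries the symmetric coordination game, whose edge potential is 1 on a conflicting edge.

```python
def global_potential(edges, edge_potential, conf):
    """Equation (2): sum of the edge potentials P_uv(c_u, c_v)."""
    return sum(edge_potential[(u, v)](conf[u], conf[v]) for (u, v) in edges)

# Symmetric coordination game on every edge of a 3-node path.
coord = lambda a, b: 0 if a == b else 1
edges = [(0, 1), (1, 2)]
conf = {0: 'B', 1: 'W', 2: 'W'}
print(global_potential(edges, {e: coord for e in edges}, conf))  # 1 conflicting edge
```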

(Independent) Better-Response Dynamics. A simple procedure for computing an equilibrium consists of repeatedly selecting one player who is currently not playing a best response and letting her play a better or best response. Every step reduces the potential by a finite amount, and therefore this procedure terminates in an equilibrium within O(M) time steps, where M is the maximum value of the potential (w.l.o.g., we assume that the potential is always non-negative and takes integer values). Here we consider the variant in which, at each time step, each player becomes active independently with some probability, and the active players who can improve their payoff change strategy accordingly:

Definition 1

In independent better-response dynamics, at each time step t players do the following:

  • Each player (node) u becomes active with some probability \(p_u^t\), which may change over time (a probability constant over time is a special case).

  • Every active player (node) revises her strategy according to a better (or best) response rule. If the current strategy is already a best response, then no change is made.

Note that all players that are active at a certain time step may change their strategies simultaneously. Thus, for example, in the symmetric coordination game of Fig. 1a the two players may move from state BW to state WB and back forever if they are both active at every step.
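
The following sketch (our own naming, anticipating Sect. 3) implements Definition 1 for the symmetric coordination game on every edge, where the best response of a node is the majority color of its neighbors. On the two-player game of Fig. 1a it cycles forever when \(p_u^t=1\) and converges quickly for, say, \(p_u^t=1/2\).

```python
import random

def unstable(u, color, adj):
    """True iff strictly more than half of u's neighbors disagree with u."""
    conflicts = sum(1 for v in adj[u] if color[v] != color[u])
    return 2 * conflicts > len(adj[u])

def independent_dynamics(adj, color, p, max_steps=10**6):
    """Independent better-response dynamics of Definition 1: at every step
    each unstable node is active with probability p(u, t), and all active
    nodes flip simultaneously.  Returns the convergence time."""
    for t in range(max_steps):
        movers = [u for u in adj if unstable(u, color, adj)]
        if not movers:
            return t                              # equilibrium reached
        active = [u for u in movers if random.random() < p(u, t)]
        for u in active:                          # simultaneous flips
            color[u] = 'B' if color[u] == 'W' else 'W'
    return max_steps                              # e.g. never converges if p = 1

adj = {0: [1], 1: [0]}                            # the game of Fig. 1a
print(independent_dynamics(adj, {0: 'B', 1: 'W'}, lambda u, t: 0.5))
```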

Generic Upper Bound. To show that the dynamics converge quickly, we show that the potential decreases in expectation at every step. To this end, we consider the probability space of all possible evolutions of the dynamics. A configuration c at a given time t is given by the colors chosen by the players at the previous time step (strategy profile) and by the values \(p_u^t\) used by the players for randomly deciding to be active at time t. The universe \(\varOmega \) is then defined as the set of all infinite sequences \(c^0,c^1,\ldots \) of configurations.

Definition 2

(\(\delta \)-improving dynamics). Dynamics are \(\delta \)-improving for a given (local interaction) potential game if, in expectation, the potential decreases by at least \(\delta \) during each time step, unless the current configuration is an equilibrium. That is, for any configuration c which is not an equilibrium, and any event \(F_c^t=\{c^0,c^1,\ldots \in \varOmega \mid c^t = c\}\) where configuration c is reached at time t, we have

$$\begin{aligned} E[P^{t+1}-P^t \mid F_c^t] \le -\delta \end{aligned}$$

where \(P^t\) denotes the potential at time t.

Standard martingale arguments imply the following (see [27] for details):

Theorem 1

The expected convergence time of any \(\delta \)-improving dynamics is \(O\left( \frac{M_0}{\delta }\right) \) where \(M_0\) is the expected potential of the game at time 0.
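
For intuition, here is a sketch of the argument (our reconstruction; the full proof is in [27]). Let T be the first time an equilibrium is reached and set \(Y_t := P^t + \delta \min (t,T)\). For \(t<T\), the \(\delta \)-improving property gives

$$\begin{aligned} E[Y_{t+1} \mid F_c^t] \le (P^t - \delta ) + \delta (\min (t,T)+1) = Y_t , \end{aligned}$$

while for \(t\ge T\) nothing changes, so \((Y_t)\) is a supermartingale. Since \(P^t\ge 0\), we get \(\delta \, E[\min (t,T)] \le E[Y_t] \le E[Y_0] = M_0\), and letting \(t\rightarrow \infty \) yields \(E[T]\le M_0/\delta \).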

3 Networks with Symmetric Coordination Games

We first consider the scenario in which every edge of the network carries the symmetric coordination game of Fig. 1a. The nodes of a graph G (players) can choose between two colors B and W and are rewarded according to the number of neighbors with the same color. We are thus considering the dynamics in which nodes attempt to adopt the majority color of their neighbors: every active node changes its color if more than half of its neighbors have the other color.

In order to analyze the convergence time of these dynamics, we shall relate the probabilities of being active to the number of neighbors having a different color. We say that u is unstable at time t if more than half of its neighbors have the other color, that is,

$$\begin{aligned} dc_u^t>\frac{1}{2}\delta _u \end{aligned}$$

where \(\delta _u\) is the degree of u and \(dc_u^t\) is the number of neighbors of u that have a color different from the color of u at time t. By definition, the dynamics converge if no node is unstable. Note that \(dc_u^t\le \delta _u\le \varDelta \) and \(\varDelta _u\le \varDelta \), where \(\varDelta =\max _{u\in V(G)}\delta _u\) is the maximum degree of the graph, and \(\varDelta _u=\max _{uv\in E(G)}\delta _v\) is the maximum degree in the neighborhood of u.

For the case of symmetric coordination games, the potential of a configuration is the number of edges whose endpoints have different colors: an edge uv is said to be conflicting in configuration c if u and v have different colors. Therefore the potential is at most the number m of edges.

Theorem 2

Fix some real values \(p,q\in (0,1)\). If we have \(p_u^t \in [\frac{p}{\varDelta }, \frac{q}{\varDelta _u}]\) for all ut in a symmetric coordination game, then the expected convergence time is \(O\left( \frac{\varDelta m_0}{p(1-q)}\right) \) where \(m_0\) is the initial number of conflicting edges, \(\varDelta \) is the maximum degree, and \(\varDelta _u\) is the maximum degree in the neighborhood of u.

As an immediate corollary, we have the following result for the case in which all nodes are active with the same probability p.

Corollary 1

If all unstable nodes are active with probability \(p<\frac{1-\varepsilon }{\varDelta }\) for \(\varepsilon > 0\), then the dynamics converge to a stable state in \(O(\frac{m_0}{p\varepsilon })\) expected time.
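
As a usage example (a sketch reusing the hypothetical independent_dynamics and unstable helpers from Sect. 2), Corollary 1 can be observed empirically with a uniform activation probability \(p=(1-\varepsilon )/\varDelta \):

```python
import random

random.seed(0)
n, d = 200, 10
adj = {u: set() for u in range(n)}          # random graph, about d neighbors each
for u in range(n):
    for v in random.sample([w for w in range(n) if w != u], d):
        adj[u].add(v); adj[v].add(u)
adj = {u: list(vs) for u, vs in adj.items()}

color = {u: random.choice('BW') for u in range(n)}
Delta = max(len(adj[u]) for u in adj)
eps = 0.5
p = (1 - eps) / Delta                       # as in Corollary 1
print("steps to converge:", independent_dynamics(adj, color, lambda u, t: p))
```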

Theorem 2 follows from the next lemma combined with Theorem 1.

Lemma 1

Any dynamics satisfying the hypothesis of Theorem 2 are \(\delta \)-improving for \(\delta =p(1 - q)/\varDelta \).

Proof

Consider the event \(F_c^t\) where a configuration c is reached at time t. Let \(C^t\) denote the number of conflicting edges in c and \(U^t\) the set of unstable nodes at time t. Recall that the number of conflicting edges equals the potential, that is, \(P^t=C^t\). We now express \(E[C^{t+1}-C^{t} \mid F_c^t]\) as a function of the values \(\left\{ {p_u^t \mid u\in V(G)}\right\} \) associated to c.

For that purpose, we first analyze the probability that any given edge of c is conflicting after the random choices made at time t. We distinguish the following types of edges. Let \(S_1\) (resp. \(S_2\)) denote the set of edges of c whose endpoints have the same color and with one unstable extremity (resp. two). Similarly, let \(C_1\) (resp. \(C_2\)) denote the set of edges of c with conflicting colors and one unstable extremity (resp. two). Note that \(C^t = |C_1| + |C_2|\). A conflicting edge uv becomes non-conflicting if exactly one extremity changes its color, and a non-conflicting edge uv becomes conflicting if exactly one extremity changes its color. By independence of the choices, this happens in both cases with probability \(p_{uv}^t=p_u^t(1-p_v^t) + (1-p_u^t)p_v^t\) if both u and v are unstable, and with probability \(p_u^t\) if u is unstable and v is not. By linearity of expectation, we then obtain:

$$\begin{aligned} E[C^{t+1}-C^t\mid F_c^t] = \sum _{uv\in S_1}p_u^t + \sum _{uv\in S_2}p_{uv}^t - \sum _{uv\in C_1}p_u^t - \sum _{uv\in C_2}p_{uv}^t . \end{aligned}$$
(3)

(When we write \(uv\in C_1\) (resp. \(uv\in S_1\)), we assume that u is unstable and v is not.) By definition, each unstable node u sees more conflicting edges than non-conflicting ones, which implies \( 1 + \sum _{v|uv\in S_1}1 + \sum _{v|uv\in S_2} 1 \le \sum _{v|uv\in C_1}1 + \sum _{v|uv\in C_2} 1. \) Multiplying by \(p_u^t\) and summing over all unstable nodes, we obtain:

$$\begin{aligned} \sum _{u\in U^t}p_u^t + \sum _{uv\in S_1}p_u^t + \sum _{uv\in S_2}(p_u^t+p_v^t) \le \sum _{uv\in C_1}p_u^t + \sum _{uv\in C_2}(p_u^t+p_v^t) . \end{aligned}$$
(4)

As \(p_{uv}^t=p_u^t+p_v^t-2p_u^tp_v^t\), we deduce from (3) and (4):

$$\begin{aligned} E[C^{t+1}-C^t \mid F_c^t] \le \sum _{uv\in C_2} 2p_u^tp_v^t - \sum _{u\in U^t} p_u^t . \end{aligned}$$
(5)

Since every edge \(uv\in C_2\) has both endpoints in \(U^t\), we can rewrite (5) as

$$\begin{aligned} E[C^{t+1}-C^t \mid F_c^t] \le \sum _{u\in U^t} p_u^t \Big (-1 + \sum _{v | uv \in C_2} p_v^t\Big ) . \end{aligned}$$

Using \(p_v^t \le \frac{q}{\varDelta _v} \le \frac{q}{\delta _u}\) and \(p_u^t \ge \frac{p}{\varDelta }\), we obtain \( E[C^{t+1}-C^t \mid F_c^t] \le \sum _{u\in U^t} \frac{p}{\varDelta } (-1 + q) = - p(1-q) \frac{|U^t|}{\varDelta }, \) which is at most \(-p(1-q)/\varDelta \) since \(U^t\ne \emptyset \) whenever c is not an equilibrium. This completes the proof.    \(\square \)

Adaptive Probabilities. The upper bound of Theorem 2 can be improved if nodes are aware of the number of neighbors that are willing to change strategy (unstable) and set their own probability of changing accordingly. More precisely, one can think of unstable nodes announcing to their neighbors that they would like to switch to the other color before actually doing so. Then, each unstable node switches with a probability inversely proportional to the number of unstable neighbors. The following theorem shows that this yields an improved upper bound on the convergence time.

Theorem 3

Fix some real values \(p,q\in \left( 0,\frac{1}{2}\right) \). If we have \(p_u^t\in [\frac{p}{d_u^t+1}, \frac{q}{d_u^t+1}]\) for all ut in a symmetric coordination game, where \(d_u^t\) is the number of conflicting unstable neighbors of u, then the expected convergence time is \(O\left( \frac{m_0}{p(1-2q)}\right) \) where \(m_0\) is the initial number of conflicting edges.

To prove this theorem we adapt the proof of Lemma 1 and show that these dynamics are \(\delta \)-improving for \(\delta =p(1-2q)\) (see [27] for details).
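
In code, the adaptive rule can be sketched as follows (our own construction on top of the hypothetical helpers above); note that the closure sees the colors as they evolve, so \(d_u^t\) is recomputed at every step.

```python
def adaptive_prob(u, color, adj, alpha=0.25):
    """p_u^t = alpha / (d_u^t + 1), with d_u^t the number of conflicting
    unstable neighbors of u and alpha in (0, 1/2), as in Theorem 3."""
    d = sum(1 for v in adj[u]
            if color[v] != color[u] and unstable(v, color, adj))
    return alpha / (d + 1)

# Plug into the dynamics of Sect. 2 (color is mutated in place):
# independent_dynamics(adj, color, lambda u, t: adaptive_prob(u, color, adj))
```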

Fully Local Dynamics. Theorem 2 requires each node to know a bound on the maximum degree of the graph, or on the local maximum degree in her neighborhood, in order to set \(p_u^t\). Theorem 3 requires knowledge of the number of conflicting unstable neighbors at each time step. We next consider dynamics that are fully local, in the sense that each node u can set the probabilities \(p_u^t\) by looking only at her own degree.

Theorem 4

Fix some real values \(p,q\in \left( 0,\frac{1}{2}\right) \). If we have \(p_u^t\in [\frac{p}{\delta _u}, \frac{q}{\delta _u}]\) for all ut in a symmetric coordination game, where \(\delta _u\) is the degree of u, then the expected convergence time is \(O\left( \frac{\varDelta m_0}{p(1-2q)}\right) \) where \(m_0\) is the initial number of conflicting edges.

The proof of this theorem is similar to those of Theorems 2 and 3 (see [27]).

Tightness of the Results. Consider the following network composed of a clique and \(r/2+1\) paths, for even r (see the figure below). Each node of a path is connected to all nodes of the path to its left (or of the clique, for the first path) and of the path to its right, as depicted by the demi-edges, whose degree indications refer to the previous and the next part of the construction. Below each part, we indicate the number of nodes in the part.

[Figure: the clique followed by \(r/2+1\) paths, with demi-edges and node counts for each part.]

Intuitively, the construction is such that the process proceeds from left to right: the nodes of a given path become unstable only after all nodes of the previous path have become black; moreover, inside each path the process is also sequential, i.e., the path becomes black from its extremities to its center. These observations imply that any dynamics in which nodes become active with probability \(p\simeq \alpha \) require \(\varOmega (r^2/\alpha )=\varOmega (n/\alpha )\) steps.

Since every node has degree \(\varTheta (r)=\varTheta (\sqrt{n})=\varTheta (\varDelta )\), and the initial configuration has \(m_0=\varTheta (r^2)=\varTheta (n)\) conflicting edges (those between the clique and the first path), non-adaptive dynamics take \(\varTheta (\varDelta m_0) = \varTheta (n^{3/2})\) time steps. In contrast, adaptive dynamics take \(\varTheta (m_0)=\varTheta (n)\) steps, since the number \(d_u^t\) of unstable conflicting neighbors of each node u is at most 1. Therefore, the analysis of Theorems 2, 3, and 4 is tight. Moreover, adaptive dynamics are provably faster than non-adaptive ones.

4 An Exponential Lower Bound When the Degree Is Unbounded

In this section we prove a lower bound for the case of the symmetric coordination game on each edge and dynamics with constant probabilities, that is, the case in which every node becomes active with some probability p that depends neither on the graph nor on the time, and is the same for all nodes.

Theorem 5

For every \(p>0\), there are starting configurations of the complete bipartite graph where the expected number of steps to converge to an equilibrium is exponential in the number of nodes.

Proof Idea. Consider the continuous version of the problem in which, instead of a bipartite graph with n nodes on each side, we imagine L and R being two continuous intervals (see the figure below). Start from a configuration in which a fraction \(\alpha >1/2\) of the players in L has color W and the same fraction in R has the other color B. Suppose that \( \alpha = \frac{1}{2-p}. \) Then after one step the system reaches the mirror configuration, that is, a fraction \(\alpha \) of the nodes in L has color B and the same fraction in R has color W. Indeed, the fraction \(\beta \) of players with color B in L after one step is precisely \( \beta = 1 - \alpha + p \cdot \alpha = \frac{1-p}{2-p} + \frac{p}{2-p} =\alpha . \)

[Figure: the two sides L and R, with a fraction \(\alpha \) of L colored W and a fraction \(\alpha \) of R colored B.]
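
A quick numerical sketch of this cycling (our own code, independent of the helpers above): on \(K_{n,n}\) all nodes of one side share the same best response, namely the majority color of the other side, and the two sides keep swapping roles.

```python
import random

random.seed(1)
n, p = 1000, 0.3
alpha = 1 / (2 - p)                     # fixed point of the one-step map
L = ['W' if i < alpha * n else 'B' for i in range(n)]
R = ['B' if i < alpha * n else 'W' for i in range(n)]

for t in range(8):
    majL = 'B' if R.count('B') > n / 2 else 'W'   # best response on side L
    majR = 'B' if L.count('B') > n / 2 else 'W'   # best response on side R
    L = [majL if c != majL and random.random() < p else c for c in L]
    R = [majR if c != majR and random.random() < p else c for c in R]
    # the B-fractions of the two sides keep swapping around alpha and 1 - alpha
    print(t, round(L.count('B') / n, 3), round(R.count('B') / n, 3))
```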

We next prove the theorem via Chernoff bounds. For \(\epsilon =p/3\), consider the interval \( around(\alpha ) := [(1-\epsilon )\alpha , (1+\epsilon )\alpha ] \) and let CYCLE(t) be the following event:

\(CYCLE(t):=\{\)At time t a fraction \(\alpha _L\in around(\alpha )\) of the nodes in L has some color c, and a fraction \(\alpha _R\in around(\alpha )\) of the nodes in R has the other color \(\overline{c}\) (where \(\overline{B}=W\) and \(\overline{W}=B\)).\(\}\)

We say that the configuration is balanced at time t when CYCLE(t) holds. Since \(\epsilon <p/2\) we have \((1-\epsilon )\alpha >1/2\), and thus the best response of every (active) node in a balanced configuration is to switch color (since both \(\alpha _L n\) and \(\alpha _R n\) are strictly larger than n/2). Chernoff bounds guarantee that, with high probability, sufficiently many nodes are activated and switch, so that the mirror balanced configuration is reached (see [27] for the proof of the next lemma):

Lemma 2

For any t, it holds that \( P[CYCLE(t+1)| \ CYCLE(t)] \ge 1 - 4\exp \left( -\frac{\delta ^2}{3}\mu \right) , \) where \(\delta =\frac{\epsilon }{1+\epsilon }\) and \(\mu = p(1+\epsilon )\alpha n\) with \(\epsilon =p/3\).

The above lemma implies that, starting from a balanced configuration, the probability of not reaching an equilibrium within t steps is at least \((1-q)^{t-1}\), where \(q=4\exp (-\frac{\delta ^2}{3}\mu )\). The expected time to converge is thus at least \(\sum _{t\ge 1}(1-q)^{t-1}=1/q\), which proves the theorem. Simple calculations lead to the following result (see [27] for details):

Corollary 2

Starting from any balanced configuration, the expected number of steps to converge to an equilibrium in the complete bipartite graph is \(e^{\varOmega (n^{1-3c})}\), as long as \(p\ge 1/n^c\) with \(0 \le c<1/3\).

5 General Local Interaction Potential Games

In this section we extend the upper bound of Theorem 2 to general local interaction potential games: each edge uv of G is associated with a (two-player) potential game with potential \(P_{uv}\). Without loss of generality, we assume that each \(P_{uv}\) takes non-negative integer values. The upper bound is given in terms of the following quantity:

$$\begin{aligned} \varDelta _P := \max _u \sum _{v\in N(u)} \varDelta _{P_{uv}} , \end{aligned}$$
(6)

where \(\varDelta _{P_{uv}}\) denotes the maximum value of \(P_{uv}\). Note that for symmetric coordination games, \(\varDelta _P\) is simply the maximum degree \(\varDelta \) of the graph.
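In code (a sketch with our own naming), \(\varDelta _P\) from (6) is a weighted maximum degree in which each edge is weighted by the maximum of its edge potential:

```python
def weighted_max_degree(adj, max_edge_pot):
    """Delta_P of eq. (6): max over nodes u of the sum, over neighbors v,
    of the maximum value of the edge potential P_uv."""
    return max(sum(max_edge_pot[frozenset((u, v))] for v in adj[u])
               for u in adj)

# Symmetric coordination game on every edge: max P_uv = 1, so Delta_P = Delta.
adj = {0: [1, 2], 1: [0], 2: [0]}
pots = {frozenset((0, 1)): 1, frozenset((0, 2)): 1}
print(weighted_max_degree(adj, pots))  # 2 (the maximum degree)
```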

Theorem 6

For any \(p,q\in (0,1/2)\), if we have \(p_u^t \in [\frac{p}{\varDelta _P}, \frac{q}{\varDelta _P}]\) for all u and t and for \(\varDelta _P\) defined as in (6) in a general local interaction potential game, then the expected convergence time is \(O\left( \frac{n\varDelta _P^2}{p(1-2q)}\right) \).

Since local interaction potential games include max-cut games, which are notoriously PLS-complete [30], one cannot hope for a convergence time that is polynomial and independent of \(\varDelta _P\) in general. Local interaction potential games also include finite opinion games [17], for which \( 16\varDelta \le \varDelta _P \le 16(\varDelta +1) \), where \(\varDelta \) is the maximum degree of the underlying graph (details in [27]). Theorem 6 thus implies:

Corollary 3

In finite opinion games on networks of maximum degree \(\varDelta \), the expected convergence time of independent better-response dynamics is \(O(n \varDelta ^2)\) whenever \(p_u^t = \frac{\alpha }{\varDelta }\) for some \(\alpha \in [\frac{p}{16}, \frac{q}{16}]\) with \(p,q\in (0,1/2)\).

6 Conclusion

This work provides bounds on the time to converge to a (pure Nash) equilibrium when players are active independently with some probability and better- or best-respond to each other’s current strategies. Our study focuses on a natural (sub)class of potential games, namely, local interaction potential games. The bounds suggest that the time to converge to an equilibrium must depend on the degree of the nodes in the underlying network (cf. Theorems 2 and 6 and Corollary 2).

Since our bounds hold for local interaction potential games, it would be interesting to investigate whether analogous results hold for general potential games; relevant here are the notion of graphical games [20] and the results in [4]. It would also be interesting to sharpen some of our bounds, to show that \(p \simeq 1/\varDelta \) is essentially the threshold between fast and slow convergence, and to investigate the range \(p\in [1/n, 1/n^{1/3}]\) (cf. Theorem 2 and Corollary 2).