
1 Introduction

Polymatrix games [25] are well-known games that have been deeply investigated for decades. They are multi-player games and belong to the class of graphical games [26], since player interactions can be represented by an interaction graph. In this graph, nodes correspond to players, while edges correspond to bimatrix games played by the agents at their endpoints.

Each player chooses a pure strategy from a finite set, which she will play in all the binary games she is involved in. In the subclass of polymatrix coordination games [30], the interaction graph is undirected since the outcome of a binary game is the same for both players. An extension of this classic model is presented in [2], where the utility of an agent x depends not only on the games (edges) in which x is involved but also on the games (edges) played by agents that are at most a distance of d away from x.

In this paper, we present and study a new, more general model, generalized distance polymatrix games, where each local game can concern more than two players, and the utility of an agent x can also depend on the games at a distance bounded by d. In this new model, the interaction graph becomes an undirected hypergraph, where every hyperedge corresponds to a game played by the players it contains. Following the idea proposed in [2], the utility of an agent x is the sum of the outcomes of the games she plays plus a fraction of the outcomes of the games played by other players at a distance at most d from x. Each agent x also gets an additional payoff that is a function of her chosen strategy. Our new model is interesting both from a theoretical and a practical point of view since it is able to represent scenarios that previous models did not cover.

On the one hand, extending a local game to more than two players is reasonable because, in many natural social environments (e.g., economics, politics, sports, academia, etc.), people get a payoff from activities that involve more than two players.

As an example, in a scientific community, a project or a paper often involves more than two researchers, and its outcome depends on the choice made by each person. This can be modelled by using a hyperedge for each project/paper, with a weight (payoff) depending on the participants’ strategies.

On the other hand, any individual also gets a benefit, albeit to a smaller degree, when her close colleagues succeed in some project or publish a quality paper that she is not personally a part of. This is quite obvious when considering the student-advisor relationship, but also noticeable for the collective evaluation and reputation of the department or institution where the researchers are working, for instance, in terms of increased assignment of resources and positions. We can model these indirect relationships by introducing distances and discount factors.

Our framework can also be used to describe the interactions within a local economy. It is well known that the businesses of a small town or of an area of a city closely depend on each other. In many cases, the interaction is positive: if a business grows, it attracts customers and also positively influences the nearby ones. When small businesses are spread throughout an area, people are more likely to move from one business to the next. An example is provided in Sect. 3.

Once our new model is formalized, we provide some conditions for the existence of \(\beta \)-approximate k-strong Nash equilibria. Then we focus on the degradation of the social welfare when up to k players can simultaneously change their strategies. We analyze the Price of Anarchy and the Price of Stability for \(\beta \)-approximate k-strong Nash equilibria, determining tight lower and upper bounds.

1.1 Related Work

Polymatrix games were introduced more than forty years ago [25] and have since received considerable attention in the scientific literature, as they are a very general model that can be applied to many different real-world scenarios and can be used to derive several relevant games (e.g., hedonic games [17], max-cut [22]). Some seminal papers on the topic are [18, 23, 24, 27], and more recent studies are [4, 11, 16, 30], where the authors showed results mainly concerning equilibria and their computational issues.

Our model is related to polymatrix coordination games [4, 30] and the more recent distance polymatrix coordination games [2], where the authors introduced the idea of distances. Polymatrix coordination games are, in turn, an extension of a previously introduced model that did not include individual preferences [12]. Some preliminary results can be found in [1].

Our studies are also related to (symmetric) additively separable hedonic games [17] and hypergraph hedonic games [3], where there are no individual preferences and the weights of the interaction graph do not depend on the players’ strategies, except that they are null whenever the agents involved play different strategies. Our model can also be seen as a generalization of hypergraph hedonic games [3], as it introduces preferences and increases the expressiveness of the weight function (also allowing weights related to different strategies to be non-null).

Pure Nash equilibria have been studied for synchronization games [31], which generalize polymatrix coordination games to hypergraphs. However, that work neither investigates the degradation of the social welfare nor considers distances.

Another closely related model is the group activity selection problem [10, 14, 15], which is positioned somewhat between polymatrix coordination games and hedonic games.

Our model is also related to the social context games [5, 8], where the players’ utilities are computed from the payoffs based on the neighbourhood graph and an aggregation function. Differently, we take into account more than just the neighbourhood of an agent, we let a player’s preference affect only her own utility, and we extend the payoffs of local games to more than two agents.

The idea of obtaining utility from non-neighbouring players has also been analyzed for distance hedonic games [21], a variant of hedonic games that is not additively separable since payoffs also depend on the size of the coalitions. They generalize fractional hedonic games [6, 9, 13, 19, 28] in the same way that distance polymatrix games and our model generalize polymatrix games.

Some negative results for our problem can be inherited from additively separable hedonic games. For instance, computing a Nash stable outcome is PLS-complete [12], while the problems of finding an optimal solution and determining the existence of a core stable, strict core stable, Nash stable, or individually stable outcome are all NP-hard [7].

2 Our Contribution

After formalizing our new model, we analyze the existence of \(\beta \)-approximate k-strong equilibria and investigate the degradation of social welfare when a deviation from the current strategy profile can involve up to k agents. In particular, we compute tight bounds on the resulting Price of Anarchy and Price of Stability. To the best of our knowledge, there are no previous results of this kind in the literature that would apply to our model.

In particular, in Sect. 4, we analyze the existence of \(\beta \)-approximate k-strong equilibria. In Sect. 5, we provide tight bounds on the Price of Anarchy for general hypergraphs. In Sect. 6, we prove a suitable lower bound on the Price of Stability for general hypergraphs, which is asymptotically equivalent to the upper bound of the Price of Anarchy when \(\beta =1\), meaning that the inefficiency of 1-approximate k-strong equilibria is fully characterized. Finally, in Sect. 7, we give upper and lower bounds for bounded-degree hypergraphs, with the gap being reasonably small. Some of our results are summarized in Table 1. Due to space constraints, some proofs are only sketched or omitted, while all the details are deferred to the full version.

Table 1. Summary of some of our results, where UB and LB stand for the upper and lower bound, respectively. Furthermore, \(\varDelta \) and r denote the maximal vertex degree and the maximum arity in the bounded-degree case, respectively, and \(\alpha _h, h\in [d]\), is the discounting factor for hyperedges at distance \(h-1\). The arrows denote that a result follows from an adjacent result in the table. The question mark stands for an open problem.

3 Preliminaries

Given two integers \(r\ge 1\) and \(n\ge 1\), let \([n]=\{1,2,\ldots , n\}\) and \((n)_r:=n\cdot (n-1)\cdot \ldots \cdot (n-r+1)\) be the falling factorial. A weighted hypergraph is a triple \(\mathcal {H}= (V,E,w)\) consisting of a finite set \(V=[n]\) of nodes, a collection \(E\subseteq 2^V\) of hyperedges, and a weight \(w:E\rightarrow \mathbb {R}\) associating a real value w(e) with each hyperedge \(e\in E\). For simplicity, when referring to weighted hypergraphs, we omit the term weighted.

The arity of a hyperedge e is its size |e|. An r-hypergraph is a hypergraph such that the arity of each hyperedge is at most r, where \(2\le r\le n\). A complete r-hypergraph is a hypergraph \((V,E,w)\) such that \(E:=\{U\subseteq V:|U|\le r\}\). A uniform r-hypergraph is a hypergraph such that the arity of each hyperedge is exactly r. An undirected graph is a uniform 2-hypergraph. A hypergraph is said to be \(\varDelta \)-regular if each of its nodes is contained in exactly \(\varDelta \) hyperedges. It is said to be linear if any two of its hyperedges share at most one node. A hypergraph is called a hypertree if it admits a host graph T that is a tree. Given two distinct nodes u and v in a hypergraph \(\mathcal {H}\), a \(u-v\) simple path of length l in \(\mathcal {H}\) is a sequence of distinct hyperedges \((e_1,\ldots ,e_l)\) of \(\mathcal {H}\) such that \(u\in e_1\), \(v\in e_l\), \(e_i\cap e_{i+1}\ne \emptyset \) for every \(i\in [l-1]\), and \(e_i\cap e_j=\emptyset \) whenever \(j>i+1\). The distance from u to v, \(d(u,v)\), is the length of a shortest \(u-v\) simple path in \(\mathcal {H}\). A cycle in a hypergraph \(\mathcal {H}\) is defined as a simple path \((e_1,\ldots ,e_l)\), with the further condition that \(e_1\cap e_l\ne \emptyset \) (that is, the first and the last hyperedge of the path must intersect, while in a simple path they are disjoint). This definition of cycle is originally due to Berge, and it can also be stated as an alternating sequence \(v_1,e_1,v_2,\ldots ,v_l,e_l\) of distinct vertices \(v_i\) and distinct hyperedges \(e_i\) such that each \(e_i\) contains both \(v_i\) and \(v_{i+1}\), where \(v_{l+1}=v_1\). The girth of a hypergraph is the length of a shortest cycle it contains.

Generalized Distance Polymatrix Games. A generalized distance polymatrix game (or GDPG) \(\mathcal {G}=(\mathcal {H},(\varSigma _x)_{x\in V},(w_e)_{e\in E},(p_x)_{x\in V},(\alpha _h)_{h\in [d]}),\) is a game based on an r-hypergraph \(\mathcal {H}\), and defined as follows:

  • Agents: The set of agents is \(V=[n]\), i.e., each node corresponds to an agent. We reasonably assume that \(n\ge r \ge 2\).

  • Strategy profile or outcome: For any \(x\in V\), \(\varSigma _x\) is a finite set of strategies of player x. A strategy profile or outcome \(\boldsymbol{\sigma }=(\sigma _1,\ldots , \sigma _n)\) is a configuration in which each player \(x\in V\) plays strategy \(\sigma _x\in \varSigma _x\).

  • Weight function: For any hyperedge \(e\in E\), let \(w_e:\times _{x \in e} \varSigma _x \rightarrow \mathbb {R}_{\ge 0}\) be the weight function that assigns, to each combination of strategies \(\sigma _e\) played by the players in e, a weight \(w_e(\sigma _e)\ge 0\). In what follows, for the sake of brevity, given any strategy profile \(\boldsymbol{\sigma }\), we will often denote \(w_e(\sigma _e)\) simply as \(w_e(\boldsymbol{\sigma })\).

  • Preference function: For any \(x\in V\), let \(p_x:\varSigma _x\rightarrow \mathbb {R}_{\ge 0}\) be the player-preference function that assigns, to each strategy \(\sigma _x\) played by player x, a non-negative real value \(p_x(\sigma _x)\), called player-preference. In what follows, for the sake of brevity, given any strategy profile \(\boldsymbol{\sigma }\), we will often denote \(p_x(\sigma _x)\) simply as \(p_x(\boldsymbol{\sigma })\).

  • Distance-factors sequence: Let \((\alpha _h)_{h\in [d]}\) be the distance-factors sequence of the game, that is a non-negative sequence of real parameters, called distance-factors, such that \(1= \alpha _1\ge \alpha _2 \ge \ldots \ge \alpha _{d}\ge 0\).

  • Utility function: For any \(h\in [d]\), let \(E_h(x)\) be the set of hyperedges e such that the minimum distance between x and a player \(v\in e\) is exactly \(h-1\). Then, for any \(x\in V \), the utility function \(u_x:\times _{y\in V}\varSigma _y\rightarrow \mathbb {R}\) of player x, for any strategy profile \(\boldsymbol{\sigma }\), is defined as \(u_x(\boldsymbol{\sigma }):=p_x(\boldsymbol{\sigma })+\sum _{h\in [d]}\alpha _h\sum _{e\in E_h(x)}w_e(\boldsymbol{\sigma }).\)

The social welfare \(\textsf{SW}(\boldsymbol{\sigma })\) of a strategy profile \(\boldsymbol{\sigma }\) is defined as the sum of all the agents’ utilities in \(\boldsymbol{\sigma }\), i.e., \(\textsf{SW}(\boldsymbol{\sigma }):=\sum _{x\in V}u_x(\boldsymbol{\sigma })\). A social optimum of game \(\mathcal {G}\) is a strategy profile \(\boldsymbol{\sigma }^*\) that maximizes the social welfare. We denote by \(\mathsf{OPT}(\mathcal {G})=\textsf{SW}(\boldsymbol{\sigma }^*)\) the corresponding value.
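To make the definitions above concrete, we sketch below a minimal Python implementation (ours and purely illustrative: the data layout and function names are assumptions, not part of the model) that computes utilities and the social welfare of a strategy profile. Node distances are obtained by breadth-first search on the co-occurrence graph, where two players are adjacent iff they share a hyperedge, which for shortest paths coincides with the hyperedge-path distance defined above.

from collections import deque

def node_distances(n, hyperedges, x):
    # BFS distances from player x on the co-occurrence graph
    # (two players are adjacent iff they share a hyperedge).
    adj = {v: set() for v in range(n)}
    for e in hyperedges:
        for u in e:
            adj[u] |= set(e) - {u}
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def utility(x, sigma, n, hyperedges, weights, prefs, alpha):
    # u_x(sigma) = p_x(sigma_x) + sum_{h in [d]} alpha_h * sum_{e in E_h(x)} w_e(sigma).
    d = len(alpha)                       # alpha = (alpha_1, ..., alpha_d), alpha_1 = 1
    dist = node_distances(n, hyperedges, x)
    total = prefs[x](sigma[x])
    for e, w in zip(hyperedges, weights):
        h = min(dist.get(v, d + 1) for v in e) + 1   # e belongs to E_h(x)
        if h <= d:
            total += alpha[h - 1] * w(tuple(sigma[v] for v in sorted(e)))
    return total

def social_welfare(sigma, n, hyperedges, weights, prefs, alpha):
    # SW(sigma) is the sum of all players' utilities.
    return sum(utility(x, sigma, n, hyperedges, weights, prefs, alpha)
               for x in range(n))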

\(\beta \)-approximate k-strong Nash Equilibrium. Given two strategy profiles \(\boldsymbol{\sigma }=(\sigma _1,\ldots , \sigma _n)\) and \(\boldsymbol{\sigma }^*=(\sigma _1^*,\ldots , \sigma _n^*)\), and a subset \(Z\subseteq V\), let \(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*\) be the strategy profile \(\boldsymbol{\sigma }'=(\sigma _1',\ldots , \sigma _n')\) such that \(\sigma _x'=\sigma _x^*\) if \(x\in Z\), and \(\sigma _x'=\sigma _x\) otherwise. Given \(k\ge 1\), a strategy profile \(\boldsymbol{\sigma }\) is a \(\beta \)-approximate k-strong Nash equilibrium (or (\(\beta ,k\))-equilibrium) of \(\mathcal {G}\) if, for any strategy profile \(\boldsymbol{\sigma }^*\) and any \(Z\subseteq V\) such that \(|Z|\le k\), there exists \(x\in Z\) such that \(\beta u_x(\boldsymbol{\sigma })\ge u_x(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*)\). We say that a player x \(\beta \)-improves from a deviation \(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*\) if \(\beta u_x(\boldsymbol{\sigma })<u_x(\boldsymbol{\sigma }')\). Informally, \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium if, for any coalition of at most k players deviating, there exists at least one player in the coalition that does not \(\beta \)-improve her utility by deviating. We denote the (possibly empty) set of \((\beta ,k)\)-equilibria of \(\mathcal {G}\) by \(\textsf{NE}^{\beta }_k(\mathcal {G})\). Clearly, if \(\beta =1\), \(\textsf{NE}^{\beta }_k(\mathcal {G})\) contains all the k-strong equilibria, and when \(\beta =1\) and \(k=1\), it contains the classic Nash equilibria.
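As a companion to this definition, the following brute-force check (again only an illustrative sketch, feasible for small instances) verifies the \((\beta ,k)\)-equilibrium condition by enumerating all coalitions of size at most k and all their joint deviations; it assumes a callable utility(x, sigma) returning \(u_x(\boldsymbol{\sigma })\), e.g., a closure over the sketch given in Sect. 3.

from itertools import combinations, product

def is_beta_k_equilibrium(sigma, strategy_sets, utility, beta=1.0, k=1):
    # sigma is a (beta, k)-equilibrium iff for every coalition Z with |Z| <= k
    # and every joint deviation of Z, some member of Z does not beta-improve.
    n = len(sigma)
    base = [utility(x, sigma) for x in range(n)]
    for size in range(1, k + 1):
        for Z in combinations(range(n), size):
            for joint in product(*(strategy_sets[x] for x in Z)):
                deviated = list(sigma)
                for x, s in zip(Z, joint):
                    deviated[x] = s
                if all(utility(x, deviated) > beta * base[x] for x in Z):
                    return False         # every member of Z beta-improves
    return True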

\((\beta ,k)\)-Price of Anarchy (PoA) and \((\beta ,k)\)-Price of Stability (PoS). The \((\beta ,k)\)-Price of Anarchy of a game \(\mathcal {G}\) is defined as \(\textsf{PoA}^{\beta }_k(\mathcal {G}):=\max _{\boldsymbol{\sigma }\in \textsf{NE}^{\beta }_k(\mathcal {G})}\frac{\mathsf{OPT}(\mathcal {G})}{\textsf{SW}(\boldsymbol{\sigma })}\), i.e., it is the worst-case ratio between the optimal social welfare and the social welfare of a \((\beta ,k)\)-equilibrium. The \((\beta ,k)\)-Price of Stability of game \(\mathcal {G}\) is defined as \(\textsf{PoS}^{\beta }_k(\mathcal {G}):=\min _{\boldsymbol{\sigma }\in \textsf{NE}^{\beta }_k(\mathcal {G})}\frac{\mathsf{OPT}(\mathcal {G})}{\textsf{SW}(\boldsymbol{\sigma })}\), i.e., it is the best-case ratio between the optimal social welfare and the social welfare of a \((\beta ,k)\)-equilibrium. Clearly, \(\textsf{PoS}^{\beta }_k(\mathcal {G})\le \textsf{PoA}^{\beta }_k(\mathcal {G})\), while neither quantity is defined if \(\textsf{NE}^{\beta }_k(\mathcal {G})=\emptyset \).
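For tiny instances, both measures can be computed exhaustively; the sketch below (illustrative only) assumes the is_beta_k_equilibrium check above and a social_welfare callable taking a profile, and returns None for both measures when \(\textsf{NE}^{\beta }_k(\mathcal {G})=\emptyset \).

from itertools import product

def poa_pos(strategy_sets, utility, social_welfare, beta=1.0, k=1):
    # Exhaustive (beta, k)-PoA and (beta, k)-PoS of a small game: enumerate all
    # outcomes, keep the (beta, k)-equilibria, and compare with the optimum.
    outcomes = [list(s) for s in product(*strategy_sets)]
    opt = max(social_welfare(s) for s in outcomes)
    eq_welfare = [social_welfare(s) for s in outcomes
                  if is_beta_k_equilibrium(s, strategy_sets, utility, beta, k)]
    if not eq_welfare:
        return None, None                # both measures are undefined
    return opt / min(eq_welfare), opt / max(eq_welfare)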

Fig. 1. Three shops in a shopping area.

Example 1

We give here an example of GDPG applied to the local economy of a city’s shopping area where the shops are one beside the other. Figure 1 schematizes three of the shops in the area, which are represented by three light blue hyperedges (\(\{1,2\}\), \(\{3,4,5\}\), \(\{6,7,8,9\}\)). They are positioned in the area as in Fig. 1, where the light grey hyperedges are just auxiliary hyperedges with null weights for every strategy profile. In this case, the distances are physical distances. Each node stands for the manager of a category of products sold. The manager’s strategy is to choose a brand for her product category. A strategy profile \(\boldsymbol{\sigma }\) corresponds to the brands the managers have chosen.

All the preferences are null, while the weight \(w_e(\boldsymbol{\sigma })\) is the number of customers visiting the shop e under a specific strategy profile \(\boldsymbol{\sigma }\). It is reasonable to assume that the number of customers \(w_e(\boldsymbol{\sigma })\) strictly depends on the brands chosen by the agents.

Since the three shops are beside each other, a person who goes to \(\{1,2\}\) will probably also enter \(\{3,4,5\}\), and \(\{6,7,8,9\}\) with a lower probability. This means that part of the people visiting \(\{1,2\}\) will also stop at the other two shops, in a fraction that decreases with the physical distance.

The utility of an agent is the number of customer visits she benefits from, a quantity closely related to profit. We set \(\alpha _2=\alpha _3\) and \(\alpha _4=\alpha _5\) because of the auxiliary light grey hyperedges, which are not real shops. We can now compute the utilities of the agents. For example, agent 6 has \(u_6(\boldsymbol{\sigma }) = w_{\{6,7,8,9\}}(\boldsymbol{\sigma }) + \alpha _3 \cdot w_{\{3,4,5\}}(\boldsymbol{\sigma }) + \alpha _5 \cdot w_{\{1,2\}}(\boldsymbol{\sigma })\), which equals the number of customers that shop \(\{6,7,8,9\}\) gets for \(\sigma _{\{6,7,8,9\}}\), plus a fraction (\(\alpha _3\)) of the number of customers got by shop \(\{3,4,5\}\), plus a fraction (\(\alpha _5\)) of the number of customers got by shop \(\{1,2\}\). Clearly, this example cannot be modelled using previous polymatrix models, i.e., without using hypergraphs and the distance-factors sequence.
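For concreteness, suppose (with purely hypothetical numbers, as the figure specifies none) that under \(\boldsymbol{\sigma }\) the three shops attract \(w_{\{1,2\}}(\boldsymbol{\sigma })=100\), \(w_{\{3,4,5\}}(\boldsymbol{\sigma })=60\), and \(w_{\{6,7,8,9\}}(\boldsymbol{\sigma })=40\) customers, and that \(\alpha _2=\alpha _3=0.5\) and \(\alpha _4=\alpha _5=0.2\). Then

$$\begin{aligned} u_6(\boldsymbol{\sigma }) = 40 + 0.5\cdot 60 + 0.2\cdot 100 = 90, \end{aligned}$$

and the utilities of the other agents are computed analogously.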

4 Existence of \((\beta ,k)\)-Equilibria

This section analyzes the existence of \((\beta , k)\)-equilibria. First, we notice that \((\beta ,k)\)-equilibria may not exist, since they do not always exist even in polymatrix coordination games [4, 30]. In the following (Theorem 1), we give some conditions on \(\beta \) that guarantee their existence. These results extend the ones shown in [4, 30].

We say that a game \(\mathcal {G}\) has the finite \((\beta ,k)\)-improvement property (or \((\beta ,k)\)-FIP for short) if every sequence of \((\beta ,k)\)-improving deviations is finite. In such a case, any maximal sequence of \((\beta ,k)\)-improving deviations necessarily ends in a \((\beta ,k)\)-equilibrium, which implies the latter’s existence, too. To show that the \((\beta ,k)\)-FIP holds (and thus the existence of a \((\beta ,k)\)-equilibrium), we resort to a potential function argument [29]. A function \(\varPhi \) that associates each strategy profile with a real number is called a potential function if, for any strategy profile \(\boldsymbol{\sigma }\) and \((\beta ,k)\)-improving deviation \(\boldsymbol{\sigma }'=\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*\), we have that \(\varPhi (\boldsymbol{\sigma }')-\varPhi (\boldsymbol{\sigma })>0\). Thus, since any \((\beta ,k)\)-improving deviation increases the potential function and the number of strategy profiles is finite, any sequence of \((\beta ,k)\)-improving deviations cannot cycle and must reach a \((\beta ,k)\)-equilibrium after a finite number of steps.
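The argument also suggests a simple dynamics. The sketch below (illustrative only, assuming a utility(x, sigma) helper as in Sect. 3) lets one player at a time perform a \((\beta ,1)\)-improving move; by the potential-function argument just described, for \(\beta \ge 1\) it terminates in a \((\beta ,1)\)-equilibrium.

def improvement_dynamics(sigma, strategy_sets, utility, beta=1.0, max_rounds=10000):
    # (beta, 1)-improving dynamics: repeatedly let a single player switch to a
    # strategy that beta-improves her utility; stop when no such move exists.
    sigma = list(sigma)
    for _ in range(max_rounds):
        moved = False
        for x in range(len(sigma)):
            current = utility(x, sigma)
            for s in strategy_sets[x]:
                trial = list(sigma)
                trial[x] = s
                if utility(x, trial) > beta * current:
                    sigma, moved = trial, True
                    break
            if moved:
                break
        if not moved:
            return sigma                 # no player can beta-improve: equilibrium
    return sigma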

For a given hyperedge e and a subset \(Z \subseteq V\), let \(n^Z_h(e):=|\{x\in Z \;:\; e\in E_h(x)\}|\), i.e., \(n^Z_h(e)\) is the number of players \(x\in Z\) that are at distance \(h-1\) from e.

Theorem 1

Let \(\mathcal {G}\) be a GDPG. Then: i) \(\mathcal {G}\) has the \((\beta ,1)\)-FIP for every \(\beta \ge 1\); ii) \(\mathcal {G}\) has the \((\beta ,k)\)-FIP for every \(\beta \ge \max _{\begin{array}{c} Z \subseteq V:\\ |Z|=k \end{array}}\{\max _{e \in E}\{\sum _{h\in [d]}\alpha _h n^Z_h(e)\}\}\) and for every k.

Proof (Proof sketch)

To prove both i) and ii) we show that \(\varPhi (\boldsymbol{\sigma })=\sum _{x \in V}p_x(\boldsymbol{\sigma })+\sum _{e \in E}w_e(\boldsymbol{\sigma })\) is a potential function. Proof of i) is left to the full version.

Proof Sketch of ii). Consider a \((\beta ,k)\)-improving deviation \(\boldsymbol{\sigma }'=\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*\). Let \(\overline{Z}=V\setminus Z\). Let also \(\textsf{SW}_{Z}(\boldsymbol{\sigma })\) be the social welfare related to the deviating agents, that is \(\textsf{SW}_{Z}(\boldsymbol{\sigma })=\sum _{x\in Z}u_x(\boldsymbol{\sigma })\). We can rewrite this social welfare as \(\textsf{SW}_{Z}(\boldsymbol{\sigma })=\sum _{x \in Z}p_x(\boldsymbol{\sigma })+\sum _{e\in E:e \not \subseteq \overline{Z}}a_e w_e(\boldsymbol{\sigma })+\sum _{e\in E:e \subseteq \overline{Z}}a_e w_e(\boldsymbol{\sigma })\), where \(a_e=\sum _{h\in [d]}\alpha _h n^Z_h(e)\). It follows that

$$\begin{aligned} & \beta \sum _{x \in Z}p_x(\boldsymbol{\sigma }')+\sum _{e\in E: e\not \subseteq \overline{Z}} \beta w_e(\boldsymbol{\sigma }')+\sum _{e\in E: e\subseteq \overline{Z}}\beta a_e w_e(\boldsymbol{\sigma }') \end{aligned}$$
(1)
$$\begin{aligned} \ge & \textsf{SW}_Z(\boldsymbol{\sigma }') \end{aligned}$$
(2)
$$\begin{aligned} > &\beta \cdot \textsf{SW}_Z(\boldsymbol{\sigma }) \end{aligned}$$
(3)
$$\begin{aligned} \ge &\beta \sum _{x \in Z}p_x(\boldsymbol{\sigma })+\sum _{e\in E: e\not \subseteq \overline{Z}}\beta w_e(\boldsymbol{\sigma })+\sum _{e\in E:e \subseteq \overline{Z}}\beta a_e w_e(\boldsymbol{\sigma }) \end{aligned}$$
(4)

where (2) is due to \(\beta \ge 1\) and \(\beta \ge a_e\) for any \(e\in E\), (3) holds since \(\boldsymbol{\sigma }'\) is a \(\beta \)-improving deviation; (4) is due to \(a_e\ge 1\) for every \(e \in E\) such that \(e\not \subseteq \overline{Z}\). From (1) > (4) and \(\sum _{e \subseteq \overline{Z}}w_e(\boldsymbol{\sigma }')=\sum _{e \subseteq \overline{Z}}w_e(\boldsymbol{\sigma })\), we can derive \(\varPhi (\boldsymbol{\sigma }')-\varPhi (\boldsymbol{\sigma })>0\).    \(\square \)

The value \(\sum _{h\in [d]}\alpha _h n^Z_h(e)\) strictly depends on d and \(n^Z_h(e)\). When \(d=1\), we have \(\sum _{h\in [d]}\alpha _h n^Z_h(e)= n^Z_1(e)\le |e|\) for every \(e\in E\) and \(Z\subseteq V\), so we can assume \(\beta \ge r\). When the hypergraph of a game is a hyperlist, we have \(\sum _{h\in [d]}\alpha _h n^Z_h(e)\le 2r\sum _{h\in [d]}\alpha _h\), for every \(e\in E\) and \(Z\subseteq V\). When the hypergraph of a game is a hypertree of maximum degree \(\varDelta \), we have \(\sum _{h\in [d]}\alpha _h n^Z_h(e)\le r\sum _{h\in [d]}\alpha _h r^{h-1}\varDelta ^{h-1}\), for every \(e\in E\) and \(Z\subseteq V\).

5 \((\beta ,k)\)-PoA of General Graphs

In this section, we provide tight upper and lower bounds on the \((\beta ,k)\)-Price of Anarchy when the hypergraph \(\mathcal {H}\) of a game \(\mathcal {G}\) is general, that is, when no particular assumption is made on it. Such bounds depend on \(\beta \), k, the number of players n, the maximum arity r, and the value \(\alpha _2\) of the distance-factors sequence.

Theorem 2

For any \(\beta \ge 1\) and any integers \(r\ge 2\), \(k<r\), and \(n\ge r\), there exists a simple GDPG \(\mathcal {G}\) with n agents such that \(\textsf{PoA}^{\beta }_k(\mathcal {G})=\infty \).

Thus, in the rest of the paper, we will only consider the \((\beta ,k)\)-PoA for \(k\ge r\ge 2\), since it is not possible to bound the \((\beta ,k)\)-PoA for \(k<r\), not even for bounded-degree hypergraphs and not even when \(\varDelta =1\).

5.1 \((\beta ,k)\)-PoA: Upper Bound

We now provide three results that we will use to compute the upper bound on the \((\beta ,k)\)-Price of Anarchy (Theorem 3). The first result is an upper bound on the social welfare of any strategy profile.

Lemma 1

For any strategy profile \(\boldsymbol{\sigma }\), it holds that \(\textsf{SW}(\boldsymbol{\sigma })\le \sum _{x\in V}p_x(\boldsymbol{\sigma })+(r+\alpha _2\cdot (n-2))\cdot \sum _{e\in E}w_e(\boldsymbol{\sigma }).\)

Lemma 1 follows by observing that, in \(\textsf{SW}(\boldsymbol{\sigma })\), the weight of each hyperedge is counted with factor 1 by each of its at most r members and with factor at most \(\alpha _2\) by each of the at most \(n-2\) remaining players. Before providing the other two preliminary results, we write Inequality (5), which is a necessary condition for an outcome \(\boldsymbol{\sigma }\) to be a \((\beta ,k)\)-equilibrium. For a fixed integer \(k\ge r\), let \(\boldsymbol{\sigma }\) and \(\boldsymbol{\sigma }^*\) be a \((\beta ,k)\)-equilibrium and a social optimum of \(\mathcal {G}\), respectively. Since \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium, for every subset \(Z\subseteq V\) of at most k agents, there exists an agent \(z_1(Z)\in Z\) such that \(\beta u_{z_1(Z)}(\boldsymbol{\sigma })\ge u_{z_1(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*)\). Moreover, letting \(Z(2):=Z\setminus \{z_1(Z)\}\), there exists another agent \(z_2(Z)\in Z(2)\) such that \(\beta u_{z_2(Z)}(\boldsymbol{\sigma })\ge u_{z_2(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(2)}}\boldsymbol{\sigma }^*)\). We can iterate this process for every \(z_i(Z)\in Z(i):=Z\setminus \{z_1(Z),\ldots , z_{i-1}(Z)\}\), obtaining the following inequality.

$$\begin{aligned} \beta u_{z_i(Z)}(\boldsymbol{\sigma })\ge u_{z_i(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(i)}}\boldsymbol{\sigma }^*) \end{aligned}$$
(5)

By summing the left- and right-hand sides of the previous inequality over every possible subset Z of k players and every \(i\in [k]\), we derive the following two results, needed for Theorem 3.

Lemma 2

For every \((\beta ,k)\)-equilibrium \(\boldsymbol{\sigma }\), it holds that \( \beta \cdot \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \cdot \textsf{SW}(\boldsymbol{\sigma })=\sum _{\begin{array}{c} Z\subseteq V\\ |Z|=k \end{array}}\sum _{i\in [k]}\beta \cdot u_{z_i(Z)}(\boldsymbol{\sigma }). \)

Proof (Proof sketch)

By summing \(\beta u_{z_i(Z)}(\boldsymbol{\sigma })\) for every \(Z\subseteq V\) of cardinality k and for every \(i \in [k]\), it is easy to see that the utility of an agent \(x\in V\) is counted exactly \(\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \) times, which is the number of subsets \(Z\subseteq V\) of cardinality k containing x.    \(\square \)

Lemma 3

For every \((\beta ,k)\)-equilibrium \(\boldsymbol{\sigma }\), it holds that \(\sum _{\begin{array}{c} Z\subseteq V\\ |Z|=k \end{array}}\sum _{i\in [k]}u_{z_i(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(i)}}\boldsymbol{\sigma }^*)\ge \left( {\begin{array}{c}n-r\\ k-r\end{array}}\right) \left( \sum _{x\in V}p_x(\boldsymbol{\sigma }^*)+\sum _{e\in E}w_e(\boldsymbol{\sigma }^*)\right) \).

Proof (Proof sketch)

For every subset \(Z\subseteq V\), with \(|Z|=k\), it holds that

\(\sum _{i\in [k]}u_{z_i(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(i)}}\boldsymbol{\sigma }^*)\ge \sum _{x\in Z}p_x(\boldsymbol{\sigma }^*)+\sum _{e\subseteq Z}w_e(\boldsymbol{\sigma }^*)\). To establish this, we discarded the weights of all hyperedges at distance at least one from \(z_i(Z)\) and used the fact that every hyperedge contained in Z is counted exactly once. By summing the previous inequality over every subset Z, we obtain \(\sum _{\begin{array}{c} Z\subseteq V\\ |Z|=k \end{array}}\sum _{i\in [k]}u_{z_i(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(i)}}\boldsymbol{\sigma }^*) \ge \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \sum _{x\in V}p_x(\boldsymbol{\sigma }^*)+\left( {\begin{array}{c}n-r\\ k-r\end{array}}\right) \sum _{e\in E}w_e(\boldsymbol{\sigma }^*)\), thus showing the claim.    \(\square \)

Finally, we can state the theorem for the upper bound of the \((\beta ,k)\)-Price of Anarchy.

Theorem 3

For any \(\beta \ge 1\), any integer \(k\ge r\) and any GDPG \(\mathcal {G}\) having a distance-factors sequence \((\alpha _h)_{h\in [d]}\), it holds that \( \textsf{PoA}^{\beta }_k(\mathcal {G})\le \beta \frac{(n-1)_{r-1}}{(k-1)_{r-1}} (r + \alpha _2 (n-2))\).

Proof (Proof sketch)

By using the results given in Lemmas 1, 2, and 3 we obtain

$$\begin{aligned} &\beta \cdot \left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) \cdot \textsf{SW}(\boldsymbol{\sigma })=\sum _{Z\subseteq V:|Z|=k}\sum _{i\in [k]}\beta \cdot u_{z_i(Z)}(\boldsymbol{\sigma })\end{aligned}$$
(6)
$$\begin{aligned} &\ge \sum _{Z\subseteq V:|Z|=k}\sum _{i\in [k]}u_{z_i(Z)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z(i)}}\boldsymbol{\sigma }^*) \end{aligned}$$
(7)
$$\begin{aligned} &\ge \left( {\begin{array}{c}n-r\\ k-r\end{array}}\right) \left( \sum _{x\in V}p_x(\boldsymbol{\sigma }^*)+\sum _{e\in E}w_e(\boldsymbol{\sigma }^*)\right) \end{aligned}$$
(8)
$$\begin{aligned} &\ge \left( {\begin{array}{c}n-r\\ k-r\end{array}}\right) \cdot (r + \alpha _2\cdot (n-2))^{-1}\cdot \textsf{SW}(\boldsymbol{\sigma }^*) \end{aligned}$$
(9)

where (6), (7), (8), and (9) derive from Lemma 2, Eq. (5), Lemma 3, and Lemma 1, respectively. Concluding, from (6) and (9), and since \(\left( {\begin{array}{c}n-1\\ k-1\end{array}}\right) /\left( {\begin{array}{c}n-r\\ k-r\end{array}}\right) =\frac{(n-1)_{r-1}}{(k-1)_{r-1}}\), we get the upper bound for \(\textsf{PoA}^{\beta }_k(\mathcal {G})\).    \(\square \)

5.2 \((\beta ,k)\)-PoA: Lower Bound

We continue by showing the following tight lower bound.

Theorem 4

For every \(\beta \ge 1\), all integers \(r\ge 2\), \(k\ge r\), \(d\ge 1\), \(n\ge k\), and every d-distance-factors sequence \((\alpha _h)_{h\in [d]}\), there is a GDPG \(\mathcal {G}\) with \( \textsf{PoA}^{\beta }_k(\mathcal {G})\ge \beta \frac{(n-1)_{r-1}}{(k-1)_{r-1}}\left( r+\alpha _2(n-r) \right) \).

Proof (Proof sketch)

The idea is to use a GDPG game instance \(\mathcal {G}\) with n players where: (i) the underlying hypergraph \(\mathcal {H}\) is a hyperstar in which all the players \(x\ge 2\) are only connected to player 1; (ii) each hyperedge contains player 1 and \(r-1\) other players; (iii) each agent has only two strategies, \(\{s,s^*\}\); (iv) \(w_e(\boldsymbol{\sigma })=\beta \) if every agent in e chooses \(s^*\) under outcome \(\boldsymbol{\sigma }\), and \(w_e(\boldsymbol{\sigma })=0\) otherwise; (v) \(p_1(\boldsymbol{\sigma })=\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) \) if agent 1 chooses s under outcome \(\boldsymbol{\sigma }\), and \(p_1(\boldsymbol{\sigma })=0\) otherwise; and (vi) \(p_x(\boldsymbol{\sigma })=0\) for every \(x\ge 2\) and outcome \(\boldsymbol{\sigma }\). We call \(\boldsymbol{\sigma }\) and \(\boldsymbol{\sigma }^*\) the two outcomes where all the agents choose s and \(s^*\), respectively. Since \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium, we use the ratio between the social welfare of \(\boldsymbol{\sigma }^*\) and that of \(\boldsymbol{\sigma }\) to get the result.    \(\square \)
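To see where the bound comes from, consider the instantiation (an assumption on our part, since the sketch does not fix the hyperedge set exactly) in which E consists of all the \(\left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) \) subsets of size r containing player 1. In \(\boldsymbol{\sigma }\) all weights are null, so \(\textsf{SW}(\boldsymbol{\sigma })=p_1(\boldsymbol{\sigma })=\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) \). Moreover, only hyperedges entirely contained in a deviating coalition Z can acquire weight \(\beta \): if \(1\notin Z\) there are none, and otherwise there are at most \(\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) \) of them, so player 1’s utility after the deviation is at most \(\beta \left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) =\beta u_1(\boldsymbol{\sigma })\); in both cases some member of Z does not \(\beta \)-improve, and \(\boldsymbol{\sigma }\) is indeed a \((\beta ,k)\)-equilibrium. In \(\boldsymbol{\sigma }^*\), player 1 collects all the hyperedges at distance 0, while every other player collects \(\left( {\begin{array}{c}n-2\\ r-2\end{array}}\right) \) hyperedges at distance 0 and the remaining \(\left( {\begin{array}{c}n-2\\ r-1\end{array}}\right) \) at distance 1, hence

$$\begin{aligned} \textsf{SW}(\boldsymbol{\sigma }^*)=\beta \left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) +(n-1)\beta \left( {\begin{array}{c}n-2\\ r-2\end{array}}\right) +(n-1)\alpha _2\beta \left( {\begin{array}{c}n-2\\ r-1\end{array}}\right) =\beta \left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) \left( r+\alpha _2(n-r)\right) , \end{aligned}$$

using \((n-1)\left( {\begin{array}{c}n-2\\ r-2\end{array}}\right) =(r-1)\left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) \) and \((n-1)\left( {\begin{array}{c}n-2\\ r-1\end{array}}\right) =(n-r)\left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) \). The ratio \(\textsf{SW}(\boldsymbol{\sigma }^*)/\textsf{SW}(\boldsymbol{\sigma })\) then matches the claimed bound, since \(\left( {\begin{array}{c}n-1\\ r-1\end{array}}\right) /\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) =\frac{(n-1)_{r-1}}{(k-1)_{r-1}}\).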

6 (1, k)-PoS of General Graphs

This section shows a lower bound for the (1, k)-Price of Stability asymptotically equal to the upper bound for the (1, k)-Price of Anarchy given in Theorem 3. This means that we can use this upper bound also for the (1, k)-Price of Stability and close our study for general hypergraphs for the case \(\beta =1\).

The basic idea is to start from the lower bound instance of Theorem 4 and then transform it into a new instance in which every outcome whose social welfare differs from the minimum one is unstable.

Theorem 5

For any \(n \ge 6\), there exists a GDPG \(\mathcal {G}\) such that

\(\textsf{PoS}_k^1(\mathcal {G})\ge \frac{n-r}{n-1}\frac{(n-1)_{r-1}}{(k-1)_{r-1}}\frac{(r + \alpha _2(n-r))}{2(1+\alpha _2)}\)

Proof

Let \(\mathcal {H}=(V,E,w)\) be the interaction hypergraph of \(\mathcal {G}\), with \(|V|=n\) and \(|E|=2\left( {\begin{array}{c}n-2\\ r-1\end{array}}\right) +1\). Furthermore, let the set of hyperedges E be divided into \(\{1,2\}\), \(E^1\), and \(E^2\), where \(E^i\), with \(i\in \{1,2\}\), has \(\left( {\begin{array}{c}n-2\\ r-1\end{array}}\right) \) hyperedges of arity r, each containing node i and \(r-1\) nodes different from 1 and 2. Hypergraph \(\mathcal {H}\) is a kind of hyperstar with two roots connected by an edge of arity 2. Each agent x has a set \(\varSigma _x=\{1,2,3\}\) of three possible strategies. We call bottom layer, medium layer, and top layer the outcomes in which every player plays strategy 3, 2, and 1, respectively.

Finally, all the non-null weights and preferences are defined as follows. For the bottom layer, \(p_1(3)=p_2(3)=(1+2\epsilon )+\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) (1+\alpha _2)(1+\epsilon )\). For the medium layer, \(w_{\{1,2\}}(2,2)=w_{e\in E^1}(\boldsymbol{\sigma })=w_{e\in E^2}(\boldsymbol{\sigma })=(1+\epsilon )\). For the top layer, \(p_{1}(1)=p_{2}(1)=1\), \(w_{e\in E^1}(\boldsymbol{\sigma })=w_{e\in E^2}(\boldsymbol{\sigma })=1+\epsilon \). The only non-null weights between the layers are \(w_{\{1,2\}}(1,2)=2\epsilon \), \(w_{e\in E^1}(\boldsymbol{\sigma })=1+\epsilon \), and \(w_{e\in E^2}(\boldsymbol{\sigma })=1+\epsilon \), when some players play strategy 1 and all the others play strategy 2 in \(\boldsymbol{\sigma }\). Please note that every hyperedge with some players in the bottom layer and all the others out of the bottom layer has a null weight.

Lemma 4

The bottom layer is a (1, k)-equilibrium with social welfare \(2(1+2\epsilon )+2\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) (1+\alpha _2)(1+\epsilon )\).

Lemma 5

All the (1, k)-equilibria have the same social welfare \(2(1+2\epsilon )+2\left( {\begin{array}{c}k-1\\ r-1\end{array}}\right) (1+\alpha _2)(1+\epsilon )\).

Proof (Proof sketch)

We only need to check the case where no agent is in the bottom layer. In fact, any other outcome is either unstable or has a social welfare equal to the one given in Lemma 4. When all the players are out of the bottom layer, the utility of agents 1 and 2 can change only when one or both of them change layer. Now, if both 1 and 2 are in the top layer, they both prefer to move to the medium one, because each gets an extra \(\epsilon \). If both are in the medium layer, agent 1 prefers to go back to the top layer, increasing her utility by a further \(\epsilon \). From the state where agent 1 is in the top layer and agent 2 in the medium layer, agent 2 prefers to go back to the top layer. In the last state, where agent 1 is in the medium layer and agent 2 in the top layer, agent 1 prefers to move to the top layer. Hence, no outcome in which all the players are out of the bottom layer is stable.    \(\square \)

Lemma 6

\(\textsf{PoS}_k^1(\mathcal {G})=\frac{n-r}{n-1}\frac{(n-1)_{r-1}}{(k-1)_{r-1}}\frac{(r + \alpha _2(n-r))}{2(1+\alpha _2)}\).

Proof (Proof sketch)

We use the ratio between the social welfare of the medium and the bottom layers to get the lower bound.    \(\square \)

The proof of Theorem 5 is complete.    \(\square \)

7 \((\beta ,k)\)-PoA of Bounded-Degree Graphs

In this section, we analyze the \((\beta ,k)\)-Price of Anarchy for games whose hypergraphs have bounded degree. We say that a game \(\mathcal {G}\) is \(\varDelta \)-bounded-degree if the degree of every node in the underlying hypergraph is at most \(\varDelta \). Here, we will only focus on the cases where \(k\ge r\), as observed in Theorem 2, and \(\varDelta \ge 2\), since the case \(\varDelta =1\) is encompassed by Sect. 5.

7.1 \((\beta ,k)\)-PoA: Upper Bound

As we did for general hypergraphs, we first show an upper bound on the social welfare of every outcome.

Lemma 7

Given a \(\varDelta \)-bounded-degree GDPG \(\mathcal {G}\), for every strategy profile \(\boldsymbol{\sigma }\) it holds that \(\textsf{SW}(\boldsymbol{\sigma })\le \sum _{x\in V}p_x(\boldsymbol{\sigma })+r\sum _{h\in [d]}\alpha _h\cdot (\varDelta -1)^{h-1}r^{h-1} \cdot \sum _{e\in E}w_e(\boldsymbol{\sigma }).\)

We can now state the main theorem on the upper bound.

Theorem 6

For every \(\varDelta \)-bounded-degree GDPG \(\mathcal {G}\), with distance-factor sequence \((\alpha _h)_{h\in [d]}\), and for every \(k\ge r\), it holds that \(\textsf{PoA}^{\beta }_k(\mathcal {G})\le \beta \cdot r\sum _{h\in [d]}\alpha _h\cdot \varDelta \cdot (\varDelta -1)^{h-1}r^{h-1}\).

Proof (Proof sketch)

First, we write some necessary conditions for an outcome \(\boldsymbol{\sigma }\) to be a \((\beta ,k)\)-equilibrium. Since the maximum arity r is at most k, if \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium, then for every hyperedge e, there must exist a player \(z_1(e)\in e\) such that (i): \(\beta u_{z_1(e)}(\boldsymbol{\sigma })\ge u_{z_1(e)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{e}}\boldsymbol{\sigma }^*)\ge p_{z_1(e)}(\boldsymbol{\sigma }^*)+w_e(\boldsymbol{\sigma }^*)\). Moreover, since a \((\beta ,k)\)-equilibrium is also a \((\beta ,1)\)-equilibrium, for every other \(z_i(e)\in e\), with \(z_i(e) \ne z_1(e)\), it must hold that (ii): \(\beta u_{z_i(e)}(\boldsymbol{\sigma })\ge u_{z_i(e)}(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{z_i(e)}}\boldsymbol{\sigma }^*)\ge p_{z_i(e)}(\boldsymbol{\sigma }^*)\). By summing inequality (i) and all the inequalities (ii) over every hyperedge \(e\in E\), and by using Lemma 7, we get

$$\begin{aligned} &\beta \cdot \sum _{e\in E} \left( \sum _{z_i(e)} u_{z_i(e)}(\boldsymbol{\sigma })\right) \ge \left( r\sum _{h\in [d]}\alpha _h\cdot (\varDelta -1)^{h-1}r^{h-1}\right) ^{-1}\cdot \textsf{SW}(\boldsymbol{\sigma }^*) \end{aligned}$$
(10)

We notice now that it holds that \(\sum _{e\in E} \left( \sum _{z_i(e)} u_{z_i(e)}(\boldsymbol{\sigma })\right) \le \sum _{x\in V} \varDelta \cdot u_x(\boldsymbol{\sigma })=\varDelta \cdot \textsf{SW}(\boldsymbol{\sigma })\), because in the left-hand part the utility of each player x is counted at most \(\varDelta \) times, which is the maximum number of hyperedges containing x. By using both (10) and the last inequality, we obtain \(\textsf{SW}(\boldsymbol{\sigma })\ge \beta ^{-1}\varDelta ^{-1} \left( r\sum _{h\in [d]}\alpha _h\cdot (\varDelta -1)^{h-1}r^{h-1}\right) ^{-1} \textsf{SW}(\boldsymbol{\sigma }^*)\).    \(\square \)

Remark 1

Please note that Theorem 6 implies that the \((\beta ,k)\)-price of anarchy of \(\varDelta \)-bounded-degree GDPG, as a function of d, grows at most as \(O(\beta \cdot (\varDelta -1)^d\cdot r^d)\).

7.2 \((\beta ,k)\)-PoA: Lower Bound

In the following theorem, we provide a lower bound on the \((\beta ,k)\)-Price of Anarchy relying on a nice result from graph theory.

Theorem 7

For every \(\beta \ge 1\), any integers \(k\ge r\), \(\varDelta \ge 3\), \(d\ge 1\), and any distance-factors sequence \((\alpha _h)_{h\in [d]}\), there exists a \(\varDelta \)-bounded-degree GDPG \(\mathcal {G}\) such that

\(\textsf{PoA}^{\beta }_k(\mathcal {G}) \ge \frac{\beta \cdot \sum _{h\in [d]}\alpha _h \varDelta (\varDelta -1)^{h-1}(r-1)^{h-1}}{1+ \sum _{h=1}^{d-1}\alpha _{h+1}(2(\varDelta -1)^{\lfloor (h+1)/2 \rfloor }(r-1)^{\lfloor (h+1)/2 \rfloor -1} +2(\varDelta -1)^{\lfloor h/2 \rfloor -1}(r-1)^{\lfloor h/2 \rfloor })}\).

Proof

In Lemma 3 of [20], the authors state that, for all integers \(\varDelta \), r, and \(\gamma _0\ge 3\), it is always possible to find a \(\varDelta \)-regular r-uniform hypergraph \(\mathcal {H}\) of girth at least \(\gamma _0\). By using this result, given integers \(k\ge r\), \(\varDelta \ge 3\) and \(d\ge 1\), a distance-factors sequence \((\alpha _h)_{h\in [d]}\), and a \(\varDelta \)-regular r-uniform (and hence linear) hypergraph \(\mathcal {H}=(V,E)\) of girth at least \(\gamma _0:=\max \{2d+1,k+1\}\), we can build a \(\varDelta \)-bounded-degree GDPG \(\mathcal {G}\), such that (i) \(\mathcal {H}\) is its underlying hypergraph; (ii) \((\alpha _h)_{h\in [d]}\) is its distance-factors sequence; (iii) each player x has two strategies, s and \(s^*\); (iv) for every hyperedge \(e\in E\) and outcome \(\boldsymbol{\sigma }\), \(w_e(\boldsymbol{\sigma })=\beta \) if all the nodes in e play \(s^*\) in \(\boldsymbol{\sigma }\), and 0 otherwise; and (v) for every \(x\in V\), \(p_x(\boldsymbol{\sigma })=1+ \sum _{h=1}^{d-1}\alpha _{h+1}\left( 1+\varDelta \frac{(\varDelta -1)^p(r-1)^p-1}{(\varDelta -1)(r-1)-1}+r\frac{(\varDelta -1)^q(r-1)^q-1}{(\varDelta -1)(r-1)-1}\right) \) if x plays s in \(\boldsymbol{\sigma }\), where \(p=\lfloor (h+1)/2 \rfloor \) and \(q=\lfloor h/2 \rfloor \), otherwise \(p_x(\boldsymbol{\sigma })=0\).

Let \(\boldsymbol{\sigma }\) and \(\boldsymbol{\sigma }^*\) be the strategy profiles in which all players play strategy s and \(s^*\), respectively. First, we show that \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium of \(\mathcal {G}\).

Lemma 8

\(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium.

Proof (Proof sketch)

If \(\boldsymbol{\sigma }\) is not a \((\beta ,k)\)-equilibrium, then there must exist a subset \(Z\subseteq V\), \(|Z|\le k\), such that \(u_x(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*)>\beta \cdot u_x(\boldsymbol{\sigma })\) for every \(x\in Z\).

We can assume w.l.o.g. that the subhypergraph \(\mathcal {T}\) induced by Z in \(\mathcal {H}\), rooted in \(x_r\), is a perfect linear hypertree of height d, because a node cannot get utility from hyperedges at distance more than \(d-1\) from it, and each node has degree \(\varDelta \). Consider a leaf y of \(\mathcal {T}\); we assume w.l.o.g. that y is one of the \(r-1\) leftmost leaves, that is, one of the leaves of the leftmost hyperedge.

Let \((e_0,e_1,\ldots ,e_d)\) be a path \(\mathcal {P}\) from the leftmost leaf y to the root, where \(e_0\) is the leftmost hyperedge containing y, \(e_d\) is the one containing the root \(x_r\), and \(e_{l-1}\cap e_l=\{v_l\}\) for every \(l\in [d-1]\). Clearly, each hyperedge \(e_l\) is made up of \(v_l\), \(v_{l+1}\), and the other \(r-2\) nodes, which we call \(v'_{l}\). The distance between y and any \(v_l\) is \(l-1\), while the distance between y and \(v'_l\) is l. The root is at distance \(d-l-1\) from \(v_l\) and any \(v'_l\), and there is a subhypertree having \(v_l\) or any \(v'_l\) as root and height l. Let \(\mathcal {T}_l\) be the subhypertree rooted in \(v_l\) and containing all its descendant nodes except the ones in \(\mathcal {P}\). Let also \(\mathcal {T}'_l\) be one of the \(r-2\) hypertrees rooted in one of the nodes \(v'_l\). Both \(\mathcal {T}_l\) and any \(\mathcal {T}'_l\) have height l. The number of hyperedges \(E_{l,t}\) at level t of \(\mathcal {T}_l\) is \((\varDelta -2)(\varDelta -1)^{t-1}(r-1)^{t-1}\), while the number of hyperedges \(E'_{l,t}\) at level t of any \(\mathcal {T}'_l\) is \((\varDelta -1)^{t}(r-1)^{t-1}\).

All the hyperedges that are at distance \(h\ge 1\) from y are \(e_{h}\), plus \(E_{h,1}\cup E_{h-1,2}\cup \ldots \cup E_{\lceil (h+1)/2 \rceil , \lfloor (h+1)/2 \rfloor }\), plus \(E'_{h-1,1}\cup E'_{h-2,2}\cup \ldots \cup E'_{\lceil h/2 \rceil , \lfloor h/2 \rfloor }\) for the \((r-2)\) hypertrees \(\mathcal {T}'_l\). When \(h=0\), there is only the hyperedge \(e_0\) at distance h from y. Therefore, using the already defined p and q, the number of hyperedges at distance \(h\ge 1\) from y is \(1+\varDelta \frac{(\varDelta -1)^p(r-1)^p-1}{(\varDelta -1)(r-1)-1}+r\frac{(\varDelta -1)^q(r-1)^q-1}{(\varDelta -1)(r-1)-1}\).

We can conclude that the utility \(u_y(\boldsymbol{\sigma }{\mathop {\rightarrow }\limits ^{Z}}\boldsymbol{\sigma }^*)\) that a leftmost leaf y gets from the deviation is \(\beta \cdot p_y(\boldsymbol{\sigma })\), which is equal to \(\beta \cdot u_y(\boldsymbol{\sigma })\), so y does not \(\beta \)-improve from the deviation, and \(\boldsymbol{\sigma }\) is a \((\beta ,k)\)-equilibrium.    \(\square \)

Lemma 9

\(u_x(\boldsymbol{\sigma }^*)=\beta \sum _{h\in [d]}\alpha _h \varDelta (\varDelta -1)^{h-1}(r-1)^{h-1}\) for any \(x\in V\).

From Lemmas 8 and 9, we obtain the lower bound for \(\textsf{PoA}^{\beta }_k(\mathcal {G})\), which concludes the proof of the theorem.    \(\square \)

Remark 2

Please note that, if all the distance-factors are not lower than a constant \(c>0\), from Theorem 7 we can conclude that the \((\beta ,k)\)-price of anarchy of \(\varDelta \)-bounded-degree GDPG, as a function of d, can grow as \(\varOmega (\beta (\varDelta -1)^{d/2}(r-1)^{d/2})\).

8 Conclusion and Future Works

This study leaves some open problems, such as (i) closing the gap between the upper and the lower bound on the Price of Anarchy for bounded-degree hypergraphs; (ii) extending the results on the Price of Stability to values of \(\beta \) greater than one; and (iii) computing a lower bound on the Price of Stability for bounded-degree hypergraphs. Concerning the latter problem, we are confident that it is possible to use the same modus operandi described in Sect. 6.

Another relevant open problem we consider worth investigating concerns finding particular classes of games that guarantee the existence of equilibria. We believe the class of games having a hypertree as underlying hypergraph is a good candidate. Regarding the existence of \(\beta \)-approximate k-strong equilibria, we conjecture that the condition stated in Theorem 1 is also necessary, which could be proven by means of some ad-hoc game instances.

Another interesting research direction is studying our model with respect to different social welfare functions, e.g., using the \(L^p\)-norm for different values of p.