1 Introduction

We study the problem that arises in a community of individuals that want to select a community member to receive an award. This is a standard social choice problem [7], typically encountered in scientific and sports communities, which has also found important applications in distributed multi-agent systems. To give an entertaining example, the award for the player of the year by the Professional Footballers' Association (PFA) is decided by the members of the PFA themselves; each PFA member votes for the two players they consider the best candidates for the award, and the player with the maximum number of votes receives it. Footballers consider it one of the most prestigious awards, precisely because it is decided by their opponents. In distributed multi-agent systems, leader election (e.g., see [2]) can be thought of as a selection problem of similar flavor. Other notable examples include the selection of a representative in a group and funding decisions based on peer reviewing (see [11]), or even finding the most popular user of a social network (see [1]).

The input of the problem can be represented as a directed graph, which we usually call a nomination profile. Each vertex represents an individual, and a directed edge indicates a vote (or nomination) by one community member for another. A selection mechanism (or selection rule) takes a nomination profile as input and returns a single vertex as the winner. Clearly, there is a highly desirable selection rule: the one which always returns the highest in-degree vertex as the winner. Unfortunately, such a rule suffers from a drawback that is pervasive in social choice: namely, it is susceptible to manipulation.

In particular, the important constraint that makes the selection challenging is impartiality. As every individual has a personal interest in receiving the award, selection rules should take the individual votes into account, but in such a way that no single individual can increase her chance of winning by changing her vote. The problem, known as impartial selection, was introduced independently by Holzman and Moulin [13] and Alon et al. [1]. Unfortunately, the ideal selection rule mentioned above is not impartial. Consider the case with a few individuals that are tied with the highest number of votes. The agents involved in the tie might be tempted to lie about their true preferences to break the tie in their favor.

Impartial selection rules may inevitably select as the winner a vertex that does not have the maximum in-degree. Holzman and Moulin [13] considered minimum axiomatic properties that impartial selection rules should satisfy. For example, a highly desirable property, called negative unanimity, requires that an individual with no votes at all should never be selected. Alon et al. [1] quantified the efficiency loss with the notion of approximation ratio, defined as the worst-case ratio of the maximum vertex in-degree over the in-degree of the vertex which is selected by the rule. According to their definition, an impartial selection rule should have as low an approximation ratio as possible. This line of research was concluded by the work of Fischer and Klimm [11], who proposed impartial mechanisms with the optimal approximation ratio of 2.

It was pointed out in [1, 11] that the most challenging nomination profiles for both deterministic and randomized mechanisms are those with small in-degrees. In the case of deterministic mechanisms, the situation is quite extreme, as all deterministic mechanisms can easily be seen to have an unbounded approximation ratio on inputs where a single vertex has in-degree 1 and all others have in-degree 0; see [1] for a concrete example. As a result, the approximation ratio does not seem to be an appropriate measure for classifying deterministic selection mechanisms. Finally, Bousquet et al. [6] have shown that if the maximum in-degree is large enough, randomized mechanisms that impartially return a near-optimal winner do exist.

We deviate from previous work and instead propose to use additive approximation as a measure of the quality of impartial selection rules. Additive approximation is defined via the difference between the maximum in-degree and the in-degree of the winner returned by the selection mechanism. Note that deterministic mechanisms with low additive approximation always return the highest in-degree vertex as the winner when its margin of victory is large. When this does not happen, we have a guarantee that the winner returned by the mechanism has a close-to-maximum in-degree.

Our Contribution

We provide positive and negative results for impartial selection mechanisms with additive approximation guarantees. We distinguish between two models. In the first model, which was considered by Holzman and Moulin [13], nomination profiles consist only of graphs with all vertices having an out-degree of 1. The second model is more general and allows for multiple nominations and abstentions (hence, vertices have arbitrary out-degrees).

As positive results, we present two randomized impartial mechanisms which have additive approximation guarantees of \({\varTheta }(\sqrt {n})\) and \({\varTheta }(n^{2/3}\ln ^{1/3}n)\) for the single nomination and multiple nomination models, respectively. Notice that both these additive guarantees are o(n) functions of the number n of vertices. We remark that an o(n)-additive approximation guarantee can be translated to a 1 − 𝜖 multiplicative guarantee for graphs with sufficiently large maximum in-degree, similar to the results of [6]. Conversely, the multiplicative guarantees of [6] can be translated to an \(O(n^{8/9})\)-additive guarantee. This analysis further demonstrates that additive guarantees allow for a smoother classification of mechanisms that achieve good multiplicative approximation in the limit.

Our mechanisms first select a small sample of vertices, and then select the winner among the vertices that are nominated by the sample vertices. These mechanisms are randomized variants of a class of mechanisms which we define and call strong sample mechanisms. Strong sample mechanisms are impartial mechanisms which select the winner among the vertices nominated by a sample set of vertices. In addition, they have the characteristic that the sample set does not change with changes in the nominations of the vertices belonging to it. For the single nomination model, we provide a characterization, showing that every deterministic strong sample mechanism must use a fixed sample set that does not depend on the nomination profile. This yields an n − 2 lower bound on the additive approximation guarantee of any deterministic strong sample mechanism. For their randomized variants, where the sample set is selected randomly, we present an \({\varOmega }(\sqrt {n})\) lower bound, which shows that our first randomized impartial mechanism is best possible among all randomized variants of strong sample mechanisms. Finally, for the most general, multiple nomination model, we present a lower bound of 3 on the additive approximation guarantee of all deterministic mechanisms.

Related Work

Besides the papers by Holzman and Moulin [13] and Alon et al. [1], which introduced impartial selection as we study it here, de Clippel et al. [10] considered a different version of the problem with a divisible award. Alon et al. [1] used the approximation ratio as a measure of quality for impartial selection mechanisms. After observing that no deterministic mechanism achieves a bounded approximation ratio, they focused on randomized mechanisms and proposed the 2-Partition mechanism, which guarantees an approximation ratio of 4, and complemented this positive result with a lower bound of 2 for randomized mechanisms.

Later, Fischer and Klimm [11] designed a mechanism that achieves an approximation ratio of 2 by generalizing 2-Partition. Their optimal mechanism, called Permutation, examines the vertices sequentially, following their order in a random permutation, and selects as the winner the vertex of highest in-degree counting only edges with direction from “left” to “right.” They also provided lower bounds on the approximation ratio for restricted inputs (e.g., with no abstentions) and showed that the worst-case examples for the approximation ratio are tight when the input nomination profiles are small.

Bousquet et al. [6] noticed this bias towards instances with small in-degrees and examined the problem for instances of very high maximum in-degree. After showing that Permutation performs significantly better on instances of high in-degree, they designed the Slicing mechanism, which has near-optimal asymptotic behaviour for that restricted family of graphs. More precisely, they showed that, if the maximum in-degree is large enough, Slicing can guarantee that the winner’s in-degree approximates the maximum in-degree up to a small error. As we discussed in the previous section, the Slicing mechanism can achieve an additive guarantee of \(O(n^{8/9})\).

Holzman and Moulin [13] explored impartial mechanisms through an axiomatic approach. They focused on the single nomination model and proposed several deterministic mechanisms, including the Majority with Default mechanism. Majority with Default designates a vertex as the default winner and checks whether there is any vertex with in-degree more than ⌈n/2⌉, ignoring the outgoing edge of the default winner. If such a vertex exists, then it is the winner; otherwise the default vertex wins. While this mechanism has the unpleasant property that the default vertex may become the winner without any incoming edges at all, its additive approximation is at most ⌈n/2⌉. Furthermore, they established a fundamental limitation of the problem: no impartial selection mechanism can be simultaneously negative and positive unanimous (i.e., never selecting as the winner a vertex of in-degree 0 and always selecting the vertex of in-degree n − 1 whenever one exists).

Mackenzie [16] characterized symmetric (i.e., name-independent) rules in the single nomination model. Tamura and Ohseto [18] observed that when the requirement of a single winner is relaxed, impartial, negative unanimous and positive unanimous mechanisms do exist. Later on, Tamura [17] characterized them. On the same agenda, Bjelde et al. [5] proposed a deterministic version of the Permutation mechanism that achieves the 1/2 bound by allowing at most two winners. Alon et al. [1] also present results for selecting multiple winners.

Finally, we remark that impartiality has been investigated as a desirable property in other contexts where strategic behaviour occurs. Recent examples include peer reviewing [3, 14, 15], impartially selecting the most influential vertex in a network [4], and linear regression algorithms, as a means to tackle strategic noise [9].

2 Preliminaries

Let N = {1,...,n} be the set of n ≥ 2 agents. A nomination graph G = (N, E) is a directed graph with vertices representing the agents. The set of outgoing edges of each vertex represents the nominations of the corresponding agent; it contains no self-loops (as agents are not allowed to nominate themselves) and can be empty (as an agent is, in general, allowed to abstain). We write \(\mathcal {G}=\mathcal {G}_{n}\) for the set of all graphs with n vertices and no self-loops. We also use the notation \(\mathcal {G}^{1}=\mathcal {G}^{1}_{n}\) to denote the subset of \(\mathcal {G}\) consisting of graphs in which every vertex has out-degree exactly 1. For convenience in the proofs, we sometimes denote each graph G by a tuple x, called a nomination profile, where \(x_{u}\) denotes the set of outgoing edges of vertex u in G. For \(u\in N\), we use the notation \(\mathbf {x}_{-u}\) to denote the graph \((N, E \setminus (\{u\}\times N))\) and, for a set of vertices \(U\subseteq N\), we use \(\mathbf {x}_{-U}\) to denote the graph \((N, E \setminus (U \times N))\). We use the terms nomination graph and nomination profile interchangeably.

The notation δS(u, x) refers to the in-degree of vertex u in the graph x, taking into account only edges that originate from the subset \(S\subseteq N\). When S = N, we use the shorthand δ(u, x), and when S = {v} we use the simplified notation δv(u, x). If the graph is clearly identified by the context, we omit x too, using δ(u). We denote the maximum in-degree of graph x by \({\Delta }(\mathbf {x})= \max \limits _{u \in N} \delta (u,\mathbf {x})\) and, whenever x is clear from the context, we use Δ instead.
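For concreteness, this notation can be mirrored in a few lines of Python. The following is a minimal sketch (the dictionary representation and all function names are our own, not part of the paper); single nomination profiles correspond to the case where every set x[u] is a singleton.

```python
# Illustrative Python mirror of the notation above; names are ours, not the paper's.
from typing import Dict, Optional, Set

Profile = Dict[int, Set[int]]  # x[u] = set of nominees (outgoing edges) of vertex u

def in_degree(u: int, x: Profile, S: Optional[Set[int]] = None) -> int:
    """delta_S(u, x): number of votes for u originating from S (S = N when omitted)."""
    voters = x.keys() if S is None else S
    return sum(1 for v in voters if u in x.get(v, set()))

def max_in_degree(x: Profile) -> int:
    """Delta(x): the maximum in-degree over all vertices."""
    return max(in_degree(u, x) for u in x)
```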

A selection mechanism for a set of graphs \(\mathcal {G}^{\prime } \subseteq \mathcal {G}\) is a function \(f: \mathcal {G}^{\prime } \rightarrow [0,1]^{n+1}\), mapping each graph of \(\mathcal {G}^{\prime }\) to a probability distribution over the n vertices (the potential winners), with an extra coordinate for the probability of returning no winner at all. A selection mechanism is deterministic in the special case where, for all x, (f(x))u ∈ {0,1} for all vertices u ∈ N.

A selection mechanism is impartial if, for all graphs \(\mathbf {x} \in \mathcal {G}^{\prime }\), all vertices u, and all possible sets \(x^{\prime }_{u}\) of outgoing edges from vertex u, it holds that \((f(\mathbf {x}))_{u}=(f(x^{\prime }_{u},\mathbf {x}_{-u}))_{u}\). In words, the probability that u wins must be independent of the set of its outgoing edges.

We use \(\mathbb {E}\left [ \delta (f(\mathbf {x})) \right ]\) to denote the expected in-degree of f on x, i.e. \(\mathbb {E}\left [ \delta (f(\mathbf {x})) \right ]={\sum }_{u \in N} (f(\mathbf {x}))_{u} \delta (u,\mathbf {x}) \). A selection mechanism f is called α(n)-additive if

$$\max_{\mathbf{x} \in \mathcal{G}^{\prime}_{n}} \left\{\Delta(\mathbf{x}) - \mathbb{E}\left[ \delta(f(\mathbf{x})) \right]\right\} \leq \alpha(n),$$

for every \(n\in \mathbb {N}\).
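To make the last two definitions concrete, the following hedged sketch brute-forces both of them for a deterministic mechanism in the single nomination model, enumerating all profiles on n vertices (feasible only for very small n). It reuses the helpers from the earlier sketch, and the function name is our own.

```python
from itertools import product

def check_impartial_and_additive(f, n: int):
    """For a deterministic mechanism f in the single nomination model, return
    (is_impartial, worst_additive_gap) by exhaustive enumeration. Here f maps a
    tuple x (with x[u] = u's single nominee) to a winning vertex or None."""
    choices = [[v for v in range(n) if v != u] for u in range(n)]  # no self-loops
    impartial, gap = True, 0
    for x in product(*choices):
        profile = {u: {x[u]} for u in range(n)}
        w = f(x)
        win_deg = in_degree(w, profile) if w is not None else 0
        gap = max(gap, max_in_degree(profile) - win_deg)
        for u in range(n):
            for alt in choices[u]:                 # u changes only its own vote
                y = x[:u] + (alt,) + x[u + 1:]
                if (w == u) != (f(y) == u):        # u's winning status changed
                    impartial = False
    return impartial, gap
```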

3 Upper Bounds

In this section we provide randomized selection mechanisms for the two best studied models in the literature. First, in Section 3.1 we propose a mechanism for the single nomination model of Holzman and Moulin [13], where nomination profiles consist only of graphs with all vertices having an out-degree of 1. Then, in Section 3.2 we provide a mechanism for the more general model studied by Alon et al. [1], which allows for multiple nominations and abstentions.

3.1 The Sample and Vote Mechanism

Our first mechanism, Sample and Vote, forms a sample S of vertices by repeating k times the selection of a vertex uniformly at random with replacement. Any vertex that is selected at least once belongs to the sample S. Let W := {u ∈ N ∖ S : δS(u, x) ≥ 1} be the set of vertices outside S that are nominated by the vertices of S. If W = ∅, no winner is returned. Otherwise, the winner is a vertex in \(\arg \max \limits _{u\in W}{\delta _{N\setminus W}(u,\mathbf {x})}\). We note here the crucial fact that the selection of the sample set S is independent of the profile x. An example of this mechanism is shown in Fig. 1a.
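The following Python sketch implements Sample and Vote as described above, using the helpers from the sketch in Section 2; the concrete sampling code and the (arbitrary) tie-breaking inside the arg max are our own implementation choices.

```python
import random

def sample_and_vote(x: Profile, k: int) -> Optional[int]:
    """Sample k vertices uniformly with replacement; S is everything picked at
    least once. W holds the vertices outside S nominated by S, and the winner
    maximizes the in-degree counting only votes from vertices outside W."""
    vertices = list(x.keys())
    S = {random.choice(vertices) for _ in range(k)}
    W = {w for v in S for w in x[v] if w not in S}
    if not W:
        return None                                # no winner is declared
    outside_W = set(vertices) - W
    return max(W, key=lambda u: in_degree(u, x, outside_W))
```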

Fig. 1

Examples for Sample and Vote and Sample and Poll, with sample size k = 3 and n = 12. In both cases we use the same sample set S = {2,3,12}. For Sample and Vote, the vertices in S define the set W = {4,5} of possible winners. The winner is then the vertex with maximum in-degree counting only the votes from N ∖ W to W (the solid edges in the figure). For Sample and Poll, the sample set S immediately determines the winner, as a maximum in-degree vertex counting only edges starting in S, while the edges from vertices in N ∖ S are completely ignored. In both cases, the dark vertex is the winner and the light, dashed-lined vertices belong to the sample set S. All edges drawn with a dotted line are ignored by the mechanism. The shaded area in Fig. 1a shows which vertices belong to the set W

Impartiality follows since a vertex that does not belong to W (whether or not it belongs to S) cannot become the winner, and the nominations of the vertices in W are not taken into account when deciding the winner among them. We now argue that, for a carefully selected k, this mechanism also achieves a good additive guarantee.

Theorem 1

For \(k={\varTheta }(\sqrt {n})\), the Sample and Vote mechanism is impartial and \({\varTheta }(\sqrt {n})\)-additive in the single nomination model.

Proof

Consider a nomination graph and let u∗ be a vertex of maximum in-degree Δ. In our proof of the approximation guarantee, we will use the following two technical lemmas.

Lemma 1

If u∗ ∈ W, then the winner has in-degree at least Δ − k.

Proof

This is clearly true if the winner returned by Sample and Vote is u∗. Otherwise, the winner w satisfies

$$ \begin{array}{@{}rcl@{}} \delta(w,\mathbf{x}) &\geq \delta_{N\setminus W}(w,\mathbf{x}) \geq \delta_{N\setminus W}(u^{*},\mathbf{x})=\delta(u^{*},\mathbf{x})-\delta_{W}(u^{*},\mathbf{x})\geq {\Delta}-k. \end{array} $$

The first inequality is trivial. The second inequality follows by the definition of the winner w. The third inequality follows since W is created by nominations of the vertices in S, taking into account that each vertex has out-degree exactly 1. Hence, δW(u∗,x) ≤ |W| ≤ |S| ≤ k.□

Lemma 2

The probability that u∗ belongs to the nominated set W is

$$\Pr\left[{u^{*}\in W}\right] = \left( 1-\left( 1-\frac{\Delta}{n-1}\right)^{k}\right)\left( 1-\frac{1}{n}\right)^{k}.$$

Proof

Indeed, u∗ belongs to W if it does not belong to the sample S and some of the Δ vertices that nominate u∗ is picked in some of the k vertex selections. The probability that u∗ is not in the sample is

$$ \begin{array}{@{}rcl@{}} \Pr\left[{u^{*}\not\in S}\right] &=\left( 1-\frac{1}{n}\right)^{k}, \end{array} $$
(1)

i.e., the probability that vertex u∗ is not picked in any of the k vertex selections. Observe that the probability that some of the Δ vertices that nominate u∗ is picked in a vertex selection step, assuming that u∗ is never selected, is \(\frac {\Delta }{n-1}\). Hence, the probability that some of the Δ vertices nominating u∗ is in the sample, assuming that u∗ ∉ S, is

$$ \begin{array}{@{}rcl@{}} \Pr\left[{\delta_{S}(u^{*},\mathbf{x})\geq 1|u^{*}\not\in S}\right] & =1-\left( 1-\frac{\Delta}{n-1}\right)^{k}. \end{array} $$
(2)

The lemma follows by the chain rule

$$ \begin{array}{@{}rcl@{}} \Pr\left[{u^{*}\in W}\right]&=&\Pr\left[{u^{*}\not\in S \land \delta_{S}(u^{*},\mathbf{x})\geq 1}\right]\\ &=&\Pr\left[{\delta_{S}(u^{*},\mathbf{x})\geq 1|u^{*}\not\in S}\right]\cdot \Pr\left[{u^{*}\not\in S}\right] \end{array} $$

and (1) and (2). □

By Lemmas 1 and 2, the expected in-degree of the winner returned by the Sample and Vote mechanism is

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left[ \delta(w,\mathbf{x}) \right] &\geq& \Pr\left[{u^{*}\in W}\right]\cdot ({\Delta}-k) \\ &=& \left( 1-\left( 1-\frac{\Delta}{n-1}\right)^{k}\right)\left( 1-\frac{1}{n}\right)^{k} ({\Delta}-k)\\ &\geq& \left( 1-\left( 1-\frac{\Delta}{n-1}\right)^{k}\right)\left( 1-\frac{k}{n}\right) ({\Delta}-k) \\ &>&\left( 1-\left( 1-\frac{\Delta}{n-1}\right)^{k}\right)\left( {\Delta}-2k\right)\\ &=& {\Delta}-2k-\left( 1-\frac{\Delta}{n-1}\right)^{k}\left( {\Delta}-2k\right) \end{array} $$

The second inequality follows by Bernoulli’s inequality \((1+x)^{r} \geq 1+rx\), which holds for every real x ≥ − 1 and every integer r ≥ 0, and the third one holds since n > Δ. Now, the quantity \(\left (1-\frac {\Delta }{n-1}\right )^{k}\left ({\Delta }-2k\right )\) is maximized for \({\Delta }=\frac {n-1+2k^{2}}{k+1}\), attaining a value that is at most \(\frac {n+1}{k+1}-2\). Hence,

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left[ \delta(w,\mathbf{x}) \right] &\geq {\Delta}-2(k-1)-\frac{n+1}{k+1}. \end{array} $$

By setting \(k\in {\varTheta }(\sqrt {n})\), we obtain that \(\mathbb {E}\left [ \delta (w,\mathbf {x}) \right ]\geq {\Delta } - {\varTheta }(\sqrt {n})\), as desired. □

3.2 The Sample and Poll Mechanism

In the most general model, we propose the randomized mechanism Sample and Poll, which is even simpler than Sample and Vote. Sample and Poll forms a sample S of vertices by repeating k times the selection of a vertex uniformly at random with replacement. The winner (if any) is a vertex w in \(\arg \max \limits _{u\in {N\setminus S}}{\delta _{S}(u,\mathbf {x})}\). We remark that, for technical reasons, we allow S to be a multi-set if the same vertex is selected more than once. Then, edge multiplicities are counted in δS(u, x). Clearly, Sample and Poll is impartial: the winner is decided by the vertices in S, which in turn have no chance of becoming winners. Figure 1b shows an example of this mechanism. Our approximation guarantee is slightly weaker now.
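A matching sketch of Sample and Poll follows, under the same assumptions as the Sample and Vote sketch above; note that S is kept as a list so that repeated selections count with multiplicity, as the analysis requires.

```python
def sample_and_poll(x: Profile, k: int) -> Optional[int]:
    """Draw a multiset S of k vertices uniformly with replacement; the winner
    is a vertex outside S with the most nominations from S (multiplicities
    counted, ties broken arbitrarily)."""
    vertices = list(x.keys())
    S = [random.choice(vertices) for _ in range(k)]   # multiset: repeats count
    candidates = [u for u in vertices if u not in S]
    if not candidates:
        return None
    return max(candidates, key=lambda u: sum(1 for v in S if u in x[v]))
```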

Theorem 2

For \(k=\left \lceil 4^{1/3} n^{2/3} \ln ^{1/3}n \right \rceil \), the Sample and Poll mechanism is impartial and \({\varTheta }(n^{2/3} \ln ^{1/3}n)\)-additive.

Proof

Let u∗ be a vertex of maximum in-degree Δ. If Δ ≤ k, Sample and Poll is clearly \({\varTheta }(n^{2/3} \ln ^{1/3}n)\)-additive. So, in the following, we assume that Δ > k. Let C be the set of vertices of in-degree at most Δ − k − 1. We first show that the probability \(\Pr\left[\delta (w,\mathbf {x})\leq {\Delta }-k-1\right]\) that some vertex of C is returned as the winner by Sample and Poll is small.

Notice that if one of the vertices of C is the winner, then either vertex u∗ belongs to the sample set S, or it does not belong to S but gets at most as many nominations from the sample as some vertex u of C. Hence,

$$ \begin{array}{@{}rcl@{}} &&\Pr\left[{\delta(w,\mathbf{x}) \leq {\Delta}-k-1}\right]\\ &\leq& \Pr\left[{u^{*}\in S}\right]+\Pr\left[{u^{*}\not\in S \land \delta_{S}(u^{*},\mathbf{x})\leq \delta_{S}(u,\mathbf{x}) \text{ for some $u\in C$ s.t. $u\not\in S$}}\right]\\ &\leq&\Pr\left[{u^{*}\in S}\right]+ \sum\limits_{u\in C}{\Pr\left[{u^{*}\not\in S \land u\not\in S \land \delta_{S}(u^{*},\mathbf{x})\leq \delta_{S}(u,\mathbf{x})}\right]}\\ &=&\Pr\left[{u^{*}\in S}\right]+ \sum\limits_{u\in C}{\Pr\left[{u^{*}, u\not\in S}\right] \Pr\left[{\delta_{S}(u^{*},\mathbf{x})\leq \delta_{S}(u,\mathbf{x})|u^{*}, u\not\in S}\right]} \end{array} $$
(3)

We will now bound the rightmost probability in (3).

Claim 1

For every u ∈ C, \(\Pr \left [\delta _{S}(u^{*},\mathbf {x})\leq \delta _{S}(u,\mathbf {x})\,|\,u^{*}\not \in S, u\not \in S\right ] \leq \exp \left (-\frac {k^{3}}{2n^{2}}\right )\).

Proof

Assuming that u∗ and u do not belong to the sample set S, we will express the difference δS(u∗,x) − δS(u, x) as the sum of independent random variables Yi for i = 1,...,k. Variable Yi indicates the contribution of the i-th vertex selection to the difference δS(u∗,x) − δS(u, x). In particular, Yi is equal to 1 if the i-th vertex selected in the sample set points to vertex u∗ but not to vertex u, to − 1 if it points to vertex u but not to vertex u∗, and to 0 if it points either to none or to both of them. Hence, \(\delta _{S}(u^{*},\mathbf {x})-\delta _{S}(u,\mathbf {x})={\sum }_{i=1}^{k}{Y_{i}}\) with Yi ∈ {− 1,0,1} and

$$ \begin{array}{@{}rcl@{}} \mathbb{E}\left[ \delta_{S}(u^{*},\mathbf{x})-\delta_{S}(u,\mathbf{x})|u^{*},u\not\in S \right] &=& \left( {\Delta}-\delta_{u}(u^{*},\mathbf{x})-\delta(u,\mathbf{x})+\delta_{u^{*}}(u,\mathbf{x})\right) \frac{k}{n-2} \\ &\geq& \frac{k^{2}}{n}. \end{array} $$

Notice that, for the computation of the expectation, we have used the facts that Δ − δu(u∗,x) vertices besides u have outgoing edges pointing to u∗, that \(\delta (u,\mathbf {x})-\delta _{u^{*}}(u,\mathbf {x})\) vertices besides u∗ have outgoing edges pointing to u, and that each of them is included in the sample set with probability \(\frac {k}{n-2}\). The inequality follows since δ(u, x) ≤ Δ − k − 1 and \(\delta _{u}(u^{*},\mathbf {x}), \delta _{u^{*}}(u,\mathbf {x})\in \{0,1\}\).

We will now apply Hoeffding’s bound, which is stated as follows.

Lemma 3 (Hoeffding [12])

Let X1,X2,...,Xt be independent random variables such that \(\Pr \left [a_{j}\leq X_{j} \leq b_{j}\right ] =1\). Then, the expectation of the random variable \(X={\sum }_{j=1}^{t}{X_{j}}\) is \(\mathbb {E}[X]={\sum }_{j=1}^{t}{\mathbb {E}[X_{j}]}\) and, furthermore, for every ν ≥ 0,

$$\Pr\left[X \leq \mathbb{E}[X]- \nu\right]\leq \exp\left( -\frac{2\nu^{2}}{{\sum}_{j=1}^{t}{(b_{j}-a_{j})^{2}}}\right).$$

In particular, we apply Lemma 3 to the random variable X = δS(u∗,x) − δS(u, x) (assuming that u∗,u ∉ S). Note that t = k, aj = − 1 and bj = 1, and recall that \(\mathbb {E}\left [ X \right ]\geq \frac {k^{2}}{n}\). We obtain

$$ \begin{array}{@{}rcl@{}} \Pr\left[\delta_{S}(u^{*},\mathbf{x})-\delta_{S}(u,\mathbf{x}) \leq 0|u^{*},u\not\in S\right] &=& \Pr\left[X\leq 0\right] \\ &\leq& \exp\left( -\frac{\mathbb{E}\left[ X \right]^{2}}{2k}\right) \leq \exp\left( -\frac{k^{3}}{2n^{2}}\right), \end{array} $$

as desired. □

Using the definition of \(\mathbb {E}\left [ \delta (w,\mathbf {x}) \right ]\), inequality (3), and Claim 1, we obtain

$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}\left[ \delta(w,\mathbf{x}) \right] \geq ({\Delta}-k) \cdot \left( 1-\Pr\left[\delta(w,\mathbf{x})\leq {\Delta}-k-1\right]\right)\\ &\geq& ({\Delta}-k)\Pr\left[{u^{*}\not\in S}\right] \\ &-& ({\Delta}-k)\left( \sum\limits_{u\in C}{\Pr\left[{u^{*}, u\not\in S}\right] \cdot \Pr\left[{\delta_{S}(u^{*},\mathbf{x})\leq \delta_{S}(u,\mathbf{x})|u^{*}, u\not\in S}\right]}\right)\\ &\geq& ({\Delta}-k) \left( 1-\frac{1}{n}\right)^{k}- ({\Delta}-k)\left( \sum\limits_{u\in C}{\left( 1-\frac{2}{n}\right)^{k}\cdot \exp\left( -\frac{k^{3}}{2n^{2}}\right)} \right)\\ &\geq& ({\Delta}-k)\left( 1-\frac{k}{n}\right) - ({\Delta}-k) \cdot n\cdot \exp\left( -\frac{k^{3}}{2n^{2}}\right)\\ &\geq& {\Delta} - 2k -n^{2} \cdot \exp\left( -\frac{k^{3}}{2n^{2}}\right). \end{array} $$
(4)

The last inequality follows since n ≥Δ. Setting \(k=\left \lceil 4^{1/3} n^{2/3} \ln ^{1/3}n \right \rceil \), (4) yields \(\mathbb {E}\left [ \delta (w,\mathbf {x}) \right ]\geq {\Delta }-{\varTheta }\left (n^{2/3} \ln ^{1/3}n\right )\), as desired. □
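The role of this particular choice of k can be checked directly from (4): it makes the exponential error term negligible. A small sketch (the function name is ours):

```python
import math

def poll_sample_size(n: int) -> int:
    """k = ceil(4^(1/3) * n^(2/3) * ln(n)^(1/3)), as in Theorem 2."""
    return math.ceil((4 * n * n * math.log(n)) ** (1 / 3))

# With this k we have k**3 >= 4 * n**2 * ln(n), so the error term in (4) is
#   n**2 * exp(-k**3 / (2 * n**2)) <= n**2 * exp(-2 * ln(n)) = 1,
# and the additive loss is dominated by 2k = Theta(n^(2/3) ln^(1/3) n).
```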

4 Lower Bounds

In this section we complement our positive results by providing impossibility results. First, in Section 4.1, we provide lower bounds for a class of mechanisms which we call strong sample mechanisms, in the single nomination model of Holzman and Moulin [13]. Then, in Section 4.2, we provide a lower bound for the most general model of Alon et al. [1], which applies to any deterministic mechanism.

4.1 Strong Sample Mechanisms

In this section, we give a characterization theorem for a class of impartial mechanisms which we call strong sample mechanisms. We then use this characterization to provide lower bounds on the additive approximation of deterministic and randomized mechanisms that belong to this class. Our results suggest that the Sample and Vote mechanism from Section 3.1 is essentially the best possible randomized mechanism in this class.

For a graph \(G \in \mathcal {G}\) and a subset of vertices S, let W := WS(G) be the set of vertices outside S nominated by S, i.e., W = {w ∈ N ∖ S : (v, w) ∈ E for some v ∈ S}. Then, a deterministic sample mechanism (g, f) first selects a subset S using some sample function \(g: \mathcal {G} \rightarrow 2^{N}\), and then applies a (possibly randomized) selection mechanism f by restricting its range to the vertices in W; notice that if W = ∅, f does not select any vertex.

This definition allows for a large class of mechanisms. For example, the special case of sample mechanisms with |S| = 1 coincides with all negative unanimous mechanisms defined by Holzman and Moulin [13]. Indeed, when |S| = 1, the set W is never empty and the winner has in-degree at least 1. This is not, however, the case for |S| > 1, where W could be empty when all vertices in S have outgoing edges destined for vertices in S, in which case no winner can be declared. Characterizing all impartial sample mechanisms is an outstanding open problem. We take a first step by providing a characterization for the more restricted class of impartial strong sample mechanisms. Informally, in strong sample mechanisms, vertices cannot affect their chance of being selected in the sample set S.

Definition 1

(Deterministic strong sample mechanisms) We call a deterministic sample mechanism (g, f) with sample function \(g: \mathcal {G} \rightarrow 2^{N}\) strong if \(g(x^{\prime }_{u},\mathbf {x}_{-u})=g(\mathbf {x})\) for all u ∈ g(x), \(x^{\prime }_{u} \in N\setminus \{u\}\) and \(\mathbf {x} \in \mathcal {G}\).

The reader may observe the similarity of this definition with impartiality (function g of a strong sample mechanism satisfies properties similar to those of function f of an impartial selection mechanism). The following lemma describes a straightforward, yet useful, consequence of the above definition.

Lemma 4

Let (g, f) be a deterministic strong sample mechanism and let \(S \subseteq N\). For any nomination profiles \(\mathbf {x},\mathbf {x}^{\prime }\) with \(\mathbf {x}_{-S}=\mathbf {x}^{\prime }_{-S}\), if Sg(x)≠ then \(S \setminus g(\mathbf {x}^{\prime })\neq \emptyset \).

Proof

For the sake of contradiction, let us assume that \(S \setminus g(\mathbf {x}^{\prime })= \emptyset \), i.e., every vertex of S belongs to the sample set \(g(\mathbf {x}^{\prime })\). Since x and \(\mathbf {x}^{\prime }\) differ only in the outgoing edges of vertices in S, Definition 1 implies that \(g(\mathbf {x})=g(\mathbf {x}^{\prime })\), as outgoing edges from vertices in the sample set do not affect it. But then, S ∖ g(x) = ∅, which is a contradiction. □

In the next theorem, we provide a characterization of the sample function of deterministic impartial strong sample mechanisms in the single nomination model. The theorem essentially states that the sample set must be chosen independently of the graph.

Theorem 3

In the single nomination model, any impartial deterministic strong sample mechanism (g, f) selects the sample set independently of the nomination profile, i.e., for all \(\mathbf {x},\mathbf {x}^{\prime } \in \mathcal {G}^{1}\), \(g(\mathbf {x})=g(\mathbf {x}^{\prime })=S\).

Proof

Since we are in the single nomination model, without loss of generality, we can use the simplified notation xu = v instead of xu = {(u, v)} for any graph \(\mathbf {x} \in \mathcal {G}^{1}\). Consider any sample mechanism (g, f) and any nomination profile \(\mathbf {x} \in \mathcal {G}^{1}\). It suffices to show that, for any vertex u and any alternative vote \(x^{\prime }_{u}\), the sample set must remain the same, i.e., \(g(x^{\prime }_{u},\mathbf {x}_{-u})=g(\mathbf {x})\). If u ∈ g(x), this immediately follows by Definition 1. In the following, we prove two claims showing that this holds also when u ∉ g(x); Claim 2 treats the case where u is the winner of a profile, while Claim 3 treats the case where u is not a winner.

Claim 2

Let (g, f) be an impartial deterministic strong sample mechanism and let x be any nomination profile in \(\mathcal {G}^{1}\). Then the sample set must remain the same for any other vote of the winner, i.e., \(g(\mathbf {x})=g(x^{\prime }_{f(\mathbf {x})},\mathbf {x}_{-f(\mathbf {x})})\) for any \(x^{\prime }_{f(\mathbf {x})} \in N \setminus \{f(\mathbf {x})\}\).

Proof

Let w = f(x) be the winner for some nomination profile x. We will prove the claim by induction on the in-degree of the winner, δ(w, x). Note that δ(w, x) > 0 for any sample mechanism and any \(\mathbf {x} \in \mathcal {G}^{1}\).

(Base case: δ(w, x) = 1) Let S = g(x) be the sample set for profile x. Assume, for the sake of contradiction, that when w changes its vote to \(x^{\prime }_{w}\), the sample for profile \(\mathbf {x}^{\prime }=(x^{\prime }_{w},\mathbf {x}_{-w})\) changes, i.e., \(g(\mathbf {x}^{\prime })=S^{\prime }\neq S\). We first note that impartiality of f implies that \(w=f(\mathbf {x}^{\prime })\). Next, observe that the vertex voting for w in S must also be in \(S^{\prime }\); otherwise, w would become a winner without getting any vote from the sample set, which contradicts our definition of sample mechanisms. We will show that this must be the case for all vertices in S.

To do this, we will expand two parallel branches, creating a sequence of nomination profiles starting from x and \(\mathbf {x}^{\prime }\) which will eventually lead to a contradiction. Figure 2 depicts the situation for x and \(\mathbf {x}^{\prime }\).

Fig. 2

The starting profiles x and \(\mathbf {x}^{\prime }\) in Claim 2. The dark vertex is the winner, while the light, dashed-lined vertices are the members of the sets S and \(S^{\prime }\), respectively

We start with the profile \(\mathbf {x}^{\prime }\). Consider a vertex \(s^{\prime } \in S^{\prime } \setminus S\). We create a profile \(\mathbf {z}^{\prime }\) in which all vertices in \(S^{\prime }\setminus s^{\prime }\) vote for \(s^{\prime }\) (i.e., \(z^{\prime }_{v}=s^{\prime }\) for each \(v\in S^{\prime }\setminus s^{\prime }\)), vertex \(s^{\prime }\) votes for w (i.e., \(z^{\prime }_{s^{\prime }}=w\)), while the rest of the vertices vote as in \(\mathbf {x}^{\prime }\) (i.e., \(z^{\prime }_{v}=x^{\prime }_{v}\) for each \(v\not \in S^{\prime }\)). For illustration, see Fig. 3a and b. By the definition of a strong sample mechanism, we obtain \(g(\mathbf {z}^{\prime })=g(\mathbf {x}^{\prime })\), since only votes of vertices in \(S^{\prime }\) have changed. Notice also that \(f(\mathbf {z}^{\prime })=w\), as this is the only vertex outside \(S^{\prime }\) that receives votes from \(S^{\prime }\). We now move to profile x and apply the same sequence of deviations, involving all the vertices in \(S^{\prime }\). These lead to the profile z, which differs from \(\mathbf {z}^{\prime }\) only in the outgoing edge of vertex w.

Fig. 3

Profiles z and \(\mathbf {z}^{\prime }\) in the base case of the proof of Claim 2: if \(v=s^{\prime }\) then \(s^{\prime } \notin g(\mathbf {z})\) and, since this is the only vertex voting for w, w cannot win in z, while it must be the winner in \(\mathbf {z}^{\prime }\), a contradiction. Profiles y and \(\mathbf {y}^{\prime }\): if \(s^{\prime } \in g(\mathbf {y})\), we let \(s^{\prime }\) vote for v and v for w, making w the winner in \(\mathbf {y}^{\prime }\) but not in y. A dark circle denotes the winner, while light, dashed-lined circles denote the members of the sample sets S and \(S^{\prime }\). A solid-lined diamond denotes a vertex that cannot be the winner and a dashed-lined diamond denotes a vertex that cannot be in the sample set

By Lemma 4, there is a vertex \(v \in S^{\prime }\) such that v ∉ g(z). If \(v=s^{\prime }\), then we end up in a contradiction: on the one hand, f(z)≠w, since \(s^{\prime }\) is the only vertex voting for w in z and \(s^{\prime }\) is not in the sample; on the other hand, \(f(\mathbf {z}^{\prime }) = w\), as stated in the other branch, and when w changes its vote to \(x^{\prime }_{w}\), the created profile is \((x^{\prime }_{w},\mathbf {z}_{-w})=\mathbf {z}^{\prime }\), contradicting impartiality (see also Fig. 3a and b).

We are now left with the case where \(s^{\prime }\in g(\mathbf {z})\) and \(v\neq s^{\prime }\). Starting from z and \(\mathbf {z}^{\prime }\), we will create profiles y and \(\mathbf {y}^{\prime }\) (see Fig. 3c and d) as follows: we construct y by letting \(s^{\prime }\) vote for v (i.e., \(y_{s^{\prime }}=v\)), v vote for w (i.e., yv = w) and yi = zi for all other vertices \(i\neq v,s^{\prime }\). By the strong sample property, when \(s^{\prime }\) changes its vote to v, the sample set is preserved, i.e., v cannot get in the sample. Also, when v then changes its vote, v still cannot get in the sample (by a trivial application of Lemma 4); therefore, v ∉ g(y). Hence, w cannot be the winner, as its only incoming vote is from v, a vertex that does not belong to the sample set g(y).

Starting from \(\mathbf {z}^{\prime }\), we similarly create \(\mathbf {y}^{\prime }\) by letting \(s^{\prime }\) vote for v (\(y^{\prime }_{s^{\prime }}=v\)), v vote for w (\(y^{\prime }_{v}=w\)) and \(y^{\prime }_{i}=z_{i}\) for all other vertices \(i\neq v,s^{\prime }\). In this case, \(S^{\prime }\) is preserved as the sample set in profile \(\mathbf {y}^{\prime }\) (i.e., \(g(\mathbf {y}^{\prime })=S^{\prime }\)). Therefore, w is the only vertex voted for by the sample set and must be the winner. Since y and \(\mathbf {y}^{\prime }\) differ only in the vote of w, this contradicts impartiality (see Fig. 3c and d).

(Induction step) Assume, as the induction hypothesis, that for all profiles \(\mathbf {x} \in \mathcal {G}^{1}\) it holds that \(g(\mathbf {x})=g(x^{\prime }_{w},\mathbf {x}_{-w})=S\) when δ(w, x) ≤ λ, for some λ ≥ 1. Now, consider any profile x where f(x) = w, S = g(x) and δ(w, x) = λ + 1, and assume, for the sake of contradiction, that there is some graph \(\mathbf {x}^{\prime }=(x^{\prime }_{w},\mathbf {x}_{-w})\) where \(g(\mathbf {x}^{\prime })=S^{\prime } \neq S\). Without loss of generality, let \(\delta _{S}(w,\mathbf {x}) \leq \delta _{S^{\prime }}(w,\mathbf {x}^{\prime })\).

Starting from \(\mathbf {x}^{\prime }\), we create profile \(\mathbf {z}^{\prime }\) by letting all vertices in \(S^{\prime }\) vote for some \(s^{\prime } \in S^{\prime }\) and \(s^{\prime }\) vote for w, i.e., \(z^{\prime }_{v} = s^{\prime }\) for each vertex \(v \in S^{\prime }\setminus \{s^{\prime }\}\) and \(z^{\prime }_{s^{\prime }}=w\). The strong sample property implies that \(g(\mathbf {z}^{\prime })=S^{\prime }\) and \(f(\mathbf {z}^{\prime })=w\). We now focus on profile x and create the profile z by performing the same series of deviations, i.e., by letting all vertices in \(S^{\prime }\setminus \{s^{\prime }\}\) vote for \(s^{\prime }\) and \(s^{\prime }\) vote for w. Note here that z differs from \(\mathbf {z}^{\prime }\) only in the outgoing edge of w. As before, Lemma 4 establishes that there is some vertex \(v \in S^{\prime }\) such that v ∉ g(z), i.e., \(g(\mathbf {z}) \neq S^{\prime }\). Turning our attention back to \(\mathbf {z}^{\prime }\), we let w change its vote to xw, creating the profile \((x_{w},\mathbf {z}^{\prime }_{-w})\). Observe that \((x_{w},\mathbf {z}^{\prime }_{-w})=\mathbf {z}\). When \(\delta (w,\mathbf {z}^{\prime }) < \delta (w,\mathbf {x})\), the induction hypothesis gives \(g(\mathbf {z})=S^{\prime }\), a contradiction.

We also need to handle the case \(\delta (w,\mathbf {z}^{\prime }) = \delta (w,\mathbf {x})\). We will use a series of careful steps to decrease the in-degree of w without changing the sample set. This will allow us to use the induction hypothesis to finalize our proof.

Let L denote the set of vertices which vote for w in profile x. We note here that we may end up in the case \(\delta (w,\mathbf {z}^{\prime }) = \delta (w,\mathbf {x})\) only when a single sample vertex votes for w, i.e., \(|g(\mathbf {z}^{\prime }) \cap L|=1\); otherwise, we could decrease the in-degree of w in \(\mathbf {z}^{\prime }\) without changing the sample set and directly use the induction hypothesis to prove the claim. The aforementioned vertex \(s^{\prime }\) is the single vertex in \(g(\mathbf {z}^{\prime }) \cap L\). Note here that there exists at least one vertex in g(x) ∩ L (since the winner w must receive a vote from the sample); say this is vertex s. If \(\delta (s,\mathbf {z}^{\prime }) \leq \lambda -1\), we can create the profile \(\mathbf {y}^{\prime }\), where \(s^{\prime }\) votes for s (i.e., \(y^{\prime }_{s^{\prime }}=s\) and \(\mathbf {y}^{\prime }=(y^{\prime }_{s^{\prime }},\mathbf {z}^{\prime }_{-s^{\prime }})\)); hence \(f(\mathbf {y}^{\prime })=s\) and \(g(\mathbf {y}^{\prime })=g(\mathbf {z}^{\prime })=S^{\prime }\) (recall that \(s^{\prime }\) is the single vertex in \(g(\mathbf {z}^{\prime })\) voting outside of \(g(\mathbf {z}^{\prime })\)). We can now create the profile \(\mathbf {q}^{\prime }\), where vertex s votes for vertex \(s^{\prime }\) (i.e., \(q^{\prime }_{s}=s^{\prime }\)) and the other vertices vote as in \(\mathbf {y}^{\prime }\), i.e., \(\mathbf {q}^{\prime }=(q^{\prime }_{s},\mathbf {y}^{\prime }_{-s})\). Since \(\delta (s,\mathbf {y}^{\prime }) \leq \lambda \) and \(f(\mathbf {y}^{\prime })=s\), by changing the outgoing edge of the winning vertex s, the sample set does not change, due to the induction hypothesis, i.e., \(g(\mathbf {q}^{\prime })=g(\mathbf {y}^{\prime })=S^{\prime }\). Finally, we create the profile \(\mathbf {r}^{\prime }\), where \(s^{\prime }\) votes for w (i.e., \(r^{\prime }_{s^{\prime }}=w\)) and the other vertices vote as in \(\mathbf {q}^{\prime }\) (i.e., \(\mathbf {r}^{\prime }=(r^{\prime }_{s^{\prime }},\mathbf {q}^{\prime }_{-s^{\prime }})\)). The strong sample property now implies that \(g(\mathbf {r}^{\prime })=g(\mathbf {q}^{\prime })=S^{\prime }\) and \(f(\mathbf {r}^{\prime })=w\). Since \(\delta (w,\mathbf {r}^{\prime })= \lambda \), we can invoke the induction hypothesis once again: we create profile r by letting w vote as in x (i.e., \(\mathbf {r}=(x_{w},\mathbf {r}^{\prime }_{-w})\)) and obtain \(g(\mathbf {r})=S^{\prime }\).

At this point, we reverse our previous moves. First, we create the profile q by letting \(s^{\prime }\) change its vote to s (i.e., \(q_{s^{\prime }}=s\) and \(\mathbf {q}=(q_{s^{\prime }},\mathbf {r}_{-s^{\prime }})\)). Again, the strong sample property implies that \(g(\mathbf {q})=g(\mathbf {r})=S^{\prime }\), which results in f(q) = s. Finally, we create profile y by letting s vote for w (i.e., ys = w and \(\mathbf {y} = (y_{s},\mathbf {q}_{-s})\)). The induction hypothesis now implies that \(g(\mathbf {y})=S^{\prime }\). Observe that y is indeed profile z (i.e., y = z), which is identical to \(\mathbf {z}^{\prime }\) except for the vote of w; hence \(g(\mathbf {z})=S^{\prime }\). Recall, however, that \(g(\mathbf {z})\neq S^{\prime }\), a contradiction. Given that \(\delta (s,\mathbf {z}^{\prime }) \leq \lambda -1\), the claim follows.

If \(\delta (s,\mathbf {z}^{\prime }) > \lambda -1\), we generalize the idea described in the previous two paragraphs. We will redirect enough edges towards \(s^{\prime }\) for the in-degree of s to become equal to λ − 1. To do this, we identify a tree T in \(\mathbf {z}^{\prime }\) with vertex s as the root: we start from the vertices voting for s and select δ(s, z) − (λ − 1) of them as children of s, preferring vertices with in-degree at most λ − 1. If we end up with vertices with in-degree higher than λ − 1, we repeat the process for each such child, until all leaves of the tree have in-degrees at most λ − 1. This is assured, since we are in the single nomination model and each vertex belongs to at most one directed cycle.

Let k be the number of vertices in the tree T and let \(\mathbf {r}^{(0)'}=\mathbf {z}^{\prime }\); hence \(g(\mathbf {r}^{(0)'})=S^{\prime }\). Starting from an arbitrary leaf of T, let vi denote the i-th vertex we visit. For each i ∈ {1,...,k}, we create three profiles: \(\mathbf {y}^{(i)^{\prime }},\mathbf {q}^{(i)^{\prime }}\) and \(\mathbf {r}^{(i)^{\prime }}\). First, we create profile \(\mathbf {y}^{(i)^{\prime }}\) by letting \(s^{\prime }\) vote for vi (i.e., \(y^{(i)^{\prime }}_{s^{\prime }}=v_{i}\)) and the remaining vertices vote as in \(\mathbf {r}^{(i-1)^{\prime }}\), i.e., \(\mathbf {y}^{(i)^{\prime }}=(y^{(i)^{\prime }}_{s^{\prime }},\mathbf {r}^{(i-1)'}_{-s^{\prime }})\). Due to the strong sample property, \(g(\mathbf {y}^{(i)^{\prime }})=g(\mathbf {r}^{(i-1)'})\). Also, \(f(\mathbf {y}^{(i)^{\prime }})=v_{i}\), since vi is the only vertex voted for by the sample. Then we create the profile \(\mathbf {q}^{(i)^{\prime }}\), where \(q^{(i)^{\prime }}_{v_{i}}=s^{\prime }\) and the other vertices vote as in \(\mathbf {y}^{(i)^{\prime }}\), i.e., \(\mathbf {q}^{(i)^{\prime }}=(q^{(i)^{\prime }}_{v_{i}},\mathbf {y}^{(i)^{\prime }}_{-v_{i}})\). Due to impartiality, \(f(\mathbf {q}^{(i)^{\prime }})=v_{i}\). Since we traverse the tree T from the leaves to the root, each vertex vi has in-degree at most λ, and by the induction hypothesis \(g(\mathbf {q}^{(i)^{\prime }})=g(\mathbf {y}^{(i)^{\prime }})=S^{\prime }\). Finally, we create the profile \(\mathbf {r}^{(i)^{\prime }}\) by letting \(s^{\prime }\) vote for w, i.e., \(r^{(i)^{\prime }}_{s^{\prime }}=w\) and \(\mathbf {r}^{(i)^{\prime }}=(r^{(i)^{\prime }}_{s^{\prime }},\mathbf {q}^{(i)^{\prime }}_{-s^{\prime }})\). We traverse the vertices starting from a leaf and, after visiting all vertices at the same level, we pass to the next level, keeping the order of the vertices visited. Note that in all these changes the sample set does not change and each vertex in T (including vertex s, which is traversed last, i.e., vk = s) has in-degree at most λ. An example of this process is depicted in Fig. 4.

Fig. 4

Induction step for the proof of Claim 2; an example with λ = 2. To use the induction hypothesis, we need to decrease the in-degree of w to 2. In Fig. 4a, vertex \(s^{\prime }\) is the single vertex in the sample voting for the winner w. The other vertices in the sample vote for \(s^{\prime }\). To decrease the in-degree of w, we identify the tree T, denoted by the solid thick edges. First, we let \(s^{\prime }\) vote for s3, which is now the winner (Fig. 4b). Then we let s3 vote for \(s^{\prime }\). By impartiality, s3 (with in-degree 1) retains its winner status and the sample set is still the same (Fig. 4c). The same procedure continues until all edges of T are redirected to \(s^{\prime }\) and the in-degree of w decreases to 2; during this process, the sample set remains invariant (Fig. 4d). The dark vertex denotes the winner, while the light, dashed-lined vertices denote members of the sample set

At this point, we start a reverse procedure. We first create the profile \(\mathbf {r}^{(k)}=(x_{w},\mathbf {r}^{(k)^{\prime }}_{-w})\), where we let vertex w vote as in profile x. By the induction hypothesis, \(g(\mathbf {r}^{(k)})=g(\mathbf {r}^{(k)^{\prime }})=S^{\prime }\), since \(f(\mathbf {r}^{(k)^{\prime }})=w\) and \(\delta (w,\mathbf {r}^{(k)^{\prime }}) \leq \lambda \). We then traverse the vertices of the tree T in the opposite direction, i.e., vk,vk− 1,...,v1. For each i ∈ {1,...,k}, we create a similar series of profiles, in which the sample set remains invariant. Starting from r(i), we create the profile q(i), where \(s^{\prime }\) votes for \(q^{(i)^{\prime }}_{s^{\prime }}\), i.e., \(\mathbf {q}^{(i)}=(q^{(i)^{\prime }}_{s^{\prime }},\mathbf {r}^{(i)}_{-s^{\prime }})\). Due to the strong sample property, \(g(\mathbf {q}^{(i)})=g(\mathbf {r}^{(i)})=S^{\prime }\) and \(f(\mathbf {q}^{(i)})=q^{(i)^{\prime }}_{s^{\prime }}\). Observe that q(i) and \(\mathbf {q}^{(i)^{\prime }}\) differ only in the outgoing edge of w. As a result, \(\delta (q^{(i)^{\prime }}_{s^{\prime }},\mathbf {q}^{(i)}) \leq \lambda \). We now create the profile y(i), where \(q^{(i)^{\prime }}_{s^{\prime }}\), the winning vertex in q(i), votes for \(y^{(i)^{\prime }}_{q^{(i)^{\prime }}_{s^{\prime }}}\), i.e., \(y^{(i)}_{q^{(i)^{\prime }}_{s^{\prime }}}=y^{(i)^{\prime }}_{q^{(i)^{\prime }}_{s^{\prime }}}\) and \(\mathbf {y}^{(i)} = (y^{(i)}_{q^{(i)^{\prime }}_{s^{\prime }}},\mathbf {q}^{(i)}_{-q^{(i)^{\prime }}_{s^{\prime }}}) \). Again, y(i) and \(\mathbf {y}^{(i)^{\prime }}\) differ only in the vote of w. Because of the induction hypothesis, \(g(\mathbf {y}^{(i)})=g(\mathbf {q}^{(i)})=S^{\prime }\). Finally, we redirect \(s^{\prime }\) back to w and create the profile r(i− 1) such that \(r^{(i-1)}_{s^{\prime }}=w\) and \(\mathbf {r}^{(i-1)}=(r^{(i-1)}_{s^{\prime }}, \mathbf {y}^{(i)}_{-s^{\prime }})\). Again, \(g(\mathbf {r}^{(i-1)})=g(\mathbf {y}^{(i)})=S^{\prime }\).

After this series of changes, we end up in profile r(0), which differs from \(\mathbf {r}^{(0)^{\prime }}\) only in the outgoing edge of w. Since in all changes described above the sample set remains invariant, \(g(\mathbf {r}^{(0)})=S^{\prime }\). Observe now that r(0) = z, for which we know that \(g(\mathbf {z}) \neq S^{\prime }\), a contradiction. This concludes the proof. □

The next claim establishes the remaining case: no vertex u with u ∉ g(x) and u ≠ f(x) can change the sample set.

Claim 3

Let (g, f) be an impartial deterministic strong sample mechanism, x be a nomination profile in \(\mathcal {G}^{1}\) and u a vertex with u ∉ g(x) and u ≠ f(x). Then \(g(\mathbf {x})=g(x^{\prime }_{u},\mathbf {x}_{-u})\) for any other vote \(x^{\prime }_{u} \in N \setminus \{{u}\}\).

Proof

For the sake of contradiction, consider any profile \(\mathbf {x} \in \mathcal {G}^{1}\) and assume that there exists some nomination profile \(\mathbf {x}^{\prime }=(x^{\prime }_{u},\mathbf {x}_{-u})\) with \(g(\mathbf {x}^{\prime })=S^{\prime } \neq g(\mathbf {x})\). Starting from \(\mathbf {x}^{\prime }\), we define a profile \(\mathbf {z}^{\prime }\) in which all vertices in \(S^{\prime }\) vote for u, and the rest vote as in \(\mathbf {x}^{\prime }\). That is, \(z^{\prime }_{v}=u\), for all \(v\in S^{\prime }\) and \(z^{\prime }_{v}=x^{\prime }_{v}\) otherwise. Clearly \(f(\mathbf {z}^{\prime })=u\), as all the sample vertices vote for u. By Claim 2, we know that \(g(x_{u},\mathbf {z}^{\prime }_{-u})=g(\mathbf {z}^{\prime })=S^{\prime }\).

Starting from x, we define a profile z in which all vertices in \(S^{\prime }\) vote for u, and the rest vote as in x. Since \(S^{\prime }\neq g(\mathbf {x})\), by Lemma 4, we get \(g(\mathbf {z}) \neq S^{\prime }\). Observe that \(\mathbf {z}=(x_{u},\mathbf {z}^{\prime }_{-u})\), which leads to a contradiction. □

This completes the proof of Theorem 3. □

We next use Theorem 3 to obtain lower bounds on the additive approximation guarantee obtained by deterministic strong sample mechanisms.

Corollary 1

There is no impartial deterministic strong sample mechanism with additive approximation better than n − 2 in the single nomination model.

Proof

Let S be the sample set which, by Theorem 3, must be selected independently of x, and let v ∈ S. Define x so that all vertices in N ∖ {v} vote for v, while v votes for some vertex u ∉ S; all vertices other than v then have in-degree either 0 or 1. Then, Δ(x) = n − 1, but u is the only vertex outside S nominated by S, so the mechanism selects a vertex of in-degree exactly 1.□

We remark that the strong sample mechanism that uses a specific vertex as a singleton sample achieves this additive approximation guarantee.
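For illustration, here is the singleton-sample mechanism just mentioned, in the representation of our earlier sketches (in the single nomination model the sample vertex has exactly one nominee, who becomes the winner):

```python
def singleton_sample(x: Profile, s: int = 0) -> Optional[int]:
    """Fixed sample S = {s}: the winner is whichever vertex s nominates."""
    return next(iter(x.get(s, set())), None)
```

On the profile from the proof of Corollary 1 (with v = s), this mechanism returns v's nominee, whose in-degree is 1 while Δ = n − 1, matching the n − 2 bound.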

Our next step is to extend the notion of sample mechanisms to randomized variants and provide a lower bound on their additive approximation guarantee, which shows that Sample and Vote (with \(k={\varTheta }(\sqrt {n})\); see Section 3.1) is an optimal mechanism from this class. We next define the family of randomized strong sample mechanisms.

Definition 2

(Randomized strong sample mechanisms) A randomized strong sample mechanism (g, f) is a probability distribution over a family \(\{(g_{i},f_{i}): i \in \mathbb {N} \}\) of strong sample mechanisms.

Note that Sample and Vote and Sample and Poll are both randomized strong sample mechanisms: for a given k, each of the possible sample sets defines a deterministic sample mechanism, and the winner (if any) belongs to the set W. This is, however, not the case for more complex mechanisms like those appearing in [6] and in [11].

Corollary 2

There is no impartial randomized strong sample mechanism with additive approximation better than \({\varOmega }(\sqrt {n})\) in the single nomination model.

Proof

By Theorem 3, in any deterministic strong sample mechanism, the sample set is the same for any input graph x. Hence, in a randomized strong sample mechanism, the probability that a vertex u belongs to the sample set is determined only by the distribution over sample functions used by the mechanism. As such, it is independent of the input graph. Then, for any such mechanism, we can construct graphs which yield an additive approximation of \({\varOmega }(\sqrt {n})\).

First, if there exists any vertex v ∈ N with \(\Pr\left[v \in S\right] > 1/\sqrt {n}\), then consider a nomination profile in which vertex v has maximum in-degree Δ = n − 1 (i.e., all other vertices point to it), with all other vertices having in-degree either 1 or 0. Since v belongs to the sample (and, hence, cannot be the winner) with probability at least \(1/\sqrt {n}\), the expected degree of the winner is at most \(1+(n-1)(1-1/\sqrt {n}) = {\Delta } - {\varTheta }(\sqrt {n})\).

Otherwise, assume that every vertex v ∈ N has probability at most \(1/\sqrt {n}\) of being selected in the sample set. Consider a nomination profile with a vertex u ∈ N having maximum in-degree \({\Delta }=\sqrt {n}/2\) and all other vertices having in-degree either 0 or 1. Each vertex pointing to u belongs to the sample with probability at most \(1/\sqrt {n}\). Hence, by the union bound, the probability that some of the \(\sqrt {n}/2\) vertices pointing to u is selected in the sample set is at most 1/2. Hence, the probability that u is returned as the winner is at most 1/2, and the expected in-degree of the winner is at most \(1+\sqrt {n}/2 \cdot 1/2 ={\Delta } - {\varTheta }(\sqrt {n})\). □

4.2 General Lower Bound

Our last result is a lower bound for all deterministic impartial mechanisms in the most general model of Alon et al. [1], where each agent can nominate multiple other agents or even abstain. We remark that our current proof applies to mechanisms that always select a winner.

Theorem 4

There is no impartial deterministic α-additive mechanism for α ≤ 2.

Proof

Let f be a deterministic impartial mechanism and, for the sake of contradiction, assume that it achieves an additive approximation of at most 2. We will show that there is a profile with four vertices (denoted by a, b, c and d) in which the winner has in-degree 0 while the maximum in-degree is 3, which leads to a contradiction.

We first consider the profile with no edges, say s, and assume, without loss of generality, that the winner is a (see Fig. 5). Now consider the three profiles x, y and z produced when each of the other three vertices c, b and d, respectively, votes for the other two of them (c votes for d and b, b votes for c and d, and d votes for b and c, as shown in Fig. 5). In all these profiles, the voter (the vertex which changes its outgoing edges compared to profile s) cannot be the winner, since this would break impartiality. Focus, for example, on the profile x. Since c cannot be the winner, the winner must be either a, b or d. There are essentially two cases, which we treat separately.

Fig. 5

Vertex a is the winner in profile s. This leads to the three profiles x, y and z, where each diamond-shaped vertex cannot be the winner

Case 1: a is the Winner for at Least One of x, y, z

Consider the profile x, where vertex c votes for both b and d, and assume that a = f(x) (see Fig. 6). We let a vote for both b and d, to get the profile \(\mathbf {w} = (\{(a, b), (a, d)\},\mathbf {x}_{-a})\). Impartiality implies that a = f(w).

Fig. 6

Case 1 in the proof of Theorem 4. We assume that a is the winner in profile x. When a adds votes, the profile w is created. Then, in the upper profile \(\mathbf {w^{\prime }}\), vertex d must win (for an additive approximation guarantee strictly less than 3) and, in the lower profile \(\mathbf {w^{\prime \prime }}\), vertex b must win. This, however, leads to the final profile t, where both d and b must win due to impartiality, a contradiction

On the one hand, if b votes for d (profile \(\mathbf {w}^{\prime }=(\{(b,d)\},\mathbf {w}_{-b})\)), impartiality implies that \(b\neq f(\mathbf {w}^{\prime })\), and the approximation guarantee allows only \(f(\mathbf {w}^{\prime })=d\) (since Δ = 3, any winner must have in-degree at least 1, and only b and d do). On the other hand, if d votes for b (profile \(\mathbf {w}^{\prime \prime }=(\{(d,b)\},\mathbf {w}_{-d})\)), by similar arguments we have \(f(\mathbf {w}^{\prime \prime })=b\) (see Fig. 6). Now, consider the profile t where both b and d vote for each other, i.e., \(\mathbf {t}=(\{(d,b)\},\mathbf {w^{\prime }}_{-d})\) and, at the same time, \(\mathbf {t}=(\{(b,d)\},\mathbf {w^{\prime \prime }}_{-b})\). Impartiality (applied to \(\mathbf {w}^{\prime }\) and \(\mathbf {w}^{\prime \prime }\), respectively) implies that both b and d must be winners, which is absurd and leads to a contradiction. Similar arguments apply to the other cases, establishing that a cannot be the winner in any of the profiles x, y or z.

Case 2: a is Not the Winner for Any x, y, z

In this case, due to impartiality, only vertices with in-degree 1 are possible winners. Hence, we are left with only two sub-cases: either two of these profiles share the same winner, or all of them have different winners.

In the first sub-case, consider (without loss of generality) the scenario where f(x) = f(y). Impartiality, plus the fact that a is not the winner, implies that f(x) = f(y) = d, the only common candidate of in-degree 1 in the two profiles. Assume that f(z) = b (illustrated in Fig. 7a); the alternative case f(z) = c follows through similar arguments. In profile y, we let d add 2 votes and create profile \(\mathbf {t^{\prime }}=(\{(d,b),(d,c)\},\mathbf {y}_{-d})\). By impartiality, \(f(\mathbf {t^{\prime }})=d\). In profile z, we let b add 2 votes and create again the profile \((\{(b,c),(b,d)\},\mathbf {z}_{-b})=\mathbf {t^{\prime }}\); note that these graphs are, indeed, the same. By impartiality, \(f(\mathbf {t^{\prime }})=b\); hence the impartial mechanism f at profile \(\mathbf {t^{\prime }}\) must award two vertices, a contradiction. Similar arguments hold in all the cases where two of the profiles x, y, z share the same winner.

Fig. 7

Case 2 in the proof of Theorem 4. Vertex a is not the winner in any of the profiles x, y, z. In Fig. 7a, we assume that two of the winners in x, y, z are the same, and this leads to a mechanism with two winners, a contradiction. In Fig. 7b, we assume that no two profiles among x, y, z have the same winner. Impartiality implies that, in the rightmost profile \(\mathbf {t}^{\prime \prime }\) where three vertices have in-degree 2, the winner is vertex a. When a votes for any other vertex, a remains the winner with in-degree 0, while the graph includes a vertex with in-degree 3, a contradiction

We are now left with the case where the profiles x, y and z all have different winners, none of which is a. There are two possible such scenarios: f(x) = d, f(y) = c and f(z) = b (see Fig. 7b), or f(x) = b, f(y) = d and f(z) = c. Consider the first one (similar arguments hold for the second). From the profiles x, y and z we reach the profiles \(\mathbf {x}^{\prime }=(\{(d,b),(d,c)\},\mathbf {x}_{-d})\), \(\mathbf {y}^{\prime }=(\{(c,b),(c,d)\},\mathbf {y}_{-c})\) and \(\mathbf {z}^{\prime }=(\{(b,c),(b,d)\},\mathbf {z}_{-b})\), by letting the respective winners add edges. Because of impartiality, all the winners are preserved, i.e., \(f(\mathbf {x})=f(\mathbf {x}^{\prime })\), \(f(\mathbf {y})=f(\mathbf {y}^{\prime })\) and \(f(\mathbf {z})=f(\mathbf {z}^{\prime })\). Let us now focus on profile \(\mathbf {x}^{\prime }\). By letting vertex b add the edges (b, c) and (b, d), we create the profile \(\mathbf {t}^{\prime \prime }=(\{(b,c),(b,d)\},\mathbf {x}^{\prime }_{-b})\): a directed clique on the vertices b, c and d, plus the vertex a with no incoming or outgoing edges (see Fig. 7b). Focusing now on profile \(\mathbf {y}^{\prime }\), we reach the profile \((\{(d,b),(d,c)\},\mathbf {y}^{\prime }_{-d})=\mathbf {t}^{\prime \prime }\) by a deviation of vertex d; this is the same profile as before. In a similar fashion, from profile \(\mathbf {z}^{\prime }\) we reach the profile \((\{(c,b),(c,d)\},\mathbf {z}^{\prime }_{-c})=\mathbf {t}^{\prime \prime }\) by a deviation of c. By impartiality, \(f(\mathbf {t}^{\prime \prime }) \notin \{b,c,d\}\), which implies that \(f(\mathbf {t}^{\prime \prime })=a\). Now, if a votes for any other vertex, impartiality implies that a must remain the winner, while a's nominee will have in-degree 3, contradicting the approximation guarantee of f.□