
1 Introduction

Algorithmic mechanism design studies optimization problems in which part of the input is not directly available to the algorithm; instead, this data is collected from self-interested players. It seeks polynomial-time algorithms that (approximately) optimize a global objective function (usually called social welfare), subject to the strategic requirement that the best strategy of each player is to truthfully report its part of the input. Such algorithms are called truthful mechanisms.

If the underlying optimization problem can be efficiently solved to optimality, the celebrated VCG mechanism (see, e.g., [17]) achieves truthfulness, social welfare optimization, and polynomial running time.

In general, the underlying optimization problem can only be solved approximately. Lavi and Swamy [15, 16] showed that certain linear-programming-based approximation algorithms for the social welfare problem can be turned into randomized mechanisms that are truthful-in-expectation, i.e., reporting the truth maximizes the expected utility of a player. The LS-mechanism is powerful (see [4, 11, 15, 16] for applications), but unlikely to be efficient in practice because of its use of the Ellipsoid method. We show how to use the multiplicative weights update method instead. This results in simpler algorithms at the cost of somewhat weaker approximation and truthfulness guarantees.

We next review the LS-mechanism. It applies to integer linear programming problems of the packing type for which the linear programming relaxation can be solved exactly and for which an \(\alpha \)-integrality gap verifier is available (definition below). Let \(\mathcal {Q} \subseteq \mathbb R_{\ge 0}^d\) be a packing polytope, i.e., \(\mathcal {Q}\) is the intersection of finitely many halfspaces, and if \(y \in \mathcal {Q}\) and \(0 \le x \le y\) then \(x \in \mathcal {Q}\). We use \(\mathcal {Q}_{\mathcal {I}} :=\mathcal {Q} \cap \mathbb Z^d\) for the set of integral points in \(\mathcal {Q}\), \(x^j\) for a typical element of \(\mathcal {Q}_{\mathcal {I}}\), and \(\mathcal {N}\) for the index set of all elements in \(\mathcal {Q}_{\mathcal {I}}\). The mechanism consists of three main steps:

  1.

    Let \(v_i \in \mathbb R_{\ge 0}^d, 1\le i\le n,\) be the valuation of the i-th player and let \(v = \sum _i v_i\) be the accumulated valuation. Solve the LP-relaxation, i.e., find a maximizer \(x^* = {\text {argmax}}_{x \in \mathcal {Q}} v^T x\) for the social welfare of the fractional problem, and determine the VCG prices \(p_1,\ldots ,p_n\). The allocation \(x^*\) and the VCG-prices are a truthful mechanism for the fractional problem.

  2.

    Write \(\alpha \cdot x^*\) as a convex combination of integral solutions in \(\mathcal {Q}\), i.e., \(\alpha \cdot x^* = \sum _{j \in \mathcal {N}} \lambda _j x^j\), \(\lambda _j \ge 0\), \(\sum _{j \in \mathcal {N}} \lambda _j = 1\), and \(x^j \in \mathcal {Q}_{\mathcal {I}}\). This step requires an \(\alpha \)-integrality-gap-verifier for \(\mathcal {Q}_{\mathcal {I}}\) for some \(\alpha \in [0,1]\). On input \(\bar{v}\in \mathbb R_{\ge 0}^d\) and \(x^* \in \mathcal {Q}\), an \(\alpha \)-integrality-gap-verifier returns an \(x \in \mathcal {Q}_{\mathcal {I}}\) such that

    $$\begin{aligned} \bar{v}^T x \ge \alpha \bar{v}^T x^*. \end{aligned}$$
  3.

    Pick the integral solution \(x^j\) with probability \(\lambda _j\), and charge the i-th player the price \(p_i \cdot (v_i^T x^j/ v_i^T x^*)\); if \(v_i^T x^* = 0\), charge zero.

The LS-mechanism approximates social welfare with factor \(\alpha \) and guarantees truthfulness-in-expectation, i.e., it converts a truthful fractional mechanism into an \(\alpha \)-approximate truthful-in-expectation integral mechanism. With respect to practical applicability, Steps 1 and 2 are the two major bottlenecks. Step 1 requires solving a linear program; if the dimension is exponential, an exact solution requires the use of the Ellipsoid method (see e.g. [10]). Furthermore, until recently, the only method known to perform the decomposition in Step 2 was through the Ellipsoid method. An alternative method avoiding the Ellipsoid method was recently given by Kraft, Fadaei, and Bichler [14]. We comment on their result in the next section.
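To make Step 3 concrete, the sampling and pricing can be sketched as follows. This is a minimal Python sketch; the function name `ls_round`, the dense-vector representation, and the argument names are our assumptions, not part of the LS papers.

```python
import random

def ls_round(x_star, decomposition, valuations, vcg_prices):
    """Step 3 of the LS scheme: sample an integral point x^j with
    probability lambda_j and scale each player's VCG price by
    v_i^T x^j / v_i^T x^*."""
    weights = [lam for lam, _ in decomposition]
    points = [x for _, x in decomposition]
    x_j = random.choices(points, weights=weights, k=1)[0]

    def dot(u, w):
        return sum(a * b for a, b in zip(u, w))

    prices = []
    for v_i, p_i in zip(valuations, vcg_prices):
        frac_val = dot(v_i, x_star)
        # a player with zero value for the fractional optimum pays nothing
        prices.append(p_i * dot(v_i, x_j) / frac_val if frac_val > 0 else 0.0)
    return x_j, prices
```

By construction, player i's expected payment equals \(p_i\) scaled by \(\mathbb {E}[v_i^T x^j]/v_i^Tx^*\), which is the point on which truthfulness-in-expectation rests.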

1.1 Our Results

Our result concerns the design and analysis of a practical algorithm for the LS-scheme. We first consider the case where the LP-relaxation of SWM (social welfare maximization) in Step 1 of the LS-scheme can be solved exactly and efficiently; then our problem reduces to the design of a practical algorithm for Step 2, and we present a fast and practical algorithm for the convex decomposition. Afterwards, we consider the more general setting where the LP-relaxation in Step 1 of the LS-scheme cannot be solved exactly and we only have an approximate solution that maximizes the LP-relaxation within a factor of \(1-\varepsilon \).

Convex Decomposition. Over the past 15 years, simple and fast methods [2, 8, 9, 12, 13, 18, 19] have been developed for solving packing and covering linear programs within an arbitrarily small error guarantee \(\varepsilon \). These methods are based on the multiplicative weights update (MWU) method [1], in which a very simple update rule is repeatedly performed until a near-optimal solution is obtained. We show how to replace the use of the Ellipsoid method in Step 2 by an approximation algorithm for covering linear programs. This result is the topic of Sect. 2.

Theorem 1

Let \(\varepsilon > 0\) be arbitrary. Given a fractional point \(x^* \in \mathcal {Q}\), and an \(\alpha \)-integrality-gap verifier for \(\mathcal {Q}_{\mathcal {I}}\), we can find a convex decomposition

$$\begin{aligned} \frac{\alpha }{1+4\varepsilon }\cdot x^*=\sum _{j\in \mathcal {N}}\lambda _jx^j. \end{aligned}$$

The convex decomposition has size (= number of nonzero \(\lambda _j\)) at most \(s(1 + \lceil \varepsilon ^{-2} \ln s \rceil )\), where s is the size of the support of \(x^*\) (= number of nonzero components). The algorithm makes at most \(s \lceil \varepsilon ^{-2} \ln s \rceil \) calls to the integrality-gap-verifier.

Kraft, Fadaei, and Bichler [14] obtained a related result independently. However, their construction is less efficient in two aspects. First, it requires \(O(s^2 \varepsilon ^{-2})\) calls of the oracle. Second, the size of their convex decomposition might be as large as \(O(s^3 \varepsilon ^{-2})\). In the combinatorial auction problem, \(s=n+m\). Theorem 1 together with Steps 1 and 3 of the LS scheme implies a mechanism that is truthful-in-expectation and has \((\alpha /(1 + 4\varepsilon ))\)-social efficiency. A mechanism has \(\gamma \)-social efficiency, where \(\gamma \in [0,1]\), if the expected social welfare of the allocation returned by the mechanism is at least \(\gamma \) times the maximum possible social value.

Approximately Truthful-in-Expectation Mechanism. In contrast to the Lavi-Swamy mechanism, let us assume that we do not solve the LP-relaxation exactly but instead use an \(\varepsilon \)-approximation algorithm \(\mathcal {A}\) for it. Garg and Könemann [8] showed that there is an FPTAS for the packing problem, so such an \(\mathcal {A}\) exists for every \(\varepsilon >0\). Using this, we show how to construct, for given \(\varepsilon _0\in (0, 1/2]\) and \(\varepsilon =\varTheta (\frac{\varepsilon _0^5}{n^4})\), a fractional randomized mechanism that satisfies:

  1.

    No positive transfer (i.e., prices are non-negative).

  2.

    Individually rational with probability \(1 - \varepsilon _0\) (i.e., the utility of any truth-telling player is non-negative with probability at least \(1 - \varepsilon _0\)).

  3.

    \((1 - \varepsilon _0)\)-truthful-in-expectation (i.e., reporting the truth maximizes the expected utility of a player up to a factor \(1 - \varepsilon _0\)).

  4.

    \((1 - \varepsilon ) (1-\varepsilon _0)\)-social efficiency.

Now, let us assume that x is a fractional allocation obtained from the above mechanism. We apply our convex decomposition technique and Step 3 of the Lavi-Swamy mechanism to obtain an integral randomized mechanism that satisfies the aforementioned conditions 1 to 3 and has \(\alpha (1 - \varepsilon )(1 - \varepsilon _0)/(1 +4 \varepsilon )\)-social efficiency. We show this result in Sect. 4.

Note that our fractional mechanism refines the one given in [5], where the dependency of \(\varepsilon \) on n and \(\varepsilon _0\) is as \(\varepsilon =\varTheta ({\varepsilon _0}/{n^9})\). A recent experimental study of our mechanism on Display Ad Auctions [6] shows the applicability of these methods in practice.

2 A Fast Algorithm for Convex Decompositions

Let \(x^* \in \mathcal {Q}\) be arbitrary. Carr and Vempala [3] showed how to construct a convex combination of points in \(\mathcal {Q}_{\mathcal {I}}\) dominating \(\alpha x^*\) using a polynomial number of calls to an \(\alpha \)-integrality-gap-verifier for \(\mathcal {Q}_{\mathcal {I}}\). Lavi and Swamy [16] modified the construction to get an exact convex decomposition \(\alpha x^* = \sum _{i \in \mathcal {N}}\lambda _i x^i\) for the case of packing linear programs. The construction uses the Ellipsoid method. We show an approximate version that replaces the use of the Ellipsoid method by the multiplicative weights update (MWU) method. For any \(\varepsilon > 0\), we show how to obtain a convex decomposition of \(\alpha x^*/(1 + \varepsilon )\). Let s be the number of non-zero components of \(x^*\). The size of the decomposition and the number of calls to the \(\alpha \)-integrality gap verifier are \(O(s \varepsilon ^{-2} \ln s)\).

This section is structured as follows. We first review Khandekar's FPTAS for covering linear programs (Subsect. 2.1). We then use it and the \(\alpha \)-integrality gap verifier to construct a dominating convex combination for \(\alpha x^*/(1 + 4 \varepsilon )\), where \(x^* \in \mathcal {Q}\) is arbitrary (Subsect. 2.2). In Subsect. 2.3, we show how to convert a dominating convex combination into an exact convex decomposition. Finally, in Subsect. 2.4, we put the pieces together.

2.1 An FPTAS for Covering Linear Programs

Consider a covering linear program:

$$\begin{aligned} \min ~ c^{T}x \quad \text {s.t.}\quad Ax\ge b,~~ x\ge 0, \end{aligned}$$
(1)

where \(A\in \mathbb R_{\ge 0}^{m\times n}\) is an \(m\times n\) matrix with non-negative entries and \(c\in \mathbb R_{\ge 0}^n\) and \(b\in \mathbb R_{\ge 0}^m\) are non-negative vectors. We assume the availability of a \(\kappa \)-approximation oracle for some \(\kappa \in (0,1]\).

  • \(\mathcal O_\kappa (z)\): Given \(z\in \mathbb {R}^m_{\ge 0}\), the oracle finds a column j of A that maximizes \(\frac{1}{c_j}\sum _{i=1}^{m}\frac{z_ia_{ij}}{b_i}\) within a factor of \(\kappa \):

    $$\begin{aligned} \frac{1}{c_j}\sum _{i=1}^{m}\frac{z_ia_{ij}}{b_i} \ge \kappa \cdot \max _{j'\in [n]}\frac{1}{c_{j'}}\sum _{i=1}^{m}\frac{z_ia_{ij'}}{b_i} \end{aligned}$$
    (2)
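For an explicitly given instance, the exact oracle (\(\kappa =1\)) is just a maximization over columns. A minimal Python sketch (the function name and the list-of-lists matrix representation are our assumptions):

```python
def exact_oracle(A, b, c, z):
    """kappa = 1 version of the oracle: return the column j of A that
    maximizes (1/c_j) * sum_i z_i * a_ij / b_i, by explicit enumeration."""
    m, n = len(A), len(A[0])

    def score(j):
        return sum(z[i] * A[i][j] / b[i] for i in range(m)) / c[j]

    return max(range(n), key=score)
```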

For an exact oracle (\(\kappa =1\)), Khandekar [12] gave an algorithm which computes a feasible solution \(\hat{x}\) to the covering LP (1) such that \(c^T\hat{x}\le (1+4\varepsilon )z^*\), where \(z^*\) is the value of an optimal solution. The algorithm makes \(O(m\varepsilon ^{-2}\log m)\) calls to the oracle, where m is the number of rows of A. If the exact oracle in Khandekar's algorithm is replaced by a \(\kappa \)-approximation oracle, it computes a feasible solution \(\hat{x}\in \mathbb R_{\ge 0}^n\) to (1) such that \(c^T\hat{x}\le (1+4\varepsilon )z^*/\kappa \). The algorithm is given as Algorithm 1 and can be thought of as the algorithmic dual of the FPTAS for multicommodity flows given in [8]. We use \(A_i\) to denote the i-th row of A. The algorithm constructs vectors \(x(t)\in \mathbb R_{\ge 0}^n\), for \(t=0,1,\ldots ,\) until \(M(t):=\min _{i\in [m]} A_ix(t)/b_i\) becomes at least \(T:=\frac{\ln m}{\varepsilon ^2}\). Define the active list at time t by \(L(t):=\{ i\in [m] : A_ix(t-1)/b_i <T \}\). For \(i\in L(t)\), define

$$\begin{aligned} p_{i}(t):=(1-\varepsilon )^{A_i x(t-1)/b_i}, \end{aligned}$$
(3)

and set \(p_i(t)=0\) for \(i\not \in L(t)\). At each time t, the algorithm calls the oracle with the vector \(z_t=p(t)/ \Vert p(t) \Vert _1\), and increases the variable \(x_{j(t)}\) by

$$\begin{aligned} \delta (t):=\min _{i\in L(t),~ a_{i,j(t)} \ne 0} \frac{b_i}{a_{i,j(t)}}, \end{aligned}$$
(4)

where j(t) is the index returned by the oracle. Due to lack of space, the proofs of the following theorem and corollary are presented in the full paper [7].

[Algorithm 1: Covering\((\mathcal O_\kappa )\) — pseudocode figure omitted]
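The update rules (3) and (4) can be rendered in a few lines of Python. This is a simplified sketch for an explicitly given matrix; the final division by T, which makes the accumulated vector feasible, is our reading of the analysis and not stated verbatim in the text.

```python
import math

def covering_mwu(A, b, c, oracle, eps=0.25):
    """Sketch of procedure Covering(O_kappa) for min c^T x s.t. Ax >= b,
    x >= 0, with all data nonnegative.  oracle(z) returns a column index
    as in (2)."""
    m, n = len(A), len(A[0])
    T = math.log(m) / eps ** 2
    x = [0.0] * n

    def row(i):  # A_i x / b_i
        return sum(A[i][j] * x[j] for j in range(n)) / b[i]

    while min(row(i) for i in range(m)) < T:
        active = [i for i in range(m) if row(i) < T]      # the list L(t)
        p = [0.0] * m
        for i in active:
            p[i] = (1 - eps) ** row(i)                    # weights (3)
        total = sum(p)
        z = [pi / total for pi in p]
        j = oracle(z)
        # step size (4): increase x_j until some active row gets covered
        delta = min(b[i] / A[i][j] for i in active if A[i][j] != 0)
        x[j] += delta
    return [xj / T for xj in x]  # scaled so that A xhat >= b
```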

Theorem 2

Let \(\varepsilon \in (0,\frac{1}{2}]\) and let \(z^*\) be the value of an optimum solution to (1). Procedure Covering\((\mathcal O_\kappa )\) (see Algorithm 1) terminates in at most \(m\lceil \varepsilon ^{-2} \ln m \rceil \) iterations with a feasible solution \(\hat{x}\) of (1) having at most \(m\lceil \varepsilon ^{-2} \ln m \rceil \) positive components. At termination, it holds that

$$\begin{aligned} c^T\hat{x}\le \frac{(1+4\varepsilon )}{\kappa }z^*. \end{aligned}$$
(5)

We observe that the proof of Theorem 2 can be modified to give:

Corollary 1

Suppose \(b=\varvec{1}\), \(c=\varvec{1}\), and we use the following oracle \(\mathcal O'\) instead of \(\mathcal O_\kappa \) in Algorithm 1:

  • \(\mathcal O'(A,z)\): Given \(z\in \mathbb {R}^m_{\ge 0}\), such that \(\varvec{1}^Tz=1\), the oracle finds a column j of A such that \(z^TA \varvec{1}_j\ge 1\).

Then the algorithm terminates in at most \(m\lceil \varepsilon ^{-2} \ln m \rceil \) iterations with a feasible solution \(\hat{x}\) having at most \(m\lceil \varepsilon ^{-2} \ln m \rceil \) positive entries, such that \(\varvec{1}^T\hat{x}\le 1+4\varepsilon \).

2.2 Finding a Dominating Convex Combination

Recall that we use \(\mathcal {N}\) to index the elements in \(\mathcal {Q}_{\mathcal {I}}\). We assume the availability of an \(\alpha \)-integrality-gap-verifier \(\mathcal F\) for \(\mathcal {Q}_{\mathcal {I}}\). We will use the results of the preceding section and show how to obtain, for any \(x^*\in \mathcal {Q}\) and any positive \(\varepsilon \), a convex combination of points in \(\mathcal {Q}_{\mathcal {I}}\) that covers \(\alpha x^*/(1 + 4 \varepsilon )\). Our algorithm requires \(O(s \varepsilon ^{-2} \ln s)\) calls to the oracle, where s is the size of the support of \(x^*\).

Theorem 3

Let \(\varepsilon > 0\) be arbitrary. Given a fractional point \(x^* \in \mathcal {Q}\) and an \(\alpha \)-integrality-gap verifier \(\mathcal F\) for \(\mathcal {Q}_{\mathcal {I}}\), we can find a convex combination \(\bar{x}\) of integral points in \(\mathcal {Q}_{\mathcal {I}}\) such that

$$\begin{aligned} \frac{\alpha }{1+4\varepsilon }\cdot x^* \le \bar{x} = \sum _{i\in \mathcal {N}}\lambda _ix^i . \end{aligned}$$

The convex decomposition has size at most \(s \lceil \varepsilon ^{-2} \ln s \rceil \), where s is the number of positive entries of \(x^*\). The algorithm makes at most \(s \lceil \varepsilon ^{-2} \ln s \rceil \) calls to the integrality-gap verifier.

Proof

The task of finding the multipliers \(\lambda _i\) is naturally formulated as a covering LP, namely,

$$\begin{aligned} \min \quad&\sum _{i\in \mathcal {N}} \lambda _{i}\\ s.t.&\sum _{i\in \mathcal {N}} \lambda _{i}x^i_{j} \ge \alpha \cdot x_{j}^* \quad \text {for all}~j,\nonumber \\&\sum _{i\in \mathcal {N}} \lambda _i \ge 1,\nonumber \\&\lambda _{i} \ge 0.\nonumber \end{aligned}$$
(6)

Clearly, we can restrict our attention to the indices \(j \in S^+ :=\{ j : x^*_j > 0 \}\) and rewrite the constraint for \(j \in S^+\) as \( \sum _{i\in \mathcal {N}} \lambda _{i}x^i_{j}/ (\alpha \cdot x_{j}^*) \ge 1\). For simplicity of notation, we assume \(S^+ = [1..s]\). In the language of the preceding section, we have \(m=s+1\), \(n=|\mathcal {N}|\), \(c=\varvec{1}\), \(b=\varvec{1}\), and the variable \(x=\lambda \). The matrix \(A=(a_{j,i})\) is as follows (note that we use j for the row index and i for the column index):

$$ a_{j,i}:= \left\{ \begin{array}{l l} x^i_j/(\alpha x^*_j) &{} \quad 1\le j\le s, i\in \mathcal {N}\\ 1 &{} \quad j=s+1, i\in \mathcal {N}\\ \end{array} \right. $$

Thus we can apply Corollary 1 of Sect. 2.1, provided we can efficiently implement the required oracle \(\mathcal O'\). We do so using \({\mathcal F}\).

Oracle \(\mathcal O'\) has arguments \((A, \tilde{z})\) such that \(\varvec{1}^T\tilde{z}=1\). Let us conveniently write \(\tilde{z}=(w,z)\), where \(w\in \mathbb {R}_{\ge 0}^{s}\), \(z\in \mathbb R_{\ge 0}\), and \(\sum _{j=1}^{s}w_j+z=1\). Oracle \(\mathcal O'\) needs to find a column i such that \(\tilde{z}^TA\varvec{1}_i\ge 1\). In our case \(\tilde{z}^TA\varvec{1}_i=\sum _{j=1}^s w_j x^i_j/(\alpha x^*_j)+z\), and we need to find a column i for which this expression is at least one. Since z does not depend on i, we concentrate on the first term. Define

$$ V_j:= \left\{ \begin{array}{l l} \frac{w_j}{\alpha x^*_j}&{} \quad \text {for}~j \in S^+\\ 0 &{} \quad \text {otherwise}.\\ \end{array}\right. $$

Call algorithm \({\mathcal F}\) with \(x^*\in \mathcal {Q}\) and \(V:=(V_1,\ldots ,V_d)\). \({\mathcal F}\) returns an integer solution \(x^i\in \mathcal {Q_I}\) such that

$$\begin{aligned} V^T x^i=\sum _{j\in S^+}\frac{w_j}{\alpha x_j^*}x^i_j\ge \alpha \cdot V^Tx^*=\sum _{j\in S^+} w_j, \end{aligned}$$

and hence,

$$\begin{aligned} \sum _{j\in S^+} \frac{w_j}{\alpha x_j^*}x^i_j+z \ge \sum _{j\in S^+}w_j+z=1. \end{aligned}$$

Thus i is the desired column of A.
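The reduction just described can be sketched in Python as follows. The interfaces are hypothetical: `verifier(V, x_star)` stands for \(\mathcal F\) and returns an integral point x with \(V^Tx \ge \alpha V^Tx^*\); `w` is given as a full-length vector that is zero off the support of \(x^*\).

```python
def oracle_via_verifier(w, z, x_star, alpha, verifier):
    """Oracle O' for covering LP (6), implemented with an
    alpha-integrality-gap verifier.  Returns the integral point x^i whose
    column of A satisfies z~^T A 1_i >= 1."""
    d = len(x_star)
    # V_j = w_j / (alpha * x*_j) on the support of x*, zero elsewhere
    V = [w[j] / (alpha * x_star[j]) if x_star[j] > 0 else 0.0
         for j in range(d)]
    x_i = verifier(V, x_star)
    # sanity check mirroring the argument in the text:
    # V^T x^i + z >= alpha * V^T x^* + z = sum_j w_j + z = 1
    assert sum(V[j] * x_i[j] for j in range(d)) + z >= 1 - 1e-9
    return x_i
```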

It follows by Corollary 1 that Algorithm 1 finds a feasible solution \(\lambda ' \in \mathbb R_{\ge 0}^{|\mathcal {N}|}\) to the covering LP (6) and a set \(\mathcal {Q}_\mathcal {I}'\subseteq \mathcal {Q}_\mathcal {I}\) of vectors (returned by \(\mathcal {F}\)) such that \(\lambda '_i>0\) only for \(i\in \mathcal {N}'\), where \(\mathcal {N}'\) is the index set of the columns returned by oracle \(\mathcal {O}'\), \(|\mathcal {N}' |\le s \lceil \varepsilon ^{-2} \ln s \rceil \), and \( \varLambda :=\sum _{i\in \mathcal {N}'}\lambda '_i\le 1+4\varepsilon \). Scaling each \(\lambda '_i\) by \(\varLambda \), we obtain a set of multipliers \(\{\lambda _i=\lambda _i'/\varLambda :~ i\in \mathcal {N}'\}\) such that \(\sum _{i\in \mathcal {N}'}\lambda _i =1\) and

$$\begin{aligned} \sum _{i\in \mathcal {N}'}\lambda _i x^i\ge \frac{\alpha }{1+4\varepsilon } x^*. \end{aligned}$$
(7)

We may assume \(x^i_j=0\) for all \(j\notin S^+\) whenever \(\lambda _i > 0\); otherwise simply replace \(x^i\) by the vector in which all components not in \(S^+\) are set to zero, which lies in \(\mathcal {Q}_\mathcal {I}\) by the packing property.    \(\square \)

2.3 From Dominating Convex Combination to Exact Convex Decomposition

We will show how to turn a dominating convex combination into an exact decomposition. The construction is general and uses only the packing property. Such a construction seems to have been observed in [15], but was not made explicit. Kraft, Fadaei, and Bichler [14] describe an alternative construction. Their construction may increase the size of the convex decomposition (= number of non-zero \(\lambda _i\)) by a multiplicative factor s and an additive factor \(s^2\). In contrast, our construction increases the size only by an additive factor s.

[Algorithm 2 — pseudocode figure omitted]

Theorem 4

Let \(x^* \in \mathcal {Q}\) be dominated by a convex combination \(\sum _{i \in \mathcal {N}} \lambda _i x^i\) of integral points in \(\mathcal {Q}_{\mathcal {I}}\), i.e.,

$$\begin{aligned} \sum _{i \in \mathcal {N}} \lambda _i x^i \ge x^*. \end{aligned}$$
(8)

Then Algorithm 2 achieves equality in (8). It increases the size of the convex combination by at most s, where s is the number of positive components of \(x^*\).

Proof

Let \(\varvec{1}_j\) be the j-th unit vector. As long as there are an \(i \in \mathcal {N}\) and a j such that \(\lambda _ix^i_j > 0\) and replacing \(x^i\) by \(x^i - \varvec{1}_j\) maintains feasibility, i.e., satisfies constraint (8), we perform this replacement. Note that \(x^i\) is an integer vector in \(\mathcal {Q}_{\mathcal {I}}\); hence \(x^i-\varvec{1}_j\) is a nonnegative integer vector and, by the packing property, also lies in \(\mathcal {Q}_{\mathcal {I}}\). We may therefore assume that the set of vectors indexed by \(\mathcal {N}\) satisfies the following minimality condition: for all \(i\in \mathcal {N}\) and \(j\in S^+\) with \(\lambda _ix^i_j > 0\),

$$\begin{aligned} \sum _{h \in \mathcal {N}}\lambda _{h} x_j^{h} -\lambda _i \varvec{1}_j < x_j^* \end{aligned}$$
(9)

We will establish (9) as an invariant of the second while-loop.

For \(j \in S^+\), let \(\varDelta _j = \sum _{i\in \mathcal {N}}\lambda _i x^i_j - x^*_j \). Then \(\varDelta _j \ge 0\) and, by (9), for every \(j\in S^+\) and \(i\in \mathcal {N}\) with \(\lambda _i\ne 0\), either \(x_j^i=0\) or \(\varDelta _j<\lambda _i\le \lambda _i x_j^i\). If \(\varDelta _j = 0\) for all \(j\in S^+\), we are done. Otherwise, choose j and \(i \in \mathcal {N}\) such that \(\varDelta _{j} > 0\) and \(x^i_{j} > 0\). Let \(B=\{j\in S^+:~ x^i_j\ne 0 \text { and } \varDelta _j > 0\}\) be the set of indices in the support of \(x^i\) for which \(\varDelta _j\) is non-zero. We will change the left-hand side of (8) such that equality holds for all indices in B. The change will not destroy an already existing equality for an index outside B, and hence the number of indices for which equality holds increases by \(\left| B\right| \).

Let \(b = \left| B\right| \). By renumbering the coordinates, we may assume \(B = \{1,\ldots ,b\}\) and \(\varDelta _1/x^i_1 \le \ldots \le \varDelta _b/x^i_b\). For \(j \in [b]\), we clearly have

$$\begin{aligned} \lambda _i - \frac{\varDelta _j}{x^i_j} = \lambda _i - \frac{\varDelta _b}{x^i_b} + \frac{\varDelta _b}{x^i_b} - \frac{\varDelta _{b-1}}{x^i_{b-1}} + \cdots + \frac{\varDelta _{j+1}}{x^i_{j+1}} - \frac{\varDelta _j}{x^i_j}. \end{aligned}$$

Multiplying by \(x^i_j\) and adding zero a few times, we obtain

$$\begin{aligned} \lambda _i x^i_j - \varDelta _j&= \left( \lambda _i - \frac{\varDelta _b}{x^i_b}\right) x^i_j + \sum _{\ell =j}^{b-1}\left( \frac{\varDelta _{\ell + 1}}{x^i_{\ell + 1}} - \frac{\varDelta _{\ell }}{x^i_{\ell }}\right) x^i_j + \sum _{\ell = 1}^{j-1}\left( \frac{\varDelta _{\ell + 1}}{x^i_{\ell + 1}} - \frac{\varDelta _{\ell }}{x^i_{\ell }}\right) 0 + \frac{\varDelta _1}{x^i_1} 0. \end{aligned}$$

For \(\ell \in \{0,\ldots , b-1\}\) define a vector \(y^\ell \) by \(y^\ell _j = x^i_j\) for \(j \le \ell \) and \(y^\ell _j = 0\) for \(j > \ell \). Then \(x^i_j = y^\ell _j\) for \(\ell \ge j\) and \(0 = y^\ell _j\) for \(\ell < j\). Hence for all \(j \le b\)

$$\begin{aligned} \lambda _i x^i_j - \varDelta _j&= \left( \lambda _i - \frac{\varDelta _b}{x^i_b}\right) x^i_j + \sum _{\ell =1}^{b-1}\left( \frac{\varDelta _{\ell + 1}}{x^i_{\ell + 1}} - \frac{\varDelta _{\ell }}{x^i_{\ell }}\right) y^{\ell }_j + \frac{\varDelta _1}{x^i_1} y^0_j. \end{aligned}$$
(10)

Note that the coefficients on the right-hand side of (10) are non-negative and sum up to \(\lambda _i\). Also, by the packing property of \(\mathcal {Q}\), \(y^\ell \in \mathcal {Q}_\mathcal {I}\) for \(0 \le \ell < b\). We now change the left-hand side of (8) as follows: we replace \(\lambda _i\) by \(\lambda _i - {\varDelta _b}/{x^i_b}\); for \(1 \le \ell < b\), we increase the coefficient of \(y^\ell \) by \({\varDelta _{\ell + 1}}/{x^i_{\ell + 1}} - {\varDelta _{\ell }}/{x^i_{\ell }}\); and we increase the coefficient of \(y^0\) by \(\varDelta _1/x^i_1\). As a result, we now have equality for all indices in B. The \(\varDelta _j\) for \(j \not \in B\) are not affected by this change.

We still need to establish that (9) holds for the vectors \(y^\ell \), \(0 \le \ell < b\), that have a non-zero coefficient in the convex combination. Note first that \(y^\ell _j > 0\) implies \(j \in B\). Also (8) holds with equality for all \(j \in B\). Thus (9) holds.

Consider any iteration of the second while-loop. It adds up to b vectors to the convex decomposition and decreases the number of nonzero \(\varDelta \)’s by b. Thus the total number of vectors added to the convex decomposition is at most s.    \(\square \)
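The two phases of this proof can be rendered as a Python sketch. The list-of-pairs representation, the floating-point tolerance, and the function name are our assumptions; the logic follows the decrementing phase and the prefix-vector splitting described above.

```python
def exact_decomposition(x_star, decomp, tol=1e-12):
    """Turn a dominating convex combination sum_i lam_i x^i >= x_star into
    an exact one.  decomp is a list of [lam, x] pairs with integral
    vectors x; it is modified in place and returned."""
    d = len(x_star)

    def slack(j):
        return sum(lam * x[j] for lam, x in decomp) - x_star[j]

    # Phase 1: enforce minimality (9) by decrementing coordinates of the
    # x^i while feasibility (8) is preserved.
    changed = True
    while changed:
        changed = False
        for lam, x in decomp:
            if lam <= tol:
                continue
            for j in range(d):
                while x[j] > 0 and slack(j) - lam >= -tol:
                    x[j] -= 1
                    changed = True

    # Phase 2: pick a vector x^i hitting a coordinate with positive slack
    # and split its weight over the prefix vectors y^l defined in (10).
    while True:
        bad = [j for j in range(d) if slack(j) > tol]
        if not bad:
            break
        j0 = bad[0]
        k, (lam_i, xi) = next((idx, e) for idx, e in enumerate(decomp)
                              if e[0] > tol and e[1][j0] > 0)
        B = sorted((j for j in range(d) if xi[j] > 0 and slack(j) > tol),
                   key=lambda j: slack(j) / xi[j])
        new_entries, prev = [], 0.0
        for pos, j in enumerate(B):
            r = slack(j) / xi[j]  # Delta_j / x^i_j, in increasing order
            # y^pos keeps the pos smallest-ratio coordinates of B
            y = [xi[j2] if j2 in B[:pos] else 0 for j2 in range(d)]
            new_entries.append([r - prev, y])
            prev = r
        decomp[k][0] = lam_i - prev
        decomp.extend(e for e in new_entries if e[0] > tol)
    return decomp
```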

2.4 Fast Convex Decomposition

Proof (of Theorem 1)

Theorem 3 yields a convex combination of integer points of \(\mathcal {Q}_I\) dominating \(\alpha x^*/(1 + 4 \varepsilon )\). Theorem 4 turns this dominating convex combination into an exact convex decomposition. It adds at most s additional vectors to the convex combination, where s is the number of positive components of \(x^*\). The complexity bounds follow directly from the referenced theorems.    \(\square \)

3 Approximately Truthful-in-Expectation Fractional Mechanisms

In this section we assume that we do not want to solve the LP-relaxation of SWM exactly, but that an FPTAS for it is available. Using the FPTAS, we construct a randomized fractional mechanism, Algorithm 3, and state the following theorem. For the proof of the theorem and the FPTAS, see the full paper [7].

Theorem 5

Let \(\varepsilon _0 \in (0, 1/2]\), \(\varepsilon =\varTheta (\frac{\varepsilon _0^5}{n^4})\), and \(\gamma =(1 - \varepsilon ) (1-\varepsilon _0)\). Given an \(\varepsilon \)-approximation algorithm for \(\mathcal {Q}\), Algorithm 3 defines a fractional randomized mechanism with the following properties:

$$\begin{aligned}&\text {No~positive~transfer.} \end{aligned}$$
(11)
$$\begin{aligned}&\text {Individually~rational~with~probability}~1 - \varepsilon _0. \end{aligned}$$
(12)
$$\begin{aligned}&(1 - \varepsilon _0)\text {-truthful-in-expectation}. \end{aligned}$$
(13)
$$\begin{aligned}&\gamma \text {-social~efficiency,~where}~\gamma ~\text {depends~on}~\varepsilon _0,~\text {and}~\varepsilon . \end{aligned}$$
(14)

In order to present Algorithm 3, we make some assumptions and define some notation. We assume that the problem is separable, i.e., the variables can be partitioned into disjoint groups, one for each player, such that the value of an allocation for a player depends only on the variables in his group:

$$\begin{aligned} v_i(x)=v_i(x_i), \end{aligned}$$

where \(x_i\) is the set of variables associated with player i. Formally, any outcome \(x \in \mathcal {Q}\subseteq \mathbb R^d\) can be written as \(x = (x_1,\ldots ,x_n)\), where \(x_i \in \mathbb R^{d_i}\) and \(d = d_1 + \ldots + d_n\). We further assume that for each player \(i\in [n]\), there is an optimal allocation \(u^i \in \mathcal {Q}\) that maximizes his value for every valuation \(v_i\), i.e.,

$$\begin{aligned} v_i(u^i) = \max _{z \in \mathcal {Q}}v_i(z), \end{aligned}$$
(15)

for every \(v_i\in \mathcal {V}_i\), where \(\mathcal {V}_i\) denotes the set of all possible valuations of player i. In the combinatorial auction problem, \(u^i\) is the allocation that assigns all items to player i. Let

$$\begin{aligned} L_i :=\sum _{j \ne i} v_j(u^j) \quad \text {and}\quad \beta _i :=\varepsilon L_i. \end{aligned}$$
(16)

Note that \(L_i\) does not depend on the valuation of player i. Let \(\mathcal {A}\) be an FPTAS for the LP-relaxation of SWM. We use \(\mathcal {A}(v, \varepsilon )\) to denote the outcome of \(\mathcal {A}\) on input v and \(\varepsilon \). If \(\varepsilon \) is understood, we simply write \(\mathcal {A}(v)\); \(\mathcal {A}(v)\) is a fractional allocation in \(\mathcal {Q}\). In the following, we will apply \(\mathcal {A}\) to different valuations, which we denote by \(v = (v_i,v_{-i})\), \(\bar{v} = (\bar{v}_i, v_{-i})\), and \(v' = (\mathbf 0, v_{-i})\). Here \(v_i\) is the reported valuation of player i, \(\bar{v}_i\) is his true valuation, and \(v'_i=\mathbf 0\). We denote the allocation returned by \(\mathcal {A}\) on input v (resp., \(\bar{v}\), \(v'\)) by x (resp., \(\bar{x}\), \(x'\)) and use the payment rule:

$$\begin{aligned} p_i(v):= \max \{p_i^{ VCG }(v)-\beta _i,0\} \end{aligned}$$
(17)

where

$$\begin{aligned} p_i^{ VCG }(v):=v_{-i}(x')-v_{-i}(x). \end{aligned}$$

Here \(v_{-i}(x)=\sum _{j\ne i}v_{j}(x)\), \(x ={\mathcal {A}}(v)\), and \(x' ={\mathcal {A}}(0,v_{-i})\). Observe the similarity of the definition of \(p_i^{ VCG }(v)\) to the VCG payment rule. In both cases, the payment is defined as the difference of the total value of two allocations to the players different from i. The first allocation ignores the influence of player i (\(x' ={\mathcal {A}}(0,v_{-i})\)) and the second allocation takes it into account (\(x ={\mathcal {A}}(v)\)). Define \(q_0 = (1 - \frac{\varepsilon _0}{n})^n\), \(\bar{\varepsilon } = \varepsilon _0/2\), and \(q_j= (1 - q_0)/n\) for \(1 \le j \le n\). Let \(\eta = \bar{\varepsilon }(1 - q_0)^2/n^3\), \(\eta ' = \eta /q_j\), and \(\varepsilon = \eta \bar{\varepsilon } (1 - q_0)/(8 n)\). Let \(U_i(v)\) be the utility of player i obtained by the mechanism with allocation function \(\mathcal {A}\) and payment rule (17). Following [5], we call player i active if the following two conditions hold:

$$\begin{aligned} U_{i}(v)+\frac{\bar{\varepsilon } q_i}{q_0}v_i(u^i)&\ge \frac{q_i}{q_0} \eta ' L_i, \end{aligned}$$
(18)
$$\begin{aligned} v_i(u^i)&\ge \eta L_i. \end{aligned}$$
(19)
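Payment rule (17) can be sketched as follows. The interfaces are assumptions for illustration: `approx_alg` plays the role of \(\mathcal {A}\), and each valuation is modeled as a function on allocations.

```python
def payment(i, v, approx_alg, beta):
    """Payment rule (17): the VCG-style difference p_i^VCG(v), computed on
    the approximate allocations, reduced by beta_i and clamped at zero."""
    x = approx_alg(v)                          # x  = A(v_i, v_-i)
    v_zero = list(v)
    v_zero[i] = lambda alloc: 0.0              # v' = (0, v_-i)
    x_prime = approx_alg(v_zero)               # x' = A(0, v_-i)

    def v_minus_i(y):
        return sum(v[j](y) for j in range(len(v)) if j != i)

    p_vcg = v_minus_i(x_prime) - v_minus_i(x)  # p_i^VCG(v)
    return max(p_vcg - beta[i], 0.0)
```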
[Algorithm 3 — pseudocode figure omitted]

Now we briefly explain Algorithm 3. Choose a random index \(j\in \{0,1,\ldots ,n\}\), where j is chosen with probability \(q_j\). If \(j=0\), we run the \(\varepsilon \)-approximation algorithm \(\mathcal {A}\) on v to compute an allocation \(x=(x_1,\ldots , x_n)\), and then set \(x_i\) and \(p_i\) to zero for all inactive players i. If \(j\ne 0\), we give the optimal allocation \(u^j\) to the j-th player and charge him the price \(\eta 'L_j\) if he is active and zero otherwise. The other players receive no items and are charged nothing.
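The branching just described can be sketched as follows. This is our reconstruction from the prose, not the paper's pseudocode; the interfaces `utility(i)` (for \(U_i(v)\)), `payment(i)` (rule (17)), `approx_alg()` (for \(\mathcal {A}(v)\)), and the string placeholder for \(u^i\) are assumptions, and players are 0-indexed.

```python
import random

def mechanism_round(v_best, utility, payment, approx_alg, eps0, rng=random):
    """One round of Algorithm 3 (sketch).  v_best[i] = v_i(u^i);
    approx_alg() returns the fractional allocation x = (x_0,...,x_{n-1})."""
    n = len(v_best)
    q0 = (1 - eps0 / n) ** n
    qj = (1 - q0) / n
    eps_bar = eps0 / 2
    eta = eps_bar * (1 - q0) ** 2 / n ** 3
    eta_p = eta / qj
    L = [sum(v_best[k] for k in range(n) if k != i) for i in range(n)]

    def active(i):  # conditions (18) and (19)
        return (utility(i) + eps_bar * qj / q0 * v_best[i]
                >= qj / q0 * eta_p * L[i]) and v_best[i] >= eta * L[i]

    j = rng.choices(range(n + 1), weights=[q0] + [qj] * n)[0]
    if j == 0:
        x = approx_alg()
        # zero out allocation and payment of every inactive player
        alloc = [x[i] if active(i) else 0 for i in range(n)]
        pays = [payment(i) if active(i) else 0.0 for i in range(n)]
    else:
        i = j - 1
        alloc = [0] * n
        alloc[i] = 'u^i'  # placeholder for player i's favourite allocation
        pays = [0.0] * n
        pays[i] = eta_p * L[i] if active(i) else 0.0
    return alloc, pays
```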

4 Approximately Truthful-in-Expectation Integral Mechanisms

In this section we obtain a randomized mechanism \(M'\) which returns an integral allocation. Let \(\varepsilon > 0\) be arbitrary. First run Algorithm 3 to obtain x and p(v). Then compute a convex decomposition \( \frac{\alpha }{1 +4\varepsilon } x = \sum _{j \in \mathcal {N}} \lambda _j^x x^j\) (we use the superscript to indicate that the decomposition depends on x). Finally, with probability \(\lambda _j^x\), return the allocation \(x^j\) and charge the i-th player the price \(p_i(v) \frac{v_i(x^j)}{v_i(x)}\) if \(v_i(x) > 0\), and zero otherwise. The following theorem shows that the mechanism \(M'\) is indeed an approximately truthful-in-expectation integral mechanism; its proof appears in the full paper [7].

Theorem 6

Let \(\varepsilon _0 \in (0, 1/2]\) be any constant, \(\varepsilon =\varTheta (\frac{\varepsilon _0^5}{n^4})\), and \(\gamma =\alpha (1 - \varepsilon )(1 - \varepsilon _0)/(1 +4 \varepsilon )\). Then we obtain a randomized integral mechanism satisfying Conditions (11) to (14).