1 Introduction

Mechanism design is a field of game theory that studies how to implement good system-wide solutions to problems involving multiple self-interested agents, each one holding part of the input and having her own private preferences about the solutions computed by the mechanism. Algorithmic mechanism design has mainly focused on settings where the goal is to maximize the social welfare, and on auction contexts where the goal is to maximize the auctioneer's revenue. Recently, other optimization goals have been considered, such as those that incorporate fairness criteria. Among these alternative goals, the problem of minimizing the makespan in the context of machine scheduling is certainly one of the most extensively studied.

Designing truthful mechanisms for scheduling problems was first suggested in the seminal paper by Nisan and Ronen [17], as a paradigm to demonstrate the applicability of mechanism design to an optimization problem. In its general form, where the machines are unrelated, there are n jobs to be assigned to m machines. The time needed by machine i to process job k is described by a nonnegative real value \(t_{ik}\). Given such an input matrix, a standard task from the algorithm designer's point of view is to allocate the jobs so that some global objective is optimized; a typical objective is to minimize the maximum completion time (i.e., the makespan).

In a game-theoretic setting, it is assumed that each entry of this matrix is not known to the designer, but instead it is a private value held by a selfish agent that controls the machine. We call this private value the agent’s type. The mechanism has to ask each agent for her type and the agent can misreport her type to the designer if this is advantageous to her. Mechanism design suggests using monetary compensation to incentivize agents to report truthfully. Truthful implementation in dominant strategies is desired, because it facilitates the prediction of the outcome and at the same time simplifies the agents’ way of reasoning. The challenge is to design truthful mechanisms that optimize/approximate the makespan.

When the entries of the matrix t are unrelated, the type of each machine i is an n-dimensional vector \(t_{i}\). For this multi-dimensional domain, the constraints imposed by truthfulness make the problem hard. Nisan and Ronen [17] showed that it is impossible to design a truthful mechanism with approximation factor better than 2, even for two machines. Later this bound was further improved to 2.41 [6] for 3 machines, and to 2.618 [11] for many machines. In [17], it was also shown that applying the VCG mechanism [7, 10, 20] achieves an approximation ratio of m, and it has been conjectured that this bound is tight. This conjecture still remains open, but it was further strengthened by Ashlagi et al. [1], who proved the conjecture for the intuitively very natural case of anonymous mechanisms (where, roughly, the allocation algorithm does not base its decisions on the machines' ids).

Randomization provably helps for this problem. There are two notions of truthfulness for randomized mechanisms: roughly, a mechanism is universally truthful if it is defined as a probability distribution over deterministic truthful mechanisms, while it is truthful-in-expectation if no player can benefit in expectation by lying. Already in [17], a universally truthful mechanism was suggested for the case of two machines. The mechanism was extended to the case of m machines by Mu'alem and Schapira [16] with an approximation guarantee of 0.875m, and this was further improved to 0.837m [16]. Lu and Yu [15] showed a truthful-in-expectation mechanism with an approximation guarantee of (m + 5)/2. In [17] a lower bound of 2 − 1/m was shown for both randomized notions, while in [5] the lower bound was extended to fractional mechanisms, and an upper bound of (m + 1)/2 was provided.

Surprisingly, even for the special case of two machines, the exact approximation factor achievable by truthful randomized mechanisms is not known. Currently, the lower bound is 1.5 [16], while the best upper bound is 1.5963, due to [15].

Setting restrictions on the input domain can make the problem easier. The single-dimensional counterpart of the problem is scheduling on related machines. In this case it is assumed that machine i has speed \(s_{i}\) and \(t_{ik} = w_{k}/s_{i}\), where the weights \(w_{k}\) of the jobs are known to the designer. Notice that the only information missing to the designer is the speed of each machine. Here, the constraints imposed by truthfulness seem harmless: the optimal allocation is truthfully implementable [2], although it takes exponential running time, while the best possible approximation guarantee, a PTAS, can be achieved by polynomial-time truthful mechanisms [4, 8]. In fact, a recent paper [9] shows that monotone PTASs can be achieved for a wide range of objective functions. The immediate conclusion is that the more one restricts the domain, the less stringent truthfulness becomes.

A prominent approach, suggested by Lavi and Swamy [12], is to restrict the input domain while keeping its multi-dimensional flavour. They assumed that each entry of the input matrix can take only two possible values L, H, which are publicly known to the designer. In this case, a very elegant deterministic mechanism achieves an approximation factor of 2, a great improvement compared to the upper bound of m that is the best known for the general problem. Surprisingly, even for this special case the known lower bound is only 11/10.

1.1 Our Contribution

The focus of this work is to study selfish scheduling problems on restricted but multi-dimensional input domains for which optimal or efficient mechanisms can be given. Following [12], we consider domains where each agent's type can take only one of two possible values L, H, with L < H, that are publicly known; but we further restrict the way these values can be placed in a player's input vector. We assume that, for each machine, the designer is given some publicly known partition of the tasks into two sets such that all jobs in the same set take the same execution time. Thus, the only information missing to the designer is which set of the partition contains the jobs taking time L and which set contains the jobs taking time H. Therefore, the only missing information is a single bit for each player. The lower bound given in [12] is still valid for our setting. It is important to emphasize that all the aforementioned lower bounds are due to truthfulness, and hold even for exponential-time algorithms. We explore the effects of truthfulness (both randomized and deterministic) in this restricted setting. Our contributions are the following:

(1) Power of truthful-in-expectation mechanisms

There is a class of scheduling problems on two-values domains with publicly known partitions (see Section 1.2) for which every algorithm (thus including optimal ones) can be turned into a truthful-in-expectation mechanism with the same approximation guarantee (Theorem 4). On the contrary, universally truthful randomized mechanisms cannot achieve an approximation better than 31/30 (Theorem 22), and the 11/10 lower bound for deterministic mechanisms in [12] also applies.

(2) Mechanisms for two machines

For the special case of two machines with uniform partitions (see Section 1.2), we present an exact truthful-in-expectation mechanism (Theorem 19).

We also give some evidence that two-values domains are easier than the general case, even for this simple setting of two machines. In fact, we show that the lower bound of 2 for deterministic mechanisms for unrelated machines still holds (Theorem 24) when we admit three-values domains. On the other hand, a deterministic truthful 3/2-approximation mechanism exists for two-values domains with publicly known partitions (Theorem 20), which are a further restriction of the two-values domains.

1.2 Preliminaries

The scheduling domain

We have n jobs to be scheduled on m machines. Each job must be assigned to exactly one machine. In the unrelated-machines setting, each machine i has a vector of processing times, or type, \(t_{i} = (t_{ih})_{h}\), where \(t_{ih} \in \mathbb{R}_{\geq 0}\) is i's processing time for job h.

In the two-values domains of Lavi and Swamy [12], the time for executing job h on machine i is either L (low) or H (high), with H > L (the case L = H is trivial). Given a partition of the jobs \((S, \bar S)\), we say that machine i is an \(L_{S}\)-machine (respectively, \(H_{S}\)-machine) if all jobs in S take time L (respectively, H), and all jobs not in S take time H (respectively, L). That is, the type \(t_{i}\) of an \(L_{S}\)-machine i is such that for any job h

$$t_{ih} = {L_{S}^{h}} := \left\{ \begin{array}{ll} L & \text{if} \, h \in S\\ H & \text{otherwise} \end{array} \right. $$

and similarly for H S -machines.
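The two complementary type vectors can be written down directly in code. The following is a minimal Python sketch of the definition above (the helper names `L_S_type` and `H_S_type` are ours, not notation from the text):

```python
def L_S_type(S, n, L, H):
    """Type vector of an L_S-machine: time L on the jobs in S, H elsewhere."""
    return [L if h in S else H for h in range(n)]

def H_S_type(S, n, L, H):
    """Type vector of an H_S-machine: time H on the jobs in S, L elsewhere."""
    return [H if h in S else L for h in range(n)]
```

Note that `H_S_type(S, ...)` coincides with `L_S_type` applied to the complement of S, mirroring the identity \(H_{S}=L_{\bar S}\).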

In this work, we consider the two-values domains with publicly known partitions, which are special cases of the two-values domains defined in [12]. For each machine i we are given a publicly known subset \(S_{i}\), and the private information is whether i is an \(L_{S_{i}}\)-machine or an \(H_{S_{i}}\)-machine. Hence, the type \(t_{i} \in \{ L_{S_{i}}, H_{S_{i}}\}\). Intuitively, a type \(t_{i}=L_{S_{i}}\) indicates that machine i is “good” for the jobs in \(S_{i}\) and “bad” for the other jobs, while for \(t_{i}=H_{S_{i}}\) it is the other way around (notice that \(H_{S_{i}}=L_{\bar S_{i}}\) where \(\bar S_{i}=[n] \setminus S_{i}\)). We shall further distinguish between three restrictions (of increasing difficulty) of the domain with publicly known partitions:

  1. identical partitions, where all subsets \(S_{i}\) are identical;

  2. uniform partitions, where all subsets \(S_{i}\) have size s, for some s ≥ 0;

  3. (unrestricted) publicly known partitions, which impose no restriction on the subsets \(S_{i}\).

We say that job h is an L-job (respectively, H-job) for machine i if \(t_{ih} = L\) (respectively, \(t_{ih} = H\)), with \(t_{i}\) being the type of machine i. We represent an allocation by a matrix \(x = (x_{ih})\), where \(x_{ih} \in \{0, 1\}\) and \(x_{ih} = 1\) iff job h is assigned to machine i (since every job is assigned to exactly one machine, \({\sum }_{i}x_{ih}=1\)).

Given an allocation x and machine types t, we define the load of machine i as the set of jobs allocated to i in x and denote by

$$C_{i}(x, t):=\sum\limits_{h} x_{ih}t_{ih} $$

the cost of the allocation x for machine i. The makespan of x with respect to t is the maximum cost over all machines, i.e., \(\max_{i} C_{i}(x, t)\).

An exact or optimal allocation is an allocation that, for the given input t, minimizes the makespan. A c-approximation is an allocation whose makespan is at most c times that of an optimal allocation. A deterministic algorithm A outputs an allocation x = A(t). For a randomized algorithm \(A^{rand}\), \(A^{rand}(t)\) is a probability distribution over all possible allocations; we call \(A^{rand}(t)\) a randomized allocation.
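These definitions are easy to state operationally. Below is a small Python sketch, assuming the representation above (t as an m×n list of processing-time rows, x as a 0/1 matrix); the brute-force routine is exponential and is only meant to make the definition of an exact allocation concrete:

```python
from itertools import product

def makespan(x, t):
    """max_i C_i(x, t): the maximum over machines of the summed times of assigned jobs."""
    return max(
        sum(x[i][h] * t[i][h] for h in range(len(t[i])))
        for i in range(len(t))
    )

def optimal_makespan(t):
    """Exact minimum makespan by brute force over all m^n assignments (illustration only)."""
    m, n = len(t), len(t[0])
    return min(
        max(sum(t[i][h] for h in range(n) if a[h] == i) for i in range(m))
        for a in product(range(m), repeat=n)
    )
```

For example, for `t = [[2, 2, 5], [5, 5, 2]]` the optimal makespan is 4, attained by giving the first two jobs to the first machine and the last job to the second.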

Mechanism Design

In order to characterize truthful mechanisms, we consider the allocations given in output for two inputs which differ only in one machine's type. We let \((\hat t_{i}, t_{-i})\) denote the vector \((t_{1},\ldots , t_{i-1}, \hat t_{i}, t_{i+1}, \ldots , t_{m})\) obtained from t by replacing \(t_{i}\) with \(\hat t_{i}\).

Given types t and a job allocation x, we count the number of L-jobs and the number of H-jobs allocated to machine i in x:

$$\begin{array}{@{}rcl@{}} {n_{L}^{i}}(x,t)& := & |\{h: t_{ih}=L~ \text{and}~ x_{ih}=1\}|, \\ {n_{H}^{i}}(x,t) &:=& |\{h: t_{ih}=H~ \text{and}~ x_{ih}=1\}|. \end{array} $$

We next introduce a monotonicity condition which characterizes truthful mechanisms for our domains. This condition (and the characterization) is a special case of a known characterization by Rochet [19] for general domains, as well as the corresponding characterization that Lavi and Swamy [12] derive for two-values domains. Rochet [19] proved that a mechanism is truthful if and only if its algorithm satisfies a so-called cycle monotonicity condition. For our domains, cycle monotonicity boils down to the following condition.

Definition 1

(monotone algorithm) An algorithm A is monotone (in expectation) if, for any machine i and for any two inputs that differ only in the type of machine i, \(t = (t_{i}, t_{-i})\) and \(\hat t=(\hat t_{i},t_{-i})\), the following inequality holds (in expectation):

$$ {n^{i}_{L}} - {n^{i}_{H}} + {\hat{n}^{i}_{L}} - {\hat {n}^{i}_{H}} \geq 0 $$
(1)

where \({n_{L}^{i}}={n_{L}^{i}}(A(t),t)\), \({n_{H}^{i}}={n_{H}^{i}}(A(t),t)\), \(\hat {n}_{L}^{i}={n_{L}^{i}}(A(\hat t),\hat t)\), and \(\hat {n}_{H}^{i}={n_{H}^{i}}(A(\hat t),\hat t)\).
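Condition (1) can be checked mechanically for any concrete deterministic algorithm. A Python sketch (the function names are ours; `A` stands for an arbitrary allocation algorithm mapping a type matrix to a 0/1 allocation matrix):

```python
def unbalance(x, t, i, L, H):
    """n_L^i(x, t) - n_H^i(x, t): #L-jobs minus #H-jobs assigned to machine i."""
    nL = sum(1 for h, a in enumerate(x[i]) if a == 1 and t[i][h] == L)
    nH = sum(1 for h, a in enumerate(x[i]) if a == 1 and t[i][h] == H)
    return nL - nH

def monotone_at(A, t, t_hat, i, L, H):
    """Condition (1) for two inputs t, t_hat differing only in machine i's type."""
    return unbalance(A(t), t, i, L, H) + unbalance(A(t_hat), t_hat, i, L, H) >= 0
```

One can, for instance, feed in the algorithm that assigns each job to a machine of minimum reported time, together with two inputs differing in one machine's type.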

The results by Rochet [19] and by Lavi and Swamy [12] imply that truthfulness for our domains is equivalent to the monotonicity condition above. For completeness, we give a direct proof (using essentially the same arguments) since this will also allow us to provide a simple closed formula for the payments.

Theorem 2

For the case of two-values domains with publicly known partitions, there exist prices P such that the mechanism (A, P) is truthful (in expectation) iff A is monotone (in expectation).

Proof

Truthfulness is equivalent to the condition that, for every i, \(t = (t_{i}, t_{-i})\), and \(\hat t=(\hat t_{i},t_{-i})\), the following holds

$$\begin{array}{@{}rcl@{}} P_{i}(t) - L \cdot {n_{L}^{i}}(x,t) - H \cdot {n_{H}^{i}}(x,t) & \geq & P_{i}(\hat t) - L \cdot {n_{L}^{i}}(\hat x,t) - H\cdot {n_{H}^{i}}(\hat x,t) \\ P_{i}(\hat t) - L \cdot {n_{L}^{i}}(\hat x,\hat t) - H \cdot {n_{H}^{i}}(\hat x,\hat t) & \geq & P_{i}(t) - L \cdot {n_{L}^{i}}(x,\hat t) - H \cdot {n_{H}^{i}}(x,\hat t) \end{array} $$

where x = A(t) and \(\hat x = A(\hat t)\). We can rewrite some of these terms in a more convenient form by observing that, by definition of our domain, the types t i and \(\hat t_{i}\) are “complementary” meaning that a job has cost L (respectively, H) in t i if and only if in \(\hat t_{i}\) it has cost H (respectively, L). Therefore

$${n_{L}^{i}}(\hat x,t) = {n_{H}^{i}}(\hat x,\hat t), \ \ {n_{H}^{i}}(\hat x,t)={n_{L}^{i}}(\hat x,\hat t), \ \ {n_{L}^{i}}(x,\hat t) = {n_{H}^{i}}(x,t), \ \ {n_{H}^{i}}(x,\hat t)={n_{L}^{i}}(x,t) $$

and truthfulness is equivalent to

$$\begin{array}{@{}rcl@{}} P_{i}(t) - L {n_{L}^{i}} - H {n_{H}^{i}} & \geq & P_{i}(\hat t) - L {\hat {n}_{H}^{i}} - H {\hat {n}_{L}^{i}} \end{array} $$
(2)
$$\begin{array}{@{}rcl@{}} P_{i}(\hat t) - L {\hat {n}_{L}^{i}} - H {\hat {n}_{H}^{i}} & \geq & P_{i}(t) - L {n_{H}^{i}} - H {n_{L}^{i}} \end{array} $$
(3)

where \({n_{L}^{i}}={n_{L}^{i}}(A(t),t)\), \({n_{H}^{i}}={n_{H}^{i}}(A(t),t)\), \({\hat {n}_{L}^{i}}={n_{L}^{i}}(A(\hat t),\hat t)\), and \({\hat {n}_{H}^{i}}={n_{H}^{i}}(A(\hat t),\hat t)\).

These two inequalities imply the monotonicity condition (1). That is, a mechanism (A, P) is truthful only if A is monotone.

We next prove that, if A is monotone, then there are payments P such that (A, P) is truthful. We set the payments as

$$\begin{array}{@{}rcl@{}} P_{i}(t) & = & ({n^{i}_{L}} + {n^{i}_{H}}) \left(\frac{L+H}{2}\right) \\ P_{i}(\hat t) & = & ({\hat {n}^{i}_{L}} + {\hat {n}^{i}_{H}}) \left(\frac{L+H}{2}\right) \end{array} $$

and observe that the truthfulness conditions (2)–(3) are satisfied since

$$\begin{array}{@{}rcl@{}} (2) & \Leftrightarrow & {n^{i}_{L}} \left(\frac{L+H}{2} - L \right) + {n^{i}_{H}} \left(\frac{L+H}{2} - H \right) \geq {\hat {n}^{i}_{L}} \left(\frac{L+H}{2} - H \right) + {\hat {n}^{i}_{H}} \left(\frac{L+H}{2} - L \right) \\ & \Leftrightarrow & {n^{i}_{L}} \left(\frac{H-L}{2} \right) + {n^{i}_{H}} \left(\frac{L-H}{2}\right) \geq {\hat {n}^{i}_{L}} \left(\frac{L-H}{2} \right) + {\hat {n}^{i}_{H}} \left(\frac{H-L}{2} \right) \\ & \Leftrightarrow & \left(\frac{H-L}{2} \right) ({n^{i}_{L}} - {n^{i}_{H}} + {\hat {n}^{i}_{L}} - {\hat {n}^{i}_{H}}) \geq 0 \Leftrightarrow (1) \end{array} $$

and a symmetric argument shows that (3) ⇔ (1). □
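The payments used in the proof have a simple per-job reading: every allocated job is paid (L + H)/2, so each L-job contributes +(H − L)/2 to the machine's utility and each H-job contributes −(H − L)/2; the utility is therefore (H − L)/2 times the unbalance \({n_{L}^{i}} - {n_{H}^{i}}\), which is exactly why truthfulness reduces to condition (1). A minimal Python sketch (names ours):

```python
def payment(x, i, L, H):
    """P_i = (n_L^i + n_H^i) * (L + H)/2: each assigned job is paid the average value."""
    return sum(x[i]) * (L + H) / 2

def utility(x, t, i, L, H):
    """Quasi-linear utility of machine i: payment minus true processing cost."""
    cost = sum(x[i][h] * t[i][h] for h in range(len(t[i])))
    return payment(x, i, L, H) - cost
```

For example, with L = 2, H = 5, a machine holding two L-jobs is paid 7 and incurs cost 4, for a utility of 3 = (H − L)/2 · 2.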

Throughout the paper, we refer to the quantity \({n_{L}^{i}}(x,t)-{n_{H}^{i}}(x,t)\) as the unbalance of machine i in the allocation x with respect to type t. We also refer to the quantity in (1) as the overall unbalance of machine i. For any instance t, for any two machines i and j, and for any α, β∈{L, H}, we consider the subset of jobs whose execution time is α on machine i and β on machine j:

$$J_{\alpha\beta}^{ij}(t) := \{h: t_{ih}=\alpha~\text{and}~t_{jh}=\beta \}. $$

1.3 An Illustrative Example

We begin with an example and show how randomization can be used to obtain optimal monotone-in-expectation algorithms from deterministic ones in the case of two-values domains with identical partitions.

Consider the following instances along with their optimal allocation (gray box), and the unbalance \({n_{L}^{i}}-{n_{H}^{i}}\) for each of the two machines (numbers outside the box):

(4) [figure omitted]

Let machine 1 and machine 2 correspond to the top and bottom machine, respectively. We connect two instances by a dotted line if they differ in exactly one machine's type, and the line indicates which machine it is. Therefore the monotonicity condition (1) requires that, for any dotted line joining two instances t = (t i , t −i ) and \(\hat t = (\hat t_{i},t_{-i})\), the corresponding machine i has nonnegative overall unbalance: for example, the first two instances differ in machine 1, whose overall unbalance is −3 + 4 = 1. Observe that the monotonicity condition (1) is violated by the allocations of the two middle instances, which differ in the type of machine i = 2, whose overall unbalance is 1 − 2 = −1.

One may try to recover monotonicity by observing that the allocations in (4) are not the only ones that minimize the makespan. In the first and in the third input we could swap the jobs between the two machines (leaving the allocations of the second and fourth instance unchanged):

(5) [figure omitted]

Unfortunately, now the monotonicity condition is violated due to the allocation of the first machine in the last two instances.

The above instances have been used by Lavi and Swamy [12] to prove their lower bound for deterministic mechanisms. However, if we choose randomly between the allocation in (4) and the one in (5) with equal probability, the corresponding optimal randomized algorithm satisfies monotonicity in expectation (for example, in the first instance the unbalance becomes −3/2 for both machines, while in the second instance it remains unchanged).

Fact 3

For two machines and two-values domains with identical partitions as above, with values L = 2 and H = 5, no truthful mechanism can achieve an approximation factor better than 1.1 [12]. On the contrary, for the same problem there exists an exact truthful-in-expectation mechanism.

In the sequel we show that this positive result holds in general for some of our domains, and not only in the very special instance where deterministic mechanisms cannot be optimal.

2 Identical Partitions

In this section we consider the case of machines with identical partitions and give a general (“black-box”) method to convert deterministic scheduling algorithms into mechanisms that are truthful-in-expectation. The main result of this section is summarized by the following theorem.

Theorem 4

Every deterministic algorithm A for scheduling jobs on machines with identical partitions can be turned into a randomized mechanism M which is truthful-in-expectation and such that the allocation returned by M has makespan no greater than the one returned by A.

Let \((S, {\bar S})\) be a partition of the jobs, with |S| = s and \(|{\bar S}| = n-s\), such that \(S_{i} = S\) for each machine i. Without loss of generality we can reorder the jobs so that S = {1,2,…,s} and \({\bar S} = \{ s+1, s+2, \ldots , n\}\). Since the partition of the jobs is public, the only information that is private to each machine is which side of the partition contains its L-jobs. Thus, the type domain of each machine contains only two elements:

$$L_{S}=(L {\cdots} L H \cdots{\cdots} H) ~~ \text{and}~~ H_{S} =(H {\cdots} H L \cdots{\cdots} L) $$

For any instance t, we denote by \(m_{S}(t)\) and \(m_{\bar S}(t)\) the number of \(L_{S}\)-machines and \(H_{S}\)-machines in t, respectively. Clearly, \(m_{S}(t) + m_{\bar S}(t) = m\). Given an allocation x, it is convenient to count, for each side of the partition, the number of jobs that are scheduled in x as L-jobs:

$$\begin{array}{@{}rcl@{}} n_{L_{S}}(x,t) = |\{h \in S: \exists i~\text{such that}~x_{ih}=1~\text{and}~t_{ih}=L\}| \end{array} $$
(6)
$$\begin{array}{@{}rcl@{}} n_{L_{\bar S}}(x,t) = |\{h \in \bar S: \exists i ~~\text{such that}~ x_{ih}=1~\text{and}~t_{ih}=L\}| \end{array} $$
(7)

and \({n_{L}}(x,t):= n_{L_{S}}(x,t)+n_{L_{\bar S}}(x,t) = {\sum }_{i} {n_{L}^{i}}(x,t)\) is the overall number of jobs allocated in x as L-jobs.

Following the idea described in Section 1.3, we show now how to obtain a randomized allocation from a deterministic one by randomly “shuffling” machines of the same type:

Definition 5

For any deterministic allocation x, we denote by x (rand) the randomized allocation obtained from x as follows:

  • Pick an integer r ∈ {0,…,m S −1} uniformly at random, and set \(x^{(rand)}_{i}:= x_{i + r \bmod m_{S}}\) for each L S -machine i;

  • Pick an integer \(\bar r\in \{0,\ldots ,m_{\bar S}-1\}\) uniformly at random, and set \(x^{(rand)}_{i}:= x_{i + \bar r \bmod m_{\bar S}}\) for each H S -machine i.

For any deterministic algorithm A, we let A (rand) be the randomized algorithm that, on input t, returns the randomized allocation x (rand) where x = A(t).
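Sampling from the randomized allocation of Definition 5 amounts to rotating the rows of x by an independent uniform offset within each group of same-type machines. A Python sketch (names ours; `is_LS[i]` indicates whether machine i is an \(L_{S}\)-machine):

```python
import random

def sample_x_rand(x, is_LS):
    """One draw from x^(rand): rotate the rows of x by a uniform random offset
    within the L_S-machines and, independently, within the H_S-machines."""
    y = [row[:] for row in x]
    groups = ([i for i, b in enumerate(is_LS) if b],
              [i for i, b in enumerate(is_LS) if not b])
    for group in groups:
        if group:
            r = random.randrange(len(group))
            for pos, i in enumerate(group):
                y[i] = x[group[(pos + r) % len(group)]][:]
    return y
```

Since rows are only permuted within a type class, every draw still assigns each job exactly once, and the counts \(n_{L_{S}}\) and \(n_{L_{\bar S}}\) are unchanged.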

Notice that \(n_{L_{S}}(x^{(rand)}, t) = n_{L_{S}}(x, t)\) and \(n_{L_{\bar S}}(x^{(rand)}, t) =n_{L_{\bar S}}(x, t)\). In the following discussion we fix x and t and simply write \(n_{L_{S}}\) and \(n_{L_{\bar S}}\).

For any L S -machine i, its expected load consists of \({n_{L}^{i}}=n_{L_{S}}/m_{S}\) L-jobs and \({n_{H}^{i}}=(n-s-n_{L_{\bar S}})/m_{S}\) H-jobs. Thus the expected unbalance of an L S -machine is

$${n_{L}^{i}} - {n_{H}^{i}} = \frac{n_{L_{S}} - (n-s -n_{L_{\bar S}})}{m_{S}} = \frac{n_{L} - (n-s)}{m_{S}}. $$

Similarly, the expected load of an \(H_{S}\)-machine i consists of \({n_{L}^{i}}=n_{L_{\bar S}}/m_{\bar S}\) L-jobs and \({n_{H}^{i}}=(s-n_{L_{S}})/m_{\bar S}\) H-jobs, and its expected unbalance is equal to

$${n_{L}^{i}} - {n_{H}^{i}} = \frac{n_{L_{\bar S}} - (s-n_{L_{S}})}{m_{\bar S}} = \frac{n_{L} - s}{m_{\bar S}}. $$
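These two closed forms can be verified by enumerating the rotations of Definition 5 on a toy instance of our choosing (n = 4, S = {0, 1}, so s = 2; L = 1, H = 3; machines 0 and 1 of type \(L_{S}\), machine 2 of type \(H_{S}\)). A Python sketch:

```python
from fractions import Fraction

L, H = 1, 3
t = [[1, 1, 3, 3], [1, 1, 3, 3], [3, 3, 1, 1]]   # two L_S-machines, one H_S-machine
x = [[1, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]   # n_L = 3 jobs allocated as L-jobs

def unbalance(row, trow):
    """#L-jobs minus #H-jobs in an allocation row, for a machine of type trow."""
    return (sum(1 for h, a in enumerate(row) if a and trow[h] == L)
            - sum(1 for h, a in enumerate(row) if a and trow[h] == H))

# Expected unbalance of L_S-machine 0: average over the two rotations of rows {0, 1}
expected = Fraction(unbalance(x[0], t[0]) + unbalance(x[1], t[0]), 2)
assert expected == Fraction(1, 2)        # closed form: (n_L - (n - s))/m_S = (3 - 2)/2
assert unbalance(x[2], t[2]) == 1        # closed form: (n_L - s)/m_{bar S} = (3 - 2)/1
```

Here \(n_{L} = 3\), so the closed forms give 1/2 for each \(L_{S}\)-machine and 1 for the \(H_{S}\)-machine, matching the enumeration.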

Lemma 6

Algorithm A (rand) is monotone-in-expectation if the deterministic algorithm A is such that, for any \(t = (L_{S},t_{-i})\) and \(\hat t =(H_{S},t_{-i})\), it holds that

$$ \frac{n_{L} - (n-s)}{m_{S}} + \frac{\hat n_{L} - s}{m_{\bar S}+1} \geq 0 $$
(8)

where n L =n L (A(t),t) and \(\hat n_{L} =n_{L}(A(\hat t),\hat t)\) denote the number of L-jobs allocated by A on input t and \(\hat t\) , respectively.

Proof

Consider two instances \(t = (L_{S}, t_{-i})\) and \(\hat t=(H_{S},t_{-i})\). By the previous discussion we have that the overall unbalance of machine i is

$${n_{L}^{i}} - {n_{H}^{i}} + {\hat {n}^{i}_{L}} - {\hat {n}^{i}_{H}} = \frac{n_{L} - (n-s)}{m_{S}} + \frac{\hat n_{L} - s}{\hat m_{\bar S}} $$

where \(\hat m_{\bar S}\) is the number of H S -machines in \(\hat t\). Since \(\hat m_{\bar S} = m_{\bar S} + 1\), then (8) is equivalent to (1) and, by Definition 1, A (rand) is monotone-in-expectation. □

2.1 Canonical Allocations

Lemma 6 says that, in order to design an exact truthful-in-expectation mechanism for two-values domains with identical partitions, it is sufficient to design a deterministic exact algorithm which satisfies condition (8). We now show that this is always possible, by taking any exact algorithm A and transforming the allocations computed by A into “canonical” allocations. Intuitively, a canonical allocation is one in which it is not possible to swap jobs between two machines so as to increase the number of allocated L-jobs without increasing the makespan. Such allocations can be obtained from any allocation by repeatedly swapping jobs according to the local rules specified below. We will show that each of these swap operations increases the number of allocated L-jobs without increasing the makespan.

To this end, we extend our previous notation.

Definition 7

Given an instance t and an allocation x, for any α, β ∈ {L, H} and for any two machines i and j, we let \(n_{\alpha \beta }^{ij}(x,t)\) be the number of α-jobs that are allocated to machine i and that are β-jobs for machine j:

$$ n_{\alpha\beta}^{ij}(x,t) := |\{k: x_{ik}=1, \ t_{ik}=\alpha, ~ \text{and}~t_{jk}=\beta \}|. $$

Notice that \(n_{\alpha \beta }^{ij}\) is different from \(n_{\beta \alpha }^{ji}\) because the first index denotes the machine that has the jobs. We can now define canonical allocations.

Definition 8 (canonical allocation)

A canonical allocation (for instance t) is an allocation obtained by modifying a deterministic allocation x as follows:

  1. Apply the following Rule R1 as long as possible: Suppose jobs h and k are allocated to machines i and j, respectively (\(x_{ih} = 1 = x_{jk}\)). If \(t_{ik} \leq t_{ih}\) and \(t_{jh} < t_{jk}\) (i.e., no machine gets worse and at least one gets better if we swap the two jobs between machines i and j), then move job h to machine j and job k to machine i (set \(x_{ik} = x_{jh} = 1\) and \(x_{ih} = x_{jk} = 0\)).

  2. Apply the following Rule R2 as long as possible: If \(n_{HL}^{ij}(x,t)>n_{LH}^{ji}(x,t)\) and j gets only jobs from \(J_{LH}^{ji}(t)\), then move \(n_{HL}^{ij}\) jobs in \(J_{HL}^{ij}(t)\) from i to j, and move \(n_{LH}^{ji}\) jobs in \(J_{LH}^{ji}(t)\) from j to i (see Fig. 1).

    Fig. 1 [figure omitted]: The swapping Rule R2 in the definition of canonical allocation.

Fact 9

Both Rules R1 and R2 decrease the overall number of H-jobs by at least one. Thus, given x and t, it is possible to compute in polynomial time (following the two steps in Definition 8) a canonical allocation whose cost is not larger than the cost of x.
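Rule R1 in particular is a plain local search: repeatedly look for a pair of allocated jobs whose swap makes no machine worse and at least one strictly better. The following Python sketch implements this step only (Rule R2 would be handled analogously; function names are ours):

```python
def one_R1_swap(x, t):
    """Find and apply a single improving swap of Rule R1; return True if one was found."""
    m, n = len(t), len(t[0])
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            for h in range(n):
                if not x[i][h]:
                    continue
                for k in range(n):
                    if not x[j][k]:
                        continue
                    # swapping h and k: i would get k, j would get h
                    if t[i][k] <= t[i][h] and t[j][h] < t[j][k]:
                        x[i][h], x[i][k] = 0, 1
                        x[j][k], x[j][h] = 0, 1
                        return True
    return False

def apply_R1(x, t):
    """Apply Rule R1 as long as possible; terminates because every swap
    strictly decreases the number of allocated H-jobs (Fact 9)."""
    while one_R1_swap(x, t):
        pass
    return x
```

Termination in polynomially many iterations follows from Fact 9: every applied swap strictly decreases the number of allocated H-jobs.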

2.2 A Black-box Construction (Proof of Theorem 4)

We can assume without loss of generality that our deterministic algorithm always returns canonical allocations (Fact 9). The following two lemmas provide important structural properties of such canonical allocations.

Lemma 10

Every allocation x that is canonical with respect to the instance t is such that either \(n_{L_{S}}(x, t)=s\) or \(n_{L_{\bar S}}(x, t)=n-s\).

Proof

Suppose for contradiction that \(n_{L_{S}}(x, t) < s\) and \(n_{L_{\bar S}}(x, t)< n-s\). Then, in the allocation x there exists a job h ∈ S that is assigned to an H S -machine i and a job \(l\in {\bar S}\) that is assigned to an L S -machine j. Since \(t_{ih} = H\) and \(t_{jl} = H\) while \(t_{jh} = L\) and \(t_{il} = L\), it is possible to apply Rule R1 to jobs h and l. But this contradicts the hypothesis that x is canonical with respect to t. □

Lemma 11

If the allocation x is canonical with respect to the instance t then we have that:

  1. 1.

    If \(n_{L_{S}}(x, t)=s\) then \(n_{L_{\bar S}}(x, t) \geq \frac {m_{\bar S}}{m} \cdot (n-s)\);

  2. 2.

    If \(n_{L_{\bar S}}(x, t)=n-s\) then \(n_{L_{S}}(x, t) \geq \frac {m_{S}}{m} \cdot s\).

Proof

We prove only point 1, the proof of point 2 being similar.

Let x be an allocation that is canonical with respect to t and such that \(n_{L_{S}}(x, t) = s\). Since \(n_{L_{\bar S}}(x, t)\) equals the number of jobs in \({\bar S}\) that are allocated to H S -machines, there are \(n-s-n_{L_{\bar S}}(x, t)\) jobs of \(\bar S\) that are allocated to L S -machines (i.e., allocated as H-jobs). Therefore, by averaging, there must be an L S -machine i that gets at least \((n-s -n_{L_{\bar S}}(x, t))/m_{S}\) jobs from \(\bar S\) (all these jobs are bad for i but good for j) and an H S -machine j that gets at most \(n_{L_{\bar S}}(x, t)/m_{\bar S}\) jobs from \(\bar S\) (these are good for j but bad for i). Then, we have

$$ n_{HL}^{ij} \geq \frac{n-s -n_{L_{\bar S}}(x, t)}{m_{S}}. $$
(9)

and

$$ n_{LH}^{ji} \leq \frac{n_{L_{\bar S}}(x, t)}{m_{\bar S}}. $$
(10)

Observe now that, since by hypothesis \(n_{L_{S}}(x, t) = s\), machine j gets only jobs from \({\bar S}\); since x is canonical, by point 2 of Definition 8 it must be that \(n_{HL}^{ij} \leq n_{LH}^{ji}\). Thus, the following inequality holds:

$$ \frac{n_{L_{\bar S}}(x, t)}{m_{\bar S}} \geq \frac{n-s-n_{L_{\bar S}}(x, t)}{m_{S}} $$
(11)

from which we obtain that

$$n_{L_{\bar S}}(x, t)\cdot (1/m_{S} + 1/m_{\bar S}) = n_{L_{\bar S}}(x, t)\cdot m/(m_{S}\cdot m_{\bar S}) \geq (n-s) / m_{S}, $$

and thus

$$n_{L_{\bar S}}(x, t) \geq (n-s) \cdot \frac{m_{\bar S}}{m}. $$

We can now give a black-box construction to transform any deterministic algorithm into a randomized and monotone-in-expectation one that has no worse makespan.

Proof of Theorem 4

Let A be a deterministic scheduling algorithm and let \(A^{\prime}\) be the algorithm that first calls A to compute an allocation and then transforms this allocation into a canonical one. We observe that the canonization process described in Definition 8 does not increase the makespan of the allocation. Thus, by Theorem 2 and Lemma 6, to prove the theorem it is sufficient to show that, for any \(t = (L_{S}, t_{-i})\) and \(\hat t=(H_{S},t_{-i})\) with \(x = A^{\prime}(t)\) and \(\hat x = A^{\prime }(\hat t)\),

$$\frac{n_{L} - (n-s)}{m_{S}} + \frac{\hat n_{L} - s}{m-m_{S}+1} \geq 0 $$

where n L is the number of L-jobs allocated in x and \(\hat n_{L}\) is the number of L-jobs allocated in \(\hat x\).

Because of Lemma 10 there are only four cases to consider:

  1. (\(n_{L_{S}} = s\) and \(n_{\hat {L}_{\bar S}} = n - s\).) Observe that \(n_{L} + \hat n_{L} \geq n\). Therefore

     $$\begin{array}{@{}rcl@{}} \frac{n_{L} - (n-s)}{m_{S}} + \frac{\hat n_{L} - s}{m-m_{S}+1} & \geq & \frac{n_{L} - (n-s) + \hat n_{L} - s}{\max\{m_{S},m-m_{S}+1\}} \\ & = & \frac{n_{L} + \hat n_{L} - n}{\max\{m_{S},m-m_{S}+1\}} \\ & \geq & 0. \end{array} $$

  2. (\(n_{L_{\bar S}} = n-s\) and \(n_{\hat {L}_{S}} = s\).) Also in this case \(n_{L} + \hat n_{L} \geq n\), and the rest of the proof goes as in the previous case.

  3. (\(n_{L_{S}}=s\) and \(n_{\hat {L}_{S}} = s\).) Since \(m_{\bar S}=m - m_{S}\) and \(\hat m_{\bar S}=m_{\bar S}+1\), by Lemma 11 we have that

     $$n_{L_{\bar S}} \geq \frac{m- m_{S}}{m} \cdot (n-s) ~~\text{and}~~ \hat{n}_{L_{\bar S}} \geq \frac{m_{\bar S}+1}{m} \cdot (n-s). $$

     Hence

     $$\begin{array}{@{}rcl@{}} \frac{n_{L}-(n-s)}{m_{S}} + \frac{\hat n_{L} - s}{m - m_{S}+1} & = & \frac{s+n_{L_{\bar S}}-(n-s)}{m_{S}} + \frac{s + \hat{n}_{L_{\bar S}} - s}{m_{\bar S} +1} \\ & \geq & \frac{s+\frac{m - m_{S}}{m} \cdot (n-s)-(n-s)}{m_{S}} + \frac{\frac{m_{\bar S}+1}{m}(n-s)}{m_{\bar S} +1} \\ & \geq & \frac{s}{m_{S}} - \frac{n-s}{m} + \frac{n-s}{m} \\ & \geq & 0. \end{array} $$

  4. (\(n_{L_{\bar S}}=n-s\) and \(n_{\hat {L}_{\bar S}} = n-s\).) This case follows by symmetry from the previous case (simply exchange S with \(\bar S\)). □

2.3 Complexity Considerations

We stress that our black-box construction of truthful-in-expectation mechanisms (Theorem 4) can be carried out efficiently, i.e., in polynomial time. Indeed, any polynomial-time c-approximation algorithm can be transformed into a polynomial-time monotone algorithm with the same approximation guarantee. Moreover, the payments used in the proof of Theorem 4 can be computed directly from the allocation. In a recent paper [3], Bonifaci and Wiese gave a new PTAS for scheduling jobs on an arbitrary number of unrelated machines when the machines are of a limited number of types. Since, in our setting, machines are of only two types, we can use this result to show the following.

Corollary 12

For any number of machines with identical partitions and for any 𝜖>0, there exists a polynomial-time truthful-in-expectation mechanism which returns a (1+𝜖)-approximate allocation.

We shall prove in Section 4 that the same result cannot be achieved by (universally) truthful mechanisms, even if we allow them to run in exponential time.

3 Mechanisms for Two Machines

In this section we consider scheduling on two-values domains with publicly known partitions in the case of m = 2 machines. In particular, we give an exact truthful-in-expectation mechanism for the case with uniform partitions (Section 3.1) and a deterministic 3/2-approximation mechanism for the case of (unrestricted) publicly known partitions (Section 3.2).

3.1 An Exact Truthful-in-Expectation Mechanism

As in the case of identical partitions, we show that every exact algorithm can be turned into an exact monotone-in-expectation algorithm (by Theorem 2 this implies an exact truthful-in-expectation mechanism). Also in this case we restrict ourselves to scheduling algorithms that output only canonical allocations (this is without loss of generality by Fact 9) and obtain monotonicity-in-expectation by randomizing among several exact allocations. However, since the partitions are not identical, the randomization procedure will be applied only to some of the canonical allocations.

Consider the four possible inputs and let us represent them as the simple graph in Fig. 2 where two inputs are adjacent if and only if they differ in exactly the type of one machine:

Note that the monotonicity condition (1) involves only inputs that are adjacent in this graph. Our strategy is to prove that monotonicity is satisfied at every edge. To this aim, we define two classes of allocations and prove that all canonical allocations are in one of these two classes. We then show that the randomization procedure guarantees monotonicity for canonical allocations in each of the two possible classes.

Fig. 2

The strategy to obtain randomized monotone allocations for two machines: Each node corresponds to a possible input and two nodes are adjacent if and only if they differ in exactly one machine’s type; The monotonicity condition (1) involves only pairs of adjacent inputs, and our strategy is to turn two out of the four canonical allocations into randomized ones

Definition 13 (allocation classes)

An allocation x is classified with respect to the instance t as

  • i-restricted if machine i gets only a proper subset of the jobs in \(J_{LH}^{ij}(t)\), where j≠i denotes the other machine;

  • symmetric if, for any i ∈ {1, 2} and j≠i, machine i gets all jobs in \(J_{LH}^{ij}(t)\), no job in \(J_{HL}^{ij}(t)\), and a (possibly empty) subset of the jobs in \(J_{LL}^{ij}(t)\cup J_{HH}^{ij}(t)\).

We say that x is restricted if it is 1-restricted or 2-restricted. Restricted and symmetric allocations are shown in Fig. 3.

The following lemma proves that all exact canonical allocations belong to one of the classes defined in Definition 13.

Fig. 3

The two allocation classes of Definition 13

Lemma 14

Every exact canonical allocation x with respect to an instance t is either symmetric or restricted.

Proof

Let x be an exact canonical allocation with respect to instance t. We remark that, by definition, \(J_{LH}^{12}(t)=J_{HL}^{21}(t)\) and \(J_{LH}^{21}(t)=J_{HL}^{12}(t)\). This fact follows directly from Definition 8.

Fact 15

If the swapping Rule R1 cannot be applied to an allocation x with respect to t, then there is at most one machine i such that \(n_{HL}^{ij}(x,t)>0\) , with j≠i. In this case, machine j gets only jobs from the set \(J_{LH}^{ji}(t)\).

Therefore it is not possible that in the allocation x machine 1 gets some jobs from \(J_{HL}^{12}(t)\) and machine 2 gets some jobs from \(J_{HL}^{21}(t)\). Observe that if neither machine gets such jobs then the allocation x is of class symmetric (see Definition 13). Thus, we have only to prove that in all the other cases in the allocation x there is a machine i that gets only a subset of jobs in \(J_{LH}^{ij}(t)\).

Suppose first that machine 1 gets some jobs from \(J_{HL}^{12}(t)\) and machine 2 gets no jobs from \(J_{HL}^{21}(t)\). This means that all jobs in \(J_{HL}^{21}(t)=J_{LH}^{12}(t)\) are allocated to machine 1. To conclude that the solution is of class 2–restricted, it is sufficient to show that all jobs in \(J_{LL}(t)\cup J_{HH}(t)\) are allocated to machine 1. This is true since, if machine 2 got a job \(\bar h \in J_{LL}(t) \cup J_{HH}(t)\), then it would be possible to apply Rule-R1 (see Definition 8) to swap \(\bar h\) with a job \(h\in J_{HL}^{12}(t)\), thus contradicting the hypothesis that the allocation is canonical.

Similarly, we have that if machine 2 gets some jobs from \(J_{HL}^{21}(t)\) and machine 1 gets no jobs from \(J_{HL}^{12}(t)\), then the solution is of class 1–restricted. □

The following two technical lemmas give bounds on the number of jobs allocated to machines in exact allocations.

Lemma 16

In any exact allocation for an instance t, if there is at least one job that can be assigned as an L-job to one of the two machines (i.e., there are a machine i and a job k such that \(t_{ik}=L\)), then no machine gets more than ⌊n/2⌋ H-jobs.

Proof

Consider the allocation in which machine i gets job k together with ⌊n/2⌋ of the other jobs, and the other machine j gets the remaining n − (⌊n/2⌋+1) ≤ ⌊n/2⌋ jobs. The makespan of this allocation is at most L + ⌊n/2⌋H < (⌊n/2⌋+1)H. Hence any allocation that assigns more than ⌊n/2⌋ H-jobs to some machine has makespan at least (⌊n/2⌋+1)H, and therefore cannot be an exact allocation. □
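Lemma 16 can be sanity-checked by brute force on a toy instance; the numbers below (n = 5 jobs, L = 1, H = 2, and a single L-entry) are our own illustrative choices, not from the paper.

```python
from itertools import product

# Toy instance (our own numbers): n = 5 jobs, L = 1, H = 2; machine 1 can
# run job 0 as an L-job, every other (machine, job) pair costs H.
L, H, n = 1, 2, 5
cost = {(i, k): (L if (i, k) == (1, 0) else H) for i in (1, 2) for k in range(n)}

def loads(alloc):
    """alloc is a tuple: alloc[k] is the machine (1 or 2) that gets job k."""
    ld = {1: 0, 2: 0}
    for k, i in enumerate(alloc):
        ld[i] += cost[(i, k)]
    return ld

opt = min(max(loads(a).values()) for a in product((1, 2), repeat=n))

# Every exact (makespan-optimal) allocation gives each machine at most
# floor(n/2) H-jobs, as Lemma 16 claims.
for a in product((1, 2), repeat=n):
    if max(loads(a).values()) == opt:
        h_count = {i: sum(1 for k, m in enumerate(a)
                          if m == i and cost[(i, k)] == H) for i in (1, 2)}
        assert max(h_count.values()) <= n // 2

print(opt)  # optimal makespan of the toy instance
```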

Lemma 17

If an instance t admits an exact allocation of class i–restricted, then machine i gets more than n/2 jobs in \(J_{LH}^{ij}(t)\), with j≠i. This implies that \(|J_{LH}^{ij}(t)| > n/2\).

Proof

Let x be an exact i–restricted allocation, for some i ∈ {1, 2}. By Definition 13, machine i gets in x only a proper subset of the jobs in \(J_{LH}^{ij}(t)\) and machine j gets at least one H-job. Then, \({n_{L}^{i}}(x, t) = n_{LH}^{ij}(x, t)\) and \({n_{H}^{i}}(x, t) = 0\). Since x is an exact allocation, i must get more jobs than j, otherwise a new allocation with smaller makespan could be obtained by moving one job in \(J_{LH}^{ij}(t)\) from j to i. Thus, \({n_{L}^{i}}(x, t) > n/2\). To conclude the proof observe that \({n_{L}^{i}}(x, t) = n_{LH}^{ij}(x, t) < |J_{LH}^{ij}(t)|\), where the inequality is strict because i gets in x only a proper subset of the jobs in \(J_{LH}^{ij}(t)\). □

We remark that in order to prove the monotonicity condition in (1) we would like to have allocations that have large unbalance. The following lemma says that, for two of the four possible inputs, exact canonical allocations can be converted into exact randomized allocations having a “good” expected unbalance on both machines.

Lemma 18

For every deterministic exact canonical allocation x for an instance

$$t\in \{(L_{S_{1}},L_{S_{2}}), (H_{S_{1}},H_{S_{2}})\} $$

with uniform partitions (i.e. \(|S_{1}|=|S_{2}|\)), there exists a randomized allocation \(x^{(rand)}\) which gives an expected unbalance of \(\frac {n}{2}-|J_{HH}(t)|\) to both machines and has makespan equal to that of x.

Proof

Since the partitions are uniform, the subsets \(J_{LH}^{12}(t)\) and \(J_{HL}^{12}(t)\) have the same size

$$s - |S_{1} \cap S_{2}| $$

in both instances (see Fig. 4). This implies that allocation x must be symmetric, for otherwise one of the two subsets \(J_{LH}^{12}(t)\) or \(J_{LH}^{21}(t)\) would have more than n/2 elements (Lemmas 14 and 17), which is impossible because they have the same size and are disjoint.

Fig. 4

Swapping operation to obtain allocation y from the symmetric allocation x

Since allocation x is symmetric, we can build a new allocation y by swapping jobs in \(J_{LL}^{12}(t)\) and in \(J_{HH}^{12}(t)\) between the two machines (see Fig. 4). Clearly, the number of H-jobs is now exchanged between the two machines, that is

$${n_{H}^{1}}(y,t) = {n_{H}^{2}}(x,t) ~~\text{and}~~ {n_{H}^{2}}(y,t) = {n_{H}^{1}}(x,t). $$

Since \(|J_{LH}^{12}(t)|=|J_{HL}^{12}(t)|\), the same happens with respect to the L-jobs, that is

$${n_{L}^{1}}(y,t) = {n_{L}^{2}}(x,t) ~~\text{and }~~ {n_{L}^{2}}(y,t) = {n_{L}^{1}}(x,t). $$

This means that y has the same makespan as x. Moreover, if we pick at random with probability 1/2 between allocation x and allocation y, then in the resulting randomized allocation \(x^{(rand)}\) the expected number of L-jobs assigned to each of the two machines is equal to

$$\frac {{n_{L}^{1}}(x, t) + {n_{L}^{2}}(x, t)}{2} = \frac {n - |J_{HH}(t)|}{2}, $$

while the expected number of H-jobs is equal to

$$\frac {{n_{H}^{i}}(x, t) + {n_{H}^{j}}(x, t)}{2} = \frac {|J_{HH}(t)|}{2}. $$

The expected unbalance of each machine is thus \(n/2-|J_{HH}(t)|\). Clearly, since y and x have the same makespan, the makespan of \(x^{(rand)}\) equals that of x. □
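The swap-and-flip construction in the proof of Lemma 18 can be sketched on a small uniform-partition instance; the partitions, allocation, and values below are our own illustrative choices.

```python
import random

# Toy uniform-partition instance (our own numbers): 6 jobs, S1 = {0,1,2},
# S2 = {1,2,3}, types (L_S1, L_S2), so machine i runs jobs in S_i at
# L = 1 and all other jobs at H = 2.
L, H, n = 1, 2, 6
S1, S2 = {0, 1, 2}, {1, 2, 3}
cost = {1: {k: (L if k in S1 else H) for k in range(n)},
        2: {k: (L if k in S2 else H) for k in range(n)}}
J_LL = S1 & S2                      # {1, 2}
J_HH = set(range(n)) - (S1 | S2)    # {4, 5}

def makespan(alloc):                # alloc: job -> machine
    loads = {1: 0, 2: 0}
    for k, i in alloc.items():
        loads[i] += cost[i][k]
    return max(loads.values())

def unbalance(alloc, i):            # n_L^i - n_H^i on machine i
    nL = sum(1 for k, m in alloc.items() if m == i and cost[i][k] == L)
    nH = sum(1 for k, m in alloc.items() if m == i and cost[i][k] == H)
    return nL - nH

# A symmetric exact canonical allocation x, with makespan 4:
x = {0: 1, 1: 1, 4: 1, 2: 2, 3: 2, 5: 2}
# y swaps the J_LL and J_HH jobs between the machines (cf. Fig. 4):
y = {k: (3 - i if k in J_LL | J_HH else i) for k, i in x.items()}
assert makespan(x) == makespan(y) == 4

# Picking x or y with probability 1/2 gives each machine an expected
# unbalance of n/2 - |J_HH| = 3 - 2 = 1, matching Lemma 18.
for i in (1, 2):
    assert (unbalance(x, i) + unbalance(y, i)) / 2 == n / 2 - len(J_HH)
x_rand = x if random.random() < 0.5 else y
```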

We are now in a position to prove the main result of this section:

Theorem 19

For m = 2 machines there exists an exact truthful-in-expectation mechanism for scheduling on two-values domains with uniform partitions.

Proof

Let A be any exact deterministic algorithm for scheduling on two machines that outputs only canonical allocations. We can define a randomized exact algorithm which first computes x = A(t), and then replaces x by the randomized allocation \(x^{(rand)}\) of Lemma 18 in two out of the four possible input instances (see Fig. 2):

$$A^{(rand)}(t) := \left\{ \begin{array}{ll} x^{(rand)} & \text{if}~ t\in \{(L_{S_{1}},L_{S_{2}}), (H_{S_{1}},H_{S_{2}})\} \\ x & \text{if}~ t\in \{(H_{S_{1}},L_{S_{2}}), (L_{S_{1}},H_{S_{2}})\} \end{array} \right. $$

By Lemma 18, \(A^{(rand)}\) is also an exact algorithm. So, we only need to prove that the monotonicity condition in (1) holds in expectation (Theorem 2).

Observe that any two instances \(t = (t_{i}, t_{-i})\) and \(\hat t=(\hat t_{i},t_{-i})\) that differ only in the type of one machine i correspond to a pair of adjacent nodes in the graph in Fig. 2. We can thus assume without loss of generality that t denotes the instance for which \(A^{(rand)}\) returns a randomized allocation \(x^{(rand)}\) and \(\hat t\) denotes the instance for which it returns a deterministic exact canonical allocation \(\hat x\),

$$t\in \{(L_{S_{1}},L_{S_{2}}), (H_{S_{1}},H_{S_{2}})\} ~~~~\text{and}~~~~ \hat t\in \{(L_{S_{1}},H_{S_{2}}), (H_{S_{1}},L_{S_{2}})\}. $$

Consider the overall expected unbalance of machine i,

$$ UN^{i}:= {n_{L}^{i}} - {n_{H}^{i}} + {\hat {n}_{L}^{i}} - {\hat {n}_{H}^{i}} $$
(12)

where \({n_{L}^{i}} = {n_{L}^{i}}(x^{(rand)}, t)\), \({n_{H}^{i}} = {n_{H}^{i}}(x^{(rand)}, t)\), \({\hat {n}_{L}^{i}} = {n_{L}^{i}}(\hat x, \hat t)\) and \({\hat {n}_{H}^{i}} = {n_{H}^{i}}(\hat x, \hat t)\). By Lemma 18, we know that \({n_{L}^{i}} - {n_{H}^{i}} = n/2 - |J_{HH}(t)|\) and thus

$$UN^{i} = n/2 - |J_{HH}(t)|+ {\hat {n}_{L}^{i}} - {\hat {n}_{H}^{i}}. $$

To show that this quantity is positive we distinguish two cases:

( \(\hat x\) is i –restricted) :

Machine i gets no H-jobs (Definition 13) and at least n/2 L-jobs (Lemma 17), which means that

$$\hat {n}_{L}^{i} - \hat {n}_{H}^{i} \geq n/2 $$

thus implying

$$\begin{array}{@{}rcl@{}} UN^{i} & \geq & n/2 - |J_{HH}(t)| + n/2 \\ & = & n - |J_{HH}(t)| \\ & \geq & 0. \end{array} $$
( \(\hat x\) is not i –restricted) :

By Lemma 14, allocation \(\hat x\) is either symmetric or j–restricted with j≠i. In both cases, machine i gets all jobs in \(J_{LH}^{ij}(\hat t)\). Since \(\hat t\in \{(L_{S_{1}},H_{S_{2}}), (H_{S_{1}},L_{S_{2}})\}\) and because the partitions have uniform size, in instance \(\hat t\) there is at least one job that can be assigned as an L-job to one of the two machines. Thus, by Lemma 16, in allocation \(\hat x\) machine i gets at most n/2 H-jobs. Therefore we have

$${\hat {n}_{L}^{i}} - {\hat {n}_{H}^{i}} \geq |J_{LH}^{ij}(\hat t)| - n/2. $$

Observe that \(J_{LH}^{ij}(\hat t)=J_{HH}(t)\), which implies

$$\begin{array}{@{}rcl@{}} UN^{i} & \geq & n/2 - |J_{HH}(t)| + |J_{HH}(t)| - n/2 \\ & = & 0. \end{array} $$

We have thus shown that A (rand) is monotone in expectation. □

3.2 A Deterministic 3/2-Approximation Truthful Mechanism

Now we switch to deterministic mechanisms and we prove the following result:

Theorem 20

For m = 2 machines there exists a polynomial-time 3/2-approximation deterministic truthful mechanism for scheduling on two-values domains with (unrestricted) publicly known partitions.

In order to prove this result we exhibit a monotone 3/2-approximation algorithm. We were unable to extend the mechanism to the case of three or more machines, so this is left as a natural open problem.

The algorithm

On input t, the algorithm partitions the jobs into three subsets:

$$J_{LL}(t), \ \ \ J_{HH}(t) ~~\text{ and }~~ J_{LHHL}(t) := J_{LH}^{12}(t) \cup J_{HL}^{12}(t) $$

First, the algorithm allocates the jobs in \(J_{LHHL}(t)\), and then completes the allocation by dividing “evenly” the other jobs in \(J_{LL}(t)\) and in \(J_{HH}(t)\). A careful tie-breaking rule is used to deal with the case in which some of these subsets of jobs have odd cardinality.

The algorithm consists of the following two steps (in the sequel we do not specify the input “t”):


Step 1 (allocate jobs in \(J_{LHHL}\)) We allocate these jobs depending on the class of the canonical exact allocation (see Definition 13) for all jobs:

  (a)

    (class i–restricted). Compute a canonical exact allocation for \(J_{LHHL}\).

  (b)

    (class symmetric). Assign all jobs in \(J_{LHHL}\) as L-jobs.

We denote by \(J_{LHHL}^{i}\) the set of jobs that are assigned to machine i in this first step, and let \(C_{i}^{(LHHL)}\) be the corresponding cost of machine i.


Step 2 (allocate jobs in \(J_{LL}\) and \(J_{HH}\)) For a set of jobs S, we denote by ⌊S/2⌋ and ⌈S/2⌉ an arbitrary subset of S of cardinality ⌊|S|/2⌋ and ⌈|S|/2⌉, respectively. We define the set \(J^{i}\) of all jobs that are assigned to machine i at the end of this step (which includes the jobs \(J_{LHHL}^{i}\) assigned in the previous step) as follows: For i and j satisfying \(C_{i}^{(LHHL)}\geq C_{j}^{(LHHL)}\), we allocate to i the set

$$\begin{array}{@{}rcl@{}} J^{i} & := & \left\{ \begin{array}{ll} J_{LHHL}^{i}\cup\left\lfloor\frac{J_{HH}}{2}\right\rfloor \cup\left\lceil\frac{J_{LL}}{2}\right\rceil & \text{if both}~~ |J_{LL}|~~\text{and}~~ |J_{HH}|~~\text{are odd;}\\ \\ J_{LHHL}^{i}\cup\left\lfloor\frac{J_{HH}}{2}\right\rfloor \cup\left\lfloor\frac{J_{LL}}{2}\right\rfloor & \text{otherwise,} \end{array}\right. \end{array} $$

and thus machine j gets the rest of the jobs

$$J^{j} := [n] \setminus J^{i}. $$
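As a sketch, Step 2 can be rendered in a few lines of code; the function name, data layout, and toy input below are our own choices, and Step 1 is assumed to have been carried out already.

```python
def step2(J_step1, C_step1, J_LL, J_HH, all_jobs):
    """Complete the allocation: J_step1[i] are the J_LHHL jobs machine i
    received in Step 1, C_step1[i] the corresponding load."""
    i = 1 if C_step1[1] >= C_step1[2] else 2     # the more loaded machine
    j = 3 - i
    LL, HH = sorted(J_LL), sorted(J_HH)          # any fixed order works
    hh_i = HH[:len(HH) // 2]                     # i gets floor(|J_HH|/2) H-jobs
    if len(LL) % 2 == 1 and len(HH) % 2 == 1:    # both odd:
        ll_i = LL[:(len(LL) + 1) // 2]           #   i gets ceil(|J_LL|/2) L-jobs
    else:                                        # otherwise:
        ll_i = LL[:len(LL) // 2]                 #   i gets floor(|J_LL|/2) L-jobs
    J_i = set(J_step1[i]) | set(hh_i) | set(ll_i)
    return {i: J_i, j: set(all_jobs) - J_i}      # j gets the rest

# Toy usage (our own numbers): J_LHHL empty, |J_LL| = 2, |J_HH| = 1.
alloc = step2({1: set(), 2: set()}, {1: 0, 2: 0}, {0, 1}, {2}, {0, 1, 2})
```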

Approximation guarantee

The two steps of the algorithm keep a small difference between the completion times of the two machines, as is formally demonstrated in the following:

Lemma 21

After each step, the difference between the two completion times is at most the optimum, that is

$$\max\limits_{i}C_{i}^{(LHHL)}\leq OPT~~~~\text{and}~~~~\max\limits_{i}C_{i}\leq \min\limits_{i}C_{i} + OPT. $$

Proof

As for the first part, assume that \(J_{LHHL}\) is not empty (otherwise the statement trivially holds). Recall that there are two cases for the allocation of jobs in \(J_{LHHL}\). In the first case (the canonical exact allocation is not symmetric), the algorithm allocates \(J_{LHHL}\) identically to the exact allocation on this set, and therefore the statement holds. In the second case (the canonical exact allocation is symmetric), the algorithm allocates all jobs as L-jobs. This is exactly what the exact allocation does, so the statement follows.

For the second statement, if the machine with higher \(C_{i}^{(LHHL)}\) determines the final makespan, and both \(J_{LL}\) and \(J_{HH}\) have odd size, then

$$\begin{array}{@{}rcl@{}} \max\limits_{i}C_{i} &=& \max\limits_{i}C_{i}^{(LHHL)}+\left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lceil\frac{J_{LL}}{2}\right\rceil L\\ &\leq & \min\limits_{i}C_{i}^{(LHHL)}+OPT + \left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lceil\frac{J_{LL}}{2}\right\rceil L\\ &\leq& \min\limits_{i}C_{i}^{(LHHL)}+\left\lceil\frac{J_{HH}}{2}\right\rceil H+\left\lfloor\frac{J_{LL}}{2}\right\rfloor L + OPT\\ &=& \min\limits_{i}C_{i} + OPT, \end{array} $$

while if \(J_{LL}\) and \(J_{HH}\) are not both odd,

$$\begin{array}{@{}rcl@{}} \max\limits_{i}C_{i} &=& \max\limits_{i}C_{i}^{(LHHL)}+\left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lfloor\frac{J_{LL}}{2}\right\rfloor L\\ &\leq & \min\limits_{i}C_{i}^{(LHHL)}+OPT + \left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lfloor\frac{J_{LL}}{2}\right\rfloor L\\ &\leq& \min\limits_{i}C_{i} + OPT. \end{array} $$

Finally, if the machine with lower \(C_{i}^{(LHHL)}\) determines the makespan, then

$$\begin{array}{@{}rcl@{}} \max\limits_{i}C_{i} &\leq& \min\limits_{i}C_{i}^{(LHHL)}+\left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lfloor\frac{J_{LL}}{2}\right\rfloor L+H\\ & \leq &\max\limits_{i}C_{i}^{(LHHL)}+\left\lfloor\frac{J_{HH}}{2}\right\rfloor H+\left\lfloor\frac{J_{LL}}{2}\right\rfloor L+H\\ &\leq& \min\limits_{i}C_{i} + OPT. \end{array} $$

□

Proof of approximation guarantee

We now prove the 3/2-approximation guarantee by distinguishing the two cases in Step 1 of the algorithm (allocation of jobs in \(J_{LHHL}\)), which correspond to the class of the canonical exact allocation for the whole instance.

(class 1–restricted or 2–restricted).:

Recall that an allocation of class 2–restricted assigns a proper subset of these jobs to machine 2, thus implying that machine 1 must get at least one H-job from this set. (The case of class 1–restricted is symmetric.) Therefore

$$OPT\geq OPT_{1} \geq |J_{LL}|L+|J_{HH}|H + H. $$

On the other hand, by using Lemma 21, we obtain

$$2APX=2\max\limits_{i}C_{i}\leq C_{1}+C_{2} + H=|J_{LL}|L+|J_{HH}|H+C_{1}^{(LHHL)}+C_{2}^{(LHHL)}+H\leq 3OPT. $$
(class symmetric).:

Consider the numbers of jobs from \(J_{LL}\) and \(J_{HH}\), respectively, that the optimum \(x^{opt}\) assigns to machine 1:

$$n_{LL}^{12}:=n_{LL}^{12}(x^{opt},t)~~\text{and}~~n_{HH}^{12}:=n_{HH}^{12}(x^{opt},t).$$

This gives the following lower bounds on the optimum (the costs of the two machines):

$$\begin{array}{@{}rcl@{}} OPT & \geq & n_{LL}^{12}L + n_{HH}^{12}H + |J^{12}_{LH}|L \\ OPT & \geq & (|J_{LL}|-n_{LL}^{12})L + (|J_{HH}|-n_{HH}^{12})H + |J^{21}_{LH}|L \end{array} $$

and thus (by summing up)

$$2OPT \geq |J_{LL}|L + |J_{HH}|H + (|J_{LH}^{12}|+|J_{LH}^{21}|)L. $$

In this scenario, our algorithm assigns all jobs in \(J_{LHHL}=J_{LH}^{12}\cup J_{LH}^{21}\) as L-jobs and thus the two costs satisfy

$$C_{1}^{(LHHL)} + C_{2}^{(LHHL)} = |J_{LH}^{12}|L + |J_{HL}^{12}|L. $$

Therefore

$$\begin{array}{@{}rcl@{}} 2APX & = & 2\max\limits_{i}C_{i}\leq C_{1}+C_{2} + H=|J_{LL}|L+|J_{HH}|H+C_{1}^{(LHHL)}+C_{2}^{(LHHL)}+H\\ & \leq & |J_{LL}|L+|J_{HH}|H + H + 2OPT - |J_{LL}|L - |J_{HH}|H \leq 3OPT. \end{array} $$

Hence the approximation guarantee has been shown for both cases. □

Observe that the performance guarantee of Theorem 20 is tight. In fact, for the instance \(|J_{LL}|=2\), \(|J_{HH}|=1\), \(|J_{LHHL}|=0\), with H = 2L, we have that the optimal makespan is 2L while the makespan of the algorithm is 3L.
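A quick arithmetic check of this tight instance (with L = 1, H = 2L = 2):

```python
from itertools import product

# Two jobs in J_LL (cost L on both machines) and one job in J_HH (cost H
# on both); brute force confirms the optimal makespan is 2L.
L, H = 1, 2
cost = [L, L, H]                   # same cost on both machines
best = min(max(sum(cost[k] for k in range(3) if a[k] == i) for i in (1, 2))
           for a in product((1, 2), repeat=3))
assert best == 2 * L               # optimum: HH-job alone vs. both LL-jobs

# The algorithm splits J_LL evenly (one L-job each) and gives the single
# HH-job to one machine, for a makespan of L + H = 3L.
apx = L + H
assert apx == 3 * L and apx / best == 3 / 2
```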

Monotonicity

First observe that the algorithm assigns to machine i at least half (rounded up or down) of its L-jobs in t, and at most half (rounded up or down) of its H-jobs in t. In particular,

$$n^{ij}_{LH}\geq\left\lceil\frac{J^{ij}_{LH}}{2}\right\rceil, n^{ij}_{LL}\geq\left\lfloor\frac{J^{ij}_{LL}}{2}\right\rfloor \text{ and } n^{ij}_{HL}\leq\left\lfloor\frac{J^{ij}_{HL}}{2}\right\rfloor, n^{ij}_{HH}\leq\left\lceil\frac{J^{ij}_{HH}}{2}\right\rceil. $$

For the job sets \(J_{LL}\) and \(J_{HH}\) this is immediate. Now, let us consider the job set \(J_{LH}^{ij}\) (the other case is symmetric). If we are in the symmetric case, the algorithm assigns all these jobs to i. Otherwise, the algorithm computes the canonical optimum on \(J_{LHHL}\). Suppose that \(n^{ij}_{LH}<\lceil \frac {J^{ij}_{LH}}{2}\rceil \). Optimality implies that machine i gets at least one H-job from the set \(J^{ij}_{HL}\). But then the allocation is not canonical (nor optimal), since Rule 2 of Definition 8 can be applied. Therefore we obtain

$$ {n_{L}^{i}} \geq \left\lceil\frac{J^{ij}_{LH}}{2}\right\rceil + \left\lfloor \frac{J^{ij}_{LL}}{2}\right\rfloor, \qquad {n_{H}^{i}} \leq \left\lceil\frac{J^{ij}_{HH}}{2}\right\rceil + \left\lfloor \frac{J^{ij}_{HL}}{2}\right\rfloor.\\ $$
(13)

Second, observe that if the type of machine i flips from t i to \(\hat {t}_{i}\), then \(|\hat {J}^{ij}_{\bar {\alpha }\beta }|=|J^{ij}_{\alpha \beta }|\) (here \(\bar L=H\) and \(\bar H=L\)). Applying (13) for both \(t_{i},\hat {t}_{i}\) and using the last identity, we finally obtain (1).

Complexity

To prove that our mechanism is polynomial we have to show that both the allocation and the prices can be computed in polynomial time. Observe that the allocation algorithm first has to compute an exact canonical allocation to decide how to perform its first step. Since the cost of an allocation depends only on how many L-jobs and H-jobs are assigned to each of the two machines, we can compute an exact allocation in polynomial time by a brute-force algorithm. Then, by Fact 9, this allocation can be transformed into an exact canonical allocation in polynomial time. Given an exact canonical allocation, the first step of the algorithm takes polynomial time to compute an allocation of the jobs in \(J_{LHHL}\), while the second step takes constant time to complete this allocation with the jobs in \(J_{LL}\) and \(J_{HH}\). Finally, by Theorem 2, we can compute the prices of our mechanism directly from the allocation in constant time.
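The brute-force step can be sketched as follows; the function and category encoding below are our own hypothetical rendering, enumerating only how many jobs of each of the four categories machine 1 receives.

```python
from itertools import product

def exact_makespan(sizes, L, H):
    """sizes: category -> number of jobs, for the four categories LH, HL,
    LL, HH (first letter: cost on machine 1; second: cost on machine 2).
    Enumerating only per-category counts for machine 1 gives an O(n^4)
    search space, polynomial in the input, instead of 2^n allocations."""
    cats = ('LH', 'HL', 'LL', 'HH')
    c1 = {'LH': L, 'HL': H, 'LL': L, 'HH': H}   # per-job cost on machine 1
    c2 = {'LH': H, 'HL': L, 'LL': L, 'HH': H}   # per-job cost on machine 2
    best = float('inf')
    for counts in product(*(range(sizes[c] + 1) for c in cats)):
        load1 = sum(k * c1[c] for k, c in zip(counts, cats))
        load2 = sum((sizes[c] - k) * c2[c] for k, c in zip(counts, cats))
        best = min(best, max(load1, load2))
    return best

# Toy check (our own numbers): 2 LL-jobs and 1 HH-job, L = 1, H = 2.
assert exact_makespan({'LH': 0, 'HL': 0, 'LL': 2, 'HH': 1}, 1, 2) == 2
```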

4 Lower Bounds and Separation Results

The next theorem says that truthful-in-expectation mechanisms are provably more powerful than universally truthful mechanisms. Indeed, for this problem version, exact truthful-in-expectation mechanisms exist for any number of machines (see Theorem 4 and Corollary 12). The proof combines the idea of the lower bound by Lavi and Swamy [12] with a technique used by Mu'alem and Schapira [16], who use Yao's Min-Max Principle to obtain lower bounds on universally truthful mechanisms.

Theorem 22

No universally truthful mechanism can achieve an approximation factor better than 31/30 for scheduling on two machines, even for the case of identical partitions.

Proof

A universally truthful mechanism is simply a probability distribution over deterministic truthful mechanisms [17]. A lower bound of c on the approximation ratio of any universally truthful mechanism can be obtained via Yao's Min-Max Principle as illustrated in [16]: find a probability distribution over the possible inputs such that every deterministic mechanism (monotone algorithm) has an expected approximation guarantee of c or worse.

Example 23 (two-machines identical partitions)

We have the four inputs in the example in Section 1.3. We know that any deterministic algorithm must err on at least one of these four inputs, and (because of the values) on this input the approximation is at least 10/9 ≈ 1.111 (the deterministic lower bound). Taking the uniform distribution over these four inputs, Yao's principle tells us that every universally truthful mechanism must have an (expected) approximation no better than (1/4) ⋅ (10/9) + 3/4 = 37/36 ≈ 1.028.

The previous bound can be improved by optimizing the probability distribution. In particular, we assign positive probabilities only to three out of the four inputs shown here:

(14)

Every deterministic monotone algorithm must change the output in at least one of these three inputs (because monotonicity is not satisfied between the first two inputs and between the last two). Deterministic algorithms that use an alternative allocation in the first (or the third) input have an expected approximation of at least

$$p \cdot (11/10) + (1-p) = 1 + p/10 $$

while for those algorithms that use an alternative allocation in the second input, the approximation is at least

$$(1-2p) 11/10 + 2p = 1 + 1/10 - 2p/10 $$

Taking p = 1/3 we equate these two quantities and thus obtain that every deterministic algorithm must have an approximation of at least 31/30 ≈ 1.0333. Therefore Yao’s Min-Max Principle yields Theorem 22. □
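The optimization over p can be double-checked with exact rational arithmetic, using the two simplified expressions above:

```python
from fractions import Fraction

p = Fraction(1, 3)
# expected approximation when the algorithm errs on the first or third input
a = 1 + p / 10
# expected approximation when it errs on the second input
b = 1 + Fraction(1, 10) - 2 * p / 10
assert a == b == Fraction(31, 30)   # both equal 31/30 at p = 1/3
```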

Lower bounds for three-values domains

The above bounds can be strengthened by considering three-values domains (which are no longer “single-bit”). First, we give an alternative proof of the lower bound of 2 for deterministic mechanisms on two machines, first shown by Nisan and Ronen in [17]. The proof in [17] requires that the input domain consist of at least 4 different values. Here, we extend the proof so that it holds even when the domain consists of only 3 different values.

Theorem 24

For two machines and the case in which the processing times can take three values, no (deterministic) truthful mechanism can achieve an approximation factor better than 2.

Proof

Assume that we have only two machines and an odd number of jobs, that is, n = 2k + 1. For every machine i and job h, the processing time can take only the three values \(t_{ih} \in \{0, 1, 1+\epsilon\}\).

Let us assume that the input is \(t_{ih} = 1\) for all machines i and all jobs h. Let x = A(t) be the allocation matrix determined by the mechanism M = (A, P). If one of the machines gets all the jobs, then the algorithm has makespan 2k + 1, while the optimum has makespan k + 1, and so the approximation ratio gets arbitrarily close to 2 for arbitrarily large values of k. If the jobs are allocated to both machines, let j be the machine that gets a subset \(S_{j}\) of jobs with odd cardinality. Now produce the following input \(\hat t\) by changing the type of j only:

$$\hat t_{ih}=\left\{ \begin{array}{l l} 0, & i=j ~~\text{and}~~ h\in S_{j}\\ 1+\epsilon, & i=j~~ \text{and}~~ h\not \in S_{j}\\ t_{ih}, & \text{otherwise} \end{array}\right. $$

Let \(\hat x=A(\hat t)\) be the allocation for the input \(\hat t\). Because the mechanism is truthful, it must also be monotone (see e.g. [6]):

$$\sum\limits_{h\in [n]}(t_{jh}-\hat t_{jh})\cdot(x_{jh}-\hat x_{jh})\leq 0. $$

From this we get that the set of jobs that machine j gets for the input \(\hat t\) is again \(S_{j}\). The makespan of the mechanism for the input \(\hat t\) is \(\left |\bar S_{j}\right |\), where \(\bar S_{j}=[n] \setminus S_{j}\), while the optimal makespan is roughly \(\left |\bar S_{j}\right |/2\), so the ratio again approaches 2 for large k. □
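The monotonicity step of this proof can be verified by enumeration on a toy instance (our own parameters below), using the weak-monotonicity inequality summed over all jobs: among all sets machine j could receive on \(\hat t\), only \(S_{j}\) itself satisfies it.

```python
from itertools import combinations

# Toy check (n = 5, eps = 0.5, our own numbers): t gives machine j cost 1
# on every job; t_hat gives cost 0 on S_j and 1 + eps elsewhere.
eps = 0.5
n, S_j = 5, {0, 1, 2}
t = {h: 1.0 for h in range(n)}
t_hat = {h: (0.0 if h in S_j else 1.0 + eps) for h in range(n)}
x = {h: (1 if h in S_j else 0) for h in range(n)}   # j gets S_j under t

def wmon_ok(T):
    """Weak monotonicity when machine j gets the set T under t_hat."""
    x_hat = {h: (1 if h in T else 0) for h in range(n)}
    return sum((t[h] - t_hat[h]) * (x[h] - x_hat[h]) for h in range(n)) <= 0

feasible = [set(c) for r in range(n + 1)
            for c in combinations(range(n), r) if wmon_ok(set(c))]
assert feasible == [S_j]   # only S_j survives, as the proof claims
```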

We next strengthen the lower bound on universally truthful mechanisms of Theorem 22 by considering domains of three values. The proof consists of a suitable probability distribution over the instances used for the deterministic lower bound.

Theorem 25

No universally truthful mechanism can achieve an approximation factor better than 9/8 for scheduling on two machines and three jobs that take three values.

Proof

There are three jobs and t is the matrix with all processing times equal to 1. Besides the two allocations which assign all jobs to one machine, there are 6 non-trivial allocations:

(15)

We next show that every deterministic monotone algorithm which gives one of these allocations cannot have an approximation better than 2. Consider a modified instance for each of these allocations:

(16)

where δ = 1 + 𝜖 and 𝜖 is some tiny number. Every monotone deterministic algorithm which chooses one of the allocations in (15) is forced to output the corresponding allocation on the corresponding instance in (16) (this follows as in the proof of Theorem 24 when considering three jobs). The approximation is then 2/(1 + 𝜖) which, for tiny 𝜖, gets arbitrarily close to 2.

Assign a probability p to each of the instances in (16) and probability q = (1 − 6p) to the instance of all 1’s. Every deterministic algorithm which allocates all jobs to the same machine has expected approximation at least

$$q(3/2) + 1-q = 1 + q/2 = 1 + 1/2 - 3p $$

while every deterministic monotone algorithm using one of the allocations in (15) has expected approximation at least

$$2p + 1-p = 1 + p $$

Taking p = 1/8, these two quantities are equal, and thus every deterministic monotone algorithm has an expected approximation of at least 9/8. Yao's Min-Max Principle implies that this lower bound applies to every universally truthful mechanism. □
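Again the choice of p can be verified with exact rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 8)
q = 1 - 6 * p                            # probability of the all-ones instance
trivial = q * Fraction(3, 2) + (1 - q)   # all jobs on one machine
nontrivial = 2 * p + (1 - p)             # deviating on one modified instance
assert trivial == nontrivial == Fraction(9, 8)
```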

5 Conclusion

In this paper we studied scheduling on selfish machines for two-values input domains with publicly known partitions. We considered three different models of publicly known partitions, presented both exact randomized truthful-in-expectation mechanisms and deterministic approximation mechanisms, and proved several lower bounds.

Our work leaves several open questions. First of all, we are able to derive exact truthful-in-expectation mechanisms for an arbitrary number of machines only in the case where partitions are identical. Moreover, exact truthful-in-expectation mechanisms for two machines are given only for uniform partitions. A natural open question is whether it is possible to extend these results to more general cases, such as (1) any number of machines with uniform partitions, or (2) two machines with unrestricted given partitions. Finally, we note that the lower bound of 2 for three-values domains does not consider jobs' partitions (as our upper bounds do), and thus it would be natural to prove a lower bound for three-values domains with publicly known partitions.