1 Introduction

The field of resource allocation problems has been widely studied in recent years. A fundamental and one of the most well-studied problems in this field is the Assignment problem (Footnote 1) [1,2,3, 5, 6, 9, 19, 24, 35]. In the Assignment problem we are given a set of n agents and a set of m items. Each agent (human, company, or any other entity) has strict preferences over a subset of items, and the objective is to allocate items to agents in an “optimal” way. Different notions of optimality have been considered in the literature, but the one that has received the most attention is pareto optimality (see, e.g., [2, 5, 6]). Intuitively, an assignment p is called pareto optimal if there is no other assignment q that is at least as good as p for all the agents and also strictly better than p for at least one agent.

Besides its theoretical interest, the problem also has practical importance. Algorithms for the Assignment problem have applications in a variety of real-world situations, such as assigning jobs to workers, campus houses to students, time stamps to users on a common machine, players to sports teams, graduating medical students to their first hospital appointments, and so on [17, 23, 30, 36].

In the Assignment problem, each agent has exactly one preference list. The preference lists may represent a single subjective criterion according to which each agent ranks the items. However, they may also represent a combination of different such criteria: each agent associates a score with each item per criterion, and a single preference list is derived from some weighted sum of the scores. In many cases, it is unclear how to combine scores associated with criteria of inherently incomparable nature; doing so is like “comparing apples with oranges”. Additionally, even if a single list can be forcefully extracted, most of the data is lost (Footnote 2).

Thus, the classic model seems somewhat restrictive in real world scenarios where people rely on highly varied aspects to rank other entities. For example, suppose that there are n candidates who need to be assigned to n positions. The recruiters may rank the candidates for each position according to different criteria, such as academic background, experience, impression, and so on [4, 22]. Moreover, when assigning campus houses to students, the students may rank the houses by multiple criteria such as their location, rent, size etc. [33]. This motivates the employment of multiple preference lists where each preference list (called a layer) is defined by a different criterion.

In many real-world scenarios, the preferences of the agents may sometimes depend on external circumstances that may not be completely known in advance, such as growth of stocks in the market, natural phenomena, outbreaks of pandemics [32, 34], and so on. In such cases, each layer in our generalized model can represent a possible “state” of the world, and we may seek an assignment that is optimal in as many states as possible. For instance, suppose that there is a taxi company with n taxis and m customers (\(n > m\)) who want to be picked up at a specific time in the future. The “cost” of each taxi depends on the time taken to reach the customer from the starting location of the taxi. Many factors (that may not be completely known a priori) may affect the total cost, such as road constructions, weather, car condition, and the availability of the drivers [15, 29]. The company may suggest different possible scenarios (each represents a layer). For each scenario, the customers may be ranked differently by the taxis, and an assignment that is pareto optimal in as many layers as possible will cover most of the scenarios and will give the lowest expected total cost.

Furthermore, it is not always possible to fully obtain the preferences of some (or all) agents, due to lack of information or communication, as well as security and privacy issues [10, 27]. In addition, even if it is technically and ethically feasible, it may be costly in terms of money, time, or other resources to gather all information from all the agents [26]. In these cases, we can “complete the preferences” using different assumptions on the agents. As a result, we will have a list of preference profiles that represent different possible states of the world. An assignment that is pareto optimal in as many preference profiles as possible will be pareto optimal with high probability.

Our work is inspired by that of Chen et al. [12], who studied the Stable Marriage problem under multiple preferences (Footnote 3). Chen et al. [12] considered an extension where there are \(\ell \) layers of preferences, and adapted the definition of stability accordingly. Specifically, three notions of stability were defined: \(\alpha \)-global stability, \(\alpha \)-pair stability, and \(\alpha \)-individual stability. The authors studied the algorithmic complexity of finding matchings that satisfy each of these stability notions. Their notion of \(\alpha \)-global stability extends the original notion of stability in a natural way, by requiring the sought matching to be stable in (at least) some \(\alpha \) layers. Our notion of \(\alpha \)-global optimality extends pareto optimality in the same way, by requiring an assignment to be pareto optimal in some \(\alpha \) layers.

Although the Assignment problem can be solved in polynomial time using a mechanism called “serial dictatorship” [2], we show that the problem becomes much harder when multiple preference lists are taken into account. Therefore, in this paper, we study the parameterized complexity of deciding whether a globally optimal assignment exists with respect to various parameters.

Table 1. Summary of our results. Results marked with \(\dagger \) are proved to be optimal under the exponential-time hypothesis.

Our Contributions. One important aspect of our contribution is conceptual: we are the first to study pareto optimality (in the Assignment problem) in the presence of multiple preference lists. This opens the door to many future studies (both theoretical and experimental) of our concept, as well as refinements or extensions thereof (see Sect. 6). In this work, we focus on the classical and parameterized complexity of the problem.

We consider several parameters such as the number of layers \(\ell \), the number of agents n (also denoted by \(\mathrm {\# agents}\)), the number of items m (also denoted by \(\mathrm {\# items}\)), the maximum length of a preference list d, and the given number of layers \(\alpha \) for which we require an assignment to be pareto optimal. The choice of these parameters is sensible because in real-life scenarios such as those mentioned earlier, some of these parameters may be substantially smaller than the input size. For instance, \(\ell \), \(\alpha \) and \(\ell - \alpha \) are upper bounded by the number of criteria according to which the agents rank the items. Thus, they are likely to be small in practice: when ranking other entities, people usually do not consider a substantially large number of criteria (further, up until now, attention was only given to the case where \(\ell = \alpha = 1\)). For instance, when sports teams rank candidate players, only a few criteria such as the player’s winning history, his impact on his previous teams, and physical properties are taken into account [18]. In addition, the parameter \(\ell - \alpha \) may be small particularly in cases where we want to find an assignment that is optimal with respect to as many criteria as possible. Moreover, in various cases concerning ranking of people, jobs, houses etc., people usually have a limited number of entities that they want or are allowed to ask for [14]. In these cases, the parameter d is likely to be small. Moreover, in small countries (such as Israel), the number of universities, hospitals, sports teams and many other facilities and organizations is very small [13, 31]. Thus, in scenarios concerning these entities, at least one among n and m may be small. A summary of our results is given in Table 1.

Fixed-Parameter Tractability and ETH Based Lower Bounds. We prove that \(\alpha \)-Globally Optimal Assignment is fixed-parameter tractable (FPT) with respect to n by providing an \(\mathcal {O}^{*}(n!)\)-time algorithm that relies on the connection between pareto optimality and serial dictatorship. We then prove that the problem admits a polynomial kernel with respect to \(n + \ell \) and that it is FPT with respect to \(\mathrm {\# items}+ \ell \) by providing an exponential kernel. We also prove that the problem is slice-wise polynomial (XP) with respect to \(\mathrm {\# items}\) by providing an \((nm)^{\mathcal {O}(m)}\)-time algorithm. In addition, we prove that, under ETH (defined in Sect. 2), there is no algorithm with running time \(\mathcal {O}^{*}(2^{o(t\log {t})})\) (so our \(\mathcal {O}^{*}(n!)\)-time algorithm is essentially optimal) even for larger parameters such as \(t = n + m + \alpha \) and \(t = n + m + (\ell - \alpha )\), using two linear parameter reductions from the \(k \times k\) Permutation Clique problem. Lastly, we prove that the problem is W[1]-hard with respect to \(m + \alpha \) and \(m + (\ell - \alpha )\) using two parameterized reductions from Multicolored Independent Set.

NP-Hardness. We prove that the problem is NP-hard for any fixed \(\alpha \) and \(\ell \) such that \(2 \le \alpha \le \ell \) using a polynomial reduction from the Serial Dictatorship Feasibility problem that relies on a reduction by Aziz et al. [6]. We also define three polynomial-time constructions of preference profiles given an instance of 3-SAT, and we rely on them in two polynomial reductions from 3-SAT, such that in the resulting instances \(\ell +d\) and \(\ell +(m - d)\) are bounded by fixed constants. This proves that the problem is para-NP-hard for \(\ell +d\) and \(\ell +(m - d)\).

Non-existence of Polynomial Kernels. We prove that the problem does not admit polynomial kernels unless NP \(\subseteq \) coNP/poly w.r.t. \(n + m + \alpha \), \(n + m + (\ell - \alpha )\), and \(m + \ell \) using three cross-compositions (defined in Sect. 2) from 3-SAT that rely on the aforementioned reduction to prove para-NP-hardness.

2 Preliminaries

For any natural number t, we denote \([t] = \lbrace 1,\ldots ,t \rbrace \). We use the \(\mathcal {O}^{*}\) and the \(\varOmega ^{*}\) notations to suppress polynomial factors in the input size, that is, \(\mathcal {O}^{*}(f(k))=f(k) \cdot n^{\mathcal {O}(1)}\) and \(\varOmega ^{*}(f(k))=\varOmega (f(k)) \cdot n^{\mathcal {O}(1)}\).

The Assignment Problem. An instance of the Assignment problem is a triple (A, I, P) where A is a set of n agents \(\lbrace a_{1},\ldots ,a_{n} \rbrace \), I is a set of m items \(\lbrace b_{1},\ldots ,b_{m} \rbrace \), and \(P=(<_{a_{1}},\ldots ,<_{a_{n}})\), called the preference profile, contains the preferences of the agents over the items, where each \(<_{a_{i}}\) is a linear order over a subset of I. We refer to such linear orders as preference lists. If \(b_{j} <_{a_{i}} b_{r}\), we say that agent \(a_{i}\) prefers item \(b_{r}\) over item \(b_{j}\), and we write \(b_{j} \le _{a_{i}} b_{r}\) if \(b_{j} <_{a_{i}} b_{r}\) or \(b_{j} = b_{r}\). Item b is acceptable by agent a if b appears in \(<_{a}\). An assignment is an allocation of items to agents such that each agent is allocated at most one item, and each item is allocated to at most one agent. We define a special item \(b_{\emptyset }\), which is seen as the least preferred item of each agent and is used to indicate that an agent is not assigned any item. We assume that \(b_{\emptyset }\) is not part of the input item set, and that it appears at the end of every preference list (we will not write it explicitly). Formally, an assignment \(p:A \rightarrow I \cup \lbrace b_{\emptyset } \rbrace \) is a mapping from agents to items, s.t. for each \(i \in [n]\): (1) \(p(a_{i})=b_{\emptyset }\), or (2) both \(p(a_{i}) \in I\) and for each \(j \in [n]\setminus \lbrace i \rbrace \), \(p(a_{i}) \ne p(a_{j})\). We refer to p as legal if each item is assigned to an agent who accepts it. For brevity, we will usually omit the term “legal” (Footnote 4). Moreover, when we write a set in a preference list, we assume that its elements are ordered arbitrarily, unless stated otherwise.

Optimality. An assignment p is pareto optimal if there does not exist another assignment q such that both \(p(a_{i}) \le _{a_{i}} q(a_{i})\) for every \(i \in [n]\), and there exists \(i \in [n]\) such that \(p(a_{i}) <_{a_{i}} q(a_{i})\); p admits a trading cycle \((a_{i_{0}},b_{j_{0}},a_{i_{1}},b_{j_{1}},\ldots ,a_{i_{k-1}},b_{j_{k-1}})\) if for each \(r \in \lbrace 0 ,\ldots ,k-1\rbrace \), we have that \(p(a_{i_r})=b_{j_r}\) and \(b_{j_r} <_{a_{i_{r}}} b_{j_{r+1 \ (\mathrm {mod}\ k)}}\). We say that p admits a self loop if there exist an agent \(a_{i}\) and an item \(b_j\) such that \(b_{j}\) is not allocated to any agent, and \(a_{i}\) prefers \(b_{j}\) over its own item. We now provide a simple characterization of pareto optimality that is defined with respect to trading cycles and self loops:

Proposition 1

(Folklore; see, e.g., Aziz et al. [5, 6]). An assignment is pareto optimal if and only if it admits neither trading cycles nor self loops.

For an instance (A, I, P) and an assignment p, the corresponding trading graph is the directed graph over \(A \cup I\), constructed as follows: (1) for each \(a\in A\) s.t. \(p(a)\ne b_{\emptyset }\), p(a) points to a; (2) each \(a \in A\) points to all the items it prefers over its assigned item p(a); (3) each \(b \in I\) with no owner points to all the agents that accept it. An assignment is pareto optimal if and only if its corresponding trading graph does not contain cycles (see, e.g., Aziz et al. [5, 6]).
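To make this characterization concrete, here is a minimal Python sketch (ours, not part of the paper; the data format, with `prefs` mapping agents to preference lists and `None` playing the role of \(b_{\emptyset }\), is an assumption of the example) that builds the trading graph and tests pareto optimality by searching for a cycle:

```python
from collections import defaultdict

def is_pareto_optimal(prefs, assignment):
    """prefs: dict mapping each agent to its preference list (most preferred first).
    assignment: dict mapping each agent to an item, or to None (playing the role of b_empty).
    Returns True iff the corresponding trading graph is acyclic (Proposition 1)."""
    items = {b for lst in prefs.values() for b in lst}
    owner = {b: a for a, b in assignment.items() if b is not None}
    graph = defaultdict(set)
    for a, lst in prefs.items():
        b = assignment[a]
        if b is not None:
            graph[b].add(a)                            # (1) an allocated item points to its owner
        cut = lst.index(b) if b in lst else len(lst)   # b_empty is ranked last
        graph[a].update(lst[:cut])                     # (2) an agent points to the items it strictly prefers
    for b in items - set(owner):                       # (3) an unallocated item points to the agents accepting it
        graph[b].update(a for a, lst in prefs.items() if b in lst)
    WHITE, GREY, BLACK = 0, 1, 2                       # iterative DFS; GREY = on the current path
    colour = defaultdict(int)
    for start in list(prefs) + list(items):
        if colour[start] != WHITE:
            continue
        stack, colour[start] = [(start, iter(graph[start]))], GREY
        while stack:
            v, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                colour[v] = BLACK
                stack.pop()
            elif colour[nxt] == GREY:
                return False                           # cycle found: a trading cycle or a self loop
            elif colour[nxt] == WHITE:
                colour[nxt] = GREY
                stack.append((nxt, iter(graph[nxt])))
    return True
```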

A simple assignment mechanism is the greedy serial dictatorship mechanism. For a given permutation over the agents, the agent ordered first picks its most preferred item, then the agent ordered second picks its most preferred item among the remaining items, and so on. If at some point an agent has no available acceptable item, it is assigned \(b_{\emptyset }\). We say that an assignment p is a possible outcome of serial dictatorship if there exists a permutation \(\pi \) such that applying serial dictatorship with respect to \(\pi \) results in p.
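A minimal sketch of the mechanism (ours), using the same illustrative data format as above; the permutation `order` over the agents is the input of the mechanism:

```python
def serial_dictatorship(prefs, order):
    """Apply serial dictatorship with respect to the permutation `order` of the agents.
    Each agent, in its turn, picks its most preferred item that is still available;
    an agent with no available acceptable item is assigned None (i.e., b_empty)."""
    taken, assignment = set(), {}
    for a in order:
        choice = next((b for b in prefs[a] if b not in taken), None)
        assignment[a] = choice
        if choice is not None:
            taken.add(choice)
    return assignment
```

By Proposition 2 below, iterating this over all n! permutations enumerates exactly the pareto optimal assignments of the instance.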

Proposition 2

(Abdulkadiroglu and Sönmez [2]). An assignment is pareto optimal if and only if it is a possible outcome of serial dictatorship.

This implies that the Assignment problem is solvable in polynomial time.

Generalization of the Assignment Problem. We introduce a generalized version of the Assignment problem where there are \(\ell \) layers of preferences. For each \(j \in [\ell ]\), we refer to \(<_{a_{i}}^{(j)}\) as \(a_{i}\)’s preference list in layer j. The preference profile in layer j is the collection of all the agents’ preference lists in layer j, namely, \(P_{j}=(<_{a_{1}}^{(j)},\ldots ,<_{a_{n}}^{(j)})\).

Definition 1

An assignment p is \(\alpha \)-globally optimal for an instance \((A,I,P_{1},\ldots ,P_{\ell })\) if there exist \(\alpha \) layers \(i_{1},\ldots ,i_{\alpha } \in [\ell ]\) such that p is pareto optimal in layer \(i_{j}\) for each \(j \in [\alpha ]\).

\(\alpha \)-Globally Optimal Assignment

Input: A set of agents A, a set of items I, a preference profile \(P_{j}\) for each layer \(j \in [\ell ]\), and an integer \(\alpha \in [\ell ]\).

Question: Does there exist an \(\alpha \)-globally optimal assignment for \((A,I,P_{1},\ldots ,P_{\ell })\)?

Notice that this problem is solvable in polynomial time when \(\alpha = 1 \) by applying serial dictatorship in some arbitrary layer. We study \(\alpha \)-Globally Optimal Assignment from the perspective of parameterized complexity.
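As a toy illustration (ours, not from the paper), two agents with opposite preferences in two layers already show that \(\alpha \ge 2\) can be impossible to achieve; using the sketches above:

```python
layer1 = {"a1": ["b1", "b2"], "a2": ["b2", "b1"]}
layer2 = {"a1": ["b2", "b1"], "a2": ["b1", "b2"]}
p = {"a1": "b1", "a2": "b2"}
print(is_pareto_optimal(layer1, p))  # True:  p is pareto optimal in layer 1
print(is_pareto_optimal(layer2, p))  # False: (a1, b1, a2, b2) is a trading cycle in layer 2
# Hence p is 1-globally optimal but not 2-globally optimal; by symmetry, the
# assignment q(a1) = b2, q(a2) = b1 behaves in the opposite way, so this instance
# admits no 2-globally optimal assignment at all.
```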

Parameterized Complexity. In the framework of parameterized complexity, each instance of a problem \(\varPi \) is associated with a parameter k. We say that \(\varPi \) is fixed-parameter tractable (FPT) or slice-wise polynomial (XP) if any instance (I, k) of \(\varPi \) is solvable in time \(f(k)\cdot |I|^{\mathcal {O}(1)}\) or \(|I|^{f(k)}\), respectively, where f is an arbitrary computable function of k. A problem that is W[1]-hard is unlikely to be FPT, and the main technique to prove W[1]-hardness is a parameterized reduction. A polynomial compression from \(\varPi \) to \(\varPi '\) is a polynomial-time algorithm that, given an instance (I, k) of \(\varPi \), outputs an equivalent instance \(I'\) of \(\varPi '\) such that \(|I'|\le poly(k)\). If \(\varPi '=\varPi \), we say that \(\varPi \) admits a polynomial kernel. A cross-composition from \(\varPi \) to \(\varPi '\) is a polynomial-time algorithm that, given instances \(I_1,I_2,\ldots ,I_t\) of \(\varPi \) for some \(t \in \mathbb {N}\) that are of the same size \(s\in \mathbb {N}\), outputs an instance (I, k) of \(\varPi '\) such that (1) \(k\le poly(s + \log {t})\); and (2) (I, k) is a Yes-instance of \(\varPi '\) if and only if at least one of \(I_1,I_2,\ldots ,I_t\) is a Yes-instance of \(\varPi \). By [7, 8], the existence of a cross-composition from an NP-hard problem \(\varPi \) to a parameterized problem \(\varPi '\) implies that \(\varPi '\) does not admit a polynomial compression, unless NP \(\subseteq \) coNP/poly. To obtain (essentially) tight conditional lower bounds for the running times of algorithms, we rely on the Exponential-Time Hypothesis (ETH) [11, 20, 21]. Informally, ETH asserts that 3-SAT cannot be solved in time \(2^{o(n)}\) where n is the number of variables.

3 Fixed-Parameter Tractability and ETH Based Bounds

We first prove that \(\alpha \)-Globally Optimal Assignment is FPT with respect to the parameter \(n = \mathrm {\# agents}\).

Theorem 1

(*). (Footnote 5) There exists an \(\mathcal {O}^{*}(n!)\)-time algorithm for \(\alpha \)-Globally Optimal Assignment.

Proof (sketch)

We provide a brute-force algorithm. The algorithm enumerates all possible pareto optimal assignments in each layer, using serial dictatorship with respect to all possible permutations on the agents. For each assignment p, it constructs the corresponding trading graphs for all the layers with respect to p, and checks whether there exist \(\alpha \) graphs with no cycles. The running time of the algorithm is \(\mathcal {O}^{*}(n!)\), since it iterates over \(\mathcal {O}(\ell n!)\) assignments (each layer may have at most n! different pareto optimal assignments by Proposition 2), and for each assignment, it takes polynomial time to construct the trading graphs, and to count how many contain no cycles.   \(\square \)
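The enumeration described in the proof sketch can be written as follows (a sketch of ours, reusing `serial_dictatorship` and `is_pareto_optimal` from Sect. 2, in the illustrative data format used there):

```python
from itertools import permutations

def globally_optimal_exists(layers, alpha):
    """layers: list of preference profiles, each a dict agent -> preference list.
    Returns True iff some assignment is pareto optimal in at least `alpha` layers.
    The candidate assignments are the serial-dictatorship outcomes of every
    permutation in every layer (Proposition 2), i.e. O(l * n!) candidates."""
    agents = list(layers[0])
    for profile in layers:
        for order in permutations(agents):
            p = serial_dictatorship(profile, order)
            if sum(is_pareto_optimal(q, p) for q in layers) >= alpha:
                return True
    return False
```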

We now provide a simple lemma that will help us to design a polynomial kernel for \(\alpha \)-Globally Optimal Assignment with respect to \(n + \ell \).

Lemma 1

Let (A, I, P) be an instance of the Assignment problem where \(|A|= n\). Then, for any agent \(a \in A\) and any pareto optimal assignment, a is assigned to \(b_{\emptyset }\) or to one of the n most preferred items in its preference list.

Proof (sketch)

By Proposition 2, each pareto optimal assignment is a possible outcome of serial dictatorship. Observe that for each \(i \in [n]\), when the mechanism is in the i-th step, it has already allocated at most \(i-1\) items. Thus, the i-th allocated item must be either: (i) \(b_{\emptyset }\) (if all the items in the current preference list have already been allocated); or (ii) one of the first i items in the current preference list.   \(\square \)

Theorem 2

(*). \(\alpha \)-Globally Optimal Assignment admits a kernel of size \(\mathcal {O}(\ell n^{2})\). Thus, it admits a polynomial kernel w.r.t. \(n + \ell \).

Proof (sketch)

Given an instance of \(\alpha \)-Globally Optimal Assignment \(I_{1} = (A,I,P_{1},\ldots ,P_{\ell },\alpha )\), the kernelization algorithm reduces each preference profile \(P_{i}\) to a preference profile \(P_{i}'\) by keeping only the (at most) n first-ranked items in each preference list. Let \(I'\) be the set of items that are ranked among the first n positions in some preference list in \(I_{1}\). The resulting instance is \(I_{2}=(A,I',P_{1}',\ldots ,P_{\ell }',\alpha )\), which satisfies \(|I_{2}| = \mathcal {O}(\ell n^{2})\). We prove that \(I_{1}\) is equivalent to \(I_{2}\) using Lemma 1.   \(\square \)
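A sketch (ours) of this reduction rule, in the illustrative data format from Sect. 2:

```python
def trim_to_top_items(layers):
    """Kernelization of Theorem 2 (sketch): in every layer, keep only the first n
    items of each preference list, where n is the number of agents; by Lemma 1
    the resulting instance is equivalent to the original one."""
    n = len(layers[0])
    return [{a: lst[:n] for a, lst in profile.items()} for profile in layers]
```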

Before we present an exponential kernel for \(\alpha \)-Globally Optimal Assignment with respect to the parameter \(m + \ell \), let us define the following.

Definition 2

Let \(Q = (A,I,P_{1},\ldots ,P_{\ell },\alpha )\) be an instance of \(\alpha \)-Globally Optimal Assignment and \(u \in A\). The agent class of u in Q, \(\mathcal {C}(u,Q)\), is the tuple that contains the preference lists of u in all the layers, namely, \(\mathcal {C}(u,Q)=(<_{u}^{(1)},\ldots ,<_{u}^{(\ell )})\). Define \(D = \lbrace B \subseteq I \times I \mid B\text { is a linear order over a subset of } I \rbrace \). For a given tuple \(C \in D^{\ell }\) of \(\ell \) linear orders on subsets of I, define \(A(C,Q) = \lbrace a \in A \mid \mathcal {C}(a,Q)=C \rbrace \).

Theorem 3

(*). \(\alpha \)-Globally Optimal Assignment admits a kernel of size \(\mathcal {O}((m!)^{\ell +1})\). Thus, it is FPT with respect to \(m + \ell \).

Proof (sketch)

Given an instance of \(\alpha \)-Globally Optimal Assignment \(Q = (A,I,P_{1},\ldots ,P_{\ell },\alpha )\), the kernelization algorithm works as follows (formally described in the full version): it removes from A agents that share the same agent class (together with all their preference lists), so that in the resulting instance there are at most \(m+1\) agents in the set \(A(\mathcal {C}(a,Q),Q)\) for each \(a \in A\). Intuitively, since there are m items, at most m agents in \(A(\mathcal {C}(a,Q),Q)\) will be assigned to items; we keep at most \(m+1\) agents (rather than m) in each agent class to cover the case where an agent is assigned to \(b_{\emptyset }\) and admits a self loop.

Assume that we run the kernel on \(I_{1} = (A_{1},I,P_{1},\ldots ,P_{\ell },\alpha )\) to obtain an instance \(I_{2} = (A_{2},I,Q_{1},\ldots ,Q_{\ell },\alpha )\). We first show that \(|I_{2}| = \mathcal {O}((m!)^{\ell +1})\). There exist \(\sum _{j=0}^{m}{{\left( {\begin{array}{c}m\\ j\end{array}}\right) } \cdot j!} = \sum _{j=0}^{m}{\frac{m!}{j!(m-j)!}j!} = m! \sum _{j=0}^{m}{\frac{1}{j!}} \le e \cdot m! = \mathcal {O}(m!)\) possible orderings of subsets of I. Then, there exist \(\mathcal {O}((m!)^{\ell })\) different combinations of such \(\ell \) orderings, implying that there exist \(\mathcal {O}((m!)^{\ell })\) possible agent classes over the item set I. Since for each agent class C, \(|A_{2}(C,I_{2})| \le m+1\), we have that \(|A_{2}| = \Sigma _{\text {agent class C}}{|A_{2}(C,I_{2})|} \le (m!)^{\ell } \cdot (m\,+\,1)\). Thus, \(|I_{2}| = \mathcal {O}((m!)^{\ell } \cdot (m+1)) = \mathcal {O}((m!)^{\ell +1})\).

We now prove that \(I_{1}\) is a Yes-instance if and only if \(I_{2}\) is a Yes-instance.

(\(\Rightarrow \)): Assume that there exists an \(\alpha \)-globally optimal assignment p for \(I_{1}\). Then, there exist \(\alpha \) layers \(i_{1},\ldots ,i_{\alpha }\) of \(I_{1}\) in which p is pareto optimal. We create an assignment \(q:A_{2} \rightarrow I \cup \lbrace b_{\emptyset } \rbrace \) for the reduced instance as follows: For each \(a \in A_{2}\), let \(p(A_{1}(\mathcal {C}(a,I_{1}),I_{1}))\) denote the set of items allocated to the agents from \(A_{1}(\mathcal {C}(a,I_{1}),I_{1})\) by p. We allocate the items in \(p(A_{1}(\mathcal {C}(a,I_{1}),I_{1}))\) to agents in \(A_{2}(\mathcal {C}(a,I_{2}),I_{2})\) arbitrarily (observe that \(\mathcal {C}(a,I_{1}) = \mathcal {C}(a,I_{2})\)). Agents that do not have available items are assigned to \(b_{\emptyset }\). First, observe that q allocates all the items which are allocated by p since there are at most m items, and the algorithm keeps all or exactly \(m+1\) agents from each set \(A_{1}(\mathcal {C}(a,I_{1}),I_{1})\). As a result, q cannot admit self loops in layers \(i_{1},\ldots ,i_{\alpha }\) of \(I_{2}\). Formally, the sets \(A_{1}(\mathcal {C}(a,I_{1}),I_{1})\) and \(A_{2}(\mathcal {C}(a,I_{2}),I_{2})\) satisfy \(|A_{2}(\mathcal {C}(a,I_{2}),I_{2})| \le |A_{1}(\mathcal {C}(a,I_{1}),I_{1})|\). Since the agents in these sets are allocated the same number of items by p and q, if there exists an agent in \(A_{2}(\mathcal {C}(a,I_{2}),I_{2})\) that admits a self loop in \(I_{2}\), there must exist an agent in \(A_{1}(\mathcal {C}(a,I_{1}),I_{1})\) that admits a self loop in \(I_{1}\). Second, we claim that q does not admit trading cycles in these layers. For the sake of contradiction, suppose there exists a layer \(i_{j}\) in \(I_{2}\), and t agents \(a_{1}',\ldots ,a_{t}' \in A_{2}\) that admit a trading cycle \((a_{1}',q(a_{1}'),\ldots ,a_{t}',q(a_{t}'))\) in \(Q_{i_{j}}\). By the construction of q, notice that there exist t agents \(a_{1},\ldots ,a_{t} \in A_{1}\), such that for each \(i \in [t]\), \(\mathcal {C}(a_{i},I_{1})\) = \(\mathcal {C}(a_{i}',I_{2})\), and \(q(a_{i}')=p(a_{i})\). Then, p admits the trading cycle \((a_{1},p(a_{1}),\ldots ,a_{t},p(a_{t}))\) in \(P_{i_{j}}\). This gives a contradiction to Proposition 1.

(\(\Leftarrow \)): Assume that there exists an \(\alpha \)-globally optimal assignment q for \(I_{2}\). Then there exist \(\alpha \) layers \(i_{1},\ldots ,i_{\alpha }\) in \(I_{2}\) in which q is pareto optimal. We define an assignment p for \(I_{1}\) by \( p(a) = {\left\{ \begin{array}{ll} q(a) &{} a \in A_{2}\\ b_{\emptyset } &{} \text {otherwise} \end{array}\right. }\), and we claim that p is pareto optimal in layers \(i_{1},\ldots ,i_{\alpha }\) in \(I_{1}\). By the construction of p, for each \(a_{1} \in A_{1} \setminus A_{2}\), there exists an agent \(a_{2} \in A_{2}\) such that \(\mathcal {C}(a_{1},I_{1}) = \mathcal {C}(a_{2},I_{2})\) and \(p(a_{1}) = q(a_{2})\). Namely, there exists a mapping f from agents in \(A_{1}\) to agents in \(A_{2}\) such that for each \(a_{1} \in A_{1}\), \(\mathcal {C}(a_{1},I_{1}) = \mathcal {C}(f(a_{1}),I_{2})\) and \(p(a_{1}) = q(f(a_{1}))\). If p admits a trading cycle \((a_{1},p(a_{1}), \ldots , a_{r},p(a_{r}))\) in some layer \(i_{j}\) of \(I_{1}\), then q admits the trading cycle \((f(a_{1}),q(f(a_{1})), \ldots , f(a_{r}),q(f(a_{r})))\) in layer \(i_{j}\) of \(I_{2}\). If p admits a self loop in layer \(i_{j}\) of \(I_{1}\) with agent \(a_{1} \in A_{1}\), then q admits a self loop with agent \(f(a_{1})\) in layer \(i_{j}\) of \(I_{2}\). Thus by Proposition 1, we conclude that p is \(\alpha \)-globally optimal in \(I_{1}\).   \(\square \)
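A sketch (ours) of the trimming step in the above proof, again in the illustrative data format from Sect. 2 (the agents are the keys shared by all layers):

```python
from collections import defaultdict

def trim_agent_classes(layers, m):
    """Kernelization of Theorem 3 (sketch): keep at most m + 1 agents from every
    agent class, where the class of an agent is the tuple of its preference lists
    over all layers (Definition 2)."""
    classes = defaultdict(list)
    for a in layers[0]:
        classes[tuple(tuple(profile[a]) for profile in layers)].append(a)
    kept = {a for members in classes.values() for a in members[: m + 1]}
    return [{a: lst for a, lst in profile.items() if a in kept} for profile in layers]
```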

Corollary 1

(of Theorems 1 and 3).  \(\alpha \)-Globally Optimal Assignment is solvable in time \(\mathcal {O}^{*}(((m!)^{\ell +1})!)\).

Theorem 4

\(\alpha \)-Globally Optimal Assignment is solvable in time \((nm)^{\mathcal {O}(m)}\). Thus, it is XP with respect to m.

Proof

We present a simple brute-force algorithm. The algorithm simply iterates over all subsets of items \(I' \subseteq I\). For each subset, it iterates over all subsets \(A' \subseteq A\) such that \(|A'|=|I'|\). For each \(a \notin A'\), the algorithm allocates \(b_\emptyset \), and it tries all possible \(|I'|!\) different ways to allocate the items in \(I'\) to the agents in \(A'\) (it skips allocations that allocate items that are not acceptable by their owners in more than \(\ell -\alpha + 1\) layers). The algorithm constructs the corresponding trading graphs, and verifies in polynomial time whether the current assignment is \(\alpha \)-globally optimal. Hence, the running time of the algorithm is \(\sum _{t=0}^{m} \left( {\begin{array}{c}m\\ t\end{array}}\right) \cdot \left( {\begin{array}{c}n\\ t\end{array}}\right) \cdot t! \cdot (n+m)^{\mathcal {O}(1)} \le 2^{m}\cdot n^{m} \cdot (n+m)^{\mathcal {O}(1)} = (nm)^{\mathcal {O}(m)}\), since \(\left( {\begin{array}{c}n\\ t\end{array}}\right) \cdot t! \le n^{t} \le n^{m}\) and \(\sum _{t=0}^{m}{\left( {\begin{array}{c}m\\ t\end{array}}\right) } = 2^{m}\).   \(\square \)

Before we continue with our next results, let us discuss a simple property that will help in many of our proofs.

Definition 3

Let (A, I, P) be an instance of the Assignment problem and suppose that \(P = \lbrace <_{a} \mid a \in A \rbrace \). We say that agents \(a_{1}, a_{2} \in A\) respect each other if there exists a linear order \(\vartriangleleft \) on a subset of I such that both \(<_{a_{1}} \subseteq \vartriangleleft \) and \(<_{a_{2}} \subseteq \vartriangleleft \).

Lemma 2

Let (A, I, P) be an instance of the Assignment problem such that there exist agents \(a_{1},\ldots ,a_{r} \in A\) where for each \(i,j \in [r]\), \(a_{i}\) and \(a_{j}\) respect each other. Then, for every assignment \(p : A \rightarrow I \cup \lbrace b_{\emptyset } \rbrace \), p does not admit a trading cycle among the agents \(a_{1},\ldots ,a_{r}\).

Proof

Towards a contradiction, suppose there exists an assignment p which admits a trading cycle \((a_{1},p(a_{1}),\ldots ,a_{r},p(a_{r}))\) whose agents pairwise respect each other. Then, there exists a linear order \(\vartriangleleft \) such that \(<_{a_{i}} \subseteq \vartriangleleft \) for each \(i \in [r]\). This implies that \(p(a_{1}) \vartriangleleft p(a_{2}) \vartriangleleft \cdots \vartriangleleft p(a_{r})\). Since \(p(a_{r}) <_{a_{r}} p(a_{1})\), we have that \(p(a_{r}) \vartriangleleft p(a_{1})\), a contradiction to \(\vartriangleleft \) being a linear order.   \(\square \)

We now prove that \(\varOmega ^{*}(k!)\) is a (tight) lower bound on the running time of any algorithm for \(\alpha \)-Globally Optimal Assignment under the ETH, even for larger parameters than n such as \(k= n + m + \alpha \) and \(k = n + m + (\ell - \alpha )\). So, the algorithm in Theorem 1 is optimal (in terms of running time).

Theorem 5

(*). Unless ETH fails, there does not exist an algorithm for \(\alpha \)-Globally Optimal Assignment with running time \(\mathcal {O}^{*}(2^{o(k \log {k})})\) where \(k=n+m+\alpha \) or \(k = n+m+(\ell - \alpha )\).

Proof (sketch)

We provide a proof sketch for the parameter \(k=n+m+\alpha \) (the proof for the second parameter is provided in the full version). We use the technique of linear parameter reduction (for more information, see the proposition by Cygan et al. [16] in the full version) from \(k \times k\) Permutation Clique to \(\alpha \)-Globally Optimal Assignment. In \(k \times k\) Permutation Clique, we are given a graph G where the vertices are elements of a \(k \times k\) table (\(V(G)=[k] \times [k]\)). The task is to decide whether there exists a \(k \times k\)-permutation clique in G, which is a clique of size k in G that contains exactly one vertex from each row and exactly one vertex from each column, i.e. there exists a permutation \(\pi \) on [k] such that the vertices of the clique are \((1,\pi (1)),\ldots ,(k,\pi (k))\). Lokshtanov et al. [25] proved that there is no \(\mathcal {O}^{*}(2^{o(k \log {k})})\)-time algorithm for \(k \times k\) Permutation Clique, unless ETH fails.

Let (G, k) be an instance of \(k \times k\) Permutation Clique. We create an agent \(a_{i}\) for each row \(i \in [k]\), and an item \(b_{j}\) for each column \(j \in [k]\). We construct an instance of \(\alpha \)-Globally Optimal Assignment with \(k^{2}\) layers, each corresponding to a row-column pair (i, j), containing the preference profile \(P_{(i,j)}\) defined as follows: (i) \(a_{i}\ \): \(\ b_j\) (ii) \(a_{r}\ \):\(\ \lbrace b_{q} \mid \lbrace (i,j),(r,q) \rbrace \in E(G),q \ne j \rbrace \) (sorted in ascending order by q) \(\ \ \forall r \in [k]\setminus \lbrace i \rbrace \).

We finally set \(\alpha = k\). We prove that there exists a \(k \times k\)-permutation clique in G if and only if there exists a k-globally optimal assignment for the instance.
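A sketch (ours) of this construction; the encoding of agents and items as strings and the edge representation are assumptions of the example:

```python
def permutation_clique_reduction(edges, k):
    """Build the k^2 layers of the reduction in Theorem 5 from a k x k Permutation
    Clique instance. `edges` is a set of frozensets {(i, j), (r, q)} over the vertex
    set [k] x [k]. Returns the layers (indexed by row-column pairs) and alpha = k."""
    layers = {}
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            profile = {f"a{i}": [f"b{j}"]}          # a_i accepts only b_j in layer (i, j)
            for r in range(1, k + 1):
                if r == i:
                    continue
                accepted = sorted(q for q in range(1, k + 1)
                                  if q != j and frozenset({(i, j), (r, q)}) in edges)
                profile[f"a{r}"] = [f"b{q}" for q in accepted]   # ascending order by q
            layers[(i, j)] = profile
    return layers, k
```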

\((\Rightarrow )\) Suppose there exists a permutation \(\pi \) on [k] such that \((1,\pi (1)),\ldots ,(k,\pi (k))\) form a clique in G. We define an assignment p by \(p(a_{i})=b_{\pi (i)}\) for each \(i \in [k]\) (each row agent is assigned to its corresponding column item). Observe that for each \(i \in [k]\), \(b_{\pi (i)}\) is acceptable by \(a_{i}\) in \(P_{(i,\pi (i))}\) and in all profiles \(P_{(j,\pi (j))}\) such that \(j \in [k] \setminus \lbrace i \rbrace \) since there is an edge between \((i,\pi (i))\) and each \((j,\pi (j))\). Moreover, p does not admit self loops in any \(P_{(j,\pi (j))}\) because all the items are allocated. Since each preference list is sorted in ascending order of the item indices, all the agents respect each other in each preference profile and, by Lemma 2, p does not admit trading cycles in any layer.

\((\Leftarrow )\) Suppose there exists a k-globally optimal assignment p for the constructed instance. Note that if p is pareto optimal in some profile \(P_{(i,j)}\), it must satisfy \(p(a_{i})=b_{j}\), as otherwise, \(a_{i}\) would admit a self loop. Hence, we have that for each \(i \in [k]\), p is pareto optimal in at most one profile among \(P_{(i,1)},\ldots ,P_{(i,k)}\) and in at most one profile among \(P_{(1,i)},\ldots ,P_{(k,i)}\). Since \(\alpha = k\), we have that there exists a permutation \(\pi \) on [k] such that p is pareto optimal in \(P_{(i,\pi (i))}\) for each \(i \in [k]\). It can be proved that \(\lbrace (i,\pi (i)) \mid i \in [k]\rbrace \) is the vertex set of a \(k \times k\)-permutation clique in G.

It holds that \(n + m+ \alpha = \mathcal {O}(k)\). Thus, by Cygan et al. [16], we conclude that there is no \(\mathcal {O}^{*}(2^{o(k \log {k})})\)-time algorithm for \(\alpha \)-Globally Optimal Assignment, unless ETH fails.   \(\square \)

Theorem 6

(*). \(\alpha \)-Globally Optimal Assignment is W[1]-hard for the parameters \(m+\alpha \) and \(m+(\ell - \alpha )\).

Proof (sketch)

We provide a proof sketch for \(m+(\ell - \alpha )\) (the proof for the parameter \(m+\alpha \) is provided in the full version). We present a parameterized (and also polynomial) reduction from the W[1]-hard problem Multicolored Independent Set to \(\alpha \)-Globally Optimal Assignment. The input of Multicolored Independent Set consists of an undirected graph \(G=(V,E)\), and a coloring \(c:\ V \rightarrow [k]\) that colors the vertices in V with k colors. The task is to decide whether G admits a multicolored independent set of size k, which is an independent set \(V' \subseteq V\) that satisfies \(\lbrace c(v') \mid v' \in V' \rbrace = [k]\) and \(|V'| = k\).

Given an instance \((G=(V,E),c)\), assume that \(V= \lbrace v_{1},\ldots ,v_{n} \rbrace \). We construct an instance of \(\alpha \)-Globally Optimal Assignment with the agent set \(A = \lbrace a_{1},\ldots ,a_{n} \rbrace \) and the item set \(I = \lbrace b_{1},\ldots ,b_{k} \rbrace \), consisting of \(\ell = n+1\) layers. Informally, the agents that will allocate the items from I in an \(\ell \)-globally optimal assignment will correspond to vertices that form a multicolored independent set in G. The first layer enforces each agent to allocate either the item that corresponds to its color or \(b_{\emptyset }\), and it is defined by: \(a_{i}\ \): \(\ b_{c(v_{i})}\ \forall i \in [n]\). For each \(i \in [n]\), the goal of layer \(1+i\) is to admit trading cycles if both \(v_{i}\) and one of its neighbors are included in the independent set (this happens when both of their agents allocate items). It is defined as follows:

(i) \(a_{i}\ \): \(\ \lbrace b_{j} \mid j \in [k] \rbrace {\setminus } \lbrace b_{c(v_{i})} \rbrace \) (ordered arbitrarily) \(>b_{c(v_{i})}\) (ii) \(a_{j}\ \): \(\ b_{c(v_{j})}\ \ \forall j \in [n] {\setminus } \lbrace i \rbrace \) such that (1) \(\lbrace v_{i},v_{j} \rbrace \in E\) and \(c(v_{j}) = c(v_{i})\) or (2) \(\lbrace v_{i},v_{j} \rbrace \notin E\) (iii) \(a_{j}\ \): \(\ b_{c(v_{i})} > b_{c(v_{j})}\ \ \forall j \in [n] {\setminus } \lbrace i \rbrace \) such that \(\lbrace v_{i},v_{j} \rbrace \in E\) and \(c(v_{j}) \ne c(v_{i})\) . We finally set \(\alpha = \ell \). We claim that G admits a multicolored independent set of size k if and only if there exists an \(\ell \)-globally optimal assignment for the constructed instance.
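A sketch (ours) of this construction (vertices are numbered \(1,\ldots ,n\); `colour` and the edge representation are assumptions of the example):

```python
def mis_reduction(n, k, edges, colour):
    """Build the n + 1 layers of the reduction in Theorem 6 for the parameter
    m + (l - alpha). `colour` maps a vertex in 1..n to a colour in 1..k and
    `edges` is a set of frozensets {u, v}. Returns the layers and alpha = n + 1."""
    item = {c: f"b{c}" for c in range(1, k + 1)}
    # layer 1: every agent accepts only the item of its own colour
    layers = [{f"a{i}": [item[colour[i]]] for i in range(1, n + 1)}]
    for i in range(1, n + 1):                               # layer 1 + i
        others = [item[c] for c in range(1, k + 1) if c != colour[i]]
        profile = {f"a{i}": others + [item[colour[i]]]}     # own colour ranked last
        for j in range(1, n + 1):
            if j == i:
                continue
            if frozenset({i, j}) in edges and colour[j] != colour[i]:
                profile[f"a{j}"] = [item[colour[i]], item[colour[j]]]
            else:
                profile[f"a{j}"] = [item[colour[j]]]
        layers.append(profile)
    return layers, n + 1
```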

(\(\Rightarrow \)): Suppose that G admits a multicolored independent set of size k, \(V' = \lbrace v_{i_{1}},\ldots ,v_{i_{k}} \rbrace \). Define an assignment p by \(p(a_{i_{j}}) = b_{c(v_{i_{j}})}\) for each \(j \in [k]\), and \(p(a_{i})=b_{\emptyset }\) for each \(i \notin \lbrace i_{1},\ldots ,i_{k} \rbrace \). Observe that for each \(j \in [k]\), \(p(a_{i_{j}})\) is acceptable by \(a_{i_{j}}\) in each layer, and no layer admits self loops since all the items are allocated. Moreover, notice that p is pareto optimal in the first layer since no trading cycles can be performed. We prove that p is pareto optimal in layer \(1+i\) for each \(i \in [n]\). Towards a contradiction, suppose that there exists \(i \in [n]\) such that p is not pareto optimal in layer \(1+i\). Observe that the only possible trading cycle in this layer consists of the agent \(a_{i}\) and an agent \(a_{r}\) such that \(p(a_{i}) = b_{c(v_{i})}\), \(c(v_{i}) \ne c(v_{r})\), \(\lbrace v_{i},v_{r} \rbrace \in E\), and \(p(a_{r}) = b_{c(v_{r})}\). Then, we have that \(v_{i},v_{r} \in V'\), a contradiction.

(\(\Leftarrow \)): Provided in the full version.   \(\square \)

4 NP-Hardness

Theorem 7

(*). For any \(2 \le \alpha \le \ell \), \(\alpha \)-Globally Optimal Assignment is NP-hard.

Proof (sketch)

We extend a polynomial reduction by Aziz et al. [6] from the Serial Dictatorship Feasibility problem, which was proved to be NP-hard by Saban and Sethuraman [28]. In the Serial Dictatorship Feasibility problem, the input is a tuple (A, I, P, a, b) where A is a set of n agents, I is a set of n items, P is the preference profile in which each agent has a complete linear order on the items, \(a \in A\), and \(b \in I\). The task is to decide whether there exists a permutation for which serial dictatorship (defined in Sect. 2) allocates item b to agent a. Given such (A, I, P, a, b), Aziz et al. [6] constructed two preference profiles, \(P_{1}\) and \(P_{2}\), such that (A, I, P, a, b) is a Yes-instance if and only if there exists an assignment that is pareto optimal in both \(P_{1}\) and \(P_{2}\).

We add \(\ell - \alpha \) new items \(c_{1},\ldots ,c_{\ell - \alpha }\) and we define \(I' = I \cup \lbrace c_{1},\ldots ,c_{\ell - \alpha } \rbrace \). We construct an instance of \(\alpha \)-Globally Optimal Assignment over A and \(I'\), consisting of \(\ell \) layers. The first two layers are \(P_{1}\) and \(P_{2}\), the next \(\alpha - 2\) layers are copies of \(P_{1}\), and the last \(\ell - \alpha \) layers are \(P_{1}',\ldots ,P_{\ell - \alpha }'\), where for each \(i \in [\ell - \alpha ]\), \(P_{i}'\) is defined as follows: (i) \(a\ \): \(\ c_{i}\) (ii) \(a'\ :\ \emptyset \ \forall a' \in A \setminus \lbrace a \rbrace \). Notice that the only pareto optimal assignment for \(P_{i}'\) is the assignment that allocates \(c_{i}\) to a, and \(b_{\emptyset }\) to each \(a' \in A {\setminus } \lbrace a \rbrace \). Using this observation, we prove that an assignment is \(\alpha \)-globally optimal for the constructed instance if and only if it is pareto optimal in both \(P_{1}\) and \(P_{2}\).   \(\square \)

We define three constructions of preference profiles given an instance of 3-SAT and we consider their connections to the satisfiability of the formula. We will rely on these connections to design a polynomial reduction from 3-SAT to \(\alpha \)-Globally Optimal Assignment that shows that \(\alpha \)-Globally Optimal Assignment is para-NP-hard with respect to \(\ell +d\). We will also rely on these results in Sect. 5 to prove that the problem is unlikely to admit polynomial kernels with respect to \(n + m+\alpha \), \(n +m+(\ell - \alpha )\), and \(m+\ell \).

Let \(n,m \in \mathbb {N}\) be positive integers. Denote the agent set \(A(m,n)= \lbrace a_{i,j},\overline{a_{i,j}} \mid i\in [m], j \in [n] \rbrace \), and the item set \(I(m,n)= \lbrace b_{i,j},\overline{b_{i,j}} \mid i\in [m], j \in [n] \rbrace \). We provide two preference profiles over A(m, n) and I(m, n): \(P_{1}(m,n)\) and \(P_{2}(m,n)\). Intuitively, given a 3-SAT instance with n variables and m clauses, the way the agents and the items are assigned to each other in an assignment that is pareto optimal in both \(P_{1}(m,n)\) and \(P_{2}(m,n)\) will encode a boolean assignment for the variable set of the instance. \(P_{1}(m,n)\) is defined as follows: \(\forall i \in [m], j \in [n]\): (i) \(a_{i,j}\) : \(b_{i,j}>\overline{b_{i,j}}\) (ii) \(\overline{a_{i,j}}\) : \(b_{i,j}>\overline{b_{i,j}}\). \(P_{2}(m,n)\) is defined as follows:

  • \(\forall j \in [n]\): (i) \(a_{m,j}\) : \(b_{m,j}>b_{m-1,j}>\overline{b_{m,j}}\) (ii) \(\overline{a_{m,j}}\ \): \(\ b_{m,j}>b_{m-1,j}>\overline{b_{m,j}}\)

  • \(\forall i \in \lbrace 2,\ldots ,m-1 \rbrace \), \( j \in [n]\): (i) \(a_{i,j}\) : \(b_{i-1,j}>\overline{b_{i,j}}>\overline{b_{i+1,j}}>b_{i,j}\) (ii) \(\overline{a_{i,j}}\) : \(b_{i-1,j}>\overline{b_{i,j}}>\overline{b_{i+1,j}}>b_{i,j}\)

  • \(\forall j \in [n]\): (i) \(a_{1,j}\) : \(\overline{b_{1,j}}>\overline{b_{2,j}}>b_{1,j}\) (ii) \(\overline{a_{1,j}}\) : \(\overline{b_{1,j}}>\overline{b_{2,j}}>b_{1,j}\)

Claim 1

(*). An assignment \(p : A(m,n) \rightarrow I(m,n) \cup \lbrace b_{\emptyset } \rbrace \) is pareto optimal in \(P_{1}(m,n)\) if and only if \(\lbrace p(a_{i,j}),p(\overline{a_{i,j}}) \rbrace = \lbrace b_{i,j},\overline{b_{i,j}} \rbrace \) for each \(i \in [m]\) and \(j \in [n]\).

We denote \(P_{j}^{\mathrm {true}}=\lbrace (a_{i,j},b_{i,j}) , (\overline{a_{i,j}},\overline{b_{i,j}}) \mid i \in [m] \rbrace \), and \(P_{j}^{\mathrm {false}}=\lbrace (a_{i,j},\overline{b_{i,j}}) , (\overline{a_{i,j}},b_{i,j}) \mid i \in [m] \rbrace \). Intuitively, \(P_{j}^{\mathrm {true}}\) and \(P_{j}^{\mathrm {false}}\) will correspond to setting the variable \(x_{j}\) to true or false, respectively.

Claim 2

(*). An assignment \(p : A(m,n) \rightarrow I(m,n) \cup \lbrace b_{\emptyset } \rbrace \) is pareto optimal in both \(P_{1}(m,n)\) and \(P_{2}(m,n)\) if and only if for each \(j \in [n]\), either \(P_{j}^{\mathrm {true}} \subseteq p\) or \(P_{j}^{\mathrm {false}} \subseteq p\).

Proof (sketch)

(\(\Rightarrow \)): Assume that p is pareto optimal in both \(P_{1}(m,n)\) and \(P_{2}(m,n)\). Towards a contradiction, suppose that there exists \(j \in [n]\) satisfying that both \(P_{j}^{\mathrm {true}} \nsubseteq p\) and \(P_{j}^{\mathrm {false}} \nsubseteq p\). By Claim 1, there exist \(i_{1},i_{2} \in [m]\) such that \(i_{1}<i_{2}\), satisfying that either (1) \(p(a_{i_{1},j})=b_{i_{1},j}\) and \(p(a_{i_{2},j})=\overline{b_{i_{2},j}}\), or (2) \(p(a_{i_{1},j})=\overline{b_{i_{1},j}}\) and \(p(a_{i_{2},j})=b_{i_{2},j}\). We have that there must exist \(i_{1}\le i < i_{2}\) such that either (1) \(p(a_{i,j})=b_{i,j}\) and \(p(a_{i+1,j})=\overline{b_{i+1,j}}\), or (2) \(p(a_{i,j})=\overline{b_{i,j}}\) and \(p(a_{i+1,j})=b_{i+1,j}\). This implies that p admits the trading cycles \((a_{i,j},b_{i,j},a_{i+1,j},\overline{b_{i+1,j}})\) or \((\overline{a_{i,j}},b_{i,j},\overline{a_{i+1,j}},\overline{b_{i+1,j}})\) in \(P_{2}(m,n)\), a contradiction.

(\(\Leftarrow \)): Assume that for each \(j \in [n]\), either \(P_{j}^{\mathrm {true}} \subseteq p\) or \(P_{j}^{\mathrm {false}} \subseteq p\). By Claim 1, p is pareto optimal in \(P_{1}(m,n)\). Towards a contradiction, suppose that p admits a trading cycle in \(P_{2}(m,n)\). By the construction of \(P_{2}(m,n)\), every possible trading cycle in \(P_{2}(m,n)\) has one of the forms: (1) \((a_{i,j},\overline{b_{i,j}},a_{i-1,j},b_{i-1,j})\) or (2) \((\overline{a_{i,j}},\overline{b_{i,j}},\overline{a_{i-1,j}},b_{i-1,j})\), where \(i \in \lbrace 2,\ldots ,m \rbrace \) and \(j \in [n]\). Then, there exist \(j \in [n]\) and \(i \in \lbrace 2,\ldots ,m \rbrace \) such that either (1) \(p(a_{i,j})=\overline{b_{i,j}}\) and \(p(a_{i-1,j})=b_{i-1,j}\), or (2) \(p(a_{i,j})=b_{i,j}\) and \(p(a_{i-1,j})=\overline{b_{i-1,j}}\) (using Claim 1 in case (2)). Thus, both \(P_{j}^{\mathrm {true}} \nsubseteq p\) and \(P_{j}^{\mathrm {false}} \nsubseteq p\), a contradiction. Finally, since all the items are allocated (Claim 1), p cannot admit self loops in \(P_{2}(m,n)\), and hence p is pareto optimal in \(P_{2}(m,n)\) by Proposition 1.   \(\square \)

Let \(D = (\mathcal {X},\mathcal {C})\) be an instance of 3-SAT where \(\mathcal {X}= \lbrace x_{1},\ldots ,x_{n} \rbrace \) is the set of variables, and \(\mathcal {C}= \lbrace C_{1},\ldots ,C_{m} \rbrace \) is the set of clauses, each of size 3. In order to construct the third preference profile \(P_{3}(D)\), order the literals in each clause arbitrarily, such that \(C_{i}=\ell _{i}^{1} \,\vee \, \ell _{i}^{2} \,\vee \, \ell _{i}^{3}\) for each \(i \in [m]\). The third preference profile is responsible for the satisfiability of the formula. We define \(\mathrm {ind}_{D}(i,j)\) as the index of the variable which appears in the j-th literal in \(C_{i}\) for each \(j \in [3]\), and we define \( \mathrm {b}_{D}(i, j) = {\left\{ \begin{array}{ll} \overline{b_{i,\mathrm {ind}_{D}(i,j)}} &{} \ell _{i}^{j} \text { is negative}\\ b_{i,\mathrm {ind}_{D}(i,j)} &{} \ell _{i}^{j} \text { is positive} \end{array}\right. } \). Intuitively, when \(a_{i,\mathrm {ind}_{D}(i,j)}\) gets \(\mathrm {b}_{D}(i, j)\) and \(\overline{a_{i,\mathrm {ind}_{D}(i,j)}}\) gets \(\overline{\mathrm {b}_{D}(i, j)}\), it means that \(\ell _{i}^{j}\) is “satisfied”. Define the preference profile \(P_{3}(D)\) as follows (a code sketch of all three constructions is given after the list below):

  • \(\forall i \in [m]\): (i) \(a_{i,\mathrm {ind}_{D}(i,3)}\): \(\mathrm {b}_{D}(i,3)>\overline{\mathrm {b}_{D}(i,2)}>\overline{\mathrm {b}_{D}(i,3)}\)

    (ii) \(a_{i,\mathrm {ind}_{D}(i,2)}\ \): \(\ \mathrm {b}_{D}(i,2)>\overline{\mathrm {b}_{D}(i,1)}>\overline{\mathrm {b}_{D}(i,2)}\)

    (iii) \(a_{i,\mathrm {ind}_{D}(i,1)}\ \): \(\ \mathrm {b}_{D}(i,1)>\overline{\mathrm {b}_{D}(i,3)}>\overline{\mathrm {b}_{D}(i,1)}\)

    (iv) \(\overline{a_{i,\mathrm {ind}_{D}(i,r)}}\ \): \(\ \mathrm {b}_{D}(i,r)>\overline{\mathrm {b}_{D}(i,r)} \ \forall r \in [3]\)

  • \(\forall i \in [m], j \in [n]\) such that \(x_{j}\) does not appear in \(C_{i}\): (i) \(a_{i,j}\ \): \(\ b_{i,j}>\overline{b_{i,j}}\) (ii) \(\overline{a_{i,j}}\ \): \(\ b_{i,j}>\overline{b_{i,j}}\)
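To make the three constructions concrete, the following sketch (ours) builds \(P_{1}(m,n)\), \(P_{2}(m,n)\), and \(P_{3}(D)\); it assumes \(m \ge 2\) and that every clause contains three literals over three distinct variables, a literal being \(+j\) for \(x_{j}\) and \(-j\) for its negation:

```python
def build_profiles(clauses, n):
    """Return the preference profiles P1(m, n), P2(m, n), and P3(D) for a 3-SAT
    instance D with variables x_1..x_n and clause list `clauses` (a sketch, ours).
    Agents and items are encoded as tuples: ('a', i, j) / ('abar', i, j) stand for
    a_{i,j} / its barred copy, and ('b', i, j) / ('bbar', i, j) for b_{i,j} / its bar."""
    m = len(clauses)
    b = lambda i, j: ("b", i, j)
    bbar = lambda i, j: ("bbar", i, j)

    def pair(i, j, lst):          # a_{i,j} and its barred copy get the same list
        return {("a", i, j): list(lst), ("abar", i, j): list(lst)}

    p1, p2 = {}, {}
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            p1.update(pair(i, j, [b(i, j), bbar(i, j)]))
            if i == m:
                p2.update(pair(i, j, [b(m, j), b(m - 1, j), bbar(m, j)]))
            elif i == 1:
                p2.update(pair(i, j, [bbar(1, j), bbar(2, j), b(1, j)]))
            else:
                p2.update(pair(i, j, [b(i - 1, j), bbar(i, j), bbar(i + 1, j), b(i, j)]))

    def b_D(i, lit):              # item meaning "literal lit of clause C_i is satisfied"
        return bbar(i, -lit) if lit < 0 else b(i, lit)
    def opp(itm):                 # the "opposite" (overlined) copy of an item
        return (("bbar" if itm[0] == "b" else "b"),) + itm[1:]

    p3 = {}
    for i, (l1, l2, l3) in enumerate(clauses, start=1):
        p3[("a", i, abs(l3))] = [b_D(i, l3), opp(b_D(i, l2)), opp(b_D(i, l3))]
        p3[("a", i, abs(l2))] = [b_D(i, l2), opp(b_D(i, l1)), opp(b_D(i, l2))]
        p3[("a", i, abs(l1))] = [b_D(i, l1), opp(b_D(i, l3)), opp(b_D(i, l1))]
        for lit in (l1, l2, l3):
            p3[("abar", i, abs(lit))] = [b_D(i, lit), opp(b_D(i, lit))]
        for j in range(1, n + 1):
            if j not in {abs(l1), abs(l2), abs(l3)}:
                p3.update(pair(i, j, [b(i, j), bbar(i, j)]))
    return p1, p2, p3
```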

Claim 3

(*). An assignment \(p : A(m,n) \rightarrow I(m,n) \cup \lbrace b_{\emptyset } \rbrace \) is pareto optimal in \(P_{1}(m,n), P_{2}(m,n)\), and \(P_{3}(D)\) if and only if: (1) for each \(j \in [n]\), either \(P_{j}^{\mathrm {true}} \subseteq p\) or \(P_{j}^{\mathrm {false}} \subseteq p\); and (2) for each clause \(C_{i}=\ell _{i}^{1} \vee \ell _{i}^{2} \vee \ell _{i}^{3} \in \mathcal {C}\), there exists at least one \(j \in [3]\) such that \(p(a_{i,\mathrm {ind}_{D}(i,j)}) = \mathrm {b}_{D}(i,j)\).

Proof (sketch)

(\(\Rightarrow \)): Assume that p is pareto optimal in \(P_{1}(m,n), P_{2}(m,n)\) and \(P_{3}(D)\). By Claim 2, p satisfies the first condition. Observe that the only possible trading cycles in \(P_{3}(D)\) are of the form \((a_{i,\mathrm {ind}_{D}(i,3)},\overline{\mathrm {b}_{D}(i,3)}, a_{i,\mathrm {ind}_{D}(i,2)},\overline{\mathrm {b}_{D}(i,2)}, a_{i,\mathrm {ind}_{D}(i,1)},\overline{\mathrm {b}_{D}(i,1)})\). Then by Claim 1, for each \(i \in [m]\), there must exist \(j \in [3]\) such that \(p(a_{i,\mathrm {ind}_{D}(i,j)}) = \mathrm {b}_{D}(i,j)\). The opposite direction is provided in the full version.   \(\square \)

Lemma 3

(*). An instance \(D = (\mathcal {X},\mathcal {C})\) of 3-SAT with \(|\mathcal {X}|=n\) and \(|\mathcal {C}|=m\) is a Yes-instance if and only if there exists an assignment \(p : A(m,n) \rightarrow I(m,n) \cup \lbrace b_{\emptyset } \rbrace \) that is pareto optimal in \(P_{1}(m,n)\), \(P_{2}(m,n)\), and \(P_{3}(D)\).

Theorem 8

(*). 3-SAT is polynomial-time reducible to \(\alpha \)-Globally Optimal Assignment where \(\alpha = \ell = 3\) and \(d = 3\), or where \(\alpha = \ell = 4\) and \(d = m\).

5 Non-existence of Polynomial Kernels

In this section, we prove (using three cross-compositions) that \(\alpha \)-Globally Optimal Assignment is unlikely to admit polynomial kernels with respect to \(n + m+\alpha \), \(n + m+(\ell - \alpha )\), and \(m+\ell \).

Theorem 9

(*). There does not exist a polynomial kernel for \(\alpha \)-Globally Optimal Assignment with respect to \(n + m +\alpha \), \(n + m +(\ell - \alpha )\), and \(m + \ell \) unless NP \(\subseteq \) coNP/poly.

Proof (sketch)

We provide a proof sketch for the parameter \(m+\ell \) (the proofs for the other parameters are provided in the full version). We provide a cross-composition from 3-SAT to \(\alpha \)-Globally Optimal Assignment. Given instances of 3-SAT \(D_{0}=(\mathcal {X}_{0},\mathcal {C}_{0}),\ldots ,D_{t-1}=(\mathcal {X}_{t-1},\mathcal {C}_{t-1})\) of the same size \(n \in \mathbb {N}\) for some \(t \in \mathbb {N}\), we first modify each instance \(D_{i}\) to have \(\mathcal {X}_{i} = \lbrace x_{1}, \ldots , x_{n} \rbrace \) and \(|\mathcal {C}_{i}|=n\). We define an agent set \(A_{i}(n,n)=\lbrace a_{r,j}^{i}, \overline{a_{r,j}^{i}} \mid r,j \in [n] \rbrace \) for each \(i \in \lbrace 0 , \ldots , t-1 \rbrace \). The constructed instance is defined over the agent set \(A= \bigcup _{i=0}^{t-1}A_{i}(n,n)\) and the item set \(I= I(n,n)\) (defined in Sect. 4); and it consists of \(2\lceil \log {t} \rceil + 2\) layers. Intuitively, the goal of the first \(2 \lceil \log {t} \rceil \) layers is to enforce each \(\alpha \)-globally optimal assignment to allocate all the items only to agents that correspond to the same Yes-instance (if one exists). For each \(i \in [\lceil \log {t} \rceil ]\), the preference profile in layer i (or \(\lceil \log {t} \rceil + i\)) requires an assignment to assign the items in I only to agents whose corresponding instance is \(D_{j}\) such that the i-th bit in the binary representation of j is 0 (or 1), and to assign \(b_{\emptyset }\) to all other agents. These layers are constructed as a composition of \(P_{1}(n,n)\) over \(A_{r}(n,n)\) and I(n, n) for every \(r \in \lbrace 0 ,\ldots , t-1 \rbrace \) such that the i-th bit in the binary representation of r is 0 (or 1), together with empty preferences for all the other agents. Layer \(2 \lceil \log {t} \rceil + 1\) is constructed as a composition of the profile \(P_{2}(n,n)\) over \(A_{i}(n,n)\) and I(n, n) for each \(i \in \lbrace 0 , \ldots , t-1 \rbrace \), and the last layer is a composition of the profiles \(P_{3}(D_{i})\) over \(A_{i}(n,n)\) and I(n, n) for each \(i \in \lbrace 0 , \ldots , t-1 \rbrace \). We finally set \(\alpha = \lceil \log {t} \rceil + 2\). Notice that every assignment can be pareto optimal in at most one among layers i and \(\lceil \log {t} \rceil + i\) for each \(i \in [\lceil \log {t} \rceil ]\). Then, each \(\alpha \)-globally optimal assignment is pareto optimal in exactly \(\lceil \log {t} \rceil \) layers among the first \(2\lceil \log {t} \rceil \) layers, and must be pareto optimal in the last two layers. Therefore, we have that each such assignment p “encodes” some \(i \in \lbrace 0 , \ldots , t-1 \rbrace \) in the first \(2\lceil \log {t} \rceil \) layers (if it is pareto optimal in layer j or \(\lceil \log {t} \rceil + j\), then the j-th bit of i is 0 or 1, respectively). The optimality in the last two layers implies that p is pareto optimal in both \(P_{2}(n,n)\) and \(P_{3}(D_{i})\) over \(A_{i}(n,n)\) and I(n, n). Thus, by Lemma 3, \(D_{i}\) is a Yes-instance. The opposite direction follows from Lemma 3 as well.   \(\square \)
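A sketch (ours) of the composed instance, reusing `build_profiles` from Sect. 4; it assumes that each instance has already been modified to have exactly n clauses, that agents of instance r are tagged with r, and that the item set \(I(n,n)\) is shared by all instances:

```python
def cross_composition(instances, n):
    """Compose t 3-SAT instances (each with n variables and n clauses) into one
    instance of alpha-Globally Optimal Assignment with 2*ceil(log t) + 2 layers,
    following the construction in Theorem 9 for the parameter m + l."""
    t = len(instances)
    bits = max(1, (t - 1).bit_length())      # ceil(log2 t); at least 1 so t = 1 is well defined
    tag = lambda profile, r: {(r,) + a: lst for a, lst in profile.items()}
    built = [build_profiles(clauses, n) for clauses in instances]   # (P1, P2, P3) per instance
    all_agents = [(r,) + a for r in range(t) for a in built[r][0]]

    def bit_layer(i, value):
        # instances whose i-th bit equals `value` get P1(n, n); all other agents get empty lists
        layer = {a: [] for a in all_agents}
        for r in range(t):
            if (r >> i) & 1 == value:
                layer.update(tag(built[r][0], r))
        return layer

    layers = [bit_layer(i, 0) for i in range(bits)] + [bit_layer(i, 1) for i in range(bits)]
    layers.append({a: lst for r in range(t) for a, lst in tag(built[r][1], r).items()})  # P2 layer
    layers.append({a: lst for r in range(t) for a, lst in tag(built[r][2], r).items()})  # P3 layer
    return layers, bits + 2                  # alpha = ceil(log t) + 2
```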

6 Conclusion and Future Research

In this paper, we introduced a new variant of the Assignment problem where each agent is equipped with multiple incomplete preference lists, and we defined the notion of global optimality, which naturally extends pareto optimality. We considered several natural parameters, and presented a comprehensive picture of the parameterized complexity of the problem with respect to them.

The results show that the problem of finding an \(\alpha \)-globally optimal assignment is, in general, computationally hard, but that it admits more positive results when the parameter depends on \(n = \mathrm {\# agents}\) (and \(\alpha \) or \(\ell \)) than when it depends on \(m = \mathrm {\# items}\) (and \(\alpha \) or \(\ell \)). We proved that the problem admits an XP algorithm with respect to m, but is unlikely to admit one with respect to \(\ell + d\) and \(\ell + (m - d)\). We provided an \(\mathcal {O}^{*}(n!)\)-time algorithm and an exponential kernel with respect to \(m + \ell \), showing that the problem is FPT with respect to n and to \(m + \ell \), respectively. In addition, we proved that \(\mathcal {O}^{*}(k!)\) is essentially a tight lower bound on the running time under ETH for even larger parameters than n such as \(k = n + m + \alpha \) and \(k = n + m + (\ell - \alpha )\). Moreover, we proved that the problem admits a polynomial kernel with respect to \(n + \ell \), but is unlikely to admit one with respect to all the other parameters that we considered. We also proved that the problem is W[1]-hard with respect to \(m + \alpha \) and \(m + (\ell - \alpha )\). However, two questions are still open: (1) Is it possible to obtain a (not necessarily polynomial) kernel for \(m + \ell \) of size substantially smaller than \(\mathcal {O}((m!)^{\ell +1})\)? (2) Is it possible to obtain a better running time than \(\mathcal {O}^{*}(k!)\) for \(k = n + m + \ell \)?

Continuing our research, it might be interesting to study “weaker” definitions of optimality, for example: finding an assignment such that every group of k agents has some \(\alpha \) layers where they (1) do not admit trading cycles; (2) are not parts of larger trading cycles; or (3) do not admit the same trading cycle. Verification variants of these problems can also be suggested, i.e. given an assignment p, check whether it is optimal.

Another direction is to study the particular case where the preferences of the agents are complete since it may provide more positive algorithmic results under some parameterizations. In addition, notice that a solution to \(\alpha \)-Globally Optimal Assignment can be seen as an approximation to the “optimal” solution in which an assignment is pareto optimal in a maximum number of layers (this is similar to the Vertex Cover problem, where the parameter k is somewhat an “approximation” to the size of the minimum vertex cover). In this approach, we can define the problem as an approximation problem and study it from the perspective of parameterized approximability.

In this paper, we considered the basic “unweighted” model of the problem (since this is the first study of this kind). Another direction is to consider a weighted version in which some criteria (layers) may have higher importance than others. A straightforward way to model this is by having several copies of layers. However, if weights are high and varied, this might lead to inefficiency.