Abstract
We study the problem of finding maximum weakly stable matchings when preference lists are incomplete and contain one-sided ties of bounded length. We show that if the tie length is at most L, then it is possible to achieve an approximation ratio of \(1 + (1 - \frac {1}{L})^{L}\). We also show that the same ratio is an upper bound on the integrality gap, which matches the known lower bound. In the case where the tie length is at most 2, our result implies an approximation ratio and integrality gap of \(\frac {5}{4}\), which matches the known UG-hardness result.
1 Introduction
The stable matching model of Gale and Shapley [3] involves a two-sided market in which the agents are typically called men and women. Each agent has ordinal preferences over the agents of the opposite sex. A matching is said to be stable if no man and woman prefer each other to their partners. Stable matchings always exist and can be computed efficiently by the proposal algorithm of Gale and Shapley. Their algorithm is also applicable when the preference lists are incomplete, that is, when agents are allowed to omit from their preference lists any unacceptable agent of the opposite sex. If ties are allowed in the preference lists, the notion of stability can be generalized in several ways [4]. This paper focuses on weakly stable matchings, which always exist and can be obtained by invoking the Gale-Shapley algorithm after breaking all the ties arbitrarily. When incomplete lists are absent, every weakly stable matching is a maximum matching and hence has the same size. When ties are absent, the Rural Hospital Theorem guarantees that all stable matchings have the same size [5, 6]. However, when both ties and incomplete lists are present, weakly stable matchings can vary in size.
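The proposal (deferred acceptance) algorithm mentioned above can be sketched as follows for strict, possibly incomplete lists; any ties are assumed to have been broken arbitrarily beforehand, which is how a weakly stable matching is obtained in the presence of ties. The instance data and identifiers are illustrative, not from the paper.

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance with strict, possibly incomplete lists.

    men_prefs[i]  : list of acceptable women, most preferred first.
    women_prefs[j]: list of acceptable men, most preferred first.
    Ties, if any, are assumed to have been broken arbitrarily beforehand.
    """
    rank = {j: {i: r for r, i in enumerate(ps)} for j, ps in women_prefs.items()}
    next_idx = {i: 0 for i in men_prefs}           # next woman on i's list
    partner_of_woman = {}                          # tentative matching: j -> i
    free = deque(men_prefs)
    while free:
        i = free.popleft()
        if next_idx[i] >= len(men_prefs[i]):
            continue                               # i has exhausted his list
        j = men_prefs[i][next_idx[i]]
        next_idx[i] += 1
        if i not in rank.get(j, {}):               # i is unacceptable to j
            free.append(i)
        elif j not in partner_of_woman:
            partner_of_woman[j] = i                # j tentatively accepts i
        elif rank[j][i] < rank[j][partner_of_woman[j]]:
            free.append(partner_of_woman[j])       # j trades up to i
            partner_of_woman[j] = i
        else:
            free.append(i)                         # j rejects i
    return {i: j for j, i in partner_of_woman.items()}

# A tiny illustrative instance (hypothetical names):
men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": ["m1", "m2"], "w2": ["m1"]}
print(gale_shapley(men, women))  # → {'m1': 'w1'}
```

In this instance m2 is left unmatched because his only acceptable partner, w1, prefers m1; the result is nonetheless stable.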
In this paper, we study the problem of finding large weakly stable matchings. Our main result is a polynomial-time algorithm that achieves an approximation ratio of \({1 + (1 - \frac {1}{L})^{L}}\) for maximum stable matching with one-sided ties and incomplete lists where the lengths of the ties are at most L. Since \((1-\frac {1}{L})^{L}\leq \frac {1}{e}\), our algorithm achieves a \(1 + \frac {1}{e}\) approximation ratio for one-sided ties with unbounded lengths.
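The behavior of the ratio \(1 + (1 - \frac{1}{L})^{L}\) can be checked numerically; the snippet below (illustrative only) evaluates it for a few tie lengths L and compares it with the limiting value \(1 + \frac{1}{e}\).

```python
import math

def ratio(L):
    # Approximation ratio 1 + (1 - 1/L)^L for maximum tie length L.
    return 1 + (1 - 1 / L) ** L

# (1 - 1/L)^L increases toward 1/e, so the ratio stays below 1 + 1/e.
for L in (2, 3, 5, 10, 100):
    print(L, round(ratio(L), 4))
print("limit:", round(1 + 1 / math.e, 4))
```

For L = 2 this gives 1.25, matching the \(\frac{5}{4}\) bound stated in the abstract.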
In Section 1.1, we review the prior work related to this problem. In Section 1.2, we present an overview of our techniques. In Section 1.3, we describe the organization of the rest of the paper.
1.1 Related Work
In Section 1.1.1, we review the prior work on the maximum stable matching problem where ties are allowed on both sides of the market. In Section 1.1.2, we review the prior work for the case where ties are only allowed on one side. In Section 1.1.3, we review the prior work for the case where the maximum tie length is restricted. In Section 1.1.4, we mention other special cases of maximum stable matching that have been studied in the literature. In Section 1.1.5, we mention recent work on strategic issues associated with approximation algorithms for the maximum stable matching problem.
1.1.1 Two-Sided Ties
It is straightforward to see that any weakly stable matching is a 2-approximate solution [7]. Using a local search approach, Iwama et al. [8] gave an algorithm with an approximation ratio of \(\frac {15}{8}\) (= 1.875). Király [9] improved the approximation ratio to \(\frac {5}{3}\) (≈ 1.6667) by introducing the idea of promoting unmatched agents to higher priorities for tie-breaking. The current best approximation ratio for two-sided ties and incomplete lists is \(\frac {3}{2}\) (= 1.5), which is attained by the polynomial-time algorithm of McDermid [10], and the linear-time algorithms of Paluch [11] and of Király [12].
For hardness results, Iwama et al. [13] were the first to prove that finding a maximum weakly stable matching with ties and incomplete lists is NP-hard. Halldórsson et al. [14] showed that it is NP-hard to achieve an approximation ratio of 1 + ε for some fixed ε > 0. Results by Yanagisawa [15] imply that getting an approximation ratio of \(\frac {33}{29} - \varepsilon \) (≈ 1.1379) is NP-hard, and achieving \(\frac {4}{3} - \varepsilon \) (≈ 1.3333) is UG-hard. These hardness results hold even when the maximum tie length is two.
In the case of two-sided ties, Iwama et al. [16] showed that the integrality gap for the associated linear programming formulation is at least \(\frac {3 L - 2}{2 L - 1}\), where L is the maximum tie length. For the case of unbounded tie lengths, this implies a lower bound of \(\frac {3}{2}\) for the integrality gap, which coincides with the best approximation ratio known [10,11,12], indicating a potential barrier to further improvements.
1.1.2 One-Sided Ties
For the case where ties appear only on one side of the market, Király [9] showed an approximation ratio of \(\frac {3}{2}\) (= 1.5) for an algorithm based on the idea of promotion, and conjectured that a \((\frac {3}{2} - \varepsilon )\)-approximation is UG-hard. However, Iwama et al. [16] later presented an algorithm based on linear programming with an approximation ratio of \(\frac {25}{17}\) (≈ 1.4706). Dean and Jalasutram [17] improved on this approach to obtain an approximation ratio of \(\frac {19}{13}\) (≈ 1.4615). Huang and Kavitha [18] established an approximation ratio of \(\frac {22}{15}\) (≈ 1.4667) using an algorithm based on rounding half-integral stable matchings. Subsequently, a tight analysis [19, 20] of their algorithm established an approximation ratio of \(\frac {13}{9}\) (≈ 1.4444).
With one-sided ties, the problem of finding a maximum weakly stable matching remains NP-hard [7]. Results by Halldórsson et al. [21] imply that getting an approximation ratio of \(\frac {21}{19} - \varepsilon \) (≈ 1.1053) is NP-hard, and that achieving \(\frac {5}{4} - \varepsilon \) (≈ 1.25) is UG-hard. These hardness results hold even when each tie has length at most two.
In the case of one-sided ties, Iwama et al. [16] showed that the integrality gap for the associated linear programming formulation is at least \(1 + (1 - \frac {1}{L})^{L}\), where L is the maximum tie length. For the case of unbounded tie lengths, this implies a lower bound of \(1 + \frac {1}{e}\) (≈ 1.3679) for the integrality gap. In a paper by Huang et al. [22], the integrality gap is claimed to be at least \(\frac {3}{2}\), but their proof contains an error.
1.1.3 Ties with Restricted Lengths
For the case of two-sided ties where the length of each tie is at most two, Halldórsson et al. [21] presented an algorithm based on checking a small subset of tie breakers that achieves an approximation ratio of \(\frac {13}{7}\) (≈ 1.8571). For the same special case, the randomized algorithm of Halldórsson et al. [23] gives an expected approximation ratio of \(\frac {7}{4}\) (= 1.75). Huang and Kavitha [18] established an approximation ratio of \(\frac {10}{7}\) (≈ 1.4286) using the approach of rounding half-integral stable matchings. A better analysis [24] of their algorithm improved the approximation ratio to \(\frac {4}{3}\) (≈ 1.3333), which matches the UG-hardness result [15] and the lower bound for the integrality gap [16].
For the case of one-sided ties, the deterministic algorithm of Halldórsson et al. [21] attains an approximation ratio of \(\frac {2}{1 + L^{-2}}\), where L is the maximum length of the ties. The randomized algorithm of Halldórsson et al. [23] attains an approximation ratio of \(\frac {10}{7}\) (≈ 1.4286) for the case of one-sided ties where the length of each tie is at most two.
1.1.4 Other Special Cases
Further known NP-hard problems include the case where ties are restricted to the tail of the preference lists [21], where preference lists have length at most three [25], where preferences are symmetric [26], or where preferences are derived from master lists [27]. Some parameterized complexity results are also known [28].
1.1.5 Strategyproofness
In recent work, Hamada et al. [29] study strategic issues related to approximation algorithms for the maximum stable matching problem. For the case where ties appear only on one side, they show that no \((\frac {3}{2} - \varepsilon )\)-approximation algorithm is strategyproof for the side with ties, and no (2 − ε)-approximation algorithm is strategyproof for the side without ties. They also show that these bounds are tight, even for group strategyproof mechanisms.
1.2 Overview of the Techniques
The key techniques for the design and analysis of our approximation algorithm are based on linear programming. We focus on the maximum stable matching problem with one-sided ties and incomplete lists, and obtain a polynomial-time \((1 + (1 - \frac {1}{L})^{L})\)-approximation algorithm, where L is the maximum tie length.
Our algorithm is motivated by a proposal process similar to that of Iwama et al. [16], and that of Dean and Jalasutram [17], in which numerical priorities are adjusted according to the linear programming solution, and are used for tie-breaking purposes. However, instead of using their priority manipulation schemes, we introduce a method of priority incrementation based on an adjustable step size parameter. We first present the description and the properties of our process in terms of the step size parameter. We then consider the limit of this process as the step size becomes infinitesimally small, and present a polynomial-time algorithm which satisfies the key properties with the step size parameter set to zero.
Using these key properties, we analyze the approximation ratio of our algorithm by directly comparing the size of our output matching with the optimal value of the linear program. Although this is a standard approach to analyze approximation algorithms, it has not been used in prior work on this problem. Prior analyses [9, 18,19,20] which are not based on linear programming consider the symmetric difference of the output matching and an unknown optimal matching, and count augmenting paths of various lengths. Such symmetric difference arguments are also used in the analyses of Iwama et al. [16], and Dean and Jalasutram [17], where the output matching is compared to both an unknown optimal matching and an optimal linear programming solution. Instead of focusing on the symmetric difference, we develop a scheme that assigns charges to the matched man-woman pairs based on an exchange function. By applying the stability constraint and the tie-breaking criterion to the charges incurred due to indifferences in the preferences, we show that the charges cover the value of the linear programming solution. While none of the prior analyses directly implies an upper bound for the integrality gap, our approach enables us to obtain an upper bound of \(1 + (1 - \frac {1}{L})^{L}\) for the integrality gap, where L is the maximum tie length. This matches the known lower bound for the integrality gap [16]. When the maximum length of the ties is two, our result implies an approximation ratio and integrality gap of \(\frac {5}{4}\) (= 1.25), which matches the known UG-hardness result [21].
As part of our analysis, we formulate an infinite-dimensional factor-revealing linear program to find a good exchange function. The finite-dimensional factor-revealing linear programming technique was introduced by Jain et al. [30], and since then a number of variants have been proposed [31,32,33]. However, it is often difficult to obtain a nice closed-form solution. For the maximum stable matching problem with one-sided ties and incomplete lists, Dean and Jalasutram [17] obtained an approximation ratio of \(\frac {19}{13}\) by enumerating the combinatorial structures of augmenting paths and resorting to a computer-assisted proof for solving a large factor-revealing linear program. In contrast, our infinite-dimensional factor-revealing linear program is derived from our charging argument. Using numerical results as guidance, we are able to obtain an analytical solution for this linear program.
1.3 Organization of the Rest of the Paper
In Section 2, we formally define the stable matching market with one-sided ties. In Section 3, we present the linear programming formulation used by our algorithm. In Section 4, we present the proposal process and the implementations of our algorithm. In Section 5, we analyze the approximation ratio of our algorithm.
2 Stable Matching with One-Sided Ties
The stable matching market involves a set I of men and a set J of women. The sets I and J are assumed to be disjoint and finite. Furthermore, we assume that the sets I and J do not contain the element 0, which we use to denote being unmatched. The preference relation of each man i ∈ I is specified by a binary relation ≥i over J ∪{0} that satisfies antisymmetry, transitivity, and totality. Similarly, the preference relation of each woman j ∈ J is specified by a binary relation ≥j over I ∪{0} that satisfies transitivity and totality. We denote this stable matching market as (I, J,{≥i}i∈I,{≥j}j∈J).
For every man i ∈ I and woman j ∈ J, man i is said to be acceptable to woman j if \(i \geq _{j} 0\). Similarly, woman j is said to be acceptable to man i if \(j \geq _{i} 0\). The preference lists are allowed to be incomplete. In other words, there may exist i ∈ I and j ∈ J such that \(0 >_{j} i\) or \(0 >_{i} j\).
Notice that the preference relations {≥j}j∈J of the women are not required to be antisymmetric, while the preference relations {≥i}i∈I of the men are required to be antisymmetric. For every man i ∈ I, we write >i to denote the asymmetric part of ≥i. For every woman j ∈ J, we write >j and =j to denote the asymmetric part and the symmetric part of ≥j, respectively. A tie in the preference list of woman j is an equivalence class of size at least 2 with respect to the equivalence relation =j, and the length of a tie is the size of this equivalence class. We assume that there is at least one tie in a given problem instance, for otherwise all stable matchings have the same size. We use L to denote the maximum length of the ties in the preference lists of the women, where \(2 \leq L \leq \lvert {I}\rvert + 1\).
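As a concrete illustration of the parameter L, the following sketch computes the maximum tie length from women's preference lists encoded as ranked indifference classes; the encoding and the names are assumptions made for illustration only.

```python
def max_tie_length(women_prefs):
    """women_prefs[j] is a ranked list of indifference classes (each a list
    of men, most preferred classes first). A tie is a class of size >= 2;
    return the maximum tie length L, or None if the instance has no ties."""
    lengths = [len(group)
               for groups in women_prefs.values()
               for group in groups
               if len(group) >= 2]
    return max(lengths) if lengths else None

# Hypothetical instance: w1 is indifferent between m1 and m2, then prefers m3.
prefs = {"w1": [["m1", "m2"], ["m3"]], "w2": [["m4"]]}
print(max_tie_length(prefs))  # → 2
```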
Acceptability can be defined in terms of the preference relations. For every pair of agents k and \(k^{\prime }\) of opposite sex, either \(k^{\prime } \geq _{k} 0\) or \(0 >_{k} k^{\prime }\). We say that agent \(k^{\prime }\) is acceptable to agent k in the former case, and unacceptable to agent k in the latter case. If man i and woman j are acceptable to each other, we say that (i, j) is an acceptable pair. Otherwise, (i, j) is an unacceptable pair. Agent k is said to have a complete preference list if every agent \(k^{\prime }\) of the opposite sex is acceptable to agent k. Otherwise, agent k is said to have an incomplete preference list.
A matching is a subset \(\mu \subseteq I \times J\) such that for every \((i, j), (i^{\prime }, j^{\prime }) \in \mu \), we have \(i = i^{\prime }\) if and only if \(j = j^{\prime }\). For every man i ∈ I, if (i, j) ∈ μ for some woman j ∈ J, we say that man i is matched to woman j in matching μ, and we write μ(i) = j. Otherwise, we say that man i is unmatched in matching μ, and we write μ(i) = 0. Similarly, for every woman j ∈ J, if (i, j) ∈ μ for some man i ∈ I, we say that woman j is matched to man i in matching μ, and we write μ(j) = i. Otherwise, we say that woman j is unmatched in matching μ, and we write μ(j) = 0.
A matching μ is individually rational if for every (i, j) ∈ μ, we have \(j \geq _{i} 0\) and \(i \geq _{j} 0\). An individually rational matching μ is weakly stable if for every man i ∈ I and woman j ∈ J, either \(\mu (i) \geq _{i} j\) or \(\mu (j) \geq _{j} i\). Otherwise, (i, j) forms a strongly blocking pair.
The goal of the maximum stable matching problem is to find a maximum-cardinality weakly stable matching for a given stable matching market. We say that a polynomial-time algorithm is a z-approximation algorithm, or achieves an approximation ratio of z, where z ≥ 1, if the algorithm produces a weakly stable matching with cardinality at least \(\frac {1}{z}\) times that of the largest weakly stable matching. We also say that a maximization-based linear program has an integrality gap of z ≥ 1 if z is the minimum ratio such that the objective value of an optimal fractional solution is at most z times that of an optimal integral solution.
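The definitions above can be turned into a small checker for weak stability. The sketch below uses an illustrative encoding (men's lists strict, women's lists as ranked indifference classes, all hypothetical names) and checks only for strongly blocking pairs, taking individual rationality of the input matching for granted.

```python
def is_weakly_stable(matching, men_prefs, women_prefs):
    """matching: dict i -> j (partial; assumed individually rational).
    men_prefs[i]: strict list of acceptable women, most preferred first.
    women_prefs[j]: ranked list of indifference classes of acceptable men.
    (i, j) strongly blocks iff both find each other acceptable, i strictly
    prefers j to his partner, and j strictly prefers i to her partner."""
    INF = float("inf")
    m_rank = {i: {j: r for r, j in enumerate(ps)} for i, ps in men_prefs.items()}
    w_rank = {j: {i: r for r, grp in enumerate(gs) for i in grp}
              for j, gs in women_prefs.items()}
    partner_of_woman = {j: i for i, j in matching.items()}
    for i, prefs in men_prefs.items():
        for j in prefs:                               # j acceptable to i
            if i not in w_rank.get(j, {}):            # i unacceptable to j
                continue
            ri = m_rank[i].get(matching.get(i), INF)  # INF when unmatched
            rj = w_rank[j].get(partner_of_woman.get(j), INF)
            if m_rank[i][j] < ri and w_rank[j][i] < rj:
                return False                          # (i, j) strongly blocks
    return True

men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": [["m1", "m2"]], "w2": [["m1"]]}
print(is_weakly_stable({"m1": "w1"}, men, women))              # True
print(is_weakly_stable({"m1": "w2", "m2": "w1"}, men, women))  # True
```

Both matchings in the example are weakly stable, yet they have sizes 1 and 2: a concrete instance of the size variation discussed in the introduction.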
3 The Linear Programming Formulation
The following linear programming formulation is based on that of Rothblum [34], which extends that of Vande Vate [35]. We seek to maximize \({\sum }_{(i, j) \in I \times J} x_{i, j}\) subject to the following constraints:
$$ \begin{array}{@{}llll@{}} \text{(C1)} & \sum\limits_{j^{\prime} \in J} x_{i, j^{\prime}} \leq 1 & & \text{for every } i \in I, \\ \text{(C2)} & \sum\limits_{i^{\prime} \in I} x_{i^{\prime}, j} \leq 1 & & \text{for every } j \in J, \\ \text{(C3)} & \sum\limits_{\underset{j^{\prime} >_{i} j}{j^{\prime} \in J}} x_{i, j^{\prime}} + \sum\limits_{\underset{i^{\prime} \geq_{j} i}{i^{\prime} \in I}} x_{i^{\prime}, j} \geq 1 & & \text{for every } (i, j) \in I \times J \text{ with } j >_{i} 0 \text{ and } i >_{j} 0, \\ \text{(C4)} & x_{i, j} = 0 & & \text{for every } (i, j) \in I \times J \text{ with } 0 >_{i} j \text{ or } 0 >_{j} i, \\ \text{(C5)} & x_{i, j} \geq 0 & & \text{for every } (i, j) \in I \times J. \end{array} $$
For the model with strict preferences and incomplete lists, it is known [34] that an integral solution \(x = \{x_{i, j}\}_{(i, j) \in I \times J}\) corresponds to the indicator variables of a weakly stable matching if and only if x satisfies constraints (C1)–(C5). Our model allows ties to appear in the preference lists of the women. We also allow a woman to be indifferent between being unmatched and being matched with some of the men. Accordingly, we provide a proof of Lemma 1 for the sake of completeness.
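To make the formulation concrete, the sketch below checks a candidate solution against constraints (C1)–(C5) in the form in which they are used in the proofs of Lemmas 1 and 2. The preference encoding (strict lists for men, ranked indifference classes for women) and all names are illustrative assumptions, not the paper's notation.

```python
def check_constraints(x, men_prefs, women_prefs, eps=1e-9):
    """Check (C1)-(C5) for x, a dict over man-woman pairs (missing entries = 0):
      (C1) sum_j x[i,j] <= 1 for every man i
      (C2) sum_i x[i,j] <= 1 for every woman j
      (C3) sum over j' >_i j of x[i,j'] + sum over i' >=_j i of x[i',j] >= 1
           whenever i and j are acceptable to each other
      (C4) x[i,j] = 0 for every unacceptable pair
      (C5) x[i,j] >= 0 for every pair
    """
    m_rank = {i: {j: r for r, j in enumerate(ps)} for i, ps in men_prefs.items()}
    w_rank = {j: {i: r for r, grp in enumerate(gs) for i in grp}
              for j, gs in women_prefs.items()}
    val = lambda i, j: x.get((i, j), 0.0)
    for (i, j), v in x.items():                    # (C4) and (C5)
        if v < -eps:
            return False
        if v > eps and (j not in m_rank[i] or i not in w_rank[j]):
            return False
    for i in men_prefs:                            # (C1)
        if sum(val(i, j) for j in women_prefs) > 1 + eps:
            return False
    for j in women_prefs:                          # (C2)
        if sum(val(i, j) for i in men_prefs) > 1 + eps:
            return False
    for i in men_prefs:                            # (C3) over acceptable pairs
        for j in men_prefs[i]:
            if i not in w_rank.get(j, {}):
                continue
            better_for_i = sum(val(i, jp) for jp in men_prefs[i]
                               if m_rank[i][jp] < m_rank[i][j])
            weakly_better_for_j = sum(val(ip, j) for ip in w_rank[j]
                                      if w_rank[j][ip] <= w_rank[j][i])
            if better_for_i + weakly_better_for_j < 1 - eps:
                return False
    return True

men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": [["m1", "m2"]], "w2": [["m1"]]}
print(check_constraints({("m1", "w2"): 1.0, ("m2", "w1"): 1.0}, men, women))
```

Consistent with Lemma 1, the indicator vectors of the weakly stable matchings of this instance pass the check, while the empty solution violates (C3).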
Lemma 1
An integral solution x corresponds to the indicator variables of a weakly stable matching if and only if it satisfies constraints (C1)–(C5).
Proof
Suppose x satisfies constraints (C1)–(C5). Constraints (C1), (C2), and (C5) imply that x corresponds to a valid matching μ. Constraint (C4) implies that μ is individually rational. To show the weak stability of μ, consider man i ∈ I and woman j ∈ J. It suffices to show that (i, j) is not a strongly blocking pair. We may assume that \(j >_{i} 0\) and \(i >_{j} 0\), for otherwise individual rationality implies \(\mu (i) \geq _{i} 0 \geq _{i} j\) or \(\mu (j) \geq _{j} 0 \geq _{j} i\). Consider constraint (C3) associated with (i, j). At least one of the two summations is equal to 1. If the first summation is equal to 1, then \(\mu (i) >_{i} j\). If the second summation is equal to 1, then \(\mu (j) \geq _{j} i\). Thus, μ is a weakly stable matching.
Conversely, suppose x corresponds to a weakly stable matching μ. Since μ is a valid matching, constraints (C1), (C2), and (C5) are satisfied. Also, the individual rationality of μ implies that constraint (C4) is satisfied. To show that constraint (C3) is satisfied, consider (i, j) ∈ I × J such that \(j >_{i} 0\) and \(i >_{j} 0\). It suffices to show that at least one of the two summations in constraint (C3) associated with (i, j) is equal to 1. By the weak stability of μ, we have either \(\mu (i) \geq _{i} j\) or \(\mu (j) \geq _{j} i\). We consider two cases.
Case 1: \(\mu (j) \geq _{j} i\). Since \(\mu (j) \geq _{j} i >_{j} 0\), the second summation is equal to 1.
Case 2: \(i >_{j} \mu (j)\) and \(\mu (i) \geq _{i} j\). Since \(i >_{j} \mu (j)\), we have (i, j)∉μ. Since \(\mu (i) \geq _{i} j\) and (i, j)∉μ, we have \(\mu (i) >_{i} j\). Since \(\mu (i) >_{i} j >_{i} 0\), the first summation is equal to 1. □
Given x which satisfies constraints (C1)–(C5), it is useful to define auxiliary variables
$$ y_{i,j} = \sum\limits_{\underset{j \geq_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} $$
for every (i, j) ∈ I × (J ∪{0}), and
$$ z_{i,j} = \sum\limits_{\underset{i >_{j} i^{\prime}}{i^{\prime} \in I}} x_{i^{\prime}, j} $$
for every (i, j) ∈ (I ∪{0}) × J.
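The auxiliary variables can be computed directly from their definitions: \(y_{i,j}\) sums \(x_{i,j^{\prime}}\) over women \(j^{\prime}\) weakly less preferred than j by man i, and \(z_{i,j}\) sums \(x_{i^{\prime},j}\) over men \(i^{\prime}\) strictly less preferred than i by woman j. The sketch below restricts attention to acceptable pairs and uses the same illustrative encoding as before (strict lists for men, ranked indifference classes for women); it is not the paper's machinery.

```python
def aux_vars(x, men_prefs, women_prefs):
    """Compute y[i,j] and z[i,j] for acceptable pairs from a solution x
    (dict over man-woman pairs; missing entries are 0). Lower rank means
    more preferred, so j >=_i j' corresponds to rank(j') >= rank(j)."""
    m_rank = {i: {j: r for r, j in enumerate(ps)} for i, ps in men_prefs.items()}
    w_rank = {j: {i: r for r, grp in enumerate(gs) for i in grp}
              for j, gs in women_prefs.items()}
    val = lambda i, j: x.get((i, j), 0.0)
    y = {(i, j): sum(val(i, jp) for jp in m_rank[i]
                     if m_rank[i][jp] >= m_rank[i][j])  # j weakly preferred
         for i in men_prefs for j in m_rank[i]}
    z = {(i, j): sum(val(ip, j) for ip in w_rank[j]
                     if w_rank[j][ip] > w_rank[j][i])   # i strictly preferred
         for j in women_prefs for i in w_rank[j]}
    return y, z

men = {"m1": ["w1", "w2"], "m2": ["w1"]}
women = {"w1": [["m1", "m2"]], "w2": [["m1"]]}
y, z = aux_vars({("m1", "w2"): 1.0, ("m2", "w1"): 1.0}, men, women)
```

In this instance \(y_{m_1,w_1} + z_{m_1,w_1} = 1 + 0 \leq 1\), as part 5 of Lemma 2 requires.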
Lemma 2
The auxiliary variables satisfy the following conditions.
1. For every i ∈ I, we have \(y_{i,0} = 0\).
2. For every i ∈ I and j ∈ J, we have \(x_{i, j} \leq y_{i,j} \leq 1\).
3. For every i ∈ I and \(j, j^{\prime } \in J\) such that \(j >_{i} j^{\prime }\), we have \(y_{i,j} \geq x_{i, j} + y_{i,j^{\prime }}\).
4. For every \(i, i^{\prime } \in I \cup \{0\}\) and j ∈ J such that \(i =_{j} i^{\prime }\), we have \(z_{i,j} = z_{i^{\prime },j}\).
5. For every i ∈ I and j ∈ J such that \(j \geq _{i} 0\) and \(i \geq _{j} 0\), we have \(y_{i,j} + z_{i,j} \leq 1\).
Proof
1. Let i ∈ I. Then the definition of \(y_{i,0}\) implies
$$ y_{i,0} = \sum\limits_{\underset{0 \geq_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} = \sum\limits_{\underset{0 >_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} = 0, $$
where the last equality follows from constraint (C4).
2. Let i ∈ I and j ∈ J. By the definition of \(y_{i,j}\), we have
$$ y_{i,j} = \sum\limits_{\underset{j \geq_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} \geq x_{i, j}, $$
where the inequality follows from constraint (C5). Also by the definition of \(y_{i,j}\), we have
$$ y_{i,j} = \sum\limits_{\underset{j \geq_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} \leq \sum\limits_{j^{\prime} \in J} x_{i, j^{\prime}} \leq 1, $$
where the first inequality follows from constraint (C5), and the second inequality follows from constraint (C1).
3. Let i ∈ I and \(j, j^{\prime } \in J\) such that \(j >_{i} j^{\prime }\). Then the definitions of \(y_{i,j}\) and \(y_{i,j^{\prime }}\) imply
$$ y_{i,j} = \sum\limits_{\underset{j \geq_{i} j^{\prime\prime}}{j^{\prime\prime} \in J}} x_{i, j^{\prime\prime}} \geq x_{i, j} + \sum\limits_{\underset{j^{\prime} \geq_{i} j^{\prime\prime}}{j^{\prime\prime} \in J}} x_{i, j^{\prime\prime}} = x_{i, j} + y_{i,j^{\prime}}. $$
4. Let \(i, i^{\prime } \in I \cup \{0\}\) and j ∈ J such that \(i =_{j} i^{\prime }\). Then the definitions of \(z_{i,j}\) and \(z_{i^{\prime },j}\) imply
$$ z_{i,j} = \sum\limits_{\underset{i >_{j} i^{\prime\prime}}{i^{\prime\prime} \in I}} x_{i^{\prime\prime}, j} = \sum\limits_{\underset{i^{\prime} >_{j} i^{\prime\prime}}{i^{\prime\prime} \in I}} x_{i^{\prime\prime}, j} = z_{i^{\prime},j}, $$
where the second equality follows from \(i =_{j} i^{\prime }\).
5. Let (i, j) ∈ I × J such that \(j \geq _{i} 0\) and \(i \geq _{j} 0\). We consider two cases.
Case 1: \(i =_{j} 0\). Then the definition of \(z_{i,j}\) implies
$$ z_{i,j} = \sum\limits_{\underset{i >_{j} i^{\prime}}{i^{\prime} \in I}} x_{i^{\prime}, j} = \sum\limits_{\underset{0 >_{j} i^{\prime}}{i^{\prime} \in I}} x_{i^{\prime}, j} = 0 \leq 1 - y_{i,j}, $$
where the second equality follows from \(i =_{j} 0\), the third equality follows from constraint (C4), and the last inequality follows from part 2.
Case 2: \(i >_{j} 0\). Since j ∈ J and \(j \geq _{i} 0\), we have \(j >_{i} 0\). Since \(j >_{i} 0\) and \(i >_{j} 0\), constraints (C1)–(C3) imply
$$ \begin{array}{@{}rcl@{}} 0 &\leq{} & \Big(1 - \sum\limits_{j^{\prime} \in J} x_{i, j^{\prime}} \Big) + \Big(1 - \sum\limits_{i^{\prime} \in I} x_{i^{\prime}, j} \Big) + \Big({-1} + \sum\limits_{\underset{j^{\prime} >_{i} j}{j^{\prime} \in J}} x_{i, j^{\prime}} + \sum\limits_{\underset{i^{\prime} \geq_{j} i}{i^{\prime} \in I}} x_{i^{\prime}, j} \Big) \\ &={} & 1 - \sum\limits_{\underset{j \geq_{i} j^{\prime}}{j^{\prime} \in J}} x_{i, j^{\prime}} - \sum\limits_{\underset{i >_{j} i^{\prime}}{i^{\prime} \in I}} x_{i^{\prime}, j} \\ &={} & 1 - y_{i,j} - z_{i,j}, \end{array} $$
where the last equality follows from the definitions of \(y_{i,j}\) and \(z_{i,j}\). □
4 The Algorithm
In Section 4.1, we present a proposal process along with some key properties in terms of a step size parameter. In Section 4.2, we present a polynomial-time algorithm to simulate this process with an infinitesimally small step size. In Section 4.3, we present some properties of the loop body of our algorithm. In Section 4.4, we show that our algorithm satisfies the key properties. In Section 4.5, we present an alternative implementation of our algorithm.
4.1 A Proposal Process with Priorities
Our proposal process with priorities takes a stable matching market and a step size parameter η > 0 as input, and produces a weakly stable matching μ as output. In the preprocessing phase, we compute an optimal fractional solution x to the associated linear program. Then, in the initialization phase, we assign the empty matching to μ and each man i is assigned a priority pi equal to 0. For each man i, we also maintain a set Si of women that is initialized to the empty set. We use the set Si to store the women to whom man i must propose before his priority pi is increased by η. After that, the process enters the proposal phase and proceeds iteratively.
In each iteration, we pick an unmatched man i with priority pi < 1 + η. If the set Si is empty, we increment his priority pi by η and then update Si to the set
Otherwise, the man i that we pick has a non-empty set Si of women. Let j denote the most preferred woman of man i in Si. We remove j from Si and man i proposes to woman j. When woman j receives the proposal from man i, she tentatively accepts him if she is currently unmatched and he is acceptable to her. Otherwise, if woman j is currently matched to another man \(i^{\prime }\), she tentatively accepts her preferred choice between men i and \(i^{\prime }\), and rejects the other. In the event of a tie, she compares the current priorities pi and \(p_{i^{\prime }}\) of the men and accepts the one with higher priority. (If the priorities of i and \(i^{\prime }\) are equal, she breaks the tie arbitrarily.) If man i is tentatively accepted by woman j, the matching μ is updated accordingly.
When every unmatched man i has priority pi ≥ 1 + η, the process terminates and outputs the final matching μ.
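The proposal phase described above can be sketched as follows. The rule for populating \(S_i\) from the linear programming solution is not reproduced here, so the `eligible` callback below is a hypothetical placeholder supplied by the caller; women's ties are broken by priority, with the incumbent kept on exact priority ties (one arbitrary choice). The encoding and all names are illustrative assumptions.

```python
def proposal_process(men_prefs, women_prefs, eligible, eta):
    """Sketch of the proposal phase of Section 4.1 (illustrative, not the
    paper's exact algorithm). eligible(i, p) is a hypothetical stand-in for
    the LP-based rule yielding the set S_i of women man i must propose to
    before his priority rises above p. men_prefs[i] is a strict list;
    women_prefs[j] is a ranked list of indifference classes."""
    m_rank = {i: {j: r for r, j in enumerate(ps)} for i, ps in men_prefs.items()}
    w_rank = {j: {i: r for r, grp in enumerate(gs) for i in grp}
              for j, gs in women_prefs.items()}
    p = {i: 0.0 for i in men_prefs}        # priorities
    S = {i: set() for i in men_prefs}      # women still owed a proposal
    mu = {}                                # tentative matching: woman -> man
    while True:
        matched = set(mu.values())
        free = [i for i in men_prefs if i not in matched and p[i] < 1 + eta]
        if not free:
            break
        i = free[0]
        if not S[i]:
            p[i] += eta                    # raise priority, then refresh S_i
            S[i] = {j for j in eligible(i, p[i]) if j in m_rank[i]}
            continue
        j = min(S[i], key=m_rank[i].get)   # i's most preferred woman in S_i
        S[i].remove(j)
        if i not in w_rank.get(j, {}):
            continue                       # i is unacceptable to j
        cur = mu.get(j)
        if (cur is None or w_rank[j][i] < w_rank[j][cur]
                or (w_rank[j][i] == w_rank[j][cur] and p[i] > p[cur])):
            mu[j] = i                      # j tentatively accepts i
    return {i: j for j, i in mu.items()}

# The two-man churn scenario of Section 4.2, with a coarse step size:
men = {"m1": ["w1"], "m2": ["w1"]}
women = {"w1": [["m1", "m2"]]}
out = proposal_process(men, women, lambda i, p: {"w1"}, eta=0.5)
print(out)  # → {'m2': 'w1'}
```

Running the churn scenario with η = 0.5 shows the two tied men repeatedly displacing each other as their priorities leapfrog, exactly the behavior Algorithm 1 is designed to simulate in one step.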
Our process is similar to that of Iwama et al. [16], and that of Dean and Jalasutram [17], who also use a proposal scheme with priorities. In particular, the way that we populate the set Si with a subset of women by referring to the linear programming solution is based on their methods. The major difference is that, in our process, priorities only increase by a small step size η, whereas in their algorithms, the priorities may increase by a possibly larger amount, essentially to ensure that a new woman is added to Si. As in their algorithms, for every woman j, the sequence of tentative partners μ(j) of woman j satisfies a natural monotonicity property. Woman j is initially unmatched, and becomes matched the first time she receives a proposal from a man who is acceptable to her. In each subsequent iteration, she either keeps her current partner or gets a weakly preferred partner. Furthermore, if she is indifferent between her new partner and her old partner, then the new partner has a weakly larger priority. When the process terminates, the following properties hold, which are analogous to properties satisfied by the algorithms of Iwama et al. [16] and Dean and Jalasutram [17].
- (P1) Let (i, j) ∈ μ. Then \(j \geq _{i} 0\) and \(i \geq _{j} 0\).
- (P2) Let i ∈ I be a man and j ∈ J be a woman such that \(j \geq _{i} \mu (i)\) and \(i \geq _{j} 0\). Then μ(j)≠ 0 and \(\mu (j) \geq _{j} i\).
- (P3) Let i ∈ I be a man. Then \(1 - y_{i,\mu (i)} \leq p_{i} \leq 1 + 2\eta \).
- (P4) Let i ∈ I be a man and j ∈ J be a woman such that \(j \geq _{i} 0\) and \(i \geq _{j} 0\). Suppose \(p_{i} - \eta > 1 - y_{i,j}\). Then μ(j)≠ 0 and \(\mu (j) \geq _{j} i\). Furthermore, if \(\mu (j) =_{j} i\), then \(p_{\mu (j)} \geq p_{i} - \eta \).
For (P1), it is easy to see that man i proposes to woman j only if she is acceptable to him, and woman j accepts a proposal from man i only if he is acceptable to her.
For (P2), if man i weakly prefers woman j to μ(i) and is acceptable to woman j, then man i has proposed to woman j. Thus the monotonicity property implies that μ(j)≠ 0 and \(\mu (j) \geq _{j} i\).
For (P3), it is easy to see that the priority pi of man i lies within the specified range when he proposes to woman μ(i).
For (P4), if man i and woman j satisfy the stated assumptions, then man i proposed to woman j when his priority was equal to \(p_{i} - \eta \), and this proposal was eventually rejected. Immediately after this proposal was rejected, woman j was matched with a man \(i^{\prime }\) such that \(i^{\prime } \neq i\) and \(i^{\prime } \geq _{j} i\). The monotonicity property implies that μ(j)≠ 0 and \(\mu (j) \geq _{j} i^{\prime } \geq _{j} i\). Furthermore, if \(\mu (j) =_{j} i\), then \(\mu (j) =_{j} i^{\prime } =_{j} i\). Since \(i^{\prime } =_{j} i\), the priority of man \(i^{\prime }\) was at least \(p_{i} - \eta \) when the aforementioned proposal was rejected. Since \(\mu (j) =_{j} i^{\prime }\), the monotonicity property implies that \(p_{\mu (j)} \geq p_{i} - \eta \).
4.2 A Polynomial-Time Implementation
The proposal process with priorities of Section 4.1 depends on a step size parameter η > 0. To obtain a good approximation ratio, we would like the step size parameter η to be small. However, the running time of a naive implementation grows in proportion to η− 1. Intuitively, in the limit of an infinitesimally small step size, properties (P1)–(P4) can be satisfied with η = 0.
Our algorithm is motivated by the idea of simulating the process of Section 4.1 with an infinitesimally small step size. We maintain for every man i ∈ I a priority pi and a pointer ℓi ∈ J ∪{0} into the preference list of man i. For every man i ∈ I and woman j ∈ J, we think of man i as having proposed to woman j if and only if \(j >_{i} \ell _{i}\) and \(j \geq _{i} 0\). Given ℓ = {ℓi}i∈I and j ∈ J, we define
$$ I_{j}(\boldsymbol{\ell}) = \{ i \in I : j >_{i} \ell_{i} \text{ and } j \geq_{i} 0 \} $$
as the set of all men i who have proposed to woman j. Given ℓ = {ℓi}i∈I, we define G(ℓ) as the bipartite graph with vertex set I ∪ J and edge set E(ℓ).
Given ℓ and μ, we say that a (possibly zero-length) path π in G(ℓ) is μ-alternating if it alternates between edges not in μ and edges in μ. We say that a μ-alternating path π is oriented from k to \(k^{\prime }\) if no edge in π ∩ μ is incident to vertex k. The details of the implementation are given in Algorithm 1.
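The path definitions can be expressed directly; the sketch below represents μ as a set of undirected edges (frozensets) and is illustrative only, with hypothetical vertex names.

```python
def is_mu_alternating(path, mu):
    """Check that a path (list of vertices) alternates between non-matching
    and matching edges, in either order; mu is a set of frozenset edges.
    A zero-length path is trivially alternating."""
    edges = [frozenset(e) for e in zip(path, path[1:])]
    in_mu = [e in mu for e in edges]
    return all(a != b for a, b in zip(in_mu, in_mu[1:]))

def oriented_from(path, mu):
    """A mu-alternating path is oriented from its first endpoint k when no
    matching edge of the path is incident to k."""
    return not any(frozenset(e) in mu and path[0] in e
                   for e in zip(path, path[1:]))

mu = {frozenset(("i1", "j1"))}
print(is_mu_alternating(["i2", "j1", "i1"], mu))  # True
print(oriented_from(["i2", "j1", "i1"], mu))      # True: path starts off-matching
```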
It is straightforward to prove that throughout any execution of Algorithm 1, the program variable μ corresponds to a matching. Likewise, where it is defined, the program variable μ0 corresponds to a matching. Accordingly, throughout our analysis, we assume that μ and μ0 are matchings.
In Sections 4.3 and 4.4 below, we establish that Algorithm 1 terminates in a state that satisfies (P1)–(P4) with η = 0 (Lemma 13). Before proceeding to our formal analysis, we offer some intuition regarding the sense in which Algorithm 1 can be viewed as simulating the proposal process of Section 4.1 with an infinitesimally small step size. A reader who is interested only in the formal presentation can skip the next two paragraphs without loss of continuity.
When the step size η is small, the proposal process of Section 4.1 is subject to a great deal of churn. For example, consider two men i and \(i^{\prime }\) and a woman j such that \(i=_{j}i^{\prime }\geq _{j}0\) and j is the top choice of both i and \(i^{\prime }\). The proposal process can begin with man i proposing to, and becoming matched with, woman j. Subsequently, we can have a large number of iterations in which one of the two men i and \(i^{\prime }\) is matched to j, and the other is unmatched and is chosen to propose. The priority of the unmatched man increases until it exceeds that of the matched man, at which point the matching is updated with the roles of the two men reversed. Throughout this period, whenever the sets Si or \(S_{i^{\prime }}\) associated with the proposal process are constructed, they are equal to {j}. This pattern continues, with woman j being alternately matched to i and \(i^{\prime }\), and with the priorities of man i and \(i^{\prime }\) gradually increasing (and remaining approximately equal) until the priority of one of the men, say i, either reaches a sufficiently high value to ensure that Si contains a second woman, or reaches its maximum value. Algorithm 1 is designed to efficiently simulate this churn by directly increasing the priorities of men i and \(i^{\prime }\) to the critical value.
The foregoing two-man one-woman scenario is a special case of a much more general class of “churn scenarios” that are efficiently simulated by Algorithm 1. At each iteration of Algorithm 1, an unmatched man i0 is selected at line 7. Each successive time a given man is selected in line 7, the set of women to whom he is willing to propose (i.e., \(S_{i_{0}}\)) includes one new entry. This new entry corresponds to the woman j0 defined in line 7. If woman j0 is unmatched and finds man i0 acceptable, then we add (i0, j0) to the current matching in line 10. If not, we determine in line 12 the man i1 (either i0 or μ(j0)) who is tentatively rejected by woman j0, and we update the current matching accordingly in line 13. In lines 14 through 18, Algorithm 1 efficiently simulates a possible trajectory of the proposal process in which an unmatched man in the set I0 (which includes i1) is repeatedly chosen to propose until the priority of some man in this set, say man i, has either increased sufficiently for a new woman to be added to his set Si, or has reached its maximum value. Using property (P4), it is not difficult to verify that in this trajectory (and with η infinitesimally small), the lowest priorities associated with the men in I0 gradually increase until \(\min \limits _{i\in I_{0}}w(i,\ell _{i})\) is reached. It follows that the overall effect on the priorities is faithfully captured by the update performed in line 17. Similarly, the overall change to the matching is faithfully captured by the update performed in line 18.
4.3 The Loop Body of the Algorithm
In this subsection, we analyze the loop body of Algorithm 1. It is convenient to define the following predicates.
- \({\mathscr{Q}}_1(\boldsymbol {\ell })\): for every i ∈ I, we have ℓi ≥i0.
- \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\): μ is a matching of G(ℓ) such that for every i ∈ I and j ∈ J, if i ∈ Ij(ℓ) and i ≥j0, then μ(j)≠ 0.
- \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\): for every i ∈ I, we have pi ≤ w(i, ℓi).
- \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\): for every i ∈ I such that μ(i) = 0, we have pi = w(i, ℓi).
- \({\mathscr{Q}}_5(\boldsymbol {\ell },\mathbf {p})\): for every i ∈ I and j ∈ J such that j >iℓi, we have w(i, j) ≤ pi.
- \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\): for every \(i, i^{\prime } \in I\) and j ∈ J such that (i, j) ∈ E(ℓ) and \(\mu (i^{\prime }) = j\), we have \(p_{i} \leq p_{i^{\prime }}\).
- \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\): all of \({\mathscr{Q}}_1(\boldsymbol {\ell })\), \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\), \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\), \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\), \({\mathscr{Q}}_5(\boldsymbol {\ell },\mathbf {p})\), and \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) hold.
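Predicates such as \({\mathscr{Q}}_3\), \({\mathscr{Q}}_4\), and \({\mathscr{Q}}_6\) can be checked mechanically, which is useful when testing an implementation. The sketch below is ours, not the paper's pseudocode, and supplies the state abstractly: w is a dictionary mapping pairs (i, ℓi) to weights, E stands for the edge set E(ℓ), and μ is a man-to-woman dictionary with None for unmatched men.

```python
def check_Q3(I, p, w, ell):
    # Q3: every man's priority is at most w(i, l_i)
    return all(p[i] <= w[i, ell[i]] for i in I)

def check_Q4(I, mu, p, w, ell):
    # Q4: every unmatched man's priority equals w(i, l_i)
    return all(p[i] == w[i, ell[i]] for i in I if mu[i] is None)

def check_Q6(E, mu, p):
    # Q6: along any edge (i, j) of G(l) with j matched to i', p_i <= p_{i'}
    partner_of = {j: i for i, j in mu.items() if j is not None}
    return all(p[i] <= p[partner_of[j]] for (i, j) in E if j in partner_of)

# Hypothetical two-man, one-woman state satisfying all three predicates.
I = ["i1", "i2"]
ell = {"i1": "j1", "i2": "j1"}
w = {("i1", "j1"): 0.6, ("i2", "j1"): 0.5}
p = {"i1": 0.5, "i2": 0.5}
mu = {"i1": "j1", "i2": None}
E = [("i1", "j1"), ("i2", "j1")]
print(check_Q3(I, p, w, ell), check_Q4(I, mu, p, w, ell), check_Q6(E, mu, p))
```

Checking \({\mathscr{Q}}_1\), \({\mathscr{Q}}_2\), and \({\mathscr{Q}}_5\) is analogous but also needs the agents' preference orders, which we omit here for brevity.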
Consider the loop body of Algorithm 1. Throughout the rest of this subsection, we let ℓ−, μ−, and p− denote the values of ℓ, μ, and p before a given iteration of the loop, and we assume that \({\mathscr{Q}}(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) and the loop condition are satisfied. Also, we let ℓ+, μ+, p+ denote the values of ℓ, μ, and p after the iteration.
Lemma 3
For every i ∈ I, we have \(\ell ^{+}_{i} \geq _{i} 0\).
Proof
The only line in the loop body that modifies ℓ is line 8, which updates \(\ell _{i_{0}}\). The definition of i0 implies that \(\ell ^{-}_{i_{0}}>_{i_{0}}0\). It follows that \(\ell ^{+}_{i_{0}}\geq _{i_{0}}0\) holds. □
The following lemma characterizes how E(ℓ) changes in a single iteration of the loop of Algorithm 1. We omit the proof, which is straightforward but tedious.
Lemma 4
The following conditions hold.
1. For every j ∈ J, we have μ−(j) ≥j0.
2. If \(i_{0} <_{j_{0}} \mu ^{-}(j_{0})\), then E(ℓ+) = E(ℓ−).
3. If μ−(j0) = 0 and \(i_{0} \geq _{j_{0}} 0\), then E(ℓ+) = E(ℓ−) ∪{(i0, j0)}.
4. If μ−(j0)≠ 0 and \(i_{0} =_{j_{0}} \mu ^{-}(j_{0})\), then E(ℓ+) = E(ℓ−) ∪{(i0, j0)}.
5. If μ−(j0)≠ 0 and \(i_{0} >_{j_{0}} \mu ^{-}(j_{0})\), then E(ℓ+) = {(i, j) ∈ E(ℓ−): j≠j0}∪{(i0, j0)}.
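The case analysis in parts 2 through 5 can be transcribed almost directly into code. In the sketch below, prefers(j0, a, b) is a hypothetical helper standing in for woman j0's comparison (returning '<', '=', or '>' for a versus b, with 0 denoting the outcome of being unmatched); both the helper and the edge-set representation are our illustration, not the paper's.

```python
def updated_edges(E_minus, i0, j0, mu_j0, prefers):
    """Compute E(l+) from E(l-) following parts 2-5 of Lemma 4 (sketch)."""
    if mu_j0 == 0:
        # part 3: j0 is unmatched; (i0, j0) joins iff j0 finds i0 acceptable
        if prefers(j0, i0, 0) in (">", "="):
            return set(E_minus) | {(i0, j0)}
        return set(E_minus)
    cmp = prefers(j0, i0, mu_j0)
    if cmp == "<":   # part 2: i0 is worse than j0's current partner
        return set(E_minus)
    if cmp == "=":   # part 4: i0 is tied with j0's current partner
        return set(E_minus) | {(i0, j0)}
    # part 5: i0 strictly preferred; every other edge at j0 is dropped
    return {(i, j) for (i, j) in E_minus if j != j0} | {(i0, j0)}

# Hypothetical preferences of woman "j": y preferred to x, both acceptable.
rank = {"j": {0: 0, "x": 1, "y": 2}}
def prefers(j, a, b):
    ra, rb = rank[j][a], rank[j][b]
    return ">" if ra > rb else ("=" if ra == rb else "<")

# part 5: y strictly preferred over the matched man x, so x's edge is dropped
print(sorted(updated_edges({("x", "j")}, "y", "j", "x", prefers)))  # [('y', 'j')]
```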
Lemma 5
Condition \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\) holds. Furthermore, if μ−(j0)≠ 0 or \(0 >_{j_{0}} i_{0}\), then \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu _{0})\) holds.
Proof
Since \({\mathscr{Q}}_2(\boldsymbol {\ell }^{-},\mu ^{-})\) holds, we know that μ− is a matching of G(ℓ−). Let i ∈ I and j ∈ J be such that \(i \in I_{j}(\boldsymbol {\ell }^{+})\) and i ≥j0.
Case 1: μ−(j0) = 0 and \(i_{0} \geq _{j_{0}}0\). Then μ+ = μ−∪{(i0, j0)}. Since μ−(j0) = 0, \(i_{0} \geq _{j_{0}}0\), and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{-},\mu ^{-})\) holds, part 3 of Lemma 4 implies that E(ℓ+) = E(ℓ−) ∪{(i0, j0)}. Since μ− is a matching of G(ℓ−), μ−(i0) = 0, μ−(j0) = 0, E(ℓ+) = E(ℓ−) ∪{(i0, j0)}, and μ+ = μ−∪{(i0, j0)}, we find that μ+ is a matching of G(ℓ+). To establish that \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\) holds, it remains to prove that μ+(j)≠ 0.
Case 1.1: j≠j0. Then \(I_{j}(\boldsymbol {\ell }^{+})=I_{j}(\boldsymbol {\ell }^{-})\), and hence \(i \in I_{j}(\boldsymbol {\ell }^{-})\). Since \(i \in I_{j}(\boldsymbol {\ell }^{-})\), i ≥j0, and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{-},\mu ^{-})\) holds, we have μ−(j)≠ 0. Since μ+ = μ−∪{(i0, j0)} and j≠j0, we have μ+(j) = μ−(j). Since μ+(j) = μ−(j) and μ−(j)≠ 0, we have μ+(j)≠ 0.
Case 1.2: j = j0. Since μ+ = μ−∪{(i0, j0)}, μ+ is a matching of G(ℓ+), and j = j0, we deduce that μ+(j) = i0≠ 0.
Case 2: μ−(j0)≠ 0 or \(0>_{j_{0}}i_{0}\). We need to prove that \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu _{0})\) and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\) hold. We begin by establishing two useful claims.
The first claim is that μ0 is a matching of G(ℓ+) that matches the same set of women as μ−. To prove this claim, we consider three cases.
(a) \(i_{0}<_{j_{0}}\mu ^{-}(j_{0})\). Then i1 = i0 and part 2 of Lemma 4 implies E(ℓ+) = E(ℓ−). Since i1 = i0, we have μ0 = μ−. Since μ0 = μ−, E(ℓ+) = E(ℓ−), and μ− is a matching of G(ℓ−), the claim follows.
(b) \(i_{0}=_{j_{0}}\mu ^{-}(j_{0})\). Then part 4 of Lemma 4 implies E(ℓ+) = E(ℓ−) ∪{(i0, j0)}. Since μ0 = (μ−∪{(i0, j0)}) ∖{(i1, j0)}, E(ℓ+) = E(ℓ−) ∪{(i0, j0)}, and μ− is a matching of G(ℓ−), the claim follows.
(c) \(i_{0}>_{j_{0}}\mu ^{-}(j_{0})\). Then i1≠i0 and part 5 of Lemma 4 implies E(ℓ+) = {(i, j) ∈ E(ℓ−): j≠j0}∪{(i0, j0)}. Since μ0 = (μ−∪{(i0, j0)}) ∖{(i1, j0)}, E(ℓ+) = {(i, j) ∈ E(ℓ−): j≠j0}∪{(i0, j0)}, and μ− is a matching of G(ℓ−), the claim follows.
The second claim is that μ+ is a matching of G(ℓ+) that matches the same set of women as μ0. Since μ0 is a matching of G(ℓ+) and μ+ is the symmetric difference between μ0 and an oriented μ0-alternating path in G(ℓ+) from i1 to i2, the second claim follows.
Given the two preceding claims, we can establish that \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu _{0})\) and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\) hold by proving that μ−(j)≠ 0. If j = j0, the latter inequality follows from the Case 2 condition. Now suppose that j≠j0. Then \(I_{j}(\boldsymbol {\ell }^{+})=I_{j}(\boldsymbol {\ell }^{-})\), and hence \(i \in I_{j}(\boldsymbol {\ell }^{-})\). Since \(i \in I_{j}(\boldsymbol {\ell }^{-})\), i ≥j0, and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{-},\mu ^{-})\) holds, we have μ−(j)≠ 0. □
Lemma 6
For every i ∈ I, we have \(p^{+}_{i} \leq w(i, \ell ^+_{i})\).
Proof
Let i ∈ I. We consider two cases.
Case 1: \(p^{+}_{i} = p^{-}_{i}\). Since \({\mathscr{Q}}_3(\boldsymbol {\ell }^{-},\mathbf {p}^{-})\) holds, we have \(p^{-}_{i} \leq w(i, \ell ^-_{i})\). Line 8 of Algorithm 1 implies w(i, ℓi−) ≤ w(i, ℓi+). Thus \(p^{+}_{i} = p^{-}_{i} \leq w(i, \ell ^-_{i}) \leq w(i, \ell ^+_{i})\).
Case 2: \(p^{+}_{i} \neq p^{-}_{i}\). Then line 17 of Algorithm 1 implies i ∈ I0 and \(p^{+}_{i} = w(i_{2}, \ell ^+_{i_{2}})\). Since i ∈ I0, line 15 of Algorithm 1 implies \(w(i_{2}, \ell ^+_{i_{2}}) \leq w(i, \ell ^+_{i})\). Thus \(p^{+}_{i} = w(i_{2}, \ell ^+_{i_{2}}) \leq w(i, \ell ^+_{i})\). □
Lemma 7
For every i ∈ I such that μ+(i) = 0, we have \(p^{+}_{i} = w(i, \ell ^+_{i})\).
Proof
Let i ∈ I be such that μ+(i) = 0. Then Lemma 6 implies that \(p^{+}_{i} \leq w(i, \ell ^+_{i})\). It remains to show that \(p^{+}_{i} \geq w(i, \ell ^+_{i})\). We consider two cases.
Case 1: μ−(j0) = 0 and \(i_{0} \geq _{j_{0}} 0\). Then \(p^{+}_{i} = p^{-}_{i}\). Since μ+(i) = 0, line 10 of Algorithm 1 implies i≠i0 and μ−(i) = 0. Since μ−(i) = 0, condition \({\mathscr{Q}}_{4}({\boldsymbol {\ell }^{-}},{\mu ^{-}},{\mathbf {p}^{-}})\) implies \(p^{-}_{i} = w(i, \ell ^-_{i})\). Since i≠i0, line 8 of Algorithm 1 implies \(\ell ^{+}_{i} = \ell ^{-}_{i}\). Thus \(p^{+}_{i} = p^{-}_{i} = w(i, \ell ^-_{i}) = w(i, \ell ^+_{i})\).
Case 2: μ−(j0)≠ 0 or \(0 >_{j_{0}} i_{0}\). We consider two subcases.
Case 2.1: i = i2. Then line 15 of Algorithm 1 implies i2 ∈ I0. Since i = i2 ∈ I0, line 17 of Algorithm 1 implies \(p^{+}_{i} \geq w(i, \ell ^+_{i})\).
Case 2.2: i≠i2. Since μ+(i) = 0, i≠i2, and \(\{i^{\prime }\in I \colon \mu ^{+}(i^{\prime })\neq 0\}=(\{i^{\prime }\in I \colon \mu ^{-}(i^{\prime })\neq 0\}\cup \{i_{0}\})\setminus \{i_{2}\}\), we deduce that i≠i0 and μ−(i) = 0. Line 17 of Algorithm 1 implies \(p^{+}_{i} \geq p^{-}_{i}\). Since μ−(i) = 0, condition \({\mathscr{Q}}_4(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) implies \(p^{-}_{i} = w(i, \ell ^-_{i})\). Since i≠i0, line 8 of Algorithm 1 implies \(\ell ^{+}_{i} = \ell ^{-}_{i}\). Thus \(p^{+}_{i} \geq p^{-}_{i} = w(i, \ell ^-_{i}) = w(i, \ell ^+_{i})\). □
Lemma 8
For every i ∈ I and j ∈ J such that \(j >_{i} \ell ^{+}_{i}\), we have \(w(i, j) \leq p^{+}_{i}\).
Proof
Let i ∈ I and j ∈ J be such that \(j >_{i} \ell ^{+}_{i}\). Line 17 of Algorithm 1 implies \(p^{+}_{i} \geq p^{-}_{i}\). We consider two cases.
Case 1: \(j >_{i} \ell ^{-}_{i}\). Then \({\mathscr{Q}}_5(\boldsymbol {\ell }^{-},\mathbf {p}^{-})\) implies \(p^{-}_{i} \geq w(i, j)\). Thus \(p^{+}_{i} \geq p^{-}_{i} \geq w(i, j)\).
Case 2: \(\ell ^{-}_{i} \geq _{i} j\). Since \(\ell ^{-}_{i} \geq _{i} j >_{i} \ell ^{+}_{i}\), line 8 of Algorithm 1 implies i = i0 and \(j = \ell ^{-}_{i}\). Since i = i0, line 7 of Algorithm 1 implies μ−(i) = 0. Since μ−(i) = 0, condition \({\mathscr{Q}}_4(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) implies \(p^{-}_{i} = w(i, \ell ^-_{i})\). We conclude that \(p^{+}_{i} \geq p^{-}_{i} = w(i, \ell ^-_{i}) = w(i, j)\). □
Lemma 9
Suppose that μ−(j0)≠ 0 or \(0 >_{j_{0}} i_{0}\). Then the following conditions hold.
1. For every \(i, i^{\prime } \in I\) and j ∈ J such that (i, j) ∈ E(ℓ+) and \(\mu _{0}(i^{\prime }) = j\), we have \(p^{-}_{i} \leq p^{-}_{i^{\prime }}\).
2. For every \(i, i^{\prime } \in I\) and j ∈ J such that (i, j) ∈ E(ℓ+) and \(\mu _{0}(i^{\prime }) = j\), we have \(p^{+}_{i} \leq p^{+}_{i^{\prime }}\).
3. For every i ∈ I on path π0, we have \(p^{+}_{i} = w(i_{2}, \ell ^+_{i_{2}})\).
4. For every \(i, i^{\prime } \in I\) and j ∈ J such that (i, j) ∈ E(ℓ+) and \(\mu ^{+}(i^{\prime }) = j\), we have \(p^{+}_{i} \leq p^{+}_{i^{\prime }}\).
Proof
1. Let \(i, i^{\prime } \in I\) and j ∈ J be such that (i, j) ∈ E(ℓ+) and \(\mu _{0}(i^{\prime }) = j\). We consider two cases.
Case 1: j≠j0. Since j≠j0, we have \(\mu ^{-}(j) = \mu _{0}(j) = i^{\prime }\). In addition, Lemma 4 implies that (i, j) ∈ E(ℓ−). Since \(\mu ^{-}(j) = i^{\prime }\) and (i, j) ∈ E(ℓ−), condition \({\mathscr{Q}}_6(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) implies \(p^{-}_{i} \leq p^{-}_{i^{\prime }}\).
Case 2: j = j0. Thus \(i^{\prime } = \mu _{0}(j_{0})\). Let \(i^{\prime \prime }\in I\) denote μ−(j0). We consider two subcases.
Case 2.1: i≠i0. Since i≠i0, Lemma 4 implies that \((i,j_{0})\in E(\boldsymbol {\ell }^{-})\). Since \((i, j_{0})\in E(\boldsymbol {\ell }^{-})\), condition \({\mathscr{Q}}_6(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) implies \(p^{-}_{i}\leq p^{-}_{i^{\prime \prime }}\). Since i≠i0 and \((i,j_{0})\in E(\boldsymbol {\ell }^{+})\), Lemma 4 implies that \(i \geq _{j_{0}} i_{0}\). Since \(i \geq _{j_{0}} i_{0}\), lines 12 and 13 of Algorithm 1 imply that \(p^{-}_{i^{\prime \prime }}\leq p^{-}_{i^{\prime }}\). Thus \(p^{-}_{i} \leq p^{-}_{i^{\prime \prime }} \leq p^{-}_{i^{\prime }}\).
Case 2.2: i = i0. Since \((i_{0},j_{0})\in E(\boldsymbol {\ell }^{+})\), we have \(i_{0} \geq _{j_{0}} i^{\prime \prime }\). Since \(i_{0} \geq _{j_{0}} i^{\prime \prime }\), lines 12 and 13 of Algorithm 1 imply that \(p^{-}_{i_{0}} \leq p^{-}_{i^{\prime }}\).
2. Let \(i, i^{\prime } \in I\) and j ∈ J be such that (i, j) ∈ E(ℓ+) and \(\mu _{0}(i^{\prime }) = j\). We consider two cases.
Case 1: \(p^{+}_{i} = p^{-}_{i}\). Then line 17 of Algorithm 1 implies \(p^{+}_{i^{\prime }} \geq p^{-}_{i^{\prime }}\). Part (1) implies \(p^{-}_{i^{\prime }} \geq p^{-}_{i}\). Thus \(p^{+}_{i^{\prime }} \geq p^{-}_{i^{\prime }} \geq p^{-}_{i} = p^{+}_{i}\).
Case 2: \(p^{+}_{i} \neq p^{-}_{i}\). Then line 17 of Algorithm 1 implies i ∈ I0 and \(p^{+}_{i} = w(i_{2}, \ell ^+_{i_{2}})\). Since i ∈ I0, line 14 of Algorithm 1 implies there exists an oriented μ0-alternating path in G(ℓ+) from i1 to i. Since (i, j) ∈ E(ℓ+) and \(\mu _{0}(i^{\prime }) = j\), there exists an oriented μ0-alternating path in G(ℓ+) from i to \(i^{\prime }\). Hence there exists an oriented μ0-alternating path in G(ℓ+) from i1 to \(i^{\prime }\). So line 14 of Algorithm 1 implies \(i^{\prime } \in I_{0}\). Since \(i^{\prime } \in I_{0}\), line 17 of Algorithm 1 implies \(p^{+}_{i^{\prime }} \geq w(i_{2}, \ell ^+_{i_{2}})\). Thus \(p^{+}_{i^{\prime }} \geq w(i_{2}, \ell ^+_{i_{2}}) = p^{+}_{i}\).
3. Let \(i_{1}=i_{1}^{\prime },\ldots ,i_{s}^{\prime }=i_{2}\) denote the sequence of men on path π0. By part (1), we have \(p^{-}_{i_{t}^{\prime }} \leq p^{-}_{i_{t+1}^{\prime }}\) for 1 ≤ t < s. It follows that \(p^{-}_{i} \leq p^{-}_{i_{2}}\) for every man i on path π0. Since \({\mathscr{Q}}_3(\boldsymbol {\ell }^{-},\mathbf {p}^{-})\) holds, we have \(p^{-}_{i_{2}} \leq w(i_{2}, \ell ^-_{i_{2}}) \leq w(i_{2}, \ell ^+_{i_{2}})\). Thus \(p^{-}_{i} \leq w(i_{2}, \ell ^+_{i_{2}})\) for every man i on path π0. Since every man on path π0 belongs to I0, line 17 of Algorithm 1 implies that \(p^{+}_{i}=w(i_{2}, \ell ^+_{i_{2}})\) for every man i on path π0.
4. Let \(J^{\prime }\) denote the set of women who are matched in μ0. Line 18 of Algorithm 1 ensures that the set of women who are matched in μ+ is also \(J^{\prime }\). Moreover, by part (3), \(p^{+}_{\mu ^{+}(j)}=p^{+}_{\mu _{0}(j)}\) for every woman j in \(J^{\prime }\). Consequently, part (2) implies that \({\mathscr{Q}}_6(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\) holds.
□
Lemma 10
Let \(i, i^{\prime } \in I\) and j ∈ J be such that (i, j) ∈ E(ℓ+) and \(\mu ^{+}(i^{\prime }) = j\). Then \(p^{+}_{i} \leq p^{+}_{i^{\prime }}\).
Proof
If μ−(j0)≠ 0 or \(0 >_{j_{0}} i_{0}\), then part (4) of Lemma 9 implies that \(p^{+}_{i} \leq p^{+}_{i^{\prime }}\). For the remainder of the proof, assume that μ−(j0) = 0 and \(i_{0} \geq _{j_{0}} 0\). Thus μ+ = μ−∪{(i0, j0)}, p+ = p−, and part 3 of Lemma 4 implies E(ℓ+) = E(ℓ−) ∪{(i0, j0)}. We consider two cases.
Case 1: j≠j0. Since j≠j0 and μ+ is equal to μ−∪{(i0, j0)}, we have \(\mu ^{-}(j)=\mu ^{+}(j)=i^{\prime }\). Since (i, j) ∈ E(ℓ+) = E(ℓ−) ∪{(i0, j0)} and j≠j0, we have (i, j) ∈ E(ℓ−). Since (i, j) ∈ E(ℓ−), \(\mu ^{-}(i^{\prime })=j\), and \({\mathscr{Q}}_6(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) holds, we have \(p^{-}_{i} \leq p^{-}_{i^{\prime }}\). Since \(p^{-}_{i} \leq p^{-}_{i^{\prime }}\) and p+ = p−, we have \(p^{+}_{i} \leq p^{+}_{i^{\prime }}\).
Case 2: j = j0. Since μ−(j0) = 0 and \({\mathscr{Q}}_2(\boldsymbol {\ell }^{-},\mu ^{-})\) holds, we deduce that none of the edges in E(ℓ−) are incident on j0. Since j = j0 and none of the edges in E(ℓ−) are incident on j0, we have (i, j)∉E(ℓ−). Since (i, j)∉E(ℓ−) and (i, j) ∈ E(ℓ+) = E(ℓ−) ∪{(i0, j0)}, we have (i, j) = (i0, j0). Since μ+ = μ−∪{(i0, j0)}, we have μ+(j0) = i0. Since (i, j) = (i0, j0) and μ+(j0) = i0, we have \(i^{\prime }=\mu ^{+}(j)=\mu ^{+}(j_{0})=i_{0}=i\). Since \(i=i^{\prime }\) we have \(p^{+}_{i}=p^{+}_{i^{\prime }}\). □
4.4 Correctness of the Algorithm
Lemma 11
Consider the loop body of Algorithm 1. Let ℓ−, μ−, and p− denote the values of ℓ, μ, and p at the start of the iteration. Assume that the loop condition is satisfied, and that \({\mathscr{Q}}({\boldsymbol {\ell }^{-}},{\mu ^{-}},{\mathbf {p}^{-}})\) holds. Let ℓ+, μ+, and p+ denote the values of ℓ, μ, and p at the end of the iteration. Then \({\mathscr{Q}}(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\) holds.
Proof
Lemma 3 implies that \({\mathscr{Q}}_1(\boldsymbol {\ell }^{+})\) holds. Lemma 5 implies that \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\) holds. Lemma 6 implies that \({\mathscr{Q}}_{3}({\boldsymbol {\ell }^{+}},{\mathbf {p}^{+}})\) holds. Lemma 7 implies that \({\mathscr{Q}}_{4}({\boldsymbol {\ell }^{+}},{\mu ^{+}},{\mathbf {p}^{+}})\) holds. Lemma 8 implies that \({\mathscr{Q}}_{5}({\boldsymbol {\ell }^{+}},{\mathbf {p}^{+}})\) holds. Lemma 10 implies that \({\mathscr{Q}}_{6}({\boldsymbol {\ell }^{+}},{\mu ^{+}},{\mathbf {p}^{+}})\) holds. Thus \({\mathscr{Q}}({\boldsymbol {\ell }^{+}},{\mu ^{+}},{\mathbf {p}^{+}})\) holds. □
Lemma 12
Let ℓ, μ, and p be such that \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds. Suppose that for every i ∈ I, either μ(i)≠ 0 or 0 ≥iℓi. Then (μ, p) satisfies (P1)–(P4) with η = 0.
Proof
We begin by proving that (P1) holds. Let (i, j) ∈ μ. Since \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) holds, μ is a matching of G(ℓ). Since (i, j) ∈ μ and μ is a matching of G(ℓ), we have (i, j) ∈ E(ℓ). Since (i, j) ∈ E(ℓ), we have i ∈ Ij(ℓ) and i ≥j0. Since i ∈ Ij(ℓ), we have j >iℓi. Since \({\mathscr{Q}}_1(\boldsymbol {\ell })\) holds, we have ℓi ≥i0. Since j >iℓi and ℓi ≥i0, we have j >i0.
We now prove that (P2) holds. Let i ∈ I be a man and j ∈ J be a woman such that j ≥iμ(i) and i ≥j0. We prove that i ∈ Ij(ℓ) by considering two cases.
Case 1: μ(i) = 0. Then 0 ≥iℓi. Since j ∈ J and j ≥iμ(i) = 0, we have j >i0. Since j >i0 ≥iℓi, we have i ∈ Ij(ℓ).
Case 2: μ(i)≠ 0. Since \((i, \mu (i)) \in \mu \subseteq E(\boldsymbol {\ell })\), we have \(i \in I_{\mu (i)}(\boldsymbol {\ell })\), and hence μ(i) >iℓi. Since j ≥iμ(i) >iℓi, we have i ∈ Ij(ℓ).
Having established that i ∈ Ij(ℓ), we now complete the proof that (P2) holds. Since i ∈ Ij(ℓ) and i ≥j0, condition \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) implies μ(j)≠ 0. Since (μ(j),j) ∈ E(ℓ) and i ∈ Ij(ℓ), the definition of E(ℓ) implies μ(j) ≥ji.
We now prove that (P3) holds. Let i ∈ I be a man. We consider two cases.
Case 1: μ(i) = 0. Then 0 ≥iℓi. Since \({\mathscr{Q}}_1(\boldsymbol {\ell })\) holds, we have ℓi ≥i0. Since 0 ≥iℓi and ℓi ≥i0, we have ℓi = 0. Since ℓi = 0, we have \(w(i, \ell _{i}) = 1 - y_{i,\ell _{i}} = 1\). Since μ(i) = 0, w(i, ℓi) = 1, and \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds, we have pi = 1.
Case 2: μ(i)≠ 0. Let j denote μ(i). Since \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) holds, μ is a matching of G(ℓ). Since μ(i) = j and μ is a matching of G(ℓ), we have (i, j) ∈ E(ℓ). Since (i, j) ∈ E(ℓ), we have i ∈ Ij(ℓ) and hence j >iℓi. Since j >iℓi and \({\mathscr{Q}}_5(\boldsymbol {\ell },\mathbf {p})\) holds, we have pi ≥ w(i, j) = 1 − yi, j. It remains to argue that pi ≤ 1. Since constraint (C1) holds, we have w(i, ℓi) ≤ 1. Since w(i, ℓi) ≤ 1 and \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\) holds, we have pi ≤ 1.
It remains to prove that (P4) holds. Let i ∈ I be a man and j ∈ J be a woman such that j ≥i0, i ≥j0, and pi > 1 − yi, j. Since pi > 1 − yi, j = w(i, j) and \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\) holds, we have j >iℓi and hence i ∈ Ij(ℓ). Since i ∈ Ij(ℓ), i ≥j0, and \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) holds, we know that μ is a matching of G(ℓ) with μ(j)≠ 0. Let \(i^{\prime }\in I\) denote μ(j). Since μ is a matching of G(ℓ) and \((i^{\prime },j)\) belongs to μ, we have \((i^{\prime },j)\in E(\boldsymbol {\ell })\). Since \((i^{\prime },j)\in E(\boldsymbol {\ell })\) and i ∈ Ij(ℓ), the definition of E(ℓ) implies that \(i^{\prime }\geq _{j} i\). It remains to prove that if \(i^{\prime }=_{j}i\) then \(p_{i} \leq p_{i^{\prime }}\). Assume \(i^{\prime }=_{j} i\). Since \((i^{\prime },j)\in E(\boldsymbol {\ell })\), i ∈ Ij(ℓ), and \(i^{\prime }=_{j} i\), the definition of E(ℓ) implies that (i, j) ∈ E(ℓ). Since (i, j) ∈ E(ℓ), \(\mu (i^{\prime })=j\), and \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds, we have \(p_{i} \leq p_{i^{\prime }}\). □
Lemma 13
When Algorithm 1 terminates, (μ, p) satisfies (P1)–(P4) with η = 0.
Proof
It is straightforward to verify that \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds before the first iteration of the algorithm. So, by Lemma 11 and induction on the number of iterations, \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds when the algorithm terminates. Moreover, line 6 implies that for every i ∈ I, we have μ(i)≠ 0 or 0 ≥iℓi when the algorithm terminates. Hence Lemma 12 implies that (μ, p) satisfies (P1)–(P4) with η = 0 when the algorithm terminates. □
Lemma 14
Let μ be a matching such that (μ, p) satisfies (P1) and (P2) for some p. Then μ is a weakly stable matching.
Proof
Since (P1) holds, μ is individually rational. To establish weak stability of μ, consider (i, j) ∈ I × J. It suffices to show that (i, j) is not a strongly blocking pair. For the sake of contradiction, suppose j >iμ(i) and i >jμ(j). If 0 >ji, then 0 >ji >jμ(j), contradicting the individual rationality of μ. If i ≥j0, then since j >iμ(i), i ≥j0, and (P2) holds, we deduce that μ(j) ≥ji, contradicting the assumption that i >jμ(j). □
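The strongly-blocking-pair condition used in this proof is easy to state in code. The sketch below is our illustration: preferences are given as rank dictionaries (lower rank = more preferred, agents in a tie share a rank, and unacceptable partners are simply absent), which is a representation we choose here, not the paper's.

```python
def is_weakly_stable(mu_m, mu_w, rank_m, rank_w):
    """True iff no pair (i, j) strongly blocks: j >_i mu(i) and i >_j mu(j).
    mu_m maps men to partners (None if unmatched); mu_w is the reverse map.
    rank_m[i][j] is man i's rank of woman j (absent = unacceptable)."""
    def strictly_prefers(rank, agent, a, b):
        # does `agent` strictly prefer a to b? (b may be None = unmatched)
        ra = rank[agent].get(a)
        if ra is None:
            return False         # a is unacceptable, so cannot be preferred
        if b is None:
            return True          # any acceptable partner beats being unmatched
        rb = rank[agent].get(b)
        return rb is None or ra < rb
    return not any(
        strictly_prefers(rank_m, i, j, mu_m[i]) and
        strictly_prefers(rank_w, j, i, mu_w[j])
        for i in rank_m for j in rank_w
    )

# One-sided tie example: j1 is indifferent between i1 and i2, so matching
# i1 with j1 and leaving i2 unmatched is weakly stable (no strong block).
rank_m = {"i1": {"j1": 1}, "i2": {"j1": 1}}
rank_w = {"j1": {"i1": 1, "i2": 1}}
print(is_weakly_stable({"i1": "j1", "i2": None}, {"j1": "i1"}, rank_m, rank_w))
```

If j1 instead strictly preferred i2 to i1, the pair (i2, j1) would strongly block the same matching and the check would return False.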
4.5 An Alternative Implementation
In this subsection, we present a more succinct alternative algorithm that does not maintain a priority vector p. This alternative algorithm, which we refer to as Algorithm 2, is implemented with weighted matchings.
Let us define the weight of any edge (i, j) ∈ E(ℓ) as w(i, ℓi). Also we define the weight of \(\mu \subseteq E(\boldsymbol {\ell })\) as the total weight of all the edges in μ. We use the abbreviations MCM and MWMCM to denote the terms maximum-cardinality matching and maximum-weight MCM, respectively.
Re-examining Algorithm 1 with the foregoing notions in mind, it is natural to conjecture that the predicate “μ is an MWMCM of G(ℓ)” is an invariant of the Algorithm 1 loop. Certainly, the straightforward update performed in line 10 is consistent with this conjecture. Moreover, the more complex update performed in lines 12 to 18 chooses to unmatch a vertex i2 in I0 such that \(w({i_{2}},{\ell _{i_{2}}})=\min \limits _{i \in I_{0}} w({i},{\ell _{i}})\); accordingly, no other choice of the unmatched vertex in I0 results in a matching of higher weight. In Lemma 17 below, we establish that the above conjecture does in fact hold.
For iterations of the Algorithm 1 loop where G(ℓ) admits multiple MWMCMs, it is natural to ask whether the predicate \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) continues to be an invariant if we modify Algorithm 1 so that it sets μ to an arbitrary MWMCM of G(ℓ) at the end of each iteration. We answer this question in the affirmative in Lemma 19 below. It is then straightforward to establish the correctness (see Lemma 21) of Algorithm 2 below, which iteratively updates ℓ and computes an MWMCM of G(ℓ).
While Algorithm 2 is simpler to state than Algorithm 1, we emphasize that for an efficient implementation, it is useful to proceed as in Algorithm 1, since at each iteration we can (1) use the priority vector to efficiently update the MWMCM, and (2) efficiently update the priority vector. Our primary interest in presenting Algorithm 2 is to demonstrate that Algorithm 1 admits a succinct interpretation in terms of weighted matchings.
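On tiny instances, the MWMCM notion can be realized by brute force, which is handy for sanity-checking an implementation of either algorithm. The sketch below enumerates all assignments and ranks them by the lexicographic key (cardinality, weight), mirroring the MWMCM definition; the instance data at the bottom are hypothetical.

```python
from itertools import permutations

def mwmcm(men, women, weight):
    """Brute-force maximum-weight maximum-cardinality matching (tiny inputs).
    weight[(i, j)] exists exactly for the edges of G(l); per the text, the
    weight of edge (i, j) is w(i, l_i), here supplied simply as a number."""
    best, best_key = [], (-1, float("-inf"))
    pool = list(women) + [None] * len(men)   # None = man left unmatched
    for assignment in permutations(pool, len(men)):
        m = [(i, j) for i, j in zip(men, assignment)
             if j is not None and (i, j) in weight]
        key = (len(m), sum(weight[e] for e in m))
        if key > best_key:                   # cardinality first, then weight
            best_key, best = key, m
    return sorted(best)

# Hypothetical instance: the heaviest edge (i2, j1) is NOT in the answer,
# because maximum cardinality takes precedence over weight.
weight = {("i1", "j1"): 1.0, ("i2", "j1"): 2.0, ("i2", "j2"): 1.0}
print(mwmcm(["i1", "i2"], ["j1", "j2"], weight))  # [('i1', 'j1'), ('i2', 'j2')]
```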
Lemma 15
Let ℓ, μ, and p satisfy \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\), let \(i,i^{\prime }\in I\), and let π be an oriented μ-alternating path in G(ℓ) from i to \(i^{\prime }\). Then \(p_{i} \leq p_{i^{\prime }}\).
Proof
If \(i=i^{\prime }\) then \(p_{i}=p_{i^{\prime }}\), so we can assume that \(i \neq i^{\prime }\). Let \(i=i_{1},i_{2},\ldots ,i_{k}=i^{\prime }\) denote the sequence of k > 1 men appearing on path π. Since \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds and π is an oriented μ-alternating path in G(ℓ) from i to \(i^{\prime }\), we deduce that \(p_{i_{j}}\leq p_{i_{j+1}}\) for all j such that 1 ≤ j < k. Hence \(p_{i}=p_{i_{1}}\leq p_{i_{k}}=p_{i^{\prime }}\). □
Lemma 16
Let ℓ, μ, and p satisfy \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\), \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\), \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\), and \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\). Then μ is an MWMCM of G(ℓ).
Proof
Since \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) holds, μ is an MCM of G(ℓ). Let \(\mu ^{\prime }\) be an MWMCM of G(ℓ). Since \(\mu ^{\prime }\) is an MCM of G(ℓ), \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) implies that μ and \(\mu ^{\prime }\) match the same set of women. Thus \(\mu \oplus \mu ^{\prime }\) corresponds to a collection \({\mathscr{X}}\) of cycles (of positive even length) and man-to-man paths (of positive even length). For any cycle γ in \({\mathscr{X}}\), the edges of μ on γ match the same set of men as the edges of \(\mu ^{\prime }\) on γ. Thus the total weight (in G(ℓ)) of the edges of μ on γ is equal to the total weight of the edges of \(\mu ^{\prime }\) on γ.
Now consider a man-to-man path π in \({\mathscr{X}}\). Let the endpoints of π be i and \(i^{\prime }\), where i is matched in μ and not in \(\mu ^{\prime }\), and \(i^{\prime }\) is matched in \(\mu ^{\prime }\) and not in μ. Since \(\mu ^{\prime }\) is an MWMCM of G(ℓ), and since \(\mu ^{\prime } \oplus \pi \) is an MCM of G(ℓ), we deduce that \(w(i, \ell _{i}) \leq w(i^{\prime }, \ell _{i^{\prime }})\). Since \(\mu (i^{\prime })=0\) and \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds, we have \(p_{i^{\prime }}=w(i^{\prime }, \ell _{i^{\prime }})\). Since \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds and π is an oriented μ-alternating path in G(ℓ) from \(i^{\prime }\) to i, Lemma 15 implies that \(p_{i} \geq p_{i^{\prime }}\). Since \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\) holds, we have pi ≤ w(i, ℓi). Since \(p_{i} \geq p_{i^{\prime }}\) and \(p_{i} \leq w(i, \ell _{i}) \leq w(i^{\prime }, \ell _{i^{\prime }})=p_{i^{\prime }}\), we deduce that \(p_{i}=w(i, \ell _{i})=w(i^{\prime }, \ell _{i^{\prime }})=p_{i^{\prime }}\). Thus the total weight (in G(ℓ)) of the edges of μ on π is equal to the total weight of the edges of \(\mu ^{\prime }\) on π.
The foregoing analysis of the cycles and paths in \({\mathscr{X}}\) implies that the weight of μ is equal to that of \(\mu ^{\prime }\), and hence that μ is an MWMCM of G(ℓ). □
Lemma 17
An invariant of the Algorithm 1 loop is that μ is an MWMCM of G(ℓ).
Proof
It is easy to check that μ is an MWMCM of G(ℓ) when the Algorithm 1 loop is first encountered. Hence the claim of the lemma follows by Lemmas 11 and 16. □
Lemma 18
Let ℓ, μ, and p satisfy \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\), \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\), \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\), and \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\), and let \(\mu ^{\prime }\) be an MWMCM of G(ℓ). Then \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu ^{\prime })\), \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ^{\prime },\mathbf {p})\), and \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ^{\prime },\mathbf {p})\) hold.
Proof
Lemma 16 implies that μ is an MWMCM of G(ℓ). Let \(J^{\prime }\) denote the set of women with nonzero degree in G(ℓ). Since \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu )\) holds, the set of women matched by μ is \(J^{\prime }\). Since \(\mu ^{\prime }\) is an MCM, we deduce that the set of women matched by \(\mu ^{\prime }\) is also \(J^{\prime }\), and hence that \({\mathscr{Q}}_2(\boldsymbol {\ell },\mu ^{\prime })\) holds. Thus \(\mu \oplus \mu ^{\prime }\) corresponds to a collection \({\mathscr{X}}\) of cycles (of positive even length) and man-to-man paths (of positive even length).
Consider a cycle γ in \({\mathscr{X}}\). Since \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds and there is an oriented μ-alternating path in G(ℓ) from i to \(i^{\prime }\) for every pair of men i and \(i^{\prime }\) on γ, Lemma 15 implies that \(p_{i}=p_{i^{\prime }}\) for all men i and \(i^{\prime }\) on γ.
Consider a path π in \({\mathscr{X}}\). Let the endpoints of π be i and \(i^{\prime }\), where i is matched in μ and not in \(\mu ^{\prime }\), and \(i^{\prime }\) is matched in \(\mu ^{\prime }\) and not in μ. Since \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds and π is an oriented μ-alternating path in G(ℓ) from \(i^{\prime }\) to i, there are oriented μ-alternating paths in G(ℓ) from \(i^{\prime }\) to \(i^{\prime \prime }\) and from \(i^{\prime \prime }\) to i for every man \(i^{\prime \prime }\) on π. Thus Lemma 15 implies that \(p_{i^{\prime }}\leq p_{i^{\prime \prime }}\leq p_{i}\) for every man \(i^{\prime \prime }\) on π. Since μ and \(\mu ^{\prime }\) are each MWMCMs, and μ ⊕ π and \(\mu ^{\prime }\oplus \pi \) are MCMs of G(ℓ), we deduce that \(w(i, \ell _{i})=w(i^{\prime }, \ell _{i^{\prime }})\). Since \({\mathscr{Q}}_4(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds, we have \(p_{i^{\prime }}=w(i^{\prime }, \ell _{i^{\prime }})\). Since \({\mathscr{Q}}_3(\boldsymbol {\ell },\mathbf {p})\) holds, we have pi ≤ w(i, ℓi). Since \(p_{i} \leq w(i, \ell _{i})=w(i^{\prime }, \ell _{i^{\prime }})=p_{i^{\prime }}\leq p_{i}\), we deduce that \(p_{i}=w(i, \ell _{i})=p_{i^{\prime }}\). Since pi = w(i, ℓi), we conclude that \({\mathscr{Q}}_{4}({\boldsymbol {\ell }},{\mu ^{\prime }},{\mathbf {p}})\) holds. Since \(p_{i}=p_{i^{\prime }}\) and \(p_{i^{\prime }}\leq p_{i^{\prime \prime }}\leq p_{i}\) for every man \(i^{\prime \prime }\) on π, we deduce that \(p_{i}=p_{i^{\prime \prime }}\) for every man \(i^{\prime \prime }\) on π.
The foregoing analysis of the cycles and paths in \({\mathscr{X}}\) implies that \(p_{\mu (j)}=p_{\mu ^{\prime }(j)}\) for every woman j in \(J^{\prime }\). Since \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds, we find that \({\mathscr{Q}}_6(\boldsymbol {\ell },\mu ^{\prime },\mathbf {p})\) holds. □
We now use our results concerning Algorithm 1 to reason about Algorithm 2. To do this, it is convenient to introduce a hybrid algorithm, which we define by modifying Algorithm 1 as follows: At the end of each iteration of the while loop, update the matching μ to an arbitrary MWMCM of G(ℓ).
Lemma 19
Consider the loop body of the hybrid algorithm. Let ℓ−, μ−, and p− denote the values of ℓ, μ, and p at the start of the iteration. Assume that the loop condition is satisfied, and that \({\mathscr{Q}}(\boldsymbol {\ell }^{-},\mu ^{-},\mathbf {p}^{-})\) holds. Let ℓ+, μ+, and p+ denote the values of ℓ, μ, and p at the end of the iteration. Then \({\mathscr{Q}}(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\) holds.
Proof
Lemma 11 implies that \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds just before μ is updated to an arbitrary MWMCM of G(ℓ). Lemma 16 implies that μ is an MWMCM of G(ℓ) at this point in the execution. Thus Lemma 18 implies that \({\mathscr{Q}}_2(\boldsymbol {\ell }^{+},\mu ^{+})\), \({\mathscr{Q}}_4(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\), and \({\mathscr{Q}}_6(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\) hold. Since \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds just before μ is updated to an arbitrary MWMCM of G(ℓ), we conclude that \({\mathscr{Q}}_1(\boldsymbol {\ell }^{+})\), \({\mathscr{Q}}_3(\boldsymbol {\ell }^{+},\mathbf {p}^{+})\), and \({\mathscr{Q}}_5(\boldsymbol {\ell }^{+},\mathbf {p}^{+})\) hold. Hence \({\mathscr{Q}}(\boldsymbol {\ell }^{+},\mu ^{+},\mathbf {p}^{+})\) holds, as required. □
The converse of the following lemma also holds, but we only need the stated direction.
Lemma 20
Fix an execution of Algorithm 2, and let T denote the number of times the body of the loop is executed. For 0 ≤ t ≤ T, let ℓ(t) and μ(t) denote the values of the corresponding program variables after t iterations of the loop. Then there is a T-iteration execution of the hybrid algorithm such that, for 0 ≤ t ≤ T, the program variables ℓ and μ are equal to ℓ(t) and μ(t), respectively, after t iterations of the loop.
Proof
Observe that Algorithm 2 and the hybrid algorithm are equivalent in terms of their initialization of ℓ and μ, and also in terms of the set of possible updates to ℓ and μ associated with any given iteration. (While the hybrid algorithm also maintains a priority vector p, this priority vector has no influence on the overall update applied to ℓ and μ in a given iteration.) Given this observation, the claim of the lemma is straightforward to prove by induction on t. □
Lemma 21
When Algorithm 2 terminates, there exists p such that (μ, p) satisfies (P1)–(P4) with η = 0.
Proof
Fix an execution of Algorithm 2, and let ℓ∗ and μ∗ denote the final values of ℓ and μ. Lemma 20 implies that there exists an execution of the hybrid algorithm with the same final values of ℓ and μ. Fix such an execution of the hybrid algorithm, and let p∗ denote the final value of p. It is straightforward to verify that \({\mathscr{Q}}(\boldsymbol {\ell },\mu ,\mathbf {p})\) holds the first time the loop is reached in this execution of the hybrid algorithm. Thus, Lemma 19 implies that \({\mathscr{Q}}({\boldsymbol {\ell }^{*}},{\mu ^{*}},{\mathbf {p}^{*}})\) holds. Moreover, line 6 implies that for every i ∈ I, we have μ∗(i)≠ 0 or \(0 \geq _{i} \ell ^{*}_{i}\). Hence Lemma 12 implies that (μ∗, p∗) satisfies (P1)–(P4) with η = 0. □
5 Analysis of the Approximation Ratio
In this section, we analyze the approximation ratio and the integrality gap. Our analysis applies to Algorithms 1 and 2 alike. Throughout this section, whenever we mention x and μ, we are referring to their values when the algorithm terminates. Given x, we let the auxiliary variables {yi, j}(i, j)∈I×(J∪{0}) and {zi, j}(i, j)∈(I∪{0})×J be defined as in Section 3. By Lemmas 13 and 21, there exists p such that (μ, p) satisfies (P1)–(P4) with η = 0. We fix such a priority vector p throughout this section.
In Section 5.1, we describe a charging scheme which covers the value of the linear programming solution. In Section 5.2, we bound the charge incurred by each matched man-woman pair. In Section 5.3, we show that the approximation ratio is at most \(1 + (1 - \frac {1}{L})^{L}\).
5.1 The Charging Argument
Our charging argument is based on an exchange function \(h \colon [0, 1] \times [0, 1] \to \mathbb {R}\) which satisfies the following properties.
-
(H1)
For every ξ1, ξ2 ∈ [0,1], we have 0 = h(0,ξ2) ≤ h(ξ1, ξ2) ≤ 1.
-
(H2)
For every ξ1, ξ2 ∈ [0,1] such that ξ1 > ξ2, we have h(ξ1, ξ2) = 1.
-
(H3)
The function h(ξ1, ξ2) is non-decreasing in ξ1 and non-increasing in ξ2.
-
(H4)
For every ξ1, ξ2 ∈ [0,1], we have
$$ L \cdot \displaystyle{\int}_{\xi_{2} \cdot (1 - 1/L)}^{\xi_{2}} \Big(1 - h(\xi_{1}, \xi) \Big) d \xi \leq \max(\xi_{2} - \xi_{1}, 0). $$
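Requirement (H4) is the one that actually constrains the choice of h: for example, the trivial step function defined by \(h(\xi_{1}, \xi_{2}) = 1\) if ξ1 > ξ2 and 0 otherwise satisfies (H1)–(H3) but violates (H4). The following Python sketch (an illustrative numerical check only, not part of the formal development) exhibits such a violation for L = 2:

```python
# Illustrative check: the step function 1[xi1 > xi2] satisfies (H1)-(H3)
# but violates (H4) when L = 2.

def h_step(xi1, xi2):
    # The trivial exchange function: 1 above the diagonal, 0 elsewhere.
    return 1.0 if xi1 > xi2 else 0.0

def lhs_H4(h, xi1, xi2, L, n=10000):
    # Midpoint-rule approximation of the left-hand side of (H4):
    # L times the integral of (1 - h(xi1, .)) over [xi2 * (1 - 1/L), xi2].
    lo, hi = xi2 * (1 - 1 / L), xi2
    step = (hi - lo) / n
    return L * step * sum(1 - h(xi1, lo + (t + 0.5) * step) for t in range(n))

L = 2
xi1, xi2 = 0.2, 0.9
# (H4) would require the left-hand side to be at most xi2 - xi1 = 0.7,
# but for the step function it evaluates to 0.9.
assert lhs_H4(h_step, xi1, xi2, L) > max(xi2 - xi1, 0.0) + 0.1
```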
Given an exchange function h that satisfies (H1)–(H4), our charging argument is as follows. For every (i, j) ∈ I × J, we assign to man i a charge of
$$ \theta_{i, j} = \displaystyle{\int}_{0}^{x_{i, j}} h(1 - p_{i}, y_{i,j} - \xi) d \xi $$
and to woman j a charge of
$$ \phi_{i, j} = \begin{cases} x_{i, j} - \displaystyle{\int}_{0}^{x_{i, j}} h(1 - p_{\mu(j)}, 1 - z_{\mu(j),j} - \xi) d \xi & \text{if } \mu(j) \neq 0 \text{ and } \mu(j) =_{j} i, \\ x_{i, j} & \text{if } \mu(j) \neq 0 \text{ and } \mu(j) >_{j} i, \\ 0 & \text{otherwise.} \end{cases} $$
The following lemma shows that the charges are non-negative and cover the value of an optimal solution to the linear program. We prove this using the stability constraint in the linear program and the tie-breaking criterion of our algorithm.
Lemma 22
Let i ∈ I and j ∈ J. Then 𝜃i, j and ϕi, j satisfy the following conditions.
-
1.
𝜃i, j ≥ 0 and ϕi, j ≥ 0.
-
2.
xi, j ≤ 𝜃i, j + ϕi, j.
Proof
-
1.
The definition of 𝜃i, j implies
$$ \theta_{i, j} = \displaystyle{\int}_{0}^{x_{i, j}} h(1 - p_{i}, y_{i,j} - \xi) d \xi \geq 0, $$where the inequality follows from (H1). Also, the definition of ϕi, j implies
$$ \begin{array}{@{}rcl@{}} \phi_{i, j} &\geq{} & \min \Big(0, x_{i, j}, x_{i, j} - \displaystyle{\int}_{0}^{x_{i, j}} h(1 - p_{\mu(j)}, 1 - z_{\mu(j),j} - \xi) d \xi \Big) \\ &\geq{} & \min \Big(0, x_{i, j}, x_{i, j} - \displaystyle{\int}_{0}^{x_{i, j}} 1 d \xi \Big) \\ &={} & 0, \end{array} $$where the second inequality follows from (H1).
-
2.
We consider two cases.
Case 1: yi, j ≤ 1 − pi. Then (H3) implies
$$ \begin{array}{@{}rcl@{}} 0 &\leq{} & \displaystyle{\int}_{0}^{x_{i, j}} \Big(h(1 - p_{i}, y_{i,j} - \xi) - h(1 - p_{i}, 1 - p_{i} - \xi) \Big) d \xi \\ &={} & \displaystyle{\int}_{0}^{x_{i, j}} \Big(h(1 - p_{i}, y_{i,j} - \xi) - 1 \Big) d \xi \\ &={} & \theta_{i, j} - x_{i, j} \\ &\leq{} & \theta_{i, j} + \phi_{i, j} - x_{i, j}, \end{array} $$where the first equality follows from (H2), the second equality follows from the definition of 𝜃i, j, and the last inequality follows from part 1.
Case 2: yi, j > 1 − pi. We may assume that xi, j≠ 0, for otherwise part 1 implies 𝜃i, j + ϕi, j ≥ 0 = xi, j. Since xi, j≠ 0, constraint (C4) implies \(j \geq_{i} 0\) and \(i \geq_{j} 0\). So (P4) with η = 0 implies μ(j)≠ 0 and \(\mu(j) \geq_{j} i\). We consider two subcases.
Case 2.1: \(\mu(j) >_{j} i\). Then the definition of ϕi, j implies
$$ 0 = \phi_{i, j} - x_{i, j} \leq \theta_{i, j} + \phi_{i, j} - x_{i, j}, $$where the inequality follows from part 1.
Case 2.2: \(\mu(j) =_{j} i\). Then (P4) with η = 0 implies pi ≤ pμ(j). Also, since \(\mu(j) =_{j} i\), parts 4 and 5 of Lemma 2 imply zμ(j),j = zi, j ≤ 1 − yi, j. Since pi ≤ pμ(j) and yi, j ≤ 1 − zμ(j),j, (H3) implies
$$ \begin{array}{@{}rcl@{}} 0 \leq \displaystyle{\int}_{0}^{x_{i, j}} \Big(h(1 - p_{i}, y_{i,j} - \xi) - h(1 - p_{\mu(j)}, 1 - z_{\mu(j),j} - \xi) \Big) d \xi = \theta_{i, j} + \phi_{i, j} - x_{i, j}, \end{array} $$where the equality follows from the definitions of 𝜃i, j and ϕi, j.
□
5.2 Bounding the Charges
To bound the approximation ratio, Lemma 22 implies that it is sufficient to bound the charges. In Lemma 23, we derive an upper bound for the charges incurred by a man using the strict ordering given by his preferences. In Lemma 24, we derive an upper bound for the charges incurred by a woman due to indifferences using the constraint on the tie length. In Lemma 25, we derive an upper bound for the total charges incurred by a matched couple by combining the results of Lemmas 23 and 24.
Lemma 23
Let i ∈ I be a man. Then
$$ \sum\limits_{j \in J} \theta_{i, j} \leq \displaystyle{\int}_{0}^{1} h(1 - p_{i}, \xi) d \xi. $$
Proof
Let \(j_{1}, \dots , j_{\lvert {J}\rvert } \in J\) such that \(j_{\lvert {J}\rvert } >_{i} j_{\lvert {J}\rvert -1} >_{i} {\cdots } >_{i} j_{1}\). Then part 2 of Lemma 2 implies \(y_{i,j_{\lvert {J}\rvert }} \leq 1\). Also, parts 2 and 3 of Lemma 2 imply
$$ y_{i,j_{k}} - x_{i,j_{k}} \geq y_{i,j_{k-1}} $$(1)
for every \(1 \leq k \leq \lvert {J}\rvert \), where \(y_{i,j_{0}}\) denotes 0. Hence the definitions of \(\{ \theta _{i, j_{k}} \}_{1 \leq k \leq \lvert {J}\rvert }\) imply
$$ \theta_{i, j_{k}} = \displaystyle{\int}_{y_{i,j_{k}} - x_{i, j_{k}}}^{y_{i,j_{k}}} h(1 - p_{i}, \xi) d \xi \leq \displaystyle{\int}_{y_{i,j_{k-1}}}^{y_{i,j_{k}}} h(1 - p_{i}, \xi) d \xi $$
for every \(1 \leq k \leq \lvert {J}\rvert \), where the inequality follows from (1) and (H1). Thus
$$ \sum\limits_{j \in J} \theta_{i, j} = \sum\limits_{1 \leq k \leq \lvert {J}\rvert} \theta_{i, j_{k}} \leq \displaystyle{\int}_{0}^{y_{i,j_{\lvert {J}\rvert}}} h(1 - p_{i}, \xi) d \xi \leq \displaystyle{\int}_{0}^{1} h(1 - p_{i}, \xi) d \xi, $$
where the last inequality follows from \(y_{i,j_{\lvert {J}\rvert }} \leq 1\) and (H1). □
Lemma 24
Let j ∈ J be a woman such that μ(j)≠ 0. Then
$$ \sum\limits_{\underset{\mu(j) =_{j} i}{i \in I}} \phi_{i, j} \leq \max(p_{\mu(j)} - z_{\mu(j),j}, 0). $$
Proof
Let
$$ H(\xi^{\prime}) = \displaystyle{\int}_{0}^{\xi^{\prime}} \Big(1 - h(1 - p_{\mu(j)}, 1 - z_{\mu(j),j} - \xi) \Big) d \xi $$
for every \(\xi ^{\prime } \in [0, 1]\). Then (H1) and (H3) imply that H is concave and non-decreasing. Also, (H4) implies
$$ L \cdot H\Big(\frac{1 - z_{\mu(j),j}}{L}\Big) \leq \max(p_{\mu(j)} - z_{\mu(j),j}, 0). $$(2)
Let \(I^{\prime } = \{ i \in I \colon \mu (j) =_{j} i\}\). Then \(\lvert {I^{\prime }}\rvert \leq L\) since L is the maximum tie-length. Let \(i_{1}, \dots , i_{\lvert {I^{\prime }}\rvert } \in I\) such that \(I^{\prime } = \{i_{1}, \dots , i_{\lvert {I^{\prime }}\rvert } \}\). Let
$$ \xi_{k} = \begin{cases} x_{i_{k}, j} & \text{if } 1 \leq k \leq \lvert I^{\prime} \rvert, \\ 0 & \text{if } \lvert I^{\prime} \rvert < k \leq L \end{cases} $$
for every 1 ≤ k ≤ L. Then the definition of zμ(j),j implies
$$ 1 - z_{\mu(j),j} = 1 - \sum\limits_{\underset{\mu(j) >_{j} i}{i \in I}} x_{i, j} \geq \sum\limits_{\underset{\mu(j) =_{j} i}{i \in I}} x_{i, j} = \sum\limits_{1 \leq k \leq \lvert I^{\prime} \rvert} x_{i_{k}, j} = \sum\limits_{1 \leq k \leq L} \xi_{k}, $$
where the first inequality follows from constraint (C2), the second equality follows from the definitions of \(I^{\prime }\) and \(\{ i_{k} \}_{1 \leq k \leq \lvert {I^{\prime }}\rvert }\), and the third equality follows from the definitions of {ξk}1≤k≤L. Hence the monotonicity and concavity of H imply
$$ \sum\limits_{1 \leq k \leq L} H(\xi_{k}) \leq L \cdot H\Big(\frac{1}{L} \sum\limits_{1 \leq k \leq L} \xi_{k} \Big) \leq L \cdot H\Big(\frac{1 - z_{\mu(j),j}}{L}\Big). $$(3)
Thus the definitions of {ϕi, j}i∈I imply
$$ \begin{array}{@{}rcl@{}} \sum\limits_{\underset{\mu(j) =_{j} i}{i \in I}} \phi_{i, j} &={} & \sum\limits_{i \in I^{\prime}} \phi_{i, j} \\ &={} & \sum\limits_{i \in I^{\prime}} \Big(x_{i, j} - \displaystyle{\int}_{0}^{x_{i, j}} h(1 - p_{\mu(j)}, 1 - z_{\mu(j),j} - \xi) d \xi \Big) \\ &={} & \sum\limits_{i \in I^{\prime}} H(x_{i, j}) \\ &={} & \sum\limits_{1 \leq k \leq \lvert I^{\prime} \rvert} H(x_{i_{k}, j}) \\ &={} & \sum\limits_{1 \leq k \leq L} H(\xi_{k}) \\ &\leq{} & L \cdot H\Big(\frac{1 - z_{\mu(j),j}}{L}\Big) \\ &\leq{} & \max(p_{\mu(j)} - z_{\mu(j),j}, 0), \end{array} $$
where the third equality follows from the definition of H, the fourth equality follows from the definitions of \(I^{\prime }\) and \(\{ i_{k} \}_{1 \leq k \leq \lvert {I^{\prime }}\rvert }\), the fifth equality follows from the definitions of {ξk}1≤k≤L, the first inequality follows from (3), and the second inequality follows from (2). □
Lemma 25
Let i ∈ I and j ∈ J ∪{0} such that μ(i) = j. Then the following conditions hold.
-
1.
If j≠ 0, then
$$ \sum\limits_{j^{\prime} \in J} \theta_{i, j^{\prime}} + \sum\limits_{i^{\prime} \in I} \phi_{i^{\prime}, j} \leq 1 + \displaystyle{\int}_{1 - p_{i}}^{1} h(1 - p_{i}, \xi) d \xi. $$ -
2.
If j = 0, then \(\theta _{i, j^{\prime }} = 0\) for every \(j^{\prime } \in J\).
Proof
-
1.
Suppose j≠ 0. Then (P1) implies \(j \geq_{i} 0\) and \(i \geq_{j} 0\). So part 5 of Lemma 2 implies
$$ z_{i,j} \leq 1 - y_{i,j} \leq p_{i}, $$where the second inequality follows from (P3) with η = 0. So the definitions of \(\{\phi _{i^{\prime }, j}\}_{i^{\prime } \in I}\) imply
$$ \sum\limits_{i^{\prime} \in I} \phi_{i^{\prime}, j} = \sum\limits_{\underset{\mu(j) =_{j} i^{\prime}}{i^{\prime} \in I}} \phi_{i^{\prime}, j} + {\sum}_{\underset{\mu(j) >_{j} i^{\prime}}{i^{\prime} \in I}} x_{i^{\prime}, j} \leq \max(p_{i} - z_{i,j}, 0) + z_{i,j} = p_{i}, $$(4)where the first inequality follows from Lemma 24 and the definition of zi, j, and the last equality follows from pi ≥ zi, j. Also, by Lemma 23, we have
$$ \begin{array}{@{}rcl@{}} \sum\limits_{j^{\prime} \in J} \theta_{i, j^{\prime}} &\leq{} & {\displaystyle{\int}_{0}^{1}} h(1 - p_{i}, \xi) d \xi \\ &={} & \displaystyle{\int}_{0}^{1 - p_{i}} h(1 - p_{i}, \xi) d \xi + \displaystyle{\int}_{1 - p_{i}}^{1} h(1 - p_{i}, \xi) d \xi \\ &={} & \displaystyle{\int}_{0}^{1 - p_{i}} 1 d \xi + \displaystyle{\int}_{1 - p_{i}}^{1} h(1 - p_{i}, \xi) d \xi \\ &={} & 1 - p_{i} + \displaystyle{\int}_{1 - p_{i}}^{1} h(1 - p_{i}, \xi) d \xi, \end{array} $$(5)where the second equality follows from (H2). Combining (4) and (5) gives the desired inequality.
-
2.
Suppose j = 0. Let \(j^{\prime } \in J\). Since μ(i) = j = 0, (P3) with η = 0 implies
$$ 1 \geq p_{i} \geq 1 - y_{i,j} = 1 - y_{i,0} = 1, $$where the last equality follows from part 1 of Lemma 2. Hence the definition of \(\theta _{i, j^{\prime }}\) implies
$$ \theta_{i, j^{\prime}} = \displaystyle{\int}_{0}^{x_{i, j^{\prime}}} h(1 - p_{i}, y_{i,j^{\prime}} - \xi) d \xi = \displaystyle{\int}_{0}^{x_{i, j^{\prime}}} h(0, y_{i,j^{\prime}} - \xi) d \xi = 0, $$where the second equality follows from pi = 1, and the third equality follows from (H1).
□
5.3 The Approximation Ratio
To obtain the approximation ratio, it remains to select an exchange function h satisfying (H1)–(H4) such that the right hand side of part 1 of Lemma 25 is small. We can formulate this as an infinite-dimensional factor-revealing linear program. More specifically, we can minimize
$$ \sup\limits_{\xi_{1} \in [0, 1]} \displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi $$
over the set of all functions h that satisfy (H1)–(H4). Notice that the objective value and the constraints induced by (H1)–(H4) are linear in h. However, the space of all feasible solutions is infinite-dimensional. One possible approach to the infinite-dimensional factor-revealing linear program is to obtain a numerical solution via a suitable discretization. Using the numerical results as guidance, we obtain the candidate exchange function
$$ h(\xi_{1}, \xi_{2}) = \begin{cases} 1 & \text{if } \xi_{1} > \xi_{2}, \\ 0 & \text{if } \xi_{1} = 0, \\ \big(1 - \frac{1}{L}\big)^{k+1} & \text{otherwise, where } k \in \{0, 1, 2, \dots\} \text{ satisfies } \big(1 - \frac{1}{L}\big)^{k+1} < \frac{\xi_{1}}{\xi_{2}} \leq \big(1 - \frac{1}{L}\big)^{k}. \end{cases} $$(6)
Lemma 26 below provides a formal analytical proof that the above choice of h satisfies (H1)–(H4) and achieves an objective value of \((1-\frac {1}{L})^{L}\). At the end of this section, we discuss a simpler choice for the exchange function that can be used to match the results presented in [1].
Lemma 26
Let h be the function defined by (6). Then the following conditions hold.
-
1.
The function h satisfies (H1)–(H4).
-
2.
For every ξ1 ∈ [0,1], we have \(\displaystyle {\int \limits }_{\xi _{1}}^{1} h(\xi _{1}, \xi ) d \xi \leq \Big (1 - \frac {1}{L} \Big )^{L}\).
Proof
-
1.
It is straightforward to see that (H1)–(H3) hold by inspecting the definition of h. To show that (H4) holds, let ξ1, ξ2 ∈ [0,1]. We consider three cases.
Case 1: ξ2 ≤ ξ1. Then
$$ L \cdot \displaystyle{\int}_{\xi_{2} \cdot (1 - 1/L)}^{\xi_{2}} \Big(1 - h(\xi_{1}, \xi) \Big) d \xi = L \cdot \displaystyle{\int}_{\xi_{2} \cdot (1 - 1/L)}^{\xi_{2}} (1 - 1) d \xi = 0 = \max(\xi_{2} - \xi_{1}, 0). $$Case 2: ξ2 > ξ1 = 0. Then
$$ L \cdot \displaystyle{\int}_{\xi_{2} \cdot (1 - 1/L)}^{\xi_{2}} \Big(1 - h(\xi_{1}, \xi) \Big) d \xi = L \cdot \displaystyle{\int}_{\xi_{2} \cdot (1 - 1/L)}^{\xi_{2}} (1 - 0) d \xi = \xi_{2} = \max(\xi_{2} - \xi_{1}, 0). $$Case 3: ξ2 > ξ1 > 0. Let \(k \in \{0, 1, 2, {\dots } \}\) such that \((1 - \frac {1}{L})^{k + 1} < \frac {\xi _{1}}{\xi _{2}} \leq (1 - \frac {1}{L})^{k}\). Then
$$ \begin{array}{@{}rcl@{}} & &L \cdot \displaystyle{\int}_{(1 - 1/L) \cdot \xi_{2}}^{\xi_{2}} \Big(1 - h(\xi_{1}, \xi) \Big) d \xi \\ &={} & \xi_{2} - L \cdot \displaystyle{\int}_{(1 - 1/L) \cdot \xi_{2}}^{\xi_{2}} h(\xi_{1}, \xi) d \xi \\ &={} & \xi_{2} - L \cdot \displaystyle{\int}_{(1 - 1/L) \cdot \xi_{2}}^{\xi_{1} / (1 - 1/L)^{k}} h(\xi_{1}, \xi) d \xi - L \cdot \displaystyle{\int}_{\xi_{1} / (1 - 1/L)^{k}}^{\xi_{2}} h(\xi_{1}, \xi) d \xi \\ &={} & \xi_{2} - L \cdot \displaystyle{\int}_{(1 - 1/L) \cdot \xi_{2}}^{\xi_{1} / (1 - 1/L)^{k}} \Big(1 - \frac{1}{L} \Big)^{k} d \xi - L \cdot \displaystyle{\int}_{\xi_{1} / (1 - 1/L)^{k}}^{\xi_{2}} \Big(1 - \frac{1}{L} \Big)^{k+1} d \xi \\ &={} & \xi_{2} - L \cdot (\xi_{1} - \xi_{2} \cdot (1 - \tfrac{1}{L})^{k+1}) - L \cdot (\xi_{2} \cdot (1 - \tfrac{1}{L})^{k+1} - \xi_{1} \cdot (1 - \tfrac{1}{L})) \\ &={} & \xi_{2} - \xi_{1} \\ &={} & \max(\xi_{2} - \xi_{1}, 0). \end{array} $$ -
2.
Let ξ1 ∈ [0,1]. We may assume that ξ1 > 0, for otherwise
$$ \displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi = \displaystyle{\int}_{\xi_{1}}^{1} 0 d \xi = 0 \leq \Big(1 - \frac{1}{L} \Big)^{L}. $$Let \(k \in \{0, 1, 2, {\dots } \}\) such that \((1 - \frac {1}{L})^{k + 1} < \xi _{1} \leq (1 - \frac {1}{L})^{k}\). Then
$$ \begin{array}{@{}rcl@{}} & &\displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi \\ =& & \displaystyle{\int}_{\xi_{1} / (1-1/L)^{k}}^{1} h(\xi_{1}, \xi) d \xi + \sum\limits_{0 \leq k^{\prime} < k} \displaystyle{\int}_{\xi_{1} / (1-1/L)^{k^{\prime}}}^{\xi_{1} / (1-1/L)^{k^{\prime}+1}} h(\xi_{1}, \xi) d \xi \\ =& & \displaystyle{\int}_{\xi_{1} / (1-1/L)^{k}}^{1} \Big(1 - \frac{1}{L} \Big)^{k+1} d \xi + \sum\limits_{0 \leq k^{\prime} < k} \displaystyle{\int}_{\xi_{1} / (1-1/L)^{k^{\prime}}}^{\xi_{1} / (1-1/L)^{k^{\prime}+1}} \Big(1 - \frac{1}{L} \Big)^{k^{\prime}+1} d \xi \\ =& & \Big(\Big(1 - \frac{1}{L} \Big)^{k+1} - \xi_{1} \cdot \Big(1 - \frac{1}{L} \Big) \Big) + {\sum}_{0 \leq k^{\prime} < k} \frac{\xi_{1}}{L} \\ =& & (1 - \tfrac{1}{L} )^{k+1} + \tfrac{\xi_{1}}{L} (k - L + 1). \end{array} $$(7)We consider three cases.
Case 1: k = L − 1. Then (7) implies
$$ \displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi = (1 - \tfrac{1}{L} )^{k+1} + \tfrac{\xi_{1}}{L} (k - L + 1) = (1 - \tfrac{1}{L} )^{L}. $$Case 2: k ≥ L. Then (7) implies
$$ \begin{array}{@{}rcl@{}} \displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi &={} & (1 - \tfrac{1}{L} )^{k+1} + \tfrac{\xi_{1}}{L} (k - L + 1) \\ &\leq{} & (1 - \tfrac{1}{L} )^{k+1} + \tfrac{1}{L} (k - L + 1) (1 - \tfrac{1}{L})^{k} \\ &={} & (1 - \tfrac{1}{L} )^{L} \cdot \tfrac{k}{L} \cdot (1 - \tfrac{1}{L})^{k - L} \\ &\leq{} & (1 - \tfrac{1}{L} )^{L} \cdot e^{k/L - 1} \cdot e^{(L - k)/L} \\ &={} & (1 - \tfrac{1}{L} )^{L}, \end{array} $$where the first inequality follows from \(\xi _{1} \leq (1 - \frac {1}{L})^{k}\), and the second inequality follows from \(e^{k/L - 1} \geq \frac {k}{L}\) and \(e^{-1/L} \geq 1 - \frac {1}{L}\).
Case 3: k ≤ L − 2. Then (7) implies
$$ \begin{array}{@{}rcl@{}} \displaystyle{\int}_{\xi_{1}}^{1} h(\xi_{1}, \xi) d \xi &={} & (1 - \tfrac{1}{L} )^{k+1} + \tfrac{\xi_{1}}{L} (k - L + 1) \\ &<{} & (1 - \tfrac{1}{L} )^{k+1} - \tfrac{1}{L} (L - k - 1) (1 - \tfrac{1}{L})^{k + 1} \\ &={} & (1 - \tfrac{1}{L} )^{L} \cdot \tfrac{k + 1}{L - 1} \cdot (1 + \tfrac{1}{L-1})^{L - k - 2} \\ &\leq{} & (1 - \tfrac{1}{L} )^{L} \cdot e^{(k + 1)/(L - 1) - 1} \cdot e^{(L - k - 2) / (L - 1)} \\ &={} & (1 - \tfrac{1}{L} )^{L}, \end{array} $$where the first inequality follows from \(\xi _{1} > (1 - \frac {1}{L})^{k + 1}\), and the second inequality follows from \(e^{(k + 1)/(L - 1) - 1} \geq \frac {k+1}{L-1}\) and \(e^{1 / (L - 1)} \geq 1 + \frac {1}{L - 1}\).
□
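As a numerical sanity check of Lemma 26 (illustrative only; the formal argument is the proof above), the following Python sketch implements the exchange function of (6) for L = 2 and verifies property (H4) and the bound of part 2 on a grid:

```python
# Numerical sanity check of Lemma 26 for the exchange function h of (6):
# h(xi1, xi2) = 1 if xi1 > xi2, 0 if xi1 = 0, and (1 - 1/L)^(k+1)
# otherwise, where k satisfies (1-1/L)^(k+1) < xi1/xi2 <= (1-1/L)^k.

def h(xi1, xi2, L):
    if xi1 > xi2:
        return 1.0
    if xi1 == 0:
        return 0.0
    r = xi1 / xi2
    k = 0
    while (1 - 1 / L) ** (k + 1) >= r:
        k += 1
    return (1 - 1 / L) ** (k + 1)

def integral_h(xi1, L, lo, hi, n=20000):
    # Midpoint-rule approximation of the integral of h(xi1, .) over [lo, hi].
    if hi <= lo:
        return 0.0
    step = (hi - lo) / n
    return step * sum(h(xi1, lo + (t + 0.5) * step, L) for t in range(n))

L = 2
grid = [t / 20 for t in range(21)]

# Part 2 of Lemma 26: the integral from xi1 to 1 never exceeds (1 - 1/L)^L.
bound = (1 - 1 / L) ** L
assert all(integral_h(x1, L, x1, 1.0) <= bound + 1e-3 for x1 in grid)

# (H4): L times the integral of (1 - h) over [xi2 * (1 - 1/L), xi2] is at
# most max(xi2 - xi1, 0), up to discretization error.
for x1 in grid:
    for x2 in grid:
        lo = x2 * (1 - 1 / L)
        val = L * ((x2 - lo) - integral_h(x1, L, lo, x2, n=1000))
        assert val <= max(x2 - x1, 0.0) + 1e-2
```

The check confirms that the bound of part 2 is tight at, for example, ξ1 = 1/2 when L = 2, where the integral evaluates to 1/4.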
Lemma 27 below is obtained by combining Lemmas 22, 25, and 26. Our main results are presented in Theorems 28 and 29 and proved using Lemma 27.
Lemma 27
\({\sum }_{(i, j) \in I \times J} x_{i, j} \leq \Big (1 + \Big (1 - \frac {1}{L} \Big )^{L} \Big ) \cdot \lvert {\mu }\rvert \).
Proof
Consider the charging argument with the exchange function h as defined by (6). By part 1 of Lemma 26, the function h satisfies (H1)–(H4). Lemma 22 implies
$$ \sum\limits_{(i, j) \in I \times J} x_{i, j} \leq \sum\limits_{(i, j) \in I \times J} (\theta_{i, j} + \phi_{i, j}). $$(8)
Part 1 of Lemma 25 implies
$$ \sum\limits_{j^{\prime} \in J} \theta_{i, j^{\prime}} + \sum\limits_{i^{\prime} \in I} \phi_{i^{\prime}, j} \leq 1 + \displaystyle{\int}_{1 - p_{i}}^{1} h(1 - p_{i}, \xi) d \xi \leq 1 + \Big(1 - \frac{1}{L}\Big)^{L} $$(9)
for every (i, j) ∈ I × J such that μ(i) = j, where the second inequality follows from part 2 of Lemma 26. Part 2 of Lemma 25 implies
$$ \sum\limits_{j^{\prime} \in J} \theta_{i, j^{\prime}} = 0 $$(10)
for every i ∈ I such that μ(i) = 0. The definitions of {ϕi, j}(i, j)∈I×J imply
$$ \sum\limits_{i^{\prime} \in I} \phi_{i^{\prime}, j} = 0 $$(11)
for every j ∈ J such that μ(j) = 0. Combining (8), (9), (10), and (11) gives the desired inequality. □
Theorem 28
For the maximum stable matching problem with one-sided ties and tie length at most L, Algorithms 1 and 2 are polynomial-time \((1 + (1 - \frac {1}{L})^{L})\)-approximation algorithms.
Proof
Algorithms 1 and 2 each run in polynomial time because linear programming is polynomial-time solvable and the number of iterations of the loop is at most \(\lvert {I}\rvert \times \lvert {J}\rvert \).
By Lemmas 13 and 21, Algorithms 1 and 2 each produce a matching μ such that (μ, p) satisfies (P1)–(P4) with η = 0. So Lemma 14 implies that the output μ is a weakly stable matching. Let \(\mu ^{\prime }\) be a maximum weakly stable matching, and \(\textbf {x}^{\prime }\) be the indicator variables of \(\mu ^{\prime }\). Since \(\mu ^{\prime }\) is weakly stable, \(\textbf {x}^{\prime }\) satisfies constraints (C1)–(C5). Hence Lemma 27 implies
$$ \Big(1 + \Big(1 - \frac{1}{L}\Big)^{L}\Big) \cdot \lvert \mu \rvert \geq \sum\limits_{(i, j) \in I \times J} x_{i, j} \geq \sum\limits_{(i, j) \in I \times J} x^{\prime}_{i, j} = \lvert \mu^{\prime} \rvert, $$
where the second inequality follows from the optimality of x. □
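To illustrate the bound of Theorem 28 numerically, the ratio \(1 + (1 - \frac{1}{L})^{L}\) equals \(\frac{5}{4}\) at L = 2 and increases toward \(1 + \frac{1}{e}\) as L grows; a short Python check:

```python
import math

def approx_ratio(L):
    # Approximation ratio of Theorem 28: 1 + (1 - 1/L)^L.
    return 1 + (1 - 1 / L) ** L

# L = 2 gives the 5/4 bound matching the known UG-hardness result.
assert approx_ratio(2) == 1.25
# The bound is increasing in L and approaches 1 + 1/e from below.
ratios = [approx_ratio(L) for L in range(2, 101)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1 + 1 / math.e
```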
Theorem 29
For the maximum stable matching problem with one-sided ties and tie length at most L, the integrality gap of the linear programming formulation in Section 3 is \({1 + (1 - \frac {1}{L})^{L}}\).
Proof
Consider the matching μ produced by Algorithm 1 or 2. By Lemmas 13 and 21, there exists p such that (μ, p) satisfies (P1)–(P4) with η = 0. So Lemma 14 implies that the output μ is a weakly stable matching. Hence Lemma 27 implies
$$ \sum\limits_{(i, j) \in I \times J} x_{i, j} \leq \Big(1 + \Big(1 - \frac{1}{L}\Big)^{L}\Big) \cdot \lvert \mu \rvert. $$
Let \(\mathbf {x}^{\prime }\) be the indicator variables of μ. Since μ is weakly stable, Lemma 1 implies that \(\mathbf {x}^{\prime }\) is an integral solution satisfying constraints (C1)–(C5). Since
$$ \lvert \mu \rvert = \sum\limits_{(i, j) \in I \times J} x^{\prime}_{i, j}, $$
the integrality gap is at most \(1 + (1 - \frac {1}{L})^{L}\). This upper bound matches the known lower bound for the integrality gap [16, Section 5.1]. □
We now discuss a simpler choice for the exchange function that can be used to match the results presented in [1]. For any ξ1, ξ2 ∈ [0,1], let
$$ \tilde{h}(\xi_{1}, \xi_{2}) = \begin{cases} 1 & \text{if } \xi_{1} > \xi_{2}, \\ 0 & \text{if } \xi_{1} = 0, \\ \frac{\xi_{1}}{\xi_{2}} & \text{otherwise.} \end{cases} $$
We claim that \(\tilde {h}\) is the “smooth” version of h, in the following sense: \(\tilde {h} (\xi _{1}, \xi _{2}) = \lim _{L \to \infty } h (\xi _{1}, \xi _{2})\) for all ξ1, ξ2 ∈ [0,1]. It is straightforward to prove this claim by considering the cases ξ1 = ξ2 = 0, 0 = ξ2 < ξ1, 0 < ξ2 < ξ1, and 0 < ξ1 ≤ ξ2 separately. (In the last case, observe that \(\frac {\xi _{1}}{\xi _{2}} (1 - \frac {1}{L}) \leq h(\xi _{1}, \xi _{2}) < \frac {\xi _{1}}{\xi _{2}}\), and hence \(\lim _{L \to \infty } h (\xi _{1}, \xi _{2}) = \xi _{1} / \xi _{2} = \tilde {h} (\xi _{1}, \xi _{2})\).) This claim, together with Lemma 26, implies that \(\tilde {h}\) satisfies (H1)–(H4) and that \({\int \limits }_{\xi _{1}}^{1} \tilde {h}(\xi _{1}, \xi ) d \xi \leq \frac {1}{e}\) for all ξ1 ∈ [0,1]. (Alternatively, it is straightforward to prove these facts directly from the definition of \(\tilde {h}\).) Hence, using \(\tilde {h}\) instead of h, we obtain weakened versions of Lemmas 26 and 27 and Theorems 28 and 29 in which the expression \((1-\frac {1}{L})^{L}\) is replaced by \(\frac {1}{e}\). We remark that the term \({\int \limits }_{\xi _{1}}^{1} \tilde {h}(\xi _{1}, \xi ) d \xi = - \xi _{1} \ln \xi _{1}\) appearing in this variant of Lemma 26 is analogous to the term \((p_{0} - 1) \ln (1 - p_{0})\) that appears in [1, Section 4.3.1].
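The stated properties of \(\tilde {h}\) are also easy to check numerically; the following Python sketch (illustrative only) confirms the closed form \({\int }_{\xi _{1}}^{1} \tilde {h}(\xi _{1}, \xi ) d \xi = -\xi _{1} \ln \xi _{1}\) and the \(\frac {1}{e}\) bound:

```python
import math

def h_tilde(xi1, xi2):
    # The "smooth" exchange function: 1 above the diagonal, xi1/xi2 below it.
    if xi1 > xi2:
        return 1.0
    return 0.0 if xi2 == 0 else xi1 / xi2

def integral_h_tilde(xi1, n=20000):
    # Midpoint-rule approximation of the integral of h_tilde(xi1, .) over [xi1, 1].
    step = (1 - xi1) / n
    return step * sum(h_tilde(xi1, xi1 + (t + 0.5) * step) for t in range(n))

# Closed form: the integral equals -xi1 * ln(xi1), which is maximized at
# xi1 = 1/e with value 1/e.
for xi1 in (0.1, 1 / math.e, 0.5, 0.9):
    assert abs(integral_h_tilde(xi1) - (-xi1 * math.log(xi1))) < 1e-4
assert all(-x1 * math.log(x1) <= 1 / math.e + 1e-12
           for x1 in (t / 1000 for t in range(1, 1000)))
```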
Notes
In the proof of this claim [22, Theorem 19], Huang et al. exhibit a family of instances with 2k men and 2k women such that the corresponding LP has a feasible fractional solution of value (3/2 − o(1))k. It is asserted that a certain weakly stable matching of size k is a maximum weakly stable matching, but this assertion is incorrect. For the case when k = 2, there exists a weakly stable matching of size 3. Similarly, when k > 2, it can be shown that the maximum size of a weakly stable matching is greater than k.
There is also a flaw related to the main result of their paper, which asserts an approximation ratio of \(\frac {5}{4}\) for the special case where ties are one-sided and are restricted to the end of the preference lists. In the derivation of inequalities (11) and (12) in their proof [22, Lemma 16], it is claimed that \(\frac {\delta _{m, w}}{1 + \nu _{w}} \leq \delta _{m, w}\). This claim depends on the unproven assumption that δm, w is non-negative. It is unclear whether this flaw can be fixed. Both flaws have been acknowledged by Huang et al. in a personal communication.
A binary relation ≽ over a set K is said to satisfy antisymmetry if for every k1, k2 ∈ K such that k1 ≽ k2 and k2 ≽ k1, we have k1 = k2.
A binary relation ≽ over a set K is said to satisfy transitivity if for every k1, k2, k3 ∈ K such that k1 ≽ k2 and k2 ≽ k3, we have k1 ≽ k3.
A binary relation ≽ over a set K is said to satisfy totality if for every k1, k2 ∈ K, we have either k1 ≽ k2 or k2 ≽ k1.
The asymmetric part of a binary relation ≽ over a set K is the binary relation ≻ over the set K such that for every k1, k2 ∈ K, we have k1 ≻ k2 if and only if k1 ≽ k2 and \(\lnot (k_{2} \succeq k_{1})\).
The symmetric part of a binary relation ≽ over a set K is the binary relation \(\sim \) over the set K such that for every k1, k2 ∈ K, we have \(k_{1} \sim k_{2}\) if and only if k1 ≽ k2 and k2 ≽ k1.
Some of the literature on stable matching with indifferences does not allow an agent to be indifferent between being matched to an agent and being unmatched. Our formulation of the stable matching problem with one-sided ties and incomplete lists allows for this possibility, since we can have i =j0 for any man i and woman j.
For any positive step size η, the upper bound on pi can be strengthened to pi < 1 + 2η. Later in the paper, we present an algorithm (Algorithm 1) that terminates in a state satisfying (P1) through (P4) with η = 0 (Lemma 13); in that context, we have pi ≤ 1 and not pi < 1.
References
Lam, C-K, Plaxton, C.G.: A (1 + 1/e)-approximation algorithm for maximum stable matching with one-sided ties and incomplete lists. In: Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms, pp 2823–2840 (2019)
Lam, C-K, Plaxton, C.G.: Maximum stable matching with one-sided ties of bounded length. In: Proceedings of the 12th International Symposium on Algorithmic Game Theory, pp 343–356 (2019)
Gale, D., Shapley, L.S.: College admissions and the stability of marriage. Amer. Math. Monthly 69(1), 9–15 (1962)
Irving, R.W.: Stable marriage and indifference. Discret. Appl. Math. 48(3), 261–272 (1994)
Gale, D., Sotomayor, M.A.O.: Some remarks on the stable matching problem. Discret. Appl. Math. 11(3), 223–232 (1985)
Roth, A.E.: The evolution of the labor market for medical interns and residents: A case study in game theory. J. Polit. Econ. 92(6), 991–1016 (1984)
Manlove, D.F., Irving, R.W., Iwama, K., Miyazaki, S., Morita, Y.: Hard variants of stable marriage. Theor. Comput. Sci. 276(1), 261–279 (2002)
Iwama, K., Miyazaki, S., Yamauchi, N.: A 1.875-approximation algorithm for the stable marriage problem. In: Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms, pp 288–297 (2007)
Király, Z.: Better and simpler approximation algorithms for the stable marriage problem. Algorithmica 60(1), 3–20 (2011)
McDermid, E.J.: A 3/2-approximation algorithm for general stable marriage. In: Proceedings of the 36th International Colloquium on Automata, Languages and Programming, pp 689–700 (2009)
Paluch, K.: Faster and simpler approximation of stable matchings. Algorithms 7(2), 189–202 (2014)
Király, Z.: Linear time local approximation algorithm for maximum stable marriage. Algorithms 6(3), 471–484 (2013)
Iwama, K., Manlove, D.F., Miyazaki, S., Morita, Y.: Stable marriage with incomplete lists and ties. In: Proceedings of the 26th International Colloquium on Automata, Languages, and Programming, pp 443–452 (1999)
Halldórsson, M.M., Irving, R.W., Iwama, K., Manlove, D.F., Miyazaki, S., Morita, Y., Scott, S.: Approximability results for stable marriage problems with ties. Theor. Comput. Sci. 306(1), 431–447 (2003)
Yanagisawa, H.: Approximation algorithms for stable marriage problems. Ph.D. Thesis, Graduate School of Informatics, Kyoto University (2007)
Iwama, K., Miyazaki, S., Yanagisawa, H.: A 25/17-approximation algorithm for the stable marriage problem with one-sided ties. Algorithmica 68 (3), 758–775 (2014)
Dean, B.C., Jalasutram, R.: Factor revealing LPs and stable matching with ties and incomplete lists. In: Proceedings of the 3rd International Workshop on Matching Under Preferences, pp 42–53 (2015)
Huang, C-C, Kavitha, T.: Improved approximation algorithms for two variants of the stable marriage problem with ties. Math. Program. 154(1), 353–380 (2015)
Bauckholt, F., Pashkovich, K., Sanità, L.: On the approximability of the stable marriage problem with one-sided ties. arXiv:1805.05391 (2018)
Radnai, A.: Approximation algorithms for the stable marriage problem. Master’s Thesis, Department of Computer Science, Eötvös Loránd University (2014)
Halldórsson, M.M., Iwama, K., Miyazaki, S., Yanagisawa, H.: Improved approximation results for the stable marriage problem. ACM Trans. Algorithm. 3(3), 30 (2007)
Huang, C-C, Iwama, K., Miyazaki, S., Yanagisawa, H.: A tight approximation bound for the stable marriage problem with restricted ties. In: Proceedings of the 18th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, pp 361–380 (2015)
Halldórsson, M.M., Iwama, K., Miyazaki, S., Yanagisawa, H.: Randomized approximation of the stable marriage problem. Theor. Comput. Sci. 325 (3), 439–465 (2004)
Chiang, R., Pashkovich, K.: On the approximability of the stable matching problem with ties of size two. arXiv:1808.04510 (2019)
Irving, R.W., Manlove, D.F., O’Malley, G.: Stable marriage with ties and bounded length preference lists. J. Discret. Algorithm. 7(2), 213–219 (2009)
Abraham, D.J., Levavi, A., Manlove, D.F., O’Malley, G.: The stable roommates problem with globally-ranked pairs. In: Proceedings of the 3rd International Workshop on Web and Internet Economics, pp 431–444 (2007)
Irving, R.W., Manlove, D.F., Scott, S.: The stable marriage problem with master preference lists. Discret. Appl. Math. 156(15), 2959–2977 (2008)
Marx, D., Schlotter, I.: Parameterized complexity and local search approaches for the stable marriage problem with ties. Algorithmica 58(1), 170–187 (2010)
Hamada, K., Miyazaki, S., Yanagisawa, H.: Strategy-proof approximation algorithms for the stable marriage problem with ties and incomplete lists. In: Proceedings of the 30th International Symposium on Algorithms and Computation, pp 9:1–9:14 (2019)
Jain, K., Mahdian, M., Markakis, E., Saberi, A., Vazirani, V.: Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. J. ACM 50(6), 795–824 (2003)
Fernandes, C.G., Meira, L.A.A., Miyazawa, F.K., Pedrosa, L.L.C.: A systematic approach to bound factor-revealing LPs and its application to the metric and squared metric facility location problems. Math. Program. 153(2), 655–685 (2015)
Mahdian, M., Yan, Q.: Online bipartite matching with random arrivals: An approach based on strongly factor-revealing LPs. In: Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, pp 597–606 (2011)
Mehta, A., Saberi, A., Vazirani, U., Vazirani, V.: Adwords and generalized online matching. J. ACM 54(5), 22:1–22:19 (2007)
Rothblum, U.G.: Characterization of stable matchings as extreme points of a polytope. Math. Program. 54(1), 57–67 (1992)
Vande Vate, J.H.: Linear programming brings marital bliss. Oper. Res. Lett. 8(3), 147–153 (1989)
Acknowledgments
We thank the two anonymous referees for their valuable comments, which helped us to significantly improve the presentation.
This article belongs to the Topical Collection: Special Issue on Algorithmic Game Theory (SAGT 2019)
Guest Editors: Dimitris Fotakis and Vangelis Markakis
This journal article combines the results presented in two conference papers [1, 2]. The first of these papers establishes an approximation ratio of \(1+\protect \frac {1}{e}\). The second establishes the more detailed bound stated in the abstract. The latter bound is increasing in L and approaches \(1+\protect \frac {1}{e}\) as L tends to infinity.
Lam, CK., Plaxton, C.G. Maximum Stable Matching with One-Sided Ties of Bounded Length. Theory Comput Syst 66, 645–678 (2022). https://doi.org/10.1007/s00224-022-10072-1