1 Introduction

Clustering problems aim to classify high-dimensional data based on the proximity of points to each other. There is an inherent assumption that the clusters satisfy a locality property – points close to each other (in a geometric sense) should belong to the same category. Often, we model such problems by the notion of a center-based clustering problem. We would like to identify a set of centers, one for each cluster; the clustering is then obtained by assigning each point to its nearest center. For example, the k-means problem is defined in the following manner: given a dataset \(X = \{x_{1}, \ldots , x_{n}\} \subset \mathbb {R}^{d}\) and an integer k, output a set of k centers \(\{c_{1}, \ldots , c_{k}\} \subset \mathbb {R}^{d}\) such that the objective function \({\sum }_{x \in X} \min _{c \in \{c_{1}, \ldots , c_{k}\}} ||x - c||^{2}\) is minimized. The k-median and the k-center problems are defined in a similar manner with suitable objective functions.

However, such clustering problems often entail several side constraints. Such constraints limit the set of feasible clusterings. For example, the r-gather k-means clustering problem is defined in the same manner as the k-means problem, but has the additional constraint that each cluster must have at least r points in it. In such settings, it is no longer true that the clustering is obtained from the set of centers by the Voronoi partition. Ding and Xu [5] began a systematic study of such problems, and this is the starting point of our work as well. They defined the so-called constrained k-means problem. An instance of such a problem is specified by a set of points X, a parameter k, and a set \(\mathbb {C}\), where each element of \(\mathbb {C}\) is a partitioning of X into k disjoint subsets (or clusters). Since the set \(\mathbb {C}\) may be exponentially large, we will assume that it is specified in a succinct manner by an efficient algorithm which decides membership in this set. A solution needs to output an element \(\mathbb {O} = \{O_{1}, \ldots , O_{k}\}\) of \(\mathbb {C}\), and a set of k centers, \(c_{1}, \ldots , c_{k}\), one for each cluster in \(\mathbb {O}\). The goal is to minimize \({\sum }_{i = 1}^{k} {\sum }_{x \in O_{i}} ||x-c_{i}||^{2}\). It is easy to check that the center \(c_{i}\) must be the mean of the corresponding cluster \(O_{i}\). Note that the k-means problem is a special case of this problem where the set \(\mathbb {C}\) contains all possible ways of partitioning X into k subsets. The constrained k-median problem can be defined similarly. We will make the natural assumption (which is made by Ding and Xu as well) that it suffices to find a set of k centers. In other words, there is an (efficient) algorithm \({A^{\mathbb {C}}}\), which given a set of k centers \(c_{1}, \ldots , c_{k}\), outputs the clustering \(\{O_{1}, \ldots , O_{k}\} \in \mathbb {C}\) such that \({\sum }_{i = 1}^{k} {\sum }_{x \in O_{i}} ||c_{i} - x||^{2}\) is minimized. Such an algorithm is called a partition algorithm by Ding and Xu [5]. For the case of the k-means problem, this algorithm will just give the Voronoi partition with respect to \(c_{1}, \ldots , c_{k}\), whereas in the case of the r-gather k-means clustering problem, the algorithm \({A^{\mathbb {C}}}\) will be given by a suitable min-cost flow computation (see Section 4.1 in [5]).

Ding and Xu [5] considered several natural problems arising in diverse areas, e.g. machine learning, which can be stated in this framework. These included the so-called r-gather k-means, r-capacity k-means and l-diversity k-means problems. Their approach for solving such problems was to output a list of candidate sets of centers (each of size k) such that at least one of these is close to the optimal centers. We formalize this approach and show that if k is small, then one can obtain a PTAS for the constrained k-means (and the constrained k-median) problem whose running time is linear plus a constant number of calls to \({A^{\mathbb {C}}}\).

We define the list k-means problem as follows. Given a set of points X and parameters k and ε, we want to output a list Ł of sets of k points (or centers). The list Ł should have the following property: for any partitioning \(\mathbb {O}=\{O_{1}, \ldots , O_{k}\}\) of X into k clusters, there exists a set \(c_{1}, \ldots , c_{k}\) in the list Ł such that (up to reordering of these centers)

$$\begin{array}{@{}rcl@{}} \sum\limits_{i = 1}^{k} \sum\limits_{x \in O_{i}} ||c_{i} - x||^{2} \leq (1+{\varepsilon})\sum\limits_{i = 1}^{k} \sum\limits_{x \in O_{i}} ||x - m_{i}||^{2}, \end{array} $$
(1)

where \(m_{i} = \frac {{\sum }_{x \in O_{i}} x}{|O_{i}|}\) denotes the mean of \(O_{i}\). Note that the latter quantity is the k-means cost of the clustering \(\mathbb {O}\), and so we require \(c_{1}, \ldots , c_{k}\) to be such that the cost of assigning to these centers is close to the optimal k-means cost of this clustering. We shall use \({\tt opt}_{k}(\mathbb {O})\) to denote the optimal k-means cost of \(\mathbb {O}\).

Although such an oblivious approach to clustering may appear too optimistic, we show that it is possible to obtain such a list Ł of size \(2^{\tilde {O}(k/{\varepsilon })}\) in \(O \left (nd \cdot 2^{\tilde {O}(k/{\varepsilon })} \right )\) time. This improves the result of Ding and Xu [5], who gave an algorithm which outputs a list of size \(O((\log n)^{k} \cdot 2^{poly(k/{\varepsilon })})\). Observe that we address a question which is both algorithmic and existential: how small can the size of Ł be, and how efficiently can we find it? We also give almost matching lower bounds on the size of such a list Ł. Our algorithm for finding Ł relies on the \(D^{2}\)-sampling idea – iteratively find the centers by picking the next one to be far from the current set of centers. Although these ideas have been used for the k-means problem (see e.g. [9]), they rely heavily on the fact that given a set of centers, the corresponding clustering is obtained by the corresponding Voronoi partition. Our approach relies on showing that there is a small-sized list Ł which works well for all possible clusterings.

It is not hard to show that a result for the list k-means problem implies a corresponding result for the constrained k-means problem with the number of calls to \({A^{\mathbb {C}}}\) being equal to the size of the list Ł. Therefore, we obtain as corollary of our main result efficient algorithms for the constrained k-means (and the constrained k-median) problems.

1.1 Related Work

The classical k-means problem is one of the most well-studied clustering problems. There is a long sequence of work on obtaining fast PTASs for the k-means and the k-median problems (see e.g., [1,2,3,4, 6, 7, 9, 11, 12] and references therein). Some of these works implicitly maintain a list of sets of k centers such that condition (1) is satisfied for all clusterings \(\mathbb {O}\) which correspond to a Voronoi partition (with respect to a set of k centers) of the input set of points, and one picks the best possible set of centers from this list (see e.g., [1, 9, 11]). The list has at most \(2^{poly(k/{\varepsilon })}\) elements, and from this, one can recover a (1 + ε)-approximation algorithm for the k-means problem with running time \(O(n d \cdot 2^{poly(k/{\varepsilon })})\).

The more general case of the constrained k-means problem was studied by Ding and Xu [5], who also gave an algorithm that outputs a list of size \(O((\log n)^{k} \cdot 2^{poly(k/{\varepsilon })})\). Our work improves upon this result.

Moreover, we consider the formulation of the list k-means problem to be an important contribution in itself, and feel that similar formulations in other classification settings would be useful.

1.2 Preliminaries

We formally define the problems considered in this paper. The centroid or mean of a finite set of points \(X \subset \mathbb {R}^{d}\) is denoted by \({\Gamma }(X) = \frac {{\sum }_{x \in X} x }{|X|}\). Let Δ(X) denote the 1-means cost of this set of points, i.e., \({\sum }_{x \in X} ||x-{\Gamma }(X)||^{2}\).

An input instance \(\mathcal {I}\) for the list k-means (or the list k-median) problem consists of a set of points X, a positive integer k and a positive parameter ε. A partition of X into disjoint subsets \(O_{1}, \ldots , O_{k}\) will be called a clustering of X. Given a clustering \({\mathbb O^{\star }} = \{O_{1}^{\star }, \ldots , O_{k}^{\star }\}\) of X and a set of k centers \(C = \{c_{1}, \ldots , c_{k}\}\), define \({\tt cost}_{C}({\mathbb O^{\star }})\) as the minimum, over all permutations π of C, of \({\sum }_{i = 1}^{k} {\sum }_{x \in O_{i}^{\star }} ||x-c_{\pi (i)}||^{2}\). Recall that \({\tt opt}_{k}({\mathbb O^{\star }})\) denotes the optimal k-means cost of \({\mathbb O^{\star }}\), i.e., \({\sum }_{i = 1}^{k} {\sum }_{x \in O_{i}^{\star }} ||x - {\Gamma }(O_{i}^{\star })||^{2}\).

For a set of points X and a set of points C (of size at most k), define \({\Phi }_{C}(X)\) as \({\sum }_{x \in X} \min _{c \in C} ||x - c||^{2}\), i.e., we consider the Voronoi partition of X induced by C, and consider the k-means cost of X with respect to this partition. When considering the list k-median problem, we will use the same notation, except that we will consider the Euclidean norm instead of the square of the Euclidean norm. When C is a singleton set {c}, we shall abuse notation by using \({\Phi }_{c}(X)\) instead of \({\Phi }_{\{c\}}(X)\).

As mentioned in the introduction, the constrained k-means problem is specified by a set of points X, a positive integer k, and a set \(\mathbb {C}\) of feasible clusterings of X. Further, we are given an algorithm \({A^{\mathbb {C}}}\), which, given a set of k centers C, outputs the clustering \(\mathbb {O}\) in \(\mathbb {C}\) which minimizes \({\tt cost}_{C}(\mathbb {O})\). The goal is to find a clustering \(\mathbb {O} \in \mathbb {C}\) and a set C of size k which minimize \({\tt cost}_{C}(\mathbb {O})\). Note that the centers in C should just be the means of the corresponding clusters in \(\mathbb {O}\). On the other hand, if we know C, then we can find the best clustering in \(\mathbb {C}\) by calling \({A^{\mathbb {C}}}\). We use the same notation for the constrained k-median problem.

We now mention a few results which will be used in our analysis. The following fact is well known.

Fact 1

For any \(X \subset \mathbb {R}^{d}\) and \(c \in \mathbb {R}^{d}\) we have \({\sum }_{x \in X} ||x - c||^{2} = {\sum }_{x \in X} ||x - {\Gamma }(X)||^{2} + |X| \cdot ||c - {\Gamma }(X)||^{2}\).
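For completeness, here is a short derivation of this identity (a standard computation, stated here for the reader's convenience): writing \(x - c = (x - {\Gamma }(X)) + ({\Gamma }(X) - c)\) and expanding the square,

$$\sum\limits_{x \in X} ||x - c||^{2} = \sum\limits_{x \in X} ||x - {\Gamma}(X)||^{2} + 2 \Big\langle \sum\limits_{x \in X} (x - {\Gamma}(X)), \ {\Gamma}(X) - c \Big\rangle + |X| \cdot ||{\Gamma}(X) - c||^{2}, $$

and the middle term vanishes since \({\sum }_{x \in X} (x - {\Gamma }(X)) = 0\) by the definition of \({\Gamma }(X)\).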

We next define the notion of \(D^{2}\)-sampling.

Definition 1 (\(D^{2}\)-sampling)

Given a set of points \(X \subset \mathbb {R}^{d}\) and another set of points \(C \subset \mathbb {R}^{d}\), \(D^{2}\)-sampling from X w.r.t. C samples a point \(x \in X\) with probability \(\frac {{\Phi }_{C}(\{x\})}{{\Phi }_{C}(X)}\). For the case \(C = \emptyset \), \(D^{2}\)-sampling is the same as uniform sampling from X.
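As an illustration, here is a minimal sketch of \(D^{2}\)-sampling in Python (our own illustrative code, not part of the paper; the helper names are ours):

```python
import numpy as np

def phi(X, C):
    """Phi_C({x}) for every x in X: squared Euclidean distance to the nearest center in C.
    With C empty, we return all ones so that D^2-sampling degenerates to uniform sampling."""
    if len(C) == 0:
        return np.ones(len(X))
    d2 = ((X[:, None, :] - np.asarray(C)[None, :, :]) ** 2).sum(axis=2)  # |X| x |C|
    return d2.min(axis=1)

def d2_sample(X, C, num_samples, rng=None):
    """Sample points of X (with replacement) with probability Phi_C({x}) / Phi_C(X)."""
    rng = rng or np.random.default_rng()
    w = phi(X, C)
    idx = rng.choice(len(X), size=num_samples, replace=True, p=w / w.sum())
    return X[idx]
```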

The following result of Inaba et al. [8] shows that a constant size random sample is a good enough approximation of a set of points X as far as the 1-means objective is concerned.

Lemma 1 ([8])

Let S be a set of points obtained by independently sampling M points with replacement uniformly at random from a point set \(X \subset \mathbb {R}^{d}\) . Then for any δ > 0,

$$\mathbf{Pr}\left[{\Phi}_{\Gamma(S)}(X) \leq \left( 1 + \frac{1}{\delta M}\right) \cdot {\Delta}(X) \right] \geq (1- \delta). $$
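As a quick sanity check (our own illustrative snippet, not part of the paper), the guarantee can be observed empirically on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))            # an arbitrary point set
delta, M = 0.25, 100                         # failure probability and sample size

mu = X.mean(axis=0)                          # Gamma(X), the optimal 1-means center
opt = ((X - mu) ** 2).sum()                  # Delta(X)

S = X[rng.choice(len(X), size=M, replace=True)]   # uniform sample with replacement
cost = ((X - S.mean(axis=0)) ** 2).sum()          # Phi_{Gamma(S)}(X)

# Lemma 1: this holds with probability at least 1 - delta.
print(cost <= (1 + 1 / (delta * M)) * opt)
```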

We will also use the following simple fact, which may be interpreted as an approximate version of the triangle inequality for the squared Euclidean distance.

Fact 2 (Approximate triangle inequality)

For any \(x, y, z \in \mathbb {R}^{d}\), we have ||xz||2 ≤ 2 ⋅||xy||2 + 2 ⋅||yz||2.

1.3 Our Results

We now state our results for the list k-means and the list k-median problems.

Theorem 1

Given a set of n points \(X \subset \mathbb {R}^{d}\) , parameters k > 0 and 0 < ε ≤ 1, there is a randomized algorithm which outputs a list Ł of \(2^{\tilde {O}(k/{\varepsilon })}\) sets of centers of size k such that for any clustering \({\mathbb O^{\star }} = \{O_{1}^{\star }, ..., O_{k}^{\star }\}\) of X, the following event happens with probability at least 1/2: there is a set CŁ such that

$${\tt cost}_{C}({\mathbb O^{\star}}) \leq (1+{\varepsilon}) \cdot {\tt opt}_{k}({\mathbb O^{\star}}).$$

Moreover, the running time of our algorithm is \(O \left (n d \cdot 2^{\tilde {O}(k/{\varepsilon })} \right )\). The same statement holds for the list k-median problem as well, except that the size of the list Ł becomes \(2^{\tilde {O}(k/{\varepsilon }^{O(1)})}\) and the running time of our algorithm becomes \(O \left (n d \cdot 2^{\tilde {O}(k/{\varepsilon }^{O(1)})} \right )\).

As a corollary of this result, we get a PTAS for the constrained k-means problem (and similarly for the constrained k-median problem).

Corollary 1

There is a randomized algorithm which, given an instance of the constrained k-means problem and a parameter ε > 0, outputs a solution of cost at most (1 + ε) times the optimal cost with probability at least 1/2. Further, the time taken by this algorithm is \(O\left (n d \cdot 2^{\tilde {O}(k/{\varepsilon })} \right ) + 2^{\tilde {O}(k/{\varepsilon })} \cdot T\), where T denotes the time taken by \({A^{\mathbb {C}}}\) on this instance.

Proof

We use the algorithm in Theorem 1 to get a list Ł for this data-set. For each set C ∈ Ł, we invoke \({A^{\mathbb {C}}}\) with C as the set of centers – let \(\mathbb {O}(C)\) denote the clustering produced by \({A^{\mathbb {C}}}\). We output the clustering for which \({\tt cost}_{C}(\mathbb {O}(C))\) is minimum. Let \({\mathbb O^{\star }}\) be the optimal clustering, i.e., the clustering in \(\mathbb {C}\) for which \({\tt opt}_{k}({\mathbb O^{\star }})\) is minimum. We know that with probability at least 1/2, there is a set C ∈ Ł for which \({\tt cost}_{C}({\mathbb O^{\star }}) \leq (1+{\varepsilon }) {\tt opt}_{k}({\mathbb O^{\star }})\). Now, the solution produced by our algorithm has cost at most \({\tt cost}_{C}(\mathbb {O}(C))\), which, by the definition of \({A^{\mathbb {C}}}\), is at most \({\tt cost}_{C}({\mathbb O^{\star }}) \leq (1+{\varepsilon }) {\tt opt}_{k}({\mathbb O^{\star }})\). □
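The reduction in this proof is purely mechanical; the following sketch (our own illustration, with a hypothetical `partition_oracle` playing the role of \({A^{\mathbb {C}}}\)) shows how the list Ł would be consumed:

```python
import numpy as np

def clustering_cost(X, clusters, centers):
    """cost_C(O): squared-distance cost when cluster i is assigned to centers[i].
    `clusters` is a list of index arrays into X, one per cluster."""
    return sum(((X[idx] - c) ** 2).sum() for idx, c in zip(clusters, centers))

def constrained_k_means(X, center_list, partition_oracle):
    """center_list: the list L of candidate sets of k centers (Theorem 1).
    partition_oracle(C): the feasible clustering minimizing cost_C, returned as
    index arrays matched to the order of C (this oracle is problem-specific)."""
    best_cost, best_solution = float("inf"), None
    for C in center_list:
        clusters = partition_oracle(C)
        cost = clustering_cost(X, clusters, C)
        if cost < best_cost:
            best_cost, best_solution = cost, (clusters, C)
    return best_cost, best_solution
```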

We also give a nearly matching lower bound on the size of Ł. The following result along with Yao’s Lemma shows that one cannot reduce the size of Ł to less than \(2^{\tilde {\Omega } \left (\frac {k}{\sqrt {\varepsilon }} \right )}\).

Theorem 2

Given a parameter k and a small enough positive constant ε, there exists a set X of points in \(\mathbb {R}^{d}\) and a set \(\mathbb {C}\) of clusterings of X such that any list Ł of sets of k centers with the following property must have size at least \(2^{\tilde {\Omega } \left (\frac {k}{\sqrt {\varepsilon }} \right )}\): for at least half of the clusterings \(\mathbb {O} \in \mathbb {C}\), there exists a set C in Ł such that \({\tt cost}_{C}(\mathbb {O}) \leq (1+ {\varepsilon }) {\tt opt}_{k}(\mathbb {O})\).

Our techniques also extend to settings involving many other “approximate” metric spaces (see the discussion in Section 6).

Another important observation is that in the lower bound result above, the clusterings in \(\mathbb {C}\) correspond to Voronoi partitions of X. This sheds light on the previous works [1, 6, 9,10,11] as to why the running time of all the algorithms was proportional to \(2^{poly(k/{\varepsilon })}\): they were implicitly maintaining a list which satisfied (1) for all Voronoi partitions of X, and therefore, our lower bound result applies to their algorithms as well.

1.4 Our Techniques

Our techniques are based on the idea of \(D^{2}\)-sampling that was used by Jaiswal et al. [9] to give a (1 + ε)-approximation algorithm for the k-means problem.

Our ideas also have similarities to the ideas of Ding and Xu [5]. We discuss these similarities towards the end of this subsection.

One of the crucial ingredients used in most of the (1 + ε)-approximation algorithms for k-means is Lemma 1. This result essentially states that given a set of points P, if we are able to uniformly sample O(1/ε) points from it, then the mean of these sampled points will be a good substitute for the mean of P. Consider an optimal clustering \(O^{\star }_{1}, \ldots , O^{\star }_{k}\) for a set of points X. If we could uniformly sample from each of the clusters \(O^{\star }_{i}\), then by the argument above, we would be done. The first problem one encounters is that one can only sample from the input set of points, and so, if we sample sufficiently many points from X, we need to somehow distinguish the points which belong to \(O^{\star }_{i}\) in this sample. This can be dealt with using the following argument: suppose we manage to get a small sample S of points (say of size O(poly(k/ε))) that contains at least Ω(1/ε) points uniformly distributed in \(O^{\star }_{i}\); then we can try all possible subsets of S of size O(1/ε) and ensure that at least one of the subsets is a uniform sample of appropriate size from \(O^{\star }_{i}\). Another issue is – how do we ensure that the sample S has sufficient representation from \(O^{\star }_{i}\)? Uniform sampling from the input X will not work since \(|O^{\star }_{i}|\) might be really small compared to |X|. This is where \(D^{2}\)-sampling plays a crucial role, and we discuss this next.

Given a set of points \(X \subseteq \mathbb {R}^{d}\) and candidate centers \(c_{1}, ..., c_{i} \in \mathbb {R}^{d}\), \(D^{2}\)-sampling with respect to the centers \(c_{1}, ..., c_{i}\) samples a point \(x \in X\) with probability proportional to \(\min _{c \in \{c_{1}, ..., c_{i}\}} ||x - c||^{2}\). Note that this process “boosts” the probability of a cluster \(O^{\star }_{j}\) that has many points far from the set \(\{c_{1}, \ldots , c_{i}\}\).

Therefore, even if a cluster \(O^{\star }_{j}\) has a small size, we will have a good chance of sampling points from it (if it is far from the current set of centers). However, this non-uniform sampling technique gives rise to another issue. The points being sampled are no longer uniform samples from the optimal clusters. Depending on the current set of centers, different points in a cluster \(O^{\star }_{j}\) have different probabilities of getting sampled. This issue is not that grave for the k-means problem, where the optimal clusters are Voronoi regions, since we can argue that the probabilities are not very different. However, for the constrained k-means problem, where the optimal clusters are allowed to form an arbitrary partition of the input points, this problem becomes more serious. This can be illustrated using the following example. Suppose we have managed to pick centers \(c_{1}, \ldots , c_{i}\) that are good (in terms of cluster cost) for the optimal clusters \(O_{1}^{\star }, \ldots , O_{i}^{\star }\). At this point, let \(O^{\star }_{j}\) denote the cluster, other than \(O_{1}^{\star }, {\ldots } , O_{i}^{\star }\), from which a point sampled using \(D^{2}\)-sampling w.r.t. \(c_{1}, \ldots , c_{i}\) is most likely to come.

Suppose we sample a set S of O(k/ε) points using \(D^{2}\)-sampling. Are we guaranteed (w.h.p.) to have a subset of S that is a uniform sample from \(O^{\star }_{j}\)? The answer is no (actually quite far from it). This is because the optimal clusters may form an arbitrary partition of the data-set, and it is possible that most of the points in \(O^{\star }_{j}\) are very close to the centers \(c_{1}, \ldots , c_{i}\). In this case the probability of sampling such points will be close to 0. The way we deal with this scenario is that we consider a multi-set \(S^{\prime }\) that is the union of the set of samples S and O(1/ε) copies of each of \(c_{1}, \ldots , c_{i}\). We then argue that all the points in \(O^{\star }_{j}\) that are far from \(c_{1}, \ldots , c_{i}\) will have a good chance of being represented in S (and hence in \(S^{\prime }\)). On the other hand, even though the points of \(O^{\star }_{j}\) that are close to one of \(c_{1}, \ldots , c_{i}\) will not be represented in S (and hence in \(S^{\prime }\)), the centers (among \(c_{1}, \ldots , c_{i}\)) that are close to these points have good representation in \(S^{\prime }\), and these centers may be regarded as “proxies” for such points of \(O^{\star }_{j}\).

Ding and Xu [5], instead of using the idea of \(D^{2}\)-sampling, rely on the ideas of Kumar et al. [11], which involve uniform sampling of points and then pruning the data-set by removing the points that are close to the centers that are currently being considered. In their work, they also encounter the problem that points from some optimal cluster might be close to the current set of good centers (and hence will be removed before uniform sampling). Ding and Xu [5] deal with this issue using what they call a “simplex lemma”. Consider the same scenario as in the previous paragraph. At a very high level, they consider grids inside several simplices defined by the current centers \(c_{1}, \ldots , c_{i}\) and the sampled points.

Using the simplex lemma, they argue that one of the points inside these grids will be a good center for the cluster \(O^{\star }_{j}\).

We now give an overview of the paper. In Section 2, we give the algorithm for generating the list of sets of centers for an instance of the list k-means problem. The algorithm is analyzed in Section 3. In Section 4, we give the lower bound result on the size of the list Ł. In Section 5, we discuss how our algorithm can be extended to the list k-median problem. We conclude with a brief discussion on extensions to other metrics in Section 6.

2 The Algorithm

Consider an instance of the list k-means problem. Let X denote the set of points, and ε be a positive parameter. The algorithm List-k-means is described in Algorithm 2.1. It maintains a set C of centers, which is initially empty. Each recursive call to the function Sample-centers increases the size of C by one. In Step 2 of this function, the algorithm tries out various candidates which can be added to C (to increase its size by 1). First, it builds a multi-set S as follows: it independently samples (with replacement) \(O(k/{\varepsilon }^{3})\) points using \(D^{2}\)-sampling from X w.r.t. the set C. Further, it adds O(1/ε) copies of each of the centers in C to the set S. Having constructed S, we consider all subsets of size O(1/ε) of S – for each such subset, we try adding the mean of this set to C. Thus, each invocation of Sample-centers makes multiple recursive calls to itself (\(\binom {|S|}{M}\) of them, to be precise). It will be useful to think of the execution of this algorithm as a tree \(\mathcal {T}\) of depth k. Each node in the tree can be labeled with a set C – it corresponds to the invocation of Sample-centers with this set as C (and i being the depth of this node). The children of a node denote the recursive function calls made by the corresponding invocation of Sample-centers. Finally, the leaves denote the sets of candidate centers produced by the algorithm.

[Algorithm 2.1 (List-k-means and Sample-centers): pseudocode figure not reproduced here.]
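Since the pseudocode figure is not reproduced above, the following is only a simplified sketch of the recursion just described (our own rendering; the exact constants and step numbering follow the original Algorithm 2.1 and are only approximated here). It reuses `d2_sample` from the sketch after Definition 1.

```python
from itertools import combinations
import numpy as np

def sample_centers(X, k, eps, C, output):
    """Recursive sketch of Sample-centers: C is the current center set; completed
    sets of k centers are appended to `output` (the leaves of the tree T)."""
    if len(C) == k:
        output.append(list(C))
        return
    N = int(np.ceil(k / eps ** 3))     # number of D^2-sampled points (Step 2(a))
    M = int(np.ceil(16 / eps))         # subset size, and copies of each current center
    S = list(d2_sample(X, C, N)) + [c for c in C for _ in range(M)]   # Steps 2(a)-(c)
    for T in combinations(range(len(S)), M):                          # Step 2(d)
        c_new = np.mean([S[i] for i in T], axis=0)
        sample_centers(X, k, eps, C + [c_new], output)

def list_k_means(X, k, eps):
    """Repeat Sample-centers (2^k times, per the analysis in Section 3) and return L."""
    output = []
    for _ in range(2 ** k):
        sample_centers(X, k, eps, [], output)
    return output
```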

3 Analysis

In this section we prove Theorem 1 for the list k-means problem. Let Ł denote the set of candidate solutions produced by List-k-means, where a solution corresponds to a set of centers C of size k. These solutions are output at the leaves of the execution tree \(\mathcal {T}\). Fix a clustering \({\mathbb O^{\star }}=\{O^{\star }_{1}, \ldots , O^{\star }_{k}\}\) of X.

Recall that a node v at depth i in the execution tree \(\mathcal {T}\) corresponds to a set C of size i – call this set \(C_{v}\). Our proof will argue inductively that for each i, there will be a node v at depth i such that the centers chosen so far in \(C_{v}\) are good with respect to a subset of i clusters among \(O^{\star }_{1}, \ldots , O^{\star }_{k}\). We will argue that the following invariant P(i) is maintained during the recursive calls to Sample-centers:

P(i): With probability at least \(\frac {1}{2^{i-1}}\), there is a node \(v_{i}\) at depth (i − 1) in the tree \(\mathcal {T}\) and a set of (i − 1) distinct clusters \(O^{\star }_{j_{1}}, O^{\star }_{j_{2}}, ..., O^{\star }_{j_{i-1}}\) such that

$$ \forall l \in \{1, ..., i-1\}, {\Phi}_{c_{l}}(O^{\star}_{j_{l}}) \leq \left( 1 + \frac{{\varepsilon}}{2} \right) \cdot {\Delta}(O^{\star}_{j_{l}}) + \frac{{\varepsilon}}{2k} \cdot {\tt opt}_{k}({\mathbb O^{\star}}), $$
(2)

where \(c_{1}, \ldots , c_{i-1}\) are the centers in the set \(C_{v_{i}}\) corresponding to \(v_{i}\). Recall that \({\Delta }(O^{\star }_{j_{l}})\) refers to the optimal 1-means cost of \(O^{\star }_{j_{l}}\).

The proof of the main theorem follows easily from this invariant property – indeed, the invariant applied at depth k (i.e., the statement P(k + 1)) holds with probability at least \(1/2^{k}\). Since the algorithm List-k-means invokes Sample-centers \(2^{k}\) times, the probability of the statement in P(k + 1) being true in at least one of these invocations is at least a constant. We now prove the invariant by induction on i. The base case for i = 1 follows trivially: the vertex \(v_{1}\) is the root of the tree \(\mathcal {T}\) and \(C_{v_{1}}\) is empty. Now assume that P(i) holds for some i ≥ 1. We will prove that P(i + 1) also holds. We first condition on the event in P(i) (which happens with probability at least \(\frac {1}{2^{i-1}}\)). Let \(v_{i}\) and \(O^{\star }_{j_{1}}, \ldots , O^{\star }_{j_{i-1}}\) be as guaranteed by the invariant P(i). Let \(C_{v_{i}} = \{c_{1}, \ldots , c_{i-1}\}\) (as in the statement P(i)). For ease of notation, we assume without loss of generality that the index \(j_{l}\) is l for each l, and we shall use \(C_{i}\) to denote \(C_{v_{i}}\). Thus, the center \(c_{l}\) corresponds to the cluster \(O^{\star }_{l}\), 1 ≤ l ≤ i − 1. Note that for a cluster \(O^{\star }_{i^{\prime }}, i^{\prime } \geq i\), \({\Phi }_{C_{i}}({O^{\star }_{i^{\prime }}})\) is proportional to the probability that a point sampled from X using \(D^{2}\)-sampling w.r.t. \(C_{i}\) comes from the set \(O^{\star }_{i^{\prime }}\) – let \(\bar {i} \in \{i, \ldots , k\}\) be the index \(i^{\prime }\) for which \({\Phi }_{C_{i}}(O^{\star }_{i^{\prime }})\) is maximum. We will argue that the invocation of Sample-centers corresponding to \(v_{i}\) will try out a point \(c_{i}\) (in Step 2(d)(i)) such that the following property will hold with probability at least 1/2: \({\Phi }_{c_{i}}(O^{\star }_{\bar {i}}) \leq (1 + \varepsilon /2) \cdot {\Delta }(O^{\star }_{\bar {i}}) + ({\varepsilon }/2k) \cdot {\tt opt}_{k}({\mathbb O^{\star }})\). For doing this, we break the analysis into the following two cases, which are discussed in the two subsections that follow.

  • Case I \(\left (\boldmath {\frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{{\sum }_{j = 1}^{k} {\Phi }_{C_{i}}(O^{\star }_{j})} < \frac {{\varepsilon }}{13k}} \right )\): This captures the scenario where the probability of sampling from any of the uncovered clusters is very small. Note that for the classical k-means problem, this is not an issue, because in this case we can argue that the current set of centers \(C_{i}\) already provides a good approximation for the entire set of data points and we are done. However, for us this is an issue — for example, assuming i > 2, it is possible that some of the points in \(O^{\star }_{\bar {i}}\) are close to \(c_{1}\), whereas the remaining points of this cluster are close to \(c_{2}\). Still we need to output a center for \(O^{\star }_{\bar {i}}\). In this case, we argue that it will be sufficient to output a suitable convex combination of \(c_{1}\) and \(c_{2}\).

  • Case II \(\left (\boldmath {\frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{{\sum }_{j = 1}^{k} {\Phi }_{C_{i}}(O^{\star }_{j})} \geq \frac {{\varepsilon }}{13k}} \right )\): In this case, we argue that with good probability we will sample sufficiently many points from \(O^{\star }_{\bar {i}}\) during Step 2(a) of Sample-centers. Further, we will show that a suitable combination of such points along with the centers in \(C_{i}\) will yield a good center for \(O^{\star }_{\bar {i}}\).

Case I \(\left (\frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{{\sum }_{j = 1}^{k} {\Phi }_{C_{i}}(O^{\star }_{j})} < \frac {{\varepsilon }}{13k} \right )\)

In this case, we argue that a convex combination of the centers in \(C_{i}\) provides a good approximation to \({\Delta }(O^{\star }_{\bar {i}})\). Intuitively, this is because the points in \(O^{\star }_{\bar {i}}\) are close to the points in the set \(C_{i}\). This convex combination is essentially “simulated” by taking O(1/ε) copies of each of the centers \(c_{1}, ..., c_{i-1}\) in the multi-set S and then trying all possible subsets of size O(1/ε). The formal analysis follows. First, we note that \({\Phi }_{C_{i}}(O^{\star }_{\bar {i}})\) should be small compared to \({\tt opt}_{k}({\mathbb O^{\star }})\).

Lemma 2

\({\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) \leq \frac {{\varepsilon }}{6k} \cdot {\tt opt}_{k}({\mathbb O^{\star }})\) .

Proof

Let D denote \({\sum }_{j = 1}^{k} {\Phi }_{C_{i}}(O^{\star }_{j})\). The induction hypothesis and the fact that \(\forall j \geq i, {\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) \geq {\Phi }_{C_{i}}(O^{\star }_{j})\) imply that

$$D \,=\, \sum\limits_{j = 1}^{i-1} {\Phi}_{C_{i}}(O^{\star}_{j}) + \sum\limits_{j=i}^{k} {\Phi}_{C_{i}}(O^{\star}_{j}) \!\leq \left( 1 \,+\, \frac{{\varepsilon}}{2} \right) \cdot \sum\limits_{j = 1}^{i-1} {\Delta}(O^{\star}_{j}) + \frac{{\varepsilon}}{2} \cdot {\tt opt}_{k}({\mathbb O^{\star}}) + k \cdot {\Phi}_{C_{i}}({O^{\star}_{\bar{i}}}).$$

Since \({\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) \leq \frac {{\varepsilon }}{13k} \cdot D\) and \({\sum }_{j = 1}^{i-1} {\Delta }(O^{\star }_{j}) \leq {\tt opt}_{k}({\mathbb O^{\star }})\), we get \(D \leq \frac {{\varepsilon }}{13} \cdot D + \left (1 + {\varepsilon } \right ) \cdot {\tt opt}_{k}({\mathbb O^{\star }})\). Thus, \(D \leq \left (\frac {1+{\varepsilon }}{1-{\varepsilon }/13} \right ) \cdot {\tt opt}_{k}({\mathbb O^{\star }})\). Finally, \({\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) \leq \frac {{\varepsilon }}{13k} \cdot D \leq \frac {{\varepsilon }}{6k} \cdot {\tt opt}_{k}({\mathbb O^{\star }})\). □

For each point \(p \in O^{\star }_{\bar {i}}\), let c(p) denote the closest center in \(C_{i}\). We now define a multi-set \({O_{\bar {i}}^{\prime }}\) as \(\{c(p) : p \in O^{\star }_{\bar {i}} \}\). Note that \({O_{\bar {i}}^{\prime }}\) is obtained by taking multiple copies of points in \(C_{i}\). The remaining part of the proof proceeds in two steps. Let \(m^{\star }\) and \(m^{\prime }\) denote the means of \(O^{\star }_{\bar {i}}\) and \({O_{\bar {i}}^{\prime }}\) respectively. We first show that \(m^{\star }\) and \(m^{\prime }\) are close, and so, assigning all the points of \(O^{\star }_{\bar {i}}\) to \(m^{\prime }\) will have cost close to \({\Delta }(O^{\star }_{\bar {i}})\). Secondly, we show that if we have a good approximation \(m^{\prime \prime }\) to \(m^{\prime }\), then assigning all the points of \(O^{\star }_{\bar {i}}\) to \(m^{\prime \prime }\) will also incur small cost (comparable to \({\Delta }(O^{\star }_{\bar {i}})\)). We now carry out these steps in detail. Observe that

$$\begin{array}{@{}rcl@{}} \sum\limits_{p \in O^{\star}_{\bar{i}}} ||p-c(p)||{~}^{2} = {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}). \end{array} $$
(3)

Lemma 3

\(||m^{\star } - m^{\prime }||^{2} \leq \frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{|O^{\star }_{\bar {i}}|}\) .

Proof

Let n denote \(|O^{\star }_{\bar {i}}|\). Then,

$$||m^{\star} - m^{\prime}||^{2} = \frac{1}{n^{2}} \left| \left| \sum\limits_{p \in O^{\star}_{\bar{i}}} (p-c(p)) \right|\right|^{2} \leq \frac{1}{n} \sum\limits_{p \in O^{\star}_{\bar{i}}} ||p-c(p)||^{2} = \frac{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})}{n},$$

where the inequality follows from the Cauchy-Schwarz inequality. □

Now we show that \({\Delta }(O^{\star }_{\bar {i}})\) and \({\Delta }({O_{\bar {i}}^{\prime }})\) are close.

Lemma 4

\({\Delta }({O_{\bar {i}}^{\prime }}) \leq 2 \cdot {\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) + 2 \cdot {\Delta }(O^{\star }_{\bar {i}})\) .

Proof

The lemma follows by the following inequalities:

$$\begin{array}{@{}rcl@{}} {\Delta}({O_{\bar{i}}^{\prime}}) & = & \sum\limits_{p \in O^{\star}_{\bar{i}}} ||c(p)-m^{\prime}||^{2} \overset{\text{{Fact~1}}}{\leq} \sum\limits_{p \in O^{\star}_{\bar{i}}} ||c(p)-m^{\star}||^{2} \\ & \overset{\text{{Fact~2}}}{\leq} & 2 \cdot \sum\limits_{p \in O^{\star}_{\bar{i}}} \left( ||c(p)-p||^{2} + ||p-m^{\star}||^{2} \right) = 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + 2 \cdot {\Delta}(O^{\star}_{\bar{i}}). \end{array} $$

Finally, we argue that a good center for \({O_{\bar {i}}^{\prime }}\) will also serve as a good center for \(O^{\star }_{\bar {i}}\).

Lemma 5

Let \(m^{\prime \prime }\) be a point such that \({\Phi }_{m^{\prime \prime }}({O_{\bar {i}}^{\prime }}) \leq \left (1+\frac {{\varepsilon }}{8} \right ) \cdot {\Delta }({O_{\bar {i}}^{\prime }})\). Then \({\Phi }_{m^{\prime \prime }}(O^{\star }_{\bar {i}}) \leq \left (1 + \frac {{\varepsilon }}{2} \right ) \cdot {\Delta }(O^{\star }_{\bar {i}}) + \frac {\varepsilon }{2k} \cdot {\tt opt}_{k}({\mathbb O^{\star }})\).

Proof

Let \(n^{\star }\) denote \(|O^{\star }_{\bar {i}}|\). Observe that

$$\begin{array}{@{}rcl@{}} {\Phi}_{m^{\prime\prime}}(O^{\star}_{\bar{i}}) &=& \sum\limits_{p \in O^{\star}_{\bar{i}}} ||m^{\prime\prime}-p||^{2} \overset{\text{{Fact~1}}}{=} \sum\limits_{p \in O^{\star}_{\bar{i}}} ||m^{\star}-p||^{2} + n^{\star} \cdot ||m^{\star}-m^{\prime\prime}||^{2} \\ & \overset{\text{{Fact~2}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2n^{\star} \left( ||m^{\star} - m^{\prime}||^{2} + ||m^{\prime}-m^{\prime\prime}||^{2} \right) \overset{{\text{Lemma} 3 }}{\leq} {\Delta}(O^{\star}_{\bar{i}}) \\&&+ 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + 2 n^{\star} ||m^{\prime}-m^{\prime\prime}||^{2} \\ & \overset{\text{{Fact~1}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + 2 \left( {\Phi}_{m^{\prime\prime}}({O_{\bar{i}}^{\prime}}) - {\Delta}({O_{\bar{i}}^{\prime}})\right) \ \leq \ {\Delta}(O^{\star}_{\bar{i}}) \\&&+ 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + \frac{{\varepsilon}}{4} \cdot {\Delta}({O_{\bar{i}}^{\prime}}) \\ & \overset{\text{{Lemma~4}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + \frac{{\varepsilon}}{2} \cdot \left( {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + {\Delta}(O^{\star}_{\bar{i}})\right) \overset{\text{{Lemma~2}}}{\leq} \left( 1 + \frac{{\varepsilon}}{2} \right)\\&& \cdot {\Delta}(O^{\star}_{\bar{i}}) + \frac{\varepsilon}{2k} \cdot {\tt opt}_{k}({\mathbb O^{\star}}). \end{array} $$

This completes the proof of the lemma. □

The above lemma tells us that it will be sufficient to obtain a (1 + ε/8)-approximation to the 1-means problem for the dataset \({O_{\bar {i}}^{\prime }}\). Now, Lemma 1 tells us that there is a subset (again as a multi-set) \(O^{\prime \prime }\) of size \(\frac {16}{\varepsilon }\) of \({O_{\bar {i}}^{\prime }}\) such that the mean \(m^{\prime \prime }\) of these points satisfies the conditions of Lemma 5. Now, observe that \(O^{\prime \prime }\) will be a subset of the multi-set S constructed in Step 2 of the algorithm Sample-centers – indeed, in Step 2(c), we add more than \(\frac {16}{{\varepsilon }}\) copies of each point in \(C_{i}\) to S. Now, in Step 2(d), we will try out all subsets of size \(\frac {16}{{\varepsilon }}\) of S, and for each such subset, we will try adding its mean to \(C_{i}\). In particular, there will be a recursive call of this function in which we will have \(C_{i+1} = C_{i} \cup \{m^{\prime \prime }\}\) as the set of centers. Lemma 5 now implies that \(C_{i+1}\) will satisfy the invariant P(i + 1). Thus, we are done in this case.

Case II \(\left (\frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{{\sum }_{j} {\Phi }_{C_{i}}(O^{\star }_{j})} \geq \frac {{\varepsilon }}{13k} \right )\)

In this case, we would like to prove that we add a good approximation to the mean of \(O^{\star }_{\bar {i}}\) to the set \(C_{i}\). Again, consider the invocation of Sample-centers corresponding to \(C_{i}\). We want the multi-set S to contain a good representation of the points in the set \(O^{\star }_{\bar {i}}\). Secondly, in order to apply Lemma 1, we will need this representation to be a uniform sample from \(O^{\star }_{\bar {i}}\). Since \({\Phi }_{C_{i}}(O^{\star }_{\bar {i}}) \geq \frac {\varepsilon }{13k} \cdot {\sum }_{j} {\Phi }_{C_{i}}(O^{\star }_{j})\), the probability that a point sampled using \(D^{2}\)-sampling w.r.t. \(C_{i}\) is from \(O^{\star }_{\bar {i}}\) is not too small. So, the multi-set S will have a non-negligible representation from the set \(O^{\star }_{\bar {i}}\). However, the points from \(O^{\star }_{\bar {i}}\) in S may not be a uniform sample from \(O^{\star }_{\bar {i}}\). Indeed, suppose a good fraction of the points of \(O^{\star }_{\bar {i}}\) are close to \(C_{i}\), while the remaining points of \(O^{\star }_{\bar {i}}\) are quite far from \(C_{i}\). Then, \(D^{2}\)-sampling w.r.t. \(C_{i}\) will not give us a uniform sample from \(O^{\star }_{\bar {i}}\). To alleviate this problem, we take sufficiently many copies of the points in \(C_{i}\) and add them to the multi-set S. In some sense, these copies act as proxies for points in \(O^{\star }_{\bar {i}}\) that are too close to \(C_{i}\). Finally, we argue that one of the subsets of S “simulates” a uniform sample from \(O^{\star }_{\bar {i}}\) and the mean of this subset provides a good approximation for the mean of \(O^{\star }_{\bar {i}}\). The formal analysis follows.

We divide the points in \(O^{\star }_{\bar {i}}\) into two parts – points which are close to a center in \(C_{i}\), and the remaining points. More formally, let the radius R be given by

$$ R^{2} = \frac{\varepsilon^{2}}{41} \cdot \frac{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})}{|O^{\star}_{\bar{i}}|} $$
(4)

Define \({O_{\bar {i}}^{n}}\) as the set of points in \(O^{\star }_{\bar {i}}\) which are within distance R of a center in \(C_{i}\), and let \({O_{\bar {i}}^{f}}\) be the rest of the points in \(O^{\star }_{\bar {i}}\). As in Case I, we define a new set \({O_{\bar {i}}^{\prime }}\) where each point in \({O_{\bar {i}}^{n}}\) is replaced by a copy of the corresponding point in \(C_{i}\). For a point \(p \in {O_{\bar {i}}^{n}}\), define c(p) as the closest center in \(C_{i}\) to p. Now define the multi-set \({O_{\bar {i}}^{\prime }}\) as \({O_{\bar {i}}^{f}} \cup \{c(p): p \in {O_{\bar {i}}^{n}}\}\). Intuitively, \({O_{\bar {i}}^{\prime }}\) denotes the set of points that is the same as \(O^{\star }_{\bar {i}}\) except that points close to centers in \(C_{i}\) have been “collapsed” to these centers by taking an appropriate number of copies. Clearly, \(|{O_{\bar {i}}^{\prime }}| = |O^{\star }_{\bar {i}}|\). At a high level, we will argue that any center that provides a good 1-means approximation for \({O_{\bar {i}}^{\prime }}\) also provides a good approximation for \(O^{\star }_{\bar {i}}\). We will then focus on analyzing whether the invocation of Sample-centers tries out a good center for \({O_{\bar {i}}^{\prime }}\).

We give some more notation. Let \(m^{\star }\) and \(m^{\prime }\) denote the means of \(O^{\star }_{\bar {i}}\) and \({O_{\bar {i}}^{\prime }}\) respectively. Let \(n^{\star }\) and n denote the sizes of the sets \(O^{\star }_{\bar {i}}\) and \({O_{\bar {i}}^{n}}\) respectively. First, we show that \({\Delta }(O^{\star }_{\bar {i}})\) is large with respect to R.

Lemma 6

\({\Delta }(O^{\star }_{\bar {i}}) = {\Phi }_{m^{\star }}(O^{\star }_{\bar {i}}) \geq \frac {16 n}{{\varepsilon }^{2}} R^{2}\) .

Proof

Let c be the center in \(C_{i}\) which is closest to \(m^{\star }\). We divide the proof into two cases:

  1. (i)

    \(||m^{\star } - c|| \geq \frac {5}{{\varepsilon }} \cdot R\): For any point \(p \in {O_{\bar {i}}^{n}}\), triangle inequality implies that

    $$||p-m^{\star}|| \geq ||c(p)-m^{\star}|| - ||c(p)-p|| \geq \frac{5}{{\varepsilon}} \cdot R - R \geq \frac{4}{{\varepsilon}} \cdot R.$$

    Therefore,

    $${\Delta}(O^{\star}_{\bar{i}}) \geq \sum\limits_{p \in {O_{\bar{i}}^{n}}} ||p-m^{\star}||^{2} \geq \frac{16 n}{{\varepsilon}^{2}} R^{2}.$$
  2. (ii)

    \( ||m^{\star } - c|| < \frac {5}{{\varepsilon }} \cdot R \): In this case, we have

    $$\begin{array}{@{}rcl@{}} {\Phi}_{m^{\star}}(O^{\star}_{\bar{i}}) & \overset{\text{{Fact~1}}}{=} & {\Phi}_{c} (O^{\star}_{\bar{i}}) - n^{\star} \cdot ||m^{\star} - c||^{2} \geq {\Phi}_{C_{i}} (O^{\star}_{\bar{i}}) - n^{\star} \cdot ||m^{\star} - c||^{2} \\ & \overset{(4)}{\geq}& \frac{41 n^{\star} }{{\varepsilon}^{2}} \cdot R^{2} - \frac{25 n^{\star}}{{\varepsilon}^{2}} \cdot R^{2} \geq \frac{16 n}{\varepsilon^{2}} R^{2}. \end{array} $$

This completes the proof of the lemma. □

Lemma 7

\(||m^{\star } - m^{\prime }||^{2} \leq \frac {n}{n^{\star }} \cdot R^{2}\)

Proof

Since the only difference between \(O^{\star }_{\bar {i}}\) and \({O_{\bar {i}}^{\prime }}\) is in the points of \({O_{\bar {i}}^{n}}\), we get

$$||m^{\star} - m^{\prime}||^{2} = \frac{1}{(n^{\star})^{2}} \left| \left| \sum\limits_{p \in {O_{\bar{i}}^{n}}} (p-c(p)) \right|\right|^{2} \leq \frac{n}{(n^{\star})^{2}} \sum\limits_{p \in {O_{\bar{i}}^{n}}} ||p-c(p)||^{2} \leq \frac{n^{2}}{(n^{\star})^{2}} R^{2} \leq \frac{n}{n^{\star}} \cdot R^{2}. $$

where the first inequality follows from the Cauchy-Schwarz inequality. □

We now show that \({\Delta }({O_{\bar {i}}^{\prime }})\) is close to \({\Delta }(O^{\star }_{\bar {i}})\).

Lemma 8

\({\Delta }({O_{\bar {i}}^{\prime }}) \leq 4 n R^{2} + 2 \cdot {\Delta }(O^{\star }_{\bar {i}})\) .

Proof

The lemma follows from the following sequence of inequalities:

$$\begin{array}{@{}rcl@{}} {\Delta}({O_{\bar{i}}^{\prime}}) & = & \sum\limits_{p \in {O_{\bar{i}}^{n}}} ||c(p)-m^{\prime}||^{2} + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\prime}||^{2} \\ & \overset{\text{{Fact~2}}}{\leq} & \sum\limits_{p \in {O_{\bar{i}}^{n}}} 2(||c(p)-p||^{2} + ||p-m^{\prime}||^{2}) + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\prime}||^{2}\\ & \leq & 2nR^{2} + 2 \sum\limits_{p \in O^{\star}_{\bar{i}}} ||p-m^{\prime}||^{2} = 2nR^{2} + 2 \cdot {\Phi}_{m^{\prime}}(O^{\star}_{\bar{i}}) \\ & \overset{\text{{Fact~1}}}{=} & 2nR^{2} + 2 \cdot \left( {\Delta}(O^{\star}_{\bar{i}}) + n^{\star} \cdot ||m^{\prime} - m^{\star}||^{2}\right) \\ & \overset{\text{{Lemma~7}}}{\leq} & 4nR^{2} + 2 \cdot {\Delta}(O^{\star}_{\bar{i}}). \end{array} $$

This completes the proof of the lemma. □

We now argue that any center that is good for \({O_{\bar {i}}^{\prime }}\) is also good for \(O^{\star }_{\bar {i}}\).

Lemma 9

Let \(m^{\prime \prime }\) be such that \({\Phi }_{m^{\prime \prime }}({O_{\bar {i}}^{\prime }}) \leq \left (1 + \frac {{\varepsilon }}{16} \right ) \cdot {\Delta }({O_{\bar {i}}^{\prime }})\). Then \({\Phi }_{m^{\prime \prime }}(O^{\star }_{\bar {i}}) \leq \left (1 + \frac {{\varepsilon }}{2} \right ) \cdot {\Delta }(O^{\star }_{\bar {i}})\).

Proof

The lemma follows from the following inequalities:

$$\begin{array}{@{}rcl@{}} {\Phi}_{m^{\prime\prime}}(O^{\star}_{\bar{i}}) &=& \sum\limits_{p \in O^{\star}_{\bar{i}}} ||m^{\prime\prime}-p||^{2} \overset{\text{{Fact~1}}}{=} \sum\limits_{p \in O^{\star}_{\bar{i}}} ||m^{\star}-p||^{2} + n^{\star} \cdot ||m^{\star}-m^{\prime\prime}||^{2} \\ & \overset{\text{{Fact~2}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2n^{\star} \left( ||m^{\star} - m^{\prime}||^{2} + ||m^{\prime}-m^{\prime\prime}||^{2} \right) \overset{\text{{Lemma~7}}}{\leq} {\Delta}(O^{\star}_{\bar{i}}) + 2n R^{2} \\&&+ 2n^{\star} \cdot ||m^{\prime}-m^{\prime\prime}||^{2} \\ & \overset{\text{{Fact~1}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2nR^{2} + 2 \cdot \left( {\Phi}_{m^{\prime\prime}}({O_{\bar{i}}^{\prime}}) - {\Delta}({O_{\bar{i}}^{\prime}})\right) \ \leq \ {\Delta}(O^{\star}_{\bar{i}}) + 2 nR^{2} \\&&+ \frac{{\varepsilon}}{8} \cdot {\Delta}({O_{\bar{i}}^{\prime}}) \\ & \overset{\text{{Lemma~8}}}{\leq} & {\Delta}(O^{\star}_{\bar{i}}) + 2 nR^{2} + \frac{{\varepsilon}}{2} \cdot nR^{2} + \frac{{\varepsilon}}{4} \cdot {\Delta}(O^{\star}_{\bar{i}}) \overset{\text{{Lemma~6}}}{\leq} \left( 1 + \frac{{\varepsilon}}{2} \right) \cdot {\Delta}(O^{\star}_{\bar{i}}). \end{array} $$

This completes the proof of the lemma. □

Given the above lemma, all we need to argue is that our algorithm indeed considers a center \(m^{\prime \prime }\) such that \({\Phi }_{m^{\prime \prime }}({O_{\bar {i}}^{\prime }}) \leq (1+\varepsilon /16) \cdot {\Delta }({O_{\bar {i}}^{\prime }})\). For this, we would need about O(1/ε) uniform samples from \({O_{\bar {i}}^{\prime }}\). However, our algorithm can only sample using \(D^{2}\)-sampling w.r.t. \(C_{i}\). For ease of notation, let \({c(O_{\bar {i}}^{n})}\) denote the multi-set \(\{c(p): p \in {O_{\bar {i}}^{n}}\}\). Recall that \({O_{\bar {i}}^{\prime }}\) consists of \({O_{\bar {i}}^{f}}\) and \({c(O_{\bar {i}}^{n})}\). The first observation is that the probability of sampling an element from \({O_{\bar {i}}^{f}}\) is reasonably large (proportional to ε/k). Using this fact, we show how to sample from \({O_{\bar {i}}^{\prime }}\) (almost uniformly). Finally, we show how to convert this almost uniform sampling to uniform sampling (at the cost of increasing the size of the sample).

Lemma 10

Let x be a point obtained by \(D^{2}\)-sampling from X w.r.t. \(C_{i}\). Then, \(\mathbf {Pr}[x \in {O_{\bar {i}}^{f}}] \geq \frac {{\varepsilon }}{15k}\). Further, for any point \(p \in {O_{\bar {i}}^{f}}\), \(\mathbf {Pr}[x=p] \geq \frac {\gamma }{|O^{\star }_{\bar {i}}|}\), where γ denotes \(\frac {\varepsilon ^{3}}{533 k}\).

Proof

Note that \({\sum }_{p \in O^{\star }_{\bar {i}} \setminus {O_{\bar {i}}^{f}}} \mathbf {Pr}[x=p] \leq \frac {R^{2}}{{\Phi }_{C_{i}}(X)} \cdot |O^{\star }_{\bar {i}}| \leq \frac {{\varepsilon }^{2}}{41} \cdot \frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})} {{\Phi }_{C_{i}}(X)}\). Therefore, the fact that we are in case II implies that

$$\mathbf{Pr}[x \in {O_{\bar{i}}^{f}}] \geq \mathbf{Pr}[x \in O^{\star}_{\bar{i}}] - \mathbf{Pr}[x \in O^{\star}_{\bar{i}} \setminus {O_{\bar{i}}^{f}}] \geq \frac{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})}{{\Phi}_{C_{i}}(X)} - \frac{{\varepsilon}^{2}}{41} \frac{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})} {{\Phi}_{C_{i}}(X)} \geq \frac{{\varepsilon}}{15k}.$$

Also, if \(x \in {O_{\bar {i}}^{f}}\), then \({\Phi }_{C_{i}}(\{x\}) \geq R^{2}=\frac {{\varepsilon }^{2}}{41} \cdot \frac {{\Phi }_{C_{i}}(O^{\star }_{\bar {i}})}{|O^{\star }_{\bar {i}}|}\). Therefore,

$$\frac{{\Phi}_{C_{i}}(\{x\})}{{\Phi}_{C_{i}}(X)} \geq \frac{{\varepsilon}}{13k} \cdot \frac{R^{2}}{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})} \geq \frac{\varepsilon}{13k} \cdot \frac{\varepsilon^{2}}{41} \cdot \frac{1}{|O^{\star}_{\bar{i}}|} \geq \frac{\varepsilon^{3}}{533 k} \cdot \frac{1}{|O^{\star}_{\bar{i}}|}. $$

This completes the proof of the lemma. □

Let \(X_{1}, \ldots , X_{l}\) be l points sampled independently using \(D^{2}\)-sampling w.r.t. \(C_{i}\). We construct a new set of random variables \(Y_{1}, \ldots , Y_{l}\). Each variable \(Y_{u}\) will depend on \(X_{u}\) only, and will take values either in \({O_{\bar {i}}^{\prime }}\) or will be ⊥. These variables are defined as follows: if \(X_{u} \notin {O_{\bar {i}}^{f}}\), we set \(Y_{u}\) to ⊥. Otherwise, we assign \(Y_{u}\) to one of the following random variables with equal probability: (i) \(X_{u}\) or (ii) a random element of the multi-set \({c(O_{\bar {i}}^{n})}\). The following observation follows from Lemma 10.

Corollary 2

For a fixed index u, and an element \(x \in {O_{\bar {i}}^{\prime }}\), \(\mathbf {Pr}[Y_{u}=x] \geq \frac {\gamma ^{\prime }}{|{O_{\bar {i}}^{\prime }}|},\) where \(\gamma ^{\prime } = \gamma /2\).

Proof

If \(x \in {O_{\bar {i}}^{f}}\), then we know from Lemma 10 that \(X_{u}\) is x with probability at least \(\frac {\gamma }{|{O_{\bar {i}}^{\prime }}|}\) (note that \({O_{\bar {i}}^{\prime }}\) and \(O^{\star }_{\bar {i}}\) have the same cardinality). Conditioned on this event, \(Y_{u}\) will be equal to \(X_{u}\) with probability 1/2. Now suppose \(x \in {c(O_{\bar {i}}^{n})}\). Lemma 10 implies that \(X_{u}\) is an element of \({O_{\bar {i}}^{f}}\) with probability at least \(\frac {{\varepsilon }}{15k}\). Conditioned on this event, \(Y_{u}\) will be equal to x with probability at least \(\frac {1}{2} \cdot \frac {1}{|{c(O_{\bar {i}}^{n})}|}\). Therefore, the probability that \(Y_{u}\) is equal to x is at least \(\frac {{\varepsilon }}{15k} \cdot \frac {1}{2|{c(O_{\bar {i}}^{n})}|} \geq \frac {{\varepsilon }}{30k |{O_{\bar {i}}^{\prime }}|} \geq \frac {\gamma ^{\prime }}{|{O_{\bar {i}}^{\prime }}|}\). □

Corollary 2 shows that we can obtain samples from \({O_{\bar {i}}^{\prime }}\) which are nearly uniform (up to a constant factor). To convert this to a set of uniform samples, we use the idea of [9]. For an element \(x \in {O_{\bar {i}}^{\prime }}\), let \(\gamma _{x}\) be such that \(\frac {\gamma _{x}}{|{O_{\bar {i}}^{\prime }}|}\) denotes the probability that the random variable \(Y_{u}\) is equal to x (note that this is independent of u). Corollary 2 implies that \(\gamma _{x} \geq \gamma ^{\prime }\). We define a new set of independent random variables \(Z_{1}, \ldots , Z_{l}\). The random variable \(Z_{u}\) will depend on \(Y_{u}\) only. If \(Y_{u}\) is ⊥, \(Z_{u}\) is also ⊥. If \(Y_{u}\) is equal to \(x \in {O_{\bar {i}}^{\prime }}\), then \(Z_{u}\) takes the value x with probability \(\frac {\gamma ^{\prime }}{\gamma _{x}}\), and ⊥ with the remaining probability. Note that \(Z_{u}\) is either ⊥ or one of the elements of \({O_{\bar {i}}^{\prime }}\). Further, conditioned on the latter event, it is a uniform sample from \({O_{\bar {i}}^{\prime }}\). We can now prove the key lemma.
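A small sketch of this two-step conversion (our own illustration; in the analysis the probabilities \(\gamma _{x}\) are only used existentially, whereas here we pretend they are available through a callable):

```python
import numpy as np

def uniformize(Y, gamma_of, gamma_prime, rng=None):
    """Given near-uniform samples Y_1,...,Y_l (None plays the role of the null symbol),
    where Pr[Y_u = x] = gamma_of(x)/N >= gamma_prime/N, thin each sample so that every
    surviving value is a uniform sample from the underlying multi-set of size N."""
    rng = rng or np.random.default_rng()
    Z = []
    for y in Y:
        if y is None:
            Z.append(None)                                  # Y_u = bottom  =>  Z_u = bottom
        elif rng.random() < gamma_prime / gamma_of(y):
            Z.append(y)                                     # keep with probability gamma'/gamma_x
        else:
            Z.append(None)                                  # otherwise reject
    return Z
```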

Lemma 11

Let l be \(\frac {128}{\gamma ^{\prime } \cdot {\varepsilon }}\), and let \(m^{\prime \prime }\) denote the mean of the non-null samples among \(Z_{1}, \ldots , Z_{l}\). Then, with probability at least 1/2, \({\Phi }_{m^{\prime \prime }}({O_{\bar {i}}^{\prime }}) \leq (1+{\varepsilon }/16) \cdot {\Delta }({O_{\bar {i}}^{\prime }})\).

Proof

Note that a random variable \(Z_{u}\) is equal to a specific element of \({O_{\bar {i}}^{\prime }}\) with probability equal to \(\frac {\gamma ^{\prime }}{|{O_{\bar {i}}^{\prime }}|}\). Therefore, it takes the value ⊥ with probability \(1 - \gamma ^{\prime }\). Now consider a different set of iid random variables \(Z_{u}^{\prime }\), 1 ≤ u ≤ l, defined as follows: each \(Z_{u}^{\prime }\) tosses a coin with probability of Heads being \(\gamma ^{\prime }\). If we get Heads, it is equal to a uniformly random element of \({O_{\bar {i}}^{\prime }}\); otherwise it gets the value ⊥. It is easy to check that the joint distribution of the random variables \(Z_{u}^{\prime }\) is identical to that of the random variables \(Z_{u}\). Thus, it suffices to prove the statement of the lemma for the random variables \(Z_{u}^{\prime }\).

Now we condition on the coin tosses of the random variables \(Z_{u}^{\prime }\). Let n be the number of random variables which are not ⊥ (n is a deterministic quantity because we have conditioned on the coin tosses). Let \(m^{\prime \prime }\) be the mean of the non-⊥ variables among \(Z_{1}^{\prime }, \ldots , Z_{l}^{\prime }\). If n happens to be larger than 64/ε, Lemma 1 implies that with probability at least 3/4, \({\Phi }_{m^{\prime \prime }}({O_{\bar {i}}^{\prime }}) \leq (1+{\varepsilon }/16) \cdot {\Delta }({O_{\bar {i}}^{\prime }})\).

Finally, observe that the expected number of non-⊥ random variables is \(\gamma ^{\prime } l \geq 128/{\varepsilon }\). Therefore, with probability at least 3/4, the number of non-⊥ elements will be at least 64/ε. □

Let \(C_{i}^{(l)}\) denote the multi-set obtained by taking l copies of each of the centers in \(C_{i}\). Now observe that all the non-⊥ elements among \(Y_{1}, \ldots , Y_{l}\) are elements of \(\{X_{1}, \ldots , X_{l}\} \cup C_{i}^{(l)}\), and so the same must hold for \(Z_{1}, \ldots , Z_{l}\). This implies that in Step 2(d) of the algorithm Sample-centers, we would have tried adding the point \(m^{\prime \prime }\) as described in Lemma 11. Therefore, the induction hypothesis continues to hold with probability at least 1/2. This concludes the proof of Theorem 1.

4 Lower Bound

In this section, we prove the lower bound result, Theorem 2. Consider parameters k and ε (assume ε is a small enough constant). We first define the set of points X. Let m denote \(\lceil \frac {1}{\sqrt {\varepsilon }}\rceil \). The points will belong to \(\mathbb {R}^{d}\), where d = km. The set X will have d points, namely, \(e_{1}, \ldots , e_{d}\), where \(e_{i}\) denotes the vector which has all coordinates 0, except for the i-th coordinate, which is 1. Now, we define the set \(\mathbb {C}\) of clusterings of X. The set \(\mathbb {C}\) will consist of those clusterings \(\mathbb {O} = \{O_{1}, \ldots , O_{k}\}\) for which each of the clusters has exactly m points. Observe that

$$\begin{array}{@{}rcl@{}} |\mathbb{C}| = \frac{(km)!}{(m!)^{k}} \end{array} $$
(5)

Now fix a set C of k centers, \(c_{1}, \ldots , c_{k}\). We will now upper bound the number of clusterings \(\mathbb {O} \in \mathbb {C}\) for which

$$\begin{array}{@{}rcl@{}} {\tt cost}_{C}(\mathbb{O}) \leq (1+{\varepsilon}) {\tt opt}_{k}(\mathbb{O}). \end{array} $$
(6)

Let \(\mathbb {O}=\{O_{1}, \ldots , O_{k}\}\) be as above. Note that

$$\begin{array}{@{}rcl@{}} {\tt opt}_{k}(\mathbb{O}) = \sum\limits_{i = 1}^{k} {\Delta}(O_{i}) = km \cdot \left( (1 - 1/m)^{2} + (m-1) \cdot 1/m^{2}\right) = k(m-1) \end{array} $$
(7)

Recall that \({\tt cost}_{C}(\mathbb {O})\) is obtained by assigning each cluster in \(\mathbb {O}\) to a unique center in C, and then computing the sum of squared distances of the points in X to the corresponding centers. Wlog we rearrange the clusters in \(\mathbb {O}\) such that the points in \(O_{j}\) are assigned to \(c_{j}\). For a vector v, we shall use \((v)_{j}\) to denote the j-th coordinate of v. For every center \(c_{r}\) we define a corresponding vector \(v_{r}\) as follows:

$$({v_{r}})_{j} = \left\{ \begin{array}{cc} ({c_{r}})_{j} & {\text{ if \(e_{j} \notin O_{r}\)}} \\ ({c_{r}})_{j} - \frac{1}{m} & {\text{ otherwise}} \end{array} \right.$$

Lemma 12

\({\sum }_{r = 1}^{k} ||v_{r}||^{2} \leq \frac {k}{m(m-1)}\) .

Proof

Fix a cluster \(O_{r}\). Let \(m_{r}\) denote the mean of \(O_{r}\). Note that \((m_{r})_{j}\) is 1/m if \(e_{j} \in O_{r}\), and 0 otherwise. We now simplify the expression \({\tt cost}_{C}(\mathbb {O})\) as follows:

$$\begin{array}{@{}rcl@{}} {\tt cost}_{C}(\mathbb{O}) & = & \sum\limits_{r = 1}^{k} \sum\limits_{e_{j} \in O_{r}} ||e_{j} - c_{r}||^{2} \overset{\text{{Fact~1}}}{=} \sum\limits_{r = 1}^{k} \sum\limits_{e_{j} \in O_{r}} \left( ||e_{j} - m_{r}||^{2} + ||m_{r} - c_{r}||^{2} \right) \\ & = & {\tt opt}_{k}(\mathbb{O}) + \sum\limits_{r = 1}^{k} m \cdot ||m_{r} - c_{r}||^{2} = {\tt opt}_{k}(\mathbb{O}) + m \sum\limits_{r = 1}^{k} ||v_{r}||^{2} \end{array} $$

By our assumption, \({\tt cost}_{C}(\mathbb {O}) \leq (1+{\varepsilon }) {\tt opt}_{k}(\mathbb {O})\). Therefore,

$$\sum\limits_{r = 1}^{k} ||v_{r}||^{2} \leq \frac{{\varepsilon}}{m} \cdot {\tt opt}_{k}(\mathbb{O}) \overset{(7)}{=} \frac{{\varepsilon}}{m} \cdot k (m-1) \leq \frac{k}{m(m-1)}.$$

Now define a corresponding assignment function f : X → {1, …, k} as follows: \(f(e_{j}) = r\) if \(e_{j} \in O_{r}\). Let \(\mathbb {O}^{\prime }=\{O_{1}^{\prime }, \ldots , O_{k}^{\prime }\}\) be another clustering in \(\mathbb {C}\) which satisfies condition (6). Define vectors \(v_{r}^{\prime }\) and the assignment function \(f^{\prime }\) in a similar manner. The following lemma shows that f and \(f^{\prime }\) cannot differ in too many coordinates.

Lemma 13

Let D denote the set of indices j for which \(f(e_{j}) \neq f^{\prime }(e_{j})\). Then |D| ≤ d/2.

Proof

Assume for the sake of contradiction that |D| > d/2. For a cluster \(O_{r}\), let \(D_{r}\) denote the set of indices j such that \(e_{j} \in O_{r} \triangle O_{r}^{\prime }\). Observe that for \(j \in D_{r}\), \((v_{r})_{j}\) and \(({v_{r}^{\prime }})_{j}\) differ (in absolute value) by 1/m. Therefore,

$$||v_{r}^{\prime}||^{2} = \sum\limits_{j \in D_{r}} \left( ({v_{r}})_{j} \pm \frac{1}{m} \right)^{2} \geq \frac{|D_{r}|}{m^{2}} - \frac{2}{m} \sum\limits_{j \in D_{r}} |({v_{r}})_{j}| .$$

Summing over r = 1, …, k, we get

$$\sum\limits_{r = 1}^{k} ||v_{r}^{\prime}||^{2} \geq \frac{2|D|}{m^{2}} - \frac{2}{m} \sum\limits_{r = 1}^{k} \sum\limits_{j \in D_{r}} |({v_{r}})_{j}| \geq \frac{d}{m^{2}} - \frac{2}{m} \cdot \sqrt{2d} \cdot \sqrt{\sum\limits_{r = 1}^{k} \sum\limits_{j \in D_{r}} |({v_{r}})_{j}|^{2}},$$

where the last inequality follows from Cauchy-Schwarz, and the observation that \({\sum }_{r} |D_{r}| = 2|D| > d\). Using Lemma 12, we see that

$$\sum\limits_{r = 1}^{k} ||v_{r}^{\prime}||^{2} \geq \frac{k}{m} - \frac{2}{m} \cdot \sqrt{2km} \cdot \sqrt{\sum\limits_{r = 1}^{k} ||v_{r}||^{2}} \geq \frac{k}{m} - \frac{4k}{m \sqrt{m-1}} > \frac{k}{m(m-1)},$$

assuming m is a large enough constant. But this contradicts Lemma 12. □

The above lemma shows that the number of clusterings in \(\mathbb {C}\) satisfying condition (6) is small.

Corollary 3

The number of clusterings in \(\mathbb {C}\) satisfying condition (6) is at most \(\binom {km}{km/2} \cdot \frac {(km/2)!}{((m/2)!)^{k}}\) .

Proof

Fix a clustering \(\mathbb {O}=\{O_{1}, \ldots , O_{k}\}\) satisfying condition (6), and let f be the corresponding assignment function. How many assignment functions (corresponding to a clustering in \(\mathbb {C}\)) can differ from f in at most d/2 coordinates? There are at most \(\binom {km}{km/2}\) ways of choosing the coordinates in which the two functions differ. Consider a fixed choice of such coordinates, and say there are \(d_{r}\) coordinates corresponding to points in \(O_{r}\). Let \(d^{\prime }\) denote \({\sum }_{r} d_{r}\) (and so, \(d^{\prime } \leq d/2\)). Now, we need to partition these coordinates into sets of size \(d_{1}, \ldots , d_{k}\) (note that \(f^{\prime }\) corresponds to a clustering where all clusters are of equal size). The number of possibilities here is \(\frac {d^{\prime }!}{d_{1}! {\ldots } d_{k}!}\), which is at most \(\frac {(d/2)!}{((d/2k)!)^{k}}\). □

Recall that we want Ł to contain enough elements such that for at least half of the clusterings in \(\mathbb {C}\), condition (6) is satisfied with respect to some set of centers in Ł. Therefore, Corollary 3 and (5) imply that

$$|\L| \geq \frac{\frac{(km)!}{(m!)^{k}}}{\binom{km}{km/2} \cdot \frac{(km/2)!}{((m/2)!)^{k}}} = 2^{\tilde{\Omega}(km)} = 2^{\tilde{\Omega}(k/\sqrt{\varepsilon})}.$$
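To see the last estimate (a rough Stirling computation, assuming k exceeds a fixed constant): since \(\binom {km}{km/2} \leq 2^{km}\),

$$\log_{2} |\L| \ \geq \ \log_{2} \frac{(km)!}{(m!)^{k}} - km - \log_{2}\frac{(km/2)!}{((m/2)!)^{k}} \ = \ km \log_{2} k - km - \frac{km}{2}\log_{2} k \pm O(k \log (km)) \ = \ \frac{km}{2}\log_{2} k - km \pm O(k \log(km)), $$

which is \(\tilde {\Omega }(km) = \tilde {\Omega }(k/\sqrt {\varepsilon })\) once k is larger than a fixed constant.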

This concludes the proof of Theorem 2.

5 Extension to the List k-median Problem

The setting for the list k-median problem is the same as that for the list k-means problem, except for the fact that distances are measured using the Euclidean norm (instead of the square of the Euclidean norm). As before, for a set C of k centers, and a clustering \(\mathbb {O}=\{O_{1}, \ldots , O_{k}\}\) of a set of points X, define \({\tt cost}_{C}(\mathbb {O})\) as the minimum, over all permutations π of C, of \({\sum }_{i = 1}^{k} {\sum }_{x \in O_{i}} ||x-c_{\pi (i)}||\). Define \({\tt opt}_{k}(\mathbb {O})\) and \({\Phi }_{C}(X)\) analogously.

For a set of points X, let Δ(X) denote the optimal 1-median cost of X, i.e., \(\min _{c \in {\mathbb {R}^{d}}} {\sum }_{x \in X} ||x-c||\). We no longer have an analogue of Fact 1: for a set of points X, if \(c^{\star}\) denotes the optimal center with respect to the 1-median objective, and c is a point such that \({\Phi }_{c}(X) \leq (1+{\varepsilon }) \cdot {\Phi }_{c^{\star }}(X)\), it is possible that \(||c - c^{\star}||\) is large. This also implies that there is no analogue of Lemma 1. However, instead of the approximate triangle inequality (Fact 2), we now have the exact triangle inequality of the Euclidean metric.

We shall use a result of Kumar et al. [11], which gives an alternative to Lemma 1, although it outputs several candidate centers instead of just the mean of a random sample.

Lemma 14 (Theorem 5.4 of [11])

Given a random sample (with replacement) R of size \(\frac {1}{{\varepsilon }^{4}}\) from a set of points \(X \subset {\mathbb {R}}^{d}\), there is a procedure construct(R), which outputs a set core(R) of size \(2^{\left (1/{\varepsilon }\right )^{O(1)}}\) such that the following event happens with probability at least 1/2: there is at least one point \(c \in \) core(R) such that \({\Phi }_{c}(X) \leq (1 + {\varepsilon }) \cdot {\Delta }(X)\). The time taken by the procedure construct(R) is \(O \left (2^{\left (1/{\varepsilon }\right )^{O(1)}} \cdot d \right )\).

Now we explain the changes needed in the algorithm and the analysis. Given a set of points X and another set of points C, D-sampling from X w.r.t. C samples a point \(x \in X\) with probability proportional to \({\Phi }_{C}(x)\), i.e., \(\min _{c \in C} ||c-x||\).
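A minimal sketch of this sampling step (illustrative code with placeholder names; it assumes X and C are given as arrays of points, with C non-empty) is the following.

```python
import numpy as np

def d_sample(X, C, rng=None):
    """D-sampling for the k-median setting: return one point of X chosen with
    probability proportional to Phi_C(x) = min_{c in C} ||c - x||."""
    rng = rng or np.random.default_rng()
    X = np.asarray(X, dtype=float)
    C = np.asarray(C, dtype=float)
    # distance of every point to its nearest current center (Euclidean, not squared)
    dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2).min(axis=1)
    if dists.sum() == 0.0:              # degenerate case: every point coincides with a center
        probs = np.full(len(X), 1.0 / len(X))
    else:
        probs = dists / dists.sum()
    return X[rng.choice(len(X), p=probs)]
```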

5.1 The Algorithm

The algorithm is the same as Algorithm 2.1, except for some minor changes in the procedure Sample-Centers and in the values of the various parameters. The parameters α and β in the procedure List-k-median are large enough constants. We briefly describe the changes in the procedure Sample-Centers. In Step 2(a), we sample the multi-set S using D-sampling w.r.t. C. We replace Step 2(d) by the following: for all subsets \(T \subseteq S\) of size M, and for all elements \(c \in \) core(T), (i) \(C \leftarrow C \cup \{c\}\), (ii) Sample-centers(X, k, ε, i + 1, C). Recall that core(T) is the set guaranteed by Lemma 14. In other words, unlike in the k-means setting, where we could just work with the mean of T, we now need to try out all the elements of core(T). Algorithm 5.1 gives a detailed description of the algorithm.

[Algorithm 5.1: List-k-median with the modified Sample-Centers procedure]
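Since the pseudocode figure does not reproduce here, the following Python sketch (illustrative only, with placeholder names; `core` stands for the construct procedure of Lemma 14 and `d_sample_multiset` for the D-sampling of Step 2(a)) indicates the shape of the modified recursion.

```python
import itertools

def sample_centers(X, k, eps, i, C, sample_size, M, core, d_sample_multiset, out):
    """Sketch of the modified Sample-Centers recursion for list k-median.

    core(T)                     -- candidate centers built from the multiset T (Lemma 14)
    d_sample_multiset(X, C, s)  -- multiset of s points D-sampled from X w.r.t. C
    out                         -- list collecting the candidate k-center sets
    """
    if len(C) == k:                      # k centers picked: record this candidate set
        out.append(list(C))
        return
    S = d_sample_multiset(X, C, sample_size)          # Step 2(a): D-sample the multiset S
    for T in itertools.combinations(S, M):            # modified Step 2(d): all size-M subsets of S
        for c in core(T):                             # try every element of core(T), not just mean(T)
            sample_centers(X, k, eps, i + 1, C + [c],
                           sample_size, M, core, d_sample_multiset, out)
```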

5.2 Analysis

The analysis proceeds along the same lines as in Section 3, and we would again like to prove the induction hypothesis P(i). We use the same notation as in Section 3, and define Cases I and II analogously. Consider Case I first. The proof of Lemma 2 remains unchanged. The set \({O_{\bar {i}}^{\prime }}\) is defined similarly. Let \(m^{\star}\) be the point for which \({\Delta }(O^{\star }_{\bar {i}})= {\Phi }_{m^{\star}}(O^{\star }_{\bar {i}})\). Define \(m^{\prime}\) analogously for the set \({O_{\bar {i}}^{\prime }}\). The statement of Lemma 4 now changes as follows:

$$\begin{array}{@{}rcl@{}} {\Delta}({O_{\bar{i}}^{\prime}}) & \leq & \sum\limits_{p \in O^{\star}_{\bar{i}}} || c(p) - m^{\prime}|| \ \leq \ \sum\limits_{p \in O^{\star}_{\bar{i}}} || c(p) - m^{\star}|| \ \leq \ \sum\limits_{p \in O^{\star}_{\bar{i}}} \left( || c(p) - p|| + ||p-m^{\star}|| \right) \\ & = & {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + {\Delta}(O^{\star}_{\bar{i}}) \end{array} $$
(8)

The proof of Lemma 5 also changes as follows: let \(m^{\prime\prime}\) be as in the statement of that lemma. Then,

$$\begin{array}{@{}rcl@{}} {\Phi}_{m^{\prime\prime}}(O^{\star}_{\bar{i}}) & = & \sum\limits_{p \in O^{\star}_{\bar{i}}} ||p-m^{\prime\prime}|| \ \leq \ \sum\limits_{p \in O^{\star}_{\bar{i}}} \left( ||p-c(p)|| + ||c(p)-m^{\prime\prime}|| \right) \\ & = & {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + {\Phi}_{m^{\prime\prime}}({O_{\bar{i}}^{\prime}}) \ \leq \ {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + \left( 1 + \frac{{\varepsilon}}{8} \right) \cdot {\Delta}(O_{\bar{i}}^{\prime}) \\ & \overset{\text{(8)}}{\leq} & 2 \cdot {\Phi}_{C_{i}}(O^{\star}_{\bar{i}}) + \left( 1 + \frac{{\varepsilon}}{8} \right) \cdot {\Delta}(O^{\star}_{\bar{i}}) \ \overset{\text{Lemma 2}}{\leq} \ \frac{{\varepsilon}}{3k} \cdot {\tt opt}_{k}({\mathbb O^{\star}}) + \left( 1 + \frac{{\varepsilon}}{8} \right) \cdot {\Delta}(O^{\star}_{\bar{i}}) \end{array} $$

The rest of the argument remains unchanged (we use Lemma 14 instead of Lemma 1). Now we consider Case II. We redefine the parameter R as

$$R = \frac{{\varepsilon}}{9} \cdot \frac{{\Phi}_{C_{i}}(O^{\star}_{\bar{i}})}{|O^{\star}_{\bar{i}}|}.$$

Define the sets \({O_{\bar {i}}^{\prime }}\), \({c(O_{\bar {i}}^{n})}\), \({O_{\bar {i}}^{f}}\) as before. Let \(m^{\star}\) be the point for which \({\Delta }(O^{\star }_{\bar {i}})= {\Phi }_{m^{\star }}(O^{\star }_{\bar {i}})\), and let \(m^{\prime}\) be the analogous point for \(O_{\bar {i}}^{\prime }\). The proof of Lemma 6 can be easily modified to yield the following, where n denotes \(|O^{\star }_{\bar {i}}|\) (instead of Fact 1, we just need the triangle inequality):

$$\begin{array}{@{}rcl@{}} {\Delta}(O^{\star}_{\bar{i}}) = {\Phi}_{m^{\star}}(O^{\star}_{\bar{i}}) \geq \frac{4 n}{{\varepsilon}} \cdot R \end{array} $$
(9)

We have the following version of Lemma 8:

$$\begin{array}{@{}rcl@{}} {\Delta}({O_{\bar{i}}^{\prime}}) & \leq & {\Phi}_{m^{\star}}({O_{\bar{i}}^{\prime}}) \ = \ \sum\limits_{p \in {O_{\bar{i}}^{n}}} ||c(p)-m^{\star}|| + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\star}|| \\ & \leq & \sum\limits_{p \in {O_{\bar{i}}^{n}}} \left( ||p-m^{\star}||+ ||c(p)-p|| \right) + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\star}|| \\ & \leq & nR + {\Delta}(O^{\star}_{\bar{i}}), \end{array} $$
(10)

where, as before, n denotes \(|O^{\star }_{\bar {i}}|\). Finally, let \(m^{\prime\prime}\) be as in the statement of Lemma 9. Then,

$$\begin{array}{@{}rcl@{}} {\Phi}_{m^{\prime\prime}}(O^{\star}_{\bar{i}}) &=& \sum\limits_{p \in {O_{\bar{i}}^{n}}} ||p-m^{\prime\prime}|| + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\prime\prime}|| \\ & \leq & \sum\limits_{p \in {O_{\bar{i}}^{n}}} \left( ||c(p)-m^{\prime\prime}|| + ||c(p)-p|| \right) + \sum\limits_{p \in {O_{\bar{i}}^{f}}} ||p-m^{\prime\prime}|| \\ & \leq & nR + {\Phi}_{m^{\prime\prime}}({O_{\bar{i}}^{\prime}}) \ \leq \ nR + \left( 1 + \frac{{\varepsilon}}{8} \right) \cdot {\Delta}({O_{\bar{i}}^{\prime}}) \\ & \overset{(10)}{\leq} & 3nR + \left( 1 + \frac{{\varepsilon}}{8} \right) \cdot {\Delta}(O^{\star}_{\bar{i}}) \ \overset{(9)}{\leq} \ (1+{\varepsilon}) \cdot {\Delta}(O^{\star}_{\bar{i}}). \end{array} $$
(11)

The rest of the arguments go through without any changes.

6 Conclusion

We formulated the list k-means problem and gave nearly tight upper and lower bounds on the size of the list of candidate centers. We also obtained an algorithm for the constrained k-means problem that significantly improves upon the previous results of Ding and Xu [5].

Furthermore, we showed how our techniques generalize to the corresponding k-median problems. We would also like to point out that our techniques extend to settings that involve non-Euclidean distance measures. Going through the analysis of our algorithm, it is not difficult to see that the only properties used in the analysis are:

(i) Symmetry of the distance measure (used implicitly)

(ii) (Approximate) triangle inequality: Fact 2

(iii) Centroid property: Fact 1

(iv) Sampling property: Lemma 1

The analysis holds even for some approximate versions of the above properties. For instance, for the k-median problem we were able to use Lemma 14 instead of Lemma 1 (i.e., the sampling property). Also, we were able to work without the centroid property, since for the k-median problem the distances satisfy the exact triangle inequality rather than the approximate version (i.e., Fact 2). We note that a number of clustering problems in machine learning are modeled as k-median problems over distance measures that satisfy the above properties in some approximate sense; the Mahalanobis distance and μ-similar Bregman divergences are two examples of such distance measures. Our results extend easily to the k-median problem over such distance measures.Footnote 3
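To make the last point concrete, here is a small illustrative sketch (not code from the formal development; all names are placeholders) showing that the cost computations underlying the algorithm only touch the data through a distance oracle, which can be swapped for any measure satisfying (approximate versions of) the properties above.

```python
import numpy as np

def clustering_cost(X, centers, dist):
    """Cost of assigning every point of X to its nearest center under a
    user-supplied distance oracle dist(x, c)."""
    return sum(min(dist(x, c) for c in centers) for x in np.asarray(X, dtype=float))

# Distance oracles with the properties discussed above (exactly or approximately):
euclidean    = lambda x, c: float(np.linalg.norm(x - c))   # exact triangle inequality
sq_euclidean = lambda x, c: float(np.dot(x - c, x - c))    # approximate triangle inequality (Fact 2)

def mahalanobis(A):
    """Mahalanobis distance induced by a positive semidefinite matrix A (placeholder)."""
    return lambda x, c: float(np.sqrt((x - c) @ A @ (x - c)))
```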