1 Introduction

The notion of what is fair in the allocation of one or more infinitely divisible goods to a finite number of agents with their own preferences has long been debated. Predictably, no agreement has been reached on the subject. The situation is often exemplified with children (players) at a birthday party who sit around a table waiting for their slice of the cake to be served, with the help of some parent (an impartial referee). If we think about a special class of resolute children who are able to specify their preferences in terms of utility set functions, the parent in charge of the division could ease his task by using a social welfare function to summarize the children’s utility values. Among the many proposals, the maxmin, or Rawlsian, division was extensively studied in the seminal work of Dubins and Spanier (1961), who showed the existence of maxmin optimal partitions for any completely divisible cake and established their main properties. They also showed that when a condition of mutual appreciation holds (assumption MAC below) any optimal partition is also equitable, i.e., it assigns the same level of utility to each child.

The study of the maxmin optimal partitions and their properties has continued in more recent years. In particular, their relationship with other important notions such as efficiency (or Pareto optimality) and, above all, envy-freeness has been investigated with mixed success: each maxmin partition is efficient, and Brams and Taylor (1996) showed that in the two-children case it is also envy-free, but the same may not hold when three or more children are to be served, as shown in Dall’Aglio and Hill (2003).

It is worth pointing out the relationship with \(n\)-player bargaining solutions. If we think about the division as deriving from a bargaining procedure among the children, it is straightforward to show that the bargaining solution proposed by Kalai (1977), in the case where all the players’ utilities are normalized to 1, coincides with the equitable maxmin division. Therefore, when the conditions proposed by Dubins and Spanier hold (so that the maxmin division is equitable), the two solutions actually coincide.

Little attention has been devoted, however, to finding optimal maxmin partitions, with one notable exception: the case of two players with additive and linear utilities over several goods was considered by Brams and Taylor (1996) with the Adjusted Winner procedure.

For the case of general preferences (expressed as probability measures, i.e., nonnegative and countably additive set functions normalized to 1) and an arbitrary number of players, Legut and Wilczynski (1988) gave a characterization of the optimal maxmin allocation in terms of weighted density functions. Moreover, Elton et al. (1986) and Legut (1988) provided lower bounds on the maxmin value. The optimization problem was later analysed by Dall’Aglio (2001): the general problem was reformulated as the minimization of a convex function, with particular attention to the case where the maxmin allocation is not equitable and the allocation of the cake occurs in stages to subsets of players. No detail, however, was given on how to proceed with the minimization.

In most of the fair division literature, little is assumed about the strategic behaviour of the children. Brams and Taylor (1996) discuss the issue of the manipulability of the preferences: in most cases children may benefit from declaring false preferences. A different approach takes into account the possibility for the children to form coalitions after (Legut 1990; Legut et al. 1994) or before (Dall’Aglio et al. 2009) the division of the cake. In both cases coalitional games are defined and analysed. In the case of early cooperation among children, the game is based on a maxmin allocation problem among coalitions, each one having a joint utility function and a weight. The first properties of the game are studied in Dall’Aglio et al. (2009). It turns out that the analysis of the game is made harder by the difficulty of computing the characteristic function, i.e., the value associated with each coalition. The tools we introduce, therefore, become essential in computing such values, as well as any synthetic value, such as the Shapley value, associated with the game.

The coalitional maxmin problem is indeed a generalization of the classical maxmin problem introduced by Dubins and Spanier. Therefore, we adopt an approach common to both problems and set up an algorithm which, at each step, computes an approximating allocation, together with lower and upper bounds for the maxmin value. The algorithm is based on a subgradient method proposed by Shor (1985) and yields an approximation of the optimal allocation to any fixed degree of precision.

In Sect. 2 we describe the maxmin fair division problem with coalitions through the strategic model of interaction among players in Dall’Aglio et al. (2009) and the geometrical setting employed in Barbanel (1999, 2005) and Dall’Aglio (2001). In Sect. 3 we present upper and lower bounds for the objective value. In Sect. 4 we fit the subgradient method to our problem and derive a procedure where the optimal value and the optimal partition are computed up to a desired precision; we also provide a numerical example in which we describe two fair division games and compute the corresponding Shapley values. Some final considerations are given in Sect. 5.

2 The model and the maxmin fair division problem with coalitions

We represent our completely divisible good as the set \(C,\) a Borel subset of a finite dimensional Euclidean space, and we denote as \({\mathcal {B}}(C)\) the Borel \(\sigma \)-algebra of subsets of \(C\). Let \(N=\{1,\ldots ,n\}\) be the set of players, whose preferences on the good are \(\mu _1, \, \ldots , \, \mu _n,\) where each \(\mu _i\), \(i \in N,\) is a probability measure on \((C,{\mathcal {B}}(C))\). By the Radon-Nikodym theorem, if \(\mu \) is a non-negative finite-valued measure with respect to which each \(\mu _i\) is absolutely continuous (for instance we may take \(\mu = \sum _{i \in N}{\mu _i}\)), then, for each \(A \in {\mathcal {B}}(C),\)

$$\begin{aligned} \mu _i(A) = \int \limits _{A} f_i(x)d\mu (x) \quad \forall \; i \in N, \end{aligned}$$

where \(f_i\) is the Radon-Nikodym derivative of \(\mu _i\) with respect to \(\mu \).

We will consider the following assumptions:

(CD) Complete divisibility of the good: for each \(i \in N\) and each \(A \in {\mathcal {B}}(C)\) such that \(\mu _i(A)> 0\), there exists a measurable set \(B \subset A\) such that \(\mu _i(A \cap B)>0\) and \(\mu _i(A \cap B^c)>0\).

(MAC) Mutual absolute continuity: if there exist \(i \in N\) and \(A \in {\mathcal {B}}(C)\) such that \(\mu _i(A) > 0,\) then \(\mu _j(A) >0\) for each \(j \ne i\).

(RD) Relative disagreement: for each pair \(i,j \in N\) and each \(A \in {\mathcal {B}}(C)\) such that \(\mu _i(A) > 0\) and \(\mu _j(A) > 0\), there exists a measurable set \(B \subset A\) such that \(\frac{\mu _i(B)}{\mu _i(A)} \ne \frac{\mu _j(B)}{\mu _j(A)}\).

Condition RD is equivalent to the following: for each \(i,j,h \in N\), with \(i \ne j\), and each \(c \in {\mathbb {R}}\)

$$\begin{aligned} \mu _h \left\{ x \in C : f_i(x)= c f_j(x) \right\} = 0 \; . \end{aligned}$$

Throughout the rest of the work we will assume that CD always holds, while MAC and RD are useful, though restrictive, assumptions that we will employ only when strictly needed.

For any \(k \in {\mathbb {N}}\), let \((A_1,\ldots ,A_k)\) be a \(k\)-partition, i.e., a partition of the good \(C\) into \(k\) measurable sets, and let \(\Pi _k\) be the class of all \(k\)-partitions. How do players behave in the division procedure? In the simplest case, each player competes with the others to get a part of the cake, with no strategic interaction among players. Each \((A_1,\ldots ,A_n) \in \Pi _n\) determines a division of the good in which player \(i \in N\) gets the share \(A_i\) with value \(\mu _i(A_i)\), and each individual player seeks an allocation with a value as high as possible. A fair compromise between the conflicting interests is given by a maxmin allocation \((A^*_1,\ldots ,A^*_n) \in \Pi _n\) that achieves

$$\begin{aligned} v_{m} := \max _{(A_1,\ldots ,A_n) \in \Pi _n} \left\{ \min _{i \in N} \mu _i(A_i) \right\} . \end{aligned}$$
(1)

Here \(v_m\) denotes the maxmin value in the classical fair division problem. With a completely divisible good, the allocation \((A_1^*,\ldots ,A_n^*)\) is fair (or proportional), i.e. \(\mu _i(A_i^*) \ge \frac{1}{n}\) for all \(i \in N\). Moreover, if MAC holds, it must also be egalitarian, i.e. \(\mu _i(A_i^*) = \mu _j(A_j^*) \,\) for all \(i, j \in N\) (see Dubins and Spanier 1961). Therefore, under this assumption, an optimal allocation is also the bargaining solution proposed by Kalai and Smorodinsky (1975) (see also Kalai 1977).

Dall’Aglio et al. (2009) proposed a strategic model of interaction, where players, before the division takes place, gather into mutually disjoint coalitions. Within each coalition, players pursue an efficient allocation of their collective share of the cake.

Let \({\mathcal {G}}\) be the family of all partitions of \(N\) and, for each \(\Gamma \in {\mathcal {G}}\), let \(|\Gamma | = m\), \(m \le n,\) and let \(M = \{1,\ldots ,m\}\) be the index set of the coalitions. Thus, players cluster into coalitions specified by the partition \(\Gamma =\{S_1,\ldots ,S_m\}\). For each coalition \(S_j\), \(j \in M\), players state their joint preferences as follows:

$$\begin{aligned} \mu _{S_j}(B) = \max _{\{D_i\}_{i \in S_j} \, \mathrm{partition} \, \mathrm{of} \, B} \sum _{i \in S_j} {\mu _i(D_i)} = \int \limits _{B} f_{S_j}(x) d\mu (x)\;, \end{aligned}$$
(2)

with \(f_{S_j}(x) = \max _{i \in S_j} f_i(x)\), \(B \in {\mathcal {B}}(C)\) and \(\{D_i\}_{i \in S_j} \subset {\mathcal {B}}(C)\). The utility \(\mu _{S_j}(B)\) of coalition \(S_j\) will be divided among its members in a way that prevents any of them from breaking the coalition in search of a better deal. Once the global coalition structure is known, a fair allocation of the cake among the competing coalitions is sought. In this context, assigning the same value to all coalitions could yield an unfair outcome. Fairness here must take into account the different importance that coalitions may have, and this is captured by a weight function \(w: 2^N \rightarrow {\mathbb R}_+\).
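For intuition, the joint density in (2) is just a pointwise maximum, as in the following minimal sketch, which assumes \(C=[0,1]\), \(\mu \) equal to Lebesgue measure and two hypothetical member densities.

```python
import numpy as np

# Sketch of eq. (2): the coalition density f_S is the pointwise maximum of
# the members' densities. C = [0, 1], mu = Lebesgue; f1, f2 hypothetical.
f1 = lambda x: 2 * x
f2 = lambda x: 2 - 2 * x
f_S = lambda x: max(f1(x), f2(x))        # f_S(x) = max_{i in S} f_i(x)

# mu_S(C) = integral of f_S, by midpoint quadrature on a fine grid:
mid = (np.arange(10_000) + 0.5) / 10_000
print(np.mean([f_S(x) for x in mid]))    # ~1.5 for these two densities
```

The coalition value exceeds either member’s individual value of the whole cake (both equal to 1): by reallocating internally, the coalition extracts more total utility.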

In this framework, each coalition takes the role of a single player in Eq. (1). Following Kalai (1977), when coalitions in \(\Gamma \) are formed and the weight function \(w\) is considered, players should agree on a division of the cake which achieves the following value

$$\begin{aligned} v(\Gamma , w) = \max _{(B_1,\ldots ,B_m) \in \Pi _m} \left\{ \min _{j \in M } \frac{\mu _{S_j}(B_j)}{w(S_j)} \right\} . \end{aligned}$$
(3)

Each coalition can evaluate its performance in the division by considering the following coalitional game

$$\begin{aligned} \eta (S,w)=w(S) v(\Gamma _S,w) \qquad S \subseteq N \end{aligned}$$
(4)

where \(\Gamma _S=\{S,\{j\}_{j \notin S}\}\). The value \(\eta (S,w)\) can be interpreted as the minimal utility that coalition \(S\) is going to receive in the division when the system of weights \(w\) is enforced, independently of the behaviour of the other players.

A crucial question lies in the definition of the weight system. We consider two proposals:

  • \(w_{\mathrm{card}}(S)=|S|\), \(S \subset N\). This is certainly the most intuitive setting. Although very natural, this proposal suffers from a serious drawback, since players participating in the game \(\eta (\cdot ,w)\) may be better off waiting to seek cooperation well after the cake has been divided (see Dall’Aglio et al. 2009);

  • \(w_{\mathrm{pre}}(S)=\mu _S\left( \cup _{i \in S} A^*_i \right) \), \(S \subset N\), where \((A^*_1,A^*_2,\ldots ,A^*_n) \) is the partition maximizing (1). By seeking early agreements, players will be better off than by postponing such agreements until the cake is cut. The drawback mentioned above is overcome at the cost of a less intuitive (and more computationally challenging) formulation (see Dall’Aglio et al. 2009). It is interesting to note that to find these weights we need to solve (1).

It is easy to verify that, for each \(S \subseteq N\),

$$\begin{aligned} \eta (S, w_{\mathrm{card}}) \le \eta (S, w_{\mathrm{pre}}) \end{aligned}$$

with equality if \(S =N\) or \(S= \{i\}\) where \(i \in N\).

The optimization problem (3) can be seen as an infinite dimensional assignment problem. In principle we could attribute any piece of the cake \(C\) to any of the participating players (provided certain measurability assumptions are met). For very special instances this becomes a linear program: when the preferences have piecewise constant densities, or when the cake is made of a finite number of completely divisible and homogeneous parts.
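As an illustration of the second special case, here is a minimal linear-programming sketch in Python, assuming a cake made of \(P\) homogeneous parts; the matrix `value` (the weighted utility \(\mu _j^w\) that each coalition assigns to each whole part) is hypothetical data, and the variable \(t\) plays the role of the common lower bound being maximized.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of problem (3) as an LP for a cake of P homogeneous parts.
# x[j, p] = fraction of part p given to coalition j; t = common lower bound.
value = np.array([[0.5, 0.3, 0.2],       # hypothetical weighted utilities
                  [0.1, 0.6, 0.3],
                  [0.2, 0.2, 0.6]])
m, P = value.shape
n_var = m * P + 1                        # x flattened row-wise, then t

c = np.zeros(n_var); c[-1] = -1.0        # maximize t  <=>  minimize -t
A_ub = np.zeros((m, n_var))              # t - sum_p value[j, p] x[j, p] <= 0
for j in range(m):
    A_ub[j, j * P:(j + 1) * P] = -value[j]
    A_ub[j, -1] = 1.0
A_eq = np.zeros((P, n_var))              # each part is fully allocated
for p in range(P):
    A_eq[p, p:m * P:P] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=np.ones(P),
              bounds=[(0, 1)] * (m * P) + [(0, None)])
print("maxmin value:", -res.fun)
print("allocation:", res.x[:-1].reshape(m, P).round(3))
```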

The fully competitive value \(v_m\) is a special instance of the cooperative case, since \(v_m=v(\Gamma _1,w_1)\), with \(\Gamma _1=\{\{1\},\ldots ,\{n\}\}\) and \(w_1(\{i\})=1\) for each \(i \in N\). Therefore, we focus on the cooperative case alone.

2.1 A geometrical setting

We now describe a geometrical setting, already employed in Barbanel (1999, 2005), Dall’Aglio (2001) and Legut (1988), to explore fair division problems. In what follows we consider the weighted preferences and densities, \(\mu _{j}^{w}\) and \(f^{w}_{j}\), given respectively by

$$\begin{aligned} \mu _{j}^{w}= \frac{\mu _{S_j}}{w(S_j)} \qquad f^{w}_{j}= \frac{f_{S_j}}{w(S_j)}. \end{aligned}$$

The partition range, also known as the individual pieces set (IPS) (see Barbanel 2005), is defined as

$$\begin{aligned} {\mathcal {P}} := \{ (\mu _1^w(B_1), \ldots , \mu _m^w(B_m)) : (B_1, \ldots , B_m) \in \Pi _m \} \subset {\mathbb {R}}^m_+ . \end{aligned}$$

Let us consider some of its features. Each point \(p \in {\mathcal {P}}\) is the image, under \((\mu _1^w,\ldots ,\mu _m^w)\), of an \(m\)-partition of \(C\). Moreover, \({\mathcal {P}}\) is compact and, if CD holds, \({\mathcal {P}}\) is also convex (see Dvoretzky et al. 1951). Therefore, \(v(\Gamma , w) = \max \{x>0 : (x, x, \ldots , x) \in {\mathcal {P}} \}\), and the point \((v(\Gamma , w),\ldots ,v(\Gamma , w))\) is the intersection of the Pareto frontier of \({\mathcal {P}}\) and the egalitarian line

$$\begin{aligned} \ell = \{x \in {\mathbb {R}}^m : x_1 = x_2 = \ldots = x_m \}. \end{aligned}$$
(5)

In case \(MAC\) holds, this is the only point corresponding to optimal allocations in (3).

3 Upper and lower bounds for the maxmin value

We turn our attention to a simpler optimization problem which may have an unfair solution but provides easy-to-compute upper and lower bounds for the original problem. These bounds depend on a weighted maxsum partition, which we can derive through a straightforward extension of a result by Dubins and Spanier (1961). Let \(\Delta _{m-1}\) denote the unit \((m-1)\)-simplex.

Proposition 3.1

(see (Dubins and Spanier 1961, Theorem 2), (Dall’Aglio 2001, Proposition 4.3)) Let \(\alpha \in \Delta _{m-1}\) and let \(B^{\alpha }=(B^{\alpha }_1 , \ldots , B^{\alpha }_m)\) be an \(m-\)partition of \(C\). If

$$\begin{aligned} \alpha _k f_k^w (x) \ge \alpha _h f_h^w (x) \quad \text {for all }h,k \in M \quad \text { and } \quad \text { for all } x \in B^{\alpha }_k , \end{aligned}$$
(6)

then

$$\begin{aligned} (B_1^{\alpha },\ldots ,B_m^{\alpha }) \in \mathop {\mathrm{argmax}}\limits _{(B_1,\ldots ,B_m) \in \Pi _m} \sum _{j \in M}{\alpha _j \mu _{j}^{w}(B_j)}. \end{aligned}$$
(7)

The value of this maxsum problem is itself an upper bound for problem (3), and each choice of \(\alpha \in \Delta _{m-1}\) yields a corresponding maxsum partition \(B^{\alpha }=(B^{\alpha }_1 , \ldots , B^{\alpha }_m)\).

Definition 3.2

The partition value vector (PVV) \(u^{\alpha }=(u_1^{\alpha }, \ldots , u_m^{\alpha })\) is defined by

$$\begin{aligned} u_j^{\alpha } = \mu _{j}^w (B_j^{\alpha }), \quad \text{ for } \text{ each } j = 1, \ldots , m. \end{aligned}$$

The PVV \(u^{\alpha }\) is a point where the hyperplane \(\sum _{j \in M} \alpha _j x_j = k\), for a suitable constant \(k\), touches the partition range \({\mathcal {P}},\) so \(u^{\alpha }\) lies on the Pareto border of \({\mathcal {P}}\). Moreover, for any \(\alpha \in \Delta _{m-1}\) there exists at least one PVV (see Barbanel 2005).
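On a discretized cake, a maxsum partition obeying rule (6) and its PVV are immediate to compute. Here is a minimal sketch, assuming \(C=[0,1]\), \(\mu \) equal to Lebesgue measure, and two hypothetical densities; it also returns \(\sum _{j} \alpha _j u_j^{\alpha }\), anticipating the function \(g\) of the next proposition.

```python
import numpy as np

# Sketch: maxsum partition by rule (6) and its PVV (Definition 3.2) on a
# discretized cake C = [0, 1] with mu = Lebesgue measure. Illustrative names.
def pvv(alpha, densities, grid):
    """Return u^alpha and the weighted maxsum value sum_j alpha_j u_j^alpha."""
    mid = (grid[:-1] + grid[1:]) / 2                   # cell midpoints
    dx = np.diff(grid)                                 # cell measures
    F = np.array([[f(x) for x in mid] for f in densities])
    owner = np.argmax(alpha[:, None] * F, axis=0)      # cell -> argmax_j alpha_j f_j^w
    u = np.array([F[j, owner == j] @ dx[owner == j] for j in range(len(densities))])
    return u, alpha @ u

densities = [lambda x: 2 * x, lambda x: 2 - 2 * x]     # hypothetical f_j^w
u, g_val = pvv(np.array([0.5, 0.5]), densities, np.linspace(0, 1, 1001))
```

We are ready to state the first approximation result.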

Proposition 3.3

Let \(g : \Delta _{m-1} \rightarrow {\mathbb {R}}^+\) be as follows:

$$\begin{aligned} g(\alpha ) :=\int \limits _C{\max _{j \in M}{\{\alpha _j f_j^{w}(x)\}}\,d\mu (x)}. \end{aligned}$$

Then,

$$\begin{aligned} v(\Gamma , w) \le g(\alpha ) \le \max _{j \in M}{u_j^{\alpha }}. \end{aligned}$$

Proof

Following (Dall’Aglio 2001, Proposition 4.3), we know that the hyperplane that touches \({\mathcal {P}}\) at the point \(u^{\alpha }\) is defined by the equation

$$\begin{aligned} \sum _{i \in M} \alpha _i x_i = g(\alpha ) \; . \end{aligned}$$

Since \(\alpha \in \Delta _{m-1}\), this hyperplane intersects the egalitarian line \(\ell \) defined in (5) at the point \((g(\alpha ),\ldots ,g(\alpha ))\). Since the hyperplane is located above \({\mathcal {P}}\), this point lies above the maxmin point with coordinates \((v(\Gamma ,w),\ldots ,v(\Gamma ,w))\). Therefore,

$$\begin{aligned} g(\alpha ) \ge v(\Gamma ,w) \; . \end{aligned}$$

Finally, since \(g(\alpha )\) is a weighted average of the values \((u_1^{\alpha },\ldots ,u_m^{\alpha }),\) it follows that \(g(\alpha ) \le \max _{j \in M}{u_j^{\alpha }}\). \(\square \)

The function \(g\) was already considered in Dall’Aglio (2001), where it was shown that \(g\) is convex on \(\Delta _{m-1}\), and \(v(\Gamma , w) = \min _{\alpha \in \Delta _{m-1}}{g(\alpha )}\).
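To make the minimization concrete: with \(m=2\) coalitions, \(g\) reduces to a convex function of the single variable \(a=\alpha _1\), and its minimum equals the maxmin value. A tiny numerical check, again with hypothetical densities:

```python
import numpy as np

# For m = 2, g is a convex function of a = alpha_1 alone; its minimum is
# v(Gamma, w). Densities are hypothetical; mu = Lebesgue measure on [0, 1].
f1 = lambda x: 2 * x
f2 = lambda x: 2 - 2 * x
mid = (np.arange(1000) + 0.5) / 1000                   # quadrature nodes
g = lambda a: np.mean(np.maximum(a * f1(mid), (1 - a) * f2(mid)))

a_grid = np.linspace(0, 1, 1001)
a_star = a_grid[np.argmin([g(a) for a in a_grid])]
print(a_star, g(a_star))    # a_star = 0.5, g(a_star) = 0.75: the maxmin value
```

For these symmetric densities the equitable division cuts the cake at \(1/2\) and gives each coalition utility \(0.75\), matching \(\min g\).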

We now turn our attention to a lower bound for \(v(\Gamma , w)\). The following result generalizes Theorem 3 in Legut (1988) and Theorem 1.1 in Elton et al. (1986).

Proposition 3.4

Let \(u = (u_1, \ldots , u_m)\) be a partition value vector such that

$$\begin{aligned} u_h = \max _{j=1, \ldots , m} u_j. \end{aligned}$$
(8)

Then,

$$\begin{aligned} v(\Gamma , w) \ge \underline{v}(u):= \frac{u_h}{1 + \sum _{j \ne h}{\frac{u_h - u_j}{\mu _{j}^{w}(C)}}} \ge \min _{j \in M}{u_j} . \end{aligned}$$
(9)

Proof

Let us consider the following vectors

$$\begin{aligned} e^q= (0, \ldots , 0, \mu _q^{w}(C), 0, \ldots , 0) \qquad q \in M \quad q \ne h, \end{aligned}$$
(10)

where \(\mu _q^{w}(C)\) is the weighted joint utility of the whole cake for coalition \(S_q\). Now, consider the convex hull of the PVV \(u\) and the \(m-1\) points \(e^q\), \(q \ne h\),

$$\begin{aligned} V:= \left\{ t_h u + \sum _{q \ne h}{t_q e^q}: (t_h, \ldots , t_m) \in \Delta _{m-1} \right\} . \end{aligned}$$

The lower bound we are looking for is the intersection point between \(V\) and the egalitarian line \(\ell \) from (5) (see Fig. 1). Let us denote this point as \((x_w, \ldots ,x_w)\). Without loss of generality, let us suppose \(h = 1\). Then, we obtain \((x_w, \ldots ,x_w)\) as follows:

$$\begin{aligned} \left\{ \begin{array}{l} t_1 u_1 + 0 + \cdots + 0 = x_w \\ t_1 u_2 + t_2 \mu _{2}^{w}(C) + \cdots + 0 = x_w \\ \vdots \\ t_1 u_m + 0 + \cdots + t_m \mu _{m}^{w}(C) = x_w \\ t_1 + t_2 + \cdots + t_m = 1 \end{array} \right. \end{aligned}$$

We are dealing with a linear system with \(m+1\) unknowns, \(t_1, t_2,\ldots ,t_m, x_w\), with \(m+1\) equations and full rank.

Fig. 1 Upper and lower bounds for the two-coalition case

Thus, by Cramer’s rule, we get \(x_w\) as

$$\begin{aligned} x_w&= \frac{\det \begin{pmatrix} u_1 & 0 & \ldots & 0 & 0 \\ u_2 & \mu _{2}^{w}(C) & \ldots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ u_m & 0 & \ldots & \mu _{m}^{w}(C) & 0 \\ 1 & 1 & \ldots & 1 & 1 \end{pmatrix} }{\det \begin{pmatrix} u_1 & 0 & \ldots & 0 & -1 \\ u_2 & \mu _{2}^{w}(C) & \ldots & 0 & -1 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ u_m & 0 & \ldots & \mu _{m}^{w}(C) & -1 \\ 1 & 1 & \ldots & 1 & 0 \end{pmatrix} } = \frac{u_1 \prod _{q\ne 1}{\mu _{q}^{w}(C)}}{\det \begin{pmatrix} 0 & 1 & 1 & \ldots & 1 \\ -1 & u_1 & 0 & \ldots & 0 \\ -1 & u_2 & \mu _{2}^{w}(C) & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ -1 & u_m & 0 & \ldots & \mu _{m}^{w}(C) \end{pmatrix} } \\&= \frac{u_1 \prod _{q\ne 1}{\mu _{q}^{w}(C)}}{\prod _{q\ne 1}{\mu _{q}^{w}(C)} + \sum _{q \ne 1}{\prod _{i\ne q,1}{\mu _{i}^{w}(C)} \, (u_1 - u_q)}} \\&= \frac{u_1 \prod _{q\ne 1}{\mu _{q}^{w}(C)}}{\prod _{q\ne 1}{\mu _{q}^{w}(C)} \left[ 1 + \sum _{q \ne 1}{\frac{u_1 - u_{q}}{\mu _{q}^{w}(C)}} \right] } = \frac{u_1}{1 + \sum _{q \ne 1}{\frac{u_1 -u_{q}}{\mu _{q}^{w}(C)}}}, \end{aligned}$$

where the second equality derives from suitable exchanges of rows and columns in the denominator matrix. In fact, we obtain the second matrix from the first after an even number of exchanges: \(m\) successive exchanges of the last row until it reaches the first position, and \(m\) successive exchanges of the last column until it reaches the first position. So the two matrices in the denominator have the same determinant. It is easy to verify that \(t_i > 0\) for every \(i \in M\).

Finally, since the lower bound belongs to the convex hull of the PVV \(u\) and the \(m-1\) vectors \(e^q\), \(q \ne h\), it is not less than the minimum component of each vector, in particular \(\underline{v}(u) \ge \min _{j \in M} {u_j}.\) \(\square \)

An illustration of the position of the bounds with respect to the partition range in the case of two coalitions is shown in Fig. 1.
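In code, the closed form (9) is a one-liner, and the linear system from the proof provides a numerical check; the data below are illustrative (and arranged so that \(h=1\), i.e. index 0).

```python
import numpy as np

# Sketch: the lower bound (9) and a check against the linear system of the
# proof. u is a hypothetical PVV with its maximum in position 0;
# total[j] stands for mu_j^w(C).
def lower_bound(u, total):
    h = int(np.argmax(u))
    return u[h] / (1.0 + sum((u[h] - u[j]) / total[j]
                             for j in range(len(u)) if j != h))

u = np.array([0.6, 0.3, 0.2]); total = np.array([1.0, 0.8, 0.9])
m = len(u)
A = np.zeros((m + 1, m + 1))            # unknowns: t_1, ..., t_m, x_w
A[:m, 0] = u                            # the t_1 * u column
A[1:m, 1:m] = np.diag(total[1:])        # the t_q * e^q columns, q != 1
A[:m, m] = -1.0                         # move x_w to the left-hand side
A[m, :m] = 1.0                          # t_1 + ... + t_m = 1
b = np.zeros(m + 1); b[m] = 1.0
x_w = np.linalg.solve(A, b)[m]
assert abs(x_w - lower_bound(u, total)) < 1e-12
```

A routine like this can supply the running lower bound used in Sect. 4.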

4 The subgradient method

In the previous section we have seen that for each choice of the coefficients vector \(\alpha \) we can derive upper and lower bounds for \(v(\Gamma , w).\) We describe a way of improving the coefficients \(\alpha \) so that eventually the bounds will shrink to the optimal value.

Since in general \(g\) is a non-differentiable convex function, we rely on a simple minimization algorithm developed by Shor (1985), the subgradient method. For more recent overviews of the method we refer to Bertsekas (1999) and Boyd and Vandenberghe (2003). Let us start by describing the method through some basic definitions and the essential convergence result.

Definition 4.1

Let \(f\) be a convex function whose domain is \(E\), a finite dimensional Euclidean space, and let \(x_0 \in E\). A vector \(\gamma (x_0) \in E\) is called a subgradient, or a generalized gradient, of \(f\) at \(x_0\) if it satisfies

$$\begin{aligned} f(x) - f(x_0) \ge \langle \gamma (x_0), x - x_0 \rangle \quad \text {for all } x \in E. \end{aligned}$$
(11)

We denote as \(\partial _x f(x)\) the set of subgradients of a convex function \(f\) at any point \(x\) of \(E\).

Definition 4.2

A sequence \(\{h_k\}_{k=0}^{\infty }\) of positive numbers is called a diminishing step size rule if it satisfies the conditions:

$$\begin{aligned}&\lim _{k \rightarrow \infty } h_k = 0, \end{aligned}$$
(12)
$$\begin{aligned}&\sum _{k=0}^{ \infty }{h_k} = + \infty . \end{aligned}$$
(13)

The subgradient method minimizes a (possibly non-differentiable) convex function which has a bounded set of minimum points. It does so by repeatedly moving a point of the domain in the direction opposite to a subgradient at that point, by a step defined by a diminishing step size rule. We recall the general result:

Theorem 4.3

(Theorem 2.3 in Shor 1985) Let \(f\) be a convex function defined on \(E\), with a bounded set of minimum points \(E^*\).

Suppose that a sequence \(\{x^k\}_{k=0}^{\infty }\) is generated according to the formula

$$\begin{aligned} x^{k+1} = x^k - h_{k+1} \gamma (x^k), \qquad k=0,1,2,\ldots \end{aligned}$$
(14)

where \(\gamma (x^k) \in \partial _x f(x^k)\), \(x^0\) is an arbitrary starting point and \(\{ h_k \}_{k=0}^{\infty }\) is a diminishing step size rule. Then, if the sequence \(\{\gamma (x^k)\}_{k=0}^{\infty }\) is bounded, the algorithm converges, in the sense that

$$\begin{aligned} \lim _{k \rightarrow \infty } \min _{x \in E^*} || x^k - x || = 0 \quad \text{ and } \quad \lim _{k \rightarrow \infty } f(x^k)= \min _{x \in E} f(x) \; , \end{aligned}$$

where \(|| \cdot ||\) is the Euclidean norm.

It should be noted that \(f(x^k)\) is not necessarily decreasing in \(k\); this is not a pure descent method. We get a better approximation to \(\min _{x \in E} f(x)\) if we replace \(f(x^k)\) with \(\min _{j \le k}f(x^j)\) for every \(k=0,1,2,\ldots \).
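A generic sketch of the method, assuming a user-supplied subgradient oracle and using the diminishing step size \(h_k = c/k\) (which satisfies (12) and (13)), with the running-minimum bookkeeping just described:

```python
import numpy as np

# Minimal sketch of the subgradient method of Theorem 4.3. The oracle
# interface (f, subgrad) and the step size h_k = c/k are illustrative.
def subgradient_min(f, subgrad, x0, steps=5000, c=1.0):
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        x = x - (c / k) * subgrad(x)     # update rule (14)
        fx = f(x)
        if fx < best_f:                  # track min_{j <= k} f(x^j)
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: f(x) = |x_1| + |x_2| is convex and non-differentiable at 0;
# np.sign returns a valid subgradient everywhere.
x_best, f_best = subgradient_min(lambda x: np.abs(x).sum(), np.sign, [3.0, -2.0])
```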

We now consider an extension of the function \(g\) into the affine \((m-1)\)-dimensional subspace

$$\begin{aligned} D = \left\{ \alpha \in {\mathbb R}^m: \sum _{i \in M} \alpha _i=1\right\} . \end{aligned}$$

Let \(M_-(\alpha ) = \{i \in M: \alpha _i < 0\}\), \(M_+(\alpha ) = \{i \in M: \alpha _i \ge 0\}\) and \(s_+(\alpha )= \sum _{i \in M_+(\alpha )} \alpha _i\). Define \(\hat{\alpha } \in \Delta _{m-1}\) as

$$\begin{aligned} \hat{\alpha }_i = {\left\{ \begin{array}{ll} 0 &\quad \text{ if } i \in M_-(\alpha )\\ \frac{\alpha _i}{s_+(\alpha )} &\quad \text{ if } i \in M_+(\alpha ) \end{array}\right. } \end{aligned}$$

Clearly \(\hat{\alpha } = \alpha \), if \(\alpha \in \Delta _{m-1}\). For any \(\alpha \in D\) define

$$\begin{aligned} g(\alpha )&= \int \limits _{C} \max _{i \in M} \{\alpha _i {f_{i}^{w}}(x)\} \; d\mu (x) = s_+(\alpha ) \int \limits _{C} \max _{i \in M_+(\alpha )} \{\hat{\alpha }_i {f_{i}^{w}}(x)\} \; d\mu (x) \\&= s_+(\alpha ) \sum _{i \in M_+(\alpha )} \hat{\alpha }_i u^{\hat{\alpha }}_i = \sum _{i \in M} \alpha _i u^{\hat{\alpha }}_i \; , \end{aligned}$$

the last equality deriving from the fact that \(u^{\hat{\alpha }}_i = 0\) whenever \(\alpha _i < 0\). We now establish several properties of \(g\) on \(D\).

Proposition 4.4

  (i) \(g\) is a convex function on \(D\).

  (ii) Suppose \(\alpha \in D {\setminus } \Delta _{m-1}\). Then there exists \(\hat{\alpha } \in \Delta _{m-1}\) such that \(g(\hat{\alpha }) < g(\alpha )\).

  (iii) For any \(\alpha ,\alpha ^0 \in D\)

$$\begin{aligned} g(\alpha ) \ge \sum _{i \in M} \alpha _i u^{\hat{\alpha }^0}_i \; . \end{aligned}$$

Proof

To prove \((i)\), pick any \(\alpha , \beta \in D\) and \(\gamma \in [0,1]\). Then

$$\begin{aligned} \max _{i \in M} \left\{ [\gamma \alpha _i + (1 - \gamma ) \beta _i ] f_i^w(x) \right\} \le \gamma \max _{i \in M} \{\alpha _i f_i^w(x)\} + (1 - \gamma ) \max _{i \in M} \{\beta _i f_i^w(x)\} \; . \end{aligned}$$

Integrating over \(C\) we get

$$\begin{aligned} g(\gamma \alpha + (1 - \gamma ) \beta ) \le \gamma g(\alpha ) + (1 - \gamma )g(\beta ) \; . \end{aligned}$$

To prove \((ii)\) pick \(\alpha \in D {\setminus } \Delta _{m-1}\) and consider

$$\begin{aligned} g(\alpha ) = s_+(\alpha ) \sum _{i \in M_+(\alpha )} \hat{\alpha }_i u^{\hat{\alpha }}_i > g(\hat{\alpha }) \; , \end{aligned}$$

since \(s_+(\alpha ) > 1\).

Moving to \((iii)\), first of all, we prove that the inequality holds for any \(\alpha ,\alpha ^0 \in \Delta _{m-1}\)

$$\begin{aligned} g(\alpha )= \sum _{i \in M} \alpha _i u^{\alpha }_i = \max _{B \in \Pi _m} \sum _{i \in M }\alpha _i \mu _i^w (B_i) \ge \sum _{i \in M }\alpha _i \mu _i^w (B^{\alpha ^0}_i) = \sum _{i \in M} \alpha _i u^{\alpha ^0}_i \; . \end{aligned}$$
(15)

Now take \(\alpha \notin \Delta _{m-1}\) and \(\alpha ^0 \in D\) to get

$$\begin{aligned} g(\alpha )=s_+(\alpha ) \sum _{i \in M_+(\alpha )} \hat{\alpha }_i u^{\hat{\alpha }}_i \ge s_+(\alpha ) \sum _{i \in M_+(\alpha )} \hat{\alpha }_i u^{\hat{\alpha }^0}_i \ge \sum _{i \in M} \alpha _i u^{\hat{\alpha }^0}_i \; , \end{aligned}$$

the first inequality deriving from (15), the second one from the fact that, in case \(\alpha _i < 0\) and \(\alpha ^0_i > 0\), the corresponding term in the sum is negative. \(\square \)

According to statement \((ii)\) in the last proposition,

$$\begin{aligned} \min _{\alpha \in D} g(\alpha ) = \min _{\alpha \in \Delta _{m-1}} g(\alpha ) \end{aligned}$$

and the two problems have the same set of minimizers. We now apply the subgradient method to find a minimizing sequence, formulating the problem as an unconstrained one on \({\mathbb R}^{m-1}\).

Proposition 4.5

Suppose \(CD\) holds. Let \(A^*\) be the set of minimizers of \(g(\alpha )\) in \(\Delta _{m-1}\) and let \(\{h_k\}_{k=0}^{\infty }\) be a diminishing step size rule. Then, for any \(\alpha ^0 \in \Delta _{m-1}\) and the update rules

$$\begin{aligned} u^k&= PVV(\hat{\alpha }^{k}) \end{aligned}$$
(16)
$$\begin{aligned} \alpha ^{k+1}_{-m}&= \alpha ^k_{-m} - h_{k+1} \left[ u_{-m}^{k} - u_m^{k} 1_{m-1} \right] \end{aligned}$$
(17)
$$\begin{aligned} \alpha ^{k+1}_m&= 1 - \langle \alpha _{-m}^{k+1},1_{m-1} \rangle \; , \end{aligned}$$
(18)

where \(1_{m-1}=(1,\ldots ,1) \in {\mathbb R}^{m-1}\), \(u^k_{-m}=(u^k_1,\ldots ,u^k_{m-1})\) and \(\alpha _{-m}^k=(\alpha ^k_1,\ldots ,\alpha _{m-1}^k)\), we have

$$\begin{aligned} \lim _{k \rightarrow \infty } \min _{\alpha \in A^*} || \alpha ^{k} - \alpha || = 0 \quad \text{ and } \quad \lim _{k \rightarrow \infty } g(\alpha ^k)= \lim _{k \rightarrow \infty } \min _{j \le k} g(\alpha ^j) = \min _{\alpha \in \Delta _{m-1}} g(\alpha ) . \end{aligned}$$
(19)

Proof

Consider \(x=(x_1,x_2,\ldots ,x_{m-1}) \in {\mathbb R}^{m-1}\). Denote \(\alpha ^x= (x_1,x_2,\ldots ,x_{m-1},1 -\sum _{i \ne m} x_i)\), \(A^*_x=\{x \in {\mathbb R}^{m-1}: \alpha ^x \in A^*\}\), and

$$\begin{aligned} g_{-1}(x)=g(\alpha ^x) \; . \end{aligned}$$

A subgradient for the function \(g_{-1}\) of \(m-1\) free variables at \(x^0 \in {\mathbb R}^{m-1}\) is given by

$$\begin{aligned} u_{-m}^0 - u_m^0 1_{m-1} \; , \end{aligned}$$

where \(\alpha ^0=\alpha ^{x^0}\), \(u^0_{-m} =(u_1^{\hat{\alpha }^0},u_2^{\hat{\alpha }^0}, \ldots ,u_{m-1}^{\hat{\alpha }^0})\) and \(u_m^0=u_m^{\hat{\alpha }^0}\). In fact, for any \(x,x^0 \in {\mathbb R}^{m-1}\), we have

$$\begin{aligned} g_{-1}(x) - g_{-1}(x^0)&= g(\alpha ^x) - g(\alpha ^0) \ge \sum _{i \in M} \alpha _i^x u_i^0 - \sum _{i \in M} \alpha _i^0 u_i^0 \\&= \sum _{i \ne m} u_i^0 (x_i - x_i^0) + u_m^0 \sum _{i \ne m} (x_i^0 - x_i) \\&= \sum _{i \ne m} (u_i^0 - u_m^0)(x_i - x_i^0) = \langle u_{-m}^0 - u_m^0 1_{m-1},x - x^0 \rangle \; , \end{aligned}$$

the inequality deriving from \((iii)\) in Proposition 4.4.

Since \(u_{-m}^{0} - u_m^{0} 1_{m-1} \in [-1,1]^{m-1}\), the subgradient is bounded for all possible choices of \(x^0\). Now consider the update rule

$$\begin{aligned} x^{k+1} = x^k - h_{k+1} \left[ u_{-m}^{\hat{\alpha }^{x^k}} - u_m^{\hat{\alpha }^{x^k}} 1_{m-1} \right] \; . \end{aligned}$$

Theorem 4.3 guarantees that

$$\begin{aligned} \lim _{k \rightarrow \infty } \min _{x \in A_x^*} || x^k - x ||&= \lim _{k \rightarrow \infty } \min _{\alpha \in A^*} || \alpha ^{x^k} - \alpha || = 0 \qquad \text{ and } \\ \min _{x \in {\mathbb R}^{m-1}}g_{-1}(x)&= \lim _{k \rightarrow \infty }g_{-1}(x^k) = \lim _{k \rightarrow \infty }g(\alpha ^{x^k}) = \min _{\alpha \in \Delta _{m-1}} g(\alpha ) \; . \end{aligned}$$

\(\square \)

The rules (16), (17) and (18) update \(\alpha ^k\) by considering player \(m\) as a reference player and by adjusting the other players’ coefficients so that their utilities equalize with that of the reference. The convergence process then makes all utilities converge to an equal value, which is independent of the chosen reference. In fact, any player can play that role; furthermore, we can consider a procedure where all players’ utilities are compared to a symmetric value such as the utilities’ mean. This time we consider an unconstrained problem in \({\mathbb R}^m\). Notice, however, that if we pick any point of \(D\) as the origin of the sequence, then the whole sequence of updated vectors stays in \(D\).

Proposition 4.6

In Proposition 4.5, we could replace the update rules (16), (17) and (18) with

$$\begin{aligned} u^k&= PVV(\hat{\alpha }^k) \end{aligned}$$
(20)
$$\begin{aligned} \alpha ^{k+1}&= \alpha ^k - h_{k+1} \left[ u^k - \bar{u}^k \right] \; , \end{aligned}$$
(21)

where \(\bar{u}^{k} = \sum _{j \in M} u_j^{k} /m\), obtaining the same convergence results.

Proof

For any \(y=(y_1,\ldots ,y_m) \in {\mathbb R}^m\) and \(i \in M\) let

$$\begin{aligned} \alpha ^{y,i} = \left( y_1,\ldots ,y_{i-1},1 - \sum _{j \ne i} y_j,y_{i+1},\ldots ,y_m\right) \end{aligned}$$

and \(g^{i}(y)=g(\alpha ^{y,i})\). Now consider

$$\begin{aligned} \tilde{g}(y)=\frac{1}{m} \sum _{i \in M} g^{i}(y) \; . \end{aligned}$$

Then \(\tilde{g}\) is a convex function of \(y\in {\mathbb R}^m\), which coincides with \(g\) on \(D\). Moreover,

$$\begin{aligned} \min _{y \in {\mathbb R}^m} \tilde{g}(y)= \min _{\alpha \in D} g(\alpha ) \; . \end{aligned}$$

The minimizing points include those in \(\Delta _{m-1}\) which minimize \(g\); moreover, if any other \(y^* \notin D\) minimizes \(\tilde{g}\), then \(\alpha ^{y^*,1},\alpha ^{y^*,2},\ldots ,\alpha ^{y^*,m}\) all minimize \(g\) in \(\Delta _{m-1}\) (and therefore so does any of their convex combinations). The \(i\)-th component of a subgradient of \(\tilde{g}\) at \(y\) is

$$\begin{aligned} \frac{1}{m} \sum _{j \ne i} \left( u_i^{\hat{\alpha }^{y,j}} - u_j^{\hat{\alpha }^{y,j}} \right) \; . \end{aligned}$$
(22)

Notice, however, that if we choose \(y \in D\), then \(\hat{\alpha }^{y,1}=\cdots =\hat{\alpha }^{y,m}=\hat{y}\), and (22) simplifies to

$$\begin{aligned} \frac{1}{m} \sum _{j \in M} \left( u_i^{\hat{y}} - u_j^{\hat{y}} \right) = u_i^{\hat{y}} - \bar{u}^{\hat{y}} \; . \end{aligned}$$

If we now consider the update rule with step \(h_y\), we have

$$\begin{aligned} y^{\text{ new }} = y - h_{y} \left[ u^{\hat{y}} - \bar{u}^{\hat{y}} \right] \in D \; . \end{aligned}$$

Therefore, if we start with a vector in \(D\) (and a fortiori, with a vector in \(\Delta _{m-1}\)), the whole sequence of updated values of \(y\) will stay in \(D\), where \(\tilde{g}\) and \(g\) coincide, and will converge to the set of minimizers of \(g\). Denoting any element in the sequence in \(D\) as \(\alpha ^k\), instead of \(y^k\), we get the desired convergence results (19). \(\square \)

We next consider an instance in which the convergence of the lower bound towards the optimal value \(v(\Gamma ,w)\) is also established. Suppose that \(MAC\) and \(RD\) hold. Then, for any point on the Pareto border of \({\mathcal {P}}\) there exists one and only one hyperplane touching \({\mathcal {P}}\) (see Sects. 12.C and 12.D in Barbanel 2005). Consequently, there exists a unique \(\alpha ^*\) minimizing \(g\), and a unique equitable \(u^*=PVV(\alpha ^*)\). Therefore, we have the following.

Proposition 4.7

Suppose \(CD\), \(MAC\) and \(RD\) hold. Then, for any \(\alpha ^0 \in \Delta _{m-1}\) updated by the rules (16), (17) and (18), or by the rules (20) and (21), we have

$$\begin{aligned} \lim _{k \rightarrow \infty } \alpha ^k&= \alpha ^*, \qquad \lim _{k \rightarrow \infty }g(\alpha ^k) =g(\alpha ^*) = v(\Gamma ,w) \quad \text{ and } \end{aligned}$$
(23)
$$\begin{aligned} \lim _{k \rightarrow \infty } u^k&= u^* , \qquad \lim _{k \rightarrow \infty } \underline{v}(u^k) = \underline{v}(u^*) = v(\Gamma ,w) \; . \end{aligned}$$
(24)

Proof

Convergences in (23) are guaranteed by Propositions 4.5, 4.6 and by the uniqueness of the minimizing argument for \(g\).

To prove (24), suppose on the contrary that \(\alpha ^{k} \rightarrow \alpha ^*\) while \(u^k \nrightarrow u^*\). Since the sequence \(\{u^k\}\) lies in a compact set, there must be a convergent subsequence \(u^{k'} \rightarrow \tilde{u} \ne u^*\). The vector \(\tilde{u}\) would then be a second PVV associated with \(\alpha ^*\), which is ruled out by RD. Thus, \(\lim _{k \rightarrow +\infty } u^k = u^*,\) and, by continuity, \(\lim _{k \rightarrow + \infty } \underline{v}(u^k) = \underline{v}(u^*) = v(\Gamma ,w)\). \(\square \)

The results in this section describe procedures that make an arbitrary initial allocation converge to the optimal value and allocation through an iterative adaptive process that reduces the weight of those players who are receiving more utility than a certain reference value, and increases the weight of those who are receiving too little. Such iterative procedures towards an equilibrium have a long history in economic theory, which can be traced back to Walras and his theory of tâtonnement (see Walras 1889).

4.1 The algorithm

We now present two versions of an algorithm for the maxmin division problem in case \(CD\), \(MAC\) and \(RD\) hold. The common initializing elements of both versions are listed in Table 1. The first version computes upper and lower bounds for \(v(\Gamma ,w)\) and updates the coefficient vector \(\alpha \) through the subgradient rules (20) and (21). Neither bound is monotonic in the step number; therefore, at every step each bound is updated by a simple comparison with the previous one. The generic step is described in Table 2. A simpler but slower version, described in Table 3, computes the approximating optimal partition as well as the value. The termination of both algorithms in a finite number of steps is guaranteed by Proposition 4.7.

Table 1 Description and initialization of the algorithms’ elements
Table 2 Generic step \(k\) for algorithm returning \(v(\Gamma , w)\)
Table 3 Generic step \(k\) for algorithm returning \(u^*\)

Particular care is needed in choosing the diminishing step size rule. A sequence converging too fast to 0 may increase the number of steps needed, since the step may soon become too small to reach the optimum. Similarly, a sequence converging too slowly may result in values of \(\alpha \) jumping in and out of the unit simplex. This, again, slows the convergence process.
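To fix ideas, here is a compact sketch of the first version on a discretized cake, assuming \(CD\), \(MAC\) and \(RD\) and using the mean-based rules (20) and (21) with \(h_k = 1/k\); the data layout (a matrix of weighted densities over grid cells) and all names are illustrative and do not reproduce Tables 1, 2 and 3 literally.

```python
import numpy as np

# Sketch of the bound-returning version (cf. Table 2). F[j] samples the
# weighted density f_j^w on grid cells; dx holds the cells' mu-measures.
def maxmin_bounds(F, dx, tol=1e-3, max_iter=100_000):
    m = F.shape[0]
    total = F @ dx                                   # mu_j^w(C)
    alpha = np.full(m, 1.0 / m)                      # alpha^0: equal weights
    upper, lower = np.inf, 0.0
    for k in range(1, max_iter + 1):
        a = np.clip(alpha, 0.0, None)
        a = a / a.sum()                              # projection alpha -> alpha_hat
        owner = np.argmax(a[:, None] * F, axis=0)    # maxsum partition, rule (6)
        u = np.array([F[j, owner == j] @ dx[owner == j] for j in range(m)])
        upper = min(upper, a @ u)                    # g(alpha_hat), Prop. 3.3
        h = int(np.argmax(u))
        lower = max(lower, u[h] / (1.0 + sum((u[h] - u[j]) / total[j]
                                             for j in range(m) if j != h)))
        if upper - lower < tol:                      # bounds bracket v(Gamma, w)
            return lower, upper
        alpha = alpha - (1.0 / k) * (u - u.mean())   # subgradient step (20)-(21)
    return lower, upper

grid = np.linspace(0, 1, 2001)
mid = (grid[:-1] + grid[1:]) / 2
F = np.vstack([2 * mid, 2 - 2 * mid])                # two hypothetical coalitions
print(maxmin_bounds(F, np.diff(grid)))               # both bounds near 0.75
```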

4.2 A five-player example

Let us consider the coalitional game defined in (4), with five players whose preferences are listed as probability distributions on \(C=[0,1]\) in Table 4.

Table 4 Players’ preferences

In Fig. 2 we represent the initial densities (a) and the maxmin partition for the fully competitive context (b), where \(\Gamma = \{\{i\}_{i \in N}\}\) and \(w(\{i\})=1\) for all \(i \in N.\)

Fig. 2 Densities of players’ preferences and maxmin partition in the competitive context (red for player 1, green for player 2, blue for player 3, brown for player 4, purple for player 5)

For each \(S \subseteq N\), we run our algorithm under the two weight systems \(w_{\mathrm{card}}\) and \(w_{\mathrm{pre}}\), with a tolerance of \(10^{-3}\), and we compute the corresponding game values (Table 5). Then, in Table 6, we report the Shapley value of each game, computed as sketched below.

Table 5 Comparison between the coalitional game values
Table 6 Comparison between the Shapley values
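The Shapley values in Table 6 follow from the game values in Table 5 through the standard Shapley formula. A sketch, assuming the values \(\eta (S,w)\) are stored in a hypothetical dictionary `eta` keyed by frozensets of players (with the empty set mapped to 0):

```python
from itertools import combinations
from math import factorial

# Standard Shapley value of the coalitional game (4). eta maps frozensets
# to the values eta(S, w) of Table 5 and must include frozenset(): 0.
def shapley(players, eta):
    n = len(players)
    phi = {}
    for i in players:
        others = [j for j in players if j != i]
        phi[i] = sum(
            factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            * (eta[frozenset(S) | {i}] - eta[frozenset(S)])
            for r in range(n) for S in combinations(others, r))
    return phi

# phi = shapley([1, 2, 3, 4, 5], eta)    # with eta built from Table 5
```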

The two games share the same ranking for the Shapley values

$$\begin{aligned} \text{ Pl.5 } \succ \text{ Pl.3 } \succ \text{ Pl.4 } \succ \text{ Pl.1 } \succ \text{ Pl.2 } \; , \end{aligned}$$

which therefore seems to be fairly robust to the choice of the weight system.

Also, the weight system \(w_{\mathrm{pre}}\) amplifies the differences among the Shapley values obtained with \(w_{\mathrm{card}}\), yielding a higher variance for the distribution of the values.

5 Concluding remarks

In the previous section we described two algorithms that return maxmin values and partitions in both competitive and cooperative settings. It is important to note that the same procedures could be implemented interactively between (coalitions of) players and an impartial referee. At first the referee proposes a division of the cake based on the maxsum division with equal weights for all players. The players then report their utilities, and the referee corrects the inequalities in the division by proposing a new maxsum division with modified weights: players who were better off are given a smaller weight, and those who were worse off see their weight increase. Of course, an interactive implementation cannot hope to achieve the same degree of precision, since the automated algorithm performs that step dozens of times, but the bounds described in Sect. 3 give a precise idea of how far the proposed division is from the desired one.

Many issues remain open. We hint at two of them.

  • In the numerical example it would be interesting to link the Shapley value rankings to the original system of preferences. What makes Players 5 and 3 the most powerful players in the cooperative division process? Apparently the two utility functions have quite different features: Player 5’s distribution is uniform over the unit interval, and his density is the largest one only at the very ends of the interval. On the other hand, Player 3’s preferences are concentrated on the second half of the interval, where he has no competitors except Player 5 (who, however, has a smaller density there). No simple explanation could be provided so far.

  • Beyond the convergence of the algorithms, which end in a finite number of steps and return an approximate solution up to a specified degree of precision, it would be interesting to investigate their computational efficiency.