1 Introduction

In 1947, Samuelson suggested the dual approach in consumer theory. Instead of considering a consumer maximizing his utility subject to a budget constraint, one considers a consumer minimizing his expenditure subject to a level of utility he must achieve. In other words, one considers the problem dual to the Utility Maximization Problem. Owing to this duality, there exist well-known relations between the Marshallian demand (i.e., the solution of the Utility Maximization Problem) and the Hicksian demand (i.e., the solution of the Expenditure Minimization Problem). For a discussion of these relations, we refer the reader to [1]. For a more general discussion of the concept of duality in economics, we refer the reader to [2] and [3]. As Barten and Böhm explained in [1]:

In certain cases, it provides a more direct analysis of the price sensitivity of demand[...].

For this reason, the so-called Expenditure Minimization Problem has been extensively studied during the last decades. For instance, one can mention the early contribution of McKenzie [4], who obtained the Slutsky equation in demand theory using expenditure minimization instead of utility maximization. We should also mention the contribution of Diamond and McFadden [5], who provided three uses of the expenditure function in public finance: the deadweight burden of taxation, optimal taxation and optimal investment. For an extensive discussion of the Expenditure Minimization Problem and for further references, we refer the reader to [1] and [2].

The purpose of this work is to deal with a generalization of this problem to the case of multiple constraints. It is customary to study multiple-constraint models in consumer theory. On the other hand, there is little research concerning a consumer facing more than one utility constraint. As illustrated in the next section, in many relevant situations the consumer or the economic planner faces more than one utility constraint, and this observation calls for such a generalization. Contrary to what one might expect, we do not restrict ourselves to the problems of existence and uniqueness. We also study the Lipschitz behavior of the solution, and we identify the conditions on the parameters (i.e., the utility levels and the price vector) under which this solution is continuously differentiable around a given point. Our last contribution is a Slutsky-type property that generalizes the classical one.

The paper is organized as follows. Section 2 states the problem and the assumptions. In addition, we provide three economic motivations for the optimization problem. In Sect. 3, after characterizing the solution by necessary and sufficient first-order conditions, we prove the existence and continuity of the solution. Section 4 studies the classical properties of the generalized expenditure function and establishes that the generalized Hicksian demand is locally Lipschitz continuous. The proof relies on the result of Cornet and Vial [6] on the Lipschitz behavior of the solution of a mathematical programming problem. In Sect. 5, we show that the generalized Hicksian demand is continuously differentiable if a strict complementary slackness condition holds. Following Fiacco and McCormick [7], this result is a consequence of the classical Implicit Function Theorem. Finally, we obtain a Slutsky-type property for the generalized Hicksian demand. Section 6 presents perspectives for further research, while Sect. 7 summarizes the results.

2 Assumptions and Economic Motivations

Let \(u_1,\ldots ,u_n\) be n functions defined on \({\mathbb {R}}^{\ell }_{++}\). The problem with which we shall be concerned throughout this paper is:

$$\begin{aligned} \max \langle -p\cdot x\rangle \text{ s.t. } u_{k}(x)\ge v_{k} \text{, } \quad k=1,\ldots ,n \text{, } \;x \gg 0 \end{aligned}$$
(1)

with p belonging to \({\mathbb {R}}^{\ell }_{++}\) and \(v:=(v_{k})_{k=1}^{n} \in {\mathbb {R}}^{n}\). The solution of this problem will be denoted by \(\varDelta (p,v)\) and called the generalized Hicksian/compensated demand. The aim of the paper is to study the properties of this mapping.
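Purely as a numerical illustration of Problem (1) and of the mapping \(\varDelta \), one may solve a small instance with off-the-shelf optimization software. The sketch below is not part of the formal development: the square-root utilities, the price vector and the utility levels are assumptions chosen only for the example.

```python
# Illustrative sketch only: solve a small instance of Problem (1) numerically.
# The square-root utilities, prices and utility levels below are assumed data.
import numpy as np
from scipy.optimize import minimize

ell, n = 3, 2                                  # number of goods, number of constraints
p = np.array([1.0, 2.0, 1.5])                  # price vector p >> 0
v = np.array([1.1, 1.2])                       # utility levels (v_1, ..., v_n)
A = np.array([[0.5, 0.3, 0.2],                 # weights of the illustrative utilities
              [0.2, 0.3, 0.5]])

def u(k, x):
    """Illustrative utility u_k(x) = sum_h a_{kh} sqrt(x_h)."""
    return A[k] @ np.sqrt(x)

# min p.x  s.t.  u_k(x) >= v_k for all k,  x >> 0 (enforced by small lower bounds).
cons = [{'type': 'ineq', 'fun': (lambda x, k=k: u(k, x) - v[k])} for k in range(n)]
res = minimize(lambda x: p @ x, np.ones(ell), method='SLSQP',
               constraints=cons, bounds=[(1e-9, None)] * ell)

Delta = res.x                                  # generalized Hicksian demand Delta(p, v)
print("Delta(p,v) =", Delta.round(4), "  e(p,v) =", round(p @ Delta, 4))
```

Any nonlinear programming routine could be used here; the only purpose of the sketch is to make the objects p, v and \(\varDelta (p,v)\) concrete.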

We proceed to posit the assumptions concerning the functions \((u_{k})_{k=1}^{n}\).Footnote 1

Assumption 2.1

For all \(k=1,\ldots ,n\),

  1. \(u_{k}\) is \(C^2\) on \({\mathbb {R}}^{\ell }_{++}\),

  2. \(u_{k}\) is differentiably strictly quasi-concave (i.e., \(D^{2} u_{k}(x)\) is negative definite on \(\nabla u_{k}(x)^{\perp }\) for all \( x \in {\mathbb {R}}^{\ell }_{++}\)),

  3. \(u_{k}\) is differentiably strictly increasing (i.e., \(\nabla u_{k}(x)\gg 0, \forall x \in {\mathbb {R}}^{\ell }_{++}\)).

Assumption 2.2

For all \(k \in \left\{ 1,\ldots ,n \right\} \), if a sequence \((x^{\nu })_{\nu \ge 0}\) converges to \(x \in \partial {\mathbb {R}}^{\ell }_{++}\), then:

$$\begin{aligned} \lim _{\nu \longmapsto +\infty }\dfrac{\nabla u_{k}(x^{\nu })\cdot x^{\nu }}{ \Vert \nabla u_{k}(x^{\nu })\Vert }=0. \end{aligned}$$

The boundary behavior of the preferences is given by Assumption 2.2. Roughly speaking, when the quantity of one good is very small, the consumer basically wants to increase it, as explained in [8]. For that reason, this boundary assumption ensures that the demand is interior. One usually considers the classical boundary assumption: for all \(x \in {\mathbb {R}}_{++}^{\ell }\), the closure in \({\mathbb {R}}^{\ell }\) of the set \( \lbrace x^\prime \in {\mathbb {R}}_{++}^{\ell } | u(x^\prime ) \ge u(x) \rbrace \) is contained in \({\mathbb {R}}_{++}^ {\ell }\), which implies Assumption 2.2. However, the two assumptions are not equivalent. For instance, the Expected Utility Function often does not satisfy the classical closure assumption. As an example, the utility function u defined on \({\mathbb {R}}^{2}_{++}\) by \(u(x_1,x_2)=\dfrac{1}{2}\sqrt{x_1}+\dfrac{1}{2}\sqrt{x_2}\) satisfies Assumption 2.2 but not the classical boundary assumption.Footnote 2
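For this particular example, both claims can be checked by a direct computation, given here only for illustration and using the 1-norm as in Sect. 3 (any other norm leads to the same conclusion):

$$\begin{aligned} \nabla u(x)=\left( \dfrac{1}{4\sqrt{x_1}},\dfrac{1}{4\sqrt{x_2}}\right) , \qquad \dfrac{\nabla u(x)\cdot x}{\Vert \nabla u(x)\Vert }=\dfrac{(\sqrt{x_1}+\sqrt{x_2})/4}{(\sqrt{x_1}+\sqrt{x_2})/(4\sqrt{x_1 x_2})}=\sqrt{x_1 x_2}, \end{aligned}$$

which converges to zero along any sequence converging to a point of \(\partial {\mathbb {R}}^{2}_{++}\), so Assumption 2.2 holds. On the other hand, the upper contour set of (1, 1) contains the points \((t,(2-\sqrt{t})^{2})\) for \(t \in ]0,1]\); their limit (0, 4) lies on the boundary of \({\mathbb {R}}^{2}_{++}\), so the classical closure assumption fails.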

In Problem (1), not all vectors of utility levels are relevant. To determine the relevant ones, we shall define, for \(v \in {\mathbb {R}}^{n}\), the set P(v) by:

$$\begin{aligned} P(v):=\left\{ x \in {\mathbb {R}}^{\ell }_{++}: u_{k}(x)\ge v_{k}, \quad \forall k=1,\ldots ,n \right\} . \end{aligned}$$

If this set is empty or equal to the whole consumption set \({\mathbb {R}}^{\ell }_{++}\), the vector v is obviously not relevant. This motivates the definition of the set \({\mathcal {V}}\) by:

$$\begin{aligned} {\mathcal {V}}:= \left\{ v \in {\mathbb {R}}^{n}: P(v)\ne \emptyset \text{ and } P(v)\ne {\mathbb {R}}^{\ell }_{++}\right\} . \end{aligned}$$

As a matter of fact, one can give a more explicit description of \({\mathcal {V}}\). The statement \(v \in {\mathcal {V}}\) means: \(\exists z \in {\mathbb {R}}^{\ell }_{++}\) such that \(u_{k}(z)\ge v_{k}\) for all \(k=1,\ldots ,n\), and \(\exists z^\prime \in {\mathbb {R}}^{\ell }_{++}, k_{0} \in \lbrace 1,\ldots ,n \rbrace \) such that \(u_{k_{0}}(z^\prime )<v_{k_{0}}\). Clearly, the set \({\mathcal {V}}\) is an openFootnote 3 subset of \({\mathbb {R}}^{n}\).

Finally, we proceed to define the generalized expenditure function.

Definition 2.1

The function e is defined on \({\mathbb {R}}^{\ell }_{++} \times {\mathcal {V}}\) by \(e(p,v)=p\cdot \varDelta (p,v)\) and is called the generalized expenditure function.

Before pursuing the analysis, we next present three applications.

2.1 Generalization of the Classical Compensated Demand

If n is equal to one, \(\varDelta (p,v)\) is the so-called compensated demand or Hicksian demand.Footnote 4 So \(\varDelta (p,v)\) can be viewed as a multi-criterion extension of the Hicksian demand.

2.2 Public Goods

The following application concerns economic planning. Consider an economy with n consumers, \(\ell \) public goodsFootnote 5 and m private goods. Suppose that the basket of private goods \((\xi _{k})\in {\mathbb {R}}^{m}_{++}\) to be consumed by consumer k has already been chosen, i.e., \(u_{k}(x):=U_{k}(x,\xi _{k})\) where \(U_{k}\) is the utility function of consumer k. As usual, consumer k wishes to achieve an individual level of utility \(v_{k} \in {\mathbb {R}}\). In this situation, the economic planner must choose the cheapest basket of public goods \(x \in {\mathbb {R}}^{\ell }_{++}\) with respect to the price \(p\in {\mathbb {R}}^{\ell }_{++}\), given the individual levels of private goods \((\xi _{k})_{k=1}^{n}\) and the individual levels of utility \((v_{k})_{k=1}^{n}\). Therefore, he has to solve Problem (1).

2.3 Private Goods and Positive Externalities

Consider an economy with n consumers and r private goods. The price of good h is denoted by \(q_{h}\), while the consumption bundle of consumer k is denoted by \(x_{k}\). Suppose that the consumption of every good by another consumer has a positive effectFootnote 6 on the utility of consumer k. Hence, the utility function \(u_{k}\) of consumer k is a function of both his consumption bundle \(x_{k}\) and the consumption bundles of the others \((x_{j})_{j\ne k}\). Let us write \(\ell :=r n\) and denote by \(x:=(x_{k})\in {\mathbb {R}}^{\ell }_{++}\) the concatenation of the consumption bundles. In the same way, the vector \(p:=(q,\dots ,q)\in {\mathbb {R}}^{\ell }_{++}\) denotes the n-replica of the price vector \(q \in {\mathbb {R}}^{r}_{++}\). An economic planner who wants to minimize the expenditure of the society \(p\cdot x=\sum _{k=1}^{n}q\cdot x_{k}\) subject to the individual utility levels \((v_{k})_{k=1}^{n}\) hasFootnote 7 to solve Problem (1).

3 Existence of the Solution

In order to establish the existence of the solution of Problem (1), we first show that this solution is characterized by first-order conditions. We then introduce an intermediary \(\varepsilon \)-problem and prove that this problem admits a unique solution \(\varDelta ^{\varepsilon }(p,v)\) for all \((p,v)\in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}.\) In addition, the continuity of the function \(\varDelta ^{\varepsilon }\) is proved. Finally, we present a characterization of \(\varDelta ^{\varepsilon }(p,v)\) by first-order conditions. Combining these results, we show that \(\varDelta (p,v)\) is a singleton and that \(\varDelta \) defines a continuous function on \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\).

3.1 Characterization of the Generalized Hicksian Demand by First-Order Conditions

Proposition 3.1

Let \(p\in {\mathbb {R}}^{\ell }_{++}\) and \(v \in {\mathcal {V}}\). The following two assertions are equivalent:

  1. \(\bar{x}=\varDelta (p,v)\)

  2. There exists \(\lambda \in {\mathbb {R}}^{n}_{+}\setminus \lbrace 0 \rbrace \) such that \(\bar{x}\) is the solution of the system:

    $$\begin{aligned}&p=\sum \limits _{k =1}^{n} \lambda _{k}\nabla u_{k}(x) \nonumber \\&\lambda _{k}(u_{k}(x)-v_{k})=0, \quad k=1,\ldots ,n\nonumber \\&u_{k}(x)\ge v_{k}, \quad k=1,\ldots ,n\nonumber \\&x\gg 0 \end{aligned}$$
    (2)

Proof

We first show that Assertion 1 implies Assertion 2. Since v belongs to the set \({\mathcal {V}}\), there exists some element \(x \in {\mathbb {R}}^{\ell }_{++}\) such that \(u_{k}(x)\ge v_{k}\) for all \(k\in \lbrace 1,\ldots ,n\rbrace \). Hence, by monotonicity of the functions \((u_{k})_{k=1}^{n}\), there exists \(\hat{x}\) such that \(u_{k}( \hat{x})>v_{k}\) for all \(k=1,\ldots ,n\). As a consequence, the first-order conditions are necessary since Slater’s Constraint Qualification holds.Footnote 8 The multiplier vector \(\lambda :=(\lambda _{k})_{k=1}^{n}\) is necessarily different from zero because the vector p belongs to \( {\mathbb {R}}^{\ell }_{++}\).

Now we prove the converse statement. The functions \((u_{k})_{k=1}^{n}\) are differentiable and quasi-concave and satisfy: \(\nabla u_{k}(x)\ne 0\) for all \(x \in {\mathbb {R}}^{\ell }_{++}\), while the objective function is linear.Footnote 9 This implies that the first-order conditions are sufficient. Thus, Assertion 2 implies Assertion 1.\(\square \)
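Proposition 3.1 can be illustrated numerically on the assumed data of the sketch following Problem (1): at a computed solution, candidate multipliers can be recovered by nonnegative least squares, and the residuals of system (2) can be inspected. This is only an illustration; the square-root utilities and the data are assumptions.

```python
# Numerical illustration of the first-order system (2) at a computed solution:
# recover candidate multipliers by nonnegative least squares and check the
# stationarity and complementary slackness residuals.  Assumed data only.
import numpy as np
from scipy.optimize import minimize, nnls

p = np.array([1.0, 2.0, 1.5])
v = np.array([1.1, 1.2])
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])
u = lambda k, x: A[k] @ np.sqrt(x)
grad_u = lambda k, x: A[k] / (2.0 * np.sqrt(x))          # gradient of u_k at x

cons = [{'type': 'ineq', 'fun': (lambda x, k=k: u(k, x) - v[k])} for k in range(2)]
x_bar = minimize(lambda x: p @ x, np.ones(3), method='SLSQP',
                 constraints=cons, bounds=[(1e-9, None)] * 3).x

# Stationarity: p = sum_k lambda_k grad u_k(x_bar) with lambda >= 0.
G = np.column_stack([grad_u(k, x_bar) for k in range(2)])
lam, residual = nnls(G, p)

print("lambda =", lam.round(4), "  stationarity residual =", round(residual, 6))
print("complementary slackness lambda_k (u_k(x_bar) - v_k):",
      [round(lam[k] * (u(k, x_bar) - v[k]), 6) for k in range(2)])
```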

3.2 An Intermediary Existence Result

To solve Problem (1), we have to study an intermediary problem. For \(\varepsilon >0\) and \((p,v) \in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\), we shall consider the following problem:

$$\begin{aligned} \max \langle -p\cdot x\rangle \text{ s.t. } u_{k}(x)\ge v_{k} \text{, } \quad k=1,\ldots ,n \text{, } x_{h} \ge \varepsilon \text{, } h=1,\ldots , \ell \end{aligned}$$
(3)

We start by an existence result:

Proposition 3.2

Let \(\varepsilon >0\). The solution of Problem (3), denoted by \(\varDelta ^{\varepsilon }(p,v)\), exists and is a singleton.

Proof

Since v belongs to \({\mathcal {V}}\), there exists \(z_0 \in {\mathbb {R}}^{\ell }_{++}\) such that \(u_k(z_0)\ge v_{k}\) for all \(k \in \lbrace 1,\ldots ,n \rbrace \). Let us choose \(z_{1}\in {\mathbb {R}}^{\ell }\) such that \(z_{1h}\ge \max \lbrace \varepsilon ,z_{0h}\rbrace \) for all \(h \in \lbrace 1,\ldots ,\ell \rbrace \). We now consider another intermediary problem.

$$\begin{aligned} \max \langle -p\cdot x\rangle \text{ s.t. } u_{k}(x)\ge v_{k} \text{, } \quad k=1,\ldots ,n \text{, } p\cdot x \le p\cdot z_{1} \text{, } x\ge \varepsilon \mathbf {1} \text{, } x \in {\mathbb {R}}^{\ell } \end{aligned}$$
(4)

Note that \(z_1\) is feasible for Problem (4) because \(z_1\ge z_0\) and because the functions \((u_k)_{k=1}^{n}\) are increasing.

We proceed to prove that Problem (4) admits a solution.

The set \(A:=\left\{ x \in {\mathbb {R}}^{\ell }: x\ge \varepsilon \mathbf {1} \text{ and } p\cdot x \le p\cdot z_{1} \right\} \) is compact, being a closed and bounded set in a finite-dimensional vector space. Moreover, the function \(x\longmapsto -p\cdot x\) is continuous on \( {\mathbb {R}}^{\ell }_{++}\). According to the Weierstrass Theorem, this problem admits a solution.

Thanks to Lemma 8.2 proved in “Appendix,” we deduce that Problem (3) also admits a solution. Finally, we show that the set \(\varDelta ^{\varepsilon }(p,v)\) is a singleton. Suppose that x and \(x^{\prime }\) are distinct solutions of Problem (3). The element \(x^{\prime \prime }:=\dfrac{1}{2}(x+x^{\prime })\) is clearly feasible. Indeed, by strict quasi-concavity of the functions \((u_{k})_{k=1}^{n}\), we have \(u_{k}(x^{\prime \prime })>v_{k}\) for every \(k=1,\ldots ,n\), while \(x^{\prime \prime }\ge \varepsilon \mathbf {1}\) obviously holds. Since x and \(x^\prime \) are distinct, they cannot both be equal to \(\varepsilon \mathbf {1}\), so at least one of them has a component larger than \(\varepsilon \). To fix ideas, suppose that \(x_{1}\) is larger than \(\varepsilon \). By continuity of the functions \((u_{k})_{k=1}^{n}\), for \(\delta \) positive sufficiently small, we have: \(x^{\prime \prime } -\delta e_{1}\ge \varepsilon \mathbf {1}\) and \(u_{k}(x^{\prime \prime } -\delta e_{1})>v_{k}\) for every \(k=1,\ldots ,n\). Moreover, the element \(\tilde{x}:=x^{\prime \prime }-\delta e_{1}\) satisfies \(-p\cdot x=-p\cdot x^{\prime \prime }< -p\cdot \tilde{x}\). So x is not a solution, and one gets a contradiction.\(\square \)

3.3 Continuity of \(\varDelta ^{\varepsilon }\)

Proposition 3.3

Let \(\varepsilon >0\). The function \(\varDelta ^{\varepsilon }\) is continuous on \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\).

Proof

Let \((\bar{p},\bar{v}) \in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\) and let \(\varXi \) be a compact neighborhood of \((\bar{p},\bar{v})\). The set \(\varXi \) is chosen such that v belongs to \({\mathcal {V}}\) for all \((p,v)\in \varXi .\) Footnote 10 The compactness of \(\varXi \) allows us to say that, for \(M>\varepsilon \) sufficiently large, \(u:= M\mathbf {1}\) belongs to the interior of P(v) for all \((p,v)\in \varXi \). Take such a number M. The budget setsFootnote 11 \(B(p,p\cdot u)\) are all contained in a compact set K since p varies in a compact set contained in \({\mathbb {R}}^{\ell }_{++}\) when (p, v) belongs to \(\varXi \). For \(\bar{M}>M\) large enough, \(\bar{M}\mathbf {1}\) does not belong to the compact set K. We now define the correspondence \(C^{\varepsilon }\), for \((p,v)\in \varXi \), by:

\(C^{\varepsilon }(p,v):=\lbrace x \in {\mathbb {R}}^{\ell }_{++}: u_{k}(x)\ge v_{k}, \quad \forall k=1,\ldots ,n \text{ and } \varepsilon \mathbf {1}\le x\le \bar{M}\mathbf {1} \rbrace \) and observe that, by construction, for all \((p,v)\in \varXi , \varDelta ^{\varepsilon }(p,v)\) is the solution of the following problem:

$$\begin{aligned} \max \langle -p\cdot x\rangle \text{ s.t. } x\in C^{\varepsilon }(p,v). \end{aligned}$$

On \(\varXi \), the interior of \(C^{\varepsilon }(p,v)\) is nonempty since \({\mathcal {V}}\) is an open set. We now prove that the function \((p,v)\longmapsto \varDelta ^{\varepsilon }(p,v)\) is continuous on \(\varXi \). This is a consequence of Berge’s Theorem [13]. We have to prove that the correspondence \(C^{\varepsilon } \) is both upper semi-continuous and lower semi-continuous on \(\varXi \). First, we show that \(C^{\varepsilon }\) is upper semi-continuous. On \(\varXi \), the set \(C^{\varepsilon }(p,v)\) remains in a fixed compact set. Hence, the upper semi-continuity of \(C^{\varepsilon }\) is equivalent to the closedness of its graph, which is a consequence of the continuity of the functions \((u_{k})_{k=1}^{n}\).

We now have to show that the correspondence \(C^{\varepsilon }\) is lower semi-continuous. We proceed to define the correspondence \(\hat{C}^{\varepsilon }\) on \(\varXi \) by:

\(\hat{C}^{\varepsilon }(p,v):=\lbrace x \in {\mathbb {R}}^{\ell }_{++}: u_{k}(x)>v_{k}, \quad \forall k=1,\ldots ,n \text{ and } \varepsilon \mathbf {1} \ll x \ll \bar{M} \mathbf {1}\rbrace \). The correspondence \(\hat{C}^{\varepsilon }\) has an open graph by the continuity of the functions \((u_k)_{k=1}^{n}\). So \(\hat{C}^{\varepsilon }\) is lower semi-continuous. Note that \(\hat{C}^{\varepsilon }(p,v)\) is nonempty for every \((p,v)\in \varXi \) since it contains \(M\mathbf {1}\): indeed, \(M\mathbf {1}\) belongs to the interior of P(v) and satisfies \(\varepsilon \mathbf {1} \ll M\mathbf {1} \ll \bar{M}\mathbf {1}\).

Moreover, the closure of \(\hat{C}^{\varepsilon }(p,v)\) is \(C^{\varepsilon }(p,v)\). Let \(x \in C^{\varepsilon }(p,v)\). We have to show that x is the limit of a sequence of elements of \(\hat{C}^{\varepsilon }(p,v)\). We choose \(y \in \hat{C}^{\varepsilon }(p,v)\) and observe that for all \(\lambda \in ]0,1[, (1-\lambda ) x +\lambda y\) belongs to \(\hat{C}^{\varepsilon }(p,v)\) since the functions \((u_k)_{k=1}^{n}\) are strictly quasi-concave. Consequently, x is the limit of the sequence \(\left( x^{\nu }:=\left( 1-\dfrac{1}{\nu }\right) x+\dfrac{1}{\nu }y\right) _{\nu \ge 1}\), whose terms all belong to \(\hat{C}^{\varepsilon }(p,v)\), and the result follows.

We deduce that the correspondence \(C^{\varepsilon }\) is lower semi-continuous since the closure of a lower semi-continuous correspondence is lower semi-continuous.Footnote 12 Berge’s Theorem implies that the function \(\varDelta ^{\varepsilon }\) is continuous on the set \(\varXi \). Since \((\bar{p},\bar{v})\) was arbitrarily chosen, the function \(\varDelta ^{\varepsilon }\) is continuous on \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\) as required. \(\square \)

3.4 Characterization of \(\varDelta ^{\varepsilon }(p,v)\) by First-Order Conditions

Let \(\varepsilon >0\). The first-order conditions corresponding to Problem (3) are: there exist \(\lambda ^{\varepsilon } \in {\mathbb {R}}^{n}_{+}\) and \(\mu ^{\varepsilon }\in {\mathbb {R}}^{\ell }_{+}\) such that \(\varDelta ^{\varepsilon }(p,v)\) is the solution of the system:

$$\begin{aligned}&p=\sum \limits _{k =1}^{n} \lambda _{k}^{\varepsilon }\nabla u_{k}(x)+\mu ^{\varepsilon } \nonumber \\&\lambda _{k}^{\varepsilon }(u_{k}(x)-v_{k})=0, \quad k=1,\ldots ,n\nonumber \\&u_{k}(x)\ge v_{k}, \quad k=1,\ldots ,n\nonumber \\&\mu _{h}^{\varepsilon }(\varepsilon -x_{h})=0, \quad h=1,\ldots ,\ell \nonumber \\&x_{h} \ge \varepsilon , \quad h=1,\ldots ,\ell \end{aligned}$$
(5)

As before, the first-order conditions are necessary since Slater’s Constraint Qualification holds. These are sufficient since the objective function is linear, the functions \((u_{k})_{k=1}^{n}\) are quasi-concave functions satisfying \(\nabla u_{k}(x)\ne 0\) for all \(x \in {\mathbb {R}}^{\ell }_{++}\), and the \(\ell \) additional constraints are affine.

3.5 Existence and Continuity of the Solution of Problem (1)

In this subsection, we show the main result of the section:

Proposition 3.4

For \((p,v)\in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}, \varDelta (p,v)\) is a singleton. Moreover, the function \(\varDelta \) is continuous on \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\).

Proof

Let \((p, v) \in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\) and let \(\varXi \) be a compact neighborhood of \((p,v)\). As before, the set \(\varXi \) is chosen such that \(v^\prime \) belongs to \({\mathcal {V}}\) for all \((p^\prime ,v^\prime )\in \varXi \). Our goal is to show that there exists \(\bar{\varepsilon }>0\) such that the multipliers \(\mu ^{\bar{\varepsilon }}\) corresponding to the additional constraints are equal to zero for all \((p^\prime ,v^\prime )\) in \(\varXi \). We reason by contradiction. Otherwise, there would exist a decreasing sequence \((\varepsilon _{q})_{q\ge 0}\) that converges to zero and a sequence \((p_q,v_q)_{q\ge 0}\) in \(\varXi \) such that \(\mu ^{\varepsilon _{q}}:=\mu ^{\varepsilon _{q}}(p_q,v_q)\ne 0\) for all \(q\in \mathbb {N}\).

Necessarily, \((x_{q}:=\varDelta ^{\varepsilon _{q}}(p_{q},v_{q}))_{q\ge 0}\) is bounded. Observe that for all \(q \in \mathbb {N}, x_{q}\gg 0\) and \(a\cdot x_{q}\le p_{q}\cdot x_{q}\le p_{0}\cdot x_{0}\) where the vector a is defined by \(a_{h}:=\min \lbrace p^{\prime }_{h}: (p^{\prime },v^{\prime }) \in \varXi \rbrace \) for \(h=1,\ldots , \ell \). The vector a is well defined and belongs to \({\mathbb {R}}^{\ell }_{++}\) thanks to the compactness of \(\varXi \). Moreover, since \(\mu ^{\varepsilon _{q}}\ne 0\), the complementary slackness conditions of (5) imply that at least one component of \(x_{q}\) is equal to \(\varepsilon _{q}\); extracting a subsequence, one may assume that it is the same component for all q, and this component converges to zero. Therefore, the sequence \((x_{q})_{q\ge 0}\) converges, up to a further subsequence, to an element \(\hat{x}\) belonging to the boundary of \({\mathbb {R}}^{\ell }_{++}\), and the sequence \((p_q,v_q)_{q \ge 0}\) converges, up to a subsequence, to some element \((\hat{p},\hat{v}) \in \varXi \) since \(\varXi \) is a compact set. In particular, remark that \(\hat{v}\) belongs to \({\mathcal {V}}\) and that \(\hat{p}\) is necessarily different from zero. With a slight abuse of notation, we denote the converging subsequences by the original sequences.

Observe that the sequence \((\mu ^{\varepsilon _{q}})_{q\ge 0}\) is also a bounded sequence thanks to the first equation of (5) and to the compactness of \(\varXi \). From the same equation, recalling that we consider the 1-norm, we have,Footnote 13 for all \(k \in \lbrace 1,\ldots , n\rbrace \) and all \(q\in \mathbb {N}\):

$$\begin{aligned} \lambda _{k}^{\varepsilon _{q}}\Vert \nabla u_{k}(x_{q})\Vert \le \Vert p_{q}\Vert . \end{aligned}$$

Therefore, for all \(k \in \lbrace 1,\ldots , n\rbrace \) and all \(q\in \mathbb {N}\), we get:

$$\begin{aligned} 0\le \lambda _{k}^{\varepsilon _{q}}\dfrac{\nabla u_{k}(x_{q})\cdot x_{q}}{ \Vert p_{q}\Vert }\le \dfrac{\nabla u_{k}(x_{q})\cdot x_{q}}{ \Vert \nabla u_{k}(x_{q})\Vert } . \end{aligned}$$

Thanks to Assumption 2.2 and in light of the previous inequalities, for all \(k \in \lbrace 1,\ldots , n\rbrace \), we get:

$$\begin{aligned} \lim _{q\longmapsto +\infty } \lambda _{k}^{\varepsilon _{q}}\dfrac{\nabla u_{k}(x_{q})\cdot x_{q}}{ \Vert p_{q}\Vert }=0. \end{aligned}$$

For \(q\in \mathbb {N}\), taking the inner product with \(x_{q}\) and dividing by \(\Vert p_{q}\Vert \) in the first equation of (5), we find:

$$\begin{aligned} \dfrac{p_{q}\cdot x_{q}}{\Vert p_{q}\Vert }=\sum _{k=1}^{n}\lambda _{k}^{\varepsilon _{q}}\dfrac{\nabla u_{k}(x_{q})\cdot x_{q}}{ \Vert p_{q}\Vert }+\dfrac{1}{\Vert p_{q}\Vert }\mu ^{\varepsilon _{q}}\cdot x_{q}. \end{aligned}$$

In view of (5), \(\mu ^{\varepsilon _{q}}\cdot x_{q}=\varepsilon _{q}\mu ^{\varepsilon _{q}}\cdot \mathbf {1}= \varepsilon _{q}\Vert \mu ^{\varepsilon _{q}}\Vert \) converges to zero. Hence, the right-hand side, and therefore the left-hand side, goes to zero. Since \((\hat{p},\hat{v})\) belongs to \(\varXi \), \(\hat{p}\gg 0\) and the limit \(\hat{x}\) of the sequence \((x_{q})_{q\ge 0}\) is necessarily zero.

Let \(\bar{x}\in {\mathbb {R}}^{\ell }_{++}\). We show that \(\bar{x}\) belongs to \(P(\hat{v})\). For q sufficiently large, one has: \(\bar{x}\gg x_{q}\) and \(\bar{x} \gg \varepsilon _{q} \mathbf {1}\). Thus, by monotonicity of the functions \((u_{k})_{k=1}^{n}\), for all \(k\in \lbrace 1,\ldots ,n\rbrace , u_{k}(\bar{x})>u_{k}(x_{q})\ge v_{q,k}\), so that \(\bar{x}\) belongs to \(P(v_{q})\) for q large enough. Letting q go to infinity, \(u_{k}(\bar{x})\ge \hat{v}_{k}\) for all \(k\in \lbrace 1,\ldots ,n\rbrace \), i.e., \(\bar{x}\) belongs to \(P(\hat{v})\). Since \(\bar{x}\) was arbitrarily chosen, we have: \(P(\hat{v})={\mathbb {R}}^{\ell }_{++}\), which contradicts \(\hat{v}\in {\mathcal {V}}\).

Consequently, there exists \(\bar{\varepsilon }>0\) such that \(\mu ^{\bar{\varepsilon }}=0\) on \(\varXi \). Thus, \(\varDelta ^{\bar{\varepsilon }}(p^\prime ,v^\prime )\) satisfies the necessary and sufficient conditions corresponding to Problem (1) for all \((p^\prime ,v^\prime )\) in \(\varXi \). So \(\varDelta =\varDelta ^{\bar{\varepsilon }}\) on \(\varXi \) and the continuity of \(\varDelta \) follows. \(\square \)

4 Properties of \(\varDelta \) and e

In Sect. 4.1, we study the properties of the function e. Section 4.2 concerns the Lipschitz behavior of \(\varDelta \) with respect to (p, v).

4.1 Properties of e

Proposition 4.1

  1. The function e is concave in p.

  2. The function e is twice differentiable in p a.e., and \(D^{2}_{p} e(p,v)\) is negative semi-definite when defined.

  3. \(D_{p}e(p,v)=\varDelta (p,v) \text{ and } D^{2}_{p}e(p,v)=D_{p}\varDelta (p,v)\) when defined.

Proof

The proof is essentially borrowed from Rader [15]. The function \(-e\) is convex in p as a maximum of linear functions. Indeed, \(-e(p,v)=\max \lbrace -p\cdot y :y\in {\mathbb {R}}^{\ell }_{++} \text{, } u_{k}(y)\ge v_k \text{, } k=1,\ldots ,n\rbrace \) for all \((p,v) \in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\). So the function e is concave in p. By Alexandroff’s Theorem, the function e is twice differentiable a.e. in p, and its second derivative is negative semi-definite.

By Theorem 4(iii) of Rader [15], \(D_{p} e(p,v)=\varDelta (p,v)\). \(\square \)
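As a sanity check on the assumed data of the earlier sketches, the identity \(D_{p}e(p,v)=\varDelta (p,v)\) can be verified numerically by central finite differences; the sketch below is illustrative only and is not part of the formal argument.

```python
# Finite-difference check of D_p e(p,v) = Delta(p,v) (Proposition 4.1, item 3)
# on assumed illustrative data.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])
v = np.array([1.1, 1.2])
u = lambda k, x: A[k] @ np.sqrt(x)

def delta(price):
    """Generalized Hicksian demand for the illustrative square-root utilities."""
    cons = [{'type': 'ineq', 'fun': (lambda x, k=k: u(k, x) - v[k])} for k in range(2)]
    return minimize(lambda x: price @ x, np.ones(3), method='SLSQP',
                    constraints=cons, bounds=[(1e-9, None)] * 3).x

def e(price):
    return price @ delta(price)

p, h = np.array([1.0, 2.0, 1.5]), 1e-4
grad_e = np.array([(e(p + h * np.eye(3)[j]) - e(p - h * np.eye(3)[j])) / (2 * h)
                   for j in range(3)])
print("D_p e(p,v) (finite differences):", grad_e.round(4))
print("Delta(p,v)                     :", delta(p).round(4))
```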

4.2 Lipschitz Behavior of \(\varDelta \)

Firstly, for all \((p,v)\in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\), we define the set of binding constraints M(p, v) by:

$$\begin{aligned} M(p,v):=\left\{ k \in \lbrace 1,\ldots ,n\rbrace : u_{k}(\varDelta (p,v))=v_{k}\right\} . \end{aligned}$$

Secondly, we define the set \(\varPi \) by:

$$\begin{aligned} \varPi :=\left\{ (p,v)\in {\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}: \text{ the family } \left( \nabla u_{k}(\varDelta (p,v))\right) _{k\in M(p,v)} \text{ is linearly independent}\right\} . \end{aligned}$$

This set is an open subset of \({\mathbb {R}}^{\ell }\times {\mathbb {R}}^{n}\) thanks to the continuity of \(\varDelta \) on the open subset \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\).

For \((p,v)\in \varPi \), the constraints of the optimization problem satisfy the Linear Independence Constraint Qualification (LICQ).Footnote 14 Thus, the multipliers are unique, and the following definition makes sense. For \((p,v) \in \varPi \), we proceed to define the set K(p, v) by:

$$\begin{aligned} K(p,v):=\left\{ k \in \lbrace 1,\ldots ,n\rbrace : \lambda _{k}(p,v)> 0\right\} . \end{aligned}$$

From Proposition 3.1, one deduces that this set is nonempty. The cardinality of K(p, v) is denoted by \(\kappa (p,v)\).

Finally, the set \(\tilde{\varPi }\) is defined as follows:

$$\begin{aligned} \tilde{\varPi }:=\left\{ (p,v)\in \varPi :K(p,v)=M(p,v)\right\} . \end{aligned}$$

Proposition 4.2

The function \(\varDelta \) and the multipliers \((\lambda _{k})_{k=1}^{n}\) are locally Lipschitz continuous on \(\varPi \). Hence, the function \(\varDelta \) is differentiable almost everywhere on \(\varPi \).

Proof

This proposition is a consequence of Corollary 2.3 in Cornet and Vial [6]. We prove that the function \(\varDelta \) is locally Lipschitz on \(\varPi \) by verifying that Assumptions (A.0), (C.1) and (C.2) of Corollary 2.3 of [6] are satisfied. We define on \({\mathbb {R}}^{\ell }_{++}\times \varPi \) the following functions:

  • \(f(x,p,v):=p\cdot x\),

  • \(g_{k}(x,p,v):= v_{k}-u_{k}(x)\) for \( k \in \lbrace 1,\ldots , n\rbrace \).

For \((p,v)\in \varPi , \varDelta (p,v)\) is the solution of the problem:

$$\begin{aligned}&\min f(x,p,v)\nonumber \\&\text{ s.t. } \nonumber \\&g_{k}(x,p,v)\le 0, \quad k=1,\ldots ,n \end{aligned}$$
(6)

Here, x is the variable and (p, v) is the vector of parameters.

Assumptions (A.0) are satisfied. Indeed, we take \(U= {\mathbb {R}}^{\ell }_{++}\) and \(P=\varPi \). The set U is an open set, and the set P is obviously a metric space. So Assumption (A.0) (i) is satisfied. Assumptions (A.0) (ii), (iii), (iv) and (v) are satisfied because the functions at stake are \(C^{2}\) on the set \(U \times P\). Assumption (A.0) (vi) is satisfied with \(Q=C=-{\mathbb {R}}_{+}^{n}\).

Assumption (C.1) is satisfied. This is an immediate consequence of the definition of \(\varPi \).

Assumption (C.2) is satisfied. Let x be a solution of Problem (6) with a corresponding multiplier \(\lambda :=(\lambda _{k})_{k=1}^{n}\). We shall verify that for all \(h \in {\mathbb {R}}^{\ell }, h\ne 0\) such that: \(\nabla f(x,p,v) \cdot h=0\) and \(\nabla g_{k}(x,p,v)\cdot h=0\) for \(k\in K(p,v)\), we have:Footnote 15

$$\begin{aligned} \left[ D^{2} f(x,p,v)+\sum _{k\in K(p,v)}\lambda _{k}D^{2}g_{k}(x,p,v)\right] h \cdot h >0. \end{aligned}$$

Observing that \(D^{2} f\equiv 0\), it remains to show that:

$$\begin{aligned} \sum _{k\in K(p,v)}\lambda _{k}D^{2}g_{k}(x,p,v)h \cdot h >0, \end{aligned}$$

that is,

$$\begin{aligned} -\sum _{k\in K(p,v)}\lambda _{k}D^{2}u_{k}(x)h\cdot h >0, \end{aligned}$$

which is true because of Assumption 2.1 and because \(\nabla u_{k}(x)\cdot h=0\) for \(k\in K(p,v).\) Footnote 16

According to Corollary 2.3 in [6], the function \(\varDelta \) is locally Lipschitz on \(\varPi \). Thanks to Rademacher’s Theorem, the function \(\varDelta \) is almost everywhere differentiable on \(\varPi \). \(\square \)

5 Differential Properties of \(\varDelta \)

In this section, we study the continuous differentiability of \(\varDelta \). We conclude with a Slutsky-type property.

5.1 Continuous Differentiability of \(\varDelta \)

Proposition 5.1

If \((\bar{p},\bar{v}) \in \tilde{\varPi }\), then \(\varDelta \) is continuously differentiable on a neighborhood of \((\bar{p},\bar{v})\).

Proof

This proof is essentially an application of the Implicit Function Theorem and is quite standard, borrowing ideas from Fiacco and McCormick [7]. Without loss of generality, suppose that: \(M(\bar{p},\bar{v})=\lbrace 1,\ldots ,r\rbrace \). Observe that, in light of the continuity of both \(\varDelta \) and the utility functions \((u_{k})_{k=1}^{n}\), we can neglect the nonbinding constraints. Moreover, by continuity of the positive multipliers, one has: \(M(p,v)=M(\bar{p},\bar{v})\) on a neighborhood of \((\bar{p},\bar{v})\). As shown above, since the first-order optimality conditions are necessary and sufficient and in light of the continuity of the functions \((u_{k})_{k=1}^{n}\), the element \(\varDelta (p,v)\) and the corresponding multipliers \(\lambda (p,v)\) are the solution of the equation \(G(x,\lambda ,p,v)=0\) where G is defined by:

$$\begin{aligned} G(x,\lambda ,p,v)= \left\{ \begin{array}{ll} p-\sum \limits _{k =1}^{r} \lambda _{k}\nabla u_{k}(x) \\ u_{k}(x)-v_{k}, \quad k=1,\ldots ,r\\ \end{array} \right. \end{aligned}$$
(7)

To show that the function \(\varDelta \) and the multipliers are continuously differentiable on a neighborhood of \((\bar{p},\bar{v})\), from the Implicit Function Theorem, it suffices to show that the partial Jacobian matrix of G with respect to \((x,\lambda )\) has full column rankFootnote 17 at \(\bar{x}:=\varDelta (\bar{p},\bar{v})\). This matrix is equal to:

$$\begin{aligned} M:=\begin{bmatrix} -\sum \nolimits _{k =1}^{r} \bar{\lambda }_{k} D^{2} u_{k}(\bar{x})&-\nabla u_{1}(\bar{x})&\cdots&\cdots&-\nabla u_{r}(\bar{x}) \\ \nabla u_{1}(\bar{x})^{T}&0&\cdots&\cdots&0 \\ \vdots&\vdots&\vdots&\vdots&\vdots \\ \nabla u_{r}(\bar{x})^{T}&0&\cdots&\cdots&0 \end{bmatrix}. \end{aligned}$$

It is sufficient to prove that \(M\begin{pmatrix} \varDelta x \\ \varDelta \lambda \end{pmatrix} =0\) implies \(\varDelta x=0\) and \(\varDelta \lambda =0\), where \(\varDelta x\) is a column vector of dimension \(\ell \) and \(\varDelta \lambda \) is a column vector of dimension r. We have to solve the system:

$$\begin{aligned}&\displaystyle -\sum _{k =1}^{r} \bar{\lambda }_{k} D^{2}u_{k}(\bar{x})\varDelta x -\sum _{k=1}^{r}\varDelta \lambda _{k} \nabla u_{k}(\bar{x})=0 \nonumber \\&\nabla u_{k}(\bar{x})\cdot \varDelta x=0, \quad \forall k=1,\ldots ,r\\ \end{aligned}$$

Multiplying the first line by \((\varDelta x)^{T}\), one has that:

$$\begin{aligned}&\displaystyle -\sum _{k =1}^{r} \bar{\lambda }_{k} (\varDelta x)^{T} D^{2}u_{k}(\bar{x})\varDelta x -\sum _{k=1}^{r}\varDelta \lambda _{k} \nabla u_{k}(\bar{x})\cdot \varDelta x=0 \nonumber \\&\nabla u_{k}(\bar{x})\cdot \varDelta x=0, \quad \forall k=1,\ldots ,r \end{aligned}$$

whence

$$\begin{aligned}&\displaystyle -\sum _{k=1}^{r} \bar{\lambda }_{k} (\varDelta x)^{T} D^{2}u_{k}(\bar{x})\varDelta x=0 \nonumber \\&\nabla u_{k}(\bar{x})\cdot \varDelta x=0, \quad \forall k \in \lbrace 1,\ldots ,r \rbrace \end{aligned}$$

For all \(k \in \lbrace 1,\ldots ,r\rbrace \), one has \(\bar{\lambda }_{k}>0\) since \((\bar{p},\bar{v})\in \tilde{\varPi }\), and \(D^{2}u_{k}(\bar{x})\) is negative definite on \(\nabla u_{k}(\bar{x})^{\perp }\); since \(\varDelta x\in \nabla u_{k}(\bar{x})^{\perp }\), we find: \(\varDelta x=0\). Hence, the first equation becomes:

$$\begin{aligned} \displaystyle -\sum _{k=1}^{r}\varDelta \lambda _{k} \nabla u_{k}(\bar{x})=0 \end{aligned}$$

and we conclude that \(\varDelta \lambda =0\) since \((\bar{p},\bar{v}) \in \varPi \), i.e., the gradients \((\nabla u_{k}(\bar{x}))_{k=1}^{r}\) are linearly independent. \(\square \)
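As an illustration of this argument on the assumed data used in the earlier sketches, one may build the bordered matrix M at a computed solution and check numerically that it is nonsingular; the multipliers are recovered by nonnegative least squares as before. This is a sketch under illustrative assumptions, not part of the proof.

```python
# Illustration (assumed data) of the key step in the proof of Proposition 5.1:
# the bordered matrix M, built at a computed solution, should be nonsingular.
import numpy as np
from scipy.optimize import minimize, nnls

p = np.array([1.0, 2.0, 1.5])
v = np.array([1.1, 1.2])
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])
u = lambda k, x: A[k] @ np.sqrt(x)
grad_u = lambda k, x: A[k] / (2.0 * np.sqrt(x))
hess_u = lambda k, x: np.diag(-A[k] / (4.0 * x ** 1.5))      # D^2 u_k for this example

cons = [{'type': 'ineq', 'fun': (lambda x, k=k: u(k, x) - v[k])} for k in range(2)]
x_bar = minimize(lambda x: p @ x, np.ones(3), method='SLSQP',
                 constraints=cons, bounds=[(1e-9, None)] * 3).x
lam, _ = nnls(np.column_stack([grad_u(k, x_bar) for k in range(2)]), p)

binding = [k for k in range(2) if u(k, x_bar) - v[k] < 1e-6]  # the set M(p, v)
grads = np.column_stack([grad_u(k, x_bar) for k in binding])  # ell x r matrix
top = np.hstack([-sum(lam[k] * hess_u(k, x_bar) for k in binding), -grads])
bottom = np.hstack([grads.T, np.zeros((len(binding), len(binding)))])
M = np.vstack([top, bottom])
print("binding constraints:", binding, "  det(M) =", round(np.linalg.det(M), 6))
```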

5.2 Slutsky-Type Property

The next result is a generalization of the well-known result about the negative semi-definiteness of the Slutsky matrix.

Proposition 5.2

Suppose that \((\bar{p},\bar{v}) \in \tilde{\varPi }\). The matrix \(D_p\varDelta (\bar{p}, \bar{v})\) has rank \(\ell - \kappa (\bar{p},\bar{v})\), and its kernel is the linear space \({\mathcal {L}}\left( \nabla u^{k}(\bar{x}) \text{, } k\in M(\bar{p},\bar{v})\right) \) spanned by the family \(\left( \nabla u_{k}(\bar{x})\right) _{ k \in M(\bar{p},\bar{v})}\) where \(\bar{x}:=\varDelta (\bar{p},\bar{v})\).

Proof

According to Proposition 4.1, \(D_{p} \varDelta (\bar{p},\bar{v})\) defines a symmetric negative semi-definite bilinear form. Observe that by continuity of \(\varDelta \), we can neglect the nonbinding constraints and, by continuity of the positive multipliers, \(M(p,v)=M(\bar{p},\bar{v})\) on a neighborhood of \((\bar{p},\bar{v})\).

Without loss of generality, suppose that \(M(\bar{p},\bar{v})=\lbrace 1,\ldots ,r\rbrace \). For \(p \in {\mathbb {R}}^{\ell }_{++}\) sufficiently close to \(\bar{p}\), \(\varDelta (p,\bar{v})\) is characterized by the first-order conditions:Footnote 18

$$\begin{aligned}&u_{k}(\varDelta (p,\bar{v}))=\bar{v}_{k}, \quad k= 1,\ldots ,r\nonumber \\&p=\sum _{k=1}^{r}\alpha _{k}(p)\nabla u_{k}(\varDelta (p,\bar{v})) \text{ with } \alpha _{k}(p)>0, \quad k=1,\ldots ,r \end{aligned}$$
(8)

We proceed to differentiate the first condition with respect to p and readily obtain at \(\bar{p}\) for all \(q \in {\mathbb {R}}^{\ell }\):

$$\begin{aligned} \nabla u_{k}(\varDelta (\bar{p},\bar{v})) \cdot D_p \varDelta (\bar{p},\bar{v})(q)=\nabla u_{k}(\bar{x}) \cdot D_p \varDelta (\bar{p},\bar{v})(q) = 0, \quad \forall k=1,\dots ,r. \end{aligned}$$

From these equalities, we readily deduce that the image of \( D_p\varDelta (\bar{p}, \bar{v})\) is contained in the linear subspace \(\displaystyle \cap _{k=1}^{r}\nabla u_{k}(\bar{x})^{\perp }\) of dimension \(\ell - r\), recalling that \((\bar{p},\bar{v})\) belongs to \(\varPi \). Furthermore, since \(D_p \varDelta (\bar{p}, \bar{v})\) defines a symmetric negative semi-definite bilinear form, \(\nabla u_k (\bar{x}) \) belongs to the kernel of \(D_p \varDelta (\bar{p}, \bar{v})\) for all \(k=1,\dots ,r\). Thus, the dimension of the image of \(D_p\varDelta (\bar{p}, \bar{v})\) is at most \(\ell -r\). Differentiating the second condition with respect to p, for \(q \in {\mathbb {R}}^{\ell }\), we find:

$$\begin{aligned} \displaystyle q=\sum _{k=1}^{r}\alpha _{k}(p)D^{2}u_{k}(\varDelta (p,\bar{v}))D_p\varDelta (p,\bar{v})(q)+\sum _{k=1}^{r}(\nabla \alpha _{k}(p)\cdot q)\nabla u_{k}(\varDelta (p,\bar{v})). \end{aligned}$$

Evaluate the previous equation at \(\bar{p}\) and let \(q \in \cap _{k=1}^{r}\nabla u_{k}(\bar{x})^{\perp }\) be such that \(D_p\varDelta (\bar{p},\bar{v})(q)=0\). We get:

$$\begin{aligned} \displaystyle q=\sum _{k=1}^{r}(\nabla \alpha _{k}(\bar{p})\cdot q)\nabla u_{k}(\bar{x}), \end{aligned}$$

so that q belongs to \({\mathcal {L}}\left( \nabla u_{k}(\bar{x}) \text{, } k=1,\dots ,r\right) \). Since q also belongs to the orthogonal complement \(\cap _{k=1}^{r} \nabla u_{k}(\bar{x})^{\perp }\) of this space, we conclude that \(q=0\). Thus, we have for \( q \in \cap _{k=1}^{r} \nabla u_{k}(\bar{x})^{\perp }\):

$$\begin{aligned} D_p\varDelta (\bar{p},\bar{v})(q)=0 \Longrightarrow q=0. \end{aligned}$$

So the kernel of the restriction to \( \cap _{k=1}^{r} \nabla u_{k}(\bar{x})^{\perp }\) of \(D_{p}\varDelta (\bar{p},\bar{v})\) is reduced to zero. As a consequence, the rank of \(D_{p}\varDelta (\bar{p}, \bar{v})\) is at least \(\ell -r\). Finally, the rank of \(D_{p}\varDelta (\bar{p}, \bar{v})\) is equal to \(\ell -r\), and, by a dimension argument, the kernel of \(D_{p}\varDelta (\bar{p}, \bar{v})\) is equal to \({\mathcal {L}}\left( \nabla u_{k}(\bar{x}) \text{, } k=1,\dots ,r\right) \). \(\square \)
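On the same assumed data, the content of Proposition 5.2 can be visualized by approximating \(D_p\varDelta (p,v)\) with central finite differences: the number of singular values close to zero should match \(\kappa (p,v)\), and the images of the binding gradients should be close to zero. This is a numerical sketch only, not a substitute for the proof.

```python
# Finite-difference illustration of Proposition 5.2 on assumed data: the Jacobian
# D_p Delta(p,v) should have rank ell - kappa, and the gradients of the binding
# utilities should lie (approximately) in its kernel.
import numpy as np
from scipy.optimize import minimize

p = np.array([1.0, 2.0, 1.5])
v = np.array([1.1, 1.2])
A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.3, 0.5]])
u = lambda k, x: A[k] @ np.sqrt(x)
grad_u = lambda k, x: A[k] / (2.0 * np.sqrt(x))

def delta(price):
    cons = [{'type': 'ineq', 'fun': (lambda x, k=k: u(k, x) - v[k])} for k in range(2)]
    return minimize(lambda x: price @ x, np.ones(3), method='SLSQP',
                    constraints=cons, bounds=[(1e-9, None)] * 3,
                    options={'ftol': 1e-12, 'maxiter': 500}).x

x_bar, h = delta(p), 1e-4
J = np.column_stack([(delta(p + h * np.eye(3)[j]) - delta(p - h * np.eye(3)[j])) / (2 * h)
                     for j in range(3)])        # approximation of D_p Delta(p, v)

binding = [k for k in range(2) if u(k, x_bar) - v[k] < 1e-6]
print("ell - kappa =", 3 - len(binding))
print("singular values of J (small values <-> kernel):",
      np.round(np.linalg.svd(J, compute_uv=False), 4))
for k in binding:
    print("J @ grad u_%d(x_bar) =" % k, np.round(J @ grad_u(k, x_bar), 3))
```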

6 Perspectives

Two important questions concern, respectively, the sets \(\varPi \) and \(\tilde{\varPi }\). We wish to know under which conditions on the functions \((u_{k})_{k=1}^{n}\) the set \(\varPi \) is “big” from a topological or measure-theoretical point of view. We already know that this set is an open set, and legitimate questions are under which conditions this set is dense and under which conditions it has full Lebesgue measure. When the number of constraints n is equal to 1, we obviously have: \(\varPi ={\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\). In view of the applications, we provide another framework in which the equality \(\varPi ={\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\) holds true. Suppose that, for \(k=1,\ldots ,n\),

$$\begin{aligned} u_{k}(x):=\sum _{h=1}^{\ell }a_{kh}b_{h}(x_{h}) \end{aligned}$$

where the functions \((b_{h})_{h=1}^{\ell }\) are twice continuously differentiable real-valued functions such that: \(b_{h}^\prime >0 \), \(b_{h}^{\prime \prime }<0\), \(\lim _{x_{h}\longrightarrow 0} b_{h}^{\prime }(x_{h})=+\infty \), and \(\sum _{h=1}^{\ell }a_{kh}=1\) for all \(k \in \lbrace 1,\ldots ,n\rbrace \). As shown in [16], the only requirement is that \(A:=(a_{kh})_{1 \le k\le n,1\le h\le \ell }\) has full row rank. Such a framework can be related to decision theory through the Expected Utility Function (if \(b_{h}:=b\) for all \(h \in \lbrace 1 ,\ldots , \ell \rbrace \)) or to separable preferences in microeconomics. We refer to [9] for a discussion of the Expected Utility Function and to [1] for a presentation of separable preferences. As a consequence, in these cases, the generalized Hicksian demand \(\varDelta \) is locally Lipschitz on the whole set \({\mathbb {R}}^{\ell }_{++}\times {\mathcal {V}}\).
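For instance, with \(\ell =3\), \(n=2\) and \(b_{h}(x_{h}):=\sqrt{x_{h}}\) for all h, the specification (chosen here purely for illustration, and used in the numerical sketches of the previous sections)

$$\begin{aligned} u_{1}(x)=0.5\sqrt{x_{1}}+0.3\sqrt{x_{2}}+0.2\sqrt{x_{3}}, \qquad u_{2}(x)=0.2\sqrt{x_{1}}+0.3\sqrt{x_{2}}+0.5\sqrt{x_{3}} \end{aligned}$$

fits this framework: the corresponding matrix A has full row rank, so that \(\varPi ={\mathbb {R}}^{3}_{++}\times {\mathcal {V}}\) and \(\varDelta \) is locally Lipschitz on this whole set.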

Similarly, we wish to find conditions under which the set \(\tilde{\varPi }\) is dense in \(\varPi \). Under such conditions, the generalized Hicksian demand \(\varDelta \) would be locally Lipschitz on the open set \(\varPi \) and continuously differentiable on its open dense subset \(\tilde{\varPi }\).

7 Conclusions

Existence and uniqueness of the solution of Problem (1) are established under mild assumptions. Without any additional assumption, the continuity of the solution with respect to the parameters is obtained. The Lipschitz behavior, on the other hand, only requires the Linear Independence Constraint Qualification (LICQ). We point out that this analysis can be carried out because we restricted ourselves to the relevant levels of utility. If, in addition, Strict Complementary Slackness holds at a point, the solution is continuously differentiable on a neighborhood of this point. Remarkably, a Slutsky-type property is obtained in this case.