1 Introduction

In this paper, we study optimal stochastic singular control for systems driven by nonlinear controlled stochastic differential equations of mean-field type, also called McKean–Vlasov equations:

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}x^{u,\eta }(t)=f\left( t,x^{u,\eta }(t),\mathbb {E}(x^{u,\eta }(t)),u(t)\right) \mathrm{d}t+\sigma \left( t,x^{u,\eta }(t),\mathbb {E}(x^{u,\eta }(t)),u(t)\right) \mathrm{d}W(t)+G(t)\mathrm{d}\eta (t),\\ x^{u,\eta }(s)=\zeta , \end{array} \right. \end{aligned}$$
(1.1)

for some functions \(f,\sigma \), and \(G\). Note that the mean-field dynamics (1.1) are obtained as the mean-square limit, as \(n\rightarrow +\infty \), of a system of interacting particles of the form

$$\begin{aligned} \mathrm{d}x_{n}^{k,u,\eta }(t)&= f\left( t,x_{n}^{k,u,\eta }(t),\frac{1}{n} \sum _{i=1}^{n}x_{n}^{i,u,\eta }(t),u(t)\right) \mathrm{d}t\\&+\,\sigma \left( t,x_{n}^{k,u,\eta }(t), \frac{1}{n}\sum _{i=1}^{n}x_{n}^{i,u,\eta }(t),u(t)\right) \mathrm{d}W^{k}(t)\\&+\,G(t)\mathrm{d}\eta (t), \end{aligned}$$

where \(\left( W^{k}(\cdot ):k\ge 1\right) \) is a collection of independent Brownian motions. The expected cost to be minimized over the class of admissible controls is also of mean-field type, which has the form

$$\begin{aligned} J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) =\mathbb {E}\left[ h(x^{u,\eta }(T),\mathbb {E}( x^{u,\eta }(T)))+\int \limits _{s}^{T}\ell (t,x^{u,\eta }(t),\mathbb {E}\left( x^{u,\eta }(t)\right) ,u(t))\mathrm{d}t+\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}\eta (t)\right] , \end{aligned}$$
(1.2)

where the initial time \(s\) and the initial state \(\zeta \) of the system are fixed. Any admissible control \((u^{*}(\cdot ),\eta ^{*}(\cdot ))\) satisfying

$$\begin{aligned} J^{^{s,\zeta }}\left( u^{*}(\cdot ),\eta ^{*}(\cdot )\right) =\min _{(u(\cdot ),\eta (\cdot ))\in \mathcal {U}_{1}\times \mathcal {U} _{2}}J^{^{s,\zeta }}\left( u(\cdot ),\eta (\cdot )\right) , \end{aligned}$$
(1.3)

is called an optimal control. The corresponding state process, solution of MFSDE-(1.1), is denoted by \(x^{*}(\cdot )=x^{u^{*},\eta ^{*}}(\cdot )\).
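As a numerical illustration of the particle approximation above (not part of the paper's analysis), the \(n\)-particle system can be simulated by an Euler–Maruyama scheme in which the theoretical mean \(\mathbb {E}(x(t))\) is replaced by the empirical average \(\frac{1}{n}\sum _{i}x^{i}(t)\). The linear drift, constant diffusion, and constant control below are toy choices of ours, with no singular part (\(\eta \equiv 0\)):

```python
import numpy as np

# Toy coefficients (illustrative only -- not the paper's f, sigma):
def f(t, x, m, u):      # drift depends on the state x and the empirical mean m
    return -x + 0.5 * m + u

def sigma(t, x, m, u):  # constant diffusion coefficient
    return 0.3

def simulate_particles(n=2000, steps=200, s=0.0, T=1.0, zeta=1.0, u=0.1, seed=0):
    """Euler-Maruyama for the n-particle system (no singular part, eta = 0)."""
    rng = np.random.default_rng(seed)
    dt = (T - s) / steps
    x = np.full(n, zeta)
    for k in range(steps):
        t = s + k * dt
        m = x.mean()                          # (1/n) sum_i x_i, approximates E[x(t)]
        dW = rng.normal(0.0, np.sqrt(dt), n)  # increments of the independent W^k
        x = x + f(t, x, m, u) * dt + sigma(t, x, m, u) * dW
    return x

x_T = simulate_particles()
print(x_T.mean())   # empirical approximation of E[x(T)]
```

For these coefficients the mean solves the ODE \(\dot{m}=-0.5m+0.1\), \(m(0)=1\), so the printed value should stabilize near \(0.8e^{-0.5}+0.2\approx 0.685\) as \(n\) grows, reflecting the mean-square limit described above.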

The stochastic maximum principle for singular control has been considered by many authors; see for instance [2–4, 9–11, 13, 14, 16]. The first version of the maximum principle for singular stochastic control problems was obtained by Cadenillas et al. [9]. The first-order weak stochastic maximum principle was studied in [4]. In [11], the authors derived a stochastic maximum principle where the singular part has a linear form. Sufficient conditions for the existence of optimal singular controls were studied in Dufour et al. [10]. Necessary and sufficient conditions for near-optimal singular control were obtained by Hafayed et al. [13]. For this type of problem, the reader may consult the papers by Haussmann et al. [14] and the references therein.

Many authors have contributed to SDEs of mean-field type and their applications; see for instance [1, 6–8, 12, 15, 17, 18, 20, 21]. The mean-field stochastic maximum principle of optimality was considered by many authors; see for instance [7, 17, 18, 21]. In Buckdahn et al. [8], the authors introduced mean-field backward stochastic differential equations. In Buckdahn et al. [7], a general maximum principle was established for a class of stochastic control problems involving MFSDEs; the authors obtained a stochastic maximum principle that differs from the classical one in that the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's [19] stochastic maximum principle. In Meyer-Brandis et al. [18], a stochastic maximum principle of optimality for systems governed by controlled Itô–Lévy processes of mean-field type is proved using Malliavin calculus. The local maximum principle of optimality for the mean-field stochastic control problem was derived by Li [17]. The linear-quadratic optimal control problem for MFSDEs was studied by Yong [21]. The maximum principle for mean-field jump diffusion processes was studied in Hafayed et al. [12].

Our main goal in this paper is to derive a set of necessary as well as sufficient conditions for optimal singular stochastic control of the mean-field problem (1.1)–(1.2). Since the control domain is assumed to be convex, we follow the standard approach to deriving necessary and sufficient conditions for optimal controls and use convex perturbations for both the continuous and the singular parts of the control process. The problem under consideration, in which the coefficients depend on the marginal probability law of the solution, is not merely a simple extension from the mathematical point of view, but also provides interesting models for applications. To streamline the presentation, we only study the one-dimensional case.

The rest of the paper is organized as follows. Section 2 begins with a general formulation of a mean-field singular control problem and gives the notations and assumptions used throughout the paper. In Sects. 3 and 4 we establish our necessary and sufficient conditions of optimality, respectively. In the last section an example is given to illustrate the theoretical results.

2 Assumptions and Statement of the Control Problem

We consider a mean-field stochastic singular control problem of the following kind. Let \(T\) be a fixed strictly positive real number and \( (\Omega ,\mathcal {F},\left\{ \mathcal {F}_{t}\right\} _{t\in \left[ s,T\right] },\mathbb {P})\) be a filtered probability space satisfying the usual conditions, on which a one-dimensional Brownian motion \(W(\cdot )=\left\{ W(t):s\le t\le T\right\} \) with \(W(s)=0\) is defined. Let \(\mathbb {A}_{1}\) be a closed convex subset of \(\mathbb {R}\) and \(\mathbb {A}_{2}:=\left[ 0,\infty \right) .\) Let \(\mathcal {U}_{1}\) be the class of measurable, \( \mathcal {F}_{t}\)-adapted processes \(u(\cdot ):\left[ s,T\right] \times \Omega \rightarrow \mathbb {A}_{1}\), and let \(\mathcal {U}_{2}\) be the class of measurable, \(\mathcal {F}_{t}\)-adapted processes \(\eta (\cdot ):\left[ s,T \right] \times \Omega \rightarrow \mathbb {A}_{2}\).

Since the objective of this paper is to study optimal singular stochastic control, we give here the precise definition of the singular part of an admissible control.

Definition 2.1

An admissible control is a pair \(\left( u(\cdot ),\eta (\cdot )\right) \) of measurable \(\mathbb {A} _{1}\times \mathbb {A}_{2}\)-valued, \(\mathcal {F}_{t}-\) adapted processes, such that

  1. \(\eta (\cdot )\) is of bounded variation, nondecreasing, continuous on the left with right limits, and \(\eta (s)=0;\)

  2. \(\mathbb {E}\left[ \sup _{t\in \left[ s,T\right] }\left| u(t)\right| ^{2}+\left| \eta (T)\right| ^{2}\right] <\infty .\)

We denote by \(\mathcal {U}_{1}\times \mathcal {U}_{2}\) the set of all admissible controls. Note that since \(\mathrm{d}\eta (t)\) may be singular with respect to the Lebesgue measure \(\mathrm{d}t,\) we call \(\eta (\cdot )\) the singular part of the control and the process \(u(\cdot )\) its absolutely continuous part.
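In discrete time, the singular part \(\eta (\cdot )\) of Definition 2.1 can be represented by its nonnegative increments on a grid; admissibility then amounts to checking that the resulting path is nondecreasing with \(\eta (s)=0\). A minimal sketch (the helper name and grid are ours, not the paper's):

```python
import numpy as np

def make_eta(increments):
    """Build a nondecreasing path eta with eta(s) = 0 from nonnegative increments."""
    increments = np.asarray(increments, dtype=float)
    if np.any(increments < 0):
        raise ValueError("d(eta) must be nonnegative: eta is nondecreasing")
    return np.concatenate([[0.0], np.cumsum(increments)])  # eta(s) = 0

# Isolated jumps encode the singular (non-absolutely-continuous) behaviour.
eta = make_eta([0.0, 0.5, 0.0, 1.2])
print(eta)
assert np.all(np.diff(eta) >= 0) and eta[0] == 0.0
```

The moment condition \(\mathbb {E}\left| \eta (T)\right| ^{2}<\infty \) is automatic here since the path is deterministic and finite.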

We denote by \(\mathbb {L}_{\mathcal {F}}^{2}\left( \left[ s,T\right] ;\mathbb {R}\right) =\left\{ \Phi (\cdot ):=\Phi (t,w)\text { is an }\mathcal {F }_{t}\text {-adapted }\mathbb {R}\text {-valued}\right. \) \(\left. \text {measurable process on }\left[ s,T\right] \text { such that }\mathbb {E}\left( \int \nolimits _{s}^{T}\left| \Phi (t)\right| ^{2}\mathrm{d}t\right) <\infty \right\} .\) We denote by \(\chi _{\mathcal {R}}\) the indicator function of \(\mathcal {R}\). In what follows, \(C\) represents a generic constant, which can differ from line to line.

Conditions: Throughout this paper we assume the following.

(H1):

The functions \(f,\sigma ,\ell :\left[ s,T\right] \times \mathbb {R}\times \mathbb {R\times \mathbb {A}}_{1}\mathbb {\rightarrow R}\), and \(h:\mathbb {R}\times \mathbb {R\rightarrow R}\) are continuously differentiable with respect to \(\left( x,\tilde{x},u\right) \). Moreover, \(f,\sigma ,h\), and \(\ell \) and all their derivatives with respect to \(\left( x,\tilde{x} ,u\right) \) are continuous and bounded.

(H2):

The functions \(G:\left[ s,T\right] \rightarrow \mathbb {R}\) and \(K:\left[ s,T\right] \rightarrow \left[ 0,\infty \right) \) are continuous on \([s,T]\); moreover, \(G\) is bounded.

Under the above assumptions, the MFSDE-(1.1) has a unique strong solution \(x^{u,\eta }(t)\), which is given by

$$\begin{aligned} x^{u,\eta }(t)&= \zeta +\int \limits _{s}^{t}f\left( r,x^{u,\eta }(r),\mathbb {E}(x^{u,\eta }(r)),u(r)\right) \mathrm{d}r \\&+\,\int \limits _{s}^{t}\sigma \left( r,x^{u,\eta }(r),\mathbb {E}(x^{u,\eta }(r)),u(r)\right) \mathrm{d}W(r) \\&+\,\int \limits _{\left[ s,t\right] }G(r)\mathrm{d}\eta (r). \end{aligned}$$

Moreover, by standard arguments it is easy to show that for any \(p>0\), it holds that

$$\begin{aligned} \mathbb {E}\left[ \sup _{t\in \left[ s,T\right] }\left| x^{u,\eta }(t)\right| ^{p}\right] <C_{p}, \end{aligned}$$
(2.1)

where \(C_{p}\) is a constant depending only on \(p\); in particular, the functional \( J^{s,\zeta }\) is well defined.
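The strong solution above can be approximated by a Monte Carlo Euler scheme in which \(\mathbb {E}(x(t))\) is replaced by the sample mean across simulated paths and the singular term \(G(t)\mathrm{d}\eta (t)\) enters as a deterministic increment shared by all paths, as in (1.1). The coefficients and the single unit jump of \(\eta \) below are illustrative choices of ours:

```python
import numpy as np

def euler_mfsde(f, sigma, G, d_eta, u, s=0.0, T=1.0, zeta=0.0,
                n_paths=5000, steps=100, seed=1):
    """Euler scheme for dx = f dt + sigma dW + G d(eta); E(x) ~ sample mean."""
    rng = np.random.default_rng(seed)
    dt = (T - s) / steps
    x = np.full(n_paths, zeta)
    for k in range(steps):
        t = s + k * dt
        m = x.mean()                                # stands in for E(x(t))
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        x = (x + f(t, x, m, u(t)) * dt + sigma(t, x, m, u(t)) * dW
               + G(t) * d_eta(k))                   # singular increment, same for all paths
    return x

# Toy data (ours, not the paper's): one unit jump of eta at the midpoint.
x_T = euler_mfsde(f=lambda t, x, m, u: -x + m,
                  sigma=lambda t, x, m, u: 0.2,
                  G=lambda t: 1.0,
                  d_eta=lambda k: 1.0 if k == 50 else 0.0,
                  u=lambda t: 0.0)
print(np.mean(np.abs(x_T) ** 2))   # finite second moment, cf. (2.1)
```

The drift \(-x+m\) keeps the mean constant between jumps, so after the unit jump the empirical second moment settles slightly above \(1\), consistent with the moment bound (2.1).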

We define the usual Hamiltonian associated with the mean-field stochastic control problem (1.1)–(1.2) as follows:

$$\begin{aligned} H\left( t,x,\tilde{x},u,\Psi (t),Q(t)\right) {=}\Psi (t)f\left( t,x, \tilde{x},u\right) +Q(t)\sigma \left( t,x,\tilde{x},u\right) +\ell \left( t,x,\tilde{x},u\right) ,\nonumber \\ \end{aligned}$$
(2.2)

where \((t,x,\tilde{x},u)\in [s,T]\times \mathbb {R}\times \mathbb {R}\times \mathbb {A}_{1}\), \(x\) is a random variable such that \(x\in \mathbb {L}^{1}\left( \Omega ,\mathcal {F},\mathbb {R}\right) \), and \(\left( \Psi (\cdot ),Q(\cdot )\right) \in \mathbb {R\times R}\) is given by the mean-field BSDE-(2.3).
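Since (2.2) is a plain pointwise expression, it can be coded directly; the coefficients `f`, `sigma`, `ell` below are placeholders for the problem data, and the sample values are ours:

```python
def hamiltonian(t, x, x_tilde, u, psi, q, f, sigma, ell):
    """H(t, x, x~, u, Psi, Q) = Psi*f + Q*sigma + ell, cf. (2.2)."""
    return (psi * f(t, x, x_tilde, u)
            + q * sigma(t, x, x_tilde, u)
            + ell(t, x, x_tilde, u))

# Toy coefficients (ours): f = -x + u, sigma = 0.3, ell = u^2 / 2.
H = hamiltonian(0.0, 1.0, 1.0, 0.5, 2.0, 1.0,
                f=lambda t, x, xt, u: -x + u,
                sigma=lambda t, x, xt, u: 0.3,
                ell=lambda t, x, xt, u: 0.5 * u ** 2)
print(H)   # 2*(-0.5) + 1*0.3 + 0.125 = -0.575
```

In the maximum principle below, only the derivative \(H_{u}\) along the optimal trajectory enters the variational inequality (3.1).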

We introduce the adjoint equations involved in the stochastic maximum principle for our singular mean-field control problem. The adjoint equation turns out to be a linear mean-field BSDE. So for any \(\left( u(\cdot ),\eta (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal {U}_{2}\) and the corresponding state trajectory \(x(t)=x^{u,\eta }(t)\), we consider the following adjoint equations

$$\begin{aligned} \left\{ \begin{array}{l} -\mathrm{d}\Psi (t)=\left\{ f_{x}\left( t,x(t),\mathbb {E}(x(t)),u(t)\right) \Psi (t)+ \mathbb {E}\left[ f_{\tilde{x}}\left( t,x(t),\mathbb {E}(x(t)),u(t)\right) \Psi (t)\right] \right. \\ \qquad \qquad \qquad +\sigma _{x}\left( t,x(t),\mathbb {E} (x(t)),u(t)\right) Q(t)+\mathbb {E}\left[ \sigma _{\tilde{x}}\left( t,x(t),\mathbb {E}(x(t)),u(t)\right) Q(t)\right] \\ \qquad \qquad \qquad +\left. \ell _{x}\left( t,x(t),\mathbb {E} (x(t)),u(t)\right) +\mathbb {E}\left[ \ell _{\tilde{x}}\left( t,x(t), \mathbb {E}(x(t)),u(t)\right) \right] \right\} \mathrm{d}t\\ \qquad \qquad \qquad -Q(t)\mathrm{d}W(t),\\ \Psi (T)=h_{x}\left( x(T),\mathbb {E}(x(T))\right) +\mathbb {E}\left[ h_{ \tilde{x}}\left( x(T),\mathbb {E}(x(T))\right) \right] . \end{array} \right. \end{aligned}$$
(2.3)

If we denote by

$$\begin{aligned} H\left( t\right) :=H\left( t,x(t),\tilde{x}(t),u(t),\Psi (t),Q(t)\right) , \end{aligned}$$

then the adjoint Eq. (2.3) can be rewritten as follows

$$\begin{aligned} \left\{ \begin{array}{l} -\mathrm{d}\Psi (t)=\left\{ H_{x}\left( t\right) +\mathbb {E}\left[ H_{\tilde{x} }\left( t\right) \right] \right\} \mathrm{d}t-Q(t)\mathrm{d}W(t), \\ \Psi (T)=h_{x}\left( x(T),\mathbb {E}(x(T))\right) +\mathbb {E}\left[ h_{ \tilde{x}}\left( x(T),\mathbb {E}(x(T))\right) \right] . \end{array} \right. \end{aligned}$$
(2.4)

It is well known that, under conditions (H1) and (H2), the adjoint Eq. (2.3) admits a unique \(\mathcal {F}_{t}\)-adapted solution pair \(\left( \Psi (\cdot ),Q(\cdot )\right) \in \mathbb { L}_{\mathcal {F}}^{2}\left( \left[ s,T\right] ;\mathbb {R}\right) \times \mathbb {L}_{\mathcal {F}}^{2}\left( \left[ s,T\right] ;\mathbb {R}\right) \). This equation reduces to the standard one, as in [5], when the coefficients do not explicitly depend on the expected value (or the marginal law) of the underlying diffusion process.

We note that, since the derivatives \(f_{x},f_{\tilde{x}}, \sigma _{x},\sigma _{\tilde{x}},\ell _{x},\ell _{\tilde{x} },h_{x}\), and \(h_{\tilde{x}}\) are bounded by assumption (H1), we have the following estimate:

$$\begin{aligned} \mathbb {E}\left[ \sup _{s\le t\le T}\left| \Psi (t)\right| ^{2}+\int \limits _{s}^{T}\left| Q(t)\right| ^{2}\mathrm{d}t\right] \le C. \end{aligned}$$
(2.5)
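In special cases the adjoint BSDE (2.3) can be solved explicitly backward in time: when the coefficient derivatives are deterministic and \(\sigma \) does not depend on \((x,\tilde{x})\), with a deterministic terminal condition (e.g., \(h\) linear), one expects \(\Psi \) deterministic and \(Q\equiv 0\), so (2.3) collapses to the backward linear ODE \(-\Psi '(t)=(f_{x}+f_{\tilde{x}})\Psi (t)+\ell _{x}+\ell _{\tilde{x}}\). A sketch under these simplifying assumptions (ours, not the general setting of the paper):

```python
import numpy as np

def solve_adjoint(fx, fxt, lx, lxt, psi_T, s=0.0, T=1.0, steps=1000):
    """Backward Euler for -dPsi = [(fx + fxt)*Psi + lx + lxt] dt, Psi(T) given.

    Assumes deterministic coefficients and sigma independent of (x, x~),
    so that Q = 0 and the mean-field BSDE (2.3) reduces to an ODE.
    """
    dt = (T - s) / steps
    psi = np.empty(steps + 1)
    psi[-1] = psi_T                      # terminal condition Psi(T)
    for k in range(steps - 1, -1, -1):
        t = s + k * dt
        psi[k] = psi[k + 1] + ((fx(t) + fxt(t)) * psi[k + 1] + lx(t) + lxt(t)) * dt
    return psi

# Constant toy coefficients: -Psi' = -Psi + 1, Psi(T) = 0,
# whose closed form is Psi(t) = 1 - e^{t-T}.
psi = solve_adjoint(fx=lambda t: -1.0, fxt=lambda t: 0.0,
                    lx=lambda t: 1.0, lxt=lambda t: 0.0, psi_T=0.0)
print(psi[0])   # approx 1 - e^{-1}, about 0.632
```

The boundedness of the computed \(\Psi \) mirrors the estimate (2.5).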

3 Mean-Field Maximum Principle for Optimal Singular Control

Our purpose in this section is to establish a stochastic maximum principle for optimal singular stochastic control of systems driven by nonlinear controlled MFSDEs. Since the control domain is assumed to be convex, the proof of our result is based on convex perturbations of both the continuous and the singular parts of the control process.

The main result of this paper is stated in the following theorem.

Theorem 3.1

(Mean-field maximum principle for optimal singular control in integral form). Let Conditions (H1) and (H2) hold. Then there exists a unique pair of \(\mathcal {F}_{t}\)-adapted processes \(\left( \Psi ^{*}(\cdot ),Q^{*}(\cdot )\right) \) such that for all \(\left( u(\cdot ),\eta (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal {U}_{2}:\)

$$\begin{aligned}&\mathbb {E}\int \limits _{s}^{T}H_{u}(t,x^{*}(t),\mathbb {E}(x^{*}(t)\mathbb {)} ,u^{*}(t),\Psi ^{*}(t),Q^{*}(t))(u(t)-u^{*}(t))\mathrm{d}t\nonumber \\&\quad +\,\mathbb {E}\left[ \int \limits _{\left[ s,T\right] }(K(t)+G(t)\Psi ^{*}(t))\mathrm{d}\left( \eta -\eta ^{*}\right) (t)\right] \ge 0, \end{aligned}$$
(3.1)

where \(\left( \Psi ^{*}(\cdot ),Q^{*}(\cdot )\right) \) is the solution of the adjoint equation (2.3) corresponding to \((u^{*}(\cdot ),\eta ^{*}(\cdot ),x^{*}(\cdot ))\).

Corollary 3.2

(Mean-field maximum principle for optimal singular control). Under the conditions of Theorem 3.1, there exists a unique pair of \(\mathcal {F}_{t}\)-adapted processes \(\left( \Psi ^{*}(\cdot ),Q^{*}(\cdot )\right) \), solution of the mean-field BSDE-(2.3), such that for all \(\left( u,\eta \right) \in \mathbb {A}_{1}\times \mathbb {A}_{2}:\)

$$\begin{aligned}&H_{u}(t,x^{*}(t),\mathbb {E}(x^{*}(t)\mathbb {)},u^{*}(t),\Psi ^{*}(t),Q^{*}(t))(u(t)-u^{*}(t))\nonumber \\&\quad +\,\mathbb {E}\left[ \int \limits _{\left[ s,T\right] }(K(t)+G(t)\Psi ^{*}(t))\mathrm{d}\left( \eta -\eta ^{*}\right) (t)\right] \ge 0,\\&\qquad \qquad \quad \quad \qquad \mathbb {P}\text {-a.s.},\ \text {a.e. }t\in \left[ s,T\right] .\nonumber \end{aligned}$$
(3.2)

To prove Theorem 3.1 and Corollary 3.2, we need the following auxiliary results, adapted to our mean-field singular problem.

Let \(\left( u^{*}(\cdot ),\eta ^{*}(\cdot ),x^{*}(\cdot )\right) \) be the optimal solution of the control problem (1.1)–(1.2). We derive the variational inequality (3.1) in several steps, starting from the fact that

$$\begin{aligned} J^{^{s,\zeta }}\left( u^{\varepsilon }(\cdot ),\eta ^{\varepsilon }(\cdot )\right) -J^{^{s,\zeta }}\left( u^{*}(\cdot ),\eta ^{*}(\cdot )\right) \ge 0, \end{aligned}$$
(3.3)

where \((u^{\varepsilon }(\cdot ),\eta ^{\varepsilon }(\cdot ))\) is the so-called convex perturbation of \(\left( u^{*}(\cdot ),\eta ^{*}(\cdot )\right) \), defined for \(t\in \left[ s,T\right] \) by

$$\begin{aligned} (u^{\varepsilon }(t),\eta ^{\varepsilon }(t))=\left( u^{*}(t),\eta ^{*}(t)\right) +\varepsilon \left[ \left( u(t),\eta (t)\right) -\left( u^{*}(t),\eta ^{*}(t)\right) \right] , \end{aligned}$$
(3.4)

where \(\varepsilon >0\) is sufficiently small and \(\left( u(\cdot ),\eta (\cdot )\right) \) is an arbitrary element of \(\mathcal {U}_{1}\times \mathcal { U}_{2}\). We emphasize that the convexity of \(\mathbb {A}_{1}\times \mathbb {A} _{2}\) has the consequence that \((u^{\varepsilon }(\cdot ),\eta ^{\varepsilon }(\cdot ))\in \mathcal {U}_{1}\times \mathcal {U}_{2}\).
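The convex perturbation (3.4) is straightforward to realize on a time grid, and it preserves the defining properties of the singular part: a convex combination of nondecreasing paths starting at \(0\) is again nondecreasing and starts at \(0\). A sketch with illustrative arrays (ours):

```python
import numpy as np

def perturb(u_star, eta_star, u, eta, eps):
    """Convex perturbation (u_eps, eta_eps) = (u*, eta*) + eps*((u, eta) - (u*, eta*))."""
    u_eps = u_star + eps * (u - u_star)
    eta_eps = eta_star + eps * (eta - eta_star)
    return u_eps, eta_eps

u_star = np.array([0.0, 0.1, 0.2])
eta_star = np.array([0.0, 0.0, 1.0])      # nondecreasing, eta*(s) = 0
u = np.array([1.0, 1.0, 1.0])
eta = np.array([0.0, 0.5, 2.0])
u_eps, eta_eps = perturb(u_star, eta_star, u, eta, eps=0.1)
# eta_eps - eta_star = eps * (eta - eta_star): the identity used later for I_4.
assert np.allclose(eta_eps - eta_star, 0.1 * (eta - eta_star))
assert np.all(np.diff(eta_eps) >= 0) and eta_eps[0] == 0.0
```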

Let \(x^{\varepsilon }(\cdot )=x^{(u^{\varepsilon },\eta ^{\varepsilon })}(\cdot )\) be the solution of MFSDE-(1.1) corresponding to the admissible control \((u^{\varepsilon }(\cdot ),\eta ^{\varepsilon }(\cdot ))\).

Lemma 3.3

Let Conditions (H1) and (H2) hold. Then we have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\mathbb {E}\left( \sup _{s\le t\le T}\left| x^{\varepsilon }(t)-x^{*}(t)\right| ^{2}\right) =0. \end{aligned}$$

Proof

From standard estimates and the Burkholder–Davis–Gundy inequality we obtain

$$\begin{aligned}&\mathbb {E}\left( \sup _{s\le r\le t}\left| x^{\varepsilon }(r)-x^{*}(r)\right| ^{2}\right) \\&\le \mathbb {E}\int \limits _{s}^{t}\left| f\left( r,x^{\varepsilon }(r), \mathbb {E}(x^{\varepsilon }(r)),u^{\varepsilon }(r)\right) -f\left( r,x^{*}(r),\mathbb {E}(x^{*}(r)),u^{*}(r)\right) \right| ^{2}\mathrm{d}r \\&\quad +\,\mathbb {E}\int \limits _{s}^{t}\left| \sigma \left( r,x^{\varepsilon }(r), \mathbb {E}(x^{\varepsilon }(r)),u^{\varepsilon }(r)\right) -\sigma \left( r,x^{*}(r),\mathbb {E}(x^{*}(r)),u^{*}(r)\right) \right| ^{2}\mathrm{d}r \\&\quad +\,\mathbb {E}\left| \int \limits _{\left[ s,t\right] }G(r)\mathrm{d}\left( \eta ^{\varepsilon }-\eta ^{*}\right) (r)\right| ^{2}, \end{aligned}$$

by applying assumption (H2) and the Lipschitz conditions on the coefficients \(f,\sigma \) with respect to \((x,\tilde{x},u)\), we get

$$\begin{aligned}&\mathbb {E}\left( \sup _{s\le r\le t}\left| x^{\varepsilon }(r)-x^{*}(r)\right| ^{2}\right) \\&\le C_{T}\mathbb {E}\int \limits _{s}^{t}\left| x^{\varepsilon }(r)-x^{*}(r)\right| ^{2}\mathrm{d}r+C_{T}\varepsilon ^{2}\mathbb {E}\int \limits _{s}^{t}\left| u(r)-u^{*}(r)\right| ^{2}\mathrm{d}r \\&\quad +\,C_{T}\varepsilon ^{2}\mathbb {E}\left| \eta (T)-\eta ^{*}(T)\right| ^{2}, \end{aligned}$$

from Definition 2.1 and Gronwall’s inequality, the desired result follows. \(\square \)

Lemma 3.4

Let \(\mathcal {Z}(t)\) be the solution of the following linear MFSDE

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}\mathcal {Z}(t)=\left\{ f_{x}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \mathcal {Z}(t)+f_{\tilde{x}}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \mathbb {E}(\mathcal {Z}(t))\right. \\ \qquad \qquad \qquad \left. +\,f_{u}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) (u(t)-u^{*}(t))\right\} \mathrm{d}t\\ \qquad \qquad \qquad +\left\{ \sigma _{x}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \mathcal {Z}(t)+\sigma _{\tilde{x}}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \mathbb {E}(\mathcal {Z}(t))\right. \\ \qquad \qquad \qquad \left. +\,\sigma _{u}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) (u(t)-u^{*}(t))\right\} \mathrm{d}W(t)\\ \qquad \qquad \qquad +G(t)\mathrm{d}\left( \eta -\eta ^{*}\right) (t),\\ \mathcal {Z}(s)=0, \end{array} \right. \end{aligned}$$
(3.5)

Then the following estimate holds:

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\mathbb {E}\left[ \sup _{s\le t\le T}\left| \frac{x^{\varepsilon }(t)-x^{*}(t)}{\varepsilon }-\mathcal {Z }(t)\right| ^{2}\right] =0. \end{aligned}$$
(3.6)

Proof

Note that, under conditions (H1) and (H2), the linear MFSDE-(3.5) has a unique solution. We put

$$\begin{aligned} \gamma ^{\varepsilon }(t)=\frac{x^{\varepsilon }(t)-x^{*}(t)}{ \varepsilon }-\mathcal {Z}(t),\quad t\in \left[ s,T\right] , \end{aligned}$$
(3.7)

A simple computation shows that

$$\begin{aligned}&\dfrac{x^{\varepsilon }(t)-x^{*}(t)}{\varepsilon } \\&=\int \limits _{s}^{t}\int \limits _{0}^{1}f_{x}\left( r,x^{*}(r)+\mu \varepsilon (\gamma ^{\varepsilon }(r)+\mathcal {Z}(r)),\mathbb {E}(x^{*}(r)),u^{\varepsilon }(r)\right) \left( \gamma ^{\varepsilon }(r)+\mathcal {Z} (r)\right) \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\int \limits _{0}^{1}\left\{ f_{\tilde{x}}\left( r,x^{\varepsilon }(r),\mathbb {E}(x^{*}(r))+\mu \varepsilon \mathbb {E} (\gamma ^{\varepsilon }(r)+\mathcal {Z}(r)),u^{\varepsilon }(r)\right) \mathbb {E}\left( \gamma ^{\varepsilon }(r)+\mathcal {Z}(r)\right) \right\} \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\int \limits _{0}^{1}\sigma _{x}\left( r,x^{*}(r)+\mu \varepsilon (\gamma ^{\varepsilon }(r)+\mathcal {Z}(r)),\mathbb {E}(x^{*}(r)),u^{\varepsilon }(r)\right) \left( \gamma ^{\varepsilon }(r)+\mathcal {Z} (r)\right) \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\int \limits _{0}^{1}\left\{ \sigma _{\tilde{x}}\left( r,x^{\varepsilon }(r),\mathbb {E}(x^{*}(r))+\mu \varepsilon \mathbb {E} (\gamma ^{\varepsilon }(r)+\mathcal {Z}(r)),u^{\varepsilon }(r)\right) \mathbb {E}\left( \gamma ^{\varepsilon }(r)+\mathcal {Z}(r)\right) \right\} \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\int \limits _{0}^{1}f_{u}\left( r,x^{*}(r),\mathbb {E}(x^{*}(r)),u^{*}(r)+\mu \varepsilon \left( u(r)-u^{*}(r)\right) \right) \left( u(r)-u^{*}(r)\right) \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\int \limits _{0}^{1}\sigma _{u}\left( r,x^{*}(r),\mathbb {E} (x^{*}(r)),u^{*}(r)+\mu \varepsilon \left( u(r)-u^{*}(r)\right) \right) \left( u(r)-u^{*}(r)\right) \mathrm{d}\mu \mathrm{d}r \\&\quad +\,\int \limits _{\left[ s,t\right] }G(r)\mathrm{d}\left( \eta -\eta ^{*}\right) (r), \end{aligned}$$

From the above equation and (3.7), we conclude that \(\gamma ^{\varepsilon }(t)\) is independent of the singular part; hence, the method developed in Li [17] can be used for the rest of the proof.\(\square \)

Lemma 3.5

For any \((u(\cdot ),\eta (\cdot ))\in \mathcal { U}_{1}\times \mathcal {U}_{2}\), we have

$$\begin{aligned} 0&\le \mathbb {E}\left\{ \left[ h_{x}\left( x^{*}(T),\mathbb {E} (x^{*}(T))\right) +\mathbb {E}\left( h_{\tilde{x}}\left( x^{*}(T), \mathbb {E}(x^{*}(T))\right) \right) \right] \mathcal {Z}(T)\right. \\&+\,\int \limits _{s}^{T}[\ell _{x}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \mathcal {Z}(t)+\mathbb {E}\left( \ell _{\tilde{x }}\left( t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t)\right) \right) \mathcal {Z}(t)\\&\left. +\,\left( u(t)-u^{*}(t)\right) \ell _{u}\left( t,x^{*}(t), \mathbb {E}(x^{*}(t)),u^{*}(t)\right) ]\mathrm{d}t\right\} \\&+\,\mathbb {E}\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}\left( \eta -\eta ^{*}\right) (t). \end{aligned}$$

Proof

From (1.2) and (3.3), we have

$$\begin{aligned} 0&\le J^{^{s,\zeta }}\left( u^{\varepsilon }(\cdot ),\eta ^{\varepsilon }(\cdot )\right) -J^{^{s,\zeta }}\left( u^{*}(\cdot ),\eta ^{*}(\cdot )\right) \\&= \mathbb {E}\left[ h(x^{\varepsilon }(T),\mathbb {E}\left( x^{\varepsilon }(T)\right) )-h(x^{*}(T),\mathbb {E}\left( x^{*}(T)\right) )\right] \\&+\,\mathbb {E}\int \limits _{s}^{T}\left[ \mathfrak {\ell }(t,x^{\varepsilon }(t), \mathbb {E}\left( x^{\varepsilon }(t)\right) ,u^{\varepsilon }(t))-\mathfrak { \ell }(t,x^{*}(t),\mathbb {E}\left( x^{*}(t)\right) ,u^{\varepsilon }(t))\right] \mathrm{d}t \\&+\,\mathbb {E}\int \limits _{s}^{T}\left[ \mathfrak {\ell }(t,x^{*}(t),\mathbb {E} \left( x^{*}(t)\right) ,u^{\varepsilon }(t))-\mathfrak {\ell }(t,x^{*}(t),\mathbb {E}\left( x^{*}(t)\right) ,u^{*}(t))\right] \mathrm{d}t \\&+\,\mathbb {E}\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}\left( \eta ^{\varepsilon }-\eta ^{*}\right) (t) \\&= \mathbb {I}_{1}\,+\,\mathbb {I}_{2}\mathbb {+I}_{3}\mathbb {+I}_{4}. \end{aligned}$$

The terms \(\mathbb {I}_{1},\mathbb {I}_{2}\), and \(\mathbb {I}_{3}\) are handled by arguments similar to those developed in [17]. Let us turn to the estimate of \(\mathbb {I}_{4}.\) From (3.4), we get, for any \(\eta (\cdot )\in \mathcal { U}_{2}\),

$$\begin{aligned} \eta ^{\varepsilon }(t)-\eta ^{*}(t)=\varepsilon \left( \eta (t)-\eta ^{*}(t)\right) , \end{aligned}$$

hence

$$\begin{aligned} \mathbb {I}_{4}&= \mathbb {E}\int \limits _{\left[ s,T\right] }K(t)d\left( \eta ^{\varepsilon }-\eta ^{*}\right) (t) \\&= \varepsilon \mathbb {E}\int \limits _{\left[ s,T\right] }K(t)d(\eta -\eta ^{*})(t). \end{aligned}$$

Hence \(\varepsilon ^{-1}\mathbb {I}_{4}=\mathbb {E}\int \nolimits _{\left[ s,T\right] }K(t)\mathrm{d}\left( \eta -\eta ^{*}\right) (t)\); dividing (3.3) by \(\varepsilon \) and letting \(\varepsilon \rightarrow 0\) completes the proof of Lemma 3.5. \(\square \)
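The exact \(\varepsilon \)-scaling of \(\mathbb {I}_{4}\) used above can be checked numerically with a discrete Riemann–Stieltjes sum against the increments of \(\eta \); the grid and data below are illustrative:

```python
import numpy as np

def stieltjes(K, eta):
    """Approximate integral of K d(eta) on a grid: sum K_i * (eta_{i+1} - eta_i)."""
    return float(np.sum(K[:-1] * np.diff(eta)))

K = np.array([1.0, 2.0, 0.5, 3.0])
eta_star = np.array([0.0, 0.0, 1.0, 1.0])
eta = np.array([0.0, 0.5, 1.0, 2.5])
eps = 0.25
eta_eps = eta_star + eps * (eta - eta_star)
lhs = stieltjes(K, eta_eps) - stieltjes(K, eta_star)       # I_4 (pathwise)
rhs = eps * (stieltjes(K, eta) - stieltjes(K, eta_star))   # eps * int K d(eta - eta*)
assert abs(lhs - rhs) < 1e-12   # I_4 = eps * int K d(eta - eta*)
```

Linearity of the Stieltjes integral in \(\eta \) makes the identity exact, which is why dividing by \(\varepsilon \) isolates the singular contribution in (3.1).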

Proof of Theorem 3.1

Applying Itô’s formula to \( \Psi ^{*}(t)\mathcal {Z}(t)\), taking expectations, and using \(\mathcal {Z}(s)=0\), a simple computation shows that

$$\begin{aligned} \mathbb {E}(\Psi ^{*}(T)\mathcal {Z}(T))&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)\mathrm{d}\mathcal {Z}(t)+\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\mathrm{d}\Psi ^{*}(t)\nonumber \\&+\,\mathbb {E} \int \limits _{s}^{T}Q^{*}(t)\left[ \sigma _{x}\left( t\right) \mathcal {Z} (t)+\sigma _{\tilde{x}}\left( t\right) \mathbb {E}(\mathcal {Z}(t))\right. \\&+\,\left. \sigma _{u}\left( t\right) (u(t)-u^{*}(t))\right] \mathrm{d}t \nonumber \\&= \mathbb {J}_{1} \mathbb {+J}_{2}\mathbb {+J}_{3},\nonumber \end{aligned}$$
(3.8)

where

$$\begin{aligned} \mathbb {J}_{1}&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)\mathrm{d}\mathcal {Z} (t)\nonumber \\&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)\left[ f_{x}\left( t\right) \mathcal {Z}(t)+f_{\tilde{x}}\left( t\right) \mathbb {E}(\mathcal { Z}(t))+f_{u}(t)(u(t)-u^{*}(t))\right] \mathrm{d}t\nonumber \\&+\,\mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)G(t)\mathrm{d}(\eta -\eta ^{*})(t) \\&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)f_{x}\left( t\right) \mathcal {Z}(t)\mathrm{d}t+\mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)f_{\tilde{x} }\left( t\right) \mathbb {E}(\mathcal {Z}(t))\mathrm{d}t\nonumber \\&+\,\mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)f_{u}(t)(u(t)-u^{*}(t))\mathrm{d}t+\mathbb {E}\int \limits _{\left[ s,T\right] }\Psi ^{*}(t)G(t)\mathrm{d}(\eta -\eta ^{*})(t),\nonumber \end{aligned}$$
(3.9)
$$\begin{aligned} \mathbb {J}_{2}&= \mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\mathrm{d}\Psi ^{*}(t)\nonumber \\&= -\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\left\{ f_{x}\left( t\right) \Psi ^{*}(t)+\mathbb {E}\left( f_{\tilde{x}}\left( t\right) \Psi ^{*}(t)\right) +\sigma _{x}\left( t\right) Q^{*}(t)\right. \nonumber \\&+\,\left. \mathbb {E}\left( \sigma _{\tilde{x}}\left( t\right) Q^{*}(t)\right) +\ell _{x}\left( t\right) +\mathbb {E}\left( \ell _{\tilde{x}}\left( t\right) \right) \right\} \mathrm{d}t\nonumber \\&= -\,\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)f_{x}\left( t\right) \Psi ^{*}(t)\mathrm{d}t-\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\mathbb {E}\left( f_{ \tilde{x}}\left( t\right) \Psi ^{*}(t)\right) \mathrm{d}t\\&-\,\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\sigma _{x}\left( t\right) Q^{*}(t)\mathrm{d}t-\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\mathbb {E}\left( \sigma _{\tilde{x}}\left( t\right) Q^{*}(t)\right) \mathrm{d}t\nonumber \\&-\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\ell _{x}\left( t\right) \mathrm{d}t-\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\mathbb {E}\left( \ell _{\tilde{x} }\left( t\right) \right) \mathrm{d}t,\nonumber \end{aligned}$$
(3.10)

and

$$\begin{aligned} \mathbb {J}_{3}&= \mathbb {E}\int _{s}^{T}Q^{*}(t)\left[ \sigma _{x}\left( t\right) \mathcal {Z}(t)+\sigma _{\tilde{x}}\left( t\right) \mathbb {E}( \mathcal {Z}(t))+\sigma _{u}\left( t\right) (u(t)-u^{*}(t))\right] \mathrm{d}t\nonumber \\&= \mathbb {E}\int \limits _{s}^{T}Q^{*}(t)\sigma _{x}\left( t\right) \mathcal {Z}(t)\mathrm{d}t+\mathbb {E}\int \limits _{s}^{T}Q^{*}(t)\sigma _{ \tilde{x}}\left( t\right) \mathbb {E}(\mathcal {Z}(t))\mathrm{d}t\nonumber \\&+\,\mathbb {E}\int \limits _{s}^{T}Q^{*}(t)\sigma _{u}\left( t\right) (u(t)-u^{*}(t))\mathrm{d}t, \end{aligned}$$
(3.11)

where \(b_{\rho }(t)=\dfrac{\partial b}{\partial \rho }(t,x^{*}(t), \mathbb {E}(x^{*}(t)),u^{*}(t))\) for \(b=f,\sigma ,\ell \) and \(\rho =x, \tilde{x},u\).

Combining (3.8)–(3.11) and the fact that \(\Psi ^{*}(T)=h_{x}\left( x^{*}(T),\mathbb {E}(x^{*}(T))\right) +\mathbb {E}\left[ h_{\tilde{x}}\left( x^{*}(T),\mathbb {E}(x^{*}(T))\right) \right] \), we get

$$\begin{aligned}&\mathbb {E}\left\{ \left[ h_{x}\left( x^{*}(T),\mathbb {E}(x^{*}(T))\right) +\mathbb {E}\left( h_{\tilde{x}}\left( x^{*}(T),\mathbb {E} (x^{*}(T))\right) \right) \right] \mathcal {Z}(T)\right\} \\&=\mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)f_{u}(t)(u(t)-u^{*}(t))\mathrm{d}t+ \mathbb {E}\int \limits _{s}^{T}Q^{*}(t)\sigma _{u}\left( t\right) (u(t)-u^{*}(t))\mathrm{d}t \\&\quad -\,\mathbb {E}\int \limits _{s}^{T}\mathcal {Z}(t)\ell _{x}\left( t\right) \mathrm{d}t-\mathbb {E} \int \limits _{s}^{T}\mathcal {Z}(t)\mathbb {E}\left( \ell _{\tilde{x}}\left( t\right) \right) \mathrm{d}t+\mathbb {E}\int \limits _{\left[ s,T\right] }\Psi ^{*}(t)G(t)\mathrm{d}(\eta -\eta ^{*})(t). \end{aligned}$$

Finally, applying Lemma 3.5 we get

$$\begin{aligned} 0&\le \mathbb {E}\int \limits _{s}^{T}\Psi ^{*}(t)f_{u}(t)(u(t)-u^{*}(t))\mathrm{d}t+ \mathbb {E}\int \limits _{s}^{T}Q^{*}(t)\sigma _{u}\left( t\right) (u(t)-u^{*}(t))\mathrm{d}t \\&+\,\mathbb {E}\int \limits _{s}^{T}\ell _{u}\left( t\right) \left( u(t)-u^{*}(t)\right) \mathrm{d}t \\&+\,\mathbb {E}\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}\left( \eta -\eta ^{*}\right) (t)+\,\mathbb {E}\int \limits _{\left[ s,T\right] }\Psi ^{*}(t)G(t)\mathrm{d}(\eta -\eta ^{*})(t) \\&= \mathbb {E}\int \limits _{s}^{T}H_{u}(t,x^{*}(t),\mathbb {E}(x^{*}(t)),u^{*}(t),\Psi ^{*}(t),Q^{*}(t))\left( u(t)-u^{*}(t)\right) \mathrm{d}t \\&+\,\mathbb {E}\int \limits _{\left[ s,T\right] }\left( K(t)+\Psi ^{*}(t)G(t)\right) \mathrm{d}\left( \eta -\eta ^{*}\right) (t). \end{aligned}$$

This completes the proof of Theorem  3.1. \(\square \)

4 Mean-Field Sufficient Conditions for Optimal Singular Control

The specific purpose of this section is to derive sufficient conditions for optimal stochastic singular control of systems governed by MFSDEs. We prove that, under certain convexity conditions on the Hamiltonian and on the function \(h\), the necessary conditions of optimality also become sufficient. We assume:

Conditions (H3): The functions \(h(\cdot ,\cdot ): \mathbb {R}\times \mathbb {R\rightarrow R},\) and \(H(t,\cdot ,\cdot ,\cdot ,\Psi ,Q):\mathbb {R}\times \mathbb {R}\times \mathbb {A}_{1}\rightarrow \mathbb {R}\) satisfy

$$\begin{aligned}&h(\cdot ,\cdot ) \hbox { is convex with respect to }\left( x,\tilde{x} \right) ,\end{aligned}$$
(4.1)
$$\begin{aligned}&H(t,\cdot ,\cdot ,\cdot ,\Psi ,Q)\,\, \hbox {is convex with respect to }\left( x,\tilde{x},u\right) . \end{aligned}$$
(4.2)
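Condition (H3) can be probed numerically for a candidate \(h\) by sampling the midpoint inequality \(h((p+q)/2)\le (h(p)+h(q))/2\), which is necessary for convexity; the sample function \(h(x,\tilde{x})=x^{2}+\tilde{x}^{2}\) below is an illustration of ours:

```python
import numpy as np

def is_midpoint_convex(h, dim=2, trials=1000, seed=2):
    """Sample check of h((p+q)/2) <= (h(p)+h(q))/2 -- necessary for convexity."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        p, q = rng.normal(size=dim), rng.normal(size=dim)
        if h((p + q) / 2) > (h(p) + h(q)) / 2 + 1e-12:
            return False
    return True

h = lambda z: z[0] ** 2 + z[1] ** 2                  # convex in (x, x~)
assert is_midpoint_convex(h)
assert not is_midpoint_convex(lambda z: -z[0] ** 2)  # a concave function fails
```

Such a sampled check can of course only refute convexity; verifying (4.1)–(4.2) rigorously requires an analytic argument, e.g., positive semidefiniteness of the Hessian.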

Let \(\left( v(\cdot ),\xi (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal { U}_{2}\) be an admissible control, and let \(x^{v,\xi }(\cdot )\) and \((\Psi ^{v}(\cdot ),Q^{v}(\cdot ))\) be the solutions of (1.1) and (2.4), respectively, corresponding to \(\left( v(\cdot ),\xi (\cdot )\right) \).

Theorem 4.1

(Mean-field sufficient conditions for optimal singular control). Let conditions (H1)–(H3) hold. Suppose that the singular control \(\left( v(\cdot ),\xi (\cdot )\right) \) satisfies: for any \(\left( u(\cdot ),\eta (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal {U}_{2}:\)

$$\begin{aligned} \mathbb {E}&\int \limits _{s}^{T}H_{u}(t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)\mathbb {) },v(t),\Psi ^{v}(t),Q^{v}(t))(u(t)-v(t))\mathrm{d}t\nonumber \\&+\,\mathbb {E}\left[ \int \limits _{\left[ s,T\right] }(K(t)+G(t)\Psi ^{v}(t))\mathrm{d}\left( \eta -\xi \right) (t)\right] \ge 0, \end{aligned}$$
(4.3)

Then \(\left( v(\cdot ),\xi (\cdot )\right) \) is an optimal control, i.e.,

$$\begin{aligned} J^{s,\zeta }\left( v(\cdot ),\xi (\cdot )\right) =\min _{\left( u(\cdot ),\eta (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal {U}_{2}}J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) . \end{aligned}$$
(4.4)

Proof

For any \(\left( u(\cdot ),\eta (\cdot )\right) \in \mathcal {U}_{1}\times \mathcal {U}_{2}\), from (1.2) we get

$$\begin{aligned}&J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) -J^{s,\zeta }\left( v(\cdot ),\xi (\cdot )\right) \\&\quad =\mathbb {E}\left[ h(x^{u,\eta }(T),\mathbb {E}\left( x^{u,\eta }(T)\right) )-h(x^{v,\xi }(T),\mathbb {E}\left( x^{v,\xi }(T)\right) )\right] \\&\quad +\,\mathbb {E}\int \limits _{s}^{T}\left[ \ell (t,x^{u,\eta }(t),\mathbb {E}\left( x^{u,\eta }(t)\right) ,u(t))-\ell (t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t))\right] \mathrm{d}t\\&\quad +\,\mathbb {E}\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}(\eta -\xi )(t). \end{aligned}$$

Using (4.1), we get

$$\begin{aligned}&J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) -J^{s,\zeta }\left( v(\cdot ),\xi (\cdot )\right) \nonumber \\&\ge \mathbb {E}\left[ h_{x}(x^{v,\xi }(T),\mathbb {E}\left( x^{v,\xi }(T)\right) )\left( x^{u,\eta }(T)-x^{v,\xi }(T)\right) \right. \nonumber \\&\quad \left. +\,h_{\tilde{x}}(x^{v,\xi }(T),\mathbb {E}\left( x^{v,\xi }(T)\right) )\left( \mathbb {E}\left( x^{u,\eta }(T)\right) -\mathbb {E}\left( x^{v,\xi }(T)\right) \right) \right] \nonumber \\&\quad +\,\mathbb {E}\int \limits _{s}^{T}\left[ \mathfrak {\ell }(t,x^{u,\eta }(t),\mathbb {E}\left( x^{u,\eta }(t)\right) ,u(t))-\mathfrak {\ell } (t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t))\right] \mathrm{d}t\nonumber \\&\quad +\,\mathbb {E}\int \limits _{\left[ s,T\right] }K(t)\mathrm{d}(\eta -\xi )(t). \end{aligned}$$
(4.5)

Now, by noting that

$$\begin{aligned}&x^{u,\eta }(t)-x^{v,\xi }(t) \\&=\int \limits _{s}^{t}\left[ f\left( r,x^{u,\eta }(r),\mathbb {E}(x^{u,\eta }(r)),u(r)\right) -f\left( r,x^{v,\xi }(r),\mathbb {E}(x^{v,\xi }(r)),v(r)\right) \right] \mathrm{d}r \\&\quad +\,\int \limits _{s}^{t}\left[ \sigma \left( r,x^{u,\eta }(r),\mathbb {E}(x^{u,\eta }(r)),u(r)\right) -\sigma \left( r,x^{v,\xi }(r),\mathbb {E}(x^{v,\xi }(r)),v(r)\right) \right] \mathrm{d}W(r) \\&\quad +\,\int \limits _{\left[ s,t\right] }G(r)\mathrm{d}\left( \eta -\xi \right) (r), \end{aligned}$$

and applying the integration by parts formula to \(\Psi ^{v}(t)(x^{u,\eta }(t)-x^{v,\xi }(t))\), we get

$$\begin{aligned}&\mathbb {E}(\Psi ^{v}(T)(x^{u,\eta }(T)-x^{v,\xi }(T)))\nonumber \\&=\mathbb {E}\int \limits _{s}^{T}\Psi ^{v}(t)\mathrm{d}(x^{u,\eta }(t)-x^{v,\xi }(t))\nonumber \\&\quad +\,\mathbb {E}\int \limits _{s}^{T}(x^{u,\eta }(t)-x^{v,\xi }(t))\mathrm{d}\Psi ^{v}(t)\nonumber \\&\quad +\,\mathbb {E}\int \limits _{s}^{T}Q^{v}(t)\left[ \sigma \left( t,x^{u,\eta }(t),\mathbb {E}(x^{u,\eta }(t)),u(t)\right) -\sigma \left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t)\right) \right] \mathrm{d}t\nonumber \\&=\mathbb {I}_{1}+\mathbb {I}_{2}+\mathbb {I}_{3}, \end{aligned}$$
(4.6)

where

$$\begin{aligned} \mathbb {I}_{1}&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{v}(t)\mathrm{d}(x^{u,\eta }(t)-x^{v,\xi }(t))\nonumber \\&= \mathbb {E}\int \limits _{s}^{T}\Psi ^{v}(t)\left[ f\left( t,x^{u,\eta }(t),\mathbb {E}(x^{u,\eta }(t)),u(t)\right) \right. \nonumber \\&\quad -\left. f\left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t)\right) \right] \mathrm{d}t\nonumber \\&\quad +\,\mathbb {E}\int \limits _{\left[ s,T\right] }\Psi ^{v}(t)G(t)\mathrm{d}\left( \eta -\xi \right) (t), \end{aligned}$$
(4.7)

and from (2.4) we get

$$\begin{aligned} \mathbb {I}_{2}&= \mathbb {E}\int \limits _{s}^{T}(x^{u,\eta }(t)-x^{v,\xi }(t))\mathrm{d}\Psi ^{v}(t)\nonumber \\&= -\mathbb {E}\int \limits _{s}^{T}(x^{u,\eta }(t)-x^{v,\xi }(t))\left[ H_{x}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right. \nonumber \\&+\,\left. \mathbb {E}\left( H_{\tilde{x}}\left( t,x^{v,\xi }(t),\mathbb {E} \left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right) \right] \mathrm{d}t \end{aligned}$$
(4.8)

and

$$\begin{aligned} \mathbb {I}_{3}&= \mathbb {E}\int \limits _{s}^{T}Q^{v}(t)\sigma \left( t,x^{u,\eta }(t), \mathbb {E}(x^{u,\eta }(t)),u(t)\right) \mathrm{d}t\nonumber \\&\quad -\,\mathbb {E}\int \limits _{s}^{T}Q^{v}(t)\sigma \left( t,x^{v,\xi }(t),\mathbb {E} (x^{v,\xi }(t)),v(t)\right) \mathrm{d}t, \end{aligned}$$
(4.9)

Combining (4.6)–(4.9), we get

$$\begin{aligned}&\mathbb {E}(\Psi ^{v}(T)(x^{u,\eta }(T)-x^{v,\xi }(T))) \nonumber \\&=\mathbb {E}\int \limits _{s}^{T}\left( H\left( t,x^{u,\eta }(t),\mathbb {E} (x^{u,\eta }(t)),u(t),\Psi ^{v}(t),Q^{v}(t)\right) \right. \nonumber \\&\quad \left. -\,H\left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right) \mathrm{d}t\nonumber \\&\quad -\,\mathbb {E}\int \limits _{s}^{T}(x^{u,\eta }(t)-x^{v,\xi }(t))\left[ H_{x}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right. \nonumber \\&\quad +\,\left. \mathbb {E}\left( H_{\tilde{x}}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right) \right] \mathrm{d}t\nonumber \\&\quad -\,\mathbb {E}\int \limits _{s}^{T}\ell \left( t,x^{u,\eta }(t),\mathbb {E} (x^{u,\eta }(t)),u(t)\right) \mathrm{d}t+\mathbb {E}\int \limits _{s}^{T}\ell \left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t)\right) \mathrm{d}t\nonumber \\&\quad +\,\mathbb {E}\int \limits _{\left[ s,T\right] }\Psi ^{v}(t)G(t)\mathrm{d}\left( \eta -\xi \right) (t), \end{aligned}$$
(4.10)

From (4.5) and (4.10), we get

$$\begin{aligned}&J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) -J^{s,\zeta }\left( v(\cdot ),\xi (\cdot )\right) \nonumber \\&\ge \mathbb {E}\int \limits _{s}^{T}(H\left( t,x^{u,\eta }(t),\mathbb {E} (x^{u,\eta }(t)),u(t),\Psi ^{v}(t),Q^{v}(t)\right) \nonumber \\&\quad -\,H\left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t),\Psi ^{v}(t),Q^{v}(t)\right) )\mathrm{d}t\nonumber \\&\quad -\,\mathbb {E}\int \limits _{s}^{T}(x^{u,\eta }(t)-x^{v,\xi }(t))\left[ H_{x}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right. \nonumber \\&\quad +\,\left. \mathbb {E}\left( H_{\tilde{x}}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right) \right] \mathrm{d}t\nonumber \\&\quad +\, \mathbb {E}\int \limits _{\left[ s,T\right] }(K(t)+\Psi ^{v}(t)G(t))\mathrm{d}\left( \eta -\xi \right) (t). \end{aligned}$$
(4.11)

By the convexity condition (4.2), it holds that

$$\begin{aligned}&H\left( t,x^{u,\eta }(t),\mathbb {E}(x^{u,\eta }(t)),u(t),\Psi ^{v}(t),Q^{v}(t)\right) \\&\quad -\,H\left( t,x^{v,\xi }(t),\mathbb {E}(x^{v,\xi }(t)),v(t),\Psi ^{v}(t),Q^{v}(t)\right) \\&\ge H_{x}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) (x^{u,\eta }(t)-x^{v,\xi }(t))\\&\quad +\,\mathbb {E}\left( H_{\tilde{x}}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) \right) (\mathbb {E(} x^{u,\eta }(t))-\mathbb {E(}x^{v,\xi }(t)))\\&\quad +\,H_{u}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) (u(t)-v(t)), \end{aligned}$$

Substituting this into (4.11), we obtain

$$\begin{aligned}&J^{s,\zeta }\left( u(\cdot ),\eta (\cdot )\right) -J^{s,\zeta }\left( v(\cdot ),\xi (\cdot )\right) \nonumber \\&\quad \ge \mathbb {E}\int \limits _{s}^{T}H_{u}\left( t,x^{v,\xi }(t),\mathbb {E}\left( x^{v,\xi }(t)\right) ,v(t),\Psi ^{v}(t),Q^{v}(t)\right) (u(t)-v(t))\mathrm{d}t\nonumber \\&\qquad +\,\mathbb {E}\int \limits _{\left[ s,T\right] }(K(t)+\Psi ^{v}(t)G(t))\mathrm{d}\left( \eta -\xi \right) (t). \end{aligned}$$
(4.12)

Finally, since \(\left( u(\cdot ),\eta (\cdot )\right) \) is an arbitrary element of \(\mathcal {U}_{1}\times \mathcal {U}_{2}\), the desired result (4.4) follows immediately by combining (4.3) and (4.12). This completes the proof of Theorem 4.1. \(\square \)

5 Application: Singular Mean-Field Linear Quadratic Control Problem

In this section, we consider an optimal singular stochastic control problem for linear MFSDEs. The optimal control is given in state-feedback form, involving both \(x(\cdot )\) and \(\mathbb {E}(x(\cdot ))\), via the solutions of Riccati ordinary differential equations.

We take \(T=1\), \(s=0\), \(\mathbb {A}_{1}=\left[ 0,1\right] \), \(\mathbb {A}_{2}=\left[ 0,1\right] \), and

$$\begin{aligned}&f\left( t,x(t),\mathbb {E}\left( x(t)\right) ,u(t)\right) =x(t)+\mathbb {E} \left( x(t)\right) +u(t),\\&\sigma \left( t,x(t),\mathbb {E}\left( x(t)\right) ,u(t)\right) =x(t)+\mathbb {E}\left( x(t)\right) +u(t),\\&\ell \left( t,x(t),\mathbb {E}\left( x(t)\right) ,u(t)\right) =\frac{1}{2}\left( x(t)^{2}+u(t)^{2}\right) ,\\&h\left( x(T),\mathbb {E}\left( x(T)\right) \right) =\frac{1}{2} x(T)^{2},\\&G(t)=1,\\&\eta (0)=0. \end{aligned}$$

We consider the following mean-field singular stochastic control problem:

Minimize

$$\begin{aligned} J\left( u(\cdot ),\eta (\cdot )\right)&= \frac{1}{2} \mathbb {E}\int \limits _{0}^{1}\left[ x(t)^{2}+u(t)^{2}\right] \mathrm{d}t\nonumber \\&\quad +\,\dfrac{1}{2}\mathbb {E}\left( x(1)^{2}\right) +\mathbb {E}\int \limits _{\left[ 0,1\right] }K(t)\mathrm{d}\eta (t), \end{aligned}$$
(5.1)

subject to

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}x(t)=\left[ x(t)\!+\!\mathbb {E}\left( x(t)\right) \!+\!u(t)\right] \mathrm{d}t+\left[ x(t)+ \mathbb {E}\left( x(t)\right) \!+\!u(t)\right] \mathrm{d}W(t)+\mathrm{d}\eta (t),\\ x(0)=\zeta . \end{array} \right. \qquad \end{aligned}$$
(5.2)
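For illustration only (this is not part of the original development), the mean-field dynamics (5.2) can be simulated by an Euler–Maruyama scheme in which \(\mathbb {E}(x(t))\) is replaced by the empirical mean of \(n\) interacting particles, in the spirit of the particle system recalled in Sect. 1. All numerical choices below (particle number, step count, initial state) are assumptions, and the uncontrolled case \(u\equiv 0\), \(\eta \equiv 0\) is taken for simplicity.

```python
import math
import random

# Sketch: Euler-Maruyama particle approximation of the mean-field SDE (5.2)
# with u = 0 and eta = 0. E(x(t)) is replaced by the empirical mean over
# n particles, each driven by an independent Brownian increment.
def simulate_mean_field(n=2000, steps=100, zeta=1.0, seed=0):
    random.seed(seed)
    dt = 1.0 / steps
    x = [zeta] * n                      # all particles start at x(0) = zeta
    for _ in range(steps):
        mean = sum(x) / n               # empirical surrogate for E(x(t))
        x = [
            xi + (xi + mean) * dt
               + (xi + mean) * random.gauss(0.0, math.sqrt(dt))
            for xi in x
        ]
    return sum(x) / n                   # Monte Carlo estimate of E(x(1))

estimate = simulate_mean_field()
```

With \(u\equiv 0\) the mean \(m(t)=\mathbb {E}(x(t))\) solves \(\dot{m}=2m\), so the estimate should be close to \(\zeta e^{2}\approx 7.39\), up to discretization and Monte Carlo error.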

For a given optimal control \(\left( u^{*}(\cdot ),\eta ^{*}(\cdot )\right) \), due to (2.3) the corresponding adjoint equation takes the form

$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}\Psi ^{*}(t)=-\left[ \Psi ^{*}(t)+\mathbb {E}(\Psi ^{*}(t))+Q^{*}(t)+\mathbb {E}(Q^{*}(t))+x^{*}(t)\right] \mathrm{d}t \\ \qquad \qquad \qquad +Q^{*}(t)\mathrm{d}W(t),\\ \Psi ^{*}(1)=x^{*}(1). \end{array} \right. \end{aligned}$$
(5.3)

The Hamiltonian function corresponding to the control problem (5.1)–(5.2) takes the form

$$\begin{aligned}&H\left( t,x(t),\mathbb {E}\left( x(t)\right) ,u(t),\Psi (t),Q(t)\right) \nonumber \\&=\left( x(t)+\mathbb {E}\left( x(t)\right) +u(t)\right) \Psi (t)+\left( x(t)+\mathbb {E}\left( x(t)\right) +u(t)\right) Q(t)\nonumber \\&\quad +\,\frac{1}{2}(x(t)^{2}+u(t)^{2}). \end{aligned}$$
(5.4)
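As a quick numerical sanity check (an illustration, not part of the paper's argument), one can verify by central finite differences that the Hamiltonian (5.4) has partial derivative \(H_{u}=\Psi +Q+u\); the sample point chosen below is an arbitrary assumption.

```python
# Sketch: finite-difference check of H_u for the Hamiltonian (5.4).
def H(x, xbar, u, Psi, Q):
    # H(t, x, E(x), u, Psi, Q) from (5.4); no explicit t-dependence here.
    return (x + xbar + u) * Psi + (x + xbar + u) * Q + 0.5 * (x**2 + u**2)

# Arbitrary (hypothetical) evaluation point:
x, xbar, u, Psi, Q = 0.7, 0.4, -0.2, 1.3, 0.5
eps = 1e-6
fd = (H(x, xbar, u + eps, Psi, Q) - H(x, xbar, u - eps, Psi, Q)) / (2 * eps)
exact = Psi + Q + u   # H_u; minimizing over u gives u* = -(Psi + Q)
```

Since \(H\) is quadratic in \(u\), the central difference agrees with \(\Psi +Q+u\) up to rounding error.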

By applying Theorem 4.1 and the fact that

$$\begin{aligned} H_{u}\left( t,x^{*}(t),\mathbb {E}\left( x^{*}(t)\right) ,u^{*}(t),\Psi ^{*}(t),Q^{*}(t)\right) =\Psi ^{*}(t)+Q^{*}(t)+u^{*}(t), \end{aligned}$$

we deduce that the optimal control is given by

$$\begin{aligned} (u^{*}(t),\eta ^{*}(t))=\left( -\Psi ^{*}(t)-Q^{*}(t),\eta ^{*}(t)\right) ,\quad t\in \left[ 0,1\right] . \end{aligned}$$
(5.5)

In order to solve the above equation explicitly, we try the adjoint process \( \Psi ^{*}(\cdot )\) in the form

$$\begin{aligned} \Psi ^{*}(t)=\Phi _{1}\left( t\right) x^{*}(t)+\Phi _{2}\left( t\right) \mathbb {E}\left( x^{*}(t)\right) . \end{aligned}$$
(5.6)

Substituting (5.6) into (5.3) and comparing the coefficients of \(x^{*}(t)\) and \(\mathbb {E}\left( x^{*}(t)\right) \), we can show that \(\Phi _{1}(t)\) and \(\Phi _{2}(t)\) are given by the following Riccati equations:

$$\begin{aligned} \left\{ \begin{array}{l} \dot{\Phi }_{1}\left( t\right) =-3\Phi _{1}\left( t\right) +4(1+\Phi _{1}(t))^{-1}\Phi _{1}^{2}(t)-1,\\ \Phi _{1}(1)=1,\\ \dot{\Phi }_{2}\left( t\right) =-4\Phi _{2}\left( t\right) -5\Phi _{1}\left( t\right) +(1+\Phi _{1}(t))^{-1}\left[ 5\Phi _{1}(t)+\Phi _{2}\left( t\right) \right] \\ \quad \quad \quad \qquad \times \left[ \Phi _{1}\left( t\right) +\Phi _{2}\left( t\right) \right] ,\\ \Phi _{2}(1)=0, \end{array} \right. \end{aligned}$$
(5.7)

from which we get

$$\begin{aligned} u^{*}(t)=-\frac{2\Phi _{1}(t)}{\left( 1+\Phi _{1}(t)\right) }x^{*}(t)-\frac{\left( \Phi _{1}(t)+\Phi _{2}(t)\right) }{\left( 1+\Phi _{1}(t)\right) }\mathbb {E}\left( x^{*}(t)\right) . \end{aligned}$$
(5.8)

Moreover, from Theorem 4.1, the singular part \(\eta ^{*}(\cdot )\) satisfies, for any \(\eta (\cdot )\in \mathcal {U}_{2}\),

$$\begin{aligned} \mathbb {E}\int \limits _{\left[ 0,1\right] }(K(t)+\Psi ^{*}(t))\mathrm{d}\left( \eta -\eta ^{*}\right) (t)\ge 0. \end{aligned}$$
(5.9)

Note that Eqs. (5.7) are Riccati ordinary differential equations, which admit one and only one solution (see also Yong [21] and Li [17]). As a particular case, if we define

$$\begin{aligned} \mathfrak {R}=\left\{ \left( w,t\right) \in \Omega \times \left[ 0,1\right] :K(t)+\Psi ^{*}(t)\ge 0\right\} , \end{aligned}$$

and let \(\eta (\cdot )\in \mathcal {U}_{2}\) such that

$$\begin{aligned} \mathrm{d}\eta (t)=\left\{ \begin{array}{l} 0\text { if }K(t)+\Psi ^{*}(t)\ge 0,\\ \mathrm{d}\eta ^{*}(t)\text { otherwise,} \end{array} \right. \end{aligned}$$
(5.10)
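On a discrete time grid, the construction (5.10) simply suppresses the increments of \(\eta ^{*}\) on the set \(\mathfrak {R}\). The following sketch illustrates this with synthetic (hypothetical) values of \(K(t)+\Psi ^{*}(t)\) and of the increments of a candidate \(\eta ^{*}\); none of the numbers come from the paper.

```python
# Sketch: the construction (5.10) on a discrete grid with synthetic data.
k_plus_psi = [0.5, -0.3, 0.2, -0.1, 0.4]   # hypothetical K(t_i) + Psi*(t_i)
d_eta_star = [0.1, 0.2, 0.0, 0.3, 0.1]     # hypothetical increments of eta*

# Rule (5.10): d_eta = 0 where K + Psi* >= 0 (the set R), else d_eta = d_eta*.
d_eta = [0.0 if v >= 0.0 else d for v, d in zip(k_plus_psi, d_eta_star)]

# The left-hand side of (5.9) for this pair reduces to a sum over R with
# increments -d_eta_star, mirroring the display following (5.10):
value = sum(v * (d - ds) for v, d, ds in zip(k_plus_psi, d_eta, d_eta_star))
# Here value < 0, so this hypothetical eta* violates (5.9): it puts mass on
# the set R where K + Psi* > 0, i.e. it fails condition (5.11).
```

This illustrates why (5.11) is forced: the value is nonnegative only when \(\eta ^{*}\) charges no mass on \(\mathfrak {R}\).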

then a simple computation shows that

$$\begin{aligned} 0&\le \mathbb {E}\int \limits _{\left[ 0,1\right] }(K(t)+\Psi ^{*}(t))\mathrm{d}\left( \eta -\eta ^{*}\right) (t) \\&= \mathbb {E}\int \limits _{\left[ 0,1\right] }(K(t)+\Psi ^{*}(t))\chi _{ \mathfrak {R}}\mathrm{d}\left( -\eta ^{*}\right) (t), \end{aligned}$$

which implies that \(\eta ^{*}(\cdot )\) satisfies

$$\begin{aligned} \mathbb {E}\int \limits _{\left[ 0,1\right] }(K(t)+\Psi ^{*}(t))\chi _{\mathfrak {R} }\mathrm{d}\eta ^{*}(t)=0. \end{aligned}$$
(5.11)

5.1 Some Discussions

In Sects. 3 and 4 we established necessary and sufficient conditions, respectively, for optimal singular control of dynamics that evolve according to a nonlinear controlled MFSDE; the cost functional is also of mean-field type. In Sect. 5 a linear quadratic singular control problem of mean-field type was solved explicitly, where the optimal control was given in feedback form by means of Riccati differential equations.

Case 1:

If the coefficients \(f,\sigma \), and \(\ell \) of the underlying diffusion process and the cost functional do not explicitly depend on the expected value, then Theorem 3.1 reduces to Theorem 8 proved in [4].

Case 2:

If we assume \(G(\cdot )\equiv K(\cdot )\equiv 0\), our necessary and sufficient conditions (Theorems 3.1 and 4.1) reduce to Theorems 3.1 and 3.3, respectively, proved in Li [17].

Case 3:

If the assumptions of Cases 1 and 2 both hold, then our necessary conditions of optimality (Theorem 3.1) reduce to Theorem 2.1 proved in Bensoussan [5].