Abstract
We study a mean-field-type optimal stochastic control problem for systems governed by mean-field controlled forward–backward stochastic differential equations with jump processes, in which the coefficients depend on the marginal law of the state process through its expected value. The control variable is allowed to enter both the diffusion and jump coefficients. Moreover, the cost functional is also of mean-field type. Necessary conditions for optimality for these systems, in the form of a maximum principle, are established by means of convex perturbation techniques. As an application, a time-inconsistent mean-variance portfolio selection problem mixed with a recursive utility functional optimization is discussed to illustrate the theoretical results.
1 Introduction
In this paper, we consider stochastic optimal control for systems governed by nonlinear mean-field controlled forward–backward stochastic differential equations with Poisson jump processes (FBSDEJs) of the form
where \(f,\sigma ,c,g,h\) are given maps and the initial condition \(\zeta \) is an \(\mathcal {F}_{0}\)-measurable random variable. The mean-field FBSDEJs (1.1), called McKean–Vlasov systems, are obtained as the mean-square limit of an interacting particle system of the form
where \((W^{j}(\cdot ){:}j{\ge } 1)\) is a collection of independent Brownian motions and \((N^{j}(\cdot ,\cdot ):j\ge 1)\) is a collection of independent Poisson martingale measures. Note that mean-field FBSDEJs (1.1) occur naturally in the probabilistic analysis of financial optimization problems and in the optimal control of dynamics of McKean–Vlasov type. Moreover, such mean-field approaches play an important role in various fields of economics, finance, physics, chemistry and game theory.
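The mean-square limit above can be illustrated numerically. The sketch below propagates \(n\) interacting particles whose empirical mean plays the role of \(E(x(t))\); the concrete coefficients (mean-reverting drift \(-(x-Ex)\), diffusion \(0.3\), jump coefficient \(0.1\)) are illustrative choices for this demonstration, not the paper's unspecified \(f,\sigma ,c\).

```python
import numpy as np

def simulate_particles(n=2000, steps=200, T=1.0, seed=0):
    """Euler scheme for a toy interacting particle system whose mean-square
    limit is a McKean-Vlasov SDE with jumps (illustrative coefficients)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    lam = 1.0                       # jump intensity mu(Theta)
    x = np.ones(n)                  # deterministic initial condition zeta = 1
    for _ in range(steps):
        m = x.mean()                # empirical law enters through its mean
        dW = rng.normal(0.0, np.sqrt(dt), n)
        dN = rng.poisson(lam * dt, n)
        # drift f = -(x - m), diffusion sigma = 0.3, jump coefficient c = 0.1,
        # the jumps being driven by the compensated counts dN - lam*dt
        x = x - (x - m) * dt + 0.3 * dW + 0.1 * (dN - lam * dt)
    return x
```

Since the drift is mean-reverting to the empirical mean and the jumps are compensated, the mean of the particle cloud stays near its initial value, up to \(O(n^{-1/2})\) fluctuations.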
The expected cost to be minimized over the class of admissible controls has the form
where \(\ell ,\phi ,\varphi \) are appropriate functions. This cost functional is also of mean-field type, as the functions \(\ell , \phi ,\varphi \) depend on the marginal law of the state process through its expected value. It is worth mentioning that, since the cost functional \(J\) is possibly a nonlinear function of an expected value, it stands in contrast to the standard formulation of a control problem. This leads to a so-called time-inconsistent control problem, for which Bellman’s dynamic programming principle does not hold: one cannot apply the law of iterated expectations to the cost functional.
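The failure of the law of iterated expectations for nonlinear functions of an expectation admits a quick Monte Carlo sketch; the random variables below (\(X=Y+Z\) with \(Y,Z\) independent standard normals) are illustrative and not part of the model.

```python
import numpy as np

# X = Y + Z with Y (intermediate information) and Z independent N(0,1),
# so E[X | Y] = Y.  For a nonlinear g(m) = m^2, as in mean-field costs
# involving (E x(T))^2, evaluating g at the global expectation and
# iterating it through the conditional expectation give different answers.
rng = np.random.default_rng(1)
y = rng.normal(size=1_000_000)
z = rng.normal(size=1_000_000)
x = y + z

g = lambda m: m ** 2

total = g(x.mean())        # g(E[X]):        close to 0
iterated = np.mean(g(y))   # E[g(E[X|Y])]:   close to E[Y^2] = 1
# For linear g the tower property makes these equal; the gap for nonlinear g
# is exactly why Bellman's dynamic programming fails for such costs.
```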
An admissible control \(u(\cdot )\) is an \(\mathcal {F}_{t}\)-adapted and square-integrable process with values in a nonempty convex subset \( \mathcal {A}\) of \(\mathcal {\mathfrak {R}}.\) We denote by \(\mathcal {U}\left( \left[ 0,T\right] \right) \) the set of all admissible controls. Any admissible control \(u^{*}(\cdot )\in \mathcal {U}\left( \left[ 0,T\right] \right) \) satisfying
is called an optimal control.
The mean-field stochastic differential equation was introduced by Kac [1] as a stochastic model for the Vlasov kinetic equation of plasma, and the study of this model was initiated by McKean [2]. Since then, many authors have made contributions to mean-field stochastic problems and their applications; see, for instance, [3–23]. In a recent paper, mean-field games for large-population multi-agent systems with Markov jump parameters have been investigated in Wang and Zhang [3]. Decentralized tracking-type games for large-population multi-agent systems with mean-field coupling have been studied in Li and Zhang [4]. The discrete-time indefinite mean-field linear-quadratic optimal control problem has been investigated in Ni et al. [5]. Discrete-time mean-field stochastic linear-quadratic optimal control problems with applications have been derived by Elliott et al. [6]. In Buckdahn, Li and Peng [7], a general notion of mean-field BSDE associated with a mean-field SDE was obtained in a natural way as the limit of a high-dimensional system of FBSDEs governed by a \(d\)-dimensional Brownian motion and influenced by the positions of a large number of other particles. In Buckdahn et al. [8], a general maximum principle was introduced for a class of stochastic control problems involving SDEs of mean-field type. Sufficient conditions of optimality for mean-field SDEs have been established by Shi [9]. In Meyer-Brandis, Øksendal and Zhou [10], a stochastic maximum principle of optimality for systems governed by controlled Itô–Lévy processes of mean-field type was proved using Malliavin calculus. Mean-field singular stochastic control problems have been investigated in Hafayed and Abbas [11]. More interestingly, a mean-field-type stochastic maximum principle for optimal singular control has been studied in Hafayed [12], in which convex perturbations are used for both the absolutely continuous and the singular components.
The maximum principle for optimal control of mean-field FBSDEJs with uncontrolled diffusion has been studied in Hafayed [13]. Necessary and sufficient conditions for near-optimality of mean-field jump diffusions with applications have been derived by Hafayed et al. [14]. Singular optimal control for mean-field forward–backward stochastic systems and applications to finance have been investigated in Hafayed [15]. Second-order necessary conditions for optimal control of mean-field jump diffusions have been obtained by Hafayed and Abbas [16]. Under partial information, a mean-field-type stochastic maximum principle for optimal control has been investigated in Wang, Zhang and Zhang [17]. Under the condition that the control domain is convex, Andersson and Djehiche [18] and Li [19] investigated problems for two types of more general controlled SDEs and cost functionals, respectively. The linear-quadratic optimal control problem for mean-field SDEs has been studied by Yong [20] and Shi [9]. The mean-field stochastic maximum principle for jump diffusions with applications has been investigated in Shen and Siu [21]. Recently, a maximum principle for mean-field jump-diffusion stochastic delay differential equations and its application to finance have been derived by Shen, Meng and Shi [22]. Mean-field optimal control for backward stochastic evolution equations in Hilbert spaces has been investigated in Xu and Wu [23].
The optimal control problems for stochastic systems described by Brownian motions and Poisson jumps have been investigated by many authors, including [24, 25, 27–30]. Necessary and sufficient conditions of optimality for FBSDEJs were obtained by Shi and Wu [24]. A general maximum principle for fully coupled FBSDEJs has been obtained in Shi [25], where the author generalized Yong’s maximum principle [26] to the jump case.
In this paper, our main goal is to derive a maximum principle for optimal stochastic control of mean-field FBSDEJs, where the coefficients depend not only on the state process but also on its marginal law through its expected value. The cost functional is also of mean-field type. Our mean-field control problem is not a simple extension from the mathematical point of view; it also provides interesting models for many applications, such as mathematical finance (mean-variance portfolio selection problems) and optimal control of mean-field systems. The proof of our result is based on the convex perturbation method. The necessary conditions are described in terms of two adjoint processes, corresponding to the mean-field forward and backward components with jumps, and a maximum condition on the Hamiltonian. Finally, as an application to finance, a mean-variance portfolio selection problem mixed with a recursive utility optimization is given, where an explicit expression of the optimal portfolio selection strategy is obtained in feedback form, involving both the state process and its marginal distribution, via the solutions of Riccati ordinary differential equations. To streamline the presentation of this paper, we only study the one-dimensional case.
The rest of this paper is structured as follows. In Sect. 2, we formulate the mean-field stochastic control problem and describe the assumptions of the model. Section 3 is devoted to proving our mean-field stochastic maximum principle. As an illustration of these results, a mean-variance portfolio selection problem mixed with recursive utility (time-inconsistent solution) is discussed in Sect. 4.
2 Problem Statement and Preliminaries
We consider a stochastic optimal control problem of mean-field type of the following kind. Let \(T>0\) be a fixed time horizon and \((\varOmega , \mathcal {F},\left( \mathcal {F}_{t}\right) _{t\in \left[ 0,T\right] },P)\) be a fixed filtered probability space equipped with a \(P\)-completed right-continuous filtration, on which a one-dimensional Brownian motion \(W=\left( W(t)\right) _{t\in \left[ 0,T\right] }\) is defined. Let \(\eta \) be a homogeneous \(\left( \mathcal {F}_{t}\right) \)-Poisson point process independent of \(W\). We denote by \(\widetilde{N}(\mathrm{d}\theta , \mathrm{d}t)\) the random counting measure induced by \(\eta \), defined on \(\varTheta \times \mathfrak {R}_{+}\), where \(\varTheta \) is a fixed nonempty subset of \(\mathfrak {R}\) with its Borel \(\sigma \)-field \(\mathcal {B}\left( \varTheta \right) \). Further, let \(\mu \left( \mathrm{d}\theta \right) \) be the local characteristic measure of \(\eta \), i.e., \(\mu \left( \mathrm{d}\theta \right) \) is a \(\sigma \)-finite measure on \(\left( \varTheta , \mathcal {B}\left( \varTheta \right) \right) \) with \(\mu \left( \varTheta \right) <+\infty \). We then define \(N(\mathrm{d}\theta ,\mathrm{d}t):=\widetilde{N}(\mathrm{d}\theta , \mathrm{d}t)-\mu \left( \mathrm{d}\theta \right) \mathrm{d}t,\) where \(N\left( \cdot ,\cdot \right) \) is a Poisson martingale measure on \(\mathcal {B}\left( \varTheta \right) \times \mathcal {B} \left( \mathfrak {R}_{+}\right) \) with local characteristics \(\mu \left( \mathrm{d}\theta \right) \mathrm{d}t.\) We assume that \(\left( \mathcal {F}_{t}\right) _{t\in \left[ 0,T \right] }\) is the \(P\)-augmentation of the natural filtration \((\mathcal {F} _{t}^{(W,N)})_{t\in \left[ 0,T\right] }\) defined as follows:
where \(\mathcal {G}_{0}\) denotes the totality of \(P\)-null sets, and \(\sigma _{1}\vee \sigma _{2}\) denotes the \(\sigma \)-field generated by \(\sigma _{1}\cup \sigma _{2}.\)
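The martingale property of the compensated measure \(N\) can be checked by simulation. In the hedged sketch below, the marks are taken uniform on \(\varTheta =[0,1]\) with total mass \(\mu (\varTheta )=\lambda \) and the integrand \(\psi (\theta )=\theta ^{2}\) is an illustrative choice, not taken from the paper.

```python
import numpy as np

def compensated_integral(T=1.0, lam=2.0, n_paths=50_000, seed=2):
    """Monte Carlo check that int_0^T int_Theta psi(theta) N(dtheta, ds) has
    mean zero when N is the compensated measure Ntilde - mu(dtheta)dt.
    Illustrative choice: marks uniform on Theta = [0,1], mu(Theta) = lam."""
    rng = np.random.default_rng(seed)
    psi = lambda theta: theta ** 2
    compensator = lam * T / 3.0      # int_0^T int_0^1 theta^2 lam dtheta dt
    counts = rng.poisson(lam * T, n_paths)   # number of jumps on [0, T]
    out = np.empty(n_paths)
    for i, k in enumerate(counts):
        out[i] = psi(rng.uniform(0.0, 1.0, k)).sum() - compensator
    return out
```

Averaging over paths, the sample mean of the compensated integral vanishes up to Monte Carlo error, consistent with \(N\) being a martingale measure.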
In the sequel, \(L_{\mathcal {F}}^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \) denotes the Hilbert space of \(\mathcal {F}_{t}\)-adapted processes \( (X(t))_{t\in [0,T]}\) such that \(E\int _{0}^{T}\left| X(t)\right| ^{2}\mathrm{d}t<+\infty \), and \(\mathcal {M}_{\mathcal {F}}^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \) denotes the Hilbert space of \(\mathcal {F} _{t}\)-predictable processes \((\psi \left( t,\theta \right) )_{t\in [0,T]}\) defined on \(\left[ 0,T\right] \times \varTheta \) such that \( E\int _{0}^{T}\int _{\varTheta }\left| \psi \left( t,\theta \right) \right| ^{2}\mu (\mathrm{d}\theta )\mathrm{d}t<+\infty .\) In what follows, \(C\) represents a generic constant, which can differ from line to line. For simplicity of notation, we still write \(f_{x}(t)=\frac{\partial f}{\partial x}(t,x^{*}(\cdot ),E(x^{*}(\cdot )),u^{*}(\cdot )),\) etc.
Throughout this paper, we also assume that the functions \(f,\sigma :\left[ 0,T\right] \times \mathfrak {R}\times \mathfrak {R}\times \mathcal {A}\rightarrow \mathfrak {R}, c:\left[ 0,T\right] \times \mathfrak {R}\times \mathcal {A}\times \varTheta \rightarrow \mathfrak {R}, g,\ell :\left[ 0,T\right] \times \mathfrak {R}\times \mathfrak {R}\times \mathfrak {R}\times \mathfrak {R}\times \mathfrak {R}\times \mathfrak {R}\times \mathfrak {R}\times \mathcal {A}\rightarrow \mathfrak {R}\) and \(h,\phi ,\varphi :\mathfrak {R}\times \mathfrak {R}\rightarrow \mathfrak {R}\) satisfy the following standing assumptions:
Assumption (H1) 1. The functions \(f,\sigma \) and \(c\) are globally Lipschitz in \((x,\widetilde{x},u)\), and \(g\) is globally Lipschitz in \((x, \widetilde{x},y,\widetilde{y},z,\widetilde{z},r,u)\).
2. The functions \(f,\sigma ,\ell ,c,g,h,\phi \, \hbox {and}\, \varphi \) are continuously differentiable in their variables including \((x,\widetilde{x},y, \widetilde{y},z,\widetilde{z},r,u)\).
Assumption (H2) 1. The derivatives of \(f,\sigma ,g,\phi \) with respect to their variables including \((x,\widetilde{x},y, \widetilde{y},z,\widetilde{z},r,u)\) are bounded, and
2. The derivatives \(b_{\rho }\) are bounded by \(C(1+\left| x\right| +\left| \widetilde{x} \right| +\left| y\right| +\left| \widetilde{y}\right| +\left| z\right| +\left| \widetilde{z}\right| +\left| r\right| +\left| u\right| )\) for \(\rho =x,\widetilde{x},y, \widetilde{y},z,\widetilde{z},r,u\) and \(b=f,\sigma ,g,c,\ell .\) Moreover, \( \varphi _{y},\varphi _{\widetilde{y}}\) are bounded by \(C\left( 1+\left| y\right| +\left| \widetilde{y}\right| \right) \) and \(h_{x},h_{ \widetilde{x}}\) are bounded by \(C\left( 1+\left| x\right| +\left| \widetilde{x}\right| \right) .\)
3. For all \(t\in \left[ 0,T\right] , f(t,0,0,0),g(t,0,0,0,0,0,0,0,0)\in L_{\mathcal {F}}^{2}\left( \left[ 0,T \right] ;\mathfrak {R}\right) , \sigma (t,0, 0, 0) \ \in L_{\mathcal { F}}^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\times \mathfrak {R}\right) \) and \( c(t,0,0,0,\cdot )\in \mathcal {M}_{\mathcal {F}}^{2}\left( \left[ 0,T\right] ; \mathfrak {R}\right) .\)
Under assumptions (H1) and (H2), the FBSDEJ (1.1) has a unique solution \(\left( x(t),y(t),z(t),r(t,\cdot )\right) \in L_{\mathcal {F} }^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \times L_{\mathcal {F} }^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \times L_{\mathcal {F} }^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \times \mathcal {M}_{\mathcal {F} }^{2}\left( \left[ 0,T\right] ;\mathfrak {R}\right) \) (see [21, Theorem 3.1] for mean-field BSDEs with jumps).
For any \(u(\cdot )\in \mathcal {U}\left( \left[ 0,T\right] \right) \) with its corresponding state trajectories \((x\left( \cdot \right) ,y\left( \cdot \right) , \ z\left( \cdot \right) , r(\cdot ,\cdot ))\) we introduce the following adjoint equations:
Note that the first adjoint equation (backward) corresponding to the forward component turns out to be a linear mean-field backward SDE with jumps, and the second adjoint equation (forward) corresponding to the backward component turns out to be a linear mean-field (forward) SDE with jump processes. Further, we define the Hamiltonian function
associated with the stochastic control problems (1.1) and (1.2) as follows
If we denote by
then the adjoint equation (2.1) can be rewritten in the form of the following stochastic Hamiltonian system
Thanks to Lemma 3.1 in Shen and Siu [21], under assumptions (H1) and (H2), the adjoint equations (2.1) admit a unique solution \(\left( \varPsi (t),Q(t),K(t),R(t,\cdot )\right) \) such that
Moreover, since the derivatives of \(f,\sigma ,c,g,h,\varphi ,\phi \) with respect to \(x,\widetilde{x},y,\widetilde{y},z, \widetilde{z}, r\) are bounded, we deduce from standard arguments that there exists a constant \(C>0\) such that
3 Mean-Field Type Necessary Conditions for Optimal Control of FBSDEJs
In this section, we establish a set of necessary conditions of Pontryagin’s type for a stochastic control to be optimal when the system evolves according to nonlinear controlled mean-field FBSDEJs. Convex perturbation techniques are applied to prove our mean-field stochastic maximum principle.
The following theorem constitutes the main contribution of this paper.
Let (\(x^{*}(\cdot ),y^{*}(\cdot ),z^{*}(\cdot ),r^{*}(\cdot ,\cdot ))\) be the trajectory of the mean-field FBSDEJ-(1.1) corresponding to the optimal control \(u^{*}(\cdot ),\) and \(\left( \varPsi ^{*}(\cdot ),Q^{*}(\cdot ),K^{*}(\cdot ),R^{*}(\cdot ,\cdot )\right) \) be the solution of adjoint equation (2.1) corresponding to \(u^{*}(\cdot )\).
Theorem 3.1
(Maximum principle for mean-field FBSDEJs) Let Assumptions (H1) and (H2) hold, and let \(\left( u^{*}(\cdot ),x^{*}(\cdot ),y^{*}(\cdot ),z^{*}(\cdot ),r^{*}(\cdot ,\cdot )\right) \) be an optimal solution of the mean-field control problem (1.1)–(1.2). Then the maximum principle holds; that is, for all \(u\in \mathcal {A}\),
where \(\lambda ^{*}(t,\theta )=(x^{*}(t),E(x^{*}(t)),y^{*}(t),E(y^{*}(t)),z^{*}(t),E(z^{*}(t)),r^{*}(t,\theta ))\) and \(\varLambda ^{*}(t,\theta )=(\varPsi ^{*}(t),Q^{*}(t),K^{*}(t),R^{*}(t,\theta )).\)
We derive the variational inequality (3.1) in several steps, starting from the fact that
Since the control domain \(\mathcal {A}\) is convex, for any given admissible control \(u(\cdot )\in \mathcal {U}(\left[ 0,T\right] )\), the following perturbed control process
is also an element of \(\mathcal {U}(\left[ 0,T\right] ).\)
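The role of such a convex perturbation can be sketched on a toy deterministic control problem (the quadratic cost and the target \(c=1.5\) below are hypothetical, chosen only for illustration): at an optimum on the boundary of \(\mathcal {A}\), the one-sided difference quotient of the cost in any admissible direction is nonnegative, which is the first-order condition behind the variational inequality (3.1).

```python
import numpy as np

# Toy convex control problem on A = [0, 1]:
#     J(u) = int_0^1 (u(t) - c)^2 dt,  c = 1.5,
# so the optimal control is the boundary point u*(t) = 1.  For the convex
# perturbation u^eps = u* + eps (u - u*), the difference quotient
# (J(u^eps) - J(u*)) / eps stays nonnegative as eps -> 0+.
c = 1.5
grid = np.linspace(0.0, 1.0, 1001)

def J(u):
    return float(np.mean((u - c) ** 2))   # Riemann approximation on [0, 1]

u_star = np.ones_like(grid)
rng = np.random.default_rng(3)
u = rng.uniform(0.0, 1.0, grid.size)      # an arbitrary admissible control

quots = []
for eps in (1e-2, 1e-3, 1e-4):
    u_eps = u_star + eps * (u - u_star)   # remains A-valued: A is convex
    quots.append((J(u_eps) - J(u_star)) / eps)
```

As \(\varepsilon \rightarrow 0^{+}\) the quotients converge to the Gâteaux derivative of \(J\) at \(u^{*}\) in the direction \(u-u^{*}\), which is nonnegative at the optimum.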
Let \(\lambda ^{\varepsilon }(t,\theta )=\left( x^{\varepsilon }(t),y^{\varepsilon }(t),z^{\varepsilon }(t),r^{\varepsilon }(t,\theta )\right) \) be the solution of state equation (1.1) and \(\varLambda ^{\varepsilon }(t,\theta )=(\varPsi ^{\varepsilon }\left( t\right) ,Q^{\varepsilon }\left( t\right) ,K^{\varepsilon }\left( t\right) , R^{\varepsilon }\left( t,\theta \right) )\) be the solution of the adjoint equation (2.1) corresponding to perturbed control \(u^{\varepsilon }(\cdot ).\)
Variational equations. We introduce the following variational equations which have a mean-field type. Let \(\left( x_{1}^{\varepsilon }(\cdot ),y_{1}^{\varepsilon }(\cdot ),z_{1}^{\varepsilon }(\cdot ),r_{1}^{\varepsilon }(\cdot ,\cdot )\right) \) be the solution of the following forward–backward stochastic system described by Brownian motions and Poisson jumps of mean-field type
Duality relations. Our first lemma below establishes the duality relations between \(\varPsi ^{*}(t), x_{1}^{\varepsilon }(t)\) and \(K^{*}(t), y_{1}^{\varepsilon }(t)\). This lemma is crucial for the proof of Theorem 3.1.
Lemma 3.2
We have
similarly, we get
and
Proof
By applying the integration by parts formula for jump processes (see Lemma 6.1) to \(\varPsi ^{*}(t)x_{1}^{\varepsilon }(t)\), we get
A simple computation shows that
and
From (3.7), we get
The duality relation (3.4) follows immediately from combining (3.8)–(3.10) and (3.7).
Let us turn to the second duality relation (3.5). By applying the integration by parts formula for jump processes (Lemma 6.1) to \(K^{*}(t)y_{1}^{\varepsilon }(t),\) we get
From (3.4), we obtain
From (2.1), we obtain
and
Since
the duality relation (3.5) follows immediately by combining (3.12)–(3.14) and (3.11). Let us turn to (3.6). Combining (3.4) and (3.5) we get
Using (2.2), we obtain
which implies that
This completes the proof of (3.6). \(\square \)
The second Lemma presents the estimates of the perturbed state process \((x^{\varepsilon }(\cdot ), y^{\varepsilon }(\cdot ),z^{\varepsilon }(\cdot ),r^{\varepsilon }(\cdot ,\cdot )).\)
Lemma 3.3
Under assumptions (H1) and (H2), the following estimates hold
and
Let us also point out that the above estimates (3.15)–(3.17) can be proved using arguments similar to those developed in [21, Lemmas 4.2 and 4.3] and [24, Lemma 2.1]; we therefore omit their proofs.
Proof
of (3.18). We set
and
From Eq. (1.1) we have
We denote
By Taylor’s expansion and a simple computation, we show that
where
and
We proceed as in Andersson and Djehiche [18, pp. 7–8] to get
Applying similar estimates to the third term, with the help of Proposition 3.2 in the Appendix of Bouchard and Elie [27], we have
From (3.26) and (3.27) we obtain
We proceed to estimate the last terms in (3.18). First, from (3.19) and since \(\widehat{y}^{\varepsilon }(t)=\frac{1}{\varepsilon } \left[ y^{\varepsilon }(t)-y^{*}(t)\right] -y_{1}^{\varepsilon }(t)\), we get
and
Applying Taylor’s expansion, we get
Finally, using arguments similar to those developed in [24, pp. 222–224], the desired result follows. This completes the proof of (3.18). \(\square \)
Lemma 3.4
Let assumptions (H1) and (H2) hold. The following variational inequality holds
Proof
From (3.2) we have
By applying Taylor’s expansion and Lemma 3.3, we have
From estimate (3.18), we get
Similarly, we have
and
The desired result follows by combining (3.29)–(3.32). This completes the proof of Lemma 3.4. \(\square \)
Proof of Theorem 3.1
The desired result follows immediately by combining (3.6) with Lemmas 3.3 and 3.4. \(\square \)
4 Application: Mean-Variance Portfolio Selection Problem Mixed with a Recursive Utility Functional, Time-Inconsistent Solution
The mean-variance portfolio selection theory, first proposed by Markowitz [31], is a milestone in mathematical finance and has laid the foundation of modern finance theory. Using a sufficient maximum principle, the authors in [30] gave an expression for the optimal portfolio selection in a jump-diffusion market with time-consistent solutions. The near-optimal consumption-investment problem has been discussed in Hafayed, Veverka and Abbas [28]. The continuous-time mean-variance portfolio selection problem has been studied in Zhou and Li [32]. The mean-variance portfolio selection problem in which the state is driven by an SDE (without jump terms) has been studied in [18]. Optimal dividend, harvesting rate, and optimal portfolio for systems governed by jump diffusion processes have been investigated in [10]. A mean-variance portfolio selection problem mixed with a recursive utility functional has been studied by Shi and Wu [24], under the condition that
where \(c\) is a given real positive number.
In this section, we will apply our mean-field stochastic maximum principle of optimality to study a mean-variance portfolio selection problem mixed with a recursive utility functional (time-inconsistent solutions) in a financial market, and we will derive an explicit expression for the optimal portfolio selection strategy. This optimal control is given in state feedback form involving both \(x(\cdot )\) and \(E(x(\cdot ))\).
Suppose that we are given a financial market consisting of two investment possibilities:
1.
Risk-free security (Bond price). The first asset is a risk-free security whose price \(P_{0}(t)\) evolves according to the ordinary differential equation
$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}P_{0}\left( t\right) =\rho (t)P_{0}\left( t\right) \mathrm{d}t,\ \ t\in \left[ 0,T \right] ,\\ P_{0}\left( 0\right) >0, \end{array} \right. \end{aligned}$$(4.1)where \(\rho \left( \cdot \right) :[0,T]\rightarrow \mathcal {\mathfrak {R}} _{+}\) is a locally bounded and continuous deterministic function.
2.
Risky security (Stock price). A risky security (e.g., a stock), whose price \(P_{1}\left( t\right) \) at time \(t\) is given by
$$\begin{aligned} \left\{ \begin{array}{l} \mathrm{d}P_{1}\left( t\right) =P_{1}\left( t_{-}\right) \big [\varsigma (t)\mathrm{d}t+G(t)\mathrm{d}W(t)+\int _{\varTheta }\xi \left( t,\theta \right) N\left( \mathrm{d}\theta , \mathrm{d}t\right) \big ], \\ P_{1}\left( 0\right) >0,\ \ \ t\in \left[ 0,T\right] . \end{array} \right. \end{aligned}$$(4.2)
Assumptions. In order to ensure that \(P_{1}\left( t\right) >0\) for all \(t\in \left[ 0,T\right] \), we assume
1.
The functions \(\varsigma (\cdot ):[0,T]\rightarrow \mathcal {\mathfrak {R}}\) and \(G(\cdot ):[0,T]\rightarrow \mathcal {\mathfrak {R}}\) are bounded deterministic functions such that
$$\begin{aligned} \varsigma (t),G(t)\ne 0,\ \ \ \varsigma (t)>\rho (t),\quad \forall t\in [0,T], \end{aligned}$$
2.
\(\xi \left( t,\theta \right) >-1\) for \(\mu \)-almost all \(\theta \in \varTheta \) and all \(t\in [0,T],\)
3.
\(\int _{\varTheta }\xi ^{2}\left( t,\theta \right) \mu (\mathrm{d}\theta )\) is bounded.
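The role of the condition \(\xi >-1\) in keeping prices positive can be sketched by exact simulation of (4.2) in the special case of constant coefficients and a single jump size (illustrative values, with the compensated-measure convention of Sect. 2):

```python
import numpy as np

def stock_paths(P0=1.0, varsigma=0.08, G=0.2, xi=-0.5, lam=3.0,
                T=1.0, n_paths=10_000, seed=4):
    """Exact simulation of the risky asset (4.2) for constant coefficients
    and a single jump size xi > -1 (illustrative special case), with N the
    compensated measure of Sect. 2:
        P1(T) = P0 exp((varsigma - G^2/2 - lam*xi) T + G W(T)) (1+xi)^{N_T}.
    The factor (1 + xi)^{N_T} with xi > -1 is what keeps prices positive."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, np.sqrt(T), n_paths)     # terminal Brownian values
    NT = rng.poisson(lam * T, n_paths)           # terminal jump counts
    drift = (varsigma - 0.5 * G ** 2 - lam * xi) * T
    return P0 * np.exp(drift + G * W) * (1.0 + xi) ** NT
```

Under this convention one also has \(E[P_{1}(T)]=P_{0}e^{\varsigma T}\), which the simulation reproduces up to Monte Carlo error.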
Portfolio strategy, the price dynamics with recursive utility process. A portfolio is an \(\mathcal {F}_{t}\)-predictable process \(e\left( t\right) =(e_{1}(t),e_{2}(t))\) giving the numbers of units of the risk-free and the risky security held at time \(t\). Let \(\pi (t)=e_{2}\left( t\right) P_{1}\left( t\right) \) denote the amount invested in the risky security. We call the control process \(\pi (\cdot )\) a portfolio strategy.
Let \(x^{\pi }(0)=\zeta >0\) be an initial wealth. By combining (4.1) and (4.2), we introduce the wealth process \(x^{\pi }(\cdot )\) and the recursive utility process \(y^{\pi }(\cdot )\) corresponding to \(\pi \left( \cdot \right) \in \mathcal {U}\left( \left[ 0,T\right] \right) \) as solution of the following FBSDEJs
Mean-variance portfolio selection problem mixed with a recursive utility functional: In this section, the objective is to apply our maximum principle to study the mean-variance portfolio selection problem mixed with a recursive utility functional maximization.
The cost functional, to be minimized, is given by
By a simple computation, we can show that
where the wealth process \(x^{\pi }(\cdot )\) and the recursive utility process \(y^{\pi }(\cdot )\) corresponding to \(\pi \left( \cdot \right) \in \mathcal {U}\left( \left[ 0,T\right] \right) \) are given by FBSDEJ (4.3). We note that the cost functional (4.5) leads to a time-inconsistent control problem. Let \(\mathcal {A}\) be a compact convex subset of \(\mathfrak {R}\). We denote by \(\mathcal {U}\left( \left[ 0,T\right] \right) \) the set of admissible \(\mathcal {F}_{t}\)-predictable portfolio strategies \(\pi \left( \cdot \right) \) valued in \(\mathcal {A}\). The optimal solution is denoted by \((x^{*}(\cdot ),\pi ^{*}(\cdot )).\) The Hamiltonian functional (2.2) takes the form
According to the maximum condition (3.1) of Theorem 3.1, and since \(\pi ^{*}(\cdot )\) is optimal, we immediately get
The adjoint equation (2.1) becomes
In order to solve equation (4.7) and to find an explicit expression for the optimal portfolio strategy \(\pi ^{*}(\cdot )\), we conjecture a process \(\varPsi ^{*}(t)\) of the form:
where \(A_{1}(\cdot ),A_{2}(\cdot )\) and \(A_{3}(\cdot )\) are deterministic differentiable functions (see Shi and Wu [24], Shi [9], Framstad, Øksendal and Sulem [30], Li [19], Yong [20] for other models of this conjecture). From the last equation in (4.7), which is a simple ordinary differential equation (ODE for short), we immediately get
Note that, from (4.3), we get
Applying Itô’s formula (see Lemma 6.1, Appendix) to (4.8), in view of SDE (4.3), we get
which implies that
where \(A_{1}^{\prime }(t), A_{2}^{\prime }(t)\) and \(A_{3}^{\prime }(t)\) denote the derivatives with respect to \(t\).
Next, comparing (4.10) with (4.7), we get
By matching the terminal condition of \(\varPsi ^{*}(t)\) in (4.10), it is reasonable to set
Combining (4.11) and (4.8), we deduce that \(A_{1}(\cdot ),A_{2}(\cdot )\) and \(A_{3}(\cdot )\) satisfy the following ODEs:
By solving the first two ordinary differential equations in (4.15), we obtain
Using the integrating factor method for the third equation in (4.15), we get
where the integrating factor is \(\chi (t)=\exp \left\{ \int _{t}^{T}\rho (s)\mathrm{d}s\right\} ,\ \ \chi (T)=1.\)
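The integrating factor computation can be checked numerically on a generic terminal-value linear ODE \(y'(t)=\rho (t)y(t)+q(t)\), \(y(T)=y_{T}\), which is the structure behind the third equation in (4.15); the constant coefficients below are placeholders, not the paper's data. With \(\chi (t)=\exp \{\int _{t}^{T}\rho (s)\mathrm{d}s\}\), \(\chi (T)=1\), one has \(y(t)=\big (y_{T}-\int _{t}^{T}\chi (s)q(s)\mathrm{d}s\big )/\chi (t)\).

```python
import numpy as np

# Integrating-factor method for y'(t) = rho(t) y(t) + q(t), y(T) = yT, with
# chi(t) = exp(int_t^T rho(s) ds), chi(T) = 1, giving
#     y(t) = ( yT - int_t^T chi(s) q(s) ds ) / chi(t).
# Constant rho = 0.05 and q = 1 are placeholder coefficients.
a, q_const, T, yT = 0.05, 1.0, 1.0, 2.0
ts = np.linspace(0.0, T, 2001)
dt = ts[1] - ts[0]
chi = np.exp(a * (T - ts))                  # integrating factor, chi(T) = 1

tail = np.zeros_like(ts)                    # tail[i] ~ int_{t_i}^T chi q ds
tail[:-1] = np.cumsum((chi[1:] * q_const * dt)[::-1])[::-1]
y = (yT - tail) / chi

# closed form for constant coefficients: y(t) = (yT + q/a) e^{-a(T-t)} - q/a
y_exact = (yT + q_const / a) * np.exp(-a * (T - ts)) - q_const / a
```

The grid solution agrees with the closed form to the order of the Riemann-sum error, confirming the integrating-factor formula.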
Combining (4.6), (4.9), (4.12) and (4.13) and denoting
we get
and
Finally, we give the explicit optimal portfolio selection strategy in the state feedback form involving both \(x^{*}(\cdot )\) and \(E(x^{*}(\cdot )).\)
Theorem 4.1
The optimal portfolio strategy \(\pi ^{*}(t)\) of our mean-variance portfolio selection problems (4.3)–(4.5) is given in feedback form by
and
where \(A_{1}(t), A_{3}(t)\), and \(\varGamma (t)\) are given by (4.16), (4.17) and (4.18), respectively.
5 Conclusions
In this paper, we have established necessary conditions for optimal stochastic control of mean-field forward–backward stochastic differential equations with Poisson jumps (FBSDEJs). A time-inconsistent mean-variance portfolio selection problem mixed with a recursive utility functional optimization has been studied to illustrate our theoretical results. We would like to point out that the general maximum principle for fully coupled mean-field FBSDEJs has not been addressed here; we will work on this interesting issue in future research.
References
Kac, M.: Foundations of kinetic theory. In: Berkeley Symposium on Mathematical Statistics and Probability, vol. 3, 171–197 (1956)
McKean, H.P.: A class of Markov processes associated with nonlinear parabolic equations. Proc. Natl. Acad. Sci. USA 56, 1907–1911 (1966)
Wang, B.C., Zhang, J.F.: Mean-field games for large-population multiagent systems with Markov jump parameters. SIAM J. Control Optim. 50(4), 2308–2334 (2012)
Li, T., Zhang, J.F.: Adaptive mean field games for large population coupled ARX systems with unknown coupling strength. Dyn. Games Appl. 3, 489–507 (2013)
Ni, Y.H., Zhang, J.F., Li, X.: Indefinite mean-field stochastic linear-quadratic optimal control. IEEE Trans. Autom. Control (2014). doi:10.1109/TAC.2014.2385253
Elliott, R.J., Li, X., Ni, Y.H.: Discrete time mean-field stochastic linear-quadratic optimal control problems. Automatica 49(11), 3222–3233 (2013)
Buckdahn, R., Li, J., Peng, S.: Mean-field backward stochastic differential equations and related partial differential equations. Stoch. Proc. Appl. 119, 3133–3154 (2009)
Buckdahn, R., Djehiche, B., Li, J.: A general stochastic maximum principle for SDEs of mean-field type. Appl. Math. Optim. 64, 197–216 (2011)
Shi, J.: Sufficient conditions of optimality for mean-field stochastic control problems. In: 12th International Conference on Control, Automation, Robotics & Vision (ICARCV 2012), Guangzhou, 5–7 Dec, pp. 747–752 (2012)
Meyer-Brandis, T., Øksendal, B., Zhou, X.Y.: A mean-field stochastic maximum principle via Malliavin calculus. Stochastics 84, 643–666 (2012)
Hafayed, M., Abbas, S.: On near-optimal mean-field stochastic singular controls: necessary and sufficient conditions for near-optimality. J. Optim. Theory Appl. 160, 778–808 (2014)
Hafayed, M.: A mean-field necessary and sufficient conditions for optimal singular stochastic control. Commun. Math. Stat. 1(4), 417–435 (2014)
Hafayed, M.: A mean-field maximum principle for optimal control of forward–backward stochastic differential equations with Poisson jump processes. Int. J. Dyn. Control 1(4), 300–315 (2013)
Hafayed, M., Abba, A., Abbas, S.: On mean-field stochastic maximum principle for near-optimal controls for Poisson jump diffusion with applications. Int. J. Dyn. Control 2, 262–284 (2014)
Hafayed, M.: Singular mean-field optimal control for forward–backward stochastic systems and applications to finance. Int. J. Dyn. Control 2(4), 542–554 (2014)
Hafayed, M., Abbas, S.: A general maximum principle for stochastic differential equations of mean-field type with jump processes. arXiv:1301.7327v4 (2013)
Wang, G., Zhang, C., Zhang, W.: Stochastic maximum principle for mean-field type optimal control under partial information. IEEE Trans. Autom. Control 59(2), 522–528 (2014)
Andersson, D., Djehiche, B.: A maximum principle for SDEs of mean-field type. Appl. Math. Optim. 63, 341–356 (2011)
Li, J.: Stochastic maximum principle in the mean-field controls. Automatica 48, 366–373 (2012)
Yong, J.: A linear-quadratic optimal control problem for mean-field stochastic differential equations. SIAM J. Control Optim. 51(4), 2809–2838 (2013)
Shen, Y., Siu, T.K.: The maximum principle for a jump-diffusion mean-field model and its application to the mean-variance problem. Nonlinear Anal. 86, 58–73 (2013)
Shen, Y., Meng, Q., Shi, P.: Maximum principle for mean-field jump-diffusion stochastic delay differential equations and its application to finance. Automatica 50, 1565–1579 (2014)
Xu, R., Wu, T.: Mean-field backward stochastic evolution equations in Hilbert spaces and optimal control for BSPDEs. Abstr. Appl. Anal. 2014, Article ID 839467, 15 pp. (2014)
Shi, J., Wu, Z.: Maximum principle for forward–backward stochastic control system with random jumps and application to finance. J. Syst. Sci. Complex 23, 219–231 (2010)
Shi, J.: Necessary conditions for optimal control of forward–backward stochastic systems with random jumps. Int. J. Stoch. Anal. 2012, Article ID 258674, 50 pp. (2012)
Yong, J.: Optimality variational principle for controlled forward–backward stochastic differential equations with mixed initial-terminal conditions. SIAM J. Control Optim. 48(6), 4119–4156 (2010)
Bouchard, B., Elie, R.: Discrete time approximation of decoupled forward–backward SDE with jumps. Stoch. Proc. Appl. 118(1), 53–75 (2008)
Hafayed, M., Veverka, P., Abbas, S.: On maximum principle of near-optimality for diffusions with jumps, with application to consumption-investment problem. Diff. Equ. Dyn. Syst. 20(2), 111–125 (2012)
Øksendal, B., Sulem, A.: Maximum principles for optimal control of forward–backward stochastic differential equations with jumps. SIAM J. Control Optim. 48(5), 2845–2976 (2009)
Framstad, N.C., Øksendal, B., Sulem, A.: Sufficient stochastic maximum principle for the optimal control of jump diffusions and applications to finance. J. Optim. Theory Appl. 121, 77–98 (2004)
Markowitz, H.: Portfolio selection. J. Finance 7, 77–91 (1952)
Zhou, X.Y., Li, D.: Continuous time mean-variance portfolio selection: a stochastic LQ framework. Appl. Math. Optim. 42, 19–33 (2000)
Acknowledgments
The authors would like to thank the editor, the associate editors, and anonymous referees for their constructive corrections and valuable suggestions that improved the manuscript. The first author was partially supported by Algerian CNEPRU Project Grant B01420130137, 2014-2016.
Appendix
The following result gives a special case of the Itô formula for mean-field jump diffusions.
Lemma 6.1
(Integration by parts formula for mean-field jump diffusions) Suppose that the processes \(x_{1}(t)\) and \(x_{2}(t)\) are given, for \(i=1,2\) and \(t\in \left[ 0,T\right] \), by
Then we get
The proof follows by applying a method similar to that of [30, Lemma 2.1].
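A discrete sanity check of this product rule: on a time grid, the identity \(\Delta (x_{1}x_{2})=x_{1-}\Delta x_{2}+x_{2-}\Delta x_{1}+\Delta x_{1}\Delta x_{2}\) holds exactly path by path, and telescoping it recovers the integration by parts formula. The coefficients below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
steps, lam = 1000, 2.0
dt = 1.0 / steps

def jump_path(mu, sig, c):
    """Grid path of a toy jump diffusion driven by the compensated counts."""
    dW = rng.normal(0.0, np.sqrt(dt), steps)
    dN = rng.poisson(lam * dt, steps)
    dx = mu * dt + sig * dW + c * (dN - lam * dt)
    return np.concatenate(([1.0], 1.0 + np.cumsum(dx)))

x1 = jump_path(0.10, 0.2, 0.3)
x2 = jump_path(-0.05, 0.1, 0.5)
d1, d2 = np.diff(x1), np.diff(x2)

# Telescoping the exact identity d(x1 x2) = x1_- dx2 + x2_- dx1 + dx1 dx2:
lhs = x1[-1] * x2[-1] - x1[0] * x2[0]
rhs = np.sum(x1[:-1] * d2 + x2[:-1] * d1 + d1 * d2)
```

The cross term \(\Delta x_{1}\Delta x_{2}\) is the grid analogue of the quadratic covariation (Brownian plus jump) terms in Lemma 6.1.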
Hafayed, M., Tabet, M. & Boukaf, S. Mean-Field Maximum Principle for Optimal Control of Forward–Backward Stochastic Systems with Jumps and its Application to Mean-Variance Portfolio Problem. Commun. Math. Stat. 3, 163–186 (2015). https://doi.org/10.1007/s40304-015-0054-1
Keywords
- Mean-field forward–backward stochastic differential equation with jumps
- Optimal stochastic control
- Mean-field maximum principle
- Mean-variance portfolio selection with recursive utility functional
- Time-inconsistent control problem