Abstract
The stochastic maximum principle (SMP) gives necessary conditions for optimality in a stochastic optimal control problem. We summarize well-known results on the stochastic maximum principle in finite-dimensional state spaces, as well as some recent developments in infinite-dimensional state spaces.
Introduction
The problem of finding necessary conditions for optimality for a stochastic optimal control problem with a finite-dimensional state equation has been well studied since the pioneering work of Bismut (1976, 1978). In particular, Bismut introduced linear backward stochastic differential equations (BSDEs), which have become an active domain of research since the seminal paper of Pardoux and Peng (1990) concerning nonlinear BSDEs.
The first results on the SMP concerned only stochastic systems where the control domain is convex or the diffusion coefficient does not contain the control variable. In this case, only the first-order expansion is needed. This kind of SMP was developed by Bismut (1976, 1978), Kushner (1972), and Haussmann (1986). It is important to note that Bismut (1978) introduced a linear BSDE to represent the first-order adjoint process.
Peng (1990) made a breakthrough by establishing the SMP for the general stochastic optimal control problem, where the control domain need not be convex and the diffusion coefficient can contain the control variable. He solved this general case by introducing a second-order expansion and a second-order adjoint BSDE. We refer to the book Yong and Zhou (1999) for an account of the theory of the SMP in finite-dimensional spaces; Peng's SMP is described in the next section.
Although the finite-dimensional problem was solved in complete generality more than 20 years ago, the infinite-dimensional case still has important open issues, both regarding the generality of the abstract model and regarding its applicability to systems modeled by stochastic partial differential equations (SPDEs). The last section is devoted to recent developments of the SMP in infinite-dimensional spaces.
Statement of SMP
Formulation of Problem
Let \((\Omega,\mathcal{F}, \mathbb{P})\) be a complete probability space, on which an m-dimensional Brownian motion W is given. Let \(\{\mathcal{F}_{t}\}_{t\geq 0}\) be the natural completed filtration of W.
We consider the following stochastic controlled system:
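In the standard formulation (cf. Yong and Zhou 1999),
$$\displaystyle{ dx(t) = b(x(t),u(t))\,dt +\sigma (x(t),u(t))\,dW(t),\quad t \in [0,T],\qquad x(0) = x_{0} \in \mathbb{R}^{n}, }$$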
with the cost functional
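$$\displaystyle{ J(u(\cdot )) = \mathbb{E}\Big[\int _{0}^{T}f(x(t),u(t))\,dt + h(x(T))\Big]. }$$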
In the above, b, σ, f, h are given functions with appropriate dimensions. (U, d) is a separable metric space.
We define
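$$\displaystyle{ \mathcal{U} =\big\{ u : [0,T] \times \Omega \mapsto U\ \big\vert \ u\ \text{is}\ \{\mathcal{F}_{t}\}\text{-adapted}\big\}, }$$
the set of admissible controls.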
The optimal control problem is: minimize J(u(⋅ )) over \(\mathcal{U}\).
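Although the entry is purely theoretical, the cost functional J can be illustrated numerically. The sketch below is hypothetical and not part of the entry: it picks scalar coefficients b(x,u) = −ux, σ(x,u) = 0.2x, f(x,u) = u², h(x) = x², restricts attention to constant controls, and estimates J(u(⋅ )) by Euler–Maruyama simulation with Monte Carlo averaging before scanning a grid of constant controls.

```python
import numpy as np

def cost(u_const, x0=1.0, T=1.0, n_steps=200, n_paths=4000, seed=0):
    """Monte Carlo estimate of J(u) = E[int_0^T f(x,u) dt + h(x(T))]
    for the illustrative scalar system dx = -u*x dt + 0.2*x dW with
    f = u^2, h = x^2, and a constant control u_const (hypothetical model)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, x0)
    running = 0.0
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        running += u_const**2 * dt                    # integrate f(x,u) = u^2
        x = x + (-u_const * x) * dt + 0.2 * x * dW    # Euler-Maruyama step
    return float(running + np.mean(x**2))             # add E[h(x(T))]

# crude minimization of J over a grid of constant controls
grid = np.linspace(0.0, 2.0, 21)
J = [cost(u) for u in grid]
best = float(grid[np.argmin(J)])
```

For this toy model, a moderate positive control lowers the cost relative to u = 0 (driving the state toward 0 cheapens the terminal penalty x(T)² more than the running cost u² accrues), so the grid search returns an interior minimizer.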
Any \(\bar{u} \in \mathcal{U}\) satisfying
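$$\displaystyle{ J(\bar{u}(\cdot )) =\inf _{u(\cdot )\in \mathcal{U}}J(u(\cdot )) }$$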
is called an optimal control. The corresponding \(\bar{x}\) is called an optimal state process (or trajectory), and \((\bar{x},\bar{u})\) is called an optimal pair. In this section, we assume the following standard hypothesis:
Hypothesis 1
1. The functions \(b : \mathbb{R}^{n} \times U\mapsto \mathbb{R}^{n}\), \(\sigma = (\sigma ^{1},\cdots,\sigma ^{m}) : \mathbb{R}^{n} \times U\mapsto \mathbb{R}^{n\times m}\), \(f : \mathbb{R}^{n} \times U\mapsto \mathbb{R}\), and \(h : \mathbb{R}^{n}\mapsto \mathbb{R}\) are measurable functions.
2. For \(\varphi = b,\sigma ^{j},j = 1,\cdots,m,f\), the functions \(x\mapsto \varphi (x,u)\) and \(x\mapsto h(x)\) are \(C^{2}\), with derivatives denoted \(\varphi _{x}\) and \(\varphi _{xx}\) (respectively, \(h_{x}\) and \(h_{xx}\)), which are continuous functions of (x,u).
3. There exists a constant K > 0 such that
$$\displaystyle{\vert \varphi _{x}\vert + \vert \varphi _{xx}\vert + \vert h_{x}\vert + \vert h_{xx}\vert \leq K,}$$
and
$$\displaystyle{\vert \varphi \vert + \vert h\vert \leq K(1 + \vert x\vert + \vert u\vert ).}$$
Adjoint Equations
Let us first introduce the following backward stochastic differential equations (BSDEs).
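Following the formulation of Peng (1990) and Yong and Zhou (1999), the first-order adjoint equation is
$$\displaystyle{ \begin{cases} dp(t) = -\Big[b_{x}(\bar{x}(t),\bar{u}(t))^{\top }p(t) +\sum _{j=1}^{m}\sigma _{x}^{j}(\bar{x}(t),\bar{u}(t))^{\top }q^{j}(t) - f_{x}(\bar{x}(t),\bar{u}(t))\Big]dt + q(t)\,dW(t), \\ p(T) = -h_{x}(\bar{x}(T)). \end{cases} \qquad (5) }$$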
The solution (p, q) to the above BSDE (first-order BSDE) is called the first-order adjoint process.
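The second-order adjoint equation (again in the convention of Yong and Zhou 1999) is
$$\displaystyle{ \begin{cases} dP(t) = -\Big[b_{x}^{\top }P(t) + P(t)b_{x} +\sum _{j=1}^{m}\sigma _{x}^{j\,\top }P(t)\sigma _{x}^{j} +\sum _{j=1}^{m}\big(\sigma _{x}^{j\,\top }Q^{j}(t) + Q^{j}(t)\sigma _{x}^{j}\big) + H_{xx}(\bar{x}(t),\bar{u}(t),p(t),q(t))\Big]dt +\sum _{j=1}^{m}Q^{j}(t)\,dW^{j}(t), \\ P(T) = -h_{xx}(\bar{x}(T)), \end{cases} \qquad (6) }$$
in which the derivatives of b and σ are evaluated at \((\bar{x}(t),\bar{u}(t))\),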
where the Hamiltonian H is defined by
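$$\displaystyle{ H(x,u,p,q) =\langle p,b(x,u)\rangle +\mathrm{tr}\big(q^{\top }\sigma (x,u)\big) - f(x,u),\qquad (x,u,p,q) \in \mathbb{R}^{n} \times U \times \mathbb{R}^{n} \times \mathbb{R}^{n\times m}. }$$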
The solution (P, Q) to the above BSDE (second-order BSDE) is called the second-order adjoint process.
Stochastic Maximum Principle
Let us now state the stochastic maximum principle.
Theorem 1
Let \((\bar{x},\bar{u})\) be an optimal pair of the above problem. Then there exist a unique pair (p,q) satisfying (5) and a unique pair (P,Q) satisfying (6), and the following maximum condition holds:
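$$\displaystyle{ H(\bar{x}(t),\bar{u}(t),p(t),q(t)) - H(\bar{x}(t),u,p(t),q(t)) -\frac{1}{2}\mathrm{tr}\Big(\big[\sigma (\bar{x}(t),u) -\sigma (\bar{x}(t),\bar{u}(t))\big]^{\top }P(t)\big[\sigma (\bar{x}(t),u) -\sigma (\bar{x}(t),\bar{u}(t))\big]\Big) \geq 0, }$$
for all \(u \in U\), a.e. \(t \in [0,T]\), \(\mathbb{P}\)-a.s. (in the sign convention of Yong and Zhou 1999).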
SMP in Infinite-Dimensional Space
The problem of finding necessary conditions for optimality for a stochastic optimal control problem with an infinite-dimensional state equation, along the lines of the Pontryagin maximum principle, was already addressed in the early 1980s in the pioneering paper Bensoussan (1983).
Whereas the Pontryagin maximum principle for infinite-dimensional stochastic control problems is a well-known result as long as the control domain is convex (or the diffusion does not depend on the control; see Bensoussan 1983; Hu and Peng 1990), in the general case (that is, when the control domain need not be convex and the diffusion coefficient can contain a control variable), existing results are limited to abstract evolution equations under assumptions that are not satisfied by the large majority of concrete SPDEs.
The technical obstruction is related to the fact that (as pointed out in Peng 1990) if the control domain is not convex, the optimal control has to be perturbed by the so-called spike variation. If the control enters the diffusion, the irregularity in time of the Brownian trajectories then makes it necessary to take into account a second variation process. Thus, the stochastic maximum principle has to involve an adjoint process for the second variation. In the finite-dimensional case, such a process can be characterized as the solution of a matrix-valued backward stochastic differential equation (BSDE), while in the infinite-dimensional case, the process naturally lives in a non-Hilbertian space of operators and its characterization is much more difficult. Moreover, the applicability of the abstract results to concrete controlled SPDEs is another delicate step, due to the specific difficulties they involve, such as the lack of regularity of Nemytskii-type coefficients in \(L^{p}\) spaces.
Concerning results on the infinite-dimensional stochastic Pontryagin maximum principle, as already mentioned, Bensoussan (1983) and Hu and Peng (1990) treat the case of a diffusion independent of the control (with the difference that Hu and Peng (1990) achieve a complete characterization of the adjoint to the first variation as the unique mild solution of a suitable BSDE).
The paper Tang and Li (1994) is the first in which the general case is addressed, with, in addition, a general class of noises, possibly with jumps. The adjoint process of the second variation \((P_{t})_{t\in [0,T]}\) is characterized as the solution of a BSDE in the (Hilbertian) space of Hilbert–Schmidt operators. This forces one to assume a very strong regularity on the abstract state equation and the cost functional, which prevents application of the results in Tang and Li (1994) to SPDEs.
In the papers by Fuhrman et al. (2012, 2013), the state equation is formulated only in a semi-abstract way, in order, on the one hand, to cope with all the difficulties carried by the concrete nonlinearities and, on the other, to take advantage of the regularizing properties of the leading elliptic operator.
Recently, in Lü and Zhang (2012), \(P_{t}\) was characterized as a "transposition solution" of a backward stochastic evolution equation in \(\mathcal{L}(L^{2}(\mathcal{O}))\), with coefficients required to be twice Fréchet differentiable as operators in \(L^{2}(\mathcal{O})\). Finally, even more recently, in Du and Meng (2012, 2013), the process \(P_{t}\) is characterized in a way similar to Fuhrman et al. (2012, 2013): roughly speaking, as a suitable stochastic bilinear form. As is the case in Lü and Zhang (2012), the regularity assumptions on the coefficients in Du and Meng (2012, 2013) are too restrictive to apply the results directly to controlled SPDEs.
Bibliography
Bensoussan A (1983) Stochastic maximum principle for distributed parameter systems. J Frankl Inst 315(5–6):387–406
Bismut JM (1976) Linear quadratic optimal stochastic control with random coefficients. SIAM J Control Optim 14(3):419–444
Bismut JM (1978) An introductory approach to duality in optimal stochastic control. SIAM Rev 20(1):62–78
Du K, Meng Q (2012) Stochastic maximum principle for infinite dimensional control systems. arXiv:1208.0529
Du K, Meng Q (2013) A maximum principle for optimal control of stochastic evolution equations. SIAM J Control Optim 51(4):4343–4362
Fuhrman M, Hu Y, Tessitore G (2012) Stochastic maximum principle for optimal control of SPDEs. C R Math Acad Sci Paris 350(13–14):683–688
Fuhrman M, Hu Y, Tessitore G (2013) Stochastic maximum principle for optimal control of SPDEs. Appl Math Optim 68(2):181–217
Haussmann UG (1986) A stochastic maximum principle for optimal control of diffusions. Pitman research notes in mathematics series, vol 151. Longman Scientific & Technical, Harlow/Wiley, New York
Hu Y, Peng S (1990) Maximum principle for semilinear stochastic evolution control systems. Stoch Stoch Rep 33(3–4):159–180
Kushner HJ (1972) Necessary conditions for continuous parameter stochastic optimization problems. SIAM J Control 10:550–565
Lü Q, Zhang X (2012) General Pontryagin-type stochastic maximum principle and backward stochastic evolution equations in infinite dimensions. arXiv:1204.3275
Pardoux E, Peng S (1990) Adapted solution of a backward stochastic differential equation. Syst Control Lett 14(1):55–61
Peng S (1990) A general stochastic maximum principle for optimal control problems. SIAM J Control Optim 28(4):966–979
Tang S, Li X (1994) Maximum principle for optimal control of distributed parameter stochastic systems with random jumps. In: Markus L, Elworthy KD, Everitt WN, Lee EB (eds) Differential equations, dynamical systems, and control science. Lecture notes in pure and applied mathematics, vol 152. Dekker, New York, pp 867–890
Yong J, Zhou XY (1999) Stochastic controls: Hamiltonian systems and HJB equations. Applications of mathematics, vol 43. Springer, New York
© 2015 Springer-Verlag London
Cite this entry
Hu, Y. (2015). Stochastic Maximum Principle. In: Baillieul, J., Samad, T. (eds) Encyclopedia of Systems and Control. Springer, London. https://doi.org/10.1007/978-1-4471-5058-9_229
Print ISBN: 978-1-4471-5057-2
Online ISBN: 978-1-4471-5058-9