1 Introduction

Linear quadratic (LQ, for short) problems constitute an extremely important class of optimal control problems. They are widely encountered in many fields, such as engineering, economics, and biology, and also play an essential role in the study of general optimal control problems. LQ problems have been extensively investigated since the early work of Bellman et al. [3], Kalman [13], and Letov [16]; however, very few studies actually involve constraints on both the state and control variables. There is no doubt that solving an LQ problem with constraints is a much more challenging and interesting task than solving one without, and that developing a deeper understanding of constrained LQ problems, as well as efficient algorithms for solving them, will have a significant impact in a number of applications.

The aim of this paper is to study a class of constrained LQ optimal control problems whose main features are that the state end-points are fixed and that there are integral quadratic constraints. To be precise, consider the controlled linear system on a finite horizon \([t,T]\):

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}(s)&=A(s)X(s)+B(s)u(s), \quad s\in [t,T], \\ X(t)&=x. \end{aligned}\right. \end{aligned}$$
(1.1)

A control \(u(\cdot )\) is called admissible if \(u(\cdot )\in L^2(t,T;\mathbb {R}^m)\equiv {{\mathcal {U}}}[t,T]\), the space of all \(\mathbb {R}^m\)-valued functions that are square-integrable on \([t,T]\). Assuming the system (1.1) is completely controllable on \([t,T]\), we know that for each initial state x and each target y, there exist admissible controls \(u(\cdot )\) giving \(X(T)=y\). For \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\), we denote the corresponding solution of (1.1) by \(X(\cdot \,;t,x,u(\cdot ))\) and define

$$\begin{aligned} {{\mathcal {U}}}(t,x,y)=\big \{u:[t,T]\rightarrow \mathbb {R}^m~|~u(\cdot )\in {{\mathcal {U}}}[t,T] \text{ and } X(T;t,x,u(\cdot ))=y\big \}. \end{aligned}$$

For any \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\) and any \(u(\cdot )\in {{\mathcal {U}}}(t,x,y)\), the associated cost (\(i=0\)) and constraint functionals (\(i=1,\ldots ,k\)) are given by

$$\begin{aligned} J_i(t,x,y;u(\cdot ))=\int _t^T \Big [ \langle Q_i(s)X(s),X(s)\rangle +\langle R_i(s)u(s),u(s)\rangle \Big ]ds, \end{aligned}$$
(1.2)

where \(Q_i(\cdot )\), \(R_i(\cdot )\), \(i=0,1,\ldots ,k\), are pointwise symmetric positive semi-definite matrix-valued functions of proper dimensions. Now given constants \(c_1,\ldots ,c_k>0\), the constrained LQ optimal control problem considered in this paper can be stated as follows:

Problem (CLQ). For any given initial pair \((t,x)\in [0,T)\times \mathbb {R}^n\) and any given target \(y\in \mathbb {R}^n\), find an admissible control \(u^*(\cdot )\) such that the cost functional \(J_0(t,x,y;u(\cdot ))\) is minimized over \({{\mathcal {U}}}[t,T]\), subject to the terminal state and functional constraints

$$\begin{aligned} X(T;t,x,u(\cdot ))=y, \quad J_i(t,x,y;u(\cdot ))\leqslant c_i;\qquad i=1,\ldots ,k. \end{aligned}$$
(1.3)

Any admissible control \(u(\cdot )\) satisfying the constraints (1.3) is called a feasible control (w.r.t. \((t,x,y)\)), and it is called strictly feasible (w.r.t. \((t,x,y)\)) if the inequalities in (1.3) are strict. A feasible control is called optimal (w.r.t. \((t,x,y)\)) if it solves Problem (CLQ) for the initial pair \((t,x)\) and the target y. The infimum

$$\begin{aligned} V(t,x,y)\triangleq \inf \{J_0(t,x,y;u(\cdot )): u(\cdot ) \text{ is feasible w.r.t. } (t,x,y)\} \end{aligned}$$

is called the value function of Problem (CLQ).

The study of LQ optimal control problems has a long history that can be traced back to the work of Bellman et al. [3] in 1958, Kalman [13] in 1960, and Letov [16] in 1961. Since then, many researchers have made contributions to such kind of problems and applications; see, for example, Geerts and Hautus [9], Jurdjevic [11], Jurdjevic and Kogan [12], Willems et al. [23], and Yakubovich [25]. For a thorough study of unconstrained LQ problems, we further refer the reader to the classical books of Anderson and Moore [1, 2], Lee and Markus [15], Wonham [24], Yong and Zhou [26], and the survey paper of Willems [22].

One of the elegant features of the LQ theory is that the optimal control can be explicitly represented in a state feedback form, through the solution to the celebrated Riccati equation. Hence, the LQ problem can be reduced to that of solving the Riccati equation. Generally, there are three approaches to deriving the Riccati equation: the maximum principle, dynamic programming, and the completion-of-squares technique. What essentially makes these approaches successful, besides the special LQ structure, is that the problem is unconstrained. If there are state and control constraints, the whole Riccati approach may break down.

However, many applications of optimal control theory are constrained problems. A typical example is flight planning, in which the terminal state (destination) is fixed. Flight planners normally wish to minimize flight cost through the appropriate choice of route, height, and speed, and by loading the minimum necessary fuel on board. To ensure that the aircraft can safely reach its destination within a given time, strict performance specifications must be adhered to in all flying conditions, which can be expressed in the form of integral quadratic constraints. Other applications can be found in the problem of controlling certain space structures [21] and in portfolio selection [10]. There have been some attempts at attacking constrained LQ control problems; see, for example, [5,6,7,8, 17, 18]. However, none of these works and their associated analyses actually involve constraints on both the state and control variables. Therefore, there is a need for the development and analysis of efficient solution techniques for constrained LQ control problems.

The main purpose of this paper is to give a complete solution to the LQ problem with fixed terminal states and integral quadratic constraints. The principal method for solving the problem is a combination of duality theory and approximation techniques. We first treat the constrained LQ problem as a convex optimization problem. By Lagrangian duality, it turns out that the optimal control can be derived by solving an LQ control problem with only a terminal state constraint, together with an optimal parameter selection problem. We then approximate the reduced LQ problem, whose terminal state is fixed, by a sequence of standard LQ problems with penalized terminal states. This leads to the existence and uniqueness of a solution to the Riccati equation with infinite terminal value. With the solutions of the Riccati equations, we are able to calculate the gradient of the cost functional of the optimal parameter selection problem, and therefore the optimal control is obtained, which is a target-dependent feedback of the current state.

The rest of the paper is organized as follows. Section 2 collects some preliminaries. Among other things, we establish the unique solvability of Problem (CLQ). In Sect. 3, we present the main results of the paper (with their proofs deferred to Sects. 5 and 6). In Sect. 4, using duality theory, we reduce Problem (CLQ) to a parameterized LQ problem with only one constraint on the terminal state, then approximate it by a sequence of unconstrained LQ problems with penalized terminal states. The existence and uniqueness theorem for the Riccati equation with infinite terminal value is proved in Sect. 5. Section 6 is devoted to the proof of the main result Theorem 3.4. Some examples are presented in Sect. 7 to illustrate the results obtained.

2 Preliminaries

Throughout this paper, we will denote by \(M^\top \) the transpose of a matrix M and by \(\mathrm{tr}\,(M)\) the trace of M. Let \(\mathbb {R}^{n\times m}\) be the Euclidean space consisting of \((n\times m)\) real matrices and let \(\mathbb {R}^n=\mathbb {R}^{n\times 1}\). The inner product in \(\mathbb {R}^{n\times m}\) is denoted by \(\langle M,N\rangle \), where \(M,N\in \mathbb {R}^{n\times m}\), so that \(\langle M,N\rangle =\mathrm{tr}\,(M^\top N)\). This induces the Frobenius norm \(|M|=\sqrt{\mathrm{tr}\,(M^\top M)}\). Denote by \(\mathbb {S}^n\) the space of all symmetric \((n\times n)\) real matrices, and by \(\mathbb {S}^n_+\) the space of all symmetric positive definite \((n\times n)\) real matrices. For \(\mathbb {S}^n\)-valued functions M and N, if \(M-N\) is positive (respectively, positive semi-) definite a.e., we write \(M>N\) (respectively, \(M\geqslant N\)), and if there exists a \(\delta >0\) such that \(M-N\geqslant \delta I\) a.e., we write \(M\gg N\). Let \({{\mathcal {I}}}\) be an interval and \(\mathbb {H}\) a Euclidean space. We shall denote by \(C({{\mathcal {I}}};\mathbb {H})\) the space of all \(\mathbb {H}\)-valued continuous functions on \({{\mathcal {I}}}\), and by \(L^p({{\mathcal {I}}};\mathbb {H})\) \((1\leqslant p\leqslant \infty )\) the space of all \(\mathbb {H}\)-valued functions that are pth power Lebesgue integrable on \({{\mathcal {I}}}\).

Throughout this paper, we impose the following assumption:

(H1) The matrices appearing in (1.1) and (1.2) satisfy

$$\begin{aligned} \left\{ \begin{aligned}&A(\cdot )\in L^1(0,T;\mathbb {R}^{n\times n}),&\quad B(\cdot )\in L^2(0,T;\mathbb {R}^{n\times m}), \\&Q_i(\cdot )\in L^1(0,T;\mathbb {S}^n),&\quad Q_i(\cdot )\geqslant 0, \\&R_i(\cdot )\in L^\infty (0,T;\mathbb {S}^m),&\quad R_i(\cdot )\geqslant 0, \quad R_0(\cdot )\gg 0. \end{aligned}\right. \end{aligned}$$

Consider the controlled ordinary differential system

$$\begin{aligned} \dot{X}(s)=A(s)X(s)+B(s)u(s), \end{aligned}$$
(2.1)

which we briefly denote by \([A,B]\). For \(0\leqslant t_0<t_1\leqslant T\), we denote \({{\mathcal {U}}}[t_0,t_1]\equiv L^2(t_0,t_1;\mathbb {R}^m)\). Clearly, under (H1), for any initial pair \((t_0,x)\) and any \(u(\cdot )\in {{\mathcal {U}}}[t_0,t_1]\), Eq. (2.1) admits a unique solution \(X(\cdot )\equiv X(\cdot \,;t_0,x,u(\cdot ))\in C([t_0,t_1];\mathbb {R}^n)\). We now introduce the following definition.

Definition 2.1

System \([A,B]\) is called completely controllable on \([t_0,t_1]\), if for any \(x,y\in \mathbb {R}^n\) there exists a \(u(\cdot )\in {{\mathcal {U}}}[t_0,t_1]\) such that

$$\begin{aligned} X(t_1;t_0,x,u(\cdot ))=y. \end{aligned}$$

System \([A,B]\) is simply said to be completely controllable if it is completely controllable on every subinterval \([t_0,t_1]\) of [0, T].

It is well known that system \([A,B]\) is completely controllable on \([t_0,t_1]\) if and only if

$$\begin{aligned} \int _{t_0}^{t_1}\Phi _A(s)^{-1}B(s)\big [\Phi _A(s)^{-1}B(s)\big ]^\top ds>0, \end{aligned}$$

where \(\Phi _A(\cdot )\) is the solution to the \(\mathbb {R}^{n\times n}\)-valued ordinary differential equation (ODE, for short)

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Phi }}_A(s)&=A(s)\Phi _A(s), \quad s\in [0,T],\\ \Phi _A(0)&=I. \end{aligned}\right. \end{aligned}$$
(2.2)

The latter is in turn equivalent to the following regularity condition:

$$\begin{aligned} \eta ^\top \Phi _A(s)^{-1}B(s)=0 \quad \mathrm{a.e.}\quad s\in [t_0,t_1] \quad \Longrightarrow \quad \eta =0. \end{aligned}$$
(2.3)

In particular, when the matrices \(A(\cdot )\) and \(B(\cdot )\) are constant-valued (time-invariant), the complete controllability of system \([A,B]\) can be verified by checking the Kalman rank condition

$$\begin{aligned} \text{ rank }\,(B,AB,\ldots ,A^{n-1}B)=n. \end{aligned}$$
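
For time-invariant data, this rank condition is straightforward to check numerically. Below is a minimal sketch; the double-integrator matrices A and B are illustrative placeholders, not data from this paper.

```python
import numpy as np

def kalman_rank_ok(A: np.ndarray, B: np.ndarray) -> bool:
    """Check the Kalman rank condition rank(B, AB, ..., A^{n-1}B) = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])   # next block A^k B
    ctrb = np.hstack(blocks)            # controllability matrix, n x (nm)
    return np.linalg.matrix_rank(ctrb) == n

# Illustrative data: a double integrator, which is controllable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(kalman_rank_ok(A, B))  # True
```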

In the rest of the paper, we will assume the following so that every target y can be reached from an arbitrary initial pair \((t,x)\):

(H2) System \([A,B]\) is completely controllable.

Let us return to Problem (CLQ). First, we need the following lemma.

Lemma 2.2

Let (H1)–(H2) hold. Then for each i, the mapping

$$\begin{aligned} {{\mathcal {U}}}(t,x,y) \rightarrow \mathbb {R}, \quad u(\cdot )\mapsto J_i(t,x,y;u(\cdot )) \end{aligned}$$

is convex.

Proof

For a control \(u(\cdot )\in {{\mathcal {U}}}(t,x,y)\), we denote by \(X^u(\cdot )\) the solution to the state equation (1.1). By the linearity of the differential equation in (1.1), we have for any \(u(\cdot ),v(\cdot )\in {{\mathcal {U}}}(t,x,y)\) and \(\alpha ,\beta \in (0,1)\) with \(\alpha +\beta =1\),

$$\begin{aligned} X^{\alpha u+\beta v}(\cdot )=\alpha X^u(\cdot )+\beta X^v(\cdot ). \end{aligned}$$

In particular, \(X^{\alpha u+\beta v}(T)=\alpha X^u(T)+\beta X^v(T)=y\). This means \(\alpha u(\cdot )+\beta v(\cdot )\in {{\mathcal {U}}}(t,x,y)\). Recall that for any positive semi-definite matrix \(M\in \mathbb {S}^k\), \(x,y\in \mathbb {R}^k\), and \(\alpha ,\beta \in (0,1)\) with \(\alpha +\beta =1\),

$$\begin{aligned} \langle M(\alpha x+\beta y),\alpha x+\beta y\rangle \leqslant \alpha \langle Mx,x\rangle +\beta \langle My,y\rangle . \end{aligned}$$

Thus, by the assumption that \(Q_i(\cdot )\) and \(R_i(\cdot )\) are positive semi-definite, we have

$$\begin{aligned}&J_i(t,x,y;\alpha u(\cdot )+\beta v(\cdot )) \\&\quad = \int _t^T\Big [\langle Q_i(s)X^{\alpha u+\beta v}(s),X^{\alpha u+\beta v}(s)\rangle +\langle R_i(s)[\alpha u(s)+\beta v(s)],\alpha u(s)+\beta v(s)\rangle \Big ]ds \\&\quad \leqslant \alpha \int _t^T\Big [\langle Q_i(s)X^u(s),X^u(s)\rangle +\langle R_i(s)u(s),u(s)\rangle \Big ]ds \\&\qquad +\,\beta \int _t^T\Big [\langle Q_i(s)X^v(s),X^v(s)\rangle +\langle R_i(s)v(s),v(s)\rangle \Big ]ds \\&\quad = \alpha J_i(t,x,y;u(\cdot ))+\beta J_i(t,x,y;v(\cdot )). \end{aligned}$$

This shows the mapping \(u(\cdot )\mapsto J_i(t,x,y;u(\cdot ))\) is convex. \(\square \)

The following basic result is concerned with the existence of an optimal control for Problem (CLQ).

Theorem 2.3

Let (H1)–(H2) hold, and let \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\) be given. Suppose the set of feasible controls w.r.t. \((t,x,y)\) is nonempty. Then Problem (CLQ) admits a unique solution.

Proof

Let \({{\mathcal {F}}}(t,x,y)\) denote the set of feasible controls w.r.t. \((t,x,y)\), that is,

$$\begin{aligned} {{\mathcal {F}}}(t,x,y)=\big \{u(\cdot )\in L^2(t,T;\mathbb {R}^m):X(T;t,x,u(\cdot ))=y,~J_i(t,x,y;u(\cdot ))\leqslant c_i;~i=1,\ldots ,k\big \}. \end{aligned}$$

Observing that the mappings

$$\begin{aligned} u(\cdot )\mapsto X(T;t,x,u(\cdot )),\quad u(\cdot )\mapsto J_i(t,x,y;u(\cdot )); \quad i=1,\ldots ,k \end{aligned}$$

are convex and continuous, one can easily verify that \({{\mathcal {F}}}(t,x,y)\) is a convex closed subset of \(L^2(t,T;\mathbb {R}^m)\). Because \(Q_0(\cdot )\geqslant 0\) and \(R_0(\cdot )\geqslant \delta I\) for some \(\delta >0\), the cost functional \(J_0(t,x,y;\,\cdot \,)\) defined on \({{\mathcal {F}}}(t,x,y)\) is strictly convex and continuous, and hence sequentially weakly lower semicontinuous (see [14, Theorem 7.2.6]). Let \(\{u_k(\cdot )\}_{k=1}^\infty \subseteq {{\mathcal {F}}}(t,x,y)\) be a minimizing sequence for \(J_0(t,x,y;\,\cdot \,)\). Since \({{\mathcal {F}}}(t,x,y)\) is nonempty, we have

$$\begin{aligned} \delta \int _t^T|u_k(s)|^2ds\leqslant J_0(t,x,y;u_k(\cdot ))\rightarrow V(t,x,y)<\infty . \end{aligned}$$

This implies that \(\{u_k(\cdot )\}_{k=1}^\infty \) is bounded in the Hilbert space \(L^2(t,T;\mathbb {R}^m)\). Consequently, there exists a subsequence \(\{u_{k_j}(\cdot )\}_{j=1}^\infty \) converging weakly to some \(u^*(\cdot )\in L^2(t,T;\mathbb {R}^m)\). Since \({{\mathcal {F}}}(t,x,y)\) is convex and closed, it follows from Mazur’s lemma that \(u^*(\cdot )\in {{\mathcal {F}}}(t,x,y)\). Thus, by the sequential weak lower semicontinuity of the mapping \(u(\cdot )\mapsto J_0(t,x,y;u(\cdot ))\),

$$\begin{aligned} V(t,x,y)\leqslant J_0(t,x,y;u^*(\cdot ))\leqslant \liminf _{j\rightarrow \infty }J_0(t,x,y;u_{k_j}(\cdot ))=V(t,x,y), \end{aligned}$$

from which we see \(u^*(\cdot )\) is an optimal control with respect to \((t,x,y)\). The uniqueness follows directly from the strict convexity of \(u(\cdot )\mapsto J_0(t,x,y;u(\cdot ))\). \(\square \)

3 Main Results

Let \(Q(\cdot )\in L^1(0,T;\mathbb {S}^n)\) and \(R(\cdot )\in L^\infty (0,T;\mathbb {S}^m)\) be such that

$$\begin{aligned} Q(\cdot )\geqslant 0, \qquad R(\cdot )\gg 0.\end{aligned}$$
(3.1)

Consider the following Riccati-type equations:

$$\begin{aligned}&\left\{ \begin{aligned}&\dot{P}(s)+P(s)A(s)+A(s)^\top P(s)+Q(s) \\&-P(s)B(s)R(s)^{-1}B(s)^\top P(s)=0, \quad s\in [0,T),\\&\lim _{s\rightarrow T}\min \sigma (P(s))=\infty , \end{aligned}\right. \end{aligned}$$
(3.2)
$$\begin{aligned}&\left\{ \begin{aligned}&{\dot{\Pi }}(s)+\Pi (s)A(s)+A(s)^\top \Pi (s)-Q(s) \\&+\Pi (s)B(s)R(s)^{-1}B(s)^\top \Pi (s)=0, \quad s\in (t,T],\\&\lim _{s\rightarrow t}\min \sigma (\Pi (s))=\infty , \end{aligned}\right. \end{aligned}$$
(3.3)

where \(\sigma (M)\) denotes the spectrum of a matrix M. Our first result can be stated as follows.

Theorem 3.1

Let (H1)–(H2) hold. Then the Riccati equations (3.2) and (3.3) admit unique solutions \(P(\cdot )\in C([0,T);\mathbb {S}^n_+)\) and \(\Pi (\cdot )\in C((t,T];\mathbb {S}^n_+)\), respectively.

The proof of Theorem 3.1 will be given in Sect. 5. Let us for the moment look at some properties of the solution \(P(\cdot )\) to (3.2). Consider the matrix-valued ODE

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Phi }}(s)&=\big [A(s)-B(s)R(s)^{-1}B(s)^\top P(s)\big ]\Phi (s), \quad s\in [0,T),\\ \Phi (0)&=I. \end{aligned}\right. \end{aligned}$$
(3.4)

Obviously, (3.4) admits a unique solution \(\Phi (\cdot )\in C([0,T);\mathbb {R}^{n\times n})\), which is invertible. However, one cannot hastily conclude that the solution \(\Phi (\cdot )\) can be extended to the whole interval [0, T], because P(s) explodes as \(s\uparrow T\). The following result gives a rigorous discussion of this issue.

Proposition 3.2

Let (H1)–(H2) hold, and let \(P(\cdot )\in C([0,T);\mathbb {S}^n_+)\) be the solution to the Riccati equation (3.2). The solution \(\Phi (\cdot )\) of (3.4) satisfies \(\lim _{s\rightarrow T}\Phi (s)=0\).

Proof

Let \(x\in \mathbb {R}^n\) be arbitrary. For any \(0<s<T\), integration by parts gives

$$\begin{aligned}&\langle P(s)\Phi (s)x,\Phi (s)x\rangle -\langle P(0)x,x\rangle \\&\quad =\int _0^s\Big \langle \Big \{\dot{P}(r)+P(r)\big [A(r)-B(r)R(r)^{-1}B(r)^\top P(r)\big ]\\&\qquad +\big [A(r)-B(r)R(r)^{-1}B(r)^\top P(r)\big ]^\top P(r)\Big \}\Phi (r)x,\Phi (r)x\Big \rangle dr\\&\quad =-\int _0^s\big \langle \big [Q(r)+P(r)B(r)R(r)^{-1}B(r)^\top P(r)\big ]\Phi (r)x,\Phi (r)x\big \rangle dr\leqslant 0. \end{aligned}$$

Let \(\lambda _s\) denote the minimal eigenvalue of P(s). Then the above yields

$$\begin{aligned} \lambda _s|\Phi (s)x|^2\leqslant \langle P(s)\Phi (s)x,\Phi (s)x\rangle \leqslant \langle P(0)x,x\rangle . \end{aligned}$$

Since \(\lambda _s\rightarrow \infty \) as \(s\rightarrow T\) and x is arbitrary, we must have \(\lim _{s\rightarrow T}\Phi (s)=0\). \(\square \)

In light of Proposition 3.2, the solution \(\Phi (\cdot )\) of (3.4) has a continuous extension to [0, T]. Thus, the ODE

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Psi }}(s)&=-A(s)^\top \Psi (s)-Q(s)\Phi (s), \quad s\in [0,T],\\ \Psi (0)&=P(0) \end{aligned}\right. \end{aligned}$$
(3.5)

admits a unique solution \(\Psi (\cdot )\) on the whole interval [0, T], and we have the following:

Proposition 3.3

Let (H1)–(H2) hold, and let \(P(\cdot )\in C([0,T);\mathbb {S}^n_+)\) be the solution to the Riccati equation (3.2). The solution \(\Phi (\cdot )\) of (3.4) satisfies

$$\begin{aligned} \lim _{s\rightarrow T}P(s)\Phi (s)=\Psi (T). \end{aligned}$$

Proof

By differentiating we get

$$\begin{aligned} {d\over ds}[P(s)\Phi (s)]&= \dot{P}(s)\Phi (s)+P(s){\dot{\Phi }}(s)\\&= \big [\dot{P}(s)+P(s)A(s)-P(s)B(s)R(s)^{-1}B(s)^\top P(s)\big ]\Phi (s)\\&= -\,A(s)^\top [P(s)\Phi (s)]-Q(s)\Phi (s),\quad s\in [0,T). \end{aligned}$$

Thus, \(P(\cdot )\Phi (\cdot )\) satisfies Eq. (3.5) on the interval [0, T). By uniqueness of solutions, we must have \(P(s)\Phi (s)=\Psi (s)\) for all \(s\in [0,T)\). The desired result then follows immediately. \(\square \)

Let \(\Gamma =\{(\lambda _1,\ldots ,\lambda _k)^\top :\lambda _i\geqslant 0,~i=1,\ldots ,k\}\) and define for \(\lambda =(\lambda _1,\ldots ,\lambda _k)^\top \in \Gamma \),

$$\begin{aligned} Q(\lambda ,s)=Q_0(s)+\sum _{i=1}^k \lambda _iQ_i(s), \quad R(\lambda ,s)=R_0(s)+\sum _{i=1}^k \lambda _iR_i(s). \end{aligned}$$
(3.6)

We have from Theorem 3.1 that under (H1)–(H2), the following (\(\lambda \)-dependent) Riccati equations are uniquely solvable:

$$\begin{aligned}&\left\{ \begin{aligned}&\dot{P}(\lambda ,s)+P(\lambda ,s)A(s)+A(s)^\top P(\lambda ,s)+Q(\lambda ,s) \\&\quad -\,P(\lambda ,s)B(s)R(\lambda ,s)^{-1}B(s)^\top P(\lambda ,s)=0, \quad s\in [0,T),\\&\lim _{s\rightarrow T}\min \sigma (P(\lambda ,s))=\infty , \end{aligned}\right. \end{aligned}$$
(3.7)
$$\begin{aligned}&\left\{ \begin{aligned}&{\dot{\Pi }}(\lambda ,s)+\Pi (\lambda ,s)A(s)+A(s)^\top \Pi (\lambda ,s)-Q(\lambda ,s) \\&\quad +\Pi (\lambda ,s)B(s)R(\lambda ,s)^{-1}B(s)^\top \Pi (\lambda ,s)=0, \quad s\in (t,T],\\&\lim _{s\rightarrow t}\min \sigma (\Pi (\lambda ,s))=\infty . \end{aligned}\right. \end{aligned}$$
(3.8)

Let \(\Phi (\lambda ,\cdot )\) and \(\Psi (\lambda ,\cdot )\) be the solutions to

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Phi }}(\lambda ,s)&=\big [A(s)-B(s)R(\lambda ,s)^{-1}B(s)^\top P(\lambda ,s)\big ]\Phi (\lambda ,s), \quad s\in [0,T),\\ \Phi (\lambda ,0)&=I \end{aligned}\right. \end{aligned}$$
(3.9)

and

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Psi }}(\lambda ,s)&=-A(s)^\top \Psi (\lambda ,s)-Q(\lambda ,s)\Phi (\lambda ,s), \quad s\in [0,T],\\ \Psi (\lambda ,0)&=P(\lambda ,0), \end{aligned}\right. \end{aligned}$$
(3.10)

respectively. Further, denote by c the column vector \((c_1,\ldots ,c_k)^\top \) with \(c_i\) as in (1.3). We are now ready for our next main result, whose proof will be given in Sect. 6.

Theorem 3.4

Let (H1)–(H2) hold, and let \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\) be given. Suppose there exists at least one strictly feasible control w.r.t. \((t,x,y)\). Then the function \(L(\,\cdot \,,t,x,y):\Gamma \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} L(\lambda ,t,x,y)\triangleq \langle P(\lambda ,t)x,x\rangle -2\langle \Psi (\lambda ,T)\Phi (\lambda ,t)^{-1}x,y\rangle +\langle \Pi (\lambda ,T)y,y\rangle -\lambda ^\top c \end{aligned}$$

achieves its maximum at some \(\lambda ^*\in \Gamma \), and the optimal control of Problem (CLQ) is given by

$$\begin{aligned} u(\lambda ^*,s)=-R(\lambda ^*,s)^{-1}B(s)^\top \big [P(\lambda ^*,s)X(\lambda ^*,s)+\eta (\lambda ^*,s)\big ], \quad s\in [t,T), \end{aligned}$$
(3.11)

where

$$\begin{aligned} \eta (\lambda ^*,s)=-\big [\Psi (\lambda ^*,T)\Phi (\lambda ^*,s)^{-1}\big ]^\top y,\quad s\in [0,T), \end{aligned}$$

and \(X(\lambda ^*,\cdot )\) is the solution to the closed-loop system

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}(\lambda ^*,s)&=\big [A(s)-B(s)R(\lambda ^*,s)^{-1}B(s)^\top P(\lambda ^*,s)\big ]X(\lambda ^*,s)\\&-B(s)R(\lambda ^*,s)^{-1}B(s)^\top \eta (\lambda ^*,s), \quad s\in [t,T), \\ X(\lambda ^*,t)&=x. \end{aligned}\right. \end{aligned}$$

Remark 3.5

From the representation (3.11), we see that the optimal control of Problem (CLQ) is a target-dependent feedback of the current state.
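
To make the structure of (3.11) concrete, the following minimal sketch shows how the target-dependent feedback could be assembled once the quantities in Theorem 3.4 are available. The callables P, Phi, R, B and the matrix Psi_T are assumed to have been precomputed (say, by numerically integrating (3.7), (3.9), and (3.10)); none of them are supplied by the paper.

```python
import numpy as np

def make_feedback(P, Phi, Psi_T, R, B, y):
    """Target-dependent feedback of (3.11): u(s, x) = -R^{-1} B^T [P x + eta].

    P(s), Phi(s), R(s), B(s) are callables returning matrices;
    Psi_T is the constant matrix Psi(T); y is the target state.
    """
    def u(s, x):
        # eta(s) = -[Psi(T) Phi(s)^{-1}]^T y
        eta = -(Psi_T @ np.linalg.inv(Phi(s))).T @ y
        return -np.linalg.solve(R(s), B(s).T @ (P(s) @ x + eta))
    return u
```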

4 Approach by Standard LQ Problems

In this section we approach Problem (CLQ) by a class of LQ problems without constraints. Our first step is to reduce Problem (CLQ) to an LQ problem without the integral quadratic constraints by means of the Lagrangian duality. It is worth noting that the reduced LQ problem is still not standard because the terminal state is fixed.

For \(\lambda \in \Gamma =\{(\lambda _1,\ldots ,\lambda _k)^\top :\lambda _i\geqslant 0,~i=1,\ldots ,k\}\), let

$$\begin{aligned} J(\lambda ,t,x,y;u(\cdot ))&= J_0(t,x,y;u(\cdot ))+\sum _{i=1}^k\lambda _i J_i(t,x,y;u(\cdot )) \nonumber \\&= \int _t^T\Big [\langle Q(\lambda ,s)X(s),X(s)\rangle +\langle R(\lambda ,s)u(s),u(s)\rangle \Big ]ds, \end{aligned}$$
(4.1)

where \(Q(\lambda ,s)\) and \(R(\lambda ,s)\) are defined by (3.6). Consider the following problem:

Problem (CLQ*). For any given initial pair \((t,x)\in [0,T)\times \mathbb {R}^n\) and any target \(y\in \mathbb {R}^n\), find a \(u^*(\lambda ,\cdot )\in {{\mathcal {U}}}(t,x,y)\) such that

$$\begin{aligned} J(\lambda ,t,x,y;u^*(\lambda ,\cdot ))=\inf _{u(\cdot )\in {{\mathcal {U}}}(t,x,y)}J(\lambda ,t,x,y;u(\cdot ))\triangleq V(\lambda ,t,x,y). \end{aligned}$$

By the Lagrange duality theorem, we have the following result.

Theorem 4.1

Let (H1)–(H2) hold, and let \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\) be given. Then for any \(\lambda \in \Gamma \), Problem (CLQ*) admits a unique optimal control \(u^*(\lambda ,\cdot )\). If, in addition, there exists a strictly feasible control w.r.t. \((t,x,y)\), then the dual functional

$$\begin{aligned} \varphi (\lambda )\triangleq J(\lambda ,t,x,y;u^*(\lambda ,\cdot ))-\lambda ^\top c, \quad \lambda \in \Gamma \end{aligned}$$
(4.2)

where \(c=(c_1,\ldots ,c_k)^\top \), achieves its maximum at some \(\lambda ^*\in \Gamma \), and the unique optimal control of Problem (CLQ) is \(u^*(\lambda ^*,\cdot )\).

Proof

The first assertion can be proved by a similar argument used in the proof of Theorem 2.3, and the second assertion follows from the Lagrange duality theorem [19, Theorem 1, p. 224]. \(\square \)

Once we find the optimal control of Problem (CLQ*) and derive the value function \(V(\lambda ,t,x,y)\), we will be able to calculate the gradient of the dual functional (4.2) and thus solve the original Problem (CLQ). In order to obtain an explicit representation of the optimal control for Problem (CLQ*), we adopt a penalty approach, in which Problem (CLQ*) is approximated by a sequence of standard LQ problems whose terminal states are unconstrained.
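
In fact, since the inner minimizer \(u^*(\lambda ,\cdot )\) is unique, a standard Danskin-type argument gives \(\partial \varphi (\lambda )/\partial \lambda _i=J_i(t,x,y;u^*(\lambda ,\cdot ))-c_i\), so \(\lambda ^*\) can be sought by projected gradient ascent over \(\Gamma \). Below is a minimal sketch; the oracle eval_constraints, which returns the constraint values at the current multiplier by solving Problem (CLQ*), is a hypothetical helper, not something the paper provides.

```python
import numpy as np

def dual_ascent(eval_constraints, c, lam0, step=0.1, iters=500):
    """Projected gradient ascent on the dual functional (4.2) over Gamma.

    eval_constraints(lam) -> array of J_i(t, x, y; u*(lam, .)), i = 1..k,
    obtained by solving Problem (CLQ*) at the current multiplier lam.
    """
    lam = np.asarray(lam0, dtype=float)
    for _ in range(iters):
        grad = eval_constraints(lam) - c           # Danskin-type gradient
        lam = np.maximum(lam + step * grad, 0.0)   # project back onto Gamma
    return lam
```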

Let \(Q(\cdot )\in L^1(0,T;\mathbb {S}^n)\) and \(R(\cdot )\in L^\infty (0,T;\mathbb {S}^m)\) be such that (3.1) holds. For each \(\lambda \in \Gamma \), the weighting matrices \(Q(\lambda ,\cdot )\) and \(R(\lambda ,\cdot )\) in the cost functional (4.1) have the same properties as \(Q(\cdot )\) and \(R(\cdot )\). So in what follows we shall simply consider Problem (CLQ*) with the cost functional

$$\begin{aligned} J(t,x,y;u(\cdot )) = \int _t^T\Big [\langle Q(s)X(s),X(s)\rangle +\langle R(s)u(s),u(s)\rangle \Big ]ds, \end{aligned}$$

and the corresponding value function will be denoted by \(V(t,x,y)\). For every integer \(i\geqslant 1\), let

$$\begin{aligned} J_i(t,x,y;u(\cdot ))&=\int _t^T\Big [\langle Q(s)X(s),X(s)\rangle + \langle R(s)u(s),u(s)\rangle \Big ]ds \nonumber \\&\quad +\,i|X(T)-y|^2. \end{aligned}$$
(4.3)

The family of standard LQ problems, parameterized by i, is defined as follows.

Problem (LQ)\(_i\). For any given \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\), find a \(u_i^*(\cdot )\in {{\mathcal {U}}}[t,T]\) such that

$$\begin{aligned} J_i(t,x,y;u_i^*(\cdot ))=\inf _{u(\cdot )\in {{\mathcal {U}}}[t,T]}J_i(t,x,y;u(\cdot ))\triangleq V_i(t,x,y). \end{aligned}$$

The solution of the above Problem (LQ)\(_i\) can be obtained by using a completion-of-squares technique via the Riccati equation

$$\begin{aligned} \left\{ \begin{aligned}&\dot{P}_i(s)+P_i(s) A(s)+A(s)^\top P_i(s)+Q(s) \\&-P_i(s) B(s)R(s)^{-1}B(s)^\top P_i(s)=0,\quad s\in [0,T],\\&P_i(T)=i I, \end{aligned}\right. \end{aligned}$$
(4.4)

see, e.g., [26] for a thorough study of the Riccati approach (see also [20] for some new developments). More precisely, we have the following result.

Theorem 4.2

Let \(Q(\cdot )\in L^1(0,T;\mathbb {S}^n)\) and \(R(\cdot )\in L^\infty (0,T;\mathbb {S}^m)\) be such that (3.1) holds. Then the Riccati equation (4.4) admits a unique solution \(P_i(\cdot )\in C([0,T];\mathbb {S}^n)\), and the unique optimal control \(u_i^*(\cdot )\) of Problem (LQ)\(_i\) for (txy) is given by the following state feedback form:

$$\begin{aligned} u^*_i(s)=-R(s)^{-1}B(s)^\top \big [P_i(s)X^*_i(s)+\eta _i(s)\big ],\quad s\in [t,T], \end{aligned}$$
(4.5)

where \(\eta _i(\cdot )\in C([0,T];\mathbb {R}^n)\) is the solution to the backward ODE

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\eta }}_i(s)&= -\big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]^\top \eta _i(s),\quad s\in [0,T],\\ \eta _i(T)&= -i y, \end{aligned}\right. \end{aligned}$$
(4.6)

and \(X_i^*(\cdot )\) is the solution to the closed-loop system

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}^*_i(s)&= \big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]X^*_i(s) \\&\quad -B(s)R(s)^{-1}B(s)^\top \eta _i(s),\quad s\in [t,T],\\ X^*_i(t)&= x. \end{aligned}\right. \end{aligned}$$
(4.7)

Moreover, the value function of Problem (LQ)\(_i\) has the following representation:

$$\begin{aligned} V_i(t,x,y)&=\langle P_i(t)x,x\rangle +2\langle \eta _i(t),x\rangle +i|y|^2 \nonumber \\&\quad -\int _t^T\big \langle R(s)^{-1}B(s)^\top \eta _i(s),B(s)^\top \eta _i(s)\big \rangle ds. \end{aligned}$$
(4.8)

The above result is a special case of Sun, Li, and Yong [20, Corollary 4.7]. We refer the reader to [20] for the proof and further information.
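
As a numerical illustration of Theorem 4.2, the following sketch integrates the Riccati equation (4.4) and the backward ODE (4.6) from s = T to s = 0 and then runs the closed-loop system (4.7) forward. The time-invariant matrices, the penalty index, and the initial and target states are our own illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative time-invariant data (assumptions, not from the paper).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.eye(1)
T, i_pen = 1.0, 100.0                       # horizon and penalty weight i
x0, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])

grid = np.linspace(T, 0.0, 201)             # backward time grid

def riccati_rhs(s, p):                      # (4.4), with p = P.ravel()
    P = p.reshape(2, 2)
    dP = -(P @ A + A.T @ P + Q - P @ B @ np.linalg.solve(R, B.T) @ P)
    return dP.ravel()

Psol = solve_ivp(riccati_rhs, (T, 0.0), (i_pen * np.eye(2)).ravel(),
                 t_eval=grid, rtol=1e-8, atol=1e-8)
P_of = lambda s: Psol.y[:, np.argmin(np.abs(grid - s))].reshape(2, 2)

def eta_rhs(s, eta):                        # (4.6), with eta_i(T) = -i y
    Ai = A - B @ np.linalg.solve(R, B.T) @ P_of(s)
    return -Ai.T @ eta

eta_sol = solve_ivp(eta_rhs, (T, 0.0), -i_pen * y, t_eval=grid,
                    rtol=1e-8, atol=1e-8)
eta_of = lambda s: eta_sol.y[:, np.argmin(np.abs(grid - s))]

def closed_loop(s, x):                      # (4.7), under the feedback (4.5)
    K = np.linalg.solve(R, B.T)
    return (A - B @ K @ P_of(s)) @ x - B @ K @ eta_of(s)

Xsol = solve_ivp(closed_loop, (0.0, T), x0, rtol=1e-8, atol=1e-8)
print(Xsol.y[:, -1])                        # X_i*(T): close to y for large i
```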

Remark 4.3

One observes that the solution \(\eta _i(\cdot )\) of (4.6) is independent of the initial state x. So taking \(x=0\) in (4.8) we obtain

$$\begin{aligned} V_i(t,0,y)=i|y|^2-\int _t^T\big \langle R(s)^{-1}B(s)^\top \eta _i(s),B(s)^\top \eta _i(s)\big \rangle ds. \end{aligned}$$

On the other hand, if \(y=0\), then the solution \(\eta _i(\cdot )\) of (4.6) is identically zero and hence

$$\begin{aligned} V_i(t,x,0)=\langle P_i(t)x,x\rangle , \quad \forall (t,x)\in [0,T]\times \mathbb {R}^n. \end{aligned}$$

Because the cost functional is nonnegative and the weight on the square of the terminal state is positive, it is not difficult to see by contradiction that \(P_i(t)>0\) for all \(t\in [0,T]\).

Note that \(J_i(t,x,y;u(\cdot ))\) is nondecreasing in i. Hence, when the system \([A,B]\) is completely controllable, it is expected that the sequence \(\{u^*_i(\cdot )\}^\infty _{i=1}\) defined by (4.5) converges to the unique optimal control of Problem (CLQ*) for the initial pair \((t,x)\) and target y. Actually, we have the following result.

Theorem 4.4

Let (H1)–(H2) hold. For \((t,x,y)\in [0,T)\times \mathbb {R}^n\times \mathbb {R}^n\), let \((u_i^*(\cdot ),X^*_i(\cdot ))\) be the corresponding optimal pair of Problem (LQ)\(_i\). We have the following:

(i) \(V_i(t,x,y)\uparrow V(t,x,y)\) as \(i\rightarrow \infty \).

(ii) \(\{u_i^*(\cdot )\}^\infty _{i=1}\) has a subsequence converging weakly to the unique optimal control of Problem (CLQ*) with respect to \((t,x,y)\).

Proof

We have seen in Theorem 4.1 that Problem (CLQ*) is uniquely solvable. Let \(u^*(\cdot )\in {{\mathcal {U}}}(t,x,y)\) be the unique optimal control of Problem (CLQ*) with respect to \((t,x,y)\), and let \(X^*(\cdot )\) be the corresponding optimal trajectory. Since \(Q(\cdot ),R(\cdot )\geqslant 0\) and \(X^*(T)=y\), we have

$$\begin{aligned} i|X_i^*(T)-y|^2&\leqslant J_i(t,x,y;u_i^*(\cdot ))=V_i(t,x,y),\nonumber \\ V_i(t,x,y)&\leqslant J_i(t,x,y;u^*(\cdot ))=J(t,x,y;u^*(\cdot ))=V(t,x,y), \end{aligned}$$
(4.9)

from which we conclude that

$$\begin{aligned} \lim _{i\rightarrow \infty }X_i^*(T)=y. \end{aligned}$$

On the other hand, since \(Q(\cdot )\geqslant 0\) and \(R(\cdot )\gg 0\), there exists a \(\delta >0\) such that

$$\begin{aligned} J_i(t,x,y;u(\cdot ))\geqslant \delta \int _t^T|u(s)|^2ds,\quad \forall \, u(\cdot )\in L^2(t,T;\mathbb {R}^m), \end{aligned}$$

which, together with (4.9), yields

$$\begin{aligned} \int _t^T|u_i^*(s)|^2ds\leqslant \delta ^{-1}J_i(t,x,y;u_i^*(\cdot ))=\delta ^{-1}V_i(t,x,y) \leqslant \delta ^{-1}V(t,x,y)<\infty , \quad \forall \,i\geqslant 1. \end{aligned}$$

Thus, \(\{u_i^*(\cdot )\}^\infty _{i=1}\) is bounded in the Hilbert space \(L^2(t,T;\mathbb {R}^m)\) and hence admits a weakly convergent subsequence \(\{u_{i_k}^*(\cdot )\}_{k=1}^\infty \). Let \(v(\cdot )\) be the weak limit of \(\{u_{i_k}^*(\cdot )\}_{k=1}^\infty \). The sequential weak lower semicontinuity of the mapping \(u(\cdot )\mapsto J(t,x,y;u(\cdot ))\) gives

$$\begin{aligned} J(t,x,y;v(\cdot ))&\leqslant \liminf _{k\rightarrow \infty }J(t,x,y;u_{i_k}^*(\cdot )) \leqslant \liminf _{k\rightarrow \infty }J_{i_k}(t,x,y;u_{i_k}^*(\cdot )) \nonumber \\&= \lim _{k\rightarrow \infty }V_{i_k}(t,x,y) \leqslant V(t,x,y). \end{aligned}$$
(4.10)

The above inequality will imply that \(v(\cdot )\) coincides with the unique optimal control \(u^*(\cdot )\) of Problem (CLQ*) with respect to \((t,x,y)\) once we prove \(v(\cdot )\in {{\mathcal {U}}}(t,x,y)\). Define a continuous affine mapping \(\mathscr {L}:L^2(t,T;\mathbb {R}^m)\rightarrow \mathbb {R}^n\) as follows:

$$\begin{aligned} \mathscr {L}(u(\cdot ))=X(T;t,x,u(\cdot )), \end{aligned}$$

where \(X(\cdot \,;t,x,u(\cdot ))\) is the solution to the state equation (1.1) corresponding to \(u(\cdot )\) and \((t,x)\). By Mazur’s lemma, one can find \(\alpha _{kj}\in [0,1]\), \(j=1,2,\ldots ,N_k\), with \(\sum _{j=1}^{N_k}\alpha _{kj}=1\) such that \(\sum _{j=1}^{N_k}\alpha _{kj}u_{i_{k+j}}^*(\cdot )\) converges strongly to \(v(\cdot )\) as \(k\rightarrow \infty \). Thus,

$$\begin{aligned} X(T;t,x,v(\cdot ))&= \mathscr {L}(v(\cdot ))=\lim _{k\rightarrow \infty }\mathscr {L}\left( \sum _{j=1}^{N_k}\alpha _{kj}u_{i_{k+j}}^*(\cdot )\right) \\&= \lim _{k\rightarrow \infty }\sum _{j=1}^{N_k}\alpha _{kj}\mathscr {L}(u_{i_{k+j}}^*(\cdot )) =\lim _{k\rightarrow \infty }\sum _{j=1}^{N_k}\alpha _{kj}X_{i_{k+j}}^*(T)=y. \end{aligned}$$

This shows \(v(\cdot )\in {{\mathcal {U}}}(t,x,y)\), and hence (ii) holds. Now (4.10) yields

$$\begin{aligned} V(t,x,y)=J(t,x,y;v(\cdot ))\leqslant \lim _{k\rightarrow \infty }V_{i_k}(t,x,y)\leqslant V(t,x,y), \end{aligned}$$

and (i) follows readily. \(\square \)

5 Riccati Equation

The aim of this section is to investigate the existence and uniqueness of solutions to the Riccati equations (3.2) and (3.3). We will focus mainly on (3.2) as the well-posedness of the Riccati equation (3.3) can be obtained by a simple time-reversal on the result for (3.2).

First, we present the following result concerning the uniqueness of solutions to the Riccati equation (3.2).

Proposition 5.1

Let (H1) hold. Then the Riccati equation (3.2) has at most one solution \(P(\cdot )\in C([0,T);\mathbb {S}^n)\).

Proof

Suppose that \(P_1(\cdot ),P_2(\cdot )\in C([0,T);\mathbb {S}^n)\) are two solutions of (3.2). Take \(\tau \in [0,T)\) such that \(P_1(s),P_2(s)>0\) on \([\tau ,T)\); such a \(\tau \) exists because \(\min \sigma (P_j(s))\rightarrow \infty \) as \(s\rightarrow T\) \((j=1,2)\). Set for \(i=1,2\),

$$\begin{aligned} \Sigma _i(s)=\left\{ \begin{aligned}&P_i(s)^{-1},&s\in [\tau ,T),\\&0,&s=T. \end{aligned}\right. \end{aligned}$$

Differentiating the identity \(P_i(s)\Sigma _i(s)=I\) on \([\tau ,T)\), we see that both \(\Sigma _1(\cdot )\) and \(\Sigma _2(\cdot )\) solve the following ODE:

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{\Sigma }}(s)-A(s)\Sigma (s) -\Sigma (s)A(s)^\top -\Sigma (s)Q(s)\Sigma (s)\\&\quad + B(s)R(s)^{-1}B(s)^\top =0, \quad s\in [\tau ,T],\\&\Sigma (T)=0. \end{aligned}\right. \end{aligned}$$

Thus, \(\Pi (\cdot )\triangleq \Sigma _1(\cdot )-\Sigma _2(\cdot )\) satisfies \(\Pi (T)=0\) and

$$\begin{aligned} {\dot{\Pi }}(s)&= A(s)\Pi (s)+\Pi (s) A(s)^\top +\Sigma _1(s) Q(s)\Sigma _1(s)-\Sigma _2(s) Q(s)\Sigma _2(s) \\&= A(s)\Pi (s)+\Pi (s)A(s)^\top +\Pi (s)Q(s)\Sigma _1(s)+\Sigma _2(s)Q(s)\Pi (s) \\&= [A(s)+\Sigma _2(s)Q(s)]\Pi (s)+\Pi (s)[A(s)^\top +Q(s)\Sigma _1(s)], \quad s\in [\tau ,T]. \end{aligned}$$

It follows that

$$\begin{aligned} \Pi (s)=-\int _s^T \Big \{[A(r)+\Sigma _2(r)Q(r)]\Pi (r)+\Pi (r)[A(r)^\top +Q(r)\Sigma _1(r)]\Big \} dr, \quad s\in [\tau ,T], \end{aligned}$$

and thereby

$$\begin{aligned} |\Pi (s)| \leqslant \int _s^T \Big \{|A(r)+\Sigma _2(r)Q(r)|+|A(r)^\top +Q(r)\Sigma _1(r)|\Big \}|\Pi (r)|\, dr, \quad \forall s\in [\tau ,T]. \end{aligned}$$

Applying Gronwall’s inequality then yields \(\Pi (s)=0\) for all \(s\in [\tau ,T]\). This shows \(P_1(\cdot )=P_2(\cdot )\) on \([\tau ,T)\). Now let \(\Gamma (\cdot )=P_1(\cdot )-P_2(\cdot )\). Then \(\Gamma (\tau )=0\) and

$$\begin{aligned} \dot{\Gamma }(s)&= -\Big [\Gamma (s)A(s) + A(s)^\top \Gamma (s) -P_1(s)B(s)R(s)^{-1}B(s)^\top P_1(s) \nonumber \\&\;\qquad \quad +P_2(s)B(s)R(s)^{-1}B(s)^\top P_2(s)\Big ], \quad s\in [0,\tau ]. \end{aligned}$$
(5.1)

Note that

$$\begin{aligned} P_1(s)B(s)R(s)^{-1}B(s)^\top P_1(s)&= [\Gamma (s)+P_2(s)]B(s)R(s)^{-1}B(s)^\top P_1(s), \\ P_2(s)B(s)R(s)^{-1}B(s)^\top P_2(s)&= P_2(s)B(s)R(s)^{-1}B(s)^\top [P_1(s)-\Gamma (s)]. \end{aligned}$$

Substituting the above into (5.1) gives

$$\begin{aligned} \dot{\Gamma }(s)&= -\,\Gamma (s)[A(s)-B(s)R(s)^{-1}B(s)^\top P_1(s)] \\&\quad -[A(s)^\top -P_2(s)B(s)R(s)^{-1}B(s)^\top ]\Gamma (s), \quad s\in [0,\tau ]. \end{aligned}$$

Proceeding as before, we obtain \(P_1(\cdot )=P_2(\cdot )\) on \([0,\tau ]\). \(\square \)

Next we prove the existence of solutions to the Riccati equation (3.2). The basic idea is to pass to the limit in (4.4). Theorem 4.4 will guarantee the existence of the limit \(P(s)\equiv \lim _{i\rightarrow \infty }P_i(s)\), which is a solution of (3.2).

Theorem 5.2

Let (H1)–(H2) hold. Then the Riccati equation (3.2) admits a unique solution \(P(\cdot )\in C([0,T);\mathbb {S}_+^n)\). Moreover,

$$\begin{aligned} V(t,x,0)\triangleq \inf _{u(\cdot )\in {{\mathcal {U}}}(t,x,0)}J(t,x,0;u(\cdot ))=\langle P(t)x,x\rangle , \quad \forall \, (t,x)\in [0,T)\times \mathbb {R}^n. \end{aligned}$$
(5.2)

Proof

Consider Problem (LQ)\(_i\) with \(y=0\). For \(i\geqslant 1\), let \(P_i(\cdot )\in C([0,T];\mathbb {S}_+^n)\) be the solution to (4.4). Note that in the case of \(y=0\), the solution \(\eta _i(\cdot )\) of (4.6) is identically zero, and the value function of Problem (LQ)\(_i\) is given by

$$\begin{aligned} V_i(t,x,0)=\langle P_i(t)x,x\rangle ,\quad (t,x)\in [0,T]\times \mathbb {R}^n. \end{aligned}$$

Then from Theorem 4.4 (i), we see that for any \(t\in [0,T)\), \(\{P_i(t)\}^\infty _{i=1}\) is an increasing, bounded sequence, and hence has a limit \(P(t)\in \mathbb {S}_+^n\) having the property (5.2). On the other hand, one can easily verify that the control defined by

$$\begin{aligned} v(s)&= -\big [\Phi _A(s)^{-1}B(s)\big ]^\top \left( \int _t^T\Phi _A(r)^{-1}B(r)\big [\Phi _A(r)^{-1}B(r)\big ]^\top dr\right) ^{-1}\Phi _A(t)^{-1}x \\&\equiv \mathbb {V}(s,t)x, \quad s\in [t,T] \end{aligned}$$

is in \({{\mathcal {U}}}(t,x,0)\), where \(\Phi _A(\cdot )\) is the solution of (2.2). Thus, with \(\mathbb {X}(\cdot \,,t)\) denoting the solution to the matrix-valued ODE

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\mathbb {X}}}(s,t)&= A(s)\mathbb {X}(s,t)+B(s)\mathbb {V}(s,t), \quad s\in [t,T],\\ \mathbb {X}(t,t)&= I, \end{aligned}\right. \end{aligned}$$

we have \(X(\cdot \,;t,x,v(\cdot ))=\mathbb {X}(\cdot \,,t)x\), and hence

$$\begin{aligned} \langle P_i(t)x,x\rangle&= V_i(t,x,0) \leqslant V(t,x,0)\leqslant J(t,x,0;v(\cdot )) \\&= \int _t^T\Big [\langle Q(s)\mathbb {X}(s,t)x,\mathbb {X}(s,t)x\rangle +\langle R(s)\mathbb {V}(s,t)x,\mathbb {V}(s,t)x\rangle \Big ]ds \\&\equiv \langle M(t)x,x\rangle , \quad \forall \, t\in [0,T),~\forall \, x\in \mathbb {R}^n. \end{aligned}$$

Noting that \(\mathbb {X}(s,t)\) and \(\mathbb {V}(s,t)\) are continuous functions of \((s,t)\), we conclude that the function \(M(\cdot )\) is continuous on [0, T). Hence, \(\{P_i(t)\}^\infty _{i=1}\) is uniformly bounded on compact subintervals of [0, T), and by the dominated convergence theorem, we have for any \(t\in [0,T)\),

$$\begin{aligned} P(t)&= \lim _{i\rightarrow \infty }P_i(t) = \lim _{i\rightarrow \infty }\left[ P_i(0)-\int _0^t\Big (P_iA+A^\top P_i+Q-P_iBR^{-1}B^\top P_i\Big )ds\right] \\&= P(0)-\int _0^t\Big (PA+A^\top P+Q-PBR^{-1}B^\top P\Big )ds. \end{aligned}$$

This implies that \(P(\cdot )\) satisfies the differential equation in (3.2). Finally, since \(P(t)\geqslant P_i(t)\) for all \(i\geqslant 1\) and all \(t\in [0,T)\), we have

$$\begin{aligned} \liminf _{t\rightarrow T}\min \sigma (P(t))\geqslant \lim _{t\rightarrow T}\min \sigma (P_i(t))=i, \quad \forall \, i\geqslant 1, \end{aligned}$$

and hence \(\lim _{t\rightarrow T}\min \sigma (P(t))=\infty \).

The proof is completed. \(\square \)

Remark 5.3

From the proof of Theorem 5.2, we have the following facts:

(i) The solution \(P_i(t)\) of the Riccati equation (4.4) is increasing in i and converges to P(t), the solution of the Riccati equation (3.2), for all \(t\in [0,T)\) as \(i\rightarrow \infty \).

(ii) The sequence \(\{P_i(t)\}^\infty _{i=1}\) is uniformly bounded on compact subintervals of [0, T).
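
Both facts can be observed numerically. The following minimal sketch integrates (4.4) backward for scalar data matching Example 7.1 below (A = B = R = 1, Q = 0, T = 1, for which the solution of (3.2) is \(P(s)=2/(1-e^{2(s-T)})\)), and shows \(P_i(0)\) increasing toward P(0):

```python
import numpy as np
from scipy.integrate import solve_ivp

T = 1.0
P_exact = lambda s: 2.0 / (1.0 - np.exp(2.0 * (s - T)))  # solution of (3.2)

def riccati_rhs(s, p):      # scalar case of (4.4): A = B = R = 1, Q = 0
    return p**2 - 2.0 * p

for i in (10, 100, 1000, 10000):
    sol = solve_ivp(riccati_rhs, (T, 0.0), [float(i)],
                    method="Radau", rtol=1e-10, atol=1e-10)
    print(i, sol.y[0, -1])          # P_i(0), increasing in i
print("limit:", P_exact(0.0))       # P(0) = 2 / (1 - e^{-2}) ~ 2.3130
```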

To show the unique solvability of the Riccati equation (3.3), let us fix \(t\in [0,T)\) and define for \(t\leqslant s\leqslant T\),

$$\begin{aligned} {\bar{A}}(s)&= -A(T+t-s),&\qquad {\bar{B}}(s)&= -B(T+t-s), \\ {\bar{Q}}(s)&= Q(T+t-s),&\qquad {\bar{R}}(s)&= R(T+t-s). \end{aligned}$$

For \(t\leqslant r<T\), consider the controlled ODE

$$\begin{aligned} \left\{ \begin{aligned} \dot{{\bar{X}}}(s)&= {\bar{A}}(s){\bar{X}}(s)+{\bar{B}}(s)v(s), \quad s\in [r,T],\\ {\bar{X}}(r)&= y, \end{aligned}\right. \end{aligned}$$

and the cost functional

$$\begin{aligned} {\bar{J}}(r,y,x;v(\cdot ))\triangleq \int _r^T\Big [\big \langle {\bar{Q}}(s){\bar{X}}(s),{\bar{X}}(s)\big \rangle +\big \langle {\bar{R}}(s)v(s),v(s)\big \rangle \Big ]ds. \end{aligned}$$

Using the criterion (2.3), it is not hard to show that system \([{\bar{A}},{\bar{B}}]\) is completely controllable. Since \({\bar{Q}}(\cdot )\geqslant 0\) and \({\bar{R}}(\cdot )\gg 0\) on [tT], we have by Theorem 5.2 that the Riccati equation

$$\begin{aligned} \left\{ \begin{aligned}&{\dot{\Sigma }}(s)+\Sigma (s){\bar{A}}(s)+{\bar{A}}(s)^\top \Sigma (s)+{\bar{Q}}(s)\\&\quad -\Sigma (s){\bar{B}}(s){\bar{R}}(s)^{-1}{\bar{B}}(s)^\top \Sigma (s)=0, \quad s\in [t,T),\\&\lim _{s\rightarrow T}\min \sigma (\Sigma (s))=\infty \end{aligned}\right. \end{aligned}$$

admits a unique solution \(\Sigma (\cdot )\in C([t,T);\mathbb {S}_+^n)\). For the initial pair \((t,y)\) and target \(x=0\), let \(v^*(\cdot )\) be the corresponding optimal control of the above problem. By Theorem 5.2, the corresponding value is

$$\begin{aligned} {\bar{V}}(t,y,0)\triangleq \inf _{v(\cdot )\in {{\mathcal {U}}}(t,y,0)}{\bar{J}}(t,y,0;v(\cdot ))=\langle \Sigma (t)y,y\rangle . \end{aligned}$$

By reversing time,

$$\begin{aligned} \tau =T+t-s, \quad s\in [t,T], \end{aligned}$$

we see that

$$\begin{aligned} u^*(s)\triangleq v^*(T+t-s),\quad s\in [t,T] \end{aligned}$$

is the unique optimal control of Problem (CLQ*) for the initial pair (t, 0) and target y, and that \(\Pi (s)=\Sigma (T+t-s)\) is the unique solution to the Riccati equation (3.3). This gives us the following result.

Proposition 5.4

Let (H1)–(H2) hold. Then for any \(t\in [0,T)\), the Riccati equation (3.3) admits a unique solution \(\Pi (\cdot )\in C((t,T];\mathbb {S}_+^n)\). Moreover,

$$\begin{aligned} V(t,0,y)\triangleq \inf _{u(\cdot )\in {{\mathcal {U}}}(t,0,y)}J(t,0,y;u(\cdot ))=\langle \Pi (T)y,y\rangle , \quad \forall y\in \mathbb {R}^n. \end{aligned}$$

Proof of Theorem 3.1

The proof follows directly from a combination of Theorem 5.2 and Proposition 5.4. \(\square \)

6 Proof of Theorem 3.4

In this section we prove the second main result of the paper, Theorem 3.4. Our proof requires some technical lemmas, which we establish first.

Lemma 6.1

Let \(1<p<\infty \) and let functions \(f_n\in L^p\) converge almost everywhere (or in measure) to a function f. Then, a necessary and sufficient condition for convergence of \(\{f_n\}\) to f in the weak topology of \(L^p\) is the boundedness of \(\{f_n\}\) in the norm of \(L^p\).

Proof

The proof can be found in [4, page 282]. \(\square \)

For arbitrary functions \(Q(\cdot )\geqslant 0\) in \(L^1(0,T;\mathbb {S}^n)\) and \(R(\cdot )\gg 0\) in \(L^\infty (0,T;\mathbb {S}^m)\), let \(P(\cdot )\) be the corresponding solution of the Riccati equation (3.2), and let \(\Phi (\cdot )\) and \(\Psi (\cdot )\) be the solutions to Eqs. (3.4) and (3.5), respectively. Recall from Remark 5.3 that the solution \(P_i(\cdot )\) of (4.4) converges to \(P(\cdot )\) on [0, T) as \(i\rightarrow \infty \). We have the following two lemmas.

Lemma 6.2

For \(i=1,2,\ldots ,\) let \(\Phi _i(\cdot )\) be the solution to

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Phi }}_i(s)&= \big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]\Phi _i(s), \quad s\in [0,T], \\ \Phi _i(0)&= I. \end{aligned}\right. \end{aligned}$$

We have the following:

(i) \(\{\Phi _i(s)\}^\infty _{i=1}\) is uniformly bounded on [0, T], and

$$\begin{aligned} \lim _{i\rightarrow \infty }\Phi _i(s)=\Phi (s), \quad \forall s\in [0,T]. \end{aligned}$$

(ii) \(\{\Phi _i(s)^{-1}\}^\infty _{i=1}\) is uniformly bounded on compact subintervals of [0, T).

Proof

(i) Let \(A_i(s)=A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\). By the integration by parts formula, we have for any \(s\in [0,T]\),

$$\begin{aligned}&\Phi _i(s)^\top P_i(s)\Phi _i(s)-P_i(0) \nonumber \\&\quad = \int _0^s\Phi _i(r)^\top \big [A_i(r)^\top P_i(r)+\dot{P}_i(r)+P_i(r)A_i(r)\big ]\Phi _i(r)dr \nonumber \\&\quad = -\int _0^s\Phi _i(r)^\top \big [Q(r)+P_i(r)B(r)R(r)^{-1}B(r)^\top P_i(r)\big ]\Phi _i(r) dr\leqslant 0. \end{aligned}$$
(6.1)

Since for any \(i\geqslant 1\), \(P_i(s)\geqslant P_1(s)>0\) for all \(s\in [0,T]\) and \(P(0)\geqslant P_i(0)\) (see Remark 5.3 (i)), there exists a constant \(\mu >0\) such that

$$\begin{aligned} \mu \Phi _i(s)^\top \Phi _i(s)\leqslant \Phi _i(s)^\top P_1(s)\Phi _i(s) \leqslant \Phi _i(s)^\top P_i(s)\Phi _i(s)\leqslant P_i(0)\leqslant P(0). \end{aligned}$$

This implies that

$$\begin{aligned} |\Phi _i(s)|^2=\mathrm{tr}\,\big [\Phi _i(s)^\top \Phi _i(s)\big ]\leqslant \mu ^{-1}\mathrm{tr}\,[P(0)], \quad \forall i\geqslant 1,~\forall s\in [0,T]. \end{aligned}$$

The first assertion follows readily. For the second, denote

$$\begin{aligned} \Pi (s) = B(s)R(s)^{-1}B(s)^\top P(s), \quad \Pi _i(s) = B(s)R(s)^{-1}B(s)^\top P_i(s), \end{aligned}$$

and note that for \(s\in [0,T)\),

$$\begin{aligned} \Phi _i(s)-\Phi (s)=\int _0^s\Big \{A_i(r)\big [\Phi _i(r)-\Phi (r)\big ]+\big [\Pi (r)-\Pi _i(r)\big ]\Phi (r)\Big \}dr. \end{aligned}$$

By the Gronwall inequality, we have

$$\begin{aligned} \big |\Phi _i(s)-\Phi (s)\big |\leqslant \int _0^se^{\int _r^s |A_i(u)|du}|\Pi (r)-\Pi _i(r)||\Phi (r)|dr, \quad s\in [0,T). \end{aligned}$$

Since \(P_i(s)\rightarrow P(s)\) on [0, T) and \(\{P_i(s)\}^\infty _{i=1}\) is uniformly bounded on compact subintervals of [0, T) (see Remark 5.3 (ii)), the dominated convergence theorem yields

$$\begin{aligned} \lim _{i\rightarrow \infty }\Phi _i(s)=\Phi (s), \quad \forall s\in [0,T). \end{aligned}$$

For the case \(s=T\), (6.1) gives

$$\begin{aligned} i\Phi _i(T)^\top \Phi _i(T)=\Phi _i(T)^\top P_i(T)\Phi _i(T)\leqslant P_i(0)\leqslant P(0), \quad \forall i\geqslant 1, \end{aligned}$$

from which follows

$$\begin{aligned} \lim _{i\rightarrow \infty }\Phi _i(T)=0=\Phi (T). \end{aligned}$$

(ii) One has

$$\begin{aligned} \left\{ \begin{aligned}&{d\over ds}\big [\Phi _i(s)^{-1}\big ]=-\Phi _i(s)^{-1}A_i(s), \quad s\in [0,T], \\&\Phi _i(0)^{-1}=I. \end{aligned}\right. \end{aligned}$$

Thus,

$$\begin{aligned} |\Phi _i(s)^{-1}|\leqslant |I|+\int _0^s|A_i(r)||\Phi _i(r)^{-1}|dr, \end{aligned}$$

and by the Gronwall inequality we have

$$\begin{aligned} |\Phi _i(s)^{-1}|\leqslant |I|e^{\int _0^s|A_i(r)|dr}=\sqrt{n}\exp \left\{ \int _0^s \Big |A(r)-B(r)R(r)^{-1}B(r)^\top P_i(r)\Big |dr\right\} . \end{aligned}$$

The result then follows immediately from the uniform boundedness of \(\{P_i(s)\}^\infty _{i=1}\) on compact subintervals of [0, T). \(\square \)

Lemma 6.3

For \(i=1,2,\ldots ,\) let \(\eta _i(\cdot )\) be the solution to (4.6). Then \(\{\eta _i(s)\}^\infty _{i=1}\) is uniformly bounded on compact subintervals of [0, T), and

$$\begin{aligned} \lim _{i\rightarrow \infty }\eta _i(s)=-\big [\Psi (T)\Phi (s)^{-1}\big ]^\top y, \quad \forall s\in [0,T).\end{aligned}$$
(6.2)

Proof

For \(s\in [0,T]\), let us denote

$$\begin{aligned} \theta _i(s)=-i\big [\Phi _i(T)\Phi _i(s)^{-1}\big ]^\top y. \end{aligned}$$

Then \(\theta _i(T)=-iy\). By differentiating and using the fact that

$$\begin{aligned} {d\over ds}\big [\Phi _i(s)^{-1}\big ]=-\Phi _i(s)^{-1}\big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ], \end{aligned}$$

we obtain

$$\begin{aligned} {d\over ds}\theta _i(s)&= i\Big \{\Phi _i(T)\Phi _i(s)^{-1}\big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]\Big \}^\top y \\&= \big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]^\top i\big [\Phi _i(T)\Phi _i(s)^{-1}\big ]^\top y \\&= -\big [A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]^\top \theta _i(s). \end{aligned}$$

Thus, \(\theta _i(\cdot )\) satisfies the same equation as \(\eta _i(\cdot )\). By the uniqueness of solutions we must have

$$\begin{aligned} \eta _i(s)=\theta _i(s)=-i\big [\Phi _i(T)\Phi _i(s)^{-1}\big ]^\top y, \quad s\in [0,T]. \end{aligned}$$
(6.3)

By Lemma 6.2, \(\lim _{i\rightarrow \infty }\Phi _i(s)=\Phi (s)\) for all \(s\in [0,T]\). So in order to prove (6.2), it remains to show

$$\begin{aligned} \lim _{i\rightarrow \infty }i\Phi _i(T)=\Psi (T). \end{aligned}$$
(6.4)

For this, let \(\Psi _i(s)=P_i(s)\Phi _i(s)\). By differentiating we get

$$\begin{aligned} {\dot{\Psi }}_i(s)&= \dot{P}_i(s)\Phi _i(s)+P_i(s){\dot{\Phi }}_i(s)\\&= \big [\dot{P}_i(s)+P_i(s)A(s)-P_i(s)B(s)R(s)^{-1}B(s)^\top P_i(s)\big ]\Phi _i(s)\\&= -A(s)^\top P_i(s)\Phi _i(s)-Q(s)\Phi _i(s)\\&= -A(s)^\top \Psi _i(s)-Q(s)\Phi _i(s). \end{aligned}$$

Thus, \(P_i(\cdot )\Phi _i(\cdot )\) solves the following ODE:

$$\begin{aligned} \left\{ \begin{aligned} {\dot{\Psi }}_i(s)&= -A(s)^\top \Psi _i(s)-Q(s)\Phi _i(s), \quad s\in [0,T], \\ \Psi _i(0)&= P_i(0). \end{aligned}\right. \end{aligned}$$

Since \(P_i(0)\rightarrow P(0)\), \(\Phi _i(s)\rightarrow \Phi (s)\) as \(i\rightarrow \infty \) and \(\{\Phi _i(s)\}^\infty _{i=1}\) is uniformly bounded on [0, T], we conclude by the Gronwall inequality that

$$\begin{aligned} \lim _{i\rightarrow \infty }\Psi _i(s)=\Psi (s), \quad \forall s\in [0,T]. \end{aligned}$$

In particular,

$$\begin{aligned} \lim _{i\rightarrow \infty }i\Phi _i(T)=\lim _{i\rightarrow \infty }P_i(T)\Phi _i(T)=\lim _{i\rightarrow \infty }\Psi _i(T)=\Psi (T). \end{aligned}$$

Finally, let \(T^\prime \in (0,T)\) be arbitrary. By (6.4) the sequence \(\{i\Phi _i(T)\}_{i=1}^\infty \) is bounded, and by Lemma 6.2 (ii), \(\{\Phi _i(s)^{-1}\}^\infty _{i=1}\) is uniformly bounded on \([0,T^\prime ]\). It then follows from the relation (6.3) that \(\{\eta _i(s)\}^\infty _{i=1}\) is uniformly bounded on \([0,T^\prime ]\). Since \(T^\prime \in (0,T)\) is arbitrary we see that \(\{\eta _i(s)\}^\infty _{i=1}\) is actually uniformly bounded on any compact subinterval of [0, T). \(\square \)

Proof of Theorem 3.4

For arbitrary but fixed \(\lambda \in \Gamma =\{(\lambda _1,\ldots ,\lambda _k)^\top :\lambda _i\geqslant 0,~i=1,\ldots ,k\}\), denote

$$\begin{aligned} Q(s)\equiv Q(\lambda ,s)=Q_0(s)+\sum _{i=1}^k \lambda _iQ_i(s),\qquad R(s)\equiv R(\lambda ,s)=R_0(s)+\sum _{i=1}^k \lambda _iR_i(s). \end{aligned}$$

Let \(P(\cdot )\equiv P(\lambda ,\cdot )\), \(\Pi (\cdot )\equiv \Pi (\lambda ,\cdot )\), \(\Phi (\cdot )\equiv \Phi (\lambda ,\cdot )\), and \(\Psi (\cdot )\equiv \Psi (\lambda ,\cdot )\) be the solutions to (3.7), (3.8), (3.9), and (3.10), respectively. According to Theorem 4.1, it suffices to show

$$\begin{aligned} V(\lambda ,t,x,y)&\triangleq \inf _{u(\cdot )\in {{\mathcal {U}}}(t,x,y)}J(\lambda ,t,x,y;u(\cdot )) \nonumber \\&= \langle P(t)x,x\rangle -2\langle \Psi (T)\Phi (t)^{-1}x,y\rangle +\langle \Pi (T)y,y\rangle , \end{aligned}$$
(6.5)

and that the (unique) optimal control of Problem (CLQ*) with the cost functional \(J(\lambda ,t,x,y;u(\cdot ))\) is given by

$$\begin{aligned} u^*(s)=-R(s)^{-1}B(s)^\top \big [P(s)X^*(s)+\eta (s)\big ], \quad s\in [t,T),\end{aligned}$$
(6.6)

where

$$\begin{aligned} \eta (s)=-\big [\Psi (T)\Phi (s)^{-1}\big ]^\top y, \quad s\in [0,T), \end{aligned}$$

and \(X^*(\cdot )\) is the solution to

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}^*(s)&= \big [A(s)-B(s)R(s)^{-1}B(s)^\top P(s)\big ]X^*(s)\\&\quad -B(s)R(s)^{-1}B(s)^\top \eta (s), \quad s\in [t,T), \\ X^*(t)&= x. \end{aligned}\right. \end{aligned}$$

For this we use Theorem 4.4. Recall from Sect. 4 that the value function of the corresponding Problem (LQ)\(_i\) is

$$\begin{aligned} V_i(t,x,y)=\langle P_i(t)x,x\rangle +2\langle \eta _i(t),x\rangle +V_i(t,0,y) \end{aligned}$$

and converges pointwise to \(V(\lambda ,t,x,y)\). Letting \(i\rightarrow \infty \), we obtain (6.5) from Remark 5.3 (i), Lemma 6.3, and Proposition 5.4. To prove (6.6), let \(X_i^*(\cdot )\) be the solutions to (4.7) and set

$$\begin{aligned} \Pi (s)=B(s)R(s)^{-1}B(s)^\top , \quad A_i(s)=A(s)-B(s)R(s)^{-1}B(s)^\top P_i(s). \end{aligned}$$

Then we have for any \(t\leqslant s<T\),

$$\begin{aligned} X^*_i(s)-X^*(s)&= \int _t^s\Big \{A_i(r)[X^*_i(r)-X^*(r)] + \Pi (r)[P(r)-P_i(r)]X^*(r) \\&\qquad \qquad +\Pi (r)[\eta (r)-\eta _i(r)]\Big \}dr. \end{aligned}$$

An application of the Gronwall inequality yields

$$\begin{aligned}&|X^*_i(s)-X^*(s)|\\&\quad \leqslant \int _t^s e^{\int _r^s |A_i(u)|du}|\Pi (r)| \Big \{|P(r)-P_i(r)||X^*(r)|+|\eta (r)-\eta _i(r)|\Big \}dr \end{aligned}$$

for all \(t\leqslant s<T\). Since

$$\begin{aligned} \lim _{i\rightarrow \infty }P_i(s)=P(s),\quad \lim _{i\rightarrow \infty }\eta _i(s)=\eta (s); \quad \forall s\in [0,T), \end{aligned}$$

and the sequences \(\{P_i(s)\}^\infty _{i=1}\) and \(\{\eta _i(s)\}^\infty _{i=1}\) are uniformly bounded on compact subintervals of [0, T) (see Remark 5.3 (ii) and Lemma 6.3), we have by the dominated convergence theorem,

$$\begin{aligned} \lim _{i\rightarrow \infty }X^*_i(s)=X^*(s), \quad \forall s\in [0,T). \end{aligned}$$

It follows that the sequence \(\{u^*_i(\cdot )\}^\infty _{i=1}\) defined by (4.5) converges to \(u^*(s)\) for all \(s\in [t,T)\) as \(i\rightarrow \infty \). On the other hand, from the proof of Theorem 4.4 we see that \(\{u^*_i(\cdot )\}^\infty _{i=1}\) is bounded in the norm of \(L^2(t,T;\mathbb {R}^m)\). Thus, by Lemma 6.1, \(\{u^*_i(\cdot )\}^\infty _{i=1}\) converges weakly to \(u^*(\cdot )\) in \(L^2(t,T;\mathbb {R}^m)\). The desired result then follows from Theorem 4.4 (ii). \(\square \)

7 Examples

In this section we present two examples illustrating the results obtained. In the first example, the integral quadratic constraints are absent, in which case the optimal parameter \(\lambda ^*\) in Theorem 3.4 is obviously zero. Such problems might represent the selection of a thrust program for an aircraft that must reach its destination within a given time.

Example 7.1

Consider the one-dimensional state equation

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}(s)&= X(s)+u(s),\quad s\in [0,T],\\ X(0)&= x, \end{aligned}\right. \end{aligned}$$

and the cost functional

$$\begin{aligned} J(x,y;u(\cdot ))=\int _0^T |u(s)|^2 ds. \end{aligned}$$

Given the initial state x and the target y, we seek the control \(u^*(\cdot )\in L^2(0,T;\mathbb {R})\) minimizing \(J(x,y;u(\cdot ))\), while satisfying the terminal constraint

$$\begin{aligned} X^*(T)\equiv X(T;x,u^*(\cdot ))=y. \end{aligned}$$

So \({1\over 2}J(x,y;u^*(\cdot ))\) gives the least control energy needed to reach the target y at time T from the initial state x.

We now apply Theorem 3.4 to find the optimal control \(u^*(\cdot )\). As mentioned at the beginning of this section, the optimal parameter is zero. Thus the corresponding Riccati equations become

$$\begin{aligned}&\left\{ \begin{aligned}&\dot{P}(s)+2P(s)-P(s)^2=0, \quad s\in [0,T),\\&\lim _{s\rightarrow T}P(s)=\infty , \end{aligned}\right. \\&\left\{ \begin{aligned}&{\dot{\Pi }}(s)+2\Pi (s)+\Pi (s)^2=0, \quad s\in (0,T],\\&\lim _{s\rightarrow 0}\Pi (s)=\infty , \end{aligned}\right. \end{aligned}$$

and the corresponding ODEs become

$$\begin{aligned}&\left\{ \begin{aligned} {\dot{\Phi }}(s)&= [1-P(s)]\Phi (s), \quad s\in [0,T),\\ \Phi (0)&= 1, \end{aligned}\right. \\&\left\{ \begin{aligned} {\dot{\Psi }}(s)&= -\Psi (s), \quad s\in [0,T],\\ \Psi (0)&= P(0). \end{aligned}\right. \end{aligned}$$

A straightforward calculation leads to

$$\begin{aligned} P(s)={2\over 1-e^{2(s-T)}},\quad s\in [0,T);\qquad \Pi (s)={2\over e^{2s}-1},\quad s\in (0,T], \end{aligned}$$

and by the variation of constants formula,

$$\begin{aligned} \Phi (s)={e^{2T-s}-e^s\over e^{2T}-1},\quad \Psi (s)={2e^{2T-s}\over e^{2T}-1},\quad s\in [0,T]. \end{aligned}$$

Now the closed-loop system reads

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}^*(s)&= [1-P(s)]X^*(s)-\eta (s), \quad s\in [0,T), \\ X^*(0)&= x, \end{aligned}\right. \end{aligned}$$

where

$$\begin{aligned} \eta (s)=-\big [\Psi (T)\Phi (s)^{-1}\big ]^\top y=-{2e^Ty\over e^{2T-s}-e^s},\quad s\in [0,T). \end{aligned}$$

A bit of computation using the variation of constants formula shows that

$$\begin{aligned} X^*(s)={e^{2T-s}-e^s\over e^{2T}-1}\,x +{(e^s-e^{-s})e^T\over e^{2T}-1}\,y,\quad s\in [0,T]. \end{aligned}$$

Thus, the optimal control \(u^*(\cdot )\) is given by

$$\begin{aligned} u^*(s)= -[P(s)X^*(s)+\eta (s)]={2e^{T-s}\over 1-e^{2T}}\big (e^Tx-y\big ),\quad s\in [0,T], \end{aligned}$$

and the least control energy needed to reach the target y at time T from the initial state x is given by

$$\begin{aligned} {1\over 2}J(x,y;u^*(\cdot ))&= {1\over 2}\Big [\langle P(0)x,x\rangle -2\langle \Psi (T)\Phi (0)^{-1}x,y\rangle +\langle \Pi (T)y,y\rangle \Big ]\\&= {1\over e^{2T}-1}\big (e^Tx-y\big )^2. \end{aligned}$$
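
The closed-form expressions above are easy to verify numerically; here is a minimal sketch (the values of T, x, and y are our own illustrative choices):

```python
import numpy as np

T, x, y = 1.0, 1.0, 0.5        # illustrative values (our own choice)

P   = lambda s: 2.0 / (1.0 - np.exp(2.0 * (s - T)))
eta = lambda s: -2.0 * np.exp(T) * y / (np.exp(2.0 * T - s) - np.exp(s))
Xst = lambda s: ((np.exp(2.0 * T - s) - np.exp(s)) * x
                 + (np.exp(s) - np.exp(-s)) * np.exp(T) * y) / (np.exp(2.0 * T) - 1.0)
ust = lambda s: 2.0 * np.exp(T - s) * (np.exp(T) * x - y) / (1.0 - np.exp(2.0 * T))

s = np.linspace(0.0, T - 1e-3, 2000)
print(np.max(np.abs(ust(s) + P(s) * Xst(s) + eta(s))))   # u* = -[P X* + eta]: ~ 0
print(Xst(T) - y)                                        # terminal constraint: ~ 0

# Least control energy: (1/2) J = (e^T x - y)^2 / (e^{2T} - 1).
energy = 0.5 * np.sum(ust(s)**2) * (s[1] - s[0])         # crude Riemann sum
print(energy, (np.exp(T) * x - y)**2 / (np.exp(2.0 * T) - 1.0))
```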

Now we present an example in which the control energy is limited. Such problems may arise when minimizing the cost of completing a flight within a given time with limited fuel.

Example 7.2

Consider the one-dimensional state equation

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}(s)&= X(s)+u(s), \quad s\in [0,1],\\ X(0)&= 1. \end{aligned}\right. \end{aligned}$$

We want to minimize

$$\begin{aligned} J_0(u(\cdot )) = \int _0^1\Big [15|X(s)|^2+|u(s)|^2\Big ]ds \end{aligned}$$

over all controls \(u(\cdot )\in L^2(0,1;\mathbb {R})\) subject to

$$\begin{aligned} X(1)=0, \quad J_1(u(\cdot ))\equiv \int _0^1|u(s)|^2ds\leqslant 3. \end{aligned}$$

To this end, we note that in this example the equation for \(P(\lambda ,\cdot )\) (\(\lambda \geqslant 0\)) becomes

$$\begin{aligned} \left\{ \begin{aligned}&\dot{P}(\lambda ,s)+2P(\lambda ,s)+15-{P(\lambda ,s)^2\over 1+\lambda }=0, \quad s\in [0,1),\\&\lim _{s\rightarrow 1}P(\lambda ,s)=\infty . \end{aligned}\right. \end{aligned}$$

It is easily verified that

$$\begin{aligned} P(\lambda ,s) = \lambda +1+\sqrt{(\lambda +1)(\lambda +16)} + {2\sqrt{(\lambda +1)(\lambda +16)}\,\Gamma (\lambda ,s)\over \Gamma (\lambda ,1)-\Gamma (\lambda ,s)},\quad s\in [0,1), \end{aligned}$$

where

$$\begin{aligned} \Gamma (\lambda ,s)=e^{{2\sqrt{(\lambda +1)(\lambda +16)}\over \lambda +1}s}. \end{aligned}$$

By calculating the derivative of

$$\begin{aligned} L(\lambda )\triangleq P(\lambda ,0)-3\lambda , \end{aligned}$$

we obtain the optimal parameter \(\lambda ^*\approx 0.1869\). Now the closed-loop system reads

$$\begin{aligned} \left\{ \begin{aligned} \dot{X}^*(s)&= \left[ 1-{P(\lambda ^*,s)\over 1+\lambda ^*}\right] X^*(s),\quad s\in [0,1), \\ X^*(0)&= 1. \end{aligned}\right. \end{aligned}$$

By the variation of constants formula we have

$$\begin{aligned} X^*(s)= {\Gamma (\lambda ^*,1)-\Gamma (\lambda ^*,s)\over (\Gamma (\lambda ^*,1)-1)\sqrt{\Gamma (\lambda ^*,s)}}, \quad s\in [0,1]. \end{aligned}$$

Thus, the optimal control \(u^*(\cdot )\) is given by

$$\begin{aligned} u^*(s)= -{P(\lambda ^*,s)\over 1+\lambda ^*}X^*(s)= {(\alpha +1)e^{\alpha (2-s)}+(\alpha -1)e^{\alpha s}\over 1-e^{2\alpha }}, \quad s\in [0,1], \end{aligned}$$

where

$$\begin{aligned} \alpha = {\sqrt{(\lambda ^*+1)(\lambda ^*+16)}\over \lambda ^*+1}\approx 3.6929. \end{aligned}$$
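
The reported optimal parameter can be reproduced by maximizing \(L(\lambda )=P(\lambda ,0)-3\lambda \) numerically, using the closed form of \(P(\lambda ,s)\) at \(s=0\) (note that \(\Gamma (\lambda ,0)=1\)). A minimal sketch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def P0(lam):
    """P(lambda, 0) from the closed form above, using Gamma(lambda, 0) = 1."""
    k = np.sqrt((lam + 1.0) * (lam + 16.0))
    Gamma1 = np.exp(2.0 * k / (lam + 1.0))    # Gamma(lambda, 1)
    return lam + 1.0 + k + 2.0 * k / (Gamma1 - 1.0)

res = minimize_scalar(lambda lam: -(P0(lam) - 3.0 * lam),
                      bounds=(0.0, 10.0), method="bounded")
print(res.x)   # approximately 0.1869
```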

8 Conclusions

We have developed a systematic approach to constrained LQ optimal control problems based on duality theory and approximation techniques. The problem gives rise to a Riccati differential equation with infinite terminal value, a consequence of the terminal state being fixed rather than free. It is shown that by solving the Riccati equation and an optimal parameter selection problem, the optimal control can be represented as a target-dependent feedback of the current state. We investigate the Riccati equation extensively by a penalty method, and with the solutions of two Riccati-type equations, we explicitly solve a parameterized LQ problem without the integral quadratic constraints. This allows us to determine the optimal parameter by simply calculating derivatives. Our method also provides an alternative and useful viewpoint for studying optimal control of exactly controllable stochastic systems. Research on this topic is currently in progress.