Abstract
The paper considers minimization problems with a free right endpoint on a given time interval for control affine systems of differential equations. For this class of problems, we study an estimate for the number of different zeros of switching functions that determine the form of the corresponding optimal controls. This study is based on analyzing nonautonomous linear systems of differential equations for switching functions and the corresponding auxiliary functions. Nonautonomous linear systems of third order are considered in detail. In these systems, the variables are changed so that the matrix of the system is transformed into a special upper triangular form. As a result, the number of zeros of the corresponding switching functions is estimated using the generalized Rolle’s theorem. In the case of a linear system of third order, this transformation is carried out using functions that satisfy a nonautonomous system of quadratic differential equations of the same order. The paper presents two approaches that ensure the extensibility of solutions of a nonautonomous system of quadratic differential equations to a given time interval. The first approach uses differential inequalities and Chaplygin’s comparison theorem. The second approach combines splitting a nonautonomous system of quadratic differential equations into subsystems of lower order and applying the quasi-positivity condition to these subsystems.
INTRODUCTION
In the minimization problem with a free right endpoint on a given time interval for a control affine system of differential equations, the study of controls determined from the Pontryagin maximum principle [1, 2, 3] is closely related to the analysis of a nonautonomous linear system of differential equations for switching functions and corresponding auxiliary functions. These functions directly depend on the solutions of the corresponding adjoint system. Assuming that the maximum condition uniquely determines such controls almost everywhere, we can say that switching functions completely determine the form of optimal controls [1, 2, 3].
In turn, the study of the behavior of a switching function is associated with estimating the number of its different zeros on a given time interval. This directly depends on the possibility of reducing the matrix of the mentioned nonautonomous linear system of differential equations to a special upper triangular form [4, 5] using a change of variables. Functions that carry out such a change of variables satisfy a nonautonomous system of quadratic differential equations and, therefore, are defined only locally, in a neighborhood of a given initial condition. Consequently, it is important to extend the solution of such a system of quadratic equations from a certain neighborhood of the initial condition to a given time interval. This is because applying the generalized Rolle’s theorem to a transformed nonautonomous linear system of differential equations gives an estimate for the number of zeros of the corresponding switching function.
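To see why local existence is the best one can guarantee for quadratic systems, it is instructive to look at the simplest scalar example (a generic illustration, not the specific system studied below): the Riccati equation \(h^{\prime}=1+h^{2}\), \(h(0)=0\), has the solution \(h(t)=\tan t\), which escapes to infinity as \(t\to\pi/2\). A short numerical sketch:

```python
import math

def euler(f, h0, t0, t1, n):
    """Explicit Euler integration of h' = f(t, h); returns the final value."""
    h, t = h0, t0
    dt = (t1 - t0) / n
    for _ in range(n):
        h += dt * f(t, h)
        t += dt
    return h

# h' = 1 + h^2, h(0) = 0 has the exact solution h(t) = tan(t),
# which blows up at t = pi/2: the solution cannot be extended
# to an arbitrary interval [0, T] with T >= pi/2.
f = lambda t, h: 1.0 + h * h

print(euler(f, 0.0, 0.0, 1.0, 100000))  # close to tan(1), about 1.557
print(euler(f, 0.0, 0.0, 1.5, 100000))  # already large: tan(1.5) is about 14.1
```

This is exactly the obstruction that the extensibility statements discussed below are designed to rule out.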
In this regard, we mention paper [6], which provides a statement that guarantees the existence of a solution to a nonautonomous system of quadratic differential equations defined on a given time interval. The proof of this statement uses differential inequalities and Chaplygin’s comparison theorem. The present paper formulates such a statement and notes its applicability in the analysis of nonautonomous linear systems of differential equations for switching functions and corresponding auxiliary functions in various minimization problems from epidemiology.
In addition, this paper describes a new approach that ensures the extensibility of solutions to a nonautonomous system of quadratic differential equations to a given time interval. It combines splitting this system of quadratic equations into subsystems of lower dimensions and applying the quasi-positivity condition to these subsystems. The applicability of this approach in the analysis of nonautonomous linear systems of differential equations for switching functions and corresponding auxiliary functions in various minimization problems from medicine is also indicated.
1 SWITCHING FUNCTIONS IN OPTIMAL CONTROL PROBLEMS
On a given time interval \([0,T]\), consider a control system
$$x^{\prime}(t)=f(x(t))+\sum\limits_{i=1}^{m}u_{i}(t)g_{i}(x(t)),\quad x(0)=x_{0},\quad t\in[0,T],$$(1.1)where \(x(t)=(x_{1}(t),x_{2}(t),\dots,x_{n}(t))\) is a trajectory of system (1.1) satisfying the inclusion
$$x(t)\in G,\quad t\in[0,T].$$(1.2)
Here \(G\) is an open bounded set of \(\mathbb{R}^{n}\). We consider the components of the vector functions \(f(x)\) and \(g_{i}(x)\), \(i=1,\dots,m\), to be infinitely differentiable on the set \(G\). In applied problems, the role of \(G\) is most often played by the following sets:
$$G=\left\{x\in\mathbb{R}^{n}:0<x_{i}<x_{i}^{\max},\;i=1,\dots,n\right\},\quad G=\left\{x\in\mathbb{R}^{n}:x_{i}>0,\;i=1,\dots,n,\;\sum\limits_{i=1}^{n}x_{i}<M\right\},$$
where \(x_{i}^{\max}\), \(i=1,\dots,n\), and \(M\) are given constants.
For control system (1.1), inclusion (1.2) is usually established in advance, based on analyzing its differential equations. This inclusion is important for system (1.1) since it shows the physicality of its variables \(x_{i}(t)\), \(i=1,\dots,n\), as well as the correctness of writing the equations. In addition, inclusion (1.2) guarantees the existence of a solution \(x(t)\) to such a system on the entire interval \([0,T]\).
System (1.1) also contains control functions \(u_{i}(t)\), \(i=1,\dots,m\), satisfying the constraints
$$u_{i}^{\min}\leq u_{i}(t)\leq u_{i}^{\max},\quad i=1,\dots,m.$$(1.3)
We assume that the set of admissible controls \(\Omega(T)\) consists of all possible families of controls \((u_{1}(t),u_{2}(t),\dots,u_{m}(t))\) whose components \(u_{i}(t)\), \(i=1,\dots,m\), are Lebesgue measurable functions subject to the corresponding constraints (1.3) for almost all \(t\in[0,T]\).
Remark 1
The components \(x_{i}(t)\), \(i=1,\dots,n\), of a solution \(x(t)\) to system (1.1) corresponding to the set of controls \((u_{1}(t),u_{2}(t),\dots,u_{m}(t))\) from \(\Omega(T)\) are absolutely continuous functions.
We add an objective function into control system (1.1):
$$J(u_{1},\dots,u_{m})=\int_{0}^{T}\Bigl(f_{0}(x(t))+\sum\limits_{i=1}^{m}u_{i}(t)b_{i}(x(t))\Bigr)dt+g_{0}(x(T)),$$(1.4)where the functions \(b_{i}(x)\), \(i=1,\dots,m\), as well as \(f_{0}(x)\) and \(g_{0}(x)\) are infinitely differentiable on the set \(G\). For the objective function (1.4), we pose an optimal control problem consisting in minimizing this function on the set of admissible controls \(\Omega(T)\):
$$J(u_{1},\dots,u_{m})\rightarrow\min_{(u_{1},\dots,u_{m})\in\Omega(T)}.$$(1.5)
The properties of minimization problem (1.5) together with Theorem 4 from [3], Ch. 4, guarantee the existence of an optimal solution consisting of a family \((u_{1}^{*}(t),u_{2}^{*}(t),\dots,u_{m}^{*}(t))\) of optimal controls and the corresponding optimal solution \(x_{*}(t)=(x_{1}^{*}(t),x_{2}^{*}(t),\dots,x_{n}^{*}(t))\) to system (1.1).
To analyze an optimal solution, we use Pontryagin’s maximum principle ([2], Ch. 6, Theorem 1) as a necessary condition for optimality. To do this, we first write the Hamilton function corresponding to problem (1.5):
$$H(x,\psi,u)=\Bigl\langle f(x)+\sum\limits_{i=1}^{m}u_{i}g_{i}(x),\psi\Bigr\rangle-\Bigl(f_{0}(x)+\sum\limits_{i=1}^{m}u_{i}b_{i}(x)\Bigr),$$
where \(\psi=(\psi_{1},\psi_{2},\dots,\psi_{n})\) is the adjoint vector, \(u=(u_{1},u_{2},\dots,u_{m})\) is a control vector, and \(\langle r,s\rangle\) is the inner product of vectors \(r\) and \(s\) from \(\mathbb{R}^{n}\). Then we compute the required partial derivatives of this function:
$$\dfrac{\partial H}{\partial x}(x,\psi,u)=\left(\dfrac{\partial f}{\partial x}(x)+\sum\limits_{i=1}^{m}u_{i}\dfrac{\partial g_{i}}{\partial x}(x)\right)^{\top}\psi-\left(\dfrac{\partial f_{0}}{\partial x}(x)+\sum\limits_{i=1}^{m}u_{i}\dfrac{\partial b_{i}}{\partial x}(x)\right),$$
$$\dfrac{\partial H}{\partial u_{i}}(x,\psi,u)=\langle g_{i}(x),\psi\rangle-b_{i}(x),\quad i=1,\dots,m.$$
Here \(\dfrac{\partial f}{\partial x}(x)\) and \(\dfrac{\partial g_{i}}{\partial x}(x)\), \(i=1,\dots,m\), are square matrices of order \(n\), and \(\dfrac{\partial f_{0}}{\partial x}(x)\) and \(\dfrac{\partial b_{i}}{\partial x}(x)\), \(i=1,\dots,m\), are column vectors of dimension \(n\). The symbol \({}^{\top}\) denotes transposition.
Then, according to Pontryagin’s maximum principle, for a family of optimal controls \(u_{*}(t)=(u_{1}^{*}(t),u_{2}^{*}(t),\dots,u_{m}^{*}(t))\) and an optimal trajectory \(x_{*}(t)=(x_{1}^{*}(t),x_{2}^{*}(t),\dots,x_{n}^{*}(t))\), there is an adjoint function \(\psi_{*}(t)=(\psi_{1}^{*}(t),\psi_{2}^{*}(t),\dots,\psi_{n}^{*}(t))\) such that
-
the function \(\psi_{*}(t)\) is an absolutely continuous nontrivial solution to the adjoint system
$$\left\{\begin{array}[]{ll}\psi_{*}^{\prime}(t)=&-\dfrac{\partial H}{\partial x }(x_{*}(t),\psi_{*}(t),u_{*}(t))\\ =&-\left(\dfrac{\partial f}{\partial x}(x_{*}(t))+\sum\limits_{i=1}^{m}u_{i}^{ *}(t)\dfrac{\partial g_{i}}{\partial x}(x_{*}(t))\right)^{\top}\psi_{*}(t)\\ &+\left(\dfrac{\partial f_{0}}{\partial x}(x_{*}(t))+\sum\limits_{i=1}^{m}u_{i }^{*}(t)\dfrac{\partial b_{i}}{\partial x}(x_{*}(t))\right),\\ \psi_{*}(T)=&-\dfrac{\partial g_{0}}{\partial x}(x_{*}(T)),\end{array}\right.$$(1.6)where \(\dfrac{\partial g_{0}}{\partial x}(x)\) is a column vector of order \(n\);
-
the optimal controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), provide the maximum of the Hamiltonian \(H(x_{*}(t),\) \(\psi_{*}(t),u)\) in the variables \(u_{i}\), \(i=1,\dots,m\), satisfying constraints (1.3). Then, for these controls,
$$u_{i}^{*}(t)=\left\{\begin{array}[]{lll}u_{i}^{\max},&\mbox{if}&L_{i}(t)>0,\\ \mbox{any}\;u\in[u_{i}^{\min},u_{i}^{\max}],&\mbox{if}&L_{i}(t)=0,\\ u_{i}^{\min},&\mbox{if}&L_{i}(t)<0,\end{array}\right.\;\;\;i=1,\dots,m,$$(1.7)where the absolutely continuous functions
$$L_{i}(t)=\langle g_{i}(x_{*}(t)),\psi_{*}(t)\rangle-b_{i}(x_{*}(t)),\quad i=1, \dots,m,$$(1.8)corresponding to the controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), are switching functions. They describe the behavior of the corresponding optimal controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), by formulas (1.7).
As a result, a two-point boundary value problem of the maximum principle is formed consisting of systems (1.1) and (1.6) as well as relations (1.7).
The analysis of formulas (1.7) and (1.8) determines the properties of the optimal controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), and of the corresponding switching functions \(L_{i}(t)\), \(i=1,\dots,m\) ([7], Ch. 2). Namely, the controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), can have a bang-bang form and switch between the corresponding values \(u_{i}^{\min}\) and \(u_{i}^{\max}\), \(i=1,\dots,m\). This occurs when passing through points at which the switching functions \(L_{i}(t)\), \(i=1,\dots,m\), vanish and change sign. Such points are called switching points. In addition to the intervals where the controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\), are bang-bang functions, these controls may also contain so-called singular regimes. This occurs when one of the switching functions \(L_{i}(t)\), \(i=1,\dots,m\), or several of them simultaneously, is identically zero on some subintervals of \([0,T]\) called singular intervals.
We will assume that a switching function \(L_{i}(t)\) does not vanish identically on any subinterval of \([0,T]\). Then it can vanish only at individual points, to the left and right of which it takes values of different signs on intervals of nonzero length. Accordingly, the question arises of estimating the number of distinct zeros of the switching function \(L_{i}(t)\) on the interval \((0,T)\). In turn, this leads to finding the corresponding estimate for the number of switchings of the control \(u_{i}^{*}(t)\) on a given interval. This paper is devoted to discussing how this can be done.
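As a sketch of how formula (1.7) translates a switching function into a bang-bang control, consider the following fragment (the switching function here is a hypothetical stand-in chosen only so that it has exactly two interior zeros):

```python
import math

def optimal_control(L_value, u_min, u_max):
    """Pointwise form of formula (1.7): the control sits at an extreme
    value according to the sign of the switching function."""
    if L_value > 0:
        return u_max
    if L_value < 0:
        return u_min
    return u_min  # L = 0: any admissible value may be taken; pick u_min

# Hypothetical switching function with two sign changes on (0, T).
T = 1.0
L = lambda t: math.cos(2.0 * math.pi * t / T)  # zeros at T/4 and 3T/4

ts = [k * T / 1000 for k in range(1001)]
u = [optimal_control(L(t), 0.0, 1.0) for t in ts]
switchings = sum(1 for a, b in zip(u, u[1:]) if a != b)
print(switchings)  # two switchings, one per interior zero of L
```

An upper bound on the number of zeros of the switching function therefore immediately bounds the number of switchings of the optimal control.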
Remark 2
Sometimes, due to the positivity of the components \(x_{i}^{*}(t)\), \(i=1,\dots,n\), of the solution \(x_{*}(t)\), a switching function \(L_{i}(t)\) admits the representation \(L_{i}(t)=d_{i}(t){\widetilde{L}}_{i}(t)\), where the function \(d_{i}(t)\) is positive and continuous on the interval \([0,T]\). Then, it is convenient to introduce a new switching function \({\widetilde{L}}_{i}(t)\) and carry out further reasoning for it since the functions \(L_{i}(t)\) and \({\widetilde{L}}_{i}(t)\) vanish simultaneously and change their signs synchronously.
By differentiating a switching function \(L_{i}(t)\) and using the corresponding equations of systems (1.1) and (1.6), one can obtain a system of differential equations, which will be a nonautonomous linear system of differential equations for the function \(L_{i}(t)\), and some auxiliary functions corresponding to it. The presence of such a system is an essential feature of the minimization problem (1.5) when estimating the number of distinct zeros of the switching function \(L_{i}(t)\).
Further considerations on estimating the number of distinct zeros of a switching function \(L_{i}(t)\), \(i=1,\dots,m\), are valid for each such function. Therefore, to simplify these arguments, we will omit the index \(i\) in \(L_{i}(t)\) and consider the case \(n=3\).
2 SYSTEMS OF THREE DIFFERENTIAL EQUATIONS FOR SWITCHING FUNCTIONS
On a given time interval \([0,T]\), consider a nonautonomous linear system of differential equations for a switching function \(L(t)\) and two auxiliary functions \(G(t)\) and \(P(t)\) corresponding to it:
$$\left\{\begin{array}[]{l}L^{\prime}(t)=a_{1}(t)L(t)+b_{1}(t)G(t),\\ G^{\prime}(t)=a_{2}(t)L(t)+b_{2}(t)G(t)+c_{2}(t)P(t),\\ P^{\prime}(t)=a_{3}(t)L(t)+b_{3}(t)G(t)+c_{3}(t)P(t),\end{array}\right.$$(2.1)where \(a_{i}(t)\) and \(b_{i}(t)\), \(i=1,2,3\), and \(c_{i}(t)\), \(i=2,3\), are given Lebesgue measurable bounded functions. We also assume that the functions \(b_{1}(t)\) and \(c_{2}(t)\) are positive almost everywhere on the interval \((0,T)\).
To estimate the number of distinct zeros of the function \(L(t)\), we use the transformation of system (2.1) proposed in [4, 5]. It consists in reducing the matrix of this system to the following upper triangular form:
where the asterisks \(*\) denote, in general, nonzero elements of the matrix. To do this, let us perform a nondegenerate change of variables in system (2.1):
where the functions \(h_{i}(t)\), \(i=1,2,3\), satisfy the nonautonomous system of quadratic differential equations
Let us add initial conditions to system (2.2):
$$h_{i}(0)=h_{i}^{0},\quad i=1,2,3.$$(2.3)
Then, by the theorem of existence and uniqueness of a solution to the Cauchy problem ([8], Theorem 1.1), the corresponding solution \(h(t)=(h_{1}(t),h_{2}(t),h_{3}(t))\) to system (2.2), (2.3) is defined on the half-open interval \([0,t_{\max})\), which is the largest possible half-open interval of the existence of such a solution, and either \(t_{\max}=+\infty\) or \(t_{\max}<+\infty\). In this case, system (2.1) is transformed to a system of the required form everywhere on \([0,t_{\max})\):
Reasoning by contradiction in such a system and simultaneously using the generalized Rolle’s theorem [4, 5] in the first two equations, we come to the conclusion that the switching function \(L(t)={\widetilde{L}}(t)\) has at most two distinct zeros on \([0,t_{\max})\). Since we would like to have such an estimate for the number of zeros of the function \(L(t)\) not on the half-open interval \([0,t_{\max})\) but on the closed interval \([0,T]\) (in this case, it may turn out that \(t_{\max}\leq T\)), we need a statement about the existence of a solution \(h(t)=(h_{1}(t),h_{2}(t),h_{3}(t))\) to system (2.2), (2.3) which would be defined on the entire interval \([0,T]\).
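The key step is the generalized Rolle argument, which in its simplest form reads as follows (a sketch under the assumption that the first equation of the transformed system has the type shown, with a positive coefficient at the next variable; the full statement is in [4, 5]):

```latex
% Suppose \widetilde{L}'(t) = a(t)\widetilde{L}(t) + b(t)\widetilde{G}(t) with b(t) > 0
% almost everywhere. Multiplying by the integrating factor e^{-\int_0^t a(s)\,ds} gives
\Bigl( e^{-\int_0^t a(s)\,ds}\,\widetilde{L}(t) \Bigr)'
  = e^{-\int_0^t a(s)\,ds}\, b(t)\,\widetilde{G}(t).
% Between two consecutive zeros of \widetilde{L} the left-hand side integrates to zero,
% so \widetilde{G}(t) must change sign there, i.e., \widetilde{G} has a zero strictly
% between them. Repeating the argument in the second equation, three zeros of
% \widetilde{L} would force two zeros of \widetilde{G} and hence a sign change of
% \widetilde{P}, which produces the contradiction used above.
```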
Remark 3
The above argument also implies the extensibility of the solution \(h(t)=(h_{1}(t),\) \(h_{2}(t),h_{3}(t))\) of system (2.2), (2.3) from \([0,t_{\max})\) to \([0,T]\) in a situation when \(t_{\max}\leq T\).
3 CHAPLYGIN’S COMPARISON THEOREM
A statement on the existence of a solution \(h(t)=(h_{1}(t),h_{2}(t),h_{3}(t))\) of system (2.2), (2.3) defined on the entire interval \([0,T]\) was proved in [6] in the form of Theorem 1. To formulate it, let us write system (2.2) in vector form. Introduce symmetric matrices
vector functions
and scalar functions
Then, system (2.2) is written in the form
We also assume positive constants \(A_{\max}\), \(B_{\max}\), and \(C_{\max}\) to be estimates of the corresponding quantities \(Q_{i}(t)\), \(d_{i}(t)\), and \(f_{i}(t)\), \(i=1,2,3\):
where \(q_{i}(t)\) is the eigenvalue of the matrix \(Q_{i}(t)\), \(i=1,2,3\), largest in absolute value and \(||p||\) is the Euclidean norm of a vector \(p\in\mathbb{R}^{3}\).
Remark 4
When obtaining system (2.1) in each specific example, one can see that the switching function \(L(t)\) and the corresponding auxiliary functions \(G(t)\) and \(P(t)\) are expressed primarily in terms of the components of the adjoint function \(\psi_{*}(t)\) and, perhaps, in terms of the components of the optimal solution \(x_{*}(t)\). The functions \(a_{i}(t)\) and \(b_{i}(t)\), \(i=1,2,3\), and \(c_{i}(t)\), \(i=2,3\), are also given using the components of the optimal solution \(x_{*}(t)\) and, perhaps, the optimal controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\). Therefore, when finding the constants \(A_{\max}\), \(B_{\max}\), and \(C_{\max}\) in relations (3.2) for estimating the functions \(a_{i}(t)\), \(b_{i}(t)\), \(i=1,2,3\), and \(c_{i}(t)\), \(i=2,3\), we use inclusion (1.2), which is valid for the components \(x_{i}^{*}(t)\), \(i=1,\dots,n\), of the optimal solution \(x_{*}(t)\), and inequalities (1.3), which hold for the optimal controls \(u_{i}^{*}(t)\), \(i=1,\dots,m\).
As a result, the required statement is formulated as follows.
Theorem 1
Let inequalities (3.2) hold for the system of equations (3.1) for all \(t\in[0,T]\). Let values \(A_{\max},\) \(B_{\max},\) and \(C_{\max}\) satisfy the relation
Then there exists a solution \(h_{\star}(t)=\left(h_{1}^{\star}(t),h_{2}^{\star}(t),h_{3}^{\star}(t)\right)\) to system (3.1), (2.3), which is defined on the entire interval \([0,T]\).
Proof
The proof of this theorem uses differential inequalities and Chaplygin’s comparison theorem ([9], Theorem 16.2). \(\square\)
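The mechanism behind such comparison arguments can be illustrated numerically. For a toy quadratic system (chosen here only for illustration and unrelated to system (3.1)), the norm of the right-hand side is majorized by the scalar quadratic function \(m(\rho)=\rho^{2}+0.2\rho+0.15\), and the solution of the scalar majorant equation \(\rho^{\prime}=m(\rho)\) dominates \(\|h(t)\|\) as long as it exists:

```python
import math

def step(h, t, dt):
    """One Euler step of a toy quadratic system h' = F(t, h) in R^3."""
    h1, h2, h3 = h
    return (h1 + dt * (-h2 * h3 + 0.1 * math.sin(t)),
            h2 + dt * (0.5 * h1 * h3 - 0.2 * h2),
            h3 + dt * (-0.3 * h1 * h2 + 0.1))

def majorant(rho):
    # ||F(t, h)|| <= rho**2 + 0.2*rho + 0.15 whenever ||h|| <= rho
    return rho * rho + 0.2 * rho + 0.15

def run(T=1.0, n=10000):
    h, rho, t = (0.0, 0.0, 0.0), 0.0, 0.0
    dt = T / n
    dominated = True
    for _ in range(n):
        h = step(h, t, dt)
        rho += dt * majorant(rho)
        t += dt
        dominated = dominated and math.hypot(*h) <= rho + 1e-12
    return dominated, rho

dominated, rho_T = run()
print(dominated, rho_T)  # the scalar majorant dominates ||h(t)|| on [0, 1]
```

The discrete analogue of the comparison holds here step by step: since the majorant is increasing on the nonnegative half-line, the Euler update of \(\|h\|\) is dominated by the Euler update of \(\rho\). If the scalar majorant solution exists on all of \([0,T]\), so does the vector solution, which is the idea behind Theorem 1.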
It follows from Theorem 1 that system (2.4), in which the functions \(h_{i}^{\star}(t)\), \(i=1,2,3,\) are used, is also defined on the entire interval \([0,T]\). This means that the switching function \(L(t)\) has at most two distinct zeros on the time interval \([0,T]\).
Remark 5
The application of Theorem 1 in the analysis of nonautonomous linear systems of differential equations for switching functions and corresponding auxiliary functions in various minimization problems (1.5) from epidemiology is demonstrated in [10, 11, 12, 13]. Other optimal control problems arising in mathematical epidemiology models are presented, for example, in [14, 15].
4 SPLITTING AND QUASI-POSITIVITY CONDITION
The existence of a solution \(h_{\star}(t)=\left(h_{1}^{\star}(t),h_{2}^{\star}(t),h_{3}^{\star}(t)\right)\) to system (2.2), (2.3) or, what is the same, to system (3.1), (2.3) defined on the entire interval \([0,T]\) can be justified in another way, without using Theorem 1. The corresponding reasoning is carried out as follows. First, we split the nonautonomous system of quadratic differential equations (2.2) into two subsystems of lower dimension. For this, note that the second and third equations of (2.2) do not contain the variable \(h_{1}(t)\). Therefore, such equations, together with the corresponding initial conditions from (2.3), will form the first subsystem of quadratic differential equations:
Then, we introduce a new variable \(h_{0}(t)\) in the first equation of system (2.2) by the formula
After that, we find a differential equation for the function \(h_{0}(t)\). As a result, adding the corresponding initial conditions from (2.3) and (4.2), we obtain the second subsystem of quadratic differential equations:
Thus, having split system (2.2), (2.3), we moved from this system of equations to two subsystems (4.1) and (4.3).
Now, vice versa, assume that we have two subsystems (4.1) and (4.3). Let us show that the connecting equality (4.2) is true. For this, we introduce the function
It follows from the initial conditions of subsystems (4.1) and (4.3) that \(m(0)=0\). In addition, using the equations of these subsystems, we find a linear homogeneous differential equation for this function:
The zero initial condition immediately leads to the fact that \(m(t)=0\) wherever the solutions \(h_{0}(t)\) and \(h_{1}(t)\) to subsystem (4.3) and the solutions \(h_{2}(t)\) and \(h_{3}(t)\) to subsystem (4.1) are defined simultaneously. This means the validity of the connecting equality (4.2). Substituting this equality into the second equation of subsystem (4.3) and adding the equations of subsystem (4.1), we obtain the original system (2.2), (2.3).
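The conclusion \(m(t)=0\) is the standard uniqueness property of a linear homogeneous equation; for a bounded measurable coefficient \(\lambda(t)\) (the notation \(\lambda\) is generic here), one has explicitly:

```latex
% If m'(t) = \lambda(t)\, m(t), then
m(t) = m(0)\,\exp\!\Bigl( \int_0^t \lambda(s)\,ds \Bigr),
% so the zero initial condition m(0) = 0 forces m(t) \equiv 0 on the whole
% interval where the functions entering \lambda(t) are defined.
```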
The above considerations show that system (2.2), (2.3) is equivalent to two subsystems (4.1) and (4.3) in the sense that this system, to which we add the connecting equation (4.2), has the same solutions as subsystems (4.1) and (4.3). Therefore, further study of these subsystems is important since it is easier to analyze two second-order subsystems than a system of third-order equations.
Next, let us perform a linear change of variables in the subsystems of quadratic differential equations (4.1) and (4.3):
where \(\mu_{i}(t)\) is some given function. For example, this function can either be equal to \(1\) or \(-1\), or be associated with some specific function from the family \(\{b_{1}(t);c_{2}(t)\}\). The quantity \(\nu_{i}(t)\) may, like \(\mu_{i}(t)\), be related to some specific function from the family \(\{a_{i}(t),i=1,2,3;\,b_{i}(t),i=2,3;\,c_{3}(t)\}\), may be equal to \(0\), or, conversely, may be unknown and need to be determined.
As a result of this change of variables, two new subsystems of quadratic differential equations arise:
An important feature of the equations of these subsystems is that minus signs are placed in front of the terms with \(g_{0}(t)g_{1}(t)\) and \(g_{1}^{2}(t)\) in subsystem (4.5) as well as in front of the terms with \(g_{2}(t)g_{3}(t)\) and \(g_{3}^{2}(t)\) in subsystem (4.6). In their appearance, such equations are similar to the corresponding equations of subsystems (4.1) and (4.3).
In what follows, we assume that:
-
the initial values \(g_{0}^{0}\) and \(g_{1}^{0}\) in subsystem (4.5) as well as the initial values \(g_{2}^{0}\) and \(g_{3}^{0}\) in subsystem (4.6) are nonnegative;
-
solutions \(g_{0}(t)\) and \(g_{1}(t)\) to subsystem (4.5) and solutions \(g_{2}(t)\) and \(g_{3}(t)\) to subsystem (4.6) are defined on the corresponding largest possible half-open intervals \([0,t_{0,1})\) and \([0,t_{2,3})\) of existence of these solutions.
Then, we apply the quasi-positivity condition to subsystems (4.5) and (4.6) (Theorem 2.1.1 of [16]). It consists in verifying the conditions
for subsystem (4.5) and the conditions
for subsystem (4.6).
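For the reader's convenience, we recall the quasi-positivity (tangency) condition in the form it is commonly stated (cf. [16]); conditions (4.7) and (4.8) are its specializations to subsystems (4.5) and (4.6). For a system \(g^{\prime}(t)=F(t,g(t))\), \(g=(g_{0},\dots,g_{k})\), the nonnegative orthant is positively invariant if

```latex
F_i(t, g) \ge 0
\quad \text{whenever } g_j \ge 0 \text{ for all } j \text{ and } g_i = 0.
% Together with nonnegative initial values this yields g_i(t) \ge 0 on the whole
% interval of existence of the solution.
```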
When relations (4.7) and (4.8) are satisfied, there are corresponding nonnegative functions \(g_{0}(t)\) and \(g_{1}(t)\), which are solutions to subsystem (4.5) on \([0,t_{0,1})\), and nonnegative functions \(g_{2}(t)\) and \(g_{3}(t)\) satisfying subsystem (4.6) on \([0,t_{2,3})\).
The following situations are possible:
-
conditions (4.7) and (4.8) lead to differential inequalities for finding the unknown functions \(\nu_{i}(t)\), \(i=0,1,2,3\), in the change of variables (4.4);
-
for given functions \(\nu_{i}(t)\), \(i=0,1,2,3\), in the change of variables (4.4), conditions (4.7) and (4.8) are valid without any additional constraints;
-
for given functions \(\nu_{i}(t)\), \(i=0,1,2,3\), in the change of variables (4.4), conditions (4.7) and (4.8) hold under some additional constraints.
Note that the second situation is the most preferable among these three situations. Its possible occurrence for subsystems (4.1) and (4.3) is another important reason for searching for suitable functions \(\mu_{i}(t)\) and \(\nu_{i}(t)\), \(i=0,1,2,3\), and making the change of variables (4.4). The first and third situations are more burdensome since they lead either to the need to solve differential inequalities for the functions \(\nu_{i}(t)\), \(i=0,1,2,3\) (the first situation), or to the appearance of additional constraints connecting the functions \(a_{i}(t)\), \(b_{i}(t)\), \(i=1,2,3,\) and \(c_{i}(t)\), \(i=2,3\) (the third situation).
Now it remains to justify the extensibility of the solutions \(g_{0}(t)\), \(g_{1}(t)\) and \(g_{2}(t)\), \(g_{3}(t)\) from the corresponding half-open intervals \([0,t_{0,1})\) and \([0,t_{2,3})\) to the entire interval \([0,T]\) if \(t_{0,1}\leq T\) and \(t_{2,3}\leq T\). To do this, we consider the Lyapunov functions
Then we calculate the derivatives of these functions along the corresponding subsystems (4.5) and (4.6). The minus signs in front of the terms with \(g_{0}^{2}(t)g_{1}(t)\), \(g_{1}^{3}(t)\) and \(g_{2}^{2}(t)g_{3}(t)\), \(g_{3}^{3}(t)\), as well as the nonnegativity of the functions \(g_{i}(t)\), \(i=0,1,2,3\), lead to differential inequalities whose integration gives the boundedness of these functions on the largest half-open intervals of their existence. In turn, this means the extensibility of the solutions \(g_{0}(t)\), \(g_{1}(t)\) of subsystem (4.5) and the solutions \(g_{2}(t)\), \(g_{3}(t)\) of subsystem (4.6) to the required interval \([0,T]\). Finally, returning to the original functions \(h_{i}(t)\), \(i=0,1,2,3\), we draw a similar conclusion for them.
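The boundedness step can be written out schematically. If, after discarding the nonpositive cubic terms, the derivative of a Lyapunov function \(V(t)\) along solutions satisfies a linear differential inequality, then Gronwall's lemma applies (the constants \(C_{1}>0\) and \(C_{2}\geq 0\) are generic here, not taken from the paper):

```latex
V'(t) \le C_1 V(t) + C_2
\quad\Longrightarrow\quad
V(t) \le \Bigl( V(0) + \tfrac{C_2}{C_1} \Bigr) e^{C_1 t} - \tfrac{C_2}{C_1}.
% Hence V is bounded on any bounded interval, the solutions cannot blow up in
% finite time, and the maximal interval of existence contains [0, T].
```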
We then turn to the system of equations (2.4) to find an estimate for the number of distinct zeros of the switching function \(L(t)\) on the interval \([0,T]\).
Remark 6
In the first situation, the functions \(\nu_{i}(t)\), \(i=0,1,2,3\), may be defined only on some interval \([\tau,\theta]\) of length less than \(T\). Then the solutions \(g_{i}(t)\), \(i=0,1,2,3\), will be extended only to such a time interval. We will also consider system (2.4) on the interval \([\tau,\theta]\). As a result, again arguing by contradiction in this system and applying the generalized Rolle’s theorem to the first two equations, we conclude that the switching function \(L(t)\) has at most two distinct zeros on the interval \([\tau,\theta]\). This means that we can obtain the required estimate for the number of zeros of the function \(L(t)\) on the entire interval \([0,T]\) in the form \(2([T/(\theta-\tau)]+1)\), where \([\omega]\) is the integer part of a positive number \(\omega\).
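The final estimate in Remark 6 is a simple covering count: the interval \([0,T]\) is covered by at most \([T/(\theta-\tau)]+1\) windows of length \(\theta-\tau\), each contributing at most two zeros. A minimal sketch:

```python
def zero_count_bound(T, tau, theta, per_window=2):
    """Bound 2*([T/(theta - tau)] + 1) on the number of zeros of L on [0, T],
    given at most `per_window` zeros on each window of length theta - tau."""
    window = theta - tau
    return per_window * (int(T / window) + 1)

print(zero_count_bound(10.0, 0.0, 3.0))  # 2 * ([10/3] + 1) = 8
```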
Remark 7
The application of the described approach in the analysis of nonautonomous linear systems of differential equations for switching functions and corresponding auxiliary functions in various minimization problems (1.5) from medicine is demonstrated in [17, 18, 19]. Other optimal control problems arising in mathematical medicine models are presented, for example, in [20].
REFERENCES
L. S. Pontryagin, V. G. Boltyanskii, R. V. Gamkrelidze, and E. F. Mishchenko, The Mathematical Theory of Optimal Processes (Nauka, Moscow, 1983) [in Russian].
F. P. Vasil’ev, Optimization Methods (Faktorial, Moscow, 2002) [in Russian].
E. B. Lee and L. Markus, Foundations of Optimal Control Theory (Wiley, New York, 1967; Nauka, Moscow, 1972).
A. V. Dmitruk, “On a generalization of a bound for the number of zeros of solutions to a linear differential equation,” Trudy VNII Sist. Issl., No. 1, 72–76 (1990).
A. V. Dmitruk, “A generalized estimate on the number of zeros for solutions of a class of linear differential equations,” SIAM J. Control Optim. 30 (5), 1087–1091 (1992). https://doi.org/10.1137/0330057
E. N. Khailov and E. V. Grigorieva, “On the extensibility of solutions of nonautonomous quadratic differential systems,” Trudy Inst. Mat. Mekh. UrO RAN 19 (4), 279–288 (2013).
H. Schättler and U. Ledzewicz, Geometric Optimal Control: Theory, Methods and Examples (Springer, New York, 2012).
P. Hartman, Ordinary Differential Equations (Wiley, New York, 1964; Mir, Moscow, 1970).
J. Szarski, Differential Inequalities (Polish Sci., Warsaw, 1965).
E. V. Grigorieva and E. N. Khailov, “Optimal vaccination, treatment, and preventive campaigns in regard to the SIR epidemic model,” Math. Model. Nat. Phenom. 9 (4), 105–121 (2014). https://doi.org/10.1051/mmnp/20149407
E. V. Grigorieva and E. N. Khailov, “Optimal intervention strategies for a SEIR control model of Ebola epidemics,” Mathematics 3 (4), 961–983 (2015). https://doi.org/10.3390/math3040961
E. V. Grigorieva, E. N. Khailov, and A. Korobeinikov, “Optimal control for a SIR epidemic model with nonlinear incidence rate,” Math. Model. Nat. Phenom. 11 (4), 89–104 (2016). https://doi.org/10.1051/mmnp/201611407
E. Grigorieva, E. Khailov, and A. Korobeinikov, “Optimal control for an SEIR epidemic model with nonlinear incidence rate,” Stud. Appl. Math. 141, 353–398 (2018). https://doi.org/10.1111/sapm.12227
M. Martcheva, An Introduction to Mathematical Epidemiology (Springer, New York, 2015).
O. Sharomi and T. Malik, “Optimal control in epidemiology,” Ann. Oper. Res. 251, 55–71 (2017). https://doi.org/10.1007/s10479-015-1834-4
O. A. Kuzenkov and E. A. Ryabova, Mathematical Modeling of Selection Processes (Izd. Nizhegorod. Univ., N. Novgorod, 2007).
E. Khailov, E. Grigorieva, and A. Klimenkova, “Optimal CAR T-cell immunotherapy strategies for a leukemia treatment model,” Games 11 (4), 53 (2020). https://doi.org/10.3390/g11040053
E. V. Grigorieva, E. N. Khailov, and A. Korobeinikov, “Optimal controls of the highly active antiretroviral therapy,” Abstr. Appl. Anal. 2020, 8107106 (2020). https://doi.org/10.1155/2020/8107106
N. L. Grigorenko, E. N. Khailov, E. V. Grigorieva, and A. D. Klimenkova, “Optimal strategies of CAR T-cell therapy in the treatment of leukemia within the Lotka–Volterra predator–prey model,” Trudy Inst. Mat. Mekh. UrO RAN 27 (3), 43–58 (2021). https://doi.org/10.21538/0134-4889-2021-27-3-43-58
H. Schättler and U. Ledzewicz, Optimal Control for Mathematical Models of Cancer Therapies: An Application of Geometric Methods (Springer, New York, 2015).
Funding
This work was supported by ongoing institutional funding. No additional grants to carry out or direct this particular research were obtained.
Ethics declarations
The author of this work declares that he has no conflicts of interest.
Additional information
Translated from Trudy Instituta Matematiki i Mekhaniki UrO RAN, Vol. 30, No. 1, pp. 237 - 248, 2024 https://doi.org/10.21538/0134-4889-2024-30-1-237-248.
Translated by M. Deikalova
Cite this article
Khailov, E.N. Extensibility of Solutions of Nonautonomous Systems of Quadratic Differential Equations and Their Application in Optimal Control Problems. Proc. Steklov Inst. Math. 325 (Suppl 1), S123–S133 (2024). https://doi.org/10.1134/S008154382403009X