7.1 Introduction

This paper is devoted to inverse problems of reconstructing players’ trajectories and controls in differential games from known inaccurate measurements of the realized trajectories. Such a posteriori analysis is an important part of future decision making. Inverse problems of this kind arise in economics, engineering, medicine and many other areas that involve the task of reconstructing the players’ controls from known inaccurate trajectory measurements.

Inverse problems of this type have been studied by many authors. The approach suggested by Osipov and Kryazhimskii [6, 7] is among the closest to the material of this paper. Their method reconstructs the controls by a regularized procedure of control with a guide (a variation of Tikhonov regularization [12]). This procedure allows the controls to be reconstructed on-line. It originates from the works of Krasovskii’s school on the theory of optimal feedback [3, 4].

Another method for solving dynamic reconstruction problems from a known history of inaccurate measurements has been suggested by Subbotina et al. [10]. It is based on necessary optimality conditions for auxiliary optimal control problems [9]. This method has also been developed in [5, 8, 10, 11]. A modification of this approach is presented in this paper. It relies on necessary optimality conditions in an auxiliary variational problem of extremizing an integral functional, which is a variation of a Tikhonov regularizer.

In this paper the suggested method is justified for a special class of differential games with dynamics linear in the controls and non-linear in the state coordinates. Simulation results are presented.

7.2 Dynamics

We consider a differential game with dynamics of the form

$$\displaystyle \begin{aligned} \dot x(t) = G(x(t),t)u(t),\quad x(\cdot):[0,T]\to R^n,\quad u(\cdot):[0,T]\to R^n,\quad t\in [0,T]. \end{aligned} $$
(7.1)

Here G(x, t) is an n × n matrix with elements g ij(x, t) : R n × [0, T] → R, i = 1, ..., n, j = 1, ..., n that have continuous derivatives

$$\displaystyle \begin{aligned} \frac{\partial g_{ij}(x,t)}{\partial t},\quad \frac{\partial g_{ij}(x,t)}{\partial x_k},\quad i = 1,...,n,\ j = 1,...,n,\ k = 1,...,n,\quad \\ x\in R^n,\ t\in[0,T]. \end{aligned} $$

In (7.1) x i(t) is the state of the ith player, while u i(t) is the control of the ith player, restricted by constraints

$$\displaystyle \begin{aligned} \begin{array}{cc} |u_i(t)|\leq \overline U<\infty,\quad i = 1,\ldots,n,\quad t\in[0,T]. \end{array}\end{aligned} $$
(7.2)

We consider piecewise continuous controls with a finite number of points of discontinuity.
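
To make the setting concrete, the following minimal sketch (not part of the paper; the particular matrix G, the controls and the Euler scheme are illustrative assumptions) integrates dynamics (7.1) for an assumed G(x, t) and assumed admissible controls:

```python
import numpy as np

def G(x, t):
    # Illustrative 2x2 matrix: nonlinear in the state x, while the dynamics
    # (7.1) remain linear in the controls u. Any smooth G(x, t) could be used.
    return np.array([[2.0 + np.sin(x[1]), 0.1 * t],
                     [0.1 * np.cos(x[0]), 2.0 + 0.5 * np.cos(x[0])]])

def u_star(t):
    # Assumed piecewise continuous base controls satisfying (7.2) with U_bar = 1.
    return np.array([np.sin(2.0 * t), 1.0 if t < 0.5 else -0.5])

def simulate(x0, T=1.0, N=1000):
    # Forward Euler integration of dx/dt = G(x, t) u(t) on [0, T].
    ts = np.linspace(0.0, T, N + 1)
    xs = np.zeros((N + 1, len(x0)))
    xs[0] = x0
    dt = T / N
    for k in range(N):
        xs[k + 1] = xs[k] + dt * G(xs[k], ts[k]) @ u_star(ts[k])
    return ts, xs

ts, xs = simulate(np.array([0.0, 1.0]))   # samples of a base trajectory
```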

7.3 Input Data

It is supposed that some base trajectory x*(⋅) : [0, T] → R n of system (7.1) has been realized on the interval t ∈ [0, T]. Let u*(⋅) : [0, T] → R n be the piecewise continuous control satisfying constraints (7.2) that generated this trajectory.

We assume that measurements y δ(⋅, δ) = y δ(⋅) : [0, T] → R n of the base trajectory x*(t) are known; they are twice continuously differentiable functions that determine x*(t) with a known accuracy δ > 0, i.e.

$$\displaystyle \begin{aligned} |y^\delta_i(t) - x^*_i(t)|\leq\delta,\quad i = 1,...,n, \quad t\in[0,T]. \end{aligned} $$
(7.3)
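
A measurement satisfying (7.3) can be modelled, for instance, by adding a small smooth perturbation to an interpolant of the base trajectory. The sketch below is one such illustrative construction (it is not the paper's measurement model); it reuses ts, xs from the previous sketch and returns a twice continuously differentiable function within δ of the sampled base trajectory by construction:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def make_measurement(ts, xs, delta, freq=25.0):
    # C^2 interpolant of the sampled base trajectory plus a smooth perturbation
    # of amplitude at most delta, so that estimate (7.3) holds by construction.
    spline = CubicSpline(ts, xs, axis=0)
    def y_delta(t):
        t = np.asarray(t, dtype=float)
        wiggle = delta * np.sin(freq * t)        # |wiggle| <= delta
        return spline(t) + wiggle[..., None]
    return y_delta

y_delta = make_measurement(ts, xs, delta=1e-2)   # illustrative measurement
```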

7.4 Hypotheses

We introduce two hypotheses on the input data.

Hypothesis 7.1

There exist a compact set Ψ ⊂ R n , a constant r > 0 and constants \( \underline \omega >0,\ \overline \omega >0,\ \omega '>0\) such that

$$\displaystyle \begin{aligned} \begin{array}{cc} \varPsi \supset\{x\in R^n:|x_i - x^*_i(t)|\leq r \quad \forall t\in[0,T]\},\\ \displaystyle 0<\underline\omega^2\leq| det G(x,t)|\leq\overline\omega^2,\quad \left|\frac{\partial g_{ij}(x,t)}{\partial t}\right|\leq\omega',\quad \left|\frac{\partial g_{ij}(x,t)}{\partial x_k}\right|\leq\omega',\\ i = 1,\ldots,n,\quad j = 1,\ldots,n,\quad k = 1,\ldots,n,\quad x\in\varPsi,\quad t\in[0,T]. \end{array} \end{aligned} $$
(7.4)

Let’s introduce the following constants

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle R_1 = \frac{\pi\overline\omega}{\underline\omega^3},\quad R_2 = \frac{\pi\overline\omega^2}{\underline\omega^4}(\overline\omega^2 + \omega'),\quad R_3 = \frac{\pi n \displaystyle T\overline\omega^4\omega'}{\underline\omega^2},\\ \displaystyle R_4 = \frac{T\pi\overline\omega^2}{2\underline\omega^4} + 4\frac{\pi\overline\omega^2}{\underline\omega} + 2\frac{R_1\pi\overline\omega^2}{\underline\omega}\left(\ln \displaystyle \frac{0.5T\overline\omega}{\underline\omega^2} + 1\right) + R_2 + 1,\\ \displaystyle R_w = \max\{\frac{1 + \underline\omega}{\underline\omega}(R_4 + 2 R_3),(1 + \underline\omega)(R_4 + 2 R_3)\}, \end{array} \end{aligned} $$
(7.5)

which will be used in Hypothesis 7.2 and Theorem 7.1.

Hypothesis 7.2

There exist constants \(\delta _0\in (0,\min \{0.5 r,\frac {1}{R_w}\}]\) and \(\overline Y>0\) such that for any δ ∈ (0, δ 0]

$$\displaystyle \begin{aligned} | y^\delta_i(t)|\leq \overline Y,\quad |\dot y^\delta_i(t)| \leq \overline Y,\quad |\dot x^*_i(t)|\leq \overline Y,\quad t\in [0,T],\quad i = 1,\ldots,n \end{aligned} $$
(7.6)

and for any δ ∈ (0, δ 0] there exists a compact set Ω δ ⊂ [0, T] with measure \(\mu \varOmega ^\delta = \beta ^\delta \stackrel {\delta \to 0}{\longrightarrow } 0\) such that

$$\displaystyle \begin{aligned} |\ddot y^\delta_i(t)|\leq \overline Y,\quad t\in[0,T]\setminus \varOmega^\delta,\quad \max\limits_{t\in\varOmega^\delta}|\ddot y_i^\delta(t)|\beta^\delta \leq \overline Y,\quad i = 1,\ldots,n. \end{aligned} $$
(7.7)

Remark 7.1

Conditions (7.6) reflect the fact that the right-hand sides of Eq. (7.1) are bounded.

Remark 7.2

In Hypothesis 7.2 the same constant \(\overline Y\) is used in all inequalities to simplify the subsequent calculations and explanations.

Remark 7.3

Hypothesis 7.2 allows the functions \(\dot y^\delta (\cdot )\) to approximate the piecewise continuous function \(\dot x^*(\cdot ) = G(x^*(\cdot ),\cdot )u^*(\cdot )\).
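
For a concrete G the bounds in Hypothesis 7.1 and the constants (7.5) can be estimated numerically. The sketch below is an illustrative helper (not from the paper): it samples |det G| and finite-difference approximations of the derivatives of g ij over a user-supplied grid covering Ψ × [0, T], and then evaluates R 1, …, R 4 and R w directly from (7.5).

```python
import numpy as np
from itertools import product

def hypothesis_constants(G, psi_grid, t_grid, h=1e-5):
    # Estimate omega_low^2 <= |det G| <= omega_up^2 and a bound omega' on the
    # partial derivatives of g_ij over the grid (forward differences, step h).
    dets, derivs = [], []
    for x, t in product(psi_grid, t_grid):
        x = np.asarray(x, dtype=float)
        dets.append(abs(np.linalg.det(G(x, t))))
        derivs.append(np.max(np.abs((G(x, t + h) - G(x, t)) / h)))
        for k in range(len(x)):
            e = np.zeros_like(x); e[k] = h
            derivs.append(np.max(np.abs((G(x + e, t) - G(x, t)) / h)))
    return min(dets), max(dets), max(derivs)

def constants_7_5(om_low2, om_up2, om_prime, T, n):
    # Direct evaluation of the constants R_1, ..., R_4, R_w defined in (7.5).
    om_l, om_u = np.sqrt(om_low2), np.sqrt(om_up2)
    R1 = np.pi * om_u / om_l**3
    R2 = np.pi * om_u**2 / om_l**4 * (om_u**2 + om_prime)
    R3 = np.pi * n * T * om_u**4 * om_prime / om_l**2
    R4 = (T * np.pi * om_u**2 / (2 * om_l**4) + 4 * np.pi * om_u**2 / om_l
          + 2 * R1 * np.pi * om_u**2 / om_l * (np.log(0.5 * T * om_u / om_l**2) + 1)
          + R2 + 1)
    Rw = max((1 + om_l) / om_l * (R4 + 2 * R3), (1 + om_l) * (R4 + 2 * R3))
    return R1, R2, R3, R4, Rw
```

For instance, the G from the first sketch can be passed together with a grid of points covering a tube of radius r around the simulated trajectory.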

7.5 Problem Statement

Let’s consider the following reconstruction problem: for a given δ ∈ (0, δ 0] and a given measurement function y δ(⋅) fulfilling estimates (7.3) and Hypothesis 7.2, find a function u(⋅, δ) = u δ(⋅) : [0, T] → R n that satisfies the following conditions:

  1.

    The function u δ(⋅) belongs to the set of admissible controls, i.e. the set of piecewise continuous functions with a finite number of points of discontinuity that satisfy constraints (7.2);

  2.

    The control u δ(⋅) generates a trajectory x(⋅, δ) = x δ(⋅) : [0, T] → R n of system (7.1) with the boundary condition x δ(T) = y δ(T). In other words, there exists a unique solution x δ(⋅) : [0, T] → R n of the system

    $$\displaystyle \begin{aligned} \dot x^\delta(t) = G(x^\delta(t),t)u^\delta(t),\quad t\in [0,T] \end{aligned}$$

    that satisfies the boundary condition x δ(T) = y δ(T).

  3.

    The functions x δ(⋅) and u δ(⋅) satisfy the conditions

    $$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|x^\delta_i(\cdot) - x^*_i(\cdot)\|{}_{C_{[0,T]}} = 0,\quad \lim\limits_{\delta\rightarrow 0}\|u^\delta_i(\cdot) - u^*_i(\cdot)\|{}_{L_{2,[0,T]}} = 0,\quad i = 1,...,n. \end{aligned} $$
    (7.8)

Hereinafter

$$\displaystyle \begin{aligned} \|f(\cdot)\|{}_{C_{[0,T]}} = \max\limits_{t\in[0,T]} |f(t)|,\quad f(\cdot):[0,T]\to R \end{aligned}$$

is the norm in the space of continuous functions C and

$$\displaystyle \begin{aligned} \|f(\cdot)\|{}_{L_{2,[0,T]}} = \sqrt{\int\limits_0^T \sum_{i = 1}^n f_i^2(\tau)d\tau},\quad f(\cdot):[0,T]\to R^n \end{aligned}$$

is the norm in the space L 2.
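
On a uniform grid the convergence requirements (7.8) can be monitored with discrete analogues of these norms; a minimal sketch (the sampling layout is our assumption):

```python
import numpy as np

def c_norm(f_vals):
    # Discrete analogue of the C[0,T] norm: the maximum of |f(t_k)| over the grid.
    return np.max(np.abs(f_vals))

def l2_norm(f_vals, ts):
    # Discrete analogue of the L_2[0,T] norm of a vector-valued function:
    # square root of the integral of sum_i f_i(t)^2 (trapezoidal rule),
    # assuming f_vals has shape (len(ts), n).
    f_vals = np.asarray(f_vals)
    return np.sqrt(np.trapz(np.sum(f_vals ** 2, axis=1), ts))
```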

7.6 A Solution of the Inverse Problem

7.6.1 Auxiliary Problem

To solve the inverse problem in Sect. 7.5, we introduce an auxiliary variational problem (AVP) for fixed parameters δ ∈ (0, δ 0], α > 0 and a given measurement function y δ(⋅) satisfying estimates (7.3) and Hypothesis 7.2.

We consider the set of pairs of continuously differentiable functions F xu = {{x(⋅), u(⋅)} : x(⋅) : [0, T] → R n, u(⋅) : [0, T] → R n} that satisfy differential equations (7.1) and the following boundary conditions

$$\displaystyle \begin{aligned} x(T) = y^\delta(T),\ u(T) = G^{ - 1}(y^\delta(T),T)\dot y^\delta(T). \end{aligned} $$
(7.9)

Hereinafter G −1 denotes the inverse of a non-degenerate matrix G. Let us remark that, due to Hypothesis 7.1, the inverse matrix G −1(y δ(T), T) exists.

The AVP is to find a pair of functions x(⋅, δ, α) = x δ, α(⋅) : [0, T] → R n and u(⋅, δ, α) = u δ, α(⋅) : [0, T] → R n such that {x δ, α(⋅), u δ, α(⋅)}∈ F xu and they provide an extremum of the integral functional

$$\displaystyle \begin{aligned} I(x(\cdot),u(\cdot)) = \int \limits_{0}^T \left[ - \frac{\|x(t) - y^\delta(t)\|{}^2}{2} + \frac{\alpha^2\|u(t)\|{}^2}{2}\right]dt. \end{aligned} $$
(7.10)

Here α is a small regularising parameter [12] and \(\|f\| = \sqrt {\sum \limits _{i=1}^n f_i^2},\ f\in R^n\) is the Euclidean norm in R n.
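
A discretized version of the functional (7.10) makes the Tikhonov-type structure explicit: the first term measures the deviation of x(⋅) from the measurement y δ(⋅) (with a negative sign), while the second term, weighted by α 2, measures the control energy. A minimal sketch, assuming x, u and y δ are sampled on a common grid ts:

```python
import numpy as np

def avp_functional(xs, us, ys, ts, alpha):
    # Trapezoidal discretization of I(x(.), u(.)) from (7.10):
    # integral of -|x(t) - y_delta(t)|^2 / 2 + alpha^2 |u(t)|^2 / 2 over [0, T].
    track = -0.5 * np.sum((np.asarray(xs) - np.asarray(ys)) ** 2, axis=1)
    energy = 0.5 * alpha ** 2 * np.sum(np.asarray(us) ** 2, axis=1)
    return np.trapz(track + energy, ts)
```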

7.6.2 Necessary Optimality Conditions in the AVP

We can write the necessary optimality conditions for the AVP (7.1), (7.10), (7.9) in Lagrange form [14]. The Lagrangian for the AVP has the form

$$\displaystyle \begin{aligned} L(x,u,\dot x,\lambda(t),t) = - \frac{\|x - y^\delta(t)\|{}^2}{2} + \frac{\alpha^2\|u\|{}^2}{2} + \sum_{i = 1}^n\lambda_i(t)\left[\dot x_i - \sum_{j = 1}^n g_{ij}(x,t)u_j\right], \end{aligned}$$

where λ(t) : [0, T] → R n is the Lagrange multipliers vector.

The 2n corresponding Euler equations are

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle\dot\lambda_i(t) + (x_i(t) - y^\delta_i(t)) + \sum_{j = 1}^n \left[\lambda_j(t) \sum_{k = 1}^n u_k(t) \frac{\partial g_{jk}}{\partial x_i}(x(t),t)\right] = 0,\\ - \alpha^2u_i(t) + \displaystyle\sum_{j = 1}^n\left[\lambda_j(t)g_{ji}(x(t),t)\right] = 0,\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.11)

The first n equations in (7.11) can be rewritten, using scalar product notation, as

$$\displaystyle \begin{aligned} \displaystyle\dot\lambda_i(t) + (x_i(t) - y^\delta_i(t)) + \langle\lambda(t),\frac{\partial G}{\partial x_i}(x(t),t)u(t)\rangle = 0,\quad i=1,\ldots,n. \end{aligned} $$
(7.12)

Hereinafter ⟨a, b⟩ means the scalar product of vectors a ∈ R n, b ∈ R n and \(\displaystyle \frac {\partial G}{\partial x_i}(x(t),t)\) is a matrix with elements \(\displaystyle \frac {\partial g_{jk}}{\partial x_i}(x(t),t)\), j = 1, ..., n, k = 1, ..., n.

The last n equations in (7.11) define the relations between the controls u i(t) and the Lagrange multipliers λ i(t), i = 1, …, n:

$$\displaystyle \begin{aligned} u(t) = \frac{1}{\alpha^2}G^T(x(t),t)\lambda(t). \end{aligned} $$
(7.13)

Hereinafter G T denotes the transpose of a matrix G.

We can substitute equations (7.13) into (7.12) and (7.1) to rewrite them in the form of Hamiltonian equations, where the vector s(t) = −λ(t) plays the role of the adjoint variables vector:

$$\displaystyle \begin{aligned} \begin{array}{ll} \displaystyle\dot x(t) = - (1/\alpha^2)G(x(t),t)G^T(x(t),t)s(t),\\ \displaystyle\dot s_i(t) = x_i(t) - y^\delta_i(t) + \frac{1}{\alpha^2}\langle s(t),\frac{\partial G}{\partial x_i}(x(t),t)G^T(x(t),t)s(t)\rangle,\quad i=1,\ldots,n. \end{array} \end{aligned} $$
(7.14)

By substituting (7.13) into (7.9), one can obtain boundary conditions, written for system (7.14):

$$\displaystyle \begin{aligned} x(T) = y^\delta(T),\quad s(T) = - \alpha^2 \big(G(y^\delta(T),T)G^T(y^\delta(T),T)\big)^{ - 1}\dot y^\delta(T). \end{aligned} $$
(7.15)

Thus, we have obtained the necessary optimality conditions for the AVP (7.1), (7.10), (7.9) in the Hamiltonian form (7.14), (7.15).
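
Since both boundary conditions (7.15) are posed at t = T, system (7.14) can be integrated numerically backward in time from T to 0. The sketch below is a minimal illustration of this step (it is not the paper's numerical procedure); the matrix function G, its partial derivatives dG_dx(x, t, i) = ∂G/∂x i, and the measurement y_delta together with its derivative dy_delta are assumed to be supplied by the user.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_avp(G, dG_dx, y_delta, dy_delta, T, alpha, n):
    # Right-hand side of the Hamiltonian system (7.14) in the variables (x, s).
    def rhs(t, v):
        x, s = v[:n], v[n:]
        Gm = G(x, t)
        yd = np.asarray(y_delta(t), dtype=float)
        dx = -(1.0 / alpha**2) * Gm @ Gm.T @ s
        ds = np.array([x[i] - yd[i]
                       + (1.0 / alpha**2) * s @ (dG_dx(x, t, i) @ Gm.T @ s)
                       for i in range(n)])
        return np.concatenate([dx, ds])

    # Boundary conditions (7.15), both given at the right end t = T.
    GT = G(y_delta(T), T)
    xT = np.asarray(y_delta(T), dtype=float)
    sT = -alpha**2 * np.linalg.solve(GT @ GT.T, np.asarray(dy_delta(T), dtype=float))
    # Integrate backward; the small max_step resolves oscillations of order alpha.
    return solve_ivp(rhs, (T, 0.0), np.concatenate([xT, sT]),
                     dense_output=True, max_step=alpha / 10)

# For sol = solve_avp(...), sol.sol(t)[:n] approximates x^{delta,alpha}(t)
# and sol.sol(t)[n:] approximates s^{delta,alpha}(t).
```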

7.6.3 A Solution of the Reconstruction Problem

Let’s introduce the function

$$\displaystyle \begin{aligned} u^{\delta,\alpha}(\cdot) = -(1/\alpha^2)G^T(x^{\delta,\alpha}(\cdot),\cdot)s^{\delta,\alpha}(\cdot), \end{aligned} $$
(7.16)

where x δ, α(⋅), s δ, α(⋅) are the solutions of system (7.14) with boundary conditions (7.15).

We now introduce the cut-off functions

$$\displaystyle \begin{aligned} \hat u_i^\delta(t) = \left\{ \begin{array}{ll} \overline U, \quad u_i^{\delta,\alpha}(t) \geq \overline U,\\ u_i^{\delta,\alpha}(t), \quad |u_i^{\delta,\alpha}(t)| < \overline U,\\ -\overline U, \quad u_i^{\delta,\alpha}(t) \leq -\overline U.\\ \end{array}\right. \quad i = 1,\ldots,n. \end{aligned} $$
(7.17)

We consider the functions \(\hat u_i^\delta (\cdot )\) as the solutions of the inverse problem described in Sect. 7.5. We choose α = α(δ) in such a way that \(\alpha (\delta )\stackrel {\delta \to 0}{\longrightarrow } 0\).
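
The last two steps, recovering u δ, α from (7.16) and applying the cut-off (7.17), are straightforward to implement; a minimal sketch continuing the previous one (the schedule α(δ) = √δ is only an assumed example, since the paper requires only α(δ) → 0):

```python
import numpy as np

def control_from_adjoint(G, sol, t, alpha, n):
    # Reconstructed control (7.16): u^{delta,alpha}(t) = -(1/alpha^2) G^T(x, t) s(t).
    x, s = sol.sol(t)[:n], sol.sol(t)[n:]
    return -(1.0 / alpha**2) * G(x, t).T @ s

def cutoff_controls(u_vals, U_bar):
    # The cut-off (7.17): clip every component of u^{delta,alpha} to [-U_bar, U_bar].
    return np.clip(u_vals, -U_bar, U_bar)

def alpha_of_delta(delta):
    # One admissible regularization schedule with alpha(delta) -> 0 as delta -> 0.
    return np.sqrt(delta)
```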

7.6.4 Convergence of the Solution

In this paper a justification of the suggested method is presented for a sub-class of the considered differential games (7.1), (7.2). Namely, from now on we consider dynamics of the form (7.1), where the matrix G(x, t) is diagonal with non-zero elements on the diagonal. The dynamics in this case have the form

$$\displaystyle \begin{aligned} \begin{array}{cc} \dot x_i(t)=g_i(x(t),t)u_i(t),\quad i=1,...,n, \end{array} \end{aligned}$$

where the functions g i(x, t) = g ii(x, t), i = 1, …, n are the elements on the diagonal of the matrix G(x, t).

The condition \( \underline \omega ^2\leq | det G(x,t)|\leq \overline \omega ^2\) in Hypothesis 7.1 is replaced in this case by the condition

$$\displaystyle \begin{aligned} \underline\omega^2\leq g^2_i(x,t) \leq\overline\omega^2,\quad i=1,\ldots,n. \end{aligned} $$
(7.18)

The necessary optimality conditions (7.14) now take the form

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \dot x_i(t) = -s_i(t)\frac {g^2_i(x(t),t)}{\alpha^2},\\ \displaystyle \dot s_i(t) = x_i(t)-y^\delta_i(t) + \frac{1}{\alpha^2}\sum_{j=1}^n \left[s^2_j(t)\frac{\partial g_j(x(t),t)}{\partial x_i}g_j(x(t),t)\right],\\ i=1,\ldots,n \end{array} \end{aligned} $$
(7.19)

with boundary conditions

$$\displaystyle \begin{aligned} x_i(T)=y_i^\delta(T),\quad s_i(T)=-\alpha^2 \dot y_i^\delta(T)/g^2_i(y^\delta(T),T),\quad i=1,\ldots,n. \end{aligned} $$
(7.20)
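
For the diagonal case the whole reconstruction chain is easy to exercise numerically. The sketch below is an illustrative experiment only: the scalar g, the base control u*, the sinusoidal measurement model, the numerical derivative of y δ and the schedule α(δ) = √δ are all our assumptions, not the paper's data. It integrates (7.19) backward from the boundary conditions (7.20), forms u δ, α by (7.35), and reports discrepancies of the type appearing in (7.34).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed scalar data (n = 1): dynamics dx/dt = g(x, t) u(t), base control u*.
g = lambda x, t: 2.0 + 0.5 * np.sin(x)
dg_dx = lambda x, t: 0.5 * np.cos(x)
u_star = lambda t: np.where(t < 0.5, 1.0, -0.5)
T, delta = 1.0, 1e-3
alpha = np.sqrt(delta)                      # illustrative schedule alpha(delta)

# Base trajectory x*(t) and a smooth measurement y^delta with |y - x*| <= delta.
base = solve_ivp(lambda t, x: [g(x[0], t) * u_star(t)], (0.0, T), [0.0],
                 dense_output=True, max_step=1e-3)
x_star = lambda t: base.sol(t)[0]
y_d = lambda t: x_star(t) + delta * np.sin(30.0 * t)
dy_d = lambda t: (y_d(t + 1e-6) - y_d(t - 1e-6)) / 2e-6     # numerical derivative

# System (7.19) in the variables (x, s), integrated backward from t = T
# with the boundary conditions (7.20).
def rhs(t, v):
    x, s = v
    dx = -s * g(x, t) ** 2 / alpha ** 2
    ds = x - y_d(t) + s ** 2 * dg_dx(x, t) * g(x, t) / alpha ** 2
    return [dx, ds]

vT = [y_d(T), -alpha ** 2 * dy_d(T) / g(y_d(T), T) ** 2]
sol = solve_ivp(rhs, (T, 0.0), vT, dense_output=True, max_step=alpha / 20)

# Reconstructed control (7.35) and the discrepancies from (7.34).
ts = np.linspace(0.0, T, 2000)
x_rec, s_rec = sol.sol(ts)
u_rec = -g(x_rec, ts) * s_rec / alpha ** 2
err_x = np.max(np.abs(x_rec - x_star(ts)))                  # C[0,T]-type error
err_u = np.sqrt(np.trapz((u_rec - u_star(ts)) ** 2, ts))    # L2[0,T]-type error
print(f"||x - x*||_C = {err_x:.3e},  ||u - u*||_L2 = {err_u:.3e}")
```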

The following lemma is true.

Lemma 7.1

For δ ∈ (0, δ 0], the twice continuously differentiable measurement functions \(y_i^\delta (\cdot ),\ i=1,\ldots ,n\) satisfying estimates (7.3) and Hypothesis 7.2 fulfill the following relations

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|y^\delta_i(\cdot) - x^*_i(\cdot)\|{}_{C_{[0,T]}} = 0,\ \lim\limits_{\delta\rightarrow 0}\|\frac{\dot y_i^\delta(\cdot)}{g_i(y^\delta(\cdot),\cdot)} - u_i^*(\cdot)\|{}_{{L_{2,[0,T]}}} = 0,\ i = 1,...,n. \end{aligned} $$
(7.21)

Proof

The first relation in (7.21) is true due to (7.3). Let’s prove the second one.

Relying upon Luzin’s theorem [2], one can find for the piecewise continuous control u*(⋅) a constant \(\overline Y^u\) such that for any δ ∈ (0, δ 0] and all i = 1, …, n there exist twice continuously differentiable functions \(\overline u^\delta _i(\cdot ):[0,T]\to R\) and a set \(\varOmega ^\delta _u\subset [0,T]\) with measure \(\mu \varOmega ^\delta _u = \beta ^\delta _u\) such that

$$\displaystyle \begin{aligned} \begin{array}{cc} |\overline u^\delta_i(t)|\leq \overline Y^u,\ t\in[0,T],\quad |\dot{\overline u}^\delta_i(t)| \leq \overline Y^u,\ t\in[0,T]\setminus \varOmega^\delta_u,\\ \beta^\delta_u \max\limits_{t\in\varOmega^\delta_u}|\dot{\overline u}^\delta_i(t)| \leq \overline Y^u,\\ \|\overline u^\delta_i(\cdot) - u_i^*(\cdot)\|{}_{{L_{2,[0,T]}}} \leq \delta,\quad i=1,\ldots,n. \end{array} \end{aligned} $$
(7.22)

Let’s estimate the following expression first (hereinafter in the proof i = 1, …, n).

$$\displaystyle \begin{aligned} \displaystyle \|\dot y^\delta_i(\cdot) - \overline u^\delta_i(\cdot) g_i(y^\delta(t),t)\|{}^2_{{L_{2,[0,T]}}} = \int\limits_0^T \big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)^2dt. \end{aligned} $$
(7.23)

The integral in (7.23) can be integrated by parts.

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_0^T\underbrace{\big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)}_{\mathbf{U}}\underbrace{\big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)dt}_{\mathbf{dV}}\\ \displaystyle = \Bigg[\underbrace{\big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)}_{\mathbf{U}}\underbrace{\big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\big)}_{\mathbf{V}}\Bigg]\Bigg|{}_0^T \\ \displaystyle - \int\limits_0^T\underbrace{\big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\big)}_{\mathbf{V}}\\ \displaystyle \cdot \underbrace{\Big(\ddot y^\delta_i(t) - \dot{\overline u}^\delta_i(t)g_i(y^\delta(t),t) - \overline u^\delta_i(t)\big(\sum_{j=1}^n [g^{\prime}_{i,x_j}(y^\delta_i(t),t)\dot y^\delta_j] + g^{\prime}_{i,t}(y^\delta_i(t),t)\big)\Big)}_{\mathbf{dU}}dt \end{array} \end{aligned} $$
(7.24)

To estimate the whole expression (7.24), we first estimate the difference \(\mathbf {V} = y^\delta _i(t) - x_i^*(0) - \int \limits _0^t \overline u^\delta _i(\tau )g_i(y^\delta (\tau ),\tau )d\tau \). In order to do this, we estimate the integral

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_0^t\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau = \int\limits_{\varOmega^t_{\geq\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau\\ \displaystyle + \int\limits_{\varOmega^t_{<\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau, \end{array} \end{aligned} $$
(7.25)

where set \(\varOmega ^t_{\geq \delta }=\{\tau \in [0,t]:|\overline u^\delta _i(\tau ) - u_i^*(\tau )|\geq \delta \}\) and set \(\varOmega ^t_{<\delta }=\{\tau \in [0,t]:|\overline u^\delta _i(\tau ) - u_i^*(\tau )|<\delta \}\).

The second term in (7.25) satisfies

$$\displaystyle \begin{aligned} \left|\,\int\limits_{\varOmega^t_{<\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau\right| \leq \delta \mu(\varOmega^t_{<\delta}) \overline\omega \leq \delta T \overline\omega. \end{aligned} $$
(7.26)

Remark 7.4

Recall that hereinafter, when the first argument of the functions g i(x, t), i = 1, …, n belongs to the compact set Ψ from Hypothesis 7.1, relations (7.18) hold.

Using (7.22), the first term in (7.25) satisfies

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\,\int\limits_{\varOmega^t_{\geq\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau\right| = \left|\,\int\limits_{\varOmega^t_{\geq\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)^2 \frac{g_i(y^\delta(\tau),\tau)}{\overline u^\delta_i(\tau) - u_i^*(\tau)}d\tau\right|\\ \displaystyle \leq \max\limits_{\tau\in\varOmega^t_{\geq\delta}}\left|\frac{g_i(y^\delta(\tau),\tau)}{\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)}\right| \int\limits_{\varOmega^t_{\geq\delta}}\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)^2d\tau\\ \displaystyle \leq \frac{\overline\omega}{\delta} \int\limits_0^T\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)^2d\tau \leq \delta T\overline\omega. \end{array} \end{aligned} $$
(7.27)

From (7.25), (7.26) and (7.27) it follows that

$$\displaystyle \begin{aligned} \displaystyle \left|\int\limits_0^t\big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau\right| \leq 2\delta T \overline\omega. \end{aligned} $$
(7.28)

We can now estimate function V in (7.24):

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \Big|y^\delta_i(t) - x_i^*(0) - \int\limits_0^t \overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big|\\ \displaystyle \leq \Big|y^\delta_i(t) - x_i^*(0) - \int\limits_0^t u_i^*(\tau) g_i(y^\delta(\tau),\tau)d\tau\Big|\\ \displaystyle + \Big|\int\limits_0^t \big(\overline u^\delta_i(\tau) - u_i^*(\tau)\big)g_i(y^\delta(\tau),\tau)d\tau\Big|\\ \displaystyle \leq \Big|y^\delta_i(t) - x_i^*(0) - \int\limits_0^t u_i^*(\tau) g_i(x^*(\tau),\tau)d\tau\Big|\\ \displaystyle + \Big|\int\limits_0^t u_i^*(\tau) \big(g_i(y^\delta(\tau),\tau) - g_i(x^*(\tau),\tau)\big)d\tau\Big|\\ \displaystyle + \delta2\overline\omega T \leq \Big|y^\delta_i(t) - x_i^*(t)\Big|\\ \displaystyle + \Big|\int\limits_0^t \overline U \big(n\max\limits_{\theta\in[0,T],\ j=1,\ldots,n}\big|g^{\prime}_{i,x_j}(y^\delta(\theta),\theta)(x^*_j(\theta) -y_j^\delta(\theta))\big|\big)d\tau\Big| + \delta2 T \overline\omega \leq \\ \delta (1 + T \overline U n \omega' + 2\overline\omega T) \stackrel{def}{=} \delta R_u. \end{array} \end{aligned} $$
(7.29)

Thus, the term \(\mathbf {UV}|{ }_0^T\) in sum (7.24) can be estimated as

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \Big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\Big)\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\Bigg|{}_0^T\\ \leq 2\delta \big(\overline Y + \overline Y^u\overline\omega\big) R_u. \end{array} \end{aligned} $$
(7.30)

Using (7.7), (7.22) and (7.29), the term \(\int \limits _0^T \mathbf {VdU}dt\) in (7.24) can be estimated in the following way

$$\displaystyle \begin{aligned} \begin{array}{rcl}{} &\displaystyle \displaystyle \Big|\int\limits_0^T\Big[\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\\ &\displaystyle \displaystyle \cdot \Big(\ddot y^\delta_i(t) - \dot{\overline u}^\delta_i(t)g_i(y^\delta(t),t) - \overline u^\delta_i(t)\big(\sum_{j=1}^n [g^{\prime}_{i,x_j}(y^\delta_i(t),t)\dot y^\delta_j] + g^{\prime}_{i,t}(y^\delta_i(t),t)\big)\Big)\Big]dt\Big|\\ &\displaystyle \displaystyle \leq \Big|\int\limits_{[0,T]\setminus\varOmega^\delta}\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\ddot y^\delta_i(t)dt\Big|\\ &\displaystyle \displaystyle + \Big|\int\limits_{\varOmega^\delta}\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\ddot y^\delta_i(t)dt\Big|\\ &\displaystyle \displaystyle + \Big|\int\limits_{[0,T]\setminus\varOmega^\delta_u}\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\dot{\overline u}^\delta_i(t)g_i(y^\delta(t),t)dt\Big|\\ &\displaystyle \displaystyle + \Big|\int\limits_{\varOmega^\delta_u}\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\dot{\overline u}^\delta_i(t)g_i(y^\delta(t),t)dt\Big|\\ &\displaystyle \displaystyle + \Big|\int\limits_0^T\Big(y^\delta_i(t) - x_i^*(0) - \int\limits_0^t\overline u^\delta_i(\tau)g_i(y^\delta(\tau),\tau)d\tau\Big)\\ &\displaystyle \displaystyle \cdot \Big(\sum_{j=1}^n [g^{\prime}_{i,x_j}(y^\delta_i(t),t)\dot y^\delta_j] + g^{\prime}_{i,t}(y^\delta_i(t),t)\Big)dt\Big|\\ &\displaystyle \displaystyle \leq \delta T R_u \overline Y + \delta R_u \overline Y + \delta T R_u \overline Y^u\overline\omega + \delta R_u \overline Y^u\overline\omega + \delta T R_u \omega'(n\overline Y + 1) \stackrel{def}{=} \delta \overline R_u.\\ \end{array} \end{aligned} $$
(7.31)

Combining estimates (7.30) and (7.31), we can now estimate expression (7.24).

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_0^T \big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)^2dt \leq \delta\big(\overline R_u + 2(\overline Y + \overline Y^u\overline\omega) R_u\big) \end{array} \end{aligned} $$
(7.32)

Finally, we can use the first mean value theorem for definite integrals and estimate (7.32) to get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\int\limits_0^T \left(\frac{\dot y^\delta(t)}{g_i(y^\delta(t),t)} - \overline u^\delta_i(t)\right)^2dt\right|\\ \displaystyle \leq \max\limits_{t\in[0,T]}\left[\frac{1}{g_i^2(y^\delta(t),t)}\right] \int\limits_0^T \big(\dot y^\delta_i(t) - \overline u^\delta_i(t)g_i(y^\delta(t),t)\big)^2dt\\ \displaystyle \leq \delta \frac{2\big(\overline Y + \overline Y^u\overline\omega\big) R_u + \overline R_u}{\underline\omega^2} \stackrel{\delta\to 0}{\longrightarrow} 0. \end{array} \end{aligned} $$
(7.33)

It follows from (7.33) that

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\left\|\frac{\dot y^\delta_i(\cdot)}{g_i(y^\delta(\cdot),\cdot)} - \overline u^\delta_i(\cdot)\right\|{}_{L_{2,[0,T]}} = 0. \end{aligned}$$

Recall that the function \(\overline u^\delta _i (\cdot )\) was chosen so that \( \lim \limits _{\delta \rightarrow 0}\left \|\overline u^\delta _i(\cdot ) - u^*_i(\cdot )\right \|{ }_{L_{2,[0,T]}} = 0. \) So, from the triangle inequality \(\|f_1(\cdot ) + f_2(\cdot )\|{ }_{L_{2,[0,T]}}\leq \|f_1(\cdot )\|{ }_{L_{2,[0,T]}} + \|f_2(\cdot )\|{ }_{L_{2,[0,T]}}\) it follows that

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\left\|\frac{\dot y^\delta_i(\cdot)}{g_i(y^\delta(\cdot),\cdot)} - u^*_i(\cdot)\right\|{}_{L_{2,[0,T]}} = 0,\quad i=1,\ldots,n, \end{aligned}$$

which was to be proved. □
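
Lemma 7.1 can also be illustrated numerically: for the illustrative scalar example given after (7.20), the L 2 distance between \(\dot y^\delta /g(y^\delta ,\cdot )\) and u* can be evaluated for a sequence of decreasing δ. A minimal sketch, reusing the assumed g, u_star and x_star from that example:

```python
import numpy as np

ts = np.linspace(0.0, 1.0, 4000)
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    # Same illustrative measurement model as before, with |y - x*| <= delta.
    y = x_star(ts) + delta * np.sin(30.0 * ts)
    dy = np.gradient(y, ts)                  # numerical derivative of y^delta
    ratio = dy / g(y, ts)                    # the quantity appearing in (7.21)
    err = np.sqrt(np.trapz((ratio - u_star(ts)) ** 2, ts))
    print(f"delta = {delta:.0e}:  ||dy/g(y) - u*||_L2 = {err:.3e}")
```

The decay of the printed values with δ, down to the discretization error of the numerical derivative, illustrates the second relation in (7.21).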

Theorem 7.1

For any fixed δ ∈ (0, δ 0] there exists a parameter \(\alpha ^\delta _0 = \alpha ^\delta _0(\delta )\) such that the solution \(x^{\delta ,\alpha ^\delta _0}(\cdot )\), \(s^{\delta ,\alpha ^\delta _0}(\cdot )\) of system (7.19) with boundary conditions (7.20) can be extended to, and is unique on, the whole interval t ∈ [0, T].

Moreover, \(\lim \limits _{\delta \to 0}\alpha ^\delta _0(\delta ) = 0\) and

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|x^{\delta,\alpha^\delta_0}_i(\cdot) - x^*_i(\cdot)\|{}_{C_{[0,T]}} = 0,\ \lim\limits_{\delta\rightarrow 0}\|u^{\delta,\alpha^\delta_0}_i(\cdot) - u^*_i(\cdot)\|{}_{L_{2,[0,T]}} = 0,\ i = 1,...,n, \end{aligned} $$
(7.34)

where

$$\displaystyle \begin{aligned} u_i^{\delta,\alpha^\delta_0}(\cdot) = -(1/(\alpha^\delta_0)^2)g_i(x^{\delta,\alpha^\delta_0}(\cdot),\cdot)s_i^{\delta,\alpha^\delta_0}(\cdot),\quad i=1,\ldots,n. \end{aligned} $$
(7.35)

Proof

Let’s introduce new variables:

$$\displaystyle \begin{aligned} z_i(t) = x_i(t) - y^\delta_i(t),\quad w_i(t) = s_i(t) + \frac{\alpha^2\dot y^\delta_i(t)}{g_i^2(x(t),t)},\quad i = 1,\ldots,n. \end{aligned} $$
(7.36)

Their derivatives are

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle\dot z_i(t) = \dot x_i(t) - \dot y^\delta_i(t),\\ \displaystyle \dot w_i(t) = \dot s_i(t) + \frac{\alpha^2 \ddot y^\delta_i(t)}{g_i^2(x(t),t)} - 2\frac{\alpha^2\sum_{j = 1}^n \left[{g_i}^{\prime}_{x_j}(x(t),t)\dot x_j(t)\right]}{g_i^3(x(t),t)}\\ \displaystyle = \dot s_i(t) + \frac{\alpha^2 \ddot y^\delta_i(t)}{g_i^2(x(t),t)} + 2\frac{\sum_{j = 1}^n \left[{g_i}^{\prime}_{x_j}(x(t),t)s_j(t)g_j^2(x(t),t)\right]}{g_i^3(x(t),t)},\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.37)

System (7.19) can be rewritten in these variables as

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle\dot z_i(t) = - w_i(t)\frac {g^2_i(z(t) + y^\delta(t),t)}{\alpha^2},\\ \dot w_i(t) = z_i(t) + F_i(z(t),w(t),t),\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.38)

where

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle F_i(z(t),w(t),t) = \alpha^2\Bigg(\frac{\ddot y^\delta_i(t)}{g_i^2(z(t) + y^\delta(t),t)} + \\ \displaystyle\sum_{j = 1}^n \Bigg[2\frac{g_j^2(z(t) + y^\delta(t),t){g_i}^{\prime}_{x_j}(z(t) + y^\delta(t),t)\left(\frac{w_j(t)}{\alpha^2} - \frac{\dot y^\delta_j(t)}{g_j^2(z(t) + y^\delta(t),t)}\right)}{g_i^3(z(t) + y^\delta(t),t)}\\ \displaystyle + \frac{{g_j}^{\prime}_{x_i}(z(t) + y^\delta (t),t)(\dot y_j^\delta(t))^2}{g_j^3(z(t) + y^\delta (t),t)} + \frac{g_j(z(t) + y^\delta(t),t) {g_j}^{\prime}_{x_i}(z(t) + y^\delta(t),t)w_j^2(t)}{\alpha^4}\\ \displaystyle - 2\frac{{g_j}^{\prime}_{x_i}(z(t) + y^\delta(t),t)\dot y^\delta_j(t)w_j(t)}{\alpha^2 g_j(z(t) + y^\delta(t),t)}\Bigg]\Bigg),\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.39)

Boundary conditions (7.20) in the new variables take the form

$$\displaystyle \begin{aligned} z(T) = 0,\quad w(T) = 0. \end{aligned} $$
(7.40)

As follows from Hypothesis 7.1, the right-hand side of system (7.38) is locally Lipschitz on Ψ × [0, T]. Hence, by the Cauchy theorem, there exists an interval [T 0, T] ⊂ [0, T] such that the solutions z δ, α(⋅) : [T 0, T] → R n, w δ, α(⋅) : [T 0, T] → R n of system (7.38) with boundary conditions (7.40) exist and are unique on t ∈ [T 0, T]. Moreover, due to the continuity of the solutions and the zero boundary conditions (7.40), there exists an interval [t 1, T] ⊂ [T 0, T] such that

$$\displaystyle \begin{aligned} |z^{\delta,\alpha}_i(t)|\leq \alpha\delta R_w,\quad |w^{\delta,\alpha}_i(t)|\leq \alpha^2\delta R_w,\quad i = 1,\ldots,n,\quad t\in[t_1,T], \end{aligned}$$

where the constant R w is defined in (7.5).

Let’s now extend the solution further in reverse time (to the left of t 1 on the time axis). As the solution is continuous, we can always extend it either up to a moment t 0 at which \(|z^{\delta ,\alpha }_i(t_0)|=2\alpha \delta R_w\) for some i ∈{1, …, n} or \(|w^{\delta ,\alpha }_i(t_0)|=2\alpha ^2\delta R_w\) for some i ∈{1, …, n} (the first case), or up to t = 0. If we are able to extend it up to t = 0 without reaching the values 2αδR w, 2α 2 δR w (the second case), then

$$\displaystyle \begin{aligned} |z^{\delta,\alpha}_i(t)|\leq 2\alpha\delta R_w,\quad |w^{\delta,\alpha}_i(t)|\leq 2\alpha^2\delta R_w,\quad i=1,\ldots,n,\quad t\in[0,T]. \end{aligned}$$

In the first case there exists a moment t 0 ∈ [0, T] such that

$$\displaystyle \begin{aligned} |z^{\delta,\alpha}_i(t)| \leq 2\alpha\delta R_w,\quad |w^{\delta,\alpha}_i(t)| \leq 2\alpha^2\delta R_w,\quad i=1,\ldots,n,\quad t\in[t_0,T]. \end{aligned} $$
(7.41)

Let’s consider this case more closely.

We introduce a new system of ODEs for functions \(\overline z_i(\cdot )\), \(\overline w_i(\cdot ),\ i = 1,\ldots ,n\)

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \dot {\overline z}_i(t) = - \overline w_i(t)\frac {g^2_i(z^{\delta,\alpha}(t) + y^\delta(t),t)}{\alpha^2},\\ \displaystyle \dot {\overline w}_i(t) = \overline z_i(t) + F_i(z^{\delta,\alpha}(t),w^{\delta,\alpha}(t),t),\\ i = 1,\ldots,n,\quad t\in[t_0,T] \end{array} \end{aligned} $$
(7.42)

with boundary conditions

$$\displaystyle \begin{aligned} \overline z(T) = 0,\quad \overline w(T) = 0, \end{aligned} $$
(7.43)

where \(z^{\delta ,\alpha }_i(t)\), \(w^{\delta ,\alpha }_i(t)\) are solutions of system (7.38) with boundary conditions (7.40), constrained by (7.41).

System (7.42) is a non-homogeneous linear system of ODEs with time-dependent coefficients that are continuous on t ∈ [t 0, T]. So, the solution of (7.42), (7.43) exists and is unique on t ∈ [t 0, T].

Let’s now prove that the solutions of (7.42), (7.43) coincide with the solutions of (7.38), (7.40). To do this, we introduce residuals

$$\displaystyle \begin{aligned} \triangle z(t) = z^{\delta,\alpha}(t) - \overline z(t),\quad \triangle w(t) = w^{\delta,\alpha}(t) - \overline w(t). \end{aligned}$$

Subtracting Eqs. (7.42) from (7.38) (with the solutions z δ, α(t), w δ, α(t) substituted), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle\triangle \dot z_i(t) = - w_i^{\delta,\alpha}(t)\frac {g^2_i(z^{\delta,\alpha}(t) + y^\delta(t),t)}{\alpha^2} + \overline w_i(t)\frac {g^2_i(z^{\delta,\alpha}(t) + y^\delta(t),t)}{\alpha^2}\\ \displaystyle = - \triangle w_i(t)\frac {g^2_i(z^{\delta,\alpha}(t) + y^\delta(t),t)}{\alpha^2},\\ \triangle \dot w_i(t) = \triangle z_i(t) + F_i(z^{\delta,\alpha}(t),w^{\delta,\alpha}(t),t) - F_i(z^{\delta,\alpha}(t),w^{\delta,\alpha}(t),t) = \triangle z_i(t),\\ i = 1,\ldots,n \end{array} \end{aligned} $$
(7.44)

with boundary conditions

$$\displaystyle \begin{aligned} \triangle z(T) = 0,\quad \triangle w(T) = 0. \end{aligned} $$
(7.45)

As a homogeneous linear system of ODEs with continuous time-dependent coefficients, system (7.44) with zero boundary conditions has only the trivial solution

$$\displaystyle \begin{aligned} \triangle z(t)\equiv 0,\quad \triangle w(t)\equiv 0,\quad t\in[t_0,T]. \end{aligned} $$
(7.46)

That means that \(z^{\delta ,\alpha }(t) = \overline z(t)\), \(w^{\delta ,\alpha }(t) = \overline w(t),\ t\in [t_0,T]\).

Now let’s study the properties of the solutions \(\overline z(t)\), \(\overline w(t)\) of system (7.42) with boundary conditions (7.43). System (7.42) can be rewritten in vector form

$$\displaystyle \begin{aligned} \dot Z(t) = A(t)Z(t) + F(t), \end{aligned} $$
(7.47)

where

$$\displaystyle \begin{aligned} \begin{array}{cc} Z(\cdot) = (\overline z_1(\cdot),\ldots,\overline z_n(\cdot),\overline w_1(\cdot),\ldots,\overline w_n(\cdot)),\\ F(\cdot) = (\underbrace{0,\ldots\,0}_n ,F_1(z^{\delta,\alpha}(\cdot),w^{\delta,\alpha}(\cdot),\cdot),\ldots,F_n(z^{\delta,\alpha}(\cdot),w^{\delta,\alpha}(\cdot),\cdot)) \end{array} \end{aligned} $$
(7.48)

and the 2n × 2n matrix A(t) can be written in the block form \(A(t) = \left ( \begin {array}{cc} O & \frac{1}{\alpha^2}G_A(x,t) \\ I_n & O \\ \end {array} \right )\), where I n is the n × n identity matrix, O is the n × n zero matrix,

$$\displaystyle \begin{aligned} G_A(x,t) = \left( \begin{array}{cccc} - g_1^2(x^{\delta,\alpha}(t),t) & 0 & \ldots & 0 \\ 0 & - g_2^2(x^{\delta,\alpha}(t),t) & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots \\ 0 & 0 & \ldots & - g_n^2(x^{\delta,\alpha}(t),t) \\ \end{array} \right). \end{aligned}$$

The solutions of system (7.42) can be written in the following form with the help of the Cauchy formula for solutions of a non-homogeneous linear system of ODEs with time-dependent coefficients. One can easily check that, for boundary conditions given at the point t = T (instead of t = 0), it has the form

$$\displaystyle \begin{aligned} Z(t) = \varPhi(t)\varPhi^{ - 1}(T)Z(T) - \varPhi(t)\int\limits_t^T \varPhi^{ - 1}(\tau)F(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau, \end{aligned} $$
(7.49)

where Φ(⋅) is a 2n × 2n fundamental matrix of solutions of the homogeneous part of system (7.42). This matrix can be chosen as

$$\displaystyle \begin{aligned} \varPhi(t) = \exp\left[ - \int\limits_t^T A(\tau)d\tau\right] = \sum _{k = 0}^{\infty}\frac{1}{k!}\left( - \int\limits_t^T A(\tau)d\tau\right)^k. \end{aligned} $$
(7.50)

One can check that, after expanding the kth powers in the sum in the latter formula and collecting the terms again using the Taylor series of the sine and cosine functions, one gets \(\varPhi (t) = \left ( \begin {array}{cc} \varPhi _1(t) & \varPhi _2(t) \\ \varPhi _3(t) & \varPhi _1(t) \\ \end {array} \right )\), where Φ 1(t), Φ 2(t), Φ 3(t) are diagonal matrices whose ith diagonal elements are

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \varPhi_{1ii}(t) = \cos\left(\frac{1}{\alpha}\sqrt{(T - t)\int\limits_t^T g_i^2(x^{\delta,\alpha}(\tau),\tau)d\tau}\right),\\ \displaystyle \varPhi_{2ii}(t) = \frac{1}{\alpha}\tilde\varPhi_i(t)\sin\left(\frac{1}{\alpha}\sqrt{(T - t)\int\limits_t^T g_i^2(x^{\delta,\alpha}(\tau),\tau)d\tau}\right),\\ \displaystyle \varPhi_{3ii}(t) = - \alpha\frac{1}{\tilde\varPhi_i(t)}\sin\left(\frac{1}{\alpha}\sqrt{(T - t)\int\limits_t^T g_i^2(x^{\delta,\alpha}(\tau),\tau)d\tau}\right), \end{array} \end{aligned} $$
(7.51)

where the continuous functions

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \tilde\varPhi_i(t) = \left\{ \begin{array}{c} \displaystyle \frac{\sqrt{\int\limits_t^T g_i^2(x^{\delta,\alpha}(\tau),\tau)d\tau}}{\sqrt{T - t}},\ t\in[t_0,T), \\ g_i(x^{\delta,\alpha}(T),T),\ t = T, \\ \end{array} \right.\\ \ \\ i=1,\ldots,n. \end{array} \end{aligned} $$
(7.52)

Using (7.18), one can obtain that

$$\displaystyle \begin{aligned} \underline\omega \leq \left|\tilde\varPhi_i(t)\right| \leq \overline\omega,\quad i=1,\ldots,n. \end{aligned} $$
(7.53)

Due to the simple structure of the matrix Φ(t), one can check that the inverse matrix is \(\varPhi ^{ - 1}(t) = \left ( \begin {array}{cc} \varPhi _1(t) & - \varPhi _2(t) \\ - \varPhi _3(t) & \varPhi _1(t) \\ \end {array} \right )\).
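
This claim can be verified by a short symbolic computation: after pairing the coordinates (i, n + i), Φ(t) and the claimed Φ −1(t) act through 2 × 2 blocks built from the ith diagonal entries in (7.51). Below, θ stands for the common argument of the sine and cosine and p for the positive quantity \(\tilde \varPhi _i(t)\) (a quick check for the reader, not part of the paper):

```python
import sympy as sp

theta, p, alpha = sp.symbols('theta p alpha', positive=True)
c, s = sp.cos(theta), sp.sin(theta)

# 2x2 block of Phi(t) acting on the coordinates (i, n+i), see (7.51)-(7.52).
Phi_block = sp.Matrix([[c, p * s / alpha],
                       [-alpha * s / p, c]])
# Claimed block of Phi^{-1}(t): same diagonal, off-diagonal entries negated.
Phi_inv_block = sp.Matrix([[c, -p * s / alpha],
                           [alpha * s / p, c]])

# The product simplifies to the identity, so the claimed inverse is correct.
product = (Phi_block * Phi_inv_block).applyfunc(sp.simplify)
assert product == sp.eye(2)
```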

Let’s return to (7.49). Here \(Z(T) = \overrightarrow {0}\), so the vector \(Z(t) = -\varPhi (t) \int \limits _t^T\varPhi ^{-1}(\tau ) F(\tau )d\tau \) has the following coordinates

$$\displaystyle \begin{aligned} \begin{array}{cc} Z_i(t) = \overline z_i(t) = \varPhi_{1,ii}(t)\int\limits_t^T \varPhi_{2,ii}(\tau)F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\\ - \varPhi_{2,ii}(t) \int\limits_t^T \varPhi_{1,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau,\\ Z_{n + i}(t) = \overline w_i(t) = \varPhi_{3,ii}(t) \int\limits_t^T \varPhi_{2,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\\ - \varPhi_{1,ii}(t) \int\limits_t^T \varPhi_{1,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau,\\ t\in[t_0,T],\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.54)

To estimate these expressions, we first consider the integral

$$\displaystyle \begin{aligned} \int\limits_{t_0}^T \cos\left(\frac{1}{\alpha}\sqrt{(T - \tau)\int\limits_\tau^T g_i^2(x^{\delta,\alpha}(\theta),\theta)d\theta}\right)f^\delta_i(\tau)d\tau,\quad i=1,\ldots,n \end{aligned} $$
(7.55)

where the function \(f^\delta _i(\cdot )=f^\delta _i(\cdot ,\delta ):[0,T]\to R\) depends on δ and is continuous in its first argument for any δ ∈ (0, δ 0].

Let’s introduce functions \(\varphi _i (\tau ) = \left (\sqrt {(T - \tau )\int \limits _\tau ^T g_i^2(x^{\delta ,\alpha }(\theta ),\theta )d\theta }\right ),\ i=1,\ldots ,n\), which are continuously differentiable in τ.

Note that all following calculations in the proof are true for all i ∈{1, …, n}.

Using Hypothesis 7.1, we can estimate the derivative

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \dot\varphi_i(\tau) = \frac{ - \int\limits_\tau^T g_i^2(x^{\delta,\alpha}(\theta),\theta)d\theta - (T - \tau)g_i^2(x^{\delta,\alpha}(\tau),\tau)}{2\sqrt{(T - \tau)\int\limits_\tau^T g_i^2(x^{\delta,\alpha}(\theta),\theta)d\theta}}\\ \displaystyle \geq - \frac{2\overline\omega^2(T - \tau)}{2\sqrt{(T - \tau)^2\underline\omega^2}} = - \frac{\overline\omega^2}{\underline\omega}.\\ \displaystyle \mathrm{Similarly, }\dot\varphi_i (\tau)\leq - \frac{\underline\omega^2}{\overline\omega},\quad \tau\in[t_0,T]. \end{array} \end{aligned} $$
(7.56)

So, φ i(τ) is a decreasing function with a bounded derivative and φ i(T) = 0. This means that we can construct a finite increasing sequence \(\{\tau _1<\tau _2<\ldots <\tau _{n_{\varphi _i}}\},\ n_{\varphi _i} \in N,\) with the following properties:

$$\displaystyle \begin{aligned} \varphi_i (\tau_{(n_{\varphi_i}-k)}) = \alpha(0.5 + k)\pi,\quad k=0,\ldots,(n_{\varphi_i}-1); \end{aligned}$$
$$\displaystyle \begin{aligned} \alpha\frac{\pi\underline\omega}{\overline\omega^2} \leq (\tau_{j+1}-\tau_{j}) \leq \alpha\frac{\pi\overline\omega}{\underline\omega^2},\quad n_{\varphi_i} \leq \frac{T\overline\omega}{\alpha\underline\omega^2}, \end{aligned} $$
(7.57)

since the derivative \(\dot \varphi _i(\tau )\) is bounded according to (7.56).

Let’s add to this sequence elements τ 0 = t 0 and \(\tau _{(n_{\varphi _i}+1)} = T\).

Integral (7.55) can be rewritten as

$$\displaystyle \begin{aligned} \int\limits_{t_0}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau = \sum_{j = 0}^{n_{\varphi_i}}\int\limits_{\tau_j}^{\tau_{j + 1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau. \end{aligned} $$
(7.58)

Because \(\cos \left (\frac {\varphi _i (\tau )}{\alpha }\right )\) is sign-definite on \(\tau \in [\tau _j,\tau _{j + 1}],\ j = 0,\ldots ,n_{\varphi _i}\) and \(f^\delta _i(\tau )\) is continuous, it follows from the first mean value theorem for definite integrals that for each \(j = 0,\ldots ,n_{\varphi _i}\) there exists such point \(\tilde \tau _j\in [\tau _j,\tau _{j + 1}]\) that \(\int \limits _{\tau _j}^{\tau _{j + 1}} \cos \left (\frac {\varphi _i (\tau )}{\alpha }\right )f^\delta _i(\tau )d\tau = f^\delta _i(\tilde \tau _j)\int \limits _{\tau _j}^{\tau _{j + 1}} \cos \left (\frac {\varphi _i (\tau )}{\alpha }\right )d\tau \). Combining the terms of sum (7.58) by pairs [τ j, τ j+1], [τ j+1, τ j+2], we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\\ \displaystyle = f^\delta_i(\tilde\tau_j)\int\limits_{\tau_j}^{\tau_{j + 1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau + f^\delta_i(\tilde\tau_{j+1})\int\limits_{\tau_{j + 1}}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau. \end{array} \end{aligned} $$
(7.59)

To estimate expression (7.59), we first make the following estimates:

$$\displaystyle \begin{aligned} \int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau = \int\limits_{\tau_j}^{\tau_{j + 2}} \frac{\alpha}{\dot\varphi_i (\tau)}\frac{\dot\varphi_i (\tau)}{\alpha}\cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau,\ j=0,\ldots,n_{\varphi_i} - 3, \end{aligned} $$
(7.60)

as (T − τ) ≠ 0 for \(j<n_{\varphi _i}\). We can integrate (7.60) by parts.

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_{\tau_j}^{\tau_{j + 2}} \underbrace{\frac{\alpha}{\dot\varphi_i (\tau)}}_{\mathbf{U}} \underbrace{\frac{\dot\varphi_i (\tau)}{\alpha}\cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau}_{\mathbf{dV}} = \underbrace{\frac{\alpha}{\dot\varphi_i (\tau)}}_{\mathbf{U}} \underbrace{\sin\left(\frac{\varphi_i (\tau)}{\alpha}\right)}_{\mathbf{V}} {\bigg |}_{\tau_j}^{\tau_{j + 2}}\\ \displaystyle - \alpha\int\limits_{\tau_j}^{\tau_{j + 2}}\underbrace{\sin\left(\frac{\varphi_i (\tau)}{\alpha}\right)}_{\mathbf{V}} \underbrace{\frac{d}{d\tau}\left(\frac{1}{\dot\varphi_i (\tau)}\right)d\tau}_{\mathbf{dU}},\ j = 0,\ldots,n_{\varphi_i} - 3. \end{array} \end{aligned} $$
(7.61)

Here

$$\displaystyle \begin{aligned} \left|\frac{\alpha}{\dot\varphi_i (\tau)}\sin\left(\frac{\varphi_i (\tau)}{\alpha}\right){\bigg |}_{\tau_j}^{\tau_{j + 2}}\right| = \left|\frac{\alpha}{\dot\varphi_i (\tau)}{\bigg |}_{\tau_j}^{\tau_{j + 2}}\right| \leq \alpha\sup_{t\in [\tau_j,\tau_{j + 2}]} \left|\frac{d}{d\tau}\frac{1}{\dot\varphi_i (\tau)}\right|(\tau_{j + 2} - \tau_j). \end{aligned}$$

One can check that the derivative

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\frac{d}{d\tau}\frac{1}{\dot\varphi_i (\tau)}\right| = \Bigg|\frac{1}{\sqrt{(T - \tau)\int\limits_\tau^T g_i^2(x^{\delta,\alpha}(\theta),\theta)d\theta}}\\ \displaystyle - \frac{2}{\dot\varphi_i(\tau)}\Bigg(-g_i^2(x^{\delta,\alpha}(\tau),\tau)+ (T-\tau)g_i(x^{\delta,\alpha}(\tau),\tau)\\ \displaystyle \cdot\Big(\sum_{j=1}^n \left[\frac{\partial g_i(x^{\delta,\alpha}(\tau),\tau)}{\partial x_j} \left(-\frac{w_j^{\delta,\alpha}(\tau)g_j^2(x^{\delta,\alpha}(\tau),\tau)}{\alpha^2}\right)\right] + \frac{\partial g_i(x^{\delta,\alpha}(\tau),\tau)}{\partial t}\Big)\Bigg)\Bigg|\\ \displaystyle \leq \frac{1}{\underline\omega(T - \tau_{j+2})} + \frac{2\overline\omega}{\underline\omega^2}\big(\overline\omega^2 + T\overline\omega(\delta n\omega'\overline\omega^2 R_w + \omega')\big). \end{array} \end{aligned} $$
(7.62)

So, the term \(\mathbf {UV}\big |{ }_{\tau _j}^{\tau _{j+2}}\) in (7.61) can be estimated by using (7.57) and (7.62) as

$$\displaystyle \begin{aligned} \displaystyle \left|\frac{\alpha}{\dot\varphi_i (\tau)}\sin\left(\frac{\varphi_i (\tau)}{\alpha}\right){\bigg |}_{\tau_j}^{\tau_{j + 2}}\right| \leq \alpha^2\left(\frac{R_1}{T - \tau_{j+2}} + R_2 + \delta R_3 R_w\right), \end{aligned} $$
(7.63)

where the constants R 1, R 2, R 3 are defined in (7.5). Let’s emphasize that these constants don’t depend on δ and α.

Now let’s estimate the term \(\int _{\tau _j}^{\tau _{j+2}} \mathbf {VdU}\) in (7.61).

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \alpha\left|\int\limits_{\tau_j}^{\tau_{j+2}}\sin\left(\frac{\varphi_i (\tau)}{\alpha}\right) \frac{d}{d\tau}\left(\frac{1}{\dot\varphi_i (\tau)}\right)d\tau\right|\\ \displaystyle \leq \alpha\sup_{\tau\in[\tau_j,\tau_{j+2}]} \left| \sin\left(\frac{\varphi_i (\tau)}{\alpha}\right) \frac{d}{d\tau}\left(\frac{1}{\dot\varphi_i (\tau)}\right)\right| (\tau_{j+2}-\tau_j)\\ \displaystyle \leq \alpha^2\left(\frac{R_1}{T - \tau_{j+2}} + R_2 + \delta R_3 R_w\right). \end{array} \end{aligned} $$
(7.64)

Applying estimates (7.63) and (7.64) to (7.60)–(7.61), we get

$$\displaystyle \begin{aligned} \left|\int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau\right| \leq 2\alpha^2\left(\frac{R_1}{T - \tau_{j+2}} + R_2 + \delta R_3 R_w\right). \end{aligned} $$
(7.65)

Now let’s return to expression (7.59). By splitting the last integral term in (7.59) as \(\int \limits _{\tau _{j + 1}}^{\tau _{j + 2}} = \int \limits _{\tau _j}^{\tau _{j + 2}} - \int \limits _{\tau _j}^{\tau _{j + 1}}\), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\right|\\ \displaystyle \leq \left|\int\limits_{\tau_j}^{\tau_{j + 1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau \Big(f^\delta_i(\tilde\tau_j) - f^\delta_i(\tilde\tau_{j+1})\Big)\right|\\ \displaystyle + \left|f^\delta_i(\tilde\tau_{j+1})\int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)d\tau\right|. \end{array} \end{aligned} $$
(7.66)

By the Heine–Cantor theorem, every continuous function defined on a closed interval is uniformly continuous. So, the continuous function \(f^\delta _i(\tau )\) is uniformly continuous on [t 0, T]. In other words,

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \forall \delta >0\ \exists\alpha_1^\delta = \alpha_1^\delta(\delta)>0:\ \forall \tau_{1},\tau_{2}\in [\tau_j,\tau_{j + 2}]\\ \displaystyle \bigl(|\tau_{1}-\tau_{2}|<\alpha_1^\delta 2\frac{\pi\overline\omega}{\underline\omega^2} \bigr) \Rightarrow \bigl(|f^\delta_i(\tau_{1})-f^\delta_i(\tau_{2})|<\delta \bigr). \end{array} \end{aligned} $$
(7.67)

Remark 7.5

As \(f^\delta _i(\tau )\) is uniformly continuous on [t 0, T], we are able to choose the same \(\alpha _1^\delta = \alpha _1^\delta (\delta )\) in (7.67) for each \(j=0,\ldots ,(n_{\varphi _i}+1)\) as [τ j, τ j+2] ⊂ [t 0, T].

Combining (7.57), (7.65), (7.66), (7.67), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\int\limits_{\tau_j}^{\tau_{j + 2}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\right|\\ \displaystyle \leq \alpha\delta\frac{\pi\overline\omega}{\underline\omega^2} + 2\alpha^2 \max\limits_{\tau\in[\tau_j,\tau_{j + 2}]}f^\delta_i(\tau)\left(\frac{R_1}{T - \tau_{j+2}} + R_2 + \delta R_3 R_w\right). \end{array} \end{aligned} $$
(7.68)

To be specific, let’s assume that the number \(n_{\varphi _i}\) is odd. Then

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \int\limits_{t_0}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\\ \displaystyle = \int\limits_{t_0}^{\tau_1} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau + \sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1}\int\limits_{\tau_{2j-1}}^{\tau_{2j+1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau \\ \displaystyle + \int\limits_{\tau_{n_{\varphi_i}-2}}^{\tau_{n_{\varphi_i}}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau + \int\limits_{\tau_{n_{\varphi_i}}}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau. \end{array} \end{aligned} $$
(7.69)

Using (7.68), let’s first estimate the sum

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1}\left[\int\limits_{\tau_{2j-1}}^{\tau_{2j+1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\right]\right| \\ \displaystyle \leq \sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1} \left[ \alpha\delta\frac{\pi\overline\omega}{\underline\omega^2} + 2\alpha^2 \overline f^\delta_i \left(\frac{R_1}{T - \tau_{2j+1}} + R_2 + \delta R_3 R_w\right)\right], \end{array} \end{aligned} $$
(7.70)

where \({n_{\varphi _i} \leq \frac {T\overline \omega }{\alpha \underline \omega ^2}}\) and \({\overline f^\delta _i = \max \limits _{\tau \in [t_0,T]}|f^\delta _i(\tau )|}\).

The following sum can be estimated by replacing the denominator of each fraction with its minimal possible value from (7.57) and reversing the order of the terms in the sum.

$$\displaystyle \begin{aligned} \sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1}\frac{\alpha}{T-\tau_{2j+1}} \leq \sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1}\frac{\alpha}{(\alpha \pi\underline\omega/\overline\omega^2)j} = \frac{\overline\omega^2}{\pi\underline\omega} \sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1}\frac{1}{j}. \end{aligned}$$

The partial sum \(\sum \limits _{j = 1}^{0.5 (n_{\varphi _i}-1)-1}\frac {1}{j}\) of the harmonic series can be estimated by the Euler–Mascheroni formula \(\sum \limits _{n=1}^{k}{\frac {1}{n}} \leq (\ln k)+1\). Thus, continuing estimate (7.70), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\sum_{j = 1}^{0.5 (n_{\varphi_i}-1)-1} \left[\int\limits_{\tau_{2j-1}}^{\tau_{2j+1}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\right]\right|\\ \displaystyle \leq \delta\frac{T\pi\overline\omega^2}{2\underline\omega^4} + 2\alpha\overline f^\delta_i\left(\frac{\pi\overline\omega^2 R_1}{\underline\omega}(\ln \frac{0.5T\overline\omega}{\alpha\underline\omega^2} + 1) + R_2 + \delta R_3 R_w\right). \end{array} \end{aligned} $$
(7.71)

We have estimated the second term of the sum on the right-hand side of (7.69). Using (7.57), one can get the following relations for the first, third and fourth terms in (7.69).

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \Bigg|\int\limits_{t_0}^{\tau_1} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau + \int\limits_{\tau_{n_{\varphi_i}-2}}^{\tau_{n_{\varphi_i}}} \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\\ \displaystyle + \int\limits_{\tau_{n_{\varphi_i}}}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\Bigg| \leq 4\alpha \frac{\pi\overline\omega^2}{\underline\omega}\overline f^\delta_i. \end{array} \end{aligned} $$
(7.72)

Remark 7.6

We assumed that the number \(n_{\varphi _i}\) is odd. In the case of even \(n_{\varphi _i}\) the calculations are similar, because the only difference is in formula (7.69), where the lower limit of the integral \(\int \limits _{\tau _{n_{\varphi _i}-2}}^{\tau _{n_{\varphi _i}}} \cos \left (\frac {\varphi _i (\tau )}{\alpha }\right )f^\delta _i(\tau )d\tau \) is replaced by \(\tau _{n_{\varphi _i}-1}\).

Finally, applying (7.71) and (7.72) to (7.69), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \Bigg|\int\limits_{t_0}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\Bigg| \leq \delta\frac{T\pi\overline\omega^2}{2\underline\omega^4}\\ \displaystyle + \alpha\overline f^\delta_i\left(4\frac{\pi\overline\omega^2}{\underline\omega} + 2\frac{R_1\pi\overline\omega^2}{\underline\omega}\left(\ln \frac{T\overline\omega}{2\underline\omega^2} + 1\right) + R_2\right) + \alpha|\ln\alpha|\overline f^\delta_i + 2\alpha\delta\overline f^\delta_i R_3 R_w. \end{array} \end{aligned} $$

For any given δ ∈ (0, δ 0] the constant \({\overline f^\delta _i = \overline f^\delta _i(\delta )}\) is finite (as \(f^\delta _i(\tau )\) is continuous for δ ∈ (0, δ 0]). We can always find a parameter \(\alpha _2^\delta = \alpha _2^\delta (\delta )\) such that

$$\displaystyle \begin{aligned} \alpha_2^\delta(\delta)|\ln \alpha_2^\delta(\delta)|\overline f^\delta_i(\delta) \leq \delta. \end{aligned} $$
(7.73)

This is possible because \({\lim \limits _{\alpha \to 0}\alpha |\ln \alpha | = 0}\). Thus, for any

$$\displaystyle \begin{aligned} \alpha \leq \alpha_0 = \min\{\alpha_1^\delta,\alpha_2^\delta,1\}\quad (\text{where }\alpha_1^\delta\text{ is from (7.67) and }\alpha_2^\delta\text{ is from (7.73)}), \end{aligned} $$
(7.74)

we have

$$\displaystyle \begin{aligned} \displaystyle \left|\int\limits_{t_0}^T \cos\left(\frac{\varphi_i (\tau)}{\alpha}\right)f^\delta_i(\tau)d\tau\right| \leq \delta R_4 + 2\delta^2R_3 R_w, \end{aligned} $$
(7.75)

where the constants R 3, R 4 are defined in (7.5).

We can apply this result to expressions (7.54). First, let’s estimate the expression

$$\displaystyle \begin{aligned} \alpha^2\varPhi_{2,ii}(t)\int\limits_{t_0}^T \varPhi_{1,ii}(\tau) \frac{F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)}{\alpha^2}d\tau, \end{aligned} $$
(7.76)

for which \(f^\delta _i(\tau ) = F_i(z^{\delta ,\alpha }(\tau ),w^{\delta ,\alpha }(\tau ),\tau )/\alpha ^2 \stackrel {not}{=} f^\delta _{i,1}(\tau ) \) in the sense of (7.55). It follows from (7.39), (7.41) and Hypotheses 7.1 and 7.2 that

$$\displaystyle \begin{aligned} \begin{array}{cc} \overline f^\delta_i = \overline f^\delta_{i,1} = \max\limits_{\tau\in[t_0,T]} \left|F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)/\alpha^2\right|\\ \displaystyle \leq \left(\frac{\max\limits_{\tau\in[t_0,T]}|\ddot y_i^\delta(\tau)|}{\underline\omega^2} + n\frac{\overline\omega^2\omega'\overline Y + \omega'\overline Y^2}{\underline\omega^3}\right) + \delta n R_w \left(\frac{\overline\omega^2\omega'}{\underline\omega^3} + 2\frac{\omega'\overline Y}{\underline\omega}\right) + \delta^2 R_w^2 \overline\omega\omega'. \end{array} \end{aligned} $$
(7.77)

For \(\alpha \leq \alpha ^1_0\), where \(\alpha _0^1\) is defined in the same way as α 0 in (7.67), (7.73), (7.74), but assuming \(f^\delta _i(\tau ) = f^\delta _{i,1}(\tau ) \) and \(\overline f^\delta _i = \overline f^\delta _{i,1} \), estimates (7.75) and (7.53) give us

$$\displaystyle \begin{aligned} \begin{array}{cc} \left|\varPhi_{2,ii}(t)\int\limits_{t_0}^T \varPhi_{1,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\right|\\ \displaystyle \leq \alpha\overline\omega(\delta R_4 + 2\delta^2R_3 R_w),\ t\in[t_0,T]. \end{array} \end{aligned} $$
(7.78)

Let’s introduce \(\alpha _0^2\) that is defined in the same way as α 0 in (7.67), (7.73), (7.74), but assuming

$$\displaystyle \begin{aligned} f^\delta_i(\tau) = \tilde\varPhi_i(\tau)F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)/\alpha^2 \stackrel{not}{=} f^\delta_{i,2}(\tau),\quad \overline f^\delta_{i,2} = \overline \omega \overline f^\delta_{i,1}. \end{aligned} $$

One can follow the scheme of (7.55)–(7.78) together with (7.51)–(7.53) to obtain that for \(\alpha \leq \min \{\alpha ^1_0,\alpha ^2_0\}\) the following estimates hold as well

$$\displaystyle \begin{aligned} \begin{array}{cc} \left|\varPhi_{1,ii}(t)\int\limits_{t_0}^T \varPhi_{2,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\right|\\ \displaystyle\leq \alpha\overline\omega(\delta R_4 + 2\delta^2R_3 R_w),\quad t\in[t_0,T]; \end{array} \end{aligned} $$
(7.79)
$$\displaystyle \begin{aligned} \begin{array}{cc} \left|\varPhi_{3,ii}(t) \int\limits_t^T \varPhi_{2,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\right|\\ \leq \alpha^2\frac{1}{\underline\omega}(\delta R_4 + 2\delta^2R_3 R_w),\quad t\in[t_0,T]; \end{array} \end{aligned} $$
(7.80)
$$\displaystyle \begin{aligned} \begin{array}{cc} \left|\varPhi_{1,ii}(t) \int\limits_t^T \varPhi_{1,ii}(\tau) F_i(z^{\delta,\alpha}(\tau),w^{\delta,\alpha}(\tau),\tau)d\tau\right|\\ \leq \alpha^2(\delta R_4 + 2\delta^2R_3 R_w),\quad t\in[t_0,T]. \end{array} \end{aligned} $$
(7.81)

Remark 7.7

Estimates (7.78)–(7.81) hold under the combined condition

$$\displaystyle \begin{aligned} \alpha \leq \alpha_0^\delta \stackrel{def}{=} \min\{\alpha^1_0,\alpha^2_0\}. \end{aligned} $$
(7.82)

Combining (7.54) and (7.78)–(7.81), we get

$$\displaystyle \begin{aligned} \begin{array}{cc} |\overline z_i(t)| \leq \alpha\delta (1 + \overline\omega)(R_4 + 2\delta R_3 R_w),\\ |\overline w_i(t)| \leq \alpha^2\delta \frac{1 + \underline\omega}{\underline\omega}(R_4 + 2\delta R_3 R_w),\quad t\in[t_0,T],\quad i = 1,\ldots,n. \end{array} \end{aligned}$$

For \(\delta :0<\delta \leq \delta _0\leq \frac {1}{R_w}\) and \(\alpha \in (0,\alpha _0^\delta ]\), since \(z^{\delta ,\alpha }(t) = \overline z(t)\), \(w^{\delta ,\alpha }(t) = \overline w(t),\ t\in [t_0,T]\), we have

$$\displaystyle \begin{aligned} \begin{array}{cc}{} |z^{\delta,\alpha}_i(t)| = |\overline z_i(t)| \leq \alpha\delta (1 + \overline\omega)(R_4 + 2 R_3) \leq \alpha\delta R_w,\\ |w^{\delta,\alpha}_i(t)| = |\overline w_i(t)| \leq \alpha^2\delta \frac{1 + \underline\omega}{\underline\omega}(R_4 + 2 R_3) \leq \alpha^2\delta R_w,\\ t\in[t_0,T],\quad i = 1,\ldots,n. \end{array} \end{aligned} $$
(7.83)

Remark 7.8

Estimates (7.83) hold for t 0 ∈ [0, T) as long as the solutions z δ, α(⋅), w δ, α(⋅) of system (7.38) with boundary conditions (7.40) exist and are unique on t ∈ [t 0, T] and (7.41) holds.

But (7.83) means that for δ ∈ (0, δ 0] and \(\alpha \in (0,\alpha _0^\delta ]\), in particular at t = t 0,

$$\displaystyle \begin{aligned} |z^{\delta,\alpha}_i(t_0)| \leq \alpha\delta R_w,\quad |w^{\delta,\alpha}_i(t_0)| \leq \alpha^2\delta R_w,\quad i = 1,\ldots,n, \end{aligned}$$

which contradicts the assumption that either \(|z^{\delta ,\alpha }_i(t_0)|=2\alpha \delta R_w\) or \(|w^{\delta ,\alpha }_i(t_0)|=2\alpha ^2\delta R_w\) for some i ∈{1, …, n}. That means that such a moment t 0 does not exist.

In other words, we have proved that the solutions z δ, α(⋅), w δ, α(⋅) can be extended up to t = 0 and

$$\displaystyle \begin{aligned} \begin{array}{cc} |z^{\delta,\alpha}_i(t)| \leq 2\alpha\delta R_w,\\ |w^{\delta,\alpha}_i(t)| \leq 2\alpha^2\delta R_w,\quad t\in[0,T],\quad i = 1,\ldots,n \end{array} \end{aligned} $$
(7.84)

for δ ∈ (0, δ 0] and \(\alpha \in (0,\alpha _0^\delta ]\).

Since the solutions z δ, α(⋅), w δ, α(⋅) can be extended to t ∈ [0, T], we can return to the variables (7.36):

$$\displaystyle \begin{aligned} x_i^{\delta,\alpha}(t) = z_i^{\delta,\alpha}(t) + y^\delta_i(t),\quad u_i^{\delta,\alpha}(t) = -\frac{g_i(x^{\delta,\alpha}(t),t)}{\alpha^2}w_i^{\delta,\alpha}(t) + \frac{\dot y^\delta_i(t)}{g_i(x^{\delta,\alpha}(t),t)}. \end{aligned}$$

Applying the result (7.84) (see Remark 7.8), we get that

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle |x^{\delta,\alpha}_i(t) - y^\delta_i(t)| \leq 2\alpha\delta R_w,\quad \left|u^{\delta,\alpha}_i(t) - \frac{\dot y^\delta_i(t)}{g_i(x^{\delta,\alpha}(t),t)}\right| \leq 2\delta R_w,\\ i = 1,...,n,\quad t\in[0,T] \end{array} \end{aligned} $$
(7.85)

for δ ∈ (0, δ 0] and \(\alpha \in (0,\alpha _0^\delta ]\).

It follows from (7.85) and Hypothesis 7.2 that

$$\displaystyle \begin{aligned} \begin{array}{cc} |x^{\delta,\alpha^\delta_0}_i(t) - x^*_i(t)| \leq |x^{\delta,\alpha^\delta_0}_i(t) - y^\delta_i(t)| + |x^*_i(t) - y^\delta_i(t)|\\ \leq 2\alpha^\delta_0\delta R_w + \delta \stackrel{\delta\to 0}{\longrightarrow} 0,\quad i = 1,...,n,\quad t\in[0,T], \end{array} \end{aligned}$$

which means that

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|x^{\delta,\alpha^\delta_0}_i(\cdot) - x^*_i(\cdot)\|{}_{C_{[0,T]}} = 0 \end{aligned} $$
(7.86)

Let us now carry out the following estimate:

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \left|\frac{\dot y_i^\delta(t)}{g_i(x^{\delta,\alpha}(t),t)} - \frac{\dot y_i^\delta(t)}{g_i(y^\delta(t),t)}\right| = \left|\frac{\dot y_i^\delta(t)\big(g_i(y^\delta(t),t) - g_i(x^{\delta,\alpha}(t),t)\big)}{g_i(x^{\delta,\alpha}(t),t)g_i(y^\delta(t),t)}\right|\\ \displaystyle \leq \frac{\overline Y n\omega'2\alpha\delta R_w}{\underline\omega^2},\quad i = 1,...,n,\quad t\in[0,T] \end{array} \end{aligned} $$
(7.87)

for δ ∈ (0, δ 0] and \(\alpha \in (0,\alpha _0^\delta ]\).

It follows from (7.87) and (7.85) that

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle |u^{\delta,\alpha^\delta_0}_i(t) - \frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)}| \leq |u^{\delta,\alpha^\delta_0}_i(t) - \frac{\dot y_i^\delta(t)}{g_i(x^{\delta,\alpha^\delta_0}(t),t)}|\\ \displaystyle + \left|\frac{\dot y_i^\delta(t)}{g_i(x^{\delta,\alpha^\delta_0}(t),t)} - \frac{\dot y_i^\delta(t)}{g_i(y^\delta(t),t)}\right|\\ \displaystyle \leq \delta 2 R_w + \frac{\overline Y n\omega'2\alpha^\delta_0\delta R_w}{\underline\omega^2},\quad i = 1,...,n,\quad t\in[0,T]. \end{array} \end{aligned} $$
(7.88)

Relation (7.88) and Lemma 7.1 imply that

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \|u^{\delta,\alpha^\delta_0}_i(\cdot) - u^*_i(\cdot)\|{}^2_{L_{2,[0,T]}} = \int\limits_0^T(u^{\delta,\alpha^\delta_0}_i(t) - u^*_i(t))^2dt = \int\limits_0^T\Big[\left(u^{\delta,\alpha^\delta_0}_i(t) - \frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)}\right)^2\\ \displaystyle + 2\left(u^{\delta,\alpha^\delta_0}_i(t) - \frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)}\right)\left(\frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)} - u^*_i(t)\right) + \left(u^*_i(t) - \frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)}\right)^2\Big]dt\\ \displaystyle \leq T \left(2\delta R_w + \frac{2\overline Y n\omega'\alpha^\delta_0\delta R_w}{\underline\omega^2}\right)^2 + 2T \left(2\delta R_w + \frac{2\overline Y n\omega'\alpha^\delta_0\delta R_w}{\underline\omega^2}\right) \left(\frac{\overline Y }{\underline\omega} + \overline U\right)\\ \displaystyle + \left\|\frac{\dot y^\delta_i(t)}{g_i(y^\delta(t),t)} - u^*_i(t)\right\|{}^2_{L_{2,[0,T]}} \stackrel{\delta\to 0}{\longrightarrow} 0, \end{array} \end{aligned}$$

which was to be proved. □

Let us now consider, for a fixed δ ∈ (0, δ 0], the cut-off functions

$$\displaystyle \begin{aligned} \hat u_i^\delta(t) = \left\{ \begin{array}{ll} \overline U, \quad u_i^{\delta,\alpha^\delta_0}(t) \geq \overline U,\\ u_i^{\delta,\alpha^\delta_0}(t), \quad |u_i^{\delta,\alpha^\delta_0}(t)| < \overline U,\\ -\overline U, \quad u_i^{\delta,\alpha^\delta_0}(t) \leq -\overline U, \end{array}\right.\quad i = 1,\ldots,n, \end{aligned} $$
(7.89)

where the functions \(u_i^{\delta ,\alpha ^\delta _0}(\cdot ),\ i=1,\ldots ,n\) are defined in (7.35) and \(\alpha ^\delta _0\) is introduced in Theorem 7.1 in (7.82).
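In a numerical implementation, the cut-off (7.89) is simply a componentwise clamping of sampled control values to the admissible interval \([-\overline U,\overline U]\). The following minimal sketch illustrates this; the function name and the array-based representation of the sampled control are assumptions made for illustration only.

```python
import numpy as np

def cutoff(u, u_bar):
    """Cut-off (7.89): clamp each sampled control value to [-u_bar, u_bar].

    u     : array of sampled values of u_i^{delta, alpha_0^delta}(t_k) (any shape)
    u_bar : the control bound from constraints (7.2)
    """
    # np.clip realizes the three branches of (7.89) componentwise.
    return np.clip(u, -u_bar, u_bar)
```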

It follows from Theorem 7.1 that

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \|u^{\delta,\alpha^\delta_0}_i(\cdot) - u^*_i(\cdot)\|{}^2_{L_{2,[0,T]}} = \|(u^{\delta,\alpha^\delta_0}_i(\cdot) - \hat u_i^\delta(\cdot)) + (\hat u_i^\delta(\cdot) - u^*_i(\cdot))\|{}^2_{L_{2,[0,T]}}\\ \displaystyle = \|u^{\delta,\alpha^\delta_0}_i(\cdot) - \hat u_i^\delta(\cdot)\|{}^2_{L_{2,[0,T]}} + \|\hat u_i^\delta(\cdot) - u^*_i(\cdot)\|{}^2_{L_{2,[0,T]}}\\ \displaystyle + 2\int\limits_0^T \big(u^{\delta,\alpha^\delta_0}_i(t) - \hat u_i^\delta(t)\big)\big(\hat u_i^\delta(t) - u^*_i(t)\big) dt \stackrel{\delta\to 0}{\longrightarrow} 0. \end{array} \end{aligned} $$
(7.90)

Combining (7.89) and constraints (7.2) (if \(\hat u_i^\delta(t)=\pm\overline U\), the two factors below have the same sign; otherwise the first factor vanishes), we get

$$\displaystyle \begin{aligned} \big(u^{\delta,\alpha^\delta_0}_i(t) - \hat u_i^\delta(t)\big)\big(\hat u_i^\delta(t) - u^*_i(t)\big) \geq 0,\quad t \in [0,T]. \end{aligned}$$

Since all terms in the last expression in (7.90) are non-negative and their sum tends to zero, we obtain

$$\displaystyle \begin{aligned} \|u^{\delta,\alpha^\delta_0}_i(\cdot) - \hat u_i^\delta(\cdot)\|{}^2_{L_{2,[0,T]}} \stackrel{\delta\to 0}{\longrightarrow} 0, \end{aligned} $$
(7.91)
$$\displaystyle \begin{aligned} \|\hat u_i^\delta(\cdot) - u^*_i(\cdot)\|{}^2_{L_{2,[0,T]}} \stackrel{\delta\to 0}{\longrightarrow} 0. \end{aligned} $$
(7.92)

Let us now prove the following lemma.

Lemma 7.2

The system of differential equations

$$\displaystyle \begin{aligned} \dot x_i(t) = g_i(x(t),t)\hat u_i^\delta(t),\quad x_i(T) = y_i^\delta(T),\quad i=1,\ldots,n,\quad t\in [0,T], \end{aligned} $$
(7.93)

where \(\hat u_i^\delta (\cdot )\) is defined in (7.89) for a fixed δ ∈ (0, δ 0], has a unique solution \(x(\cdot ) \stackrel {not}{=} \hat x^\delta (\cdot ):[0,T]\to R^n\). Moreover,

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|x^*_i(\cdot) - \hat x^\delta_i(\cdot)\|{}_{C_{[0,T]}} = 0,\quad i = 1,\ldots,n. \end{aligned}$$

Proof

Let’s introduce new variables

$$\displaystyle \begin{aligned} \triangle x_i(t) = x_i(t) - x^{\delta,\alpha^\delta_0}_i(t),\quad i=1,\ldots,n, \end{aligned}$$

where \(x^{\delta ,\alpha ^\delta _0}(t)\) is the solution of system (7.19) with boundary conditions (7.20).

System (7.93) in these variables has the form

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \dot{\triangle x}_i(t) = g_i\big(\triangle x(t) + x^{\delta,\alpha^\delta_0}(t),t\big) \hat u^\delta_i(t) - g_i\big(x^{\delta,\alpha^\delta_0}(t),t\big) u^{\delta,\alpha^\delta_0}_i(t),\\ \triangle x_i(T) = 0,\quad i=1,\ldots,n. \end{array} \end{aligned} $$
(7.94)

The right-hand sides of these equations satisfy the estimate

$$\displaystyle \begin{aligned} \begin{array}{cc} \Big|g_i\big(\triangle x(t) + x^{\delta,\alpha^\delta_0}(t),t\big) \hat u^\delta_i(t) - g_i\big(x^{\delta,\alpha^\delta_0}(t),t\big) u^{\delta,\alpha^\delta_0}_i(t) \pm g_i\big(x^{\delta,\alpha^\delta_0}(t),t\big) \hat u^\delta_i(t)\Big|\\ = \Big| \hat u^\delta_i(t)\Big(g_i\big(\triangle x(t) + x^{\delta,\alpha^\delta_0}(t),t\big) - g_i\big(x^{\delta,\alpha^\delta_0}(t),t\big)\Big)\\ + g_i\big(x^{\delta,\alpha^\delta_0}(t),t\big)\big(\hat u^\delta_i(t) - u^{\delta,\alpha^\delta_0}_i(t)\big)\Big|\\ \leq \overline U \sum_{j=1}^n [\omega' |\triangle x_j(t)|] + \overline \omega |\hat u^\delta_i(t) - u^{\delta,\alpha^\delta_0}_i(t)|\\ \leq \overline U \omega' n\|\triangle x(t)\| + \overline \omega |\hat u^\delta_i(t) - u^{\delta,\alpha^\delta_0}_i(t)|. \end{array} \end{aligned} $$
(7.95)

Since estimate (7.95) holds and the function \(|\hat u^\delta _i(\cdot ) - u^{\delta ,\alpha ^\delta _0}_i(\cdot )|\) is continuous, the solution of system (7.94) is unique and can be extended to [0, T] [13]. Thus, the solutions \(\hat x^\delta _i(t) = \triangle x_i(t) + x^{\delta ,\alpha ^\delta _0}_i(t),\ i=1,\ldots ,n\) of system (7.93) can be extended to t ∈ [0, T] as well.

From (7.95) it follows that

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \Big|\|\triangle x(t)\|{}^{\prime}_t\Big| = \left|\frac{\sum_{i=1}^n [\triangle x_i(t)\dot {\triangle x}_i(t)]}{\|\triangle x(t)\|}\right| \leq \frac{\sum_{i=1}^n [\|\triangle x(t)\|\cdot|\dot {\triangle x}_i(t)|]}{\|\triangle x(t)\|}\\ \leq n (\overline U \omega' n\|\triangle x(t)\| + \overline \omega |\hat u^\delta_i(t) - u^{\delta,\alpha^\delta_0}_i(t)|). \end{array} \end{aligned} $$

Hence,

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \|\triangle x(t)\| \leq \|\triangle x(T)\| + \int\limits_t^T n \big(\overline U \omega' n\|\triangle x(\tau)\| + \overline \omega |\hat u^\delta_i(\tau) - u^{\delta,\alpha^\delta_0}_i(\tau)|\big)d\tau\\ \displaystyle \leq \|\triangle x(T)\| + n \overline \omega \int\limits_t^T |\hat u^\delta_i(\tau) - u^{\delta,\alpha^\delta_0}_i(\tau)|d\tau + n^2 \overline U \omega'\int\limits_t^T \|\triangle x(\tau)\|d\tau. \end{array} \end{aligned} $$

Applying the Grönwall–Bellman inequality, we get

$$\displaystyle \begin{aligned} \begin{array}{cc} \displaystyle \|\triangle x(t)\| \leq \Big(\|\triangle x(T)\| + n \overline \omega \int\limits_t^T |\hat u^\delta_i(\tau) - u^{\delta,\alpha^\delta_0}_i(\tau)|d\tau\Big)\exp (n^2 \overline U \omega'T). \end{array} \end{aligned} $$

Here \(\|\triangle x(T)\| \leq \sqrt {n}\delta \stackrel {\delta \to 0}{\longrightarrow } 0\). By (7.91) and the Cauchy–Schwarz inequality, \(\int \limits _t^T |\hat u^\delta _i(\tau ) - u^{\delta ,\alpha ^\delta _0}_i(\tau )|d\tau \stackrel {\delta \to 0}{\longrightarrow } 0,\ t\in [0,T]\). Hence \(\|\triangle x(t)\|\stackrel {\delta \to 0}{\longrightarrow } 0,\ t\in [0,T]\). In other words,

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|x^{\delta,\alpha^\delta_0}_i(\cdot) - \hat x^\delta_i(\cdot)\|{}_{C_{[0,T]}} = 0,\quad i = 1,...,n. \end{aligned} $$

Combining this result with the result (7.34) of Theorem 7.1, we get

$$\displaystyle \begin{aligned} \lim\limits_{\delta\rightarrow 0}\|\hat x^\delta_i(\cdot) - x^*_i(\cdot)\|{}_{C_{[0,T]}} = 0,\quad i = 1,...,n, \end{aligned} $$

which was to be proved. □

Lemma 7.2, definition (7.89), and formula (7.92) mean that the functions (7.89) can be considered as a solution of the inverse problem described in Sect. 7.5.
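Although it is not needed for the proofs above, Lemma 7.2 suggests a straightforward way to obtain \(\hat x^\delta(\cdot)\) numerically: integrate system (7.93) backward in time from the measured terminal state, using the cut-off controls (7.89). The sketch below is only an illustration under the assumption of a diagonal matrix G; the callables g and u_hat and the use of SciPy's solve_ivp are not part of the paper's construction.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reconstruct_trajectory(t_grid, y_T, g, u_hat):
    """Integrate system (7.93) backward from the boundary condition x(T) = y^delta(T).

    t_grid : increasing array of sample times covering [0, T]
    y_T    : measured terminal state y^delta(T), shape (n,)
    g      : callable g(x, t) -> array of diagonal entries g_i(x, t)
    u_hat  : callable u_hat(t) -> array of cut-off controls from (7.89)
    """
    def rhs(t, x):
        # dx_i/dt = g_i(x, t) * u_hat_i(t), componentwise
        return g(x, t) * u_hat(t)

    # Integrating from T down to 0 recovers the trajectory on the whole interval;
    # t_eval must follow the direction of integration (decreasing here).
    sol = solve_ivp(rhs, (t_grid[-1], t_grid[0]), np.asarray(y_T, dtype=float),
                    t_eval=t_grid[::-1], rtol=1e-8, atol=1e-10)
    return sol.t[::-1], sol.y[:, ::-1].T  # times ascending, states of shape (m, n)
```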

7.7 Remarks on the Suggested Method

Note that Hypotheses 7.1 and 7.2, Theorem 7.1, and Lemmas 7.1 and 7.2 guarantee that, in the case of a diagonal matrix G(x, t), the solution of the inverse problem described in Sect. 7.5 can be found as

$$\displaystyle \begin{aligned} \hat u_i^\delta(t) = \left\{ \begin{array}{ll} \overline U, \quad u_i^\delta(t) \geq \overline U,\\ u_i^\delta(t), \quad |u_i^\delta(t)| < \overline U,\\ -\overline U, \quad u_i^\delta(t) \leq -\overline U, \end{array}\right.\quad \text{where}\quad u_i^\delta(\cdot) = \frac{\dot y_i^\delta(\cdot)}{g_i(y^\delta(\cdot),\cdot)},\quad i=1,\ldots,n. \end{aligned} $$
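On a time grid, this diagonal-case formula reduces to a componentwise division followed by the cut-off (7.89). A minimal sketch is given below; the sampled representation of the measurements and the callable g for the diagonal entries are illustrative assumptions.

```python
import numpy as np

def reconstruct_controls_diagonal(t_grid, y, y_dot, g, u_bar):
    """Diagonal G(x, t): cut off y_dot_i(t_k) / g_i(y(t_k), t_k) as in (7.89).

    t_grid : (m,) sample times
    y      : (m, n) sampled measurements y^delta(t_k)
    y_dot  : (m, n) sampled derivatives of the measurements
    g      : callable g(y_k, t_k) -> (n,) diagonal entries g_i(y, t)
    u_bar  : the control bound from constraints (7.2)
    """
    u = np.empty_like(y, dtype=float)
    for k, tk in enumerate(t_grid):
        u[k] = y_dot[k] / g(y[k], tk)   # u_i^delta(t_k)
    return np.clip(u, -u_bar, u_bar)    # cut-off (7.89)
```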

The case of a non-diagonal, non-degenerate matrix G(x, t) is more interesting. In this case the solution can still be found by inverting the matrix G(y δ(t), t):

$$\displaystyle \begin{aligned} u^\delta(\cdot) = G^{-1}(y^\delta(\cdot),\cdot)\dot y^\delta(\cdot), \end{aligned} $$
(7.96)

but it involves finding the inverse matrix G −1(y δ(t), t) for each t ∈ [0, T].
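In a sampled implementation, the direct approach (7.96) amounts to solving one n × n linear system per grid point, which avoids forming the inverse matrix explicitly. This is only a sketch under the assumption that a callable G returning the full matrix G(y, t) is available.

```python
import numpy as np

def reconstruct_controls_direct(t_grid, y, y_dot, G):
    """Direct approach (7.96): solve G(y^delta(t_k), t_k) u = y_dot^delta(t_k) at each t_k.

    t_grid : (m,) sample times
    y      : (m, n) sampled measurements y^delta(t_k)
    y_dot  : (m, n) sampled derivatives of the measurements
    G      : callable G(y_k, t_k) -> (n, n) matrix G(y, t)
    """
    u = np.empty_like(y, dtype=float)
    for k, tk in enumerate(t_grid):
        # np.linalg.solve avoids computing G^{-1} explicitly.
        u[k] = np.linalg.solve(G(y[k], tk), y_dot[k])
    return u
```

This makes explicit the trade-off discussed in the following paragraphs: one linear solve per sample time, versus the integration of ODE systems in the suggested approach.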

One can modify the algorithm suggested in Sect. 7.6 to solve the inverse problem for the case of a non-diagonal matrix G(y δ(t), t) as well. The justification follows the same scheme of proof, but is more involved due to the more complicated form of system (7.19). It will be published in later works.

Comparing the direct approach (7.96) with the approach suggested in this paper, one can see that the latter reduces the task of inverting the non-constant n × n matrix G(y δ(t), t) to the task of solving systems of non-linear ODEs. In some applications, numerical integration of ODE systems may be preferable to matrix inversion. An accurate comparison of these approaches (including numerical computation issues) is a matter for upcoming studies and will also be published in later works.

7.8 Example

To illustrate the suggested method, let us consider a model of a macroeconomic process that can be described by a differential game with the dynamics

$$\displaystyle \begin{aligned} \begin{array}{ll} \displaystyle \frac{\displaystyle dx_1(t)}{dt} = \frac{\displaystyle \partial G (x_1(t),x_2(t))}{\partial x_1}u_1(t),\\ \displaystyle \frac{\displaystyle dx_2(t)}{dt} = \frac{\displaystyle \partial G (x_1(t),x_2(t))}{\partial x_2}u_2(t). \end{array} \end{aligned} $$
(7.97)

Here t ∈ [0, T], x 1 is the product, x 2 is the production cost, and G(x 1, x 2) is the profit, described as

$$\displaystyle \begin{aligned} G(x_1,x_2) = x_1x_2 (a_0+a_1x_1+a_2x_2), \end{aligned} $$
(7.98)

where a 0 = 0.008, a 1 = 0.00019, a 2 = −0.00046 are parameters of the macroeconomic model [1]. The functions u 1(t), u 2(t) are bounded piecewise continuous controls

$$\displaystyle \begin{aligned} |u_1|\leq \overline U,\quad |u_2|\leq \overline U,\quad \overline U = 200,\quad t\in[0,T]. \end{aligned} $$
(7.99)

The control u 1 represents the scaled coefficient of the rate of production increase, and u 2 represents the scaled coefficient of the rate of change of the production cost.

This model has been suggested by Albrecht [1].
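For reference, the partial derivatives of the profit function (7.98) that enter dynamics (7.97) can be written out explicitly: \(\partial G/\partial x_1 = x_2(a_0 + 2a_1x_1 + a_2x_2)\) and \(\partial G/\partial x_2 = x_1(a_0 + a_1x_1 + 2a_2x_2)\). The short sketch below evaluates them and the right-hand side of (7.97); the function names are chosen for illustration and are not taken from [1].

```python
# Parameters of the macroeconomic model (7.98) from [1].
A0, A1, A2 = 0.008, 0.00019, -0.00046

def profit(x1, x2, a0=A0, a1=A1, a2=A2):
    """Profit function G(x1, x2) from (7.98)."""
    return x1 * x2 * (a0 + a1 * x1 + a2 * x2)

def dG_dx1(x1, x2, a0=A0, a1=A1, a2=A2):
    """Partial derivative of G with respect to x1."""
    return x2 * (a0 + 2.0 * a1 * x1 + a2 * x2)

def dG_dx2(x1, x2, a0=A0, a1=A1, a2=A2):
    """Partial derivative of G with respect to x2."""
    return x1 * (a0 + a1 * x1 + 2.0 * a2 * x2)

def rhs(x, u):
    """Right-hand side of dynamics (7.97) for state x = (x1, x2) and controls u = (u1, u2)."""
    x1, x2 = x
    return [dG_dx1(x1, x2) * u[0], dG_dx2(x1, x2) * u[1]]
```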

We assume that some base trajectory \(x_1^*(t),\ x_2^*(t)\) of system (7.97) has been realized on the time interval t ∈ [0, T] (time is measured in years). This trajectory is supposed to be generated by some admissible controls \(u_1^*(\cdot ),\ u_2^*(\cdot )\). We also assume that inaccurate measurements of \(x_1^*(t),\ x_2^*(t)\) are known: twice continuously differentiable functions \(y_1^\delta (t),\ y_2^\delta (t)\) that satisfy Hypothesis 7.2.

Remark 7.9

To model the measurement functions \(y_1^\delta (t)\) and \(y_2^\delta (t)\), real statistics on the Ural region’s industry during 1970–1985 [1] have been used. These functions satisfy Hypothesis 7.2.

We consider the inverse problem described in Sect. 7.5 for dynamics (7.97)–(7.99) and the functions \(x_1^*(t),\ x_2^*(t)\), \(u_1^*(\cdot ),\ u_2^*(\cdot )\), and \(y_1^\delta (t),\ y_2^\delta (t)\). We assume in our example that the base trajectory and controls are unknown, while the inaccurate measurements \(y_1^\delta (t),\ y_2^\delta (t)\) are known.

The trajectories \(x_1^{\delta ,\alpha }(t),\ x_2^{\delta ,\alpha }(t)\) and the controls \(\hat u_1^{\delta ,\alpha }(t),\ \hat u_2^{\delta ,\alpha }(t)\) generating them were obtained numerically. The results are presented in Figs. 7.1, 7.2, and 7.3. In Figs. 7.1 and 7.2 the time interval is reduced for better scaling.

Fig. 7.1

Graphs of \(x_1^{\delta ,\alpha }(t),\ t\in [1980,1985]\) for various values of the approximation parameters

Fig. 7.2

Graphs of \(u_1^{\delta ,\alpha }(t),\ t\in [1980,1985]\) for various values of the approximation parameters

Fig. 7.3

Graph of the error \(x_1^{\delta ,\alpha }(t)-y^\delta _1(t)\) for \(\alpha = 10^{-5}\), t ∈ [1970, 1985]