INTRODUCTION

The rapid development of science and technology, along with existing models of production technology and technical processes, confronts science with increasingly sophisticated problems. Mathematical analysis and mathematical simulation allow us to study these processes faster and more accurately. To be implemented, today's complex, rapidly evolving, and power-intensive processes must be accompanied by up-to-date automatic control systems. In its early days, automatic control theory dealt with the simplest processes, whose models could be described mathematically by an ordinary differential equation or, at least, by a finite system of ordinary differential equations, i.e., by so-called systems with lumped parameters, especially by relatively well-studied linear plants. These are the subject of many reputable monographs and academic books. The origins and methodological concepts of the system analysis of processes lie in the discipline that deals with decision-making problems, viz., general control theory. Control theory is one of the most rapidly growing domains of modern mathematics; its perspectives and constitutive research directions are described in the works of R. Bellman, L.S. Pontryagin, N.N. Krasovskii, and others. The mathematical theory of optimal processes is a comparatively new direction in mathematics. The fundamental works of academician L.S. Pontryagin and his followers laid the basis for the fast development of both theoretical concepts and their applications to practical problems. In the second half of the 20th century, the theory of controlled processes was one of the rapidly evolving domains of modern mathematics. Along with plant control theory, the theory of decision making in processes evolving under conflict was developed, i.e., for processes controlled by two counteracting parties. Conflict-controlled processes described by differential equations are called differential games.
The theory of differential games develops the ideas and methods of the optimal control theory. Problems of the theory of differential games are of great practical and theoretical interest.

The results obtained by Pontryagin and Mishchenko [1, pp. 37–41] allowed L.S. Pontryagin to create the first and second direct methods for solving the pursuit problem for linear differential games. These methods provide easy-to-check sufficient conditions for the resolvability of the pursuit problem in the class of counterstrategies. In [2, pp. 94–95], a third, intermediate direct method for linear differential games is developed. Azamov [3, 4] established duality results for L.S. Pontryagin's alternating integral.

In the general theory of differential games, pursuit-evasion problems take a special place due to a number of their specific qualities. One is their great variety with respect to both the methods applied and the nature of the results. This quality is revealed even when one considers model examples. Thus, the strategy of parallel pursuit proposed by Petrosyan [5, 6] in the game of simple pursuit with the "life line" triggered the development of the method of resolving functions for solving problems of group pursuit with geometric constraints in the works of Pshenichny et al. [7], Grigorenko [8], Petrov [9], Ukhobotov [10], Ibragimov [11, 12], and others. Chikrii [13], Samatov [14], Belousov [15], Bezmagorychnii [16], and Mamadaliev [17–20] made attempts to construct an analogue of the parallel pursuit strategy for the case of integral constraints and to extend it to more general cases using the method of resolving functions.

When studying real processes with mathematical models, it is of great interest to consider differential games with different-type constraints on the controls of the players. In [21], differential pursuit games with an impulse control and a control with geometric constraints are considered. The method of resolving functions is used to prove theorems with sufficient conditions for the pursuit to be finished in finite time. The ways to find the guaranteed time and control of the pursuer for the pursuit to be completed are given. The results are applied to solve particular pursuit problems.

In [22], conflict game problems are studied from the standpoint of completing the pursuit in finite time. The classes of admissible controls of the players are either all measurable functions satisfying the integral constraint or all impulse functions that can be expressed via the Dirac delta function. Two cases are discussed, depending on the classes of admissible controls from which the players choose their controls. In both cases, sufficient conditions for the pursuit from a given initial point to be completed are given. In [23], linear differential pursuit games with integral constraints on the controls of the players are studied. The condition imposed on the parameters of the game, which is an analogue of L.S. Pontryagin's condition, ensures the advantage of the pursuer over the evader. Important properties of the resolving functions are proven and used to solve the pursuit completion problem. In [24], a linear differential pursuit game with more general integral constraints on the controls of the players is considered.

In [25, pp. 48–53], formalization of impulse optimal control problems for linear systems is proposed and methods to solve them are described. In [13], the method of resolving functions is developed to help resolve conflicts described by a system of differential equations under geometrical constraints on controls of the players. Thereafter, the method of resolving functions is extended to include differential pursuit games with integral constraints and impulse controls of the players.

In [10, 26], the pursuit problem is considered where the movements of the players are described by second-order linear differential equations of the same type, viz., Meshchersky's equations. The instantaneous separation of a finite fuel mass at a speed of constant magnitude is reduced to an impulse control problem. The respective controls of the players and the optimal time of pursuit completion are given.

In [27], the differential pursuit game of many persons with simple nonstationary movements of each player is studied. The method of resolving functions is used to prove the theorem on capture by at least one pursuer using impulse counterstrategies. A similar theorem is proven for the case when the evader uses the impulse strategy. In [28, 29], the problem of trajectories evading from the sparse terminal set is considered.

The method of resolving functions for a pursuit game is based on "attracting" the solid part of the terminal set so that it intersects a certain multivalued mapping associated with the game. Obviously, if the resolving function is scalar, this "attraction" takes place within the respective cone. In [30], a generalization of the method of resolving functions is proposed; viz., a matrix resolving function is used instead of a scalar one, so that the "attraction" is performed along various directions.

In this work, we consider conflict game problems from the standpoint of whether the pursuit can be completed, or evasion achieved, from a given initial point z0. The control of the pursuer is subject to an integral constraint, while the control of the evader is of impulse nature. These impulse actions on the plant are performed at instants given beforehand, and the respective control is represented by the Dirac delta function. We study linear conflicts described by a system of ordinary differential equations whose trajectories have jumps at certain instants. The coordinate origin is taken as the terminal set. To solve the stated problem, we use the method of resolving functions [13, 21]. For any relations between the game parameters, the stated problem is completely solved in the sense that one can confirm capture or noncapture of the evader by the pursuer for an arbitrary initial point. We consider two mutually exclusive cases. In the first case, capture is impossible from any initial point. In the second case, there exists an open ball from whose points capture can be performed, while it is impossible from the points of the complement of the ball (see Section 1). A distinctive feature of this work is that the pursuer uses strategies from a narrower class, viz., stroboscopic ones.

1 STATEMENT OF THE PROBLEM

We consider the controlled plant with the movement

$$\dot {z} = \lambda z + u - {v},$$
(1)

where z, u, \({v}\) ∈ ℝ2, λ ≥ 0. The coordinate origin M = {0} is the terminal set. The impulse instants are τi = i ⋅ Δ, i = 0, 1, …, where Δ is some positive period.

Definition. We call any measurable vector function u(ϑ), 0 ≤ ϑ < ∞, satisfying the condition

$$\int\limits_0^\infty {{\text{|}}u(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant {{\rho }^{2}},$$

an admissible control of the pursuer.

Admissible controls of the evader are specified via the generalized Dirac δ-function [31]

$${v}(t) = \sum\limits_{i = 0}^\infty {{{{v}}_{i}}\delta (t - i\Delta ),} \quad 0 \leqslant t < \infty ,\quad {{{v}}_{i}} \in \sigma S.$$

We recall that the generalized Dirac δ-function is the singular generalized function defined as

$$(\delta ,f) = \int\limits_{ - \infty }^\infty {f(t)\delta (t - a)dt} = f(a),$$

where f(t) is a continuous function on (–∞, ∞) and a is some fixed number.
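Since the impulse terms act as instantaneous jumps of the trajectory, the dynamics of system (1) can be sketched numerically: between the instants τi = iΔ the state obeys ż = λz + u(t), and at each τi the state jumps by −vi. The following is a minimal Euler-integration sketch (the function name, signature, and step size are ours, not from the text):

```python
def simulate(z0, lam, u, vs, delta, T, h=1e-4):
    """Euler sketch of system (1): between the instants tau_i = i*delta the state
    follows zdot = lam*z + u(t); at each tau_i the impulse v_i causes the jump
    z -> z - v_i (this is how the delta-function terms act on the trajectory)."""
    z = [z0[0], z0[1]]
    t, i = 0.0, 0
    while t < T:
        if i * delta <= t < i * delta + h:   # impulse instant tau_i reached
            vx, vy = vs[i] if i < len(vs) else (0.0, 0.0)
            z[0] -= vx
            z[1] -= vy
            i += 1
        ux, uy = u(t)                        # pursuer control at time t
        z[0] += h * (lam * z[0] + ux)
        z[1] += h * (lam * z[1] + uy)
        t += h
    return tuple(z)
```

For λ = 0 and u ≡ 0 the state simply accumulates the jumps −vi, which matches the solution formula used below.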

The problem is, for a given initial point z0, z0 ≠ 0, to give a final answer as to whether capture or evasion is possible.

Completing the pursuit means specifying conditions under which, using the admissible information, one can construct an admissible control of the pursuer for the given initial point z0 and an arbitrary admissible control of the evader such that the respective trajectory z(t), t ≥ 0, of system (1) originating from the initial position z0 arrives at the point 0 in finite time.

2 STATEMENT AND PROOF OF THE PRINCIPAL RESULTS

We consider two mutually exclusive cases [32]:

(1) \(\sqrt \Delta \leqslant \frac{\sigma }{\rho }\),

(2) \(\sqrt \Delta > \frac{\sigma }{\rho }\).

Proposition 1. In case (1), evasion is possible from all points z0, z0 ≠ 0.

Proof. We propose the control of the form

$${v}(t) = \sum\limits_{i = 0}^\infty {{{{v}}_{i}}\delta (t - i\Delta )} ,\quad {{{v}}_{i}} = - \sigma \frac{{{{z}_{0}}}}{{{\text{|}}{{z}_{0}}{\text{|}}}}$$
(2)

to the evader. Suppose the pursuer uses an admissible control u(ϑ), 0 ≤ ϑ < ∞. Then, for the respective solution of Eq. (1) up to the instant t, we have

$$z(t) = {{z}_{0}} + \int\limits_0^t {u(\vartheta )d\vartheta } + \sum\limits_{i = 0}^{n(t)} {\sigma \frac{{{{z}_{0}}}}{{{\text{|}}{{z}_{0}}{\text{|}}}}} ,$$
(3)

where n(t) = \(\left[ {\frac{t}{\Delta }} \right]\) is the integer part of the ratio \(\frac{t}{\Delta }\). Obviously, \(n(t)\Delta \leqslant t < (n(t) + 1)\Delta \). Hence, using the Cauchy–Bunyakovsky inequality

$$\left| {\int\limits_0^t {u(\vartheta )d\vartheta } } \right| \leqslant \sqrt {\int\limits_0^t {{{u}^{2}}(\vartheta )d\vartheta } } \sqrt {\int\limits_0^t {d\vartheta } } \leqslant \rho \sqrt t ,$$

we have

$${\text{|}}z(t){\text{|}} \geqslant \left| {{{z}_{0}} + (n(t) + 1)\sigma \frac{{{{z}_{0}}}}{{{\text{|}}{{z}_{0}}{\text{|}}}}} \right| - \left| {\int\limits_0^t {u(\vartheta )d\vartheta } } \right| \geqslant {\text{|}}{{z}_{0}}{\text{|}} + (n(t) + 1)\sigma - \sqrt t \rho .$$
(4)

However, \(t < (n(t) + 1)\Delta \); therefore, the latter inequality yields

$${\text{|}}z(t){\text{|}}\,\,{\text{ > }}\,\,{\text{|}}{{z}_{0}}{\text{|}} + \sqrt {n(t) + 1} \rho \left( {\sqrt {n(t) + 1} \frac{\sigma }{\rho } - \sqrt \Delta } \right).$$

Since \(\sqrt \Delta \leqslant \frac{\sigma }{\rho } \leqslant \sqrt {n(t) + 1} \frac{\sigma }{\rho }\), we have \({\text{|}}z(t){\text{|}} > {\text{|}}{{z}_{0}}{\text{|}}\). This inequality means that evasion is possible from the point z0 ≠ 0.
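The inequality chain above admits a quick numerical check. By the Cauchy–Bunyakovsky inequality, the pursuer can shift the state by at most ρ√t, and this bound is attained by a constant control; hence, under the evader control (2) and with λ = 0 as in formula (3), the worst-case margin |z(t)| − |z0| is at least (n(t) + 1)σ − ρ√t, which stays positive in case (1). A small sketch (the helper name is ours):

```python
import math

def evasion_margin(delta, sigma, rho, t):
    """Worst-case lower bound on |z(t)| - |z0| under the evader control (2):
    (n(t)+1)*sigma from the impulses pushing away from 0, minus the maximal
    pursuer shift rho*sqrt(t) (the Cauchy-Bunyakovsky bound, tight for
    constant controls aimed against z0)."""
    assert math.sqrt(delta) <= sigma / rho, "case (1) must hold"
    n = int(t // delta)  # n(t), the integer part of t/delta
    return (n + 1) * sigma - rho * math.sqrt(t)
```

Positivity of this margin for every t > 0 is exactly the statement of Proposition 1.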

We consider case (2). We show that Pshenichny's method singles out a set of points of the plane ℝ2 from which one can complete the pursuit, while one cannot do so from the points of the complement of this set.

Suppose n* is a nonnegative integer satisfying the conditions

$$\sqrt {\frac{\Delta }{{n{\text{*}} + 1}}} > \frac{\sigma }{\rho },\quad \sqrt {\frac{\Delta }{{n{\text{*}} + 2}}} \leqslant \frac{\sigma }{\rho }.$$

Case (2) ensures that such a number exists.

We introduce the notation

$$\varphi (k) = \sqrt {(k + 1)\Delta } \rho - (k + 1)\sigma .$$

Suppose \(\tilde {n}\) is a nonnegative integer at which φ attains its maximum:

$$\varphi (\tilde {n}) = \mathop {\max }\limits_{0 \leqslant k \leqslant n^*} \varphi (k).$$
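For concrete parameters, n* and \(\tilde n\) are easy to compute: the defining inequalities for n* are equivalent to n* + 1 < Δρ²/σ² ≤ n* + 2, and \(\tilde n\) is then found by direct enumeration. A sketch (the function name is ours; exact-integer boundary values of Δρ²/σ² may need care in floating point):

```python
import math

def critical_numbers(delta, sigma, rho):
    """Compute n* from sqrt(delta/(n*+1)) > sigma/rho >= sqrt(delta/(n*+2)),
    then n~ maximizing phi(k) = sqrt((k+1)*delta)*rho - (k+1)*sigma, 0 <= k <= n*."""
    assert math.sqrt(delta) > sigma / rho, "case (2) must hold"
    q = delta * (rho / sigma) ** 2          # inequalities read: n*+1 < q <= n*+2
    n_star = max(0, math.ceil(q) - 2)
    phi = lambda k: math.sqrt((k + 1) * delta) * rho - (k + 1) * sigma
    n_tilde = max(range(n_star + 1), key=phi)
    return n_star, n_tilde, phi(n_tilde)
```

For example, Δ = 1, σ = 0.6, ρ = 1 gives n* = 1, \(\tilde n\) = 0, and φ(\(\tilde n\)) = 0.4, which is the radius of the capture ball in Proposition 4.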

Proposition 2. If the norm of the initial point z0 ≠ 0 satisfies the inequality \(\left| {{{z}_{0}}} \right| \geqslant \varphi \left( {\tilde {n}} \right)\), evasion is possible from this point.

Proof. Suppose \({\text{|}}{{z}_{0}}{\text{|}} \geqslant \varphi (\tilde {n})\) and t > 0 is an arbitrary instant. We offer the evader the control of form (2). Suppose the pursuer uses the admissible control u(ϑ), 0 ≤ ϑ ≤ t.

Then, substituting these controls into Eq. (1), its solution at the instant t takes form (3). Hence, we have (4), where we used the Cauchy–Bunyakovsky inequality

$$\left| {\int\limits_0^t {u(\vartheta )d\vartheta } } \right| \leqslant \sqrt t \rho .$$

Starting from here, we distinguish two cases: (a) \(n(t) \leqslant n{\text{*}}\) and (b) \(n(t) \geqslant n{\text{*}} + 1\). In case (a), since \(\varphi (\tilde {n}) \geqslant \varphi (n(t))\), we have \({\text{|}}{{z}_{0}}{\text{|}} \geqslant \sqrt {(n(t) + 1)\Delta } \rho - (n(t) + 1)\sigma \); hence,

$${\text{|}}z(t){\text{|}} \geqslant \sqrt {(n(t) + 1)\Delta } \rho - (n(t) + 1)\sigma + (n(t) + 1)\sigma - \sqrt t \rho = (\sqrt {(n(t) + 1)\Delta } - \sqrt t )\rho .$$

Since \(t < (n(t) + 1)\Delta \), we have the following from the latter inequality:

$${\text{|}}z(t){\text{|}} \geqslant (\sqrt {(n(t) + 1)\Delta } - \sqrt t )\rho > 0.$$

Now, we consider the case (b).

Since \(t < (n(t) + 1)\Delta \), we have

$${\text{|}}z(t){\text{|}} \geqslant {\text{|}}{{z}_{0}}{\text{|}} + (n(t) + 1)\sigma - \sqrt t \rho > {\text{|}}{{z}_{0}}{\text{|}} + \sqrt {n(t) + 1} \rho \left( {\sqrt {n(t) + 1} \frac{\sigma }{\rho } - \sqrt \Delta } \right).$$

Hence, taking into account that \(n(t) + 1 \geqslant n{\text{*}} + 2\), the latter inequality leads to

$${\text{|}}z(t){\text{|}} > {\text{|}}{{z}_{0}}{\text{|}} + \sqrt {n(t) + 1} \rho \left( {\sqrt {n{\text{*}} + 2} \frac{\sigma }{\rho } - \sqrt \Delta } \right).$$

Using the definition of the number n*, we have \(\sqrt \Delta \leqslant \sqrt {n{\text{*}} + 2} \frac{\sigma }{\rho }\); hence,

$${\text{|}}z(t){\text{|}} > {\text{|}}{{z}_{0}}{\text{|}}{\text{.}}$$

Suppose the scalar function f( ⋅ ) and the vector function w( ⋅ ) = \(({{w}_{1}}(\, \cdot \,),{{w}_{2}}(\, \cdot \,),...,{{w}_{m}}(\, \cdot \,))\) satisfying the condition \(\int_{{{t}_{1}}}^{{{t}_{2}}} {{\text{|}}w(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant {{\rho }^{2}}\) are defined, measurable, and square integrable on the segment [t1, t2]. We designate

$$X = \left\{ {x \in {{\mathbb{R}}^{m}}:x = \int\limits_{{{t}_{1}}}^{{{t}_{2}}} {f(\vartheta )w(\vartheta )d\vartheta } ,\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{\text{|}}w(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant {{\rho }^{2}}} \right\},\quad Y = \rho \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } S,$$

where t1, t2, and ρ are arbitrary nonnegative numbers with t1 < t2, and S is the closed unit ball centered at zero. Then, the following proposition holds.

Proposition 3. X = Y.

Proof. First, we show the inclusion X ⊂ Y. Suppose x ∈ X. Then, there exists a function \(\bar {w}(\, \cdot \,)\) such that \(x = \int_{{{t}_{1}}}^{{{t}_{2}}} {f(\vartheta )\bar {w}(\vartheta )d\vartheta } \) and \(\int_{{{t}_{1}}}^{{{t}_{2}}} {{\text{|}}\bar {w}(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant {{\rho }^{2}}\). Hence, using the Cauchy–Bunyakovsky inequality, we have

$${\text{|}}x{\text{|}} = \left| {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {f(\vartheta )\bar {w}(\vartheta )d\vartheta } } \right| \leqslant \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{\text{|}}\bar {w}(\vartheta ){{{\text{|}}}^{2}}d\vartheta } } \leqslant \rho \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } .$$

This inequality means that xY.

We now show the converse inclusion Y ⊂ X. Suppose y ∈ Y. Then, there exists s ∈ S such that

$$y = \rho \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } s.$$

We construct the function \(\tilde {w}(\, \cdot \,)\) as follows:

$$\tilde {w}(\vartheta ) = \rho \frac{{f(\vartheta )}}{{\sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } }}s,$$

then, we have

$$\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{\text{|}}\tilde {w}(\tau ){{{\text{|}}}^{2}}d\tau } \leqslant \int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{\rho }^{2}}\frac{{{{f}^{2}}(\tau )}}{{\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } }}d\tau } = {{\rho }^{2}},$$
$$y = \rho \sqrt {\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } } s = \rho \int\limits_{{{t}_{1}}}^{{{t}_{2}}} {\frac{{{{f}^{2}}(\tau )}}{{\int\limits_{{{t}_{1}}}^{{{t}_{2}}} {{{f}^{2}}(\vartheta )d\vartheta } }}sd\tau } = \int\limits_{{{t}_{1}}}^{{{t}_{2}}} {f(\tau )\tilde {w}(\tau )d\tau } \in X.$$
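The construction in this proof can be checked numerically: the function \(\tilde w\) uses the full energy ρ² and reaches the boundary point \(\rho \sqrt{\smallint f^2}\,s\) of Y. A discretized sketch for m = 2 using the midpoint rule (the function name and discretization are ours):

```python
import math

def extremal_point(f, t1, t2, s, rho, m=20000):
    """Discretized check of the proof of Proposition 3: the control
    w~(t) = rho*f(t)*s/sqrt(int f^2) spends the full energy rho^2 and
    reaches the boundary point rho*sqrt(int f^2)*s of the ball Y."""
    h = (t2 - t1) / m
    ts = [t1 + (i + 0.5) * h for i in range(m)]        # midpoint nodes
    F2 = sum(f(t) ** 2 for t in ts) * h                # int f^2 d-theta
    c = rho / math.sqrt(F2)
    # x = int f(t) * w~(t) dt, computed componentwise
    x = [sum(f(t) * (c * f(t) * si) for t in ts) * h for si in s]
    energy = sum((c * f(t)) ** 2 for t in ts) * h * (s[0] ** 2 + s[1] ** 2)
    return tuple(x), energy, rho * math.sqrt(F2)
```

For f(t) = t on [0, 1], ρ = 2, and s = (1, 0), the reached point has norm ρ√(1/3) and the spent energy is exactly ρ² = 4 (up to discretization error).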

Proposition 4. If \({\text{|}}{{z}_{0}}{\text{|}} < \varphi (\tilde {n})\), capture is possible.

Proof. Suppose \(t^* < (\tilde {n} + 1)\Delta \) is an instant such that the following inequalities hold: \({\text{|}}{{z}_{0}}{\text{|}} \leqslant \sqrt {t{\text{*}}} \rho - (\tilde {n} + 1)\sigma \), \(\frac{{t{\text{*}} - \tilde {n}\Delta }}{{\sqrt {t{\text{*}}} }}\rho - \sigma > 0\). Then, we can choose nonnegative numbers \({{\alpha }_{i}}(t^*,{{z}_{0}})\), i = 0, 1, …, \(\tilde {n}\), such that the following inequalities hold:

$${{\alpha }_{i}}(t^*,{{z}_{0}}) \leqslant \left( {\frac{\Delta }{{\sqrt {t{\text{*}}} }}\rho - \sigma } \right){\text{/|}}{{z}_{{\text{0}}}}|,\quad i = 0,1,...,\tilde {n} - 1,\quad {{\alpha }_{{\tilde {n}}}}(t^*,{{z}_{0}}) \leqslant \left( {\frac{{t{\text{*}} - \tilde {n}\Delta }}{{\sqrt {t{\text{*}}} }}\rho - \sigma } \right){\text{/|}}{{z}_{{\text{0}}}}|,$$
$$\sum\limits_{i = 0}^{\tilde {n}} {{{\alpha }_{i}}(t^*,{{z}_{0}}) = 1.} $$
(5)

Such numbers exist because the right-hand sides of the first group of inequalities sum to \((\sqrt {t{\text{*}}} \rho - (\tilde {n} + 1)\sigma ){\text{/|}}{{z}_{0}}| \geqslant 1\) by the choice of t*.

Suppose the pursuer, on the time segments

$$[0,\Delta ),[\Delta ,2\Delta ),...,[(\tilde {n} - 1)\Delta ,\tilde {n}\Delta ),[\tilde {n}\Delta ,t^*]$$

uses the resources

$$\rho _{0}^{2} = \frac{\Delta }{{t{\text{*}}}}{{\rho }^{2}},\quad \rho _{1}^{2} = \frac{\Delta }{{t{\text{*}}}}{{\rho }^{2}},...,\rho _{{\tilde {n} - 1}}^{2} = \frac{\Delta }{{t{\text{*}}}}{{\rho }^{2}},\rho _{{\tilde {n}}}^{2} = \frac{{t{\text{*}} - \tilde {n}\Delta }}{{t{\text{*}}}}{{\rho }^{2}}$$

respectively. One can easily see that \(\rho _{0}^{2} + \rho _{1}^{2} + ... + \rho _{{\tilde {n} - 1}}^{2} + \rho _{{\tilde {n}}}^{2} = {{\rho }^{2}}\) and

$${\text{|}}{v} - {{\alpha }_{i}}{{z}_{0}}{\text{|}} \leqslant {\text{|}}{v}{\text{|}} + {{\alpha }_{i}}{\text{|}}{{z}_{0}}{\text{|}} \leqslant \sigma + \frac{{\frac{\Delta }{{\sqrt {t{\text{*}}} }}\rho - \sigma }}{{{\text{|}}{{z}_{0}}{\text{|}}}}{\text{|}}{{z}_{0}}{\text{|}} = \frac{\Delta }{{\sqrt {t{\text{*}}} }}\rho ,\quad i = 0,1,...,\tilde {n} - 1.$$

Similarly, we obtain \({\text{|}}{v} - {{\alpha }_{{\tilde {n}}}}{{z}_{0}}{\text{|}} \leqslant \frac{{t{\text{*}} - \tilde {n}\Delta }}{{\sqrt {t{\text{*}}} }}\rho \). All these estimates mean that

$$\begin{gathered} {v} - {{\alpha }_{i}}{{z}_{0}} \in \frac{\Delta }{{\sqrt {t{\text{*}}} }}\rho S,\quad i = 0,1,...,\tilde {n} - 1, \\ {v} - {{\alpha }_{{\tilde {n}}}}{{z}_{0}} \in \frac{{t{\text{*}} - \tilde {n}\Delta }}{{\sqrt {t{\text{*}}} }}\rho S,\quad {v} \in \sigma S. \\ \end{gathered} $$

By Proposition 3, we have

$$\left\{ {x \in {{\mathbb{R}}^{2}}:x = \int\limits_{i\Delta }^{(i + 1)\Delta } {u(\vartheta )d\vartheta } ,\int\limits_{i\Delta }^{(i + 1)\Delta } {{\text{|}}u(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant \rho _{i}^{2}} \right\} = \sqrt \Delta {{\rho }_{i}}S = \frac{\Delta }{{\sqrt {t{\text{*}}} }}\rho S,\quad i = \overline {0,\tilde {n} - 1} ,$$
$$\left\{ {x \in {{\mathbb{R}}^{2}}:x = \int\limits_{\tilde {n}\Delta }^t {u(\vartheta )d\vartheta } ,\int\limits_{\tilde {n}\Delta }^t {{\text{|}}u(\vartheta ){{{\text{|}}}^{2}}d\vartheta } \leqslant \rho _{{\tilde {n}}}^{2}} \right\} = \frac{{t{\text{*}} - \tilde {n}\Delta }}{{\sqrt {t{\text{*}}} }}\rho S.$$

Hence, there exists an admissible control \(u{\text{*}}(\vartheta )\), \(0 \leqslant \vartheta \leqslant t{\text{*}}\), satisfying the conditions

$${{{v}}_{i}} - {{\alpha }_{i}}{{z}_{0}} = \int\limits_{i\Delta }^{(i + 1)\Delta } {u{\text{*}}(\vartheta )d\vartheta } ,\quad i = 0,1,...,\tilde {n} - 1,$$
$${{{v}}_{{\tilde {n}}}} - {{\alpha }_{{\tilde {n}}}}{{z}_{0}} = \int\limits_{\tilde {n}\Delta }^{t^*} {u{\text{*}}(\vartheta )d\vartheta } .$$

To construct the value of the function u*( ⋅ ) on the segment \([i\Delta ,(i + 1)\Delta )\), we use the vectors \({{{v}}_{i}} \in \sigma S\), αiz0, \(i \in \{ 0,1,...,\tilde {n} - 1\} \); to construct the value of the function u*( ⋅ ) on the segment \([\tilde {n}\Delta ,t^*]\), we use the vectors \({{{v}}_{{\tilde {n}}}} \in \sigma S\), \({{\alpha }_{{\tilde {n}}}}{{z}_{0}}\).

Then, we have the following for the solution of Eq. (1):

$$z(t^*) = {{z}_{0}} + \int\limits_0^{t^*} {u{\text{*}}(\vartheta )d\vartheta } - \sum\limits_{i = 0}^{\tilde {n}} {{{{v}}_{i}}} = {{z}_{0}} + \sum\limits_{i = 0}^{\tilde {n} - 1} {\int\limits_{i\Delta }^{(i + 1)\Delta } {u{\text{*}}(\vartheta )d\vartheta } } + \int\limits_{\tilde {n}\Delta }^{t^*} {u{\text{*}}(\vartheta )d\vartheta } - \sum\limits_{i = 0}^{\tilde {n}} {{{{v}}_{i}}} .$$
(6)

By construction of the function u*( ⋅ ) and given (5), we have the following from (6):

$$z(t^*) = {{z}_{0}} + \sum\limits_{i = 0}^{\tilde {n} - 1} {({{{v}}_{i}} - {{\alpha }_{i}}{{z}_{0}}) + {{{v}}_{{\tilde {n}}}} - {{\alpha }_{{\tilde {n}}}}{{z}_{0}}} - \sum\limits_{i = 0}^{\tilde {n}} {{{{v}}_{i}}} = 0.$$
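The whole construction of Proposition 4 can be verified numerically for λ = 0: choose the αi proportionally to their upper bounds in (5) so that they sum to one, take u* piecewise constant on the indicated intervals, and check that z(t*) = 0 with total energy at most ρ². A sketch (all names are ours; t* must satisfy the inequalities of the proof):

```python
import math

def capture_state(z0, delta, sigma, rho, t_star, vs):
    """Sketch of the pursuer construction in Proposition 4 with lambda = 0.
    vs[i] in sigma*S are the evader's impulses. Returns z(t_star) and the
    pursuer's spent energy; expected: z(t_star) ~ 0, energy <= rho**2."""
    q = delta * (rho / sigma) ** 2
    n_star = max(0, math.ceil(q) - 2)                  # definition of n*
    phi = lambda k: math.sqrt((k + 1) * delta) * rho - (k + 1) * sigma
    n_t = max(range(n_star + 1), key=phi)              # n~ maximizing phi
    r0 = math.hypot(*z0)
    assert r0 < phi(n_t), "capture condition |z0| < phi(n~) must hold"
    assert n_t * delta <= t_star < (n_t + 1) * delta
    st = math.sqrt(t_star)
    # upper bounds from (5); their sum is (sqrt(t*)*rho - (n~+1)*sigma)/|z0| >= 1
    caps = [(delta * rho / st - sigma) / r0] * n_t \
         + [((t_star - n_t * delta) * rho / st - sigma) / r0]
    total = sum(caps)
    assert total >= 1.0
    alphas = [c / total for c in caps]                 # alpha_i <= caps[i], sum = 1
    # u* is constant on each interval: (v_i - alpha_i*z0) / interval length
    lengths = [delta] * n_t + [t_star - n_t * delta]
    z = list(z0)
    energy = 0.0
    for i, (L, a) in enumerate(zip(lengths, alphas)):
        ux = (vs[i][0] - a * z0[0]) / L
        uy = (vs[i][1] - a * z0[1]) / L
        energy += L * (ux * ux + uy * uy)
        z[0] += L * ux - vs[i][0]                      # integral of u* minus impulse
        z[1] += L * uy - vs[i][1]
    return tuple(z), energy
```

By construction, the final state equals z0(1 − Σαi) = 0 exactly, mirroring formula (6), while each interval's energy stays within the resource ρi² allotted above.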