1. INTRODUCTION

The main purpose of adaptive control systems with a reference model is to maintain the desired control performance in the presence of substantial parametric uncertainty of the plant by adjusting the controller parameters. Nowadays, there are two classical approaches to designing such control systems, namely, indirect and direct adaptive control [1, 2]. This paper deals exclusively with the problem of direct adaptive control based on the complete vector of state coordinates.

The algorithm for adjusting the controller parameters in direct adaptive control schemes is usually based on a first-order optimization method (gradient descent and its variations) or on Lyapunov's second method [1, 3, 4]. Most adjustment laws still involve two unresolved problems: first, a persistent excitation of the regressor is necessary for the exponential decay of the parametric error between the adjusted and ideal controller parameters; second, the adaptation law contains a parameter, the adaptation gain, that must be chosen experimentally [4, 5]. The main techniques for solving the first problem can be found in [6]. In the present paper, we focus on the analysis and solution of the second problem.

The very presence of an experimentally selected parameter in the adjustment law causes difficulties: it is not always possible to carry out selection experiments on a real plant, while a gain selected on the basis of a mathematical model is most often unsuitable for the real plant.

To analyze other problems that arise when using a constant value of the gain, we separately consider the cases of presence and absence of a persistent excitation of the regressor.

In the absence of persistent excitation, it is well known [1] that the use of large values of the adaptation-loop gain for a real plant can strongly amplify the high-frequency parasitic components in the plant dynamics and the measurement noise, which is unacceptable from the viewpoint of robust stability [7, 8]. Three types of instability of adaptive control systems caused by a high adjustment law gain and the presence of parasitic high-frequency dynamics in the plant are distinguished in the literature: instability due to fast adaptation, high-frequency instability, and instability due to large values of the controller parameters obtained as a result of adaptation [1, 7, 8, 9]. All these effects necessitate a trade-off in the choice of the gain: its values must be large enough to ensure satisfactory performance of the adaptation process but not so large as to amplify the parasitic dynamics of the plant.

In addition, even if the gain is selected appropriately and parasitic dynamics are absent, an adaptive system provides the desired performance of the adjustment algorithm, and hence of the control, only for a limited number of reference values for the plant under consideration. This is because the adaptive control system, even with a linear plant and a linear reference model, forms a nonlinear closed control loop, for which the superposition principle does not hold [10]. A gain scaling method was proposed in [11, 12] to solve this problem and ensure the desired performance for the widest range of tasks in the system. An approach using a similar idea can be found in [13]. These techniques allow an experimentally selected optimal rate to be scaled to different settings. Their disadvantage is the need for experimental, manual selection both of the initial gain value ensuring the desired convergence rate of the adaptation process and of the scaling factor. Thus, the main disadvantages of using a constant adaptation law gain in the absence of persistent excitation are the possible amplification of the high-frequency dynamics if the gain is too large and the limited number of tasks for which the desired adjustment rate of the controller parameters can be ensured.

In the presence of a persistent excitation, the above-described disadvantages associated with the use of large gain values are supplemented with yet another problem. In [6, 7, 14], the existence of an optimal gain value for the current regressor in the presence of persistent excitation was proved, and it was shown by example that the convergence rate of the adaptation process decreases rather than increases with the gain increasing beyond the optimal value. This means, on the one hand, that the rate of convergence of the adjusted parameters to the ideal values cannot be made arbitrarily large, and on the other hand, that for each new value of the regressor there exists a new optimal value of the gain for the adaptation process.

It follows from the analysis that the use of an experimentally selected constant adaptation gain in the presence as well as absence of persistent excitation leads to serious problems that significantly reduce the likelihood of success in the practical implementation of adaptive control systems.

Thus, the problem of developing a method for adjusting the gain in the adaptation loop is topical in the theory of adaptive systems and especially in their practical application.

Therefore, in this study we propose developing an adaptation loop that includes an algorithm for calculating the gain and hence is free from the above-described disadvantages related to its experimental selection. As a basis for the development of such a loop, we propose using the recursive least squares method with an exponential forgetting factor [1, 8]. The advantages of this method are the presence of a gain adjustment law per se and the exponential decay of the parametric error with an adjustable convergence rate under the persistent excitation condition [1]. This approach is widely known in identification theory; however, the authors have not been able to find its efficient applications in direct adaptive control schemes.

In the present paper, when developing a parameter adaptation loop for the controller with a variable adjustment rate, we restrict ourselves to the case of persistent excitation of the regressor.

2. STATEMENT OF THE PROBLEM

We consider the adaptive control problem for a class of linear plants that can be written in the space of state coordinates in the Frobenius form

$$ \dot x = Ax + Bu,$$
(2.1)

where \(x\in R^n \) is the vector of plant state coordinates, \(u\in R \) is the control, \(A \in R^{n\times n} \) is the Frobenius matrix of system states, and \(B=[0,0,\dots ,b]^\mathrm {T}\in R^{n\times 1}\) is the gain matrix. The values of \(A \) and \(B \) are unknown, but the pair \((A,B) \) is controllable. It is assumed that the state coordinate vector \(x \) and the vector \(\dot x \) of its first derivatives are available for direct measurement. In practice, one can estimate the derivative of the state coordinate vector, in particular, by the methods in [15, 16]. The reference model determining the desired control performance for the plant (2.1) with unknown parameters is taken in the Frobenius form as well,

$$ {\dot x_\mathrm {ref\,}} = {A_\mathrm {ref\,}}{x_\mathrm {ref\,}} + {B_\mathrm {ref\,}}r,$$
(2.2)

where \(x_\mathrm {ref\,}\in R^n\) is the state coordinate vector of the reference model, \(r\in R\) is a bounded reference input signal, and \(B_\mathrm {ref\,} =[0,0,\dots ,b_\mathrm {ref\,}]^\mathrm {T}\in R^{n \times 1}\). The state matrix \(A_\mathrm {ref\,} \in R^{n\times n}\) of the reference model is a Hurwitz matrix and is written in Frobenius form.

The equation for the errors between the plant equations (2.1) and the reference model equations (2.2) can be found in the form

$$ {\dot e_\mathrm {ref\,}} = {A_\mathrm {ref\,}}{e_\mathrm {ref\,}} + Bu - \left ( {{A_\mathrm {ref\,}} - A} \right )x - {B_\mathrm {ref\,}}r.$$
(2.3)

Since both the plant and the reference model are written in Frobenius form, it follows that the adaptability condition [4] is naturally satisfied,

$$ \mathrm {rank}\left \{ {B,{B_\mathrm {ref\,}}} \right \} = \mathrm {rank}\left \{ {B,A - {A_\mathrm {ref\,}}} \right \} = 1. $$
(2.4)

Assertion.

If the adaptability condition (2.4) is satisfied then so are the relations

$$ \begin {gathered} B{B^\dag } = \left [ {\begin {array}{*{20}{c}} {{Z_{n - 1,n}}}\\[.3em] {\begin {array}{*{20}{c}} 0&0& \ldots &1 \end {array}} \end {array}} \right ]; \\[.3em] B{B^\dag }\left ( {{A_\mathrm {ref\,}} - A} \right ) = {A_\mathrm {ref\,}} - A;\quad B{B^\dag }{B_\mathrm {ref\,}} = {B_\mathrm {ref\,}}, \end {gathered} $$
(2.5)

where \( Z_{n-1,n}\) is the \((n-1)\times n\) zero matrix. The assertion can be verified, for example, by a straightforward substitution of any matrices consistent with the statement of the problem into formulas (2.5).

Condition (2.4) and relations (2.5) permit one to rewrite Eq. (2.3) in the form

$$ {\dot e_\mathrm {ref\,}} = {A_\mathrm {ref\,}}{e_\mathrm {ref\,}} + B\left [ {u - {B^\dag }\left ( {{A_\mathrm {ref\,}} - A} \right )x - {B^\dag }{B_\mathrm {ref\,}}r} \right ],$$
(2.6)

where \(B^\dag \) is the pseudoinverse of the matrix \(B \).

Then the control law delivering the desired control performance for the plant (2.1) can be determined from the error equation (2.6),

$$ \begin {gathered} {u^*} = {B^\dag }\left [ {\left ( {{A_\mathrm {ref\,}} - A} \right )x + {B_\mathrm {ref\,}}r} \right ] = {k_r}{k_x}x + {k_r}r,\\ {k_r}{k_x} = {B^\dag }\left ( {{A_\mathrm {ref\,}} - A} \right );\quad {k_r} = {B^\dag }{B_\mathrm {ref\,}}, \end {gathered} $$
(2.7)

where \(k_x \in R^{1 \times n} \) and \(k_r \in R \) are the ideal control law parameters.
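As a minimal numerical sketch of formulas (2.7), the ideal controller parameters can be computed via the pseudoinverse; the matrices below are those of the second-order example in Section 5.

```python
import numpy as np

# Numerical sketch of formulas (2.7): ideal controller parameters via the
# pseudoinverse, using the second-order matrices from the example section.
A = np.array([[0.0, 1.0], [4.0, 2.0]])
B = np.array([[0.0], [2.0]])
A_ref = np.array([[0.0, 1.0], [-8.0, -4.0]])
B_ref = np.array([[0.0], [8.0]])

B_pinv = np.linalg.pinv(B)                 # B†, a 1×n row vector here

k_r = (B_pinv @ B_ref).item()              # k_r = B† B_ref
k_x = (B_pinv @ (A_ref - A)) / k_r         # from k_r k_x = B† (A_ref − A)

# k_r = 4.0, k_x = [[-1.5, -0.75]]
```

For this pair of matrices the relations (2.5) hold as well, which can be checked by substituting `B @ B_pinv` into them directly.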

For the case of known, say, rated values of the entries of the matrices \(A \) and \(B \), one can calculate the ideal controller for the plant (2.1) by formulas (2.7).

For the case of unknown (quasistationary) parameters of the matrices \(A \) and \(B \), we introduce a control law with the current parameters

$$ u = {\hat k_r}{\hat k_x}x + {\hat k_r}r. $$
(2.8)

From the definition of the parameters \(k_x \) and \(k_r \) in the expression (2.7), one can derive analytical expressions for calculating the matrix \(B \) and the difference \(({A_\mathrm {ref\,}} - A) \),

$$ {A_\mathrm {ref\,}} - A = {B_\mathrm {ref\,}}{k_x}; \quad B = k_r^{ - 1}{B_\mathrm {ref\,}}.$$
(2.9)

Taking into account the expressions (2.9) when substituting the control law (2.8) into the error equation (2.6), we obtain

$$ \begin {aligned} {\dot e}_\mathrm {ref\,} &= A_\mathrm {ref\,}e_\mathrm {ref\,} + B\left [{\hat k}_r{\hat k}_x x + {\hat k}_r r\right ] - \left ( A_\mathrm {ref\,} - A\right )x - B_\mathrm {ref\,}r \\ &= A_\mathrm {ref\,}e_\mathrm {ref\,} + k_r^{-1}B_\mathrm {ref\,}\left [{\hat k}_r{\hat k}_x x + {\hat k}_r r \right ] - B_\mathrm {ref\,}k_x x -B_\mathrm {ref\,}r + \left (B_\mathrm {ref\,}{\hat k}_x x - B_\mathrm {ref\,}{\hat k}_x x\right ) \\ &= A_\mathrm {ref\,}e_\mathrm {ref\,} + B_\mathrm {ref\,} \left [\left ({\hat k}_x - k_x\right ) x + {\hat k}_x x \left (k_r^{-1}{\hat k}_r - I \right ) + r\left (k_r^{-1}{\hat k}_r - I\right )\right ] \\ &= A_\mathrm {ref\,}e_\mathrm {ref\,} + B_\mathrm {ref\,}\left [{\tilde k}_x x + \left (k_r^{-1}{\hat k}_r - I\right )\left ({\hat k}_x x + r\right )\right ] \\ &= A_\mathrm {ref\,}e_\mathrm {ref\,} + B_\mathrm {ref\,}\left [{\tilde k}_x x - {\tilde k}_r^{-1}{\hat k}_r\left ({\hat k}_x x + r\right )\right ]. \end {aligned}$$
(2.10)

Here \({\tilde k_x} = {\hat k_x} - {k_x} \), \(\tilde k_r^{ - 1} = \hat k_r^{ - 1} - k_r^{ - 1} \). In Eq. (2.10), we introduce the generalized parameter error function \( \varepsilon \),

$$ \varepsilon = {B_\mathrm {ref\,}}{{\tilde \theta }^\mathrm {T}}\omega ,\quad \omega = {\left [ {\begin {array}{*{20}{c}} {{x^\mathrm {T}}}&{ - {{\hat k}_r}\left ( {{{\hat k}_x}x + r} \right )} \end {array}} \right ]^\mathrm {T}};\quad {{\tilde \theta }^\mathrm {T}}= \left [ {\begin {array}{*{20}{c}} {{{\tilde k}_x}}&{\tilde k_r^{ - 1}} \end {array}} \right ] = {{\hat \theta }^\mathrm {T}} - {\theta ^\mathrm {T}}. $$
(2.11)

Here \({\hat \theta ^\mathrm {T}} \in {R^{n + 1}} \) are adjustable parameters via which one can calculate (by inverting the estimate \(\hat k_r^{ - 1}\)) the current parameters of the controller (2.8), \({\theta ^\mathrm {T}} \in {R^{n + 1}}\) are the ideal parameters via which one can calculate (by inverting \(k_r^{ - 1} \)) the parameters of the ideal controller (2.7), and \({\tilde \theta ^\mathrm {T}}\in {R^{n + 1}}\) is the difference between \(\hat \theta ^\mathrm {T}\) and \(\theta ^\mathrm {T} \). Then Eq. (2.10), in view of (2.11), can be rewritten in the form

$$ {\dot e_\mathrm {ref\,}} = {A_\mathrm {ref\,}}{e_\mathrm {ref\,}} + {B_\mathrm {ref\,}}{\tilde \theta ^\mathrm {T}}\omega . $$
(2.12)

Based on Eq. (2.12), one can derive an adaptation law for the controller (2.8). Since we can pass from \(\hat \theta \) to the current parameters \(\hat k_x\), \(\hat k_r \) of the controller (2.8), its adaptation law will be understood to be the adjustment law for \(\hat \theta \). The parametrization (2.12) was first proposed in [17] with the aim of constructing an adaptation law that does not require knowledge of the plant gain matrix \(B\).

For system (2.12), we need to construct a law for adjusting the parameters \(\hat \theta \) not requiring an experimental, manual selection of the adjustment law gain and ensuring the exponential decay of the error \(\xi =\big [\,e_\mathrm {ref\,}^\mathrm {T}\enspace {\tilde \theta }^\mathrm {T}\,\big ]^ \mathrm {T}\) under the persistent excitation condition for the regressor \(\omega \).

Definition.

For a bounded signal \(\omega \), the persistent excitation condition is satisfied if for all \(t \geqslant 0\) there exists a \(T > 0 \) and an \(\alpha > 0 \) such that

$$ \displaystyle \int \limits _t^{t + T} {\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )d} \tau \geqslant \alpha I, $$
(2.13)

where \(I\) is the identity matrix and \(\alpha \) is the excitation degree.
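Condition (2.13) can be checked numerically by a Riemann-sum approximation of the integral. The two-component regressor below is a hypothetical illustration, not one taken from the paper.

```python
import numpy as np

# Hedged sketch: estimating the excitation degree α in (2.13) for a
# hypothetical regressor ω(t) = [sin t, sin 2t]^T over one period.
def excitation_degree(omega, t0, T, steps=20000):
    """Smallest eigenvalue of the Riemann-sum approximation of
    the integral of ω(τ) ω(τ)^T over [t0, t0 + T]."""
    taus = np.linspace(t0, t0 + T, steps)
    dt = taus[1] - taus[0]
    M = sum(np.outer(omega(t), omega(t)) for t in taus) * dt
    return float(np.linalg.eigvalsh(M).min())

omega = lambda t: np.array([np.sin(t), np.sin(2.0 * t)])
alpha = excitation_degree(omega, t0=0.0, T=2.0 * np.pi)
# Analytically the integral equals πI here, so α ≈ π.
```

A regressor that settles to a constant direction would instead drive the smallest eigenvalue toward zero as the window moves, violating (2.13).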

3. IDENTIFYING THE IDEAL PARAMETERS OF A CONTROLLER WITH VARIABLE GAIN IN THE ADAPTATION LOOP

To achieve our goal, we start by constructing an estimation law for \(\hat \theta \) that only ensures the exponential decay of the error \(\tilde \theta \) rather than of the entire vector \(\xi \) and does not involve manual gain selection.

To this end, we introduce the notion of desired behavior for the error equation (2.12), which we specify by the differential equation

$$ {\dot e_d} = {A_\mathrm {ref\,}}{e_\mathrm {ref\,}}. $$
(3.1)

Then the generalized parametric error (2.11) can be calculated via the difference between the error equation (2.12) and its desired behavior (3.1),

$$ \varepsilon = {\dot e}_\mathrm {ref\,} - {\dot e}_d = B_\mathrm {ref\,}{\tilde \theta } ^\mathrm {T}\omega = B_\mathrm {ref\,}\left ({\hat \theta }^\mathrm {T} - \theta ^\mathrm {T} \right )\omega = B_\mathrm {ref\,}\left ({\hat \theta }^\mathrm {T}\omega - y\right ), $$
(3.2)

where \(y\) is the ideal value of the parametric disturbance in system (2.12).

Equation (3.2) implies that

$$ B_\mathrm {ref\,}^\dag \varepsilon = {\tilde \theta }^\mathrm {T}\omega = \left ({\hat \theta }^\mathrm {T} - \theta ^\mathrm {T}\right )\omega = {\hat \theta }^\mathrm {T}\omega - y. $$
(3.3)

At this stage, following the recursive least squares method, we use the measurements \( y(\tau )\) and \(\omega (\tau ) \), \(0\leqslant \tau <t\), to construct the identification loop \(\hat \theta \left (t \right ) \) for the ideal parameters \(\theta \) at time \(t \). With this separation of the time arguments, we write Eq. (3.3) as

$$ B_\mathrm {ref\,}^\dag \varepsilon = {\hat \theta ^\mathrm {T}}\left ( t \right )\omega \left ( \tau \right ) - y\left ( \tau \right ).$$
(3.4)

In this case, the target criterion for minimizing the expression (3.4) according to the recursive least squares method with exponential forgetting factor is written in the integral form

$$ Q\left ( {\hat \theta } \right ) = \dfrac {1}{2}\displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}{{\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )}^\mathrm {T}}B_\mathrm {ref\,}^\dag \varepsilon } d\tau ,$$
(3.5)

where \(\lambda \) is the exponential forgetting factor.

The condition for the minimum of the criterion (3.5) is that its gradient with respect to the adjusted parameters is zero,

$$ {\nabla _{{{\hat \theta }^\mathrm {T}}}}{Q^\mathrm {T}}\left ( {\hat \theta } \right ) = \displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right )\left [ {{\omega ^\mathrm {T}}\left ( \tau \right )\hat \theta \left ( t \right ) - {y^\mathrm {T}}\left ( \tau \right )} \right ]d} \tau = 0.$$
(3.6)

In the expression (3.6), we use the additivity of the integral, multiply out, and move the term containing the ideal value of the parametric disturbance to the right-hand side of the equation,

$$ \displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){\omega ^\mathrm {T}}\left ( \tau \right )\hat \theta \left ( t \right )d} \tau = \displaystyle \int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){y^\mathrm {T}}\left ( \tau \right )d} \tau .$$
(3.7)

From (3.7), using the least squares method, one can obtain an estimate \(\hat { \theta }\) for the ideal parameters of the controller \(\theta \),

$$ {\hat \theta }\left (t\right ) = \underbrace { \left [ \displaystyle \int \limits _0^t e^{-\lambda \left (t-\tau \right )} \omega \left (\tau \right )\omega ^\mathrm {T} \left (\tau \right )d\tau \right ]^{-1} }_{\Gamma \left (t\right )} \enspace \displaystyle \int \limits _0^t e^{-\lambda \left (t- \tau \right )} \omega \left (\tau \right )y^\mathrm {T}\left (\tau \right )\,d\tau .$$
(3.8)

Here \(\Gamma (t)\) is the gain matrix of the parameter adjustment law for the approximating linear regression.
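To see that the batch estimate (3.8) indeed recovers \(\theta \), a discrete approximation of it can be run on synthetic noiseless data; the regressor, the true parameter vector, and the forgetting factor below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sketch of the batch estimate (3.8) on synthetic data y(τ) = θ^T ω(τ).
theta_true = np.array([2.0, -1.0, 0.5])
lam, dt = 1.0, 1e-3                       # forgetting factor, sample step
taus = np.arange(0.0, 2.0, dt)
t = taus[-1]                              # current time

Gamma_inv = np.zeros((3, 3))              # integral of e^{-λ(t-τ)} ω ω^T
rhs = np.zeros(3)                         # integral of e^{-λ(t-τ)} ω y
for tau in taus:
    w = np.array([np.sin(tau), np.sin(5.0 * tau), 1.0])
    y = theta_true @ w                    # noiseless measurement
    wgt = np.exp(-lam * (t - tau)) * dt   # exponential forgetting weight
    Gamma_inv += wgt * np.outer(w, w)
    rhs += wgt * w * y

theta_hat = np.linalg.solve(Gamma_inv, rhs)   # θ̂(t) = Γ(t) · rhs
# With noiseless data, θ̂ coincides with θ up to numerical precision.
```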

The time variation law of the matrix \(\Gamma ^{-1} (t) \) can be found from the theorem on the derivative of an integral with respect to the upper limit,

$$ \dfrac {{d{\Gamma ^{ - 1}}}}{{dt}} = \omega ( t ){\omega ^\mathrm {T}} ( t ) - \lambda \displaystyle \int \limits _0^{t} {{e^{ - \lambda ( {t - \tau } )}}\omega ( \tau ){\omega ^\mathrm {T}} ( \tau )d} \tau = \omega ( t ){\omega ^\mathrm {T}} ( t ) - \lambda {\Gamma ^{ - 1}} ( t ). $$
(3.9)

At this stage, we introduce the auxiliary equation

$$ \dfrac {{dI}}{{dt}} = \dfrac {d}{{dt}}\left [ {\Gamma \left ( t \right ){\Gamma ^{ - 1}}\left ( t \right )} \right ] = \dfrac {{d\Gamma \left ( t \right )}}{{dt}}{\Gamma ^{ - 1}}\left ( t \right ) + \dfrac {{d{\Gamma ^{ - 1}}\left ( t \right )}}{{dt}}\Gamma \left ( t \right ) = 0. $$
(3.10)

Taking into account the expression (3.10) and the earlier-introduced definitions of the matrices \(\Gamma (t) \) and \(\Gamma ^{-1}(t) \), we obtain the time variation law of the matrix \(\Gamma (t) \),

$$ \dfrac {{d\Gamma \left ( t \right )}}{{dt}} = - \Gamma \left ( t \right )\dfrac {{d{\Gamma ^{ - 1}}\left ( t \right )}}{{dt}}\Gamma \left ( t \right ) = \lambda \Gamma \left ( t \right ) - \Gamma \left ( t \right )\omega \left ( t \right ){\omega ^\mathrm {T}}\left ( t \right )\Gamma \left ( t \right ).$$
(3.11)

We find a formula for estimating the parameters of the ideal control law (2.7) with allowance for the expression (3.11) by differentiating the estimate (3.8) with respect to time,

$$ \begin {aligned} \dfrac {{d\hat \theta \left ( t \right )}}{{dt}} &= \dfrac {{d\Gamma \left ( t \right )}}{{dt}}\int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){y^\mathrm {T}}\left ( \tau \right )d} \tau + \Gamma \left ( t \right )\dfrac {d}{{dt}}\left [\, {\int \limits _0^{t} {{e^{ - \lambda \left ( {t - \tau } \right )}}\omega \left ( \tau \right ){y^\mathrm {T}}\left ( \tau \right )d} \tau } \right ] \\ &= \left ( {\lambda - \Gamma \left ( t \right )\omega \left ( t \right ){\omega ^\mathrm {T}}\left ( t \right )} \right )\hat \theta \left ( t \right ) - \lambda \hat \theta \left ( t \right ) + \Gamma \left ( t \right )\omega \left ( t \right ){y^\mathrm {T}}\left ( t \right ) \\ &= \Gamma \left ( t \right )\omega \left ( t \right )\left [ {{y^\mathrm {T}}\left ( t \right ) - {\omega ^\mathrm {T}}\left ( t \right )\hat \theta \left ( t \right )} \right ]. \end {aligned} $$
(3.12)

In view of (3.4), Eq. (3.12) can be reduced to the form

$$ \dfrac {{d\hat \theta \left ( t \right )}}{{dt}} = - \Gamma \left ( t \right )\omega \left ( t \right ){\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}}.$$
(3.13)

Thus, the identification loop for ideal parameters of the controller (2.7) is described by the evolution law (3.11) for the gain matrix and directly by the adaptation law (3.13),

$$ \begin {aligned} \dot {\hat \theta } &= - \Gamma \omega {\left ( {B_\mathrm {ref\,}^\dag \varepsilon } \right )^\mathrm {T}},\\ \dot \Gamma &= \lambda \Gamma - \Gamma \omega {\omega ^\mathrm {T}}\Gamma . \end {aligned}$$
(3.14)
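A minimal Euler-discretized sketch of the loop (3.14) is given below, applied to a static regression in which, by (3.3), \(B_\mathrm {ref\,}^\dag \varepsilon \) reduces to \(\hat \theta ^\mathrm {T}\omega - y\); the regressor, true parameters, step size, and \(\lambda \) are illustrative choices.

```python
import numpy as np

# Euler-discretized sketch of the estimation loop (3.14). For a static
# regression y = θ^T ω the quantity B_ref† ε reduces to θ̂^T ω − y by (3.3).
theta_true = np.array([1.0, -2.0])
theta_hat = np.zeros(2)
Gamma = 10.0 * np.eye(2)                            # Γ(0), illustrative
lam, dt = 5.0, 1e-4

for k in range(int(5.0 / dt)):                      # 5 s of simulated time
    t = k * dt
    w = np.array([np.sin(t), np.cos(3.0 * t)])      # persistently exciting
    eps = theta_hat @ w - theta_true @ w            # B_ref† ε from (3.3)
    theta_hat = theta_hat - dt * Gamma @ w * eps    # adaptation law (3.13)
    Gamma = Gamma + dt * (lam * Gamma - Gamma @ np.outer(w, w) @ Gamma)  # (3.11)
```

Under persistent excitation, \(\Gamma \) stays bounded and \(\hat \theta \) converges to \(\theta \), consistent with Theorem 1 below.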

Let us state the properties of the estimation loop (3.14) in the form of a theorem.

Theorem 1.

The estimation loop (3.14) for the error \(\tilde \theta \) ensures the following properties:

  1. The error \(\tilde \theta \) is a bounded function, \( \tilde \theta \in {L_2} \cap {L_\infty } \);

  2. If the persistent excitation condition (2.13) holds and the first derivative of the regressor is bounded, \(\dot \omega \in {L_\infty } \), then the error \( \tilde \theta \) decays exponentially at a rate faster than \(\kappa \) (its value is determined in the Appendix).

The proof of Theorem 1 is given in the Appendix.

4. SYNTHESIS OF ADAPTIVE CONTROL WITH A VARIABLE ADJUSTMENT-LOOP GAIN

Earlier in the present paper, we have proved the convergence of the parametric error \(\tilde \theta \) to zero and hence the identification properties of the estimation loop (3.14) for the ideal controller parameters. However, the convergence of the entire vector \(\xi \) to zero and hence the stability of the closed-loop control (2.12) when using the resulting estimates of the parameters have not been considered. Therefore, let us take formulas (3.14) as a basis and modify them so as to ensure the convergence to zero not only of the parameter error \(\tilde \theta \) but also of the tracking error \(e_\mathrm {ref} \). We present the results of this modification as the following theorem.

Theorem 2.

Let the adaptation loop for the closed control loop (2.12) be described by the expressions

$$ \begin {aligned} \dot {\hat \theta } &= - \Gamma \omega {\left [ {B_\mathrm {ref\,}^\dag \varepsilon + B_\mathrm {ref\,}^\mathrm {T}P{e_\mathrm {ref\,}}} \right ]^\mathrm {T}},\\ \dot \Gamma &= \lambda \Gamma - 2\Gamma \omega {\omega ^\mathrm {T}}\Gamma , \end {aligned} $$
(4.1)

where \(P \) is a matrix obtained by solving the Lyapunov equation

$$ A_\mathrm {ref\,}^{T}P + P{A_\mathrm {ref\,}} = - Q, $$

and \(Q \) is a symmetric positive definite matrix to be chosen experimentally.

Then

  1. The error \(\xi \) is a bounded function, \(\xi \in {L_2}\cap {L_\infty }\).

  2. Condition (2.13) ensures the exponential decay of the error \(\xi \) at a rate faster than \(\eta _{\min } \).

  3. Under condition (2.13), the maximum convergence rate \( \eta _{\max }\) of the error \( \xi \) to zero can be made arbitrarily high by increasing the parameter \(\lambda \).

The proof of Theorem 2, as well as the values of \(\eta _{\min }\) and \(\eta _{\max } \), is given in the Appendix.
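The matrix \(P \) in (4.1) can be obtained with a standard Lyapunov solver. Below is a sketch using SciPy for the reference model of Section 5, with \(Q = I\) as an assumed design choice, not a value prescribed by the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Sketch: solving A_ref^T P + P A_ref = −Q for the reference model of the
# example section; Q = I is an assumed (designer's) choice.
A_ref = np.array([[0.0, 1.0], [-8.0, -4.0]])
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so we pass
# a = A_ref^T and q = −Q to get A_ref^T P + P A_ref = −Q.
P = solve_continuous_lyapunov(A_ref.T, -Q)

residual = A_ref.T @ P + P @ A_ref + Q   # ≈ 0 if P solves the equation
# Since A_ref is Hurwitz and Q > 0, the solution P is positive definite.
```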

5. EXAMPLE

The efficiency of our approach was demonstrated by mathematical modeling of the closed-loop system (2.12) when adapting the parameters of the control law (2.8) according to formulas (4.1). Modeling was performed in Matlab/Simulink based on numerical integration by the Euler method. In all the experiments, we used the constant discretization step \(\tau _s = 10^{-6}\) s. The plant was described in the experiments by the equation

$$ \dot x = \left [ {\begin {array}{*{20}{c}} 0&1\\ 4&2 \end {array}} \right ]x + \left [ {\begin {array}{*{20}{c}} 0\\ 2 \end {array}} \right ]u.$$
Fig. 1. Norm of the adaptation process gain matrix.

Fig. 2. Transients of the norms of the parameter error \(\tilde {\theta }\) and the error \(\xi \).

The reference model for the plant was chosen according to the equation

$$ {\dot x_\mathrm {ref\,}} = \left [ {\begin {array}{*{20}{c}} 0&1\\ { - 8}&{ - 4} \end {array}} \right ]{x_\mathrm {ref\,}} + \left [ {\begin {array}{*{20}{c}} 0\\ 8 \end {array}} \right ]r.$$

According to the results presented, e.g., in [7], to ensure the persistent excitation condition for a second-order plant, one should use a harmonic signal with at least two frequencies for the input signal. Therefore, the persistent excitation condition (2.13) for the regressor \(\omega \) was satisfied in the experiments by using the harmonic signal

$$ r = 125\sin \left ( t \right ) + 250\sin \left ( 125t \right ) + 500\sin \left ( 250t \right ).$$

In total, we carried out two experiments. In the first experiment (see Figs. 1 and 2), the initial value of the matrix \(\Gamma (0) \), the initial values of the parameters of the control law (2.8), and the value of the forgetting factor \(\lambda \) were selected as follows:

$$ \Gamma \left ( 0 \right ) = 0.1I;\quad {\hat \theta ^\mathrm {T}}\left ( 0 \right ) = \left [ {\begin {array}{*{20}{c}} 0&0&1 \end {array}} \right ];\quad \lambda = 25. $$

It follows from the modeling results (Figs. 1 and 2) that the proposed adaptation loop (4.1) ensures the exponential decay of the parameter error \(\tilde \theta \) and the error \(\xi \), with a variable adaptation-loop gain used in the adaptation process.

In the second experiment, we used different values of the forgetting factor \(\lambda \),

$$ \lambda _1 = 25;\quad {\lambda _2} = 100;\quad {\lambda _3} = 1000.$$

However, the initial value of the matrix \(\Gamma (0) \) and the initial values of the coefficients of the control law (2.8) were the same as in the first experiment.

Fig. 3. Transients of the norms of the parameter error \(\tilde {\theta }\) and the error \(\xi \) for various \(\lambda \).

It follows from the modeling results (Fig. 3) that, when increasing the forgetting factor, the rate of convergence of the parameter error and the error \(\xi \) increases as well. This confirms the results obtained in the proof of Theorem 2.

It can also be seen from Fig. 3 that significant oscillations arise as \(\lambda \to \infty \); this also confirms the conclusions made in the Remark to Theorem 2. To eliminate this drawback, in further research we plan to modify the developed adaptation loop (4.1) by using methods of extension and filtering of the regressor [6] with the aim of minimizing the value of \(T \) (maximizing the admissible value of \(\lambda \)).

6. CONCLUSIONS

In the present paper, we have proposed an adaptive control system that, under the persistent excitation condition, does not require experimental, manual selection of the adaptation process gain matrix and, at the same time, ensures the exponential decay of the tracking error and the parameter error with an adjustable upper bound for the convergence rate.

Unlike the classical gradient scheme, which has a limited convergence rate for the current regressor [6, 13, 14], in the developed scheme, according to the proof of Theorem 2, the analysis performed, and the experimental results, the upper bound for the convergence rate can be made arbitrarily large by increasing the forgetting factor \(\lambda \).

In further studies, we plan to modify the developed adaptation loop so as to relax the assumptions used (the persistent excitation condition and the availability of the first derivative of the state coordinate vector) and improve its properties (eliminate the oscillations at large \(\lambda \) and ensure the monotone exponential convergence).