1 FORMULATION OF THE PROBLEM

Consider a scleronomic holonomic mechanical system with kinetic energy

$$T = \frac{1}{2}\sum\limits_{i,j = 1}^n {{{a}_{{ij}}}{{{\dot {q}}}_{i}}{{{\dot {q}}}_{j}}} ,\quad n \geqslant 2,$$

where n is the number of degrees of freedom of the system, \({{q}_{i}}\) are the generalized coordinates, \({{\dot {q}}_{i}}\) are the generalized velocities, and \(({{a}_{{ij}}})\) is a positive definite symmetric matrix depending on the coordinates. The equations of motion of the system can be represented in the form of Lagrange equations of the second kind

$$\frac{d}{{dt}}\left( {\frac{{\partial T}}{{\partial {{{\dot {q}}}_{i}}}}} \right) - \frac{{\partial T}}{{\partial {{q}_{i}}}} = {{Q}_{i}},\quad i = 1,...,n,$$
(1)

where t is time and the positional generalized forces \({{Q}_{i}} = {{Q}_{i}}({{q}_{1}},...,{{q}_{n}})\) are stationary. Suppose that the set \(({{q}_{1}},...,{{q}_{n}})\) consists of s position coordinates, followed by \((n - s)\) cyclic coordinates. Taking into account the arbitrariness in the numbering of the generalized coordinates, we distinguish the first generalized coordinate, which is denoted by \(x\). The other coordinates are redenoted as \({{u}_{j}} = {{q}_{{j + 1}}}\), \(j = 1,...,s - 1\), and \({{w}_{k}} = {{q}_{{s + k}}}\), \(k = 1,...,n - s\), where \({\mathbf{u}} = ({{u}_{1}},...,{{u}_{{s - 1}}})\) is the vector of position coordinates and \({\mathbf{w}} = ({{w}_{1}},...,{{w}_{{n - s}}})\) is the vector of cyclic coordinates, so that \(\frac{{\partial T}}{{\partial {{w}_{k}}}} \equiv 0\) and \(\frac{{\partial {{Q}_{i}}}}{{\partial {{w}_{k}}}} = 0\) for \(i = 1,...,n\) and \(k = 1,...,n - s\). The kinetic energy becomes

$$T = \frac{1}{2}\left[ {{{a}_{{11}}}{{{\dot {x}}}^{2}} + 2\dot {x}\left( {\sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}} } } \right)} \right] + T\text{*},$$

where s is the number of position coordinates, including the distinguished coordinate x, and

$$\begin{gathered} T{\kern 1pt} {\text{*}}(x,{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) = \frac{1}{2}\left( {\sum\limits_{j,r = 1}^{s - 1} {{{a}_{{j + 1,r + 1}}}{{{\dot {u}}}_{j}}{{{\dot {u}}}_{r}}} } \right. \\ \left. {\, + 2\sum\limits_{j = 1}^{s - 1} {\sum\limits_{k = 1}^{n - s} {{{a}_{{j + 1,k + s}}}{{{\dot {u}}}_{j}}{{{\dot {w}}}_{k}}} + \sum\limits_{k,r = 1}^{n - s} {{{a}_{{k + s,r + s}}}{{{\dot {w}}}_{k}}{{{\dot {w}}}_{r}}} } } \right). \\ \end{gathered} $$

In system (1), we distinguish the equation for the coordinate x:

$$\frac{d}{{dt}}\left( {{{a}_{{11}}}\dot {x} + \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}} } \right) - \frac{{\partial T}}{{\partial x}} = F(x,{\mathbf{u}}),$$
(2)

where \({\mathbf{u}} = ({{u}_{1}},...,{{u}_{{s - 1}}}) \in {{R}^{{s - 1}}}\) and \(F(x,{\mathbf{u}}) = {{Q}_{1}}(x,{{u}_{1}},...,{{u}_{{s - 1}}})\). The coordinate x is used as the independent variable on an interval of its monotonicity, where \(\dot {x} \ne 0\). Then Eq. (2) is transformed into

$$\dot {x}\frac{d}{{dx}}\left[ {f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\dot {x}} \right] - p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} '){{\dot {x}}^{2}} = F(x,{\mathbf{u}}),$$
(3)

where \({\mathbf{u}}{\kern 1pt} ' = \frac{{d{\mathbf{u}}}}{{dx}}\), \({\mathbf{w}}{\kern 1pt} ' = \frac{{d{\mathbf{w}}}}{{dx}}\), and

$$f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ') = {{a}_{{11}}} + \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}u_{j}^{'}} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}w_{k}^{'}} ,$$
$$\begin{gathered} p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ') \\ \, = \frac{1}{2}\left( {\frac{{\partial {{a}_{{11}}}}}{{\partial x}} + 2\sum\limits_{j = 1}^{s - 1} {\frac{{\partial {{a}_{{1,j + 1}}}}}{{\partial x}}u_{j}^{'} + } \,2\sum\limits_{k = 1}^{n - s} {\frac{{\partial {{a}_{{1,s + k}}}}}{{\partial x}}w_{k}^{'}} } \right) \\ \, + \frac{{\partial T{\kern 1pt} {\text{*}}(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}{{\partial x}}. \\ \end{gathered} $$
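Equation (3) is obtained from Eq. (2) by the chain rule: on an interval where \(\dot {x} \ne 0\) we have \({{\dot {u}}_{j}} = u_{j}^{'}\dot {x}\), \({{\dot {w}}_{k}} = w_{k}^{'}\dot {x}\), and \(\frac{d}{{dt}} = \dot {x}\frac{d}{{dx}}\), so that

$$a_{11}\dot{x} + \sum\limits_{j = 1}^{s - 1} a_{1,j + 1}\dot{u}_{j} + \sum\limits_{k = 1}^{n - s} a_{1,s + k}\dot{w}_{k} = f(x,{\mathbf{u}},{\mathbf{u}}',{\mathbf{w}}')\,\dot{x},\quad \frac{\partial T}{\partial x} = p(x,{\mathbf{u}},{\mathbf{u}}',{\mathbf{w}}')\,\dot{x}^{2},$$

where the second equality uses the fact that \(T{\text{*}}\) is quadratic in the velocities, so that \(T{\text{*}}(x,{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) = {{\dot {x}}^{2}}T{\text{*}}(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\).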

Assume that \({\mathbf{u}} = {\mathbf{u}}(x)\) and \({\mathbf{\dot {w}}} = {\mathbf{\dot {w}}}(x)\) are given vectors with bounded components:

$$\begin{gathered} u_{j}^{m} \leqslant {{u}_{j}}(x) \leqslant u_{j}^{M},\quad j = 1,...,s - 1, \\ \dot {w}_{k}^{m} \leqslant {{{\dot {w}}}_{k}}(x) \leqslant \dot {w}_{k}^{M},\quad k = 1,...,n - s. \\ \end{gathered} $$
(4)

These vector functions are regarded as servo constraints imposed on the system. The generalized forces \({{Q}_{2}},...,{{Q}_{n}}\) should be chosen so as to obtain the indicated vector functions u(x) and \({\mathbf{\dot {w}}}(x)\). Assume that this has been done so that constraints (4) are satisfied and we can write the equation

$$\begin{gathered} \int\limits_{{{x}_{0}}}^x {\lambda \left\{ {F(x,{\mathbf{u}}) + p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} '){{{\dot {x}}}^{2}}} \right.} \\ \, - \left. {\dot {x}\frac{d}{{dx}}\left[ {f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\dot {x}} \right]} \right\}dx = 0, \\ \end{gathered} $$
(5)

where x0 is the initial value of the independent variable x and \(\lambda (x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\) is an arbitrary function. Equation (5) follows directly from Eq. (3). Note that, if \(f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ') \ne 0\), then Eq. (3) admits an integrating factor [1], which we take as \(\lambda \):

$$\begin{gathered} \lambda = f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\exp \left( { - \int\limits_{{{x}_{0}}}^x {\frac{{2p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}{{f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}dx} } \right), \\ f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ') \ne 0. \\ \end{gathered} $$
(6)
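Indeed, with λ given by (6), one has \(\frac{1}{\lambda }\frac{{d\lambda }}{{dx}} = \frac{1}{f}\frac{{df}}{{dx}} - \frac{{2p}}{f}\) along the motion, and a direct differentiation shows that

$$\lambda \left\{ \dot{x}\frac{d}{dx}\left[ f\dot{x} \right] - p\,\dot{x}^{2} \right\} = \frac{d}{dx}\left[ \frac{1}{2}\,\lambda f\,\dot{x}^{2} \right],$$

so the integrand in (5) is an exact derivative.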

Then equality (5) can be transformed into

$$\int\limits_{{{x}_{0}}}^x {\lambda F(x,{\mathbf{u}})dx} - \frac{1}{2}\left. {[{{{\dot {x}}}^{2}}f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')\lambda (x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')]} \right|_{{{{x}_{0}}}}^{x} = 0$$

or

$$\begin{gathered} \int\limits_{{{x}_{0}}}^x {\lambda F(x,{\mathbf{u}})dx} \\ \, = \frac{1}{2}\left. {\left[ {f{\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})\lambda {\kern 1pt} {\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})} \right]} \right|_{{{{x}_{0}}}}^{x}, \\ \end{gathered} $$
(7)

where

$$f{\kern 1pt} {\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) = {{a}_{{11}}}\dot {x} + \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}} ,$$
$$\begin{gathered} \lambda {\kern 1pt} {\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) \\ \, = f{\kern 1pt} {\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})\exp \left( { - \int\limits_{{{x}_{0}}}^x {\frac{{2p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}{{f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}dx} } \right). \\ \end{gathered} $$
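The passage to (7) uses the identities

$$f{\text{*}}(x,\dot{x},{\mathbf{u}},{\mathbf{\dot{u}}},{\mathbf{\dot{w}}}) = f(x,{\mathbf{u}},{\mathbf{u}}',{\mathbf{w}}')\,\dot{x},\quad \lambda{\text{*}}(x,\dot{x},{\mathbf{u}},{\mathbf{\dot{u}}},{\mathbf{\dot{w}}}) = \lambda(x,{\mathbf{u}},{\mathbf{u}}',{\mathbf{w}}')\,\dot{x},$$

which hold wherever \(\dot {x} \ne 0\), so that \(\frac{1}{2}f{\kern 1pt} {\text{*}}\lambda {\kern 1pt} {\text{*}} = \frac{1}{2}\lambda f{{\dot {x}}^{2}}\).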

If \(\dot {x} = \dot {x}(x)\) vanishes, then the integral on the left-hand side of (7) becomes improper, because the values of \({\mathbf{u}}{\kern 1pt} ' = {\mathbf{\dot {u}}}{\text{/}}\dot {x}\) and \({\mathbf{w}}{\kern 1pt} ' = {\mathbf{\dot {w}}}{\text{/}}\dot {x}\) at \(\dot {x} = 0\) can become infinitely large. Nevertheless, formula (7) shows that this integral exists and takes a finite value. Note that the variables f * and λ* do not involve such a singularity. Additionally,

$$\begin{gathered} f{\kern 1pt} {\text{*}}\lambda {\kern 1pt} {\text{*}} = {{\left( {f{\kern 1pt} {\text{*}}(x,\dot {x},{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})} \right)}^{2}} \\ \, \times \exp \left( { - \int\limits_{{{x}_{0}}}^x {\frac{{2p(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}{{f(x,{\mathbf{u}},{\mathbf{u}}{\kern 1pt} ',{\mathbf{w}}{\kern 1pt} ')}}dx} } \right) \geqslant 0. \\ \end{gathered} $$
(8)

Assume that the force function

$$U(x,{\mathbf{u}}( \cdot )) = \int\limits_{{{x}_{0}}}^x {F(\tau ,{\mathbf{u}}(\tau ))} d\tau $$
(9)

has an isolated maximum with respect to x for \({\mathbf{u}}(\tau ) \equiv 0\) and this maximum remains isolated when \({\mathbf{u}}(\tau )\) changes. Consider the motion in a neighborhood of this maximum. Suppose that the initial conditions are chosen so that the equality \({{\dot {x}}_{0}} = \dot {x}({{x}_{0}}) = 0\) holds when \(x = {{x}_{0}}\). By the oscillation amplitude, we mean \(J = {{x}_{1}} - {{x}_{0}}\), where \({{x}_{1}} > {{x}_{0}}\) is the next value of x at which \({{\dot {x}}_{1}} = \dot {x}({{x}_{1}})\) vanishes. In this case, the argument of the isolated maximum of the force function \(U(x,{\mathbf{u}}( \cdot ))\) belongs to the interval \([{{x}_{0}},{{x}_{1}}]\). In the general case, this argument can vary depending on the chosen vector functions u(x) and \({\mathbf{\dot {w}}}(x)\). At the endpoints of the interval, it must hold that

$${{\dot {x}}_{0}} = {{\dot {x}}_{1}} = 0,$$
(10)

and

$$\begin{gathered} f{\kern 1pt} {\text{*}}({{x}_{0}},{{{\dot {x}}}_{0}},{\mathbf{u}}({{x}_{0}}),{\mathbf{\dot {u}}}({{x}_{0}}),{\mathbf{\dot {w}}}({{x}_{0}})) = \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}({{x}_{0}})} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}({{x}_{0}})} , \\ f{\kern 1pt} {\text{*}}({{x}_{1}},{{{\dot {x}}}_{1}},{\mathbf{u}}({{x}_{1}}),{\mathbf{\dot {u}}}({{x}_{1}}),{\mathbf{\dot {w}}}({{x}_{1}})) = \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}({{x}_{1}})} + \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}({{x}_{1}})} . \\ \end{gathered} $$
(11)

The task is to find piecewise continuous controls u(x) and \({\mathbf{\dot {w}}}(x)\) that maximize (minimize) the functional J.

Now we consider the inverse motion of a pendulum and introduce \(\xi = - x\). Equation (2) becomes

$$\begin{gathered} \frac{d}{{dt}}\left( {{{a}_{{11}}}\dot {\xi } - \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}} - \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}} } \right) \\ \, - \frac{{\partial T}}{{\partial \xi }} = - F( - \xi ,{\mathbf{u}}). \\ \end{gathered} $$
(12)

Equation (3) can be represented in the form

$$\begin{gathered} \dot {\xi }\frac{d}{{d\xi }}[{{f}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})\dot {\xi }] \\ \, - {{p}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'}){{{\dot {\xi }}}^{2}} = - F( - \xi ,{\mathbf{u}}), \\ \end{gathered} $$
(13)

where \({\mathbf{u}}_{\xi }^{'} = \frac{{d{\mathbf{u}}}}{{d\xi }}\), \({\mathbf{w}}_{\xi }^{'} = \frac{{d{\mathbf{w}}}}{{d\xi }}\), and

$${{f}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'}) = {{a}_{{11}}} - \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}u_{{j\xi }}^{'}} - \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}w_{{k\xi }}^{'}} ,$$
$$\begin{gathered} {{p}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'}) \\ \, = \frac{1}{2}\left( {\frac{{\partial {{a}_{{11}}}}}{{\partial \xi }} + 2\sum\limits_{j = 1}^{s - 1} {\frac{{\partial {{a}_{{1,j + 1}}}}}{{\partial \xi }}u_{{j\xi }}^{'} + \,} 2\sum\limits_{k = 1}^{n - s} {\frac{{\partial {{a}_{{1,s + k}}}}}{{\partial \xi }}w_{{k\xi }}^{'}} } \right) \\ \, + \frac{{\partial T{\text{*}}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})}}{{\partial \xi }}. \\ \end{gathered} $$

Therefore, we obtain

$${{\lambda }_{\xi }}\, = \,{{f}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'}){\text{exp}}\left( { - \int\limits_{{{\xi }_{0}}}^\xi {\frac{{2{{p}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})}}{{{{f}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})}}d\xi } } \right),$$
(14)

and formula (7) can be rewritten as

$$\begin{gathered} \int\limits_{{{\xi }_{0}}}^\xi {{{\lambda }_{\xi }}F( - \xi ,{\mathbf{u}})d\xi } \\ \, = - \frac{1}{2}\left. {[f_{\xi }^{*}( - \xi ,\dot {\xi },{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})\lambda _{\xi }^{*}( - \xi ,\dot {\xi },{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}})]} \right|_{{{{\xi }_{0}}}}^{\xi }, \\ \end{gathered} $$
(15)

where

$$f_{\xi }^{*}( - \xi ,\dot {\xi },{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) = {{a}_{{11}}}\dot {\xi } - \sum\limits_{j = 1}^{s - 1} {{{a}_{{1,j + 1}}}{{{\dot {u}}}_{j}}} - \sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}{{{\dot {w}}}_{k}}} ,$$
$$\begin{gathered} \lambda _{\xi }^{*}( - \xi ,\dot {\xi },{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) = f_{\xi }^{*}( - \xi ,\dot {\xi },{\mathbf{u}},{\mathbf{\dot {u}}},{\mathbf{\dot {w}}}) \\ \, \times \exp \left( { - \int\limits_{{{\xi }_{0}}}^\xi {\frac{{2{{p}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})}}{{{{f}_{\xi }}( - \xi ,{\mathbf{u}},{\mathbf{u}}_{\xi }^{'},{\mathbf{w}}_{\xi }^{'})}}d\xi } } \right). \\ \end{gathered} $$

The oscillation amplitude has the form \(J = {{\xi }_{1}} - {{\xi }_{0}}\), where \({{\xi }_{1}} > {{\xi }_{0}}\), \({{\dot {\xi }}_{0}} = \dot {\xi }({{\xi }_{0}}) = 0\), \({{\dot {\xi }}_{1}} = \dot {\xi }({{\xi }_{1}}) = 0\), and \(\dot {\xi }(\xi ) \ne 0\) if \(\xi \in ({{\xi }_{0}},{{\xi }_{1}})\). The task is to find piecewise continuous controls \({\mathbf{u}}(\xi )\) and \({\mathbf{\dot {w}}}(\xi )\) that maximize (minimize) the functional J.

2 OPTIMAL AMPLIFICATION (DAMPING) OF OSCILLATIONS

Suppose that λ is defined by formula (6). Then the following theorems hold.

Theorem 1 (optimal amplification principle). Assume that the motion of a system is described by Eq. (2) and there exist two points x0 and x1 such that \({{x}_{0}} < {{x}_{1}}\) and \({{\dot {x}}_{0}} = {{\dot {x}}_{1}} = 0\). Then the following assertions hold:

(I) Necessary conditions for the optimality of controls \({\mathbf{u}} = {{{\mathbf{u}}}_{M}}(x)\), \({{{\mathbf{\dot {w}}}}_{M}}({{x}_{1}})\) that maximize \({{x}_{1}}\), while being constrained by (4), are the equations

$$\begin{gathered} {{{\mathbf{u}}}_{M}}(x) = \arg \mathop {\max }\limits_{\mathbf{u}} \text{[}\chi (x,{{{\mathbf{u}}}_{M}},{\mathbf{u}}_{M}^{'},{\mathbf{w}}_{M}^{'})F(x,{\mathbf{u}})], \\ {{{{\mathbf{\dot {w}}}}}_{M}}({{x}_{1}}) = \arg \mathop {\min }\limits_{{\mathbf{\dot {w}}}} \left[ {\sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}({{x}_{1}},{{{\mathbf{u}}}_{M}}){{{\dot {w}}}_{k}}} } \right], \\ \end{gathered} $$
(16)

where \(\chi = \lambda (x)\,{\text{sgn[}}\lambda ({{x}_{1}} - 0)]\).

(II) Necessary conditions for the optimality of controls \({\mathbf{u}} = {{{\mathbf{u}}}_{m}}(x)\), \({{{\mathbf{\dot {w}}}}_{m}}({{x}_{0}})\) that minimize \({{x}_{0}}\), while being constrained by (4), are the equations

$$\begin{array}{*{20}{c}} {{{{\mathbf{u}}}_{m}}(x) = \arg \mathop {\min }\limits_{\mathbf{u}} \text{[}\chi (x,{{{\mathbf{u}}}_{m}},{\mathbf{u}}_{m}^{'},{\mathbf{w}}_{m}^{'})F(x,{\mathbf{u}})],} \\ {{{{{\mathbf{\dot {w}}}}}_{m}}({{x}_{0}}) = \arg \mathop {\max }\limits_{{\mathbf{\dot {w}}}} \left[ {\sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}({{x}_{0}},{{{\mathbf{u}}}_{m}}){{{\dot {w}}}_{k}}} } \right],} \end{array}$$
(17)

where \(\chi = \lambda (x)\,{\text{sgn}}[\lambda ({{x}_{0}} + 0)]\).
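In the simplified but common situation where the sign-corrected multiplier χ stays positive and its dependence on u can be neglected (as in the example of Section 3, where \(\lambda = f > 0\)), the first condition in (16) (and, symmetrically, in (17)) reduces to a pointwise maximization of F(x, u) over the box (4), while the second condition places each component of \({\mathbf{\dot {w}}}\) at a bound according to the sign of the corresponding coefficient \({{a}_{{1,s + k}}}\). The following sketch of this pointwise step is an illustrative addition, not part of the original text; F, the coefficients, and the bounds are placeholders.

```python
# Pointwise use of conditions (16) in the simplified case where the multiplier
# chi is positive and its dependence on u is negligible.  Then u_M(x) maximizes
# F(x, u) over the box (4), and w_dot_M(x1) minimizes the linear form
# sum_k a_{1,s+k} * w_dot_k, which is attained component-wise at a bound.
import itertools
import numpy as np

def u_max_pointwise(x, F, u_lo, u_hi, n_grid=41):
    """Brute-force argmax of F(x, u) over the box u_lo <= u <= u_hi."""
    grids = [np.linspace(lo, hi, n_grid) for lo, hi in zip(u_lo, u_hi)]
    return max((np.array(u) for u in itertools.product(*grids)),
               key=lambda u: F(x, u))

def w_dot_at_turning_point(a1sk, w_lo, w_hi):
    """Minimize sum_k a_{1,s+k} * w_dot_k over the box: lower bound where the
    coefficient is positive, upper bound where it is negative."""
    return np.where(np.asarray(a1sk) > 0, w_lo, w_hi)

# Toy usage with a hypothetical pendulum-like force and one position control.
F = lambda x, u: -np.sin(x + u[0])
print(u_max_pointwise(0.3, F, u_lo=[-0.5], u_hi=[0.5]))
print(w_dot_at_turning_point(a1sk=[1.0, -2.0], w_lo=[-1.0, -1.0], w_hi=[1.0, 1.0]))
```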

Theorem 2 (optimal damping principle). Assume that the motion of a system is described by Eq. (2) and there exist two points x0 and x1 such that \({{x}_{0}} < {{x}_{1}}\) and \({{\dot {x}}_{0}} = {{\dot {x}}_{1}} = 0\). Then the following assertions hold:

(I) Necessary conditions for the optimality of controls \({\mathbf{u}} = {{{\mathbf{u}}}_{m}}(x)\), \({{{\mathbf{\dot {w}}}}_{m}}({{x}_{1}})\) that minimize \({{x}_{1}}\), while being constrained by (4), are the equations

$$\begin{array}{*{20}{c}} {{{{\mathbf{u}}}_{m}}(x) = \arg \mathop {\min }\limits_{\mathbf{u}} \text{[}\chi (x,{{{\mathbf{u}}}_{m}},{\mathbf{u}}_{m}^{'},{\mathbf{w}}_{m}^{'})F(x,{\mathbf{u}})],} \\ {{{{{\mathbf{\dot {w}}}}}_{m}}({{x}_{1}}) = \arg \mathop {\max }\limits_{{\mathbf{\dot {w}}}} \left[ {\sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}({{x}_{1}},{{{\mathbf{u}}}_{m}}){{{\dot {w}}}_{k}}} } \right],} \end{array}$$
(18)

where \(\chi = \lambda (x)\,{\text{sgn}}[\lambda {\kern 1pt} {\text{*}}({{x}_{1}} - 0)]\).

(II) Necessary conditions for the optimality of controls \({\mathbf{u}} = {{{\mathbf{u}}}_{M}}(x)\), \({{{\mathbf{\dot {w}}}}_{M}}({{x}_{0}})\) that maximize x0, while being constrained by (4), are the equations

$$\begin{array}{*{20}{c}} {{{{\mathbf{u}}}_{M}}(x) = \arg \mathop {\max }\limits_{\mathbf{u}} \text{[}\chi (x,{{{\mathbf{u}}}_{M}},{\mathbf{u}}_{M}^{'},{\mathbf{w}}_{M}^{'})F(x,{\mathbf{u}})],} \\ {{{{{\mathbf{\dot {w}}}}}_{M}}({{x}_{0}}) = \arg \mathop {\min }\limits_{{\mathbf{\dot {w}}}} \left[ {\sum\limits_{k = 1}^{n - s} {{{a}_{{1,s + k}}}({{x}_{0}},{{{\mathbf{u}}}_{M}}){{{\dot {w}}}_{k}}} } \right]{\text{,}}} \end{array}$$
(19)

where \(\chi = \lambda (x)\,{\text{sgn}}[\lambda \text{*}({{x}_{0}} + 0)]\).

These theorems are proved by applying formulas (7) and (15), taking into account that \({{a}_{{11}}} > 0\) in Eq. (2). Consider, for example, assertion (I) of Theorem 1. According to the general variational principle [2], the multipliers λ and λξ involved in (7) and (15) play an auxiliary role and do not participate in the optimization process; they are used only for an equivalent transformation of the equations of motion. Suppose that the coordinate value \(x = {{x}_{1}}\) corresponds to the maximum deviation of the system from the equilibrium position, but the first equality in (16) is violated at some interior point of the interval [\({{x}_{0}}\), \({{x}_{1}}\)]. Then formula (7) shows that its integrand can be increased at this point by varying the control u, with the other controls held fixed; as a result, the right-hand side of (7) evaluated at \(x = {{x}_{1}}\) increases, the velocity at this point becomes nonzero, and the coordinate value at which \(\dot {x}\) vanishes moves beyond \({{x}_{1}}\). This contradiction proves the necessity of the first relation in (16). The second relation in (16) follows in a similar manner from the fact that the function f * in (7) and (11) depends linearly on the velocities of the system.

Assertion II of Theorem 1 is proved by applying formula (15), followed by interpreting the result in terms of the independent variable x.

Theorem 2 is dual to Theorem 1, so the proof of the former is analogous to the proof of the latter.

A detailed proof of Theorems 1 and 2 by applying the first variation method can be found in [3].

3 EXAMPLE: A DOUBLE PENDULUM OF VARIABLE LENGTH

A double pendulum (Fig. 1) consists of two rods. One rod is attached to the fixed point O. The other rod is hinged to the first one at the distance \(l\) from the center of mass of the first rod. Let \(M\) denote the mass of the first rod, I be the moment of inertia of the first rod about its center of mass, and \(\varphi \) be the deflection angle of the first rod from the downward vertical. The center of mass of the first rod is at the distance \(a(t)\) from the point O. Let \(m\) be the mass of the second rod, \({{I}_{c}}\) be its moment of inertia about its center of mass, b be the distance from the point of suspension of the second rod to its center of mass, \(\psi \) be the angle between the second rod and the extension of the first rod beyond the suspension point of the second rod, and g be the acceleration of gravity. The angle \(\psi \) and the distance a are regarded as controls used to swing the system; they are bounded by admissible values: \({{\psi }_{m}} \leqslant \psi \leqslant {{\psi }_{M}}\), \({{a}_{m}} \leqslant a \leqslant {{a}_{M}}\).

Fig. 1. Double pendulum.

The mechanical system described above can serve as a simplified mathematical model for a person swinging on a swing or for a gymnast swinging on a bar.

To describe the motion of the system, we introduce an absolute coordinate system with the origin \(O\) at the suspension point of the first rod, with the \(O\xi \) axis directed vertically downward, and with the horizontal \(O\eta \) axis directed to the right. Then the coordinates and velocities of the center of mass of the first rod are given by

$$\begin{gathered} {{\xi }_{1}} = a\cos \varphi ,\quad {{\eta }_{1}} = a\sin \varphi , \\ {{{\dot {\xi }}}_{1}} = - a\dot {\varphi }\sin \varphi + \dot {a}\cos \varphi , \\ {{{\dot {\eta }}}_{1}} = a\dot {\varphi }\cos \varphi + \dot {a}\sin \varphi . \\ \end{gathered} $$

The coordinates and velocities of the center of mass of the second rod are written as

$$\begin{gathered} {{\xi }_{2}} = (a + l)\cos \varphi + b\cos (\varphi + \psi ), \\ {{{\dot {\xi }}}_{2}} = - [(a + l)\dot {\varphi }\sin \varphi + b(\dot {\varphi } + \dot {\psi })\sin (\varphi + \psi )] + \dot {a}\cos \varphi , \\ {{\eta }_{2}} = (a + l)\sin \varphi + b\sin (\varphi + \psi ), \\ {{{\dot {\eta }}}_{2}} = (a + l)\dot {\varphi }\cos \varphi + b(\dot {\varphi } + \dot {\psi })\cos (\varphi + \psi ) + \dot {a}\sin \varphi . \\ \end{gathered} $$

The coordinates of the center of mass of the system are found to be

$$\begin{array}{*{20}{c}} {{{\xi }_{c}} = \frac{{aM + (a + l)m}}{{M + m}}\cos \varphi + \frac{{bm}}{{M + m}}\cos (\varphi + \psi ),} \\ {{{\eta }_{c}} = \frac{{aM + (a + l)m}}{{M + m}}\sin \varphi + \frac{{bm}}{{M + m}}\sin (\varphi + \psi ).} \end{array}$$

The angular momentum of the system about the point O has the form

$$K = (I + M{{a}^{2}})\dot {\varphi } + {{I}_{c}}(\dot {\varphi } + \dot {\psi }) + m({{\xi }_{2}}{{\dot {\eta }}_{2}} - {{\eta }_{2}}{{\dot {\xi }}_{2}}).$$

After rearrangements, we obtain

$$\begin{array}{*{20}{c}} {K = [I + M{{a}^{2}} + m(a + l)(a + l + b\cos \psi )]\dot {\varphi }} \\ {\, + \{ {{I}_{c}} + mb[(a + l)\cos \psi + b]\} (\dot {\varphi } + \dot {\psi }) - m\dot {a}b\sin \psi .} \end{array}$$
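This rearrangement can be checked symbolically. The following short SymPy script is an illustrative addition (not part of the original text); it verifies that the direct and the rearranged expressions for K coincide:

```python
# Symbolic check that m*(xi2*deta2/dt - eta2*dxi2/dt), together with the rod
# terms, collects into the rearranged angular momentum K given above.
import sympy as sp

t = sp.symbols('t')
M, m, I, Ic, l, b = sp.symbols('M m I I_c l b', positive=True)
phi, psi, a = (sp.Function(s)(t) for s in ('phi', 'psi', 'a'))

xi2 = (a + l) * sp.cos(phi) + b * sp.cos(phi + psi)
eta2 = (a + l) * sp.sin(phi) + b * sp.sin(phi + psi)

K_direct = ((I + M * a**2) * phi.diff(t) + Ic * (phi.diff(t) + psi.diff(t))
            + m * (xi2 * eta2.diff(t) - eta2 * xi2.diff(t)))

K_rearranged = ((I + M * a**2 + m * (a + l) * (a + l + b * sp.cos(psi))) * phi.diff(t)
                + (Ic + m * b * ((a + l) * sp.cos(psi) + b)) * (phi.diff(t) + psi.diff(t))
                - m * a.diff(t) * b * sp.sin(psi))

print(sp.simplify(K_direct - K_rearranged))   # expected output: 0
```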

The angular momentum equation is written as

$$\begin{gathered} \frac{{d(f\dot {\varphi })}}{{dt}} = F(\varphi ,a,\psi ) \\ \, = - g\{ [a(M + m) + lm]\sin \varphi + bm\sin (\varphi + \psi )\} , \\ \end{gathered} $$
(20)

where

$$\begin{gathered} f = [I + M{{a}^{2}} + m(a + l)(a + l + b\cos \psi )] \\ \, + \{ {{I}_{c}} + mb[(a + l)\cos \psi + b]\} (1 + \psi {\kern 1pt} ') - ma{\kern 1pt} 'b\sin \psi , \\ \end{gathered} $$

and \(\psi {\kern 1pt} '\) and \(a{\kern 1pt} '\) are the derivatives with respect to \(\varphi \). Since Eq. (20) contains no term analogous to \(p{{\dot {\varphi }}^{2}}\) (i.e., \(p \equiv 0\) here), it follows from (6) that \(\lambda = f\).

Assume that \(0 < b < a + l\). Apply Theorem 1(I). Suppose that \({{\varphi }_{0}} < 0\) and \({{\varphi }_{1}} > 0\) are the left and right boundaries, respectively, of the deflection angle \(\varphi \), and let \(\varphi = {{\varphi }_{0}}\) at the initial time. It follows from (16) that the best way of maximizing a positive oscillation half-cycle is given by the rule

$$\begin{gathered} \psi = \left\{ {\begin{array}{*{20}{l}} {{{\psi }_{M}}\quad {\text{ if}}\quad \varphi + {{\psi }_{M}} < - \frac{\pi }{2},{\text{ }}} \\ { - \frac{\pi }{2} - \varphi \quad {\text{if}}\quad \varphi + {{\psi }_{m}} \leqslant - \frac{\pi }{2} \leqslant \varphi + {{\psi }_{M}},} \\ {{{\psi }_{m}}\quad {\text{ if}}\quad \varphi + {{\psi }_{m}} > - \frac{\pi }{2};{\text{ }}} \end{array}} \right. \\ a = \left\{ {\begin{array}{*{20}{c}} {{{a}_{M}}\quad {\text{if}}\quad \varphi < 0,} \\ {{{a}_{m}}\quad {\text{if}}\quad \varphi \geqslant 0.} \end{array}} \right. \\ \end{gathered} $$
(21)

From the formula for f, it follows that, for \(0 < b < a + l,\) the sign of λ remains positive in any of the cases listed in (21).

Apply Theorem 1(II). Now suppose that \(\varphi = {{\varphi }_{1}} > 0\) at the initial time. Then, on the contrary, we need to minimize the value \({{\varphi }_{0}}\) of a negative half-cycle. It follows from (17) that the best regime for minimizing a negative half-cycle is given by the formula

$$\begin{gathered} \psi = \left\{ {\begin{array}{*{20}{c}} {{{\psi }_{m}}{\text{ if}}\quad \varphi + {{\psi }_{m}} > \frac{\pi }{2},{\text{ }}} \\ {\frac{\pi }{2} - \varphi \quad {\text{if}}\quad \varphi + {{\psi }_{m}} \leqslant \frac{\pi }{2} \leqslant \varphi + {{\psi }_{M}},} \\ {{{\psi }_{M}}\quad {\text{if}}\quad \varphi + {{\psi }_{M}} < \frac{\pi }{2};{\text{ }}} \end{array}} \right. \\ a = \left\{ {\begin{array}{*{20}{c}} {{{a}_{m}}\quad {\text{if}}\quad \varphi < 0,} \\ {{{a}_{M}}\quad {\text{if}}\quad \varphi \geqslant 0.} \end{array}} \right. \\ \end{gathered} $$
(22)

Finally, the control synthesis for optimal swinging of the double pendulum is as follows: after attaining the maximum positive deflection, formulas (22) are applied; after attaining the minimum negative deflection, formulas (21) are applied, and so on.
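To illustrate this synthesis, a minimal simulation sketch is given below; it is an illustrative addition, not part of the original text. The servo constraints are imposed directly through rules (21) and (22) as functions of the current angle (with the jump of a at \(\varphi = 0\) smoothed over a small width so that \(a{\kern 1pt} '\) stays bounded), and the single remaining equation of motion (20), with \(K = f\dot {\varphi }\), is integrated in time by a fixed-step Runge–Kutta scheme. All numerical parameter values are assumptions chosen only for illustration; the printed turning angles should grow in magnitude from half-cycle to half-cycle.

```python
# Swing-up of the double pendulum via rules (21)/(22) and Eq. (20):
# dK/dt = F(phi, a, psi),  dphi/dt = K / f,  K = f * dphi/dt.
import math

M, I  = 2.0, 0.4          # first rod: mass and central moment of inertia (assumed)
m, Ic = 1.0, 0.1          # second rod: mass and central moment of inertia (assumed)
l, b, g = 0.5, 0.3, 9.81
a_m, a_M = 0.4, 0.6       # admissible range of a (assumed)
psi_m, psi_M = -0.6, 0.6  # admissible range of psi in rad (assumed)
eps = 0.05                # smoothing width of the switch of a at phi = 0

def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def controls(phi, positive):
    """Rule (21) for a positive half-cycle, rule (22) for a negative one."""
    if positive:
        psi = clamp(-math.pi / 2 - phi, psi_m, psi_M)
        a = a_M + (a_m - a_M) * clamp(phi / eps + 0.5, 0.0, 1.0)
    else:
        psi = clamp(math.pi / 2 - phi, psi_m, psi_M)
        a = a_m + (a_M - a_m) * clamp(phi / eps + 0.5, 0.0, 1.0)
    return psi, a

def dcontrols(phi, positive, h=1e-6):
    """Numerical derivatives psi'(phi), a'(phi) of the control laws."""
    p1, a1 = controls(phi - h, positive)
    p2, a2 = controls(phi + h, positive)
    return (p2 - p1) / (2 * h), (a2 - a1) / (2 * h)

def rhs(state, positive):
    """Right-hand sides dphi/dt = K/f and dK/dt = F, cf. Eq. (20)."""
    phi, K = state
    psi, a = controls(phi, positive)
    dpsi, da = dcontrols(phi, positive)
    f = (I + M * a**2 + m * (a + l) * (a + l + b * math.cos(psi))
         + (Ic + m * b * ((a + l) * math.cos(psi) + b)) * (1.0 + dpsi)
         - m * da * b * math.sin(psi))
    F = -g * ((a * (M + m) + l * m) * math.sin(phi) + b * m * math.sin(phi + psi))
    return K / f, F

def rk4_step(state, positive, dt):
    k1 = rhs(state, positive)
    k2 = rhs((state[0] + dt / 2 * k1[0], state[1] + dt / 2 * k1[1]), positive)
    k3 = rhs((state[0] + dt / 2 * k2[0], state[1] + dt / 2 * k2[1]), positive)
    k4 = rhs((state[0] + dt * k3[0], state[1] + dt * k3[1]), positive)
    return (state[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Start at rest slightly to the left of the lower equilibrium and swing up.
state, dt, positive = (-0.2, 0.0), 1e-3, True
for half_cycle in range(1, 7):
    state = rk4_step(state, positive, dt)     # leave the turning point
    prev_K = state[1]
    while state[1] * prev_K > 0.0:            # until K = f*dphi/dt changes sign
        prev_K = state[1]
        state = rk4_step(state, positive, dt)
    print(f"turning angle after half-cycle {half_cycle}: {state[0]:+.3f} rad")
    state, positive = (state[0], 0.0), not positive   # switch to the other rule
```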

From Theorem 2, we derive a synthesis of control for optimal damping of the double pendulum, namely, after attaining the maximum positive deflection, formulas (21) are applied; after attaining the minimum negative deflection, formulas (22) are applied, and so on.

It follows from Eq. (20) that, for b = 0, the angle \(\psi \) becomes a cyclic coordinate. Then, to ensure amplification, it suffices to use the rules for cyclic coordinates given in Section 2. In the case when \(b > a + l\), the parameter χ can change its sign for \(\left| \psi \right| > \arccos \left[ { - \frac{{a + l}}{b}} \right]\) in the regime \(\varphi + \psi = \pm \frac{\pi }{2}\). To avoid this difficulty, we use the constraints \({{\psi }_{m}} > - \arccos \left[ { - \frac{{a + l}}{b}} \right]\) and \({{\psi }_{M}} < \arccos \left[ { - \frac{{a + l}}{b}} \right]\).

In the example considered above, we applied the theorem on the change in the angular momentum of the system instead of the Lagrange equations of the second kind. The application of this theorem to angular coordinates leads to equations equivalent to the Lagrange ones but containing fewer mutually canceling terms.

CONCLUSIONS

Control algorithms derived from the necessary optimality conditions (16)–(19) have been proposed. These algorithms take into account the times at which the optimization coordinate attains its extreme values and use the direction of the corresponding half-cycle. Optimality conditions (16)–(19) do not contain adjoint variables in the sense of Pontryagin's maximum principle [4], which facilitates their application to the considered class of problems. An additional advantage of the method is that the optimal control law is obtained as a function of the optimization coordinate. By using the proposed optimality conditions, it is possible to obtain analytical solutions for some new nontrivial model problems. As compared with other known methods, the conditions for optimal control of the oscillation amplitude proposed in this paper simplify the solution of the corresponding problems in the multidimensional space of control functions. They are effective for both amplification and suppression of oscillations.