1 Introduction

The system depicted in Fig. 1 consists of a flexible robot beam/arm (B), clamped at one end to the center of a disk (D) and free at the other end. We assume that the center of mass of the disk (D) is fixed in an inertial frame and the disk (D) rotates in that frame with a time-varying angular velocity. Hence, the motion of the whole structure is governed by the following equations (see [1] for more details)

$$\begin{aligned} \left\{ \begin{array}{ll} \rho y_{tt} + \mathrm{EI} y_{xxxx} = \rho \omega ^2 (t) y+\alpha \mathcal{U}_\mathrm{I} (t), &{} x \in (0,\ell ), t>0, \\ y(0,t) = y_x (0,t) = y_{xx} (\ell ,t)=0, &{} t>0, \\ \mathrm{EI} y_{xxx} (\ell ,t) = \beta \mathcal{U}_\mathrm{F} (t), &{} t>0, \\ \displaystyle {\dot{\omega }} (t) = \frac{\displaystyle -\gamma \mathcal{U}_\mathrm{T} (t)-2 \, \rho \, \omega (t) \int _0^{\ell } y y_{t} \mathrm{d}x}{\displaystyle I_\mathrm{d} + \rho \int _0^{\ell } y ^2 \mathrm{d}x }, &{} t>0, \\ y(x,0)=y_0(x), \quad y_t(x,0)=y_1(x), &{} x \in (0,\ell ),\\ \omega (0)=\omega _0 \in \mathbb {R}, \end{array} \right. \end{aligned}$$
(1)

in which \({\dot{\omega }}(t)\) stands for the time derivative of the angular velocity \(\omega \). For the sake of clarity, let us recall that \(y_{tt}\) is the acceleration, \(y_{t}\) is the velocity, \(y_{x}\) is the rotation, \(y_{xx}\) is the bending moment and \(y_{xxx}\) is the shear force.

Fig. 1 The disk-beam system

After the pioneering work of Baillieul & Levi [1], a burst of research activity occurred and many papers appeared in the context of stabilization of the system (1) (see, e.g., [2, 5, 6, 11, 24–26, 33] and the references therein). The most recent works are those related to delay systems, in which the author showed that the effect of a delay occurring in the boundary force control can be compensated [7]. In fact, motivated by the presence of time delays in applications for several reasons and from different sources, the author recently considered the system (1) without damping (\(\alpha =0\)) and established an exponential stabilization result despite the presence of a boundary delay term in the force control \(\mathcal{U}_\mathrm{F}(t)\) [7]. More precisely, the following delayed feedback law has been proposed in [7]

$$\begin{aligned}\left\{ \begin{array}{ll} \mathcal{U}_\mathrm{T}(t)=\omega (t)-\varpi , \quad \varpi \in \mathbb {R},\\ \mathcal{U}_\mathrm{F}(t) = y_t(\ell ,t)+\frac{\sigma }{\beta } y_t(\ell ,t-\tau ), \quad \sigma \in \mathbb {R}, \end{array} \right. \end{aligned}$$

and the closed-loop system, under the presence of the delay \(\tau >0\) but without damping (\(\alpha =0\)), is shown to be exponentially stable, provided that \(|\sigma | < \beta \) and \(\varpi \) is small enough. It turns out that this outcome can be extended even to the more complicated case of a time-dependent delay \(\tau (t)\) [8].

In the present paper, we suppose that a positive time-dependent delay \(\tau (t) >0\) occurs in the interior control \(\mathcal{U}_\mathrm{I} (t)\), and the natural question arises: Is it possible to eliminate the effect of such a delay via the action of a boundary control \( \mathcal{U}_\mathrm{F} (t)\) so that the system (1) remains exponentially stable? As the reader may know, time-dependent delays arise naturally in most controlled systems, since the controller needs a certain time to monitor the state of the system and, based on its observations, to make any necessary adjustments. Obviously, such an adjustment can be conducted neither instantaneously nor uniformly, and a fortiori a positive time-dependent delay occurs between any observation and the controller action.

The main contribution of this work is to provide an affirmative answer to the above question and show that the feedback law

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal{U}_\mathrm{I} (t)= y_t(x,t-\tau (t)), \\ \mathcal{U}_\mathrm{F} (t)= y_{t} (\ell ,t)+\dfrac{1}{\beta } e_1^{\top } w (t), \\ \dot{w}(t)=M w(t)+ e_2 y_{t} (\ell ,t),\\ \mathcal{U}_\mathrm{T}(t)=\omega (t)-\varpi , \quad \varpi \in \mathbb {R}, \end{array} \right. \end{aligned}$$
(2)

can exponentially stabilize the system (1), provided that \(\alpha \) is small enough. Here, \(w \in \mathbb {R}^{n}\) is the state vector of the actuator of the dynamic boundary force control \(\mathcal{U}_\mathrm{F} (t)\), M is an \({n \times n}\) constant matrix and \(e_1, \, e_2 \in \mathbb {R}^{n}\) are constant vectors. This outcome extends the result obtained in [9], where the delay has been assumed to be constant. The proof of our result is based on the use of an appropriate Lyapunov function (see [14–18, 31] for variants of the Euler-Bernoulli equation). It is also worth mentioning that the incorporation of a dynamic component in the control \(\mathcal{U}_\mathrm{F} (t)\) (see (2)) provides a wide class of exponentially stabilizing controllers, as there are extra degrees of freedom in designing controllers [27].
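To fix ideas, the dynamic part of the feedback law (2) can be simulated directly; the minimal sketch below integrates the actuator equation \(\dot{w}=Mw+e_2 y_t(\ell ,t)\) and evaluates \(\mathcal{U}_\mathrm{F}\). The matrices M, \(e_1\), \(e_2\), the gain \(\beta \) and the tip-velocity signal are illustrative choices of ours and are not taken from the paper.

```python
# Sketch of the dynamic boundary force control in (2): the actuator state w solves
# w' = M w + e2 * v(t), and the control output is U_F = v(t) + (1/beta) * e1^T w,
# where v(t) stands for the measured tip velocity y_t(l, t).
# M, e1, e2, beta and v(t) are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

M = np.array([[-2.0, 1.0],
              [0.0, -3.0]])            # Hurwitz matrix (hypothesis H.I)
e1 = np.array([1.0, 0.5])
e2 = np.array([0.0, 1.0])
beta = 3.0

def v(t):                               # placeholder for the measured y_t(l, t)
    return np.exp(-0.5*t)*np.cos(4.0*t)

def actuator(t, w):                     # w' = M w + e2 * y_t(l, t)
    return M @ w + e2*v(t)

sol = solve_ivp(actuator, (0.0, 10.0), np.zeros(2), dense_output=True)

for t in np.linspace(0.0, 10.0, 6):
    w = sol.sol(t)
    U_F = v(t) + e1 @ w/beta            # boundary force control U_F(t) in (2)
    print(f"t = {t:5.2f}   U_F = {U_F:+.4f}")
```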

The rest of the paper is organized as follows: In Sect. 2, we set up the problem as a differential equation in an appropriate Hilbert space. Section 3 is devoted to the proof of stability of solutions of an uncoupled linear system. In Sect. 4, the main result of this work, namely the exponential stability of the system, is stated and proved. This finding is illustrated through numerical examples in Sect. 5. Finally, Sect. 6 concludes the paper with a discussion.

2 Assumptions and problem setup

First of all, one may assume, for the sake of simplicity, that \(\mathrm{EI}=\rho =\ell =1\). Indeed, it suffices to change \(y(x,t)\) to \(y(x \ell , \sqrt{\rho \ell ^4/\mathrm{EI}} \, t)\). Next, we shall assume throughout the remainder of this paper that the following conditions are fulfilled (see [28, 29] for heat and wave equations):

There exist constants \(d<1\), \(\tau _0 >0, \) and \( {\bar{\tau }}>0\) such that

$$\begin{aligned}&\tau \in W^{2,\infty }[0,T], \forall T >0; \end{aligned}$$
(3)
$$\begin{aligned}&0 < \tau _0 \le \tau (t) \le {\bar{\tau }}, \, \forall t >0; \end{aligned}$$
(4)
$$\begin{aligned}&\dot{\tau }(t) \le d <1. \end{aligned}$$
(5)
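As a concrete illustration (our own example, not taken from the above references), the delay \(\tau (t)=0.05+0.02 \sin t\) fulfills (3)–(5) with \(\tau _0 =0.03\), \({\bar{\tau }}=0.07\) and \(d=0.02\); the short script below checks these bounds numerically.

```python
# Numerical check of (3)-(5) for the illustrative delay tau(t) = 0.05 + 0.02*sin(t):
# tau is smooth, 0 < tau0 <= tau(t) <= taubar, and tau'(t) <= d < 1.
import numpy as np

t = np.linspace(0.0, 100.0, 200001)
tau = 0.05 + 0.02*np.sin(t)
dtau = 0.02*np.cos(t)                      # exact derivative of tau

tau0, taubar, d = 0.03, 0.07, 0.02
assert tau.min() >= tau0 and tau.max() <= taubar   # condition (4)
assert dtau.max() <= d < 1                         # condition (5)
print("tau in [%.3f, %.3f], max tau' = %.3f" % (tau.min(), tau.max(), dtau.max()))
```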

Furthermore, we suppose, as in [27], that the actuator \(w\) satisfies the following:

H.I: The eigenvalues of the matrix M have negative real parts, and the triplet \((M, e_2,e_1)\) is both observable and controllable.

H.II: The actuator transfer function

$$\begin{aligned} Y(s)=\beta + e_1^{\top } (s I-M)^{-1} e_2 \end{aligned}$$

is strictly positive in the sense that there exists a constant \({\hat{\beta }} >0\) such that \( \beta > {\hat{\beta }}\) and \(\mathfrak {R}\{ Y(i r) \} > {\hat{\beta }}, \) for any \(r \in \mathbb {R}\).
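Hypothesis H.II can be inspected numerically for a given actuator; the sketch below scans \(\mathfrak {R}\{Y(ir)\}\) over a frequency grid for an illustrative triplet \((M,e_2,e_1)\) and gain \(\beta \) of our own choosing (note that \(\mathfrak {R}\{Y(ir)\} \rightarrow \beta \) as \(|r| \rightarrow \infty \), so only finite frequencies need to be examined).

```python
# Frequency sweep of Re Y(ir), Y(s) = beta + e1^T (sI - M)^{-1} e2, for an
# illustrative actuator; beta_hat must be chosen below both beta and the
# computed minimum.
import numpy as np

M = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
e1 = np.array([1.0, 0.5])
e2 = np.array([0.0, 1.0])
beta = 3.0

r = np.linspace(-200.0, 200.0, 40001)
I = np.eye(2)
reY = np.array([(beta + e1 @ np.linalg.solve(1j*w*I - M, e2)).real for w in r])
print("min of Re Y(ir) over the grid:", reY.min())
```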

Remark 1

As mentioned in [27], it follows from the hypotheses H.I-H.II and the Kalman-Yakubovich-Popov Lemma [4] that, given any \({n \times n}\) symmetric positive-definite matrix N, there exist an \({n \times n}\) symmetric positive-definite matrix Q, a constant vector \(p \in \mathbb {R}^{n}\) and a sufficiently small \(\zeta >0\) such that:

$$\begin{aligned}&M^{\top } Q +Q M=-p p^{\top } -\zeta N, \end{aligned}$$
(6)
$$\begin{aligned}&Q e_2 -\frac{e_1}{2}=\sqrt{\beta -{\hat{\beta }}} \, p . \end{aligned}$$
(7)
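In the scalar case \(n=1\), the relations (6)–(7) can even be solved by hand: inserting (7) into (6) yields a quadratic equation in Q. The sketch below carries this out for made-up data (\(M=-2\), \(e_1=e_2=1\), \(\beta =3\), \({\hat{\beta }}=1\), \(N=1\)) and verifies the residuals.

```python
# Scalar (n = 1) illustration of Remark 1 with made-up data.
import numpy as np

m, e1, e2 = -2.0, 1.0, 1.0                     # M = m < 0, so H.I holds trivially
beta, beta_hat, N, zeta = 3.0, 1.0, 1.0, 0.01

# (7) gives p = (Q*e2 - e1/2)/sqrt(beta - beta_hat); substituting into (6),
# i.e. 2*m*Q + p**2 + zeta*N = 0, yields a quadratic a*Q**2 + b*Q + c = 0.
g = beta - beta_hat
a, b, c = e2**2/g, 2*m - e1*e2/g, e1**2/(4*g) + zeta*N
Q = (-b - np.sqrt(b**2 - 4*a*c))/(2*a)         # both roots are positive here
p = (Q*e2 - e1/2)/np.sqrt(g)

print("Q =", Q, " p =", p)
print("residual of (6):", 2*m*Q + p**2 + zeta*N)          # ~ 0
print("residual of (7):", Q*e2 - e1/2 - np.sqrt(g)*p)     # ~ 0
```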

Now, we invoke the standard change in variables [12]

$$\begin{aligned} u(x,\eta ,t)=y_t(x,t-\eta \tau (t)), \;x,\, \eta \in \varOmega = (0,1), \end{aligned}$$

so that the closed-loop system can be brought to the following form

$$\begin{aligned} \left\{ \begin{array}{ll} y_{tt} + y_{xxxx} =\omega ^2 (t) y +\alpha {u(x,1,t)}, \\ y(0,t) = y_x (0,t) = y_{xx} (1,t)=0, \\ y_{xxx} {(1,t)} = \beta y_{t} (1,t) +e_1^{\top } w (t), \\ \dot{w}(t)=M w(t)+ e_2 y_{t} (1,t), \\ \tau (t) u_t (x,\eta ,t) +(1-\dot{\tau }(t) \eta ) u_{\eta } (x,\eta ,t)=0,\\ \displaystyle {\dot{\omega }} (t) = \frac{\displaystyle -\gamma (\omega (t)-\varpi )-2 \, \omega (t) \int _0^{1} y y_{t} \mathrm{d}x}{\displaystyle I_\mathrm{d} + \int _0^{1} y ^2 \mathrm{d}x }, \\ y(x,0)=y_0(x), \; y_t(x,0)=y_1(x),\\ u(x,\eta ,0)=f(x,-\eta \tau (0)),\\ w(0)=w_0 \in \mathbb {R}^n, \; \omega (0)=\omega _0 \in \mathbb {R}. \end{array} \right. \end{aligned}$$
(8)

Let us recall (see [28, 29]) that if \(t<\tau (t)\), then \(y_t(x,t-\tau (t))\) is in the past, and hence, an initial value in the past has to be provided. To do so, we use (5) as well as the mean-value theorem to get \(t-\tau (t) > -\tau (0)\). This justifies the initial data \(y_t(x,t-\tau (0))=f(x,t-\tau (0))\) with \((x,t) \in (0,1) \times (0,\tau (0))\) and so \(u(x,\eta ,0)=f(x,-\eta \tau (0))\).
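As a quick sanity check of this change in variables (a small symbolic computation of ours, not part of the original argument), one can verify that \(u(x,\eta ,t)=y_t(x,t-\eta \tau (t))\) indeed satisfies the transport equation appearing in (8):

```python
# Symbolic check that u(x, eta, t) = y_t(x, t - eta*tau(t)) satisfies
# tau(t) u_t + (1 - tau'(t)*eta) u_eta = 0, i.e. the fifth equation of (8).
import sympy as sp

x, eta, t = sp.symbols('x eta t')
tau = sp.Function('tau')
yt = sp.Function('yt')                     # yt stands for the velocity y_t

u = yt(x, t - eta*tau(t))
transport = tau(t)*sp.diff(u, t) + (1 - sp.diff(tau(t), t)*eta)*sp.diff(u, eta)
print(sp.simplify(transport))              # prints 0
```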

Subsequently, given a real-valued function

$$\begin{aligned} f: \varOmega =(0,1) \rightarrow \mathbb {R}, \end{aligned}$$

let us recall that

$$\begin{aligned} L^2 (\varOmega )= \left\{ f \; \text {is measurable and} \; \int _0^{1} | f (x)|^2 \; \mathrm{d}x < \infty \right\} \end{aligned}$$

is a Hilbert space endowed with its usual norm

$$\begin{aligned} \Vert f \Vert _{\mathrm{L}^2 (\varOmega )}= \left( \int _0^{1} | f (x)|^2 \; \mathrm{d}x \right) ^{1/2}, \end{aligned}$$

whereas the Sobolev space

$$\begin{aligned} H^n (\varOmega )= \left\{ {f }: \varOmega \rightarrow \mathbb {R}; \; {f }^{(n)} \in L^2 (\varOmega ), \text {for} \; n\in \mathbb {N}\right\} \end{aligned}$$

is equipped with the standard norm

$$\begin{aligned} \Vert f \Vert _{\mathrm{H}^n (\varOmega )} = \sum _{i=0}^{i=n} \Vert f^{(i)} \Vert _{\mathrm{L}^2 (\varOmega )}. \end{aligned}$$
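For instance, for \(f(x)=x^2\) on \(\varOmega \) one has \(\Vert f \Vert _{L^2}=1/\sqrt{5}\) and, with the above convention, \(\Vert f \Vert _{H^2}=1/\sqrt{5}+2/\sqrt{3}+2\); the small quadrature script below (an illustration of ours) reproduces these values.

```python
# Quadrature check of the L2 and H2 norms of f(x) = x**2 on (0, 1):
# ||f||_{L2} = 1/sqrt(5) and ||f||_{H2} = 1/sqrt(5) + 2/sqrt(3) + 2
# (sum of the L2 norms of f, f', f'', following the definition above).
from scipy.integrate import quad
from math import sqrt

l2 = lambda g: sqrt(quad(lambda x: g(x)**2, 0.0, 1.0)[0])

f, fp, fpp = (lambda x: x**2), (lambda x: 2*x), (lambda x: 2.0)

print("||f||_L2 =", l2(f))                     # ~ 0.4472
print("||f||_H2 =", l2(f) + l2(fp) + l2(fpp))  # ~ 3.6019
```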

With regard to our system (8), we adopt the following notations: Let

$$\begin{aligned} {\hat{\varOmega }}=(0,1)\times (0,1), \end{aligned}$$

and

$$\begin{aligned} H_c^m=\Bigl \{ g \in H^m ({\varOmega }); g(0)=g_x(0)=0 \Bigr \}, \; m=2,3,\ldots \end{aligned}$$

Moreover, we assume that

$$\begin{aligned} | \varpi | < 3. \end{aligned}$$
(9)

Furthermore, consider the state space \(\mathcal{X}\) defined by

$$\begin{aligned} \mathcal{X} = {H_c ^2 \times L^2(\varOmega ) \times L^2({\hat{\varOmega }} ) \times \mathbb {R}^n}\times \mathbb {R}=\mathcal{Y} \times \mathbb {R}\end{aligned}$$

equipped with the following real inner product

$$\begin{aligned}&\bigl \langle (y,z,u,w,\omega ), (\tilde{y},\tilde{z}, \tilde{u}, \tilde{w}, \tilde{\omega }) \bigr \rangle _{\mathcal{X}} \nonumber \\&\quad = \int _0^{1} \left( y_{xx} \tilde{y}_{xx}- \varpi ^2 y \tilde{y}+ z \tilde{z} \right) \mathrm{d}x \nonumber \\&\qquad + K \iint _{\hat{\varOmega }} u \tilde{u} \, \mathrm{d}x \,\mathrm{d}\eta +2 {\tilde{w}}^{\top } Q w + \omega \tilde{\omega }. \end{aligned}$$
(10)

It is worth mentioning that the condition (9) is imposed in order to assure that the state space \(\mathcal{X}\) with the inner product (10) is a Hilbert space (see [7] for more details).

Thereafter, the system (8) can be written in \(\mathcal{X}\) as follows

$$\begin{aligned} \left\{ \begin{array}{l} \Phi _t (t)= \left[ \left( \begin{array}{ll} \mathcal{A}(t) &{} 0 \\ 0 &{} 0 \end{array} \right) + \mathcal{F} \right] \Phi (t),\\ \Phi (0)=\Phi _0 =(y_0, y_1, f(\cdot ,- \eta \tau (0) ), w_0, \omega _0), \end{array} \right. \end{aligned}$$
(11)

where \(z=y_t\), \(\Phi (t)=(y, z,u,w,\omega )\) and \({\mathcal{A}(t)}\) is a time-dependent linear operator in \({\mathcal{Y}}\) defined by

$$\begin{aligned} \left\{ \begin{array}{l} \mathcal{D}(\mathcal{A}(t)) = \Bigl \{ (y, z,u,w) \in H_c^4 \times H_c^2 \times L^2({\hat{\varOmega }}) \times \mathbb {R}^n;\\ \quad \quad \quad \quad \quad \quad \quad {u_{\eta }} \in L^2({\hat{\varOmega }}), \, z=u(\cdot ,0), \, y_{xx}(1)=0, \\ \quad \quad \quad \quad \quad \quad \quad \, y_{xxx}(1)=\beta z(1) +e_1^{\top } w \Bigr \} \\ \mathcal{A}(t) (y, z,u,w ) = \Bigl ( z, - y_{xxxx}+\varpi ^2 y+\alpha u(\cdot ,1), \\ \qquad \qquad \qquad \quad \quad \quad \quad \dfrac{\eta \dot{\tau }(t)-1}{\tau (t)} u_{\eta }, M w+e_2 u(1,0) \Bigr ), \end{array} \right. \end{aligned}$$
(12)

while \(\mathcal{F}\) is a nonlinear operator given by

$$\begin{aligned} \begin{array}{l} \mathcal{F} (y, z,u,w,\omega ) = \Bigl ( 0, ( \omega ^2 - \varpi ^2 )y, 0, 0, \\ \quad \frac{\textstyle -\gamma \left( \omega - \varpi \right) -2 \, \omega \langle y, z \rangle _{\scriptscriptstyle L^2 (\varOmega )}}{\textstyle I_\mathrm{d} + \Vert y\Vert _{\scriptscriptstyle L^2 (\varOmega )} ^2 } \Bigr ), \end{array} \end{aligned}$$
(13)

for any \((y, z,u,w,\omega ) \; \in \mathcal{X}\).

We end this section by stating the following remark.

Remark 2

We point out that the feedback gain \(\alpha \) of the delayed term is assumed to be nonnegative for the sake of simplicity. Notwithstanding, one could pick \(\alpha \in \mathbb {R}\) and replace \(\alpha \) by \(|\alpha |\) from the very beginning of the article (see [7] for a similar situation).

3 Uncoupled linear system

This section is intended to deal with an uncoupled linear system associated with the global nonlinear system (11), namely, let us consider

$$\begin{aligned} \left\{ \begin{array}{l} {\varphi }_t (t)= \mathcal{A} (t) \varphi (t), \\ \varphi (0)= \varphi _0, \end{array} \right. \end{aligned}$$
(14)

where \(\varphi =(y, z,u,w), \; \varphi _0 =(y_0, y_1 , f(\cdot ,- \eta \tau (0) ), w_0)\) and \(\mathcal{A}(t)\) is the time-dependent linear operator defined in (12).

As pointed out in [28, 29], a general theory of existence and uniqueness of solutions for equations of type (14) is already available in the literature (see, for instance, [10, 1922, 30]). One simple way, among others, to prove existence and uniqueness results is to use the following result:

Theorem 1

Let A(t) be a time-dependent operator on a Hilbert space H such that:

  1. (i)

    For all \(t \in [0, T]\), the operator A(t) generates a strongly continuous semigroup on H and the family

    \(\left\{ A(t): t \in [0, T] \right\} \) is stable with stability constants C and m independent of the time t.

  2. (ii)

    \(\mathcal{D}(A(t))= \mathcal{D}(A(0)),\) for \( t \ge 0.\)

  3. (iii)

    The operator \(\dfrac{\mathrm{d}}{\mathrm{d}t} A(t)\) belongs to

    $$\begin{aligned} L^{\infty } ([0, T], B(\mathcal{D}(A(0)), H)), \end{aligned}$$

    which is the space of equivalent classes of essentially bounded, strongly measurable functions from [0, T] into the set \(B(\mathcal{D}(A(0)), H)\) of bounded operators from \(\mathcal{D}(A(0))\) into H.

Then, the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} v_t (t) = A(t) v(t),\\ v(0)=v_0, \end{array} \right. \end{aligned}$$

has a unique mild solution \(v \in C([0, T],H)\), for each initial data \(v_0\) in H. Moreover, for all \(t \in [0, T]\), there exists a positive constant c(t) such that

$$\begin{aligned} \Vert v(t) \Vert _H \le c(t)\Vert v_0\Vert _H. \end{aligned}$$

In turn, if the initial data \(v_0\) belong to \(\mathcal{D}(A (t))=\mathcal{D}(A(0))\), then the above Cauchy problem has a unique strong solution \(v \in C([0, T],\mathcal{D}(A(0))) \cap C^1 ([0, T],H)\).

Whereupon, our immediate task is to check that the assumptions of Theorem 1 are verified by our operator \(\mathcal{A}(t)\). This will be the objective of the following subsection.

3.1 Well-posedness of the problem (14)

Let us begin by claiming that it is obvious that

$$\begin{aligned} \mathcal{D}(\mathcal{A} (t))= \mathcal{D}(\mathcal{A}(0)), \; t \ge 0. \end{aligned}$$
(15)

Next, one can readily check that the operator \(\mathcal{A} (t)\) is closed and densely defined in \( \mathcal{Y}\) (one has merely to argue as in [13, 28, 29]). Furthermore, we define on

$$\begin{aligned} \mathcal{Y}=H_c ^2 \times L^2(\varOmega ) \times L^2({\hat{\varOmega }} ) \times \mathbb {R}^n \end{aligned}$$

the time-dependent Kato’s inner product

$$\begin{aligned}&\langle (y,z,u,w), (\tilde{y},\tilde{z}, \tilde{u}, \tilde{w} ) \rangle _{t} \nonumber \\&\quad = \int _0^{1} \left( y_{xx} \tilde{y}_{xx}- \varpi ^2 y \tilde{y}+ z \tilde{z} \right) \mathrm{d}x \nonumber \\&\quad + K \tau (t) \iint _{\hat{\varOmega }} u \tilde{u} \, \mathrm{d}x \,\mathrm{d}\eta +2 {\tilde{w}}^{\top } Q w , \end{aligned}$$
(16)

where K is a positive constant. Clearly, thanks to the assumptions (4) and (9), this new inner product is equivalent to that of (10). Thereafter, using exactly the same arguments as in [28, 29], it follows from (3)–(4) that for any \(\varphi =(y,z,u,w) \in \mathcal{Y}\), we have

$$\begin{aligned} \dfrac{\Vert \varphi \Vert _t}{\Vert \varphi \Vert _s} \le \exp \left( \frac{C}{2 \tau _0} |t-s|\right) , \end{aligned}$$
(17)

for any \(s, \, t \in [0,T]\) and for some positive constant C. In fact, it is easy to check that (16) yields

$$\begin{aligned}&\Vert \varphi \Vert _t^2 - \exp \left( \frac{C}{\tau _0} |t-s|\right) \Vert \varphi \Vert _s^2 =\\&\left( 1-\exp \left( \frac{C}{\tau _0} |t-s|\right) \right) \int _0^1 \left( y_t^2 + y_{xx} ^2 - \varpi ^2 y^2 \right) \, \mathrm{d}x \\&\quad + 2 \left( 1-\exp \left( \frac{C}{\tau _0} |t-s|\right) \right) w^{\top } Q w \\&\quad + K \left( \tau (t)-\tau (s) \exp \left( \frac{C}{\tau _0} |t-s|\right) \right) \\&\quad \times \iint _{\hat{\varOmega }} y_t^2 (x,t-\eta \tau (t)) \; \mathrm{d}x \; \mathrm{d}\eta , \end{aligned}$$

for any \(s, \, t \in [0,T]\). Furthermore, we clearly have

$$\begin{aligned} 1-\exp \left( \frac{C}{\tau _0} |t-s| \right) < 0 \end{aligned}$$

whenever \(C\) is positive and \(s, \, t \in [0,T]\), \(s \ne t\). On the other hand, it follows from the mean-value theorem that there exists \(r \in (s,t)\) such that \(\tau (t)-\tau (s)=\tau ' (r) (t-s)\). This, together with (3)–(4), implies the desired inequality (17).

Our aim now is to show that for each fixed \(t>0\), the operator \(\mathcal{A}(t)\) defined by (12) is dissipative modulo a translation. To this end, let \(\varphi =(y,z,u,w) \in \mathcal{D}(\mathcal{A}(t))\). Then, in the light of (12) and (16), a simple integration by parts yields

$$\begin{aligned}&\langle \mathcal{A}(t) \varphi , \varphi \rangle _t = \displaystyle \int _0^1 (y_{xx} z_{xx}-y_{xxxx} z) \; \mathrm{d}x \nonumber \\&\qquad +\, \alpha \int _0^1 u(x,1) z \; \mathrm{d}x +K \displaystyle \iint _{\hat{\varOmega }} (\eta \dot{\tau }(t)-1) u_{\eta } u \, \mathrm{d}x \; \mathrm{d}\eta \nonumber \\&\qquad +\,\langle (QM+M^{\top }Q)w + 2 Qe_2 u(1,0),w\rangle _{\mathbb {R}^n}\nonumber \\&\quad =\alpha \displaystyle \int _0^1 u(x,1) z \; \mathrm{d}x+ \dfrac{K}{2} \int _0^1 u^2(x,0) \mathrm{d}x\nonumber \\&\qquad -\, y_{xxx}(1) z(1) +\dfrac{K}{2} (\dot{\tau }(t)-1) \displaystyle \int _0^1 u^2(x,1) \; \mathrm{d}x\nonumber \\&\qquad -\, \dfrac{K}{2} {\dot{\tau }(t)}\displaystyle \iint _{\hat{\varOmega }} u^2 \; \mathrm{d}x\, \mathrm{d}\eta \nonumber \\&\qquad +\, \langle (QM+M^{\top }Q)w + 2 Qe_2 u(1,0),w\rangle _{\mathbb {R}^n} \nonumber \\&\quad =-\beta z^2 (1) - e_1^{\top } w z(1)+ \alpha \displaystyle \int _0^1 u(x,1) z \; \mathrm{d}x \nonumber \\&\qquad +\,\dfrac{K}{2} \int _0^1 z^2 \mathrm{d}x +\dfrac{K}{2} (\dot{\tau }(t)-1) \displaystyle \int _0^1 u^2(x,1) \; \mathrm{d}x\nonumber \\&\qquad -\,\dfrac{K}{2} {\dot{\tau }(t)} \displaystyle \iint _{\hat{\varOmega }} u^2 \; \mathrm{d}x\, \mathrm{d}\eta \nonumber \\&\qquad +\,\langle (QM+M^{\top }Q)w + 2 Qe_2 u(1,0),w\rangle _{\mathbb {R}^n},\nonumber \\ \end{aligned}$$
(18)

which, together with (6)–(7), implies that

$$\begin{aligned}&\langle \mathcal{A}(t) \varphi , \varphi \rangle _t =-{\hat{\beta }} z^2 (1) -\zeta w^{\top } N w+ \alpha \int _0^1 u(x,1) z \; \mathrm{d}x \nonumber \\&\quad +\dfrac{K}{2} \int _0^1 z^2 \mathrm{d}x +\dfrac{K}{2} (\dot{\tau }(t)-1) \displaystyle \int _0^1 u^2(x,1) \; \mathrm{d}x \nonumber \\&\quad -\dfrac{K}{2} {\dot{\tau }(t)} \iint _{\hat{\varOmega }} u^2 \; \mathrm{d}x\, \mathrm{d}\eta -\left[ \sqrt{\beta -{\hat{\beta }}} z(1)-w^{\top } p \right] ^2.\nonumber \\ \end{aligned}$$
(19)

Next, invoking Young’s inequality and (5), the identity (19) leads to

$$\begin{aligned}&\langle \mathcal{A}(t) (y,z,u), (y,z,u ) \rangle _t \le -{\hat{\beta }} z^2 (1) -\zeta w^{\top } N w \nonumber \\&\quad -\left[ \sqrt{\beta -{\hat{\beta }}} z(1)-w^{\top } p \right] ^2\nonumber \\&\quad + \left( \dfrac{K}{2}+\dfrac{\alpha }{2\sqrt{1-d}} \right) \displaystyle \int _0^1 z^2 \mathrm{d}x \nonumber \\&\quad +\,\dfrac{\sqrt{1-d}}{2} \left( \alpha -K \sqrt{1-d} \right) \int _0^1 u^2(x,1) \; \mathrm{d}x \nonumber \\&\quad +\,\dfrac{K}{2} {F(t)} \displaystyle \iint _{\hat{\varOmega }} u^2 \; \mathrm{d}x\, \mathrm{d}\eta , \end{aligned}$$
(20)

where

$$\begin{aligned} F(t)=\frac{\sqrt{1+ \dot{\tau }(t)^2}}{2 {\tau (t)}} >0. \end{aligned}$$

It follows from (20) that

$$\begin{aligned}&\langle \mathcal{A}(t) \varphi , \varphi \rangle _t \le -{\hat{\beta }} z^2 (1) -\zeta w^{\top } N w \nonumber \\&\quad -\left[ \sqrt{\beta -{\hat{\beta }}} z(1)-w^{\top } p \right] ^2\nonumber \\&\quad +\dfrac{\sqrt{1-d}}{2} \left( \alpha -K \sqrt{1-d} \right) \int _0^1 u^2(x,1) \; \mathrm{d}x \nonumber \\&\quad +\left( \frac{K}{2} (1+F(t))+\frac{\alpha }{2\sqrt{1-d}} \right) \Vert \varphi \Vert _t^2, \end{aligned}$$

and choosing

$$\begin{aligned} K \ge \frac{\alpha }{\sqrt{1-d}}, \end{aligned}$$
(21)

we can claim that the operator

$$\begin{aligned} \mathcal{B}(t)=\mathcal{A}(t)-\Upsilon (t) I \end{aligned}$$

is dissipative, where

$$\begin{aligned} \Upsilon (t)=\frac{K}{2} (1+F(t))+\frac{\alpha }{2\sqrt{1-d}} >0. \end{aligned}$$
(22)

The task ahead is to show that the operator \(\lambda I-\mathcal{A}(t)\) is onto \({\mathcal{Y}}\), for some \(\lambda >0\). To do so, we shall use the well-known Lax-Milgram result:

Theorem 2

(Lax-Milgram) [3] Assume that \(a(u,v)\) is a continuous coercive bilinear form on a Hilbert space H. Then, given any \(\phi \in H\), there exists a unique element \(u \in H\) such that \(a(u,v) = \langle \phi ,v\rangle , \, \forall v \in H\).

First, let \((f,g,h,r) \in {\mathcal{Y}}\), and let us seek \((y,z,u,w) \in \mathcal{D}(\mathcal{A}(t))\) such that \((\lambda I-\mathcal{A}(t)) (y,z,u,w)= (f,g,h,r)\), that is,

$$\begin{aligned} \left\{ \begin{array}{l} y_{xxxx}+(\lambda ^2-\varpi ^2) y-\alpha u(x,1)=\lambda f+g,\\ z=\lambda y-f,\\ (1-\dot{\tau }(t) \eta ) u_{\eta }+\lambda \tau (t) u = \tau (t) h ,\\ y(0)=y_x (0)=y_{xx}(1)=0,\\ y_{xxx}(1)=\beta z(1)+e_1^{\top } w ,\\ (\lambda I-M) w= \left[ e_2 (\lambda y(1)-f(1))+r \right] ,\\ z=u(\cdot ,0). \end{array} \right. \end{aligned}$$
(23)

Solving the equation for u in the above system, we get

$$\begin{aligned}&u(x,\eta ) = (\lambda y(x)-f(x)) \mathrm{e}^{-{\hat{\tau }} \lambda \eta }\nonumber \\&\quad +\, {\hat{\tau }} \displaystyle \int _0 ^{\eta } \mathrm{e}^{-{\hat{\tau }} \lambda (\eta - v)} h(x,v) \, dv, \;\; \text{ if } \; \tau (t)={\hat{\tau }} \; (\text{ constant }),\nonumber \\ \end{aligned}$$
(24)

or

$$\begin{aligned}&u(x,\eta ) = ( \lambda y(x)-f(x)) \mathrm{e}^{\lambda q(\eta ,t)}\nonumber \\&\quad +\, \mathrm{e}^{\lambda q(\eta ,t)} \int _0 ^{\eta } \dfrac{\tau (t) h(x,v)}{1-\dot{\tau }(t) v} \mathrm{e}^{-\lambda q(v,t)} \, dv, \;\; \text{ if } \;\; \dot{\tau }(t)\not =0,\nonumber \\ \end{aligned}$$
(25)

where \(q(\eta ,t)=\dfrac{\tau (t)}{\dot{\tau }(t)} \ln (1-\dot{\tau }(t) \eta )\). Using (23), we have

$$\begin{aligned} w=(\lambda I-M)^{-1} \left[ e_2 (\lambda y(1)-f(1))+r \right] \end{aligned}$$

and in the light of (24), one has only to seek \(y \in H_c^4\) satisfying

$$\begin{aligned} \left\{ \begin{array}{l} y_{xxxx}+\left( \lambda ^2-\varpi ^2 - \alpha \lambda \mathrm{e}^{-{\hat{\tau }} \lambda }\right) y=\alpha Y(x)\\ \quad +\lambda f+g, \;\; \text{ if } \; \tau (t)={\hat{\tau }} \; (\text{ constant }),\\ y(0)=y_x (0)=y_{xx}(1)=0,\\ y_{xxx}(1)=\lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1)\\ \quad -\,\left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2\right] f(1)\\ \quad +\, e_1^{\top } (\lambda I-M)^{-1} r, \end{array} \right. \end{aligned}$$
(26)

in which

$$\begin{aligned}&Y(x)=-f(x) \mathrm{e}^{-{\hat{\tau }} \lambda }\nonumber \\&\quad +\,{\hat{\tau }}\int _0 ^{1} \mathrm{e}^{-{\hat{\tau }} \lambda (1- v)} h(x,v) \, dv, \;\; \text{ if } \; \tau (t)\!=\!{\hat{\tau }} \; (\text{ constant }). \end{aligned}$$

In turn, using (25), one should find \(y \in H_c^4\) satisfying

$$\begin{aligned} \left\{ \begin{array}{l} y_{xxxx}+\left( \lambda ^2-\varpi ^2 - \alpha \lambda \mathrm{e}^{\lambda q(1,t)}\right) y=\alpha Z(x)\\ \quad +\,\,\lambda f+g, \;\; \text{ if } \;\; \dot{\tau }(t)\not =0,\\ y(0)=y_x (0)=y_{xx}(1)=0,\\ y_{xxx}(1)=\lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1)\\ \quad -\left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2\right] f(1)\\ \quad +\, e_1^{\top } (\lambda I-M)^{-1} r, \end{array} \right. \end{aligned}$$
(27)

where

$$\begin{aligned}&Z(x)= -f(x) \mathrm{e}^{\lambda q(1,t)}\nonumber \\&\quad +\,\mathrm{e}^{\lambda q(1,t)} \int _0 ^{1} \dfrac{\tau (t) h(x,v)}{1-\dot{\tau }(t) v} \mathrm{e}^{-\lambda q(v,t)} \, dv, \;\; \text{ if } \;\; \dot{\tau }(t)\not =0. \end{aligned}$$
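Before passing to the variational formulation, let us record a quick symbolic check (our own verification, with t frozen) that the representation (25) solves the third equation of (23) with the initial condition \(u(x,0)=\lambda y -f\).

```python
# Symbolic check (t frozen) that (25) solves (1 - tau'*eta) u_eta + lam*tau*u = tau*h
# with u(x, 0) = lam*y(x) - f(x); here x enters only as a parameter and is suppressed.
import sympy as sp

eta, v = sp.symbols('eta v')
lam, tau, dtau, z0 = sp.symbols('lambda tau taudot z0')   # z0 = lam*y(x) - f(x)
h = sp.Function('h')

q = lambda s: tau/dtau*sp.log(1 - dtau*s)                  # q(., t) with t frozen
u = sp.exp(lam*q(eta))*(z0
    + sp.integrate(tau*h(v)/(1 - dtau*v)*sp.exp(-lam*q(v)), (v, 0, eta)))

residual = (1 - dtau*eta)*sp.diff(u, eta) + lam*tau*u - tau*h(eta)
print(sp.simplify(residual))     # prints 0; moreover q(0, t) = 0, so u(x, 0) = z0
```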

Multiplying the first equation in (26) by \(\phi \in H_c^2\), we get: If \(\tau (t)={\hat{\tau }}\) is constant, then

$$\begin{aligned}&\int _0 ^{1} \left( y_{xx} \phi _{xx}+[\lambda ^2 -\varpi ^2 - \alpha \lambda \mathrm{e}^{-{\hat{\tau }} \lambda }] y \phi \right) \; \mathrm{d}x \nonumber \\&\quad = \!\lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1) \phi (1) \nonumber \\&\qquad +\! \int _0^{1} \left( \alpha Y(x) + \lambda f+g \right) \phi \mathrm{d}x + e_1^{\top } (\lambda I-M)^{-1} r \phi (1) \nonumber \\&\qquad +\! \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] f(1) \phi (1), \end{aligned}$$
(28)

which can be written in the variational form

$$\begin{aligned} \mathcal{L}_1(y,\phi )=\Lambda _1 (\phi ), \end{aligned}$$

where

$$\begin{aligned} \mathcal{L}_1: H_c^2 \times H_c^2 \longrightarrow \mathbb {R}\end{aligned}$$

is a bilinear form given by

$$\begin{aligned}&\mathcal{L}_1 (y,\phi )=\int _0 ^{1} \left( y_{xx} \phi _{xx}+[\lambda ^2 -\varpi ^2 - \alpha \lambda \mathrm{e}^{-{\hat{\tau }} \lambda }] y \phi \right) \; \mathrm{d}x\\&\quad + \lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1) \phi (1), \end{aligned}$$

and

$$\begin{aligned}&\varLambda _1: H_c^2 \longrightarrow \mathbb {R}\\&\quad \phi \longmapsto \varLambda _1 (\phi )=\\&\quad \int _0^{1} \left( \alpha Y(x) + \lambda f+g \right) \phi \mathrm{d}x + e_1^{\top } (\lambda I-M)^{-1} r \phi (1) \\&\quad + \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] f(1) \phi (1). \end{aligned}$$

Using similar arguments for (27), we obtain: If \(\dot{\tau }(t)\not =0\), then

$$\begin{aligned}&\int _0^{1} \left( y_{xx} \phi _{xx}+\left( \lambda ^2 -\varpi ^2 - \alpha \lambda \mathrm{e}^{\lambda q(1,t)} \right) y \phi \right) \; \mathrm{d}x \nonumber \\&\quad = \lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1) \phi (1) \nonumber \\&\qquad + \int _0^{1} \left( \alpha Z(x) + \lambda f+g \right) \phi \mathrm{d}x + e_1^{\top } (\lambda I-M)^{-1} r \phi (1) \nonumber \\&\qquad + \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] f(1) \phi (1), \end{aligned}$$
(29)

which leads to

$$\begin{aligned} \mathcal{L}_2(y,\phi )=\varLambda _2 (\phi ), \end{aligned}$$

where the bilinear form

$$\begin{aligned} \mathcal{L}_2: H_c^2 \times H_c^2 \longrightarrow \mathbb {R}\end{aligned}$$

is defined by

$$\begin{aligned}&\mathcal{L}_2(y,\phi )= \int _0 ^{1} \left( y_{xx} \phi _{xx}+[\lambda ^2 -\varpi ^2 - \alpha \lambda \mathrm{e}^{\lambda q(1,t)}] y \phi \right) \; \mathrm{d}x \\&\quad + \lambda \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] y(1) \phi (1), \end{aligned}$$

and

$$\begin{aligned}&\varLambda _2: H_c^2 \longrightarrow \mathbb {R}\\&\quad \phi \longmapsto \varLambda _2 (\phi )=\int _0^{1} \left( \alpha Z(x) + \lambda f+g \right) \phi \mathrm{d}x \\&\quad + \left[ \beta +e_1^{\top } (\lambda I-M)^{-1} e_2 \right] f(1) \phi (1) \\&\quad +\, e_1^{\top } (\lambda I-M)^{-1} r \phi (1). \end{aligned}$$

Invoking the conditions (3)–(7) and (9), one can check that for \(\lambda >0\) large enough, \(\mathcal{L}_1\) and \(\mathcal{L}_2\) are continuous coercive bilinear forms on \(H_c^2 \times H_c^2\), whereas \(\varLambda _1\) and \(\varLambda _2\) are continuous linear forms on \(H_c^2\). Applying Theorem 2, we deduce that the operator \(\lambda I-\mathcal{A}(t)\) is onto \(\mathcal{Y}\) for \(\lambda >0\) large enough and so is the operator

$$\begin{aligned} \lambda I-\mathcal{B}(t)=(\lambda +\Upsilon (t))I-\mathcal{A}(t) \end{aligned}$$

since \(\Upsilon (t) >0\) (see (22)).

Moreover, using (3)–(4), we obtain as in [28, 29]

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \mathcal{B} (t) \in L^{\infty }_* ([0,T], B (\mathcal{D}(\mathcal{A}(0)); \mathcal{Y} )), \end{aligned}$$
(30)

the space of equivalence classes of essentially bounded and strongly measurable functions from [0, T] into the set \(B(\mathcal{D}(\mathcal{A}(0)); \mathcal{Y})\) of bounded operators from \(\mathcal{D}(\mathcal{A}(0))\) into \(\mathcal{Y}\).

Summarizing the above properties and thanks to Theorem 1 (see also [10, 30]), we deduce that for any initial data \(\bar{\varphi }_0 \in \mathcal{Y}\), the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} \bar{\varphi }_t = \mathcal{B} (t) \bar{\varphi }, \\ \bar{\varphi }(0)=\bar{\varphi }_0, \end{array} \right. \end{aligned}$$
(31)

has a unique solution \(\bar{\varphi }\in C ([0,\infty ); \mathcal{Y})\). In turn, if \(\bar{\varphi }_0 \in \mathcal{D}(\mathcal{B} (t) ) \), then necessarily the solution \(\bar{\varphi }\) belongs to \( C ([0,\infty ); \mathcal{D}(\mathcal{A} (t) ) ) \cap C^1 ([0,\infty ); \mathcal{Y}) \). This existence and uniqueness result carries over to the Cauchy problem involving the operator \(\mathcal{A} (t)\) instead of \(\mathcal{B} (t)\), thanks to the change in variables \(\varphi (t)=\mathrm{e}^{\int _0^t \Upsilon (s) \, \mathrm{d}s} \bar{\varphi }(t)\). We have thus proved the following result.

Proposition 1

For any initial data \(\varphi _0 \in \mathcal{Y}\), the Cauchy problem (14) has a unique solution \(\varphi (t) \in C ([0,\infty ); \mathcal{Y})\). In turn, if \(\varphi _0 \in \mathcal{D}(\mathcal{A} (t) ) \), then necessarily the solution \(\varphi (t)\) belongs to \( C ([0,\infty ); \mathcal{D}(\mathcal{A} (t) ) ) \cap C^1 ([0,\infty ); \mathcal{Y}) \).

3.2 Exponential stability of the system (14)

In this section, we shall state and prove the exponential stability result of the system (14):

Theorem 3

Under the assumption (9), namely \(|\varpi |< 3\), there exists a positive constant \(\alpha _0\) such that for any \(\alpha < \alpha _0\), the semigroup generated by the operator \(\mathcal{A}(t)\), defined by (12), is exponentially stable in \( {\mathcal{Y}}\). Whereupon, there exist uniform positive constants \(M_0\) and \(\kappa _0\) such that the solution of the Cauchy problem (14) obeys the following estimate

$$\begin{aligned} \Vert \varphi (t) \Vert _{\mathcal{Y}} \le M_0 \mathrm{e}^{-\kappa _0 t} \Vert \varphi _0 \Vert _{\mathcal{Y}} , \quad \forall t \ge 0. \end{aligned}$$
(32)

For the sake of clarity, we are going to establish some preliminary results which are needed for the proof of Theorem 3. To proceed, we pick \(\varphi _0\) in \(\mathcal{D} (\mathcal{A}(t))\), and hence the solution \(\varphi =(y, y_t , y_t(\cdot ,t-\eta \tau (t)),w) \) of the system (14), namely,

$$\begin{aligned} \left\{ \begin{array}{ll} y_{tt} + y_{xxxx} =\varpi ^2 y +\alpha y_t (x,t-\tau (t)), \\ y(0,t) = y_x (0,t) = y_{xx} (1,t)=0, \\ y_{xxx} {(1,t)} = \beta y_{t} (1,t) +e_1^{\top } w (t), \\ \dot{w}(t)=M w(t)+ e_2 y_{t} (1,t), \\ \dfrac{\partial y_{t}}{\partial t} (x,t-\eta \tau (t) ) =\dfrac{\dot{\tau }(t) \eta -1}{\tau (t)} \dfrac{\partial y_{t}}{\partial \eta } (x,t-\eta \tau (t)), \end{array} \right. \end{aligned}$$
(33)

is regular in the sense that

$$\begin{aligned} \varphi \in C^1 (\mathbb {R}^+;{\mathcal{Y}}) \cap C ( \mathbb {R}^+;\mathcal{D} (\mathcal{A}(t))). \end{aligned}$$

On the other hand, in order to make the computations manageable, let us introduce the following functionals:

$$\begin{aligned} E_1 (t)= & {} \frac{1}{2} \int _0^1 \left( y_t^2 + y_{xx} ^2 - \varpi ^2 y^2 \right) \, \mathrm{d}x + w^{\top } Q w\nonumber \\+ & {} \frac{C_1}{2} K \tau (t) \iint _{\hat{\varOmega }} y_t^2 (x,t-\eta \tau (t)) \; \mathrm{d}x \; \mathrm{d}\eta , \end{aligned}$$
(34)
$$\begin{aligned} E_2 (t)= & {} {2} C_2 \int _0^1 \left( x y_t y_{x} \right) \, \mathrm{d}x , \end{aligned}$$
(35)
$$\begin{aligned} \dfrac{E_3 (t)}{K}= & {} C_3 \tau (t) \iint _{\hat{\varOmega }} \mathrm{e}^{-2\delta \eta \tau (t) } y_t^2 (x,t-\eta \tau (t)) \, \mathrm{d}x \, \mathrm{d}\eta , \nonumber \\ \end{aligned}$$
(36)

where K is a positive constant satisfying (21), and \(C_i\) is a positive constant to be determined, for each \(i=1,2,3\). In turn, \(\delta \) is an arbitrary positive constant.
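To make the role of these functionals concrete, the sketch below evaluates \(E_1\), \(E_2\) and \(E_3\) by composite trapezoidal quadrature for a discretized state; all profiles, constants and history values there are illustrative choices of ours.

```python
# Discrete evaluation of the functionals (34)-(36) for an illustrative state:
# y and y_t on an x-grid, the history y_t(x, t - eta*tau(t)) on an (x, eta)-grid,
# and an actuator state w.  All profiles and constants below are made up.
import numpy as np
from scipy.integrate import trapezoid

nx, neta = 201, 101
x = np.linspace(0.0, 1.0, nx)
eta = np.linspace(0.0, 1.0, neta)

y = x**2*(3.0 - x)/6.0                      # satisfies y(0) = y_x(0) = 0
yt = 0.1*x**2
hist = np.outer(0.1*x**2, np.exp(-eta))     # stands for y_t(x, t - eta*tau(t))
w = np.array([0.2, -0.1])
Q = np.array([[1.0, 0.1], [0.1, 0.5]])

varpi, tau_t, K = 2.0, 0.05, 1.0
C1, C2, C3, delta = 1.0, 0.2, 0.1, 1.0

yx = np.gradient(y, x)
yxx = np.gradient(yx, x)                    # rough second derivative on the grid

E1 = (0.5*trapezoid(yt**2 + yxx**2 - varpi**2*y**2, x) + w @ Q @ w
      + 0.5*C1*K*tau_t*trapezoid(trapezoid(hist**2, eta, axis=1), x))
E2 = 2.0*C2*trapezoid(x*yt*yx, x)
E3 = K*C3*tau_t*trapezoid(trapezoid(np.exp(-2*delta*eta*tau_t)*hist**2, eta, axis=1), x)

print("E1 =", E1, " E2 =", E2, " E3 =", E3)
```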

The first result of this section is

Proposition 2

Let \(\varphi =(y, y_t , y_t(\cdot ,t-\eta \tau (t)),w)\) be the regular solution of (14). Then, for any \(t \ge 0\) and for any positive constant \(A_1\), we have the following estimates

$$\begin{aligned}&\min \{ 1,C_1 \} \Vert \varphi \Vert _t^2 \le 2 E_1 (t) \le \max \{ 1,C_1\} \Vert \varphi \Vert _t^2, \end{aligned}$$
(37)
$$\begin{aligned}&{\dot{E}_1} (t) \le -{\hat{\beta }} y_t^2 (1,t) -\left[ \sqrt{\beta -{\hat{\beta }}} y_t(1,t) -w^{\top } p \right] ^2 \nonumber \\&-\zeta w^{\top } N w +\dfrac{1}{2} \left( \alpha A_1+C_1 K \right) \int _0^1 y_t^2 \, \mathrm{d}x \nonumber \\&\quad + \dfrac{1}{2} \left( \dfrac{\alpha }{A_1}-C_1 K (1-d) \right) \int _0^1 y_t^2 (x,t-\tau (t)) \, \mathrm{d}x. \nonumber \\ \end{aligned}$$
(38)

Proof

The proof of (37) is obvious. With regard to the second estimate (38), one has merely to argue as for (19). Indeed, differentiating (34) and using (33), we obtain after integration by parts

$$\begin{aligned}&{\dot{E}_1} (t) \le -{\hat{\beta }} y_t^2 (1,t) -\left[ \sqrt{\beta -{\hat{\beta }}} y_t (1,t) -w^{\top } p \right] ^2\\&\quad -\zeta w^{\top } N w + \alpha \displaystyle \int _0^1 y_t y_t (x,t-\tau (t))\; \mathrm{d}x \\&\quad +\dfrac{C_1 K}{2}\displaystyle \int _0^1 y_t^2 \mathrm{d}x +\dfrac{C_1 K}{2} (\dot{\tau }(t)-1)\\&\quad \displaystyle \int _0^1 y_t^2 (x,t-\tau ) \; \mathrm{d}x. \end{aligned}$$

Now, it suffices to use (5) and the following Young’s inequality

$$\begin{aligned}&\int _0^1 y_t y_t (x,t-\tau (t))\; \mathrm{d}x \le \dfrac{A_1}{2} \displaystyle \int _0^1 y_t^2 \, \mathrm{d}x \\&\quad + \dfrac{1}{2A_1}\displaystyle \int _0^1 y_t^2 (x,t-\tau (t)) \, \mathrm{d}x, \end{aligned}$$

to get the desired estimate.

The estimates related to \(E_2 (t)\) are given below.

Proposition 3

Let \(\varphi =(y, y_t , y_t(\cdot ,t-\eta \tau (t)),w)\) be the regular solution of (14). Then, for any \(t \ge 0\), and for any positive constants \(A_2\) and \(A_3\), the following estimates hold

$$\begin{aligned} \left| E_2 (t) \right|\le & {} C_2 \int _0^1 \left( y_t^2 + \dfrac{1}{2}y_{xx} ^2 \right) \, \mathrm{d}x, \end{aligned}$$
(39)
$$\begin{aligned} \dfrac{{\dot{E}_2} (t) }{C_2}\le & {} (1+\beta A_2) y_t^2 (1,t) +{\alpha }{A_3} \int _0^1 y_t^2 (x,t-\tau ) \, \mathrm{d}x \nonumber \\&+ \left( \dfrac{\beta }{A_2}+ \dfrac{\alpha }{2 A_3} + \dfrac{1}{A_4}+\dfrac{\varpi ^2 }{3} -3 \right) \int _0^1 y_{xx}^2 \, \mathrm{d}x \nonumber \\&- \int _0^1 y_t^2 \mathrm{d}x + A_4 \left( e_1^{\top } w \right) ^2. \end{aligned}$$
(40)

Proof

Establishing (39) is a direct consequence of applying Young’s inequality

$$\begin{aligned} {2} C_2 \int _0^1 \left( x y_t y_{x} \right) \, \mathrm{d}x \le C_2 \int _0^1 \left( y_t^2 + y_{x}^2 \right) \, \mathrm{d}x \end{aligned}$$

and the Poincaré’s inequality

$$\begin{aligned} \int _0^1 f_x^2 \, \mathrm{d}x \le \frac{1}{2} \int _0^1 f^2_{xx} \, \mathrm{d}x, \;\; \forall f \in H_c^2. \end{aligned}$$
(41)

With regard to the proof of (40), we use once again the equations in (33), integrate by parts, to obtain after invoking the boundary conditions in (33)

$$\begin{aligned} \dfrac{{\dot{E}_2} (t) }{C_2}= & {} y_t^2 (1,t) -2 \beta y_x (1,t) y_t (1,t) +\varpi ^2 y^2(1,t) \nonumber \\&- \! \int _0^1 \left\{ y_t^2 + 3 y_{xx}^2 +\varpi ^2 y^2 \right\} \, \mathrm{d}x -2 y_x (1,t) e_1^{\top } w \nonumber \\&+ 2 \alpha \int _0^1 x y_x (x,t) y_t (x,t-\tau (t)) \, \mathrm{d}x. \end{aligned}$$
(42)

Applying Young’s inequality for the cross-product terms \(y_x (1,t) y_t (1,t)\) and \( x y_x (x,t) y_t (x,t-\tau )\) and by virtue of (41) as well as the well-known estimates

$$\begin{aligned} f^2(1) \!\le \! \dfrac{1}{3} \!\int _0^1 f_{xx}^2 \, \mathrm{d}x, \quad f_x^2(1) \!\le \! \int _0^1 \!\! f_{xx}^2 \, \mathrm{d}x, \,\, \forall f \in H_c^2 , \end{aligned}$$
(43)

the identity (42) yields (40).
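As a quick numerical confirmation of the elementary inequalities (41) and (43) used above (a sanity check of ours, not part of the proof), one may test them on a particular function of \(H_c^2\):

```python
# Sanity check of (41) and (43) for f(x) = x**3, which satisfies f(0) = f'(0) = 0
# and hence belongs to H_c^2.
from scipy.integrate import quad

f, fx, fxx = (lambda x: x**3), (lambda x: 3*x**2), (lambda x: 6*x)

int_fx2 = quad(lambda x: fx(x)**2, 0, 1)[0]    # = 9/5
int_fxx2 = quad(lambda x: fxx(x)**2, 0, 1)[0]  # = 12

print(int_fx2 <= 0.5*int_fxx2)    # (41):  1.8 <= 6   -> True
print(f(1)**2 <= int_fxx2/3)      # (43):  1   <= 4   -> True
print(fx(1)**2 <= int_fxx2)       # (43):  9   <= 12  -> True
```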

We also have similar estimates for \(E_3 (t)\)

Proposition 4

Let \(\varphi =(y, y_t , y_t(\cdot ,t-\eta \tau (t)),w)\) be the regular solution of (14). Then, for any \(t \ge 0\), we have

$$\begin{aligned}&E_3 (t) \le C_3 \Vert \varphi (t) \Vert _t^2, \end{aligned}$$
(44)
$$\begin{aligned}&\dfrac{{\dot{E}_3} (t)}{K C_3} \le - (1-d) \mathrm{e}^{-2\delta {\bar{\tau }}} \int _0^1 y_t^2 (x,t-\tau (t)) \, \mathrm{d}x \nonumber \\&\quad -2 \delta \tau (t) \mathrm{e}^{-2\delta {\bar{\tau }}} \iint _{\hat{\varOmega }} y_t^2 (x,t-\eta \tau (t)) \, \mathrm{d}x \, \mathrm{d}\eta \nonumber \\&\quad +\int _0^1 y_t^2 \, \mathrm{d}x. \end{aligned}$$
(45)

Proof

Since (44) is trivial, let us focus on (45). Differentiating (36) and using (33), we obtain

$$\begin{aligned}&\dfrac{{\dot{E}_3} (t)}{K C_3} ={\dot{\tau }}(t) \displaystyle \iint _{\hat{\varOmega }} \mathrm{e}^{-2\delta \eta \tau (t)} y_t^2 (x,t-\eta \tau (t))\, \mathrm{d}x \, \mathrm{d}\eta \nonumber \\&\quad -2 \delta \tau (t) {\dot{\tau }}(t) \displaystyle \iint _{\hat{\varOmega }} \eta \mathrm{e}^{-2\delta \eta \tau (t)} y_t^2 (x,t-\eta \tau (t)) \, \mathrm{d}x \, d \eta \nonumber \\&\quad + \displaystyle \iint _{\hat{\varOmega }} \mathrm{e}^{-2\delta \eta \tau (t)} (\dot{\tau }(t) \eta -1)\nonumber \\&\quad \times \dfrac{\partial }{\partial \eta }\bigl ( y_t^2 (x,t-\eta \tau (t)) \bigr ) \, \mathrm{d}x \, \mathrm{d}\eta . \end{aligned}$$
(46)

Thanks to a simple integration by parts with respect to \(\eta \), we have

$$\begin{aligned}&\iint _{\hat{\varOmega }} \mathrm{e}^{-2\delta \eta \tau (t)} (\dot{\tau }(t) \eta -1) \dfrac{\partial }{\partial \eta }\bigl ( y_t^2 (x,t-\eta \tau (t)) \bigr ) \, \mathrm{d}x \, \mathrm{d}\eta \\&\quad = (\dot{\tau }(t) -1) \mathrm{e}^{-2\delta \tau (t)} \int _0^1 y_t^2 (x,t- \tau (t)) \, \mathrm{d}x+ \int _0^1 y_t^2 \, \mathrm{d}x \\&\quad - \iint _{\hat{\varOmega }} \mathrm{e}^{-2\delta \eta \tau (t)} \left[ {\dot{\tau }}(t) -2 \delta \tau (t) ({\dot{\tau }}(t) \eta -1) \right] \\&\quad \times y_t^2 (x,t-\eta \tau (t)) \, \mathrm{d}x \, \mathrm{d}\eta . \end{aligned}$$

Inserting the above identity in (46) and using (4)–(5) leads to the desired estimate.

Now, we are ready to state and prove the following result.

Proposition 5

Define, along the regular solution of (14), the functional

$$\begin{aligned} \mathcal{E} (t)= E_1 (t)+E_2 (t)+E_3 (t), \end{aligned}$$
(47)

where \(E_i, \, i=1,2,3\) are defined in (34)–(36). If the condition (9) holds, then

(i) there exist positive constants \(L_1\) and \(L_2\), independent of the initial data \(\varphi _0\) such that for any \( t \ge 0\), we have:

$$\begin{aligned} L_1 \Vert \varphi (t) \Vert _t^2 \le \mathcal{E} (t) \le L_2 \Vert \varphi (t) \Vert _t^2; \end{aligned}$$
(48)

(ii) the functional \(\mathcal{E} (t)\) decays exponentially, uniformly with respect to the initial data, provided that \(\alpha \) is small. Specifically, there exist positive constants \(M, \; \kappa , \; \alpha _0\), independent of the initial data \(\varphi _0\) such that for any \( t \ge 0\) and for any \(\alpha < \alpha _0\), we have:

$$\begin{aligned} \mathcal{E} (t) \le M \mathrm{e}^{-\kappa t} \mathcal{E} (0) . \end{aligned}$$
(49)

Proof

Combining (47) with the estimates (37), (39) and (44), and using the assumption (9), that is, \(|\varpi |< 3\), we get

$$\begin{aligned}&\mathcal{E} (t) \le \left( \dfrac{\max \{ 1,C_1\}}{2}+C_2+C_3 \right) \Vert \varphi \Vert _t^2, \\&\mathcal{E} (t) \ge \left( \dfrac{\min \{ 1,C_1\}}{2}-C_2 \right) \displaystyle \int _0^1 y_t^2 \, \mathrm{d}x\\&\quad +\frac{1}{2} \displaystyle \int _0^1 \left( \min \{ 1,C_1\} \left[ 1-\dfrac{\varpi ^2}{3} \right] -C_2 \right) y_{xx} ^2 \, \mathrm{d}x \\&\quad +\, K C_3 \tau (t) \iint _{\hat{\varOmega }} y_t^2 (x,t-\eta \tau (t)) \; \mathrm{d}x \; \mathrm{d}\eta \\&\quad +\,\min \{ 1,C_1\} w^{\top } Q w. \end{aligned}$$

Thereafter, the estimates (48) hold provided that

$$\begin{aligned} C_2 < \min \{ 1,C_1\} \min \left\{ 1/2,1-\frac{\varpi ^2}{3} \right\} . \end{aligned}$$
(50)

Let us turn now to the proof of (49). Differentiating (47) and invoking the estimates (38), (40) and (45), we obtain

$$\begin{aligned}&\dot{\mathcal{E}} (t) \le {C_2} \left[ \dfrac{\varpi ^2}{3}-3+\dfrac{\beta }{A_2}+\dfrac{\alpha }{2 A_3}+\dfrac{1}{A_4} \right] \displaystyle \int _0^1 y_{xx}^2 \, \mathrm{d}x\nonumber \\&\quad +\, [ -{\hat{\beta }}+C_2 (1+\beta A_2)] y_t^2 (1,t)\nonumber \\&\quad + \, \left[ \alpha \left( C_2 A_3\!+\!\dfrac{1}{2A_1}\right) \!-\!(1-d) K \left( \dfrac{C_1}{2} +{C_3} \mathrm{e}^{-2 \delta {\bar{\tau }}} \right) \right] \nonumber \\&\quad \times \displaystyle \int _0^1 y_t^2 (x,t-\tau ) \, \mathrm{d}x \nonumber \\&\quad + \left[ \dfrac{\alpha A_1+C_1 K}{2}+K C_3-C_2 \right] \displaystyle \int _0^1 y_t^2 \, \mathrm{d}x \nonumber \\&\quad -2K{C_3} \delta \tau (t) \mathrm{e}^{-2\delta {\bar{\tau }}} \displaystyle \iint _{\hat{\varOmega }} y_t^2 (x,t-\eta \tau (t)) \, \mathrm{d}x \, \mathrm{d}\eta \nonumber \\&\quad + C_2 A_4 \left( e_1^{\top } w \right) ^2-\zeta w^{\top } N w. \end{aligned}$$
(51)

Thanks to the assumption (9) of our theorem, it follows that \(\frac{\varpi ^2}{3}-3\) is negative, and hence, the coefficient

$$\begin{aligned} \frac{\varpi ^2}{3}-3+\frac{\beta }{A_2}+\frac{\alpha }{2 A_3}+\frac{1}{A_4} \end{aligned}$$

of \(\int _0^1 y_{xx}^2 \, \mathrm{d}x\) can be made negative by choosing \(A_2\), \(A_3\) and \(A_4\) large enough. Subsequently, in order to make the coefficient of \(\int _0^1 y_t^2 (x,t-\tau ) \, \mathrm{d}x\) nonpositive, we choose \(C_1=3\alpha \) and \(C_2\) small enough so that

$$\begin{aligned} C_2 \le \dfrac{1}{2A_3} \left( 3K(1-d)-\dfrac{1}{A_1} \right) , \end{aligned}$$
(52)

where \(A_1\) is chosen large enough so that

$$\begin{aligned} 3K(1-d)-\dfrac{1}{A_1}>0. \end{aligned}$$

In fact, one can consider, for instance,

$$\begin{aligned} A_1 > (3K(1-d))^{-1}. \end{aligned}$$

Furthermore, the coefficients

$$\begin{aligned} -{\hat{\beta }}+C_2 (1+\beta A_2) \end{aligned}$$

of \(y_t^2 (1,t)\) are nonpositive, provided that

$$\begin{aligned} C_2 \le \dfrac{\hat{\beta }}{1+\beta A_2}. \end{aligned}$$
(53)

Now, let us handle the two last terms in (51), namely

$$\begin{aligned} C_2 A_4 ( e_1^{\top } w) ^2-\zeta w^{\top } N w. \end{aligned}$$

We have

$$\begin{aligned} C_2 A_4 \left( e_1^{\top } w \right) ^2 \le C_2 A_4 \Vert e_1\Vert ^2 \Vert w \Vert ^2 \end{aligned}$$

and

$$\begin{aligned} -\zeta w^{\top } N w \le -\zeta \nu _{\min } \Vert w \Vert ^2, \end{aligned}$$

where \(\nu _{\min }\) is the smallest positive eigenvalue of N. Whereupon, we conclude that

$$\begin{aligned}&-\zeta w^{\top } N w+ C_2 A_4 \left( e_1^{\top } w \right) ^2\nonumber \\&\quad \le (C_2 A_4 \Vert e_1\Vert ^2 -\zeta \nu _{\min }) \Vert w \Vert ^2, \end{aligned}$$

which can be made negative by choosing

$$\begin{aligned} C_2 \le \dfrac{\zeta \nu _{\min }}{A_4 \Vert e_1\Vert ^2}. \end{aligned}$$
(54)

Finally, in order to assure the negativity of the coefficient of \(\int _0^1 y_t^2 \, \mathrm{d}x\), that is,

$$\begin{aligned} \frac{\alpha A_1+3 \alpha K}{2}+K C_3-C_2 <0, \end{aligned}$$

it suffices to pick up \(C_3<C_2/K\) and

$$\begin{aligned} \alpha < \alpha _0 =2\dfrac{C_2-KC_3}{A_1+3K}. \end{aligned}$$
(55)

Thanks to the above choices, there exists a positive constant \(\mu \), independent of the initial conditions, such that \( \dot{\mathcal{E}} (t) \le -\mu E_1 (t),\) which, together with (37) and (48), implies that

$$\begin{aligned} \dot{\mathcal{E}} (t) \le -\dfrac{\mu }{2 L_1} \mathcal{E} (t). \end{aligned}$$

Thereby, the estimate (49) can be easily deduced.

Remark 3

It is worth mentioning that the stability result obtained in (49) and (32) holds without any restriction on the positive constant \(\delta \). Whence, one can tune \(\delta \) so that the decay occurs as fast as possible.

3.2.1 Proof of Theorem 3

It suffices to recall the equivalence of the norms defined in (10) and (16) and use the estimates (48) and (49).

4 Stability of the global system

Now, we are ready to deal with the exponential stability of the original system (8) or, equivalently, (11). For the sake of clarity, we shall briefly present some basic definitions and results which will be used in the proof of the main result.

Definition 1

[30] Let U(t, s) be the evolution system corresponding to the operator A(t) defined on a Hilbert space H. A continuous solution u of the integral equation

$$\begin{aligned} u(t)=U(t,s)v +\int _s^t U(t,r) f(r,u(r)) \, dr, \end{aligned}$$

will be called a mild solution of the initial-value problem

$$\begin{aligned} \left\{ \begin{array}{l} {u}_t (t)= \mathcal{A} (t) u (t)+f(t,u(t)), \\ u(s)= v. \end{array} \right. \end{aligned}$$

The following result will be used throughout this section:

Theorem 4

[30] Let H be a Hilbert space and \(f:[0,\infty ) \times H \rightarrow H\) be continuous in t and locally Lipschitz continuous in u, uniformly in t on bounded intervals. If U(t, s) is the evolution system corresponding to the operator A(t), then for every \(v \in H\) there exists a \(T \le \infty \) such that the above initial-value problem has a unique mild solution u on [0, T).

The next result provides a sufficient condition for which the mild solution becomes a classical solution of our initial-value problem:

Theorem 5

[30] Let U(t, s) be the evolution system corresponding to the operator A(t). If f is continuously differentiable, then the mild solution of the above initial-value problem with \(v \in D(A)\) is a classical solution.

We also state a result known as Barbalat’s lemma:

Lemma 1

[23] Let \(f: \mathbb {R}\rightarrow \mathbb {R}\) be a uniformly continuous real function on \([0,\infty )\). Suppose that \(\int _0^{\infty } f(s) \, \mathrm{d}s\) exists and is finite. Then, \(f(t) \rightarrow 0\) as \(t \rightarrow \infty \).

Finally, we give another useful inequality in the next lemma:

Lemma 2

(Gronwall’s lemma) Let \(a \in L^1 (0,T)\) such that \(a(t) \ge 0\). If for some \(K \ge 0\), the function \(g \in L^{\infty } (0,T) \) satisfies the inequality

$$\begin{aligned} g(t) \le K+\int _0^{t} a(s) g(s)\, \mathrm{d}s, \end{aligned}$$

then

$$\begin{aligned} g(t) \le K \exp \left( \int _0^{t} a(s)\, \mathrm{d}s \right) . \end{aligned}$$

Our main result is as follows:

Theorem 6

Assume that the desired angular velocity \(\varpi \) satisfies the smallness condition \(|\varpi |< 3\) (see (9)). Then, for any initial data \(\Phi _0 \in \mathcal{D}(\mathcal{A}(t)) \times \mathbb {R}\), there exists a positive constant \(\alpha _0\) (see (55)), independent of \(\Phi _0\), such that for any \(\alpha < \alpha _0\), the solution \(\Phi (t)\) of the closed-loop system (11) exponentially tends to the equilibrium point \((0_{\scriptscriptstyle {\mathcal{Y}}}, \varpi )\) in \(\mathcal{X}\), as \(t \rightarrow \infty \).

Proof

First of all, we point out that most of the arguments used in this proof run along much the same lines as in [7, 8] (see also [24, 33]), with of course a number of changes born of necessity. Therefore, the exposition will be concise.

Recall that Proposition 1 leads us to claim, thanks to the theory of evolution semigroups, that for \(s \in [0,t]\), there exists a unique evolution system U(t, s) associated with the operator \(\mathcal{A}(t)\) [10, 30]. This, together with the fact that \(\mathcal{F}\) is continuously differentiable [33], yields the following assertion: For any \(\Phi _0 =(\phi _0,\omega _0) \in \mathcal{X}\), there exists a unique local mild solution \(\Phi (\cdot ) =(\phi (\cdot ),\omega (\cdot )) \in C \left( [0,T_0]; \mathcal{X} \right) \) of (11), for some \(T_0>0\) (see Definition 1 and Theorem 4). Indeed, \(\Phi \) is given by the variation of constants formula. Moreover, a regularity result [30] leads us to conclude that each local solution of (11), with initial data in \(\mathcal{D} (\mathcal{A}(t)) \times \mathbb {R}\), is a classical one (see Theorem 5). Then, using the same approach as in [32], one can show that each strong solution exists globally and is bounded. Thus, \(\int _0^{+\infty } (\omega (t)- \varpi )^2 \mathrm{d}t \) converges and the solution \((\phi (t),\omega (t))\) is bounded in \(\mathcal{X}\). This implies, thanks to Barbalat's result (see Lemma 1), that \(\lim _{t \rightarrow +\infty } \omega (t) =\varpi \), and hence, for any \(\epsilon >0\), there exists T sufficiently large such that for any \(t \ge T\)

$$\begin{aligned} | \omega ^2(t) - \varpi ^2 | < \epsilon . \end{aligned}$$
(56)

On the other hand, the solution

$$\begin{aligned} \Phi (t)= & {} (y(\cdot ,t), y_t (\cdot ,t), u(\cdot ,t), w(t), \omega (t))\\= & {} ( \Phi _1 (t), \omega (t)) \end{aligned}$$

of (11) emanating from \(\Phi _s =(\Phi _1 (s), \omega (s)) \in \mathcal{D}(\mathcal{A}(t)) \times \mathbb {R}\) (\(s \in [0, t]\)) can be partially obtained from the variation of constants formula [10, 30]

$$\begin{aligned} \Phi _1 (t)= & {} U(t,s)\Phi _1 (s) +\int _s^t U(t,r) ( \omega ^2(r) - \varpi ^2 )\nonumber \\&\times P \Phi _1 (r)\, dr \end{aligned}$$
(57)

for any \( t \ge s \ge 0.\) Here P is the compact operator on \(\mathcal{Y}\) defined by

$$\begin{aligned} P (y, z, u,w)=(0,y,0,0), \end{aligned}$$

for any \((y, z, u,w) \in \mathcal{Y}\). In turn, the second part of the solution \(\Phi (t)\), namely \(\omega (t)\), is given by the differential equation in (8). Using the boundedness of P and invoking (56) and (32), the identity (57) gives

$$\begin{aligned}&\Vert \Phi _1 (t) \Vert _{\scriptscriptstyle {\mathcal{Y}}} \le M_0 \mathrm{e}^{-\kappa _0 (t-T)} \Vert \Phi _1 (T) \Vert _{\scriptscriptstyle {\mathcal{Y}}}\nonumber \\&\quad + \epsilon M_0 \int _\mathrm{T}^t \mathrm{e}^{-\kappa _0 (t-s)} \Vert \Phi _1 (s) \Vert _{\scriptscriptstyle {\mathcal{Y}}}\, \mathrm{d} s, \quad \quad \forall \; t \ge T. \end{aligned}$$
(58)

Now, it suffices to apply Gronwall's inequality (Lemma 2) to (58) to get

$$\begin{aligned} \Vert \Phi _1 (t) \Vert _{\scriptscriptstyle {\mathcal{Y}}} \le M_0 \Vert \Phi _1 (T) \Vert _{\scriptscriptstyle {\mathcal{Y}}} \mathrm{e}^{\scriptscriptstyle -(\kappa _0-\epsilon M_0) (t-T)}, \quad \forall \; t \ge T. \nonumber \\ \end{aligned}$$
(59)

This gives rise to the exponential stability of \(\Phi _1 (t)\) in \({\mathcal{Y}}\) as long as \(\epsilon <\frac{\kappa _0}{M_0}\), which is possible in view of the arbitrariness of \(\epsilon \) (see (56)). Finally, going back to (8), one can establish the exponential convergence of \(\omega (t) -{\varpi }\) in \(\mathbb {R}\) by arguing as in [24, 33] (see also [7, 8]).

5 Numerical application

The aim of this section is to illustrate the outcome stated and proved in the previous section via some numerical examples. More precisely, we shall highlight the importance of the smallness condition \(|\varpi |< 3\) related to the exponential stability result of the closed-loop system (11). To do so, we assume that \(\omega (t)=\varpi >0 \), \(\tau (t)=\tau >0 \) and the boundary force control \(\mathcal{U}_\mathrm{F} (t)\) is static. Thereafter, it is a simple task to check that \(\lambda \) is an eigenvalue of the system if and only if there exists a nonzero \((y,z,u)\) such that

$$\begin{aligned} \left\{ \begin{array}{l} -y_{xxxx}+\varpi ^2 y + \alpha u(x,1) =\lambda z,\\ u_{\eta }(x,\eta )+ \lambda \tau u(x,\eta )=0,\\ z=\lambda y,\\ y(0)=y_x (0)=y_{xx}(1)=0,\\ y_{xxx}(1)=\lambda \beta y(1),\\ u(x,0)=z, \end{array} \right. \end{aligned}$$

which implies that one just has to seek a nontrivial solution y of the following system

$$\begin{aligned} \left\{ \begin{array}{l} y_{xxxx}+(\lambda ^2-\varpi ^2-\alpha \lambda \mathrm{e}^{-\tau \lambda }) y=0,\\ y(0)=y_x (0)=y_{xx}(1)=0,\\ y_{xxx}(1)=\lambda \beta y(1). \end{array} \right. \end{aligned}$$

Let

$$\begin{aligned} \sigma ^4={\varpi }^2 - \lambda ^2 +\alpha \lambda \mathrm{e}^{-\tau \lambda }. \end{aligned}$$
(60)

Hereby, one can show that \(\lambda \) is an eigenvalue of the system if and only if \(\lambda \) is a solution of the characteristic equation

$$\begin{aligned}&\sigma ^3 (1+\cosh \sigma \cos \sigma )+ \beta \lambda (\cosh \sigma \sin \sigma -\sinh \sigma \cos \sigma )\nonumber \\&\quad =0. \end{aligned}$$
(61)
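For completeness, (61) can be re-derived symbolically from the general solution of the fourth-order equation above; the short computation below (our own check, assuming sympy is available) recovers it up to the nonzero factor \(2\sigma ^2\).

```python
# Symbolic re-derivation of the characteristic equation (61).  The general solution
# of y'''' - sigma**4 y = 0 with y(0) = y'(0) = 0 is
# y = A (cosh(sigma x) - cos(sigma x)) + B (sinh(sigma x) - sin(sigma x));
# imposing y''(1) = 0 and y'''(1) = lam*beta*y(1) gives a 2x2 linear system in (A, B)
# whose determinant must vanish.
import sympy as sp

sigma, lam, beta, x, A, B = sp.symbols('sigma lambda beta x A B')
y = A*(sp.cosh(sigma*x) - sp.cos(sigma*x)) + B*(sp.sinh(sigma*x) - sp.sin(sigma*x))

bc1 = sp.diff(y, x, 2).subs(x, 1)                   # y''(1) = 0
bc2 = (sp.diff(y, x, 3) - lam*beta*y).subs(x, 1)    # y'''(1) - lam*beta*y(1) = 0

det = sp.Matrix([[bc1.coeff(A), bc1.coeff(B)],
                 [bc2.coeff(A), bc2.coeff(B)]]).det()

target = (sigma**3*(1 + sp.cosh(sigma)*sp.cos(sigma))
          + beta*lam*(sp.cosh(sigma)*sp.sin(sigma) - sp.sinh(sigma)*sp.cos(sigma)))
print(sp.simplify(sp.expand(det - 2*sigma**2*target)))   # prints 0
```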
Fig. 2 Spectrum with \(\alpha =1, \, \beta =3, \; \tau =0.01\) and \(\varpi =2\)

Next, let \(\sigma = \sigma _1 +i \sigma _2\) and \(\lambda = \lambda _r +i \lambda _m\), where \(\sigma _i\) as well as \(\lambda _r\) and \(\lambda _m\) are real numbers for \(i=1,2\). Whereupon, (60) yields

$$\begin{aligned} \left\{ \begin{array}{l} \sigma _1^4 +\sigma _2^4-6\sigma _1^2 \sigma _2^2\\ \quad =\alpha \mathrm{e}^{-\tau \lambda _r} \left[ \lambda _r \cos (\tau \lambda _m)+\lambda _m \sin (\tau \lambda _m) \right] \\ \quad +\varpi ^2-\lambda _r^2 +\lambda _m^2,\\ 4\sigma _1 \sigma _2 \left( \sigma _1^2 -\sigma _2^2 \right) \\ \quad = \alpha \mathrm{e}^{-\tau \lambda _r} \left[ \lambda _m \cos (\tau \lambda _m)-\lambda _r \sin (\tau \lambda _m) \right] \\ \quad -2\lambda _r \lambda _m. \end{array} \right. \end{aligned}$$

Inserting the above equations into (61), the latter can be reformulated in the form \(\varGamma _1 (\lambda _r,\lambda _m) + i \varGamma _2 (\lambda _r,\lambda _m) =0,\) in which \(\varGamma _1\) and \(\varGamma _2\) are two real functions of the unknowns \(\lambda _r\) and \(\lambda _m\). Clearly, \(\lambda =\lambda _r +i \lambda _m\) is an eigenvalue if and only if both \(\varGamma _1\) and \(\varGamma _2\) vanish. Lastly, thanks to MAPLE 13, one can get the approximate values of the pairs \(\left( \lambda _r,\lambda _m \right) \) for which the graphs of \(\varGamma _1 (\lambda _r,\lambda _m)=0\) and \(\varGamma _2 (\lambda _r,\lambda _m)=0\) intersect. It is then apparent from Fig. 2 that the spectrum of the system with \(\alpha =1\), \(\beta =3\), \(\tau =0.01\) and \(\varpi =2\) has two branches and, more importantly, lies entirely in the open left half-plane. Thus, the system is stable. This numerical result validates Theorem 6. Notwithstanding, the case \(\varpi =3.2\) generates some eigenvalues with positive real parts, as depicted in Fig. 3. This is due to the fact that the condition \(|\varpi | < 3\) of Theorem 6 is violated.
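The intersection search can also be reproduced with freely available tools; the rough sketch below (our own illustration, using mpmath rather than MAPLE) locates roots of (61), with \(\sigma \) given by (60), by complex secant iterations started from a grid of initial guesses. The grid and tolerances are ad hoc choices.

```python
# Rough numerical localisation of the characteristic roots: we divide (61) by
# sigma**3 (which removes spurious zeros at sigma = 0) and run complex secant
# iterations from a grid of initial guesses.  Grid and parameters are ad hoc.
import mpmath as mp

alpha, beta, tau, varpi = 1.0, 3.0, 0.01, 2.0      # the data of Fig. 2

def char_eq(lam):
    sigma = (varpi**2 - lam**2 + alpha*lam*mp.exp(-tau*lam))**0.25   # from (60)
    return (1 + mp.cosh(sigma)*mp.cos(sigma)
            + beta*lam*(mp.cosh(sigma)*mp.sin(sigma)
                        - mp.sinh(sigma)*mp.cos(sigma))/sigma**3)

roots = set()
for re in range(-20, 1, 2):
    for im in range(0, 200, 10):
        try:
            lam = mp.findroot(char_eq, mp.mpc(re, im))
            roots.add((round(float(lam.real), 3), round(abs(float(lam.imag)), 3)))
        except (ValueError, ZeroDivisionError):
            pass

for lam_r, lam_m in sorted(roots, key=lambda z: z[1]):
    print(lam_r, lam_m)   # with varpi = 2 all roots found should satisfy lam_r < 0
```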

Fig. 3 Spectrum with \(\alpha =1, \, \beta =3, \; \tau =0.01\) and \(\varpi =3.2\)

6 Concluding discussion

In this article, the well-known rotating disk-beam system has been considered in the presence of a time-dependent interior delayed damping control. Despite the presence of this delay, which could be a source of poor performance and instability, it has been proved that the effect of such a delay can be compensated by the action of a boundary force control applied at the free end of the beam, in addition to the torque control exerted on the disk. Indeed, the closed-loop system is shown to possess the highly desirable property of exponential stability, provided that the delay satisfies appropriate conditions and the angular velocity of the disk does not exceed the value 3. Numerical examples are provided to illustrate the correctness of the results.

We would like to point out that a promising research avenue is to consider a time-dependent delay in the boundary force control. It would then be very interesting to reduce the impact of such a delay via the presence of an interior damping control.