1 Introduction

We study the mechanical and thermal evolution of a thermoelastic beam in unilateral contact. Such contact problems arise naturally in industrial processes, where two or more materials may come into contact or lose contact as a result of thermoelastic expansion or contraction. Here, we consider the cross-contact problem for Timoshenko’s beam model. The physical setting is represented in Fig. 1.

The equations of motion and energy balance are described by

$$\begin{aligned} \begin{array}{lll} \rho _1\varphi _{tt}-k(\varphi _x+\psi )_x=0,\\ \rho _2\psi _{tt}-b\psi _{xx}+k(\varphi _x+\psi )+\sigma \theta _x=0,\\ \rho _3\theta _t-\tau \theta _{xx}+\sigma \psi _{xt}=0. \end{array} \end{aligned}$$
(1.1)

The mathematical modelling can be found in [1, 2]. Here, \(\varphi (x,t)\) stands for the transversal displacement of the point x on the beam, \(\psi \) is the rotation angle of the cross section and \(\theta \) is the temperature difference of the beam. Moreover, \(\rho _1=\rho A,\ \ \rho _2=\rho I,\ k=\kappa G A,\ \ b= EI\), where E is Young’s modulus, G is the modulus of rigidity and \(\kappa \) is the transversal shear factor. The quantities \(\rho ,\ A\) and I are the density of the body, the area of the cross section and the moment of inertia of the cross section, respectively. The constants \(\rho _3,\ \tau ,\ \sigma \ > 0\) are physical parameters from thermoelasticity theory. The initial conditions of the model are given by

$$\begin{aligned} \begin{array}{lll} \varphi (x,0)&{}=&{}\varphi _0(x),\ \ \varphi _t(x,0)=\varphi _1(x), \ \theta (x,0)=\theta _0(x), \ \forall x\in (0,L)\\ \psi (x,0)&{}=&{}\psi _0(x),\ \ \psi _t(x,0)=\psi _1(x), \ \forall x\in (0,L), \end{array} \end{aligned}$$
(1.2)

and we use the following boundary conditions

$$\begin{aligned} \displaystyle \varphi (0,t)=\psi (0,t)=\theta (0,t)=\psi (L,t)=\theta (L,t)=0, \ \ \text{ in }\ \ (0,T). \end{aligned}$$
(1.3)

For the free end of the beam, where contact with the obstacle can occur, we consider Signorini’s contact condition:

$$\begin{aligned} g_1\le \varphi (L,t)\le g_2,\ \ \ 0\le t\le T. \end{aligned}$$
(1.4)
Fig. 1 Beam subject to a constraint at the free end L

This condition ensures that the transversal displacement at \( x = L \) is restricted to lie between the stops \(g_1\) and \(g_2.\) The mathematical boundary conditions for this physical setting are as follows:

$$\begin{aligned} \begin{array}{ccl} S(L,t)\ge 0 &{}\quad \text{ if }&{} \varphi (L,t)=g_1,\\ &{}&{}\\ S(L,t)=0 &{}\quad \text{ if }&{} g_1<\varphi (L,t)<g_2,\\ &{}&{}\\ S(L,t)\le 0 &{}\quad \text{ if }&{} \varphi (L,t)=g_2,\\ \end{array} \end{aligned}$$
(1.5)

where \(S=k(\varphi _x+\psi )\) is the shear force and \(M=b\psi _{x}\) is the bending moment.

In a series of articles, Andrews et al. [3] and Kuttler and Shillor [4] studied the problem of one-dimensional quasi-static thermoelastic contact and demonstrated the existence of global weak solutions of their respective models. The numerical aspects of the problem were studied in [5, 6]. In [7], the Signorini problem for the Euler–Bernoulli thermoelastic beam system was considered; the authors showed the global existence of weak solutions of the model, which decay exponentially to zero. Finally, in [8], the authors demonstrated the global existence of weak solutions of the Signorini problem for Timoshenko’s thermoelastic beam model and, by introducing an additional friction mechanism, were able to show that the solutions of the model decay exponentially to zero. This additional dissipative mechanism is decisive in the proof of the exponential stability of the problem. Here, we consider only the dissipation produced by the temperature difference; for this reason, we need the wave propagation speeds to be equal:

$$\begin{aligned} \chi _0:=\frac{k}{\rho _1}-\frac{b}{\rho _2}=0. \end{aligned}$$
(1.6)

In the general case (different propagation speeds), we prove that the decay rate is polynomial. Our method is different from those of the works cited above and follows the theory of semigroups.

The main contribution of this article is the use of semigroup theory to solve the Signorini problem. We do this by taking dynamic boundary conditions and treating the Signorini problem through a Lipschitz perturbation, which yields the normal compliance condition; we then pass to the contact problem by means of observability inequalities. We believe that this method is stronger than the penalty method used in all the articles cited above, because we get more information about the asymptotic behaviour of the solution under all boundary conditions, unlike the articles [7,8,9], where special boundary conditions had to be used to show the exponential decay. Furthermore, with this method it is possible to prove the polynomial decay of the solutions of Timoshenko’s contact problem. Finally, we believe that the polynomial decay rate that we obtain is optimal, in the sense that it is the same rate as that obtained in [10, 11], where optimality is demonstrated.

The remaining part of this manuscript is organized as follows: Sects. 2 and 3 deal with the global existence and the uniform stability of the hybrid system, respectively. In Sect. 4, we consider the normal compliance condition as a Lipschitz perturbation and then take the limit \( \epsilon \rightarrow 0 \) to show the existence of global weak solutions to Signorini’s problem. In addition, we show exponential stability, provided the wave propagation speeds of the system are equal, and polynomial stability in the general case. Finally, in Sect. 5 we present numerical experiments that verify the decay properties of the solutions.

2 The semigroup setting

Our starting point is the linear hybrid Timoshenko system given by

$$\begin{aligned} \begin{array}{lll} \displaystyle \rho _1\varphi _{tt}-k(\varphi _x+\psi )_x =0,\ \ \text{ in }\ (0,L)\times (0,\infty )\\ \displaystyle \rho _2\psi _{tt}-b\psi _{xx}+k(\varphi _x+\psi ) +\sigma \theta _x=0,\ \ \text{ in }\ (0,L)\times (0,\infty )\\ \displaystyle \rho _3\theta _t-\tau \theta _{xx}+\sigma \psi _{xt}=0,\ \ \text{ in }\ (0,L)\times (0,\infty ). \end{array} \end{aligned}$$
(2.1)

subject to the initial data (1.2), the boundary conditions (1.3), and \(\varphi (L,t)=v(t)\), where

$$\begin{aligned} \begin{array}{lll} \epsilon v_{tt} +\epsilon v_t +\epsilon v+S(L,t)=0. \end{array} \end{aligned}$$
(2.2)

Equation (2.2) is called a dynamic boundary condition. It describes the oscillations of a uniform cantilever beam carrying a load of mass \( \epsilon \) at its tip, with a damping term proportional to the velocity. For the moment, we omit the superscript \(\epsilon \) in system (2.1)–(2.2); we use this dependence later, when we carry out the limit process \(\epsilon \rightarrow 0\). The purpose of these boundary conditions is to apply a Lipschitz perturbation to obtain the normal compliance condition and then to arrive at the Signorini problem.

Let us introduce the Hilbert space \({\mathcal {H}}\)

$$\begin{aligned} {\mathcal {H}}=V_0\times L^2(0,L)\times H_0^1(0,L)\times L^2(0,L)\times L^2(0,L)\times {\mathbb {R}}^2, \end{aligned}$$

where

$$\begin{aligned} V_0 = \{u\in H^1(0,L); u(0)=0\} \end{aligned}$$

Then, \({\mathcal {H}}\) is a Hilbert space with the norm

$$\begin{aligned} \Vert U\Vert _{{\mathcal {H}}}^2=\int \limits _0^L\Big [\rho _1|\Phi |^2+\rho _2|\Psi |^2+k|\varphi _x\!+\!\psi |^2+b|\psi _x|^2 +\rho _3|\theta |^2\Big ]\ {\text {d}}x+\epsilon |V|^2+\epsilon |v|^2. \end{aligned}$$

Denoting \(\Phi =\varphi _t\), \(\Psi =\psi _t\) and \(V=v_t\), system (2.1)–(2.2) can be written as

$$\begin{aligned} \frac{{\text {d}}U}{{\text {d}}t}={\mathcal {A}}U,\ \ U(0)=U_0, \end{aligned}$$

where \(U=(\varphi ,\Phi ,\psi ,\Psi ,\theta ,v,V)^\top \), \(U_0:=(\varphi _0,\varphi _1,\psi _0,\psi _1,\theta _0,v_0,v_1)^\top \) and \({\mathcal {A}}\) is the operator

$$\begin{aligned} {\mathcal {A}}U=\left( \Phi ,\frac{1}{\rho _1}S_x,\Psi , \frac{1}{\rho _2}M_x-\frac{1}{\rho _2}S-\frac{\sigma }{\rho _2}\theta _x, \frac{\tau }{\rho _3}\theta _{xx}-\frac{\sigma }{\rho _3}\Psi _x, V,-[ V+v+\frac{1}{\epsilon }S(L)]\right) ^\top \end{aligned}$$
(2.3)

with domain of \({\mathcal {A}}\) given by

$$\begin{aligned} D({\mathcal {A}}):=\left\{ U\in {\mathbb {H}};\;\; \varphi (L)=v\right\} \end{aligned}$$

where

$$\begin{aligned} {\mathbb {H}}:=[H^2(0,L)\cap V_0]\times V_0\times [H^2(0,L)\cap H_0^1(0,L)]\times H_0^1(0,L)\times [H^2(0,L)\cap H_0^1(0,L)]\times {\mathbb {R}}^2. \end{aligned}$$

Moreover, \({\mathcal {A}}\) is dissipative:

$$\begin{aligned} \text{ Re }({\mathcal {A}}U,U)=-\tau \int \limits _0^L|\theta _x|^2\ {\text {d}}x-\epsilon |V|^2\le 0. \end{aligned}$$
(2.4)

To show the well-posedness of (2.1)–(2.2), we only need to prove that \({\mathcal {A}}\) is the infinitesimal generator of a \(C_0\) semigroup. Since \({\mathcal {A}}\) is dissipative, it is enough to show that \(0\in \varrho ({\mathcal {A}})\), see [12]. That is, for any \(F=(f_1,f_2,f_3,f_4,f_5,f_6,f_7)^\top \in {\mathcal {H}}\), there exists a unique \(U\in D({\mathcal {A}})\) such that \({\mathcal {A}}U=F\). In fact, recalling the definition of \({\mathcal {A}}\), we get the system

$$\begin{aligned} \Phi =f_1\in H^1(0,L),\quad \Psi =f_3\in H^1(0,L),\quad V=f_6, \end{aligned}$$

and

$$\begin{aligned} S_x=\rho _1 f_2,\quad M_x-S-\sigma \theta _x=\rho _2 f_4,\quad \tau \theta _{xx}-\sigma \Psi _x=\rho _3f_5,\quad V+v+\frac{1}{\epsilon }S(L)=-f_7. \end{aligned}$$

Since \(\theta \) verifies Dirichlet boundary conditions and \(\Psi \) is already given by \(f_3\), the Lax–Milgram lemma implies that there exists a unique \(\theta \in H^2(0,L)\). It remains to show the existence of \(\psi \) and \(\varphi \), which satisfy

$$\begin{aligned} k(\varphi _x+\psi )_x=\rho _1 f_2, \quad b\psi _{xx}-k(\varphi _x+\psi )=\sigma \theta _x+\rho _2 f_4, \end{aligned}$$

verifying the following boundary conditions

$$\begin{aligned} \varphi (0)=\psi (0)=\psi (L)=0,\quad \varphi (L)+\frac{1}{\epsilon }S(L)=-f_7-f_6. \end{aligned}$$

Denoting \(U^i=(\varphi ^i,\psi ^i)\), the bilinear form

$$\begin{aligned} a(U^1,U^2)=\int \limits _0^Lk(\varphi _x^1+\psi ^1)(\varphi _x^2+\psi ^2)+b\psi _{x}^1\psi _{x}^2\;{\text {d}}x \end{aligned}$$

is symmetric, continuous and coercive over the convex set

$$\begin{aligned} {\mathbb {K}}=\left\{ (\varphi ,\psi );\;\; \varphi \in V_0, \psi \in H_0^1(0,L),\;\; \varphi (L)+\frac{1}{\epsilon }S(L)=-f_7-f_6\right\} . \end{aligned}$$

Thus, for any \(F\in {\mathcal {H}}\) there exists a unique weak solution to the above system (see Theorem 5.6 (Stampacchia), page 138 of [13]). Using the equations, we conclude that \((\varphi ,\Phi ,\psi ,\Psi ,\theta ,v,V)\in D({\mathcal {A}})\).

Theorem 2.1

The operator \({\mathcal {A}}\) is the infinitesimal generator of a \(C_0\) semigroup of contractions.

The above theorem implies the global existence of solutions for the corresponding hybrid problem.

3 Asymptotic behaviour of the hybrid system

The main tools we use in this section are the results due to Pruess [14] and to Borichev and Tomilov [15].

Theorem 3.1

Let \(S(t)=e^{{\mathcal {A}}t}\) be a \(C_0\)-semigroup of contractions over a Hilbert space \({\mathcal {H}}\). Then (see [14]), S(t) is exponentially stable if and only if

$$\begin{aligned} i{\mathbb {R}}\subset \varrho ({\mathcal {A}})\quad {and}\quad \Vert (i\,\lambda \,I - {\mathcal {A}})^{-1}\Vert _{{\mathcal {L}}({\mathcal {H}})} \leqslant C, \qquad \forall \,\lambda \in {\mathbb {R}}. \end{aligned}$$

Moreover, if \(i{\mathbb {R}}\subset \varrho ({\mathcal {A}})\), then we have (see [15])

$$\begin{aligned} \Vert \left( i\lambda - {\mathcal {A}}\right) ^{-1}\Vert _{{\mathcal {L}}({\mathcal {H}})} \le C |\lambda |^\beta \;\;\Leftrightarrow \;\; \Vert \mathrm{e}^{t{\mathcal {A}}}\Phi _0\Vert _{{\mathcal {H}}} \le \frac{C}{t^{\frac{1}{\beta }}}\Vert {\mathcal {A}} \Phi _0\Vert _{{\mathcal {H}}}. \end{aligned}$$

Let us consider \(U=(\varphi ,\Phi ,\psi ,\Psi ,\theta ,v,V)^\top \in D({\mathcal {A}})\) and \(F=(f_1,f_2,f_3,f_4,f_5,f_6,f_7)^\top \) \(\in {\mathcal {H}}.\) The resolvent equation \(i\lambda U-{\mathcal {A}}U=F\) in terms of its components can be written as

$$\begin{aligned} \begin{array}{rll} &{}i\lambda \varphi -\Phi =f_1,\\ &{}i\lambda \rho _1\Phi -k(\varphi _x+\psi )_x =\rho _1f_2,\\ &{}i\lambda \psi -\Psi =f_3,\\ &{}i\lambda \rho _2 \Psi -b\psi _{xx}+k(\varphi _x+\psi )+\sigma \theta _x =\rho _2f_4,\\ &{}i\lambda \rho _3\theta -\tau \theta _{xx}+\sigma \Psi _x =\rho _3f_5,\\ &{}i\lambda v -V =f_6,\\ &{}i\lambda \epsilon V+\epsilon v+\epsilon V+S(L,t) =\epsilon f_7. \end{array} \end{aligned}$$
(3.1)

From (2.4) and the resolvent equation \(i\lambda U-{\mathcal {A}}U=F\), we get

$$\begin{aligned} \epsilon |V|^2+\int \limits _0^L\tau |\theta _x|^2\;{\text {d}}x =\text{ Re }(F,U)_{{\mathcal {H}}}. \end{aligned}$$
(3.2)

To get exponential stability, we use condition (1.6). Let us introduce the functionals

$$\begin{aligned} {\mathcal {I}}(x)= & {} \underbrace{\rho _2b|\Psi (x)|^2 +|M(x)|^2}_{:={\mathcal {I}}_\psi (x)}+\underbrace{\rho _1k|\Phi (x)|^2 +|S(x)|^2}_{:={\mathcal {I}}_\varphi (x)} \\ {\mathcal {L}}(s)= & {} q'\left( \rho _2|\Psi |^2 +|M|^2+\rho _1|\Phi |^2 +|S|^2\right) - q \rho _1 k\Phi {\overline{\Psi }}- qS{\overline{M}}, \end{aligned}$$

where

$$\begin{aligned} q(x)= \frac{e^{nx}-e^{n\xi _1}}{n},\qquad q_0(x)= \frac{e^{-nx}-e^{-n\xi _2}}{n}. \end{aligned}$$
(3.3)

Note that \(q'(x)=e^{nx}\ge n\,q(x)\) on \([\xi _1,\xi _2]\), so \(q'(x)\) is large in comparison with q for n large; therefore, there exist positive constants \(c_0\) and \(c_1\) such that

$$\begin{aligned} c_0\int \limits _{\xi _1}^{\xi _2}{\mathcal {I}}(x)\;{\text {d}}x\le \int \limits _{\xi _1}^{\xi _2}{\mathcal {L}}(s) {\text {d}}s\le c_1 \int \limits _{\xi _1}^{\xi _2}{\mathcal {I}}(x)\;{\text {d}}x. \end{aligned}$$

Lemma 3.1

For any \([\xi _1,\xi _2]\subset [0,L]\), the solution of the resolvent system (3.1) satisfies

$$\begin{aligned}&\left| q(\xi _1){\mathcal {I}}_\psi (\xi _1)+q(\xi _2){\mathcal {I}}_\psi (\xi _2)-\int \limits _{\xi _1}^{\xi _2}{\mathcal {I}}_\psi \; {\text {d}}s\right| \le c\Vert U\Vert \Vert F\Vert +c\Vert S\Vert \Vert M\Vert , \\&\left| q(\xi _1){\mathcal {I}}(\xi _1)+q(\xi _2){\mathcal {I}}(\xi _2)-\int \limits _{\xi _1}^{\xi _2}{\mathcal {L}}(s)\,{\text {d}}s \right| \le c\Vert U\Vert \Vert F\Vert . \end{aligned}$$

Proof

Multiplying Eq. (3.1)\(_4\) by \(q{\overline{M}}\), we get

$$\begin{aligned} -\frac{1}{2}\int \limits _{\xi _1}^{\xi _2} q\frac{{\text {d}}}{{\text {d}}x}\left[ \rho _2b|\Psi |^2+|M|^2 \right] =\rho _2b\int \limits _{\xi _1}^{\xi _2} q\overline{f_{3,x}}\Psi \;{\text {d}}x-\int \limits _{\xi _1}^{\xi _2} q{\overline{M}}S\;{\text {d}}x+\rho _2\int \limits _{\xi _1}^{\xi _2} q{\overline{M}}f_4\;{\text {d}}x. \end{aligned}$$
(3.4)

Similarly, multiplying Eq. (3.1)\(_2\) by \(q{\overline{S}},\) we get

$$\begin{aligned} -\frac{1}{2}\int \limits _{\xi _1}^{\xi _2}q\frac{{\text {d}}}{{\text {d}}x}\left[ \rho _1k|\Phi |^2+|S|^2 \right]= & {} \rho _1\int \limits _{\xi _1}^{\xi _2}qf_1{\overline{S}}\;{\text {d}}x+\rho _1\int \limits _{\xi _1}^{\xi _2}qk\Phi \overline{f_{1,x}}\;{\text {d}}x+\rho _1k\int \limits _{\xi _1}^{\xi _2}q\Phi {\overline{\Psi }}\;{\text {d}}x \nonumber \\&-\rho _1k\int \limits _{\xi _1}^{\xi _2}q\Phi \overline{f_{3}}\;{\text {d}}x. \end{aligned}$$
(3.5)

So, our result follows. \(\square \)

Let us denote by Q and R any functions satisfying

$$\begin{aligned} |Q|\le C\Vert U\Vert _{{\mathcal {H}}}\Vert F\Vert _{{\mathcal {H}}}+\frac{c}{|\lambda |}\Vert U\Vert _{{\mathcal {H}}}^2,\quad |R|\le C\Vert U\Vert _{{\mathcal {H}}}\Vert F\Vert _{{\mathcal {H}}}+\Vert F\Vert _{{\mathcal {H}}}^2. \end{aligned}$$

Lemma 3.2

Under the above conditions, we have

$$\begin{aligned} \int \limits _0^L |\psi _x|^2\;{\text {d}}x\le & {} c\int \limits _0^L |\Psi |^2\;{\text {d}}x+\frac{c}{|\lambda |^2}\Vert \Phi \Vert _{L^2}^2+\frac{c}{|\lambda |^2}\Vert F\Vert _{\mathcal {H}}^2+\frac{c}{|\lambda |^2}\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}. \end{aligned}$$
(3.6)
$$\begin{aligned} \int \limits _0^L |\Phi |^2\;{\text {d}}x\le & {} c\int \limits _0^L |S|^2\;{\text {d}}x+\frac{c}{|\lambda |^2}\Vert U\Vert _{\mathcal {H}}^2+\frac{c}{|\lambda |^2}\Vert F\Vert _{\mathcal {H}}^2+\frac{c}{|\lambda |^2}\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}. \end{aligned}$$
(3.7)

Proof

Multiplying (3.1)\(_4\) by \({\overline{\psi }}\) and using integration by parts, we get

$$\begin{aligned} \int \limits _0^L b|\psi _x|^2\;{\text {d}}x=\int \limits _0^L \rho _2|\Psi |^2\;{\text {d}}x+k\int \limits _0^L\varphi {\overline{\psi }}_x\;{\text {d}}x-k\int \limits _0^L|\psi |^2\;{\text {d}}x-\sigma \int \limits _0^L\theta _x{\overline{\psi }}\;{\text {d}}x+\int \limits _0^Lf_4{\overline{\psi }}\;{\text {d}}x. \end{aligned}$$

from which we get (3.6). Similarly, multiplying (3.1)\(_2\) by \({\overline{\varphi }}\), we get the other inequality. \(\square \)

Lemma 3.3

Under the above conditions, we have that for any \([\xi _1,\xi _2]\subset [0,L]\) it follows

$$\begin{aligned} \left| i\lambda \int \limits _{\xi _1}^{\xi _2}{\Psi }\;{\text {d}}s\right| \le c\Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}+\frac{c}{|\lambda |^{1/2}}\Vert U\Vert _{{\mathcal {H}}}+c\Vert F\Vert _{{\mathcal {H}}}. \end{aligned}$$

Proof

Integrating (3.1)\(_4\) over \((\xi _1,\xi _2)\), we get

$$\begin{aligned} \left| i\lambda \rho _2\int \limits _{\xi _1}^{\xi _2}\Psi \;{\text {d}}s\right|= & {} \left| b[\psi _{x}(\xi _2)-\psi _{x}(\xi _1)]-k[\varphi (\xi _2)-\varphi (\xi _1)]-\int \limits _{\xi _1}^{\xi _2}\big ( k\psi +\sigma \theta _x\big )\;{\text {d}}x+\rho _2\int \limits _{\xi _1}^{\xi _2} f_4\;{\text {d}}x\right| ,\\\le & {} c[{\mathcal {I}}_\psi (\xi _1)\!+\!{\mathcal {I}}_\psi (\xi _2)]^{1/2}+\frac{c}{|\lambda |}[{\mathcal {I}}(\xi _1)\!+\!{\mathcal {I}}(\xi _2)]^{1/2}+\left| \frac{k}{i\lambda }\int \limits _{\xi _1}^{\xi _2} \!{\Psi }\;{\text {d}}s\right| +c\Vert \theta _x\Vert _{L^2}+c\Vert F\Vert _{{\mathcal {H}}}. \end{aligned}$$

So, using Lemma 3.2 and (3.4) we get for \(\lambda \) large enough that

$$\begin{aligned} {\mathcal {I}}_\psi (\xi _1)\!+\!{\mathcal {I}}_\psi (\xi _2)\le c\Vert S\Vert _{L^2} \Vert \psi _x\Vert _{L^2}+c\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}. \end{aligned}$$

From Lemma 3.2, we get

$$\begin{aligned} {\mathcal {I}}_\psi (\xi _1)\!+\!{\mathcal {I}}_\psi (\xi _2)\le c\Vert S\Vert _{L^2} \Vert \Psi \Vert _{L^2}+\frac{c}{|\lambda |}\Vert S\Vert _{L^2}^2+c\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}, \end{aligned}$$
(3.8)

from which our conclusion follows. \(\square \)

Theorem 3.2

The semigroup \(e^{{\mathcal {A}}t}\) associated with system (2.1)–(2.2) verifying (1.3) is exponentially stable provided (1.6) holds. If \(\chi _0\ne 0\), the semigroup decays polynomially to zero, that is,

$$\begin{aligned} \Vert T(t)U_0\Vert \le \frac{c}{t^{\frac{1}{2}}}\Vert U_0\Vert _{D({\mathcal {A}})}, \end{aligned}$$

where c is independent of \(\epsilon \).

Proof

Multiplying (3.1)\(_5\) by \(\displaystyle \int \limits _x^L{\overline{\Psi }}\;{\text {d}}s\), we get

$$\begin{aligned} \underbrace{-\rho _3\int \limits _0^L \theta \int \limits _x^L\overline{i\lambda \Psi }\;{\text {d}}s\;{\text {d}}x}_{:=J_1}+\underbrace{\tau \theta _{x}(0)\int \limits _0^L{\overline{\Psi }}\;{\text {d}}s}_{:=J_2}-\tau \int \limits _0^L \theta _{x}{\overline{\Psi }}\;{\text {d}}x +\sigma \int \limits _0^L |\Psi |^2\;{\text {d}}x= \rho _3\int \limits _0^Lf_5\int \limits _x^L{\overline{\Psi }}\;{\text {d}}s\;{\text {d}}x. \end{aligned}$$

Therefore, we have

$$\begin{aligned} \sigma \int \limits _0^L |\Psi |^2\;{\text {d}}x=J_1-J_2+\tau \int \limits _0^L \theta _{x}{\overline{\Psi }}\;{\text {d}}x +\rho _3\int \limits _0^Lf_5\int \limits _x^L{\overline{\Psi }}\;{\text {d}}s\;{\text {d}}x, \end{aligned}$$

from where we get

$$\begin{aligned} \int \limits _0^L |\Psi |^2\;{\text {d}}x\le c|J_1|+c|J_2|+c\Vert U\Vert _{\mathcal {H}} \Vert F\Vert _{\mathcal {H}}. \end{aligned}$$
(3.9)

From Lemma 3.3,

$$\begin{aligned} \left| \rho _2\int \limits _0^x\overline{i\lambda \Psi }\;{\text {d}}s\right| \le c\Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}+\frac{c}{|\lambda |^{1/2}}\Vert U\Vert _{{\mathcal {H}}}+c\Vert F\Vert _{{\mathcal {H}}}. \end{aligned}$$
(3.10)

Using the Gagliardo–Nirenberg inequality and the relations in (3.1), we get

$$\begin{aligned} \left| \theta _{x}(0)\right|\le & {} c\Vert \theta _{x}\Vert ^{1/2}\Vert \theta _{xx}\Vert ^{1/2}\le c\Vert \theta _{x}\Vert _{L^2}^{1/2}| \lambda |^{1/2}\left[ \Vert \theta \Vert _{L^2}^{1/2}+\Vert \psi _x\Vert _{L^2}^{1/2}+\frac{1}{|\lambda |^{1/2}}\Vert F\Vert _{{\mathcal {H}}}^{1/2}\right] .\\\le & {} c|\lambda |^{1/2}\left[ \sqrt{R}+\Vert \theta _{x}\Vert _{L^2}^{1/2}\Vert \psi _x\Vert _{L^2}^{1/2}\right] . \end{aligned}$$

Using Lemma 3.3, the above inequality and recalling the definition of \(J_2\) we get

$$\begin{aligned} \left| J_2\right|\le & {} \frac{c}{|\lambda |^{1/2}}\left[ \sqrt{R}+\Vert \theta _{x}\Vert _{L^2}^{1/2}\Vert \psi _x\Vert _{L^2}^{1/2}\right] (\Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}+\frac{1}{|\lambda |^{1/2}}\Vert U\Vert _{{\mathcal {H}}}+\Vert F\Vert _{{\mathcal {H}}}),\nonumber \\\le & {} \frac{c}{|\lambda |^{1/2}}\sqrt{R}\Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}+\underbrace{\frac{c}{|\lambda |^{1/2}}\Vert \theta _{x}\Vert _{L^2}^{1/2} (\Vert \psi _x\Vert _{L^2}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2})\Vert U\Vert _{{\mathcal {H}}}^{1/2}}_{:=J_3}+\frac{c}{|\lambda |}\Vert U\Vert _{{\mathcal {H}}}\sqrt{R}\nonumber \\&+\frac{c}{|\lambda |}\Vert U\Vert _{{\mathcal {H}}}\Vert \theta _{x}\Vert _{L^2}^{1/2}\Vert \psi _x\Vert _{L^2}^{1/2}+\frac{c}{|\lambda |^{1/2}}R. \end{aligned}$$
(3.11)

To estimate \(J_3\), we use (3.6)

$$\begin{aligned} J_3\le & {} \frac{c}{|\lambda |^{1/2}}\Vert \theta _{x}\Vert _{L^2}^{1/2}\Vert \Psi \Vert _{L^2}\Vert U\Vert _{{\mathcal {H}}}^{1/2}+\frac{c}{|\lambda |}\Vert \theta _{x}\Vert _{L^2}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}\Vert U\Vert _{{\mathcal {H}}} +\frac{c}{|\lambda |}R^{3/4}\Vert U\Vert _{{\mathcal {H}}}^{1/2},\\\le & {} \frac{\epsilon _0}{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2+\epsilon _0 \Vert \Psi \Vert _{L^2}^2+R. \end{aligned}$$

Using the same procedure as above in (3.11), we get

$$\begin{aligned} J_2\le & {} \frac{\epsilon _0}{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2+\epsilon _0 \Vert \Psi \Vert _{L^2}^2+R.\end{aligned}$$

On the other hand, using interpolation we get

$$\begin{aligned} \Vert \theta \Vert _{L^2}\le c\Vert \theta \Vert _{-1}^{1/2}\Vert \theta _x\Vert ^{1/2}_{L^2}\le & {} \frac{c}{|\lambda |^{1/2}}\left[ \Vert \theta _x\Vert _{L^2}^{1/2}+\Vert \Psi \Vert _{L^2}^{1/2}+\Vert F\Vert ^{1/2}_{{\mathcal {H}}}\right] \Vert \theta _x\Vert _{L^2}^{1/2},\nonumber \\\le & {} \frac{c}{|\lambda |^{1/2}}\left[ \Vert \theta _x\Vert _{L^2}+\Vert \Psi \Vert _{L^2}^{1/2}\Vert \theta _x\Vert _{L^2}^{1/2}+\Vert F\Vert _{{\mathcal {H}}}^{1/2}\Vert \theta _x\Vert _{L^2}^{1/2}\right] ,\nonumber \\\le & {} \frac{c}{|\lambda |^{1/2}}\left[ c_\epsilon \sqrt{R}+\Vert \Psi \Vert _{L^2}^{1/2}\Vert \theta _x\Vert _{L^2}^{1/2}\right] . \end{aligned}$$
(3.12)

Recalling the definition of \(J_1\) and using (3.10) and (3.12), we get

$$\begin{aligned} \left| J_1\right|\le & {} \frac{c}{|\lambda |^{1/2}}\left[ c_\epsilon \sqrt{R}+\Vert \Psi \Vert _{L^2}^{1/2}\Vert \theta _x\Vert _{L^2}^{1/2}\right] \left[ \Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}^{1/2}+\frac{1}{|\lambda |^{1/2}}\Vert U\Vert _{{\mathcal {H}}}+\Vert F\Vert _{{\mathcal {H}}}\right] ,\nonumber \\\le & {} \frac{\epsilon }{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2+\epsilon \Vert \Psi \Vert _{L^2}^2+R, \end{aligned}$$
(3.13)

where we used inequalities of the type

$$\begin{aligned} \frac{c}{|\lambda |^{1/2}}\Vert \theta _x\Vert _{L^2}^{1/2}\Vert U\Vert _{{\mathcal {H}}}^{1/2}\Vert \Psi \Vert _{L^2}\le & {} \frac{c_\epsilon }{|\lambda |}\Vert \theta _x\Vert _{L^2}\Vert U\Vert _{{\mathcal {H}}}+\epsilon \Vert \Psi \Vert _{L^2}^{2} \le c_\epsilon \Vert \theta _x\Vert _{L^2}^2 +\frac{\epsilon }{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2+\epsilon \Vert \Psi \Vert _{L^2}^{2}. \end{aligned}$$

Recalling (3.2) and substituting \(J_1\) and \(J_2\) into (3.9) yields

$$\begin{aligned} \int \limits _0^L |\Psi |^2\;{\text {d}}x\le & {} c_{\epsilon _0} \Vert U\Vert _{{\mathcal {H}}}\Vert F\Vert _{{\mathcal {H}}} +\frac{\epsilon _0}{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2. \end{aligned}$$
(3.14)

Using Lemma 3.2, we arrive at

$$\begin{aligned} \int \limits _0^L {\mathcal {I}}_\psi \;{\text {d}}x\le & {} c_{\epsilon _0} \Vert U\Vert _{{\mathcal {H}}}\Vert F\Vert _{{\mathcal {H}}} +\frac{c}{|\lambda |^2}\Vert U\Vert _{{\mathcal {H}}}^2+R. \end{aligned}$$
(3.15)

Multiplying (3.1)\(_4\) by \({\overline{S}},\) we get

$$\begin{aligned} i\lambda \rho _2\int \limits _0^L \Psi {\overline{S}}\;{\text {d}}x-b\int \limits _0^L\psi _{xx}{\overline{S}}\;{\text {d}}x+\int \limits _0^L|{S}|^2\;{\text {d}}x+\sigma \int \limits _0^L\theta _x{\overline{S}}=\rho _2\int \limits _0^L f_4{\overline{S}}\;{\text {d}}x. \end{aligned}$$

Recalling the definition of S and using Eq. (3.1)\(_2\) to rewrite \(G_0\), we get

$$\begin{aligned} \int \limits _0^L|{S}|^2\;{\text {d}}x= & {} -\sigma \int \limits _0^L\theta _x{\overline{S}}\;{\text {d}}x+\rho _2k\int \limits _0^L \Psi [\overline{i\lambda (\varphi _x+\psi )}]\;{\text {d}}x+\left. b\psi _{x}{\overline{S}}\right| _{0}^L -b\overbrace{\int \limits _0^L\psi _{x}{\overline{S}}_x\;{\text {d}}x}^{:=G_0}+R,\nonumber \\= & {} -\sigma \int \limits _0^L\theta _x{\overline{S}}\;{\text {d}}x+\rho _2k\int \limits _0^L \Psi \overline{\Phi _x}+|\Psi |^2\;{\text {d}}x+b\rho _1\int \limits _0^L\Psi _x{\overline{\Phi }}\;{\text {d}}x+\left. b\psi _{x}{\overline{S}}\right| _{0}^L+R, \end{aligned}$$
(3.16)

where R is such that \(|R|\le C\Vert U\Vert \Vert F\Vert \). Therefore, we get

$$\begin{aligned} \int \limits _0^L|{S}|^2\;{\text {d}}x\le & {} \rho _2k\int \limits _0^L |\Psi |^2\;{\text {d}}x+\underbrace{(\rho _2k-\rho _1b)}_{:=\chi _0}\int \limits _0^L \Psi \overline{\Phi _x}\;{\text {d}}x+\left. (\rho _1b\Psi {\overline{\Phi }}+b\psi _{x}{\overline{S}})\right| _{0}^L. \end{aligned}$$
(3.17)

Using the observability inequalities (Lemma 3.1), we get

$$\begin{aligned} \left| \left. (\rho _1b\Psi {\overline{\Phi }}+b\psi _{x}{\overline{S}})\right| _{0}^L\right|\le & {} c\left( {\mathcal {I}}_\psi (0)+{\mathcal {I}}_\psi (L)\right) ^{1/2}\left( {\mathcal {I}}(0)+{\mathcal {I}}(L)\right) ^{1/2},\nonumber \\\le & {} \frac{c}{\delta }\left( {\mathcal {I}}_\psi (0)+{\mathcal {I}}_\psi (L)\right) +\delta \Vert U\Vert _{{\mathcal {H}}}^2+c\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}, \end{aligned}$$
(3.18)

hence (3.14) implies

$$\begin{aligned} {\mathcal {I}}_\psi (0)+{\mathcal {I}}_\psi (L)\le \delta \Vert U\Vert _{{\mathcal {H}}}^2 +c\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}} \end{aligned}$$

for \(\epsilon =\delta ^2.\) Substituting the above expression into (3.18) and using (3.14), we get

$$\begin{aligned} \int \limits _0^L|{S}|^2\;{\text {d}}x\le & {} c|\chi _0|\int \limits _0^L |\lambda ||\Psi \overline{\varphi _x}|\;{\text {d}}x+\delta \Vert U\Vert _{\mathcal {H}}^2+R. \end{aligned}$$
(3.19)

Lemma 3.2 then implies

$$\begin{aligned} \int \limits _0^L{\mathcal {I}}_\varphi \;{\text {d}}x\le & {} c|\chi _0|\int \limits _0^L |\lambda ||\Psi \overline{\varphi _x}|\;{\text {d}}x+\delta \Vert U\Vert _{\mathcal {H}}^2+R. \end{aligned}$$
(3.20)

Note that

$$\begin{aligned} c\chi _0\int \limits _0^L |\lambda ||\Psi \overline{\varphi _x}|\;{\text {d}}x\le c\chi _0^2|\lambda |^2\int \limits _0^L |\Psi |^2\;{\text {d}}x+\frac{1}{2} \Vert U\Vert _{\mathcal {H}}^2. \end{aligned}$$

From (3.14) and (3.2), we get

$$\begin{aligned} \Vert U\Vert ^2_{\mathcal {H}}=\int \limits _0^L{\mathcal {I}}_\varphi +{\mathcal {I}}_\psi +\rho _3|\theta |^2\;{\text {d}}x+\epsilon |v|^2+\epsilon |V|^2\le & {} c\chi _0^2|\lambda |^2\int \limits _0^L |\Psi |^2\;{\text {d}}x+\delta \Vert U\Vert _{\mathcal {H}}^2+\frac{1}{2} \Vert U\Vert _{\mathcal {H}}^2 \end{aligned}$$

If \(\chi _0=0\), the exponential decay holds. Let us suppose that \(\chi _0\ne 0\). Using (3.14), we have

$$\begin{aligned} \Vert U\Vert ^2_{\mathcal {H}}\le & {} c_{\epsilon _0} |\lambda |^2\Vert U\Vert _{\mathcal {H}}\Vert F\Vert _{\mathcal {H}}+(\epsilon _0+\delta )\Vert U\Vert _{\mathcal {H}}^2+\frac{1}{2} \Vert U\Vert _{\mathcal {H}}^2. \end{aligned}$$

So we have

$$\begin{aligned} \Vert U\Vert ^2_{{\mathcal {H}}}\le & {} C_\delta |\lambda |^{4}\Vert F\Vert ^2_{{\mathcal {H}}}, \end{aligned}$$

therefore, from Theorem 3.1, the polynomial decay holds. \(\square \)

4 The semilinear problem

Here, we prove the well-posedness of the abstract semilinear problem and we show, under suitable conditions, that the solution decays exponentially or polynomially to zero. Let \( {\mathcal {F}} \) be a locally Lipschitz function defined over a Hilbert space \( {\mathcal {H}} \). We assume that for any ball \( B_R=\{W\in {\mathcal {H}};\;\; \Vert W\Vert _{{\mathcal {H}}}\le R\} \) there exists a globally Lipschitz function \( \widetilde{{\mathcal {F}}_R}\) such that

$$\begin{aligned} {\mathcal {F}}(0)=0,\quad {\mathcal {F}}(U)=\widetilde{{\mathcal {F}}_R}(U),\quad \forall U\in B_R. \end{aligned}$$
(4.1)

Additionally, we assume that there exists a positive constant \(\kappa _0\) such that

$$\begin{aligned} \int \limits _0^t\big (\widetilde{{\mathcal {F}}_R}(U(s)),U(s)\big )_{{\mathcal {H}}}\;{\text {d}}s \le \kappa _0\Vert U(0)\Vert _{{\mathcal {H}}}^2,\qquad \forall U\in C([0,T];{\mathcal {H}}). \end{aligned}$$
(4.2)

Under these conditions, we have the following result.

Theorem 4.1

Let \(\{T(t)\}_{t\ge 0}\) be a \(C_0\) semigroup of contractions, exponentially or polynomially stable, with infinitesimal generator \({\mathbb {A}}\) over the phase space \({\mathcal {H}}\). Let \({\mathcal {F}}\) be locally Lipschitz on \({\mathcal {H}}\), satisfying conditions (4.1) and (4.2). Then, there exists a global solution to

$$\begin{aligned} U_{t}-{\mathbb {A}}U={\mathcal {F}}(U),\quad U(0)=U_0\in {\mathcal {H}}, \end{aligned}$$
(4.3)

that decays exponentially or polynomially, respectively.

Proof

By hypothesis, in the exponential case there exist positive constants \(c_0\) and \(\gamma \) such that \( \Vert T(t)\Vert \le c_0e^{-\gamma t}, \) and \(\widetilde{{\mathcal {F}}_R}\) is globally Lipschitz with Lipschitz constant \(K_0\), verifying conditions (4.1) and (4.2). Let us consider the following space:

$$\begin{aligned} E_{\mu }=\left\{ V\in L^\infty (0,\infty ;{\mathcal {H}});\;\; t\mapsto e^{\mu t}\Vert V(t)\Vert _{{\mathcal {H}}}\in L^\infty (0,\infty )\right\} . \end{aligned}$$

Using standard fixed point arguments, we can show that there exists only one global solution to

$$\begin{aligned} U_{t}^R-{\mathbb {A}}U^R=\widetilde{{\mathcal {F}}_R}(U^R),\quad U^R(0)=U_0\in {\mathcal {H}}. \end{aligned}$$
(4.4)

Multiplying the above equation by \(U^R\), we get that

$$\begin{aligned} \frac{1}{2} \frac{{\text {d}}}{{\text {d}}t}\Vert U^R(t)\Vert _{{\mathcal {H}}}^2-({\mathbb {A}}U^R, U^R)_{{\mathcal {H}}}=(\widetilde{{\mathcal {F}}_R}(U^R),U^R)_{{\mathcal {H}}}. \end{aligned}$$

Since the semigroup is contractive, its infinitesimal generator is dissipative, therefore

$$\begin{aligned} \Vert U^R(t)\Vert _{{\mathcal {H}}}^2\le \Vert U_0\Vert _{{\mathcal {H}}}^2+2\int \limits _0^t(\widetilde{{\mathcal {F}}_R}(U^R),U^R)_{{\mathcal {H}}}\;{\text {d}}t. \end{aligned}$$

Using (4.2), we get

$$\begin{aligned} \Vert U^R(t)\Vert _{{\mathcal {H}}}^2\le (1+2\kappa _0)\Vert U_0\Vert _{{\mathcal {H}}}^2. \end{aligned}$$

Note that for \(R^2> (1+2\kappa _0)\Vert U_0\Vert _{{\mathcal {H}}}^2\), we have that

$$\begin{aligned} \widetilde{{\mathcal {F}}_R}(V)={\mathcal {F}}(V),\quad \forall \;\; \Vert V\Vert _{{\mathcal {H}}}\le R. \end{aligned}$$

In particular, we have

$$\begin{aligned} \widetilde{{\mathcal {F}}_R}(U^R(t))={\mathcal {F}}(U^R(t)). \end{aligned}$$

This means that \(U^R\) is also a solution of system (4.3), and by uniqueness we conclude that \(U^R=U\). To show the exponential stability for system (4.3), it is enough to show the exponential decay for system (4.4). To do that, we use a fixed point argument with the map

$$\begin{aligned} {\mathcal {T}}(V)=T(t)U_0+\int \limits _0^{t}T(t-s)\widetilde{{\mathcal {F}}_R}(V(s))\;{\text {d}}s. \end{aligned}$$

Note that \(E_{\gamma -\delta }\) is invariant under \({\mathcal {T}}\) for \(\delta \) small (\(\gamma >\delta \)). In fact, for any \(V\in E_{\gamma -\delta }\) we have

$$\begin{aligned} \Vert {\mathcal {T}}(V)\Vert _{{\mathcal {H}}}\le & {} \Vert U_0\Vert _{{\mathcal {H}}}e^{-\gamma t}+\int \limits _0^t\Vert \widetilde{{\mathcal {F}}_R}(V(s))\Vert _{{\mathcal {H}}}e^{-\gamma (t-s)}\;{\text {d}}s,\\\le & {} \Vert U_0\Vert _{{\mathcal {H}}}e^{-\gamma t}+K_0\int \limits _0^t\Vert V(s)\Vert _{{\mathcal {H}}}e^{-\gamma (t-s)}\;{\text {d}}s,\\\le & {} \Vert U_0\Vert _{{\mathcal {H}}}e^{-\gamma t}+K_0e^{-\gamma t}\int \limits _0^te^{\delta s}\;{\text {d}}s \sup _{s\in [0,t]}\left\{ e^{(\gamma -\delta ) s}\Vert V(s)\Vert _{{\mathcal {H}}}\right\} ,\\\le & {} \Vert U_0\Vert _{{\mathcal {H}}}e^{-\gamma t}+\frac{K_0C}{\delta }e^{-(\gamma -\delta )t}. \end{aligned}$$

Hence, \({\mathcal {T}}(V)\in E_{\gamma -\delta }\). Using standard arguments, we show that \({\mathcal {T}}^n\) satisfies

$$\begin{aligned} \Vert {\mathcal {T}}^n(W_1)-{\mathcal {T}}^n(W_2)\Vert \le \frac{(k_1t)^n}{n!}\Vert W_1-W_2\Vert _{{\mathcal {H}}}. \end{aligned}$$

Therefore, we have a unique fixed point satisfying

$$\begin{aligned} {\mathcal {T}}^n(U)=U=T(t)U_0+\int \limits _0^{t}T(t-s)\widetilde{{\mathcal {F}}_R}(U(s))\;{\text {d}}s. \end{aligned}$$

That is, \(U\) is a solution of (4.4), and since \(E_{\gamma -\delta }\) is invariant under \({\mathcal {T}}\), the solution decays exponentially. To show the polynomial stability, we consider the space

$$\begin{aligned} E_{p}=\left\{ V\in L^\infty (0,\infty ;{\mathcal {H}});\;\; t\mapsto (1+t)^p\Vert V(t)\Vert _{{\mathcal {H}}}\in L^\infty (0,\infty )\right\} . \end{aligned}$$

To show the invariance, we use

$$\begin{aligned} \sup _{t>0}(1+t)^p\int \limits _0^t (1+t-s)^{-p}(1+s)^{-p}\;{\text {d}}s<C \end{aligned}$$

and use the same reasoning as above. \(\square \)

Let us consider the semilinear system

$$\begin{aligned} \begin{array}{lll} &{} \displaystyle \rho _1\varphi ^{\epsilon }_{tt}-k(\varphi ^{\epsilon }_x+\psi ^{\epsilon })_x =0,\ \ \text{ in }\ (0,L)\times (0,\infty )\\ &{}\displaystyle \rho _2\psi ^{\epsilon }_{tt}-b\psi ^{\epsilon }_{xx}+k(\varphi ^{\epsilon }_x+\psi ^{\epsilon }) +\sigma \theta ^{\epsilon }_x=0,\ \ \text{ in }\ (0,L)\times (0,\infty )\\ &{}\displaystyle \rho _3\theta ^{\epsilon }_t-\tau \theta ^{\epsilon }_{xx}+\sigma \psi ^{\epsilon }_{xt}=0,\ \ \text{ in }\ (0,L)\times (0,\infty )\\ &{} \epsilon v_{tt}^{\epsilon } +\epsilon v_t^{\epsilon } +\epsilon v^{\epsilon }+S^\epsilon (L,t)=-\frac{1}{\epsilon }\Big [(v^{\epsilon }-g_2)^{+}-(g_1-v^{\epsilon })^{+}\Big ]. \end{array} \end{aligned}$$
(4.5)

The above system can be written as

$$\begin{aligned} U_t-{\mathcal {A}}U={\mathcal {F}}(U), \qquad U(0) = U_0, \end{aligned}$$

where \({\mathcal {A}}\) is given by (2.3) and \({\mathcal {F}}\) is given by

$$\begin{aligned} {\mathcal {F}}(U)=(0,0,0,0,0,0,f(v))^\top , \quad f(v)=-\frac{1}{\epsilon ^2}\Big [(v-g_2)^{+}-(g_1-v)^{+}\Big ]. \end{aligned}$$
(4.6)

Note that \({\mathcal {F}}\) is a Lipschitz function verifying hypotheses (4.1)–(4.2). In fact, \({\mathcal {F}}(0)=0\). Moreover,

$$\begin{aligned} \int \limits _0^t\big ({{\mathcal {F}}}(U(s)),U(s)\big )_{{\mathcal {H}}}\;{\text {d}}s= & {} -\int \limits _0^t\frac{1}{\epsilon ^2}\Big [(v-g_2)^{+}-(g_1-v)^{+}\Big ]v_t \;{\text {d}}s,\\= & {} -\frac{1}{2\epsilon ^2}\int \limits _0^t\frac{{\text {d}}}{{\text {d}}t}\Big [|(v-g_2)^{+}|^2+|(g_1-v)^{+}|^2\Big ] \;{\text {d}}s,\\\le & {} \frac{1}{2\epsilon ^2}\Big [|(v_0-g_2)^{+}|^2+|(g_1-v_0)^{+}|^2\Big ]. \end{aligned}$$
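For concreteness, the force \(f(v)\) in (4.6) can be evaluated as in the following C sketch; the function names and calling convention are ours and purely illustrative, not the implementation used in Sect. 5.

```c
/* Minimal sketch (illustrative names, not the authors' code) of the
   penalized contact force f(v) from (4.6):
   f(v) = -(1/eps^2) [ (v - g2)^+ - (g1 - v)^+ ].                        */
static double pos_part(double x) { return x > 0.0 ? x : 0.0; }

static double contact_force(double v, double g1, double g2, double eps)
{
    return -(pos_part(v - g2) - pos_part(g1 - v)) / (eps * eps);
}
```

The force vanishes while \(g_1\le v\le g_2\) and pushes the tip back towards the admissible interval otherwise, which is the normal compliance behaviour replacing (1.5) for \(\epsilon >0\).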

Theorem 4.2

The nonlinear semigroup defined by system (4.5) is exponentially stable, provided \(\chi _0=0\). Otherwise the solution decays polynomially as established in Theorem 3.2.

Proof

It is a direct consequence of Theorem 4.1.

Let us introduce the functionals

$$\begin{aligned} {\mathcal {I}}(x,t)= & {} \rho _2b|\psi _t(x,t)|^2 +|M(x,t)|^2+\rho _1k|\varphi _t(x,t)|^2 +|S(x,t)|^2, \\ {\mathcal {L}}(t)= & {} \int \limits _0^L \rho _2q_x|\psi _t|^2 +q_x|M|^2+\rho _1q_x|\varphi _t|^2 +q_x|S|^2 \;{\text {d}}x- \int \limits _0^L q \rho _1 k\varphi _t{\overline{\psi _t}}- qS{\overline{M}}\;{\text {d}}x, \end{aligned}$$

where q is as in (3.3); hence, there exist positive constants \(C_0\) and \(C_1\) such that

$$\begin{aligned} C_0\int \limits _0^L {\mathcal {I}}(x,t)\;{\text {d}}x\le {\mathcal {L}}(t)\le C_1\int \limits _0^L {\mathcal {I}}(x,t)\;{\text {d}}x. \end{aligned}$$
(4.7)

Under the above conditions, we establish the observability inequalities for the evolution system. \(\square \)

Lemma 4.1

The solution of system (4.5) satisfies

$$\begin{aligned} \left| \int \limits _0^t{\mathcal {I}}(L,s)\;{\text {d}}s-\int \limits _0^t{\mathcal {L}}(s)\;{\text {d}}s\right| \le cE(0), \\ \left| \int \limits _0^t{\mathcal {I}}(0,s)\;{\text {d}}s-\int \limits _0^t{\mathcal {L}}(s)\;{\text {d}}s\right| \le cE(0). \end{aligned}$$

Proof

Multiply Eq. (4.5)\(_1\) by \(q{\overline{S}}\) and Eq. (4.5)\(_2\) by \(q{\overline{M}}\), sum up, perform integration by parts and use the same approach as in the proof of Lemma 3.1. To obtain the second inequality, we use \(q_0\), given by (3.3), instead of q. \(\square \)

Theorem 4.3

For any initial data \( (\varphi _0,\varphi _1,\psi _0,\psi _1,\theta _0)\in {\mathcal {H}}\), there exists a weak solution to Signorini’s problem (1.1)–(1.5), which decays as established in Theorem 4.2.

Proof

From Theorem 4.1, there exists only one solution to system (4.5) verifying

$$\begin{aligned} E(t,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon })+\tau \int \limits _0^t\int \limits _0^L|\theta _x^{\epsilon }|^2{\text {d}}x{\text {d}}s\le E(0,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon }), \end{aligned}$$
(4.8)

where

$$\begin{aligned} 2E(t)= \int \limits _0^L\bigg [\rho _1|\varphi _t|^2+\rho _2| \psi _t|^2+k|\varphi _x + \psi |^2 +b|\psi _x|^2+\rho _3| \theta |^2 \bigg ]{\text {d}}x+\frac{1}{\epsilon }{\mathcal {N}}(t)+ \epsilon |v_t|^2+ \epsilon |v|^2. \end{aligned}$$

and

$$\begin{aligned} {\mathcal {N}}(t):=|(\varphi (L,t)-g_2)^{+}|^2+|(g_1-\varphi (L,t))^{+}|^2 \end{aligned}$$

In particular, from Lemma 4.1 we have that

$$\begin{aligned} {\mathcal {I}}_\epsilon (L,t) \quad \text{ uniformly } \text{ bounded } \text{ in } \quad L^2(0,T),\quad \forall \epsilon >0, \end{aligned}$$
(4.9)

which means that the first-order energy is uniformly bounded for any \(\epsilon >0\). Standard procedures imply that the solution of system (4.5) converges in the distributional sense to a solution of system (1.1). It remains to show that conditions (1.4)–(1.5) hold. Using Theorem 4.1, we get that \(\varphi _t^\epsilon (L,t)\) and \(S^\epsilon (L,t)\) are bounded in \(L^2(0,T)\) for any \(\epsilon >0\), and so is \(v_{tt}^\epsilon \). Using (4.5)\(_4\), we get

$$\begin{aligned} \int \limits _0^T\Big [\epsilon v_{tt} +\epsilon v_t +\epsilon v+S^\epsilon (L,t)\big ][u-v]\, {\text {d}}t=-\frac{1}{\epsilon }\int \limits _0^T\Big [(v-g_2)^{+}-(g_1-v)^{+}\Big ][u-v]\, {\text {d}}t, \end{aligned}$$

for any \(u\in L^2(0,T;{\mathcal {K}})\cap H^1(0,T;L^2(0,L))\), where \({\mathcal {K}}=\{w\in H^1(0,L),\;\; g_1\le w(L)\le g_2\}.\) It is not difficult to see that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\int \limits _0^T\Big [\epsilon v_{tt}^\epsilon +\epsilon v_t^\epsilon +\epsilon v^\epsilon \big ][u-v^\epsilon ]\, {\text {d}}t=0. \end{aligned}$$

In fact, from (4.5)\(_4\), \(\epsilon v_{tt}^\epsilon \) is bounded in \(L^2(0,T)\) by a constant depending on \(\epsilon \), and from (4.9), \(v_{t}^\epsilon \) is also uniformly bounded in \(L^2(0,T)\). Therefore, \(v_{t}^\epsilon \) is a continuous function, uniformly bounded in \(L^\infty (0,T)\). Integrating by parts, we get

$$\begin{aligned} \int \limits _0^T\epsilon v_{tt}^\epsilon [u(t)-v^\epsilon ]\, {\text {d}}t= \epsilon \left. v_{t}^\epsilon [u(t)-v^\epsilon ]\right| _0^T-\int \limits _0^T\epsilon v_{t}^\epsilon [u_t(t)-v_t^\epsilon ]\, {\text {d}}t\quad \rightarrow \quad 0. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{\epsilon \rightarrow 0}\int \limits _0^TS^\epsilon (L,t)[u-v]\, {\text {d}}t= & {} \lim _{\epsilon \rightarrow 0}\int \limits _0^T-\frac{1}{\epsilon }\Big [(v-g_2)^{+} -(g_1-v)^{+}\Big ][u-v]\, {\text {d}}t. \end{aligned}$$
(4.10)

Note that

$$\begin{aligned} \int \limits _0^T(v-g_2)^{+}[u(t)-v(t)]\, {\text {d}}t= & {} \int \limits _0^T(v-g_2)^{+}[u(t)-g_2]\;{\text {d}}t-\int \limits _0^T(v-g_2)^{+}(v-g_2)\;{\text {d}}t,\\= & {} \int \limits _0^T(v-g_2)^{+}[u(t)-g_2]\;{\text {d}}t-\int \limits _0^T(v-g_2)^{+}(v-g_2)^+\;{\text {d}}t\le 0, \end{aligned}$$

for any \(g_1\le u(L,t)\le g_2\). Similarly, we get

$$\begin{aligned} -\int \limits _0^T(g_1-v)^{+}[u(t)-v(t)]\, {\text {d}}t\le & {} 0. \end{aligned}$$

Therefore, from the last two inequalities we arrive at

$$\begin{aligned} \int \limits _0^T\frac{1}{\epsilon }\Big [(v-g_2)^{+}-(g_1-v)^{+}\Big ][u(t)-v(t)]\, {\text {d}}t\le 0, \quad \forall \epsilon >0, \end{aligned}$$

for any \(u\in H^1(0,T;L^2(0,L))\) such that \(g_1\le u(L,t)\le g_2\). Letting \(\epsilon \rightarrow 0\) and recalling that \(v=\varphi (L,t)\) we get

$$\begin{aligned} \int \limits _0^TS(L,t)[u(L,t)-\varphi (L,t)]\, {\text {d}}t \ge 0,\quad \forall u\in L^2(0,T;{\mathcal {K}}). \end{aligned}$$

From this relation, we get (1.5). The proof of the existence is now complete. To show the asymptotic behaviour, we use Theorem 4.2 to get

$$\begin{aligned} E(t,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon })\le CE(0,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon })e^{-\gamma t}. \end{aligned}$$

So, using the lower semicontinuity of the norm and noting that \({\mathcal {N}}(0)=0,\) we obtain

$$\begin{aligned}E(t,\varphi ,\psi ,\theta )\le \liminf _{\epsilon \rightarrow 0^{+}}E(t,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon })\le C \Big \{\lim _{\epsilon \rightarrow 0^{+}}E(0,\varphi ^{\epsilon },\psi ^{\epsilon },\theta ^{\epsilon })\Big \}e^{-\gamma t}\le CE(0,\varphi ,\psi ,\theta )e^{-\gamma t}\end{aligned}$$

where C is a positive constant independent of the parameter \(\epsilon .\) Thus, we conclude the exponential stability of the Signorini problem. Similarly, we get the polynomial stability. \(\square \)

Remark 4.1

We believe that the polynomial rate of decay is optimal in the sense that it is the same rate obtained in [10, 11] where the authors show the optimality.

Remark 4.2

The uniqueness of the solution to Signorini’s problem (1.1)–(1.4) remains an open question.

The same approach can be used to show the existence of solutions to the semilinear problem

$$\begin{aligned} \begin{array}{lll} \rho _1\varphi _{tt}-k(\varphi _x+\psi )_x+\mu _1\varphi |\varphi |^{\alpha }=0,\\ \rho _2\psi _{tt}-b\psi _{xx}+k(\varphi _x+\psi )+\sigma \theta _x+\mu _2\psi |\psi |^{\beta }=0,\\ \rho _3\theta _t-\tau \theta _{xx}+\sigma \psi _{xt}=0. \end{array} \end{aligned}$$
(4.11)

Theorem 4.4

Under the same hypotheses as in Theorem 4.3, there is at least one solution to Signorini’s problem for (4.11) satisfying (1.2)–(1.5).

Proof

As in Theorem 4.3, we consider the function

$$\begin{aligned} {\mathcal {F}}(U)=(0,-\mu _1\varphi ^{\epsilon }|\varphi ^{\epsilon }|^{\alpha },0,-\mu _2\psi ^{\epsilon }|\psi ^{\epsilon }|^{\beta },0,0,f(v))^\top , \end{aligned}$$

where f is given by (4.6). Note that \( {\mathcal {F}}(0)=0\). Applying the mean value theorem to \(g(s)=|s|^\alpha s\), we obtain the inequality

$$\begin{aligned} \Big |s|s|^{\alpha }-r|r|^{\alpha }\Big |\le (1+\alpha )\big (|s|^\alpha +|r|^\alpha \big )|s-r|. \end{aligned}$$

Taking the norm in \({\mathcal {H}}\), and since \(\varphi ^{\epsilon }_i\) and \(\psi ^{\epsilon }_i\) belong to \(H^{1}(0,L)\subset L^{\infty }(0,L),\) we get

$$\begin{aligned} \Vert {\mathcal {F}}(U_1)-{\mathcal {F}}(U_2)\Vert _{{\mathcal {H}}}\le C\Vert U_1-U_2\Vert _{{\mathcal {H}}}. \end{aligned}$$

Therefore, \({\mathcal {F}}\) is locally Lipschitz. Since

$$\begin{aligned} ({\mathcal {F}}U,U)_{{\mathcal {H}}}= -\frac{{\text {d}}}{{\text {d}}t}\int \limits _0^L\left[ \frac{\mu _1}{\alpha +2}|\varphi ^{\epsilon }|^{\alpha +2} +\frac{\mu _2}{\beta +2}|\psi ^{\epsilon }|^{\beta +2}\right] {\text {d}}x \end{aligned}$$

then

$$\begin{aligned} \int \limits _0^t({\mathcal {F}}U,U)_{{\mathcal {H}}}\;{\text {d}}s\le \int \limits _0^L\left[ \frac{\mu _1}{\alpha +2}|\varphi ^{\epsilon }(0)|^{\alpha +2} +\frac{\mu _2}{\beta +2}|\psi ^{\epsilon }(0)|^{\beta +2}\right] {\text {d}}x. \end{aligned}$$

Thus, there exists a positive constant \(c_0\) such that

$$\begin{aligned} \int \limits _0^t({\mathcal {F}}U,U)_{{\mathcal {H}}}\;{\text {d}}s\le c_0\Vert U(0)\Vert _{{\mathcal {H}}}^2. \end{aligned}$$

Note that for this function, there exist the cut-off functions

$$\begin{aligned} f_{1,R_2}=\left\{ \begin{array}{lll} \mu _1 x|x|^{\alpha }&{}\text{ if }&{} |x|\le R_2,\\ \mu _1 x|R_2|^{\alpha }&{}\text{ if }&{} |x|\ge R_2, \end{array} \right. \qquad f_{2,R_2}=\left\{ \begin{array}{lll} \mu _2 x|x|^{\beta }&{}\text{ if }&{} |x|\le R_2,\\ \mu _2 x|R_2|^{\beta }&{}\text{ if }&{} |x|\ge R_2. \end{array} \right. \end{aligned}$$

It is not difficult to check that

$$\begin{aligned} \widetilde{{\mathcal {F}}}_{R_2}=(0,f_{1,R_2},0,f_{2,R_2},0,0,0)^\top \end{aligned}$$

is globally Lipschitz. Using Theorem 4.1, our conclusion follows. \(\square \)

5 Numerical approach

In this section, we consider the numerical solution of the penalized problem (4.5). We use the finite element method in space over (0, L) and finite differences in time.

5.1 Algorithms and numerical experiment

Let \(X_h\) be a partition of the interval \(\Omega =(0,L),\) that is, \( X_h=\{0=x_0<x_1<\cdots <x_N=L\},\ \ \ \Omega _{j+1}=(x_j,x_{j+1}), \) where \(N_e\) is the number of elements of the partition. We consider the finite-dimensional space \( S^h_1=\{u\in C(0,L); u\Big |_{\Omega _e}\in P_1(\Omega _e)\},\ \) where \(P_1\) is the set of linear polynomials over \(\Omega _e,\) and \( U^h=\{u^h\in S^h_1; u^h(0)=0\}\ \text{ and }\ V^h=\{v^h\in S^h_1; v^h(L)=0\} \). We use a representation of the functions \(\varphi ^h\) and \(\psi ^h\) as in [16], so we have

$$\begin{aligned} u^h(t,x)=\sum _{i=1}^{2N}d_i(t)\phi _i(x),\ \ v^h(t,x)=\sum _{i=1}^{N}\theta _i(t)\omega _i(x) \end{aligned}$$

where \(\phi _i(x),\ i = 1,\ldots , 2N,\) and \(\omega _i(x),\ i = 1,\ldots , N,\) are the global vector interpolation functions. So, we obtain the following dynamical problem in \({\mathbb {R}}^N\times {\mathbb {R}}^{2N}\):

$$\begin{aligned}&\mathbf{M }_1{\dot{\theta }}(t)+\mathbf{K }_1\theta (t)+\mathbf{C }^\top _1\dot{\mathbf{d }}(t)=\mathbf{F }_1(t),\\&\mathbf{M }_2\ddot{\mathbf{d }}(t)+\mathbf{K }_2(\mathbf{d }(t))+\mathbf{C }_1\theta (t)=\mathbf{F }_2(t), \\&\qquad \theta (0)=\theta _0,\quad \mathbf{d }(0)=\mathbf{d }_{0}\quad \text{ and }\quad \dot{\mathbf{d }}(0)=\mathbf{d }_1 \end{aligned}$$

where \(\mathbf{M }_1\) is the thermal capacity matrix, \(\mathbf{K }_1\) the conductivity matrix, \(\mathbf{C }_1\) the coupling matrix and \(\mathbf{F }_1\) the heat source vector; \(\mathbf{M }_2\) is the consistent mass matrix, \(\mathbf{K }_2(\mathbf{d }(t))\) the vector of consistent nodal elastic stiffness at time t, and \(\mathbf{F }_2(t)\) the vector of consistent nodal generalized applied forces at time t. Furthermore, \(\theta _0,\) \(\mathbf{d }_0\) and \(\mathbf{d }_1\) are the nodal initial temperature, displacement and velocity, respectively.
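As an illustration of how the thermal block of this semi-discrete system can be assembled with linear elements on a uniform mesh, consider the following C sketch; the function name, the dense storage and the uniform mesh size \(h\) are our assumptions, and the Dirichlet conditions \(\theta (0)=\theta (L)=0\) would be imposed afterwards by removing the corresponding rows and columns.

```c
/* Sketch (our illustration, not the authors' code): assembly of the thermal
   capacity matrix M1 = rho3*(w_i, w_j) and the conductivity matrix
   K1 = tau*(w_i', w_j') for linear elements on a uniform mesh with Ne
   elements of length h.  The matrices are dense (Ne+1) x (Ne+1), for
   clarity only. */
void assemble_thermal(int Ne, double h, double rho3, double tau,
                      double *M1, double *K1)
{
    int n = Ne + 1;
    double me[2][2] = {{h/3.0, h/6.0}, {h/6.0, h/3.0}};   /* element mass      */
    double ke[2][2] = {{1.0/h, -1.0/h}, {-1.0/h, 1.0/h}}; /* element stiffness */

    for (int i = 0; i < n*n; ++i) { M1[i] = 0.0; K1[i] = 0.0; }

    for (int e = 0; e < Ne; ++e)            /* element e joins nodes e and e+1 */
        for (int a = 0; a < 2; ++a)
            for (int b = 0; b < 2; ++b) {
                M1[(e + a)*n + (e + b)] += rho3 * me[a][b];
                K1[(e + a)*n + (e + b)] += tau  * ke[a][b];
            }
}
```

The elastic block \(\mathbf{M }_2\), \(\mathbf{K }_2\) and the coupling matrix \(\mathbf{C }_1\) would be assembled in the same element-by-element fashion, using the interpolation of [16] for \(\varphi ^h\) and \(\psi ^h\).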

Fig. 2 Beam’s oscillations at the end \(x=L\): \(\varphi ^{h}(L,t)\), and the asymptotic behaviour of the energy at time 1 s. In this case, we performed the experiment for \(\chi _0=0\) (this equality is equivalent to \(G=\frac{E}{\kappa }\)) and for \(\chi _0\ne 0\), respectively

To solve the above system, we introduce a partition P of the time domain [0, T] into M intervals of length \(\Delta t\) such that \(0=t_0<t_1<\cdots <t_M=T,\) with \(t_{n+1}-t_n=\Delta t\), and we use the well-known generalized trapezoidal rule and Newmark’s method (see [17, 18]). In our problem, we have a nonlinear system. Thus, our numerical scheme becomes

$$\begin{aligned}&\mathbf{M }_1{\dot{\theta }}_{n+1}+\mathbf{K }_1\theta _{n+1}+\mathbf{C }^\top _1\dot{\mathbf{d }}_{n+1}=\mathbf{F }^{n+1}_1\\&\mathbf{M }_2\ddot{\mathbf{d }}_{n+1}+\mathbf{K }_2\mathbf{d }_{n+1} +\mathbf{C }_1\theta _{n+1}=\mathbf{F }^{n+1}_2+\tilde{\mathbf{K }_2}(\mathbf{d }_{n+1})\\&\qquad \theta _{n+1}=\theta _n+\Delta t{\dot{\theta }}_{n+\alpha },\\&\qquad {\dot{\theta }}_{n+\alpha }=(1-\alpha ){\dot{\theta }}_n+\alpha {\dot{\theta }}_{n+1},\\&\qquad \mathbf{d }_{n+1}=\mathbf{d }_{n}+\Delta t\dot{\mathbf{d }}_{n}+\frac{\Delta t^2}{2}[(1-2\beta )\ddot{\mathbf{d }}_{n}+ 2\beta \ddot{\mathbf{d }}_{n+1}]\\&\qquad \dot{\mathbf{d }}_{n+1}=\dot{\mathbf{d }}_n+\Delta t[(1-\gamma )\ddot{\mathbf{d }}_n+\gamma \ddot{\mathbf{d }}_{n+1}] \end{aligned}$$

where

$$\begin{aligned}\tilde{\mathbf{K }}_2(\mathbf{d }_{n+1})=\displaystyle \frac{1}{\epsilon }(0,0,\ldots ,(d_{2N-1}(t)-g_2)^{+}-(g_1-d_{2N-1}(t))^{+},0)^\top \end{aligned}$$

and \(\beta ,\ \gamma \) and \(\alpha \) are parameters that govern the stability and accuracy of the methods.
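The update part of this scheme can be written compactly as in the C sketch below; it is only an illustration (names and signature are ours) and it assumes that the coupled algebraic system has already been solved for the new accelerations \(\ddot{\mathbf{d }}_{n+1}\) and temperature rates \({\dot{\theta }}_{n+1}\), including the evaluation of \(\tilde{\mathbf{K }}_2(\mathbf{d }_{n+1})\).

```c
/* A minimal sketch (our illustration, not the authors' code) of the update
   formulas of the scheme above, given the new accelerations a_new and the
   new temperature rates thdot_new; the implicit solve is omitted. */
void newmark_trapezoidal_update(int nd, int nth, double dt,
                                double alpha, double beta, double gamma,
                                double *d, double *v, double *a,
                                const double *a_new,
                                double *th, double *thdot,
                                const double *thdot_new)
{
    for (int i = 0; i < nd; ++i) {
        /* d_{n+1} = d_n + dt*v_n + dt^2/2*((1-2*beta)*a_n + 2*beta*a_{n+1}) */
        d[i] += dt*v[i] + 0.5*dt*dt*((1.0 - 2.0*beta)*a[i] + 2.0*beta*a_new[i]);
        /* v_{n+1} = v_n + dt*((1-gamma)*a_n + gamma*a_{n+1}) */
        v[i] += dt*((1.0 - gamma)*a[i] + gamma*a_new[i]);
        a[i]  = a_new[i];
    }
    for (int j = 0; j < nth; ++j) {
        /* theta_{n+1} = theta_n + dt*((1-alpha)*thdot_n + alpha*thdot_{n+1}) */
        th[j]   += dt*((1.0 - alpha)*thdot[j] + alpha*thdot_new[j]);
        thdot[j] = thdot_new[j];
    }
}
```

With \(\beta =\frac{1}{4},\ \gamma =\frac{1}{2}\) and \(\alpha =\frac{1}{2}\), as used in the experiments below, this is the implicit, second-order average acceleration variant of Newmark combined with the Crank–Nicolson rule for the temperature.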

Remark 5.1

A typical numerical difficulty for the Timoshenko system is shear locking. Numerical remedies have been proposed in the literature; we indicate the classical references by Arnold [19], Hughes et al. [20] and Prathap and Bhashyam [21].

Remark 5.2

To get the computational results, we use a code implemented in the C language. The graphics were produced using gnuplot.

5.1.1 Numerical experiment

To verify the asymptotic behaviour of the numerical solutions, we take the algorithm parameters \(\beta =\frac{1}{4},\ \gamma =\frac{1}{2}\) and \(\alpha =\frac{1}{2}.\) In these experiments, we consider the following initial conditions:

$$\begin{aligned} \varphi (x,0)=0,\ \psi (x,0)=0,\ \psi _t(x,0)=0\ \text{ and }\ \theta (x,0)=x^2-2x^3+x^4. \end{aligned}$$

Also, we take a finite element mesh with \(h=0.00125\) and \(\Delta t= 10^{-6}\) s.

Experiment We consider a rectangular beam with \(L= 1.0\) m, thickness 0.1 m, width 0.1 m, \(E=69\times 10^{9}\) N/\(\text{ m}^2\), \(\rho =2700\) kg/\(\text{ m}^3\), \(\nu =0.3\) (Poisson ratio), \(\tau =42\) W/m K and \(\varphi _t(x,0)=1-\cos (\frac{2\pi }{L} x).\) The penalization parameter is \(\epsilon =10^{-9}\) and \(g_2=-g_1=0.001\) m (Fig. 2).
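A small helper for reproducing the model coefficients \(\rho _1=\rho A\), \(\rho _2=\rho I\), \(k=\kappa GA\) and \(b=EI\) from these data is sketched below; the shear factor \(\kappa =5/6\) and the shear modulus \(G=E/(2(1+\nu ))\) are our assumptions, since the text does not list them explicitly.

```c
#include <stdio.h>

/* Illustrative helper (our assumptions, not the authors' code): derive the
   coefficients of (1.1) from the data of the Experiment paragraph. */
int main(void)
{
    double width = 0.1, thick = 0.1;               /* cross section [m]      */
    double E = 69e9, rho = 2700.0, nu = 0.3;       /* material data          */
    double kappa = 5.0/6.0;                        /* assumed shear factor   */
    double A = width*thick;                        /* cross-sectional area   */
    double I = width*thick*thick*thick/12.0;       /* second moment of area  */
    double G = E/(2.0*(1.0 + nu));                 /* assumed shear modulus  */

    double rho1 = rho*A, rho2 = rho*I, k = kappa*G*A, b = E*I;
    printf("rho1 = %g, rho2 = %g, k = %g, b = %g, chi0 = %g\n",
           rho1, rho2, k, b, k/rho1 - b/rho2);
    return 0;
}
```

With these assumed values \(\chi _0=\kappa G/\rho -E/\rho \ne 0\); for the \(\chi _0=0\) run one instead enforces \(G=\frac{E}{\kappa }\), as indicated in the caption of Fig. 2.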