1 Introduction

In this paper, we investigate the thermoelastic Timoshenko system (1.11), in which a viscoelastic damping acts on the transverse displacement in the shear force equation and a thermoelastic dissipation is effective on the shear force. The result obtained in this paper is general and optimal in the sense that it agrees with the decay rate of the relaxation function h (see the conditions on h in Sect. 2). We demonstrate this by giving some examples (see Sect. 4.1).

In 1921, Timoshenko [1] introduced a model that has been widely used to describe the vibration of a beam when the transverse shear strain is significant. Combining the evolution equations

$$\begin{aligned} \rho _1u_{tt}-S_x=0,\ \ \rho _2v_{tt}-M_x+S=0, \end{aligned}$$
(1.1)

and the constitutive equations,

$$\begin{aligned} S=k(u_x+v),\ \ M=bv_x, \end{aligned}$$
(1.2)

(see [2, 3] for detailed derivation), the following coupled hyperbolic system was derived

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x =0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v)=0,\\ \end{aligned} \right. \end{aligned}$$
(1.3)

where \(u=u(x,t)\) represents the transverse displacement and \(v=v(x,t)\) the rotation angle of the center of mass of a beam element. The positive parameters \( \rho _1, \rho _2, k\) and b denote the mass density, the moment of mass inertia, the shear coefficient and the flexural rigidity, respectively. System (1.3), coupled with different initial conditions, boundary conditions and various damping mechanisms, has been extensively studied in the literature; see [4,5,6,7] and the references therein.
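To make the conservative model (1.3) concrete, the following minimal finite-difference sketch (our own illustration, not a scheme from the cited works) evolves the system with homogeneous Dirichlet conditions for both u and v and arbitrary sample parameters; a symplectic-Euler time step keeps the natural energy \(\tfrac{1}{2}(\rho _1\Vert u_t\Vert ^2+\rho _2\Vert v_t\Vert ^2+b\Vert v_x\Vert ^2+k\Vert u_x+v\Vert ^2)\) approximately constant, as expected for the undamped system.

```python
import math

# Explicit finite-difference sketch of the undamped Timoshenko system (1.3)
# on (0, 1). Parameters and boundary conditions are sample choices of ours
# (full Dirichlet here; the paper itself uses the mixed conditions (1.12)).
rho1, rho2, k, b = 1.0, 1.0, 1.0, 1.0
n = 100                      # interior grid points
dx = 1.0 / (n + 1)
dt = 0.2 * dx                # well within the CFL restriction

x = [(i + 1) * dx for i in range(n)]
u = [math.sin(math.pi * xi) for xi in x]   # initial transverse displacement
v = [0.0] * n                              # initial rotation angle
ut = [0.0] * n
vt = [0.0] * n

def dxx(w, i):               # second difference, w = 0 at x = 0, 1
    wl = w[i - 1] if i > 0 else 0.0
    wr = w[i + 1] if i < n - 1 else 0.0
    return (wl - 2 * w[i] + wr) / dx**2

def dx1(w, i):               # centered first difference
    wl = w[i - 1] if i > 0 else 0.0
    wr = w[i + 1] if i < n - 1 else 0.0
    return (wr - wl) / (2 * dx)

def energy():
    # (1/2)(rho1||u_t||^2 + rho2||v_t||^2 + b||v_x||^2 + k||u_x + v||^2)
    return sum(0.5 * (rho1 * ut[i]**2 + rho2 * vt[i]**2
                      + b * dx1(v, i)**2 + k * (dx1(u, i) + v[i])**2) * dx
               for i in range(n))

E0 = energy()
for _ in range(2000):        # symplectic Euler: kick velocities, then drift
    for i in range(n):
        ut[i] += dt / rho1 * k * (dxx(u, i) + dx1(v, i))
        vt[i] += dt / rho2 * (b * dxx(v, i) - k * (dx1(u, i) + v[i]))
    for i in range(n):
        u[i] += dt * ut[i]
        v[i] += dt * vt[i]

assert abs(energy() - E0) / E0 < 0.05   # energy approximately conserved
```

Adding a viscoelastic or thermal damping term, as in (1.11), turns this conserved quantity into a decreasing one, which is precisely the content of Lemma 3.1.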

When a viscoelastic law acts on the rotation angle v, (1.3) takes the form

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x =0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v)+\int _0^tg(t-s)v_{xx}(x,s)\hbox {d}s=0.\\ \end{aligned} \right. \end{aligned}$$
(1.4)

This system has been discussed by many researchers, and several stability results have been established (see [4, 6, 8,9,10,11]).

Concerning thermoelastic Timoshenko systems, when the thermoelastic dissipation is effective on the bending moment, we have the evolution equations

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x =0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v)+\gamma \theta +\int _0^tg(t-s)v_{xx}(x,s)\hbox {d}s=0,\\&\rho _3\theta _t+q_x+\gamma v_{xt}=0, \end{aligned} \right. \end{aligned}$$
(1.5)

where \(\theta =\theta (x,t)\) is the temperature difference and \(q=-\beta \theta _x\) represents the heat flux. The positive constants \(\rho _3, \beta \) and \(\gamma \) are the capacity, the diffusivity and the adhesive stiffness, respectively. Rivera and Racke [12] studied (1.5) and proved that the system is exponentially stable if and only if the speeds of wave propagation are equal, that is,

$$\begin{aligned} \frac{k}{\rho _1}= \frac{b}{\rho _2}. \end{aligned}$$
(1.6)

When (1.6) does not hold, they established only a polynomial decay result.

Also, when the thermoelastic dissipation is effective on the shear force equation, we have the evolution equations

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x+\gamma \theta _x =0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v)-\gamma \theta +\int _0^tg(t-s)v_{xx}(x,s)\hbox {d}s=0,\\&\rho _3\theta _t+q_x+\gamma (u_x+v)_t=0. \end{aligned} \right. \end{aligned}$$
(1.7)

Apalara [13] discussed (1.7) and proved a general decay result for the solution energy without imposing the equal-wave-speed condition (1.6). Almeida Júnior et al. [14] considered system (1.7) in the absence of the memory term and showed that it is exponentially stable if and only if (1.6) holds; otherwise, only polynomial decay is guaranteed. Interested readers may also see [4, 8, 9, 13, 15,16,17,18,19,20] for more results on thermoelastic Timoshenko systems.

Guesmia et al. [11] considered (1.5) with an infinite memory acting on the rotation angle and established some general decay results depending on the speeds of wave propagation. In their work, they raised a natural question: is it possible for a viscoelastic dissipation to act on the transverse displacement u in the shear force equation instead of on the rotation angle v, as in (1.4)? This question was later answered by Guesmia and Messaoudi [10].

Recently, Alves et al. [21] derived the Timoshenko system (1.8), with a viscoelastic dissipation mechanism acting on the transverse displacement u in the shear force:

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x +k\int _0^t h(t-s)(u_x +v)_x(x,s)\hbox {d}s=0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v) - k\int _0^t h(t-s)(u_x +v)(x,s)\hbox {d}s =0, \end{aligned} \right. \end{aligned}$$
(1.8)

coupled with Dirichlet–Neumann boundary conditions, and proved a uniform decay result under condition (1.6). In the case of nonequal wave speeds, i.e., \(\dfrac{k}{\rho _1}\ne \dfrac{b}{\rho _2}\), they established only a polynomial decay rate (even when the relaxation function decays exponentially).

It is therefore natural to investigate the stability of (1.8) under the additional effect of a thermoelastic dissipation. This is our goal in this paper.

Now, instead of (1.1) and (1.2), we consider the constitutive equations

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} S=k\bigg ((u_x+v)-\displaystyle \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\bigg )-\gamma \theta , \\ \\ &{} M=bv_x,\ \ \ \ q=-\beta \theta _x, \end{array}\right. } \end{aligned}$$
(1.9)

and the evolution equations

$$\begin{aligned} {\left\{ \begin{array}{ll} &{} \rho _1 u_{tt}-S_x=0, \qquad \rho _2v_{tt}-M_x+S=0 \\ \\ &{} \rho _3\theta _t+q_x+\gamma \big (u_x+v\big )_t=0. \end{array}\right. } \end{aligned}$$
(1.10)

Combining (1.9) and (1.10), we obtain the thermoelastic Timoshenko system

$$\begin{aligned} \left\{ \begin{aligned}&\rho _1 u_{tt} -k(u_x +v)_x + \gamma \theta _x + k\int _0^t h(t-s)(u_x +v)_x(x,s)\hbox {d}s =0,\\&\rho _2v_{tt}-bv_{xx}+ k(u_x +v) -\gamma \theta - k\int _0^t h(t-s)(u_x +v)(x,s)\hbox {d}s =0,\\&\rho _3\theta _t-\beta \theta _{xx}+\gamma \big (u_x+v\big )_t=0, \end{aligned} \right. \end{aligned}$$
(1.11)

where \(x\in (0,1)\) and \(t>0\). The relaxation function h will be specified later. We endow system (1.11) with the following mixed boundary conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}u(0,t)=u_x(1,t)=0,\ t\ge 0,\\ &{}v_x(0,t)=v(1,t)=0, \ t\ge 0,\\ &{}\theta _x(0,t)=\theta (1,t)=0,\ t\ge 0, \end{array}\right. } \end{aligned}$$
(1.12)

and initial data

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}u(x,0)=u_0(x),\ v(x,0)=v_0(x),\ \theta (x,0)=\theta _0(x), \ x\in [0,1],\\ &{} u_t(x,0)=u_1(x),\ v_t(x,0)=v_1(x),\ \ x\in [0,1]. \end{array}\right. } \end{aligned}$$
(1.13)

The remainder of this work is organized as follows. In Sect. 2, we present a few basic tools and state the assumptions on the relaxation function h. In Sect. 3, we prove some key technical lemmas needed to obtain our main result. In Sect. 4, we state and prove our general and optimal decay result.

2 Assumptions and Space Setting

Throughout this work, \( C\) and \(c_i\) denote generic positive constants that may change from line to line or even within the same line. We denote by \(\Vert \cdot \Vert \) the usual norm in \(L^2(0,1).\) We make the following assumptions on the function h:

  1. (A1)

    \( h:[0,+\infty )\longrightarrow (0,+\infty )\) is a nonincreasing \(C^1\)-function such that

    $$\begin{aligned} h(0)> 0, \,\,\,\,\, 1 - \displaystyle \int _0^{\infty }h(\tau )\hbox {d}\tau = l_0> 0. \end{aligned}$$
    (2.1)
  2. (A2)

    There exists a \(C^1\)-function \( G:[0,+\infty )\rightarrow [0,+\infty )\) which is either linear or a strictly convex \(C^2\)-function on \((0,r]\), with \( r \le h(t_0)\) for some \(t_0>0\) and \(G(0)=G'(0)=0\), and there exists a positive nonincreasing differentiable function \( \xi :[0,+\infty )\rightarrow (0,+\infty )\), such that

    $$\begin{aligned} h'(t)\le -\xi (t)G\left( h(t)\right) ,\,\,\,\,t\ge 0. \end{aligned}$$
    (2.2)
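To fix ideas, two standard kernels satisfying \((A_1)\)–\((A_2)\) are the following (our illustrations, in the spirit of the examples announced for Sect. 4.1): an exponentially decaying kernel, for which G is linear and \(\xi \) is constant, and a polynomially decaying kernel, for which G is strictly convex:

```latex
\begin{aligned}
&\text{(i)}\ \ h(t)=a\mathrm{e}^{-bt},\ 0<a<b:\quad
  \int_0^{\infty}h(\tau)\,\mathrm{d}\tau=\frac{a}{b}<1,\qquad
  h'(t)=-b\,h(t),\ \text{so (2.2) holds with } \xi(t)=b,\ G(s)=s;\\
&\text{(ii)}\ \ h(t)=a(1+t)^{-\nu},\ \nu>1,\ 0<a<\nu-1:\quad
  h'(t)=-\nu a^{-1/\nu}\,h^{\frac{\nu+1}{\nu}}(t),\\
&\qquad\quad \text{so (2.2) holds with }
  \xi(t)=\nu a^{-1/\nu},\ G(s)=s^{\frac{\nu+1}{\nu}},
  \ \text{strictly convex on } (0,r] \text{ with } G(0)=G'(0)=0.
\end{aligned}
```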

Remark 2.1

As mentioned in [22], we have the following:

  1. 1.

    Conditions \((A_1)\) and \((A_2)\) imply that G is a strictly increasing convex \( C^2\)-function on (0, r], with \(G(0)=G'(0)=0;\) thus, G admits an extension, say

    $$\begin{aligned} {\bar{G}}:[0,+\infty )\rightarrow (0,+\infty ) \end{aligned}$$

    which is also a strictly increasing and strictly convex \(C^2\)-function. For instance, for \(s>r\), we may define \({\bar{G}}\) by

    $$\begin{aligned} {\bar{G}}(s)=\frac{G''(r)}{2}s^2 + ( G'(r)-G''(r)r)s+ G(r) - G'(r)r+ \frac{ G''(r)}{2}r^2. \end{aligned}$$
    (2.3)
  2. 2.

    Also, since h is continuous and positive with \(h(0)>0,\) for any \(t_0>0\) and all \(t\ge t_0\) we have

    $$\begin{aligned} \int _0^t h(s)\hbox {d}s\ge \int _0^{t_0} h(s)\hbox {d}s=h_0>0. \end{aligned}$$
    (2.4)
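The explicit extension (2.3) is chosen so that \({\bar{G}}\) matches G up to first order at \(s=r\) and has constant second derivative \(G''(r)\) beyond r. A quick numerical sanity check of this matching (our own illustration, with a sample convex G and a hypothetical helper `make_extension`):

```python
# Sanity check (our illustration, not from the paper) of the quadratic
# extension (2.3): Gbar agrees with G up to first order at s = r.
def make_extension(G, dG, d2G, r):
    """Quadratic extension of G past r, with the coefficients of (2.3)."""
    def Gbar(s):
        return (d2G(r) / 2) * s ** 2 + (dG(r) - d2G(r) * r) * s \
            + G(r) - dG(r) * r + (d2G(r) / 2) * r ** 2
    return Gbar

# Sample strictly convex C^2 function G(s) = s**3, with G(0) = G'(0) = 0.
r = 0.5
G, dG, d2G = (lambda s: s ** 3), (lambda s: 3 * s ** 2), (lambda s: 6 * s)
Gbar = make_extension(G, dG, d2G, r)

eps = 1e-6
assert abs(Gbar(r) - G(r)) < 1e-12                  # value matches at r
fd = (Gbar(r + eps) - Gbar(r - eps)) / (2 * eps)    # exact for a quadratic
assert abs(fd - dG(r)) < 1e-6                       # slope matches at r
```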

For completeness, we introduce the following spaces:

$$\begin{aligned}&L_\star ^{2}(0, 1)=\{w\in L^2(0, 1) \ : \int _{0}^{1}w(s)\hbox {d}s= 0\}, \quad H_\star ^{1}(0, 1)=H^{1}(0, 1)\cap L_\star ^{2}(0, 1),\\&H^{2}_{\star }(0, 1)=\{w \in H^{2}(0, 1) \ : w_x(0)=w_x(1) = 0\}. \end{aligned}$$

Denote by \({{\mathcal {H}}}\) and \({{\mathcal {V}}}\) the following spaces:

$$\begin{aligned} {{{\mathcal {H}}}}:= H_{\star }^{1}(0, 1)\times L_{\star }^{2}(0, 1) \times H_0^1(0, 1)\times L^2(0, 1)\times H_{\star }^{1}(0, 1) \end{aligned}$$

and

$$\begin{aligned} {{{\mathcal {V}}}}:=&\left( H_{\star }^2(0, 1)\cap H_{\star }^1(0, 1)\right) \times H_{\star }^1(0, 1)\times \left( H^2(0, 1)\cap H_{0}^1(0, 1)\right) \\&\times H_{0}^1(0, 1)\times \left( H_{\star }^2(0, 1)\cap H_{\star }^1(0, 1)\right) . \end{aligned}$$

We state, without proof, the following existence and regularity result:

Theorem 2.1

Let \(W = (u, z, v, s, \theta )^T\), with \(z=u_{t}\) and \(s=v_{t}\), and assume that \((A_1)\) holds. Then, for all \(W_0\in {{{\mathcal {H}}}},\) system (1.11)–(1.13) has a unique global (weak) solution

$$\begin{aligned}&u\in C\left( {\mathbb {R}}^{+}; H_{\star }^{1}(0, 1)\right) \cap C^1\left( {\mathbb {R}}^{+}; L_{\star }^{2}(0, 1)\right) , \qquad \theta \in C\left( {\mathbb {R}}^{+}; H_{\star }^{1}(0, 1)\right) \\&v\in C\left( {\mathbb {R}}^{+}; H_{0}^{1}(0, 1)\right) \cap C^1\left( {\mathbb {R}}^{+}; L^{2}(0, 1)\right) . \end{aligned}$$

Moreover, if \(W_0\in {{{\mathcal {V}}}},\) then the solution satisfies,

$$\begin{aligned}&u\in C\left( {\mathbb {R}}^{+}; H_{\star }^2(0, 1)\cap H_{\star }^1(0, 1)\right) \cap C^1\left( {\mathbb {R}}^{+}; H_{\star }^1(0, 1)\right) \cap C^2 \left( {\mathbb {R}}^{+}; L_{\star }^{2}(0, 1)\right) ,\\&v\in C\left( {\mathbb {R}}^{+}; H^2(0, 1)\cap H_{0}^1(0, 1)\right) \cap C^1\left( {\mathbb {R}}^{+}; H_{0}^1(0, 1)\right) \cap C^2\left( {\mathbb {R}}^{+}; L^{2}(0, 1)\right) ,\\&\theta \in C\left( {\mathbb {R}}^{+}; H_{\star }^2(0, 1)\right) \cap C^1\left( {\mathbb {R}}^{+}; H_{\star }^1(0, 1)\right) . \end{aligned}$$

Remark 2.2

This result can be proved using the Faedo–Galerkin method, following the steps in [23].

Lemma 2.1

Let \( w\in L^2\big ([0,\infty );L^2(0,1)\big )\). Then we have

$$\begin{aligned} \int _0^1\left( \int _0^t h(t-s)\big [w(t)-w(s)\big ]\mathrm{d}s\right) ^2 \mathrm{d}x \le (1-l_0)(h\circ w)(t), \end{aligned}$$
(2.5)

where

$$\begin{aligned} (h\circ w)(t)= \int _0^t h(t-s)\Vert w(t)-w(s)\Vert ^2 \mathrm{d}s. \end{aligned}$$

Proof

Using Cauchy–Schwarz inequality, we have

$$\begin{aligned} \begin{aligned}&\int _0^1\left( \int _0^t h(t-s)\big [ w(t)- w(s)\big ]\hbox {d}s\right) ^2 \hbox {d}x \\&\quad =\int _0^1\left( \int _0^t \sqrt{h(t-s)}{\sqrt{h(t-s)}}\big [w(t)- w(s)\big ]\hbox {d}s\right) ^2 \hbox {d}x\\&\quad \le \left( \int _0^{+\infty }h(s) \hbox {d}s\right) \int _0^1\int _0^t h(t-s)|w(t)- w(s)|^2\hbox {d}s \hbox {d}x\\&\quad \le (1-l_0) \int _0^t h(t-s)\Vert w(t)-w(s)\Vert ^2 \hbox {d}s\\&\quad =(1-l_0)\left( h\circ w \right) (t). \end{aligned} \end{aligned}$$
(2.6)

\(\square \)
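The inequality (2.5) can be spot-checked numerically; the following discretized version (our own illustration, with arbitrary sample choices of h and w) compares both sides on a grid:

```python
import math

# Discretized spot-check of (2.5) with sample data (our illustration).
a, b = 0.5, 2.0
h = lambda s: a * math.exp(-b * s)      # sample kernel with int h = a/b < 1
l0 = 1 - a / b                           # as in (2.1)
w = lambda t, x: math.sin(x + t)         # sample w(t, x)

t = 1.0
nx, ns = 400, 400
dx, ds = 1.0 / nx, t / ns
xs = [(i + 0.5) * dx for i in range(nx)]
ss = [(j + 0.5) * ds for j in range(ns)]

# Left-hand side of (2.5): int_0^1 ( int_0^t h(t-s)[w(t)-w(s)] ds )^2 dx.
lhs = sum(sum(h(t - s) * (w(t, x) - w(s, x)) * ds for s in ss) ** 2 * dx
          for x in xs)
# (h o w)(t) = int_0^t h(t-s) ||w(t)-w(s)||^2 ds.
h_circ_w = sum(h(t - s) * sum((w(t, x) - w(s, x)) ** 2 * dx for x in xs) * ds
               for s in ss)

assert lhs <= (1 - l0) * h_circ_w + 1e-9
```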

As in [24], for any \(0<\alpha <1,\) let

$$\begin{aligned} g_\alpha (t)=\alpha h(t)-h'(t)\ \ \ \mathrm{and }\ \ \ A_{\alpha }=\int _0^{+\infty }\frac{h^2(s)}{\alpha h(s)-h'(s)}\hbox {d}s. \end{aligned}$$
(2.7)
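For intuition about the size of \(A_\alpha \): with an exponential kernel (our sample choice, not imposed by the paper) one has \(h'=-bh\), hence \(g_\alpha =(\alpha +b)h\) and \(A_\alpha =a/(b(\alpha +b))\) in closed form, which a simple quadrature confirms:

```python
import math

# A_alpha for the sample exponential kernel h(t) = a*exp(-b*t):
# g_alpha(t) = alpha*h(t) - h'(t) = (alpha + b)*h(t), so
# A_alpha = int_0^inf h^2/g_alpha dt = a / (b*(alpha + b)).
a, b, alpha = 0.5, 2.0, 0.3

def h(t):
    return a * math.exp(-b * t)

def g_alpha(t):
    return (alpha + b) * h(t)

# Midpoint-rule quadrature on [0, T]; the exponentially decaying tail
# beyond T is negligible.
T, n = 20.0, 200_000
dt = T / n
A_num = sum(h((i + 0.5) * dt) ** 2 / g_alpha((i + 0.5) * dt) * dt
            for i in range(n))

A_exact = a / (b * (alpha + b))
assert abs(A_num - A_exact) < 1e-6
```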

We have the following lemma.

Lemma 2.2

Let \( (u, v,\theta )\) be the solution of problem (1.11)–(1.13) and let \(g:=g_\alpha \) be as in (2.7). Then, for any \(0<\alpha <1\) we have

$$\begin{aligned} \int _0^1\left( \int _0^t h(t-s)\left( (u_x +v)(t)-(u_x +v)(s)\right) \mathrm{d}s\right) ^2 \mathrm{d}x \le A_{\alpha }\left( g\circ (u_x +v)\right) (t), \end{aligned}$$
(2.8)

where

$$\begin{aligned} \left( g\circ (u_x +v) \right) (t)=\int _0^t g(t-s)\Vert (u_x +v)(t)- (u_x +v)(s)\Vert _2^2\mathrm{d}s. \end{aligned}$$

Proof

Using Cauchy–Schwarz inequality, we have

$$\begin{aligned} \begin{aligned}&\int _0^1\left( \int _0^t h(t-s)\left( (u_x +v)(t)- (u_x +v)(s)\right) \hbox {d}s\right) ^2 \hbox {d}x \\&\quad =\int _0^1\left( \int _0^t \frac{h(t-s)}{\sqrt{g(t-s)}}\sqrt{g(t-s)}\left( (u_x +v)(t)- (u_x +v)(s)\right) \hbox {d}s\right) ^2 \hbox {d}x\\&\quad \le \left( \int _0^{+\infty }\frac{h^2(s)}{g(s)} \hbox {d}s\right) \int _0^1\int _0^t g(t-s)\left( (u_x +v)(t)- (u_x +v)(s)\right) ^2\hbox {d}s \hbox {d}x\\&\quad =A_{\alpha }\left( g\circ (u_x +v) \right) (t). \end{aligned} \end{aligned}$$
(2.9)

\(\square \)

Lemma 2.3

Let \( (u, v,\theta )\) be the solution of problem (1.11)–(1.13) and let \(g:=g_\alpha \) be as in (2.7). Then, for any \(0<\alpha <1\) we have

  1. (i)
    $$\begin{aligned} \int _0^1\left( \int _0^t h(t-s)(u_x +v)(s)\mathrm{d}s\right) ^2 \mathrm{d}x \le&\ 2(\alpha +h(0))A_\alpha \int _0^1(u_x+v)^2\mathrm{d}x \nonumber \\&+2A_{\alpha }\left( g\circ (u_x +v)\right) (t), \end{aligned}$$
    (2.10)
  2. (ii)
    $$\begin{aligned} \int _0^1\left( \int _0^t h(t-s)(u_x +v)(s)\mathrm{d}s\right) ^2 \hbox {d}x \le&\ 2\int _0^1(u_x+v)^2\mathrm{d}x +2\left( h\circ (u_x +v)\right) (t). \end{aligned}$$
    (2.11)

Proof

We apply the Cauchy–Schwarz inequality.

  1. (i)
$$\begin{aligned}&\int _0^1\left( \int _0^t h(t-s)(u_x +v)(s)\hbox {d}s\right) ^2 \hbox {d}x\\&\quad = \int _0^1\left( \int _0^t \frac{h(t-s)}{\sqrt{g(t-s)}}\sqrt{g(t-s)}\big [(u_x +v)(s)-(u_x +v)(t)+(u_x +v)(t)\big ]\hbox {d}s\right) ^2 \hbox {d}x\\&\quad \le 2A_\alpha \int _0^1\int _0^t g(t-s)\big [\big ((u_x +v)(s)-(u_x +v)(t)\big )^2+(u_x +v)^2(t)\big ]\hbox {d}s\hbox {d}x\\&\quad = 2A_\alpha \left( \int _0^tg(s)\hbox {d}s\right) \int _0^1(u_x+v)^2\hbox {d}x\\&\qquad +2A_\alpha \int _0^1\int _0^t g(t-s)\big [(u_x +v)(t)-(u_x +v)(s)\big ]^2\hbox {d}s\hbox {d}x\\&\quad \le 2(\alpha +h(0))A_\alpha \int _0^1(u_x+v)^2\hbox {d}x +2A_\alpha \left( g\circ (u_x +v)\right) (t). \end{aligned}$$
  2. (ii)
$$\begin{aligned}&\int _0^1\left( \int _0^t h(t-s)(u_x +v)(s)\hbox {d}s\right) ^2 \hbox {d}x\\&\quad = \int _0^1\left( \int _0^t \sqrt{h(t-s)}\sqrt{h(t-s)}\big [(u_x +v)(s)-(u_x +v)(t)+(u_x +v)(t)\big ]\hbox {d}s\right) ^2 \hbox {d}x\\&\quad \le 2\left( \int _0^th(s)\hbox {d}s\right) \int _0^1\int _0^t h(t-s)\big [\big ((u_x +v)(s)-(u_x +v)(t)\big )^2+(u_x +v)^2(t)\big ]\hbox {d}s\hbox {d}x\\&\quad = 2\left( \int _0^th(s)\hbox {d}s\right) ^2\int _0^1(u_x+v)^2\hbox {d}x\\&\qquad +2\left( \int _0^th(s)\hbox {d}s\right) \int _0^1\int _0^t h(t-s)\big [(u_x +v)(t)-(u_x +v)(s)\big ]^2\hbox {d}s\hbox {d}x\\&\quad \le 2\int _0^1(u_x+v)^2\hbox {d}x +2\left( h\circ (u_x +v)\right) (t). \end{aligned}$$

\(\square \)

Lemma 2.4

Let F be a convex function on the closed interval [a, b], and let \(\Omega \) be such that \(|\Omega |\ne 0\). Let \(f:\Omega \rightarrow [a,b]\) and j be integrable functions on \(\Omega \) such that \( j(x)\ge 0\) and \( \int _{\Omega }j(x)\mathrm{d}x=m>0.\) Then, we have the following Jensen inequality

$$\begin{aligned} \frac{1}{m}\int _{\Omega }F(f(y))j(y)\hbox {d}y\ge F\left( \frac{1}{m}\int _{\Omega }f(y)j(y)\hbox {d}y\right) . \end{aligned}$$
(2.12)
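A quick numerical illustration of (2.12) (our own sample choices of F, f and j on \(\Omega =(0,1)\)): for convex F, the weighted average of F(f) dominates F of the weighted average of f.

```python
import math

# Numerical illustration of Jensen's inequality (2.12) with sample data.
F = math.exp                      # convex function
f = lambda x: math.sin(3 * x)     # measurable f with values in [-1, 1]
j = lambda x: 1 + x               # nonnegative integrable weight on (0, 1)

n = 10_000
dx = 1.0 / n
xs = [(i + 0.5) * dx for i in range(n)]

m = sum(j(x) * dx for x in xs)                   # int_Omega j dx > 0
lhs = sum(F(f(x)) * j(x) * dx for x in xs) / m   # (1/m) int F(f) j
rhs = F(sum(f(x) * j(x) * dx for x in xs) / m)   # F((1/m) int f j)

assert lhs >= rhs
```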

3 Essential Lemmas

Lemma 3.1

Let \((u,v,\theta )\) be the solution of problem (1.11)–(1.13). Then the energy functional of system (1.11)–(1.13), defined by

$$\begin{aligned} \begin{aligned} E(t)=&\,\frac{1}{2} \left( \rho _1\Vert u_t\Vert ^2 +\rho _2 \Vert v_t\Vert ^2+b\Vert v_x \Vert ^2 +\rho _3 \Vert \theta \Vert ^2 + k\left( 1-\int _0^t h(s)\mathrm{d}s\right) \Vert u_x+v\Vert ^2\right) \\&+\frac{k}{2} (h\circ (u_x+ v))(t), \end{aligned} \end{aligned}$$
(3.1)

satisfies

$$\begin{aligned} E'(t)=-\beta \Vert \theta _x\Vert ^2-\frac{k}{2}h(t)\Vert u_x+v\Vert ^2 + \frac{k}{2}(h'\circ (u_x+v))(t)\le 0, \ \forall \ t\ge 0.\qquad \end{aligned}$$
(3.2)

Proof

We multiply (1.11)\(_1\) by \(u_t\), (1.11)\(_2\) by \( v_t\) and (1.11)\(_3\) by \(\theta \), then integrate over (0, 1) and make use of the boundary conditions (1.12). Adding the resulting equations yields

$$\begin{aligned} \begin{aligned}&\frac{1}{2}\frac{\hbox {d}}{\hbox {d}t}\int _0^1\big ( \rho _1u_t^2 + \rho _2v_t^2 + bv_x^2 +\rho _3{\theta }^2 + k(u_x+v)^2\big )\hbox {d}x\\&\quad = -\beta \int _0^1\theta _x^2\hbox {d}x + k\int _0^1(u_x+v)_t(t)\int _0^t h(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x. \end{aligned} \end{aligned}$$
(3.3)

Now, we estimate the last integral in (3.3) as follows:

$$\begin{aligned} J=&\, k\int _0^1(u_x+v)_t(t)\int _0^t h(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\ =&\, k\int _0^1(u_x+v)_t(t)\int _0^t h(t-s)\big [(u_x+v)(s)-(u_x+v)(t)+(u_x+v)(t)\big ]\hbox {d}s\hbox {d}x\\ =&\, k\int _0^1(u_x+v)_t(t)\int _0^t h(t-s)(u_x+v)(t)\hbox {d}s\hbox {d}x\\&+\, k\int _0^1\int _0^t h(t-s)\big [(u_x+v)(s)-(u_x+v)(t)\big ](u_x+v)_t(t)\hbox {d}s\hbox {d}x\\ =&\,\frac{k}{2} \left( \int _0^t h(s)\hbox {d}s\right) \frac{\hbox {d}}{\hbox {d}t}\int _0^1(u_x+v)^2\hbox {d}x\\&-\, \frac{k}{2}\int _0^1\int _0^t h(t-s)\frac{\hbox {d}}{\hbox {d}t}\big [(u_x+v)(t)-(u_x+v)(s)\big ]^2\hbox {d}s\hbox {d}x\\ =&\, \frac{k}{2}\frac{\hbox {d}}{\hbox {d}t}\left[ \left( \int _0^t h(s)\hbox {d}s\right) \Vert u_x+v\Vert ^2\right] - \frac{k}{2}h(t)\Vert u_x+v\Vert ^2 - \frac{k}{2}\frac{\hbox {d}}{\hbox {d}t}(h\circ (u_x+v))(t)\\&+\, \frac{k}{2}(h'\circ (u_x+v))(t). \end{aligned}$$

Then, we substitute J into (3.3) and immediately deduce (3.2). \(\square \)

According to (3.2), the energy is decreasing and \(E(t)\le E(0)\) for all \(t\ge 0\).

Lemma 3.2

For any \(\sigma >0\), the functional \(I_1\) defined along the solution of problem (1.11), by

$$\begin{aligned} I_1(t)=\rho _1\int _0^1u_t\int _0^x\theta (y)\mathrm{d}y\mathrm{d}x + \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv(y)\mathrm{d}y\mathrm{d}x - \rho _2\int _0^1vv_t\mathrm{d}x,\nonumber \\ \end{aligned}$$
(3.4)

satisfies

$$\begin{aligned} I'_1(t)\le&-\frac{\rho _1\gamma }{2\rho _3}\Vert u_t\Vert ^2 - \rho _2\Vert v_t\Vert ^2 + \sigma \Vert u_x+v\Vert ^2 +C_\sigma \Vert v_x\Vert ^2 + C_\sigma \Vert \theta _x\Vert ^2\nonumber \\&+CA_\alpha (g\circ (u_x+v))(t),\ \forall t\ge 0. \end{aligned}$$
(3.5)

Proof

From (3.4), we set

$$\begin{aligned} \Psi _1(t) =&\, \rho _1\int _0^1u_t\int _0^x\theta (y)\hbox {d}y\hbox {d}x,\\ \Psi _2(t)=&\, \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv(y)\hbox {d}y\hbox {d}x,\\ \Psi _3(t)=&\, - \rho _2\int _0^1vv_t\hbox {d}x, \end{aligned}$$

thus differentiation of \(I_1(t)\) gives

$$\begin{aligned} I'_1(t)=\Psi '_1(t)+\Psi '_2(t)+\Psi '_3(t). \end{aligned}$$
(3.6)

Now, we evaluate \(\Psi '_1(t), \Psi '_2(t)\ \mathrm{and}\ \Psi '_3(t)\) as follows.

$$\begin{aligned} \Psi '_1(t)=&\rho _1\int _0^1u_{tt}\int _0^x\theta (y)\hbox {d}y\hbox {d}x + \rho _1\int _0^1u_t\int _0^x\theta _t(y)\hbox {d}y\hbox {d}x. \end{aligned}$$

Using (1.11)\(_1\), (1.11)\(_3\), integration by parts and the boundary conditions (1.12), we obtain

$$\begin{aligned} \Psi '_1(t)=&\, -k\int _0^1(u_x+v)\theta \hbox {d}x + \gamma \int _0^1\theta ^2\hbox {d}x + k\int _0^1\theta \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+\, \frac{\beta \rho _1}{\rho _3}\int _0^1u_t\theta _x\hbox {d}x - \frac{\gamma \rho _1}{\rho _3} \int _0^1u_t\int _0^xu_{yt}(y)\hbox {d}y\hbox {d}x\\&-\, \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv_t(y)\hbox {d}y\hbox {d}x\\ =&\, -k\int _0^1(u_x+v)\theta \hbox {d}x + \gamma \int _0^1\theta ^2\hbox {d}x + k\left( \int _0^th(s)\hbox {d}s\right) \int _0^1(u_x+v)\theta \hbox {d}x\\&-\, k\int _0^1\theta \int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x\\&+\, \frac{\beta \rho _1}{\rho _3}\int _0^1u_t\theta _x\hbox {d}x - \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t^2\hbox {d}x - \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv_t(y)\hbox {d}y\hbox {d}x. \end{aligned}$$

Also, a direct differentiation yields

$$\begin{aligned} \Psi '_2(t)=&\frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv_t(y)\hbox {d}y\hbox {d}x + \frac{\gamma \rho _1}{\rho _3}\int _0^1u_{tt}\int _0^xv(y)\hbox {d}y\hbox {d}x. \end{aligned}$$

Using (1.11)\(_1\), integration by parts and the boundary conditions (1.12), we get

$$\begin{aligned} \Psi '_2(t)=&\, \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv_t(y)\hbox {d}y\hbox {d}x - \frac{k\gamma }{\rho _3}\int _0^1(u_x+v)v\hbox {d}x + \frac{\gamma ^2}{\rho _3}\int _0^1\theta v\hbox {d}x\\&+\, \frac{k\gamma }{\rho _3}\int _0^1v\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\ =&\, \frac{\gamma \rho _1}{\rho _3}\int _0^1u_t\int _0^xv_t(y)\hbox {d}y\hbox {d}x - \frac{k\gamma }{\rho _3}\int _0^1(u_x+v)v\hbox {d}x + \frac{\gamma ^2}{\rho _3}\int _0^1\theta v\hbox {d}x\\&-\, \frac{k\gamma }{\rho _3}\int _0^1v\int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x\\&+\, \frac{k\gamma }{\rho _3}\left( \int _0^th(s)\hbox {d}s\right) \int _0^1v(u_x+v)\hbox {d}x. \end{aligned}$$

Similarly, using (1.11)\(_2\), integration by parts and the boundary conditions (1.12), we get

$$\begin{aligned} \Psi '_3(t)=&-\rho _2\int _0^1v_t^2\hbox {d}x - \rho _2\int _0^1vv_{tt}\hbox {d}x\\ =&-\rho _2\int _0^1v_t^2\hbox {d}x +b\int _0^1v_x^2\hbox {d}x + k\int _0^1v(u_x+v)\hbox {d}x - \gamma \int _0^1v\theta \hbox {d}x\\&- k\int _0^1v\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\ =&-\rho _2\Vert v_t\Vert ^2 + b\Vert v_x\Vert ^2 + k\int _0^1v(u_x+v)\hbox {d}x - \gamma \int _0^1v\theta \hbox {d}x\\&+ k\int _0^1v\int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x\\&- k\left( \int _0^th(s)\hbox {d}s\right) \int _0^1v(u_x+v)\hbox {d}x. \end{aligned}$$

Substituting \(\Psi '_1(t), \Psi '_2(t)\ \mathrm{and}\ \Psi '_3(t)\) into (3.6), we have

$$\begin{aligned} I'_1(t)=&-\frac{\gamma \rho _1}{\rho _3}\Vert u_t\Vert ^2 - \rho _2\Vert v_t\Vert ^2 + \gamma \Vert \theta \Vert ^2 + b\Vert v_x\Vert ^2 + \left( \frac{\gamma ^2}{\rho _3} -\gamma \right) \int _0^1v\theta \hbox {d}x\\&+ \left[ -k+k\left( \int _0^th(s)\hbox {d}s\right) \right] \int _0^1\theta (u_x+v)\hbox {d}x + \frac{\beta \rho _1}{\rho _3}\int _0^1u_t\theta _x \hbox {d}x\\&+ \left( k-\frac{k\gamma }{\rho _3}\right) \int _0^1v(u_x+v)\hbox {d}x + k\left( \int _0^th(s)\hbox {d}s\right) \left( \frac{\gamma }{\rho _3}-1\right) \int _0^1v(u_x+v)\hbox {d}x\\&- k\int _0^1\theta \int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x\\&+ k\left( 1-\frac{\gamma }{\rho _3}\right) \int _0^1v\int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x. \end{aligned}$$

Applying Young’s and Cauchy–Schwarz inequalities, as well as Lemma 2.2, we obtain

$$\begin{aligned} \begin{aligned} I'_1(t)\le&-\frac{\gamma \rho _1}{\rho _3}\Vert u_t\Vert ^2 - \rho _2\Vert v_t\Vert ^2 + b\Vert v_x\Vert ^2 + 3\varepsilon _3\Vert u_x+v\Vert ^2\\&+\left[ 1+\gamma + \frac{k^2}{4\varepsilon _3} +\frac{k^2}{4\varepsilon _3}\left( \int _0^th(s)\hbox {d}s\right) ^2 + \frac{1}{4}\left( \frac{\gamma ^2}{\rho _3} -\gamma \right) ^2 \right] \Vert \theta \Vert ^2\\&+ \frac{1}{4}A_\alpha \big (g\circ (u_x+v)\big )(t) + \varepsilon _4\Vert u_t\Vert ^2 + \frac{1}{4\varepsilon _4}\left( \frac{\beta \rho _1}{\rho _3}\right) ^2\Vert \theta _x\Vert ^2\\&+ \left[ 2+\frac{1}{4\varepsilon _3}\left( k-\frac{k\gamma }{\rho _3}\right) ^2 + \frac{k^2}{4\varepsilon _3}\left( \int _0^th(s)\hbox {d}s\right) ^2\left( \frac{\gamma }{\rho _3}-1\right) ^2 \right] \Vert v\Vert ^2\\&+ \frac{k^2}{4}\left( 1-\frac{\gamma }{\rho _3}\right) ^2A_\alpha \big (g\circ (u_x+v)\big )(t). \end{aligned} \end{aligned}$$
(3.7)

Now, we apply Poincaré’s inequality and assumption \(\mathrm{(A_1)}\); thus, (3.7) yields

$$\begin{aligned} \begin{aligned} I'_1(t)\le&-\left( \frac{\gamma \rho _1}{\rho _3}-\varepsilon _4\right) \Vert u_t\Vert ^2 - \rho _2\Vert v_t\Vert ^2 + 3\varepsilon _3\Vert u_x+v\Vert ^2\\&+\left[ 1+\gamma + \frac{k^2}{4\varepsilon _3} +\frac{k^2}{4\varepsilon _3} + \frac{1}{4}\left( \frac{\gamma ^2}{\rho _3} -\gamma \right) ^2 + \frac{1}{4\varepsilon _4}\left( \frac{\beta \rho _1}{\rho _3}\right) ^2 \right] \Vert \theta _x\Vert ^2\\&+ \left[ 2+b+\frac{1}{4\varepsilon _3}\left( k-\frac{k\gamma }{\rho _3}\right) ^2 + \frac{k^2}{4\varepsilon _3}\left( \frac{\gamma }{\rho _3}-1\right) ^2 \right] \Vert v_x\Vert ^2\\&+ \frac{1}{4}A_\alpha \big (g\circ (u_x+v)\big )(t) + \frac{k^2}{4}\left( 1-\frac{\gamma }{\rho _3}\right) ^2A_\alpha \big (g\circ (u_x+v)\big )(t). \end{aligned} \end{aligned}$$

Lastly, choosing \(\varepsilon _4=\dfrac{\gamma \rho _1}{2\rho _3}\) and \(\sigma =3\varepsilon _3\), we arrive at (3.5). \(\square \)

Lemma 3.3

For any \(\delta >0\), the functional \(I_2\) defined along the solution of problem (1.11), by

$$\begin{aligned} I_2(t)=-\frac{\rho _1b}{k}\int _0^1v_xu_t\mathrm{d}x -\rho _2\int _0^1v_tu_x\mathrm{d}x + \rho _2\int _0^1v_t\int _0^th(t-s)(u_x+v)(s)\mathrm{d}s\mathrm{d}x,\nonumber \\ \end{aligned}$$
(3.8)

satisfies

$$\begin{aligned} I'_2(t)\le&-\frac{b}{2}\Vert v_x\Vert ^2 +\delta \Vert v_t\Vert ^2 + C_\delta \Vert u_x+v\Vert ^2 + C_\delta \Vert \theta _x\Vert ^2\nonumber \\&+C_\delta (1+A_\alpha )(g\circ (u_x+v))(t) + \left( \rho _2-\frac{\rho _1b}{k}\right) \int _0^1v_{xt}u_t\mathrm{d}x,\ \forall t\ge 0. \end{aligned}$$
(3.9)

Proof

We differentiate \(I_2(t)\) and find

$$\begin{aligned} \begin{aligned} I'_2(t)=&-\frac{\rho _1b}{k}\int _0^1v_{xt}u_t\hbox {d}x \underbrace{-\frac{\rho _1b}{k}\int _0^1v_xu_{tt}\hbox {d}x}_{\Phi _1(t)} \underbrace{- \rho _2\int _0^1v_{tt}u_x\hbox {d}x}_{\Phi _2(t)} \underbrace{- \rho _2\int _0^1v_tu_{xt}\hbox {d}x}_{\Phi _3(t)}\\&+ \underbrace{\rho _2\int _0^1v_{tt}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x + \rho _2h(0)\int _0^1v_{t}(u_x+v)\hbox {d}x}_{\Phi _4(t)}\\&+ \underbrace{\rho _2\int _0^1v_{t}\int _0^th'(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x.}_{\Phi _5(t)} \end{aligned}\nonumber \\ \end{aligned}$$
(3.10)

Using (1.11)\(_1\) and (1.11)\(_2\), as well as integration by parts and the boundary conditions (1.12), we estimate \(\Phi _1, \Phi _2, \Phi _3\) and \(\Phi _4+\Phi _5\) as follows.

$$\begin{aligned} \Phi _1(t)=&-\frac{\rho _1b}{k}\int _0^1v_xu_{tt}\hbox {d}x\\ =&-b\int _0^1u_{xx}v_x\hbox {d}x - b\int _0^1v_x^2\hbox {d}x +\frac{\gamma b}{k}\int _0^1\theta _xv_x\hbox {d}x \\&- b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x. \end{aligned}$$

Then, by Young’s inequality, we have

$$\begin{aligned} \Phi _1(t)\le&- b\int _0^1v_x^2\hbox {d}x +\delta _1\int _0^1v_x^2\hbox {d}x +\frac{1}{4\delta _1}\left( \frac{\gamma b}{k}\right) ^2\int _0^1\theta _x^2\hbox {d}x -b\int _0^1u_{xx}v_x\hbox {d}x\nonumber \\&- b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x. \end{aligned}$$
(3.11)

Also,

$$\begin{aligned} \Phi _2(t)=&- \rho _2\int _0^1v_{tt}u_x\hbox {d}x\\ =&\ b\int _0^1v_xu_{xx}\hbox {d}x + k\int _0^1(u_x+v)u_x\hbox {d}x - \gamma \int _0^1\theta u_x\hbox {d}x \\&- k\int _0^1u_x\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\ =&\ b\int _0^1v_xu_{xx}\hbox {d}x + k\int _0^1(u_x+v)^2\hbox {d}x - k\int _0^1(u_x+v)v\hbox {d}x - \gamma \int _0^1\theta (u_x+v)\hbox {d}x\\&+ \gamma \int _0^1\theta v\hbox {d}x - k\int _0^1(u_x+v)\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x \\&+ k\int _0^1v\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x. \end{aligned}$$

Therefore, applying Young’s inequality, we get

$$\begin{aligned} \begin{aligned} \Phi _2(t)\le&\ b\int _0^1v_xu_{xx}\hbox {d}x + \left( k+\frac{k^2}{4\delta _1}+\frac{\gamma }{2}+\frac{k}{2}\right) \int _0^1(u_x+v)^2\hbox {d}x +3\delta _1\int _0^1v^2\hbox {d}x\\&+\left( \frac{\gamma }{2}+\frac{\gamma }{4\delta _1}\right) \int _0^1\theta ^2 \hbox {d}x + \left( \frac{k}{2}+\frac{k^2}{4\delta _1}\right) \int _0^1 \left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x, \end{aligned}\nonumber \\ \end{aligned}$$
(3.12)

we now make use of Lemma 2.3 to arrive at

$$\begin{aligned} \begin{aligned} \Phi _2(t)\le&\ b\int _0^1v_xu_{xx}\hbox {d}x +\left( \frac{\gamma }{2} +\frac{\gamma }{4\delta _1}\right) \int _0^1\theta ^2 \hbox {d}x +3\delta _1\int _0^1v^2\hbox {d}x \\&+ \left[ \frac{3k}{2}+\frac{k^2}{4\delta _1}+\frac{\gamma }{2} +(\alpha +h(0))A_\alpha \left( k+\frac{k^2}{2\delta _1}\right) \right] \int _0^1(u_x+v)^2\hbox {d}x\\&+\left( k+\frac{k^2}{2\delta _1}\right) A_\alpha \big (g\circ (u_x+v)\big )(t). \end{aligned}\nonumber \\ \end{aligned}$$
(3.13)

Again,

$$\begin{aligned} \Phi _3(t)= - \rho _2\int _0^1v_tu_{xt}\hbox {d}x=\rho _2\int _0^1v_{xt}u_{t}\hbox {d}x. \end{aligned}$$
(3.14)

We as well have

$$\begin{aligned} \Phi _4(t)+\Phi _5(t)=&-k\int _0^1(u_x+v)\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+ b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x \\&+ \gamma \int _0^1\theta \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x \\&+ k\int _0^1\left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x\\&+ \rho _2\int _0^1v_t\int _0^th'(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x + \rho _2h(0)\int _0^1v_t(u_x+v)\hbox {d}x, \end{aligned}$$

therefore, recalling (2.7), we get

$$\begin{aligned} \Phi _4(t)+\Phi _5(t)=&-k\int _0^1(u_x+v)\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+ b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x \\&+ \gamma \int _0^1\theta \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x \\&+ \rho _2\alpha \int _0^1v_t\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+ k\int _0^1\left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x\\&- \rho _2\int _0^1v_t\int _0^tg(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x + \rho _2h(0)\int _0^1v_t(u_x+v)\hbox {d}x. \end{aligned}$$

It follows from the application of Young’s inequality that,

$$\begin{aligned} \begin{aligned} \Phi _4(t)+\Phi _5(t)\le&\ b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+ \left( \frac{k}{2}+\frac{(\rho _2h(0))^2}{4\delta _2}\right) \int _0^1(u_x+v)^2\hbox {d}x\\&+ \frac{\gamma }{2}\int _0^1\theta ^2 \hbox {d}x + 3\delta _2\int _0^1v_t^2\hbox {d}x\\&+ \frac{\rho _2^2}{4\delta _2}\int _0^1\left( \int _0^tg(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x\\&+ \left( 1+\frac{k}{2}+\frac{\gamma }{2}+\frac{(\rho _2\alpha )^2}{4\delta _2}\right) \int _0^1\left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x. \end{aligned} \end{aligned}$$

Next, due to Lemma 2.3, we obtain

$$\begin{aligned} \begin{aligned}&\Phi _4(t)+\Phi _5(t)\\&\quad \le \ b\int _0^1v_{xx}\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x + \frac{\gamma }{2}\int _0^1\theta ^2 \hbox {d}x + 3\delta _2\int _0^1v_t^2\hbox {d}x\\&\qquad + \left[ \frac{k}{2}+\frac{(\rho _2h(0))^2}{4\delta _2} +\frac{\rho _2^2}{2\delta _2}+(\alpha +h(0))\left( 2+k+\gamma +\frac{(\rho _2\alpha )^2}{2\delta _2}\right) \right] \int _0^1(u_x+v)^2\hbox {d}x \\&\qquad + \frac{\rho _2^2}{2\delta _2}\big (g\circ (u_x+v)\big )(t) +2A_\alpha \left( 1+\frac{k}{2}+\frac{\gamma }{2} +\frac{(\rho _2\alpha )^2}{4\delta _2}\right) \big (g\circ (u_x+v)\big )(t). \end{aligned}\nonumber \\ \end{aligned}$$
(3.15)

Now, we substitute (3.11)–(3.15) into (3.10) and apply Poincaré’s inequality to get

$$\begin{aligned} I'_2(t)\le&-(b-3\delta _1)\Vert v_x\Vert ^2 + 3\delta _2\Vert v_t\Vert ^2 + c_1(\delta _1,\delta _2)\Vert u_x+v\Vert ^2 + c_2(\delta _1,\delta _2)\Vert \theta _x\Vert ^2\\&+ c_2(\delta _1,\delta _2)(1+A_\alpha )\big (g\circ (u_x+v)\big )(t) + \left( \rho _2-\frac{\rho _1b}{k}\right) \int _0^1v_{xt}u_t\hbox {d}x. \end{aligned}$$

Finally, choosing \(\delta _1=\dfrac{b}{6}\) and setting \(\delta :=3\delta _2\), we arrive at (3.9). \(\square \)

Lemma 3.4

For any \(\delta >0\), the functional \(I_3\) defined along the solution of problem (1.11), by

$$\begin{aligned} I_3(t)=\ \rho _1\int _0^1u_t\int _0^x(u_y+v)(y)\mathrm{d}y\mathrm{d}x +\frac{\rho _1\rho _3}{\gamma }\int _0^1u_t\int _0^x\theta (y)\mathrm{d}y\mathrm{d}x, \end{aligned}$$
(3.16)

satisfies

$$\begin{aligned} I'_3(t)\le&-\frac{kl_0}{2}\Vert u_x+v\Vert ^2 +\varepsilon \Vert u_t\Vert ^2 + C\left( 1+\frac{1}{4\varepsilon }+\frac{6}{kl_0}\right) \Vert \theta _x\Vert ^2\nonumber \\&+C\left( 1+\frac{6}{kl_0}\right) A_\alpha (g\circ (u_x+v))(t),\ \forall t\ge 0. \end{aligned}$$
(3.17)

Proof

Differentiation of \(I_3\) yields

$$\begin{aligned} \begin{aligned} I'_3(t)=&\ \rho _1\int _0^1u_t\int _0^x(u_y+v)_t(y)\hbox {d}y\hbox {d}x + \underbrace{\rho _1\int _0^1u_{tt}\int _0^x(u_y+v)(y)\hbox {d}y\hbox {d}x }_{\Upsilon _1(t)}\\&+\underbrace{\frac{\rho _1\rho _3}{\gamma }\int _0^1u_{tt}\int _0^x\theta (y)\hbox {d}y\hbox {d}x}_{\Upsilon _2(t)} +\underbrace{\frac{\rho _1\rho _3}{\gamma }\int _0^1u_t\int _0^x\theta _t(y)\hbox {d}y\hbox {d}x}_{\Upsilon _3(t)}. \end{aligned} \end{aligned}$$
(3.18)

Now, we estimate \(\Upsilon _1(t),\Upsilon _2(t)\ \mathrm{and}\ \Upsilon _3(t)\) as follows.

Using (1.11)\(_1\), integration by parts and the boundary conditions (1.12), we have

$$\begin{aligned} \begin{aligned} \Upsilon _1(t)=&-k\int _0^1(u_x+v)^2\hbox {d}x +k\int _0^1(u_x+v)\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+\gamma \int _0^1\theta (u_x+v)\hbox {d}x\\ =&-k\left( 1-\int _0^th(s)\hbox {d}s\right) \int _0^1(u_x+v)^2\hbox {d}x +\gamma \int _0^1\theta (u_x+v)\hbox {d}x\\&- k\int _0^1(u_x+v)\int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x. \end{aligned} \end{aligned}$$

Applying Young’s and Poincaré’s inequalities, we get

$$\begin{aligned} \begin{aligned} \Upsilon _1(t) \le&-k\left( 1-\int _0^th(s)\hbox {d}s\right) \int _0^1(u_x+v)^2\hbox {d}x +2\varepsilon _2\int _0^1(u_x+v)^2\hbox {d}x\\&+ \frac{\gamma ^2}{4\varepsilon _2}\int _0^1\theta ^2\hbox {d}x +\frac{k^2}{4\varepsilon _2}\int _0^1\left( \int _0^th(t-s) \big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\right) ^2\hbox {d}x, \end{aligned} \end{aligned}$$

then using Poincaré’s inequality, Lemma 2.2 and assumption \(\mathrm{(A_1)}\), we have

$$\begin{aligned} \begin{aligned} \Upsilon _1(t)\le&-k\left( 1-\int _0^th(s)\hbox {d}s\right) \Vert u_x+v\Vert ^2 +2\varepsilon _2\Vert u_x+v\Vert ^2 + \frac{\gamma ^2}{4\varepsilon _2}\Vert \theta _x\Vert ^2\\&+\frac{k^2}{4\varepsilon _2}A_\alpha (g\circ (u_x+v))(t)\\ \le&-kl_0\Vert u_x+v\Vert ^2 +2\varepsilon _2\Vert u_x+v\Vert ^2 + \frac{\gamma ^2}{4\varepsilon _2}\Vert \theta _x\Vert ^2 +\frac{k^2}{4\varepsilon _2}A_\alpha (g\circ (u_x+v))(t). \end{aligned} \end{aligned}$$
(3.19)

Next, using (1.11)\(_1\), integration by parts and the boundary conditions (1.12), we have

$$\begin{aligned} \begin{aligned} \Upsilon _2(t)=&-\frac{k\rho _3}{\gamma }\int _0^1(u_x+v)\theta \hbox {d}x +\frac{k\rho _3}{\gamma }\int _0^1\theta \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\\&+\rho _3\int _0^1\theta ^2\hbox {d}x\\ =&-\frac{k\rho _3}{\gamma }\left( 1-\int _0^th(s)\hbox {d}s\right) \int _0^1\theta (u_x+v)\hbox {d}x +\rho _3\int _0^1\theta ^2\hbox {d}x \\&- \frac{k\rho _3}{\gamma }\int _0^1\theta \int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\hbox {d}x. \end{aligned} \end{aligned}$$

Young’s inequality leads to

$$\begin{aligned} \begin{aligned} \Upsilon _2(t)\le&\ \varepsilon _2\int _0^1(u_x+v)^2\hbox {d}x +\frac{1}{4\varepsilon _2}\left( \frac{k\rho _3}{\gamma }\right) ^2 \left( 1-\int _0^th(s)\hbox {d}s\right) ^2\int _0^1\theta ^2\hbox {d}x \\&+\int _0^1\left( \int _0^th(t-s)\big [(u_x+v)(t)-(u_x+v)(s)\big ]\hbox {d}s\right) ^2\hbox {d}x\\&+\left[ \rho _3+\frac{1}{4}\left( \frac{k\rho _3}{\gamma }\right) ^2\right] \int _0^1\theta ^2\hbox {d}x, \end{aligned} \end{aligned}$$

then applying Poincaré’s inequality, Lemma 2.2 and assumption \(\mathrm{(A_1)}\), we get

$$\begin{aligned} \begin{aligned} \Upsilon _2(t)\le&\ \varepsilon _2\Vert u_x+v\Vert ^2 +\left[ \rho _3+\frac{1}{4}\left( \frac{k\rho _3}{\gamma }\right) ^2 \left( 1+\frac{1}{\varepsilon _2}\right) \right] \Vert \theta _x\Vert ^2 +A_\alpha (g\circ (u_x+v))(t). \end{aligned}\nonumber \\ \end{aligned}$$
(3.20)

Also, using (1.11)\(_3\), integration by parts and the boundary conditions (1.12), as well as Young’s inequality, we obtain

$$\begin{aligned} \begin{aligned} \Upsilon _3(t)=&-\rho _1\int _0^1u_t\int _0^x(u_y+v)_t(y)\hbox {d}y\hbox {d}x + \frac{\rho _1\beta }{\gamma }\int _0^1u_t\theta _x\hbox {d}x\\ \le&\ -\rho _1\int _0^1u_t\int _0^x(u_y+v)_t(y)\hbox {d}y\hbox {d}x + \varepsilon \Vert u_t\Vert ^2 +\frac{1}{4\varepsilon }\left( \frac{\rho _1\beta }{\gamma }\right) ^2\Vert \theta _x\Vert ^2. \end{aligned}\quad \quad \end{aligned}$$
(3.21)

Substituting (3.19)–(3.21) into (3.18), we get

$$\begin{aligned} I'_3(t)\le&-(kl_0-3\varepsilon _2)\Vert u_x+v\Vert ^2 +\varepsilon \Vert u_t\Vert ^2 +C\left( 1+\frac{1}{\varepsilon }+\frac{1}{\varepsilon _2}\right) \Vert \theta _x\Vert ^2\\&+C\left( 1+\frac{1}{\varepsilon _2}\right) A_\alpha (g\circ (u_x+v))(t). \end{aligned}$$

Finally, choosing \(\varepsilon _2=\dfrac{kl_0}{6}\), we get (3.17). \(\square \)

Lemma 3.5

The functional \(I_4\), defined by

$$\begin{aligned} I_4(t)=\int _0^1\int _0^t J(t-s)(u_x+v)^2(x,s)\mathrm{d}s\mathrm{d}x, \ \ \mathrm{where} \ J(t)=\int _t^{+\infty }h(s)\mathrm{d}s, \end{aligned}$$

satisfies along the solution of problem (1.11), the estimate

$$\begin{aligned} \begin{aligned} I'_4(t)&\le -\frac{1}{2} \left( h\circ (u_x+v)\right) (t)+3(1-l)\Vert u_x+v\Vert _2^2,\ \ \forall \ t\ge 0. \end{aligned} \end{aligned}$$
(3.22)

Proof

We differentiate \(I_4\) and recall that \(J(t)=J(0)-\int _0^t h(s)\hbox {d}s\) and

\(J'(t)=-h(t)\); thus, we have

$$\begin{aligned} I'_4(t)=&\,\int _0^1 \int _0^tJ'(t-s)(u_x+v)^2(x,s)\hbox {d}s\hbox {d}x + J(0)\int _0^1(u_x+v)^2(x,t)\hbox {d}x\nonumber \\ =&\,-\int _0^1\int _0^t h(t-s)(u_x+v)^2(x,s)\hbox {d}s\hbox {d}x + J(t)\int _0^1(u_x+v)^2(x,t)\hbox {d}x\nonumber \\&\, +\int _0^1\int _0^t h(t-s)(u_x+v)^2(x,t)\hbox {d}s\hbox {d}x\nonumber \\ =&\, J(t)\Vert u_x+v\Vert _2^2 - \int _0^1\int _0^t h(t-s)\big [ (u_x+v)(x,t)-(u_x+v)(x,s)\big ]^2 \hbox {d}s\hbox {d}x \nonumber \\&+2\int _0^1 (u_x+v)\int _0^t h(t-s)\big [ (u_x+v)(x,t)-(u_x+v)(x,s)\big ]\hbox {d}s\hbox {d}x\nonumber \\ \le&\,\ -\left( h\circ (u_x+v)\right) (t)+J(t)\Vert u_x+v\Vert _2^2 +2(1-l)\Vert u_x+v\Vert _2^2\nonumber \\&+\frac{\left( \int _0^t h(s)\hbox {d}s \right) }{2(1-l)}\left( h\circ (u_x+v)\right) (t)\nonumber \\ \le&\,\ -\frac{1}{2} \left( h\circ (u_x+v)\right) (t)+2(1-l)\Vert u_x+v\Vert _2^2 + J(t)\Vert u_x+v\Vert _2^2. \end{aligned}$$
(3.23)

By recalling that h is decreasing and positive, hence \(J(t)\le J(0)=(1-l)\), the result (3.22) follows. \(\square \)
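For orientation, for the exponential kernel \(h(t)=\tau _0\hbox {e}^{-2\tau _0t}\) of the first example in Sect. 4.1, the tail J is explicit:

$$\begin{aligned} J(t)=\int _t^{+\infty }\tau _0\hbox {e}^{-2\tau _0s}\hbox {d}s=\frac{1}{2}\hbox {e}^{-2\tau _0t}, \qquad J(0)=\frac{1}{2}=1-l, \end{aligned}$$

so in this case J is positive, decreasing and bounded by \(1-l\), exactly as used in the estimate (3.22).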

4 General and Optimal Decay Result

In this section, we state and prove our main general and optimal decay result. In order to do this, we define a Lyapunov functional \({{\mathcal {L}}}\) as

$$\begin{aligned} {{{\mathcal {L}}}}(t)=NE(t)+N_1I_1(t)+N_2I_2(t)+N_3I_3(t), \end{aligned}$$
(4.1)

for \(N,N_1, N_2, N_3\) to be fixed later. In the lemma that follows, we show the equivalence of \({{\mathcal {L}}}\) and the energy functional E.

Lemma 4.1

There exist positive constants \(\alpha _1, \alpha _2\) such that for N large enough, the functional \({{\mathcal {L}}}\) satisfies

$$\begin{aligned} \alpha _1E(t)\le {{{\mathcal {L}}}}(t)\le \alpha _2E(t), \ \ \ \forall t\ge 0. \end{aligned}$$
(4.2)

Moreover, if \(\rho _2=\dfrac{\rho _1b}{k}\) then

$$\begin{aligned} {{{\mathcal {L}}}}'(t)\le&-\lambda \left( \Vert u_t\Vert _2^2 + \Vert v_t\Vert _2^2 + \Vert v_x \Vert _2^2 + \Vert u_x+v\Vert _2^2 + \Vert \theta _x\Vert _2^2\right) \nonumber \\&+ \frac{1}{4}\left( h\circ (u_x+v)\right) (t),\ \forall \ t\ge 0, \end{aligned}$$
(4.3)

for some positive constants \(N,N_1,N_2,N_3\) to be later chosen appropriately.

Proof

We have

$$\begin{aligned} |{{{\mathcal {L}}}}(t)-NE(t)|\le&\ N_1|I_1(t)|+N_2|I_2(t)|+N_3|I_3(t)|\\ \le&\ \rho _1N_1\left| \int _0^1u_t\int _0^x\theta (y)\hbox {d}y\hbox {d}x\right| + \frac{\gamma \rho _1}{\rho _3}N_1\left| \int _0^1u_t\int _0^xv(y)\hbox {d}y\hbox {d}x\right| \\&+ \rho _2N_1\left| \int _0^1vv_t\hbox {d}x\right| + \frac{\rho _1b}{k}N_2 \left| \int _0^1v_xu_t\hbox {d}x\right| +\rho _2N_2\left| \int _0^1v_tu_x\hbox {d}x\right| \\&+ \rho _2N_2\left| \int _0^1v_t\int _0^th(t-s)(u_x+v)(s)\hbox {d}s\hbox {d}x\right| \\&+ \rho _1N_3\left| \int _0^1u_t\int _0^x(u_y+v)(y)\hbox {d}y\hbox {d}x\right| \\&+\frac{\rho _1\rho _3}{\gamma }N_3\left| \int _0^1u_t\int _0^x\theta (y)\hbox {d}y\hbox {d}x\right| . \end{aligned}$$

Applying Young’s inequality and Lemma 2.1, we get

$$\begin{aligned} |{{{\mathcal {L}}}}(t)-NE(t)|&\le C\big (\rho _1\Vert u_t\Vert ^2+\rho _2\Vert v_t\Vert ^2 +\rho _3\Vert \theta \Vert ^2+b\Vert v_x\Vert ^2+k\Vert u_x+v\Vert ^2\big )\\&\quad \, + C\int _0^1\left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x. \end{aligned}$$

Using the Cauchy–Schwarz inequality, the last integral above can be estimated in the same manner as in Lemma 2.3; we get

$$\begin{aligned} \int _0^1\left( \int _0^th(t-s)(u_x+v)(s)\hbox {d}s\right) ^2\hbox {d}x\le&\ 2\Vert u_x+v\Vert ^2 +2\big (h\circ (u_x+v)\big )(t). \end{aligned}$$

Hence, there exists a constant \(C>0\) such that

$$\begin{aligned} |{{{\mathcal {L}}}}(t)-NE(t)|\le&\, \frac{C}{2}\bigg [\rho _1\Vert u_t\Vert ^2+\rho _2\Vert v_t\Vert ^2 +\rho _3\Vert \theta \Vert ^2+b\Vert v_x\Vert ^2\\&+k\bigg (1-\int _0^t h(s)\hbox {d}s\bigg )\Vert u_x+v\Vert ^2\bigg ] + \frac{Ck}{2}(h\circ (u_x+v))(t)\\ \le&\, CE(t). \end{aligned}$$

It follows that

$$\begin{aligned} |{{{\mathcal {L}}}}(t)-NE(t)|\le CE(t)\Longleftrightarrow (N-C)E(t)\le {{{\mathcal {L}}}}(t)\le (N+C)E(t). \end{aligned}$$

Now, we choose N large enough so that

$$\begin{aligned} N-C>0. \end{aligned}$$
(4.4)

Therefore, there exist positive constants \(\alpha _1,\alpha _2\) such that (4.2) holds. Hence, \({{{\mathcal {L}}}}\sim E\).

Now, we differentiate (4.1), bearing in mind (3.2), (3.5), (3.9) and (3.17) and the fact that \(g=\alpha h-h'\), we get for any \(t\ge 0\),

$$\begin{aligned} {{{\mathcal {L}}}}'(t) \le&- \bigg [\frac{\gamma \rho _1}{2\rho _3}N_1 - \varepsilon N_3\bigg ]\Vert u_t\Vert ^2 - \bigg [\rho _2N_1-\delta N_2\bigg ]\Vert v_t\Vert ^2 - \bigg [\frac{b}{2}N_2-C_\sigma N_1\bigg ]\Vert v_x\Vert ^2\\&- \bigg [\frac{kl_0}{2}N_3-\sigma N_1-C_\delta N_2\bigg ]\Vert u_x+v\Vert ^2\\&- \bigg [\beta N-C_\sigma N_1-C_\delta N_2-C\left( 1+\frac{6}{kl_0}\right) N_3\bigg ] \Vert \theta _x\Vert ^2\\&- \bigg [\frac{k}{2}N-(1+A_\alpha )\left( CN_1+C_\delta N_2+C \left( 1+\frac{6}{kl_0}\right) N_3\right) \bigg ](g\circ (u_x+v))(t)\\&+ \frac{\alpha k}{2}N\big (h\circ (u_x+v)\big )(t) + N_2\left( \rho _2-\frac{\rho _1b}{k}\right) \int _0^1v_{xt}u_t\hbox {d}x. \end{aligned}$$

Setting \(N_1=1,\) \(\varepsilon =\dfrac{\gamma \rho _1N_1}{4\rho _3N_3}\), \(\sigma =\dfrac{kl_0N_3}{4N_1}\), \(\delta =\dfrac{\rho _2N_1}{2N_2}\) and recalling that \(\rho _2=\dfrac{\rho _1b}{k}\), we find

$$\begin{aligned} {{{\mathcal {L}}}}'(t) \le&-\frac{\gamma \rho _1}{4\rho _3}\Vert u_t\Vert ^2 -\frac{\rho _2}{2}\Vert v_t\Vert ^2-\left[ \frac{kl_0}{4}N_3-c_2N_2\right] \Vert u_x+v\Vert ^2 -\left[ \frac{b}{2}N_2-c_1\right] \Vert v_x\Vert ^2 \\&- \left[ \beta N-C(1+N_2+N_3)\right] \Vert \theta _x\Vert ^2 + \frac{\alpha k}{2}N \big (h\circ (u_x+v)\big )(t) \\&- \left[ \frac{k}{2} N-C(1+A_\alpha )(1+N_2+N_3)\right] (g\circ (u_x+v))(t). \end{aligned}$$

Choosing \(N_2=\dfrac{kl_0N_3}{8c_2}\), we get

$$\begin{aligned} {{{\mathcal {L}}}}'(t) \le&-\frac{\gamma \rho _1}{4\rho _3}\Vert u_t\Vert ^2-\frac{\rho _2}{2}\Vert v_t\Vert ^2 -\frac{kl_0N_3}{8}\Vert u_x+v\Vert ^2 -\left[ \frac{bkl_0N_3}{16c_2}-c_1\right] \Vert v_x\Vert ^2 \nonumber \\&- \left[ \beta N-C(1+N_2+N_3)\right] \Vert \theta _x\Vert ^2 + \frac{\alpha k}{2}N \big (h\circ (u_x+v)\big )(t) \nonumber \\&- \left[ \frac{k}{2} N-C(1+A_\alpha )(1+N_2+N_3)\right] (g\circ (u_x+v))(t). \end{aligned}$$
(4.5)

Now, we pick \(N_3\) large such that

$$\begin{aligned} \frac{bkl_0N_3}{16c_2}-c_1>0. \end{aligned}$$
(4.6)

Observe that \(\dfrac{\alpha h^2(s)}{g(s)}=\dfrac{\alpha h^2(s)}{\alpha h(s)-h'(s)}<h(s)\); therefore, application of the dominated convergence theorem yields

$$\begin{aligned} \alpha A_{\alpha }=\int _0^{+\infty } \frac{\alpha h^2(s)}{\alpha h(s)-h'(s)}\hbox {d}s\rightarrow 0\ \ \mathrm{as}\ \ \alpha \rightarrow 0. \end{aligned}$$
(4.7)

Hence, there exists \(0<\alpha _0<1\) such that if \(\alpha<\alpha _0<1\), we have

$$\begin{aligned} \alpha A_{\alpha }<\frac{1}{8C\left( 1+N_2 +N_3\right) }. \end{aligned}$$
(4.8)
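As an illustration (outside the proof), the limit (4.7) can be checked for the model kernel \(h(s)=\hbox {e}^{-s}\): here \(h'=-h\), so \(g=\alpha h-h'=(1+\alpha )h\), \(A_\alpha =\frac{1}{1+\alpha }\) exactly, and \(\alpha A_\alpha =\frac{\alpha }{1+\alpha }\rightarrow 0\). A short numerical quadrature confirms this:

```python
import math

def alpha_A_alpha(alpha, T=40.0, n=200000):
    """Midpoint-rule approximation of alpha * A_alpha, i.e. of
    alpha * int_0^T h^2/(alpha*h - h') ds for the model kernel
    h(s) = exp(-s), for which h'(s) = -exp(-s)."""
    dt = T / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * dt
        h = math.exp(-s)
        g = alpha * h + h          # g = alpha*h - h' = (1 + alpha)*h
        total += h * h / g * dt
    return alpha * total

for a in (0.5, 0.1, 0.01):
    print(a, alpha_A_alpha(a))     # approaches alpha/(1+alpha), hence -> 0
```

For this kernel the computed values agree with \(\alpha /(1+\alpha )\) to quadrature accuracy, so \(\alpha A_\alpha \) can indeed be made as small as (4.8) requires by shrinking \(\alpha \).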

Finally, we choose N large enough and set \(\alpha =\dfrac{1}{2Nk}\) so that (4.2) remains valid and

$$\begin{aligned}&\frac{Nk}{3}-CA_{\alpha }\left( 1+N_2 +N_3\right) >0 \end{aligned}$$
(4.9)
$$\begin{aligned}&\frac{Nk}{6}-C\left( 1+N_2 +N_3\right) >0, \end{aligned}$$
(4.10)

as well as

$$\begin{aligned} \beta N-C(1+N_2+N_3)>0. \end{aligned}$$
(4.11)

Combining (4.5)−(4.11), we obtain (4.3). This completes the proof. \(\square \)

Now we will state and prove our main decay result.

Theorem 4.1

Let \((u,v,\theta )\) be the solution of problem (1.11)–(1.13). Assume \(\mathrm{(A_1)}\) and \(\mathrm{(A_2)}\) hold and that \(\rho _2=\dfrac{\rho _1b}{k}\). Then, there exist \(\omega _1,\omega _2>0\) such that the energy functional E of problem (1.11)–(1.13) satisfies

$$\begin{aligned} E(t)\le \omega _2G_1^{-1}\left( \omega _1\int ^t_{t_0}\xi (s)\mathrm{d}s\right) , \ \, \mathrm{where}\ \, G_1(t)=\int ^r_{t}\frac{1}{sG'(s)}\mathrm{d}s \end{aligned}$$
(4.12)

and \(G_1\) is a decreasing and strictly convex function on (0, r], with \(r=h(t_0)>0\) and \(\displaystyle \lim _{t\rightarrow 0}G_1(t)=+\infty \).

Proof

According to \((A_1)\) and \((A_2)\), we have that \(\xi \) and h are continuous, decreasing and positive. In addition, G is continuous and positive. Thus, \(\forall \ t\in [0,t_0]\), we have

$$\begin{aligned} 0<h(t_0)\le h(t)\le h(0), \ \ 0<\xi (t_0)\le \xi (t)\le \xi (0) \end{aligned}$$

then we can find some positive constants \(a_1\) and \(a_2\) such that

$$\begin{aligned} a_1 \le \xi (t)G(h(t))\le a_2. \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} h'(t)\le -\xi (t) G(h(t)) \le -\frac{a_1}{h(0)} h(0)\le -\frac{a_1}{h(0)} h(t),\ \ \forall \ t\in [0,t_0]. \end{aligned}$$
(4.13)

From (3.1) and (4.13), we have

$$\begin{aligned}&\int _0^{t_0}h(s)\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\&\quad \le -\frac{h(0)}{a_1}\int _0^{t_0}h'(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\&\quad \le -\,C E'(t),\ \ \forall \ t\in [0,t_0]. \end{aligned}$$
(4.14)

Using (4.3) and (4.14), we get

$$\begin{aligned} {{{\mathcal {L}}}}'(t)\le & {} -\lambda E(t) +\frac{1}{4} \big (h\circ (u_x+v)\big )(t)\\= & {} -\lambda E(t) + \frac{1}{4} \int _0^{t_0}h(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\\&+ \frac{1}{4}\int _{t_0}^{t}h(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\\\le & {} -\lambda E(t)-C E'(t)+\frac{1}{4}\int _{t_0}^{t}h(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s. \end{aligned}$$

Therefore, for all \(t\ge t_0,\) we get

$$\begin{aligned} {{{\mathcal {L}}}}_1'(t)\le -\lambda E(t)+\frac{1}{4}\int _{t_0}^{t}h(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s, \end{aligned}$$
(4.15)

where \({{{\mathcal {L}}}}_1(t)= {{{\mathcal {L}}}}(t) + C E(t)\) and with (4.2) in mind, we deduce that \({{{\mathcal {L}}}}_1\) is equivalent to E.

To complete the proof of Theorem 4.1, we discuss two cases:

Case 1 G(t) is linear.

For this case, we multiply (4.15) by \(\xi (t)\), then with (3.1) and \((A_2)\) in mind, we have

$$\begin{aligned} \xi (t) {{{\mathcal {L}}}}_1'(t)\le & {} -\lambda \xi (t) E(t)+\frac{1}{4}\xi (t)\int _{t_0}^{t}h(s) \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\le & {} -\lambda \xi (t) E(t)+\frac{1}{2}\int _{t_0}^{t} \xi (s)h(s)\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\le & {} -\lambda \xi (t) E(t)-\frac{1}{2}\int _{t_0}^{t}h'(s)\Vert (u_x+v)(t) -(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\le & {} -\lambda \xi (t) E(t)- C E'(t). \end{aligned}$$
(4.16)

Knowing that \(\xi \) is decreasing, we have

$$\begin{aligned} (\xi {{{\mathcal {L}}}}_1+ CE)'(t)\le -\lambda \xi (t) E(t), \ \ \ \forall \ t\ge t_0 \end{aligned}$$
(4.17)

and since \({{{\mathcal {L}}}}_1\sim E\), we obtain

$$\begin{aligned} \xi {{{\mathcal {L}}}}_1+C E\sim E. \end{aligned}$$
(4.18)

Hence, setting \({{{\mathcal {L}}}}_2(t)=\xi (t){{{\mathcal {L}}}}_1(t)+C E(t)\), we can find a positive constant \(\omega \) such that

$$\begin{aligned} {{{\mathcal {L}}}}_2'(t)\le -\lambda \xi (t) E(t)\le -\omega \xi (t){{{\mathcal {L}}}}_2(t),\ \ \forall \ t\ge t_0. \end{aligned}$$
(4.19)

Integrating (4.19) over \((t_0,t)\) and recalling (4.18), we obtain

$$\begin{aligned} E(t)\le \omega \hbox {e}^{-{\widetilde{\omega }}\int _{t_0}^t\xi (s)\hbox {d}s} = \omega G_1^{-1}\left( {\widetilde{\omega }}\int _{t_0}^t\xi (s)\hbox {d}s\right) , \end{aligned}$$

where \({\widetilde{\omega }}\) is a positive constant.
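Indeed, when G is linear, say \(G(s)=s\), the function \(G_1\) of Theorem 4.1 is explicit:

$$\begin{aligned} G_1(t)=\int _t^r\frac{\hbox {d}s}{sG'(s)}=\int _t^r\frac{\hbox {d}s}{s}=\ln \frac{r}{t}, \qquad G_1^{-1}(y)=r\hbox {e}^{-y}, \end{aligned}$$

so the general bound (4.12) reduces, up to constants, to the exponential estimate above.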

Case 2 G(t) is nonlinear.

Firstly, for this case, we set \({\mathcal {F}}(t)=\mathcal{L}(t)+I_4(t).\) Thus, from Lemma 3.5 and (4.5), we have

$$\begin{aligned} {{{\mathcal {F}}}}'(t) \le&-\frac{\gamma \rho _1}{4\rho _3}\Vert u_t\Vert ^2 -\frac{\rho _2}{2}\Vert v_t\Vert ^2 -\left[ \frac{kl_0N_3}{8}-3(1-l_0)\right] \Vert u_x+v\Vert ^2 \nonumber \\&-\left[ \frac{bkl_0N_3}{16c_2}-c_1\right] \Vert v_x\Vert ^2 - \left[ \beta N-C(1+N_2+N_3)\right] \Vert \theta _x\Vert ^2 \nonumber \\&- \left[ \frac{k}{2} N-C(1+A_\alpha )(1+N_2+N_3)\right] (g\circ (u_x+v))(t) \nonumber \\&+ \left[ \frac{\alpha Nk}{2}-\frac{1}{2} \right] \big (h\circ (u_x+v)\big )(t). \end{aligned}$$
(4.20)

Next, we choose \(N_3\) large so that

$$\begin{aligned} \frac{kl_0N_3}{8}-3(1-l_0)>0\ \, \mathrm{and}\ \, \frac{bkl_0N_3}{16c_2}-c_1>0, \end{aligned}$$

then choose N large enough so that

$$\begin{aligned} \beta N-C(1+N_2+N_3)>0\ \, \mathrm{and}\ \, \frac{k}{2} N-C(1+A_\alpha )(1+N_2+N_3)>0. \end{aligned}$$

Now, we take \(\alpha =\dfrac{1}{2Nk}\); thus, there exists a constant \({\widetilde{\alpha }}>0\) such that

$$\begin{aligned} {\mathcal {F}}'(t)\le -{\widetilde{\alpha }}\,E(t),\ \forall t\ge t_0. \end{aligned}$$
(4.21)

From (4.21), it follows that

$$\begin{aligned} {\widetilde{\alpha }}\int _{t_0}^t E(s)\hbox {d}s\le {\mathcal {F}}(t_0)-{\mathcal {F}}(t)\le {\mathcal {F}}(t_0). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \int _{0}^{+\infty } E(s)\hbox {d}s< \infty . \end{aligned}$$
(4.22)

We next define the functional \(\varphi \) as

$$\begin{aligned} \varphi (t):= \alpha \int _{t_0}^t\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s. \end{aligned}$$

Due to (4.22), we can select \(0<\alpha <1\) such that

$$\begin{aligned} \varphi (t)<1, \ \ \forall \ t\ge t_0. \end{aligned}$$
(4.23)

Henceforth, we assume, without loss of generality, that \(\varphi (t)>0, \ \forall \ t\ge t_0;\) otherwise, (4.15) immediately yields an exponential stability result. We also define the function \(\psi \) as

$$\begin{aligned} \psi (t)=-\int _{t_0}^t h'(s)\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s \end{aligned}$$

and notice that

$$\begin{aligned} \psi (t)\le -CE'(t). \end{aligned}$$
(4.24)

We can see from assumption \(\mathrm{(A_2)}\) that G is strictly convex on \((0,r],\ r=h(t_0)\) and \(G(0)=G'(0)=0\). Therefore, we have

$$\begin{aligned} G(\epsilon s)\le \epsilon G(s),\ \ 0\le \epsilon \le 1,\ \ s\in (0,r]. \end{aligned}$$
(4.25)
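As an illustrative sanity check (not needed for the proof), (4.25) can be verified on a grid for the admissible choice \(G(s)=s^{q}\), \(q=\frac{3}{2}\), which is of the form appearing in the third example of Sect. 4.1:

```python
def G(s, q=1.5):
    # Illustrative convex function with G(0) = G'(0) = 0 (any q > 1)
    return s ** q

# Check G(eps*s) <= eps*G(s) for 0 <= eps <= 1 and s in (0, r], r = 1:
r = 1.0
for i in range(0, 51):
    eps = i / 50.0
    for j in range(1, 51):
        s = r * j / 50.0
        assert G(eps * s) <= eps * G(s) + 1e-15
print("(4.25) verified on the grid")
```

The inequality holds here because \((\epsilon s)^q=\epsilon ^q s^q\le \epsilon s^q\) whenever \(\epsilon \le 1\) and \(q>1\); the grid check merely illustrates the general convexity argument.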

It thus follows from assumption \(\mathrm{(A_2)}\), (4.23) and Jensen’s inequality (2.12) that

$$\begin{aligned} \psi (t)= & {} \frac{1}{\alpha \varphi (t)}\int _{t_0}^t \varphi (t)(-h'(s)) \alpha \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\ge & {} \frac{1}{\alpha \varphi (t)}\int _{t_0}^t \varphi (t)\xi (s) G(h(s))\alpha \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\ge & {} \frac{\xi (t)}{\alpha \varphi (t)}\int _{t_0}^t G(\varphi (t)h(s)) \alpha \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\nonumber \\\ge & {} \frac{\xi (t)}{\alpha }G\left( \frac{1}{\varphi (t)} \int _{t_0}^t \varphi (t)h(s) \alpha \Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\right) \nonumber \\= & {} \frac{\xi (t)}{\alpha }G\left( \alpha \int _{t_0}^t h(s) \Vert (u_x+v)(t) -(u_x+v)(t-s)\Vert _2^2\hbox {d}s\right) \nonumber \\= & {} \frac{\xi (t)}{\alpha }{\widetilde{G}}\left( \alpha \int _{t_0}^t h(s)\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\right) , \end{aligned}$$
(4.26)

where \({\widetilde{G}}\) is the extension of G on \((0,+\infty )\) in (2.3). From (4.26), we get

$$\begin{aligned} \int _{t_0}^t h(s)\Vert (u_x+v)(t)-(u_x+v)(t-s)\Vert _2^2\hbox {d}s\le \frac{1}{\alpha }{\widetilde{G}}^{-1}\left( \frac{\alpha \psi (t)}{\xi (t)} \right) . \end{aligned}$$

Thus, from (4.15), we get

$$\begin{aligned} {{{\mathcal {L}}}}_1'(t)\le -\lambda E(t)+ C{\widetilde{G}}^{-1}\left( \frac{\alpha \psi (t)}{\xi (t)} \right) , \ \ \forall \ t\ge t_0. \end{aligned}$$
(4.27)

Now, for \(r_0<r\) to be chosen later, we define the functional

$$\begin{aligned} M_2(t):={\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) {\mathcal {L}}_1(t)+ E(t) \end{aligned}$$

which is equivalent to E since \({\mathcal {L}}_1\sim E\). Using (4.27) and due to \(E'(t)\le 0,\ {\widetilde{G}}'(t)>0,\ {\widetilde{G}}''(t)>0,\) we have

$$\begin{aligned} M_2'(t)=&\,r_0\frac{E'(t)}{E(0)}{\widetilde{G}}''\left( r_0\frac{E(t)}{E(0)} \right) {\mathcal {L}}_1(t) +{\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) {\mathcal {L}}_1'(t) + E'(t)\nonumber \\ \le&-\lambda E(t){\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) +C{\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) {\widetilde{G}}^{-1} \left( \alpha \frac{\psi (t)}{\xi (t)}\right) + E'(t). \end{aligned}$$
(4.28)

In what follows, we consider the convex conjugate \({\widetilde{G}}^{*}\) of \({\widetilde{G}}\) in the sense of Young (see [25], pages 61–64), defined by

$$\begin{aligned} {\widetilde{G}}^{*}(s)=s({\widetilde{G}}')^{-1}(s)-{\widetilde{G}}\left[ ({\widetilde{G}}')^{-1}(s)\right] , \end{aligned}$$
(4.29)

with \({\widetilde{G}}^{*}\) satisfying the generalized Young inequality

$$\begin{aligned} U_1U_2\le {\widetilde{G}}^{*}(U_1)+{\widetilde{G}}(U_2). \end{aligned}$$
(4.30)
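For instance (illustrative only), with \({\widetilde{G}}(s)=s^q\), \(q>1\), formula (4.29) evaluates to \({\widetilde{G}}^{*}(s)=(q-1)\left( s/q\right) ^{q/(q-1)}\), and both the identity \({\widetilde{G}}^{*}({\widetilde{G}}'(x))=x{\widetilde{G}}'(x)-{\widetilde{G}}(x)\) and the Young inequality (4.30) can be spot-checked numerically:

```python
q = 1.5

def G(s):
    # Illustrative convex function G(s) = s**q, q > 1
    return s ** q

def G_prime(s):
    return q * s ** (q - 1.0)

def G_star(s):
    # Young conjugate of G: sup_x [s*x - G(x)] = (q-1)*(s/q)**(q/(q-1))
    return (q - 1.0) * (s / q) ** (q / (q - 1.0))

# Identity G*(G'(x)) = x*G'(x) - G(x):
for k in range(1, 20):
    x = k / 10.0
    assert abs(G_star(G_prime(x)) - (x * G_prime(x) - G(x))) < 1e-12

# Generalized Young inequality U1*U2 <= G*(U1) + G(U2):
for i in range(1, 40):
    for j in range(1, 40):
        U1, U2 = i / 10.0, j / 10.0
        assert U1 * U2 <= G_star(U1) + G(U2) + 1e-12
```

The identity \({\widetilde{G}}^{*}({\widetilde{G}}'(x))=x{\widetilde{G}}'(x)-{\widetilde{G}}(x)\) is precisely the case of equality in (4.30) used in the estimate (4.31) below.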

Let us set \( U_1={\widetilde{G}}'\left( r_0\dfrac{E(t)}{E(0)}\right) \) and \(U_2={\widetilde{G}}^{-1}\left( \alpha \dfrac{\psi (t)}{\xi (t)}\right) .\) It follows from Lemma 3.1 and (4.28)–(4.30), that for all \(t\ge t_0\)

$$\begin{aligned} M_2'(t)\le&-\lambda E(t){\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) + C{\widetilde{G}}^{*}\left( {\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) \right) +C\alpha \frac{\psi (t)}{\xi (t)} + E'(t)\nonumber \\ \le&-\lambda E(t){\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) + Cr_0\frac{E(t)}{E(0)}{\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) +C\alpha \frac{\psi (t)}{\xi (t)} + E'(t). \end{aligned}$$
(4.31)

We multiply (4.31) by \(\xi (t)\), observe that \(r_0\dfrac{E(t)}{E(0)}<r\) and

$$\begin{aligned} {\widetilde{G}}'\left( r_0\frac{E(t)}{E(0)}\right) =G'\left( r_0\frac{E(t)}{E(0)}\right) , \end{aligned}$$

then making use of (3.2) and (4.24), we obtain

$$\begin{aligned} \xi (t)M_2'(t)\le&-\lambda \xi (t)E(t)G'\left( r_0\frac{E(t)}{E(0)}\right) + Cr_0\frac{E(t)}{E(0)}\xi (t)G'\left( r_0\frac{E(t)}{E(0)}\right) \nonumber \\&+C\alpha \psi (t)+ \xi (t)E'(t) \nonumber \\ \le&-\lambda \xi (t)E(t)G'\left( r_0\frac{E(t)}{E(0)}\right) + Cr_0\frac{E(t)}{E(0)}\xi (t)G'\left( r_0\frac{E(t)}{E(0)}\right) \nonumber \\&-CE'(t), \ \forall \ \ t\ge t_0. \end{aligned}$$
(4.32)

Let \(M_3(t)=\xi (t) M_2(t) + CE(t)\), then \(M_3\sim E\) since \(M_2\sim E,\) i.e., there exist constants \(a_0,a_1>0,\) such that \(M_3\) satisfies

$$\begin{aligned} a_0 M_3(t)\le E(t)\le a_1 M_3(t). \end{aligned}$$
(4.33)

Thus, from (4.32), we have

$$\begin{aligned} M_3'(t)\le -(\lambda E(0)-Cr_0)\,\xi (t)\frac{E(t)}{E(0)}G'\left( r_0\frac{E(t)}{E(0)}\right) , \ \forall t\ge t_0. \end{aligned}$$

Next, we choose \(r_0<r\) small enough such that \(\lambda E(0)-Cr_0>0\), then for some positive constant \(\omega >0\), we have

$$\begin{aligned} M_3'(t)\le&-\omega \xi (t)\frac{E(t)}{E(0)} G'\left( r_0\frac{E(t)}{E(0)}\right) \nonumber \\ =&-\omega \xi (t)G_2\left( \frac{E(t)}{E(0)} \right) , \ \ \forall t\ge t_0, \end{aligned}$$
(4.34)

where

$$\begin{aligned} G_2(s)=s G'(r_0 s). \end{aligned}$$

We therefore remark that

$$\begin{aligned} G_2'(s)=G'(r_0s)+r_0 sG''(r_0s), \end{aligned}$$

thus the strict convexity of G on (0, r] implies \(G_2(s)>0,\ G_2'(s)>0\) on (0, r].

Now, let \(M(t)=a_0\dfrac{M_3(t)}{E(0)},\) then due to (4.33) and (4.34), we have

$$\begin{aligned} M(t)\sim E(t) \end{aligned}$$
(4.35)

and

$$\begin{aligned} M'(t)=a_0\frac{M_3'(t)}{E(0)}\le -\omega _1\xi (t)G_2(M(t)), \ \forall t\ge t_0. \end{aligned}$$
(4.36)

We now integrate (4.36) over \((t_0,t)\), to get

$$\begin{aligned} \omega _1\int _{t_0}^t\xi (s)\hbox {d}s \le -\int _{t_0}^t \frac{M'(s)}{G_2(M(s))}\hbox {d}s=\int _{r_0 M(t)}^{r_0 M(t_0)} \frac{1}{sG'(s)}\hbox {d}s \end{aligned}$$

which yields

$$\begin{aligned} M(t)\le \frac{1}{r_0}G_1^{-1}\left( \omega _1 \int _{t_0}^t \xi (s)\hbox {d}s\right) , \end{aligned}$$
(4.37)

where

$$\begin{aligned} G_1(t)=\displaystyle \int _t^r \frac{1}{sG'(s)} \hbox {d}s. \end{aligned}$$

It follows from \(\mathrm{(A_2)}\) that \(G_1\) is a strictly decreasing function on (0, r]. Furthermore,

$$\begin{aligned} \displaystyle \lim _{t\longrightarrow 0} G_1(t)=+\infty . \end{aligned}$$

Finally, making use of (4.35) and (4.37), we obtain decay estimate (4.12). Hence, Theorem 4.1 is completely proved. \(\square \)

4.1 Examples

  1.

    Let \(h_1(t)=\tau _0\hbox {e}^{-2\tau _0t},\ t\ge 0, \ \mathrm{where}\ \tau _0>0\); thus, \((A_1)\) holds with \(l_0=\frac{1}{2}\). Therefore,

    $$\begin{aligned} h_1'(t)=-2\tau _0^2\hbox {e}^{-2\tau _0t}=-2\tau _0 G(h_1(t)), \ \ \mathrm{where } \ \ G(s)=s. \end{aligned}$$

    In this case, the solution energy (3.1)\(_1\) satisfies

    $$\begin{aligned} E(t)\le \Theta \hbox {e}^{-\nu t},\ \ \forall \ t\ge 0, \end{aligned}$$

    for some constants \(\Theta>0,\nu >0.\)

  2.

    Let \(h_2(t)=\tau _0\hbox {e}^{-(1+t)^{\tau _1}},\ t\ge 0, \ \ \mathrm{where }\ \tau _0>0 \ \mathrm{and}\ 0<\tau _1<1\), with \(\tau _0\) chosen such that \((A_1)\) holds. Therefore,

    $$\begin{aligned} h_2'(t)=-\tau _0\tau _1(1+t)^{\tau _1-1}\hbox {e}^{-(1+t)^{\tau _1}}=-\xi (t) G(h_2(t)), \end{aligned}$$

    where

    $$\begin{aligned} \xi (t)=\tau _1(1+t)^{\tau _1-1},\ \ G(s)=s. \end{aligned}$$

    In this case, the solution energy (3.1)\(_1\) satisfies

    $$\begin{aligned} E(t)\le \Theta \hbox {e}^{-\nu (1+t)^{\tau _1}},\ \ \forall \ t\ge 0, \end{aligned}$$

    for some constants \(\Theta>0,\nu >0.\)

  3.

    Let \(h_3(t)=\frac{\tau _0}{(1+t)^{\tau _1}},\ t\ge 0, \ \ \mathrm{where }\ \ \tau _0>0, \ \mathrm{and}\ \tau _1>1\) with \(\tau _0\) chosen such that (\(A_1)\) holds. Therefore,

    $$\begin{aligned} h_3'(t)=\frac{-\tau _0\tau _1}{(1+t)^{\tau _1+1}}=-\xi _0\, G(h_3(t)), \ \ \mathrm{where }\ \ \ \ G(s)=s^q,\ \ q=\frac{\tau _1 +1}{\tau _1},\ \ \xi _0=\tau _1\tau _0^{1-q}, \end{aligned}$$

    with q satisfying \(1<q<2.\) Hence, the solution energy (3.1) satisfies

    $$\begin{aligned} E(t) \le \frac{\Theta }{(1+t)^{\tau _1}} \end{aligned}$$

    for some constant \(\Theta >0.\)
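The identities \(h_i'(t)=-\xi (t)G(h_i(t))\) claimed in the three examples can be spot-checked numerically (illustrative only; the parameter values below are sample choices, and for \(h_3\) the constant \(\tau _1\tau _0^{1-q}\) is absorbed into \(\xi \)):

```python
import math

def dh(h, t, eps=1e-6):
    """Centered finite-difference approximation of h'(t)."""
    return (h(t + eps) - h(t - eps)) / (2 * eps)

# Example 1: h1(t) = t0*exp(-2*t0*t),  h1' = -2*t0*G(h1), G(s) = s
t0 = 0.5
h1 = lambda t: t0 * math.exp(-2 * t0 * t)

# Example 2: h2(t) = t0*exp(-(1+t)**t1), h2' = -xi(t)*G(h2), G(s) = s
t1 = 0.5
h2 = lambda t: t0 * math.exp(-(1 + t) ** t1)
xi2 = lambda t: t1 * (1 + t) ** (t1 - 1)

# Example 3: h3(t) = a0/(1+t)**a1, h3' = -a1*a0**(1-q)*G(h3), G(s) = s**q
a0, a1 = 0.25, 2.0
q = (a1 + 1) / a1
h3 = lambda t: a0 / (1 + t) ** a1
xi3 = a1 * a0 ** (1 - q)

for t in (0.0, 1.0, 5.0):
    assert abs(dh(h1, t) + 2 * t0 * h1(t)) < 1e-7
    assert abs(dh(h2, t) + xi2(t) * h2(t)) < 1e-7
    assert abs(dh(h3, t) + xi3 * h3(t) ** q) < 1e-7
```

Each assertion compares a finite difference of the kernel with the claimed right-hand side \(-\xi (t)G(h(t))\), matching the decay laws that drive the three energy estimates above.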