1 Introduction

The direct and indirect stability of locally coupled wave equations with local damping has attracted much interest in recent years. The study of coupled systems is also motivated by several physical models such as Timoshenko and Bresse systems (see for instance Wehbe and Ghader 2021; Bassam et al. 2015; Akil et al. 2020, 2021; Akil and Badawi 2022; Abdallah et al. 2018; Fatori et al. 2014; Fatori and Monteiro 2012). The exponential or polynomial stability of the wave equation with local Kelvin-Voigt damping is considered, for instance, in Liu and Rao (2006), Tebou (2016), Burq and Sun (2022). On the other hand, the direct and indirect stability of locally coupled wave equations with local viscous dampings is analyzed in Alabau-Boussouira and Léautaud (2013), Kassem et al. (2019), Gerbi et al. (2021). In this paper, we are interested in locally coupled wave equations with local Kelvin-Voigt dampings. Before stating our main contributions, let us mention similar results for such systems. In 2019, Hayek et al. (2020) studied the stabilization of a multi-dimensional system of weakly coupled wave equations with one or two local Kelvin-Voigt dampings and non-smooth coefficients at the interface; they established different stability results. In 2021, Wehbe et al. (2021) studied the stability of an elastic/viscoelastic transmission problem of locally coupled waves with non-smooth coefficients, by considering:

$$\begin{aligned} \left\{ \begin{array}{llll} \displaystyle u_{tt}-\left( au_x +{b_0 \chi _{(\alpha _1,\alpha _3)}} {u_{tx}}\right) _x +{c_0 \chi _{(\alpha _2,\alpha _4)}}y_t =0,&{} \text {in}\ (0,L)\times (0,\infty ) ,&{}\\ y_{tt}-y_{xx}-{c_0 \chi _{(\alpha _2,\alpha _4)}}u_t =0, &{}\text {in} \ (0,L)\times (0,\infty ) ,&{}\\ u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,&{} \text {in} \ (0,\infty ) ,&{} \end{array}\right. \end{aligned}$$

where \(a, b_0, L >0\), \(c_0 \ne 0\), and \(0<\alpha _1<\alpha _2<\alpha _3<\alpha _4<L\). They established a polynomial energy decay rate of type \(t^{-1}\). In the same year, Akil et al. (2021) studied the stability of singular local interaction elastic/viscoelastic coupled wave equations with time delay, by considering:

$$\begin{aligned} \left\{ \begin{array}{llll}\displaystyle u_{tt}-\left[ au_x +{ \chi _{(0,\beta )}}(\kappa _1 {u_{tx}}+\kappa _2 u_{tx}(t-\tau ))\right] _x +{c_0 \chi _{(\alpha ,\gamma )}}y_t =0,&{} \text {in}\ (0,L)\times (0,\infty ) ,&{}\\ y_{tt}-y_{xx}-{c_0 \chi _{(\alpha ,\gamma )}}u_t =0, &{}\text {in} \ (0,L)\times (0,\infty ) ,&{}\\ u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,&{} \text {in} \ (0,\infty ) ,&{} \end{array}\right. \end{aligned}$$

where \(a, \kappa _1, L>0\), \(\kappa _2, c_0 \ne 0\), and \(0<\alpha<\beta<\gamma <L\). They proved that the energy of their system decays polynomially like \(t^{-1}\). In 2021, Akil et al. studied the stability of coupled wave models with local memory in a past history framework and non-smooth coefficients at the interface, by considering:

$$\begin{aligned} \left\{ \begin{array}{llll} \displaystyle u_{tt}-\left( au_x +{ b_0 \chi _{(0,\beta )}} {\int _0^{\infty }g(s)u_{x}(t-s)ds}\right) _x +{c_0 \chi _{(\alpha ,\gamma )}}y_t =0,&{} \text {in}\ (0,L)\times (0,\infty ) ,&{}\\ y_{tt}-y_{xx}-{c_0 \chi _{(\alpha ,\gamma )}}u_t =0, &{}\text {in} \ (0,L)\times (0,\infty ) ,&{}\\ u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,&{} \text {in} \ (0,\infty ) ,&{} \end{array}\right. \end{aligned}$$

where \(a, b_0, L >0\), \(c_0 \ne 0\), \(0<\alpha<\beta<\gamma <L\), and \(g:[0,\infty ) \rightarrow (0,\infty )\) is the convolution kernel. They established an exponential energy decay rate when the two waves propagate with the same speed. In the case of different speeds of propagation, they proved that the energy of their system decays polynomially with rate \(t^{-1}\). In 2022, Akil et al. studied the stability of a multi-dimensional elastic/viscoelastic transmission problem with Kelvin-Voigt damping and a non-smooth coefficient at the interface; they established polynomial stability results under a geometric control condition. In all the works mentioned above, the authors deal with locally coupled wave equations with local damping under the assumption that the damping and coupling regions intersect. The aim of this paper is to study the direct/indirect stability of locally coupled wave equations with one or two local Kelvin-Voigt dampings localized via non-smooth coefficients, assuming that the supports of the damping and coupling coefficients are disjoint. In the first part of this paper, we consider the following one-dimensional coupled system:

$$\begin{aligned} u_{tt}-\left( au_x+bu_{tx}\right) _x+c y_t= & {} 0,\quad (x,t)\in (0,L)\times (0,\infty ), \end{aligned}$$
(1.1)
$$\begin{aligned} y_{tt}-\left( y_x+dy_{tx}\right) _x-cu_t= & {} 0,\quad (x,t)\in (0,L)\times (0,\infty ), \end{aligned}$$
(1.2)

with full Dirichlet boundary conditions,

$$\begin{aligned} u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,\ t\in (0,\infty ), \end{aligned}$$
(1.3)

and the following initial conditions

$$\begin{aligned} u(\cdot ,0)=u_0(\cdot ),\ u_t(\cdot ,0)=u_1(\cdot ),\ y(\cdot ,0)=y_0(\cdot )\quad \text {and}\quad y_t(\cdot ,0)=y_1(\cdot ), \ x \in (0,L).\qquad \end{aligned}$$
(1.4)

In this part, for all \(b_0, d_0 >0\) and \(c_0 \ne 0\), we treat the following three cases:

Case 1 (See Figure 1):

$$\begin{aligned} \left\{ \begin{array}{l} b(x)=b_0 \chi _{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi _{(c_1,c_2)}(x),\ \quad d(x)=d_0\chi _{(d_1,d_2)}(x),\\ \text {where}\ 0<b_1<b_2<c_1<c_2<d_1<d_2<L. \end{array} \right. \end{aligned}$$
(C1)

Case 2 (See Figure 2):

$$\begin{aligned} \left\{ \begin{array}{l} b(x)=b_0 \chi _{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi _{(c_1,c_2)}(x),\ \quad d(x)=d_0\chi _{(d_1,d_2)}(x),\\ \text {where}\ 0<b_1<b_2<d_1<d_2<c_1<c_2<L. \end{array} \right. \end{aligned}$$
(C2)

Case 3 (See Figure 3):

$$\begin{aligned} \left\{ \begin{array}{l} b(x)=b_0 \chi _{(b_1,b_2)}(x) ,\ \quad c(x)=c_0\chi _{(c_1,c_2)}(x),\ \quad d(x)=0,\\ \text {where}\ 0<b_1<b_2<c_1<c_2<L. \end{array} \right. \end{aligned}$$
(C3)
Fig. 1 Geometric description of the functions \(b\), \(c\) and \(d\) in Case 1

Fig. 2 Geometric description of the functions \(b\), \(c\) and \(d\) in Case 2

Fig. 3 Geometric description of the functions \(b\) and \(c\) in Case 3
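The three configurations above differ only in the relative position of the supports of \(b\), \(c\) and \(d\). As a purely illustrative sketch (the values \(L=7\), \(b_0=d_0=1\), \(c_0=0.5\) and the interval endpoints below are assumptions made for the picture, not data from the analysis), the Case 1 coefficients can be generated and their ordering checked as follows:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative parameters only (Case 1 ordering 0 < b1 < b2 < c1 < c2 < d1 < d2 < L).
L = 7.0
b0, c0, d0 = 1.0, 0.5, 1.0
b1, b2, c1, c2, d1, d2 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
assert 0 < b1 < b2 < c1 < c2 < d1 < d2 < L   # ordering required in (C1)

def chi(lo, hi, x):
    """Characteristic function of the interval (lo, hi)."""
    return ((x > lo) & (x < hi)).astype(float)

x = np.linspace(0.0, L, 1001)
b = b0 * chi(b1, b2, x)   # Kelvin-Voigt damping acting on the u-equation
c = c0 * chi(c1, c2, x)   # coupling coefficient
d = d0 * chi(d1, d2, x)   # Kelvin-Voigt damping acting on the y-equation

for f, name in [(b, "b"), (c, "c"), (d, "d")]:
    plt.plot(x, f, label=name)
plt.xlabel("x"); plt.legend(); plt.title("Case 1: disjoint supports of b, c, d")
plt.show()
```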

In the second part of this paper, we consider the following multi-dimensional coupled system:

$$\begin{aligned} u_{tt}-{{\,\mathrm{div}\,}}(\nabla u+b\nabla u_t)+cy_t= & {} 0\quad \text {in}\ \Omega \times (0,\infty ), \end{aligned}$$
(1.5)
$$\begin{aligned} y_{tt}-\Delta y-cu_t= & {} 0\quad \text {in}\ \Omega \times (0,\infty ), \end{aligned}$$
(1.6)

with full Dirichlet boundary condition

$$\begin{aligned} u=y=0\quad \text {on}\quad \Gamma \times (0,\infty ), \end{aligned}$$
(1.7)

and the following initial condition

$$\begin{aligned} u(\cdot ,0)=u_0(\cdot ),\ u_t(\cdot ,0)=u_1(\cdot ),\ y(\cdot ,0)=y_0(\cdot )\ \text {and}\ y_t(\cdot ,0)=y_1(\cdot ) \ \text {in} \ \Omega , \end{aligned}$$
(1.8)

where \(\Omega \subset \mathbb R^N\), \(N\ge 2\) is an open and bounded set with boundary \(\Gamma \) of class \(C^2\). Here, \(b,c\in L^{\infty }(\Omega )\) are such that \(b:\Omega \rightarrow \mathbb R_+\) is the viscoelastic damping coefficient, \(c:\Omega \rightarrow \mathbb R\) is the coupling function and

$$\begin{aligned} b(x)\ge b_0>0\ \ \text {in}\ \ \omega _b\subset \Omega , \quad c(x)\ge c_0\ne 0\ \ \text {in}\ \ \omega _c\subset \Omega \quad \text {and}\quad c(x)=0\ \ \text {on}\ \ \Omega \backslash \omega _c \end{aligned}$$
(1.9)

and

$$\begin{aligned} {{\,\mathrm{meas}\,}}\left( \overline{\omega _c}\cap \Gamma \right) >0\quad \text {and}\quad \overline{\omega _b}\cap \overline{\omega _c}=\emptyset . \end{aligned}$$
(1.10)

In the first part of this paper, we study the direct and indirect stability of system (1.1)-(1.4) by considering the three cases (C1), (C2), and (C3). In Sect. 2.1, we prove the well-posedness of our system by using a semigroup approach. In Sect. 2.2, by using the general criterion of Arendt-Batty, we prove the strong stability of our system in the absence of compactness of the resolvent. Finally, in Sect. 2.3, by using a frequency domain approach combined with a specific multiplier method, we prove that the energy of our system decays polynomially like \(t^{-4}\) or \(t^{-1}\).

In the second part of this paper, we study the indirect stability of System (1.5)-(1.8). In Sect. 3.1, we prove the well-posedness of our system by using a semigroup approach. Finally, in Sect. 3.2, under some geometric control condition, we prove the strong stability of this system.

2 Direct and indirect stability in the one dimensional case

In this section, we study the well-posedness, strong stability, and polynomial stability of system (1.1)-(1.4).

2.1 Well-posedness

In this section, we will establish the well-posedness of System (1.1)-(1.4) using a semigroup approach. The energy of system (1.1)-(1.4) is given by

$$\begin{aligned} E(t)=\frac{1}{2}\int _0^L \left( |u_t|^2+a|u_x|^2+|y_t|^2+|y_x|^2\right) dx. \end{aligned}$$

Let \(\left( u,u_{t},y,y_{t}\right) \) be a regular solution of (1.1)-(1.4). Multiplying (1.1) and (1.2) by \(\overline{u_t}\) and \(\overline{y_t}\), respectively, integrating over \((0,L)\), summing the two identities, taking the real part, and using the boundary conditions in (1.3), we get

$$\begin{aligned} E^\prime (t)=- \int _0^L \left( b|u_{tx}|^2+d|y_{tx}|^2\right) dx. \end{aligned}$$

Thus, if (C1) or (C2) or (C3) holds, we get \(E^\prime (t)\le 0\). Therefore, system (1.1)-(1.4) is dissipative in the sense that its energy is non-increasing with respect to time t. Let us define the energy space \({\mathcal {H}}\) by

$$\begin{aligned} {\mathcal {H}}=(H_0^1(0,L)\times L^2(0,L))^2. \end{aligned}$$

The energy space \({\mathcal {H}}\) is equipped with the following inner product:

$$\begin{aligned} \left( U,U_1\right) _{\mathcal {H}}=\int _{0}^Lv\overline{{v}}_1dx+a\int _{0}^Lu_x(\overline{{u}}_1)_xdx+\int _{0}^Lz\overline{{z}}_1dx+\int _{0}^Ly_x(\overline{{y}}_1)_xdx, \end{aligned}$$

for all \(U=\left( u,v,y,z\right) ^\top \) and \(U_1=\left( u_1,v_1,y_1,z_1\right) ^\top \) in \({\mathcal {H}}\). We define the unbounded linear operator \({\mathcal {A}}: D\left( {\mathcal {A}}\right) \subset {\mathcal {H}}\longrightarrow {\mathcal {H}}\) by

$$\begin{aligned} D({\mathcal {A}})= & {} \left\{ \displaystyle U=(u,v,y,z)^\top \in {\mathcal {H}};\ v,z\in H_0^1(0,L), \right. \\&\left. (au_{x}+bv_{x})_{x}\in L^2(0,L), (y_{x}+dz_x)_x\in L^2(0,L) \right\} \end{aligned}$$

and

$$\begin{aligned}&{\mathcal {A}}\left( u, v,y, z\right) ^\top =\left( v,(au_{x}+bv_{x})_{x}-cz, z, (y_x+dz_x)_x+cv \right) ^{\top },\\&\forall U=\left( u, v,y, z\right) ^\top \in D\left( {\mathcal {A}}\right) . \end{aligned}$$

Now, if \(U=(u,u_t,y,y_t)^\top \) is the state of system (1.1)-(1.4), then it is transformed into the following first-order evolution equation:

$$\begin{aligned} U_t={\mathcal {A}}U,\quad U(0)=U_0, \end{aligned}$$
(2.1)

where \(U_0=(u_0,u_1,y_0,y_1)^\top \in \mathcal H\).

Proposition 2.1

Assume that (C1), (C2), or (C3) holds. Then, the unbounded linear operator \(\mathcal A\) is m-dissipative in the Hilbert space \(\mathcal H\).

Proof

For all \(U=(u,v,y,z)^{\top }\in D({\mathcal {A}})\), we have

$$\begin{aligned} \Re \left<{\mathcal {A}}U,U\right>_{{\mathcal {H}}}=-\int _0^Lb|v_x|^2dx-\int _0^Ld|z_x|^2dx\le 0, \end{aligned}$$

which implies that \({\mathcal {A}}\) is dissipative. Now, similar to Proposition 2.1 in Wehbe et al. (2021), we can prove that there exists a unique solution \(U=(u,v,y,z)^{\top }\in D({\mathcal {A}})\) of

$$\begin{aligned} -{\mathcal {A}}U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top \in {\mathcal {H}}. \end{aligned}$$

Then \(0\in \rho ({\mathcal {A}})\), \({\mathcal {A}}\) is an isomorphism, and since \(\rho ({\mathcal {A}})\) is open in \({\mathbb {C}}\) (see Theorem 6.7 (Chapter III) in Kato 1995), we easily get \(R(\lambda I -{\mathcal {A}}) = {{\mathcal {H}}}\) for sufficiently small \(\lambda >0 \). This, together with the dissipativeness of \({\mathcal {A}}\), implies that \(D\left( {\mathcal {A}}\right) \) is dense in \({{\mathcal {H}}}\) and that \({\mathcal {A}}\) is m-dissipative in \({{\mathcal {H}}}\) (see Theorems 4.5, 4.6 in Pazy 1983). \(\square \)

According to the Lumer–Phillips theorem (see Pazy 1983), the operator \(\mathcal A\) generates a \(C_{0}\)-semigroup of contractions \(e^{t\mathcal A}\) in \(\mathcal H\), which gives the well-posedness of (2.1). Then, we have the following result:

Theorem 2.2

For all \(U_0 \in \mathcal H\), system (2.1) admits a unique weak solution

$$\begin{aligned} U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,\mathcal H). \end{aligned}$$

Moreover, if \(U_0 \in D(\mathcal A)\), then the system (2.1) admits a unique strong solution

$$\begin{aligned} U(t)=e^{t\mathcal A}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A))\cap C^1 (\mathbb R_+ ,\mathcal H). \end{aligned}$$
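Theorem 2.2 can be illustrated numerically: semi-discretizing (1.1)-(1.4) in space by standard finite differences yields a linear ODE system \(U_t=\mathcal A_hU\) whose discrete energy is non-increasing, mirroring the dissipativity of \(\mathcal A\). The sketch below is only a heuristic check under assumed Case 1 parameters and an assumed mesh; it is not part of the analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Case 1 data (illustrative values, not taken from the paper).
L, a = 7.0, 1.0
b0, c0, d0 = 1.0, 0.5, 1.0
b1, b2, c1, c2, d1, d2 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0

n = 120                                     # interior grid points
h = L / (n + 1)
x  = np.linspace(h, L - h, n)               # interior nodes
xm = np.linspace(h / 2, L - h / 2, n + 1)   # midpoints x_{i+1/2}

def chi(lo, hi, s):
    return ((s > lo) & (s < hi)).astype(float)

def var_lap(p):
    """Finite-difference matrix of w -> ((p w_x)_x) with w = 0 at both ends;
    p holds the midpoint values p(x_{i+1/2}), i = 0,...,n."""
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = -(p[i] + p[i + 1]) / h**2
        if i > 0:
            M[i, i - 1] = p[i] / h**2
        if i < n - 1:
            M[i, i + 1] = p[i + 1] / h**2
    return M

Da = var_lap(a * np.ones(n + 1))        # (a u_x)_x
Db = var_lap(b0 * chi(b1, b2, xm))      # (b u_tx)_x
D1 = var_lap(np.ones(n + 1))            # (y_x)_x
Dd = var_lap(d0 * chi(d1, d2, xm))      # (d y_tx)_x
C  = np.diag(c0 * chi(c1, c2, x))       # multiplication by c(x)

# First-order form U' = A U, U = (u, v, y, z) with v = u_t, z = y_t, cf. (2.1).
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I, Z, Z],
              [Da, Db, Z, -C],
              [Z, Z, Z, I],
              [Z, C, D1, Dd]])

def energy(U):
    """Discrete analogue of E(t)."""
    u, v, y, z = np.split(U, 4)
    ux = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    yx = np.diff(np.concatenate(([0.0], y, [0.0]))) / h
    return 0.5 * h * (v @ v + a * (ux @ ux) + z @ z + yx @ yx)

U0 = np.concatenate([np.sin(np.pi * x / L), np.zeros(n),
                     np.sin(2 * np.pi * x / L), np.zeros(n)])
sol = solve_ivp(lambda t, U: A @ U, (0.0, 20.0), U0, method="BDF", jac=A,
                t_eval=np.linspace(0.0, 20.0, 81), rtol=1e-8, atol=1e-10)
E = np.array([energy(U) for U in sol.y.T])
print("E(0) =", E[0], "  E(20) =", E[-1])
# Largest increment of the discrete energy; should be non-positive up to solver tolerance.
print("largest energy increment:", np.diff(E).max())
```

The semi-discrete scheme above reproduces the identity \(E'(t)=-\int _0^L(b|u_{tx}|^2+d|y_{tx}|^2)dx\) exactly at the matrix level, so any observed growth can only come from the time integrator's tolerance.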

2.2 Strong stability

In this section, we will prove the strong stability of system (1.1)-(1.4). We define the following conditions:

$$\begin{aligned} (C1) \ \text {holds}\quad \text {and} \quad |c_0|<\min \left( \frac{\sqrt{a}}{c_{2}-c_{1}},\frac{1}{c_{2}-c_{1}}\right) , \end{aligned}$$
(SSC1)

or

$$\begin{aligned} (C3) \ \text {holds},\quad a=1\quad \text {and}\quad |c_{0}|<\frac{1}{c_{2}-c_{1}}. \end{aligned}$$
(SSC3)
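For given data, the smallness conditions (SSC1) and (SSC3) are straightforward to verify; the snippet below (with hypothetical parameter values) is a direct transcription of the two inequalities.

```python
import math

def ssc1(a, c0, c1, c2):
    """Condition (SSC1): |c0| < min(sqrt(a)/(c2-c1), 1/(c2-c1))."""
    return abs(c0) < min(math.sqrt(a), 1.0) / (c2 - c1)

def ssc3(a, c0, c1, c2):
    """Condition (SSC3): a = 1 and |c0| < 1/(c2-c1)."""
    return a == 1.0 and abs(c0) < 1.0 / (c2 - c1)

# Hypothetical values: a = 2, c0 = 0.5, (c1, c2) = (3, 4).
print(ssc1(2.0, 0.5, 3.0, 4.0))   # True: 0.5 < min(sqrt(2), 1)/1 = 1
print(ssc3(2.0, 0.5, 3.0, 4.0))   # False: (SSC3) additionally requires a = 1
```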

The main result of this part is the following theorem:

Theorem 2.3

Assume that (SSC1) or (C2) or (SSC3) holds. Then, the \(C_0\)-semigroup of contractions \(\left( e^{t{\mathcal {A}}}\right) _{t\ge 0}\) is strongly stable in \({\mathcal {H}}\); i.e. for all \(U_0\in {\mathcal {H}}\), the solution of (2.1) satisfies

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert e^{t{\mathcal {A}}}U_0\Vert _{{\mathcal {H}}}=0. \end{aligned}$$
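On the finite-difference approximation \(\mathcal A_h\) of \(\mathcal A\) used in the sketch after Theorem 2.2, strong stability is reflected in the fact that no eigenvalue of \(\mathcal A_h\) lies on the imaginary axis. The following rough check (same assumed Case 1 data) is only a sanity test on the discrete model and proves nothing about the PDE.

```python
import numpy as np

# Assumed Case 1 data and the same finite-difference generator as in Sect. 2.1's sketch.
L, a = 7.0, 1.0
b0, c0, d0 = 1.0, 0.5, 1.0
b1, b2, c1, c2, d1, d2 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
n = 120; h = L / (n + 1)
x  = np.linspace(h, L - h, n)
xm = np.linspace(h / 2, L - h / 2, n + 1)
chi = lambda lo, hi, s: ((s > lo) & (s < hi)).astype(float)

def var_lap(p):            # w -> ((p w_x)_x) with homogeneous Dirichlet conditions
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = -(p[i] + p[i + 1]) / h**2
        if i > 0:
            M[i, i - 1] = p[i] / h**2
        if i < n - 1:
            M[i, i + 1] = p[i + 1] / h**2
    return M

Da, Db = var_lap(a * np.ones(n + 1)), var_lap(b0 * chi(b1, b2, xm))
D1, Dd = var_lap(np.ones(n + 1)),     var_lap(d0 * chi(d1, d2, xm))
C = np.diag(c0 * chi(c1, c2, x))
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I, Z, Z], [Da, Db, Z, -C], [Z, Z, Z, I], [Z, C, D1, Dd]])

ev = np.linalg.eigvals(A)
print("largest real part among the eigenvalues of A_h:", ev.real.max())
# If this value is strictly negative, the matrix semigroup e^{tA_h} decays;
# this mimics, but does not prove, the strong stability asserted in Theorem 2.3.
```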

According to Theorem A.2, to prove Theorem 2.3, we need to prove that the operator \(\mathcal A\) has no pure imaginary eigenvalues and that \(\sigma (\mathcal A)\cap i\mathbb R\) is countable. The proof is divided into the following lemmas:

Lemma 2.4

Assume that (SSC1) or (C2) or (SSC3) holds. Then, for all \({\lambda }\in {\mathbb {R}}\), \(i{\lambda }I-{\mathcal {A}}\) is injective, i.e.

$$\begin{aligned} \ker \left( i{\lambda }I-{\mathcal {A}}\right) =\left\{ 0\right\} . \end{aligned}$$

Proof

From Proposition 2.1, we have \(0\in \rho ({\mathcal {A}})\). We still need to show the result for \({\lambda }\in \mathbb R^{*}\). For this aim, suppose that there exists a real number \({\lambda }\ne 0\) and \(U=\left( u,v,y,z\right) ^\top \in D(\mathcal A)\) such that

$$\begin{aligned} \mathcal AU=i{\lambda }U. \end{aligned}$$

Equivalently, we have

$$\begin{aligned} v= & {} i{\lambda }u, \end{aligned}$$
(2.2)
$$\begin{aligned} (au_{x}+bv_{x})_{x}-cz= & {} i{\lambda }v, \end{aligned}$$
(2.3)
$$\begin{aligned} z= & {} i{\lambda }y, \end{aligned}$$
(2.4)
$$\begin{aligned} (y_{x}+dz_x)_x+cv= & {} i{\lambda }z. \end{aligned}$$
(2.5)

Next, a straightforward computation gives

$$\begin{aligned} 0=\Re \left<i{\lambda }U,U\right>_{\mathcal H}=\Re \left<\mathcal AU,U\right>_{\mathcal H}=-\int _0^L b|v_x|^2dx-\int _0^L d|z_x|^2dx. \end{aligned}$$
(2.6)

Inserting (2.2) and (2.4) in (2.3) and (2.5), we get

$$\begin{aligned} {\lambda }^2u+(au_{x}+i{\lambda }bu_x)_x-i{\lambda }cy= & {} 0\quad \text {in}\quad (0,L), \end{aligned}$$
(2.7)
$$\begin{aligned} {\lambda }^2y+(y_{x}+i{\lambda }dy_x)_x+i{\lambda }cu= & {} 0\quad \text {in}\quad (0,L), \end{aligned}$$
(2.8)

with the boundary conditions

$$\begin{aligned} u(0)=u(L)=y(0)=y(L)=0. \end{aligned}$$
(2.9)

\(\bullet \) Case 1: Assume that (SSC1) holds. From (2.2), (2.4), and (2.6), we deduce that

$$\begin{aligned} u_x= v_x=0 \ \ \text {in} \ \ (b_1,b_2) \ \ \text {and} \ \ y_x=z_x =0 \ \ \text {in} \ \ (d_1,d_2). \end{aligned}$$
(2.10)

Using (2.7), (2.8), and (2.10), we obtain

$$\begin{aligned} {\lambda }^2u+au_{xx}=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad {\lambda }^2y+y_{xx}=0\ \ \text {in}\ \ (c_2,L). \end{aligned}$$
(2.11)

Differentiating the above equations with respect to \(x\) and using (2.10), we get

$$\begin{aligned} \left\{ \begin{array}{llllll} {\lambda }^2u_x+au_{xxx}=0&{}\text {in}&{}(0,c_1),\\ u_x=0&{}\text {in}&{}(b_1,b_2)\subset (0,c_1), \end{array} \right. \quad \text {and}\quad \left\{ \begin{array}{lll} {\lambda }^2y_x+y_{xxx}=0&{}\text {in}&{}(c_2,L),\\ y_x=0&{}\text {in}&{}(d_1,d_2)\subset (c_2,L). \end{array} \right. \nonumber \\ \end{aligned}$$
(2.12)

Using the unique continuation theorem, we get

$$\begin{aligned} u_x=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad y_x=0\ \ \text {in}\ \ (c_2,L). \end{aligned}$$
(2.13)

Using (2.13) and the fact that \(u(0)=y(L)=0\), we get

$$\begin{aligned} u=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad y=0\ \ \text {in}\ \ (c_2,L). \end{aligned}$$
(2.14)

Now, our aim is to prove that \(u=y=0 \ \text {in} \ (c_1,c_2)\). For this aim, using (2.14) and the fact that \(u, y\in C^1([0,L])\), we obtain the following boundary conditions:

$$\begin{aligned} u(c_1)=u_x(c_1)=y(c_2)=y_x(c_2)=0. \end{aligned}$$
(2.15)

Multiplying (2.7) by \(-2(x-c_2){\overline{u}}_x\), integrating over \((c_1,c_2)\) and taking the real part, we get

$$\begin{aligned}&-\int _{c_1}^{c_2}{\lambda }^2(x-c_2)(|u|^2)_xdx-a\int _{c_1}^{c_2}(x-c_2)\left( |u_x|^2\right) _xdx\nonumber \\&\quad +2\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y{\overline{u}}_xdx\right) =0, \end{aligned}$$
(2.16)

using integration by parts and (2.15), we get

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx+a\int _{c_1}^{c_2}|u_x|^2dx+2\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y{\overline{u}}_xdx\right) =0. \end{aligned}$$
(2.17)

Multiplying (2.8) by \(-2(x-c_1){\overline{y}}_x\), integrating over \((c_1,c_2)\), taking the real part, and using the same argument as above, we get

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }y|^2dx+\int _{c_1}^{c_2}|y_x|^2dx-2\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_1)u{\overline{y}}_x dx\right) =0. \end{aligned}$$
(2.18)

Adding (2.17) and (2.18), we get

$$\begin{aligned}&\int _{c_1}^{c_2}|{\lambda }u|^2dx+a\int _{c_1}^{c_2}|u_x|^2dx+\int _{c_1}^{c_2}|{\lambda }y|^2dx+\int _{c_1}^{c_2}|y_x|^2dx \nonumber \\&\quad \le 2|{\lambda }||c_0|(c_2-c_1)\int _{c_1}^{c_2}\left( |y||u_x|+|u||y_x|\right) dx. \end{aligned}$$
(2.19)

Using Young’s inequality in (2.19), we get

$$\begin{aligned}&\displaystyle \int _{c_1}^{c_2}|{\lambda }u|^2dx+a\int _{c_1}^{c_2}|u_x|^2dx+\int _{c_1}^{c_2}|{\lambda }y|^2dx \nonumber \\&\quad +\int _{c_1}^{c_2}|y_x|^2dx\le \frac{c_0^2(c_2-c_1)^2}{a}\int _{c_1}^{c_2}|{\lambda }y|^2dx\nonumber \\&\quad \displaystyle +\, a\int _{c_1}^{c_2}|u_x|^2dx+c_0^2(c_2-c_1)^2\int _{c_1}^{c_2}|{\lambda }u|^2dx+\int _{c_1}^{c_2}|y_x|^2dx; \end{aligned}$$
(2.20)

consequently, we get

$$\begin{aligned} \left( 1-\frac{c_0^2(c_2-c_1)^2}{a}\right) \int _{c_1}^{c_2}|{\lambda }y|^2dx+\left( 1-c_0^2(c_2-c_1)^2\right) \int _{c_1}^{c_2}|{\lambda }u|^2dx\le 0. \end{aligned}$$
(2.21)

Thus, from the above inequality and (SSC1), we get

$$\begin{aligned} u=y=0 \ \ \text {in} \ \ (c_1,c_2). \end{aligned}$$
(2.22)

Next, we need to prove that \(u=0\) in \((c_2,L)\) and \(y=0\) in \((0,c_1)\). For this aim, from (2.22) and the fact that \(u,y \in C^1([0,L])\), we obtain

$$\begin{aligned} u(c_2)=u_x(c_2)=0\quad \text {and}\quad y(c_1)=y_x(c_1)=0. \end{aligned}$$
(2.23)

It follows from (2.7), (2.8) and (2.23) that

$$\begin{aligned} \left\{ \begin{array}{lll} {\lambda }^2u+au_{xx}=0\ \ \text {in}\ \ (c_2,L),\\ u(c_2)=u_x(c_2)=u(L)=0, \end{array} \right. \quad \text {and}\quad \left\{ \begin{array}{rcc} {\lambda }^2y+y_{xx}=0\ \ \text {in}\ \ (0,c_1),\\ y(0)=y(c_1)=y_x(c_1)=0. \end{array} \right. \end{aligned}$$
(2.24)

The Holmgren uniqueness theorem yields

$$\begin{aligned} u=0 \ \ \text {in} \ \ (c_2,L) \ \ \text {and} \ \ y=0 \ \ \text {in} \ \ (0,c_1). \end{aligned}$$
(2.25)

Therefore, from (2.2), (2.4), (2.14), (2.22) and (2.25), we deduce that

$$\begin{aligned} U=0. \end{aligned}$$

\(\bullet \) Case 2: Assume that (C2) holds. From (2.2), (2.4) and (2.6), we deduce that

$$\begin{aligned} u_x= v_x=0 \ \ \text {in} \ \ (b_1,b_2) \ \ \text {and} \ \ y_x=z_x =0 \ \ \text {in} \ \ (d_1,d_2). \end{aligned}$$
(2.26)

Using (2.7), (2.8) and (2.26), we obtain

$$\begin{aligned} {\lambda }^2u+au_{xx}=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad {\lambda }^2y+y_{xx}=0\ \ \text {in}\ \ (0,c_1). \end{aligned}$$
(2.27)

Differentiating the above equations with respect to \(x\) and using (2.26), we get

$$\begin{aligned} \left\{ \begin{array}{lll} {\lambda }^2u_x+au_{xxx}=0&{}\text {in}&{}(0,c_1),\\ u_x=0&{}\text {in}&{}(b_1,b_2)\subset (0,c_1), \end{array} \right. \quad \text {and}\quad \left\{ \begin{array}{lll} {\lambda }^2y_x+y_{xxx}=0&{}\text {in}&{}(0,c_1),\\ y_x=0&{}\text {in}&{}(d_1,d_2)\subset (0,c_1). \end{array} \right. \qquad \end{aligned}$$
(2.28)

Using the unique continuation theorem, we get

$$\begin{aligned} u_x=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad y_x=0\ \ \text {in}\ \ (0,c_1). \end{aligned}$$
(2.29)

From (2.29) and the fact that \(u(0)=y(0)=0\), we get

$$\begin{aligned} u=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad y=0\ \ \text {in}\ \ (0,c_1). \end{aligned}$$
(2.30)

Using the fact that \(u,y\in C^1([0,L])\) and (2.30), we get

$$\begin{aligned} \left\{ \begin{array}{lll} {\lambda }^2u+au_{xx}-i{\lambda }c_0y=0&{}\text {in}&{}(c_1,c_2),\\ {\lambda }^2y+y_{xx}+i{\lambda }c_0u=0&{}\text {in}&{}(c_1,c_2),\\ u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=0. \end{array} \right. \end{aligned}$$
(2.31)

Now, using the definition of c(x) in (2.7)-(2.8), (2.26) and (2.31), we get

$$\begin{aligned} u=y=0\ \text { in} \ (c_1,c_2). \end{aligned}$$

Again, using the fact that \(u,y\in C^1([0,L])\), we get

$$\begin{aligned} u(c_2)=u_x(c_2)=y(c_2)=y_x(c_2)=0. \end{aligned}$$
(2.32)

Now, using the same argument as in Case 1, we obtain

$$\begin{aligned} u=y=0 \ \text {in} \ (c_2,L); \end{aligned}$$

consequently, we deduce that

$$\begin{aligned} U=0. \end{aligned}$$

\(\bullet \) Case 3: Assume that (SSC3) holds. Using the same argument as in Cases 1 and 2, we obtain

$$\begin{aligned} u=0\ \ \text {in}\ \ (0,c_1)\quad \text {and}\quad u(c_1)=u_x(c_1)=0. \end{aligned}$$
(2.33)

Step 1. The aim of this step is to prove that

$$\begin{aligned} \int _{c_1}^{c_2}|u|^2dx=\int _{c_1}^{c_2}|y|^2dx. \end{aligned}$$
(2.34)

For this aim, multiplying (2.7) by \({\overline{y}}\) and (2.8) by \({\overline{u}}\), integrating by parts over \((0,L)\), and using (2.6), we get

$$\begin{aligned} \int _{0}^{L}{\lambda }^2u{\overline{y}}dx-\int _{0}^{L}u_x\overline{y_x}dx-i{\lambda }c_0\int _{c_1}^{c_2}|y|^2dx= & {} 0, \end{aligned}$$
(2.35)
$$\begin{aligned} \int _{0}^{L}{\lambda }^2y{\overline{u}}dx-\int _{0}^{L}y_x\overline{u_x}dx+i{\lambda }c_0\int _{c_1}^{c_2}|u|^2dx= & {} 0. \end{aligned}$$
(2.36)

Adding (2.35) and (2.36) and taking the imaginary part, we get (2.34).

Step 2. Multiplying (2.7) by \(-2(x-c_2){\overline{u}}_x\), integrating over \((c_1,c_2)\) and taking the real part, we get

$$\begin{aligned}&-\Re \left( \int _{c_1}^{c_2}{\lambda }^2(x-c_2)(|u|^2)_xdx\right) -\Re \left( \int _{c_1}^{c_2}(x-c_2)\left( |u_x|^2\right) _xdx\right) \nonumber \\&\quad +2\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y{\overline{u}}_xdx\right) =0, \end{aligned}$$
(2.37)

using integration by parts in (2.37) and (2.33), we get

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx+\int _{c_1}^{c_2}|u_x|^2dx+2\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y{\overline{u}}_xdx\right) =0. \end{aligned}$$
(2.38)

Using Young’s inequality in (2.38), we obtain

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx+\!\int _{c_1}^{c_2}|u_x|^2dx\le |c_0|(c_2-c_1)\!\int _{c_1}^{c_2}|{\lambda }y|^2dx+|c_0|(c_2-c_1)\!\int _{c_1}^{c_2}|u_x|^2dx.\nonumber \\ \end{aligned}$$
(2.39)

Inserting (2.34) in (2.39), we get

$$\begin{aligned} \left( 1-|c_0|(c_2-c_1)\right) \int _{c_1}^{c_2}\left( |{\lambda }u|^2+|u_x|^2\right) dx\le 0. \end{aligned}$$
(2.40)

According to (SSC3) and (2.34), we get

$$\begin{aligned} u=y=0\quad \text {in}\quad (c_1,c_2). \end{aligned}$$
(2.41)

Step 3. Using (2.41), the fact that \(u\in H^2(c_1,c_2)\subset C^1([c_1,c_2])\), and the fact that \(y\in C^1([0,L])\), we get

$$\begin{aligned} u(c_1)=u_x(c_1)=y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0. \end{aligned}$$
(2.42)

Now, from (2.7), (2.8) and the definition of c, we get

$$\begin{aligned} \left\{ \begin{array}{lll} {\lambda }^2u+u_{xx}=0\ \ \text {in} \ \ (c_2,L),\\ u(c_2)=u_x(c_2)=0, \end{array} \right. \quad \text {and}\quad \left\{ \begin{array}{lll} {\lambda }^2y+y_{xx}=0\ \ \text {in}\ \ (0,c_1)\cup (c_2,L),\\ y(c_1)=y_x(c_1)=y(c_2)=y_x(c_2)=0. \end{array} \right. \end{aligned}$$

From the above systems and the Holmgren uniqueness theorem, we get

$$\begin{aligned} u=0\ \ \text {in}\ \ (c_2,L)\quad \text {and}\quad y=0\ \ \text {in}\ \ (0,c_1)\cup (c_2,L). \end{aligned}$$
(2.43)

Consequently, using (2.33), (2.41) and (2.43), we get \(U=0\). The proof is thus completed. \(\square \)

Lemma 2.5

Assume that (SSC1) or (C2) or (SSC3) holds. Then, for all \(\lambda \in {\mathbb {R}}\), we have

$$\begin{aligned} R\left( i{\lambda }I-{\mathcal {A}}\right) ={\mathcal {H}}. \end{aligned}$$

Proof

See Lemma 2.5 in Wehbe et al. (2021) (see also Akil et al. 2021). \(\square \)

Proof of Theorem 2.3

From Lemma 2.4, we obtain that the operator \({\mathcal {A}}\) has no pure imaginary eigenvalues (i.e. \(\sigma _p(\mathcal A)\cap i\mathbb R=\emptyset \)). Moreover, from Lemma 2.5 and with the help of the closed graph theorem of Banach, we deduce that \(\sigma (\mathcal A)\cap i\mathbb R=\emptyset \). Therefore, according to Theorem A.2, we get that the C\(_0 \)-semigroup \((e^{t\mathcal A})_{t\ge 0}\) is strongly stable. The proof is thus complete. \(\square \)

2.3 Polynomial stability

In this section, we study the polynomial stability of system (1.1)-(1.4). Our main results in this part are the following theorems:

Theorem 2.6

Assume that (SSC1) holds. Then, for all \(U_0 \in D(\mathcal A)\), there exists a constant \(C>0\) independent of \(U_0\) such that

$$\begin{aligned} E(t)\le \frac{C}{t^4}\Vert U_0\Vert ^2_{D(\mathcal A)},\quad t>0. \end{aligned}$$
(2.44)

Theorem 2.7

Assume that (SSC3) holds. Then, for all \(U_0 \in D(\mathcal A)\), there exists a constant \(C>0\) independent of \(U_0\) such that

$$\begin{aligned} E(t)\le \frac{C}{t}\Vert U_0\Vert ^2_{D(\mathcal A)},\quad t>0. \end{aligned}$$
(2.45)

According to Theorem A.3, the polynomial energy decay estimates (2.44) and (2.45) hold if the following conditions

$$\begin{aligned} i\mathbb {R}\subset \rho (\mathcal A) \end{aligned}$$
(H1)

and

$$\begin{aligned} \limsup _{{\lambda }\in \mathbb {R},\ |{\lambda }|\rightarrow \infty }\frac{1}{|{\lambda }|^{\ell }}\left\| \left( i{\lambda }I-\mathcal A\right) ^{-1}\right\| _{\mathcal {L}(\mathcal {H})}<\infty ,\quad \text {with}\ \ell =\frac{1}{2}\ \text {for (2.44) and}\ \ell =2\ \text {for (2.45)}, \end{aligned}$$
(H2)

are satisfied. Condition (\(H_1\)) was already proved in Sect. 2.2, so it remains to prove (\(H_2\)); we argue by contradiction. To this aim, suppose that (\(H_2\)) is false; then there exists

$$\begin{aligned} \left\{ \left( {\lambda }_n,U_n:=(u_n,v_n,y_n,z_n)^\top \right) \right\} _{n\ge 1}\subset \mathbb R^{*}_+\times D(\mathcal A) \end{aligned}$$

with

$$\begin{aligned} {\lambda }_n\rightarrow \infty \ \text {as} \ n\rightarrow \infty \quad \text {and}\quad \Vert U_n\Vert _{{\mathcal {H}}}=1, \ \forall n\ge 1, \end{aligned}$$
(2.46)

such that

$$\begin{aligned} \left( {\lambda }_n\right) ^{\ell }\left( i{\lambda }_nI-\mathcal A\right) U_n=F_n:=(f_{1,n},f_{2,n},f_{3,n},f_{4,n})^{\top }\rightarrow 0 \ \ \text {in}\ \ {\mathcal {H}}, \ \text {as} \ n\rightarrow \infty .\qquad \end{aligned}$$
(2.47)

For simplicity, we drop the index n. Equivalently, from (2.47), we have

$$\begin{aligned} i{\lambda }u-v= & {} \dfrac{f_1}{{\lambda }^{\ell }}, \ f_1 \rightarrow 0 \ \ \text {in}\ \ H_0^1(0,L), \end{aligned}$$
(2.48)
$$\begin{aligned} i{\lambda }v-\left( au_x+bv_x\right) _x+cz= & {} \dfrac{f_2}{{\lambda }^{\ell }}, \ f_2 \rightarrow 0 \ \ \text {in}\ \ L^2(0,L), \end{aligned}$$
(2.49)
$$\begin{aligned} i{\lambda }y-z= & {} \dfrac{f_3}{{\lambda }^{\ell }}, \ f_3 \rightarrow 0 \ \ \text {in}\ \ H_0^1(0,L), \end{aligned}$$
(2.50)
$$\begin{aligned} i{\lambda }z-(y_x+dz_x)_x-cv= & {} \dfrac{f_4}{{\lambda }^{\ell }},\ f_4 \rightarrow 0 \ \ \text {in} \ \ L^2(0,L). \end{aligned}$$
(2.51)
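Before checking (\(H_2\)) analytically, we mention that it can also be probed numerically on the finite-difference model \(\mathcal A_h\) introduced after Theorem 2.2: one computes \(\Vert (i{\lambda }I-\mathcal A_h)^{-1}\Vert \) for a range of real \({\lambda }\) and compares its growth with \(|{\lambda }|^{1/2}\) (case (SSC1)). The sketch below uses the Euclidean matrix norm as a crude proxy for the \(\mathcal H\)-norm, assumed Case 1 data, and only mesh-resolved frequencies; it is a heuristic illustration, not a proof.

```python
import numpy as np

# Assumed Case 1 data; finite-difference generator A_h as in the earlier sketches.
L, a = 7.0, 1.0
b0, c0, d0 = 1.0, 0.5, 1.0
b1, b2, c1, c2, d1, d2 = 1.0, 2.0, 3.0, 4.0, 5.0, 6.0
n = 120; h = L / (n + 1)
x  = np.linspace(h, L - h, n)
xm = np.linspace(h / 2, L - h / 2, n + 1)
chi = lambda lo, hi, s: ((s > lo) & (s < hi)).astype(float)

def var_lap(p):            # w -> ((p w_x)_x) with homogeneous Dirichlet conditions
    M = np.zeros((n, n))
    for i in range(n):
        M[i, i] = -(p[i] + p[i + 1]) / h**2
        if i > 0:
            M[i, i - 1] = p[i] / h**2
        if i < n - 1:
            M[i, i + 1] = p[i + 1] / h**2
    return M

Da, Db = var_lap(a * np.ones(n + 1)), var_lap(b0 * chi(b1, b2, xm))
D1, Dd = var_lap(np.ones(n + 1)),     var_lap(d0 * chi(d1, d2, xm))
C = np.diag(c0 * chi(c1, c2, x))
Z, I = np.zeros((n, n)), np.eye(n)
A = np.block([[Z, I, Z, Z], [Da, Db, Z, -C], [Z, Z, Z, I], [Z, C, D1, Dd]])
Id = np.eye(4 * n)

# Resolvent norm along the imaginary axis, restricted to mesh-resolved frequencies.
lams = np.linspace(5.0, 50.0, 10)
norms = [np.linalg.norm(np.linalg.inv(1j * lam * Id - A), 2) for lam in lams]

# Heuristic fit of the exponent ell in ||(i lam I - A_h)^{-1}|| ~ lam^ell.
ell = np.polyfit(np.log(lams), np.log(norms), 1)[0]
print("fitted growth exponent:", ell, " (to be compared with 1/2 in case (SSC1))")
```

The fitted exponent depends on the mesh and on the chosen frequency window, so it should only be read as a rough consistency check against the rate predicted by (\(H_2\)).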

2.3.1 Proof of Theorem 2.6

In this section, we will prove Theorem 2.6 by checking the condition (\(H_2\)). For this aim, we will find a contradiction with (2.46) by showing \(\Vert U\Vert _{{\mathcal {H}}}=o(1)\). For clarity, we divide the proof into several Lemmas. By taking the inner product of (2.47) with U in \({\mathcal {H}}\), we remark that

$$\begin{aligned} \int _0^L b\left| v_{x}\right| ^2dx+\int _0^Ld|z_x|^2dx=\Re \left( \left<(i{\lambda }I -\mathcal A)U,U\right>_{\mathcal H}\right) ={\lambda }^{-\frac{1}{2}}\Re \left( \left<F,U\right>_{\mathcal H}\right) =o\left( \lambda ^{-\frac{1}{2}}\right) . \end{aligned}$$

Thus, from the definitions of b and d, we get

$$\begin{aligned} \int _{b_1}^{b_2}\left| v_{x}\right| ^2dx=o\left( \lambda ^{-\frac{1}{2}}\right) \quad \text {and}\quad \int _{d_1}^{d_2}\left| z_{x}\right| ^2dx=o\left( \lambda ^{-\frac{1}{2}}\right) . \end{aligned}$$
(2.52)

Using (2.48), (2.50), (2.52), and the fact that \(f_1,f_3\rightarrow 0\) in \(H_0^1(0,L)\), we get

$$\begin{aligned} \int _{b_1}^{b_2}|u_x|^2dx=\frac{o(1)}{{\lambda }^{\frac{5}{2}}}\quad \text {and}\quad \int _{d_1}^{d_2}|y_x|^2dx=\frac{o(1)}{{\lambda }^{\frac{5}{2}}}. \end{aligned}$$
(2.53)

Lemma 2.8

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations

$$\begin{aligned} \int _{b_1}^{b_2}|v|^2dx=\frac{o(1)}{{\lambda }^{\frac{3}{2}}}\quad \text {and}\quad \int _{d_1}^{d_2}|z|^2dx=\frac{o(1)}{{\lambda }^{\frac{3}{2}}}. \end{aligned}$$
(2.54)

Proof

We give the proof of the first estimation in (2.54); the second one can be done in a similar way. For this aim, we fix \(g\in C^1\left( [b_1,b_2]\right) \) such that

$$\begin{aligned} g(b_2)=-g(b_1)=1,\quad \max _{x\in [b_1,b_2]}|g(x)|=m_g\ \ \text {and}\ \ \max _{x\in [b_1,b_2]}|g'(x)|=m_{g'}. \end{aligned}$$

The proof is divided into several steps as follows:

Step 1. The goal of this step is to prove that

$$\begin{aligned} |v(b_1)|^2+|v(b_2)|^2\le \left( \frac{{\lambda }^{\frac{1}{2}}}{2}+2m_{g'}\right) \int _{b_1}^{b_2}|v|^2dx+\frac{o(1)}{{\lambda }}. \end{aligned}$$
(2.55)

From (2.48), we deduce that

$$\begin{aligned} v_x=i{\lambda }u_x-{\lambda }^{-\frac{1}{2}}(f_1)_x. \end{aligned}$$
(2.56)

Multiplying (2.56) by \(2g{\overline{v}}\) and integrating over \((b_1,b_2)\), then taking the real part, we get

$$\begin{aligned} \int _{b_1}^{b_2}g\left( |v|^2\right) _xdx=\Re \left( 2i{\lambda }\int _{b_1}^{b_2}gu_x{\overline{v}}dx\right) -\Re \left( 2{\lambda }^{-\frac{1}{2}}\int _{b_1}^{b_2}g(f_1)_x{\overline{v}}dx\right) . \end{aligned}$$

Using integration by parts in the left-hand side of the above equation, we get

$$\begin{aligned} |v(b_1)|^2+|v(b_2)|^2= & {} \int _{b_1}^{b_2}g'|v|^2dx+\Re \left( 2i{\lambda }\int _{b_1}^{b_2}gu_x{\overline{v}}dx\right) \\&\quad -\Re \left( 2{\lambda }^{-\frac{1}{2}}\int _{b_1}^{b_2}g(f_1)_x{\overline{v}}dx\right) . \end{aligned}$$

Consequently, we get

$$\begin{aligned} |v(b_1)|^2+|v(b_2)|^2\le & {} m_{g'}\int _{b_1}^{b_2}|v|^2dx+2|{\lambda }|m_g\int _{b_1}^{b_2}|u_x||v|dx\nonumber \\&\quad +2|{\lambda }|^{-\frac{1}{2}}m_g\int _{b_1}^{b_2}|(f_1)_x||v|dx. \end{aligned}$$
(2.57)

Using Young’s inequality, we obtain

$$\begin{aligned} 2{\lambda }m_g|u_x||v|\le & {} \frac{{\lambda }^\frac{1}{2}|v|^2}{2}+2{\lambda }^{\frac{3}{2}}m_g^2|u_x|^2\ \text {and}\quad 2{\lambda }^{-\frac{1}{2}}m_g|(f_1)_x||v|\nonumber \\\le & {} m_{g'}|v|^2+m_g^2m_{g'}^{-1}{\lambda }^{-1}|(f_1)_x|^2. \end{aligned}$$

From the above inequalities, (2.57) becomes

$$\begin{aligned} |v(b_1)|^2+|v(b_2)|^2\le & {} \left( \frac{{\lambda }^{\frac{1}{2}}}{2}+2m_{g'}\right) \int _{b_1}^{b_2}|v|^2dx+2{\lambda }^{\frac{3}{2}}m_g^2\int _{b_1}^{b_2}|u_x|^2dx\nonumber \\&+\frac{m_g^2}{m_{g'}}{\lambda }^{-1}\int _{b_1}^{b_2}|(f_1)_x|^2dx. \end{aligned}$$
(2.58)

Inserting (2.53) in (2.58) and the fact that \(f_1 \rightarrow 0\) in \(H^1_0(0,L)\), we get (2.55).

Step 2. The aim of this step is to prove that

$$\begin{aligned} |(au_x+bv_x)(b_1)|^2+|(au_x+bv_x)(b_2)|^2\le \frac{{\lambda }^{\frac{3}{2}}}{2}\int _{b_1}^{b_2}|v|^2dx+o(1). \end{aligned}$$
(2.59)

Multiplying (2.49) by \(-2g\left( \overline{au_x+bv_x}\right) \), integrating by parts over \((b_1,b_2)\) and taking the real part, we get

$$\begin{aligned} \begin{array}{l} \displaystyle |\left( au_x+bv_x\right) (b_1)|^2+|\left( au_x+bv_x\right) (b_2)|^2=\int _{b_1}^{b_2}g'|au_x+bv_x|^2dx+\\ \displaystyle \Re \left( 2i{\lambda }\int _{b_1}^{b_2}gv(\overline{au_x+bv_x})dx\right) -\Re \left( 2{\lambda }^{-\frac{1}{2}}\int _{b_1}^{b_2}gf_2(\overline{au_x+bv_x})dx\right) ; \end{array} \end{aligned}$$

consequently, we get

$$\begin{aligned} \begin{array}{lll} \displaystyle |\left( au_x+bv_x\right) (b_1)|^2+|\left( au_x+bv_x\right) (b_2)|^2\le m_{g'}\int _{b_1}^{b_2}|au_x+bv_x|^2dx\\ \displaystyle +2{\lambda }m_g\int _{b_1}^{b_2}|v||au_x+bv_x|dx+2m_g{\lambda }^{-\frac{1}{2}}\int _{b_1}^{b_2}|f_2||au_x+bv_x|dx. \end{array} \end{aligned}$$
(2.60)

By Young’s inequality, (2.52), and (2.53), we have

$$\begin{aligned} 2{\lambda }m_g\int _{b_1}^{b_2}|v||au_x+bv_x|dx\le & {} \frac{{\lambda }^{\frac{3}{2}}}{2}\int _{b_1}^{b_2}|v|^2dx+2m_g^2{\lambda }^{\frac{1}{2}}\int _{b_1}^{b_2}|au_x+bv_x|^2dx \nonumber \\\le & {} \frac{{\lambda }^{\frac{3}{2}}}{2}\int _{b_1}^{b_2}|v|^2dx+o(1). \end{aligned}$$
(2.61)

Inserting (2.61) in (2.60), then using (2.52), (2.53) and the fact that \(f_2 \rightarrow 0\) in \(L^2(0,L)\), we get (2.59).

Step 3. The aim of this step is to prove the first estimation in (2.54). For this aim, multiplying (2.49) by \(-i{\lambda }^{-1}{\overline{v}}\), integrating over \((b_1,b_2)\) and taking the real part, we get

$$\begin{aligned} \int _{b_1}^{b_2}|v|^2dx= & {} \Re \left( i{\lambda }^{-1}\int _{b_1}^{b_2}(au_x+bv_x){\overline{v}}_xdx\right. \nonumber \\&\quad \left. -\left[ i{\lambda }^{-1}\left( au_x+bv_x\right) {\overline{v}}\right] _{b_1}^{b_2}+i{\lambda }^{-\frac{3}{2}}\int _{b_1}^{b_2}f_2{\overline{v}}dx\right) . \end{aligned}$$
(2.62)

Using (2.52), (2.53), the fact that v is uniformly bounded in \(L^2(0,L)\) and \(f_2\rightarrow 0\) in \(L^2(0,L)\), and Young’s inequalities, we get

$$\begin{aligned} \int _{b_1}^{b_2}|v|^2dx\le & {} \frac{{\lambda }^{-\frac{1}{2}}}{2}[|v(b_1)|^2+|v(b_2)|^2]+\frac{{\lambda }^{-\frac{3}{2}}}{2}[|(au_x+bv_x)(b_1)|^2+|(au_x+bv_x)(b_2)|^2]\nonumber \\&\quad +\frac{o(1)}{{\lambda }^{\frac{3}{2}}}. \end{aligned}$$
(2.63)

Inserting (2.55) and (2.59) in (2.63), we get

$$\begin{aligned} \int _{b_1}^{b_2}|v|^2dx\le \left( \frac{1}{2}+m_{g'}{\lambda }^{-\frac{1}{2}}\right) \int _{b_1}^{b_2}|v|^2dx+\frac{o(1)}{{\lambda }^{\frac{3}{2}}}, \end{aligned}$$

which implies that

$$\begin{aligned} \left( \frac{1}{2}-m_{g'}{\lambda }^{-\frac{1}{2}}\right) \int _{b_1}^{b_2}|v|^2dx\le \frac{o(1)}{{\lambda }^{\frac{3}{2}}}. \end{aligned}$$
(2.64)

Using the fact that \({\lambda }\rightarrow \infty \), we can take \({\lambda }> 4m_{g'}^2\). Then, we obtain the first estimation in (2.54). Similarly, we can obtain the second estimation in (2.54). The proof has been completed. \(\square \)

Lemma 2.9

The solution \(U\in D(\mathcal A)\) of system (2.48)-(2.51) satisfies the following estimations

$$\begin{aligned} \int _0^{c_1}\left( |v|^2+a|u_x|^2\right) dx=o(1)\quad \text {and}\quad \int _{c_2}^L\left( |z|^2+|y_x|^2\right) dx=o(1). \end{aligned}$$
(2.65)

Proof

First, let \(h\in C^1([0,c_1])\) such that \(h(0)=h(c_1)=0\). Multiplying (2.49) by \(2a^{-1}h\overline{(au_x+bv_x)}\), integrating over \((0,c_1)\), using integration by parts and taking the real part, then using (2.52), the fact that \(u_x\) is uniformly bounded in \(L^2(0,L)\) and \(f_2 \rightarrow 0\) in \(L^2(0,L)\), we get

$$\begin{aligned}&\Re \left( 2i{\lambda }a^{-1}\int _0^{c_1}vh\overline{(au_x+bv_x)}dx\right) +a^{-1}\int _0^{c_1}h'|au_x+bv_x|^2dx\nonumber \\&\quad =\underbrace{\frac{1}{{\lambda }^{\frac{1}{2}}}\Re \left( \int _0^Lhf_2\overline{\left( au_x+bv_x\right) }dx\right) }_\frac{o(1)}{{\lambda }^{\frac{1}{2}}}. \end{aligned}$$
(2.66)

From (2.48), we have

$$\begin{aligned} i{\lambda }{\overline{u}}_x=-{\overline{v}}_x-{\lambda }^{-\frac{1}{2}}(\overline{f_1})_x. \end{aligned}$$
(2.67)

Inserting (2.67) in (2.66), using integration by parts, then using (2.52), (2.54), and the fact that \(f_1 \rightarrow 0 \) in \(H^1_0 (0,L)\) and v is uniformly bounded in \(L^2 (0,L)\), we get

$$\begin{aligned} \begin{array}{c} \displaystyle \int _0^{c_1}h'|v|^2dx+a^{-1}\int _0^{c_1}h'|au_x+bv_x|^2dx=\underbrace{2\Re \left( {\lambda }^{-\frac{1}{2}}\int _{0}^{c_1}vh(\overline{f_1})_xdx\right) }_{=o({\lambda }^{-\frac{1}{2}})}\\ \displaystyle -\underbrace{\Re \left( 2i{\lambda }a^{-1}b_0\int _{b_1}^{b_2}hv{\overline{v}}_xdx\right) }_{=o(1)}+\frac{o(1)}{{\lambda }^{\frac{1}{2}}}. \end{array} \end{aligned}$$
(2.68)

Now, we consider the following cut-off functions \(p_1,p_2\in C^1([0,b_2])\), such that

$$\begin{aligned} p_1(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(0,b_1),\\ 0&{}\text {in}&{}(b_2,c_1),\\ 0\le p_1\le 1&{}\text {in}&{}(b_1,b_2), \end{array} \right. \quad \text {and}\quad p_2(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(b_2,c_1),\\ 0&{}\text {in}&{}(0,b_1),\\ 0\le p_2\le 1&{}\text {in}&{}(b_1,b_2). \end{array} \right. \end{aligned}$$

Finally, taking \(h(x)=xp_1(x)+(x-c_1)p_2(x)\) in (2.68) and using (2.52), (2.53), and (2.54), we get the first estimation in (2.65). By using the same argument, we can obtain the second estimation in (2.65). The proof is thus completed. \(\square \)

Lemma 2.10

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations

$$\begin{aligned} |{\lambda }u(c_1)|=o(1),\ |u_x(c_1)|=o(1),\ |{\lambda }y(c_2)|=o(1)\quad \text {and}\quad |y_x(c_2)|=o(1). \end{aligned}$$
(2.69)

Proof

First, from (2.48) and (2.49), we deduce that

$$\begin{aligned} {\lambda }^2u+au_{xx}=-\frac{f_2}{{\lambda }^{\frac{1}{2}}}-i{\lambda }^{\frac{1}{2}}f_1 \ \ \text {in} \ \ (b_2,c_1). \end{aligned}$$
(2.70)

Multiplying (2.70) by \(2(x-b_2){\bar{u}}_x\), integrating over \((b_2,c_1)\) and taking the real part, then using the fact that \(u_x\) is uniformly bounded in \(L^2(0,L)\) and \(f_2 \rightarrow 0\) in \(L^2(0,L)\), we get

$$\begin{aligned}&\int _{b_2}^{c_1}{\lambda }^2 (x-b_2)\left( |u|^2\right) _xdx+a\int _{b_2}^{c_1}(x-b_2)\left( |u_x|^2\right) _xdx \nonumber \\&\quad =-\Re \left( 2i{\lambda }^{\frac{1}{2}}\int _{b_2}^{c_1}(x-b_2)f_1{\overline{u}}_xdx\right) +\frac{o(1)}{{\lambda }^{\frac{1}{2}}}. \end{aligned}$$
(2.71)

Remark that from (2.65) and (2.48), we get

$$\begin{aligned} \int _{b_2}^{c_1}|{\lambda }u|^2dx=o(1)\quad \text {and}\quad \int _{b_2}^{c_1}|u_x|^2dx=o(1). \end{aligned}$$

Using integration by parts in (2.71), then using the above estimations, and the fact that \(f_1\rightarrow 0\) in \(H_0^1(0,L)\) and \({\lambda }u\) is uniformly bounded in \(L^2(0,L)\), we get

$$\begin{aligned} 0\le (c_1-b_2)\left( |{\lambda }u(c_1)|^2+a|u_x(c_1)|^2\right) =\Re \left( 2i{\lambda }^{\frac{1}{2}}(c_1-b_2)f_1(c_1){\overline{u}}(c_1)\right) +o(1), \end{aligned}$$
(2.72)

consequently, by using Young’s inequality, we get

$$\begin{aligned} \begin{array}{lll} \displaystyle |{\lambda }u(c_1)|^2+a|u_x(c_1)|^2 &{}\le &{} \displaystyle 2{\lambda }^{\frac{1}{2}}|f_1(c_1)||u(c_1)|+o(1)\\ &{}\le &{}\displaystyle \frac{1}{2}|{\lambda }u(c_1)|^2+\frac{2}{{\lambda }}|f_1(c_1)|^2 +o(1). \end{array} \end{aligned}$$

Then, we get

$$\begin{aligned} \frac{1}{2}|{\lambda }u(c_1)|^2+a|u_x(c_1)|^2\le \frac{2}{{\lambda }}|f_1(c_1)|^2+o(1). \end{aligned}$$
(2.73)

Finally, from the above estimation and the fact that \(f_1 \rightarrow 0\) in \(H^1_0 (0,L)\), we get the first two estimations in (2.69). By using the same argument, we can obtain the last two estimations in (2.69). The proof has been completed. \(\square \)

Lemma 2.11

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimation

$$\begin{aligned} \int _{c_1}^{c_2} (|{\lambda }u|^2 +a |u_x|^2 +|{\lambda }y|^2 +|y_x|^2) dx =o(1). \end{aligned}$$
(2.74)

Proof

Inserting (2.48) and (2.50) in (2.49) and (2.51), we get

$$\begin{aligned} -{\lambda }^2u-au_{xx}+i{\lambda }c_0y= & {} \frac{f_2}{{\lambda }^{\frac{1}{2}}}+i{\lambda }^{\frac{1}{2}}f_1+\frac{c_0f_3}{{\lambda }^{\frac{1}{2}}} \ \ \text {in} \ \ (c_1,c_2), \end{aligned}$$
(2.75)
$$\begin{aligned} -{\lambda }^2y-y_{xx}-i{\lambda }c_0u= & {} \frac{f_4}{{\lambda }^{\frac{1}{2}}}+i{\lambda }^{\frac{1}{2}}f_3-\frac{c_0f_1}{{\lambda }^{\frac{1}{2}}} \ \ \ \text {in} \ \ (c_1,c_2). \end{aligned}$$
(2.76)

Multiplying (2.75) by \(2(x-c_2)\overline{u_x}\) and (2.76) by \(2(x-c_1)\overline{y_x}\), integrating over \((c_1,c_2)\) and taking the real part, then using the fact that \(\Vert F\Vert _\mathcal H=o(1)\) and \(\Vert U\Vert _\mathcal H=1\), we obtain

$$\begin{aligned}&\displaystyle -{\lambda }^2\int _{c_1}^{c_2}(x-c_2)\left( |u|^2\right) _xdx-a\int _{c_1}^{c_2}(x-c_2)\left( |u_x|^2\right) _xdx\nonumber \\&\qquad +\Re \left( 2i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right) \nonumber \\&\quad \displaystyle = \Re \left( 2i{\lambda }^{\frac{1}{2}}\int _{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right) +\frac{o(1)}{{\lambda }^{\frac{1}{2}}} \end{aligned}$$
(2.77)

and

$$\begin{aligned}&\displaystyle -{\lambda }^2\int _{c_1}^{c_2}(x-c_1)\left( |y|^2\right) _xdx-\int _{c_1}^{c_2}(x-c_1)\left( |y_x|^2\right) _xdx\nonumber \\&\qquad -\Re \left( 2i{\lambda }c_0\int _{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right) \nonumber \\&\quad \displaystyle = \Re \left( 2i{\lambda }^{\frac{1}{2}}\int _{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right) +\frac{o(1)}{{\lambda }^{\frac{1}{2}}}. \end{aligned}$$
(2.78)

Using integration by parts, (2.69), and the fact that \(f_1, f_3 \rightarrow 0\) in \(H^1_0(0,L)\), \(\Vert u\Vert _{L^2(0,L)}=O({\lambda }^{-1})\), and \(\Vert y\Vert _{L^2(0,L)}=O({\lambda }^{-1})\), we deduce that

$$\begin{aligned} \Re \left( i{\lambda }^{\frac{1}{2}}\int _{c_1}^{c_2}(x-c_2)f_1\overline{u_x}dx\right) =\frac{o(1)}{{\lambda }^{\frac{1}{2}}}\quad \text {and}\quad \Re \left( i{\lambda }^{\frac{1}{2}}\int _{c_1}^{c_2}(x-c_1)f_3\overline{y_x}dx\right) =\frac{o(1)}{{\lambda }^{\frac{1}{2}}}.\nonumber \\ \end{aligned}$$
(2.79)

Inserting (2.79) in (2.77) and (2.78), then using integration by parts and (2.69), we get

$$\begin{aligned} \int _{c_1}^{c_2}\left( |{\lambda }u|^2+a|u_x|^2\right) dx+\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right)= & {} o(1), \end{aligned}$$
(2.80)
$$\begin{aligned} \int _{c_1}^{c_2}\left( |{\lambda }y|^2+|y_x|^2\right) dx-\Re \left( i{\lambda }c_0\int _{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right)= & {} o(1). \end{aligned}$$
(2.81)

Adding (2.80) and (2.81), we get

$$\begin{aligned}&\int _{c_1}^{c_2}\left( |{\lambda }u|^2+a|u_x|^2+|{\lambda }y|^2+|y_x|^2\right) dx\\&\quad =\displaystyle \Re \left( 2i{\lambda }c_0\int _{c_1}^{c_2}(x-c_1)u\overline{y_x}dx\right) -\Re \left( 2i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y\overline{u_x}dx\right) +o(1)\\&\quad \le \displaystyle 2 |c_0|(c_2-c_1)\int _{c_1}^{c_2}|{\lambda }u||y_x|dx+2\frac{|c_0|}{a^{\frac{1}{4}}}(c_2-c_1)a^{\frac{1}{4}}\int _{c_1}^{c_2}|{\lambda }y||u_x|dx+o(1). \end{aligned}$$

Applying Young’s inequalities, we get

$$\begin{aligned}&\left( 1-|c_0|(c_2-c_1)\right) \int _{c_1}^{c_2}(|{\lambda }u|^2+|y_x|^2)dx+\left( 1-\frac{1}{\sqrt{a}}|c_0|(c_2-c_1)\right) \nonumber \\&\int _{c_1}^{c_2}(a|u_x|^2+|{\lambda }y|^2)dx\le o(1). \end{aligned}$$
(2.82)

Finally, using (SSC1), we get the desired result. The proof has been completed. \(\square \)

Lemma 2.12

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations

$$\begin{aligned} \int _0^{c_1}\left( |z|^2+|y_x|^2\right) dx=o(1)\quad \text {and}\quad \int _{c_2}^L\left( |v|^2+a|u_x|^2\right) dx=o(1). \end{aligned}$$
(2.83)

Proof

Using the same argument as in Lemma 2.9, we obtain (2.83). \(\square \)

Proof of Theorem 2.6. Using (2.53) and Lemmas 2.8, 2.9, 2.11, and 2.12, we get \(\Vert U\Vert _{{\mathcal {H}}}=o(1)\), which contradicts (2.46). Consequently, condition (\(H_2\)) holds. This implies the energy decay estimation (2.44).

2.3.2 Proof of Theorem 2.7

In this section, we will prove Theorem 2.7 by checking the condition (\(H_2\)), that is by finding a contradiction with (2.46) by showing \(\Vert U\Vert _{{\mathcal {H}}}=o(1)\). For clarity, we divide the proof into several Lemmas. By taking the inner product of (2.47) with U in \({\mathcal {H}}\), we remark that

$$\begin{aligned} \int _0^L b|v_x|^2dx=-\Re \left( \left<{\mathcal {A}}U,U\right>_{{\mathcal {H}}}\right) ={\lambda }^{-2}\Re \left( \left<F,U\right>_{{\mathcal {H}}}\right) =o({\lambda }^{-2}). \end{aligned}$$

Then,

$$\begin{aligned} \int _{b_1}^{b_2}|v_x|^2dx=o({\lambda }^{-2}). \end{aligned}$$
(2.84)

Using (2.48) and (2.84), and the fact that \(f_1 \rightarrow 0\) in \(H^1_0(0,L)\), we get

$$\begin{aligned} \int _{b_1}^{b_2}|u_x|^2dx=o({\lambda }^{-4}). \end{aligned}$$
(2.85)

Lemma 2.13

Let \(0<\varepsilon <\frac{b_2-b_1}{2}\); the solution \(U\in D({\mathcal {A}})\) of the system (2.48)−(2.51) satisfies the following estimation

$$\begin{aligned} \int _{b_1+\varepsilon }^{b_2-\varepsilon }|v|^2dx=o({\lambda }^{-2}). \end{aligned}$$
(2.86)

Proof

First, we fix a cut-off function \(\theta _1\in C^{1}([0,c_1])\) such that

$$\begin{aligned} \theta _1(x)=\left\{ \begin{array}{clc} 1&{}\text {if}&{}x\in (b_1+\varepsilon ,b_2-\varepsilon ),\\ 0&{}\text {if}&{}x\in (0,b_1)\cup (b_2,L),\\ 0\le \theta _1\le 1&{}&{}\text {elsewhere}. \end{array} \right. \end{aligned}$$
(2.87)

Multiplying (2.49) by \({\lambda }^{-1}\theta _1 {\overline{v}}\), integrating over \((0,c_1)\), using integration by parts, and the fact that \(f_2 \rightarrow 0\) in \(L^2(0,L)\) and v is uniformly bounded in \(L^2(0,L)\), we get

$$\begin{aligned} i\int _0^{c_1}\theta _1|v|^2dx+\frac{1}{{\lambda }}\int _0^{c_1}(u_x+bv_x)(\theta _1'{\overline{v}}+\theta _1\overline{v_x})dx=o({\lambda }^{-3}).\qquad \end{aligned}$$
(2.88)

Using (2.84), (2.85), the fact that \(\Vert U\Vert _{{\mathcal {H}}}=1\), and the definition of \(\theta _1\), we get

$$\begin{aligned} \frac{1}{{\lambda }}\int _0^{c_1}(u_x+bv_x)(\theta _1'{\overline{v}}+\theta _1\overline{v_x})dx=o({\lambda }^{-2}). \end{aligned}$$

Inserting the above estimation in (2.88), we get the desired result (2.86). The proof has been completed. \(\square \)

Lemma 2.14

The solution \(U\in D({\mathcal {A}})\) of the system (2.48)−(2.51) satisfies the following estimation:

$$\begin{aligned} \int _{0}^{c_1}(|v|^2+|u_x|^2)dx=o(1). \end{aligned}$$
(2.89)

Proof

Let \(h\in C^1([0,c_1])\) such that \(h(0)=h(c_1)=0\). Multiplying (2.49) by \(2h\overline{(u_x+bv_x)}\), integrating over \((0,c_1)\) and taking the real part, then using integration by parts, (2.84), the fact that \(u_x\) is uniformly bounded in \(L^2(0,L)\), and the fact that \(f_2 \rightarrow 0\) in \(L^2(0,L)\), we get

$$\begin{aligned} \Re \left( 2\int _0^{c_1}i{\lambda }vh\overline{(u_x+bv_x)}dx\right) +\int _0^{c_1}h'|u_x+bv_x|^2dx=o({\lambda }^{-2}). \end{aligned}$$
(2.90)

Using (2.84) and the fact that v is uniformly bounded in \(L^2(0,L)\), we get

$$\begin{aligned} \Re \left( 2\int _0^{c_1}i{\lambda }vh\overline{(u_x+bv_x)}dx\right) =2\Re \left( \int _0^{c_1}i{\lambda }vh\overline{u_x}dx\right) +o(1). \end{aligned}$$
(2.91)

From (2.48), we have

$$\begin{aligned} i{\lambda }{\overline{u}}_x=-{\overline{v}}_x-\frac{\left( \overline{f_1}\right) _x}{{\lambda }^2}. \end{aligned}$$
(2.92)

Inserting (2.92) in (2.91), using integration by parts, the facts that v is uniformly bounded in \(L^2(0,L)\), and \(f_1 \rightarrow 0\) in \(H^1_0(0,L)\), we get

$$\begin{aligned} \Re \left( 2\int _0^{c_1}i{\lambda }vh\overline{(u_x+bv_x)}dx\right) =\int _0^{c_1}h'|v|^2dx+o(1). \end{aligned}$$
(2.93)

Inserting (2.93) in (2.90), we obtain

$$\begin{aligned} \int _0^{c_1}h'\left( |v|^2+|u_x+bv_x|^2\right) dx=o(1). \end{aligned}$$
(2.94)

Now, we fix the following cut-off functions:

$$\begin{aligned} \theta _2(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(0,b_1+\varepsilon ),\\ 0&{}\text {in}&{}(b_2-\varepsilon ,c_1),\\ 0\le \theta _2\le 1&{}\text {in}&{}(b_1+\varepsilon ,b_2-\varepsilon ), \end{array} \right. \quad \text {and}\quad \theta _3(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(b_2-\varepsilon ,c_1),\\ 0&{}\text {in}&{}(0,b_1+\varepsilon ),\\ 0\le \theta _3\le 1&{}\text {in}&{}(b_1+\varepsilon ,b_2-\varepsilon ). \end{array} \right. \quad \end{aligned}$$

Taking \(h(x)=x\theta _2(x)+(x-c_1)\theta _3(x)\) in (2.94), then using (2.84) and (2.85), we get

$$\begin{aligned} \int _{(0,b_1+\varepsilon )\cup (b_2-\varepsilon ,c_1)}|v|^2dx+\int _{(0,b_1)\cup (b_2,c_1)}|u_x|^2dx=o(1). \end{aligned}$$
(2.95)

Finally, from (2.85), (2.86) and (2.95), we get the desired result (2.89). The proof has been completed. \(\square \)

Lemma 2.15

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations

$$\begin{aligned} |{\lambda }u(c_1)|=o(1)\quad \text {and}\quad |u_x(c_1)|=o(1), \end{aligned}$$
(2.96)
$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx=\int _{c_1}^{c_2}|{\lambda }y|^2dx+o(1). \end{aligned}$$
(2.97)

Proof

First, using the same argument as in Lemma 2.10, we obtain (2.96). Inserting (2.48) and (2.50) in (2.49) and (2.51), we get

$$\begin{aligned} {\lambda }^2u+\left( u_x+bv_x\right) _x-i{\lambda }cy= & {} -\frac{f_2}{{\lambda }^{2}}-i\frac{f_1}{{\lambda }}-c\frac{f_3}{{\lambda }^2}, \end{aligned}$$
(2.98)
$$\begin{aligned} {\lambda }^2y+y_{xx}+i{\lambda }cu= & {} -\frac{f_4}{{\lambda }^2}-\frac{if_3}{{\lambda }}+c\frac{f_1}{{\lambda }^2}. \end{aligned}$$
(2.99)

Multiplying (2.98) and (2.99) by \({\lambda }{\overline{y}}\) and \({\lambda }{\overline{u}}\), respectively, integrating over (0, L), then using integration by parts, (2.84), the fact that \(\Vert U\Vert _\mathcal H=1\) and \(\Vert F\Vert _\mathcal H=o(1)\), we get

$$\begin{aligned} {\lambda }^{3}\int _0^Lu{\bar{y}}dx-{\lambda }\int _0^Lu_x{\bar{y}}_xdx-i c_0\int _{c_1}^{c_2}|{\lambda }y|^2dx=o(1), \end{aligned}$$
(2.100)
$$\begin{aligned} {\lambda }^{3}\int _0^Ly{\bar{u}}dx-{\lambda }\int _0^Ly_x{\bar{u}}_xdx+i c_0\int _{c_1}^{c_2}|{\lambda }u|^2dx=\frac{o(1)}{{\lambda }}. \end{aligned}$$
(2.101)

Adding (2.100), (2.101), then taking the imaginary parts, we get the desired result (2.97). The proof is thus completed. \(\square \)

Lemma 2.16

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations:

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx=o(1),\quad \int _{c_1}^{c_2}|{\lambda }y|^2dx=o(1)\quad \text {and}\quad \int _{c_1}^{c_2}|u_x|^2dx=o(1). \end{aligned}$$
(2.102)

Proof

First, multiplying (2.98) by \(2(x-c_2){\bar{u}}_x\), integrating over \((c_1,c_2)\), taking the real part, and using the fact that \(\Vert U\Vert _\mathcal H=1\) and \(\Vert F\Vert _\mathcal H=o(1)\), we get

$$\begin{aligned}&{\lambda }^2\int _{c_1}^{c_2}(x-c_2)\left( |u|^2\right) _xdx+\int _{c_1}^{c_2}(x-c_2)\left( |u_x|^2\right) _xdx \nonumber \\&\quad =\Re \left( 2i{\lambda }c_0\int _{c_1}^{c_2}(x-c_2)y{\bar{u}}_xdx\right) +o(1). \end{aligned}$$
(2.103)

Using integration by parts in (2.103) with the help of (2.96), we get

$$\begin{aligned} \int _{c_1}^{c_2}|{\lambda }u|^2dx+\int _{c_1}^{c_2}|u_x|^2dx\le 2{\lambda }|c_0|(c_2-c_1)\int _{c_1}^{c_2}|y||u_x|+o(1). \end{aligned}$$
(2.104)

Applying Young’s inequality in (2.104), we get

$$\begin{aligned}&\int _{c_1}^{c_2}|{\lambda }u|^2dx+\int _{c_1}^{c_2}|u_x|^2dx\le |c_0|(c_2-c_1)\nonumber \\&\int _{c_1}^{c_2}|u_x|^2dx+|c_0|(c_2-c_1)\int _{c_1}^{c_2}|{\lambda }y|^2dx+o(1). \end{aligned}$$
(2.105)

Using (2.97) in (2.105), we get

$$\begin{aligned} \left( 1-|c_0|(c_2-c_1)\right) \int _{c_1}^{c_2}\left( |{\lambda }u|^2+|u_x|^2\right) dx\le o(1). \end{aligned}$$
(2.106)

Finally, from the above estimation, (SSC3) and (2.97), we get the desired result (2.102). The proof has been completed. \(\square \)

Lemma 2.17

Let \(0<\delta <\frac{c_2-c_1}{2}\). The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations:

$$\begin{aligned} \int _{c_1+\delta }^{c_2-\delta }|y_x|^2dx=o(1). \end{aligned}$$
(2.107)

Proof

First, we fix a cut-off function \(\theta _4\in C^1([0,L])\) such that

$$\begin{aligned} \theta _4(x):=\left\{ \begin{array}{clc} 1&{}\text {if}&{}x\in (c_1+\delta ,c_2-\delta ),\\ 0&{}\text {if}&{}x\in (0,c_1)\cup (c_2,L),\\ 0\le \theta _4\le 1&{}&{}\text {elsewhere}. \end{array} \right. \end{aligned}$$
(2.108)

Multiplying (2.99) by \(\theta _4{\bar{y}}\), integrating over (0, L), then using integration by parts, \(\Vert F\Vert _{{\mathcal {H}}}=o(1)\) and \(\Vert U\Vert _{{\mathcal {H}}}=1\), we get

$$\begin{aligned} \int _{c_1}^{c_2}\theta _4|{\lambda }y|^2dx-\int _{0}^{L}\theta _4|y_x|^2dx-\int _0^L\theta _4'y_x{\bar{y}}dx+i{\lambda }c_0\int _{c_1}^{c_2}\theta _4u{\bar{y}}dx=\frac{o(1)}{{\lambda }^2}.\qquad \end{aligned}$$
(2.109)

Using (2.102), the definition of \(\theta _4\), and the fact that \({\lambda }u\) is uniformly bounded in \(L^2(0,L)\), we get

$$\begin{aligned} \int _{c_1}^{c_2}\theta _4|{\lambda }y|^2dx=o(1),\quad \int _0^L\theta _4'y_x{\bar{y}}dx=o({\lambda }^{-1}),\quad i{\lambda }c_0\int _{c_1}^{c_2}\theta _4u{\bar{y}}dx=o({\lambda }^{-1}).\qquad \qquad \end{aligned}$$
(2.110)

Finally, inserting (2.110) in (2.109), we get the desired result (2.107). The proof has been completed. \(\square \)

Lemma 2.18

The solution \(U\in D(\mathcal A)\) of system (2.48)−(2.51) satisfies the following estimations:

$$\begin{aligned}&\int _0^{c_1+\delta }|{\lambda }y|^2dx,\int _{0}^{c_1+\delta }|y_x|^2dx,\int _{c_2-\delta }^L|{\lambda }y|^2dx,\nonumber \\&\int _{c_2-\delta }^L|y_x|^2dx,\int _{c_2}^{L}|{\lambda }u|^2dx,\int _{c_2}^{L}|u_x|^2dx=o(1). \end{aligned}$$
(2.111)

Proof

Let \(q\in C^1([0,L])\) be such that \(q(0)=q(L)=0\). Multiplying (2.99) by \(2q{\bar{y}}_x\), integrating over \((0,L)\), taking the real part, and using (2.102), the fact that \(y_x\) is uniformly bounded in \(L^2(0,L)\), and \(\Vert F\Vert _{{\mathcal {H}}}=o(1)\), we get

$$\begin{aligned} \int _0^{L}q'\left( |{\lambda }y|^2+|y_x|^2\right) dx=o(1). \end{aligned}$$
(2.112)

Now, take \(q(x)=x\theta _5(x)+(x-L)\theta _6(x)\) in (2.112), such that

$$\begin{aligned} \theta _5(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(0,c_1+\delta ),\\ 0&{}\text {in}&{}(c_2-\delta ,L),\\ 0\le \theta _5\le 1&{}\text {in}&{}(c_1+\delta ,c_2-\delta ), \end{array} \right. \quad \text {and}\quad \theta _6(x):=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(c_2-\delta ,L),\\ 0&{}\text {in}&{}(0,c_1+\delta ),\\ 0\le \theta _6\le 1&{}\text {in}&{}(c_1+\delta ,c_2-\delta ). \end{array} \right. \end{aligned}$$

Then, we obtain the first four estimations in (2.111). Now, multiplying (2.98) by \(2q\left( \overline{u_x+bv_x}\right) \), integrating over \((0,L)\), taking the real part, and using the fact that \(u_x\) is uniformly bounded in \(L^2(0,L)\), (2.84), and \(\Vert F\Vert _{{\mathcal {H}}}=o(1)\), we get

$$\begin{aligned} \int _0^Lq'\left( |{\lambda }u|^2+|u_x|^2\right) dx=o(1). \end{aligned}$$
(2.113)

By taking \(q(x)=(x-L)\theta _7(x)\), such that

$$\begin{aligned} \theta _7(x)=\left\{ \begin{array}{ccc} 1&{}\text {in}&{}(c_2,L),\\ 0&{}\text {in}&{}(0,c_1),\\ 0\le \theta _7\le 1&{}\text {in}&{}(c_1,c_2), \end{array} \right. \end{aligned}$$

we get the last two estimations in (2.111). The proof has been completed. \(\square \)

Proof of Theorem 2.7. Using (2.85) and Lemmas 2.14, 2.16, 2.17, and 2.18, we get \(\Vert U\Vert _{{\mathcal {H}}}=o(1)\), which contradicts (2.46). Consequently, condition (\(H_2\)) holds. This implies the energy decay estimation (2.45).

3 Indirect stability in the multi-dimensional case

In this section, we study the well-posedness and the strong stability of system (1.5)-(1.8).

3.1 Well-posedness

In this section, we will establish the well-posedness of (1.5)-(1.8) by using a semigroup approach. The energy of system (1.5)-(1.8) is given by

$$\begin{aligned} E(t)=\frac{1}{2}\int _{\Omega }\left( |u_t|^2+|\nabla u|^2+|y_t|^2+|\nabla y|^2\right) dx. \end{aligned}$$
(3.1)

Let \((u,u_t,y,y_t)\) be a regular solution of (1.5)-(1.8). Multiplying (1.5) and (1.6) by \(\overline{u_t}\) and \(\overline{y_t}\), respectively, then using the boundary conditions (1.7), we get

$$\begin{aligned} E'(t)=-\int _{\Omega }b|\nabla u_{t}|^2dx, \end{aligned}$$
(3.2)

using the definition of b, we get \(E'(t)\le 0\). Thus, system (1.5)-(1.8) is dissipative in the sense that its energy is non-increasing with respect to time t. Let us define the energy space \({\mathcal {H}}\) by

$$\begin{aligned} {\mathcal {H}}=\left( H_0^1(\Omega )\times L^2(\Omega )\right) ^2. \end{aligned}$$

The energy space \({\mathcal {H}}\) is equipped with the inner product defined by

$$\begin{aligned} \left<U,U_1\right>_{{\mathcal {H}}}=\int _{\Omega }v\overline{v_1}dx+\int _{\Omega }\nabla {u}\cdot \nabla {\overline{u_1}}dx+\int _{\Omega }z\overline{z_1}dx+\int _{\Omega }\nabla {y}\cdot \nabla {\overline{y_1}}dx, \end{aligned}$$

for all \(U=(u,v,y,z)^\top \) and \(U_1=(u_1,v_1,y_1,z_1)^\top \) in \({\mathcal {H}}\). We define the unbounded linear operator \({\mathcal {A}}_d:D\left( {\mathcal {A}}_d\right) \subset {\mathcal {H}}\longrightarrow {\mathcal {H}}\) by

$$\begin{aligned} D({\mathcal {A}}_d)= & {} \left\{ U=(u,v,y,z)^\top \in {\mathcal {H}};\ v,z\in H_0^1(\Omega ),\right. \\&\left. \quad {{\,\mathrm{div}\,}}(\nabla u+b\nabla v)\in L^2(\Omega ),\ \Delta y \in L^2 (\Omega ) \right\} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {A}}_d U=\begin{pmatrix} v\\ {{\,\mathrm{div}\,}}(\nabla u+b\nabla v)-cz\\ z\\ \Delta y+cv \end{pmatrix}, \ \forall U=(u,v,y,z)^\top \in D({\mathcal {A}}_d). \end{aligned}$$

If \(U=(u,u_t,y,y_t)\) is a regular solution of system (1.5)-(1.8), then we rewrite this system as the following first-order evolution equation:

$$\begin{aligned} U_t={\mathcal {A}}_dU,\quad U(0)=U_0, \end{aligned}$$
(3.3)

where \(U_0=(u_0,u_1,y_0,y_1)^{\top }\in \mathcal H\). For all \(U=(u,v,y,z)^{\top }\in D({\mathcal {A}}_d )\), we have

$$\begin{aligned} \Re \left<{\mathcal {A}}_d U,U\right>_{{\mathcal {H}}}=-\int _{\Omega }b|\nabla v|^2dx\le 0, \end{aligned}$$

which implies that \({\mathcal {A}}_d\) is dissipative. Now, similar to Proposition 2.1 in Akil et al. (2022), we can prove that there exists a unique solution \(U=(u,v,y,z)^{\top }\in D({\mathcal {A}}_d)\) of

$$\begin{aligned} -\mathcal A_d U=F,\quad \forall F=(f^1,f^2,f^3,f^4)^\top \in {\mathcal {H}}. \end{aligned}$$

Then, \(0\in \rho ({\mathcal {A}}_d)\) and \({\mathcal {A}}_d\) is an isomorphism. Since \(\rho ({\mathcal {A}}_d)\) is open in \({\mathbb {C}}\) (see Theorem 6.7, Chapter III, in Kato 1995), we easily get \(R(\lambda I -{\mathcal {A}}_d) = {{\mathcal {H}}}\) for sufficiently small \(\lambda >0 \). This, together with the dissipativeness of \({\mathcal {A}}_d\), implies that \(D\left( {\mathcal {A}}_d\right) \) is dense in \({{\mathcal {H}}}\) and that \({\mathcal {A}}_d\) is m-dissipative in \({{\mathcal {H}}}\) (see Theorems 4.5 and 4.6 in Pazy 1983). According to the Lumer–Phillips theorem (see Pazy 1983), the operator \(\mathcal A_d\) generates a \(C_{0}\)-semigroup of contractions \(e^{t\mathcal A_d}\) on \(\mathcal H\), which gives the well-posedness of (3.3). Then, we have the following result:

Theorem 3.1

For all \(U_0 \in \mathcal H\), system (3.3) admits a unique weak solution

$$\begin{aligned} U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,\mathcal H). \end{aligned}$$

Moreover, if \(U_0 \in D(\mathcal A_d)\), then the system (3.3) admits a unique strong solution

$$\begin{aligned} U(t)=e^{t\mathcal A_d}U_0\in C^0 (\mathbb R_+ ,D(\mathcal A_d))\cap C^1 (\mathbb R_+ ,\mathcal H). \end{aligned}$$

3.2 Strong stability

In this section, we will prove the strong stability of system (1.5)-(1.8). First, we fix the following notations:

$$\begin{aligned} {\widetilde{\Omega }}=\Omega -\overline{\omega _c},\quad \Gamma _1=\partial \omega _c-\partial \Omega \quad \text {and}\quad \Gamma _0=\partial \omega _c-\Gamma _1. \end{aligned}$$

Let \(x_0\in {\mathbb {R}}^{d}\), set \(m(x)=x-x_0\), and suppose that (see Figure 4)

$$\begin{aligned} m\cdot \nu \le 0\quad \text {on}\quad \Gamma _0=\left( \partial \omega _c\right) -\Gamma _1. \end{aligned}$$
(GC)
Fig. 4 Geometric description of the sets \(\omega _b\) and \(\omega _c\)

The main result of this section is the following theorem:

Theorem 3.2

Assume that (GC) holds and

$$\begin{aligned} \Vert c\Vert _{\infty }\le \min \left\{ \frac{1}{\Vert m\Vert _{\infty }+\frac{d-1}{2}},\frac{1}{\Vert m\Vert _{\infty }+\frac{(d-1)C_{p,\omega _c}}{2}}\right\} , \end{aligned}$$
(SSC)

where \(C_{p,\omega _c}\) is the Poincaré constant on \(\omega _c\). Then, the \(C_0\)-semigroup of contractions \(\left( e^{t{\mathcal {A}}_d}\right) \) is strongly stable in \({\mathcal {H}}\); i.e., for all \(U_0\in {\mathcal {H}}\), the solution of (3.3) satisfies

$$\begin{aligned} \lim _{t\rightarrow +\infty }\Vert e^{t{\mathcal {A}}_d}U_0\Vert _{{\mathcal {H}}}=0. \end{aligned}$$

Proof

First, let us prove that

$$\begin{aligned} \ker (i{\lambda }I-\mathcal A_d)=\{0\},\ \forall {\lambda }\in \mathbb R. \end{aligned}$$
(3.4)

Since \(0\in \rho ({\mathcal {A}}_d)\), we still need to show the result for \(\lambda \in {\mathbb {R}}^{*}\). Suppose that there exists a real number \(\lambda \ne 0\) and \(U=(u,v,y,z)^\top \in D({\mathcal {A}}_d)\), such that

$$\begin{aligned} {\mathcal {A}}_dU=i{\lambda }U. \end{aligned}$$

Equivalently, we have

$$\begin{aligned} v= & {} i{\lambda }u, \end{aligned}$$
(3.5)
$$\begin{aligned} {{\,\mathrm{div}\,}}(\nabla u+b\nabla v)-cz= & {} i{\lambda }v, \end{aligned}$$
(3.6)
$$\begin{aligned} z= & {} i{\lambda }y, \end{aligned}$$
(3.7)
$$\begin{aligned} \Delta y+cv= & {} i{\lambda }z. \end{aligned}$$
(3.8)

Next, a straightforward computation gives

$$\begin{aligned} 0=\Re \left<i{\lambda }U,U\right>_{{\mathcal {H}}}=\Re \left<{\mathcal {A}}_dU,U\right>_{{\mathcal {H}}}=-\int _{\Omega }b|\nabla v|^2dx, \end{aligned}$$

consequently, we deduce that

$$\begin{aligned} \sqrt{b}\nabla v=0\ \ \text {in}\ \ \Omega \quad \text {and}\quad \nabla v= \nabla u=0 \quad \text {in}\quad \omega _b. \end{aligned}$$
(3.9)

Inserting (3.5) in (3.6), then using (3.9) and the definition of c, we get

$$\begin{aligned} \Delta u=-{\lambda }^2 u\quad \text {in}\quad \omega _b. \end{aligned}$$
(3.10)

From (3.9), we get \(\Delta u=0\) in \(\omega _b\), and then from (3.10) and the fact that \({\lambda }\ne 0\), we get

$$\begin{aligned} u=0\quad \text {in}\quad \omega _b. \end{aligned}$$
(3.11)

Now, inserting (3.5) in (3.6), then using (3.9), (3.11) and the definition of c, we get

$$\begin{aligned} \begin{array}{rll} {\lambda }^2u+\Delta u&{}=&{}0\ \ \text {in}\ \ {\widetilde{\Omega }},\\ u&{}=&{}0\ \ \text {in}\ \ \omega _b\subset {\widetilde{\Omega }}. \end{array} \end{aligned}$$
(3.12)

Using the Holmgren uniqueness theorem, we get

$$\begin{aligned} u=0\quad \text {in}\quad {\widetilde{\Omega }}. \end{aligned}$$
(3.13)

It follows that

$$\begin{aligned} u=\frac{\partial u}{\partial \nu }=0\quad \text {on}\quad \Gamma _1. \end{aligned}$$
(3.14)

Now, our aim is to show that \(u=y=0\) in \(\omega _c\). To this end, inserting (3.5) and (3.7) in (3.6) and (3.8), then using (3.9), we get the following system:

$$\begin{aligned} {\lambda }^2u+\Delta u-i{\lambda }cy= & {} 0\quad \text {in}\ \Omega , \end{aligned}$$
(3.15)
$$\begin{aligned} {\lambda }^2y+\Delta y+i{\lambda }cu= & {} 0\quad \text {in}\ \Omega , \end{aligned}$$
(3.16)
$$\begin{aligned} u= & {} 0\quad \text {on}\ \partial \omega _c, \end{aligned}$$
(3.17)
$$\begin{aligned} y= & {} 0\quad \text {on}\ \Gamma _0, \end{aligned}$$
(3.18)
$$\begin{aligned} \frac{\partial u}{\partial \nu }= & {} 0\quad \text {on}\ \Gamma _1. \end{aligned}$$
(3.19)

Let us prove (3.4) by the following three steps:

Step 1. The aim of this step is to show that

$$\begin{aligned} \int _{\Omega }c|u|^2dx=\int _{\Omega }c|y|^2dx. \end{aligned}$$
(3.20)

To this end, multiplying (3.15) and (3.16) by \({\bar{y}}\) and \({\bar{u}}\), respectively, integrating over \(\Omega \) and using Green’s formula, we get

$$\begin{aligned} {\lambda }^2\int _{\Omega }u{\bar{y}}dx-\int _{\Omega }\nabla u\cdot \nabla {{\bar{y}}}dx-i{\lambda }\int _{\Omega }c|y|^2dx= & {} 0, \end{aligned}$$
(3.21)
$$\begin{aligned} {\lambda }^2\int _{\Omega }y{\bar{u}}dx-\int _{\Omega }\nabla y\cdot \nabla {{\bar{u}}}dx+i{\lambda }\int _{\Omega }c|u|^2dx= & {} 0. \end{aligned}$$
(3.22)

Adding (3.21) and (3.22), then taking the imaginary part, we get (3.20).
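In more detail, the sum of (3.21) and (3.22) reads

$$\begin{aligned} 2{\lambda }^2\Re \int _{\Omega }u{\bar{y}}dx-2\Re \int _{\Omega }\nabla u\cdot \nabla {\bar{y}}dx+i{\lambda }\int _{\Omega }c\left( |u|^2-|y|^2\right) dx=0, \end{aligned}$$

whose first two terms are real; taking the imaginary part and using \({\lambda }\ne 0\) yields (3.20).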

Step 2. The aim of this step is to prove the following identity:

$$\begin{aligned}&-d\int _{\omega _c}|{\lambda }u|^2dx+(d-2)\int _{\omega _c}|\nabla u|^2dx+\int _{\Gamma _0}(m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2d\Gamma \nonumber \\&\quad -2\Re \left( i{\lambda }\int _{\omega _c}cy\left( m\cdot \nabla {{\bar{u}}}\right) dx\right) =0. \end{aligned}$$
(3.23)

To this end, multiplying (3.15) by \(2(m\cdot \nabla {\bar{u}})\), integrating over \(\omega _c\), and taking the real part, we get

$$\begin{aligned}&2\Re \left( {\lambda }^2\int _{\omega _c}u(m\cdot \nabla {\bar{u}})dx\right) +2\Re \left( \int _{\omega _c}\Delta u(m\cdot \nabla {\bar{u}})dx\right) \nonumber \\&-2\Re \left( i{\lambda }\int _{\omega _c}cy(m\cdot \nabla {\bar{u}})dx\right) =0. \end{aligned}$$
(3.24)

Now, using the fact that \(u=0\) on \(\partial \omega _c\), we get

$$\begin{aligned} \Re \left( 2{\lambda }^2\int _{\omega _c}u(m\cdot \nabla {\bar{u}})dx\right) =-d\int _{\omega _c}|{\lambda }u|^2dx. \end{aligned}$$
(3.25)
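In more detail, writing \(2\Re \left( u\,m\cdot \nabla {\bar{u}}\right) =m\cdot \nabla |u|^2\) and applying the divergence theorem (recall that \({{\,\mathrm{div}\,}}m=d\) and \(u=0\) on \(\partial \omega _c\) by (3.17)) gives

$$\begin{aligned} 2\Re \left( {\lambda }^2\int _{\omega _c}u(m\cdot \nabla {\bar{u}})dx\right) ={\lambda }^2\int _{\omega _c}m\cdot \nabla |u|^2dx=-{\lambda }^2\int _{\omega _c}{{\,\mathrm{div}\,}}(m)|u|^2dx+{\lambda }^2\int _{\partial \omega _c}(m\cdot \nu )|u|^2d\Gamma =-d\int _{\omega _c}|{\lambda }u|^2dx. \end{aligned}$$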

Using Green’s formula, we obtain

$$\begin{aligned}&2\Re \left( \int _{\omega _c}\Delta u(m\cdot \nabla {\bar{u}})dx\right) \nonumber \\&\quad =\displaystyle -2\Re \left( \int _{\omega _c}\nabla u\cdot \nabla \left( m\cdot \nabla {\bar{u}}\right) dx\right) +2\Re \left( \int _{\Gamma _0}\frac{\partial u}{\partial \nu }\left( m\cdot \nabla {\bar{u}}\right) d\Gamma \right) \nonumber \\&\quad =\displaystyle (d-2)\int _{\omega _c}|\nabla u|^2dx-\int _{\partial \omega _c}(m\cdot \nu )|\nabla u|^2dx+2\Re \left( \int _{\Gamma _0}\frac{\partial u}{\partial \nu }\left( m\cdot \nabla {\bar{u}}\right) d\Gamma \right) . \end{aligned}$$
(3.26)
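The last equality in (3.26) is a Rellich-type computation: one uses the pointwise identity and the divergence theorem

$$\begin{aligned} 2\Re \left( \nabla u\cdot \nabla (m\cdot \nabla {\bar{u}})\right) =2|\nabla u|^2+m\cdot \nabla |\nabla u|^2,\qquad \int _{\omega _c}m\cdot \nabla |\nabla u|^2dx=-d\int _{\omega _c}|\nabla u|^2dx+\int _{\partial \omega _c}(m\cdot \nu )|\nabla u|^2d\Gamma , \end{aligned}$$

which, once combined, give \(-2\Re \left( \int _{\omega _c}\nabla u\cdot \nabla (m\cdot \nabla {\bar{u}})dx\right) =(d-2)\int _{\omega _c}|\nabla u|^2dx-\int _{\partial \omega _c}(m\cdot \nu )|\nabla u|^2d\Gamma \).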

Using (3.17) and (3.19), we get

$$\begin{aligned} \int _{\partial \omega _c}(m\cdot \nu )|\nabla u|^2dx= & {} \int _{\Gamma _0}(m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2d\Gamma \ \ \text {and}\nonumber \\ \Re \left( \int _{\Gamma _0}\frac{\partial u}{\partial \nu }\left( m\cdot \nabla {\bar{u}}\right) d\Gamma \right)= & {} \int _{\Gamma _0}(m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2d\Gamma . \end{aligned}$$
(3.27)
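Both equalities in (3.27) come from the boundary conditions: since \(u=0\) on \(\partial \omega _c\) by (3.17), the tangential part of \(\nabla u\) vanishes on \(\partial \omega _c\), so that

$$\begin{aligned} \nabla u=\frac{\partial u}{\partial \nu }\,\nu \quad \text {and}\quad m\cdot \nabla {\bar{u}}=(m\cdot \nu )\frac{\partial {\bar{u}}}{\partial \nu }\quad \text {on}\ \partial \omega _c, \end{aligned}$$

while \(\frac{\partial u}{\partial \nu }=0\) on \(\Gamma _1\) by (3.19); hence both boundary integrals over \(\partial \omega _c\) reduce to integrals over \(\Gamma _0\) of \((m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2\).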

Inserting (3.27) in (3.26), we get

$$\begin{aligned} 2\Re \left( \int _{\omega _c}\Delta u(m\cdot \nabla {\bar{u}})dx\right) =(d-2)\int _{\omega _c}|\nabla u|^2dx+\int _{\Gamma _0}(m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2d\Gamma . \end{aligned}$$
(3.28)

Inserting (3.25) and (3.28) in (3.24), we get (3.23).

Step 3. In this step, we prove (3.4). Multiplying (3.15) by \((d-1){\overline{u}}\), integrating over \(\omega _c\), taking the real part, and using (3.17), we get

$$\begin{aligned} (d-1)\int _{\omega _c}|{\lambda }u|^2dx+(1-d)\int _{\omega _c}|\nabla u|^2dx-\Re \left( i{\lambda }(d-1)\int _{\omega _c}cy{\bar{u}}dx\right) =0. \end{aligned}$$
(3.29)

Adding (3.23) and (3.29), we get

$$\begin{aligned} \int _{\omega _c}|{\lambda }u|^2dx+\int _{\omega _c}|\nabla u|^2dx= & {} \int _{\Gamma _0}(m\cdot \nu )\left| \frac{\partial u}{\partial \nu }\right| ^2d\Gamma -2\Re \left( i{\lambda }\int _{\omega _c}cy\left( m\cdot \nabla {{\bar{u}}}\right) dx\right) \\&\quad -\Re \left( i{\lambda }(d-1)\int _{\omega _c}cy{\bar{u}}dx\right) . \end{aligned}$$
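Indeed, when adding (3.23) and (3.29), the coefficients of the volume terms combine as

$$\begin{aligned} \left( -d+(d-1)\right) \int _{\omega _c}|{\lambda }u|^2dx=-\int _{\omega _c}|{\lambda }u|^2dx\quad \text {and}\quad \left( (d-2)+(1-d)\right) \int _{\omega _c}|\nabla u|^2dx=-\int _{\omega _c}|\nabla u|^2dx, \end{aligned}$$

and, since \(m\cdot \nu \le 0\) on \(\Gamma _0\) by (GC), the boundary term on the right-hand side is non-positive.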

Using (GC), we get

$$\begin{aligned}&\int _{\omega _c}|{\lambda }u|^2dx+\int _{\omega _c}|\nabla u|^2dx\le 2|{\lambda }|\int _{\omega _c}|c||y||m\cdot \nabla u|dx\nonumber \\&\quad +|{\lambda }|(d-1)\int _{\omega _c}|c||y||u|dx. \end{aligned}$$
(3.30)

Using Young’s inequality and (3.20), we get

$$\begin{aligned} 2|{\lambda }|\int _{\omega _c}|c||y||m\cdot \nabla u|dx\le \Vert m\Vert _{\infty }\Vert c\Vert _{\infty }\int _{\omega _c}\left( |{\lambda }u|^2+|\nabla u|^2\right) dx \end{aligned}$$
(3.31)

and

$$\begin{aligned}&|{\lambda }|(d-1)\int _{\omega _c}|c(x)||y||u|dx\le \frac{(d-1)\Vert c\Vert _{\infty }}{2}\int _{\omega _c}|{\lambda }u|^2dx\nonumber \\&+\frac{(d-1)\Vert c\Vert _{\infty }C_{p,\omega _c}}{2}\int _{\omega _c}|\nabla u|^2dx. \end{aligned}$$
(3.32)
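Both (3.31) and (3.32) follow from the pointwise Young inequality, identity (3.20), and the Poincaré inequality on \(\omega _c\) (which applies since \(u=0\) on \(\partial \omega _c\)). A sketch of the intermediate bounds, under the assumption (used implicitly here and satisfied in the prototypical case \(c=c_0\chi _{\omega _c}\)) that \(c\) does not change sign on \(\omega _c\), reads:

$$\begin{aligned} 2|{\lambda }||c||y||m\cdot \nabla u|\le \Vert m\Vert _{\infty }|c|\left( |{\lambda }y|^2+|\nabla u|^2\right) ,\qquad (d-1)|{\lambda }||c||y||u|\le \frac{(d-1)|c|}{2}\left( |{\lambda }y|^2+|u|^2\right) , \end{aligned}$$

together with

$$\begin{aligned} \int _{\omega _c}|c||{\lambda }y|^2dx=\left| \int _{\Omega }c|{\lambda }y|^2dx\right| =\left| \int _{\Omega }c|{\lambda }u|^2dx\right| \le \Vert c\Vert _{\infty }\int _{\omega _c}|{\lambda }u|^2dx\quad \text {and}\quad \int _{\omega _c}|c||u|^2dx\le \Vert c\Vert _{\infty }C_{p,\omega _c}\int _{\omega _c}|\nabla u|^2dx. \end{aligned}$$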

Inserting (3.31) and (3.32) in (3.30), we get

$$\begin{aligned}&\left( 1-\Vert c\Vert _{\infty }\left( \Vert m\Vert _{\infty }+\frac{d-1}{2}\right) \right) \int _{\omega _c}|{\lambda }u|^2dx+\left( 1-\Vert c\Vert _{\infty }\left( \Vert m\Vert _{\infty }+\frac{(d-1)C_{p,\omega _c}}{2}\right) \right) \\&\quad \int _{\omega _c}|\nabla u|^2dx\le 0. \end{aligned}$$

Using (SSC) and (3.20) in the above estimate, we get

$$\begin{aligned} u=0\quad \text {and}\quad y=0\quad \text {in}\quad \omega _c. \end{aligned}$$
(3.33)

In order to complete the proof, we need to show that \(y=0\) in \({\widetilde{\Omega }}\). To this end, using the definition of the function c in \({\widetilde{\Omega }}\) and the fact that \(y=0\) in \(\omega _c\), we get

$$\begin{aligned} \begin{array}{rll} \displaystyle {\lambda }^2y+\Delta y&{}=&{}0\ \ \text {in}\ \ {\widetilde{\Omega }},\\ \displaystyle y&{}=&{}0 \ \ \text {on}\ \ \partial {\widetilde{\Omega }},\\ \displaystyle \frac{\partial y}{\partial \nu }&{}=&{}0\ \ \text {on}\ \ \Gamma _1. \end{array} \end{aligned}$$
(3.34)

Now, using the Holmgren uniqueness theorem, we obtain \(y=0\) in \({\widetilde{\Omega }}\), and consequently (3.4) holds true. Moreover, similar to Lemma 2.5 in Akil et al. (2022), we can prove that \(R(i{\lambda }I-\mathcal A_d)=\mathcal H\) for all \({\lambda }\in \mathbb R\). Finally, by using Banach's closed graph theorem and Theorem A.2, we conclude the proof of this theorem. \(\square \)

Let us notice that, under the sole assumptions (GC) and (SSC), the polynomial stability of system (1.5)-(1.8) is an open problem.

4 Conclusion and open problems

4.1 Conclusion

Concerning the first part of this paper: In Ghader et al. (2020) and Ghader et al. (2021), the authors considered an elastic/viscoelastic wave equation with one local Kelvin-Voigt damping and with an internal or boundary time delay, and they obtained an optimal polynomial energy decay rate of type \(t^{-4}\). In 2021, Akil et al. considered singular locally coupled elastic/viscoelastic wave equations with one singular local Kelvin-Voigt damping such that the damping region and the coupling region intersect; a polynomial energy decay rate of type \(t^{-1}\) was established. The case where the damping and coupling regions are disjoint remained an open problem, and this is precisely the case considered here. In the first part of this paper, we consider the direct stability of one-dimensional coupled wave equations; i.e., the two wave equations are damped. We note that the position of the coupling region plays a very important role. We proved results in the following two cases:

  • If we divide the bar into seven pieces: the first piece is an elastic part without coupling, the second is a viscoelastic part without coupling, the third is an elastic part without coupling, the fourth is a viscoelastic part without coupling, the fifth is an elastic part without coupling, the sixth is an elastic part with coupling, and the last is an elastic part without coupling (see (C2) and Figure 2). In this case, our system is always asymptotically stable and a polynomial energy decay rate of type \(t^{-4}\) has been obtained.

  • If we divide the bar into seven pieces: the first piece is an elastic part without coupling, the second is a viscoelastic part without coupling, the third is an elastic part without coupling, the fourth is an elastic part with coupling, the fifth is an elastic part without coupling, the sixth is a viscoelastic part without coupling, and the last is an elastic part without coupling (see (C1) and Figure 1). In this case, our system is strongly stable if the coupling coefficient satisfies

$$\begin{aligned} |c_0|<\min \left( \frac{\sqrt{a}}{c_2-c_1},\frac{1}{c_2-c_1}\right) . \end{aligned}$$
(4.1)

In this case, a polynomial energy decay rate of type \(t^{-4}\) has been proved. Concerning the second part of this paper: we consider locally coupled wave equations with one local Kelvin–Voigt damping such that the damping region and the coupling region are disjoint (see (C3) and Figure 3). We assume that the two wave equations propagate at the same speed \((a=1)\) and that the coupling coefficient satisfies the following condition:

$$\begin{aligned} |c_0|<\frac{1}{c_2-c_1}. \end{aligned}$$
(4.2)

In this case, our system is strongly stable and a polynomial energy decay rate of type \(t^{-1}\) has been obtained.

Concerning the third part of this paper: In Akil et al. (2022), the authors considered multi-dimensional locally coupled wave equations with local Kelvin-Voigt damping. When the supports of the coupling and the damping coefficients intersect, they proved that the system is strongly stable, without any geometric condition and without any condition on the coefficients. Moreover, under the geometric control condition (GCC), they proved a polynomial energy decay rate of type \(t^{-1}\). In the third part of this paper, we consider the same system under the condition that the coupling and the damping regions are disjoint. When the two wave equations propagate at the same speed \((a=1)\), the relevant part of the boundary of the coupling region satisfies the multiplier geometric condition (see (GC)), and the coupling coefficient satisfies the following condition:

$$\begin{aligned} \Vert c\Vert _{\infty }\le \min \left\{ \frac{1}{\Vert m\Vert _{\infty }+\frac{d-1}{2}},\frac{1}{\Vert m\Vert _{\infty }+\frac{(d-1)C_{p,\omega _c}}{2}}\right\} , \end{aligned}$$
(4.3)

we prove that our system is strongly stable.
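As a purely illustrative remark, the scalar smallness conditions (4.1), (4.2) and (4.3) are elementary to check numerically for given data; the following sketch (with hypothetical parameter values that are not taken from this paper, and with a user-supplied value for the Poincaré constant \(C_{p,\omega _c}\)) shows how this could be done:

```python
import math

def check_41(c0: float, a: float, c1: float, c2: float) -> bool:
    """Condition (4.1): |c0| < min(sqrt(a)/(c2 - c1), 1/(c2 - c1))."""
    return abs(c0) < min(math.sqrt(a), 1.0) / (c2 - c1)

def check_42(c0: float, c1: float, c2: float) -> bool:
    """Condition (4.2): |c0| < 1/(c2 - c1)."""
    return abs(c0) < 1.0 / (c2 - c1)

def check_43(c_inf: float, m_inf: float, d: int, poincare: float) -> bool:
    """Condition (4.3): ||c||_inf <= min of the two bounds involving ||m||_inf,
    the space dimension d and the Poincare constant C_{p, omega_c}."""
    bound = min(1.0 / (m_inf + (d - 1) / 2.0),
                1.0 / (m_inf + (d - 1) * poincare / 2.0))
    return c_inf <= bound

# Hypothetical values, chosen only to illustrate the bounds.
print(check_41(c0=0.4, a=2.0, c1=1.0, c2=3.0))            # True:  0.4 < 0.5
print(check_42(c0=0.6, c1=1.0, c2=3.0))                   # False: 0.6 >= 0.5
print(check_43(c_inf=0.2, m_inf=2.0, d=3, poincare=1.5))  # True:  0.2 <= 1/3.5
```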

4.2 Open problems

In this part, we present some open problems:

(OP1):

The optimality of the polynomial decay rate of the system (1.1)-(1.4) remains an open problem.

(OP2):

For the first part of this paper: Can we get stability results if the coupling coefficient does not satisfy (4.1)?

(OP3):

For the second part of this paper: Can we get stability results if the coupling coefficient does not satisfy (4.2) or if the two wave equations propagate at different speeds (i.e. \(a\ne 1\))?

(OP4):

For the third part of this paper: Can we get stability results if the coupling region does not satisfy any geometric condition, if the coupling coefficient does not satisfy (4.3), or if the two wave equations propagate at different speeds (i.e. \(a\ne 1\))?