1 Introduction

1.1 Motivation and Aims

There are several mathematical models representing physical damping. The types of damping most often encountered in vibration studies are linear viscous damping and Kelvin-Voigt damping, which are special cases of proportional damping. Viscous damping usually models external friction forces, such as air resistance, acting on the vibrating structure and is therefore called “external damping”, while Kelvin-Voigt damping originates from the internal friction of the material of the vibrating structure and is therefore called “internal damping”. In 1988, F. Huang [17] considered a wave equation with globally distributed Kelvin-Voigt damping, i.e. with a damping coefficient that is strictly positive on the entire spatial domain. He proved that the corresponding semigroup is not only exponentially stable but also analytic (see Definition A.10, Theorem A.12 and Theorem A.14 below). Thus, Kelvin-Voigt damping is stronger than viscous damping when it is globally distributed. Indeed, it was proved that the semigroup corresponding to the system of wave equations with global viscous damping is exponentially stable but not analytic (see [11] for the one-dimensional system and [8] for the higher-dimensional one). Moreover, the exponential stability of a wave equation remains true even if the viscous damping is localized, via a smooth or a non-smooth damping coefficient, in a suitable subdomain satisfying some geometric conditions (see [8]). Nevertheless, when the viscoelastic damping is distributed locally, the situation is more delicate and such a comparison between viscous and viscoelastic damping is no longer valid. Indeed, the stabilization of the wave equation with local Kelvin-Voigt damping is greatly influenced by the smoothness of the damping coefficient and by the region where the damping is localized (near or far away from the boundary), even in the one-dimensional case. Hence, the stabilization of systems (simple or coupled) with local Kelvin-Voigt damping has attracted the attention of many authors (see the Literature section below for the history of this kind of damping). From a mathematical point of view, it is important to study the stability of a system coupling a locally damped wave equation with a conservative one. Moreover, the study of this kind of system is also motivated by several physical considerations and occurs in many applications in engineering and mechanics. In this direction, in 2019, Hassine and Souayeh [15] studied the stabilization of a system of globally coupled wave equations with one localized Kelvin-Voigt damping. The system is described by

$$ \left \{ \textstyle\begin{array}{l} u_{tt}-\left (u_{x}+b(x)u_{tx}\right )_{x}+v_{t}=0, \hspace{0.7cm} (x,t)\in (-1,1)\times \mathbb{R}^{+}, \\ v_{tt}-cv_{xx}-u_{t}=0, \hspace{2.5cm} (x,t)\in (-1,1)\times \mathbb{R}^{+}, \\ u(0,t)=v(0,t)=0,u(1,t)=v(1,t)=0, \hspace{0.5cm} t>0, \\ u(x,0)=u_{0}(x),u_{t}(x,0)=u_{1}(x), \hspace{1cm} x\in (-1,1), \\ v(x,0)=v_{0}(x),v_{t}(x,0)=v_{1}(x), \hspace{1.3cm} x\in (-1,1), \end{array}\displaystyle \right . $$
(1.1)

where \(c>0\), and \(b\in L^{\infty }(-1,1)\) is a non-negative function. They assumed that the damping coefficient \(b\) is equal to a strictly positive constant \(d\) on a subinterval located near the boundary and vanishes elsewhere. The Kelvin-Voigt damping \(\left (b(x)u_{tx}\right )_{x}\) is applied to the first equation, and the second equation is indirectly damped through the coupling between the two equations. Under the two conditions that the Kelvin-Voigt damping is localized near the boundary and that the two waves are globally coupled, they obtained a polynomial energy decay rate of type \(t^{-{\frac{1}{6}}}\). The stabilization of System (1.1) in the case where the Kelvin-Voigt damping is localized in an arbitrary subinterval of \((-1,1)\) and the two waves are locally coupled was left as an open problem. In addition, we believe that the energy decay rate obtained in [15] can be improved. So, we are interested in studying this open problem.

The main aim of this paper is to study the stabilization of a system of locally coupled wave equations with only one Kelvin-Voigt damping, localized via a non-smooth coefficient in a subinterval of the domain. The system is described by

$$\begin{aligned} u_{tt}-\left (au_{x}+b(x)u_{tx}\right )_{x}+c(x)\ y_{t} =&0,\quad (x,t) \in (0,L)\times \mathbb{R}^{+}, \end{aligned}$$
(1.2)
$$\begin{aligned} y_{tt}-y_{xx}-c(x)\ u_{t} =&0,\quad (x,t)\in (0,L)\times \mathbb{R}^{+}, \end{aligned}$$
(1.3)

with fully Dirichlet boundary conditions,

$$ u(0,t)=u(L,t)=y(0,t)=y(L,t)=0,\quad \forall \ t\in \mathbb{R}^{+}, $$
(1.4)

where

$$ b(x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} b_{0}&\text{if}&x\in (\alpha _{1},\alpha _{3}) \\ 0&&\text{otherwise} \end{array}\displaystyle \right . \quad \text{and}\quad c(x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} c_{0}&\text{if}&x\in (\alpha _{2},\alpha _{4}) \\ 0&&\text{otherwise} \end{array}\displaystyle \right . $$
(1.5)

and where \(a>0\), \(b_{0}>0\), \(c_{0}\in \mathbb{R}^{\ast}\) and \(0<\alpha _{1}<\alpha _{2}<\alpha _{3}<\alpha _{4}<L\). The system is supplemented with the following initial data

$$ u(x,0)=u_{0}(x),\quad u_{t}(x,0)=u_{1}(x),\quad y(x,0)=y_{0}(x),\quad y_{t}(x,0)=y_{1}(x),\quad x\in (0,L). $$
(1.6)

1.2 Literature

A wave is created when a vibrating source disturbs the medium. In order to restrain these vibrations, several types of damping can be added, such as Kelvin-Voigt damping, which originates from the extension or compression of the vibrating particles. This damping models a viscoelastic material having properties of both elasticity and viscosity. In recent years, many researchers have shown interest in problems involving this kind of damping (local or global), where different types of stability have been established. In particular, in the one-dimensional case, it was proved that the smoothness of the damping coefficient critically affects the stability and the regularity of the solution of the system. Indeed, in the one-dimensional case we can consider the following system

$$ \left \{ \textstyle\begin{array}{l} u_{tt}-\left (u_{x}+b_{1}(x)u_{tx}\right )_{x}=0, \hspace{1cm} -1\leq x\leq 1, t>0, \\ u(1,t)=u(-1,t)=0, \hspace{2.2cm} t>0, \\ u(x,0)=u_{0}(x),u_{t}(x,0)=u_{1}(x),\quad -1\leq x\leq 1, \end{array}\displaystyle \right . $$
(1.7)

with \(b_{1}\in L^{\infty }(-1,1)\) and

$$ b_{1}(x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} 0&\text{if}&x\in (0,1), \\ a_{1}(x)&\text{if}& x\in (-1,0), \end{array}\displaystyle \right . $$
(1.8)

where the function \(a_{1}(x)\) is non-negative. The case of local Kelvin-Voigt damping was first studied in 1998 [19, 25], where it was proved that the semigroup loses exponential stability and the smoothing property when the damping is local and \(a_{1}=1\), or when \(b_{1}(\cdot )\) is the characteristic function of any subinterval of the domain. This surprising result initiated the study of elastic systems with local Kelvin-Voigt damping. In 2002, K. Liu and Z. Liu proved that system (1.7) is exponentially stable if \(b_{1}^{\prime }(.)\in C^{0,1}([-1,1])\) (see [20]). Later, in [34], the smoothness assumption on \(b_{1}\) was weakened to \(b_{1}(\cdot )\in C^{1}([-1,1])\) and an additional condition on \(a_{1}\) was imposed. In 2004, Renardy’s results [32] hinted that the solution of system (1.7) may be exponentially stable under smoother conditions on the damping coefficient. This result was confirmed by K. Liu, Z. Liu and Q. Zhang in [26]. On the other hand, Liu and Rao in 2005 (see [21]) proved that the semigroup corresponding to system (1.7) is polynomially stable of order almost 2 if \(a_{1}(.)\in C(0,1)\) and \(a_{1}(x)\geq a_{1} \geq 0\) on \((0,1)\). The optimality of this order was later proved in [2]. In 2014, Alves et al. [1] considered the transmission problem for a material composed of three components: one of them is a Kelvin–Voigt viscoelastic material, the second is an elastic material (no dissipation) and the third is an elastic material subject to a frictional damping mechanism. They proved that the rate of decay depends on the position of each component. When the viscoelastic component is not in the middle of the material, they proved exponential stability of the solution. However, when the viscoelastic part is in the middle of the material, the solution decays polynomially as \(t^{-2}\). In 2016, under the assumption that the damping coefficient has a singularity at the interface of the damped and undamped regions and behaves like \(x^{\alpha }\) near the interface, it was proved by Liu and Zhang [23] that the semigroup corresponding to the system is polynomially or exponentially stable and that the decay rate depends on the parameter \(\alpha \in (0,1]\). In [5], Ammari et al. generalized the results on a single elastic string with local Kelvin-Voigt damping ([3, 20]). They studied the stability of a tree of elastic strings with local Kelvin-Voigt damping on some of the edges. They proved exponential or polynomial stability of the system under the compatibility condition of displacement and strain and the continuity condition of the damping coefficients at the vertices of the tree.

In [13], Hassine considered the longitudinal and transversal vibrations of a transmission Euler-Bernoulli beam with Kelvin-Voigt damping distributed locally on any subinterval of the region occupied by the beam. He proved that the semigroup associated with the equation for the transversal motion of the beam is exponentially stable, while the semigroup associated with the equation for the longitudinal motion of the beam is polynomially stable of type \(t^{-2}\). In [14], Hassine considered a beam and a wave equation coupled on an elastic beam through transmission conditions, with locally distributed Kelvin-Voigt damping that acts through only one of the two equations. He proved a polynomial energy decay rate of type \(t^{-2}\) in both cases, namely when the dissipation acts through the beam equation and when it acts through the wave equation. In 2016, Oquendo and Sanez studied the wave equation with internal coupling terms, where the Kelvin-Voigt damping is global in one equation and the second equation is conservative. They showed that the semigroup loses speed and decays only at the rate \(t^{-\frac{1}{4}}\), and they proved that this decay rate is optimal (see [30]).

Let us mention some of the results that have been established for the wave equation with Kelvin-Voigt damping in the multi-dimensional setting. In [17], the author proved that when the Kelvin-Voigt damping div\((d(x)\nabla u_{t})\) is globally distributed, i.e. \(d(x)\geq d_{0}>0\) for almost every \(x\in \Omega \), the wave equation generates an analytic semigroup. In [22], the authors considered the wave equation with local viscoelastic damping distributed around the boundary of \(\Omega \). They proved that the energy of the system decays exponentially to zero as \(t\) goes to infinity for all usual initial data, under the assumption that the damping coefficient satisfies \(d\in C^{1,1}(\Omega )\), \(\Delta d\in L^{\infty }(\Omega )\) and \(|\nabla d(x)|^{2}\leq M_{0} d(x)\) for almost every \(x\) in \(\Omega \), where \(M_{0}\) is a positive constant. On the other hand, in [33], the author studied the stabilization of the wave equation with Kelvin-Voigt damping and established a polynomial energy decay rate of type \(t^{-1}\), provided that the damping region is localized in a neighborhood of a part of the boundary and satisfies certain geometric conditions. Also, in [28], under the same assumptions on \(d\), the authors established the exponential stability of the wave equation with local Kelvin-Voigt damping localized around a part of the boundary and with an extra boundary term with time delay, where they added an appropriate geometric condition. Later on, in [4], the wave equation with Kelvin-Voigt damping localized in a subdomain \(\omega \) far away from the boundary, without any geometric condition, was considered. The authors established a logarithmic energy decay rate for smooth initial data. Furthermore, in [27], the authors investigated the stabilization of the wave equation with Kelvin-Voigt damping localized via a non-smooth coefficient in a suitable subdomain of the whole bounded domain. They proved a polynomial stability result in any space dimension, provided that the damping region satisfies some geometric conditions.

1.3 Description of the Paper

This paper is organized as follows: In Sect. 2.1, we reformulate System (1.2)-(1.6) into an evolution system and prove its well-posedness by a semigroup approach. In Sect. 2.2, using a general criterion of Arendt and Batty, we show the strong stability of our system in the absence of compactness of the resolvent. In Sect. 3, we prove that the system lacks exponential stability, using two different approaches. In the first case, the damping and the coupling terms are taken to be global, i.e. \(b(x)=b_{0}>0\) and \(c(x)=c_{0}>0\), and we prove the lack of exponential stability using Borichev-Tomilov results. In the second case, only the damping term is taken to be localized, and we use the method developed by Littman and Markus. In Sect. 4, we look for a polynomial decay rate by applying a frequency domain approach combined with a multiplier method based on the exponential stability of an auxiliary problem; we establish a polynomial energy decay of type \(t^{-1}\) for smooth solutions.

2 Well-Posedness and Strong Stability

In this section, we study the strong stability of System (1.2)-(1.6). First, using a semigroup approach, we establish the well-posedness of our system.

2.1 Well-Posedness

Firstly, we reformulate System (1.2)-(1.6) into an evolution problem in an appropriate Hilbert state space.

The energy of System (1.2)-(1.6) is given by

$$ E(t)=\frac{1}{2}\int _{0}^{L} \left (|u_{t}|^{2}+a|u_{x}|^{2}+|y_{t}|^{2}+|y_{x}|^{2} \right )dx. $$

Let \(\left (u,u_{t},y,y_{t}\right )\) be a regular solution of (1.2)-(1.6). Multiplying (1.2) and (1.3) by \(u_{t}\) and \(y_{t}\) respectively, integrating over \((0,L)\) and using the boundary conditions (1.4), we get

$$ E^{\prime }(t)=- \int _{0}^{L} b(x)|u_{tx}|^{2}dx. $$
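For real-valued solutions, this is the standard multiplier computation; a brief sketch (in which the coupling terms cancel and the boundary terms produced by the integrations by parts vanish thanks to (1.4)) reads

$$\begin{aligned} E^{\prime }(t)&=\int _{0}^{L}\left (u_{t}u_{tt}+au_{x}u_{tx}+y_{t}y_{tt}+y_{x}y_{tx}\right )dx \\ &=\int _{0}^{L}\left (u_{t}\left (au_{x}+b(x)u_{tx}\right )_{x}-c(x)y_{t}u_{t}+au_{x}u_{tx}+y_{t}y_{xx}+c(x)u_{t}y_{t}+y_{x}y_{tx}\right )dx \\ &=-\int _{0}^{L}u_{tx}\left (au_{x}+b(x)u_{tx}\right )dx+\int _{0}^{L}au_{x}u_{tx}dx-\int _{0}^{L}y_{tx}y_{x}dx+\int _{0}^{L}y_{x}y_{tx}dx \\ &=-\int _{0}^{L}b(x)\lvert u_{tx}\rvert ^{2}dx; \end{aligned}$$

the complex-valued case is identical after taking real parts.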

Using the definition of the function \(b(x)\), we get \(E^{\prime }(t)\leq 0\). Thus, System (1.2)-(1.6) is dissipative in the sense that its energy is non-increasing with respect to the time variable \(t\). Let us define the energy space ℋ by

$$ \mathcal{H}=(H_{0}^{1}(0,L)\times L^{2}(0,L))^{2}. $$

The energy space ℋ is equipped with the inner product defined by

$$ \left < U,U_{1}\right >_{\mathcal{H}}=\int _{0}^{L}v\overline{{v}}_{1}dx+a \int _{0}^{L}u_{x}(\overline{{u}}_{1})_{x}dx+\int _{0}^{L}z \overline{{z}}_{1}dx+\int _{0}^{L}y_{x}(\overline{{y}}_{1})_{x}dx, $$

for all \(U=\left (u,v,y,z\right )\) and \(U_{1}=\left (u_{1},v_{1},y_{1},z_{1}\right )\) in ℋ. We use \(\|U\|_{\mathcal{H}}\) to denote the corresponding norm. We define the unbounded linear operator \(\mathcal{A}: D\left (\mathcal{A}\right )\subset \mathcal{H} \longrightarrow \mathcal{H}\) by

$$ D(\mathcal{A})=\left \{ \textstyle\begin{array}{l} \displaystyle U=(u,v,y,z) \in \mathcal{H};\ y\in H^{2}\left (0,L \right )\cap H_{0}^{1}(0,L) \\ \displaystyle v,z\in H_{0}^{1}(0,L),(au_{x}+b(x)v_{x})_{x}\in L^{2}(0,L) \end{array}\displaystyle \right \} $$

and for all \(U=\left (u, v,y, z\right )\in D\left (\mathcal{A}\right )\),

$$ \mathcal{A}\left (u, v,y, z\right )=\left (v,(au_{x}+b(x)v_{x})_{x}-c(x)z, z, y_{xx}+c(x)v \right )^{\top }. $$

If \(U=(u,u_{t},y,y_{t})\) is the state of System (1.2)-(1.6), then this system is transformed into the first order evolution equation on the Hilbert space ℋ given by

$$ U_{t}=\mathcal{A}U,\quad U(0)=U_{0}, $$
(2.1)

where \(U_{0}=(u_{0},u_{1},y_{0},y_{1})\).

Proposition 2.1

The unbounded linear operator \(\mathcal{A}\) is m-dissipative in the energy space ℋ.

Proof

For all \(U=(u,v,y,z)\in D\left (\mathcal{A}\right )\), we have

$$ \Re \left (\left < \mathcal{A}U,U\right >_{\mathcal{H}}\right )=- \int _{0}^{L} b(x)|v_{x}|^{2}dx=-\int _{\alpha _{1}}^{\alpha _{3}} b_{0} |v_{x}|^{2}dx\leq 0, $$

which implies that \(\mathcal{A}\) is dissipative. Here \(\Re \) is used to denote the real part of a complex number. Now, let \(F=(f_{1},f_{2},f_{3},f_{4})\in \mathcal{H}\); we prove the existence of \(U=(u,v,y,z)\in D(\mathcal{A})\), solution of the equation

$$ -\mathcal{A}U=F. $$
(2.2)

Equivalently, one must consider the system given by

$$\begin{aligned} -v =&f_{1}, \end{aligned}$$
(2.3)
$$\begin{aligned} -(au_{x}+b(x)v_{x})_{x}+c(x)z =&f_{2}, \end{aligned}$$
(2.4)
$$\begin{aligned} -z =&f_{3}, \end{aligned}$$
(2.5)
$$\begin{aligned} -y_{xx}-c(x)v =&f_{4}, \end{aligned}$$
(2.6)

with the boundary conditions

$$ u(0)=u(L)=0,\quad \text{and}\quad y(0)=y(L)=0. $$
(2.7)

Let \(\left (\varphi ,\psi \right )\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\). Multiplying Equations (2.4) and (2.6) by \(\overline{\varphi }\) and \(\overline{\psi }\) respectively and integrating over \((0,L)\), we obtain

$$\begin{aligned} \int _{0}^{L}(au_{x}+b(x)v_{x})\overline{\varphi }_{x}dx+\int _{0}^{L}c(x)z \overline{\varphi }dx =&\int _{0}^{L} f_{2}\overline{\varphi } dx, \end{aligned}$$
(2.8)
$$\begin{aligned} \int _{0}^{L}y_{x}\overline{\psi }_{x} dx-\int _{0}^{L} c(x)v \overline{\psi }dx =&\int _{0}^{L} f_{4}\overline{\psi } dx . \end{aligned}$$
(2.9)

Inserting Equations (2.3) and (2.5) into (2.8) and (2.9), we get

$$\begin{aligned} \int _{0}^{L}au_{x}\overline{\varphi }_{x}dx =&\int _{0}^{L} f_{2} \overline{\varphi } dx+\int _{0}^{L} b(x)(f_{1})_{x}\overline{\varphi }_{x}dx+ \int _{0}^{L}c(x)f_{3}\overline{\varphi }dx, \end{aligned}$$
(2.10)
$$\begin{aligned} \int _{0}^{L}y_{x}\overline{\psi }_{x} dx =&\int _{0}^{L} f_{4} \overline{\psi } dx-\int _{0}^{L} c(x)f_{1}\overline{\psi }dx . \end{aligned}$$
(2.11)

Adding Equations (2.10) and (2.11), we obtain

$$ a\left ((u,y),(\varphi ,\psi )\right )=L\left (\varphi ,\psi \right ), \quad \forall \ (\varphi ,\psi )\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L), $$
(2.12)

where

$$ a\left ((u,y),(\varphi ,\psi )\right )=a\int _{0}^{L} u_{x} \overline{\varphi }_{x}dx+\int _{0}^{L} y_{x}\overline{\psi }_{x}dx $$
(2.13)

and

$$\begin{aligned} L(\varphi ,\psi )={}&\int _{0}^{L} f_{2}\overline{\varphi } dx+\int _{0}^{L} b(x)(f_{1})_{x}\overline{\varphi }_{x}dx+\int _{0}^{L}c(x)f_{3} \overline{\varphi }dx+\int _{0}^{L} f_{4}\overline{\psi } dx \\ &-\int _{0}^{L} c(x)f_{1}\overline{\psi }dx. \end{aligned}$$
(2.14)

Thanks to (2.13) and (2.14), \(a\) is a continuous and coercive bilinear form on \(\left ( H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\right )^{2}\), and \(L\) is a continuous linear form on \(H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\). Then, using the Lax-Milgram theorem, we deduce that there exists a unique solution \((u,y)\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\) of the variational problem (2.12). Applying classical elliptic regularity, we deduce that \(U=(u,v,y,z)\in D(\mathcal{A})\) is the unique solution of (2.2). The proof is thus complete. □

From Proposition 2.1, the operator \(\mathcal{A}\) is m-dissipative on ℋ and consequently generates a \(C_{0}\)-semigroup of contractions \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\), by the Lumer-Phillips theorem (see [24] and [29]). Then the solution of the evolution Equation (2.1) admits the following representation:

$$ U(t)=e^{t\mathcal{A}}U_{0},\quad t\geq 0, $$

which leads to the well-posedness of (2.1). Hence, we have the following result.

Theorem 2.2

Let \(U_{0}\in \mathcal{H}\). Then problem (2.1) admits a unique weak solution \(U\) satisfying

$$ U(t)\in C^{0}\left (\mathbb{R}^{+},\mathcal{H}\right ). $$

Moreover, if \(U_{0}\in D(\mathcal{A})\), then problem (2.1) admits a unique strong solution \(U\) satisfying

$$ U(t)\in C^{1}\left (\mathbb{R}^{+},\mathcal{H}\right )\cap C^{0}( \mathbb{R}^{+},D(\mathcal{A})). $$

2.2 Strong Stability

This part is devoted to the proof of the strong stability of the \(C_{0}\)-semigroup \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\).

To obtain strong stability of the \(C_{0}\)-semigroup \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\) we use the theorem of Arendt and Batty in [6] (see Theorem A.11 in the Appendix).

Theorem 2.3

The \(C_{0}-\)semigroup of contractions \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\) is strongly stable in ℋ; i.e. for all \(U_{0}\in \mathcal{H}\), the solution of (2.1) satisfies

$$ \lim _{t\to +\infty }\|e^{t\mathcal{A}}U_{0}\|_{\mathcal{H}}=0. $$

For the proof of Theorem 2.3, note that the condition \((u,v,y,z)\in D(\mathcal{A})\) yields only \(u\in H_{0}^{1}(0,L)\). Therefore, the embedding of \(D(\mathcal{A})\) into ℋ is not compact, and the resolvent \((-\mathcal{A})^{-1}\) of the operator \(\mathcal{A}\) is not compact in general. Then, according to Theorem A.11, we need to prove that the operator \(\mathcal{A}\) has no pure imaginary eigenvalues and that \(\sigma \left (\mathcal{A}\right )\cap i\mathbb{R}\) contains only a countable number of elements of the continuous spectrum of \(\mathcal{A}\). The argument for Theorem 2.3 relies on the subsequent lemmas.

Lemma 2.4

For all \({\lambda }\in \mathbb{R}\), the operator \(i{\lambda }I -\mathcal{A}\) is injective, i.e.

$$ \ker \left (i{\lambda }I-\mathcal{A}\right )=\{0\},\quad \forall { \lambda }\in \mathbb{R}. $$

Proof

From Proposition 2.1, we have \(0\in \rho (\mathcal{A})\). We still need to show the result for \({\lambda }\in \mathbb{R}^{\ast }\). Suppose that there exists a real number \({\lambda }\neq 0\) and \(U=\left (u,v,y,z\right )\in D(\mathcal{A})\), such that

$$ \mathcal{A}U=i{\lambda }U. $$

Equivalently, we have

$$\begin{aligned} v =&i{\lambda }u, \end{aligned}$$
(2.15)
$$\begin{aligned} (au_{x}+b(x)v_{x})_{x}-c(x)z =&i{\lambda }v, \end{aligned}$$
(2.16)
$$\begin{aligned} z =&i{\lambda }y, \end{aligned}$$
(2.17)
$$\begin{aligned} y_{xx}+c(x)v =&i{\lambda }z. \end{aligned}$$
(2.18)

Next, a straightforward computation gives

$$ 0=\Re \left < i{\lambda }U,U\right >_{\mathcal{H}}=\Re \left < \mathcal{A}U,U\right >_{\mathcal{H}}=-\int _{0}^{L} b(x)|v_{x}|^{2}dx=- \int _{\alpha _{1}}^{\alpha _{3}}b_{0}|v_{x}|^{2}dx, $$

consequently, we deduce that

$$ b(x)v_{x}=0\quad \text{in}\quad (0,L)\quad \text{and}\quad v_{x}=0 \quad \text{in} \quad (\alpha _{1},\alpha _{3}). $$
(2.19)

It follows, from Equation (2.15), that

$$ u_{x}=0\quad \text{in}\quad (\alpha _{1},\alpha _{3}). $$
(2.20)

Using Equations (2.16), (2.17), (2.19), (2.20) and the definition of \(c(x)\), we obtain

$$ y_{x}=0\quad \text{in}\quad (\alpha _{2},\alpha _{3}). $$
(2.21)

Substituting Equations (2.15), (2.17) in Equations (2.16), (2.18), and using Equation (2.19) and the definition of \(b(x)\) in (1.5), we get

$$\begin{aligned} {\lambda }^{2}u+au_{xx}-i{\lambda }c(x)y =&0, \hspace{1cm} \text{in}\quad (0,L) \end{aligned}$$
(2.22)
$$\begin{aligned} {\lambda }^{2}y+y_{xx}+i{\lambda }c(x)u =&0, \hspace{1cm} \text{in}\quad (0,L) \end{aligned}$$
(2.23)

with the boundary conditions

$$ u(0)=u(L)=y(0)=y(L)=0. $$
(2.24)

Our goal is to prove that \(u=y=0\) on \((0,L)\). For simplicity, we divide the proof into three steps.

Step 1. The aim of this step is to show that \(u=y=0\) on \((0,\alpha _{3})\). So, using Equation (2.20), we have

$$ u_{x}=0\quad \text{in}\quad (\alpha _{1},\alpha _{2}). $$

Using the above equation and Equation (2.22) and the fact that \(c(x)=0\) on \((\alpha _{1},\alpha _{2})\), we obtain

$$ u=0\quad \text{in}\quad (\alpha _{1},\alpha _{2}). $$
(2.25)

Moreover, the solution \((u,y)\) of system (2.22)-(2.24) belongs to \(C^{1}\left ([0,L]\right )\); hence, using (2.25), we get

$$ u(\alpha _{1})=u_{x}(\alpha _{1})=0. $$
(2.26)

Then, from Equations (2.22) and (2.26) and the fact that \(c(x)=0\) on \((0,\alpha _{1})\), we get

$$ u=0\quad \text{in}\quad (0,\alpha _{1}). $$
(2.27)

Using Equations (2.20) and (2.25) and the fact that \(u\in C^{1}([0,L])\), we get

$$ u=0\quad \text{in}\quad (\alpha _{1},\alpha _{3}). $$
(2.28)

Now, using Equations (2.20), (2.21) and the fact that \(c(x)=c_{0}\) on \((\alpha _{2},\alpha _{3})\) in Equations (2.22), (2.23), we obtain

$$ u=\dfrac{i c_{0}}{{\lambda }}y\quad \text{in}\quad (\alpha _{2}, \alpha _{3}). $$
(2.29)

Using Equation (2.28) in Equation (2.29), we obtain

$$ u=y=0\quad \text{in}\quad (\alpha _{2},\alpha _{3}). $$
(2.30)

Since \(y\in C^{1}([0,L])\), then

$$ y(\alpha _{2})=y_{x}(\alpha _{2})=0. $$
(2.31)

So, from Equations (2.23) and (2.31) and the fact that \(c(x)=0\) on \((\alpha _{1},\alpha _{2})\), we obtain

$$ y=0\quad \text{in}\quad (\alpha _{1},\alpha _{2}). $$
(2.32)

Using the same argument over \((0,\alpha _{1})\), we get

$$ y=0\quad \text{in}\quad (0,\alpha _{1}). $$
(2.33)

Hence, from Equations (2.25), (2.27), (2.28), (2.30), (2.32) and (2.33), we obtain \(u=y=0\) on \((0,\alpha _{3})\). Consequently, we obtain

$$ U=0\quad \text{in}\quad (0,\alpha _{3}). $$

Step 2. The aim of this step is to show that \(u=y=0\) on \((\alpha _{3},\alpha _{4})\). Using Equation (2.30), and the fact that \((u,y)\in C^{1}([0,L])\), we obtain the boundary conditions

$$ u(\alpha _{3})=u_{x}(\alpha _{3})=y(\alpha _{3})=y_{x}(\alpha _{3})=0. $$
(2.34)

Combining Equations (2.22), (2.23), and the fact that \(c(x)=c_{0}\) on \((\alpha _{3},\alpha _{4})\), we get

$$ au_{xxxx}+(a+1){\lambda }^{2} u_{xx}+{\lambda }^{2}\left ({\lambda }^{2}-c_{0}^{2} \right )u=0. $$
(2.35)
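The elimination of \(y\) leading to (2.35) can be checked symbolically. The following Python/SymPy sketch (with \(u\) an unspecified smooth function) is only an illustration of this computation, not part of the proof:

```python
import sympy as sp

x, lam, a, c0 = sp.symbols('x lambda a c_0', nonzero=True)
u = sp.Function('u')

# On (alpha_3, alpha_4) we have c(x) = c_0 and b(x) = 0, so (2.22)-(2.23) read
#   lam^2*u + a*u'' - i*lam*c_0*y = 0   and   lam^2*y + y'' + i*lam*c_0*u = 0.
# Solve the first equation for y and substitute it into the second one.
y = (lam**2*u(x) + a*sp.diff(u(x), x, 2))/(sp.I*lam*c0)
eq = lam**2*y + sp.diff(y, x, 2) + sp.I*lam*c0*u(x)

# Multiplying by i*lam*c_0 must give the fourth-order equation (2.35).
lhs = sp.expand(sp.I*lam*c0*eq)
target = a*sp.diff(u(x), x, 4) + (a + 1)*lam**2*sp.diff(u(x), x, 2) \
         + lam**2*(lam**2 - c0**2)*u(x)
print(sp.simplify(lhs - target))   # expected output: 0
```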

The characteristic equation of system (2.35) is

$$ P(r):= ar^{4}+(a+1){\lambda }^{2} r^{2}+{\lambda }^{2}\left ({ \lambda }^{2}-c_{0}^{2}\right ). $$

Setting

$$ P_{0}(m):=am^{2}+(a+1){\lambda }^{2}m+{\lambda }^{2}\left ({\lambda }^{2}-c_{0}^{2} \right ). $$

The polynomial \(P_{0}\) has two distinct real roots \(m_{1}\) and \(m_{2}\) given by:

$$\begin{aligned} m_{1}&= \frac{-{\lambda }^{2}(a+1)-\sqrt{{\lambda }^{4}(a-1)^{2}+4ac_{0}^{2}{\lambda }^{2}}}{2a} \quad \text{and}\\ m_{2}&= \frac{-{\lambda }^{2}(a+1)+\sqrt{{\lambda }^{4}(a-1)^{2}+4ac_{0}^{2}{\lambda }^{2}}}{2a}. \end{aligned}$$

It is clear that \(m_{1}<0\) and the sign of \(m_{2}\) depends on the value of \({\lambda }\) with respect to \(c_{0}\). We distinguish the following three cases: \({\lambda }^{2}< c_{0}^{2}\), \({\lambda }^{2}=c_{0}^{2}\) and \({\lambda }^{2}>c_{0}^{2}\).

Case 1. If \({\lambda }^{2}< c_{0}^{2}\), then \(m_{2}>0\). Setting

$$ r_{1}=\sqrt{-m_{1}}\quad \text{and}\quad r_{2}=\sqrt{m_{2}}. $$

Then \(P\) has four simple roots \(ir_{1}\), \(-ir_{1}\), \(r_{2}\) and \(-r_{2}\), and hence the general solution of system (2.22), (2.23), is given by

$$ \left \{ \textstyle\begin{array}{l@{\quad}l@{\quad}l} u(x)&=&\displaystyle {c_{1}\sin (r_{1}x)+c_{2}\cos (r_{1}x)+c_{3} \cosh (r_{2}x)+c_{4}\sinh (r_{2}x)}, \\ y(x)&=&\displaystyle \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\left (c_{1}\sin (r_{1}x)+c_{2} \cos (r_{1}x)\right )\\ &&\displaystyle + \frac{({\lambda }^{2}+ar_{2}^{2})}{i{\lambda }c_{0}}\left (c_{3} \cosh (r_{2}x)+c_{4}\sinh (r_{2}x)\right ), \end{array}\displaystyle \right . $$

where \(c_{j}\in \mathbb{C}\), \(j=1,\ldots ,4\). In this case, the boundary condition in Equation (2.34), can be expressed by

$$ M_{1} \begin{pmatrix} c_{1} \\ c_{2} \\ c_{3} \\ c_{4} \end{pmatrix} =0, $$

where

$$ M_{1}= \begin{pmatrix} \sin (r_{1}\alpha _{3})&\cos (r_{1}\alpha _{3})&\cosh (r_{2}\alpha _{3})&\sinh (r_{2}\alpha _{3}) \\ r_{1}\cos (r_{1}\alpha _{3})&-r_{1}\sin (r_{1}\alpha _{3})&r_{2}\sinh (r_{2}\alpha _{3})&r_{2}\cosh (r_{2}\alpha _{3}) \\ \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\sin (r_{1}\alpha _{3})&\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\cos (r_{1}\alpha _{3})&\frac{({\lambda }^{2}+ar_{2}^{2})}{i{\lambda }c_{0}}\cosh (r_{2}\alpha _{3})&\frac{({\lambda }^{2}+ar_{2}^{2})}{i{\lambda }c_{0}}\sinh (r_{2}\alpha _{3}) \\ \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1}\cos (r_{1}\alpha _{3})&-\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1}\sin (r_{1}\alpha _{3})&\frac{({\lambda }^{2}+ar_{2}^{2})}{i{\lambda }c_{0}}r_{2}\sinh (r_{2}\alpha _{3})&\frac{({\lambda }^{2}+ar_{2}^{2})}{i{\lambda }c_{0}}r_{2}\cosh (r_{2}\alpha _{3}) \end{pmatrix} . $$

The determinant of \(M_{1}\) is given by

$$ \det (M_{1})= \frac{r_{1}r_{2}a^{2}\left (r_{1}^{2}+r_{2}^{2}\right )^{2}}{{\lambda }^{2}c_{0}^{2}}. $$

System (2.22), (2.23) with the boundary conditions (2.34), admits only a trivial solution \(u=y=0\) if and only if \(\det (M_{1})\neq 0\), i.e. \(M_{1}\) is invertible. Since, \(r_{1}^{2}+r_{2}^{2}=m_{2}-m_{1}\neq 0\), then \(\det (M_{1})\neq 0\). Consequently, if \({\lambda }^{2}< c_{0}^{2}\), we obtain \(u=y=0\) on \((\alpha _{3},\alpha _{4})\).

Case 2. If \({\lambda }^{2}=c_{0}^{2}\), then \(m_{2}=0\). Setting

$$ r_{1}=\sqrt{-m_{1}}=\sqrt{\frac{(a+1)c_{0}^{2}}{a}}. $$

Then \(P\) has two simple roots \(ir_{1}\), \(-ir_{1}\) and 0 is a double root. Hence the general solution of System (2.22), (2.23) is given by

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} u(x)&=&c_{1}\sin (r_{1} x)+c_{2}\cos (r_{1}x)+c_{3}x+c_{4}, \\ y(x)&=&\displaystyle { \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\left (c_{1}\sin (r_{1}x)+c_{2} \cos (r_{1}x)\right )+\frac{{\lambda }}{ic_{0}}\left (c_{3}x+c_{4} \right )}, \end{array}\displaystyle \right . $$

where \(c_{j}\in \mathbb{C}\), for \(j=1,\ldots ,4\). Also, the boundary condition in Equation (2.34), can be expressed by

$$ M_{2} \begin{pmatrix} c_{1} \\ c_{2} \\ c_{3} \\ c_{4} \end{pmatrix} =0, $$

where

$$ M_{2}= \begin{pmatrix} \sin (r_{1}\alpha _{3})&\cos (r_{1}\alpha _{3})&\alpha _{3} &1 \\ r_{1}\cos (r_{1}\alpha _{3})&-r_{1}\sin (r_{1}\alpha _{3})&1&0 \\ \displaystyle {\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}} \sin (r_{1}\alpha _{3})}&\displaystyle { \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\cos (r_{1} \alpha _{3})}&\displaystyle {\frac{{\lambda }\alpha _{3}}{ic_{0}}}& \displaystyle {\frac{{\lambda }}{ic_{0}}} \\ \displaystyle {\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1} \cos (r_{1}\alpha _{3})}&\displaystyle {- \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1}\sin (r_{1} \alpha _{3})}&\displaystyle {\frac{{\lambda }}{ic_{0}}}&0 \end{pmatrix} . $$

The determinant of \(M_{2}\) is given by

$$ \det (M_{2})=\frac{-a^{2}r_{1}^{5}}{{\lambda }^{2}c_{0}^{2}}. $$

Since \(r_{1}=\sqrt{-m_{1}}\neq 0\), then \(\det (M_{2})\neq 0\). Thus, System (2.22), (2.23) with the boundary conditions (2.34), admits only a trivial solution \(u=y=0\) on \((\alpha _{3},\alpha _{4})\).

Case 3. If \({\lambda }^{2}>c_{0}^{2}\), then \(m_{2}<0\). Setting

$$ r_{1}=\sqrt{-m_{1}}\quad \text{and}\quad r_{2}=\sqrt{-m_{2}}. $$

Then \(P\) has four simple roots \(ir_{1}\), \(-ir_{1}\), \(ir_{2}\) and \(-ir_{2}\), and hence the general solution of System (2.22), (2.23) is given by

$$ \left \{ \textstyle\begin{array}{l@{\ \ }l@{\ \ }l} u(x)&=&\displaystyle {c_{1}\sin (r_{1}x)+c_{2}\cos (r_{1}x)+c_{3}\sin (r_{2}x)+c_{4} \cos (r_{2}x)}, \\ y(x)&=&\displaystyle { \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\left (c_{1}\sin (r_{1}x)+c_{2} \cos (r_{1}x)\right )+ \frac{({\lambda }^{2} -ar_{2}^{2})}{i{\lambda }c_{0}}\left (c_{3}\sin (r_{2}x)+c_{4} \cos (r_{2}x)\right )}, \end{array}\displaystyle \right . $$

where \(c_{j}\in \mathbb{C}\), for \(j=1,\ldots ,4\). Also, the boundary condition in Equation (2.34), can be expressed by

$$ M_{3} \begin{pmatrix} c_{1} \\ c_{2} \\ c_{3} \\ c_{4} \end{pmatrix} =0, $$

where

$$ M_{3}= \begin{pmatrix} \sin (r_{1}\alpha _{3})&\cos (r_{1}\alpha _{3})&\sin (r_{2}\alpha _{3})&\cos (r_{2}\alpha _{3}) \\ r_{1}\cos (r_{1}\alpha _{3})&-r_{1}\sin (r_{1}\alpha _{3})&r_{2}\cos (r_{2}\alpha _{3})&-r_{2}\sin (r_{2}\alpha _{3}) \\ \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\sin (r_{1}\alpha _{3})&\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}\cos (r_{1}\alpha _{3})&\frac{({\lambda }^{2}-ar_{2}^{2})}{i{\lambda }c_{0}}\sin (r_{2}\alpha _{3})&\frac{({\lambda }^{2}-ar_{2}^{2})}{i{\lambda }c_{0}}\cos (r_{2}\alpha _{3}) \\ \frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1}\cos (r_{1}\alpha _{3})&-\frac{({\lambda }^{2}-ar_{1}^{2})}{i{\lambda }c_{0}}r_{1}\sin (r_{1}\alpha _{3})&\frac{({\lambda }^{2}-ar_{2}^{2})}{i{\lambda }c_{0}}r_{2}\cos (r_{2}\alpha _{3})&-\frac{({\lambda }^{2}-ar_{2}^{2})}{i{\lambda }c_{0}}r_{2}\sin (r_{2}\alpha _{3}) \end{pmatrix} . $$

The determinant of \(M_{3}\) is given by

$$ \det (M_{3})=- \frac{r_{1}r_{2}a^{2}(r_{1}^{2}-r_{2}^{2})^{2}}{{\lambda }^{2}c_{0}^{2}}. $$

Since \(r_{1}^{2}-r_{2}^{2}=m_{2}-m_{1}\neq 0\), then \(\det (M_{3})\neq 0\). Thus, System (2.22)-(2.23) with the boundary condition (2.34), admits only a trivial solution \(u=y=0\) on \((\alpha _{3},\alpha _{4})\). Consequently, we obtain \(U=0\) on \((\alpha _{3},\alpha _{4})\).
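All three determinants can be obtained at once from the block structure of these matrices: each of \(M_{1}\), \(M_{2}\) and \(M_{3}\) has the form \(\left (\begin{smallmatrix} A&B \\ \beta _{1}A&\beta _{2}B\end{smallmatrix}\right )\) with \(2\times 2\) blocks, so that its determinant equals \((\beta _{2}-\beta _{1})^{2}\det A\det B\). The formulas can also be checked symbolically; the following Python/SymPy sketch (with \({\lambda }\), \(a\), \(c_{0}\), \(\alpha _{3}\), \(r_{1}\), \(r_{2}\) treated as independent symbols and replaced by arbitrary test values) is only an illustrative verification, not part of the proof:

```python
import sympy as sp

x, lam, a, c0, al3, r1, r2 = sp.symbols('x lambda a c_0 alpha_3 r_1 r_2')

# y-coefficients attached to each family of u-modes in the general solutions above
b1  = (lam**2 - a*r1**2)/(sp.I*lam*c0)   # sin(r1 x), cos(r1 x) modes (all three cases)
b2p = (lam**2 + a*r2**2)/(sp.I*lam*c0)   # cosh(r2 x), sinh(r2 x) modes (Case 1)
b2m = (lam**2 - a*r2**2)/(sp.I*lam*c0)   # sin(r2 x), cos(r2 x) modes (Case 3)
g   = lam/(sp.I*c0)                      # x, 1 modes (Case 2)

cases = {
    'M1': ([sp.sin(r1*x), sp.cos(r1*x), sp.cosh(r2*x), sp.sinh(r2*x)],
           [b1, b1, b2p, b2p],  r1*r2*a**2*(r1**2 + r2**2)**2/(lam**2*c0**2)),
    'M2': ([sp.sin(r1*x), sp.cos(r1*x), x, sp.Integer(1)],
           [b1, b1, g, g],      -a**2*r1**5/(lam**2*c0**2)),
    'M3': ([sp.sin(r1*x), sp.cos(r1*x), sp.sin(r2*x), sp.cos(r2*x)],
           [b1, b1, b2m, b2m],  -r1*r2*a**2*(r1**2 - r2**2)**2/(lam**2*c0**2)),
}

vals = {lam: 1.3, a: 2.1, c0: 0.7, al3: 0.6, r1: 1.9, r2: 0.8}  # arbitrary test values
for name, (modes, coeffs, expected) in cases.items():
    # column j of the matrix: (u_j, u_j', y_j, y_j') at x = alpha_3, where y_j = coeff_j*u_j
    cols = [[f.subs(x, al3) for f in (u, sp.diff(u, x), k*u, sp.diff(k*u, x))]
            for u, k in zip(modes, coeffs)]
    M = sp.Matrix(cols).T
    print(name, (M.det() - expected).subs(vals).evalf(chop=True))  # 0 (up to rounding)
```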

Step 3. The aim of this step is to show that \(u=y=0\) on \((\alpha _{4},L)\). From Equations (2.22), (2.23) and the fact that \(c(x)=0\) on \((\alpha _{4},L)\), we obtain the following system

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} {\lambda }^{2}u+au_{xx}&=&0\quad \text{over}\quad (\alpha _{4},L) \\ {\lambda }^{2}y+y_{xx}&=&0\quad \text{over}\quad (\alpha _{4},L). \end{array}\displaystyle \right . $$
(2.36)

Since \((u,y)\in C^{1}([0,L])\) and the fact that \(u=y=0\) on \((\alpha _{3},\alpha _{4})\), we get

$$ u(\alpha _{4})=u_{x}(\alpha _{4})=y(\alpha _{4})=y_{x}(\alpha _{4})=0. $$
(2.37)

Finally, it is easy to see that System (2.36) admits only a trivial solution on \((\alpha _{4},L)\) under the boundary condition (2.37).

Consequently, we proved that \(U=0\) on \((0,L)\). The proof is thus complete. □

Lemma 2.5

For all \({\lambda }\in \mathbb{R}\), we have

$$ R(i{\lambda }I-\mathcal{A})=\mathcal{H}. $$

Proof

From Proposition 2.1, we have \(0\in \rho (\mathcal{A})\). We still need to show the result for \({\lambda }\in \mathbb{R}^{\ast }\). Let \(F = (f_{1}, f_{2}, f_{3}, f_{4})\in \mathcal{H}\); we look for \(U = (u, v, y, z)\in D(\mathcal{A})\), solution of

$$ (i{\lambda }I-\mathcal{A})U=F. $$
(2.38)

Equivalently, we have

$$\begin{aligned} v =&i{\lambda }u-f_{1}, \end{aligned}$$
(2.39)
$$\begin{aligned} i{\lambda }v-(au_{x}+b(x)v_{x})_{x}+c(x)z =&f_{2}, \end{aligned}$$
(2.40)
$$\begin{aligned} z =&i{\lambda }y-f_{3}, \end{aligned}$$
(2.41)
$$\begin{aligned} i{\lambda }z-y_{xx}-c(x)v =&f_{4}. \end{aligned}$$
(2.42)

Let \(\left (\varphi ,\psi \right )\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\). Multiplying Equations (2.40) and (2.42) by \(\bar{\varphi }\) and \(\bar{\psi }\) respectively and integrating over \((0,L)\), we obtain

$$\begin{aligned} \int _{0}^{L}i{\lambda }v\bar{\varphi }dx+\int _{0}^{L}au_{x} \bar{\varphi }_{x}dx+\int _{0}^{L}b(x)v_{x}\bar{\varphi }_{x}dx+\int _{0}^{L}c(x)z \bar{\varphi }dx =&\int _{0}^{L}f_{2}\bar{\varphi }dx, \end{aligned}$$
(2.43)
$$\begin{aligned} \int _{0}^{L}i{\lambda }z\bar{\psi }dx+\int _{0}^{L}y_{x}\bar{\psi }_{x}dx- \int _{0}^{L}c(x)v\bar{\psi }dx =&\int _{0}^{L}f_{4}\bar{\psi }dx. \end{aligned}$$
(2.44)

Substituting \(v\) and \(z\) by \(i{\lambda }u-f_{1}\) and \(i{\lambda }y-f_{3}\) respectively in Equations (2.43)-(2.44) and taking the sum, we obtain

$$ a\left ((u,y),(\varphi ,\psi )\right )=\mathrm{L}(\varphi ,\psi ), \qquad \forall (\varphi ,\psi )\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L), $$
(2.45)

where

$$ a\left ((u,y),(\varphi ,\psi )\right )=a_{1}\left ((u,y),(\varphi , \psi )\right )+a_{2}\left ((u,y),(\varphi ,\psi )\right ) $$

with

$$ \left \{ \textstyle\begin{array}{l} a_{1}\left ((u,y),(\varphi ,\psi )\right )=\displaystyle {\int _{0}^{L} \left ( au_{x}\bar{\varphi }_{x}+ y_{x}\bar{\psi }_{x}\right )dx+i{ \lambda }\int _{0}^{L}b(x)u_{x}\bar{\varphi }_{x}dx}, \\ a_{2}\left ((u,y),(\varphi ,\psi )\right )=\displaystyle {-{\lambda }^{2} \int _{0}^{L}\left (u\bar{\varphi }+y\bar{\psi }\right )dx+i{\lambda } \,\int _{0}^{L} c(x)\left (y\bar{\varphi }-u\bar{\psi }\right )dx}, \end{array}\displaystyle \right . $$

and

$$\begin{aligned} \mathrm{L}(\varphi ,\psi )={}&\displaystyle \int _{0}^{L}\left (f_{2}+c(x)f_{3}+i{ \lambda }f_{1}\right )\bar{\varphi }dx+\int _{0}^{L}\left (f_{4}-c(x)f_{1}+i{ \lambda }f_{3}\right )\bar{\psi } dx\\ &+\int _{0}^{L}b(x)\left (f_{1} \right )_{x}\bar{\varphi }_{x}dx. \end{aligned}$$

Let \(V=H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\) and \(V'=H^{-1}(0,L)\times H^{-1}(0,L)\) the dual space of \(V\). Let us consider the following operators,

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l@{\quad }l} \mathrm{A}:&V&\rightarrow & V' \\ &(u,y)&\rightarrow &\mathrm{A}(u,y) \end{array}\displaystyle \right . \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l@{\quad }l} \mathrm{A_{1}}:&V&\rightarrow & V' \\ &(u,y)&\rightarrow &\mathrm{A_{1}}(u,y) \end{array}\displaystyle \right . \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l@{\quad }l} \mathrm{A_{2}}:&V&\rightarrow & V' \\ &(u,y)&\rightarrow &\mathrm{A_{2}}(u,y) \end{array}\displaystyle \right . $$

such that

$$ \left \{ \textstyle\begin{array}{l@{\quad }l} \displaystyle {\left (\mathrm{A}(u,y)\right )(\varphi ,\psi )=a\left ((u,y),( \varphi ,\psi )\right )},&\forall (\varphi ,\psi )\in H_{0}^{1}(0,L) \times H_{0}^{1}(0,L), \\ \displaystyle {\left (\mathrm{A_{1}}(u,y)\right )(\varphi ,\psi )=a_{1} \left ((u,y),(\varphi ,\psi )\right )},&\forall (\varphi ,\psi )\in H_{0}^{1}(0,L) \times H_{0}^{1}(0,L), \\ \displaystyle {\left (\mathrm{A_{2}}(u,y)\right )(\varphi ,\psi )=a_{2} \left ((u,y),(\varphi ,\psi )\right )},&\forall (\varphi ,\psi )\in H_{0}^{1}(0,L) \times H_{0}^{1}(0,L). \end{array}\displaystyle \right . $$
(2.46)

Our goal is to prove that \(\mathrm{A}\) is an isomorphism. For this aim, we divide the proof into three steps.

Step 1. In this step, we prove that the operator \(\mathrm{A_{1}}\) is an isomorphism. For this goal, following the second equation of (2.46), we can easily verify that \(a_{1}\) is a continuous and coercive bilinear form on \(H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\). Then, by the Lax-Milgram Lemma, the operator \(\mathrm{A_{1}}\) is an isomorphism.

Step 2. In this step, we prove that the operator \(\mathrm{A_{2}}\) is compact. According to the third equation of (2.46), we have

$$ \lvert a_{2}\left ((u,y),(\varphi ,\psi )\right )\rvert \leq C\|(u,y) \|_{L^{2}(0,L)}\|(\varphi ,\psi )\|_{L^{2}(0,L)}. $$

Finally, using the compact embedding from \(H_{0}^{1}(0,L)\) into \(L^{2}(0,L)\) and the continuous embedding from \(L^{2}(0,L)\) into \(H^{-1}(0,L)\), we deduce that \(\mathrm{A_{2}}\) is compact.

From Steps 1 and 2, we get that the operator \({\mathrm{A}=\mathrm{A}_{1}+\mathrm{A}_{2}}\) is a Fredholm operator of index zero. Consequently, by the Fredholm alternative, to prove that the operator \(\mathrm{A}\) is an isomorphism, it is enough to prove that \(\mathrm{A}\) is injective, i.e. \(\ker \left \{ \mathrm{A}\right \} =\left \{ 0\right \} \).

Step 3. In this step, we prove that \(\ker \{\mathrm{A}\}=\{0\}\). For this aim, let \(\left (\tilde{u},\tilde{y}\right )\in \ker \{\mathrm{A}\}\), i.e.

$$ a\left ((\tilde{u},\tilde{y}),(\varphi ,\psi )\right )=0,\quad \forall \left (\varphi ,\psi \right )\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L). $$

Equivalently, we have

$$ \begin{aligned} -{\lambda }^{2}\int _{0}^{L}\left (\tilde{u} \bar{\varphi }+\tilde{y}\bar{\psi }\right )dx+i{\lambda }\, \int _{0}^{L} c(x)\left (\tilde{y}\bar{\varphi }-\tilde{u}\bar{\psi }\right )dx+\int _{0}^{L} \left ( a\tilde{u}_{x}\bar{\varphi }_{x} + \tilde{y}_{x}\bar{\psi }_{x} \right )dx \\ +i{\lambda }\int _{0}^{L}b(x)\tilde{u}_{x}\bar{\varphi }_{x}dx=0. \end{aligned} $$
(2.47)

Taking \(\varphi =\tilde{u}\) and \(\psi =\tilde{y}\) in equation (2.47), we get

$$ \begin{aligned} -{\lambda }^{2}\int _{0}^{L}\lvert \tilde{u}\rvert ^{2}dx-{\lambda }^{2} \int _{0}^{L}\lvert \tilde{y}\rvert ^{2}dx+a\int _{0}^{L}\lvert \tilde{u}_{x}\rvert ^{2}dx+\int _{0}^{L}\lvert \tilde{y}_{x}\rvert ^{2}dx-2{ \lambda }\Im \left (\int _{0}^{L}c(x)\tilde{y}\bar{\tilde{u}}dx \right )\\ +i{\lambda }\int _{0}^{L}b(x)\lvert \tilde{u}_{x}\rvert ^{2} dx=0. \end{aligned} $$

Taking the imaginary part of the above equality, we get

$$ 0=\int _{0}^{L}b(x)\lvert \tilde{u}_{x}\rvert ^{2} dx, $$

which implies that

$$ \tilde{u}_{x}=0,\qquad \quad \text{in}\quad \left (\alpha _{1},\alpha _{3} \right ). $$
(2.48)

Then, we find that

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} -{\lambda }^{2}\tilde{u}-a\tilde{u}_{xx}+i{\lambda }c(x)\tilde{y}&=&0 \hspace{1cm} \text{in}\quad (0,L), \\ -{\lambda }^{2}\tilde{y}-\tilde{y}_{xx}-i{\lambda }c(x)\tilde{u}&=&0 \hspace{1cm} \text{in}\quad (0,L), \\ \tilde{u}_{x}=\tilde{y}_{x}&=&0 \hspace{1cm} \text{in}\quad (\alpha _{2},\alpha _{3}). \end{array}\displaystyle \right .$$

Therefore, the vector \(\tilde{U}\) defined by

$$ \tilde{U}=\left (\tilde{u},i{\lambda }\tilde{u},\tilde{y},i{\lambda } \tilde{y}\right ) $$

belongs to \(D(\mathcal{A})\) and we have

$$ i{\lambda }\tilde{U}-\mathcal{A}\tilde{U}=0. $$

Hence \(\tilde{U}\in \ker \left (i{\lambda }I-\mathcal{A}\right )\); then, by Lemma 2.4, we get \(\tilde{U}=0\), which implies that \(\tilde{u}=\tilde{y}=0\). Consequently, \(\ker \left \{ \mathrm{A}\right \} =\left \{ 0\right \} \).

Therefore, from Step 3 and the Fredholm alternative, we get that the operator \({\mathrm {A}}\) is an isomorphism. It is easy to see that \(\mathrm{L}\) is a continuous linear form on \(V\). Consequently, Equation (2.45) admits a unique solution \((u,y)\in H_{0}^{1}(0,L)\times H_{0}^{1}(0,L)\). Thus, using \(v=i{\lambda }u-f_{1}\), \(z=i{\lambda }y-f_{3}\) and classical regularity arguments, we conclude that Equation (2.38) admits a unique solution \(U\in D\left (\mathcal{A}\right )\). The proof is thus complete. □

Proof of Theorem 2.3

Using Lemma 2.4, the operator \(\mathcal{A}\) has no pure imaginary eigenvalues. According to Lemmas 2.4 and 2.5 and with the help of the closed graph theorem of Banach, we deduce that \(\sigma (\mathcal{A})\cap i\mathbb{R}=\emptyset \). Thus, we get the conclusion by applying Theorem A.11 of Arendt and Batty (see the Appendix). The proof of the theorem is thus complete. □

Remark 2.6

The case when \(supp(b)\cap supp(c)=\emptyset\) remains an open problem.

3 Lack of the Exponential Stability

In this section, our goal is to show that System (1.2)-(1.6) is not exponentially stable.

3.1 Lack of Exponential Stability with Global Kelvin-Voigt Damping

In this part, assume that

$$ b(x)=b_{0}>0\quad \text{and}\quad c(x)=c_{0}\in \mathbb{R}^{\ast},\quad \forall \ x\in (0,L). $$
(3.1)

We introduce the following theorem.

Theorem 3.1

Under hypothesis (3.1), for \(\varepsilon >0\) small enough, we cannot expect the energy decay rate \(\frac{1}{t^{\frac{2}{2-\varepsilon }}}\) for all initial data \(U_{0}\in D(\mathcal{A})\) and for all \(t>0\).

Proof

Following Huang and Prüss [16, 31] (see also Theorem A.12 in the Appendix), it is sufficient to show the existence of a real sequence \((\lambda _{n})_{n}\) with \(\lambda _{n}\rightarrow +\infty \) and of sequences \((U_{n})_{n}\subset D(\mathcal{A})\) and \((F_{n})_{n} \subset \mathcal{H}\) such that \(\left (i\lambda _{n}I-\mathcal{A}\right )U_{n}=F_{n}\) is bounded in ℋ and \(\lambda _{n}^{-2+\varepsilon }\|U_{n}\|_{\mathcal{H}}\rightarrow +\infty \). For this aim, take

$$\begin{aligned} F_{n}&=\left (0,0,0,\sin \left (\frac{n\pi x}{L}\right )\right ),\\ U_{n}&=\left (A_{n}\sin \left (\frac{n\pi x}{L}\right ), i\lambda _{n} A_{n} \sin \left (\frac{n\pi x}{L}\right ), B_{n}\sin \left ( \frac{n\pi x}{L}\right ),i{\lambda }_{n}B_{n}\sin \left ( \frac{n\pi x}{L}\right )\right ), \end{aligned}$$

where

$$ \lambda _{n}=\frac{n\pi }{L},\quad A_{n}=\frac{iL}{c_{0}n\pi },\quad B_{n}=- \frac{inb_{0}\pi }{c_{0}^{2}L}-\frac{a-1}{c_{0}^{2}}. $$

Clearly, \(U_{n}\in D(\mathcal{A})\) and \(F_{n}\) is bounded in ℋ. Let us show that \((i{\lambda }_{n} I-\mathcal{A})U_{n}=F_{n}\). Computing \((i{\lambda }_{n} I-\mathcal{A})U_{n}\), we get

$$ (i{\lambda }_{n} I-\mathcal{A})U_{n}=\left (0,D_{1,n}\sin \left ( \frac{n\pi x}{L}\right ),0,D_{2,n}\sin \left (\frac{n\pi x}{L}\right ) \right ), $$

where

$$\begin{aligned} \begin{aligned} D_{1,n}&= \frac{-\left (L^{2}\lambda _{n}^{2}-an^{2}\pi ^{2}-i\pi ^{2}b_{0}\lambda _{n} n^{2}\right )A_{n}}{L^{2}}+iB_{n}c_{0} \lambda _{n},\quad \text{and}\\ D_{2,n}&=-iA_{n} c_{0}\lambda _{n}+ \frac{B_{n}\left (\pi ^{2}n^{2}-L^{2}\lambda _{n}^{2}\right )}{L^{2}}. \end{aligned} \end{aligned}$$
(3.2)

Inserting \({\lambda }_{n},A_{n},B_{n}\) in \(D_{1,n}\) and \(D_{2,n}\), we get \(D_{1,n}=0\) and \(D_{2,n}=1\). Hence we obtain

$$ \left (i{\lambda }_{n}I-\mathcal{A}\right )U_{n}=\left (0,0,0,\sin \left (\frac{n\pi x}{L}\right )\right )=F_{n}. $$
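This identity (i.e. \(D_{1,n}=0\) and \(D_{2,n}=1\)) can be checked symbolically; the following Python/SymPy sketch is only an illustration of the computation, not part of the proof:

```python
import sympy as sp

x, n, L, a, b0, c0 = sp.symbols('x n L a b_0 c_0', positive=True)

lam = n*sp.pi/L
A_n = sp.I*L/(c0*n*sp.pi)
B_n = -sp.I*n*b0*sp.pi/(c0**2*L) - (a - 1)/c0**2
phi = sp.sin(n*sp.pi*x/L)

# U_n = (u, v, y, z) with v = i*lam_n*u and z = i*lam_n*y
u, v = A_n*phi, sp.I*lam*A_n*phi
y, z = B_n*phi, sp.I*lam*B_n*phi

# components of (i*lam_n*I - A)U_n, with b(x) = b_0 and c(x) = c_0 on (0, L)
res = (sp.I*lam*u - v,
       sp.I*lam*v - sp.diff(a*sp.diff(u, x) + b0*sp.diff(v, x), x) + c0*z,
       sp.I*lam*y - z,
       sp.I*lam*z - sp.diff(y, x, 2) - c0*v)

print([sp.simplify(r) for r in res])
# expected output: [0, 0, 0, sin(pi*n*x/L)], i.e. (i*lam_n*I - A)U_n = F_n
```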

Now, we have

$$ \|U_{n}\|_{\mathcal{H}}^{2}\geq \int _{0}^{L}\left |i{\lambda }_{n}B_{n} \sin \left (\frac{n\pi x}{L}\right )\right |^{2}dx= \frac{L{\lambda }_{n}^{2}}{2}\lvert B_{n}\rvert ^{2}\sim {\lambda }_{n}^{4}. $$

Therefore, for \(\varepsilon >0\) small enough, we have

$$ {\lambda }_{n}^{-2+\varepsilon }\|U_{n}\|_{\mathcal{H}}\sim {\lambda }_{n}^{ \varepsilon }\rightarrow +\infty . $$

Then, we cannot expect the energy decay rate \(\frac{1}{t^{\frac{2}{2-\varepsilon }}}\). □

3.2 Lack of Exponential Stability with Local Kelvin-Voigt Damping

In this part, under the equal speed wave propagation condition (i.e. \(a=1\)), we use the classical method developed by Littman and Markus in [18] (see also [12]) to show that System (1.2)-(1.6) with local Kelvin-Voigt damping and global coupling is not exponentially stable. For this aim, assume that

$$ a=1,\quad b(x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} 0&\text{if}&0< x\leq \frac{1}{2}, \\ 1&\text{if}&\frac{1}{2}< x\leq 1. \end{array}\displaystyle \right .,\quad \text{and}\quad c(x)=c\in \mathbb{R}. $$
(3.3)

Our main result in this part is the following theorem.

Theorem 3.2

Under condition (3.3), the semigroup of contractions \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\) generated by the operator \(\mathcal{A}\) is not exponentially stable in the energy space ℋ.

For the proof of Theorem 3.2, we recall the following definitions: the growth bound \(\omega _{0}\left (\mathcal{A}\right )\) and the spectral bound \(s\left (\mathcal{A}\right )\) of \(\mathcal{A}\) are defined respectively as

$$ \omega _{0}\left (\mathcal{A}\right )= \inf \left \{ \omega \in \mathbb{R}:\ \text{ there exists a constant } M_{\omega }\text{ such that } \forall \ t \geq 0,\ \left \| e^{t\mathcal{A}}\right \| _{\mathcal{L}( \mathcal{H})}\leq M_{\omega }e^{\omega t}\right \} $$

and

$$ s\left (\mathcal{A}\right )=\sup \left \{ \Re \left (\lambda \right ): \ \lambda \in \sigma \left (\mathcal{A}\right ) \right \} . $$

Then, according to Theorem 2.1.6 and Lemma 2.1.11 in [12], one has that

$$ s\left (\mathcal{A}\right )\leq \omega _{0}\left (\mathcal{A} \right ). $$

By the previous inequality and since \(\left (e^{t\mathcal{A}}\right )_{t\geq 0}\) is a semigroup of contractions, one clearly has \(s\left (\mathcal{A}\right )\leq \omega _{0}\left (\mathcal{A}\right )\leq 0\), and the theorem follows once we show that \(\omega _{0}\left (\mathcal{A}\right )=0\). It is therefore enough to prove that \(s\left (\mathcal{A}\right )=0\), i.e. to show the existence of a sequence of eigenvalues of \(\mathcal{A}\) whose real parts tend to zero.

Since \(\mathcal{A}\) is dissipative, we fix \(\alpha _{0}>0\) small enough and we study the asymptotic behavior of the eigenvalues \(\lambda \) of \(\mathcal{A}\) in the strip

$$ S=\left \{ \lambda \in \mathbb{C}:-\alpha _{0}\leq \text{Re}(\lambda ) \leq 0\right \} . $$

First, we determine the characteristic equation satisfied by the eigenvalues of \(\mathcal{A}\). For this aim, let \(\lambda \in \mathbb{C}^{\ast }\) be an eigenvalue of \(\mathcal{A}\) and let \(U=(u,{\lambda }u,y,{\lambda }y)\in D(\mathcal{A})\) be an associated eigenvector. Then, the eigenvalue problem is given by

$$\begin{aligned} {\lambda }^{2} u-\left (\left (1+\lambda b(x)\right )u_{x}\right )_{x}+c{\lambda }y =&0, \quad x\in (0,1), \end{aligned}$$
(3.4)
$$\begin{aligned} {\lambda }^{2} y-y_{xx}-c{\lambda }u =&0,\quad x\in (0,1) , \end{aligned}$$
(3.5)

with the boundary conditions

$$ u(0)=u(1)=y(0)=y(1)=0. $$

We define

$$ \left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} u^{-}(x):=u(x),&y^{-}(x):=y(x)&x\in (0,\frac{1}{2}), \\ u^{+}(x):=u(x),&y^{+}(x):=y(x)&x\in [\frac{1}{2},1). \end{array}\displaystyle \right . $$

Then, system (3.4)-(3.5) becomes

$$\begin{aligned} {\lambda }^{2}u^{-}-u^{-}_{xx}+c{\lambda }y^{-} =&0,\quad x\in (0,1/2), \end{aligned}$$
(3.6)
$$\begin{aligned} {\lambda }^{2}y^{-}-y^{-}_{xx}-c{\lambda }u^{-} =&0,\quad x\in (0,1/2) , \end{aligned}$$
(3.7)
$$\begin{aligned} {\lambda }^{2}u^{+}-(1+{\lambda })u^{+}_{xx}+c{\lambda }y^{+} =&0, \quad x\in [1/2,1) , \end{aligned}$$
(3.8)
$$\begin{aligned} {\lambda }^{2}y^{+}-y_{xx}^{+}-c{\lambda }u^{+} =&0,\quad x\in [1/2,1), \end{aligned}$$
(3.9)

with the boundary conditions

$$\begin{aligned} u^{-}(0)=y^{-}(0)=0, \end{aligned}$$
(3.10)
$$\begin{aligned} u^{+}(1)=y^{+}(1)=0, \end{aligned}$$
(3.11)

and the continuity conditions

$$\begin{aligned} u^{-}(1/2)=u^{+}(1/2), \end{aligned}$$
(3.12)
$$\begin{aligned} u_{x}^{-}(1/2)=(1+{\lambda })u_{x}^{+}(1/2), \end{aligned}$$
(3.13)
$$\begin{aligned} y^{-}(1/2)=y^{+}(1/2), \end{aligned}$$
(3.14)
$$\begin{aligned} y_{x}^{-}(1/2)=y_{x}^{+}(1/2). \end{aligned}$$
(3.15)

Here and below, when \(z\) is a non-zero, non-real complex number, we denote by \(\sqrt{z}\) the square root of \(z\) with positive real part, namely

$$ \sqrt{z}=\sqrt{\frac{\lvert z\rvert +\Re (z)}{2}}+i\ \text{sign}(\Im (z)) \sqrt{\frac{\lvert z\rvert -\Re (z)}{2}}. $$
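For non-real \(z\), this is exactly the principal square root (positive real part, imaginary part carrying the sign of \(\Im (z)\)). As a purely illustrative sketch, the formula can be implemented and compared with the built-in principal square root:

```python
import cmath
import math

def sqrt_branch(z: complex) -> complex:
    """Square root of a non-zero, non-real z following the formula above:
    positive real part, imaginary part with the sign of Im(z)."""
    r = abs(z)
    re = math.sqrt((r + z.real)/2.0)
    im = math.copysign(1.0, z.imag)*math.sqrt((r - z.real)/2.0)
    return complex(re, im)

for z in (1 + 2j, -3 + 0.5j, 2 - 1j, -1 - 4j):
    w = sqrt_branch(z)
    assert abs(w*w - z) < 1e-12 and w.real > 0
    print(z, w, cmath.sqrt(z))   # the last two columns agree for non-real z
```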

Our aim is to study the asymptotic behavior of the large eigenvalues \({\lambda }\) of \(\mathcal{A}\) in \(S\). For \({\lambda }\) large enough, the general solution of system (3.6)-(3.7) with boundary condition (3.10) is given by

$$ \left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }l} u^{-}(x)&=&d_{1}\dfrac{{\lambda }^{2}-r_{1}^{2}}{c\,{\lambda }}\sinh (r_{1}x)+d_{2} \dfrac{{\lambda }^{2}-r_{2}^{2}}{c\,{\lambda }}\sinh (r_{2}x), \\ y^{-}(x)&=&d_{1}\sinh (r_{1}x)+d_{2}\sinh (r_{2}x), \end{array}\displaystyle \right . $$

and the general solution of system (3.8)-(3.9) with boundary condition (3.11) is given by

$$ \left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }l} u^{+}(x)&=&-D_{1}\dfrac{{\lambda }^{2}-s_{1}^{2}}{c\,{\lambda }} \sinh (s_{1}(1-x))-D_{2} \dfrac{{\lambda }^{2}-s_{2}^{2}}{c\,{\lambda }}\sinh (s_{2}(1-x)), \\ y^{+}(x)&=&-D_{1}\sinh (s_{1}(1-x))-D_{2}\sinh (s_{2}(1-x)), \end{array}\displaystyle \right . $$

where \(d_{1},d_{2},D_{1},D_{2}\in \mathbb{C}\),

$$ r_{1}=\lambda \sqrt{1+\frac{ic}{{\lambda }}},\qquad r_{2}=\lambda \sqrt{1-\frac{ic}{\lambda }} $$
(3.16)

and

$$ s_{1}={\lambda }\sqrt{ \frac{1+\frac{2}{{\lambda }}+\sqrt{1-\frac{4c^{2}}{{\lambda }^{3}}-\frac{4c^{2}}{{\lambda }^{4}}}}{2\left (1+\frac{1}{{\lambda }}\right )}}, \qquad s_{2}=\sqrt{{\lambda }}\sqrt{ \frac{{\lambda }+2-{\lambda }\sqrt{1-\frac{4c^{2}}{{\lambda }^{3}}-\frac{4c^{2}}{{\lambda }^{4}}}}{2\left (1+\frac{1}{{\lambda }}\right )}}. $$
(3.17)

The boundary conditions in (3.12)-(3.15), can be expressed by \(M(d_{1}\ d_{2}\ D_{1}\ D_{2})^{\top }=0\), where

$${\scriptsize{M=\left ( \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \sinh (\frac{r_{1}}{2}) & \sinh (\frac{r_{2}}{2}) & \sinh ( \frac{s_{1}}{2}) &\sinh (\frac{s_{2}}{2}) \\ r_{1}\cosh (\frac{r_{1}}{2}) & r_{2}\cosh (\frac{r_{2}}{2}) & -s_{1} \cosh (\frac{s_{1}}{2}) & -s_{2}\cosh (\frac{s_{2}}{2}) \\ r_{1}^{2}\sinh (\frac{r_{1}}{2}) & r_{2}^{2}\sinh (\frac{r_{2}}{2}) & s_{1}^{2} \sinh (\frac{s_{1}}{2}) & s_{2}^{2}\sinh (\frac{s_{2}}{2}) \\ r_{1}^{3}\cosh (\frac{r_{1}}{2}) & r_{2}^{3}\cosh (\frac{r_{2}}{2}) & -s_{1}(s_{1}^{2}- \lambda (\lambda ^{2}-s_{1}^{2}))\cosh (\frac{s_{1}}{2})& -s_{2}(s_{2}^{2}- \lambda (\lambda ^{2}-s_{2}^{2}))\cosh (\frac{s_{2}}{2}) \end{array}\displaystyle \right) }} $$

System (3.6)-(3.15) admits a non trivial solution if and only if \(det(M)=0\). Using Gaussian elimination, \(det(M)=0\) is equivalent to \(det(M_{1})=0\), where \(M_{1}\) is given by

$${\scriptsize{M_{1}=\left ( \textstyle\begin{array}{c@{\quad }c@{\quad }c@{\quad }c} \sinh (\frac{r_{1}}{2}) & \sinh (\frac{r_{2}}{2}) & \sinh ( \frac{s_{1}}{2}) & 1-e^{-s_{2}} \\ r_{1}\cosh (\frac{r_{1}}{2}) & r_{2}\cosh (\frac{r_{2}}{2}) & -s_{1} \cosh (\frac{s_{1}}{2}) & -s_{2}(1+e^{-s_{2}}) \\ r_{1}^{2}\sinh (\frac{r_{1}}{2}) & r_{2}^{2}\sinh (\frac{r_{2}}{2}) & s_{1}^{2} \sinh (\frac{s_{1}}{2}) & s_{2}^{2}(1-e^{-s_{2}}) \\ r_{1}^{3}\cosh (\frac{r_{1}}{2}) & r_{2}^{3}\cosh (\frac{r_{2}}{2}) & -s_{1}(s_{1}^{2}- \lambda (\lambda ^{2}-s_{1}^{2}))\cosh (\frac{s_{1}}{2})&-s_{2}(s_{2}^{2}- \lambda (\lambda ^{2}-s_{2}^{2}))(1+e^{-s_{2}}) \end{array}\displaystyle \right). }} $$

Then, we get

$$ det(M_{1})=F_{1}+F_{2}e^{-s_{2}}, $$
(3.18)

where

$$\begin{aligned} F_{1}={}\quad& \displaystyle {-s_{1} s_{2} \left (r_{1}^{2}-r_{2}^{2}\right ) \left (s_{1}^{2}-s_{2}^{2} \right )\left (\,\lambda +1\right ) \sinh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left ( \frac{s_{1}}{2} \right ) } \\ &\displaystyle {+r_{1} s_{2} \left (r_{2}^{2}-s_{1}^{2}\right ) \left (( \lambda ^{2}-s_{2}^{2})\, \,\lambda +r_{1}^{2}-s_{2}^{2}\right ) \cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {-r_{2} s_{2} \left (r_{1}^{2}-s_{1}^{2}\right ) \left (( \lambda ^{2}-s_{2}^{2})\, \,\lambda +r_{2}^{2}-s_{2}^{2}\right ) \sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {-r_{1} r_{2} \left (r_{1}^{2}-r_{2}^{2}\right ) \left (s_{1}^{2}-s_{2}^{2} \right ) \cosh \left (\frac{r_{1}}{2} \right ) \cosh \left ( \frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {+r_{2} s_{1} \left (r_{1}^{2}-s_{2}^{2}\right ) \left (( \lambda ^{2}-s_{1}^{2})\, \,\lambda +r_{2}^{2}-s_{1}^{2}\right ) \sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \cosh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {-r_{1} s_{1} \left (r_{2}^{2}-s_{2}^{2}\right ) \left (( \lambda ^{2}-s_{1}^{2})\, \,\lambda +r_{1}^{2}-s_{1}^{2}\right ) \cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left (\frac{s_{1}}{2} \right ) } \end{aligned}$$

and

$$\begin{aligned} F_{2}={} \quad&\displaystyle {-s_{1} s_{2} \left (r_{1}^{2}-r_{2}^{2}\right ) \left (s_{1}^{2}-s_{2}^{2} \right )\left (\,\lambda +1\right ) \sinh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left ( \frac{s_{1}}{2} \right ) } \\ &\displaystyle {+r_{1} s_{2} \left (r_{2}^{2}-s_{1}^{2}\right ) \left (( \lambda ^{2}-s_{2}^{2})\, \,\lambda +r_{1}^{2}-s_{2}^{2}\right ) \cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {-r_{2} s_{2} \left (r_{1}^{2}-s_{1}^{2}\right ) \left (( \lambda ^{2}-s_{2}^{2})\, \,\lambda +r_{2}^{2}-s_{2}^{2}\right ) \sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {+r_{1} r_{2} \left (r_{1}^{2}-r_{2}^{2}\right ) \left (s_{1}^{2}-s_{2}^{2} \right ) \cosh \left (\frac{r_{1}}{2} \right ) \cosh \left ( \frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {-r_{2} s_{1} \left (r_{1}^{2}-s_{2}^{2}\right ) \left (( \lambda ^{2}-s_{1}^{2})\, \,\lambda +r_{2}^{2}-s_{1}^{2}\right ) \sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \cosh \left (\frac{s_{1}}{2} \right ) } \\ &\displaystyle {+r_{1} s_{1} \left (r_{2}^{2}-s_{2}^{2}\right ) \left (( \lambda ^{2}-s_{1}^{2})\, \,\lambda +r_{1}^{2}-s_{1}^{2}\right ) \cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left (\frac{s_{1}}{2} \right ) }. \end{aligned}$$

Lemma 3.3

Let \({\lambda }\in \mathbb{C}\) be an eigenvalue of \(\mathcal{A}\). Then \(\Re ({\lambda })\) is bounded.

Proof

Multiplying equations (3.6)-(3.9) by \(u^{-},y^{-},u^{+},y^{+}\) respectively, then using the boundary conditions, we get

$$ \|{\lambda }u^{-}\|^{2}+\|u_{x}^{-}\|^{2}+\|{\lambda }y^{-}\|^{2}+\|y_{x}^{-} \|^{2}+\|{\lambda }u^{+}\|^{2}+\left (1+\Re ({\lambda })\right )\|u_{x}^{+} \|^{2}+\|{\lambda }y^{+}\|^{2}+\|y_{x}^{+}\|^{2}=0. $$
(3.19)

Since the operator \(\mathcal{A}\) is dissipative, the real part of \(\lambda \) is negative. It is easy to see that \(u^{+}_{x}\neq 0\); hence, using the fact that \(\|U\|_{\mathcal{H}}=1\) in (3.19), we get that \(\Re ({\lambda })\) is bounded below. Therefore, there exists \(\alpha >0\) such that

$$ -\alpha \leq \Re ({\lambda })< 0. $$

 □

Proposition 3.4

Assume that the condition (3.3) holds. Then there exist \(n_{0}\in \mathbb{N}\) sufficiently large and two sequences \(\left ({\lambda }_{1,n}\right )_{\lvert n\rvert \geq n_{0}}\) and \(\left ({\lambda }_{2,n}\right )_{\lvert n\rvert \geq n_{0}}\) of simple roots of \(det(M_{1})\) satisfying the following asymptotic behavior:

Case 1. If \(\sin \left (\frac{c}{4}\right )\neq 0\), then

$$ {\lambda }_{1,n}=2n\pi i+i\pi - \frac{2\sin ^{2}(\frac{c}{4})(1-i\,sign(n))}{\left (3+\cos (\frac{c}{2})\right )\sqrt{\lvert n\rvert \pi }}+O \left (\frac{1}{n}\right ) $$
(3.20)

and

$$ {\lambda }_{2,n}=2n\pi i+i\arccos \left (\cos ^{2}\left (\frac{c}{4} \right )\right )-\frac{\gamma }{\sqrt{\lvert n\rvert \pi }}+i \frac{sign(n)\gamma }{\sqrt{\lvert n\rvert \pi }}+O\left (\frac{1}{n} \right ), $$
(3.21)

where

$$ \gamma = \frac{\left (\cos (\frac{c}{2})\sin \left (\frac{\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )+\sin \left (\frac{3\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )\right )}{4\sqrt{1-cos^{4}\left (\frac{c}{4}\right )}\cos \left (\frac{\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )}. $$

Case 2. If \(\sin \left (\frac{c}{4}\right )= 0\), then

$$ {\lambda }_{1,n}=2n\pi i+i\pi +\frac{i\ c^{2}}{32\pi n}- \frac{(4+i\pi )c^{2}}{64\pi ^{2}n^{2}}+O\left ( \frac{1}{\lvert n\rvert ^{\frac{5}{2}}}\right ) $$
(3.22)

and

$$ {\lambda }_{2,n}=2n\pi i+O\left (\frac{1}{n}\right ). $$
(3.23)

The proof of Proposition 3.4 is divided into the following two lemmas.

Lemma 3.5

Assume that condition (3.3) holds. Let \(\lambda \) be a large eigenvalue of \(\mathcal{A}\). Then \(\lambda \) is a root of the function \(F\), which admits the following asymptotic expansion:

$$ F({\lambda }):= f_{0}(\lambda )+ \frac{f_{1}({\lambda })}{{\lambda }^{1/2}}+ \frac{f_{2}({\lambda })}{8{\lambda }}+ \frac{f_{3}({\lambda })}{8{\lambda }^{3/2}}+ \frac{f_{4}({\lambda })}{128{\lambda }^{2}}+O({\lambda }^{-5/2}), $$
(3.24)

where

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} f_{0}({\lambda })&=&\cosh \left (\frac{3{\lambda }}{2}\right )-\cosh \left (\frac{{\lambda }}{2}\right )\cos \left (\frac{c}{2}\right ), \\ f_{1}({\lambda })&=&\sinh \left (\frac{3{\lambda }}{2}\right )+\sinh \left (\frac{{\lambda }}{2}\right )\cos \left (\frac{c}{2}\right ), \\ f_{2}({\lambda })&=&c^{2}\sinh \left (\frac{3{\lambda }}{2}\right )-4 \cosh \left (\frac{3{\lambda }}{2}\right )+4\left (\cosh \left ( \frac{{\lambda }}{2}\right )\cos \left (\frac{c}{2}\right )+c\sinh \left (\frac{{\lambda }}{2}\right )\sin \left (\frac{c}{2}\right ) \right ), \\ f_{3}({\lambda })&=&-8\sinh \left (\frac{3{\lambda }}{2}\right )+c^{2} \cosh \left (\frac{3{\lambda }}{2}\right )-12c\cosh \left ( \frac{{\lambda }}{2}\right )\sin \left (\frac{c}{2}\right )-8\sinh \left (\frac{{\lambda }}{2}\right )\cos \left (\frac{c}{2}\right ), \\ f_{4}({\lambda })&=&-40c^{2}\sinh \left (\frac{3{\lambda }}{2}\right )+(c^{4}+72c^{2}+48) \cosh \left (\frac{3{\lambda }}{2}\right ) \\ &&+32c\left (c\ \cos \left ( \frac{c}{2}\right )+7\sin \left (\frac{c}{2}\right )\right )\sinh \left (\frac{{\lambda }}{2}\right ) \\ &&-\left (8c^{2}+8c^{3}\sin \left (\frac{c}{2}\right )+16(4c^{2}+3) \cosh \left (\frac{c}{2}\right )\right )\cosh \left ( \frac{{\lambda }}{2}\right ). \end{array}\displaystyle \right . $$
(3.25)

Proof

Let \({\lambda }\) be a large eigenvalue of \(\mathcal{A}\); then \({\lambda }\) is a root of \(\det (M_{1})\). In this lemma, we give an asymptotic expansion of the function \(\det (M_{1})\) for large \({\lambda }\). First, using the asymptotic expansions in (3.16)-(3.17), we get

$$ \left \{ \textstyle\begin{array}{l} r_{1}={\lambda }+\frac{ic}{2}+\frac{c^{2}}{8{\lambda }}- \frac{ic^{3}}{16{\lambda }^{2}}+O({\lambda }^{-3}),\ r_{2}={\lambda }- \frac{ic}{2}+\frac{c^{2}}{8{\lambda }}+ \frac{ic^{3}}{16{\lambda }^{2}}+O({\lambda }^{-3}), \\ s_{1}={\lambda }-\frac{c^{2}}{2{\lambda }}+O({\lambda }^{-5}),\ s_{2}= \sqrt{{\lambda }}-\frac{1}{2\sqrt{{\lambda }}}+ \frac{4c^{2}+3}{8{\lambda }^{\frac{3}{2}}}+O\left ({\lambda }^{-5/2} \right ). \end{array}\displaystyle \right . $$
(3.26)
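For instance, if we assume that (3.16) defines \(r_{1}\) and \(r_{2}\) as the square roots \(\sqrt{{\lambda }^{2}\pm ic{\lambda }}\) taken with the branch behaving like \({\lambda }\) (this closed form is an assumption made here only for illustration, since (3.16)-(3.17) are stated earlier in the paper), then the first line of (3.26) is the standard binomial expansion

$$ r_{1}={\lambda }\left (1+\frac{ic}{{\lambda }}\right )^{1/2}={\lambda }\left (1+\frac{ic}{2{\lambda }}+\frac{c^{2}}{8{\lambda }^{2}}-\frac{ic^{3}}{16{\lambda }^{3}}+O({\lambda }^{-4})\right )={\lambda }+\frac{ic}{2}+\frac{c^{2}}{8{\lambda }}-\frac{ic^{3}}{16{\lambda }^{2}}+O({\lambda }^{-3}), $$

and the expansion of \(r_{2}\) follows by replacing \(c\) with \(-c\).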

From (3.26), we get

$$ \left \{ \textstyle\begin{array}{l} 2ic{\lambda }s_{1}s_{2}(s_{1}^{2}-s_{2}^{2})({\lambda }+1)=ic{ \lambda }^{11/2}\left (2-\frac{1}{{\lambda }}+ \frac{3+4c^{2}}{4{\lambda }^{2}}+O({\lambda }^{-3})\right ), \\ r_{1}s_{2}(r_{2}^{2}-s_{1}^{2})\left (({\lambda }^{2}-s_{2}^{2}){ \lambda }+r_{1}^{2}-s_{2}^{2}\right )=-ic{\lambda }^{11/2}\left (1- \frac{1-i\,c}{2{\lambda }}+\frac{5c^{2}+3+14i\,c}{8{\lambda }^{2}}+O({ \lambda }^{-3})\right ), \\ r_{2}s_{2}(r_{1}^{2}-s_{1}^{2})\left (({\lambda }^{2}-s_{2}^{2}){ \lambda }+r_{2}^{2}-s_{2}^{2}\right )=ic{\lambda }^{11/2}\left (1- \frac{1+i\,c}{2{\lambda }}+\frac{5c^{2}+3-14i\,c}{8{\lambda }^{2}}+O({ \lambda }^{-3})\right ), \\ 2i\,c{\lambda }r_{1}r_{2}\left (s_{1}^{2}-s_{2}^{2}\right )=ic{ \lambda }^{11/2}\left (\frac{2}{\sqrt{{\lambda }}}- \frac{2}{{\lambda }^{3/2}}+O({\lambda }^{-5/2})\right ), \\ r_{2}s_{1}(r_{1}^{2}-s_{2}^{2})(({\lambda }^{2}-s_{1}^{2}){\lambda }+r_{2}^{2}-s_{1}^{2})=-ic{ \lambda }^{11/2}\left (\frac{1}{\sqrt{{\lambda }}}- \frac{2-3ic}{2{\lambda }^{3/2}}+O\left ({\lambda }^{-5/2}\right ) \right ), \\ r_{1}s_{1}(r_{2}^{2}-s_{2}^{2})(({\lambda }^{2}-s_{1}^{2}){\lambda }+r_{1}^{2}-s_{1}^{2})=ic{ \lambda }^{11/2}\left (\frac{1}{\sqrt{{\lambda }}}- \frac{2+3ic}{2{\lambda }^{3/2}}+O({\lambda }^{-5/2})\right ). \end{array}\displaystyle \right . $$
(3.27)

From equation (3.27) and using the fact that \(\Re ({\lambda })\) is bounded, we get

$$\begin{aligned} & \displaystyle {\frac{F_{1}}{ic{\lambda }^{11/2}}} =-\bigg[ \displaystyle {\left (2-\frac{1}{\lambda }+ \frac{4c^{2}+3}{4\lambda ^{2}}\right ) \sinh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left ( \frac{s_{1}}{2} \right ) } \\ &\displaystyle \hspace{2.45cm} +\left (1-\frac{1}{2\lambda }+\frac{5 c^{2}+3}{8\lambda ^{2}} \right ) \Bigl(\cosh \left (\frac{r_{1}}{2} \right ) \sinh \left ( \frac{r_{2}}{2} \right ) \\ & \displaystyle \hspace{2.45cm} +\sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \Bigr)\sinh \left ( \frac{s_{1}}{2}\right ) \\ & \displaystyle { \hspace{2.45cm} +\left (\frac{i\, c}{2\lambda }+\frac{7 i\, c}{4\lambda ^{2}} \right ) \left (\cosh \left (\frac{r_{1}}{2} \right ) \sinh \left ( \frac{r_{2}}{2} \right ) -\sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \right )\sinh \left ( \frac{s_{1}}{2}\right )} \\ & \displaystyle { \hspace{2.45cm} +\left (\frac{2}{\sqrt{\lambda }}-\frac{2}{\lambda ^{3/2}}\right ) \cosh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ & \displaystyle \hspace{2.45cm} +\left (\frac{1}{\sqrt{\lambda }}-\frac{1}{\lambda ^{3/2}}\right ) \Bigl(\sinh \left (\frac{r_{1}}{2} \right ) \cosh \left ( \frac{r_{2}}{2} \right ) \\ & \displaystyle \hspace{2.45cm} +\cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \Bigr)\cosh \left ( \frac{s_{1}}{2} \right ) \\ & \displaystyle \hspace{2.45cm} +\left (\frac{3i\, c}{2\lambda ^{3/2}} \right ) \left (\sinh \left ( \frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) - \cosh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \right )\cosh \left (\frac{s_{1}}{2} \right ) \\ & \displaystyle \hspace{2.45cm} +O\left ( \lambda ^{-5/2}\right )\bigg]. \end{aligned}$$
(3.28)

From equation (3.27) and using the fact that \(\Re ({\lambda })\) is bounded, we get

$$ \textstyle\begin{array}{l} F_{2} =-i\, c\, \lambda ^{11/2}\bigg[\displaystyle {2 \sinh \left (\frac{r_{1}}{2} \right ) \sinh \left (\frac{r_{2}}{2} \right ) \cosh \left (\frac{s_{1}}{2} \right ) } \\ \displaystyle { \hspace{2.3cm} + \left (\cosh \left (\frac{r_{1}}{2} \right ) \sinh \left ( \frac{r_{2}}{2} \right ) +\sinh \left (\frac{r_{1}}{2} \right ) \cosh \left (\frac{r_{2}}{2} \right ) \right )\sinh \left ( \frac{s_{1}}{2}\right )+O\left (\lambda ^{-1/2}\right )\bigg]}. \end{array} $$
(3.29)

Since the real part of \(\sqrt{{\lambda }}\) tends to \(+\infty \) as \(\lvert {\lambda }\rvert \to \infty \) (recall that \(\Re ({\lambda })\) is bounded), we have

$$ \lim _{\lvert {\lambda }\rvert \to \infty }{\lambda }^{5/2}e^{-\sqrt{{ \lambda }}}=0, $$

hence

$$ e^{-\sqrt{{\lambda }}}=o({\lambda }^{-5/2}), $$
(3.30)

then,

$$ F_{2}e^{-s_{2}}=-ic{\lambda }^{11/2}\left (o({\lambda }^{-5/2}) \right ). $$
(3.31)

Inserting (3.28) and (3.31) in (3.18), we get

$$ {\mathrm{det}(\mathrm{M}_{1})}=-ic\,{\lambda }^{11/2}F({\lambda }), $$

where,

$$\begin{aligned} &F(\lambda ) =\displaystyle {\left (1-\frac{1}{2\lambda }+ \frac{4c^{2}+3}{8\lambda ^{2}}\right ) \left (\cosh \left (\frac{r_{1}+r_{2}}{2} \right ) -\cosh \left ( \frac{r_{1}-r_{2}}{2} \right )\right ) \cosh \left (\frac{s_{1}}{2} \right ) } \\ & \displaystyle +\left (1-\frac{1}{2\lambda }+ \frac{5 c^{2}+3}{8\lambda ^{2}} \right ) \sinh \left ( \frac{r_{1}+r_{2}}{2} \right ) \sinh \left (\frac{s_{1}}{2}\right ) \\ & \displaystyle - \left (\frac{i\, c}{2\lambda }+\frac{7 i\, c}{4\lambda ^{2}}\right ) \sinh \left (\frac{r_{1}-r_{2}}{2} \right ) \sinh \left ( \frac{s_{1}}{2}\right ) \\ & \displaystyle {+\left (\frac{1}{\sqrt{\lambda }}- \frac{1}{\lambda ^{3/2}} \right ) \left (\cosh \left ( \frac{r_{1}+r_{2}}{2} \right ) +\cosh \left (\frac{r_{1}-r_{2}}{2} \right )\right ) \sinh \left (\frac{s_{1}}{2} \right ) } \\ & \displaystyle +\left (\frac{1}{\sqrt{\lambda }}- \frac{1}{\lambda ^{3/2}}\right ) \sinh \left (\frac{r_{1}+r_{2}}{2} \right )\cosh \left (\frac{s_{1}}{2} \right )+\left ( \frac{3i\, c}{2\lambda ^{3/2}} \right ) \sinh \left ( \frac{r_{1}-r_{2}}{2} \right )\cosh \left (\frac{s_{1}}{2} \right ) \\ & \displaystyle +O \left (\lambda ^{-5/2}\right ). \end{aligned}$$
(3.32)

Therefore, system (3.10)-(3.15) admits a non-trivial solution if and only if \({\mathrm{det}(\mathrm{M}_{1})=0}\), that is, if and only if the eigenvalues of \(\mathcal{A}\) are roots of the function \(F\). Next, from (3.26) and the fact that \(\Re ({\lambda })\) is bounded, we get

$$ \left \{ \textstyle\begin{array}{l} \cosh \left (\frac{r_{1}+r_{2}}{2}\right )=\cosh ({\lambda })+ \frac{c^{2}\sinh ({\lambda })}{8\,{\lambda }}+ \frac{c^{4}\,\cosh ({\lambda })}{128{\lambda }^{2}}+O({\lambda }^{-3}), \\ \cosh \left (\frac{r_{1}-r_{2}}{2}\right )=\cos \left (\frac{c}{2} \right )+ \frac{c^{3}\sin \left (\frac{c}{2}\right )}{16{\lambda }^{2}}+O({ \lambda }^{-3}), \\ \sinh \left (\frac{r_{1}+r_{2}}{2}\right )=\sinh ({\lambda })+ \frac{c^{2}\,\cosh ({\lambda })}{8{\lambda }}+ \frac{c^{4}\,\sinh ({\lambda })}{128{\lambda }^{2}}+O({\lambda }^{-3}), \\ \sinh \left (\frac{r_{1}-r_{2}}{2}\right )=i\sin \left (\frac{c}{2} \right )-i \frac{c^{3}\cos \left (\frac{c}{2}\right )}{16{\lambda }^{2}}+O({ \lambda }^{-3}), \\ \sinh \left (\frac{s_{1}}{2}\right )=\sinh (\frac{{\lambda }}{2})- \frac{c^{2}\cosh \left (\frac{{\lambda }}{2}\right )}{4{\lambda }^{2}}+O({ \lambda }^{-4}), \\ \cosh (\frac{s_{1}}{2})=\cosh \left (\frac{{\lambda }}{2}\right )- \frac{c^{2}\sinh \left (\frac{{\lambda }}{2}\right )}{4{\lambda }^{2}}+O({ \lambda }^{-4}). \end{array}\displaystyle \right . $$
(3.33)

Inserting (3.33) in (3.32), we get (3.24). □

Lemma 3.6

Under condition (3.3), there exists \(n_{0}\in \mathbb{N}\) sufficiently large and two sequences \(\left ({\lambda }_{1,n}\right )_{\lvert n\rvert \geq n_{0}}\) and \(\left ({\lambda }_{2,n}\right )_{\lvert n\rvert \geq n_{0}}\) of simple roots of \(F\) satisfying the following asymptotic behavior

$$ {\lambda }_{1,n}=2in\pi +i\pi +\epsilon _{1,n}\quad \textit{where}\quad \lim _{\lvert n\rvert \to +\infty }\epsilon _{1,n}=0 $$
(3.34)

and

$$ {\lambda }_{2,n}=2n\pi i+i\arccos \left (\cos ^{2}\left (\frac{c}{4} \right )\right )+\epsilon _{2,n}\quad \textit{where}\quad \lim _{\lvert n \rvert \to +\infty }\epsilon _{2,n}=0. $$
(3.35)

Proof

First, we look at the roots of \(f_{0}\). From (3.25), we deduce that \(f_{0}\) can be written as

$$ f_{0}({\lambda })=2\cosh \left (\frac{{\lambda }}{2}\right )\left ( \cosh ({\lambda })-\cos ^{2}\left (\frac{c}{4}\right )\right ). $$
(3.36)
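Indeed, the factorization (3.36) follows from the product-to-sum identity \(2\cosh \left (\frac{{\lambda }}{2}\right )\cosh ({\lambda })=\cosh \left (\frac{3{\lambda }}{2}\right )+\cosh \left (\frac{{\lambda }}{2}\right )\) together with the double-angle formula \(\cos \left (\frac{c}{2}\right )=2\cos ^{2}\left (\frac{c}{4}\right )-1\):

$$ 2\cosh \left (\frac{{\lambda }}{2}\right )\left (\cosh ({\lambda })-\cos ^{2}\left (\frac{c}{4}\right )\right )=\cosh \left (\frac{3{\lambda }}{2}\right )+\cosh \left (\frac{{\lambda }}{2}\right )\left (1-2\cos ^{2}\left (\frac{c}{4}\right )\right )=\cosh \left (\frac{3{\lambda }}{2}\right )-\cosh \left (\frac{{\lambda }}{2}\right )\cos \left (\frac{c}{2}\right )=f_{0}({\lambda }). $$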

Then, the roots of \(f_{0}\) are given by

$$ \left \{ \textstyle\begin{array}{l@{\quad }r} \mu _{1,n}=2n\pi i+i\pi ,&n\in \mathbb{Z}, \\ \mu _{2,n}=2n\pi i+i\arccos \left (\cos ^{2}\left (\frac{c}{4}\right ) \right ),&n\in \mathbb{Z}. \end{array}\displaystyle \right . $$

Now, with the help of Rouché’s Theorem, we will show that the roots of \(F\) are close to those of \(f_{0}\). Let us start with the first family \(\mu _{1,n}\). Let \(B_{n}=B\left ((2n+1)\pi i,r_{n}\right )\) be the ball of center \((2n+1)\pi i\) and radius \(r_{n}=\lvert n\rvert ^{-\frac{1}{4}}\), and let \({\lambda }\in \partial \, B_{n}\); i.e. \({\lambda }=2n\pi i+i\pi +r_{n}e^{i\theta }\), \(\theta \in [0,2\pi [\). Then

$$ \cosh \left (\frac{{\lambda }}{2}\right )= \frac{i(-1)^{n}r_{n}e^{i\theta }}{2}+O(r_{n}^{2}),\quad \text{and} \quad \cosh ({\lambda })=-1+O(r_{n}^{2}). $$
(3.37)

Inserting (3.37) in (3.36), we get

$$ f_{0}({\lambda })=-i(-1)^{n}r_{n}e^{i\theta }\left (1+\cos ^{2}\left ( \frac{c}{4}\right )+O(r_{n}^{3})\right ). $$

It follows that there exists a positive constant \(C\) such that

$$ \forall \ {\lambda }\in \partial \, B_{n},\quad \lvert f_{0}({ \lambda })\rvert \geq C\,r_{n}=C\lvert n\rvert ^{-\frac{1}{4}}. $$

On the other hand, from (3.24), we deduce that

$$ \lvert F({\lambda })-f_{0}({\lambda })\rvert =O\left ( \frac{1}{\sqrt{{\lambda }}}\right )=O\left ( \frac{1}{\sqrt{\lvert n\rvert }}\right ). $$

It follows that, for \(\lvert n\rvert \) large enough

$$ \forall {\lambda }\in \partial \, B_{n},\quad \lvert F({\lambda })-f_{0}({ \lambda })\rvert < \,\lvert f_{0}({\lambda })\rvert . $$

Hence, with the help of Rouché’s theorem, there exists \(n_{0}\in \mathbb{N}^{\ast }\) large enough, such that \(\forall \, \lvert n\rvert \geq n_{0}\), the first branch of roots of \(F\), denoted by \({\lambda }_{1,n}\), is close to \(\mu _{1,n}\); that is,

$$ {\lambda }_{1,n}=\mu _{1,n}+\epsilon _{1,n}=2n\pi i+i\pi +\epsilon _{1,n}\quad \text{where} \quad \lim _{\lvert n\rvert \to +\infty }\epsilon _{1,n}=0. $$
(3.38)

We now pass to the second family \(\mu _{2,n}\). Let \(\tilde{B}_{n}=B\left (\mu _{2,n},r_{n}\right )\) be the ball of center \(\mu _{2,n}\) and radius

$$ r_{n}:=\left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} \frac{1}{\lvert n\rvert ^{\frac{1}{8}}}&\text{if}&\sin \left ( \frac{c}{4}\right )=0, \\ \frac{1}{\lvert n\rvert ^{\frac{1}{4}}}&\text{if}&\sin \left ( \frac{c}{4}\right )\neq 0, \end{array}\displaystyle \right . $$

and let \({\lambda }\in \partial \, \tilde{B}_{n}\); i.e. \({\lambda }=\mu _{2,n}+r_{n}e^{i\theta }\), \(\theta \in [0,2\pi [\). Then,

$$ \cosh ({\lambda })-\cos ^{2}\left (\frac{c}{4}\right )=\cosh \left (2n \pi i+i\, \arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )+r_{n}e^{i \theta }\right )-\cos ^{2}\left (\frac{c}{4}\right ). $$

It follows that

$$ \cosh ({\lambda })-\cos ^{2}\left (\frac{c}{4}\right )=i\, r_{n}\sqrt{1- \cos ^{4}\left (\frac{c}{4}\right )}e^{i\theta }+ \frac{r_{n}^{2}\cos ^{2}\left (\frac{c}{4}\right )e^{2i\theta }}{2}+O(r_{n}^{3}), $$
(3.39)

and

$$\begin{aligned} \cosh \left (\frac{{\lambda }}{2}\right )={}&(-1)^{n}\cos \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )+\frac{ir_{n}e^{i\theta }(-1)^{n}}{2}\sin \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right ) \\ &+O(r_{n}^{2}). \end{aligned}$$
(3.40)

Inserting (3.39) and (3.40) in (3.36), we get

$$ f_{0}({\lambda })=R_{1}\,e^{i\theta }r_{n}+R_{2}\,e^{2i\theta }r_{n}^{2}+O(r_{n}^{3}), $$
(3.41)

where

$$ \left \{ \textstyle\begin{array}{l} R_{1}=i\,(-1)^{n}\sqrt{1-\cos ^{4}\left (\frac{c}{4}\right )}\cos \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right ), \\ R_{2}=-(-1)^{n}\sqrt{1-\cos ^{4}\left (\frac{c}{4}\right )}\sin \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )+(-1)^{n}\cos ^{2}\left (\frac{c}{4}\right )\cos \left ( \frac{\arccos (\cos ^{2}\left (\frac{c}{4}\right ))}{2}\right ). \end{array}\displaystyle \right . $$
(3.42)

We distinguish two cases:

Case 1. If \(\sin \left (\frac{c}{4}\right )=0\), then

$$ R_{1}=0\quad \text{and}\quad R_{2}=(-1)^{n}\neq 0. $$

It follows that there exists a positive constant \(C\) such that

$$ \forall {\lambda }\in \partial \, \tilde{B}_{n},\quad \lvert f_{0}({ \lambda })\rvert \geq C\, r_{n}^{2}=C\lvert n\rvert ^{-\frac{1}{4}}. $$

Case 2. If \(\sin \left (\frac{c}{4}\right )\neq 0\), then \(R_{1}\neq 0\). It follows that, there exists a positive constant \(C\) such that

$$ \forall {\lambda }\in \partial \, \tilde{B}_{n},\quad \lvert f_{0}({ \lambda })\rvert \geq C\, r_{n}=C\lvert n\rvert ^{-\frac{1}{4}}. $$

On the other hand, from (3.24), we deduce that

$$ \lvert F({\lambda })-f_{0}({\lambda })\rvert =O\left ( \frac{1}{\sqrt{{\lambda }}}\right )=O\left ( \frac{1}{\sqrt{\lvert n\rvert }}\right ). $$

In both cases, for \(\lvert n\rvert \) large enough, we have

$$ \forall \, {\lambda }\in \partial \tilde{B}_{n},\quad \lvert F({ \lambda })-f_{0}({\lambda })\rvert < \lvert f_{0}({\lambda })\rvert . $$

Hence, with the help of Rouché’s Theorem, there exists \(n_{0}\in \mathbb{N}^{\ast }\) large enough, such that \(\forall \lvert n\rvert \geq n_{0}\), the second branch of roots of \(F\), denoted by \({\lambda }_{2,n}\), is close to \(\mu _{2,n}\); this yields (3.35). The proof is thus complete. □

We are now in a position to conclude the proof of Proposition 3.4.

Proof of Proposition 3.4

The proof is divided into two steps.

Step 1. Calculation of \(\epsilon _{1,n}\). From (3.38), we have

$$ \left \{ \textstyle\begin{array}{l} \displaystyle {\cosh \left (\frac{3\lambda _{1,n}}{2}\right )=-i\,(-1)^{n} \, \sinh \left (\frac{3\epsilon _{1,n}}{2}\right ),\ \sinh \left ( \frac{3\lambda _{1,n}}{2}\right )=-i\,(-1)^{n}\, \cosh \left ( \frac{3\epsilon _{1,n}}{2}\right ) ,} \\ \displaystyle {\cosh \left (\frac{\lambda _{1,n}}{2}\right )=i\,(-1)^{n} \, \sinh \left (\frac{\epsilon _{1,n}}{2}\right ),\ \sinh \left ( \frac{\lambda _{1,n}}{2}\right )=i\,(-1)^{n}\, \cosh \left ( \frac{\epsilon _{1,n}}{2}\right ) ,} \\ \displaystyle {\frac{1}{\lambda _{1,n}}= -\frac{i}{2\pi n}+ \frac{i}{4\pi n^{2}}+O\left (\epsilon _{1,n}\, n^{-2}\right )+O\left (n^{-3} \right ),\ \frac{1}{\lambda ^{2}_{1,n}}= -\frac{1}{4\pi ^{2} n^{2}}+O \left (n^{-3}\right )} \\ \displaystyle {\frac{1}{\sqrt{\lambda _{1,n}}}= \frac{1-i\operatorname {sign}(n)}{2\sqrt{\pi |n|}}+ \frac{i-\operatorname {sign}(n)}{8\sqrt{\pi |n|^{3}}} +O\left (\epsilon _{1,n}\, |n|^{-3/2} \right )+O\left (|n|^{-5/2}\right )}, \\ \displaystyle {\frac{1}{\sqrt{\lambda ^{3}_{1,n}}}= \frac{-1-i\operatorname {sign}(n)}{4\sqrt{\pi ^{3} |n|^{3}}}+O\left (|n|^{-5/2} \right )\, ,\frac{1}{\sqrt{\lambda ^{5}_{1,n}}}=O\left (|n|^{-5/2} \right )}. \end{array}\displaystyle \right . $$
(3.43)
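For instance, the first identity in (3.43) follows from (3.38) and the addition formula for \(\cosh \): writing \(\frac{3\lambda _{1,n}}{2}=3n\pi i+\frac{3\pi i}{2}+\frac{3\epsilon _{1,n}}{2}\) and using \(\cosh \left (3n\pi i+\frac{3\pi i}{2}\right )=\cos \left (3n\pi +\frac{3\pi }{2}\right )=0\) and \(\sinh \left (3n\pi i+\frac{3\pi i}{2}\right )=i\sin \left (3n\pi +\frac{3\pi }{2}\right )=-i(-1)^{n}\), we get

$$ \cosh \left (\frac{3\lambda _{1,n}}{2}\right )=\sinh \left (3n\pi i+\frac{3\pi i}{2}\right )\sinh \left (\frac{3\epsilon _{1,n}}{2}\right )=-i\,(-1)^{n}\sinh \left (\frac{3\epsilon _{1,n}}{2}\right ). $$

The other identities in (3.43) are obtained in the same way.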

On the other hand, since \(\displaystyle {\lim _{|n|\to +\infty }\epsilon _{1,n}=0}\), we have the asymptotic expansion

$$ \left \{ \textstyle\begin{array}{l} \displaystyle {\sinh \left (\frac{3\epsilon _{1,n}}{2}\right )= \frac{3\epsilon _{1,n}}{2}+O(\epsilon _{1,n}^{3}) },\ \displaystyle { \cosh \left (\frac{3\epsilon _{1,n}}{2}\right )=1+ \frac{9\epsilon _{1,n}^{2}}{8}+O(\epsilon _{1,n}^{4})}, \\ \displaystyle {\sinh \left (\frac{\epsilon _{1,n}}{2}\right )= \frac{\epsilon _{1,n}}{2}+O(\epsilon _{1,n}^{3}) },\ \displaystyle { \cosh \left (\frac{\epsilon _{1,n}}{2}\right )=1+ \frac{\epsilon _{1,n}^{2}}{8}+O(\epsilon _{1,n}^{4})}. \end{array}\displaystyle \right . $$
(3.44)

Inserting (3.44) in (3.43), we get

$$ \left \{ \textstyle\begin{array}{l} \displaystyle \cosh \left (\frac{3\lambda _{1,n}}{2}\right )=- \frac{3 i\,(-1)^{n} \epsilon _{1,n}}{2}+O(\epsilon _{1,n}^{3}) ,\\ \displaystyle \sinh \left (\frac{3\lambda _{1,n}}{2}\right )=-i\,(-1)^{n} - \frac{9i\,(-1)^{n}\,\epsilon _{1,n}^{2}}{8}+O(\epsilon _{1,n}^{4}) , \\ \displaystyle {\cosh \left (\frac{\lambda _{1,n}}{2}\right )= \frac{i\,(-1)^{n}\,\epsilon _{1,n}}{2}+O(\epsilon _{1,n}^{3}),\ \sinh \left (\frac{\lambda _{1,n}}{2}\right )=i\,(-1)^{n} + \frac{i\,(-1)^{n}\,\epsilon _{1,n}^{2}}{8}+O(\epsilon _{1,n}^{4}) ,} \\ \displaystyle {\frac{1}{\lambda _{1,n}}= -\frac{i}{2\pi n}+ \frac{i}{4\pi n^{2}}+O\left (\epsilon _{1,n}\, n^{-2}\right )+O\left (n^{-3} \right ),\ \frac{1}{\lambda ^{2}_{1,n}}= -\frac{1}{4\pi ^{2} n^{2}}+O \left (n^{-3}\right )} \\ \displaystyle {\frac{1}{\sqrt{\lambda _{1,n}}}= \frac{1-i\operatorname {sign}(n)}{2\sqrt{\pi |n|}}+ \frac{i-\operatorname {sign}(n)}{8\sqrt{\pi |n|^{3}}} +O\left (\epsilon _{1,n}\, |n|^{-3/2} \right )+O\left (|n|^{-5/2}\right )}, \\ \displaystyle {\frac{1}{\sqrt{\lambda ^{3}_{1,n}}}= \frac{-1-i\operatorname {sign}(n)}{4\sqrt{\pi ^{3} |n|^{3}}}+O\left (|n|^{-5/2} \right )\, ,\frac{1}{\sqrt{\lambda ^{5}_{1,n}}}=O\left (|n|^{-5/2} \right )}. \end{array}\displaystyle \right . $$
(3.45)

Inserting (3.45) in (3.24), we get

$$ \textstyle\begin{array}{l} \displaystyle {\frac{\epsilon _{1,n }}{2}\left (3+\cos \left ( \frac{c}{2}\right )\right )\left (1+\frac{i}{4\pi \, n}\right )+ \frac{\left (1-i\operatorname {sign}(n)\right )\left (1-\cos \left (\frac{c}{2}\right )\right )}{2\sqrt{\pi \, |n|}}+ \frac{i\, c\left (4\sin \left (\frac{c}{2}\right )-c\right )}{16\pi n}} \\ \hspace{1cm}\displaystyle - \frac{\left (2+i\pi \right )\left (1+i\operatorname {sign}(n)\right )\left (1-\cos \left (\frac{c}{2}\right )\right )}{8\sqrt{\pi ^{3}\, |n|^{3}}} \\ \hspace{1cm}\displaystyle + \frac{4c\left (7-2i\pi \right )\sin \left (\frac{c}{2}\right )+c^{2}\, (2i\pi +5+4 \cos \left (\frac{c}{2}\right ))}{64\pi ^{2}n^{2}} \\ \hspace{1cm}\displaystyle { +O\left (|n|^{-5/2}\right )+O\left (\epsilon _{1,n }\, |n|^{-3/2} \right )+O\left (\epsilon _{1,n }^{2}\, |n|^{-1/2}\right )+O\left ( \epsilon _{1,n }^{3}\right )=0}. \end{array} $$
(3.46)

We distinguish two cases.

Case 1. If \(\sin \left (\frac{c}{4}\right )\neq 0\), then \(\displaystyle {1-\cos \left (\frac{c}{2}\right )=2\sin ^{2}\left ( \frac{c}{4}\right )\neq 0}\), and from (3.46) we get

$$ \frac{\epsilon _{1,n}}{2}\left (3+\cos \left (\frac{c}{2}\right ) \right )+ \frac{\sin ^{2}\left (\frac{c}{4}\right )(1-i\operatorname {sign}(n))}{\sqrt{\lvert n\rvert \pi }}+O( \epsilon _{1,n}^{3})+O(\lvert n\rvert ^{-1/2}\epsilon _{1,n}^{2})+O(n^{-1})=0, $$

hence, we get

$$ \epsilon _{1,n}=- \frac{2\sin ^{2}\left (\frac{c}{4}\right )(1-i\operatorname {sign}(n))}{\left (3+\cos \left (\frac{c}{2}\right )\right )\sqrt{\lvert n\rvert \pi }}+O(n^{-1}). $$
(3.47)

Inserting (3.47) in (3.38), we get (3.20).

Case 2. If \(\sin \left (\frac{c}{4}\right )=0\),

$$ 1-\cos \left (\frac{c}{2}\right )=2\sin ^{2}\left (\frac{c}{4}\right )=0, \, \, \sin \left (\frac{c}{2}\right )=2\sin \left (\frac{c}{4}\right ) \cos \left (\frac{c}{4}\right )=0, $$

then, from (3.46), we get

$$ \begin{aligned} 2\epsilon _{1,n}\left (1+\frac{i}{4\pi n}\right )- \frac{i\, c^{2}}{16\pi n}+\frac{c^{2}(2i\pi +9)}{64\pi ^{2}n^{2}}+O \left (\lvert n\rvert ^{-5/2}\right )+O\left (\epsilon _{1,n}\lvert n \rvert ^{-3/2}\right ) \\ +O\left (\epsilon _{1,n}^{2}\lvert n\rvert ^{-1/2}\right )+O\left ( \epsilon _{1,n}^{3}\right )=0. \end{aligned} $$
(3.48)

By a straightforward calculation in equation (3.48), we get

$$ \epsilon _{1,n}=\frac{i\, c^{2}}{32\pi n}- \frac{(4+i\, \pi )c^{2}}{64\pi ^{2}n^{2}}+O\left (\lvert n\rvert ^{-5/2} \right ). $$
(3.49)
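Indeed, dividing (3.48) by \(2\left (1+\frac{i}{4\pi n}\right )\) and expanding \(\left (1+\frac{i}{4\pi n}\right )^{-1}=1-\frac{i}{4\pi n}+O(n^{-2})\), we obtain

$$ \epsilon _{1,n}=\left (\frac{i\,c^{2}}{32\pi n}-\frac{c^{2}(2i\pi +9)}{128\pi ^{2}n^{2}}\right )\left (1-\frac{i}{4\pi n}\right )+O\left (\lvert n\rvert ^{-5/2}\right )=\frac{i\,c^{2}}{32\pi n}-\frac{c^{2}(2i\pi +8)}{128\pi ^{2}n^{2}}+O\left (\lvert n\rvert ^{-5/2}\right ), $$

which is exactly (3.49).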

Inserting (3.49) in (3.38), we get (3.22).

Step 2. Calculation of \(\epsilon _{2,n}\). From (3.35), we have

$$ \frac{1}{\sqrt{{\lambda }_{2,n}}}= \frac{1-i\operatorname {sign}(n)}{2\sqrt{\lvert n\rvert \pi }}+O\left (\lvert n \rvert ^{-3/2}\right )\quad \text{and}\quad \frac{1}{{\lambda }_{2,n}}=O(n^{-1}). $$
(3.50)

Inserting (3.35) and (3.50) in (3.24), we get

$$ \textstyle\begin{array}{l} \displaystyle {\cosh \left (\frac{{\lambda }_{2,n}}{2}\right )\left ( \cosh ({\lambda }_{2,n})-\cos ^{2}\left (\frac{c}{4}\right )\right )} \\ \displaystyle {+ \frac{(1-i\operatorname {sign}(n))\left (\sinh \left (\frac{3{\lambda }_{2,n}}{2}\right )+\sinh \left (\frac{{\lambda }_{2,n}}{2}\right )\cos \left (\frac{c}{2}\right )\right )}{4\sqrt{\lvert n\rvert \pi }}+O(n^{-1})=0}. \end{array} $$
(3.51)

On the other hand, we have

$$ \textstyle\begin{array}{l@{\quad }l@{\quad }l} \cosh ({\lambda }_{2,n})-\cos ^{2}\left (\frac{c}{4}\right )&=&\cosh \left (2n\pi i+i\, \arccos \left (\cos ^{2}\left (\frac{c}{4}\right ) \right )+\epsilon _{2,n}\right )-\cos ^{2}\left (\frac{c}{4}\right ) \\ &=&\cos ^{2}\left (\frac{c}{4}\right )\cosh (\epsilon _{2,n})+i\sqrt{1- \cos ^{4}\left (\frac{c}{4}\right )}\sinh (\epsilon _{2,n})-\cos ^{2} \left (\frac{c}{4}\right ) \\ &=&i\,\epsilon _{2,n}\sqrt{1-\cos ^{4}\left (\frac{c}{4}\right )}+O( \epsilon _{2,n}^{2}), \end{array} $$
(3.52)

and

$$ \left \{ \textstyle\begin{array}{l@{\quad }l@{\quad }l} \cosh \left (\frac{{\lambda }_{2,n}}{2}\right )&=&(-1)^{n}\cos \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )+O(\epsilon _{2,n}), \\ \sinh \left (\frac{{\lambda }_{2,n}}{2}\right )&=&i(-1)^{n}\sin \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )+O(\epsilon _{2,n}), \\ \sinh \left (\frac{3{\lambda }_{2,n}}{2}\right )&=&i(-1)^{n}\sin \left ( \frac{3\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )+O(\epsilon _{2,n}). \end{array}\displaystyle \right . $$
(3.53)

Inserting (3.52) and (3.53) in (3.51), we get

$$ \textstyle\begin{array}{l} \epsilon _{2,n}\cos \left ( \frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2} \right )\sqrt{1-\cos ^{4}\left (\frac{c}{4}\right )}+O\left ( \frac{\epsilon _{2,n}}{n}\right )+O\left (\frac{1}{n}\right ) \\ \displaystyle {+ \frac{(1-i\,\operatorname {sign}(n))\left (\sin \left (3\frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2}\right )+\cos \left (\frac{c}{2}\right )\sin \left (\frac{\arccos \left (\cos ^{2}\left (\frac{c}{4}\right )\right )}{2}\right )\right )}{4\sqrt{\lvert n\rvert \pi }}=0}. \end{array} $$
(3.54)

We distinguish two cases.

Case 1. If \(\sin \left (\frac{c}{4}\right )\neq 0\), then from (3.54), we get

$$ \epsilon _{2,n}=- \frac{\left (\cos (\frac{c}{2})\sin \left (\frac{\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )+\sin \left (\frac{3\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )\right )(1-i\, \operatorname {sign}(n))}{4\sqrt{1-\cos ^{4}\left (\frac{c}{4}\right )}\cos \left (\frac{\arccos \left (\cos ^{2}(\frac{c}{4})\right )}{2}\right )\sqrt{\pi \lvert n\rvert }}+O(n^{-1}). $$
(3.55)

Inserting (3.55) in (3.35), we get (3.21).

Case 2. If \(\sin \left (\frac{c}{4}\right )=0\), we get

$$ \epsilon _{2,n}=O(n^{-1}). $$
(3.56)

Inserting (3.56) in (3.35), we get (3.23). Thus, the proof is complete. □
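As an illustrative numerical check of Proposition 3.4 (not part of the proof), one may locate a root of the truncated function \(F\), i.e. (3.24) with the coefficients (3.25) and without the \(O({\lambda }^{-5/2})\) remainder, near \(\mu _{1,n}=(2n+1)\pi i\) and compare the computed correction with the leading term predicted by (3.20). The following minimal Python sketch uses the mpmath library; the values \(c=1\) and \(n=200\) are arbitrary choices made only for illustration.

```python
# Illustrative check of (3.20) (not part of the proof): locate a root of the
# truncated F from (3.24)-(3.25) near mu_{1,n} = (2n+1)*pi*i.
from mpmath import mp, mpf, cosh, sinh, cos, sin, sqrt, pi, findroot

mp.dps = 30
c = mpf(1)          # arbitrary value of the parameter c with sin(c/4) != 0 (Case 1)

def F(lam):
    # f_0,...,f_4 as defined in (3.25)
    f0 = cosh(3*lam/2) - cosh(lam/2)*cos(c/2)
    f1 = sinh(3*lam/2) + sinh(lam/2)*cos(c/2)
    f2 = (c**2*sinh(3*lam/2) - 4*cosh(3*lam/2)
          + 4*(cosh(lam/2)*cos(c/2) + c*sinh(lam/2)*sin(c/2)))
    f3 = (-8*sinh(3*lam/2) + c**2*cosh(3*lam/2)
          - 12*c*cosh(lam/2)*sin(c/2) - 8*sinh(lam/2)*cos(c/2))
    f4 = (-40*c**2*sinh(3*lam/2) + (c**4 + 72*c**2 + 48)*cosh(3*lam/2)
          + 32*c*(c*cos(c/2) + 7*sin(c/2))*sinh(lam/2)
          - (8*c**2 + 8*c**3*sin(c/2) + 16*(4*c**2 + 3)*cosh(c/2))*cosh(lam/2))
    # truncated expansion (3.24), without the O(lambda^{-5/2}) remainder
    return f0 + f1/sqrt(lam) + f2/(8*lam) + f3/(8*lam**mpf('1.5')) + f4/(128*lam**2)

n = 200
mu = (2*n + 1)*pi*1j                       # root of f_0 (first family)
eps_num = findroot(F, mu) - mu             # numerically computed correction
eps_pred = -2*sin(c/4)**2*(1 - 1j)/((3 + cos(c/2))*sqrt(n*pi))  # leading term in (3.20)
print(eps_num, eps_pred)                   # by (3.20), these should agree up to O(1/n)
```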

Proof of Theorem 3.2

From Proposition 3.4, the operator \(\mathcal{A}\) has two branches of eigenvalues whose real parts tend to zero. Hence, the energy corresponding to these two branches of eigenvalues does not decay exponentially, and consequently the total energy of the system of wave equations with local Kelvin-Voigt damping and global coupling is not exponentially stable in the equal speed case. □

4 Polynomial Stability

From Sect. 3, System (1.2)-(1.6) is not uniformly (exponentially) stable, so we look for a polynomial decay rate. As the condition \(i\mathbb{R}\subset \rho (\mathcal{A})\) has already been checked in Lemma 2.4, following Theorem A.13, it remains to prove that condition (A.39) holds. This is done with the help of a specific multiplier and by using the exponential decay of an auxiliary problem. Our main result in this section is the following theorem.

Theorem 4.1

There exists a constant \(C>0\) independent of \(U_{0}\), such that the energy of system (1.2)-(1.6) satisfies the following estimation:

$$ E(t)\leq \frac{C}{t}\|U_{0}\|^{2}_{D(\mathcal{A})},\quad \forall t>0, \ \forall U_{0}\in D(\mathcal{A}). $$
(4.1)

According to Theorem A.13, by taking \(\ell =2\), the polynomial energy decay (4.1) holds if the following conditions

$$ i\mathbb{R}\subset \rho (\mathcal{A}), $$
(H1)

and

$$ \sup _{\lambda \in \mathbb{R}}\left \| \left (i\lambda I-\mathcal{A} \right )^{-1}\right \| _{\mathcal{L}\left (\mathcal{H}\right )}=O \left (|\lambda |^{2}\right ), $$
(H2)

are satisfied. Condition (H1) is already proved in Lemma 2.4. We will prove condition (H2) by a contradiction argument. For this purpose, suppose that (H2) is false; then there exists a sequence \(\left \{ (\lambda _{n},U_{n}=\left (u_{n},v_{n},y_{n},z_{n}\right )) \right \} _{n\geq 1}\subset \mathbb{R}\times D\left (\mathcal{A} \right )\) with

$$ \lambda _{n}\to +\infty ,\ \ \|U_{n}\|_{\mathcal{H}}=1, $$
(4.2)

such that

$$ \lambda _{n}^{2}\left (\ i\lambda _{n} U_{n}-\mathcal{A}U_{n}\right )= \left (f_{1,n},g_{1,n},f_{2,n},g_{2,n}\right ):=F_{n}\to 0\ \text{ in } \mathcal{H}. $$
(4.3)

For simplicity, we drop the index \(n\). Writing Equation (4.3) componentwise, we obtain

$$\begin{aligned} i{\lambda }u-v =&\lambda ^{-2} f_{1}\longrightarrow 0\ \ \text{in}\ \ H_{0}^{1}(0,L), \end{aligned}$$
(4.4)
$$\begin{aligned} i{\lambda }v-(au_{x}+b(x)v_{x})_{x}+c(x)z =&\lambda ^{-2} g_{1} \longrightarrow 0\ \ \text{in}\ \ L^{2}(0,L), \end{aligned}$$
(4.5)
$$\begin{aligned} i{\lambda }y-z =&\lambda ^{-2} f_{2}\longrightarrow 0\ \ \text{in}\ \ H_{0}^{1}(0,L), \end{aligned}$$
(4.6)
$$\begin{aligned} i{\lambda }z- y_{xx}-c(x)v =&\lambda ^{-2} g_{2}\longrightarrow 0\ \ \text{in}\ \ L^{2}(0,L). \end{aligned}$$
(4.7)

Here we will check condition (H2) by finding a contradiction with (4.2), namely by proving that \(\left \| U\right \| _{\mathcal{H}}=o(1)\). For clarity, we divide the proof into several lemmas. By taking the inner product of (4.3) with \(U\) in ℋ, we remark that

$$ \int _{0}^{L} b(x)\left |v_{x}\right |^{2}dx=-\Re \left (\left < \mathcal{A}U,U\right >_{\mathcal{H}}\right )=\Re \left (\left < \left (i{ \lambda }I-\mathcal{A}\right )U,U\right >_{\mathcal{H}}\right )=o\left ( \lambda ^{-2}\right ). $$

Then, since \(b\) is bounded from below by a positive constant on \((\alpha _{1},\alpha _{3})\),

$$ \int _{\alpha _{1}}^{\alpha _{3}}\left |v_{x}\right |^{2}dx=o\left ( \lambda ^{-2}\right ). $$
(4.8)

Remark 4.2

Since \(v\) and \(z\) are uniformly bounded in \(L^{2}(0,L)\), equations (4.4) and (4.6) imply that the solution \((u,v,y,z)\in D(\mathcal{A})\) of (4.4)-(4.7) satisfies the following asymptotic behavior estimation

$$\begin{aligned} \|u\| =&O\left (\lambda ^{-1}\right ), \end{aligned}$$
(4.9)
$$\begin{aligned} \|y\| =&O\left (\lambda ^{-1}\right ). \end{aligned}$$
(4.10)

Using equations (4.4) and (4.8), we get

$$ \int _{\alpha _{1}}^{\alpha _{3}}\left |u_{x}\right |^{2}dx=o\left ( \lambda ^{-4}\right ). $$
(4.11)

Lemma 4.3

Let \(\varepsilon <\frac{\alpha _{3}-\alpha _{1}}{4}\). Then the solution \((u,v,y,z)\in D(\mathcal{A})\) of the system (4.4)-(4.7) satisfies the following estimation

$$ \int _{\alpha _{1}+\varepsilon }^{\alpha _{3}-\varepsilon }\left |v \right |^{2} dx=o(1)\quad \textit{and}\quad \int _{\alpha _{1}+ \varepsilon }^{\alpha _{3}-\varepsilon }\lvert {\lambda }u\rvert ^{2}dx=o(1). $$
(4.12)

Proof

We define the function \(\rho \in C_{0}^{\infty }(0,L)\) by

$$ \rho (x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} 1&\text{if}&x\in (\alpha _{1}+\varepsilon ,\alpha _{3}-\varepsilon ), \\ 0&\text{if}& x\in (0,\alpha _{1})\cup (\alpha _{3},L), \\ 0\leq \rho \leq 1&&\text{elsewhere}. \end{array}\displaystyle \right . $$
(4.13)

Multiplying equation (4.5) by \(\dfrac{1}{\lambda }\rho \bar{v}\), integrating over \((0,L)\), and using the fact that \(\|g_{1}\|_{L^{2}(0,L)}=o(1)\) and that \(v\) is uniformly bounded in \(L^{2}(\Omega )\), we get

$$ \int _{0}^{L} i\rho \left |v\right |^{2} dx+\dfrac{1}{\lambda }\int _{0}^{L} (au_{x}+b(x)v_{x})\left (\rho ^{\prime } \bar{v}+\rho \bar{v}_{x} \right )dx+\dfrac{1}{\lambda }\int _{0}^{L} c(x)z\rho \bar{v}dx=o( \lambda ^{-3}). $$
(4.14)
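Here, the second term on the left-hand side of (4.14) comes from an integration by parts in which no boundary terms appear, since \(\rho \in C_{0}^{\infty }(0,L)\):

$$ -\dfrac{1}{\lambda }\int _{0}^{L}\left (au_{x}+b(x)v_{x}\right )_{x}\rho \bar{v}\,dx=\dfrac{1}{\lambda }\int _{0}^{L}\left (au_{x}+b(x)v_{x}\right )\left (\rho ^{\prime }\bar{v}+\rho \bar{v}_{x}\right )dx. $$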

Using Equation (4.8), Remark 4.2 and the fact that \(v\) and \(z\) are uniformly bounded in \(L^{2}(\Omega )\), we get

$$ \dfrac{1}{\lambda }\int _{0}^{L} (au_{x}+b(x)v_{x})\left (\rho ^{ \prime } \bar{v}+\rho \bar{v}_{x}\right )dx=o({\lambda }^{-2})\quad \text{and}\quad \dfrac{1}{\lambda }\int _{0}^{L} c(x)z\rho \bar{v}dx=o(1). $$
(4.15)

Inserting Equation (4.15) in Equation (4.14), we obtain

$$ \int _{0}^{L} i\rho \left |v\right |^{2}dx=o(1). $$
(4.16)

Hence, we obtain the first estimation in Equation (4.12). Now, multiplying Equation (4.4) by \(\lambda \rho \bar{u}\), integrating over \((0,L)\), and using the fact that \(\|f_{1}\|_{H_{0}^{1}(\Omega )}=o(1)\) and Remark 4.2, we get

$$ \int _{0}^{L} i\rho \left |\lambda u\right |^{2}dx-\int _{0}^{L}\rho \lambda v \bar{u}dx=o(\lambda ^{-2}). $$

Using Equation (4.16), we get

$$ \int _{0}^{L} i\rho \left |\lambda u\right |^{2}dx=o(1). $$

Then, we obtain the desired second estimation in Equation (4.12). □

Inserting equations (4.4) and (4.6) respectively in equations (4.5) and (4.7), we get

$$\begin{aligned} \lambda ^{2}u+(au_{x}+b(x)v_{x})_{x}-i\lambda c(x)y =&F_{1}, \end{aligned}$$
(4.17)
$$\begin{aligned} \lambda ^{2}y+y_{xx}+i\lambda c(x)u =&F_{2}, \end{aligned}$$
(4.18)

where

$$ F_{1}=-\lambda ^{-2}g_{1}-i\lambda ^{-1}f_{1}-c(x)\lambda ^{-2}f_{2} \quad \text{and}\quad F_{2}=-\lambda ^{-2}g_{2}-i\lambda ^{-1}f_{2}+c(x) \lambda ^{-2}f_{1}. $$
(4.19)

Lemma 4.4

Let \(\varepsilon <\frac{\alpha _{3}-\alpha _{1}}{4}\). Then the solution \((u,v,y,z)\in D(\mathcal{A})\) of the system (4.4)-(4.7) satisfies the following estimation

$$ \int _{\alpha _{2}}^{\alpha _{3}-2\varepsilon }\left |\lambda y\right |^{2} dx=o(1)\quad \textit{and}\quad \int _{\alpha _{2}}^{\alpha _{3}-2 \varepsilon }\left |z\right |^{2}dx=o(1). $$
(4.20)

Proof

We define the function \(\zeta \in C_{0}^{\infty }(0,L)\) by

$$ \zeta (x)=\left \{ \textstyle\begin{array}{c@{\quad }c@{\quad }c} 1&\text{if}&x\in (\alpha _{1}+2\varepsilon ,\alpha _{3}-2\varepsilon ), \\ 0&\text{if}& x\in (0,\alpha _{1}+\varepsilon )\cup (\alpha _{3}- \varepsilon ,L), \\ 0\leq \zeta \leq 1&&\text{elsewhere}. \end{array}\displaystyle \right . $$
(4.21)

Multiplying equations (4.17) and (4.18) by \(\lambda \zeta \bar{y}\) and \(\lambda \zeta \bar{u}\) respectively, integrating over \((0,L)\), and using Remark 4.2 and the fact that \(\|F\|_{\mathcal{H}}=\|(f_{1},g_{1},f_{2},g_{2})\|_{\mathcal{H}}=o(1)\), we get

$$ \int _{0}^{L}\lambda ^{3}\zeta u\bar{y}dx-\int _{0}^{L}\lambda \left (au_{x}+b(x)v_{x} \right )(\zeta ^{\prime }\bar{y}+\zeta \bar{y}_{x})dx-i\int _{0}^{L} c(x) \zeta (x)\left |\lambda y \right |^{2} dx=o({\lambda }^{-1}) $$
(4.22)

and

$$ \int _{0}^{L}\lambda ^{3}\zeta y\bar{u}dx-\int _{0}^{L}\lambda y_{x} \zeta ^{\prime }\bar{u}_{x}dx-\int _{0}^{L}\lambda y_{x}\zeta \bar{u}_{x}dx+i \int _{0}^{L} c(x)\zeta (x)\left |\lambda u \right |^{2} dx=o({ \lambda }^{-1}). $$
(4.23)

Using Remark 4.2, Lemma 4.3 and the fact that \(y_{x}\) is uniformly bounded in \(L^{2}(0,L)\), we get

$$ \begin{aligned} &\int _{0}^{L}\lambda \left (au_{x}+b(x)v_{x}\right )(\zeta ^{\prime } \bar{y}+\zeta \bar{y}_{x})dx=o(1),\quad -\int _{0}^{L}\lambda y_{x} \zeta ^{\prime }\bar{u}_{x}dx=o(1)\quad \text{and}\\ &\int _{0}^{L} \lambda y_{x}\zeta \bar{u}_{x}dx=o(1). \end{aligned} $$
(4.24)

Using Lemma 4.3, we have that

$$ \int _{0}^{L} c(x)\zeta \left |\lambda u \right |^{2} dx=o(1). $$
(4.25)

Inserting Equations (4.24) and (4.25) in Equations (4.22) and (4.23), summing the results, taking the imaginary part, and using the definitions of the functions \(c\) and \(\zeta \), we get the first estimation of Equation (4.20).

Now, multiplying equation (4.6) by \(\bar{z}\), integrating over \((\alpha _{2},\alpha _{3}-2\varepsilon )\) and using the fact that \(\|f_{2}\|_{H_{0}^{1}(0,L)}=o(1)\) and \(z\) is uniformly bounded in \(L^{2}(0,L)\), in particular in \(L^{2}(\alpha _{2},\alpha _{3}-2\varepsilon )\), we get

$$ \int _{\alpha _{2}}^{\alpha _{3}-2\varepsilon } i\lambda y\bar{z}dx- \int _{\alpha _{2}}^{\alpha _{3}-2\varepsilon }\left |z\right |^{2}dx=o( \lambda ^{-2}). $$

Then, using the first estimation of Equation (4.20), we get the second desired estimation of Equation (4.20). □

Now, as in [27], we will construct a new multiplier satisfying an ordinary differential system.

Lemma 4.5

Let \(0<\alpha _{1}<\alpha _{2}<\alpha _{3}<\alpha _{4}<L\), suppose that \(\varepsilon <\frac{\alpha _{3}-\alpha _{1}}{4}\), and let \(c(x)\) be the function defined in Equation (1.5). Then, for any \({\lambda }\in \mathbb{R}\), the solution \(\left (\varphi ,\psi \right )\in \left (H^{2}(0,L)\cap H^{1}_{0}(0,L)\right )^{2}\) of the system

(4.26)

satisfies the following estimation

$$ \|\lambda \varphi \|_{L^{2}(0,L)}^{2}+\|\varphi _{x}\|_{L^{2}(0,L)}^{2}+ \|\lambda \psi \|_{L^{2}(0,L)}^{2}+\|\psi _{x}\|_{L^{2}(0,L)}^{2} \leq M\left (\| u\|_{L^{2}(0,L)}^{2}+\| y\|_{L^{2}(0,L)}^{2}\right ). $$
(4.27)

Proof

Following Theorem A.2, the exponential stability of System (A.1), proved in the Appendix, implies that the resolvent of the auxiliary operator \(\mathcal{A}_{a}\) defined by (A.2)-(A.3) is uniformly bounded on the imaginary axis, i.e. there exists \(M>0\) such that

$$ \sup _{{\lambda }\in \mathbb{R}}\|\left (i\lambda I-\mathcal{A}_{a} \right )^{-1}\|_{\mathcal{L}\left (\mathcal{H}_{a}\right )}\leq M< + \infty $$
(4.28)

where \(\mathcal{H}_{a}=\left (H_{0}^{1}(0,L)\times L^{2}(0,L)\right )^{2}\). Now, since \((u,y)\in H^{1}_{0}(0,L)\times H^{1}_{0}(0,L)\), the vector \((0,-u,0,-y)\) belongs to \(\mathcal{H}_{a}\), and from (4.28) there exists \((\varphi ,\eta ,\psi ,\xi )\in D(\mathcal{A}_{a})\) such that \(\left (i\lambda I-\mathcal{A}_{a}\right )(\varphi ,\eta ,\psi ,\xi )=(0,-u,0,-y)^{ \top }\), i.e.

$$\begin{aligned} i{\lambda }\varphi -\eta =&0, \end{aligned}$$
(4.29)
(4.30)
$$\begin{aligned} i{\lambda }\psi -\xi =&0, \end{aligned}$$
(4.31)
(4.32)

with

$$ \|(\varphi ,\eta ,\psi ,\xi )\|_{\mathcal{H}_{a}}\leq M\left (\|u\|_{L^{2}(0,L)}+ \|y\|_{L^{2}(0,L)}\right ). $$
(4.33)

From equations (4.29)-(4.33), we deduce that \((\varphi ,\psi )\) is a solution of (4.26) and we have

$$ \|\lambda \varphi \|_{L^{2}(0,L)}^{2}+\|\varphi _{x}\|_{L^{2}(0,L)}^{2}+ \|\lambda \psi \|_{L^{2}(0,L)}^{2}+\|\psi _{x}\|_{L^{2}(0,L)}^{2} \leq M\left (\| u\|_{L^{2}(0,L)}^{2}+\| y\|_{L^{2}(0,L)}^{2}\right ). $$

Then, we get our desired result. □

Remark 4.6

To the best of our knowledge, there is no reference for the proof of the exponential stability of System (A.1) when the coefficients of the damping and the coupling are both non-smooth. For this reason, we give the proof of the exponential stability of System (A.1) in Theorem A.2 (see Sect. A.1 in the Appendix).

Lemma 4.7

Let \(\varepsilon <\frac{\alpha _{3}-\alpha _{1}}{4}\). Then, the solution \((u,v,y,z)\in D(\mathcal{A})\) of (4.4)-(4.7) satisfies the following asymptotic behavior estimation

$$ \int _{0}^{L} \left |\lambda u\right |^{2}dx=o(1), $$
(4.34)

and

$$ \int _{0}^{L} \left |\lambda y\right |^{2}dx=o(1). $$
(4.35)

Proof

The proof of this Lemma is divided into two steps.

Step 1.

Multiplying equation (4.17) by \(\lambda ^{2}\bar{\varphi }\), integrating over \((0,L)\), and using Equation (4.27) together with the facts that \(u\) is uniformly bounded in \(L^{2}(0,L)\) and \(\|F\|_{\mathcal{H}}=\|(f_{1},g_{1},f_{2},g_{2})\|_{\mathcal{H}}=o(1)\), we get

$$ \int _{0}^{L} \left (\lambda ^{2}\bar{\varphi }+a\bar{\varphi }_{xx} \right )\lambda ^{2}udx-\int _{0}^{L}\lambda ^{2}b(x)v_{x} \bar{\varphi }_{x}dx-\int _{0}^{L} i\lambda^{3} c(x)y\bar{\varphi }dx=o( \lambda ^{-1}). $$
(4.36)

Using Equations (4.8) and (4.27), we get

$$ \int _{0}^{L}\lambda ^{2}b(x)v_{x}\bar{\varphi }_{x}dx=o(1). $$
(4.37)

Combining Equations (4.36) and (4.37), we obtain

$$ \int _{0}^{L} \left (\lambda ^{2}\bar{\varphi }+a\bar{\varphi }_{xx} \right )\lambda ^{2}udx-\int _{0}^{L} i\lambda ^{3} c(x)y \bar{\varphi }dx=o(1). $$
(4.38)

From System (4.26), we have

(4.39)

Substituting (4.39) in (4.38), we get

(4.40)

Using Remark 4.2, Lemma 4.3 and Equation (4.27), we obtain

(4.41)

Inserting Equation (4.41) in Equation (4.40), we get

$$ \int _{0}^{L} \left |\lambda u\right |^{2}dx-\int _{0}^{L} i\lambda ^{3} c(x)\bar{\psi }udx-\int _{0}^{L} i\lambda ^{3} c(x)y\bar{\varphi }dx=o(1). $$
(4.42)

Step 2.

Multiplying equation (4.18) by \(\lambda ^{2}\bar{\psi }\), integrating over \((0,L)\), and using Equation (4.27) together with the facts that \(y\) is uniformly bounded in \(L^{2}(0,L)\) and \(\|F\|_{\mathcal{H}}=\|(f_{1},g_{1},f_{2},g_{2})\|_{\mathcal{H}}=o(1)\), we get

$$ \int _{0}^{L}\left ({\lambda }^{2}\bar{\psi }+\bar{\psi }_{xx}\right ){ \lambda }^{2}ydx+\int _{0}^{L} i\lambda c(x)u\bar{\psi }dx=o(\lambda ^{-1}). $$
(4.43)

From System (4.26), we have

(4.44)

Substituting (4.44) in (4.43), we get

(4.45)

Using Remark 4.2, Lemma 4.4 and Equation (4.27), we obtain

(4.46)

Inserting Equation (4.46) in Equation (4.45), we get

$$ \int _{0}^{L} \left |\lambda y\right |^{2}dx+\int _{0}^{L} i\lambda ^{3} c(x)\bar{\varphi }ydx+\int _{0}^{L} i\lambda ^{3} c(x)u\bar{\psi }dx=o(1). $$
(4.47)

Finally, summing up equations (4.42) and (4.47), we get

$$ \int _{0}^{L} \left |\lambda u\right |^{2}dx=o(1)\quad \mbox{and} \quad \int _{0}^{L} \left |\lambda y\right |^{2}dx=o(1). $$
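Here we have used the fact that the coupling terms in (4.42) and (4.47) cancel pairwise, namely

$$ -\int _{0}^{L} i\lambda ^{3} c(x)\bar{\psi }u\,dx+\int _{0}^{L} i\lambda ^{3} c(x)u\bar{\psi }\,dx=0\quad \text{and}\quad -\int _{0}^{L} i\lambda ^{3} c(x)y\bar{\varphi }\,dx+\int _{0}^{L} i\lambda ^{3} c(x)\bar{\varphi }y\,dx=0. $$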

Hence, using (4.4) and (4.6),

$$ \int _{0}^{L} \left |v\right |^{2}dx=o(1) \quad \text{and} \quad \int _{0}^{L} \left |z\right |^{2}dx=o(1). $$
(4.48)

Then, the proof has been completed. □

Lemma 4.8

The solution \((u,v,y,z)\in D(\mathcal{A})\) of the system (4.4)-(4.7) satisfies the following asymptotic behavior estimations

$$ \int _{0}^{L} \left | u_{x}\right |^{2}dx=o(1)\quad \textit{and}\quad \int _{0}^{L} \left | y_{x}\right |^{2}dx=o(1). $$
(4.49)

Proof

Multiplying (4.17) by \(\bar{u}\), integrating over \((0,L)\), and using the fact that \(\|F\|_{\mathcal{H}}=\|(f_{1},g_{1},f_{2}, g_{2})\|_{\mathcal{H}}=o(1)\) and that \(u\) is uniformly bounded in \(L^{2}(0,L)\), we get

$$ \int _{0}^{L} \left |\lambda u\right |^{2}dx-\int _{0}^{L} a\left | u_{x} \right |^{2}dx-\int _{0}^{L} b(x)v_{x} \bar{u}_{x}dx-\int _{0}^{L} i \lambda c(x)y\bar{u}dx=o(\lambda ^{-2}). $$
(4.50)

Using equations (4.8) and (4.34), we get

$$ \int _{0}^{L} \left | u_{x}\right |^{2}dx=o(1). $$

Similarly, multiplying (4.18) by \(\bar{y}\) and integrating over \((0,L)\), we get

$$ \int _{0}^{L} \left | y_{x}\right |^{2}dx=o(1). $$

The proof has been completed. □

Proof of Theorem 4.1

From the results of Lemmas 4.7 and 4.8, we obtain

$$ \int _{0}^{L}\left (|v|^{2}+|z|^{2}+a\,|u_{x}|^{2}+ |y_{x}|^{2} \right )dx =o\left (1\right ). $$

Hence \(\|U\|_{\mathcal{H}}=o(1)\), which contradicts (4.2). Consequently, condition \({(\mathrm{H}2)}\) holds. This implies, from Theorem A.13, the energy decay estimation (4.1). The proof is thus complete. □

5 Conclusion

We have studied the stabilization of a system of locally coupled wave equations with only one internal localized Kelvin-Voigt damping acting through a non-smooth coefficient. We proved the strong stability of the system using the Arendt-Batty criterion. The lack of exponential stability has been proved in both cases: the case of global Kelvin-Voigt damping and the case of localized Kelvin-Voigt damping, taking into consideration that the coupling is global. In addition, when both the coupling and the damping are localized internally via non-smooth coefficients, we established a polynomial energy decay rate of type \(t^{-1}\). We conjecture that the energy decay rate \(t^{-1}\) is optimal. However, if the intersection between the supports of the damping and the coupling coefficients is empty, the decay rate of the system is unknown. This question is still an open problem.