1 Introduction

The exact controllability of wave equations has a history that stretches back to the late sixties. Early progress on the topic, mostly due to Russell and his collaborators, was slow, e.g. [8,9,10, 15, 40,41,42,43,44,45,46], but a steady development followed, reaching a culminating point in the nineties thanks to the introduction by Lions of the famous Hilbert uniqueness method (H.U.M.) [33,34,35] in the mid-eighties. H.U.M. reduces the controllability problem for a given distributed system to the derivation of an inverse, or observability, estimate for the corresponding adjoint system. Within that period of development, several methods were proposed in the literature to tackle exact controllability problems for wave equations; the prominent ones are listed below, and the interested reader is referred to the introduction of [52].

Those methods include:

  • The multipliers method: it was introduced in control theory by Russell [45] and was later exemplified in many works, e.g. [1, 2, 8,9,10, 17, 18, 20,21,22, 24,25,26, 33, 35, 38, 55, 59, 60]. This technique is very flexible as long as the system is conservative, free of lower order terms, and involves only constant coefficients. It becomes cumbersome when either the coefficients are nonconstant or there are general lower order terms in the system; in that case, sharp constraints must be imposed on the coefficients and the lower order terms.

  • The microlocal analysis method: this method was developed by Bardos-Lebeau-Rauch [4] to tackle control issues for hyperbolic equations. They gave a general sufficient and almost necessary geometric condition for exact controllability:

    (GCC): The set \(\omega \) is an admissible region for exact controllability in time T if every ray of geometric optics enters \(\omega \times (0,T)\) in a time less than T.

    This method describes the optimal control region for exact controllability of hyperbolic equations with time-independent \(C^\infty \) coefficients in general \(C^\infty \) domains. The \(C^\infty \) requirements in [4] for both the domain and the coefficients were later relaxed in [7] to \(C^3\) for the domain and \(C^2\) for the coefficients.

  • The Carleman estimates method: The Carleman estimate technique for global controllability was initiated by Fursikov and Imanuvilov [14] in the framework of parabolic equations. Its counterpart for hyperbolic equations was devised by Zhang [57, 58] for the ordinary wave equation. Subsequently, Fu-Yong-Zhang [13] and Duyckaerts-Zhang-Zuazua [12] adapted it to general hyperbolic equations. It is fair to state here that the first precise Carleman estimate for the wave equation was established by Ruiz [39] in the context of unique continuation for weak solutions of the wave equation with a potential. It is worth noting that the introduction of precise Carleman estimates made it possible to study control problems not only for wave equations involving general lower order terms, but also for semilinear wave equations, including the case of nonlinearities with superlinear growth, e.g. [5, 13, 19, 31, 48, 50, 57]. Other Carleman estimates for hyperbolic equations were developed by Baudouin-de Buhan-Ervedoza [5], Imanuvilov [19], Lasiecka-Triggiani [26], Lasiecka-Triggiani-Zhang [31], Triggiani-Yao [53], etc.

  • The Riemannian geometry approach: This method was developed by Yao [56] for hyperbolic equations with \(C^\infty \) coefficients; the smoothness requirement was subsequently relaxed to \(C^1\) coefficients, allowing energy-level first order terms, by Lasiecka-Triggiani-Yao [29, 30].

Though much progress has been made on the controllability of single wave-like equations, the same cannot be said for systems of wave equations. For such systems, two notions of controllability have gained traction: indirect controllability and simultaneous controllability. In indirect controllability, one wants to control a multi-component system using fewer controls than the number of differential equations involved, while simultaneous controllability involves multi-component systems where the same control mechanism appears in each equation. Several contributions exist in the literature on the indirect controllability of wave equations, e.g. [1, 3, 11, 36, 48, 50].

The notion of simultaneous controllability was introduced in the literature by Russell in his study of the boundary controllability of Maxwell's equations in rectangular domains [47]. To solve that problem, he transformed it into a controllability problem for a system of two uncoupled wave equations, one with Dirichlet boundary conditions and the other with Neumann boundary conditions. That simultaneous controllability result was later generalized by Lions [33, Chapter 5] to a larger class of domains, and to uncoupled plate equations. As for the simultaneous internal controllability of uncoupled wave equations (two or more equations), Haraux initiated that work in [16], where he established some unique continuation results leading to approximate controllability in all space dimensions. He also proved an exact controllability result in one space dimension where the control region was an arbitrary open subinterval of the interval under consideration, and another one in higher space dimensions where the control region was the whole domain under consideration. The one-dimensional controllability result of Haraux was later generalized to all space dimensions, under the Bardos-Lebeau-Rauch geometric control condition [4, 7], by the second author of the present contribution [49, 52]. A simultaneous controllability result for uncoupled wave and plate equations was derived in [51]. One may also mention the work by Zuazua [62] on a partial observability inequality for uncoupled dynamical systems; there, the author considers two uncoupled dynamical systems having commuting principal operators, viewing the second system as a perturbation of the first one, and proves a simultaneous controllability result provided that the final states of the adjoint system are suitably related (see [62, p. 25]). The authors of the present work are unaware of simultaneous boundary controllability results for uncoupled wave equations other than those of Russell and Lions mentioned above. The controllability problem that we shall tackle seems new even in the conservative case. This explains why we start by providing the proof of the observability estimate in that simpler situation, where the multiplier technique can be used, before tackling the more technically involved nonconservative case. Besides the obvious pedagogical benefit, doing so has two extra benefits:

  • it enables us to cover the analogous problem for the heat equations,

  • it gives us some insight into tackling the nonconservative case of interest to us.

We now formulate our controllability problem. First, we introduce some useful notation.

Let \(T>0\). Set \(V=\{u\in H^1(0,1);u(0)=0\},\) and denote by \(V'\) the topological dual of V. Set \(Q=(0,1)\times (0,T)\), and let a be in \(L^\infty (Q)\).

Consider the following boundary controllability problem:

Given \((y^0,y^1)\) in \(L^2(0,1)\times H^{-1}(0,1)\) and \((z^0,z^1)\) in \(L^2(0,1)\times V'\), find a control function h in \(L^2(0,T)\) such that the solution couple (y, z) of the following system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}y_{tt} - y_{xx} +a(x,t)y= 0 \text { in }Q, \\ &{}z_{tt} - z_{xx} +a(x,t)z = 0 \text { in }Q, \\ &{}y(0,t)=h(t),\, z(0,t)=h(t),\quad y(1,t)=0,\, z_x(1,t)=0 \text { in }(0,T), \\ &{}y(.,0)=y^0,\, y_t(.,0)=y^1,\quad z(.,0)=z^0,\, z_t(.,0)=z^1, \end{array}\right. \end{aligned}$$
(1.1)

satisfies

$$\begin{aligned} y(.,T)=0,\,y_t(.,T)=0,\quad z(.,T)=0,\,z_t(.,T)=0. \end{aligned}$$
(1.2)
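
For readers who wish to experiment with System (1.1) numerically, the following sketch (a minimal illustration only, not part of the analysis) integrates the two wave equations by an explicit leapfrog scheme for a prescribed boundary input h; the grid sizes, the sample potential a, the sample data and the sample input h are assumptions made purely for illustration, and the chosen h is of course not a control realizing (1.2).

```python
# Minimal illustrative solver for System (1.1) with a prescribed boundary input h.
# All concrete choices below (grid, potential a, data, h) are illustrative assumptions.
import numpy as np

N, T = 200, 5.0                       # spatial cells, final time (T > 4)
dx = 1.0 / N
dt = 0.9 * dx                         # CFL-stable time step
x = np.linspace(0.0, 1.0, N + 1)
steps = int(T / dt)

a = lambda x, t: 1.0 + 0.5 * np.sin(np.pi * x) * np.cos(t)   # sample potential
h = lambda t: np.sin(2 * np.pi * t) * np.exp(-t)             # sample boundary input

def step(w, w_old, t, dirichlet_right):
    """One leapfrog step for w_tt - w_xx + a(x,t) w = 0 with w(0,t) = h(t)."""
    w_new = np.empty_like(w)
    lap = w[2:] - 2.0 * w[1:-1] + w[:-2]
    w_new[1:-1] = (2.0 * w[1:-1] - w_old[1:-1]
                   + (dt / dx) ** 2 * lap
                   - dt ** 2 * a(x[1:-1], t) * w[1:-1])
    w_new[0] = h(t + dt)                               # controlled left endpoint
    w_new[-1] = 0.0 if dirichlet_right else w_new[-2]  # y(1,t)=0 or z_x(1,t)=0
    return w_new

# sample data: y^0 = sin(pi x), z^0 = x - x^2/2 (compatible with the BCs), y^1 = z^1 = 0
y = np.sin(np.pi * x);        y_old = y.copy()
z = x - 0.5 * x ** 2;         z_old = z.copy()

for n in range(steps):
    t = n * dt
    y, y_old = step(y, y_old, t, True), y
    z, z_old = step(z, z_old, t, False), z

print("max |y(.,T)|, max |z(.,T)| for this (non-controlling) h:",
      np.abs(y).max(), np.abs(z).max())
```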

Remark 1.1

Some comments are in order.

  • Notice that, for all \((y^0,y^1)\) in \(L^2(0,1)\times H^{-1}(0,1)\), \((z^0,z^1)\) in \(L^2(0,1)\times V'\) and h in \(L^2(0,T)\), System (1.1) is well-posed; this can be established using the method of transposition as described in Lions-Magenes [32, Vol. 1]. More on this in Section 4.

  • Returning to the controllability problem, we point out that if such a control exists, we say that System (1.1) is exactly controllable in time T. Note that given the time reversibility of the wave equation, null controllability and exact controllability are equivalent; this means that if a control exists that steers the system to equilibrium in time T, then there exists a control that can steer it to any desired state in time T.

  • Given that the speed of propagation is finite, the controllability time must be large enough; in the present problem, the controllability time T is greater than 4, which is twice the time needed to control a single wave equation with unit propagation speed from one endpoint. This controllability time is in agreement with the one for the uncoupled system of two wave equations discussed in [33, Chapter 5].

  • The fact that the control is at the left endpoint is not important; provided we switch the boundary conditions, the boundary control may be placed at the right endpoint. However, notice that if we use the Dirichlet boundary conditions at the left endpoint, and place the control at the right endpoint, then this is similar to what is done in Lions’ book in the conservative case (\(a\equiv 0\) in Q) [33, Chapter 5]. This latter controllability problem in the nonconservative case requires a different approach from the one that we will develop to solve the problem at hand, and will be dealt with in a subsequent contribution.

  • An attentive reader should have noticed that we are using the same potential function in both equations. This is due to the fact that we are unable to establish Carleman estimates for the adjoint system in the case of two different potential functions. In the latter situation, there are lower order terms that appear on the right side and that cannot be absorbed by the left side, given the structure of the weight functions. More on this later on.

Remark 1.2

One may fairly wonder why we choose to use different boundary conditions at the right endpoint. In other words, can we solve the simultaneous control problem for System (1.1) if both y and z satisfy homogeneous Dirichlet boundary conditions at \(x=1\), or both satisfy homogeneous Neumann boundary conditions at \(x=1\)? The answer is negative. To explain it, we find it useful to expound on our method for solving this simultaneous controllability problem. We implicitly transform it into an indirect controllability problem for the new variables \(S=y+z\) and \(D=y-z\); we say implicitly because we perform this transformation not on the primal problem, but on the dual one. Now, it is easy to check that the S-system is controlled while the D-system is uncontrolled. When the boundary conditions at the right endpoint are distinct, the (S, D)-system is coupled at the right endpoint, so the effect of the control is transmitted to the D-system through the coupling; this enables us to control the (S, D)-system and finally the initial system (1.1). If the boundary conditions at the right endpoint were the same, the (S, D)-system would be uncoupled; in particular, the D-system would be conservative, preventing us from controlling either the (S, D)-system or the original (y, z)-system.
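
For instance, with the boundary conditions of (1.1), the coupling of the new variables at the right endpoint takes the explicit form

$$\begin{aligned} S(1,t)+D(1,t)=2y(1,t)=0,\qquad S_x(1,t)-D_x(1,t)=2z_x(1,t)=0\quad \text { in }(0,T), \end{aligned}$$

so the two components exchange information only through these relations; if y and z satisfied the same boundary condition at \(x=1\), each of S and D would instead satisfy its own homogeneous boundary condition there, and the two equations would decouple.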

We can now state our first result:

Theorem 1.3

Let \(T>4\). Let \((y^0,y^1)\) be in \(L^2(0,1)\times H^{-1}(0,1)\) and \((z^0,z^1)\) be in \(L^2(0,1)\times V'\). Let a be in \(L^\infty (Q)\). There exists a control function h in \(L^2(0,T)\) such that the solution couple (y, z) of System (1.1) satisfies (1.2).

Furthermore, there exists a positive constant \(C_0\), depending only on T and \(||a||_{L^\infty (Q)}\), such that

$$\begin{aligned} ||h||_{L^2(0,T)}\le C_0\left\{ ||y^0||_{L^2(0,1)}+||y^1||_{H^{-1}(0,1)}+||z^0||_{L^2(0,1)}+||z^1||_{V'}\right\} . \end{aligned}$$
(1.3)

To prove this theorem, we shall rely on the Hilbert uniqueness method (H.U.M.) of Lions [33], which reduces solving the controllability problem to establishing an inverse inequality for the adjoint system corresponding to System (1.1). The rest of the paper is organized as follows: In Sect. 2, we introduce the adjoint system and prove the observability estimate in the conservative case. Section 3 is devoted to proving Carleman estimates for the nonconservative adjoint system; this is where the main contribution of the paper lies. In Sect. 4, we discuss the well-posedness of a system like (1.1), and prove Theorem 1.3. Section 5 deals with some final remarks and open problems.

2 The Adjoint System and an Observability Inequality

Consider the following adjoint system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}u_{tt} - u_{xx} +a(x,t)u= 0 \text { in }Q, \\ &{}v_{tt} - v_{xx} +a(x,t)v = 0 \text { in }Q, \\ &{}u(0,t)=0= v(0,t),\quad u(1,t)=0,\, v_x(1,t)=0 \text { in }(0,T), \\ &{}u(.,T)=u^0,\, u_t(.,T)=u^1,\quad v(.,T)=v^0,\, v_t(.,T)=v^1, \end{array}\right. \end{aligned}$$
(2.1)

where

  • \(u^0\in H_0^1(0,1)\), \(v^0\in V\),

  • \(u^1\), \(v^1\in L^2(0,1)\).

It is well known that System (2.1) is well posed, namely, for every \(T>0\) and all \((u^0,u^1)\) in \( H_0^1(0,1)\times L^2(0,1)\) and \((v^0,v^1)\) in \( V\times L^2(0,1)\), this system has a unique solution couple (u, v) with

$$\begin{aligned} (u,v)\in C([0,T];H_0^1(0,1)\times V)\cap C^1([0,T];L^2(0,1)\times L^2(0,1)). \end{aligned}$$

Introduce the energy functional E defined by

$$\begin{aligned} E(t)=\frac{1}{2}\int _0^1\left\{ |u_t(x,t)|^2+|u_x(x,t)|^2+|v_t(x,t)|^2+|v_x(x,t)|^2\right\} \,dx,\quad \forall t\in [0,T]. \end{aligned}$$

For the conservative adjoint system (\(a\equiv 0\) in Q), we have the following observability result:

Proposition 2.1

Let \(T>4\). Assume that \(a\equiv 0\) in Q. There exists a positive constant \(C_0\), depending only on T, such that for all \((u^0,u^1)\) in \( H_0^1(0,1)\times L^2(0,1)\) and \((v^0,v^1)\) in \( V\times L^2(0,1)\), the solution couple (u, v) of System (2.1) satisfies

$$\begin{aligned} E(0)=E(T)\le C_0\int _0^T|u_x(0,t)+v_x(0,t)|^2\,dt. \end{aligned}$$
(2.2)

To prove this result, as well as the Carleman estimate in the nonconservative case, an important step, in the authors' opinion, is to introduce appropriate new variables. For this purpose, let the couple (u, v) be any solution of System (2.1), and set

$$\begin{aligned} p=u+v,\quad q=v-u. \end{aligned}$$

One readily checks that the functions p and q satisfy the following system involving Kirchhoff boundary conditions at the right endpoint

$$\begin{aligned} \left\{ \begin{array}{rl} &{}p_{tt} - p_{xx} +a(x,t)p= 0 \text { in }Q,\\ &{}q_{tt} - q_{xx} +a(x,t)q = 0 \text { in }Q,\\ &{}p(0,t)=0= q(0,t),\\ &{} p(1,t)=v(1,t)=q(1,t),\, p_x(1,t)=u_x(1,t)=-q_x(1,t),\ \text {i.e., }\ p_x(1,t)+q_x(1,t)=0 \text { in }(0,T),\\ &{}p(.,T)=u^0+v^0,\, p_t(.,T)=v^1+u^1,\quad q(.,T)=v^0-u^0,\, q_t(.,T)=v^1-u^1, \end{array}\right. \end{aligned}$$
(2.3)

Proof of Proposition 2.1

One proves the proposition for data

$$\begin{aligned}&(u^0,u^1) \in \left( H^2(0,1)\cap H_0^1(0,1)\right) \times H_0^1(0,1)\text { and }\nonumber \\&(v^0,v^1) \in \left( H^2(0,1)\cap V\right) \times V, \end{aligned}$$
(2.4)

and one derives the proposition by using a density argument. Under the latter assumption on the data, it is known that the corresponding solution couple (u, v) of (2.1) with \(a\equiv 0\) in Q satisfies

$$\begin{aligned} (u,v)\in C([0,T];(H^2(0,1)\cap H_0^1(0,1))\times ( H^2(0,1)\cap V))\cap C^1([0,T];H_0^1(0,1)\times V), \end{aligned}$$

so that all the computations that follow are justified.

Let the data satisfy (2.4), and let (p, q) be the corresponding solution couple for System (2.3).

Multiply the first equation in (2.3) by \(2(x-1)p_x\) and the second equation by \(2xq_x\), and integrate by parts to derive

$$\begin{aligned} 2\int _0^1(x-1)p_tp_x\,dx\Bigg ]_0^T-\int _0^T\int _0^1(x-1)(p_t^2+p_x^2)_x\,dxdt=0, \end{aligned}$$
(2.5)

and

$$\begin{aligned} 2\int _0^1xq_tq_x\,dx\Bigg ]_0^T-\int _0^T\int _0^1x(q_t^2+q_x^2)_x\,dxdt=0. \end{aligned}$$
(2.6)

Integrating by parts once more yields

$$\begin{aligned} 2\int _0^1(x-1)p_tp_x\,dx\Bigg ]_0^T+\int _0^T\int _0^1(p_t^2+p_x^2)\,dxdt=\int _0^Tp_x(0,t)^2\,dt, \end{aligned}$$
(2.7)

and

$$\begin{aligned} 2\int _0^1xq_tq_x\,dx\Bigg ]_0^T+\int _0^T\int _0^1(q_t^2+q_x^2)\,dxdt&=\int _0^T(q_t(1,t)^2+q_x(1,t)^2)\,dt\nonumber \\&= \int _0^T(p_t(1,t)^2+p_x(1,t)^2)\,dt, \end{aligned}$$
(2.8)

where in the last equality, we have used the boundary conditions at the right endpoint in (2.3).
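
Each of the identities (2.5)–(2.8), as well as (2.9) below, follows by integrating over Q the pointwise multiplier identity \(2\omega p_x\big (p_{tt}-p_{xx}\big )=\partial _t\big (2\omega p_tp_x\big )-\partial _x\big (\omega (p_t^2+p_x^2)\big )+\omega '(p_t^2+p_x^2)\), applied with the weights \(\omega (x)=x-1\) and \(\omega (x)=x\). The following sketch (an optional symbolic check, not part of the proof) verifies this identity with sympy.

```python
# Optional symbolic check (not part of the proof) of the pointwise multiplier identity
#   2 w p_x (p_tt - p_xx) = d/dt(2 w p_t p_x) - d/dx(w (p_t^2 + p_x^2)) + w'(p_t^2 + p_x^2)
# for an arbitrary smooth p(x,t) and an arbitrary weight w(x) (here w = x-1 or w = x).
import sympy as sp

x, t = sp.symbols("x t")
p = sp.Function("p")(x, t)
w = sp.Function("w")(x)

lhs = 2 * w * sp.diff(p, x) * (sp.diff(p, t, 2) - sp.diff(p, x, 2))
rhs = (sp.diff(2 * w * sp.diff(p, t) * sp.diff(p, x), t)
       - sp.diff(w * (sp.diff(p, t) ** 2 + sp.diff(p, x) ** 2), x)
       + sp.diff(w, x) * (sp.diff(p, t) ** 2 + sp.diff(p, x) ** 2))

print(sp.simplify(lhs - rhs))   # prints 0
```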

Now, we need to eliminate the boundary integral at \(x=1\). To this end, multiply the first equation in (2.3) by \(2xp_x\) and proceed as above to derive

$$\begin{aligned} 2\int _0^1xp_tp_x\,dx\Bigg ]_0^T+\int _0^T\int _0^1(p_t^2+p_x^2)\,dxdt= \int _0^T(p_t(1,t)^2+p_x(1,t)^2)\,dt. \end{aligned}$$
(2.9)

The combination of (2.8) and (2.9) yields

$$\begin{aligned} 2\int _0^1xq_tq_x\,dx\Bigg ]_0^T+\int _0^T\int _0^1(q_t^2+q_x^2)\,dxdt&=2\int _0^1xp_tp_x\,dx\Bigg ]_0^T\nonumber \\&+\int _0^T\int _0^1(p_t^2+p_x^2)\,dxdt. \end{aligned}$$
(2.10)

Multiplying (2.7) by 2, and adding the resulting equation to (2.10), we find

$$\begin{aligned}&2\int _0^1((x-2)p_tp_x+xq_tq_x)\,dx\Bigg ]_0^T+\int _0^T\int _0^1(p_t^2+q_t^2+p_x^2+q_x^2)\,dxdt\nonumber \\&\quad =2\int _0^Tp_x(0,t)^2\,dt, \end{aligned}$$
(2.11)

By the Cauchy-Schwarz inequality, it follows, for every t in [0, T]:

$$\begin{aligned}&2\int _0^1((x-2)p_tp_x+xq_tq_x)(x,t)\,dx\nonumber \\&\quad \le \int _0^1\big ((2-x)(p_t^2+p_x^2)+x(q_t^2+q_x^2)\big )(x,t)\,dx\nonumber \\&\quad \le 2\int _0^1(p_t^2+q_t^2+p_x^2+q_x^2)(x,t)\,dx\\&\quad \le 4\int _0^1(u_t^2+v_t^2+u_x^2+v_x^2)(x,t)\,dx\le 8E(t)=8E(T).\nonumber \end{aligned}$$
(2.12)

One then readily derives

$$\begin{aligned} \left| 2\int _0^1((x-2)p_tp_x+xq_tq_x)\,dx\Bigg ]_0^T\right| \le 8(E(T)+E(0))=16E(T). \end{aligned}$$
(2.13)

Gathering (2.11) and (2.13), and using the fact that

$$\begin{aligned} \int _0^T\int _0^1(p_t^2+q_t^2+p_x^2+q_x^2)\,dxdt=2\int _0^T\int _0^1(u_t^2+v_t^2+u_x^2+v_x^2)\,dxdt=4TE(T), \end{aligned}$$

it follows

$$\begin{aligned} 4(T-4)E(T)\le 2\int _0^Tp_x(0,t)^2\,dt, \end{aligned}$$

hence the claimed observability estimate, for every \(T>4\). \(\square \)
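
The observability estimate (2.2) can also be probed numerically. The sketch below (an illustration under stated sample assumptions, not a substitute for the proof) integrates the conservative adjoint system (2.1) with \(a\equiv 0\) by a leapfrog scheme and compares E(T) with the boundary term on the right side of (2.2); by time reversibility we may prescribe the sample data at \(t=0\) instead of \(t=T\), and all discretization parameters and data below are assumptions chosen for illustration.

```python
# Numerical illustration (not a proof) of the observability estimate (2.2) for
# the conservative adjoint system (2.1) with a == 0.  Sample grid and data only.
import numpy as np

N, T = 400, 4.5                      # spatial cells, observation time (T > 4)
dx = 1.0 / N
dt = 0.5 * dx
x = np.linspace(0.0, 1.0, N + 1)
steps = int(round(T / dt))

def leapfrog(w, w_old, neumann_right):
    w_new = np.empty_like(w)
    w_new[1:-1] = (2.0 * w[1:-1] - w_old[1:-1]
                   + (dt / dx) ** 2 * (w[2:] - 2.0 * w[1:-1] + w[:-2]))
    w_new[0] = 0.0                                    # u(0,t) = v(0,t) = 0
    w_new[-1] = w_new[-2] if neumann_right else 0.0   # v_x(1,t) = 0 or u(1,t) = 0
    return w_new

# sample data: u^0 = sin(2 pi x) in H_0^1(0,1), v^0 = sin(pi x / 2) in V, u^1 = v^1 = 0
u = np.sin(2.0 * np.pi * x);   u_old = u.copy()
v = np.sin(0.5 * np.pi * x);   v_old = v.copy()

boundary = 0.0
for _ in range(steps):
    u, u_old = leapfrog(u, u_old, False), u
    v, v_old = leapfrog(v, v_old, True), v
    boundary += dt * ((u[1] - u[0]) / dx + (v[1] - v[0]) / dx) ** 2

ut, vt = (u - u_old) / dt, (v - v_old) / dt
E_T = 0.5 * dx * np.sum(ut[:-1] ** 2 + vt[:-1] ** 2
                        + (np.diff(u) / dx) ** 2 + (np.diff(v) / dx) ** 2)
print("E(T) =", E_T, " boundary term =", boundary, " ratio =", E_T / boundary)
```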

Remark 2.2

Thanks to Lions' H.U.M., the observability estimate of Proposition 2.1 leads to the existence of a boundary control h in \(L^2(0,T)\) for System (1.1) with \(a\equiv 0\).

Using the transmutation method of Miller [37], one derives that there exists a control g in \(L^2(0,T)\) for the following parabolic system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}\chi _{t} - \chi _{xx}= 0 \text { in }Q, \\ &{}\zeta _{t} - \zeta _{xx} = 0 \text { in }Q, \\ &{}\chi (0,t)=g(t),\, \zeta (0,t)=g(t) \text { in }(0,T), \\ &{} \chi (1,t)=0,\, \zeta _x(1,t)=0 \text { in }(0,T), \\ &{}\chi (.,0)=\chi ^0\in L^2(0,1),\quad \zeta (.,0)=\zeta ^0\in L^2(0,1), \end{array}\right. \end{aligned}$$
(2.14)

such that \(\chi (.,T)=0\) and \(\zeta (.,T)=0\).

Indeed, let \(L>4\), and introduce the function \(\rho \), a solution of the one-dimensional heat equation

$$\begin{aligned} \left\{ \begin{array}{rl} &{} \rho _{t} - \rho _{ss}= 0 \text { in }{\mathbb R}\times (0,\infty ), \\ &{}\rho (.,0)=\delta ,\, \rho (.,T)=0, \end{array}\right. \end{aligned}$$
(2.15)

where \(\delta \) denotes the Dirac mass at \(s = 0\), and \(\rho (s, t) = 0 \text { in } {\mathbb R}^2 {\setminus } [(-L,L) \times (0, T)]\).

Now, since \(L>4\), there exists a control h in \(L^2(0,L)\) such that the solution couple (y, z) of the following system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}y_{ss} - y_{xx} = 0 \text { in }Q_L=(0,1)\times (0,L), \\ &{}z_{ss} - z_{xx} = 0 \text { in }Q_L, \\ &{}y(0,s)=h(s),\, z(0,s)=h(s)\text { in }(0,L), \\ &{} y(1,s)=0,\, z_x(1,s)=0 \text { in }(0,L), \\ &{}y(.,0)=\chi ^0,\, y_s(.,0)=0,\quad z(.,0)=\zeta ^0,\, z_s(.,0)=0, \end{array}\right. \end{aligned}$$
(2.16)

satisfies

$$\begin{aligned} y(.,L)=0,\,y_s(.,L)=0,\quad z(.,L)=0,\,z_s(.,L)=0. \end{aligned}$$
(2.17)

Define the extension \(\ell \) of y by reflection about \(s=0\), so that \(\ell (x,s)=y(x,s)=\ell (x,-s)\) for all (x, s) in \((0,1)\times (0,L)\). Similarly, define the extension m of z. Set

$$\begin{aligned} \chi (x,t)=\int _{\mathbb R}\rho (s,t)\ell (x,s)\,ds,\quad \zeta (x,t)=\int _{\mathbb R}\rho (s,t)m (x,s)\,ds, \end{aligned}$$

and

$$\begin{aligned} g(t)=\int _{\mathbb R}\rho (s,t)h(s)\,ds. \end{aligned}$$

One readily checks that the couple \((\chi ,\zeta )\) solves (2.14) and satisfies \(\chi (.,T)=0\) and \(\zeta (.,T)=0\), as claimed above.

3 Carleman Estimates

In this section, we state our main result, a Carleman estimate for System (2.1); to prove it, we will need two Carleman estimates for System (2.3). The first Carleman estimate for System (2.3) is established under the condition that all initial and final displacements vanish. The second one deals with the case where this null-displacement constraint is dropped.

Carleman estimates always depend on appropriate weight functions; we now introduce those to be used in our Carleman estimates.

3.1 Weight Functions and Statement of the Main Result

For every \(j=1,2\), set

$$\begin{aligned} \psi _j(x,t)=r_j(x)-\mu \left( t-\frac{T}{2}\right) ^2 +K \end{aligned}$$

where \(\mu >0\) and \(K>0\) are constants. The constant K is chosen large enough so that \(\psi _j(x,t)\geqslant 1\) for all \((x,t)\in [0,1]\times [0,T]\). The functions \(r_j\) (\(j=1,~2\)) are quadratic polynomials satisfying:

$$\begin{aligned}&r_1(1)=r_2(1),\quad r'_1(1)=-r'_2(1),\quad \quad r''_1(1)=r''_2(1), \end{aligned}$$
(3.1)
$$\begin{aligned}&r'_1(x)<0\text { and } r'_2(x)>0\text { for all }x\in [0,1]. \end{aligned}$$
(3.2)

A basic class of functions \(r_1\) and \(r_2\) satisfying all those requirements is given by

$$\begin{aligned} r_1(x)=mx^2-(4m+n)x+4m+2n\quad \text { and }\quad r_2(x)=mx^2+nx,\quad \forall x\in [0,1], \end{aligned}$$
(3.3)

where \(m>0\) and \(n>0\) are arbitrary constants.

These two functions may be used in the proof of our first Carleman estimate, given in Subsect. 3.2, which is valid for all \(T>0\). Notice that this relaxation on the time is possible because the null-displacement constraint is enforced at the initial and final times. When that constraint is dropped, the time T must be large enough, and a refined choice of the functions \(r_1\) and \(r_2\) will be used.

An attentive reader may wonder why we introduce these new weight functions instead of using the standard one \((x-1)^2-\mu \left( t-\dfrac{T}{2}\right) ^2\). To answer this question, we recall the conservative case. There, we relied not only on the usual multiplier \((x-1)\), but also on the multiplier x. This was already an indication that, for the Carleman estimate, the standard weight might not work for both components p and q of the modified adjoint system (3.7). When proving our first Carleman estimate below, we encounter unwanted boundary terms that must be discarded. Using two functions \(r_1\) and \(r_2\) satisfying the constraints (3.1) and (3.2) enables us to achieve that goal (see the treatment of \(J_2'\) to \(J_{10}'\) in Subsect. 3.2). Notice that we may pick fixed values for both constants m and n, for instance \(m=1=n\), but we want to emphasize that there is an infinite family of functions meeting our constraints (3.1) and (3.2). In particular, having such a family at hand enables us to select the constants m and n appropriate to our needs (see Sect. 3.3 about our improved Carleman estimate).
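
The requirements (3.1)–(3.2), and the possibility of choosing K so that \(\psi _j\geqslant 1\) on \([0,1]\times [0,T]\), are elementary to verify for the family (3.3). The short sketch below (an optional numerical check; the values of m, n, \(\mu \) and T are sample assumptions, the choice \(\mu <m\) anticipating a constraint used later in Subsect. 3.2) confirms them on a grid.

```python
# Optional check of (3.1)-(3.2) and of psi_j >= 1 for the family (3.3).
# m, n, mu, T below are sample assumptions; any m > 0, n > 0 satisfies (3.1)-(3.2).
import numpy as np

m, n, mu, T = 1.0, 1.0, 0.5, 4.5

r1,  r2  = (lambda x: m*x**2 - (4*m + n)*x + 4*m + 2*n), (lambda x: m*x**2 + n*x)
dr1, dr2 = (lambda x: 2*m*x - (4*m + n)),                (lambda x: 2*m*x + n)

# (3.1): equal values and opposite slopes at x = 1; r1'' = r2'' = 2m automatically.
assert np.isclose(r1(1.0), r2(1.0)) and np.isclose(dr1(1.0), -dr2(1.0))

# (3.2): r1' < 0 and r2' > 0 on [0,1].
xs = np.linspace(0.0, 1.0, 1001)
assert (dr1(xs) < 0).all() and (dr2(xs) > 0).all()

# K chosen so that psi_j(x,t) = r_j(x) - mu*(t - T/2)^2 + K >= 1 on the closed cylinder.
K = 1.0 + mu * (T / 2) ** 2 - min(r1(xs).min(), r2(xs).min())
ts = np.linspace(0.0, T, 1001)
X, Tt = np.meshgrid(xs, ts)
for r in (r1, r2):
    assert (r(X) - mu * (Tt - T / 2) ** 2 + K).min() >= 1.0 - 1e-9
print("requirements (3.1)-(3.2) and psi_j >= 1 verified; K =", K)
```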

Let \(\lambda >0\). Define for every \(j=1,2\),

$$\begin{aligned} \varphi _j(x,t)=e^{\lambda \psi _j(x,t)}. \end{aligned}$$

Let a be in \(L^\infty (Q)\). Set \({\mathcal {E}}_a=\{(u,v)\in \left( H^1({Q})\right) ^2;u_{tt}-u_{xx}+au=0,\,v_{tt}-v_{xx}+av=0,\,u(0,t)=v(0,t)=0,\, u(1,t)=0\, \text { and }\, v_x(1,t)=0\text { in }(0,T)\}\). Notice that one can readily check, using a density argument and the multipliers technique, that for each \(T>0\) and each couple (u, v) in \({\mathcal {E}}_a\), the traces \(u_x(1,.)\), \(u_x(0,.)\) and \(v_x(0,.)\) all lie in \(L^2(0,T)\).

Let \(r\in C^2([0,T])\) be a suitable cutoff function. Our main result in this work is the following

Theorem 3.1

Let \(T>4\). Assume that a lies in \(L^\infty (Q)\).

There exist two constants \(C>0\) and \(s_1\geqslant 1\), such that for every \(s\geqslant s_1\), any couple (u, v) in \({\mathcal {E}}_a\) satisfies the Carleman estimate

$$\begin{aligned}&s\int _Q \big (u_t^2+u_x^2+v_t^2+v_x^2\big )e^{2s\varphi _2}dxdt +s^3\int _Q \big (u^2+v^2\big )e^{2s\varphi _2} dxdt\nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \\&\leqslant Cs\int _0^T\varphi _1(0,t)\big (u_x(0,t)+v_x(0,t)\big )^2 e^{2s\varphi _1(0,t)} dt. \nonumber \end{aligned}$$
(3.4)

Remark 3.2

The following unique continuation result readily follows from Theorem 3.1:

$$\begin{aligned} \forall (u,v)\in {{\mathcal {E}}}_a,\quad u_x(0,t)+v_x(0,t)=0,\text { a.e. }t\in (0,T)\Longrightarrow u\equiv 0,~v\equiv 0\text { in }Q. \end{aligned}$$

As our proof of the Carleman estimate in the following section will show, provided one switches \(\varphi _1\) and \(\varphi _2\) in the definition of \(p_1\) and \(p_2\) in the proof of Proposition 3.4, one can obtain the following result:

Theorem 3.3

Let \(T>4\). Assume that a lies in \(L^\infty (Q)\).

There exist two constants \(C>0\) and \(s_1\geqslant 1\), such that for every \(s\geqslant s_1\), any couple (u, v) in \({\mathcal {E}}_a\) satisfies the Carleman estimate

$$\begin{aligned}&s\int _Q \big (u_t^2+u_x^2+v_t^2+v_x^2\big )e^{2s\varphi _2}dxdt +s^3\int _Q \big (u^2+v^2\big )e^{2s\varphi _2} dxdt\nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)+u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \\&\leqslant Cs\int _0^T\varphi _1(0,t)\big (u_x(0,t)-v_x(0,t)\big )^2 e^{2s\varphi _1(0,t)} dt. \nonumber \end{aligned}$$
(3.5)

3.2 A Global Carleman Inequality

Set \({\mathcal {S}}=\{(u,v)\in \left( H^1({Q})\right) ^2; u_{tt}-u_{xx}\in L^2(Q),~v_{tt}-v_{xx}\in L^2(Q),\,u(0,t)=v(0,t)=0,\, u(1,t)=0\, \text { and }\, v_x(1,t)=0\text { in }(0,T),\,u(x,0)=u(x,T)=v(x,0)=v(x,T)=0\text { in }(0,1)\}\).

Proposition 3.4

Let the weight functions \(\psi _j\) and \(\varphi _j\) be defined as above for every \(j=1,2\). Then there exist three constants \(\lambda _0\geqslant 1\), \(s_0>0\) and \(C>0\), such that for \(\lambda \geqslant \lambda _0\), \(s\geqslant s_0\) and for all \((u,v)\in {\mathcal {S}}\), the following inequality holds

$$\begin{aligned}&s\lambda \int _Q\varphi _1\big [\big (v_t+u_t\big )^2+\big (v_x+u_x\big )^2\big ]e^{2s\varphi _1} dxdt \\&\quad +s\lambda \int _Q\varphi _2\big [\big (v_t-u_t\big )^2 +\big (v_x-u_x\big )^2\big ]e^{2s\varphi _2} dxdt \nonumber \\&\quad +s^3\lambda ^3\int _Q \varphi _1^3\big (v+u\big )^2e^{2s\varphi _1} dxdt +s^3\lambda ^3\int _Q \varphi _2^3\big (v-u\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\lambda \int _0^T\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q F^2e^{2s\varphi _1} dxdt +\int _Q G^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\lambda \int _0^T\varphi _1(0,t)\big (v_x(0,t)+u_x(0,t)\big )^2e^{2s\varphi _1(0,t)} dt\Big ),\nonumber \end{aligned}$$
(3.6)

where \(F=u_{tt}-u_{xx}+v_{tt}-v_{xx}\) and \(G=v_{tt}-v_{xx}-\big (u_{tt}-u_{xx}\big )\).

Proof

Let (u, v) be in \({{\mathcal {S}}}\). Set \(p=u+v\) and \(q=v-u\). Then the couple (p, q) satisfies the system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}p_{tt} - p_{xx} =F \text { in }Q, \\ &{}q_{tt} - q_{xx} = G \text { in }Q, \\ &{}p(0,t)=0= q(0,t),\\ {} &{} p(1,t)=q(1,t),\, p_x(1,t)+q_x(1,t)=0 \text { in }(0,T), \\ &{}p(.,0)=0,\, p(.,T)=0,\quad q(.,0)=0,\, q(.,T)=0, \end{array}\right. \end{aligned}$$
(3.7)

Set

$$\begin{aligned} p=p_1e^{-s\varphi _1}\text { and }q=p_2e^{-s\varphi _2}. \end{aligned}$$
(3.8)

Using (3.8), we are going to write (3.7)\(_1\) in terms of \(p_1\) and (3.7)\(_2\) in terms of \(p_2\). Elementary calculus shows that

$$\begin{aligned}&p_{1tt}-p_{1xx}-2s\lambda \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big ) \nonumber \\&\quad -s\lambda \left( \psi _{1tt}-\psi _{1xx}\right) \varphi _1p_1-s\lambda ^2\left( \psi _{1t}^2-\psi _{1x}^2\right) \varphi _1p_1 \\&\quad +s^2\lambda ^2\varphi _1^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1=e^{s\varphi _1}F\text { in }(0,1)\times (0,T),\nonumber \end{aligned}$$
(3.9)

and similarly

$$\begin{aligned}&p_{2tt}-p_{2xx}-2s\lambda \varphi _2\Big (\psi _{2t}p_{2t}-\psi _{2x}p_{2x}\Big ) \\&\quad -s\lambda \left( \psi _{2tt}-\psi _{2xx}\right) \varphi _2p_2-s\lambda ^2\left( \psi _{2t}^2-\psi _{2x}^2\right) \varphi _2p_2 \\&\quad +s^2\lambda ^2\varphi _2^2\Big (\psi _{2t}^2-\psi _{2x}^2\Big )p_2=e^{s\varphi _2}G\text { in }(0,1)\times (0,T). \end{aligned}$$
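
Identity (3.9) (and its analogue for \(p_2\) above) is a routine but error-prone computation; the following sketch (an optional symbolic check, not part of the argument) verifies it with sympy for arbitrary smooth \(\psi _1\) and \(p_1\).

```python
# Optional symbolic check of the conjugation identity (3.9): substituting
# p = p1*exp(-s*phi1) with phi1 = exp(lam*psi1) into p_tt - p_xx and multiplying
# by exp(s*phi1) must reproduce the left-hand side of (3.9).
import sympy as sp

x, t, s, lam = sp.symbols("x t s lambda", positive=True)
p1   = sp.Function("p1")(x, t)
psi1 = sp.Function("psi1")(x, t)
phi1 = sp.exp(lam * psi1)

p = p1 * sp.exp(-s * phi1)
conjugated = sp.exp(s * phi1) * (sp.diff(p, t, 2) - sp.diff(p, x, 2))

lhs_39 = (sp.diff(p1, t, 2) - sp.diff(p1, x, 2)
          - 2*s*lam*phi1*(sp.diff(psi1, t)*sp.diff(p1, t) - sp.diff(psi1, x)*sp.diff(p1, x))
          - s*lam*(sp.diff(psi1, t, 2) - sp.diff(psi1, x, 2))*phi1*p1
          - s*lam**2*(sp.diff(psi1, t)**2 - sp.diff(psi1, x)**2)*phi1*p1
          + s**2*lam**2*phi1**2*(sp.diff(psi1, t)**2 - sp.diff(psi1, x)**2)*p1)

print(sp.simplify(sp.expand(conjugated - lhs_39)))   # prints 0
```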

Now, we write (3.9) as

$$\begin{aligned} M_1p_1 + M_2p_1 =Fe^{s\varphi _1} +Rp_1\text { in }(0,1)\times (0,T) \end{aligned}$$
(3.10)

where

$$\begin{aligned} M_1 p_1&=p_{1tt}-p_{1xx} +s^2\lambda ^2\varphi _1^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1, \\ M_2 p_1&=(\eta -1)s\lambda \Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1 \\&\quad -s\lambda ^2\varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1 \\&\quad -2s\lambda \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big ), \\ Rp_1&=\eta s\lambda \Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1, \end{aligned}$$

\(\eta \) being a positive constant to be determined later on. Taking the square of the \(L^2\)-norm of each side of (3.10), we find

$$\begin{aligned}{} & {} \int _Q |M_1p_1|^2 dxdt +\int _Q|M_2p_1|^2 dxdt +2\int _Q M_1p_1M_2p_1 dxdt\nonumber \\ {}{} & {} \quad =\int _Q |Fe^{s\varphi _1}+Rp_1|^2 dxdt. \end{aligned}$$
(3.11)

One checks that

$$\begin{aligned} 2\int _Q M_1p_1M_2p_1 dxdt&=2s\lambda (\eta -1)\int _Q p_{1tt}\Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1 dxdt \\&-2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1tt}p_1 dxdt \\&-4s\lambda \int _Q p_{1tt}\varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big ) dxdt \\&-2s\lambda (\eta -1)\int _Q p_{1xx}\Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1 dxdt \\&+2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1xx}p_1 dxdt \\&+4s\lambda \int _Q \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )p_{1xx} dxdt \\&+2s^3\lambda ^3(\eta -1)\int _Q \varphi _1^3\Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&-2s^3\lambda ^4\int _Q \varphi _1^3\Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2p_1^2 dxdt\\&-4s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1 dxdt. \end{aligned}$$

The main idea is to bound the cross-product term from below by positive quantities, and to move to the right side the negative quantities that cannot be absorbed.

Now, we set \(\displaystyle 2\int _Q M_1p_1M_2p_1 dxdt=I_1+\cdots +I_9\). Using elementary calculus, we are going to compute each of the terms \(I_i\).

$$\begin{aligned} I_1&=2s\lambda (\eta -1)\int _Q p_{1tt}\Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1 dxdt \\&=-2s\lambda (\eta -1)\int _Q \varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1t}^2 dxdt \\&\quad -2s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1t}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1t}p_1 dxdt \\&=I_{11}+I_{12}, \end{aligned}$$

since \(\psi _{1ttt}(x,t)=\psi _{1xxt}(x,t)=0\) and \(p_1(x,0)=p_1(x,T)=0\).

Computing \(I_{12}\). We have

$$\begin{aligned} I_{12}&=-s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1t}\Big (\psi _{1tt}-\psi _{1xx}\Big )(p_{1}^2)_t dxdt \\&=s\lambda ^3(\eta -1)\int _Q \varphi _1\psi _{1t}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \\&\quad +s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1tt}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt. \end{aligned}$$

Consequently, we can write

$$\begin{aligned} I_1&=-2s\lambda (\eta -1)\int _Q \varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1t}^2 dxdt \nonumber \\&\quad +s\lambda ^3(\eta -1)\int _Q \varphi _1\psi _{1t}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \nonumber \\&\quad +s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1tt}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt. \end{aligned}$$
(3.12)

Similarly, we have

$$\begin{aligned} I_2&=-2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1tt}p_1 dxdt \\&=2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1t}^2 dxdt \\&\quad +4s\lambda ^2\int _Q \varphi _1\psi _{1tt}\psi _{1t}p_{1t}p_1 dxdt \\&\quad +2s\lambda ^3\int _Q \varphi _1\psi _{1t}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1t}p_1 dxdt \\&=I_{21}+I_{22}+I_{23} \end{aligned}$$

since \((\psi _{1x}^2)_t(x,t)=0\) and \(p_1(x,0)=p_1(x,T)=0\).

Computing \(I_{22}\) and \(I_{23}\). We have

$$\begin{aligned} I_{22}&=2s\lambda ^2\int _Q \varphi _1\psi _{1tt}\psi _{1t}(p_1^2)_t dxdt\\&=-2s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1t}^2p_{1}^2 dxdt -2s\lambda ^2\int _Q \varphi _1\psi _{1tt}^2p_{1}^2 dxdt \end{aligned}$$

since \(\psi _{1ttt}(x,t)=0\).

Similarly,

$$\begin{aligned} I_{23}&=s\lambda ^3\int _Q \varphi _1\psi _{1t}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )(p_1^2)_t dxdt \\&=-s\lambda ^4\int _Q \varphi _1\psi _{1t}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad -s\lambda ^3\int _Q \varphi _1\psi _{1tt}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt-2s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1t}^2p_1^2 dxdt \\&=-s\lambda ^4\int _Q \varphi _1\psi _{1t}^4p_1^2 dxdt -3s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1t}^2p_1^2 dxdt \\&\quad +s\lambda ^4\int _Q \varphi _1\psi _{1t}^2\psi _{1x}^2p_1^2 dxdt +s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1x}^2p_1^2 dxdt. \end{aligned}$$

We can now write \(I_2\) as

$$\begin{aligned} I_2&=2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1t}^2 dxdt \nonumber \\&\quad -5s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1t}^2p_{1}^2 dxdt -2s\lambda ^2\int _Q \varphi _1\psi _{1tt}^2p_{1}^2 dxdt \nonumber \\&\quad -s\lambda ^4\int _Q \varphi _1\psi _{1t}^4p_1^2 dxdt +s\lambda ^4\int _Q \varphi _1\psi _{1t}^2\psi _{1x}^2p_1^2 dxdt \nonumber \\&\quad +s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1x}^2p_1^2 dxdt. \nonumber \\ I_3&=-4s\lambda \int _Q \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )p_{1tt} dxdt \nonumber \\&=-2s\lambda \int _Q \varphi _1\psi _{1t}(p_{1t}^2)_t dxdt +4s\lambda \int _Q \varphi _1\psi _{1x}p_{1x}p_{1tt} dxdt \nonumber \\&=-2s\lambda \int _0^1\Big [(\varphi _1\psi _{1t}p_{1t}^2)(x,.)\Big ]_0^T dx +2s\lambda ^2\int _Q \varphi _1\psi _{1t}^2p_{1t}^2 dxdt \nonumber \\&\quad +2s\lambda \int _Q \varphi _1\psi _{1tt}p_{1t}^2 dxdt -4s\lambda \int _Q \varphi _1\psi _{1x}p_{1xt}p_{1t} dxdt \nonumber \\&\quad -4s\lambda ^2\int _Q \varphi _1\psi _{1t}\psi _{1x}p_{1x}p_{1t} dxdt \nonumber \\&=I_{31}+I_{32}+I_{33}+I_{34}+I_{35}. \end{aligned}$$
(3.13)

since \(p_{1x}(x,0)=p_{1x}(x,T)=0\). Computing \(I_{34}\) gives

$$\begin{aligned} -4s\lambda \int _Q \varphi _1\psi _{1x}p_{1xt}p_{1t} dxdt&=-2s\lambda \int _Q \varphi _1\psi _{1x}(p_{1t}^2)_x dxdt \\&=-2s\lambda \int _0^T\Big [\varphi _1\psi _{1x}p_{1t}^2\Big ](1,t) dt \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1x}^2p_{1t}^2 dxdt +2s\lambda \int _Q \varphi _1\psi _{1xx}p_{1t}^2 dxdt, \end{aligned}$$

since \(p_{1t}(0,t)=0\). Thus, we have

$$\begin{aligned} I_3&= -2s\lambda \int _0^1\Big [(\varphi _1\psi _{1t}p_{1t}^2)(x,.)\Big ]_0^T dx -2s\lambda \int _0^T\Big [\varphi _1\psi _{1x}p_{1t}^2\Big ](1,t) dt \nonumber \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1t}^2p_{1t}^2 dxdt +2s\lambda \int _Q \varphi _1\psi _{1tt}p_{1t}^2 dxdt \nonumber \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1x}^2p_{1t}^2 dxdt +2s\lambda \int _Q \varphi _1\psi _{1xx}p_{1t}^2 dxdt \nonumber \\&\quad -4s\lambda ^2\int _Q \varphi _1\psi _{1t}\psi _{1x}p_{1x}p_{1t} dxdt. \nonumber \\ I_4&=-2s\lambda (\eta -1)\int _Q p_{1xx}\Big (\psi _{1tt}-\psi _{1xx}\Big )\varphi _1p_1 dxdt \nonumber \\&=2s\lambda (\eta -1)\int _Q \varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}^2 dxdt \nonumber \\&\quad +2s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}p_1 dxdt \nonumber \\&\quad -2s\lambda (\eta -1)\int _0^T \Big [\varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}p_1\Big ](1,t) dxdt \nonumber \\&=I_{41} + I_{42} + I_{43}. \end{aligned}$$
(3.14)

since \(p_1(0,t)=0\). Computing \(I_{42}\), we get

$$\begin{aligned} I_{42}&=s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )(p_{1}^2)_x dxdt \\&=s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2\Big ](1,t) dt \\&\quad -s\lambda ^3(\eta -1)\int _Q \varphi _1\psi _{1x}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \\&\quad -s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1xx}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \end{aligned}$$

since \(\psi _{1ttx}(x,t)=\psi _{1xxx}(x,t)=0\). Thus, we obtain

$$\begin{aligned} I_4&=2s\lambda (\eta -1)\int _Q \varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}^2 dxdt \nonumber \\&\quad -s\lambda ^3(\eta -1)\int _Q \varphi _1\psi _{1x}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \nonumber \\&\quad -s\lambda ^2(\eta -1)\int _Q \varphi _1\psi _{1xx}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \nonumber \\&\quad +s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2\Big ](1,t) dt \nonumber \\&\quad -2s\lambda (\eta -1)\int _0^T \Big [\varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}p_1\Big ](1,t) dxdt. \nonumber \\ I_5&=2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1xx}p_1 dxdt \nonumber \\&=-2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}^2 dxdt \nonumber \\&\quad -2s\lambda ^3\int _Q \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}p_1 dxdt \nonumber \\&\quad +4s\lambda ^2\int _Q \varphi _1\psi _{1xx}\psi _{1x}p_{1x}p_1 dxdt \nonumber \\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}p_1\Big ](1,t) dt \nonumber \\&=I_{51}+I_{52}+I_{53}+I_{54} \end{aligned}$$
(3.15)

since \((\psi _{1t}^2)_x(x,t)=0\) and \(p_1(0,t)=0\). Computing \(I_{52}\) and \(I_{53}\), we have

$$\begin{aligned} I_{52}&=-s\lambda ^3\int _Q \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )(p_1^2)_x dxdt \\&=s\lambda ^4\int _Q \varphi _1\psi _{1x}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad +s\lambda ^3\int _Q \varphi _1\psi _{1xx}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt-2s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1x}^2p_1^2 dxdt \\&\quad -s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt \\&=-s\lambda ^4\int _Q \varphi _1\psi _{1x}^4p_1^2 dxdt -3s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1x}^2p_1^2 dxdt \\&\quad +s\lambda ^4\int _Q \varphi _1\psi _{1x}^2\psi _{1t}^2p_1^2 dxdt +s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1t}^2p_1^2 dxdt \\&\quad -s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt, \end{aligned}$$

and

$$\begin{aligned} I_{53}&=2s\lambda ^2\int _Q \varphi _1\psi _{1xx}\psi _{1x}(p_{1}^2)_x dxdt \\&\quad =-2s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1x}^2p_{1}^2 dxdt -2s\lambda ^2\int _Q \varphi _1\psi _{1xx}^2p_{1}^2 dxdt\\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _1\psi _{1xx}\psi _{1x}p_{1}^2\Big ] (1,t)dt \end{aligned}$$

since \(\psi _{1xxx}(x,t)=0\). Hence,

$$\begin{aligned} I_5&=-2s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}^2 dxdt \nonumber \\&\quad -s\lambda ^4\int _Q \varphi _1\psi _{1x}^4p_1^2 dxdt -5s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1x}^2p_1^2 dxdt \nonumber \\&\quad +s\lambda ^4\int _Q \varphi _1\psi _{1x}^2\psi _{1t}^2p_1^2 dxdt +s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1t}^2p_1^2 dxdt \nonumber \\&\quad -2s\lambda ^2\int _Q \varphi _1\psi _{1xx}^2p_{1}^2 dxdt \nonumber \\&\quad -s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt \nonumber \\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _1\psi _{1xx}\psi _{1x}p_{1}^2\Big ] (1,t)dt \nonumber \\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}p_1\Big ](1,t) dt. \nonumber \\ I_6&=4s\lambda \int _Q \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )p_{1xx} dxdt \nonumber \\&=4s\lambda \int _0^T\Big [\varphi _1\psi _{1t}p_{1t}p_{1x}\Big ](1,t) dt \nonumber \\&\quad -4s\lambda ^2\int _Q \varphi _1\psi _{1x}p_{1x}\psi _{1t}p_{1t} dxdt \nonumber \\&\quad -4s\lambda \int _Q \varphi _1\psi _{1t}p_{1xt}p_{1x} dxdt \nonumber \\&\quad -2s\lambda \int _Q \varphi _1\psi _{1x}(p_{1x}^2)_x dxdt \nonumber \\&=I_{61}+I_{62}+I_{63}+I_{64}. \end{aligned}$$
(3.16)

since \(\psi _{1tx}(x,t)=0\), \(p_{1t}(0,t)=0\) and \(p_{1x}(x,0)=p_{1x}(x,T)=0\). We can write that

$$\begin{aligned} I_{63}&= -2s\lambda \int _Q \varphi _1\psi _{1t}(p_{1x} ^2)_tdxdt \\&=2s\lambda ^2\int _Q \varphi _1\psi _{1t}^2p_{1x} ^2dxdt +2s\lambda \int _Q \varphi _1\psi _{1tt}p_{1x} ^2dxdt, \end{aligned}$$

and

$$\begin{aligned} I_{64}&= -2s\lambda \int _0^T \Big [(\varphi _1\psi _{1x}p_{1x}^2)(.,t)\Big ]_0^1 dt \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1x}^2p_{1x}^2 dxdt +2s\lambda \int _Q \varphi _1\psi _{1xx}p_{1x}^2 dxdt. \end{aligned}$$

Consequently,

$$\begin{aligned} I_6&=4s\lambda \int _0^T\Big [\varphi _1\psi _{1t}p_{1t}p_{1x}\Big ](1,t) dt -2s\lambda \int _0^T \Big [(\varphi _1\psi _{1x}p_{1x}^2)(.,t)\Big ]_0^1 dt \nonumber \\&\quad -4s\lambda ^2\int _Q \varphi _1\psi _{1x}p_{1x}\psi _{1t}p_{1t} dxdt \nonumber \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1t}^2p_{1x} ^2dxdt +2s\lambda \int _Q \varphi _1\psi _{1tt}p_{1x} ^2dxdt \nonumber \\&\quad +2s\lambda ^2\int _Q \varphi _1\psi _{1x}^2p_{1x}^2 dxdt +2s\lambda \int _Q \varphi _1\psi _{1xx}p_{1x}^2 dxdt. \end{aligned}$$
(3.17)

No computations are needed for the next two terms.

$$\begin{aligned} I_7&= 2s^3\lambda ^3(\eta -1)\int _Q \varphi _1^3\Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt\nonumber \\ I_8&=-2s^3\lambda ^4\int _Q \varphi _1^3\Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2p_1^2 dxdt. \end{aligned}$$
(3.18)

For the last term, proceeding as above, we find

$$\begin{aligned} I_9&=-4s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1 dxdt \\&=-2s^3\lambda ^3\int _Q \varphi _1^3\psi _{1t}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )(p_1^2)_t dxdt +2s^3\lambda ^3\int _Q \varphi _1^3\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )(p_1^2)_x dxdt \\&=6s^3\lambda ^4\int _Q \varphi _1^3\psi _{1t}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt +2s^3\lambda ^3\int _Q \varphi _1^3\psi _{1tt}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad +4s^3\lambda ^3\int _Q \varphi _1^3\psi _{1tt}\psi _{1t}^2p_1^2 dxdt -6s^3\lambda ^4\int _Q \varphi _1^3\psi _{1x}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad -2s^3\lambda ^3\int _Q \varphi _1^3\psi _{1xx}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad +4s^3\lambda ^3\int _Q \varphi _1^3\psi _{1xx}\psi _{1x}^2p_1^2 dxdt +2s^3\lambda ^3\int _0^T \Big [\varphi _1^3\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt \end{aligned}$$

since \((\psi _{1x}^2)_t(x,t)=(\psi _{1t}^2)_x(x,t)=0\), \(p_1(x,0)=p_1(x,T)=0\) and \(p_1(0,t)=0\).

Hence

$$\begin{aligned} I_9&=6s^3\lambda ^4\int _Q \varphi _1^3\Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2p_1^2 dxdt \nonumber \\&\quad +2s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \nonumber \\&\quad +4s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1tt}\psi _{1t}^2+\psi _{1xx}\psi _{1x}^2\Big )p_1^2 dxdt \nonumber \\&\quad +2s^3\lambda ^3\int _0^T \Big [\varphi _1^3\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt. \end{aligned}$$
(3.19)

Combining (3.12)–(3.19), we get

$$\begin{aligned}&2\int _Q M_1p_1M_2p_1 dxdt=4s\lambda ^2\int _Q \varphi _1\Big (\psi _{1t}p_{1t}-\psi _{1x}p_{1x}\Big )^2 dxdt\nonumber \\&\quad +2s\lambda \int _Q\varphi _1\Big [2\psi _{1tt}-\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )\Big ]p_{1t}^2 dxdt \nonumber \\&\quad +2s\lambda \int _Q\varphi _1\Big [\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )-\Big (\psi _{1tt}-\psi _{1xx}\Big )+\Big (\psi _{1tt}+\psi _{1xx}\Big )\Big ]p_{1x}^2 dxdt \nonumber \\&\quad +4s^3\lambda ^4\int _Q \varphi _1^3\Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2p_1^2 dxdt +4s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1tt}\psi _{1t}^2+\psi _{1xx}\psi _{1x}^2\Big )p_1^2 dxdt \nonumber \\&\quad +2s^3\lambda ^3\eta \int _Q \varphi _1^3\Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \nonumber \\&\quad +s\lambda ^3(\eta -1)\int _Q \varphi _1\psi _{1t}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt +s\lambda ^2(\eta -1)\nonumber \\&\quad \int _Q \varphi _1\psi _{1tt}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \nonumber \\&\quad -s\lambda ^4\int _Q \varphi _1\psi _{1t}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt -5s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1t}^2p_{1}^2 dxdt\nonumber \\&\quad +s\lambda ^3\int _Q \varphi _1\psi _{1tt}\psi _{1x}^2p_{1}^2 dxdt -2s\lambda ^2\int _Q \varphi _1\psi _{1tt}^2p_{1}^2 dxdt \nonumber \\&\quad -s\lambda ^3(\eta -3)\int _Q \varphi _1\psi _{1x}^2\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt -s\lambda ^2(\eta -1)\nonumber \\&\quad \int _Q \varphi _1\psi _{1xx}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1}^2 dxdt \nonumber \\&\quad +s\lambda ^4\int _Q \varphi _1\psi _{1x}^2\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt -4s\lambda ^3\int _Q \varphi _1\psi _{1xx}\psi _{1x}^2p_{1}^2 dxdt \nonumber \\&\quad +s\lambda ^3\int _Q \varphi _1\psi _{1xx}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt -2s\lambda ^2\int _Q \varphi _1\psi _{1xx}^2p_{1}^2 dxdt \nonumber \\ {}&-2s\lambda \int _0^1\Big [(\varphi _1\psi _{1t}p_{1t}^2)(x,.)\Big ]_0^T dx -2s\lambda \int _0^T\Big [\varphi _1\psi _{1x}p_{1t}^2\Big ](1,t) dt\nonumber \\&+s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_1^2\Big ](1,t) dt \nonumber \\&\quad -2s\lambda (\eta -1)\int _0^T \Big [\varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}p_1\Big ](1,t) dt\nonumber \\&-s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt +2s\lambda ^2\int _0^T \Big [\varphi _1\psi _{1xx}\psi _{1x}p_1^2\Big ](1,t) dt \nonumber \\&+4s\lambda \int _0^T\Big [\varphi _1\psi _{1t}p_{1t}p_{1x}\Big ](1,t) dt -2s\lambda \int _0^T \Big [(\varphi _1\psi _{1x}p_{1x}^2)(.,t)\Big ]_0^1 dt \nonumber \\&+2s^3\lambda ^3\int _0^T \Big [\varphi _1^3\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt +2s\lambda ^2\int _0^T\Big [ \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}p_1\Big ](1,t) dt. \end{aligned}$$
(3.20)

Denote by \(J_1,\ldots ,J_{18}\) the first eighteen integrals appearing on the right side of (3.20).

At this stage, we find it useful to recall the example of functions \(r_1\) and \(r_2\) given in (3.3).

$$\begin{aligned} r_1(x)=mx^2-(4m+n)x+4m+2n\quad \text { and }\quad r_2(x)=mx^2+nx,\quad \forall x\in [0,1], \end{aligned}$$
(3.21)

where \(m>0\) and \(n>0\) are arbitrary constants.

We observe that for every \(j=1,2\),

$$\begin{aligned} \psi _{jt}(x,t)&=-2\mu \left( t-\dfrac{T}{2}\right) ,\quad \psi _{jtt}(x,t)=-2\mu , \\ \psi _{jx}(x,t)&=r'_j(x),\quad \psi _{jxx}(x,t)=r''_j(x)=2m. \end{aligned}$$

We want the coefficients appearing in the integrals \(J_2\) and \(J_3\) to be positive. For that to happen, we need

$$\begin{aligned} \Big [2\psi _{1tt}-\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )\Big ](x,t)=-4\mu -\eta (-2\mu -2m)>0 \end{aligned}$$

and

$$\begin{aligned} \Big [\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )+2\psi _{1xx}\Big ](x,t)=\eta (-2\mu -2m)+4m>0, \end{aligned}$$

from which we deduce

$$\begin{aligned} \frac{2\mu }{\mu +m}<\eta <\frac{2m}{\mu +m}. \end{aligned}$$
(3.22)

Consequently, there exist \(\eta \) in \((2\mu /(\mu +m),2m/(\mu +m))\) and a constant \(C>0\) such that

$$\begin{aligned} J_2\geqslant Cs\lambda \int _Q\varphi _1p_{1t}^2 dxdt, \quad J_3\geqslant Cs\lambda \int _Q\varphi _1p_{1x}^2 dxdt. \end{aligned}$$
(3.23)

Henceforth, C denotes a generic positive constant that may vary from line to line or even in the same line, but is always independent of the parameters s and \(\lambda \).
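
As an elementary sanity check (not needed for the proof), one may verify the window (3.22) and the positivity of the two coefficients behind (3.23) numerically; the values of \(\mu \) and m below are sample assumptions.

```python
# Optional numerical check of the eta-window (3.22) and of the positivity of the
# coefficients appearing in J_2 and J_3 (psi_1tt = -2*mu, psi_1xx = 2*m).
mu, m = 0.5, 1.0                                  # sample values with mu < m
lo, hi = 2 * mu / (mu + m), 2 * m / (mu + m)
assert lo < hi, "the window (3.22) is empty unless mu < m"
eta = 0.5 * (lo + hi)                             # any eta in the open window works

coeff_J2 = -4 * mu - eta * (-2 * mu - 2 * m)      # 2*psi_1tt - eta*(psi_1tt - psi_1xx)
coeff_J3 = eta * (-2 * mu - 2 * m) + 4 * m        # eta*(psi_1tt - psi_1xx) + 2*psi_1xx
assert coeff_J2 > 0 and coeff_J3 > 0
print(f"window (3.22) = ({lo:.3f}, {hi:.3f}); eta = {eta:.3f}; "
      f"J2-coefficient = {coeff_J2:.3f}; J3-coefficient = {coeff_J3:.3f}")
```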

We want the sum \(J_4+J_5+J_6\) to be positive. We have

$$\begin{aligned} J_4+J_5+J_6&= 4s^3\lambda ^4\int _Q \varphi _1^3\Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2p_1^2 dxdt \\&\quad +4s^3\lambda ^3\int _Q \varphi _1^3\Big (\psi _{1tt}\psi _{1t}^2+\psi _{1xx}\psi _{1x}^2\Big )p_1^2 dxdt \\&\quad +2s^3\lambda ^3\eta \int _Q \varphi _1^3\Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2 dxdt \\&=2s^3\lambda ^3\int _Q \varphi _1^3\Big [2\lambda \Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2+2\Big (\psi _{1tt}\psi _{1t}^2+\psi _{1xx}\psi _{1x}^2\Big ) \\&\quad +\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )\Big ]p_1^2 dxdt. \end{aligned}$$

Set \(Z:=\Big [2\lambda \Big (\psi _{1t}^2-\psi _{1x}^2\Big )^2+2\Big (\psi _{1tt}\psi _{1t}^2+\psi _{1xx}\psi _{1x}^2\Big )+\eta \Big (\psi _{1tt}-\psi _{1xx}\Big )\Big (\psi _{1t}^2-\psi _{1x}^2\Big )\Big ](x,t)\). By setting \(\phi =(\psi _{1t}^2-\psi _{1x}^2)(x,t)\), we find

$$\begin{aligned} Z&=\Big (2\lambda \big (\psi _{1t}^2-\psi _{1x}^2\big )^2 +\eta \big (\psi _{1tt}-\psi _{1xx}\big )\big (\psi _{1t}^2-\psi _{1x}^2\big ) \\&\quad +2\big [\psi _{1tt}\big (\psi _{1t}^2-\psi _{1x}^2\big )+\big (\psi _{1tt}+\psi _{1xx}\big )\psi _{1x}^2\big ]\Big )(x,t) \\&=2\lambda \phi ^2 -2\big [2\mu +\eta \big (\mu +m\big )\big ]\phi +4(m-\mu )\psi _{1x}^2(x,t). \end{aligned}$$

We obtain a quadratic polynomial in \(\phi \) with positive leading coefficient, so it suffices that its discriminant be negative, in other words

$$\begin{aligned} D:=[2\mu +\eta \big (\mu +m\big )\big ]^2-8\lambda (m-\mu )r'_{1}(x)^2<0\text { for every }x\in [0,1]. \end{aligned}$$

It follows from (3.22) that \(\mu <m\). Similarly, (3.2) shows that \(r'_{1}(x)^2>0\) for every \(x\in [0,1]\). Thus we may choose \(\lambda \) large enough so that \(D<0\). We adopt such a choice throughout the rest of this section. As a consequence, there exists a constant \(C>0\) such that

$$\begin{aligned} J_4+J_5+J_6\geqslant Cs^3\lambda ^3\int _Q\varphi _1^3p_1^2 dxdt. \end{aligned}$$
(3.24)
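
The size of \(\lambda \) needed to make the discriminant D negative can be made explicit for the sample parameters used above; the sketch below (an illustration under those sample assumptions) computes the worst case of D over \(x\in [0,1]\) for the family (3.3). Of course, \(\lambda \) must in addition be taken at least \(\lambda _0\) and large enough for the other absorptions below.

```python
# Optional numerical check of the discriminant condition D < 0 for the family (3.3),
# with the same sample values mu, m, n, eta as above (illustrative assumptions only).
import numpy as np

mu, m, n, eta = 0.5, 1.0, 1.0, 1.0

xs = np.linspace(0.0, 1.0, 1001)
r1p = 2 * m * xs - (4 * m + n)                 # r_1'(x), never vanishing by (3.2)
min_r1p_sq = (r1p ** 2).min()                  # attained at x = 1, value (2m + n)^2

# D(x) = [2 mu + eta (mu + m)]^2 - 8 lam (m - mu) r_1'(x)^2 < 0 for all x as soon as
lam_min = (2 * mu + eta * (mu + m)) ** 2 / (8 * (m - mu) * min_r1p_sq)
lam = 1.01 * lam_min
D = (2 * mu + eta * (mu + m)) ** 2 - 8 * lam * (m - mu) * r1p ** 2
assert D.max() < 0
print("D < 0 on [0,1] for every lambda >", lam_min, "; e.g. max D =", D.max(), "at lambda =", lam)
```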

Moreover, given that for each \(j=1,\,2\), the function \(\psi _j\) lies in \(C^4({\bar{Q}})\), there exists a constant \(C>0\) such that for all \(\lambda \) large in the sense stated above, and all \(s>0\), the internal lower order terms satisfy

$$\begin{aligned} \left| J_7+\cdots +J_{18}\right| \leqslant Cs\lambda ^4\int _Q\varphi _1^3p_1^2 dxdt, \end{aligned}$$
(3.25)

where we have also used the fact that \(\psi _j(x,t)>0\) for all (x, t) in \({\bar{Q}}\).

As for the right side of (3.11), we have, for some positive constant \(C_0\),

$$\begin{aligned} \int _Q |Fe^{s\varphi _1}+Rp_1|^2 dxdt\leqslant C_0\Big (\int _Q F^2e^{2s\varphi _1} dxdt +s^2\lambda ^2\int _Q \varphi _1^3p_1^2 dxdt\Big ),\qquad \end{aligned}$$
(3.26)

since \(\varphi _1(x,t)>1\) for all (x, t) in \({\bar{Q}}\).

Choosing \(s\gg \lambda \), it readily follows from (3.24)–(3.26) that

$$\begin{aligned} J_4+J_5+J_6-\left| J_7+\cdots +J_{18}\right| -C_0s^2\lambda ^2\int _Q \varphi _1^3p_1^2 dxdt\ge Cs^3\lambda ^3\int _Q\varphi _1^3p_1^2 dxdt, \nonumber \\ \end{aligned}$$
(3.27)

for some positive constant C, and for all s and \(\lambda \) large enough.

Plugging the estimates (3.23) and (3.27) into (3.20), and recalling (3.11), we derive

$$\begin{aligned}&s\lambda \int _Q\varphi _1p_{1t}^2 dxdt+s\lambda \int _Q\varphi _1p_{1x}^2 dxdt +s^3\lambda ^3\int _Q \varphi _1^3p_1^2 dxdt \nonumber \\&\quad +C\Bigg (-2s\lambda \int _0^1\Big [(\varphi _1\psi _{1t}p_{1t}^2)(x,.)\Big ]_0^T dx -2s\lambda \int _0^T\Big [\varphi _1\psi _{1x}p_{1t}^2\Big ](1,t) dt \nonumber \\&\quad +s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _1\psi _{1x}\Big (\psi _{1tt}-\psi _{1xx}\Big )p_1^2\Big ](1,t) dt+2s\lambda ^2\int _0^T \Big [\varphi _1\psi _{1xx}\psi _{1x}p_1^2\Big ](1,t) dt \nonumber \\&\quad -2s\lambda (\eta -1)\int _0^T \Big [\varphi _1\Big (\psi _{1tt}-\psi _{1xx}\Big )p_{1x}p_1\Big ](1,t) dt \\&\quad -s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt+2s^3\lambda ^3\int _0^T \Big [\varphi _1^3\psi _{1x}\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_1^2\Big ](1,t) dt \nonumber \\&\quad +4s\lambda \int _0^T\Big [\varphi _1\psi _{1t}p_{1t}p_{1x}\Big ](1,t) dt -2s\lambda \int _0^T \Big [(\varphi _1\psi _{1x}p_{1x}^2)(.,t)\Big ]_0^1 dt \nonumber \\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _1\Big (\psi _{1t}^2-\psi _{1x}^2\Big )p_{1x}p_1\Big ](1,t) dt\Bigg ) \leqslant C\int _Q F^2e^{2s\varphi _1} dxdt. \nonumber \end{aligned}$$
(3.28)

An estimate similar to (3.28) will be satisfied by \(p_2\); more precisely, we have

$$\begin{aligned}&s\lambda \int _Q\varphi _2p_{2t}^2 dxdt+s\lambda \int _Q\varphi _2p_{2x}^2 dxdt +s^3\lambda ^3\int _Q \varphi _2^3p_2^2 dxdt \nonumber \\&\quad +C\Bigg (-2s\lambda \int _0^1\Big [(\varphi _2\psi _{2t}p_{2t}^2)(x,.)\Big ]_0^T dx -2s\lambda \int _0^T\Big [\varphi _2\psi _{2x}p_{2t}^2\Big ](1,t) dt \nonumber \\&\quad +s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _2\psi _{2x}\Big (\psi _{2tt}-\psi _{2xx}\Big )p_2^2\Big ](1,t) dt+2s\lambda ^2\int _0^T \Big [\varphi _2\psi _{2xx}\psi _{2x}p_2^2\Big ](1,t) dt \nonumber \\&\quad -2s\lambda (\eta -1)\int _0^T \Big [\varphi _2\Big (\psi _{2tt}-\psi _{2xx}\Big )p_{2x}p_2\Big ](1,t) dt -s\lambda ^3\int _0^T\Big [ \varphi _2\psi _{2x}\Big (\psi _{2t}^2-\psi _{2x}^2\Big )p_2^2\Big ](1,t) dt \nonumber \\&\quad +4s\lambda \int _0^T\Big [\varphi _2\psi _{2t}p_{2t}p_{2x}\Big ](1,t) dt -2s\lambda \int _0^T \Big [(\varphi _2\psi _{2x}p_{2x}^2)(.,t)\Big ]_0^1 dt \nonumber \\&\quad +2s^3\lambda ^3\int _0^T \Big [\varphi _2^3\psi _{2x}\Big (\psi _{2t}^2-\psi _{2x}^2\Big )p_2^2\Big ](1,t) dt \nonumber \\&\quad +2s\lambda ^2\int _0^T\Big [ \varphi _2\Big (\psi _{2t}^2-\psi _{2x}^2\Big )p_{2x}p_2\Big ](1,t) dt\Bigg ) \leqslant C\int _Q G^2e^{2s\varphi _2} dxdt. \end{aligned}$$
(3.29)

Before analyzing the boundary terms, we add those corresponding to \(p_1\) and \(p_2\); to this end, we add the two estimates (3.28) and (3.29). Denoting by \(J'_1,\ldots ,J'_{10}\) the ten boundary integrals that appear in the sum, we find

$$\begin{aligned} J'_1&=-2s\lambda \int _0^1\Big [(\varphi _1\psi _{1t}p_{1t}^2+\varphi _2\psi _{2t}p_{2t}^2)(x,.)\Big ]_0^T dx \nonumber \\&=-2s\lambda \int _0^1\Big [-\mu T(\varphi _1 p_{1t}^2+\varphi _2 p_{2t}^2)(x,T)-\mu T(\varphi _1 p_{1t}^2+\varphi _2 p_{2t}^2)(x,0)\Big ] dx \nonumber \\&=2s\lambda \mu T\int _0^1\Big [(\varphi _1 p_{1t}^2+\varphi _2 p_{2t}^2)(x,T)+(\varphi _1 p_{1t}^2+\varphi _2 p_{2t}^2)(x,0)\Big ] dx. \end{aligned}$$
(3.30)
$$\begin{aligned} J'_2&=-2s\lambda \int _0^T \Big [\varphi _1\psi _{1x}p_{1t}^2+\varphi _2\psi _{2x}p_{2t}^2\Big ](1,t) dt \nonumber \\&=-2s\lambda \big (r'_1(1)+r'_2(1)\big ) \int _0^T\varphi _1(1,t)p_{1t}(1,t)^2 dt =0, \end{aligned}$$
(3.31)

since \(r_1(1)=r_2(1)\), \(r'_2(1)=-r'_1(1)\) and \(p_1(1,t)=p_2(1,t)\) for every t in [0, T].

$$\begin{aligned} J'_3= & {} s\lambda ^2(\eta -1)\int _0^T \Big [\varphi _1\psi _{1x}\big (\psi _{1tt}-\psi _{1xx}\big )p_1^2+\varphi _2\psi _{2x}\big (\psi _{2tt}-\psi _{2xx}\big )p_2^2\Big ](1,t) dt \nonumber \\= & {} -s\lambda ^2(\eta -1)\Big (r'_1(1)\big (2\mu +r''_1(1)\big )-r'_1(1)\big (2\mu +r''_2(1)\big )\Big )\int _0^T\varphi _1(1,t)p_1(1,t)^2 dt \nonumber \\= & {} -s\lambda ^2(\eta -1)r'_1(1)\big (r''_1(1)-r''_2(1)\big )\int _0^T\varphi _1(1,t)p_1(1,t)^2 dt = 0, \end{aligned}$$
(3.32)

since \(r''_2(1)=r''_1(1)\).

$$\begin{aligned} J'_4&=-2s\lambda (\eta -1)\int _0^T \Big [\varphi _1\big (\psi _{1tt}-\psi _{1xx}\big )p_{1x}p_1+\varphi _2\big (\psi _{2tt}-\psi _{2xx}\big )p_{2x}p_2\Big ](1,t) dt \nonumber \\&=-2s\lambda (\eta -1)\int _0^T \Big (\big (-2\mu -r''_1(1)\big )-\big (-2\mu -r''_2(1)\big )\Big )(\varphi _1p_{1x}p_1)(1,t) dt \nonumber \\&=2s\lambda (\eta -1)\big (r''_1(1)-r''_2(1)\big )\int _0^T\varphi _1(1,t)p_{1x}(1,t)p_1(1,t) dt=0 \end{aligned}$$
(3.33)
$$\begin{aligned} J'_5&=-s\lambda ^3\int _0^T\Big [ \varphi _1\psi _{1x}\big (\psi _{1t}^2-\psi _{1x}^2\big )p_1^2+\varphi _2\psi _{2x}\big (\psi _{2t}^2-\psi _{2x}^2\big )p_2^2\Big ](1,t) dt \nonumber \\&=-s\lambda ^3r'_1(1)\int _0^T\Big (\psi _{1t}(1,t)^2-r'_1(1)^2-\big (\psi _{1t}(1,t)^2-r'_1(1)^2\big )\Big )(\varphi _1p_1^2)(1,t) dt=0. \end{aligned}$$
(3.34)
$$\begin{aligned} J'_6&=2s\lambda ^2\int _0^T \Big [\varphi _1\psi _{1xx}\psi _{1x}p_1^2+\varphi _2\psi _{2xx}\psi _{2x}p_2^2\Big ](1,t) dt \nonumber \\&=2s\lambda ^2r'_1(1)\big (r''_1(1)-r''_2(1)\big )\int _0^T\varphi _1(1,t)p_1(1,t)^2 dt=0. \end{aligned}$$
(3.35)
$$\begin{aligned} J'_7&=4s\lambda \int _0^T\Big [\varphi _1\psi _{1t}p_{1t}p_{1x}+\varphi _2\psi _{2t}p_{2t}p_{2x}\Big ](1,t) dt \nonumber \\&=4s\lambda \int _0^T\psi _{1t}(1,t)\varphi _1(1,t)\big (p_{1t}(1,t)-p_{2t}(1,t)\big )p_{1x}(1,t) dt=0, \end{aligned}$$
(3.36)

since \(p_{1t}(1,t)=p_{2t}(1,t)\).

$$\begin{aligned} J'_8&=-2s\lambda \int _0^T \Big [(\varphi _1\psi _{1x}p_{1x}^2+\varphi _2\psi _{2x}p_{2x}^2)(.,t)\Big ]_0^1 dt \nonumber \\&=-2s\lambda \big (r'_1(1)+r'_2(1)\big )\int _0^T\varphi _1(1,t)p_{1x}(1,t)^2 dt \nonumber \\&\quad +2s\lambda \int _0^T \big (r'_1(0)\varphi _1(0,t) p_{1x}(0,t)^2+r'_2(0)\varphi _2(0,t)p_{2x}(0,t)^2\big ) dt \nonumber \\&=2s\lambda \Big (r'_1(0)\int _0^T\varphi _1(0,t) p_{1x}(0,t)^2 dt+r'_2(0)\int _0^T\varphi _2(0,t)p_{2x}(0,t)^2 dt\Big ). \end{aligned}$$
(3.37)
$$\begin{aligned} J'_{9}&=2s^3\lambda ^3\int _0^T \Big [\varphi _1^3\psi _{1x}\big (\psi _{1t}^2-\psi _{1x}^2\big )p_1^2+\varphi _2^3\psi _{2x}\big (\psi _{2t}^2-\psi _{2x}^2\big )p_2^2\Big ](1,t) dt \nonumber \\&=2s^3\lambda ^3\big (r'_1(1)+r'_2(1)\big )\int _0^T\psi _{1t}(1,t)^2\varphi _1(1,t)^3p_1(1,t)^2 dt \nonumber \\&\quad -2s^3\lambda ^3\big (r'_1(1)^3+r'_2(1)^3\big )\int _0^T\varphi _1(1,t)^3p_1(1,t)^2 dt=0. \end{aligned}$$
(3.38)
$$\begin{aligned} J'_{10}&=2s\lambda ^2\int _0^T\Big [ \varphi _1\big (\psi _{1t}^2-\psi _{1x}^2\big )p_{1x}p_1+\varphi _2\big (\psi _{2t}^2-\psi _{2x}^2\big )p_{2x}p_2\Big ](1,t) dt \nonumber \\&=2s\lambda ^2\int _0^T\Big (\psi _{1t}(1,t)^2-r'_1(1)^2-\big (\psi _{1t}(1,t)^2-r'_1(1)^2\big )\Big )(\varphi _1p_{1x}p_1)(1,t) dt=0. \end{aligned}$$
(3.39)

Adding (3.30)–(3.39), we obtain, for some positive constants \(C_1\) and \(C_2\):

$$\begin{aligned} J'_1+\cdots +J'_{10}= & {} 2s\lambda \mu T\int _0^1\big (\varphi _1(x,T)p_{1t}(x,T)^2+\varphi _2(x,T) p_{2t}(x,T)^2\big ) dx\nonumber \\{} & {} +2s\lambda \mu T\int _0^1\big (\varphi _1(x,0) p_{1t}(x,0)^2+\varphi _2(x,0) p_{2t}(x,0)^2\big ) dx \nonumber \\{} & {} +2s\lambda \Big (r'_1(0)\int _0^T\varphi _1(0,t) p_{1x}(0,t)^2 dt+r'_2(0)\int _0^T\varphi _2(0,t)p_{2x}(0,t)^2 dt\Big )\nonumber \\\ge & {} s\lambda \left( -C_1\int _0^T\varphi _1(0,t) p_{1x}(0,t)^2 dt+C_2\int _0^T\varphi _2(0,t)p_{2x}(0,t)^2 dt\right) . \nonumber \\ \end{aligned}$$
(3.40)

Gathering (3.28), (3.29) and (3.40), and moving the boundary term in \(p_{1x}(0,t)\) to the right-hand side, we derive

$$\begin{aligned}&s\lambda \int _Q\varphi _1p_{1t}^2 dxdt+s\lambda \int _Q\varphi _1p_{1x}^2 dxdt +s^3\lambda ^3\int _Q \varphi _1^3p_1^2 dxdt\nonumber \\ {}&\quad + s\lambda \int _Q\varphi _2p_{2t}^2 dxdt+s\lambda \int _Q\varphi _2p_{2x}^2 dxdt \nonumber \\ {}&\qquad +s^3\lambda ^3\int _Q \varphi _2^3p_2^2 dxdt+s\lambda \int _0^T\varphi _2(0,t)p_{2x}(0,t)^2 dt\\ {}&\le C\int _Q\left( F^2e^{2s\varphi _1}+G^2e^{2s\varphi _2}\right) dxdt +Cs\lambda \int _0^T\varphi _1(0,t) p_{1x}(0,t)^2 dt.\nonumber \end{aligned}$$
(3.41)

Now, we must go back to the functions p and q. Since \(p(x,t)=p_1(x,t)e^{-s\varphi _1(x,t)}\) and \(q(x,t)=p_2(x,t)e^{-s\varphi _2(x,t)}\), it then follows

$$\begin{aligned} p_t =\big (p_{1t}-s\lambda \psi _{1t}\varphi _1p_1\big )e^{-s\varphi _1},\text { and } p_x =\big (p_{1x}-s\lambda \psi _{1x}\varphi _1p_1\big )e^{-s\varphi _1}. \end{aligned}$$

Using the inequality \((a-b)^2\leqslant 2(a^2+b^2)\), we find

$$\begin{aligned} p_t^2e^{2s\varphi _1}\leqslant 2\big (p_{1t}^2+s^2\lambda ^2\psi _{1t}^2\varphi _1^2p_1^2\big ),\quad p_x^2e^{2s\varphi _1}\leqslant 2\big (p_{1x}^2+s^2\lambda ^2\psi _{1x}^2\varphi _1^2p_1^2\big ). \end{aligned}$$
(3.42)

Multiplying (3.42) by \(\frac{s\lambda \varphi _1}{2}\), then integrating over Q and adding the results, we obtain the following inequality

$$\begin{aligned} \dfrac{s\lambda }{2}\int _Q\varphi _1\big (p_t^2+p_x^2\big )e^{2s\varphi _1} dxdt\leqslant s\lambda \int _Q\varphi _1\big (p_{1t}^2+p_{1x}^2\big ) dxdt +Ms^3\lambda ^3\int _Q\varphi _1^3p_1^2 dxdt, \end{aligned}$$
(3.43)

where \(\displaystyle M{:}{=}\max _{i=1,2}\max _{(x,t)\in Q}\big (\psi _{it}^2+\psi _{ix}^2\big )\), so that the same constant also works in (3.44) below.

Similarly, one derives

$$\begin{aligned} \dfrac{s\lambda }{2}\int _Q\varphi _2\big (q_t^2+q_x^2\big )e^{2s\varphi _2} dxdt\leqslant s\lambda \int _Q\varphi _2\big (p_{2t}^2+p_{2x}^2\big ) dxdt +Ms^3\lambda ^3\int _Q\varphi _2^3p_2^2 dxdt. \end{aligned}$$
(3.44)

As for the boundary terms, we notice that since both \(p_1(0,t)\) and \(p_2(0,t)\) vanish for every t in [0, T], it follows that

$$\begin{aligned} p_{1x}(0,t)=p_x(0,t)e^{s\varphi _1}\text { and }p_{2x}(0,t)=q_x(0,t)e^{s\varphi _2},\quad \forall t\in [0,T]. \end{aligned}$$
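Indeed, evaluating the expressions of \(p_x\) (and its analogue for \(q_x\)) at \(x=0\), the terms containing \(p_1\) and \(p_2\) drop out; for instance,

$$\begin{aligned} p_x(0,t)=\big (p_{1x}(0,t)-s\lambda \psi _{1x}(0)\varphi _1(0,t)p_1(0,t)\big )e^{-s\varphi _1(0,t)}=p_{1x}(0,t)e^{-s\varphi _1(0,t)}. \end{aligned}$$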

Thus, multiplying (3.41) by \(M+1\) and adding the result to (3.43) and (3.44), we find

$$\begin{aligned}&(M+1)s\lambda \sum _{j=1}^2\int _Q\varphi _j(p_{jt}^2+p_{jx}^2 ) dxdt +s^3\lambda ^3\int _Q \left( \varphi _1^3p^2e^{2s\varphi _1}+\varphi _2^3q^2e^{2s\varphi _2}\right) dxdt\nonumber \\&\quad +Ms^3\lambda ^3\sum _{j=1}^2\int _Q\varphi _j^3p_{j}^2 dxdt +\frac{s\lambda }{2}\int _Q\varphi _1\left( p_{t}^2+p_{x}^2\right) e^{2s\varphi _1} dxdt\nonumber \\&\quad +\frac{s\lambda }{2}\int _Q\varphi _2\left( q_{t}^2+q_{x}^2\right) e^{2s\varphi _2} dxdt\nonumber \\ {}&\quad +s\lambda \int _0^T\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt\\ {}&\le C\int _Q\left( F^2e^{2s\varphi _1}+G^2e^{2s\varphi _2}\right) dxdt +Cs\lambda \int _0^T\varphi _1(0,t) p_{x}(0,t)^2e^{2s\varphi _1(0,t)} dt\nonumber \\ {}&\quad +s\lambda \sum _{j=1}^2\int _Q\varphi _j(p_{jt}^2+p_{jx}^2 ) dxdt +Ms^3\lambda ^3\sum _{j=1}^2\int _Q\varphi _j^3p_{j}^2 dxdt.\nonumber \end{aligned}$$
(3.45)

After cancelling the terms common to both sides, dropping the remaining nonnegative terms on the left-hand side and enlarging the constant C, it then follows that

$$\begin{aligned}&{s\lambda }\int _Q\left\{ \varphi _1\left( p_{t}^2+p_{x}^2\right) e^{2s\varphi _1} +\varphi _2\left( q_{t}^2+q_{x}^2\right) e^{2s\varphi _2}\right\} dxdt \nonumber \\&\quad +s^3\lambda ^3\int _Q \left( \varphi _1^3p^2e^{2s\varphi _1}+\varphi _2^3q^2e^{2s\varphi _2}\right) dxdt\nonumber \\ {}&\quad +s\lambda \int _0^T\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt\\ {}&\le C\int _Q\left( F^2e^{2s\varphi _1}+G^2e^{2s\varphi _2}\right) dxdt +Cs\lambda \int _0^T\varphi _1(0,t) p_{x}(0,t)^2e^{2s\varphi _1(0,t)} dt,\nonumber \end{aligned}$$
(3.46)

which is the claimed Carleman inequality, given the definition of p and q. \(\square \)

3.3 An Improved Carleman Estimate

For this new Carleman estimate, a more refined choice of the functions \(r_1\) and \(r_2\) is needed, since the null displacements constraint is now removed, and the time T must be taken large. As in the conservative case discussed in Sect. 2, let \(T>4\). Then there exists \(\delta _0\) in (0, 1) such that \(\delta _0 T>4\). Set \(a_0=T(1-\delta _0)\) and define

$$\begin{aligned} r_1(x)=\big (1+\frac{a_0}{8}\big )x^2-(4+a_0)x+4+\frac{3a_0}{2}\quad \text { and }\quad r_2(x)=\big (1+\frac{a_0}{8}\big )x^2+\frac{a_0}{2}x. \end{aligned}$$

Notice that these two functions are obtained by choosing \(m=1+\frac{a_0}{8}\) and \(n=\frac{a_0}{2} \) in (3.21); so they satisfy all the required constraints in (3.1) and (3.2).
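In particular, a direct computation shows that this choice satisfies the relations used repeatedly above and below:

$$\begin{aligned}&r_1(1)=r_2(1)=1+\frac{5a_0}{8},\qquad r'_1(1)=-\Big (2+\frac{3a_0}{4}\Big )=-r'_2(1),\qquad r''_1\equiv r''_2\equiv 2+\frac{a_0}{4},\\&\min _{[0,1]}r'_1=r'_1(0)=-(4+a_0)<0,\qquad r'_2(0)=\frac{a_0}{2}>0,\qquad \max _{[0,1]}r'_2=r'_2(1)=2+\frac{3a_0}{4}. \end{aligned}$$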

Now, we are going to introduce a cut-off function that will be useful in the proof of our new and improved Carleman estimate. To this end, let \(\varepsilon \in (0,1/2)\), with \(\varepsilon \) close to zero, and introduce the function \(r\in C^2([0,T])\) with \(r(T)=0=r(0)\) and \(r\equiv 1\) on \([\varepsilon T, (1-\varepsilon )T]\).

Proposition 3.5

Let \(a\) be in \(L^\infty (Q)\). Assume that \(T>4\). Let \((p,q)\) be a solution of (2.3). Then there exist two constants \(C>0\) and \(s_1\geqslant 1\), such that for every \(s\geqslant s_1\),

$$\begin{aligned}&s\int _Q \big (p_t^2+p_x^2\big )e^{2s\varphi _1}dxdt +s\int _Q \big (q_t^2+q_x^2\big )e^{2s\varphi _2}dxdt\\&\quad +s^3\int _Q p^2e^{2s\varphi _1} dxdt +s^3\int _Q q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant Cs\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt. \nonumber \end{aligned}$$
(3.47)

Proof

Let \((u,v)\) be a solution of (2.1), so that \((p,q)=(u+v,v-u)\) solves (2.3). Consider the operator \({\mathcal {P}}\) defined by \({\mathcal {P}}y=y_{tt}-y_{xx}\). Set

$$\begin{aligned} w=ru\text { and }z=rv. \end{aligned}$$
(3.48)

Then

$$\begin{aligned} {\mathcal {P}}w=r{\mathcal {P}}u+2r'u_t+r''u \text { and }{\mathcal {P}}z=r{\mathcal {P}}v+2r'v_t+r''v. \end{aligned}$$

Now \((w,z)\in {\mathcal {S}}\), so we may apply Proposition 3.4. Taking \(\lambda =\lambda _0\), we find that there exist \(s_0>0\) and \(C>0\) such that for every \(s\geqslant s_0\),

$$\begin{aligned}&s\int _Q\varphi _1\big [\big (w_t+z_t\big )^2 +\big (w_x+z_x\big )^2\big ] e^{2s\varphi _1} dxdt \\&\quad +s\int _Q\varphi _2\big [\big (z_t-w_t\big )^2 +\big (z_x-w_x\big )^2\big ]e^{2s\varphi _2} dxdt \nonumber \\&\quad +s^3\int _Q \varphi _1^3\big (w+z\big )^2e^{2s\varphi _1} dxdt +s^3\int _Q \varphi _2^3\big (z-w\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^T\varphi _2(0,t)\big (z_x(0,t)-w_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q \big (r{\mathcal {P}}p+2r'p_t+r''p\big )^2e^{2s\varphi _1} dxdt \nonumber \\&\quad +\int _Q \big (r{\mathcal {P}}q+2r'q_t+r''q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _1(0,t)\big (w_x(0,t)+z_x(0,t)\big )^2e^{2s\varphi _1(0,t)} dt\Big ).\nonumber \end{aligned}$$
(3.49)

Replacing w and z by their expressions given by (3.48), we get

$$\begin{aligned}&s\int _Q r^2\varphi _1\big (p_t^2+p_x^2\big )e^{2s\varphi _1} dxdt +s\int _Qr^2\varphi _2\big (q_t^2 +q_x^2\big )e^{2s\varphi _2} dxdt \\&\quad +s^3\int _Q r^2\varphi _1^3p^2e^{2s\varphi _1} dxdt +s^3\int _Q r^2\varphi _2^3q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q r^2\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q r^2\big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +\int _Q r'^2p_t^2e^{2s\varphi _1} dxdt +\int _Q r'^2q_t^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +\int _Q r''^2p^2e^{2s\varphi _1} dxdt +\int _Q r''^2q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt \nonumber \\&\quad +s\int _Q r'^2\varphi _1p^2e^{2s\varphi _1}dxdt +s\int _Q r'^2\varphi _2q^2 e^{2s\varphi _2}dxdt\Big ). \nonumber \end{aligned}$$
(3.50)

We have used in the left hand side of (3.50) the following estimate

$$\begin{aligned} |r'y+ry_t|^2\geqslant \dfrac{r^2}{2}y_t^2 -r'^2y^2. \end{aligned}$$
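This bound is a direct consequence of Young's inequality \(2ab\geqslant -\frac{a^2}{2}-2b^2\): indeed,

$$\begin{aligned} \big (r'y+ry_t\big )^2=r^2y_t^2+2rr'yy_t+r'^2y^2\geqslant r^2y_t^2-\Big (\frac{r^2}{2}y_t^2+2r'^2y^2\Big )+r'^2y^2=\frac{r^2}{2}y_t^2-r'^2y^2. \end{aligned}$$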

We deduce from (3.50) that there exist \(s_0>0\) and \(C>0\) such that for every \(s\geqslant s_0\),

$$\begin{aligned}&s\int _Q r^2\varphi _1\big (p_t^2+p_x^2\big )e^{2s\varphi _1} dxdt +s\int _Qr^2\varphi _2\big (q_t^2 +q_x^2\big )e^{2s\varphi _2} dxdt \\&\quad +s^3\int _Q r^2\varphi _1^3p^2e^{2s\varphi _1} dxdt +s^3\int _Q r^2\varphi _2^3q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +\int _Q p_t^2e^{2s\varphi _1} dxdt +\int _Q q_t^2e^{2s\varphi _2} dxdt +s\int _Q p^2e^{2s\varphi _1} dxdt \nonumber \\&\quad +s\int _Q q^2e^{2s\varphi _2} dxdt +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \nonumber \end{aligned}$$
(3.51)

It remains to get rid of the function r from the left hand side of (3.51), and absorb the undesired terms from the right hand side to get the claimed estimate. For this purpose, introduce for all \(t\in [0,T]\) the following weighted energy:

$$\begin{aligned} E_s(t)=\dfrac{1}{2}\int _0^1\big (p_t^2(x,t)+p_x^2(x,t)\big )e^{2s\varphi _1(x,t)} dx + \dfrac{1}{2}\int _0^1\big (q_t^2(x,t)+q_x^2(x,t)\big )e^{2s\varphi _2(x,t)} dx. \end{aligned}$$

Differentiating \(E_s\) in t, we find for every \(t\in [0,T]\),

$$\begin{aligned} E'_s(t)&=s\int _0^1\big (p_t^2+p_x^2\big )\varphi _{1t}e^{2s\varphi _1} dx + s\int _0^1\big (q_t^2+q_x^2\big )\varphi _{2t}e^{2s\varphi _2} dx \\&+\int _0^1\big (p_{tt}p_t+p_{xt}p_x\big )e^{2s\varphi _1} dx +\int _0^1\big (q_{tt}q_t+q_{xt}q_x\big )e^{2s\varphi _2} dx. \nonumber \end{aligned}$$
(3.52)

Integrating by parts the terms in \(p_{xt}p_x\) and \(q_{xt}q_x\) in (3.52), it follows from the fact that, for every \(t\in [0,T]\), \(p_t(0,t)=q_t(0,t)=0\), \(p(1,t)=q(1,t)\) and \(q_x(1,t)=-p_x(1,t)\),

$$\begin{aligned} E'_s(t)&=s\int _0^1\big (p_t^2+p_x^2\big )\varphi _{1t}e^{2s\varphi _1} dx + s\int _0^1\big (q_t^2+q_x^2\big )\varphi _{2t}e^{2s\varphi _2} dx \\&+\int _0^1\big (p_{tt}-p_{xx}-2sp_x\varphi _{1x}\big )p_te^{2s\varphi _1} dx \nonumber \\&+\int _0^1\big (q_{tt}-q_{xx}-2sq_x\varphi _{2x}\big )q_te^{2s\varphi _2} dx \nonumber \\&+p_t(1,t)p_x(1,t)\big (e^{2s\varphi _2(1,t)}-e^{2s\varphi _1(1,t)}\big ). \nonumber \end{aligned}$$
(3.53)

Using (2.3) and \(\varphi _1(1,t)=\varphi _2(1,t)\) for every \(t\in [0,T]\), we get

$$\begin{aligned} E'_s(t)&=s\int _0^1\big (p_t^2+p_x^2\big )\varphi _{1t}e^{2s\varphi _1} dx + s\int _0^1\big (q_t^2+q_x^2\big )\varphi _{2t}e^{2s\varphi _2} dx \\&-\int _0^1\big (a(x,t)p+2sp_x\varphi _{1x}\big )p_te^{2s\varphi _1} dx -\int _0^1\big (a(x,t)q+2sq_x\varphi _{2x}\big )q_te^{2s\varphi _2} dx. \nonumber \end{aligned}$$
(3.54)

Using the Cauchy-Schwarz inequality, and keeping in mind that \(\varphi _{1x}<0\) and \(\varphi _{2x}>0\) on \([0,1]\times [0,T]\), we derive:

$$\begin{aligned} \Big |-2s\int _0^1\varphi _{1x}p_tp_xe^{2s\varphi _1} dx\Big |\leqslant -s\int _0^1 \big (p_t^2+p_x^2\big )\varphi _{1x}e^{2s\varphi _1} dx \end{aligned}$$
(3.55)

and

$$\begin{aligned} \Big |-2s\int _0^1\varphi _{2x}q_tq_xe^{2s\varphi _2} dx\Big |\leqslant s\int _0^1\big (q_t^2+q_x^2\big )\varphi _{2x}e^{2s\varphi _2} dx. \end{aligned}$$
(3.56)

Inserting the two preceding inequalities into (3.54), we obtain

$$\begin{aligned} E'_s(t)&\leqslant s\int _0^1\big (p_t^2+p_x^2\big )\big (\varphi _{1t}-\varphi _{1x}\big )e^{2s\varphi _1} dx + s\int _0^1\big (q_t^2+q_x^2\big )\big (\varphi _{2t}+\varphi _{2x}\big )e^{2s\varphi _2} dx \\&\quad +\int _0^1 |a(x,t)pp_t| e^{2s\varphi _1} dx +\int _0^1|a(x,t)qq_t| e^{2s\varphi _2} dx. \nonumber \end{aligned}$$
(3.57)

For all \((x,t)\in [0,1]\times [(1-\varepsilon ) T,T]\), we have

$$\begin{aligned} -\varphi _{1t}(x,t)+\varphi _{1x}(x,t)&=\lambda _0\big (2\mu (t-\frac{T}{2})+r'_1(x)\big )\varphi _{1}(x,t) \\&\geqslant \lambda _0\big (\mu T(1-2\varepsilon )+\min _{[0,1]}r'_1\big )\varphi _{1}(x,t)\\ {}&\ge \lambda _0\big ( T\left( \mu (1-2\varepsilon )-1+\delta _0\right) -4\big )\varphi _{1}(x,t) \end{aligned}$$

since by the definition of \(r_1\), we have \(\min _{[0,1]}r_1'(x)=r_1'(0)=-4-T(1-\delta _0)\).

Given that \(\delta _0T>4\), there exist \(\mu \) in \((1,1+(a_0/8))\) close to one, and \(\varepsilon \) in (0, 1/2) close to zero, such that

$$\begin{aligned} T\left( \mu (1-2\varepsilon )-1+\delta _0\right) -4>0. \end{aligned}$$
(3.58)
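Indeed, the left-hand side of (3.58) depends continuously on \((\mu ,\varepsilon )\) and

$$\begin{aligned} T\left( \mu (1-2\varepsilon )-1+\delta _0\right) -4\;\longrightarrow \;T\delta _0-4>0\qquad \text { as }\mu \rightarrow 1^+,\ \varepsilon \rightarrow 0^+, \end{aligned}$$

so (3.58) holds as soon as \(\mu \) is close enough to one and \(\varepsilon \) close enough to zero.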

Similarly, with \(\mu \) and \(\varepsilon \) as in (3.58), we have

$$\begin{aligned} -\varphi _{2t}(x,t)-\varphi _{2x}(x,t)&=\lambda _0\big (2\mu (t-\frac{T}{2})-r'_2(x)\big )\varphi _{2}(x,t) \\&\geqslant \lambda _0\big (\mu (1-2\varepsilon )T-\max _{[0,1]}r'_2\big )\varphi _{2}(x,t) =\lambda _0\big (\mu (1-2\varepsilon )T-2-\frac{3}{4}(1-\delta _0)T\big )\varphi _{2}(x,t), \end{aligned}$$

and thanks to (3.58), the constant \(\mu (1-2\varepsilon )T-2-\frac{3}{4}(1-\delta _0)T\) is positive; indeed, (3.58) gives \(\mu (1-2\varepsilon )T>4+(1-\delta _0)T>2+\frac{3}{4}(1-\delta _0)T\).

Consequently, there exists a constant \(k>0\) such that for all \((x,t)\in [0,1]\times [(1-\varepsilon ) T,T]\),

$$\begin{aligned} -\varphi _{1t}(x,t)+\varphi _{1x}(x,t)\geqslant k\text { and }-\varphi _{2t}(x,t)-\varphi _{2x}(x,t)\geqslant k. \end{aligned}$$
(3.59)

Now it follows from the Young inequality

$$\begin{aligned} \int _0^1 |a(x,t)pp_t| e^{2s\varphi _1} dx&\leqslant \frac{sk}{4}\int _0^1p_t^2e^{2s\varphi _1} dx +\frac{C}{sk}\int _0^1p^2e^{2s\varphi _1} dx \nonumber \\&\leqslant \frac{sk}{2}E_s(t)+\frac{C}{sk}\int _0^1p^2e^{2s\varphi _1} dx. \end{aligned}$$
(3.60)

Similarly, one shows

$$\begin{aligned} \int _0^1 |a(x,t)qq_t| e^{2s\varphi _2} dx\leqslant \frac{sk}{2}E_s(t)+\frac{C}{sk}\int _0^1q^2e^{2s\varphi _2} dx. \end{aligned}$$
(3.61)

We derive from (3.57)–(3.61) that, for every \(t\in [(1-\varepsilon )T,T]\),

$$\begin{aligned} E'_s(t)+skE_s(t)\leqslant \frac{C}{sk}\Big (\int _0^1p^2e^{2s\varphi _1} dx + \int _0^1q^2e^{2s\varphi _2} dx\Big ). \end{aligned}$$

The preceding inequality leads to

$$\begin{aligned} \big (E_s(t)e^{skt}\big )'\leqslant \frac{Ce^{skt}}{sk}\Big (\int _0^1p^2e^{2s\varphi _1} dx + \int _0^1q^2e^{2s\varphi _2} dx\Big ). \end{aligned}$$
(3.62)

Now, set \({\widetilde{T}}=(1-\varepsilon )T\). Integrating (3.62) over \([{\widetilde{T}},t]\) for every \(t\in [{\widetilde{T}},T]\) yields,

$$\begin{aligned}&E_s(t)\leqslant E_s({\widetilde{T}})e^{sk({\widetilde{T}}-t)} \\&\quad +\frac{C}{sk}\Big [\int _{\widetilde{T}}^te^{sk(\tau -t)}\Big (\int _0^1p^2e^{2s\varphi _1} dx\Big )d\tau +\int _{{\widetilde{T}}}^te^{sk(\tau -t)}\Big (\int _0^1q^2e^{2s\varphi _2} dx\Big )d\tau \Big ].\nonumber \end{aligned}$$
(3.63)

Integrating (3.63) over \([{\widetilde{T}},T]\), we get the existence of a constant \(C>0\) such that

$$\begin{aligned} s\int _{{\widetilde{T}}}^T E_s(t)dt\leqslant \dfrac{1}{k} E_s(\widetilde{T}) +C\Big (\int _Q p^2e^{2s\varphi _1} dxdt+\int _Q q^2e^{2s\varphi _2} dxdt\Big ). \end{aligned}$$
(3.64)
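For the reader's convenience, (3.64) is obtained by multiplying the integrated form of (3.63) by s and using the elementary bounds (valid since \(sk>0\))

$$\begin{aligned} \int _{{\widetilde{T}}}^Te^{sk({\widetilde{T}}-t)} dt\leqslant \frac{1}{sk}\quad \text { and }\quad \int _{{\widetilde{T}}}^T\!\int _{{\widetilde{T}}}^te^{sk(\tau -t)}f(\tau ) d\tau dt=\int _{{\widetilde{T}}}^Tf(\tau )\Big (\int _{\tau }^Te^{sk(\tau -t)} dt\Big )d\tau \leqslant \frac{1}{sk}\int _{{\widetilde{T}}}^Tf(\tau ) d\tau , \end{aligned}$$

applied with \(f(\tau )=\int _0^1p^2e^{2s\varphi _1} dx\) and \(f(\tau )=\int _0^1q^2e^{2s\varphi _2} dx\); the extra factor \(\frac{1}{sk}\) is then absorbed, for \(s\geqslant 1\), into the constant C.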

On the interval \([0,\varepsilon T]\), one first multiplies (3.54) by \(-1\) and then proceeds exactly as above to derive the inequality

$$\begin{aligned} s\int _0^{\varepsilon T} E_s(t)dt\leqslant \dfrac{1}{k} E_s(\varepsilon T) +C\Big (\int _Q p^2e^{2s\varphi _1} dxdt+\int _Q q^2e^{2s\varphi _2} dxdt\Big ). \end{aligned}$$
(3.65)

Gathering (3.64) and (3.65), we find

$$\begin{aligned}{} & {} s\int _{{\widetilde{T}}}^T E_s(t)dt+s\int _0^{\varepsilon T} E_s(t)dt\leqslant \dfrac{1}{k} \left( E_s(\widetilde{T})+E_s(\varepsilon T)\right) \nonumber \\{} & {} \quad +C\Big (\int _Q p^2e^{2s\varphi _1} dxdt+\int _Q q^2e^{2s\varphi _2} dxdt\Big ). \end{aligned}$$
(3.66)

We shall now estimate the first two terms in the right hand side of (3.66).

Proceeding as above, but now simply bounding \(|\varphi _{it}|\) and \(|\varphi _{ix}|\) by a constant on Q and using Young's inequality on the terms involving the potential a, we readily check that for all \(t\in [0,T]\),

$$\begin{aligned} \big |E'_s(t)\big |&\leqslant Cs\int _0^1\big (p_t^2+p_x^2\big )e^{2s\varphi _1} dx \nonumber \\&+ Cs\int _0^1\big (q_t^2+q_x^2\big )e^{2s\varphi _2} dx \\&+s E_s(t)+\frac{C}{s}\Big (\int _0^1p^2e^{2s\varphi _1} dx +\int _0^1q^2e^{2s\varphi _2} dx\Big ). \nonumber \\ {}&\le Cs E_s(t)+\frac{C}{s}\Big (\int _0^1p^2e^{2s\varphi _1} dx +\int _0^1q^2e^{2s\varphi _2} dx\Big ). \nonumber \end{aligned}$$
(3.67)

Now, we are going to estimate \(E_s(\varepsilon T)\). We deduce from (3.67) that there exists a constant \(C>0\) such that for all \(t\in [0,T]\),

$$\begin{aligned} -E'_s(t)\leqslant C\Big [sE_s(t)+\frac{1}{s}\Big (\int _0^1p^2e^{2s\varphi _1} dx +\int _0^1q^2e^{2s\varphi _2} dx\Big )\Big ]. \end{aligned}$$
(3.68)

Let \(t\in [\varepsilon T,{\widetilde{T}}]\). Integrating (3.68) over \([\varepsilon T,t]\) yields

$$\begin{aligned} E_s(\varepsilon T)\leqslant E_s(t)+ C\Big [s\int _{\varepsilon T}^tE_s(\tau )d\tau +\frac{1}{s}\int _{\varepsilon T}^t\int _0^1\left( p^2e^{2s\varphi _1} +q^2e^{2s\varphi _2}\right) dxd\tau \Big ]. \end{aligned}$$

Integrating the last inequality over \([\varepsilon T,{\widetilde{T}}]\), dividing by \({\widetilde{T}}-\varepsilon T\) and keeping in mind that \(r\equiv 1\) on \([\varepsilon T,{\widetilde{T}}]\), it follows that there exists a constant \(C>0\) such that for all \(s\geqslant 1\) large enough:

$$\begin{aligned} E_s(\varepsilon T)\leqslant C\Big (s\int _{\varepsilon T}^{\widetilde{T}}r^2 E_s(t)dt+\int _Q\left( p^2e^{2s\varphi _1}+ q^2e^{2s\varphi _2}\right) dxdt\Big ). \end{aligned}$$
(3.69)

Similarly, one derives from (3.67) that, for every t in [0, T], one has the following inequality

$$\begin{aligned} E'_s(t)\leqslant C\Big [sE_s(t)+\frac{1}{s}\Big (\int _0^1p^2e^{2s\varphi _1} dx +\int _0^1q^2e^{2s\varphi _2} dx\Big )\Big ]. \end{aligned}$$
(3.70)

Let \(t\in [\varepsilon T,{\widetilde{T}}]\). Integrating (3.70) over \([t,{\widetilde{T}}]\) yields

$$\begin{aligned} E_s({\widetilde{T}})\leqslant E_s(t)+ C\Big [s\int _t^{\widetilde{T}}E_s(\tau )d\tau +\frac{1}{s}\int _t^{\widetilde{T}}\int _0^1\left( p^2e^{2s\varphi _1} +q^2e^{2s\varphi _2}\right) dxd\tau \Big ]. \end{aligned}$$

Proceeding as in the derivation of (3.69), one gets

$$\begin{aligned} E_s({\widetilde{T}})\leqslant C\Big (s\int _{\varepsilon T}^{\widetilde{T}}r^2 E_s(t)dt+\int _Q\left( p^2e^{2s\varphi _1}+ q^2e^{2s\varphi _2}\right) dxdt\Big ). \end{aligned}$$
(3.71)

Combining (3.66), (3.69) and (3.71), and using the fact that \(r\equiv 1\) in \([\varepsilon T, {\widetilde{T}}]\), it follows that there is a constant \(C>0\) such that for every \(s\geqslant 1\) large enough,

$$\begin{aligned} s\int _0^T E_s(t)dt\leqslant C\Big (s\int _0^{T}r^2E_s(t)dt+\int _Q p^2e^{2s\varphi _1} dxdt +\int _Q q^2e^{2s\varphi _2} dxdt\Big ). \nonumber \\ \end{aligned}$$
(3.72)

The following inequality readily follows from (3.51):

$$\begin{aligned}&s\int _0^T r^2E_s(t)dt +s^3\int _Q r^2p^2e^{2s\varphi _1} dxdt +s^3\int _Q r^2q^2e^{2s\varphi _2} dxdt \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +\int _Q p_t^2e^{2s\varphi _1} dxdt +\int _Q q_t^2e^{2s\varphi _2} dxdt +s\int _Q p^2e^{2s\varphi _1} dxdt \nonumber \\&\quad +s\int _Q q^2e^{2s\varphi _2} dxdt +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \nonumber \end{aligned}$$
(3.73)

Gathering (3.72) and (3.73), one derives that there exists a constant \(C>0\) such that for every \(s\geqslant 1\) large enough,

$$\begin{aligned}&s\int _0^TE_s(t)dt +s^3\int _Q r^2p^2e^{2s\varphi _1} dxdt +s^3\int _Q r^2q^2e^{2s\varphi _2} dxdt \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +\int _Q p_t^2e^{2s\varphi _1} dxdt +\int _Q q_t^2e^{2s\varphi _2} dxdt +s\int _Q p^2e^{2s\varphi _1} dxdt \nonumber \\&\quad +s\int _Q q^2e^{2s\varphi _2} dxdt +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \nonumber \end{aligned}$$
(3.74)

The third and fourth integrals in the right hand side of (3.74) can be absorbed in the left hand side by the first integral. Then there exists a constant \(C>0\) such that for every \(s\geqslant 1\) large enough,

$$\begin{aligned}&s\int _0^TE_s(t)dt +s^3\int _Q r^2p^2e^{2s\varphi _1} dxdt +s^3\int _Q r^2q^2e^{2s\varphi _2} dxdt\nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _Q p^2e^{2s\varphi _1} dxdt +s\int _Q q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \end{aligned}$$
(3.75)

In order to absorb the lower order terms in the right hand side of (3.75), we shall rely on the following result, which is the analog of [5, Lemma 2.4]:

Lemma 3.6

There exist constants \(C>0\) and \(s_1\ge 1\) such that for every \(s\geqslant s_1\),

$$\begin{aligned} s^2\int _0^1 p^2e^{2s\varphi _1} dx + s^2\int _0^1 q^2e^{2s\varphi _2} dx\leqslant C\Big (\int _0^1 p_x^2e^{2s\varphi _1} dx+\int _0^1 q_x^2e^{2s\varphi _2} dx\Big ). \nonumber \\ \end{aligned}$$
(3.76)

Proof

Differentiating \(\varphi _{ix}e^{2s\varphi _i}\) with respect to x, we find for every \(i=1,2\),

$$\begin{aligned} \big (\varphi _{ix}e^{2s\varphi _i}\big )_x=\big (\varphi _{ixx}+2s\varphi _{ix}^2\big )e^{2s\varphi _i}. \end{aligned}$$

Multiplying the identity for \(i=1\) by \(p^2\) and the one for \(i=2\) by \(q^2\), then integrating by parts over (0, 1), we successively get

$$\begin{aligned} 2s\int _0^1p^2\varphi _{1x}^2 e^{2s\varphi _1}dx&=-\int _0^1 p^2\varphi _{1xx}e^{2s\varphi _1}dx \nonumber \\ {}&-2\int _0^1 p_xp\varphi _{1x}e^{2s\varphi _1} dx +\varphi _{1x}(1,t)p(1,t)^2e^{2s\varphi _1(1,t)}, \end{aligned}$$
(3.77)

and

$$\begin{aligned} 2s\int _0^1 q^2\varphi _{2x}^2 e^{2s\varphi _2}dx&=-\int _0^1 q^2\varphi _{2xx}e^{2s\varphi _2}dx \nonumber \\ {}&-2\int _0^1 q_xq\varphi _{2x}e^{2s\varphi _2} dx +\varphi _{2x}(1,t)q(1,t)^2e^{2s\varphi _2(1,t)}. \end{aligned}$$
(3.78)

Adding (3.77) and (3.78), it follows from the fact that \(\varphi _{1x}(1,t)=-\varphi _{2x}(1,t)\), \(\varphi _1(1,t)=\varphi _2(1,t)\) and \(p(1,t)=q(1,t)\) that the boundary terms cancel, and hence:

$$\begin{aligned} 2s\int _0^1\big (p^2\varphi _{1x}^2 e^{2s\varphi _1}+q^2\varphi _{2x}^2 e^{2s\varphi _2}\big )dx&=-\int _0^1\big (p^2\varphi _{1xx}e^{2s\varphi _1}+q^2\varphi _{2xx}e^{2s\varphi _2}\big )dx \nonumber \\ {}&-2\int _0^1\big (p_xp\varphi _{1x}e^{2s\varphi _1}+q_xq\varphi _{2x}e^{2s\varphi _2}\big ) dx. \end{aligned}$$
(3.79)

Note that there exists \(\mu _0>0\) such that, for each \(i=1,2\), \(\varphi _{ix}^2\geqslant \mu _0\) on Q. We derive from (3.79) and the Young inequality that there exists a constant \(C>0\) such that

$$\begin{aligned}&2s\mu _0\int _0^1\big (p^2e^{2s\varphi _1}+q^2e^{2s\varphi _2}\big )dx \leqslant C\int _0^1 p^2e^{2s\varphi _1} dx +\mu _0 s\int _0^1 p^2e^{2s\varphi _1} dx\nonumber \\&\quad +C\int _0^1 q^2e^{2s\varphi _2}dx +\mu _0 s\int _0^1 q^2e^{2s\varphi _2} dx +\dfrac{C}{\mu _0 s}\int _0^1 p_x^2e^{2s\varphi _1} dx +\dfrac{C}{\mu _0 s}\int _0^1 q_x^2e^{2s\varphi _2} dx. \nonumber \\ \end{aligned}$$
(3.80)
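Here, the cross terms coming from (3.79) were handled by Young's inequality and the boundedness of \(\varphi _{ixx}\) and \(\varphi _{ix}\) on Q; for instance,

$$\begin{aligned} 2\Big |\int _0^1p_xp\varphi _{1x}e^{2s\varphi _1} dx\Big |\leqslant \mu _0 s\int _0^1p^2e^{2s\varphi _1} dx+\frac{1}{\mu _0 s}\int _0^1\varphi _{1x}^2p_x^2e^{2s\varphi _1} dx \leqslant \mu _0 s\int _0^1p^2e^{2s\varphi _1} dx+\frac{C}{\mu _0 s}\int _0^1p_x^2e^{2s\varphi _1} dx. \end{aligned}$$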

Then choosing s sufficiently large, the first four integrals in the right hand side of (3.80) can be absorbed in the left hand side, which gives the following estimate

$$\begin{aligned} s\int _0^1\big (p^2e^{2s\varphi _1}+q^2e^{2s\varphi _2}\big )dx\leqslant \dfrac{C}{s}\int _0^1\big (p_x^2e^{2s\varphi _1} +q_x^2e^{2s\varphi _2}\big ) dx. \end{aligned}$$

This completes the proof of Lemma 3.6 since \(s\geqslant 1\). \(\square \)

It follows from (3.75) and (3.76) that

$$\begin{aligned}&s^3\int _Q p^2e^{2s\varphi _1} dxdt +s^3\int _Q q^2e^{2s\varphi _2} dxdt \leqslant Cs\int _Q \big (p_x^2e^{2s\varphi _1}+q_x^2e^{2s\varphi _2}\big ) dxdt \nonumber \\&\quad \leqslant Cs\int _0^T E_s(t) dt \nonumber \\&\quad \leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt+s\int _Q p^2e^{2s\varphi _1} dxdt \nonumber \\&\quad \quad +s\int _Q q^2e^{2s\varphi _2} dxdt +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \end{aligned}$$
(3.81)

Combining (3.75) and (3.81), we get

$$\begin{aligned}&s\int _0^TE_s(t)dt +s^3\int _Q p^2e^{2s\varphi _1} dxdt +s^3\int _Q q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q\big ({\mathcal {P}}p\big )^2e^{2s\varphi _1} dxdt +\int _Q \big ({\mathcal {P}}q\big )^2e^{2s\varphi _2} dxdt +s\int _Q p^2e^{2s\varphi _1} dxdt \nonumber \\&\quad +s\int _Q q^2e^{2s\varphi _2} dxdt +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \end{aligned}$$
(3.82)

The third and fourth integrals in the right hand side of (3.82) can now be absorbed in the left hand side. Moreover, since \((p,q)\) solves (2.3), we have \({\mathcal {P}}p=-a(x,t)p\) and \({\mathcal {P}}q=-a(x,t)q\); it follows that

$$\begin{aligned}&s\int _0^TE_s(t)dt +s^3\int _Q p^2e^{2s\varphi _1} dxdt +s^3\int _Q q^2e^{2s\varphi _2} dxdt\nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant C\Big (\int _Q p^2e^{2s\varphi _1} dxdt +\int _Q q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt\Big ). \end{aligned}$$
(3.83)

Therefore, the first two integrals in the right hand side of (3.83) can also be absorbed in the left hand side, which ends the proof of Proposition 3.5. \(\square \)

Corollary 3.7

(Main Theorem) Let \(T>4\). Let \((u,v)\) be a solution of (2.1). Assume that \(a\) lies in \( L^\infty (Q)\).

There exist two constants \(C>0\) and \(s_1\geqslant 1\), such that for every \(s\geqslant s_1\),

$$\begin{aligned}&s\int _Q \big (u_t^2+u_x^2+v_t^2+v_x^2\big )e^{2s\varphi _2}dxdt +s^3\int _Q \big (u^2+v^2\big )e^{2s\varphi _2} dxdt\nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant Cs\int _0^T\varphi _1(0,t)\big (u_x(0,t)+v_x(0,t)\big )^2 e^{2s\varphi _1(0,t)} dt. \end{aligned}$$
(3.84)

Proof

Let \((u,v)\) be a solution of (2.1). Recall that \((p,q)=(u+v,v-u)\) satisfies (2.3). Replacing p and q by their expressions in (3.47), we find that there exist \(C>0\) and \(s_1>0\) such that for every \(s\geqslant s_1\),

$$\begin{aligned}&s\int _Q \big [\big (u_t+v_t\big )^2+\big (u_x+v_x\big )^2\big ]e^{2s\varphi _1}dxdt\nonumber \\&\quad +s\int _Q \big [\big (v_t-u_t\big )^2+\big (v_x-u_x\big )^2\big ]e^{2s\varphi _2}dxdt \nonumber \\&\quad +s^3\int _Q \big (u+v\big )^2e^{2s\varphi _1} dxdt +s^3\int _Q \big (v-u\big )^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)\big (v_x(0,t)-u_x(0,t)\big )^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant Cs\int _0^T\varphi _1(0,t)\big (u_x(0,t)+v_x(0,t)\big )^2 e^{2s\varphi _1(0,t)} dt. \end{aligned}$$
(3.85)

We get (3.84) by using the fact that \(\varphi _2\leqslant \varphi _1\) in the left hand side of (3.85). \(\square \)

4 Exact Boundary Controllability

Set \(H:=H_0^1(0,1)\times V\) and denote by \(H'\) its topological dual. Given \(a\in L^\infty (Q)\), \((u^1,v^1)\in H'\) and \((u^0,v^0)\in \big (L^2(0,1)\big )^2\), consider the following boundary controllability problem: find a function \(g\in L^2(0,T)\) such that, if the pair \((\sigma ,\rho )\) is the solution of the system

$$\begin{aligned} \left\{ \begin{array}{ll} \sigma _{tt} - \sigma _{xx} +a(x,t)\sigma = 0 &{}\quad \text {in }Q, \\ \rho _{tt} - \rho _{xx} +a(x,t)\rho = 0 &{}\quad \text {in }Q, \\ \sigma (0,t)=\rho (0,t)=g(t)&{}\quad \text {in }(0,T), \\ \sigma (1,t)=0,\,\rho _x(1,t)=0 &{}\quad \text {in }(0,T), \\ \sigma (x,0)=u^0(x),\,\rho (x,0)=v^0(x)&{}\quad \text {in }(0,1), \\ \sigma _t(x,0)=u^1(x),\,\rho _t(x,0)=v^1(x)&{}\quad \text {in }(0,1), \end{array}\right. \end{aligned}$$
(4.1)

then

$$\begin{aligned} \big (\sigma (x,T),\rho (x,T),\sigma _t(x,T),\rho _t(x,T)\big )=(0,0,0,0)\text { in }(0,1). \end{aligned}$$
(4.2)

Let \((f_1,f_2)\in \big (L^2(Q)\big )^2\) and consider the backward system

$$\begin{aligned} \left\{ \begin{array}{ll} w_{tt} - w_{xx} +a(x,t)w = f_1 &{}\quad \text {in }Q, \\ z_{tt} - z_{xx} +a(x,t)z = f_2 &{}\quad \text {in }Q, \\ w(0,t)=0,\,z(0,t)=0&{}\quad \text {in }(0,T), \\ w(1,t)=0,\,z_x(1,t)=0 &{}\quad \text {in }(0,T), \\ w(x,T)=z(x,T)=0&{}\quad \text {in }(0,1), \\ w_t(x,T)=z_t(x,T)=0&{}\quad \text {in }(0,1). \end{array}\right. \end{aligned}$$
(4.3)

The solution of (4.1) is defined by transposition (see [32, Theorem 9.1, Chapter 3]) in the following sense:

$$\begin{aligned} \int _Q \sigma f_1 dxdt +\int _Q \rho f_2 dxdt&=\langle (u^1,v^1),(w(.,0),z(.,0))\rangle \\&\quad -\int _0^1 u^0(x)w_t(x,0) dx -\int _0^1 v^0(x)z_t(x,0) dx\\ {}&\quad -\int _0^T g(t)(w_x(0,t) +z_x(0,t)) dt, \end{aligned}$$

where \(\langle \cdot ,\cdot \rangle \) stands for the duality product between H and its topological dual \(H'\). It can be proved in view of [23] that there is a solution pair of (4.1) satisfying

$$\begin{aligned} (\sigma ,\rho )\in C([0,T];(L^2(0,1))^2)\cap C^1([0,T];H'). \end{aligned}$$

Thanks to the Hilbert Uniqueness Method developed by Lions in [33], to solve the exact controllability problem, it is enough to prove an observability inequality for the following adjoint system

$$\begin{aligned} \left\{ \begin{array}{ll} w_{tt} - w_{xx} +a(x,t)w = 0 &{}\quad \text {in }Q, \\ z_{tt} - z_{xx} +a(x,t)z = 0 &{}\quad \text {in }Q, \\ w(0,t)=0,\,z(0,t)=0&{}\quad \text {in }(0,T), \\ w(1,t)=0,\,z_x(1,t)=0 &{}\quad \text {in }(0,T), \\ \big (w(x,T),z(x,T)\big )=\big (w^0(x),z^0(x)\big ) \in H, \\ \big (w_t(x,T),z_t(x,T)\big )=\big (w^1(x),z^1(x)\big ) \in \big (L^2(0,1)\big )^2. \end{array}\right. \end{aligned}$$
(4.4)

The preceding system has a unique weak solution satisfying:

$$\begin{aligned} (w,z)\in C([0,T];H)\cap C^1([0,T];(L^2(0,1))^2). \end{aligned}$$

Proposition 4.1

(Observability inequality) Let \(T>4\). Let \((w,z)\) be a solution of (4.4). Consider the following energy:

$$\begin{aligned} E_{w,z}(t)=\dfrac{1}{2}\int _0^1\big (w_t^2(.,t)+w_x^2(.,t)+z_t^2(.,t)+z_x^2(.,t)\big ) dx. \end{aligned}$$

Then there exists a constant \(C>0\) independent of the data \((w^0,z^0)\) and \((w^1,z^1)\) such that the following inequality holds

$$\begin{aligned} E_{w,z}(0)\leqslant C\int _0^T\big (w_x(0,t)+z_x(0,t)\big )^2 dt. \end{aligned}$$
(4.5)

Proof

Proving (4.5) is a two-step process: first, one establishes an energy estimate; then one combines it with the Carleman estimate of Corollary 3.7 to derive the claimed observability estimate.

Step 1: Energy estimate. Multiplying the first equation in (4.4) by \(w_t\) and the second one by \(z_t\), integrating by parts over (0, 1) (the boundary terms vanish thanks to the boundary conditions in (4.4)) and adding the results yields

$$\begin{aligned} E'_{w,z}(t)=-\int _0^1 a(.,t)\big (ww_t +zz_t\big ) dx. \end{aligned}$$
(4.6)

Using the Cauchy-Schwarz and Poincaré inequalities, it follows from (4.6) that there exists a constant \(C>0\) such that for every \(t\in [0,T]\),

$$\begin{aligned} E'_{w,z}(t)&\leqslant C\int _0^1 \big (\big |ww_t\big | +\big |zz_t\big |\big ) dx \nonumber \\&\leqslant C\left( \Big (\int _0^1w^2dx\Big )^{\frac{1}{2}}\Big (\int _0^1w_t^2dx\Big )^{\frac{1}{2}}+\Big (\int _0^1z^2dx\Big )^{\frac{1}{2}}\Big (\int _0^1z_t^2dx\Big )^{\frac{1}{2}}\right) \nonumber \\&\leqslant C\left( \Big (\int _0^1w_x^2dx\Big )^{\frac{1}{2}}\Big (\int _0^1w_t^2dx\Big )^{\frac{1}{2}}+\Big (\int _0^1z_x^2dx\Big )^{\frac{1}{2}}\Big (\int _0^1z_t^2dx\Big )^{\frac{1}{2}}\right) \nonumber \\&\leqslant CE_{w,z}(t). \end{aligned}$$
(4.7)

Now let s and t be in [0, T]. If \(s<t\), we observe that (4.7) implies

$$\begin{aligned} \big (E_{w,z}(t)e^{-Ct}\big )'\leqslant 0. \end{aligned}$$

We find after integrating this inequality over \([s,t]\),

$$\begin{aligned} E_{w,z}(t)\leqslant E_{w,z}(s)e^{C(t-s)}. \end{aligned}$$
(4.8)

On the other hand, there exists a constant \(C>0\) such that for every \(t\in [0,T]\),

$$\begin{aligned} -E'_{w,z}(t)\leqslant CE_{w,z}(t), \end{aligned}$$

which we obtain by multiplying (4.6) by \(-1\) and then estimating the result as previously. Now, if \(s\geqslant t\), proceeding as before, we find that there exists a constant \(C>0\) such that

$$\begin{aligned} E_{w,z}(t)\leqslant E_{w,z}(s)e^{C(s-t)}. \end{aligned}$$
(4.9)

Hence, we derive from (4.8) and (4.9) that there exists a constant \(C>0\) such that for all st in [0, T],

$$\begin{aligned} E_{w,z}(t)\leqslant E_{w,z}(s)e^{C|t-s|}. \end{aligned}$$
(4.10)

Step 2: Observability estimate. Applying Corollary 3.7 with \(s=s_1\) fixed, and using that the weights \(\varphi _1\) and \(\varphi _2\) are bounded from above and from below by positive constants on Q, it follows that

$$\begin{aligned} \int _0^T E_{w,z}(t) dt \leqslant C\int _0^T\varphi _1(0,t)\big (w_x(0,t)+z_x(0,t)\big )^2 e^{2s\varphi _1(0,t)} dt. \end{aligned}$$
(4.11)

Therefore, since (4.10) gives \(E_{w,z}(0)\leqslant e^{CT}E_{w,z}(t)\) for every \(t\in [0,T]\), it suffices to integrate this inequality over [0, T], use (4.11) and bound \(\varphi _1(0,t)e^{2s\varphi _1(0,t)}\) by a constant to derive the claimed observability estimate. \(\square \)

Theorem 4.2

Let \(T>4\). Let \(a\in L^\infty (Q)\), \((u^0,v^0)\in \big (L^2(0,1)\big )^2\) and \((u^1,v^1)\in H'\). There exists a unique control g of minimal norm in \(L^2(0,T)\) such that the pair \((\sigma ,\rho )\) solution of (4.1) satisfies (4.2).

Furthermore, there exists a positive constant \(C_0\), depending only on T and \(||a||_{L^\infty (Q)}\), such that

$$\begin{aligned} ||g||_{L^2(0,T)}\le C_0\left\{ ||u^0||_{L^2(0,1)}+||u^1||_{H^{-1}(0,1)}+||v^0||_{L^2(0,1)}+||v^1||_{V'}\right\} . \end{aligned}$$
(4.12)

Proof

Introduce the functional \({\mathcal {J}}\) defined by

$$\begin{aligned}{}\begin{array}{rl} {\mathcal {J}}:&{}H\times (L^2(0,1))^2\rightarrow {\mathbb {R}} \\ &{}\big ((w^0,z^0),(w^1,z^1)\big )\mapsto \dfrac{1}{2}\displaystyle \int _0^T\big (w_x(0,t)+z_x(0,t)\big )^2 dt \\ &{}+\displaystyle \int _0^1 u^0w_t(.,0) dx +\int _0^1 v^0z_t(.,0) dx-\langle (u^1,v^1),(w(.,0),z(.,0))\rangle , \end{array} \end{aligned}$$
(4.13)

where the pair \((w,z)\) is the corresponding solution of (4.4). The quadratic functional \({\mathcal {J}}\) is continuous and strictly convex. Moreover, \({\mathcal {J}}\) is coercive. Indeed, using Proposition 4.1, one checks that there exists a constant \(C>0\) such that

$$\begin{aligned} {\mathcal {J}}\big ((w^0,z^0),(w^1,z^1)\big )&\geqslant CE_{w,z}(0) +\displaystyle \int _0^1 u^0w_t(.,0) dx +\int _0^1 v^0z_t(.,0) dx \\&\quad -\langle (u^1,v^1),(w(.,0),z(.,0))\rangle . \end{aligned}$$

It follows from the Cauchy-Schwarz inequality

$$\begin{aligned} {\mathcal {J}}\big ((w^0,z^0),(w^1,z^1)\big )&\geqslant CE_{w,z}(0) -\Vert (u^1,v^1)\Vert _{H'}\Vert (w(.,0),z(.,0))\Vert _{H} \\&\quad -\Vert (u^0,v^0)\Vert _{(L^2(0,1))^2}\Vert (w_t(.,0),z_t(.,0))\Vert _{(L^2(0,1))^2}. \end{aligned}$$

One then deduces

$$\begin{aligned} {\mathcal {J}}\big ((w^0,z^0),(w^1,z^1)\big )&\geqslant C\Vert ((w^0,z^0),(w^1,z^1))\Vert _{H\times (L^2(0,1))^2}^2 \\&\quad -M\Vert ((w^0,z^0),(w^1,z^1))\Vert _{H\times (L^2(0,1))^2}, \end{aligned}$$

where the constant M depends on the initial conditions \((u^0,v^0)\) and \((u^1,v^1)\) only.
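Indeed, the data of (4.4) are prescribed at time T, so that (4.10), applied between the times 0 and T, gives

$$\begin{aligned} E_{w,z}(0)\geqslant e^{-CT}E_{w,z}(T)=\frac{e^{-CT}}{2}\Big (\Vert w^0_x\Vert ^2_{L^2(0,1)}+\Vert z^0_x\Vert ^2_{L^2(0,1)}+\Vert w^1\Vert ^2_{L^2(0,1)}+\Vert z^1\Vert ^2_{L^2(0,1)}\Big ), \end{aligned}$$

while, in the other direction, \(\Vert (w(.,0),z(.,0))\Vert _{H}\) and \(\Vert (w_t(.,0),z_t(.,0))\Vert _{(L^2(0,1))^2}\) are bounded by \(CE_{w,z}(0)^{1/2}\leqslant Ce^{CT/2}E_{w,z}(T)^{1/2}\), again by (4.10); here we equip \(H_0^1(0,1)\) and V with the norm \(\Vert u_x\Vert _{L^2(0,1)}\), which is legitimate by the Poincaré inequality.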

Consequently, the functional \({\mathcal {J}}\) has a unique minimizer \(\big (({\hat{w}}^0,{\hat{z}}^0),({\hat{w}}^1,{\hat{z}}^1)\big )\), solution of the Euler equation

$$\begin{aligned}&\int _0^T\big ({\hat{w}}_x(0,t)+{\hat{z}}_x(0,t)\big )\big (w_x(0,t)+z_x(0,t)\big ) dt +\int _0^1 u^0w_t(.,0) dx \\&\quad +\int _0^1 v^0z_t(.,0) dx-\langle (u^1,v^1),(w(.,0),z(.,0))\rangle =0, \nonumber \end{aligned}$$
(4.14)

for every solution pair \((w,z)\) of (4.4).

Note that in (4.14), \(({\hat{w}},{\hat{z}})\) is the solution of (4.4) corresponding to the minimizer of \({\mathcal {J}}\). Setting \(g(t)={\hat{w}}_x(0,t)+{\hat{z}}_x(0,t)\), the following duality identity holds:

$$\begin{aligned}&\int _0^T\big ({\hat{w}}_x(0,t)+{\hat{z}}_x(0,t)\big )\big (w_x(0,t)+z_x(0,t)\big ) dt +\displaystyle \int _0^1 u^0w_t(.,0) dx \\&\quad +\int _0^1 v^0z_t(.,0) dx-\langle (u^1,v^1),(w(.,0),z(.,0))\rangle -\int _0^1\sigma (.,T)w^1 dx \nonumber \\&\quad -\int _0^1 \rho (.,T)z^1 dx +\langle ( \sigma _t(.,T),\rho _t(.,T)),(w^0,z^0) \rangle =0, \nonumber \end{aligned}$$
(4.15)

for every solution pair \((w,z)\) of (4.4).

Thanks to the Euler equation, it readily follows from (4.15) that

$$\begin{aligned} -\int _0^1 \sigma (.,T)w^1 dx -\int _0^1 \rho (.,T)z^1 dx +\langle (w^0,z^0),(\sigma _t(.,T),\rho _t(.,T))\rangle =0. \end{aligned}$$
(4.16)

Taking \((w^0,z^0)=(0,0)\) in (4.16) and letting \((w^1,z^1)\) range over \(\big (L^2(0,1)\big )^2\), we find \((\sigma (x,T),\rho (x,T))=(0,0)\) in (0, 1). We deduce that (4.16) reduces to

$$\begin{aligned} \langle ( \sigma _t(.,T),\rho _t(.,T)),(w^0,z^0)\rangle =0,\quad \forall (w^0,z^0)\in H. \end{aligned}$$

Therefore \((\sigma _t(.,T),\rho _t(.,T))=(0,0)\) in \(H'\), and (4.2) holds as claimed.

Finally, taking \((w,z)=({\hat{w}},{\hat{z}})\) in the Euler equation (4.14) and using the observability inequality (4.5), the estimate (4.12) follows at once, which completes the proof of Theorem 4.2. \(\square \)

5 Final Comments and Open Problems

The Carleman estimate in our main result, that is, Theorem 3.1, has several applications, one of which is the controllability problem discussed in the preceding section. We provide another application below, and also comment on the case of two different potentials.

5.1 Indirect Boundary Controllability of a System of Wave Equations Involving the Kirchhoff Boundary Conditions

When solving the simultaneous boundary controllability problem for System (1.1), we solved two controllability problems in one shot: namely, the one of interest, and the following indirect controllability problem. Let \(T>0\). Set \(V=\{u\in H^1(0,1);u(0)=0\},\) and denote by \(V'\) the topological dual of V. Set \(Q=(0,1)\times (0,T)\), and let \(a\) be in \(L^\infty (Q)\).

Consider the following boundary controllability problem:

Given \((y^0,y^1)\) in \(L^2(0,1)\times V'\) and \((z^0,z^1)\) in \(L^2(0,1)\times V'\), find a control function h in \(L^2(0,T)\) such that the solution couple \((y,z)\) of the following system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}y_{tt} - y_{xx} +a(x,t)y= 0 \text { in }Q, \\ &{}z_{tt} - z_{xx} +a(x,t)z = 0 \text { in }Q, \\ &{}y(0,t)=h(t),\, z(0,t)=0,\quad y(1,t)=z(1,t),\, y_x(1,t)+z_x(1,t)=0 \text { in }(0,T), \\ &{}y(.,0)=y^0,\, y_t(.,0)=y^1,\quad z(.,0)=z^0,\, z_t(.,0)=z^1, \end{array}\right. \end{aligned}$$
(5.1)

satisfies

$$\begin{aligned} y(.,T)=0,\,y_t(.,T)=0,\quad z(.,T)=0,\,z_t(.,T)=0. \end{aligned}$$
(5.2)

Indeed the adjoint system to (5.1) is given by

$$\begin{aligned} \left\{ \begin{array}{rl} &{}p_{tt} - p_{xx} +a(x,t)p= 0 \text { in }Q, \\ &{}q_{tt} - q_{xx} +a(x,t)q = 0 \text { in }Q, \\ &{}p(0,t)=0= q(0,t),\\ {} &{} p(1,t)=q(1,t),\, p_x(1,t)+q_x(1,t)=0 \text { in }(0,T), \\ &{}p(.,T)=p^0\in V,\, p_t(.,T)=p^1\in L^2(0,1),\\ {} &{}q(.,T)=q^0\in V,\, q_t(.,T)=q^1\in L^2(0,1). \end{array}\right. \end{aligned}$$
(5.3)

With the help of Proposition 3.5, one readily derives the needed observability inequality for System (5.3). Then, invoking the H.U.M of Lions, one deduces the desired controllability of (5.1).

5.2 The Case of Two Different Potentials

Let a and b both lie in \(L^\infty (Q)\) with \(a\not =b\). The following boundary controllability problem is open: given \((y^0,y^1)\) in \(L^2(0,1)\times H^{-1}(0,1)\) and \((z^0,z^1)\) in \(L^2(0,1)\times V'\), find a control function h in \(L^2(0,T)\) such that the solution couple \((y,z)\) of the following system

$$\begin{aligned} \left\{ \begin{array}{rl} &{}y_{tt} - y_{xx} +a(x,t)y= 0 \text { in }Q, \\ &{}z_{tt} - z_{xx} +b(x,t)z = 0 \text { in }Q, \\ &{}y(0,t)=h(t),\, z(0,t)=h(t),\quad y(1,t)=0,\, z_x(1,t)=0 \text { in }(0,T), \\ &{}y(.,0)=y^0,\, y_t(.,0)=y^1,\quad z(.,0)=z^0,\, z_t(.,0)=z^1, \end{array}\right. \end{aligned}$$
(5.4)

satisfies

$$\begin{aligned} y(.,T)=0,\,y_t(.,T)=0,\quad z(.,T)=0,\,z_t(.,T)=0.\end{aligned}$$
(5.5)

The adjoint to this system is given by

$$\begin{aligned} \left\{ \begin{array}{rl} &{}u_{tt} - u_{xx} +a(x,t)u= 0 \text { in }Q, \\ &{}v_{tt} - v_{xx} +b(x,t)v = 0 \text { in }Q, \\ &{}u(0,t)=0= v(0,t),\quad u(1,t)=0=v_x(1,t) \text { in }(0,T), \\ &{}u(.,T)=u^0\in H_0^1(0,1),\, u_t(.,T)=u^1\in L^2(0,1),\\ {} &{} v(.,T)=v^0\in V,\, v_t(.,T)=v^1\in L^2(0,1). \end{array}\right. \end{aligned}$$
(5.6)

If we want to use the change of variables that proved essential in establishing the observability inequality in both the conservative and nonconservative cases of the preceding sections, then elementary algebra leads us to the new \((p,q)\)-system (recall that \(p=u+v\) and \(q=v-u\)):

$$\begin{aligned} \left\{ \begin{array}{rl} &{}p_{tt} - p_{xx} +\gamma (x,t)p+\delta (x,t)q= 0 \text { in }Q, \\ &{}q_{tt} - q_{xx} +\gamma (x,t)q+\delta (x,t)p = 0 \text { in }Q, \\ &{}p(0,t)=0= q(0,t),\quad p(1,t)=q(1,t),\, p_x(1,t)+q_x(1,t)=0 \text { in }(0,T), \\ &{}p(.,T)=u^0+v^0\in V,\, p_t(.,T)=u^1+v^1\in L^2(0,1),\\ {} &{} q(.,T)=v^0-u^0\in V,\, q_t(.,T)=v^1-u^1\in L^2(0,1), \end{array}\right. \nonumber \\ \end{aligned}$$
(5.7)

where \(\gamma (x,t)=(a+b)(x,t)/2\) and \(\delta (x,t)=(b-a)(x,t)/2\).
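Indeed, adding and subtracting the two equations of (5.6) and writing \(au+bv\) and \(bv-au\) in terms of p and q gives

$$\begin{aligned} au+bv=\frac{a+b}{2}(u+v)+\frac{b-a}{2}(v-u)=\gamma p+\delta q,\qquad bv-au=\frac{a+b}{2}(v-u)+\frac{b-a}{2}(u+v)=\gamma q+\delta p. \end{aligned}$$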

Applying the improved Carleman inequality (see Proposition 3.5) to this system, we find

$$\begin{aligned}&s\int _Q \big (p_t^2+p_x^2\big )e^{2s\varphi _1}dxdt +s\int _Q \big (q_t^2+q_x^2\big )e^{2s\varphi _2}dxdt \\&\quad +s^3\int _Q p^2e^{2s\varphi _1} dxdt +s^3\int _Q q^2e^{2s\varphi _2} dxdt \nonumber \\&\quad +s\int _0^Tr^2\varphi _2(0,t)q_x(0,t)^2e^{2s\varphi _2(0,t)} dt \nonumber \\&\leqslant Cs\int _0^T\varphi _1(0,t)p_x(0,t)^2 e^{2s\varphi _1(0,t)} dt +C\int _Q \delta ^2\left( p^2e^{2s\varphi _2} +q^2e^{2s\varphi _1}\right) dxdt. \nonumber \end{aligned}$$
(5.8)

Now, we have by definition \(\varphi _2(x,t)\le \varphi _1(x,t)\) for all (x, t) in Q; therefore, we are able to absorb the lower order term in p in the right hand side of (5.8), but we cannot do the same for the corresponding term in q. A similar technical difficulty was already observed by the authors of [6] when dealing with a system like (5.7) in the framework of parabolic equations. Therefore, a different approach is needed in the case of different potentials. This seems to be an interesting open problem.