
1 Introduction

This paper is devoted to the study of the controllability properties of the wave equation, under positivity (or nonnegativity) constraints on the control.

We address both the case where the control acts in the interior of the domain where waves evolve and the case where it acts on its boundary.

This problem has been exhaustively considered in the unconstrained case, but very little is known in the presence of constraints on the control, an issue of primary importance in applications since, whatever the applied context under consideration, the available controls are always limited. For some of the basic literature on the unconstrained controllability of wave-like equations the reader is referred to [1, 3, 4, 5, 8, 9, 15, 21, 22, 24, 26].

The developments in this paper are motivated by our earlier works on the constrained controllability of heat-like equations ([16, 19]). In that context, due to the well-known comparison principle for parabolic equations, control and state constraints are interlinked. In particular, for the heat equation, nonnegative controls imply that the solution is nonnegative too, when the initial configuration is nonnegative. Therefore, imposing non-negativity constraints on the control ensures that the state satisfies the non-negativity constraint too.

This is no longer true for wave-like equations in which the sign of the control does not determine that of solutions. However, as mentioned above, from a practical viewpoint, it is very natural to consider the problem of imposing control constraints. In this work, to fix ideas, we focus on the particular case of nonnegative controls.

First we address the problem of steady state controllability, in which one aims at controlling the solution from one steady configuration to another. This problem was addressed in [7], in the absence of constraints on the controls, for semilinear wave equations. Our main contribution here is to control the system while preserving constraints on the controls given a priori. And, as we shall see, when the initial and final steady states are associated with positive time-independent control functions, constrained controllability can be guaranteed to hold if the time horizon is long enough.

The proof is developed by a step-wise procedure presented in [19] (which differs from the one in [7, 16]), the so-called “stair-case argument”, along an arc of steady states linking the starting one and the final one. The proof consists in moving recursively from one steady state to the other by means of successive small amplitude controlled trajectories linking successive steady states. This method and result are presented in a general semigroup setting and can be successfully implemented for any control system for which controllability holds by means of \(L^{\infty }\) controls.

The same recursive approach enables us to prove a state constrained result, under additional dissipativity assumptions. But the time needed for this to hold is even larger than before.

The problem of steady-state controllability is a particular instance of the more general trajectory control problem, in which, given two controlled trajectories of the system, both obtained from nonnegative controls, and one state on each of them (possibly corresponding to two different time instances), one aims at driving one state into the other by means of nonnegative constrained controls. This result can also be proved by a similar iterative procedure, but under the added assumption that the system is conservative and its energy coercive, so that uncontrolled trajectories are globally bounded.

These results hold for long enough control time horizons. The stepwise procedure we implement requires a very large control time, much beyond the minimal control time for the wave equation, which is determined by the finite velocity of propagation and the so-called Geometric Control Condition (GCC). It is then natural to introduce the minimal time of control under non-negativity constraints, in both situations above.

There is plenty to be done to understand how these constrained minimal times depend on the data to be controlled. Employing d’Alembert’s formula for the one-dimensional wave equation, we compute both of them for constant steady states, showing that they coincide with the unconstrained one. In that case we also show that the property of constrained controllability holds in the minimal time too.
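Recall that, for the free one-dimensional wave equation \(y_{tt}-y_{xx}=0\) posed on the whole line with data \((y(0,\cdot ),y_t(0,\cdot ))=(f,g)\), d’Alembert’s formula reads

$$\begin{aligned} y(t,x)=\frac{1}{2}\left[ f(x+t)+f(x-t)\right] +\frac{1}{2}\int _{x-t}^{x+t}g(s)\,ds, \end{aligned}$$

which makes the finite velocity of propagation, and hence the minimal control time, completely explicit in one space dimension.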

Controllability under constraints has already been studied for finite-dimensional models and heat-like equations (see [16, 19]). In both cases it was also proved that controllability by nonnegative controls fails if the time is too short, when the initial datum differs from the final target. This fact exhibits a big difference with respect to the unconstrained control problem for these systems, where controllability holds in arbitrarily small time in both cases. In the wave-like context addressed in this paper, the waiting phenomenon, according to which there is a minimal control time for the constrained problem, is less surprising. On the other hand, the fact that constraints can be imposed on both controls and state is, in some sense, more striking.

In [12], the authors analysed the controllability of the one-dimensional wave equation under the more classical bilateral constraints on the control. Our work is, as far as we know, the first one considering unilateral constraints for wave-like equations.

1.1 Internal Control

Let \(\varOmega \) be a connected bounded open set of \(\mathbb {R}^n\), \(n \ge 1\), with \(C^{\infty }\) boundary, and let \(\omega \) and \(\omega _0\) be subdomains of \(\varOmega \) such that \(\overline{\omega _0}\subset \omega \).

Let \(\chi \in C^{\infty }(\mathbb {R}^n)\) be a smooth function supported in \(\omega \) such that \(\text{ Range }(\chi )\subseteq [0,1]\) and \(\chi \equiv 1\) in \(\omega _0\).

We assume further that all derivatives of \(\chi \) vanish on the boundary of \(\varOmega \). We will discuss this assumption in Sect. 3.3.

We consider the wave equation controlled from the interior

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{tt}-\varDelta y+cy=u\chi &{} \text{ in } (0,T)\times \varOmega \\ y=0 &{} \text{ on } (0,T)\times \partial \varOmega \\ y(0,x)=y^0_0(x), \ y_t(0,x)=y^1_0(x) &{} \text{ in } \varOmega \\ \end{array}\right. } \end{aligned}$$
(1)

where \(y=y(t,x)\) is the state, while \(u=u(t,x)\) is the control whose action is localized on \(\omega \) by means of multiplication with the smooth cut-off function \(\chi \). The coefficient \(c=c(x)\) is \(C^{\infty }\) smooth in \(\overline{\varOmega }\).

It is well known in the literature (e.g. [10, Sect. 7.2]) that, for any initial datum \((y_0^0,y_0^1)\in H^1_0(\varOmega )\times L^2(\varOmega )\) and for any control \(u\in L^2((0,T)\times \omega )\), the above problem admits a unique solution \((y,y_t)\in C^0([0,T];H^1_0(\varOmega )\times L^2(\varOmega ))\), with \(y_{tt}\in L^2(0,T;H^{-1}(\varOmega ))\).

We assume the Geometric Control Condition on \((\varOmega ,\omega _0,T^*)\), which basically asserts that all bicharacteristic rays enter the subdomain \(\omega _0\) in time smaller than \(T^*\). This geometric condition is actually equivalent to the property of (unconstrained) controllability of the system (see [1, 3]).

1.1.1 Steady State Controllability

The purpose of our first result is to show that, in time large, we can drive (1) from one steady state to another by a nonnegative control, assuming the uniform positivity of the control defining the steady states.

More precisely, a steady state is a solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varDelta \overline{y}+c\overline{y}=\overline{u}\chi &{} \text{ in } \varOmega \\ \overline{y}=0 &{} \text{ on } \partial \varOmega ,\\ \end{array}\right. } \end{aligned}$$
(2)

where \(\overline{u}\in L^2(\omega )\) and \(\overline{y}\in H^2(\varOmega ) \cap H^1_0(\varOmega )\). Note that, as a consequence of Fredholm Alternative (see [11, Theorem 5.11 page 84]), the existence and uniqueness of the solution of this elliptic problem can be guaranteed whenever zero is not an eigenvalue of \(-\varDelta +cI:H^1_0(\varOmega )\longrightarrow H^{-1}(\varOmega )\).

The following result holds:

Theorem 1

(Controllability between steady states) Let \(\overline{y}_0\) and \(\overline{y}_1\) in \(H^2(\varOmega ) \cap H^1_0(\varOmega )\) be steady states associated to \(L^2\)-controls \(\overline{u}^0\) and \(\overline{u}^1\), respectively. Assume further that there exists \(\sigma >0\) such that

$$\begin{aligned} \overline{u}^i\ge \sigma , \quad \text{ a.e. } \text{ in }\ \omega . \end{aligned}$$
(3)

Then, if T is large enough, there exists a control \(u\in L^2((0,T)\times \omega )\) such that

  • the unique solution \((y,y_t)\) to the problem (1) with initial datum \((\overline{y}_0,0)\) and control u verifies \((y(T,\cdot ),y_t(T,\cdot ))=(\overline{y}_1,0)\);

  • \( u\ge 0\) a.e. on \((0,T)\times \omega .\)

Theorem 1 is proved in Sect. 3.1. Inspired by [7], we implement a recursive “stair-case” argument to keep the control in a narrow tubular neighborhood of the segment connecting the controls defining the initial and final data. This will guarantee the actual positivity of the control obtained.

1.1.2 Controllability Between Trajectories

The purpose of this section is to extend the above result, under the additional assumption \(c(x)>-\lambda _1\), where \(\lambda _1\) is the first eigenvalue of the Dirichlet Laplacian in \(\varOmega \). This guarantees that the energy of the system defines a norm

$$\begin{aligned} \Vert (y^0,y^1)\Vert _{E}^2=\int _{\varOmega } \left[ \Vert \nabla y^0\Vert ^2 +c{\left( y^0\right) }^2\right] dx+\int _{\varOmega } (y^1)^2 dx \end{aligned}$$

on \(H^1_0(\varOmega )\times L^2(\varOmega )\). Thus, by conservation of the energy, uncontrolled solutions are uniformly bounded for all t.
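The coercivity of the energy is a consequence of the Poincaré inequality \(\lambda _1\int _{\varOmega }(y^0)^2dx\le \int _{\varOmega }|\nabla y^0|^2dx\): writing \(m=\min _{\overline{\varOmega }}c>-\lambda _1\), in the nontrivial case \(m<0\) one has

$$\begin{aligned} \int _{\varOmega } \left[ |\nabla y^0|^2 +c{\left( y^0\right) }^2\right] dx\ge \left( 1+\frac{m}{\lambda _1}\right) \int _{\varOmega }|\nabla y^0|^2 dx, \end{aligned}$$

with \(1+m/\lambda _1>0\), so that \(\Vert \cdot \Vert _{E}\) is equivalent to the usual norm of \(H^1_0(\varOmega )\times L^2(\varOmega )\).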

Fig. 1 Controllability between data lying on trajectories

We assume that both, the initial datum \((y_0^0,y_0^1)\) and the final target \((y_1^0,y_1^1)\), belong to controlled trajectories (see Fig. 1)

$$\begin{aligned} (y_i^0,y_i^1)\in \left\{ (\overline{y}_i(\tau ,\cdot ),(\overline{y}_i)_t(\tau ,\cdot )) \ | \ \tau \in \mathbb {R}\right\} , \end{aligned}$$
(4)

where \((\overline{y}_i,(\overline{y}_i)_t)\) solve (1) with nonnegative controls. We suppose that these trajectories are smooth enough, namely

$$(\overline{y}_i,(\overline{y}_i)_t)\in C^{s(n)}(\mathbb {R};H^1_0(\varOmega )\times L^2(\varOmega )),$$

with \(s(n)=\lfloor {n/2}\rfloor +1\). Hereafter, we denote by \((\overline{y}_0,(\overline{y}_0)_t)\) the initial trajectory, while \((\overline{y}_1,(\overline{y}_1)_t)\) stands for the target one.

Note that the regularity is assumed only in time and not in space. This allows us to consider weak steady-state solutions.

We can in particular choose as final target the null state \((y_1^0,y_1^1)=(0,0)\). It is important to highlight that this is something specific to the wave equation. In the parabolic case (see [16, 19]), this was prevented by the comparison principle, since the zero target cannot be reached in finite time with non-negative controls. But, for the wave equation, the maximum principle does not hold and this obstruction does not apply.

The following result holds

Theorem 2

(Controllability between trajectories) Suppose \(c(x)>-\lambda _1\), for any \(x\in \overline{\varOmega }\). Let \((\overline{y}_i,(\overline{y}_i)_t)\in C^{s(n)}(\mathbb {R};H^1_0(\varOmega )\times L^2(\varOmega ))\) be solutions to (1) associated to controls \(\overline{u}^i\ge 0\) a.e. in \((0,T)\times \omega \), \(i=0, 1\). Take \((y_0^0,y_0^1)=(\overline{y}_0(\tau _0,\cdot ),(\overline{y}_0)_t(\tau _0,\cdot ))\) and \((y_1^0,y_1^1)=(\overline{y}_1(\tau _1,\cdot ),(\overline{y}_1)_t(\tau _1,\cdot ))\) for arbitrary values of \(\tau _0\) and \(\tau _1\). Then, in time \(T>0\) large enough, there exists a control \(u\in L^{2}((0,T)\times \omega )\) such that

  • the unique solution \((y,y_t)\) to (1) with initial datum \((y_0^0,y_0^1)\) verifies the end condition \((y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,y_1^1)\);

  • \(u\ge 0\) a.e. in \((0,T)\times \omega \).

Remark 1

This result is more general than Theorem 1 for two reasons

  1. it enables us to link more general data, with nonzero velocity, and not only steady states;

  2. the control defining the initial and target trajectories is assumed to be only nonnegative. This assumption is weaker than the uniform positivity one required in Theorem 1.

On the other hand, the present result requires the condition \(c(x)>-\lambda _1\) on the potential \(c=c(x)\).

We give the proof of Theorem 2 in Sect. 3.2.

1.2 Boundary Control

Let \(\varOmega \) be a connected bounded open set of \(\mathbb {R}^n\), \(n \ge 1\), with \(C^{\infty }\) boundary, and let \(\varGamma _0\) and \(\varGamma \) be open subsets of \(\partial \varOmega \) such that \(\overline{\varGamma _0}\subset \varGamma \).

Let \(\chi \in C^{\infty }(\partial \varOmega )\) be a smooth function such that \(\text{ Range }(\chi )\subseteq [0,1]\), \(\text{ supp }(\chi )\subset \varGamma \) and \(\chi \equiv 1\) on \(\varGamma _0\).

We now consider the wave equation controlled on the boundary

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{tt}-\varDelta y+cy=0 &{} \text{ in } (0,T)\times \varOmega \\ y=\chi u &{} \text{ on } (0,T)\times \partial \varOmega \\ y(0,x)=y^0_0(x), \ y_t(0,x)=y^1_0(x) &{} \text{ in } \varOmega \\ \end{array}\right. } \end{aligned}$$
(5)

where \(y=y(t,x)\) is the state, while \(u=u(t,x)\) is the boundary control localized on \(\varGamma \) by the cut-off function \(\chi \). As before, the space-dependent coefficient c is supposed to be \(C^{\infty }\) regular in \(\overline{\varOmega }\).

By transposition (see [15]), one can see that for any initial datum \((y_0^0,y_0^1)\in L^2(\varOmega )\times H^{-1}(\varOmega )\) and control \(u\in L^2((0,T)\times \varGamma )\), the above problem admits a unique solution \((y,y_t)\in C^0([0,T];L^2(\varOmega )\times H^{-1}(\varOmega ))\).

We assume the Geometric Control Condition on \((\varOmega ,\varGamma _0,T^*)\), which asserts that all generalized bicharacteristics touch the sub-boundary \(\varGamma _0\) at a non-diffractive point in time smaller than \(T^*\). By now, it is well known in the literature that this geometric condition is equivalent to (unconstrained) controllability (see [1, 3]).

1.2.1 Steady State Controllability

As in the context of internal control, our first goal is to show that, in time large, we can drive (5) from one steady state to another, assuming the uniform positivity of the controls defining these steady states.

In the present setting a steady state is a time independent solution to (5), namely a solution to

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varDelta \overline{y}+c\overline{y}=0 &{} \text{ in } \varOmega \\ \overline{y}=\chi \overline{u} &{} \text{ on }\partial \varOmega .\\ \end{array}\right. } \end{aligned}$$
(6)

In the present setting, \(\overline{u}\in L^2(\partial \varOmega )\) and \(\overline{y}\in L^2(\varOmega )\) solves the above problem in the sense of transposition (see [14, Chap. II, Sect. 4.2] and [13]).

As in the context of internal control, if 0 is not an eigenvalue of \(-\varDelta +cI:H^1_0(\varOmega )\longrightarrow H^{-1}(\varOmega )\), for any boundary control \(\overline{u}\in L^2(\partial \varOmega )\), there exists a unique \(\overline{y}\in L^2(\varOmega )\) solution to (6) with boundary control \(\overline{u}\). This can be proved combining Fredholm Alternative (see [11, Theorem 5.11 page 84]) and transposition techniques [14, Theorem 4.1 page 73].

We prove the following result

Theorem 3

(Steady state controllability). Let \(\overline{y}_i\) be steady states defined by controls \(\overline{u}^i\), \(i=0, 1\), so that

$$\begin{aligned} \overline{u}^i\ge \sigma , \quad \text{ on }\ \varGamma , \end{aligned}$$
(7)

with \(\sigma >0\).

Then, if T is large enough, there exists a control \(u\in L^{2}((0,T)\times \varGamma )\) such that

  • the unique solution \((y,y_t)\) to (5) with initial datum \((\overline{y}_0,0)\) and control u verifies \((y(T,\cdot ),y_t(T,\cdot ))=(\overline{y}_1,0)\);

  • \( u\ge 0\) on \((0,T)\times \varGamma .\)

The proof of the above result can be found in Sect. 4.1. The structure of the proof resembles the one of Theorem 1, with some technical differences due to the different nature of the control.

1.2.2 Controllability Between Trajectories

As in the internal control case, we suppose \(c(x)>-\lambda _1\), where \(\lambda _1\) is the first eigenvalue of the Dirichlet Laplacian in \(\varOmega \). Then, the generator of the free dynamics is skew-adjoint (see [23, Proposition 3.7.6]), thus generating a unitary group of operators \(\left\{ \mathbb {T}_{t}\right\} _{t\in \mathbb {R}}\) on \(L^2(\varOmega )\times H^{-1}(\varOmega )\).

Both the initial datum and the final target \((y_i^0,y_i^1)\), \(i=0,1\), belong to smooth trajectories, namely

$$\begin{aligned} (y_i^0,y_i^1)\in \left\{ (\overline{y}_i(\tau ,\cdot ),(\overline{y}_i)_t(\tau ,\cdot )) \ | \ \tau \in \mathbb {R}\right\} . \end{aligned}$$
(8)

We assume the nonnegativity of the controls \(\overline{u}^i\) defining \((\overline{y}_i,(\overline{y}_i)_t)\), for \(i=0,1\). Hereafter, in the context of boundary control, we take trajectories of class \(C^{s(n)}(\mathbb {R};L^2(\varOmega )\times H^{-1}(\varOmega ))\), with \(s(n)=\lfloor {n/2}\rfloor +1\). We set \((\overline{y}_0,(\overline{y}_0)_t)\) to be the initial trajectory and \((\overline{y}_1,(\overline{y}_1)_t)\) be the target one.

Note that, with respect to Theorem 3, we have relaxed the assumptions on the sign of the controls \(\overline{u}^i\). Now, they are required to be only nonnegative and not uniformly strictly positive.

Theorem 4

(Controllability between trajectories) Assume \(c(x)>-\lambda _1\), for any \(x\in \overline{\varOmega }\). Let \((\overline{y}_i,(\overline{y}_i)_t)\) be solutions to (5) with non-negative controls \(\overline{u}^i\) respectively. Suppose the trajectories \((\overline{y}_i,(\overline{y}_i)_t)\in C^{s(n)}([0,T];L^2(\varOmega )\times H^{-1}(\varOmega ))\). Pick \((y_0^0,y_0^1)=(\overline{y}_0(\tau _0,\cdot ),(\overline{y}_0)_t(\tau _0,\cdot ))\) and \((y_1^0,y_1^1)=(\overline{y}_1(\tau _1,\cdot ),(\overline{y}_1)_t(\tau _1,\cdot ))\). Then, in time large, we can find a control \(u\in L^{2}((0,T)\times \varGamma )\) such that

  • the solution \((y,y_t)\) to (5) with initial datum \((y_0^0,y_0^1)\) fulfills the final condition \((y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,y_1^1)\);

  • \(u\ge 0\) a.e. in \((0,T)\times \varGamma \).

The above Theorem is proved in Sect. 4.2. Furthermore, in Sect. 5, we show how Theorem 4 applies in the one dimensional case, providing further information about the minimal time to control and the possibility of controlling the system in the minimal time.

1.2.3 State Constraints

We impose now constraints both on the control and on the state, namely both the control and the state are required to be nonnegative.

In the parabolic case (see [16, 19]) one can employ the comparison principle to get a state constrained result from a control constrained one. But, as explained above, the comparison principle is not valid in general for the wave equation, and we cannot rely on it to deduce our state constrained result from the control constrained one.

We shall rather apply the “stair-case argument” developed to prove steady state controllability, paying attention to the added need of preserving state constraints as well.

Let \(\lambda _1\) be the first eigenvalue of the Dirichlet Laplacian. We assume \(c> -\lambda _1\) in \(\overline{\varOmega }\). We also suppose that \(\chi \equiv 1\), meaning that the control acts on the whole boundary. We take as initial and final data two steady states \(y_0^0\) and \(y_1^0\) associated to controls \(\overline{u}^i\ge \sigma >0\). Our proof relies on the application of the maximum principle to (6). This ensures that the states satisfy \(\overline{y}_i\ge \sigma \) once we know \(\overline{u}^i\ge \sigma \). For this reason, we need \(c> -\lambda _1\) and \(\chi \equiv 1\).

Our strategy is the following

  • employ the “stair-case argument” used to prove steady state controllability, to keep the control in a narrow tubular neighborhood of the segment connecting \(\overline{u}^0\) and \(\overline{u}^1\). This can be done by taking the time of control large enough. Since \(\overline{u}^i\ge \sigma >0\), this guarantees the positivity of the control;

  • by the continuous dependence of the solution on the data, the controlled trajectory also remains in a narrow neighborhood of the convex combination joining the initial and final data. On the other hand, by the maximum principle for the steady problem (6), we have that \(y_i^0\ge \sigma \) in \(\varOmega \), for \(i=0, 1\). In this way, the state y is guaranteed to remain nonnegative.

Theorem 5

We assume \(c(x)> -\lambda _1\) for any \(x\in \overline{\varOmega }\) and \(\chi \equiv 1\). Let \(y_0^0\) and \(y_1^0\) be solutions to the steady problem

$$\begin{aligned} {\left\{ \begin{array}{ll} -\varDelta y+cy=0 &{} \text{ in } \varOmega \\ y=\overline{u}^i, &{} \text{ on }\partial \varOmega \\ \end{array}\right. } \end{aligned}$$
(9)

where \(\overline{u}^i\ge \sigma \) a.e. on \(\partial \varOmega \), with \(\sigma >0\). We assume \(y_i^0\in H^{s(n)}(\varOmega )\). Then, there exists \(\overline{T}>0\) such that for any \(T>\overline{T}\) there exists a control \(u\in L^{\infty }((0,T)\times \partial \varOmega )\) such that

  • the unique solution \((y,y_t)\) to (5) with initial datum \((y_0^0,0)\) and control u is such that \((y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,0)\);

  • \(u\ge 0\) a.e. on \((0,T)\times \partial \varOmega \);

  • \(y\ge 0\) a.e. in \((0,T)\times \varOmega \).

The proof of the above Theorem can be found in Sect. 4.3.

Note that the time needed to control the system while keeping both the control and the state nonnegative is greater than (or equal to) the corresponding one with no constraints on the state.

1.3 Orientation

The rest of the paper is organized as follows:

  • Section 2: Abstract results;

  • Section 3: Internal Control: Proof of Theorems 1 and 2;

  • Section 4: Boundary control: Proof of Theorems 3, 4 and 5;

  • Section 5: The one dimensional case;

  • Section 6: Conclusion and open problems;

  • Appendix.

2 Abstract Results

The goal of this section is to provide some results on constrained controllability for some abstract control systems. We apply these results in the context of internal control and boundary control of the wave equation (see Sect. 1).

We begin by introducing the abstract control system. Let H and U be two Hilbert spaces endowed with norms \(\Vert \cdot \Vert _{H}\) and \(\Vert \cdot \Vert _{U}\) respectively. H is called the state space and U the control space. Let \(A:D(A)\subset H\longrightarrow H\) be the generator of a \({C}_0\)-semigroup \((\mathbb {T}_t)_{t\in \mathbb {R}^+}\), with \(\mathbb {R}^+= [0,+\infty )\). The domain of the generator D(A) is endowed with the graph norm \(\Vert x\Vert _{D(A)}^2=\Vert x\Vert _{H}^2+\Vert Ax\Vert _{H}^2\). We define \(H_{-1}\) as the completion of H with respect to the norm \(\Vert \cdot \Vert _{-1}=\Vert (\beta I-A)^{-1}(\cdot )\Vert _{H}\), with real \(\beta \) such that \((\beta I-A)\) is invertible from H to H with continuous inverse. Adapting the techniques of [23, Proposition 2.10.2], one can check that the definition of \(H_{-1}\) is actually independent of the choice of \(\beta \). By applying the techniques of [23, Proposition 2.10.3], we deduce that A admits a unique extension to a bounded operator from H to \(H_{-1}\); for simplicity, we still denote this extension by A. Hereafter, we write \(\mathscr {L}(E,F)\) for the space of all bounded linear operators from a Banach space E to another Banach space F.

Our control system is governed by:

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}y(t)=Ay(t)+Bu(t),&{} t\in (0,\infty ),\\ y(0)=y_0,\\ \end{array}\right. } \end{aligned}$$
(10)

where \(y_0\in H\), \(u\in L^2_{loc}([0,+\infty ),U)\) is a control function and the control operator \(B\in \mathscr {L}(U,H_{-1})\) satisfies the admissibility condition in the following definition (see [23, Definition 4.2.1]).

Definition 1

The control operator \(B\in \mathscr {L}(U,H_{-1})\) is said to be admissible if for all \(\tau >0\) we have \(\text{ Range }(\varPhi _{\tau })\subset H\), where \(\varPhi _{\tau }:L^2((0,+\infty );U)\rightarrow H_{-1}\) is defined by:

$$\begin{aligned} \varPhi _{\tau }u=\int _{0}^{\tau }\mathbb {T}_{\tau -r}Bu(r)dr. \end{aligned}$$

From now on, we will always assume the control operator to be admissible. One can check that for any \(y_0\in H\) and \(u\in L^2_{loc}((0,+\infty );U)\) there exists a unique mild solution \(y\in C^0([0,+\infty ),H)\) to (10) (see, for instance, [23, Proposition 4.2.5]). We denote by \(y(\cdot ;y_0,u)\) the unique solution to (10) with initial datum \(y_0\) and control u.
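Explicitly, with the notation of Definition 1, the mild solution is given by Duhamel's formula

$$\begin{aligned} y(t;y_0,u)=\mathbb {T}_t y_0+\varPhi _t u=\mathbb {T}_t y_0+\int _{0}^{t}\mathbb {T}_{t-r}Bu(r)\,dr,\quad t\ge 0, \end{aligned}$$

the admissibility of B ensuring that both terms belong to H.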

Now, we introduce the following constrained controllability problem

Let \(\mathscr {U}_{\text{ ad }}\) be a nonempty subset of U. Find a subset E of H so that for each \(y_0, \,y_1 \in E\), there exists \(T>0\) and a control \(u\in L^{\infty }(0,T;U)\) with \(u(t)\in \mathscr {U}_{\text{ ad }}\) for a.e. \(t\in (0,T)\), so that \(y(T;y_0,u)=y_1\).

We address this controllability problem in the next two subsections, under different assumptions on \(\mathscr {U}_{\text{ ad }}\) and (A, B). In Sect. 2.1, we study the above controllability problem, where the initial and final data are steady states, i.e. solutions to the steady equation:

$$\begin{aligned} Ay+Bu=0 \quad \text{ for } \text{ some }\quad u\in U. \end{aligned}$$
(11)

In Sect. 2.2, we take initial and final data on two different trajectories of (10).

To study the above problem, we need two ingredients, which play a key role in the proofs of Sects. 2.1 and 2.2. First, we introduce the notion of smooth controllability. Before introducing this concept, we fix \(s\in \mathbb {N}\) and a Hilbert space V so that

$$\begin{aligned} \quad V \hookrightarrow U, \end{aligned}$$
(12)

where \( \hookrightarrow \) denotes the continuous embedding. Note that all throughout the remainder of the section, s and V remain fixed.

The concept of smooth controllability is given in the following definition. The notation \(y(\cdot ;y_0,u)\) stands for the solution of the abstract controlled Eq. (10) with control u and initial data \(y_0\).

Definition 2

The control system (10) is said to be smoothly controllable in time \(T_0>0\) if for any \(y_0\in D(A^s)\), there exists a control function \(v\in L^{\infty }((0,T_0);V)\) such that

$$\begin{aligned} y(T_0;y_0,v)=0 \end{aligned}$$

and

$$\begin{aligned} \Vert v\Vert _{L^{\infty }((0,T_0);V)}\le C\Vert y_0\Vert _{D(A^s)}, \end{aligned}$$
(13)

the constant C being independent of \(y_0\).

Remark 2

(i) In other words, the system is smoothly controllable in time \(T_0\) if for each (regular) initial datum \(y_0\in D(A^s)\), there exists an \(L^{\infty }\)-control u with values in the regular space V steering our control system to rest at time \(T_0\).

(ii) The smooth controllability in time \(T_0\) of system (10) is a consequence of the following observability inequality: there exists a constant \(C> 0\) such that for any \(z\in D(A^*)\)

$$\begin{aligned} \Vert \mathbb {T}_{T_0}^*z\Vert _{D(A^s)^*}\le C\int _0^{T_0} \Vert i^{*}B^*\mathbb {T}_{T_0-t}^{*}z\Vert _{V^*}dt, \end{aligned}$$

where \(D(A^s)^*\) is the dual of \(D(A^s)\) and \(i:V \hookrightarrow U\) is the inclusion. This inequality, which can often be derived from classical observability inequalities by employing the regularizing properties of the system, provides a way to prove the smooth controllability of system (10). This occurs for parabolic problems enjoying smoothing properties.

(iii) Besides, for some systems (A, B), even if they do not enjoy smoothing properties, there is an alternative way to prove the aforementioned smooth controllability property, exploiting the ellipticity properties of the control operator (see [9]).

Under suitable assumptions, the wave system is smoothly controllable (see Lemmas 4 and 5).

The second ingredient is the following lemma, which concerns the regularity of solutions to the inhomogeneous problem.

Lemma 1

Fix \(k\in \mathbb {N}\) and take \(f\in H^{k}((0,T);H)\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d^j}{dt^j}f(0)=0,\quad &{}\forall \ j\in \left\{ 0,\ldots ,k \right\} \\ f(t)=0,\quad &{}\text{ a.e. } \ t\in (\tau ,T),\\ \end{array}\right. } \end{aligned}$$
(14)

with \(0<\tau <T\). Let y be the solution to the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}y=Ay+f&{} t\in (0,T)\\ y(0)=0.\\ \end{array}\right. } \end{aligned}$$
(15)

Then, \(y\in \cap _{j=0}^k C^j([\tau ,T];D(A^{k-j}))\) and

$$\begin{aligned} \sum _{j=0}^k\Vert y\Vert _{C^j([\tau ,T];D(A^{k-j}))}\le C\Vert f\Vert _{H^k((0,T);H)}, \end{aligned}$$

the constant C depending only on k.

Remark 3

Note that the maximal regularity of the solution is only assured for \(t \ge \tau \), after the right hand side term f vanishes.

The proof of this Lemma is given in an Appendix at the end of this paper.

2.1 Steady State Controllability

In this subsection, we study the constrained controllability for some steady states. Recall s and V are given by (12). Before introducing our main result, we suppose:

\((H_1)\) the system (10) is smoothly controllable in time \(T_0\) for some \(T_0>0\).

\((H_2)\) \(\mathscr {U}_{\text{ ad }}\) is a closed and convex cone with vertex at 0 and \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\ne \varnothing \),

      where \(\text{ int }^V\) denotes the interior set in the topology of V.

Furthermore, we define the following subset

$$\begin{aligned} \mathscr {W}=\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)+\mathscr {U}_{\text{ ad }}. \end{aligned}$$
(16)

(Note that, since \(\mathscr {U}_{\text{ ad }}\) is a convex cone with vertex at 0, \(\mathscr {W}\subset \mathscr {U}_{\text{ ad }}\): indeed, for \(q\in \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\) and \(z\in \mathscr {U}_{\text{ ad }}\), \(q+z=2\left( \frac{1}{2}q+\frac{1}{2}z\right) \in \mathscr {U}_{\text{ ad }}\).) The main result of this subsection is the following. The solution to (10) with initial datum \(y_0\) and control u is denoted by \(y(\cdot ;y_0,u)\).

Theorem 6

(Steady state controllability). Assume \((H_1)\) and \((H_2)\) hold. Let \(\left\{ (y_i,\overline{u}^i)\right\} _{i=0}^1\subset H\times \mathscr {W}\) satisfy

$$\begin{aligned} Ay_i+B\overline{u}^i=0,\qquad i=0,1. \end{aligned}$$

Then there exists \(T>T_0\) and \(u\in L^2(0,T;U)\) such that

  • \( u(t)\in \mathscr {U}_{\text{ ad }}\) a.e. in (0, T);

  • \(y(T;y_0,u)=y_1\).

Remark 4

As we shall see, in the application to the wave equation with positivity constraints:

  • for internal control, \(U=L^2(\omega )\) and \(V=H^{s(n)}(\omega )\), with \(s=s(n)=\lfloor {n/2}\rfloor +1\);

  • for boundary control, \(U{=}L^2(\varGamma )\) and \(V{=}H^{s(n)-\frac{1}{2}}(\varGamma )\), where \(s(n)=\lfloor {n/2}\rfloor +1\).

\(\mathscr {U}_{\text{ ad }}\) is the set of nonnegative controls in U. In both cases, \(\mathscr {W}\) is nonempty and contains controls u in \(L^2(\omega )\) (resp. \(L^2(\varGamma )\)) such that \(u\ge \sigma \), for some \(\sigma >0\). For this to happen, it is essential that \(H^{s(n)}(\omega )\hookrightarrow C^0(\overline{\omega })\) (resp. \(H^{s(n)-\frac{1}{2}}(\varGamma )\hookrightarrow C^0(\overline{\varGamma })\)). This is guaranteed by our special choice of \(s=s(n)\). Furthermore, in these special cases:

$$\begin{aligned} \overline{\mathscr {W}}^{U}=\mathscr {U}_{\text{ ad }}, \end{aligned}$$

where \(\overline{\mathscr {W}}^{U}\) is the closure of \(\mathscr {W}\) in the space U.
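The dimension count behind these embeddings is elementary (assuming, as we do, that \(\omega \) and \(\varGamma \) are regular enough for the Sobolev embedding theorem to apply): since

$$\begin{aligned} s(n)=\lfloor n/2\rfloor +1>\frac{n}{2}\quad \text{ and }\quad s(n)-\frac{1}{2}=\lfloor n/2\rfloor +\frac{1}{2}>\frac{n-1}{2}, \end{aligned}$$

and \(\varGamma \subset \partial \varOmega \) is an \((n-1)\)-dimensional manifold, both \(H^{s(n)}(\omega )\hookrightarrow C^0(\overline{\omega })\) and \(H^{s(n)-\frac{1}{2}}(\varGamma )\hookrightarrow C^0(\overline{\varGamma })\) hold.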

In the remainder of the present subsection we prove Theorem 6. The following lemma is essential for the proof. Fix \(\rho \in C^{\infty }(\mathbb {R})\) such that

$$\begin{aligned} \text{ Range }(\rho )\subseteq [0,1],\quad \rho \equiv 1\quad \text{ over } \ (-\infty ,0] \quad \text{ and }\quad \text{ supp }(\rho )\subset \subset (-\infty ,1/2). \end{aligned}$$
(17)

Lemma 2

Assume that the system (10) is smoothly controllable in time \(T_0\), for some \(T_0>0\). Let \((\eta _0, \overline{v}^0)\in H\times U\) be a steady state, i.e. solution to (11) with control \(\overline{v}^0\). Then, there exists \(w\in L^{\infty }((1,T_0+1);V)\) such that the control

$$\begin{aligned} v(t)= {\left\{ \begin{array}{ll} \rho (t)\overline{v}^0 \quad &{}\text{ in } \ (0,1)\\ w\quad &{}\text{ in } \ (1,T_0+1) \end{array}\right. } \end{aligned}$$
(18)

drives (10) from \(\eta _0\) to 0 in time \(T_0+1\). Furthermore,

$$\begin{aligned} \Vert w\Vert _{L^{\infty }((1,T_0+1);V)}\le C\Vert \eta _0\Vert _{H}. \end{aligned}$$
(19)

The proof of the above Lemma can be found in the Appendix.

We prove now Theorem 6, by developing a “stair-case argument” (see Fig. 2).

Fig. 2 Stepwise procedure

Proof

(Proof of Theorem 6 )

Let \(\left\{ (y_i,\overline{u}^i)\right\} _{i=0}^1\) satisfy

$$\begin{aligned} Ay_i+B\overline{u}^i=0 \quad \forall \ i\in \left\{ 0,1\right\} . \end{aligned}$$
(20)

By the definition of \(\mathscr {W}\), there exists \(\left\{ (q^i,z^i)\right\} _{i=0}^1\subset \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\times \mathscr {U}_{\text{ ad }}\) such that

$$\begin{aligned} \overline{u}^i=q^i+z^i \quad i=0,1. \end{aligned}$$
(21)

Define the segment joining \(y_0\) and \(y_1\)

$$\begin{aligned} \gamma (s)=(1-s) y_0+s y_1\quad \forall \ s\in [0,1]. \end{aligned}$$

For each \(s\in [0,1]\), \(\gamma (s)\) solves

$$\begin{aligned} A\gamma (s)+B(q(s)+z(s))=0, \end{aligned}$$

where \((q(s),z(s))\in \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\times \mathscr {U}_{\text{ ad }}\) are defined by:

$$\begin{aligned} q(s)=(1-s)q^0+sq^1\quad \text{ and }\quad z(s)=(1-s)z^0+sz^1\quad \forall \ s\in [0,1]. \end{aligned}$$

The rest of the proof is divided into two steps.

Step 1 Show that there exists \(\delta >0\), such that for each \(s\in [0,1]\), \(q(s)+B^V(0,\delta )\subset \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\), where \(B^V(0,\delta )\) denotes the closed ball in V, centered at 0 and of radius \(\delta \).

Define

$$\begin{aligned} f(s)=\inf _{y\in V\setminus \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)}\Vert q(s)-y\Vert _V,\quad s\in [0,1]. \end{aligned}$$
(22)
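Indeed, since the map \(s\mapsto q(s)\) is affine and the distance to a fixed subset of V is 1-Lipschitz with respect to the V-norm, for any \(s_1, s_2\in [0,1]\)

$$\begin{aligned} |f(s_1)-f(s_2)|\le \Vert q(s_1)-q(s_2)\Vert _V=|s_1-s_2|\,\Vert q^0-q^1\Vert _V. \end{aligned}$$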

Hence f is Lipschitz continuous over the compact interval [0, 1]; moreover, \(f(s)>0\) for every \(s\in [0,1]\), since \(q(s)\) belongs to the open set \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\). Then, by Weierstrass' Theorem, we have that

$$\begin{aligned} \min _{s\in [0,1]}f(s)>0. \end{aligned}$$

Choose \(0<\delta < \min _{s\in [0,1]}f(s)\). Hence, by (22), it follows that, for each \(s\in [0,1]\),

$$\begin{aligned} q(s)+B^V(0,\delta )\subset \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V), \end{aligned}$$

as required.

Step 2 Conclusion.

Let \(C>0\) be given by Lemma 2. Let \(\delta >0\) be given by Step 1. Choose \(N_0\in \mathbb {N}\setminus \left\{ 0\right\} \) such that

$$\begin{aligned} N_0>\frac{2C\Vert y_0-y_1\Vert _H}{\delta }. \end{aligned}$$
(23)

For each \(k\in \left\{ 0,\ldots ,N_0\right\} \), define:

$$\begin{aligned} y_k=\left( 1-\frac{k}{N_0}\right) y_0+\frac{k}{N_0}y_1\quad \text{ and }\quad u_k=\left( 1-\frac{k}{N_0}\right) \overline{u}^0+\frac{k}{N_0}\overline{u}^1. \end{aligned}$$
(24)
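Note that, by (20) and linearity, each pair \((y_k,u_k)\) is itself a steady state:

$$\begin{aligned} Ay_k+Bu_k=\left( 1-\frac{k}{N_0}\right) \left( Ay_0+B\overline{u}^0\right) +\frac{k}{N_0}\left( Ay_1+B\overline{u}^1\right) =0. \end{aligned}$$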

It is clear that, by (21), for each \(k\in \left\{ 0,\ldots ,N_0-1\right\} \),

$$\begin{aligned} \Vert y_k-y_{k+1}\Vert _H=\frac{1}{N_0}\Vert y_0-y_1\Vert _H\quad \text{ and }\quad u_k-q\left( \frac{k}{N_0}\right) \in \mathscr {U}_{\text{ ad }}. \end{aligned}$$
(25)

Arbitrarily fix \(k\in \left\{ 0,\ldots , N_0-1\right\} \). Take \(\eta _0=y_k-y_{k+1}\) and \(\overline{v}^0=u_k-u_{k+1}\). Then, we apply Lemma 2, getting a control \(w_k\in L^{\infty }(1,T_0+1;V)\) such that

$$\begin{aligned} y(T_0+1;y_k-y_{k+1},\hat{v}_k)=0 \end{aligned}$$
(26)

and

$$\begin{aligned} \Vert w_k\Vert _{L^{\infty }(1,T_0+1;V)}\le C\Vert y_k-y_{k+1}\Vert _H, \end{aligned}$$
(27)

where

$$\begin{aligned} \hat{v}_k(t)= {\left\{ \begin{array}{ll} \rho (t)(u_k-u_{k+1}) \quad &{}t\in (0,1]\\ w_k(t)\quad &{}t\in (1,T_0+1). \end{array}\right. } \end{aligned}$$
(28)

Define

$$\begin{aligned} v_k(t)= {\left\{ \begin{array}{ll} \rho (t)(u_k-u_{k+1})+u_{k+1} \quad &{}t\in (0,1]\\ w_k(t)+u_{k+1}\quad &{}t\in (1,T_0+1). \end{array}\right. } \end{aligned}$$
(29)

At the same time, by (20) and (24), we have

$$\begin{aligned} Ay_{k+1}+Bu_{k+1}=0\quad \text{ and }\quad y(T_0+1;y_{k+1},u_{k+1})=y_{k+1}. \end{aligned}$$

The above, together with (26), (28) and (29), yields

$$\begin{aligned} y(T_0+1;y_k,v_k)= & {} y(T_0+1;y_k-y_{k+1},\hat{v}_k)+y(T_0+1;y_{k+1},u_{k+1})\nonumber \\= & {} y_{k+1}. \end{aligned}$$
(30)

Next, we claim that

$$\begin{aligned} v_k(t)\in \mathscr {U}_{\text{ ad }}\quad \text{ for } \text{ a.e. } \ t\in (0,T_0+1). \end{aligned}$$
(31)

To this end, by (16) and since \(\mathscr {U}_{\text{ ad }}\) is a convex cone, we have

$$\begin{aligned} \mathscr {W}\text{ is } \text{ convex } \text{ and }\mathscr {W}\subset \mathscr {U}_{\text{ ad }}. \end{aligned}$$
(32)

By (17), \(0\le \rho (t)\le 1\) for all \(t\in \mathbb {R}\). Then, by (29) and (32), it follows that, for a.e. \(t\in (0,1)\),

$$\begin{aligned} v_k(t)=\rho (t)u_k+(1-\rho (t))u_{k+1}\in \rho (t)\mathscr {W}+(1-\rho (t))\mathscr {W}\subset \mathscr {W}\subset \mathscr {U}_{\text{ ad }}. \end{aligned}$$

At this stage, to show (31), it remains to prove that

$$\begin{aligned} v_k(t)\in \mathscr {U}_{\text{ ad }}\quad \text{ for } \text{ a.e. } \ t\in (1,T_0+1). \end{aligned}$$
(33)

Take \(t\in (1,T_0+1)\). By (27), (25) and (23), we have

$$\begin{aligned} \Vert w_k(t)\Vert _V\le \frac{C}{N_0}\Vert y_0-y_1\Vert _H\le \delta /2. \end{aligned}$$

From this and Step 1, it follows

$$\begin{aligned} w_k(t)+q\left( \frac{k+1}{N_0}\right) \in \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V). \end{aligned}$$

By this, (25), (29) and (16), we get, for a.e. t in \((1,T_0+1)\),

$$\begin{aligned} v_k(t)= & {} w_k(t)+u_{k+1}\\= & {} w_k(t)+q\left( \frac{k+1}{N_0}\right) +\left( u_{k+1}-q\left( \frac{k+1}{N_0}\right) \right) \\\in & {} \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)+\mathscr {U}_{\text{ ad }}\\= & {} \mathscr {W}. \end{aligned}$$

From this and (32), we are led to (33). Therefore, the claim (31) is true.

Finally, define

$$\begin{aligned} u(t)=v_k(t-k(T_0+1)),\, \forall \ t\in [k(T_0+1),(k+1)(T_0+1)),\, k\in \left\{ 0,\ldots ,N_0-1\right\} . \end{aligned}$$

Then, from (30) and (31), the conclusion of this theorem follows. \(\square \)

In Sects. 3.1 and 4.1, we apply the above Theorem to prove Theorems 1 and 3 respectively. In particular,

  • for internal control,

    $$\begin{aligned} \mathscr {U}_{\text{ ad }}=\left\{ u\in L^2(\omega ) \ | \ u\ge 0, \ \text{ a.e. } \ {\omega }\right\} ; \end{aligned}$$
  • for boundary control,

    $$\begin{aligned} \mathscr {U}_{\text{ ad }}=\left\{ u\in L^2(\varGamma ) \ | \ u\ge 0, \ \text{ a.e. } \ {\varGamma }\right\} . \end{aligned}$$

Then, in both cases, \(\mathscr {U}_{\text{ ad }}\) is a closed convex cone with vertex at 0.

Nevertheless, the above techniques can be adapted to a wide variety of contexts.

2.2 Controllability Between Trajectories

In this subsection, we study the constrained controllability for some general states lying on trajectories of the system with possibly nonzero time derivative. Recall s and V are given by (12). Before introducing our main result, we assume:

\((H_1^{\prime })\) the system (10) is smoothly controllable in time \(T_0\) for some \(T_0>0\).

\((H_2^{\prime })\) the set \(\mathscr {U}_{\text{ ad }}\) is closed and convex and \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\ne \varnothing \), where \(\text{ int }^V\) denotes

      the interior set in the topology of V;

\((H_3^{\prime })\) the operator A generates a \(C_0\)-group \(\left\{ \mathbb {T}_t\right\} _{t\in \mathbb {R}}\) over H and \(\Vert \mathbb {T}_t\Vert _{\mathscr {L}(H,H)}=1\) for all \(t\in \mathbb {R}\). Furthermore, A is invertible from D(A) to H, with continuous inverse.

The main result of this subsection is the following. The notation \(y(\cdot ;y_0,u)\) stands for the solution of the abstract controlled Eq. (10) with control u and initial data \(y_0\).

Theorem 7

Assume \((H_1^{\prime })\), \((H_2^{\prime })\) and \((H_3^{\prime })\) hold. Let \(\overline{y}_i\in C^s(\mathbb {R};H)\) be solutions to (10) with controls \(\overline{u}^i\in L^2_{loc}(\mathbb {R};U)\) for \(i=0,1\). Assume \(\overline{u}^i(t)\in \mathscr {U}_{\text{ ad }}\) for a.e. \(t\in \mathbb {R}\). Let \(\tau _0\), \(\tau _1\in \mathbb {R}\). Then, there exists \(T>0\) and \(u\in L^2(0,T;U)\) such that

  • \(y(T;\overline{y}_0(\tau _0),u)=\overline{y}_1(\tau _1)\);

  • \(u(t)\in \mathscr {U}_{\text{ ad }}\) for a.e. \(t\in (0,T)\).

Remark 5

(i) Roughly, Theorem 7 addresses the constrained controllability for all initial data \(y_0\) and final target \(y_1\), with \(y_0,\ y_1\in E\), where

$$\begin{aligned} E=\bigg \{y(\tau )\ \Big | \ \tau \in \mathbb {R}, \ y\in C^s(\mathbb {R};H)\quad \text{ and }\quad \exists \ u\in L^2_{loc}(\mathbb {R};U) ,\bigg . \end{aligned}$$
$$\begin{aligned} \bigg .\text{ with }\quad u(t)\in \mathscr {U}_{\text{ ad }}\quad \text{ a.e. } \ t\in \mathbb {R}\quad \text{ s.t. }\quad \frac{d}{dt}y(t)=Ay(t)+Bu(t),\quad t\in \mathbb {R} \bigg \}. \end{aligned}$$

By Lemma 1, one can check that

$$\begin{aligned} \left\{ y(\tau ;0,u) \ \Big | \ \tau \in \mathbb {R},\, u\in C^s(\mathbb {R},\mathscr {U}_{\text{ ad }}), \frac{d^j}{dt^j}u(0)=0, \ j=0,\ldots ,s \right\} \subset E. \end{aligned}$$

Furthermore, we observe that such set E includes some non-steady states.

(ii) There are at least two differences between Theorems 6 and 7. First of all, Theorem 6 studies constrained controllability for some steady states, whereas Theorem 7 can deal with constrained controllability for some non-steady states (see (i) of this remark). Secondly, in Theorem 7 the controls \(\overline{u}^i\) (\(i=0,1\)) defining the initial datum \(\overline{y}^0(\tau _0)\) and final target \(\overline{y}^1(\tau _1)\) are required to fulfill the constraint

$$\begin{aligned} \overline{u}^i(t)\in \mathscr {U}_{\text{ ad }}, \quad \text{ a.e. } \ t\in \mathbb {R}, \ i=0,1, \end{aligned}$$

while \(\overline{u}^i\) in Theorem 6 is required to be in \(\mathscr {W}\subsetneq \mathscr {U}_{\text{ ad }}\). (Then, in Theorem 7 we have weakened the constraints on \(\overline{u}^i\). In particular, we are able to apply Theorem 7 to the wave system with nonnegative controls with final target \(\overline{y}^1\equiv 0\).)

Before proving Theorem 7, we show a preliminary lemma. Note that this lemma holds for any contraction semigroup; in particular, it applies both to wave-like and heat-like systems. A similar result was proved in [17, 20]. For the sake of completeness, we provide the proof of the aforementioned lemma in the Appendix.

Lemma 3

(Null Controllability by small controls) Assume that A generates a contractive \(C_0\)-semigroup \((\mathbb {T}_t)_{t\in \mathbb {R}^+}\) over H. Suppose that (\(H_1^{\prime }\)) holds. Let \(\varepsilon >0\) and \(\eta _0\in D(A^s)\). Then, there exists \(\overline{T}=\overline{T}(\varepsilon ,\Vert \eta _0\Vert _{D(A^{s})})>0\) such that, for any \(T\ge \overline{T}\), there exists a control \(v\in L^{\infty }((0,T);V)\) such that

  • \(y(T;\eta _0,v)=0\);

  • \(\Vert v\Vert _{L^{\infty }(\mathbb {R}^+;V)}\le \varepsilon \).

The proof of the Lemma above is given in the Appendix.

We are now ready to prove Theorem 7.

With respect to Theorem 6, we have weakened the constraints on the controls defining the initial and final trajectories. Thus, a priori, we have lost the room for oscillations needed in the proof of that theorem. We shall see how to recover it by modifying the initial and final trajectories away from the initial and final data (see Figs. 3, 4 and 5).

Fig. 3 The two original trajectories. The time \(\tau \) parameterizing the trajectories is just a parameter independent of the control time t

Fig. 4 The new trajectories to be linked, now synchronized with the control time t. Note that (1) we have translated the time parameter defining the trajectories and (2) we have modified them away from the initial and the final data, to apply Lemma 3. The new initial trajectory is represented in blue, while the new final trajectory is drawn in green. The modified part is dashed. Following the notation of the proof of Theorem 7, the new initial trajectory is \(y(\cdot ;\hat{u}^0,\overline{y}_0(\tau _0))\), while the new final trajectory is \(\varphi _T\)

Fig. 5 The new trajectories linked by the controlled trajectory y, pictured in red. As in Fig. 4, the new initial trajectory is drawn in blue, while the new final trajectory is represented in green

Proof

(Proof of Theorem 7) The main strategy of proof is the following:

  (i) we reduce the constrained controllability problem (with initial data \(\overline{y}_0(\tau _0)\) and final target \(\overline{y}_1(\tau _1)\)) to another controllability problem (with initial datum \(\hat{y}_0\) and final target 0);

  (ii) we solve the latter controllability problem by constructing two controls. The first control is used to improve the regularity of the solution. The second control is small in a regular space and steers the system to rest.

Step 1 The part (i) of the above strategy.

For each \(T>0\), we aim to define a new trajectory with the final state \(\overline{y}_1(\tau _1)\) as value at time \(t=T\). Choose a smooth function \(\zeta \in C^{\infty }(\mathbb {R})\) such that

$$\begin{aligned} \text{ Range }(\zeta )\subseteq [0,1],\quad \zeta \equiv 1 \ \text{ over } \ \left( -\frac{1}{2},\frac{1}{2}\right) \quad \text{ and } \quad \text{ supp }(\zeta )\subset \subset (-1,1). \end{aligned}$$
(34)

Take \(\sigma \in \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\). Arbitrarily fix \(T>1\). Define a control

$$\begin{aligned} \hat{u}_{T}^1(t)=\zeta (t-T)\overline{u}^1(t-T+\tau _1)+(1-\zeta (t-T))\sigma . \end{aligned}$$
(35)

We denote by \(\varphi _T\) the unique solution to the problem

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}\varphi (t)=A\varphi (t)+B\hat{u}_T^1(t)&{} t\in \mathbb {R}\\ \varphi (T)=\overline{y}_1(\tau _1).\\ \end{array}\right. } \end{aligned}$$
(36)

In what follows, we will construct two controls which send \(\overline{y}_0(\tau _0)-\varphi _T(0)\) to 0 in time T, which is part (ii) of our strategy. Recall that \(\rho \) is given by (17). We define

$$\begin{aligned} \hat{u}^0(t)=\rho (t)\overline{u}^0(t+\tau _0)+(1-\rho (t))\sigma \quad t\in \mathbb {R}. \end{aligned}$$

Step 2 Estimate of \(\Vert y(1;\overline{y}_0(\tau _0)-\varphi _T(0),\hat{u}^0-\hat{u}^1_T)\Vert _{D(A^s)}\)

The control \(\hat{u}^0-\hat{u}^1_T\) plays the role of the first control mentioned in part (ii) of our strategy. In this step, we aim to prove the following regularity estimate associated with this control: there exists a constant \(C>0\), independent of T and \(\sigma \), such that

$$\begin{aligned} \Vert y(1;\overline{y}_0(\tau _0)-\varphi _T(0),\hat{u}^0-\hat{u}^1_T)\Vert _{D(A^s)} \le C\left[ \Vert \overline{y}_0\Vert _{C^s([\tau _0,\tau _0+1];H)}+\Vert \overline{y}_1\Vert _{C^s([\tau _1-1,\tau _1];H)}+\Vert \sigma \Vert _{U}\right] . \end{aligned}$$
(37)

To begin, we introduce \(\psi \) the solution to

$$\begin{aligned} A\psi +B\sigma =0. \end{aligned}$$
(38)

First, we have that

$$\begin{aligned}&\, y(1;\overline{y}(\tau _0)-\varphi _T(0),\hat{u}^0-\hat{u}_T^1)\nonumber \\= & {} y(1;\overline{y}(\tau _0),\hat{u}^0)-y(1;\varphi _T(0),\hat{u}_T^1)\nonumber \\= & {} [y(1;\overline{y}(\tau _0),\hat{u}^0)-\psi ]-[y(1;\varphi _T(0),\hat{u}_T^1)-\psi ]\nonumber \\= & {} y(1;\overline{y}(\tau _0)-\psi ,\hat{u}^0-\sigma )-y(1;\varphi _T(0)-\psi ,\hat{u}_T^1-\sigma ). \end{aligned}$$
(39)

To estimate (37), we need to compute the norms of the last two terms in (39), in the space \(D(A^s)\). We claim that there exists \(C_1>0\) (independent of T and \(\sigma \)) such that

$$\begin{aligned} \Vert y(1;\overline{y}(\tau _0)-\psi ,\hat{u}^0-\sigma )\Vert _{D(A^s)}\le C_1\left( \Vert \overline{y}_0\Vert _{C^s([\tau _0,\tau _0+1];H)}+\Vert \sigma \Vert _{U}\right) . \end{aligned}$$
(40)

To this end, we show that

$$\begin{aligned} y(t;\overline{y}(\tau _0)-\psi ,\hat{u}^0-\sigma )=\rho (t)(\overline{y}^0(t+\tau _0)-\psi )+\eta _2(t),\quad t\in \mathbb {R}, \end{aligned}$$
(41)

where \(\eta _2\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}\eta _2(t)=A\eta _2(t)-\rho ^{\prime }(t)(\overline{y}^0(t+\tau _0)-\psi )&{} t\in \mathbb {R}\\ \eta _2(0)=0.\\ \end{array}\right. } \end{aligned}$$
(42)

Indeed,

$$\begin{aligned}&\,\frac{d}{dt}\left[ \rho (t)(\overline{y}^0(t+\tau _0)-\psi )+\eta _2(t)\right] \nonumber \\= & {} \rho (t)(A\overline{y}^0(t+\tau _0)+B\overline{u}^0(t+\tau _0))+\rho ^{\prime }(t)(\overline{y}^0(t+\tau _0)-\psi )\nonumber \\&+\,A\eta _2(t)-\rho ^{\prime }(t)(\overline{y}^0(t+\tau _0)-\psi )\nonumber \\= & {} A(\rho (t)\overline{y}^0(t+\tau _0)+\eta _2(t))+B\left( \rho (t)\overline{u}^0(t+\tau _0)\right) \nonumber \\= & {} A(\rho (t)(\overline{y}^0(t+\tau _0)-\psi )+\eta _2(t))+\rho (t)A\psi +B\left( \rho (t)\overline{u}^0(t+\tau _0)\right) \nonumber \\= & {} A(\rho (t)(\overline{y}^0(t+\tau _0)-\psi )+\eta _2(t))+B\left( \rho (t)(\overline{u}^0(t+\tau _0)-\sigma )\right) \nonumber \\= & {} A(\rho (t)(\overline{y}^0(t+\tau _0)-\psi )+\eta _2(t))+B(\hat{u}^0(t)-\sigma ). \end{aligned}$$
(43)

At the same time, since \(\rho (0)=1\) and \(\eta _2(0)=0\) (see (42)), the right hand side of (41) takes the value \(\overline{y}^0(\tau _0)-\psi \) at \(t=0\), i.e. both sides of (41) have the same initial datum. From this and (43), we are led to (41).

Next, we will use (41) and (42) to prove (40). To this end, since we assumed \(\overline{y}^0\in C^s(\mathbb {R};H)\) and \(\psi \) is independent of t, we get that

$$\begin{aligned} \overline{y}^0(\cdot +\tau _0)-\psi \in C^s(\mathbb {R};H). \end{aligned}$$

By this, we apply Lemma 1 obtaining the existence of \(\hat{C}_1>0\) (independent of T and \(\sigma \)) such that

$$\begin{aligned} \Vert \eta _2(1)\Vert _{D(A^s)}\le \hat{C}_1\left( \Vert \overline{y}^0\Vert _{C^s([\tau _0,\tau _0+1];H)}+\Vert \psi \Vert _H\right) . \end{aligned}$$
(44)

At the same time, since \(\rho (1)=0\) (see (17)), by (41), we have that

$$\begin{aligned} y(1;\overline{y}(\tau _0)-\psi ,\hat{u}^0-\sigma )=\eta _2(1). \end{aligned}$$

This, together with (44) and (38), yields (40).

At this point, we estimate the norm of the second term in (39) in the space \(D(A^s)\), namely we prove the existence of \(C_2>0\) (independent of T and \(\sigma \)) such that

$$\begin{aligned} \Vert y(1;\varphi _T(0)-\psi ,\hat{u}_T^1-\sigma )\Vert _{D(A^s)}\le C_2\left[ \Vert \overline{y}^1\Vert _{C^s([\tau _1-1,\tau _1];H)}+\Vert \sigma \Vert _{U}\right] . \end{aligned}$$
(45)

To this end, arguing as in the proof of (41), we get that

$$\begin{aligned} y(t;\varphi _T(0)-\psi ,\hat{u}_T^1-\sigma )=\zeta (t-T)(\overline{y}^1(t-T+\tau _1)-\psi )+\tilde{\eta }_2(t), \quad t\in \mathbb {R}, \end{aligned}$$
(46)

where \(\tilde{\eta }_2\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}\tilde{\eta }_2(t)=A\tilde{\eta }_2(t)-\zeta ^{\prime }(t-T)(\overline{y}^1(t-T+\tau _1)-\psi )&{} t\in \mathbb {R}\\ \tilde{\eta }_2(T)=0.\\ \end{array}\right. } \end{aligned}$$
(47)

We will use (46) and (47) to prove (45). Indeed, set

$$\begin{aligned} \hat{\eta }(t)=\tilde{\eta }_2(T-t). \end{aligned}$$

By definition of \(\hat{\eta }\), we have

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}\hat{\eta }(t)=-A\hat{\eta }(t)+\zeta ^{\prime }(-t)(\overline{y}^1(\tau _1-t)-\psi )&{} t\in \mathbb {R}\\ \hat{\eta }(0)=0.\\ \end{array}\right. } \end{aligned}$$
(48)

Since we have assumed \(\overline{y}^1\in C^s(\mathbb {R}, H)\) and \(\psi \) is independent of t (see (38)), we have

$$\begin{aligned} \overline{y}^1-\psi \in C^s(\mathbb {R};H). \end{aligned}$$

Recall that \(\zeta (t)\equiv 1\) in \(\left( -\frac{1}{2},\frac{1}{2}\right) \) (see (34)). Then, \(\zeta ^{\prime }(t)= 0\), for each \(t\in \left( -\frac{1}{2},\frac{1}{2}\right) \). Now, by hypothesis \((H_3^{\prime })\), A generates a group of operators. Hence, we can apply Lemma 1 to (48) getting the existence of \(\tilde{C}_2>0\) (independent of T and \(\sigma \)) such that

$$\begin{aligned} \Vert \hat{\eta }(1)\Vert _{D(A^s)}\le \tilde{C}_2\left( \Vert \overline{y}^1\Vert _{C^s([\tau _1-1,\tau _1];H)}+\Vert \psi \Vert _H\right) , \end{aligned}$$

whence

$$\begin{aligned} \Vert \tilde{\eta }_2(T-1)\Vert _{D(A^s)}\le \tilde{C}_2\left( \Vert \overline{y}^1\Vert _{C^s([\tau _1-1,\tau _1];H)}+\Vert \psi \Vert _H\right) . \end{aligned}$$
(49)

At the same time, by (\(H_3^{\prime }\)) and some computations, we have that

$$\begin{aligned} \Vert \mathbb {T}_t\Vert _{\mathscr {L}(D(A^s),D(A^s))}=1, \quad \text{ for } \text{ each } \ t\in \mathbb {R}. \end{aligned}$$

Since \(\zeta (t-T)=0\), for each \(t\in [0,T-1]\) (see (34)), the above, together with (46) and (47), yields

$$\begin{aligned} \Vert y(1;\varphi _T(0)-\psi ,\hat{u}_T^1-\sigma )\Vert _{D(A^s)}=\Vert \tilde{\eta }_2(1)\Vert _{D(A^s)} =\Vert \tilde{\eta }_2(T-1)\Vert _{D(A^s)}. \end{aligned}$$

This, together with (49) and (38), leads to (45).

Step 3 Conclusion.

In this step, we will first construct the second control mentioned in part (ii) of our strategy. Then we put together the first and second controls (mentioned in part (ii)) to get the conclusion.

By (45),

$$\begin{aligned} \Vert y(1;\varphi _T(0)-\psi ,\hat{u}_T^1-\sigma )\Vert _{D(A^s)}\le C_2\left[ \Vert \overline{y}^1\Vert _{C^s([\tau _1-1,\tau _1];H)}+\Vert \sigma \Vert _{U}\right] . \end{aligned}$$

The above estimate is independent of T. Then for each \(T>0\), by Lemma 3, there exists

$$\overline{T}=\overline{T}(\sigma ,\Vert \overline{y}^0\Vert _{C^s([\tau _0,\tau _0+1];H)},\Vert \overline{y}^1\Vert _{C^s([\tau _1-1,\tau _1];H)})>0$$

and \(w_T\in L^{\infty }(\mathbb {R}^+;V)\) such that

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{d}{dt}z(t)=Az(t)+Bw_T(t)&{} t\in (1,\overline{T})\\ z(1)=y(1;\overline{y}(\tau _0)-\varphi _T(0),\hat{u}^0-\hat{u}_T^1),\quad z(\overline{T})=0\\ \end{array}\right. } \end{aligned}$$
(50)

and

$$\begin{aligned} \Vert w_T\Vert _{L^{\infty }(1,\overline{T};V)}\le \frac{1}{2} \inf _{y\in V\setminus \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)}\Vert \sigma -y\Vert _V. \end{aligned}$$
(51)

Note that the last constant is positive, because \(\sigma \) is taken from \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\). Choose \(T=\overline{T}+1\). Define a control:

$$\begin{aligned} v={\left\{ \begin{array}{ll} \hat{u}^0(t) \quad &{}t\in \ (0,1)\\ w_T(t)+\hat{u}_T^1(t) \quad &{}t\in \ (1,\overline{T})\\ \hat{u}_T^1(t) \quad &{}t\in \ (\overline{T},\overline{T}+1).\\ \end{array}\right. } \end{aligned}$$
(52)

We aim to show that

$$\begin{aligned} y(\overline{T}+1;\overline{y}^0(\tau _0),v)=\overline{y}^1(\tau _1)\quad \text{ and }\quad v(t)\in \mathscr {U}_{\text{ ad }} \ \text{ a.e. } \ t\in \ (0,\overline{T}+1). \end{aligned}$$
(53)

To this end, by (52), (50) and (36), we get that

$$\begin{aligned} y(\overline{T}+1;\overline{y}^0(\tau _0),v)= & {} y(\overline{T}+1;\overline{y}^0(\tau _0)-\varphi _T(0),v-\hat{u}^1_T)+ y(\overline{T}+1;\varphi _T(0),\hat{u}^1_T)\\= & {} \mathbb {T}_1(z_T(\overline{T}))+\varphi _T(\overline{T}+1)\\= & {} \overline{y}^1(\tau _1). \end{aligned}$$

This leads to the first conclusion of (53). It remains to show the second condition in (53). Arbitrarily fix \(t\in (0,1)\). By (52) and the definition of \(\hat{u}^0\), we have

$$\begin{aligned} v(t)=\rho (t)\overline{u}^0(t+\tau _0)+(1-\rho (t))\sigma \in \rho (t)\mathscr {U}_{\text{ ad }}+(1-\rho (t))\mathscr {U}_{\text{ ad }}\subset \mathscr {U}_{\text{ ad }}. \end{aligned}$$

Choose also an arbitrary \(s\in (1,\overline{T})\). By (52), (51) and (35), and since \(\zeta (s-\overline{T}-1)=0\) for \(s\le \overline{T}\) (see (34)), we obtain

$$\begin{aligned} v(s)=w_T(s)+(1-\zeta (s-\overline{T}-1))\sigma +\zeta (s-\overline{T}-1)\overline{u}^1(s-\overline{T}-1+\tau _1) =w_T(s)+\sigma \in \text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\subset \mathscr {U}_{\text{ ad }}. \end{aligned}$$

Take any \(t\in (\overline{T},\overline{T}+1)\). We find from (52) and (35) that

$$\begin{aligned} v(t)=\zeta (t-\overline{T}-1)\overline{u}^1(t-\overline{T}-1+\tau _1)+(1-\zeta (t-\overline{T}-1))\sigma \in \zeta (t-\overline{T}-1)\mathscr {U}_{\text{ ad }}+(1-\zeta (t-\overline{T}-1))\mathscr {U}_{\text{ ad }} \subset \mathscr {U}_{\text{ ad }}. \end{aligned}$$

Therefore, we are led to the second conclusion of (53). This ends the proof. \(\square \)

3 Internal Control: Proof of Theorems 1 and 2

The present section is organized as follows:

  • Section 3.1: proof of Lemma 4 and Theorem 1;

  • Section 3.2: proof of Theorem 2;

  • Section 3.3: discussion of the issues related to the internal control touching the boundary.

3.1 Proof of Theorem 1

We now prove Theorem 1 by employing Theorem 6.

First, we place our control system in the abstract framework introduced in Sect. 2 and prove that it is smoothly controllable (see Definition 2).

The free dynamics is generated by \(A:D(A)\subset H\longrightarrow H\), where

$$\begin{aligned} A=\begin{pmatrix} 0&{}I\\ -A_0&{}0\\ \end{pmatrix}, {\left\{ \begin{array}{ll} H=H^1_0(\varOmega )\times L^2(\varOmega )\\ D(A)=\left( H^2(\varOmega )\cap H^1_0(\varOmega )\right) \times H^1_0(\varOmega ). \end{array}\right. } \end{aligned}$$
(54)

where \(A_0=-\varDelta +cI:H^2(\varOmega )\cap H^1_0(\varOmega )\subset L^2(\varOmega )\longrightarrow L^2(\varOmega )\). The control operator

$$\begin{aligned} B(v)=\begin{pmatrix} 0\\ \chi v\\ \end{pmatrix} \end{aligned}$$

defined from \(U=L^2(\omega )\) to \(H=H^1_0(\varOmega )\times L^2(\varOmega )\) is bounded, hence admissible.
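To fix ideas, the objects in (54) can be mimicked in a semi-discrete model. The following minimal sketch uses a one-dimensional finite-difference surrogate; the grid size, the potential \(c\) and the cut-off \(\chi \) are illustrative assumptions and not the setting of the paper.

```python
import numpy as np

N = 99                                    # interior grid points of (0, 1)
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
c = 1.0                                   # hypothetical bounded potential

lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / h**2          # discrete Dirichlet Laplacian
A0 = -lap + c * np.eye(N)                              # A_0 = -Delta + c I

A = np.block([[np.zeros((N, N)), np.eye(N)],           # A = [[0, I], [-A_0, 0]]
              [-A0,              np.zeros((N, N))]])

mask = (x > 0.3) & (x < 0.7)                           # control region omega (assumed)
chi = np.where(mask, np.exp(-1.0 / np.clip((x - 0.3) * (0.7 - x), 1e-12, None)), 0.0)

def B(v):
    """Control operator B(v) = (0, chi * v)."""
    return np.concatenate([np.zeros(N), chi * v])

# one explicit Euler step of d/dt (y, y_t) = A (y, y_t) + B u (illustration only)
dt = 1e-4
state = np.concatenate([np.sin(np.pi * x), np.zeros(N)])   # (y, y_t) at t = 0
u = np.ones(N)                                             # a nonnegative control
state = state + dt * (A @ state + B(u))
```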

Lemma 4

In the above framework, take \(V=H^{s(n)}(\omega )\) and \(s=s(n)=\lfloor {n/2}\rfloor +1\). Assume further that \((\varOmega ,\omega _0,T^*)\) fulfills the Geometric Control Condition. Then, the control system (1) is smoothly controllable in any time \(T_0>T^*\).

The proof of this lemma can be found in [9, Theorem 5.1].

We are now ready to prove Theorem 1.

Proof

(of Theorem 1) We choose as the set of admissible controls:

$$\begin{aligned} \mathscr {U}_{\text{ ad }}=\left\{ u\in L^2(\omega ) \ | \ u\ge 0, \ \text{ a.e. } \ \omega \right\} . \end{aligned}$$

Then,

$$\begin{aligned} \bigcup _{\sigma >0}\left\{ u\in L^2(\omega ) \ | \ u\ge \sigma , \ \text{ a.e. } \ {\omega }\right\} \subset \mathscr {W}. \end{aligned}$$
(55)

We highlight that, to prove (55), we need the embedding \(H^{s(n)}(\omega )\hookrightarrow C^0(\overline{\omega })\), which holds because \(s(n)=\lfloor {n/2}\rfloor +1>n/2\). This is the reason for our choice of \(s(n)\).

By Lemma 4, the system is Smoothly Controllable with \(s=s(n)=\lfloor {n/2}\rfloor +1\) and \(V=H^{s(n)}(\omega )\). We then conclude by Theorem 6. \(\square \)

3.2 Proof of Theorem 2

We now prove Theorem 2.

Proof

(Proof of Theorem 2). As we have seen, our system fits the abstract framework. Moreover, we have checked in Lemma 4 that the system is Smoothly Controllable with \(s(n)=\lfloor {n/2}\rfloor +1\) and \(V=H^{s(n)}(\omega )\). Furthermore, \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\ne \varnothing \). Indeed, any constant \(\sigma >0\) belongs to \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\), since \(H^{s(n)}(\omega )\hookrightarrow C^0(\overline{\omega })\). This is guaranteed by our choice of \(s(n)=\lfloor {n/2}\rfloor +1\).

Therefore, we are in a position to apply Theorem 7 and finish the proof. \(\square \)

3.3 Internal Controllability From a Neighborhood of the Boundary

So far, we have assumed that the control is localized by means of a smooth cut-off function \(\chi \) whose derivatives all vanish on the boundary of \(\varOmega \). This implies that \(\chi \) must be constant on any connected component of the boundary, and it prevents us from localizing the internal control in a region touching the boundary only along a subregion, as in Fig. 6.

Fig. 6 Controlling from the interior touching the boundary

In this case, as already pointed out in [8], some difficulties in finding regular controls may arise. Indeed, as indicated both in [8] and in [9], a crucial property needs to be verified in order to have controls in \(C^0([0,T];H^s(\omega ))\), namely

$$\begin{aligned} BB^*\left( D((A^*)^k)\right) \subset D(A^k) \end{aligned}$$
(56)

for \(k=0,\ldots ,s\), where we have used the notation of the proof of Theorem 1.

In the present setting, for any \(k\in \mathbb {N}\) we have

$$\begin{aligned} D(A^k)=\left\{ \begin{pmatrix} \psi _1\\ \psi _2\\ \end{pmatrix} \Bigg | \begin{matrix} \psi _1\in H^{k+1}(\varOmega ),&{} \varDelta ^j\psi _1=0 \ \text{ on } \ \partial \varOmega ,&{} 0\le j \le \lfloor {k/2}\rfloor \\ \psi _2\in H^k(\varOmega ),&{} \varDelta ^j\psi _2=0 \ \text{ on } \ \partial \varOmega ,&{} 0\le j\le \lfloor {(k+1)/2}\rfloor -1\\ \end{matrix} \right\} , \end{aligned}$$

while

$$\begin{aligned} D((A^*)^k)=\left\{ \begin{pmatrix} \psi _1\\ \psi _2\\ \end{pmatrix} \Bigg | \begin{matrix} \psi _1\in H^{k}(\varOmega ),&{} \varDelta ^j\psi _1=0 \ \text{ on } \ \partial \varOmega ,\ 0\le j\le \lfloor {(k-1)/2}\rfloor \\ \psi _2\in H^{k-1}(\varOmega ), &{} \varDelta ^j\psi _2=0 \ \text{ on } \ \partial \varOmega ,\ 0\le j\le \lfloor {k/2}\rfloor -1\\ \end{matrix} \right\} . \end{aligned}$$
(57)

Furthermore,

$$\begin{aligned} BB^*=\begin{pmatrix} 0&{}0\\ \chi ^2&{}0\\ \end{pmatrix} \end{aligned}$$

Then, (56) is verified if and only if for any \(\psi \in H^s(\varOmega )\) such that

$$\begin{aligned} (\varDelta )^j(\psi )=0,\quad 0\le j\le \lfloor {(s-1)/2}\rfloor ,\quad \text{ a.e. } \text{ on } \ \partial \varOmega \end{aligned}$$

the following holds

$$\begin{aligned} (\varDelta )^j(\chi ^2\psi )=0,\quad 0\le j\le \lfloor {(s-1)/2}\rfloor ,\quad \text{ a.e. } \text{ on } \ \partial \varOmega . \end{aligned}$$
(58)

Choosing \(\chi \) so that all its normal derivatives vanish on \(\partial \varOmega \):

  • in case \(s<5\), we are able to prove (56) (see the symbolic sketch after this list). Then, by adapting the techniques of [9, Theorem 5.1], the system is Smoothly Controllable (Definition 2) with \(s(n)=\lfloor {n/2}\rfloor +1\). This enables us to prove Theorem 1 in space dimension \(n<8\).

  • in case \(s\ge 5\), the biharmonic operator \((\varDelta )^2\) comes into play in (58). Computing it in normal coordinates on the boundary, some terms appear involving the curvature and \(\frac{\partial }{\partial \xi _k}{\chi }\,\frac{\partial }{\partial \nu }\psi \), where \((\xi _1,\ldots ,\xi _{n-1})\) are tangential coordinates and \(\nu \) is the normal coordinate. In general, these terms do not vanish unless \(\partial \varOmega \) is flat. Hence, for \(n\ge 8\), we are unable to deduce a constrained controllability result in case the internal control is localized in a region touching \(\partial \varOmega \) only along a subregion.
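To illustrate the case \(s<5\), condition (58) with \(j=1\) can be verified symbolically on the model half-plane \(\{y\ge 0\}\) with flat boundary \(\{y=0\}\); this is a simplifying assumption, and \(\psi \), \(\chi \) below are hypothetical examples satisfying \(\psi =\varDelta \psi =0\) and \(\partial _y\chi =0\) on the boundary.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
psi = sp.sin(x) * sp.sinh(y)        # psi = 0 and Laplacian(psi) = 0 on {y = 0}
chi = sp.cos(x) * sp.exp(y**2)      # normal derivative of chi vanishes on {y = 0}

g = chi**2 * psi
laplacian_g = sp.diff(g, x, 2) + sp.diff(g, y, 2)
print(sp.simplify(laplacian_g.subs(y, 0)))   # prints 0: Laplacian(chi^2 psi) = 0 on {y = 0}
```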

4 Boundary Control: Proof of Theorems 3, 4 and 5

This section is devoted to boundary control and is organized as follows:

  • Section 4.1: proof of Lemma 5 and Theorem 3;

  • Section 4.2: proof of Theorem 4;

  • Section 4.3: proof of Theorem 5.

4.1 Proof of Theorem 3

We prove Theorem 3.

First of all, we explain how our boundary control system fits the abstract semigroup setting described in Sect. 2. The generator of the free dynamics is:

$$\begin{aligned} A=\begin{pmatrix} 0&{}I\\ -A_0&{}0\\ \end{pmatrix}, {\left\{ \begin{array}{ll} H=L^2(\varOmega )\times H^{-1}(\varOmega )\\ D(A)=H^1_0(\varOmega )\times L^2(\varOmega ), \end{array}\right. } \end{aligned}$$
(59)

where \(A_0=-\varDelta +cI:H^1_0(\varOmega )\subset H^{-1}(\varOmega )\longrightarrow H^{-1}(\varOmega )\). The definition of the control operator is subtler than in the internal control case. Let \(\varDelta _0\) be the Dirichlet Laplacian. Then, the control operator

$$\begin{aligned} B(v)=\begin{pmatrix} 0\\ -\varDelta _{0} \tilde{z}\\ \end{pmatrix},\quad \text{ where } {\left\{ \begin{array}{ll} -\varDelta \tilde{z}=0 &{} \text{ in } \varOmega \\ \tilde{z}=\chi v(\cdot ,t) &{} \text{ on } \partial \varOmega .\\ \end{array}\right. } \end{aligned}$$

is defined from \(L^2(\varGamma )\) to \(H^{-\frac{3}{2}}(\varOmega )\). In this case, B is unbounded but admissible (see [15] or [23, Proposition 10.9.1, page 349]).
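The Dirichlet map \(v\mapsto \tilde{z}\) behind \(B\) can also be approximated numerically. The following minimal sketch (unit square, five-point Laplacian, Jacobi iterations: all illustrative assumptions) computes a discrete harmonic extension of boundary data placed on one side of the square.

```python
import numpy as np

N = 49                                 # interior points per direction
h = 1.0 / (N + 1)
xg = np.linspace(h, 1.0 - h, N)

g = np.zeros((N + 2, N + 2))           # grid values, boundary included
v = np.sin(np.pi * xg) ** 2            # hypothetical boundary datum chi*v on one side
g[0, 1:-1] = v                         # the remaining sides stay at zero

for _ in range(5000):                  # Jacobi sweeps for the discrete Laplace equation
    g[1:-1, 1:-1] = 0.25 * (g[:-2, 1:-1] + g[2:, 1:-1]
                            + g[1:-1, :-2] + g[1:-1, 2:])

z_tilde = g                            # discrete harmonic extension of the boundary datum
# B(v) is then (0, -Delta_0 z_tilde), with Delta_0 the discrete Dirichlet Laplacian.
```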

Lemma 5

In the above framework, set \(V=H^{s(n)-\frac{1}{2}}(\varGamma )\) and \(s=s(n)\), with \(s(n)=\lfloor {n/2}\rfloor +1\). Suppose (GCC) holds for \((\varOmega ,\varGamma _0,T^*)\). Then, the control system (5) is smoothly controllable in any time \(T_0>T^*\).

One can prove the above lemma by employing [9, Theorem 5.4].

Proof

(Proof of Theorem 3) We prove our theorem by choosing the set of admissible controls:

$$\begin{aligned} \mathscr {U}_{\text{ ad }}=\left\{ u\in L^2(\varGamma ) \ | \ u\ge 0, \ \text{ a.e. } \ {\varGamma }\right\} . \end{aligned}$$

Hence,

$$\begin{aligned} \bigcup _{\sigma >0}\left\{ u\in L^2(\varGamma ) \ | \ u\ge \sigma , \ \text{ a.e. } \ {\varGamma }\right\} \subset \mathscr {W}. \end{aligned}$$
(60)

Note that, in order to show (60), it is essential that the embedding \(H^{s(n)-\frac{1}{2}}(\varGamma )\hookrightarrow C^0(\overline{\varGamma })\) is continuous. This is guaranteed by the choice \(s(n)=\lfloor {n/2}\rfloor +1\), since \(s(n)-\frac{1}{2}>\frac{n-1}{2}\) and \(\varGamma \) is an \((n-1)\)-dimensional manifold.

By Lemma 5, the system is smoothly controllable. It then suffices to apply Theorem 6. \(\square \)

4.2 Proof of Theorem 4

We now prove Theorem 4.

Proof

(Proof of Theorem 4) We have explained above how our control system (5) fits the abstract framework presented in Sect. 2. Furthermore, by Lemma 5, the system is Smoothly Controllable with \(s(n)=\lfloor {n/2}\rfloor +1\) and \(V=H^{s(n)-\frac{1}{2}}(\varGamma )\). Moreover, the set \(\text{ int }^V(\mathscr {U}_{\text{ ad }}\cap V)\) is nonempty, since all constants \(\sigma >0\) belong to it. This is a consequence of the continuity of the embedding \(H^{s(n)-\frac{1}{2}}(\varGamma )\hookrightarrow C^0(\overline{\varGamma })\), valid for \(s(n)=\lfloor {n/2}\rfloor +1\). The result then follows from Theorem 7. \(\square \)

4.3 State Constraints. Proof of Theorem 5

We conclude this section by proving Theorem 5 on state constraints. The following result is needed.

Lemma 6

Let \(s\in \mathbb {N}^*\) and \(T>T^*\). Take a steady state solution \(\eta _0\) associated to the control \(v^0\in H^{s-\frac{1}{2}}(\varGamma )\). Then, there exists \(v\in \bigcap _{j=0}^{s}C^j([0,T];H^{s-\frac{1}{2}-j}(\varGamma ))\) such that the unique solution \((\eta ,\eta _t)\) to (5) with initial datum \((\eta _0,0)\) and control v satisfies \((\eta (T,\cdot ),\eta _t(T,\cdot ))=(0,0)\). Furthermore,

$$\begin{aligned} \sum _{j=0}^s\Vert v\Vert _{C^j([0,T];H^{s-\frac{1}{2}-j}(\varGamma ))}\le C(T)\Vert v^0\Vert _{H^{s-\frac{1}{2}}(\varGamma )}, \end{aligned}$$
(61)

the constant C being independent of \(\eta _0\) and \(v^0\). Finally, if \(s=s(n)=\lfloor {n/2}\rfloor +1\), then the control \(v\in C^0([0,T]\times \overline{\varGamma })\) and

$$\begin{aligned} \Vert v\Vert _{C^0([0,T]\times \overline{\varGamma })}\le C(T) \Vert v^0\Vert _{H^{s(n)-\frac{1}{2}}(\varGamma )}. \end{aligned}$$
(62)

The above lemma can be proved using the techniques of Lemma 2. We now prove our theorem on state constraints.

Proof

(of Theorem 5)

Step 1 Consequences of Lemma 6.

Let \(T_0>T^*\), \(T^*\) being the critical time given by the Geometric Control Condition. By Lemma 6, for any \(\varepsilon >0\), there exists \(\delta _{\varepsilon } >0\) such that, for any pair of steady states \(y_0\) and \(y_1\) defined by regular controls \(\overline{u}^i\in H^{s(n)-\frac{1}{2}}(\varGamma )\) satisfying:

$$\begin{aligned} \Vert \overline{u}^1-\overline{u}^0\Vert _{H^{s(n)-\frac{1}{2}}(\varGamma ) }<\delta _{\varepsilon } \end{aligned}$$
(63)

we can find a control u driving (10) from \(y_0\) to \(y_1\) in time \(T_0\) and verifying

$$\begin{aligned} \sum _{j=0}^{s(n)}\Vert u-\overline{u}^1\Vert _{C^j([0,T_0];H^{s(n)-\frac{1}{2} -j}(\varGamma ))}<\varepsilon , \end{aligned}$$
(64)

where \(\overline{u}^1\) is the control defining \(y_1\). Moreover, if \((y,y_t)\) is the unique solution to (5) with initial datum \((y_0,0)\) and control u, we have

$$\begin{aligned} \Vert y-y_1\Vert _{C^0([0,T_0]\times \overline{\varOmega })}\le C\Vert y-y_1\Vert _{C^0([0,T_0];H^{s(n)}(\varOmega ))} \end{aligned}$$
$$\begin{aligned} \le C\sum _{j=0}^{s(n)}\Vert u-\overline{u}^1\Vert _{C^j([0,T_0];H^{s(n)-\frac{1}{2} -j}(\varGamma ))}\le C\varepsilon , \end{aligned}$$

where we have used the boundedness of the inclusion \(H^{s(n)}(\varOmega )\hookrightarrow C^0(\overline{\varOmega })\) and the continuous dependence of the solution on the data.

Step 2 Stepwise procedure and conclusion.

We consider the convex combination \(\gamma (s)=(1-s)y_0+sy_1\). Then, let

$$\begin{aligned} z_k=\gamma \left( \frac{k}{\overline{n}}\right) ,\quad k=0,\ldots ,\overline{n} \end{aligned}$$

be a finite sequence of steady states defined by the controls \(\overline{u}_k=\frac{\overline{n}-k}{\overline{n}} \overline{u}^0+\frac{k}{\overline{n}} \overline{u}^1\). Let \(\delta >0\). By taking \(\overline{n}\) sufficiently large,

$$\begin{aligned} \Vert \overline{u}_{k}-\overline{u}_{k-1}\Vert _{H^{s(n)-\frac{1}{2}}(\varGamma )}<\delta . \end{aligned}$$
(65)
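For orientation, this discretization of the segment of steady controls can be sketched as follows; finite-dimensional vectors replace the controls in \(H^{s(n)-\frac{1}{2}}(\varGamma )\) and the sample data are assumptions.

```python
import numpy as np

def steady_control_chain(u0, u1, delta):
    """Convex combinations ((n_bar - k)/n_bar) u0 + (k/n_bar) u1 with consecutive gaps < delta."""
    gap = np.linalg.norm(u1 - u0)
    n_bar = int(np.ceil(gap / delta)) + 1
    return [((n_bar - k) / n_bar) * u0 + (k / n_bar) * u1 for k in range(n_bar + 1)]

u0, u1 = np.array([1.0, 2.0]), np.array([3.0, 0.5])      # hypothetical steady controls
chain = steady_control_chain(u0, u1, delta=0.3)
assert all(np.linalg.norm(b - a) < 0.3 for a, b in zip(chain, chain[1:]))
```

Consecutive elements of the chain differ by \(\Vert \overline{u}^1-\overline{u}^0\Vert /\overline{n}\), which is precisely the quantity estimated in (65).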

By the above reasoning, choosing \(\delta \) small enough, for any \(1\le k\le \overline{n}\), we can find a control \(u^k\) joining the steady states \(z_{k-1}\) and \(z_{k}\) in time \(T_0\), with

$$\begin{aligned} \Vert y^k-z_k\Vert _{C^0([0,T_0]\times \overline{\varOmega })}\le \sigma , \end{aligned}$$

where \((y^k,(y^k)_t)\) is the solution to (5) with initial datum \((z_{k-1},0)\) and control \(u^k\). Hence,

$$\begin{aligned} y^k=y^k-z_k+z_k\ge -\sigma +\sigma =0,\quad \text{ on }\ (0,T_0)\times \varOmega , \end{aligned}$$
(66)

where we have used the maximum principle for elliptic equations (see [2]) to assert that \(z_k\ge \sigma \), because \(\overline{u}_k\ge \sigma \).

By taking the traces in (66), we have \(u^k\ge 0\) for \(1\le k\le \overline{n}\).

In conclusion, the control \( u:(0,\overline{n}T_0)\longrightarrow H^{s(n)-\frac{1}{2}}(\varGamma ) \) defined as \(u(t)=u^k(t-(k-1)T_0)\) for \(t\in ((k-1)T_0,k T_0)\) is the required one. This finishes the proof. \(\square \)

5 The One Dimensional Wave Equation

We consider the one-dimensional wave equation, controlled from the boundary

$$\begin{aligned} {\left\{ \begin{array}{ll} y_{tt}-y_{xx}=0 &{} (t,x)\in (0,T)\times (0,1)\\ y(t,0)=u_0(t),\ y(t,1)=u_1(t) &{} t\in (0,T)\\ y(0,x)=y^0_0(x), \ y_t(0,x)=y^1_0(x). &{} x\in (0,1)\\ \end{array}\right. } \end{aligned}$$
(67)

As in the general case, by transposition (see [15]), for any initial datum \((y_0^0,y_0^1)\in L^2(0,1)\times H^{-1}(0,1)\) and controls \(u_i\in L^2(0,T)\), the above problem admits a unique solution \((y,y_t)\in C^0([0,T];L^2(0,1)\times H^{-1}(0,1))\).

We show how Theorem 4 reads in this one-dimensional setting, in the special case where both the initial trajectory \((\overline{y}_0,(\overline{y}_0)_t)\) and the final one \((\overline{y}_1,(\overline{y}_1)_t)\) are constant (independent of x) steady states.

We determine explicitly a pair of nonnegative controls steering (67) from one positive constant to the other. The controlled solution remains nonnegative.

In this special case, we show further that

  • the minimal controllability time is the same, regardless whether we impose the positivity constraint on the control or not;

  • constrained controllability holds in the minimal time.

The minimal controllability time for (67) is defined as follows.

Let \((y_0^0,y_0^1)\in L^2(0,1)\times H^{-1}(0,1)\) be an initial datum and \((y_1^0,y_1^1)\in L^2(0,1)\times H^{-1}(0,1)\) be a final target. Then the minimal controllability time without constraints is defined as follows:

$$\begin{aligned} T_{\text{ min }} \overset{{ \text{ def }}}{=} \inf \left\{ T>0 \ \big | \ \exists u_i\in L^{2}(0,T), \ (y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,y_1^1)\right\} . \end{aligned}$$
(68)

Similarly, the minimal time under positivity constraints on the control is defined as:

$$\begin{aligned} T_{\text{ min }}^c \overset{{ \text{ def }}}{=} \inf \left\{ T>0 \ \big | \ \exists u_i\in L^{2}(0,T)^{+}, \ (y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,y_1^1)\right\} . \end{aligned}$$
(69)

Finally, we introduce the minimal time with constraints on both the state and the control:

$$\begin{aligned} T_{\text{ min }}^s \overset{{ \text{ def }}}{=} \inf \left\{ T>0 \ \big | \ \exists u_i\in L^{2}(0,T)^{+}, \ (y(T,\cdot ),y_t(T,\cdot ))=(y_1^0,y_1^1), \ y\ge 0\right\} . \end{aligned}$$
(70)

The problem of controllability of the one-dimensional wave equation under bilateral constraints on the control has been studied in [12]. In the next proposition, we concentrate on unilateral constraints and compute explicitly the minimal time for the specific data considered.

Proposition 1

Let \((y_0^0,0)\) be the initial datum and \((y_1^0,0)\) be the final target, with \(y_0^0\in \mathbb {R}^+\) and \(y_1^0\in \mathbb {R}^+\). Then,

  1. for any time \(T>1\), there exist two nonnegative controls

    $$\begin{aligned} u_0(t)= {\left\{ \begin{array}{ll} y_0^0 \quad &{} t\in [0,1)\\ (y_1^0-y_0^0)\frac{t-1}{T-1}+y_0^0 \quad &{} t\in (1,T]\\ \end{array}\right. } \end{aligned}$$
    (71)
    $$\begin{aligned} u_1(t)= {\left\{ \begin{array}{ll} (y_1^0-y_0^0)\frac{t}{T-1}+y_0^0 \quad &{} t\in [0,T-1)\\ y_1^0 \quad &{} t\in [T-1,T]\\ \end{array}\right. } \end{aligned}$$
    (72)

    driving (67) from \((y_0^0,0)\) to \((y_1^0,0)\) in time T. Moreover, the corresponding solution remains nonnegative, i.e.

    $$\begin{aligned} y(t,x)\ge 0,\quad \forall (t,x)\in [0,T]\times [0,1]. \end{aligned}$$
  2. \(T_{\text{ min }}^s=T_{\text{ min }}^c=T_{\text{ min }}=1\);

  3. the nonnegative controls \(\hat{u}^0\equiv y_0^0\) and \(\hat{u}^1\equiv y_1^0\) in \(L^2(0,1)\) steer (67) from \((y_0^0,0)\) to \((y_1^0,0)\) in the minimal time. Furthermore, the corresponding solution satisfies \(y\ge 0\) a.e. in \((0,1)\times (0,1)\);

  4. the controls in the minimal time are not unique. In particular, for any \(\lambda \in [0,1]\), \(\hat{u}^0_{\lambda }=(1-\lambda )y_0^0+\lambda y_1^0\) and \(\hat{u}^1_{\lambda }=(1-\lambda )y_1^0+\lambda y_0^0\) drive (67) from \((y_0^0,0)\) to \((y_1^0,0)\) in the minimal time.

Proof

We proceed in several steps.

Step 1 Proof of the constrained controllability in time \(T>1\).

By D’Alembert’s formula, the solution \((y,y_t)\) to (67) with initial datum \((y_0^0,0)\) and controls \(u_i\) defined in (71) and (72), reads as

$$\begin{aligned} y(t,x)=f(x+t),\quad (t,x)\in [0,T]\times [0,1], \end{aligned}$$

where

$$\begin{aligned}f(\xi )= {\left\{ \begin{array}{ll} y_0^0 \quad &{} \xi \in [0,1)\\ (y_1^0-y_0^0)\frac{\xi -1}{T-1}+y_0^0 \quad &{} \xi \in [1,T)\\ y_1^0 \quad &{} \xi \in [T,T+1].\\ \end{array}\right. } \end{aligned}$$

This finishes the proof of (1.).
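As a side check (not part of the proof), the travelling-wave representation above can be verified numerically; the values of \(y_0^0\), \(y_1^0\) and \(T\) below are arbitrary assumptions.

```python
import numpy as np

y0, y1, T = 2.0, 0.5, 3.0                        # assumed data and time horizon

def f(xi):
    return np.where(xi < 1.0, y0,
           np.where(xi < T, (y1 - y0) * (xi - 1.0) / (T - 1.0) + y0, y1))

def u0(t):                                       # control at x = 0, formula (71)
    return np.where(t < 1.0, y0, (y1 - y0) * (t - 1.0) / (T - 1.0) + y0)

def u1(t):                                       # control at x = 1, formula (72)
    return np.where(t < T - 1.0, (y1 - y0) * t / (T - 1.0) + y0, y1)

t = np.linspace(0.0, T, 601)
x = np.linspace(0.0, 1.0, 201)
assert np.allclose(f(t), u0(t))                  # y(t, 0) = u_0(t)
assert np.allclose(f(t + 1.0), u1(t))            # y(t, 1) = u_1(t)
assert np.allclose(f(x), y0)                     # y(0, x) = y_0^0
assert np.allclose(f(x + T), y1)                 # y(T, x) = y_1^0
assert (f(np.add.outer(t, x)) >= min(y0, y1) - 1e-12).all()   # nonnegativity of y
```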

Step 2 Computation of the minimal time.

In any time \(T>1\), controllability under state and control constraints holds. Hence, \(T_{\text{ min }}\le T_{\text{ min }}^c\le T_{\text{ min }}^s\le 1\).

It remains to prove that \(T_{\text{ min }}\ge 1\). This can be obtained by adapting the techniques of [18, Proposition 4.1].

Step 3 Controllability in the minimal time.

One can check (see Fig. 7) that the unique solution to (67) with initial datum \((y_0^0,0)\) and controls \(\hat{u}^i\) is

$$\begin{aligned} y(t,x)= {\left\{ \begin{array}{ll} y_0^0 \quad &{} t+x<1\\ y_1^0 \quad &{} t+x>1\\ \end{array}\right. } \end{aligned}$$
(73)

This concludes the argument. \(\square \)

Fig. 7 Level sets of the solution to (67) with initial datum \((y_0^0,0)\) and controls \(\hat{u}^i\). In the darker region the solution takes the value \(y_0^0\), while in the complement it coincides with \(y_1^0\)

6 Conclusions and Open Problems

In this paper we have analyzed the controllability of the wave equation under positivity constraints on the control and on the state.

  1. In the general case (without assuming that the energy defines a norm), we have shown how to steer the wave equation from one steady state to another in a sufficiently large time, provided that both steady states are defined by positive controls bounded away from zero;

  2. in case the energy defines a norm, we have generalized the above result to data lying on trajectories. Furthermore, the controls defining the trajectory are only required to be nonnegative, thus allowing us to take as target \((y_1^0,y_1^1)=(0,0)\).

We now present some open problems which, as far as we know, have not been treated in the literature so far.

  • Further analysis of the controllability of the wave equation under state constraints. As pointed out in [16, 19], in the case of parabolic equations a state constrained result follows from a control constrained one by means of the comparison principle. For the wave equation, such a principle does not hold. We have proved Theorem 5 using a “stair-case argument”, but further analysis is required.

  • On the minimal time for constrained controllability. Further analysis of the minimal constrained controllability time is required. In particular, it would be interesting to compare the minimal constrained controllability time and the unconstrained one for any choice of initial and final data. As we have seen in Proposition 1, they coincide for constant steady data in one space dimension.

  • In the present paper, we have determined nonnegative controls by employing results on the controllability of smooth data by smooth controls. This imposes a restriction on our analysis: the action of the control is localized by smooth cut-off functions. In particular, when controlling (1) from an interior subset touching the boundary, we encounter the issues discussed in Sect. 3.3 and already pointed out in [8] and [9]. It would therefore be worthwhile to build nonnegative controls without relying on smooth controllability.

  • Derive the Optimality System (OS) for the controllability of the wave equation by nonnegative controls.

  • Extend our results to the semilinear setting, by employing the analysis carried out in [4, Theorem 1.3], [5, 6, 25].

  • Extend the results to more general classes of potentials c. For instance, one could assume c to be bounded, instead of \(C^{\infty }\) smooth.