
1 Introduction

Let us consider the following viscous Burgers equation on the real line:

$${\partial _t}u - \mu \partial _x^2u + u{\partial _x}u = f(t,x),\quad x \in \mathbb{R}.$$
((1))

Here u = u(t,x) is an unknown function, μ > 0 is a viscosity coefficient, and f(t, x) is an external force which is assumed to be essentially bounded in x and integrable in t. Equation (1) is supplemented with the initial condition

$$u(0,x) = {u_0}(x),$$
((2))

where u_0 ∈ L^∞(ℝ). Due to the maximum principle, one can easily prove the existence and uniqueness of a solution for (1), (2) in appropriate functional classes. Our aim is to study controllability properties of (1). Namely, we assume that f has the form

$$f(t,x) = h(t,x) + \eta (t,x),$$
((3))

where h is a fixed regular function and η is a control, which is assumed to be a smooth function in time with range in a finite-dimensional subspace E ⊂ L^∞(ℝ). We shall say that (1) is approximately controllable at a time T > 0 if for any initial state u_0 ∈ L^∞(ℝ), any target û ∈ C(ℝ), and any numbers ε, r > 0 there is a smooth function η : [0, T] → E such that the solution u(t, x) of problem (1)–(3) satisfies the inequalities

$${\left\| {u(T, \cdot )} \right\|_{{L^\infty }(\mathbb{R})}} \leqslant K,\quad {\left\| {u(T, \cdot ) - \hat u} \right\|_{{L^\infty }([ - r,r])}} < \varepsilon ,$$
((4))

where K > 0 does not depend on r and ε. Given a finite subset Λ ⊂ ℝ, we denote by E Λ the vector space spanned by the functions cos(λx) and sin(λx) with λ ∈ Λ. The following theorem is a weaker version of the main result of this paper.

Theorem 1. Let Λ = {0, λ1, λ2, 2λ1, 2λ2, λ1 + λ2}, where λ1 and λ2 are incommensurable positive numbers, and let E = E Λ. Then Eq. (1) is approximately controllable at any time T > 0.
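
Let us note that, since sin(0 · x) ≡ 0, the space E_Λ of Theorem 1 is spanned by the eleven functions

$$1,\quad \cos ({\lambda _i}x),\quad \sin ({\lambda _i}x),\quad \cos (2{\lambda _i}x),\quad \sin (2{\lambda _i}x),\quad \cos (({\lambda _1} + {\lambda _2})x),\quad \sin (({\lambda _1} + {\lambda _2})x),\qquad i = 1,2,$$

so that an E_Λ-valued control is determined by eleven scalar functions of time; this is the count behind the remark on 11 Fourier modes made below.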

We refer the reader to Sect. 2 for a stronger result on approximate controllability and for an outline of its proof, which is based on an adaptation of a general approach introduced by Agrachev and Sarychev in [2] and further developed in [3]; see also [14–16] for some other extensions. Let us note that the Agrachev-Sarychev approach enables one to establish a much stronger property: given any initial and target states and any non-degenerate finite-dimensional functional, one can construct a control that steers the system to the given neighbourhood of the target so that the values of the functional on the solution and on the target coincide. However, to make the presentation simpler and shorter, we confine ourselves to the approximate controllability. The above-mentioned property of controllability will be analysed in [17] in the more difficult case of the 2D Navier-Stokes system.

The main theorem stated above proves the approximate controllability of the Burgers equation by a control whose Fourier transform is localised at 11 points. This result is in sharp contrast with the case of a control localised in the physical space, for which approximate controllability does not hold even for the problem on a bounded interval. This fact was established by Fursikov and Imanuvilov; see Sect. I.6 of the book [9]. Other negative results on boundary controllability of the Burgers equation were obtained by Diaz [7] and Guerrero and Imanuvilov [11]. On the other hand, Coron showed in [6] that any initial state can be driven to zero by a boundary control, and Fernández-Cara and Guerrero [8] proved the exact controllability (with an estimate for the minimal time of control) for the problem with distributed control. Furthermore, Glass and Guerrero [10] established global controllability to non-zero constant states via boundary controls for small values of the viscosity, and Chapouly [4] proved global exact controllability to a given solution by two boundary controls and one distributed control. Imanuvilov and Puel [12] proved the global boundary controllability of the 2D Burgers equation in a bounded domain under some geometric conditions. We refer the reader to the book [5] for a discussion of the methods used in the control theory for the Burgers equation on a bounded interval. To the best of our knowledge, the problem of controllability of the viscous Burgers equation has not been studied in the case of an unbounded domain.

The paper is organised as follows. In Sect. 2, we formulate the main result and outline the scheme of its proof. Section 3 collects some facts about the Cauchy problem for Eq. (1) without decay condition at infinity. The proof of the main result of the paper is given in Sect. 4.

Notation. Let J ⊂ ℝ be a bounded closed interval, let D ⊂ ℝn be an open subset, and let X be a Banach space. We denote by B X(R) the closed ball in X of radius R centred at zero. We shall use the following functional spaces.

For p ∈ [1, ∞], we denote by L^p(J, X) the space of measurable functions f : J → X such that

$${\left\| f \right\|_{{L^p}(J,X)}}: = {\left( {\int_J {\left\| {f(t)} \right\|_X^p\,dt} } \right)^{1/p}} < \infty .$$

In the case p = ∞, this norm should be replaced by ess sup_{t∈J} ‖f(t)‖_X.

For an integer k ∈ [0, +∞], we write C^k(J, X) for the space of k times continuously differentiable functions on J with range in X and endow it with the natural norm. In the case k = 0, we omit the corresponding superscript.

For an integer s ≥ 0, we denote by H s(D) the Sobolev space on D of order s with the standard norm ‖ · ‖ s . In the case s = 0, we write L 2(D) and ‖ · ‖.

L^∞ = L^∞(ℝ) is the space of bounded measurable functions f : ℝ → ℝ with the natural norm ‖f‖_{L^∞}. The space L^∞(D) is defined in a similar way.

W^{k,∞}(ℝ) is the space of functions f ∈ L^∞ such that \(\partial _x^jf \in {L^\infty }\) for 0 ≤ j ≤ k. \(C_b^\infty = C_b^\infty (\mathbb{R})\) stands for the space of infinitely differentiable functions f : ℝ → ℝ that are bounded together with all their derivatives.

\(H_{ul}^s = H_{ul}^s (\mathbb{R})\) is the space of functions f : ℝ → ℝ whose restriction to any bounded interval I ⊂ ℝ belongs to H^s(I) and for which

$${\left\| f \right\|_{H_{\operatorname{ul} }^s}}: = \mathop {\sup }\limits_{x \in \mathbb{R}} {\left\| {f(x + \cdot )} \right\|_{{H^s}([0,1])}} < \infty .$$
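
Let us note that, unlike the usual Sobolev spaces on ℝ, this space contains functions with no decay at infinity; for instance, for the constant function f ≡ 1 we have

$${\left\| 1 \right\|_{H_{\operatorname{ul} }^s}} = {\left\| 1 \right\|_{{H^s}([0,1])}} = 1,$$

although 1 does not belong to H^s(ℝ).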

If J = [a, b] and \(X = H_{ul}^s\) or \(H_{ul}^s \cap {L^\infty }\), then C_*(J, X) stands for the space of functions f : J → X that are bounded and continuous on the interval (a, b] and possess a limit in the space \(H_{loc}^s\) as t → a^+.

Finally, for T > 0 we set J_T = [0, T], and we denote by C_i unessential positive constants.

2 Main result and scheme of its proof

We begin with the definition of the property of approximate controllability. As will be proved in Sect. 3, the Cauchy problem (1), (2) is well posed. In particular, for any T > 0, any integer s ≥ 0, and any functions u_0 ∈ L^∞(ℝ) and \(f \in {L^1}({J_T},H_{ul}^s \cap {L^\infty })\), there is a unique solution \(u \in {C_*}({J_T},H_{ul}^s \cap {L^\infty })\) of (1), (2).

Definition 1. Let T > 0, let \(h \in {L^1}({J_T},H_{ul}^s)\) for any s ≥ 0, and let \(E \subset C_b^\infty \) be a finite-dimensional subspace. We shall say that problem (1), (3) is approximately controllable at time T by an E-valued control if for any integer s ≥ 0, any numbers ε, r > 0, and any functions u_0 ∈ L^∞ and \(\widehat u \in H_{ul}^s\) there is η ∈ C^∞(J_T, E) such that the solution u(t, x) of (1)–(3) satisfies the inequalities

$${\left\| {u(T, \cdot )} \right\|_{H_{\operatorname{ul} }^s \cap {L^\infty }}} \leqslant {K_s},\quad {\left\| {u(T, \cdot ) - \hat u} \right\|_{{H^s}([ - r,r])}} < \varepsilon ,$$
((5))

where K_s > 0 is a constant depending only on \(||{u_0}|{|_{{L^\infty }}}\), \(||\hat u|{|_{H_{ul}^s}}\), T, and s (but not on r and ε).

Recall that, given a finite subset Λ ⊂ ℝ, we denote by \({E_\Lambda } \subset C_b^\infty \) the vector span of the functions cos(λx) and sin(λx) with λ ∈ Λ. The following theorem is the main result of this paper.

Theorem 2. Let T > 0, let \(h \in {L^2}({J_T},H_{ul}^s)\) for any s ≥ 0, let λ_1 and λ_2 be incommensurable positive numbers, and let Λ = {0, λ_1, λ_2, 2λ_1, 2λ_2, λ_1 + λ_2}. Then problem (1), (3) is approximately controllable at time T by an E_Λ-valued control.

A proof of this theorem is given in Sect. 4. Here we outline its scheme. Let us fix an integer s ≥ 0 and functions u_0 ∈ L^∞ and \(\widehat u\) ∈ \(H_{ul}^s \). In view of the regularising property of the resolving operator for (1) (see Proposition 5), there is no loss of generality in assuming that u_0 ∈ \(C_b^\infty \), and by a density argument, we can also assume that \(\widehat u\) ∈ \(C_b^\infty \). Furthermore, as is proved in Sect. 4.5, if inequalities (5) are established for s = 0, then simple interpolation and regularisation arguments show that they remain valid for any s ≥ 1. Thus, it suffices to prove (5) for s = 0.

Given a finite-dimensional subspace G ⊂ \(C_b^\infty \), we consider the controlled equations

$${\partial _t}u - \mu \partial _x^2u + B(u) = h(t,x) + \eta (t,x),$$
((6))
$${\partial _t}u - \mu \partial _x^2(u + \zeta (t,x)) + B(u + \zeta (t,x)) = h(t,x) + \eta (t,x),$$
((7))

where η and ζ are G-valued controls, and we set B(u) = u∂_x u. We say that Eq. (6) is (ε, r, G)-controllable at time T for the pair (u_0, û) (or simply G-controllable if the other parameters are fixed) if one can find η ∈ C^∞(J_T, G) such that the solution u of (6), (2) satisfies inequalities (5) with s = 0. The concept of (ε, r, G)-controllability for (7) is defined in a similar way.

We need to prove that (6) is E_Λ-controllable. This fact will be proved in four steps. From now on, we assume that the functions u_0, \(\widehat u\) ∈ \(C_b^\infty \)(ℝ) and the positive numbers T, ε, and r are fixed, and we do not indicate explicitly the dependence of other quantities on them.

Step 1. Extension. Let us fix a finite-dimensional subspace G ⊂ \(C_b^\infty \). Even though Eq. (7) contains more control functions than Eq. (6), the property of G-controllability is equivalent for them. Namely, we have the following result.

Proposition 1. Equation (6) is G-controllable if and only if so is Eq. (7).

Step 2. Convexification. Let us fix a subset N ⊂ \(C_b^\infty \) invariant under multiplication by real numbers such that

$$N \subset G,\quad B(N) \subset G.$$
((8))

We denote by F (N,G) ⊂ \(C_b^\infty \) the vector span of functions of the form

$$\eta + \xi {\partial _x}\tilde \xi + \tilde \xi {\partial _x}\xi ,$$
((9))

where η, ξ ∈ G and \(\tilde \xi \) ∈ N. It is easy to see that F(N, G) is a finite-dimensional subspace contained in the convex envelope of G and B(G); cf. Lemma 1 in Sect. 4.2. The following proposition is an infinite-dimensional analogue of the well-known convexification principle for controlled ODEs (e.g., see [1, Theorem 8.7]).

Proposition 2. Under the above hypotheses, Eq. (7) is G-controllable if and only if Eq. (6) is F(N, G)-controllable.

Step 3. Saturation. Propositions 1 and 2 (and their proof) imply the following result, which is a kind of “relaxation property” for the controlled Burgers equation.

Proposition 3. Let N, G ⊂ \(C_b^\infty \) be as in Step 2. Then Eq. (6) is G-controllable if and only if it is F(N, G)-controllable. Moreover, the constant K_0 in (5) corresponding to Eq. (6) with G-valued control can be made arbitrarily close to that for Eq. (6) with F(N, G)-valued control.

We now set N = {c cos(λ_1 x), c sin(λ_1 x), c cos(λ_2 x), c sin(λ_2 x) : c ∈ ℝ} and define E_k = F(N, E_{k−1}) for k ≥ 1, where E_0 = E_Λ. Note that B(N) ⊂ E_Λ (this inclusion will be important in the proof of Lemma 1). It follows from Proposition 3 that Eq. (6) is E_Λ-controllable if and only if it is E_k-controllable for some integer k ≥ 1. We shall show that the latter property is true for a sufficiently large k. To this end, we first establish the following saturation property: there is a dense countable subset Λ_∞ ⊂ ℝ_+ such that

$$\bigcup\limits_{k = 1}^\infty {{E_k}} \;contains\;the\;functions\;\sin (\lambda x)\;and\;\cos (\lambda x)\;with\;\lambda \in {\Lambda _\infty }.$$
((10))
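
The inclusion B(N) ⊂ E_Λ mentioned above is checked by a direct computation: for instance,

$$B(c\sin ({\lambda _1}x)) = {c^2}\sin ({\lambda _1}x)\,{\partial _x}\sin ({\lambda _1}x) = \frac{{{c^2}{\lambda _1}}}{2}\sin (2{\lambda _1}x) \in {E_\Lambda },$$

and the functions c cos(λ_1 x), c sin(λ_2 x), and c cos(λ_2 x) are treated in the same way, the resulting frequencies 2λ_1 and 2λ_2 belonging to Λ.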

Step 4. Large control space. Once (10) is proved, one can easily show that (6) is E k -controllable for a sufficiently large k. To this end, it suffices to join u 0 and û by a smooth curve, to use Eq. (6) to define the corresponding control η, and to approximate it, in local topologies, by functions belonging to E k . The fact that the corresponding solutions are close follows from continuity of the resolving operator for (6) in local norms (see Proposition 6). This will complete the proof of Theorem 2.

3 Cauchy problem

In this section, we discuss the existence and uniqueness of a solution for the Cauchy problem for the generalised Burgers equation

$${\partial _t}u - \mu \partial _x^2(u + g(t,x)) + B(u + g(t,x)) = f(t,x),\quad x \in \mathbb{R},$$
((11))

where f and g are given functions. We also establish some a priori estimates for higher Sobolev norms and Lipschitz continuity of the resolving operator in local norms. The techniques of the maximum principle and of weighted energy estimates enabling one to derive this type of results are well known, and sometimes we confine ourselves to the formulation of a result and a sketch of its proof.

3.1 Existence, uniqueness, and regularity of a solution

Before studying the well-posedness of the Cauchy problem for Eq. (11), we recall some results for the linear equation

$${\partial _t}v - \mu \partial _x^2v + a(t,x){\partial _x}v + b(t,x)v = c(t,x),\quad x \in \mathbb{R},$$
((12))

supplemented with the initial condition

$$v(0,x) = {v_0}(x),$$
((13))

where v_0 ∈ L^∞(ℝ). The following proposition establishes the existence, uniqueness, and a priori estimates for a solution of problem (12), (13) in spaces with no decay condition at infinity.

Proposition 4. Let T > 0 and let a, b, and c be some functions such that

$$a \in {L^2}({J_T},{L^\infty }),\quad b,c \in {L^1}({J_T},{L^\infty }).$$

Then for any v 0L problem (12), (13) has a unique solution v(t, x) such that

$$v \in {L^\infty }({J_T} \times \mathbb{R}) \cap {C_*}({J_T},L_{\operatorname{ul} }^2),\quad {\left\| {{\partial _x}v( \cdot ,x)} \right\|_{{L^2}({J_T})}} \in L_{\operatorname{ul} }^2.$$

Moreover, this solution satisfies the inequalities

$${\left\| v \right\|_{{L^\infty }({J_t} \times \mathbb{R})}} \leqslant \exp \left( {{{\left\| b \right\|}_{{L^1}({J_t},{L^\infty })}}} \right)\left( {{{\left\| {{v_0}} \right\|}_{{L^\infty }}} + {{\left\| c \right\|}_{{L^1}({J_t},{L^\infty })}}} \right),$$
((14))
$${\left\| {v(t)} \right\|_{L_{\operatorname{ul} }^2}} + {\left\| {{\partial _x}v} \right\|_{L_{\operatorname{ul} }^2{L^2}({J_t})}} \leqslant C{e^{C(\bar a(t) + \bar b(t))}}\left( {{{\left\| {{v_0}} \right\|}_{L_{\operatorname{ul} }^2}} + {{\left\| c \right\|}_{L_{\operatorname{ul} }^2{L^2}({J_t})}}} \right),$$
((15))

where 0 ≤ t ≤ T, C > 0 is an absolute constant, and

$$\bar b(t) = \left\| b \right\|_{{L^2}({J_t},L_{ul}^2)}^2,\quad \bar a(t) = \left\| a \right\|_{{L^2}({J_t},{L^\infty })}^2,\quad {\left\| c \right\|_{L_{ul}^2{L^2}({J_t})}} = \mathop {\sup }\limits_{y \in \mathbb{R}} {\left\| c \right\|_{{L^2}({J_t} \times [y,y + 1])}}.$$

If, in addition, we have a ∈ L^∞(J_T × ℝ), then \(v \in {L^p}({J_T},H_{ul}^1)\) for any \(p \in [1,\frac{4}{3})\) and

$${\left\| v \right\|_{{L^p}({J_t},H_{ul}^1)}} \leqslant {C_1}\left( {{{\left\| {{v_0}} \right\|}_{L_{ul}^2}} + \int_0^t {{{\left\| {c(r)} \right\|}_{L_{ul}^2}}dr} } \right),$$
((16))

where C_1 > 0 depends only on p, \(||a|{|_{{L^\infty }}}\), and \(||b|{|_{{L^2}({J_T},L_{ul}^2)}}\).

Proof. Inequality (14) is nothing else but the maximum principle, while (15) can easily be obtained on multiplying (12) by e −|xy| v, integrating over x ∈ ℝ, and taking the supremum over y ∈ ℝ. Once these a priori estimates are established (by a formal computation), the existence and uniqueness of a solution in the required functional classes can be proved by usual arguments (e. g., see [13] for the more complicated case of the Navier-Stokes equations), and we omit them. The only non-standard point is inequality (16), and we now briefly outline its proof.

Let K t (x) be the heat kernel on the real line:

$${K_t}(x) = \frac{1}{{\sqrt {4\pi \mu t} }}\exp \left( { - \frac{{{x^2}}}{{4\mu t}}} \right),\quad x \in \mathbb{R},\quad t > 0.$$
((17))

The following estimates are easy to check:

$${\left\| {{K_t} * g} \right\|_{L_{ul}^2}} \leqslant {\left\| g \right\|_{L_{ul}^2}},\quad {\left\| {{\partial _x}({K_t} * g)} \right\|_{L_{ul}^2}} \leqslant {C_1}{t^{ - \frac{3}{4}}}{\left\| g \right\|_{L_{ul}^2}},\quad t > 0.$$
((18))

Here and henceforth, the constants C i in various inequalities may depend on μ and T. We now use the Duhamel formula to write a solution of (12), (13) in the form

$$v(t,x) = ({K_t} * {v_0})(x) + \int_0^t {{K_{t - r}} * \left( {c(r) - a{\partial _x}v(r) - bv(r)} \right)dr.} $$

It follows from (18) that

$$\begin{array}{*{20}{l}} {{{\left\| {v(t)} \right\|}_{H_{ul}^1}} \leqslant {C_1}{t^{ - \frac{3}{4}}}{{\left\| {{v_0}} \right\|}_{L_{ul}^2}}} \\ {\quad \quad \quad \quad + {C_2}\int_0^t {{{\left( {t - r} \right)}^{ - \frac{3}{4}}}\left( {{{\left\| c \right\|}_{L_{ul}^2}} + {{\left\| a \right\|}_{{L^\infty }}}{{\left\| v \right\|}_{H_{ul}^1}} + {{\left\| b \right\|}_{L_{ul}^2}}{{\left\| v \right\|}_{{L^\infty }}}} \right)dr} } \\ {\quad \quad \quad \leqslant {C_1}{t^{ - \frac{3}{4}}}{{\left\| {{v_0}} \right\|}_{L_{ul}^2}} + {C_2}\int_0^t {{{\left( {t - r} \right)}^{ - \frac{3}{4}}}\left( {{{\left\| c \right\|}_{L_{ul}^2}} + \left( {{{\left\| a \right\|}_{{L^\infty }}} + 1} \right){{\left\| v \right\|}_{H_{ul}^1}}} \right)d} r} \\ {\quad \quad \quad \quad + {C_3}\int_0^t {{{(t - r)}^{ - \frac{3}{4}}}} \left\| b \right\|_{L_{ul}^2}^2{{\left\| v \right\|}_{L_{ul}^2}}dr,} \end{array}$$

where we used the interpolation inequality \(||v||_{{L^\infty }}^2 \leqslant C||v|{|_{L_{ul}^2}}||v|{|_{H_{ul}^1}}\). Taking the left- and right-hand sides of this inequality to the p th power, integrating in time, and using (15), after some simple transformations we obtain the following differential inequality for the increasing function \(\varphi (t) = \int_0^t {||} v(r)||_{H_{ul}^1}^pdr\):

$$\varphi (t) \leqslant {C_4}{Q^p} + {C_4}{\left( {\int_0^t {{{\left\| {c(r)} \right\|}_{L_{ul}^2}}dr} } \right)^p} + {C_4}\left( {\left\| a \right\|_{{L^\infty }({J_t} \times \mathbb{R})}^p + 1} \right)\int_0^t {{{(t - r)}^{ - \;\frac{3}{4}}}\varphi (r)dr,} $$

where Q stands for the expression in the brackets on the right-hand side of (16), and C 4 depends on \(\bar a(T),\bar b(T),T\), and μ. A Gronwall-type argument enables one to derive (16).

Let us note that inequality (15) does not use the fact that b, c ∈ L^1(J_T, L^∞) and remains valid for any coefficient \(b \in {L^2}({J_T},L_{ul}^2)\) and any right-hand side c for which \(||c|{|_{L_{ul}^2{L^2}({J_T})}} < \infty \). This observation will be important in the proof of Theorem 3.

We now turn to the Burgers Eq. (11), supplemented with the initial condition (2). The proof of the following result is carried out by standard arguments, and we only sketch the main ideas.

Theorem 3. Let f ∈ L^1(J_T, L^∞) and g ∈ L^∞(J_T × ℝ) ∩ L^2(J_T, W^{1,∞}) ∩ L^1(J_T, W^{2,∞}) for some T > 0, and let u_0 ∈ L^∞. Then problem (11), (2) has a unique solution u(t, x) such that

$$u \in {L^\infty }({J_T} \times \mathbb{R}) \cap {C_*}({J_T},L_{\operatorname{ul} }^2) \cap {L^p}({J_T},H_{\operatorname{ul} }^1),\quad {\left\| {{\partial _x}u( \cdot ,x)} \right\|_{{L^2}({J_T})}} \in L_{\operatorname{ul} }^2,$$
((19))

where \(p \in [1,\frac{4}{3})\) is arbitrary. Moreover, the mapping (u 0, f, g) ↦ u is uniformly Lipschitz continuous (in appropriate spaces) on every ball.

Proof. To prove the existence, we first derive some a priori estimates for a solution, assuming that it exists. Let us assume that the functions u 0, f, and g belong to the balls of radius R centred at zero in the corresponding spaces. If a function u satisfies (11), then it is a solution of the linear Eq. (12) with

$$a = u + g,\quad b = {\partial _x}g,\quad c = f + \mu \partial _x^2g - g{\partial _x}g.$$
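
Indeed, expanding B(u + g) = (u + g)∂_x(u + g), we can rewrite (11) as

$${\partial _t}u - \mu \partial _x^2u + (u + g){\partial _x}u + ({\partial _x}g)\,u = f + \mu \partial _x^2g - g{\partial _x}g,$$

which is Eq. (12) with the above coefficients.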

It follows from (14) that

$${\left\| u \right\|_{{L^\infty }({J_T} \times \mathbb{R})}} \leqslant {C_1}(R).$$
((20))

Inequalities (15) and (16) now imply that

$${\left\| u \right\|_{{L^\infty }({J_T},L_{ul}^2)}} + {\left\| u \right\|_{{L^p}({J_T},H_{ul}^1)}} + {\left\| {{\partial _x}u} \right\|_{L_{ul}^2{L^2}({J_T})}} \leqslant {C_2}(R).$$
((21))

We have thus established some bounds for the norm of a solution in the spaces entering (19). The local existence of a solution can now be proved by a fixed point argument, whereas the absence of finite-time blowup follows from the above a priori estimates.

Let us prove a Lipschitz property for the resolving operator, which will imply, in particular, the uniqueness of a solution. Assume that u i , i = 1,2, are two solutions corresponding to some data (u 0i , f i , g i ) that belong to balls of radius R centred at zero in the corresponding spaces. Setting v = u 1u 2, f = f 1f 2, g = g 1g 2, and v 0 = u 01u 02, we see that v satisfies (12), (13) with

$$a = {u_1} + {g_1},\quad b = {\partial _x}({u_2} + {g_2}),\quad c = f + \mu \partial _x^2g - ({u_1} + {g_1}){\partial _x}g - g{\partial _x}({u_2} + {g_2}).$$

Multiplying Eq. (12) by e^{−|x−y|} v, integrating in x ∈ ℝ, and using (20) and (21), after some transformations we obtain

$${\partial _t}\left\| v \right\|_y^2 + \mu \left\| {{\partial _x}v} \right\|_y^2 \leqslant {C_3}(R)\left\| v \right\|_y^2 + 2{\left\| c \right\|_y}{\left\| v \right\|_y},$$
((22))

where we set

$$\left\| w \right\|_y^2 = \int_\mathbb{R} {{w^2}(x){e^{ - |x - y|}}dx.} $$

Application of a Gronwall-type argument implies that

$$\left\| {v(t)} \right\|_y^2 + \int_0^t {\left\| {{\partial _x}v} \right\|_y^2ds \leqslant {C_4}(R){{\left( {{{\left\| {{v_0}} \right\|}_y} + \int_0^t {{{\left\| {c(s)} \right\|}_y}ds} } \right)}^2}.} $$
((23))

Taking the square root and the supremum in y ∈ ℝ, we derive

$${\left\| v \right\|_{{L^\infty }({J_t},L_{ul}^2)}} + {\left\| {{\partial _x}v} \right\|_{L_{ul}^2{L^2}({J_t})}} \leqslant {C_5}(R)\left( {{{\left\| {{v_0}} \right\|}_{L_{ul}^2}} + \mathop {\sup }\limits_{y \in \mathbb{R}} \int_0^t {{{\left\| {c(s)} \right\|}_y}ds} } \right).$$
((24))

Now note that

$${\left\| c \right\|_y} \leqslant {\left\| f \right\|_y} + \mu {\left\| {\partial _x^2g} \right\|_y} + {\left\| {{u_1} + {g_1}} \right\|_{{L^\infty }}}{\left\| {{\partial _x}g} \right\|_y} + {\left\| g \right\|_{{L^\infty }}}{\left\| {{\partial _x}{u_2} + {\partial _x}{g_2}} \right\|_y},$$
((25))

whence it follows that

$$\begin{array}{*{20}{l}} {\int_0^t {{{\left\| c \right\|}_y}ds \leqslant {{\left\| f \right\|}_{L_{ul}^2{L^2}(Jt)}}} } \\ { + {C_6}(R)\left( {{{\left\| {\partial _x^2g} \right\|}_{{L^1}({J_t},{L^\infty })}} + {{\left\| {{\partial _x}g} \right\|}_{{L^2}({J_t},{L^\infty })}} + {{\left\| g \right\|}_{{L^2}({J_t},{L^\infty })}}} \right).} \end{array}$$

Substituting this inequality in (24), we obtain

$${\left\| v \right\|_{{L^\infty }({J_t},L_{ul}^2)}} + {\left\| {{\partial _x}v} \right\|_{L_{ul}^2{L^2}({J_t})}} \leqslant {C_8}(R)\left( {{{\left\| {{v_0}} \right\|}_{L_{ul}^2}} + {{\left\| f \right\|}_{L_{ul}^2{L^2}({J_t})}} + {{\left\| g \right\|}_t}} \right),$$
((26))

where we set

$${\left\| g \right\|_t} = {\left\| g \right\|_{{L^1}({J_t},{W^{2,\infty }})}} + {\left\| g \right\|_{{L^2}({J_t},{W^{1,\infty }})}}.$$

Inequality (26) establishes the required Lipschitz property of the resolving operator.

Remark 1. An argument similar to that used in the proof of Theorem 3 enables one to estimate the \(H_{ul}^1 \)-norm of the difference between two solutions. Namely, let u i (t, x), i = 1,2, be two solutions of (11), (2) corresponding to some data

$$\left( {{u_{0i}},{f_i},{g_i}} \right) \in H_{ul}^1 \times {L^2}({J_T},{L^\infty }) \times {L^\infty }({J_T},{W^{2,\infty }}),\quad i = 1,2,$$

whose norms do not exceed R. Then the difference v = u 1u 2 satisfies the inequality

$${\left\| v \right\|_{{L^\infty }({J_T},H_{\operatorname{ul} }^1)}} \leqslant C(R)\left( {{{\left\| {{v_0}} \right\|}_{H_{ul}^1}} + {{\left\| f \right\|}_{{L^2}({J_T},L_{\operatorname{ul} }^2)}} + {{\left\| g \right\|}_{{L^4}({J_T},{W^{2,\infty }})}}} \right),$$
((27))

where we retained the notation used in the proof of (26).

Finally, the following proposition establishes a higher regularity of solutions for (11) with g = 0, provided that the right-hand side is sufficiently regular.

Proposition 5. Under the hypotheses of Theorem 3, assume that f ∈ L^2(J_T, \(H_{ul}^s \)) for an integer s ≥ 1 and g ≡ 0. Then the solution u(t, x) constructed in Theorem 3 belongs to C([τ, T], \(H_{ul}^s \)) for any τ > 0 and satisfies the inequality

$$\mathop {\sup }\limits_{t \in {J_T}} \left( {{t^k}\left\| {\partial _x^ku(t)} \right\|_{L_{ul}^2}^2} \right) + \mathop {\sup }\limits_{y \in \mathbb{R}} \int_0^T {{t^k}\left\| {\partial _x^{k + 1}u(t)} \right\|_{{L^2}({I_y})}^2dt} \leqslant {Q_k}\left( {{{\left\| {{u_0}} \right\|}_{{L^\infty }}} + {{\left\| f \right\|}_{{L^2}({J_T},H_{ul}^k \cap {L^\infty })}}} \right),$$
((28))

where 0 ≤ k ≤ s, I_y = [y, y + 1], and Q_k is an increasing function. Furthermore, if u_0 ∈ \(C_b^\infty \), then the solution belongs to C(J_T, \(H_{ul}^s \)), and inequality (28) is valid without the factors of t^k on the left-hand side and with \(||{u_0}|{|_{{L^\infty }}}\) replaced by \(||{u_0}|{|_{H_{ul}^k}}\) on the right-hand side.

Proof. We confine ourselves to the derivation of the a priori estimate (28) for u_0 ∈ L^∞. Once it is proved, the regularity of a solution can be obtained by standard arguments. Furthermore, the case when u_0 ∈ \(C_b^\infty \) can be treated by a similar, but simpler, technique, and we omit it. The proof of (28) is by induction on k. For k = 0, inequality (28) is a consequence of (21). We now assume that l ∈ [1, s] and that (28) is established for all k ≤ l − 1. Let us set

$${\varphi _y}(t) = {t^l}\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}|\partial _x^lu{|^2}dx = {t^l}\left\| {\partial _x^lu} \right\|_y^2,\quad y \in \mathbb{R},} $$

where \(\left\langle z \right\rangle = \sqrt {1 + {z^2}} \). In view of (11), the derivative of φ y can be written as

$${\partial _t}{\varphi _y}(t) = l{t^{l - 1}}\left\| {\partial _x^lu} \right\|_y^2 + 2{t^l}\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}\partial _x^lu\,\partial _x^l(\mu \partial _x^2u - u{\partial _x}u + f)dx.} $$
((29))

Integrating by parts and using (20) and the Cauchy-Schwarz inequality, we derive

$$\begin{array}{*{20}{l}} {\quad \;\;\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}\partial _x^lu\partial _x^{l + 2}u\;dx \leqslant - \left\| {\partial _x^{l + 1}u} \right\|_y^2 + {{\left\| {\partial _x^{l + 1}u} \right\|}_y}{{\left\| {\partial _x^lu} \right\|}_y},} } \\ {\;\quad \;\;\,\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}\partial _x^lu\partial _x^lf\;dx \leqslant {{\left\| {\partial _x^l\,f} \right\|}_y}{{\left\| {\partial _x^lu} \right\|}_y},} } \\ {\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}\partial _x^lu\partial _x^l(u{\partial _x}u)\;dx \leqslant \frac{1}{2}\int_\mathbb{R} {{e^{ - \left\langle {x - y} \right\rangle }}\partial _x^lu\partial _x^{l + 1}{u^2}\;dx} } } \\ {\quad \quad \quad \quad \quad \quad \quad \quad \quad \;\; \leqslant \frac{1}{2}({{\left\| {\partial _x^{l + 1}u} \right\|}_y} + {{\left\| {\partial _x^lu} \right\|}_y}){{\left\| {\partial _x^l{u^2}} \right\|}_y}.} \end{array}$$

Substituting these inequalities into (29) and integrating in time, we obtain

$$\begin{array}{*{20}{l}} {{\varphi _y}(t) + \int_0^t {{s^l}\left\| {\partial _x^{l + 1}u} \right\|_y^2ds} } \\ {\quad \quad \quad \quad \quad \leqslant \int_0^t {\left( {{s^{l - 1}}\left\| {\partial _x^lu} \right\|_y^2 + 4{\varphi _y}(s) + {s^l}\left\| {\partial _x^l{u^2}} \right\|_y^2 + {s^l}\left\| {\partial _x^lf} \right\|_y^2} \right)ds.} } \end{array}$$

Taking the supremum over y ∈ ℝ and using the induction hypothesis, we derive

$$\psi (t) \leqslant {Q_{l - 1}} + {C_1}\int_0^t \psi (s)ds + \mathop {\sup }\limits_{y \in \mathbb{R}} \int_0^t {{s^l}} \left\| {\partial _x^l{u^2}} \right\|_y^2ds + {C_1}\int_0^T {\left\| f \right\|_{H_{ul}^l}^2ds,} $$
((30))

where Q l−1 is the function entering (28) with k = l − 1, and

$$\psi (t) = {t^l}\left\| {\partial _x^lu(t)} \right\|_{L_{ul}^2}^2 + \mathop {\sup }\limits_{y \in \mathbb{R}} \int_0^t {{s^l}} \left\| {\partial _x^{l + 1}u(s)} \right\|_{{L^2}({I_y})}^2ds.$$

Now note that

$$\int_0^t {{s^l}} \left\| {\partial _x^l{u^2}} \right\|_y^2ds \leqslant {C_2}\left\| u \right\|_{{L^\infty }}^2\sum\limits_{k \in \mathbb{Z}} {{e^{ - |k - y|}}} \int_0^t {{s^l}} \left\| u \right\|_{{H^l}({I_k})}^2ds.$$

Substituting this into (30) and using again the induction hypothesis and inequality (20), we obtain

$$\psi (t) \leqslant {C_3}\int_0^t {\psi (s)ds + Q\left( {{{\left\| {{u_0}} \right\|}_{{L^\infty }}} + {{\left\| f \right\|}_{{L^2}({J_T},H_{ul}^l \cap {L^\infty })}}} \right),} $$

where Q is an increasing function. Application of the Gronwall inequality completes the proof.

3.2 Uniform continuity of the resolving operator in local norms

Theorem 3 established, in particular, the Lipschitz continuity of the resolving operator for (11). The following proposition, which plays a crucial role in the next section, proves the uniform continuity of the resolving operator in local norms.

Proposition 6. Under the hypotheses of Theorem 3, for any positive numbers T, R, r, and δ there are ρ and C such that, if triples (u 0i , f i , g i ), i = 1,2, satisfy the inclusions

$${u_{0i}} \in {L^\infty },\quad {f_i} \in {L^1}({J_T},{L^\infty }),\quad {g_i} \in {L^\infty }({J_T} \times \mathbb{R}) \cap {L^2}({J_T},{W^{1,\infty }}) \cap {L^1}({J_T},{W^{2,\infty }}),$$

and corresponding norms are bounded by R, then

$$\begin{array}{*{20}{l}} {\mathop {\sup }\limits_{t \in {J_T}} {{\left\| {{u_1}(t) - {u_2}(t)} \right\|}_{{L^2}([ - r,r])}} \leqslant \delta } \\ { + C\left( {{{\left\| {{u_{01}} - {u_{02}}} \right\|}_{{L^2}(I\rho )}} + {{\left\| {{f_1} - {f_2}} \right\|}_{{L^1}({J_T},{L^2}(I\rho ))}} + {{\left\| {{g_1} - {g_2}} \right\|}_{{L^2}({J_T},{H^2}(I\rho ))}}} \right),} \end{array}$$
((31))

where I ρ = [−ρ, ρ], and u i (t) denotes the solution of (11) issued from u 0i .

Proof. We shall use the notation introduced in the proof of Theorem 3. It follows from inequality (23) with y = 0 that

$${e^{ - r/2}}{\left\| {v(t)} \right\|_{{L^2}(Ir)}} \leqslant {C_1}(R)\left( {{{\left\| {{e^{ - | \cdot |/2}}{v_0}} \right\|}_{{L^2}}} + \int_0^T {{{\left\| {{e^{ - | \cdot |/2}}c(t, \cdot )} \right\|}_{{L^2}}}dt} } \right).$$
((32))

Now note that

$$\left\| {{e^{ - |x|/2}}{v_0}} \right\|_{{L^2}}^2 = \int_\mathbb{R} {|{v_0}{|^2}{e^{ - |x|}}dx} \leqslant \left\| {{v_0}} \right\|_{{L^2}({I_\rho })}^2 + 4{e^{ - \rho }}\left\| {{v_0}} \right\|_{L_{ul}^2}^2.$$
((33))

By a similar argument, we check that (cf. (25))

$$\begin{array}{*{20}{l}} {{{\left\| {{e^{ - |x|/2}}c(t, \cdot )} \right\|}_{{L^2}}} \leqslant {{\left\| f \right\|}_{{L^2}({I_\rho })}} + \mu {{\left\| {\partial _x^2g} \right\|}_{{L^2}({I_\rho })}} + {C_2}(R){{\left\| {{\partial _x}g} \right\|}_{{L^2}({I_\rho })}} + {C_3}(R){e^{ - \rho /2}}} \\ {\quad \quad \quad \quad \quad \;\; + \left( {{{\left\| g \right\|}_{{L^\infty }({I_\rho })}} + {e^{ - \rho /4}}{{\left\| g \right\|}_{{L^\infty }}}} \right){{\left\| {{e^{ - | \cdot |/4}}({\partial _x}{u_2} + {\partial _x}{g_2})} \right\|}_{{L^2}(\mathbb{R})}}.} \end{array}$$

Integrating in time and using (21), we obtain

$$\int_0^T {{{\left\| {{e^{ - | \cdot |/2}}c(t, \cdot )} \right\|}_{{L^2}}}dt} \leqslant {C_4}(R)\left\{ {\int_0^T {{{\left\| f \right\|}_{{L^2}({I_\rho })}}dt} + {{\left( {\int_0^T {\left\| g \right\|_{{H^2}({I_\rho })}^2dt} } \right)}^{1/2}} + {e^{ - \rho /4}}} \right\}.$$
((34))

Substituting (33) and (34) into (32) and taking ρ > 0 sufficiently large, we arrive at the required inequality (31).

4 Proof of Theorem 2

4.1 Extension: proof of Proposition 1

We only need to prove that if Eq. (7) is G-controllable, then so is (6), since the converse implication is obvious. Let \(\tilde \eta \), \(\tilde \zeta \) ∈ C^∞(J_T, G) be such that the solution ũ of problem (7), (2) satisfies (5) with s = 0. In view of (26), replacing K_0 by a slightly larger constant, we can assume that \(\tilde \zeta \)(0) = \(\tilde \zeta \)(T) = 0. Let us set u = ũ + \(\tilde \zeta \). Then u is a solution of (6), (2) with the control \(\eta = \tilde \eta + {\partial _t}\tilde \zeta \), which takes values in G. Moreover, u(T) = ũ(T) and, hence, u satisfies (5). This completes the proof of Proposition 1, showing in addition that the constants K_0 entering (5) and corresponding to Eqs. (6) and (7) can be chosen arbitrarily close to each other.
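
For the reader's convenience, let us note that the fact that u solves (6) follows from the elementary identity

$${\partial _t}u - \mu \partial _x^2u + B(u) = \left( {{\partial _t}\tilde u - \mu \partial _x^2(\tilde u + \tilde \zeta ) + B(\tilde u + \tilde \zeta )} \right) + {\partial _t}\tilde \zeta = h + \tilde \eta + {\partial _t}\tilde \zeta ,$$

where we used the relation u = ũ + \(\tilde \zeta \) and Eq. (7).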

4.2 Convexification: proof of Proposition 2

We begin with a number of simple observations. Let us set G_1 = F(N, G). By Proposition 1, if Eq. (7) is G-controllable, then so is Eq. (6), and since G ⊂ G_1, we see that (6) is G_1-controllable. Thus, it suffices to prove that if (6) is G_1-controllable, then (7) is G-controllable. To establish this property, it suffices to prove that, for any η_1 ∈ C^∞(J_T, G_1) and any δ > 0, there are η, ζ ∈ L^∞(J_T, G) such that the solution u(t, x) of (7), (2) satisfies the inequality

$${\left\| {u(T) - {u_1}(T)} \right\|_{H_{ul}^1}} < \delta ,$$
((35))

where u 1 stands for the solution of (6), (2) with η = η 1. Indeed, if this property is established, then we take two sequences {η n}, {ζn} ⊂ C (J T , G) such that (cf. (27))

$${\left\| {{\eta ^n} - \eta } \right\|_{{L^2}({J_T},G)}} + {\left\| {{\zeta ^n} - \zeta } \right\|_{{L^4}({J_T},G)}} \to 0\;\;\operatorname{as} \;n \to \infty $$

and denote by u n(t, x) the solution of (7), (2) with η = η n and ζ = ζn. It follows from (27) that

$${\gamma _n}: = {\left\| {{u^n}(T) - u(T)} \right\|_{H_{ul}^1}} \to 0\;\;\operatorname{as} \;n \to \infty .$$
((36))

Combining (35) and (36) and using the continuous embedding \(H_{{\text{ul}}}^1 \subset {L^\infty }\), we derive

$$\begin{array}{*{20}{l}} {\quad \quad {{\left\| {{u^n}(T)} \right\|}_{{L^\infty }}} \leqslant {{\left\| {{u_1}(T)} \right\|}_{{L^\infty }}} + {{\left\| {u(T) - {u_1}(T)} \right\|}_{{L^\infty }}} + {{\left\| {{u^n}(T) - u(T)} \right\|}_{{L^\infty }}}} \\ {\quad \quad \quad \quad \quad \quad \leqslant {K_0} + {C_1}(\delta + {\gamma _n}),} \\ {{{\left\| {{u^n}(T) - \hat u} \right\|}_{{L^2}({I_r})}} \leqslant {{\left\| {{u^n}(T) - u(T)} \right\|}_{{L^2}({I_r})}} + {{\left\| {u(T) - {u_1}(T)} \right\|}_{{L^2}({I_r})}}} \\ {\quad \quad \quad \quad \quad \quad \quad + {{\left\| {{u_1}(T) - \hat u} \right\|}_{{L^2}({I_r})}}} \\ {\quad \quad \quad \quad \quad \quad \leqslant {C_2}({\gamma _n} + \delta ) + {{\left\| {{u_1}(T) - \hat u} \right\|}_{{L^2}({I_r})}},} \end{array}$$

where I r = [−r, r]. Choosing δ > 0 sufficiently small and n sufficiently large, we conclude that u n satisfies inequalities (5), with a constant K 0 arbitrarily close to that for u 1. Finally, a similar approximation argument shows that, when proving (35), we can assume η 1(t) to be piecewise constant, with finitely many intervals of constancy. The construction of controls η, ζL (J T , G) for which (35) holds is carried out in several steps.

Step 1. An auxiliary lemma. We shall need the following lemma, which establishes a relationship between G- and F(N, G)-valued controls.

Lemma 1. For any η_1 ∈ F(N, G) and any ν > 0 there is an integer k ≥ 1, numbers α_j > 0, and vectors η, ζ^j ∈ G, j = 1,…, k, such that

$$\sum\limits_{j = 1}^k {{\alpha _j} = 1,} $$
((37))
$${\left\| {{\eta _1} - B(u) - \left( {\eta - \sum\limits_{j = 1}^k {{\alpha _j}} \left( {B(u + {\zeta ^j}) - \mu \partial _x^2{\zeta ^j}} \right)} \right)} \right\|_{H_{ul}^1}} \leqslant \nu \quad for\;any\;\;u \in H_{ul}^1.$$
((38))

Proof. It suffices to find functions η, \({\tilde \zeta ^j}\) ∈ G, j = 1,…, m, such that

$${\left\| {{\eta _1} - \eta + \sum\limits_{j = 1}^m {B({{\tilde \zeta }^j})} } \right\|_{H_{ul}^1}} \leqslant \nu .$$
((39))

Indeed, if such vectors are constructed, then we can set k = 2m,

$${\alpha _j} = {\alpha _{j + m}} = \frac{1}{{2m}},\quad {\zeta ^j} = - {\zeta ^{j + m}} = \sqrt m \,{\tilde \zeta ^j}\quad \operatorname{for} \;j = 1, \ldots ,m,$$

and relations (37) and (38) are easily checked.
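
Indeed, with this choice the α_j sum to one and the weighted sum Σ_j α_j ζ^j vanishes, so that, for any u,

$$\sum\limits_{j = 1}^{2m} {{\alpha _j}\left( {B(u + {\zeta ^j}) - \mu \partial _x^2{\zeta ^j}} \right)} = B(u) + \sum\limits_{j = 1}^{2m} {{\alpha _j}B({\zeta ^j})} = B(u) + \sum\limits_{j = 1}^m {B({{\tilde \zeta }^j})} ,$$

and the expression under the norm in (38) reduces to η_1 − η + Σ_{j=1}^m B(\(\tilde \zeta ^j\)); inequality (38) then follows from (39).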

To construct η, \({\tilde \zeta ^j}\) ∈ G satisfying (39), note that if η_1 ∈ F(N, G), then there are functions \({\tilde \eta _j}\), ξ_j ∈ G and \({\tilde \xi _j}\) ∈ N such that

$${\eta _1} = \sum\limits_{j = 1}^m {\left( {{{\tilde \eta }_j} - {\xi _j}{\partial _x}{{\tilde \xi }_j} - {{\tilde \xi }_j}{\partial _x}{\xi _j}} \right).} $$
((40))

Now note that, for any ε > 0,

$${\xi _j}{\partial _x}{\tilde \xi _j} + {\tilde \xi _j}{\partial _x}{\xi _j} = B(\varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{\tilde \xi _j}) - {\varepsilon ^2}B({\xi _j}) - {\varepsilon ^{ - 2}}B({\tilde \xi _j}).$$
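
This identity follows directly from the definition of B:

$$B(\varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{\tilde \xi _j}) = (\varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{\tilde \xi _j}){\partial _x}(\varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{\tilde \xi _j}) = {\varepsilon ^2}B({\xi _j}) + {\xi _j}{\partial _x}{\tilde \xi _j} + {\tilde \xi _j}{\partial _x}{\xi _j} + {\varepsilon ^{ - 2}}B({\tilde \xi _j}).$$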

Combining this with (40), we obtain

$${\eta _1} - \sum\limits_{j = 1}^m {\left( {{{\tilde \eta }_j} + {\varepsilon ^{ - 2}}B({{\tilde \xi }_j})} \right)} + \sum\limits_{j = 1}^m {B\left( {\varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{{\tilde \xi }_j}} \right)} = {\varepsilon ^2}\sum\limits_{j = 1}^m {B({\xi _j})} .$$

Choosing ε > 0 sufficiently small and setting

$$\eta = \sum\limits_{j = 1}^m {\left( {{{\tilde \eta }_j} + {\varepsilon ^{ - 2}}B({{\tilde \xi }_j})} \right),} \quad {\tilde \zeta ^j} = \varepsilon {\xi _j} + {\varepsilon ^{ - 1}}{\tilde \xi _j},$$
((41))

we arrive at the required inequality (39).

Step 2. Comparison with an auxiliary equation. Let η_1 ∈ L^∞(J_T, G_1) be a piecewise constant function and let u_1 be the solution of problem (6), (2) with η = η_1. To simplify notation, we assume that there are only two intervals of constancy for η_1(t) and write

$${\eta _1}(t,x) = {I_{{J_1}}}(t)\eta _1^1(x) + {I_{{J_2}}}(t)\eta _1^2(x),$$

where \(\eta _1^1,\eta _1^2 \in {G_1}\) are some vectors, J_1 = [0, a], and J_2 = [a, T] with a ∈ (0, T). We fix a small ν > 0 and, for i = 1, 2, choose numbers \(\alpha _j^i > 0\), j = 1,…,k_i, and vectors η^i, ζ^{ji} ∈ G such that (37), (38) hold. Let us consider the following equation on J_T:

$${\partial _t}u - \mu \partial _x^2u + \sum\limits_{j = 1}^{{k_i}} {\alpha _j^i\left( {B(u + {\zeta ^{ji}}(x)) - \mu \partial _x^2{\zeta ^{ji}}(x)} \right) = h(t,x) + {\eta ^i}(x),\quad t \in } \;{J_i}.$$
((42))

This is a Burgers-type equation, and the same arguments as in the proof of Theorem 3 show that problem (42), (2) has a unique solution ũ(t, x) satisfying (19). Moreover, in view of the regularity of the data and an analogue of Proposition 5 for Eq. (42), we have

$$\tilde u \in C({J_T},H_{ul}^k)\;\;\operatorname{for} \;any\;k\; \geqslant \,0.$$
((43))

On the other hand, we can rewrite (42) in the form

$${\partial _t}u - \mu \partial _x^2u + u{\partial _x}u = h(t,x) + \eta _1^i(x) - c_\nu ^i(t,x),\quad t \in {J_i},$$
((44))

where \(c_\nu ^i\)(t, x) is defined for t ∈ J_i as the function under the norm sign on the left-hand side of (38), in which \({\eta _1} = \eta _1^i,\eta = {\eta ^i},{\alpha _j} = \alpha _j^i,{\zeta ^j} = {\zeta ^{ji}}\), and u = ũ(t, x). Since the resolving operator for (44) is Lipschitz continuous on bounded subsets, there is a constant C > 0 depending only on the L^∞ norms of \(\eta _1^i\) such that (see Remark 1)

$${\left\| {{u_1}(T) - \tilde u(T)} \right\|_{H_{ul}^1}} \leqslant C\left( {{{\left\| {c_\nu ^1} \right\|}_{{L^2}({J_1},{L^\infty })}} + {{\left\| {c_\nu ^2} \right\|}_{{L^2}({J_2},{L^\infty })}}} \right) \leqslant C\sqrt {2T} \,\nu .$$
((45))

On the other hand, let us define η ∈ L^∞(J_T, G) by η(t) = η^i for t ∈ J_i. We shall show in the next steps that there is a sequence {ζ_m} ⊂ L^∞(J_T, G) such that

$${\left\| {{u^m}(T) - \tilde u(T)} \right\|_{H_{ul}^1}} \to 0\quad \operatorname{as} \;m \to \infty ,$$
((46))

where u^m(t, x) denotes the solution of problem (7), (2) in which ζ = ζ_m. Combining inequalities (45) and (46) with ν ≪ 1 and m ≫ 1, we obtain the required estimate (35) for u = u^m.

Step 3. Fast oscillating controls. Following a classical idea in control theory, we define functions ζ_m ∈ L^∞(J_T, G) by the relation

$${\zeta _m}(t) = \left\{ {\begin{array}{*{20}{c}} {{\zeta ^{(1)}}(mt/a)}&{\operatorname{for} \;t\; \in \;{J_1},} \\ {{\zeta ^{(2)}}(m(t - a)/(T - a))}&{\operatorname{for} \;t\; \in \;{J_2},} \end{array}} \right.$$

where ζ (i)(t) is a 1-periodic G-valued function such that

$${\zeta ^{(i)}}(t) = {\zeta ^{ji}}\quad \operatorname{for} \;0 \leqslant t - (\alpha _1^i + \cdots + \alpha _{j - 1}^i) < \alpha _j^i,\;j = 1, \ldots ,{k_i}.$$
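
Let us note that, by the definition of ζ^{(i)}, for any function Φ on G (the symbol Φ is used only in this remark) we have

$$\int_0^1 {\Phi ({\zeta ^{(i)}}(s))ds} = \sum\limits_{j = 1}^{{k_i}} {\alpha _j^i\,\Phi ({\zeta ^{ji}})} ,$$

since ζ^{(i)} takes the value ζ^{ji} on a set of measure α_j^i in each period; this averaging property underlies the convergence F_m → 0 established in Step 4 below.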

Let us rewrite (42) in the form

$${\partial _t}u - \mu \partial _x^2(u + {\zeta _m}(t,x)) + B(u + {\zeta _m}(t,x)) = h(t,x) + \eta (t,x) + {f_m}(t,x),$$

where we set f m = f m1 + f m2,

$${f_{m1}}(t) = - \mu \partial _x^2{\zeta _m} + \mu \sum\limits_{j = 1}^{{k_i}} {\alpha _j^i\partial _x^2{\zeta ^{ji}},} $$
((47))
$${f_{m2}}(t) = B(\tilde u + {\zeta _m}) - \sum\limits_{j = 1}^{{k_i}} {\alpha _j^iB(\tilde u + {\zeta ^{ji}})} $$
((48))

for tJ i . We now define an operator \(K:{L^2}({J_T},{L^\infty }) \to {L^\infty }({J_T} \times \mathbb{R}) \cap {C_*}({J_T},L_{ul}^2)\) by the relation

$$(Kf)(t,x) = \int_0^t {{K_{t - s}} * f(s)ds,} $$

where the kernel K t was introduced in (17). Setting v m = ũ − K f m , we see that the function v m (t, x) satisfies the equation

$${\partial _t}v - \mu \partial _x^2(v + {\zeta _m}) + B(v + {\zeta _m} + K{f_m}) = h + \eta .$$
((49))

Suppose we have shown that

$${\left\| {K\;{f_m}(T)} \right\|_{H_{ul}^1}} + {\left\| {K\;{f_m}} \right\|_{{L^4}({J_T},{W^{2,\infty }})}} \to 0\quad \operatorname{as} \;m \to \infty .$$
((50))

Then, by (27), we have

$${\left\| {{u^m}(T) - \tilde u(T)} \right\|_{H_{ul}^1}} \leqslant {\left\| {{u^m}(T) - {v_m}(T)} \right\|_{H_{ul}^1}} + {\left\| {K\;{f_m}(T)} \right\|_{H_{ul}^1}} \to 0\quad \operatorname{as} \;m \to \infty .$$

Thus, it remains to prove (50).

Step 4. Proof of (50). We first note that {f_m} is a bounded sequence in \({L^\infty }({J_T},H_{ul}^k)\) for any k ≥ 0. An integration by parts in time shows that

$$K\;{f_m} = {F_m} + \mu K(\partial _x^2{F_m}),$$
((51))

where we set

$${F_m}(t) = \int_0^t {{f_m}(s)ds} .$$
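
Indeed, since ∂_s(K_{t−s} * F_m(s)) = −μ∂_x^2 K_{t−s} * F_m(s) + K_{t−s} * f_m(s) and F_m(0) = 0, integrating in s over (0, t) we obtain

$${F_m}(t) = - \mu \int_0^t {\partial _x^2{K_{t - s}} * {F_m}(s)ds} + (K{f_m})(t) = - \mu K(\partial _x^2{F_m})(t) + (K{f_m})(t),$$

which is relation (51).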

In view of Proposition 4, the operator K is continuous from \({L^1 }({J_T},H_{ul}^k)\) to \(C({J_T},H_{ul}^k)\) for any integer k ≥ 0. Therefore (50) will follow if we show that

$${\left\| {{F_m}} \right\|_{C({J_T},H_{ul}^k)}} \to 0\quad \operatorname{as} \;\;m\; \to \infty .$$

This convergence is a straightforward consequence of relations (47) and (48); e. g., see [16, Sect. 3.3]. The proof of Proposition 2 is complete.

4.3 Saturation

We wish to prove (10). To this end, we shall need the following lemma describing explicitly some subspaces that are certainly included in E k . Without loss of generality, we assume that λ1 > λ2.

Lemma 2. Let us set Λ_k = {n_1λ_1 + n_2λ_2 ≥ 0 : n_1, n_2 ∈ ℤ, |n_1| + |n_2| ≤ k}. Then \({E_{{\Lambda _k}}} \subset {E_k}\) for any integer k ≥ 1.

Proof. The proof is by induction on k. We confine ourselves to carrying out the induction step, since the base of induction can be checked by a similar argument.

Let us fix any integer k ≥ 2 and assume that \({E_{{\Lambda _k}}} \subset {E_k}\). We need to show that the functions sin(λx) and cos(λx) belong to E_{k+1} for λ = n_1λ_1 + n_2λ_2 ∈ Λ_{k+1}. We shall only consider the case when the coefficients n_1 and n_2 are non-negative, since the other situations can be treated by similar arguments. Assume first that n_1 ≥ 2 and n_1 + n_2 ≤ k + 1. Then λ′ = λ − λ_1 and λ″ = λ − 2λ_1 belong to Λ_k, and we have

$$\sin (\lambda x) = \frac{{\lambda ''}}{\lambda }\sin (\lambda ''x) + \frac{2}{\lambda }\left( {\sin ({\lambda _1}x){\partial _x}\sin (\lambda 'x) + \sin (\lambda 'x){\partial _x}sin({\lambda _1}x)} \right),$$
((52))
$$\cos (\lambda x) = - \frac{{\lambda ''}}{\lambda }\cos (\lambda ''x) + \frac{2}{\lambda }\left( {\cos ({\lambda _1}x){\partial _x}\sin (\lambda 'x) + \sin (\lambda 'x){\partial _x}\cos ({\lambda _1}x)} \right),$$
((53))

whence we conclude that the functions on the left-hand side of these relations belong to E_{k+1}. If λ = λ_1 + n_2λ_2 ∈ Λ_{k+1} with n_2 ≥ 2, then setting λ′ = λ − λ_2 and λ″ = λ − 2λ_2, we see that relations (52) and (53) with λ_1 replaced by λ_2 remain valid, and we can conclude again that sin(λx), cos(λx) ∈ E_{k+1}. Finally, the same proof applies also in the case λ = (k + 1)λ_2 ∈ Λ_{k+1}.
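
Relations (52) and (53) can be verified with the help of the product-to-sum formulas; for instance, since λ′ + λ_1 = λ and λ′ − λ_1 = λ″, we have

$$\sin ({\lambda _1}x){\partial _x}\sin (\lambda 'x) + \sin (\lambda 'x){\partial _x}\sin ({\lambda _1}x) = {\partial _x}\left( {\sin ({\lambda _1}x)\sin (\lambda 'x)} \right) = \tfrac{1}{2}\left( {\lambda \sin (\lambda x) - \lambda ''\sin (\lambda ''x)} \right),$$

which is equivalent to (52).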

Lemma 2 shows that the union of the spaces E_k (which is a vector space) contains the trigonometric functions whose frequencies belong to the set Λ_∞ := ∪_k Λ_k. It is straightforward to check that Λ_∞ is dense in ℝ_+.
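
Indeed, since λ_1/λ_2 is irrational, the additive subgroup

$${\lambda _1}\mathbb{Z} + {\lambda _2}\mathbb{Z} = \{ {n_1}{\lambda _1} + {n_2}{\lambda _2}:{n_1},{n_2} \in \mathbb{Z}\} \subset \mathbb{R}$$

is not cyclic and is therefore dense in ℝ, so that its non-negative part Λ_∞ is dense in ℝ_+.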

4.4 Large control space

Let us prove that (6) is \({E_{{\Lambda _k}}}\) -controllable (and, hence, E k -controllable) for a sufficiently large k. Indeed, let us set

$$u(t,x) = {T^{ - 1}}\left( {t\hat u(x) + (T - t){u_0}(x)} \right),\quad (t,x) \in {J_T} \times \mathbb{R}.$$
((54))

This is an infinitely smooth function of (t, x) all of whose derivatives are bounded. We now define

$$\eta (t,x) = {\partial _t}u - \mu \partial _x^2u + u{\partial _x}u - h$$

and note that \(\eta \in {L^2}({J_T},H_{ul}^s)\) for any s ≥ 0 and that the solution of problem (6), (2) is given by (54) and coincides with û for t = T. We have thus constructed a control that steers the solution issued from u_0 to û. To prove the required property, we approximate η, in local topologies, by an \({E_{{\Lambda _k}}}\)-valued function and use the continuity of the resolving operator to show that the corresponding solutions are close.

More precisely, let χ ∈ C^∞(ℝ) be such that 0 ≤ χ ≤ 1, sup|χ′| ≤ 2, χ(x) = 0 for |x| ≥ 2, and χ(x) = 1 for |x| ≤ 1. Then the sequence η_n(t, x) = χ(x/n)η(t, x) possesses the following properties:

$${\eta _n}(t,x) = 0\quad \operatorname{for} \;|x|\; \geqslant 2n\;\operatorname{and} \;any\;n \geqslant 1,$$
((55))
$${\left\| {{\eta _n}} \right\|_{{L^2}({J_T},H_{ul}^1)}} \leqslant 3{\left\| \eta \right\|_{{L^2}({J_T},H_{ul}^1)}}\quad \operatorname{for} \;all\;n \geqslant 1,$$
((56))
$${\left\| {{\eta _n} - \eta } \right\|_{{L^2}({J_T} \times I\rho )\;}} \to 0\quad \;\operatorname{as} \;n \to \infty \;\operatorname{for} \;\operatorname{any} \;\rho > 0,$$
((57))

where I ρ = [−ρ, ρ]. Given a frequency ω > 0 and an integer N ≥ 1, we denote by Pω,N : L 2(I π/ ω) → L (ℝ) a linear projection that takes a function g to its truncated Fourier series

$$({P_{\omega ,N}}g)(x) = \sum\limits_{|j| \leqslant N} {{g_j}{e^{i\omega jx}}} ,\quad {g_j} = \frac{\omega }{{2\pi }}\int_{{I_{\pi /\omega }}} {g(y){e^{ - i\omega jy}}dy} .$$

The function P ω,N g is 2π/ω-periodic, and it follows from (55) and (56) that

$${\left\| {{P_{\omega ,N}}{\eta _n}} \right\|_{{L^1}({J_T},{L^\infty })}} \leqslant {C_1}{\left\| {{P_{\omega ,N}}{\eta _n}} \right\|_{{L^2}({J_T},H_{ul}^1)}} \leqslant {C_2}\quad \operatorname{for} \;all\;N,n \geqslant 1,$$
((58))
$${\left\| {{P_{\omega ,N}}{\eta _n} - {\eta _n}} \right\|_{{L^2}({J_T} \times I\rho )}} \to 0\quad \operatorname{as} \;N \to \infty \;\operatorname{for} \;any\;n \geqslant 1.$$
((59))

Note that if ω ∈ Λ_∞, then for any N ≥ 1 there is k ≥ 1 such that the image of P_{ω,N} is contained in \({E_{{\Lambda _k}}}\).

Let us denote by u_{n,N}(t, x) the solution of problem (6), (2) with \(\eta = {{\text{P}}_{\omega ,N}}{\eta _n}\). In view of inequality (31) with δ = ε/2 and \(R = \max \{ ||{u_0}|{|_{{L^\infty }}},||\eta |{|_{{L^1}({J_T},{L^\infty })}},{C_2}\} \), we have

$$\begin{array}{*{20}{l}} {{{\left\| {{u_{n,N}}(T) - \hat u} \right\|}_{{L^2}(Ir)}} = {{\left\| {{u_{n,N}}(T) - u(T)} \right\|}_{{L^2}(Ir)}}} \\ {\quad \quad \quad \quad \quad \quad \leqslant \frac{\varepsilon }{2} + C{{\left\| {{\operatorname{P} _{\omega ,N}}{\eta _n} - \eta } \right\|}_{{L^1}({J_T},{L^2}(I\rho ))}}} \\ {\quad \leqslant \frac{\varepsilon }{2} + C\sqrt T \left( {{{\left\| {{\operatorname{P} _{\omega ,N}}{\eta _n} - {\eta _n}} \right\|}_{{L^1}({J_T},{L^2}(I\rho ))}} + {{\left\| {{\eta _n} - \eta } \right\|}_{{L^1}({J_T},{L^2}(I\rho ))}}} \right).} \end{array}$$
((60))

We now choose n ≥ 1 such that \(C\sqrt T ||{\eta _n} - \eta |{|_{{L^1}({J_T},{L^2}({I_\rho }))}}< \frac{\varepsilon }{4}\); see (57). We next find ω ∈ Λ_∞ so that \(\frac{\pi }{\omega } > \max (2n,\rho )\) (this is possible since Λ_∞ is dense in ℝ_+) and choose N ≥ 1 such that \(C\sqrt T ||{P_{\omega ,N}}{\eta _n} - {\eta _n}|{|_{{L^1}({J_T},{L^2}({I_\rho }))}}< \frac{\varepsilon }{4}\). Substituting these estimates into (60), we obtain

$${\left\| {{u_{n,N}}(T) - \hat u} \right\|_{{L^2}(Ir)}} < \varepsilon ,$$

which is the second inequality in (5) with s = 0. It remains to note that, in view of (20), (56), and (58), the first inequality in (5) is also satisfied.

4.5 Reduction to the case s = 0

We now prove that if inequalities (5) hold for s = 0 and arbitrary T, r, and ε, then they remain valid for any s ≥ 1. Indeed, we fix an integer s ≥ 1, positive numbers r and ε, and functions u_0, \(\widehat u\) ∈ \(C_b^\infty \). Let us extend the control η (to be chosen below) by zero to the half-line [T, +∞) and denote by ũ(t) the solution of (1), (3) issued from û at t = T. Using interpolation, regularity of solutions (Proposition 5), and continuity of the resolving operator in local norms (Proposition 6), we can write

$$\begin{array}{*{20}{l}} {\left\| {u(T + \tau ) - \tilde u(T + \tau )} \right\|_{{H^s}({I_r})}^2 \leqslant {C_1}{{\left\| {u(T + \tau ) - \tilde u(T + \tau )} \right\|}_{{L^2}({I_r})}}{{\left\| {u(T + \tau ) - \tilde u(T + \tau )} \right\|}_{{H^{2s}}({I_r})}}} \\ {\quad \quad \quad \quad \quad \leqslant {C_2}{\tau ^{ - 2s}}\left( {\delta + C{{\left\| {u(T) - \hat u} \right\|}_{{L^2}({I_\rho })}}} \right){Q_{2s}}\left( {{{\left\| {u(T)} \right\|}_{{L^\infty }}} + K} \right),} \end{array}$$
((61))

where C i are some constants depending on R and s, the quantities C and Q 2s are those entering (31) and (28), respectively, and \(K = ||\widehat u|{|_{{L^\infty }}} + ||h|{|_{{L^1}({J_T},H_{ul}^{2s})}}\). Furthermore, in view of Proposition 5, we have

$${\left\| {\tilde u(T + \tau ) - \hat u} \right\|_{H_{ul}^s}} \to 0\quad \operatorname{as} \;\tau \to {0^ + }.$$

Let τ > 0 be so small that the left-hand side of this relation is smaller than ε 2/6. We next choose δ > 0 such that

$${C_2}{\tau ^{ - 2s}}{Q_{2s}}({K_0} + K)\delta < {\varepsilon ^2}/6,$$

where K 0 is defined in (5) (and is independent of r and ε). Finally, we construct ηC (J T , E Λ) for which inequalities (5) hold with r = ρ and ε = δ/C. Comparing the above estimates with (61), we obtain

$${\left\| {u(T + \tau ) - \hat u} \right\|_{H_{ul}^s(Ir)}}: = \mathop {\sup }\limits_{I \subset Ir} {\left\| {u(T + \tau ) - \hat u} \right\|_{{H^s}(I)}} < \varepsilon ,$$

where the supremum is taken over all intervals I ⊂ I_r of length ≤ 1. Furthermore, in view of (28), we have

$${\left\| {u(T + \tau ) - \hat u} \right\|_{H_{ul}^s}} \leqslant {\tau ^{ - s}}{Q_s}\left( {{K_0} + {{\left\| h \right\|}_{{L^1}({J_T},H_{ul}^s)}}} \right) = :{K_s}.$$

We have thus established inequalities (5) with T and \(|| \cdot |{|_{{H^s}({I_r})}}\) replaced by T + τ and \(|| \cdot |{|_{H_{ul}^s({I_r})}}\), respectively. Since T is arbitrary and the positive numbers τ and ε can be chosen arbitrarily small, we conclude that inequalities (5) are true for any integer s ≥ 0 and any numbers T,r,ε > 0. This completes the proof of Theorem 2.

Acknowledgements This research was carried out within the MME-DII Center of Excellence (ANR-11-LABX-0023-01) and supported by the ANR grant STOSYMAP (ANR 2011 BS01 015 01).