1 Introduction

In the case of matrices, the study of differential-algebraic equations (DAEs), i.e. equations of the form

$$\begin{aligned} (Eu)'(t)+Au(t)&=f(t),\\ u(0)&=u_{0} \end{aligned}$$

for matrices \(E,A\in \mathbb {R}^{n\times n}\), is a very active field in mathematics (see e.g. [7,8,9,10] and the references therein). The main difference from classical differential equations is that the matrix E may have a nontrivial kernel. Thus, one cannot expect to solve the equation for every right-hand side f and every initial value \(u_{0}\). In the case of matrices one can use normal forms (see e.g. [2, 12, Theorem 2.7]) to determine the ‘right’ space of initial values, the so-called consistent initial values. However, this approach cannot be used for operators on infinite dimensional spaces. Another approach uses the so-called Wong sequences associated with E and A (see e.g. [3]), and this turns out to be applicable in the operator case as well.
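To illustrate the phenomenon in the simplest possible setting, consider \(E=\begin{pmatrix}1&0\\0&0\end{pmatrix}\) and \(A=I\) on \(\mathbb {R}^{2}\). The system then decouples into

$$ u_{1}'(t)+u_{1}(t)=0,\qquad u_{2}(t)=0\quad (t>0), $$

so a solution exists precisely for initial values with \(u_{0,2}=0\), in which case \(u(t)=(\mathrm {e}^{-t}u_{0,1},0)\). The consistent initial values thus form the proper subspace \(\mathbb {R}\times \{0\}\).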

In contrast to the finite dimensional case, the infinite dimensional case is not as well studied. It is the aim of this article to generalise some of the finite dimensional results to infinite dimensions. For simplicity, we restrict ourselves to homogeneous problems. More precisely, we consider equations of the form

$$\begin{aligned} (Eu)'(t)+Au(t)&=0\quad (t>0),\\ u(0)&=u_{0},\nonumber \end{aligned}$$
(1)

where \(E\in L(X;Y)\) for Banach spaces X, Y and \(A:{\text {dom}}(A)\subseteq X\rightarrow Y\) is a densely defined closed linear operator. We will define the notions of mild and classical solutions for such equations and determine the ‘right’ space of initial conditions for which a mild solution can be obtained.

For doing so, we start in Sect. 3 with the definition of the Wong sequence associated with (E, A), which turns out to yield the right spaces of initial conditions. In Sect. 4 we consider the space of consistent initial values and provide some necessary conditions for the existence of a \(C_{0}\)-semigroup associated with the above problem, under the assumption that the space of consistent initial values is closed and mild solutions are unique (Hypotheses A). In Sect. 5 we consider operators (E, A) such that \((zE+A)\) is boundedly invertible on a right half plane and the inverse is polynomially bounded on that half plane. In this case it is possible to determine the space of consistent initial values in terms of the Wong sequence, and we can characterise the existence of a \(C_{0}\)-semigroup yielding the mild solutions of (1), at least in the case of Hilbert spaces. Two tools needed in the proofs are the Fourier-Laplace transform and the Theorem of Paley-Wiener, which are recalled in Sect. 2.

As indicated above, the study of DAEs in infinite dimensions is not as active a field as the finite dimensional one. We mention [22], where the case of selfadjoint operators E on Hilbert spaces is treated using positive definiteness of the operator pencil. Similar approaches in Hilbert spaces were used in [16] for more general equations. However, in both references the initial condition was formulated as \(\left( Eu\right) (0)=u_{0}.\) We also mention the book [6], where such equations are studied with a focus on maximal regularity. Another approach for dealing with such degenerate equations uses the framework of set-valued (or multi-valued) operators, see [5, 11]. Furthermore we refer to [18, 19], where sequences of projectors are used to decouple the system. Moreover, there exist several references in the Russian literature, where the equations are called Sobolev type equations (see e.g. [23] and the references therein). Finally, we mention the articles [24, 25], which are closely related to the present work but do not consider the case of operator pencils with polynomially bounded resolvents. In the case of bounded operators E and A, equations of the form (1) were studied by the author in [27, 28], where the concept of Wong sequences associated with (E, A) was already used.

We assume that the reader is familiar with functional analysis, in particular with the theory of \(C_{0}\)-semigroups, and refer to the monographs [4, 14, 29]. Throughout, unless stated otherwise, X and Y are Banach spaces.

2 Preliminaries

We collect some basic facts on the so-called Fourier-Laplace transformation and on weak derivatives in exponentially weighted \(L_{2}\)-spaces, which are needed in Sect. 5. We remark that these concepts have been used successfully to study a broad class of partial differential equations (see e.g. [15,16,17] and the references therein).

Definition

Let \(\rho \in \mathbb {R}\) and H a Hilbert space. Define

$$ L_{2,\rho }(\mathbb {R};H):=\{f:\mathbb {R}\rightarrow H\,;\,f\text { measurable},\int _{\mathbb {R}}\Vert f(t)\Vert ^{2}\mathrm {e}^{-2\rho t}{\mathrm{d}}t<\infty \} $$

with the usual identification of functions which are equal almost everywhere. Moreover, we define the Sobolev space

$$ H_{\rho }^{1}(\mathbb {R};H):=\{f\in L_{2,\rho }(\mathbb {R};H)\,;\,f'\in L_{2,\rho }(\mathbb {R};H)\}, $$

where the derivative is meant in the distributional sense. Finally, we define \(\mathcal {L}_{\rho }\) as the unitary extension of the mapping

$$ C_{c}(\mathbb {R};H)\subseteq L_{2,\rho }(\mathbb {R};H)\rightarrow L_{2}(\mathbb {R};H),\quad f\mapsto \left( t\mapsto \frac{1}{\sqrt{2\pi }}\int _{\mathbb {R}}\mathrm {e}^{-({\mathrm{i}}t+\rho )s}f(s){\mathrm{d}}s\right) . $$

We call \(\mathcal {L}_{\rho }\) the Fourier-Laplace transform. Here, \(C_{c}(\mathbb {R};H)\) denotes the space of H-valued continuous functions with compact support.

Remark 2.1

It is a direct consequence of Plancherel's theorem that \(\mathcal {L}_{\rho }\) is indeed unitary.

The connection of \(\mathcal {L}_{\rho }\) and the space \(H_{\rho }^{1}(\mathbb {R};H)\) is explained in the next proposition.

Proposition 2.2

(see e.g. [26, Proposition 1.1.4]) Let \(u\in L_{2,\rho }(\mathbb {R};H)\) for some \(\rho \in \mathbb {R}.\) Then \(u\in H_{\rho }^{1}(\mathbb {R};H)\) if and only if \(\left( t\mapsto ({\mathrm{i}}t+\rho )\left( \mathcal {L}_{\rho }u\right) (t)\right) \in L_{2}(\mathbb {R};H).\) In this case we have

$$ \left( \mathcal {L}_{\rho }u'\right) (t)=({\mathrm{i}}t+\rho )\left( \mathcal {L}_{\rho }u\right) (t)\quad (t\in \mathbb {R}\text { a.e.}). $$

Moreover, we have the following variant of the classical Sobolev embedding theorem.

Proposition 2.3

(Sobolev-embedding theorem, see [16, Lemma 3.1.59] or [26, Proposition 1.1.8]) Let \(u\in H_{\rho }^{1}(\mathbb {R};H)\) for some \(\rho \in \mathbb {R}.\) Then, u has a continuous representative with \(\sup _{t\in \mathbb {R}}\Vert u(t)\Vert \mathrm {e}^{-\rho t}<\infty .\)

Finally, we need the Theorem of Paley-Wiener, which allows us to characterise those \(L_{2}\)-functions supported on the positive real axis in terms of their Fourier-Laplace transform.

Theorem 2.4

(Paley-Wiener, [13] or [20, 19.2 Theorem]) Let \(\rho \in \mathbb {R}.\) We define the Hardy space

$$ \mathcal {H}_{2}(\mathbb {C}_{{\mathrm{Re}}>\rho };H):=\left\{ f:\mathbb {C}_{{\mathrm{Re}}>\rho }\rightarrow H\,;\,f\text { holomorphic},\,\sup _{\mu >\rho }\int _{\mathbb {R}}\Vert f({\mathrm{i}}t+\mu )\Vert ^{2}{\mathrm{d}}t<\infty \right\} . $$

Let \(u\in L_{2,\rho }(\mathbb {R};H).\) Then \({\text {spt}}u\subseteq \mathbb {R}_{\ge 0}\) if and only if

$$ \left( ({{\mathrm{i}}} t+\mu )\mapsto \left( \mathcal {L}_{\mu }u\right) (t)\right) \in \mathcal {H}_{2}(\mathbb {C}_{{\mathrm{Re}}>\rho };H). $$
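As a simple illustration, take \(x\in H\) and \(a\in \mathbb {C}\) with \({\mathrm{Re}}\, a>-\rho \) and consider \(u:=\chi _{\mathbb {R}_{\ge 0}}\mathrm {e}^{-a\cdot }x\in L_{2,\rho }(\mathbb {R};H).\) A direct computation gives

$$ \left( \mathcal {L}_{\mu }u\right) (t)=\frac{1}{\sqrt{2\pi }}\int _{0}^{\infty }\mathrm {e}^{-({\mathrm{i}}t+\mu )s}\mathrm {e}^{-as}{\mathrm{d}}s\,x=\frac{1}{\sqrt{2\pi }}\frac{1}{{\mathrm{i}}t+\mu +a}x\quad (t\in \mathbb {R},\,\mu >\rho ), $$

and \(z\mapsto \frac{1}{\sqrt{2\pi }}\frac{1}{z+a}x\) is holomorphic on \(\mathbb {C}_{{\mathrm{Re}}>\rho }\) with \(\sup _{\mu >\rho }\int _{\mathbb {R}}\frac{1}{t^{2}+(\mu +{\mathrm{Re}}\, a)^{2}}{\mathrm{d}}t<\infty ,\) in accordance with Theorem 2.4. Note also that \(\left( ({\mathrm{i}}t+\rho )\left( \mathcal {L}_{\rho }u\right) (t)\right) \) is not square integrable, reflecting via Proposition 2.2 that u has a jump at 0 and hence does not belong to \(H_{\rho }^{1}(\mathbb {R};H).\)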

3 Wong Sequences

Throughout, let \(E\in L(X;Y)\) and let \(A:{\text {dom}}(A)\subseteq X\rightarrow Y\) be densely defined, closed and linear.

Definition

For \(k\in \mathbb {N}\) we define the spaces \({\text {IV}}_{k}\subseteq X\) recursively by

$$\begin{aligned} {\text {IV}}_{0}&:={\text {dom}}(A),\\ {\text {IV}}_{k+1}&:=A^{-1}[E[{\text {IV}}_{k}]]. \end{aligned}$$

This sequence of subspaces is called the Wong sequence associated with (E, A).

Remark 3.1

We have

$$ {\text {IV}}_{k+1}\subseteq {\text {IV}}_{k}\quad (k\in \mathbb {N}). $$

Indeed, for \(k=0\) this follows from \({\text {IV}}_{1}=A^{-1}[E[{\text {IV}}_{0}]]\subseteq {\text {dom}}(A)={\text {IV}}_{0}\) and hence, the assertion follows by induction.
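In finite dimensions the Wong sequence can be computed explicitly. The following minimal numerical sketch (the helper functions are ad hoc and serve only as an illustration) represents each space \({\text {IV}}_{k}\) by a matrix whose columns span it and computes the preimage \(A^{-1}[E[{\text {IV}}_{k}]]\) via an orthogonal projection onto the range of \(E[{\text {IV}}_{k}]\).

```python
import numpy as np

def orth(W, tol=1e-10):
    """Orthonormal basis (columns) of the column space of W."""
    if W.shape[1] == 0:
        return W
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, s > tol]

def kernel(M, tol=1e-10):
    """Orthonormal basis (columns) of the null space of M."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].conj().T

def preimage(A, W):
    """Basis of A^{-1}[range(W)] = ker((I - P) A), P the projection onto range(W)."""
    Q = orth(W)
    P = Q @ Q.conj().T
    return kernel((np.eye(A.shape[0]) - P) @ A)

def wong_sequence(E, A, steps):
    """IV_0 = dom(A) = R^n, IV_{k+1} = A^{-1}[E[IV_k]]; each space given by spanning columns."""
    spaces = [np.eye(A.shape[1])]
    for _ in range(steps):
        spaces.append(preimage(A, E @ spaces[-1]))
    return spaces

# Example: E = diag(1, 0), A = I_2; the sequence is R^2, span{e_1}, span{e_1}, ...
E, A = np.diag([1.0, 0.0]), np.eye(2)
print([S.shape[1] for S in wong_sequence(E, A, 3)])  # dimensions: [2, 1, 1, 1]
```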

Definition

We define

$$ \rho (E,A):=\{z\in \mathbb {C}\,;\,(zE+A)^{-1}\in L(Y;X)\} $$

the resolvent set associated with (E, A).

We start with some useful facts on the Wong sequence. The following result was already given in [27] in the case of a bounded operator A.

Lemma 3.2

Let \(k\in \mathbb {N}.\) Then

$$ E(zE+A)^{-1}A\subseteq A(zE+A)^{-1}E $$

and

$$ (zE+A)^{-1}E[{\text {IV}}_{k}]\subseteq {\text {IV}}_{k+1} $$

for each \(z\in \rho (E,A)\). Moreover, for \(x\in {\text {IV}}_{k}\) we find elements \(x_{1},\ldots ,x_{k}\in X,\,x_{k+1}\in {\text {dom}}(A)\) such that

$$ (zE+A)^{-1}Ex=\frac{1}{z}x+\sum _{\ell =1}^{k}\frac{1}{z^{\ell +1}}x_{\ell }+\frac{1}{z^{k+1}}(zE+A)^{-1}Ax_{k+1}\quad (z\in \rho (E,A)\setminus \{0\}). $$

Proof

For \(x\in {\text {dom}}(A)\) we compute

$$\begin{aligned} E(zE+A)^{-1}Ax&=E\left( x-(zE+A)^{-1}zEx\right) \\&=Ex-zE(zE+A)^{-1}Ex\\&=Ex-Ex+A(zE+A)^{-1}Ex\\&=A(zE+A)^{-1}Ex. \end{aligned}$$

We prove the second and third claim by induction. Let \(k=0\) and \(x\in {\text {IV}}_{0}={\text {dom}}(A).\) Then \((zE+A)^{-1}Ex\in {\text {dom}}(A)\) with

$$ A(zE+A)^{-1}Ex=E(zE+A)^{-1}Ax\in E[{\text {dom}}(A)]=E[{\text {IV}}_{0}] $$

and thus, \((zE+A)^{-1}Ex\in {\text {IV}}_{1}.\) Moreover

$$ (zE+A)^{-1}Ex=\frac{1}{z}(x-(zE+A)^{-1}Ax) $$

showing the equality with \(x_{1}=-x\in {\text {dom}}(A)\). Assume now that both assertions hold for \(k\in \mathbb {N}\) and let \(x\in {\text {IV}}_{k+1}\). Then \(Ax=Ey\) for some \(y\in {\text {IV}}_{k}\) and we infer

$$ A(zE+A)^{-1}Ex=E(zE+A)^{-1}Ey\in E[{\text {IV}}_{k+1}] $$

by induction hypothesis. Hence, \((zE+A)^{-1}Ex\in {\text {IV}}_{k+2}.\) Moreover, by assumption we find \(y_{1},\ldots ,y_{k}\in X\) and \(y_{k+1}\in {\text {dom}}(A)\) such that

$$\begin{aligned} \frac{1}{z}y+\sum _{\ell =1}^{k}\frac{1}{z^{\ell +1}}y_{\ell }+\frac{1}{z^{k+1}}(zE+A)^{-1}Ay_{k+1}&=(zE+A)^{-1}Ey\\&=(zE+A)^{-1}Ax\\&=x-z(zE+A)^{-1}Ex. \end{aligned}$$

Thus, we obtain the desired formula with \(x_{1}:=-y,x_{j}=-y_{j-1}\) for \(j\in \{2,\ldots ,k+2\}\).    \(\square \)
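For matrices, the interchange identity in the first step of the proof is easy to verify numerically; the following ad hoc sketch (with randomly chosen matrices, purely for illustration) checks it for a singular E.

```python
import numpy as np

# Check E (zE+A)^{-1} A = A (zE+A)^{-1} E for a singular matrix E and some z in rho(E,A).
rng = np.random.default_rng(0)
E = rng.standard_normal((4, 4)); E[:, 0] = 0.0   # make E singular on purpose
A = rng.standard_normal((4, 4))
z = 2.0 + 1.0j
R = np.linalg.inv(z * E + A)                     # (zE+A)^{-1}; generically invertible here
print(np.allclose(E @ R @ A, A @ R @ E))         # True (up to rounding)
```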

Lemma 3.3

Assume \(\rho (E,A)\ne \emptyset .\) Then for each \(k\in \mathbb {N}\) we have that

$$ A^{-1}\left[ E\left[ \overline{{\text {IV}}_{k}}\right] \right] \subseteq \overline{{\text {IV}}_{k+1}}. $$

Proof

We prove the claim by induction. For \(k=0,\) let \(x\in {\text {dom}}(A)\) such that \(Ax=Ey\) for some \(y\in \overline{{\text {IV}}_{0}}.\) Hence, we find a sequence \((y_{n})_{n\in \mathbb {N}}\) in \({\text {IV}}_{0}\) with \(y_{n}\rightarrow y\) and since E is bounded, we derive \(Ey_{n}\rightarrow Ey=Ax.\) For \(z\in \rho (E,A)\) set

$$ x_{n}:=(zE+A)^{-1}zEx+(zE+A)^{-1}Ey_{n}. $$

By Lemma 3.2 we have that \(x_{n}\in {\text {IV}}_{1}\) and

$$\begin{aligned} \lim _{n\rightarrow \infty }x_{n}&=(zE+A)^{-1}zEx+(zE+A)^{-1}Ey\\&=(zE+A)^{-1}zEx+(zE+A)^{-1}Ax\\&=x, \end{aligned}$$

hence \(x\in \overline{{\text {IV}}_{1}}.\)

Assume now that the assertion holds for some \(k\in \mathbb {N}\) and let \(x\in A^{-1}\left[ E\left[ \overline{{\text {IV}}_{k+1}}\right] \right] \). Then clearly \(x\in A^{-1}\left[ E\left[ \overline{{\text {IV}}_{k}}\right] \right] \subseteq \overline{{\text {IV}}_{k+1}}\) and hence, we find a sequence \((w_{n})_{n\in \mathbb {N}}\) in \({\text {IV}}_{k+1}\) with \(w_{n}\rightarrow x.\) For \(z\in \rho (E,A)\) we infer

$$ (zE+A)^{-1}zEw_{n}\rightarrow (zE+A)^{-1}zEx $$

and by Lemma 3.2 we have \((zE+A)^{-1}zEx\in \overline{{\text {IV}}_{k+2}}.\) Moreover, we find a sequence \((y_{n})_{n\in \mathbb {N}}\) in \({\text {IV}}_{k+1}\) with \(Ax=\lim _{n\rightarrow \infty }Ey_{n}.\) As above, we set

$$ x_{n}:=(zE+A)^{-1}zEx+(zE+A)^{-1}Ey_{n} $$

and obtain a sequence in \(\overline{{\text {IV}}_{k+2}}\) converging to x. Hence \(x\in \overline{{\text {IV}}_{k+2}}.\)    \(\square \)

4 Necessary Conditions for \(C_{0}\)-Semigroups

In this section we focus on the differential-algebraic problem

$$\begin{aligned} Eu'(t)+Au(t)&=0\quad (t>0)\\ u(0)&=u_{0},\nonumber \end{aligned}$$
(2)

where again \(E\in L(X;Y)\) and \(A:{\text {dom}}(A)\subseteq X\rightarrow Y\) is a linear closed densely defined operator and \(u_{0}\in X\). We begin with the notion of classical solutions and mild solutions of the above problem.

Definition

Let \(u:\mathbb {R}_{\ge 0}\rightarrow X\) be continuous.

  1. (a)

    u is called a classical solution of (2), if u is continuously differentiable on \(\mathbb {R}_{\ge 0}\), \(u(t)\in {\text {dom}}(A)\) for each \(t\ge 0\) and (2) holds.

  2. (b)

    u is called a mild solution of (2), if \(u(0)=u_{0}\) and for all \(t>0\) we have \(\int _{0}^{t}u(s){\mathrm{d}}s\in {\text {dom}}(A)\) and

    $$ Eu(t)+A\int _{0}^{t}u(s){\mathrm{d}}s=Eu_{0}. $$

Obviously, a classical solution of (2) is also a mild solution of (2). The main question now is to determine a natural space in which one should look for (mild) solutions. In particular, we have to determine the admissible initial values. We define the space of such values by

$$ U:=\left\{ u_{0}\in X\,;\,\exists u:\mathbb {R}_{\ge 0}\rightarrow X\text { mild solution of }(2)\right\} . $$

Clearly, U is a subspace of X.
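For instance, for the two-dimensional example from the introduction (\(E={\text {diag}}(1,0)\), \(A=I\)) the function \(u(t)=(\mathrm {e}^{-t}c,0)\) is a mild solution for \(u_{0}=(c,0)\), since

$$ Eu(t)+A\int _{0}^{t}u(s){\mathrm{d}}s=\left( \mathrm {e}^{-t}c+(1-\mathrm {e}^{-t})c,0\right) =(c,0)=Eu_{0}, $$

whereas for \(u_{0}=(c,d)\) with \(d\ne 0\) no mild solution exists: the second component of the defining identity forces \(\int _{0}^{t}u_{2}(s){\mathrm{d}}s=0\) for all \(t\ge 0\), hence \(u_{2}\equiv 0\), contradicting \(u_{2}(0)=d\ne 0.\) Thus \(U=\mathbb {R}\times \{0\}\) in this example.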

Proposition 4.1

Assume \(\rho (E,A)\ne \emptyset \). Let \(x\in U\) and \(u_{x}\) be a mild solution of (2) with initial value x. Then \(u_{x}(t)\in \bigcap _{k\in \mathbb {N}}\overline{{\text {IV}}_{k}}\) for each \(t\ge 0\). In particular, \(U\subseteq \bigcap _{k\in \mathbb {N}}\overline{{\text {IV}}_{k}}.\)

Proof

Let \(t\ge 0.\) Obviously, we have that \(u_{x}(t)\in \overline{{\text {IV}}_{0}}=\overline{{\text {dom}}(A)}=X.\) Assume now that we know \(u_{x}(t)\in \overline{{\text {IV}}_{k}}\) for all \(t\ge 0\). We then have

$$ A\int _{t}^{t+h}u_{x}(s){\mathrm{d}}s=Eu_{x}(t+h)-Eu_{x}(t)\in E[\overline{{\text {IV}}_{k}}]\quad (h>0) $$

and thus,

$$ \int _{t}^{t+h}u_{x}(s){\mathrm{d}}s\in A^{-1}\left[ E\left[ \overline{{\text {IV}}_{k}}\right] \right] \subseteq \overline{{\text {IV}}_{k+1}} $$

by Lemma 3.3. Hence,

$$ u_{x}(t)=\lim _{h\rightarrow 0}\frac{1}{h}\int _{t}^{t+h}u_{x}(s){\mathrm{d}}s\in \overline{{\text {IV}}_{k+1}}. $$

   \(\square \)

We state the following hypothesis, which we assume to be valid throughout the whole section.

Hypotheses A

The space U is closed and for each \(u_{0}\in U\) the mild solution of (2) is unique.

As in the case of Cauchy problems, one can associate a \(C_{0}\)-semigroup with (2). The proof follows the lines of [1, Theorem 3.1.12].

Proposition 4.2

Denote for \(x\in U\) the unique mild solution of (2) by \(u_{x}.\) Then the mappings

$$ T(t):U\rightarrow X,\quad x\mapsto u_{x}(t) $$

for \(t\ge 0\) define a \(C_{0}\)-semigroup on U. In particular, \({\text {ran}}T(t)\subseteq U\) for each \(t\ge 0\).

Proof

Consider the mapping

$$ \Phi :U\rightarrow C(\mathbb {R}_{\ge 0};X),\quad x\mapsto u_{x}. $$

We equip \(C(\mathbb {R}_{\ge 0};X)\) with the topology induced by the seminorms

$$ p_{n}(f):=\sup _{t\in [0,n]}\Vert f(t)\Vert \quad (n\in \mathbb {N}) $$

for which \(C(\mathbb {R}_{\ge 0};X)\) becomes a Fréchet space. Then \(\Phi \) is linear and closed. Indeed, if \((x_{n})_{n\in \mathbb {N}}\) is a sequence in U such that \(x_{n}\rightarrow x\) and \(u_{x_{n}}\rightarrow u\) as \(n\rightarrow \infty \) for some \(x\in U\) and \(u\in C(\mathbb {R}_{\ge 0};X),\) we derive \(\int _{0}^{t}u_{x_{n}}(s){\mathrm{d}}s\rightarrow \int _{0}^{t}u(s){\mathrm{d}}s\) for each \(t\ge 0\) since \(u_{x_{n}}\rightarrow u\) uniformly on [0, t]. Moreover,

$$ A\int _{0}^{t}u_{x_{n}}(s){\mathrm{d}}s=Ex_{n}-Eu_{x_{n}}(t)\rightarrow Ex-Eu(t)\quad (n\rightarrow \infty ) $$

for each \(t\ge 0\) and hence, \(\int _{0}^{t}u(s){\mathrm{d}}s\in {\text {dom}}(A)\) with

$$ A\int _{0}^{t}u(s){\mathrm{d}}s=Ex-Eu(t)\quad (t\ge 0). $$

Finally, since \(u(0)=\lim _{n\rightarrow \infty }u_{x_{n}}(0)=x,\) we infer that \(u=u_{x}\) and hence, \(\Phi \) is closed. By the closed graph theorem (see e.g. [21, III, Theorem 2.3]), we derive that \(\Phi \) is continuous. In particular, for each \(t\ge 0\) the operator

$$ T(t)x=u_{x}(t)=\Phi (x)(t) $$

is bounded and linear. Moreover, \(T(t)x=u_{x}(t)\rightarrow x\) as \(t\rightarrow 0\) for each \(x\in U.\) We are left to show that \({\text {ran}}T(t)\subseteq U\) and that T satisfies the semigroup law. For doing so, let \(x\in U\) and \(t\ge 0.\) We define the function \(u:\mathbb {R}_{\ge 0}\rightarrow X\) by \(u(s):=u_{x}(t+s)=T(t+s)x.\) Then clearly, u is continuous with \(u(0)=u_{x}(t)=T(t)x\) and

$$ \int _{0}^{s}u(r){\mathrm{d}}r=\int _{0}^{s}u_{x}(t+r){\mathrm{d}}r=\int _{t}^{s+t}u_{x}(r){\mathrm{d}}r=\int _{0}^{s+t}u_{x}(r){\mathrm{d}}r-\int _{0}^{t}u_{x}(r){\mathrm{d}}r\in {\text {dom}}(A) $$

for each \(s\ge 0\) with

$$\begin{aligned} A\int _{0}^{s}u(r){\mathrm{d}}r&=A\int _{0}^{s+t}u_{x}(r){\mathrm{d}}r-A\int _{0}^{t}u_{x}(r){\mathrm{d}}r\\&=Eu_{x}(s+t)-Eu_{x}(t)\\&=Eu(s)-Eu_{x}(t)\quad (s\ge 0). \end{aligned}$$

Hence, u is a mild solution of (2) with initial value \(u_{x}(t)\) and thus, \(u_{x}(t)\in U\). This proves \({\text {ran}}T(t)\subseteq U\) and

$$ T(t+s)x=u(s)=T(s)u_{x}(t)=T(s)T(t)x\quad (s,t\ge 0,x\in U). $$

   \(\square \)

We want to inspect the generator of T a bit more closely.

Proposition 4.3

Let B denote the generator of the \(C_{0}\)-semigroup T. Then we have \(-EB\subseteq A.\)

Proof

Let \(x\in {\text {dom}}(B)\). Consequently, \(u_{x}\in C^{1}(\mathbb {R}_{\ge 0};X)\) and thus,

$$ A\frac{1}{h}\int _{t}^{t+h}u_{x}(s){\mathrm{d}}s=-\frac{1}{h}E\left( u_{x}(t+h)-u_{x}(t)\right) \rightarrow -Eu_{x}'(t)\quad (h\rightarrow 0) $$

for each \(t\ge 0\). Since \(\frac{1}{h}\intop _{t}^{t+h}u_{x}(s){\mathrm{d}}s\rightarrow u_{x}(t)\) as \(h\rightarrow 0\) we infer that \(u_{x}(t)\in {\text {dom}}(A)\) for each \(t\ge 0\) and

$$ Eu_{x}'(t)+Au_{x}(t)=0\quad (t\ge 0), $$

i.e. \(u_{x}\) is a classical solution of (2). Choosing \(t=0,\) we infer \(x\in {\text {dom}}(A)\) and \(EBx=-Ax.\)    \(\square \)

5 Pencils with Polynomially Bounded Resolvent

Let \(E\in L(X;Y)\) and let \(A:{\text {dom}}(A)\subseteq X\rightarrow Y\) be densely defined, closed and linear. Throughout this section we assume the following.

Hypotheses B

There exist \(\rho _{0}\in \mathbb {R},\,C\ge 0\) and \(k\in \mathbb {N}\) such that:

  1. (a)

    \(\mathbb {C}_{{\mathrm{Re}}\ge \rho _{0}}\subseteq \rho (E,A)\),

  2. (b)

    \(\forall z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho _{0}}:\,\Vert (zE+A)^{-1}\Vert \le C|z|^{k}.\)

Definition

We call the minimal \(k\in \mathbb {N}\) such that there exists \(C\ge 0\) with

$$ \Vert (zE+A)^{-1}\Vert \le C|z|^{k}\quad (z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho _{0}}) $$

the index of (E, A), denoted by \({\text {ind}}(E,A).\)
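For instance, for the pencil given by \(E=\begin{pmatrix}0&1\\0&0\end{pmatrix}\) and \(A=I\) on \(\mathbb {C}^{2}\) we have \(\rho (E,A)=\mathbb {C}\) and

$$ (zE+A)^{-1}=\begin{pmatrix}1&z\\0&1\end{pmatrix}^{-1}=\begin{pmatrix}1&-z\\0&1\end{pmatrix}\quad (z\in \mathbb {C}), $$

whose norm grows like |z|; hence Hypotheses B holds with \(k=1\) (for any \(\rho _{0}\)) and \({\text {ind}}(E,A)=1.\) The associated Wong sequence reads \({\text {IV}}_{0}=\mathbb {C}^{2}\), \({\text {IV}}_{1}={\text {span}}\{e_{1}\}\) and \({\text {IV}}_{k}=\{0\}\) for \(k\ge 2.\)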

Proposition 5.1

Consider the Wong sequence \(({\text {IV}}_{k})_{k\in \mathbb {N}}\) associated with (E, A). Then

$$ \overline{{\text {IV}}_{k}}=\overline{{\text {IV}}_{k+1}} $$

for all \(k>{\text {ind}}(E,A).\)

Proof

Since we clearly have \(\overline{{\text {IV}}_{k+1}}\subseteq \overline{{\text {IV}}_{k}}\) it suffices to prove \({\text {IV}}_{k}\subseteq \overline{{\text {IV}}_{k+1}}\) for \(k>{\text {ind}}(E,A).\) So, let \(x\in {\text {IV}}_{k}\) for some \(k>{\text {ind}}(E,A).\) By Lemma 3.2 there exist \(x_{1},\ldots ,x_{k}\in X,\,x_{k+1}\in {\text {dom}}(A)\) such that

$$ (zE+A)^{-1}Ex=\frac{1}{z}x+\sum _{\ell =1}^{k}\frac{1}{z^{\ell +1}}x_{\ell }+\frac{1}{z^{k+1}}(zE+A)^{-1}Ax_{k+1}\quad (z\in \rho (E,A)\setminus \{0\}). $$

We define \(x_{n}:=(nE+A)^{-1}nEx\) for \(n\in \mathbb {N}_{\ge \rho _{0}}\). Then \(x_{n}\in {\text {IV}}_{k+1}\) by Lemma 3.2 and by what we have above

$$ x_{n}=x+\sum _{\ell =1}^{k}\frac{1}{n^{\ell }}x_{\ell }+\frac{1}{n^{k}}(nE+A)^{-1}Ax_{k+1}\quad (n\in \mathbb {N}_{\ge \rho _{0}}). $$

Since \(k>{\text {ind}}(E,A),\) we have that \(\frac{1}{n^{k}}(nE+A)^{-1}\rightarrow 0\) as \(n\rightarrow \infty \) and hence, \(x_{n}\rightarrow x\) as \(n\rightarrow \infty ,\) which shows the claim.    \(\square \)
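Applied to the two-dimensional pencil above, and reusing the sketch from Sect. 3, the stabilisation of the closures is already visible at the level of dimensions; here \({\text {ind}}(E,A)+1=2.\)

```python
import numpy as np

# E = [[0,1],[0,0]], A = I_2: ind(E,A) = 1 and the Wong sequence stabilises at {0}
# from step ind(E,A)+1 = 2 onwards. wong_sequence is the sketch from Sect. 3.
E, A = np.array([[0.0, 1.0], [0.0, 0.0]]), np.eye(2)
print([S.shape[1] for S in wong_sequence(E, A, 4)])  # dimensions: [2, 1, 0, 0, 0]
```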

Our next goal is to determine the space U. For doing so, we restrict ourselves to Hilbert spaces X.

Proposition 5.2

Assume Hypotheses A and let X be a Hilbert space. Then \(U=\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}.\)

Proof

By Propositions 4.1 and 5.1 we have that \(U\subseteq \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\). We now prove that \({\text {IV}}_{{\text {ind}}(E,A)+1}\subseteq U,\) which yields the assertion since U is closed by Hypotheses A. Let \(x\in {\text {IV}}_{{\text {ind}}(E,A)+1}\) and \(\rho >\max \{0,\rho _{0}\}.\) We define

$$ v(z):=\frac{1}{\sqrt{2\pi }}(zE+A)^{-1}Ex\quad (z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho }) $$

and show that \(v\in \mathcal {H}_{2}(\mathbb {C}_{{\mathrm{Re}}\ge \rho };X).\) For doing so, we use Lemma 3.2 to find \(x_{1},\ldots ,x_{k}\in X,x_{k+1}\in {\text {dom}}(A)\), \(k:={\text {ind}}(E,A)+1\), such that

$$ \sqrt{2\pi } v(z)=(zE+A)^{-1}Ex=\frac{1}{z}x+\sum _{\ell =1}^{k}\frac{1}{z^{\ell +1}}x_{\ell }+\frac{1}{z^{k+1}}(zE+A)^{-1}Ax_{k+1}\quad (z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho }). $$

Then we have

$$ \Vert v(z)\Vert \le \frac{K}{|z|}\quad (z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho }) $$

for some constant \(K\ge 0\) and hence, \(v\in \mathcal {H}_{2}(\mathbb {C}_{{\mathrm{Re}}\ge \rho };X),\) since obviously v is holomorphic. Setting

$$ u:=\mathcal {L}_{\rho }^{*}v({\mathrm{i}}\cdot +\rho ) $$

we thus have \(u\in L_{2,\rho }(\mathbb {R}_{\ge 0};X)\) by the Theorem of Paley-Wiener, Theorem 2.4. Moreover,

$$ \sqrt{2\pi } zv(z)-x=\sum _{\ell =1}^{k}\frac{1}{z^{\ell }}x_{\ell }+\frac{1}{z^{k}}(zE+A)^{-1}Ax_{k+1}\quad (z\in \mathbb {C}_{{\mathrm{Re}}\ge \rho }) $$

and thus, \(z\mapsto zv(z)- \frac{1}{\sqrt{2\pi }} x\in \mathcal {H}_{2}(\mathbb {C}_{{\mathrm{Re}}\ge \rho };X)\) which yields

$$ (u-\chi _{\mathbb {R}_{\ge 0}}x)'=\mathcal {L}_{\rho }^{*}\left( \left( {\mathrm{i}}\cdot +\rho \right) v-\frac{1}{\sqrt{2\pi }}x\right) \in L_{2,\rho }(\mathbb {R}_{\ge 0};X), $$

i.e. \(u-\chi _{\mathbb {R}_{\ge 0}}x\in H_{\rho }^{1}(\mathbb {R};X),\) which shows that u is continuous on \(\mathbb {R}_{\ge 0}\) by the Sobolev embedding theorem, Proposition 2.3. We now prove that u is indeed a mild solution. Since \(u-\chi _{\mathbb {R}_{\ge 0}}x\) is continuous on \(\mathbb {R},\) we infer that

$$ u(0+)-x=0, $$

and thus u attains the initial value x. Moreover,

$$\begin{aligned} \left( \mathcal {L}_{\rho }E(u-\chi _{\mathbb {R}_{\ge 0}}x)'\right) (t)&=({\mathrm{i}}t+\rho )E\left( \mathcal {L}_{\rho }(u-\chi _{\mathbb {R}_{\ge 0}}x)\right) (t)\\&=({\mathrm{i}}t+\rho )Ev({\mathrm{i}}t+\rho )-\frac{1}{\sqrt{2\pi }}Ex\\&=\frac{1}{\sqrt{2\pi }}(({\mathrm{i}}t+\rho )E(({\mathrm{i}}t+\rho )E+A)^{-1}Ex-Ex)\\&=-\frac{1}{\sqrt{2\pi }}A(({\mathrm{i}}t+\rho )E+A)^{-1}Ex\\&=-A\left( \mathcal {L}_{\rho }u\right) (t) \end{aligned}$$

for almost every \(t\in \mathbb {R}.\) Hence, \(u(t)\in {\text {dom}}(A)\) almost everywhere and

$$ -Au(t)=\left( E(u-\chi _{\mathbb {R}_{\ge 0}}x)'\right) (t) $$

for almost every \(t\in \mathbb {R}.\) By integrating over an interval [0, t], we derive

$$ -\int _{0}^{t}Au(s){\mathrm{d}}s=Eu(t)-Ex\quad (t\ge 0) $$

and hence, u is a mild solution of (2). Thus, \(x\in U\) and so, \(U=\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}.\)    \(\square \)

For the sake of readability, we introduce the following notion.

Definition

We define the space

$$ V:=A^{-1}\left[ E\left[ \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\right] \right] . $$

Remark 5.3

Note that \({\text {IV}}_{{\text {ind}}(E,A)+2}\subseteq V\subseteq \overline{{\text {IV}}_{{\text {ind}}(E,A)+2}}=\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) by Lemma 3.3 and Proposition 5.1.

Lemma 5.4

Assume that \(E:\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow Y\) is injective. Then

$$ C:=E^{-1}A:V\subseteq \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}} $$

is well-defined and closed.

Proof

Note that \(A[V]\subseteq E[\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}]\) and thus, C is well-defined. Let \((x_{n})_{n\in \mathbb {N}}\) be a sequence in V such that \(x_{n}\rightarrow x\) and \(Cx_{n}\rightarrow y\) in \(\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) for some \(x,y\in \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}.\) We then have

$$ Ax_{n}=ECx_{n}\rightarrow Ey $$

and hence, \(x\in {\text {dom}}(A)\) with \(Ax=Ey\in E\left[ \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\right] .\) This shows \(x\in V\) and \(Cx=E^{-1}Ax=y\); thus C is closed.    \(\square \)

Proposition 5.5

Assume Hypotheses A and let X be a Hilbert space. Denote by B the generator of T. Then \(E:\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow Y\) is injective and \(B=-C\), where C is the operator defined in Lemma 5.4.

Proof

By Proposition 4.3 we have \(-EB\subseteq A.\) Hence, for \(x\in U=\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) (see Proposition 5.2) and \(z\in \rho (B)\cap \rho (E,A)\) we obtain

$$ (zE+A)(z-B)^{-1}x=(zE-EB)(z-B)^{-1}x=Ex $$

and hence,

$$ (z-B)^{-1}x=(zE+A)^{-1}Ex. $$

Thus, if \(Ex=0\) for some \(x\in \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\), we infer that \((z-B)^{-1}x=0\) and thus, \(x=0.\) Hence, E is injective and thus, C is well defined. Moreover, we observe that for \(x,y\in \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) and \(z\in \rho (E,A)\cap \rho (B)\) we have

$$\begin{aligned} x\in {\text {dom}}(C)\wedge (z+C)x=y&\Leftrightarrow x\in {\text {dom}}(A)\wedge \left( zE+A\right) x=Ey\\&\Leftrightarrow x=(zE+A)^{-1}Ey=(z-B)^{-1}y \end{aligned}$$

and thus, \(z\in \rho (-C)\) with \((z+C)^{-1}=(z-B)^{-1},\) which in turn implies \(B=-C\).

   \(\square \)

The converse statement also holds true, even in the case of a Banach space X.

Proposition 5.6

Let \(E:\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow Y\) be injective and \(-C\) generate a \(C_{0}\)-semigroup on \(\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\), where C is the operator defined in Lemma 5.4. Then Hypotheses A holds.

Proof

Denote by T the semigroup generated by \(-C.\) By Proposition 4.1 and Proposition 5.1 we know that \(U\subseteq \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}.\) We first prove equality here. For doing so, we need to show that \(T(\cdot )x\) is a mild solution of (2) for \(x\in \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}.\) We have

$$ T(t)x+C\int _{0}^{t}T(s)x{\mathrm{d}}s=x\quad (t\ge 0). $$

Since \(EC\subseteq A\), we know that

$$ \int _{0}^{t}T(s)x{\mathrm{d}}s\in {\text {dom}}(A)\quad (t\ge 0) $$

and that

$$ A\int _{0}^{t}T(s)x{\mathrm{d}}s=EC\int _{0}^{t}T(s)x{\mathrm{d}}s=Ex-ET(t)x $$

and thus, \(T(\cdot )x\) is a mild solution of (2), which in turn implies \(x\in U.\) So, we indeed have \(U=\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) and hence, U is closed. It remains to prove the uniqueness of mild solutions for initial values in U. So, let \(u_{x}\) be a mild solution for some \(x\in U.\) By Proposition 4.1 we know that \(u_{x}(t)\in \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) for each \(t\ge 0\). Hence,

$$ A\int _{0}^{t}u_{x}(s){\mathrm{d}}s=Ex-Eu_{x}(t)\in E\left[ \overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\right] \quad (t\ge 0), $$

which shows \(\int _{0}^{t}u_{x}(s){\mathrm{d}}s\in V={\text {dom}}(C).\) Hence,

$$ C\int _{0}^{t}u_{x}(s){\mathrm{d}}s=E^{-1}A\int _{0}^{t}u_{x}(s){\mathrm{d}}s=x-u_{x}(t)\quad (t\ge 0), $$

i.e. \(u_{x}\) is a mild solution of the Cauchy problem associated with \(-C\). Hence, \(u_{x}=T(\cdot )x,\) which shows the claim.    \(\square \)

We summarise our findings of this section in the following theorem.

Theorem 5.7

We consider the following two statements.

  1. (a)

    Hypotheses A holds,

  2. (b)

    \(E:\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow Y\) is injective and \(-C\) generates a \(C_{0}\)-semigroup on \(\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\), where C is the operator defined in Lemma 5.4.

Then (b) \(\Rightarrow \) (a) and if X is a Hilbert space, then (b) \(\Leftrightarrow \) (a).

The crucial condition for Hypotheses A to hold is the injectivity of \(E:\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\rightarrow Y.\) It is noteworthy that \(E|_{{\text {IV}}_{{\text {ind}}(E,A)+1}}\) is always injective. Indeed, if \(Ex=0\) for some \(x\in {\text {IV}}_{{\text {ind}}(E,A)+1},\) we can use Lemma 3.2 to find \(x_{1},\ldots ,x_{{\text {ind}}(E,A)+1}\in X,x_{{\text {ind}}(E,A)+2}\in {\text {dom}}(A)\) such that

$$ (zE+A)^{-1}Ex=\frac{1}{z}x+\sum _{\ell =1}^{{\text {ind}}(E,A)+1}\frac{1}{z^{\ell +1}}x_{\ell }+\frac{1}{z^{{\text {ind}}(E,A)+2}}(zE+A)^{-1}Ax_{{\text {ind}}(E,A)+2}\quad (z\in \rho (E,A)). $$

Thus, we have \(0=z(zE+A)^{-1}Ex\rightarrow x\) as \(z\rightarrow \infty \) and hence, \(x=0.\) However, it is not true in general that the injectivity carries over to the closure \(\overline{{\text {IV}}_{{\text {ind}}(E,A)+1}}\), as the following example shows.

Example 5.8

Consider the Hilbert space \(L_{2}(-2,2)\) and define the operator

$$ \partial ^{\#}:{\text {dom}}(\partial ^{\#})\subseteq L_{2}(-2,2)\rightarrow L_{2}(-2,2),\quad u\mapsto u', $$

where

$$ {\text {dom}}(\partial ^{\#}):=\{u\in H^{1}(-2,2)\,;\,u(-2)=u(2)\}. $$

It is well-known that this operator is skew-selfadjoint. We set

$$ E:=\chi _{[-1,1]}({\text {m}}),\quad A:=\chi _{[-2,2]\setminus [-1,1]}({\text {m}})+\partial ^{\#}, $$

where \(\chi _{I}({\text {m}})\) denotes the multiplication operator with the function \(\chi _{I}\) on \(L_{2}(-2,2).\) Clearly, E is linear and bounded and A is closed linear and densely defined. Moreover, for \(z\in \mathbb {C}_{{\mathrm{Re}}>0}\) and \(u\in {\text {dom}}(\partial ^{\#})\) we obtain

$$\begin{aligned} {\mathrm{Re}}\langle (zE+A)u,u\rangle&={\mathrm{Re}}\langle z\chi _{[-1,1]}({\text {m}})u+\chi _{[-2,2]\setminus [-1,1]}({\text {m}})u,u\rangle \\&={{\mathrm{Re}}}\, z\Vert \chi _{[-1,1]}({\text {m}})u\Vert _{L_{2}(-2,2)}^{2}+\Vert \chi _{[-2,2]\setminus [-1,1]}({\text {m}})u\Vert _{L_{2}(-2,2)}^{2}\\&\ge \min \{{\mathrm{Re}}\, z,1\}\Vert u\Vert _{L_{2}(-2,2)}^{2}, \end{aligned}$$

where we have used the skew-selfadjointness of \(\partial ^{\#}\) in the first equality. Hence, we have

$$ \Vert u\Vert _{L_{2}(-2,2)}\le \frac{1}{\min \{{\mathrm{Re}}\, z,1\}}\Vert (zE+A)u\Vert _{L_{2}(-2,2)}, $$

which proves the injectivity of \((zE+A)\) and the continuity of its inverse. Since the same argument works for the adjoint \((zE+A)^{*}\), it follows that \((zE+A)^{-1}\in L(L_{2}(-2,2))\) with

$$ \Vert (zE+A)^{-1}\Vert \le \frac{1}{\min \{{\mathrm{Re}}\, z,1\}}\quad (z\in \mathbb {C}_{{\mathrm{Re}}>0}). $$

Hence, (E, A) satisfies Hypotheses B on \(\mathbb {C}_{{\mathrm{Re}}\ge \rho _{0}}\) for each \(\rho _{0}>0\) with \({\text {ind}}(E,A)=0.\) Moreover, we have

$$ u\in {\text {IV}}_{1}=A^{-1}[E[{\text {dom}}(A)]] $$

if and only if \(u\in {\text {dom}}(\partial ^{\#})\) and

$$ \chi _{[-2,2]\setminus [-1,1]}({\text {m}})u+u'=\chi _{[-1,1]}({\text {m}})v $$

for some \(v\in {\text {dom}}(\partial ^{\#}).\) The latter is equivalent to \(u\in {\text {dom}}(\partial ^{\#})\cap H^{2}(-1,1)\) and

$$ u(t)+u'(t)=0\quad (t\notin [-1,1]\text { a.e.}). $$

Thus, we have

$$ {\text {IV}}_{1}=\left\{ u\in {\text {dom}}(\partial ^{\#})\cap H^{2}(-1,1)\,;\,\exists c\in \mathbb {R}:\,u(t)=c\left( \chi _{[-2,-1]}(t)\mathrm {e}^{-t}+\chi _{[1,2]}(t)\mathrm {e}^{4-t}\right) \quad (t\notin [-1,1]\text { a.e.})\right\} . $$

In particular, we obtain that

$$ v(t):=\chi _{[-2,-1]}(t)\mathrm {e}^{-t}+\chi _{[1,2]}(t)\mathrm {e}^{4-t}\quad (t\in (-2,2)) $$

belongs to \(\overline{{\text {IV}}_{1}}\); indeed, v is the \(L_{2}\)-limit of elements of \({\text {IV}}_{1}\) with \(c=1\) whose restrictions to \([-1,1]\) have arbitrarily small norm. But this function satisfies \(Ev=0\) and hence, E is not injective on \(\overline{{\text {IV}}_{1}}.\)

Remark 5.9

In the case \(E,A\in L(X;Y)\) and \({\text {ind}}(E,A)=0,\) the injectivity of E carries over to \(\overline{{\text {IV}}_{1}}.\) Indeed, we observe that the operators

$$ (nE+A)^{-1}nE=1-(nE+A)^{-1}A $$

for \(n\in \mathbb {N}\) large enough are uniformly bounded. Moreover, for \(x\in {\text {IV}}_{1}\) we have

$$ (nE+A)^{-1}nEx\rightarrow x\quad (n\rightarrow \infty ) $$

and hence, the convergence carries over to all \(x\in \overline{{\text {IV}}_{1}}.\) In particular, if \(Ex=0\) for some \(x\in \overline{{\text {IV}}_{1}},\) we infer \(x=0\) and thus, E is indeed injective on \(\overline{{\text {IV}}_{1}}.\) So far, the author has not been able to prove or disprove whether this injectivity also holds for \({\text {ind}}(E,A)>0\) when E and A are bounded.