We first introduce basic notions on Banach and Hilbert spaces. Afterwards, we recall some well-known results, which help prove the well-posedness of the various sets of equations we study throughout this book. Unless otherwise specified, the proofs of these classic results can be found in [62, 92, 157, 207]. By well-posedness, it is usually understood that the problem admits one, and only one, solution, which depends continuously on the data. In the case of linear problems, the continuity property amounts to proving that the norm of the solution is bounded by a constant times the norm of the data. The crucial point is that the norm that measures the solution and the norm that measures the data have to be chosen carefully, in order to derive the ad hoc constant. Particular attention is paid to problems whose formulation includes constraints on the solution.

4.1 Basic Results

To begin with, let us recall some familiar notions regarding topological, separable, Banach or Hilbert vector spaces (over \(\mathbb {C}\)), and (anti)linear mappings. All notions are easily extended to vector spaces over \(\mathbb {R}\), and linear mappings.

By definition, a topological space is separable if it contains a countable dense subset; a Banach space is a normed vector space that is complete with respect to its norm; a Hilbert space is a vector space endowed with a scalar product, which is complete with respect to the norm induced by the scalar product.

Let X be a Banach space (with norm ∥⋅∥ X ). Throughout this chapter, I X denotes the identity mapping in X and, given Z as a vector subspace of X, i Z→X denotes the canonical imbedding of Z in X. Let Y be a second Banach space (with norm ∥⋅∥ Y ), and let A be a linear mapping A : x ↦ Ax defined on D(A), a vector subspace of X, with values in Y . Its kernel (respectively range) is denoted by \(\ker (A)\) (respectively R(A)).

We have the following incremental definitions and notations (cf. [62, 207]).

Definition 4.1.1

  • The linear mapping A is called an unbounded operator.

  • The subspace D(A) is called the domain of the unbounded operator A.

  • The unbounded operator A is continuous if

    $$\displaystyle \begin{aligned} \exists C>0,\ \forall x\in D(A),\ \|Ax\|{}_Y\le C\,\|x\|{}_X. \end{aligned}$$
  • A continuous unbounded operator A with domain D(A) equal to X is called a bounded operator. The space of all bounded operators from X to Y is denoted by \(\mathcal {L}(X,Y)\), with operator norm

    $$\displaystyle \begin{aligned} |||A|||{}_{\mathcal{L}(X,Y)}=\sup_{x\in X\setminus\{0\}}\frac{\|Ax\|{}_Y}{\|x\|{}_X}. \end{aligned}$$

    When X = Y , one uses the notation \(\mathcal {L}(X)\), instead of \(\mathcal {L}(X,X)\).

  • A bounded operator A is a Fredholm operator if \(\dim (\ker (A))<\infty \), R(A) is closed and \(\mathrm {codim}(R(A))<\infty \). In this case, its index is equal to \(\dim (\ker (A))-\mathrm {codim}(R(A))\).

  • A bounded bijective operator with a bounded inverse is called an isomorphism.

  • An unbounded operator A is closed if its graph

    $$\displaystyle \begin{aligned} G(A)=\{(x,Ax)\ :\ x\in D(A)\} \end{aligned}$$

    is closed in X × Y .

  • A bounded operator A is compact if the closure of the image by A of the unit ball B X (0, 1) = {x ∈ X  : ∥x∥ X  ≤ 1} is compact in Y .

Once the basic results are recalled, we will often write “operator” instead of “unbounded operator”.

In practical situations, one usually proves closedness or compactness as follows. An unbounded operator A  : X → Y with domain D(A) is closed provided that, for any sequence (x k ) k of elements of D(A) such that x k  → x in X and Ax k  → y in Y , one has both x ∈ D(A) and y = Ax. On the other hand, a bounded operator \(A\in \mathcal {L}(X,Y)\) is compact, provided that, for any bounded sequence (x k ) k of elements of X, one can extract a subsequence of (Ax k ) k that converges in Y .
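To illustrate closedness (a standard example, not specific to this text), take \(X=Y=L^2(0,1)\), \(D(A)=H^1(0,1)\) and \(Av=v'\). If \(v_k\rightarrow v\) and \(Av_k\rightarrow y\) in \(L^2(0,1)\), then \(v\in H^1(0,1)\) and \(v'=y\) (derivatives pass to the limit in the distributional sense), so A is closed. On the other hand, A is not continuous: with \(v_k:x\mapsto\sin(k\pi x)\), one computes

$$\displaystyle \begin{aligned} \|v_k\|{}_{L^2(0,1)}=\frac{1}{\sqrt{2}},\qquad \|Av_k\|{}_{L^2(0,1)}=\frac{k\pi}{\sqrt{2}}\ \longrightarrow\ +\infty. \end{aligned}$$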

Proposition 4.1.2

The vector subspace of compact operators is closed in \(\mathcal {L}(X,Y)\) with respect to the norm \(|||\cdot |||{ }_{\mathcal {L}(X,Y)}\).

Let Z be a third Banach space, and let \(A\in \mathcal {L}(X,Y)\) and \(B\in \mathcal {L}(Y,Z)\) . Then, \(B\circ A\in \mathcal {L}(X,Z)\) . In addition, if A or B is compact, then \(B\circ A\) is also compact.

Theorem 4.1.3 (Closed Graph)

Let A be a closed unbounded operator with domain equal to X ; then, A is a bounded operator.

Theorem 4.1.4 (Banach-Schauder, or Open Mapping)

Let A be a bounded, bijective, operator from X to Y ; then, its inverse A −1 is a bounded operator from Y to X.

Next, let us introduce a useful norm.

Definition 4.1.5

Given an unbounded operator A, the norm defined by

$$\displaystyle \begin{aligned} \forall v\in D(A),\ \|v\|{}_{D(A)} = \left( \|v\|{}_X^2 + \|Av\|{}_Y^2\right)^{1/2}, \end{aligned}$$

is called the graph norm.

When the operator is bounded (so that D(A) = X), ∥⋅∥ D(A) is equivalent to ∥⋅∥ X on X.

Let us then consider the spectrum of a bounded operator.

Definition 4.1.6

Let \(A\in \mathcal {L}(X)\).

  • Its resolvent set is \(\rho (A)=\{\lambda \in \mathbb {C}\ :\ (A-\lambda I_X)\) is bijective}.

  • Its spectrum is \(\sigma (A)=\mathbb {C}\setminus \rho (A)\).

  • Its point spectrum is \(Eig(A)=\{\lambda \in \sigma (A)\ :\ \ker (A-\lambda I_X)\neq \{0\}\}\).

An element λ of Eig(A) is called an eigenvalue of A. The vector space \(E_\lambda (A)=\ker (A-\lambda \,I_X)\) is the corresponding eigenspace. Non-zero elements of E λ (A) are called eigenvectors. The geometric multiplicity of λ is equal to \(\dim (E_\lambda (A))\), and its ascent is the smallest integer α such that \(\ker (A-\lambda \,I_X)^{\alpha +1}=\ker (A-\lambda \,I_X)^\alpha \). The vector space \(R_\lambda (A)=\ker (A-\lambda \,I_X)^\alpha \) is the corresponding generalized eigenspace. Non-zero elements of R λ (A) are called generalized eigenvectors. The algebraic multiplicity of λ is equal to \(\dim (R_\lambda (A))\).
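As a simple finite-dimensional illustration (ours), consider on \(X=\mathbb {C}^2\) the operator

$$\displaystyle \begin{aligned} A=\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}:\quad E_\lambda(A)=\ker(A-\lambda\,I_X)=\mathrm{span}\{(1,0)^T\},\qquad \ker(A-\lambda\,I_X)^2=\mathbb{C}^2. \end{aligned}$$

Hence, λ has geometric multiplicity 1, ascent 2, and algebraic multiplicity \(\dim (R_\lambda (A))=2\).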

By definition, for a given eigenvalue, its geometric multiplicity is lower than, or equal to, its algebraic multiplicity. Next, let us recall some results on the spectrum of compact operators.

Theorem 4.1.7

Let \(A\in \mathcal {L}(X)\) be a compact operator. Then:

  • The spectrum σ(A) is countable.

  • 0 ∈ σ(A) (it is assumed here that \(\dim (X)=\infty \) ).

  • σ(A) ∖{0} = Eig(A) ∖{0} (all non-zero elements of the spectrum are eigenvalues).

  • The multiplicities of all non-zero eigenvalues are finite.

    Furthermore, one of the following (exclusive) assertions holds:

    • σ(A) = {0},

    • σ(A) ∖{0} is finite,

    • σ(A) ∖{0} is a sequence whose limit is 0.
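These assertions can be checked by hand on a standard example: the diagonal operator on \(X=\ell^2(\mathbb{N}^*)\),

$$\displaystyle \begin{aligned} A:(x_k)_{k\ge1}\mapsto\left(\frac{x_k}{k}\right)_{k\ge1},\qquad \sigma(A)=\{0\}\cup\left\{\frac{1}{k}\ :\ k\in\mathbb{N}^*\right\}. \end{aligned}$$

This operator is compact, as the operator-norm limit of its finite-rank truncations (cf. Proposition 4.1.2); each 1∕k is an eigenvalue of multiplicity one, whereas 0 belongs to σ(A) without being an eigenvalue, and σ(A) ∖{0} is a sequence whose limit is 0.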

Let us turn our attention to Hilbert spaces. Let V be a Hilbert space, with scalar product (⋅, ⋅) V and associated norm ∥⋅∥ V . Recall that its dual space V′ is the space of continuous antilinear forms on V , endowed with the norm

$$\displaystyle \begin{aligned} \|f\|{}_{V'}=\sup_{v\in V\setminus\{0\}}\frac{|\langle f,v\rangle_{V}|}{\|v\|{}_V}.\end{aligned} $$

Above, 〈f, v〉 V denotes the action of f on v. Whenever it is clear from the context, we denote it simply by 〈f, v〉.

Definition 4.1.8

A bounded operator \(A\in \mathcal {L}(V)\) is positive if

$$\displaystyle \begin{aligned} \forall v\in V,\ (Av,v)_V\ge0.\end{aligned} $$

A bounded operator \(A\in \mathcal {L}(V)\) is positive-definite if

$$\displaystyle \begin{aligned} \forall v\in V\setminus\{0\},\ (Av,v)_V>0.\end{aligned} $$

If a bounded operator is positive-definite, then its kernel reduces to {0}.

Definition 4.1.9

Let A be an unbounded operator of V with domain D(A). It is said to be monotone if

$$\displaystyle \begin{aligned} \forall v\in D(A),\ (Av,v)_V\ge0. \end{aligned}$$

It is said to be maximal monotone if:

  1. (i)

    it is monotone;

  2. (ii)

    i D(A)→V + A is surjective from D(A) to V .

Definition 4.1.10

An unbounded operator A : D(A) → V is symmetric if

$$\displaystyle \begin{aligned} \forall v,w\in D(A),\ (Av,w)_V=(v,Aw)_V. \end{aligned}$$

Let W be a second Hilbert space.

Definition 4.1.11

Let A : D(A) → W be an unbounded operator with a dense domain in V . Its adjoint is the unbounded operator \(A^*: D(A^*)\rightarrow V\), with

$$\displaystyle \begin{aligned} D(A^*) {\,=\,} \{w\in W\ :\ \exists v\in V,\ \forall v'\in D(A),\ (w,Av')_W {\,=\,} (v,v')_V\}, \mbox{ and }A^*w=v. \end{aligned}$$

Definition 4.1.12

Let A : D(A) → V be an unbounded operator with a dense domain in V . It is self-adjoint if \(A=A^*\). It is skew-adjoint if \(A=-A^*\).

There are several possibilities for proving that an operator is self-adjoint.

Proposition 4.1.13

Let \(A\in \mathcal {L}(V)\) . Then, A is self-adjoint if, and only if, it is symmetric.

Proposition 4.1.14

Let A : D(A) → V be a maximal monotone unbounded operator. Then, A is self-adjoint if, and only if, it is symmetric.

This last result is often used in conjunction with the next one.

Proposition 4.1.15

Let A : D(A) → V be an unbounded operator. Then, A is maximal monotone if, and only if, A is closed with a dense domain, and both A and \(A^*\) are monotone.

We also have an alternative characterisation of compact operators in terms of weakly convergent sequences.

Definition 4.1.16 (Weak Convergence)

A sequence (v k )k≥0 of elements of V is weakly convergent if

$$\displaystyle \begin{aligned} \exists v\in V,\ \forall w\in V,\ \lim_{k\rightarrow\infty}(v_k,w)_V = (v,w)_V. \end{aligned}$$

One writes \(v_k \rightharpoonup v\) in V .
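A classical example: any orthonormal sequence (e k )k≥0 of V satisfies \(e_k \rightharpoonup 0\) in V , although \(\|e_k\|{}_V=1\) for all k; hence, weak convergence does not imply (strong) convergence. Indeed, by Bessel's inequality,

$$\displaystyle \begin{aligned} \forall w\in V,\ \sum_{k\ge0}|(e_k,w)_V|{}^2\le\|w\|{}_V^2,\quad \mbox{so that}\quad \lim_{k\rightarrow\infty}(e_k,w)_V=0. \end{aligned}$$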

Proposition 4.1.17

Let \(A\in \mathcal {L}(V,W)\) . Then, given elements (v k )k≥0 and v of V , \(v_k \rightharpoonup v\) in V implies \(Av_k \rightharpoonup Av\) in W.

Moreover, A is compact if, and only if,

$$\displaystyle \begin{aligned} \forall (v_k)_{k\ge0},v\in V,\ v_k \rightharpoonup v\mathit{\mbox{ in }}V\ \Longrightarrow\ \lim_{k\rightarrow\infty}Av_k = Av\mathit{\mbox{ in }}W.\end{aligned} $$

Let us now state an important result in regard to compact operators.

Theorem 4.1.18

Let \(A\in \mathcal {L}(V)\) be a compact operator. Then,

  • \(\ker (I_V-A)\) is a finite-dimensional vector space.

  • R(I V  − A) is closed; more precisely, \(R(I_V-A)=\left (\ker (I_V-A^*)\right )^\perp \).

  • \(\ker (I_V-A)=\{0\}\ \Longleftrightarrow \ R(I_V-A)=V\).

  • \(\dim (\ker (I_V-A))=\dim (\ker (I_V-A^*))\).

Evidently, given \(\lambda \in \mathbb {C}\setminus \{0\}\), one can replace I V with λI V in the above Theorem; in particular, λI V  − A is a Fredholm operator. It follows that the multiplicities of any non-zero eigenvalue λ of a compact operator are finite: \(0<\dim (E_\lambda (A))\le \dim (R_\lambda (A))<\infty \) (whereas \(0\le \dim (E_0(A))\le \infty \)).

Also, it allows one to solve the following classical problem.

Let \(A\in \mathcal {L}(V)\), \(\lambda \in \mathbb {C}\) and f ∈ V ,

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V\ \mathit{such \ that}\\ \lambda u - Au = f. \end{array} \right.\end{aligned} $$
(4.1)

According to Theorem 4.1.18, one readily proves the following result when the operator is compact.

Corollary 4.1.19 (Fredholm Alternative)

Let \(A\in \mathcal {L}(V)\) be a compact operator and \(\lambda \in \mathbb {C}\setminus \{0\}\) . Then:

  • either, for all f ∈ V , Problem (4.1) has one, and only one, solution u;

  • or, the homogeneous equation λu − Au = 0 has n λ  > 0 linearly independent solutions. In this case, given f ∈ V , Problem (4.1) has solutions if, and only if, f satisfies n λ orthogonality conditions. Then, the space of solutions is affine, and the dimension of the corresponding vector space is equal to n λ .

This result has many practical applications, in particular, for solving Helmholtz-like problems (see the upcoming Sect. 4.5).
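To make the alternative concrete, consider the rank-one (hence compact) operator \(Av=(v,e)_V\,e\), with \(\|e\|{}_V=1\), and take λ = 1 (an illustration of ours). Then, \(\ker (I_V-A)=\mathrm{span}\{e\}\), so n λ  = 1, and the single orthogonality condition reads (f, e) V  = 0; when it holds,

$$\displaystyle \begin{aligned} u-(u,e)_V\,e=f\ \Longleftrightarrow\ u\in\{f+\alpha\,e\ :\ \alpha\in\mathbb{C}\}, \end{aligned}$$

an affine space whose underlying vector space has dimension n λ  = 1.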

As one can check readily, in the case of a self-adjoint operator, all eigenvalues are real numbers. In addition, let us mention an important result in regard to the eigenvectors of compact and self-adjoint operators in a separable Hilbert space.

Theorem 4.1.20 (Spectral)

Assume that V is separable. Let \(A\in \mathcal {L}(V)\) be a compact and self-adjoint operator. Then, there exists a Hilbert basis of V made of eigenvectors of A.

With this result, one can write a compact and self-adjoint operator as a sum of scaled projection operators onto its eigenspaces: this is the so-called spectral decomposition of a compact, self-adjoint operator.
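Explicitly, if (e k )k≥0 denotes such a Hilbert basis and (λ k )k≥0 the associated (real) eigenvalues, then

$$\displaystyle \begin{aligned} \forall v\in V,\ Av=\sum_{k\ge0}\lambda_k\,(v,e_k)_V\,e_k, \end{aligned}$$

the series converging in V .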

Let us mention some results on interpolation theory, in a Hilbert space V (see [157, Chapter 1, §2]). In this setting, W is a second Hilbert space, and it is also a dense, strict subspace (with continuous imbedding) of V . Classically, there exists a self-adjoint, positive unbounded operator Λ of V with domain D(Λ) = W. Moreover, ∥⋅∥ W and the graph norm \((\|\cdot \|{ }_V^2+\|\varLambda \cdot \|{ }_V^2)^{1/2}\) are equivalent norms on W. On the other hand, given a self-adjoint, positive unbounded operator A of V , one can define the unbounded operators \(A^\theta\) for θ ≥ 0, with the help of the spectral representation of the unbounded operator A. This leads to the …

Definition 4.1.21 (Interpolated Space)

Given θ ∈ [0, 1], the Hilbert space \([W,V]_\theta = D(\varLambda ^{1-\theta })\) is the interpolated space of order θ between W and V , with norm

$$\displaystyle \begin{aligned} \|\cdot\|{}_{[W,V]_\theta}=\left(\|\cdot\|{}_V^2+\|\varLambda^{1-\theta}\cdot\|{}_V^2\right)^{1/2} . \end{aligned}$$

We now list some properties of interpolated spaces.

Proposition 4.1.22

Let ([W, V ] θ )θ ∈ [0,1] be the interpolated spaces.

  • The definition of the interpolated space is independent of the choice of the unbounded operator Λ.

  • Given θ ∈ [0, 1], there exists C θ  > 0 such that

    $$\displaystyle \begin{aligned} \forall w\in W,\ \|w\|{}_{[W,V]_\theta}\le C_\theta\,\|w\|{}_W^{1-\theta}\|w\|{}_V^{\theta}. \end{aligned}$$
  • Given 0 ≤ θ 1 ≤ θ 2 ≤ 1, it holds that

    $$\displaystyle \begin{aligned} W \subset [W,V]_{\theta_1} \subset [W,V]_{\theta_2} \subset V, \end{aligned}$$

    with continuous imbeddings.

  • Assume that the imbedding of W into V is compact ; then, given 0 < θ 1 < θ 2 < 1, all above imbeddings are compact.
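A standard illustration (see, e.g., [157]): take \(V=L^2(\mathbb {R}^n)\), \(W=H^1(\mathbb {R}^n)\) and \(\varLambda =(I-\varDelta )^{1/2}\), so that D(Λ) = W. Then, for θ ∈ [0, 1],

$$\displaystyle \begin{aligned} [H^1(\mathbb{R}^n),L^2(\mathbb{R}^n)]_\theta=D(\varLambda^{1-\theta})=H^{1-\theta}(\mathbb{R}^n), \end{aligned}$$

and the second item of Proposition 4.1.22 yields the familiar interpolation inequality \(\|w\|{}_{H^{1-\theta}}\le C_\theta\,\|w\|{}_{H^1}^{1-\theta}\|w\|{}_{L^2}^{\theta}\).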

One can also apply interpolation theory to bounded operators (below, \(V^\diamond\), \(W^\diamond\) are two other Hilbert spaces, with \(W^\diamond\) a dense, strict subspace of \(V^\diamond\), with continuous imbedding).

Proposition 4.1.23 (Interpolated operator)

Given \(A\in {\mathcal L}(V,V^\diamond )\cap {\mathcal L}(W,W^\diamond )\) , then for all θ ∈ [0, 1], A belongs to \({\mathcal L}([W,V]_\theta ,[W^\diamond , V^\diamond ]_\theta )\).

Also, we will frequently make use of sesquilinear continuous forms on V × W. Let \(a:V\times W\rightarrow \mathbb {C}\), (v, w)↦a(v, w): a(⋅, ⋅) is continuous if the quantity

$$\displaystyle \begin{aligned} |||a|||=\sup_{v\in V\setminus\{0\},w\in W\setminus\{0\}} \frac{|a(v,w)|}{\|v\|{}_{V}\|w\|{}_{W}} \end{aligned}$$

is bounded. When a(⋅, ⋅) is sesquilinear and continuous on V × W, it defines a unique bounded operator A from V to W′:

$$\displaystyle \begin{aligned} \forall (v,w)\in V\times W,\ \langle Av,w\rangle_{W}=a(v,w). \end{aligned}$$

Respectively, one can also define its conjugate transpose \(A^\dagger\) from W to V′:

$$\displaystyle \begin{aligned} \forall (v,w)\in V\times W,\ \langle A^\dagger w,v\rangle_{V}=\overline{a(v,w)}. \end{aligned}$$

For a bilinear form a defined on Hilbert spaces V, W over \(\mathbb {R}\), one defines A from V to W′ as above, respectively the transpose \(A^t\) from W to V′, without conjugation.

Evidently, given a bounded operator A from V to W′, one could define a sesquilinear continuous form on V × W.

4.2 Static Problems

Let H be a Hilbert space. Then, let f be an element of H′, and define

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in H \mathit{such \ that}\\ \forall v\in H,\ (u,v)_H=\langle f,v\rangle. \end{array} \right.\end{aligned} $$
(4.2)

Problem (4.2) is called a Variational Formulation . It is the first instance in a long sequence of such Formulations.

The first result is the Riesz Theorem.

Theorem 4.2.1 (Riesz)

Problem (4.2) admits one, and only one, solution u in H. Moreover, it holds that \(\|u\|{ }_H=\|f\|{ }_{H'}\).

An interesting consequence of the Riesz Theorem 4.2.1 is the notion of pivot space. Indeed, the mapping f ↦ u is a bijective isometry from H′ to H. Then, one can choose to identify H′ with H.

Definition 4.2.2 (Pivot Space)

Let H be a Hilbert space. Whenever H′ is identified with H via the mapping f ↦ u, H is called the pivot space.

Thus follows …

Proposition 4.2.3

Let H be a Hilbert space. Let V be a second Hilbert space such that V is a dense, vector subspace of H, and such that the canonical imbedding i VH is continuous. Then, when H is chosen as the pivot space, one can identify H with a vector subspace of V ′.

Indeed, given two Hilbert spaces H and V as in the above proposition, the imbedding \(i_{H\rightarrow V'}\) is injective, continuous, and \(i_{H\rightarrow V'}H\) is dense in V′. As a consequence, one can write

$$\displaystyle \begin{aligned} V \subset H\stackrel{\mbox{(pivot)}}{=}H' \subset V' ,\end{aligned} $$

with continuous and dense imbeddings.
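The prototypical example, with Ω a domain of \(\mathbb {R}^3\) (our standing illustration): V = H 1 0(Ω) and H = L 2(Ω) as the pivot space, which gives

$$\displaystyle \begin{aligned} H^1_0(\varOmega)\subset L^2(\varOmega)\stackrel{\mbox{(pivot)}}{=}(L^2(\varOmega))'\subset H^{-1}(\varOmega)=(H^1_0(\varOmega))'. \end{aligned}$$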

Given two Hilbert spaces V , W, given a continuous sesquilinear form a on V × W, and given an element f of W′, let us introduce another Variational Formulation

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V \ \mathit{such \ that}\\ \forall w\in W,\ a(u,w)=\langle f,w\rangle. \end{array} \right.\end{aligned} $$
(4.3)

Definition 4.2.4 (Well-Posedness, Hadamard)

Problem (4.3) is well-posed in the Hadamard sense if, for all f ∈ W′, it has one, and only one, solution u ∈ V with continuous dependence, i.e.,

$$\displaystyle \begin{aligned} \begin{array}{ll} \exists C>0,\,\forall f\in W', &\mbox{ there exists a }\mathit{unique}\ u\in V\mbox{ satisfying (4.3)}\\ &\mbox{ and } \|u\|{}_V\le C\|f\|{}_{W'}. \end{array} \end{aligned}$$

We note that it is possible to reformulate Problem (4.3) as follows:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V \ \mathit{such \ that}\\ Au=f\mbox{ in } W'. \end{array} \right.\end{aligned} $$
(4.4)

We see in Problem (4.3) that u is characterized by two items: first, the fact that it belongs to a specified space V , so that it is measured by ∥⋅∥ V ; and second, the fact that it is determined either by its action on all elements of W, or by an equation set in W′.

Clearly, the operator \(A^{-1}\) is well-defined (and continuous) from W′ to V if, and only if, Problem (4.3) is well-posed in the Hadamard sense.

Proposition 4.2.5

Problem (4.3) is well-posed in the Hadamard sense if, and only if, the operator A of Problem (4.4) is an isomorphism.

We will usually write well-posed instead of well-posed in the Hadamard sense.

Then, we proceed with the second result, which generalizes Riesz's Theorem in the case when V = W. It is called the Lax-Milgram Theorem, and provides a sufficient condition for the well-posedness of Problem (4.3).

Definition 4.2.6

Let a(⋅, ⋅) be a continuous sesquilinear form on V × V . It is coercive if

$$\displaystyle \begin{aligned} \exists\alpha>0,\ \forall v\in V,\ |a(v,v)|\ge\alpha\,\|v\|{}_V^2.\end{aligned} $$

Remark 4.2.7

One could also choose to define the coerciveness of continuous sesquilinear forms by assuming

$$\displaystyle \begin{aligned} \exists\alpha>0,\ \exists\theta\in[0,2\pi[,\ \ \forall v\in V,\ \Re[\exp(\imath\theta)\,a(v,v)] \ge \alpha\,\|v\|{}_V^2.\end{aligned} $$

This definition is equivalent to Definition 4.2.6. We shall use the latter for coerciveness throughout this monograph.

Moreover, with real-valued forms a(⋅, ⋅) (defined on a Hilbert space V over \(\mathbb {R}\)), both definitions boil down to

$$\displaystyle \begin{aligned} \exists s\in\{-1,+1\},\ \exists\alpha>0,\ \forall v\in V,\ s\,a(v,v) \ge\alpha\,\|v\|{}_V^2.\end{aligned} $$

Theorem 4.2.8 (Lax-Milgram)

When V = W, assume that the continuous and sesquilinear form a is coercive. Then, Problem (4.3) is well-posed.
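As an elementary illustration (ours), take V = W = H 1(Ω) on a domain Ω, and

$$\displaystyle \begin{aligned} a(v,w)=\int_\varOmega\left(\nabla v\cdot\overline{\nabla w}+v\,\overline{w}\right)\,d\varOmega:\quad a(v,v)=\|v\|{}_{H^1(\varOmega)}^2, \end{aligned}$$

so a is coercive with α = 1, and the corresponding Problem (4.3) is well-posed for every f ∈ (H 1(Ω))′.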

Instead of imposing coerciveness, one can assume a stability condition, also called an inf-sup condition. This can be useful when the arguments v and w do not belong to the same space.

Definition 4.2.9

Let a(⋅, ⋅) be a continuous sesquilinear form on V × W.

It verifies a stability condition if

$$\displaystyle \begin{aligned} \exists\alpha'>0,\ \forall v\in V,\ \sup_{w\in W\setminus\{0\}}\frac{|a(v,w)|}{\|w\|{}_{W}}\ge\alpha'\,\|v\|{}_V. \end{aligned} $$
(4.5)

It verifies the solvability condition if

$$\displaystyle \begin{aligned} \{w\in W\ :\ \forall v\in V,\ a(v,w)=0\}=\{0\}. \end{aligned} $$
(4.6)

Remark 4.2.10

Condition (4.5) can be equivalently stated as the inf-sup condition

$$\displaystyle \begin{aligned} \exists\alpha'>0,\ \inf_{v\in V\setminus\{0\}}\sup_{w\in W\setminus\{0\}}\frac{|a(v,w)|}{\|v\|{}_V\|w\|{}_{W}}\ge\alpha'. \end{aligned} $$

When V = W, the coerciveness of a sesquilinear form implies both a stability condition (with α′ = α) and a solvability condition, on the same form.

Then, one has the result below.

Proposition 4.2.11

Assume that the continuous and sesquilinear form a verifies a stability condition (4.5) with a suitable α′. Then, \(\ker (A)=\{0\}\) , R(A) is closed in W′, and A is a bijective mapping from V to R(A). As a consequence, given any f ∈ R(A), Problem (4.3) admits one, and only one, solution u in V , and moreover, \(\alpha '\,\|u\|{ }_V\le \|f\|{ }_{W'}\) . Furthermore, if the form a satisfies the solvability condition (4.6), R(A) = W′, and as a consequence, Problem (4.3) is well-posed.

Theorem 4.2.12 (Banach-Necas-Babuska)

Problem (4.3) is well-posed if, and only if, the continuous and sesquilinear form a verifies a stability condition (4.5) and a solvability condition (4.6).

Let us now introduce an a priori intermediate condition (cf. [56]).

Definition 4.2.13

Let a(⋅, ⋅) be a continuous sesquilinear form on V × W. It is \(\mathbb {T}\) -coercive if

$$\displaystyle \begin{aligned} \exists \mathbb{T}\in\mathcal{L}(V, W),\ \mbox{bijective},\ \exists\underline{\alpha}>0,\ \forall v\in V,\ |a(v,\mathbb{T}v)|\ge\underline{\alpha}\,\|v\|{}_V^2. \end{aligned}$$

Proposition 4.2.14

Let a(⋅, ⋅) be a continuous and sesquilinear form: the form a is \(\mathbb {T}\) -coercive if, and only if, it satisfies a stability condition and a solvability condition.
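A simple illustration of the mechanism (ours): suppose that V = W splits orthogonally as V = V + ⊕ V −, and set \(a(v,w)=(v_+,w_+)_V-(v_-,w_-)_V\). The form a is not coercive, since \(a(v,v)=\|v_+\|{}_V^2-\|v_-\|{}_V^2\) may vanish; yet, with the bijective operator \(\mathbb{T}(v_++v_-)=v_+-v_-\), one finds

$$\displaystyle \begin{aligned} |a(v,\mathbb{T}v)|=\|v_+\|{}_V^2+\|v_-\|{}_V^2=\|v\|{}_V^2, \end{aligned}$$

so a is \(\mathbb {T}\)-coercive with \(\underline {\alpha }=1\). This sign-flipping \(\mathbb {T}\) is the prototype used for problems with sign-changing coefficients (cf. [56]).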

Remark 4.2.15

So, to ensure that Problems (4.3) or (4.4) are well-posed:

  • a necessary and sufficient condition is that the form a verifies a stability condition and a solvability condition (see Theorem 4.2.12);

  • a necessary and sufficient condition is that the form a is \(\mathbb {T}\)-coercive (see Proposition 4.2.14);

  • when V = W, a sufficient condition is that the form a is coercive (see the Lax-Milgram Theorem 4.2.8).

Within the framework of the inf-sup theory, the operator \(\mathbb {T}\) is sometimes called an inf-sup operator.

Remark 4.2.16

If the form a is Hermitian (when V = W), the stability of a(⋅, ⋅) is sufficient to guarantee well-posedness. In the same spirit, for a Hermitian form a, Definition 4.2.13 of \(\mathbb {T}\)-coercivity can be simplified to

$$\displaystyle \begin{aligned} \exists \mathbb{T}\in\mathcal{L}(V),\ \exists\underline{\alpha}>0,\ \forall v\in V,\ |a(v,\mathbb{T}v)|\ge\underline{\alpha}\,\|v\|{}_V^2. \end{aligned}$$

In other words, it is not required for \(\mathbb {T}\) to be bijective.

The next result is slightly more complicated, in the sense that it allows one to solve a Variational Formulation , which includes some constraints. More precisely, let Q be a third Hilbert space, and let:

  • a(⋅, ⋅) be a continuous sesquilinear form on V × V ;

  • b(⋅, ⋅) be a continuous sesquilinear form on V × Q;

  • f ∈ V′;

  • g ∈ Q′.

Let us consider the mixed problem, or constrained problem:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,p)\in V\times Q\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v)+ \overline{b(v,p)} =\langle f,v\rangle,\\ \forall q\in Q,\ b(u,q)=\langle g,q\rangle. \end{array} \right. \end{aligned} $$
(4.7)

In the above, the last line expresses the fact that u has to fulfill some constraints, with respect to its action on elements of Q. In terms of operators, recall that one can introduce the bounded operators B and \(B^\dagger\), respectively from V to Q′ and from Q to V′:

$$\displaystyle \begin{aligned} \forall (v,q)\in V\times Q,\ \langle Bv,q\rangle=b(v,q)=\overline{\langle B^\dagger q,v\rangle}. \end{aligned} $$
(4.8)

Problem (4.7) can be reformulated equivalently:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,p)\in V\times Q \ \mathit{such \ that}\\ Au + B^\dagger p = f\mbox{ in }V',\\ Bu = g\mbox{ in }Q'. \end{array} \right. \end{aligned} $$
(4.9)

Remark 4.2.17

When the forms are real-valued and when a(⋅, ⋅) is symmetric, (4.7) is also referred to as a saddle-point problem. The expression mixed problem is generally used in the framework of variational analysis, whereas the term saddle-point formulation refers merely to the context of optimization under constraints. In the following, we will use the one or the other term without distinction, as they appear as two sides of the same problem. Indeed, the mixed formulation (4.7) corresponds to the optimality conditions of the problem that consists in minimizing the quadratic functional \(J(v)=\frac {1}{2}a(v,v) - \langle f,v\rangle \) over v, subject to the constraint expressed by the second line of (4.7). The bilinear form a being symmetric, the couple (u, p) solution to the mixed problem can be viewed, in this case, as the saddle-point of the Lagrangian \(\mathcal {L}(v,q)=J(v)+b(v,q)-\langle g,q\rangle \). Recall that the saddle-point is defined as the couple (u, p) such that

$$\displaystyle \begin{aligned} \forall v \in V,\ \forall q \in Q,\ \mathcal{L}(u,q) \leq \mathcal{L}(u,p) \leq \mathcal{L}(v,p). \end{aligned}$$

Before stating the main result for the solution of (4.7)-(4.9), let us introduce the inf-sup condition on the form b for the mixed problem, where the infimum is taken over elements of Q:

$$\displaystyle \begin{aligned} \exists\beta>0,\ \inf_{q\in Q\setminus\{0\}}\,\sup_{v\in V\setminus\{0\}} \frac{|b(v,q)|}{\|v\|{}_V\,\|q\|{}_Q}\ge\beta. \end{aligned} $$
(4.10)

Now, let

$$\displaystyle \begin{aligned} K = \{v\in V\ :\ \forall q\in Q,\ b(v,q)=0\} \mbox{ and } K^0= \{h\in V'\ :\ \forall v\in K,\ \langle h,v\rangle=0\}. \end{aligned}$$

The subspace K of V is the kernel of B (when no confusion is possible, one writes that K is the kernel of b(⋅, ⋅)), and K 0 is called its polar set. Provided b(⋅, ⋅) is continuous, K is a closed subspace of V , so that one can write: V = K ⊕ K ⊥. One then has the following lemma.

Lemma 4.2.18

Let b(⋅, ⋅) be a continuous sesquilinear form on V × Q. The three assertions are equivalent:

  • there exists β > 0 such that b(⋅, ⋅) satisfies (4.10);

  • the operator \(B^\dagger\) is a bijective mapping from Q onto K 0 , and moreover,

    $$\displaystyle \begin{aligned} \exists \beta > 0, \quad \forall q\in Q,\ \|B^\dagger q\|{}_{V'}\ge\beta\|q\|{}_Q; \end{aligned}$$
  • the operator B is a bijective mapping from K ⊥ onto Q′, and moreover,

    $$\displaystyle \begin{aligned} \exists \beta > 0, \quad \forall v\in K^\perp,\ \|Bv\|{}_{Q'}\ge\beta\|v\|{}_V. \end{aligned}$$

We finally reach …

Theorem 4.2.19 (Babuska-Brezzi [25, 63])

Let a, b, f, g be defined as above. Assume that

  1. (i)

    the sesquilinear form a is coercive on K × K;

  2. (ii)

    the sesquilinear form b satisfies an inf-sup condition.

Then, Problem (4.7) admits one, and only one, solution (u, p) in V × Q. Moreover, there exists a constant C independent of f and g such that \((\|u\|{ }_V+\|p\|{ }_Q)\le C\,(\|f\|{ }_{V'}+\|g\|{ }_{Q'})\).

There exist variations of this result, which rely on weaker assumptions than the coerciveness of the form a on K × K and the inf-sup condition on b(⋅, ⋅): we refer the reader to [49].

Proof (of Theorem 4.2.19)

Let us call α > 0 and β > 0, respectively, a coercivity constant for a(⋅, ⋅) on K × K (cf. Definition 4.2.6) and an inf-sup constant for b(⋅, ⋅) (cf. (4.10)).

  1. 1.

    Uniqueness is proven as follows. Assume that two solutions (u 1, p 1) and (u 2, p 2) to Problem (4.7) exist, then (δu, δp) = (u 1 − u 2, p 1 − p 2) solves

    $$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (\delta u,\delta p)\in V\times Q\ \mathit{such \ that}\\ \forall v\in V,\ a(\delta u,v) + \overline{b(v,\delta p)}=0,\\ \forall q\in Q,\ b(\delta u,q)=0. \end{array} \right. \end{aligned}$$

    The second equation states that δu belongs to K. Next, using v = δu in the first equation leads to a(δu, δu) = 0, so that δu = 0, thanks to hypothesis (i). It follows that one has, for all v ∈ V , \(\overline {b(v,\delta p)}=0\) or, in other words, \(B^\dagger (\delta p) = 0\). Thanks to hypothesis (ii) and Lemma 4.2.18, one gets that δp = 0.

  2. 2.

    On the other hand, again using hypothesis (ii) and Lemma 4.2.18, we know that

    $$\displaystyle \begin{aligned} \exists! u_\perp\in K^\perp,\ Bu_\perp=g\ \mbox{and}\ \beta\|u_\perp\|{}_V\le\|g\|{}_{Q'}. \end{aligned}$$

    (Note that \(Bu_\perp = g\) can be rewritten: \(\forall q\in Q,\ b(u_\perp,q)=\langle g,q\rangle\).)

  3. 3.

    Then, according to hypothesis (i), one can solve

    $$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u_\parallel\in K\ \mathit{such \ that}\\ \forall v_\parallel\in K,\ a(u_\parallel,v_\parallel)=\langle f,v_\parallel\rangle - a(u_\perp,v_\parallel)\,, \end{array} \right. \end{aligned}$$

    with the help of the Lax-Milgram Theorem 4.2.8. Its solution \(u_\parallel\) exists and is unique, and moreover,

    $$\displaystyle \begin{aligned} \alpha\,\|u_\parallel\|{}_V \le \left\{\|f\|{}_{V'}+|||a|||\,\|u_\perp\|{}_{V}\right\} \le\left\{\|f\|{}_{V'}+|||a|||\,\beta^{-1}\|g\|{}_{Q'}\right\}. \end{aligned}$$
  4. 4.

    Let us aggregate steps 2. and 3. Introduce the candidate solution

    $$\displaystyle \begin{aligned} u=u_\parallel + u_\perp, \end{aligned} $$
    (4.11)

    and consider v ∈ V , which we split as \(v = v_\parallel + v_\perp\), with \((v_\parallel ,v_\perp )\in K\times K^\perp \). According to the definition of \(u_\parallel\), one finds that

    $$\displaystyle \begin{aligned} \langle f,v\rangle - a(u,v) = \langle f,v_\perp\rangle - a(u,v_\perp). \end{aligned}$$

    Then, h ∈ V′ defined by \(\langle h,v\rangle = \langle f,v_\perp \rangle - a(u,v_\perp )\) actually belongs to the polar set K 0 of K. Thanks again to Lemma 4.2.18, we obtain that

    $$\displaystyle \begin{aligned} \exists! p\in Q,\ B^\dagger p=h\ \mbox{and}\ \beta\|p\|{}_Q \le \|h\|{}_{V'} \le \left\{\|f\|{}_{V'}+|||a|||\,\|u\|{}_{V}\right\}. \end{aligned} $$
    (4.12)

    (Note that \(B^\dagger p = h\) can be rewritten: \(\forall v\in V,\ \overline {b(v,p)}=\langle h,v\rangle \).)

  5. 5.

    Existence of a solution to Problem (4.7) is a consequence of the previous steps. Consider u and p as in (4.11) and (4.12), respectively. Then, for all v ∈ V , and for all q ∈ Q, one finds

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle a(u,v) + \overline{b(v,p)} = a(u,v) + \langle h,v\rangle = \langle f,v\rangle\,, \\ &\displaystyle &\displaystyle b(u,q) = b(u_\perp,q) = \langle g,q\rangle\,. \end{array} \end{aligned} $$

    Moreover, one has the estimates

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \|u\|{}_{V}\le \alpha^{-1}\|f\|{}_{V'}+\beta^{-1}\left\{1+|||a|||\,\alpha^{-1}\right\}\|g\|{}_{Q'}\,, \\ &\displaystyle &\displaystyle \|p\|{}_Q\le\beta^{-1}\left\{\|f\|{}_{V'}+|||a|||\,\|u\|{}_{V}\right\}\,. \end{array} \end{aligned} $$

Remark 4.2.20

We carried out the proof over five steps. This process can be reproduced in other situations, such as time-dependent, or time-harmonic, problems with constraints.
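A classical application of Theorem 4.2.19 (recalled here as an illustration): the Stokes problem. On a bounded, connected Lipschitz domain Ω of \(\mathbb {R}^d\), take \(V=(H^1_0(\varOmega ))^d\), \(Q=\{q\in L^2(\varOmega )\ :\ \int _\varOmega q\,d\varOmega =0\}\), and

$$\displaystyle \begin{aligned} a(\boldsymbol{u},\boldsymbol{v})=\int_\varOmega\nabla\boldsymbol{u}:\overline{\nabla\boldsymbol{v}}\,d\varOmega,\qquad b(\boldsymbol{v},q)=-\int_\varOmega\mathrm{div}\,\boldsymbol{v}\,\overline{q}\,d\varOmega. \end{aligned}$$

Here, K is the subspace of divergence-free fields, the form a is coercive on V (by the Poincaré inequality), hence on K × K, and the fact that b satisfies the inf-sup condition (4.10) is a classical, non-trivial result; the solution (u, p) is the velocity-pressure couple.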

We have so far defined a series of well-posed static problems, under ad hoc assumptions. To bridge the gap with time-harmonic problems (see Sect. 1.2.1), let us briefly consider forms associated with Fredholm operators of index 0.

Definition 4.2.21 (Well-Posedness, Fredholm)

Problem (4.3) is well-posed in the Fredholm sense if the associated operator of Problem (4.4) is a Fredholm operator of index 0.

In this setting, one may introduce a weak stability condition, respectively a weak \(\mathbb {T}\)-coercivity condition.

Definition 4.2.22

Let a(⋅, ⋅) be a continuous sesquilinear form on V × W.

It verifies a weak stability condition if

$$\displaystyle \begin{aligned} \begin{array}{l} \exists C\in \mathcal{L}(V,W)\mbox{ compact},\ \exists\alpha'>0,\beta'\ge0,\ \forall v\in V,\\ \hskip 20mm\displaystyle \sup_{w\in W\setminus\{0\}}\frac{|a(v,w)|}{\|w\|{}_{W}}\ge\alpha'\,\|v\|{}_V-\beta'\|Cv\|{}_W. \end{array} \end{aligned} $$

Definition 4.2.23

Let a(⋅, ⋅) be a continuous sesquilinear form on V × W. It is weakly \(\mathbb {T}\) -coercive if

$$\displaystyle \begin{aligned} \begin{array}{l} \exists \mathbb{T}\in\mathcal{L}(V,W)\mbox{ bijective},\ \exists C\in\mathcal{L}(V,W)\mbox{ compact},\ \exists\underline{\alpha}>0,\underline{\beta}\ge0,\ \forall v\in V,\\ \hskip 20mm\displaystyle|a(v,\mathbb{T}v)|\ge\underline{\alpha}\,\|v\|{}_V^2-\underline{\beta}\,\|Cv\|{}_W^2. \end{array} \end{aligned}$$

Regarding the weak stability and weak \(\mathbb {T}\)-coercivity conditions, one may prove the results below for Hermitian forms.

Proposition 4.2.24

When V = W, let a(⋅, ⋅) be a sesquilinear, continuous and Hermitian form on V × V . For Problem (4.3) to be well-posed in the Fredholm sense:

  • a necessary and sufficient condition is that the form a verifies a weak stability condition;

  • a necessary and sufficient condition is that the form a is weakly \(\mathbb {T}\) -coercive.

4.3 Time-Dependent Problems

Up to now, the abstract framework we have developed allows us to solve the so-called static problems in practical applications, that is, problems in which the function spaces of solutions and of test functions, and the (anti)linear forms, depend only on the space variable. We turn now to problems that include some explicit dependence with respect to both the time and space variables (t, x). Within the framework of the theory we recall hereafter, the solution u is not considered directly as a function of (t, x). Instead, it is a function of t (and, as such, written as u(t)) with values in a function space that depends solely on the space variable:

$$\displaystyle \begin{aligned} u: t\mapsto u(t),\quad u(t): {\boldsymbol{x}} \mapsto u(t,{\boldsymbol{x}}). \end{aligned}$$

4.3.1 Problems Without Constraints

Let A be an unbounded operator of V with domain D(A), u 0 ∈ V and \(f:\mathbb {R}^+\rightarrow V\). Then, the first-order time-dependent problem to be solved is formulated as

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u \ \mathit{such \ that}\\ \displaystyle\frac{du}{dt} + Au = f,\quad t>0,\\ u(0)=u_0. \end{array} \right. \end{aligned} $$
(4.13)

Above, u(0) = u 0 is called an initial condition .

We now introduce the important notion of strong solutions with respect to the time variable t. Here, we mostly follow the teaching material of Joly [144].

Definition 4.3.1

u is a strong solution to Problem (4.13), provided that

  1. (i)

    \(u\in C^1(\mathbb {R}^+;V)\);

  2. (ii)

    ∀t ≥ 0, u(t) ∈ D(A) and, moreover, \(u\in C^0(\mathbb {R}^+,D(A))\);

  3. (iii)

    ∀t > 0, \(\frac{du}{dt}(t)+Au(t)=f(t)\) in V , and u(0) = u 0.

According to the requested regularity in time, we note that a strong solution satisfies Problem (4.13) in the classical sense. Also, provided that f belongs to \(C^0(\mathbb {R}^+_*;V)\), conditions (i) and (iii) imply that \(u\in C^0(\mathbb {R}^+_*;D(A))\), when D(A) is endowed with its graph norm. Then, one has the fundamental result below.

Theorem 4.3.2 (Hille-Yosida [62, 171, 207])

Let A be an unbounded operator of V with domain D(A). Assume that there exists \(\mu \in \mathbb {R}\) such that A + μI V is maximal monotone. Then, given \(f\in C^1(\mathbb {R}^+;V)\) and u 0 ∈ D(A), Problem (4.13) admits one, and only one, strong solution in the sense of Definition 4.3.1 . In addition, the solution can be bounded as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall t\in\mathbb{R}^+,\ \|u(t)\|{}_V \le \|u_0\|{}_V + \int_0^t \|f(s)\|{}_V\,ds, \\ &\displaystyle &\displaystyle \forall t\in\mathbb{R}^+,\ \|\frac{du}{dt}(t)\|{}_V \le \|Au_0\|{}_V + \|f(0)\|{}_V + \int_0^t \|\frac{df}{dt}(s)\|{}_V\,ds. \end{array} \end{aligned} $$

The proof of this result is based on semi-group theory.

Remark 4.3.3

One can choose to solve the first-order problem on the time interval ]0, T[, with T > 0 given. In this case, with the same assumptions about the operator A, one easily finds that

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} C^1([0,T];V)\times D(A) & \rightarrow & C^0([0,T];D(A)) \times C^0([0,T];V) \\ (f,u_0) & \mapsto & (u,u') \end{array} \right. \end{aligned}$$

is continuous (with a constant that depends on T).
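The model application (standard, and stated here under assumptions of ours): the heat equation. Take V = L 2(Ω), with Ω a bounded domain, and

$$\displaystyle \begin{aligned} A=-\varDelta,\quad D(A)=\{v\in H^1_0(\varOmega)\ :\ \varDelta v\in L^2(\varOmega)\}:\quad (Av,v)_{V}=\|\nabla v\|{}_{L^2(\varOmega)}^2\ge0, \end{aligned}$$

so A is monotone; the surjectivity of \(i_{D(A)\rightarrow V}+A\) follows from the Lax-Milgram Theorem 4.2.8, applied to the form \((v,w)\mapsto \int _\varOmega (\nabla v\cdot \overline {\nabla w}+v\,\overline {w})\,d\varOmega \). Theorem 4.3.2 (with μ = 0) then yields a unique strong solution, given \(f\in C^1(\mathbb {R}^+;L^2(\varOmega ))\) and u 0 ∈ D(A).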

It is also possible to define strong solutions in a slightly weaker sense (see [62]). Basically, it is no longer required that the initial data belongs to D(A). As a consequence, the assumption about u 0 can be relaxed to u 0 ∈ V in the corresponding version of the Hille-Yosida Theorem. In this case, items (i) and (ii) of Definition 4.3.1 are modified as follows:

  1. (i)’

    \(u\in C^1(\mathbb {R}^+_*;V)\cap C^0(\mathbb {R}^+;V)\);

  2. (ii)’

    ∀t > 0, u(t) ∈ D(A) and, moreover, \(u\in C^0(\mathbb {R}^+_*,D(A))\).

For that, one can consider self-adjoint operators (other possibilities are described, for instance, in [92]).

Theorem 4.3.4 (Hille-Yosida [62])

Let A be an unbounded and self-adjoint operator of V with domain D(A). Assume that there exists \(\mu \in \mathbb {R}\) such that A + μI V is maximal monotone. Then, given \(f\in C^1(\mathbb {R}^+;V)\) and u 0 ∈ V , Problem (4.13) admits one, and only one, strong solution in the sense of Definition 4.3.1 with items (i)’-(ii)’-(iii). In addition, the solution can be bounded as follows:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall t\in\mathbb{R}^+,\ \|u(t)\|{}_V \le \|u_0\|{}_V + \int_0^t \|f(s)\|{}_V\,ds, \\ &\displaystyle &\displaystyle \forall t\in\mathbb{R}^+_*,\ \|\frac{du}{dt}(t)\|{}_V \le \frac{1}{t}\|u_0\|{}_V + \|f(0)\|{}_V + \int_0^t \|\frac{df}{dt}(s)\|{}_V\,ds. \end{array} \end{aligned} $$

Moreover, if f = 0, one has

$$\displaystyle \begin{aligned} \forall k,l\in\mathbb{N},\ u \in C^k(\mathbb{R}^+_*;D(A^l)). \end{aligned}$$

The last result is called a regularizing effect. Also, it is possible that

$$\displaystyle \begin{aligned} \lim_{t\rightarrow0^+}\|u'(t)\|{}_V=+\infty. \end{aligned}$$

Remark 4.3.5

If one has \(f\in C^0(\mathbb {R}^+;V)\cap L^1(\mathbb {R}^+;D(A))\), then Problem (4.13) still has a strong solution. In addition, one has

$$\displaystyle \begin{aligned} \forall t\in\mathbb{R}^+,\ \|\frac{du}{dt}(t)\|{}_V \le \|Au_0\|{}_V + \|f(t)\|{}_V + \int_0^t \|A f(s)\|{}_V\,ds . \end{aligned}$$

On the other hand, if one has only \(f\in C^0(\mathbb {R}^+;V)\), then it is no longer guaranteed that this time-dependent problem has a strong solution (cf. Chapter XVII of [92]).

A third variant of a strong solution appears in a slightly different context, namely, when the operator A is skew-adjoint. Generally speaking, this feature corresponds to an energy conservation property of the evolution problem (4.13); one can thus define solutions for negative, as well as positive, values of time t, i.e., solve the “backward” problem (for t < 0), as well as the forward one. In this case, we take the following variants of the items in Definition 4.3.1:

  1. (i)”

    \(u\in C^1(\mathbb {R};V)\);

  2. (ii)”

    \(\forall t\in \mathbb {R}\), u(t) ∈ D(A) and, moreover, \(u\in C^0(\mathbb {R},D(A))\);

  3. (iii)”

    \(\forall t\in \mathbb {R}\), \(\frac{du}{dt}(t)+Au(t)=f(t)\) in V , and u(0) = u 0.

There is no regularizing effect in this case, i.e., the initial data must belong to the domain of A. On the other hand, the self-adjointness assumption of Theorem 4.3.4 is linked to energy dissipation, which accounts for the regularizing effect, and makes the backward problem ill-posed.

The corresponding result is now stated.

Theorem 4.3.6 (Stone [207])

Let A be an unbounded and skew-adjoint operator of V with domain D(A). Then, given u 0 ∈ D(A) and either (a) \(f\in C^1(\mathbb {R};V)\) or (b) \(f\in C^0(\mathbb {R};V)\cap L^1(\mathbb {R};D(A))\) , Problem (4.13) admits one, and only one, strong solution in the sense of Definition 4.3.1 , with items (i)”–(ii)”–(iii)”. In addition, the solution can be bounded as follows, according to the assumptions (a) or (b):

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall t\in\mathbb{R},\ \|u(t)\|{}_V \le \|u_0\|{}_V + \int_0^t \|f(s)\|{}_V\,ds, \\ \mathit{\mbox{(a)}} &\displaystyle &\displaystyle \forall t\in\mathbb{R},\ \|\frac{du}{dt}(t)\|{}_V \le \|Au_0\|{}_V + \|f(0)\|{}_V + \int_0^t \|\frac{df}{dt}(s)\|{}_V\,ds ,\\ \mathit{\mbox{(b)}} &\displaystyle &\displaystyle \forall t\in\mathbb{R},\ \|\frac{du}{dt}(t)\|{}_V \le \|Au_0\|{}_V + \|f(t)\|{}_V + \int_0^t \|A f(s)\|{}_V\,ds . \end{array} \end{aligned} $$

The proof once more relies upon semi-group theory. Furthermore, one can prove the following causality result.

Proposition 4.3.7

Assume the hypotheses of Theorem 4.3.6 . Let f 1, f 2 satisfy either (a) or (b), and u 1, u 2 be the corresponding solutions to (4.13). If f 1(t) = f 2(t) for a.e. t ≥ 0, then u 1 and u 2 also coincide for a.e. t ≥ 0. As a consequence, if one is interested in the forward problem only, it is not necessary to know the values of the r.h.s. for t < 0.
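A simple example (ours): the transport equation. Take V = L 2(ℝ) and

$$\displaystyle \begin{aligned} Au=\frac{du}{dx},\quad D(A)=H^1(\mathbb{R}):\quad A^*=-A, \end{aligned}$$

so Stone's Theorem applies to \(\partial u/\partial t+\partial u/\partial x=0\) (f = 0). Given u 0 ∈ H 1(ℝ), the strong solution is u(t) : x ↦ u 0(x − t), defined for all \(t\in \mathbb {R}\), and \(\|u(t)\|{}_{L^2(\mathbb {R})}=\|u_0\|{}_{L^2(\mathbb {R})}\) for all t, which illustrates the energy conservation mentioned above.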

It turns out that one can apply this theory (Theorem 4.3.2) to solve second-order time-dependent problems and find strong solutions of such problems. Such problems read

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ \mathtt{u}\ \mathit{such \ that}\\ \displaystyle\frac{d^2\mathtt{u}}{dt^2} + \mathtt{A}\mathtt{u}= \mathtt{f},\quad t>0\,;\\ \mathtt{u}(0)=\mathtt{u}_0\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}}{dt}(0)=\mathtt{u}_1. \end{array} \right. \end{aligned} $$
(4.14)

Above, u(0) = u 0 and u′(0) = u 1 are the two initial conditions.

Here, one needs to consider two Hilbert spaces:

  • \(\mathcal {H}\), a Hilbert space, with scalar product \((\cdot ,\cdot )_{\mathcal {H}}\) and norm \(\|\cdot \|{ }_{\mathcal {H}}\);

  • \(\mathcal {V}\), a Hilbert space, with scalar product \((\cdot ,\cdot )_{\mathcal {V}}\) and norm \(\|\cdot \|{ }_{\mathcal {V}}\);

  • the imbedding \(i_{\mathcal {V}\rightarrow \mathcal {H}}\) is continuous;

  • \(\mathcal {V}\) is dense in \(\mathcal {H}\).

The operator A is defined via a sesquilinear continuous and Hermitian form a defined on \(\mathcal {V}\times \mathcal {V}\), which fulfills the following property:

$$\displaystyle \begin{aligned} \exists\nu\in\mathbb{R}^+,\ \exists\alpha\in\mathbb{R}^+_*,\ \forall \mathtt{v}\in \mathcal{V},\ a(\mathtt{v},\mathtt{v}) + \nu\,\|\mathtt{v}\|{}_{\mathcal{H}}^2 \ge \alpha\,\|\mathtt{v}\|{}_{\mathcal{V}}^2. \end{aligned} $$
(4.15)

Remark 4.3.8

Note that one can define another scalar product on \(\mathcal {V}\), with associated norm \({ }_2\|\cdot \|{ }_{\mathcal {V}}\) equivalent to \(\|\cdot \|{ }_{\mathcal {V}}\) in \(\mathcal {V}\). It is given by

$$\displaystyle \begin{aligned} \forall \mathtt{v},\mathtt{w}\in\mathcal{V},\ {}_2(\mathtt{v},\mathtt{w})_{\mathcal{V}} = a(\mathtt{v},\mathtt{w}) + \nu\,(\mathtt{v},\mathtt{w})_{\mathcal{H}}. \end{aligned}$$

Then, one can introduce the unbounded operator A of \(\mathcal {H}\) with domain D(A)

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \displaystyle D(\mathtt{A})=\{\mathtt{v}\in\mathcal{V}\ :\ \exists h\in\mathcal{H},\ \forall \mathtt{w}\in\mathcal{V},\ a(\mathtt{v},\mathtt{w}) = (h,\mathtt{w})_{\mathcal{H}}\};\\ \forall \mathtt{v}\in D(\mathtt{A}),\ \forall \mathtt{w}\in\mathcal{V},\ (\mathtt{A} \mathtt{v},\mathtt{w})_{\mathcal{H}} = a(\mathtt{v},\mathtt{w}). \end{array} \right. \end{aligned} $$
(4.16)

Definition 4.3.9

u is a strong solution to Problem (4.14), provided that

  1. (i)

    \(\mathtt {u}\in C^2(\mathbb {R}^+;\mathcal {H}) \cap C^1(\mathbb {R}^+;\mathcal {V})\);

  2. (ii)

    ∀t ≥ 0, u(t) ∈ D(A) and, moreover, \(\mathtt {u}\in C^0(\mathbb {R}^+,D(\mathtt {A}))\);

  3. (iii)

    ∀t > 0, u″(t) + Au(t) = f(t) in \(\mathcal {H}\), u(0) = u 0 and u′(0) = u 1.

From this point on, one can prove an equivalence result between the existence of u as a strong solution to Problem (4.14) and the existence of a strong solution to a companion—first-order time-dependent—problem. We give the main steps of the process, since it will be of use later on for solving the time-dependent Maxwell equations, written as wave equations with constraints (cf. Sect. 1.5.3). For the moment, we adopt the following point of view. To determine ad hoc conditions that ensure the existence and uniqueness of a strong solution to Problem (4.14), let us use the Hille-Yosida Theorem 4.3.2. To that aim, introduce \(V = \mathcal {V}\times \mathcal {H}\). Its elements are denoted by v = (v, h). It is a Hilbert space, with the scalar product \((v,\tilde v)_V = { }_2(\mathtt {v},\tilde {\mathtt {v}})_{\mathcal {V}} + (h,\tilde h)_{\mathcal {H}}\). Next, let A be an unbounded operator of V , defined by

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \displaystyle D(A)=D(\mathtt{A})\times\mathcal{V};\\ \forall v=(\mathtt{v},h)\in D(A),\ Av = (-h,\mathtt{A}\mathtt{v}). \end{array} \right. \end{aligned}$$

The data are equal to u 0 = (u 0, u 1) and f = (0, f).

Finally, we are in a position to consider Problem (4.13) with V , A, f and u 0 as above. One obtains the following simple result…

Proposition 4.3.10

Assume that u is a strong solution to Problem (4.14); then, u = (u, u′) is a strong solution to Problem (4.13).

Conversely, assume that u = (u, h) is a strong solution to Problem (4.13); then, u is a strong solution to Problem (4.14).

As a conclusion, one can exhibit sufficient conditions to ensure the existence, uniqueness and continuous dependence of the solution to the second-order time-dependent problem. Indeed, according to the definition of the scalar product on V , maximal monotony of A + μI V stems from property (4.15), with the admissible choice \(\mu \ge \sqrt {\nu }/2\).

Theorem 4.3.11

Let a(⋅, ⋅) be a sesquilinear, continuous and Hermitian form defined on \(\mathcal {V}\times \mathcal {V}\) , which fulfills property (4.15). Let the operator A be defined as in (4.16). Then, given \(\mathtt {f}\in C^1(\mathbb {R}^+;\mathcal {H})\) , u 0 ∈ D(A) and \(\mathtt {u}_1\in \mathcal {V}\) , Problem (4.14) admits one, and only one, strong solution in the sense of Definition 4.3.9 . In addition, for any t ≥ 0, the norms \(\|\mathtt {u}(t)\|{ }_{\mathcal {V}}\) , \(\|\mathtt {u}'(t)\|{ }_{\mathcal {V}}\) and \(\|\mathtt {u}''(t)\|{ }_{\mathcal {H}}\) can be bounded by (homogeneous) expressions involving only the norms of the data.
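The archetypal example, stated here under assumptions of ours: the wave equation. With Ω a bounded domain, take \(\mathcal {V}=H^1_0(\varOmega )\), \(\mathcal {H}=L^2(\varOmega )\) and

$$\displaystyle \begin{aligned} a(\mathtt{v},\mathtt{w})=\int_\varOmega\nabla\mathtt{v}\cdot\overline{\nabla\mathtt{w}}\,d\varOmega. \end{aligned}$$

Property (4.15) holds with ν = 0, thanks to the Poincaré inequality; the operator A of (4.16) is −Δ, with domain \(\{\mathtt {v}\in H^1_0(\varOmega )\ :\ \varDelta \mathtt {v}\in L^2(\varOmega )\}\); and Theorem 4.3.11 provides a unique strong solution to \(\mathtt {u}''-\varDelta \mathtt {u}=\mathtt {f}\), given \(\mathtt {f}\in C^1(\mathbb {R}^+;L^2(\varOmega ))\), u 0 ∈ D(A) and \(\mathtt {u}_1\in H^1_0(\varOmega )\).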

So far, we have addressed the well-posedness of our first- and second-order time-dependent problems, based on the concept of strong solutions.

There exists an alternative technique for second-order time-dependent problems that relies on weak solutions. It is usually referred to as the Lions-Magenes theory [157]. It relies mainly on mathematical tools such as distributions, and Lebesgue and Sobolev spaces. The starting point is still Problem (4.14), which will be re-interpreted below. Here, the Hilbert space \(\mathcal {H}\) is usually considered as the pivot space, so that \(\mathcal {V}\subset \mathcal {H}\subset \mathcal {V}'\).

Consider T > 0 and assume that u is a strong solution to Problem (4.14) on the time interval ]0, T[, in the sense of Definition 4.3.9. Then, since \(\mathcal {V}\) is dense in \(\mathcal {H}\), one gets the series of equivalent statements:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall t\in]0,T[,\ \frac{d^2\mathtt{u}}{dt^2}(t) + \mathtt{A}\mathtt{u}(t) = \mathtt{f}(t)\mbox{ in }\mathcal{H} \\ &\displaystyle \Longleftrightarrow&\displaystyle \forall t\in]0,T[,\ \forall\mathtt{v}\in\mathcal{V},\ (\frac{d^2\mathtt{u}}{dt^2}(t),\mathtt{v})_{\mathcal{H}} + (\mathtt{A}\mathtt{u}(t),\mathtt{v})_{\mathcal{H}} = (\mathtt{f}(t),\mathtt{v})_{\mathcal{H}} \\ &\displaystyle \Longleftrightarrow&\displaystyle \forall t\in]0,T[,\ \forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}} + a(\mathtt{u}(t),\mathtt{v}) = (\mathtt{f}(t),\mathtt{v})_{\mathcal{H}}.\vspace{-3pt} \end{array} \end{aligned} $$

One defines weak solutions, for which the last statement is not satisfied for all t in ]0, T[, but in the sense of distributions instead. In other words, the weak solution, still denoted by u, satisfies the weaker statement:

$$\displaystyle \begin{aligned} \forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}} + a(\mathtt{u}(t),\mathtt{v}) = (\mathtt{f}(t),\mathtt{v})_{\mathcal{H}}\mbox{ in }\mathcal{ D}'(]0,T[).\end{aligned} $$
(4.17)

Definition 4.3.12

u is a weak solution to Problem (4.14) on the time interval ]0, T[, provided that

  1. (i)

    \(u\in L^2(0,T;\mathcal {V})\) and \(\mathtt {u}'\in L^2(0,T;\mathcal {H})\);

  2. (ii)

    \(\forall \mathtt {v}\in \mathcal {V}\), \(\big ((\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )'' + a(\mathtt {u}(t),\mathtt {v}) = (\mathtt {f}(t),\mathtt {v})_{\mathcal {H}}\) in \(\mathcal {D}'(]0,T[)\), u(0) = u 0 and u′(0) = u 1.

We note that Problem (4.14) must be re-interpreted when weak solutions are sought. Indeed, since u(t) belongs to \(\mathcal {V}\) instead of D(A)—in contrast to strong solutions (see Definition 4.3.9 (ii))—Au(t) has no meaning. For this reason, one instead introduces the bounded operator A w of \(\mathcal {L}(\mathcal {V},\mathcal {V}')\), defined by

$$\displaystyle \begin{aligned} \forall \mathtt{v},\mathtt{w}\in\mathcal{V},\ \langle \mathtt{A}_w\mathtt{v},\mathtt{w}\rangle_{\mathcal{V}}=a(\mathtt{v},\mathtt{w}). \end{aligned}$$

Thus, A w u(t) belongs to \(\mathcal {V}'\), and moreover, \(\mathtt {A}_w\mathtt {u}\in L^2(0,T;\mathcal {V}')\). So, when weak solutions to the second-order time-dependent Problem (4.14) are studied, the operator that acts on the solution is A w .

Theorem 4.3.13 (Lions-Magenes [157])

Assume that the sesquilinear, continuous and Hermitian form a fulfills property (4.15), and let the operator A w be defined as above. Then, given T > 0, \(\mathtt {f}\in L^2(0,T;\mathcal {H})\) , \(\mathtt {u}_0\in \mathcal {V}\) and \(\mathtt {u}_1\in \mathcal {H}\) , on the time interval ]0, T[, Problem (4.14) admits one, and only one, weak solution in the sense of Definition 4.3.12 . In addition,

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} L^2(0,T;\mathcal{H})\times \mathcal{V} \times \mathcal{H} & \rightarrow & C^0([0,T];\mathcal{V}) \times C^0([0,T];\mathcal{H}) \\ (\mathtt{f},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}') \end{array} \right. \end{aligned}$$

is continuous (with a constant that depends on T).

In other words, the well-posedness of second-order time-dependent problems also holds for weak solutions (under assumptions that are different from those introduced in the case of strong solutions).

Remark 4.3.14

Within the framework of the previous Theorem, a weak solution is such that \(\mathtt {A}_w\mathtt {u}\in C^0([0,T];\mathcal {V}')\). Since \(\mathtt {f}\in L^2(0,T;\mathcal {H})\), it follows that \(\mathtt {u}''\in L^2(0,T;\mathcal {V}')\). In particular, one can choose to rewrite \(\big ((\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )''\) as 〈u″(t), v〉, for all \(\mathtt {v}\in \mathcal {V}\).

For Maxwell’s equations, it is important to note that the notion of weak solutions can be extended to the slightly modified problem below. Introduce \({ }_2(\cdot ,\cdot )_{\mathcal {H}}\), a second scalar product on \(\mathcal {H}\), such that \({ }_2\|\cdot \|{ }_{\mathcal {H}}\) and \(\|\cdot \|{ }_{\mathcal {H}}\) are equivalent norms. Therefore, one can equip \(\mathcal {H}\) with \({ }_2\|\cdot \|{ }_{\mathcal {H}}\) without changing its topology; let us denote this space as \(\mathcal {H}_2\) to emphasize this point of view. Note that in the formulation of property (4.15), one can replace \(\|\cdot \|{ }_{\mathcal {H}}\) with \({ }_2\|\cdot \|{ }_{\mathcal {H}}\) (resulting in a different ν). Then, statement (4.17) is replaced by

$$\displaystyle \begin{aligned} \forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}\left\{{}_2(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}}\right\} + a(\mathtt{u}(t),\mathtt{v}) = (\mathtt{f}(t),\mathtt{v})_{\mathcal{H}}\mbox{ in }\mathcal{D}'(]0,T[), \end{aligned} $$
(4.18)

which defines a modified second-order time-dependent problem. Interestingly, one can prove that this modified problem is also well-posed.

Corollary 4.3.15

Let \(\mathtt {f}\in L^2(0,T;\mathcal {H})\) , \(\mathtt {u}_0\in \mathcal {V}\) and \(\mathtt {u}_1\in \mathcal {H}\) . The variational formulation (4.18) admits one, and only one, weak solution on the time interval ]0, T[, satisfying \((\mathtt {u},\mathtt {u}')\in C^0([0,T];\mathcal {V}) \times C^0([0,T];\mathcal {H})\) . In addition,

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} L^2(0,T;\mathcal{H})\times \mathcal{V} \times \mathcal{H} & \rightarrow & C^0([0,T];\mathcal{V}) \times C^0([0,T];\mathcal{H}) \\ (\mathtt{f},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}') \end{array} \right. \end{aligned}$$

is continuous (with a constant that depends on T).

Proof

Using Riesz’s Theorem in \(\mathcal {H}_2\), one can rewrite the r.h.s. of (4.18), which becomes:

$$\displaystyle \begin{aligned} \forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}\left\{{}_2(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}}\right\} + a(\mathtt{u}(t),\mathtt{v}) = {}_2(\mathtt{f}_{(2)}(t),\mathtt{v})_{\mathcal{H}}\mbox{ in }\mathcal{D}'(]0,T[).\end{aligned} $$
(4.19)

Of course, the functions of time with values in \(\mathcal {H}\) have the same regularity when seen as taking their values in \(\mathcal {H}_2\); and the norm of f (2) in \(L^2(0,T;\mathcal {H}_2)\) is bounded, above and below, by constant multiples of the norm of f in \(L^2(0,T;\mathcal {H})\). Applying Theorem 4.3.13 to the weak formulation (4.19), set in the spaces \(\mathcal {V}\) and \(\mathcal {H}_2\), gives us the result. ■

4.3.2 Problems with Constraints

We proceed by studying the existence of weak solutions for second-order time-dependent problems with constraints. Let \(\mathcal {Q}\) be a third Hilbert space, and let b(⋅, ⋅) be a continuous sesquilinear form on \(\mathcal {V}\times \mathcal {Q}\), with associated operators \(\mathtt {B}\) and \(\mathtt {B}^\dagger\) defined as in (4.8). We are now interested in solving

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (\mathtt{u},\mathtt{p})\ \mathit{such \ that}\\ \displaystyle\frac{d^2\mathtt{u}}{dt^2} + \mathtt{A}_w\mathtt{u} + \mathtt{B}^\dagger \mathtt{p}= \mathtt{f},\quad t>0,\\ \displaystyle \mathtt{B}\mathtt{u} = \mathtt{g},\quad t>0,\\ \mathtt{u}(0)=\mathtt{u}_0\;;\;\displaystyle\displaystyle\frac{d\mathtt{u}}{dt}(0)=\mathtt{u}_1. \end{array} \right.\end{aligned} $$
(4.20)

Next, we define weak solutions of such a problem on a time interval ]0, T[.

Definition 4.3.16

(u, p) is a weak solution to Problem (4.20) on the time interval ]0, T[, provided that

  1. (i)

    \(\mathtt {u}\in C^1([0,T];\mathcal {H}) \cap C^0([0,T];\mathcal {V})\);

  2. (ii)

    \(\mathtt {p}\in C^0([0,T];\mathcal {Q})\);

  3. (iii)

    \(\forall \mathtt {v}\in \mathcal {V}\), \(\big ((\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )'' + a(\mathtt {u}(t),\mathtt {v}) + \overline {b(\mathtt {v},\mathtt {p}(t))} = (\mathtt {f}(t),\mathtt {v})_{\mathcal {H}}\) in \(\mathcal {D}'(]0,T[)\), u(0) = u 0 and u′(0) = u 1;

  4. (iv)

    ∀t ∈ [0, T], \(\forall \mathtt {q}\in \mathcal {Q}\), b(u(t), q) = 〈g(t), q〉.

As we are mainly interested in solving Maxwell’s equations, we shall replace \(\big ((\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )''\) with \(\big ({ }_2(\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )''\) in (iii). As a consequence, Problem (4.20) becomes

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (\mathtt{u},\mathtt{p})\ \mathit{such \ that}\\ \displaystyle\forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}\{{}_2(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}}\} + a(\mathtt{u}(t),\mathtt{v})\\ \displaystyle \hskip 34.82mm + \overline{b(\mathtt{v},\mathtt{p}(t))} = (\mathtt{f}(t),\mathtt{v})_{\mathcal{H}} \mbox{ in }\mathcal{D}'(]0,T[), \\ \displaystyle\forall t\in[0,T],\ \forall\mathtt{q}\in\mathcal{Q},\ b(\mathtt{u}(t),\mathtt{q}) = \langle\mathtt{g}(t),\mathtt{q}\rangle\,;\\ \mathtt{u}(0)=\mathtt{u}_0\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}}{dt}(0)=\mathtt{u}_1. \end{array} \right. \end{aligned} $$
(4.21)

To analyse this problem, we shall introduce some definitions, which also serve in studying the associated discrete problems [17]. First, we introduce \(\mathcal {K}\), the kernel of b(⋅, ⋅) (which is a closed subspace of \(\mathcal {V}\)),

$$\displaystyle \begin{aligned} \mathcal{K} = \{\mathtt{v}\in\mathcal{V}\ :\ \forall \mathtt{q}\in \mathcal{Q},\ b(\mathtt{v},\mathtt{q})=0\}, \end{aligned}$$

its polar set \(\mathcal {K}^0 \subset \mathcal {V}'\), and its orthogonal \(\mathcal {K}^\perp \) in \(\mathcal {V}\). We still assume that the property (4.15) holds; thus, we take a priori the orthogonality in the sense of the equivalent scalar product \({ }_2(\cdot ,\cdot )_{\mathcal {V}} = a(\cdot ,\cdot ) + \nu \,(\cdot ,\cdot )_{\mathcal {H}}\) or \(a(\cdot ,\cdot ) + \nu _2\,{ }_2(\cdot ,\cdot )_{\mathcal {H}}\) (see Remark 4.3.8). Nevertheless, we shall need the following hypothesis to prove the well-posedness of the constrained formulations.

Definition 4.3.17

The spaces \(\mathcal {K}\) and \(\mathcal {K}^\perp \) satisfy a double orthogonality property in \(\mathcal {V}\) and \(\mathcal {H}\) (respectively \(\mathcal {H}_2\)) if:

$$\displaystyle \begin{aligned} \forall (\mathtt{v}_\parallel,\mathtt{v}_\perp)\in \mathcal{K}\times\mathcal{K}^\perp,\ a(\mathtt{v}_\parallel,\mathtt{v}_\perp)=0 \mbox{ and } (\mathtt{v}_\parallel,\mathtt{v}_\perp)_{\mathcal{H}} = 0, \mbox{ respectively } {}_2(\mathtt{v}_\parallel,\mathtt{v}_\perp)_{\mathcal{H}} = 0. \end{aligned}$$

This notion is of fundamental importance in addressing the solution of the time-dependent Maxwell equations. The proof of the following Lemma is left to the reader.

Lemma 4.3.18

Let \(\mathcal {L}\) be the closure of \(\mathcal {K}\) in \(\mathcal {H}\) , and \(\mathcal {L}^\perp \) its orthogonal in \(\mathcal {H}\) . If \(\mathcal {V}\) is dense in \(\mathcal {H}\) , and the double orthogonality property holds for \(\mathcal {K}\) and \(\mathcal {K}^\perp \) in \(\mathcal {V}\) and \(\mathcal {H}\) , then \(\mathcal {L}^\perp \) is the closure of \(\mathcal {K}^\perp \) in \(\mathcal {H}\).

Thus, any \(\mathtt {z}\in \mathcal {H}\) can be split as \(\mathtt{z} = \mathtt{z}_\parallel + \mathtt{z}_\perp\), with \((\mathtt {z}_\parallel ,\mathtt {z}_\perp )_{\mathcal {H}} = 0\); if \(\mathtt {z}\in \mathcal {V}\), this decomposition coincides with that in \(\mathcal {K}\times \mathcal {K}^\perp \). Of course, one can replace \(\mathcal {H}\) with \(\mathcal {H}_2\), i.e., the scalar product \((\cdot ,\cdot )_{\mathcal {H}}\) with \({ }_2(\cdot ,\cdot )_{\mathcal {H}}\), in the above Lemma.
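This notion may be illustrated on a model example, in the spirit of the Maxwell setting studied later in this book; the spaces below are chosen for illustration only, with Ω a bounded (say, Lipschitz) domain. Take

$$\displaystyle \begin{aligned} \mathcal{H} = L^2(\Omega)^3,\quad \mathcal{V} = H_0(\mathbf{curl};\Omega),\quad \mathcal{Q} = H^1_0(\Omega),\quad b(\mathtt{v},\mathtt{q}) = (\mathtt{v},\mathbf{grad}\,\mathtt{q})_{L^2(\Omega)}, \end{aligned}$$

together with \(a(\mathtt{u},\mathtt{v}) = (\mathbf{curl}\,\mathtt{u},\mathbf{curl}\,\mathtt{v})_{L^2(\Omega)}\). Then, \(\mathcal{K}\) consists of the divergence-free elements of \(\mathcal{V}\), one can check that \(\mathcal{K}^\perp = \mathbf{grad}\,H^1_0(\Omega)\), and the double orthogonality property holds: for \(\mathtt{v}_\parallel\in\mathcal{K}\) and \(\mathtt{v}_\perp = \mathbf{grad}\,\varphi\),

$$\displaystyle \begin{aligned} a(\mathtt{v}_\parallel,\mathtt{v}_\perp) = (\mathbf{curl}\,\mathtt{v}_\parallel,\mathbf{curl}\,\mathbf{grad}\,\varphi)_{L^2(\Omega)} = 0 \quad\text{and}\quad (\mathtt{v}_\parallel,\mathbf{grad}\,\varphi)_{L^2(\Omega)} = 0, \end{aligned}$$

the first equality because \(\mathbf{curl}\,\mathbf{grad} = 0\), and the second by the very definition of \(\mathcal{K}\).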

Theorem 4.3.19

Assume that the sesquilinear, continuous and Hermitian form a fulfills the property (4.15), and that the sesquilinear and continuous form b satisfies the inf-sup condition (4.10) for some β > 0. Finally, assume that the spaces \(\mathcal {K}\) and \(\mathcal {K}^\perp \) satisfy a double orthogonality property in \(\mathcal {V}\) and \(\mathcal {H}_2\) , as in Definition 4.3.17 . Let \(\mathcal {L}\) be the closure of \(\mathcal {K}\) in \(\mathcal {H}\).

Then, let T > 0, \(\mathtt {f}\in C^0([0,T];\mathcal {H})\) , \(\mathtt {g}\in C^2([0,T];\mathcal {Q}^{\prime })\) , \(\mathtt {u}_0\in \mathcal {V}\) and \(\mathtt {u}_1\in \mathcal {H}\) be given, such that the projection u 1⊥ of u 1 onto \(\mathcal {L}^\perp \) belongs to \(\mathcal {V}\) , and

$$\displaystyle \begin{aligned} \forall \mathtt{q}\in\mathcal{Q},\ b(\mathtt{u}_0,\mathtt{q}) = \langle \mathtt{g}(0),\mathtt{q} \rangle_{\mathcal{Q}},\ \mathit{\text{and}}\ b(\mathtt{u}_{1\perp},\mathtt{q}) = \langle \mathtt{g}'(0),\mathtt{q} \rangle_{\mathcal{Q}}. {} \end{aligned} $$
(4.22)

On the time interval ]0, T[, Problem (4.21) admits a unique weak solution in the sense of Definition 4.3.16 (with \(\big ({ }_2(\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )''\) in (iii)). In addition, the mapping

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} C^0([0,T];\mathcal{H})\times C^2([0,T];\mathcal{Q}^{\prime})\times\mathcal{V} \times \mathcal{H} & \rightarrow & C^0([0,T];\mathcal{V} \times \mathcal{H} \times \mathcal{Q}) \\ (\mathtt{f},\mathtt{g},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}',\mathtt{p}) \end{array} \right. \end{aligned}$$

is continuous (with a constant that depends on T).

Proof

Without loss of generality, we may assume \({ }_2(\cdot ,\cdot )_{\mathcal {H}} = (\cdot ,\cdot )_{\mathcal {H}}\), by reasoning as in Corollary 4.3.15 if necessary. Then, we proceed by analysis and synthesis. Suppose there exists a solution (u, p) in the sense of Definition 4.3.16; and split \(\mathtt {u}(t) = \mathtt {u}_\parallel (t) + \mathtt {u}_\perp (t) \in \mathcal {K} \oplus \mathcal {K}^\perp \) for all t ∈ [0, T]. As projections onto closed subspaces are continuous, it holds that \((\mathtt {u}_\parallel ,\mathtt {u}_\perp ) \in C^0([0,T];\mathcal {K} \times \mathcal {K}^\perp )\cap C^1([0,T];\mathcal {L}\times \mathcal {L}^\perp )\). Similarly, let \(\mathtt {u}_0 = \mathtt {u}_{0\parallel } + \mathtt {u}_{0\perp } \in \mathcal {K} \oplus \mathcal {K}^\perp \); \(\mathtt {u}_1 = \mathtt {u}_{1\parallel } + \mathtt {u}_{1\perp } \in \mathcal {L} \oplus \mathcal {L}^\perp \); \(\mathtt {f} = \mathtt {f}_\parallel + \mathtt {f}_\perp \in C^0([0,T];\mathcal {L}\times \mathcal {L}^\perp )\).

  1. 1.

    Item (iv) of Definition 4.3.16 is equivalent to: ∀t ∈ [0, T], \(\forall \mathtt {q}\in \mathcal {Q}\), \(b(\mathtt{u}_\perp(t), \mathtt{q}) = \langle\mathtt{g}(t), \mathtt{q}\rangle\). By Lemma 4.2.18, the operator B restricted to \(\mathcal{K}^\perp\) is an isomorphism onto \(\mathcal{Q}'\), so this equation has a unique solution \(\mathtt{u}_\perp(t)\in\mathcal{K}^\perp\) for each t. Moreover, one has \(\beta \|\mathtt {u}_\perp (t)\|{}_V\le \|\mathtt {g}(t)\|{}_{Q'}\), and similar inequalities link the first and second time derivatives of \(\mathtt{u}_\perp\) and g: the norm of \(\mathtt{u}_\perp\) in \(C^2([0,T];\mathcal {K}^\perp )\) is controlled by that of g in \(C^2([0,T];\mathcal {Q}^{\prime })\).

  2. 2.

    Then, let us take a test function \(\mathtt {v}_\parallel \in \mathcal {K}\) in item (iii). Using the definition of \(\mathcal {K}\) and the double orthogonality property, we obtain:

    $$\displaystyle \begin{aligned} \forall\mathtt{v}_\parallel\in\mathcal{K},\quad \frac{d^2}{dt^2}\{(\mathtt{u}_\parallel(t),\mathtt{v}_\parallel)_{\mathcal{H}}\} + a(\mathtt{u}_\parallel(t),\mathtt{v}_\parallel) = (\mathtt{f}_\parallel(t),\mathtt{v}_\parallel)_{\mathcal{H}} \mbox{ in }\mathcal{D}'(]0,T[).\end{aligned} $$

    But, by the same property, \( (\mathtt {u}_\parallel (t),\mathtt {v}_\perp )_{\mathcal {H}} = a(\mathtt {u}_\parallel (t),\mathtt {v}_\perp ) = 0\) for any \(\mathtt {v}_\perp \in \mathcal {K}^\perp \). Therefore, we can add an arbitrary function \(\mathtt {v}_\perp \in \mathcal {K}^\perp \) to \(\mathtt{v}_\parallel\) in the above equation. So, we see that \(\mathtt{u}_\parallel\) appears as a solution to the variational formulation:

    Find \(\mathtt {u}_\parallel : [0,T] \to \mathcal {V}\) such that:

    $$\displaystyle \begin{aligned} \forall\mathtt{v}\in\mathcal{V},\quad \frac{d^2}{dt^2}\{(\mathtt{u}_\parallel(t),\mathtt{v})_{\mathcal{H}}\} + a(\mathtt{u}_\parallel(t),\mathtt{v}) = (\mathtt{f}_\parallel(t),\mathtt{v})_{\mathcal{H}} \mbox{ in }\mathcal{D}'(]0,T[),\end{aligned} $$

    with the initial conditions \(\mathtt {u}_\parallel (0) = \mathtt {u}_{0\parallel },\ \mathtt {u}^{\prime }_\parallel (0) = \mathtt {u}_{1\parallel }\). Thus, it coincides with the unique weak solution to this formulation in the sense of Definition 4.3.12. Following the same line of reasoning, one shows that this solution does belong to \(\mathcal {K}\) at any time; furthermore, its norm in \(C^0([0,T];\mathcal {V})\cap C^1([0,T];\mathcal {H})\) depends continuously on the data (\(\mathtt{f}_\parallel\), u 0∥, u 1∥), which are themselves controlled by (f, u 0, u 1) in their respective spaces.

  3. 3.

    Now, consider \(\mathtt {v}\in \mathcal {V}\) and write \(\mathtt{v} = \mathtt{v}_\parallel + \mathtt{v}_\perp\), with \((\mathtt {v}_\parallel ,\mathtt {v}_\perp )\in \mathcal {K}\times \mathcal {K}^\perp \). Using the characterisation of \(\mathtt{u}_\parallel\) obtained in step 2, together with the double orthogonality property and footnote 11, p. 167, one finds that

    $$\displaystyle \begin{aligned} \begin{array}{l} \displaystyle(\mathtt{f}(t),\mathtt{v})_{\mathcal{H}} - \frac{d^2}{dt^2}\{(\mathtt{u}(t),\mathtt{v})_{\mathcal{H}}\} - a(\mathtt{u}(t),\mathtt{v}) = \\ \displaystyle(\mathtt{f}_\perp(t),\mathtt{v}_\perp)_{\mathcal{H}} - \frac{d^2}{dt^2}\{(\mathtt{u}_\perp(t),\mathtt{v}_\perp)_{\mathcal{H}}\} - a(\mathtt{u}_\perp(t),\mathtt{v}_\perp) \mbox{ in }\mathcal{D}'(]0,T[). \end{array}\end{aligned} $$

    Let us define \(\mathtt {h}(t)\in \mathcal {V}'\), for all t, by the condition:

    $$\displaystyle \begin{aligned} \forall \mathtt{v} \in \mathcal{V},\quad \langle \mathtt{h}(t),\mathtt{v}\rangle_{\mathcal{V}} = (\mathtt{f}_\perp(t),\mathtt{v}_\perp)_{\mathcal{H}} - (\mathtt{u}_\perp''(t),\mathtt{v}_\perp)_{\mathcal{H}} - a(\mathtt{u}_\perp(t),\mathtt{v}_\perp). \end{aligned} $$
    (4.23)

    Thanks to the assumptions on the data and to the preceding results, we have \(\mathtt {h}\in C^0([0,T];\mathcal {K}^0)\), where \(\mathcal {K}^0\) is the polar set of \(\mathcal {K}\). Using Lemma 4.2.18 once more, we conclude that

    $$\displaystyle \begin{aligned} \exists! \mathtt{p}\in C^0([0,T];\mathcal{Q}),\ \forall t\in[0,T],\ \forall\mathtt{v}\in\mathcal{V},\ \overline{b(\mathtt{v},\mathtt{p}(t))}= \langle \mathtt{h}(t),\mathtt{v}\rangle_{\mathcal{V}}. \end{aligned} $$
    (4.24)

    Moreover, the norm of p in \(C^0([0,T];\mathcal {Q})\) depends continuously on the data (f, g, u 0, u 1).

  4. 4.

    Conversely, let \(\mathtt{u} = \mathtt{u}_\parallel + \mathtt{u}_\perp\), where \(\mathtt{u}_\perp\) and \(\mathtt{u}_\parallel\) are defined as in steps 1 and 2, respectively, and let p be defined by (4.24) and (4.23). They fulfill all items of Definition 4.3.16, including the initial conditions thanks to (4.22). What is more, the norm of (u, u ′, p) in \(C^0([0,T];\mathcal {V} \times \mathcal {H} \times \mathcal {Q})\) depends continuously on the data (f, g, u 0, u 1). ■

Remark 4.3.20

As in the case without constraints (cf. Theorem 4.3.13), one can have weaker time regularity assumptions on the right-hand sides, namely \(\mathtt {f} \in L^2(0,T;\mathcal {H})\) and \(\mathtt {g}\in H^2([0,T];\mathcal {Q}^{\prime })\). But one only finds that \(\mathtt {p}\in L^2(0,T;\mathcal {Q})\). Weaker space regularities can be also envisaged, under certain assumptions about the various spaces and sesquilinear forms (see below).

Remark 4.3.21

Let us comment on the double orthogonality requirement.

  • According to Remark 4.3.8, one can replace the scalar product \((\mathtt {v},\mathtt {w})_{\mathcal {V}}\) with \({ }_2(\mathtt {v},\mathtt {w})_{\mathcal {V}}=a(\mathtt {v},\mathtt {w})+\nu _2\,{ }_2(\mathtt {v},\mathtt {w})_{\mathcal {H}}\), with ν 2 > 0. Hence the denomination double orthogonality with respect to \({ }_2(\cdot ,\cdot )_{\mathcal {V}}\):

    for all \((\mathtt {v}_\parallel ,\mathtt {v}_\perp )\in \mathcal {K}\times \mathcal {K}^\perp \), one expects \(a(\mathtt {v}_\parallel ,\mathtt {v}_\perp )+\nu _2\,{ }_2(\mathtt {v}_\parallel ,\mathtt {v}_\perp )_{\mathcal {H}} = 0\), whereas we require both \(a(\mathtt {v}_\parallel ,\mathtt {v}_\perp )=0\ \mbox{\em and } { }_2(\mathtt {v}_\parallel ,\mathtt {v}_\perp )_{\mathcal {H}}=0\).

  • The part \((\mathtt {v}_\parallel ,\mathtt {v}_\perp )\in \mathcal {K}\times \mathcal {K}^\perp \Longrightarrow a(\mathtt {v}_\parallel ,\mathtt {v}_\perp )=0\) is required, because one cannot deal with a right-hand side of the form a(w(t), v)—in our case, with \(\mathtt{w} = \mathtt{u}_\perp\) and \(\mathtt{v} = \mathtt{v}_\parallel\)—when solving the second-order time-dependent problem in \(\mathcal {V}\).Footnote 12

The result of Theorem 4.3.19 is not entirely satisfactory: as it appears from the proof, the part of the solution that is orthogonal to the kernel is much more regular than the one along the kernel. To address this dissymmetry, one can try to define suitable extensions of the operator B, and thus consider less regular data g. For instance, introduce the spaces:

$$\displaystyle \begin{aligned} \mathcal{Q}_w := \{ \mathtt{q} \in \mathcal{Q} : \mathtt{B}^\dagger \mathtt{q} \in \mathcal{H} \},\quad \mathcal{Q}_{ww} := \{ \mathtt{q} \in \mathcal{Q} : \mathtt{B}^\dagger \mathtt{q} \in \mathcal{V} \}, {} \end{aligned} $$
(4.25)

endowed with their canonical norms. For any \(\mathtt {q}\in \mathcal {Q}_w\), the continuous antilinear form on \(\mathcal {V}\) given by vb(v, q) can be extended to a continuous antilinear form on \(\mathcal {H}\). Thus, we have defined a continuous sesquilinear form b w on \(\mathcal {H}\times \mathcal {Q}_w\), which coincides with b(⋅, ⋅) on \(\mathcal {V}\times \mathcal {Q}_w\), as well as an extended operator \(\mathtt {B}_w: \mathcal {H} \to \mathcal {Q}_w^{\prime }\) and its conjugate transpose \(\mathtt {B}_w^\dagger : \mathcal {Q}_w \to \mathcal {H}'=\mathcal {H}\). Similarly, one defines the sesquilinear form b ww on \(\mathcal {V}'\times \mathcal {Q}_{ww}\) and the operators \(\mathtt {B}_{ww}: \mathcal {V}' \to \mathcal {Q}_{ww}^{\prime }\) and \(\mathtt {B}_{ww}^\dagger : \mathcal {Q}_{ww} \to \mathcal {V}''=\mathcal {V}\).
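The first extension can be checked by an elementary computation; we assume here the conventions of (4.8), i.e., \(\langle \mathtt{B}^\dagger\mathtt{q},\mathtt{v}\rangle_{\mathcal{V}} = \overline{b(\mathtt{v},\mathtt{q})}\), and recall that \(\mathcal{V}\) is dense in \(\mathcal{H}\):

$$\displaystyle \begin{aligned} \forall(\mathtt{v},\mathtt{q})\in\mathcal{V}\times\mathcal{Q}_w,\quad \overline{b(\mathtt{v},\mathtt{q})} = \langle \mathtt{B}^\dagger\mathtt{q},\mathtt{v}\rangle_{\mathcal{V}} = (\mathtt{B}^\dagger\mathtt{q},\mathtt{v})_{\mathcal{H}}, \quad\text{whence}\quad |b(\mathtt{v},\mathtt{q})| \le \|\mathtt{B}^\dagger\mathtt{q}\|{}_{\mathcal{H}}\,\|\mathtt{v}\|{}_{\mathcal{H}}, \end{aligned}$$

the middle equality expressing the identification of \(\mathcal{H}\) with its dual (pivot space). The right-hand side makes sense for any \(\mathtt{v}\in\mathcal{H}\), so the antilinear form \(\mathtt{v}\mapsto b(\mathtt{v},\mathtt{q})\) extends uniquely to \(\mathcal{H}\), with norm at most \(\|\mathtt{B}^\dagger\mathtt{q}\|{}_{\mathcal{H}}\). The construction of b ww is identical, with the duality between \(\mathcal{V}'\) and \(\mathcal{V}\) in place of the scalar product of \(\mathcal{H}\).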

Theorem 4.3.22

Assume that the sesquilinear, continuous and Hermitian form a fulfills the property (4.15), and that the sesquilinear and continuous form b satisfies the inf-sup condition (4.10) for some β > 0. Assume, moreover, that the sesquilinear and continuous forms b w and b ww satisfy similar inf-sup conditions in the relevant spaces; and that the double orthogonality property in \(\mathcal {V}\) and \(\mathcal {H}_2\) holds.

Then, let T > 0, \(\mathtt {f}\in C^0([0,T];\mathcal {H})\) , \(\mathtt {g}\in \mathcal {G}_T := C^0([0,T];\mathcal {Q}^{\prime }) \cap C^1([0,T];\mathcal {Q}_w^{\prime }) \cap C^2([0,T];\mathcal {Q}_{ww}^{\prime })\) , \(\mathtt {u}_0\in \mathcal {V}\) and \(\mathtt {u}_1\in \mathcal {H}\) be given such that

$$\displaystyle \begin{aligned} \forall \mathtt{q}\in\mathcal{Q},\ b(\mathtt{u}_0,\mathtt{q}) = \langle \mathtt{g}(0),\mathtt{q} \rangle_{\mathcal{Q}} \,;\ \forall \mathtt{q}\in\mathcal{Q}_w,\ b_w(\mathtt{u}_1,\mathtt{q}) = \langle \mathtt{g}'(0),\mathtt{q} \rangle_{\mathcal{Q}_w}. {} \end{aligned} $$
(4.26)

On the time interval ]0, T[, Problem (4.21) admits a unique weak solution in the sense of Definition 4.3.16 (with \(\big ({ }_2(\mathtt {u}(t),\mathtt {v})_{\mathcal {H}}\big )''\) in (iii)). In addition, the mapping

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lll} C^0([0,T];\mathcal{H})\times \mathcal{G}_T \times\mathcal{V} \times \mathcal{H} & \rightarrow & C^0([0,T];\mathcal{V} \times \mathcal{H} \times \mathcal{Q}) \\ (\mathtt{f},\mathtt{g},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}',\mathtt{p}) \end{array} \right. \end{aligned}$$

is continuous (with a constant that depends on T).

The proof is entirely similar to that of Theorem 4.3.19.

Remark 4.3.23

Let us comment on these regularity assumptions.

  • As in Remark 4.3.20, it is sufficient to assume \(\mathtt {f} \in L^2(0,T;\mathcal {H})\) and \(\mathtt {g}\in C^0([0,T];\mathcal {Q}^{\prime }) \cap C^1([0,T];\mathcal {Q}_w^{\prime }) \cap H^2([0,T];\mathcal {Q}_{ww}^{\prime })\) in order to have a well-posed evolution equation for u and an equation for p(t) at a.e. t; in this case, it holds that \(\mathtt {p}\in L^2(0,T;\mathcal {Q})\).

  • The inf-sup condition on the form b w allows one to prove the condition \(\mathtt {u}_\perp \in C^1([0,T];\mathcal {H})\), which is expected of a weak solution. By the same token, it expresses the compatibility between the initial condition u 1 and the constraint b(u, q) = 〈g, q〉. It also implies that \(\mathcal {L}\) is the kernel of b w (⋅, ⋅).

  • On the other hand, the form b ww plays a marginal role. Its inf-sup condition ensures \(\mathtt {u}_\perp \in C^2([0,T];\mathcal {V}')\) or \(H^2([0,T];\mathcal {V}')\), so that the r.h.s. of (4.23) is well-defined for a.e. t. If this condition is unavailable, one can still conclude favorably under the assumption \(\mathtt {g}\in C^0([0,T];\mathcal {Q}^{\prime }) \cap C^2([0,T];\mathcal {Q}_w^{\prime })\) or \(\mathtt {g}\in C^0([0,T];\mathcal {Q}^{\prime }) \cap H^2([0,T];\mathcal {Q}_w^{\prime })\).

To conclude this subsection, we introduce a reinterpretation of the equations satisfied by \(\mathtt{u}_\parallel\) and \(\mathtt{u}_\perp\), which also proves useful in analysing the numerical discretizations of Problem (4.21) [81]. According to item 1 in the proof of Theorem 4.3.19, the variable \(\mathtt{u}_\perp\) is the solution, at any time, to the static mixed formulation:

$$\displaystyle \begin{aligned} \hskip -3mm\left\{ \begin{array}{l} \mathit{Find}\ (\mathtt{u}_\perp,\mathtt{p}_\perp)\ \mathit{such \ that}\\ \displaystyle\forall t\in[0,T],\ \forall\mathtt{q}\in\mathcal{Q},\ b(\mathtt{u}_\perp(t),\mathtt{q}) = \langle \mathtt{g}(t) , \mathtt{q} \rangle_{\mathcal{Q}},\\ \displaystyle\forall\mathtt{v}\in\mathcal{V},\ a(\mathtt{u}_\perp(t), \mathtt{v}) + \overline{b(\mathtt{v},\mathtt{p}_\perp(t))} = \langle \mathtt{A}_w\,\mathtt{B}_{|{}_{\mathcal{K}^\perp}}^{-1}\,\mathtt{g}(t) , \mathtt{v} \rangle_{\mathcal{V}} \mbox{ in }\mathcal{D}'(]0,T[)\,;\\ \mathtt{u}_\perp(0)=\mathtt{u}_{0\perp}\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}_\perp}{dt}(0)=\mathtt{u}_{1\perp}. \end{array} \right. \end{aligned} $$

Indeed, the operator B restricted to \(\mathcal {K}^\perp \) admits a continuous inverse \(\mathtt {B}_{|{}_{\mathcal {K}^\perp }}^{-1}: \mathcal {Q}^{\prime } \to \mathcal {K}^\perp \). By the uniqueness of the solution to the constrained formulation, it holds that \(\mathtt{p}_\perp(t) = 0\). As for \(\mathtt{u}_\parallel\), it is the solution to the following time-dependent formulation, where \(\mathtt{u}_\perp\) enters as data and \(\mathtt{p}_\parallel = \mathtt{p}\):

$$\displaystyle \begin{aligned} \hskip -3mm\left\{ \begin{array}{l} \mathit{Find}\ (\mathtt{u}_\parallel,\mathtt{p}_\parallel)\ \mathit{such \ that}\\ \displaystyle\forall\mathtt{v}\in\mathcal{V},\ \frac{d^2}{dt^2}\{{}_2(\mathtt{u}_\parallel(t),\mathtt{v})_{\mathcal{H}}\} + a(\mathtt{u}(t),\mathtt{v}) + \overline{b(\mathtt{v},\mathtt{p}_\parallel(t))} = \\ \displaystyle(\mathtt{f}(t),\mathtt{v})_{\mathcal{H}} - \langle \mathtt{A}_w\,\mathtt{B}_{|{}_{\mathcal{K}^\perp}}^{-1}\,\mathtt{g}(t) , \mathtt{v} \rangle_{\mathcal{V}} - \frac{d^2}{dt^2}\{{}_2(\mathtt{u}_\perp(t),\mathtt{v})_{\mathcal{H}}\}\mbox{ in }\mathcal{D}'(]0,T[), \\ \displaystyle\forall\mathtt{q}\in\mathcal{Q},\quad b(\mathtt{u}_\parallel(t),\mathtt{q}) = 0 \mbox{ in } C^0([0,T]) \mbox{ respectively } L^2(0,T)\,;\\ \mathtt{u}_\parallel(0)=\mathtt{u}_{0\parallel}\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}_\parallel}{dt}(0)=\mathtt{u}_{1\parallel}. \end{array} \right. \end{aligned} $$
(4.27)

4.4 Time-Dependent Problems: Improved Regularity Results

We now investigate the conditions under which the solution to the second-order time-dependent problems (4.14), (4.17), (4.21) (and their variants) may exhibit a higher regularity in space and time, such as that needed for the numerical analysis [17]. In addition to the hypotheses of Sect. 4.3, we assume that the canonical imbedding \(i_{\mathcal {V}\rightarrow \mathcal {H}}\) is compact.

To simplify the discussion, we shall assume in this section that the form a appearing in these problems is (Hermitian and) coercive on the whole space \(\mathcal {V}\), i.e., the property (4.15) holds with ν = 0. As a consequence, we replace the original norm of \(\mathcal {V}\) with the equivalent norm \({ }_2\|\mathtt {v}\|{ }_{\mathcal {V}} := a(\mathtt {v},\mathtt {v})^{1/2}\), usually called the energy norm, which we will denote by \(\|\mathtt {v}\|{ }_{\mathcal {V}}\) for the sake of simplicity.

4.4.1 Problems Without Constraints

First, we introduce the eigenvalue problemFootnote 13:

$$\displaystyle \begin{aligned} \hskip -3mm\left\{ \begin{array}{l} \mathit{Find}\ (\mathtt{e},\lambda)\in(\mathcal{V}\setminus\{0\})\times\mathbb{R}\ \mathit{such \ that}\\ \forall \mathtt{v}\in\mathcal{V},\ a(\mathtt{e},\mathtt{v}) = \lambda\, (\mathtt{e},\mathtt{v})_{\mathcal{H}}. \end{array}\right.\end{aligned} $$
(4.28)

According to Corollary 4.5.12, there exist a non-decreasing sequence of strictly positive eigenvalues \((\lambda _i)_{i\in \mathbb {N}}\) and a sequence of eigenfunctions \((\mathtt {e}_i)_{i\in \mathbb {N}}\) that form a Hilbert basis of \(\mathcal {H}\), such that \((\lambda _i^{-1/2}\, \mathtt {e}_i)_{i\in \mathbb {N}}\) is a Hilbert basis for \(\mathcal {V}\). This leads to the definition of a scale \((\mathcal {V}^s)_{s\in \mathbb {R}}\) of Hilbert spaces, the A -Sobolev spaces.

Definition 4.4.1

Let \(s\in \mathbb {R}\). The space \(\mathcal {V}^s\) is:

  • if s ≥ 0, the subspace of \(\mathcal {H}\) characterised by the condition

    $$\displaystyle \begin{aligned} \sum_{i\in\mathbb{N}} u_i\,\mathtt{e}_i = \mathtt{u} \in \mathcal{V}^s \iff \| \mathtt{u} \|{}_{\mathcal{V}^s}^2 :=\sum_{i\in\mathbb{N}} \lambda_i^s\, |u_i|{}^2 < +\infty, {}\end{aligned} $$
    (4.29)

    which defines its canonical norm;

  • if s < 0, the dual of \(\mathcal {V}^{-s}\) with respect to the pivot space \(\mathcal {H}\).
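A standard illustration, independent of the Maxwell setting and stated here only to fix ideas: take \(\mathcal{H} = L^2(0,\pi)\), \(\mathcal{V} = H^1_0(0,\pi)\) and \(a(\mathtt{u},\mathtt{v}) = \int_0^\pi \mathtt{u}'\,\overline{\mathtt{v}'}\,dx\). The eigenpairs of (4.28) are then \(\lambda_i = i^2\) and \(\mathtt{e}_i(x) = \sqrt{2/\pi}\,\sin(ix)\) (indexing from i = 1), and the condition (4.29) reads

$$\displaystyle \begin{aligned} \| \mathtt{u} \|{}_{\mathcal{V}^s}^2 = \sum_{i\ge1} i^{2s}\, |u_i|{}^2 < +\infty,\qquad u_i = \sqrt{2/\pi}\int_0^\pi \mathtt{u}(x)\,\sin(ix)\,dx. \end{aligned}$$

In this case, \(\mathcal{V}^s\) (for s ≥ 0) is the domain of the s∕2-th power of the Dirichlet Laplacian; for instance, \(\mathcal{V}^2 = H^2(0,\pi)\cap H^1_0(0,\pi)\).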

Then, we summarise some properties of this scale. The proofs are left to the reader.

Proposition 4.4.2

The following statements hold true:

  1. 1.

    \(\mathcal {V}^0 = \mathcal {H}\) , \(\mathcal {V}^1 = \mathcal {V}\) , \(\mathcal {V}^2 = D(\mathtt {A})\) , \(\mathcal {V}^{-1} = \mathcal {V}'\) , algebraically and topologically.

  2. 2.

    For all \(i\in \mathbb {N}\) and \(s\in \mathbb {R}\) , \(\mathtt {e}_i \in \mathcal {V}^s\) . Furthermore, the sequence \((\mathtt {e}_i^s)_{i\in \mathbb {N}} := (\lambda _i^{-s/2}\, \mathtt {e}_i)_{i\in \mathbb {N}}\) is a Hilbert basis for \(\mathcal {V}^s\).

  3. 3.

    For all \(t < s \in \mathbb {R}\) , \(\mathcal {V}^s\) is densely and compactly imbedded in \(\mathcal {V}^t\).

  4. 4.

    Let \(s\in \mathbb {R}\) and \(u\in \mathcal {V}^s\) . The scalar u i equivalently defined as

    $$\displaystyle \begin{aligned} u_i = \left\langle \mathtt{u}, \mathtt{e}_i^{-t} \right\rangle_{\mathcal{V}^{-t}} = \lambda_i^{-t/2}\, \left( \mathtt{u}, \mathtt{e}_i^t \right)_{\mathcal{V}^t}\end{aligned} $$

    does not depend on t ≤ s. Of course, if \(\mathtt {u}\in \mathcal {H}\) , u i coincides with the coordinate of u on the basis \((\mathtt {e}_i)_{i\in \mathbb {N}}\).

  5. 5.

    As a consequence of items 2 and 4 , an element of an A -Sobolev space admits a renormalised expansion \(\mathtt {u} = \sum _{i\in \mathbb {N}} u_i\, \mathtt {e}_i\) , which converges in \(\mathcal {V}^s\) under the condition (4.29).

With these results, one can define a natural generalisation of the “strong” and “weak” operators A and A w . The “formal” unbounded operator

$$\displaystyle \begin{aligned} \tilde{\mathtt{A}} :\quad \mathtt{u} = \sum_{i\in\mathbb{N}} u_i\, \mathtt{e}_i \longmapsto \sum_{i\in\mathbb{N}} \lambda_i\, u_i\, \mathtt{e}_i \end{aligned}$$

makes sense as soon as u belongs to some A-Sobolev space. By construction, it maps \(\mathcal {V}^s\) to \(\mathcal {V}^{s-2}\) for all s, and it is an isometry between these spaces. As particular cases, A and A w appear as the restrictions of \(\tilde {\mathtt {A}}\) to D(A) and \(\mathcal {V}\), respectively.
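Indeed, both the mapping and the isometry claims follow at once from the renormalised expansions and (4.29):

$$\displaystyle \begin{aligned} \|\tilde{\mathtt{A}}\mathtt{u}\|{}_{\mathcal{V}^{s-2}}^2 = \sum_{i\in\mathbb{N}} \lambda_i^{s-2}\, |\lambda_i\, u_i|{}^2 = \sum_{i\in\mathbb{N}} \lambda_i^{s}\, |u_i|{}^2 = \|\mathtt{u}\|{}_{\mathcal{V}^s}^2. \end{aligned}$$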

We are now ready to analyse a generalised version of Problem (4.14), namely:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ \mathtt{u}\ \mathit{such \ that}\\ \displaystyle\frac{d^2\mathtt{u}}{dt^2} + \tilde{\mathtt{A}}\mathtt{u}= \mathtt{f},\quad t>0\,;\\ \mathtt{u}(0)=\mathtt{u}_0\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}}{dt}(0)=\mathtt{u}_1. \end{array} \right. \end{aligned} $$
(4.30)

The above problem is meaningful as soon as u has the regularity \(C^1( [0,T] ; \mathcal {V}^\sigma )\), and \(f \in L^1_{loc}(\left \rbrack 0,T \right \lbrack ; \mathcal {V}^s)\), for some \(\sigma ,\ s \in \mathbb {R}\): the equality on the first line takes place in \(\mathcal {D}'(\left \rbrack 0,T \right \lbrack ; \mathcal {V}^{\min (\sigma -2,s)} )\). As particular cases, this covers the frameworks of Definitions 4.3.9 and 4.3.12. Considering the renormalised expansions at each time

$$\displaystyle \begin{aligned} \mathtt{u}(t) = \sum_{i\in\mathbb{N}} u_i(t)\, \mathtt{e}_i,\quad \mathtt{u}_m = \sum_{i\in\mathbb{N}} u_{m,i}\, \mathtt{e}_i \ (m=0,\ 1),\quad \mathtt{f}(t) = \sum_{i\in\mathbb{N}} f_i(t)\, \mathtt{e}_i, \end{aligned}$$

Problem (4.30) is equivalent to the sequence of Cauchy problems in \(\mathcal {D}'(\left \rbrack 0,T \right \lbrack )\) (for \(i\in \mathbb {N}\)):

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u_i\ \mathit{such \ that}\\ \displaystyle\frac{d^2u_i}{dt^2} + \lambda_i\, u_i = f_i,\ t\in]0,T[\,;\,u_i(0) = u_{0,i}\;,\;\displaystyle\displaystyle\frac{du_i}{dt}(0)=u_{1,i}. \end{array} \right. \end{aligned}$$

The theory of ordinary differential equations gives us the unique solution:

$$\displaystyle \begin{aligned} u_i(t) = u_{0,i}\, \cos{}(\sqrt{\lambda_i} t) + \frac{u_{1,i}}{\sqrt{\lambda_i}}\, \sin{}(\sqrt{\lambda_i} t) + \int_0^t \sin{}(\sqrt{\lambda_i}(t-s))\, \frac{f_i(s)}{\sqrt{\lambda_i}}\, ds, \end{aligned} $$

which exists, e.g., under the condition f i  ∈ L 1(0, T). If f i  ∈ W 1, 1(0, T), one can perform an integration by parts and arrive at:

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_i(t) &\displaystyle =&\displaystyle u_{0,i}\, \cos{}(\sqrt{\lambda_i} t) + \frac{u_{1,i}}{\sqrt{\lambda_i}}\, \sin{}(\sqrt{\lambda_i} t) + \frac{f_i(t) - f_i(0)\,\cos{}(\sqrt{\lambda_i} t)}{\lambda_i} \\ &\displaystyle &\displaystyle \qquad - \int_0^t \cos{}(\sqrt{\lambda_i}(t-s))\, \frac{f_i^{\prime}(s)}{\lambda_i}\, ds. \end{array} \end{aligned} $$
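As a sanity check, one may verify directly that the variation-of-constants formula above solves the Cauchy problem: differentiating under the integral sign (say, for continuous f i ), one computes

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_i'(t) &\displaystyle =&\displaystyle -\sqrt{\lambda_i}\, u_{0,i}\, \sin{}(\sqrt{\lambda_i} t) + u_{1,i}\, \cos{}(\sqrt{\lambda_i} t) + \int_0^t \cos{}(\sqrt{\lambda_i}(t-s))\, f_i(s)\, ds, \\ u_i''(t) &\displaystyle =&\displaystyle -\lambda_i\, u_i(t) + f_i(t), \end{array} \end{aligned} $$

together with u i (0) = u 0,i and u i ′(0) = u 1,i ; uniqueness follows from the linear theory of ordinary differential equations.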

Using these representations and Proposition 4.4.2, it is not difficult to prove the following theorem, which furnishes solutions both less regular and more regular in space than the strong and weak solutions considered so far.

Theorem 4.4.3

Assume that the canonical imbedding \(i_{\mathcal {V}\rightarrow \mathcal {H}}\) is compact, and that the sesquilinear, continuous and Hermitian form a fulfills property (4.15) with ν = 0, and let the operator \(\tilde {\mathtt {A}}\) be defined as above. Then:

  1. 1.

    Given T > 0, \(s\in \mathbb {R}\) , p ≥ 1, \(\mathtt{f}\in L^p(0,T;\mathcal {V}^s)\) , \(\mathtt {u}_0\in \mathcal {V}^{s+1}\) and \(\mathtt {u}_1\in \mathcal {V}^s;\) on the time interval ]0, T[, Problem (4.30) admits a unique solution in \(C^1( [0,T] ; \mathcal {V}^s) \cap C^0( [0,T] ; \mathcal {V}^{s+1})\) . In addition,

    $$\displaystyle \begin{aligned} \left\lbrace \begin{array}{lll} L^1(0,T;\mathcal{V}^s)\times \mathcal{V}^{s+1} \times \mathcal{V}^s & \rightarrow & C^0([0,T];\mathcal{V}^{s+1}) \times C^0([0,T];\mathcal{V}^s) \\ (\mathtt{f},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}') \end{array} \right. \end{aligned}$$

    is continuous (with a constant that depends on T), and \(\mathtt {u} \in W^{2,p}( 0,T ; \mathcal {V}^{s-1})\) , with continuous dependence.

  2. 2.

    Given T > 0, \(s\in \mathbb {R}\) , \(f\in \mathcal {Z}^s_T := L^1(0,T;\mathcal {V}^s) \cap C^0([0,T];\mathcal {V}^{s-1})\) , respectively, \(W^{1,1}(0,T;\mathcal {V}^{s-1})\) , \(\mathtt {u}_0\in \mathcal {V}^{s+1}\) and \(\mathtt {u}_1\in \mathcal {V}^s;\) on the time interval ]0, T[, Problem (4.30) admits a unique solution in \(C^2( [0,T] ; \mathcal {V}^{s-1}) \cap C^1( [0,T] ; \mathcal {V}^s) \cap C^0( [0,T] ; \mathcal {V}^{s+1})\) . In addition,

    $$\displaystyle \begin{aligned} \left\lbrace \begin{array}{lll} \mathcal{Z}^s_T \times \mathcal{V}^{s+1} \times \mathcal{V}^s & \rightarrow & C^0([0,T];\mathcal{V}^{s+1}) \times C^0([0,T];\mathcal{V}^s) \times C^0([0,T];\mathcal{V}^{s-1}) \\ (\mathtt{f},\mathtt{u}_0,\mathtt{u}_1) & \mapsto & (\mathtt{u},\mathtt{u}',\mathtt{u}'') \end{array} \right. \end{aligned}$$

    is continuous (with a constant that depends on T).

Now, we investigate the time regularity of the solutions to (4.30).

Theorem 4.4.4

Assume the hypotheses of Theorem 4.4.3 , and let \(m\in \mathbb {N}\) be given. Suppose that u m and u m+1 (defined, according to the parity of m, by the formulas (4.32) and (4.33) below) belong, respectively, to \(\mathcal {V}^{s+1}\) and \(\mathcal {V}^s\).

  1. 1.

    If \(\mathtt {f}\in W^{m,p}(0,T;\mathcal {V}^s)\) , the solution to Problem (4.30) belongs to \(W^{m+2,p}( 0,T ; \mathcal {V}^{s-1}) \cap C^{m+1}([0,T] ; \mathcal {V}^{s}) \cap C^m([0,T] ; \mathcal {V}^{s+1})\) , with continuous dependence on the data (f, u m , u m+1).

  2. 2.

    If either \(\mathtt {f}\in W^{m,1}(0,T;\mathcal {V}^s) \cap C^m([0,T];\mathcal {V}^{s-1})\) or \(\mathtt {f}\in W^{m+1,1}(0,T;\mathcal {V}^{s-1})\) , the solution to Problem (4.30) belongs to \(C^{m+2}([0,T] ; \mathcal {V}^{s-1}) \cap C^{m+1}([0,T] ; \mathcal {V}^{s}) \cap C^m([0,T] ; \mathcal {V}^{s+1})\) , with continuous dependence on the data (f, u m , u m+1).

Proof

We prove the first claim; the second is similar. The case m = 0 is that of Theorem 4.4.3. Thus, we suppose m ≥ 1, and we have \(\mathtt {f}\in C^{m-1}([0,T] ;\mathcal {V}^s)\). Using the identity \(\mathtt {u}'' = \mathtt {f} - \tilde {\mathtt {A}}\mathtt {u}\) iteratively, one arrives at the following expressions and regularities of the successive time derivatives of u:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathtt{u}^{(2k)} &\displaystyle =&\displaystyle \sum_{\ell=0}^{k-1} (-1)^\ell\, \tilde{\mathtt{A}}^\ell \mathtt{f}^{(2k-2\ell-2)} + (-1)^k\, \tilde{\mathtt{A}}^k \mathtt{u} \;\in\; C^0([0,T] ; \mathcal{V}^{s-2k+1})\,, \\ \mathtt{u}^{(2k+1)} &\displaystyle =&\displaystyle \sum_{\ell=0}^{k-1} (-1)^\ell\, \tilde{\mathtt{A}}^\ell \mathtt{f}^{(2k-2\ell-1)} + (-1)^k\, \tilde{\mathtt{A}}^k \mathtt{u}' \;\in\; C^0([0,T] ; \mathcal{V}^{s-2k}) \,, \end{array} \end{aligned} $$

as long as 2k − 2 (respectively 2k − 1) does not exceed m − 1. Thus, in any case, \(\mathtt {u}^{(m)} \in C^1([0,T] ; \mathcal {V}^{s-m}) \cap C^0([0,T] ; \mathcal {V}^{s-m+1})\).
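For instance, when m ≥ 3, differentiating the identity \(\mathtt {u}'' = \mathtt {f} - \tilde {\mathtt {A}}\mathtt {u}\) twice gives

$$\displaystyle \begin{aligned} \mathtt{u}^{(4)} = \mathtt{f}'' - \tilde{\mathtt{A}}\mathtt{u}'' = \mathtt{f}'' - \tilde{\mathtt{A}}\mathtt{f} + \tilde{\mathtt{A}}^2\mathtt{u}, \end{aligned}$$

which is the first formula above with k = 2.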

On the other hand, consider the generalised second-order problem:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ \mathtt{v}\ \mathit{such \ that}\\ \displaystyle\frac{d^2\mathtt{v}}{dt^2} + \tilde{\mathtt{A}}\mathtt{v} = \mathtt{f}^{(m)},\quad t>0\,;\\ \mathtt{v}(0)=\mathtt{u}_m\;,\;\displaystyle\displaystyle\frac{d\mathtt{v}}{dt}(0)=\mathtt{u}_{m+1}, \end{array} \right. \end{aligned} $$
(4.31)

where the initial conditions are defined by the formula

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathtt{u}_{2k} &\displaystyle =&\displaystyle \sum_{\ell=0}^{k-1} (-1)^\ell\, \tilde{\mathtt{A}}^\ell \mathtt{f}^{(2k-2\ell-2)}(0) + (-1)^k\, \tilde{\mathtt{A}}^k \mathtt{u}_0 \,; {} \end{array} \end{aligned} $$
(4.32)
$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathtt{u}_{2k+1} &\displaystyle =&\displaystyle \sum_{\ell=0}^{k-1} (-1)^\ell\, \tilde{\mathtt{A}}^\ell \mathtt{f}^{(2k-2\ell-1)}(0) + (-1)^k\, \tilde{\mathtt{A}}^k \mathtt{u}_1 \,. {} \end{array} \end{aligned} $$
(4.33)
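As a quick sanity check, take m = 1: formula (4.33) with k = 0 returns u 1 itself, while (4.32) with k = 1 gives

$$\displaystyle \begin{aligned} \mathtt{u}_2 = \mathtt{f}(0) - \tilde{\mathtt{A}}\mathtt{u}_0, \end{aligned}$$

which is the value of u ″(0) obtained by evaluating the differential equation of (4.30) at t = 0.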

According to the previous calculations, \(\mathtt {u}_m \in \mathcal {V}^{s-m+1}\) and \(\mathtt {u}_{m+1} \in \mathcal {V}^{s-m}\). As it also holds that \(f^{(m)}\in L^p(0,T;\mathcal {V}^{s-m})\), Problem (4.31) admits a unique solution in the space \(C^1([0,T] ; \mathcal {V}^{s-m}) \cap C^0([0,T] ; \mathcal {V}^{s-m+1})\), which is obviously equal to u (m).

Assume now that \((\mathtt {u}_m,\mathtt {u}_{m+1}) \in \mathcal {V}^{s+1}\times \mathcal {V}^s\). Again invoking Theorem 4.4.3, we see that Problem (4.31) also admits a unique solution in the smaller space \(C^1([0,T] ; \mathcal {V}^{s}) \cap C^0([0,T] ; \mathcal {V}^{s+1})\), which necessarily coincides again with u (m). Therefore, \(\mathtt {u} \in C^{m+1}([0,T] ; \mathcal {V}^{s}) \cap C^m([0,T] ; \mathcal {V}^{s+1})\), as announced. The regularity \(\mathtt {u}\in W^{m+2,p}( 0,T ; \mathcal {V}^{s-1})\) again follows from \(\mathtt {u}'' = \mathtt {f} - \tilde {\mathtt {A}}\mathtt {u}\), and the continuous dependence from Theorem 4.4.3. ■

4.4.2 Problems with Constraints

Now, we proceed to the framework of constrained problems. We thus consider a sesquilinear form b on \(\mathcal {V}\times \mathcal {Q}\) satisfying the inf-sup condition (4.10); we denote by \(\mathcal {K}\) its kernel, and by \(\mathcal {L}\) the closure of \(\mathcal {K}\) in \(\mathcal {H}\). Furthermore, we assume the double orthogonality property of Definition 4.3.17. We begin by deducing two fundamental consequences of this property.

Lemma 4.4.5

Assume that the sesquilinear, continuous and Hermitian form a fulfills property (4.15) with ν = 0, and that the double orthogonality property holds between \(\mathcal {V}\) and \(\mathcal {H}\) . Then, for any \(\mathtt {v} \in \mathcal {V}^s\) with s ≥ 0, its \(\mathcal {H}\) -orthogonal projections \(\mathtt {v}_\parallel \in \mathcal {L}\) and \(\mathtt {v}_\perp \in \mathcal {L}^\perp \) belong to \(\mathcal {V}^s\) , with \(\|\mathtt {v}_\parallel \|{ }_{\mathcal {V}^s}^2 + \|\mathtt {v}_\perp \|{ }_{\mathcal {V}^s}^2 = \|\mathtt {v}\|{ }_{\mathcal {V}^s}^2\).

Proof

Let e be a solution to (4.28). Taking a test function \(\mathtt {v}_\parallel \in \mathcal {K}\) and using the double orthogonality, one obtains \(a(\mathtt {e}_\parallel ,\mathtt {v}_\parallel ) = \lambda \, ( \mathtt {e}_\parallel , \mathtt {v}_\parallel )_{\mathcal {H}}\). Again invoking the double orthogonality, one arrives at:

$$\displaystyle \begin{aligned} a(\mathtt{e}_\parallel,\mathtt{v}) = \lambda\, ( \mathtt{e}_\parallel , \mathtt{v} )_{\mathcal{H}},\quad \forall \mathtt{v}\in \mathcal{V}, \quad \text{and similarly,}\quad a(\mathtt{e}_\perp,\mathtt{v}) = \lambda\, ( \mathtt{e}_\perp , \mathtt{v} )_{\mathcal{H}}. \end{aligned}$$

In other words, the projections onto \(\mathcal {K}\) and \(\mathcal {K}^\perp \) of any eigenfunction are either an eigenfunction, or zero. Thus, the Hilbert basis \((\mathtt {e}_i)_{i\in \mathbb {N}}\) can be chosen such that all its elements belong either to \(\mathcal {K}\) or to \(\mathcal {K}^\perp \). Let \(I_\parallel\) (respectively \(I_\perp\)) be the set of indices i such that \(\mathtt {e}_i \in \mathcal {K}\) (respectively \(\mathtt {e}_i \in \mathcal {K}^\perp \)). Then, we have:

$$\displaystyle \begin{aligned} \forall \mathtt{v} = \sum_{i\in\mathbb{N}} v_i\,\mathtt{e}_i \in \mathcal{H}, \quad \mathtt{v}_\parallel = \sum_{i\in I_\parallel} v_i\,\mathtt{e}_i \quad \text{and}\quad \mathtt{v}_\perp = \sum_{i\in I_\perp} v_i\,\mathtt{e}_i. \end{aligned}$$

The conclusion follows using the property (4.29). ■

Lemma 4.4.6

Assume the hypotheses of Lemma 4.4.5 , and introduce the respective subspaces \(\mathcal {F}^s \subset \mathcal {Q}^{\prime }\) and \(\mathcal {Q}^s \subset \mathcal {Q}\) (for s ≥ 0), equipped with their canonical norms:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \mathcal{F}^s &\displaystyle =&\displaystyle \mathtt{B} ( \mathcal{V}^{s+2} ) = \mathtt{B} ( \mathcal{V}^{s+2} \cap \mathcal{K}^\perp ), \\ \mathcal{Q}^s &\displaystyle =&\displaystyle \{ \mathtt{q} \in \mathcal{Q} : \mathtt{B}^\dagger \mathtt{q} \in \mathcal{V}^{s-1} \}. \end{array} \end{aligned} $$

Then, for any \(\mathtt {y}\in \mathcal {V}^s\) and \(\mu \in \mathcal {F}^s\) , the solution to the problem

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mathrm{Find}\ (\mathtt{u},\mathtt{r})\in\mathcal{V}\times \mathcal{Q}\ \mathrm{such \ that} \\ &\displaystyle &\displaystyle \forall \mathtt{v}\in \mathcal{V},\ a(\mathtt{u},\mathtt{v}) + \overline{b(\mathtt{v},\mathtt{r})} = ( \mathtt{y} , \mathtt{v} )_{\mathcal{H}},{} \end{array} \end{aligned} $$
(4.34)
$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \forall \mathtt{q}\in \mathcal{Q},\ b(\mathtt{u}, \mathtt{q}) = \langle \mu, \mathtt{q} \rangle_{\mathcal{Q}},{} \end{array} \end{aligned} $$
(4.35)

belongs to \(\mathcal {V}^{s+2} \times \mathcal {Q}^{s+1}\) , and \(\|\mathtt {u}\|{ }_{\mathcal {V}^{s+2}} + \|\mathtt {r}\|{ }_{\mathcal {Q}^{s+1}} \lesssim \|\mathtt {y}\|{ }_{\mathcal {V}^s} + \| \mu \|{ }_{\mathcal {F}^s}\).

Remark 4.4.7

It holds that: \(\mathcal {Q}^0 = \mathcal {Q},\ \mathcal {Q}^1 = \mathcal {Q}_w,\ \mathcal {Q}^2 = \mathcal {Q}_{ww}\), as in Eq. (4.25). The scale \((\mathcal {F}^s)_s\) can be extended to s ≥−1, and even to s ≥−2, provided the sesquilinear form b w satisfies an inf-sup condition on \(\mathcal {H} \times \mathcal {Q}_w\): \(\mathcal {F}^s = \mathtt {B}_w ( \mathcal {V}^{s+2} ) = \mathtt {B}_w ( \mathcal {V}^{s+2} \cap \mathcal {L}^\perp )\); in particular, \(\mathcal {F}^{-1} = \mathtt {B}(\mathcal {V}) = \mathcal {Q}^{\prime }\) and \(\mathcal {F}^{-2} = \mathtt {B}_w(\mathcal {H}) = \mathcal {Q}_w^{\prime }\).

Proof

Decompose \(\mathtt {u} = \mathtt {u}_\parallel + \mathtt {u}_\perp \in \mathcal {K} \oplus \mathcal {K}^\perp \) and \(\mathtt {y} = \mathtt {y}_\parallel + \mathtt {y}_\perp \in \mathcal {L} \oplus \mathcal {L}^\perp \). By definition of \(\mathcal {F}^s\), there exists \(\tilde {\mathtt {u}} \in \mathcal {V}^{s+2} \cap \mathcal {K}^\perp \) such that \(\mathtt {B}\tilde {\mathtt {u}} = \mu \). On the other hand, Eq. (4.35) is equivalent to Bu  = μ. By Lemma 4.2.18, this equation has a unique solution in \(\mathcal {K}^\perp \); hence, \(\mathtt {u}_\perp = \tilde {\mathtt {u}}\), and \(\|\mathtt {u}_\perp \|{ }_{\mathcal {V}^{s+2}} \lesssim \| \mu \|{ }_{\mathcal {F}^s}\) by definition of the latter norm.

Reasoning as in Lemma 4.4.5, we see that (4.34) implies that

$$\displaystyle \begin{aligned} a(\mathtt{u}_\parallel,\mathtt{v}) = ( \mathtt{y}_\parallel , \mathtt{v} )_{\mathcal{H}},\quad \forall \mathtt{v}\in \mathcal{V}, \quad \text{i.e.,}\quad \mathtt{A}_w \mathtt{u}_\parallel = \mathtt{y}_\parallel \in \mathcal{V}^s. \end{aligned}$$

Therefore, \(\mathtt {u}_\parallel \in \mathcal {V}^{s+2}\) and \(\|\mathtt {u}_\parallel \|{ }_{\mathcal {V}^{s+2}} \lesssim \|\mathtt {y}_\parallel \|{ }_{\mathcal {V}^s} \lesssim \|\mathtt {y}\|{ }_{\mathcal {V}^s}\). Finally, Eq. (4.34) is rewritten as: \(\mathtt {B}^\dagger \mathtt {r} = \mathtt {y} - \mathtt {A}\mathtt {u} \in \mathcal {V}^s \), i.e., \(\mathtt {r} \in \mathcal {Q}^{s+1}\) and \( \|\mathtt {r}\|{ }_{\mathcal {Q}^{s+1}} \lesssim \|\mathtt {y}\|{ }_{\mathcal {V}^s} + \|\mathtt {u}\|{ }_{\mathcal {V}^{s+2}} \lesssim \|\mathtt {y}\|{ }_{\mathcal {V}^s} + \| \mu \|{ }_{\mathcal {F}^s}\). ■

With these tools, one can determine the regularity of the solution to the mixed problem (4.21). We concentrate on solutions more regular in space and time than those provided by Theorem 4.3.22 or Remark 4.3.23, which are needed for the numerical analysis [17].

Theorem 4.4.8

Assume that the canonical imbedding \(i_{\mathcal {V}\rightarrow \mathcal {H}}\) is compact, that the sesquilinear, continuous and Hermitian form a fulfills property (4.15) with ν = 0, that the sesquilinear and continuous form b satisfies the inf-sup condition (4.10) for some β > 0, that the sesquilinear and continuous form b w satisfies a similar inf-sup condition in \(\mathcal {H} \times \mathcal {Q}_w\) , and that the double orthogonality property holds between \(\mathcal {V}\) and \(\mathcal {H}\).

Let T > 0, s ≥ 1, p ≥ 1 and \(m\in \mathbb {N}\) be given. Suppose that the data (f, g, u 0, u 1) of Problem (4.21) satisfy the following regularity and compatibility properties:

  1. 1.

    \(\mathtt {f}\in W^{m,p}(0,T;\mathcal {V}^s)\) ;

  2. 2.

    \(\mathtt {g}\in C^m([0,T];\mathcal {F}^{s-1}) \cap C^{m+1}([0,T];\mathcal {F}^{s-2}) \cap W^{m+2,p}(0,T;\mathcal {F}^{s-3})\) ;

  3. 3.

    \(\mathtt {u}_0\in \mathcal {V}^{s+1}\) and \(\mathtt {u}_1\in \mathcal {V}^s\) , and the conditions (4.26) hold;

  4. 4.

    the quantities u m∥ and u m+1,∥, defined by the formulas (4.32) and (4.33) in terms of the projections \(\mathtt {u}_{0\parallel },\ \mathtt {u}_{1\parallel },\ \left ( \mathtt {f}_\parallel ^{(\ell )}(0) \right )_{\ell =0,\ \ldots ,\ m-2}\) onto \(\mathcal {L}\) , belong, respectively, to \(\mathcal {V}^{s+1}\) and \(\mathcal {V}^s\).

Then, the solution (u, p) to Problem (4.21) satisfies

$$\displaystyle \begin{aligned} (\mathtt{u},\mathtt{u}') \in C^m([0,T] ; \mathcal{V}^{s+1} \times \mathcal{V}^s) \,, \quad (\mathtt{u}'',\mathtt{p}) \in W^{m,p}(0,T ; \mathcal{V}^{s-1} \times \mathcal{Q}^{s}),\end{aligned} $$

and depends continuously on the data (f, g, u 0, u 1, u m, u m+1,∥) in their respective spaces.

Proof

We take the characterisations of \((\mathtt{u}_\parallel, \mathtt{u}_\perp, \mathtt{p})\) from the proof of Theorem 4.3.19. The parallel component \(\mathtt{u}_\parallel\) is the solution to the unconstrained evolution problem:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ \mathtt{u}_\parallel \ \mathit{such \ that}\\ \displaystyle\frac{d^2\mathtt{u}_\parallel}{dt^2} + \mathtt{A}_w\mathtt{u}_\parallel = \mathtt{f}_\parallel,\quad t>0\,;\\ \mathtt{u}_\parallel(0)=\mathtt{u}_{0\parallel}\;,\;\displaystyle\displaystyle\frac{d\mathtt{u}_\parallel}{dt}(0)=\mathtt{u}_{1\parallel}\,; \end{array} \right.\end{aligned} $$

and one applies Theorem 4.4.4. The perpendicular component \(\mathtt{u}_\perp\) is defined, at each time, by the conditions

$$\displaystyle \begin{aligned} \forall \mathtt{q}\in\mathcal{Q},\quad b(\mathtt{u}_\perp(t),\mathtt{q}) = \langle \mathtt{g}(t),\mathtt{q} \rangle_{\mathcal{Q}} \quad \text{or}\quad \forall \mathtt{q}\in\mathcal{Q}_w,\quad b_w(\mathtt{u}_\perp(t),\mathtt{q}) = \langle \mathtt{g}(t),\mathtt{q} \rangle_{\mathcal{Q}_w}. \end{aligned}$$

Applying Lemma 4.2.18, one finds \(\mathtt {u}_\perp \in C^m([0,T];\mathcal {V}^{s+1}) \cap C^{m+1}([0,T];\mathcal {V}^s) \cap W^{m+2,p}(0,T;\mathcal {V}^{s-1})\), the continuous dependence following from the definition of the spaces \(\mathcal {F}^\sigma \) and their norms. Finally, the multiplier p satisfies

$$\displaystyle \begin{aligned} \mathtt{B}^\dagger \mathtt{p} = \mathtt{f} - \mathtt{u}'' - \mathtt{A}_w\mathtt{u} \in W^{m,p}(0,T;\mathcal{V}^{s-1}), \end{aligned}$$

the norm of the r.h.s. being bounded by that of the data in their respective spaces. Hence, \(\mathtt {p} \in W^{m,p}(0,T;\mathcal {Q}^s)\) by definition of the latter space, with continuous dependence on the data. ■

Remark 4.4.9

Let us comment on the assumptions of this theorem.

  • The form b w and its inf-sup condition are not needed if s ≥ 2.

  • If \(\mathtt {f}\in W^{m,1}(0,T;\mathcal {V}^s) \cap C^m([0,T];\mathcal {V}^{s-1})\) or \(\mathtt {f}\in W^{m+1,1}(0,T;\mathcal {V}^{s-1})\), and moreover, \(\mathtt {g} \in C^{m+2}([0,T] ; \mathcal {F}^{s-3})\), then \((\mathtt {u}'',\mathtt {p}) \in C^m([0,T] ; \mathcal {V}^{s-1} \times \mathcal {Q}^{s})\).

  • The regularity assumption on g has been chosen by an “aesthetic” criterion, viz., that \(\mathtt{u}_\parallel\) and \(\mathtt{u}_\perp\) should have the same space-time regularity. For the purpose of convergence analysis, this is not always necessary: the regularity of \(\mathtt{u}_\perp\) can be limited by that of \(\mathtt{u}_\parallel\). In that case, it suffices to remark that \(\mathtt {u}_\perp \in E([0,T] ; \mathcal {V}^\sigma )\)—for any space E measuring time regularity on [0, T]—iff \(\mathtt {g} \in E([0,T] ; \mathcal {F}^{\sigma -2})\).

4.5 Time-Harmonic Problems

To conclude this brief overview, we consider classes of problems that stand in-between static and time-dependent formulations. From a practical point of view, it is assumed that the time-dependence is explicitly known—in \(\exp (-\imath \omega t)\)—which allows us to remove the time variable from the formulation. We shall consider two cases, depending on whether the pulsation ω of the signal is data, i.e., the fixed frequency problem, or it is an unknown, to be determined, i.e., the unknown frequency problem. From an abstract point of view, they respectively correspond to Helmholtz-like problems, and to eigenproblems. We again provide elements of proofs in this section.

4.5.1 Helmholtz-Like Problem

Let H and V be two Hilbert spaces, such that V is a vector subspace of H with continuous imbedding i VH. In what follows, we choose H as the pivot space. Let a(⋅, ⋅) be a sesquilinear continuous form on V × V , A the corresponding operator defined in (4.4) with V = W, and \(\lambda \in \mathbb {C}\setminus \{0\}\). Given f ∈ V ′, the Helmholtz-like problem to be solved is

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V \ \mathit{such \ that}\\ \forall v\in V,\ a(u,v)+\lambda(u,v)_H=\langle f,v\rangle. \end{array} \right. \end{aligned} $$
(4.36)

Such problems are usually solved with the help of the Fredholm alternative.

Theorem 4.5.1 (Helmholtz-Like Problem)

Assume that the sesquilinear form a is such that A is an isomorphism from V to V ′, and that the canonical imbedding i VH is compact. Then:

  • either, for all f ∈ V ′, Problem (4.36) has one, and only one, solution u, which depends continuously on f;

  • or, Problem (4.36) has solutions if, and only if, f satisfies a finite number n λ of orthogonality conditions. Then, the space of solutions is affine, and the dimension of the corresponding linear vector space (i.e., the kernel) is equal to n λ . Moreover, the part of the solution that is orthogonal to the kernel depends continuously on the data.

Proof

Since the operator A −1 is well-defined, one can replace the right-hand side with a(A −1f, v) in (4.36). Also, one can replace the second term as follows. We mention the imbedding i VH explicitly here, to write

$$\displaystyle \begin{aligned} \forall v\in V,\ (u,v)_H = (i_{V\rightarrow H}u,v)_H = \langle i_{V\rightarrow H}u,v\rangle = a(A^{-1}\circ i_{V\rightarrow H} u,v). \end{aligned}$$

So, Problem (4.36) equivalently rewrites

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V\ \mathit{such \ that}\\ (I_V+\lambda\,A^{-1}\circ i_{V\rightarrow H}) u = A^{-1}f\mbox{ in } V. \end{array} \right. \end{aligned}$$

To conclude, we note that i VH is a compact operator, whereas A −1 is a bounded operator. According to Proposition 4.1.2, A −1 ∘ i VH is a compact operator of \(\mathcal {L}(V)\), so that Theorem 4.1.18 and Corollary 4.1.19 (Fredholm alternative) yield the desired result as far as the alternative is concerned.

There remains to study the continuous dependence of the solution with respect to the data. Let T = I V  + λ A −1 ∘ i VH, \(K_\lambda = \ker (T)\) and R λ  = R(T).

First, assume that K λ  = {0}. According to Theorem 4.1.18, T is a bijective mapping of \(\mathcal {L}(V)\). Then, the Open Mapping Theorem 4.1.4 states that T −1 belongs to \(\mathcal {L}(V)\), so one concludes that

$$\displaystyle \begin{aligned} \|u\|{}_V \le |||T^{-1} |||\, ||| A^{-1} |||\, \|f\|{}_{V'}. \end{aligned}$$

Otherwise, assume that K λ is a finite-dimensional subspace of V that is not reduced to {0}. Let \(n_\lambda =\dim K_\lambda \). According to Theorem 4.1.18, R λ is a closed subspace of V , and codim R λ  = n λ . Moreover, the restriction of T to \(K_\lambda ^\perp \), denoted by \(T_{|K_\lambda ^\perp }\), is a bijective mapping from \(K_\lambda ^\perp \) to R λ . Thus, Problem (4.36) has a solution if, and only if, f satisfies n λ orthogonality conditions. In this case, the solution u can be written as u = u ⊥ + u 0, where u ⊥ belongs to \(K_\lambda ^\perp \) and is unique, and u 0 is any element of the kernel K λ . When these conditions are met, one has

$$\displaystyle \begin{aligned} \|u_\perp\|{}_V \le ||| (T_{|K_\lambda^\perp})^{-1} |||\, ||| A^{-1} |||\, \|f\|{}_{V'}. \end{aligned}$$
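To illustrate the alternative, consider a classical one-dimensional example, stated here for illustration only: take V = H 1 0(0, π), H = L 2(0, π), \(a(u,v)=\int_0^\pi u'\,\overline{v'}\,dx\) and λ = −ω 2 with ω > 0, so that (4.36) is the variational form of

$$\displaystyle \begin{aligned} -u'' - \omega^2 u = f\ \mbox{ in }\ ]0,\pi[,\qquad u(0)=u(\pi)=0. \end{aligned}$$

If ω is not an integer, the first branch of the alternative applies. If ω = k for some integer k ≥ 1, the kernel is spanned by \(x\mapsto \sin(kx)\), so n λ  = 1, and the problem has solutions if, and only if, f satisfies the single orthogonality condition \(\langle f, \sin(k\,\cdot)\rangle = 0\).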

Remark 4.5.2

For practical situations that ensure that A −1 is well-defined, we refer to Remark 4.2.15.

Corollary 4.5.3 (Helmholtz-Like Problem)

Provided there exists \(\mu \in \mathbb {C}\) such that the sesquilinear form a(⋅, ⋅) + μ(⋅, ⋅) H is coercive on V × V , and provided the canonical imbedding i VH is compact, the conclusions of Theorem 4.5.1 apply.

Proof

In Problem (4.36), one simply replaces a(u, v) + λ(u, v) H with {a(u, v) + μ(u, v) H } + {λ − μ}(u, v) H . ■

Remark 4.5.4

It is possible to use compact operators of \(\mathcal {L}(H)\) instead. For illustrative purposes, we adopt this point of view in the next subsection.

Remark 4.5.5

Static problems can be seen as Helmholtz-like problems with λ = 0. Also, in the particular case when a(⋅, ⋅) is coercive and λ ≥ 0, the sesquilinear form a(u, v) + λ(u, v) H is directly coercive on V × V , so the Lax-Milgram Theorem 4.2.8 applies: Problem (4.36) is well-posed in the Hadamard sense. On the other hand, when λ < 0, the form \(v \mapsto a(v,v)+\lambda \|v\|{ }_H^2\) can be indefinite (no specific sign). In this case, Problem (4.36) is well-posed in the Fredholm sense.

This result can be recast quite simply into the so-called coercive + compact framework. Let c(⋅, ⋅) be a second continuous sesquilinear form on H × V . Given f ∈ V ′, the second Helmholtz-like problem to be solved is

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v)+c(u,v)=\langle f,v\rangle. \end{array} \right. \end{aligned} $$
(4.37)

Remark 4.5.6

Problems (4.36) and (4.37) belong to the class of perturbed problems, here with a compact perturbation.

The previous Theorem can thus be generalized.

Theorem 4.5.7 (Helmholtz-Like Problem)

Assume that the sesquilinear form a is such that A is an isomorphism from V to V ′ and that the canonical imbedding i VH is compact. Then:

  • either, for all f ∈ V ′, Problem (4.37) has one, and only one, solution u, which depends continuously on f;

  • or, Problem (4.37) has solutions if, and only if, f satisfies a finite number n c of orthogonality conditions. Then, the space of solutions is affine, and the dimension of the corresponding linear vector space (the kernel) is equal to n c . Moreover, the part of the solution that is orthogonal to the kernel depends continuously on the data.

Proof (Sketched)

Remark that, for all u, v ∈ V , c(u, v) = c(i VHu, v).

Given h ∈ H, Problem

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ w\in V \ \mathit{such \ that}\\ \forall v\in V,\ a(w,v)=c(h,v) \end{array} \right.\end{aligned} $$

admits one, and only one, solution, and the mapping \(T_c\ :\ h\mapsto w\) belongs to \(\mathcal {L}(H,V)\). Thus, the Helmholtz-like problem (4.37) can be rewritten equivalently as

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u\in V\ \mathit{such \ that}\\ (I_V+T_c\circ i_{V\rightarrow H}) u = A^{-1}f\mbox{ in } V. \end{array} \right.\end{aligned} $$

One concludes as in the proof of Theorem 4.5.1. ■

We now turn to Helmholtz-like problems with constraints. Let us introduce a third Hilbert space Q, an element g ∈ Q′, and a continuous sesquilinear form b(⋅, ⋅) on V × Q. The Helmholtz-like problem with constraints is formulated as follows:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,p)\in V\times Q\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v)+c(u,v)+\overline{b(v,p)}=\langle f,v\rangle \\ \forall q\in Q,\ b(u,q)=\langle g,q\rangle. \end{array} \right.\end{aligned} $$
(4.38)

We introduce once more the kernel of b(⋅, ⋅),

$$\displaystyle \begin{aligned} K = \{v\in V\ :\ \forall q\in Q,\ b(v,q)=0\}.\end{aligned} $$

Let us assume that the form b satisfies the inf-sup condition (4.10) for some β > 0. According to Lemma 4.2.18, there existsFootnote 14 one, and only one, \(u_g \in K^\perp\) such that Bu g  = g. Let us introduce f′∈ V ′ defined by

$$\displaystyle \begin{aligned} \forall v\in V,\ \langle f',v\rangle = \langle f,v\rangle - a(u_g,v) - c(u_g,v).\end{aligned} $$

It is then possible to consider another Helmholtz-like problem, set in K. It writes

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u_0\in K\ \mathit{such \ that}\\ \forall v_\parallel\in K,\ a(u_0,v_\parallel)+c(u_0,v_\parallel)=\langle f',v_\parallel\rangle. \end{array} \right. \end{aligned} $$
(4.39)

One relates these two Helmholtz-like problems in the following way.

Proposition 4.5.8

Assume that the form b satisfies the inf-sup condition (4.10) for some β > 0. Let \(u_g \in K^\perp\) be characterized by Bu g  = g.

  1. 1.

    If there exists (u, p) a solution to (4.38), then u − u g solves (4.39).

  2. 2.

    If there exists u 0 a solution to (4.39), then there exists p  Q such that (u 0 + u g , p) solves (4.38).

Proof

  1. 1.

    Straightforward.

  2. 2.

    Let u′ = u 0 + u g . By definition, one has

    $$\displaystyle \begin{aligned} \forall q\in Q,\ b(u',q)=\langle g,q\rangle. \end{aligned}$$

    Let v ∈ V be split as \(v = v_\parallel + v_\perp\), with \((v_\parallel, v_\perp) \in K \times K^\perp\).

    $$\displaystyle \begin{aligned} \begin{array}{rcl} a(u',v)+c(u',v) &\displaystyle =&\displaystyle \langle f,v_\parallel\rangle + a(u',v_\perp)+c(u',v_\perp) \\ &\displaystyle =&\displaystyle \langle f,v\rangle + \{a(u',v_\perp)+c(u',v_\perp) - \langle f,v_\perp\rangle\}. \end{array} \end{aligned} $$

    The antilinear form \(v \mapsto a(u', v_\perp) + c(u', v_\perp) - \langle f, v_\perp\rangle\) belongs to the polar set of K. From Lemma 4.2.18, there exists p ∈ Q such that

    $$\displaystyle \begin{aligned} \forall v\in V,\ a(u',v)+c(u',v) - \langle f,v\rangle = -\overline{b(v,p)}. \end{aligned}$$

    It follows that the couple (u′, p) solves (4.38). ■

From there, one can state the result in regard to Helmholtz-like problems with constraints.

Theorem 4.5.9 (Helmholtz-Like Problem with Constraints)

Assume that the sesquilinear form a is coercive on K, that the canonical imbedding i KH is compact, and finally, that the form b satisfies the inf-sup condition (4.10) for some β > 0. Then, the Helmholtz-like problems (4.38) and (4.39) fit into the coercive + compact framework.

Proof

According to the previous proposition, we know that Problem (4.38) admits a solution u if, and only if, Problem (4.39) admits a solution u 0. Moreover, the two are related by u = u 0 + u g , with \(u_g \in K^\perp\) being unique and such that \(\|u_g\|{}_V\le \beta ^{-1}\|g\|{}_{Q'}\) (Lemma 4.2.18). This characterizes the part of the solution (if it exists…) to Problem (4.38) that belongs to \(K^\perp\). So, for simplicity, we assume that g = 0, so that u 0 = u (and f′ = f), and we choose to focus on Problem (4.39) from now on.

Since a(⋅, ⋅) is coercive on K, and since b(⋅, ⋅) satisfies an inf-sup condition, the Babuska-Brezzi Theorem 4.2.19 states that, given \( \underline {f}\in V'\), Problem

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (w,r)\in V\times Q\ \mathit{such \ that}\\ \forall v\in V,\ a(w,v) + \overline{b(v,r)} = \langle \underline{f},v\rangle \\ \forall q\in Q,\ b(w,q) = 0 \end{array} \right. \end{aligned}$$

is well-posed, and the mapping \(T\ :\ \underline {f}\mapsto w\) belongs to \(\mathcal {L}(V',K)\). In (4.39), one can thus replace the right-hand side with \(a(Tf, v_\parallel)\), whereas the second term is likewise replaced with \(a(T_c \circ i_{K\rightarrow H}\, u_0, v_\parallel)\). Thanks to the coerciveness of the form a on K, Problem (4.39) rewrites

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ u_0\in K\ \mathit{such \ that}\\ (I_K+T_c\circ i_{K\rightarrow H}) u_0 = Tf\mbox{ in } K. \end{array} \right. \end{aligned}$$

Noting that T c  ∘ i KH is a compact operator of \(\mathcal {L}(K)\), we conclude by using the Fredholm alternative. ■

4.5.2 Eigenproblem

Let H and V be two Hilbert spaces, such that V is a separable, dense, vector subspace of H with continuous imbedding i VH. We choose H as the pivot space. Let a(⋅, ⋅) be a sesquilinear continuous form on V × V with the associated operator \(A\in \mathcal {L}(V,V')\). The eigenproblem to be solved is

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,\lambda)\in (V\setminus\{0\})\times\mathbb{C}\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v) = \lambda(u,v)_H. \end{array} \right. \end{aligned} $$
(4.40)

With a slight abuse of notation, we say that u is an eigenvector, λ is an eigenvalue, and (u, λ) is an eigenpair. As a matter of fact, assume that the operator A is an isomorphism, and let \(T\in \mathcal {L}(H,V)\) be defined by

$$\displaystyle \begin{aligned} g\mapsto Tg=w,\ w\mbox{ solution to }\left\{ \begin{array}{l} \mathit{Find}\ w\in V\ \mathit{such \ that}\\ \forall v\in V,\ a(w,v)=(g,i_{V\rightarrow H}v)_H. \end{array} \right. \end{aligned} $$

Above, w is well-defined, because A is an isomorphism. Indeed, one can replace the right-hand side (g, i VHv) H with \(\langle i_{H\rightarrow V'}g,v\rangle _V\), so that \(w=A^{-1}\circ i_{H\rightarrow V'}g\). In terms of operators, one has \(T=A^{-1}\circ i_{H\rightarrow V'}\). Next, let

$$\displaystyle \begin{aligned} T_H = i_{V\rightarrow H}\circ T\in \mathcal{L}(H). \end{aligned} $$

Given a solution (u, λ) of (4.40), one finds that T H u = λ −1 u, i.e., u belongs to the eigenspace \(E_{\lambda ^{-1}}(T_H)\).Footnote 15 Thus, in H, the eigenproblem (4.40) boils down to:

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,\nu)\in (H\setminus\{0\})\times\mathbb{C}\ \mathit{such \ that}\\ \nu u = T_H u \end{array} \right.\,, \end{aligned} $$

where ν = λ −1≠0: (u, ν) is an eigenpair of T H , which justifies a posteriori the definition of (u, λ) as an eigenpair of (4.40). One has R(T H ) ⊂ V , so all eigenvectors belong to V .
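The computation behind this reduction is elementary. If (u, λ) solves (4.40), then, by the definition of T,

$$\displaystyle \begin{aligned} \forall v\in V,\quad a(u,v) = \lambda\,(u,v)_H = \lambda\, a(T\circ i_{V\rightarrow H}\,u, v), \end{aligned}$$

so that \(u = \lambda\, T\circ i_{V\rightarrow H}\,u\), since A is injective; note that λ ≠ 0, as λ = 0 would force a(u, ⋅) = 0, hence u = 0. Applying i VH to both sides and dividing by λ yields T H u = λ −1 u.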

Finally, if the canonical imbedding i VH is compact, then, by construction, T H is a compact operator (see Proposition 4.1.2) and one may apply Theorem 4.1.7.

Theorem 4.5.10 (Eigenvalues)

Assume that the operator A is an isomorphism and that the canonical imbedding i VH is compact. Then, 0 is not an eigenvalue of Problem (4.40). Moreover, the eigenvalues are all of finite multiplicities and the set of their moduli can be reordered as a nondecreasing sequence whose limit is + ∞.

One can be more precise, with the help of Theorem 4.1.20. This requires a compact and self-adjoint operatorFootnote 16 T H , for which it is sufficient to have a Hermitian form a (apply Proposition 4.1.13). In this case, the geometric and algebraic multiplicities of all eigenvalues coincide.

Theorem 4.5.11 (Eigenproblem)

Assume that the sesquilinear form a is Hermitian, that the operator A is an isomorphism and that the canonical imbedding i VH is compact. Then, 0 is not an eigenvalue. Moreover, there exists a Hilbert basis (e k ) k of H made of eigenvectors of Problem (4.40) with corresponding real eigenvalues (λ k ) k . Finally, the eigenvalues are all of finite multiplicities and (|λ k |) k can be reordered as a nondecreasing sequence whose limit is + ∞.

Corollary 4.5.12 (Eigenproblem)

In addition to the hypotheses of Theorem 4.5.11 , assume that the sesquilinear form a is coercive. In this case, all eigenvalues (λ k ) k are strictly positive, and \((\lambda _k^{-1/2}e_k)_k\) is a Hilbert basis for V .
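In finite dimension, Theorem 4.5.11 and Corollary 4.5.12 reduce to the generalized Hermitian eigenproblem Ax = λMx. The following sketch (illustrative only, not taken from the book; it assumes NumPy/SciPy and random Hermitian positive definite matrices) checks the three conclusions: strictly positive eigenvalues, an M-orthonormal eigenbasis (the analogue of the Hilbert basis of H), and A-orthonormality of the rescaled vectors \(\lambda _k^{-1/2}e_k\) (the analogue of the Hilbert basis of V ):

```python
# Finite-dimensional analogue of Theorem 4.5.11 and Corollary 4.5.12.
# A stands for the coercive Hermitian form a(.,.), M for the pivot product (.,.)_H.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
CA = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = CA.conj().T @ CA + n * np.eye(n)    # Hermitian positive definite ("coercive")
CM = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = CM.conj().T @ CM + n * np.eye(n)    # Hermitian positive definite ("mass")

lam, E = eigh(A, M)                     # solves A e_k = lambda_k M e_k
assert np.all(lam > 0)                  # coercivity: strictly positive eigenvalues
# The columns of E are M-orthonormal: a "Hilbert basis" for the H-inner product.
assert np.allclose(E.conj().T @ M @ E, np.eye(n))
# After rescaling by lambda_k^{-1/2}, they are A-orthonormal: a basis for "V".
F = E / np.sqrt(lam)
assert np.allclose(F.conj().T @ A @ F, np.eye(n))
```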

We turn to an eigenproblem with constraints. Let us introduce a third Hilbert space Q, a continuous sesquilinear form b(⋅, ⋅) on V × Q, and the kernel of b(⋅, ⋅),

$$\displaystyle \begin{aligned} K = \{v\in V\ :\ \forall q\in Q,\ b(v,q)=0\}. \end{aligned}$$

The eigenproblem set in K writes

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,\lambda)\in (K\setminus\{0\})\times\mathbb{C}\ \mathit{such \ that}\\ \forall v\in K,\ a(u,v) = \lambda(u,v)_H. \end{array} \right. \end{aligned} $$
(4.41)

Define L as the closure of K in H. The notion of double orthogonality refers to Definition 4.3.17: here, it means that K and K ⊥ are orthogonal with respect to both a(⋅, ⋅) and (⋅, ⋅) H .

Theorem 4.5.13 (Eigenproblem with Constraints)

Assume that the sesquilinear form a is coercive and Hermitian on K, that the canonical imbedding i KH is compact, and that K and K ⊥ satisfy a double orthogonality property with respect to a(⋅, ⋅) and (⋅, ⋅) H . Then, 0 is not an eigenvalue. Moreover, there exists a Hilbert basis (f k ) k of L made of eigenvectors of Problem (4.41) with corresponding eigenvalues (ν k ) k , such that \((\nu _k^{-1/2}f_k)_k\) is a Hilbert basis for K. Furthermore, the eigenvalues can be reordered as a nondecreasing sequence of real, strictly positive, numbers whose limit is + ∞. Finally, solving (4.41) is equivalent to solving

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,\lambda)\in (K\setminus\{0\})\times\mathbb{C}\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v) = \lambda(u,v)_H. \end{array} \right. \end{aligned} $$
(4.42)

Proof

Endow L with the norm of H, and K with the norm of V . Then, L and K are two Hilbert spaces, and K is, by definition, a dense vector subspace of L with continuous imbedding. Thus, all the assumptions of Theorem 4.5.11 and its Corollary 4.5.12 are fulfilled, so the results on the eigenvalues and the Hilbert bases of L and K follow.

Finally, if (u, λ) solves (4.42), it obviously solves (4.41). Conversely, if (u, λ) solves (4.41), then, given a test function v ∈ V split as v = v ∥ + v ⊥ with v ∥ ∈ K, v ⊥ ∈ K ⊥, it holds that

$$\displaystyle \begin{aligned} a(u,v) = a(u,v_\parallel)\stackrel{(4.41)}{=}\lambda(u,v_\parallel)_H=\lambda(u,v)_H, \end{aligned}$$

thanks to the double orthogonality property. Hence, (u, λ) solves (4.42). ■

On the other hand, an eigenproblem with constraints can be formulated in mixed form

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \mathit{Find}\ (u,p,\lambda)\in (V\setminus\{0\})\times Q\times\mathbb{C}\ \mathit{such \ that}\\ \forall v\in V,\ a(u,v) + \overline{b(v,p)} = \lambda(u,v)_H\\ \forall q\in Q,\ b(u,q)=0. \end{array} \right. \end{aligned} $$
(4.43)

Note that we do not impose that p ≠ 0, since the eigenvector of interest is u (cf. [50] for an illuminating discussion on this topic). It is interesting to compare the two eigenproblems (4.41) and (4.43).

Proposition 4.5.14

One has the following results:

  1. Let (u, p, λ) be an eigentriple of (4.43): then (u, λ) is an eigenpair of (4.41).

  2. Assume that the form b satisfies the inf-sup condition (4.10) for some β > 0. Let (u, λ) be an eigenpair of (4.41): then there exists p ∈ Q such that (u, p, λ) is an eigentriple of (4.43).

  3. Assume further a double orthogonality property of K and K ⊥ with respect to the form a and (⋅, ⋅) H . Then, any eigentriple (u, p, λ) of (4.43) is such that p = 0.

Proof

Let us proceed sequentially.

  1. Let (u, p, λ) be an eigentriple of (4.43). According to the second equation, u belongs to K. Then, taking v ∈ K in the first equation, one recovers the statement of (4.41). So, (u, λ) is an eigenpair of (4.41).

  2. Conversely, let (u, λ) be an eigenpair of (4.41). From the definition of K, we conclude that, for all q ∈ Q, b(u, q) = 0. Next, splitting v ∈ V as v = v ∥ + v ⊥ with (v ∥, v ⊥) ∈ K × K ⊥, one obtains

    $$\displaystyle \begin{aligned} a(u,v)-\lambda(u,v)_H = a(u,v_\perp)-\lambda(u,v_\perp)_H, \end{aligned}$$

    since (u, λ) solves (4.41). It follows (as usual) that the antilinear form v ↦ a(u, v) − λ(u, v) H belongs to the polar set of K. According to Lemma 4.2.18 (b(⋅, ⋅) satisfies an inf-sup condition), there exists p ∈ Q such that

    $$\displaystyle \begin{aligned} \forall v\in V,\ a(u,v)-\lambda(u,v)_H = -\overline{b(v,p)}. \end{aligned}$$

    In other words, (u, p, λ) is an eigentriple of (4.43).

  3. Finally, let us assume a double orthogonality property, and consider an eigentriple (u, p, λ) of (4.43). Recall that (u, λ) is an eigenpair of (4.41) (see step 1). According to Lemma 4.2.18, it is enough to prove that B † p = 0. To that aim, consider any v = v ∥ + v ⊥ with (v ∥, v ⊥) ∈ K × K ⊥, and compute

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \langle B^\dagger p,v \rangle &\displaystyle =&\displaystyle \overline{b(v,p)} = \lambda(u,v)_H - a(u,v) \\ &\displaystyle =&\displaystyle \{\lambda(u,v_\parallel)_H - a(u,v_\parallel)\} + \{\lambda(u,v_\perp)_H - a(u,v_\perp)\} = 0. \end{array} \end{aligned} $$

    Above, the first part vanishes because (u, λ) solves (4.41), whereas the second part vanishes thanks to the double orthogonality property. The conclusion follows. ■
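Item 2 of the proposition can also be verified numerically. In the sketch below (illustrative only, not the book's method; real symmetric case, assuming NumPy/SciPy), the kernel K = ker(B) is computed explicitly, the eigenproblem (4.41) is solved in K, and the multiplier p is recovered by solving B^T p = λ(u, ⋅) H  − a(u, ⋅), which is solvable precisely because this residual vanishes on K:

```python
# Finite-dimensional sketch of Proposition 4.5.14, item 2 (real symmetric case).
import numpy as np
from scipy.linalg import eigh, lstsq, null_space

rng = np.random.default_rng(1)
n, m = 8, 3
CA = rng.standard_normal((n, n))
A = CA.T @ CA + n * np.eye(n)            # coercive symmetric form a(.,.)
CM = rng.standard_normal((n, n))
M = CM.T @ CM + n * np.eye(n)            # pivot inner product (.,.)_H
B = rng.standard_normal((m, n))          # full-rank constraint: inf-sup holds

Z = null_space(B)                        # columns of Z span K = ker(B)
lam, Y = eigh(Z.T @ A @ Z, Z.T @ M @ Z)  # eigenproblem (4.41), restricted to K
u, l0 = Z @ Y[:, 0], lam[0]              # an eigenpair (u, lambda) of (4.41)

r = l0 * (M @ u) - A @ u                 # the form v -> lambda(u,v)_H - a(u,v)
assert np.allclose(Z.T @ r, 0)           # it vanishes on K (lies in the polar set of K)
p, *_ = lstsq(B.T, r)                    # hence B^T p = r is solvable: the multiplier
assert np.allclose(B.T @ p, r) and np.allclose(B @ u, 0)
# (u, p, l0) is then an eigentriple of the mixed eigenproblem (4.43).
```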

4.6 Summing Up

We note that, according to the mathematical framework we have developed, the problems we solve are usually composed of two parts:

  • A function space in which we look for the solution, endowed with a given norm to measure it;

  • A set of equations or, in the case of Variational Formulations, the result of the action of the solution on test functions.

When the first part is not explicitly specified, one has to be careful! As an example, we refer the interested reader to Grisvard's works, for instance, [125], in which singular solutions of the Poisson problem are exhibited: these solutions are governed by the homogeneous Poisson problem, so, at first glance, one would expect the solution to be zero, but this is not the case!

As far as Maxwell's equations and related models are concerned, Chap. 1 deals mainly with (sets of) equations, that is, the second part. On the other hand, no information is provided as to the relevant spaces of solutions, the first part. Therefore, in order to solve those problems, one has to build those spaces, using, for instance, the expression of the electromagnetic energy, or the expression of Coulomb's law. These topics will be addressed at length in Chaps. 5, 6, 7 and 8. To that aim, we introduced (quite) well-known classes of function spaces in the previous chapter, Lebesgue or Sobolev spaces, for the most part. We also provided some results about the norms that can be used to measure elements of those spaces.