This chapter is devoted to mathematical prerequisites, including a detailed discussion of the classification of partial differential equations and of systems of partial differential equations, the classification of domains in which a process takes place, notions of solutions, and additional conditions such as initial or boundary conditions imposed on solutions.

1 Classification of Linear Partial Differential Equations of Kovalevskian Type

Consider the linear partial differential equation of Kovalevskian type

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m u + \sum\limits_{k+|\alpha|\leq m,\,k \neq m}a_{k,\alpha}(t,x)D_x^\alpha D_t^k u =f(t,x), \end{array} \end{aligned} $$

where D t  = −i∂ t and \(D_{x_k}=-i\partial _{x_k}, k=1,\cdots ,n,\, i^2=-1\). Here \(\alpha =\big (\alpha _1,\cdots ,\alpha _n \big ) \in \mathbb {N}^n\) is a multi-index and \(m \in \mathbb {N}\). We introduce the notion of the principal part of the given linear partial differential operator; this is the linear partial differential operator

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m + \sum\limits_{k+|\alpha|= m,\,k \neq m}a_{k,\alpha}(t,x)D_x^\alpha D_t^k, \end{array} \end{aligned} $$

and the notion of the part of lower order terms; this is the linear partial differential operator

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum\limits_{k+|\alpha|<m}a_{k,\alpha}(t,x)D_x^\alpha D_t^k. \end{array} \end{aligned} $$

To decide which type a given partial differential equation of Kovalevskian type has, we study the principal symbol of the principal part. In the principal part we replace D t by τ and \(D_{x_k}\) by ξ k . Consequently, we replace \(D_x^\alpha D_t^k=D_{x_1}^{\alpha _1} D_{x_2}^{\alpha _2} \cdots D_{x_n}^{\alpha _n} D_t^k\) by \(\xi _1^{\alpha _1} \xi _2^{\alpha _2}\cdots \xi _n^{\alpha _n}\tau ^k=:\xi ^{\alpha } \tau ^k\). In this way the principal symbol is given by

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tau^m + \sum\limits_{k+|\alpha|= m,\,k \neq m}a_{k,\alpha}(t,x)\xi^\alpha \tau^k. \end{array} \end{aligned} $$

Definition 3.1

Consider the above differential operator of Kovalevskian type where the coefficients of the principal part are assumed to be real in a domain \(G \subset \mathbb {R}^{n+1}\). Then the operator is called

  • elliptic in a point (t 0, x 0) ∈ G if the characteristic equation

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tau^m + \sum\limits_{k+|\alpha|= m,\,k \neq m}a_{k,\alpha}(t_0,x_0)\xi^\alpha \tau^k =0 \,\, \end{array} \end{aligned} $$

    has for ξ ≠ 0 no real roots τ 1 = τ 1(t 0, x 0, ξ), ⋯ , τ m  = τ m (t 0, x 0, ξ)

  • elliptic in a domain G if the operator is elliptic in every point (t 0, x 0) ∈ G

  • hyperbolic in a point (t 0, x 0) ∈ G if the characteristic equation

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tau^m + \sum\limits_{k+|\alpha|= m,\,k \neq m}a_{k,\alpha}(t_0,x_0)\xi^\alpha \tau^k =0 \,\,\end{array} \end{aligned} $$

    has for ξ ≠ 0 only real roots τ 1 = τ 1(t 0, x 0, ξ), ⋯ , τ m  = τ m (t 0, x 0, ξ)

  • strictly hyperbolic in a point (t 0, x 0) ∈ G if the characteristic equation

    $$\displaystyle \begin{aligned} \begin{array}{rcl} \tau^m + \sum\limits_{k+|\alpha|= m,\,k \neq m}a_{k,\alpha}(t_0,x_0)\xi^\alpha \tau^k =0 \,\,\end{array} \end{aligned} $$

    has for ξ ≠ 0 only real and pairwise distinct roots τ 1 = τ 1(t 0, x 0, ξ), ⋯ , τ m  = τ m (t 0, x 0, ξ)

  • hyperbolic (strictly hyperbolic) in a domain G if the operator is hyperbolic (strictly hyperbolic) in every point (t 0, x 0) ∈ G.
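To illustrate Definition 3.1, the characteristic equation can be solved symbolically. The following sketch is our own; the wave operator \(\partial_t^2 - c^2\partial_x^2\) and the Laplace operator \(\partial_t^2 + \partial_x^2\) are sample operators, not taken from the text.

```python
# A small symbolic illustration of Definition 3.1 (our own sketch; the wave operator
# and the Laplace operator below are sample operators, not taken from the text).
import sympy as sp

tau = sp.symbols('tau')
xi, c = sp.symbols('xi c', positive=True)

# wave operator d_t^2 - c^2 d_x^2: principal symbol tau^2 - c^2 xi^2
print(sp.solve(sp.Eq(tau**2 - c**2*xi**2, 0), tau))   # [-c*xi, c*xi]: real and distinct -> strictly hyperbolic
# Laplace operator d_t^2 + d_x^2: principal symbol tau^2 + xi^2
print(sp.solve(sp.Eq(tau**2 + xi**2, 0), tau))        # [-I*xi, I*xi]: no real roots -> elliptic
```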

Now we ask: which type does the nonstationary plate operator \(\partial _t^2 + \varDelta ^2\) have?

At first glance we are not able to give an answer because this operator is not of Kovalevskian type; hence it does not fit into the class of operators from Definition 3.1. This operator is not hyperbolic, but it is a so-called 2-evolution operator.

Let us introduce the notion of a p-evolution operator. To this end we consider the non-Kovalevskian (if p > 1) linear partial differential equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m u + \sum\limits^m_{j=1}A_j(t,x,D_x)D_t^{m-j}u=f(t,x), \end{array} \end{aligned} $$

where \(A_j=A_j(t,x,D_x)=\sum _{k=0}^{jp}A_{j,k}(t,x,D_x)\) are linear partial differential operators of order jp for a fixed integer p ≥ 1, and A j,k  = A j,k (t, x, D x ) are linear partial differential operators of order k. The principal part of this linear partial differential operator in the sense of Petrovsky is defined by

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m + \sum\limits^m_{j=1}A_{j,\,jp}(t,x,D_x)D_t^{m-j}. \end{array} \end{aligned} $$

Definition 3.2

The given linear partial differential operator

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m + \sum\limits^m_{j=1}A_j(t,x,D_x)D_t^{m-j} \end{array} \end{aligned} $$

is called a p-evolution operator if the principal symbol in the sense of Petrovsky

$$\displaystyle \begin{aligned} \begin{array}{rcl}\tau^m + \sum\limits^m_{j=1} A_{j,\,jp}(t,x,\xi)\tau^{m-j} \end{array} \end{aligned} $$

has only real and distinct roots τ 1 = τ 1(t, x, ξ), ⋯ , τ m  = τ m (t, x, ξ) for all points (t, x) from the domain of definition of the coefficients and for all ξ ≠ 0.

Remark 3.1.1

The set of 1-evolution operators coincides with the set of strictly hyperbolic operators. The p-evolution operators with p ≥ 2 represent generalizations of Schrödinger operators.
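Returning to the question posed before Definition 3.2, we can now check directly that the nonstationary plate operator is a 2-evolution operator (a short computation added here for illustration; compare also Exercise 2). Writing \(\partial_t^2+\varDelta^2=-\big(D_t^2-|D_x|^4\big)\) with \(|D_x|^2:=\sum_{k=1}^n D_{x_k}^2\), the normalized operator \(D_t^2-|D_x|^4\) has m = 2 and p = 2, and its principal symbol in the sense of Petrovsky is

$$\displaystyle \begin{aligned} \begin{array}{rcl} \tau^2 - |\xi|^4 \quad \mbox{with the roots} \quad \tau_{1,2}(t,x,\xi)=\pm |\xi|^2, \end{array} \end{aligned} $$

which are real and distinct for all ξ ≠ 0. Hence, it is indeed a 2-evolution operator.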

Remark 3.1.2

One of the later goals is to study Cauchy problems for p-evolution equations. Taking into account the Lax-Mizohata theorem for the principal symbol [140], the assumption in Definition 3.2 that the characteristic roots are real is necessary for proving well-posedness of the Cauchy problem.
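A classical illustration of why real characteristic roots are indispensable is Hadamard's example (added here only for orientation; it is not discussed further in this section). For the elliptic operator \(\partial_t^2+\partial_x^2\) consider the Cauchy data

$$\displaystyle \begin{aligned} \begin{array}{rcl} u(0,x)=0, \quad \partial_t u(0,x)=\frac{e^{-\sqrt{k}}}{k}\sin{}(kx), \quad k \in \mathbb{N}. \end{array} \end{aligned} $$

The corresponding solutions are

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_k(t,x)=\frac{e^{-\sqrt{k}}}{k^2}\sin{}(kx)\sinh{}(kt). \end{array} \end{aligned} $$

The data tend to zero together with all their derivatives as k →∞, whereas for every t > 0 the solutions u k (t, ⋅) grow like \(e^{kt-\sqrt{k}}\). Hence the solutions do not depend continuously on the data, so the Cauchy problem for this elliptic operator is not well-posed.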

2 Classification of Linear Partial Differential Equations of Second Order

Let us consider the general linear partial differential equation of second order

$$\displaystyle \begin{aligned} \sum\limits^n_{j,k=1} a_{jk}(x)\partial^2_{x_jx_k}u + \sum\limits^n_{k=1} b_k(x)\partial_{x_k}u+c(x)u=f(x)\end{aligned}$$

with real and continuous coefficients in a domain \(G \subset \mathbb {R}^n\). Let u be a twice continuously differentiable solution. Then we may assume the matrix of coefficients \((a_{jk}(x))^n_{j,k=1}\) to be symmetric in G; indeed, since the second mixed derivatives of u commute, we may replace a jk by (a jk  + a kj )/2 without changing the equation. Consequently,

  • all eigenvalues λ 1, ⋯ , λ n of the matrix \((a_{jk}(x))^n_{j,k=1}\) are real

  • the numbers of positive, negative and vanishing eigenvalues remain invariant under a regular coordinate transformation.

These properties allow for the following definition.

Definition 3.3

Let

$$\displaystyle \begin{aligned} \sum\limits^n_{j,k=1} a_{jk}(x)\partial^2_{x_jx_k}u + \sum\limits^n_{k=1} b_k(x)\partial_{x_k}u+c(x)u=f(x)\end{aligned}$$

be a given linear partial differential equation with real and continuous coefficients and right-hand side in a domain \(G \subset \mathbb {R}^n\). The matrix \((a_{jk}(x))^n_{j,k=1}\) is supposed to be symmetric in G. Let x 0 ∈ G and let λ 1, ⋯ , λ n denote the eigenvalues of the matrix \((a_{jk}(x_0))^n_{j,k=1}\).

  • If λ 1, ⋯ , λ n are non-vanishing and have the same sign in x 0, then the differential equation is called elliptic in x 0.

  • If λ 1, ⋯ , λ n are non-vanishing and if all eigenvalues, except exactly one, have the same sign in x 0, then the differential equation is called hyperbolic in x 0.

  • If λ 1, ⋯ , λ n are non-vanishing and if at least two are positive and at least two are negative in x 0, then the differential equation is called ultrahyperbolic in x 0.

  • If one eigenvalue is vanishing in x 0, then the differential equation is called parabolic in x 0.

  • If exactly one eigenvalue is vanishing in x 0 and if the other eigenvalues have the same sign in x 0, then the differential equation is called parabolic of normal type in x 0.
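Definition 3.3 can be checked mechanically by counting the signs of the eigenvalues of the symmetric coefficient matrix. The following sketch is our own illustration; the diagonal sample matrices are hypothetical and not taken from the text.

```python
# A mechanical check of Definition 3.3 (our own sketch; the diagonal sample matrices
# below are hypothetical and only serve as illustrations).
import numpy as np

def classify_second_order(a, tol=1e-12):
    """Classify the symmetric real coefficient matrix (a_jk) at a fixed point x0."""
    lam = np.linalg.eigvalsh(a)            # real eigenvalues of a symmetric matrix
    n = len(lam)
    pos = int(np.sum(lam > tol))
    neg = int(np.sum(lam < -tol))
    zero = n - pos - neg
    if zero == 0 and (pos == n or neg == n):
        return "elliptic"
    if zero == 0 and (pos == n - 1 or neg == n - 1):
        return "hyperbolic"
    if zero == 0 and pos >= 2 and neg >= 2:
        return "ultrahyperbolic"
    if zero == 1 and (pos == n - 1 or neg == n - 1):
        return "parabolic of normal type"
    if zero >= 1:
        return "parabolic"
    return "not covered here"

print(classify_second_order(np.diag([1.0, 1.0, 1.0])))          # elliptic
print(classify_second_order(np.diag([1.0, -1.0, -1.0])))        # hyperbolic
print(classify_second_order(np.diag([1.0, 1.0, -1.0, -1.0])))   # ultrahyperbolic
print(classify_second_order(np.diag([1.0, 1.0, 0.0])))          # parabolic of normal type
```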

Example 3.2.1

Consider in \(\mathbb {R}^2\) the partial differential equation

$$\displaystyle \begin{aligned} \partial_{t}^2 u-t^2\partial_{x}^2 u=0.\end{aligned}$$

The characteristic roots τ 1,2 are given by τ 1(t, x, ξ) = tξ and τ 2(t, x, ξ) = −tξ. Due to Definition 3.1, this partial differential equation is hyperbolic in \(\mathbb {R}^2\) and simultaneously strictly hyperbolic away from the line \(\{(t,x) \in \mathbb {R}^2 : t=0\}\). On the other hand, the eigenvalues in Definition 3.3 are given by λ 1(t, x) ≡ 1 and λ 2(t, x) = −t 2. Hence, with respect to Definition 3.3 this partial differential equation is hyperbolic away from the line \(\{(t,x) \in \mathbb {R}^2 : t=0\}\). On the line t = 0 this partial differential equation is parabolic of normal type. For this reason, this line is also called a parabolic line.

Example 3.2.2

Let

$$\displaystyle \begin{aligned} \sum\limits^n_{j,k=1} a_{jk}(x_1,\cdots,x_n)\;\partial^2_{x_j x_k} \end{aligned}$$

be a linear differential operator with real and continuous coefficients and symmetric matrix \((a_{jk}(x))^n_{j,k=1}\) in a domain \(G \subset \mathbb {R}^n\). If its symbol satisfies the estimate

$$\displaystyle \begin{aligned} \sum\limits^n_{j,k=1} a_{jk}(x_1,\cdots,x_n)\;\xi_j \xi_k \ge C|\xi|^2, \end{aligned}$$

with a positive constant C, that is, the quadratic form on the left-hand side is positive definite uniformly with respect to \(x \in G\), then:

  1. 1.

    the differential equation

    $$\displaystyle \begin{aligned} \sum\limits^n_{j,k=1} a_{jk}(x)\partial^2_{x_j x_k}u + \sum\limits^n_{k=1} b_k(x)\partial_{x_k}u+c(x)u=f(x)\end{aligned}$$

    is elliptic in G, if f is defined in G;

  2. 2.

    the differential equation

    $$\displaystyle \begin{aligned} \partial^2_tu - \sum^n_{j,k=1} a_{jk} (x_1,\cdots,x_n)\; \partial^2_{x_j x_k} u+ \sum\limits^n_{k=1} b_k(x)\partial_{x_k} u+c(x)u=f(t,x) \end{aligned}$$

    is strictly hyperbolic in every cylinder (0, T) × G, T > 0, if f is defined in (0, T) × G;

  3. 3.

    the differential equation

    $$\displaystyle \begin{aligned} \partial_t u - \sum^n_{j,k=1} a_{jk}(x_1,\cdots,x_n)\; \partial^2_{ x_j x_k} u + \sum\limits^n_{k=1} b_k(x)\partial_{x_k} u+c(x)u= f(t,x) \end{aligned}$$

    is parabolic of normal type in every cylinder (0, T) × G, T > 0, if f is defined in (0, T) × G.

3 Classification of Linear Systems of Partial Differential Equations

In this section, we first study linear systems of m first order partial differential equations in m unknowns and two independent variables of the form

$$\displaystyle \begin{aligned} \begin{array}{rcl} \partial_t u_k + \sum\limits_{j=1}^m \big(a_{kj}(t,x)\partial_x u_j + b_{kj}(t,x) u_j \big) = f_k(t,x), \,k=1,\cdots,m. \end{array} \end{aligned} $$

In matrix notation this system takes the form (U = (u 1, ⋯ , u m )T, F = (f 1, ⋯ , f m )T)

$$\displaystyle \begin{aligned} \begin{array}{rcl} \partial_t U + A(t,x) \partial_x U + B(t,x) U = F(t,x), \end{array} \end{aligned} $$

where we denote

$$\displaystyle \begin{aligned} A=A(t,x):=(a_{kj}(t,x))_{k,j=1}^m \,\,\, \mbox{and} \,\,\,B=B(t,x):=(b_{kj}(t,x))_{k,j=1}^m.\end{aligned}$$

Just as in the case of a single partial differential equation, it turns out that most of the properties of solutions depend on the principal part ∂ t U + A(t, x)∂ x U of this system. Since the principal part is completely characterized by the matrix τI + ξA(t, x) (∂ t is replaced by τ and ∂ x is replaced by ξ), this matrix plays a fundamental role in the study of these systems. There are two important classes of systems of the above form defined by properties of the matrix A.

Definition 3.4

Let us consider the above system of partial differential equations, where the entries of the matrix A are assumed to be real and continuous in a domain \(G \subset \mathbb {R}^{2}\). Then the system is called

  • elliptic in a point (t 0, x 0) ∈ G if the matrix A(t 0, x 0) has no real eigenvalues λ 1 = λ 1(t 0, x 0), ⋯ , λ m  = λ m (t 0, x 0)

  • elliptic in a domain G if the system is elliptic in every point (t 0, x 0) ∈ G

  • hyperbolic in a point (t 0, x 0) ∈ G if the matrix A(t 0, x 0) has real eigenvalues λ 1 = λ 1(t 0, x 0), ⋯ , λ m  = λ m (t 0, x 0) and a full set of right eigenvectors

  • strictly hyperbolic in a point (t 0, x 0) ∈ G if the matrix A(t 0, x 0) has distinct real eigenvalues λ 1 = λ 1(t 0, x 0), ⋯ , λ m  = λ m (t 0, x 0)

  • hyperbolic (strictly hyperbolic) in a domain G if the operator is hyperbolic (strictly hyperbolic) in every point (t 0, x 0) ∈ G.

Now we would like to present a classification for linear systems of partial differential equations of first order having the form

$$\displaystyle \begin{aligned} \partial_t U + \sum^n_{k=1} A_k(t,x) \partial_{x_k} U + A_0(t,x) U= F(t,x) \quad \mathrm{in}\quad [0,\infty) \times \mathbb{R}^n, \end{aligned}$$

where A k , k = 0, 1, ⋯n, are continuous m × m matrices, subject to the Cauchy condition

$$\displaystyle \begin{aligned} U(0,x)\;=\;U_0(x). \end{aligned}$$

For the following we introduce the notation

$$\displaystyle \begin{aligned} A(t,x,\xi) := \sum^n_{k=1} \xi_k\:A_k(t,x) \quad \mathrm{for}\quad t \ge 0,\; (x,\xi) \in \mathbb{R}^{2n}. \end{aligned}$$

Just as in the case of scalar partial differential equations, it turns out that most of the properties of solutions depend on the “principal part” of the differential operator. So, the matrix A(t, x, ξ) plays a fundamental role in the study of these systems. There are two important classes of systems of the above form defined by properties of the matrix A.

Definition 3.5

Let us consider the above differential system where the entries of the m × m matrices A k are assumed to be real and continuous in a domain \(G\subset \mathbb {R}^{1+n}\). Then the system is called

  • elliptic in a point (t 0, x 0) ∈ G if the matrix A(t 0, x 0, ξ) has no real eigenvalues λ 1 = λ 1(t 0, x 0, ξ), ⋯ , λ m  = λ m (t 0, x 0, ξ) for all \(\xi \in \mathbb {R}^n \setminus \{0\}\)

  • elliptic in a domain G if the system is elliptic in every point (t 0, x 0) ∈ G

  • hyperbolic in a point (t 0, x 0) ∈ G if the matrix A(t 0, x 0, ξ) has m real eigenvalues λ 1(t 0, x 0, ξ) ≤ λ 2(t 0, x 0, ξ) ≤⋯ ≤ λ m (t 0, x 0, ξ) and a full set of right eigenvectors

  • hyperbolic in a domain G if the system is hyperbolic in every point (t 0, x 0) ∈ G.

There are two important special cases.

Definition 3.6

We say that the above system is strictly hyperbolic if for each \(x,\xi \in \mathbb {R}^n,\, \xi \neq 0\), and each t ≥ 0, the matrix A(t, x, ξ) has m distinct real eigenvalues:

$$\displaystyle \begin{aligned} \lambda_1(t,x,\xi) < \lambda_2 (t,x,\xi)< \cdots < \lambda_m(t,x,\xi).\end{aligned}$$

We say that the above system is a symmetric hyperbolic system if all m × m matrices A k (t, x), k = 1, ⋯ , n, are symmetric.

Example 3.3.1

In complex function theory, holomorphic functions w = w(z), which are defined in a given domain \(G \subset \mathbb {C}\), are classical solutions of \(\partial _{\overline {z}} w= \frac {1}{2}\big (\partial _x + i\partial _y\big ) w=0\). If we introduce w(z) =: u(x, y) + iv(x, y), then this elliptic equation is equivalent to the Cauchy-Riemann system

$$\displaystyle \begin{aligned}\partial_x U+A\:\partial_y U\;=\;0,\end{aligned}$$

where U(x, y) = (u(x, y), v(x, y))T and the matrix

$$\displaystyle \begin{aligned} \begin{array}{rcl} A=\left(\begin{array}{cc} 0 &\displaystyle -1\\ 1 &\displaystyle 0 \end{array}\right). \end{array} \end{aligned} $$

The Cauchy-Riemann system is an elliptic system in the domain \(G \subset \mathbb {R}^2\).
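A quick numerical confirmation of this statement is possible (our own sketch; numpy is only used to compute the eigenvalues of the matrix A from above):

```python
# The Cauchy-Riemann matrix A has no real eigenvalues, so the system is elliptic.
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eigvals(A))   # [0.+1.j  0.-1.j]: purely imaginary, no real eigenvalues
```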

Example 3.3.2

The study of long gravity waves on the surface of a fluid in a channel leads to the linear system

$$\displaystyle \begin{aligned} \begin{array}{rcl} \begin{array}{l} \partial_t u + g\:\partial_x \rho \;=\; 0,\\ \partial_t \rho +\:\frac{S_0(x)}{b}\;\partial_x u + \frac{S_0^{\prime}(x)}{b}\; u\;=\; 0,\end{array} \end{array} \end{aligned} $$

where b and g are positive constants and S 0 = S 0(x) is a given differentiable positive function representing the equilibrium cross-sectional area of the fluid in the channel (for more details see [120]). This system is strictly hyperbolic in the domain of definition of the coefficients in the (t, x)-plane.

Example 3.3.3

In electrical engineering the modeling of transmission lines leads to the system

$$\displaystyle \begin{aligned} \begin{array}{l} L\partial_t I + \partial_x E + R I\;=\; 0,\\ C\partial_t E + \partial_x I + G E\;=\; 0, \\ I(0,x)=I_0(x),\quad E(0,x)=E_0(x),\quad x \in \mathbb{R}. \end{array} \end{aligned}$$

Here C denotes the capacitance to ground per unit length, G is the conductance to ground per unit length, L is the inductance per unit length and R the resistance per unit length. The unknowns I = I(t, x) and E = E(t, x) are, respectively, the current and potential at point x of the line at time t. This system is strictly hyperbolic in the whole (t, x)-plane (see [231] and for its solvability Example 22.4.1).
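The strict hyperbolicity claimed in Examples 3.3.2 and 3.3.3 can be confirmed by a short symbolic computation (our own sketch; the matrices below are read off from the two systems, after dividing the transmission-line equations by L and C, respectively):

```python
import sympy as sp

g, S0, b, L, C = sp.symbols('g S_0 b L C', positive=True)

# gravity waves in a channel: d_t (u, rho)^T + A d_x (u, rho)^T + ... = 0
A_channel = sp.Matrix([[0, g],
                       [S0/b, 0]])
print(A_channel.eigenvals())   # eigenvalues +-sqrt(g*S_0/b): real and distinct

# transmission line: d_t (I, E)^T + A d_x (I, E)^T + ... = 0
A_line = sp.Matrix([[0, 1/L],
                    [1/C, 0]])
print(A_line.eigenvals())      # eigenvalues +-1/sqrt(L*C): real and distinct
```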

Example 3.3.4

The equations governing the electromagnetic field in \(\mathbb {R}^3\) are given by

$$\displaystyle \begin{aligned} \partial_t B + \nabla \times E=0, \qquad \partial_t E - \nabla \times B=0,\end{aligned}$$

where B := (B 1, B 2, B 3) and E := (E 1, E 2, E 3) denote the magnetic and electric fields, respectively. This system of partial differential equations forms a symmetric hyperbolic system for U := (B 1, B 2, B 3, E 1, E 2, E 3)T.
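As a check of the last statement, one can write the Maxwell system in the form \(\partial_t U + A_1 \partial_{x_1} U + A_2 \partial_{x_2} U + A_3 \partial_{x_3} U = 0\) and verify the symmetry of the coefficient matrices; the following sketch (our own) does this numerically:

```python
# Write d_t U + A_1 d_{x_1} U + A_2 d_{x_2} U + A_3 d_{x_3} U = 0 for
# U = (B_1, B_2, B_3, E_1, E_2, E_3)^T and verify that each A_k is symmetric.
import numpy as np

def maxwell_matrices():
    A1 = np.zeros((6, 6)); A2 = np.zeros((6, 6)); A3 = np.zeros((6, 6))
    # rows 0..2: d_t B + curl E = 0, e.g. (curl E)_1 = d_{x_2} E_3 - d_{x_3} E_2
    A2[0, 5], A3[0, 4] = 1, -1
    A3[1, 3], A1[1, 5] = 1, -1
    A1[2, 4], A2[2, 3] = 1, -1
    # rows 3..5: d_t E - curl B = 0
    A2[3, 2], A3[3, 1] = -1, 1
    A3[4, 0], A1[4, 2] = -1, 1
    A1[5, 1], A2[5, 0] = -1, 1
    return A1, A2, A3

for k, A in enumerate(maxwell_matrices(), start=1):
    print(f"A_{k} symmetric:", np.allclose(A, A.T))   # True for k = 1, 2, 3
```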

The class of symmetric hyperbolic systems can be enlarged to general symmetric hyperbolic systems with continuous coefficients of the form

$$\displaystyle \begin{aligned} \begin{array}{rcl} A_0(t,x)\partial_t U + \sum\limits_{k=1}^n A_k(t,x) \partial_{x_k} U + B(t,x) U = F(t,x), \end{array} \end{aligned} $$

where the real matrix A 0 is supposed to be positive definite uniformly with respect to (t, x) from the domain of definition of the coefficients, and the matrices A k are real and symmetric.

In many cases we are able to transform a single linear partial differential equation of higher order to a linear system of partial differential equations.

For example, every linear hyperbolic equation of second order with smooth coefficients and smooth right-hand side can be transformed into a symmetric hyperbolic system. Let us consider the special class of linear hyperbolic equations of second order

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \partial^2_t u - \sum^n_{j,k=1} a_{j,k}(t,x) \partial^2_{x_jx_k} u + \sum^n_{j=1} b_j(t,x) \partial_{x_j} u + c(t,x)\partial_t u + d(t,x)u\\ &\displaystyle &\displaystyle \qquad = f(t,x), \end{array} \end{aligned} $$

where all coefficients are real-valued and \((a_{j,k} (t,x))^n_{j,k=1}\) is a symmetric n × n matrix which is positive definite uniformly with respect to t and x (we assume smoothness of solutions). Let \(u_1 := \partial _{x_1} u, \cdots ,u_n := \partial _{x_n}u,\; u_{n+1} := \partial _t u,\; u_{n+2} := u\). We then obtain the following system of partial differential equations for the N := n + 2 functions u 1, ⋯ , u n+2:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \sum^n_{k=1} a_{j,k}(t,x) \partial_t u_k - \sum^n_{k=1} a_{j,k}(t,x) \partial_{x_k} u_{n+1} = 0,\;\;j=1,\cdots,n,\\ &\displaystyle &\displaystyle \partial_t u_{n+1} - \sum^n_{j,k=1} a_{j,k}(t,x) \partial_{x_k} u_j + \sum^n_{j=1} b_j(t,x)u_j + c(t,x)u_{n+1} + d(t,x)u_{n+2} \\ &\displaystyle &\displaystyle \qquad = f(t,x),\\ &\displaystyle &\displaystyle \partial_t u_{n+2} - u_{n+1} = 0. \end{array} \end{aligned} $$

Setting U = (u 1, ⋯ , u n+2)T and F = (0, ⋯ , 0, f, 0)T, this system is equivalent to a symmetric hyperbolic system

$$\displaystyle \begin{aligned} \begin{array}{rcl} A_0(t,x)\partial_t U + \sum^n_{k=1} A_k(t,x) \partial_{x_k} U + B(t,x) U=F(t,x), \end{array} \end{aligned} $$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} A_0 := \;\;\left(\begin{array}{ccccc} a_{1,1} &\displaystyle \ldots &\displaystyle a_{1,n} &\displaystyle 0 &\displaystyle 0 \\ \vdots &\displaystyle &\displaystyle &\displaystyle \vdots &\displaystyle \vdots \\ a_{n,1} &\displaystyle \ldots &\displaystyle a_{n,n} &\displaystyle 0 &\displaystyle 0\\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle 1 &\displaystyle 0\\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle 0 &\displaystyle 1 \end{array}\right),\qquad B := \;\;\left(\begin{array}{ccccc} 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle 0 &\displaystyle 0 \\ \vdots &\displaystyle &\displaystyle &\displaystyle \vdots &\displaystyle \vdots \\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle 0 &\displaystyle 0\\ b_1 &\displaystyle \ldots &\displaystyle b_n &\displaystyle c &\displaystyle d\\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle -1 &\displaystyle 0 \end{array} \right),\\ {}A_k := \;\;\left(\begin{array}{ccccc} 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle -a_{1,k} &\displaystyle 0 \\ \vdots &\displaystyle &\displaystyle &\displaystyle \vdots &\displaystyle \vdots \\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle -a_{n,k} &\displaystyle 0\\ -a_{1,k} &\displaystyle \ldots &\displaystyle -a_{n,k} &\displaystyle 0 &\displaystyle 0\\ 0 &\displaystyle \ldots &\displaystyle 0 &\displaystyle 0 &\displaystyle 0 \end{array}\right)\qquad \mbox{for}\quad k=1,\cdots,n. \end{array} \end{aligned} $$

The matrix A 0 is positive definite. The matrices A k are symmetric.
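For n = 2 the matrices above can be written down explicitly; the following sketch (our own, with the entries copied from the displayed A 0, A k and B) confirms the symmetry claims:

```python
# A check of the reduction for n = 2 (our own sketch; entries copied from the
# matrices A_0, A_k, B displayed above, with general coefficients a11, a12, a22).
import sympy as sp

a11, a12, a22, b1, b2, c, d = sp.symbols('a11 a12 a22 b1 b2 c d', real=True)

A0 = sp.Matrix([[a11, a12, 0, 0],
                [a12, a22, 0, 0],
                [0,   0,   1, 0],
                [0,   0,   0, 1]])
A1 = sp.Matrix([[0,    0,    -a11, 0],
                [0,    0,    -a12, 0],
                [-a11, -a12,  0,   0],
                [0,    0,     0,   0]])
A2 = sp.Matrix([[0,    0,    -a12, 0],
                [0,    0,    -a22, 0],
                [-a12, -a22,  0,   0],
                [0,    0,     0,   0]])
B  = sp.Matrix([[0,  0,  0, 0],
                [0,  0,  0, 0],
                [b1, b2, c, d],
                [0,  0, -1, 0]])

print(A0.is_symmetric(), A1.is_symmetric(), A2.is_symmetric())   # True True True
# A_0 is positive definite whenever the 2 x 2 block (a_jk) is positive definite.
```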

Remark 3.3.1

There are other approaches to reduce scalar hyperbolic equations of higher order to systems of first order. Consider

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{k+|\alpha|\le m,k\neq m} a_{k,\alpha}(t,x) D_x^\alpha D_t^k u = f(t,x). \end{array} \end{aligned} $$

If we introduce \(U=\big (\langle D_x \rangle ^{m-1} u,\langle D_x \rangle ^{m-2} D_t u,\cdots ,\langle D_x \rangle D^{m-2}_t u,D_t^{m-1}u\big )^T\) and F = (0, ⋯ , 0, f)T, then we get the system of first order

$$\displaystyle \begin{aligned} D_tU - A(t,x,D_x)U = F(t,x), \end{aligned}$$

where

$$\displaystyle \begin{aligned} \begin{array}{rcl} -A(t,x,D_x):= \;\;\left(\begin{array}{ccccc} 0 &\displaystyle -\langle D_x \rangle &\displaystyle &\displaystyle &\displaystyle \\ &\displaystyle 0 &\displaystyle -\langle D_x \rangle &\displaystyle &\displaystyle \\ &\displaystyle &\displaystyle \ddots &\displaystyle \ddots &\displaystyle \\ &\displaystyle &\displaystyle &\displaystyle 0 &\displaystyle -\langle D_x \rangle \\ \sum\limits_{|\alpha|\le m} a_{0,\alpha}(t,x) D_x^\alpha \langle D_x \rangle^{1-m} &\displaystyle \sum\limits_{|\alpha|\le m-1} a_{1,\alpha}(t,x) D_x^\alpha \langle D_x \rangle^{2-m} &\displaystyle \cdots &\displaystyle \sum\limits_{|\alpha|\le 2} a_{m-2,\alpha}(t,x) D_x^\alpha \langle D_x \rangle^{-1} &\displaystyle \sum\limits_{|\alpha|\le 1} a_{m-1,\alpha}(t,x) D_x^\alpha \end{array} \right). \end{array} \end{aligned} $$

The matrix A has the so-called Sylvester structure. Here we used the pseudodifferential operator 〈D x 〉 with symbol \(\langle \xi \rangle =(1+|\xi|^2)^{1/2}\).

There is a difference between the two systems of first order.

The second system is a pseudodifferential system of first order. After the reduction to the Sylvester structure, the theory of pseudodifferential operators should be applied.
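To see the effect of the Sylvester reduction in a concrete case, consider the one-dimensional wave operator \(D_t^2 - D_x^2\) (our own sample, with \(\langle D_x \rangle\) understood as the operator with symbol \((1+|\xi|^2)^{1/2}\)). At a fixed frequency ξ the symbol of A(t, x, D x ) is a companion-type matrix whose eigenvalues coincide with the characteristic roots ±|ξ|:

```python
# Sylvester reduction of D_t^2 u - D_x^2 u = 0 (i.e. a_{0,(2)} = -1, all other a = 0):
# the symbol of A at frequency xi is [[0, <xi>], [xi^2/<xi>, 0]], <xi> = sqrt(1 + xi^2).
import numpy as np

xi = 3.0
bracket = np.sqrt(1.0 + xi**2)                  # symbol of <D_x>
A_symbol = np.array([[0.0,            bracket],
                     [xi**2/bracket,  0.0    ]])
print(np.linalg.eigvals(A_symbol))              # approximately [ 3. -3.] = +-|xi|
```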

4 Classification of Domains and Statement of Problems

Models in nature and technology are often described by partial differential equations or systems of partial differential equations. In general, we need additional conditions describing the behavior of solutions of such models. First we shall distinguish between stationary processes and nonstationary processes. Moreover, we shall distinguish between different domains in which a process takes place.

We have interior domains (e.g. transversal vibration of a string which is fixed at two points, bending of a plate, or heat conduction in a body), exterior domains (e.g. potential flows around a cylinder), the whole space \(\mathbb {R}^n\) (e.g. propagation of electromagnetic waves, potential of a mass point, potential of a single-layer or a double-layer), wave guides, that is, domains \(\{(x,y) \in \mathbb {R}^{n+m}: x \in \mathbb {R}^n,\,y \in G\}\) with an interior domain \(G \subset \mathbb {R}^m\) (e.g. propagation of sound in an infinite tube), the exterior of wave guides (e.g. diffraction of electromagnetic waves around an infinite cylinder), or, finally, the half-space (e.g. reflection of waves at a large plane obstacle).

If we have a stationary (nonstationary) process, then the process takes place in a domain G (in a cylinder (0, T) × G), where G is one of the domains described above. We are only interested in observing the process forward in time. If we observe the process for a long time, we set T = ∞. Depending on the given domain, we have to pose different additional conditions.

4.1 Stationary Processes

We pose boundary conditions on the boundary ∂G of the domain G. If we have an interior domain, then the usual boundary conditions are those of first, second, or third kind. These are called boundary conditions of Dirichlet, Neumann or Robin-type, respectively.

If we study boundary value problems for solutions to the elliptic equation

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum\limits_{k,j=1}^n a_{jk}(x)\partial^2_{x_k x_j} u + \sum\limits_{k=1}^n b_k(x)\partial_{x_k} u + c(x)u =f(x)\end{array} \end{aligned} $$

in an interior domain G, then we state, instead of the Neumann boundary condition, the co-normal boundary condition

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum^n_{j,k=1} a_{jk}(x) \partial_{x_j} u \cos ({\mathbf n},\mathbf{e}_k) \Big|_{\partial G} = g(x). \end{array} \end{aligned} $$

This boundary condition is related to the structure of the given elliptic operator. If we consider the special case

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta u + \sum\limits_{k=1}^n b_k(x)\partial_{x_k} u + c(x)u =f(x),\end{array} \end{aligned} $$

then the coefficients a jk (x) are equal to the Kronecker symbol δ jk . Hence, the co-normal derivative is equal to

$$\displaystyle \begin{aligned} \begin{array}{rcl} \sum^n_{j=1} a_{jj}(x) \partial_ {x_j} u \cos ({\mathbf n},\mathbf{e}_j) \Big|_{\partial G} = \sum^n_{j=1} \partial_{x_j}u \cos ({\mathbf n},\mathbf{e}_j) \Big|_{\partial G} =\partial_{\mathbf n} u \Big|_{\partial G}=g(x), \end{array} \end{aligned} $$

and this is the classical Neumann condition.

If we study boundary value problems in exterior domains, then decay conditions (conditions on the solution as |x| tends to infinity) might select the physical one among all solutions. Thus, the problem

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta u=0, \quad u\big|_{\partial G} =g(x), \quad u(x) = O\Big(\frac{1}{|x|}\Big)\,\,\mbox{for}\,\, x \to \infty, \end{array} \end{aligned} $$

in \(G=\{x \in \mathbb {R}^3 :|x| > 1\}\) is of interest in potential theory. Both conditions determine the solution uniquely.

If we are interested in boundary value problems for solutions to the Helmholtz equation Δu + k 2 u = 0, k 2 > 0, then, due to spectral properties of the Helmholtz operator, the boundary condition and the decay condition are not sufficient to determine the solution uniquely; an additional condition is required.

Example 3.4.1

Let us study the boundary value problem

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta u + (2\pi)^2 u=0,\quad u\big|_{\partial G} =0, \quad u(x) = O\Big(\frac{1}{|x|}\Big)\,\,\mbox{for}\,\, x \to \infty \end{array} \end{aligned} $$

in the exterior domain \(G=\{x \in \mathbb {R}^3 :|x| > 1\}\). Then, the family of functions \(\{u_C=u_C(x),\,C \in \mathbb {R}\}\) with \(u_C(x)=-C \sin {}(2\pi |x|)/(4\pi |x|)\) is a family of radial solutions to this Dirichlet boundary value problem; hence, the solution is not unique. Sommerfeld proposed the radiation condition,

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \mbox{either}\quad r \partial_r u - i 2\pi r u \to 0 \quad \mbox{or}\quad r \partial_r u + i 2\pi r u \to 0 \quad \mbox{for} \quad r=|x| \to \infty, \\ &\displaystyle &\displaystyle \mbox{and for solutions to the general Helmholtz equation}, \\ &\displaystyle &\displaystyle \,\,\mbox{either}\quad r \partial_r u - i k r u \to 0 \quad \mbox{or}\quad r \partial_r u + i k r u \to 0 \quad \mbox{for} \quad r=|x| \to \infty \end{array} \end{aligned} $$

to select the solution u ≡ 0.
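The statements of Example 3.4.1 can be verified symbolically; the following sketch (our own) checks that u C solves the Helmholtz equation, satisfies the Dirichlet condition on |x| = 1, and violates the first version of the radiation condition unless C = 0:

```python
# u_C(x) = -C sin(2*pi*r)/(4*pi*r), r = |x|, as a radial function in R^3.
import sympy as sp

r, C = sp.symbols('r C', positive=True)
k = 2*sp.pi
u = -C*sp.sin(k*r)/(4*sp.pi*r)

radial_laplacian = sp.diff(r**2*sp.diff(u, r), r)/r**2       # Laplacian of a radial function
print(sp.simplify(radial_laplacian + k**2*u))                # 0: the Helmholtz equation holds
print(u.subs(r, 1))                                          # 0: Dirichlet condition on |x| = 1

outgoing = r*sp.diff(u, r) - sp.I*k*r*u                      # r d_r u - i k r u
print(sp.simplify(outgoing))
# up to rewriting: -C*exp(-2*I*pi*r)/2 + C*sin(2*pi*r)/(4*pi*r);
# the first term has modulus C/2, so the expression does not tend to 0 as r -> oo unless C = 0.
```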

4.2 Nonstationary Processes

We state boundary conditions on the lateral surface (0, T) × ∂G of the cylinder (0, T) × G and initial conditions on the bottom {t = 0}× G. The number m of initial conditions corresponds to the order m of partial derivatives with respect to t in

$$\displaystyle \begin{aligned} \partial_t^m u= \cdots. \end{aligned}$$

In general, it is permissible to state the initial conditions u(0, x) = u 0(x), t u(0, x) = u 1(x), ⋯, \(\partial _t^{m-1} u(0,x)=u_{m-1}(x)\) on {t = 0}× G. The number of boundary conditions on the lateral surface is half of the order of the elliptic part if we consider \(\partial _t^m u - P(t,x,D_x)u=f(t,x)\), where P = P(t, x, D x ) is supposed to be an elliptic operator in (0, T) × G. Problems consisting of partial differential equations, initial conditions and boundary conditions are called initial boundary value problems or mixed problems. Finally, we have to pay attention to so-called compatibility conditions between initial conditions and boundary conditions on the boundary of the bottom {t = 0}× ∂G.

Now let us look at initial value problems, also called Cauchy problems, in the strip \((0,T) \times \mathbb {R}^n\) or in the half-space \((0,\infty ) \times \mathbb {R}^n\). Consider the model linear partial differential equation of p-evolution type

$$\displaystyle \begin{aligned} \begin{array}{rcl} D_t^m u + \sum_{j=1}^m \sum_{|\alpha|\le pj} a_{j,\alpha}(t,x) D_x^\alpha D_t^{m-j} u =f(t,x). \end{array} \end{aligned} $$

Then, the Cauchy problem means that for the solution to this equation we pose m initial conditions or Cauchy conditions on the hyperplane \(\{(t,x) \in \{t=0\} \times \mathbb {R}^n \}\) in the following form:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \big(D_t^r u\big)(0,x)=u_r(x)\,\,\, \mbox{for}\,\,\,r=0,1,\cdots, m-2,m-1.\end{array} \end{aligned} $$

A usual question is whether or not the Cauchy problem is well-posed. Here well-posedness means the existence of a solution, the uniqueness of the solution and the continuous dependence of the solution on the data u 0, u 1, ⋯ , u m−2, u m−1, the right-hand side f(t, x), and the coefficients a j,α (t, x). However, all these desired properties depend heavily on the choice of suitable function spaces.

Definition 3.7

The Cauchy problem

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{j=1}^m \sum_{|\alpha|\le pj} a_{j,\alpha}(t,x) D_x^\alpha D_t^{m-j} u =f(t,x), \\ &\displaystyle &\displaystyle \big(D_t^r u\big)(0,x)=u_r(x)\,\,\,\mbox{for}\,\,\,r=0,1,\cdots, m-2,m-1 \end{array} \end{aligned} $$

is well-posed if we can fix function spaces A 0, A 1, ⋯ , A m−2, A m−1 for the data u 0, u 1, ⋯ , u m−2, u m−1, B and M for the right-hand side f = f(t, x), B 0, B 1, ⋯ , B m−2, B m−1 and M 0, M 1, ⋯ , M m−2, M m−1 for the solution u in such a way that for given data and right-hand side

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_0 \in A_0,\,u_1 \in A_1, \cdots, u_{m-2} \in A_{m-2},\,u_{m-1} \in A_{m-1},\quad f \in M([0,T],B), \end{array} \end{aligned} $$

there exists a uniquely determined solution

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u\in M_0([0,T],B_0) \cap M_1([0,T],B_1) \\ &\displaystyle &\displaystyle \qquad \cap \cdots \cap M_{m-2}([0,T],B_{m-2})\cap M_{m-1}([0,T],B_{m-1}). \end{array} \end{aligned} $$

Moreover, the solution is required to depend continuously on the data and on the right-hand side, that is, if we introduce small perturbations of the data and the right-hand side according to the topologies of A 0, A 1, ⋯ , A m−2, A m−1 and of M([0, T], B), then we get only a small perturbation of the solution with respect to the topology of the space of solutions M 0([0, T], B 0) ∩M 1([0, T], B 1) ∩⋯∩M m−2([0, T], B m−2) ∩M m−1([0, T], B m−1).

But what are the usual function spaces to prove well-posedness?

The choice of the function spaces depends on the one hand on the regularity of coefficients of the partial differential equation and on the other hand on the partial differential equation itself. Let us suppose that the coefficients are smooth enough, thus having no “bad influence” on the choice of the function spaces. The usual spaces of functions or distributions to prove well-posedness are (for the definition of spaces of functions or distributions see Sect. 24.3)

  • in the case p = 1: \(A_k=H^{s-k}\), \(B_k=H^{s-k}\), \(B=H^{s-m+1}\), \(M_k=C^k\), \(M=C\) or \(A_k=C^{m_k}\), \(B_k=C^{n_k}\), \(B=C^{n_m}\), \(M_k=C^{r_k}\), \(M=C^{r_m}\), where k = 0, ⋯ , m − 1

  • in the case p > 1: \(A_k=H^{s-pk}\), \(B_k=H^{s-pk}\), \(B=H^{s-p(m-1)}\), \(M_k=C^k\), \(M=C\).

Let us introduce some examples.

Example 3.4.2

  1. 1.

    For strictly hyperbolic Cauchy problems (p = 1)

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{k+|\alpha|\le m, k\neq m} a_{k,\alpha}(t,x) D_x^\alpha D_t^k u = f(t,x),\\ &\displaystyle &\displaystyle \big(D_t^r u\big)(0,x)=u_r(x) \,\,\,\mbox{for}\,\,\,r=0,1,\cdots,m-2,m-1, \end{array} \end{aligned} $$

    one can prove well-posedness for solutions

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u \in C\big([0,T],H^{m-1}(\mathbb{R}^n)\big) \cap C^1\big([0,T],H^{m-2}(\mathbb{R}^n)\big) \\ &\displaystyle &\displaystyle \qquad \cap \cdots \cap C^{m-2}\big([0,T],H^1(\mathbb{R}^n)\big) \cap C^{m-1}\big([0,T],L^2(\mathbb{R}^n)\big)\end{array} \end{aligned} $$

    for given data and right-hand side

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u_0 \in H^{m-1}(\mathbb{R}^n),\,u_1 \in H^{m-2}(\mathbb{R}^n),\cdots, u_{m-2} \in H^1(\mathbb{R}^n),\,u_{m-1} \in L^2(\mathbb{R}^n),\\ &\displaystyle &\displaystyle \qquad f \in C\big([0,T],L^2(\mathbb{R}^n)\big) \end{array} \end{aligned} $$

    if the coefficients are smooth enough.

  2. 2.

    For strictly hyperbolic Cauchy problems (p = 1)

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{k+|\alpha|\le m, k\neq m} a_{k,\alpha}(t,x) D_x^\alpha D_t^k u = f(t,x),\\ &\displaystyle &\displaystyle \big(D_t^r u\big)(0,x)=u_r(x) \,\,\,\mbox{for}\,\,\,r=0,1,\cdots,m-2,m-1,\end{array} \end{aligned} $$

    one can prove well-posedness for solutions

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u \in C^{r_0}\big([0,T],C^{n_0}(\mathbb{R}^n)\big) \cap C^{r_1}\big([0,T],C^{n_1}(\mathbb{R}^n)\big) \\ &\displaystyle &\displaystyle \qquad \cap \cdots \cap C^{r_{m-2}}\big([0,T],C^{n_{m-2}}(\mathbb{R}^n)\big) \cap C^{r_{m-1}}\big([0,T],C^{n_{m-1}}(\mathbb{R}^n)\big)\end{array} \end{aligned} $$

    for given data and right-hand side

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u_0 \in C^{m_0}(\mathbb{R}^n),\,u_1 \in C^{m_1}(\mathbb{R}^n),\cdots, u_{m-2} \in C^{m_{m-2}}(\mathbb{R}^n),\,u_{m-1} \in C^{m_{m-1}}(\mathbb{R}^n),\\ &\displaystyle &\displaystyle \qquad f \in C^{r_m}\big([0,T],C^{n_m}(\mathbb{R}^n)\big) \end{array} \end{aligned} $$

    if the coefficients are smooth enough.

  3. 3.

    For the Cauchy problem to p-evolution equations (\(p \geq 1,\,p \in \mathbb {N}\))

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{j=1}^m \sum_{|\alpha|\le pj} a_{j,\alpha}(t,x) D_x^\alpha D_t^{m-j} u = f(t,x),\\ &\displaystyle &\displaystyle \big(D_t^r u\big)(0,x)=u_r(x) \,\,\,\mbox{for}\,\,\,r=0,1,\cdots,m-2,m-1, \end{array} \end{aligned} $$

    one can prove well-posedness for solutions

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u \in C\big([0,T],H^{p(m-1)}(\mathbb{R}^n)\big) \cap C^1\big([0,T],H^{p(m-2)}(\mathbb{R}^n)\big) \\ &\displaystyle &\displaystyle \qquad \cap \cdots \cap C^{m-2}\big([0,T],H^p(\mathbb{R}^n)\big) \cap C^{m-1}\big([0,T],L^2(\mathbb{R}^n)\big)\end{array} \end{aligned} $$

    for given data and right-hand side

    $$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u_0 \in H^{p(m-1)}(\mathbb{R}^n),\,u_1 \in H^{p(m-2)}(\mathbb{R}^n),\cdots, u_{m-2} \in H^p(\mathbb{R}^n),\,\\ &\displaystyle &\displaystyle \qquad u_{m-1} \in L^2(\mathbb{R}^n),\, f \in C\big([0,T],L^2(\mathbb{R}^n)\big) \end{array} \end{aligned} $$

    if the coefficients are smooth enough.

In general, it is impossible to choose for the Cauchy problem to the p-evolution equation with p > 1 the function spaces \(A_k=C^{m_k}(\mathbb {R}^n)\), \(B_k=C^{n_k}(\mathbb {R}^n)\), \(B=C^{n_m}(\mathbb {R}^n)\), \(M_k=C^{r_k}(\mathbb {R}^n)\), \(M=C^{r_m}(\mathbb {R}^n)\). The reason is that in this case the solutions do not possess the property of finite speed of propagation of perturbations (compare with Sects. 10.1.2 and 10.1.3 of Chap. 10). So, we are not able to apply localization techniques to reduce this case to the case explained in the third statement of Example 3.4.2.

Remark 3.4.1

Without new difficulties, we can consider for p-evolution equations backward Cauchy problems (only t ≤ 0 is of interest) instead of forward Cauchy problems (only t ≥ 0 is of interest). This is different from parabolic Cauchy problems which, in general, can only be studied in one time direction (see Sect. 9.3.1).

5 Classification of Solutions

In this section we explain different notions of solutions for partial differential equations or systems of partial differential equations. Consider the linear partial differential equation (not necessarily of Kovalevskian type)

$$\displaystyle \begin{aligned} \begin{array}{rcl} L(t,x,D_t,D_x)u:=D_t^m u + \sum\limits_{k+|\alpha|\leq r,\,k < m}a_{k,\alpha}(t,x)D_x^\alpha D_t^k u =f(t,x). \end{array} \end{aligned} $$

Classical solutions are all functions satisfying this partial differential equation in the classical sense, that is, taking a function, forming all partial derivatives appearing in the partial differential equation and substituting them into the partial differential equation gives an identity. The notion of classical solutions is, however, too restrictive in general. If some of the coefficients, the right-hand side, or the initial or boundary data are not smooth enough, then we cannot, in general, expect classical solutions. Even a non-smooth boundary (with corners, cusps and so on) of a given domain has an influence on regularity properties of the solution.

Definition 3.8 (Notion of Sobolev Solutions)

Let \(G \subset \mathbb {R}^{n+1}\) be a domain and let \(u \in L^1_{loc}(G)\) be a given function. Then, u is called a Sobolev solution of L(t, x, D t , D x )u = f(t, x) if for all test functions \(\phi \in C^{\max (r,m)}_0(G)\) the following integral identity holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_G u(t,x) L^\ast(t,x,D_t,D_x) \phi(t,x) \,d(t,x) = \int_G f(t,x) \phi(t,x) \,d(t,x), \end{array} \end{aligned} $$

where L (t, x, D t , D x ) denotes the adjoint operator to L(t, x, D t , D x ). Here we assume that all coefficients a k,α belong to \(C^{\max (r,m)}(G)\).
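A minimal illustration of this integral identity (our own sketch, for the very simple operator Lu = ∂ x u on G = (−1, 1), for which integration by parts gives the identity ∫ u(−∂ x ϕ) dx = ∫ fϕ dx): the function u(x) = |x|, which is not classically differentiable at x = 0, is a Sobolev solution of ∂ x u = sign(x). The code checks the identity for one sample test function only; a proof, of course, requires all test functions.

```python
import sympy as sp

x = sp.symbols('x', real=True)
u   = sp.Piecewise((-x, x < 0), (x, True))        # u(x) = |x|
f   = sp.Piecewise((-1, x < 0), (1, True))        # f(x) = sign(x)
phi = x*(1 - x**2)**2                             # a test function vanishing at x = +-1

lhs = sp.integrate(u*(-sp.diff(phi, x)), (x, -1, 1))
rhs = sp.integrate(f*phi, (x, -1, 1))
print(lhs, rhs)                                   # 1/3 1/3: the integral identity holds
```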

Definition 3.9 (Notion of Sobolev Solutions with Suitable Regularity)

Let \(G \subset \mathbb {R}^{n+1}\) be a domain and let \(u \in W^m_p(G) \subset L^1_{loc}(G)\) be a given function. Then, u is called a Sobolev solution from \(W^m_p(G)\) of L(t, x, D t , D x )u = f(t, x) if for all test functions \(\phi \in C^{\max (r,m)}_0(G)\) the following integral identity holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl} \int_G u(t,x) L^\ast(t,x,D_t,D_x) \phi(t,x) \,d(t,x) = \int_G f(t,x) \phi(t,x) \,d(t,x), \end{array} \end{aligned} $$

where L (t, x, D t , D x ) denotes the adjoint operator to L(t, x, D t , D x ). Here we assume that all coefficients a k,α belong to \(C^{\max (r,m)}(G)\).

For the notion of Sobolev solution compare with Remarks 24.4.1 and 24.4.2. Sometimes one is interested in solutions of Lu = f which can not be Sobolev solutions, that is, for some reasons these solutions do not belong to \(L^1_{loc}(G)\). One possibility would be to introduce distributional solutions (cf. with Definition 24.29 and Remark 24.4.1).

Definition 3.10 (Notion of Distributional Solutions or Solutions in the Distributional Sense)

Let \(G \subset \mathbb {R}^{n+1}\) be a domain and let u ∈ D′(G) be a distribution. Then, u is called a distributional solution of L(t, x, D t , D x )u = f(t, x) if for all test functions \(\phi \in C^{\infty }_0(G)\) the following identity holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl} u(t,x) \big(L^\ast(t,x,D_t,D_x) \phi(t,x)\big) = f(t,x) \big(\phi(t,x)\big), \end{array} \end{aligned} $$

where u(ϕ) denotes the action of the distribution u on the test function ϕ. Here we assume that all coefficients a k,α belong to C (G).

Sometimes one is interested in Sobolev solutions having additional properties. Consider, for example, the wave equation u tt  − Δu = 0. Then, one can ask for solutions having an energy E(u)(t) (cf. with the energy E W (u)(t) of Sect. 11.1), that is, for almost all t ∈ (0, T) we have \(u(t,\cdot ) \in H^1(\mathbb {R}^n)\) and \(u_t(t,\cdot ) \in L^2(\mathbb {R}^n)\). Such solutions are called energy solutions.
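For orientation (the precise definition of the energy \(E_W(u)(t)\) is given in Sect. 11.1 and is not reproduced here), the wave energy is usually taken to be

$$\displaystyle \begin{aligned} \begin{array}{rcl} E(u)(t)=\frac{1}{2}\int_{\mathbb{R}^n}\big(|\nabla_x u(t,x)|^2+|u_t(t,x)|^2\big)\,dx, \end{array} \end{aligned} $$

so that finiteness of the energy corresponds to \(\nabla_x u(t,\cdot),\, u_t(t,\cdot) \in L^2(\mathbb{R}^n)\), in accordance with the requirements stated above.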

For strictly hyperbolic Cauchy problems

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle D_t^m u + \sum_{k+|\alpha|\le m, k\neq m} a_{k,\alpha}(t,x) D_x^\alpha D_t^k u = f(t,x),\\ &\displaystyle &\displaystyle \qquad \big(D_t^r u\big)(0,x)=u_r(x)\,\,\,\mbox{for}\,\,\,r=0,1,\cdots,m-2,m-1, \end{array} \end{aligned} $$

one can study the existence of energy solutions

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle u \in C\big([0,T],H^{m-1}(\mathbb{R}^n)\big) \cap C^1\big([0,T],H^{m-2}(\mathbb{R}^n)\big) \\ &\displaystyle &\displaystyle \qquad \cap \cdots \cap C^{m-2}\big([0,T],H^1(\mathbb{R}^n)\big) \cap C^{m-1}\big([0,T],L^2(\mathbb{R}^n)\big). \end{array} \end{aligned} $$

We will not discuss in detail how boundary or initial conditions need to be understood in a weak sense. We only mention that one has to use the notion of traces. In the case of energy solutions for hyperbolic Cauchy problems we have a regular behavior with respect to the time variable t. Thus, Cauchy conditions on the hyperplane t = 0 are understood in the classical sense, that is, the restriction of \(D_t^k u(t,x)\) on t = 0 exists in a given function space.

Finally, we want to remark that the different notions of solutions can be transferred directly to systems of partial differential equations.

Definition 3.11 (Notion of Sobolev Solutions for Systems)

Let \(G \subset \mathbb {R}^{n+1}\) be a domain and let \(U=(u_1,\cdots ,u_n)^T \in (W^m_p(G))^n \subset (L^1_{loc}(G))^n\) be a given vector function. Then, U is called a Sobolev solution from \((W^m_p(G))^n\) of

$$\displaystyle \begin{aligned} \begin{array}{rcl} A_0(t,x)\partial_t U + \sum\limits_{k=1}^n A_k(t,x) \partial_{x_k} U + B(t,x) U = F(t,x) \end{array} \end{aligned} $$

with real and continuously differentiable matrices A 0 and A k , with a real and continuous matrix B, and with a real, integrable vector F if for all test vector functions \(\varPhi =(\phi _1,\cdots ,\phi _n)^T \in (C^1_0(G))^n\) the following integral identity holds:

$$\displaystyle \begin{aligned} \begin{array}{rcl} &\displaystyle &\displaystyle \int_G U(t,x)\cdot \big(-\partial_t (A_0^T \varPhi) - \sum\limits_{k=1}^n\partial_{x_k}(A_k^T \varPhi) + B^T\varPhi \big)(t,x) \,d(t,x)\\ &\displaystyle &\displaystyle \qquad = \int_G F(t,x) \cdot \varPhi(t,x) \,d(t,x), \end{array} \end{aligned} $$

where A T denotes the transposed matrix of A.

Exercises Relating to the Considerations of Chap. 3

Exercise 1

Find elliptic, hyperbolic and strictly hyperbolic operators. Which type does the stationary plate operator Δ 2 have? Is the heat operator ∂ t  − Δ elliptic or hyperbolic or something else? Find an operator with constant coefficients which is hyperbolic but not strictly hyperbolic. Is there an elliptic operator of order 3?

Exercise 2

Show that the classical Schrödinger operators \(\frac {1}{i} \partial _t \pm \varDelta \) and the nonstationary plate operator \(\partial _t^2 + \varDelta ^2\) are 2-evolution operators. Is the heat operator ∂ t  − Δ a 2-evolution operator? (No, it is a parabolic operator.) Compare this with the classification of linear operators of second order by using the principal symbol.

Exercise 3

Find one example of an ultrahyperbolic partial differential equation that is not hyperbolic. Find one example of a parabolic partial differential equation which is not of normal type.

Exercise 4

How many boundary conditions can we pose for solutions to Δ m u = 0 in an interior domain?

Exercise 5

Find in \(\mathbb {R}^n \setminus \{0\}\) a classical radial solution u = u(r) of the Laplace equation Δu = 0 (cf. with Example 24.4.4 from Chap. 24).

Exercise 6

Study the boundary value problem of Example 3.4.1. Use the Laplace operator in polar coordinates

$$\displaystyle \begin{aligned} \begin{array}{rcl} \varDelta u= \frac{1}{r^2} \partial_r \big(r^2 \partial_r u \big) + \frac{1}{r^2 \sin \theta} \partial_\theta \big(\sin \theta \, \partial_\theta u\big) + \frac{1}{r^2 \sin^2 \theta} \partial^2_\phi u. \end{array} \end{aligned} $$

How do we interpret the radiation condition in the case k = 0 in the Helmholtz equation?

Exercise 7

What kind of additional conditions may we pose for solutions to the nonstationary plate equation \(\partial _t^2 u + \varDelta ^2 u =0\) in the interior domain \(\{x \in \mathbb {R}^3: |x|<1\}\) or in the exterior domain \(\{x \in \mathbb {R}^3: |x|>1\}\)?

Exercise 8

Explain the compatibility conditions for the model of a vibrating string which is fixed at two points x = 0 and x = 1. Explain compatibility conditions for the model of a potential inside of a rectangle

$$\displaystyle \begin{aligned} R=\{(x,y) \in \mathbb{R}^2: (x,y) \in [0,a] \times [0,b]\}.\end{aligned}$$

Exercise 9

In which time directions are we able to study the Cauchy problems

$$\displaystyle \begin{aligned} \begin{array}{rcl} u_t - \varDelta u=f(t,x), \, u(0,x)=u_0(x),\quad u_t + \varDelta u=f(t,x), \, u(0,x)=u_0(x)? \end{array} \end{aligned} $$

Exercise 10

Look for a classical solution u in the form \(u(t,x)=t^{-n/2}h\big (\frac {|x|}{\sqrt {t}}\big )\) for the heat equation u t  − Δu = 0 for t > 0 and \(x\in \mathbb {R}^n\). Here h is a suitably chosen function (cf. with Example 24.4.5 from Chap. 24).

Exercise 11

What is the definition of a Sobolev solution of the Poisson equation Δu = f? Determine all Sobolev solutions of d t u = f, where f(t) = 0 if t ≤ 0 and f(t) = 1 for t > 0.

Exercise 12

Find all distributional solutions of d t u = δ 0 in \(\mathbb {R}^1\) and of − Δu = δ 0 in \(\mathbb {R}^n\), where δ 0 denotes Dirac’s delta distribution at the origin.

Exercise 13

Suppose that we have an energy solution

$$\displaystyle \begin{aligned} u \in C\big([0,T],H^s(\mathbb{R}^n)\big) \cap \, C^1\big([0,T],H^{s-1}(\mathbb{R}^n)\big)\end{aligned}$$

of the Cauchy problem for the wave equation with s ≥ 1. For which s do we have a classical solution?

Exercise 14

Show that any classical solution of L(t, x, D t , D x )u = 0 is a Sobolev solution. Show that any Sobolev solution of L(t, x, D t , D x )u = 0 is a distributional solution. Here L = L(t, x, D t , D x ) is a linear partial differential operator with smooth coefficients.