
1 Introduction

It is little less than 80 years since Norbert Wiener’s visionary article on “The homogeneous chaos” [39], and some of the questions he addressed are still vividly debated in the community that seeks a comprehensive framework for uncertainties in fluid mechanics. One question in [39] can be summed up as

Question 1

Is it possible to have a measurement of the dynamics of a flow via polynomial expansions of certain quantities, where the polynomials are optimal with respect to some underlying probability laws?

The engineering and computational community recognized that this is also a fundamental issue for uncertainty calculations in many different fields, see [21] and references therein. Since then, any orthogonal polynomial expansion related to a certain probability law (not only the Hermite expansion [39], well suited for Gaussian processes and turbulence) is called a chaos polynomial expansion. There is actually another question asked at the end of Wiener’s paper.

Question 2

What is the compatibility of these polynomial expansions with respect to the PDE structure needed for fluid mechanics?

For the Burgers equation, which is a paradigm, Wiener quickly realized that loss of regularity may degrade the accuracy of polynomial approximations. This remark looks evident nowadays since the theory [27] of shock waves and discontinuous solutions is well established. Reading Wiener’s paper, it seems that he wanted to address both questions at the same time, meaning a theory both for the development of turbulence (whatever that meant for him) and for the existence of shocks (which degrade the regularity of the solutions and so lessen the quality of polynomial approximations).

The purpose of this work is to review some recent progress towards answering the second question, and only the second one. The material is presented in the chronological order in which the author looked at these problems, hence the title of the present contribution. Even if some problems evoked below seem at first sight extremely far from uncertainty propagation (such as the quaternion structure at the end of this paper in Sect. 4), it is hoped that the ensemble has a coherent structure and reflects some scientific issues in the modeling of uncertainty propagation in hyperbolic and kinetic equations. In a completely different direction, the reader interested in a modern statistical but PDE-based treatment of hyperbolic conservation laws is strongly advised to refer to [19] and references therein.

The plan is the following. Section 2 begins with the introduction of standard notations and results about the hyperbolic structure of systems of conservation laws with polynomial modeling of uncertainties. Section 3 takes advantage of the rewriting of scalar conservation laws as the limit of kinetic equations. It will explain that another view is nevertheless possible for polynomial expansions, denoted as kinetic polynomials. Section 3.3 will provide advanced material on kinetic polynomials. Section 3.4 will deal with a first formal extension to the isentropic Euler system with γ = 3. Numerical illustrations are provided in Sect. 4.

Similar notations will sometimes serve different purposes. For example f denotes the flux function in Sect. 2 but refers to the kinetic unknown in Sect. 3: the context makes this abuse non ambiguous. On the contrary the indices are kept the same: d is the space dimension, m the size of the system of conservation laws, p the dimension of the uncertain space and n the polynomial degree.

2 Hyperbolic Structure

The modern mathematical treatment of non viscous compressible fluid mechanics is based on the theory of hyperbolic systems of conservation laws [27]. Let us start with the Euler system of compressible non viscous fluid mechanics written in the domain \(\mathbf x\in \mathscr D \subset \mathbb R^d\)

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll} \partial_t \rho +\nabla \cdot (\rho \mathbf u)=0, \\ \partial_t (\rho \mathbf u) +\nabla \cdot (\rho \mathbf u\otimes \mathbf u)+\nabla p=0, \\ \partial_t (\rho e) +\nabla \cdot (\rho \mathbf u e + p\mathbf u )=0, \end{array} \right. \end{aligned} $$
(1)

where ρ(x, t) > 0 stands for the density of a gas or a fluid, \(\mathbf u(\mathbf x,t) \in \mathbb R^d\) is the velocity and e is the total energy. The total energy is the sum of the internal energy ε and of the kinetic energy, that is \(e=\varepsilon +\frac 12 |\mathbf u|{ }^2\). Considering that an equation of state (EOS) is provided, the pressure law is p = EOS(ρ, ε). System (1) is rewritten as a system of conservation laws

$$\displaystyle \begin{aligned} \left\{ \begin{array}{llll} \partial_t U + \nabla \cdot f(U)=0, &\mathbf x\in \mathscr D, & t>0, \\ U(\mathbf x,0)=U_0(\mathbf x), &\mathbf x\in \mathscr D, \end{array} \right. \end{aligned} $$
(2)

The unknown \(U(\mathbf x,t)\in \varOmega \subset \mathbb R^m\) is assumed to live in the set Ω of admissible states. A minimal requirement for well posedness is to have the hyperbolic structure, which means that the Jacobian matrix

$$\displaystyle \begin{aligned} A(U)=\nabla _U f(U) \in \mathbb R^{m\times m} \end{aligned}$$

is diagonalizable in \(\mathbb R^m\): for all U ∈ Ω, there is a set of real eigenvalues and eigenvectors. This is guaranteed if one has a smooth entropy-entropy flux pair (S, F) for the system. The entropy function \( S:\varOmega \rightarrow \mathbb R \) and the entropy flux function \( F:\varOmega \rightarrow \mathbb R^d \) are such that S is strictly convex, that is ∇2 S > 0, and

$$\displaystyle \begin{aligned} \nabla S \nabla f= \nabla F, \quad U\in \varOmega. \end{aligned}$$
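In the scalar case the compatibility relation reduces to S′(u)f′(u) = F′(u), which can be checked numerically. The sketch below uses the Burgers flux f(u) = u²∕2 with the classical entropy pair S(u) = u²∕2, F(u) = u³∕3; this pair is an illustrative choice, not taken from the text.

```python
import numpy as np

# Scalar sanity check of the compatibility relation  S'(u) f'(u) = F'(u),
# here for the Burgers flux f(u) = u^2/2 with the classical entropy pair
# S(u) = u^2/2, F(u) = u^3/3 (an illustrative choice).
f = lambda u: 0.5 * u**2
S = lambda u: 0.5 * u**2
F = lambda u: u**3 / 3.0

u = np.linspace(-2.0, 2.0, 401)
h = 1e-6
d = lambda g: (g(u + h) - g(u - h)) / (2.0 * h)  # central finite difference

residual = np.max(np.abs(d(S) * d(f) - d(F)))
assert residual < 1e-6   # relation holds up to finite-difference error
```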

The modeling of uncertainty propagation with chaos polynomials techniques is usually performed with another variable, call it \(\omega \in \varTheta \subset \mathbb R^p\). The uncertainty can be in the initial data \(U_0(\mathbf x, \omega )\in \mathbb R^m\) or in the flux function \(f_\omega :\varOmega \rightarrow \mathbb R^{m\times d}\) which displays a dependency with respect to ω. One obtains the system of conservation laws

$$\displaystyle \begin{aligned} \left\{ \begin{array}{lllll} \partial_t U + \nabla \cdot f_\omega(U)=0, & \mathbf x\in \mathscr D, & \omega\in \varTheta,& t>0, \\ U(\mathbf x, 0; \omega)=U_0(\mathbf x; \omega), & \mathbf x\in \mathscr D, & \omega\in \varTheta, \end{array} \right. \end{aligned} $$
(3)

where the unknown U(x, t;ω) is a function of the space-time variables and of the uncertain variable. The mathematical structure of (3) is extremely simple since it can be seen as an infinite collection of decoupled systems like (2), but for different ω.

For simplicity of exposition, the function f is now considered as independent of ω. This is not really a restriction with respect to the main mathematical issues since it is possible to rewrite (3) as an augmented system

$$\displaystyle \begin{aligned} \partial_t \left( \begin{array}{lllll} U \\ \omega \end{array} \right) + \nabla \cdot \left( \begin{array}{c} f_\omega(U) \\ 0 \end{array} \right) =0, \quad \mathbf x\in \mathscr D, \quad \omega\in \varTheta, \quad t>0. \end{aligned} $$
(4)

Up to the definition of an augmented flux function \(f^{\mathrm {aug}}(U,\omega )= \left ( \begin {array}{c} f_\omega (U) \\ 0 \end {array} \right )\), the system (4) is made of m + p conservation laws. An entropy can be defined under natural conditions [17].

Since the number of variables of the generic system (3) is large, indeed the dimension of the space of static variables is

$$\displaystyle \begin{aligned} \mbox{dim}\left(\mbox{physical space} \right)+ \mbox{dim}\left(\mbox{uncertain space} \right)=m+p, \end{aligned}$$

the idea of model reduction is appealing. This is performed below with what is called chaos expansion or chaos polynomials [8, 13, 21, 24, 28, 34, 39, 41]. For this task, one adds for convenience one extremely important piece of information, which is the a priori knowledge of an underlying probability law dμ(ω): one has that ∫ Θ dμ(ω) = 1. One can argue that, for a practical problem, no such probability law is a priori known. This is true in general, but there exist situations where the probability law can be characterized by physical experiments. Three different examples are ICF (Inertial Confinement Fusion) modeling [33], the discussion of EOS for ICF modeling [7] and, in another direction, signal processing [9].

The idea behind chaos polynomials is to use this information with optimal accuracy [2, 10]. The procedure is as follows: one first determines a family of orthogonal polynomials with increasing degree (partial or total)

$$\displaystyle \begin{aligned} \int_\varOmega p_i(\omega) p_j(\omega) d\mu(\omega)=\delta_{ij}. \end{aligned}$$

A basic example is Legendre polynomials

$$\displaystyle \begin{aligned}p_i(x)=\frac 1{2^i i!} \frac{d^i}{dx^i} (x^2-1)^i \end{aligned}$$

which are orthogonal for the uniform law

$$\displaystyle \begin{aligned} \int_{-1}^1 p_i(x) p_j(x) dx = \frac 2{2i+1}\delta_{ij}. \end{aligned}$$
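This orthogonality relation is easy to verify numerically; the sketch below checks it with Gauss-Legendre quadrature (the number of nodes and the degrees tested are illustrative choices).

```python
import numpy as np
from numpy.polynomial import legendre as L

# Check the Legendre orthogonality  \int_{-1}^{1} p_i p_j dx = 2/(2i+1) delta_ij
# with Gauss-Legendre quadrature (exact for polynomials of degree <= 2N-1).
N = 8
x, w = L.leggauss(N)

def p(i, x):
    """Legendre polynomial P_i at x (Rodrigues normalization)."""
    c = np.zeros(i + 1); c[i] = 1.0
    return L.legval(x, c)

for i in range(5):
    for j in range(5):
        integral = np.sum(w * p(i, x) * p(j, x))
        expected = 2.0 / (2 * i + 1) if i == j else 0.0
        assert abs(integral - expected) < 1e-12
```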

For simplicity of notations, the polynomial ordering is the simplest one, that is \(i\in \mathbb N\). All this motivates the definition of a truncated unknown

$$\displaystyle \begin{aligned} U^n(\mathbf x, t; \cdot)\in P^n:=\mbox{Span}_{0\leq i \leq n }\left\{p_i \right\}, \end{aligned} $$
(5)

that is

$$\displaystyle \begin{aligned} U^n(\mathbf x, t; \omega)= \sum_{i=0}^n U^n_i(\mathbf x, t) p_i(\omega) \mbox{ where } U^n_i(\mathbf x, t) = \int_\varOmega U^n(\mathbf x, t; \omega) p_i(\omega) d\mu(\omega). \end{aligned} $$
(6)

Since \(U^n_i\) is the ith moment of U n with respect to p i , this modeling is strongly related to two classical theories: the first one is the classical problem of moments [1, 11] and the second one is the closure problem of moments for kinetic equations [12, 23, 26, 30].
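As an illustration of the truncation (6), the sketch below projects a smooth scalar function of ω onto the first polynomials orthonormal for the uniform law on (−1, 1) (that is q_i = √(2i+1) P_i); the target u(ω) = exp(ω) and the degree are arbitrary illustrative choices.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Truncation (6) in practice: coefficients are moments computed by quadrature,
# for the family q_i = sqrt(2i+1) P_i, orthonormal for dmu = dx/2 on (-1,1).
x, w = L.leggauss(32)
w = w / 2.0                               # Gauss weights for dmu = dx/2

def q(i, x):
    c = np.zeros(i + 1); c[i] = np.sqrt(2 * i + 1)
    return L.legval(x, c)

u = np.exp(x)                             # illustrative smooth target
n = 10
coeffs = [np.sum(w * u * q(i, x)) for i in range(n + 1)]   # u_i = \int u q_i dmu
u_n = sum(ci * q(i, x) for i, ci in enumerate(coeffs))     # truncated expansion

err = np.max(np.abs(u - u_n))
assert err < 1e-8    # spectral accuracy for a smooth function
```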

If correctly solved, the closure problem yields a closed system for the evolution of \(\left (U^n_i \right )_{0\leq i \leq n}\). A naive method is to close readily as

$$\displaystyle \begin{aligned} \partial_t U^n_i(\mathbf x, t) + \nabla \cdot \int_\varOmega f(U^n(\mathbf x, t; \omega)) p_i(\omega) d\mu(\omega)=0, \quad 0\leq i \leq n.\end{aligned} $$
(7)
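In practice the moments appearing in this closure are evaluated by quadrature. A minimal sketch with the Burgers flux f(u) = u²∕2 (an illustrative choice) shows that, for a polynomial flux, Gauss quadrature with enough nodes computes them exactly: two resolutions give the same moments.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Moments \int f(U^n) q_i dmu of the Galerkin closure, for f(u) = u^2/2 and
# the family q_i = sqrt(2i+1) P_i orthonormal for dmu = dx/2 on (-1,1).
# f(U^n) has degree 2n, so Gauss quadrature with enough nodes is exact.
def q(i, x):
    c = np.zeros(i + 1); c[i] = np.sqrt(2 * i + 1)
    return L.legval(x, c)

def flux_moments(U, nodes):
    """Moments of (U^n)^2/2 against q_i, 0 <= i <= n."""
    x, w = L.leggauss(nodes); w = w / 2.0
    Un = sum(Ui * q(i, x) for i, Ui in enumerate(U))
    return np.array([np.sum(w * 0.5 * Un**2 * q(i, x)) for i in range(len(U))])

U = np.array([1.0, 0.5, -0.2])            # degree-2 truncation, arbitrary data
m32, m64 = flux_moments(U, 32), flux_moments(U, 64)
assert np.max(np.abs(m32 - m64)) < 1e-13  # quadrature is already exact
```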

When using such a structure for calculations on computers, the numerical evaluation of the integrals ∫ Ω f(U n(x, t;ω))p i (ω)dμ(ω) is needed. These integrals are highly non linear for many problems of interest. Some prescriptions can be found in [34]. Discarding these practical issues, there is bad news [17].

Lemma 2.1

Take the uniform law dμ = dω on the interval Ω = (0, 1). When applied to the Euler system (1) or to the shallow water system, the system (7) with the closure (6) may be non hyperbolic even for physically correct data.

So far, the only possibility to obtain a hyperbolic closure is to modify the expansion (5) using the entropy variable V = ∇S, which induces a diffeomorphism written as φ(U) = ∇S = V . The expansion reads

$$\displaystyle \begin{aligned} V^n(\mathbf x, t; \cdot)\in P^n:=\mbox{Span}_{0\leq i \leq n }\left\{p_i \right\}, \end{aligned} $$
(8)

that is

$$\displaystyle \begin{aligned} V^n(\mathbf x, t; \omega)= \sum_{i=0}^n V^n_i(\mathbf x, t) p_i(\omega) \mbox{ where } V^n_i(\mathbf x, t) = \int_\varOmega V^n(\mathbf x, t; \omega) p_i(\omega) d\mu(\omega)\end{aligned} $$
(9)

is the moment of the entropy variable. The closure is now written as

$$\displaystyle \begin{aligned} U^n(\mathbf x, t; \omega)= \varphi^{-1} \left( V^n(\mathbf x, t; \omega)\right). \end{aligned}$$

This method introduces additional non linearity in the model. There is however good news.

Theorem 2.1 (Proof in [17])

The system of conservation laws (7) with the closure (9) is hyperbolic unconditionally for U ∈ Ω. It admits the entropy-entropy flux pair \((\mathscr S, \mathscr F)\)

$$\displaystyle \begin{aligned} \mathscr S=\int_\varOmega S\left( U^n(\omega) \right) d\omega \mathit{\mbox{ and }} \mathscr F= \int_\varOmega F\left( U^n(\omega) \right) d\omega . \end{aligned}$$

Many different probability measures can be handled. An issue with such extended systems is the discretization procedure, since simplicity of the coding is a highly desirable property. Since the situation is not very different from moment models, efficient implementation is possible [17]. A variant adapted to quasi-linear systems, with a simpler implementation, is proposed in [40]. Convergence to the limit entropy solution with respect to n is challenging to establish [22]: results are only partial. It seems that no theoretical result of convergence is available so far after the appearance of shocks in the solution, see [17] for a weak-strong technique. In the rest of this work, an alternative to chaos polynomials is considered. Following [15], it is called kinetic polynomials.

3 Kinetic Structure

The kinetic formulation of conservation laws [4, 29, 31, 32] is another possibility to model uncertainties. Let ε > 0 be a small parameter which ultimately tends to zero. The kinetic formulation of the conservation law with flux \(F:\mathbb R\rightarrow \mathbb R\)

$$\displaystyle \begin{aligned} \partial_t u + \partial_x F(u)=0 \end{aligned} $$
(10)

writes as a Boltzmann equation for t ≥ 0, \(x\in \mathbb {R}^d\) and ξ ≥ 0, in a BGK (relaxation) form,

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \partial_t f_\varepsilon + a(\xi).\nabla f_\varepsilon + \frac 1 \varepsilon f_\varepsilon = \frac 1 \varepsilon M(u_\varepsilon; \xi), \qquad a=\nabla F, \\ {} u_\varepsilon(x, t) = \displaystyle \int f_\varepsilon(x,\xi, t)d\xi , \\ {} f_\varepsilon (t=0) = M\big(u^{\mathrm{init}}; \xi \big). \end{array}\right. \end{aligned} $$
(11)

The right hand side

$$\displaystyle \begin{aligned} M(u;\xi)= \left\{ \begin{array}{ll} 1, & 0<\xi<u, \\ 0, & \mbox{otherwise}, \end{array} \right. \end{aligned} $$
(12)

is called a Maxwellian. The initial condition satisfies

$$\displaystyle \begin{aligned} 0 \leq u(x,\omega,0)=u^{\mathrm{init}}(x,\omega) \leq u_{\mathrm{max}}, \qquad \int u^{\mathrm{init}}(x,\omega) dx d\mu(\omega) < \infty. \end{aligned} $$
(13)

The non negativity u ≥ 0 is needed for (12) to make sense in our context. That is why we assume the initial data is non negative, u init ≥ 0. This assumption simplifies some non essential technicalities and allows one to disregard the negative part of M; the reader can find in [29, 31, 32] the adaptation to general signs as well as convergence proofs when ε → 0+: typically u ε  → u and f ε  → M(u) in natural functional spaces.
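For non negative u this Maxwellian is simply the indicator of the interval (0, u) [29, 31, 32]. The sketch below checks numerically that its ξ-integral returns u, as required by the second line of (11); the grid resolution and the sample values of u are illustrative.

```python
import numpy as np

def maxwellian(u, xi):
    """Indicator Maxwellian M(u; xi) = 1 on (0, u), 0 elsewhere (u >= 0)."""
    return ((0.0 < xi) & (xi < u)).astype(float)

xi = np.linspace(0.0, 2.0, 200001)   # fine grid on the xi axis
dxi = xi[1] - xi[0]
for u in [0.3, 1.0, 1.7]:
    m = maxwellian(u, xi)
    assert np.all((0.0 <= m) & (m <= 1.0))    # 0 <= M <= 1
    assert abs(np.sum(m) * dxi - u) < 1e-4    # \int M dxi = u, up to dxi
```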

The Maxwellian M(u;ξ) is a universal minimizer for a family of entropy based functionals [4, 29, 31, 32]. For every convex function S(ξ), one has that

$$\displaystyle \begin{aligned} M(u;\cdot)= \operatornamewithlimits{argmin}_{u=\int g d\xi , \ 0\leq g \leq 1} \int S'(\xi) g d\xi . \end{aligned} $$
(14)

To model uncertainties, the idea is now to write (11) for all ω, and then to modify it in a polynomial manner so as to consider the intrusive kinetic formulation

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \partial_t f^n_\varepsilon + a(\xi).\nabla f^n_\varepsilon + \frac 1 \varepsilon f^n_\varepsilon = \frac 1 \varepsilon M^n\big(u^n_\varepsilon; \xi ,\omega \big), \\ {} u^n_\varepsilon(x,\omega, t) = \displaystyle \int f^n_\varepsilon(x,\xi, \omega, t)d\xi , \\ {} f^n_\varepsilon (t=0) = M^n\big(u^{\mathrm{init}}; \xi ,\omega \big), \end{array}\right. \end{aligned} $$
(15)

where \(0\leq M^n(u^n_\varepsilon ; \xi ,\omega )\leq 1\) is a suitable polynomial modification of the Maxwellian M. Notice that \(\int f^n_\varepsilon (t=0) d\xi d \omega = \int u^{\mathrm {init}} d \omega \) but the initial data need not be at equilibrium since u init usually does not belong to \(P^n_\omega \). The solutions of (15) now depend on two parameters ε and n. Depending on the way M n is defined, it is possible to get various estimates which explain the theoretical interest of the method.

3.1 Convolution Techniques

A first series of polynomial Maxwellians is obtained with suitable convolution techniques. One seeks M n in the form

$$\displaystyle \begin{aligned} M^n(u^n_\varepsilon; \xi, \omega) = G^n*_\omega M(u^n_\varepsilon; \xi) := \int G^n(\omega,\omega') M\big(u^n_\varepsilon(\omega'); \xi \big) d\mu(\omega') \end{aligned}$$

where the convolution kernel G n is decomposed as

$$\displaystyle \begin{aligned} G^n (\omega,\omega')= \sum_{i=0}^n c_{i} p_i(\omega) p_i(\omega'), {} \end{aligned} $$
(16)

where c i are appropriate coefficients and G n satisfies

$$\displaystyle \begin{aligned} G^n\geq 0, \qquad \int G^n(\omega,\omega') d\mu(\omega' )=c_0=1= \int G^n(\omega,\omega') d\mu(\omega). {} \end{aligned} $$
(17)

The theory of polynomial kernel approximation [18, 38] asserts that convolution kernels exist which satisfy the requirements (16)–(17). Following [15], we quote three possibilities: the Fejer kernel, the Jackson kernel and the modified Jackson kernel.

For example, consider the measure \(d\mu (\omega )= \frac {d\omega }{\pi \sqrt {1-\omega ^2}}\) with support in the interval ω ∈ I = (−1, 1) and the Tchebycheff polynomials

$$\displaystyle \begin{aligned} T_i(\omega)=\cos\left( i \arccos \omega \right), \qquad -1\leq \omega \leq 1. \end{aligned}$$

The Fejer kernel \(G^n_F\) is defined by the coefficients c 0 = 1 and \(c_i =2 \frac {n+1-i}{n+1}\) for 1 ≤ i ≤ n. The Jackson kernels have better approximation properties than the Fejer kernel, with a slightly different definition of the coefficients c i . An example of strong error bounds follows, see [15] for additional properties.
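The Fejer coefficients quoted above can be checked directly. The sketch below builds G n on a grid and verifies the positivity and normalization requirements (16)–(17) against the Tchebycheff measure, using Chebyshev-Gauss quadrature; the degree n and grid sizes are illustrative.

```python
import numpy as np

n, N = 6, 64
theta = (2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N)  # Chebyshev-Gauss nodes
w_nodes = np.cos(theta)                                  # each node has weight 1/N

def T(i, w):
    """Tchebycheff polynomial T_i(w) = cos(i arccos w)."""
    return np.cos(i * np.arccos(np.clip(w, -1.0, 1.0)))

# Fejer coefficients from the text: c_0 = 1, c_i = 2(n+1-i)/(n+1).
c = np.array([1.0] + [2.0 * (n + 1 - i) / (n + 1) for i in range(1, n + 1)])

def G(w, wp):
    """Kernel G^n(w, w') = sum_i c_i T_i(w) T_i(w')."""
    return sum(c[i] * T(i, w) * T(i, wp) for i in range(n + 1))

grid = np.linspace(-1.0, 1.0, 201)
W, Wp = np.meshgrid(grid, grid)
assert G(W, Wp).min() > -1e-10            # positivity, up to rounding

for w0 in [-0.7, 0.0, 0.4]:               # normalization \int G dmu = c_0 = 1
    assert abs(np.sum(G(w0, w_nodes)) / N - 1.0) < 1e-12
```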

Proposition 3.1

Consider the Jackson kernel. One has the inequalities

$$\displaystyle \begin{aligned} \int \left|\,f^n_\varepsilon(t)- G^n*_\omega f_\varepsilon (t) \right| dx d\xi d\mu(\omega) \leq C \frac t \varepsilon \int \mathrm{mod}_1( u^{\mathrm{init}},\frac 1n ) dx d\xi, \end{aligned} $$
(18)
$$\displaystyle \begin{aligned} \int \left|\,f^n_\varepsilon(t)- f_\varepsilon (t) \right| dx d\xi d\mu(\omega) \leq C\left(1+ \frac t \varepsilon\right) \int \mathrm{mod}_1( u^{\mathrm{init}},\frac 1n ) dx d\xi, \end{aligned} $$
(19)
$$\displaystyle \begin{aligned} \int \left|\,f^n_\varepsilon(t)- M(u; \xi) \right| dx d\xi d\mu(\omega) \leq c \sqrt \varepsilon + C\left(1+ \frac t \varepsilon\right) \int \mathrm{mod}_1( u^{\mathrm{init}},\frac 1n ) dx d\xi . \end{aligned} $$
(20)

Similar bounds are derived for \(u_\varepsilon ^n-u\).

However these estimates do not allow one to pass to the limit ε → 0 independently of n. It is instructive to write the formal limit in the regime εn = O(1). The unknowns of the resulting moment system are the quantities

$$\displaystyle \begin{aligned} u^n_{\varepsilon,i}(x,t)= \int f^n_{\varepsilon,i}(x,\xi,t) d\xi, \quad f^n_{\varepsilon,i}(x,\xi,t)= \int f^n_{\varepsilon}(x,\xi,\omega,t) T_i(\omega)d\mu(\omega). \end{aligned}$$

An artificial damping phenomenon arises. Set for convenience \(n+1=\frac 1\varepsilon \). The projected equations for the modified Jackson kernel are

$$\displaystyle \begin{aligned} \partial_t u^n_{\varepsilon,i} + \mathrm{div} \int a(\xi) f^n_{\varepsilon,i} d\xi = \frac 1 \varepsilon \big[ c_i^{mod J} u^n_{\varepsilon,i} - u^n_{\varepsilon,i} \big] \end{aligned}$$
$$\displaystyle \begin{aligned} = (n+1) \left( \frac{ (n+1-i)\cos \frac{\pi i }{ n+1}+ \sin \frac{\pi i }{ n+1} \cot \frac{\pi }{ n+1} }{n+1} -1 \right) u^n_{\varepsilon,i} \end{aligned}$$
$$\displaystyle \begin{aligned} = \left( (n+1- i) \cos \frac{\pi i }{ n+1} + \sin \frac{\pi i }{ n+1} \cot \frac{\pi }{ n+1} - n-1 \right) u^n _{\varepsilon,i} =- h_n(i) u^n _{\varepsilon,i}. \end{aligned}$$

Elementary calculations show that h n (0) = 0, and that h n (x) > 0 for 0 < x < n with h n (x) → 0 for every fixed x as n → ∞. One also has that 0 < h n (i) < i for 0 < i ≤ n. It implies after integration in x

$$\displaystyle \begin{aligned} \partial_t \int u^n_{\varepsilon,i} dx= - h_n(i) \int u^n _{\varepsilon,i} dx \Longrightarrow \int u^n _{\varepsilon,i} dx(t) \end{aligned} $$
(21)
$$\displaystyle \begin{aligned} =e^{- h_n(i)t } \int u^n _{\varepsilon,i} dx(0) \Longrightarrow \lim_{t\rightarrow \infty }\int u^n _{\varepsilon,i} dx(t)=0. \end{aligned}$$

A similar damping phenomenon of the moments i ≠ 0 also shows up if one uses the Jackson kernel, and is even stronger with the Fejer kernel. This seems to be the price to pay for the good convergence properties of Proposition 3.1.
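The damping rates h n (i) derived above are easy to tabulate. The sketch below reads the formula off the projected equations and checks that the mean is exactly preserved, that every other moment is damped, and that for fixed i the rate decreases as n grows (the values of n are illustrative).

```python
import numpy as np

def h(n, i):
    """Damping rate of the i-th moment for the modified Jackson kernel,
    read off the projected equations:
    h_n(i) = n+1 - (n+1-i) cos(pi i/(n+1)) - sin(pi i/(n+1)) cot(pi/(n+1))."""
    a = np.pi * i / (n + 1)
    return (n + 1) - (n + 1 - i) * np.cos(a) - np.sin(a) / np.tan(np.pi / (n + 1))

for n in [4, 16, 64]:
    i = np.arange(0, n + 1, dtype=float)
    hn = h(n, i)
    assert abs(hn[0]) < 1e-12    # h_n(0) = 0: the mean is exactly preserved
    assert np.all(hn[1:] > 0.0)  # every other moment is damped

# For fixed i the rate decreases with n: the damping is a truncation artifact.
assert h(64, 1.0) < h(16, 1.0) < h(4, 1.0)
```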

3.2 Minimization Techniques

The initial purpose of the method proposed below was precisely to obtain a polynomial modeling of uncertainties with good properties, such as the maximum principle and no damping (by comparison with (21), it means h n (i) = 0 for all i). Quite fortunately the universal entropy principle (14) can be generalized in this direction. It yields powerful tools with many good properties (even if some of them are still under study).

Let us take one entropy S and u n  ∈ P n with u n  ≥ 0 for all ω. Define

$$\displaystyle \begin{aligned} K^n(u^n)= \left\{ g^n(\cdot , \cdot)\in P^n_\omega, \ u^n(\omega)=\int g^n(\xi, \omega) d\xi , \ 0\leq g^n \leq 1 \right\}. \end{aligned}$$

For any n ≥ 0, one tries to construct an equilibrium M n(u n;ξ, ω) as a minimizer

$$\displaystyle \begin{aligned} M^n(u^n)= \operatornamewithlimits{argmin}_{g^n\in K^n(u^n)} \int S'(\xi) g ^n d\xi d\mu(\omega). \end{aligned} $$
(22)

For n = 0 this is an inequality of Brenier [4, 5]; it yields a unique minimizer. For general n > 0, let us assume for a while that M n exists and is unique. One has the following a priori properties: under the assumption that a solution exists to the minimization problem (22), then the solution of the kinetic equation

$$\displaystyle \begin{aligned} \left\{ \begin{array}{l} \partial_t f^n_\varepsilon + a(\xi).\nabla f^n_\varepsilon + \frac 1 \varepsilon f^n_\varepsilon = \frac 1 \varepsilon M^n(u^n_\varepsilon; \xi, \omega), \\ {} u^n_\varepsilon(x,\omega, t) = \displaystyle \int f^n_\varepsilon(x,\xi, \omega, t)d\xi , \\ {} f^n_\varepsilon (t=0) = M^n(u^{\mathrm{init,n}}; \xi), {} \end{array}\right. \end{aligned} $$
(23)

satisfies the entropy principle under the form

$$\displaystyle \begin{aligned} \partial_t \int S'(\xi) f^n_\varepsilon(x,\xi, \omega, t) d\xi d\mu(\omega) + \mathrm{div} \int a(\xi) S'(\xi) f^n_\varepsilon(x,\xi, \omega, t) d\xi d\mu(\omega) \leq 0. \end{aligned}$$

Moreover under the same assumption, if \(u^n_\varepsilon \) converges strongly to some u n, then one passes to the limit ε → 0 in (23) and obtains the system of conservation laws for 0 ≤ i ≤ n

$$\displaystyle \begin{aligned} \partial_t u^n_{i} + \mathrm{div} \ {\mathscr F}_i^n[ u^n] =0, \qquad {\mathscr F}_i^n[ u^n] := \int a(\xi) M^n(u^n;\xi, \omega) p_i(\omega) d\xi d\mu(\omega),\end{aligned} $$
(24)

with the entropy inequality

$$\displaystyle \begin{aligned} \partial_t {\mathscr S}^n[u^n] + \mathrm{div} \ {\mathscr G}^n[ u^n] \leq 0, \end{aligned}$$

where the entropy and entropy fluxes are defined by

$$\displaystyle \begin{aligned} {\mathscr S}^n[ u^n] := \int S'(\xi) M^n(u^n;\xi, \omega) d\xi d\mu(\omega) \end{aligned}$$

and

$$\displaystyle \begin{aligned} {\mathscr G}^n[ u^n] := \int S'(\xi) a(\xi) M^n(u^n;\xi, \omega) d\xi d\mu(\omega). \end{aligned}$$

However a difficult question is to construct the solution of (22).

3.2.1 Quasi-Solution

A quasi-solution, or feasible solution, to the minimization problem (22) is proposed. This construction has two goals. The first one is to establish that M n is a quasi-minimizer of (22), simultaneously for all S. The second one is to propose an implementable algorithm, at least for small n.

Let us remark that

$$\displaystyle \begin{aligned} S'(\xi)=\int_0^\infty S''(s)\, a_s(\xi)\, ds, \qquad a_s(\xi)= \left\{ \begin{array}{ll} 1, & s<\xi, \\ 0, & s\geq \xi, \end{array} \right. \end{aligned} $$
(25)

meaning that any function S′ such that S″ ≥ 0 and S′(0) = 0 is a non-negative integral of functions a s (ξ) which also satisfy \(a_s^{\prime }\geq 0\) and a s (0) = 0. That is, the family of functions (a s ) s>0 constitutes a generating family (actually the function ξ ↦ a s (ξ) is the derivative of a branch of a Kruzkov entropy). Let us replace (22) with a family of similar problems

$$\displaystyle \begin{aligned} M^n(u^n) = \operatornamewithlimits{argmin}_{g^n\in K^n(u^n)} \int_\xi^\infty g ^n ds d\mu(\omega), \quad \forall \xi. \end{aligned} $$
(26)

More precisely any solution of (26) (if it exists) is also a solution of (22) (with the same restriction concerning existence). Since the mass is preserved, that is \(\int g^n(s,\omega ) ds d\mu (\omega ) = \int u^n(\omega ) d\mu (\omega ) \), this problem can be rewritten with the alternative formulation

$$\displaystyle \begin{aligned} M^n(u^n) = \operatornamewithlimits{argmax}_{g^n\in K^n(u^n)} \int_0^\xi g^n ds d\mu(\omega), \quad \forall \xi. \end{aligned} $$
(27)

A quasi-solution is possible based on (27). The idea is to solve (27) progressively with respect to ξ, starting from ξ = 0 and then increasing its value until \(u_+=\max _I u^n(\omega )\). A constructive method (an algorithm) [15] shows that the quasi-solution writes

$$\displaystyle \begin{aligned} M^n(u^n;\xi, \omega)= h^n_l(\omega) \mbox{ for } \xi_l < \xi < \xi_{l+1}, \qquad h^n_l\in P^n_\omega, \quad 0\leq h^n_l \leq 1, \end{aligned} $$
(28)

with 0 = ξ 0 < ξ 1 < ⋯ < ξ L  < ξ L+1 = u +. The construction also shows the uniqueness of the feasible solution. The layer structure of this function is the key of the construction. The integral identity \(\int _0^{u_+}M^n(u^n;\xi , \omega ) d\xi = u^n(\omega )\) writes

$$\displaystyle \begin{aligned} \sum_{l\geq 0}( \xi_{l+1}-\xi_l ) h^n_l(\omega)= u^n(\omega), \quad \omega \in I. \end{aligned} $$
(29)

This function is constructed step by step, the first step for the bottom layer being trivial. The second step is the critical one where all the ideas of the method are carefully explained, in particular the role of the Bojanic-DeVore theorem [3] for one sided approximation.
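The layer structure and the integral identity (29) can be illustrated with a toy configuration; everything below (breakpoints, layer polynomials) is hypothetical data chosen so that 0 ≤ h l  ≤ 1 on I = (−1, 1), not the output of the actual algorithm of [15].

```python
import numpy as np

# Toy layered function on I = (-1,1): trivial bottom layer h_0 = 1 and a
# degree-1 layer h_1(w) = (1+w)/2, breakpoints xi_0=0 < xi_1=0.3 < xi_2=u_+=1.
xi_breaks = [0.0, 0.3, 1.0]
layers = [lambda w: np.ones_like(w), lambda w: 0.5 * (1.0 + w)]

def M(xi, w):
    """Layered function: M = h_l(w) for xi_l <= xi < xi_{l+1}."""
    for lo, hi, hl in zip(xi_breaks[:-1], xi_breaks[1:], layers):
        if lo <= xi < hi:
            return hl(w)
    return np.zeros_like(w)

w = np.linspace(-1.0, 1.0, 101)
xi_grid = np.linspace(0.0, 1.0, 10001)[:-1]   # left-endpoint quadrature
dxi = xi_grid[1] - xi_grid[0]
u = sum(M(xi, w) for xi in xi_grid) * dxi     # \int_0^{u_+} M dxi

# Identity (29): sum_l (xi_{l+1}-xi_l) h_l(w) = 0.3*1 + 0.7*(1+w)/2
expected = 0.3 + 0.7 * 0.5 * (1.0 + w)
assert np.max(np.abs(u - expected)) < 1e-3
```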

3.2.2 Discretization with Quasi-Solution

We discretize in time and space and implement the method resulting from (28) in the form

$$\displaystyle \begin{aligned} \frac{\overline{u} _j ^n - {u} _j ^n }{\varDelta t} + \frac{F^n[{u} _j^n] - F^n[{u} _{j-1}^n]}{\varDelta x}=0 \end{aligned} $$
(30)

where \({u} _j ^n\in P^n(\omega )\) is a polynomial in ω of degree n (fixed), in cell j and at the current time step. The generic flux \(F^n[u^n_j]\) is constructed with (28). The value at next time step t + Δt in cell j is denoted with a bar \(\overline {u} _j^n \in P^n(\omega )\).

Let us assume the initial data is a positive and bounded polynomial

$$\displaystyle \begin{aligned} 0\leq U_{\mathrm{min}}\leq u_j^n(\omega) \leq U_{\mathrm{max}} <\infty, \quad \forall j \mbox{ and } \forall \omega \in I. \end{aligned} $$
(31)

Consider the archetype of a convex flux which is the Burgers flux \(F(\xi )=\frac {\xi ^2}2\). The following result states that the explicit Euler scheme satisfies the maximum principle (this is a minimal stability requirement) under a CFL condition which is independent of n. The property is here checked directly on the scheme (30) but can also be derived as a consequence of the underlying kinetic formulation.

Theorem 3.1

Assume the CFL condition U max Δt ≤ Δx. Then

$$\displaystyle \begin{aligned} U_{\mathrm{min}}\leq \overline{u}_j^n(\omega) \leq U_{\mathrm{max}}, \quad \forall j \mathit{\mbox{ and }} \forall \omega \in I. \end{aligned} $$
(32)
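A minimal sketch of the scheme (30) in the deterministic case n = 0 (a single scalar per cell) with the Burgers flux. Since the data is non negative the characteristic speed is non negative and the upwind flux F[u j ] = u j ²∕2 applies; the grid, the Riemann-type data and the periodic boundary are illustrative choices. The bounds (32) are monitored at every step under the CFL condition of Theorem 3.1.

```python
import numpy as np

# Explicit scheme (30) for the Burgers flux F(u) = u^2/2, case n = 0,
# non negative data, upwind flux, periodic boundary for simplicity.
J = 200
dx = 1.0 / J
x = (np.arange(J) + 0.5) * dx
u = 0.25 + 0.75 * (x < 0.5)              # Riemann-type data in [U_min, U_max]
U_min, U_max = u.min(), u.max()

dt = 0.9 * dx / U_max                    # CFL condition of Theorem 3.1
for _ in range(100):
    flux = 0.5 * u**2
    u = u - dt / dx * (flux - np.roll(flux, 1))
    # maximum principle (32), the minimal stability requirement
    assert U_min - 1e-12 <= u.min() and u.max() <= U_max + 1e-12
```

The scheme is monotone under this CFL condition, which is what enforces the discrete maximum principle.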

3.3 More on Kinetic Polynomials

This section is based on the results recently announced in [16]. Not only is it proved that (22) is a well posed problem with existence and uniqueness of the minimizer, but the problem also admits a nice reformulation as an optimal control problem [36]. The minimization problem (22) concerns the variables (ξ, ω) but is independent of the variables (x, t). So we make for convenience a change of variables (x, t) ← (ω, ξ) together with a change of functions q n  ← u n and u n  ← M n. It yields simpler notations, also better suited to an optimal control problem.

Set G = [0, 1] (which stands for the space of the uncertain variable). Let T > 0, \(n\in \mathbb N\) and \(q_n\in P^n_+\). Define \(U_n=\left \{ v_n\in P^n_+, \ 1-v_n\in P^n_+ \right \}\). Set

$$\displaystyle \begin{aligned} K_n(T,q_n):= \left\{ v_n \in L^\infty(\mathbb R^+:U_n): \ \int_0^T v_n(t)dt=q_n, \ \ v_n\equiv 0 \mbox{ for }t>T \right\}. \end{aligned} $$
(33)

Take a strictly convex function denoted as s = S and a Lebesgue integrable weight w ≥ 0 with ∫ G w(x)dx > 0 (with the correspondence w(x)dx = dμ(ω) and x = ω). Define the linear cost function

$$\displaystyle \begin{aligned} J(u_n):= \int_G \int_{\mathbb R^+} u_n(t,x) ds(t) w(x) dx. \end{aligned} $$
(34)

The design of the polynomial Maxwellian (22) can be recast as the following L 1 minimization problem.

Problem 1

Find u n  ∈ K n (T, q n ) such that

$$\displaystyle \begin{aligned} u_n=\operatornamewithlimits{argmin}_{ K_n(T,q_n)} J(v_n) \end{aligned} $$
(35)

Theorem 3.2

Assume the weight w ≥ 0 satisfies ∫ G w(x)dx > 0. Assume s′ is bounded from below away from 0 and integrable. Assume \(T\geq \left \| q_n \right \|{ }_{L^\infty (G)}\) . Then there exists a unique minimum to the problem (22).

The proof is based on some convenient space-time comparison inequalities using ad hoc test functions. It is also proved that: (a) for T large enough the solution u n vanishes for large time; (b) there exists T ∗ > 0 such that all solutions are the same for T > T ∗.

A reformulation as an optimal control problem [35, 36] is appealing. Define

$$\displaystyle \begin{aligned} y_n(t,x)=\int_0^t u_n(\tau,x)d\tau \Longleftrightarrow y_n^{\prime}(t)=u_n(t) \mbox{ with } y_n(0)=0 . \end{aligned} $$
(36)

In this context, the function y n (t) ∈ P n is called the state and u n (t) ∈ U n is called the control. The minimization problem (35) reformulates as follows.

Problem 2

Find an optimal control u n  ∈ L (0, T : U n ) which minimizes the cost function and with the final state \( y_n(T)=q_n\in P_n^+\).

Let us first recall the Pontryagin Maximum Principle (PMP). Since the set of controls is bounded, convex and closed, one can invoke the PMP [35, 36]: for all optimal trajectories, there exists a Pontryagin multiplier λ n  ∈ P n such that

  • either the optimal control maximizes the criterion for almost all t ∈ (0, T)

    $$\displaystyle \begin{aligned} \int_G ( \lambda_n(x) - t) u_n(t,x)dx = \max_{v_n\in U_n} \int_G ( \lambda_n(x) - t) v_n(x)dx. \end{aligned} $$
    (37)

    This is called a normal trajectory, or a normal pair u n , λ n .

  • or the optimal control maximizes the criterion for almost all t ∈ (0, T)

    $$\displaystyle \begin{aligned} \int_G \lambda_n(x) u_n(t,x)dx = \max_{v_n\in U_n} \int_G \lambda_n(x) v_n(x)dx. \end{aligned} $$
    (38)

    This is called an abnormal trajectory, or an abnormal pair u n , λ n . The abnormality, or degeneracy, comes from the fact that the criterion is independent of the time variable.

Abnormal trajectories are easy to construct if q n (x) vanishes at some point x ∗ ∈ [0, 1]. In this case one can consider the polynomial λ n  ∈ P n with the quadrature property ∫ I λ n (x)v n (x)dx = −v n (x ∗) for all v n  ∈ P n . Since u n (t, x ∗) = 0 for all time t, it is clear that λ n satisfies (38).

Theorem 3.3

Assume q n (x) ≥ ε > 0 over G. There exists an adjoint state λ n  ∈ P n such that the optimal solution of Problems 1 and 2 is solution of the PMP under the normal form

$$\displaystyle \begin{aligned} u_n(t)=\underset{v_n\in U_n}{\mathit{\mbox{argmax}}} \left( \int_G (\lambda_n(x)-s'(t))v_n(x) w(x) dx \right) \mathit{\mbox{ for almost all }}t\in [0,T]. \end{aligned} $$
(39)

A proof can be performed by showing that λ n  ∈ P n is a minimizer of a convenient cost function. Define the cost function as the integral in time of the criterion (37)

$$\displaystyle \begin{aligned} K(\lambda_n):= \int_0^\infty \int_G ( \lambda_n(x) - t) u_n(t,x)dxdt \geq 0 \end{aligned}$$

where u n (t) satisfies the Pontryagin maximum principle. The cost is non negative by construction. It is well defined since u n vanishes for large t. The cost function K is convex over P n . The Danskin theorem yields that

$$\displaystyle \begin{aligned} dK(\lambda_n) = \left<\int_0^\infty u_n(t,x)dt, d\lambda_n \right>. \end{aligned}$$

The shooting method, which is the essence of the study of normal trajectories, is as follows.

Problem 3 (Shooting Method)

Find \(\lambda_n \in P_n\) such that the solution \(u_n(t)\) of (37) satisfies the endpoint condition \( \int _0^\infty u_n(t)dt=q_n\in P_n^+\).

The shooting method is conveniently studied with the Lagrangian

$$\displaystyle \begin{aligned} L(\lambda_n):= K(\lambda_n)- \int_G \lambda_n(x) q_n(x)dx \end{aligned}$$

where \(q_n\in P_n^+\) is the given endpoint. The polynomial \(\lambda_n \in P_n\) is a solution of the shooting method if and only if it is an extremal point of the Lagrangian

$$\displaystyle \begin{aligned} {dL(\lambda_n)}=\left< \int_0^\infty u_n(t,x)dt-q_n(x), d\lambda_n \right> =0. \end{aligned} $$
(40)

Since L is convex and differentiable, a solution to (40) is also a minimum of the Lagrangian. The cornerstone of the proof is to show that L tends to infinity at infinity. Another interest of the PMP for our problem is the following general principle.

Principle 3.2

The Pontryagin multiplier is formally the adjoint entropic variable (in the sense of Godunov).

The formal proof proceeds as follows. For \(\lambda_n \in P_n\), consider \(u_n(t)\) the minimizer of the cost function K and define

$$\displaystyle \begin{aligned} q_n=\int_0^\infty u_n(t) dt. \end{aligned}$$

Define \(K^*\) the formal Legendre transform of K

$$\displaystyle \begin{aligned} K^*(\lambda_n)= \int \lambda_n(x)q_n(x) dx -K(\lambda_n). \end{aligned}$$

The main difference between L and \(K^*\) is that \(q_n\) is given in L but is a function of \(\lambda_n\) in \(K^*\). One has

$$\displaystyle \begin{aligned} d K^*(\lambda_n)= \int_G \left( q_n d \lambda_n + \lambda_n d q_n\right) dx -d K(\lambda_n) \end{aligned}$$
$$\displaystyle \begin{aligned} = \int_G \lambda_n d q_n dx + \int_G \left( q_n - \int_0^\infty u_n(t) dt \right) d \lambda_n dx = \int_G \lambda_n d q_n dx. \end{aligned} $$
(41)

It can be summarized as \(dK=\left < q_n, d\lambda _n \right >\) and \(dK^*=\left < \lambda _n, d q_n \right >\). If one assumes that the transformation \(\lambda_n \mapsto q_n\) is a diffeomorphism, then \(K^*\) can be understood as a function of \(q_n\). In this case \(K^*\) is a candidate to be an entropy. Let us now determine a candidate to be an entropy flux. We define

$$\displaystyle \begin{aligned} G(\lambda_n):= \int_0^\infty \int_G t ( \lambda_n(x) - t) u_n(t,x)dxdt \end{aligned}$$

which is well defined since \(u_n\) is defined as a function of \(\lambda_n\). Another use of the Danskin theorem yields

$$\displaystyle \begin{aligned} dG=\left< \int_0^\infty tu_n(t)dt, d\lambda_n \right> . \end{aligned}$$

The polar transform of G is

$$\displaystyle \begin{aligned} G^*(\lambda_n)=\int_0^1 \lambda_n (x) \left(\int_0^\infty tu_n(t)dt \right) dx - G(\lambda_n). \end{aligned}$$

One has

$$\displaystyle \begin{aligned} dG^*= \int_0^1 d \lambda_n (x) \left(\int_0^\infty tu_n(t)dt \right) dx + \int_0^1 \lambda_n (x) d\left(\int_0^\infty tu_n(t)dt \right) dx - dG \end{aligned}$$
$$\displaystyle \begin{aligned} = \int_0^1 \lambda_n (x) d\left(\int_0^\infty tu_n(t)dt \right) dx. \end{aligned} $$
(42)

One obtains the following formal result.

Lemma 3.3

The system of projected equations

$$\displaystyle \begin{aligned} \partial_t q_n(x,t)+ \partial_x \int_0^\infty t u_n(x,t)dt =0 \end{aligned}$$

admits the formal additional law

$$\displaystyle \begin{aligned} \partial_t K^*(\lambda_n)+ \partial_x G^*(\lambda_n)=0. \end{aligned} $$
(43)

Proof

Indeed one has

$$\displaystyle \begin{aligned} \int_0^1 \lambda_n(x) \partial_t q_n(x,t) dx + \int_0^1 \lambda_n(x) \partial_x \int_0^\infty t u_n(x,t)dt dx =0. \end{aligned}$$

Using (41) and (42), this is rewritten as (43), which ends the proof.

3.4 Isentropic Euler System with γ = 3

An interesting question is the extension of the polynomial modeling of uncertainties to systems of conservation laws of physical importance. A first example, for the one-dimensional isentropic Euler system with γ = 3, is as follows. Consider

$$\displaystyle \begin{aligned} \left\{ \begin{array}{ll} \partial_t \rho +\partial_x (\rho u)=0, \\ \partial_t (\rho u) +\partial_x (\rho u^2 + p)=0, \\ \end{array} \right. \end{aligned}$$

where \(p=\frac 1{12} \rho ^3\). It admits the kinetic formulation [6, 29]

$$\displaystyle \begin{aligned} \partial_t f + v \partial_x f = \frac 1\varepsilon \left( M_{\rho , \rho u}-f \right) \end{aligned}$$

where \(M_{\rho , \rho u}(v) \equiv 1\) for u − ρ/2 < v < u + ρ/2 and \(M_{\rho , \rho u}(v) \equiv 0\) everywhere else. The Maxwellian \(M_{\rho , \rho u}\) minimizes \( \int _{\mathbb R} g(v)s'(v)dv \) over all functions 0 ≤ g ≤ 1 such that \(\int _{\mathbb R}g(v)dv=\rho >0\) and \(\int _{\mathbb R}g(v)vdv=\rho u\in \mathbb R\). A natural extension of the tools proposed previously would be to consider

$$\displaystyle \begin{aligned} M^n_{(\rho u)^n, \rho^n} = \operatornamewithlimits{argmin} _{g^n\in\ \text{admissible }\ \text{states}} \int_{\mathbb R} \int_0^1 g^n(v,\omega) v^2 dv d\omega. \end{aligned}$$
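The moments of the Maxwellian above can be checked by direct integration: since \(M_{\rho,\rho u}\) is the indicator function of the interval \((u-\rho/2,u+\rho/2)\), its first three moments reproduce the density, the momentum and the momentum flux \(\rho u^2+p\) with \(p=\rho^3/12\). A short Python sketch:

```python
# Sketch: the Maxwellian M_{rho,rho u} is the indicator of the interval
# (u - rho/2, u + rho/2). Direct integration checks that its moments
# reproduce the density rho, the momentum rho*u, and the momentum flux
# rho*u^2 + p with the gamma = 3 pressure law p = rho^3/12.

def maxwellian_moments(rho, u):
    a, b = u - rho / 2.0, u + rho / 2.0
    m0 = b - a                    # int M dv
    m1 = (b**2 - a**2) / 2.0      # int v M dv
    m2 = (b**3 - a**3) / 3.0      # int v^2 M dv
    return m0, m1, m2

rho, u = 1.7, -0.4
m0, m1, m2 = maxwellian_moments(rho, u)
print(abs(m0 - rho), abs(m1 - rho * u),
      abs(m2 - (rho * u**2 + rho**3 / 12.0)))  # all ~ 0
```

This identity is precisely why the kinetic formulation closes onto the isentropic Euler system with γ = 3.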

4 Numerical Methods

This section provides elementary explanations and illustrations of some of the theoretical tools presented before, and explains advanced algorithms.

4.1 Regularity

A key feature is that discontinuities of weak solutions of a system of conservation laws with uncertainties (47) or (7) propagate in the uncertain space [25].

We consider the initial data

$$\displaystyle \begin{aligned} u^{\mathrm{ini}}(x,\omega)=\left\{ \begin{array}{ll} 3 & \mbox{ for }x <1/2 \mbox{ and }-1<\omega < 0, \\ 5 & \mbox{ for }x <1/2 \mbox{ and }0<\omega < 1, \\ 1 & \mbox{ for }1/2 < x \mbox{ and } -1 < \omega < 1. \end{array} \right. \end{aligned} $$
(44)

The exact solution consists of a shock at velocity 2 for −1 < ω < 0, and another shock at velocity 3 for 0 < ω < 1:

$$\displaystyle \begin{aligned} u(x,\omega,t)=\left\{ \begin{array}{ll} 3 & \mbox{ for }x <1/2+2t \mbox{ and }-1<\omega < 0, \\ 5 & \mbox{ for }x <1/2 +3t \mbox{ and }0<\omega < 1, \\ 1 & \mbox{ elsewhere}. \end{array} \right. \end{aligned} $$
(45)

This is visible in [15, p. 1010, Figure 4] where the numerical solution captured with a standard moment model is also represented.
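The two shock velocities in (45) follow from the Rankine–Hugoniot condition for the Burgers flux \(f(u)=u^2/2\), namely \(s=(u_L+u_R)/2\). A one-line Python check:

```python
# Sanity check of the two shock velocities in (45): for the Burgers flux
# f(u) = u^2/2, the Rankine-Hugoniot condition gives s = (u_L + u_R)/2.

def shock_speed(uL, uR):
    # s = [f(uL) - f(uR)] / (uL - uR) with f(u) = u^2/2
    return (uL**2 / 2.0 - uR**2 / 2.0) / (uL - uR)

print(shock_speed(3.0, 1.0))  # 2.0, the speed for -1 < omega < 0
print(shock_speed(5.0, 1.0))  # 3.0, the speed for 0 < omega < 1
```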

4.2 Kinetic Polynomials

Kinetic polynomials can be used to design numerical methods with the preservation of the maximum principle. This is illustrated with an elementary implementation of the quasi-solutions.

We still consider the Burgers equation, but with continuous initial data

$$\displaystyle \begin{aligned} u^{\mathrm{ini}}(x,\omega)=\left\{ \begin{array}{ll} 12 & \mbox{ for }x -\omega/5 <1/2 , \\ 1 & \mbox{ for } 3/2 < x -\omega/5, \\ 12- 11\left( x -\omega/5 -1/2 \right) & \mbox{ in between}. \end{array} \right. \end{aligned} $$
(46)

The exact solution is a compressive ramp on every line ω = const, and a shock forms at time \(T=\frac 1{11}\). So the exact solution is continuous in the x and ω directions for t < T, and is discontinuous in the ω direction for T < t. The results are shown in [15, p. 1011, Figure 5].
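The shock formation time \(T=1/11\) can be recovered numerically by the method of characteristics: two characteristics \(x_0+u^{\mathrm{ini}}(x_0)t\) issued from a compressive pair first cross at \(t=-\Delta x_0/\Delta u^{\mathrm{ini}}\). A Python sketch on the line ω = 0 (the sampling is an illustrative choice):

```python
# Sketch: shock formation time for the ramp data (46) on the line
# omega = 0, by the method of characteristics x(t) = x0 + u_ini(x0)*t.
# Two characteristics issued from x1 < x2 cross at
#   t = -(x2 - x1)/(u_ini(x2) - u_ini(x1))
# whenever the pair is compressive; the minimum over pairs is T = 1/11.

def u_ini(x):
    # 12 on the left, 1 on the right, linear ramp of slope -11 in between
    if x < 0.5:
        return 12.0
    if x > 1.5:
        return 1.0
    return 12.0 - 11.0 * (x - 0.5)

xs = [i / 1000.0 for i in range(2001)]  # illustrative sampling of [0, 2]
T = float("inf")
for x1, x2 in zip(xs, xs[1:]):
    du = u_ini(x2) - u_ini(x1)
    if du < 0.0:  # compressive pair
        T = min(T, -(x2 - x1) / du)
print(T)  # approximately 1/11
```

The minimum crossing time equals \(-1/\min_x \partial_x u^{\mathrm{ini}} = 1/11\), in agreement with the gradient catastrophe time stated above.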

4.3 Numerical Construction of Kinetic Polynomials

The construction of kinetic polynomials via optimal control theory brings the possibility to use many efficient numerical methods. For example, it is proposed in [16] to use the AMPL language [20] to discretize and minimize (34)–(35). Note that \(L^1\) minimization problems in combination with polynomial chaos expansions are pursued in [24].

An example of a numerical implementation of the minimization problem (22) in the AMPL high-level language is given in Table 1, and a typical result is shown in Fig. 1.

Fig. 1

Numerical computation of the polynomial Maxwellian-minimizer \(M_n\) (22), referred to as \(u_n = M_n\) within this section. Numerical parameters: \(N_x = 80\), \(N_t = 200\), \(n = 6\), \(q_6 = 1 + x + x^2 + x^3\). The function \((x, t) \mapsto u_n(x, t)\) is represented on top as a surface, and on bottom as curves \(x \mapsto u_n(x;t)\) parametrized by t

Table 1 Script for an implementation of the solution of Problem 1 with the AMPL language [20]

4.4 Connection with Polynomial Properties

Finally we evoke an axis of research [14] concerning a new way to construct polynomials with two bounds, one lower and one upper, in relation with a numerical implementation of kinetic polynomials. Some of the main results can be summarized as follows.

Start from

$$\displaystyle \begin{aligned} p_n\in P_n^+ := \left\{p_n \in P_n(x), \ \ \mbox{ such that } 0\leq p_n(x) \quad \forall x\in [0,1] \right\}. \end{aligned} $$
(47)

Define the set of polynomials which enters into the construction of kinetic polynomials as

$$\displaystyle \begin{aligned} p_n\in U_n := \left\{p_n \in P_n(x), \ \ \mbox{ such that } 0\leq p_n(x) \leq 1 \quad \forall x\in [0,1] \right\}. \end{aligned} $$
(48)

Simpler subsets of \(U_n\) exist based on convex combinations \(q_n=\sum _{j=0}^n \alpha _j u_j \) where the coefficients satisfy 0 ≤ α j and \( \sum _{j=0}^n\alpha _j=1\): the generating polynomials \(u_j\) can be either the basis of the monomials x j, or the basis of the Bernstein polynomials \(B_{n,j}(x)=\frac {n!}{j!(n-j)!}x^j(1-x)^{n-j}\), or the basis of the rescaled Tchebycheff polynomials \(\frac {T_j\left ( 2x-1 \right )+1}2\). However, none of these subsets is able to generate all polynomials in \(U_n\) by convex combinations only.
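For the Bernstein basis the two bounds are immediate: the \(B_{n,j}\) are nonnegative and form a partition of unity on [0,1], so coefficients in [0,1] already keep the polynomial inside \(U_n\). A small Python check:

```python
# Sketch: the Bernstein basis gives an easy subset of U_n. Since the
# B_{n,j} are nonnegative and sum to 1 on [0,1] (partition of unity),
# any coefficients 0 <= alpha_j <= 1 -- a box, even larger than the
# simplex of convex combinations -- produce 0 <= p_n <= 1 on [0,1].

from math import comb

def bernstein(n, j, x):
    return comb(n, j) * x**j * (1.0 - x)**(n - j)

def p(alphas, x):
    n = len(alphas) - 1
    return sum(a * bernstein(n, j, x) for j, a in enumerate(alphas))

alphas = [0.0, 1.0, 0.2, 0.9, 0.0]  # illustrative coefficients in [0,1]
values = [p(alphas, i / 200.0) for i in range(201)]
print(min(values) >= 0.0 and max(values) <= 1.0)  # True
```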

Theorem 4.1

Let \(n\in 2\mathbb N\) be even. There exists a smooth function from \(\mathbb R^{3n/2} \) onto \(U_n\). The smooth function is made explicit by a constructive algorithm and is 2π-periodic with respect to all its arguments.

The norm of uniform convergence is \(\|f\|=\max_{0\leq x\leq 1}|f(x)|\) for \(f\in C^0[0,1]\).

Theorem 4.2

Assume \(f\in C^0[0,1]\) and 0 ≤ f(x) ≤ 1 for 0 ≤ x ≤ 1. Then

$$\displaystyle \begin{aligned} \inf_{p_n\in U_n} \|\,f-p_n\| \leq 2 \inf_{g_n\in P_n} \|\,f-g_n\|. \end{aligned} $$
(49)

Even if completely elementary, this is a remarkable result since the constant 2 is independent of n. The right-hand side shows spectral convergence. The representation of Theorem 4.1 comes from quaternion algebras and the Euler four-square identity.

The next tests use this structure to minimize functionals like

$$\displaystyle \begin{aligned} J(p_n) :=\int_0^1 \left( t- \lambda_n(x)\right) p_n(x)dx ,\quad p_n \in U_n \end{aligned} $$
(50)

where \(\lambda_n \in P_n\) is given and t may vary. This problem is of interest in the context of this review paper. A reference is the recent work [16], where a characterization of \(p_n\) is provided using the notion of a point of contact, which comes from the seminal reference [3]. A point of contact of a function \(f \in C^1[0,1]\) with 0 ≤ f ≤ 1 is any point 0 ≤ y ≤ 1 such that f(y) = 0 or f(y) = 1. The multiplicity order of the contact is the number of vanishing derivatives plus one. For example, if the point of contact is inside the interval, 0 < y < 1, then necessarily f′(y) = 0, so the multiplicity order at y is at least 2. It is proved in [16] that, for almost all t, the minimizer \(p_n\) has at least n + 1 points of contact counted with their order of multiplicity (this is similar to one-sided \(L^1\) minimization, for which we refer to [3]). We use this theoretical property to check the accuracy of the approximation. We remark that the optimal solution \(p_n\) of (50) has a natural tendency to vanish where \(t - \lambda_n(x) > 0\) and to be equal to 1 where \(t - \lambda_n(x) < 0\); this is clearly a good strategy to minimize the cost function (50).
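The bang-bang tendency is already visible on the simpler Bernstein subset of \(U_n\) described above: there the criterion (50) is linear in the coefficients \(\alpha_j\), so the box minimizer simply activates the coefficients with a negative weight. A Python sketch with an illustrative \(\lambda_n\) and t (not those of the figures):

```python
# Sketch: minimizing (50) over the simpler Bernstein subset of U_n
# described above. There J(p_n) = sum_j alpha_j * c_j is linear in the
# coefficients, with c_j = int_0^1 (t - lambda_n(x)) B_{n,j}(x) dx, so
# the box minimizer is bang-bang: alpha_j = 1 exactly when c_j < 0.
# lambda_n and t below are illustrative, not those of the figures.

from math import comb

def bernstein(n, j, x):
    return comb(n, j) * x**j * (1.0 - x)**(n - j)

def integral01(f, m=2000):
    # composite midpoint rule on [0,1]
    return sum(f((i + 0.5) / m) for i in range(m)) / m

n, t = 6, 0.3
lam = lambda x: 2.0 * (2.0 * x - 1.0)**2 - 1.0 + x  # illustrative lambda_n

c = [integral01(lambda x, j=j: (t - lam(x)) * bernstein(n, j, x))
     for j in range(n + 1)]
alphas = [1.0 if cj < 0.0 else 0.0 for cj in c]  # bang-bang coefficients
J = sum(a * cj for a, cj in zip(alphas, c))
print(J <= 0.0)  # True: p_n = 0 is admissible and gives J = 0
```

This box minimizer reproduces the 0/1 behavior of the optimal \(p_n\) in the regions of constant sign of \(t-\lambda_n\), although the true minimization over all of \(U_n\) is more subtle, as the contact-point discussion shows.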

A numerical result representative of all the tests is the following. Take

$$\displaystyle \begin{aligned} \lambda_2(x) := T_2(2x-1)-t+x \mbox{ and }t=0.3. \end{aligned}$$

A first numerical simulation yields the function displayed in Fig. 2; the numerical value of the cost function is \(J(p_n^1)\approx -0.16737\). As seen on the figure, this function does not have the required number of contacts. But another minimum is captured by numerical simulations with another starting point, for which \(J(p_n^2)\approx -0.188478<J(p_n^1)\): its total order of contact is large enough (equal to 2n + 1 = 7 since n = 3), in accordance with the theory. No other minimum with a lower value of the cost has been obtained by simulations, so it is the best candidate. Note that the exact calculation of the derivative \(p_n^{\prime }(x)\) is convenient to count without ambiguity the number of derivatives which vanish at points of contact (Fig. 3).

Fig. 2

Plot of \(\lambda_2(x) - t\) and of a local minimum \(p_n^1\) with \(J(p_n^1)\approx -0.16737\). The total order of contact is 1 + 2 + 2 = 5

Fig. 3

On the top, plot of \(\lambda_2(x) - t\) and of another local minimum \(p_n^2\) with \(J(p_n^2)\approx -0.188478\). The total order of contact is 1 + 2 + 2 + 2 = 7 = 2n + 1, so it is the best candidate to be the global minimum. On the bottom, plot of the exact derivative \((p_n^2)'\)

5 Conclusion

The examination of the challenges posed by polynomial modeling of uncertainties shows that alternatives to standard moment methods with chaos polynomials do exist. These new formulations try to introduce the polynomial structure used to model the uncertain variable ω into standard PDEs, while preserving as much as possible the theoretical properties of the initial systems. Convolution techniques, kinetic formulations of conservation laws, minimization formulations and the construction of quasi-solutions are of interest for nonlinear hyperbolic equations because they address the maximum principle and the preservation of entropies, and so they constitute an answer to the second question in the introduction. In certain cases, the preservation of mathematical structures yields proofs of convergence with respect to the parameters which control the polynomial degree in the uncertain space. However, one loses the simplicity of implementation provided by moment models [37], so a clear path to the design of efficient, fast and multidimensional algorithms based on these structures is still to be invented.