Abstract
Motivated by applications in economics and finance, in particular to the modeling of limit order books, we study a class of stochastic second-order PDEs with non-linear Stefan-type boundary interaction. To solve the equation we transform the problem from a moving boundary problem into a stochastic evolution equation with fixed boundary conditions. Using results from interpolation theory we obtain existence and uniqueness of local strong solutions, extending results of Kim, Zheng and Sowers. In addition, we formulate conditions for existence of global solutions and provide a refined analysis of possible blow-up behavior in finite time.
1 Introduction
Moving boundary problems make it possible to model multi-phase systems with separating boundaries that evolve in time. Typically, the evolution of the free interface is strongly coupled with the evolution of the whole system. A classical example is the so-called Stefan problem, introduced in 1888 by Stefan [30], which describes the evolution of the temperature v(t, x) in a system of water and ice. In one space dimension it reads as
where \(\eta _i\) and \(\eta _w\) are the thermal diffusivities of ice and water, and \(x_*(t)\) is the position of the interface between the two phases. The evolution of the interface is governed by the so-called Stefan condition
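In a standard formulation (the precise sign and scaling conventions of the displays (1.1) and (1.2) may differ), the two diffusion equations and the Stefan condition read
$$\begin{aligned} \frac{\partial v}{\partial t}(t,x)&= \eta _i \frac{\partial ^2 v}{\partial x^2}(t,x), \quad x< x_*(t), \qquad \frac{\partial v}{\partial t}(t,x) = \eta _w \frac{\partial ^2 v}{\partial x^2}(t,x), \quad x> x_*(t), \\ v(t,x_*(t))&= 0, \qquad \dot{x}_*(t) = \eta _w \frac{\partial v}{\partial x}\bigl (t,x_*(t)+\bigr ) - \eta _i \frac{\partial v}{\partial x}\bigl (t,x_*(t)-\bigr ). \end{aligned}$$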
This problem and various extensions have been studied extensively in the second half of the twentieth century; see [34] for a review of the literature. For classical solutions of semi-linear extensions of (1.1) see e.g. [11, 24]. In addition to the theory of classical and weak solutions, the corresponding evolution equations have been studied in the framework of maximal \(L^p\)-regularity, see [10, 29] and references therein. Compared to the deterministic case, stochastic partial differential equations with free or moving interface have received much less attention. One exception is [2], where Barbu and da Prato show existence of a solution and an invariant ergodic measure for the linear problem (1.1) with additive noise in multiple dimensions.
More recently, both deterministic and stochastic moving boundary problems have been applied in economics and finance to dynamic models of trading, in particular to models of so-called (electronic) limit order books where orders of buyers and sellers participating in stock exchanges are stored, see e.g. [3, 20, 26, 36]. In such models, the space coordinate x typically corresponds to price (usually on logarithmic scale), and the quantity v(t, x) to the density of buyers or sellers willing to commit to a transaction at time t for the price x. Buyers are recorded with positive sign and sellers with negative sign, such that the two phases of the system distinguish buyers from sellers. Of particular interest is the evolution of the separating boundary, which corresponds to the marginal price at which both sellers are currently willing to sell and buyers are willing to buy. Zheng [36] for example proposes the following stochastic moving boundary problem as a model for dynamic trading in a limit order book:
where the subscripts b and s correspond to buyer and seller respectively and \(d\xi _t(x)\) is Gaussian noise. The evolution of the interface is governed by the linear Stefan condition (1.2). The accompanying mathematical theory is developed in [15, 19] and numerical analysis in [17]. Other examples can be found in Lasry and Lions [20] where a free boundary model for price formation under negotiation is introduced in a mean-field game setting. Another part of the literature derives SPDE models for limit order books as functional limits from discrete queuing models of orders that arrive and then are filled or cancelled. For examples of this approach see e. g. [3] where a parabolic SPDE as a model for the order book is obtained in the limit. In addition, there is a series of papers by Bouchaud et al. [6, 26] with PDE and SPDE models observed as limiting equations of particle models.
In this paper we study a Stefan-type stochastic moving boundary problem, which can be considered an extension of (1.3) and of the theory developed in [19] with several important differences in scope and methodology:
-
Instead of the homogeneous linear stochastic Stefan problem, we allow for a more general drift coefficient and in particular a non-linear boundary condition replacing (1.2). Recent empirical studies of the dependency of price change on the imbalance of the order book (see [5, 22]) suggest linear behaviour for balanced order books and non-linear behaviour when imbalance is large.
-
In addition to mild and weak solutions as in [19], we obtain solutions in the analytically strong sense and make rigorous, in the stochastic setting, the transformation from free to fixed boundary that is introduced in a deterministic setting in [24] and used in [19].
-
We combine tools from the SPDE framework of da Prato and Zabczyk (cf. [7]) with results from interpolation theory, which allows for greater generality and avoids direct computations using the heat kernel as in [19].
2 A stochastic moving boundary problem
2.1 Problem formulation
Our goal is to establish a framework for solving stochastic moving boundary problems of the type
for \(t\ge 0\), \(x\in \mathbb {R}\), with the moving boundary \(x_*(t)\) governed by
with Dirichlet boundary conditions at \(x_*\), i. e.,
for \(t\ge 0\). The coefficients are functions \(\mu _\pm : \mathbb {R}^3\rightarrow \mathbb {R}\), \(\sigma _\pm : \mathbb {R}^2\rightarrow \mathbb {R}\), and real numbers \(\eta _\pm >0\). We denote by \(\xi \) the spatially colored noise given by
for some integral kernel \(\zeta : \mathbb {R}^2\rightarrow \mathbb {R}\) and a cylindrical Wiener process W on the Hilbert space \(U=L^2(\mathbb {R})\) whose covariance operator is the identity. As usual, W lives on a filtered probability space \((\Omega , \mathcal {F}, (\mathcal {F}_t)_{t \ge 0}, \mathbb {P})\). For each \(t \ge 0\) we require that \(x \mapsto v(t,x)\) is continuously differentiable on \((-\infty , x_*(t))\) as well as on \((x_*(t),\infty )\), such that all first derivatives appearing in (2.1) can be understood in the classical sense. The second derivative should be considered a weak derivative, and a suitable function space for v, as well as the precise notion of ‘solution’ to (2.1), will be defined below.
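In the usual formulation, \(\xi \) is obtained by applying the integral operator \(T_\zeta \) with kernel \(\zeta \) to the cylindrical noise (a sketch consistent with Assumption 2.6 and Example 2.7 below):
$$\begin{aligned} \,{\text {d}}\xi _t(x) = (T_\zeta \,{\text {d}}W_t)(x), \qquad (T_\zeta w)(x) := \int _{\mathbb {R}} \zeta (x,y)\, w(y) \,{\text {d}}y, \quad w\in U. \end{aligned}$$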
We now make precise what we understand by a solution to the stochastic moving boundary problem (2.1). In general, solutions to the moving boundary problem may be local, i.e. only exist up to a stopping time \(\tau \). To formalize this, it will be convenient to work with stochastic intervals. Given two stopping times \(\varsigma \le \tau \) the stochastic interval \(\llbracket \varsigma ,\tau \rrbracket \) is defined as
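In the usual notation,
$$\begin{aligned} \llbracket \varsigma ,\tau \rrbracket := \left\{ (\omega ,t)\in \Omega \times \mathbb {R}_{\ge 0}: \varsigma (\omega )\le t\le \tau (\omega )\right\} . \end{aligned}$$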
Using strict inequalities, the open stochastic interval \(\rrbracket \varsigma ,\tau \llbracket \) and its half-open analogues are defined in the same way. As usual in a probabilistic setting we will soon ‘drop the omega’ and say e.g. that for two stochastic processes X and Y the equality \(X_t = Y_t\) holds for all \(t \in \llbracket 0, \tau \rrbracket \), when we mean that \(X_t( \omega ) = Y_t(\omega )\) for \(\mathbb {P}\)-almost all \(\omega \) and all t such that \((\omega ,t)\in \llbracket 0,\tau \rrbracket \), i.e.
To formalize the moving frame for the moving boundary problem we define for \(x \in \mathbb {R}\) the function space
where \(H^1_0\) and \(H^2\) are the usual Sobolev spaces. Note that due to the Sobolev embeddings, any function v in \(\Gamma (x)\) can be identified with an element of \(L^2(\mathbb {R})\).
Finally, using the notation from (2.1) we introduce the functions \({\overline{\mu }}: \mathbb {R}^4\rightarrow \mathbb {R}\), \({\overline{\sigma }}: \mathbb {R}^2\rightarrow \mathbb {R}\),
Definition 2.1
A local solution of the stochastic moving boundary problem (2.1) on the stochastic interval \(\llbracket 0, \tau \llbracket \), with initial data \(v_0\) and \(x_0\), is a couple \((v,x_*)\) of stochastic processes, where
such that \((v,x_*)\) is predictable as an \(L^2(\mathbb {R})\times \mathbb {R}\)-valued process, and
holds on \(\llbracket 0, \tau \llbracket \). The first equality is an equality in \(L^2(\mathbb {R})\); the first integral is a Bochner integral in \(L^2(\mathbb {R})\), and the second one a stochastic integral in \(L^2(\mathbb {R})\).
The solution is called global if \(\tau = \infty \), and the interval \(\llbracket 0, \tau \llbracket \) is called maximal if there is no solution of (2.1) on a larger stochastic interval.
2.2 Assumptions and main results
We introduce the following assumptions on the coefficients appearing in (2.1).
Assumption 2.2
The functions \(\mu _\pm \) are continuously differentiable and
-
(i)
there exist \(a \in L^2(\mathbb {R}_+)\), b, \({\tilde{b}} \in L^\infty _{loc}(\mathbb {R}^2; \mathbb {R})\) such that for all \(x, y, z\in \mathbb {R}\)
$$\begin{aligned} \left|\mu _\pm (x,y,z)\right| + \left|\frac{\partial }{\partial x}\mu _\pm (x,y,z)\right| \le a(|x|) + b(y,z)\left( \left|y\right| + \left|z\right|\right) , \end{aligned}$$ and
$$\begin{aligned} \left|\frac{\partial }{\partial y}\mu _\pm (x,y,z)\right| + \left|\frac{\partial }{\partial z}\mu _\pm (x,y,z)\right| \le {\tilde{b}}(y,z), \end{aligned}$$ -
(ii)
\(\mu _\pm \) and their partial derivatives (in x, y and z) are locally Lipschitz with Lipschitz constants independent of \(x\in \mathbb {R}\).
Assumption 2.3
The functions \(\sigma _\pm \) are twice continuously differentiable and
-
(i)
For every multi-index \(I = (i,j) \in \mathbb {N}^2\) with \(\left|I\right| \le 2\) there exist \(a_I\in L^2(\mathbb {R}_+)\) and \(b_I \in L^\infty _{loc}(\mathbb {R}, \mathbb {R}_+)\) such that
$$\begin{aligned} \left|\frac{\partial ^{\left|I\right|}}{\partial x^{i}\partial y^{j}} \sigma (x,y)\right| \le {\left\{ \begin{array}{ll} a_I(|x|) + b_I(y)\left|y\right|, &{} j = 0, \\ b_I (y), &{} j \ne 0. \end{array}\right. } \end{aligned}$$ -
(ii)
\(\sigma _\pm \) and their partial derivatives (in x, y and z) are locally Lipschitz with Lipschitz constants independent of \(x\in \mathbb {R}\).
-
(iii)
\(\sigma _\pm \) satisfy the boundary condition
$$\begin{aligned} \sigma _\pm (0,0) = 0. \end{aligned}$$(2.5)
Remark 2.4
Later on, certain Nemytskii operators will be defined through \(\mu _\pm \) and \(\sigma _\pm \), and the assumptions made above can be traced back to regularity requirements on these operators, see Appendices 1 and 2. Also note that if \(\sigma _+\) or \(\sigma _-\) is independent of \(x\in \mathbb {R}\), then, in Assumption 2.3, part (iii) is a consequence of part (i).
Assumption 2.5
\(\varrho :\mathbb {R}^2\rightarrow \mathbb {R}\) is locally Lipschitz continuous. More precisely, for all \(N\in \mathbb {N}\) there exists an \(L_{\varrho ,N}\) such that
Assumption 2.6
\(\zeta (.,y) \in C^3(\mathbb {R})\) for all \(y\in \mathbb {R}\) and \(\frac{\partial ^{i}}{\partial x^i}\zeta (x,.)\in L^2(\mathbb {R})\) for all \(x\in \mathbb {R}\), \(i\in \{0,1,2,3\}\). Moreover,
For the rest of this paper, we use the notation \(\zeta ^{(i)}:=\frac{\partial ^{i}}{\partial x^i} \zeta \).
Example 2.7
(Convolution) Let \(\zeta \) be a convolution kernel, i. e. \(\zeta (x,y) := \zeta (x-y)\), x, \(y\in \mathbb {R}\). If \(\zeta \in C^{\infty }(\mathbb {R})\cap H^3(\mathbb {R})\), where \(H^3\) denotes the Sobolev space of order 3, then Assumption 2.6 is satisfied. In this case, the operator \(T_\zeta \) corresponds to spatial convolution with \(\zeta \).
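As a numerical illustration of the smoothing effect of such a convolution kernel (not taken from the paper; grid, kernel and seed are hypothetical choices), the following sketch convolves discretized spatial white noise with a Gaussian kernel playing the role of \(\zeta \):

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatial grid on a truncated interval (hypothetical choices throughout).
L_dom, n = 20.0, 2000
dx = L_dom / n
x = np.linspace(-L_dom / 2, L_dom / 2, n)
white = rng.standard_normal(n) / np.sqrt(dx)        # discretized spatial white noise

# Smooth Gaussian kernel playing the role of zeta; it lies in C^inf and (numerically) in H^3.
zeta = np.exp(-(x**2) / 2) / np.sqrt(2 * np.pi)

# (T_zeta w)(x) = int zeta(x - y) w(y) dy, discretized as a Riemann sum.
colored = np.convolve(white, zeta, mode="same") * dx

# The colored field is far smoother than the white one: its discrete increments are tiny.
rough_white = np.abs(np.diff(white)).mean()
rough_colored = np.abs(np.diff(colored)).mean()
assert rough_colored < 0.05 * rough_white
```

The convolved field inherits the smoothness of \(\zeta \), in line with the derivatives \(\zeta ^{(i)}\) required in Assumption 2.6.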
Example 2.8
(Stochastic Stefan Problem) Let \(\mu _+ =\mu _- \equiv 0\), \(\sigma _+(x,v) = \sigma _-(-x,v) = v\) and \(\varrho (x_1,x_2) = \varrho \cdot (x_2-x_1)\) for some \(\varrho \in \mathbb {R}\). Then, (2.1) is the two-phase Stefan problem with multiplicative colored noise. With \(\eta _-=0\), \(\mu _-, \sigma _-\equiv 0\) and \(\zeta (x,y):= \zeta (x-y)\) we end up with the one-phase system discussed in [19]. Even though our assumptions and proofs are formulated for the two-phase case, it is straightforward to adapt them to a one-phase setting.
Example 2.9
(Two-Phase Burgers' equation) The case
yields a stochastic version of a two-phase viscous Burgers' equation in one dimension. Obviously, Assumption 2.2 on \(\mu _{\pm }\) is satisfied.
Example 2.10
(Reaction-Diffusion-type drift) Set \(\mu _\pm (x,v,v') := f_\pm (v)\), for some \(f_\pm \in C^1(\mathbb {R})\) with locally Lipschitz derivative and \(f_+(0) = f_-(0) = 0\). Also in this case it is easy to check that Assumption 2.2 is satisfied.
Let us also remark here that, without substantial change in our proofs, the constant Laplacian terms \(\eta _\pm \frac{\partial ^2}{\partial x^2}v\) in (2.1) can be replaced by space-dependent Laplacians in divergence form, \(\frac{\partial }{\partial x}\left( \eta _\pm (x - x_*(t)) \cdot \frac{\partial }{\partial x}v \right) \), for scalar functions \(\eta _\pm \) that are bounded from above and below by strictly positive constants.
Our first main result concerns the existence of a maximal local solution to the moving boundary problem (2.1).
Theorem 2.11
(Maximal local solution) Let Assumptions 2.2, 2.3, 2.5, and 2.6 hold true and let \(x_0 \in \mathbb {R}\) and \(v_0 \in \Gamma (x_0)\). Then there exists a predictable, strictly positive stopping time \(\tau \) and a local solution \((v,x_*)\) of (2.1) on the maximal interval \(\llbracket 0, \tau \llbracket \) in the sense of Definition 2.1. For almost every \(\omega \in \Omega \) it holds that \(v(\omega ,.) \in C([0, \tau (\omega )); H^1(\mathbb {R}))\) and \(x_*(\omega ,.) \in C^1([0,\tau (\omega )); \mathbb {R})\). Moreover, \((v,x_*)\) is unique among all \(H^1\oplus \mathbb {R}\)-continuous solutions.
Remark 2.12
The continuity statement implies that for all \(x\in \mathbb {R}\), \(t\mapsto v(t,x)\) is continuous almost surely.
Under some additional assumptions on \(\sigma _\pm \) and \(\varrho \), the solution becomes global.
Assumption 2.13
The functions b and \({\tilde{b}}\) in Assumption 2.2 are globally bounded, and there exist functions \(\sigma ^1_\pm \in H^2(\mathbb {R}_+)\cap C^2(\mathbb {R}_{\ge 0})\) and \(\sigma ^2_\pm \in BUC^2(\mathbb {R}_{\ge 0})\), the space of all functions that are bounded and uniformly continuous together with their derivatives up to order two, such that
for all \(x \in \mathbb {R}_{\ge 0}\) and \(y \in \mathbb {R}\).
Theorem 2.14
(Global Solution) If \(\varrho \) is bounded and Assumption 2.13 holds in addition to the assumptions of Theorem 2.11, then \(\tau = \infty \) almost surely, i.e. (2.1) has a global solution.
In Theorem 4.5 we provide a refined analysis of the case of finite-time blow-up (\(\tau < \infty \)). It turns out that under Assumption 2.13, but with \(\varrho \) unbounded, a finite-time blow-up of the system (2.1) must coincide with a blow-up of the boundary terms.
2.3 Overview of the proof
Our treatment of equation (2.1) consists of three steps:
-
Transformation into an equation with fixed boundary;
-
Formulation of the transformed equation as an abstract stochastic evolution equation;
-
Solving the abstract evolution equation by a fixed-point-argument.
For the first step we apply a change of coordinates
i.e. new coordinates are defined relative to the free boundary \(x_*(t)\), which yields
for \(t\ge 0\) and \(x> 0\) with Dirichlet boundary conditions,
Note that the classical chain rule is not sufficient to derive (2.8) from (2.1), since v is not differentiable in time. Rather, a special case of Itô’s formula (a ‘stochastic chain rule’) is needed to justify the computation. The transformation turns the moving boundary into a fixed boundary at \(x=0\), but introduces an additional non-linear and unbounded drift term involving the spatial derivatives \(\frac{\partial }{\partial x}u_1\) and \(\frac{\partial }{\partial x}u_2\).
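Written out, the change of coordinates consistent with the initial data \(X_0 = (v_0(.+x_0)\vert _{\mathbb {R}_+}, v_0(x_0-.)\vert _{\mathbb {R}_+}, x_0)\) used in Sect. 2.3 is
$$\begin{aligned} u_1(t,x) := v\bigl (t, x_*(t)+x\bigr ), \qquad u_2(t,x) := v\bigl (t, x_*(t)-x\bigr ), \qquad x>0, \end{aligned}$$
so that the two phases are described in a frame moving with the interface and the Dirichlet conditions become \(u_1(t,0)=u_2(t,0)=0\).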
The second step is the abstract formulation of (2.8) in terms of the stochastic evolution equation
where
and W is a cylindrical Wiener process with covariance operator \({\text {Id}}\) on the separable Hilbert space \(U=L^2(\mathbb {R})\). Introducing the shorthand
for the boundary terms, the coefficients of (2.9) are given by
for \(w\in U,\,x\ge 0\), and \(u=(u_1,u_2,x_*)\), \(u_1\), \(u_2\in \mathcal {D}(\Delta )\), \(x_*\in \mathbb {R}\). Here, \(\Delta \) is the Laplacian on \(\mathbb {R}_+\) with Dirichlet boundary conditions and \(c > 0\) is an arbitrary constant, whose sole function is to move the spectrum of \(\mathcal {A}\) into the negative half-line \((-\infty ,0)\).
Finally, the solution of (2.1) in Theorem 2.11 will be obtained from the unique strong solution X on \(\llbracket 0, \tau \llbracket \) of the stochastic evolution equation (2.9) with initial data \(X_0:= (v_0(.+x_0)\vert _{\mathbb {R}_+}, v_0(x_0-.)\vert _{\mathbb {R}_+}, x_0)\) by setting \(X = (u_1,u_2,x_*)\) and
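As an informal illustration (a sketch, not the paper's scheme; noise omitted and all parameters hypothetical), the steps can be traversed numerically for a one-phase toy analogue: an explicit finite-difference iteration for the fixed-boundary heat equation with the extra transport term \(\dot{x}_* \frac{\partial }{\partial x}u\) from (2.8), coupled to a Stefan-type boundary ODE; the back-transformation \(v(t,y) = u(t, y - x_*(t))\), \(y > x_*(t)\), recovers the moving-boundary solution.

```python
import numpy as np

# Deterministic one-phase toy analogue (noise omitted; all parameters hypothetical).
eta = 1.0                          # diffusivity
def rho(du0):                      # boundary-speed law; a linear Stefan-type choice
    return 0.5 * du0

L_dom, n = 5.0, 100
dx = L_dom / n
dt = 0.25 * dx**2 / eta            # explicit-scheme (CFL) stability bound
x = np.linspace(0.0, L_dom, n + 1)

u = x * np.exp(-x)                 # initial profile in the moving frame, u(0) = 0
x_star = 0.0                       # boundary position in the original coordinates

for _ in range(2000):
    du0 = (u[1] - u[0]) / dx                       # one-sided gradient at the fixed boundary
    speed = rho(du0)                               # Stefan-type boundary condition
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    ux = np.zeros_like(u)
    ux[1:-1] = (u[2:] - u[:-2]) / (2 * dx)
    u += dt * (eta * uxx + speed * ux)             # fixed-boundary drift with the transport term
    u[0] = 0.0                                     # Dirichlet condition at the interface
    u[-1] = 0.0                                    # artificial truncation of the half-line
    x_star += dt * speed                           # the interface moves in the original frame

# Back-transformation: v(t, y) = u(t, y - x_star) for y > x_star.
assert x_star > 0 and np.all(np.isfinite(u))
```

The transport term is exactly the price of fixing the boundary: the interface no longer moves through the grid, but its velocity enters the drift.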
In the remainder of the paper we will make the above steps rigorous, by traversing them in the reverse direction:
-
In Sect. 3 we show that under certain assumptions the abstract stochastic evolution equation (2.9) has a unique strong solution.
-
In Sect. 4 we show that the parameter assumptions made in Sect. 2.2 are sufficient for the assumptions of Sect. 3.
-
In Sect. 5 we show the stochastic chain rule that is necessary to make the transformation to fixed boundary rigorous and collect all pieces to complete the proof of our main results.
Remark 2.15
A careful inspection of our proof of the existence result in the next section shows that Eq. (2.9) can also be solved for homogeneous Neumann or even Robin boundary conditions. Of course, the boundary conditions on \(\sigma _\pm \) in Assumption 2.3(iii) have to be adapted accordingly. The main difference to the case of Dirichlet boundary conditions is that a discontinuity at the boundary introduces a jump into the dynamics of v in Eq. (2.1) at any given point \(x \in \mathbb {R}\), every time the boundary \(x_*(t)\) crosses x. In particular, the ‘stochastic chain rule’ developed in Sect. 5 is no longer sufficient to pass from the moving boundary equation (2.1) to the fixed boundary equation (2.8) and back. Switching to solutions in the distributional sense, the transformation from fixed to moving boundary problem could alternatively be performed using the Itô–Wentzell formula, see [16].
Remark 2.16
The existence result for the centered equations (2.9) can be extended without difficulty to the case of Brownian noise in the boundary. That is,
where D is any locally Lipschitz operator from \(\mathcal {D}(\mathcal {A})\) into \(\mathbb {R}\), and \(\sigma _*>0\). Here, B can be either independent of W, or a Hilbert-Schmidt transformation of W into \(\mathbb {R}\).
Remark 2.17
Consider (2.1) with linear boundary interaction and \(\eta _{\pm } = 0\), i.e.,
An alternative approach is to transform the problem using the (multi-valued) enthalpy function
More precisely, the process \(X_t := L (v(t,.))\) satisfies the stochastic porous-media-type equation on \(\mathbb {R}\),
with
This method was used in [2] to study existence and ergodicity of the stochastic Stefan problem with additive noise in multiple space dimensions.
3 Solving a stochastic evolution equation
3.1 Preliminaries
In this section we concentrate on the evolution equation (2.9), i.e.
where W is a cylindrical Wiener process with covariance operator \({\text {Id}}\) on a separable Hilbert space U. At this point it is sufficient to assume that X takes values in an arbitrary separable Hilbert space E with norm \(\left|\left|.\right|\right|_{}\). On the coefficients A, B, C we will impose assumptions that are milder (but also more abstract) than the assumptions made in Sect. 2 on the coefficients of the free boundary problem. As will be shown in Sect. 4 the assumptions below are implied by the assumptions from Sect. 2.2 such that eventually the results on the evolution equation (2.9) can be used to solve the free boundary problem (2.1). Nevertheless, the results of this section may be of independent interest when generalizations of (2.1) are considered.
On the operator A in (3.1) we make the following assumption.
Assumption 3.1
A is a densely defined and sectorial operator with domain \(\mathcal {D}(A)\subset E\). Moreover, the resolvent set of A contains \([0,\infty )\) and there exists an \(M > 0\) such that the resolvent \(R(\lambda ,A)\) satisfies
Remark 3.2
This assumption is equivalent to each of the following statements:
-
Equation (3.2) holds and the resolvent set of A contains 0 and a sector
$$\begin{aligned} \{\lambda \in \mathbb {C}: |\arg \lambda | < \theta \} \end{aligned}$$for some \(\theta \in (\pi /2,\pi )\).
-
The operator A is sectorial and \(-A\) is positive in the sense of [25].
Assumption 3.1 ensures that A generates an analytic semigroup \((S_t)_{t \ge 0}\) and that suitable interpolation spaces between E and \(\mathcal {D}(A)\) can be defined through fractional powers of \(-A\). It also implies that the semigroup \(S_t\) is of strictly negative type, i.e. there exist \(\delta \), \(M > 0\) such that \(\left|\left|S_t\right|\right|_{} \le M e^{-\delta t}\). Note that if \(M=1\), then \(S_t\) is a contraction semigroup, which we shall not assume a priori.
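Assumption 3.1 can be checked numerically for a concrete generator; the sketch below uses a finite-difference Dirichlet Laplacian as a hypothetical stand-in for A and verifies the spectral gap and a resolvent bound of the form \(\left|\left|R(\lambda ,A)\right|\right|_{}\le M/(1+\lambda )\) for \(\lambda \ge 0\):

```python
import numpy as np

# Finite-difference Dirichlet Laplacian on (0, 1): a hypothetical stand-in for the generator A.
n = 50
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2

lam = np.linalg.eigvalsh(A)        # A is symmetric, so the spectrum is real
delta = -lam.max()                 # spectral gap; approximately pi^2 for this discretization
assert delta > 0                   # the spectrum lies in (-inf, 0)

# Resolvent bound ||R(lambda, A)|| <= M / (1 + lambda) for lambda >= 0, with M = max(1, 1/delta).
M = max(1.0, 1.0 / delta)
for lam_res in [0.0, 0.5, 1.0, 10.0, 100.0]:
    res_norm = 1.0 / (lam_res + delta)             # ||(lambda - A)^{-1}|| for symmetric negative A
    assert res_norm <= M / (1.0 + lam_res) + 1e-12

# Exponential decay of the semigroup: ||S_t|| = exp(-delta * t), i.e. strictly negative type.
```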
Using the semigroup \(S_t\) that is generated by A we can introduce the important concept of mild solutions.
Definition 3.3
Let \(X= (X(t))\) be a \(\mathcal {D}(A)\)-valued predictable process and \(\tau \) be a predictable stopping time.
-
X is called global mild solution to the stochastic evolution equation (3.1) on \(\mathcal {D}(A)\) with initial data \(X_0\in \mathcal {D}(A)\), if
$$\begin{aligned} X(t) = S_tX_0 + \int _0^t S_{t-s} B(X(s)) \,{\text {d}}s + \int _0^t S_{t-s}C(X(s)) \,{\text {d}}W_s \end{aligned}$$(3.3) holds for all \(t \ge 0\), \(\mathbb {P}\)-a.s.
-
X is called mild solution on \(\llbracket 0, \tau \llbracket \), if (3.3) holds on the stochastic interval \(\llbracket 0, \tau \llbracket \).
-
The stochastic interval \(\llbracket 0, \tau \llbracket \) is called maximal for X if there is no \(\mathcal {D}(A)\)-continuous extension of X to a larger stochastic interval.
In the last two terms of (3.3), \(\int \) denotes the Bochner and stochastic integral on the Hilbert space \(\mathcal {D}(A)\), respectively. If we want to emphasize the underlying space \(\mathcal {D}(A)\) we write global mild \(\mathcal {D}(A)\)-solution and mild \(\mathcal {D}(A)\)-solution respectively.
Finally we will be able to show that the mild solution is also a strong one in the following sense:
Definition 3.4
Given \(\mathcal {D}(\mathcal {A})\)-valued initial data \(X_0\) and a predictable stopping time \(\tau \), X is called strong solution of (2.9) on \(\llbracket 0, \tau \llbracket \), if X is a \(\mathcal {D}(\mathcal {A})\)-valued predictable process and
holds on \(\llbracket 0, \tau \llbracket \), where the integrals are assumed to exist as a Bochner integral and a stochastic integral on E, respectively. Global solutions and maximality are defined in the same way as for mild solutions.
3.2 Interpolation spaces
By taking fractional powers of \(-A\) we introduce inter- and extrapolation spaces for E. For \(\alpha > 0\) we define
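Via fractional powers, in the usual way,
$$\begin{aligned} E_\alpha := \mathcal {D}\bigl ((-A)^{\alpha }\bigr ), \qquad \left|\left|h\right|\right|_{\alpha } := \left|\left|(-A)^{\alpha } h\right|\right|_{}, \qquad E_0 := E. \end{aligned}$$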
It is known that \(E_\alpha \), equipped with the induced scalar product, is again a separable Hilbert space. In particular, \(\left|\left|.\right|\right|_{1}\) is equivalent to the graph norm of A and the following continuous embedding relations hold for \(\alpha \in [0,1]\):
Note that the restriction of A to any \(E_\alpha , \alpha \in [0,1]\) is again a densely defined and closed operator on \(E_\alpha \). Moreover, it is the infinitesimal generator of the restriction of \(S_t\) to \(E_\alpha \), which is again an analytic (contraction) semigroup; see e.g. [9, Chap. II.5].
The following regularity property of \(S_t\) between different interpolation spaces \(E_\alpha \), \(\alpha \in [0,1]\) will be crucial in the proofs that follow. We derive it from results in [25] on interpolation spaces.
Lemma 3.5
Let \(\beta \ge 0\) and \(\alpha > \beta \). Then, for all \(t>0\) and \(h\in E_{\beta }\),
Note that for \(\alpha - \beta <1\), the factor in front of \(\left|\left|h\right|\right|_{\beta }\) is integrable at time \(t = 0\), which is the key property used in the estimates concerning the mild formulation of (3.1) on \(E_1\).
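For a self-adjoint generator the estimate of Lemma 3.5 follows from spectral calculus, since \(\sup _{m>0} m^{\gamma } e^{-tm} = (\gamma /e)^{\gamma }\, t^{-\gamma }\) for \(\gamma = \alpha -\beta \). A numerical sketch (discrete Dirichlet Laplacian as a hypothetical stand-in):

```python
import numpy as np

# Discrete Dirichlet Laplacian as a hypothetical self-adjoint stand-in for A.
n = 200
dx = 1.0 / (n + 1)
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
mu = -np.linalg.eigvalsh(A)              # eigenvalues of -A, all strictly positive

gamma = 0.75                              # gamma = alpha - beta, a hypothetical choice in (0, 1)
K = (gamma / np.e) ** gamma               # sup_{m > 0} m^gamma e^{-t m} = K * t^(-gamma)

for t in [1e-3, 1e-2, 1e-1, 1.0]:
    # By spectral calculus, ||(-A)^gamma S_t|| equals the maximum below; Lemma 3.5
    # bounds it by K * t^(-gamma), which is integrable at t = 0 since gamma < 1.
    op_norm = np.max(mu**gamma * np.exp(-t * mu))
    assert op_norm <= K * t ** (-gamma) + 1e-12
```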
Proof
Suppose first that \(\alpha = \beta + n\) for some \(n\in \mathbb {N}\), then we get from [27, Thm 1.5.2d and p. 70] that there exists \(K_n>0\) such that
Now assume that \(\alpha \in (\beta + n, \beta + n +1)\) for some \(n\in \mathbb {N}_0\) and set \(\theta = \alpha - \beta - n\in (0,1)\). By [25, Proposition 4.7] the real interpolation space \((E_0,E_1)_{\theta , 1}\) is continuously embedded into \(\mathcal {D}((-A)^\theta )\). Combining this fact with [25, Corollary 1.7] we obtain that there exists \(K > 0\) such that
Now let \(h \in \mathcal {D}((-A)^\beta )\) and set \(h' = (-A)^n S_t (-A)^\beta h \in \mathcal {D}(A)\). Applying the above inequality and using boundedness of the semigroup \(S_t\) we obtain
Finally, (3.7) for n and \(n+1\) yields
proving the result. \(\square \)
To deal with the singularity at 0 on the right-hand side above, we will use an extended version of Gronwall’s lemma, see [23, Lemma 7.0.3] or, for a proof, [13, p. 188].
Lemma 3.6
(Extended Gronwall’s lemma) Let \(\alpha >0\), a, \(b\ge 0\), \(T \ge 0\), and \(u:[0,T]\rightarrow \mathbb {R}\) be non-negative and integrable. If, for all \(t\in [0,T]\),
then there exists a constant \(K_{\alpha , b,T}\), depending only on \(\alpha \), b and T, such that
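A numerical sanity check of the lemma (all parameter values hypothetical): discretizing the extremal case in which the inequality holds with equality, the solution stays bounded by a moderate multiple of a on [0, T]:

```python
import numpy as np

# Hypothetical parameters: singular-kernel Gronwall inequality with alpha in (0, 1).
alpha, a, b, T = 0.5, 1.0, 0.5, 1.0
n = 400
t = np.linspace(0.0, T, n + 1)

# Discretize the extremal case u(t) = a + b * int_0^t (t - s)^(alpha - 1) u(s) ds by
# product integration: the singular kernel is integrated exactly on each subinterval.
u = np.full(n + 1, a)
for i in range(1, n + 1):
    w = ((t[i] - t[:i]) ** alpha - (t[i] - t[1:i + 1]) ** alpha) / alpha   # exact kernel masses
    u[i] = a + b * np.dot(w, u[:i])       # piecewise-constant (left-endpoint) approximation of u

# u stays bounded by a constant multiple of a on [0, T], as the lemma asserts; for these
# parameters u(T) is close to a * E_alpha(b * Gamma(alpha) * T^alpha) (Mittag-Leffler).
assert np.all(np.isfinite(u)) and a <= u[-1] < 100 * a
```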
3.3 Existence of global mild solutions
We start by discussing global solutions. Subsequently, the existence of local solutions under milder assumptions will be shown by localizing with appropriate stopping times. Denoting by \({\text {HS}}(U,E_1)\) the (Hilbert) space of Hilbert–Schmidt operators from U to \(E_1\) we introduce the following Lipschitz-type assumption, which will imply the existence of global mild solutions to (3.1).
Assumption 3.7
There exists \(\alpha \in (0,1]\) such that \(B: E_1 \rightarrow E_\alpha \) and \(C:E_1\rightarrow {\text {HS}}(U,E_1)\) are Lipschitz continuous, i.e. there exists a constant \({\hat{L}}\) such that
holds for all Y, \(Z\in E_1\).
Remark 3.8
Assumption 3.7 implies a linear growth bound on B and C in the sense that
for all \(Y\in E_1\).
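A standard form of this bound (a sketch; the constant can, for instance, be taken as \(\hat{M} := \max \bigl ({\hat{L}},\, \left|\left|B(0)\right|\right|_{\alpha } + \left|\left|C(0)\right|\right|_{{\text {HS}}(U,E_1)}\bigr )\)) is
$$\begin{aligned} \left|\left|B(Y)\right|\right|_{\alpha } + \left|\left|C(Y)\right|\right|_{{\text {HS}}(U,E_1)} \le \hat{M}\left( 1 + \left|\left|Y\right|\right|_{1}\right) . \end{aligned}$$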
Theorem 3.9
(Global Mild Solution of (3.1)) Let Assumptions 3.1 and 3.7 hold true and let \(p>1\). Then, for every initial data \(X_0 \in L^{2p}(\Omega ; E_1)\) there exists a unique global mild solution X of (3.1) on \(E_1\). Moreover,
for all \(T\ge 0\) and X is \(E_1\)-continuous almost surely. If, in addition, \(S_t\) is a contraction semigroup, then the statement is true even for \(p=1\).
Remark 3.10
Without much effort one can extend the theorem to time- and path-dependent predictable coefficients \(B: \Omega \times \mathbb {R}_{\ge 0} \times E_1 \rightarrow E_\alpha \) and \(C:\Omega \times \mathbb {R}_{\ge 0} \times E_1\rightarrow {\text {HS}}(U,E_1)\), provided that (3.10) and (3.11) hold. The same is true for Theorem 3.17 below.
Proof
The theorem will be shown using a fixed-point argument. Using Lemma 3.5 we will be able to prove that the following mapping is a contraction. We define
for elements Y out of the Banach space
equipped with the norm defined by
To show the contraction property of \(\mathcal {K}\) on \(\widehat{\mathcal {H}}_{T,p}\) for small enough \(T>0\), we first decompose
where \(\mathcal {K}_B\) is the convolution of S with B and \(\mathcal {K}_C\) is the stochastic convolution with C, respectively. The first term is easiest to handle. From the strong continuity and boundedness of \(S_t\) we get
where the constant \(K_{1,1}\) depends on the bound for the norm of the semigroup. The term \(\mathcal {K}_B(Y)\) is more difficult to handle. Let \(Y\in \widehat{\mathcal {H}}_{T,p}\), then by Bochner’s inequality, Lemma 3.5, Jensen’s inequality and the growth estimate (3.11)
with constant K changing from line to line, but depending only on p, \(\alpha \) and \({\hat{M}}\). Note that to apply Jensen’s inequality we have used that \((t-s)^{\alpha -1}\,{\text {d}}s\) is a finite measure on (0, t) with mass \(t^{\alpha }/\alpha \), and that the inequality \(\left|\left|a+b\right|\right|_{}^{2p} \le 2^p \left( \left|\left|a\right|\right|_{}^{2p} + \left|\left|b\right|\right|_{}^{2p}\right) \) has entered in the last step. Applying the Fubini–Tonelli theorem yields
Inserting into (3.14) we get that
Let now Y, \(Z\in \widehat{\mathcal {H}}_{T,p}\), then with the same arguments as in (3.14), but with the Lipschitz estimate (3.10) instead of (3.11) we obtain
Applying again the Fubini–Tonelli theorem yields
Here, the constants \(K'\) and \(K''\) depend only on p, \(\alpha \) and \({\hat{L}}\).
Showing similar properties for the stochastic convolution \(\mathcal {K}_C\) proceeds exactly as in the proof of the classical result [7, Theorem 7.2, see p. 189]. Putting everything together yields constants \(K_B\) and \(K_C\), independent of \(X_0\), such that
provided \(T< (K_B + K_C)^{-\alpha }\). Hence, \(\mathcal {K}\) is a contraction on \(\widehat{\mathcal {H}}_{T,p}\) and possesses a unique fixed point, which is a mild solution of (3.1) up to time \(T > 0\). Concatenating solutions, we obtain a global solution. Finally, to show the uniqueness claim, we consider two arbitrary solutions \(X_1\) and \(X_2\) and the stopping times
Using the standard procedure as in the proof of [7, Theorem 7.2], but with the estimates for \(\mathcal {K}_B\) from above and Lemma 3.6, we obtain that the solutions \(X_1\) and \(X_2\) must coincide up to the stopping time \(\tau _R\). Passing to the limit \(R\rightarrow \infty \), global uniqueness follows. The remaining part, namely showing (3.12) and the continuity claim, is the subject of Lemmas 3.11 and 3.14 below. \(\square \)
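The contraction argument can be illustrated on a scalar deterministic analogue of (3.1) (a sketch with hypothetical coefficients: \(A=-1\), \(C\equiv 0\), and a globally Lipschitz drift). Iterating the mild-solution map, the Picard iterates converge uniformly on [0, T]:

```python
import numpy as np

# Scalar deterministic analogue: A = -1 (so S_t = e^{-t}), C = 0, Lipschitz drift B.
A = -1.0
def B(x):
    return 0.5 * np.sin(x)          # globally Lipschitz, constant 0.5

T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
h = T / n
X0 = 1.0

def picard_step(X):
    """One application of the mild map: (KX)(t) = S_t X0 + int_0^t S_{t-s} B(X(s)) ds."""
    out = np.empty_like(X)
    for i in range(n + 1):
        f = np.exp(A * (t[i] - t[:i + 1])) * B(X[:i + 1])
        integral = 0.0 if i == 0 else h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
        out[i] = np.exp(A * t[i]) * X0 + integral
    return out

X = np.full(n + 1, X0)              # initial guess: the constant path
gaps = []
for _ in range(10):
    X_new = picard_step(X)
    gaps.append(np.max(np.abs(X_new - X)))
    X = X_new

# Successive uniform gaps shrink (factorially fast on [0, T]), as in the contraction argument.
assert all(g2 < g1 for g1, g2 in zip(gaps, gaps[1:]))
assert gaps[-1] < 1e-6
```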
Lemma 3.11
Let Assumption 3.1 and (3.11) hold true and let \(p > 1\). Let X be a mild solution on [0, T] of (3.1) with initial value \(X_0 \in L^{2p} (\Omega ; E_1)\) such that
Then,
If, in addition, \(S_t\) is a contraction semigroup, then the statement is true even for \(p = 1\).
Remark 3.12
We emphasize that the Lipschitz property (3.10) is not needed for this lemma.
Proof
We use the notation from the previous proof and write
First, note that the integrability assumption on X and the linear growth property (3.11) yield
For the case \(p=1\) we may assume that \(S_t\) is a contraction semigroup. Hence, we can apply [7, Theorem 6.10] which gives
For the case \(p>1\) we use that \(S_t\) is a \(C_0\)-semigroup and apply [8, Theorem 1.1] which yields
In both cases the growth bound (3.11) yields
For the drift part we again use the linear growth bound (3.11), and proceeding similarly to (3.14) we obtain
for \(t\le T\). Taking expectations, using the Fubini–Tonelli theorem and inserting the estimates concerning \(\mathcal {K}_C\) yields
Finally, the extended Gronwall lemma (Lemma 3.6) finishes the proof. \(\square \)
\(E_1\)-continuity of the stochastic convolution \(\mathcal {K}_C(X)_t\) follows from standard results and the estimate (3.18). However, for the \(E_1\)-continuity of \(\mathcal {K}_B(X)\) we provide a detailed proof, since in general we cannot assume that B is \(E_1\)-valued. To this end, we slightly modify the result [23, Proposition 4.2.1] and its proof.
Lemma 3.13
Let \(\psi : [0,T]\rightarrow E_\alpha \) be integrable and such that
Then, \((S*\psi )_t := \int _0^t S_{t-s} \psi (s) ds\) is in \(C([0,T], E_1)\).
Proof
Without loss of generality, assume \(\alpha <1\): if \(\alpha =1\), then \(\psi \) also fulfills the assumptions with \(E_\beta \) in place of \(E_\alpha \) for arbitrary \(\beta <1\), since \(E_1\hookrightarrow E_\beta \). Note that for \(0<t\le T\) and arbitrary \(0<\epsilon <t\),
where we use that \(S_\epsilon \varphi \in \mathcal {D}(A)\) and \( \frac{\,{\text {d}}}{\,{\text {d}}t} S_t = AS_t\) on \(\mathcal {D}(A)\); see e.g. [9, Lemma II.1.3]. We now observe that for \(0<s<t\le T\)
Hence, with Bochner’s inequality and Lemma 3.5
For \(s=0\), Bochner’s inequality and Lemma 3.5 directly give, for \(t\searrow 0\),
\(\square \)
Lemma 3.14
Under the assumptions of Lemma 3.11 the mild solution X of (3.1) is almost surely \(E_1\)-continuous.
Proof
As above, we decompose
Continuity of the first summand in the decomposition is immediate, since \(S_t\) is strongly continuous. From Lemma 3.11 we get that \(\psi (t) := B(X)_t\) satisfies the conditions of Lemma 3.13 and hence continuity of \(\mathcal {K}_B(X)_t\) follows. In the case \(p>1\), estimate (3.18) together with [8, Theorem 1.1] yields continuity of \(\mathcal {K}_C(X)\). For \(p=1\) we may assume that \(S_t\) is a contraction semigroup on \(E_1\) and apply [7, Theorem 6.10] instead. Note that we are always using the continuous modifications of the stochastic integrals/convolutions. \(\square \)
Together, these lemmas complete the proof of Theorem 3.9.
3.4 Existence of local mild solutions
To obtain only local solutions up to a stopping time \(\tau \), we can relax the assumptions on B and C made in the previous subsection.
Assumption 3.15
There exists \(\alpha \in (0,1]\) such that \(B: E_1 \rightarrow E_\alpha \) and \(C: E_1\rightarrow {\text {HS}}(U,E_1)\) are Lipschitz continuous on bounded sets, i.e. for all \(N\in \mathbb {N}\) there exists \(L_N\) such that
holds for all Y, \(Z\in E_1\) with \(\left|\left|Y\right|\right|_{1}\), \(\left|\left|Z\right|\right|_{1} \le (N + 1)\).
Remark 3.16
Assumption 3.15 yields for all \(N\in \mathbb {N}\) constants \(M_N\) such that for all \(Y\in E_1\) with \(\left|\left|Y\right|\right|_{1}\le (N + 1)\)
Theorem 3.17
(Local Mild Solution of (3.1)) Let Assumptions 3.1 and 3.15 hold true and let \(p >1\). Then, for every initial data \(X_0 \in L^{2p}(\Omega ; E_1)\) there exists a unique mild \(E_1\)-solution X of (3.1) on a maximal stochastic interval \(\llbracket 0, \tau \llbracket \). Moreover, X is \(E_1\)-continuous on \(\llbracket 0, \tau \llbracket \), \(\tau > 0\) and \(\lim _{t\nearrow \tau }\left|\left|X(t)\right|\right|_{1} = \infty \) on \(\{\tau <\infty \}\) almost surely. If, in addition, \(S_t\) is a contraction semigroup, then the statement is true even for \(p=1\).
We use the following localization method, similar to the truncation in [19]. For each \(N\in \mathbb {N}\) fix a monotone decreasing function \(h_N \in C^\infty (\mathbb {R}_{\ge 0})\) with
and for a constant \(c>0\),
Define the truncated coefficients
and consider the localized stochastic evolution equation
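One concrete choice for the cutoff \(h_N\) is the classical smooth bump construction; the following sketch (function names are ours, not from the paper, and the scalar factor is a stand-in for the coefficient \(B\)) builds such an \(h_N\) and the corresponding truncated coefficient \(B_N(Y) = h_N(\left|\left|Y\right|\right|_1)\,B(Y)\).

```python
import math

def bump(x):
    # Building block of a C^infinity cutoff: exp(-1/x) for x > 0, else 0.
    return math.exp(-1.0 / x) if x > 0 else 0.0

def h(N, r):
    """Monotone decreasing C^infinity cutoff: equals 1 on [0, N] and 0 on [N+1, oo)."""
    num = bump(N + 1 - r)
    return num / (num + bump(r - N))

def B_truncated(N, norm_Y, B_of_Y):
    # Truncated coefficient B_N(Y) = h_N(||Y||_1) * B(Y); B(Y) is a scalar stand-in here.
    return h(N, norm_Y) * B_of_Y
```

On the ball of radius N the truncated and original coefficients coincide, which is exactly the property exploited in the localization argument below.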
Lemma 3.18
Let B and C be such that the local Lipschitz assumption 3.15 holds. Then, \(B_N\) and \(C_N\) satisfy the global Lipschitz assumption 3.7 for all \(N\in \mathbb {N}\).
Proof
First, it is immediate that
For the global Lipschitz continuity let Y, \(Z\in E_1\) and assume, without loss of generality, that \(\left|\left|Y\right|\right|_{1} \ge \left|\left|Z\right|\right|_{1}\). Then, write
If \(\left|\left|Y\right|\right|_{1}> N+1\), then the first term vanishes. Else, it holds that \(\left|\left|Z\right|\right|_{1}\le \left|\left|Y\right|\right|_{1} \le N+1\) and thus, in both cases,
The second term in (3.24) vanishes if \(\left|\left|Z\right|\right|_{1} > N+1\). Otherwise,
where we applied the chain rule and the mean value theorem for Fréchet derivatives. Replacing B by C and \(\left|\left|.\right|\right|_{\alpha }\) by \(\left|\left|.\right|\right|_{{\text {HS}}(U,E_1)}\) changes nothing in the computation, so that we obtain a global Lipschitz constant \({\hat{L}}\) depending on c, \(M_N\) and \(L_N\). \(\square \)
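For the reader's convenience, the splitting behind (3.24) can be sketched as follows (assuming, as in (3.21), that \(h_N\) is Lipschitz with constant c; this is a sketch consistent with the two cases discussed above, not a verbatim reproduction of the display):

```latex
\begin{aligned}
B_N(Y)-B_N(Z) &= h_N(\|Y\|_1)\bigl(B(Y)-B(Z)\bigr)
  + \bigl(h_N(\|Y\|_1)-h_N(\|Z\|_1)\bigr)B(Z), \\
\|B_N(Y)-B_N(Z)\|_\alpha &\le L_N\,\|Y-Z\|_1
  + c\,M_N\,\bigl|\,\|Y\|_1-\|Z\|_1\,\bigr|
  \le \bigl(L_N + c\,M_N\bigr)\,\|Y-Z\|_1 .
\end{aligned}
```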
For the proof of Theorem 3.17 we may assume \(X_0\in L^{2p}(\Omega ; E_1)\), for some \(p\ge 1\), to be given and Assumptions 3.1 and 3.15 to hold. For \(N\in \mathbb {N}\) we then denote by \(X^{(N)}\) the unique mild solution of the localized equation (3.22), which exists due to Theorem 3.9. To relax the truncation, we introduce the stopping times
and set
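The localization mechanism can be illustrated on a scalar toy problem (our own example, not from the paper): the ODE \(x' = x^2\), \(x(0)=1\), blows up at \(t=1\); its truncated versions are globally solvable, and the hitting times of level N increase towards the blow-up time.

```python
def tau_N(N, dt=1e-4, t_max=5.0):
    """First time the Euler solution of the truncated ODE x' = min(x, N+1)^2 reaches level N."""
    t, x = 0.0, 1.0
    while t < t_max:
        if x >= N:
            return t
        x += dt * min(x, N + 1) ** 2   # truncated drift: coincides with x^2 while x <= N + 1
        t += dt
    return float("inf")

taus = [tau_N(N) for N in (2, 4, 8)]
```

For the untruncated equation the exact hitting time of level N is \(1 - 1/N\), so the computed sequence increases towards the blow-up time \(\tau = 1\), mirroring \(\tau = \lim_N \tau_N\).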
We start with the following preparatory Lemma.
Lemma 3.19
The stopping times defined in (3.27) and (3.28) have the following properties:
-
(1)
For all \(k\in \mathbb {N}\) the equality \(X^{(N)}(t) = X^{(N+k)}(t)\) holds a.s. for \(t \in \llbracket 0, \tau _N \rrbracket \).
-
(2)
The stopping time \(\tau \) is strictly positive.
Proof
By definition of \(h_N\) it holds that \(B_{N+1}(X^{(N)}(s)) = B_N(X^{(N)}(s))\) and \(C_{N+1}(X^{(N)}(s)) = C_N(X^{(N)}(s))\) for \(s \in \llbracket 0, \tau _N \rrbracket \). Hence, using the localization property of the stochastic convolution (cf. [4, Appendix A] and [33, Lemma 5.1]),
on \(\llbracket 0, \tau _N \rrbracket \). Now, uniqueness of the truncated solutions yields \(X^{(N)}(t) = X^{(N+1)}(t)\) almost surely on \(\llbracket 0, \tau _N \rrbracket \). For general \(k\in \mathbb {N}\) the argument can be iterated.
To show that \(\tau \) is strictly positive, note that it follows from path-wise continuity of \(X^{(N)}\) that
From the first part of the proof, \((\tau _N)\) is increasing. Hence,
showing positivity of \(\tau \). \(\square \)
Proof of Theorem 3.17
For \(x \in \mathbb {R}_+\) and \(t \in \llbracket 0, \tau \llbracket \) we set
The limit exists, since for almost every \(\omega \in \Omega \) the sequence \((X^{(N)}(\omega ; t,x))_{N \in \mathbb {N}}\) is eventually constant for each \(t \in \llbracket 0,\tau \llbracket \) by Lemma 3.19. It follows immediately that \(t \mapsto X(t,x)\) is a.s. continuous on each \(\llbracket 0,\tau _N \rrbracket \) and hence also continuous on \(\llbracket 0,\tau \llbracket \). Moreover, we may now rewrite \(\tau _N\) as
Continuity of X then implies that the sequence \(\tau _N\) is in fact strictly increasing and hence that \(\tau \) is predictable. Moreover, by definition of \(\tau \) we have
on \(\{\tau < \infty \}\).
We focus on the claim that X solves (3.1). By Lemma 3.19 it holds that \(X^{(N)}(t) = X(t)\) on \(t \in \llbracket 0, \tau _N \rrbracket \). Moreover, by construction of \(B_N\) and \(C_N\) we get \(B(X(t)) = B_{N}(X(t))\) and \(C(X(t)) = C_N(X(t))\) on \(\llbracket 0, \tau _N \rrbracket \). Thus,
holds, and X is a mild solution of (3.1) on \(\llbracket 0, \tau _N \rrbracket \). Since N was arbitrary X is a mild solution on \(\llbracket 0, \tau \llbracket \) as claimed. To show uniqueness, let \({\widetilde{X}}\) be another local mild \(E_1\)-solution of (3.1) on some stochastic interval \(\llbracket 0, \widetilde{\tau } \llbracket \). For \(N\in \mathbb {N}\) we introduce the stopping time
Clearly, it holds that
In addition, for \(t \in \llbracket 0, \widetilde{\tau }_N \rrbracket \) it holds that
As above, we derive that \({\widetilde{X}}\) is a mild solution of the truncated equation on \( \llbracket 0,{\widetilde{\tau }}_N \rrbracket \). The path-wise uniqueness claim of Theorem 3.9 implies \({\widetilde{X}}(t) = X(t)\) for all \(t \in \llbracket 0,{\widetilde{\tau }}_N \rrbracket \) and, by arbitrariness of N, also for \(t \in \llbracket 0, {\widetilde{\tau }} \wedge \tau \llbracket \). Assume now that \(\tau < {\widetilde{\tau }}\) on a set of positive probability. Then \(\lim _{t\nearrow \tau } \left|\left|X(t)\right|\right|_{1} = \infty \) on \(\{\tau <\infty \}\) contradicts the continuity of \({\widetilde{X}}\). Hence X is unique and \(\llbracket 0, \tau \llbracket \) is maximal. \(\square \)
3.5 More global solutions and strong solutions
The results from above can easily be extended in two directions: first, we show existence of global solutions under more general conditions; second, we show that all mild solutions obtained in this section are in fact strong solutions.
Corollary 3.20
(More global solutions) Let the assumptions of Theorem 3.17 hold true, but with the local growth condition (3.20) replaced by the global growth condition (3.11). Then the solution X is global, i.e. \(\tau = \infty \) a.s.
Proof
By Theorem 3.17 we know that a local solution X to (3.1) exists on a maximal stochastic interval \(\llbracket 0, \tau \llbracket \) and that \(\lim _{t \rightarrow \tau } \left|\left|X(t)\right|\right|_{1} = \infty \) on \(\{\tau < \infty \}\). Moreover, we know that the stopping time \(\tau \) is the limit of a sequence of stopping times \(\tau _N < \tau \), that X coincides with the (global) solution \(X^{(N)}\) of the truncated equation (3.22) on the stochastic interval \(\llbracket 0, \tau _N \rrbracket \), and that \(\left|\left|X^{(N)}(\tau _N)\right|\right|_{1} \ge N\) for all \(N \in \mathbb {N}\). Finally, observe that the coefficients \(B_N, C_N\) of the truncated equation satisfy the same growth bound as the coefficients of the original equation, i.e.
with M independent of N.
If \(\{\tau < \infty \}\) has measure zero, then \(\tau = \infty \) a.s. and the proof is finished. Therefore assume, aiming for a contradiction, that \(\mathbb {P}\left[ \tau < \infty \right] = 2\epsilon > 0\). By monotone convergence it follows that there exists \(T > 0\) such that \(\mathbb {P}\left[ \tau < T\right] \ge \epsilon \) from which it follows that also \(\mathbb {P}\left[ \tau _N \le T\right] \ge \epsilon \) for all \(N \in \mathbb {N}\). Hence
holds. On the other hand, applying the growth bound (3.31) and Lemma 3.11 to each \(X^{(N)}\) it follows that
with the right hand side independent of N. Combining with (3.32) and choosing N large enough, the desired contradiction is obtained. \(\square \)
Corollary 3.21
(Strong solution) Under the assumptions of Theorems 3.9 and 3.17 respectively, the processes \(X^{(N)}\), \(N\in \mathbb {N}\) and X are also the unique strong solution of respectively (3.22) and (3.1).
Proof
Let \((X,\tau )\) be the unique mild solution from Theorem 3.17, and let \(X^{(N)}\), \(\tau _N\), \(B_N\) and \(C_N\) denote the solutions, stopping times and coefficients of the truncated equations (3.22) corresponding to (3.1). By the natural embedding we can identify the \(E_1\)-paths of \(X^{(N)}\) with paths in E. Further, Kuratowski’s theorem (cf. [18]) implies for the corresponding Borel \(\sigma \)-algebras that \(\mathfrak B\left( E_1\right) = \mathfrak B\left( E\right) \cap E_1\), and hence we can extend \(B_N\) trivially to a Borel function on E without affecting the regularity properties that \(B_N\) has on \(E_1\). Writing down the Hilbert-Schmidt norm one immediately observes that also
Both are separable Hilbert spaces, so that we can argue in the same way to extend \(C_N\) to a Borel function from E into \({\text {HS}}(U, E)\). The stochastic evolution equations now fit into the framework of [28, Appendix F], where sufficient conditions for obtaining weak and strong solutions from mild ones are given. Proving the corollary now amounts to showing that these conditions are satisfied.
First, recall (3.15) which yields for all \(T\ge 0\) that
\(\mathbb {P}\)-almost surely, and from the equivalence of the norms of \(E_1\) and \(\mathcal {D}(A)\) we get a.s.
For the step from mild to weak solutions we also need to verify for all \(g\in \mathcal {D}(A^*)\)
Recall that A generates a strongly continuous semigroup on E. Using the Cauchy–Schwarz inequality we then get for any complete orthonormal system \((e_k)\) of U and \(0\le s\le t\)
From (3.15) (or boundedness of \(C_N\)) it follows that (3.33) indeed holds true. As we have seen in the proof of the continuity part of Theorem 3.9,
are continuous, and therefore predictable. Hence, all integrability and measurability assumptions needed to apply [28, Propositions F.0.4 and F.0.5] are satisfied and we conclude that the mild solution \(X^{(N)}\) is also a weak and strong solution of the truncated equation (3.22). Applying the same localization argument as above, but now on E, the result translates to X.
For the uniqueness claim, suppose first that \(Y^{(N)}\) is another global strong solution of the truncated equation (3.22). Applying the results in [28, Appendix F] we get that \(Y^{(N)}\) is also an E-mild solution. Since \(B_N\) and \(C_N\) are bounded, we get that
almost surely, so that \(Y^{(N)}\) is even an \(E_1\)-mild solution. Hence, the uniqueness part of Theorem 3.9 yields \(Y^{(N)} = X^{(N)}\) almost surely. Finally, for another local strong solution \((Y, \varsigma )\), truncating with respect to the \(E_1\)-norm yields \(Y = X\) on \(\llbracket 0, \varsigma \wedge \tau \llbracket \), almost surely. Since \(\llbracket 0, \tau \llbracket \) is maximal for the \(E_1\)-mild solution, we obtain that \(\varsigma \le \tau \) almost surely. \(\square \)
4 The fixed boundary problem as a stochastic evolution equation
The goal of this section is to reformulate the free boundary problem (2.1) as the abstract evolution equation (3.1). We start by identifying the appropriate function spaces and introduce
where, as usual, \(L^2\) denotes the Lebesgue space and \(H^k\) the k-th order Sobolev space. Recall that \(\oplus \) denotes the direct sum of Hilbert spaces, i.e. the scalar product on \(\mathfrak L^2\) is defined through the scalar product on \(L^2(\mathbb {R}_+)\) by
and similarly for the space \(\mathfrak H^k\).
Recall the definitions of the operators \(\mathcal {A}\), \(\mathcal {B}\), \(\mathcal {C}\) that were given in equations (2.11), (2.12), (2.13) in terms of \(\mu _\pm \), \(\sigma _\pm \), \(\rho \) and \(\zeta \). To define the domain \(\mathcal {D}(\mathcal {A})\) of \(\mathcal {A}\) we set
Finally, the space \(\mathcal {D}(\mathcal {A})\) shall be equipped with the graph norm
Note that on \(\mathcal {D}(\mathcal {A})\) the graph norm is equivalent to the \(\mathfrak H^2\)-norm, as can be seen from integration by parts and the Cauchy–Schwarz inequality. Moreover, \(\mathcal {D}(\mathcal {A})\) is a closed subspace of \(\mathfrak H^2\).
To provide the connection with the results of Sect. 3, we set
As the norms of \(\mathcal {D}(\mathcal {A})\) and \(\mathfrak H^2\) are equivalent we may use either one to topologize \(E_1\). The following result holds true for interpolation between \(\mathcal {D}(\mathcal {A})\) and \(\mathfrak L^2\):
Lemma 4.1
For \(\alpha \in (0,1/4)\) it holds that,
with equivalent norms.
This follows from the fact that \(\mathcal {D}((c-\Delta )^\alpha ) = H^{2\alpha }(\mathbb {R}_+)\) for the Dirichlet Laplacian \(\Delta \) if and only if \(\alpha < 1/4\). To our knowledge, this was first shown in [12], but see also [21, Ch. 1, Thm. 11.6]. By the structure of \(\mathcal {D}(\mathcal {A})\) and \(\mathfrak L^2\), this directly lifts to \(E_\alpha \). The result can be understood in the sense that the boundary conditions, which distinguish \(\mathcal {D}(\mathcal {A})\) from \(\mathfrak H^2\), ‘are lost’ during interpolation exactly at \(\alpha = 1/4\).
Lemma 4.2
The operator \(\mathcal {A}\), defined in (2.11), satisfies Assumption 3.1.
Proof
The Dirichlet Laplacian on \(L^2(\mathbb {R}_+)\) is a self-adjoint operator. This property is inherited by \(\mathcal {A}\), which is hence self-adjoint. Moreover,
for all \(u \in \mathcal {D}(\mathcal {A})\) and hence \(-\mathcal {A}\) is positive in the sense of [25]. By [25, Lem. 4.31] it follows that \((-c, \infty )\) is contained in the resolvent set of \(\mathcal {A}\) and the resolvent satisfies
Choosing \(M > \max (1,1/c)\), the estimate (3.2) follows and Assumption 3.1 is satisfied. \(\square \)
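The resolvent bound behind Assumption 3.1 can be illustrated numerically with a finite-difference Dirichlet Laplacian (a stand-in for \(\mathcal {A}\); the grid and shift parameters are our choices): for a self-adjoint operator with spectrum contained in \((-\infty ,-c]\), the resolvent norm equals \(1/{\text {dist}}(\lambda ,\sigma (\mathcal {A})) \le 1/(\lambda +c)\) for \(\lambda > -c\).

```python
import numpy as np

n = 200
h = 1.0 / (n + 1)
# Tridiagonal finite-difference Dirichlet Laplacian on (0, 1).
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2

eigs = np.linalg.eigvalsh(A)   # real and negative: A is symmetric negative definite
c = -eigs.max()                # spectrum is contained in (-inf, -c]

for lam in (0.0, 1.0, 10.0):
    # For a self-adjoint operator the resolvent norm is 1/dist(lam, spectrum).
    res_norm = 1.0 / np.min(np.abs(lam - eigs))
    assert res_norm <= 1.0 / (lam + c) + 1e-12
```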
Lemma 4.3
Suppose that \(\mu _{\pm }\), \(\sigma _{\pm }\), \(\rho \) and \(\zeta \) satisfy Assumptions 2.2, 2.3, 2.5 and 2.6. Then the operators \(\mathcal {B}(u)\) and \(\mathcal {C}(u)\), defined in (2.12) and (2.13) satisfy the local Lipschitz Assumption 3.15.
Proof
We decompose \(\mathcal {B}(u)\) into
where
Recall from (2.10) that \(\mathcal {I}(u)\) is the vector of boundary values given by \(\mathcal {I}(u) = \left( \frac{\partial }{\partial x}u_1(t,0+), \frac{\partial }{\partial x}u_2(t,0+)\right) \). The trace operator is known to be continuous on \(H^{2}(\mathbb {R}_+)\), see [21], and hence, by equivalence of norms, we have \(\mathcal {I}\in L(\mathfrak H^2, \mathbb {R}^2)\) with operator norm \(K_{\mathcal {I}}\), say. Denote by \(B_{N+1}\) the (closed) ball of radius \(N+1\) in \(\mathfrak H^2\). The image of \(B_{N+1}\) under \(\mathcal {I}\) is bounded, hence contained in a compact subset of \(\mathbb {R}^2\). By Assumption 2.5, the function \(\rho : \mathbb {R}^2 \rightarrow \mathbb {R}\) is locally Lipschitz continuous and hence Lipschitz on any compact subset of \(\mathbb {R}^2\). Thus, we find a constant \(L_{\rho ,N}\) such that
By a similar argument and using only continuity of \(\rho \) instead of the Lipschitz property, we find \(M_{\rho ,N}\) such that
Finally, for any \(u, w \in B_{N + 1}\) and setting \(L_N = K_\mathcal {I}(M_{\rho ,N} + (N+1)L_{\rho ,N})\) we obtain
Now let \(\alpha < 1/4\). By the continuous embedding \(\mathfrak H^1 \hookrightarrow \mathfrak H^{2\alpha }\) and the equivalence of norms on \(\mathfrak H^2\) and \(\mathcal {D}(\mathcal {A})\) we may find \(K_\alpha \) such that also
Recalling that \(\mathfrak H^{2\alpha } = E_\alpha \) and \(\mathcal {D}(\mathcal {A})= E_1\), this yields that \(\mathcal {B}_\rho \) is Lipschitz as a mapping from bounded subsets of \(E_1\) to \(E_\alpha \).
It remains to show the same properties for \(\mathcal {B}_\mu \) and for \(\mathcal {C}\), which we delay to Appendices 1 and 2, respectively. \(\square \)
In particular, with the results from Sect. 3 we get a unique solution \(X = (u_1,u_2, x_*)\) of (2.9) on the maximal interval \(\llbracket 0,\tau \llbracket \). Combining the following lemma with Corollary 3.20 yields global existence under global growth assumptions.
Lemma 4.4
Suppose that in addition to the assumptions of Lemma 4.3 also Assumption 2.13 holds and \(\rho \) is bounded. Then the operators \(\mathcal {B}\) and \(\mathcal {C}\) satisfy the global growth bound (3.11).
Proof
Decompose \(\mathcal {B}\) into \(\mathcal {B}_\mu \) and \(\mathcal {B}_\rho \) as in the proof of Lemma 4.3. Using Assumption 2.13 we get the point-wise estimate
where \(a\in L^2(\mathbb {R}_+)\) and \(b \in L^\infty (\mathbb {R})\). Taking \(\mathfrak L^2\)-norms yields
For the first weak derivative we extract from the proof of Theorem 6.6 [cf. Eq. (6.1)]
While the first summand admits a bound similar to (4.2), part (b) of Assumption 2.13 yields, for some \({\tilde{b}}\in L^\infty (\mathbb {R}^2;\mathbb {R})\),
and
Of course, the same holds for \(\mu _-\) and \(u_2\) so that we can summarize
Collecting all the estimates and using \(\mathfrak H^2 \hookrightarrow \mathfrak H^1\) we get a constant \(M_\mu \), depending on a, b and \({\tilde{b}}\) only, such that
Using the assumption that \(\rho \) is bounded by a constant \(M_\rho \) we easily estimate
Combining with (4.5) we obtain
Using the continuous embedding \(\mathfrak H^1 \hookrightarrow \mathfrak H^{2\alpha }\) and the equivalence of norms on \(\mathfrak H^2\) and \(\mathcal {D}(\mathcal {A})\) as in the proof of Lemma 4.3 yields the global growth bound (3.11) for \(\mathcal {B}\).
To show the analogous growth bound for \(\mathcal {C}\), observe that the following equalities hold for all \(x \in \mathbb {R}_+\) and \(u\in \mathcal {D}(\mathcal {A})\subset \mathfrak H^2\):
Hence, there exists a constant \(K>0\) such that
We apply the same argument to \(\sigma _-(-.,u_2(.))\) to obtain
for a constant \(K_\sigma \), depending on \(\sigma _+\) and \(\sigma _-\) only. By Assumption 2.3, \(\sigma _{+}(.,u_1(.))\) and \(\sigma _{-}(-.,u_2(.))\) satisfy Dirichlet boundary conditions at 0 and we may apply Lemma 7.4 to obtain
which together with (4.6) yields the desired linear growth bound. \(\square \)
Finally, we prove a refined result on the blow-up behavior of the solution X(t) at the stopping time \(\tau \). We show that under Assumption 2.13 a finite-time blow-up of X(t) can only happen if the boundary values \(\mathcal {I}(X(t))\) themselves blow up. For \(N\in \mathbb {N}\) we introduce the stopping times
and set
Recall at this point the convention that \(\inf \emptyset = \infty \). From Theorem 3.17 we know that
Since a blow-up of \(\left|\mathcal {I}(X(t))\right|\) implies a blow-up of the norm \(\left|\left|X(t)\right|\right|_{\mathfrak H^2}\), and hence also of \(\left|\left|X(t)\right|\right|_{\mathcal {A}}\), we obtain that \(\tau \le \tau _\circ \) a.s. and hence only two events are possible: either
-
\(\tau = \tau _\circ \), i.e. a blow-up of X(t) coincides with the blow-up of the boundary values \(\mathcal {I}(X(t))\), or
-
\(\tau < \infty \), but \(\tau _\circ = +\infty \), i.e. a blow-up of X(t) occurs without simultaneous blow-up of its boundary values.
The following theorem shows that Assumption 2.13 rules out the second case:
Theorem 4.5
If in addition to the assumptions of Lemma 4.3 also Assumption 2.13 holds, then
Proof
Because of \(\tau _\circ \ge \tau \) and maximality of \(\tau \), (4.7), it suffices to show
Indeed, this yields \(\tau _{\circ ,N}<\tau \) on \(\{\tau <\infty \}\) and thus \(\tau _{\circ }\le \tau \) almost surely. For \(N\in \mathbb {N}\), let \(h_N:\mathbb {R}\rightarrow \mathbb {R}\) be the truncation function defined in (3.21) and define
Then, due to Theorem 2.14, Eq. (2.9) with \(\rho \) replaced by \(\rho _N\) admits a unique global solution denoted by \(X_N\). Since \(\rho = \rho _N\) on the ball of radius N, we have \(X(t) = X_N(t)\) on \(\llbracket 0,\tau _{\circ ,N}\wedge \tau \llbracket \). Since \(X_N\) is \(\mathcal {D}(\mathcal {A})\)-continuous, we have
for all \(N \in \mathbb {N}\) and the proof is complete. \(\square \)
5 Transformation from moving to fixed boundary
As the last step towards a complete proof of the main result, Theorem 2.11, we make the transformation (2.7) to the fixed-boundary equation (2.8) rigorous. Since the equation is stochastic and its solution is not differentiable in time, the classical chain rule cannot be applied. As an alternative one could use Itô's formula, which requires the transformation map to be \(C^2\). It turns out that the transformation is only \(C^1\), but its linearity in the first argument, combined with the bounded variation of \(x_*(t)\), is sufficient to make a stochastic version of the chain rule work.
5.1 Stochastic chain rule
For a given cylindrical Wiener process W on U we consider the \(H^1(\mathbb {R})\oplus \mathbb {R}\)-continuous process (v, x) on \(\llbracket 0,\tau \llbracket \), for a predictable stopping time \(\tau \), such that,
where \(\mu : \llbracket 0,\tau \llbracket \rightarrow L^2(\mathbb {R})\) and \(\sigma : \llbracket 0, \tau \llbracket \rightarrow {\text {HS}}(U,L^2(\mathbb {R}))\) are predictable processes, and \( \dot{x}: \llbracket 0,\tau \llbracket \rightarrow \mathbb {R}\) is continuous and adapted.
For \(x \in \mathbb {R}\) we define the shift operator \(\theta _x\) acting on \(L^2(\mathbb {R})\) as
Observe that the shift operator is a linear isometry and hence continuous. It is obvious that the shift operators \((\theta _x)_{x \in \mathbb {R}}\) form a group under composition, i.e. \(\theta _x \theta _\xi = \theta _{x + \xi }\); in fact this group is strongly continuous in \(L^2(\mathbb {R})\), in the sense that
see e.g. [35, Sect. VII.4]. The same properties hold true for the restriction of \((\theta _x)_{x \in \mathbb {R}}\) to the Sobolev space \(H^1(\mathbb {R})\). Finally, consider the function
which formally transforms the solution \((u,x_*)\) of the fixed boundary problem (2.8) into the solution \((F(u,x_*),x_*)\) of the moving boundary problem (2.1).
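On a periodic grid, the discrete analogue of \(\theta _x\) is an index roll; the following sketch (the sign convention \((\theta _x v)(y) = v(y+x)\) is our assumption, as is the grid size) checks the isometry and group properties of the shift stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(128)   # discrete stand-in for v in L^2(R)

def theta(k, f):
    # Discrete shift on a periodic grid: (theta_k f)[j] = f[j + k], indices mod len(f).
    return np.roll(f, -k)

# theta is an isometry: shifting preserves the l^2 norm.
shift_is_isometry = np.isclose(np.linalg.norm(theta(5, v)), np.linalg.norm(v))
# group law: theta_a theta_b = theta_{a+b}.
group_law_holds = np.allclose(theta(3, theta(4, v)), theta(7, v))
```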
We start with a Lemma on some uniform continuity estimates for the shift operator on \(L^2(\mathbb {R})\).
Lemma 5.1
Let \(T>0\), \(s \mapsto f_s\) and \(s \mapsto g_s\) be continuous functions from [0, T] to \(L^2(\mathbb {R})\) and \({\text {HS}}(U,L^2(\mathbb {R}))\) respectively, and let \(\theta _x\) be the shift operator on \(L^2(\mathbb {R})\). Then the following holds:
-
(a)
For every \(\epsilon > 0\) there exists \(\delta > 0\) such that
$$\begin{aligned} \left|\left|\theta _h f_s - f_t\right|\right|_{L^2(\mathbb {R})} \le \epsilon \end{aligned}$$for all \(|h| \le \delta \) and \(s,t \in [0,T]\) with \(|s-t| \le \delta \).
-
(b)
For every \(\epsilon > 0\) there exists \(\delta > 0\) such that
$$\begin{aligned} \left|\left|\theta _h g_s - g_t\right|\right|_{{\text {HS}}(U,L^2(\mathbb {R}))} \le \epsilon \end{aligned}$$for all \(|h| \le \delta \) and \(s,t \in [0,T]\) with \(|s-t| \le \delta \).
Remark 5.2
Applying the lemma to the constant function \(g_t\equiv g\) shows that \(\theta \) can be considered a strongly continuous group on \({\text {HS}}(U,L^2(\mathbb {R}))\).
Proof
For claim (a), note that due to the continuity of \(s \mapsto f_s\) there exists \(N := N(\epsilon ) \in \mathbb {N}\) and \(t_1, \cdots , t_N \in [0,T]\) such that
i.e., we can find N balls in \(L^2(\mathbb {R})\) around the points \(f_{t_i}\) of radius \(\frac{\epsilon }{3}\), which cover the whole range of \(f_s\). Moreover, due to the strong continuity of the group \((\theta _x)_{x \in \mathbb {R}}\) there exists \(\delta > 0\), depending on \(\epsilon \), N and \(f_{t_i}\), \(i=1,\ldots ,N\), such that
The estimate
concludes the proof of the first claim.
We show (b) in a similar way; to alleviate notation we denote by \(\left|\left|.\right|\right|_{HS}\) the Hilbert-Schmidt norm \(\left|\left|.\right|\right|_{{\text {HS}}(U,L^2(\mathbb {R}))}\). Due to the continuity of \(s \mapsto g_s\) there exists \(N := N(\epsilon ) \in \mathbb {N}\) and \(t_1, \cdots , t_N \in [0,T]\) such that
Denote by \((e_k)_{k \in \mathbb {N}}\) an arbitrary orthonormal basis of the (separable) Hilbert space U. Since \(g_s\) is a Hilbert-Schmidt-operator for every \(s \in [0,T]\), the series \(\left|\left|g_{t_i}\right|\right|_{HS}^2 = \sum _k \left|\left|g_{t_i} e_k\right|\right|_{L^2(\mathbb {R})}^2\) converges for every \(i \in \{1, \cdots , N\}\). Hence, there is \(M := M(\epsilon ) \in \mathbb {N}\) such that
Moreover, due to the strong continuity of the group \((\theta _x)_{x \in \mathbb {R}}\) there exists \(\delta > 0\), depending on \(\epsilon \), N, M and \(g_{t_i}e_k\), \(i=1,\ldots ,N\), \(k=1,\ldots ,M\), such that
Combining with the previous equation, we obtain
The estimate
concludes the proof. \(\square \)
For notational simplicity, we denote by \(v'\) the weak derivative of \(v\in H^1(\mathbb {R})\).
Lemma 5.3
The transformation \(F\) from (5.4), restricted to \(H^1(\mathbb {R})\oplus \mathbb {R}\), has the following properties:
-
(1)
F is a continuous mapping from \(H^1(\mathbb {R})\;\oplus \mathbb {R}\) to \(H^1(\mathbb {R})\);
-
(2)
F is a continuously differentiable mapping from \(H^1(\mathbb {R})\,\oplus \, \mathbb {R}\) to \(L^2(\mathbb {R})\) with Fréchet derivative given by
$$\begin{aligned} D_{(v,x)} F(h,\xi ) = \theta _x h+\xi \theta _x v' . \end{aligned}$$(5.5)for \(x,\xi \in \mathbb {R}\) and \(v, h \in H^1(\mathbb {R})\).
Proof
For (1), we estimate
where by the strong continuity of \(\theta _x\) on \(H^1(\mathbb {R})\) the latter term vanishes as \(|\xi - x| \rightarrow 0\). To show (2), we first verify that (5.5) gives the Fréchet derivative of F, by estimating
Applying first the fundamental theorem of calculus and in the second step Jensen’s inequality and Fubini’s theorem we continue with
showing Fréchet differentiability of F. To show continuous differentiability we estimate
Writing \(\left|\left|.\right|\right|_{op}\) for the operator norm from \(H^1(\mathbb {R})\oplus \mathbb {R}\) to \(L^2(\mathbb {R})\), this shows that
The right hand side goes to zero as \( \left|\left|v-w\right|\right|_{H^1(\mathbb {R})} +|x-y|\rightarrow 0\) by Lemma 5.1, and hence \(D_{(v,x)}F\) depends continuously on (v, x). \(\square \)
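Formula (5.5) can be sanity-checked by a finite-difference experiment on a fine grid (the test functions, parameter values and the sign convention \((\theta _x v)(y) = v(y+x)\) are our choices, not from the paper):

```python
import numpy as np

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]
l2 = lambda f: np.sqrt(dy * np.sum(f**2))   # discrete L^2 norm

v  = lambda s: np.exp(-s**2)                 # v in H^1(R)
vp = lambda s: -2.0 * s * np.exp(-s**2)      # its derivative v'
g  = lambda s: np.exp(-(s - 1.0)**2)         # perturbation direction h in H^1(R)

x, xi, eps = 0.7, 0.5, 1e-3
# F(v, x)(y) = v(y + x); perturb both arguments by eps * (h, xi).
increment  = v(y + x + eps * xi) + eps * g(y + x + eps * xi) - v(y + x)
derivative = g(y + x) + xi * vp(y + x)       # D_{(v,x)}F(h, xi), cf. (5.5)

# The first-order Taylor remainder should be of order eps^2, i.e. o(eps):
remainder = l2(increment - eps * derivative)
```

With `eps = 1e-3` the remainder is of order `1e-6`, confirming that (5.5) captures the full first-order behavior of F.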
Theorem 5.4
(Stochastic Chain Rule) Let (v, x) be given by (5.1) and set \(u_t = F(v_t,x_t) = \theta _{x_t} v_t\). Then u satisfies
on \(\llbracket 0,\tau \llbracket \), where the first integral is an \(L^2\)-Bochner integral and the second one an \({\text {HS}}\)-stochastic integral.
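In the notation of (5.1) and the derivative formula (5.5), the identity (5.6) reads, at least formally, as follows (a sketch reconstructed by us from \(u_t = \theta _{x_t} v_t\); we do not reproduce the original display):

```latex
u_t \;=\; \theta_{x_t} v_t
  \;=\; u_0 \;+\; \int_0^t \bigl( \theta_{x_s}\mu_s + \dot{x}_s\,\theta_{x_s} v_s' \bigr)\,\mathrm{d}s
        \;+\; \int_0^t \theta_{x_s}\sigma_s \,\mathrm{d}W_s .
```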
Proof
Without loss of generality assume \(\tau = \infty \) almost surely; else, take an announcing sequence of stopping times \((\tau _N)\) for \(\tau \), multiply \(\dot{x}\), \(\mu _t\) and \(\sigma _t\) by the indicator function \(\mathbf {1}_{\llbracket 0,\tau _N\rrbracket }\) and replace x and v by \(x_{\cdot \wedge \tau _N}\) and \(v_{\cdot \wedge \tau _N}\), respectively. In this proof we use the shorthand notation \(x_{s,t} = x_t - x_s\), \(v_{s,t} = v_t - v_s\) etc. For an arbitrary partition \(\mathcal {P}\) of [0, T] we decompose
where DF is the Fréchet derivative of F from (5.5) and R is a remainder term. Using equation (5.1) and the explicit forms of F and DF we rewrite
The formula (5.6) follows, if we can show each of the following (a.s.) convergence statements
and
for some sequence of partitions \((\mathcal {P}_n)_{n \in \mathbb {N}}\) as \(n \rightarrow \infty \).
For claim (5.7) choose \(\epsilon ' > 0\) and set \(\epsilon = \epsilon ' / V_x[0,T]\), where \(V_x[0,T]\) is the total variation of the process x over the interval [0, T]. By Lemma 5.1 we can find \(\delta > 0\) such that
for all \(|h| \le \delta , |s - r| \le \delta \). By continuity of x we can choose the mesh of \(\mathcal {P}_n\) fine enough such that \(|x_{r,s}| \le \delta \) for \(|s - r| \le |\mathcal {P}_n|\) and hence
for \(|\mathcal {P}_n|\) small enough. As \(\epsilon ' > 0\) was arbitrary this shows (5.7) for any sequence of partitions with \(|\mathcal {P}_n| \rightarrow 0\). For (5.8) we estimate
The integrand converges to 0 for \(\left|\mathcal {P}_n\right|\searrow 0\), due to strong continuity of \(\theta \) and continuity of x. By dominated convergence, this carries over to the whole integral.
For (5.9) estimate
Applying first the fundamental theorem of calculus and in the second step Jensen’s inequality and Fubini’s theorem we continue with
Again \(\sum _{[s,t] \in \mathcal {P}_n} |x_{s,t}|\) can be bounded by the total variation of x, while Lemma 5.1 shows that the \(L^2\)-norm vanishes uniformly as \(|\mathcal {P}_n| \rightarrow 0\). Finally, to obtain the convergence of the stochastic integrals in (5.10), we define the \({\text {HS}}(U,L^2(\mathbb {R}))\)-valued functions
and note that (5.10) is equivalent to \(\lim _{n\rightarrow \infty } \int _0^T \Phi _{\mathcal {P}_n}(r)\,{\text {d}}W_r = \int _0^T \Phi (r)\,{\text {d}}W_r\) along the sequence of partitions \(\mathcal {P}_n\). Denoting by \(\left|\left|.\right|\right|_{HS}\) the Hilbert-Schmidt-norm on \({\text {HS}}(U,L^2(\mathbb {R}))\) we claim that
Indeed, by rewriting
we may proceed as in the case of (5.8), since \(\theta \) is strongly continuous on \({\text {HS}}(U,L^2(\mathbb {R}))\) (see Remark 5.2), to conclude that the Hilbert-Schmidt norm on the right hand side vanishes uniformly as \(|\mathcal {P}_n| \rightarrow 0\). For \(\delta , \epsilon > 0\), [7, Proposition 4.31] yields
and we see that as \(|\mathcal {P}_n| \rightarrow 0\) the right hand side can be made arbitrarily small. Hence, \(\int _0^T \Phi _{\mathcal {P}_n}(r)\,{\text {d}}W_r \rightarrow \int _0^T \Phi (r)\,{\text {d}}W_r\) in probability as \(|\mathcal {P}_n| \rightarrow 0\). In particular, any sequence of partitions with mesh tending to zero contains a subsequence \((\mathcal {P}_{n_k})_{k \in \mathbb {N}}\) such that the convergence of the stochastic integrals takes place almost surely, completing the proof of (5.10). \(\square \)
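The final step, convergence of left-point Riemann sums of the stochastic integral, can be illustrated in a scalar toy case (our own example, not from the paper): for \(\int _0^T W\,\mathrm{d}W = \tfrac12 (W_T^2 - T)\), left-point sums over a refining partition converge to the Itô integral.

```python
import numpy as np

rng = np.random.default_rng(42)
T, n = 1.0, 10_000
dW = rng.standard_normal(n) * np.sqrt(T / n)
W = np.concatenate(([0.0], np.cumsum(dW)))   # Brownian path on the partition

# Left-point Riemann sum approximating the Ito integral of W against dW.
riemann = np.sum(W[:-1] * dW)
ito_exact = 0.5 * (W[-1] ** 2 - T)
```

The discrepancy between the two values is \(\tfrac12 |T - \sum (\Delta W)^2|\), which vanishes as the mesh goes to zero by the quadratic variation of Brownian motion.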
5.2 Completing the proof of the main result
Making use of the notation defined in Sects. 2 and 4 we complete the proof of Theorem 2.11 by combining the relevant results.
Proof of Theorem 2.11
Given initial data \(x_0\in \mathbb {R}\), \(v_0\in \Gamma (x_0)\) we set
Applying the lemmas of Sect. 4 together with Theorem 3.17 and Corollary 3.21, there exists a unique maximal strong solution X of (2.9) on \(\mathfrak L^2\) with initial value \(X_0\). Note that X is \(\mathfrak H^2\)-continuous; denote its \(\mathfrak H^2\)-explosion time by \(\tau \). Setting
we obtain a solution of the fixed boundary problem (2.8) on \(\llbracket 0,\tau \llbracket \). Next we paste together \(u_1\) and \(u_2\) by setting
and note that due to the Dirichlet boundary condition at 0 it holds that \(u_t \in H^1(\mathbb {R})\). Thus we may apply the stochastic chain rule of Theorem 5.4 to \(v_t := F(u_t, -x_*(t)) = \theta _{-x_*(t)} u_t\) to obtain a local solution \(v_t\) of the stochastic free boundary problem (2.1).
To show uniqueness, assume that there exists another local solution \((\check{v},\check{x}_*)\) of (2.1) on a stochastic interval \(\llbracket 0,\varsigma \llbracket \), which, by definition, is a.s. \(H^1(\mathbb {R})\)-continuous. Applying the stochastic chain rule of Theorem 5.4 to \(\check{u}_t = F(\check{v}_t, \check{x}_*(t)) = \theta _{\check{x}_*(t)} \check{v}_t\) we obtain a local solution \(\check{u}_t\) of the stochastic fixed boundary problem (2.8). Reversing the procedure from above, \(\check{u}_t\) can be rewritten as a solution \(\check{X}(t)\) of the abstract stochastic evolution equation (2.9). The uniqueness and maximality parts of Theorem 3.17 and Corollary 3.21 imply \(\varsigma \le \tau \) and that \(\check{X}(t)\) equals the solution X(t) constructed previously. We conclude that also \((\check{v},\check{x}_*) = (v,x_*)\), and the proof is complete. \(\square \)
References
Appell, J., Zabrejko, P. P.: Nonlinear Superposition Operators. Cambridge Tracts in Mathematics, vol. 95. Cambridge University Press, Cambridge (1990)
Barbu, V., Da Prato, G.: The two phase stochastic Stefan problem. Probab. Theory Relat. Fields 124(4), 544–560 (2002)
Bayer, C., Horst, U., Qiu, J.: A functional limit theorem for limit order books with state dependent price dynamics. arXiv:1405.5230 (2014)
Brzeźniak, Z., Maslowski, B., Seidler, J.: Stochastic nonlinear beam equations. Probab. Theory Relat. Fields 132(1), 119–149 (2005)
Cont, R., Kukanov, A., Stoikov, S.: The price impact of order book events. J. Financial Economet. 12(1), 47–88 (2014)
Donier, J., Bonart, J., Mastromatteo, I., Bouchaud, J.-P.: A fully consistent, minimal model for non-linear market impact. arXiv:1412.0141 (2014)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions, 2nd edn. Encyclopedia of Mathematics and its Applications, vol. 152. Cambridge University Press, Cambridge (2014)
Da Prato, G., Zabczyk, J.: A note on stochastic convolution. Stoch. Anal. Appl. 10(2), 143–153 (1992)
Engel, K.-J., Nagel, R.: One-Parameter Semigroups for Linear Evolution Equations. Graduate Texts in Mathematics, vol. 194. Springer, New York (2000)
Escher, J., Prüss, J., Simonett, G.: Analytic solutions for a Stefan problem with Gibbs–Thomson correction. J. Reine Angew. Math. 563, 1–52 (2003)
Fasano, A., Primicerio, M.: Free boundary problems for nonlinear parabolic equations with nonlinear free boundary conditions. J. Math. Anal. Appl. 72(1), 247–273 (1979)
Grisvard, P.: Commutativité de deux foncteurs d’interpolation et applications. J. Math. Pures Appl. 9(45), 207–290 (1966)
Henry, D.: Geometric Theory of Semilinear Parabolic Equations. Lecture Notes in Mathematics, vol. 840. Springer, Berlin (1981)
Kallenberg, O.: Foundations of Modern Probability. Applied probability. Springer, New York (2002)
Kim, K., Mueller, C., Sowers, R.B.: A stochastic moving boundary value problem. Illinois J. Math. 54(3), 927–962 (2010)
Krylov, N.V.: On the Itô–Wentzell formula for distribution-valued processes and related topics. Probab. Theory Relat. Fields 150(1–2), 295–319 (2011)
Kim, K., Sowers, R.B.: Numerical analysis of the stochastic moving boundary problem. Stoch. Anal. Appl. 30(6), 963–996 (2012)
Kuratowski, K.: Topology, vol. 1. Academic Press, New York (1966)
Kim, K., Zheng, Z., Sowers, R.B.: A stochastic Stefan problem. J. Theor. Probab. 25, 1040–1080 (2012)
Lasry, J.-M., Lions, P.-L.: Mean field games. Jpn. J. Math. 2(1), 229–260 (2007)
Lions, J.-L., Magenes, E.: Non-homogeneous Boundary Value Problems and Applications, vol. 1. Springer, Berlin (1972)
Lipton, A., Pesavento, U., Sotiropoulos, M. G: Trade arrival dynamics and quote imbalance in a limit order book. arXiv:1312.0514 (2013)
Lunardi, A.: Analytic Semigroups and Optimal Regularity in Parabolic Problems. Progress in Nonlinear Differential Equations and Their Applications, 2nd edn. Birkhäuser, Basel (1995)
Lunardi, A.: An Introduction to parabolic moving boundary problems. In: Iannelli, M., Nagel, R., Piazzera, S. (eds.) Functional Analytic Methods for Evolution Equations. Lecture Notes in Mathematics, vol. 1855, pp. 371–399. Springer, Berlin (2004)
Lunardi, A.: Interpolation Theory, 2nd edn. Appunti. Scuola Normale Superiore di Pisa (Nuova Serie) [Lecture Notes. Scuola Normale Superiore di Pisa (New Series)]. Edizioni della Normale, Pisa (2009)
Mastromatteo, I., Tóth, B., Bouchaud, J.-P.: Anomalous impact in reaction-diffusion financial models. Phys. Rev. Lett. 113, 268701 (2014)
Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Applied Mathematical Sciences, vol. 44. Springer, New York (1992)
Prévôt, C., Röckner, M.: A Concise Course on Stochastic Partial Differential Equations. Springer, Berlin (2007)
Prüss, J., Saal, J., Simonett, G.: Existence of analytic solutions for the classical Stefan problem. Math. Ann. 338(3), 703–755 (2007)
Stefan, J.: Über die Theorie der Eisbildung, insbesondere über die Eisbildung im Polarmeere. Wien. Ber. XCVIII, Abt. 2a, 965–983 (1888)
Valent, T.: A property of multiplication in Sobolev spaces. Some applications. Rend. Sem. Mat. Univ. Padova 74, 63–73 (1985)
Valent, T.: Boundary Value Problems of Finite Elasticity: Local Theorems on Existence, Uniqueness, and Analytic Dependence on Data. Springer Tracts in Natural Philosophy, vol. 31. Springer, New York (1988)
van Neerven, J., Veraar, M., Weis, L.: Maximal \(L^p\)-regularity for stochastic evolution equations. SIAM J. Math. Anal. 44(3), 1372–1414 (2012)
Vuik, C.: Some Historical Notes about the Stefan Problem. Delft University of Technology, Faculty of Technical Mathematics and Informatics (1993)
Werner, D.: Funktionalanalysis. Springer-Lehrbuch. Springer, Berlin (2007)
Zheng, Z.: Stochastic Stefan Problems: Existence, Uniqueness and Modeling of Market Limit Orders. PhD thesis, Graduate College of the University of Illinois at Urbana-Champaign (2012)
Acknowledgments
The authors acknowledge funding from the German Research Foundation (DFG) under Grants ZUK 64 and RTG 1845 and would like to thank Wilhelm Stannat for comments and discussions.
Appendices
Appendix 1: The Nemytskii operator on Sobolev spaces
In this section we prove some regularity results on the Nemytskii operator N,
on the Sobolev spaces \(H^k(\mathbb {R}_+)\). Here, \(\mu :\mathbb {R}_+ \times \mathbb {R}^d \rightarrow \mathbb {R}\) and \(x\in \mathbb {R}_+\). These results are well known, even for more general spaces, in the case of bounded domains, see e.g. [1, 32]. On unbounded domains, however, several additional conditions on \(\mu \) are necessary. First, we state a result which guarantees that, under certain assumptions on \(\mu \), N maps \(H^k\) into \(H^k\). For a proof we refer to [31, Theorem 1], of which it is a special case.
Lemma 6.1
For each integer \(k\ge 1\) the space \(H^k(\mathbb {R}_+)\) is a Banach algebra. In particular, there exists a constant c such that for all u, \(v\in H^k(\mathbb {R}_+)\) it holds that \(uv\in H^k(\mathbb {R}_+)\) and
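The estimate elided here is the usual algebra inequality; reconstructed to be consistent with the statement of Lemma 6.1, it reads (with \(c = c(k)\) the constant from the lemma):

```latex
% Banach algebra property of H^k(R_+), k >= 1
% (reconstruction of the elided display of Lemma 6.1)
\begin{aligned}
  \left\| u v \right\|_{H^k(\mathbb{R}_+)}
    \le c\, \left\| u \right\|_{H^k(\mathbb{R}_+)} \left\| v \right\|_{H^k(\mathbb{R}_+)},
  \qquad u,\, v \in H^k(\mathbb{R}_+).
\end{aligned}
```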
Next, we adapt [31, Theorem 2] to our setting. For notational reasons we also introduce the Nemytskii operators
for \(u\in H^k(\mathbb {R}_+; \mathbb {R}^d)\) and \(x\in \mathbb {R}_+\). In order for N to map \(H^k\) into \(H^k\), we need certain growth restrictions on \(\mu \), which are not necessary on bounded domains.
Assumption 6.2
Assume \(\mu \in C^m(\mathbb {R}_{\ge 0}\times \mathbb {R}^d, \mathbb {R})\) and
(a) For each integer l, \(0\le l\le m\), there exist \(a_l\in L^2(\mathbb {R}_+)\) and a locally bounded \(b_l:\mathbb {R}^d\rightarrow \mathbb {R}_+\) such that
$$\begin{aligned} \left|D^{(l,0,...,0)} \mu (x,y)\right| \le a_l(x) + b_l(y)\left|y\right|,\quad \forall \,x\in \mathbb {R}_+,\,y\in \mathbb {R}^d \end{aligned}$$
(b) For each multi-index \(\alpha \) with \(\alpha _1 \ne \left|\alpha \right| \le m\), the functions \(\sup _{x\in \mathbb {R}_+} \left|D^\alpha \mu (x,.)\right|\) are locally bounded.
Assumption 6.3
Assume that \(\mu \in C^{m}(\mathbb {R}_{\ge 0} \times \mathbb {R}^d, \mathbb {R})\) and that \(D^{\alpha }\mu (x,.)\) is locally Lipschitz for all multi-indices \(\alpha \), \(\left|\alpha \right|\le m\), with Lipschitz constants uniform in \(x\in \mathbb {R}_{\ge 0}\), i.e., we assume that for all \(r\ge 0\) there exists \(L_r\ge 0\) such that
holds for all \(x\in \mathbb {R}_{\ge 0}\) and y, \(z\in \mathbb {R}^d\) with \(\left|y\right|\), \(\left|z\right|\le r\), and all multi-indices \(\alpha \) with \(\left|\alpha \right|\le m\).
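Reconstructed from the surrounding text, the elided Lipschitz condition of Assumption 6.3 should read:

```latex
% uniform local Lipschitz condition (reconstruction of the elided display)
\left| D^{\alpha}\mu(x,y) - D^{\alpha}\mu(x,z) \right|
  \le L_r \left| y - z \right|
```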
Remark 6.4
If \(\mu \) satisfies Assumption 6.2 for some integer \(m\ge 1\), then \(\mu \) satisfies Assumption 6.3 for \(m-1\).
Remark 6.5
Recall the Sobolev embeddings
where \(BUC^m(\mathbb {R}_+)\) denotes the Banach space of functions with bounded and uniformly continuous derivatives up to order m. As usual \(BUC^m(\mathbb {R}_+)\) is equipped with the \(C^m\)-norm. In the following, we will work with the \(BUC^m\) representative of the elements in \(H^{m+1}\) without further comment.
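The embeddings in question are the standard one-dimensional Sobolev embeddings, stated here for the reader's convenience:

```latex
% Sobolev embedding on the half-line (one space dimension)
H^{m+1}(\mathbb{R}_+) \hookrightarrow BUC^{m}(\mathbb{R}_+),
  \qquad m \in \mathbb{N}_0.
```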
Theorem 6.6
If Assumption 6.2 holds for some integer \(m\ge 1\), then the operator N is continuous from \((H^m(\mathbb {R}_+))^d\) into \(H^m(\mathbb {R}_+)\).
Proof
We adapt the proof of [31, Theorem 2] to the domain \(\mathbb {R}_+\) and the spaces \(H^k\) by incorporating the additional growth assumptions. We proceed by induction and first consider \(m=1\). Since \(\mu \in C^1(\mathbb {R}_{\ge 0}\times \mathbb {R}^d)\) we get immediately that N(u), \(N_x(u)\) and \(N_{y_j}(u)\), \(j=1,...,d\), are bounded and continuous functions for fixed \(u\in H^1(\mathbb {R}_+; \mathbb {R}^d)\). Let now \((u^n) \subset H^1(\mathbb {R}_+; \mathbb {R}^d) \cap C^\infty (\mathbb {R}_{\ge 0};\mathbb {R}^d)\) be such that \(u^n \longrightarrow u\) in \(H^1\). Then the convergence also takes place in \(\left|\left|.\right|\right|_{\infty }\) and by the chain rule we can write
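The chain-rule decomposition used here, referred to as (6.1) below, should, in the notation \(N_x\), \(N_{y_j}\) introduced above, take the form (a reconstruction from the surrounding argument):

```latex
% chain rule for x -> mu(x, u(x)) (reconstruction of display (6.1))
\frac{\mathrm{d}}{\mathrm{d}x} N(u^n)(x)
  = N_x(u^n)(x)
  + \sum_{j=1}^{d} N_{y_j}(u^n)(x)\, \frac{\mathrm{d}}{\mathrm{d}x} u_j^n(x).
```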
By assumption,
Recall that the \(u^n\) are globally bounded by the Sobolev embeddings, and \(b_0\), \(b_1\) are locally bounded by assumption. Hence, \(b_i\circ u^n\), \(i\in \{0,1\}\), are globally bounded, uniformly in \(n\in \mathbb {N}\). Since \(u^n\rightarrow u\) uniformly and in \(L^2\), we obtain for each of these estimates that
and hence \(N(u^n)\) and \(N_x(u^n)\) are bounded by \(L^2\)-converging sequences. Hence, we can apply a version of Lebesgue’s dominated convergence theorem [14, Theorem 1.21] and obtain the \(L^2\)-convergence of
For the remaining summands we get
which goes to 0 as \(n\rightarrow \infty \). Indeed, uniform convergence of \((u^n)\) and dominated convergence yield \(L^2\)-convergence of \((N_{y_j}(u^n)\nabla u)\). Hence,
By completeness of \(H^1\) this implies \(N(u)\in H^1(\mathbb {R}_+)\) and also shows the continuity of N for the case \(m=1\).
For the induction step from m to \(m+1\) we may assume that the claim holds true for \(m \ge 1\) and that Assumption 6.2 holds for \(m+1\). Clearly, the assumption also holds for m, and so N maps \(H^{m+1}\) continuously into \(H^m\) by the induction hypothesis. It thus remains to show that also \(\frac{\mathrm{d}}{\mathrm{d}x}N\) maps \(H^{m+1}\) into \(H^m\). We decompose \(\frac{\mathrm{d}}{\mathrm{d}x}N\) as in (6.1) and note that Assumption 6.2 for m is also satisfied by \(\frac{\partial }{\partial x}\mu \) and by
Hence, by the induction hypothesis the operators \(N_x\) and \({\tilde{N}}_j\) are continuous from \(H^{m}(\mathbb {R}_+)^d\), resp. \(H^m (\mathbb {R}_+)^{d+1}\), into \(H^m (\mathbb {R}_+)\), where \({\tilde{N}}_j\) is the Nemytskii operator defined by \({\tilde{\mu }}_j\), \(j=1,...,d\). Since also \(u\mapsto \nabla u\) is continuous from \(H^{m+1}\) into \(H^m\) and Lemma 6.1 shows continuity of multiplication, (6.1) yields continuity of \(\frac{\mathrm{d}}{\mathrm{d}x}N\) from \(H^{m+1}\) into \(H^m\), as claimed. \(\square \)
Theorem 6.7
Let \(\mu \) satisfy Assumptions 6.2 and 6.3 for some positive integer m. Then, N is Lipschitz continuous from bounded subsets of \((H^m(\mathbb {R}_+))^d\) into \(H^m(\mathbb {R}_+)\).
Proof
We proceed as above, by induction on \(m\in \mathbb {N}\). First, let \(m=1\) and u, \(v\in H^1(\mathbb {R}_+;\mathbb {R}^d)\) with \(\left|\left|u\right|\right|_{H^1}\), \(\left|\left|v\right|\right|_{H^1} \le r\). By continuity of N we can assume w.l.o.g. that u, \(v\in H^1(\mathbb {R}_+;\mathbb {R}^d)\cap C^\infty (\mathbb {R}_{\ge 0}; \mathbb {R}^d)\). By the Sobolev embeddings there exists a constant c such that \(\left|\left|u\right|\right|_{\infty }\), \(\left|\left|v\right|\right|_{\infty } \le cr\). By Assumption 6.3,
and for \(j=1,...,d\),
for \(K_{j,cr} := \sup _{\left|y\right|\le cr} \sup _{x\in \mathbb {R}_{\ge 0}}\left|\frac{\partial }{\partial y_j} \mu (x,y)\right|<\infty \). The chain rule (6.1) then yields the assertion for \(m=1\).
For the induction step we may assume that the theorem holds for fixed m and that Assumptions 6.2 and 6.3 are satisfied for \(m+1\). By the induction hypothesis, N is Lipschitz on bounded sets from \(H^m(\mathbb {R}_+)^d\) into \(H^m(\mathbb {R}_+)\), and thus also from \(H^{m+1}(\mathbb {R}_+)^d\) into \(H^m(\mathbb {R}_+)\). Hence, it suffices to show that \(\frac{\mathrm{d}}{\mathrm{d}x}N\) is Lipschitz on bounded sets from \(H^{m+1}(\mathbb {R}_+)^d\) into \(H^m(\mathbb {R}_+)\). To this end note that \(\frac{\partial }{\partial x}\mu \), as well as the functions \({\tilde{\mu }}_j\), \(j=1,...,d\), defined in (6.2), satisfy Assumptions 6.2 and 6.3. By the induction hypothesis, the operators \(N_x\) and \({\tilde{N}}_j\), \(j=1,...,d\), defined in the proof of Theorem 6.6, are Lipschitz on bounded sets. Again, approximation by elements in \(H^m\cap C^m\) and (6.1) then show that the same holds true for \(\frac{\mathrm{d}}{\mathrm{d}x}N\). \(\square \)
Appendix 2: The noise operator
In this section we will study the operator-valued map \(\mathcal {C}\), defined in (2.13) by
for \(u\in \mathcal {D}(\mathcal {A})\), \(w\in U\) and \(x\in \mathbb {R}\). We can reduce the problem to the operator
for \(\sigma \) satisfying Assumption 2.3 and \(\zeta \) as in Assumption 2.6. Define the Nemytskii operator
which is Lipschitz on bounded sets by Theorem 6.7.
Lemma 7.1
Multiplication is bilinear continuous from \(H^2(\mathbb {R}_+) \times BUC^2(\mathbb {R}_{\ge 0})\) into \(H^2(\mathbb {R}_+)\).
Proof
By density of \(C^n \cap H^n\) in \(H^n\), \(n\in \mathbb {N}\), one can check that the Leibniz formula holds for multiplication on \(H^n\times BUC^n\), so that
which is clearly square integrable for \(u\in H^n(\mathbb {R}_+)\), \(f\in BUC^n(\mathbb {R}_{\ge 0})\) and \(k\le n\). In particular, for \(n=2\),
for some constant K, so that for some \({\tilde{K}}\),
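The elided concluding estimate should, consistent with the bilinear continuity claimed in Lemma 7.1, be of the form:

```latex
% bilinear continuity of multiplication (reconstruction of the elided display)
\left\| u f \right\|_{H^2(\mathbb{R}_+)}
  \le \tilde{K}\, \left\| u \right\|_{H^2(\mathbb{R}_+)}
      \left\| f \right\|_{BUC^2(\mathbb{R}_{\ge 0})}.
```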
\(\square \)
Lemma 7.2
\(T_\zeta \) maps U into \(BUC^2(\mathbb {R})\). Moreover, \(T_\zeta w\) and its first two derivatives are Lipschitz continuous for all \(w\in U\).
Proof
First note that for all \(x\in \mathbb {R}\) and \(i\in \{0,1,2\}\) it holds that
Indeed, for all \(z\ne 0\), the fundamental theorem of calculus yields
By strong continuity of the shift group on \(L^2\), the integrand on the right-hand side converges to 0 as \(z\rightarrow 0\). Since (2.6) holds true for \(i\in \{0,...,3\}\), (7.3) follows by dominated convergence.
Now, it suffices to show that \(T_\zeta w\in BUC(\mathbb {R})\) provided that (2.6) holds for \(i=0\) and \(i=1\). For fixed \(w\in U\) and \(x_1\), \(x_2\in \mathbb {R}\) we directly get
Here, we used the fundamental theorem of calculus, Tonelli’s theorem and the Cauchy–Schwarz inequality. Hence, \(T_\zeta w\) is globally Lipschitz and in particular uniformly continuous. Analogously, the Cauchy–Schwarz inequality yields
\(\square \)
Remark 7.3
From (7.5) we immediately get \(T_\zeta \in L(U, BUC^2(\mathbb {R}))\). However, in general \(T_\zeta \) itself is not Hilbert–Schmidt. To obtain the Hilbert–Schmidt property we need the multiplication by \(N_\sigma \), as we show in the next lemma.
Lemma 7.4
Let \(\Delta \) be the Dirichlet Laplacian on \(L^2(\mathbb {R}_+)\). For \(u\in \mathcal {D}(\Delta )\) and \(x_*\in \mathbb {R}\) it holds that
Remark 7.5
This result immediately extends to \(\mathcal {C}(u)\), because Assumption 2.3 and Lemma 7.1 assure \(N_\sigma (u)\in \mathcal {D}(\Delta )\) for all \(u\in \mathcal {D}(\Delta )\). Moreover, note that
where \(\zeta _x := \zeta (x+.,.)\) satisfies Assumption 2.6, too.
Proof
Linearity and continuity in w follow directly from the construction and Remark 7.3; it remains to estimate the Hilbert–Schmidt norm. Without loss of generality we may take \(x_* = 0\). Denote by \((e_k)\) an arbitrary complete orthonormal system (CONS) of U; then
and the first sum equals
where we used Tonelli’s theorem and Parseval’s identity for the first equality. To bound the second sum we proceed in exactly the same way, but first apply the Leibniz rule to obtain the second (weak) derivative
\(\square \)
To show the main result of this appendix, we just need to combine the previous lemmas.
Theorem 7.6
The map \(\mathcal {C}:\mathcal {D}(\mathcal {A})\rightarrow {\text {HS}}(U,\mathcal {D}(\mathcal {A}))\) is Lipschitz continuous on bounded sets.
Proof
By the structure of \(\mathcal {A}\) and \(\mathfrak L^2\), it suffices to show the property for the operator defined in (7.1). Assumption 2.3 yields \(N_\sigma (\mathcal {D}(\Delta ))\subset \mathcal {D}(\Delta ) \) and we can apply Lemma 7.4. For u, \({\tilde{u}}\in \mathcal {D}(\Delta )\), \(x_*\), \(y_*\in \mathbb {R}\), and writing \( \zeta _{z,\tilde{z}}(x,y) := \zeta (z+x,y) - \zeta (\tilde{z} + x,y)\), it holds that
A computation similar to (7.4) shows
Finally, we put everything together and use that \(N_\sigma \) is Lipschitz, and thus bounded, on bounded sets to obtain the assertion. \(\square \)
Keller-Ressel, M., Müller, M.S. A Stefan-type stochastic moving boundary problem. Stoch PDE: Anal Comp 4, 746–790 (2016). https://doi.org/10.1007/s40072-016-0076-z