1 Introduction

Moving boundary problems form an important area of partial differential equations. They describe a wide range of physically interesting phenomena in which a system has two phases. Since the boundary between these phases is defined only implicitly, however, they pose deep mathematical challenges.

Our goal here is to study the effect of noise on the Stefan problem, which is one of the canonical moving boundary value problems. Fix a probability triple and assume that ζ is a random field which is Brownian in time but which is correlated in space (we will rigorously define ζ in Sect. 2). We consider the stochastic partial differential equation (SPDE)

$$ \begin{aligned}\frac{\partial u}{\partial t}(t,x) &= \frac{\partial^2 u}{\partial x^2}(t,x) + \alpha u(t,x)+ u(t,x)\,d\zeta_t(x)\quad x>\beta(t) \\\lim_{x\searrow \beta(t)}\frac{\partial u}{\partial x}(t,x)&= -\varrho \dot{\beta}(t) \\u(0,x)&= u_\circ(x) \quad x\in \mathbb {R}\\\bigl\{(t,x)\in \mathbb {R}_+\times \mathbb {R}\mid u(t,x)>0\bigr\} &= \bigl\{(t,x)\in \mathbb {R}_+\times \mathbb {R}\mid x>\beta(t)\bigr\}.\end{aligned}$$
(1)

The constant α∈ℝ is fixed (we shall later see why it is natural to include this term). We also assume that the initial condition u_∘∈C(ℝ) satisfies some specific properties:

  • u_∘≡0 on ℝ_−, u_∘>0 on (0,∞), and \(\lim_{x\searrow 0}\tfrac{du_{\circ}}{dx}(x)\) exists.

  • u_∘ and its first three derivatives exist on (0,∞) and are square-integrable (on (0,∞)).

The last requirement in (1) means that the boundary between u≡0 and u>0 is exactly the graph of β.

In fact, it is not yet clear that (1) makes sense. Differential equations are pointwise statements, whereas stochastic differential equations are really shorthand for corresponding integral equations; for them, pointwise statements typically do not make sense. It will take some work to restate the pointwise stochastic statement in the first line of (1) as a statement about stochastic integrals.

There has been fairly little written on the effect of noise on moving boundary problems (see [1, 4], and [15]; see also the work on the stochastic porous medium equation in [2, 6–8, 14]). We note here that the multiplicative term u in front of the dζ_t places this work slightly outside the purview of the theory of infinite-dimensional evolution equations with Gaussian perturbations. The multiplicative term is in fact a natural nonlinearity. It implies that there is only one interface; bubbles where u>0 cannot spontaneously nucleate where u=0, and conversely, bubbles where u=0 cannot spontaneously nucleate where u>0 (see Lemma 5.8).

Our work here follows on [15], where we investigated the structure of a moving boundary problem driven by a single Brownian motion. The moving boundary problem of interest there is a model of detonation. We here focus on the Stefan problem and consider noise which is spatially correlated. We formulate several techniques which can (hopefully) be applied to a number of stochastic moving boundary value problems. In our particular case, where the randomness comes from a noise which is correlated in space and Brownian in time, several transformations (those of Lemmas 4.3 and 4.4 and (25)) turn the problem into a random nonlinear PDE (see (26)). These transformations are not in general available when the noise is more complicated, but most of the techniques we develop here should be. Secondly, the irregularity of the Brownian driving force requires some detailed analysis no matter what perspective one takes, namely in the analysis of Lemma 4.2 and the iterative bounds of Lemma 5.5.

2 The Noise

Let us start by constructing the noise process. Fix η∈C^∞(ℝ)∩L^2(ℝ) such that η^(n)∈C^∞(ℝ)∩L^2(ℝ) for all n∈{1,2,3} and such that

$$ \int_{y\in \mathbb {R}}\eta^2(y)\,dy=1.$$
(2)
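For example (this particular kernel is not required anywhere below, but it satisfies all of the assumptions just listed), one may take a Gaussian:

$$\eta(y)=\Bigl(\frac{2}{\pi}\Bigr)^{1/4}e^{-y^2},\qquad \int_{y\in \mathbb {R}}\eta^2(y)\,dy=\sqrt{\frac{2}{\pi}}\int_{y\in \mathbb {R}}e^{-2y^2}\,dy=\sqrt{\frac{2}{\pi}}\cdot\sqrt{\frac{\pi}{2}}=1,$$

and η together with all of its derivatives lies in C^∞(ℝ)∩L^2(ℝ).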

Let W be a Brownian sheet. For t≥0 and x∈ℝ, define

$$\zeta_t(x) \overset {\mathrm {def}}{=}\int_{s=0}^t \int_{y\in \mathbb {R}}\eta(x-y)W(ds,dy).$$

Then ζ is a zero-mean Gaussian field with covariance structure given by

$$\mathbb {E}\bigl[\zeta_t(x)\zeta_s(y)\bigr] = (t\wedge s)\int_{z\in \mathbb {R}}\eta(x-z)\eta(y-z)\,dz$$

for all s and t in ℝ_+ and x and y in ℝ. Thus for each x∈ℝ, t↦ζ_t(x) is a Brownian motion, and for each t>0, x↦ζ_t(x) is in C^2 with derivatives given by

$$\zeta^{(n)}_t(x) = \int_{s=0}^t \int_{y\in \mathbb {R}}\eta^{(n)}(x-y)W(ds,dy)$$

for all t≥0 and x∈ℝ for n∈{0,1,2}.
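For completeness, the covariance formula above is just the standard isometry for integrals against the Brownian sheet:

$$\mathbb {E}\bigl[\zeta_t(x)\zeta_s(y)\bigr]=\int_{r=0}^{t\wedge s}\int_{z\in \mathbb {R}}\eta(x-z)\eta(y-z)\,dz\,dr=(t\wedge s)\int_{z\in \mathbb {R}}\eta(x-z)\eta(y-z)\,dz;$$

taking x=y and using (2) gives \(\mathbb {E}[\zeta_t(x)^2]=t\), which is why t↦ζ_t(x) is a standard Brownian motion.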

3 Weak Formulation

To see what we mean by (1), let us replace \(\dot{\zeta}_{t}(x)\) by a smooth function b:ℝ+×ℝ→ℝ; the Wong–Zakai result (cf. [13, Sect. 5.2D]) implies that this is a reasonable approximation of an SPDE with Stratonovich integration against the noise; we can then convert this into the desired SPDE with Ito integration. Namely, consider the PDE

$$ \begin{aligned}\frac{\partial v}{\partial t}(t,x) &= \frac{\partial^2 v}{\partial x^2} + \alpha_s v(t,x) + v(t,x)b(t,x) \quad x>\beta_\circ(t)\\\lim_{x\searrow \beta_\circ(t)}\frac{\partial v}{\partial x}(t,x)&= -\varrho \dot{\beta}_\circ(t)\\v(0,x)&= u_\circ(x) \quad x\in \mathbb {R}\\\bigl\{(t,x) \in \mathbb {R}_+\times \mathbb {R}\mid v(t,x)>0 \bigr\}&=\bigl\{(t,x) \in \mathbb {R}_+\times \mathbb {R}\mid x>\beta_\circ(t)\bigr\},\end{aligned}$$
(3)

where \(\alpha_{s} \overset {\mathrm {def}}{=}\alpha-\tfrac{1}{2}\) (we will see that this corresponds to the Stratonovich analog of (1)). This will be our starting point.
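Formally, the shift by 1/2 is the usual Ito–Stratonovich correction: for fixed x, (2) implies that the quadratic variation of t↦ζ_t(x) is \(d\langle \zeta_\cdot(x)\rangle_t=\int_{y\in \mathbb {R}}\eta^2(x-y)\,dy\,dt=dt\), and the martingale part of t↦u(t,x) is \(\int_{s=0}^t u(s,x)\,d\zeta_s(x)\), so

$$u(t,x)\circ d\zeta_t(x)=u(t,x)\,d\zeta_t(x)+\tfrac{1}{2}\,d\bigl\langle u(\cdot,x),\zeta_\cdot(x)\bigr\rangle_t=u(t,x)\,d\zeta_t(x)+\tfrac{1}{2}u(t,x)\,dt.$$

Thus a Stratonovich drift coefficient α_s corresponds to the Ito drift coefficient α_s+1/2=α.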

Let us see what a weak formulation looks like (see [10, Chap. 8]). Fix \(\varphi\in C_{c}^{\infty}(\mathbb {R}_{+}\times \mathbb {R})\). Assume that β_∘ is differentiable. Define

$$U_\varphi(t) \overset {\mathrm {def}}{=}\int_{x\in \mathbb {R}}v(t,x)\varphi(t,x)\,dx= \int_{x=\beta_\circ(t)}^\infty v(t,x)\varphi(t,x)\,dx.$$

Differentiating, we get that

and we can use the fact that v(t,β_∘(t))=0 to delete the last term. We can also use the PDE for v for x>β_∘(t) to rewrite \(\tfrac{\partial v}{\partial t}\). Integrating by parts, we have that

Again, we use the fact that v(t,β_∘(t))=0, and we can also use the boundary condition on \(\tfrac{\partial v}{\partial x}\). Recombining things we get the standard formula that

Replacing b by our noise, we should have the following formulation: for any \(\varphi\in C^{\infty}_{c}(\mathbb {R}_{+}\times \mathbb {R})\) and any t>0,

The Ito formulation of this would be

Remark 3.1

The structure of the SPDE (1) is the same under the Ito and Stratonovich formulations; only the constant multiplying the u dt term changes. This is the motivation for including α in (1).

We can now formally define a weak solution of (1). In this definition, we allow for blowup. Define for all t≥0.

Definition 3.2

A weak solution of (1) is a nonnegative predictable path {u(t,⋅)∣0≤t<τ}⊂C(ℝ)∩L 1(ℝ), where τ is a predictable stopping time with respect to , such that for any \(\varphi\in C_{c}^{\infty}(\mathbb {R}_{+}\times \mathbb {R})\) and any finite stopping time τ′<τ,

$$ \begin{aligned}&\int_{x\in \mathbb {R}}u(\tau',x)\varphi(\tau',x)\,dx\\&\quad{}= \int_{x\in \mathbb {R}}u_\circ(x)\varphi(0,x)\,dx \\&\qquad{}+\int_{r=0}^{\tau'}\int_{x\in \mathbb {R}}u(r,x)\biggl\{ \frac{\partial \varphi}{\partial r}(r,x)+\frac{\partial^2 \varphi}{\partial x^2}(r,x)+\alpha \varphi(r,x)\biggr\}\,dx\,dr\\&\qquad{}+\int_{x\in \mathbb {R}} \int_{r=0}^{\tau'} u(r,x)\varphi(r,x)\,d\zeta_r(x)\,dx+\varrho \int_{r=0}^{\tau'}\varphi\bigl(r,\beta(r)\bigr)\dot{\beta}(r)\,dr\end{aligned}$$
(4)

and where

$$ \bigl\{(t,x)\in [0,\tau)\times \mathbb {R}\mid u(t,x)>0\bigr\} = \bigl\{(t,x)\in [0,\tau)\times \mathbb {R}\mid x>\beta(t)\bigr\},$$
(5)

where β is a semimartingale.

Our main existence and uniqueness theorems are the following. The arguments leading up to these results will come together in Sect. 5.

Theorem 3.3

(Existence)

There exists a predictable path {u(t,⋅)∣0≤t<τ}⊂C(ℝ)∩L^1(ℝ) which satisfies (4), with u(t,⋅)∈C^1([β(t),∞)) for all t∈[0,τ) and

$$\tau \le \inf\biggl\{ t\ge 0: \biggl|\frac{\partial u}{\partial x}\bigl(t-,\beta(t)\bigr)\biggr|=\infty\biggr\}.$$

Furthermore, if u(t,⋅)∈C^2([β(t),∞)) for all t∈[0,τ), then it satisfies (5).

Proof

Combine Lemmas 5.9 and 5.10. □

We also have uniqueness.

Theorem 3.4

(Uniqueness)

Suppose that {u_1(t,⋅)∣0≤t<τ_1} and {u_2(t,⋅)∣0≤t<τ_2} are two solutions of (1). Assume that for i∈{1,2}, the map x↦u_i(t,x+β_i(t)) has three generalized square-integrable derivatives on (0,∞). Then u_1(t,⋅)=u_2(t,⋅) for 0≤t<min{τ_1,τ_2}.

Proof

The proof follows from Lemma 5.11. □

4 Regularity and a Transformation

The proof of Theorems 3.3 and 3.4 will hinge upon a transformation of (1) into a nonlinear integral equation on a fixed (as opposed to an implicitly defined) domain; we will address this in Sect. 4.2. First, however, let us make sure that we understand a bit about regularity; this will illuminate the assumptions needed.

4.1 Regularity

While regularity of moving boundary value problems is an incredibly challenging area (see [3]), we can make some headway. Namely, if we assume enough regularity for the boundary, we can get better control of the sense in which the boundary behavior holds.

We start by rewriting (4) using heat kernels. Define

$$\begin{aligned}p_\circ(t,x) &\overset {\mathrm {def}}{=}\frac{1}{\sqrt{4\pi t}}\exp\biggl[-\frac{x^2}{4t}\biggr] \quad t>0,\, x\in \mathbb {R}\\p_\pm(t,x,y) &\overset {\mathrm {def}}{=}\bigl\{ p_\circ(t,x-y)\pm p_\circ(t,x+y)\bigr\} e^{\alpha t}\\&=\bigl\{ p_\circ(t,x-y)\pm p_\circ(t,-x-y)\bigr\} e^{\alpha t} \quad t>0,\, x,y\in \mathbb {R},\end{aligned}$$

where the second representation of p_± stems from the fact that p_∘ is even in its second argument. We then have that

$$ \begin{aligned}\frac{\partial p_\pm}{\partial t}(t,x,y) &= \frac{\partial^2 p_\pm}{\partial y^2}(t,x,y)+\alpha p_\pm(t,x,y) \quad t>0,\, x,y\in \mathbb {R}\\\lim_{t\searrow 0}p_\pm(t,x,\cdot)&=\delta_x\pm\delta_{-x}; \quad x\in \mathbb {R}\setminus \{0\}\end{aligned}$$
(6)

The relevant distinction between p_+ and p_− is their behavior at x=0, namely,

$$\frac {\partial^{2n-1}p_+}{\partial x^{2n-1}} (t,0,y) = \frac {\partial^{2n}p_-}{\partial x^{2n}}(t,0,y) =0$$

for all \(n\in \mathbb {N}\overset {\mathrm {def}}{=}\{1,2,\ldots \}\), all t>0 and all y∈ℝ. This will come up in the arguments of Lemmas 4.2 and 4.3.
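These identities are simply the parity of p_± in the x variable: since p_∘ is even,

$$p_\pm(t,-x,y)=\bigl\{p_\circ(t,-x-y)\pm p_\circ(t,-x+y)\bigr\}e^{\alpha t}=\bigl\{p_\circ(t,x+y)\pm p_\circ(t,x-y)\bigr\}e^{\alpha t}=\pm p_\pm(t,x,y),$$

so p_+ is even and p_− is odd in x, and every odd-order x-derivative of p_+ and every even-order x-derivative of p_− vanishes at x=0.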

Let us next understand integration against ζ. Let {β(t)∣t≥0} be an ℝ-valued predictable and continuous process. Let {f(t)∣t≥0} be a second ℝ-valued predictable and continuous process, which is also bounded. We define

this being a limit in L 2. Due to (2),

$$\int_{s=0}^td\zeta_s\bigl(\beta(s)\bigr) = \int_{s=0}^t \int_{y\in \mathbb {R}}\eta\bigl(\beta(s)-y\bigr)W(ds,dy)$$

is a Brownian motion. Therefore we can define a Brownian motion \(B^{x}_{t}\) for each fixed x as

$$ B_t^x\overset {\mathrm {def}}{=}\int_{s=0}^t\,d\zeta_s\bigl(x+\beta(s)\bigr).$$
(7)

Note also that dζ t (β(t)) is not the total derivative of ζ t (β(t)); i.e.,

$$\zeta_t\bigl(\beta(t)\bigr)-\zeta_0\bigl(\beta(0)\bigr)\not = \int_{s=0}^t\,d\zeta_s\bigl(\beta(s)\bigr).$$

To understand the total derivative, we must also include the spatial variation of ζ t :

$$\begin{aligned}d\bigl[ \zeta_t\bigl(\beta(t)\bigr)\bigr]&=\int_{y\in \mathbb {R}} \eta\bigl(\beta(t)-y\bigr) W(dy,dt)\\&\quad{}+\int_{s=0}^t\int_{y\in \mathbb {R}} \dot{\eta}\bigl(\beta(t)-y\bigr)\dot{\beta}(t)W(dy,ds)\,dt.\end{aligned}$$

This implies that

$$ \zeta_t\bigl(\beta(t)\bigr)-\zeta_0\bigl(\beta(0)\bigr) =\int_{s=0}^t\,d\zeta_s\bigl(\beta(s)\bigr)+\int_{s=0}^t \dot{\zeta}_s\bigl(\beta(s)\bigr)\dot{\beta}(s)\,ds.$$
(8)

Lemma 4.1

Suppose that {u(t,⋅)∣0≤t<τ}⊂C(ℝ)∩C^1([β(t),∞)) is a weak solution of (1) with u(t,∞)=0 for 0≤t<τ. Suppose also that β is continuously differentiable and adapted. Then u(t,x+β(t)) satisfies the integral equation

$$ \begin{aligned}u\bigl(t,x+\beta(t)\bigr) &= \int_{y=0}^\infty p_+(t,x,y)u_\circ(y)\,dy \\&\qquad{}+\int_{s=0}^t \int_{y=0}^\infty p_+(t-s,x,y)\frac{\partial u}{\partial x}\bigl(s,y+\beta(s)\bigr)\dot{\beta}(s)\,dy\,ds\\&\qquad{}+ \int_{y=0}^\infty\int_{s=0}^t p_+(t-s,x,y) u\bigl(s,y+\beta(s)\bigr)\,d\zeta_s\bigl(y+\beta(s)\bigr)\,dy\\&\qquad{}+ \varrho \int_{s=0}^t p_+(t-s,x,0) \dot{\beta}(s)\,ds\end{aligned}$$
(9)

for all t<τ and x>0.

Proof

Fix x>0 and T>0. For t∈[0,τ∧T), define

$$U^T(t) \overset {\mathrm {def}}{=}\int_{y=0}^\infty u\bigl(t,y+\beta(t)\bigr)p_+(T-t,x,y)\,dy.$$

Using Definition 3.2, we have

where

Then

We have here used the fact that u(t,β(t))=0. Thus

Now we consider \(A^{T}_{2}(t)\). Using the stochastic Fubini theorem [18, Theorem 2.6], we obtain

Combine things to get that

Now let T↘t to get the claimed result. □

Note that (9) is not an explicit formula for u, since the right-hand side of (9) depends on u through β. Also, in order that u(t,∞)=0 for 0≤t<τ, it suffices that u(t,⋅) and \(\frac{\partial u}{\partial x}(t,\cdot)\) lie in \(L^{2}((\beta(t),\infty))\), which is guaranteed by the Picard iteration of Sect. 5. Indeed, \(f^{2}(\infty)\le f^{2}(M)+\int_{M}^{\infty}2|ff'|\le f^{2}(M)+\int_{M}^{\infty}f^{2}+\int_{M}^{\infty}(f')^{2}\) for M>0, and \(\liminf_{M\to\infty}f^{2}(M)=0\).

The value of Lemma 4.1 is in that it implies that if β is continuously differentiable, then the Stefan boundary condition of (1) holds pointwise.

Lemma 4.2

Let {u(t,⋅)∣0≤t<τ} be a solution of (1) with u(t,∞)=0 for 0≤t<τ, and let u be continuously differentiable in x for x>β(t). Assume also that

$$ \mathbb {E}\Biggl[\sup_{\substack{0\le t<\tau \\n\in \{0,1\}}}\int_{x=\beta(t)}^\infty\biggl(\frac{\partial^nu}{\partial x^n}(t,x)\biggr)^2\,dx\Biggr]<\infty.$$
(10)

If β is continuously differentiable, then

$$ \lim_{x\searrow \beta(t)}\frac{\partial u}{\partial x}(t,x)=-\varrho \dot{\beta}(t)$$
(11)

for all t∈[0,τ).

Proof

To see this, let us rewrite (9) in a slightly more convenient way. If {u(t,⋅)∣0≤t<τ} is a weak solution of (1) and 0<t<τ, set

Note that

Thus

$$u\bigl(t,\beta(t)+\varepsilon \bigr) = A_1(t,\varepsilon )+A_2(t,\varepsilon )+A_2(t,-\varepsilon )+A_3(t,\varepsilon )+A_3(t,-\varepsilon )+A_4(t,\varepsilon )$$

and hence

Since \(\frac{\partial p_{+}}{\partial x}(t,0,y)=0\), we have

$$ \lim_{\varepsilon \searrow 0} \frac{\partial A_1}{\partial \varepsilon }(t,\varepsilon )=0.$$

Next note that

$$\frac{\partial p_\circ}{\partial x}(t,x) =-\frac{1}{2\sqrt{4\pi}}\frac{x}{t^{3/2}}\exp\biggl[-\frac{x^2}{4t}\biggr].$$

Thus

Dominated convergence then implies that

Turning to A_3, we integrate by parts and use the fact that u(t,β(t))=0 to get that

Since

$$\sup_{\varepsilon \in (0,1)}\int_{t=0}^T \int_{y=0}^\infty p^2_\circ(t,y+\varepsilon )\,dy\,dt <\infty $$

for all T>0, we can use dominated convergence and (10) to get that

Finally, we consider A_4. Since p_+(t,x,0)=2e^{αt}p_∘(t,x), we get that

$$\frac{\partial p_+}{\partial x}(t,x,0) = -\frac{e^{\alpha t}}{\sqrt{4\pi}}\frac{x}{t^{3/2}}\exp\biggl[-\frac{x^2}{4t}\biggr].$$

Therefore

Since β is continuously differentiable, dominated convergence ensures that

$$\lim_{\varepsilon \searrow 0}\frac{\partial A_4}{\partial \varepsilon }(t,\varepsilon )=-\frac{2\varrho \dot{\beta}(t)}{\sqrt{4\pi}}\int_{u=0}^\infty \exp\biggl[-\frac{u^2}{4}\biggr]\,du=-\varrho \dot{\beta}(t).$$
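The normalization in the last display is just the Gaussian integral: substituting u=2v,

$$\int_{u=0}^\infty \exp\biggl[-\frac{u^2}{4}\biggr]\,du=2\int_{v=0}^\infty e^{-v^2}\,dv=\sqrt{\pi},\qquad \frac{2\varrho \dot{\beta}(t)}{\sqrt{4\pi}}\cdot \sqrt{\pi}=\varrho \dot{\beta}(t).$$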

Collecting things together, we get the claim. □

4.2 A Transformation

The characterization of β given in (11) allows us to rewrite the moving boundary value problem in a more convenient way. The calculation which gives us some analytical traction is found in [16] (see also [10, Chap. 8]). Let us again return to our deterministic PDE (3). For all t≥0 and x∈ℝ, define \(\tilde{v}(t,x) = v(t,x+\beta_{\circ}(t))\); then \(v(t,x)= \tilde{v}(t,x-\beta_{\circ}(t))\). Assuming that β_∘ is differentiable, we have that for x>0 and t>0,

$$\begin{aligned}\frac{\partial \tilde{v}}{\partial t}(t,x) &= \frac{\partial v}{\partial t}\bigl(t,x+\beta_\circ(t)\bigr)+ \frac{\partial v}{\partial x}\bigl(t,x+\beta_\circ(t)\bigr)\dot{\beta}_\circ(t), \\\frac{\partial \tilde{v}}{\partial x}(t,x) &= \frac{\partial v}{\partial x}\bigl(t,x+\beta_\circ(t)\bigr), \\\frac{\partial^2 \tilde{v}}{\partial x^2}(t,x) &= \frac{\partial^2 v}{\partial x^2}\bigl(t,x+\beta_\circ(t)\bigr).\end{aligned}$$

We can combine these equations and use the PDE for v to rewrite the evolution of \(\tilde{v}\) as

$$ \begin{aligned}\frac{\partial \tilde{v}}{\partial t}(t,x) &= \frac{\partial^2 v}{\partial x^2}\bigl(t,x+\beta_\circ(t)\bigr)+ \alpha_s v\bigl(t,x+\beta_\circ(t)\bigr)\\&\quad{}+v\bigl(t,x+\beta_\circ(t)\bigr)b\bigl(t,x+\beta_\circ(t)\bigr)+ \frac{\partial v}{\partial x}\bigl(t,x+\beta_\circ(t)\bigr)\dot{\beta}_\circ(t)\\&= \frac{\partial^2 \tilde{v}}{\partial x^2}(t,x)+\alpha_s \tilde{v}(t,x) + \frac{\partial \tilde{v}}{\partial x}(t,x)\dot{\beta}_\circ(t) + \tilde{v}(t,x)b\bigl(t,x+\beta_\circ(t)\bigr).\end{aligned}$$
(12)

Inserting the boundary condition that \(-\varrho \dot{\beta}_{\circ}(t)=\lim_{x\searrow \beta_{\circ}(t)}\frac{\partial v}{\partial x}(t,x)\) back into (12), we have that

Replacing b by the noise ζ̇ and α_s by α, we should be able to write down a nonlinear SPDE for \(\tilde{u}(t,x) \overset {\mathrm {def}}{=}u(t,x+\beta(t))\). We now get the following.

Lemma 4.3

Suppose that {u(t,⋅)∣0≤t<τ}⊂C(ℝ)∩L^1(ℝ) is a solution of (1) with u(t,∞)=0 such that u(t,⋅)∈C^1([β(t),∞)) and \(\tfrac{\partial u}{\partial x}(t,\cdot)\in L^{1}([\beta(t),\infty))\) for each 0≤t<τ and (10) holds. Suppose also that β is continuously differentiable and adapted. Then \(\tilde{u}(t,x) = u(t,x+\beta(t))\) satisfies the integral equation

$$ \begin{aligned}\tilde{u}(t,x) &= \int_{y=0}^\infty p_-(t,x,y)u_\circ(y)\,dy \\&\quad{} - \frac{1}{\varrho }\int_{s=0}^t \int_{y=0}^\infty p_-(t-s,x,y)\frac{\partial \tilde{u}}{\partial x}(s,0)\frac{\partial \tilde{u}}{\partial x}(s,y)\,dy\,ds \\&\quad{} + \int_{y=0}^\infty\int_{s=0}^t p_-(t-s,x,y)\tilde{u}(s,y)\,d\zeta_s\bigl(y+\beta(s)\bigr)\,dy\end{aligned}$$
(13)

for all t∈[0,τ) and x>0 where

$$\beta(t) = -\frac{1}{\varrho }\int_{s=0}^t \frac{\partial \tilde{u}}{\partial x}(s,0)\,ds$$

for all t∈[0,τ).

Proof

The proof is very similar to that of Lemma 4.1. Fix x>0 and T>0. For t∈[0,τ∧T), define

$$U^T(t) \overset {\mathrm {def}}{=}\int_{y=0}^\infty \tilde{u}(t,y)p_-(T-t,x,y)\,dy = A^T_1(t) + A^T_2(t).$$

Using Definition 3.2, we have

where

Using (6) and the fact that p_−(T−t,x,0)=0, we get that

We have here used the fact that u(t,β(t))=0. Combining this with the characterization of \(\dot{\beta}\) from Lemma 4.2, we get that

$$\dot{\beta}(t) = -\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0).$$

Thus

Now we consider \(A^{T}_{2}(t)\). Again, using the stochastic Fubini theorem, we obtain

Combine things to get that

Now let t↗T to get the claimed result. □

We can also obtain an SPDE for \(\tilde{u}\). By (6) and (13), we have that for 0<t<τ and x>0,

$$\begin{aligned}d\tilde{u}(t,x) &= \int_{y=0}^\infty \biggl\{\frac{\partial^2 p_-}{\partial x^2}(t,x,y)+\alpha p_-(t,x,y)\biggr\} u_\circ(y)\,dy\,dt \\&\quad{} - \frac{1}{\varrho }\int_{s=0}^t \int_{y=0}^\infty\biggl\{\frac{\partial^2 p_-}{\partial x^2}(t-s,x,y)+\alpha p_-(t-s,x,y)\biggr\}\\&\quad{}\times\frac{\partial \tilde{u}}{\partial x}(s,0)\frac{\partial \tilde{u}}{\partial x}(s,y)\,dy\,ds\,dt \\&\quad{}+ \int_{y=0}^\infty\int_{s=0}^t \biggl\{\frac{\partial^2 p_-}{\partial x^2}(t-s,x,y)+\alpha p_-(t-s,x,y)\biggr\}\\&\quad{}\times\tilde{u}(s,y)\,d\zeta_s\bigl(y+\beta(s)\bigr)\,dy\,dt \\&\quad{} -\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0)\frac{\partial \tilde{u}}{\partial x}(t,x)\,dt + \tilde{u}(t,x)\,d\zeta_t\bigl(x+\beta(t)\bigr).\end{aligned}$$

Since p_−(t,0,y)=0, we have the following SPDE:

$$ \begin{aligned}d\tilde{u}(t,x) &= \biggl\{\frac{\partial^2 \tilde{u}}{\partial x^2}(t,x)+\alpha\tilde{u}(t,x)-\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0)\frac{\partial \tilde{u}}{\partial x}(t,x)\biggr\}\,dt \\&\quad{} + \tilde{u}(t,x)\,d\zeta_t\bigl(x+\beta(t)\bigr) \quad 0<t<\tau,\, x>0 \\\tilde{u}(t,0)&=0 \quad 0<t<\tau\\\tilde{u}(0,x) &= u_\circ(x) \quad x>0 \\\dot{\beta}(t) &= -\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0) \quad 0<t<\tau.\end{aligned}$$
(14)

We can also find a converse to Lemma 4.3.

Lemma 4.4

Suppose that \(\{\tilde{u}(t,\cdot)\mid 0\le t<\tau\}\subset C^{1}(\mathbb {R}_{+})\cap L^{1}(\mathbb {R}_{+})\) with \(\tilde{u}(t,\infty)=0\) for 0≤t<τ satisfies (13) and is positive. Set

$$ \beta(t) = -\frac{1}{\varrho }\int_{s=0}^t \frac{\partial \tilde{u}}{\partial x}(s,0)\,ds\quad 0\le t<\tau $$
(15)

and define

$$ u(t,x) \overset {\mathrm {def}}{=}\begin{cases}\tilde{u}(t,x-\beta(t))&x\ge \beta(t), 0\le t<\tau \\0 & x<\beta(t), 0\le t<\tau.\end{cases}$$
(16)

Then {u(t,⋅)∣0≤t<τ} is a weak solution of (1).

Proof

Fix \(\varphi\in C^{\infty}_{c}(\mathbb {R}_{+}\times \mathbb {R})\) and define for 0≤t<τ,

To see the evolution of A, we fix δ>0 and define

Then define

$$A^\delta(t) \overset {\mathrm {def}}{=}\int_{x=0}^\infty \varphi\bigl(t,x+\beta(t)\bigr)\tilde{u}_\delta(t,x)\,dx =A^{\delta}_1(t)+A^{\delta}_2(t)+A^{\delta}_3(t),$$

where

where

$$\xi(t,x) = -\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0) \frac{\partial \tilde{u}}{\partial x}(t,x).$$

We also note that we can rewrite the evolution of β as

$$\dot{\beta}(t) = -\frac{1}{\varrho }\frac{\partial u}{\partial x}\bigl(t,\beta(t)\bigr) \quad t\in [0,\tau).$$

Thus

Similar calculations show that

and finally,

Adding these expressions together and using the definition of \(\tilde{u}_{\delta}\) and (13), we get that

By the definition of p_−, we conclude that u_δ(s,β(s))=0. In addition, we also have that

Upon letting δ↘0 and rearranging things, we indeed get a weak solution of (1). □

5 A Picard Iteration

Our main task now is to show that we can indeed solve (13). The main complication is that (13) is fully nonlinear due to the presence of the \(\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0)\) term in the drift and the shift by β in the evaluation of the integral against ζ. If we turn off the noise, we can do this via semigroup theory as in [16]. The noise, however, complicates things, as we need to respect the rules of Ito integration and (unless we want to use more advanced theories of stochastic integrals) integrate against predictable functions.

Our approach will be to set up a functional framework in which we can use Picard-type iterations to show existence and uniqueness. As usual, \(C^{\infty}_{0}(\mathbb {R}_{+})\) is the collection of infinitely smooth functions on [0,∞) which vanish at infinity. Define next

$$C^\infty_{0,\mathrm {odd}}(\mathbb {R}_+) \overset {\mathrm {def}}{=}\bigl\{ \varphi\in C^\infty_0(\mathbb {R}_+)\mid \text{$\varphi^{(n)}(0)=0$ for all even $n\in \mathbb {N}$}\bigr\};$$

in other words, \(C^{\infty}_{0,\mathrm {odd}}(\mathbb {R}_{+})\) consists of those elements of \(C^{\infty}_{0}(\mathbb {R}_{+})\) which can be extended to an odd element of C^∞(ℝ) (namely, consider the map y↦sgn(y)φ(|y|)). For all \(\varphi\in C^{\infty}_{0}(\mathbb {R}_{+})\), define

$$\|\varphi\|_H \overset {\mathrm {def}}{=}\sqrt{\sum_{i=0}^{2} \int_{x\in (0,\infty)} |\varphi^{(i)}(x)|^2\,dx}.$$

Let H be the closure of \(C^{\infty}_{0}(\mathbb {R}_{+})\) with respect to ∥⋅∥ H and let H odd be the closure of \(C^{\infty}_{0,\mathrm {odd}}(\mathbb {R}_{+})\) with respect to ∥⋅∥ H . We also define

$$\|\varphi\|_L \overset {\mathrm {def}}{=}\sqrt{\int_{x\in (0,\infty)} |\varphi(x)|^2\,dx}$$

for all square-integrable functions on ℝ_+. Of course H and H_odd are Hilbert spaces (H is more commonly written as H^2; i.e., it is the collection of functions on ℝ_+ which possess two weak square-integrable derivatives). The important aspect of H is the following fairly standard result.

Lemma 5.1

We have that H⊂C^1. More precisely, for any φ∈H, we have that

$$\sup_{\substack{x\in \mathbb {R}_+\\ i\in {\{0,1\}}}}\bigl|\varphi^{(i)}(x)\bigr|\le 2\|\varphi\|_H.$$

Finally, for i∈{0,1}, \(\varphi^{(i)}(0) \overset {\mathrm {def}}{=}\lim_{x\searrow 0} \varphi^{(i)}(x)\) is well-defined.

The proof is in Sect. 5.1.

We next need some truncation functions to control the effects of various nonlinearities. In particular, we need to prevent \(\frac{\partial \tilde{u}}{\partial x}(t,0)\) from becoming too big; Picard iterations in general allow only linear growth of various coefficients. Fix L>0, which we will use as a truncation parameter. Let Ψ_L∈C^∞(ℝ;[0,1]) be nonincreasing in |x| and such that Ψ_L(x)=1 if |x|≤L and Ψ_L(x)=0 if |x|≥L+1 (and thus |Ψ_L|≤1). In other words, Ψ_L is a smooth cutoff which switches from 1 to 0 between the levels L and L+1.
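For concreteness (the arguments below use only the stated properties, not any particular formula), one admissible choice is

$$\varPsi _L(x)=\frac{f(L+1-|x|)}{f(L+1-|x|)+f(|x|-L)},\qquad f(r)=\begin{cases}e^{-1/r}&r>0,\\0 & r\le 0,\end{cases}$$

which is C^∞ (near x=0 it is identically 1, so the |x| causes no loss of smoothness), equals 1 for |x|≤L, vanishes for |x|≥L+1, and is nonincreasing in |x|.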

Define

$$\tilde{u}^L_1(t,x) =\int_{y=0}^\infty p_-(t,x,y)u_\circ(y)\,dy$$

for all t>0 and x∈ℝ and recursively define

$$ \begin{aligned}\beta^L_n(t) &\overset {\mathrm {def}}{=}-\frac{1}{\varrho }\int_{s=0}^t \frac{\partial \tilde{u}^L_n}{\partial x}(s,0)\,ds\quad t>0 \\\tilde{u}^L_{n+1}(t,x) &=\int_{y=0}^\infty p_-(t,x,y)u_\circ(y)\,dy \\&\quad{} - \frac{1}{\varrho }\int_{s=0}^t \int_{y=0}^\infty p_-(t-s,x,y)\frac{\partial \tilde{u}^L_n}{\partial x}(s,0)\frac{\partial \tilde{u}^L_n}{\partial x}(s,y)\varPsi _L\bigl(\bigl\|\tilde{u}^L_n(s,\cdot)\bigr\|_H\bigr)\,dy\,ds\\&\quad{} + \int_{s=0}^t \int_{y=0}^\infty p_-(t-s,x,y)\tilde{u}^L_n(s,y)\varPsi _L\bigl(\bigl\|\tilde{u}^L_n(s,\cdot)\bigr\|_H\bigr)\,d\zeta_s\bigl(y+\beta^L_n(s)\bigr)\,dy \qquad t>0,\, x>0.\end{aligned}$$
(17)

For each n∈ℕ, \(\{\tilde{u}^{L}_{n}(t,\cdot);\, t\ge 0\}\) is a well-defined, adapted, and continuous path in H odd.

To study (17), we will use the Dirichlet heat semigroup. For \(\varphi\in C^{\infty}_{0}(\mathbb {R}_{+})\), t>0, and x>0, define

$$(T_t \varphi)(x) \overset {\mathrm {def}}{=}\int_{y=0}^\infty p_-(t,x,y)\varphi(y)\,dy.$$

Lemma 5.2

For each t>0, T_t has a unique extension from \(C^{\infty}_{0}(\mathbb {R}_{+})\) to H such that \(T_tH\subseteq H_{\mathrm{odd}}\) and such that \(\|T_tf\|_H\le e^{\alpha t}\|f\|_H\) for all f∈H. Secondly, there is a K_A>0 such that

$$\|T_t \dot{f}\|_H \le \frac{K_A}{t^{3/4}}\|f\|_H$$

for all \(f\in H_{\mathrm {odd}}\cap C^{3}(\mathbb {R}_{+})\).

Again, we delay the proof until Sect. 5.1.

Another convenience will be to rewrite the ds part of (17). Set

$$\tilde{\varPsi }^a_L(\psi) \overset {\mathrm {def}}{=}-\frac{1}{\varrho } \dot{\psi}(0)\varPsi _L(\|\psi\|_H)\quad \mbox{and}\quad \tilde{\varPsi }^b_L(\psi) \overset {\mathrm {def}}{=}\varPsi _L(\|\psi\|_H)$$

for all ψ∈H. Then

for all n∈ℕ. For ψ and η in H, let us also define

Lemma 5.3

For each ψ and η in H, \((D\tilde{\varPsi }^{a}_{L})(\psi,\eta)\) is the Gâteaux derivative of \(\tilde{\varPsi }^{a}_{L}\) at ψ in the direction of η and similarly \((D\tilde{\varPsi }^{b}_{L})(\psi,\eta)\) is the Gâteaux derivative of \(\tilde{\varPsi }^{b}_{L}\) at ψ in the direction of η. Furthermore, there is a K B >0 such that

for all ψ and η in H and L>0.

Proof

The claim is straightforward. □

For each n∈ℕ, we now define \(\tilde{w}^{L}_{n}(t,x) \overset {\mathrm {def}}{=}\tilde{u}^{L}_{n+1}(t,x)-\tilde{u}^{L}_{n}(t,x)\) for all x≥0 and t≥0. Clearly \(\sup_{0\le t\le T}\mathbb {E}[\|\tilde{w}^{L}_{1}\|_{H}^{2}]<\infty\) for all T>0. We then write that

$$\tilde{w}^L_{n+1}(t,x) = \sum_{j=1}^5 A^{(n)}_j(t,x),$$

where

where \(\dot{\zeta}_{t}(x) = \frac{\partial \zeta_{t}}{\partial x}(x)\). Note that the \(\tilde{u}^{L}_{n}\)’s and \(\tilde{w}^{L}_{n}\)’s are all in H odd.

To bound \(A^{(n)}_{1}\) and \(A^{(n)}_{2}\), we use the fact that t −3/4 is locally integrable. More precisely,

$$\int_{s=0}^t \frac{1}{(t-s)^{3/4}}\,ds = 4 t^{1/4}$$

for all t>0. Thus,

Similarly, we have that

To bound \(A^{(n)}_{3}\), \(A^{(n)}_{4}\), and \(A^{(n)}_{5}\), we first rewrite them. For z∈ℝ, define \(\underline {\eta }_{z}(y) \overset {\mathrm {def}}{=}\eta(y-z)\) for all y∈ℝ. Then

We will use the following bound on the interaction between η and the H-norm.

Lemma 5.4

There is a K>0 such that

$$\int_{y\in \mathbb {R}}\bigl\|f\underline {\eta }^{(k)}_y\bigr\|_H^2\,dy\le K\|f\|_H^2$$

for all fH and k∈{0,1}.

Proof

The structure of η ensures that there is an \(\hat{\eta}\in L^{2}(\mathbb {R})\) such that \(|\eta^{(n)}(x)|\le \hat{\eta}(x)\) for all n∈{0,1,2,3} and x∈ℝ. Thus, for all x∈ℝ, k∈{0,1}, and n∈{0,1,2},

Thus,

The claim follows. □

Fix T>0 and set \(K_{1}\overset {\mathrm {def}}{=}\exp(2|\alpha| T)\). Let us first bound \(A^{(n)}_{3}\). For k∈{0,1,2},

thus for 0≤tT,

The bound on \(A^{(n)}_{4}\) is similar:

Consequently for 0≤tT,

To bound \(A^{(n)}_{5}\), we first bound \(\beta^{L}_{n+1}-\beta^{L}_{n}\). We have that

For k∈{0,1,2}, we then have that

Hence for 0≤tT,

Lemma 5.5

For each T>0, we have that \(\sum_{n=1}^{\infty}\sup_{0\le t\le T}\mathbb {E}[\|\tilde{u}^{L}_{n+1}-\tilde{u}^{L}_{n}\|_{H}]<\infty\). Thus, ℙ-a.s., \(\tilde{u}^{L}(t,\cdot) \overset {\mathrm {def}}{=}\lim_{n\to \infty}\tilde{u}^{L}_{n}(t,\cdot)\) exists as a limit in C([0,T];H) and \(\tilde{u}^{L}\) satisfies the integral equation

$$ \begin{aligned}\tilde{u}^L(t,x) &=\int_{y=0}^\infty p_-(t,x,y)\tilde{u}_\circ(y)\,dy \\&\quad{} -\frac{1}{\varrho } \int_{s=0}^t \int_{y=0}^\infty p_-(t-s,x,y)\frac{\partial \tilde{u}^L}{\partial x}(s,0)\frac{\partial \tilde{u}^L}{\partial x}(s,y)\varPsi _L\bigl(\bigl\|\tilde{u}^L(s,\cdot)\bigr\|_H\bigr)\,dy\,ds \\&\quad{} + {\int_{y=0}^\infty}\int_{s=0}^t p_-(t-s,x,y)\tilde{u}^L(s,y)\varPsi _L\bigl(\bigl\|\tilde{u}^L(s,\cdot)\bigr\|_H\bigr)\,d\zeta_s\bigl(y+\beta(s)\bigr)\,dy\\&\quad\ \quad{} t>0,\, x>0,\end{aligned}$$
(18)

where

$$\beta(t)\overset {\mathrm {def}}{=}-\frac{1}{\varrho }\int_{s=0}^t \frac{\partial \tilde{u}^L}{\partial x}(s,0)\,ds.$$

Proof

See also [18, Lemma 3.3]. Fixing T>0 we collect the above calculations to see that there is a K T,L >0 such that

$$\mathbb {E}\bigl[\bigl\|\tilde{w}^L_{n+1}(t,\cdot)\bigr\|^2_H\bigr]\le K_{T,L}\int_{s=0}^t \frac{\mathbb {E}[\|\tilde{w}^L_n({s},\cdot)\|^2_H]}{(t-s)^{3/4}}\,ds$$

for all t∈[0,T]. Iterating this, we get that

$$\mathbb {E}\bigl[\bigl\|\tilde{w}^L_n(t,\cdot)\bigr\|^2_H\bigr]\le K_{T,L}^{n-1}t^{(n-1)/4}\Biggl\{ \prod_{j=0}^{n-2}B(1+j/4,1/4)\Biggr\} \sup_{0\le t\le T}\mathbb {E}\bigl[\bigl\|\tilde{w}^L_1\bigr\|^2_H\bigr],$$

where B is the standard Beta function and thus that

$$\sqrt{\mathbb {E}\bigl[\bigl\|\tilde{w}^L_n(t,\cdot)\bigr\|^2_H\bigr]} \le K_{T,L}^{(n-1)/2}t^{(n-1)/8}\Biggl\{ \prod_{j=0}^{n-2} B(1+j/4,1/4)\Biggr\}^{1/2}\sup_{0\le t\le T}\sqrt{\mathbb {E}\bigl[\bigl\|\tilde{w}^L_1\bigr\|^2_H\bigr]}.$$

To show that the terms on the right are summable, we use the ratio test. It suffices to show that

$$ \lim_{n\to \infty} K_{T,L}^{1/2}t^{1/8}\biggl(B\biggl(1+\frac{n-2}{4},1/4\biggr)\biggr)^{1/2}=0.$$
(19)

We calculate

This implies (19). The rest of the proof follows by Jensen’s inequality and standard calculations. □
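The limit (19) can be seen, for instance, from the large-parameter behavior of the Beta function: for fixed b>0,

$$B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\sim \Gamma(b)\,a^{-b}\quad \text{as } a\to \infty,\qquad\text{so}\qquad B\biggl(1+\frac{n-2}{4},\frac{1}{4}\biggr)\le C\,n^{-1/4}\to 0$$

as n→∞ for some C>0, while the prefactor \(K_{T,L}^{1/2}t^{1/8}\) does not depend on n.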

We can finally show uniqueness.

Lemma 5.6

The solution of (18) is unique.

Proof

Let u 1 and u 2 be two solutions. Define \(\tilde{w}\overset {\mathrm {def}}{=}u_{1}-u_{2}\). By calculations as above we get that

$$\mathbb {E}\bigl[\|\tilde{w}(t,\cdot)\|^2_H\bigr]\le K_{T,L}\int_{s=0}^t (t-s)^{-3/4}\mathbb {E}\bigl[\|\tilde{w}(s,\cdot)\|^2_H\bigr]\,ds.$$

We can iterate this inequality several times to get (cf. [18, Theorem 3.2])

We can now use Gronwall’s inequality. □
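To spell out the standard iteration step being used: by the Beta integral,

$$\int_{r=s}^{t}(t-r)^{a-1}(r-s)^{b-1}\,dr=B(a,b)\,(t-s)^{a+b-1},\qquad a,b>0,$$

each substitution of the inequality into itself raises the kernel exponent −3/4 by 1/4, so after finitely many substitutions one arrives at a bound of the form \(\mathbb {E}[\|\tilde{w}(t,\cdot)\|^2_H]\le C_{T,L}\int_{s=0}^t \mathbb {E}[\|\tilde{w}(s,\cdot)\|^2_H]\,ds\), to which Gronwall's inequality applies and gives \(\tilde{w}\equiv 0\).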

We can also show non-negativity. Define the random time

$$\tau_L \overset {\mathrm {def}}{=}\inf\bigl\{ t\ge 0: \bigl\|\tilde{u}^L(t,\cdot)\bigr\|_H\ge L\bigr\}$$

for L>0.

Lemma 5.7

The solution \(\tilde{u}^{L}(t,x)\) of (18) is nonnegative for 0≤tτ L and x≥0.

Proof

Let \(\tilde{u}^{L}_{\alpha}(t,x)\overset {\mathrm {def}}{=}\tilde{u}^{L}(t,x)e^{-\alpha t}\). Since we have that \(\varPsi _{L}(\|\tilde{u}^{L}(t,\cdot)\|_{H})=1\) for 0≤tτ L , from (18) we obtain that for 0≤tτ L ,

where \(B_{t}^{x}\) is defined as in (7). We will then follow the approach used in [5] to show non-negativity of \(\tilde{u}^{L}_{\alpha}\), which implies non-negativity of \(\tilde{u}^{L}\). To start, fix a nonnegative and nonincreasing η∈C^∞(ℝ) (not to be confused with the kernel η of Sect. 2) such that η(u)=2 if u≤−1 and η(u)=0 if u≥0. Define

$$\varphi(u) \overset {\mathrm {def}}{=}\int_{r=0}^u \int_{s=0}^r \eta(s)\,ds\,dr\quad u\in \mathbb {R}.$$

Finally, define \(\varphi_{\varepsilon }(u) \overset {\mathrm {def}}{=}\varepsilon ^{2} \varphi(\frac{u}{\varepsilon })\) for all u∈ℝ. Fixing x∈ℝ+ and applying Ito’s formula to \(\{\varphi_{\varepsilon }(\tilde{u}^{L}_{\alpha}(t,x)): 0\le t<\tau_{L}\}\), we have that

$$ \begin{aligned}&\varphi_\varepsilon (\tilde{u}^L_\alpha(t\wedge \tau_L,x))-\varphi_\varepsilon (u_\circ(x))\\&\quad{}=\int_{s=0}^{t\wedge \tau_L} \dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\frac{\partial^2 \tilde{u}^L_\alpha}{\partial x^2}(s,x)\,ds \\&\qquad{} -\frac{1}{\varrho }\int_{s=0}^{t\wedge \tau_L} \frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,0)e^{\alpha s}\biggl\{\dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,x)\biggr\}\,ds\\&\qquad{} +\frac{1}{2}\int_{s=0}^{t\wedge \tau_L} \ddot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\bigl(\tilde{u}^L_\alpha(s,x)\bigr)^2\,ds+\int_{s=0}^{t\wedge \tau_L} \dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\tilde{u}^L_\alpha(s,x)\,dB_s^x.\end{aligned}$$
(20)

Here φ_ε(u_∘(x))=0 since u_∘≥0. Then (20) implies that

$$ \begin{aligned}&\varphi_\varepsilon \bigl(\tilde{u}^L_\alpha(t\wedge \tau_L,x)\bigr)e^{-t\wedge \tau_L}-\varphi_\varepsilon \bigl(u_\circ(x)\bigr)\\&\quad{}=\int_{s=0}^{t\wedge \tau_L} \dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\frac{\partial^2 \tilde{u}^L_\alpha}{\partial x^2}(s,x) e^{-s}\,ds \\&\qquad{} -\frac{1}{\varrho }\int_{s=0}^{t\wedge \tau_L} \frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,0)\biggl\{\dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,x)\biggr\} e^{(\alpha-1)s}\,ds\\&\qquad{} +\frac{1}{2}\int_{s=0}^{t\wedge \tau_L} \ddot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr) \bigl(\tilde{u}^L_\alpha(s,x)\bigr)^2 e^{-s}\,ds\\&\qquad{} +\int_{s=0}^{t\wedge \tau_L} \dot{\varphi}_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\tilde{u}^L_\alpha(s,x)e^{-s}\,dB_s^x \\&\qquad{} -\int_{s=0}^{t\wedge \tau_L} \varphi_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)e^{-s}\,ds.\end{aligned}$$
(21)

Next fix a nonincreasing ϖ∈C^∞(ℝ_+) such that ϖ(x)=1 for x≤1 and ϖ(x)=0 for x≥2. For each N∈ℕ, define \(\varpi_{N}(x) \overset {\mathrm {def}}{=}\varpi(x/N)\). Let us now multiply (21) by ϖ_N, integrate in space, and take expectations. We get that

$$ \begin{aligned}&\mathbb {E}\biggl[\int_{x=0}^\infty \varphi_\varepsilon \bigl(\tilde{u}^L_\alpha(t\wedge \tau_L,x)\bigr)\varpi_N(x)e^{-t\wedge \tau_L}\,dx \biggr]\\&\quad{}= \mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L} A^{N,\varepsilon }_1(s) e^{-s}\,ds\biggr] \\&\qquad{} -\frac{1}{\varrho }\mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L} \frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,0)e^{\alpha s}A^{N,\varepsilon }_2(s)e^{-s}\,ds\biggr] +\frac{1}{2}\mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L} A^{N,\varepsilon }_3(s) e^{-s}\,ds\biggr],\end{aligned}$$
(22)

where

We need some bounds on φ ε . Define ∥⋅∥ C as the sup norm over ℝ. First note that \(\ddot{\varphi}(u) - 2\chi_{\mathbb {R}_{-}} = \eta(u)-2\chi_{\mathbb {R}_{-}}\). This implies that for all u∈ℝ,

$$ \begin{gathered}|\ddot{\varphi}(u)-2\chi_{\mathbb {R}_-}|\le \|\eta-2\|_C\chi_{[-1,0]}(u), \quad |\dot{\varphi}(u)-2u\chi_{\mathbb {R}_-}|\le \|\eta-2\|_C, \\\bigl|\varphi(u)-u^2\chi_{\mathbb {R}_-}\bigr|\le \|\eta-2\|_C |u|;\end{gathered}$$
(23)

the first bound is direct, and the second two follow by integration. Note also that since η is bounded,

$$ |\ddot{\varphi}(u)|\le \|\eta\|_C, \quad |\dot{\varphi}(u)|\le \|\eta\|_C |u|, \quad \mbox{and}\quad |\varphi(u)|\le \frac{1}{2}\|\eta\|_C u^2.$$
(24)

Let us understand the behavior of the various terms of (22) as N→∞ and then ε→0. From the last bound of (23), we have that \(\lim_{\varepsilon \to 0}\varphi_{\varepsilon }(x)=x^{2}\chi_{\mathbb {R}_{-}}(x)\). Due to the last bound of (24), we can use dominated convergence and thus conclude that

We next consider \(A^{N,\varepsilon }_{1}(s)\). Integrating by parts and using the boundary conditions at x=0, we have that

The first term is nonpositive since η and ϖ are nonnegative. We can also see that

Thus

$$\varlimsup_{\varepsilon \to 0}\varlimsup_{N\to \infty} \mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L} A^{N,\varepsilon }_1(s) e^{-s}\,ds\biggr]\le 0.$$

Thirdly, another integration by parts gives us that

$$A^{N,\varepsilon }_2(s)=-\frac{1}{N}\int_{x=0}^\infty \varphi_\varepsilon \bigl(\tilde{u}^L_\alpha(s,x)\bigr)\dot{\varpi}\biggl(\frac{x}{N}\biggr)\,dx,$$

and thus

$$\bigl|A^{N,\varepsilon }_2(s)\bigr|\le \frac{1}{2N}\|\dot{\varpi}\|_C \|\eta\|_C \int_{x=0}^\infty \bigl|\tilde{u}^L_\alpha(s,x)\bigr|^2\,dx.$$

Thus,

$$\lim_{\varepsilon \to 0}\lim_{N\to \infty}\biggl|\mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L}\frac{\partial \tilde{u}^L_\alpha}{\partial x}(s,0)e^{\alpha s}A^{N,\varepsilon }_2(s) e^{-s}\,ds\biggr]\biggr|=0.$$

Let us finally bound \(A^{N,\varepsilon }_{3}\). Note that for every u∈ℝ,

$$\lim_{\varepsilon \to 0} \ddot{\varphi}_\varepsilon (u)u^2 - 2\varphi_\varepsilon (u) = 2u^2\chi_{\mathbb {R}_-}(u) - 2u^2 \chi_{\mathbb {R}_-}(u)=0.$$

In light of the first and last bounds of (24), we can use dominated convergence to see that

$$\lim_{\varepsilon \to 0}\lim_{N\to \infty} \mathbb {E}\biggl[\int_{s=0}^{t\wedge \tau_L} A^{N,\varepsilon }_3(s) e^{-s}\,ds\biggr] = 0.$$

Combining things together, we finally get that

$$\mathbb {E}\biggl[\int_{x=0}^\infty \bigl[\tilde{u}^L_\alpha(t\wedge \tau_L,x)\bigr]^2\chi_{\mathbb {R}_-}\bigl(\tilde{u}^L_\alpha(t\wedge \tau_L,x)\bigr)e^{-t\wedge \tau_L}\,dx\biggr]\le 0.$$

This implies the claimed result. □

Let us now see what happens as L↗∞. Define the random time

$$\tau \overset {\mathrm {def}}{=}\varlimsup_{L\to \infty}(\tau_L\wedge L). $$

Let us also define

$$\tilde{u}(t,x) \overset {\mathrm {def}}{=}\varlimsup_{L\to \infty}\tilde{u}^L(t\wedge \tau_L,x) \quad t\ge 0,\, x\ge 0.$$

Lemma 5.8

(Positivity)

\(\tilde{u}(t,x)>0\) for all t≥0 and x>0.

Proof

We first define a transformation:

$$ \tilde{u}^*(t,x) \overset {\mathrm {def}}{=}\tilde{u}(t,x)\exp\bigl[-\zeta_t\bigl(x+\beta(t)\bigr)\bigr] \quad t\ge 0,\, x\ge 0.$$
(25)

Since ζ t (x+β(t)) is an Ito process (recall (8)), by applying Ito’s formula and plugging into (14), we get

Since ζ t is smooth in space, we also have the following equalities:

Putting things together, we have that \(\tilde{u}^{*}(t,x)\) satisfies the random PDE

$$ \begin{aligned}\frac{\partial \tilde{u}^*}{\partial t}(t,x) &= \frac{\partial^2\tilde{u}^*}{\partial x^2}(t,x) +\bigl[2 \dot{\zeta}_t\bigl(x+\beta(t)\bigr)+\dot{\beta}(t) \bigr]\frac{\partial \tilde{u}^*}{\partial x}(t,x) \\&\quad{} + \bigl[ \dot{\zeta}_t\bigl(x+\beta(t)\bigr)^2+ \ddot{\zeta}_t\bigl(x+\beta(t)\bigr)+ \alpha_s \bigr] \tilde{u}^*(t,x),\quad t>0,x>0 \\\tilde{u}^*(t,0)&= 0, \quad t>0, \\\tilde{u}^*(0,x) &= u_\circ(x), \quad x>0.\end{aligned}$$
(26)

Now suppose that there exist t_0>0 and x_0>0 such that \(\tilde{u}(t_{0},x_{0})=\tilde{u}^{*}(t_{0},x_{0})=0\). In light of Lemma 5.7, since \(\tilde{u}^{*}(t,x)\ge 0\) for t≥0 and x≥0, we have that (t_0,x_0) is a minimum point of \(\tilde{u}^{*}(t,x)\). By applying the strong parabolic maximum principle (cf. Theorem 2, p. 309, [17]) to \(-\tilde{u}^{*}(t,x)\) on \(U_{t_{0},x_{0}}\overset {\mathrm {def}}{=}(0,t_{0}+1)\times[0,x_{0}+1]\) with M=0, we obtain that \(\tilde{u}^{*}(t,x)\equiv 0\) on \(U_{t_{0},x_{0}}\), which contradicts the fact that u_∘(x)>0 for x>0 and the continuity of \(\tilde{u}^{*}(t,x)\) at t=0. Therefore \(\tilde{u}(t,x)>0\) for t≥0 and x>0. □

Lemma 5.9

We have that

$$\varlimsup_{t\nearrow \tau}\|\tilde{u}(t,\cdot)\|_H=\infty.$$

Define u as in (15)–(16). Then {u(t,⋅)∣0≤t<τ} is a weak solution of (1).

Proof

Fixing L′>L, we have from the uniqueness claim of Lemma 5.6 that \(\tilde{u}^{L'}(t,\cdot)=\tilde{u}^{L}(t,\cdot)\) for 0≤t≤τ_L. Thus τ_{L′}≥τ_L for all L′>L, and so τ=lim_{L→∞}τ_L=lim_{L→∞}(τ_L∧L) and τ is predictable. We also have that \(\tilde{u}(t,\cdot) =\lim_{L\to \infty}\tilde{u}^{L}(t,\cdot)\) for 0≤t<τ. From this, Lemma 5.8, and Lemma 4.4, we conclude that {u(t,⋅)∣0≤t<τ} as defined by (15)–(16) is indeed a weak solution of (1). The characterization of \(\|\tilde{u}(t,\cdot)\|_{H}\) at τ− is obvious. □

In fact, we have a more explicit characterization of τ.

Lemma 5.10

We have that

$$\varlimsup_{t\nearrow \tau}\biggl|\frac{\partial \tilde{u}}{\partial x}(t{-},0)\biggr|=\infty.$$

Proof

For each L>0, define

$$\tau'_L \overset {\mathrm {def}}{=}\inf\biggl\{ t\in [0,\tau)\mid \biggl|\frac{\partial \tilde{u}}{\partial x}(t,0)\biggr|\ge L\biggr\} \quad (\inf \emptyset = \tau)$$

Thus in fact \(\tau >\tau'_{L}\) and hence

$$\biggl|\frac{\partial \tilde{u}}{\partial x}(\tau'_L,0)\biggr|=L.$$

Consequently,

$$\lim_{L\to \infty}\biggl|\frac{\partial \tilde{u}}{\partial x}(\tau'_L,0)\biggr|=\infty.$$

Since \(\tau'_{L}< \tau\), we of course also have that \(\lim_{L\to \infty}\tau'_{L}\le \tau\). On the other hand, \(\|\tilde{u}(t,\cdot)\|_{H}\) may become large for many reasons other than \(|\tfrac{\partial \tilde{u}}{\partial x}(\tau'_{L},0)|\) becoming large, so necessarily \(\tau\le \lim_{L\to \infty}\tau'_{L}\). Putting things together, we get that \(\lim_{L\to \infty}\tau'_{L}=\tau\). The claimed result now follows. □

To finish things off, we prove uniqueness.

Lemma 5.11

(Uniqueness)

If \(\{\tilde{u}(t,\cdot)\mid 0\le t<\tau\}\subset H\) and \(\{\tilde{u}'(t,\cdot)\mid 0\le t<\tau'\}\subset H\) are two solutions of (13), then \(\tilde{u}(t,\cdot)=\tilde{u}'(t,\cdot)\) for 0≤t<min{τ,τ′}.

Proof

For each L>0, define

$$\sigma_L \overset {\mathrm {def}}{=}\inf\biggl\{ t\in [0,\tau\wedge \tau'): \biggl|\frac{\partial \tilde{u}}{\partial x}(t,0)\biggr|\ge L \mbox{ or }\biggl|\frac{\partial \tilde{u}'}{\partial x}(t,0)\biggr|\ge L\biggr\} \quad (\inf \emptyset = \tau\wedge \tau')$$

Then τ∧τ′=lim_{L→∞}σ_L. We can use standard uniqueness theory to conclude that \(\tilde{u}\) and \(\tilde{u}'\) coincide on [0,σ_L], and we then let L↗∞. □

5.1 Proofs

We here give the delayed proofs. We start with the structural claims about H.

Proof of Lemma 5.1

The fact that H⊂C^1 is well known; see [9]. Fix \(\varphi\in C^{\infty}_{0}(\mathbb {R}_{+})\), x∈(0,∞), and i∈{0,1}. We then have that

Thus

$$\biggl|\frac{\partial^i \varphi}{\partial x^i}(x)\biggr|\le \sqrt{\int_{s=x}^{x+1} \biggl|\frac{\partial^i \varphi}{\partial x^i}(s)\biggr|^2\,ds}+\sqrt{\int_{r=x}^{x+1} \biggl|\frac{\partial^{i+1} \varphi}{\partial x^{i+1}}(r)\biggr|^2\,dr} \\\le 2\|\varphi\|_H.$$

Of course we also have that

$$\bigl|\varphi^{(i)}(x)-\varphi^{(i)}(y)\bigr|\le {\biggl|\int_{r=x}^{y}\varphi^{(i+1)}(r)\,dr \biggr|}\leq \|\varphi\|_H\sqrt{|x-y|},$$

so the stated limits at x=0 exist. □

We next study {T t } t>0.

Proof of Lemma 5.2

The proof relies upon a combination of fairly standard calculations.

To begin, fix \(\varphi\in C_{0}^{\infty}(\mathbb {R}_{+})\) and define

Thus, u(t,x)=(T_tφ)(x) for x>0, and since p_∘ is even in its second argument,

so in fact u(t,⋅) is odd. Thus we indeed have that \(\frac{\partial^{n} u}{\partial x^{n}}(t,0)=0\) for all even n∈ℕ; thus, T t φH odd.

A standard calculation shows that T t is a contraction on H. Indeed, for each nonnegative integer n,

and thus

$$ \begin{aligned}\int_{x=0}^\infty \biggl|\frac{\partial^n u}{\partial x^n}(t,x)\biggr|^2\,dx&= \frac{1}{2} \int_{x\in \mathbb {R}} \biggl|\frac{\partial^n u}{\partial x^n}(t,x)\biggr|^2\,dx\le \frac{1}{2} \int_{x\in \mathbb {R}} \biggl|\frac{\partial^n u}{\partial x^n}(0,x)\biggr|^2\,dx \\&= \int_{x=0}^\infty \bigl|\varphi^{(n)}(x)\bigr|^2\,dx.\end{aligned}$$
(27)

Summing these inequalities up for n∈{0,1,2,3}, we see that \(\|T_{t} \varphi\|^{2}_{H}\le \|\varphi\|^{2}_{H}\) for all \(\varphi\in C^{\infty}_{0}(\mathbb {R}_{+})\). This implies that T t is a contraction on \(C^{\infty}_{0}(\mathbb {R}_{+})\) and has the claimed extension.

To proceed, fix \(\varphi\in C^{\infty}_{0,\mathrm {odd}}(\mathbb {R}_{+})\). Note that the map y↦sgn(y)φ(|y|) is thus continuous. Define

We can now fairly easily conclude from (27) with n=0 that

$$\int_{x=0}^\infty v^2(t,x)\,dx\le \int_{x=0}^\infty \bigl|\varphi^{(1)}(x)\bigr|^2\,dx.$$

Differentiating and integrating by parts as needed, we get that

$$\begin{aligned}\frac{\partial v}{\partial x}(t,x)&= \int_{y\in \mathbb {R}}\frac{\partial p_\circ}{\partial x}(t,x-y)\operatorname {sgn}(y)\varphi^{(1)}(|y|)\,dy\\&= -2p_\circ(t,x)\varphi^{(1)}(0)- \int_{y\in \mathbb {R}}p_\circ(t,x-y)\varphi^{(2)}(|y|)\,dy, \\\frac{\partial^2 v}{\partial x^2}(t,x) &= -2\frac{\partial p_\circ}{\partial x}(t,x)\varphi^{(1)}(0)-\int_{y\in \mathbb {R}}\frac{\partial p_\circ}{\partial x}(t,x-y)\varphi^{(2)}(|y|)\,dy\\&=-2\frac{\partial p_\circ}{\partial x}(t,x)\varphi^{(1)}(0)- \int_{y\in \mathbb {R}}p_\circ(t,x-y)\operatorname {sgn}(y)\varphi^{(3)}(|y|)\,dy.\end{aligned}$$

We now note that there is a K>0 such that

$$\biggl|\frac{\partial p_\circ}{\partial x}(t,x)\biggr|\le \frac{K}{\sqrt{t}}p_\circ(2t,x)$$

for all t>0 and x∈ℝ. Thus,

Note now that

$$\sqrt{\int_{x=0}^\infty p_\circ^2(t,x)\,dx}=\sqrt{\frac{1}{\sqrt{8\pi t}}\int_{x=0}^\infty \frac{1}{\sqrt{2\pi t}}\exp\biggl[-\frac{x^2}{2t}\biggr]\,dx}\le \frac{1}{(4\pi t)^{1/4}}.$$

Combining this and (27) with n=0, we get that

Combine things together to get the last claim. □

6 Numerical Simulation

In this section, we use numerical simulations to see where the boundary is and how it moves. In general, it is difficult to simulate the SPDE (1) directly, since we would need to solve a stochastic heat equation while simultaneously tracking the position of the moving boundary. Here we can avoid this difficulty since we have the explicit formula for the solution u in Lemma 4.4. That is,

$$ u(t,x)\overset {\mathrm {def}}{=}\begin{cases}\tilde{u}(t,x-\beta(t)) &\text{$x\ge \beta(t)$, $0\le t<\tau$} \\0 &\text{$x<\beta(t)$, $0\le t<\tau$},\end{cases}$$

where \((\tilde{u},\beta)\) is a solution of the SPDE

$$ \begin{aligned}d\tilde{u}(t,x) &= \biggl\{\frac{\partial^2 \tilde{u}}{\partial x^2}(t,x)+\alpha\tilde{u}(t,x)-\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0)\frac{\partial \tilde{u}}{\partial x}(t,x)\biggr\}\,dt \\&\quad{} + \tilde{u}(t,x)\,d\zeta_t(x+\beta(t)) \quad t>0,\, x>0 \\\tilde{u}(t,0)&=0 \quad t>0,\\\tilde{u}(0,x) &= u_\circ(x) \quad x>0, \\\dot{\beta}(t) &= -\frac{1}{\varrho }\frac{\partial \tilde{u}}{\partial x}(t,0) \quad t>0,\\\beta(0) &=0.\end{aligned}$$
(28)

Therefore we first solve the SPDE (28) numerically to obtain the moving boundary β(t), and then the weak solution u(t,x). We discretize space using an explicit finite-difference scheme. We can also approximate η by simple functions which converge to η in L^2(ℝ) (see [18]); as a result, we obtain an approximation of ζ_t(x). Note that ζ_t(x) is a Brownian motion for each fixed x, but it is spatially correlated. We then use an Euler–Maruyama-type method to discretize time (see [11, 12]), which yields a numerical solution of (28). Because of the stability constraint for explicit schemes for parabolic PDEs, we require Δt/(Δx)^2<1/2, where Δt is the time step and Δx is the space step. Figure 1 is a simulation with initial condition

$$u_\circ(x)=\begin{cases}\frac{x+x^2}{1+(\frac{x}{2})^4} &\text{if $x\ge 0$} \\0 &\text{else}\end{cases}$$

and α=0.5, ϱ=0.5. We can clearly see the two phases separated by the black line, which is the moving boundary, and how u evolves on the colored region where u>0. Furthermore, we can see that the boundary moves to the left. This follows from the positivity of the solution u(t,x) for x>β(t) and the Stefan boundary condition of (1).
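The following is a minimal sketch, in Python, of the scheme just described. The values of α, ϱ and the initial condition u_∘ are those used for Fig. 1; everything else (the Gaussian choice of η, the computational window, the grid sizes, the random seed, and the one-sided difference for ∂ũ/∂x(t,0)) is an illustrative assumption and not prescribed by the text.

```python
import numpy as np

# Parameters alpha, varrho and the initial condition are taken from the text;
# all discretization choices below are illustrative assumptions.
alpha, varrho = 0.5, 0.5
X, T = 10.0, 1.0                      # computational window in x and final time (assumed)
J = 400                               # number of spatial intervals (assumed)
dx = X / J
dt = 0.4 * dx**2                      # respects the stability requirement dt/dx^2 < 1/2
steps = int(T / dt)

x = np.linspace(0.0, X, J + 1)        # grid in the moving frame, x >= 0
u = (x + x**2) / (1.0 + (x / 2.0)**4) # initial condition u_circ from Sect. 6
u[0] = 0.0                            # Dirichlet condition at the interface

# Grid for the Brownian sheet in y, and an assumed kernel eta with ||eta||_{L^2(R)} = 1.
y, dy = np.linspace(-15.0, 25.0, 801, retstep=True)
def eta(z):
    return (2.0 / np.pi) ** 0.25 * np.exp(-z**2)

beta = 0.0
rng = np.random.default_rng(0)

for _ in range(steps):
    # Increments of the sheet over [t, t+dt] x [y_k, y_k+dy], each N(0, dt*dy).
    dW = rng.normal(0.0, np.sqrt(dt * dy), size=y.size)
    # dzeta_j ~ sum_k eta(x_j + beta - y_k) W(dt, dy_k): spatially correlated in j.
    dzeta = eta(x[:, None] + beta - y[None, :]) @ dW

    du0 = (u[1] - u[0]) / dx          # one-sided approximation of du/dx(t, 0)
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    grad = np.zeros_like(u)
    grad[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)

    # Explicit Euler-Maruyama step for the SPDE (28).
    u = u + dt * (lap + alpha * u - (du0 / varrho) * grad) + u * dzeta
    u[0] = 0.0                        # boundary condition at the interface
    u[-1] = 0.0                       # truncation of the half-line (assumed)

    beta += -dt / varrho * du0        # beta'(t) = -(1/varrho) du/dx(t, 0)

print("final boundary position beta(T) =", beta)
```

From ũ and β one then plots u(t,x)=ũ(t,x−β(t)) for x≥β(t) and 0 otherwise, as in Fig. 1.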

Fig. 1  Weak solution u(t,x)