1 Introduction

The classical diffusion phenomenon is governed by a second-order linear partial differential equation whose Green function is a Gaussian probability density function; it describes the transport of energy through a medium in response to an energy gradient. On the other hand, diffusion processes in systems with complex structure, such as liquid crystals, glasses, polymers, biopolymers, and proteins, usually do not follow a Gaussian density; as a consequence, the phenomenon is described by a fractional partial differential equation [7]. Dipierro et al. [4] have studied the asymptotic behavior of the solutions of the time-fractional diffusion equation.

There is previous work on the initial-boundary value problem on the first quadrant \(\mathbb{R}_{+}^{2}\) for fractional diffusion equations, where the Green function has been constructed and an integral representation of the solution was found [3, 6]. In this note, we consider the equation

$$ u_{t}=\Delta ^{\alpha }u, $$
(1)

where the operator \(\Delta ^{\alpha }\) is defined via the Riesz fractional derivative applied to each coordinate. Let us note that the most commonly used generalization of the Laplacian [1, 9] differs from the one employed in this work.

However, Eq. (1) is an idealized model because many aspects are missing, such as the inhomogeneity of the medium, external sources, and measurement errors. A more realistic version is therefore obtained by considering a stochastic equation with additive noise. For example, Balanzario and Kaikina [2] studied stochastic nonlinear Landau–Ginzburg equations on the half-line with Dirichlet white-noise boundary conditions, and Shi and Wang [11] studied the solution of a stochastic fractional partial differential equation driven by an additive fractional space–time white noise. Sanchez et al. [10] studied the stochastic version of (1) in the 2-dimensional case; however, the n-dimensional case on \(\mathbb{R}_{+}^{n} :=\{ {\mathbf{x}}=(x_{1},\dots , x_{n}) : x_{j}\geq 0, j=1,\dots, n\}\) has not been studied. In the present work we tackle this problem via the main ideas of the Fokas method (unified transform) [5], a technique for solving initial-boundary value problems for partial differential equations. Moreover, it generates integral representation formulas for solutions, where the integrals converge uniformly on the boundary.

2 Preliminaries

Let us give some known definitions and results.

Definition 1

The n-dimensional Fourier–Laplace transform is defined as follows:

$$\begin{aligned} \widehat{u}({\mathbf{k}},t) =& \int _{{\mathbb{R}}_{+}^{n}} e^{-i{\mathbf{k}} \cdot {\mathbf{x}}}u({\mathbf{x}},t)\,d{\mathbf{x}}, \end{aligned}$$

where \({\mathbf{x}}\in \mathbb{R}_{+}^{n}\), \({\mathbf{k}}\in \mathbb{C}^{n}=\{ {\mathbf{k}}=(k_{1},\dots , k_{n}) : k_{j} \in \mathbb{C}, j=1,\dots, n\}\) with \(\Im m(k_{j})\leq 0\), and \({\mathbf{k}}\cdot {\mathbf{x}}\) is the usual inner product; its inverse is defined by

$$\begin{aligned} {u}({\mathbf{x}},t) =&\frac{1}{(2\pi )^{n}} \int _{{\mathbb{R}}^{n}} e^{i{ \mathbf{k}}\cdot {\mathbf{x}}}\widehat{u}({\mathbf{k}},t)\,d{ \mathbf{k}}. \end{aligned}$$
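For instance (a simple one-dimensional illustration, not taken from the references): for \(u(x)=e^{-ax}\) with \(a>0\),

$$\begin{aligned} \widehat{u}(k) =& \int _{0}^{\infty } e^{-ikx}e^{-ax}\,dx = \frac{1}{a+ik}, \end{aligned}$$

and the integral converges whenever \(\Im m(k)< a\); in particular, \(\widehat{u}\) is bounded and analytic on the closed lower half-plane \(\Im m(k)\leq 0\), which illustrates the restriction on \(\Im m(k_{j})\) in the definition above.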

Definition 2

The Riesz fractional operator is defined by

$$\begin{aligned} \mathcal{D}_{x_{j}}^{\alpha }u({\mathbf{x}}, t)=- \frac{1}{2\Gamma (3-\alpha )\cos (\frac{\pi }{2}\alpha )} \int _{0}^{\infty }\frac{{\mathrm{sgn}}(x_{j}-y_{j})}{ \vert x_{j}-y_{j} \vert ^{\alpha -2}} \partial _{y_{j}}^{3} u({\mathbf{x}}_{j},t) \,dy_{j}. \end{aligned}$$

Here, \(\alpha \in (2,3)\) and \({\mathbf{x}}_{j}\in {\mathbb{R}_{+}^{n}}\) is the vector x with its jth coordinate replaced by \(y_{j}\), \(j=1,\dots, n\).

Note that, using integration by parts, the operator \(\mathcal{D}_{x_{j}}^{\alpha }\) can be represented in the following form [8]:

$$ (-\Delta )_{j}^{\alpha }u({\mathbf{x}}, t)= \frac{\alpha }{2\Gamma (1-\alpha )\cos (\frac{\pi }{2}\alpha )} \int _{0}^{\infty }\frac{u({\mathbf{x}}_{j}, t) - u({\mathbf{x}}, t) }{ \vert x_{j}-y_{j} \vert ^{1+\alpha }} \,dy_{j}. $$

Lemma 1

If \(\Delta ^{\alpha }\), \(\alpha \in (2,3)\), is the n-dimensional fractional Laplace operator

$$ \Delta ^{\alpha }= \mathcal{D}_{x_{1}}^{\alpha }+ \mathcal{D}_{x_{2}}^{\alpha }+\cdots +\mathcal{D}_{x_{n}}^{\alpha }, $$

then, for \(\Im m(k_{l})\leq 0\),

$$ \widehat{\Delta ^{\alpha } u}({\mathbf{k}},t)= \vert {\mathbf{k}} \vert ^{\alpha }\widehat{u}({ \mathbf{k}},t) -\sum_{l=1}^{n} \sum_{j=0}^{2} \frac{ \vert k_{l} \vert ^{\alpha }}{(ik_{l})^{j+1}} \partial _{x_{l}}^{j} \widehat{u}({\mathbf{k}}_{[-l]},t). $$

Here, \(|{\mathbf{k}}|^{\alpha }:=\sum_{l=1}^{n}|k_{l}|^{\alpha }\) and \({\mathbf{k}}_{[-l]} \in \mathbb{C}^{n}\) is the vector k with its lth coordinate set to zero.

Proof

The lemma follows from the linearity of the operator \(\Delta ^{\alpha }\) and the well-known equation

$$ \widehat{\mathcal{D}_{x}^{\alpha }u}(k)= \vert {k} \vert ^{\alpha }\widehat{u}({k},t) -\sum_{j=0}^{2} \frac{ \vert k \vert ^{\alpha }}{(ik)^{j+1}} \partial _{x}^{j} \widehat{u}(0,t). $$
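For the reader's convenience, here is a heuristic way to read the boundary terms (a sketch only, assuming that u and its derivatives decay as \(x\to \infty \) and that \(\Im m(k)\leq 0\)). Integrating by parts three times on the half-line gives the classical identity

$$\begin{aligned} \int _{0}^{\infty } e^{-ikx}\partial _{x}^{3} u(x,t) \,dx =& (ik)^{3}\widehat{u}(k,t) -\sum_{j=0}^{2} (ik)^{2-j} \partial _{x}^{j} u(0,t). \end{aligned}$$

Since \((ik)^{2-j}=(ik)^{3}/(ik)^{j+1}\), formally replacing the whole-line symbol \((ik)^{3}\) of \(\partial _{x}^{3}\) by \(\vert k \vert ^{\alpha }\) turns these boundary weights into the factors \(\vert k \vert ^{\alpha }/(ik)^{j+1}\) appearing in the displayed equation.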

 □

3 Green function

We consider a linear problem for an evolution equation with initial condition \(u_{0}\) and boundary conditions \(h_{j}\), \(j=1,\ldots , n\),

$$ \textstyle\begin{cases} u_{t}=\Delta ^{\alpha }u, \\ u({\mathbf{x}}, 0)=u_{0}({\mathbf{x}}), \\ u_{x_{j}}({\mathbf{x}}_{[-j]},t)=h_{j}({\mathbf{x}}_{[-j]},t), \end{cases} $$
(2)

where \(\alpha \in (2,3)\), \(t>0\), and \({\mathbf{x}}_{[-j]}\in \mathbb{R}_{+}^{n}\) means that the jth coordinate of x is zero, together with the compatibility conditions \(h_{j}({\mathbf{x}}_{[-j,-l]},t)=h_{l}({\mathbf{x}}_{[-j,-l]},t)\), where \({\mathbf{x}}_{[-j,-l]}\in \mathbb{R}_{+}^{n}\) is such that its jth and lth coordinates, \(x_{j}\) and \(x_{l}\), are equal to zero, \(j\neq l\). (For instance, when \(n=2\) the compatibility condition simply requires \(h_{1}({\mathbf{0}},t)=h_{2}({\mathbf{0}},t)\), i.e., the boundary data agree at the corner of the quadrant.)

Theorem 1

Let the initial data \(u_{0}({\mathbf{x}})\in {\mathbf{L}}^{1}( \mathbb{R}_{+}^{n})\) and the boundary data \(h_{j}({\mathbf{x}}_{[-j]},t) \in {\mathbf{C}}(\mathbb{R}_{+};{\mathbf{L}}^{1}( \mathbb{R}_{+}^{n}))\). Suppose that there exists a function \(u({\mathbf{x}},t)\) that satisfies (2). Then \(u({\mathbf{x}},t)\) has the following integral representation:

$$\begin{aligned} u({\mathbf{x}},t) =&\mathcal{G}^{I}(t)u_{0} -\sum _{l=1}^{n} \int _{0}^{t} \mathcal{G}^{B_{l}}(t-s)h_{l} \,ds, \end{aligned}$$

where the Green operators are given by

$$\begin{aligned}& \mathcal{G}^{I}(t)u_{0} = \int _{\mathbb{R}_{+}^{n}} G^{I}({\mathbf{x}},{ \mathbf{y}},t)u_{0}({\mathbf{y}})\,d{\mathbf{y}}, \\& \mathcal{G}^{B_{l}}(t)h_{l} = \int _{\mathbb{R}_{+}^{n-1}} G^{B_{l}}({ \mathbf{x}},{ \mathbf{y}}_{[-l]},t)h_{l}({\mathbf{y}}_{[-l]},s)\,d{ \mathbf{y}}_{[-l]}, \end{aligned}$$
(3)

and the Green functions are

$$\begin{aligned}& G^{I}({\mathbf{x}},{\mathbf{y}},\tau ) = \frac{2^{n}}{\pi ^{n}} \int _{ \mathbb{R}_{+}^{n}} e^{-{\mathbf{k}}^{\alpha }\tau } \prod_{l=1}^{n} \cos [k_{l}x_{l}] \cos [k_{l}y_{l}] \,d{\mathbf{k}}, \\& G^{B_{l}}({\mathbf{x}},{\mathbf{y}}_{[-l]},\tau ) = \frac{2^{n}}{\pi ^{n}} \int _{ \mathbb{R}_{+}^{n}} e^{-{\mathbf{k}}^{\alpha }\tau } k_{l}^{\alpha -2} \cos [k_{l}x_{l}] \prod_{\substack{m=1\\m\neq l}}^{n} \cos [k_{m}x_{m}]\cos [k_{m}y_{m}] \,d{ \mathbf{k}}. \end{aligned}$$

Here, \({\mathbf{k}}^{\alpha }=\sum_{l=1}^{n} k_{l}^{\alpha }\).

Proof

Applying Lemma 1 to Eq. (2), we obtain

$$\begin{aligned} \widehat{u}_{t}({\mathbf{k}},t)+ \vert {\mathbf{k}} \vert ^{\alpha }\widehat{u}({\mathbf{k}},t) =&\sum_{l=1}^{n} \sum_{j=0}^{2} \frac{ \vert k_{l} \vert ^{\alpha }}{(ik_{l})^{j+1}} \partial _{x_{l}}^{j} \widehat{u}({\mathbf{k}}_{[-l]},t). \end{aligned}$$

Now, we multiply the above equation by the integrating factor \(e^{|{\mathbf{k}}|^{\alpha }t}\) and integrate from 0 to t, which yields

$$\begin{aligned} e^{ \vert {\mathbf{k}} \vert ^{\alpha }t}\widehat{u}({\mathbf{k}},t)- \widehat{u}_{0}({\mathbf{k}}) =& \sum_{l=1}^{n} \sum_{j=0}^{2} \frac{ \vert k_{l} \vert ^{\alpha }}{(ik_{l})^{j+1}} g_{j}^{l} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },{ \mathbf{k}}_{[-l]},t\bigr) \end{aligned}$$
(4)

for \(\Im m (k_{l})\leq 0\), where

$$\begin{aligned} {g}_{j}^{l}(\sigma , {\mathbf{k}}_{[-l]}, t)= \int _{0}^{t} e^{\sigma s} \partial _{x_{l}}^{j} \widehat{u}({\mathbf{k}}_{[-l]}, s)\,ds. \end{aligned}$$

Now, we first consider the 2-dimensional case. Thus, Eq. (4) is expressed as

$$\begin{aligned} e^{ \vert {\mathbf{k}} \vert ^{\alpha }t}\widehat{u}({\mathbf{k}},t)- \widehat{u}_{0}({\mathbf{k}}) =& \sum_{j=0}^{2} \frac{ \vert k_{1} \vert ^{\alpha }}{(ik_{1})^{j+1}} g_{j}^{1} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-1]},t\bigr) \\ &{}+ \sum_{j=0}^{2} \frac{ \vert k_{2} \vert ^{\alpha }}{(ik_{2})^{j+1}} g_{j}^{2} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-2]},t\bigr). \end{aligned}$$
(5)

Applying the inverse transform in (5) with respect to \(k_{1}\) and moving the contour of integration for the terms with \(g_{j}^{1}\) in the integrand, we obtain

$$\begin{aligned} \widehat{u}(x_{1},k_{2},t) =& \frac{1}{2\pi } \int _{\mathbb{R}} e^{ik_{1}x_{1}- \vert { \mathbf{k}} \vert ^{\alpha }t} \Biggl[\widehat{u}_{0}({ \mathbf{k}})+\sum_{j=0}^{2} \frac{ \vert k_{2} \vert ^{\alpha }}{(ik_{2})^{j+1}} g_{j}^{2}\bigl( \vert {\mathbf{k}} \vert ^{\alpha },{ \mathbf{k}}_{[-2]},t\bigr) \Biggr] \,dk_{1} \\ &{}+\frac{1}{2\pi } \int _{\partial D_{1}^{+}}e^{ik_{1}x_{1}- \vert {\mathbf{k}} \vert ^{\alpha }t} \sum_{j=0}^{2} \frac{ \vert k_{1} \vert ^{\alpha }}{(ik_{1})^{j+1}}g_{j}^{1}\bigl( \vert { \mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-1]},t\bigr)\,dk_{1}, \end{aligned}$$
(6)

where \(D_{1}^{+}= \{k_{1}\in \mathbb{C}: 0 \leq \Im m (k_{1}) \leq \frac{\pi }{2\alpha }|\Re e (k_{1})| \}\). Let us note that if we replace \(k_{1}\) by \(-k_{1}\), the functions \(g_{j}^{1}\) in Eq. (5) are invariant. Then, making this change of variables in (5), we get

$$\begin{aligned} e^{ \vert {\mathbf{k}} \vert ^{\alpha }t}\widehat{u}(-k_{1},k_{2},t)- \widehat{u}_{0}(-k_{1},k_{2}) =&\sum _{j=0}^{2} \frac{ \vert k_{1} \vert ^{\alpha }}{(-ik_{1})^{j+1}}g_{j}^{1} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-1]},t \bigr) \\ &{}+\sum_{j=0}^{2} \frac{ \vert k_{2} \vert ^{\alpha }}{(ik_{2})^{j+1}}g_{j}^{2} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },-{\mathbf{k}}_{[-2]},t \bigr), \end{aligned}$$
(7)

for \(\Im m (-k_{1}),\Im m (k_{2})\leq 0\). Substituting \(g_{2}^{1}\) from Eq. (7) in (6) and using the fact that

$$\begin{aligned} \int _{\partial D_{1}^{+}} e^{ik_{1}x_{1}}\widehat{u}(-k_{1},k_{2},t) \,dk_{1}=0, \end{aligned}$$

by Cauchy's theorem, we obtain the following integral representation:

$$\begin{aligned} \widehat{u}(x_{1},k_{2},t) =& \frac{1}{2\pi } \int _{\mathbb{R}}e^{ik_{1}x_{1}- \vert { \mathbf{k}} \vert ^{\alpha }t} \Biggl[\widehat{u}_{0}({ \mathbf{k}}) +\widehat{u}_{0}(-k_{1},k_{2})- \frac{2 \vert k_{1} \vert ^{\alpha }}{k_{1}^{2}}g_{1}^{1}\bigl( \vert {\mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-1]},t\bigr) \\ &{}+\sum_{j=0}^{2}\frac{ \vert k_{2} \vert ^{\alpha }}{(ik_{2})^{j+1}} \bigl[g_{j}^{2}\bigl( \vert { \mathbf{k}} \vert ^{\alpha }, {\mathbf{k}}_{[-2]},t\bigr)+g_{j}^{2} \bigl( \vert {\mathbf{k}} \vert ^{\alpha },-{\mathbf{k}}_{[-2]},t \bigr)\bigr] \Biggr]\,dk_{1}. \end{aligned}$$
(8)

Applying the inverse transform in (8) with respect to \(k_{2}\) and moving the contour of integration for the terms with \(g_{j}^{2}\) in the integrand, we obtain

$$\begin{aligned} u({\mathbf{x}},t) =&\frac{1}{(2\pi )^{2}} \int _{\mathbb{R}^{2}} e^{i{\mathbf{k}} \cdot {\mathbf{x}}- \vert {\mathbf{k}} \vert ^{\alpha }t} \bigl[\widehat{u}_{0}({ \mathbf{k}}) + \widehat{u}_{0}(-k_{1},k_{2}) \bigr] \\ &{}-\frac{1}{(2\pi )^{2}} \int _{\mathbb{R}^{2}} e^{i{\mathbf{k}}\cdot {\mathbf{x}}- \vert { \mathbf{k}} \vert ^{\alpha }t} \frac{2 \vert k_{1} \vert ^{\alpha }}{k_{1}^{2}}g_{1}^{1} \bigl( \vert {\mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-1]},t \bigr)\,d{\mathbf{k}} \\ &{}+\frac{1}{(2\pi )^{2}} \int _{\partial D_{2}^{+}} \int _{\mathbb{R}}e^{i{ \mathbf{k}}\cdot {\mathbf{x}}- \vert {\mathbf{k}} \vert ^{\alpha }t} \sum_{j=0}^{2} \frac{ \vert k_{2} \vert ^{\alpha }}{(ik_{2})^{j+1}} \\ &{}\times \bigl[g_{j}^{2}\bigl( \vert {\mathbf{k}} \vert ^{\alpha },{\mathbf{k}}_{[-2]},t\bigr)+g_{j}^{2} \bigl( \vert { \mathbf{k}} \vert ^{\alpha },-{\mathbf{k}}_{[-2]},t \bigr)\bigr]\,d{\mathbf{k}}, \end{aligned}$$
(9)

where \(D_{2}^{+}= \{k_{2}\in \mathbb{C}: 0 \leq \Im m (k_{2}) \leq \frac{\pi }{2\alpha }|\Re e (k_{2})| \}\). Let us note that if we replace \(k_{2}\) by \(-k_{2}\), the functions \(g_{j}^{2}\) in Eq. (8) are invariant. Then, making this change of variables in (8), we get

$$\begin{aligned} \widehat{u}(x_{1},-k_{2},t) =& \frac{1}{2\pi } \int _{\mathbb{R}}e^{ik_{1}x_{1}- \vert { \mathbf{k}} \vert ^{\alpha }t} \bigl[\widehat{u}_{0}(k_{1},-k_{2}) +\widehat{u}_{0}(-{ \mathbf{k}}) \bigr] \\ &{}-\frac{1}{2\pi } \int _{\mathbb{R}}e^{ik_{1}x_{1}- \vert {\mathbf{k}} \vert ^{\alpha }t} \Biggl[\frac{2 \vert k_{1} \vert ^{\alpha }}{k_{1}^{2}}g_{1}^{1} \bigl( \vert {\mathbf{k}} \vert ^{\alpha },-{ \mathbf{k}}_{[-1]},t \bigr) \\ &{}+\sum_{j=0}^{2}\frac{ \vert k_{2} \vert ^{\alpha }}{(-ik_{2})^{j+1}} \bigl[g_{j}^{2}\bigl( \vert { \mathbf{k}} \vert ^{\alpha }, {\mathbf{k}}_{[-2]},t\bigr)+g_{j}^{2} \bigl( \vert {\mathbf{k}} \vert ^{\alpha },-{\mathbf{k}}_{[-2]},t \bigr)\bigr] \Biggr]\,dk_{1}, \end{aligned}$$
(10)

for \(\Im m (k_{1}),\Im m (k_{2})\geq 0\). Substituting \(g_{2}^{2}(|{\mathbf{k}}|^{\alpha },\pm {\mathbf{k}}_{[-2]},t)\) from Eq. (10) in (9) and using the fact that

$$\begin{aligned} \int _{\partial D_{2}^{+}} e^{ik_{2}x_{2}}\widehat{u}(x_{1},-k_{2},t) \,dk_{2}=0, \end{aligned}$$

by Cauchy's theorem, we obtain the following integral representation:

$$\begin{aligned} u=\frac{1}{(2\pi )^{2}} \int _{\mathbb{R}^{2}} e^{i{\mathbf{k}} \cdot {\mathbf{x}}- \vert { \mathbf{k}} \vert ^{\alpha }t} \Biggl[\sum _{{\mathbf{r}}\in S_{2}} \widehat{u}_{0}({\mathbf{r}})-2 \sum _{l=1}^{2}\sum_{{\mathbf{r}}_{[-l]}\in S_{2}} \frac{ \vert k_{l} \vert ^{\alpha }}{k_{l}^{2}}g_{1}^{l}\bigl( \vert {\mathbf{k}} \vert ^{\alpha },{\mathbf{r}}_{[-l]},t\bigr) \Biggr]\,d{\mathbf{k}}, \end{aligned}$$
(11)

where \({\mathbf{r}}\in S_{2}=\{(\pm k_{1},\pm k_{2})\}\) and \({\mathbf{r}}_{[-l]}\) is such that its lth coordinate is equal to zero. After interchanging the order of integration, Eq. (11) contains integrals of the form

$$\begin{aligned}& \int _{\mathbb{R}^{2}} \int _{\mathbb{R}^{2}_{+}} e^{i{\mathbf{k}}\cdot {(x_{1} \pm y_{1},x_{2}\pm y_{2})}- \vert {\mathbf{k}} \vert ^{\alpha }t}u_{0}({\mathbf{y}})\,d{ \mathbf{y}}\,d{ \mathbf{k}}, \\& \int _{\mathbb{R}^{2}} \int _{0}^{t} \int _{\mathbb{R}_{+}} e^{i{\mathbf{k}} \cdot {(x_{1},x_{2}\pm y_{2})}- \vert {\mathbf{k}} \vert ^{\alpha }(t-s)} \frac{ \vert k_{1} \vert ^{\alpha }}{k_{1}^{2}}u_{x_{1}}(0, \pm y_{2},s)\,d{y_{2}}\,ds\,d \mathbf{k}, \end{aligned}$$

and

$$ \int _{\mathbb{R}^{2}} \int _{0}^{t} \int _{\mathbb{R}_{+}} e^{i{\mathbf{k}} \cdot {(x_{1}\pm y_{1},x_{2})}- \vert {\mathbf{k}} \vert ^{\alpha }(t-s)} \frac{ \vert k_{2} \vert ^{\alpha }}{k_{2}^{2}}u_{x_{2}}( \pm y_{1},0,s)\,d{y_{1}}\,ds\,d{ \mathbf{k}}. $$
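In passing from these integrals to the cosine kernels below, the key elementary step is the identity (a brief sketch of the simplification; it is the n=2 instance of the analogous fact for the sign set \(S_{n}\), and the boundary terms are treated in the same way, with only one signed variable):

$$\begin{aligned} \sum_{\pm ,\pm } e^{ik_{1}(x_{1}\pm y_{1})+ik_{2}(x_{2}\pm y_{2})} =& 4 e^{ik_{1}x_{1}}\cos [k_{1}y_{1}] e^{ik_{2}x_{2}}\cos [k_{2}y_{2}]. \end{aligned}$$

Since \(e^{- \vert {\mathbf{k}} \vert ^{\alpha }t}\) is even in each \(k_{l}\), only the even part \(\cos [k_{l}x_{l}]\) of each factor \(e^{ik_{l}x_{l}}\) survives the integration over \(\mathbb{R}^{2}\); together with the factor 4 above and the normalization \(1/(2\pi )^{2}\), restricting each \(k_{l}\)-integral to \((0,\infty )\) then produces the prefactor \((2/\pi )^{2}\) in the Green functions below.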

We notice that all these integrals are absolutely convergent; hence, by Fubini's theorem and the simplification noted above, we arrive from Eq. (11) at the following equation:

$$\begin{aligned} u({\mathbf{x}},t) =&\mathcal{G}^{I}(t)u_{0} -\sum _{l=1}^{2} \int _{0}^{t} \mathcal{G}^{B_{l}}(t-s)h_{l} \,ds, \end{aligned}$$

where the Green operators are given by

$$\begin{aligned}& \mathcal{G}^{I}(t)u_{0} = \int _{\mathbb{R}_{+}^{2}} G^{I}({\mathbf{x}},{ \mathbf{y}},t)u_{0}({\mathbf{y}})\,d{\mathbf{y}}, \\& \mathcal{G}^{B_{l}}(t)h_{l} = \int _{\mathbb{R}_{+}} G^{B_{l}}({\mathbf{x}},{ \mathbf{y}}_{[-l]},t)h_{l}({\mathbf{y}}_{[-l]},s)\,d{ \mathbf{y}}_{[-l]}, \end{aligned}$$

and the Green functions are

$$\begin{aligned}& G^{I}({\mathbf{x}},{\mathbf{y}},\tau ) = \biggl(\frac{2}{\pi } \biggr)^{2} \int _{ \mathbb{R}_{+}^{2}} e^{-{\mathbf{k}}^{\alpha }\tau } \prod_{l=1}^{2} \cos [k_{l}x_{l}] \cos [k_{l}y_{l}] \,d{\mathbf{k}}, \\& G^{B_{l}}({\mathbf{x}},{\mathbf{y}}_{[-l]},\tau ) = \biggl( \frac{2}{\pi } \biggr)^{2} \int _{\mathbb{R}_{+}^{2}} e^{-{\mathbf{k}}^{\alpha }\tau } \cos [k_{l}x_{l}]k_{l}^{ \alpha -2} \prod_{\substack{m=1\\m\neq l}}^{2} \cos [k_{m}x_{m}] \cos [k_{m}y_{m}] \,d{\mathbf{k}}, \end{aligned}$$

where \({\mathbf{k}}^{\alpha }=k_{1}^{\alpha }+k_{2}^{\alpha }\). Now, following the previous arguments, we can tackle the n-dimensional case: the passage from Eq. (4) to Eq. (12) is achieved by mathematical induction over n, repeating the steps described in the 2-dimensional case. Analogously to Eq. (11), we obtain an integral representation for u,

$$\begin{aligned} u({\mathbf{x}},t) =&\frac{1}{(2\pi )^{n}} \int _{\mathbb{R}^{n}} e^{i{\mathbf{k}} \cdot {\mathbf{x}}- \vert {\mathbf{k}} \vert ^{\alpha }t} \Biggl[\sum _{{\mathbf{r}}\in S_{n}} \widehat{u}_{0}({\mathbf{r}}) \\ &{}-2\sum_{l=1}^{n} \sum _{{\mathbf{r}}_{[-l]}\in S_{n}} \frac{ \vert k_{l} \vert ^{\alpha }}{k_{l}^{2}}g_{1}^{l}\bigl( \vert {\mathbf{k}} \vert ^{\alpha }, {\mathbf{r}}_{[-l]},t\bigr) \Biggr]\,d{\mathbf{k}}, \end{aligned}$$
(12)

where \({\mathbf{r}}\in S_{n}=\{(\pm k_{1},\pm k_{2}, \dots , \pm k_{n})\}\) and \({\mathbf{r}}_{[-l]}\) is such that its lth coordinate is equal to zero. Interchanging the order of integration in the above equation, by Fubini's theorem, we obtain the desired result. □
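Let us also record a formal consistency check (outside the range \(\alpha \in (2,3)\) considered here, and offered only as a plausibility argument). If, for n=1, we formally replace \(k^{\alpha }\) by \(k^{2}\) in \(G^{I}\), then the classical integral \(\int _{0}^{\infty } e^{-k^{2}\tau }\cos [ka] \,dk=\frac{1}{2}\sqrt{\pi /\tau }\, e^{-a^{2}/(4\tau )}\), together with \(2\cos [kx]\cos [ky]=\cos [k(x-y)]+\cos [k(x+y)]\), yields

$$\begin{aligned} G^{I}(x,y,\tau ) =& \frac{1}{\sqrt{4\pi \tau }} \bigl(e^{-(x-y)^{2}/(4\tau )}+e^{-(x+y)^{2}/(4\tau )} \bigr), \end{aligned}$$

the heat kernel on the half-line with a reflecting boundary; this is consistent with the boundary data entering problem (2) through the derivatives \(u_{x_{j}}\).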

4 Stochastic nonlinear problem

In order to state the problem, we define the Brownian sheet on \(\mathbb{R}^{n}_{+}\times [0,T]\) on a complete probability space \((\Omega ,\mathcal{F},\mathcal{F}_{t},P)\), where \(\mathcal{F}\) is a σ-algebra, \(\{\mathcal{F}_{t}\}_{t\geq 0}\) is a right-continuous filtration on \((\Omega ,\mathcal{F})\) such that \(\mathcal{F}_{0}\) contains all P-negligible subsets, and P is a probability measure. We consider a centered Gaussian field \(B=\{B({\mathbf{x}},t)| {\mathbf{x}}\geq 0, t\geq 0\}\) with covariance function given by

$$ K\bigl(({\mathbf{x}},t),({\mathbf{y}},s)\bigr)=\min \{t,s\} {\mathrm{diag}} \bigl(\min \{x_{1},y_{1} \},\ldots ,\min \{x_{n},y_{n}\}\bigr). $$

We suppose that B generates an \((\mathcal{F}_{t},t\geq 0)\)-martingale measure in the sense of Walsh [12]. Let the initial condition \(u_{0}\) be \(\mathcal{F}_{0}\times \mathcal{B}(\mathbb{R}^{n}_{+})\) measurable, where \(\mathcal{B}(\mathbb{R}^{n}_{+})\) is the Borel σ-algebra over \(\mathbb{R}^{n}_{+}\).

Now, we consider the following initial-boundary value problem for a nonlinear equation:

$$ \textstyle\begin{cases} u_{t}-\Delta ^{\alpha }u=\mathcal{N}u+\dot{B} , \\ u({\mathbf{x}}, 0)=u_{0}({\mathbf{x}}), \\ u_{x_{j}}({\mathbf{x}}_{[-j]},t)=h_{j}({\mathbf{x}}_{[-j]},t), \end{cases} $$
(13)

where \({\mathbf{x}}\in \mathbb{R}_{+}^{n} \), \(t>0\), \(\alpha \in (2,3)\), \(\mathcal{N}\) is a Lipschitz operator, i.e., \(|\mathcal{N}u - \mathcal{N}v| \leq C|u-v|\) for some \(C>0\), and the compatibility conditions \(h_{j}({\mathbf{x}}_{[-j,-l]},t)=h_{l}({\mathbf{x}}_{[-j,-l]},t)\) are satisfied. We understand solutions of problem (13) in the following sense: u is a solution if, for all \({\mathbf{x}}\in \mathbb{R}_{+}^{n} \) and \(t>0\), the following equation is fulfilled:

$$\begin{aligned} u({\mathbf{x}},t) =& \mathcal{G}^{I}(t)u_{0} +\sum_{l=1}^{n} \int _{0}^{t} \mathcal{G}^{B_{l}}(t-s)h_{l} \,ds \\ &{}+ \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}} G({\mathbf{x}}-{\mathbf{y}},t-s) \mathcal{N}u({ \mathbf{y}},s)\,d{\mathbf{y}}\,ds \\ &{}+ \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}} G({\mathbf{x}}-{\mathbf{y}},t-s)\,dB({ \mathbf{y}},s), \end{aligned}$$
(14)

where the Green operators \(\mathcal{G}^{I}(t)\), \(\mathcal{G}^{B_{l}}(t)\) are given in Eq. (3) and the Green function is

$$\begin{aligned} G({\mathbf{x}},t)=\frac{1}{(2\pi )^{n}} \int _{\mathbb{R}_{+}^{n}} e^{i{ \mathbf{k}}\cdot {\mathbf{x}}-|{\mathbf{k}}|^{\alpha }t}\,d{\mathbf{k}}. \end{aligned}$$
(15)

Theorem 2

Let the initial data \(u_{0}({\mathbf{x}})\in {\mathbf{L}}^{1}( \mathbb{R}_{+}^{n})\) and the boundary data \(h_{j}({\mathbf{x}}_{[-j]},t) \in {\mathbf{C}}(\mathbb{R}_{+};{\mathbf{L}}^{1}( \mathbb{R}_{+}^{n}))\). Suppose that, for each \(T>0\), there exists a constant \(C>0\) such that, for each \({\mathbf{x}}\in \mathbb{R}_{+}^{n}\), \(t\in [0,T]\) and \(u,v \in \mathbb{R}^{n}\), \(|\mathcal{N}u-\mathcal{N}v|\leq C |u-v|\), and for some \(p\geq 1\),

$$\begin{aligned} \sup_{{\mathbf{x}}\geq 0} \mathbb{E}\bigl( \bigl\vert u_{0}({\mathbf{x}}) \bigr\vert ^{p}\bigr)< \infty . \end{aligned}$$
(16)

Then, there exists a unique solution \(u({\mathbf{x}},t)\) to Eq. (13). Moreover, for all \(T>0\) and \(p\geq 1\),

$$\begin{aligned} \sup_{\substack{{\mathbf{x}}\geq 0\\t \in [0,T]}} \mathbb{E}\bigl( \bigl\vert u({\mathbf{x}},t) \bigr\vert ^{p}\bigr)< \infty . \end{aligned}$$

Proof

First, we define the Picard iteration sequence:

$$\begin{aligned} u^{n+1}({\mathbf{x}},t) =&u^{0}({ \mathbf{x}},t)+\sum_{l=1}^{n} \int _{0}^{t} \int _{\mathbb{R}_{+}^{n-1}} G^{B_{l}}({\mathbf{x}},{\mathbf{y}}_{[-l]},t-s)h_{l}({ \mathbf{y}}_{[-l]},s)\,d{\mathbf{y}}_{[-l]}\,ds \\ &{}+ \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({\mathbf{x}}-{\mathbf{y}},t-s) \mathcal{N}u^{n}({\mathbf{y}},s)\,d{\mathbf{y}}\,ds \\ &{}+ \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({\mathbf{x}}-{\mathbf{y}},t-s)\,dB({ \mathbf{y}},s) \end{aligned}$$
(17)

where

$$\begin{aligned} u^{0}({\mathbf{x}},t) =& \int _{\mathbb{R}_{+}^{n}} G^{I}({\mathbf{x}},{\mathbf{y}},t)u_{0}({ \mathbf{y}})\,d{\mathbf{y}}. \end{aligned}$$

Now, let us prove that \(\{u^{n}({\mathbf{x}},t)\}_{n\geq 0}\) converges in \(L^{p}(\Omega )\). Using the fact that, for all \(t\geq 0\), \(G({\mathbf{x}},t)\) from Eq. (15) is a probability density function with respect to x, we obtain, for \(n\geq 2\),

$$\begin{aligned}& \mathbb{E}\bigl( \bigl\vert u^{n+1}({\mathbf{x}},t)-u^{n}({ \mathbf{x}},t) \bigr\vert ^{p}\bigr) \\& \quad =\mathbb{E} \biggl( \biggl\vert \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({ \mathbf{x}}-{\mathbf{y}},t-s)\bigl[ \mathcal{N}u^{n}({\mathbf{y}},s)-\mathcal{N}u^{n-1}({ \mathbf{y}},s)\bigr]\,d{\mathbf{y}}\,ds \biggr\vert ^{p} \biggr) \\& \quad \leq C(p) \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({\mathbf{x}}-{\mathbf{y}},t-s) \mathbb{E}\bigl( \bigl\vert u^{n}({\mathbf{y}},s)-u^{n-1}({\mathbf{y}},s) \bigr\vert ^{p}\bigr) \,d{\mathbf{y}}\,ds \\& \quad \leq C(p) \int _{0}^{t} \sup_{{\mathbf{x}}\geq 0} \mathbb{E}\bigl( \bigl\vert u^{n}({\mathbf{y}},s)-u^{n-1}({ \mathbf{y}},s) \bigr\vert ^{p}\bigr) \,ds \end{aligned}$$

and by (16) and Burkholder’s inequality we have

$$\begin{aligned}& \sup_{{\mathbf{x}}\geq 0}\mathbb{E}\bigl(|u^{1}({ \mathbf{x}},t)-u^{0}({\mathbf{x}},t)|^{p}\bigr) \\& \quad \leq C(p) \Bigl(\sup_{{\mathbf{x}}\geq 0}\mathbb{E}\bigl( \bigl\vert u^{1}({\mathbf{x}},t) \bigr\vert ^{p}\bigr) +\sup _{{\mathbf{x}}\geq 0}\mathbb{E}\bigl( \bigl\vert u^{0}({ \mathbf{x}},t) \bigr\vert ^{p}\bigr) \Bigr)< \infty . \end{aligned}$$

Then, by Gronwall’s lemma we obtain

$$\begin{aligned} \sum_{n\geq 0}\sup_{\substack{{\mathbf{x}}\geq 0\\t \in [0,T]}} \mathbb{E} \bigl( \bigl\vert u^{n}({\mathbf{x}},t)-u^{n-1}({ \mathbf{x}},t) \bigr\vert ^{p}\bigr)< \infty . \end{aligned}$$

Hence, \(\{u^{n}({\mathbf{x}},t)\}_{n\geq 0}\) is a Cauchy sequence in \(L^{p}(\Omega )\). Let

$$\begin{aligned} u({\mathbf{x}},t) =&\lim_{n\rightarrow \infty } u^{n}({ \mathbf{x}},t). \end{aligned}$$

Thus,

$$\begin{aligned} \sup_{\substack{{\mathbf{x}}\geq 0\\t \in [0,T]}}\mathbb{E}\bigl( \bigl\vert u({\mathbf{x}},t) \bigr\vert ^{p}\bigr)< \infty . \end{aligned}$$

Taking \(n\rightarrow \infty \) in \(L^{p}(\Omega )\) on both sides of (17) shows that \(u({\mathbf{x}},t)\) satisfies Eq. (14), that is, it is a solution of problem (13). Finally, we have to prove the uniqueness of the solution. Let u and v be two solutions of problem (13), then

$$\begin{aligned}& \mathbb{E}\bigl(\bigl|u({\mathbf{x}},t)-v({\mathbf{x}},t)\bigr|^{p}\bigr) \\& \quad =\mathbb{E} \biggl( \biggl\vert \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({ \mathbf{x}}-{\mathbf{y}},t-s)\bigl[\mathcal{N}u({ \mathbf{y}},s)-\mathcal{N}v({\mathbf{y}},s)\bigr]\,d{ \mathbf{y}}\,ds \biggr\vert ^{p} \biggr) \\& \quad \leq C(p) \int _{0}^{t} \int _{\mathbb{R}_{+}^{n}}G({\mathbf{x}}-{\mathbf{y}},t-s) \mathbb{E}\bigl( \bigl\vert u({\mathbf{y}},s)-v({\mathbf{y}},s) \bigr\vert ^{p} \bigr) \,d{\mathbf{y}}\,ds \\& \quad \leq C(p) \int _{0}^{t} \sup_{{{\mathbf{y}}\geq 0}} \mathbb{E}\bigl( \bigl\vert u({ \mathbf{y}},s)-v({\mathbf{y}},s) \bigr\vert ^{p}\bigr) \,ds. \end{aligned}$$

Therefore, Gronwall’s lemma yields

$$\begin{aligned} \mathbb{E}\bigl( \bigl\vert u({\mathbf{x}},t)-v({\mathbf{x}},t) \bigr\vert ^{p}\bigr)=0. \end{aligned}$$

 □

5 Example

In this section, we consider an example for the case \(n=2\), with the initial condition

$$ u_{0}(x_{1},x_{2})= \textstyle\begin{cases} 1,& 1\leq x_{1},x_{2}\leq 2, \\ 0,&\text{in the other case,} \end{cases} $$

and the boundary conditions, for \(l=1,2\),

$$ h_{l}({\mathbf{x}}_{[-l]}, t)= \textstyle\begin{cases} (-1)^{l+1}, &3/4\leq {\mathbf{x}}_{[-l]}\leq 5/4, \\ 0, &\text{in the other case.} \end{cases} $$

In Fig. 1, we present the plot of the solution \(u({\mathbf{x}}, t)\) for \(t=0.02, 0.1,0.5, 1\), and \(\alpha =2.5\).
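As a complement, the following is a minimal numerical sketch of how a plot of this type can be produced directly from the representation formula of Theorem 1 specialized to the data above; it is not the code used to generate Fig. 1, and the truncation level K, the number of nodes N, and the helper names (box, u) are ad hoc choices introduced here. For the piecewise-constant data, the y- and s-integrals are evaluated in closed form, and the remaining k-integral over \(\mathbb{R}_{+}^{2}\) is approximated by a midpoint rule on \([0,K]^{2}\).

```python
import numpy as np

# Sketch: evaluate u(x, t) from the representation of Theorem 1 for n = 2,
# alpha = 2.5, with u0 = indicator of [1,2]x[1,2] and boundary data
# h1 = +1, h2 = -1 on [3/4, 5/4].  The y- and s-integrals are done in closed
# form; the k-integral over R_+^2 is truncated to [0, K]^2 (midpoint rule).
# K and N are arbitrary numerical choices, not taken from the paper.

alpha = 2.5
K, N = 40.0, 1500
dk = K / N
k = (np.arange(N) + 0.5) * dk           # midpoint nodes, avoids k = 0
K1, K2 = np.meshgrid(k, k, indexing="ij")
symbol = K1**alpha + K2**alpha          # k1^alpha + k2^alpha

def box(kk, a, b):
    """int_a^b cos(kk * y) dy = (sin(b*kk) - sin(a*kk)) / kk."""
    return (np.sin(b * kk) - np.sin(a * kk)) / kk

def u(x1, x2, t):
    c1, c2 = np.cos(K1 * x1), np.cos(K2 * x2)
    # initial-data term G^I(t) u0
    init = np.exp(-symbol * t) * c1 * c2 * box(K1, 1.0, 2.0) * box(K2, 1.0, 2.0)
    # int_0^t exp(-symbol * (t - s)) ds = (1 - exp(-symbol * t)) / symbol
    time_int = -np.expm1(-symbol * t) / symbol
    b1 = time_int * K1**(alpha - 2.0) * c1 * c2 * box(K2, 0.75, 1.25)  # h1 = +1
    b2 = time_int * K2**(alpha - 2.0) * c1 * c2 * box(K1, 0.75, 1.25)  # h2 = -1
    integrand = init - (b1 - b2)        # u = G^I u0 - sum_l int G^{B_l} h_l
    return (2.0 / np.pi)**2 * integrand.sum() * dk * dk

print(u(1.5, 1.5, 0.02))                # sample evaluation near the initial box
```

Evaluating u on a grid of \((x_{1},x_{2})\) values for each of the times listed above would then produce surfaces comparable to those in Fig. 1.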

Figure 1: Anomalous diffusion for \(\alpha =2.5\)