Abstract
A finite horizon optimal stopping problem for an infinite dimensional diffusion X is analyzed by means of variational techniques. The diffusion is driven by an SDE on a Hilbert space \(\mathcal {H}\) with a non-linear diffusion coefficient \(\sigma (X)\) and a generic unbounded operator A in the drift term. When the gain function \(\Theta \) is time-dependent and fulfils mild regularity assumptions, the value function \(\mathcal {U}\) of the optimal stopping problem is shown to solve an infinite-dimensional, parabolic, degenerate variational inequality on an unbounded domain. Once the coefficient \(\sigma (X)\) is specified, the solution of the variational problem is found in a suitable Banach space \(\mathcal {V}\) fully characterized in terms of a Gaussian measure \(\mu \). This work provides the infinite-dimensional counterpart, in the spirit of Bensoussan and Lions (Application of variational inequalities in stochastic control, 1982), of well-known results on optimal stopping theory and variational inequalities in \(\mathbb {R}^n\). These results may be useful in several fields, such as mathematical finance, when pricing American options in the HJM model.
1 Introduction
This paper studies a finite horizon optimal stopping problem associated to an infinite-dimensional diffusion process by means of variational techniques. It is well known that the value function of a wide class of optimal stopping problems for general diffusions in \(\mathbb {R}^n\) may be characterized as the solution of suitable variational problems (see [4] and references therein for a survey). Here we provide an infinite-dimensional counterpart of those results by extending methods employed in [4] and combining them with techniques borrowed from the theory of infinite dimensional SDEs.
This work is partially motivated by a central problem in the modern theory of mathematical finance. In fact, pricing American bond options on the forward interest rate curve gives rise to an infinite dimensional optimal stopping problem. This is a consequence of the dependence of the bond’s price on the whole structure of the forward curve. The results obtained here will be extended to solve that particular financial problem in a forthcoming paper [7].
Optimal stopping for processes in locally compact spaces has attracted great attention in the last decades (cf. [14, 27, 30] among others) while the case of general infinite-dimensional Markov processes has been studied in relatively few papers. The earliest paper on infinite dimensional optimal stopping and variational inequalities we are aware of is [8]. There Chow and Menaldi extended known finite dimensional results, in the spirit of [4], to the case of a particular infinite dimensional linear diffusion.
A first attempt towards a more comprehensive study of optimal stopping theory for processes taking values in a Polish space was made by Zabczyk [31] in 1997 from a purely probabilistic point of view and later on, in 2001, by variational methods [32]. Recently Barbu and Marinelli [2] contributed further insights in this direction, adopting arguments similar to those in Zabczyk’s works. In both [2, 32] the authors considered a diffusion process on a functional space \(\mathcal {H}\) and solved the variational problem in a mild sense in a suitable \(L^2\)-space with respect to a measure on \(\mathcal {H}\). In the present work, instead, we find Sobolev-type (hence local) solutions of the variational problem. Barbu and Sritharan [3] also considered an optimal stopping problem for a 2-dimensional stochastic Navier–Stokes equation and solved the associated infinite dimensional variational inequality in an \(L^2\)-space.
A different approach is based on viscosity theory. It is extensively exploited to solve general stochastic control problems (cf. [15] for a survey) and the infinite-dimensional case is currently the object of intense study (cf. [19–21, 29] among others). However, as far as we know, the only paper on infinite-dimensional variational inequalities related to optimal stopping problems studied by viscosity methods is [16] by Ga̧tarek and Świȩch. There the authors deal with a problem arising in finance and characterize the value function of the optimal stopping problem when the underlying diffusion has a particular form not involving the unbounded term that normally appears in infinite-dimensional stochastic differential equations (cf. [10] for a survey).
It is worth mentioning that attempts to provide some numerical results for this class of problems were recently made in [23]. However, arguments therein are mostly heuristic, proofs are only sketched and some of them seem incorrect.
In the present paper the underlying process X lives in a general Hilbert space \(\mathcal {H}\) and it is governed by the SDE (2.1) with a generic unbounded operator A (which is not even required to be self-adjoint) and with diffusion coefficient \(\sigma \) in a class of functions which depends on A through Assumptions 2.4 and 2.5 (see below the discussion after Remark 2.3). Under mild regularity assumptions on the gain function \(\Theta \), the value function \(\mathcal {U}\) of the corresponding optimal stopping problem (see (2.2)) solves an infinite dimensional variational inequality that is parabolic and highly degenerate on an unbounded domain. We point out that degenerate variational inequalities represent non-standard problems in the context of PDE theory even at the finite dimensional level (cf. [28]). For the associated optimal stopping problems one may consult the work of Menaldi [24, 25]. In our case we show that \(\mathcal {U}\) solves a variational inequality in a specific Sobolev-type space \(\mathcal {V}\) (cf. (4.62)) under a given centered Gaussian measure \(\mu \) (cf. (2.4)). We also obtain uniqueness at least in a special case under more restrictive assumptions on X (see Sect. 5).
In spirit, this work extends [8] to general diffusions in Hilbert spaces and provides the infinite dimensional analogue of the results in [24, 25]. Unlike [8], we consider a finite time-horizon and an SDE with a generic non-linear diffusion coefficient. The problem in [8] is analyzed as a special case of our study and two open questions raised in [8] find positive answers in our Sect. 5.
The paper is organized as follows. In Sect. 2 we set the problem and make the main regularity assumptions on the diffusion X and on the gain function \(\Theta \). Then we obtain regularity of the value function \(\mathcal {U}\). Section 3 deals with the approximation of the SDE (2.1) and of the optimal stopping problem (2.2). The SDE is approximated in two steps: first the unbounded term A is replaced by its Yosida approximation \(A_\alpha \), \(\alpha >0\), and afterwards an n-dimensional reduction of the SDE is obtained. In this approximation procedure the corresponding process \(X^{(\alpha );n}\) gives rise to an optimal stopping problem whose value function we denote by \(\mathcal {U}^{(n)}_\alpha \). By means of purely probabilistic arguments we show that \(\mathcal {U}^{(n)}_\alpha \) converges to the value function \(\mathcal {U}\) of the original optimal stopping problem as \(n\rightarrow \infty \) and \(\alpha \rightarrow \infty \). The variational problem is studied in Sect. 4. Initially we prove that the value function \(\mathcal {U}_\alpha ^{(n)}\) is a solution of a suitable variational inequality in \(\mathbb {R}^n\) and we characterize an optimal stopping time. We also provide a number of important bounds on \(\mathcal {U}_\alpha ^{(n)}\), its time derivative and its gradient, by means of penalization methods. Section 4.3 is entirely devoted to proving that our original value function \(\mathcal {U}\) solves a suitable infinite-dimensional variational problem. The result is obtained by taking the limit as \(n\rightarrow \infty \) and \(\alpha \rightarrow \infty \) of the variational problem detailed in Sects. 4.1 and 4.2. Both analytical and probabilistic tools are adopted to carry out the proofs and to characterize an optimal stopping time. In Sect. 5 uniqueness of the solution to the variational problem is proved for a specific class of diffusion processes. The paper is completed by a technical Appendix containing some proofs.
2 Setting and Preliminary Estimates
Let \(\mathcal {H}\) be a separable Hilbert space with scalar product \(\langle \cdot ,\cdot \rangle _{\mathcal {H}}\) and induced norm \(\Vert \cdot \Vert _{\mathcal {H}}\). Let \(A:D(A)\subset \mathcal {H}\rightarrow \mathcal {H}\) be the infinitesimal generator of a strongly continuous semigroup of operators \(\{S(t),t\ge 0\}\) on \(\mathcal {H}\) (cf. [26]), where D(A) denotes its domain. Notice that D(A) is dense in \(\mathcal {H}\). Let \(\{\varphi _1,\varphi _2,\ldots \}\) be an orthonormal basis of \(\mathcal {H}\) with \(\varphi _i\in D(A)\), \(i=1,2,\ldots \).
We now consider a stochastic framework. Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a complete probability space and let \(W:=(W^0,W^1,W^2,\ldots )\) be a sequence of independent, real, standard Brownian motions on it. The filtration generated by W is \(\{\mathcal {F}_t,t\ge 0\}\) and it is completed by the null sets. Fix a finite horizon \(T>0\) and take a continuous map \(\sigma :\mathcal {H}\rightarrow \mathcal {H}\) whose regularity will be specified later in this section (cf. Assumption 2.5). Consider the stochastic differential equation (SDE)
in \(\mathcal {H}\). We denote by \(X^x\) a mild solution of (2.1) (see [10]). When the starting time is t rather than zero the solution is denoted by \(X^{t,x}\). To simplify the exposition we have chosen an SDE driven by a 1-dimensional Brownian motion; however, our results may also be extended to \(\mathcal {H}\)-valued Brownian motions with trace-class covariance operator. In this paper we will rely on the infinite sequence of Brownian motions W to find finite dimensional approximations of \(X^x\) driven by SDEs similar to (2.1) but with Brownian motion \(\overline{W}^{(n)}:=(W^0,\ldots ,W^n)^\intercal \) instead of \(W^0\).
We aim to study the infinite dimensional optimal stopping problem
with \(\tau \) a stopping time with respect to the filtration \(\{\mathcal {F}_t,t\in [0,T]\}\) and with gain function \(\Theta :[0,T]\times \mathcal {H}\rightarrow \mathbb {R}\) such that \(\Theta \ge 0\) and \((t,x)\mapsto \Theta (t,x)\) is continuous. Although infinite-dimensional optimal stopping problems like (2.2) have been proposed by several authors (see, for example, [2, 8, 16, 31, 32]), here we provide an alternative method to characterize the value function \(\mathcal {U}\). Our results might be extended to the case of a discounted gain function, provided the discount factor is a Lipschitz-continuous, non-negative function of X. In order to work out problem (2.2) we need to specify some properties of \(\Theta \), \(\sigma \) and A. For that we introduce suitable Gauss-Sobolev spaces.
Define the positive, linear operator \(Q:\mathcal {H}\rightarrow \mathcal {H}\) by
with \(\sum ^\infty _{i=1}{\lambda _i}<\infty \); i.e., Q is of trace class. Consider the centered Gaussian measure \(\mu \) with covariance operator Q (cf. [5, 9, 11]); that is, the restriction to the vectors \(x\in \ell _2\) of the infinite product measure
For \(1\le p<+\infty \) and \(f:\mathcal {H}\rightarrow \mathbb {R}\), define the \(L^p(\mathcal {H},\mu )\) norm as
Then, with the notation of [9], Chapter 10, we consider derivatives in the Friedrichs sense; that is,
when the limit exists.
If f belongs to the domain of the gradient operator D and \(\mathcal {H}\) is identified with its dual, then the \(L^p(\mathcal {H},\mu ;\mathcal {H})\) norm of \(Df=\left( D_1f,\,D_2f,\,\ldots \right) \) is defined as
where
One can show that D is closable in \(L^p(\mathcal {H},\mu )\) (cf. [9], Chapter 10). Let \(\overline{D}\) denote the closure of D in \(L^p(\mathcal {H},\mu )\) and define the Sobolev space
Notice, however, that in the case of generalized derivatives, D and \(\overline{D}\) coincide.
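The measure \(\mu \) and the coordinate-wise structure above can be made concrete numerically. The following sketch samples from a centered Gaussian measure with diagonal trace-class covariance Q; the eigenvalues \(\lambda _i=2^{-i}\) and the truncation level N of the basis are assumptions made only for this illustration.

```python
import numpy as np

# Sampling sketch for a centered Gaussian measure mu with trace-class
# covariance Q = diag(lambda_1, lambda_2, ...).  The eigenvalues
# lambda_i = 2**(-i) and the truncation level N are illustrative assumptions.
rng = np.random.default_rng(0)
N = 50                                   # truncate the basis to N coordinates
lam = 2.0 ** -np.arange(1, N + 1)        # eigenvalues of Q; their sum is finite (trace class)

def sample_mu(n_samples):
    """Independent coordinates x_i ~ N(0, lambda_i), i.e. covariance Q."""
    return rng.standard_normal((n_samples, N)) * np.sqrt(lam)

X = sample_mu(100_000)
emp_var = X.var(axis=0)                  # empirical coordinate-wise variances
print(lam.sum() < 1.0, np.allclose(emp_var[:5], lam[:5], rtol=0.05))
```

The empirical variances of the leading coordinates match the eigenvalues of Q, which is all the product-measure description asserts.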
For \(n\in \mathbb {N}\) the finite dimensional counterpart of \(\mu \), \(L^p({\mathcal {H},\mu })\), \(L^p(\mathcal {H},\mu ;\mathcal {H})\) are, respectively,
Remark 2.1
If \(f:\mathbb {R}^n\rightarrow \mathbb {R}\), then
and
Again as in [9], Chapter 10, we define
when the limit exists. For functions f in the domain of \(D^2\) one has \(D^2f:\mathcal {H}\rightarrow \mathcal {L}(\mathcal {H})\) where \(\mathcal {L}(\mathcal {H})\) denotes the space of linear operators on \(\mathcal {H}\). In this paper we do not need an \(L^p\)-space associated to the second derivative.
At this point we can go back to our optimal stopping problem (2.2) and make the following regularity assumptions on the gain function \(\Theta \).
Assumption 2.2
There exist positive constants \(\overline{\Theta }\), \(L_\Theta \), \(L^\prime _\Theta \) such that
Also, \((t,x)\mapsto D^2\Theta (t,x)\) is continuous and \(\sup _{(t,x)\in [0,T]\times \mathcal {H}}\big \Vert D^2\Theta (t,x)\big \Vert _{L}<+\infty \) with \(\Vert \,\cdot \,\Vert _L\) the norm in \(\mathcal {L}(\mathcal {H})\).
Obviously Assumption 2.2 implies
for some positive constant \(C_\Theta \) and all \(1\le p<+\infty \). In what follows condition (2.12) will be often referred to as Lipschitz property of the gain function \(\Theta \).
Remark 2.3
Notice that for the existence results of the variational problem (4.101) associated with the optimal stopping problem (2.2), we could assume
However, such \(\Theta \) may be approximated by regular ones satisfying Assumption 2.2, for example exponential functions as in [9] or cylindrical ones as in [5, 22].
The dynamics (2.1) is fully specified in terms of A and \(\sigma \). In applications of infinite dimensional SDEs the choice of the unbounded operator A is often distinctive of the phenomenon that one wants to describe (for example it may involve the Laplacian in Navier–Stokes equations or the first derivative in delay equations), whereas multiple choices of the diffusion coefficient are possible in several situations (see for instance various versions of Musiela’s model for interest rates). In our setting we allow for a very general operator A at the cost of restricting the class of admissible diffusion coefficients \(\sigma \). In fact, given A and denoting by \(A^*\) its adjoint operator, we construct Q verifying the following
Assumption 2.4
The covariance operator Q of (2.3) is such that
The above condition is needed in Sect. 4; however, such a Q always exists. Indeed, given an orthonormal basis \((\varphi _j)_{j\in \mathbb {N}}\subset D(A)\) of \(\mathcal {H}\), the operator Q is constructed by picking its eigenvalues \((\lambda _i)_{i\in \mathbb {N}}\) so that \(\sum _{j=1}^\infty \lambda _j\big \Vert A\varphi _j\big \Vert ^2_\mathcal {H}<+\infty \), which is equivalent to saying that (2.16) holds. Once Q is constructed, the class of diffusion coefficients \(\sigma \) is determined by
Assumption 2.5
The diffusion coefficient of (2.1) is such that
Clearly (1) includes state-dependent \(\sigma \).
Remark 2.6
Assumption 2.5 is redundant in the case of constant diffusion coefficients. In fact for any unbounded operator A and any constant \(\sigma \in D(A)\) one can pick an orthonormal basis \((\varphi _j)_{j\in \mathbb {N}}\) with \(\varphi _1:= \sigma /\Vert \sigma \Vert _\mathcal {H}\) and construct Q as in (2.3) with \(\lambda _1=1\).
Example 2.7
In a version of the Musiela model \(\mathcal {H}=L^2_\alpha (\mathbb {R}_+)\) is an \(L^2\)-space with exponential weight \(e^{-\alpha x}\). An orthonormal basis \((\varphi _j)_{j\in \mathbb {N}}\) may be constructed from polynomials by the Gram–Schmidt method, and the unbounded operator is \((Af)(x) =f'(x) \) for \(f\in D(A) \). The norm \(p_j:=\Vert A\varphi _j\Vert _\mathcal {H}\) is well defined and finite for all \(j\in \mathbb {N}\), hence it is enough to take \(\lambda _j:=1/(j \,p_j)^2\) for all \(j\ge j_0\) for some \(j_0\in \mathbb {N}\), and \(\gamma \) according to (2) of Assumption 2.5.
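The recipe of Example 2.7 can be checked numerically. In the sketch below a finite-dimensional random matrix stands in for the operator A (an assumption made only for illustration; the operator in the example is unbounded): with \(\lambda _j=1/(j\,p_j)^2\) the weighted sum \(\sum _j\lambda _j\Vert A\varphi _j\Vert ^2_\mathcal {H}\) reduces to \(\sum _j 1/j^2\), which is finite, so a Q verifying Assumption 2.4 is obtained.

```python
import numpy as np

# Numerical check of the recipe in Example 2.7 / Assumption 2.4.  A random
# n x n matrix stands in for the operator A (illustrative assumption), and
# the standard basis e_j plays the role of (phi_j).
rng = np.random.default_rng(1)
n = 200
A = rng.standard_normal((n, n))
p = np.linalg.norm(A, axis=0)                  # p_j = ||A phi_j|| for phi_j = e_j

lam = 1.0 / (np.arange(1, n + 1) * p) ** 2     # lambda_j = 1/(j p_j)^2
weighted = lam * p ** 2                        # lambda_j ||A phi_j||^2 = 1/j^2
print(np.allclose(weighted, 1.0 / np.arange(1, n + 1) ** 2),
      weighted.sum() < np.pi ** 2 / 6)         # partial sums of 1/j^2 stay below pi^2/6
```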
Remark 2.8
The second condition in Assumption 2.5 may be substantially relaxed throughout the paper by considering \(\gamma \) Lipschitz continuous with sublinear growth; however, for simplicity we will not do so.
Under Assumption 2.5 we have existence and uniqueness of a mild solution \(X^x\) to (2.1) (cf. [10]). From now on and unless otherwise specified (see Sect. 5) we will take Assumptions 2.2 and 2.5 as standing assumptions.
Below we obtain some preliminary estimates and some regularity properties of the value function \(\mathcal {U}\).
Lemma 2.9
Let \(X^x\) and \(X^y\) be the mild solutions of (2.1) starting at x and y, respectively. Then
where the positive constant \(C_{p,T}\) depends only on p and T.
Proof
The proof of (2.17) follows from [10], Theorem 7.4, whereas the proof of (2.18) is a consequence of [10], Theorem 9.1 and a simple application of Jensen’s inequality. \(\square \)
Proposition 2.10
The value function \(\mathcal {U}(t,x)\) is non-negative and uniformly bounded with the same upper bound as \(\Theta \), i.e.
Moreover, there exists \(L_\mathcal {U}>0\) such that
Proof
The first claim is obvious. To show (2.20) take \(x,y\in \mathcal {H}\) and fix \(t\in [0,T]\). Then
by (2.12). Similarly for \(\mathcal {U}(t,y)-\mathcal {U}(t,x)\); hence
The coefficients in (2.1) are time-homogeneous, hence
and (2.20) follows with \(L_\mathcal {U}=L_{\Theta }\,C_{1,T}\) (cf. (2.18)). \(\square \)
3 The Approximation Scheme
In this section we provide an algorithm for the finite dimensional reduction of the optimal stopping problem (2.2). The algorithm requires two separate steps (a similar approach was used for instance in [17] in a different context). First, we obtain a Yosida approximation of the unbounded operator A by bounded operators \(A_\alpha \); then we provide a finite dimensional reduction of the SDE. At each step a corresponding optimal stopping problem is studied.
3.1 Yosida Approximation
A natural way to deal with an unbounded linear operator is to introduce its Yosida approximation, which does not require any further assumptions. The Yosida approximation of A is defined as \(A_\alpha :=\alpha A(\alpha I-A)^{-1}\), for \(\alpha >0\) (cf. [26]). The corresponding SDE is
which admits a unique strong solution, \(X^{(\alpha )x}\), since \(A_\alpha \) is a bounded linear operator. That is,
Clearly a strong solution is also a mild solution (cf. [10]), hence \(X^{(\alpha )x}\) might be equivalently interpreted as
Similarly \(X^{(\alpha )t,x}\) will denote the solution starting at time t from x. The following important convergence result is proved in [10], Proposition 7.5, and is recalled here for completeness.
Proposition 3.1
Let \(X^x\) be the unique mild solution of equation (2.1) and \(X^{(\alpha )x}\) the unique strong solution of equation (3.1). For \(1\le p<\infty \), the following convergence holds
We define \(\mathcal {U}_\alpha \) to be the value function of the optimal stopping problem corresponding to \(X^{(\alpha )x}\),
Notice that \(\mathcal {U}_\alpha \) satisfies (2.19) and (2.20) with the same constants. We have the convergence of \(\mathcal {U}_\alpha \) to \(\mathcal {U}\) (cf. (2.2)) as \(\alpha \rightarrow \infty \) both uniformly with respect to t and in \(L^p(0,T;L^p(\mathcal {H},\mu ))\)-norms.
Theorem 3.2
The following convergence results hold,
Proof
The arguments are similar to those used in the proof of Proposition 2.10. In fact by the Lipschitz property of the gain function \(\Theta \) and the time-homogeneous character of the processes we have
Since \(L_\mathcal {U}\) is independent of t, the uniform convergence (3.4) follows from Proposition 3.1. To prove (3.5) it suffices to apply the dominated convergence theorem, since \(\mathcal {U}_\alpha \) is uniformly bounded by \(\overline{\Theta }\) (cf. (2.11)). \(\square \)
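The convergence underlying Theorem 3.2 can be illustrated in finite dimensions, where \(A_\alpha =\alpha A(\alpha I-A)^{-1}\) is an explicit matrix. The symmetric negative definite matrix below is an assumption standing in for the unbounded generator of the paper.

```python
import numpy as np

# Finite-dimensional illustration of the Yosida approximation
# A_alpha = alpha * A * (alpha*I - A)^{-1}.  The symmetric negative definite
# matrix A is a toy stand-in for the unbounded generator.
rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = -(M @ M.T) - np.eye(n)          # eigenvalues <= -1: generates a contraction semigroup
I = np.eye(n)

def yosida(alpha):
    return alpha * A @ np.linalg.inv(alpha * I - A)

x = rng.standard_normal(n)
errs = [np.linalg.norm(yosida(a) @ x - A @ x) for a in (1e1, 1e2, 1e3, 1e4)]
print(all(e2 < e1 for e1, e2 in zip(errs, errs[1:])))   # A_alpha x -> A x as alpha grows
```

In this symmetric setting the error is \(\Vert A^2(\alpha I-A)^{-1}x\Vert \), which decays like \(1/\alpha \), matching the bounded-operator heuristic behind Proposition 3.1.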
Corollary 3.3
If \(\mathcal {U}_\alpha \in C_b([0,T]\times \mathcal {H})\) for all \(\alpha >0\), then \(\mathcal {U}_\alpha \rightarrow \mathcal {U}\) as \(\alpha \rightarrow \infty \), uniformly on compact subsets \([0,T]\times \mathcal {K}\subset [0,T]\times \mathcal {H}\). Moreover \(\mathcal {U}\in C_b([0,T]\times \mathcal {H})\).
Proof
Fix \(x\in \mathcal {H}\), then (3.4) implies \(\mathcal {U}(\cdot ,x)\in C_b([0,T];\mathbb {R})\). For each \(\alpha >0\) define
then \(F_\alpha (x)\rightarrow 0\) as \(\alpha \rightarrow \infty \) by (3.4). The family \((F_\alpha )_{\alpha >0}\) is equibounded and equicontinuous since (2.19) and (2.20) hold for both \(\mathcal {U}_\alpha \) and \(\mathcal {U}\), and
Then \(\mathcal {U}_\alpha \) converges uniformly to \(\mathcal {U}\), as \(\alpha \rightarrow \infty \), on compact subsets \([0,T]\times \mathcal {K}\) ([13], Theorem 7.5.6); that is
Hence, being the uniform limit of bounded continuous functions, \(\mathcal {U}\) is continuous on any compact subset \([0,T]\times \mathcal {K}\) (cf. [13], Theorem 7.2.1). That and (2.20) imply the continuity of \(\mathcal {U}\) on \([0,T]\times \mathcal {H}\). \(\square \)
3.2 Finite Dimensional Reduction
For each \(n\in \mathbb {N}\) let us consider the finite dimensional subset \(\mathcal {H}^{(n)}:={ span}\{\varphi _1,\varphi _2,\ldots ,\varphi _n\}\) and the orthogonal projection operator \(P_n:\mathcal {H}\rightarrow \mathcal {H}^{(n)}\). We approximate the diffusion coefficients of (3.1), respectively, by \(\sigma ^{(n)}:=(P_n\sigma )\circ P_n\) and \(A_{\alpha ,n}:=P_nA_\alpha P_n\). Notice that \(A_{\alpha ,n}\) is a bounded linear operator on \(\mathcal {H}^{(n)}\). We define the process \(X^{(\alpha )x;n}\) as the unique strong solution of the SDE on \(\mathcal {H}^{(n)}\) given by
where \((\epsilon _n)_{n}\) is a sequence of positive numbers such that
Obviously \(X^{(\alpha )x;n}\) lives in the finite dimensional subspace \(\mathcal {H}^{(n)}\) but it may still be seen as a process in \(\mathcal {H}\).
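A hedged sketch of this finite dimensional reduction: the projected drift \(P_nA_\alpha P_n\) becomes an \(n\times n\) matrix and the resulting SDE, driven by the single Brownian motion \(W^0\), can be simulated by an Euler–Maruyama scheme. The drift, diffusion coefficient and data below are toy assumptions, and the regularizing \(\epsilon _n\)-terms of (3.6) are omitted.

```python
import numpy as np

# Euler-Maruyama sketch of a Galerkin-type reduction: the projected drift
# P_n A_alpha P_n is an n x n matrix and the SDE is driven by the single
# Brownian motion W^0.  Drift, diffusion and data are toy assumptions; the
# regularizing epsilon_n-terms of (3.6) are omitted.
rng = np.random.default_rng(3)
n, T, steps = 4, 1.0, 1000
dt = T / steps
A_n = -np.eye(n)                          # stand-in for P_n A_alpha P_n
sigma = lambda x: 0.2 * np.tanh(x)        # Lipschitz diffusion coefficient (Assumption 2.5-style)

x = np.ones(n)
for _ in range(steps):
    dW0 = np.sqrt(dt) * rng.standard_normal()
    x = x + A_n @ x * dt + sigma(x) * dW0     # one Euler-Maruyama step
print(x.shape == (n,), bool(np.all(np.isfinite(x))))
```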
Remark 3.4
Notice that at each time \(t\in [0,T]\), \(X_t^{(\alpha )\,x;n}\) is not the projection of the process \(X^{(\alpha )x}_t\) on the finite dimensional subspace. In fact, a process with that property would not necessarily be Markovian. Hence \(X^{(\alpha )x;n}\) has to be considered as an auxiliary diffusion process which is used to approximate the original one.
Proposition 3.5
It holds that
uniformly with respect to x on compact subsets of \(\mathcal {H}\).
Proof
Since \(X^{(\alpha )x;n}\) and \(X^{(\alpha )x}\) are both strong solutions, i.e.
and (3.2) holds, we have
where we used the fact that \(A_{\alpha ,n}X^{(\alpha )x;n}=P_nA_\alpha X^{(\alpha )x;n}\). Denote by \(\Vert \,\cdot \,\Vert _{L}\) the norm of linear operators on \(\mathcal {H}\). We use Hölder’s inequality to estimate the time-integrals, then take the supremum over \(t\in [0,T]\) and the expected value. By isometry of the stochastic integral and Fubini’s theorem we obtain
By Assumption 2.5 the diffusion coefficient is Lipschitz and we denote by \(L_\sigma >0\) its Lipschitz constant. Then we get
A straightforward application of Gronwall’s lemma gives
for some positive constant \(C_T\) and with
a continuous real function. The right hand side converges to zero as \(n\rightarrow \infty \) by dominated convergence and condition (3.7) on \((\epsilon _n)_n\). Since \(M_n(x)\) decreases to zero as \(n\rightarrow \infty \), Dini’s theorem guarantees uniform convergence on any compact subset \(\mathcal {K}\subset \mathcal {H}\). \(\square \)
Remark 3.6
For any starting time \(t\in [0,T]\), the previous proposition and the arguments of its proof still hold for \(X^{(\alpha )t,x;n}\) and \(X^{(\alpha )t,x}\), thanks to the time-homogeneous property of equations (3.1) and (3.6).
For \(n\ge 1\) define \(\Theta ^{(n)}:[0,T]\times \mathcal {H}\rightarrow \mathbb {R}\) by
(cf. (3.6)). Of course, \(P_nx^{(n)}=x^{(n)}\), hence \(\Theta ^{(n)}(t,\,\cdot \,)=\Theta (t,\,\cdot \,)\) on \(\mathcal {H}^{(n)}\). However, in what follows it is convenient to use the notation \(\Theta ^{(n)}\) since this is a gain function on \(\mathcal {H}^{(n)}\) and it will occur in the variational formulation of a finite dimensional optimal stopping problem approximating (3.3). It is not hard to see that (2.12) and Dini’s theorem imply
Remark 3.7
There is an isomorphism \(\mathcal {I}_n:(\mathcal {H}^{(n)},\Vert \,\cdot \,\Vert _\mathcal {H})\rightarrow (\mathbb {R}^n,\Vert \,\cdot \,\Vert _{\mathbb {R}^n})\); in fact, for any \(x\in \mathcal {H}^{(n)}\) we may define \(x_i:=\langle x,\varphi _i\rangle _{\mathcal {H}}\), \(i=1,2,\ldots ,n\), and \(\mathcal {I}_n x:=(x_1,\ldots ,x_n)\).
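In coordinates, \(\mathcal {I}_n\) is simply the map to the coefficient vector, and it is an isometry. A small numerical check follows; the orthonormal basis (the columns of a random orthogonal matrix) is an illustrative assumption.

```python
import numpy as np

# The isomorphism I_n of Remark 3.7 in coordinates: x in H^(n) is identified
# with its coefficient vector (<x, phi_1>, ..., <x, phi_n>), and the map
# preserves the norm.  The basis below is an illustrative assumption.
rng = np.random.default_rng(4)
n = 5
Q_mat, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal basis as columns
c = rng.standard_normal(n)
x = Q_mat @ c                       # x = sum_i c_i phi_i
I_n_x = Q_mat.T @ x                 # coordinates <x, phi_i>
print(np.allclose(I_n_x, c), np.isclose(np.linalg.norm(x), np.linalg.norm(I_n_x)))
```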
Let \(\mathcal {U}^{(n)}_\alpha \) be the value function of the optimal stopping problem
Obviously \(\mathcal {U}_\alpha ^{(n)}\) may also be seen as a function defined on \([0,T]\times \mathbb {R}^{n}\). Again, as for \(\mathcal {U}_\alpha \), we point out that \(\mathcal {U}^{(n)}_\alpha \) satisfies (2.19) and (2.20) with the same constants. The value function \(\mathcal {U}^{(n)}_\alpha \) converges to \(\mathcal {U}_\alpha \) of (3.3) as \(n\rightarrow \infty \). In fact results similar to Theorem 3.2 and Corollary 3.3 hold.
Theorem 3.8
The following convergence results hold,
i.e. the convergence is uniform on any compact subset \([0,T]\times \mathcal {K}\), and
Proof
The proof follows along the same lines as the proof of Theorem 3.2 since \(\Theta ^{(n)}(t,X^{(\alpha )t,x;n}_s)=\Theta (t,X^{(\alpha )t,x;n}_s)\), \(s\ge t\). Then (3.13) follows from the uniform convergence in Proposition 3.5, and (3.14) follows from dominated convergence. \(\square \)
As a consequence we have
Corollary 3.9
If \(\mathcal {U}^{(n)}_\alpha \in C_b([0,T]\times \mathcal {H}^{(n)})\) for all \(n\in \mathbb {N}\), then \(\mathcal {U}_\alpha \in C_b([0,T]\times \mathcal {H})\).
Proof
Recall that \((\mathcal {U}^{(n)}_\alpha (t,x^{(n)}))_{n}\) is uniformly bounded (cf. Proposition 2.10) and (3.13) holds. Hence [13], Theorem 7.2.1 guarantees the continuity of \(\mathcal {U}_\alpha \) on \([0,T]\times \mathcal {K}\). Arguments as in Corollary 3.3 provide the continuity on \([0,T]\times \mathcal {H}\). \(\square \)
Later in the paper we will prove that \(\mathcal {U}^{(n)}_\alpha \) is indeed continuous (cf. Theorem 4.12).
4 Infinite Dimensional Variational Inequality: An Existence Result
In this section we prove that the value function \(\mathcal {U}\) of (2.2) is a strong solution (in the sense of [4]) of a parabolic infinite dimensional variational inequality on \([0,T]\times \mathcal {H}\). We start by considering finite-dimensional bounded domains and for those we employ results by [4]. Then we pass to finite-dimensional unbounded domains, and hence to infinite-dimensional ones by considering solutions in specific Gauss-Sobolev spaces. We deal with uniqueness in Sect. 5.
4.1 Finite-Dimensional, Bounded Domains: General Results
When dealing with variational problems on finite dimensional bounded domains, we find bounds which are uniform with respect to the order of the approximation and the size of the domain. Recall the finite dimensional SDE (3.6). Let \(n\in \mathbb {N}\) and fix \(\alpha >0\). Let \(\mathcal {O}_R\) be the open ball in \(\mathbb {R}^n\) with center in the origin and with radius R. Define \(\tau _{R}(t,x)\) to be the first exit time from \(\mathcal {O}_R\), i.e.
We are slightly abusing the notation by considering \(\mathcal {H}^{(n)}\sim \mathbb {R}^n\) and \(X^{(\alpha )t,x;n}\in \mathbb {R}^n\). For simplicity we set \(\tau _R:=\tau _R(t,x)\) and we introduce the optimal stopping problem arrested at \(\tau _R\),
The next result is similar to Theorem 3.8 and its proof is provided in the Appendix.
Proposition 4.1
The function \(\mathcal {U}^{(n)}_{\alpha ,R}\) converges to \(\mathcal {U}^{(n)}_{\alpha }\) as \(R\rightarrow \infty \), uniformly on every compact subset \([0,T]\times \mathcal {K}\subset [0,T]\times \mathbb {R}^n\). Moreover if \(\big (\mathcal {U}^{(n)}_{\alpha ,R}\big )_{R>0}\subset C_b([0,T]\times \mathbb {R}^n)\), then \(\mathcal {U}^{(n)}_{\alpha }\in C_b([0,T]\times \mathbb {R}^n)\).
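The exit time \(\tau _R(t,x)\) of (4.1) is straightforward to approximate by Monte Carlo. The sketch below uses a toy Ornstein–Uhlenbeck-type dynamics in \(\mathbb {R}^2\) (an assumption for illustration, not the process \(X^{(\alpha )t,x;n}\)) and records the first discrete time the path leaves \(\mathcal {O}_R\), capped at the horizon T.

```python
import numpy as np

# Monte Carlo sketch of the exit time tau_R of (4.1): simulate an Euler scheme
# and record the first discrete time the path leaves the ball O_R, capped at T.
# The Ornstein-Uhlenbeck-type dynamics and the radius R are toy assumptions.
rng = np.random.default_rng(5)
n, T, steps, R = 2, 1.0, 500, 1.5
dt = T / steps

def tau_R(x0):
    x, t = np.array(x0, dtype=float), 0.0
    for _ in range(steps):
        if np.linalg.norm(x) >= R:
            return t                                   # first exit from O_R
        x = x - x * dt + np.sqrt(dt) * rng.standard_normal(n)
        t += dt
    return T                                           # no exit before the horizon

samples = [tau_R([1.0, 0.0]) for _ in range(200)]
print(all(0.0 <= s <= T + 1e-9 for s in samples))
```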
Denote by \(C^2_c(\mathbb {R}^n)\) the set of all \(C^2\)-functions on \(\mathbb {R}^n\) with compact support. The infinitesimal generator of the diffusion \(X^{(\alpha )x;n}\) is
for \(g\in C^2_c(\mathbb {R}^n)\). Notice that
since \(W^0\) is a one dimensional Brownian motion. Moreover \(\mathcal {L}_{\alpha ,n}\) is a uniformly elliptic operator. The bilinear form associated to the operator \(\mathcal {L}_{\alpha ,n}\) is
for \(u,w\in H^1_0(\mathcal {O}_R)\) (cf. [4] for the definition of \(H^1_0\)),
where \(\delta _{i,j}=0\) for \(i\ne j\) and \(\delta _{i,i}=1\). Denote by \((\cdot ,\cdot )\) the scalar product in \(L^2(\mathcal {O}_R)\). From Assumption 2.5 and uniform ellipticity of \(\mathcal {L}_{\alpha ,n}\), it is not hard to see that there exist constants \(\zeta _{\alpha ,n,R},C_{\alpha ,n,R},C^\prime _{\alpha ,n,R}>0\) such that
These properties guarantee well-posedness of the variational problem in the following proposition.
Define the closed convex set
and set
We expect \(u^{(n)}_{\alpha ,R}:=\mathcal {U}^{(n)}_{\alpha ,R}-\Theta ^{(n)}\) to solve an obstacle problem with null obstacle. Now (4.6), (4.7) and the regularity of \(f_{\alpha ,n}\) in (4.10) are sufficient to apply [4], Chapter 3, Theorems 2.2, 2.13, Corollaries 2.2, 2.3, 2.4 to obtain
Proposition 4.2
There exists a unique solution \(\bar{u}\) of the variational problem:
Moreover, \(\bar{u}\!\in \! L^p(0,T;W^{1,p}_0(\mathcal {O}_R))\cap L^p(0,T;W^{2,p}(\mathcal {O}_R))\), \(\displaystyle {\frac{\partial \bar{u}}{\partial t}}\!\in \! L^p(0,T;L^{p}(\mathcal {O}_R))\) for all \(1\le p<\infty \) and \(\bar{u}\in C([0,T]\times \overline{\mathcal {O}}_R)\).
Corollary 4.3
The function \(\bar{u}\) coincides with the function \(u^{(n)}_{\alpha ,R}\) and uniquely solves, in the almost everywhere sense, the obstacle problem
Moreover, an optimal stopping time for \(\mathcal {U}^{(n)}_{\alpha ,R}\) of (4.2) is
and
The proof follows from Proposition 4.2 and is outlined in the Appendix for completeness.
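The stopping rule in Corollary 4.3, stop as soon as the value function meets the gain function, can be illustrated on a discrete toy model: by backward induction \(V_t=\max (\Theta _t,\mathbb {E}[V_{t+1}])\), and an optimal time is the first t at which \(V_t=\Theta _t\). The symmetric random walk and the put-style gain below are assumptions made only for this sketch.

```python
import numpy as np

# Discrete illustration of the stopping rule in Corollary 4.3: backward
# induction gives V_t = max(Theta_t, E[V_{t+1}]); an optimal time is the
# first t with V_t = Theta_t.  Random-walk model and gain are toy assumptions.
steps = 50
x = np.linspace(-3.0, 3.0, 241)
theta = np.maximum(1.0 - x, 0.0)                    # toy gain function
V = theta.copy()                                    # terminal condition V_T = Theta
for _ in range(steps):
    cont = 0.5 * (np.roll(V, 1) + np.roll(V, -1))   # E[V_{t+1}] under a +-dx walk
    cont[0], cont[-1] = V[0], V[-1]                 # crude boundary handling
    V = np.maximum(theta, cont)                     # stop or continue
print(bool(np.all(V >= theta)), bool(np.any(np.isclose(V, theta))))
```

The set where \(V=\Theta \) (non-empty here) is the discrete analogue of the stopping region entered at the optimal time.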
Remark 4.4
Notice that when \(\Theta \) fulfils only (2.15), the variational inequality still makes sense by considering \(f_{\alpha ,n}\) as a map from [0, T] to the dual space of \(W^{1,p}\).
4.1.1 Penalization Method and Some Uniform Bounds
Now we would like to take limits in the variational inequalities as \(R\rightarrow \infty \), \(n\rightarrow \infty \), \(\alpha \rightarrow \infty \), respectively. For that we need bounds on \(u^{(n)}_{\alpha ,R}\), \(Du^{(n)}_{\alpha ,R}\) and \(\frac{\partial }{\partial \,t}u^{(n)}_{\alpha ,R}\) uniformly in \((R,n,\alpha )\). The first two bounds are obtained in the next Proposition.
Recall Remark 2.1 and the definition of \(W^{1,p}(\mathcal {H},\mu )\) of (2.9). Then for each \(R>0\), consider the zero extension outside \(\mathcal {O}_R\) of \(u^{(n)}_{\alpha ,R}\) and still denote it by \(u^{(n)}_{\alpha ,R}\) for simplicity.
Proposition 4.5
The family \(\big (u^{(n)}_{\alpha ,R}\big )_{R,n,\alpha }\) is bounded in \(L^p(0,T;W^{1,p}(\mathcal {H},\mu ))\) for \(1\le p<+\infty \) uniformly with respect to \((R,n,\alpha )\in (0,+\infty )\times \mathbb {N}\times (0,+\infty )\).
Proof
Clearly we may think of \(u^{(n)}_{\alpha ,R}\) as a function defined on \([0,T]\times \mathcal {H}\). Then from Assumption 2.2 and (4.9) it follows that \({u}^{(n)}_{\alpha ,R}\) is bounded by \(2\overline{\Theta }\) for all \((R,n,\alpha )\in (0,+\infty )\times \mathbb {N}\times (0,+\infty )\), uniformly in \((t,x)\in [0,T]\times \mathcal {H}\); i.e. \(\Vert {u}^{(n)}_{\alpha ,R}(t)\Vert _{L^p(\mathcal {H},\mu )}\le 2\overline{\Theta }\), \(t\in [0,T]\). It is easy to see that
Moreover, for all \((R,n,\alpha )\in (0,+\infty )\times \mathbb {N}\times (0,+\infty )\), \(u^{(n)}_{\alpha ,R}\) is Lipschitz in the space variable, uniformly with respect to \(t\in [0,T]\), with Lipschitz constant less than or equal to \(L_\mathcal {U}+L_\Theta \). It follows that \(\big \Vert Du^{(n)}_{\alpha ,R}(t,x^{(n)})\big \Vert _{\mathcal {H}}=\big \Vert Du^{(n)}_{\alpha ,R}(t,x^{(n)})\big \Vert _{\mathbb {R}^n}\le L_\mathcal {U}+L_\Theta \) for a.e. \((t,x^{(n)})\in [0,T]\times \mathbb {R}^{n}\). Since \(\mu \) restricted to \(\mathbb {R}^n\) is equivalent to the Lebesgue measure (cf. Remark 2.1) it follows that \(\big \Vert Du^{(n)}_{\alpha ,R}(t)\big \Vert _{L^p(\mathcal {H},\mu ;\mathcal {H})}\le L_\mathcal {U}+L_\Theta \), \(t\in [0,T]\) and
\(\square \)
We now go through a number of steps (including penalization) in order to find a bound on \(\frac{\partial }{\partial \,t}\,u^{(n)}_{\alpha ,R}\). First, by arguments as in [24] we have
Lemma 4.6
Let \(\nu \) be any real adapted process taking values in [0, 1], let \(\varepsilon >0\), \(t\in [0,T]\), and \(x^{(n)}\), \(y^{(n)}\in \mathbb {R}^n\); then
where \(\tau ^x_R:=\tau _R(t,x)\) and \(\tau ^y_R:=\tau _R(t,y)\) (cf. (4.1)) and \(L_f>0\) only depends on \(L_\Theta \) and \(L_\mathcal {U}\) (cf. Assumption 2.2 and Proposition 2.10).
Proof
The proof is in the Appendix. \(\square \)
Now we need to recall the penalization method used in [4], Chapter 3, Section 2, to obtain existence and uniqueness results for parabolic variational inequalities as in our Proposition 4.2. For fixed \((R,n,\alpha )\) we denote \(u^{R}:=u^{(n)}_{\alpha ,R}\) to simplify notation. In [4] \(u^{R}\) is found in the limit as \(\varepsilon \rightarrow 0\) of functions \(u^{R}_{\varepsilon }\) solving the penalized problem
From now on we consider the zero extension outside \(\mathcal {O}_R\) of \(u^R_\varepsilon \) which we still denote by \(u^R_\varepsilon \). Then \(u^R_\varepsilon \) may be represented as (cf. [4], Chapter 3, Section 4, Theorem 4.4)
where the supremum is taken over all real adapted stochastic processes \(\nu \) with values in [0, 1]. Lipschitz continuity of \(u^R_\varepsilon \) in the space variable, uniformly with respect to time, follows by means of Lemma 4.6. The proof, inspired by [24], is given in the Appendix.
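The effect of the penalty term can be illustrated on a one-dimensional, stationary caricature of the penalized problem (this is not the setting of the paper, only a minimal sketch): a discrete obstacle problem \(-u''\ge 0\), \(u\ge \psi \), is solved by adding the penalty \(\frac{1}{\varepsilon }[\psi -u]^+\) and letting \(\varepsilon \) be small, so that the constraint is violated by at most \(O(\varepsilon )\). The grid, the obstacle \(\psi \) and the active-set iteration below are all illustrative choices, not taken from [4].

```python
import numpy as np

# Penalized obstacle problem on (0,1): -u'' = (1/eps) * max(psi - u, 0),
# u(0) = u(1) = 0, solved by an active-set (semismooth Newton) iteration.
n = 199                                  # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
psi = 0.5 - 4.0 * (x - 0.5) ** 2         # illustrative obstacle, negative at the boundary

# Discrete Dirichlet Laplacian: tridiagonal matrix representing -u''
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h ** 2

eps = 1e-8
u = np.zeros(n)
for _ in range(100):
    active = (u < psi).astype(float)     # region where the penalty acts
    u_new = np.linalg.solve(A + np.diag(active / eps), (active / eps) * psi)
    converged = np.array_equal(u_new < psi, u < psi)
    u = u_new
    if converged:
        break

# As eps -> 0 the penalized solution satisfies u >= psi up to O(eps)
violation = float(np.max(np.maximum(psi - u, 0.0)))
```

The converged solution is harmonic (affine) off the contact set and touches the obstacle up to an \(O(\varepsilon )\) defect, mirroring the role of \(u^R_\varepsilon \) above.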
Lemma 4.7
There exists a constant \(L_P>0\) independent of \((\varepsilon ,R,\alpha ,n)\) such that
In order to get bounds in \(L^p(\mathcal {H},\mu )\) it is convenient to find a formulation of (4.18) in such space. To do so we introduce some notation (cf. Remark 2.1).
Definition 4.8
For \(1< p<\infty \) and \(p'\) such that \(\frac{1}{p}+\frac{1}{p'}=1\), denote by \(\mathcal {V}^p_n\) the space
endowed with the norm
Then \((\mathcal {V}^p_n,\left| \left| \left| \cdot \right| \right| \right| _{p,n})\) is a separable Banach space.
Denote by \((\cdot ,\cdot )_{\mu _n}\) the scalar product in \(L^2(\mathbb {R}^n,\mu _n)\) and, for \(u,w\in \mathcal {V}^p_n\), define the bilinear form associated to the operator \(\mathcal {L}_{\alpha ,n}\) (cf. (4.3)),
with
and \(B^{(n)}_{i,j}\) and \(C^{(n,\alpha )}_{i,j}\) as in (4.5). From (4.4) it follows that
then (4.25) and the isometry \(\mathcal {H}^{(n)}\sim \mathbb {R}^n\) allow us to rewrite the bilinear form (4.23) as
where \(B^{(n)}:=\sigma ^{(n)}\sigma ^{(n)*}+\epsilon _n^2\,I\in \mathcal {L}(\mathcal {H})\), the set of all bounded linear operators on \(\mathcal {H}\), and \(\overline{C}^{(n,\alpha )}\in \mathcal {H}\) is given by
Here \(Q_n:=P_nQP_n\) and \((D\sigma ^{(n)}\cdot \sigma ^{(n)})_i:=\sum _{j=1}^n{(D\sigma ^{(n)})_{i,j}\,\sigma ^{(n)}_j}\), \(i=1,\ldots ,n\). The continuity in \(\mathcal {V}^p_n\) of the bilinear form (4.26) follows from the next result which makes use of Assumption 2.4.
Theorem 4.9
For every \(1<p<\infty \) there exists a constant \(C_{\mu ,\gamma ,p}>0\), depending only on \(\mu \), p and the bounds of \(\gamma \) in Assumption 2.5, such that
for all \(u,w\in L^2(0,T;\mathcal {V}^p_n)\).
Proof
Thanks to Assumption 2.5 and since Q is of trace class (cf. (2.3)), the estimate is straightforward for all the terms in (4.26) except those involving \(\epsilon ^2_nQ^{-1}_n\) and \(A_{\alpha ,n}\). As for the first, notice that, although \(Q^{-1}_n\) becomes unbounded as \(n\rightarrow \infty \), there is no loss of generality in assuming that the sequence \((\epsilon _n)_{n\in \mathbb {N}}\) is such that \(\epsilon _n Q^{-1}_n\rightarrow 0\) as \(n\rightarrow \infty \) (cf. (3.7)). It then remains to look at
Recalling Assumption 2.4 and using Hölder’s inequality we obtain
where the last inequality follows from \(\int _{\mathbb {R}^n}\big |\langle x,y\rangle \big |^2_{\mathcal {H}}\mu _n(dx)=\langle Q_ny,y\rangle _{\mathcal {H}}\) for \(y\in \mathcal {H}\) (see for instance [9], p.13). \(\square \)
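The moment identity \(\int _{\mathbb {R}^n}\big |\langle x,y\rangle \big |^2_{\mathcal {H}}\,\mu _n(dx)=\langle Q_ny,y\rangle _{\mathcal {H}}\) used in the last step can be checked by Monte Carlo sampling in a finite-dimensional truncation. The sketch below is not part of the paper's argument; the eigenvalues \(\lambda _k\) and the test vector y are illustrative.

```python
import numpy as np

# Check:  E_mu |<x, y>|^2 = <Q_n y, y>  for mu_n = N(0, Q_n),
# Q_n = diag(lambda_1, ..., lambda_n), by direct sampling.
rng = np.random.default_rng(0)
n = 5
lam = 1.0 / (1.0 + np.arange(n)) ** 2      # summable, "trace-class-like" eigenvalues
y = np.array([1.0, -2.0, 0.5, 3.0, -1.0])  # illustrative test vector

# Sample x ~ N(0, diag(lam)) and average <x, y>^2
x = rng.standard_normal((200_000, n)) * np.sqrt(lam)
mc = float(np.mean((x @ y) ** 2))

exact = float(np.dot(lam * y, y))          # <Q_n y, y>
rel_err = abs(mc - exact) / exact
```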
For \(v^R\in H^1_0(\mathcal {O}_R)\) we consider its zero extension outside \(\mathcal {O}_R\), again denoted by \(v^R\). Multiplying (4.18) by \(v^R\,\frac{1}{\sqrt{(2\pi )^n\lambda _1\lambda _2\cdots \lambda _n}}\exp { \left( -\sum _{i=1}^n{\frac{x_i^2}{2\lambda _i}}\right) }\) and integrating by parts over \(\mathbb {R}^n\) gives the penalized problem in a weaker form; that is
Following arguments as in [4], Chapter 3, Section 2, p. 246, we finally obtain a bound on \(\frac{\partial }{\partial \,t}u^{(n)}_{\alpha ,R}\).
Proposition 4.10
The family \(\big (\frac{\partial }{\partial \,t}\,u^{(n)}_{\alpha ,R}\big )_{R,n,\alpha }\) is bounded in \(L^2(0,T;L^2(\mathcal {H},\mu ))\), uniformly with respect to \((R,n,\alpha )\in (0,\infty )\times \mathbb {N}\times (0,\infty )\).
Proof
As in [4] one may take \(v^R=\frac{\partial }{\partial \,t}u^R_\varepsilon \), possibly after a regularization or by considering finite differences, since the estimate obtained at the end does not involve second derivatives of \(u^R_\varepsilon \) and is therefore consistent. Plugging such \(v^R\) into (4.30) gives
Next observe that (4.26) implies
where
by symmetry and
By integrating with respect to t over [0, T], recalling that \(u^R_\varepsilon (T,\,\cdot \,)=0\) and rearranging terms one obtains
and therefore
To provide estimates for the terms on the right-hand side of (4.37), notice that by Assumption 2.5 and Lemma 4.7, one gets
with \(C_1>0\) depending only on \(L_P\), \(\mu \) and the bounds on \(\gamma \). Also, Assumption 2.4 and arguments as in the proof of Theorem 4.9 give
with \(C_2>0\) depending only on \(\mu \), T and the bounds on \(\gamma \). Similarly Assumption 2.2 implies
with \(C_3>0\) depending only on \(\mu \), T, \(L_\Theta \), \(L'_\Theta \) and the bounds on \(\gamma \).
Therefore, from (4.36), (4.37), (4.38) and (4.39) it follows that
for a suitable \(C_4>0\) independent of \((\varepsilon ,R,n,\alpha )\). Now, (4.40) holds for \(\frac{\partial }{\partial \,t}u^R\) as well, since the latter is obtained as the weak limit in \(L^2(0,T;L^2(\mathcal {H},\mu ))\) of \(\frac{\partial }{\partial \,t}u^R_\varepsilon \) as \(\varepsilon \rightarrow 0\) (cf. [4], Chapter 3, Section 2.3, p. 239). \(\square \)
4.2 Finite-Dimensional Unbounded Domains
Recall the optimal stopping problem (3.12) and set
From Propositions 4.1, 4.5 and 4.10 it follows
Lemma 4.11
There exists a sequence \((R_i)_{i\in \mathbb {N}}\) such that \(R_i\rightarrow \infty \) as \(i\rightarrow \infty \) and \(u^{(n)}_{\alpha ,R_i}\) converges to \(u^{(n)}_\alpha \) as \(R_i\rightarrow \infty \), weakly in \(L^p(0,T;\mathcal {V}^p_n)\) and strongly in \(L^p(0,T; L^p(\mathbb {R}^n,\mu _n))\), \(1\le p<\infty \). Moreover, \(\frac{\partial \,}{\partial \,t}u^{(n)}_{\alpha ,R_i}\) converges to \(\frac{\partial \,}{\partial \,t}u^{(n)}_{\alpha }\) as \(R_i\rightarrow \infty \), weakly in \(L^2(0,T;L^2(\mathbb {R}^n,\mu _n))\).
In the spirit of [4], Chapter 3, Section 1.11, take \(w_R\in \mathcal {K}_{n,R}\) (cf. (4.8)) and recall that \(u^{(n)}_{\alpha ,R}\) is the unique solution of (4.11). Define \(\tilde{w}_R\in \mathcal {K}_{n,R}\) by
Take \(w=\tilde{w}_R\) in (4.11) and use (4.42) to obtain
For every \(1<p<\infty \), denote by \(\mathcal {K}^p_{n,\mu }\) the closed convex set
We can now extend Proposition 4.2 to the unbounded case, i.e. to \(\mathbb {R}^n\).
Theorem 4.12
For every \(1<p<\infty \), the function \(u^{(n)}_\alpha \) is a solution of the variational problem on \(\mathbb {R}^n\)
Moreover, \(u^{(n)}_\alpha \in C([0,T]\times \mathbb {R}^n)\) and an optimal stopping time for \(\mathcal {U}^{(n)}_\alpha \) of (3.12) is
Proof
Observe that, by arguments on cut-off functions as in [1], Theorem 3.22, for each \(w\in \mathcal {K}^p_{n,\mu }\) there exists a family \((w_{R})_{R>0}\subset \mathcal {K}^p_{n,\mu }\cap \mathcal {K}_{n,R}\) (cf. (4.8)) such that \(w_{R}\rightarrow w\) as \(R\rightarrow \infty \) in \(\mathcal {V}^p_n\). Rewrite the inequality (4.43) as
Consider the sequences \((R_i)_{i\in \mathbb {N}}\) and \((u^{(n)}_{\alpha ,R_i})_{i\in \mathbb {N}}\) of Lemma 4.11 and fix arbitrary \(0\le t_1<t_2\le T\). Then taking limits as \(i\rightarrow \infty \) gives (cf. for instance [6], Proposition 3.5)
As for the last term on the right hand side of (4.47), consider
For the last two integrals argue as above, hence
On the other hand, to the first integral in (4.51) apply arguments similar to those in the proof of Theorem 4.9 to get
with p and \(p'\) as in (4.21) and \(C_p>0\) a suitable constant independent of i, \(\alpha \) and n. It then follows from Proposition 4.5 and Lemma 4.11 that
Now (4.52), (4.53) and (4.55) imply
Therefore (4.47), (4.48), (4.49), (4.50), (4.56) show the convergence of (4.11) to (4.45) since \(t_1\) and \(t_2\) are arbitrary.
The continuity of \(u^{(n)}_\alpha \) follows from Proposition 4.1 and Corollary 4.3. As for the optimality of \(\tau ^\star _{\alpha ,n}(t,x)\), notice that its proof is a simpler version of those of Lemma 4.17 and Theorem 4.18 below, hence it is only outlined here. For any initial data (t, x) one has
by an extension of [4], Chapter 3, Section 3, Theorem 3.7 and by our Proposition 4.1. Since \(\tau ^\star _{\alpha ,n,R}\) is optimal for \(\mathcal {U}^{(n)}_{\alpha ,R}\) and \(\tau ^\star _{\alpha ,n,R}\wedge \tau ^\star _{\alpha ,n}\le \tau ^\star _{\alpha ,n,R}\) \(\mathbb {P}\)-a.s., it follows from (4.14) that
Therefore, Proposition 4.1, the continuity of \(\mathcal {U}^{(n)}_{\alpha }\) and (4.57) provide
by taking limits as \(R\rightarrow \infty \) in (4.58). It follows that \(\tau ^\star _{\alpha ,n}\) is optimal. \(\square \)
Remark 4.13
Notice that for any stopping time \(\sigma \) the same arguments that provide (4.57) also give
Therefore one has
4.3 Infinite Dimensional Domains
4.3.1 The Variational Inequality for Bounded Operator \(A_\alpha \)
Define the infinite-dimensional counterpart of \(\mathcal {V}^p_n\) of Definition 4.8 by setting
Endow \(\mathcal {V}^p\) with the norm
so as to obtain a separable Banach space. Notice that \(\mathcal {V}^p_n\subset \mathcal {V}^p\) by Remark 2.1. Also, by (4.27)
for \(u,w\in L^2(0,T;\mathcal {V}^p)\).
Denote by \(\mathcal {L}_\alpha \) the infinitesimal generator of \(X^{(\alpha )}\) (cf. (3.1)); that is,
The bilinear form associated to (4.65) is the infinite-dimensional counterpart of (4.26) and it is given by
with \(B:=\sigma \sigma ^{*}\), \(\overline{C}^{(\alpha )}=\frac{1}{2}\left( Tr[D\sigma ]_{\mathcal {H}}\sigma +D\sigma \cdot \sigma - 2A_{\alpha }x-\sigma \sigma ^{*}Q^{-1}x\right) \) and \(D\sigma \cdot \sigma \) denotes the action of \(D\sigma \in \mathcal {L}(\mathcal {H})\) on \(\sigma \in \mathcal {H}\).
Let \(w\in L^2(0,T;\mathcal {V}^p)\) and \((w_n)_{n\in \mathbb {N}}\subset L^2(0,T;\mathcal {V}^p)\) be such that \(w_n\rightarrow w\). Then, for arbitrary \(0\le t_1<t_2\le T\), define \(\mathcal {T}_{\alpha ,w}(t_1,t_2)\in L^2(0,T;\mathcal {V}^{p})^*\) and the sequence \((\mathcal {T}^{\,n}_{\alpha ,w}(t_1,t_2))_{n\in \mathbb {N}}\subset L^2(0,T;\mathcal {V}^{p})^*\) by setting
Tedious but straightforward calculations give
Also, recall \(f_{\alpha ,n}\) of (4.10) and set
then it holds
by Assumptions 2.5 and 2.2 and dominated convergence theorem. Finally, similarly to \(\mathcal {K}^{p}_{n,\mu }\) of (4.44), for \(1< p< \infty \) define the closed, convex set
Lemma 4.14
Let \(w\in \mathcal {K}^p_{\mu }\) for some \(1<p<+\infty \). Then there exists a double-indexed sequence \( (w_{k,n})_{k,n\in \mathbb {N}}\subset \mathcal {V}^p\) such that for k fixed, \(w_{k,n}\in \cap _{m\ge n}\mathcal {K}^p_{m,\mu }\).
Moreover,
taking the limits in the prescribed order.
Proof
Since \(D(A^*)\) is dense in \(\mathcal {H}\) the set
is dense in \(\mathcal {V}^p\) (cf. [9], Chapter 10 and [11], Chapter 9). Hence for \(w\in \mathcal {K}^p_{\mu }\) there exists a sequence \((\phi ^{(k)})_{k\in \mathbb {N}}\subset \mathcal {E}_A(\mathcal {H})\) such that \(\phi ^{(k)}\rightarrow w\) in \(\mathcal {V}^p\) as \(k\rightarrow \infty \). Recall the projection \(P_n\) and set \(\phi ^{(k)}_n(x):=\phi ^{(k)}(P_nx)\) for \(n\in \mathbb {N}\). Since \(\phi ^{(k)}\) is a finite linear combination of elements in \(\mathcal {E}_A(\mathcal {H})\), continuous and bounded together with \(D\phi ^{(k)}\), dominated convergence implies \(\phi ^{(k)}_n\rightarrow \phi ^{(k)}\) in \(\mathcal {V}^p\) as \(n\rightarrow \infty \). It follows that \((\phi ^{(k)}_n)_{k,n\in \mathbb {N}}\) is bounded in \(\mathcal {V}^p\) and so is \((\phi ^{(k)}_{n,0})_{k,n\in \mathbb {N}}\) where \(\phi ^{(k)}_{n,0}:=0\vee \phi ^{(k)}_n=[\phi ^{(k)}_n]^+\). Therefore, by taking limits as \(n\rightarrow \infty \) first and as \(k\rightarrow \infty \) afterwards, one obtains weak convergence in \(\mathcal {V}^p\) of \(\phi ^{(k)}_{n,0}\) to some function g. However, \(\big |\phi ^{(k)}_{n,0}-w\big |=\big | [\phi ^{(k)}_{n}]^+-[w]^+\big |\le \big |\phi ^{(k)}_{n}-w\big |\) for all \(x\in \mathcal {H}\), since \(w\ge 0\). Therefore dominated convergence implies \(\phi ^{(k)}_{n,0}\rightarrow w\) in \(L^{p}(\mathcal {H},\mu )\) when the limits are taken in the same order as before, and we may conclude \(g\equiv w\). Clearly, for k fixed, \(\phi ^{(k)}_{n,0}\in \cap _{m\ge n}\mathcal {K}^p_{m,\mu }\) and the lemma follows by setting \(w_{k,n}:=\phi ^{(k)}_{n,0}\). \(\square \)
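The two mechanisms in this proof, cylindrical truncation \(\phi (P_nx)\) and the contraction \(\big |[a]^+-[b]^+\big |\le |a-b|\), can be checked numerically on a truncated Gaussian measure: for \(\phi (x)=\langle c,x\rangle \), the \(L^2(\mu )\) error of \([\phi (P_nx)]^+\) is controlled by the tail \(\sum _{k>n}\lambda _kc_k^2\). The sketch below uses purely illustrative coefficients and is not part of the proof.

```python
import numpy as np

# Truncate phi(x) = <c, x> to the first n coordinates, take positive parts,
# and compare the L^2(mu) error with the tail bound sum_{k>n} lambda_k c_k^2
# (valid because |[a]^+ - [b]^+| <= |a - b|).
rng = np.random.default_rng(1)
N = 30
lam = 1.0 / (1.0 + np.arange(N)) ** 3          # trace-class-like spectrum
c = 1.0 / (1.0 + np.arange(N))                 # coefficients of phi
x = rng.standard_normal((100_000, N)) * np.sqrt(lam)   # samples of (truncated) mu

w = np.maximum(x @ c, 0.0)                     # target w = [phi(x)]^+
errs, tails = [], []
for n in (2, 5, 10):
    proj = x[:, :n] @ c[:n]                    # phi(P_n x)
    errs.append(float(np.mean((np.maximum(proj, 0.0) - w) ** 2)))
    tails.append(float(np.sum(lam[n:] * c[n:] ** 2)))
```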
Recall the value function \(\mathcal {U}_\alpha \) of the optimal stopping problem (3.3) and set \(u_\alpha :=\mathcal {U}_{\alpha }-\Theta \). Then Assumption 2.2, Theorem 3.8 and the same bounds as those employed to obtain Lemma 4.11 provide the following
Lemma 4.15
There exists a sequence \((n_i)_{i\in \mathbb {N}}\) such that \(n_i\rightarrow \infty \) as \(i\rightarrow \infty \) and \(u^{(n_i)}_{\alpha }\) converges to \(u_\alpha \) as \(n_i\rightarrow \infty \), weakly in \(L^p(0,T;\mathcal {V}^p)\) and strongly in \(L^p(0,T; L^p(\mathcal {H},\mu ))\), \(1\le p<\infty \).
Moreover, \(\frac{\partial \,}{\partial \,t}u^{(n_i)}_{\alpha }\) converges to \(\frac{\partial \,}{\partial \,t}u_{\alpha }\) as \(n_i\rightarrow \infty \) weakly in \(L^2(0,T;L^2(\mathcal {H},\mu ))\).
Denote by \((\cdot ,\cdot )_\mu \) the scalar product in \(L^2(\mathcal {H},\mu )\).
Theorem 4.16
For every \(1<p<\infty \) the function \(u_\alpha \) is a solution of the variational problem on \(\mathcal {H}\)
Moreover, \(u_\alpha \in C([0,T]\times \mathcal {H})\).
Proof
The continuity of \(u_\alpha \) is a consequence of Corollary 3.9 and Proposition 4.1. For arbitrary \(w\in \mathcal {K}^p_{\mu }\) take the corresponding approximating sequence \((w_{k,n})_{k,n\in \mathbb {N}}\) given by Lemma 4.14. For \(k\in \mathbb {N}\) arbitrary but fixed, Theorem 4.12, Lemma 4.14 and Remark 2.1 guarantee
for \(m\ge n\) and a.e. \(t\in [0,T]\). In the limit as \(m\rightarrow \infty \), Lemma 4.15, equations (4.68) and (4.70) and arguments similar to those used in the proof of Theorem 4.12 give
The proof now follows from Lemma 4.14 by taking limits as \(n,\,k\rightarrow \infty \) and then dividing by \(t_2-t_1\) and letting \(t_2-t_1\rightarrow 0\). \(\square \)
The existence of an optimal stopping time for \(\mathcal {U}_\alpha \) of (3.3) is obtained by purely probabilistic considerations (cf. Theorem 4.18 below). Two preliminary lemmas are needed. Given \((t,x)\in [0,T]\times \mathcal {H}\), let \(\tau ^\star _{\alpha ,n}(t,x)\) be as in (4.46) and define
Lemma 4.17
For each \((t,x)\in [0,T]\times \mathcal {H}\) there exists a subsequence \((\tau ^\star _{\alpha ,n_j}(t,x))_{j\in \mathbb {N}}\), with \(n_j=n_j(t,x)\), such that \(n_j\rightarrow \infty \) as \(j\rightarrow \infty \) and
Proof
Fix \(x_0\in \mathcal {H}\). There is no loss of generality if we consider the diffusions \(X^{(\alpha )x_0}\) and \(X^{(\alpha )x_0;n}\) starting at time zero as all results remain true for arbitrary initial time t. The proof of this Lemma is adapted from [4], Chapter 3, Section 3, Theorem 3.7 (cf. in particular p. 322).
Using Proposition 3.5, fix \(\Omega _0\subset \Omega \) with \(\mathbb {P}(\Omega _0)=1\) and a subsequence \((n_j)_{j\in \mathbb {N}}\), with \(n_j=n_j(x_0)\), such that
Since the starting point \(x_0\in \mathcal {H}\) is fixed, to simplify the notation in the rest of the proof, we shall write \(\tau ^\star _{\alpha ,n}\) and \(\tau ^\star _{\alpha }\) instead of \(\tau ^\star _{\alpha ,n}(0,x_0)\) and \(\tau ^\star _{\alpha }(0,x_0)\), respectively. The limit (4.76) is trivial if \(\omega ^\prime \in \Omega _0\) is such that \(\tau ^\star _\alpha (\omega ^\prime )=0\). On the other hand, if \(\omega ^\prime \in \Omega _0\) is such that \(\tau ^\star _\alpha (\omega ')>\delta \) for some \(\delta =\delta _{x_0}>0\), then by (4.75)
Since the map \(t\mapsto X^{(\alpha )x_0}_t(\omega ^\prime )\) is continuous and \([0,\tau ^\star _\alpha (\omega ^\prime )-\delta ]\) is a compact set, the set \(\chi ^\delta (\omega ^\prime ):=\{y\in \mathcal {H}:\,y=X^{(\alpha )x_0}_t(\omega ^\prime ),\,t\in [0,\tau ^\star _\alpha (\omega ^\prime )-\delta ]\}\) is a compact subset of \(\mathcal {H}\). Therefore the continuous map \((t,x)\mapsto \mathcal {U}_\alpha (t,x)-\Theta (t,x)\) (cf. Theorem 4.16) attains its minimum on \([0,\tau ^\star _\alpha (\omega ^\prime )-\delta ]\times \chi ^\delta (\omega ^\prime )\); call it \(\rho (\delta ,\omega ^\prime )>0\). Then
Recall from Theorem 3.8 and (3.11) that \(\mathcal {U}^{(n)}_\alpha \) and \(\Theta ^{(n)}\) converge respectively to \(\mathcal {U}_\alpha \) and \(\Theta \), uniformly on compact subsets of \([0,T]\times \mathcal {H}\). Therefore there exists \(n_\rho =n(\rho (\delta ,\omega ^\prime ))\in (n_j)_{j\in \mathbb {N}}\), \(n_\rho >0\) large enough such that
and
Now (4.78), (4.79) and (4.80) imply
On the other hand Assumption 2.2, Proposition 2.10 and the fact that \(P_{n_\rho }X^{(\alpha )x_0;n_\rho }=X^{(\alpha )x_0;n_\rho }\) imply
and
which, together with (4.81) and (4.82), imply
It follows that \(\tau ^\star _{\alpha ,n_\rho }(\omega ^\prime )>\tau ^\star _{\alpha }(\omega ^\prime )-\delta \). Notice that \(\rho (\delta ,\omega ^\prime )\rightarrow 0\) as \(\delta \rightarrow 0\) and hence \(n_\rho \rightarrow \infty \). Therefore \(\tau ^\star _{\alpha ,n_\rho }(\omega ^\prime )\wedge \tau ^\star _{\alpha }(\omega ^\prime )\rightarrow \tau ^\star _{\alpha }(\omega ^\prime )\) as \(n_\rho \rightarrow \infty \), which is equivalent to saying that (4.76) holds along a subsequence. \(\square \)
Notice that arguments as in the proof of (2.20) also give
since the optimal stopping problems (2.2), (3.3) and (3.12) are considered under the same filtration \(\{\mathcal {F}_t,\,t\ge 0\}\).
Theorem 4.18
An optimal stopping time of (3.3) is \(\tau ^\star _{\alpha }(t,x)\) as defined in (4.75). Moreover
Proof
Given the initial data (t, x), we adopt the simplified notation used in the proof of Lemma 4.17; that is, we set \(\tau ^\star _\alpha :=\tau ^\star _{\alpha }(t,x)\) and \(\tau ^\star _{\alpha ,n}:=\tau ^\star _{\alpha ,n}(t,x)\). By Remark 4.13 we have
In (4.87) take the subsequence \((n_j)_{j\in \mathbb {N}}\) of Lemma 4.17 and apply Theorem 3.8 to obtain the convergence of \(\mathcal {U}^{(n_j)}_\alpha (t,x^{(n_j)})\) to \(\mathcal {U}_\alpha (t,x)\) as \(j\rightarrow \infty \).
On the other hand
where the first term on the right hand side goes to zero as \(j\rightarrow \infty \) by (2.20), Proposition 3.5 and Jensen’s inequality. Similarly, the second term goes to zero by (4.85) and dominated convergence, and the third term goes to zero by dominated convergence and Lemma 4.17.
In conclusion, by taking the limits in (4.87) along the subsequence \((n_j)_{j\in \mathbb {N}}\) we obtain
and the optimality of \(\tau _\alpha ^\star \) follows. Similar arguments are used to prove (4.86) since Lemma 4.17 implies \(\sigma \wedge \tau ^\star _{\alpha }\wedge \tau ^\star _{\alpha ,n_j}\rightarrow \sigma \wedge \tau ^\star _{\alpha }\) as \(j\rightarrow \infty \). \(\square \)
4.3.2 Removal of the Yosida Approximation
The function \(u_\alpha \) in Theorem 4.16 solves the variational inequality associated to the Yosida approximation \(A_\alpha \) of the unbounded operator A. In this section we study the limiting behavior, as \(\alpha \rightarrow \infty \), of \(u_\alpha \) and of the corresponding variational inequality by adopting both probabilistic and analytical tools.
When \(\alpha \rightarrow \infty \) the term involving \(A_\alpha \) in the bilinear form \(a^{(\alpha )}_\mu (\cdot ,\cdot )\) of (4.66) converges to a suitable operator that needs to be fully characterized. Let \(w\in \mathcal {V}^p\) be given and define the linear functional \(L^{(\alpha )}_{A}(w,\cdot )\in \mathcal {V}^{p\,*}\) by
It is easy to show, by (4.29), that \(L^{(\alpha )}_{A}(w,\cdot )\) is continuous and that any sequence \((L^{(\alpha _n)}_{A}(w,\cdot ))_{n\in \mathbb {N}}\), with \(\alpha _n\rightarrow \infty \) as \(n\rightarrow \infty \), is a Cauchy sequence in \(\mathcal {V}^{p\,*}\). In fact for \(n>m\) arguments similar to those in (4.29) give
and hence
Since \(A_\alpha \rightarrow A\) on D(A) as \(\alpha \rightarrow \infty \) and Assumption 2.4 holds, (4.91) goes to zero as \(m,n\rightarrow \infty \) and \((L^{(\alpha _n)}_{A}(w,\,\cdot \,))_{n\in \mathbb {N}}\) is Cauchy in \(\mathcal {V}^{p\,*}\). Therefore, by completeness of \(\mathcal {V}^{p\,*}\) there exists \(\hat{L}_{A}(w,\,\cdot \,)\in \mathcal {V}^{p\,*}\) such that \(L^{(\alpha )}_{A}(w,\,\cdot \,)\rightarrow \hat{L}_{A}(w,\cdot )\) as \(\alpha \rightarrow \infty \) in \(\mathcal {V}^{p\,*}\).
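In a finite-dimensional caricature, the convergence \(A_\alpha x\rightarrow Ax\) on D(A) behind the Cauchy property above can be seen directly with the Yosida approximation \(A_\alpha =\alpha A(\alpha I-A)^{-1}\). The diagonal matrix with eigenvalues \(-k^2\) below is an illustrative stand-in for an unbounded, negative self-adjoint operator; none of this is part of the paper's argument.

```python
import numpy as np

# Yosida approximation A_alpha = alpha * A * (alpha*I - A)^{-1} for
# A = diag(-1, -4, ..., -400); check A_alpha x -> A x as alpha grows.
k = np.arange(1, 21, dtype=float)
A = np.diag(-k ** 2)
x = 1.0 / k ** 2                      # chosen so that A x = (-1, ..., -1), "x in D(A)"

def yosida(alpha):
    return alpha * A @ np.linalg.inv(alpha * np.eye(len(k)) - A)

# Errors ||A_alpha x - A x|| for increasing alpha
errs = [float(np.linalg.norm(yosida(al) @ x - A @ x)) for al in (1e1, 1e3, 1e5)]
```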
It suffices to characterize \(\hat{L}_{A}(w,\cdot )\) on the set \(\mathcal {E}_A(\mathcal {H})\) of (4.73) since that is dense in \(\mathcal {V}^p\). In order to do so notice that \(A^*Du\in L^p(\mathcal {H},\mu )\) for \(u\in \mathcal {E}_A(\mathcal {H})\) and
Now dominated convergence allows us to define a linear functional \(L_{A}(w,\cdot )\) by setting
Clearly its domain \(D(L_{A}(w,\cdot ))\) contains \(\mathcal {E}_A(\mathcal {H})\) and is therefore dense in \(\mathcal {V}^p\). Since (4.29) is uniform with respect to \(n\in \mathbb {N}\) and \(\alpha >0\), we also obtain
By density arguments \(L_{A}(w,\cdot )\) is continuously extended to the whole space \(\mathcal {V}^p\) and the extended functional is denoted by \(\bar{L}_{A}(w,\cdot )\). It then follows
Note that, for \(w\in L^2(0,T;\mathcal {V}^p)\) fixed, one has \(\big (L^{(\alpha )}_A(w,\,\cdot \,)\big )_{\alpha >0}\) bounded in \(L^{2}(0,T;\mathcal {V}^{p\,*})\) by (4.93) (or by (4.29)). Then for arbitrary \(0\le t_1<t_2\le T\) and \(u\in L^2(0,T;\mathcal {V}^p)\) we may define \(T^{(\alpha )}_{A}(w,\cdot )(t_1,t_2)\in L^2(0,T;\mathcal {V}^{p})^*\) and \(\bar{T}_{A}(w,\cdot )(t_1,t_2)\in L^2(0,T;\mathcal {V}^{p})^*\) by
and
Proposition 4.19
For arbitrary \(0\le t_1<t_2\le T\), with \(T^{(\alpha )}_{A}(w,\,\cdot \,)(t_1,t_2)\) and \(\bar{T}_{A}(w,\,\cdot \,)(t_1,t_2)\) given by (4.95) and (4.96), respectively, it holds that
Proof
A direct calculation gives
and hence \(\Vert (T^{(\alpha )}_{A}-\bar{T}_{A})(w,\,\cdot \,)(t_1,t_2)\Vert _{L^2(0,T;\mathcal {V}^{p})^*}\le \big \Vert (L^{(\alpha )}_{A}-\bar{L}_{A})(w,\,\cdot \,)\big \Vert _{L^2(0,T;\mathcal {V}^{p\,*})}\). Now, since \(\big \Vert (L^{(\alpha )}_{A}-\bar{L}_{A})(w(t),\,\cdot \,) \big \Vert _{\mathcal {V}^{p\,*}}\le 2 Tr[AQA^*]\left| \left| \left| w(t)\right| \right| \right| _p\), where the upper bound is independent of \(\alpha \) and belongs to \(L^2(0,T)\), the dominated convergence theorem and (4.94) give (4.97). \(\square \)
Remark 4.20
Notice that for our gain function \(\Theta \) we have \(L^{(\alpha )}_{A}(\cdot ,\Theta )\in L^2(0,T;\mathcal {V}^{p\,*})\). Moreover \(T^{(\alpha )}_{A}(\cdot ,\Theta )(t_1,t_2)\rightarrow \bar{T}_{A}(\cdot ,\Theta )(t_1,t_2)\) in \(L^2(0,T;\mathcal {V}^p)^*\) as \(\alpha \rightarrow \infty \), for all \(0\le t_1<t_2\le T\), by arguments similar to those used in the proof of Proposition 4.19.
For \(t\in [0,T]\) define \(F(\cdot )(t)\in \mathcal {V}^{p\,*}\) by
Then, with \(f_\alpha \) as in (4.69), it follows from dominated convergence, Assumption 2.2 and Remark 4.20 that
for all \(0\le t_1<t_2\le T\).
It is natural to consider the bilinear form associated to the infinitesimal generator of (2.1),
for \(u,w\in L^2(0,T;\mathcal {V}^p)\), and with B as in (4.66) and \(\hat{C}=\frac{1}{2}\left( Tr[D\sigma ]_{\mathcal {H}}\sigma +D\sigma \cdot \sigma -\sigma \sigma ^{*}Q^{-1}x\right) \). We set \(\hat{u}:=\mathcal {U}-\Theta \) (see (2.2)). By Theorem 3.2 and the same bounds as those used to prove Lemma 4.11 we obtain
Lemma 4.21
There exists a sequence \((\alpha _i)_{i\in \mathbb {N}}\) such that \(\alpha _i\rightarrow \infty \) as \(i\rightarrow \infty \) and \(u_{\alpha _i}\) converges to \(\hat{u}\) as \(\alpha _i\rightarrow \infty \), weakly in \(L^p(0,T;\mathcal {V}^p)\) and strongly in \(L^p(0,T; L^p(\mathcal {H},\mu ))\), \(1\le p<\infty \).
Moreover, \(\frac{\partial \,}{\partial \,t}u_{\alpha _i}\) converges to \(\frac{\partial \,}{\partial \,t}\hat{u}\) as \(\alpha _i\rightarrow \infty \) weakly in \(L^2(0,T;L^2(\mathcal {H},\mu ))\).
The next theorem generalizes Theorem 4.16 to the case of an unbounded operator A.
Theorem 4.22
For every \(1<p<\infty \) the function \(\hat{u}\) is a solution of the variational problem on \(\mathcal {H}\)
Moreover, \(\hat{u}\in C([0,T]\times \mathcal {H})\).
We omit the proof, which follows from Lemma 4.21, Proposition 4.19 and (4.99), and goes through arguments similar to (but simpler than) those adopted in the proof of Theorem 4.16. Continuity of the solution is a consequence of Corollary 3.3.
An optimal stopping time of \(\mathcal {U}\) is found by probabilistic arguments as in Sect. 4.3.1. For \((t,x)\in [0,T]\times \mathcal {H}\), let \(\tau ^\star _{\alpha }(t,x)\) be defined as in (4.75) and set
Lemma 4.23
For each \((t,x)\in [0,T]\times \mathcal {H}\) there exists a sequence \((\alpha _j)_{j\in \mathbb {N}}\), with \(\alpha _j=\alpha _j(t,x)\), such that \(\alpha _j\rightarrow \infty \) as \(j\rightarrow \infty \) and
Proof
The proof follows along the same lines as that of Lemma 4.17 and is based on Corollary 3.3 and Proposition 3.1. \(\square \)
Theorem 4.24
The stopping time \(\tau ^\star (t,x)\) is optimal for \(\mathcal {U}(t,x)\).
Proof
Set \(\tau ^\star =\tau ^\star (t,x)\) for simplicity. Take \(\sigma =\tau ^\star \) in (4.86) to obtain
Consider the subsequence \((\mathcal {U}_{\alpha _j})_{j\in \mathbb {N}}\) corresponding to the sequence \((\alpha _j)_{j\in \mathbb {N}}\) given in Lemma 4.23, and take limits in (4.104) as \(j\rightarrow \infty \). Proposition 3.1, Corollary 3.3 and arguments as in the proof of Theorem 4.18 allow us to conclude that
That is, \(\tau ^\star \) is optimal. \(\square \)
5 Uniqueness in a Particular Case
We address the question of uniqueness of the solution to problem (4.101) only in the case of processes X whose Kolmogorov operator generates a symmetric Ornstein-Uhlenbeck semigroup (cf. [11], Chapters 6 and 7). For instance, Chow and Menaldi [8] consider such dynamics while carrying out an analysis similar to ours.
In (2.1) we take \(\sigma (x)\equiv 1\) and replace \(W^0\) by a Q-Wiener process \((W_t)_{t\in [0,T]}\) taking values in \(\mathcal {H}\) (cf. [10], Chapter 4 and Remark 5.1 of Chapter 5), with covariance operator \(Q\in \mathcal {L}(\mathcal {H})\) positive and of trace class. We make the following assumption on A.
Assumption 5.1
The operator A is negative, self-adjoint and there exists \(m>0\) such that \(\langle Ax,x\rangle _{\mathcal {H}}\le -m\Vert x\Vert ^2_{\mathcal {H}}\). Moreover, \(Tr\big [QA^{-1}\big ]_{\mathcal {H}}<+\infty \) and \(e^{tA}Q=Qe^{tA}\) for all \(t>0\).
Then the semigroup generated by the Kolmogorov operator associated to X is symmetric (cf. [11], Corollary 10.1.7), and admits a centered Gaussian invariant measure \(\nu \) (cf. [11], Proposition 10.1.1) with covariance operator \(\Gamma \) defined by
(cf. [11], Proposition 10.1.6). For \(\varphi _k\) and \(\lambda _k\) as in (2.3) the Q-Wiener process may be represented as \(W_t=\sum _{k}\sqrt{\lambda _k}\beta ^k_t\,\varphi _k=:Q^{\frac{1}{2}}B_t\) where \(\big \{\beta ^k_t,\,t\ge 0,\,k\in \mathbb {N}\big \}\) is an infinite sequence of independent, real, standard Brownian motions and \(B_t:=\sum _{k}\beta ^k_t\,\varphi _k\). Therefore, the SDE for X may be formally written as
Now the variational problem may be set in the Gauss-Sobolev space associated to the measure \(\nu \) rather than that associated to Q. All arguments developed in the previous sections may be carried out and, in particular, Theorems 4.22 and 4.24 hold with \(\mathcal {V}^p\) replaced by \(W^{1,2}(\mathcal {H},\nu )\), with \(a_\mu (\cdot ,\cdot )\) replaced by
and with \(F(\cdot )(t)\) replaced by the dual pairing
Notice that conditions (2.15) are sufficient to guarantee the well posedness of (5.4), and it is no longer necessary to introduce the operator \(L_A\) of Sect. 4.3.2 and its continuous extension; moreover, \(A\Gamma A\) is not necessarily of trace class, hence the analogue of Assumption 2.4 in this setting (i.e. \(Tr\big [A\Gamma A \big ]_{\mathcal {H}}<+\infty \)) breaks down. However, here we do not need to rely on that assumption, since the existence of the Gaussian invariant measure and the particular form of its covariance operator \(\Gamma \) (cf. (5.1)) substantially simplify the bilinear form.
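Under Assumption 5.1, since A and Q commute and are self-adjoint, the invariant covariance reduces on each eigendirection to \(\Gamma =\int _0^\infty e^{tA}Qe^{tA}\,dt=-\frac{1}{2}A^{-1}Q\). A two-dimensional Euler-Maruyama truncation of the (formal) SDE for X can check this numerically; the coefficients below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Two-dimensional truncation of dX = AX dt + Q^{1/2} dB with commuting
# A = diag(-a_k), Q = diag(lambda_k); the invariant measure nu is Gaussian
# with covariance Gamma = -0.5 * A^{-1} Q = diag(lambda_k / (2 a_k)).
rng = np.random.default_rng(7)
a = np.array([1.0, 2.0])              # -A eigenvalues (illustrative)
lam = np.array([1.0, 0.5])            # Q eigenvalues (illustrative)
gamma = lam / (2.0 * a)               # predicted invariant variances

M, dt, steps = 20_000, 0.01, 500      # paths, Euler step, horizon T = 5
X = rng.standard_normal((M, 2)) * np.sqrt(gamma)   # start from nu itself
for _ in range(steps):
    dB = rng.standard_normal((M, 2)) * np.sqrt(dt)
    X = X - a * X * dt + np.sqrt(lam) * dB         # Euler-Maruyama step

emp_var = X.var(axis=0)               # should remain close to gamma
rel_err = np.abs(emp_var - gamma) / gamma
```

Starting the paths from \(\nu \) itself, the empirical variances stay near \(\lambda _k/(2a_k)\) up to Monte Carlo and Euler discretization error.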
The uniqueness in \(L^2(\mathcal {H},\nu )\) of the solution of the variational inequality now follows from usual comparison arguments as in [4] and the fact that
Remark 5.2
Notice that our approach allows us to give a positive answer to the open question in Remark 2 of [8], p. 49, under assumptions similar to those required there, although in the finite time-horizon case. It also solves the problem posed in Section 5 of [8] (see the discussion following Theorem 3, p. 51, therein) regarding the connection between infinite-dimensional variational inequalities and optimal stopping problems when \(\sigma \) depends on the process. We believe that our method extends to the infinite time-horizon case under quite natural integrability assumptions.
Remark 5.3
The above arguments suggest that when a Gaussian invariant measure can be found, uniqueness is more likely to be obtained as well. This naturally links our work to [2, 3, 32], where variational problems associated to optimal stopping problems are solved in Sobolev spaces with respect to excessive measures (possibly invariant) of the diffusion process’ semigroup.
Our proof of existence of a solution to the variational problem, and of its connection to the optimal stopping problem, could possibly be replicated when the Gaussian measure \(\mu \) is replaced by an excessive measure \(\nu \) (possibly invariant), provided that derivatives of \(\nu \) along the basis vectors’ directions exist (in the sense of [5], Definition 5.1.3) and natural integrability conditions hold, together with some refinements of Assumptions 2.4 and 2.5. Then uniqueness of the solution of the variational problem (4.101) would follow as shown in [2, 3] and [32].
Notes
\(\ell _2\) denotes the set of infinite vectors \(x:=(x_1,x_2,\ldots )\) such that \(\sum _{k}{x_k^2}<+\infty \).
The proof relies on the fact that the set of continuous functions is dense in \(L^p(\mathcal {H},\mu )\) and goes through a finite-dimensional reduction, a localization and the Stone-Weierstrass theorem.
References
Adams, R.A.: Sobolev Spaces. Academic Press, London (1975)
Barbu, V., Marinelli, C.: Variational inequalities in Hilbert spaces with measures and optimal stopping problems. Appl. Math. Optim. 57, 237–262 (2008)
Barbu, V., Sritharan, S.S.: Optimal stopping-time problem for stochastic Navier-Stokes equations and infinite-dimensional variational inequalities. Nonlinear Anal. 64, 1018–1024 (2006)
Bensoussan, A., Lions, J.L.: Applications of Variational Inequalities in Stochastic Control. North-Holland, Amsterdam (1982)
Bogachev, V.I.: Gaussian Measures. American Mathematical Society (1997)
Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext, Springer, New York (2010)
Chiarolla, M.B., De Angelis, T.: Analytical pricing of American put options on a zero coupon bond in the Heath-Jarrow-Morton model. Stoch. Process. Appl. 125, 678–707 (2015)
Chow, P.L., Menaldi, J.L.: Variational Inequalities for the Control of Stochastic Partial Differential Equations. Stochastic Partial Differential Equations and Applications II, Lecture Notes in Mathematics, Springer, Berlin, pp. 42–52 (1989)
Da Prato, G.: An Introduction to Infinite-Dimensional Analysis. Springer, Berlin (2006)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
Da Prato, G., Zabczyk, J.: Second Order Partial Differential Equations in Hilbert Spaces. Cambridge University Press, Cambridge (2004)
De Angelis, T.: Pricing American Bond Options under HJM: An Infinite Dimensional Variational Inequality. Ph.D thesis (2012)
Dieudonné, J.: Foundations of Modern Analysis. Academic Press, London (1969)
El Karoui, N.: Les Aspects Probabilistes du Contrôle Stochastique. In: 9th Saint Flour Probability Summer School, Lecture Notes in Math. 876, Springer, Berlin, pp. 73–238 (1979)
Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions, 2nd ed. Stochastic Modelling and Applied Probability 25, Springer, New York (2006)
Ga̧tarek, D., Świȩch, A.: Optimal stopping in Hilbert spaces and pricing of American options. Math. Methods Oper. Res. 50, 135–147 (1999)
Kelome, D., Świȩch, A.: Viscosity solutions of an infinite-dimensional Black-Scholes-Barenblatt equation. Appl. Math. Optim. 47, 253–278 (2003)
Krylov, N.V.: Controlled Diffusion Processes. Springer, Berlin (2009)
Lions, P.L.: Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. I. The case of bounded stochastic evolutions. Acta Math. 3–4, 243–278 (1988)
Lions, P.L.: Viscosity solutions of fully nonlinear second order equations and optimal stochastic control in infinite dimensions. II. Optimal control of Zakai’s equation. Lecture Notes in Math., 1390, Springer, Berlin, pp. 147–170 (1989)
Lions, P.L.: Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. III. Uniqueness of viscosity solutions for general second-order equations. J. Funct. Anal. 86(1), 1–18 (1989)
Ma, Z.M., Röckner, M.: Introduction to the Theory of (Non-Symmetric) Dirichlet Forms. Springer, Berlin (1992)
Marcozzi, M.D.: On the approximation of infinite dimensional optimal stopping problems with application to mathematical finance. J. Sci. Comput. 34, 287–307 (2008)
Menaldi, J.L.: On the optimal stopping time problem for degenerate diffusions. SIAM J. Control Optim. 18(6), 697–721 (1980)
Menaldi, J.L.: On Degenerate Variational and Quasi-Variational Inequalities of Parabolic Type. Analysis and Optimization of Systems, Lecture Notes in Control and Information Sciences, vol. 28, pp. 338–356 (1980)
Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York (1983)
Shiryaev, A.N.: Optimal Stopping Rules. Springer, Berlin (1978)
Stroock, D., Varadhan, S.R.S.: On degenerate elliptic-parabolic operators of second order and their associated diffusions. Comm. Pure Appl. Math. 25, 651–713 (1972)
Świȩch, A.: “Unbounded” second order partial differential equations in infinite-dimensional Hilbert spaces. Comm. Partial Differ. Equ. 19(11–12), 1999–2036 (1994)
Zabczyk, J.: Stopping Problems in Stochastic Control. In: Proceedings of the International Congress of Mathematicians, vol. 1–2, PWN, Warsaw, pp. 1425–1437 (1984)
Zabczyk, J.: Stopping Problems on Polish Spaces. Ann. Univ. Mariae Curie-Sklodowska 51, 181–199 (1997)
Zabczyk, J.: Bellman’s inclusions and excessive measures. Probab. Math. Statist. 21(1), 101–122 (2001)
Acknowledgments
During this work the second named author was funded by the University of Rome “La Sapienza”, through the Ph.D. programme in Mathematics for Economic-Financial Applications, and by the EPSRC Grant EP/K00557X/1.
These results extend a portion of the second Author's Ph.D. dissertation [12], written under the supervision of the first Author. Both Authors wish to thank Franco Flandoli and Claudio Saccon for their helpful comments and suggestions.
Appendix
Proof of Proposition 4.1
Fix \((t,x^{(n)})\in [0,T]\times \mathbb {R}^n\) and take \(\overline{R}>0\) such that \(x^{(n)}\in \mathcal {O}_{\overline{R}}\). Now for all \(R\ge \overline{R}\) we have
by (2.11), where \(I_{\{\sigma >\tau _{R}\}}\) denotes the indicator function of the set \(\{\sigma >\tau _{R}\}\). By the Markov inequality and standard estimates for strong solutions of SDEs in \(\mathbb {R}^n\) (cf. for instance [18], Chapter 2, Section 5, Corollary 12), it follows that
with \(C_{n,\alpha ,T}>0\), only depending on \((\alpha ,n,T)\) and bounds on \(\sigma \).
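Estimates of this kind follow a standard pattern, which we sketch below; the exponent \(2\) and the constant \(C\) are purely illustrative (the paper's constant \(C_{n,\alpha ,T}\) also depends on \(\alpha \) and on bounds on \(\sigma \)), and we assume here that \(\tau _R\) is the exit time of \(X^{t,x;n}\) from \(\mathcal {O}_R\):

```latex
\mathbb{P}\big(\tau_R<\sigma\big)
 \;\le\; \mathbb{P}\Big(\sup_{t\le s\le T}\big|X^{t,x;n}_s\big|\ge R\Big)
 \;\le\; \frac{1}{R^{2}}\,\mathbb{E}\Big[\sup_{t\le s\le T}\big|X^{t,x;n}_s\big|^{2}\Big]
 \;\le\; \frac{C}{R^{2}}\,\big(1+\big|x^{(n)}\big|^{2}\big).
```

The middle step is the Markov (Chebyshev) inequality, and the last step is the standard moment estimate for strong solutions of SDEs with coefficients of linear growth.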
Therefore
for every compact subset \(\mathcal {K}\subset \mathbb {R}^n\). If all the \(\mathcal {U}^{(n)}_{\alpha ,R}\) are continuous, then \(\mathcal {U}^{(n)}_{\alpha }\) is continuous on every set of the form \([0,T]\times \mathcal {K}\), and this is enough for continuity on all of \([0,T]\times \mathbb {R}^n\). \(\square \)
Proof of Corollary 4.3
By the regularity of \(\bar{u}\) in Corollary 4.2, it is well known that the expression (4.11) is equivalent to
(see for instance [4], Chapter 3, Section 1, p. 191).
The regularity of \(\partial \mathcal {O}_R\) and [1], Theorem 3.22 enable us to find a sequence \((u_j)_{j\in \mathbb {N}}\), such that \(u_j\in C^\infty _c(\mathbb {R}^{n+1})\) and
In fact it suffices to take a partition of the domain and use the standard mollification on each element of the partition. Then (6.1) follows from the usual properties of mollifiers and the fact that the operators \(\partial _t\), \(D\) and \(D^2\) are closed in \(L^p\). Moreover, the continuity of \(\bar{u}\) and that of a suitable extension of it to \(\mathbb {R}^{n+1}\) imply that the convergence is also uniform on any compact set \(\mathcal {O}^\prime \) such that \([0,T]\times \overline{\mathcal {O}}_R\subset \mathcal {O}^\prime \); that is
Now we fix an arbitrary \(t\in [0,T]\) and a stopping time \(\tau \in [t,T]\). An application of Dynkin’s formula from t to \(\tau \wedge \tau _R\) gives
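In generic form, Dynkin's formula applied to the smooth approximants \(u_j\) reads as follows (a sketch; \(\mathcal {L}\) denotes the generator of the diffusion \(X^{t,x;n}\)):

```latex
\mathbb{E}\Big[u_j\big(\tau\wedge\tau_R,\,X^{t,x;n}_{\tau\wedge\tau_R}\big)\Big]
 \;=\; u_j\big(t,x^{(n)}\big)
 \;+\; \mathbb{E}\bigg[\int_t^{\tau\wedge\tau_R}
     \Big(\frac{\partial}{\partial s}+\mathcal{L}\Big)u_j\big(s,X^{t,x;n}_s\big)\,ds\bigg].
```

The localization at \(\tau \wedge \tau _R\) keeps the process inside \(\mathcal {O}_R\), where the \(u_j\) approximate \(\bar{u}\).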
On the other hand by [4], Chapter 2, Lemma 8.1 there exists a constant \(C_{T,R}>0\) such that
hence by taking the limit as \(j\rightarrow \infty \) and by using (6.1) and (6.2) we obtain
Recall that (4.12) holds almost everywhere in \((0,T)\times \mathcal {O}_R\) and, since the diffusion is uniformly non-degenerate, the law of \(X^{(\alpha )t,x;n}\) is absolutely continuous with respect to the Lebesgue measure on \((0,T)\times \mathcal {O}_R\). Then
in particular, with \(\tau ^\star \) defined by
(4.12) implies
Therefore, by using (4.10) and by recalling (4.2) we have
It now follows that \(\bar{u}=u^{(n)}_{\alpha ,R}\) and \(\tau ^\star =\tau ^\star _{\alpha ,n,R}\).
Notice that for any stopping time \(\tau \le \tau ^\star _{\alpha ,n,R}\), combining (6.9) and (6.5) gives
i.e. the dynamic programming principle for \(\mathcal {U}^{(n)}_{\alpha ,R}\) holds. \(\square \)
Proof of Lemma 4.6
Set \(u^R:=u^{(n)}_{\alpha ,R}\) and recall Corollary 4.3. An application of Dynkin's formula, based on the same arguments as those leading to (6.5), gives
For the left-hand side of (6.11) we observe that on the set \(\big \{\tau ^x_R\le \tau ^y_R\big \}\) the difference inside the expectation is zero, whereas on the set \(\big \{\tau ^x_R>\tau ^y_R\big \}\) one has
Therefore from (6.11), (4.9), (2.12), (2.20) and Lemma 2.9 we obtain
To obtain (4.17) we need a similar bound from below for the left-hand side of (6.13). To that end we introduce the auxiliary problem
and we observe that the same arguments as those used to obtain Proposition 4.2 and Corollary 4.3 give \(v^R\in L^p(0,T;W^{1,p}_0(\mathcal {O}_R))\cap L^p(0,T;W^{2,p}(\mathcal {O}_R))\) and \(\frac{\partial \,v^R}{\partial \,t}\in L^p(0,T;L^p(\mathcal {O}_R))\), for all \(1\le p<+\infty \). Moreover \(v^R\) uniquely solves, in the almost everywhere sense, the obstacle problem
Again, arguing as above for (6.11), with \(u^R\) replaced by \(v^R\), the reversed inequality is obtained. Hence, the analogue of (6.12) for \(v^R\) gives
Now (4.17) follows from (6.13) and (6.16). \(\square \)
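For orientation, the shape of a Lipschitz-type bound of this kind can also be sketched purely probabilistically, under the illustrative assumption that the gain function \(\Theta \) is Lipschitz in the space variable with constant \(L_\Theta \) and ignoring the stopping at \(\tau _R\) (the actual proof above proceeds through Dynkin's formula and the regularity of \(u^R\)):

```latex
\big|u^R\big(t,x^{(n)}\big)-u^R\big(t,y^{(n)}\big)\big|
 \;\le\; \sup_{t\le \tau\le T}\,
     \mathbb{E}\Big[\big|\Theta\big(\tau,X^{t,x;n}_{\tau}\big)
                    -\Theta\big(\tau,X^{t,y;n}_{\tau}\big)\big|\Big]
 \;\le\; L_\Theta\,
     \mathbb{E}\Big[\sup_{t\le s\le T}\big|X^{t,x;n}_s-X^{t,y;n}_s\big|\Big]
 \;\le\; L_\Theta\,C_T\,\big|x^{(n)}-y^{(n)}\big|,
```

where the last inequality is the standard estimate on the Lipschitz dependence of strong solutions of SDEs on their initial data, with an illustrative constant \(C_T\).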
Proof of Lemma 4.7
It is enough to show that \(\Vert u^R_\varepsilon (t,x^{(n)})-u^R_\varepsilon (t,y^{(n)})\Vert \le L_P\Vert x^{(n)}-y^{(n)}\Vert _{\mathcal {H}}\) for all \(t\in [0,T]\) and \(x,y\in \mathcal {H}\). Recalling (4.10) and (4.19), we find
From Itô’s formula, (4.10) and Lemma 4.6 one finds
and similarly,
Take now
then from (6.17)–(6.20), recalling (2.12) and Lemma 2.9, we obtain
One can argue in a similar way to bound \(u^R_\varepsilon (t,y^{(n)})-u^R_\varepsilon (t,x^{(n)})\). \(\square \)
Chiarolla, M.B., De Angelis, T. Optimal Stopping of a Hilbert Space Valued Diffusion: An Infinite Dimensional Variational Inequality. Appl Math Optim 73, 271–312 (2016). https://doi.org/10.1007/s00245-015-9302-8
Keywords
- Optimal stopping
- Infinite-dimensional stochastic analysis
- Parabolic partial differential equations
- Degenerate variational inequalities