Abstract
This paper concerns an optimal stopping problem driven by the running maximum of a spectrally negative Lévy process X. More precisely, we are interested in capped versions of the American lookback optimal stopping problem (Gapeev in J. Appl. Probab. 44:713–731, 2007; Guo and Shepp in J. Appl. Probab. 38:647–658, 2001; Pedersen in J. Appl. Probab. 37:972–983, 2000), which has its origins in mathematical finance, and provide semi-explicit solutions in terms of scale functions. The optimal stopping boundary is characterised by an ordinary first-order differential equation involving scale functions and, in particular, changes according to the path variation of X. Furthermore, we will link these capped problems to Peskir’s maximality principle (Peskir in Ann. Probab. 26:1614–1640, 1998).
1 Introduction
Let X={X t :t≥0} be a spectrally negative Lévy process defined on a filtered probability space \((\varOmega,\mathcal{F},\mathbb{F}=\{\mathcal{F}_{t}:t\geq 0\}, \mathbb {P} )\) satisfying the natural conditions (cf. p. 39, Sect. 1.3 of [4]). For \(x\in \mathbb {R} \), denote by \(\mathbb {P} _{x}\) the probability measure under which X starts at x and for simplicity write \(\mathbb {P} _{0}= \mathbb {P} \). We associate with X the maximum process \(\overline{X}=\{\overline{X}_{t}:t\geq 0\}\) where \(\overline{X}_{t}:=s\vee\sup_{0\leq u\leq t} X_{u}\) for t≥0,x≤s. The law under which \((X,\overline{X})\) starts at (x,s) is denoted by \(\mathbb {P} _{x,s}\).
We are interested in the following optimal stopping problem:
$$ V^{*}_{\epsilon}(x,s):=\sup_{\tau\in\mathcal{M}}\mathbb{E}_{x,s}\bigl[e^{-q\tau}\bigl(e^{\overline{X}_{\tau}\wedge\epsilon}-K\bigr)^{+}\bigr], $$(1)
where q≥0,K≥0,ϵ∈(log(K),∞], \((x,s)\in E:=\{(x_{1},s_{1})\in \mathbb {R} ^{2}\,\vert\,x_{1}\leq s_{1}\}\), and \(\mathcal{M}\) is the set of all \(\mathbb{F}\)-stopping times (not necessarily finite). In particular, on {τ=∞} we set
This problem is, at least in the case ϵ=∞, classically associated with mathematical finance. It arises in the context of pricing American lookback options [9, 10, 19] and its solution may be viewed as the fair price for such an option. If ϵ∈(log(K),∞), an analogous interpretation applies for an American lookback option whose payoff is moderated by capping it at a certain level (a fuller description will be given in Sect. 2).
When K=0 and ϵ=∞, (1) is known as the Shepp-Shiryaev optimal stopping problem which was first studied by Shepp and Shiryaev [25, 26] for the case when X is a linear Brownian motion and later by Avram, Kyprianou and Pistorius [2] for the case when X is a spectrally negative Lévy process. If K=0 and \(\epsilon\in \mathbb {R} \) then the problem is a capped version of the Shepp-Shiryaev optimal stopping problem and was considered by Ott [18]. Therefore, our main focus in this paper will be the case K>0 which we henceforth assume.
Our objective is to solve (1) for ϵ∈(log(K),∞) by a “guess and verify” technique and to use this to obtain the solution to (1) when ϵ=∞ via a limiting procedure. Our work extends and complements results by Conze and Viswanathan [7], Guo and Shepp [10], Pedersen [19] and Gapeev [9], all of which solve (1) for ϵ=∞ and X a linear Brownian motion or a jump-diffusion.
As we shall see, the general theory of optimal stopping [22, 28] and the principle of smooth and continuous fit [1, 17, 21, 22] (and the results in [9, 10, 18, 19]) strongly suggest that under some assumptions on q and ψ(1), where ψ is the Laplace exponent of X, the optimal strategy for (1) is of the form
for some strictly positive solution g ϵ of the differential equation
where W (q) and Z (q) are the so-called q-scale functions associated with X (see Sect. 3). In particular, we will find that the optimal stopping boundary s↦s−g ϵ (s) changes shape according to the path variation of X. This has already been observed in [18] in the case of the capped version of the Shepp-Shiryaev optimal stopping problem. It will also turn out that our solutions exhibit a pattern suggested by Peskir’s maximality principle [20]. In fact, we will be able to give a reformulation of our main results in terms of Peskir’s maximality principle.
We conclude this section with an overview of the paper. In Sect. 2 we give an application of our results in the context of pricing capped American lookback options. Section 3 is an auxiliary section introducing some necessary notation, followed by Sect. 4 which gives an overview of the different parameter regimes considered. Sections 5 and 7 deal with the “guess” part of our “guess and verify” technique, and our main results, which correspond to the “verify” part, are presented in Sect. 6. The proofs of our main results can then be found in Sect. 9. Finally, Sect. 8 provides an explicit example under the assumption that X is a linear Brownian motion.
2 Application to Pricing “Capped” American Lookback Options
The aim of this section is to give some motivation for studying (1).
Consider a financial market consisting of a riskless bond and a risky asset. The value of the bond B={B t :t≥0} evolves deterministically such that
The price of the risky asset is modeled as the exponential spectrally negative Lévy process
In order to guarantee that our model is free of arbitrage we will assume that ψ(1)=r. If X t =μt+σW t , where W={W t :t≥0} is a standard Brownian motion, we obtain the standard Black-Scholes model for the price of the asset. Extensive empirical research has shown that this (Gaussian) model is not capable of capturing certain features (such as skewness and heavy tails) which are commonly encountered in financial data, for example, returns on stocks. To address these problems, an idea going back to [16] is to replace the Brownian motion as a model for the log-price by a general Lévy process X (cf. [6]). Here we will restrict ourselves to the model where X is given by a spectrally negative Lévy process. This restriction is mainly motivated by analytical tractability. It is worth mentioning, however, that Carr and Wu [5] as well as Madan and Schoutens [15] have offered empirical evidence to support the case of a model in which the risky asset is driven by a spectrally negative Lévy process for appropriate market scenarios.
A capped American lookback option is an option which gives the holder the right to exercise at any stopping time τ yielding payouts
The constant M 0 can be viewed as representing the “starting” maximum of the stock price (say, over some previous period (−t 0,0]). The constant C can be interpreted as a cap which moderates the payoff of the option. The value C=∞ is also allowed and corresponds to no moderation at all; in this case we recover the standard American lookback option. Finally, when C=∞ it is necessary to choose α strictly positive to guarantee that it is optimal to stop in finite time and that the value is finite (cf. Theorem 6.5).
Standard theory of pricing American-type options [27] directs one to solving the optimal stopping problem
where the supremum is taken over all \(\mathbb{F}\)-stopping times. In other words, we want to find a stopping time which optimizes the expected discounted claim. The right-hand side of (6) may be rewritten as
where q=r+α,x=log(S 0),s=log(M 0) and ϵ=log(C). Hence, we recognise (1) which is the problem of interest in this article.
3 Preliminaries
It is well-known that a spectrally negative Lévy process X is characterised by its Lévy triplet (γ,σ,Π), where \(\sigma\geq0, \gamma\in \mathbb {R} \) and Π is a measure on (−∞,0) satisfying the condition ∫(−∞,0)(1∧x 2) Π(dx)<∞. By the Lévy-Itô decomposition, the latter may be represented in the form
where {B t :t≥0} is a standard Brownian motion, \(\{X^{(1)}_{t}:t\geq 0\}\) is a compound Poisson process with discontinuities of magnitude bigger than or equal to one and \(\{X_{t}^{(2)}:t\geq 0\}\) is a square integrable martingale with discontinuities of magnitude strictly smaller than one and the three processes are mutually independent. In particular, if X is of bounded variation, the decomposition reduces to
where d>0 and {η t :t≥0} is a driftless subordinator. Further let
$$ \psi(\theta):=\log\mathbb{E}\bigl[e^{\theta X_{1}}\bigr],\qquad \theta\geq0, $$
be the Laplace exponent of X which is known to take the form
Moreover, ψ is strictly convex and infinitely differentiable, and its derivative at zero characterises the asymptotic behaviour of X: X drifts to +∞, drifts to −∞, or oscillates according to whether ψ′(0+)>0, ψ′(0+)<0 or, respectively, ψ′(0+)=0. The right-inverse of ψ is defined by
$$ \varPhi(q):=\sup\bigl\{\theta\geq0:\psi(\theta)=q\bigr\} $$
for q≥0.
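To make ψ and its right-inverse Φ concrete, here is a small numerical sketch (not from the paper; all parameter values are hypothetical) for the jump-free case X t =μt+σW t , computing Φ by bisection using only the convexity of ψ:

```python
import math

# Toy spectrally negative Levy process without jumps: X_t = mu*t + sigma*W_t.
# The values of mu and sigma are hypothetical, chosen for illustration.
mu, sigma = 0.1, 0.3

def psi(theta):
    """Laplace exponent psi(theta) = log E[exp(theta * X_1)]."""
    return mu * theta + 0.5 * sigma**2 * theta**2

def phi(q, hi=1e6, tol=1e-12):
    """Right-inverse Phi(q) = sup{theta >= 0 : psi(theta) = q}.

    psi is strictly convex with psi(0) = 0, so bisecting for the largest
    root of psi(theta) = q suffices.
    """
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

q = 0.05
assert abs(psi(phi(q)) - q) < 1e-6   # Phi is a right-inverse of psi
assert phi(0.0) < 1e-6               # here psi'(0+) = mu > 0, so Phi(0) = 0
```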
For any spectrally negative Lévy process having X 0=0 we introduce the family of martingales
$$ \mathcal{E}_{t}(c):=\exp\bigl(cX_{t}-\psi(c)t\bigr),\qquad t\geq0, $$
defined for any \(c\in \mathbb {R} \) for which \(\psi(c)=\log \mathbb {E} [\exp(cX_{1})]<\infty\), and further the corresponding family of measures \(\{ \mathbb {P} ^{c}\}\) with Radon-Nikodym derivatives
$$ \frac{d\mathbb{P}^{c}}{d\mathbb{P}}\bigg\vert_{\mathcal{F}_{t}}=\exp\bigl(cX_{t}-\psi(c)t\bigr). $$
For all such c the measure \(\mathbb {P} ^{c}_{x}\) will denote the translation of \(\mathbb {P} ^{c}\) under which X 0=x. In particular, under \(\mathbb {P} _{x}^{c}\) the process X is still a spectrally negative Lévy process (cf. Theorem 3.9 in [13]).
A special family of functions associated with spectrally negative Lévy processes is that of scale functions (cf. [13]), which are defined as follows. For q≥0, the q-scale function \(W^{(q)}: \mathbb {R} \longrightarrow[0,\infty)\) is the unique function whose restriction to (0,∞) is continuous and has Laplace transform
$$ \int_{0}^{\infty}e^{-\beta x}W^{(q)}(x)\,dx=\frac{1}{\psi(\beta)-q},\qquad \beta>\varPhi(q), $$
and is defined to be identically zero for x≤0. Equally important is the scale function \(Z^{(q)}: \mathbb {R} \longrightarrow[1,\infty)\) defined by
$$ Z^{(q)}(x):=1+q\int_{0}^{x}W^{(q)}(y)\,dy. $$
The passage times of X below and above \(k\in \mathbb {R} \) are denoted by
$$ \tau_{k}^{-}:=\inf\{t>0:X_{t}<k\}\quad\text{and}\quad \tau_{k}^{+}:=\inf\{t>0:X_{t}>k\}. $$
We will make use of the following two identities (cf. [2]). For q≥0 and x∈(a,b) it holds that
$$ \mathbb{E}_{x}\bigl[e^{-q\tau_{b}^{+}}\mathbf{1}_{\{\tau_{b}^{+}<\tau_{a}^{-}\}}\bigr]=\frac{W^{(q)}(x-a)}{W^{(q)}(b-a)} $$(11)
and
$$ \mathbb{E}_{x}\bigl[e^{-q\tau_{a}^{-}}\mathbf{1}_{\{\tau_{a}^{-}<\tau_{b}^{+}\}}\bigr]=Z^{(q)}(x-a)-Z^{(q)}(b-a)\frac{W^{(q)}(x-a)}{W^{(q)}(b-a)}. $$(12)
For each c≥0 we denote by \(W_{c}^{(q)}\) the q-scale function with respect to the measure \(\mathbb {P} ^{c}\). A useful formula (cf. [13]) linking the scale functions under different measures is given by
$$ W^{(q)}(x)=e^{cx}W_{c}^{(q-\psi(c))}(x) $$(13)
for q≥0 and x≥0.
We conclude this section by stating some known regularity properties of scale functions (cf. [12]).
- Smoothness::
-
For all q≥0,
$$ W^{(q)}\vert_{(0,\infty)}\in \begin{cases} C^1(0,\infty),&\text{if $X$ is of bounded variation and $\varPi$ has no atoms}, \\ C^1(0,\infty),&\text{if $X$ is of unbounded variation and $\sigma=0$},\\ C^2(0,\infty),&\text{if $\sigma>0$}. \end{cases} $$
- Continuity at the origin::
-
For all q≥0,
$$ W^{(q)}(0+) = \begin{cases} \mathtt{d}^{-1},&\text{if $X$ is of bounded variation,}\\ 0,&\text{if $X$ is of unbounded variation.} \end{cases} $$(14)
- Right derivative at the origin::
-
For all q≥0,
$$ W^{(q)\prime}_+(0+)=\begin{cases} \frac{q+\varPi(-\infty,0)}{\mathtt{d}^2},&\text{if $\sigma=0$ and $\varPi(-\infty,0)<\infty$,}\\ \frac{2}{\sigma^2},&\text{if $\sigma>0$ or $\varPi(-\infty,0)=\infty$,} \end{cases} $$(15)
where we understand the second case to be +∞ when σ=0.
For technical reasons, we require for the rest of the paper that W (q) is in C 1(0,∞) (and hence Z (q)∈C 2(0,∞)). This is ensured by henceforth assuming that Π is atomless whenever X is of bounded variation.
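These regularity properties can be sanity-checked numerically in the linear Brownian motion setting revisited in Sect. 8, where both scale functions are explicit. The following sketch is illustrative only; the parameter values are hypothetical:

```python
import math

# Scale functions for X_t = (mu - sigma^2/2) t + sigma W_t, the linear
# Brownian motion model of Sect. 8; mu, sigma, q are hypothetical values.
mu, sigma, q = 0.1, 0.3, 0.15

delta = math.sqrt((mu / sigma**2 - 0.5)**2 + 2 * q / sigma**2)
gamma = 0.5 - mu / sigma**2
g1, g2 = gamma - delta, gamma + delta  # roots of psi(theta) = q

def W(x):
    """q-scale function W^{(q)}(x) (identically zero for x <= 0)."""
    if x <= 0:
        return 0.0
    return (math.exp(g2 * x) - math.exp(g1 * x)) / (sigma**2 * delta)

def Z(x):
    """Z^{(q)}(x) = 1 + q * integral_0^x W^{(q)}(y) dy, in closed form."""
    if x <= 0:
        return 1.0
    return math.exp(gamma * x) * (math.cosh(delta * x)
                                  - (gamma / delta) * math.sinh(delta * x))

# X has paths of unbounded variation (sigma > 0), so by (14) and (15):
eps = 1e-7
assert W(eps) < 1e-5                             # W^{(q)}(0+) = 0
assert abs(W(eps) / eps - 2 / sigma**2) < 1e-3   # W^{(q)'}(0+) = 2/sigma^2
assert abs(Z(eps) - 1.0) < 1e-5                  # Z^{(q)}(0+) = 1
```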
4 The Different Parameter Regimes
Our analysis distinguishes between the following parameter regimes.
- Main cases::
-
-
q>0 and ϵ∈(log(K),∞),
-
q>0∨ψ(1) and ϵ=∞.
-
- Special cases::
-
-
q=0 and ϵ∈(log(K),∞),
-
q=0 and ϵ=∞,
-
0<q≤ψ(1) and ϵ=∞.
-
5 Candidate Solution for the Main Cases
The aim of this section is to derive a candidate solution to (1) for the main cases via the principle of smooth and continuous fit [1, 17, 21, 22].
We begin by heuristically motivating a class of stopping times in which we will look for the optimal stopping time under the assumption that q>0 and ϵ∈(log(K),∞). Because \(e^{-qt}(e^{\overline{X}_{t}\wedge\epsilon}-K)^{+}=0\) as long as \((X,\overline{X})\) is in the set
it is intuitively clear that it is never optimal to stop the process \((X,\overline{X})\) in \(C^{*}_{II}\). Moreover, as the process \((X,\overline{X})\) can only move upwards by climbing up the diagonal in the (x,s)-plane (see Fig. 1), it can only leave \(C^{*}_{II}\) through the point (log(K),log(K)). Therefore, one should not exercise until the process \((X,\overline{X})\) has hit the point (log(K),log(K)). It is possible that this never happens as X might escape to −∞ before reaching level log(K). On the other hand, if the process \((X,\overline{X})\) is in {(x,s)∈E:s≥ϵ}, it should be stopped immediately due to the discounting, as the spatial part of the payout is deterministic and fixed at e ϵ−K in value. The remaining case is when \((X,\overline{X})\) is in {(x,s)∈E:log(K)<s<ϵ}, in which case we can argue in the same way as described on p. 6, Sect. 3 of [20]: The dynamics of the process \((X, \overline{X})\) are such that \(\overline{X}\) remains constant at times when X is undertaking an excursion below \(\overline{X}\). During such periods the discounting in the payoff is detrimental. One should therefore not allow X to drop too far below \(\overline{X}\) in value, as otherwise the time it will take X to recover to the value of its previous maximum will prove costly in terms of the gain, on account of exponential discounting. More specifically, given a current value s, s∈(log(K),ϵ), of \(\overline{X}\), there should be a point g ϵ (s)>0 such that if the process X reaches or jumps below the value (s−g ϵ (s),s) we should stop instantly (see Fig. 1). In more mathematical terms, we expect an optimal stopping time of the form
$$ \tau_{g_{\epsilon}}:=\inf\bigl\{t>0:\overline{X}_{t}-X_{t}\geq g_{\epsilon}(\overline{X}_{t})\ \text{and}\ \overline{X}_{t}>\log(K)\bigr\} $$(16)
for some function g ϵ :(log(K),ϵ)→(0,∞) such that lim s↑ϵ g ϵ (s)=0 and g ϵ (s)=0 for s>ϵ. This is illustrated in Fig. 1. For (x,s)∈E, we define the value function associated with \(\tau_{g_{\epsilon}}\) by
$$ V_{g_{\epsilon}}(x,s):=\mathbb{E}_{x,s}\bigl[e^{-q\tau_{g_{\epsilon}}}\bigl(e^{\overline{X}_{\tau_{g_{\epsilon}}}\wedge\epsilon}-K\bigr)^{+}\bigr]. $$(17)
Now suppose for the moment that we have chosen a function g ϵ . The strong Markov property and Theorem 3.12 of [13] then imply that, for \((x,s)\in C^{*}_{II}\),
This means that \(V_{g_{\epsilon}}\) is determined on \(C^{*}_{II}\) as soon as \(V_{g_{\epsilon}}\) is known on
This leaves us with two key questions:
-
How should one choose g ϵ ?
-
Given g ϵ , what does \(V_{g_{\epsilon}}(x,s)\) look like for (x,s)∈E 1?
These questions can be answered heuristically in the spirit of the method applied in Sect. 3 of [20], but adapted to the case when X is a spectrally negative Lévy process (rather than a diffusion). More precisely, as we shall see in more detail in Sect. 7, the general theory of optimal stopping [22, 28] together with the principle of smooth and continuous fit [1, 17, 21, 22] suggests that g ϵ should be a solution to the ordinary differential equation
$$ g_{\epsilon}'(s)=1-\frac{e^{s}Z^{(q)}(g_{\epsilon}(s))}{(e^{s}-K)\,qW^{(q)}(g_{\epsilon}(s))},\qquad \log(K)<s<\epsilon, $$(18)
and that \(V_{g_{\epsilon}}(x,s)=(e^{s\wedge\epsilon}-K) Z^{(q)}(x-s+g_{\epsilon}(s)) \) for (x,s)∈E 1. Note that there might be many solutions to (18) without an initial/boundary condition. However, we are specifically looking for the solution satisfying lim s↑ϵ g ϵ (s)=0. Summing up, we have identified a candidate stopping time \(\tau_{g_{\epsilon}}\) and a candidate value function \(V_{g_{\epsilon}}\).
As for the case q>0∨ψ(1) and ϵ=∞, one might let ϵ tend to infinity which informally yields a candidate stopping time of the form (16) with g ϵ replaced with g ∞, where g ∞ should satisfy (18), but on (log(K),∞) instead of (log(K),ϵ). The corresponding value function \(V_{g_{\infty}}\) is then expected to be of the form \(V_{g_{\infty}}(x,s)=(e^{s}-K) Z^{(q)}(x-s+g_{\infty}(s)) \) for (x,s)∈E 1. If we are to identify g ∞ as a solution to (18), we need an initial/boundary condition which in this case can be found as follows. For s≫K the payoff in (1) resembles the payoff of the Shepp-Shiryaev optimal stopping problem [2, 13, 18] and hence we expect s↦s−g ∞(s) to look similar to the optimal boundary of the Shepp-Shiryaev optimal stopping problem for s≫K. Therefore, we expect that lim s↑∞ g ∞(s)=k ∗, where k ∗>0 is the unique root of the equation Z (q)(s)−qW (q)(s)=0 (cf. [18]).
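The candidate boundary can also be traced numerically by integrating (18) backwards from the boundary condition lim s↑ϵ g ϵ (s)=0. The sketch below does this for the linear Brownian motion model of Sect. 8, with hypothetical parameter values; since g ϵ ′(s) blows up at the cap, it is more stable to parametrise the curve by g and step ds=dg/g ϵ ′(s):

```python
import math

# Hypothetical parameters for X_t = (mu - sigma^2/2) t + sigma W_t; the cap
# level eps_cap and strike K are illustrative only.
mu, sigma, q, K, eps_cap = 0.1, 0.3, 0.15, 1.0, 1.0

delta = math.sqrt((mu / sigma**2 - 0.5)**2 + 2 * q / sigma**2)
gamma = 0.5 - mu / sigma**2
g1, g2 = gamma - delta, gamma + delta

def W(x):  # explicit q-scale function W^{(q)} for this model
    return (math.exp(g2 * x) - math.exp(g1 * x)) / (sigma**2 * delta)

def Z(x):  # explicit Z^{(q)}(x) = 1 + q * int_0^x W^{(q)}(y) dy
    return math.exp(gamma * x) * (math.cosh(delta * x)
                                  - (gamma / delta) * math.sinh(delta * x))

# Integrate backwards from g_eps(eps_cap) = 0.  Stepping in g (rather than
# in s) avoids the singularity of g_eps'(s) at the cap.
s, g, dg = eps_cap, 1e-9, 1e-3
path = []
for _ in range(5000):
    gp = 1.0 - math.exp(s) * Z(g) / ((math.exp(s) - K) * q * W(g))
    s += dg / gp          # gp < 0 on this branch, so s decreases as g grows
    g += dg
    path.append((s, g))

# s decreases towards log(K) (here 0) but never reaches it, while g_eps
# grows without bound: the boundary "comes from infinity" at log(K).
assert all(sp > math.log(K) for sp, _ in path)
assert path[0][0] > path[-1][0]
```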
These heuristic arguments are made rigorous in the next section.
6 Main Results
6.1 The Different Solutions of the ODE
In this subsection we investigate, for q>0, the solutions of the ordinary differential equation
$$ g'(s)=1-\frac{e^{s}Z^{(q)}(g(s))}{(e^{s}-K)\,qW^{(q)}(g(s))},\qquad s>\log(K), $$(19)
whose graph lies in
$$ U:=\bigl\{(s,H)\in\mathbb{R}^{2}:s>\log(K),\ H>0\bigr\}. $$
These solutions will, as already hinted in the previous section, play an important role. But before we analyse (19), recall that the requirement W (q)(0+)<q −1 is the same as asking that either X is of unbounded variation or X is of bounded variation with d>q. Similarly, the condition W (q)(0+)≥q −1 means that X is of bounded variation with 0<d≤q. Also note that W (q)(0+)≥q −1 implies q≥d>ψ(1).
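Both equivalences can be spelled out from (14) together with the bounded variation form of ψ:

```latex
W^{(q)}(0+)\geq q^{-1}
\;\overset{(14)}{\Longleftrightarrow}\;
\text{$X$ is of bounded variation with } \mathtt{d}^{-1}\geq q^{-1}
\;\Longleftrightarrow\;
0<\mathtt{d}\leq q,
```

and in that case, since \(\psi(1)=\mathtt{d}-\int_{(-\infty,0)}(1-e^{x})\varPi(dx)<\mathtt{d}\), we indeed obtain q≥d>ψ(1).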
The existence of solutions to (19) and their behaviour under the different parameter regimes is summarised in the next result.
Lemma 6.1
Assume that q>0. For ϵ∈(log(K),∞), we have the following.
-
(a)
If q>ψ(1) and W (q)(0+)<q −1, then there exists a unique solution g ϵ :(log(K),ϵ)→(0,∞) to (19) such that lim s↑ϵ g ϵ (s)=0.
-
(b)
If W (q)(0+)≥q −1 (and hence q>ψ(1)), then there exists a unique solution g ϵ :(log(K),ϵ∧β)→(0,∞) to (19) such that lim s↑ϵ∧β g ϵ (s)=0. Here, the constant β is given by β:=log(K(1−d/q)−1)∈(0,∞].
-
(c)
If q≤ψ(1), then there exists a unique solution g ϵ :(log(K),ϵ)→(0,∞) to (19) such that lim s↑ϵ g ϵ (s)=0.
For ϵ=∞, we have in particular:
-
(d)
If q > ψ(1) and W (q)(0+) < q −1, then there exists a unique solution g ∞:(log(K),∞) →(0,∞) to (19) such that lim s↑∞ g ∞(s)=k ∗, where k ∗∈(0,∞) is the unique root of Z (q)(s)−qW (q)(s)=0.
-
(e)
If W (q)(0+)≥q −1 (and hence q>ψ(1)), then there exists a unique solution g ∞:(log(K),β)→(0,∞) to (19) such that lim s↑β g ∞(s)=0. The constant β is as in (b).
Moreover, all the solutions mentioned in (a)–(e) tend to +∞ as s↓log(K). Also note that if β≤ϵ then the solutions in (b) and (e) coincide. Finally, the qualitative behaviour of the solutions of (19) is displayed in Figs. 2, 3, and 4.
We will henceforth use the following convention: If a solution to (19) is not defined for all s∈(log(K),∞), we extend it to (log(K),∞) by setting it equal to zero wherever it is not defined (typically s≥ϵ).
6.2 Verification of the Case q>0 and ϵ∈(log(K),∞)
We are now in a position to state our first main result.
Theorem 6.2
Suppose that q>0 and ϵ∈(log(K),∞). Then the solution to (1) is given by
with value A ϵ ∈(0,∞) given by
and optimal stopping time
where g ϵ is given in Lemma 6.1. Moreover,
Remark 6.3
With the help of excursion theory, it is possible to obtain an alternative representation for \(V^{*}_{\epsilon}(s,s)\) for log(K)≤s<ϵ∧β. (See Appendix B for the relevant computations.) Specifically, under the same assumptions as in Theorem 6.2, we have
where \(\hat{f}(u)= \frac {Z^{(q)}(u)W^{(q)\prime }(u)}{W^{(q)}(u)}-qW^{(q)}(u) \) and we understand β=∞ unless W (q)(0+)≥q −1, in which case we take β=log(K(1−d/q)−1) as before. In particular, we can identify the value A ϵ as the above expression, setting s=log(K).
Remark 6.4
Assume that (x,s)∈E such that log(K)<s<ϵ∧β and set β=∞ unless W (q)(0+)≥q −1. The excursion theoretic calculation that led to (22) contains an additional result, namely that \(\mathbb {P} _{x,s}[\tau^{*}_{\epsilon}=\tau^{+}_{\epsilon\wedge\beta}]\in(0,1)\). To see this, note that it follows from the computation in Appendix B that
Hence, the claim follows provided the integral on the right-hand side is strictly positive and finite. Indeed, changing variables according to v=g ϵ (u) and using the explicit form of \(g_{\epsilon}^{\prime}\) gives
where \(y(v):=\frac{e^{g_{\epsilon}^{-1}(v)}}{q(e^{g_{\epsilon}^{-1}(v)}-K)} Z^{(q)}(v) - W^{(q)}(v) \) and \(g_{\epsilon}^{-1}\) is the inverse of g ϵ . Using (14) one may then deduce that y(v) is bounded on (0,g ϵ (s)] by a constant, say C>0, and that
This proves the claim. A similar phenomenon in a different context has been observed in [14].
Let us now discuss some consequences of Theorem 6.2. Firstly, it shows that if ψ′(0+)≥0 the stopping problem has an optimal solution in the smaller class of [0,∞)-valued \(\mathbb{F}\)-stopping times. On the other hand, if there is a possibility that the process X drifts to −∞ before reaching log(K), which occurs exactly when ψ′(0+)<0, then the probability that \(\tau_{\epsilon}^{*}\) is infinite is strictly positive and \(\tau_{\epsilon}^{*}\) is only optimal in the class of [0,∞]-valued \(\mathbb{F}\)-stopping times.
Secondly, when W (q)(0+)≥q −1 or, equivalently, X is of bounded variation with q≥d, the result shows that g ϵ (s) hits the origin at ϵ∧β, where β=log(K(1−d/q)−1) (see Fig. 5). Intuitively speaking, if β<ϵ, the discounting is so strong that it is best to stop even before reaching the level ϵ. On the other hand, if β≥ϵ, it would be better to wait longer, but as there is a cap we are forced to stop as soon as we have reached it.
As already observed in [18], it is also the case in our setting that, if W (q)(0+)<q −1, the slope of g ϵ at ϵ (and hence the shape of the optimal boundary s↦s−g ϵ (s)) changes according to the path variation of X. Specifically, it holds that
Next, introduce the sets
Two examples of g ϵ and the corresponding continuation region \(C^{*}_{I}\cup C^{*}_{II}\) and stopping region D ∗ are pictorially displayed in Fig. 5.
6.3 Verification of the Case q>0∨ψ(1) and ϵ=∞
The analogous result to Theorem 6.2 reads as follows.
Theorem 6.5
Suppose that q>0∨ψ(1) and ϵ=∞. Then the solution to (1) is given by
with value A ∞∈(0,∞) given by
and optimal stopping time
where g ∞ is given in Lemma 6.1. Moreover,
Remark 6.6
As in Remark 6.3, \(V_{\infty}^{*}(s,s)\) can be identified as the integral in (22) with ϵ=∞ for log(K)≤s<β in the case W (q)(0+)≥q −1. Otherwise it is identified as
where \(\hat{f}(u)= \frac {Z^{(q)}(u)W^{(q)\prime }(u)}{W^{(q)}(u)}-qW^{(q)}(u) \) as before. (See again the computations in Appendix B.) In particular, one obtains an alternative expression for A ∞.
Similarly to Theorem 6.2 one sees again that if ψ′(0+)≥0 there is an optimal stopping time in the class of all [0,∞)-valued \(\mathbb{F}\)-stopping times. Furthermore, let \(C^{*}_{I}=C^{*}_{I,\infty}\) and \(D^{*}=D^{*}_{\infty}\) denote the same sets as in (23), but with g ∞ instead of g ϵ . The (qualitative) behaviour of g ∞ and the resulting shape of the continuation region \(C^{*}_{I}\cup C^{*}_{II} \) and stopping region D ∗ are illustrated in Fig. 6.
6.4 The Special Cases
In this subsection we deal with the cases that have not been considered yet, i.e., the special cases (see Sect. 4).
Lemma 6.7
Suppose that q=0 and ϵ∈(log(K),∞).
-
(a)
When ψ′(0+)<0 and Φ(0)≠1, then the solution to (1) is given by
$$ V_\epsilon^*(x,s)= \begin{cases}e^\epsilon-K,&s\geq\epsilon,\\ e^s-K+\frac{e^{x\varPhi(0)}}{\varPhi(0)-1} (e^{s(1-\varPhi(0))}-e^{\epsilon(1-\varPhi(0))} ),&\log(K)\leq s<\epsilon,\\ e^{-\varPhi(0)(\log(K)-x)}A_\epsilon,&s<\log(K),\end{cases} $$
where \(A_{\epsilon}:=\frac{K^{\varPhi(0)}(K^{1-\varPhi(0)}-e^{\epsilon(1-\varPhi(0))})}{\varPhi(0)-1}\), and \(\tau_{\epsilon}^{*}=\tau_{\epsilon}^{+}\). If Φ(0)=1, then the middle term on the right-hand side in the expression for \(V_{\epsilon}^{*}(x,s)\) has to be replaced by e s−K+e x(ϵ−s) and A ϵ by K(ϵ−log(K)).
-
(b)
When ψ′(0+)≥0, then the solution to (1) is given by \(V_{\epsilon}^{*}\equiv e^{\epsilon}-K\) and \(\tau_{\epsilon}^{*}=\tau^{+}_{\epsilon}\).
Note that although the optimal stopping time is the same in both parts of Lemma 6.7, in (a) it attains the value infinity with positive probability, whereas in (b) this happens with probability zero. Hence, in (b) there is actually an optimal stopping time in the class of [0,∞)-valued \(\mathbb{F}\)-stopping times.
Lemma 6.8
Suppose that ϵ=∞.
-
(a)
Assume that q=0. If ψ′(0+)<0 and Φ(0)>1, we have
$$ V_\infty^*(x,s)= \begin{cases}e^s-K+\frac{e^{x\varPhi(0)+s(1-\varPhi(0))}}{\varPhi(0)-1},&s\geq\log(K), \\ e^{-\varPhi(0)(\log(K)-x)}\frac{K}{\varPhi(0)-1},&s<\log(K),\end{cases} $$(26)
and the optimal stopping time is given by \(\tau_{\infty}^{*}=\infty\). On the other hand, if either ψ′(0+)<0 and Φ(0)≤1 or ψ′(0+)≥0, then \(V_{\infty}^{*}(x,s)\equiv\infty\) and \(\tau_{\infty}^{*}=\infty\).
-
(b)
When 0<q≤ψ(1), we have \(V_{\infty}^{*}(x,s)\equiv\infty\).
The second part of Lemma 6.8 is intuitively clear. If 0<q≤ψ(1), then the average upwards motion of X (and hence of \(\overline{X}\)) is stronger than the discounting. On the other hand, ψ′(0+)<0 means that X will eventually drift to −∞ and thus will eventually attain its maximum (in the pathwise sense). Of course, we do not know when this happens, but since there is no discounting we do not mind waiting forever. The other cases in Lemma 6.8 have a similar interpretation.
6.5 The Maximality Principle
The maximality principle has been established as a powerful tool for solving a class of stopping problems for the maximum process associated with a one-dimensional time-homogeneous diffusion [20]. Although we work with a different class of processes, our main results (Lemma 6.1, Theorems 6.2 and 6.5, and Lemma 6.8(b)) can be reformulated through the maximality principle.
Lemma 6.9
Suppose that q>0 and ϵ∈(log(K),∞). Define the set
Let \(g_{\epsilon}^{*}\) be the minimal solution in \(\mathcal{S}\). Then the solution to (1) is given by (20) and (21) with g ϵ replaced by \(g^{*}_{\epsilon}\).
In the case that there is a cap, it cannot happen that the value function becomes infinite. This changes when there is no cap.
Lemma 6.10
Let q>0 and ϵ=∞.
-
1.
Let \(g_{\infty}^{*}\) denote the minimal solution to (19) which does not hit zero (whenever such a solution exists). Then the solution to (1) is given by (24) and (25) with g ∞ replaced by \(g_{\infty}^{*}\).
-
2.
If every solution to (19) hits zero, then the value function in (1) is given by \(V_{\infty}^{*}(x,s)\equiv\infty\).
Remark 6.11
-
1.
We select the minimal solution rather than the maximal one as in [20], since our functions g ϵ (s) are the analogue of s−g ϵ (s) in [20].
-
2.
The “right” boundary conditions which were used to select g ϵ and g ∞ from the class of solutions of (19) (see Sect. 5) are not used in the formulation of Lemmas 6.9 and 6.10. In fact, by choosing the minimal solution, it follows as a consequence that \(g_{\epsilon}^{*}\) and \(g_{\infty}^{*}\) have exactly the “right” boundary conditions. Put differently, the “minimality principle” is a means of selecting the “good” solution from the class of all solutions of (19). This is a reformulation of [20] in our specific setting.
-
3.
A similar observation is contained in [8], but in a slightly different setting.
-
4.
If ϵ=∞, the solutions to (19) that hit zero correspond to the so-called “bad-good” solutions in [20]; “bad” since they do not give the optimal boundary, “good” as they can be used to approximate the optimal boundary.
7 Guess via Principle of Smooth and Continuous Fit
Our proofs are essentially based on a “guess and verify” technique. Here we provide the missing details from Sect. 5 on how to “guess” a candidate solution. The following presentation is an adaptation of the argument of Sect. 3 of [20] to our setting.
Assume that q>0 and ϵ∈(log(K),∞). Let g ϵ :(log(K),ϵ)→(0,∞) be continuously differentiable, define the stopping time \(\tau_{g_{\epsilon}}\) as in (16) and let \(V_{g_{\epsilon}}\) be as in (17). For simplicity assume from now on that X is of unbounded variation (if X is of bounded variation, a similar argument based on the principle of continuous fit applies, see [1, 21, 22]). From the general theory of optimal stopping [22, 28], we would expect that \(V_{g_{\epsilon}}\) satisfies, for (x,s)∈E such that log(K)<s<ϵ, the system
where Γ is the infinitesimal generator of the process X under \(\mathbb {P} \). For functions \(h\in C^{\infty}_{0}( \mathbb {R} )\) and \(z\in \mathbb {R} \), it is given by
Here \(C^{\infty}_{0}( \mathbb {R} )\) denotes the class of infinitely differentiable functions h on \(\mathbb {R} \) such that h and its derivatives vanish at infinity. In addition, the principle of smooth fit (cf. [17, 22]) suggests that the system above should be complemented by
Note that the smooth fit condition is not necessarily part of the general theory; it is imposed because, by the “rule of thumb” outlined in Sect. 7 of [1], one suspects it should hold in this setting on account of path regularity. This belief will be vindicated when we show that the system (27) and (29) leads to the desired solution. Applying the strong Markov property at \(\tau^{+}_{s}\) and using (11) and (12) shows that
Furthermore, the smooth fit condition (29) implies
By (15) the first factor tends to a strictly positive value or infinity which shows that \(V_{g_{\epsilon}}(s,s)=(e^{s}-K) Z^{(q)}(g_{\epsilon}(s)) \). This would mean that for all (x,s)∈E such that log(K)<s<ϵ we have
Finally, using the normal reflection condition shows that our candidate function g ϵ should satisfy the first-order differential equation
$$ g_{\epsilon}'(s)=1-\frac{e^{s}Z^{(q)}(g_{\epsilon}(s))}{(e^{s}-K)\,qW^{(q)}(g_{\epsilon}(s))},\qquad \log(K)<s<\epsilon. $$
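How this ODE arises can be spelled out (a sketch, with \(V_{g_{\epsilon}}(x,s)=(e^{s}-K)Z^{(q)}(x-s+g_{\epsilon}(s))\) as above): the normal reflection condition \(\partial V_{g_{\epsilon}}/\partial s\,(x,s)\vert_{x=s-}=0\), together with \(Z^{(q)\prime}=qW^{(q)}\), gives

```latex
0 = e^{s}\,Z^{(q)}\bigl(g_\epsilon(s)\bigr)
  + \bigl(e^{s}-K\bigr)\,q\,W^{(q)}\bigl(g_\epsilon(s)\bigr)\,\bigl(g_\epsilon'(s)-1\bigr),
\qquad\text{i.e.}\qquad
g_\epsilon'(s) = 1-\frac{e^{s}\,Z^{(q)}\bigl(g_\epsilon(s)\bigr)}
                        {(e^{s}-K)\,q\,W^{(q)}\bigl(g_\epsilon(s)\bigr)},
```

which is precisely the equation for g ϵ whose solutions are analysed in Sect. 6.1.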
8 Example
Suppose that \(X_{t}=(\mu-\frac{1}{2}\sigma^{2})t+\sigma W_{t}\), where \(\mu\in \mathbb {R} ,\sigma>0\) and (W t ) t≥0 is a standard Brownian motion. It is well-known that in this case the scale functions are given by
$$ W^{(q)}(x)=\frac{2}{\sigma^{2}\delta}e^{\gamma x}\sinh(\delta x)\quad\text{and}\quad Z^{(q)}(x)=e^{\gamma x}\Bigl(\cosh(\delta x)-\frac{\gamma}{\delta}\sinh(\delta x)\Bigr) $$
on x≥0, where \(\delta(q)=\delta=\sqrt{(\frac{\mu}{\sigma^{2}}-\frac{1}{2})^{2}+\frac{2q}{\sigma^{2}}}\) and \(\gamma=\frac{1}{2}-\frac{\mu}{\sigma^{2}}\). Additionally, let γ 1:=γ−δ and γ 2:=γ+δ=Φ(q), both of which are roots of the quadratic equation \(\frac{\sigma^{2}}{2}\theta^{2}+(\mu-\frac{\sigma^{2}}{2})\theta-q=0\) and satisfy γ 2>0>γ 1. Using the specific form of Z (q) and W (q) it is straightforward to obtain the following result.
Lemma 8.1
Let ϵ=∞ and assume that q>ψ(1) or, equivalently, q>μ. Then the solution to (1) is given by
where \(A_{\infty}=\lim_{s\downarrow\log(K)}(e^{s}-K)e^{\gamma_{2}g_{\infty}(s)}\). The corresponding optimal strategy is given by \(\tau_{\infty}^{*}:=\inf\{t>0:\overline{X}_{t}-X_{t}\geq g_{\infty}(\overline{X}_{t}) \ \textit{and} \ \overline{X}_{t}>\log(K)\}\), where g ∞ is the unique strictly positive solution to the differential equation
such that lim s↑∞ g ∞(s)=k ∗, where the constant k ∗∈(0,∞) is given by
$$ k^{*}=\frac{1}{\gamma_{2}-\gamma_{1}}\log\biggl(\frac{\gamma_{2}(1-\gamma_{1})}{\gamma_{1}(1-\gamma_{2})}\biggr). $$
Lemma 8.1 is nothing other than Theorem 2.5 of [19] or Theorem 1 of [10] which shows that our results are consistent with the existing literature.
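As a quick consistency check (with hypothetical parameter values satisfying q>μ=ψ(1)), the constant k ∗ can be located numerically from its defining equation Z (q)(s)−qW (q)(s)=0 and compared with the closed form obtained by solving that equation with the explicit Brownian scale functions:

```python
import math

# Hypothetical parameters with q > mu = psi(1), as Lemma 8.1 requires.
mu, sigma, q = 0.1, 0.3, 0.15

delta = math.sqrt((mu / sigma**2 - 0.5)**2 + 2 * q / sigma**2)
gamma = 0.5 - mu / sigma**2
g1, g2 = gamma - delta, gamma + delta

def W(x):
    return (math.exp(g2 * x) - math.exp(g1 * x)) / (sigma**2 * delta)

def Z(x):
    return math.exp(gamma * x) * (math.cosh(delta * x)
                                  - (gamma / delta) * math.sinh(delta * x))

def h(x):
    return Z(x) - q * W(x)

# h is strictly decreasing on (0, inf) with h(0+) = 1 (cf. Sect. 9), so a
# simple bisection locates the unique root k*.
lo, hi = 1e-9, 50.0
assert h(lo) > 0 > h(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
k_star = 0.5 * (lo + hi)

# Cross-check against the closed form obtained by solving
# Z^{(q)}(k) = q W^{(q)}(k) explicitly (using gamma_1*gamma_2 = -2q/sigma^2).
k_closed = math.log(g2 * (1 - g1) / (g1 * (1 - g2))) / (g2 - g1)
assert abs(k_star - k_closed) < 1e-9
```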
9 Proof of Main Results
Proof of Lemma 6.1
Recall that q>0. We distinguish three cases:
-
q>ψ(1) and W (q)(0+)<q −1,
-
W (q)(0+)≥q −1 (and hence q>ψ(1), see beginning of Sect. 6.1),
-
ψ(1)≥q.
The Case q>ψ(1) and W (q)(0+)<q −1
The assumptions imply that the function H↦Z (q)(H)−qW (q)(H) is strictly decreasing on (0,∞) and has a unique root k ∗∈(0,∞) (cf. Proposition 2.1 of [18]). In particular, \(\frac{ Z^{(q)}(H) }{q W^{(q)}(H) }>1\) for H<k ∗, \(\frac{ Z^{(q)}(H) }{q W^{(q)}(H) }<1\) for H>k ∗ and \(\frac{ Z^{(q)}(k^{*}) }{q W^{(q)}(k^{*}) }=1\). It is also known that the mapping \(H\mapsto\frac{ Z^{(q)}(H) }{q W^{(q)}(H) }\) is strictly decreasing on (0,∞) (cf. first Remark in Sect. 3 of [23]) and that \(\lim_{H\to\infty}\frac{ Z^{(q)}(H) }{q W^{(q)}(H) }=\varPhi(q)^{-1}\) (cf. Lemma 1 of [2]). We will make use of these properties below.
The ordinary differential equation (19) has, at least locally, a unique solution for every starting point (s 0,H 0)∈U by the Picard-Lindelöf theorem (cf. Theorem 1.1 in [11]), on account of local Lipschitz continuity of the field. It is well-known that these unique local solutions can be extended to their maximal interval of existence (cf. Theorem 3.1 of [11]). Hence, whenever we speak of a solution to (19) from now on, we implicitly mean the unique maximal one. In order to analyse (19), we sketch its direction field based on various qualitative features of the ODE. The 0-isocline, that is, the points (s,H) in U satisfying \(1-\frac{e^{s} Z^{(q)}(H) }{(e^{s}-K)q W^{(q)}(H) }=0\), is given by the graph of
Using analytical properties of the map H↦Z (q)(H)/(qW (q)(H)) given at the beginning of the paragraph above, one deduces that f is strictly decreasing on (k ∗,∞) and that η:=lim H↑∞ f(H)=log(K(1−Φ(q)−1)−1) and \(\lim_{H\downarrow k^{*}}f(H)=\infty\). Moreover, the inverse of f, which exists due to the strict monotonicity of f, will be denoted by f −1. Using the 0-isocline and what was said in the paragraph above, we obtain qualitatively the direction field shown in Fig. 7.
We continue by investigating two types of solutions. Let s 0>log(K) and let g(s) be the solution such that g(s 0)=k ∗ which is defined on the maximal interval of existence, say I g , of g. From the specific form of the direction field and the fact that solutions tend to the boundary of U (cf. Theorem 3.1 of [11]), we infer that \(I_{g}=(\log(K),\tilde{s})\) for some \(\tilde{s}>s_{0}\), \(\lim_{s\uparrow\tilde{s}}g(s)=0\) and lim s↓log(K) g(s)=∞. In other words, the solutions of (19) which intersect the horizontal line H=k ∗ come from infinity and eventually hit zero (with infinite gradient if W (q)(0+)=0 and with finite gradient if W (q)(0+)∈(0,q −1)). Next, suppose that s 0>η and let g(s) be the solution such that g(s 0)=f −1(s 0). Similarly to above, we conclude that I g =(log(K),∞), lim s↑∞ g(s)=∞ and lim s↓log(K) g(s)=∞. Put differently, every solution that intersects the 0-isocline comes from infinity and tends to infinity.
Let \(\mathcal{S}^{-}\) be the set of solutions of (19) whose range contains the value \(k^{*}\) and \(\mathcal{S}^{+}\) the set of solutions of (19) whose graph \(s\mapsto g(s)\) intersects the 0-isocline (see Fig. 7). Both these sets are non-empty, as explained in the previous paragraph. For fixed \(s^{*}>\eta\) define
It follows that \(k^{*}\leq H^{*}_{-}\leq H^{*}_{+}\leq f^{-1}(s^{*})\), and we claim that \(H^{*}_{-}=H^{*}_{+}\). Suppose this were false and choose \(H_{1},H_{2}\) such that \(H^{*}_{-}<H_{1}<H_{2}<H^{*}_{+}\). Denote by \(g_{1}\) the solution to (19) such that \(g_{1}(s^{*})=H_{1}\) and by \(g_{2}\) the solution to (19) such that \(g_{2}(s^{*})=H_{2}\). Both these solutions must lie between the 0-isocline and the horizontal line \(H=k^{*}\). In particular, it holds that \(I_{g_{1}}=I_{g_{2}}=(\log(K),\infty)\) and
Furthermore, set \(F(s,H):=1-\frac{e^{s} Z^{(q)}(H) }{(e^{s}-K)q W^{(q)}(H) }\) for \((s,H)\in U\) and observe from the earlier remarks that, for fixed s, it is an increasing function of H. Using this and the fact that \(g_{1}(s)<g_{2}(s)\) for all \(s>\log(K)\), we may write (using the equivalent integral formulation of (19))
for \(s>\log(K)\). This contradicts (33) and hence \(H^{*}_{-}=H^{*}_{+}\). Denote by \(g_{\infty}\) the solution to (19) such that \(g_{\infty}(s^{*})=H^{*}_{-}\). By construction, \(g_{\infty}\) lies above all the solutions in \(\mathcal{S}^{-}\) and below all the solutions in \(\mathcal{S}^{+}\). In particular, \(I_{g_{\infty}}=(\log(K),\infty)\) and \(\lim_{s\to\infty}g_{\infty}(s)=k^{*}\).
So far we have found that there are (at least) three types of solutions of (19) and, in fact, there are no more; that is, any solution to (19) either lies in \(\mathcal{S}^{-}\cup\mathcal{S}^{+}\) or coincides with \(g_{\infty}\). To see this, note that the graph of \(g_{\infty}\) splits U into two disjoint sets. If \((s,H)\in U\) lies above the graph of \(g_{\infty}\), then the specific form of the field implies that the solution through \((s,H)\), say g, must intersect the vertical line \(s=s^{*}\) with \(g(s^{*})>H^{*}_{+}\); thus \(g\in\mathcal{S}^{+}\). Similarly, one may deduce that the solution through a point lying below the graph of \(g_{\infty}\) must intersect the horizontal line \(H=k^{*}\) and therefore lies in \(\mathcal{S}^{-}\).
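The squeeze \(H^{*}_{-}=H^{*}_{+}\) can be mimicked by a shooting method: bisect over the starting value \(g(s^{*})=H\) and classify the resulting solution as type \(\mathcal{S}^{-}\) (it hits zero) or type \(\mathcal{S}^{+}\) (it survives, eventually crossing the isocline and growing). A sketch, again for the hypothetical Brownian example with q=2, K=1 and \(s^{*}=2\) (all choices ours, and the integration horizon is a crude numerical stand-in for the limit \(s\to\infty\)):

```python
import math

q, K, s_star, s_end = 2.0, 1.0, 2.0, 8.0
a = math.sqrt(2.0 * q)

def W(x): return (2.0 / a) * math.sinh(a * x)
def Z(x): return math.cosh(a * x)

k_star = math.atanh(1.0 / a) / a
# Point of the 0-isocline at s*: solve Z(H)/(q W(H)) = 1 - K e^{-s*} for H
r_iso = 1.0 - K * math.exp(-s_star)
H_iso = math.atanh(1.0 / (a * r_iso)) / a

def hits_zero(H0):
    """Forward-integrate (19) from (s*, H0); report whether the solution hits zero."""
    s, g = s_star, H0
    ds = 2e-4
    while s < s_end:
        if g <= 1e-3:
            return True
        g += (1.0 - math.exp(s) * Z(g) / ((math.exp(s) - K) * q * W(g))) * ds
        s += ds
    return False

lo, hi = k_star + 1e-9, H_iso            # an S^- start and an S^+ start, respectively
assert hits_zero(lo) and not hits_zero(hi)
for _ in range(35):                       # bisection squeezes towards g_inf(s*)
    mid = 0.5 * (lo + hi)
    if hits_zero(mid):
        lo = mid
    else:
        hi = mid
H_inf = 0.5 * (lo + hi)
assert k_star < H_inf < H_iso            # g_inf lies strictly between the two barriers
```

The classification is clean because no solution above \(g_{\infty}\) can reach zero (it would have to cross \(g_{\infty}\), contradicting uniqueness), while solutions below \(g_{\infty}\) are eventually driven through the line \(H=k^{*}\) and down to zero.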
Finally, we claim that given \(\epsilon>\log(K)\), there exists a unique solution \(g_{\epsilon}\) of (19) such that \(I_{g_{\epsilon}}=(\log(K),\epsilon)\) and \(\lim_{s\uparrow\epsilon}g_{\epsilon}(s)=0\). Indeed, define the sets
One can then show by a similar argument as above that \(s^{-}_{\epsilon}=s^{+}_{\epsilon}\). The solution corresponding to this common value, denoted \(g_{\epsilon}\), is then the desired one.
This whole discussion is summarised pictorially in Fig. 2.
The Case \(W^{(q)}(0+)\geq q^{-1}\)
Similarly to the first case, one sees that under the current assumptions it is still true that f is strictly decreasing on \((0,\infty)\) and \(\eta:=\lim_{H\uparrow\infty}f(H)=\log(K(1-\varPhi(q)^{-1})^{-1})\). Moreover, recalling that \(W^{(q)}(0+)=d^{-1}\), one deduces that \(\lim_{H\downarrow 0}f(H)=\beta\), where
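For a concrete bounded-variation example one may take a Cramér–Lundberg process, drift d minus a compound Poisson process with Exp(α) jumps at rate λ; this choice and all parameter values below are our own illustration, not taken from the paper. Here \(\psi(\theta)=d\theta-\lambda\theta/(\alpha+\theta)\) and \(1/(\psi(\theta)-q)\) has a two-term partial-fraction expansion, which gives \(W^{(q)}\) explicitly and exhibits \(W^{(q)}(0+)=d^{-1}\) directly. We pick \(d\leq q\), so that \(W^{(q)}(0+)\geq q^{-1}\) as in the present case.

```python
import math

# Hypothetical Cramer-Lundberg example: our parameter choices, with d <= q.
d, lam, alpha, q = 0.4, 1.0, 2.0, 0.5

def psi(t):
    return d * t - lam * t / (alpha + t)

# psi(theta) - q = d (theta - th1)(theta - th2) / (theta + alpha): solve the quadratic
b, c = d * alpha - lam - q, -q * alpha
th1 = (-b + math.sqrt(b * b - 4 * d * c)) / (2 * d)   # = Phi(q), the positive root
th2 = (-b - math.sqrt(b * b - 4 * d * c)) / (2 * d)

A = (th1 + alpha) / (d * (th1 - th2))                 # partial-fraction coefficients
B = (th2 + alpha) / (d * (th2 - th1))

def W(x):   # scale function: term-by-term Laplace inversion of 1/(psi(theta)-q)
    return A * math.exp(th1 * x) + B * math.exp(th2 * x)

assert abs(W(0.0) - 1.0 / d) < 1e-12                  # W^{(q)}(0+) = d^{-1}, as in the text

# Sanity check of the defining transform, int_0^inf e^{-beta x} W(x) dx = 1/(psi(beta)-q):
beta = th1 + 1.0
h, n = 0.01, 6000                                     # trapezoid rule on [0, 60]
integral = sum(math.exp(-beta * k * h) * W(k * h) for k in range(1, n)) * h \
           + 0.5 * h * (W(0.0) + math.exp(-beta * n * h) * W(n * h))
assert abs(integral - 1.0 / (psi(beta) - q)) < 1e-3
```

Adding the two partial-fraction coefficients gives \(A+B=1/d\), which is exactly the jump of \(W^{(q)}\) at the origin referred to above.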
Analogously to the first case, one may use this information to qualitatively draw the direction field which is shown in Fig. 8.
As in the first case, one may show that there are again three types of solutions: the ones that intersect the 0-isocline (the graph of f) and never hit zero, the ones that hit zero before \(\beta\), and the one which lies between the other two types. One may also show that for a given \(\epsilon\in(\log(K),\infty)\) there exists a unique solution \(g_{\epsilon}\) such that \(I_{g_{\epsilon}}=(\log(K),\epsilon\wedge\beta)\) and \(\lim_{s\to\epsilon\wedge\beta}g_{\epsilon}(s)=0\). This is pictorially displayed in Fig. 3.
The Case \(\psi(1)\geq q\)
Under this assumption it holds that \(\varPhi(q)\leq 1\), which together with Eq. (8.6) of [13] implies that
for H>0. This in turn means that \(Z^{(q)}(H)/(qW^{(q)}(H))>1\) for H>0. One may again draw the direction field and argue along the same lines as above to deduce that all solutions of (19) are strictly decreasing, escape to infinity and hit zero (with infinite gradient if \(W^{(q)}(0+)=0\) and with finite gradient if \(W^{(q)}(0+)\in(0,q^{-1})\)). Again, an argument as in the first case shows that for a given \(\epsilon>\log(K)\) there exists a unique solution \(g_{\epsilon}\) such that \(I_{g_{\epsilon}}=(\log(K),\epsilon)\) and \(\lim_{s\to\epsilon}g_{\epsilon}(s)=0\). This was already pictorially displayed in Fig. 4. □
Proof of Theorem 6.2
The proof consists of five steps (i)–(v) which together imply the result. Before we go through these steps, recall that
for (x,s)∈E and let \(\tau^{*}_{\epsilon}\) be given as in (21). Moreover, define the function
for \((x,s)\in E_{1}=\{(x,s)\in E: s>\log(K)\}\). We claim that
(i) \(\mathbb {E} _{x,s}[e^{-qt}V_{\epsilon}(X_{t},\overline{X}_{t})]\leq V_{\epsilon}(x,s)\) for \((x,s)\in E_{1}\),
(ii) \(V_{\epsilon}(x,s)= \mathbb {E} _{x,s}[e^{-q\tau^{*}_{\epsilon}}(e^{\overline{X}_{\tau^{*}_{\epsilon}}\wedge\epsilon}-K)]\) for \((x,s)\in E_{1}\).
Verification of (i)
We first prove (i) under the assumption that X is of unbounded variation, that is, \(W^{(q)}(0+)=0\). To this end, let \(\varGamma\) be the infinitesimal generator of X defined in (28). Although the function \(Z^{(q)}\) is only in \(C^{1}( \mathbb {R} )\cap C^{2}( \mathbb {R} \setminus\{0\})\) and it is a priori not clear whether \(\varGamma\) applied to \(Z^{(q)}\) is well defined, one may, at least formally, define \(\varGamma Z^{(q)}: \mathbb {R} \setminus\{0\}\rightarrow \mathbb {R} \) by
For x<0 the quantity \(\varGamma Z^{(q)}(x)\) is well defined and \(\varGamma Z^{(q)}(x)=0\). On the other hand, for x>0 one needs to check whether the integral part of \(\varGamma Z^{(q)}(x)\) is well defined. This is done in Lemma A.1 in the Appendix of [18], which shows that this is indeed the case. Moreover, as shown in Sect. 3.2 of [23], it holds that
Now fix \((x,s)\in E_{1}\) and define the semimartingale \(Y_{t}:=X_{t}-\overline{X}_{t}+g_{\epsilon}(\overline{X}_{t})\). Applying an appropriate version of the Itô-Meyer formula (cf. Theorem 71, Chap. VI of [24]) to \(Z^{(q)}(Y_{t})\) yields, \(\mathbb {P} _{x,s}\)-a.s.,
where
and \(\Delta X_{u}=X_{u}-X_{u-}\), \(\Delta Z^{(q)}(Y_{u})=Z^{(q)}(Y_{u})-Z^{(q)}(Y_{u-})\). The fact that \(\varGamma Z^{(q)}\) is not defined at zero is not a problem, as the time Y spends at zero has zero Lebesgue measure anyway. By the boundedness of \(Z^{(q)\prime}\) on \((-\infty,g_{\epsilon}(s)]\), the first two stochastic integrals in the expression for \(m_{t}\) are zero-mean martingales, and by the compensation formula (cf. Corollary 4.6 of [13]) the third and fourth terms constitute a zero-mean martingale. Next, use stochastic integration by parts for semimartingales (cf. Corollary 2 of Theorem 22, Chap. II of [24]) to deduce that, \(\mathbb {P} _{x,s}\)-a.s.,
where \(M_{t}=\int_{0+}^{t}e^{-qu}(e^{\overline{X}_{u}\wedge\epsilon}-K)\,dm_{u}\) is a zero-mean martingale. The first integral is nonpositive since \((\varGamma-q)Z^{(q)}(y)\leq 0\) for all \(y\in \mathbb {R} \). The last two integrals vanish since the process \(\overline{X}\) only increases at times when \(\overline{X}_{u}=X_{u}\) and by the definition of \(g_{\epsilon}\). Thus, taking expectations on both sides of (36) gives (i) if X is of unbounded variation.
If \(W^{(q)}(0+)\in(0,q^{-1})\) or \(W^{(q)}(0+)\geq q^{-1}\) (X has bounded variation), then the Itô-Meyer formula is nothing more than an appropriate version of the change of variable formula for Stieltjes integrals, and one may obtain (i) in the same way as above. The only change worth mentioning is that the generator of X takes a different form. Specifically, for \(h\in C^{\infty}_{0}( \mathbb {R} )\) and \(z\in \mathbb {R} \) it is given by
As above, we want to apply Γ to Z (q) which is only in \(C^{1}( \mathbb {R} \setminus\{0\})\). However, at least formally, we may define \(\varGamma Z^{(q)}: \mathbb {R} \setminus\{0\}\rightarrow \mathbb {R} \) by
This expression is well-defined and ΓZ (q) satisfies all the required properties in the proof by the results in the Appendix of [18]. This completes the proof of (i).
Verification of (ii)
Recalling that \((\varGamma-q)Z^{(q)}(y)=0\) for y>0, we see from (36) that \(\mathbb {E} _{x,s} [e^{-q(t\wedge\tau^{*}_{\epsilon})}V_{\epsilon}(X_{t\wedge\tau^{*}_{\epsilon}},\overline{X}_{t\wedge\tau^{*}_{\epsilon}})]= V_{\epsilon}(x,s)\), and hence (ii) follows by dominated convergence.
Next, recall \(A_{\epsilon}:= \mathbb {E} _{\log(K),\log(K)}[e^{-q\tau^{*}_{\epsilon}} (e^{\overline{X}_{\tau^{*}_{\epsilon}}\wedge\epsilon}-K)] \) and note that
where in the second equality we have used (35). Now extend the definition of the function \(V_{\epsilon}\) to
We claim that
(iii) \(V_{\epsilon}(x,s)\geq(e^{s\wedge\epsilon}-K)^{+}\) for \((x,s)\in E\),
(iv) \(\mathbb {E} _{x,s}[e^{-qt}V_{\epsilon}(X_{t},\overline{X}_{t})]\leq V_{\epsilon}(x,s)\) for \((x,s)\in E\),
(v) \(V_{\epsilon}(x,s)= \mathbb {E} _{x,s} [e^{-q\tau^{*}_{\epsilon}} (e^{\overline{X}_{\tau^{*}_{\epsilon}}\wedge\epsilon}-K) ]\) for \((x,s)\in E\).
Condition (iii) is clear from the definitions of \(Z^{(q)}\) and \(V_{\epsilon}\).
Verification of Condition (iv)
In view of (i), it is enough to show (iv) for \((x,s)\in C^{*}_{II}\). In order to prove this, set \(Y_{t}=e^{-qt}V_{\epsilon}(X_{t},\overline{X}_{t})\) and observe that
where in the inequality we have used (i). Combining this with the strong Markov property, we obtain on \(\{\tau^{+}_{\log(K)}<\infty\}\) for \((x,s)\in C^{*}_{II}\),
Hence, taking expectations on both sides and using (34) shows that, for \((x,s)\in C^{*}_{II}\), we have \(\mathbb {E} _{x,s}[Y_{t}]\leq \mathbb {E} _{x,s} [Y_{t\wedge\tau^{+}_{\log(K)}} ]\). Since \(Y_{t\wedge\tau^{+}_{\log(K)}}\) is a \(\mathbb {P} _{x,s}\)-martingale for \((x,s)\in C^{*}_{II}\) (see (9)) the inequality in (iv) follows.
Verification of Condition (v)
By the strong Markov property, Theorem 3.12 of [13] and the definition of A ϵ and V ϵ we have
for \((x,s)\in C^{*}_{II}\). This together with (iii) gives assertion (v).
We are now in a position to prove Theorem 6.2. Inequality (iv) and the Markov property of \((X,\overline{X})\) imply that the process \(e^{-qt}V_{\epsilon}(X_{t},\overline{X}_{t})\) is a \(\mathbb {P} _{x,s}\)-supermartingale for (x,s)∈E. Using (34), (iii), Fatou’s Lemma in the second inequality and the supermartingale property of \(e^{-qt}V_{\epsilon}(X_{t},\overline{X}_{t})\) and Doob’s stopping theorem in the third inequality shows that for \(\tau\in\mathcal{M}\),
This together with (v) shows that \(V^{*}_{\epsilon}=V_{\epsilon}\) and that \(\tau^{*}_{\epsilon}\) is optimal. □
Proof of Theorem 6.5
Recall that under the current assumptions Lemma A.1 in Appendix A implies that
for (x,s)∈E, from which it follows that
for \((x,s)\in E\). Also, for \(\epsilon\in(\log(K),\infty)\), let \(V_{\epsilon}^{*}\), \(A_{\epsilon}\), \(\tau^{*}_{\epsilon}\) and \(g_{\epsilon}\) be as in Theorem 6.2 and \(g_{\infty},\tau^{*}_{\infty}\) as stated in Theorem 6.5. An inspection of the proof of Lemma 6.1 and Theorem 3.2 of [11] shows that \(g_{\infty}(s)=\lim_{\epsilon\uparrow\infty}g_{\epsilon}(s)\) for \(s>\log(K)\), which in turn implies that \(\lim_{\epsilon\uparrow\infty}\tau^{*}_{\epsilon}=\tau^{*}_{\infty}\) \(\mathbb {P} _{x,s}\)-a.s. for all \((x,s)\in E\). Furthermore, recall \(A_{\infty}:= \mathbb {E} _{\log(K),\log(K)}[e^{-q\tau^{*}_{\infty}}(e^{\overline{X}_{\tau^{*}_{\infty}}}-K)]\) and define
Now, using (38), (39) and dominated convergence, we see that
and
It follows in particular that \(V_{\infty}(x,s)=\lim_{\epsilon\uparrow\infty}V^{*}_{\epsilon}(x,s)\) for (x,s)∈E. Next, we claim that
(i) \(V_{\infty}(x,s)\geq(e^{s}-K)^{+}\) for \((x,s)\in E\),
(ii) \(\mathbb {E} _{x,s}[e^{-qt}V_{\infty}(X_{t},\overline{X}_{t})]\leq V_{\infty}(x,s)\) for \((x,s)\in E\),
(iii) \(V_{\infty}(x,s)= \mathbb {E} _{x,s} [e^{-q\tau^{*}_{\infty}} (e^{\overline{X}_{\tau^{*}_{\infty}}}-K) ]\) for \((x,s)\in E\).
Condition (i) is clear from the definitions of \(Z^{(q)}\) and \(V_{\infty}\). To prove (ii), use Fatou’s Lemma and claim (i) of the proof of Theorem 6.2 to show that
for (x,s)∈E. As for (iii), using (38), (39) and dominated convergence we deduce that
for (x,s)∈E. The proof of the theorem is now completed by using (i)–(iii) in the same way as in the proof of Theorem 6.2 to show that \(V_{\infty}^{*}=V_{\infty}\) and that \(\tau^{*}_{\infty}\) is optimal. □
Remark 9.1
Instead of proving Theorem 6.5 via a limiting procedure, it would be possible to prove it analogously to Theorem 6.2 by going through the Itô-Meyer formula. We chose to present the proof above as it emphasises that the capped version of (1) (\(\epsilon\in(\log(K),\infty)\)) is a building block for the uncapped version of (1) (\(\epsilon=\infty\)) rather than an isolated problem in itself.
Proof of Lemma 6.7
First assume that ψ′(0+)<0 and fix (x,s)∈E such that log(K)≤s≤ϵ. Since the supremum process \(\overline{X}\) is increasing and there is no discounting, it follows that
The fact that ψ′(0+)<0 implies that \(\sup_{0\leq u<\infty}X_{u}\) is exponentially distributed with parameter \(\varPhi(0)>0\) under \(\mathbb{P}_{0}\) (see Eq. (8.2) in [13]). Thus, if \(\varPhi(0)\neq 1\), one calculates
Similarly, if Φ(0)=1, we have \(V_{\epsilon}^{*}(x,s)=e^{s}-K+e^{x}(\epsilon-s)\).
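The exponential distribution of the overall supremum used in these computations is easy to check by simulation. The following Monte Carlo sketch is purely illustrative: the choice \(X_{t}=-t+B_{t}\) (so \(\psi(\theta)=\theta^{2}/2-\theta\), ψ′(0+)=−1<0 and \(\varPhi(0)=2\)) and all discretisation parameters are our own assumptions. Discrete monitoring on a finite horizon biases the estimate slightly downwards from the exact mean \(1/\varPhi(0)=0.5\).

```python
import math
import random

random.seed(7)

# Hypothetical drifted Brownian motion X_t = -t + B_t: Phi(0) = 2, sup ~ Exp(2).
mu, sigma = -1.0, 1.0
dt, T, n_paths = 0.003, 6.0, 800
n_steps = int(T / dt)
sq = sigma * math.sqrt(dt)

sups = []
for _ in range(n_paths):
    x = m = 0.0
    for _ in range(n_steps):
        x += mu * dt + sq * random.gauss(0.0, 1.0)   # Euler step of the SDE
        if x > m:
            m = x                                     # running maximum of the path
    sups.append(m)

mean_sup = sum(sups) / n_paths
# Discrete monitoring undershoots the continuous supremum; expect roughly 1/Phi(0) = 0.5
assert 0.38 < mean_sup < 0.56
```

A sharper check would compare the empirical tail \(\#\{m>y\}/n\) with \(e^{-\varPhi(0)y}\) for several levels y, but the mean already identifies the exponential parameter.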
On the other hand, if (x,s)∈E such that s<log(K) then an application of the strong Markov property at \(\tau^{+}_{\log(K)}\) and Theorem 3.12 of [13] gives
The last expression on the right-hand side is known from the computations above and hence the first part of the proof follows.
As for the second part, it is well known that ψ′(0+)≥0 implies that \(\mathbb {P} _{x,s}[\tau^{+}_{\epsilon}<\infty]=1\) for \((x,s)\in E\), and since there is no discounting the claim follows. □
Proof of Lemma 6.8
The first part follows by taking limits in Lemma 6.7, since by monotone convergence we have
As for the second part, note that \(V^{*}_{\infty}(x,s)\geq\lim_{\epsilon\uparrow\infty}V_{\epsilon}^{*}(x,s)\) and hence it is enough to show that the limit equals infinity. To this end, observe that under the current assumptions we have \(\lim_{\epsilon\uparrow\infty}g_{\epsilon}(s)=\infty\) for \(s>\log(K)\) (see Lemma 6.1(c)). This in conjunction with the fact that \(\lim_{z\to\infty}Z^{(q)}(z)=\infty\) shows that, for \((x,s)\in E\) such that \(s>\log(K)\),
On the other hand, if \((x,s)\in E\) is such that \(s\leq\log(K)\), the claim follows provided that \(\lim_{\epsilon\to\infty}A_{\epsilon}=\infty\). Indeed, using the strong Markov property and Theorem 3.12 of [13] one may deduce that
The second factor on the right-hand side increases to +∞ as ϵ↑∞ by the first part of the proof and thus the proof is complete. □
References
Alili, L., Kyprianou, A.E.: Some remarks on first passage of Lévy processes, the American put and pasting principles. Ann. Appl. Probab. 15, 2062–2080 (2005)
Avram, F., Kyprianou, A.E., Pistorius, M.R.: Exit problems for spectrally negative Lévy processes and applications to (Canadized) Russian options. Ann. Appl. Probab. 14, 215–238 (2004)
Bertoin, J.: Lévy Processes. Cambridge University Press, Cambridge (1996)
Bichteler, K.: Stochastic Integration with Jumps. Cambridge University Press, Cambridge (2002)
Carr, P., Wu, L.: The finite moment log stable process and option pricing. J. Finance 58, 753–778 (2003)
Chan, T.: Pricing contingent claims on stocks driven by Lévy processes. Ann. Appl. Probab. 9, 504–528 (1999)
Conze, A., Viswanathan, R.: Path dependent options: the case of lookback options. J. Finance 46, 1893–1907 (1991)
Cox, A.M.G., Hobson, D., Obłój, J.: Pathwise inequalities for local time: applications to Skorohod embeddings and optimal stopping. Ann. Appl. Probab. 18, 1870–1896 (2008)
Gapeev, P.V.: Discounted optimal stopping for maxima of some jump-diffusion processes. J. Appl. Probab. 44, 713–731 (2007)
Guo, X., Shepp, L.: Some optimal stopping problems with nontrivial boundaries for pricing exotic options. J. Appl. Probab. 38, 647–658 (2001)
Hartman, P.: Ordinary Differential Equations. Birkhäuser, Basel (1982)
Kuznetsov, A., Kyprianou, A.E., Rivero, V.: The theory of scale functions for spectrally negative Lévy processes (2011). arXiv:1104.1280v1 [math.PR]
Kyprianou, A.E.: Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer, Berlin (2006)
Kyprianou, A.E., Ott, C.: Spectrally negative Lévy processes perturbed by functionals of their running supremum. J. Appl. Probab. 49, 1005–1014 (2012)
Madan, D.B., Schoutens, W.: Break on through to the single side. J. Credit Risk 4(3), 3–20 (2008)
Merton, R.C.: Lifetime portfolio selection under uncertainty: the continuous-time case. Rev. Econ. Stat. 51, 247–257 (1969)
Mikalevich, V.S.: Bayesian choice between two hypotheses for the mean value of a normal process. Visn. Kiiv. Univ. Ser. Fiz.-Mat. Nauki 1, 101–104 (1958)
Ott, C.: Optimal stopping problems for the maximum process with upper and lower caps. Ann. Appl. Probab. (2011, to appear)
Pedersen, J.L.: Discounted optimal stopping problems for the maximum process. J. Appl. Probab. 37, 972–983 (2000)
Peskir, G.: Optimal stopping of the maximum process: the maximality principle. Ann. Probab. 26, 1614–1640 (1998)
Peskir, G., Shiryaev, A.: Sequential testing problems for Poisson processes. Ann. Stat. 28, 837–859 (2000)
Peskir, G., Shiryaev, A.N.: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel (2006)
Pistorius, M.R.: On exit and ergodicity of the spectrally one-sided Lévy process reflected at its infimum. J. Theor. Probab. 17, 183–220 (2004)
Protter, P.E.: Stochastic Integration and Differential Equations, 2nd edn. Springer, Berlin (2005)
Shepp, L.A., Shiryaev, A.N.: The Russian option: reduced regret. Ann. Appl. Probab. 3, 631–640 (1993)
Shepp, L.A., Shiryaev, A.N.: A new look at pricing of the “Russian option”. Theory Probab. Appl. 39, 103–119 (1993)
Shiryaev, A.N.: Essentials of Stochastic Finance. World Scientific, London (1999)
Shiryaev, A.N.: Optimal Stopping Rules. Springer, Berlin (2008). Reprint
Appendices
Appendix A: An Auxiliary Result
Lemma A.1
If q>ψ(1) we have for (x,s)∈E that
In particular, \(\limsup_{t\to\infty}e^{-qt+\overline{X}_{t}}=0\) \(\mathbb {P} _{x,s}\)-a.s. for (x,s)∈E.
Proof of Lemma A.1
We want to show that
First note that it is enough to consider the above integral over the interval \((e^{s},\infty)\), since for \(y<e^{s}\) the probability inside the integral is equal to one. Next, for \(y>e^{s}\) define \(\gamma=\log(y)-x>0\) and write
The last expression is the probability that the spectrally negative Lévy process \(\tilde{X}_{t}:=X_{t}-qt\), with Laplace exponent \(\psi_{\tilde{X}}(\theta)=\psi(\theta)-q\theta\), reaches level γ. Thus,
where \(\varPhi_{\tilde{X}}\) is the right-inverse of \(\psi_{\tilde{X}}\). Hence, the integral (40) converges provided \(\varPhi_{\tilde{X}}(0)>1\). The latter is indeed satisfied because \(\psi_{\tilde{X}}\) is convex and \(\psi_{\tilde{X}}(1)=\psi(1)-q<0\) by assumption.
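The convexity argument at the end of this step is easy to illustrate numerically. In the sketch below (a hypothetical example of ours, with \(\psi(\theta)=\theta^{2}/2\) for standard Brownian motion and q>ψ(1)), the root \(\varPhi_{\tilde{X}}(0)\) of \(\psi_{\tilde{X}}(\theta)=\psi(\theta)-q\theta\) is located by bisection and checked to exceed 1, which is what makes the integral (40) converge.

```python
import math

# Hypothetical example: psi(theta) = theta^2/2 (standard BM), q = 1 > psi(1) = 1/2.
q = 1.0

def psi_tilde(t):
    return 0.5 * t * t - q * t      # Laplace exponent of X_t - q t

assert psi_tilde(1.0) < 0.0          # psi(1) - q < 0: the Lemma's hypothesis

# Phi_tilde(0) is the largest root of psi_tilde; bisect on a sign-changing bracket
lo, hi = 1.0, 10.0
assert psi_tilde(lo) < 0.0 < psi_tilde(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if psi_tilde(mid) < 0.0:
        lo = mid
    else:
        hi = mid
phi0 = 0.5 * (lo + hi)

assert phi0 > 1.0                    # convexity plus psi_tilde(1) < 0 forces the root past 1
assert abs(phi0 - 2.0 * q) < 1e-9    # closed form for this example: Phi_tilde(0) = 2q
```

Since \(\psi_{\tilde{X}}\) is convex with \(\psi_{\tilde{X}}(0)=0\) and \(\psi_{\tilde{X}}(1)<0\), its positive root necessarily lies to the right of 1, exactly as the proof asserts.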
As for the second assertion, let δ>0 be such that q−δ>ψ(1). By the first part we may, for \((x,s)\in E\), infer that \(\sup_{0\leq t<\infty}e^{-(q-\delta)t+\overline{X}_{t}}<\infty\) \(\mathbb {P} _{x,s}\)-a.s. and hence
This completes the proof. □
Appendix B: An Excursion Theoretic Calculation
Our aim is to compute the value \(\mathbb {E} _{s,s} [e^{-q\tau^{*}_{\epsilon}} (e^{\overline{X}_{\tau^{*}_{\epsilon}}\wedge\epsilon}-K ) ]\) for \(s\in[\log(K),\epsilon)\) with the help of excursion theory (see Remark 6.3). We shall spend a moment setting up the necessary notation. In doing so, we closely follow pp. 221–223 in [2] and refer the reader to Chaps. 6 and 7 in [3] for background reading. The process \(L_{t}:=\overline{X}_{t}\) serves as local time at 0 for the Markov process \(\overline{X}-X\) under \(\mathbb {P} _{0,0}\). Write \(L^{-1}:=\{L^{-1}_{t}:t\geq 0\}\) for the right-continuous inverse of L. The Poisson point process of excursions indexed by local time shall be denoted by \(\{(t,\varepsilon_{t}):t\geq 0\}\), where
whenever \(L^{-1}_{t}-L^{-1}_{t-}>0\). Accordingly, we refer to a generic excursion as \(\varepsilon(\cdot)\) (or just \(\varepsilon\) for short as appropriate) belonging to the space \(\mathcal{E}\) of canonical excursions. The intensity measure of the process \(\{(t,\varepsilon_{t}):t\geq 0\}\) is given by \(dt\times dn\), where n is a measure on the space of excursions (the excursion measure). A functional of the canonical excursion that will be of interest is \(\overline{\varepsilon}=\sup_{s<\zeta}\varepsilon(s)\), where \(\zeta(\varepsilon)=\zeta\) is the length of the excursion. A useful formula for this functional that we shall make use of is the following (cf. [13], Eq. (8.18)):
provided that x is not a discontinuity point of the derivative of W (which is only a concern when X is of bounded variation; but in that case we have assumed that Π is atomless and hence W is continuously differentiable on (0,∞)). Another functional that we will also use is \(\rho_{a}:=\inf\{s>0:\varepsilon(s)>a\}\), the first passage time above level a of the canonical excursion \(\varepsilon\).
We now proceed with the promised calculation involving excursion theory. First, assume that log(K)<ϵ<∞ and β=∞. Note that for log(K)≤s<ϵ,
We compute the two terms on the right-hand side separately. An application of the compensation formula in the second equality and using Fubini’s theorem in the third equality gives for log(K)≤s<ϵ,
where in the first equality the time index runs over local times and the sum is the usual shorthand for integration with respect to the Poisson counting measure of excursions, and \(\hat{f}(u)= \frac {Z^{(q)}(u)W^{(q)\prime }(u)}{W^{(q)}(u)}-qW^{(q)}(u) \) is an expression taken from Theorem 1 in [2]. Next, note that \(L^{-1}_{t}\) is a stopping time and hence a change of measure according to (10) shows that the expectation inside the integral can be written as
Using the properties of the Poisson point process of excursions (indexed by local time) and with the help of (42) and (13) we may deduce
where \(n_{\varPhi(q)}\) denotes the excursion measure associated with X under \(\mathbb {P} ^{\varPhi(q)}\). By a change of variables we finally get, for \(\log(K)\leq s<\epsilon\),
As for the second term in (43), similarly to the computation of the first term, we obtain for log(K)≤s<ϵ,
Adding the two terms up gives the expression in Remark 6.3.
In the case that \(\epsilon=\beta=\infty\), the second term on the right-hand side of (43) is not needed. In the case that \(\beta=\log(K(1-d/q)^{-1})<\epsilon\), the cap \(\epsilon\) may effectively be replaced by \(\beta\) in (43).
Kyprianou, A., Ott, C. A Capped Optimal Stopping Problem for the Maximum Process. Acta Appl Math 129, 147–174 (2014). https://doi.org/10.1007/s10440-013-9833-4
Keywords
- Optimal stopping
- Optimal stopping boundary
- Principle of smooth fit
- Principle of continuous fit
- Lévy processes
- Scale functions