1 Introduction

The pricing of early-exercise securities is a central task in quantitative finance, since these are the most widely traded instruments on derivative markets. American-style option valuation is an illustrative example of an optimal stopping problem, which can in turn be formulated as a parabolic variational inequality.

Let S stand for the underlying asset price process, following a standard geometric Brownian motion with volatility σ and drift equal to the interest rate r, and let t denote the time to maturity. For computational purposes one must truncate the spatial domain S ∈ [0, ∞) and introduce a far-field boundary location S_max. We consider pricing with the following conditions on the parabolic boundary:

$$\displaystyle{ V (S,0) = V ^{{\ast}}(S),\ \ V (0,t) = V _{ L},\ \ V (S_{\max },t) = V _{R}, }$$

where V_L ≥ 0 and V_R ≥ 0 are given constants. The American put is a classical Stefan problem, whose payoff is convex and continuous but nonsmooth.

The option value with maturity T satisfies the parabolic variational inequality, cf. e.g. [4]

$$\displaystyle{LV (S,t):= V _{t}-\frac{1}{2}\sigma ^{2}S^{2}V _{SS}-rSV _{S}+rV \geq 0\ \perp \ V (S,t) \geq V ^{{\ast}}(S)\;\text{in}\;(0,\infty )\times [0,T),}$$

which can be written as the following linear complementarity problem:

$$\displaystyle{\text{LCP}:\ \left \{\begin{array}{l} LV (S,t) =\lambda \geq 0,\\ V (S,t) - V ^{{\ast}}(S) \geq 0,\ \ LV (S,t) \cdot (V (S,t) - V ^{{\ast}}(S)) = 0. \end{array} \right.}$$

From the complementarity condition

$$\displaystyle{V \geq V ^{{\ast}},\ \lambda \geq 0,\ \lambda \cdot (V (S,t) - V ^{{\ast}}(S)) = 0}$$

we infer for the Lagrange multiplier

$$\displaystyle{ \lambda =\max \left (0,\lambda +\epsilon ^{-1}(V ^{{\ast}}- V )\right ) }$$
(11.1)

for any sufficiently small (penalty) parameter ε > 0. Indeed, if V > V^∗ then (11.1) forces λ = 0, while λ > 0 implies V = V^∗, which is precisely the complementarity condition. Thus we arrive at the following nonlinear equation which, together with the complementarity condition, is reminiscent of the augmented Lagrangian method:

$$\displaystyle{LV (S,t) -\max \left (0,\lambda +\epsilon ^{-1}(V ^{{\ast}}- V )\right ) = 0.}$$

Superimposing an infinite penalty when the constraint V(S, t) − V^∗(S) ≥ 0 is violated, we may embed the LCP [1] in the family of nonlinear equations

$$\displaystyle{ LV ^{\epsilon }(S,t) -\max \left (0,\epsilon ^{-1}(V ^{{\ast}}- V ^{\epsilon })\right ) = 0. }$$
(11.2)

The penalty method guarantees fulfilment of the constraints in an asymptotic sense by adding a penalty term to the objective function. If we consider an upper bound Λ_max on the multiplier, we may state the following approximation, see [4]:

$$\displaystyle{ LV ^{\epsilon }(S,t) -\max \left (0,\varLambda _{\max } +\epsilon ^{-1}(V ^{{\ast}}- V ^{\epsilon })\right ) = 0. }$$

There are, however, some issues with this approach: for fixed small ε the early exercise constraint is not strictly satisfied by the solution, and the penalty term is nonsmooth and unbounded. The following interior approximation, with C ≥ rK for pricing the American put, aims to tackle these drawbacks, cf. [6, 11],

$$\displaystyle{ LV ^{\epsilon }(S,t) - \frac{\epsilon \mathrm{C}} {V ^{\epsilon } +\epsilon -(K - S)} = 0 }$$
(11.3)

while the general interior penalty method, applicable to any type of payoff, is

$$\displaystyle{ LV ^{\epsilon }(S,t) - \frac{\epsilon \mathrm{C}} {V ^{\epsilon } +\epsilon -V ^{{\ast}}} = 0. }$$
(11.4)
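To make the comparison concrete, the following Python sketch (the helper names and vectorized form are ours, not part of any cited implementation) evaluates the exterior penalty term of (11.2) and the interior penalty term of (11.4) for a vector of option values:

```python
import numpy as np

def exterior_penalty(V, V_star, eps):
    """Exterior penalty term of (11.2): active only where the constraint
    V >= V_star is violated; nonsmooth and unbounded as eps -> 0."""
    return np.maximum(0.0, (V_star - V) / eps)

def interior_penalty(V, V_star, eps, C):
    """Interior penalty term of (11.4); C is the problem-dependent
    constant, e.g. C >= r*K for the American put as in (11.3)."""
    return eps * C / (V + eps - V_star)
```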

Zhang and Wang [14] prove convergence of the penalized solution V^ε to the solution V of the underlying variational inequality. A major issue with this approach, however, is its dependence on ε and on the loosely specified parameter C, which results in overall lower accuracy and more Newton iterations per time level.

We therefore consider modifying this interior barrier method in order to enhance its performance [8]. Let us set up the fully discrete LCP in order to present our considerations in a clear and concise manner. First, following the method of lines, we define a smooth nonuniform spatial grid and approximate the spatial derivatives by second-order finite difference formulas. After backward Euler time discretization with step Δt we have to solve the following discrete linear complementarity problem for \(U^{n} \in \mathbb{R}^{m-1}\):

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{@{}l@{\quad }l@{}} (I -\varDelta tA)U^{n} = U^{n-1} +\varDelta tg +\varDelta t\lambda ^{n} \quad \\ \lambda ^{n} \geq 0,\ \ U^{n} \geq U^{0},\ \ \lambda ^{n} \cdot \left (U^{n} - U^{0}\right ) = 0,\quad \end{array} \right.& & {}\\ \end{array}$$

where \(A \in \mathbb{R}^{(m-1)\times (m-1)}\) stands for the spatial discretization matrix, \(g \in \mathbb{R}^{m-1}\) is the boundary information (assuming Dirichlet conditions on the elliptic boundary) and \(\lambda ^{n} \in \mathbb{R}^{m-1}\) is the nonnegative auxiliary (multiplier) vector which satisfies

$$\displaystyle{ \lambda ^{n} =\max \left (0,\lambda ^{n} + \frac{1} {\varDelta t} (U^{0} - U^{n})\right ). }$$
(11.5)

The solution U^n of the discrete LCP is the saddle point of the Lagrange functional

$$\displaystyle{\varLambda (U^{n},\lambda ^{n}) = \frac{1} {2}(I -\varDelta tA)U^{n} \cdot U^{n} - b^{n} \cdot U^{n} -\varDelta t\lambda ^{n} \cdot (U^{n} - U^{0}),\ b^{n}:= U^{n-1} +\varDelta tg.}$$

Let us now observe the following equivalence [12]

$$\displaystyle{U^{n} - U^{0} \geq 0 \Leftrightarrow \epsilon \log \left (1 +\epsilon ^{-1}(U^{n} - U^{0})\right ) \geq 0}$$

and accordingly we modify the Lagrange functional:

$$\displaystyle{\varLambda (U^{n},\lambda ^{n}) = \frac{1} {2}(I -\varDelta tA)U^{n} \cdot U^{n} - b^{n} \cdot U^{n} -\varDelta t\lambda ^{n} \cdot (\epsilon \log \left (1 +\epsilon ^{-1}(U^{n} - U^{0})\right )).}$$

Differentiating the modified functional with respect to U^n, and noting that the gradient of ε log(1 + ε^{-1}(U^n − U^0)) is ε∕(U^n − U^0 + ε), we get from the Karush-Kuhn-Tucker conditions the discrete LCP:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{@{}l@{\quad }l@{}} (I -\varDelta tA)U^{n} -\varDelta t\lambda ^{n} \frac{\epsilon } {U^{n}-U^{0}+\epsilon } = b^{n} \quad \\ \lambda ^{n} \geq 0,\ \ U^{n} \geq U^{0},\ \ \lambda ^{n} \cdot \left (U^{n} - U^{0}\right ) = 0.\quad \end{array} \right.& &{}\end{array}$$
(11.6)

In fact, if we take the fairly rough estimate λ^n ≈ rK for the multiplier in the put case, we recover the discrete interior penalty method (11.4):

$$\displaystyle{ (I -\varDelta tA)U^{n} -\varDelta t \frac{rK\epsilon } {U^{n} - U^{0}+\epsilon } = b^{n}. }$$

Substituting U^0 = max(K − S, 0) by K − S, as in Eq. (11.3), is a band-aid for the put option case: it fixes the accuracy by minimizing the penalty term in the continuation region S > K, far away from the free boundary.

2 The Finite Difference Method

There are many papers using the penalty method for pricing American options, see [1, 2, 5–7, 9–11, 14]. In this section we present a simplified version of the barrier penalty method (11.6), introduced in Sect. 11.1.

We consider the penalized problem, which approximates the LCP for a sufficiently small positive parameter ε:

$$\displaystyle\begin{array}{rcl} \begin{array}{rl} &V _{t}^{\epsilon } -\frac{1} {2}\sigma ^{2}S^{2}V _{ SS}^{\epsilon } - rSV _{ S}^{\epsilon } + rV ^{\epsilon } - g(S,V ^{\epsilon }) = 0,\;(S,t) \in (0,S_{\max }) \times [0,T), \\ &V ^{\epsilon }(S,0) = V ^{{\ast}}(S),\ \ V ^{\epsilon }(0,t) = V _{L},\ \ V ^{\epsilon }(S_{\max },t) = V _{R}\end{array} & &{}\end{array}$$
(11.7)

with the penalty term

$$\displaystyle{ g(S,V ^{\epsilon }) = \frac{\epsilon \lambda } {V ^{\epsilon } +\epsilon -V ^{{\ast}}}. }$$
(11.8)
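For later reference, the term (11.8) is evaluated pointwise as in this brief sketch (the function name is ours; lam stands for the multiplier estimate fixed below):

```python
import numpy as np

def barrier_penalty(V_eps, V_star, lam, eps):
    """Modified barrier penalty term g(S, V^eps) of (11.8), with the
    frozen multiplier estimate lam and small penalty parameter eps."""
    return eps * lam / (V_eps + eps - V_star)
```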

For given integers m and N we define Δt = T∕N, t^n = nΔt and the nonuniform spatial grid

$$\displaystyle{\overline{\omega } =\{ S_{0} = 0,\;S_{i+1} = S_{i} + h_{i},\,i = 0,\ldots,m - 1,\,S_{m} = S_{\max }\},}$$

where the discrete solution computed on the mesh \(\overline{\omega }\) is denoted by \(U_{i}^{n} = V ^{\epsilon }(S_{i},t^{n})\). Let us now write down the considered finite difference approximations of the first derivative, where \(h_{i} = S_{i+1} - S_{i}\) and \(\hslash _{i} = 0.5(h_{i} + h_{i-1})\):

$$\displaystyle{(U_{\check{S}})_{i}^{n} = \frac{U_{i+1}^{n} - U_{ i}^{n}} {h_{i}},\ \ (U_{\hat{S}})_{i}^{n} = \frac{U_{i}^{n} - U_{ i-1}^{n}} {h_{i-1}},\ \ (U_{\mathring{S}})_{i}^{n} = \frac{h_{i-1}U_{\check{S}}{}_{i}^{n} + h_{ i}U_{\hat{S}}{}_{i}^{n}} {2\hslash _{i}},}$$

where \((U_{\check{S}})_{i}^{n}\) and \((U_{\hat{S}})_{i}^{n}\) are first-order and \((U_{\mathring{S}})_{i}^{n}\) is second-order accurate on a smooth grid. The second derivative is further approximated as

$$\displaystyle{(U_{SS})_{i}^{n} = \left ((U_{\check{ S}})_{i}^{n} - (U_{\hat{ S}})_{i}^{n}\right )/\hslash _{ i}.}$$
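These difference formulas translate directly into code. The following NumPy sketch (helper names are ours) evaluates them on the interior nodes i = 1, …, m − 1 of a nonuniform grid:

```python
import numpy as np

def grid_steps(S):
    """h_i = S_{i+1} - S_i and hbar_i = 0.5*(h_i + h_{i-1}) on interior nodes."""
    h = np.diff(S)
    hbar = 0.5 * (h[1:] + h[:-1])
    return h, hbar

def first_derivatives(U, S):
    """Forward, backward and weighted central first differences on the
    interior nodes, following the formulas above."""
    h, hbar = grid_steps(S)
    fwd = (U[2:] - U[1:-1]) / h[1:]     # first-order accurate
    bwd = (U[1:-1] - U[:-2]) / h[:-1]   # first-order accurate
    ctr = (h[:-1] * fwd + h[1:] * bwd) / (2.0 * hbar)  # second order
    return fwd, bwd, ctr

def second_derivative(U, S):
    """(U_SS)_i = (forward diff - backward diff) / hbar_i."""
    h, hbar = grid_steps(S)
    fwd, bwd, _ = first_derivatives(U, S)
    return (fwd - bwd) / hbar
```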

After backward Euler time discretization of (11.7), (11.8) and maximal use of central differencing with flag χ := H(σ^2 S_i − rh_i), where H stands for the Heaviside function, see Wang and Forsyth [13], we get the following system of nonlinear equations for n = 0, …, N − 1 and i = 2, …, m − 1:

$$\displaystyle\begin{array}{rcl} & & L^{h}U_{i}^{n+1} - \frac{\epsilon \lambda _{i}}{U_{i}^{n+1} +\epsilon -U_{i}^{0}} = 0, \\ & & U(0,t^{n+1}) = V _{L},\ U(S_{m},t^{n+1}) = V _{R},\ U(S_{i},0) = U^{0}(S_{i}) = V ^{{\ast}}(S_{i}), \\ & & L^{h}U_{i}^{n+1}:= \frac{U_{i}^{n+1} - U_{i}^{n}}{\varDelta t} -\frac{\sigma ^{2}S_{i}^{2}}{2}(U_{SS})_{i}^{n+1} - rS_{i}\left (\chi (U_{\mathring{S}})_{i}^{n+1} + (1-\chi )(U_{\check{S}})_{i}^{n+1}\right ) + rU_{i}^{n+1}.{}\end{array}$$
(11.9)

Next, on the basis of (11.1) and (11.5), we set the following simple version of λ:

$$\displaystyle{ \lambda _{i} =\max \{ 0,L^{h}U_{ i}^{0}\},\ \ i = 1,\ldots,m - 1. }$$
(11.10)

Numerical experiments show that with this choice of λ we attain accuracy similar to that obtained with λ^n computed by (11.5), but at a smaller computational cost.
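In code, λ can be obtained by applying L^h to the payoff vector; for the time-independent U^0 the time-difference term of L^h vanishes, so only the elliptic part remains. A sketch, reusing grid_steps, first_derivatives and second_derivative from the sketch above:

```python
import numpy as np

def lambda_simple(S, U0, sigma, r):
    """Simple multiplier (11.10): lambda_i = max(0, L^h U^0_i) on interior
    nodes, with maximal use of central differencing via
    chi = H(sigma^2 * S_i - r * h_i)."""
    h, _ = grid_steps(S)
    Si = S[1:-1]
    fwd, _, ctr = first_derivatives(U0, S)
    Uss = second_derivative(U0, S)
    chi = (sigma**2 * Si - r * h[1:] >= 0.0).astype(float)
    LhU0 = (-0.5 * sigma**2 * Si**2 * Uss
            - r * Si * (chi * ctr + (1.0 - chi) * fwd)
            + r * U0[1:-1])
    return np.maximum(0.0, LhU0)
```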

We find the solution U^{n+1} by Newton's iteration with initial guess U^{(0)} = U^n, where the Newton increment at the (k+1)-th step, Δ^{(k+1)} = U^{(k+1)} − U^{(k)}, solves the following tridiagonal system of linear equations:

$$\displaystyle\begin{array}{rcl} & & -A_{i}\varDelta _{i-1}^{(k+1)} + C_{i}^{(k)}\varDelta _{i}^{(k+1)} - B_{i}\varDelta _{i+1}^{(k+1)} \\ & & \qquad \qquad \qquad \qquad \qquad = U_{i}^{n} + A_{i}U_{i-1}^{(k)} -\widetilde{C}_{i}U_{i}^{(k)} + B_{i}U_{i+1}^{(k)} + F_{i}^{(k)},{}\end{array}$$
(11.11)

where A_0 = A_m = B_0 = B_m = 0, C_1^{(k)} = C_m^{(k)} = 1, F_1^{(k)} = V_L, F_m^{(k)} = V_R and

$$\displaystyle\begin{array}{rcl} & & A_{i} = \frac{S_{i}\varDelta t}{2\hslash _{i}h_{i-1}}\left (\sigma ^{2}S_{i} -\chi rh_{i}\right ),\ \ B_{i} = \frac{S_{i}\varDelta t}{2\hslash _{i}h_{i}}\left (\sigma ^{2}S_{i} +\chi rh_{i-1}\right ) + (1-\chi )\frac{rS_{i}\varDelta t}{h_{i}}, {}\\ & & C_{i}^{(k)} =\widetilde{C}_{i} + \frac{\epsilon \lambda _{i}\varDelta t}{(U_{i}^{(k)} +\epsilon -U_{i}^{0})^{2}},\ \ \widetilde{C}_{i} = 1 + A_{i} + B_{i} + r\varDelta t,\ \ F_{i}^{(k)} = \frac{\epsilon \lambda _{i}\varDelta t}{U_{i}^{(k)} +\epsilon -U_{i}^{0}}.{}\\ \end{array}$$

The iteration process is terminated when the desired tolerance is reached, i.e. we set U^{n+1} := U^{(k+1)} when \(\max \limits _{i}\{\vert \varDelta _{i}^{(k+1)}\vert /(\max \{1,U_{i}^{(k+1)}\})\} <\textsf{tol}\).

At each iteration k, in view of the definition of χ, the coefficient matrix M^{(k)} = tridiag[−A_i, C_i^{(k)}, −B_i] is strictly diagonally dominant with A_i, B_i, C_i^{(k)} > 0, and is therefore an M-matrix.
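For completeness, here is a compact sketch of one backward Euler step solved by the Newton iteration (11.11). A direct banded LU solve replaces the preconditioned BiCGSTAB used in the experiments of Sect. 11.3; all arrays live on the interior nodes and the Dirichlet values V_L, V_R are eliminated into the right-hand side, so the boundary-row indexing of (11.11) is not reproduced literally:

```python
import numpy as np
from scipy.linalg import solve_banded

def newton_time_level(Un, U0, lam, A, B, r, dt, eps, VL, VR,
                      tol=1e-6, maxit=50):
    """One step of scheme (11.9) via Newton's iteration (11.11): a sketch.
    Un, U0, lam, A, B are arrays over the interior nodes."""
    Ctil = 1.0 + A + B + r * dt                # \tilde C_i
    U = Un.copy()                              # initial guess U^{(0)} = U^n
    for _ in range(maxit):
        d = U + eps - U0                       # barrier denominator
        F = eps * lam * dt / d                 # penalty term F_i^{(k)}
        C = Ctil + eps * lam * dt / d**2       # Jacobian diagonal C_i^{(k)}
        rhs = Un - Ctil * U + F                # right-hand side of (11.11)
        rhs[1:]  += A[1:]  * U[:-1]            # A_i * U_{i-1}^{(k)}
        rhs[:-1] += B[:-1] * U[1:]             # B_i * U_{i+1}^{(k)}
        rhs[0]   += A[0]  * VL                 # Dirichlet contributions
        rhs[-1]  += B[-1] * VR
        ab = np.zeros((3, len(C)))             # banded storage of M^{(k)}
        ab[0, 1:]  = -B[:-1]
        ab[1, :]   = C
        ab[2, :-1] = -A[1:]
        delta = solve_banded((1, 1), ab, rhs)  # tridiagonal solve
        U += delta
        # stopping criterion from the text: max |Delta_i| / max(1, U_i) < tol
        if np.max(np.abs(delta) / np.maximum(1.0, np.abs(U))) < tol:
            break
    return U
```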

Theorem 11.1

The approximate option values U_i^n, obtained by the scheme (11.9), (11.10), satisfy

$$\displaystyle{U_{i}^{n} \geq U_{ i}^{0},\ \ i = 0,\ldots,m,\ \ n = 0,\ldots,N.}$$

Proof

We follow the same line of reasoning as in [11]. Rewrite the scheme (11.9), (11.10) in the following equivalent form:

$$\displaystyle{ (1 + A_{i} + B_{i} + r\varDelta t)U_{i}^{n+1} - B_{i}U_{i+1}^{n+1} - A_{i}U_{i-1}^{n+1} = U_{i}^{n} + \frac{\epsilon \lambda \varDelta t}{U_{i}^{n+1} +\epsilon -U_{i}^{0}}. }$$
(11.12)

Let \(w_{i}^{n} = U_{i}^{n} - U_{i}^{0}\). Thus, from (11.12) we obtain

$$\displaystyle{ (1+A_{i}+B_{i}+r\varDelta t)w_{i}^{n+1}-B_{ i}w_{i+1}^{n+1}-A_{ i}w_{i-1}^{n+1} = w_{ i}^{n}+ \frac{\epsilon \lambda \varDelta t} {w_{i}^{n+1}+\epsilon } -\varDelta tL^{h}U_{ i}^{0}. }$$
(11.13)

Define \(w^{n+1} =\min \limits _{i}w_{i}^{n+1}\) and let j be an index, such that w j n+1 = w n+1. For i = j, from (11.13) we have

$$\displaystyle{ (1 + A_{j} + B_{j} + r\varDelta t)w^{n+1} \geq w_{j}^{n} + B_{j}w^{n+1} + A_{j}w^{n+1} + \frac{\epsilon \lambda \varDelta t}{w_{j}^{n+1}+\epsilon } -\varDelta tL^{h}U_{j}^{0} }$$

and therefore

$$\displaystyle{ (1 + r\varDelta t)w^{n+1} \geq w_{ j}^{n} + \frac{\epsilon \lambda \varDelta t} {w_{j}^{n+1}+\epsilon } -\varDelta tL^{h}U_{ j}^{0}. }$$

Rearranging the above inequality, we obtain

$$\displaystyle{ (1 + r\varDelta t)w^{n+1} - \frac{\epsilon \lambda \varDelta t} {w^{n+1}+\epsilon } +\varDelta tL^{h}U_{ j}^{0} \geq w_{ j}^{n} \geq w^{n}. }$$

We use induction on n: taking into account that w^0 ≥ 0, we assume that w^n ≥ 0 and prove w^{n+1} ≥ 0. Now we have

$$\displaystyle{ \mathcal{F}(w^{n+1}) \geq 0,\ \ \mbox{ where}\ \ \mathcal{F}(w):= (1 + r\varDelta t)w - \frac{\epsilon \lambda \varDelta t}{w+\epsilon } +\varDelta tL^{h}U_{j}^{0}. }$$

Observe that

$$\displaystyle\begin{array}{rcl} \mathcal{F}(0)& =& -\varDelta t(\lambda - L^{h}U_{j}^{0}) =\varDelta t(L^{h}U_{j}^{0} -\max \{ 0,L^{h}U_{j}^{0}\}) = {}\\ & & \varDelta t\left \{\begin{array}{ll} 0, &\mbox{ if}\;L^{h}U_{j}^{0} \geq 0, \\ L^{h}U_{j}^{0},&\mbox{ if}\;L^{h}U_{j}^{0} <0, \end{array} \right.\ \ \ \ \mbox{ i.e.}\ \ \mathcal{F}(0) \leq 0.{}\\ \end{array}$$

Further, from

$$\displaystyle{\mathcal{F}'(w):= 1 + r\varDelta t + \frac{\epsilon \lambda \varDelta t} {(w+\epsilon )^{2}} \geq 0}$$

and \(\mathcal{F}(w^{n+1}) \geq 0\) we conclude that w n+1 ≥ 0.

For the discrete scheme (11.9), (11.10) we develop a two-grid algorithm (TGA) [7].

Let us define two nonuniform spatial grids, a coarse mesh \(\overline{\omega }_{c}\) and a fine grid \(\overline{\omega }_{f}\):

$$\displaystyle\begin{array}{rcl} & & \overline{\omega }_{c} =\{ S_{0} = 0,\;S_{i+1} = S_{i} + h_{i}^{c},\,i = 0,\ldots,m_{c} - 1,\,S_{m_{c}} = S_{\max }\}, {}\\ & & \overline{\omega }_{f} =\{ S_{0} = 0,\;S_{i+1} = S_{i} + h_{i}^{\,f},\,i = 0,\ldots,m_{f} - 1,\,S_{m_{f}} = S_{\max }\}, {}\\ \end{array}$$

where \(m_{f} \gg m_{c}\) and the discrete solution computed on the mesh \(\overline{\omega }_{{\ast}}\) is denoted by \(U_{i,{\ast}}^{n} = U(S_{i},t^{n})\).

Algorithm 2 (TGA)

At each time level n = 0, 1, … we perform the two steps:

  1:    Set \(U_{c}^{(0)}:= U_{c}^{n}\) and compute \(U_{c}^{n+1}\) by (11.9), (11.10) through Newton's iterations (11.11) on the coarse mesh \(\overline{\omega }_{c}\).

  2:    Set \(U_{f}^{(0)}:= I(U_{c}^{n+1})\), where \(I(U_{c}^{n+1})\) is the interpolant of \(U_{c}^{n+1}\) on the fine grid; perform only one Newton iteration (11.11) on the fine mesh \(\overline{\omega }_{f}\) to get \(U_{f}^{n+1}\).
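A minimal driver for one time level of the TGA could look as follows (a sketch: solve_coarse and one_newton_fine are assumed wrappers around the scheme (11.9), (11.10) and a single iteration of (11.11), and piecewise-linear interpolation stands in for the interpolant I):

```python
import numpy as np

def tga_time_level(Uc_n, Sc, Sf, solve_coarse, one_newton_fine):
    """One time level of Algorithm 2 (TGA)."""
    Uc_next = solve_coarse(Uc_n)            # step 1: Newton solve on coarse mesh
    Uf_guess = np.interp(Sf, Sc, Uc_next)   # U_f^{(0)} := I(U_c^{n+1})
    Uf_next = one_newton_fine(Uf_guess)     # step 2: one Newton iteration on fine mesh
    return Uc_next, Uf_next
```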

3 Numerical Experiments

We consider an American butterfly option with the payoff

$$\displaystyle{V ^{{\ast}}(S) =\max \{ S - K_{1},0\} - 2\max \{S - K,0\} +\max \{ S - K_{2},0\},}$$

where K_1, K = (K_1 + K_2)∕2 and K_2 are the strikes and V_L = V_R = 0. The model parameters are: K_1 = 90, K_2 = 100, σ = 0.2, r = 0.1, ε = 1.e−6. We test the relevance of the modified penalty method (11.9), (11.10) and the accuracy, order of convergence and efficiency of the constructed TGA.
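The payoff is straightforward to vectorize; a sketch with the parameters above:

```python
import numpy as np

def butterfly_payoff(S, K1=90.0, K2=100.0):
    """American butterfly payoff with strikes K1, K = (K1 + K2)/2, K2."""
    K = 0.5 * (K1 + K2)
    return (np.maximum(S - K1, 0.0)
            - 2.0 * np.maximum(S - K, 0.0)
            + np.maximum(S - K2, 0.0))
```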

The linearized system (11.11) is solved by the biconjugate gradient stabilized method (BiCGSTAB) with lower and upper triangular preconditioning. As stopping criterion we choose tol = 1.e−6.
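The following sketch reproduces this linear algebra setup with SciPy, taking an incomplete LU factorization as one standard realization of lower/upper triangular preconditioning (the exact preconditioner used in the experiments is not specified here):

```python
import numpy as np
from scipy.sparse import diags, csc_matrix
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

def solve_newton_system(A, C, B, rhs):
    """Solve the tridiagonal system (11.11) by BiCGSTAB with an ILU
    preconditioner (a sketch; A, C, B are the matrix diagonals)."""
    n = len(C)
    T = csc_matrix(diags([-A[1:], C, -B[:-1]], offsets=[-1, 0, 1]))
    ilu = spilu(T)                                   # lower/upper factors
    prec = LinearOperator((n, n), matvec=ilu.solve)  # preconditioner
    x, info = bicgstab(T, rhs, M=prec)
    if info != 0:
        raise RuntimeError("BiCGSTAB failed to converge")
    return x
```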

In the computational domain with S_max = 200 we use a smooth nonuniform grid, cf. in 't Hout et al. [3]: uniform inside the region [S_l, S_r] = [K∕2, 3K∕2] and nonuniform outside, with stretching parameter c = K∕10:

$$\displaystyle{ S_{i}:=\phi (\xi _{i}) = \left \{\begin{array}{ll} S_{l} + c\sinh (\xi _{i}), &\xi _{\min } \leq \xi _{i} <0, \\ S_{l} + c\xi _{i}, &0 \leq \xi _{i} \leq \xi _{\text{int}}, \\ S_{r} + c\sinh (\xi _{i} -\xi _{\text{int}}),&\xi _{\text{int}} <\xi _{i} \leq \xi _{\max }.\\ \end{array} \right. }$$

The uniform partition of \([\xi _{\min },\xi _{\max }]\), \(\xi _{\min } =\xi _{0} <\cdots <\xi _{M} =\xi _{\max }\), is defined through:

$$\displaystyle{\xi _{\min } =\sinh ^{-1}\left (\frac{-S_{l}} {c} \right ),\ \ \xi _{\text{int}} = \frac{S_{r} - S_{l}} {c},\ \ \xi _{\max } =\xi _{\text{int}} +\sinh ^{-1}\left (\frac{S_{\max } - S_{r}} {c} \right ).}$$
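The grid construction is easy to reproduce; the following sketch builds the mesh from a uniform ξ-partition (function name and keyword defaults are ours):

```python
import numpy as np

def smooth_grid(m, K, Smax, c=None):
    """Nonuniform grid of in 't Hout type: uniform on [Sl, Sr] = [K/2, 3K/2],
    sinh-stretched outside, stretching parameter c = K/10 (a sketch)."""
    Sl, Sr = 0.5 * K, 1.5 * K
    c = K / 10.0 if c is None else c
    xi_min = np.arcsinh(-Sl / c)
    xi_int = (Sr - Sl) / c
    xi_max = xi_int + np.arcsinh((Smax - Sr) / c)
    xi = np.linspace(xi_min, xi_max, m + 1)       # uniform partition
    S = np.where(xi < 0.0, Sl + c * np.sinh(xi),
        np.where(xi <= xi_int, Sl + c * xi,
                 Sr + c * np.sinh(xi - xi_int)))  # phi(xi), piecewise
    return S
```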

Example 11.1 (Early Exercise Constraint)

We compare the different penalty methods (interior penalty (11.3), exterior penalty (11.2) and modified barrier penalty (11.8), (11.10)) with respect to maintaining the early exercise condition U ≥ V^∗. In Fig. 11.1 we plot the corresponding solutions at T = 1 together with the payoff, while in Fig. 11.2 we plot the difference U^n − V^∗ for the exterior penalty (11.2) and the modified penalty (11.8), (11.10). We observe that for the butterfly option only the modified barrier penalty (11.8) yields a numerical solution that satisfies the early exercise constraint. Thus, the statement of Theorem 11.1 is verified.

Fig. 11.1 Payoff and option value. Left: interior penalty (11.3); center: exterior penalty (11.2); right: modified penalty (11.8)

Fig. 11.2 U^n − V^∗. Left: exterior penalty (11.2); right: modified penalty (11.8)

Example 11.2 (One-Grid Computations)

We perform computations only on one mesh \(\overline{\omega }\), i.e. step 1 of the TGA, with time steps Δt = h and Δt = h^2, \(h =\min \limits _{i}h_{i}\). The results are listed in Tables 11.1 and 11.2. We give the values of the solution at the strike points K_1 and K at final time T; diff, the absolute value of the change in the solution from the coarser grid; CR, the convergence rate, computed as log_2 of the ratio of the changes on successive grids; iter, the average number of Newton iterations k per time level; and the CPU time in seconds (a short sketch for reproducing the diff and CR columns follows Table 11.2). We observe that the order of convergence in space at the strike points is close to two and that the computational process is more efficient for the smaller time step.

Table 11.1 Values of U(K_1, T), \(U(K,T) = 10 +\widetilde{U}(K,T)\), convergence rate (CR), average number of iterations (iter) and CPU time; one-grid computations, Δt = h
Table 11.2 Values of U(K_1, T), \(U(K,T) = 10 +\widetilde{U}(K,T)\), convergence rate (CR), average number of iterations (iter) and CPU time; one-grid computations, Δt = h^2
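The diff and CR columns can be reproduced from point values of the solution on a sequence of refined grids, as in this sketch:

```python
import numpy as np

def diff_and_cr(values):
    """Given solution values at a fixed point (e.g. U(K1, T)) on grids
    with h halved each time, return the diff column |successive change|
    and CR = log2 of the ratio of successive changes."""
    v = np.asarray(values, dtype=float)
    diff = np.abs(np.diff(v))
    cr = np.log2(diff[:-1] / diff[1:])
    return diff, cr
```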

Example 11.3 (TGA)

For the numerical tests we set Δt = h^f and Δt = (h^f)^2, \(h^{\,f} =\min \limits _{i}h_{i}^{\,f}\), and m_f = (m_c)^2∕S_max, i.e. h^f = (h^c)^2 in the case of uniform meshes. The results are given in Tables 11.3 and 11.4. We observe that the order of convergence on the coarse mesh, tested at the strike points K_1 and K, is close to four, i.e. \(\mathcal{O}(\varDelta t + \vert h^{c}\vert ^{4} + \vert h^{\,f}\vert ^{2})\), \(\vert h\vert =\max \limits _{i}h_{i}\). Also, the TGA accelerates the computations. Comparable values of the solution in Tables 11.1, 11.2, 11.3, and 11.4 are highlighted.

Table 11.3 Values of U(K_1, T), \(U(K,T) = 10 +\widetilde{U}(K,T)\), convergence rate (CR), average number of iterations (iter) and CPU time; two-grid computations, Δt = h^f
Table 11.4 Values of U(K_1, T), \(U(K,T) = 10 +\widetilde{U}(K,T)\), convergence rate (CR), average number of iterations (iter) and CPU time; two-grid computations, Δt = (h^f)^2

4 Conclusions

In contrast to the interior (11.3) and exterior (11.2) penalty methods, the modified penalty method guarantees that the solution always satisfies the early exercise constraint, independently of the type of the option.

The two-grid algorithm attains fourth-order convergence in space on the coarse mesh. We observe very fast performance of the presented TGA, independently of the choice of the time step size. The same accuracy as with one-grid computations can be obtained by the TGA, saving computational time.