1 Introduction

In recent years, credit risk has emerged as one of the most fundamental financial risks. The most extensively studied form of credit risk is default risk. Many authors, such as Bielecki, Jarrow, Jeanblanc, Pham, and Rutkowski [2, 3, 16, 17, 20, 28], have worked on this subject. In several papers related to this topic (see for example Ankirchner et al. [1], Bielecki and Jeanblanc [4], and Lim and Quenez [23]), backward stochastic differential equations (BSDEs) with jumps appear. Unfortunately, the results for these BSDEs are far less numerous than those for Brownian BSDEs. In particular, there is no general result on the existence and uniqueness of solutions to quadratic BSDEs with jumps, except Ankirchner et al. [1], where the assumptions on the driver are strong. In this paper, we study BSDEs with random marked jumps and apply the obtained results to mathematical finance, where these jumps can be interpreted as default times. We give a general existence and uniqueness result for the solutions of these BSDEs; in particular, we extend the result of [1] for quadratic BSDEs.

A standard approach to credit risk modeling is based on the powerful technique of filtration enlargement, which distinguishes between the filtration \(\mathbb{F}\) generated by the Brownian motion and its smallest extension \(\mathbb{G}\) that turns default times into \(\mathbb{G}\)-stopping times. This kind of filtration enlargement is referred to as progressive enlargement of filtrations. The field is a traditional subject in probability theory, initiated by fundamental works of the French school in the 1980s; see e.g. Jeulin [18], Jeulin and Yor [19], and Jacod [15]. For an overview of applications of progressive enlargement of filtrations to credit risk, we refer to the books of Duffie and Singleton [11] and of Bielecki and Rutkowski [2], or the lecture notes of Bielecki et al. [3].

The purpose of this paper is to combine results on Brownian BSDEs with results on progressive enlargement of filtrations in order to provide existence and uniqueness results for BSDEs with random marked jumps. We consider a progressive enlargement with multiple random times and associated marks. These marks can represent, for example, the name of the firm that defaults or the jump sizes of asset values.

Our approach consists in using the recent results of Pham [28] on the decomposition of predictable processes with respect to a progressive enlargement of filtrations to decompose a BSDE with random marked jumps into a sequence of Brownian BSDEs. By combining the solutions of these Brownian BSDEs, we obtain a solution to the BSDE with random marked jumps. This method allows us to obtain a general existence theorem; in particular, we get an existence result for quadratic BSDEs which is more general than that of Ankirchner et al. [1]. The decomposition approach also yields a uniqueness theorem under Assumption (H), i.e., any \(\mathbb{F}\)-martingale remains a \(\mathbb{G}\)-martingale. We first establish a general comparison theorem for BSDEs with jumps based on comparison theorems for Brownian BSDEs. Using this theorem, we prove, in particular, uniqueness for quadratic BSDEs with a generator that is concave w.r.t. z.

We illustrate our methodology with two financial applications in default risk management: the pricing and the hedging of a European option, and the problem of utility maximization in an incomplete market. A similar problem (without marks) has recently been considered in Ankirchner et al. [1] and Lim and Quenez [23].

The paper is organized as follows. The next section presents the general framework of progressive enlargement of filtrations with successive random times and marks, and states the decomposition result for \(\mathbb{G}\)-predictable and certain \(\mathbb{G}\)-progressively measurable processes. In Sect. 3, we use this decomposition to link Brownian BSDEs with BSDEs with random marked jumps. This allows us to give a general existence result under a density assumption. We then give two examples: quadratic BSDEs with marked jumps, and linear BSDEs arising in the pricing and hedging of a European option in a market with a single jump. In Sect. 4, we give a general comparison theorem for BSDEs and use it to derive a uniqueness theorem for quadratic BSDEs. Finally, in Sect. 5, we apply our existence and uniqueness results to solve the exponential utility maximization problem in an incomplete market with a finite number of marked jumps.

2 Progressive Enlargement of Filtrations with Successive Random Times and Marks

We fix a probability space \(({\varOmega}, {\mathcal{G}}, \mathbb{P})\), and we start with a reference filtration \(\mathbb{F}=({\mathcal{F}}_{t})_{t \geq 0}\) satisfying the usual conditions and generated by a d-dimensional Brownian motion W. Throughout the article, we consider a finite sequence \((\tau_k,\zeta_k)_{1\leq k\leq n}\), where

  • \((\tau_k)_{1\leq k\leq n}\) is a nondecreasing sequence of random times (i.e. nonnegative \({\mathcal{G}}\)-measurable random variables),

  • \((\zeta_k)_{1\leq k\leq n}\) is a sequence of random marks valued in some Borel subset E of \(\mathbb{R}^m\).

We denote by μ the random measure associated with the sequence \((\tau_k,\zeta_k)_{1\leq k\leq n}\):

$$\mu\bigl([0,t]\times B\bigr) := \sum_{k=1}^{n} \mathbf{1}_{\{\tau_k\leq t\}}\mathbf{1}_{\{\zeta_k\in B\}},\quad t\geq 0,\ B\in{\mathcal{B}}(E). $$

For each k=1,…,n, we consider \(\mathbb{D}^{k}=({\mathcal{D}}^{k}_{t})_{t \geq 0}\), the smallest filtration for which τ k is a stopping time and ζ k is \({\mathcal{D}}_{\tau_{k}}^{k}\)-measurable. \(\mathbb{D}^{k}\) is then given by \({\mathcal{D}}^{k}_{t}=\sigma(\zeta_k\mathbf{1}_{\{\tau_k\leq s\}},\ s\in[0,t])\vee\sigma(\mathbf{1}_{\{\tau_k\leq s\}},\ s\in[0,t])\). The global information is then defined by the progressive enlargement \(\mathbb{G}=({\mathcal{G}}_{t})_{t \geq 0}\) of the initial filtration \(\mathbb{F}\), where \(\mathbb{G}\) is the smallest right-continuous filtration containing \(\mathbb{F}\) and such that, for each k=1,…,n, τ k is a \(\mathbb{G}\)-stopping time and ζ k is \({\mathcal{G}}_{\tau_{k}}\)-measurable. \(\mathbb{G}\) is given by \({\mathcal{G}}_{t}:=\tilde{{\mathcal{G}}}_{t^{+}}\), where \(\tilde{{\mathcal{G}}}_{t} := {\mathcal{F}}_{t}\vee {\mathcal{D}}^{1}_{t}\vee \cdots \vee {\mathcal{D}}^{n}_{t}\) for all t≥0.

We denote by \({\varDelta}_{k}\) the set in which the random k-tuple (τ 1,…,τ k ) takes its values on {τ n <∞}:

$${\varDelta}_{k} := \bigl\{(\theta_{1},\ldots,\theta_{k})\in(\mathbb{R}_{+})^k : \theta_{1}\leq\cdots\leq \theta_{k}\bigr\},\quad 1\leq k\leq n. $$
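To fix ideas, the objects introduced above can be simulated. The following sketch draws a nondecreasing marked sequence (τ k ,ζ k ) 1≤k≤n and evaluates the associated counting measure μ([0,t]×B); the exponential inter-arrival times, the two-point mark set E={0,1} and all numerical values are illustrative assumptions, not part of the general framework:

```python
import random

def simulate_marked_times(n, rate=1.0, marks=(0, 1), seed=42):
    """Draw a nondecreasing sequence (tau_k, zeta_k)_{1<=k<=n}.

    Inter-arrival times are i.i.d. exponential (an illustrative choice,
    not imposed by the framework) and marks are uniform on E = {0, 1}.
    """
    rng = random.Random(seed)
    taus, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(rate)   # cumulative sums are nondecreasing
        taus.append(t)
    zetas = [rng.choice(marks) for _ in range(n)]
    return taus, zetas

def mu_measure(taus, zetas, t, B):
    """Evaluate mu([0, t] x B) = sum_k 1_{tau_k <= t, zeta_k in B}."""
    return sum(1 for tau, z in zip(taus, zetas) if tau <= t and z in B)

taus, zetas = simulate_marked_times(5)
assert all(taus[k] <= taus[k + 1] for k in range(4))   # (tau_k) nondecreasing
assert mu_measure(taus, zetas, taus[-1], {0, 1}) == 5  # all jumps counted
```

On {τ n <∞}, the tuple (τ 1 ,…,τ k ) produced this way lies in Δ k by construction.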

We introduce some notations used throughout the paper:

  • \({\mathcal{P}}(\mathbb{F})\) (resp. \({\mathcal{P}}(\mathbb{G})\)) is the σ-algebra of \(\mathbb{F}\)-predictable (resp. \(\mathbb{G}\)-predictable) subsets of Ω×ℝ+, i.e. the σ-algebra generated by the left-continuous \(\mathbb{F}\) (resp. \(\mathbb{G}\))-adapted processes.

  • \({\mathcal{PM}}(\mathbb{F})\) (resp. \({\mathcal{PM}}(\mathbb{G})\)) is the σ-algebra of \(\mathbb{F}\) (resp. \(\mathbb{G}\))-progressively measurable subsets of Ω×ℝ+.

  • For k=1,…,n, \({\mathcal{PM}}(\mathbb{F},{\varDelta}_{k},E^{k})\) is the σ-algebra generated by processes X from ℝ+×Ω×Δ k ×E k to ℝ such that (X t (.)) t∈[0,s] is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s]) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable, for all s≥0.

  • For θ=(θ 1,…,θ n )∈Δ n and e=(e 1,…,e n )∈E n, we denote by

    $$\theta_{(k)} := (\theta_{1},\ldots, \theta_{k})\quad \hbox{and}\quad e_{(k)} := (e_{1},\ldots, e_{k}),\quad 1\leq k\leq n. $$

    We also write τ (k) for (τ 1,…,τ k ) and ζ (k) for (ζ 1,…,ζ k ), for all k=1,…,n.

The following result provides the basic decomposition of predictable and progressive processes with respect to this progressive enlargement of filtrations.

Lemma 2.1

  1. (i)

    Any \({\mathcal{P}}(\mathbb{G})\)-measurable process X=(X t ) t≥0 is represented as

    $$ X_t = \mathbf{1}_{\{t\leq\tau_1\}}X^0_t + \sum_{k=1}^{n}\mathbf{1}_{\{\tau_k<t\leq\tau_{k+1}\}}X^k_t(\tau_{(k)},\zeta_{(k)}) $$
    (2.1)

    for all t≥0 (with the convention τ n+1 =+∞), where X^0 is \(\mathcal{P}(\mathbb{F})\)-measurable and X^k is \(\mathcal{P}(\mathbb{F}) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for k=1,…,n.

  2. (ii)

    Any càd-làg \({\mathcal{PM}}(\mathbb{G})\)-measurable process X=(X t ) t≥0 of the form

    $$X_t = J_t + \int_0^t \int_EU_s(e)\mu(de,ds),\quad t\geq 0, $$

    where J is \({\mathcal{P}}(\mathbb{G})\)-measurable and U is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable, is represented as

    $$ X_t = \mathbf{1}_{\{t<\tau_1\}}X^0_t + \sum_{k=1}^{n}\mathbf{1}_{\{\tau_k\leq t<\tau_{k+1}\}}X^k_t(\tau_{(k)},\zeta_{(k)}) $$
    (2.2)

    for all t≥0 (with the convention τ n+1 =+∞), where X^0 is \({\mathcal{PM}}(\mathbb{F})\)-measurable and X^k is \({\mathcal{PM}}(\mathbb{F},{\varDelta}_{k},E^{k})\)-measurable for k=1,…,n.

The proof of (i) is given in Pham [28] and is therefore omitted. The proof of (ii) is based on similar arguments. Hence, we postpone it to the Appendix.

Throughout the sequel, we use the conventions τ 0=0, τ n+1=+∞, θ 0=0 and θ n+1=+∞ for any θ∈Δ n , and X 0(θ (0),e (0))=X 0 to simplify the notation.

Remark 2.1

In the case where the process X under consideration depends on another parameter x valued in a Borel subset \(\mathcal{X}\) of ℝp, and if X is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable, then decomposition (2.1) still holds, but with X k being \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable. Indeed, this is obvious for the processes generating \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\), which are of the form X t (ω,x)=L t (ω)R(x), (t,ω,x) ∈ \(\mathbb{R}_{+}\times{\varOmega}\times\mathcal{X}\), where L is \({\mathcal{P}}(\mathbb{G})\)-measurable and R is \({\mathcal{B}}(\mathcal{X})\)-measurable. The result then extends to any \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable process by the monotone class theorem.

We now introduce a density assumption on the random times and their associated marks by assuming that the distribution of (τ 1,…,τ n ,ζ 1,…,ζ n ) is absolutely continuous with respect to the Lebesgue measure dθ de on \({\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\). More precisely, we make the following assumption.

(HD) There exists a positive \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}( E^{n})\)-measurable map γ such that for any t≥0,

$$\mathbb{P}\bigl[(\tau_{(n)},\zeta_{(n)})\in d\theta\,de\mid {\mathcal{F}}_t\bigr] = \gamma_t(\theta,e)\,d\theta\,de,\quad \mathbb{P}\hbox{-a.s.} $$

We then introduce some notation. Define the process γ 0 by

$$\gamma^0_t := \int_{\{t<\theta_1<\cdots<\theta_n\}}\int_{E^n}\gamma_t(\theta,e)\,de\,d\theta,\quad t\geq 0, $$

and, for k=1,…,n−1, the \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable map γ k by

$$\gamma^k_t(\theta_{(k)},e_{(k)}) := \int_{\{t\vee\theta_k<\theta_{k+1}<\cdots<\theta_n\}}\int_{E^{n-k}}\gamma_t(\theta,e)\,de_{k+1}\cdots de_{n}\,d\theta_{k+1}\cdots d\theta_{n}. $$

We shall use the natural convention γ n=γ. Under (HD), the random measure μ admits a compensator that is absolutely continuous w.r.t. the Lebesgue measure. The intensity λ is given by the following proposition.

Proposition 2.1

Under (HD), the random measure μ admits a compensator for the filtration \(\mathbb{G}\) given by λ t (e) de dt, where the intensity λ is defined by

$$ \lambda_t(e) := \sum_{k=1}^{n}\mathbf{1}_{\{\tau_{k-1}<t\leq\tau_k\}}\,\lambda^k_t(e,\tau_{(k-1)},\zeta_{(k-1)}), $$
(2.3)

with

$$\lambda_t^k(e,\theta_{(k-1)},e_{(k-1)}) := \frac{\gamma_t^{k}(\theta_{(k-1)},t,e_{(k-1)},e)}{\gamma^{k-1}_t(\theta_{(k-1)},e_{(k-1)})}, $$

for (θ (k−1),t,e (k−1),e)∈Δ k−1×ℝ+×E k.

The proof of Proposition 2.1 is based on similar arguments to those of [12]. We therefore postpone it to the Appendix.
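As a sanity check on this formula, consider the simplest case n=1 with a single time τ and no mark dependence (E degenerating to a point): if τ is exponentially distributed with parameter α and independent of \(\mathbb{F}\), then γ t (θ)=αe −αθ for every t, and γ 0 is the survival density \(\gamma^0_t=\int_t^\infty\gamma_t(\theta)\,d\theta\). The intensity is then constant:

```latex
% Single time n = 1, gamma_t(theta) = alpha e^{-alpha theta} independent of F:
\gamma^{0}_{t} \;=\; \int_{t}^{\infty} \alpha e^{-\alpha\theta}\,d\theta \;=\; e^{-\alpha t},
\qquad
\lambda^{1}_{t} \;=\; \frac{\gamma_{t}(t)}{\gamma^{0}_{t}}
\;=\; \frac{\alpha e^{-\alpha t}}{e^{-\alpha t}} \;=\; \alpha .
```

This recovers the classical constant default intensity: the compensator of H t =1 {τ≤t} is α(t∧τ).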

We add an assumption on the intensity λ which will be used in existence and uniqueness results for quadratic BSDEs as well as for the utility maximization problem:

$$\hbox{(HBI)}\quad \hbox{The process}\ \biggl(\int_E \lambda_t(e)\,de\biggr)_{t\geq 0}\ \hbox{is bounded on}\ [0, \infty). $$

We now consider one-dimensional BSDEs driven by W and the random measure μ. To define their solutions, we need to introduce the following spaces, where a,b∈ℝ+ with a≤b, and T<∞ is the terminal time:

  • \({\mathcal{S}}_{\mathbb{G}}^{\infty}[a,b]\) (resp. \({\mathcal{S}}_{\mathbb{F}}^{\infty}[a,b]\)) is the set of ℝ-valued \({\mathcal{PM}}(\mathbb{G})\) (resp. \({\mathcal{PM}}(\mathbb{F})\))-measurable processes (Y t ) t∈[a,b] that are essentially bounded:

    $$\| Y \|_{{\mathcal{S}}^\infty[a,b]} := \mathop{\mathrm{ess\,sup}}\limits_{t\in[a,b]}|Y_{t}|< \infty. $$
  • \(L^{2}_{\mathbb{G}}[a,b]\) (resp. \(L^{2}_{\mathbb{F}}[a,b]\)) is the set of ℝd-valued \({\mathcal{P}}(\mathbb{G})\) (resp. \({\mathcal{P}}(\mathbb{F})\))-measurable processes (Z t ) t∈[a,b] such that

    $$\|Z\|_{L^2[a,b]} := \biggl(\mathbb{E}\biggl[ \int_a^b |Z_t|^2\,dt \biggr]\biggr)^{1\over 2} < \infty. $$
  • L 2(μ) is the set of ℝ-valued \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable processes U such that

    $$\|U\|_{L^2(\mu)} := \biggl(\mathbb{E}\biggl[\int_0^T \int_{E}\bigl|U_{s}(e)\bigr|^2\mu(de,ds)\biggr]\biggr)^{1\over2}<\infty. $$

We then consider BSDEs of the form: find a triple \((Y,Z,U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T] \times L^{2}_{\mathbb{G}}[0,T] \times L^{2}(\mu)\) such that

$$ Y_t = \xi + \int_t^T f(s,Y_s,Z_s,U_s)\,ds - \int_t^T Z_s\,dW_s - \int_t^T\int_{E} U_s(e)\,\mu(de,ds),\quad 0\leq t\leq T, $$
(2.4)

where

  • ξ is a \({\mathcal{G}}_{T}\)-measurable random variable of the form:

    $$ \xi = \sum_{k=0}^{n}\mathbf{1}_{\{\tau_k\leq T<\tau_{k+1}\}}\,\xi^k(\tau_{(k)},\zeta_{(k)}) $$
    (2.5)

    where ξ 0 is \({\mathcal{F}}_{T}\)-measurable and ξ k is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n,

  • f is a map from [0,T]×Ω×ℝ×ℝd×Bor(E,ℝ) to ℝ which is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-\({\mathcal{B}}(\mathbb{R})\)-measurable. Here, Bor(E,ℝ) is the set of Borel functions from E to ℝ, and \({\mathcal{B}}(Bor(E,\mathbb{R}))\) is the Borel σ-algebra on Bor(E,ℝ) for the topology of pointwise convergence.

To ensure that BSDE (2.4) is well posed, we have to check that the stochastic integral w.r.t. W is well defined on \(L^{2}_{\mathbb{G}}[0,T]\) in our context.

Proposition 2.2

Under (HD), for any process \(Z\in L^{2}_{\mathbb{G}}[0,T]\), the stochastic integral \(\int_{0}^{T}Z_{s}\,dW_{s}\) is well defined.

Proof

Consider the initial enlargement ℍ of the filtration \(\mathbb{G}\). We recall that \(\mathbb{H}=({\mathcal{H}}_{t})_{t\geq 0}\) is given by

$${\mathcal{H}}_t = {\mathcal{F}}_t\vee\sigma (\tau_1,\ldots,\tau_n,\zeta_1,\ldots,\zeta_n),\quad t\geq 0. $$

We prove that the stochastic integral \(\int_{0}^{T}Z_{s}\,dW_{s}\) is well defined for any \({\mathcal{P}}(\mathbb{H})\)-measurable process Z such that \(\mathbb{E}\int_{0}^{T}|Z_{s}|^{2}\,ds<\infty\). Fix such a process Z.

From Theorem 2.1 in [15], we see that W is an ℍ-semimartingale of the form

$$W_t = M_t + \int_0^ta_s(\tau_{(n)},\zeta_{(n)})\,ds,\quad t\geq 0, $$

where a is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable. Since M is a continuous ℍ-local martingale with quadratic variation \(\langle M,M\rangle_t = \langle W,W\rangle_t = t\) for t≥0, we get from Lévy’s characterization of Brownian motion (see e.g. Theorem 39 in [29]) that M is an ℍ-Brownian motion. Therefore the stochastic integral \(\int_{0}^{T}Z_{s}\,dM_{s}\) is well defined, and we now concentrate on the term \(\int_{0}^{T} Z_{s}a_{s}(\tau_{(n)},\zeta_{(n)})\,ds\).

From Lemma 1.8 in [15], the process γ(θ,e) is an \(\mathbb{F}\)-martingale. Since \(\mathbb{F}\) is the filtration generated by W, we get from the representation theorem for Brownian martingales that

$$\gamma_t(\theta,e) = \gamma_0(\theta,e) + \int_0^t{\varGamma}_s(\theta,e)\,dW_s, \quad t\geq 0. $$

Still using Theorem 2.1 in [15] and since γ(θ,e) is continuous, we have

$$\bigl\langle \gamma(\theta,e),W\bigr\rangle_t = \int_0^t\gamma_{s}(\theta,e)a_s(\theta,e)\,ds,\quad t\geq 0 $$

for all (θ,e)∈Δ n ×E n. Therefore we get

$${\varGamma}_s(\theta,e) = \gamma_s(\theta,e) a_s(\theta,e),\quad s\geq 0 $$

for all (θ,e)∈Δ n ×E n. Since γ(θ,e) is an \(\mathbb{F}\)-martingale, we obtain (see e.g. Theorem 62 Chapt. 8 in [10]) that

$$ \int_0^T \bigl|\gamma_s(\theta,e)a_s(\theta,e)\bigr|^2\,ds < +\infty,\quad \mathbb{P}\hbox{-a.s.} $$
(2.6)

for all (θ,e)∈Δ n ×E n. Consider the set \(A\in {\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) defined by

$$A := \biggl\{(\omega,\theta,e)\in {\varOmega}\times {\varDelta}_n\times E^n : \int_0^T \bigl|\gamma_s(\theta,e)a_s(\theta,e)\bigr|^2\,ds = +\infty\biggr\}. $$

Then, we have \(\mathbb{P}(\tilde{\varOmega})=0\), where

$$\tilde{\varOmega} := \bigl\{\omega\in {\varOmega} : \bigl(\omega,\tau(\omega),\zeta(\omega)\bigr)\in A \bigr\}. $$

Indeed, we have from the density assumption (HD)

$$ \mathbb{P}(\tilde{\varOmega}) = \mathbb{E}\biggl[\int_{{\varDelta}_n\times E^n}\mathbf{1}_A(\cdot,\theta,e)\,\gamma_T(\theta,e)\,d\theta\,de\biggr]. $$
(2.7)

From the definition of A and (2.6), we have

$$\mathbb{E}\bigl[\mathbf{1}_A(\cdot,\theta,e)\bigr] = 0 $$

for all (θ,e)∈Δ n ×E n. Therefore, we get from (2.7) that \(\mathbb{P}(\tilde{\varOmega})=0\), or equivalently

$$ \int_0^T \bigl|\gamma_s(\tau_{(n)},\zeta_{(n)})\,a_s(\tau_{(n)},\zeta_{(n)})\bigr|^2\,ds < +\infty,\quad \mathbb{P}\hbox{-a.s.} $$
(2.8)

From Corollary 1.11 we have γ t (τ 1,…,τ n ,ζ 1,…,ζ n )>0 for all t≥0 ℙ-a.s. Since γ .(τ 1,…,τ n ,ζ 1,…,ζ n ) is continuous we obtain

$$ \inf_{s\in [0,T]} \gamma_s(\tau_1,\ldots,\tau_n,\zeta_1,\ldots,\zeta_n) > 0,\quad \mathbb{P}\hbox{-a.s.} $$
(2.9)

Combining (2.8) and (2.9), we get

$$\int_0^T \bigl| a_s(\tau_1,\ldots,\tau_n,\zeta_1,\ldots,\zeta_n) \bigr|^2\,ds < +\infty,\quad \mathbb{P}\hbox{-a.s.} $$

Since Z satisfies \(\mathbb{E}\int_{0}^{T} |Z_{s}|^{2}ds<\infty\), we get by the Cauchy–Schwarz inequality that

$$\int_0^T \bigl|Z_sa_s(\tau_1,\ldots,\tau_n,\zeta_1,\ldots,\zeta_n) \bigr|\,ds < +\infty,\quad \mathbb{P}\hbox{-a.s.} $$

Therefore \(\int_{0}^{T} Z_{s}a_{s}(\tau_{1},\ldots,\tau_{n},\zeta_{1},\ldots,\zeta_{n})\,ds\) is well defined. □

3 Existence of a Solution

In this section, we use the decompositions given by Lemma 2.1 to solve BSDEs with a finite number of jumps. We follow an approach similar to Ankirchner et al. [1]: one can explicitly construct a solution by combining the solutions of an associated recursive system of Brownian BSDEs. But contrary to them, we allow n random times and n random marks, and our assumptions on the driver are weaker. Through a simple example, we first show how our method for constructing solutions to BSDEs with jumps works. We then give a general existence theorem linking the BSDEs with jumps under study to a recursive system of Brownian BSDEs. We finally illustrate the general result with concrete examples.

3.1 An Introductory Example

We begin with a simple example to illustrate the method. We consider the following equation, involving only a single jump time τ and a single mark ζ valued in E={0,1}:

(3.1)

where H t =(H t (0),H t (1)) with \(H_t(i) = \mathbf{1}_{\{\tau\leq t,\,\zeta=i\}}\) for t≥0 and i∈E. Here c is a real constant, and f and h are deterministic functions. To solve BSDE (3.1), we first solve a recursive system of BSDEs:

Suppose that this recursive system of BSDEs admits, for any (θ,e)∈[0,T]×{0,1}, a pair of solutions Y 1(θ,e) and Y 0. Define the process (Y,U) by

$$Y_t := \mathbf{1}_{\{t<\tau\}}Y^0_t + \mathbf{1}_{\{\tau\leq t\}}Y^1_t(\tau,\zeta),\quad t\in[0,T], $$

and

$$U_t(e) := Y^1_t(t,e) - Y^0_t,\quad t\in[0,T],\ e\in E. $$
We then prove that the process (Y,U) is a solution of BSDE (3.1). By Itô’s formula, we have

This can be written

From the definition of U, we get

$$dY_t = -f(U_t)\,dt + U_t\,dH_t. $$

We also have \(Y_T=\xi\), which shows that (Y,U) is a solution of BSDE (3.1).
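The construction of this subsection can be tested numerically. The sketch below uses illustrative data that are not in the text: f(u)=(u(0)+u(1))/2 (so f(0)=0, which makes the post-jump component equal to h(θ,i)), h(t,i)=1+0.5·i·t and c=2. It integrates the pre-jump component backward with an Euler scheme and pastes the two ranks together at the jump time:

```python
# Backward Euler for the pre-jump component of the introductory example.
# Illustrative (hypothetical) data: f(u) = (u(0) + u(1)) / 2, so f(0) = 0,
# h(t, i) = 1 + 0.5 * i * t  (value if the jump (tau, zeta = i) has occurred),
# c = 2.0                    (terminal value if no jump before T).
T, N = 1.0, 1000
dt = T / N
c = 2.0
f = lambda u0, u1: 0.5 * (u0 + u1)
h = lambda t, i: 1.0 + 0.5 * i * t

# After the jump, U = 0 and f(0, 0) = 0, hence Y^1_t(theta, i) = h(theta, i).
# Before the jump, Y^0 solves the backward ODE
#   Y^0_t = c + int_t^T f(h(s,0) - Y^0_s, h(s,1) - Y^0_s) ds.
Y0 = [0.0] * (N + 1)
Y0[N] = c
for j in range(N - 1, -1, -1):
    s = (j + 1) * dt
    Y0[j] = Y0[j + 1] + dt * f(h(s, 0) - Y0[j + 1], h(s, 1) - Y0[j + 1])

def Y(t, tau, zeta):
    """Paste the ranks: Y_t = Y^0_t before tau, Y^1_t = h(tau, zeta) after."""
    return h(tau, zeta) if tau <= t else Y0[round(t / dt)]

assert Y0[N] == c                   # terminal condition of the pre-jump rank
assert Y(1.0, 0.3, 1) == h(0.3, 1)  # after the jump, Y equals h(tau, zeta)
```

The jump component U t (i)=h(t,i)−Y 0 t is then read off the two ranks, exactly as in the general construction (3.6) below.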

3.2 The Existence Theorem

To prove the existence of a solution to BSDE (2.4), we introduce the decomposition of the coefficients ξ and f as given by (2.5) and Lemma 2.1. From Lemma 2.1(i) and Remark 2.1, we get the following decomposition for f:

(3.2)

where f 0 is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable and f k is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n.

In the following theorem, we show how BSDEs driven by W and μ are related to a recursive system of Brownian BSDEs involving the coefficients ξ k and f k, k=0,…,n.

Theorem 3.1

Assume that for all (θ,e)∈Δ n ×E n, the Brownian BSDE

(3.3)

admits a solution \((Y^{n}(\theta,e),Z^{n}(\theta,e))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{n}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{n}\wedge T,T]\), and that for each k=0,…,n−1, the Brownian BSDE

(3.4)

admits a solution \((Y^{k}(\theta_{(k)}, e_{(k)}), Z^{k}(\theta_{(k)}, e_{(k)}))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\). Assume moreover that each Y k (resp. Z k) is \({\mathcal{PM}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable (resp. \({\mathcal{P}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable).

If all these solutions satisfy

$$ \sup_{(k,\theta,e)} \bigl\|Y^k(\theta_{(k)},e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T,T]} < \infty, $$
(3.5)

and

then, under (HD), BSDE (2.4) admits a solution \((Y, Z, U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\) given by

(3.6)

where \(U^{k}_{t}(\tau_{(k)}, \zeta_{(k)},.)=Y^{k+1}_{t}(\tau_{(k)}, t, \zeta_{(k)},.)-Y^{k}_{t}(\tau_{(k)}, \zeta_{(k)})\) for each k=0,…,n−1.

Proof

To alleviate notation, we shall often write ξ k and f k(t,y,z,u) instead of ξ k(θ (k),e (k)) and f k(t,y,z,u,θ (k),e (k)), and \(Y^{k}_{t}(t, e)\) instead of \(Y^{k}_{t}(\theta_{(k-1)}, t, e_{(k-1)}, e)\).

Step 1: We prove that for t ∈ [0,T], (Y,Z,U) defined by (3.6) satisfies the equation

(3.7)

We make an induction on the number k of jumps in (t,T].

• Suppose that k=0. We distinguish two cases.

Case 1: there are n jumps before t. We then have \(\tau_n\leq t\) and from (3.6) we get \(Y_{t}=Y^{n}_{t}\). Using BSDE (3.3), we can see that

$$Y_t = \xi^n+\int_t^Tf^n\bigl(s,Y^n_s,Z^n_s,0\bigr)\,ds - \int_t^TZ^n_s\,dW_s. $$

Since \(\tau_n\leq T\), we have ξ n=ξ from (2.5). In the same way, we have \(Y_{s}=Y^{n}_{s}\), \(Z_{s} = Z^{n}_{s}\) and U s =0 for all s∈(t,T] from (3.6). Using (3.2), we also get \(f^{n}(s,Y^{n}_{s},Z^{n}_{s},0)=f(s,Y_{s},Z_{s},U_{s})\) for all s∈(t,T]. Moreover, since the predictable processes \(Z\mathbf{1}_{(t,T]}\) and \(Z^{n}\mathbf{1}_{(t,T]}\) are indistinguishable on \(\{\tau_n\leq t\}\), we have from Theorem 12.23 of [13] that \(\int_{t}^{T}Z_{s}\,dW_{s} = \int_{t}^{T}Z_{s}^{n}\,dW_{s}\) on \(\{\tau_n\leq t\}\). Hence, we get

$$Y_t = \xi + \int_t^Tf(s,Y_s,Z_s,U_s)\,ds - \int_t^TZ_s\,dW_s - \int_t^T \int_{E}U_s(e)\mu(de,ds), $$

on \(\{\tau_n\leq t\}\).

Case 2: there are i jumps before t, with i<n; hence \(Y_{t} = Y^{i}_{t}\). Since there is no jump after t, we have \(Y_{s}=Y^{i}_{s}\), \(Z_{s} = Z^{i}_{s}\), \(U^{i}_{s}(.)=Y^{i+1}_{s}(s,.)-Y^{i}_{s}\), ξ=ξ i and \(f^{i}(s,Y^{i}_{s},Z^{i}_{s},U^{i}_{s})=f(s,Y_{s},Z_{s},U_{s})\) for all s∈(t,T], and \(\int_{t}^{T}\int_{E}U_{s}(e)\,\mu(de,ds)=0\). Since the predictable processes \(Z\mathbf{1}_{(t,T]}\) and \(Z^{i}\mathbf{1}_{(t,T]}\) are indistinguishable on \(\{\tau_i\leq t\}\cap\{T<\tau_{i+1}\}\), we have from Theorem 12.23 of [13] that \(\int_{t}^{T}Z_{s}\,dW_{s} = \int_{t}^{T}Z_{s}^{i}\,dW_{s}\) on \(\{\tau_i\leq t\}\cap\{T<\tau_{i+1}\}\). Combining these equalities with (3.4), we get

$$Y_t = \xi+\int_t^Tf(s,Y_s,Z_s,U_s)\,ds - \int_t^TZ_s\,dW_s - \int_t^T\int_{E}U_s(e)\mu(de,ds), $$

on \(\{\tau_i\leq t\}\cap\{T<\tau_{i+1}\}\).

• Suppose equation (3.7) holds true when there are k jumps in (t,T], and consider the case where there are k+1 jumps in (t,T].

Denote by i the number of jumps in [0,t] hence \(Y_{t} = Y^{i}_{t}\). Then, we have \(Z_{s} = Z^{i}_{s}\), \(U^{i}_{s}(.)=Y^{i+1}_{s}(s,.)-Y^{i}_{s}\) for all s∈(t,τ i+1], and \(Y_{s}=Y^{i}_{s}\) and \(f(s,Y_{s},Z_{s},U_{s})= f^{i}(s,Y^{i}_{s},Z^{i}_{s},U^{i}_{s})\) for all s∈(t,τ i+1). Using (3.4), we have

Since the predictable processes \(Z\mathbf{1}_{(t,\tau_{i+1}]}\) and \(Z^{i}\mathbf{1}_{(t,\tau_{i+1}]}\) are indistinguishable on \(\{\tau_i\leq t<\tau_{i+1}\}\cap\{\tau_{i+k+1}\leq T<\tau_{i+k+2}\}\), we get from Theorem 12.23 of [13] that \(\int_{t}^{\tau_{i+1}}Z_{s}\,dW_{s} = \int_{t}^{\tau_{i+1}}Z^{i}_{s}\,dW_{s}\) on this set. Therefore, we get

(3.8)

on \(\{\tau_i\leq t<\tau_{i+1}\}\cap\{\tau_{i+k+1}\leq T<\tau_{i+k+2}\}\). Using the induction assumption on \((\tau_{i+1},T]\), we have

for all r∈[0,T], where

Thus, the two processes are indistinguishable, since each is a càd-làg modification of the other. In particular, they coincide at the stopping time τ i+1 , and we get from the definition of Y

(3.9)

Combining (3.8) and (3.9), we get (3.7).

Step 2: Notice that the process Y (resp. Z, U) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{G})\) (resp. \({\mathcal{P}}(\mathbb{G})\), \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\))-measurable since each Y k (resp. Z k) is \({\mathcal{PM}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable.

Step 3: We now prove that the solution satisfies the integrability conditions. Suppose that the processes Y k, k=0,…,n, satisfy (3.5). Define the constant M by

$$M := \sup_{(k,\theta,e)} \bigl\|Y^k(\theta_{(k)},e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T,T]}, $$

and consider the set \(A\in {\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{n}\cap[0,T]^{n})\otimes {\mathcal{B}}(E^{n})\) defined by

Then, we have \(\mathbb{P}(\tilde{\varOmega})=1\), where

$$\tilde{\varOmega} := \bigl\{\omega\in{\varOmega} : \bigl(\omega,\tau(\omega),\zeta(\omega)\bigr)\in A\bigr\}. $$

Indeed, we have from the density assumption (HD)

(3.10)

From the definition of M and A, we have

for all (θ,e)∈(Δ n ∩[0,T]nE n. Therefore, we get from (3.10), \(\mathbb{P}(\tilde{\varOmega}^{c})=0\). Then, by definition of Y, we have

Since \(\mathbb{P}(\tilde {\varOmega})=1\), we have

(3.11)

Therefore, we get from (3.11)

$$|Y_t|\leq (n+1)M,\quad \mathbb{P}\hbox{-a.s.}, $$

for all t∈[0,T]. Since Y is càd-làg, we get

$$\|Y\|_{{\mathcal{S}}^\infty[0,T]}\leq (n+1)M. $$

In the same way, using (HD) and the tower property of conditional expectation, we get

Thus, \(Z\in L^{2}_{\mathbb{G}}[0,T]\) since the processes Z k, k=0,…,n, satisfy

Finally, we check that UL 2(μ). Using (HD), we have

Hence, UL 2(μ). □

Remark 3.1

From the construction of the solution of BSDE (2.4), the jump component U is bounded in the following sense:

$$\sup_{e\in E} \bigl\|U(e)\bigr\|_{{\mathcal{S}}^\infty[0,T]} < \infty. $$

In particular, the random variable \(\mathrm{ess\,sup}_{(t,e)\in[0,T]\times E}|U_{t}(e)|\) is bounded.
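The pasting rule behind Theorem 3.1 and the definition of U k can be made concrete with a small selector. The component maps Y k below are dummy illustrative functions (not solutions of actual Brownian BSDEs); only the indicator logic, i.e. picking the rank equal to the number of jumps up to t, is the point:

```python
# Assemble the global Y and U of Theorem 3.1 from component maps Y^k.
# The Y^k below are dummy illustrative functions; only the pasting logic
# (selecting the rank by the number of past jumps) mirrors the theorem.

def Yk(k, t, thetas, es):
    """Hypothetical rank-k component Y^k_t(theta_(k), e_(k))."""
    return float(k) + 0.1 * t + 0.01 * sum(es[:k])

def Y_global(t, taus, zetas):
    """Y_t = sum_k 1_{tau_k <= t < tau_{k+1}} Y^k_t(tau_(k), zeta_(k))."""
    k = sum(1 for tau in taus if tau <= t)  # number of jumps up to t
    return Yk(k, t, taus[:k], zetas[:k])

def U_global(t, e, taus, zetas):
    """U^k_t(e) = Y^{k+1}_t(tau_(k), t, zeta_(k), e) - Y^k_t(tau_(k), zeta_(k))."""
    k = sum(1 for tau in taus if tau < t)   # jumps strictly before t
    if k >= len(taus):
        return 0.0                          # after the last jump, U vanishes
    return Yk(k + 1, t, taus[:k] + [t], zetas[:k] + [e]) \
        - Yk(k, t, taus[:k], zetas[:k])

taus, zetas = [0.4, 0.9], [1, 0]
assert Y_global(0.2, taus, zetas) == Yk(0, 0.2, [], [])      # rank 0 before jumps
assert Y_global(0.5, taus, zetas) == Yk(1, 0.5, [0.4], [1])  # rank 1 between jumps
```

Since each rank is bounded, the assembled U inherits the bound of Remark 3.1, with at most n+1 ranks ever compared.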

3.3 Application to Quadratic BSDEs with Jumps

We suppose that the random variable ξ and the generator f satisfy the following conditions:

  1. (HEQ1)

    The random variable ξ is bounded: there exists a positive constant C such that

    $$|\xi|\leq C,\quad \mathbb{P}\hbox{-a.s.} $$
  2. (HEQ2)

    The generator f is quadratic in z: there exists a constant C such that

    $$\bigl|f(t,y,z,u)\bigr|\leq C \biggl(1 + |y| + |z|^2 + \int_E \bigl|u(e)\bigr| \lambda_t(e)\,de \biggr), $$

    for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ).

  3. (HEQ3)

    For any R>0, there exists a function \(mc^{f}_{R}\) such that \(\lim_{\varepsilon \rightarrow0}mc^{f}_{R}(\varepsilon )=0\) and

    $$\bigl|f_t\bigl(y,z,\bigl(u(e)-y\bigr)_{e\in E}\bigr) - f_t\bigl(y',z',\bigl(u(e)-y'\bigr)_{e\in E}\bigr)\bigr| \leq mc^f_R(\varepsilon ), $$

    for all (t,y,y′,z,z′,u)∈[0,T]×[ℝ]2×[ℝd]2×Bor(E,ℝ) s.t. |y|, |z|, |y′|, |z′|≤R and |yy′|+|zz′|≤ε.
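As an illustration (this driver is ours, chosen for concreteness, and is not one used later in the paper), a generator satisfying (HEQ2) and, under (HBI), also (HEQ3) is

```latex
f(t,y,z,u) \;=\; \tfrac{1}{2}\,|z|^{2} \;-\; y \;+\; \int_{E}\bigl(|u(e)|\wedge 1\bigr)\,\lambda_{t}(e)\,de .
```

Indeed, \(|f(t,y,z,u)|\leq 1+|y|+|z|^2+\int_E|u(e)|\lambda_t(e)\,de\) gives (HEQ2) with C=1; and for |y|, |z|, |y′|, |z′|≤R with |y−y′|+|z−z′|≤ε, one may take \(mc^f_R(\varepsilon)=(1+R+{\varLambda})\varepsilon\), where Λ is the bound on \(\int_E\lambda_t(e)\,de\) from (HBI), since \(\tfrac12\bigl||z|^2-|z'|^2\bigr|\leq R|z-z'|\) and the shifted jump arguments differ by at most |y−y′| pointwise.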

Proposition 3.1

Under (HD), (HBI), (HEQ1), (HEQ2), and (HEQ3), BSDE (2.4) admits a solution in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T] \times L^{2}_{\mathbb{G}}[0,T] \times L^{2}(\mu)\).

Proof

Step 1. Since ξ is a bounded random variable, we can choose ξ k bounded for each k=0,…,n. Indeed, let C be a positive constant such that |ξ|≤C, ℙ-a.s.; then we have

$$\xi = \sum_{k=0}^{n}\mathbf{1}_{\{\tau_k\leq T<\tau_{k+1}\}}\,\tilde{\xi}^k(\tau_{(k)},\zeta_{(k)}), $$

with \(\tilde{\xi}^{k}(\tau_{1}, \ldots, \tau_{k}, \zeta_{1}, \ldots, \zeta_{k})=(\xi^{k}(\tau_{1}, \ldots, \tau_{k}, \zeta_{1}, \ldots, \zeta_{k})\wedge C)\vee (-C)\), for each k=0,…,n.

Step 2. Since f is quadratic in z, it is possible to choose the functions f k, k=0,…,n, quadratic in z. Indeed, if C is a positive constant such that |f(t,y,z,u)|≤C(1+|y|+|z|2+∫ E |u(e)|λ t (e) de), for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ), ℙ-a.s. and f has the following decomposition:

then, f satisfies the same decomposition with \(\tilde{f}^{k}\) instead of f k where

for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ) and (θ,e)∈Δ n ×E n.

Step 3. We now prove by a backward induction that there exists for each k=0,…,n−1 (resp. k=n), a solution (Y k,Z k) to BSDE (3.4) (resp. (3.3)) s.t. Y k is a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable process and Z k is a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable process, and

$$\sup_{(\theta_{(k)}, e_{(k)})\in {\varDelta}_k \times E^k} \bigl\|Y^k(\theta_{(k)}, e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T, T]} + \bigl\|Z^k(\theta_{(k)}, e_{(k)})\bigr\|_{L^2[\theta_k\wedge T, T]} < \infty. $$

• Choosing ξ n(θ (n),e (n)) bounded as in Step 1, we get from (HEQ3) and Proposition D.1 and Theorem 2.3 of [22] the existence of a solution (Y n(θ (n),e (n)),Z n(θ (n),e (n))) to BSDE (3.3).

We now check that we can choose Y n (resp. Z n) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable process. Indeed, we know (see [22]) that we can construct the solution (Y n,Z n) as limit of solutions to Lipschitz BSDEs. From Proposition C.1, we then get a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable solution as limit of \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable processes. Hence, Y n (resp. Z n) is a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable process. Applying Proposition 2.1 of [22] to (Y n,Z n), we get from (HEQ1) and (HEQ2)

$$\sup_{(\theta,e)\in {\varDelta}_n \times E^n} \bigl\|Y^n(\theta_{(n)},e_{(n)})\bigr\|_{{\mathcal{S}}^\infty[\theta_n \wedge T, T]} + \bigl\|Z^n(\theta_{(n)}, e_{(n)})\bigr\|_{L^2[\theta_n \wedge T, T]} < \infty. $$

• Fix kn−1 and suppose that the result holds true for k+1: there exists (Y k+1,Z k+1) such that

Then, using (HBI), there exists a constant C>0 such that

$$\bigl|f^k\bigl(s, y, z, Y_{s}^{k+1}(\theta_{(k)}, s, e_{(k)},.) - y, \theta_{(k)}, e_{(k)}\bigr)\bigr|\leq C \bigl( 1 + |y| + |z|^2 \bigr). $$

Choosing ξ k(θ (k),e (k)) bounded as in Step 1, we get from (HEQ3) and Proposition D.1 and Theorem 2.3 of [22] the existence of a solution (Y k(θ (k),e (k)), Z k(θ (k),e (k))).

As for k=n, we can choose Y k (resp. Z k) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable process.

Applying Proposition 2.1 of [22] to (Y k(θ (k),e (k)),Z k(θ (k),e (k))), we get from (HEQ1) and (HEQ2)

$$\sup_{(\theta_{(k)}, e_{(k)}) \in {\varDelta}_k \times E^k} \bigl\|Y^k(\theta_{(k)}, e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T, T]} + \bigl\|Z^k(\theta_{(k)}, e_{(k)})\bigr\|_{L^2[\theta_k\wedge T, T]} < \infty. $$

Step 4. From Step 3, we can apply Theorem 3.1. We then get the existence of a solution to BSDE (2.4). □

Remark 3.2

Our existence result is given for a bounded terminal condition. It is based on the result of Kobylanski for quadratic Brownian BSDEs in [22]. We note that existence results for quadratic BSDEs with unbounded terminal conditions have recently been proved in Briand and Hu [5] and Delbaen et al. [8]. These works provide existence results for Brownian quadratic BSDEs with exponentially integrable terminal conditions and generators, and show that the solution Y satisfies an exponential integrability condition.

Here, we cannot use these results in our approach. Indeed, consider the case of a single jump with the generator f(t,y,z,u)=|z|2+|u|. The associated decomposed BSDE at rank 0 is given by

$$Y^0_t = \xi^0 + \int_t^T \bigl[\bigl|Z^0_s\bigr|^2 + \bigl|Y^1_s(s)-Y^0_s\bigr|\bigr]\,ds - \int_t^TZ^0_s\,dW_s,\quad 0 \leq t \leq T. $$

Then to apply the results from [5] or [8], we require that the process \((Y^{1}_{s}(s))_{s}\) satisfies some exponential integrability condition. However, at rank 1, the decomposed BSDE is given by

$$Y^1_t(\theta) = \xi^1(\theta) + \int_t^T \bigl|Z^1_s(\theta)\bigr|^2\,ds - \int_t^TZ^1_s(\theta)\,dW_s,\quad \theta\leq t \leq T,\ 0 \leq \theta\leq T, $$

and since ξ 1 satisfies an exponential integrability condition by assumption, we know that Y 1(θ) satisfies an exponential integrability condition for any θ∈[0,T]; however, we have no information about the process \((Y^{1}_{s}(s))_{s \in [0,T]}\). The difficulty here lies in understanding the behavior of the “sectioned” process \(\{Y^{1}_{s}(\theta)~:~s=\theta\}\); its study is left for further research.
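For the single-jump example above, the rank-1 equation can in fact be solved in closed form by an exponential (Cole–Hopf) change of variable; the following standard computation, included here as a sketch, makes the role of the exponential integrability of ξ 1 transparent. Setting \(P_t := e^{2Y^1_t(\theta)}\), Itô's formula gives

$$dP_t = 2P_t Z^1_t(\theta)\,dW_t, $$

so P is a (local) martingale with terminal value \(e^{2\xi^1(\theta)}\), and, under suitable integrability,

$$Y^1_t(\theta) = \frac{1}{2}\ln \mathbb{E}\bigl[e^{2\xi^1(\theta)}\,\big|\,{\mathcal{F}}_t\bigr],\quad \theta\leq t\leq T. $$

This controls \(Y^{1}_{t}(\theta)\) for each fixed θ, but gives no uniform control along the diagonal \((Y^{1}_{s}(s))_{s}\).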

3.4 Application to the Pricing of a European Option in a Market with a Jump

In this example, we assume that W is one dimensional (d=1) and there is a single random time τ representing the time of occurrence of a shock in the prices on the market. We denote by H the associated pure jump process:

$$H_t = \mathbf{1}_{\tau\leq t},\quad 0\leq t\leq T. $$
We consider a financial market which consists of

  • a non-risky asset S 0, whose strictly positive price process is defined by

    $$dS^0_t = r_t S^0_t\,dt,\quad 0\leq t \leq T,\ S^0_0=1, $$

    with r t ≥0, for all t∈[0,T],

  • two risky assets with respective price processes S 1 and S 2 defined by

    $$dS^1_t = S^1_{t^-} (b_t\,dt + \sigma_t\,dW_t + \beta\,dH_t),\quad 0\leq t\leq T,\ S^1_0 = s^1_0, $$

    and

    $$dS^2_t = S^2_t (\bar{b}_t\,dt + \bar{\sigma}_t\,dW_t),\quad 0\leq t\leq T,\ S^2_0 = s^2_0, $$

    with σ t >0 and \(\bar{\sigma}_{t}>0\), and β>−1 (to ensure that the price process S 1 always remains strictly positive).
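The dynamics of S 1 can be illustrated numerically. The sketch below simulates one path with an exact (Doléans-Dade) scheme for the continuous part and a multiplicative jump of size β at τ; all numerical values (b, σ, β, the law of τ) are hypothetical choices, not taken from the text, and β>−1 keeps the path strictly positive.

```python
import numpy as np

# Illustrative simulation of the risky asset S^1 with a single jump at the
# random time tau. All constants are hypothetical.
rng = np.random.default_rng(0)
T, n_steps = 1.0, 1_000
dt = T / n_steps
b, sigma, beta = 0.05, 0.2, -0.4          # beta > -1, as required
tau = rng.exponential(scale=2.0)          # illustrative shock time

s, t = 1.0, 0.0                           # S^1_0 = s^1_0 = 1
path = [s]
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    # exact scheme for the continuous part (stochastic exponential) ...
    s *= np.exp((b - 0.5 * sigma**2) * dt + sigma * dW)
    # ... and a multiplicative jump 1 + beta at tau (dH_t = 1 there)
    if t < tau <= t + dt:
        s *= 1.0 + beta
    t += dt
    path.append(s)
```

Since each factor is strictly positive, the simulated price stays strictly positive, as claimed above.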

We make the following assumption which ensures the existence of the processes S 0, S 1, and S 2:

(HB) The coefficients r, b, \(\bar{b}\), σ, \(\bar{\sigma}\), \(\frac{1}{\sigma}\) and \(\frac{1}{\bar{\sigma}}\) are bounded: there exists a constant C s.t.

$$|r_t| + |b_t| + |\bar{b}_t| + |\sigma_t| + |\bar{\sigma}_t| + \biggl|\frac{1}{\sigma_t}\biggr| + \biggl|\frac{1}{\bar{\sigma}_t}\biggr|\leq C,\quad 0\leq t \leq T,\ \mathbb{P}\hbox{-a.s.} $$

We assume that the coefficients r, b, \(\bar{b}\), σ, and \(\bar{\sigma}\) have the following forms:

for all t≥0.

The aim of this subsection is to provide an explicit price for any bounded \({\mathcal{G}}_{T}\)-measurable European option ξ of the form

$$\xi = \xi^0 \mathbf{1}_{T < \tau} + \xi^1(\tau) \mathbf{1}_{\tau\leq T}, $$

where ξ 0 is \({\mathcal{F}}_{T}\)-measurable and ξ 1 is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}(\mathbb{R})\)-measurable, together with a replicating strategy π=(π 0,π 1,π 2) (\(\pi^{i}_{t}\) corresponds to the number of shares of S i held at time t). We assume that this market model is free of arbitrage opportunities (a necessary and sufficient condition ensuring this is given e.g. in Lemma 3.1.1 of [7]).

The value of a contingent claim is then given by the initial amount of a replicating portfolio. Let π=(π 0,π 1,π 2) be a \({\mathcal{P}}(\mathbb{G})\)-measurable self-financing strategy. The wealth process Y associated with this strategy satisfies

$$ Y_t = \pi^0_t S^0_t +\pi^1_t S^1_t +\pi^2_t S^2_t,\quad 0\leq t\leq T. $$
(3.12)

Since π is a self-financing strategy, we have

$$dY_t = \pi^0_t\,dS^0_t + \pi^1_t\,dS^1_t + \pi^2_t\,dS^2_t,\quad 0\leq t\leq T. $$

Combining this last equation with (3.12), we get

(3.13)

Define the predictable processes Z and U by

$$ Z_t = \pi^1_t \sigma_{t} S^1_{t} + \pi^2_t \bar{\sigma}_{t} S^2_t\quad \hbox{and}\quad U_t = \pi^1_t \beta S^1_{t^-},\quad 0\leq t\leq T. $$
(3.14)

Then, (3.13) can be written in the form

Therefore, the problem of valuing and hedging the contingent claim ξ consists in solving the following BSDE:

(3.15)

The recursive system of Brownian BSDEs associated with (3.15) is then given by

(3.16)

and

(3.17)

Proposition 3.2

Under (HD) and (HB), BSDE (3.15) admits a solution in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).

Proof

Using the same argument as in Step 1 of the proof of Proposition 3.1, we can assume w.l.o.g. that the coefficients of BSDEs (3.16) and (3.17) are bounded. Then, BSDE (3.16) is a linear BSDE with bounded coefficients and a bounded terminal condition. From Theorem 2.3 in [22], we get the existence of a solution (Y 1(θ),Z 1(θ)) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta,T]\times L_{\mathbb{F}}^{2}[\theta,T]\) to (3.16) for all θ∈[0,T]. Moreover, from Proposition 2.1 in [22], we have

$$ \sup_{\theta\in[0,T]} \bigl\|Y^1(\theta)\bigr\|_{{\mathcal{S}}^\infty[\theta,T]} < \infty. $$
(3.18)

Applying Proposition C.1 with \({\mathcal{X}}=[0,T]\) and \(\eta(d\theta)=\gamma^0(\theta)\,d\theta\), we can choose the solution (Y 1,Z 1) as a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}([0,T])\)-measurable process.

Estimate (3.18) ensures that BSDE (3.17) is also a linear BSDE with bounded coefficients. Applying Theorem 2.3 and Proposition 2.1 in [22] as previously, we get the existence of a solution (Y 0,Z 0) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[0,T]\times L_{\mathbb{F}}^{2}[0,T]\) to (3.17). Applying Theorem 3.1, we get the result. □

Since BSDEs (3.16) and (3.17) are linear, we have explicit formulae for the solutions. For Y 1(θ), we get

$$Y^1_t(\theta) = \frac{1}{{\varGamma}^{1}_t(\theta)} \mathbb{E}\bigl[ \xi^1(\theta) {\varGamma}^1_T(\theta)\big| {\mathcal{F}}_t \bigr],\quad \theta\leq t\leq T, $$

with Γ 1(θ) defined by

$${\varGamma}^1_t (\theta) := \exp\biggl(\frac{r^1(\theta) - \bar{b}^1(\theta)}{\bar{\sigma}^1(\theta)} W_t - \frac{1}{2}\biggl| \frac{r^1(\theta) - \bar{b}^1(\theta)}{\bar{\sigma}^1(\theta)}\biggr|^2 t - r^1(\theta) t\biggr),\quad \theta\leq t\leq T. $$

For Y 0, we get

$$Y^0_t = \frac{1}{{{\varGamma}}^{0}_t} \mathbb{E}\biggl[ \xi^0 {\varGamma}^0_T + \int_t^T c_s {\varGamma}^0_s\,ds\bigg| {\mathcal{F}}_t \biggr],\quad 0\leq t\leq T, $$

with Γ 0 defined by

$${{\varGamma}^0_t} := \exp \biggl( \int_0^t d_s\,dW_s - \frac{1}{2} \int_0^t |d_s|^2\,ds + \int_0^t a_s\,ds \biggr),\quad 0\leq t\leq T, $$

where the parameters a, d and c are given by

The price at time t of the European option ξ is equal to \(Y^{0}_{t}\) if t<τ and \(Y^{1}_{t}(\tau)\) if t≥τ. Once we know the processes Y and Z, a hedging strategy π=(π 0,π 1,π 2) is given by (3.12) and (3.14).
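The explicit formula for Y 1 can be sanity-checked by Monte Carlo. The sketch below uses constant hypothetical coefficients r, \(\bar{b}\), \(\bar{\sigma}\) (not taken from the text) and the constant payoff ξ 1=1, for which the formula reduces to \(Y^1_0 = \mathbb{E}[{\varGamma}^1_T] = e^{-rT}\), since the exponential-martingale factor of Γ 1 has mean one.

```python
import numpy as np

# Monte Carlo check of Y^1_0 = E[xi^1 Gamma^1_T] with hypothetical
# constant coefficients and the constant payoff xi^1 = 1.
rng = np.random.default_rng(1)
T, r, b_bar, sigma_bar = 1.0, 0.03, 0.06, 0.25
d = (r - b_bar) / sigma_bar          # market-price-of-risk type ratio

W_T = rng.normal(0.0, np.sqrt(T), size=500_000)   # terminal Brownian values
Gamma_T = np.exp(d * W_T - 0.5 * d**2 * T - r * T)
Y0 = Gamma_T.mean()                  # Monte Carlo estimate of Y^1_0
```

With these parameters the estimate agrees with \(e^{-rT}\) up to Monte Carlo error.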

Under the no-free-lunch assumption, all the hedging portfolios have the same value, which gives the uniqueness of the process Y. This naturally leads to the question of uniqueness for the whole solution (Y,Z,U).

4 Uniqueness

In this section, we provide a uniqueness result based on a comparison theorem. We first provide a general comparison theorem which allows us to compare solutions to the studied BSDEs as soon as we can compare solutions to the associated system of recursive Brownian BSDEs. We then illustrate our general result with a concrete example in a convex framework.

4.1 The General Comparison Theorem

We consider two BSDEs with coefficients \((\underline{f},\underline{\xi})\) and \((\bar{f}, \bar{\xi})\) such that

  • \(\underline{\xi}\) (resp. \(\bar{\xi}\)) is a bounded \({\mathcal{G}}_{T}\)-measurable random variable of the form

    where \(\underline{\xi}^{0}\) (resp. \(\bar{\xi}^{0}\)) is \({\mathcal{F}}_{T}\)-measurable and \(\underline{\xi}^{k}\) (resp. \(\bar{\xi}^{k}\)) is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n,

  • \(\underline{f}\) (resp. \(\bar{f}\)) is a map from [0,T]×Ω×ℝ×ℝd×Bor(E,ℝ) to ℝ which is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-\({\mathcal{B}}(\mathbb{R})\)-measurable.

We denote by \((\underline{Y},\underline{Z},\underline{U})\) and \((\bar{Y},\bar{Z},\bar{U})\) their respective solutions in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). We consider the decomposition \((\underline{Y}^{k})_{0\leq k\leq n}\) (resp. \((\bar{Y}^{k})_{0\leq k\leq n}\), \((\underline{Z}^{k})_{0\leq k\leq n}\), \((\bar{Z}^{k})_{0\leq k\leq n}\), \((\underline{U}^{k})_{0\leq k\leq n}\), \((\bar{U}^{k})_{0\leq k\leq n}\)) of \(\underline{Y}\) (resp. \(\bar{Y}\), \(\underline{Z}\), \(\bar{Z}\), \(\underline{U}\), \(\bar{U}\)) given by Lemma 2.1. For ease of notation, we shall write \(\underline{F}^{k}(t,y,z)\) and \(\bar{F}^{k}(t,y,z)\) instead of \(\underline{f}(t,y,z,\underline{Y}^{k+1}_{t}(\tau_{(k)},t,\zeta_{(k)},.)-y)\) and \(\bar{f}(t,y,z,\bar{Y}^{k+1}_{t}(\tau_{(k)},t,\zeta_{(k)},.)-y)\) for each k=0,…,n−1, and \(\underline{F}^{n}(t,y,z)\) and \(\bar{F}^{n}(t,y,z)\) instead of \(\underline{f}(t,y,z,0)\) and \(\bar{f}(t,y,z,0)\).

We shall make, throughout the text, the following standing assumption, known as the (H)-hypothesis:

(HC) Any \(\mathbb{F}\)-martingale remains a \(\mathbb{G}\)-martingale.

Remark 4.1

Since W is an \(\mathbb{F}\)-Brownian motion, we get under (HC) that it remains a \(\mathbb{G}\)-Brownian motion. Indeed, using (HC), we see that W is a \(\mathbb{G}\)-local martingale with quadratic variation 〈W,W〉 t =t. Applying Lévy’s characterization of Brownian motion (see e.g. Theorem 39 in [29]), we see that W remains a \(\mathbb{G}\)-Brownian motion.

Definition 4.1

We say that a generator g:Ω×[0,T]×ℝ×ℝd→ℝ satisfies a comparison theorem for Brownian BSDEs if for any bounded \(\mathbb{G}\)-stopping times ν 2≥ν 1, any generator g′:Ω×[0,T]×ℝ×ℝd→ℝ and any \({\mathcal{G}}_{\nu_{2}}\)-measurable r.v. ζ and ζ′ such that g≤g′ and ζ≤ζ′ (resp. g≥g′ and ζ≥ζ′), we have Y≤Y′ (resp. Y≥Y′) on [ν 1,ν 2]. Here, (Y,Z) and (Y′,Z′) are solutions in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\) to BSDEs with data (ζ,g) and (ζ′,g′):

$$Y_t = \zeta+\int_t^{\nu_2}g(s,Y_s,Z_s)\,ds - \int_t^{\nu_2}Z_s\,dW_s,\quad \nu_1\leq t\leq \nu_2, $$

and

$$Y'_t = \zeta' + \int_t^{\nu_2}g'\bigl(s,Y'_s,Z'_s\bigr)\,ds - \int_t^{\nu_2}Z'_s\,dW_s, \quad \nu_1\leq t\leq \nu_2. $$

We can state the general comparison theorem.

Theorem 4.1

Suppose that \(\underline{\xi}\leq \bar{\xi},~\mathbb{P}\)-a.s. Suppose moreover that for each k=0,…,n

$$\underline{F}^k(t,y,z)\leq \bar{F}^k(t,y,z),\quad \forall (t,y,z)\in [0,T]\times \mathbb{R}\times \mathbb{R}^d,\ \mathbb{P}\hbox{-\textrm{a.s.}}, $$

and the generators \(\bar{F}^{k}\) or \(\underline{F}^{k}\) satisfy a comparison theorem for Brownian BSDEs. Then, if \(\bar{U}_{t}= \underline{U}_{t}=0\) for t > τ n , we have under (HD) and (HC)

$$\underline{Y}_t\leq \bar{Y}_t,\quad 0\leq t\leq T,\ \mathbb{P}\hbox{-\textrm{a.s.}} $$

Proof

The proof is performed in four steps. We first identify the BSDEs satisfied, in the filtration \(\mathbb{G}\), by the terms appearing in the decompositions of \(\bar{Y}\) and \(\underline{Y}\). We then modify \(\bar{Y} ^{k}\) and \(\underline{Y} ^{k}\) outside of [τ k ,τ k+1) to get càd-làg processes for each k=0,…,n. Next, we compare the modified processes by killing their jumps. Finally, we recover a comparison for the initial processes, since the modification occurs outside of [τ k ,τ k+1) (where they coincide with \(\bar{Y}\) and \(\underline{Y}\)).

Step 1. Since \((\bar{Y}, \bar{Z}, \bar{U})\) (resp. \((\underline{Y}, \underline{Z}, \underline{U})\)) is a solution to the BSDE with parameters \((\bar{\xi}, \bar{f})\) (resp. \((\underline{\xi}, \underline{f})\)), we obtain from the decomposition in the filtration \(\mathbb{F}\) and Theorem 12.23 in [13] that \((\bar{Y}^{n}, \bar{Z}^{n})\) (resp. \((\underline{Y}^{n}, \underline{Z}^{n})\)) is a solution to

(4.1)
(4.2)

and \((\bar{Y}^{k},\bar{Z}^{k})\) (resp. \((\underline{Y}^{k},\underline{Z}^{k})\)) is a solution to

(4.3)
(4.4)

for each k=0,…,n−1.

Step 2. We introduce a family of processes \((\tilde{\bar{Y} }^{k})_{0 \leq k \leq n}\) (resp. \((\tilde{\underline{Y}} ^{k})_{0 \leq k \leq n}\)). We define it recursively by

and for k=0,…,n−1

These processes are càd-làg with jumps only at times τ l , l=1,…,n. Notice also that \(\tilde{\bar{Y} }^{n}\) (resp. \(\tilde{\underline{Y} }^{n}\), \(\tilde{\bar{Y} }^{k}\), \(\tilde{\underline{Y} }^{k}\)) satisfies equation (4.1) (resp. (4.2), (4.3), and (4.4)).

Step 3. We prove by a backward induction that \(\tilde {\underline{Y}}^{n}\leq\tilde{\bar{Y}}^{n}\) on [τ n ∧T,T] and \(\tilde{\underline{Y}}^{k}\leq\tilde {\bar{Y}}^{k}\) on [τ k ∧T,τ k+1∧T), for each k=0,…,n−1.

• Since \(\underline{\xi}\leq\bar{\xi}\), \(\underline{F} ^{n}\leq\bar{F} ^{n}\) and \(\bar{F}^{n}\) or \(\underline{F}^{n}\) satisfy a comparison theorem for Brownian BSDEs, we immediately get from (4.1) and (4.2)

$$\tilde{\underline{Y}}^{n}_{t}\leq \tilde{\bar{Y}}^{n}_{t}, \quad \tau_{n}\wedge T \leq t\leq T. $$

• Fix k≤n−1 and suppose that \(\tilde{\underline{Y}}^{k+1}_{t}\leq \tilde{\bar{Y}}_{t}^{k+1}\) for t∈[τ k+1∧T,τ k+2∧T). Denote by \(^{p}\tilde{\bar{Y}}^{l}\) (resp. \(^{p}\tilde{\underline{Y}}^{l}\)) the predictable projection of \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)) for l=0,…,n. Since the random measure μ admits an intensity absolutely continuous w.r.t. the Lebesgue measure on [0,T], \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)) has inaccessible jumps (see Chap. IV of [9]). We then have

$${}^p\tilde{\bar{Y}}^l_{t} = \tilde{\bar{Y}}^l_{t-}\quad \bigl(\hbox{resp.}\ {}^p\tilde{\underline{Y}}^l_{t}= \tilde{\underline{Y}}^l_{t-}\bigr),\quad 0\leq t\leq T. $$

From (4.3) and (4.4), and the definition of \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)), we have for l=k

(4.5)
(4.6)

Since \(\tilde{\bar{Y}}^{k+1}_{\tau_{k+1}}\geq\tilde{\underline{Y}}^{k+1}_{\tau_{k+1}}\), we get \(^{p}\tilde{\bar{Y}}^{k+1}_{\tau_{k+1}}\geq{}^{p}\tilde{\underline{Y}}^{k+1}_{\tau_{k+1}}\). This, together with the conditions on \(\bar{\xi}\), \(\underline{\xi}\), \(\bar{F} ^{k}\) and \(\underline{F} ^{k}\), gives the result.

Step 4. Since \(\tilde{\bar{Y}}^{k}\) (resp. \(\tilde{\underline{Y}}^{k}\)) coincides with \(\bar{Y}\) (resp. \(\underline{Y}\)) on [τ k ∧T,τ k+1∧T), we get the result. □

Remark 4.2

It is possible to obtain Theorem 4.1 under weaker assumptions than (HC). For instance, it is sufficient to assume that W is a \(\mathbb{G}\)-semimartingale of the form

$$W = M + \int_0^. a_s\,ds, $$

with M a \(\mathbb{G}\)-local martingale and a a \(\mathbb{G}\)-adapted process satisfying

$$ \mathbb{E}\biggl[\exp\biggl(-\int_0^Ta_s\,dM_s-{1\over 2} \int_0^T |a_s|^2\,ds\biggr)\biggr] = 1. $$
(4.7)

Indeed, we first notice that (M t ) t∈[0,T] is a \(\mathbb{G}\)-Brownian motion since it is a continuous \(\mathbb{G}\)-local martingale with 〈M,M〉 t =t for t≥0. Then, from (4.7), we can apply Girsanov's theorem and find that (W t ) t∈[0,T] is a \((\mathbb{Q},\mathbb{G})\)-Brownian motion, where ℚ is the probability measure equivalent to ℙ defined by

$${d \mathbb{Q}\over d\mathbb{P}}\bigg|_{{\mathcal{G}}_T} = \exp\biggl(-\int_0^Ta_s\,dM_s-{1\over 2} \int_0^T|a_s|^2\,ds\biggr). $$

Therefore we can prove Theorem 4.1 under ℚ. Since ℚ is equivalent to ℙ the conclusion remains true under ℙ.
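Condition (4.7) can be checked numerically in the simplest case: M a Brownian motion and a constant (hypothetical) drift process a, for which the density has mean one. The sketch below verifies this by Monte Carlo.

```python
import numpy as np

# Check E[exp(-a M_T - a^2 T / 2)] = 1 for constant a and Brownian M,
# i.e. the Girsanov density condition (4.7) in its simplest instance.
rng = np.random.default_rng(2)
T, a = 1.0, 0.5
M_T = rng.normal(0.0, np.sqrt(T), size=1_000_000)  # terminal values of M
density = np.exp(-a * M_T - 0.5 * a**2 * T)        # candidate dQ/dP
mean_density = density.mean()
```

The sample mean is close to one, consistent with the fact that the stochastic exponential of a bounded drift is a true martingale (Novikov's criterion applies here).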

4.2 Uniqueness via Comparison

In this form, the previous theorem is not directly usable, since the condition on the generators of the Brownian BSDEs is implicit: it involves the solutions of the previous Brownian BSDEs at each step. In the sequel, we give an explicit example for which Theorem 4.1 provides uniqueness. This example is based on a comparison theorem for quadratic BSDEs given by Briand and Hu [6]. We first introduce the following assumptions.

  1. (HUQ1)

    The function f(t,y,.,u) is concave for all (t,y,u)∈[0,T]×ℝ×Bor(E,ℝ).

  2. (HUQ2)

    There exists a constant L s.t.

    $$\bigl|f\bigl(t,y,z,\bigl(u(e)-y\bigr)_{e\in E}\bigr) - f\bigl(t,y',z,\bigl(u(e)-y'\bigr)_{e\in E}\bigr)\bigr|\leq L\bigl|y-y'\bigr| $$

    for all (t,y,y′,z,u)∈[0,T]×ℝ2×ℝd×Bor(E,ℝ).

  3. (HUQ3)

    There exists a constant C>0 such that

    $$\bigl|f(t,y,z,u)\bigr|\leq C\biggl(1+|y|+|z|^2 + \int_E|u(e)|\lambda_t(e)\,de\biggr) $$

    for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ).

  4. (HUQ4)

    f(t,.,u)=f(t,.,0) for all u∈Bor(E,ℝ) and all t∈(τ n ∧T,T].

Theorem 4.2

Under (HD), (HBI), (HC), (HUQ1), (HUQ2), (HUQ3), and (HUQ4), BSDE (2.4) admits at most one solution.

Proof

Let (Y,Z,U) and (Y′,Z′,U′) be two solutions of (2.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). Define the process \(\tilde{U}\) (resp. \(\tilde{U}'\)) by

Then, \(U=\tilde{U}\) and \(U'= \tilde{U}'\) in L 2(μ). Therefore, from (HUQ4), \((Y,Z,\tilde{U})\) and \((Y',Z',\tilde{U}')\) are also solutions to (2.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).

We now prove by a backward induction on k=n,n−1,…,1,0 that \(Y^{k} = Y'^{k}\).

• Suppose that k=n. Then, \((Y^{n},Z^{n})\) and \((Y'^{n},Z'^{n})\) are solutions to

Using Remark 4.1 and Theorem 5 in [6], we find that the generator satisfies a comparison theorem in the sense of Definition 4.1. We can then apply Theorem 4.1 with

for (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ), and we get \(Y^{n} = Y'^{n}\).

• Suppose that the property holds at rank k+1. We can then choose the terms Y j and Y′ j appearing in the decompositions of the processes Y and Y′ given by Lemma 2.1(ii) such that

$$Y^{j}_s(\theta_{(j)},e_{(j)}) = Y'^{j}_s(\theta_{(j)},e_{(j)}), $$

for all (θ,e)∈Δ n ×E n and j=k+1,…,n. Therefore, we find that \((Y^{k},Z^{k})\) and \((Y'^{k},Z'^{k})\) are solutions to

for t∈[0,T], where the generator F is defined by

where

for all (t,y,z)∈[0,T]×ℝ×ℝd. Using Remark 4.1 and Theorem 5 in [6], we see that the generator F satisfies a comparison theorem in the sense of Definition 4.1. We can then apply Theorem 4.1 and we get \(Y^{k} = Y'^{k}\).

• Finally, the result holds true for all k=0,…,n, which gives Y=Y′.

• We now prove that Z=Z′ and U=U′. Identifying the finite variation parts and the Brownian parts of Y and Y′, we get Z=Z′. Then, identifying the pure jump parts, we get \(\tilde{U} =\tilde{U}'\). Since \(\tilde{U} =U\) (resp. \(\tilde{U}' =U'\)) in L 2(μ), we finally get (Y,Z,U)=(Y′,Z′,U′). □

5 Exponential Utility Maximization in a Jump Market Model

We consider a financial market model with a riskless bond, assumed for simplicity to be constant equal to one, and a risky asset subject to some counterparty risks. We suppose that the Brownian motion W is one dimensional (d=1). The dynamics of the risky asset are affected by other firms, the counterparties, which may default at some random times, consequently inducing jumps in the asset price. However, this asset still exists and can be traded after the default of the counterparties. We keep the notation of the previous sections.

Throughout the sequel, we suppose that (HD), (HBI), and (HC) are satisfied. We consider that the price process S evolves according to the equation

$$S_t = S_0 + \int_0^tS_{u^-} \biggl(b_u\,du+\sigma_u\,dW_u + \int_{E}\beta_u(e)\mu(de,du)\biggr),\quad 0 \leq t \leq T. $$

All the processes b, σ, and β are assumed to be \(\mathbb{G}\)-predictable. We introduce the following assumptions on the coefficients appearing in the dynamics of S:

  1. (HS1)

    The processes b, σ and β are uniformly bounded: there exists a constant C s.t.

    $$|b_t|+|\sigma_t|+\bigl|\beta_t(e)\bigr|\leq C,\quad 0 \leq t \leq T,\ e\in E,\ \mathbb{P}\hbox{-a.s.} $$
  2. (HS2)

    There exists a positive constant c σ such that

    $$\sigma_t\geq c_\sigma,\quad 0 \leq t \leq T,\ \mathbb{P}\hbox{-a.s.} $$
  3. (HS3)

    The process β satisfies

    $$\beta_t(e)>-1,\quad 0 \leq t \leq T,\ e\in E,\ \mathbb{P}\hbox{-a.s.} $$
  4. (HS4)

    The process ϑ defined by \(\vartheta_{t} = \frac{b_{t}}{\sigma_{t}}\), t∈[0,T], is uniformly bounded: there exists a constant C such that

    $$|\vartheta_t|\leq C,\quad 0\leq t \leq T,\ \mathbb{P}\hbox{-a.s.} $$

We notice that (HS1) ensures that the process S is well defined, and (HS3) ensures that it remains positive.
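The marked-jump dynamics of S can be illustrated as in Sect. 3.4, now with several jumps carrying marks. In the sketch below, the mark space E=[0,1], the jump-size map β(e), the jump times and all constants are hypothetical choices made only for illustration; β(e)>−1 as in (HS3), so the simulated price stays positive.

```python
import numpy as np

# Illustrative simulation of S driven by W and a marked point process.
# All numerical values are hypothetical.
rng = np.random.default_rng(3)
T, n_steps = 1.0, 1_000
dt = T / n_steps
b, sigma = 0.04, 0.15
jump_times = np.sort(rng.uniform(0.0, T, size=3))   # three default times
marks = rng.uniform(0.0, 1.0, size=3)               # associated marks in E

beta = lambda e: 0.2 * e - 0.1                      # beta(e) > -1, cf. (HS3)
S, t = 1.0, 0.0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    # exact scheme for the continuous part ...
    S *= np.exp((b - 0.5 * sigma**2) * dt + sigma * dW)
    # ... and a multiplicative jump 1 + beta(e) at each marked jump time
    for tj, e in zip(jump_times, marks):
        if t < tj <= t + dt:
            S *= 1.0 + beta(e)
    t += dt
```
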

A self-financing trading strategy is determined by its initial capital x∈ℝ and the amount of money π t invested in the stock, at time t∈[0,T]. The wealth at time t associated with a strategy (x,π) is

$$X^{x,\pi}_t = x + \int_0^t\pi_s b_s\,ds + \int_0^t\pi_s\sigma_s\,dW_s + \int_0^t \int_E\pi_s\beta_s(e)\mu(de,ds),\quad 0\leq t \leq T. $$

We consider a contingent claim that is a random payoff at time T described by a \({\mathcal{G}}_{T}\)-measurable random variable B. We suppose that B is bounded and satisfies

where B 0 is \({\mathcal{F}}_{T}\)-measurable and B k is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n. Then, we define

$$ V(x) := \sup_{\pi \in {\mathcal{A}}} \mathbb{E}\bigl[-\exp\bigl(-\alpha \bigl(X_T^{x,\pi}-B\bigr)\bigr)\bigr], $$
(5.1)

the maximal expected utility that we can achieve by starting at time 0 with the initial capital x, using some admissible strategy \(\pi\in {\mathcal{A}}\) (defined below) on [0,T] and paying B at time T. Here, α is a given positive constant which can be seen as a coefficient of absolute risk aversion.

Finally, we introduce a compact subset \({\mathcal{C}}\) of ℝ with \(0 \in {\mathcal{C}}\), which represents a possible constraint imposed on the trading strategies, that is, π t (ω) ∈ \({\mathcal{C}}\). We then define the space \({\mathcal{A}}\) of admissible strategies.

Definition 5.1

The set \({\mathcal{A}}\) of admissible strategies consists of all ℝ-valued \({\mathcal{P}}(\mathbb{G})\)-measurable processes π=(π t )0≤tT which satisfy

$$\mathbb{E}\int_0^T|\pi_t\sigma_t|^2\,dt+\mathbb{E}\int_0^T \int_E \bigl|\pi_t \beta_t(e)\bigr| \lambda_t(e)\,de\,dt < \infty, $$

and \(\pi_{t}\in {\mathcal{C}}\), dt⊗dℙ-a.e., as well as the uniform integrability of the family

$$\bigl\{ \exp \bigl(-\alpha X^{x,\pi}_\tau\bigr) : \tau\ \hbox{stopping time valued in}\ [0,T] \bigr\}. $$

We first notice that the compactness of \({\mathcal{C}}\) implies the integrability conditions imposed on the admissible strategies.

Lemma 5.1

Any \({\mathcal{P}}(\mathbb{G})\)-measurable process π valued in \({\mathcal{C}}\) satisfies \(\pi\in {\mathcal{A}}\).

The proof is exactly the same as in [24]. We therefore omit it.

In order to characterize the value function V(x) and an optimal strategy, we construct, as in [14] and [24], a family of stochastic processes \((R^{(\pi)})_{\pi \in {\mathcal{A}}}\) with the following properties:

  1. (i)

    \(R^{(\pi)}_{T}=-\exp(-\alpha(X^{x,\pi}_{T}- B))\) for all \(\pi\in {\mathcal{A}}\),

  2. (ii)

    \(R^{(\pi)}_{0}=R_{0}\) is constant for all \(\pi\in {\mathcal{A}}\),

  3. (iii)

    R (π) is a supermartingale for all \(\pi\in {\mathcal{A}}\) and there exists \(\hat{\pi}\in {\mathcal{A}}\) such that \(R^{(\hat{\pi})}\) is a martingale.

Given processes with these properties, we can compare the expected utilities of the strategies \(\pi\in {\mathcal{A}}\) and \(\hat{\pi}\in {\mathcal{A}}\) by

$$\mathbb{E}\bigl[-\exp \bigl(-\alpha\bigl(X^{x,\pi}_T- B\bigr)\bigr)\bigr]\leq R_0(x) = \mathbb{E}\bigl[-\exp\bigl(-\alpha\bigl(X^{x,\hat{\pi}}_T- B\bigr)\bigr)\bigr] = V(x), $$

whence \(\hat{\pi}\) is the desired optimal strategy. To construct this family, we set

$$R^{(\pi)}_t = -\exp\bigl(-\alpha\bigl(X^{x,\pi}_t-Y_t\bigr)\bigr),\quad 0\leq t\leq T,\ \pi\in {\mathcal{A}}, $$

where (Y,Z,U) is a solution of the BSDE

(5.2)

We have to choose a function f for which R (π) is a supermartingale for all \(\pi\in {\mathcal{A}}\), and there exists a \(\hat{\pi}\in {\mathcal{A}}\) such that \(R^{(\hat{\pi})}\) is a martingale. We assume that there exists a triple (Y,Z,U) solving a BSDE with jumps of the form (5.2), with terminal condition B and with a driver f to be determined. We first apply Itô’s formula to R (π) for any strategy π

Thus, the process R (π) satisfies the following SDE:

$$dR^{(\pi)}_t = R^{(\pi)}_{t^-}\,dM^{(\pi)}_t + R^{(\pi)}_t\,dA^{(\pi)}_t, \quad 0< t\leq T, $$

with M (π) a local martingale and A (π) a finite variation continuous process given by

It follows that R (π) has the multiplicative form

$$R^{(\pi)}_t = R^{(\pi)}_0 \mathfrak{E}\bigl(M^{(\pi)}\bigr)_t \exp\bigl(A^{(\pi)}_t\bigr), $$

where \(\mathfrak{E}(M^{(\pi)})\) denotes the Doléans-Dade exponential of the local martingale M (π). Since exp(−α(π t β t (e)−U t (e)))−1>−1, ℙ-a.s., the Doléans-Dade exponential of the discontinuous part of M (π) is a positive local martingale and hence a supermartingale. The supermartingale condition in (iii) holds true provided that, for all \(\pi\in {\mathcal{A}}\), the process exp(A (π)) is nondecreasing; this entails

This condition holds true if we define f as follows:

recall that ϑ t =b t /σ t for t∈[0,T].

Theorem 5.1

Under (HD), (HBI), (HC), (HS1), (HS2), (HS3), and (HS4), the value function of the optimization problem (5.1) is given by

$$ V(x) = -\exp\bigl(-\alpha(x-Y_0)\bigr), $$
(5.3)

where Y 0 is defined as the initial value of the unique solution \((Y,Z,U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\) of the BSDE

(5.4)

with

for all (t,z,u)∈[0,T]×ℝ×Bor(E,ℝ). There exists an optimal trading strategy \(\hat{\pi} \in {\mathcal{A}}\) which satisfies

(5.5)

for all t∈[0,T].

Proof

Step 1. We first prove the existence of a solution to BSDE (5.4). To begin, we check the measurability of the generator f. Notice that we have \(f(.,.,.,.) =\inf_{\pi\in {\mathcal{C}}}F(\pi,.,.,.,.)\) where F is defined by

$$F(\pi,t,y,z,u) := \frac{\alpha}{2} \biggl|\pi\sigma_t-\biggl(z+\frac{\vartheta_t}{\alpha}\biggr)\biggr|^2 + \int_E \frac{\exp(\alpha(u(e)-\pi\beta_t(e)))-1}{\alpha}\lambda_t(e)\,de $$

for all \((\omega,t,\pi,y,z,u)\in{\varOmega}\times[0,T]\times {\mathcal{C}}\times \mathbb{R}\times \mathbb{R}\times Bor(E,\mathbb{R})\). From Fatou's lemma we find that u↦∫ E u(e) de is l.s.c. and hence measurable on Bor(E,ℝ+):={u∈Bor(E,ℝ):u(e)≥0,∀e∈E}. Therefore F(π,.,.,.,.) is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}) \otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable for all \(\pi\in {\mathcal{C}}\). Since F(.,t,y,z,u) is continuous for all (t,y,z,u), we have \(f(.,.,.,.) =\inf_{\pi\in {\mathcal{C}}\cap \mathbb{Q}}F(\pi,.,.,.,.)\), and f is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable.
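The pointwise infimum defining f can also be evaluated numerically by grid search over the compact set \({\mathcal{C}}\). The sketch below specializes F to constant hypothetical coefficients α, σ, ϑ, β, λ and a one-point mark space (so u is a scalar standing for u(e)); none of these values come from the text.

```python
import numpy as np

# Numerical sketch of the driver f = inf_{pi in C} F(pi, .) by grid search.
# Hypothetical constant coefficients; one-point mark space.
alpha, sigma, vartheta, beta, lam = 1.0, 0.2, 0.3, 0.1, 0.5

def F(pi, z, u):
    # F(pi, t, y, z, u) from the display above, with constant coefficients
    quad = 0.5 * alpha * (pi * sigma - (z + vartheta / alpha)) ** 2
    jump = (np.exp(alpha * (u - pi * beta)) - 1.0) / alpha * lam
    return quad + jump

def f(z, u, C=(-1.0, 1.0), n=2001):
    # approximate the infimum over the compact set C on a fine grid
    grid = np.linspace(C[0], C[1], n)
    return float(F(grid, z, u).min())

f_val = f(0.4, 0.2)
```

Since \(0\in{\mathcal{C}}\), the computed infimum is bounded above by F evaluated at π=0, which mirrors the bound used at rank n below.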

We now apply Theorem 3.1. Let σ k, ϑ k and β k, k=0,…,n, be the respective terms appearing in the decomposition of σ, ϑ and β given by Lemma 2.1. Using (HS1) and (HS4), we can assume w.l.o.g. that these terms are uniformly bounded. Then, in the decomposition of the generator f, we can choose the functions f k, k=0,…,n, as

and

for k=0,…,n−1 and (θ,e)∈Δ n ×E n.

Notice also that since B is bounded, we can choose B k, k=0,…,n, uniformly bounded. We now prove by backward induction on k that the BSDEs (we shall omit the dependence on (θ,e))

$$ Y_{t}^n = B^n + \int_{t}^Tf^n\bigl(s,Z^n_{s},0\bigr)\,ds - \int_{t}^TZ^n_{s}\,dW_{s},\quad \theta_n\wedge T\leq t\leq T, $$
(5.6)

and

(5.7)

admit a solution (Y k,Z k) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T] \times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\) such that Y k (resp. Z k) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable with

$$\sup_{(\theta,e)\in {\varDelta}_{n}\times E^n} \bigl\|Y^k(\theta_{(k)},e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_{k}\wedge T,T]} + \bigl\|Z^k(\theta_{(k)},e_{(k)})\bigr\|_{L^2[\theta_{k}\wedge T,T]} < \infty, $$

for all k=0,…,n.

• Since 0 ∈ \({\mathcal{C}}\), we have

$$- \vartheta^n_t z - \frac{|\vartheta^n_t|^2}{2\alpha}\leq f^n(t,z,0) \leq \frac{\alpha}{2}|z|^2. $$

Therefore, we can apply Theorem 2.3 of [22], and we see that for any (θ,e)∈Δ n ×E n, there exists a solution (Y n(θ,e),Z n(θ,e)) to BSDE (5.6) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{n}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{n}\wedge T,T]\). Moreover, this solution is constructed as a limit of solutions to Lipschitz BSDEs (see [22]). Using Proposition C.1, we find that Y n (resp. Z n) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable.

Then, using Proposition 2.1 of [22], we get the existence of a constant K such that

$$\sup_{(\theta,e)\in{\varDelta}_{n}\times E^n} \bigl\|Y^n(\theta,e)\bigr\|_{{\mathcal{S}}^\infty[\theta_n\wedge T,T]} + \bigl\|Z^n(\theta,e)\bigr\|_{L^2[\theta_n\wedge T,T]}\leq K. $$

• Suppose that BSDE (5.7) admits a solution at rank k+1 (k≤n−1) with

(5.8)

We denote by g k the function defined by

$$g^k(t,y,z,\theta_{(k)},e_{(k)}) = f^k\bigl(t,z,Y^{k+1}_{t}(\theta_{(k)},t,e_{(k)},.) - y,\theta_{(k)},e_{(k)}\bigr), $$

for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n. Since g k has exponential growth in the variable y in the neighborhood of −∞, we cannot directly apply our previous results. We therefore prove, via a comparison theorem, that a solution exists, by introducing another BSDE which admits a solution and whose generator coincides with g k on the domain where the solution lives.

Let \((\underline{Y}^{k}(\theta_{(k)},e_{(k)}), \underline{Z}^{k}(\theta_{(k)},e_{(k)}))\) be the solution in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\) to the linear BSDE

where

$$\underline{g}^k(t,y,z,\theta_{(k)},e_{(k)}) = -\vartheta_t^k(\theta_{(k)},e_{(k)}) z-\frac{|\vartheta^k_t(\theta_{(k)},e_{(k)})|^2}{2\alpha}, $$

for all (t,y,z)∈[0,T]×ℝ×ℝ. Since B k and ϑ k are uniformly bounded, we have

$$ \sup_{(\theta_{(k)},e_{(k)}) \in {\varDelta}_k \times E^k} \bigl\|\underline{Y}^{k}(\theta_{(k)},e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T,T]}<\infty. $$
(5.9)

Then, define the generator \(\tilde{g}^{k}\) by

$$\tilde{g}^k(t,y,z,\theta_{(k)},e_{(k)}) = g^k\bigl(t,y\vee \underline{Y}^k_t(\theta_{(k)}, e_{(k)}),z,\theta_{(k)},e_{(k)}\bigr), $$

for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n.

Moreover, since 0 ∈ \({\mathcal{C}}\), we get from (5.8) and (5.9) the existence of a positive constant C such that

$$\bigl|\tilde{g}^k(t,y,z,\theta_{(k)},e_{(k)})\bigr|\leq C\bigl(1+|z|^2\bigr), $$

for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n. We can then apply Theorem 2.3 of [22], and we find that the BSDE

admits a solution \((\tilde{Y}^{k}(\theta_{(k)},e_{(k)}),\tilde{Z}^{k}(\theta_{(k)},e_{(k)}))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T] \times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\). Using Proposition 2.1 of [22], we get

$$\sup_{(\theta_{(k)},e_{(k)})\in {\varDelta}_k \times E^k} \bigl\|\tilde{Y}^{k}(\theta_{(k)},e_{(k)})\bigr\|_{{\mathcal{S}}^\infty[\theta_k\wedge T,T]}<\infty. $$

Then, since \(\tilde{g}^{k}\geq\underline{g}^{k}\) and since \(\underline{g}^{k}\) is Lipschitz continuous, we get from the comparison theorem for BSDEs that \(\tilde{Y}^{k}\geq\underline{Y}^{k}\). Hence, \((\tilde{Y}^{k},\tilde{Z}^{k})\) is a solution to BSDE (5.7). Notice then that we can choose \(\tilde{Y}^{k}\) (resp. \(\tilde{Z}^{k}\)) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable process. Indeed, these processes are solutions to quadratic BSDEs and hence can be written as limits of solutions to Lipschitz BSDEs (see [22]). Using Proposition C.1 with \({\mathcal{X}}={\varDelta}_{k} \times E^{k}\) and \(\eta(d\theta,de)=\gamma^0(\theta,e)\,d\theta\,de\), we see that the solutions to the Lipschitz BSDEs are \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable, and hence \(\tilde{Y}^{k}\) (resp. \(\tilde{Z}^{k}\)) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable.

Step 2. We now prove the uniqueness of a solution to BSDE (5.4). Let (Y 1,Z 1,U 1) and (Y 2,Z 2,U 2) be two solutions of BSDE (5.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L_{\mathbb{G}}^{2}[0,T]\times L^{2}(\mu)\).

Applying an exponential change of variable, we see that \((\tilde{Y}^{i}, \tilde{Z}^{i}, \tilde{U}^{i})\) defined for i=1,2 by

for all t∈[0,T], are solutions in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L_{\mathbb{G}}^{2}[0,T]\times L^{2}(\mu)\) to the BSDE

$$\tilde{Y}_t = \exp(\alpha B) + \int_t^T \tilde{f}(s, \tilde{Y}_s, \tilde{Z}_s, \tilde{U}_s)\,ds - \int_t^T \tilde{Z}_s\,dW_s - \int_t^T\int_E \tilde{U}_s(e)\mu(de,ds), $$

where the generator \(\tilde{f}\) is defined by

We then notice that

– \(\tilde{f}\) satisfies (HUQ1), since it is an infimum of linear functions in the variable z;

– \(\tilde{f}\) satisfies (HUQ2). Indeed, from the definition of \(\tilde{f}\) we have

for all (t,z,u)∈[0,T]×ℝ×Bor(E,ℝ) and y,y′∈ℝ. Since \({\mathcal{C}}\) is compact, we get from (HBI) the existence of a constant C such that

$$\tilde{f} (t,y,z,u-y)- \tilde{f}\bigl(t,y',z,u-y'\bigr)\geq -C\bigl|y-y'\bigr|. $$

Interchanging y and y′, we get the result.
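Combining this inequality with the one obtained after interchanging y and y′ gives the Lipschitz property in y required by (HUQ2):

$$\bigl|\tilde{f}(t,y,z,u-y)-\tilde{f}\bigl(t,y',z,u-y'\bigr)\bigr|\leq C\bigl|y-y'\bigr|. $$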

– \(\tilde{f}\) satisfies (HUQ3). Indeed, since \(0\in {\mathcal{C}}\), we get from (HBI) the existence of a constant C such that

$$\tilde{f}(t,y,z,u)\leq C\biggl(|y|+\int_E \bigl|u(e)\bigr|\lambda_t(e)\,de\biggr), $$

for (t,y,z,u)∈[0,T]×ℝ×ℝ×Bor(E,ℝ). Moreover, from (HBI), there exists a positive constant C such that

Then, from (HS1), (HS2), and the compactness of \({\mathcal{C}}\), we get

$$\tilde{f}(t,y,z,u)\geq -C\biggl(1+|y|+|z|+\int_E \bigl|u(e)\bigr|\lambda_t(e)\,de\biggr), $$

for all (t,y,z,u)∈[0,T]×ℝ×ℝ×Bor(E,ℝ).

– \(\tilde{f}\) satisfies (HUQ4), since at time t it is an integral of the variable u w.r.t. \(\lambda_{t}\), which vanishes on the interval \((\tau_{n},\infty)\).

Since \(\tilde{f}\) satisfies (HUQ1), (HUQ2), (HUQ3), and (HUQ4), we get from Theorem 4.2 that \((\tilde{Y}^{1},\tilde{Z}^{1},\tilde{U}^{1})=(\tilde{Y}^{2},\tilde{Z}^{2},\tilde{U}^{2})\) in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). From the definition of \((\tilde{Y}^{i},\tilde{Z}^{i},\tilde{U}^{i})\) for i=1,2, we get \((Y^{1},Z^{1},U^{1})=(Y^{2},Z^{2},U^{2})\) in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).

Step 3. We check that \(M^{(\hat{\pi})}\) is a BMO-martingale. Since \({\mathcal{C}}\) is compact, (HS1) holds, and U is bounded (being the jump part of the bounded process Y), it suffices to prove that \(\int_{0}^{\cdot}Z_{s}\,dW_{s}\) is a BMO-martingale.
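Recall the standard definition (see e.g. [21]): \(\int_{0}^{\cdot}Z_{s}\,dW_{s}\) is a BMO-martingale if there exists a finite constant c such that, for every stopping time τ≤T,

$$\mathbb{E}\biggl[\int_\tau^T |Z_s|^2\,ds\,\Big|\,{\mathcal{G}}_\tau\biggr]\leq c. $$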

Let M denote the upper bound of the uniformly bounded process Y. Applying Itô’s formula to \((Y-M)^{2}\), we obtain for any stopping time τ≤T

The definition of f yields

$$- \vartheta_t Z_t - \frac{|\vartheta_t|^2}{2\alpha} - \frac{1}{\alpha} \int_E \lambda_t(e)\,de\leq f(t,Z_t, U_t), $$

for all t∈[0,T]. Therefore, since (HBI) and (HS4) hold, we get

Hence, \(\int_{0}^{\cdot} Z_{s}\,dW_{s}\) is a BMO-martingale.
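A minimal sketch of the computation behind this step, assuming the dynamics \(dY_t=-f(t,Z_t,U_t)\,dt+Z_t\,dW_s+\int_E U_t(e)\,\mu(de,dt)\) written here for concreteness: Itô’s formula applied to \((Y-M)^{2}\) between τ and T gives

$$(Y_T-M)^2 = (Y_\tau-M)^2 + 2\int_\tau^T (Y_{s^-}-M)\,dY_s + \int_\tau^T |Z_s|^2\,ds + \sum_{\tau<s\leq T}({\varDelta} Y_s)^2, $$

so that, dropping the nonnegative terms \((Y_\tau-M)^2\) and \(\sum({\varDelta} Y_s)^2\) and taking conditional expectations (the stochastic integrals are true martingales after localization),

$$\mathbb{E}\biggl[\int_\tau^T |Z_s|^2\,ds\,\Big|\,{\mathcal{G}}_\tau\biggr] \leq \mathbb{E}\bigl[(Y_T-M)^2\,\big|\,{\mathcal{G}}_\tau\bigr] - 2\,\mathbb{E}\biggl[\int_\tau^T (M-Y_{s^-})\,f(s,Z_s,U_s)\,ds\,\Big|\,{\mathcal{G}}_\tau\biggr]. $$

Since M−Y is nonnegative and bounded, the lower bound on f stated above together with Young’s inequality applied to the term in \(\vartheta_s Z_s\) then yields a uniform bound on \(\mathbb{E}[\int_\tau^T |Z_s|^2\,ds\,|\,{\mathcal{G}}_\tau]\).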

Step 4. It remains to show that \(R^{(\pi)}\) is a supermartingale for any \(\pi\in {\mathcal{A}}\). Since \(\pi\in {\mathcal{A}}\), the process \(\mathfrak{E}(M^{(\pi)})\) is a positive local martingale, because it is the Doléans–Dade exponential of a local martingale whose jumps are greater than −1. Hence, there exists a sequence of stopping times \((\delta_{n})_{n\in\mathbb{N}}\) satisfying \(\lim_{n\to\infty}\delta_{n}=T\), ℙ-a.s., such that \(\mathfrak{E}(M^{(\pi)})_{\cdot\wedge \delta_{n}}\) is a positive martingale for each n∈ℕ. The process \(A^{(\pi)}\) is nondecreasing. Thus, \(R^{(\pi)}_{t\wedge \delta_{n}}=R_{0}\mathfrak{E}(M^{(\pi)})_{t\wedge \delta_{n}}\exp(A^{(\pi)}_{t\wedge\delta_{n}})\) is a supermartingale, i.e., for s≤t

$$\mathbb{E}\bigl[R^{(\pi)}_{t\wedge \delta_n}\big| {\mathcal{G}}_s \bigr]\leq R^{(\pi)}_{s\wedge \delta_n}. $$
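For completeness, recall the classical formula for the Doléans–Dade exponential of a local martingale M, where \(M^{c}\) denotes its continuous martingale part:

$$\mathfrak{E}(M)_t = \exp\Bigl(M_t - \tfrac{1}{2}\bigl\langle M^{c}\bigr\rangle_t\Bigr) \prod_{0<s\leq t}(1+{\varDelta} M_s)\,e^{-{\varDelta} M_s}, $$

which is positive as soon as all jumps satisfy \({\varDelta} M_s>-1\), as is the case here.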

For any set \(A\in {\mathcal{G}}_{s}\), we have

$$\mathbb{E}\bigl[R^{(\pi)}_{t\wedge \delta_n}\mathbf{1}_A\bigr]\leq \mathbb{E}\bigl[R^{(\pi)}_{s\wedge \delta_n}\mathbf{1}_A\bigr]. $$

(5.10)

On the other hand, since

$$R^{(\pi)}_t = -\exp\bigl(-\alpha \bigl(X^{x, \pi}_t - Y_t\bigr)\bigr), $$

we use both the uniform integrability of the family \((\exp(-\alpha X^{x,\pi}_{\delta}))\), where δ runs over the set of stopping times valued in [0,T], and the boundedness of Y to obtain the uniform integrability of

$$\bigl\{R^{(\pi)}_{\tau} : \tau\ \hbox{stopping time valued in}\ [0,T]\bigr\}. $$

Hence, the passage to the limit as n goes to ∞ in (5.10) is justified and it implies

$$\mathbb{E}\bigl[R^{(\pi)}_{t}\mathbf{1}_A\bigr]\leq \mathbb{E}\bigl[R^{(\pi)}_{s}\mathbf{1}_A\bigr],\quad A\in {\mathcal{G}}_s. $$

We obtain the supermartingale property of \(R^{(\pi)}\).

To complete the proof, we show that the strategy \(\hat{\pi}\) defined by (5.5) is optimal. We first notice that from Lemma 5.1 we have \(\hat{\pi}\in {\mathcal{A}}\). By definition of \(\hat{\pi}\), we have \(A^{(\hat{\pi})} = 0\) and hence \(R^{(\hat{\pi})}_{t} = R_{0}\mathfrak{E}(M^{(\hat{\pi})})_{t}\). Since \({\mathcal{C}}\) is compact, (HS1) holds, and U is bounded (being the jump part of the bounded process Y), there exists a constant δ>0 such that

$${\varDelta} M^{(\hat{\pi})}_t = M^{(\hat{\pi})}_t-M^{(\hat{\pi})}_{t^-}\geq -1+\delta. $$

Applying the Kazamaki criterion to the BMO martingale \(M^{(\hat{\pi})}\) (see [21]) we find that \(\mathfrak{E}(M^{(\hat{\pi})})\) is a true martingale. As a result, we get

$$\sup_{\pi\in {\mathcal{A}}} \mathbb{E}\bigl(R^{(\pi)}_T\bigr) = R_0 = V(x). $$

Using that (Y,Z,U) is the unique solution of the BSDE (5.4), we obtain the expression (5.3) for the value function. □

Remark 5.1

Concerning the existence and uniqueness of a solution to BSDE (5.4), we notice that the compactness assumption on \({\mathcal{C}}\) is only needed for the uniqueness. Indeed, in the case where \({\mathcal{C}}\) is only a closed set, the generator of the BSDE still satisfies a quadratic growth condition, which allows us to apply Kobylanski's existence result. However, for the uniqueness of the solution to BSDE (5.4), we need \({\mathcal{C}}\) to be compact to get decomposed generators that are Lipschitz continuous w.r.t. y. We notice that the existence result for a similar BSDE in the case of Poisson jumps is proved by Morlais in [24] and [25] without any compactness assumption on \({\mathcal{C}}\).