Abstract
This work deals with backward stochastic differential equations (BSDEs for short) with random marked jumps, and their applications to default risk. We show that these BSDEs are linked with Brownian BSDEs through the decomposition of processes with respect to the progressive enlargement of filtrations. We prove that the equations have solutions if the associated Brownian BSDEs have solutions. We also provide a uniqueness theorem for BSDEs with jumps by giving a comparison theorem based on the comparison for Brownian BSDEs. We give in particular some results for quadratic BSDEs. As applications, we study the pricing and the hedging of a European option in a market with a single jump, and the utility maximization problem in an incomplete market with a finite number of jumps.
1 Introduction
In recent years, credit risk has emerged as one of the most fundamental financial risks. The most extensively studied form of credit risk is default risk. Many authors, among them Bielecki, Jarrow, Jeanblanc, Pham and Rutkowski [2, 3, 16, 17, 20, 28], have worked on this subject. In several papers related to this topic (see for example Ankirchner et al. [1], Bielecki and Jeanblanc [4] and Lim and Quenez [23]), backward stochastic differential equations (BSDEs) with jumps have appeared. Unfortunately, the results for these BSDEs are far less numerous than for Brownian BSDEs. In particular, there is no general result on the existence and uniqueness of solutions to quadratic BSDEs, except Ankirchner et al. [1], in which the assumptions on the driver are strong. In this paper, we study BSDEs with random marked jumps and apply the obtained results to mathematical finance, where these jumps can be interpreted as default times. We give a general existence and uniqueness result for the solutions to these BSDEs; in particular, we extend the result of [1] for quadratic BSDEs.
A standard approach to credit risk modeling is based on the powerful technique of filtration enlargement, making the distinction between the filtration \(\mathbb{F}\) generated by the Brownian motion and its smallest extension \(\mathbb{G}\) that turns default times into \(\mathbb{G}\)-stopping times. This kind of enlargement is referred to as progressive enlargement of filtrations, a traditional subject in probability theory initiated by fundamental works of the French school in the 1980s; see e.g. Jeulin [18], Jeulin and Yor [19], and Jacod [15]. For an overview of applications of progressive enlargement of filtrations to credit risk, we refer to the books of Duffie and Singleton [11] and of Bielecki and Rutkowski [2], or the lecture notes of Bielecki et al. [3].
The purpose of this paper is to combine results on Brownian BSDEs and results on progressive enlargement of filtrations in view of providing existence and uniqueness of solutions to BSDEs with random marked jumps. We consider a progressive enlargement with multiple random times and associated marks. These marks can represent for example the name of the firm which defaults or the jump sizes of asset values.
Our approach consists in using the recent results of Pham [28] on the decomposition of predictable processes with respect to the progressive enlargement of filtrations to decompose a BSDE with random marked jumps into a sequence of Brownian BSDEs. By combining the solutions of these Brownian BSDEs, we obtain a solution to the BSDE with random marked jumps. This method allows us to obtain a general existence theorem; in particular, we get an existence result for quadratic BSDEs which is more general than that of Ankirchner et al. [1]. The decomposition approach also yields a uniqueness theorem under Assumption (H), i.e. the hypothesis that any \(\mathbb{F}\)-martingale remains a \(\mathbb{G}\)-martingale. We first establish a general comparison theorem for BSDEs with jumps based on comparison theorems for Brownian BSDEs. Using this theorem, we prove, in particular, uniqueness for quadratic BSDEs with a generator that is concave in z.
We illustrate our methodology with two financial applications in default risk management: the pricing and the hedging of a European option, and the problem of utility maximization in an incomplete market. A similar problem (without marks) has recently been considered in Ankirchner et al. [1] and Lim and Quenez [23].
The paper is organized as follows. The next section presents the general framework of progressive enlargement of filtrations with successive random times and marks, and states the decomposition result for \(\mathbb{G}\)-predictable and specific \(\mathbb{G}\)-progressively measurable processes. In Sect. 3, we use this decomposition to link Brownian BSDEs with BSDEs with random marked jumps, which allows us to give a general existence result under a density assumption. We then give two examples: first, quadratic BSDEs with marked jumps; second, linear BSDEs arising in the pricing and hedging of a European option in a market with a single jump. In Sect. 4, we give a general comparison theorem for BSDEs and use this result to obtain a uniqueness theorem for quadratic BSDEs. Finally, in Sect. 5, we apply our existence and uniqueness results to solve the exponential utility maximization problem in an incomplete market with a finite number of marked jumps.
2 Progressive Enlargement of Filtrations with Successive Random Times and Marks
We fix a probability space \(({\varOmega}, {\mathcal{G}}, \mathbb{P})\), and we start with a reference filtration \(\mathbb{F}=({\mathcal{F}}_{t})_{t \geq 0}\) satisfying the usual conditions and generated by a d-dimensional Brownian motion W. Throughout the article, we consider a finite sequence \((\tau_k,\zeta_k)_{1\leq k\leq n}\), where
-
\((\tau_k)_{1\leq k\leq n}\) is a nondecreasing sequence of random times (i.e. nonnegative \({\mathcal{G}}\)-measurable random variables),
-
\((\zeta_k)_{1\leq k\leq n}\) is a sequence of random marks valued in some Borel subset E of \(\mathbb{R}^m\).
We denote by μ the random measure associated with the sequence \((\tau_k,\zeta_k)_{1\leq k\leq n}\): \(\mu([0,t]\times B) := \sum_{k=1}^n \mathbb{1}_{\{\tau_k\leq t,\ \zeta_k\in B\}}\) for all t≥0 and \(B\in{\mathcal{B}}(E)\).
For each k=1,…,n, we consider \(\mathbb{D}^{k}=({\mathcal{D}}^{k}_{t})_{t \geq 0}\), the smallest filtration for which \(\tau_k\) is a stopping time and \(\zeta_k\) is \({\mathcal{D}}_{\tau_{k}}^{k}\)-measurable; \(\mathbb{D}^{k}\) is then given by \({\mathcal{D}}^k_t = \tilde{{\mathcal{D}}}^k_{t^+}\), where \(\tilde{{\mathcal{D}}}^k_t := \sigma(\zeta_k \mathbb{1}_{\{\tau_k\leq s\}},\ 0\leq s\leq t)\). The global information is then defined by the progressive enlargement \(\mathbb{G}=({\mathcal{G}}_{t})_{t \geq 0}\) of the initial filtration \(\mathbb{F}\), where \(\mathbb{G}\) is the smallest right-continuous filtration containing \(\mathbb{F}\) such that, for each k=1,…,n, \(\tau_k\) is a \(\mathbb{G}\)-stopping time and \(\zeta_k\) is \({\mathcal{G}}_{\tau_{k}}\)-measurable. \(\mathbb{G}\) is given by \({\mathcal{G}}_{t}:=\tilde{{\mathcal{G}}}_{t^{+}}\), where \(\tilde{{\mathcal{G}}}_{t} := {\mathcal{F}}_{t}\vee {\mathcal{D}}^{1}_{t}\vee \cdots \vee {\mathcal{D}}^{n}_{t}\) for all t≥0.
We denote by Δ k the set where the random k-tuple (τ 1,…,τ k ) takes its values on the event {τ n <∞}: \({\varDelta}_k := \{(\theta_1,\ldots,\theta_k)\in(\mathbb{R}_+)^k : \theta_1\leq\cdots\leq\theta_k\}\).
We introduce some notations used throughout the paper:
-
\({\mathcal{P}}(\mathbb{F})\) (resp. \({\mathcal{P}}(\mathbb{G})\)) is the σ-algebra of \(\mathbb{F}\) (resp. \(\mathbb{G}\))-predictable subsets of Ω×ℝ+, i.e. the σ-algebra generated by the left-continuous \(\mathbb{F}\) (resp. \(\mathbb{G}\))-adapted processes.
-
\({\mathcal{PM}}(\mathbb{F})\) (resp. \({\mathcal{PM}}(\mathbb{G})\)) is the σ-algebra of \(\mathbb{F}\) (resp. \(\mathbb{G}\))-progressively measurable subsets of Ω×ℝ+.
-
For k=1,…,n, \({\mathcal{PM}}(\mathbb{F},{\varDelta}_{k},E^{k})\) is the σ-algebra generated by processes X from ℝ+×Ω×Δ k ×E k to ℝ such that (X t (.)) t∈[0,s] is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s]) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable, for all s≥0.
-
For θ=(θ 1,…,θ n )∈Δ n and e=(e 1,…,e n )∈E n, we denote by
$$\theta_{(k)} := (\theta_{1},\ldots, \theta_{k})\quad \hbox{and}\quad e_{(k)} := (e_{1},\ldots, e_{k}),\quad 1\leq k\leq n. $$
We also write \(\tau_{(k)}\) for \((\tau_1,\ldots,\tau_k)\) and \(\zeta_{(k)}\) for \((\zeta_1,\ldots,\zeta_k)\), for all k=1,…,n.
The following result provides the basic decomposition of predictable and progressive processes with respect to this progressive enlargement of filtrations.
Lemma 2.1
-
(i)
Any \({\mathcal{P}}(\mathbb{G})\)-measurable process X=(X t ) t≥0 is represented as
$$X_t = X^0_t\,\mathbb{1}_{\{t\leq \tau_1\}} + \sum_{k=1}^{n-1} X^k_t(\tau_{(k)},\zeta_{(k)})\,\mathbb{1}_{\{\tau_k < t\leq \tau_{k+1}\}} + X^n_t(\tau_{(n)},\zeta_{(n)})\,\mathbb{1}_{\{t > \tau_n\}} \quad (2.1) $$
for all t≥0, where X 0 is \(\mathcal{P}(\mathbb{F})\)-measurable and X k is \(\mathcal{P}(\mathbb{F}) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for k=1,…,n.
-
(ii)
Any càd-làg \({\mathcal{PM}}(\mathbb{G})\)-measurable process X=(X t ) t≥0 of the form
$$X_t = J_t + \int_0^t \int_EU_s(e)\mu(de,ds),\quad t\geq 0, $$where J is \({\mathcal{P}}(\mathbb{G})\)-measurable and U is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable, is represented as
$$X_t = X^0_t\,\mathbb{1}_{\{t< \tau_1\}} + \sum_{k=1}^{n-1} X^k_t(\tau_{(k)},\zeta_{(k)})\,\mathbb{1}_{\{\tau_k \leq t< \tau_{k+1}\}} + X^n_t(\tau_{(n)},\zeta_{(n)})\,\mathbb{1}_{\{t \geq \tau_n\}} \quad (2.2) $$
for all t≥0, where X 0 is \({\mathcal{PM}}(\mathbb{F})\)-measurable and X k is \({\mathcal{PM}}(\mathbb{F},{\varDelta}_{k},E^{k})\)-measurable for k=1,…,n.
The proof of (i) is given in Pham [28] and is therefore omitted. The proof of (ii) is based on similar arguments. Hence, we postpone it to the Appendix.
Throughout the sequel, we will use the convention τ 0=0, τ n+1=+∞, θ 0=0 and θ n+1=+∞ for any θ∈Δ n , and X 0(θ (0),e (0))=X 0 to simplify the notation.
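To fix ideas, in the single-jump case n=1 the predictable decomposition (2.1) of Lemma 2.1(i) takes the following simple form (a sketch consistent with the conventions above; the progressive decomposition (2.2) uses the complementary indicator conventions):

```latex
% n = 1: a G-predictable X is an F-predictable process up to the jump,
% and a parametrized F-predictable process, frozen at (tau_1, zeta_1), after it
X_t \;=\; X^0_t\,\mathbb{1}_{\{t \leq \tau_1\}}
      \;+\; X^1_t(\tau_1, \zeta_1)\,\mathbb{1}_{\{t > \tau_1\}},
\qquad t \geq 0 .
```

This is exactly the pasting mechanism used in the introductory example of Sect. 3.1.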
Remark 2.1
In the case where the studied process X depends on another parameter x evolving in a Borel subset \(\mathcal{X}\) of ℝp, and X is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable, decomposition (2.1) still holds, but with X k being \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable. Indeed, this is obvious for the processes generating \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\) of the form X t (ω,x)=L t (ω)R(x), (t,ω,x) ∈ \(\mathbb{R}_{+}\times{\varOmega}\times\mathcal{X}\), where L is \({\mathcal{P}}(\mathbb{G})\)-measurable and R is \({\mathcal{B}}(\mathcal{X})\)-measurable. The result then extends to any \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathcal{X})\)-measurable process by the monotone class theorem.
We now introduce a density assumption on the random times and their associated marks by assuming that the distribution of (τ 1,…,τ n ,ζ 1,…,ζ n ) is absolutely continuous with respect to the Lebesgue measure dθ de on \({\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\). More precisely, we make the following assumption.
(HD) There exists a positive \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}( E^{n})\)-measurable map γ such that for any t≥0,
We then introduce some notation. Define the process γ 0 by
and the map γ k a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable process, k=1,…,n−1, by
We shall use the natural convention \(\gamma^n=\gamma\). Under (HD), the random measure μ admits a compensator which is absolutely continuous w.r.t. the Lebesgue measure; its intensity λ is given by the following proposition.
Proposition 2.1
Under (HD), the random measure μ admits a compensator for the filtration \(\mathbb{G}\) given by λ t (e) de dt, where the intensity λ is defined by
with
for (θ (k−1),t,e (k−1),e)∈Δ k−1×ℝ+×E k.
The proof of Proposition 2.1 is based on similar arguments to those of [12]. We therefore postpone it to the Appendix.
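For orientation, in the single-jump case n=1 the intensity reduces, before the jump, to the familiar hazard-rate form under the density hypothesis (a sketch with the notation of (HD); the general formula of the proposition stratifies this expression over the number of past jumps):

```latex
% n = 1: hazard rate before the jump; after tau_1 no further jump can occur,
% so the intensity vanishes
\lambda_t(e) \;=\; \mathbb{1}_{\{t \leq \tau_1\}}\,
  \frac{\gamma_t(t,e)}{\int_t^\infty \!\!\int_E \gamma_t(\theta, e')\,de'\,d\theta}\,.
```

The denominator is the conditional survival mass \(\mathbb{P}(\tau_1 > t \,|\, {\mathcal{F}}_t)\) computed from the density γ.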
We add an assumption on the intensity λ which will be used in existence and uniqueness results for quadratic BSDEs as well as for the utility maximization problem:
We now consider one dimensional BSDEs driven by W and the random measure μ. To define solutions, we need to introduce the following spaces, where a,b∈ℝ+ with a≤b, and T<∞ is the terminal time:
-
\({\mathcal{S}}_{\mathbb{G}}^{\infty}[a,b]\) (resp. \({\mathcal{S}}_{\mathbb{F}}^{\infty}[a,b]\)) is the set of ℝ-valued \({\mathcal{PM}}(\mathbb{G})\) (resp. \({\mathcal{PM}}(\mathbb{F})\))-measurable processes (Y t ) t∈[a,b] essentially bounded:
$$\| Y \|_{{\mathcal{S}}^\infty[a,b]} := \mathop{\mathrm{ess\,sup}}\limits_{t\in[a,b]}|Y_{t}|< \infty. $$ -
\(L^{2}_{\mathbb{G}}[a,b]\) (resp. \(L^{2}_{\mathbb{F}}[a,b]\)) is the set of ℝd-valued \({\mathcal{P}}(\mathbb{G})\) (resp. \({\mathcal{P}}(\mathbb{F})\))-measurable processes (Z t ) t∈[a,b] such that
$$\|Z\|_{L^2[a,b]} := \biggl(\mathbb{E}\biggl[ \int_a^b |Z_t|^2\,dt \biggr]\biggr)^{1\over 2} < \infty. $$
L 2(μ) is the set of ℝ-valued \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable processes U such that
$$\|U\|_{L^2(\mu)} := \biggl(\mathbb{E}\biggl[\int_0^T \int_{E}\bigl|U_{s}(e)\bigr|^2\mu(de,ds)\biggr]\biggr)^{1\over2}<\infty. $$
We then consider BSDEs of the form: find a triple \((Y,Z,U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T] \times L^{2}_{\mathbb{G}}[0,T] \times L^{2}(\mu)\) such that
$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s, U_s)\,ds - \int_t^T Z_s\,dW_s - \int_t^T\!\!\int_E U_s(e)\,\mu(de,ds),\quad 0\leq t\leq T, \quad (2.4) $$
where
-
ξ is a \({\mathcal{G}}_{T}\)-measurable random variable of the form:
$$\xi = \sum_{k=0}^{n} \mathbb{1}_{\{\tau_k \leq T < \tau_{k+1}\}}\, \xi^k(\tau_{(k)}, \zeta_{(k)}) \quad (2.5) $$
where ξ 0 is \({\mathcal{F}}_{T}\)-measurable and ξ k is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n,
-
f is a map from [0,T]×Ω×ℝ×ℝd×Bor(E,ℝ) to ℝ which is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable. Here, Bor(E,ℝ) is the set of Borel functions from E to ℝ, and \({\mathcal{B}}(Bor(E,\mathbb{R}))\) is the Borel σ-algebra on Bor(E,ℝ) for the topology of pointwise convergence.
To ensure that BSDE (2.4) is well posed, we have to check that the stochastic integral w.r.t. W is well defined on \(L^{2}_{\mathbb{G}}[0,T]\) in our context.
Proposition 2.2
Under (HD), for any process \(Z\in L^{2}_{\mathbb{G}}[0,T]\), the stochastic integral \(\int_{0}^{T}Z_{s}\,dW_{s}\) is well defined.
Proof
Consider the initial enlargement ℍ of the filtration \(\mathbb{G}\). We recall that \(\mathbb{H}=({\mathcal{H}}_{t})_{t\geq 0}\) is given by
We prove that the stochastic integral \(\int_{0}^{T}Z_{s}\,dW_{s}\) is well defined for all \({\mathcal{P}}(\mathbb{H})\)-measurable process Z such that \(\mathbb{E}\int_{0}^{T}|Z_{s}|^{2}\,ds<\infty\). Fix such a process Z.
From Theorem 2.1 in [15], we see that W is an ℍ-semimartingale of the form
where a is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable. Since M is an ℍ-continuous local martingale with quadratic variation 〈M,M〉 t =〈W,W〉 t =t for t≥0, we get from Lévy’s characterization of Brownian motion (see e.g. Theorem 39 in [29]) that M is an ℍ-Brownian motion. Therefore the stochastic integral \(\int_{0}^{T}Z_{s}\,dM_{s}\) is well defined and we now concentrate on the term \(\int_{0}^{T} Z_{s}a_{s}(\tau_{(n)},\zeta_{(n)})\,ds\).
From Lemma 1.8 in [15] the process γ(θ,e) is an \(\mathbb{F}\)-martingale. Since \(\mathbb{F}\) is the filtration generated by W we get from the representation theorem of Brownian martingales that
Still using Theorem 2.1 in [15] and since γ(θ,e) is continuous, we have
for all (θ,e)∈Δ n ×E n. Therefore we get
for all (θ,e)∈Δ n ×E n. Since γ(θ,e) is an \(\mathbb{F}\)-martingale, we obtain (see e.g. Theorem 62 Chapt. 8 in [10]) that
for all (θ,e)∈Δ n ×E n. Consider the set \(A\in {\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) defined by
Then, we have \(\mathbb{P}(\tilde{\varOmega})=0\), where
Indeed, we have from the density assumption (HD)
From the definition of A and (2.6), we have
for all (θ,e)∈Δ n ×E n. Therefore, we get from (2.7), \(\mathbb{P}(\tilde{\varOmega})=0\) or equivalently
From Corollary 1.11 we have γ t (τ 1,…,τ n ,ζ 1,…,ζ n )>0 for all t≥0 ℙ-a.s. Since γ .(τ 1,…,τ n ,ζ 1,…,ζ n ) is continuous we obtain
Combining (2.8) and (2.9), we get
Since Z satisfies \(\mathbb{E}\int_{0}^{T} |Z_{s}|^{2}ds<\infty\), we find that
Therefore \(\int_{0}^{T} Z_{s}a_{s}(\tau_{1},\ldots,\tau_{n},\zeta_{1},\ldots,\zeta_{n})\,ds\) is well defined. □
3 Existence of a Solution
In this section, we use the decompositions given by Lemma 2.1 to solve BSDEs with a finite number of jumps. We follow an approach similar to that of Ankirchner et al. [1]: one can explicitly construct a solution by combining the solutions of an associated recursive system of Brownian BSDEs. In contrast to their setting, however, we allow n random times together with n random marks, and our assumptions on the driver are weaker. We first show, through a simple example, how our method of constructing solutions to BSDEs with jumps works. We then give a general existence theorem linking the BSDEs with jumps to a recursive system of Brownian BSDEs. We finally illustrate our general result with concrete examples.
3.1 An Introductory Example
We begin with a simple example to illustrate the method. We consider the following equation, involving only a single jump time τ and a single mark ζ valued in E={0,1}:
where \(H_t=(H_t(0),H_t(1))\) with \(H_t(i) = \mathbb{1}_{\{\tau\leq t\}}\mathbb{1}_{\{\zeta=i\}}\) for t≥0 and i∈E. Here c is a real constant, and f and h are deterministic functions. To solve BSDE (3.1), we first solve a recursive system of BSDEs:
Suppose that this recursive system of BSDEs admits, for any (θ,e)∈[0,T]×{0,1}, a pair of solutions Y 1(θ,e) and Y 0. Define the process (Y,U) by
and
We then prove that the process (Y,U) is a solution of BSDE (3.1). By Itô’s formula, we have
This can be written as
From the definition of U, we get
We also have \(Y_T=\xi\), which shows that (Y,U) is a solution of BSDE (3.1).
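The pasting mechanism of this example can be sketched numerically. In the toy code below, `y0` and `y1` are hypothetical placeholders for the solutions \(Y^0\) and \(Y^1(\theta,e)\) of the recursive system (they are not the actual solutions of BSDE (3.1)); the point is only that the jump of the pasted process Y at τ equals the predictable component U evaluated at (τ,ζ):

```python
# Toy illustration of the pasting of Sect. 3.1 (hypothetical y0, y1).
import math

def y1(t, theta, e):
    # hypothetical after-jump solution Y^1_t(theta, e)
    return math.exp(-(t - theta)) * (1.0 + e)

def y0(t):
    # hypothetical before-jump solution Y^0_t
    return 2.0 - t

def Y(t, tau, zeta):
    # pasted solution: Y_t = Y^0_t on {t < tau}, Y^1_t(tau, zeta) on {t >= tau}
    return y0(t) if t < tau else y1(t, tau, zeta)

def U(t, e):
    # predictable jump component: U_t(e) = Y^1_t(t, e) - Y^0_t (for t <= tau)
    return y1(t, t, e) - y0(t)

tau, zeta = 0.7, 1  # one scenario: jump at time 0.7 with mark 1
# the jump of Y at tau is exactly U_tau(zeta), as required by BSDE (3.1)
jump = Y(tau, tau, zeta) - y0(tau)
assert abs(jump - U(tau, zeta)) < 1e-12
```

The same check works for any scenario (τ,ζ), since U is built precisely as the difference between the after-jump and before-jump components evaluated on the diagonal.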
3.2 The Existence Theorem
To prove the existence of a solution to BSDE (2.4), we introduce the decomposition of the coefficients ξ and f as given by (2.5) and Lemma 2.1. From Lemma 2.1(i) and Remark 2.1, we get the following decomposition for f:
where f 0 is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable and f k is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n.
In the following theorem, we show how BSDEs driven by W and μ are related to a recursive system of Brownian BSDEs involving the coefficients ξ k and f k, k=0,…,n.
Theorem 3.1
Assume that for all (θ,e)∈Δ n ×E n, the Brownian BSDE
admits a solution \((Y^{n}(\theta,e),Z^{n}(\theta,e))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{n}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{n}\wedge T,T]\), and that for each k=0,…,n−1, the Brownian BSDE
admits a solution \((Y^{k}(\theta_{(k)}, e_{(k)}), Z^{k}(\theta_{(k)}, e_{(k)}))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\). Assume moreover that each Y k (resp. Z k) is \({\mathcal{PM}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable (resp. \({\mathcal{P}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable).
If all these solutions satisfy
and
then, under (HD), BSDE (2.4) admits a solution \((Y, Z, U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\) given by
$$Y_t = \sum_{k=0}^{n} \mathbb{1}_{\{\tau_k \leq t < \tau_{k+1}\}}\, Y^k_t(\tau_{(k)},\zeta_{(k)}),\qquad Z_t = \sum_{k=0}^{n} \mathbb{1}_{\{\tau_k < t \leq \tau_{k+1}\}}\, Z^k_t(\tau_{(k)},\zeta_{(k)}), $$
$$U_t(\cdot) = \sum_{k=0}^{n-1} \mathbb{1}_{\{\tau_k < t \leq \tau_{k+1}\}}\, U^k_t(\tau_{(k)},\zeta_{(k)},\cdot), \quad (3.6) $$
where \(U^{k}_{t}(\tau_{(k)}, \zeta_{(k)},.)=Y^{k+1}_{t}(\tau_{(k)}, t, \zeta_{(k)},.)-Y^{k}_{t}(\tau_{(k)}, \zeta_{(k)})\) for each k=0,…,n−1.
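The stratified pasting of Y over the number of past jumps can be sketched as follows (a toy illustration with hypothetical component functions; marks are omitted and the convention τ 0=0, τ n+1=+∞ is used):

```python
# Sketch of the pasting of Y in (3.6) for general n (hypothetical components).
import bisect

def paste(t, taus, yks):
    """Evaluate Y_t = Y^k_t on the stratum {tau_k <= t < tau_{k+1}}.

    taus: increasing jump times tau_1 <= ... <= tau_n
    yks:  functions y_0, ..., y_n, with y_k the component after k jumps
    """
    k = bisect.bisect_right(taus, t)  # number of jumps in [0, t]
    return yks[k](t)

taus = [0.3, 0.8]                      # n = 2 jump times
yks = [lambda t: 1.0 + t,              # before the first jump
       lambda t: 2.0 * t,              # between the two jumps
       lambda t: -t]                   # after the second jump
assert paste(0.1, taus, yks) == 1.1    # stratum k = 0
assert paste(0.5, taus, yks) == 1.0    # stratum k = 1
assert paste(0.9, taus, yks) == -0.9   # stratum k = 2
```

Note that `bisect_right` places t=τ k in the stratum k, matching the càd-làg convention \(\mathbb{1}_{\{\tau_k \leq t < \tau_{k+1}\}}\) used for Y.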
Proof
To alleviate notation, we shall often write ξ k and f k(t,y,z,u) instead of ξ k(θ (k),e (k)) and f k(t,y,z,u,θ (k),e (k)), and \(Y^{k}_{t}(t, e)\) instead of \(Y^{k}_{t}(\theta_{(k-1)}, t, e_{(k-1)}, e)\).
Step 1: We prove that for t ∈ [0,T], (Y,Z,U) defined by (3.6) satisfies the equation
We make an induction on the number k of jumps in (t,T].
• Suppose that k=0. We distinguish two cases.
Case 1: there are n jumps before t. We then have τ n ≤t and from (3.6) we get \(Y_{t}=Y^{n}_{t}\). Using BSDE (3.3), we can see that
Since τ n ≤T, we have ξ n=ξ from (2.5). In the same way, we have \(Y_{s}=Y^{n}_{s}\), \(Z_{s} = Z^{n}_{s}\) and U s =0 for all s∈(t,T] from (3.6). Using (3.2), we also get \(f^{n}(s,Y^{n}_{s},Z^{n}_{s},0)=f(s,Y_{s},Z_{s},U_{s})\) for all s∈(t,T]. Moreover, since the predictable processes and are indistinguishable on {τ n ≤t}, we have from Theorem 12.23 of [13], \(\int_{t}^{T}Z_{s}\,dW_{s} = \int_{t}^{T}Z_{s}^{n}\,dW_{s} \) on {τ n ≤t}. Hence, we get
on {τ n ≤t}.
Case 2: there are i jumps before t with i<n, hence \(Y_{t} = Y^{i}_{t}\). Since there is no jump after t, we have \(Y_{s}=Y^{i}_{s}\), \(Z_{s} = Z^{i}_{s}\), \(U^{i}_{s}(.)=Y^{i+1}_{s}(s,.)-Y^{i}_{s}\), ξ=ξ i and \(f^{i}(s,Y^{i}_{s},Z^{i}_{s},U^{i}_{s})=f(s,Y_{s},Z_{s},U_{s})\) for all s∈(t,T], and \(\int_{t}^{T}\int_{E}U_{s}(e)\,\mu(de,ds)=0\). Since the predictable processes and are indistinguishable on {τ i ≤t}∩{T<τ i+1}, we have from Theorem 12.23 of [13] that \(\int_{t}^{T}Z_{s}\,dW_{s} = \int_{t}^{T}Z_{s}^{i}\,dW_{s} \) on {τ i ≤t}∩{T<τ i+1}. Combining these equalities with (3.4), we get
on {τ i ≤t}∩{T<τ i+1}.
• Suppose equation (3.7) holds true when there are k jumps in (t,T], and consider the case where there are k+1 jumps in (t,T].
Denote by i the number of jumps in [0,t] hence \(Y_{t} = Y^{i}_{t}\). Then, we have \(Z_{s} = Z^{i}_{s}\), \(U^{i}_{s}(.)=Y^{i+1}_{s}(s,.)-Y^{i}_{s}\) for all s∈(t,τ i+1], and \(Y_{s}=Y^{i}_{s}\) and \(f(s,Y_{s},Z_{s},U_{s})= f^{i}(s,Y^{i}_{s},Z^{i}_{s},U^{i}_{s})\) for all s∈(t,τ i+1). Using (3.4), we have
Since the predictable processes and are indistinguishable on {τ i ≤t<τ i+1}∩{τ i+k+1≤T<τ i+k+2}, we get from Theorem 12.23 of [13] that . Therefore, we get
on {τ i ≤t<τ i+1}∩{τ i+k+1≤T<τ i+k+2}. Using the induction assumption on (τ i+1,T], we have
for all r∈[0,T], where
Thus, the processes and are indistinguishable, since each is a càd-làg modification of the other. In particular, they coincide at the stopping time τ i+1 and we get from the definition of Y
Combining (3.8) and (3.9), we get (3.7).
Step 2: Notice that the process Y (resp. Z, U) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{G})\) (resp. \({\mathcal{P}}(\mathbb{G})\), \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\))-measurable since each Y k (resp. Z k) is \({\mathcal{PM}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}({\mathbb{F}})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable.
Step 3: We now prove that the solution satisfies the integrability conditions. Suppose that the processes Y k, k=0,…,n, satisfy (3.5). Define the constant M by
and consider the set \(A\in {\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{n}\cap[0,T]^{n})\otimes {\mathcal{B}}(E^{n})\) defined by
Then, we have \(\mathbb{P}(\tilde{\varOmega})=1\), where
Indeed, we have from the density assumption (HD)
From the definition of M and A, we have
for all (θ,e)∈(Δ n ∩[0,T]n)×E n. Therefore, we get from (3.10), \(\mathbb{P}(\tilde{\varOmega}^{c})=0\). Then, by definition of Y, we have
Since \(\mathbb{P}(\tilde {\varOmega})=1\), we have
Therefore, we get from (3.11)
for all t∈[0,T]. Since Y is càd-làg, we get
In the same way, using (HD) and the tower property of conditional expectation, we get
Thus, \(Z\in L^{2}_{\mathbb{G}}[0,T]\) since the processes Z k, k=0,…,n, satisfy
Finally, we check that U∈L 2(μ). Using (HD), we have
Hence, U ∈ L 2(μ). □
Remark 3.1
From the construction of the solution of BSDE (2.4), the jump component U is bounded in the following sense:
In particular, the random variable \(\mathrm{ess\,sup}_{(t,e)\in[0,T]\times E}|U_{t}(e)|\) is bounded.
3.3 Application to Quadratic BSDEs with Jumps
We suppose that the random variable ξ and the generator f satisfy the following conditions:
-
(HEQ1)
The random variable ξ is bounded: there exists a positive constant C such that
$$|\xi|\leq C,\quad \mathbb{P}\hbox{-a.s.} $$ -
(HEQ2)
The generator f is quadratic in z: there exists a constant C such that
$$\bigl|f(t,y,z,u)\bigr|\leq C \biggl(1 + |y| + |z|^2 + \int_E \bigl|u(e)\bigr| \lambda_t(e)\,de \biggr), $$for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ).
-
(HEQ3)
For any R>0, there exists a function \(mc^{f}_{R}\) such that \(\lim_{\varepsilon \rightarrow0}mc^{f}_{R}(\varepsilon )=0\) and
$$\bigl|f_t\bigl(y,z,\bigl(u(e)-y\bigr)_{e\in E}\bigr) - f_t\bigl(y',z',\bigl(u(e)-y'\bigr)_{e\in E}\bigr)\bigr| \leq mc^f_R(\varepsilon ), $$for all (t,y,y′,z,z′,u)∈[0,T]×[ℝ]2×[ℝd]2×Bor(E,ℝ) s.t. |y|, |z|, |y′|, |z′|≤R and |y−y′|+|z−z′|≤ε.
Proposition 3.1
Under (HD), (HBI), (HEQ1), (HEQ2), and (HEQ3), BSDE (2.4) admits a solution in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T] \times L^{2}_{\mathbb{G}}[0,T] \times L^{2}(\mu)\).
Proof
Step 1. Since ξ is a bounded random variable, we can choose ξ k bounded for each k=0,…,n. Indeed, let C be a positive constant such that |ξ|≤C, ℙ-a.s. Then we have
with \(\tilde{\xi}^{k}(\tau_{1}, \ldots, \tau_{k}, \zeta_{1}, \ldots, \zeta_{k})=(\xi^{k}(\tau_{1}, \ldots, \tau_{k}, \zeta_{1}, \ldots, \zeta_{k})\wedge C)\vee (-C)\), for each k=0,…,n.
Step 2. Since f is quadratic in z, it is possible to choose the functions f k, k=0,…,n, quadratic in z. Indeed, if C is a positive constant such that |f(t,y,z,u)|≤C(1+|y|+|z|2+∫ E |u(e)|λ t (e) de), for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ), ℙ-a.s. and f has the following decomposition:
then, f satisfies the same decomposition with \(\tilde{f}^{k}\) instead of f k, where
$$\tilde{f}^{k}(t,y,z,u,\theta,e) := \Bigl(f^{k}(t,y,z,u,\theta,e) \wedge C_t(y,z,u)\Bigr) \vee \Bigl(-C_t(y,z,u)\Bigr),\quad C_t(y,z,u) := C\biggl(1+|y|+|z|^2+\int_E \bigl|u(e)\bigr|\lambda_t(e)\,de\biggr), $$
for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ) and (θ,e)∈Δ n ×E n.
Step 3. We now prove by a backward induction that there exists for each k=0,…,n−1 (resp. k=n), a solution (Y k,Z k) to BSDE (3.4) (resp. (3.3)) s.t. Y k is a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable process and Z k is a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable process, and
• Choosing ξ n(θ (n),e (n)) bounded as in Step 1, we get from (HEQ3) and Proposition D.1 and Theorem 2.3 of [22] the existence of a solution (Y n(θ (n),e (n)),Z n(θ (n),e (n))) to BSDE (3.3).
We now check that we can choose Y n (resp. Z n) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable process. Indeed, we know (see [22]) that we can construct the solution (Y n,Z n) as limit of solutions to Lipschitz BSDEs. From Proposition C.1, we then get a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable solution as limit of \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\)-measurable processes. Hence, Y n (resp. Z n) is a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable process. Applying Proposition 2.1 of [22] to (Y n,Z n), we get from (HEQ1) and (HEQ2)
• Fix k≤n−1 and suppose that the result holds true for k+1: there exists (Y k+1,Z k+1) such that
Then, using (HBI), there exists a constant C>0 such that
Choosing ξ k(θ (k),e (k)) bounded as in Step 1, we get from (HEQ3) and Proposition D.1 and Theorem 2.3 of [22] the existence of a solution (Y k(θ (k),e (k)), Z k(θ (k),e (k))).
As for k=n, we can choose Y k (resp. Z k) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable process.
Applying Proposition 2.1 of [22] to (Y k(θ (k),e (k)),Z k(θ (k),e (k))), we get from (HEQ1) and (HEQ2)
Step 4. From Step 3, we can apply Theorem 3.1. We then get the existence of a solution to BSDE (2.4). □
Remark 3.2
Our existence result is given for bounded terminal conditions. It is based on Kobylanski's result for quadratic Brownian BSDEs in [22]. We note that existence results for quadratic BSDEs with unbounded terminal conditions have recently been proved in Briand and Hu [5] and Delbaen et al. [8]. These works provide existence results for Brownian quadratic BSDEs with exponentially integrable terminal conditions and generators, and conclude that the solution Y satisfies an exponential integrability condition.
Here, we cannot use these results in our approach. Indeed, consider the case of a single jump with the generator f(t,y,z,u)=|z|2+|u|. The associated decomposed BSDE at rank 0 is given by
Then to apply the results from [5] or [8], we require that the process \((Y^{1}_{s}(s))_{s}\) satisfies some exponential integrability condition. However, at rank 1, the decomposed BSDE is given by
and since ξ 1 satisfies an exponential integrability condition by assumption, we know that Y 1(θ) satisfies an exponential integrability condition for any θ∈[0,T]; however, we have no information about the process \((Y^{1}_{s}(s))_{s \in [0,T]}\). The difficulty lies in understanding the behavior of the “sectioned” process \(\{Y^{1}_{s}(\theta)~:~s=\theta\}\), whose study is left for further research.
3.4 Application to the Pricing of a European Option in a Market with a Jump
In this example, we assume that W is one dimensional (d=1) and there is a single random time τ representing the time of occurrence of a shock in the prices on the market. We denote by H the associated pure jump process:
We consider a financial market which consists of
-
a non-risky asset S 0, whose strictly positive price process is defined by
$$dS^0_t = r_t S^0_t\,dt,\quad 0\leq t \leq T,\ S^0_0=1, $$with r t ≥0, for all t∈[0,T],
-
two risky assets with respective price processes S 1 and S 2 defined by
$$dS^1_t = S^1_{t^-} (b_t\,dt + \sigma_t\,dW_t + \beta\,dH_t),\quad 0\leq t\leq T,\ S^1_0 = s^1_0, $$and
$$dS^2_t = S^2_t (\bar{b}_t\,dt + \bar{\sigma}_t\,dW_t),\quad 0\leq t\leq T,\ S^2_0 = s^2_0, $$with σ t >0 and \(\bar{\sigma}_{t}>0\), and β>−1 (to ensure that the price process S 1 always remains strictly positive).
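The dynamics of S 1 above can be sketched with a minimal Euler scheme (hypothetical constant coefficients and a deterministic jump time; not the paper's model, just an illustration of why β>−1 keeps the price positive, since the multiplicative factor at the jump is 1+β>0):

```python
# Minimal Euler sketch of dS^1 = S^1_{t-}(b dt + sigma dW + beta dH),
# with hypothetical constants and a single jump of H at time tau.
import math
import random

def simulate_S1(s0=1.0, b=0.05, sigma=0.2, beta=-0.5, tau=0.6,
                T=1.0, n_steps=1000, seed=0):
    rng = random.Random(seed)
    dt = T / n_steps
    s, jumped = s0, False
    for i in range(n_steps):
        t = i * dt
        dw = rng.gauss(0.0, math.sqrt(dt))
        dh = 0.0
        if not jumped and t + dt >= tau:  # H jumps from 0 to 1 at tau
            dh, jumped = 1.0, True
        # multiplicative Euler step; at the jump the factor includes 1 + beta
        s *= 1.0 + b * dt + sigma * dw + beta * dh
    return s

# beta > -1 keeps the jump factor strictly positive, so (for dt small enough)
# the simulated price stays strictly positive
assert simulate_S1() > 0.0
```

The scheme is only a sketch: the continuous part of an Euler step can in principle go negative for coarse grids, which is why the positivity claim is stated for dt small enough.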
We make the following assumption which ensures the existence of the processes S 0, S 1, and S 2:
(HB) The coefficients r, b, \(\bar{b}\), σ, \(\bar{\sigma}\), \(\frac{1}{\sigma}\) and \(\frac{1}{\bar{\sigma}}\) are bounded: there exists a constant C s.t.
We assume that the coefficients r, b, \(\bar{b}\), σ, and \(\bar{\sigma}\) have the following forms:
for all t≥0.
The aim of this subsection is to provide an explicit price for any bounded \({\mathcal{G}}_{T}\)-measurable European option ξ of the form
where ξ 0 is \({\mathcal{F}}_{T}\)-measurable and ξ 1 is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}(\mathbb{R})\)-measurable, together with a replicating strategy π=(π 0,π 1,π 2) (\(\pi^{i}_{t}\) is the number of shares of S i held at time t). We assume that this market model is free of arbitrage opportunities (a necessary and sufficient condition ensuring this is given e.g. in Lemma 3.1.1 of [7]).
The value of a contingent claim is then given by the initial amount of a replicating portfolio. Let π=(π 0,π 1,π 2) be a \({\mathcal{P}}(\mathbb{G})\)-measurable self-financing strategy. The wealth process Y associated with this strategy satisfies
Since π is a self-financing strategy, we have
Combining this last equation with (3.12), we get
Define the predictable processes Z and U by
Then, (3.13) can be written in the form
Therefore, the problem of valuing and hedging the contingent claim ξ consists in solving the following BSDE:
The recursive system of Brownian BSDEs associated with (3.15) is then given by
and
Proposition 3.2
Under (HD) and (HB), BSDE (3.15) admits a solution in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).
Proof
Using the same argument as in Step 1 of the proof of Proposition 3.1, we can assume w.l.o.g. that the coefficients of BSDEs (3.16) and (3.17) are bounded. Then, BSDE (3.16) is a linear BSDE with bounded coefficients and a bounded terminal condition. From Theorem 2.3 in [22], we get the existence of a solution (Y 1(θ),Z 1(θ)) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta,T]\times L_{\mathbb{F}}^{2}[\theta,T]\) to (3.16) for all θ∈[0,T]. Moreover, from Proposition 2.1 in [22], we have
Applying Proposition C.1 with \({\mathcal{X}}=[0,T]\) and dρ(θ)=γ 0(θ) dθ, we can choose the solution (Y 1,Z 1) as a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}([0,T])\)-measurable process.
Estimate (3.18) shows that BSDE (3.17) is also a linear BSDE with bounded coefficients. Applying Theorem 2.3 and Proposition 2.1 in [22] as previously, we get the existence of a solution (Y 0,Z 0) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[0,T]\times L_{\mathbb{F}}^{2}[0,T]\) to (3.17). Applying Theorem 3.1, we get the result. □
Since BSDEs (3.16) and (3.17) are linear, we have explicit formulae for the solutions. For Y 1(θ), we get
with Γ 1(θ) defined by
For Y 0, we get
with Γ 0 defined by
where the parameters a, d and c are given by
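For the reader's convenience, here is the standard representation of the solution to a linear Brownian BSDE, written in generic notation (a sketch consistent with, but not identical to, the paper's displays): for \(-dY_{t}=(a_{t}Y_{t}+d_{t}Z_{t}+c_{t})\,dt-Z_{t}\,dW_{t}\), \(Y_{T}=\xi\), with bounded coefficients,

```latex
Y_t \;=\; \mathbb{E}\Bigl[\,\Gamma_{t,T}\,\xi \;+\; \int_t^T \Gamma_{t,s}\,c_s\,ds \;\Big|\; \mathcal{F}_t\Bigr],
\qquad
\Gamma_{t,s} \;=\; \exp\Bigl(\int_t^s \bigl(a_u-\tfrac12|d_u|^2\bigr)\,du \;+\; \int_t^s d_u\,dW_u\Bigr).
```

The formulae for Y 1(θ) and Y 0 above are of this type, with Γ 1(θ) and Γ 0 playing the role of Γ.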
The price at time t of the European option ξ is equal to \(Y^{0}_{t}\) if t<τ and \(Y^{1}_{t}(\tau)\) if t≥τ. Once we know the processes Y and Z, a hedging strategy π=(π 0,π 1,π 2) is given by (3.12) and (3.14).
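To make the pricing rule above concrete, the following is a minimal Monte Carlo sketch of our own (hypothetical parameters `r`, `gamma`, `xi0`, `xi1`; it assumes the measure is already risk-neutral, that the single default time τ is exponential and independent of W, and that the payoff depends on τ only), not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy defaultable claim: xi = xi0 * 1_{tau > T} + xi1(tau) * 1_{tau <= T},
# with tau ~ Exp(gamma) independent of the Brownian driver (hypothetical setup).
r, gamma, T = 0.02, 0.3, 1.0
xi0 = 1.0                            # payoff if no default before T
xi1 = lambda t: 0.4 * (1.0 - t / T)  # recovery if default occurs at time t <= T

tau = rng.exponential(1.0 / gamma, size=200_000)
payoff = np.where(tau > T, xi0, xi1(np.minimum(tau, T)))
price = np.exp(-r * T) * payoff.mean()  # Monte Carlo estimate of the time-0 price

# Closed form for this toy model, used as a cross-check:
#   e^{-rT} ( e^{-gamma T} xi0 + int_0^T gamma e^{-gamma t} xi1(t) dt )
exact = np.exp(-r * T) * (
    np.exp(-gamma * T) * xi0
    + 0.4 * ((1.0 - np.exp(-gamma * T))
             - (1.0 - np.exp(-gamma * T) * (1.0 + gamma * T)) / (gamma * T))
)
```

The two values agree up to Monte Carlo error; in the paper's setting the pre-default price \(Y^{0}\) and post-default price \(Y^{1}(\tau)\) come instead from the explicit formulae above.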
Under the no-free-lunch assumption, all hedging portfolios have the same value, which yields the uniqueness of the process Y. This leads to the question of uniqueness for the whole solution (Y,Z,U).
4 Uniqueness
In this section, we provide a uniqueness result based on a comparison theorem. We first provide a general comparison theorem which allows us to compare solutions to the studied BSDEs as soon as we can compare solutions to the associated system of recursive Brownian BSDEs. We then illustrate our general result with a concrete example in a convex framework.
4.1 The General Comparison Theorem
We consider two BSDEs with coefficients \((\underline{f},\underline{\xi})\) and \((\bar{f}, \bar{\xi})\) such that
-
\(\underline{\xi}\) (resp. \(\bar{\xi}\)) is a bounded \({\mathcal{G}}_{T}\)-measurable random variable of the form
where \(\underline{\xi}^{0}\) (resp. \(\bar{\xi}^{0}\)) is \({\mathcal{F}}_{T}\)-measurable and \(\underline{\xi}^{k}\) (resp. \(\bar{\xi}^{k}\)) is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n,
-
\(\underline{f}\) (resp. \(\bar{f}\)) is a map from [0,T]×Ω×ℝ×ℝd×Bor(E,ℝ) to ℝ which is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-\({\mathcal{B}}(\mathbb{R})\)-measurable.
We denote by \((\underline{Y},\underline{Z},\underline{U})\) and \((\bar{Y},\bar{Z},\bar{U})\) their respective solutions in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). We consider the decomposition \((\underline{Y}^{k})_{0\leq k\leq n}\) (resp. \((\bar{Y}^{k})_{0\leq k\leq n}\), \((\underline{Z}^{k})_{0\leq k\leq n}\), \((\bar{Z}^{k})_{0\leq k\leq n}\), \((\underline{U}^{k})_{0\leq k\leq n}\), \((\bar{U}^{k})_{0\leq k\leq n}\)) of \(\underline{Y}\) (resp. \(\bar{Y}\), \(\underline{Z}\), \(\bar{Z}\), \(\underline{U}\), \(\bar{U}\)) given by Lemma 2.1. For ease of notation, we shall write \(\underline{F}^{k}(t,y,z)\) and \(\bar{F}^{k}(t,y,z)\) instead of \(\underline{f}(t,y,z,\underline{Y}^{k+1}_{t}(\tau_{(k)},t,\zeta_{(k)},.)-y)\) and \(\bar{f}(t,y,z,\bar{Y}^{k+1}_{t}(\tau_{(k)},t,\zeta_{(k)},.)-y)\) for each k=0,…,n−1, and \(\underline{F}^{n}(t,y,z)\) and \(\bar{F}^{n}(t,y,z)\) instead of \(\underline{f}(t,y,z,0)\) and \(\bar{f}(t,y,z,0)\).
We shall make, throughout the text, the following standing assumption, known as the (H)-hypothesis:
(HC) Any \(\mathbb{F}\)-martingale remains a \(\mathbb{G}\)-martingale.
Remark 4.1
Since W is an \(\mathbb{F}\)-Brownian motion, we get under (HC) that it remains a \(\mathbb{G}\)-Brownian motion. Indeed, using (HC), we see that W is a \(\mathbb{G}\)-local martingale with quadratic variation 〈W,W〉 t =t. Applying Lévy’s characterization of Brownian motion (see e.g. Theorem 39 in [29]), we see that W remains a \(\mathbb{G}\)-Brownian motion.
Definition 4.1
We say that a generator g: Ω×[0,T]×ℝ×ℝd→ℝ satisfies a comparison theorem for Brownian BSDEs if for any bounded \(\mathbb{G}\)-stopping times ν 2≥ν 1, any generator g′:Ω×[0,T]×ℝ×ℝd→ℝ and any \({\mathcal{G}}_{\nu_{2}}\)-measurable random variables ζ and ζ′ such that g≤g′ and ζ≤ζ′ (resp. g≥g′ and ζ≥ζ′), we have Y≤Y′ (resp. Y≥Y′) on [ν 1,ν 2]. Here, (Y,Z) and (Y′,Z′) are solutions in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\) to BSDEs with data (ζ,g) and (ζ′,g′):
and
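To illustrate Definition 4.1 numerically, here is a toy sketch of ours in the degenerate case where the driver does not depend on z (so Z ≡ 0 and the BSDE reduces to a backward ODE); the drivers and terminal conditions below are hypothetical:

```python
import numpy as np

def solve_bsde(g, xi, T=1.0, n=1000):
    """Backward Euler for -dY_t = g(Y_t) dt - Z_t dW_t, Y_T = xi,
    in the deterministic case where Z = 0; returns Y_0."""
    dt = T / n
    y = xi
    for _ in range(n):
        y = y + g(y) * dt  # one backward step from t to t - dt
    return y

# comparison: g <= g' and xi <= xi'  should give  Y_0 <= Y'_0
y_low = solve_bsde(lambda y: -0.5 * y, xi=1.0)         # Y_0  = e^{-1/2}
y_high = solve_bsde(lambda y: -0.5 * y + 0.1, xi=1.2)  # Y'_0 = 0.2 + e^{-1/2}
```

The ordering of the two time-0 values is exactly the conclusion the comparison theorem delivers in the stochastic case.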
We can state the general comparison theorem.
Theorem 4.1
Suppose that \(\underline{\xi}\leq \bar{\xi},~\mathbb{P}\)-a.s. Suppose moreover that for each k=0,…,n
and the generators \(\bar{F}^{k}\) or \(\underline{F}^{k}\) satisfy a comparison theorem for Brownian BSDEs. Then, if \(\bar{U}_{t}= \underline{U}_{t}=0\) for t > τ n , we have under (HD) and (HC)
Proof
The proof is performed in four steps. We first identify the BSDEs solved, in the filtration \(\mathbb{G}\), by the terms appearing in the decomposition of \(\bar{Y}\) and \(\underline{Y}\). We then modify \(\bar{Y} ^{k}\) and \(\underline{Y} ^{k}\) outside of [τ k ,τ k+1) to get càdlàg processes for each k=0,…,n. We then compare the modified processes by killing their jumps. Finally, we recover a comparison for the initial processes since the modification occurs outside of [τ k ,τ k+1) (where they coincide with \(\bar{Y}\) and \(\underline{Y}\)).
Step 1. Since \((\bar{Y}, \bar{Z}, \bar{U})\) (resp. \((\underline{Y}, \underline{Z}, \underline{U})\)) is a solution to the BSDE with parameters \((\bar{\xi}, \bar{f})\) (resp. \((\underline{\xi}, \underline{f})\)), we obtain from the decomposition in the filtration \(\mathbb{F}\) and Theorem 12.23 in [13] that \((\bar{Y}^{n}, \bar{Z}^{n})\) (resp. \((\underline{Y}^{n}, \underline{Z}^{n})\)) is a solution to
and \((\bar{Y}^{k},\bar{Z}^{k})\) (resp. \((\underline{Y}^{k},\underline{Z}^{k})\)) is a solution to
for each k=0,…,n−1.
Step 2. We introduce a family of processes \((\tilde{\bar{Y} }^{k})_{0 \leq k \leq n}\) (resp. \((\tilde{\underline{Y}} ^{k})_{0 \leq k \leq n}\)). We define it recursively by
and for k=0,…,n−1
These processes are càdlàg with jumps only at times τ l , l=1,…,n. Notice also that \(\tilde{\bar{Y} }^{n}\) (resp. \(\tilde{\underline{Y} }^{n}\), \(\tilde{\bar{Y} }^{k}\), \(\tilde{\underline{Y} }^{k}\)) satisfies equation (4.1) (resp. (4.2), (4.3), and (4.4)).
Step 3. We prove by a backward induction that \(\tilde {\underline{Y}}^{n}\) ≤ \(\tilde{\bar{Y}}^{n}\) on [τ n ∧T,T] and \(\tilde{\underline{Y}}^{k}\) ≤ \(\tilde {\bar{Y}}^{k}\) on [τ k ∧T,τ k+1∧T), for each k=0,…,n−1.
• Since \(\underline{\xi}\) ≤ \(\bar{\xi}\), \(\underline{F} ^{n}\) ≤ \(\bar{F} ^{n}\) and \(\bar{F}^{n}\) or \(\underline{F}^{n}\) satisfy a comparison theorem for Brownian BSDEs, we immediately get from (4.1) and (4.2)
• Fix k≤n−1 and suppose that \(\tilde{\underline{Y}}^{k+1}_{t}\leq \tilde{\bar{Y}}_{t}^{k+1}\) for t∈[τ k+1∧T,τ k+2∧T). Denote by \(^{p}\tilde{\bar{Y}}^{l}\) (resp. \(^{p}\tilde{\underline{Y}}^{l}\)) the predictable projection of \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)) for l=0,…,n. Since the random measure μ admits an intensity absolutely continuous w.r.t. the Lebesgue measure on [0,T], \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)) has inaccessible jumps (see Chap. IV of [9]). We then have
From (4.3) and (4.4), and the definition of \(\tilde{\bar{Y}}^{l}\) (resp. \(\tilde{\underline{Y}}^{l}\)), we have for l=k
Since \(\tilde{\bar{Y}}^{k+1}_{\tau_{k+1}}\) ≥ \(\tilde{\underline{Y}}^{k+1}_{\tau_{k+1}}\), we get \(^{p}\tilde{\bar{Y}}^{k+1}_{\tau_{k+1}}\) ≥ \(^{p}\tilde{\underline{Y}}^{k+1}_{\tau_{k+1}}\). This together with conditions on \(\bar{\xi}\), \(\underline{\xi}\), \(\bar{F} ^{k}\) and \(\underline{F} ^{k}\) give the result.
Step 4. Since \(\tilde{\bar{Y}}^{k}\) (resp. \(\tilde{\underline{Y}}^{k}\)) coincides with \(\bar{Y}\) (resp. \(\underline{Y}\)) on [τ k ∧T,τ k+1∧T), we get the result. □
Remark 4.2
It is possible to obtain Theorem 4.1 under weaker assumptions than (HC). For instance, it is sufficient to assume that W is a \(\mathbb{G}\)-semimartingale of the form
with M a \(\mathbb{G}\)-local martingale and a a \(\mathbb{G}\)-adapted process satisfying
Indeed, we first notice that (M t ) t∈[0,T] is a \(\mathbb{G}\)-Brownian motion since it is a continuous \(\mathbb{G}\)-martingale with 〈M,M〉 t =t for t≥0. Then, from (4.7), we can apply Girsanov's theorem and find that (W t ) t∈[0,T] is a \((\mathbb{Q},\mathbb{G})\)-Brownian motion, where ℚ is the probability measure equivalent to ℙ defined by
Therefore we can prove Theorem 4.1 under ℚ. Since ℚ is equivalent to ℙ the conclusion remains true under ℙ.
4.2 Uniqueness via Comparison
In this form, the previous theorem is not directly usable since the condition on the generators of the Brownian BSDEs is implicit: it involves the solution of the previous Brownian BSDE at each step. In the sequel, we give an explicit example for which Theorem 4.1 provides uniqueness. This example is based on a comparison theorem for quadratic BSDEs given by Briand and Hu [6]. We first introduce the following assumptions.
-
(HUQ1)
The function f(t,y,.,u) is concave for all (t,y,u)∈[0,T]×ℝ×Bor(E,ℝ).
-
(HUQ2)
There exists a constant L s.t.
$$\bigl|f\bigl(t,y,z,\bigl(u(e)-y\bigr)_{e\in E}\bigr) - f\bigl(t,y',z,\bigl(u(e)-y'\bigr)_{e\in E}\bigr)\bigr|\leq L\bigl|y-y'\bigr| $$for all (t,y,y′,z,u)∈[0,T]×ℝ2×ℝd×Bor(E,ℝ).
-
(HUQ3)
There exists a constant C>0 such that
$$\bigl|f(t,y,z,u)\bigr|\leq C\biggl(1+|y|+|z|^2 + \int_E|u(e)|\lambda_t(e)\,de\biggr) $$for all (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ).
-
(HUQ4)
f(t,.,u)=f(t,.,0) for all u∈Bor(E,ℝ) and all t∈(τ n ∧T,T].
Theorem 4.2
Under (HD), (HBI), (HC), (HUQ1), (HUQ2), (HUQ3), and (HUQ4), BSDE (2.4) admits at most one solution.
Proof
Let (Y,Z,U) and (Y′,Z′,U′) be two solutions of (2.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). Define the process \(\tilde{U}\) (resp. \(\tilde{U}'\)) by
Then, \(U=\tilde{U}\) and \(U'= \tilde{U}'\) in L 2(μ). Therefore, from (HUQ4), \((Y,Z,\tilde{U})\) and \((Y',Z',\tilde{U}')\) are also solutions to (2.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).
We now prove by a backward induction on k=n,n−1,…,1,0 that
• Suppose that k=n. Then, \((Y^{n},Z^{n})\) and \((Y'^{n},Z'^{n})\) are solutions to
Using Remark 4.1 and Theorem 5 in [6], we find that the generator satisfies a comparison theorem in the sense of Definition 4.1. We can then apply Theorem 4.1 with
for (t,y,z,u)∈[0,T]×ℝ×ℝd×Bor(E,ℝ), and we get \(Y^{n}=Y'^{n}\).
• Suppose that the result holds at ranks j=k+1,…,n. We can then choose Y j and Y′j appearing in the decomposition of the processes Y and Y′ given by Lemma 2.1(ii) such that
for all (θ,e)∈Δ n ×E n and j=k+1,…,n. Therefore, we find that \((Y^{k},Z^{k})\) and \((Y'^{k},Z'^{k})\) are solutions to
for t∈[0,T], where the generator F is defined by
where
for all (t,y,z)∈[0,T]×ℝ×ℝd. Using Remark 4.1 and Theorem 5 in [6], we see that the generator F satisfies a comparison theorem in the sense of Definition 4.1. We can then apply Theorem 4.1, and we get \(Y^{k}=Y'^{k}\).
• Finally, the result holds true for all k=0,…,n, which gives Y=Y′.
• We now prove that Z=Z′ and U=U′. Identifying the finite variation parts and the Brownian parts of Y and Y′, we get Z=Z′. Then, identifying the pure jump parts, we get \(\tilde{U} =\tilde{U}'\). Since \(\tilde{U} =U\) (resp. \(\tilde{U}' =U'\)) in L 2(μ), we finally get (Y,Z,U)=(Y′,Z′,U′). □
5 Exponential Utility Maximization in a Jump Market Model
We consider a financial market model with a riskless bond, assumed for simplicity to be constant equal to one, and a risky asset subject to counterparty risk. We suppose that the Brownian motion W is one dimensional (d=1). The dynamics of the risky asset are affected by other firms, the counterparties, which may default at some random times, inducing jumps in the asset price. However, this asset still exists and can be traded after the default of the counterparties. We keep the notation of the previous sections.
Throughout the sequel, we suppose that (HD), (HBI), and (HC) are satisfied. We consider that the price process S evolves according to the equation
All the processes b, σ and β are assumed to be \(\mathbb{G}\)-predictable. We introduce the following assumptions on the coefficients appearing in the dynamics of S:
-
(HS1)
The processes b, σ and β are uniformly bounded: there exists a constant C s.t.
$$|b_t|+|\sigma_t|+\bigl|\beta_t(e)\bigr|\leq C,\quad 0 \leq t \leq T,\ e\in E,\ \mathbb{P}\hbox{-a.s.} $$ -
(HS2)
There exists a positive constant c σ such that
$$\sigma_t\geq c_\sigma,\quad 0 \leq t \leq T,\ \mathbb{P}\hbox{-a.s.} $$ -
(HS3)
The process β satisfies
$$\beta_t(e)>-1,\quad 0 \leq t \leq T,\ e\in E,\ \mathbb{P}\hbox{-a.s.} $$ -
(HS4)
The process ϑ defined by \(\vartheta_{t} = \frac{b_{t}}{\sigma_{t}}\), t∈[0,T], is uniformly bounded: there exists a constant C such that
$$|\vartheta_t|\leq C,\quad 0\leq t \leq T,\ \mathbb{P}\hbox{-a.s.} $$
We notice that (HS1) ensures that the process S is well defined and (HS3) ensures that it remains positive.
A self-financing trading strategy is determined by its initial capital x∈ℝ and the amount of money π t invested in the stock, at time t∈[0,T]. The wealth at time t associated with a strategy (x,π) is
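As an illustration of the wealth dynamics, here is a sketch of ours with hypothetical constant coefficients `b`, `sigma`, `beta` and a single exponential default time, not the paper's general model:

```python
import numpy as np

def simulate_wealth(pi, x0=1.0, b=0.05, sigma=0.2, beta=-0.1,
                    gamma=0.5, T=1.0, n_steps=250, rng=None):
    """Euler sketch of the self-financing wealth
        dX_t = pi * (b dt + sigma dW_t + beta dN_t),
    where N is a one-jump default indicator with intensity gamma
    (all coefficients hypothetical and constant)."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    tau = rng.exponential(1.0 / gamma)  # single default time
    x, t = x0, 0.0
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))
        dn = 1.0 if t < tau <= t + dt else 0.0  # default indicator increment
        x += pi * (b * dt + sigma * dw + beta * dn)
        t += dt
    return x

rng = np.random.default_rng(0)
terminal = [simulate_wealth(pi=0.5, rng=rng) for _ in range(2000)]
```

In this toy model E[X_T] = x0 + π(bT + β(1 − e^{−γT})), which the sample mean of `terminal` reproduces up to Monte Carlo error.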
We consider a contingent claim that is a random payoff at time T described by a \({\mathcal{G}}_{T}\)-measurable random variable B. We suppose that B is bounded and satisfies
where B 0 is \({\mathcal{F}}_{T}\)-measurable and B k is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for each k=1,…,n. Then, we define
the maximal expected utility that we can achieve by starting at time 0 with the initial capital x, using some admissible strategy \(\pi\in {\mathcal{A}}\) (defined below) on [0,T] and paying B at time T. Here α is a given positive constant, which can be seen as the coefficient of absolute risk aversion.
Finally, we introduce a compact subset \({\mathcal{C}}\) of ℝ with \(0 \in {\mathcal{C}}\), which represents a possible constraint imposed on the trading strategies, that is, π t (ω) ∈ \({\mathcal{C}}\). We then define the space \({\mathcal{A}}\) of admissible strategies.
Definition 5.1
The set \({\mathcal{A}}\) of admissible strategies consists of all ℝ-valued \({\mathcal{P}}(\mathbb{G})\)-measurable processes π=(π t )0≤t≤T which satisfy
and \(\pi_{t}\in {\mathcal{C}}\), dt⊗dℙ-a.e., as well as the uniform integrability of the family
We first notice that the compactness of \({\mathcal{C}}\) implies the integrability conditions imposed on the admissible strategies.
Lemma 5.1
Any \({\mathcal{P}}(\mathbb{G})\)-measurable process π valued in \({\mathcal{C}}\) satisfies \(\pi\in {\mathcal{A}}\).
The proof is exactly the same as in [24]. We therefore omit it.
In order to characterize the value function V(x) and an optimal strategy, we construct, as in [14] and [24], a family of stochastic processes \((R^{(\pi)})_{\pi \in {\mathcal{A}}}\) with the following properties:
-
(i)
\(R^{(\pi)}_{T}=-\exp(-\alpha(X^{x,\pi}_{T}- B))\) for all \(\pi\in {\mathcal{A}}\),
-
(ii)
\(R^{(\pi)}_{0}=R_{0}\) is constant for all \(\pi\in {\mathcal{A}}\),
-
(iii)
R (π) is a supermartingale for all \(\pi\in {\mathcal{A}}\) and there exists \(\hat{\pi}\in {\mathcal{A}}\) such that \(R^{(\hat{\pi})}\) is a martingale.
Given a family of processes with these properties, we can compare the expected utilities of the strategies \(\pi\in {\mathcal{A}}\) and \(\hat{\pi}\in {\mathcal{A}}\) by
whence \(\hat{\pi}\) is the desired optimal strategy. To construct this family, we set
where (Y,Z,U) is a solution of the BSDE
We have to choose a function f for which R (π) is a supermartingale for all \(\pi\in {\mathcal{A}}\), and there exists a \(\hat{\pi}\in {\mathcal{A}}\) such that \(R^{(\hat{\pi})}\) is a martingale. We assume that there exists a triple (Y,Z,U) solving a BSDE with jumps of the form (5.2), with terminal condition B and with a driver f to be determined. We first apply Itô’s formula to R (π) for any strategy π
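For orientation, in the analogous construction of Hu, Imkeller and Müller [14] (and of [24]), the family is taken of the exponential form

```latex
R^{(\pi)}_t \;=\; -\exp\bigl(-\alpha\,(X^{x,\pi}_t - Y_t)\bigr), \qquad t\in[0,T],
```

so that (i) holds because \(Y_{T}=B\), and (ii) holds because \(R^{(\pi)}_{0}=-\exp(-\alpha(x-Y_{0}))\) does not depend on π.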
Thus, the process R (π) satisfies the following SDE:
with M (π) a local martingale and A (π) a finite variation continuous process given by
It follows that R (π) has the multiplicative form
where \(\mathfrak{E}(M^{(\pi)})\) denotes the Doléans-Dade exponential of the local martingale M (π). Since exp(−α(π t β t (e)−U t (e)))−1>−1, ℙ-a.s., the Doléans-Dade exponential of the discontinuous part of M (π) is a positive local martingale and hence a supermartingale. The supermartingale condition in (iii) holds true provided that, for all \(\pi\in {\mathcal{A}}\), the process exp(A (π)) is nondecreasing; this entails
This condition holds true if we define f as follows:
recall that ϑ t =b t /σ t for t∈[0,T].
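For reference, recall the Doléans-Dade exponential used above (standard formula; see e.g. [29]): for a local martingale M with continuous martingale part M c,

```latex
\mathfrak{E}(M)_t \;=\; \exp\Bigl(M_t-\tfrac12\langle M^{c}\rangle_t\Bigr)\,\prod_{0<s\le t}\bigl(1+\Delta M_s\bigr)\,e^{-\Delta M_s},
```

which is positive as soon as ΔM s >−1 for all s — exactly the condition on the jumps checked above.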
Theorem 5.1
Under (HD), (HBI), (HC), (HS1), (HS2), (HS3), and (HS4), the value function of the optimization problem (5.1) is given by
where Y 0 is defined as the initial value of the unique solution \((Y,Z,U)\in {\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\) of the BSDE
with
for all (t,z,u)∈[0,T]×ℝ×Bor(E,ℝ). There exists an optimal trading strategy \(\hat{\pi} \in {\mathcal{A}}\) which satisfies
for all t∈[0,T].
Proof
Step 1. We first prove the existence of a solution to BSDE (5.4). We first check the measurability of the generator f. Notice that we have \(f(.,.,.,.) =\inf_{\pi\in {\mathcal{C}}}F(\pi,.,.,.,.)\) where F is defined by
for all \((\omega,t,\pi,y,z,u)\in{\varOmega}\times[0,T]\times {\mathcal{C}}\times \mathbb{R}\times \mathbb{R}\times Bor(E,\mathbb{R})\). From Fatou’s Lemma we find that u↦∫ E u(e) de is l.s.c. and hence measurable on Bor(E,ℝ+):={u∈Bor(E,ℝ):u(e)≥0,∀e∈E}. Therefore F(π,.,.,.,.) is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}) \otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable for all \(\pi\in {\mathcal{C}}\). Since F(.,t,y,z,u) is continuous for all (t,y,z,u) we have \(f(.,.,.,.) =\inf_{\pi\in {\mathcal{C}}\cap \mathbb{Q}}F(\pi,.,.,.,.)\), and f is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(Bor(E,\mathbb{R}))\)-measurable.
We now apply Theorem 3.1. Let σ k, ϑ k and β k, k=0,…,n, be the respective terms appearing in the decomposition of σ, ϑ and β given by Lemma 2.1. Using (HS1) and (HS4), we can assume w.l.o.g. that these terms are uniformly bounded. Then, in the decomposition of the generator f, we can choose the functions f k, k=0,…,n, as
and
for k=0,…,n−1 and (θ,e)∈Δ n ×E n.
Notice also that since B is bounded, we can choose B k, k=0,…,n, uniformly bounded. We now prove by backward induction on k that the BSDEs (we shall omit the dependence on (θ,e))
and
admit a solution (Y k,Z k) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T] \times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\) such that Y k (resp. Z k) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable with
for all k=0,…,n.
• Since 0 ∈ \({\mathcal{C}}\), we have
Therefore, we can apply Theorem 2.3 of [22], and we see that for any (θ,e)∈Δ n ×E n, there exists a solution (Y n(θ,e),Z n(θ,e)) to BSDE (5.6) in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{n}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{n}\wedge T,T]\). Moreover, this solution is constructed as a limit of Lipschitz BSDEs (see [22]). Using Proposition C.1, we find that Y n (resp. Z n) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{n})\otimes {\mathcal{B}}(E^{n})\))-measurable.
Then, using Proposition 2.1 of [22], we get the existence of a constant K such that
• Suppose that BSDE (5.7) admits a solution at rank k+1 (k≤n−1) with
We denote by g k the function defined by
for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n. Since g k has exponential growth in the variable y in the neighborhood of −∞, we cannot directly apply our previous results. We then prove via a comparison theorem that there exists a solution, by introducing another BSDE which admits a solution and whose generator coincides with g k on the domain where the solution lives.
Let \((\underline{Y}^{k}(\theta_{(k)},e_{(k)}), \underline{Z}^{k}(\theta_{(k)},e_{(k)}))\) be the solution in \({\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T]\times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\) to the linear BSDE
where
for all (t,y,z)∈[0,T]×ℝ×ℝ. Since B k and ϑ k are uniformly bounded, we have
Then, define the generator \(\tilde{g}^{k}\) by
for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n.
Moreover, since 0 ∈ \({\mathcal{C}}\), we get from (5.8) and (5.9) the existence of a positive constant C such that
for all (t,y,z)∈[0,T]×ℝ×ℝ and (θ,e)∈Δ n ×E n. We can then apply Theorem 2.3 of [22], and we find that the BSDE
admits a solution \((\tilde{Y}^{k}(\theta_{(k)},e_{(k)}),\tilde{Z}^{k}(\theta_{(k)},e_{(k)}))\in {\mathcal{S}}^{\infty}_{\mathbb{F}}[\theta_{k}\wedge T,T] \times L^{2}_{\mathbb{F}}[\theta_{k}\wedge T,T]\). Using Proposition 2.1 of [22], we get
Then, since \(\tilde{g}^{k}\) ≥ \(\underline{g}^{k}\) and since \(\underline{g}^{k}\) is Lipschitz continuous, we get from the comparison theorem for BSDEs that \(\tilde{Y}^{k}\) ≥ \(\underline{Y}^{k}\). Hence, \((\tilde{Y}^{k},\tilde{Z}^{k})\) is a solution to BSDE (5.7). Notice then that we can choose \(\tilde{Y}^{k}\) (resp. \(\tilde{Z}^{k}\)) as a \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable process. Indeed, these processes are solutions to quadratic BSDEs and hence can be written as limits of solutions to Lipschitz BSDEs (see [22]). Using Proposition C.1 with \({\mathcal{X}}={\varDelta}_{k} \times E^{k}\) and dρ(θ,e)=γ 0(θ,e) dθ de, we see that the solutions to Lipschitz BSDEs are \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable and hence \(\tilde{Y}^{k}\) (resp. \(\tilde{Z}^{k}\)) is \({\mathcal{P}}{\mathcal{M}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\) (resp. \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\))-measurable.
Step 2. We now prove the uniqueness of a solution to BSDE (5.4). Let (Y 1,Z 1,U 1) and (Y 2,Z 2,U 2) be two solutions of BSDE (5.4) in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L_{\mathbb{G}}^{2}[0,T]\times L^{2}(\mu)\).
Applying an exponential change of variable, we see that \((\tilde{Y}^{i}, \tilde{Z}^{i}, \tilde{U}^{i})\) defined for i=1,2 by
for all t∈[0,T], are solutions in \({\mathcal{S}}_{\mathbb{G}}^{\infty}[0,T]\times L_{\mathbb{G}}^{2}[0,T]\times L^{2}(\mu)\) to the BSDE
where the generator \(\tilde{f}\) is defined by
We then notice that
• \(\tilde{f}\) satisfies (HUQ1) since it is an infimum of linear functions in the variable z,
• \(\tilde{f}\) satisfies (HUQ2). Indeed, from the definition of \(\tilde{f} \) we have
for all (t,z,u)∈[0,T]×ℝ×Bor(E,ℝ) and y,y′∈ℝ. Since \({\mathcal{C}}\) is compact, we get from (HBI) the existence of a constant C such that
Interchanging y and y′, we get the result.
• \(\tilde{f}\) satisfies (HUQ3). Indeed, since \(0\in {\mathcal{C}}\), we get from (HBI) the existence of a constant C such that
for (t,y,z,u)∈[0,T]×ℝ×ℝ×Bor(E,ℝ). Moreover, from (HBI), there exists a positive constant C s.t.
Then, from (HS1), (HS2), and the compactness of \({\mathcal{C}}\), we get
for all (t,y,z,u)∈[0,T]×ℝ×ℝ×Bor(E,ℝ).
• \(\tilde{f} \) satisfies (HUQ4) since at time t it is an integral of the variable u w.r.t. λ t , which vanishes on the interval (τ n ,∞).
Since \(\tilde{f}\) satisfies (HUQ1), (HUQ2), (HUQ3), and (HUQ4), we get from Theorem 4.2 that \((\tilde{Y}^{1},\tilde{Z}^{1},\tilde{U}^{1})=(\tilde{Y}^{2},\tilde{Z}^{2},\tilde{U}^{2})\) in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\). From the definition of \((\tilde{Y}^{i},\tilde{Z}^{i},\tilde{U}^{i})\) for i=1,2, we get (Y 1,Z 1,U 1)=(Y 2,Z 2,U 2) in \({\mathcal{S}}^{\infty}_{\mathbb{G}}[0,T]\times L^{2}_{\mathbb{G}}[0,T]\times L^{2}(\mu)\).
Step 3. We check that \(M^{(\hat{\pi})}\) is a BMO-martingale. Since \({\mathcal{C}}\) is compact, (HS1) holds, and U is bounded (being the jump part of the bounded process Y), it suffices to prove that \(\int_{0}^{.}Z_{s}\,dW_{s}\) is a BMO-martingale.
Let M denote the upper bound of the uniformly bounded process Y. Applying Itô’s formula to (Y−M)2, we obtain for any stopping time τ≤T
The definition of f yields
for all t∈[0,T]. Therefore, since (HBI) and (HS4) hold, we get
Hence, \(\int_{0}^{.} Z_{s}\,dW_{s}\) is a BMO-martingale.
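Recall that \(\int_{0}^{\cdot}Z_{s}\,dW_{s}\) being a BMO-martingale means

```latex
\operatorname*{ess\,sup}_{\tau}\;\mathbb{E}\Bigl[\int_{\tau}^{T}|Z_s|^2\,ds \;\Big|\; \mathcal{G}_{\tau}\Bigr] \;\le\; C \;<\;\infty,
```

where τ ranges over all \(\mathbb{G}\)-stopping times bounded by T; the estimate obtained in Step 3 provides exactly such a uniform bound.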
Step 4. It remains to show that R (π) is a supermartingale for any \(\pi\in {\mathcal{A}}\). Since \(\pi\in {\mathcal{A}}\), the process \(\mathfrak{E}(M^{(\pi)})\) is a positive local martingale, because it is the Doléans-Dade exponential of a local martingale whose jumps are greater than −1. Hence, there exists a sequence of stopping times (δ n ) n∈ℕ satisfying lim n→∞ δ n =T, ℙ-a.s., such that \(\mathfrak{E}(M^{(\pi)})_{.\wedge \delta_{n}}\) is a positive martingale for each n∈ℕ. The process A (π) is nondecreasing. Thus, \(R^{(\pi)}_{t\wedge \delta_{n}}=R_{0}\mathfrak{E}(M^{(\pi)})_{t\wedge \delta_{n}}\exp(A^{(\pi)}_{t\wedge\delta_{n}})\) is a supermartingale, i.e. for s≤t
For any set \(A\in {\mathcal{G}}_{s}\), we have
On the other hand, since
we use both the uniform integrability of \((\exp(-\alpha X^{x,\pi}_{\delta}))\) where δ runs over the set of all stopping times and the boundedness of Y to obtain the uniform integrability of
Hence, the passage to the limit as n goes to ∞ in (5.10) is justified and it implies
We obtain the supermartingale property of R (π).
To complete the proof, we show that the strategy \(\hat{\pi}\) defined by (5.5) is optimal. We first notice that from Lemma 5.1 we have \(\hat{\pi}\in {\mathcal{A}}\). By definition of \(\hat{\pi}\), we have \(A^{(\hat{\pi})} = 0\) and hence \(R^{(\hat{\pi})}_{t} = R_{0}\mathfrak{E}(M^{(\hat{\pi})})_{t}\). Since \({\mathcal{C}}\) is compact, (HS1) holds, and U is bounded (being the jump part of the bounded process Y), there exists a constant δ>0 s.t.
Applying the Kazamaki criterion to the BMO-martingale \(M^{(\hat{\pi})}\) (see [21]), we find that \(\mathfrak{E}(M^{(\hat{\pi})})\) is a true martingale. As a result, we get
Using that (Y,Z,U) is the unique solution of the BSDE (5.4), we obtain the expression (5.3) for the value function. □
Remark 5.1
Concerning the existence and uniqueness of a solution to BSDE (5.4), we notice that the compactness assumption on \({\mathcal{C}}\) is only needed for the uniqueness. Indeed, in the case where \({\mathcal{C}}\) is only a closed set, the generator of the BSDE still satisfies a quadratic growth condition, which allows us to apply Kobylanski's existence result [22]. However, for the uniqueness of the solution to BSDE (5.4), we need \({\mathcal{C}}\) to be compact in order to get decomposed generators that are Lipschitz continuous w.r.t. y. We notice that the existence result for a similar BSDE in the case of Poisson jumps is proved by Morlais in [24] and [25] without any compactness assumption on \({\mathcal{C}}\).
Notes
\({\mathcal{F}}_{0}\) contains the ℙ-null sets and \(\mathbb{F}\) is right continuous: \({\mathcal{F}}_{t}={\mathcal{F}}_{t^{+}}:=\bigcap_{s>t}{\mathcal{F}}_{s}\).
The symbol \(\int_{s}^{t}\) stands for the integral on the interval (s,t] for all s,t∈ℝ+.
References
Ankirchner, S., Blanchet-Scalliet, C., Eyraud-Loisel, A.: Credit risk premia and quadratic BSDEs with a single jump. Int. J. Theor. Appl. Finance 13(7), 1103–1129 (2010)
Bielecki, T., Rutkowski, M.: Credit Risk: Modelling, Valuation and Hedging. Springer, Berlin (2004)
Bielecki, T., Jeanblanc, M., Rutkowski, M.: Stochastic Methods in Credit Risk Modelling. Lecture Notes in Mathematics, vol. 1856, pp. 27–128. Springer, Berlin (2004)
Bielecki, T., Jeanblanc, M.: Indifference prices. In: Carmona, R. (ed.) Indifference Pricing: Theory and Applications. Financial Engineering. Princeton University Press, Princeton (2008)
Briand, P., Hu, Y.: BSDE with quadratic growth and unbounded terminal value. Probab. Theory Relat. Fields 136(4), 604–618 (2006)
Briand, P., Hu, Y.: Quadratic BSDEs with convex generators and unbounded terminal conditions. Probab. Theory Relat. Fields 141, 543–567 (2008)
Callegaro, G.: Credit risk models under partial information. Ph.D. thesis, Scuola Normale Superiore di Pisa and Université d'Évry Val d'Essonne (2010)
Delbaen, F., Hu, Y., Richou, A.: On the uniqueness of solutions to quadratic BSDEs with convex generators and unbounded terminal conditions. Ann. Inst. Henri Poincaré Probab. Stat. 47(2), 559–574 (2011)
Dellacherie, C., Meyer, P.A.: Probabilités et Potentiel—Chapitres I à IV. Hermann, Paris (1975)
Dellacherie, C., Meyer, P.A.: Probabilités et Potentiel—Chapitres V à VIII. Hermann, Paris (1980)
Duffie, D., Singleton, K.: Credit Risk: Pricing, Measurement and Management. Princeton University Press, Princeton (2003)
El Karoui, N., Jeanblanc, M., Jiao, Y.: Modelling successive default events. Preprint (2010)
He, S., Wang, J., Yan, J.: Semimartingale Theory and Stochastic Calculus. Science Press, CRC Press, New York (1992)
Hu, Y., Imkeller, P., Müller, M.: Utility maximization in incomplete markets. Ann. Appl. Probab. 15, 1691–1712 (2005)
Jacod, J.: Grossissement initial, hypothèse (H′) et théorème de Girsanov. In: Séminaire de Calcul Stochastique 1982–1983. Lect. Notes Math., vol. 1118. Springer, Berlin (1985)
Jarrow, R.A., Yu, F.: Counterparty risk and the pricing of defaultable securities. J. Finance 56, 1765–1799 (2001)
Jeanblanc, M., Le Cam, Y.: Progressive enlargement of filtrations with initial times. Stoch. Process. Their Appl. 119(8), 2523–2543 (2009)
Jeulin, T.: Semi-martingales et Grossissement d'une Filtration. Lect. Notes in Maths, vol. 833. Springer, Berlin (1980)
Jeulin, T., Yor, M.: Grossissement de Filtration: Exemples et Applications. Lect. Notes in Maths, vol. 1118. Springer, Berlin (1985)
Jiao, Y., Pham, H.: Optimal investment with counterparty risk: a default-density modeling approach. Finance Stoch. 15(4), 725–753 (2011)
Kazamaki, N.: A sufficient condition for the uniform integrability of exponential martingales. Math. Rep., Toyama Univ. 2, 1–11 (1979)
Kobylanski, M.: Backward stochastic differential equations and partial differential equations with quadratic growth. Ann. Probab. 28, 558–602 (2000)
Lim, T., Quenez, M.C.: Utility maximization in incomplete market with default. Electron. J. Probab. 16, 1434–1464 (2011)
Morlais, M.A.: Utility maximization in a jump market model. Stoch. Stoch. Rep. 81, 1–27 (2009)
Morlais, M.A.: A new existence result for quadratic BSDEs with jumps with application to the utility maximization problem. Stoch. Process. Appl. 120(10), 1966–1995 (2010)
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, Berlin (2007)
Pardoux, E., Peng, S.: Adapted solution of a backward stochastic differential equation. Syst. Control Lett. 14, 55–61 (1990)
Pham, H.: Stochastic control under progressive enlargement of filtrations and applications to multiple defaults risk management. Stoch. Process. Appl. 120, 1795–1820 (2010)
Protter, P.: Stochastic Integration and Differential Equations, 2nd edn. Stochastic Modelling and Applied Probability, vol. 21. Springer, Berlin (2005)
Acknowledgements
The authors would like to thank Shiqi Song for useful remarks which helped to improve the article.
The research of the first author benefited from the support of the French ANR research grant LIQUIRISK.
The research of the second author benefited from the support of the “Chaire Risque de Crédit”, Fédération Bancaire Française.
Appendices
Appendix A: Proof of Lemma 2.1(ii)
We prove the decomposition for the progressively measurable processes X of the form
where J is \({\mathcal{P}}(\mathbb{G})\)-measurable and U is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable. To prove the decomposition (2.2), it suffices to prove it for the process J and the process V defined by
- Decomposition of the process J.
Since J is \({\mathcal{P}}(\mathbb{G})\)-measurable, we can write
for all t≥0, where J 0 is \(\mathcal{P}(\mathbb{F})\)-measurable, and J k is \(\mathcal{P}(\mathbb{F}) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable, for k=1,…,n. This leads to the following decomposition of J:
where
for k=1,…,n and (θ (k),e (k))∈Δ k ×E k. Since J k is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for all k=0,…,n, we find that \((\bar{J}^{k}_{t})_{t\in[0,s]}\) is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s])\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable for all s≥0.
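The displayed decomposition of J did not survive extraction. For orientation, it presumably takes the standard progressive-enlargement form below; this is a reconstruction using the notation \(\bar{J}^{k}\), τ (k), ζ (k) of the surrounding text, not the paper's own display:

```latex
J_t \;=\; \sum_{k=0}^{n-1} \mathbf{1}_{\{\tau_k \le t < \tau_{k+1}\}}\,
          \bar{J}^{k}_t\bigl(\tau_{(k)},\zeta_{(k)}\bigr)
      \;+\; \mathbf{1}_{\{\tau_n \le t\}}\,
          \bar{J}^{n}_t\bigl(\tau_{(n)},\zeta_{(n)}\bigr),
\qquad t \ge 0 .
```

Each stratum {τ k ≤t<τ k+1} thus carries an \(\mathbb{F}\)-indexed process evaluated at the jump times and marks already observed, which is exactly what the measurability statement above makes precise.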
- Decomposition of the process V.
Since U is \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable, we can write
for all t≥0, where U 0 is \(\mathcal{P}(\mathbb{F})\otimes {\mathcal{B}}(E)\)-measurable, and U k is \(\mathcal{P}(\mathbb{F}) \otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\otimes {\mathcal{B}}(E)\)-measurable, for k=1,…,n. This leads to the following decomposition of V:
where V k is defined by V 0=0 and
for k=1,…,n. We now check that, for all s≥0, \((V^{k}_{t}(.))_{t\in[0,s]}\) is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s])\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable. Since U j is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\varDelta}_{j})\otimes {\mathcal{B}}(E^{j})\)-measurable, we see that \((U^{j}_{t}(.))_{t\in[0,s]}\) is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s])\otimes {\mathcal{B}}({\varDelta}_{j})\otimes {\mathcal{B}}(E^{j})\)-measurable. Therefore, the corresponding integrated process is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s])\otimes {\mathcal{B}}({\varDelta}_{j})\otimes {\mathcal{B}}(E^{j})\)-measurable for j=0,…,n. From the definition of V k we see that \((V^{k}_{t}(.))_{t\in[0,s]}\) is \({\mathcal{F}}_{s}\otimes {\mathcal{B}}([0,s])\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable.
Appendix B: Proof of Proposition 2.1
We first give a lemma which is a generalization of a proposition in [12]. Throughout the sequel, we denote
for any \({\mathcal{F}}_{\infty}\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k})\)-measurable function G and any integers i and k such that 1≤i≤k≤n.
Lemma B.1
Fix t,s∈ℝ+ with t≤s, and let X be a positive \({\mathcal{F}}_{s} \otimes {\mathcal{B}}( {\varDelta}_{n}) \otimes {\mathcal{B}}(E^{n})\)-measurable function on Ω×Δ n ×E n. Then
Proof
Let H be a positive and \({\mathcal{G}}_{t}\)-measurable test random variable, which can be written
where H i is \({\mathcal{F}}_{t} \otimes {\mathcal{B}}({\varDelta}_{i}) \otimes {\mathcal{B}}(E^{i})\)-measurable for i=0,…,n. Using the joint density γ t (θ,e) of (τ,ζ), we have on the one hand
On the other hand, we have
□
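The displayed identity in Lemma B.1 was elided above. For orientation, in the simplest case n=1 (a single jump time τ with mark ζ), it presumably reduces, on {t<τ}, to the standard density-hypothesis formula; this is a reconstruction, with γ the joint conditional density used in the proof:

```latex
\mathbf{1}_{\{t<\tau\}}\,
\mathbb{E}\bigl[X(\tau,\zeta)\,\big|\,\mathcal{G}_t\bigr]
\;=\;
\mathbf{1}_{\{t<\tau\}}\,
\frac{\mathbb{E}\bigl[\int_t^{\infty}\!\int_E X(\theta,e)\,\gamma_s(\theta,e)\,de\,d\theta\,\big|\,\mathcal{F}_t\bigr]}
     {\int_t^{\infty}\!\int_E \gamma_t(\theta,e)\,de\,d\theta}\,.
```

Here γ s appears in the numerator because X is \({\mathcal{F}}_{s}\)-measurable, while the denominator is the conditional survival probability \(\mathbb{P}(\tau>t\,|\,\mathcal{F}_{t})\).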
We now prove Proposition 2.1. To this end, we prove that for any nonnegative \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(E)\)-measurable process U, any T>0 and any t∈[0,T], we have
where λ is defined by (2.3).
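Identity (B.1) itself was lost in extraction. Given the structure of the proof (decompose U, apply Lemma B.1 on the left, use the definition of the λ k on the right), it is presumably the compensator identity for the jump measure \(\mu(ds,de)=\sum_{k}\delta_{(\tau_{k},\zeta_{k})}(ds,de)\) (our notation, not the paper's display):

```latex
\mathbb{E}\Bigl[\int_t^T\!\!\int_E U_s(e)\,\mu(ds,de)\,\Big|\,\mathcal{G}_t\Bigr]
\;=\;
\mathbb{E}\Bigl[\int_t^T\!\!\int_E U_s(e)\,\lambda_s(de)\,ds\,\Big|\,\mathcal{G}_t\Bigr].
```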
We first study the left-hand side of (B.1). From Lemma 2.1 and Remark 2.1, we can write
where U k is a \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}({\varDelta}_{k})\otimes {\mathcal{B}}(E^{k+1})\)-measurable process for k=0,…,n. Moreover, since U is nonnegative, we can assume that U k, k=0,…,n, are nonnegative. Then, from Lemma B.1, we have
We now study the right-hand side of (B.1):
where the last equality comes from the definition of λ k. Hence, we get (B.1).
Appendix C: Measurability of Solutions to BSDEs Depending on a Parameter
C.1 Representation for Brownian Martingales Depending on a Parameter
We consider a Borel subset \({\mathcal{X}}\) of ℝp and a finite measure ρ on \({\mathcal{B}}({\mathcal{X}})\). Let \(\{\xi(x)~:~x\in {\mathcal{X}}\}\) be a family of random variables such that the map \(\xi:~{\varOmega}\times {\mathcal{X}}\rightarrow \mathbb{R}\) is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable and satisfies \(\int_{{\mathcal{X}}}\mathbb{E}|\xi(x)|^{2}\rho(dx)<\infty\). In the following result, we extend the representation of square-integrable random variables as stochastic integrals w.r.t. W to the family \(\{\xi(x)~:~x\in {\mathcal{X}}\}\). The proof follows the same lines as that of the classical Itô representation theorem, which can be found, e.g., in [26]. For the sake of completeness, we sketch the proof.
Theorem C.1
There exists a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map Z such that \(\int_{{\mathcal{X}}}\int_{0}^{T}\mathbb{E}|Z_{s}(x)|^{2}\,ds\,\rho(dx) < \infty\) and \(\xi(x)=\mathbb{E}[\xi(x)]+\int_{0}^{T}Z_{s}(x)\,dW_{s}\), ℙ-a.s., for ρ-a.e. \(x\in{\mathcal{X}}\). (C.1)
As for the standard representation theorem, we first need a lemma which provides a dense subset of \(L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}), \mathbb{P}\otimes\rho)\) spanned by simple exponential random variables.
Lemma C.1
Random variables of the form \(\exp (\int_{0}^{T}h_{t}(x)\,dW_{t}-{1\over 2}\int_{0}^{T}|h_{t}(x)|^{2}\,dt )\), (C.2)
where h is a bounded \({\mathcal{B}}([0,T])\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map, span a dense subset of \(L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}), \mathbb{P}\otimes\rho)\).
Sketch of the Proof
Let \({\varLambda}\in L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}), \mathbb{P}\otimes\rho)\) be orthogonal to all functions of the form (C.2). Then, in particular, we have
for all α 1,…,α n ∈ℝ and all t 1,…,t n ∈[0,T]. Since G vanishes identically on ℝn and is analytic, it also vanishes identically on ℂn. We then have, for any \({\mathcal{B}}({\mathcal{X}})\otimes {\mathcal{B}}(\mathbb{R}^{p})\)-measurable function ϕ such that ϕ(x,.)∈C ∞(ℝn) has compact support for all \(x\in {\mathcal{X}}\),
where \(\hat{\phi}(x,.)\) is the Fourier transform of ϕ(x,.). Hence, Λ is equal to zero since it is orthogonal to a dense subset of \(L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}))\). □
Sketch of the Proof of Theorem C.1
First suppose that ξ has the following form:
with h a bounded \({\mathcal{B}}([0,T])\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map. Then, applying Itô's formula to the process \(\exp(\int_{0}^{.}h_{t}(x)\,dW_{t}-{1\over 2}\int_{0}^{.}|h_{t}(x)|^{2}\,dt)\), we find that ξ satisfies (C.1), where the process Z is given by \(Z_{s}(x)=h_{s}(x)\exp(\int_{0}^{s}h_{u}(x)\,dW_{u}-{1\over 2}\int_{0}^{s}|h_{u}(x)|^{2}\,du)\).
Now for any \(\xi\in L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}),\mathbb{P}\otimes\rho)\), there exists a sequence (ξ n) n∈ℕ such that each ξ n satisfies
and (ξ n) n∈ℕ converges to ξ in \(L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}),\mathbb{P}\otimes\rho)\). Then, using the Itô isometry, we see that the sequence (Z n) n∈ℕ is Cauchy and hence converges in \(L^{2}({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}}),\mathbb{P}\otimes dt \otimes\rho)\) to some Z. Using the Itô isometry again, we find that (ξ n) n∈ℕ converges to \(\mathbb{E}[\xi(x)]+\int_{0}^{T}Z_{s}(x)\,dW_{s}\) in \(L^{2}({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}}),\mathbb{P}\otimes\rho)\). Identifying the limits, we get the result. □
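A quick numerical sanity check of the building block used in this proof (our own illustration, not part of the paper): for a bounded deterministic integrand h, the Doléans exponential \(\exp(\int_0^T h_t\,dW_t-\frac12\int_0^T|h_t|^2\,dt)\) has mean 1, which is why the representation starts from \(\mathbb{E}[\xi(x)]\).

```python
import numpy as np

# Monte Carlo check that xi = exp( int_0^T h dW - 0.5 int_0^T h^2 dt )
# has E[xi] = 1, for a bounded deterministic step integrand h.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 100, 100_000
dt = T / n_steps
t = np.linspace(0.0, T, n_steps, endpoint=False)

h = 0.5 + 0.3 * np.sin(2 * np.pi * t)        # bounded deterministic integrand
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)

stoch_int = dW @ h                           # int_0^T h_t dW_t, one value per path
xi = np.exp(stoch_int - 0.5 * np.sum(h**2) * dt)

print(abs(xi.mean() - 1.0))                  # small, of the order of the MC error
```

The sample mean deviates from 1 only by the Monte Carlo error (about \(\mathrm{std}(\xi)/\sqrt{n_{\text{paths}}}\)).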
Corollary C.1
Let M be a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map such that (M t (x))0≤t≤T is a martingale for all \(x\in {\mathcal{X}}\) and \(\int_{{\mathcal{X}}}\mathbb{E}|M_{T}(x)|^{2}\rho(dx)<\infty\). Then, there exists a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map Z such that \(\int_{0}^{T} \int_{{\mathcal{X}}}\mathbb{E}|Z_{s}(x)|^{2} \rho(dx)\,ds\) < ∞ and
The proof is a direct consequence of Theorem C.1, as in [26], so we omit it.
C.2 BSDEs Depending on a Parameter
We now study the measurability of solutions to Brownian BSDEs whose data depend on the parameter \(x\in {\mathcal{X}}\). We consider
- a family \(\{\xi(x):x\in {\mathcal{X}}\}\) of random variables such that the map \(\xi:{\varOmega}\times {\mathcal{X}}\rightarrow \mathbb{R}\) is \({\mathcal{F}}_{T}\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable and satisfies \(\int_{{\mathcal{X}}}\mathbb{E}|\xi(x)|^{2}\rho(dx)<\infty\),
- a family \(\{f(., x):x\in {\mathcal{X}}\}\) of random maps such that the map \(f:{\varOmega}\times[0,T]\times \mathbb{R}\times \mathbb{R}^{d}\times {\mathcal{X}}\rightarrow \mathbb{R}\) is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}(\mathbb{R})\otimes {\mathcal{B}}(\mathbb{R}^{d})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable and satisfies \(\int_{0}^{T} \int_{{\mathcal{X}}}\mathbb{E}|f(s,0,0,x)|^{2} \rho(dx)\,ds<\infty\).
We then consider the BSDEs depending on the parameter \(x\in {\mathcal{X}}\): \(Y_{t}(x)=\xi(x)+\int_{t}^{T}f(s,Y_{s}(x),Z_{s}(x),x)\,ds-\int_{t}^{T}Z_{s}(x)\,dW_{s}\), 0≤t≤T. (C.3)
Lemma C.2
Assume that the generator f does not depend on (y,z) i.e. f(t,y,z,x)=f(t,x). Then, BSDE (C.3) admits a solution (Y,Z) such that Y and Z are \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable.
Proof
Consider the family of martingales \(\{M(x):x\in {\mathcal{X}}\}\), where M is defined by
Then, from Corollary C.1, there exists a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable map Z such that \(\int_{0}^{T}\int_{{\mathcal{X}}}\mathbb{E}|Z_{s}(x)|^{2} \rho(dx)\,ds\) < ∞ and
We then easily check that the process Y defined by
is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable and that (Y,Z) satisfies (C.3). □
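The display defining Y was elided above. Assuming M is the natural martingale \(M_{t}(x)=\mathbb{E}[\xi(x)+\int_{0}^{T}f(s,x)\,ds\,|\,\mathcal{F}_{t}]\) (our reading of the elided definition), Y is presumably given by

```latex
Y_t(x) \;=\; M_t(x)-\int_0^t f(s,x)\,ds
       \;=\; \mathbb{E}\Bigl[\xi(x)+\int_t^T f(s,x)\,ds\,\Big|\,\mathcal{F}_t\Bigr],
\qquad 0\le t\le T,
```

so that \(Y_{T}(x)=\xi(x)\) and \(dY_{t}(x)=Z_{t}(x)\,dW_{t}-f(t,x)\,dt\), which is (C.3) with a generator independent of (y,z).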
We now consider the case where the generator f is Lipschitz continuous: there exists a constant L such that \(|f(t,y,z,x)-f(t,y',z',x)|\le L(|y-y'|+|z-z'|)\) (C.4)
for all (t,y,y′,z,z′)∈[0,T]×ℝ2×(ℝd)2 and all \(x\in{\mathcal{X}}\).
Proposition C.1
Suppose that f satisfies (C.4). Then, BSDE (C.3) admits a \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable solution (Y,Z) such that \(\mathbb{E}\int_{0}^{T}\int_{{\mathcal{X}}}(|Y_{s}(x)|^{2}+|Z_{s}(x)|^{2}) \rho(dx)\,ds<\infty\).
Proof
Consider the sequence (Y n,Z n) n∈ℕ defined by (Y 0,Z 0)=(0,0) and, for n≥1, by the Picard iteration \(Y^{n}_{t}(x)=\xi(x)+\int_{t}^{T}f(s,Y^{n-1}_{s}(x),Z^{n-1}_{s}(x),x)\,ds-\int_{t}^{T}Z^{n}_{s}(x)\,dW_{s}\).
From Lemma C.2, we see that (Y n,Z n) is \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable for all n∈ℕ. Moreover, since f satisfies (C.4), the sequence (Y n,Z n) n∈ℕ converges (up to a subsequence) a.e. to a solution (Y,Z) of (C.3) (see [27]). Hence, the solution (Y,Z) is also \({\mathcal{P}}(\mathbb{F})\otimes {\mathcal{B}}({\mathcal{X}})\)-measurable. □
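The Picard scheme used in this proof can be illustrated on a toy example where everything is explicit (our own illustration, not the paper's): with the deterministic Lipschitz generator f(t,y,z)=−ry and terminal value ξ=1, the BSDE has no Brownian part (Z=0) and the exact solution is \(Y_t=e^{-r(T-t)}\).

```python
import math

# Picard iteration for the toy BSDE Y_t = 1 + int_t^T (-r * Y_s) ds,
# whose exact solution is Y_t = exp(-r * (T - t)) with Z = 0.
r, T, n_steps = 0.3, 1.0, 2000
dt = T / n_steps

def picard_step(y_prev):
    """One Picard iteration: Y^n_t = xi + int_t^T f(s, Y^{n-1}_s) ds,
    discretized backward in time with a right-endpoint Riemann sum."""
    y = [0.0] * (n_steps + 1)
    y[n_steps] = 1.0                       # terminal condition xi = 1
    for i in range(n_steps - 1, -1, -1):
        y[i] = y[i + 1] + (-r * y_prev[i + 1]) * dt
    return y

y = [0.0] * (n_steps + 1)                  # start from (Y^0, Z^0) = (0, 0)
for _ in range(30):                        # the Lipschitz contraction converges
    y = picard_step(y)

err = max(abs(y[i] - math.exp(-r * (T - i * dt))) for i in range(n_steps + 1))
print(err)                                 # discretization error of order dt
```

Each iterate has a generator that no longer depends on the unknown, which is exactly how Lemma C.2 is invoked to propagate the joint measurability in x through the scheme.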
Appendix D: A Regularity Result for the Decomposition
Proposition D.1
Let p≥1 and \((f_{t}(x))_{(t,x)\in[0,T]\times \mathbb{R}^{p}}\) be a \({\mathcal{P}}(\mathbb{G})\otimes {\mathcal{B}}(\mathbb{R}^{p})\)-measurable map. Suppose that f t (.) is locally uniformly continuous (uniformly in ω∈Ω). Then \(f^{k}_{t}(.,\theta_{(k)},e_{(k)})\) is locally uniformly continuous (uniformly in ω∈Ω) for θ k ≤t and k=0,…,n.
Proof
For the sake of clarity, we prove the result without marks; the argument easily extends to the case with marks. Fix k∈{0,…,n} and, for R>0, denote by \(mc_{R}^{f}\) the modulus of continuity of f on \(B_{\mathbb{R}^{p}}(0,R)\). Then, for any \(\tilde{\theta}_{k}>\cdots>\tilde{\theta}_{1}>0\) and h 1,…,h n >0, we have from the definition of \(mc_{R}^{f}\) and (HD)
for \(x,x'\in B_{\mathbb{R}^{p}}(0,R)\) s.t. |x−x′|≤ε. Using the decomposition of f we have
Sending each h ℓ to zero we get
□
Kharroubi, I., Lim, T. Progressive Enlargement of Filtrations and Backward Stochastic Differential Equations with Jumps. J Theor Probab 27, 683–724 (2014). https://doi.org/10.1007/s10959-012-0428-1
Keywords
- Backward SDE
- Quadratic BSDE
- Multiple random marked times
- Progressive enlargement of filtrations
- Decomposition in the reference filtration
- Exponential utility