
1 Introduction

This article is devoted to some important and exceptional properties of f-divergences. As is well known, the notion of f-divergence was introduced by Csiszár [3] to measure the difference between two absolutely continuous probability measures by means of the expectation of some convex function f of their Radon-Nikodym density. More precisely, let f be a convex function and \(Z = \frac{dQ} {dP}\) be the Radon-Nikodym density of a measure Q with respect to a measure P, Q ≪ P. Supposing that f(Z) is integrable with respect to P, the f-divergence of Q with respect to P is defined as

$$f(Q\vert \!\vert P) = E_{P}[f(Z)].$$

One can remark immediately that this definition covers such important cases as the variation distance when \(f(x)\,=\,\vert x - 1\vert \), the Hellinger distance when \(f(x)\,=\,{(\sqrt{x} - 1)}^{2}\), and the Kullback-Leibler information when f(x) = xln(x). Being an important notion, f-divergence has been studied in a number of books and articles (see for instance [14, 19]).
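As a quick numerical illustration (our own sketch, not taken from the article), these divergences can be computed directly for two explicit discrete measures; the probability vectors p and q below are arbitrary choices made only for the example.

import numpy as np

# Hypothetical example: two equivalent measures P and Q on a three-point space,
# given by probability vectors p and q (arbitrary illustrative values).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
z = q / p                                  # Radon-Nikodym density Z = dQ/dP

def f_divergence(f, p, z):
    """E_P[f(Z)] for a convex function f and the density Z = dQ/dP."""
    return np.sum(p * f(z))

variation = f_divergence(lambda x: np.abs(x - 1.0), p, z)          # f(x) = |x - 1|
hellinger = f_divergence(lambda x: (np.sqrt(x) - 1.0) ** 2, p, z)  # f(x) = (sqrt(x) - 1)^2
kullback  = f_divergence(lambda x: x * np.log(x), p, z)            # f(x) = x ln x

print(variation, hellinger, kullback)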

In financial mathematics it is of particular interest to consider measures Q  ∗  which minimise the f-divergence over the set of all equivalent martingale measures. This question is related to the introduction and study of so-called incomplete models, such as exponential Levy models (see [2, 6, 7, 20, 22]). In such models contingent claims cannot, in general, be replicated by admissible strategies. Therefore, it is important to determine strategies which are, in a certain sense, optimal. Various criteria are used, some of which are linked to risk minimisation (see [9, 25, 26]) while others consist in maximising certain utility functions (see [1, 11, 16]). It has been shown (see [11, 18]) that such questions are strongly linked, via the Fenchel-Legendre transform, to dual optimisation problems, namely to f-divergence minimisation on the set of equivalent martingale measures, i.e. the measures Q which are equivalent to the initial physical measure P and under which the stock price is a martingale.

The problems mentioned above have been well studied in the case of relative entropy, when f(x) = xln(x) (cf. [10, 21]), for the power functions \(f(x) = {x}^{q}\), q > 1 or q < 0 (cf. [15]) and \(f(x)\,=\, - {x}^{q}\), 0 < q < 1 (cf. [4, 5]), and for the logarithmic divergence \(f(x)\,=\, -\ln (x)\) (cf. [17]); these are called common f-divergences. Note that all the mentioned functions satisfy \({f}^{{\prime\prime}}(x)\,=\,a{x}^{\gamma }\) for some a > 0 and \(\gamma \,\in \,\mathbb{R}\). The converse is also true: any function which satisfies \({f}^{{\prime\prime}}(x)\,=\,a{x}^{\gamma }\) is, up to a linear term, a common f-divergence. It has in particular been noted that for these functions the f-divergence minimal equivalent martingale measure, when it exists, preserves the Levy property, that is to say, a process which is a Levy process under the initial measure P remains a Levy process under the f-divergence minimal equivalent martingale measure Q  ∗ .
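For concreteness, a direct differentiation of the functions just listed (our own remark) shows how they all fall into this family:

$$\begin{array}{rcl} f(x) = x\ln (x) & : & {f}^{{\prime\prime}}(x) = {x}^{-1},\quad a = 1,\ \gamma = -1, \\ f(x) = {x}^{q},\ q > 1\text{ or }q < 0 & : & {f}^{{\prime\prime}}(x) = q(q - 1){x}^{q-2},\quad a = q(q - 1),\ \gamma = q - 2, \\ f(x) = -{x}^{q},\ 0 < q < 1 & : & {f}^{{\prime\prime}}(x) = q(1 - q){x}^{q-2},\quad a = q(1 - q),\ \gamma = q - 2, \\ f(x) = -\ln (x) & : & {f}^{{\prime\prime}}(x) = {x}^{-2},\quad a = 1,\ \gamma = -2.\end{array}$$

In each case a > 0, so the common f-divergences indeed satisfy the relation above.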

The aim of this paper is to study the questions of preservation of the Levy property and of the associated properties, such as the scaling property and the time invariance property, for f-divergence minimal martingale measures, when P is the law of a d-dimensional Levy process X and Q  ∗  belongs to the set of so-called equivalent martingale measures for the exponential Levy model, i.e. measures under which the exponential of X is a martingale. More precisely, let us fix a convex function f defined on \({\mathbb{R}}^{+,{_\ast}}\) and denote by \(\mathcal{M}\) the set of equivalent martingale measures associated with the exponential Levy model related to X. We recall that an equivalent martingale measure Q  ∗  is f-divergence minimal if f(Z  ∗ ) is integrable with respect to P, where Z  ∗  is the Radon-Nikodym density of Q  ∗  with respect to P, and

$$f({Q}^{{_\ast}}\vert \!\vert P) =\min _{ Q\in \mathcal{M}}f(Q\vert \!\vert P).$$

We say that Q  ∗  preserves the Levy property if X remains a Levy process under Q  ∗ . The measure Q  ∗  is said to be scale invariant if for all \(x \in {\mathbb{R}}^{+}\), \(E_{P}\vert f(x{Z}^{{_\ast}})\vert < \infty \) and

$$f(x{Q}^{{_\ast}}\vert \!\vert P) =\min _{ Q\in \mathcal{M}}f(xQ\vert \!\vert P).$$

We also recall that an equivalent martingale measure Q  ∗  is said to be time invariant if for all T > 0, denoting by Q T ,P T the restrictions of the measures Q, P to the time interval [0, T], we have \(E_{P}\vert f(Z_{T}^{{_\ast}})\vert < \infty \) and

$$f(Q_{T}^{{_\ast}}\vert \!\vert P_{ T}) =\min _{Q\in \mathcal{M}}f(Q_{T}\vert \!\vert P_{T})$$

In this paper we study the shape of the functions f, within the class of strictly convex three times continuously differentiable functions used as f-divergences, for which the f-divergence minimal equivalent martingale measure preserves the Levy property. More precisely, we consider equivalent martingale measures Q belonging to the class \({\mathcal{K}}^{{_\ast}}\) such that for all compact sets K of \({\mathbb{R}}^{+,{_\ast}}\)

$$E_{P}\vert f(\frac{dQ_{T}} {dP_{T}})\vert < +\infty ,\,\,\,\,E_{Q}\vert {f}^{{^\prime}}(\frac{dQ_{T}} {dP_{T}})\vert < +\infty ,\,\,\,\,\sup _{t\leq T}\sup _{\lambda \in K}E_{Q}[{f}^{{\prime\prime}}(\lambda \frac{dQ_{t}} {dP_{t}})\frac{dQ_{t}} {dP_{t}}] < +\infty.$$

We denote by Z T  ∗  the Radon-Nikodym density of Q T  ∗  with respect to P T and by β ∗  and Y  ∗  the corresponding Girsanov parameters of an f-divergence minimal measure Q  ∗  on [0, T] which preserves the Levy property and belongs to \({\mathcal{K}}^{{_\ast}}\).

To make the shape of f precise, we obtain fundamental equations which f necessarily satisfies. Namely, in the case \({ \circ \atop supp} (\nu )\neq \varnothing \), for a.e. \(x \in supp(Z_{T}^{{_\ast}})\) and a.e. \(y \in supp(\nu )\), we prove that

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = \Phi (x)\sum \limits _{i=1}^{d}\alpha _{ i}({e}^{y_{i} } - 1)$$
(1)

where Φ is a continuously differentiable function defined on the set \({ \circ \atop \mbox{ supp}} (Z_{T}^{{_\ast}})\) and \(y {=\, }^{\top }(y_{1},y_{2},\cdots y_{d})\), \(\alpha {=\, }^{\top }(\alpha _{1},\alpha _{2},\cdots \alpha _{d})\) are vectors of \({\mathbb{R}}^{d}\). Furthermore, if c≠0, for a.e. \(x \in \mbox{ supp}(Z_{T}^{{_\ast}})\) and a.e. \(y \in supp(\nu )\), we get that

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = x{f}^{{\prime\prime}}(x)\sum \limits _{i=1}^{d}\beta _{ i}^{{_\ast}}({e}^{y_{i} } - 1) + \sum \limits _{j=1}^{d}V _{ j}({e}^{y_{j} } - 1)$$
(2)

where \({\beta }^{{_\ast}} {= }^{\top }\!(\beta _{1}^{{_\ast}},\cdots \,,\beta _{d}^{{_\ast}})\) is the first Girsanov parameter and \(V {=\, }^{\top }(V _{1},\cdots \,,V _{d})\) is a vector which belongs to the kernel of the matrix c, i.e. cV = 0.

The equations mentioned above permit us to make the form of f precise. Namely, we prove that if the set \(\{\ln {Y }^{{_\ast}}(y),\,y \in supp(\nu )\}\) has non-empty interior and contains zero, then there exist a > 0 and \(\gamma \in \mathbb{R}\) such that for all \(x \in supp(Z_{t}^{{_\ast}})\),

$${f}^{{\prime\prime}}(x) = a{x}^{\gamma }.$$
(3)

Taking into account the known results, we conclude that in the considered case the relation (3) is a necessary and sufficient condition for the f-divergence minimal martingale measure to preserve the Levy property. In addition, as we will see, such an f-divergence minimal measure is also scale and time invariant.

In the case when \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\) and the support of ν is nowhere dense, but there exists at least one y ∈ supp(ν) such that ln(Y  ∗ (y))≠0, we prove that there exist \(n \in \mathbb{N}\), real constants \(b_{i},\tilde{b}_{i},1 \leq i \leq n,\) and \(\gamma \in \mathbb{R}\), a > 0 such that

$${f}^{{\prime\prime}}(x) = a{x}^{\gamma } + {x}^{\gamma } \sum \limits _{i=1}^{n}b_{ i}{(\ln (x))}^{i} + \frac{1} {x}\sum \limits _{i=1}^{n}\tilde{b}_{ i}{(\ln (x))}^{i-1}$$

The case when \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}} = 0\) and supp(ν) is nowhere dense is not considered in this paper and, as far as we know, remains an open question.

We underline once more the exceptional properties of the class of functions such that:

$${f}^{{\prime\prime}}(x) = a{x}^{\gamma }$$

which are called common f-divergences. This class of functions is exceptional in the sense that the corresponding minimal measures also satisfy the scale and time invariance properties for all Levy processes. As is well known, Q  ∗  does not always exist. For some functions, in particular f(x) = xln(x), or for some power functions, necessary and sufficient conditions for the existence of a minimal measure have been given (cf. [13, 15]). We will give a unified version of these results for all functions which satisfy \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\), a > 0, \(\gamma \in \mathbb{R}\). We also give an example showing that the preservation of the Levy property can take place not only for functions verifying \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\).

The paper is organized in the following way: in Sect. 2 we recall some known facts about exponential Levy models and f-divergence minimal equivalent martingale measures. In Sect. 3 we give some known facts, useful for our purposes, about f-divergence minimal martingale measures. In Sect. 4 we obtain fundamental equations for the Levy preservation property (Theorem 3). In Sect. 5 we give the result about the shape of f for which the f-divergence minimal martingale measure has the Levy preservation property (Theorem 5). In Sect. 6 we study the common f-divergences, i.e. those with f verifying \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\), a > 0. Their properties are given in Theorem 6.

2 Some Facts About Exponential Levy Models

Let us describe our model in more details. We assume the financial market consists of a bank account B whose value at time t is

$$B_{t} = B_{0}{e}^{rt},$$

where r ≥ 0 is the interest rate which we assume to be constant. We also assume that there are d ≥ 1 risky assets whose prices are described by a d-dimensional stochastic process \(S = (S_{t})_{t\geq 0}\),

$$S_{t} {=\, }^{\top }\!(S_{ 0}^{(1)}{e}^{X_{t}^{(1)} },\cdots \,,S_{0}^{(d)}{e}^{X_{t}^{(d)} })$$

where \(X = (X_{t})_{t\geq 0}\) is a d-dimensional Levy process, \(X_{t} {=\, }^{\top }\!(X_{t}^{(1)},\cdots \,,X_{t}^{(d)})\) and \(S_{0} {=\, }^{\top }\!(S_{0}^{(1)},\cdots \,,S_{0}^{(d)})\). We recall that Levy processes form the class of processes with stationary and independent increments and that the characteristic function of the law of X t is given by the Levy-Khintchine formula: for all t ≥ 0, for all \(u \in {\mathbb{R}}^{d}\),

$$E[{e}^{i<u,X_{t}>}] = {e}^{t\psi (u)}$$

where

$$\psi (u) = i < u,b > -{\frac{1} {2}\,}^{\top }\!ucu + \int \limits _{{\mathbb{R}}^{d}}[{e}^{i<u,y>} - 1 - i < u,h(y) >]\nu (dy)$$

where \(b \in {\mathbb{R}}^{d}\), c is a symmetric non-negative definite d ×d matrix, h is a truncation function and ν is a Levy measure, i.e. a positive measure on \({\mathbb{R}}^{d} \setminus \{ 0\}\) which satisfies

$$\int \limits _{{\mathbb{R}}^{d}}(1 \wedge \vert y{\vert }^{2})\nu (dy) < +\infty.$$

The triplet (b, c, ν) entirely determines the law of the Levy process X, and is called the characteristic triplet of X. From now on, we will assume that the interest rate r = 0 as this will simplify calculations and the more general case can be obtained by replacing the drift b by b − r. We also assume for simplicity that S 0 = 1.
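As an illustration of how the characteristic triplet enters numerically (a sketch of ours, not part of the model description), the following Python fragment evaluates ψ(u) for a one-dimensional jump-diffusion with Gaussian jumps; the numerical values of (b, c, ν) and the truncation function h(y) = y 1_{|y|≤1} are assumptions of the example.

import numpy as np
from scipy.integrate import quad

# Illustrative one-dimensional characteristic triplet (arbitrary values):
b, c = 0.05, 0.04                      # drift and Gaussian variance
lam, m, delta = 1.0, -0.1, 0.2         # jump intensity and Gaussian jump law N(m, delta^2)

def levy_density(y):
    """Levy measure nu(dy) = lam * N(m, delta^2) density."""
    return lam * np.exp(-(y - m) ** 2 / (2 * delta ** 2)) / (delta * np.sqrt(2 * np.pi))

def h(y):
    """Standard truncation function h(y) = y 1_{|y| <= 1}."""
    return y * (np.abs(y) <= 1.0)

def psi(u):
    """Levy-Khintchine exponent psi(u) for the triplet (b, c, nu)."""
    integrand_re = lambda y: (np.cos(u * y) - 1.0) * levy_density(y)
    integrand_im = lambda y: (np.sin(u * y) - u * h(y)) * levy_density(y)
    re, _ = quad(integrand_re, -np.inf, np.inf)
    im, _ = quad(integrand_im, -np.inf, np.inf)
    return 1j * u * b - 0.5 * c * u ** 2 + re + 1j * im

# E[exp(i u X_t)] = exp(t * psi(u)); for example at u = 1, t = 1:
print(np.exp(1.0 * psi(1.0)))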

We will denote by \(\mathcal{M}\) the set of all locally equivalent martingale measures:

$$\mathcal{M} =\{ Q{ loc \atop \sim } P,\,S\text{ is a martingale under }Q\}.$$

We will assume that this set is non-empty, which is equivalent to assuming the existence of \(Q{ loc \atop \sim } P\) such that the drift of S under Q is equal to zero. We consider our model on a finite time interval [0, T], T > 0, and for this reason the distinction between locally equivalent martingale measures and equivalent martingale measures does not need to be made. We recall that the density Z of any measure equivalent to P can be written in the form \(Z = \mathcal{E}(M)\) where \(\mathcal{E}\) denotes the Doleans-Dade exponential and \(M = (M_{t})_{t\geq 0}\) is a local martingale. It follows from the Girsanov theorem that there exist predictable functions \(\beta {=\, }^{\top }\!({\beta }^{(1)},\cdots {\beta }^{(d)})\) and Y verifying the integrability conditions: for t ≥ 0 (P-a.s.)

$$\begin{array}{rcl} & & \int \limits _{0}^{t}{\,}^{\top }\beta _{ s}c\beta _{s}ds < \infty , \\ & & \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}\vert \,h(y)\,(Y _{s}(y) - 1)\,\vert {\nu }^{X,P}(ds,dy) < \infty , \\ \end{array}$$

and such that

$$M_{t} = \sum \limits _{i=1}^{d} \int \limits _{0}^{t}\beta _{ s}^{(i)}dX_{ s}^{c,(i)} + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}(Y _{s}(y) - 1)({\mu }^{X} - {\nu }^{X,P})(ds,dy)$$
(4)

where μX is the jump measure of the process X and νX, P is its compensator with respect to P and the natural filtration \(\mathbb{F}\), νX, P(ds, dy) = ds ν(dy) (for more details see [14]). We will refer to (β, Y ) as the Girsanov parameters of the change of measure from P into Q. It is known from a result of Grigelionis [12] that a semi-martingale is a process with independent increments under Q if and only if its semi-martingale characteristics are deterministic, i.e. the Girsanov parameters do not depend on ω: β depends only on time t and Y depends only on time and jump size (t, y). Since a Levy process is a homogeneous process, this implies that X remains a Levy process under Q if and only if there exist \(\beta \in {\mathbb{R}}^{d}\) and a positive measurable function Y such that for all t ≤ T and all ω, \(\beta _{t}(\omega ) = \beta \) and \(Y _{t}(\omega ,y) = Y (y)\).

We recall that if the Levy property is preserved, then S is a martingale under Q if and only if

$$b + \frac{1} {2}diag(c) + c\beta + \int \limits _{{\mathbb{R}}^{d}}[({e}^{y} - 1)Y (y) - h(y)]\nu (dy) = 0$$
(5)

where e y is a vector with components \({e}^{y_{i}},1 \leq i \leq d,\) and \(y {=\, }^{\top }\!(y_{1},\cdots \,,y_{d})\). This again follows from the Girsanov theorem and reflects the fact that under Q the drift of S is equal to zero.
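To make (5) concrete, here is a small numerical sketch of ours (not taken from the paper): in dimension d = 1, with the same illustrative jump-diffusion triplet as above and the Esscher-type ansatz β = θ, Y(y) = e^{θy}, condition (5) becomes a scalar equation in θ which can be solved by a root search; all parameter values and the bracketing interval are assumptions of the example.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Same illustrative one-dimensional triplet as above (arbitrary values).
b, c = 0.05, 0.04
lam, m, delta = 1.0, -0.1, 0.2

def levy_density(y):
    return lam * np.exp(-(y - m) ** 2 / (2 * delta ** 2)) / (delta * np.sqrt(2 * np.pi))

def h(y):
    return y * (np.abs(y) <= 1.0)

def martingale_condition(theta):
    """Left-hand side of (5) for beta = theta, Y(y) = exp(theta * y), d = 1."""
    integrand = lambda y: ((np.exp(y) - 1.0) * np.exp(theta * y) - h(y)) * levy_density(y)
    integral, _ = quad(integrand, -np.inf, np.inf)
    return b + 0.5 * c + c * theta + integral

# Solve (5) for the Esscher parameter; the bracket [-10, 2] is an assumption of the example.
theta_star = brentq(martingale_condition, -10.0, 2.0)
print(theta_star, martingale_condition(theta_star))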

3 Properties of f-Divergence Minimal Martingale Measures

Here we consider a fixed strictly convex function f, continuously differentiable on \({\mathbb{R}}^{+,{_\ast}}\), and a time interval [0, T]. We recall in this section a few known and useful results about f-divergence minimisation on the set of equivalent martingale measures. Let \((\Omega ,\mathcal{F}, \mathbb{F},P)\) be a filtered probability space with the natural filtration \(\mathbb{F}\,=\,(\mathcal{F}_{t})_{t\geq 0}\) satisfying the usual conditions and let \(\mathcal{M}\) be the set of equivalent martingale measures. We denote by Q t , P t the restrictions of the measures Q, P on \(\mathcal{F}_{t}\). We introduce the Radon-Nikodym density process \(Z = (Z_{t})_{t\geq 0}\) related to an equivalent martingale measure Q, where for t ≥ 0

$$Z_{t} = \frac{dQ_{t}} {dP_{t}}.$$

We denote by Z  ∗  the Radon-Nikodym density process related to the f-divergence minimal equivalent martingale measure Q  ∗ .

Definition 1.

An equivalent martingale measure Q  ∗  is said to be f-divergence minimal on the time interval [0, T] if \(E_{P}\vert f(Z_{T}^{{_\ast}})\vert < \infty \) and

$$E_{P}[f(Z_{T}^{{_\ast}})] =\min _{ Q\in \mathcal{M}}E_{P}[f(Z_{T})]$$

where \(\mathcal{M}\) is a class of locally equivalent martingale measures.

Then we introduce the subset of equivalent martingale measures

$$\mathcal{K} =\{ Q \in \mathcal{M}\,\vert \,\,E_{P}\vert f(Z_{T})\vert < +\infty \text{ and }E_{Q}[\vert {f}^{{^\prime}}(Z_{ T})\vert ] < +\infty.\}$$
(6)

We will concentrate on the case when the minimal measure, if it exists, belongs to \(\mathcal{K}\). Note that for a certain number of functions this is necessarily the case.

Lemma 1 (cf. [19], Lemma 8.7). 

Let f be a convex function, continuously differentiable on \({\mathbb{R}}^{+,{_\ast}}\). Assume that for c > 1 there exist positive constants \(c_{0},c_{1},c_{2},c_{3}\) such that for u > c 0 ,

$$f(cu) \leq c_{1}f(u) + c_{2}u + c_{3}$$
(7)

Then a measure \(Q \in \mathcal{M}\) which is f-divergence minimal necessarily belongs to \(\mathcal{K}\) .
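For example, for the relative entropy f(x) = xln(x) condition (7) holds for every c > 1, since

$$f(cu) = cu\ln (cu) = c\,u\ln (u) + (c\ln c)\,u = c\,f(u) + (c\ln c)\,u,$$

so that one can take \(c_{1} = c\), \(c_{2} = c\ln c\) and \(c_{3} = 0\); hence for this function an f-divergence minimal measure, if it exists, automatically belongs to \(\mathcal{K}\).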

We now recall the following necessary and sufficient condition for a martingale measure to be minimal.

Theorem 1 (cf. [11], Theorem 2.2). 

Consider \({Q}^{{_\ast}}\in \mathcal{K}\) . Then, Q  ∗  is minimal if and only if for all \(Q \in \mathcal{K}\) ,

$$E_{{Q}^{{_\ast}}}[{f}^{{^\prime}}(Z_{ T}^{{_\ast}})] \leq E_{ Q}[{f}^{{^\prime}}(Z_{ T}^{{_\ast}})].$$

This result is in fact true in the much wider context of semi-martingale modelling. We will mainly use it here to check that a candidate is indeed a minimal measure. We will also use extensively another result from [11] in order to obtain conditions that must be satisfied by minimal measures.

Theorem 2 (cf. [11], Theorem 3.1). 

Assume \({Q}^{{_\ast}}\in \mathcal{K}\) is an f-divergence minimal martingale measure. Then there exist \(x_{0} \in \mathbb{R}\) and a predictable d-dimensional process ϕ such that

$${f}^{{^\prime}}(\frac{dQ_{T}^{{_\ast}}} {dP_{T}} ) = x_{0} + \sum \limits _{i=1}^{d} \int \limits _{0}^{T}\phi _{ t}^{(i)}\,dS_{ t}^{(i)}$$

and such that \(\sum \limits _{i=1}^{d} \int \limits _{0}^{\cdot }\phi _{t}^{(i)}\,dS_{t}^{(i)}\) defines a martingale under the measure Q  ∗ .

4 A Fundamental Equation for f-Divergence Minimal Levy Preserving Martingale Measures

Our main aim in this section is to obtain an equation satisfied by the Radon-Nikodym density of f-divergence minimal equivalent martingale measures. This result will enable us both to obtain information about the Girsanov parameters of f-divergence minimal equivalent martingale measures and to determine conditions which must be satisfied by the function f in order for an f-divergence minimal equivalent martingale measure to exist. Let us introduce the class \({\mathcal{K}}^{{_\ast}}\) of locally equivalent martingale measures verifying: for all compact sets \(K\) of \({\mathbb{R}}^{+,{_\ast}}\)

$$E_{P}\vert f(Z_{T})\vert < +\infty ,\,\,\,\,E_{Q}\vert {f}^{{^\prime}}(Z_{ T})\vert < +\infty ,\,\,\,\,\sup _{t\leq T}\sup _{\lambda \in K}E_{Q}[{f}^{{\prime\prime}}(\lambda Z_{ t})Z_{ t}] < +\infty.$$
(8)

Theorem 3.

Let f be a strictly convex function of class \({\mathcal{C}}^{3}({\mathbb{R}}^{+,{_\ast}})\). Let Z  ∗  be the density of an f-divergence minimal measure Q  ∗  on [0,T], which preserves the Levy property and belongs to \({\mathcal{K}}^{{_\ast}}\) . We denote by \(({\beta }^{{_\ast}},{Y }^{{_\ast}})\) its Girsanov parameters. Then, if \({ \circ \atop supp} (\nu )\neq \varnothing \,\) , for a.e. \(x \in supp(Z_{T}^{{_\ast}})\) and a.e. \(y \in supp(\nu )\) , we have

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = \Phi (x)\sum \limits _{i=1}^{d}\alpha _{ i}({e}^{y_{i} } - 1)$$
(9)

where Φ is a continuously differentiable function defined on the set \({ \circ \atop \mbox{ supp}} (Z_{T}^{{_\ast}})\) and \(\alpha \,{=\,}^{\top }(\alpha _{1},\alpha _{2},\cdots \alpha _{d})\) is a vector of \({\mathbb{R}}^{d}\) . Furthermore, if c≠0, for a.e. \(x \in \mbox{ supp}(Z_{T}^{{_\ast}})\) and a.e. y ∈ supp(ν), we have

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = x{f}^{{\prime\prime}}(x)\sum \limits _{i=1}^{d}\beta _{ i}^{{_\ast}}({e}^{y_{i} } - 1) -\sum \limits _{j=1}^{d}V _{ j}({e}^{y_{j} } - 1)$$
(10)

where \({\beta }^{{_\ast}} {= }^{\top }\!(\beta _{1}^{{_\ast}},\cdots \,,\beta _{d}^{{_\ast}})\) is the first Girsanov parameter and \(V {=\, }^{\top }(V _{1},\cdots \,,V _{d})\) is a vector which belongs to the kernel of the matrix c, i.e. cV = 0.

We recall that for all t ≤ T, since Q  ∗  preserves the Levy property, Z t  ∗  and \(\frac{Z_{T}^{{_\ast}}} {Z_{t}^{{_\ast}}}\) are independent under P and \(\mathcal{L}(\frac{Z_{T}^{{_\ast}}} {Z_{t}^{{_\ast}}} ) = \mathcal{L}(Z_{T-t}^{{_\ast}})\). Therefore, denoting

$$\rho (t,x) = E_{{Q}^{{_\ast}}}[{f}^{{^\prime}}(xZ_{ T-t}^{{_\ast}})],$$

and taking cadlag versions of processes, we deduce that Q  ∗ -a.s. for all t ≤ T

$$E_{{Q}^{{_\ast}}}[{f}^{{^\prime}}(Z_{ T}^{{_\ast}})\vert \mathcal{F}_{ t}] = \rho (t,Z_{t}^{{_\ast}})$$

We note that the proof of Theorem 3 is based on the identification using Theorem 2 and on an application of a decomposition formula to the function ρ. However, the function ρ is not necessarily twice continuously differentiable in x and once continuously differentiable in t. So, we will proceed by approximation, applying the Ito formula to specially constructed functions ρ n . In order to do this, we need a number of auxiliary lemmas given in the next section.

Since the result of Theorem 3 is strongly related to the support of Z T  ∗ , we are also interested in the question of when this support is an interval. This question has been well studied in [24, 27] for infinitely divisible distributions. In our case, the specific form of the Girsanov parameters, which follows from the preservation of the Levy property, allows us to obtain the following result, proved in Sect. 4.3.

Proposition 1.

Let Z  ∗  be the density of an f-divergence minimal equivalent martingale measure Q  ∗  on [0,T], which preserves the Levy property and belongs to \({\mathcal{K}}^{{_\ast}}\) . Then

  (i)

    If \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\) , then \(supp(Z_{T}^{{_\ast}}) = {\mathbb{R}}^{+,{_\ast}}\) .

  (ii)

    If \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}} = 0\) , \({ \circ \atop supp} (\nu )\neq \varnothing \) , 0 ∈ supp(ν) and Y  ∗  is not identically 1 on \({ \circ \atop supp} \ (\nu )\) , then

    (j)

      in the case ln (Y  ∗ (y)) > 0 for all y ∈ supp(ν), there exists A > 0 such that \(supp(Z_{T}^{{_\ast}}) = [A,+\infty [\);

    (jj)

      in the case ln (Y  ∗ (y)) < 0 for all y ∈ supp(ν), there exists A > 0 such that \(supp(Z_{T}^{{_\ast}}) =]0,A]\);

    (jjj)

      in the case when there exist \(y,\bar{y} \in supp(\nu )\) such that \(\ln ({Y }^{{_\ast}}(y)) \cdot \ln ({Y }^{{_\ast}}(\bar{y})) < 0\) , we have \(supp(Z_{T}^{{_\ast}}) = {\mathbb{R}}^{+,{_\ast}}\) .

4.1 Some Auxiliary Lemmas

We begin with an approximation lemma. Let a strictly convex, three times continuously differentiable on \({\mathbb{R}}^{+,{_\ast}}\) function f be fixed.

Lemma 2.

There exists a sequence of bounded functions \((\phi _{n})_{n\geq 1}\) , which are of class \({\mathcal{C}}^{2}\) on \({\mathbb{R}}^{+{_\ast}}\) , increasing, such that for all n ≥ 1, ϕ n coincides with \({f}^{{^\prime}}\) on the compact set \([ \frac{1} {n},n]\) and such that for sufficiently large n the following inequalities hold for all x,y > 0 :

$$\vert \phi _{n}(x)\vert \leq 4\vert {f}^{{^\prime}}(x)\vert +\alpha \text{ , }\vert \phi _{ n}^{{^\prime}}(x)\vert \leq 3{f}^{{\prime\prime}}(x)\text{ , }\vert \phi _{ n}(x)-\phi _{n}(y)\vert \leq 5\vert {f}^{{^\prime}}(x)-{f}^{{^\prime}}(y)\vert $$
(11)

where α is a real positive constant.

Proof.

We set, for n ≥ 1,

$$\begin{array}{rcl} & & A_{n}(x) = {f}^{{^\prime}}( \frac{1} {n}) -\int \limits _{x\vee \frac{1} {2n} }^{ \frac{1} {n} }{f}^{{\prime\prime}}(y){(2ny - 1)}^{2}(5 - 4ny)dy \\ & & B_{n}(x) = {f}^{{^\prime}}(n) + \int \limits _{n}^{x\wedge (n+1)}{f}^{{\prime\prime}}(y){(n + 1 - y)}^{2}(1 + 2y - 2n)dy \\ \end{array}$$

and finally

$$\phi _{n}(x) = \left \{\begin{array}{@{}l@{\quad }l@{}} \quad &A_{n}(x)\text{ if }0 \leq x < \frac{1} {n}, \\ \quad &{f}^{{^\prime}}(x)\text{ if } \frac{1} {n} \leq x \leq n, \\ \quad &B_{n}(x)\text{ if }x > n. \end{array} \right.$$

Here A n and B n are defined so that ϕ n is of class \({\mathcal{C}}^{2}\) on \({\mathbb{R}}^{+,{_\ast}}\). For the inequalities we use the fact that \({f}^{{^\prime}}\) is an increasing function as well as the estimates: \(0\,\leq \,{(2nx\,-\,1)}^{2}(5 - 4nx) \leq 1\) for \(\frac{1} {2n} \leq x \leq \frac{1} {n}\) and \(0 \leq {(n + 1 - x)}^{2}(1 + 2x - 2n) \leq 3\) for \(n \leq x \leq n + 1\). □ 
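The following small symbolic check (our own verification, not part of the proof) confirms the boundary behaviour of the two gluing weights used above: the lower weight passes from the value 0 with zero slope at 1/(2n) to the value 1 with zero slope at 1/n, and the upper weight behaves symmetrically on [n, n+1], which is what makes ϕ n of class C² at the gluing points.

import sympy as sp

y, n = sp.symbols('y n', positive=True)

# Weight used in A_n on [1/(2n), 1/n] and weight used in B_n on [n, n+1].
w_lower = (2*n*y - 1)**2 * (5 - 4*n*y)
w_upper = (n + 1 - y)**2 * (1 + 2*y - 2*n)

# Lower weight: value 0 and slope 0 at y = 1/(2n); value 1 and slope 0 at y = 1/n.
print(sp.simplify(w_lower.subs(y, 1/(2*n))),
      sp.simplify(sp.diff(w_lower, y).subs(y, 1/(2*n))),
      sp.simplify(w_lower.subs(y, 1/n)),
      sp.simplify(sp.diff(w_lower, y).subs(y, 1/n)))      # 0 0 1 0

# Upper weight: value 1 and slope 0 at y = n; value 0 and slope 0 at y = n + 1.
print(sp.simplify(w_upper.subs(y, n)),
      sp.simplify(sp.diff(w_upper, y).subs(y, n)),
      sp.simplify(w_upper.subs(y, n + 1)),
      sp.simplify(sp.diff(w_upper, y).subs(y, n + 1)))    # 1 0 0 0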

Let Q be a Levy property preserving locally equivalent martingale measure and (β, Y ) its Girsanov parameters for the change of measure from P into Q. We use the function

$$\rho _{n}(t,x) = E_{Q}[\phi _{n}(xZ_{T-t})]$$

to obtain the following analogue of Theorem 4, with \({f}^{{^\prime}}\) replaced by ϕ n .

For this let us denote for 0 ≤ t ≤ T

$$\xi _{t}^{(n)}(x) = E_{ Q}[\phi _{n}^{{^\prime}}(xZ_{ T-t})\,Z_{T-t}]$$
(12)

and

$$H_{t}^{(n)}(x,y) = E_{ Q}[\phi _{n}(xZ_{T-t}Y (y)) - \phi _{n}(xZ_{T-t})]$$
(13)

Lemma 3.

We have Q -a.s., for all t ≤ T,

$$\begin{array}{rcl} & & \rho _{n}(t,Z_{t}) = E_{Q}[\phi _{n}(Z_{T})] + \\ & & \sum \limits _{i=1}^{d}\beta _{ i} \int \limits _{0}^{t}\xi _{ s}^{(n)}(Z_{ s-})\,Z_{s-}dX_{s}^{(c),Q,i} + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}H_{s}^{(n)}(Z_{ s-},y)\,({\mu }^{X} - {\nu }^{X,Q})(ds,dy)\end{array}$$
(14)

where \(\beta {=\, }^{\top }\!(\beta _{1},\cdots \,,\beta _{d})\) and ν X,Q is a compensator of the jump measure \({\mu }^{X}\) with respect to \((\mathbb{F},Q)\) .

Proof.

In order to apply the Ito formula to ρ n , we need to show that ρ n is twice continuously differentiable with respect to x and once with respect to t and that the corresponding derivatives are bounded for all t ∈ [0, T] and \(x \geq \epsilon \), \(\epsilon > 0.\) First of all, we note that from the definition of ϕ n for all \(x \geq \epsilon > 0\)

$$\vert \frac{\partial } {\partial x}\phi _{n}(xZ_{T-t})\vert = \vert Z_{T-t}\phi _{n}^{{^\prime}}(xZ_{ T-t})\vert \leq \frac{(n + 1)} {\epsilon } \sup _{z>0}\vert \phi _{n}^{{^\prime}}(z)\vert < +\infty.$$

Therefore, ρ n is differentiable with respect to x and we have

$$\frac{\partial } {\partial x}\rho _{n}(t,x) = E_{Q}[\phi _{n}^{{^\prime}}(xZ_{ T-t})\,Z_{T-t}].$$

Moreover, the function \((x,t)\,\mapsto \,\phi _{n}^{{^\prime}}(xZ_{T-t})Z_{T-t}\) is continuous P-a.s. and bounded. This implies that \(\frac{\partial } {\partial x}\rho _{n}\) is continuous and bounded for t ∈ [0, T] and \(x \geq \epsilon \). In the same way, for all \(x \geq \epsilon > 0\)

$$\vert \frac{{\partial }^{2}} {\partial {x}^{2}}\phi _{n}(xZ_{T-t})\vert = Z_{T-t}^{2}\phi _{ n}^{{\prime\prime}}(xZ_{ T-t}) \leq \frac{{(n + 1)}^{2}} {{\epsilon }^{2}} \sup _{z>0}\phi _{n}^{{\prime\prime}}(z) < +\infty.$$

Therefore, ρ n is twice continuously differentiable in x and

$$\frac{{\partial }^{2}} {\partial {x}^{2}}\rho _{n}(t,x) = E_{Q}[\phi _{n}^{{\prime\prime}}(xZ_{ T-t})Z_{T-t}^{2}]$$

We can easily verify that it is again a continuous and bounded function. In order to obtain differentiability with respect to t, we apply the Ito formula to ϕ n :

$$\begin{array}{rcl} \phi _{n}(xZ_{t})& =& \phi _{n}(x) + \sum \limits _{i=1}^{d} \int \limits _{0}^{t}x\phi _{ n}^{{^\prime}}(xZ_{ s-})\beta _{i}Z_{s-}dX_{s}^{(c),Q,i} \\ & & +\int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}\phi _{n}(xZ_{s-}Y (y)) - \phi _{n}(xZ_{s-})\,({\mu }^{X} - {\nu }^{X,Q})(ds,dy) \\ & & +\int \limits _{0}^{t}\psi _{ n}(x,Z_{s-})ds \\ \end{array}$$

where

$$\begin{array}{rcl} \psi _{n}(x,Z_{s-})& =& {\,}^{\top }\beta c\beta [xZ_{ s-}\phi _{n}^{{^\prime}}(xZ_{ s-}) + \frac{1} {2}\,{x}^{2}\,Z_{ s-}^{2}\phi _{ n}^{{\prime\prime}}(xZ_{ s-})] \\ & & +\int \limits _{{\mathbb{R}}^{d}}[(\phi _{n}(xZ_{s-}Y (y)) - \phi _{n}(xZ_{s-}))\,Y (y) - x\phi _{n}^{{^\prime}}(xZ_{ s-})Z_{s-}(Y (y) - 1)]\nu (dy).\end{array}$$

Therefore,

$$E_{Q}[\phi _{n}(xZ_{T-t})] = \phi _{n}(x) + \int \limits _{0}^{T-t}E_{ Q}[\psi _{n}(x,Z_{s-})]ds$$

so that ρ n is differentiable with respect to t and

$$\frac{\partial } {\partial t}\rho _{n}(t,x) = -E_{Q}[\psi _{n}(x,Z_{s-})]_{\vert _{s=(T-t)}}$$

We can also easily verify that this function is continuous and bounded. For this we take into account the fact that ϕ n , \(\phi _{n}^{{^\prime}}\) and \(\phi _{n}^{{\prime\prime}}\) are bounded functions and also that the Hellinger process of Q T and P T of order 1 ∕ 2 is finite.

We can finally apply the Ito formula to ρ n . For that we use the stopping times

$$s_{m} =\inf \{ t \geq 0\,\vert \,Z_{t} \leq \frac{1} {m}\},$$

with m ≥ 1 and \(\inf \{\varnothing \} = +\infty \). Then, from the Markov property of the Levy process we have:

$$\rho _{n}(t \wedge s_{m},Z_{t\wedge s_{m}}) = E_{Q}(\phi _{n}(\lambda Z_{T})\,\vert \,\mathcal{F}_{t\wedge s_{m}})$$

We remark that \((E_{Q}(\phi _{n}(\lambda Z_{T})\,\vert \,\mathcal{F}_{t\wedge s_{m}}))_{t\geq 0}\) is a Q-martingale, uniformly integrable with respect to m. From the Ito formula we have:

$$\begin{array}{rcl} & & \rho _{n}(t \wedge s_{m},Z_{t\wedge s_{m}}) = E_{Q}(\phi _{n}(\lambda Z_{T})) + \int \limits _{0}^{t\wedge s_{m} }\frac{\partial \rho _{n}} {\partial s} (s,Z_{s-})ds \\ & & +\int \limits _{0}^{t\wedge s_{m} }\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})dZ_{s} + \frac{1} {2}\int \limits _{0}^{t\wedge s_{m} }\frac{{\partial }^{2}\rho _{n}} {\partial {x}^{2}} (s,Z_{s-})d < {Z}^{c} > _{ s} \\ & & +\sum \limits _{0\leq s\leq t\wedge s_{m}}\rho _{n}(s,Z_{s}) - \rho _{n}(s,Z_{s-}) -\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})\Delta Z_{s} \\ \end{array}$$

where \(\Delta Z_{s} = Z_{s} - Z_{s-}\). After some standard simplifications, we see that

$$\rho _{n}(t \wedge s_{m},Z_{t\wedge s_{m}}) = A_{t\wedge s_{m}} + M_{t\wedge s_{m}}$$

where \((A_{t\wedge s_{m}})_{0\leq t\leq T}\) is a predictable process, which is equal to zero,

$$\begin{array}{rcl} & & A_{t\wedge s_{m}} = \int \limits _{0}^{t\wedge s_{m} }\frac{\partial \rho _{n}} {\partial s} (s,Z_{s-})ds + \frac{1} {2}\int \limits _{0}^{t\wedge s_{m} }\frac{{\partial }^{2}\rho _{n}} {\partial {x}^{2}} (s,Z_{s-})d < {Z}^{c} > _{ s} + \\ & & \int \limits _{0}^{t\wedge s_{m} } \int \limits _{\mathbb{R}}[\rho _{n}(s,Z_{s-} + x) - \rho _{n}(s,Z_{s-}) -\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})x]{\nu }^{Z,Q}(ds,dx) \\ \end{array}$$

and \((M_{t\wedge s_{m}})_{0\leq t\leq T}\) is a Q-martingale,

$$\begin{array}{rcl} M_{t\wedge s_{m}} = E_{Q}(\phi _{n}(\lambda Z_{T})) + \int \limits _{0}^{t\wedge s_{m} }\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})dZ_{s}^{c} + & & \\ \int \limits _{0}^{t\wedge s_{m} } \int \limits _{\mathbb{R}}[\rho _{n}(s,Z_{s-} + x) - \rho _{n}(s,Z_{s-})]({\mu }^{Z}(ds,dx) - {\nu }^{Z,Q}(ds,dx))& & \\ \end{array}$$

Then, we pass to the limit as \(m \rightarrow +\infty \). We remark that the sequence \((s_{m})_{m\geq 1}\) tends to +∞ as \(m \rightarrow \infty \). From [23], Corollary 2.4, p. 59, we obtain that

$$\lim _{m\rightarrow \infty }E_{Q}(\phi _{n}(Z_{T})\,\vert \,\mathcal{F}_{t\wedge s_{m}}) = E_{Q}(\phi _{n}(Z_{T})\,\vert \,\mathcal{F}_{t})$$

and by the definition of local martingales we get:

$$\lim _{m\rightarrow \infty }\int \limits _{0}^{t\wedge s_{m} }\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})dZ_{s}^{c} = \int \limits _{0}^{t}\frac{\partial \rho _{n}} {\partial x} (s,Z_{s-})dZ_{s}^{c} = \int \limits _{0}^{t}\lambda \xi _{ s}^{(n)}(Z_{ s-})dZ_{s}^{c}$$

and

$$\begin{array}{rcl} & & \lim _{m\rightarrow \infty }\int \limits _{0}^{t\wedge s_{m} } \int \limits _{\mathbb{R}}[\rho _{n}(s,Z_{s-} + x) - \rho _{n}(s,Z_{s-})]({\mu }^{Z}(ds,dx) - {\nu }^{Z,Q}(ds,dx)) = \\ & & \int \limits _{0}^{t} \int \limits _{\mathbb{R}}[\rho _{n}(s,Z_{s-} + x) - \rho _{n}(s,Z_{s-})]({\mu }^{Z}(ds,dx) - {\nu }^{Z,Q}(ds,dx)) \\ \end{array}$$

Now, in each stochastic integral we pass from the integration with respect to the process Z to the one with respect to the process X. For that we remark that

$$dZ_{s}^{c} = \sum \limits _{i=1}^{d}{\beta }^{(i)}Z_{ s-}dX_{s}^{c,Q,i},\,\,\,\Delta Z_{ s} = Z_{s-}Y (\Delta X_{s}).$$

Lemma 3 is proved. □ 

4.2 A Decomposition for the Density of Levy Preserving Martingale Measures

This decomposition will follow from the previous one by a limit passage. Let again Q be a Levy property preserving locally equivalent martingale measure and (β, Y ) the corresponding Girsanov parameters for the passage from P to Q. We introduce cadlag versions of the following processes: for t > 0

$$\xi _{t}(x) = E_{Q}[{f}^{{\prime\prime}}(xZ_{ T-t})Z_{T-t}]$$

and

$$H_{t}(x,y) = E_{Q}[{f}^{{^\prime}}(xZ_{ T-t}Y (y)) - {f}^{{^\prime}}(xZ_{ T-t})]$$
(15)

Theorem 4.

Let Z be the density of a Levy preserving equivalent martingale measure Q. Assume that Q belongs to \({\mathcal{K}}^{{_\ast}}\) . Then we have Q-a.s., for all t ≤ T,

$$\begin{array}{rcl} & & E_{Q}[{f}^{{^\prime}}(Z_{ T})\vert \mathcal{F}_{t}] = E_{Q}[{f}^{{^\prime}}(Z_{ T})] + \\ & & \sum \limits _{i=1}^{d}\beta _{ i} \int \limits _{0}^{t}\xi _{ s}(Z_{s-})Z_{s-}dX_{s}^{(c),Q,i} + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}H_{s}(Z_{s-},y)\,({\mu }^{X} - {\nu }^{X,Q})(ds,dy)\end{array}$$
(16)

We now turn to the proof of Theorem 4. In order to obtain the decomposition for \({f}^{{^\prime}}\), we establish convergence in probability of the different stochastic integrals appearing in Lemma 3.

Proof of Theorem 4. For n ≥ 1, we introduce the stopping times

$$\tau _{n} =\inf \{ t \geq 0\,\vert \,Z_{t} \geq n\text{ or }Z_{t} \leq \frac{1} {n}\}$$
(17)

where \(\inf \{\varnothing \} = +\infty \) and we note that \(\tau _{n} \rightarrow +\infty \) (P-a.s.) as \(n \rightarrow \infty \,\). First of all, we note that

$$\vert E_{Q}[{f}^{{^\prime}}(Z_{ T})\vert \mathcal{F}_{t}] - \rho _{n}(t,Z_{t})\vert \leq E_{Q}[\vert {f}^{{^\prime}}(Z_{ T}) - \phi _{n}(Z_{T})\vert \vert \mathcal{F}_{t}]$$

As \({f}^{{^\prime}}\) and ϕ n coincide on the interval \([ \frac{1} {n},n]\), it follows from Lemma 2 that

$$\begin{array}{rcl} \vert E_{Q}[{f}^{{^\prime}}(Z_{ T})\vert \mathcal{F}_{t}] - \rho _{n}(t,Z_{t})\vert & \leq & E_{Q}[\vert {f}^{{^\prime}}(Z_{ T}) - \phi _{n}(Z_{T})\vert \mathbf{1}_{\{\tau _{n}\leq T\}}\vert \mathcal{F}_{t}] \\ & & \leq E_{Q}[(5\vert {f}^{{^\prime}}(Z_{ T})\vert + \alpha )\mathbf{1}_{\{\tau _{n}\leq T\}}\vert \mathcal{F}_{t}].\end{array}$$

Now, for every ε > 0, by the Doob inequality and the Lebesgue dominated convergence theorem we get:

$$\lim _{n\rightarrow +\infty }Q(\sup _{t\leq T}E_{Q}[(5\vert {f}^{{^\prime}}(Z_{T})\vert + \alpha )\mathbf{1}_{\{\tau _{n}\leq T\}}\vert \mathcal{F}_{t}] > \epsilon ) \leq \lim _{n\rightarrow +\infty }\frac{1} {\epsilon }E_{Q}[(5\vert {f}^{{^\prime}}(Z_{ T})\vert + \alpha )\mathbf{1}_{\{\tau _{n}\leq T\}}] = 0$$

Therefore, we have

$$\lim _{n\rightarrow +\infty }Q(\sup _{t\leq T}\vert E_{Q}[{f}^{{^\prime}}(Z_{ T})\vert \mathcal{F}_{t}] - \rho _{n}(t,Z_{t})\vert > \epsilon ) = 0.$$

We now turn to the convergence of the three elements of the right-hand side of (14). We have almost surely \(\lim _{n\rightarrow +\infty }\phi _{n}(Z_{T}) = {f}^{{^\prime}}(Z_{T})\), and for all n ≥ 1, \(\vert \phi _{n}(Z_{T})\vert \leq 4\vert {f}^{{^\prime}}(Z_{T})\vert + \alpha \). Therefore, it follows from the dominated convergence theorem that,

$$\lim _{n\rightarrow +\infty }E_{Q}[\phi _{n}(Z_{T})] = E_{Q}[{f}^{{^\prime}}(Z_{ T})].$$

We now prove the convergence of the continuous martingale parts of (14). It follows from Lemma 2 that

$$\begin{array}{rcl} Z_{t}\,\vert \xi _{t}^{(n)}(Z_{ t}) - \xi _{t}(Z_{t})\vert & \leq & E_{Q}[Z_{T}\vert \phi _{n}^{{^\prime}}(Z_{ T}) - {f}^{{\prime\prime}}(Z_{ T})\vert \,\vert \,\mathcal{F}_{t}] \leq \\ & & 4E_{Q}[Z_{T}\vert {f}^{{\prime\prime}}(Z_{ T})\vert \mathbf{1}_{\{\tau _{n}\leq T\}}\vert \mathcal{F}_{t}].\end{array}$$

Hence, we have as before for ε > 0

$$\lim _{n\rightarrow +\infty }Q(\sup _{t\leq T}Z_{t}\,\vert \xi _{t}^{(n)}(Z_{ t}) - \xi _{t}(Z_{t})\vert > \epsilon ) \leq \lim _{n\rightarrow +\infty }\frac{4} {\epsilon }E_{Q}[Z_{T}{f}^{{\prime\prime}}(Z_{ T})\mathbf{1}_{\{\tau _{n}\leq T\}}] = 0$$

Therefore, it follows from the Lebesgue dominated convergence theorem for stochastic integrals (see [14], Theorem I.4.31, p. 46 ) that for all \(\epsilon \,>\,0\) and \(1 \leq i \leq d\)

$$\lim _{n\rightarrow +\infty }Q(\sup _{t\leq T}\,{\bigl | \int \limits _{0}^{t}Z_{ s-}\,(\xi _{s}^{(n)}(Z_{ s-}) - \xi _{s}(Z_{s-}))dX_{s}^{(c),Q,i}\bigr |} > \epsilon ) = 0.$$

It remains to show the convergence of the discontinuous martingales to zero as \(n \rightarrow \ \infty \). We start by writing

$$\int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}[H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y)]({\mu }^{X} - {\nu }^{X,Q})(ds,dy) = M_{ t}^{(n)} + N_{ t}^{(n)}$$

with

$$\begin{array}{rcl} M_{t}^{(n)}& =& \int \limits _{0}^{t} \int \limits _{\mathcal{A}}[H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y)]({\mu }^{X} - {\nu }^{X,Q})(ds,dy), \\ N_{t}^{(n)}& =& \int \limits _{0}^{t} \int \limits _{{\mathcal{A}}^{c}}[H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y)]({\mu }^{X} - {\nu }^{X,Q})(ds,dy), \\ \end{array}$$

where \(\mathcal{A} =\{ y : \vert Y (y) - 1\vert < \frac{1} {4}\}\).

For p ≥ 1, we consider the sequence of stopping times τ p defined by (17) with n replaced by a real positive p. We also introduce the processes

$${M}^{(n,p)} = (M_{ t}^{(n,p)})_{ t\geq 0},\,\,{N}^{(n,p)} = (N_{ t}^{(n,p)})_{ t\geq 0}$$

with \(M_{t}^{(n,p)} = M_{t\wedge \tau _{p}}^{(n)}\), \(N_{t}^{(n,p)} = N_{t\wedge \tau _{p}}^{(n)}\). We remark that for p ≥ 1 and \(\epsilon > 0\)

$$Q(\sup _{t\leq T}\vert M_{t}^{(n)}+N_{ t}^{(n)}\vert > \epsilon ) \leq Q(\tau _{ p} \leq T)+Q(\sup _{t\leq T}\vert M_{t}^{(n,p)}\vert > \frac{\epsilon } {2})+Q(\sup _{t\leq T}\vert N_{t}^{(n,p)}\vert > \frac{\epsilon } {2}).$$

Furthermore, we obtain from Doob martingale inequalities that

$$Q(\sup _{t\leq T}\vert M_{t}^{(n,p)}\vert > \frac{\epsilon } {2}) \leq \frac{4} {{\epsilon }^{2}}\mathbb{E}_{Q}[{(M_{T}^{(n,p)})}^{2}]$$
(18)

and

$$Q(\sup _{t\leq T}\vert N_{t}^{(n,p)}\vert > \frac{\epsilon } {2}) \leq \frac{2} {\epsilon } \mathbb{E}_{Q}\vert N_{T}^{(n,p)}\vert $$
(19)

Since \(\tau _{p} \rightarrow +\infty \) as \(p \rightarrow +\infty \) it is sufficient to show that \(E_{Q}{[M_{T}^{(n,p)}]}^{2}\) and \(E_{Q}\vert N_{T}^{(n,p)}\vert \) converge to 0 as \(n \rightarrow \infty \).

For that we estimate \(E_{Q}[{(M_{T}^{(n,p)})}^{2}]\) and prove that

$$E_{Q}[{(M_{T}^{(n,p)})}^{2}] \leq C\Big(\int \limits _{0}^{T}\sup _{ v\in K}\mathbb{E}_{Q}^{2}[Z_{ s}\,{f}^{{\prime\prime}}(vZ_{ s})\mathbf{1}_{\{\tau _{q_{n}}<s\}}]ds\Big)\,\Big(\int \limits _{\mathcal{A}}{(\sqrt{Y (y)} - 1)}^{2}\nu (dy)\Big)$$

where C is a constant, K is some compact set of \({\mathbb{R}}^{+,{_\ast}}\) and \(q_{n} = \frac{n} {4p}\).

First we note that on the stochastic interval \([\![0,T \wedge \tau _{p}]\!]\) we have \(1/p \leq Z_{s-}\leq p\), and, hence,

$$\begin{array}{rcl} & & E_{Q}[{(M_{T}^{(n,p)})}^{2}] = E_{ Q}[\int \limits _{0}^{T\wedge \tau _{p} } \int \limits _{\mathcal{A}}\vert H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y){\vert }^{2}\,Y (y)\nu (dy)ds] \leq \\ & &\int \limits _{0}^{T} \int \limits _{\mathcal{A}}\sup _{1/p\leq x\leq p}\vert H_{T-s}^{(n)}(x,y) - H_{ T-s}(x,y){\vert }^{2}\,Y (y)\nu (dy)ds \\ \end{array}$$

To estimate the difference \(\vert H_{T-s}^{(n)}(x,y) - H_{T-s}(x,y)\vert \) we note that

$$H_{T-s}^{(n)}(x,y)-H_{ T-s}(x,y) = E_{Q}[\phi _{n}(xZ_{s}Y (y))-\phi _{n}(xZ_{s})-{f}^{{^\prime}}(xZ_{ s}Y (y))+{f}^{{^\prime}}(xZ_{ s})]$$

From Lemma 2 we deduce that if \(xZ_{s}Y (y) \in [1/n,n]\) and \(xZ_{s} \in [1/n,n]\), then the expression under the expectation on the right-hand side is zero. But if \(y \in \mathcal{A}\) we also have \(3/4 \leq Y (y) \leq 5/4\) and, hence,

$$\vert H_{T-s}^{(n)}(x,y) - H_{ T-s}(x,y)\vert \leq E_{Q}[\mathbf{1}_{\{\tau _{q_{n}}\leq s\}}\vert \phi _{n}(xZ_{s}Y (y)) - \phi _{n}(xZ_{s}) - {f}^{{^\prime}}(xZ_{ s}Y (y)) + {f}^{{^\prime}}(xZ_{ s})\vert ].$$

Again from the inequalities of Lemma 2 we get:

$$\vert H_{T-s}^{(n)}(x,y) - H_{ T-s}(x,y)\vert \leq 6E_{Q}[\mathbf{1}_{\{\tau _{q_{n}}\leq s\}}\vert {f}^{{^\prime}}(xZ_{ s}Y (y)) - {f}^{{^\prime}}(xZ_{ s})\vert ].$$

Writing

$${f}^{{^\prime}}(xZ_{ s}Y (y)) - {f}^{{^\prime}}(xZ_{ s}) = \int \limits _{1}^{Y (y)}xZ_{ s}{f}^{{\prime\prime}}(xZ_{ s}\theta )d\theta $$

we finally get

$$\vert H_{T-s}^{(n)}(x,y)-H_{ T-s}(x,y)\vert \leq 6\sup _{3/4\leq u\leq 5/4}E_{Q}[\mathbf{1}_{\{\tau _{q_{n}}\leq s\}}\,xZ_{s}\,{f}^{{\prime\prime}}(xuZ_{ s})]\vert Y (y)-1\vert $$

and this gives us the estimation of \(E_{Q}[{(M_{T}^{(n,p)})}^{2}]\) cited above.

We know that \(P_{T} \sim Q_{T}\) and this means that the corresponding Hellinger process of order 1/2 is finite:

$$h_{T}(P,Q, \frac{1} {2}) ={ \frac{T} {2} \,}^{\top }\beta c\beta + \frac{T} {8} \int \limits _{\mathbb{R}}{(\sqrt{Y (y)} - 1)}^{2}\nu (dy) < +\infty.$$

Then

$$\int \limits _{\mathcal{A}}{(\sqrt{Y (y)} - 1)}^{2}\nu (dy) < +\infty.$$

From Lebesgue dominated convergence theorem and (8) we get:

$$\int \limits _{0}^{T}\sup _{ v\in K}\mathbb{E}_{Q}^{2}[Z_{ s}{f}^{{\prime\prime}}(vZ_{ s})\mathbf{1}_{\{\tau _{q_{n}}\leq s\}}]ds \rightarrow 0$$

as \(n \rightarrow +\infty \) and this information together with the estimation of \(E_{Q}[{(M_{T}^{(n,p)})}^{2}]\) proves the convergence of \(E_{Q}[{(M_{T}^{(n,p)})}^{2}]\) to zero as \(n \rightarrow +\infty \).

We now turn to the convergence of \(E_{Q}\vert N_{T}^{(n,p)}\vert \) to zero as \(n \rightarrow +\infty \). For this we prove that

$$E_{Q}\vert N_{T}^{(n,p)}\vert \leq 2TE_{ Q}[\mathbf{1}_{\{\tau _{n}\leq T\}}(5\vert {f}^{{^\prime}}(Z_{ T})\vert + \alpha )]\int \limits _{{\mathcal{A}}^{c}}Y (y)d\nu $$

We start by noticing that

$$\begin{array}{rcl} & & E_{Q}\vert N_{T}^{(n,p)}\vert \leq 2E_{ Q}[\int \limits _{0}^{T\wedge \tau _{p} } \int \limits _{{\mathcal{A}}^{c}}\vert H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y)\vert \,Y (y)\nu (dy)ds] \leq \\ & & 2\int \limits _{0}^{T} \int \limits _{{\mathcal{A}}^{c}}E_{Q}[\vert H_{s}^{(n)}(Z_{ s-},y) - H_{s}(Z_{s-},y)\vert \,Y (y)\nu (dy)ds] \\ \end{array}$$

To evaluate the right-hand side of previous inequality we write

$$\begin{array}{rcl} & & \vert H_{s}^{(n)}(x,y) - H_{ s}(x,y)\vert \\ & &\leq E_{Q}\vert \phi _{n}(xZ_{T-s}Y (y)) - {f}^{{^\prime}}(xZ_{ T-s}Y (y))\vert + E_{Q}\vert \phi _{n}(xZ_{T-s}) - {f}^{{^\prime}}(xZ_{ T-s})\vert.\end{array}$$

We remark that in law with respect to Q

$$\vert \phi _{n}(xZ_{T-s}Y (y)) - {f}^{{^\prime}}(xZ_{ T-s}Y (y))\vert = E_{Q}[\vert \phi _{n}(Z_{T}) - {f}^{{^\prime}}(Z_{ T})\vert \,\vert \,Z_{s} = x\,Y (y)]$$

and

$$\vert \phi _{n}(xZ_{T-s}) - {f}^{{^\prime}}(xZ_{ T-s})\vert = E_{Q}[\vert \phi _{n}(Z_{T}) - {f}^{{^\prime}}(Z_{ T})\vert \,\vert \,Z_{s} = x]$$

Then

$$\vert H_{s}^{(n)}(x,y) - H_{ s}(x,y)\vert \leq 2E_{Q}\vert \phi _{n}(Z_{T}) - {f}^{{^\prime}}(Z_{ T})\vert $$

From Lemma 2 we get:

$$\begin{array}{rcl} E_{Q}\vert \phi _{n}(xZ_{T}) - {f}^{{^\prime}}(xZ_{ T})\vert & \leq & E_{Q}[\mathbf{1}_{\{\tau _{n}\leq T\}}\vert \phi _{n}(Z_{T}) - {f}^{{^\prime}}(Z_{ T})\vert ] \\ & \leq & E_{Q}[\mathbf{1}_{\{\tau _{n}\leq T\}}(5\vert {f}^{{^\prime}}(Z_{ T})\vert + \alpha )] \\ \end{array}$$

and this proves the estimate for \(E_{Q}\vert N_{T}^{(n,p)}\vert \).

Then, the Lebesgue dominated convergence theorem applied to the right-hand side of the previous inequality shows that it tends to zero as \(n \rightarrow \infty \). On the other hand, from the finiteness of the Hellinger process and the inequality \({(\sqrt{Y (y)} - 1)}^{2} \geq Y (y)/25\) valid on \({\mathcal{A}}^{c}\) we get

$$\int \limits _{{A}^{c}}Y (y)d\nu < +\infty $$

This result together with the previous convergence proves the convergence of \(E_{Q}\vert N_{T}^{(n,p)}\vert \) to zero as \(n \rightarrow \infty \). Theorem 4 is proved. □ 

4.3 Proof of Theorem  3 and Proposition  1

Proof of Theorem 3. We define a process \(\hat{X}\,{=\,}^{\top }(\hat{{X}}^{(1)},\cdots \hat{{X}}^{(d)})\) such that for \(1 \leq i \leq d\) and t ∈ [0, T]

$$S_{t}^{(i)} = \mathcal{E}(\hat{{X}}^{(i)})_{ t}$$

where \(\mathcal{E}(\cdot )\) is the Doleans-Dade exponential. We remark that if X is a Levy process then \(\hat{X}\) is again a Levy process and that

$$d\,S_{t}^{(i)} = S_{ t-}^{(i)}\,d\hat{X}_{ t}^{(i)}.$$

In addition, for \(1 \leq i \leq d\) and t ∈ [0, T]

$$\begin{array}{rcl} & & \hat{X}_{t}^{(c),i} = X_{ t}^{(c),i} \\ & & {\nu }^{\hat{{X}}^{(i)},{Q}^{{_\ast}} } = ({e}^{y_{i} } - 1) \cdot {\nu }^{{X}^{(i)},{Q}^{{_\ast}} }.\end{array}$$

Replacing in Theorem 2 the process S by the process \(\hat{X}\) we obtain Q-a.s. for all t ≤ T:

$$E_{{Q}^{{_\ast}}}[{f}^{{^\prime}}(Z_{ T}^{{_\ast}})\vert \mathcal{F}_{ t}] = x_{0}+\sum \limits _{i=1}^{d}[\int \limits _{0}^{t}\phi _{ s}^{(i)}S_{ s-}^{(i)}d\hat{X}_{ s}^{(c),{Q}^{{_\ast}},i }+\int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}\phi _{s}^{(i)}S_{{ s}^{-}}^{(i)}\,d({\mu }^{\hat{{X}}^{(i)} }-{\nu }^{\hat{{X}}^{(i)},{Q}^{{_\ast}} })]$$
(20)

Then it follows from (20), Theorem 4 and the uniqueness of the decomposition of martingales into continuous and purely discontinuous parts that Q  ∗  − a. s. , for all s ≤ T and all y ∈ supp(ν),

$$H_{s}(Z_{s-}^{{_\ast}},y) = \sum \limits _{i=1}^{d}\phi _{ s}^{(i)}S_{ s-}^{(i)}({e}^{y_{i} } - 1)$$
(21)

and for all t ≤ T

$$\sum \limits _{i=1}^{d} \int \limits _{0}^{t}\xi _{ s}(Z_{s-}^{{_\ast}})\,Z_{ s-}^{{_\ast}}\,\beta _{ i}^{{_\ast}}\,dX_{ s}^{(c),{Q}^{{_\ast}},i } = \sum \limits _{i=1}^{d} \int \limits _{0}^{t}\phi _{ s}^{(i)}S_{ s-}^{(i)}dX_{ s}^{(c),{Q}^{{_\ast}},i }.$$
(22)

We remark that Q  ∗  − a. s. for all s ≤ T

$$H_{s}(Z_{s}^{{_\ast}},y) = E_{{ Q}^{{_\ast}}}({f}^{{^\prime}}({Y }^{{_\ast}}(y)Z_{ T}^{{_\ast}}) - {f}^{{^\prime}}(Z_{ T}^{{_\ast}})\,\vert \,\mathcal{F}_{ s}).$$

Moreover, \(H_{s}(Z_{s-}^{{_\ast}},y)\) coincides with \(H_{s}(Z_{s}^{{_\ast}},y)\) at points of continuity of Z  ∗ . Taking a sequence of continuity points of Z  ∗  tending to T and using that \(Z_{T}^{{_\ast}} = Z_{T-}^{{_\ast}}\) (Q  ∗ -a.s.) we get that Q  ∗  − a. s.  for y ∈ supp(ν)

$${f}^{{^\prime}}(Z_{ T}^{{_\ast}}{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(Z_{ T}^{{_\ast}}) = \sum \limits _{i=1}^{d}\phi _{ T-}^{(i)}S_{ T-}^{(i)}({e}^{y_{i} } - 1)$$
(23)

We fix an arbitrary \(y_{0} \in { \circ \atop supp} (\nu )\). Differentiating with respect to y i , i ≤ d, we obtain that

$$Z_{T}^{{_\ast}} \frac{\partial } {\partial y_{i}}{Y }^{{_\ast}}(y_{ 0}){f}^{{\prime\prime}}(Z_{ T}^{{_\ast}}{Y }^{{_\ast}}(y_{ 0})) = \phi _{T-}^{(i)}S_{ T-}^{(i)}{e}^{y_{0,i} }$$

We also define:

$$\Phi (x) = x{f}^{{\prime\prime}}(x{Y }^{{_\ast}}(y_{ 0}))$$

and

$$\alpha _{i} = {e}^{-y_{0,i} } \frac{\partial } {\partial y_{i}}{Y }^{{_\ast}}(y_{ 0}).$$

We then have \(\phi _{T-}^{(i)}S_{T-}^{(i)} = \Phi (Z_{T}^{{_\ast}})\alpha _{i},\) and inserting this in (23), we obtain (9).

Taking the quadratic variation of the difference of the right-hand and left-hand sides in (22), we obtain that Q  ∗  − a. s.  for all s ≤ T

$${\,}^{\top }[\xi _{ s}(Z_{s-}^{{_\ast}})\,Z_{ s-}^{{_\ast}}\,{\beta }^{{_\ast}}- S_{ s-}\phi _{s}]\,c\,[\xi _{s}(Z_{s-}^{{_\ast}})\,Z_{ s-}^{{_\ast}}\,{\beta }^{{_\ast}}- S_{ s-}\phi _{s}] = 0$$

where by convention \(S_{s-}\phi _{s} = (S_{s-}^{(i)}\phi _{s}^{(i)})_{1\leq i\leq d}\). Now, we remark that Q  ∗  − a. s. for all s ≤ T

$$Z_{s}^{{_\ast}}\,\xi _{ s}(Z_{s}^{{_\ast}}) = E_{{ Q}^{{_\ast}}}({f}^{{\prime\prime}}(Z_{ T}^{{_\ast}})Z_{ T}^{{_\ast}}\,\vert \,\mathcal{F}_{ s})$$

and that it coincides with \(Z_{s-}^{{_\ast}}\,\xi _{s}(Z_{s-}^{{_\ast}})\) at continuity points of Z  ∗ . We take a sequence of continuity points of Z  ∗  tending to T and, since a Levy process has no predictable jumps, we obtain that Q  ∗  − a. s. 

$${\,}^{\top }[Z_{ T}^{{_\ast}}{f}^{{\prime\prime}}(Z_{ T}^{{_\ast}}){\beta }^{{_\ast}}- S_{ T-}\phi _{T-}]\,c\,[Z_{T}^{{_\ast}}{f}^{{\prime\prime}}(Z_{ T}^{{_\ast}}){\beta }^{{_\ast}}- S_{ T-}\phi _{T-}] = 0$$

Hence, if c≠0,

$$Z_{T}^{{_\ast}}{f}^{{\prime\prime}}(Z_{ T}^{{_\ast}}){\beta }^{{_\ast}}- S_{ T-}\phi _{T-} = V$$

where \(V \in {\mathbb{R}}^{d}\) is a vector which satisfies cV = 0. Inserting this in (23) we obtain (10). Theorem 3 is proved. □ 

Proof of Proposition 1. Writing the Ito formula we obtain P-a.s. for t ≤ T:

$$\begin{array}{rcl} \ln (Z_{t}^{{_\ast}})& =& \sum \limits _{i=1}^{d}\beta _{ i}^{{_\ast}}X_{ t}^{(c),i} + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}\ln ({Y }^{{_\ast}}(y))d({\mu }^{X} - {\nu }^{X,P}) \\ & & -{\frac{t} {2}\,}^{\top }\!{\beta }^{{_\ast}}c{\beta }^{{_\ast}} + t\int \limits _{{\mathbb{R}}^{d}}\left [\ln ({Y }^{{_\ast}}(y)) - ({Y }^{{_\ast}}(y) - 1)\right ]\nu (dy)\end{array}$$
(24)

As we have assumed Q  ∗  to preserve the Levy property, the Girsanov parameters \(({\beta }^{{_\ast}},{Y }^{{_\ast}})\) do not depend on (ω, t), and the process \(\ln ({Z}^{{_\ast}}) = (\ln (Z_{t}^{{_\ast}}))_{0\leq t\leq T}\) is a Levy process with the characteristics:

$$\begin{array}{rcl} & & {b}^{\ln {Z}^{{_\ast}} } = -{\frac{1} {2}\,}^{\top }\!{\beta }^{{_\ast}}c{\beta }^{{_\ast}} + \int \limits _{{\mathbb{R}}^{d}}[\ln ({Y }^{{_\ast}}(y)) - ({Y }^{{_\ast}}(y) - 1)]\,\nu (dy), \\ & & {c}^{\ln {Z}^{{_\ast}} } ={\, }^{\top }\!{\beta }^{{_\ast}}c{\beta }^{{_\ast}}, \\ & & d{\nu }^{\ln {Z}^{{_\ast}} } =\ln ({Y }^{{_\ast}}(y))\,\nu (dy).\end{array}$$

Now, as soon as \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\), the continuous component of ln(Z  ∗ ) is non zero, and from Theorem 24.10 in [24] we deduce that \(\mbox{ supp}(Z_{T}^{{_\ast}}) = {\mathbb{R}}^{+,{_\ast}}\) and, hence, i).

If Y  ∗ (y) is not identically 1 on \({ \circ \atop supp} (\nu )\), then in (9) the α i , \(1 \leq i \leq d\), are not all zero, and hence the set \(supp({\nu }^{\ln ({Z}^{{_\ast}}) }) =\{\ln {Y }^{{_\ast}}(y),y \in supp(\nu )\}\) contains an interval. This implies that \({ \circ \atop supp} ({\nu }^{\ln {Z}^{{_\ast}} })\neq \varnothing \). Since \(0 \in supp(\nu )\), again from (9) it follows that \(0 \in supp({\nu }^{\ln {Z}^{{_\ast}} })\). Then ii) is a consequence of Theorem 24.10 in [24]. □ 

5 So Which f Can Give MEMM Preserving Levy Property?

If one considers some simple models, it is not difficult to obtain f-divergence minimal equivalent martingale measures for a variety of functions. In particular, one can see that the f-divergence minimal measure does not always preserve the Levy property. What can we claim for the functions f such that the f-divergence minimal martingale measure exists and preserves the Levy property?

Theorem 5.

Let \(f : {\mathbb{R}}^{+{_\ast}}\rightarrow \mathbb{R}\) be a strictly convex function of class \({\mathcal{C}}^{3}\) and let X be a Levy process given by its characteristics (b,c,ν). Assume there exists an f-divergence minimal martingale measure Q  ∗  on a time interval [0,T], which preserves the Levy property and belongs to \({\mathcal{K}}^{{_\ast}}\) .

Then, if supp(ν) has non-empty interior, contains zero and Y  ∗  is not identically 1, there exist a > 0 and \(\gamma \in \mathbb{R}\) such that for all \(x \in supp(Z_{T}^{{_\ast}})\) ,

$${f}^{{\prime\prime}}(x) = a{x}^{\gamma }.$$

If \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\) and there exists \(y \in supp(\nu )\) such that \({Y }^{{_\ast}}(y)\neq 1\) , then there exist \(n \in \mathbb{N}\) , \(\gamma \in \mathbb{R}\) , a > 0 and real constants \(b_{i},\tilde{b}_{i},1 \leq i \leq n\) , such that

$${f}^{{\prime\prime}}(x) = a{x}^{\gamma } + {x}^{\gamma } \sum \limits _{i=1}^{n}b_{ i}{(\ln (x))}^{i} + \frac{1} {x}\sum \limits _{i=1}^{n}\tilde{b}_{ i}{(\ln (x))}^{i-1}$$

We deduce this result from the equations obtained in Theorem 3. We will successively consider the cases when \({ \circ \atop supp} (\nu )\neq \varnothing \), then when c is invertible, and finally when c is not invertible.

5.1 First Case: The Interior of supp(ν) Is Not Empty

Proof of Theorem 5. We assume that \({ \circ \atop supp} (\nu )\neq \varnothing \), \(0 \in supp(\nu )\) and Y  ∗  is not identically 1 on supp(ν). According to Proposition 1, this implies, in both cases \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\) and \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}} = 0\), that \(supp(Z_{T}^{{_\ast}})\) is an interval, say J. Since the interior of supp(ν) is not empty, there exist open non-empty intervals \(I_{1},\ldots I_{d}\) such that \(I = I_{1} \times \ldots \times I_{d} \subseteq { \circ \atop supp} (\nu )\). Then it follows from Theorem 3 that for all \((x,y) \in J \times I\),

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = \Phi (x)\sum \limits _{i=1}^{d}\alpha _{ i}({e}^{y_{i} } - 1)$$
(25)

where Φ is a function, differentiable on \({ \circ \atop J}\), and \(\alpha \in {\mathbb{R}}^{d}\). If we now fix \(x_{0} \in { \circ \atop J}\), we obtain

$${Y }^{{_\ast}}(y) = \frac{1} {x_{0}}{({f}^{{^\prime}})}^{-1}({f}^{{^\prime}}(x_{ 0}) + \Phi (x_{0})\sum \limits _{i=1}^{d}\alpha _{ i}({e}^{y_{i} } - 1))$$

and so Y  ∗  is differentiable and monotone in each variable. Since Y  ∗  is not identically 1 on \({ \circ \atop supp} (\nu )\), we get that α≠0. We may now differentiate (25) with respect to a y i corresponding to α i ≠0, to obtain for all \((x,y) \in J \times I\),

$$\Psi (x_{0}){f}^{{\prime\prime}}(x{Y }^{{_\ast}}(y)) = \Psi (x){f}^{{\prime\prime}}(x_{ 0}{Y }^{{_\ast}}(y)),$$
(26)

where \(\Psi (x) = \frac{\Phi (x)} {x}\). Differentiating this new expression with respect to x on the one hand, and with respect to y i on the other hand, we obtain the system

$$\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &\Psi (x_{0}){Y }^{{_\ast}}(y){f}^{{\prime\prime\prime}}(x{Y }^{{_\ast}}(y)) = {f}^{{\prime\prime}}(x_{0}{Y }^{{_\ast}}(y)){\Psi }^{{^\prime}}(x) \\ \quad &\Psi (x_{0})x{f}^{{\prime\prime\prime}}(x{Y }^{{_\ast}}(y)) = x_{0}{f}^{{\prime\prime\prime}}(x_{0}{Y }^{{_\ast}}(y))\Psi (x) \end{array} \right.$$
(27)

In particular, separating the variables, we deduce from this system that there exists \(\gamma \in \mathbb{R}\) such that for all \(x \in { \circ \atop J}\),

$$\frac{{\Psi }^{{^\prime}}(x)} {\Psi (x)} = \frac{\gamma } {x}.$$

Hence, there exist a > 0 and \(\gamma \in \mathbb{R}\) such that for all \(x\,\in \,{ \circ \atop J}\), \(\Psi (x) = a{x}^{\gamma }\). It then follows from (26) and (27) that for all \((x,y) \in J \times I\),

$$\frac{{f}^{{\prime\prime\prime}}(x{Y }^{{_\ast}}(y))} {{f}^{{\prime\prime}}(x{Y }^{{_\ast}}(y))} = \frac{\gamma } {x{Y }^{{_\ast}}(y)}$$

and hence that \({f}^{{\prime\prime}}(x{Y }^{{_\ast}}(y)) = a{(x{Y }^{{_\ast}}(y))}^{\gamma }\).

We now take a sequence \((y_{m})_{m\geq 1}\), \(y_{m} \in supp(\nu )\), going to zero. Then the sequence \(({Y }^{{_\ast}}(y_{m}))_{m\geq 1}\), according to the formula for Y  ∗ , tends to 1. Inserting y m in the previous expression and passing to the limit we obtain that for all \(x \in { \circ \atop J}\),

$$\frac{{f}^{{\prime\prime\prime}}(x)} {{f}^{{\prime\prime}}(x)} = \frac{\gamma } {x}$$

and this proves the result on \({ \circ \atop supp} (Z_{T}^{{_\ast}})\). The final result on supp(Z T  ∗ ) can be obtained again by a limit passage. □ 
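As a sanity check of (25) (a remark of ours, not part of the proof), consider the relative entropy f(x) = xln(x), for which \({f}^{{^\prime}}(x) =\ln (x) + 1\) and hence \({f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) =\ln ({Y }^{{_\ast}}(y))\) does not depend on x. Equation (25) then forces Φ to be constant on J and gives

$$\ln ({Y }^{{_\ast}}(y)) = \Phi \sum \limits _{i=1}^{d}\alpha _{i}({e}^{y_{i}} - 1),$$

which is consistent with the known exponential form of the second Girsanov parameter of the minimal entropy martingale measure, and with (3) since here \({f}^{{\prime\prime}}(x) = {x}^{-1}\), i.e. a = 1 and γ = −1.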

5.2 Second Case: c Is Invertible and ν Is Nowhere Dense

In the first case, the proof relied on differentiating the function Y  ∗ . This is of course no longer possible when the support of ν is nowhere dense. However, since \({\,}^{\top }{\beta }^{{_\ast}}c{\beta }^{{_\ast}}\neq 0\), we get from Proposition 1 that \(supp({Z}^{{_\ast}}) = {\mathbb{R}}^{+,{_\ast}}\). Again from Theorem 3 we have for all x > 0 and y ∈ supp(ν),

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = x{f}^{{\prime\prime}}(x)\sum \limits _{i=1}^{d}\beta _{ i}^{{_\ast}}({e}^{y_{i} } - 1).$$
(28)

We will distinguish two similar cases: b > 1 and 0 < b < 1. For b > 1 we fix ε, 0 < ε < 1, and we introduce for \(a \in \mathbb{R}\) the following vector space:

$$V _{a,b} =\{ \phi \in {\mathcal{C}}^{1}([\epsilon (1\wedge b), \frac{1 \vee b} {\epsilon } ]),\text{ such that for }x \in [\epsilon , \frac{1} {\epsilon } ],\phi (bx)-\phi (x) = ax{\phi }^{{^\prime}}(x)\}$$

with the norm

$$\vert \!\vert \phi \vert \!\vert _{\infty } =\sup _{x\in [\epsilon ,\frac{1} {\epsilon } ]}\vert \phi (x)\vert +\sup _{x\in [\epsilon ,\frac{1} {\epsilon } ]}\vert \phi (bx)\vert $$

It follows from (28) that \({f}^{{^\prime}}\,\in \,V _{a,b}\) with b = Y  ∗ (y) and \(a\,=\,\sum \limits _{i=1}^{d}\beta _{i}^{{_\ast}}({e}^{y_{i}} - 1)\). The condition that there exists y ∈ supp(ν) such that \({Y }^{{_\ast}}(y)\neq 1\) ensures that \(\sum \limits _{i=1}^{d}\beta _{i}^{{_\ast}}({e}^{y_{i}} - 1)\neq 0\).

Lemma 4.

If a≠0, then V a,b is a finite dimensional vector space, closed with respect to \(\vert \!\vert \cdot \vert \!\vert _{\infty }\) .

Proof.

It is easy to verify that V a, b is a vector space. We show that V a, b is a closed vector space: if we consider a sequence \((\phi _{n})_{n\geq 1}\) of elements of V a, b which converges to a function ϕ, we denote by ψ the function such that \(\psi (x) = \frac{\phi (bx)-\phi (x)} {ax}\). We then have

$$\lim _{n\rightarrow +\infty }\vert \vert \phi _{n}^{{^\prime}}- \psi \vert \vert _{ \infty }\leq \frac{1} {\epsilon \vert a\vert (1 \wedge b)}\lim _{n\rightarrow +\infty }\vert \vert \phi _{n} - \phi \vert \vert _{\infty } = 0$$

Therefore, ϕ is differentiable and we have \({\phi }^{{^\prime}} = \psi \). Therefore, ϕ is of class \({\mathcal{C}}^{1}\) and belongs to V a, b . Hence, V a, b is closed with respect to \(\vert \!\vert \cdot \vert \!\vert _{\infty }\). Now, for \(\phi \in V _{a,b}\) and \(x,y \in [\epsilon , \frac{1} {\epsilon }]\), we have

$$\vert \phi (x)-\phi (y)\vert \,\leq \,\sup _{u\in [\epsilon ,\frac{1} {\epsilon } ]}\vert {\phi }^{{^\prime}}(u)\vert \vert x-y\vert \,\leq \,\sup _{ u\in [\epsilon ,\frac{1} {\epsilon } ]}\frac{\vert \phi (bu) - \phi (u)\vert } {\vert au\vert } \vert x-y\vert \,\leq \,\frac{\vert \vert \phi \vert \vert _{\infty }} {\vert a\vert \epsilon } \vert x-y\vert $$

Therefore, the unit ball of V a, b is bounded and equicontinuous, hence relatively compact by the Arzelà–Ascoli theorem, and it then follows from Riesz's theorem that V a, b is a finite-dimensional vector space. □ 

We now show that elements of V a, b belong to a specific class of functions.

Lemma 5.

All elements of V a,b are solutions to an Euler-type differential equation, that is to say there exist \(m \in \mathbb{N}\) and real numbers \((\rho _{i})_{0\leq i\leq m}\), not all zero, such that

$$\sum \limits _{i=0}^{m}\rho _{ i}{x}^{i}{\phi }^{(i)}(x) = 0.$$
(29)

Proof.

It is easy to see from the definition of V a, b that if \(\phi \in V _{a,b}\), then the function \(x\mapsto x{\phi }^{{^\prime}}(x)\) also belongs to V a, b . If we now denote by ϕ(i) the derivative of order i of ϕ, we see that the span of \(({x}^{i}{\phi }^{(i)}(x))_{i\geq 0}\) must be a vector subspace of V a, b , and in particular a finite-dimensional vector space. Hence this family is linearly dependent: there exist \(m \in \mathbb{N}\) and real constants \((\rho _{i})_{0\leq i\leq m}\), not all zero, such that (29) holds. □ 

Proof of Theorem 5. The previous result applies in particular to the function \({f}^{{^\prime}}\), since \({f}^{{^\prime}}\) satisfies (28). As a consequence, \({f}^{{^\prime}}\) satisfies an Euler-type differential equation. It is known that the change of variable x = exp(u) reduces this equation to a homogeneous differential equation of order m with constant coefficients (a symbolic sketch of the resulting characteristic polynomial is given after the proof). It is also known that the solution of such an equation can be written as a linear combination of the solutions corresponding to the different roots of its characteristic polynomial. These solutions being linearly independent, we need only consider a generic one, say \(f_{\lambda }^{{^\prime}}\), λ being a root of the characteristic polynomial. If the root λ is real and of multiplicity n, n ≤ m, then

$$f_{\lambda }^{{^\prime}}(x) = a_{ 0}{x}^{\lambda } + {x}^{\lambda } \sum \limits _{i=1}^{n}b_{ i}{(\ln (x))}^{i}$$

and if this root is complex then

$$f_{\lambda }^{{^\prime}}(x) = {x}^{Re(\lambda )} \sum \limits _{i=0}^{n}[c_{ i}\cos (Im(\lambda )\ln (x)) + d_{i}\sin (Im(\lambda )\ln (x))]\ln {(x)}^{i}$$

where \(a_{0},b_{i},c_{i},d_{i}\) are real constants. Since f is strictly convex, \({f}^{{^\prime}}\) is increasing, which forces \(c_{i} = d_{i} = 0\) for all i ≤ n; the complex-root case is therefore excluded. Putting

$$f_{\lambda }^{{^\prime}}(x) = a_{ 0}{x}^{\lambda } + {x}^{\lambda } \sum \limits _{i=1}^{n}b_{ i}{(\ln (x))}^{i}$$

into the equation

$${f}^{{^\prime}}(bx) - {f}^{{^\prime}}(x) = ax{f}^{{\prime\prime}}(x)$$
(30)

and, using the linear independence of the functions involved, we get that

$$a_{0}({b}^{\lambda } - a\lambda - 1) + {b}^{\lambda } \sum \limits _{i=1}^{n}b_{ i}{(\ln b)}^{i} - ab_{ 1} = 0$$
(31)

and that for all 1 ≤ i ≤ n,

$$\sum \limits _{k=i}^{n}{b}^{\lambda }b_{ k}C_{k}^{i}{(\ln (b))}^{k-i} - b_{ i}(1 + a\lambda ) - ab_{i+1}(i + 1) = 0$$
(32)

with \(b_{n+1} = 0\). We remark that the matrix corresponding to (32) is a triangular matrix M with \({b}^{\lambda } - 1 - a\lambda \) on the diagonal. If \({b}^{\lambda } - 1 - a\lambda \neq 0\), then the system of equations has a unique solution. This solution should also verify, for all x > 0,

$$f_{\lambda }^{{\prime\prime}}(x) > 0$$
(33)

If \({b}^{\lambda } - 1 - a\lambda \,=\,0\), then rank(M) = 0, and the \(b_{i}\) are free constants. Finally, we conclude that there exists a solution

$$f_{\lambda }^{{^\prime}}(x) = a{x}^{\lambda } + {x}^{\lambda } \sum \limits _{i=1}^{n}b_{ i}{(\ln (x))}^{i}$$

satisfying (33) for any λ such that \({b}^{\lambda } - 1 - a\lambda = 0\). □ 
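To illustrate the reduction used above (our sketch, with generic symbols not tied to the model), substituting a power \(\phi (x) = {x}^{\lambda }\) into an order-2 Euler-type equation exhibits the characteristic polynomial whose roots generate the solutions \({x}^{\lambda }\), with logarithmic factors appearing for repeated roots:

```python
# Sketch (ours): plugging phi(x) = x**lam into an order-2 Euler equation
# rho0*phi + rho1*x*phi' + rho2*x**2*phi'' = 0 and dividing by x**lam
# leaves the characteristic polynomial in lam, exactly as for the
# constant-coefficient ODE obtained through x = exp(u).
import sympy as sp

x, lam = sp.symbols('x lam', positive=True)
r0, r1, r2 = sp.symbols('rho0 rho1 rho2')
phi = x**lam

euler = r0*phi + r1*x*sp.diff(phi, x) + r2*x**2*sp.diff(phi, x, 2)
char_poly = sp.expand(sp.simplify(euler / phi))
print(char_poly)   # rho0 + rho1*lam + rho2*lam*(lam - 1), a polynomial in lam
```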

5.3 Third Case: c Is Non Invertible and ν Is Nowhere Dense

We finally consider the case of Levy models which have a continuous component but for which the matrix c is not invertible. It follows from Theorem 3 that in this case we have for all \(x \in supp({Z}^{{_\ast}})\) and \(y \in supp(\nu )\)

$${f}^{{^\prime}}(x{Y }^{{_\ast}}(y)) - {f}^{{^\prime}}(x) = x{f}^{{\prime\prime}}(x)\sum \limits _{i=1}^{d}\beta _{ i}^{{_\ast}}({e}^{y_{i} } - 1) -\sum \limits _{j=1}^{d}V _{ j}({e}^{y_{j} } - 1)$$
(34)

where cV = 0.

Proof of Theorem 5. First of all, we note that if \({f}^{{^\prime}}\) satisfies (34) then \(\phi : x\mapsto x{f}^{{\prime\prime}}(x)\) satisfies (30): indeed, differentiating (34) with respect to x and multiplying by x gives \(\phi (x{Y }^{{_\ast}}(y)) - \phi (x) = x{\phi }^{{^\prime}}(x)\sum \limits _{i=1}^{d}\beta _{i}^{{_\ast}}({e}^{y_{i}} - 1)\), since the last term of (34) does not depend on x. The conclusions of the previous section then hold for ϕ. □ 

6 Minimal Equivalent Measures When \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\)

Our aim in this section is to consider in more detail the class of minimal martingale measures for the functions which satisfy \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\). First of all, we note that these functions are exactly those for which there exist A > 0 and real numbers B, C such that

$$f(x) = Af_{\gamma }(x) + Bx + C$$

where

$$f_{\gamma }(x) = \left \{\begin{array}{ll} c_{\gamma }{x}^{\gamma +2} &\text{ if }\gamma \neq - 1,-2, \\ x\ln (x) &\text{ if }\gamma = -1, \\ -\ln (x) &\text{ if }\gamma = -2. \end{array} \right.$$
(35)

and \(c_{\gamma } = \mbox{ sign}[(\gamma + 1)/(\gamma + 2)]\). In particular, the minimal measure for f will be the same as that for f γ. Minimal measures for the different functions f γ have been well studied. It has been shown in [8, 15, 16] that in all these cases, the minimal measure, when it exists, preserves the Levy property.
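The following sympy check (ours; the constant \(c_{\gamma }\) is omitted since it only adjusts the sign) confirms that each representative in (35) has second derivative proportional to \({x}^{\gamma }\):

```python
# Check (ours) that the representatives in (35) satisfy f''(x) ~ x**gamma.
import sympy as sp

x, g = sp.symbols('x gamma', positive=True)

print(sp.simplify(sp.diff(x**(g + 2), x, 2)))   # (gamma+1)*(gamma+2)*x**gamma
print(sp.simplify(sp.diff(x*sp.log(x), x, 2)))  # 1/x,     i.e. gamma = -1
print(sp.simplify(sp.diff(-sp.log(x), x, 2)))   # x**(-2), i.e. gamma = -2
```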

Sufficient conditions for the existence of a minimal measure and an explicit expression of the associated Girsanov parameters have been given in the case of relative entropy in [10, 13] and for power functions in [15]. It was also shown in [13] that these conditions are in fact necessary in the case of relative entropy or for power functions when d = 1. Our aim in this section is to give a unified expression of such conditions for all functions which satisfy \({f}^{{\prime\prime}}(x)\,=\,a{x}^{\gamma }\) and to show that, under some conditions, they are necessary and sufficient, for all d-dimensional Levy models.

We have already mentioned that f-divergence minimal martingale measures play an important role in the determination of utility maximising strategies. In this context, it is useful to have further invariance properties for the minimal measures such as scaling and time invariance properties. This is the case when \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\).

Theorem 6.

Consider a Levy process X with characteristics (b,c,ν) and let f be a function such that \({f}^{{\prime\prime}}(x)\,=\,a{x}^{\gamma }\) , where a > 0 and \(\gamma \in \mathbb{R}\) . Suppose that c≠0 or \({ \circ \atop supp} (\nu )\neq \varnothing \) . Then there exists an f-divergence minimal martingale measure Q, equivalent to P and preserving the Levy property, if and only if there exist \(\gamma ,\beta \in {\mathbb{R}}^{d}\) and a measurable function \(Y : {\mathbb{R}}^{d} \setminus \{ 0\} \rightarrow {\mathbb{R}}^{+}\) such that

$$Y (y) = {({f}^{{^\prime}})}^{-1}({f}^{{^\prime}}(1) + \sum \limits _{i=1}^{d}\gamma _{ i}({e}^{y_{i} } - 1))$$
(36)

and such that the following properties hold:

$$Y (y) > 0\,\,\,\nu - a.e.,$$
(37)
$$\sum \limits _{i=1}^{d} \int \limits _{\vert y\vert \geq 1}({e}^{y_{i} } - 1)Y (y)\nu (dy) < +\infty.$$
(38)
$$b + \frac{1} {2}diag(c) + c\beta + \int \limits _{{\mathbb{R}}^{d}}(({e}^{y} - 1)Y (y) - h(y))\nu (dy) = 0.$$
(39)

If such a measure exists, the Girsanov parameters associated with Q are β and Y , and this measure is scale and time invariant.
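As an illustration of the form (36) (our remark), for the relative entropy case f(x) = xln (x) one has \({f}^{{^\prime}}(x) =\ln (x) + 1\), so that (36) becomes

$$Y (y) =\exp \left (\sum \limits _{i=1}^{d}\gamma _{i}({e}^{y_{i} } - 1)\right ),$$

while for f(x) = x 2 ∕2, i.e. γ = 0, one has \({f}^{{^\prime}}(x) = x\) and \(Y (y) = 1 + \sum \limits _{i=1}^{d}\gamma _{i}({e}^{y_{i}} - 1)\).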

We begin with some technical lemmas.

Lemma 6.

Let Q be a measure preserving the Levy property. Then, \(Q_{T} \sim P_{T}\) for all T > 0 if and only if

$$Y (y) > 0\,\,\,\nu - a.e.,$$
(40)
$$\int \limits _{{\mathbb{R}}^{d}}{(\sqrt{Y (y)} - 1)}^{2}\nu (dy) < +\infty.$$
(41)

Proof.

See Theorem 2.1, p. 209 of [14]. □ 

Lemma 7.

Let \(Z_{T} = \frac{dQ_{T}} {dP_{T}}\) . Assuming \(Q_{T} \sim P_{T}\) , the condition \(E_{P}\vert f(Z_{T})\vert < \infty \) is equivalent to

$$\int \limits _{{\mathbb{R}}^{d}}[f(Y (y)) - f(1) - {f}^{{^\prime}}(1)(Y (y) - 1)]\nu (dy) < +\infty $$
(42)

Proof.

In our particular case, \(E_{P}\vert f(Z_{T})\vert < \infty \) is equivalent to the existence of \(E_{P}f(Z_{T})\). We use the Ito formula to express this integrability condition in predictable terms. Taking for n ≥ 1 the stopping times

$$s_{n} =\inf \{ t \geq 0 : Z_{t} > n\,\mbox{ or}\,Z_{t} < 1/n\}$$

where \(\inf \{\varnothing \} = +\infty \), we get for \(\gamma \neq - 1,-2\) and \(\alpha = \gamma + 2\) that P-a.s.

$$\begin{array}{rcl} & & Z_{T\wedge s_{n}}^{\alpha } = 1 + \int \limits _{0}^{T\wedge s_{n} }\alpha \,Z_{s-}^{\alpha }\beta dX_{ s}^{c} + \int \limits _{0}^{T\wedge s_{n} } \int \limits _{{\mathbb{R}}^{d}}Z_{s-}^{\alpha }({Y }^{\alpha }(y) - 1)({\mu }^{X} - {\nu }^{X,P})(ds,dy) \\ & & +\frac{1} {2}\alpha (\alpha - 1){\beta }^{2}\,c\,\int \limits _{0}^{T\wedge s_{n} }Z_{s-}^{\alpha }\,ds + \int \limits _{0}^{T\wedge s_{n} } \int \limits _{{\mathbb{R}}^{d}}Z_{s-}^{\alpha }[{Y }^{\alpha }(y) - 1 - \alpha (Y (y) - 1)]ds\,\,\nu (dy) \\ \end{array}$$

Hence,

$$Z_{T\wedge s_{n}}^{\alpha } = \mathcal{E}({N}^{(\alpha )} + {A}^{(\alpha )})_{ T\wedge s_{n}}$$
(43)

where

$$N_{t}^{(\alpha )} = \int \limits _{0}^{t}\alpha \,\beta dX_{ s}^{c} + \int \limits _{0}^{t}({Y }^{\alpha }(y) - 1)({\mu }^{X} - {\nu }^{X,P})(ds,dy)$$

and

$$A_{t}^{(\alpha )} = \frac{1} {2}\alpha (\alpha - 1){\beta }^{2}\,c\,t + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}[{Y }^{\alpha }(y) - 1 - \alpha (Y (y) - 1)]ds\,\,\nu (dy)$$

Since \([{N}^{(\alpha )},{A}^{(\alpha )}]_{t} = 0\) for each t ≥ 0 we have

$$Z_{T\wedge s_{n}}^{\alpha } = \mathcal{E}({N}^{(\alpha )})_{ T\wedge s_{n}}\mathcal{E}({A}^{(\alpha )})_{ T\wedge s_{n}}$$

If \(E_{P}Z_{T}^{\alpha } < \infty \), then by the Jensen inequality

$$0 \leq Z_{T\wedge s_{n}}^{\alpha } \leq E_{ P}(Z_{T}^{\alpha }\,\vert \,\mathcal{F}_{ T\wedge s_{n}})$$

and since the right-hand side of this inequality forms a uniformly integrable sequence, \((Z_{T\wedge s_{n}}^{\alpha })_{n\geq 1}\) is also uniformly integrable. We remark that in the cases α > 1 and α < 0, \(A_{t}^{(\alpha )} \geq 0\) for all t ≥ 0 and

$$\mathcal{E}({A}^{(\alpha )})_{ T\wedge s_{n}} =\exp (A_{T\wedge s_{n}}^{(\alpha )}) \geq 1.$$

This means that \((\mathcal{E}({N}^{(\alpha )})_{T\wedge s_{n}})_{n\in {\mathbb{N}}^{{_\ast}}}\) is uniformly integrable and

$$E_{P}(Z_{T}^{\alpha }) =\exp (A_{ T}^{(\alpha )}).$$
(44)

If (42) holds, then by the Fatou lemma and since \(\mathcal{E}({N}^{(\alpha )})\) is a local martingale we get

$$E_{P}(Z_{T}^{\alpha }) \leq \underline{\lim }_{ n\rightarrow \infty }E_{P}(Z_{T\wedge s_{n}}^{\alpha })\, \leq \exp (A_{T}^{(\alpha )}).$$

For 0 < α < 1, we have again

$$Z_{T\wedge s_{n}}^{\alpha } = \mathcal{E}({N}^{(\alpha )})_{ T\wedge s_{n}}\mathcal{E}({A}^{(\alpha )})_{ T\wedge s_{n}}$$

with the uniformly integrable sequence \((Z_{T\wedge s_{n}}^{\alpha })_{n\geq 1}\). Since

$$\mathcal{E}({A}^{(\alpha )})_{ T\wedge s_{n}} =\exp (A_{T\wedge s_{n}}^{(\alpha )}) \geq \exp (A_{ T}^{(\alpha )}),$$

the sequence \((\mathcal{E}({N}^{(\alpha )})_{T\wedge s_{n}})_{n\in {\mathbb{N}}^{{_\ast}}}\) is uniformly integrable and

$$E_{P}(Z_{T}^{\alpha }) =\exp (A_{ T}^{(\alpha )}).$$
(45)
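Since Q preserves the Levy property, the Girsanov parameters (β,Y ) are deterministic and so is \(A_{T}^{(\alpha )}\); the formulas (44) and (45) can therefore be written explicitly as (our rewriting)

$$E_{P}(Z_{T}^{\alpha }) =\exp \left \{T\left [\frac{1} {2}\alpha (\alpha - 1){\beta }^{2}\,c + \int \limits _{{\mathbb{R}}^{d}}({Y }^{\alpha }(y) - 1 - \alpha (Y (y) - 1))\nu (dy)\right ]\right \}.$$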

For \(\gamma = -1\) we have that f(x) = xln(x) up to a linear term and

$$\begin{array}{rl} Z_{T\wedge s_{n}}\ln (Z_{T\wedge s_{n}})& = \int \limits _{0}^{T\wedge s_{n}}(\ln (Z_{s-}) + 1)Z_{s-}\beta dX_{s}^{c} \\ & + \int \limits _{0}^{T\wedge s_{n}} \int \limits _{{\mathbb{R}}^{d}}Z_{s-}[\ln (Z_{s-})(Y (y) - 1) + Y (y)\,\ln (Y (y))]({\mu }^{X} - {\nu }^{X,P})(ds,dy) \\ & + \frac{1} {2}{\beta }^{2}\,c\,\int \limits _{ 0}^{T\wedge s_{n}}Z_{ s-}\,ds\,+\,\int \limits _{0}^{T\wedge s_{n}} \int \limits _{{\mathbb{R}}^{d}}Z_{s-}[Y (y)\ln (Y (y))\,-\,Y (y)\,+\,1]ds\,\nu (dy) \end{array}$$

Taking mathematical expectation we obtain:

$$E_{P}[Z_{T\wedge s_{n}}\ln (Z_{T\wedge s_{n}})] = \frac{1} {2}{\beta }^{2}\,c\,E_{P} \int \limits _{0}^{T\wedge s_{n} }Z_{s-}\,ds + E_{P} \int \limits _{0}^{T\wedge s_{n} } \int \limits _{{\mathbb{R}}^{d}}Z_{s-}[Y (y)\ln (Y (y)) - Y (y) + 1]ds\,\,\nu (dy)$$
(46)

If \(E_{P}[Z_{T}\ln (Z_{T})] < \infty \), then the sequence \((Z_{T\wedge s_{n}}\ln (Z_{T\wedge s_{n}}))_{n\in {\mathbb{N}}^{{_\ast}}}\) is uniformly integrable; moreover \(E_{P}(Z_{s-})\,=\,1\), and applying the Lebesgue convergence theorem we obtain that

$$E_{P}[Z_{T}\ln (Z_{T})] = \frac{T} {2} {\beta }^{2}\,c + T\int \limits _{{\mathbb{R}}^{d}}[Y (y)\ln (Y (y)) - Y (y) + 1]\nu (dy)$$
(47)

and this implies (42). Conversely, if (42) holds, then by the Fatou lemma applied to (46) we deduce that \(E_{P}[Z_{T}\ln (Z_{T})] < \infty \).

For \(\gamma = -2\), we have \(f(x) = -\ln (x)\) and, exchanging P and Q, we get:

$$E_{P}[-\ln (Z_{T})] = E_{Q}[\tilde{Z}_{T}\ln (\tilde{Z}_{T})] = \frac{T} {2} {\beta }^{2}\,c + T\int \limits _{{\mathbb{R}}^{d}}[\tilde{Y }(y)\ln (\tilde{Y }(y)) -\tilde{Y }(y) + 1]{\nu }^{Q}(dy)$$

where \(\tilde{Z}_{T} = 1/Z_{T}\) and \(\tilde{Y }(y) = 1/Y (y)\). But \({\nu }^{Q}(dy) = Y (y)\nu (dy)\) and, finally,

$$E_{P}[-\ln (Z_{T})] = \frac{T} {2} {\beta }^{2}\,c + T\int \limits _{{\mathbb{R}}^{d}}[-\ln (Y (y)) + Y (y) - 1]\nu (dy)$$
(48)

Again by the Fatou lemma we get that \(E_{P}[-\ln (Z_{T})] < \infty \), which implies (42). □ 

Lemma 8.

If the second Girsanov parameter Y has the particular form (36), then the condition

$$\sum \limits _{i=1}^{d} \int \limits _{\vert y\vert \geq 1}({e}^{y_{i} } - 1)Y (y)\nu (dy) < +\infty $$
(49)

implies the conditions  (41) and  (42).

Proof.

We can split each integral in (41) and (42) into two parts, integrating over the sets \(\{\vert y\vert \leq 1\}\) and \(\{\vert y\vert > 1\}\). Then we can use the particular form (36) of Y and conclude easily by writing a Taylor expansion of order 2. □ 
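To sketch the small-jump part of this argument (our illustration; we use that \({f}^{{\prime\prime}}\) is continuous and strictly positive at 1), on the set \(\{\vert y\vert \leq 1\}\) a second order expansion of (36) gives

$$Y (y) - 1 = \frac{1} {{f}^{{\prime\prime}}(1)}\sum \limits _{i=1}^{d}\gamma _{i}({e}^{y_{i} } - 1) + O(\vert y{\vert }^{2}) = O(\vert y\vert ),$$

so that both \({(\sqrt{Y (y)} - 1)}^{2}\) and \(f(Y (y)) - f(1) - {f}^{{^\prime}}(1)(Y (y) - 1)\) are \(O(\vert y{\vert }^{2})\) near the origin and are therefore ν-integrable there; on \(\{\vert y\vert > 1\}\) one uses (49) together with the explicit form of Y.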

Proof of Theorem 6. Necessity. We suppose that there exists an f-divergence minimal equivalent martingale measure Q preserving the Levy property of X. Then, since \(Q_{T} \sim P_{T}\), the conditions (37) and (40) follow from Theorem 2.1, p. 209 of [14]. From Theorem 3 we deduce that (36) holds. Then, the condition (38) follows from the fact that S is a martingale under Q. Finally, the condition (39) follows from the Girsanov theorem since Q is a martingale measure and, hence, the drift of S under Q is zero.

Sufficiency. We take β and Y verifying the conditions (37)–(39) and we construct

$$M_{t} = \sum \limits _{i=1}^{d} \int \limits _{0}^{t}{\beta }^{(i)}dX_{ s}^{c,(i)} + \int \limits _{0}^{t} \int \limits _{{\mathbb{R}}^{d}}(Y (y) - 1)({\mu }^{X} - {\nu }^{X,P})(ds,dy)$$
(50)

As is known from Theorem 1.33, pp. 72–73, of [14], the last stochastic integral is well defined if

$$\begin{array}{rcl} & & C(W) = T\int \limits _{{\mathbb{R}}^{d}}{(Y (y) - 1)}^{2}I_{\{ \vert Y (y)-1\vert \leq 1\}}\nu (dy) < \infty , \\ & & C({W}^{{^\prime}}) = T\int \limits _{{\mathbb{R}}^{d}}\vert Y (y) - 1\vert I_{\{\vert Y (y)-1\vert >1\}}\nu (dy) < \infty.\end{array}$$

But the condition (38), the relation (36) and Lemma 8 imply (41). Consequently, \((Y - 1) \in G_{loc}({\mu }^{X})\) and M is a local martingale. Then we take

$$Z_{T} = \mathcal{E}(M)_{T}$$

and this defines the measure Q T by its Radon-Nikodym density. Now, the conditions (37) and (38) together with the relation (36) and Lemma 8 imply (40) and (41), and, hence, from Lemma 6 we deduce \(P_{T} \sim Q_{T}\).

We show that \(E_{P}\vert f(Z_{T})\vert < \infty \). Since \(P_{T} \sim Q_{T}\), Lemma 7 together with the condition (42), which holds by Lemma 8, gives the needed integrability.

Now, since (39) holds, Q is a martingale measure, and it remains to show that Q is indeed f-divergence minimal. For that we take any equivalent martingale measure \(\bar{Q}\) and we show that

$$E_{Q}{f}^{{^\prime}}(Z_{ T}) \leq E_{\bar{Q}}{f}^{{^\prime}}(Z_{ T}).$$
(51)

If this inequality holds, Theorem 1 implies that Q is minimal.

In the case \(\gamma \neq - 1,-2\) we obtain from (43) replacing α by γ + 1:

$$Z_{T}^{\gamma +1} = \mathcal{E}({N}^{(\gamma +1)})_{ T}\,\,\exp (A_{T}^{(\gamma +1)})$$

and, using the particular form of f and Y, we get that for 0 ≤ t ≤ T

$$N_{t}^{(\gamma +1)} = \sum \limits _{i=1}^{d}{\theta }^{(i)}\hat{X}_{ t}^{(i)}$$

where θ = β if c≠0 and θ = γ if \(c = 0\), and \(\hat{{X}}^{(i)}\) is the stochastic logarithm of S (i) . So, \(\mathcal{E}({N}^{(\gamma +1)})\) is a local martingale under any equivalent martingale measure and we get

$$E_{\bar{Q}}Z_{T}^{\gamma +1} \leq \exp (A_{ T}^{(\gamma +1)}) = E_{ Q}Z_{T}^{\gamma +1}$$

and, hence, (51).

In the case \(\gamma = -1\) we prove, using again the particular form of f and Y, that

$${f}^{{^\prime}}(Z_{ T}) = E_{Q}({f}^{{^\prime}}(Z_{ T})) + \sum \limits _{i=1}^{d}{\theta }^{(i)}\hat{X}_{ T}^{(i)}$$

with θ = β if c≠0 and θ = γ if c = 0. Since \(E_{\bar{Q}}\hat{X}_{T} = 0\) we get that

$$E_{\bar{Q}}({f}^{{^\prime}}(Z_{ T})) \leq E_{Q}({f}^{{^\prime}}(Z_{ T}))$$

which proves that Q is f-divergence minimal.

The case \(\gamma = -2\) can be treated in a similar way.

Finally, note that the conditions which appear in Theorem 6 do not depend in any way on the time interval which is considered and, hence, the minimal measure is time invariant. Furthermore, if Q  ∗  is f-divergence minimal, the equality

$$f(cx) = Af(x) + Bx + C$$

with A, B, C constants, A > 0, gives

$$E_{P}[f(c\frac{d\bar{Q}} {dP})] = AE_{P}[f(\frac{d\bar{Q}} {dP})]+B+C \geq AE_{P}[f(\frac{dQ} {dP})]+B+C = E_{P}[f(c\frac{dQ} {dP})]$$

and Q is scale invariant. □ 
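For instance (our illustration), in the relative entropy case f(x) = xln (x) one has

$$f(cx) = cx\ln (cx) = c\,f(x) + (c\ln c)\,x,$$

so that A = c, B = cln (c) and C = 0; more generally, every f with \({f}^{{\prime\prime}}(x) = a{x}^{\gamma }\) satisfies \(f(cx) = {c}^{\gamma +2}f(x) + Bx + C\) for suitable real B, C.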

6.1 Example

We now give an example of a Levy model and a convex function which does not satisfy \({f}^{{\prime\prime}}(x)\,=\,a{x}^{\gamma }\) but for which the minimal measure nevertheless preserves the Levy property. We consider the function \(f(x)\,=\,\frac{{x}^{2}} {2} + x\ln (x) - x\) and the \({\mathbb{R}}^{2}\)-valued Levy process given by \(X_{t} = (W_{t} +\ln (2)P_{t},W_{t} +\ln (3)P_{t} - t)\), where W is a standard one-dimensional Brownian motion and P is a standard one-dimensional Poisson process. Note that the covariance matrix

$$c = \left (\begin{array}{ll} 1&1\\ 1 &1 \end{array} \right )$$

is not invertible. The support of the Levy measure is the singleton \(\{a\}\) with a = (ln(2), ln(3)), and is in particular nowhere dense. Let Q be a martingale measure for this model, and (β, Y ) its Girsanov parameters, where \({\,}^{\top }\beta = (\beta _{1},\beta _{2})\). In order for Q to be a martingale measure preserving the Levy property, we must have

$$\begin{array}{rcl} & & \ln (2) + \frac{1} {2} + \beta _{1} + \beta _{2} + Y (a) = 0, \\ & & \ln (3) -\frac{1} {2} + \beta _{1} + \beta _{2} + 2Y (a) = 0,\end{array}$$
(52)

and, hence, \(Y (a) = 1 -\ln (\frac{3} {2})\). Now, it is not difficult to verify using the Ito formula that the measure Q satisfies \(E_{P}Z_{T}^{2} < \infty \) and, hence, \(E_{P}\vert f(Z_{T})\vert < +\infty \). Moreover, the conditions (37) and (38) are satisfied, meaning that \(P_{T} \sim Q_{T}\).

Furthermore, in order for Q to be minimal we must have, according to Theorem 3,

$${f}^{{^\prime}}(xY (y)) - {f}^{{^\prime}}(x) = x{f}^{{\prime\prime}}(x)\sum \limits _{i=1}^{2}\beta _{ i}({e}^{a_{i} } - 1) + \sum \limits _{i=1}^{2}v_{ i}({e}^{a_{i} } - 1)$$

with \(a_{1}\,=\,\ln 2,a_{2}\,=\,\ln 3\) and \(V \,{=\,\,}^{\top }(v_{1},v_{2})\) such that cV = 0. We remark that \(v_{2} = -v_{1}\). Then for \(x \in supp(Z_{T})\)

$$\ln (Y (a)) + x(Y (a) - 1) = (x + 1)(\beta _{1} + 2\beta _{2}) - v_{1}$$

and since \(supp(Z_{T}) = {\mathbb{R}}^{+,{_\ast}}\) we must have

$$\beta _{1} + 2\beta _{2} = Y (a) - 1\text{ and }\beta _{1} + 2\beta _{2} - v_{1} =\ln (Y (a))$$

Using (52), this leads to

$$\left \{\begin{array}{@{}l@{\quad }l@{}} \quad &v_{1} = -\ln (1 -\ln (\frac{3} {2})) -\ln (\frac{3} {2}) \\ \quad &\beta _{1} = 3\ln (3) - 5\ln (2) - 3 \\ \quad &\beta _{2} = \frac{3} {2} + 3\ln (2) - 2\ln (3) \end{array} \right.$$
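The following numerical check (ours) verifies that these values are consistent with the system (52) and with the identification of coefficients carried out above:

```python
# Numerical check (ours) of the Girsanov parameters of the example.
import numpy as np

ln2, ln3 = np.log(2.0), np.log(3.0)

Ya = 1.0 - np.log(1.5)          # Y(a), obtained by subtracting the two lines of (52)
s = -ln2 - 0.5 - Ya             # beta_1 + beta_2, from the first line of (52)
t = Ya - 1.0                    # beta_1 + 2*beta_2, coefficient of x
beta2 = t - s
beta1 = s - beta2
v1 = t - np.log(Ya)             # constant term: beta_1 + 2*beta_2 - v_1 = ln Y(a)

assert np.isclose(ln3 - 0.5 + beta1 + beta2 + 2*Ya, 0.0)   # second line of (52)
assert np.isclose(beta1, 3*ln3 - 5*ln2 - 3)
assert np.isclose(beta2, 1.5 + 3*ln2 - 2*ln3)
assert np.isclose(v1, -np.log(1 - np.log(1.5)) - np.log(1.5))
print(Ya, beta1, beta2, v1)
```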

We now need to check that the martingale measure given by these Girsanov parameters is indeed minimal. Note that the decomposition of Theorem 4 can now be written

$${f}^{{^\prime}}(Z_{ T}) = E_{Q}[{f}^{{^\prime}}(Z_{ T})] + \sum \limits _{i=1}^{2} \int \limits _{0}^{T}\left [\beta _{ i}( \frac{1} {Z_{s-}} + E_{Q}[Z_{T-s}]) + v_{i}\right ] \frac{dS_{s}^{i}} {S_{s-}^{i}}$$

But for \(s \geq 0\)

$$\frac{dS_{s}^{i}} {S_{s-}^{i}} = d\hat{X}_{s}^{i}$$

and the stochastic integral in the above decomposition is a local martingale with respect to any martingale measure \(\bar{Q}\). Taking a localising sequence and then the expectation with respect to \(\bar{Q}\), we get after a passage to the limit that

$$E_{\bar{Q}}[{f}^{{^\prime}}(Z_{ T})] \leq E_{Q}[{f}^{{^\prime}}(Z_{ T})],$$

and so, it follows from Theorem 1 that the measure Q is indeed minimal.