1 Introduction

Stochastic systems with both regular and singular controls have numerous applications in finance and insurance, including portfolio optimization with transaction costs [8], optimal investment and dividend problems [1] and optimal reinsurance and dividend problems [5, 12, 13]. Recently, the importance of model uncertainty has been well recognized in modeling financial and insurance dynamics. In the presence of model uncertainty, these regular-singular problems can be formulated as regular-singular stochastic differential games [9], which motivates our work. In this paper, we study two-person non-zero-sum regular-singular stochastic differential games, where the state process is described by a controlled singular Itô–Lévy process. The information available to the two players is asymmetric partial information, and the control of each player consists of both a regular control and a singular control. We aim to establish the associated maximum principle for their Nash equilibrium points.

The first version of the stochastic maximum principle that covers singular control problems was obtained by Cadenillas and Haussmann [4]. Since then, there has been an extensive literature on maximum principles for singular control problems under more general assumptions (see, for example, [3, 6, 10, 19] and the references therein). For regular-singular control problems, a maximum principle was established by the relaxed control approach in [22], where the control system was governed by a forward–backward stochastic differential equation (SDE) driven by Brownian motion. Hafayed et al. [11] considered a similar problem in the partial information case, where the system was governed by a mean-field controlled SDE driven by Teugels martingales associated with some Lévy processes and an independent Brownian motion. Hu et al. [9] derived maximum principles for regular-singular mean-field stochastic differential games. In the above-mentioned maximum principles, the adjoint processes are defined in terms of backward SDEs, which are usually hard to solve.

In order to avoid solving the backward SDEs, Øksendal and Sulem [16] applied the Malliavin calculus approach to establish a maximum principle for optimal control of a forward–backward stochastic system, where the adjoint processes are given directly in terms of the parameters and the states of the system, not by backward SDEs. Moreover, concavity conditions are not required in their maximum principle. Afterwards, the Malliavin calculus approach was widely used to derive maximum principles for various control problems and games, such as singular control problems [17], mean-field control problems [14], partial observation control problems [20], regular-impulse control problems [21], partial information stochastic differential games [2], stochastic differential games with insider information [15] and forward–backward stochastic differential games [18].

Our aim in this paper is to follow the Malliavin calculus approach of Øksendal and Sulem [16] and to derive necessary optimality conditions, in the form of a stochastic maximum principle, for Nash equilibrium points of the regular-singular games. In our formulation, the adjoint processes are explicitly represented by the parameters and the states of the system, instead of by backward SDEs. Since the control strategy includes both a regular control and a singular control, our results can be regarded as a generalization of [16] to regular-singular games.

2 Problem formulation

We start with a complete filtered probability space \(\left( {\varOmega }, \mathcal {F},\left\{ \mathcal {F}_{t} \right\} _{t\ge 0}, P \right) \). Suppose that the state process \(X(t)=X(t,\omega )\); \(t\in [0,T]\), \(\omega \in {\varOmega }\), is described by the following controlled singular jump diffusion:

$$\begin{aligned} dX(t)&= b\left( t,X(t),u_{1}(t),u_{2}(t),\omega \right) dt +\sigma \left( t,X(t),u_{1}(t),u_{2}(t),\omega \right) dB(t)\nonumber \\&\quad +\int _{\mathbb {R}_{0}}\gamma \left( t,X(t^{-}),u_{1}(t),u_{2}(t),z,\omega \right) \tilde{N}(dt,dz)\nonumber \\&\quad +\lambda _{1}(t,X(t))d\xi _{1}(t)+\lambda _{2}(t,X(t))d\xi _{2}(t),\quad X(0)=x_{0}\in \mathbb {R}, \end{aligned}$$
(1)

where \(b:[0,T]\times \mathbb {R}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}\), \(\sigma :[0,T]\times \mathbb {R}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}\), \(\gamma :[0,T]\times \mathbb {R}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times \mathbb {R}_{0}\times {\varOmega }\rightarrow \mathbb {R}\) and \(\lambda _{1},\lambda _{2}:[0,T]\times \mathbb {R}\times {\varOmega }\rightarrow \mathbb {R}\) are given functions, \(\mathbb {R}_{0}:=\mathbb {R} \setminus \left\{ 0\right\} \) and \(\mathcal {U}_{i}\) are given nonempty open convex subsets of \(\mathbb {R}\). Here B(t) is a Brownian motion and \(\tilde{N}(dt,dz)=N(dt,dz)-\nu (dz)dt\) is the compensated Poisson random measure, where \(\nu \) is the Lévy measure of a Lévy process \(\eta \) with jump measure N. The process \(u_{i}(t)=u_{i}(t,\omega )\in \mathcal {U}_{i}\) is a regular stochastic control, while \(\xi _{i}(t)=\xi _{i}(t,\omega )\in \mathbb {R}\) is a singular control, assumed to be càdlàg and non-decreasing for each \(\omega \), with \(\xi _{i}(0^{-})=0\), \(i=1,2\).
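As a purely illustrative special case (our choice, not imposed by the model), covering the dividend models cited in the introduction, one may take
$$\begin{aligned} b(t,x,u_{1},u_{2})=\left( \mu +u_{1}+u_{2}\right) x,\quad \sigma (t,x,u_{1},u_{2})=\sigma _{0}x,\quad \gamma (t,x,u_{1},u_{2},z)=xz,\quad \lambda _{1}=\lambda _{2}=-1, \end{aligned}$$
with constants \(\mu \) and \(\sigma _{0}\); then X(t) is a geometric Itô–Lévy process whose drift is modulated by the regular controls and from which player i withdraws the cumulative amount \(\xi _{i}(t)\).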

We consider two-person stochastic differential games. For \(t\in [0,T]\), player i intervenes in the system with the regular-singular control \(\left( u_{i},\xi _{i}\right) \), \(i=1,2\). Suppose that the information available to the two players is asymmetric partial information. This means that there are two subfiltrations \(\mathcal {G}^{(1)}_{t}\) and \(\mathcal {G}^{(2)}_{t}\) of \(\mathcal {F}_{t}\) satisfying

$$\begin{aligned} {\mathcal {G}}^{(i)}_{t}\subseteq {\mathcal {F}}_{t},\quad t\in [0,T],\quad i=1,2. \end{aligned}$$

Player i decides his strategy \(\left( u_{i}(t), \xi _{i}(t)\right) \) based on the partial information \(\mathcal {G}^{(i)}_{t}\), so the regular-singular control \(\left( u_{i}(t), \xi _{i}(t)\right) \) is required to be \(\mathcal {G}^{(i)}_{t}-\)adapted. For example, let

$$\begin{aligned} \mathcal {G}^{(1)}_{t}= \mathcal {F}_{(t-\delta _{1})^{+}}\quad \text {and} \quad \mathcal {G}^{(2)}_{t}= \mathcal {F}_{(t-\delta _{2})^{+}}, \quad t\in [0,T], \end{aligned}$$

where \(\delta _{1}>0\) and \(\delta _{2}>0\) are constants. Then players 1 and 2 receive asymmetric delayed information compared to \(\mathcal {F}_{t}\), with a delay \(\delta _{i}\) for player i. Assume in addition that \(t\rightarrow \lambda _{i}(t,x)\) is continuous and \(\mathcal {G}^{(i)}_{t}-\)adapted.

Let \(f_{i}:[0,T]\times \mathbb {R}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}\), \(h_{i}:[0,T]\times \mathbb {R}\times {\varOmega }\rightarrow \mathbb {R}\) and \(k_{i}:[0,T]\times \mathbb {R}\times {\varOmega }\rightarrow \mathbb {R}\) be given \(\mathcal {F}_{t}-\)predictable processes and let \(g_{i}:\mathbb {R}\times {\varOmega }\rightarrow \mathbb {R}\) be \(\mathcal {F}_{T}-\)measurable for each x. Then the utility functional associated with player i is defined by

$$\begin{aligned} \mathcal {J}_{i}\left( u_{1},\xi _{1};u_{2},\xi _{2}\right)&= E\left[ \int _{0}^{T}f_{i}\left( t,X(t),u_{1}(t),u_{2}(t),\omega \right) dt+g_{i}(X(T),\omega )\right. \\&\quad \left. +\int _{0}^{T}h_{i}(t,X(t),\omega )d\xi _{1}(t)+\int _{0}^{T}k_{i}(t,X(t),\omega )d\xi _{2}(t)\right] , \end{aligned}$$

where E denotes the expectation with respect to P, \(i=1,2\).
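Continuing the illustration above, and again for illustration only, one may take \(f_{i}=0\), \(g_{i}(x)=\theta _{i}x\) for constants \(\theta _{i}>0\), \(h_{1}(t,x)=k_{2}(t,x)=e^{-\rho t}\) and \(h_{2}=k_{1}=0\), where \(\rho >0\) is a discount rate; then
$$\begin{aligned} \mathcal {J}_{i}\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) =E\left[ \theta _{i}X(T)+\int _{0}^{T}e^{-\rho t}d\xi _{i}(t)\right] ,\quad i=1,2, \end{aligned}$$
so each player maximizes his own expected discounted dividends plus a share of the terminal value.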

Let \(\mathcal {A}^{(i)}_{\mathcal {G}}\) denote a given family of controls \(\left( u_{i},\xi _{i}\right) \), contained in the set of \(\mathcal {G}^{(i)}_{t}-\)adapted pairs \((u_{i},\xi _{i})\) for which the system (1) has a unique strong solution. Then \(\mathcal {A}^{(i)}_{\mathcal {G}}\) is called the admissible control set of player i, \(i=1,2\).

Since the games are non-zero-sum, we seek a Nash equilibrium point \(\left( \hat{u}_{1},\hat{\xi }_{1};\hat{u}_{2},\hat{\xi }_{2}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) such that

$$\begin{aligned}&\mathcal {J}_{1}\left( u_{1},\xi _{1};\hat{u}_{2},\hat{\xi }_{2}\right) \le \mathcal {J}_{1}\left( \hat{u}_{1},\hat{\xi }_{1};\hat{u}_{2},\hat{\xi }_{2}\right) \quad \text {for}\ \text {all}\ \left( u_{1},\xi _{1}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\end{aligned}$$
(2)
$$\begin{aligned}&\mathcal {J}_{2}\left( \hat{u}_{1},\hat{\xi }_{1};u_{2},\xi _{2}\right) \le \mathcal {J}_{2}\left( \hat{u}_{1},\hat{\xi }_{1};\hat{u}_{2},\hat{\xi }_{2}\right) \quad \text {for}\ \text {all}\ \left( u_{2},\xi _{2}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}. \end{aligned}$$
(3)

The existence of a Nash equilibrium point means that the strategy \(\left( \hat{u}_{1},\hat{\xi }_{1}\right) \) is the best response of player 1 to player 2's use of the control \(\left( \hat{u}_{2},\hat{\xi }_{2}\right) \), and vice versa.
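Although our focus is theoretical, the controlled dynamics (1) are straightforward to simulate, which helps in visualizing how a singular control acts on the state. The following minimal Euler-type sketch uses the illustrative coefficients above together with a hypothetical barrier-type payout rule for \(\xi _{1}\); the discretization scheme, all numerical values and the payout rule are our own assumptions, not part of the model.

```python
import numpy as np

# Minimal Euler-type sketch of the state dynamics (1), for illustration only.
# The coefficients, the constant regular controls and the barrier payout rule
# below are hypothetical choices, not the paper's model.

rng = np.random.default_rng(1)
T, n = 1.0, 1000
dt = T / n
jump_intensity = 2.0   # intensity of the driving Poisson process
barrier = 2.0          # player 1 pays out (d xi_1 > 0) above this level

def b(t, x, u1, u2):         # drift coefficient
    return (0.05 + u1 + u2) * x

def sigma(t, x, u1, u2):     # diffusion coefficient
    return 0.2 * x

def gamma(t, x, u1, u2, z):  # jump coefficient
    return 0.1 * x * z

x, u1, u2 = 1.0, 0.0, 0.0    # initial state, constant regular controls
xi1 = 0.0                    # cumulative singular control of player 1

for k in range(n):
    t = k * dt
    dB = rng.normal(0.0, np.sqrt(dt))
    # jump increment: z ~ N(0, 0.3^2) has mean zero, so the compensator of
    # gamma vanishes and the compensated measure reduces to N on this step
    jumps = sum(gamma(t, x, u1, u2, rng.normal(0.0, 0.3))
                for _ in range(rng.poisson(jump_intensity * dt)))
    x += b(t, x, u1, u2) * dt + sigma(t, x, u1, u2) * dB + jumps
    # reflection-type singular control: with lambda_1 = -1, the payout
    # d xi_1 = max(x - barrier, 0) pushes the state back down to the barrier
    dxi1 = max(x - barrier, 0.0)
    x -= dxi1
    xi1 += dxi1

print(f"X(T) = {x:.4f}, cumulative payout xi_1(T) = {xi1:.4f}")
```

In this sketch \(\xi _{1}\) increases exactly when the state attempts to cross the barrier, which is the typical behaviour of optimal singular controls in the dividend problems cited in the introduction.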

3 Maximum principle via Malliavin calculus

In this section, we use the Malliavin calculus approach to derive a stochastic maximum principle for the games (2)–(3). Let \(\mathbb {D}_{1,2}\) denote the set of all random variables which are Malliavin differentiable with respect to both \(B(\cdot )\) and \(\tilde{N}(\cdot ,\cdot )\). For \(F\in \mathbb {D}_{1,2}\), let \(D_{t}F\) denote the Malliavin derivative of F with respect to \(B(\cdot )\) at t and let \(D_{t,z}F\) denote the Malliavin derivative of F with respect to \(\tilde{N}(\cdot ,\cdot )\) at (t,z). We refer to [7] for the background on Malliavin calculus.
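For later use, we recall the form of the duality formulas invoked repeatedly below: under suitable integrability conditions, for \(F\in \mathbb {D}_{1,2}\) and predictable integrands \(\varphi \) and \(\psi \),
$$\begin{aligned} E\left[ F\int _{0}^{T}\varphi (t)dB(t)\right]&= E\left[ \int _{0}^{T}\varphi (t)D_{t}F dt\right] ,\\ E\left[ F\int _{0}^{T}\int _{\mathbb {R}_{0}}\psi (t,z)\tilde{N}(dt,dz)\right]&= E\left[ \int _{0}^{T}\int _{\mathbb {R}_{0}}\psi (t,z)D_{t,z}F \nu (dz)dt\right] \end{aligned}$$
(see Theorem 3.14 and Theorem 12.10 in [7]).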

For the sake of simplicity, we use the following shorthand notation:

$$\begin{aligned}&\displaystyle X(t)=X(t,u_{1},\xi _{1},u_{2},\xi _{2})\\&\displaystyle X\left( t,u_{1}+y_{1}\beta _{1},\xi _{1}+y_{1}\varsigma _{1}\right) =X\left( t,u_{1}+y_{1}\beta _{1},\xi _{1}+y_{1}\varsigma _{1},u_{2},\xi _{2}\right) ,\\&\displaystyle X\left( t,u_{2}+y_{2}\beta _{2},\xi _{2}+y_{2}\varsigma _{2}\right) =X\left( t,u_{1},\xi _{1},u_{2}+y_{2}\beta _{2},\xi _{2}+y_{2}\varsigma _{2}\right) , \\&\displaystyle \frac{\partial b}{\partial x}(t)=\frac{\partial b}{\partial x}\left( t,X(t),u_{1}(t),u_{2}(t),\omega \right) ,\quad \frac{\partial b}{\partial u_{i}}(t)=\frac{\partial b}{\partial u_{i}}\left( t,X(t),u_{1}(t),u_{2}(t),\omega \right) \end{aligned}$$

and similarly for other derivatives.

Definition 1

For \(i=1,2\), we define the Hamiltonian function as follows:

$$\begin{aligned} H_{i}\left( t,x,u_{1},u_{2},p_{i},q_{i},r_{i}\right)&= f_{i}\left( t,x,u_{1},u_{2}\right) +b\left( t,x,u_{1},u_{2}\right) p_{i} +\sigma \left( t,x,u_{1},u_{2}\right) q_{i} \\&\quad +\int _{\mathbb {R}_{0}}\gamma \left( t,x,u_{1},u_{2},z\right) r_{i}(t,z)\nu (dz), \end{aligned}$$

where the adjoint processes \(p_{i}(t)\), \(q_{i}(t)\) and \(r_{i}(t,z)\) are given by

$$\begin{aligned} p_{i}(t)&= R_{i}(t)+\sum _{j=1}^{2}\int _{t}^{T} M_{i,j}(t,s)d\xi _{j}(s), \end{aligned}$$
(4)
$$\begin{aligned} q_{i}(t)&= D_{t}R_{i}(t)+\sum _{j=1}^{2}\int _{t}^{T} D_{t}M_{i,j}(t,s)d\xi _{j}(s), \end{aligned}$$
(5)
$$\begin{aligned} r_{i}(t,z)&= D_{t,z}R_{i}(t)+\sum _{j=1}^{2}\int _{t}^{T} D_{t,z}M_{i,j}(t,s)d\xi _{j}(s) \end{aligned}$$
(6)

with

$$\begin{aligned} R_{i}(t)= & {} {\varUpsilon }_{i}(t)+\int _{t}^{T}F_{i}(t,s)ds, \end{aligned}$$
(7)
$$\begin{aligned} {\varUpsilon }_{i}(t)= & {} \int _{t}^{T}\frac{\partial f_{i}}{\partial x}(s)ds+g_{i}'(X(T))\nonumber \\&+\int _{t}^{T}\frac{\partial h_{i}}{\partial x}(s)d\xi _{1}(s)+\int _{t}^{T}\frac{\partial k_{i}}{\partial x}(s)d\xi _{2}(s), \end{aligned}$$
(8)
$$\begin{aligned} F_{i}(t,s)= & {} G(t,s)\frac{\partial {\varPhi }_{i}}{\partial x}(s), \end{aligned}$$
(9)
$$\begin{aligned} M_{i,j}(t,s)= & {} G(t,s){\varUpsilon }_{i}(s)\frac{\partial \lambda _{j}}{\partial x}(s),\quad i,j=1,2, \end{aligned}$$
(10)
$$\begin{aligned} {\varPhi }_{i}(s,x,u_{1},u_{2})= & {} {\varUpsilon }_{i}(s)b(s,x,u_{1},u_{2})+D_{s}{\varUpsilon }_{i}(s)\sigma (s,x,u_{1},u_{2})\nonumber \\&+\int _{\mathbb {R}_{0}}D_{s,z}{\varUpsilon }_{i}(s)\gamma (s,x,u_{1},u_{2},z)\nu (dz), \end{aligned}$$
(11)

and

$$\begin{aligned} G(t,s)=&\exp \left\{ \int _{t}^{s}\left( \frac{\partial b}{\partial x}(r) -\frac{1}{2}\left( \frac{\partial \sigma }{\partial x}(r)\right) ^{2}\right) dr +\int _{t}^{s}\frac{\partial \sigma }{\partial x}(r)dB(r) \right. \nonumber \\&+\int _{t}^{s}\int _{\mathbb {R}_{0}}\ln \left( 1+\frac{\partial \gamma }{\partial x}(r,z)\right) \tilde{N}(dr,dz)\nonumber \\&+\int _{t}^{s}\int _{\mathbb {R}_{0}}\left[ \ln \left( 1+\frac{\partial \gamma }{\partial x}(r,z)\right) -\frac{\partial \gamma }{\partial x}(r,z)\right] \nu (dz)dr \nonumber \\&\left. +\int _{t}^{s}\frac{\partial \lambda _{2}}{\partial x}(r)d\xi _{2}(r)+\int _{t}^{s}\frac{\partial \lambda _{1}}{\partial x}(r)d\xi _{1}(r) \right\} ,\quad 0\le t\le s \le T. \end{aligned}$$
(12)
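Differentiating the expression in Definition 1 with respect to \(u_{i}\) gives the explicit form of the first-order conditions (18)–(19) of Theorem 2 below:
$$\begin{aligned} \frac{\partial H_{i}}{\partial u_{i}}\left( t,X(t),u_{1},u_{2},p_{i},q_{i},r_{i}(\cdot )\right) =\frac{\partial f_{i}}{\partial u_{i}}(t)+\frac{\partial b}{\partial u_{i}}(t)p_{i}(t) +\frac{\partial \sigma }{\partial u_{i}}(t)q_{i}(t) +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{i}}(t,z)r_{i}(t,z)\nu (dz), \end{aligned}$$
which is precisely the quantity assembled from (29), (30) and (34) in the proof of Theorem 2.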

Assumption 1

We make the following assumptions:

  1. The functions b, \(\sigma \), \(\gamma \), \(\lambda _{i}\), \(f_{i}\), \(g_{i}\), \(h_{i}\) and \(k_{i}\) are continuously differentiable with respect to x and \(u_{i}\) for each \(t\in [0,T]\) and almost all \(\omega \in {\varOmega }\).

  2. For all t, h satisfying \(0\le t<t+h\le T\) and all bounded \(\mathcal {G}^{(i)}_{t}-\)measurable random variables \(\theta _{i}(\omega )\), the control \(\left( \beta _{i}(s),0\right) \), \(s\in [0,T]\), belongs to \(\mathcal {A}^{(i)}_{\mathcal {G}}\), where \(\beta _{i}(s)=\theta _{i}(\omega )\chi _{[t,t+h]}(s)\) and \(\chi _{[t,t+h]}\) is the indicator function of \([t,t+h]\), i.e., \( \chi _{[t,t+h]}(s)= {\left\{ \begin{array}{ll} 1,\quad s\in [t,t+h]\\ 0,\quad \text {else,} \end{array}\right. }\) \(i=1,2\).

  3. For all \((u_{i},\xi _{i})\in \mathcal {A}^{(i)}_{\mathcal {G}}\) and all bounded \((\beta _{i},\varsigma _{i})\in \mathcal {A}^{(i)}_{\mathcal {G}}\), there exists \(\delta >0\) such that \((u_{i}(t)+y_{i}\beta _{i}(t),\xi _{i}(t)+y_{i}\varsigma _{i}(t))\in \mathcal {A}^{(i)}_{\mathcal {G}}\) for all \(y_{i}\in (-\delta , \delta )\), \(t\in [0,T]\), \(i=1,2\).

  4. For all \((u_{i},\xi _{i})\in \mathcal {A}^{(i)}_{\mathcal {G}}\), the processes \({\varUpsilon }_{i}(t)\), \(D_{t}{\varUpsilon }_{i}(t)\), \(D_{t,z}{\varUpsilon }_{i}(t)\), \({\varPhi }_{i}(t)\), \(G(t,s)\), \(p_{i}(t)\), \(q_{i}(t)\) and \(r_{i}(t,z)\) exist for \(0\le t\le s\le T\), \(z\in \mathbb {R}_{0}\), \(i=1,2\).

For a bounded \(\left( \beta _{i},\varsigma _{i}\right) \), we define the derivative process \( \breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t)\) as

$$\begin{aligned} \breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t):=&\lim _{y_{i}\rightarrow 0^{+}}\frac{1}{y_{i}}\left[ X\left( t,u_{i}+y_{i}\beta _{i},\xi _{i}+y_{i}\varsigma _{i}\right) -X\left( t\right) \right] . \end{aligned}$$

Then it follows from (1) that \(\breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t)\) satisfies the following singular SDE:

$$\begin{aligned} d\breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t)&= \beta _{i}(t) \left[ \frac{\partial b}{\partial u_{i}}(t)dt+ \frac{\partial \sigma }{\partial u_{i}}(t)dB(t)+\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{i}}(t,z)\tilde{N}(dt,dz)\right] \nonumber \\&\quad + \breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t) \left[ \frac{\partial b}{\partial x}(t)dt+ \frac{\partial \sigma }{\partial x}(t)dB(t)+\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial x}(t,z)\tilde{N}(dt,dz)\right. \nonumber \\&\quad \left. +\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+ \frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right] +\lambda _{i}(t,X(t))d\varsigma _{i}(t) \end{aligned}$$
(13)

with \(\breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(0)=0\).

Now we are ready to state and prove the maximum principle to characterize the Nash equilibrium point of the games (2)–(3).

Theorem 2

Let \(\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) be a Nash equilibrium point of the games (2)–(3). Let \(\xi _{i}^{c}(t)\) denote the continuous part of \(\xi _{i}(t)\) and let \(\triangle \xi _{i}(t)=\xi _{i}(t)-\xi _{i}(t-)\) denote the jump of \(\xi _{i}(\cdot )\) at time t, \(i=1,2\). Assume that X(t) and \(\breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t)\) are the solutions of (1) and (13) corresponding to \(\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \), and that \(p_{i}(t)\), \(q_{i}(t)\), \(r_{i}(t,z)\) are the corresponding adjoint processes (4)–(6). Suppose that \(g_{i}'(X(T))\), \(\frac{\partial f_{i}}{\partial x}(t)\), \(\frac{\partial h_{i}}{\partial x}(t)\) and \(\frac{\partial k_{i}}{\partial x}(t)\) are Malliavin differentiable for all \(t\in [0,T]\). Suppose in addition that for all \((u_{i},\xi _{i})\in \mathcal {A}^{(i)}_{\mathcal {G}}\) and \((\beta _{i},\varsigma _{i})\in \mathcal {A}^{(i)}_{\mathcal {G}}\),

$$\begin{aligned} \frac{\partial f_{i}}{\partial x}\left( t,X(t),u_{1},u_{2}\right) \breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(t) +\frac{\partial f_{i}}{\partial u_{i}}\left( t,X(t),u_{1},u_{2}\right) \beta _{i}(t) \end{aligned}$$

is \(m\times P-\)uniformly integrable, where m is Lebesgue measure,

$$\begin{aligned}&E\left[ \int _{0}^{T}\left| \frac{\partial h_{1}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)\right| d\xi _{1}(t)\right. \\&\quad \left. +\int _{0}^{T}\left| h_{1}(t,X(t))\right| d\varsigma _{1}(t) +\int _{0}^{T}\left| \frac{\partial k_{1}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)\right| d\xi _{2}(t) \right]<+\infty ,\\&E\left[ \int _{0}^{T}\left| \frac{\partial h_{2}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{2},\varsigma _{2}\right) }(t)\right| d\xi _{1}(t)\right. \\&\quad \left. +\int _{0}^{T}\left| k_{2}(t,X(t))\right| d\varsigma _{2}(t) +\int _{0}^{T}\left| \frac{\partial k_{2}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{2},\varsigma _{2}\right) }(t)\right| d\xi _{2}(t) \right] <+\infty \end{aligned}$$

and

$$\begin{aligned} g_{i}'(X(T)) \breve{X}^{\left( \beta _{i},\varsigma _{i}\right) }(T) \end{aligned}$$

is P-uniformly integrable. Set

$$\begin{aligned} U_{1}(t)&= h_{1}(t,X(t))+\left( {\varUpsilon }_{1}(t)+ \int _{t+}^{T} G(t,s)dQ_{1}(s)\right) \lambda _{1}(t,X(t)), \end{aligned}$$
(14)
$$\begin{aligned} V_{1}(t)&= h_{1}(t,X(t))+\left( {\varUpsilon }_{1}(t)+ \left( 1+\epsilon (t) \right) \int _{t+}^{T} G(t,s)dQ_{1}(s)\right) \lambda _{1}(t,X(t)), \\ U_{2}(t)&= k_{2}(t,X(t))+\left( {\varUpsilon }_{2}(t)+ \int _{t+}^{T} G(t,s)dQ_{2}(s)\right) \lambda _{2}(t,X(t)),\nonumber \\ V_{2}(t)&= k_{2}(t,X(t))+\left( {\varUpsilon }_{2}(t)+ \left( 1+\epsilon (t) \right) \int _{t+}^{T} G(t,s)dQ_{2}(s)\right) \lambda _{2}(t,X(t)),\nonumber \end{aligned}$$
(15)

where

(16)

and

$$\begin{aligned} dQ_{i}(t)=\frac{\partial {\varPhi }_{i}}{\partial x}(t)dt+{\varUpsilon }_{i}(t)\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+{\varUpsilon }_{i}(t)\frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t). \end{aligned}$$
(17)

Then the following conditions hold for almost all \(t\in [0,T]\):

$$\begin{aligned}&E\left[ \left. \frac{\partial H_{1}}{\partial u_{1}}\left( t,X(t),u_{1},u_{2},p_{1},q_{1},r_{1}(\cdot )\right) \right| \mathcal {G}^{(1)}_{t}\right] =0, \end{aligned}$$
(18)
$$\begin{aligned}&E\left[ \left. \frac{\partial H_{2}}{\partial u_{2}}\left( t,X(t),u_{1},u_{2},p_{2},q_{2},r_{2}(\cdot )\right) \right| \mathcal {G}^{(2)}_{t}\right] =0, \end{aligned}$$
(19)
$$\begin{aligned}&E\left[ \left. U_{1}(t)\right| \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. U_{1}(t)\right| \mathcal {G}^{(1)}_{t}\right] d\xi _{1}^{c}(t)=0, \end{aligned}$$
(20)
$$\begin{aligned}&E\left[ \left. U_{2}(t)\right| \mathcal {G}^{(2)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. U_{2}(t)\right| \mathcal {G}^{(2)}_{t}\right] d\xi _{2}^{c}(t)=0, \end{aligned}$$
(21)
$$\begin{aligned}&E\left[ \left. V_{1}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. V_{1}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \triangle \xi _{1}(t)=0, \end{aligned}$$
(22)
$$\begin{aligned}&E\left[ \left. V_{2}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. V_{2}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \triangle \xi _{2}(t)=0. \end{aligned}$$
(23)

Proof

Suppose that \(\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) is a Nash equilibrium point of the games (2)–(3). Then we have

$$\begin{aligned} \lim _{y_{1}\rightarrow 0^{+}}\frac{1}{y_{1}}\left[ \mathcal {J}_{1}\left( u_{1}+y_{1}\beta _{1},\xi _{1}+y_{1}\varsigma _{1};u_{2},\xi _{2}\right) -\mathcal {J}_{1}\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \right] \le 0 \end{aligned}$$
(24)

holds for all bounded \(\left( \beta _{1},\varsigma _{1}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\). Substituting the definition of \(\mathcal {J}_{1}\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \) into (24) gives

$$\begin{aligned}&E\left[ \int _{0}^{T}\left\{ \frac{\partial f_{1}}{\partial x}\left( t,X(t),u_{1},u_{2}\right) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t) +\frac{\partial f_{1}}{\partial u_{1}}\left( t,X(t),u_{1},u_{2}\right) \beta _{1}(t) \right\} dt\right. \nonumber \\&\quad +g_{1}'(X(T)) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(T) +\int _{0}^{T}\frac{\partial h_{1}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)d\xi _{1}(t)\nonumber \\&\quad \left. +\int _{0}^{T}h_{1}(t,X(t))d\varsigma _{1}(t) +\int _{0}^{T}\frac{\partial k_{1}}{\partial x}(t,X(t)) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)d\xi _{2}(t) \right] \le 0, \end{aligned}$$
(25)

where \(g_{1}'(x)\) is the derivative of \(g_{1}(x)\).

By the duality formulas of Malliavin calculus (see Theorem 3.14 and Theorem 12.10 in [7]), (13) and the Fubini theorem, we get

$$\begin{aligned}&E\left[ \int _{0}^{T}\frac{\partial f_{1}}{\partial x}(t)\breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)dt\right] \\&\quad = E\left[ \int _{0}^{T}\left\{ \left[ \int _{t}^{T}\frac{\partial f_{1}}{\partial x}(s)ds \left( \frac{\partial b}{\partial x}(t) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)+\frac{\partial b}{\partial u_{1}}(t) \beta _{1}(t)\right) \right. \right. \right. \\&\quad \quad +D_{t}\left( \int _{t}^{T} \frac{\partial f_{1}}{\partial x}(s)ds \right) \left( \frac{\partial \sigma }{\partial x}(t) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)+\frac{\partial \sigma }{\partial u_{1}}(t) \beta _{1}(t)\right) \\&\quad \quad \left. + \int _{\mathbb {R}_{0}}D_{t,z}\left( \int _{t}^{T}\frac{\partial f_{1}}{\partial x}(s) ds\right) \left( \frac{\partial \gamma }{\partial x}(t,z) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)+\frac{\partial \gamma }{\partial u_{1}}(t) \beta _{1}(t)\right) \nu (dz)\right] dt\\&\quad \quad +\left( \int _{t}^{T}\frac{\partial f_{1}}{\partial x}(s)ds\right) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)\left( \frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+\frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right) \\&\quad \quad \left. \left. +\left( \int _{t}^{T}\frac{\partial f_{1}}{\partial x}(s)ds\right) \lambda _{1}(t,X(t))d\varsigma _{1}(t)\right\} \right] . \end{aligned}$$

Similarly, we apply the duality formulas of Malliavin calculus and the Fubini theorem to \( E\left[ g'_{1}(X(T))\breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(T)\right] \), \(E\left[ \int _{0}^{T}\frac{\partial h_{1}}{\partial x}(t) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)d\xi _{1}(t)\right] \) and \(E\left[ \int _{0}^{T}\frac{\partial k_{1}}{\partial x}(t) \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)d\xi _{2}(t)\right] \) in (25). Then (25) can be rewritten as

$$\begin{aligned}&E\left[ \int _{0}^{T} \breve{X}^{\left( \beta _{1},\varsigma _{1}\right) }(t)\left( \frac{\partial {\varPhi }_{1}}{\partial x}(t)dt+{\varUpsilon }_{1}(t)\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+{\varUpsilon }_{1}(t)\frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right) \right. \nonumber \\&\quad +\int _{0}^{T}\left( h_{1}(t,X(t))+{\varUpsilon }_{1}(t)\lambda _{1}(t,X(t))\right) d\varsigma _{1}(t) +\int _{0}^{T} \beta _{1}(t)\left( \frac{\partial b}{\partial u_{1}}(t){\varUpsilon }_{1}(t)\right. \nonumber \\&\quad \left. \left. +\frac{\partial \sigma }{\partial u_{1}}(t)D_{t}{\varUpsilon }_{1}(t) +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(t,z)D_{t,z}{\varUpsilon }_{1}(t)\nu (dz)+\frac{\partial f_{1}}{\partial u_{1}}(t)\right) dt\right] \le 0, \end{aligned}$$
(26)

where \({\varUpsilon }_{1}(t)\) and \({\varPhi }_{1}(t)\) are given by (8) and (11).

Since the inequality (26) holds for all bounded \(\left( \beta _{1},\varsigma _{1}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\), we may choose \(\varsigma _{1}=0\) to prove (18), and \(\beta _{1}=0\) to prove (20) and (22).

In order to prove (18), we let \(\varsigma _{1}=0\) in (26) and obtain

$$\begin{aligned}&E\left[ \int _{0}^{T} \breve{X}^{\left( \beta _{1},0\right) }(t)\left( \frac{\partial {\varPhi }_{1}}{\partial x}(t)dt+{\varUpsilon }_{1}(t)\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+{\varUpsilon }_{1}(t)\frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right) \right. \nonumber \\&\quad \left. +\int _{0}^{T} \beta _{1}(t)\left( \frac{\partial f_{1}}{\partial u_{1}}(t)+\frac{\partial b}{\partial u_{1}}(t){\varUpsilon }_{1}(t) +\frac{\partial \sigma }{\partial u_{1}}(t)D_{t}{\varUpsilon }_{1}(t)\right. \right. \nonumber \\&\quad \left. \left. +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(t,z)D_{t,z}{\varUpsilon }_{1}(t)\nu (dz)\right) dt\right] \le 0, \end{aligned}$$
(27)

where \(\breve{X}^{\left( \beta _{1},0\right) }(t)\) is given by

$$\begin{aligned} d\breve{X}^{\left( \beta _{1},0\right) }(t)&= \breve{X}^{\left( \beta _{1},0\right) }(t) \left[ \frac{\partial b}{\partial x}(t)dt+ \frac{\partial \sigma }{\partial x}(t)dB(t)+\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial x}(t,z)\tilde{N}(dt,dz)\right. \nonumber \\&\left. +\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+ \frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right] +\beta _{1}(t) \left[ \frac{\partial b}{\partial u_{1}}(t)dt\right. \nonumber \\&\left. + \frac{\partial \sigma }{\partial u_{1}}(t)dB(t)+\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(t,z)\tilde{N}(dt,dz)\right] \end{aligned}$$
(28)

with \(\breve{X}^{\left( \beta _{1},0\right) }(0)=0\).

Since (27) holds for all bounded \(\beta _{1}\) with \(\left( \beta _{1},0\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\), we set

$$\begin{aligned} \beta _{1}^{\theta }=\beta _{1}^{\theta }(s)=\theta (\omega )\chi _{(t,t+h]}(s),\quad 0\le t\le t+h\le T, \end{aligned}$$

where \(\theta (\omega )\) is a bounded \(\mathcal {G}^{(1)}_{t}-\)measurable random variable. It is obvious that \(\breve{X}^{\left( \beta _{1}^{\theta },0\right) }(s)=0\) for \(0\le s\le t\). Then (27) can be written as

$$\begin{aligned} L_{1}(h)+L_{2}(h)+L_{3}(h)\le 0, \end{aligned}$$

where

$$\begin{aligned} L_{1}(h)&= E\left[ \int _{t}^{T} \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(s) \frac{\partial {\varPhi }_{1}}{\partial x}(s)ds \right] ,\\ L_{2}(h)&= E\left[ \int _{t}^{t+h} \theta \left( \frac{\partial f_{1}}{\partial u_{1}}(s)+\frac{\partial b}{\partial u_{1}}(s){\varUpsilon }_{1}(s)+\frac{\partial \sigma }{\partial u_{1}}(s)D_{s}{\varUpsilon }_{1}(s)\right. \right. \\&\quad \,\left. \left. +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(s,z)D_{s,z}{\varUpsilon }_{1}(s)\nu (dz)\right) ds\right] ,\\ L_{3}(h)&= E\left[ \int _{t}^{T} \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(s)\left( {\varUpsilon }_{1}(s)\frac{\partial \lambda _{1}}{\partial x}(s)d\xi _{1}(s)+{\varUpsilon }_{1}(s)\frac{\partial \lambda _{2}}{\partial x}(s)d\xi _{2}(s)\right) \right] . \end{aligned}$$

Differentiating \(L_{1}(h)\) and \(L_{2}(h)\) with respect to h at \(h=0\), we obtain

$$\begin{aligned} \left. \frac{d }{dh}L_{1}(h)\right| _{h=0}&= E\left[ \theta \left\{ \frac{\partial b}{\partial u_{1}}(t)\int _{t}^{T}F_{1}(t,s)ds+\frac{\partial \sigma }{\partial u_{1}}(t)D_{t}\int _{t}^{T}F_{1}(t,s)ds\right. \right. \nonumber \\&\quad \left. \left. +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(t,z)D_{t,z}\int _{t}^{T}F_{1}(t,s)ds\nu (dz)\right\} \right] , \end{aligned}$$
(29)

where \(F_{1}(t,s)\) is given by (9), and

$$\begin{aligned} \left. \frac{d }{dh}L_{2}(h)\right| _{h=0}&= E\left[ \theta \left( {\varUpsilon }_{1}(t)\frac{\partial b}{\partial u_{1}}(t) +D_{t}{\varUpsilon }_{1}(t)\frac{\partial \sigma }{\partial u_{1}}(t)\right. \right. \nonumber \\&\quad \left. \left. +\int _{\mathbb {R}_{0}}D_{t,z}{\varUpsilon }_{1}(t)\frac{\partial \gamma }{\partial u_{1}}(t,z)\nu (dz)+\frac{\partial f_{1}}{\partial u_{1}}(t)\right) \right] . \end{aligned}$$
(30)

The proofs of (29) and (30) are similar to that of Theorem 4.1 in [16], so we omit the details. Now we consider \(L_{3}(h)\). Differentiating \(L_{3}(h)\) with respect to h at \(h=0\) gives

$$\begin{aligned} \left. \frac{d }{dh}L_{3}(h)\right| _{h=0}={\varPi }_{1}+{\varPi }_{2}, \end{aligned}$$
(31)

where

$$\begin{aligned} {\varPi }_{1}&= \frac{d }{dh}E\left[ \int _{t}^{t+h} \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(s)\left( {\varUpsilon }_{1}(s)\frac{\partial \lambda _{1}}{\partial x}(s)d\xi _{1}(s)+{\varUpsilon }_{1}(s)\frac{\partial \lambda _{2}}{\partial x}(s)d\xi _{2}(s)\right) \right] _{h=0}\\&= 0 \end{aligned}$$

and

$$\begin{aligned} {\varPi }_{2}&=\frac{d }{dh}E\left[ \int _{t+h}^{T} \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(s){\varUpsilon }_{1}(s)\left( \frac{\partial \lambda _{1}}{\partial x}(s)d\xi _{1}(s)+\frac{\partial \lambda _{2}}{\partial x}(s)d\xi _{2}(s)\right) \right] _{h=0}\nonumber \\&= \sum _{j=1}^{2}E\left[ \int _{t}^{T} \frac{d }{dh}\left\{ \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(t+h)G(t,s){\varUpsilon }_{1}(s)\frac{\partial \lambda _{j}}{\partial x}(s)\right\} _{h=0}d\xi _{j}(s) \right] . \end{aligned}$$
(32)

By (28) we have

$$\begin{aligned} \breve{X}^{\left( \beta _{1}^{\theta },0\right) }(t+h)&= \int _{t}^{t+h}\theta \left[ \frac{\partial b}{\partial u_{1}}(r)dr+\frac{\partial \sigma }{\partial u_{1}}(r)dB(r) +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(r,z)\tilde{N}(dr,dz)\right] \nonumber \\&\quad +\int _{t}^{t+h}\breve{X}^{\left( \beta _{1}^{\theta },0\right) }(r)\left[ \frac{\partial b}{\partial x}(r)dr+ \frac{\partial \sigma }{\partial x}(r)dB(r)+ \frac{\partial \lambda _{2}}{\partial x}(r)d\xi _{2}(r)\right. \nonumber \\&\quad \left. +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial x}(r,z)\tilde{N}(dr,dz)+\frac{\partial \lambda _{1}}{\partial x}(r)d\xi _{1}(r)\right] . \end{aligned}$$
(33)

Substituting (33) into (32), we have, by the duality formulas of Malliavin calculus,

$$\begin{aligned} \left. \frac{d }{dh}L_{3}(h)\right| _{h=0}&=\sum _{j=1}^{2} E\left[ \theta \left\{ \frac{\partial b}{\partial u_{1}}(t) \int _{t}^{T} M_{1,j}(t,s)d\xi _{j}(s)\right. \right. \nonumber \\&\quad +\frac{\partial \sigma }{\partial u_{1}}(t) \int _{t}^{T} D_{t}M_{1,j}(t,s)d\xi _{j}(s) \nonumber \\&\quad \left. \left. +\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial u_{1}}(t,z) \int _{t}^{T} D_{t,z}M_{1,j}(t,s)d\xi _{j}(s) \nu (dz)\right\} \right] , \end{aligned}$$
(34)

where \(M_{1,j}(t,s)\) are given by (10). Combining (29), (30) and (34), we obtain that

$$\begin{aligned}&\left. \frac{d }{dh}L_{1}(h)\right| _{h=0}+\left. \frac{d }{dh}L_{2}(h)\right| _{h=0}+\left. \frac{d }{dh}L_{3}(h)\right| _{h=0}\nonumber \\&\quad = E\left[ \theta \frac{\partial H_{1} }{\partial u_{1}}\left( t,X(t),u_{1},u_{2},p_{1},q_{1},r_{1}(\cdot )\right) \right] \end{aligned}$$
(35)

holds for all bounded \(\mathcal {G}^{(1)}_{t}-\)measurable random variables \(\theta \). Since \(L_{1}(h)+L_{2}(h)+L_{3}(h)\le 0\) for all \(h\ge 0\) with equality at \(h=0\), the derivative (35) is nonpositive; replacing \(\theta \) by \(-\theta \) shows that it must in fact vanish for every such \(\theta \). Therefore, we conclude that (18) holds for almost all \(t\in [0,T]\).

It remains to prove (20) and (22). Since the inequality (26) holds for all bounded \(\left( \beta _{1},\varsigma _{1}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\), one can choose \(\beta _{1}=0\) and obtain

$$\begin{aligned} E\left[ \int _{0}^{T} \breve{X}^{\left( 0,\varsigma _{1}\right) }(t)dQ_{1}(t) +\int _{0}^{T}\left( h_{1}(t,X(t))+{\varUpsilon }_{1}(t)\lambda _{1}(t,X(t))\right) d\varsigma _{1}(t)\right] \le 0, \end{aligned}$$
(36)

where \(Q_{1}(t)\) is given by (17) and \(\breve{X}^{\left( 0,\varsigma _{1}\right) }(t)\) is described by the following SDE:

$$\begin{aligned} d\breve{X}^{\left( 0,\varsigma _{1}\right) }(t)&= \breve{X}^{\left( 0,\varsigma _{1}\right) }(t) \left[ \frac{\partial b}{\partial x}(t)dt+ \frac{\partial \sigma }{\partial x}(t)dB(t)+\int _{\mathbb {R}_{0}}\frac{\partial \gamma }{\partial x}(t,z)\tilde{N}(dt,dz)\right. \nonumber \\&\quad \left. +\frac{\partial \lambda _{1}}{\partial x}(t)d\xi _{1}(t)+ \frac{\partial \lambda _{2}}{\partial x}(t)d\xi _{2}(t)\right] +\lambda _{1}(t,X(t))d\varsigma _{1}(t) \end{aligned}$$
(37)

with \(\breve{X}^{\left( 0,\varsigma _{1}\right) }(0)=0\). The solution of (37) can then be written explicitly as

$$\begin{aligned} \breve{X}^{\left( 0,\varsigma _{1}\right) }(t)&=G(0,t)\left[ \int _{0}^{t}G^{-1}(0,s^{-})\lambda _{1}(s,X(s))d\varsigma _{1}(s)\right. \nonumber \\&\quad \left. +\sum _{0<s\le t}G^{-1}(0,s^{-})\lambda _{1}(s,X(s))\epsilon (s)\triangle \varsigma _{1}(s)\right] ,\quad t\in [0,T], \end{aligned}$$
(38)

where \(\epsilon (s)\) is given by (16) (see Lemma 2.1 in [17]). Substituting (38) into (36), we have, by the Fubini theorem,

$$\begin{aligned} E\left[ \int _{0}^{T}U_{1}(t)d\varsigma ^{c}_{1}(t)+\sum _{0<t\le T}V_{1}(t)\triangle \varsigma _{1}(t)\right] \le 0, \end{aligned}$$
(39)

where \(U_{1}(t)\) and \(V_{1}(t)\) are defined by (14) and (15).

We first choose \(\varsigma _{1}\) such that

$$\begin{aligned} d\varsigma _{1}(t)=a_{1}(t)dt,\quad t\in [0,T], \end{aligned}$$

where \(a_{1}(t)\ge 0\) is a continuous \(\mathcal {G}^{(1)}_{t}-\)adapted stochastic process. Then it follows from (39) that

$$\begin{aligned} E\left[ \left. U_{1}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \le 0, \quad \text {for}\ \text {almost} \ \text {all}\quad t\in [0,T]. \end{aligned}$$

Moreover, by choosing \(\varsigma _{1}(t)=\xi _{1}^{c}(t)\) and \(\varsigma _{1}(t)=-\xi _{1}^{c}(t)\) (both perturbations are admissible for small \(y_{1}\), since \(\xi _{1}\pm y_{1}\xi _{1}^{c}\) remains non-decreasing), together with (39), we have

$$\begin{aligned} E\left[ \left. U_{1}(t)\right| \mathcal {G}^{(1)}_{t}\right] d\xi _{1}^{c}(t)=0,\quad t\in [0,T]. \end{aligned}$$

Next we fix \(t\in [0,T]\) and choose \(\varsigma _{1}\) such that

$$\begin{aligned} d\varsigma _{1}(s)=a_{1}(\omega )\delta _{t}(s),\quad s\in [0,T], \end{aligned}$$

where \(a_{1}(\omega )\ge 0\) is a bounded \(\mathcal {G}^{(1)}_{t}-\)measurable random variable and \(\delta _{t}(s)\) is the unit point mass at t. In this case, we obtain by (39) that

$$\begin{aligned} E\left[ \left. V_{1}(t)\right| \mathcal {G}^{(1)}_{t}\right] \le 0. \end{aligned}$$

Let \(\xi _{1}^{d}(t)\) denote the purely discontinuous part of \(\xi _{1}(t)\). Letting \(\varsigma _{1}(t)=\xi _{1}^{d}(t)\) and \(\varsigma _{1}(t)=-\xi _{1}^{d}(t)\), we conclude from (39) that

$$\begin{aligned} E\left[ \left. V_{1}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \triangle \xi _{1}(t)=0 \quad \text {for}\ \text {all} \quad t\in [0,T]. \end{aligned}$$

Therefore we obtain (20) and (22).

On the other hand, proceeding in the same way starting from the inequality

$$\begin{aligned} \lim _{y_{2}\rightarrow 0^{+}}\frac{1}{y_{2}}\left[ \mathcal {J}_{2}\left( u_{1},\xi _{1}; u_{2}+y_{2}\beta _{2},\xi _{2}+y_{2}\varsigma _{2}\right) -\mathcal {J}_{2}\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \right] \le 0 \end{aligned}$$

which holds for all bounded \(\left( \beta _{2},\varsigma _{2}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}\), we obtain (19), (21) and (23). This completes the proof.

Remark 1

Suppose that \(\mathcal {G}^{(1)}_{t}=\mathcal {G}^{(2)}_{t}=\mathcal {G}_{t}\subset \mathcal {F}_{t}\) for all \(t\in [0,T]\). Then the games (2)–(3) reduce to regular-singular stochastic differential games with symmetric partial information. In particular, in the case \(\mathcal {G}^{(1)}_{t}=\mathcal {G}^{(2)}_{t}=\mathcal {F}_{t}\) for all \(t\in [0,T]\), the games (2)–(3) can be regarded as regular-singular games under full information. By Theorem 2, the Nash equilibrium point \(\left( u_{1},\xi _{1};u_{2},\xi _{2}\right) \) then satisfies

$$\begin{aligned}&\frac{\partial H_{1}}{\partial u_{1}}\left( t,X(t),u_{1},u_{2},p_{1},q_{1},r_{1}(\cdot )\right) = \frac{\partial H_{2}}{\partial u_{2}}\left( t,X(t),u_{1},u_{2},p_{2},q_{2},r_{2}(\cdot )\right) =0,\\&U_{1}(t)\le 0, \quad U_{1}(t)d\xi _{1}^{c}(t)=0, \quad V_{1}(t)\le 0 \quad \text {and}\quad V_{1}(t)\triangle \xi _{1}(t)=0,\\&U_{2}(t)\le 0, \quad U_{2}(t)d\xi _{2}^{c}(t)=0, \quad V_{2}(t)\le 0 \quad \text {and}\quad V_{2}(t)\triangle \xi _{2}(t)=0. \end{aligned}$$
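These conditions have a transparent reading. Comparing (4) with (7), (9), (10) and (17) shows that, at continuity points of \(\xi _{1}\) and \(\xi _{2}\),
$$\begin{aligned} {\varUpsilon }_{i}(t)+\int _{t+}^{T}G(t,s)dQ_{i}(s)=R_{i}(t)+\sum _{j=1}^{2}\int _{t}^{T}M_{i,j}(t,s)d\xi _{j}(s)=p_{i}(t), \end{aligned}$$
so that \(U_{1}(t)=h_{1}(t,X(t))+p_{1}(t)\lambda _{1}(t,X(t))\) and \(U_{2}(t)=k_{2}(t,X(t))+p_{2}(t)\lambda _{2}(t,X(t))\). The conditions \(U_{i}(t)\le 0\) and \(U_{i}(t)d\xi _{i}^{c}(t)=0\) are therefore complementary slackness conditions: player i lets the continuous part of \(\xi _{i}\) grow only at times when the instantaneous marginal reward of an intervention exactly offsets its marginal effect \(p_{i}(t)\lambda _{i}(t,X(t))\) on the state.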

4 Conclusion

We derive a maximum principle for non-zero-sum regular-singular stochastic differential games with asymmetric partial information. The approach we apply is the Malliavin calculus approach, by which the adjoint processes are explicitly represented by the parameters and the states of the system, not by backward SDEs. In our maximum principle, we give necessary optimality conditions, which are not sufficient, for Nash equilibrium points of the games. Necessary and sufficient optimality conditions for Nash equilibrium points will be explored in our subsequent work.