1 Introduction

We study a class of two-person regular–singular stochastic differential games with asymmetric information, which is motivated by the optimal investment and dividend problem of an insurer under model uncertainty.

In actuarial science, finding an optimal investment strategy and an optimal dividend policy are two fundamental problems. Compared with the classic optimal investment problem in financial economics (see, for example, [1, 2]), the investment problem of an insurer is different and more complex, because the insurer is exposed to both financial risk and insurance risk, instead of financial risk alone. Financial risk arises from the fluctuations of the financial market, while insurance risk is caused by the liabilities related to insurance claims. Therefore, there has been much interest in the study of the optimal investment and dividend problem of an insurer under different realistic model assumptions; see, for example, [3,4,5,6].

Model uncertainty, the uncertainty about the choice of model, is pervasive in the modeling of economic, financial, and insurance dynamics (see, for example, [7, 8]). The robust approach is widely adopted to deal with model uncertainty: a family of probability measures equivalent to the original probability measure is generated, and the optimal decision is made in the worst-case scenario over these probability measures (see, for example, [7, 9]). With the dissemination of quantitative models in actuarial science, model uncertainty has been incorporated into the investment problem of an insurer with or without reinsurance (see, for example, [10,11,12,13]). However, relatively little attention seems to have been paid to optimal investment and dividend problems of an insurer under model uncertainty.

We consider the optimal investment and dividend problem of an insurer under model uncertainty. The information available to the insurer is partial information, which is less than the full information generated by the market events (see, for example, [14,15,16]). The insurer decides the investment strategy and the dividend payment policy based on this partial information. We apply the robust approach to incorporate model uncertainty, where a family of probability measures is generated by a Girsanov transformation. Three sources of risk are thus included in our model: financial risk, insurance risk, and model uncertainty. In the presence of model uncertainty, the investment and dividend problem of an insurer can be formulated as a zero-sum regular–singular stochastic differential game between the insurer and the market under asymmetric information.

In most cases, regular–singular stochastic control problems or stochastic differential games are solved through HJB integro-variational inequalities (HJBIVI), which is possible under the assumption that the underlying dynamics of the controlled system are Markovian (see, for example, [3,4,5, 17]). However, because of the non-Markovian nature of the partial information, our game cannot be solved by the well-established HJBIVI technique. This motivates us to derive the corresponding maximum principles to handle the partial information case.

The stochastic maximum principle covering singular control problems was obtained by Cadenillas and Haussmann [18], where a convex cost function is minimized subject to linear dynamics. Since then, many papers on maximum principles for singular control problems under more general assumptions have appeared in the literature (see, for example, [15, 19,20,21,22,23] and the references therein; this list is not exhaustive). For a regular–singular stochastic control problem/stochastic differential game, a maximum principle was established by the relaxed control approach in [24], where the control system is governed by a forward–backward stochastic differential equation (SDE) driven by Brownian motion. Hafayed, Abba, and Abbas [25] considered a similar problem in the partial information case, where the system is governed by a mean-field controlled SDE driven by Teugels martingales associated with some Lévy processes and an independent Brownian motion. Hu et al. [26] derived maximum principles for a regular–singular mean-field stochastic differential game and reduced it to a Skorohod problem. In our situation, model uncertainty is incorporated in the problem, an additional controllable Lévy diffusion is added into the system to model the insurance risk, and the information available to the two players is asymmetric. Therefore, the existing maximum principles are no longer valid for our games.

In this paper, we consider a class of regular–singular stochastic differential games with asymmetric information arising in the optimal investment and dividend problem of an insurer under model uncertainty. The necessary and sufficient optimality conditions for the saddle points of the zero-sum games are derived. The derivation of these optimality conditions follows an approach similar to that adopted by An and Øksendal [14], where necessary and sufficient optimality conditions for regular stochastic differential games under partial information are derived. Since the control variable in our games consists of two components, a regular control and a singular control, our results can be regarded as a generalization of [14] to the regular–singular case.

The rest of the paper is structured as follows: In Sect. 2, we introduce the optimal investment and dividend problem of an insurer under model uncertainty as a motivating example. In Sect. 3, we formulate a zero-sum regular–singular stochastic differential game with asymmetric information and establish necessary and sufficient optimality conditions for the saddle point of the game. As an application, in Sect. 4 these conditions are applied to characterize the solution of the optimal investment and dividend problem of an insurer under model uncertainty. In Sect. 5, the necessary and sufficient optimality conditions obtained in Sect. 3 are generalized to nonzero-sum regular–singular games with asymmetric information. Finally, some concluding remarks are made in Sect. 6.

2 Optimal Investment and Dividend Problem of an Insurer Under Model Uncertainty

In order to illustrate the application background of a regular–singular stochastic differential game with asymmetric information, we introduce the optimal investment and dividend problem of an insurer under model uncertainty as a motivating example.

Suppose there is a continuous-time financial market with two investment possibilities:

  • A risk-free asset (e.g., a bond), with unit price \(S_{0}(t)\) at time t given by

    $$\begin{aligned} \mathrm{d}S_{0}(t)=\rho (t)S_{0}(t)\mathrm{d}t,\quad S_{0}(0)=1, \quad \text {for}\ \text {all}\quad t\in [0,T],\quad { T\in ]0,\infty [}. \end{aligned}$$
    (1)
  • A risky asset (e.g., a stock), with unit price \(S_{1}(t)\) at time t given by

    $$\begin{aligned} \mathrm{d}S_{1}(t)=S_{1}(t)\left[ \zeta (t)\mathrm{d}t+\pi (t)\mathrm{d}B(t)\right] ,\quad S_{1}(0)> 0, \quad \text {for}\ \text { all}\quad t\in [0,T], \end{aligned}$$
    (2)

where B(t) is a standard Brownian motion on a probability space \(({\varOmega }, \mathcal {F}, P)\) with respect to its right-continuous P-complete filtration \(\left\{ \mathcal {F}^{B}_{t}: t\in [0,T]\right\} \). The coefficients \(\rho (t)\), \(\zeta (t)\) and \(\pi (t)\) are the interest rate of the risk-free asset, the appreciation rate, and the volatility of the risky asset at time t, respectively. We assume that \(\rho (t)\), \(\zeta (t)\) and \(\pi (t)\) are \(\mathcal {F}^{B}_{t}\)-predictable processes such that

$$\begin{aligned} \int _{0}^{T}\left( \left| \rho (t)\right| +\left| \zeta (t)\right| +\pi ^{2}(t)\right) \mathrm{d}t<\infty ,\quad \text {a.s.}. \end{aligned}$$

Let Z(t) be the aggregate insurance claim process of the insurer, which represents the total amount of claims up to and including time t. For \(s\in [0,T]\), the claim size of \(Z(\cdot )\) at time s is denoted by

$$\begin{aligned} \triangle Z(s):=Z(s)-Z(s-). \end{aligned}$$

The space of claim sizes is \(\mathbb {R}_{0}:=\mathbb {R}\backslash \{0\}\). Then Z(t) is given by

$$\begin{aligned} Z(t)=\sum _{0\le s\le t}\triangle Z(s),\ \ \ \ Z(0)=0,\ \ \ \ \text {a.s.},\ \ \ \ t\in [0,T]. \end{aligned}$$

Let \(\tau _{k}\) denote the arrival time of the kth claim and let \(\triangle Z(\tau _{k})\) be the amount of the kth claim at time \(\tau _{k}\). Then the Poisson random measure \(N(\cdot ,\cdot )\) on \([0,T]\times \mathbb {R}_{0}\), which is induced by the claim arrival times and claim sizes \(\triangle Z(s)\), is defined by

$$\begin{aligned} N(\mathrm{d}s,\mathrm{d}z)=\sum _{k\ge 1}\delta _{(\triangle Z(\tau _{k}),\tau _{k})}(\mathrm{d}z,\mathrm{d}s)\chi _{\left\{ \triangle Z(\tau _{k})\ne 0, \tau _{k}<\infty \right\} }. \end{aligned}$$

Here, \(\delta _{(\triangle Z(\tau _{k}),\tau _{k})}(\cdot , \cdot )\) is the random delta function at \((\triangle Z(\tau _{k}),\tau _{k})\) and \(\chi _{E}\) is the indicator function of an event E. Thus, the process Z(t) is a pure jump Lévy process given by

$$\begin{aligned} Z(t)=\int _{0}^{t}\int _{\mathbb {R}_{0}}z N(\mathrm{d}s,\mathrm{d}z),\quad t\in [0,T]. \end{aligned}$$

Suppose that Z(t) is bounded on [0, T]. Then, Z(t) can be written as:

$$\begin{aligned} Z(t)=\int _{0}^{t}\int _{\mathbb {R}_{0}}z \tilde{N}(\mathrm{d}s,\mathrm{d}z)+\int _{0}^{t}\int _{\mathbb {R}_{0}}z \nu (\mathrm{d}z)\mathrm{d}s,\quad t\in [0,T], \end{aligned}$$

where \(\tilde{N}(\cdot ,\cdot )\) is the compensated Poisson random measure associated with the jump measure \(N(\cdot ,\cdot )\), i.e., \(\tilde{N}(\mathrm{d}s,\mathrm{d}z)=N(\mathrm{d}s,\mathrm{d}z)-\nu (\mathrm{d}z)\mathrm{d}s\), and \(\nu \) is the Lévy measure of the Lévy process \(Z(\cdot )\). We refer to [17, 27] for background on Lévy processes and Poisson random measures.

For \(t\in [0,T]\), let \(\kappa (t)\) be the premium rate at time t and let R(t) denote the surplus process of the insurer in the absence of investment and dividend. Then, we have

$$\begin{aligned} R(t)&=R_{0}+\int _{0}^{t}\kappa (s)\mathrm{d}s-Z(t)+Q \tilde{B}(t)\\&=R_{0}+\int _{0}^{t}\left( \kappa (s)-\int _{\mathbb {R}_{0}}z \nu (\mathrm{d}z)\right) \mathrm{d}s-\int _{0}^{t}\int _{\mathbb {R}_{0}}z \tilde{N}(\mathrm{d}s,\mathrm{d}z)+Q\tilde{B}(t), \end{aligned}$$

where \(R_{0}\in \mathbb {R}\) is the initial surplus and \(Q\in \mathbb {R}\) is the diffusion coefficient. The stochastic process \(\tilde{B}(t)\) is another Brownian motion defined on \(({\varOmega }, \mathcal {F}, P)\) with respect to its right-continuous P-complete filtration \(\left\{ \mathcal {F}^{\tilde{B}}_{t}: t\in [0,T]\right\} \). It describes an additional source of uncertainty, which may be attributed to uncertainty about premium incomes or uncertainty about aggregate claims (see, e.g., [28]). We assume that B(t), \(\tilde{B}(t)\) and \(\tilde{N}(\mathrm{d}t,\mathrm{d}z)\) are mutually independent under P, for \(t\in [0,T]\).

The insurer manages his risk through the choice of the dividend payment and the investment strategy. Let \(u(t)=u(t,\omega )\) denote the amount invested in the risky asset, which we call the portfolio strategy, and let the singular control \(\xi (t)=\xi (t,\omega )\) represent the cumulative amount of dividends paid up to time t.

Definition 2.1

For each \(\omega \in {\varOmega }\), \(\xi (t)=\xi (t,\omega )\), \(t\in [0,T]\), is a càdlàg non-decreasing stochastic process of bounded variation with \(\xi (0) = 0\), which is referred to as a singular control.

The surplus process Y(t) in the presence of investment and dividend is given by

$$\begin{aligned} \mathrm{d}Y(t) =&\left\{ \kappa (t)+\rho (t)Y(t)+u(t)\left[ \zeta (t)-\rho (t)\right] -\int _{\mathbb {R}_{0}}z \nu (\mathrm{d}z)\right\} \mathrm{d}t \nonumber \\&+\pi (t)u(t)\mathrm{d}B(t) +Q \mathrm{d}\tilde{B}(t) -\int _{\mathbb {R}_{0}}z \tilde{N}(\mathrm{d}t,\mathrm{d}z)- \mathrm{d}\xi (t), \nonumber \\ Y(0)=&R_{0}. \end{aligned}$$
(3)

The control \(\left( u(t),\xi (t)\right) \) is to be decided by the insurer, which is referred to as a regular–singular control.

Now, we specify the information structure of the model. As defined above, the filtrations \(\left\{ \mathcal {F}^{B}_{t}: t\in [0,T]\right\} \) and \(\left\{ \mathcal {F}^{\tilde{B}}_{t}: t\in [0,T]\right\} \) are the right-continuous, P-complete, natural filtrations generated by B(t) and \(\tilde{B}(t)\), respectively. Let \(\big \{ \mathcal {F}^{Z}_{t}: t\in [0,T]\big \}\) denote the P-augmentation of the \(\sigma \)-field generated by the insurance claim process Z(t). For each \(t\in [0,T]\), the enlarged \(\sigma \)-algebra \(\mathcal {F}_{t}\) is defined by

$$\begin{aligned} \mathcal {F}_{t}:=\mathcal {F}^{Z}_{t}\vee \mathcal {F}^{B}_{t} \vee \mathcal {F}^{\tilde{B}}_{t}, \end{aligned}$$

which is the minimal \(\sigma \)-field generated by \(\mathcal {F}^{Z}_{t}\), \(\mathcal {F}^{B}_{t}\) and \(\mathcal {F}^{\tilde{B}}_{t}\). Then the filtration \(\left\{ \mathcal {F}_{t}: t\in [0,T]\right\} \) represents the full information generated by the surplus process of the insurer and the price process of the risky asset.

In the real world, however, the insurer can only obtain partial information instead of full information. That is, there is a subfiltration \(\mathcal {G}_{t}\subseteq \mathcal {F}_{t}\) for \(t\in [0,T]\). The insurer decides the portfolio strategy u(t) and the dividend policy \(\xi (t)\) based on the information \(\mathcal {G}_{t}\). Therefore, \(\left( u(t),\xi (t)\right) \) is required to be \(\mathcal {G}_{t}\)-adapted.

We use a robust approach to incorporate model uncertainty. Girsanov's theorem is used to generate a family of real-world probability measures, which are absolutely continuous with respect to the original probability measure P. Suppose \(\theta (t)\in {\varTheta }\) is a real-valued stochastic process on \(({\varOmega },\mathcal {F},P)\) satisfying: \(\theta (t)\) is \(\mathcal {F}_{t}\)-progressively measurable for \(t\in [0,T]\); \(\theta (t)<1\) for almost all \((t,\omega )\in [0,T]\times {\varOmega }\); and \(E\left[ \int _{0}^{T}\theta ^{2}(t)\mathrm{d}t\right] <\infty \). Then, for \(\theta (t)\in {\varTheta }\), we define

$$\begin{aligned} M^{\theta }(t)=&\exp \left\{ -\int _{0}^{t}\theta (s)\mathrm{d}B(s)-\int _{0}^{t}\theta (s)\mathrm{d}\tilde{B}(s) -\int _{0}^{t}\theta ^{2}(s)\mathrm{d}s\right. \\&+\int _{0}^{t}\int _{\mathbb {R}_{0}}\ln \left( 1-\theta (s)\right) \tilde{N}(\mathrm{d}s,\mathrm{d}z)\\&\left. +\int _{0}^{t}\int _{\mathbb {R}_{0}}\left[ \ln \left( 1-\theta (s)\right) +\theta (s)\right] \nu (\mathrm{d}z)\mathrm{d}s \right\} . \end{aligned}$$

Applying the Itô formula to \(M^{\theta }(t)\), we have

$$\begin{aligned}&\mathrm{d}M^{\theta }(t)=-M^{\theta }(t)\left[ \theta (t)\mathrm{d}B(t)+\theta (t)\mathrm{d}\tilde{B}(t)+\int _{\mathbb {R}_{0}}\theta (t)\tilde{N}(\mathrm{d}t,\mathrm{d}z)\right] \\&M^{\theta }(0)=1,\quad P-\text {a.s.} \end{aligned}$$

Obviously, \( M^{\theta }(t)\) is a P-local martingale for \(\theta (t)\in {\varTheta }\). Let \(\theta (t)\) be bounded, P-a.s., such that \( M^{\theta }(t)\) is a P-martingale. Then, we have

$$\begin{aligned} E\left[ M^{\theta }(t) \right] =E\left[ M^{\theta }(0) \right] =1. \end{aligned}$$

For each \(\theta (t)\in {\varTheta }\), we define a new probability measure \(P^{\theta }\) on \(\mathcal {F}_{T}\) by

$$\begin{aligned} \left. \frac{\mathrm{d}P^{\theta }}{\mathrm{d}P}\right| _{\mathcal {F}_{T}} =M^{\theta }(T). \end{aligned}$$

Then \(P^{\theta }\) is absolutely continuous with respect to P on \(\mathcal {F}_{T}\). By choosing different processes \(\theta (t)\in {\varTheta }\), we obtain a family of real-world probability measures \(\left\{ P^{\theta }: \theta \in {\varTheta }\right\} \).
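
The exponential form of \(M^{\theta }(t)\) makes the interpretation of the scenarios \(P^{\theta }\) transparent. By Girsanov's theorem for jump diffusions, under \(P^{\theta }\) the processes

$$\begin{aligned} B^{\theta }(t):=B(t)+\int _{0}^{t}\theta (s)\mathrm{d}s \quad \text {and}\quad \tilde{B}^{\theta }(t):=\tilde{B}(t)+\int _{0}^{t}\theta (s)\mathrm{d}s \end{aligned}$$

are Brownian motions, and the jump measure \(N(\mathrm{d}t,\mathrm{d}z)\) has \(P^{\theta }\)-compensator \((1-\theta (t))\nu (\mathrm{d}z)\mathrm{d}t\); the requirement \(\theta (t)<1\) keeps this intensity positive. Each \(\theta \in {\varTheta }\) can therefore be read as an alternative specification of the drifts of B, \(\tilde{B}\) and of the claim intensity.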

For a given strategy \((u(t),\xi (t))\), the utility function of the insurer under \(P^{\theta }\) is defined by

$$\begin{aligned} \mathcal {J}(u,\xi ,\theta )=&E^{\theta }\left[ Y(T)+\int _{0}^{T}e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\mathrm{d}\xi (t)\right] , \end{aligned}$$

where \(E^{\theta }\) is the expectation with respect to \(P^{\theta }\).
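
As a sanity check on the objects introduced so far, the following Python sketch estimates \(\mathcal {J}(u,\xi ,\theta )\) by Monte Carlo for one fixed, purely illustrative choice of strategy and scenario: a constant amount invested in the risky asset, a barrier-type dividend rule, and a constant distortion \(\theta \). The surplus (3) is discretized with an Euler scheme, \(M^{\theta }(T)\) is computed from its exponential formula above, and \(E^{\theta }\) is rewritten as \(E\left[ M^{\theta }(T)\,\cdot \,\right] \). All numerical values, the compound Poisson claim model with exponential claim sizes, and the chosen strategies are assumptions made only for this sketch; they are neither taken from the paper nor claimed to be optimal.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---- illustrative parameters (assumptions for this sketch, not taken from the paper) ----
T, n_steps, n_paths = 1.0, 250, 20_000
dt = T / n_steps
rho, zeta, pi_vol = 0.03, 0.08, 0.25     # interest rate, appreciation rate, volatility
kappa, Q, R0 = 1.5, 0.4, 5.0             # premium rate, diffusion coefficient, initial surplus
lam, mean_claim = 1.0, 1.0               # Poisson claim intensity, exponential claim-size mean
u_const = 2.0                            # constant amount invested in the risky asset
theta = 0.2                              # constant distortion parameter, theta < 1
barrier = 8.0                            # dividend rule: pay out any surplus above this level

claim_comp = lam * mean_claim            # \int_{R_0} z nu(dz) for the compound Poisson claims

Y = np.full(n_paths, R0)                 # surplus Y(t) of (3)
B_T, Bt_T, N_T = np.zeros(n_paths), np.zeros(n_paths), np.zeros(n_paths)
disc_div = np.zeros(n_paths)             # \int_0^t e^{-\int_0^s rho dr} d xi(s)

for k in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    dBt = rng.normal(0.0, np.sqrt(dt), n_paths)
    n_jumps = rng.poisson(lam * dt, n_paths)
    claims = np.zeros(n_paths)
    jumping = n_jumps > 0
    # the sum of n iid Exp(mean_claim) claim sizes is Gamma(n, mean_claim)
    claims[jumping] = rng.gamma(n_jumps[jumping], mean_claim)

    # Euler step for (3); the compensated claim term is -(claims - claim_comp * dt)
    drift = kappa + rho * Y + u_const * (zeta - rho) - claim_comp
    Y = Y + drift * dt + pi_vol * u_const * dB + Q * dBt - (claims - claim_comp * dt)

    # barrier dividend strategy (for illustration it observes Y directly)
    dxi = np.maximum(Y - barrier, 0.0)
    disc_div += np.exp(-rho * (k + 1) * dt) * dxi
    Y -= dxi

    B_T += dB
    Bt_T += dBt
    N_T += n_jumps

# M^theta(T) from its exponential formula, specialized to constant theta and nu(R_0) = lam:
# M = (1 - theta)^{N_T} * exp(-theta*(B_T + Bt_T) - theta^2*T + theta*lam*T)
M_T = (1.0 - theta) ** N_T * np.exp(-theta * (B_T + Bt_T) - theta**2 * T + theta * lam * T)
# sanity check: np.mean(M_T) should be close to 1, since M^theta is a P-martingale

# J(u, xi, theta) = E^theta[ Y(T) + \int_0^T e^{-\int_0^t rho ds} d xi(t) ]
#                 = E[ M^theta(T) * (Y(T) + discounted dividends) ]
J = np.mean(M_T * (Y + disc_div))
print(f"Monte Carlo estimate of J(u, xi, theta): {J:.4f}")
```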

Let \(\mathcal {A}^{(1)}_{\mathcal {G}}\) be a family of admissible controls \((u,\xi )\), contained in the set of \(\mathcal {G}_{t}\)-adapted \((u,\xi )\) such that (3) has a unique strong solution and for all \(\theta \in {\varTheta }\),

$$\begin{aligned} E^{\theta }\left[ \left| Y(T)\right| +\int _{0}^{T}e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\mathrm{d}\xi (t)\right] <\infty . \end{aligned}$$
(4)

Let \(\mathcal {A}^{(2)}_{\mathcal {F}}\) be a family of admissible controls \(\theta (t)\), contained in the set of \(\mathcal {F}_{t}\)-adapted \(\theta (t)\in {\varTheta }\) such that (4) holds for all \((u,\xi )\in \mathcal {A}^{(1)}_{\mathcal {G}}\).

In the presence of model uncertainty, the insurer’s control problem can be formulated as a zero-sum stochastic differential game between the insurer and the market with asymmetric information:

  1. (1)

    The objective of the insurer is to select an optimal investment and dividend strategy \(\left( \hat{u},\hat{\xi }\right) \) based on the partial information \(\left\{ \mathcal {G}_{t}: t\in [0,T]\right\} \), which maximizes the utility function in the “worst-case” scenario, i.e.,

    $$\begin{aligned} \underline{{\varPhi }}(x)=\sup _{(u,\xi )\in \mathcal {A}^{(1)}_{\mathcal {G}}}\inf _{\theta \in \mathcal {A}^{(2)}_{\mathcal {F}}}\mathcal {J}(u,\xi ,\theta ); \end{aligned}$$
  2. (2)

    The market reacts antagonistically under full information \(\left\{ \mathcal {F}_{t}: t\in [0,T]\right\} \) by choosing an admissible strategy \(\hat{\theta }\) over all scenarios \(\left\{ P^{\theta }: \theta \in {\varTheta }\right\} \) to minimize the maximal expected utility of the insurer, i.e.,

    $$\begin{aligned} \overline{{\varPhi }}(x)=\inf _{\theta \in \mathcal {A}^{(2)}_{\mathcal {F}}}\sup _{(u,\xi )\in \mathcal {A}^{(1)}_{\mathcal {G}}}\mathcal {J}(u,\xi ,\theta ). \end{aligned}$$

We say that the game has a value, if

$$\begin{aligned} {\varPhi }(x):=\underline{{\varPhi }}(x)= \overline{{\varPhi }}(x). \end{aligned}$$
(5)

A pair \(\left( \hat{u},\hat{\xi };\hat{\theta }\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {F}}\) (if it exists) is called a saddle point of the zero-sum game (5).
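
Here, as usual, a saddle point means a pair \(\left( \hat{u},\hat{\xi };\hat{\theta }\right) \) for which

$$\begin{aligned} \mathcal {J}(u,\xi ,\hat{\theta })\le \mathcal {J}(\hat{u},\hat{\xi },\hat{\theta })\le \mathcal {J}(\hat{u},\hat{\xi },\theta )\quad \text {for all}\ (u,\xi )\in \mathcal {A}^{(1)}_{\mathcal {G}},\ \theta \in \mathcal {A}^{(2)}_{\mathcal {F}}. \end{aligned}$$

In particular, if such a saddle point exists, then the game has a value and \({\varPhi }(x)=\mathcal {J}\left( \hat{u},\hat{\xi },\hat{\theta }\right) \).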

3 Necessary and Sufficient Optimality Conditions for Zero-Sum Regular–Singular Stochastic Differential Game with Asymmetric Information

In this section, we consider a class of general zero-sum regular–singular stochastic differential games with asymmetric information. Then, the necessary and sufficient optimality conditions for the saddle points of the zero-sum games are derived. Our results can be regarded as a generalization of [14] to the regular–singular case.

3.1 Problem Formulation

We start with a complete filtered probability space \(\left( {\varOmega }, \mathcal {F},\left\{ \mathcal {F}_{t} \right\} _{t\ge 0}, P \right) \). Suppose that the state process \(X(t)=X(t,\omega )\), \(t\in [0,T]\), \(\omega \in {\varOmega }\), is described by the following Lévy diffusion:

$$\begin{aligned} \mathrm{d}X(t)&= b\left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) \mathrm{d}t+\sigma \left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) \mathrm{d}B(t) \nonumber \\&+\,\varpi \left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) \mathrm{d}\tilde{B}(t) \nonumber \\&+\,\int _{\mathbb {R}^{n}}\gamma \left( t,X(t),u^{(1)}(t),u^{(2)}(t),z,\omega \right) \tilde{N}(\mathrm{d}t,\mathrm{d}z) \nonumber \\&+\,\alpha (t,X(t),\omega )\mathrm{d}\xi ^{(1)}(t) +\lambda (t,X(t),\omega )\mathrm{d}\xi ^{(2)}(t), \\ X(0)&= x\in \mathbb {R}^{n},\nonumber \end{aligned}$$
(6)

where the coefficients

$$\begin{aligned}&b:[0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}^{n},\\&\sigma :[0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}^{n\times n},\\&\varpi :[0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R}^{n\times n},\\&\gamma :[0,T]\times \mathbb {R}^n\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times \mathbb {R}_{0}\times {\varOmega }\rightarrow \mathbb {R}^{n\times n},\\&\alpha :[0,T]\times \mathbb {R}^{n}\times {\varOmega }\rightarrow \mathbb {R}^{n\times n},\\&\lambda :[0,T]\times \mathbb {R}^{n}\times {\varOmega }\rightarrow \mathbb {R}^{n\times n} \end{aligned}$$

are \(C^{1}\) with respect to x, and \(\mathcal {U}_{1}\), \(\mathcal {U}_{2}\) are given nonempty open convex subsets of \(\mathbb {R}^{n}\). Here, B(t), \(\tilde{B}(t)\) are n-dimensional Brownian motions and \(\tilde{N}(\mathrm{d}t,\mathrm{d}z)\) are n independent compensated Poisson random measures of a Lévy process Z with jump measure N. We assume that B(t), \(\tilde{B}(t)\) and \(\tilde{N}(\mathrm{d}t,\mathrm{d}z)\) are mutually independent under P, \(t\in [0,T]\). For a given process F(t, z), we denote

$$\begin{aligned} \int _{\mathbb {R}_{0}}F(\{t\},z)N(\{t\},\mathrm{d}z):={\left\{ \begin{array}{ll} F(t,z), \ \ \ \ \text {if}\ Z\ \text {has}\ \text {a}\ \text {jump}\ \text {of}\ \text {size}\ z\ \text {at}\ t, \\ \ \ 0\ \ , \quad \text {else}. \end{array}\right. } \end{aligned}$$

The stochastic process \(u^{(i)}(t)=u^{(i)}(t,\omega )\in \mathcal {U}_{i}\) is a regular control, while the stochastic process \(\xi ^{(i)}(t)=\xi ^{(i)}(t,\omega )\in \mathcal {I}_{i}\) is a singular control with \(\xi ^{(i)}(0)=0\), where \(\mathcal {I}_{i}\) is a nonempty open convex subset of \(\mathbb {R}^{n}\). The pair \(\left( u^{(i)}(t), \xi ^{(i)}(t)\right) \) is referred to as a regular–singular control, \(i=1,2\). In system (6), there are two different kinds of jumps: one is the jump of X(t) stemming from the Poisson random measure N, and the other is the jump caused by the singular controls \(\xi ^{(i)}\).

We consider a zero-sum stochastic differential game with two players (player 1 and player 2). For \(t\in [0,T]\), the regular–singular control \((u^{(i)}(t), \xi ^{(i)}(t))\) is chosen by player i, \(i=1,2\). Suppose that the information available to the two players is asymmetric partial information (see, e.g., [29]). This means there are two subfiltrations \(\mathcal {G}^{(1)}_{t}\) and \(\mathcal {G}^{(2)}_{t}\) of \(\mathcal {F}_{t}\) satisfying

$$\begin{aligned} \mathcal {G}^{(i)}_{t}\subseteq \mathcal {F}_{t};\quad \quad t\in [0,T],\quad i=1,2. \end{aligned}$$

Player i decides his strategy \(\left( u^{(i)}(t), \xi ^{(i)}(t)\right) \) based on the partial information \(\mathcal {G}^{(i)}_{t}\). Then the regular–singular control \(\left( u^{(i)}(t), \xi ^{(i)}(t)\right) \) is \(\mathcal {G}^{(i)}_{t}\)-adapted, \(i=1,2\). Assume in addition that \(t\rightarrow \alpha (t,x)\) is continuous and \(\mathcal {G}^{(1)}_{t}\)-adapted and \(t\rightarrow \lambda (t,x)\) is continuous and \(\mathcal {G}^{(2)}_{t}\)-adapted.

Let

$$\begin{aligned}&f:[0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2}\times {\varOmega }\rightarrow \mathbb {R},\\&h:[0,T]\times \mathbb {R}^{n}\times {\varOmega }\rightarrow \mathbb {R}^{n},\\&k:[0,T]\times \mathbb {R}^{n}\times {\varOmega }\rightarrow \mathbb {R}^{n} \end{aligned}$$

be given \(\mathcal {F}_{t}\)-predictable processes and let \(g:\mathbb {R}^{n}\times {\varOmega }\rightarrow \mathbb {R}\) be an \(\mathcal {F}_{T}\)-measurable random variable for each x. Assume that f, g, h and k are \(C^{1}\) with respect to x. Let the performance functional of the two players be defined as follows:

$$\begin{aligned} \mathcal {J}\left( u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)}\right)&= E\left[ \int _{0}^{T}f\left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) \mathrm{d}t\right. \nonumber \\&\quad \ +g(X(T),\omega )+\int _{0}^{T}h(t,X(t),\omega )\mathrm{d}\xi ^{(1)}(t) \nonumber \\&\quad \ \left. +\int _{0}^{T}k(t,X(t),\omega )\mathrm{d}\xi ^{(2)}(t)\right] , \end{aligned}$$
(7)

where E is the expectation with respect to P.

Let \(\mathcal {A}^{(i)}_{\mathcal {G}}\) denote a given family of controls \(\left( u^{(i)},\xi ^{(i)}\right) \), contained in the set of \(\mathcal {G}^{(i)}_{t}\)-adapted \((u^{(i)},\xi ^{(i)})\) such that system (6) has a unique strong solution and

$$\begin{aligned}&E\left[ \int _{0}^{T}\left\| f(t,X(t),u^{(1)}(t),u^{(2)}(t),\omega )\right\| \mathrm{d}t+\left\| g(X(T),\omega )\right\| \right. \\&\left. \quad +\int _{0}^{T}\left\| h(t,X(t),\omega )\right\| \mathrm{d}\xi ^{(1)}(t)+\int _{0}^{T}\left\| k(t,X(t),\omega )\right\| \mathrm{d}\xi ^{(2)}(t)\right] <+\infty . \end{aligned}$$

Then, \(\mathcal {A}^{(i)}_{\mathcal {G}}\) is called the admissible control set of the player i, \(i=1,2\).

In the zero-sum game, the actions of the two players are antagonistic, namely, the payoff \( \mathcal {J}\left( u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)}\right) \) is the reward for player 1 and the cost for player 2. Player 1 decides \(\left( u^{(1)},\xi ^{(1)}\right) \) to maximize the reward based on \(\mathcal {G}^{(1)}_{t}\), while player 2 chooses \(\left( u^{(2)},\xi ^{(2)}\right) \) to minimize the cost based on \(\mathcal {G}^{(2)}_{t}\). We are thus led to the zero-sum regular–singular stochastic differential game under asymmetric partial information, with

$$\begin{aligned} {\varPhi }_{\mathcal {G}}&=\sup _{\left( u^{(1)},\xi ^{(1)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}}\inf _{\left( u^{(2)},\xi ^{(2)}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}}\mathcal {J}\left( u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)}\right) \nonumber \\&=\,\inf _{\left( u^{(2)},\xi ^{(2)}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}}\sup _{\left( u^{(1)},\xi ^{(1)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}}\mathcal {J}\left( u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)}\right) \nonumber \\&=\,\mathcal {J}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) . \end{aligned}$$
(8)

We say the game (8) has a value if \({\varPhi }_{\mathcal {G}}\in \mathbb {R}\) exists. Meanwhile, the pair \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) (if it exists) is called a saddle point of the game (8). Intuitively, the strategy \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)}\right) \) is the best response of player 1 to player 2 using the control \(\left( \hat{u}^{(2)},\hat{\xi }^{(2)}\right) \), and vice versa.

3.2 Necessary Optimality Conditions for the Zero-Sum Regular–Singular Game

To give Hamiltonian-based optimality conditions, we define the Hamiltonian function for the zero-sum game (8):

Definition 3.1

Let \(\triangle \xi ^{(i)}(t)=\xi ^{(i)}(t)-\xi ^{(i)}(t-)\) be the pure discontinuous part of \(\xi ^{(i)}(\cdot )\) at time t, \(t\in [0,T]\), \(i=1,2\). Then, the Hamiltonian function

$$\begin{aligned} H: [0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2} \times \mathbb {R}^{n}\times \mathbb {R}^{n\times n}\times \mathbb {R}^{n\times n}\times \mathcal {R}\rightarrow \mathcal {D} \end{aligned}$$

is defined by

$$\begin{aligned}&H\left( t,x,u^{(1)},u^{(2)},p,q,\tilde{q},r(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \nonumber \\&\quad = f\left( t,x,u^{(1)},u^{(2)}\right) \mathrm{d}t +b^{T}\left( t,x,u^{(1)},u^{(2)}\right) p \mathrm{d}t \nonumber \\&\qquad +tr\left( \sigma ^{T}\left( t,x,u^{(1)},u^{(2)}\right) q \right) \mathrm{d}t \nonumber \\&\qquad +\sum _{j,l=1}^{n}\int _{\mathbb {R}}\gamma _{jl}\left( t,x,u^{(1)},u^{(2)},z\right) r_{jl}(t,z)\nu _{l}(\mathrm{d}z)\mathrm{d}t \nonumber \\&\qquad +tr\left( \varpi ^{T}\left( t,x,u^{(1)},u^{(2)}\right) \tilde{q} \right) \mathrm{d}t +\left[ p^{T}\alpha (t,x)+h(t,x)\right] \mathrm{d}\xi ^{(1)}(t)\nonumber \\&\qquad + \left[ p^{T}\lambda (t,x)+k(t,x)\right] \mathrm{d}\xi ^{(2)}(t) +\sum _{j,l,m=1}^{n}\left[ \alpha _{jl}\left( \{t\},x\right) \triangle \xi _{l}^{(1)}(t)\right. \nonumber \\&\qquad \left. +\,\lambda _{jl}\left( \{t\},x\right) \triangle \xi _{l}^{(2)}(t)\right] \int _{\mathbb {R}_{0}}r_{jm}(\{t\},z)N_{m}(\{t\},\mathrm{d}z). \end{aligned}$$
(9)

Here, \(\mathcal {R}\) is the set of functions \(r(\cdot ,\cdot ): [0,T]\times \mathbb {R}_{0}\rightarrow \mathbb {R}^{n\times n}\) such that (9) is well defined and \(\mathcal {D}\) is the set of all sums of stochastic \(\mathrm{d}t\)-, \(\mathrm{d}\xi ^{(1)}\)- and \(\mathrm{d}\xi ^{(2)}\)-differentials. The adjoint processes \(\left( p(t),q(t),\tilde{q}(t),r(t,\cdot )\right) \) associated with \((u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)})\) are given by the following backward SDE:

$$\begin{aligned} \mathrm{d}p(t)=&-\triangledown _{x} H\left( t,X(t),u^{(1)}(t),u^{(2)}(t),p(t),q(t),\tilde{q}(t),r(t,\cdot )\right) \nonumber \\&\times \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \nonumber \\&+q(t)\mathrm{d}B(t)+\tilde{q}(t)\mathrm{d}\tilde{B}(t)+\int _{\mathbb {R}^{n}}r(t,z)\tilde{N}(\mathrm{d}t,\mathrm{d}z),\quad t\in [0,T], \nonumber \\ p(T)=&\triangledown g(X(T)), \end{aligned}$$
(10)

where \(\triangledown _{x}H(\cdot )(\cdot )=\left( \frac{\partial H}{\partial x_{1}},\ldots , \frac{\partial H}{\partial x_{n}}\right) ^{T}\) is the gradient of H with respect to \(x=\left( x_{1},\ldots ,x_{n}\right) \).

Assumption 3.1

We make the following assumptions:

  1. (1)

    For all t, h satisfying \(0\le t<t+h\le T\), and all bounded \(\mathcal {G}^{(i)}_{s}\)-measurable random variables \(\theta _{j}^{(i)}(\omega )\), \(s\in [0,T]\), the regular–singular control \(\left( \beta ^{(i)}(s),0\right) \) belongs to \(\mathcal {A}^{(i)}_{\mathcal {G}}\), where

    $$\begin{aligned} \beta ^{(i)}(s):=\left( 0,\ldots ,\beta ^{(i)}_{j}(s),\ldots ,0\right) ,\quad \quad j=1,\ldots ,n \end{aligned}$$

    with

    $$\begin{aligned} \beta ^{(i)}_{j}(s)=\theta _{j}^{(i)}(\omega )\chi _{[t,t+h]}(s);\quad \quad t\in [0,T],\quad i=1,2. \end{aligned}$$
  2. (2)

    For all \(\left( u^{(i)},\xi ^{(i)}\right) \in \mathcal {A}^{(i)}_{\mathcal {G}}\) and all bounded \(\left( \beta ^{(i)},\varsigma ^{(i)}\right) \in \mathcal {A}^{(i)}_{\mathcal {G}}\), there exists a constant \(\delta >0\) such that

    $$\begin{aligned} \left( u^{(i)}(t)+y\beta ^{(i)}(t),\xi ^{(i)}(t)+y\varsigma ^{(i)}(t)\right) \in \mathcal {A}^{(i)}_{\mathcal {G}} \end{aligned}$$

    for all \( y\in ]-\delta , \delta [\), \(t\in [0,T]\), \(i=1,2\).

For the sake of brevity, we introduce the following shorthand notation:

$$\begin{aligned}&X\left( t,u^{(1)}+y\beta ^{(1)},\xi ^{(1)}+y\varsigma ^{(1)}\right) =X\left( t,u^{(1)}+y\beta ^{(1)},\xi ^{(1)}+y\varsigma ^{(1)},u^{(2)},\xi ^{(2)}\right) ,\\&X\left( t,u^{(2)}+y\beta ^{(2)},\xi ^{(2)}+y\varsigma ^{(2)}\right) =X\left( t,u^{(1)},\xi ^{(1)},u^{(2)}+y\beta ^{(2)},\xi ^{(2)}+y\varsigma ^{(2)}\right) ,\\&\triangledown _{u^{(i)}}b(t)=\triangledown _{u^{(i)}}b\left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) ;\quad \triangledown _{x}H(t)\left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \\&\quad =\triangledown _{x}H\left( t,X(t),u^{(1)}(t),u^{(2)}(t),p(t),q(t),\tilde{q}(t),r(t,\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \end{aligned}$$

and similarly for other derivatives.

For a bounded \(\left( \beta ^{(i)},\varsigma ^{(i)}\right) \), \(i=1,2\), let the derivative process \( \breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t)\) be defined by

$$\begin{aligned}&\breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t) \\&\quad = \left( \breve{X}_{1}^{\left( u^{(i)},\xi ^{(i)}\right) }(t),\breve{X}_{2}^{\left( u^{(i)},\xi ^{(i)}\right) }(t),\ldots ,\breve{X}_{n}^{\left( u^{(i)},\xi ^{(i)}\right) }(t) \right) ^{T}\\&\quad := \lim _{y\rightarrow 0^{+}}\frac{1}{y}\left[ X\left( t,u^{(i)}+y\beta ^{(i)},\xi ^{(i)}+y\varsigma ^{(i)}\right) -X\left( t,u^{(1)},\xi ^{(1)},u^{(2)},\xi ^{(2)}\right) \right] . \end{aligned}$$

Set

$$\begin{aligned}&\vartheta ^{\left( u^{(i)},\xi ^{(i)}\right) }_{j}(t)=\triangledown _{x}b_{j}(t)^{T} \breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t)+\triangledown _{u^{(i)}}b_{j}(t)^{T}\beta ^{(i)}(t),\\&\iota ^{\left( u^{(i)},\xi ^{(i)}\right) }_{jl}(t)=\triangledown _{x}\sigma _{jl}(t)^{T}\breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t)+\triangledown _{u^{(i)}}\sigma _{jl}(t)^{T}\beta ^{(i)}(t),\\&\kappa ^{\left( u^{(i)},\xi ^{(i)}\right) }_{jl}(t)=\triangledown _{x}\varpi _{jl}(t)^{T}\breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t)+\triangledown _{u^{(i)}}\varpi _{jl}(t)^{T}\beta ^{(i)}(t),\\&\varrho ^{\left( u^{(i)},\xi ^{(i)}\right) }_{jl}(t)= \triangledown _{x}\gamma _{jl}(t)^{T}\breve{X}^{\left( u^{(i)},\xi ^{(i)}\right) }(t)+\triangledown _{u^{(i)}}\gamma _{jl}(t)^{T}\beta ^{(i)}(t), \end{aligned}$$

for \(i=1,2\) and \(j,l=1,\ldots ,n\). Then, it follows from (6) that \(\breve{X}_{j}^{\left( u^{(1)},\xi ^{(1)}\right) }(t)\), the jth element of \( \breve{X}^{\left( u^{(1)},\xi ^{(1)}\right) }(t)\), satisfies the following SDE:

$$\begin{aligned} \mathrm{d}\breve{X}_{j}^{\left( u^{(1)},\xi ^{(1)}\right) }(t)&=\vartheta ^{\left( u^{(1)},\xi ^{(1)}\right) }_{j}(t) \mathrm{d}t+\sum _{l=1}^{n}\iota ^{\left( u^{(1)},\xi ^{(1)}\right) }_{jl}(t) \mathrm{d}B_{l}(t) \nonumber \\&\quad +\,\sum _{l=1}^{n}\kappa ^{\left( u^{(1)},\xi ^{(1)}\right) }_{jl}(t) \mathrm{d}\tilde{B}_{l}(t)+\sum _{l=1}^{n}\int _{\mathbb {R}}\varrho ^{\left( u^{(1)},\xi ^{(1)}\right) }_{jl}(t)\tilde{N}_{l}(\mathrm{d}t,\mathrm{d}z) \nonumber \\&\quad +\,\sum _{l=1}^{n}\triangledown _{x}\alpha _{jl}(t)^{T} \breve{X}^{\left( u^{(1)},\xi ^{(1)}\right) }(t)\mathrm{d}\xi ^{(1)}_{l}(t)+\sum _{l=1}^{n}\alpha _{jl}(t,x)\mathrm{d}\varsigma ^{(1)}_{l}(t) \nonumber \\&\quad +\,\sum _{l=1}^{n}\triangledown _{x}\lambda _{jl}(t)^{T} \breve{X}^{\left( u^{(1)},\xi ^{(1)}\right) }(t)\mathrm{d}\xi ^{(2)}_{l}(t)\\ \breve{X}_{j}^{\left( u^{(1)},\xi ^{(1)}\right) }(0)&= 0\nonumber \end{aligned}$$
(11)

and \(\breve{X}_{j}^{\left( u^{(2)},\xi ^{(2)}\right) }(t)\), the jth element of \(\breve{X}^{\left( u^{(2)},\xi ^{(2)}\right) }(t)\), is described by

$$\begin{aligned} \mathrm{d}\breve{X}_{j}^{\left( u^{(2)},\xi ^{(2)}\right) }(t)&= \vartheta ^{\left( u^{(2)},\xi ^{(2)}\right) }_{j}(t) \mathrm{d}t+\sum _{l=1}^{n}\iota ^{\left( u^{(2)},\xi ^{(2)}\right) }_{jl}(t) \mathrm{d}B_{l}(t) \nonumber \\&\quad +\,\sum _{l=1}^{n}\kappa ^{\left( u^{(2)},\xi ^{(2)}\right) }_{jl}(t) \mathrm{d}\tilde{B}_{l}(t) +\sum _{l=1}^{n}\int _{\mathbb {R}}\varrho ^{\left( u^{(2)},\xi ^{(2)}\right) }_{jl}(t)\tilde{N}_{l}(\mathrm{d}t,\mathrm{d}z) \nonumber \\&\quad +\,\sum _{l=1}^{n}\triangledown _{x}\lambda _{jl}(t)^{T}\breve{X}^{\left( u^{(2)},\xi ^{(2)}\right) }(t)\mathrm{d}\xi ^{(2)}_{l}(t) +\sum _{l=1}^{n}\lambda _{jl}(t,x)\mathrm{d}\varsigma ^{(2)}_{l}(t) \nonumber \\&\quad +\,\sum _{l=1}^{n}\triangledown _{x}\alpha _{jl}(t)^{T}\breve{X}^{\left( u^{(2)},\xi ^{(2)}\right) }(t)\mathrm{d}\xi ^{(1)}_{l}(t) \\ \breve{X}_{j}^{\left( u^{(2)},\xi ^{(2)}\right) }(0)&= 0.\nonumber \end{aligned}$$
(12)

Moreover, we denote

$$\begin{aligned} U(t)&=\left( U_{1}(t),\ldots ,U_{l}(t),\ldots ,U_{n}(t) \right) ,\ V(t)=\left( V_{1}(t),\ldots ,V_{l}(t),\ldots ,V_{n}(t) \right) ,\\ W(t)&=\left( W_{1}(t),\ldots ,W_{l}(t),\ldots ,W_{n}(t) \right) , \ \Upsilon (t)=\left( \Upsilon _{1}(t),\ldots ,\Upsilon _{l}(t),\ldots ,\Upsilon _{n}(t) \right) , \end{aligned}$$

where

$$\begin{aligned} U_{l}(t)=&\sum _{j=1}^{n}p_{j}(t)\alpha _{jl}(t,X(t))+h_{l}(t,X(t)), \end{aligned}$$
(13)
$$\begin{aligned} V_{l}(t)=&\sum _{j=1}^{n}\alpha _{jl}(t,X(t))\left( \sum _{m=1}^{n}\int _{\mathbb {R}_{0}}r_{jm}(\{t\},z)\tilde{N}_{m}(\{t\},\mathrm{d}z) +p_{j}(t)\right) \nonumber \\&+h_{l}(t,X(t)), \end{aligned}$$
(14)
$$\begin{aligned} W_{l}(t)=&\sum _{j=1}^{n}p_{j}(t)\lambda _{jl}(t,X(t))+k_{l}(t,X(t)), \end{aligned}$$
(15)
$$\begin{aligned} \Upsilon _{l}(t)=&\sum _{j=1}^{n}\lambda _{jl}(t,X(t))\left( \sum _{m=1}^{n}\int _{\mathbb {R}_{0}}r_{jm}(\{t\},z)\tilde{N}_{m}(\{t\},\mathrm{d}z) +p_{j}(t)\right) \nonumber \\&+k_{l}(t,X(t)). \end{aligned}$$
(16)

Now, we state and prove the necessary optimality conditions for the saddle point of the game (8).

Theorem 3.2

(Necessary optimality conditions for zero-sum game) Suppose \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) is the saddle point of the game (8). Let \(\hat{X}(t)\), \(\hat{p}(t)\), \(\hat{q}(t)\), \(\hat{\tilde{q}}(t)\), \(\hat{r}(t,\cdot )\), \(\breve{X}^{\left( \hat{u}^{(1)},\hat{\xi }^{(1)}\right) }(t)\), \(\breve{X}^{\left( \hat{u}^{(2)},\hat{\xi }^{(2)}\right) }(t)\) be the solutions of (6), (10), (11), (12) corresponding to \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) and let \(\hat{U}(t)\), \(\hat{V}(t)\), \(\hat{W}(t)\), \(\hat{\Upsilon }(t)\) be the corresponding coefficients (see (13)–(16) ). Moreover, for \(i=1,2\), assume that

$$\begin{aligned}&E\left[ \int _{0}^{T}\breve{X}^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }(t)^{T}\left\{ \hat{q}\hat{q}^{T}(t)+\hat{\tilde{q}}\hat{\tilde{q}}^{T}(t)\right. \right. \\&\qquad \left. \left. +\int _{\mathbb {R}^{n}}\hat{r}\hat{r}^{T}(t,z)\nu (\mathrm{d}z) \right\} \breve{X}^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }(t)\mathrm{d}t\right]<\infty ,\\&E\left[ \int _{0}^{T}\hat{p}^{T}(t)\left\{ \iota ^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }\left. \iota ^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }\right. ^{T}(t,\hat{X}(t),\hat{u}^{(1)}(t),\hat{u}^{(2)}(t))\right. \right. \\&\qquad +\kappa ^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }\left. \kappa ^{\left( \hat{u}^{(i)},\hat{\xi }^{(i)}\right) }\right. ^{T}(t,\hat{X}(t),\hat{u}^{(1)}(t),\hat{u}^{(2)}(t))\\&\qquad \left. \left. +\int _{\mathbb {R}^{n}}\gamma \gamma ^{T}(t,\hat{X}(t),\hat{u}^{(1)}(t),\hat{u}^{(2)}(t))\nu (\mathrm{d}z) \right\} \hat{p}(t)\mathrm{d}t\right] <\infty . \end{aligned}$$

Then, for almost all \(t\in [0,T]\), the saddle point \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) satisfies:

$$\begin{aligned} 0&= E\left[ \left. \bigtriangledown _{u^{(1)}}H\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \right| \mathcal {G}^{(1)}_{t}\right] \nonumber \\&= E\left[ \left. \bigtriangledown _{u^{(2)}}H\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \right| \mathcal {G}^{(2)}_{t}\right] ; \end{aligned}$$
(17)
$$\begin{aligned}&E\left[ \left. \hat{U}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. \hat{U}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \mathrm{d}\hat{\xi }_{l}^{(1),C}(t)=0; \end{aligned}$$
(18)
$$\begin{aligned}&E\left[ \left. \hat{W}_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \ge 0 \quad \text {and}\quad E\left[ \left. \hat{W}_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \mathrm{d}\hat{\xi }_{l}^{(2),C}(t)=0; \end{aligned}$$
(19)
$$\begin{aligned}&E\left[ \left. \hat{V}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. \hat{V}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \triangle \hat{\xi }_{l}^{(1)}(t)=0; \end{aligned}$$
(20)
$$\begin{aligned}&E\left[ \left. \hat{\Upsilon }_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \ge 0 \quad \text {and}\quad E\left[ \left. \hat{\Upsilon }_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \triangle \hat{\xi }_{l}^{(2)}(t)=0, \end{aligned}$$
(21)

where \(\hat{\xi }_{l}^{(i)}(t)\) is the lth element of \(\hat{\xi }^{(i)}(t)\), \(l=1,\ldots ,n\), \(\hat{\xi }_{l}^{(i),C}(t)\) is the continuous part of \(\hat{\xi }_{l}^{(i)}(t)\) and \(\triangle \hat{\xi }_{l}^{(i)}(t)=\hat{\xi }_{l}^{(i)}(t)-\hat{\xi }_{l}^{(i)}(t-)\) is the pure discontinuous part of \(\hat{\xi }_{l}^{(i)}\) at time t, \(i=1,2\).

Proof

See “Appendix A.” \(\square \)

3.3 Sufficient Optimality Conditions for Zero-Sum Regular–Singular Game

In Theorem 3.2, we have presented the necessary optimality conditions (17), (18), (19), (20) and (21) for the saddle point of the game (8). In this subsection, we impose some additional conditions such that these necessary optimality conditions are also sufficient for the saddle point. For the sake of simplicity, we denote

$$\begin{aligned}&X^{\left( u^{(1)},\xi ^{(1)}\right) }(t)=X\left( t,u^{(1)}(t),\hat{u}^{(2)}(t),\xi ^{(1)}(t),\hat{\xi }^{(2)}(t)\right) ,\\&X^{\left( u^{(2)},\xi ^{(2)}\right) }(t)=X\left( t,\hat{u}^{(1)}(t),u^{(2)}(t),\hat{\xi }^{(1)}(t),\xi ^{(2)}(t)\right) . \end{aligned}$$

Let \(\hat{X}(t)\), \(\hat{p}(t)\), \(\hat{q}(t)\), \(\hat{\tilde{q}}(t)\), \(\hat{r}(\cdot )(t,z)\) be the solutions of Eqs. (6) and (10) corresponding to \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\). Assume that for all \(\left( u^{(1)},\xi ^{(1)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\) and \(\left( u^{(2)},\xi ^{(2)}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}\), we have

$$\begin{aligned}&E\left[ \int _{0}^{T}\left( \hat{X}(t)-X^{\left( u^{(i)},\xi ^{(i)}\right) }(t) \right) ^{T}\left\{ \hat{q}\hat{q}^{T}(t)+\hat{\tilde{q}}\hat{\tilde{q}}^{T}(t)\right. \right. \\&\qquad \left. \left. +\int _{\mathbb {R}^{n}}\hat{r}\hat{r}^{T}(t,z)\nu (\mathrm{d}z) \right\} \left( \hat{X}(t)-X^{\left( u^{(i)},\xi ^{(i)}\right) }(t) \right) \mathrm{d}t\right]<\infty ,\\&E\left[ \int _{0}^{T}\hat{p}^{T}(t)\left\{ \sigma \sigma ^{T}\left( t,X^{\left( u^{(i)},\xi ^{(i)}\right) }(t)\right) +\varpi \varpi ^{T}\left( t,X^{\left( u^{(i)},\xi ^{(i)}\right) }(t)\right) \right. \right. \\&\qquad \left. \left. +\int _{\mathbb {R}^{n}}\gamma \gamma ^{T}\left( t,X^{\left( u^{(i)},\xi ^{(i)}\right) }(t),z\right) \nu (\mathrm{d}z) \right\} \hat{p}(t)\mathrm{d}t\right] <\infty ,\ \ i=1,2. \end{aligned}$$

Theorem 3.3

(Sufficient optimality conditions for zero-sum game) Assume that \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) satisfies (18), (19), (20), (21) and

$$\begin{aligned}&\sup _{ u^{(1)}} E\left[ \left. H\left( t,\hat{X}(t),u^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(1)}_{t}\right] \nonumber \\&\quad = E\left[ \left. H\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(1)}_{t}\right] \end{aligned}$$
(22)

and

$$\begin{aligned}&\inf _{u^{(2)}} E\left[ \left. H\left( t,\hat{X}(t),\hat{u}^{(1)},u^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(2)}_{t}\right] \nonumber \\&\quad = E\left[ \left. H\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(2)}_{t}\right] . \end{aligned}$$
(23)
  1. (I)

    Suppose that \(x\rightarrow g(x)\) and

    $$\begin{aligned} \left( x,u^{(1)},\xi ^{(1)}\right) \rightarrow H\left( t,x,u^{(1)},\hat{u}^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) \end{aligned}$$

    are concave. Then,

    $$\begin{aligned} \mathcal {J}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) =\sup _{\left( u^{(1)},\xi ^{(1)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}} \mathcal {J}\left( u^{(1)},\xi ^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) . \end{aligned}$$
    (24)
  2. (II)

    Suppose that \(x\rightarrow g(x)\) and

    $$\begin{aligned} \left( x,u^{(2)},\xi ^{(2)}\right) \rightarrow H\left( t,x,\hat{u}^{(1)},u^{(2)},\hat{p},\hat{q},\hat{\tilde{q}},\hat{r}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\xi ^{(2)}\right) \end{aligned}$$

    are convex. Then,

    $$\begin{aligned} \mathcal {J}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) =\inf _{\left( u^{(2)},\xi ^{(2)}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}} \mathcal {J}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};u^{(2)},\xi ^{(2)}\right) . \end{aligned}$$
  3. (III)

    Suppose that both cases (I) and (II) hold. Then the regular–singular control \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) is a saddle point for the zero-sum game (8).

Proof

See “Appendix B.” \(\square \)

4 Application to the Optimal Investment and Dividend Problem of an Insurer Under Model Uncertainty

We come back to the investment and dividend problem of an insurer under model uncertainty in Sect. 2 and apply the necessary and sufficient optimality conditions (Theorems 3.2 and 3.3) to characterize the saddle point \(\left( \hat{u}(\cdot ),\hat{\xi }(\cdot ), \hat{\theta }\right) \) of the zero-sum game (5).

First, we put

$$\begin{aligned} \mathrm{d}X(t) =&\left( \begin{array}{c} \mathrm{d}X_{1}(t)\\ \mathrm{d}X_{2}(t) \end{array} \right) = \left( \begin{array}{c} \mathrm{d}Y(t) \\ \mathrm{d}M^{\theta }(t) \end{array} \right) \nonumber \\ =&\left( \begin{array}{c} \kappa (t)+\rho (t)Y(t)+u(t)\left[ \zeta (t)-\rho (t)\right] -\int _{\mathbb {R}_{0}}z \nu (\mathrm{d}z)\\ 0 \end{array} \right) \mathrm{d}t \nonumber \\&+\left( \begin{array}{c} \pi (t)u(t)\\ -\theta (t) M^{\theta }(t) \end{array} \right) \mathrm{d}B(t) +\left( \begin{array}{c} Q\\ -\theta (t) M^{\theta }(t) \end{array} \right) \mathrm{d}\tilde{B}(t) \nonumber \\&-\left( \begin{array}{c} z\\ \theta (t) M^{\theta }(t) \end{array} \right) \tilde{N}(\mathrm{d}t,\mathrm{d}z) -\left( \begin{array}{c} 1\\ 0 \end{array} \right) \mathrm{d}\xi (t). \end{aligned}$$
(25)

With the notation of Sect. 3, we have, for the game (5),

$$\begin{aligned}&u^{(1)}(t)=u(t),\qquad u^{(2)}(t)=\theta (t),\qquad \xi ^{(1)}(t)=\xi (t),\\&\xi ^{(2)}(t)=0,\qquad \mathcal {G}^{(1)}_{t}=\mathcal {G}_{t},\qquad \mathcal {G}^{(2)}_{t}=\mathcal {F}_{t},\\&b(t,x,u^{(1)},u^{(2)})=\left( \begin{array}{c} \kappa (t)+\rho (t)x(t)+u(t)\left[ \zeta (t)-\rho (t)\right] -\int _{\mathbb {R}_{0}}z\nu (\mathrm{d}z)\\ 0 \end{array} \right) ,\\&\sigma (t,x,u^{(1)},u^{(2)})=\left( \begin{array}{c} \pi (t)u(t)\\ -\theta (t) M^{\theta }(t) \end{array} \right) ; \ \ \varpi (t,x,u^{(1)},u^{(2)})=\left( \begin{array}{c} Q\\ -\theta (t) M^{\theta }(t) \end{array} \right) ,\\&\gamma (t,x,u^{(1)},u^{(2)},z)=-\left( \begin{array}{c} z\\ \theta (t) M^{\theta }(t) \end{array} \right) , \ \ \alpha (t,x)=-\left( \begin{array}{c} 1\\ 0 \end{array} \right) , \ \ \lambda (t,x)=\left( \begin{array}{c} 0\\ 0 \end{array} \right) , \\&f(t,x,u^{(1)},u^{(2)})=0,\quad g(x)=x_{2}x_{1},\quad h(t,x)=e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}x_{2},\quad k(t,x)=0. \end{aligned}$$

Then, by (9), the Hamiltonian is

$$\begin{aligned}&H(t,x_{1},x_{2},u,\theta ,p,q,\tilde{q},r(\cdot ))(\mathrm{d}t,\mathrm{d}\xi )\\&=\left\{ \left[ \kappa (t)+\rho (t)x_{1}+u\left[ \zeta (t)-\rho (t)\right] -\int _{\mathbb {R}_{0}}z\nu (\mathrm{d}z)\right] p_{1} \right. \\&\quad \left. +\,\pi u q_{1}-\theta x_{2}q_{2}+Q \tilde{q}_{1}-\theta x_{2}\tilde{q}_{2} -\int _{\mathbb {R}}\left\{ zr_{1}(t,z)+\theta x_{2}r_{2}(t,z)\right\} \nu (\mathrm{d}z)\right\} \mathrm{d}t\\&\quad +\,\left[ - p_{1}+x_{2}e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\right] \mathrm{d}\xi (t)-\int _{\mathbb {R}_{0}}r_{1}(\{t\},z)N(\{t\},\mathrm{d}z)\triangle \xi (t), \end{aligned}$$

where the adjoint processes \(p_{i}(t)\), \(q_{i}(t)\), \(\tilde{q}_{i}(t)\), \(r_{i}(\cdot )(t,z)\), \(i=1,2\), are given by the following backward SDEs:

$$\begin{aligned} \mathrm{d}p_{1}(t)= & {} -\rho (t)p_{1}(t)\mathrm{d}t+q_{1}(t)\mathrm{d}B(t)+\tilde{q}_{1}(t)\mathrm{d}\tilde{B}(t)+\int _{\mathbb {R}_{0}}r_{1}(t,z)\tilde{N}(\mathrm{d}t,\mathrm{d}z), \nonumber \\ p_{1}(T)= & {} X_{2}(T) \end{aligned}$$
(26)

and

$$\begin{aligned} \mathrm{d}p_{2}(t)=&\left[ \theta (t)q_{2}(t)+\theta (t)\tilde{q}_{2}(t)+\int _{\mathbb {R}}\theta (t)r_{2}(t,z)\nu (\mathrm{d}z)\right] \mathrm{d}t+q_{2}(t)\mathrm{d}B(t) \nonumber \\&+\,\tilde{q}_{2}(t)\mathrm{d}\tilde{B}(t)+\int _{\mathbb {R}_{0}}r_{2}(t,z)\tilde{N}(\mathrm{d}t,\mathrm{d}z)-e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\mathrm{d}\xi (t), \nonumber \\ p_{2}(T)=&X_{1}(T). \end{aligned}$$
(27)

Moreover, we have

$$\begin{aligned}&U(t)= - p_{1}(t)+e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}X_{2}(t),\qquad W(t)=0,\qquad \Upsilon (t)=0,\\&V(t)=-\int _{\mathbb {R}_{0}}r_{1}(\{t\},z)N(\{t\},\mathrm{d}z)- p_{1}(t)+e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}X_{2}(t). \end{aligned}$$

Let \((\hat{u},\hat{\xi },\hat{\theta })\in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {F}}\) be a saddle point of the game (5) and let \(\hat{X}(t)\), \(\hat{X}_{i}(t)\), \(\hat{p}_{i}(t)\), \(\hat{q}_{i}(t)\), \(\hat{\tilde{q}}_{i}(t)\), \(\hat{r}_{i}(\cdot )(t,z)\), \(i=1,2\), be the corresponding solutions of (25), (26) and (27). Then, by Theorems 3.2 and 3.3, the optimality condition (17) leads to

$$\begin{aligned} \left[ \zeta (t)-\rho (t)\right] E\left[ \left. \hat{p}_{1}(t)\ \right| \ \mathcal {G}_{t}\right] +\pi (t)E\left[ \left. \hat{q}_{1}(t)\ \right| \ \mathcal {G}_{t}\right] =0 . \end{aligned}$$
(28)
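
Condition (28) is just the first-order condition (17) written out for the present Hamiltonian: the only terms of H depending on u are \(u\left[ \zeta (t)-\rho (t)\right] p_{1}\mathrm{d}t\) and \(\pi u q_{1}\mathrm{d}t\), so

$$\begin{aligned} \bigtriangledown _{u}H\left( t,x_{1},x_{2},u,\theta ,p,q,\tilde{q},r(\cdot )\right) (\mathrm{d}t,\mathrm{d}\xi )=\left\{ \left[ \zeta (t)-\rho (t)\right] p_{1}+\pi (t)q_{1}\right\} \mathrm{d}t, \end{aligned}$$

and taking the conditional expectation given \(\mathcal {G}_{t}\) and setting it equal to zero yields (28).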

By the optimality condition (18), it follows that \(\hat{\xi }^{C}(t)\), which is the continuous part of \(\hat{\xi }(t)\), satisfies:

$$\begin{aligned} e^{-\int _{0}^{t}\rho (s)\mathrm{d}s} E\left[ \left. \hat{X}_{2}(t)\ \right| \ \mathcal {G}_{t}\right] \le E\left[ \left. \hat{p}_{1}(t)\ \right| \ \mathcal {G}_{t}\right] \end{aligned}$$
(29)

and

$$\begin{aligned} E\left[ \left. \left\{ e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\hat{X}_{2}(t)-\hat{p}_{1}(t)\right\} \ \right| \ \mathcal {G}_{t}\right] \mathrm{d}\hat{\xi }^{C}(t)=0. \end{aligned}$$
(30)

Furthermore, by the optimality condition (20), we see that

$$\begin{aligned} E\left[ \left. e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\hat{X}_{2}(t)-\int _{\mathbb {R}_{0}}\hat{r}_{1}(\{t\},z)N(\{t\},\mathrm{d}z)- \hat{p}_{1}(t)\ \right| \ \mathcal {G}_{t}\right] \le 0 \end{aligned}$$
(31)

and

$$\begin{aligned} E\left[ \left. e^{-\int _{0}^{t}\rho (s)\mathrm{d}s}\hat{X}_{2}(t)-\int _{\mathbb {R}_{0}}\hat{r}_{1}(\{t\},z)N(\{t\},\mathrm{d}z)- \hat{p}_{1}(t)\ \right| \ \mathcal {G}_{t}\right] \triangle \hat{\xi }(t)=0 \end{aligned}$$
(32)

hold for all \(t\in [0,T]\), where \(\triangle \hat{\xi }(t)=\hat{\xi }(t)-\hat{\xi }(t-)\) is the pure discontinuous part of \(\hat{\xi }(t)\).

On the other hand, minimizing \(E\left[ \left. H(t,x_{1},x_{2},u,\theta ,p,q,\tilde{q},r(\cdot ))(\mathrm{d}t,\mathrm{d}\xi )\ \right| \ \mathcal {F}_{t}\right] \) over all \(\theta \), we obtain a minimum point \(\hat{\theta }\) such that

$$\begin{aligned} E\left[ \left. \hat{X}_{2}(t)\hat{q}_{2}(t)+\hat{X}_{2}(t)\hat{\tilde{q}}_{2}(t)+\int _{\mathbb {R}}\hat{X}_{2}(t)\hat{r}_{2}(t,z) \nu (\mathrm{d}z)\ \right| \ \mathcal {F}_{t}\right] =0. \end{aligned}$$
(33)
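
Condition (33) is the corresponding first-order condition in \(\theta \): the Hamiltonian depends on \(\theta \) only through the terms \(-\theta x_{2}q_{2}\), \(-\theta x_{2}\tilde{q}_{2}\) and \(-\int _{\mathbb {R}}\theta x_{2}r_{2}(t,z)\nu (\mathrm{d}z)\), so

$$\begin{aligned} \bigtriangledown _{\theta }H\left( t,x_{1},x_{2},u,\theta ,p,q,\tilde{q},r(\cdot )\right) (\mathrm{d}t,\mathrm{d}\xi )=-\left\{ x_{2}q_{2}+x_{2}\tilde{q}_{2}+\int _{\mathbb {R}}x_{2}r_{2}(t,z)\nu (\mathrm{d}z)\right\} \mathrm{d}t, \end{aligned}$$

and setting the \(\mathcal {F}_{t}\)-conditional expectation of this derivative, evaluated along \(\left( \hat{X},\hat{q}_{2},\hat{\tilde{q}}_{2},\hat{r}_{2}\right) \), equal to zero gives (33).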

We summarize the results obtained above in the following theorem.

Theorem 4.1

The triple \((\hat{u},\hat{\xi };\hat{\theta })\in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {F}}\) is a saddle point of the investment and dividend problem of an insurer under model uncertainty (5) if and only if \((\hat{u},\hat{\xi };\hat{\theta })\) satisfies (28), (29), (30), (31), (32) and (33), where \(\hat{p}_{i}(t)\), \(\hat{q}_{i}(t)\), \(\hat{\tilde{q}}_{i}(t)\), \(\hat{r}_{i}(\cdot )(t,z)\), \(i=1,2\), are solutions of the backward SDEs (26) and (27).

In Theorem 4.1, the adjoint processes \(\hat{p}_{i}(t)\), \(\hat{q}_{i}(t)\), \(\hat{\tilde{q}}_{i}(t)\), \(\hat{r}_{i}(\cdot )(t,z)\), \(i=1,2\), are obtained through solving the backward SDEs (26) and (27), which are usually very difficult to solve. Here, we leave the solution methods of these backward SDEs for future research.
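
To indicate the structure of these equations, observe that (26) is linear: by the product rule, \(e^{\int _{0}^{t}\rho (s)\mathrm{d}s}\hat{p}_{1}(t)\) is a (local) martingale, so that, under suitable integrability,

$$\begin{aligned} \hat{p}_{1}(t)=E\left[ \left. e^{\int _{t}^{T}\rho (s)\mathrm{d}s}\hat{X}_{2}(T)\ \right| \ \mathcal {F}_{t}\right] ,\qquad t\in [0,T]. \end{aligned}$$

The main difficulty lies in the coupling: \(\hat{X}_{2}=M^{\hat{\theta }}\) depends on the worst-case scenario \(\hat{\theta }\), which is in turn characterized through \(\left( \hat{q}_{2},\hat{\tilde{q}}_{2},\hat{r}_{2}\right) \) from the second backward SDE (27) via (33), so the forward system (25), the backward SDEs (26)–(27) and the conditions (28)–(33) have to be solved simultaneously.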

5 Necessary and Sufficient Optimality Conditions for Nonzero-Sum Regular–Singular Stochastic Differential Game with Asymmetric Information

In this section, we consider the nonzero-sum game between the two players, who intervene in the system with their regular–singular controls. Their actions are not necessarily antagonistic; namely, they have different payoffs and each of them acts to maximize his own payoff.

Let X(t) be the stochastic process given by (6). Suppose that player i, \(i=1,2\), controls the system with strategy \(\left( u^{(i)},\xi ^{(i)}\right) \in \mathcal {A}^{(i)}_{\mathcal {G}}\) and his utility functional is of the form

$$\begin{aligned} \mathcal {J}_{i}\left( u^{(1)},\xi ^{(1)};u^{(2)},\xi ^{(2)}\right)&= E\left[ \int _{0}^{T}f^{(i)}\left( t,X(t),u^{(1)}(t),u^{(2)}(t),\omega \right) \mathrm{d}t\right. \\&\quad \ +g^{(i)}(X(T),\omega )+\int _{0}^{T}h^{(i)}(t,X(t),\omega )\mathrm{d}\xi ^{(1)}(t)\\&\quad \ \left. +\int _{0}^{T}k^{(i)}(t,X(t),\omega )\mathrm{d}\xi ^{(2)}(t)\right] . \end{aligned}$$

As the game is typically nonzero-sum, we seek a Nash equilibrium point \(\Big (\hat{u}^{(1)},\hat{\xi }^{(1)}; \hat{u}^{(2)},\hat{\xi }^{(2)}\Big ) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) such that

$$\begin{aligned} \mathcal {J}_{1}\left( u^{(1)},\xi ^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right)&\le \mathcal {J}_{1}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) , \left( u^{(1)},\xi ^{(1)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}; \end{aligned}$$
(34)
$$\begin{aligned} \mathcal {J}_{2}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};u^{(2)},\xi ^{(2)}\right)&\le \mathcal {J}_{2}\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) , \left( u^{(2)},\xi ^{(2)}\right) \in \mathcal {A}^{(2)}_{\mathcal {G}}. \end{aligned}$$
(35)

The existence of a Nash equilibrium point shows that, by an appropriate design of \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)} \right) \), player 1 can induce player 2 to choose the best control \(\left( \hat{u}^{(2)},\hat{\xi }^{(2)} \right) \), and vice versa.

In order to give the necessary and sufficient optimality conditions, let the Hamiltonian functions \(H_{1}\) and \(H_{2}\) for the nonzero-sum game (34)–(35) be defined as follows.

Definition 5.1

For \(i=1,2\), the Hamiltonian function

$$\begin{aligned} H_{i}: [0,T]\times \mathbb {R}^{n}\times \mathcal {U}_{1}\times \mathcal {U}_{2} \times \mathbb {R}^{n}\times \mathbb {R}^{n\times n}\times \mathbb {R}^{n\times n}\times \mathcal {R}\rightarrow \mathcal {D} \end{aligned}$$

is defined as follows:

$$\begin{aligned}&H_{i}\left( t,x,u^{(1)},u^{(2)},p^{(i)},q^{(i)},\tilde{q}^{(i)},r^{(i)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \nonumber \\&\quad = f^{(i)}\left( t,x,u^{(1)},u^{(2)}\right) \mathrm{d}t +b^{T}\left( t,x,u^{(1)},u^{(2)}\right) p^{(i)}\mathrm{d}t \nonumber \\&\qquad +tr\left( \sigma ^{T}\left( t,x,u^{(1)},u^{(2)}\right) q^{(i)} \right) \mathrm{d}t +tr\left( \varpi ^{T}\left( t,x,u^{(1)},u^{(2)}\right) \tilde{q}^{(i)} \right) \mathrm{d}t \nonumber \\&\qquad +\sum _{j,l=1}^{n}\int _{\mathbb {R}}\gamma _{jl}\left( t,x,u^{(1)},u^{(2)},z\right) r^{(i)}_{jl}(t,z)\nu _{l}(\mathrm{d}z) \mathrm{d}t \nonumber \\&\qquad +\left[ p^{(i)T}\alpha (t,x)+h^{(i)}(t,x)\right] \mathrm{d}\xi ^{(1)}(t)+ \left[ p^{(i)T}\lambda (t,x)+k^{(i)}(t,x)\right] \mathrm{d}\xi ^{(2)}(t) \nonumber \\&\qquad +\sum _{j,l,m=1}^{n}\left[ \alpha _{jl}\left( \{t\},x\right) \triangle \xi _{l}^{(1)}(t)\right. \nonumber \\&\qquad \left. +\,\lambda _{jl}\left( \{t\},x\right) \triangle \xi _{l}^{(2)}(t)\right] \int _{\mathbb {R}_{0}}r^{(i)}_{jm}(\{t\},z)N_{m}(\{t\},\mathrm{d}z), \end{aligned}$$
(36)

where the adjoint processes \(\left( p^{(i)}(t),q^{(i)}(t),\tilde{q}^{(i)}(t),r^{(i)}(\cdot )(t,z)\right) \) are solutions of the following backward SDEs:

$$\begin{aligned} \mathrm{d}p^{(i)}(t)&= -\triangledown _{x} H_{i}\left( t,X(t),u^{(1)},u^{(2)},p^{(i)},q^{(i)},\tilde{q}^{(i)},r^{(i)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\xi ^{(2)}\right) \nonumber \\&\quad \ +q^{(i)}(t)\mathrm{d}B(t)+\tilde{q}^{(i)}(t)\mathrm{d}\tilde{B}(t)+\int _{\mathbb {R}^{n}_{0}}r^{(i)}(t,z)\tilde{N}(\mathrm{d}t,\mathrm{d}z), t\in [0,T], \nonumber \\ p^{(i)}(T)&=\triangledown g^{(i)}(X(T)), \quad i=1,2. \end{aligned}$$
(37)

Let

$$\begin{aligned} U^{(1)}(t)&=\left( U^{(1)}_{1}(t),\ldots ,U^{(1)}_{l}(t),\ldots ,U^{(1)}_{n}(t) \right) ,\\ V^{(1)}(t)&=\left( V^{(1)}_{1}(t),\ldots ,V^{(1)}_{l}(t),\ldots ,V^{(1)}_{n}(t) \right) ,\\ W^{(2)}(t)&=\left( W^{(2)}_{1}(t),\ldots ,W^{(2)}_{l}(t),\ldots ,W^{(2)}_{n}(t) \right) ,\\ \Upsilon ^{(2)}(t)&=\left( \Upsilon ^{(2)}_{1}(t),\ldots ,\Upsilon ^{(2)}_{l}(t),\ldots ,\Upsilon ^{(2)}_{n}(t) \right) , \end{aligned}$$

where

$$\begin{aligned} U^{(1)}_{l}(t)=&\sum _{j=1}^{n}p^{(1)}_{j}(t)\alpha _{jl}(t,X(t))+h^{(1)}_{l}(t,X(t)), \nonumber \\ V^{(1)}_{l}(t)=&\sum _{j=1}^{n}\alpha _{jl}(t,X(t))\left( p^{(1)}_{j}(t)+\sum _{m=1}^{n}\int _{\mathbb {R}_{0}}r^{(1)}_{jm}(\{t\},z)\tilde{N}_{m}(\{t\},\mathrm{d}z)\right) \end{aligned}$$
(38)
$$\begin{aligned}&+h^{(1)}_{l}(t,X(t)), \end{aligned}$$
(39)
$$\begin{aligned} W^{(2)}_{l}(t)=&\sum _{j=1}^{n}p^{(2)}_{j}(t)\lambda _{jl}(t,X(t))+k^{(2)}_{l}(t,X(t)), \nonumber \\ \Upsilon ^{(2)}_{l}(t)=&\sum _{j=1}^{n}\lambda _{jl}(t,X(t))\left( p^{(2)}_{j}(t)+\sum _{m=1}^{n}\int _{\mathbb {R}_{0}}r^{(2)}_{jm}(\{t\},z)\tilde{N}_{m}(\{t\},\mathrm{d}z)\right) \end{aligned}$$
(40)
$$\begin{aligned}&+k^{(2)}_{l}(t,X(t)). \end{aligned}$$
(41)

Next, we give the necessary optimality conditions and the sufficient conditions for the nonzero-sum game (34)–(35), which are the generalizations of Theorems 3.2 and 3.3.

Theorem 5.1

(Necessary optimality conditions for nonzero-sum game) Let \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\) be a Nash equilibrium point for the nonzero-sum regular–singular game (34)–(35). Suppose that \(\hat{X}(t)\), \(\hat{p}^{(i)}(t)\), \(\hat{q}^{(i)}(t)\), \(\hat{\tilde{q}}^{(i)}(t)\), \(\hat{r}^{(i)}(\cdot )(t,z)\),\(\breve{X}^{\left( \hat{u}^{(1)},\hat{\xi }^{(1)}\right) }(t)\), \(\breve{X}^{\left( \hat{u}^{(2)},\hat{\xi }^{(2)}\right) }(t)\) are the solutions of (6), (37), (11), (12) corresponding to \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) and \(\hat{U}^{(1)}_{l}(t)\), \(\hat{V}^{(1)}_{l}(t)\), \(\hat{W}^{(2)}_{l}(t)\), \(\hat{\Upsilon }^{(2)}_{l}(t)\) are the corresponding coefficients (see (38)–(41) ). Then, for almost all \(t\in [0,T]\), \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) satisfies the following:

$$\begin{aligned}&E\left[ \left. \bigtriangledown _{u^{(1)}}H_{1}\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p}^{(1)},\hat{q}^{(1)},\hat{\tilde{q}}^{(1)},\hat{r}^{(1)}\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \right| \mathcal {G}^{(1)}_{t}\right] =0; \nonumber \\&E\left[ \left. \bigtriangledown _{u^{(2)}}H_{2}\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p}^{(2)},\hat{q}^{(2)},\hat{\tilde{q}}^{(2)},\hat{r}^{(2)}\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \right| \mathcal {G}^{(2)}_{t}\right] =0; \nonumber \\&E\left[ \left. \hat{U}^{(1)}_{l}(t)\right| \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. \hat{U}^{(1)}_{l}(t)\right| \mathcal {G}^{(1)}_{t}\right] \mathrm{d}\hat{\xi }_{l}^{(1),C}(t)=0; \end{aligned}$$
(42)
$$\begin{aligned}&E\left[ \left. \hat{W}^{(2)}_{l}(t)\right| \mathcal {G}^{(2)}_{t}\right] \le 0 \quad \text {and}\quad E\left[ \left. \hat{W}^{(2)}_{l}(t)\right| \mathcal {G}^{(2)}_{t}\right] \mathrm{d}\hat{\xi }_{l}^{(2),C}(t)=0; \end{aligned}$$
(43)
$$\begin{aligned}&E\left[ \left. \hat{V}^{(1)}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \le 0 \quad \text {and} \quad E\left[ \left. \hat{V}^{(1)}_{l}(t)\right| \ \mathcal {G}^{(1)}_{t}\right] \triangle \hat{\xi }_{l}^{(1)}(t)=0; \end{aligned}$$
(44)
$$\begin{aligned}&E\left[ \left. \hat{\Upsilon }^{(2)}_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \le 0 \quad \text {and} \quad E\left[ \left. \hat{\Upsilon }^{(2)}_{l}(t)\right| \ \mathcal {G}^{(2)}_{t}\right] \triangle \hat{\xi }_{l}^{(2)}(t)=0. \end{aligned}$$
(45)

Theorem 5.2

(Sufficient optimality conditions for nonzero-sum game) Let \(\hat{X}(t)\), \(\hat{p}^{(i)}(t)\), \(\hat{q}^{(i)}(t)\), \(\hat{\tilde{q}}^{(i)}(t)\), \(\hat{r}^{(i)}(\cdot )(t,z)\) be the solutions of (6) and (37) corresponding to \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \in \mathcal {A}^{(1)}_{\mathcal {G}}\times \mathcal {A}^{(2)}_{\mathcal {G}}\). Suppose that \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) satisfies (42), (43), (44), (45),

$$\begin{aligned}&E\left[ \left. H_{1}\left( t,\hat{X}(t),u^{(1)},\hat{u}^{(2)},\hat{p}^{(1)},\hat{q}^{(1)},\hat{\tilde{q}}^{(1)},\hat{r}^{(1)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(1)}_{t}\right] \\&\quad \le E\left[ \left. H_{1}\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p}^{(1)},\hat{q}^{(1)},\hat{\tilde{q}}^{(1)},\hat{r}^{(1)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(1)}_{t}\right] \end{aligned}$$

and

$$\begin{aligned}&E\left[ \left. H_{2}\left( t,\hat{X}(t),\hat{u}^{(1)},u^{(2)},\hat{p}^{(2)},\hat{q}^{(2)},\hat{\tilde{q}}^{(2)},\hat{r}^{(2)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(2)}_{t}\right] \\&\quad \le E\left[ \left. H_{2}\left( t,\hat{X}(t),\hat{u}^{(1)},\hat{u}^{(2)},\hat{p}^{(2)},\hat{q}^{(2)},\hat{\tilde{q}}^{(2)},\hat{r}^{(2)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm {d}\hat{\xi }^{(2)}\right) \ \right| \ \mathcal {G}^{(2)}_{t}\right] . \end{aligned}$$

Moreover, suppose that for all \(t\in [0,T]\), the maps \(x\rightarrow g^{(i)}(x)\), \(i=1,2\), and

$$\begin{aligned}&\left( x,u^{(1)},\xi ^{(1)}\right) \rightarrow H_{1}\left( t,x,u^{(1)},\hat{u}^{(2)},\hat{p}^{(1)},\hat{q}^{(1)},\hat{\tilde{q}}^{(1)},\hat{r}^{(1)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\xi ^{(1)},\mathrm{d}\hat{\xi }^{(2)}\right) ,\\&\left( x,u^{(2)},\xi ^{(2)}\right) \rightarrow H_{2}\left( t,x,\hat{u}^{(1)},u^{(2)},\hat{p}^{(2)},\hat{q}^{(2)},\hat{\tilde{q}}^{(2)},\hat{r}^{(2)}(\cdot )\right) \left( \mathrm{d}t,\mathrm{d}\hat{\xi }^{(1)},\mathrm{d}\xi ^{(2)}\right) \end{aligned}$$

are concave. Then, the pair \(\left( \hat{u}^{(1)},\hat{\xi }^{(1)};\hat{u}^{(2)},\hat{\xi }^{(2)}\right) \) is a Nash equilibrium point of the nonzero-sum game (34)–(35).

The proofs are similar to those of Theorems 3.2 and 3.3. Therefore, we omit the details.

6 Conclusions

Motivated by the optimal investment and dividend problem of an insurer under model uncertainty, we first studied a zero-sum regular–singular stochastic differential game with asymmetric partial information. It is worth noting that three sources of risk are included in our model: financial risk, insurance risk, and model uncertainty. Necessary and sufficient optimality conditions were derived for the saddle point of the zero-sum game. Then, with these optimality conditions, we investigated the insurer’s optimal investment and dividend problem under model uncertainty. Furthermore, our results were generalized to nonzero-sum regular–singular games with asymmetric partial information. The necessary and sufficient optimality conditions obtained in this paper are expected to have potential applications in various areas, such as mathematical finance and actuarial mathematics. However, in our optimality conditions the adjoint processes are obtained by solving backward SDEs, which is extremely difficult in general. Therefore, the solution methods of these backward SDEs will be explored in our subsequent work.