1 Introduction

A programming problem seeks to achieve an optimization objective under given constraints. However, since real-world situations are usually not deterministic, traditional mathematical programming models cannot solve all practical decision-making problems. For this reason, probability theory, fuzzy theory, and uncertainty theory have been applied to programming problems.

Stochastic programming provides a method for handling objectives and constraints with stochastic parameters. In 1955, a complete computation procedure was provided by Dantzig (1955) for a special class of two-stage linear programming models in which the first-stage allocations are made to meet an uncertain but known distribution of demands occurring in the second stage. Charnes and Cooper (1959) pioneered chance-constrained programming as a means of dealing with uncertainty by specifying a confidence level at which the stochastic constraints are required to hold. A sequential solution procedure for stochastic linear programming problems with 0–1 variables was described by Levary (1984). Other topics in stochastic programming have been studied by many scholars, for example Schultz (2003), Dyer and Stougie (2006), Nemirovski et al. (2009), etc.

Fuzzy programming provides a method for dealing with optimization problems with fuzzy parameters. Decision-making in a fuzzy environment was presented by Bellman and Zadeh (1970), in which the optimal decision is the alternative that maximizes the membership function of the fuzzy decision. Zimmermann (1978) applied fuzzy linear programming approaches to the linear vector maximum problem. Expected values of fuzzy variables were proposed by Liu and Liu (2002), who also constructed a spectrum of fuzzy expected value models. For recent developments of fuzzy programming, interested readers may refer to Chang (2007), Li and Liu (2015), Dalman and Bayram (2018), Ranjbar and Effati (2020), and so on.

In practice, fuzziness and randomness often appear simultaneously. To deal with this situation, Kwakernaak (1978) introduced the concepts of fuzzy random variables, expectations of fuzzy random variables, etc. He also gave a more intuitive interpretation of the notion of a fuzzy random variable and derived algorithms and examples for determining expectations, fuzzy probabilities, etc. in Kwakernaak (1979). Fuzzy random programming is an optimization theory for dealing with fuzzy random decision-making problems. By discussing a practical engineering problem, Wang and Zhong (1993) introduced linear programming with fuzzy random variable coefficients and gave its simplex algorithm. In 2001, a new concept of the chance of fuzzy random events and a general framework for fuzzy random chance-constrained programming were proposed by Liu (2001). Katagiri et al. (2004) investigated a multi-objective 0–1 programming problem involving fuzzy random variable coefficients and proposed an interactive satisfaction method based on the reference point approach. Fuzzy random programming remains an active topic studied by many scholars, such as Liu and Liu (2005), Li et al. (2006), Ammar (2008), Sakawa et al. (2012), etc.

To study human uncertainty, Liu (2007) founded uncertainty theory. Uncertain programming is the optimization theory in uncertain environments. Liu (2009) proposed uncertain programming, including chance-constrained programming, dependent-chance programming, uncertain dynamic programming, etc., and Liu (2011) applied uncertain programming to the study of project scheduling problems, machine sequencing problems, etc. Subsequently, Liu and Chen (2015) further provided uncertain multi-objective programming and uncertain goal programming. In addition, uncertain multilevel programming was given by Liu and Yao (2015).

To better deal with complex systems involving both human uncertainty and stochasticity, Liu (2013a) presented the new concept of an uncertain random variable and, in 2013, combined probability measure and uncertain measure into a chance measure. Meanwhile, uncertain random programming was first proposed on the basis of chance theory by Liu (2013b). As generalizations of uncertain random programming, Zhou et al. (2014) proposed uncertain random multi-objective programming, and an uncertain random project scheduling programming model was built in Ke et al. (2015).

It is well-known that the additivity of classical probability makes it difficult to portray the non-linear characteristics of some problems, such as risk behavior in incomplete markets, industrial production with incomplete information, etc. Therefore, many scholars have tried to solve these problems by using non-additive probability measures. Choquet (1954) first introduced the concepts of non-additive probability (capacity) and Choquet expectation. With the rapid development of computer science and data information technology, financial risks are becoming more and more complex and their dynamic characteristics stronger, so the Choquet expectation is difficult to apply to the study of modern financial risk. Therefore, Peng (2007) founded sub-linear expectation theory. However, there has long existed a class of complex systems that contain both human uncertainties and stochasticities with sub-linear characteristics, such as investment behavior in an incomplete financial market influenced by government regulation, redundant design of systems, etc. In order to describe such phenomena, Fu et al. (2022) combined sub-linear expectation theory with uncertainty theory to construct two product spaces, yielding a new mathematical tool called U-S chance theory for dealing with complex systems involving both human uncertainties and stochasticities with sub-linear characteristics. In this paper, uncertain random programming models based on U-S chance theory are investigated for the first time; they provide more reasonable solutions to the problems of optimal investment and financial risk management in incomplete markets. In addition, the uncertain random programming models proposed in this paper are also applicable to the study of system reliability design.

The paper is organized as follows. In Sect. 2 and Appendix A, some definitions and properties of uncertainty theory, U-S chance theory, and sub-linear expectation theory used in this paper are reviewed. In Sect. 3, under the framework of U-S chance theory, we present the operational law of uncertain random variables. In Sect. 4, four types of expectations of uncertain random variables are defined, based on sub-linear expectations and Choquet integrals. In Sect. 5, we provide four types of uncertain random programming models. In Sect. 6, two of these models are applied to stock investment in an incomplete financial market and to system reliability design.

2 Preliminary

In this section, we introduce some basic concepts about uncertain variables and uncertain random variables under U-S chance spaces, which are used throughout the paper.

2.1 Uncertain variable

Definition 1

(Liu 2015) Let \(\mathcal {L}\) be a \(\sigma \)-algebra on a non-empty set \(\Gamma \). A set function \(\mathcal {M}\) is called an uncertain measure if it satisfies the following axioms:

Axiom 1 (Normality Axiom): \(\mathcal {M}\{\Gamma \}=1\), for the universal set \(\Gamma ;\)

Axiom 2 (Duality Axiom): \(\mathcal {M}\{\Lambda \} + \mathcal {M}\left\{ \Lambda ^{\textsf{c}}\right\} =1\), for any \(\Lambda \in \mathcal {L};\)

Axiom 3 (Sub-additivity Axiom): For every countable sequence of \(\left\{ \Lambda _{j}\right\} \subset \mathcal {L},\) we have

$$\mathcal {M}\left\{ \bigcup _{j=1}^{\infty }\Lambda _{j}\right\} \le \sum _{j=1}^{\infty }\mathcal {M}\left\{ \Lambda _{j}\right\} .$$

The triplet \((\Gamma , \mathcal {L}, \mathcal {M})\) is called an uncertainty space, and each element \(\Lambda \) in \(\mathcal {L}\) is called an event. In order to obtain the uncertain measure of a compound event, a product uncertain measure is defined as follows:

Axiom 4 (Product Axiom): Let \((\Gamma _{k}, \mathcal {L}_{k}, \mathcal {M}_{k})\) be uncertainty spaces for \(k = 1, 2, \ldots .\) The product uncertain measure \(\mathcal {M}\) is an uncertain measure satisfying

$$\mathcal {M}\left\{ \prod _{k=1}^{\infty }\Lambda _{k}\right\} = \bigwedge _{k=1}^{\infty }\mathcal {M}_{k}\left\{ \Lambda _{k}\right\} ,$$

where \(\Lambda _{k}\) are arbitrarily chosen events from \(\mathcal {L}_{k}\) for \( k = 1, 2, \ldots ,\) respectively.

Definition 2

(Liu 2015) A function \(\tau : \Gamma \mapsto \mathbb {R}\) is called an uncertain variable if it is measurable, i.e.,

$$ \{ \tau \in B\} = \{\gamma \in \Gamma | \tau (\gamma ) \in B \} \in \mathcal {L} $$

for each \(B \in \mathcal {B}(\mathbb {R})\). Its uncertainty distribution is a function given by

$$ \Upsilon (x) = \mathcal {M}\{ \tau \le x\},\; x \in \mathbb {R}. $$

Definition 3

(Liu 2015) The uncertain variables \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) are said to be independent if

$$ \mathcal {M}\left\{ \bigcap _{i=1}^{n}\left\{ \tau _i \in B_i \right\} \right\} = \bigwedge _{i=1}^{n} \mathcal {M}\{ \tau _i \in B_i\} $$

for any \(B_i \in \mathcal {B}(\mathbb {R})\), \(i=1,2,\ldots , n.\)

Definition 4

(Liu 2015) Let \(\tau \) be an uncertain variable. Then, the expected value of \(\tau \) is defined by

$$\begin{aligned} E[\tau ] = \int \limits _{0}^{+\infty } \mathcal {M} \{\tau \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} \mathcal {M} \{\tau \le x\} \textrm{d}x \end{aligned}$$

provided that at least one of the two integrals is finite.
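For illustration, the expected value in Definition 4 can be evaluated numerically. The following minimal Python sketch (not part of the original material; all data are assumed) computes the expected value of a linear uncertain variable \(\mathcal {L}(1, 3)\), whose uncertainty distribution is \(\Upsilon (x)=(x-a)/(b-a)\) on \([a,b]\), and recovers \((a+b)/2=2\):

```python
import numpy as np

# A minimal numerical sketch (assumed data, not from the paper): evaluate
# Definition 4 for a linear uncertain variable L(a, b) with a = 1, b = 3,
# whose uncertainty distribution is Upsilon(x) = (x - a)/(b - a) on [a, b].
# The expected value should be (a + b)/2 = 2.
a, b = 1.0, 3.0

def M_geq(x):   # M{tau >= x} = 1 - Upsilon(x) for a continuous distribution
    return 1.0 - np.clip((x - a) / (b - a), 0.0, 1.0)

def M_leq(x):   # M{tau <= x} = Upsilon(x)
    return np.clip((x - a) / (b - a), 0.0, 1.0)

xs_pos = np.linspace(0.0, 10.0, 100001)    # grid for the first integral
xs_neg = np.linspace(-10.0, 0.0, 100001)   # grid for the second integral
dx = xs_pos[1] - xs_pos[0]
E = M_geq(xs_pos).sum() * dx - M_leq(xs_neg).sum() * dx
print(E)   # approximately 2.0
```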

2.2 Uncertain random variable under U-S chance spaces

In this subsection, we use the framework and notations of Fu et al. (2022).

Definition 5

(Fu et al. 2022) Let \((\Gamma , \mathcal {L}, \mathcal {M})\) be an uncertainty space, and \((\Omega , \mathcal {H}, \mathbb {E})\) be a sub-linear expectation space (see Remark 3 in Appendix A). Suppose that \(\mathbb {V}\) and v are non-additive probabilities generated by \(\mathbb {E}\). A pair of chance spaces generated by an uncertainty space and a sub-linear expectation space (U-S chance spaces for short) are spaces of the form:

$$(\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, \mathbb {V}) = (\Gamma \times \Omega , \mathcal {L} \times \mathcal {F}, \mathcal {M} \times \mathbb {V})$$

and

$$(\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, v) = (\Gamma \times \Omega , \mathcal {L} \times \mathcal {F}, \mathcal {M} \times v),$$

where \(\Gamma \times \Omega \) is the universal set, \(\mathcal {L} \times \mathcal {F}\) is the product \(\sigma \)-algebra, \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) are two product measures.

Here, the notations \((\Omega , \mathcal {H}, \mathbb {E})\), v and \(\mathbb {V}\) were introduced by Peng (2017, 2019) and Chen (2016). For more details, please refer to Appendix A.

Definition 6

(Fu et al. 2022) Let \(\Xi \in \mathcal {L} \times \mathcal {F}\) be an uncertain random event under U-S chance spaces. Then, chance measures ch and CH of \(\Xi \) are given by

$$\begin{aligned} ch\{\Xi \} := \int \limits _{0}^{1} v \left\{ \omega \in \Omega | \mathcal {M}\left\{ \gamma \in \Gamma | (\gamma ,\omega ) \in \Xi \right\} \ge r\right\} \textrm{d}r \end{aligned}$$
(1)

and

$$\begin{aligned} CH\{\Xi \} := \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | \mathcal {M}\left\{ \gamma \in \Gamma | (\gamma ,\omega ) \in \Xi \right\} \ge r\right\} \textrm{d}r, \end{aligned}$$
(2)

respectively.

Remark 1

The universal set \(\Gamma \times \Omega \) is clearly the set of all ordered pairs of the form \((\gamma , \omega )\), where \(\gamma \in \Gamma \) and \(\omega \in \Omega \). That is,

$$\Gamma \times \Omega =\{(\gamma , \omega )|\gamma \in \Gamma ,\omega \in \Omega \}.$$

The product \(\sigma \)-algebra \(\mathcal {L} \times \mathcal {F}\) is the smallest \(\sigma \)-algebra containing measurable rectangles of the form \(\Lambda \times A\), where \(\Lambda \in \mathcal {L}\) and \(A \in \mathcal {F}\). Any element in \(\mathcal {L} \times \mathcal {F}\) is called an event in the U-S chance spaces.

In the following, we discuss the product measures \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) by a method similar to that of Liu (2015, pp. 409–410). Suppose \(\Xi \) is an event in \(\mathcal {L} \times \mathcal {F}\). For each \(\omega \in \Omega \), it is clear that the set

$$\begin{aligned} \Xi _{\omega }=\{\gamma \in \Gamma |(\gamma , \omega ) \in \Xi \} \end{aligned}$$

is an event in \(\mathcal {L}\). Thus, the uncertain measure \(\mathcal {M}\{\Xi _{\omega }\}\) exists for each \(\omega \in \Omega \). However, unfortunately, \(\mathcal {M}\{\Xi _{\omega }\}\) is not necessarily a measurable function with respect to \(\omega \). That is, the set

$$\begin{aligned} \Xi _{x}^{*}=\{\omega \in \Omega |\mathcal {M}\{\Xi _{\omega }\}\ge x\} \end{aligned}$$

is a subset of \(\Omega \) but not necessarily an event in \(\mathcal {F}\) for any real number x. Therefore, upper probability measure \(\mathbb {V}\{\Xi _{x}^{*}\}\) and lower probability measure \(v\{\Xi _{x}^{*}\}\) do not necessarily exist. In this case, we assign

$$\begin{aligned} \mathbb {V}\{\Xi _{x}^{*}\}={\left\{ \begin{array}{ll}\inf \limits _{A\in \mathcal {F},A\supseteq \Xi _{x}^{*}}\mathbb {V}\{A\},& {\text {if}}\ \inf \limits _{A\in \mathcal {F},A\supseteq \Xi _{x}^{*}}\mathbb {V}\{A\}<0.5,\\ \\ \sup \limits _{A\in \mathcal {F},A\subseteq \Xi _{x}^{*}}\mathbb {V}\{A\},& {\text {if}}\ \sup \limits _{A\in \mathcal {F},A\subseteq \Xi _{x}^{*}}\mathbb {V}\{A\}>0.5,\\ \\ 0.5,& {\text {otherwise}}\end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} v\{\Xi _{x}^{*}\}={\left\{ \begin{array}{ll}\inf \limits _{A\in \mathcal {F},A\supseteq \Xi _{x}^{*}}v\{A\},& {\text {if}}\ \inf \limits _{A\in \mathcal {F},A\supseteq \Xi _{x}^{*}}v\{A\}<0.5,\\ \\ \sup \limits _{A\in \mathcal {F},A\subseteq \Xi _{x}^{*}}v\{A\},& {\text {if}}\ \sup \limits _{A\in \mathcal {F},A\subseteq \Xi _{x}^{*}}v\{A\}>0.5,\\ \\ 0.5,& {\text {otherwise}}\end{array}\right. } \end{aligned}$$

in light of the maximum uncertainty principle. This ensures that the upper probability measure \(\mathbb {V}\{\Xi _{x}^{*}\}\) and the lower probability measure \(v\{\Xi _{x}^{*}\}\) exist for any real number x. It is now appropriate to define \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) of \(\Xi \) as the expected values of \(\mathcal {M}\{\Xi _{\omega }\}\) with respect to \(\omega \in \Omega \), i.e.,

$$\begin{aligned} \int \limits _{0}^{1}\mathbb {V}\{\Xi _{r}^{*}\}\textrm{d}r \end{aligned}$$

and

$$\begin{aligned} \int \limits _{0}^{1}v\{\Xi _{r}^{*}\}\textrm{d}r. \end{aligned}$$

Thus, chance measures CH and ch are well-defined.

Fu et al. (2022) also verified that chance measures ch and CH satisfy the following four properties:

  1. (i)
    $$ch\{A \times B\} = \mathcal {M}\{A\} \times v\{B\},\ \ CH\{A \times B\} = \mathcal {M}\{A\} \times \mathbb {V}\{B\},$$

    for any \(A \in \mathcal {L}\) and \(B \in \mathcal {F};\)

  2. (ii)
    $$\begin{aligned} CH\{\Xi \} + ch\{\Xi ^{c}\} =1,\; \Xi \in \mathcal {L} \times \mathcal {F}; \end{aligned}$$
    (3)
  3. (iii)
    $$ch\{\Xi _{1}\} \le ch\{\Xi _{2}\},\ \ CH\{\Xi _{1}\} \le CH\{\Xi _{2}\},$$

    for events \(\Xi _{1}, \Xi _{2} \in \mathcal {L} \times \mathcal {F},\) such that \(\Xi _{1} \subseteq \Xi _{2};\)

  4. (iv)
    $$ch\{\Xi \} \le CH\{\Xi \}, \ \ \Xi \in \mathcal {L} \times \mathcal {F}.$$
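Property (i) can be checked numerically against the defining integrals (1) and (2). The following Python sketch (illustrative only; the measures of \(A\) and \(B\) are assumed) discretizes the integrals for a rectangle event \(\Xi = A \times B\):

```python
import numpy as np

# A minimal sketch (assumed data, not from the paper): approximate the
# chance measures (1) and (2) for a rectangle event Xi = A x B with
# M{A} = 0.7, v{B} = 0.4 and V{B} = 0.6.  For such an event,
# M{Xi_omega} = M{A} if omega is in B and 0 otherwise, so the integrand
# equals v{B} (resp. V{B}) for 0 < r <= M{A} and 0 for r > M{A}.
M_A = 0.7
v_B, V_B = 0.4, 0.6

rs = np.linspace(1e-6, 1.0, 100001)   # grid on (0, 1]
dr = rs[1] - rs[0]
ch = np.where(rs <= M_A, v_B, 0.0).sum() * dr   # approximates (1)
CH = np.where(rs <= M_A, V_B, 0.0).sum() * dr   # approximates (2)

print(ch, M_A * v_B)   # both approximately 0.28, matching property (i)
print(CH, M_A * V_B)   # both approximately 0.42
```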

Definition 7

A function \(\xi : \Gamma \times \Omega \mapsto \mathbb {R}\) is called an uncertain random variable under U-S chance spaces if it is measurable, i.e., for each \(B\in \mathcal {B}(\mathbb {R})\),

$$\{\xi \in B\} = \{(\gamma ,\omega )\in \Gamma \times \Omega |\xi (\gamma ,\omega )\in B\} \in \mathcal {L} \times \mathcal {F}.$$

Example 1

Let \(\eta \) be a Bernoulli random variable under \(\mathbb {E}\) (see Definition 16 in Appendix A) with the set of possible values \(\{a_{1}, a_{2}, \ldots , a_{n}\}\) and \(\tau _{1},\tau _{2},\ldots ,\tau _{n}\) be uncertain variables defined on \((\Gamma ,\mathcal {L},\mathcal {M})\). Suppose that f is a mapping from \(\Gamma \times \{a_{1}, a_{2}, \ldots , a_{n}\}\) to \(\mathbb {R}\) such that

$$f(\gamma , a_{i}) = \tau _{i}(\gamma ), \quad i = 1, 2, \ldots , n.$$

Then

$$\begin{aligned} {\xi } =f(\gamma ,\eta (\omega ))= {\left\{ \begin{array}{ll} \tau _{1},& {{\text {with possible probability measure}} \ p_{1}, }\\ {\tau _{2},}& {{\text {with possible probability measure}} \ p_{2}, } \\ {{\cdots } } \\ {\tau _{n},}& {{\text {with possible probability measure}} \ p_{n}} \end{array}\right. } \end{aligned}$$
(4)

is an uncertain random variable, where \(p_{k} \in [\underline{p}_{k}, \overline{p}_{k}]\) for \(k=1,2,\ldots ,n\), satisfying \(\sum _{k=1}^{n} p_{k}=1\), and \(n \in \mathbb {N}\).

Here and in the sequel, uncertain random variables are based on the U-S chance spaces \((\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, \mathbb {V})\) and \((\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, v)\).

3 Operational law

Theorem 1

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, uncertain random variable

$$\xi = f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})$$

has a lower distribution

$$\begin{aligned} \Phi _{1}(x)&= ch \{\xi \le x\} \nonumber \\&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | \mathcal {M}\{ \xi \le x\}\ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | F (x;\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \end{aligned}$$
(5)

and an upper distribution

$$\begin{aligned} \Phi _{2}(x)&= CH \{\xi \le x\} \nonumber \\&= \int \limits _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | \mathcal {M}\{ \xi \le x\}\ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | F (x;\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r, \end{aligned}$$
(6)

where \(F (x; y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(f(y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function (see Theorem 2.18 in Liu 2015)

$$\begin{aligned}&F^{-1}(\alpha ; y_{1}, y_{2}, \ldots , y_{m}) = f \Big (y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Big .\\&\Big .\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ) \end{aligned}$$

provided that \(f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}.\)

Proof

For any given real numbers \(y_{1}, y_{2}, \ldots , y_{m}\), it follows from the operational law of uncertain variables that \(f(y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is an uncertain variable with uncertainty distribution \(F (x;y_{1}, y_{2}, \ldots , y_{m})\). By using Definition 6, we know that \(\Phi _{1}\) and \(\Phi _{2}\) are the lower and upper distributions of \(\xi \) just with forms (5) and (6), respectively. \(\square \)

Example 2

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables (see Definition 13 (i) in Appendix A) with maximal distribution (see Definition 14 in Appendix A) under \(\mathbb {E}\), i.e.,

$$\displaystyle \mathbb {E}[\varphi (\eta _{i})] = \sup _{\underline{\mu }_{i} \le y_{i} \le \overline{\mu }_{i} }\varphi (y_{i}),$$

for each Borel measurable function \(\varphi \) on \(\mathbb {R}\), and \(\overline{\mu }_{i}=\mathbb {E}[\eta _{i}]\), \(\underline{\mu }_{i}=\mathcal {E}[\eta _{i}]\), \(i = 1, \ldots , m\). And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, the sum

$$\xi = \eta _{1} + \eta _{2} + \cdots + \eta _{m} + \tau _{1} + \tau _{2} + \cdots + \tau _{n}$$

has a lower distribution

$$\begin{aligned} \Phi _{1}(x)&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | \Upsilon (x-(\eta _{1} +\eta _{2} + \cdots + \eta _{m}))\ge r\right\} \textrm{d}r \nonumber \\&= \displaystyle \inf _{\underline{\mu }_{1} \le y_{1} \le \overline{\mu }_{1} } \cdots \inf _{\underline{\mu }_{m} \le y_{m} \le \overline{\mu }_{m} } \Upsilon (x-\left( y_{1} + y_{2} + \cdots + y_{m}\right) )\nonumber \\&=\displaystyle \Upsilon \left( x-\sum _{i=1}^{m}\overline{\mu }_{i} \right) \end{aligned}$$
(7)

and an upper distribution

$$\begin{aligned} \Phi _{2}(x)&= \int \limits _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | \Upsilon (x-(\eta _{1} + \eta _{2} + \cdots + \eta _{m}))\ge r\right\} \textrm{d}r \nonumber \\&= \displaystyle \sup _{\underline{\mu }_{1} \le y_{1} \le \overline{\mu }_{1} } \cdots \sup _{\underline{\mu }_{m} \le y_{m} \le \overline{\mu }_{m} } \Upsilon (x-(y_{1} + y_{2} + \cdots + y_{m}))\nonumber \\&=\displaystyle \Upsilon \left( x-\sum _{i=1}^{m}\underline{\mu }_{i} \right) , \end{aligned}$$
(8)

where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} + \tau _{2} + \cdots + \tau _{n}\) (see Theorem 2.14 in Liu 2015) determined by

$$\Upsilon (z) = \displaystyle \sup _{z_{1}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).$$
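The formulas (7) and (8) are straightforward to evaluate. The following Python sketch (illustrative only; the mean intervals and the linear uncertain variables are assumed) computes the lower and upper distributions of the sum at a point:

```python
import numpy as np

# A minimal sketch of (7)-(8) (assumed data, not from the paper): two
# maximal-distribution random variables with mean intervals [0, 1] and
# [1, 2], and two linear uncertain variables L(0, 2) and L(1, 3).  The
# inverse distribution of tau_1 + tau_2 is the sum of the inverses
# (Liu 2015), and (7)-(8) shift it by the total upper/lower means.
mu_lo = np.array([0.0, 1.0])   # lower means of eta_1, eta_2
mu_up = np.array([1.0, 2.0])   # upper means of eta_1, eta_2
a = np.array([0.0, 1.0])       # left endpoints of the linear variables
b = np.array([2.0, 3.0])       # right endpoints

def Upsilon_inv(alpha):        # inverse distribution of tau_1 + tau_2
    return np.sum(a + alpha * (b - a))

def Upsilon(z):                # numerical inversion on a grid
    grid = np.linspace(1e-6, 1.0 - 1e-6, 10001)
    vals = np.array([Upsilon_inv(al) for al in grid])
    return np.interp(z, vals, grid, left=0.0, right=1.0)

x = 5.0
Phi1 = Upsilon(x - mu_up.sum())   # lower distribution (7): Upsilon(x - 3)
Phi2 = Upsilon(x - mu_lo.sum())   # upper distribution (8): Upsilon(x - 1)
print(Phi1, Phi2)                 # approximately 0.25 and 0.75
```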

Example 3

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be IID random variables (see Definition 13 (iii) in Appendix A) with G-normal distribution (see Definition 15 in Appendix A) under \(\mathbb {E}\), i.e., \(\eta _{1} \sim N \left( 0,[\underline{\sigma }^2,\overline{\sigma }^2]\right) \), where \(\mathbb {E}[\eta _{1}] = \mathcal {E}[\eta _{1}] = 0\), \(\overline{\sigma }^2 = \mathbb {E}[\eta _1^2]\) and \(\underline{\sigma }^2 = \mathcal {E}[\eta _1^2]\). And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, the sum

$$\xi = \eta _{1} + \eta _{2} + \cdots + \eta _{m} + \tau _{1} + \tau _{2} + \cdots + \tau _{n}$$

has a lower distribution

$$\begin{aligned} \Phi _{1}(x)&= \frac{\underline{\sigma }-\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }}\Upsilon (x) + \frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{0}^{\Upsilon (x)} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\overline{\sigma }}\right) \textrm{d}r \nonumber \\&\quad + \frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{\Upsilon (x)}^{1} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\underline{\sigma }}\right) \textrm{d}r \end{aligned}$$
(9)

and an upper distribution

$$\begin{aligned} \Phi _{2}(x)&= \frac{\overline{\sigma }-\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }}\Upsilon (x) + \frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{0}^{\Upsilon (x)} \Phi \left( \frac{x- \Upsilon ^{-1}(r)}{\sqrt{m}\underline{\sigma }}\right) \textrm{d}r \nonumber \\&\quad + \frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{\Upsilon (x)}^{1} \Phi \left( \frac{x- \Upsilon ^{-1}(r)}{\sqrt{m}\overline{\sigma }}\right) \textrm{d}r, \end{aligned}$$
(10)

where \(\Phi \) denotes the standard normal distribution function and \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} + \tau _{2} + \cdots + \tau _{n}\) determined by

$$\Upsilon (z) = \displaystyle \sup _{z_{1}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).$$

Proof

From Definition 15 and Remark 4 in Appendix A, we know that \(\eta _{1} + \eta _{2} + \cdots + \eta _{m} \overset{{\text {d}}}{=} \sqrt{m}\eta _{1}\). According to Corollary 1 in Peng and Zhou (2020), we conclude that

$$\begin{aligned} {v\left\{ \omega \in \Omega \mid \eta _{1}(\omega ) \le t \right\} } = {\left\{ \begin{array}{ll} \frac{\underline{\sigma }-\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }}+\frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \Phi \left( \frac{t}{\overline{\sigma }}\right) ,& {t \ge 0}, \\ {\frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }}\Phi \left( \frac{t}{\underline{\sigma }}\right) ,}& {t \le 0}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} {\mathbb {V}\left\{ \omega \in \Omega \mid \eta _{1}(\omega ) \le t \right\} } = {\left\{ \begin{array}{ll} \frac{\overline{\sigma }-\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }}+\frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \Phi \left( \frac{t}{\underline{\sigma }}\right) ,& {t \ge 0}, \\ {\frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }}\Phi \left( \frac{t}{\overline{\sigma }}\right) ,}& {t \le 0}, \end{array}\right. } \end{aligned}$$

where \(\Phi \) denotes the standard normal distribution function.

Then by using the above arguments, it follows that

$$\begin{aligned} \Phi _{1}(x)&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | \mathcal {M} \{\xi \le x\} \ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | \Upsilon (x-\sqrt{m}\eta _{1}(\omega )) \ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega | \eta _{1}(\omega ) \le \frac{x-\Upsilon ^{-1}(r)}{\sqrt{m}} \right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{\Upsilon (x)} \frac{\underline{\sigma }-\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }}+\frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\overline{\sigma }}\right) \textrm{d}r + \int \limits _{\Upsilon (x)}^{1} \frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\underline{\sigma }}\right) \textrm{d}r \nonumber \\&=\!\frac{\underline{\sigma }-\overline{\sigma }}{\underline{\sigma }\!+\!\overline{\sigma }}\Upsilon (x) \!+\! \frac{2\overline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{0}^{\Upsilon (x)} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\overline{\sigma }}\right) \textrm{d}r \nonumber \\&\quad + \frac{2\underline{\sigma }}{\underline{\sigma }+\overline{\sigma }} \int \limits _{\Upsilon (x)}^{1} \Phi \left( \frac{x-\Upsilon ^{-1}(r) }{\sqrt{m}\underline{\sigma }}\right) \textrm{d}r, \end{aligned}$$

where \(\Upsilon (z) = \sup _{z_{1}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).\)

Hence, (9) is proved. By a similar argument, we can verify that (10) holds. Thus, the proof is completed. \(\square \)
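For a concrete impression of (9), the integrals can be discretized directly. The following Python sketch (illustrative only; the parameters and the linear uncertain variable are assumed) evaluates the lower distribution (9); the upper distribution (10) is analogous:

```python
import numpy as np
from scipy.stats import norm

# A minimal sketch of the lower distribution (9) (assumed data, not from
# the paper): m = 4 IID G-normal variables with sigma_lower = 0.5 and
# sigma_upper = 1.0, plus one linear uncertain variable L(0, 2), so that
# Upsilon(z) = z/2 on [0, 2] and Upsilon_inv(r) = 2r.
m, s_lo, s_up = 4, 0.5, 1.0

def Upsilon(z):
    return float(np.clip(z / 2.0, 0.0, 1.0))

def Upsilon_inv(r):
    return 2.0 * r

def Phi1(x, n_grid=20001):
    u = Upsilon(x)
    r = np.linspace(1e-9, 1.0 - 1e-9, n_grid)
    dr = r[1] - r[0]
    lo = r <= u            # the region of the first integral in (9)
    term0 = (s_lo - s_up) / (s_lo + s_up) * u
    term1 = 2 * s_up / (s_lo + s_up) * np.sum(
        norm.cdf((x - Upsilon_inv(r[lo])) / (np.sqrt(m) * s_up))) * dr
    term2 = 2 * s_lo / (s_lo + s_up) * np.sum(
        norm.cdf((x - Upsilon_inv(r[~lo])) / (np.sqrt(m) * s_lo))) * dr
    return term0 + term1 + term2

print(Phi1(1.0))   # a value in [0, 1]
```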

Example 4

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively.

  1. (i)

    The maximum

    $$\begin{aligned} \xi = \eta _{1} \vee \eta _{2} \vee \cdots \vee \eta _{m} \vee \tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n} \end{aligned}$$

    has a lower distribution

    $$\begin{aligned} \Phi _{1}(x) = \Upsilon (x) v \{\eta _{1} \le x\} \cdots v \{\eta _{m} \le x\} \end{aligned}$$
    (11)

    and an upper distribution

    $$\begin{aligned} \Phi _{2}(x) = \Upsilon (x) \mathbb {V} \{\eta _{1} \le x\} \cdots \mathbb {V} \{\eta _{m} \le x\}, \end{aligned}$$
    (12)

    where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n}\) (see Exercise 2.13 in Liu 2015) determined by

    $$\Upsilon (x) = \Upsilon _{1}(x) \wedge \Upsilon _{2}(x) \wedge \cdots \wedge \Upsilon _{n}(x).$$
  2. (ii)

    The minimum

    $$\begin{aligned} \xi = \eta _{1} \wedge \eta _{2} \wedge \cdots \wedge \eta _{m} \wedge \tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n} \end{aligned}$$

    has a lower distribution

    $$\begin{aligned} \Phi _{1}(x) = 1-\left[ 1-\Upsilon (x)\right] \left( 1-v \{\eta _{1} \le x\}\right) \cdots \left( 1-v \{\eta _{m} \le x\}\right) \end{aligned}$$
    (13)

    and an upper distribution

    $$\begin{aligned} \Phi _{2}(x) = 1- \left[ 1-\Upsilon (x)\right] \left( 1-\mathbb {V} \{\eta _{1} \le x\}\right) \cdots \left( 1-\mathbb {V}\{\eta _{m} \le x\}\right) , \end{aligned}$$
    (14)

    where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n}\) (see Exercise 2.12 in Liu 2015) determined by

    $$\Upsilon (x) = \Upsilon _{1}(x) \vee \Upsilon _{2}(x) \vee \cdots \vee \Upsilon _{n}(x).$$

Proof

(i) According to (5) and using the fact that \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent random variables under \(\mathbb {E}\), it can be shown that

$$\begin{aligned} \Phi _{1}(x)&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega |\mathcal {M} \{\xi \le x\}\ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega |I_{\{\eta _{1} \le x\} \cap \cdots \cap \{\eta _{m} \le x\}} \Upsilon (x) \ge r\right\} \textrm{d}r \nonumber \\&= \int \limits _{0}^{\Upsilon (x)} v \left\{ \omega \in \Omega | \{\eta _{1} \le x\} \cap \cdots \cap \{\eta _{m} \le x\} \right\} \textrm{d}r \nonumber \\&= \Upsilon (x) v \{\eta _{1} \le x\} \cdots v\{\eta _{m} \le x\} , \end{aligned}$$

where \(\Upsilon (x) = \Upsilon _{1}(x) \wedge \Upsilon _{2}(x) \wedge \cdots \wedge \Upsilon _{n}(x)\). Similarly, we can verify that (12) holds.

(ii) Since \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent random variables under \(\mathbb {E}\), it follows from (3) that

$$\begin{aligned} \Phi _{1}(x)&= \int \limits _{0}^{1} v\left\{ \omega \in \Omega |\mathcal {M} \{\xi \le x\}\ge r\right\} \textrm{d}r \nonumber \\&= 1- \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega |\mathcal {M} \{\xi> x\} \ge r\right\} \textrm{d}r \nonumber \\&= 1- \int \limits _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | I_{\{\eta _{1}> x\} \cap \cdots \cap \{\eta _{m}> x\}} \left[ 1-\Upsilon (x)\right] \ge r \right\} \textrm{d}r \nonumber \\&= 1-\int \limits _{0}^{1-\Upsilon (x)} \mathbb {V}\left\{ \omega \in \Omega | \{\eta _{1}> x\} \cap \cdots \cap \{\eta _{m}> x\} \right\} \textrm{d}r \nonumber \\&= 1-\left[ 1-\Upsilon (x)\right] \mathbb {V} \{\eta _{1}> x\} \cdots \mathbb {V} \{\eta _{m} > x\} \nonumber \\&= 1-\left[ 1-\Upsilon (x)\right] \left( 1-v \{\eta _{1} \le x\}\right) \cdots \left( 1-v \{\eta _{m} \le x\}\right) , \end{aligned}$$

where \(\Upsilon (x) = \Upsilon _{1}(x) \vee \Upsilon _{2}(x) \vee \cdots \vee \Upsilon _{n}(x)\). Similarly, we can verify that (14) holds. The proof is completed. \(\square \)

Theorem 2

Assume that \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent Boolean random variables under \(\mathbb {E}\) (see Definition 17 in Appendix A), i.e.,

$$\begin{aligned} {\eta _{i}} = {\left\{ \begin{array}{ll} 1,& {{\text {with possible probability measure}} \ p_{i},}\\ {0,}& {{\text {with possible probability measure}} \ 1-p_{i},} \\ \end{array}\right. } \end{aligned}$$
(15)

where \(\ p_{i}\in [\underline{p}_{i}, \overline{p}_{i}],\) \( i = 1,2, \ldots , m.\) And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent Boolean uncertain variables, i.e.,

$$\begin{aligned} {\tau _{j}} = {\left\{ \begin{array}{ll} 1,& {{\text {with uncertain measure}}\ q_{j}}, \\ {0,}& {{\text {with uncertain measure}}\ 1-q_{j}}, \end{array}\right. } \end{aligned}$$
(16)

for \(j=1,2,\ldots ,n.\) Suppose that f is a Boolean function from \(\{0,1\}^{n+1}\) to \(\{0,1\}\) and g is a Boolean function from \(\{0,1\}^{m}\) to \(\{0,1\}\). Then

$$\xi = f(g(\eta _{1},\ldots ,\eta _{m}),\tau _{1}, \tau _{2}, \ldots , \tau _{n})$$

is a Boolean uncertain random variable satisfying the following properties:

  1. (i)

if the equation \(g(x_{1},\ldots ,x_{m})=1\) (\(x_i \in \{0,1\},i=1,\ldots ,m\)) has a unique solution \((y_{1}, \ldots , y_{m})\) in the set \(\{0,1\}^{m}\) and \(f(0,\tau _{1}, \tau _{2}, \ldots , \tau _{n})=0\), then

    $$\begin{aligned} ch\{\xi =1\}=\displaystyle \prod _{i=1}^{m} w_{i}(y_{i}) Z(y_{1},\ldots ,y_{m}), \end{aligned}$$
    (17)

    and

    $$\begin{aligned} CH\{\xi =1\}=\displaystyle \prod _{i=1}^{m} u_{i}(y_{i}) Z(y_{1},\ldots ,y_{m}); \end{aligned}$$
    (18)
  2. (ii)

if the equation \(g(x_{1},\ldots ,x_{m})=0\) (\(x_i \in \{0,1\},i=1,\ldots ,m\)) has a unique solution \((y_{1}, \ldots , y_{m})\) in the set \(\{0,1\}^{m}\) and \(f(1,\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\), then

    $$\begin{aligned} ch\{\xi =0\}=\displaystyle \prod _{i=1}^{m} w_{i}(y_{i}) {{\overline{Z}}}(y_{1},\ldots ,y_{m}), \end{aligned}$$
    (19)

    and

    $$\begin{aligned} CH\{\xi =0\}=\displaystyle \prod _{i=1}^{m} u_{i}(y_{i}) {{\overline{Z}}}(y_{1},\ldots ,y_{m}). \end{aligned}$$
    (20)

    Here

    $$\begin{aligned} {Z(y_{1},\ldots ,y_{m})} = {\left\{ \begin{array}{ll} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}),\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}) <0.5}, \\ {1-\displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}),}\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}) \ge 0.5,} \end{array}\right. } \end{aligned}$$
    (21)
    $$\begin{aligned} {\overline{Z}(y_{1},\ldots ,y_{m})} = {\left\{ \begin{array}{ll} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}),\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}) <0.5}, \\ {1-\displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}),}\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}) \ge 0.5,} \end{array}\right. } \end{aligned}$$
    (22)
    $$\begin{aligned} {\displaystyle w_{i}(y_{i})} = {\left\{ \begin{array}{ll} \ \ \ \ \ \underline{p}_{i},& {{\text {if}}\ y_{i}=1,} \\ {1-\overline{p}_{i},}& {{\text {if}}\ y_{i}=0,} \end{array}\right. } \end{aligned}$$
    (23)
    $$\begin{aligned} {\displaystyle u_{i}(y_{i})} = {\left\{ \begin{array}{ll} \ \ \ \ \ \overline{p}_{i},& {{\text {if}}\ y_{i}=1,} \\ {1-\underline{p}_{i},}& {{\text {if}}\ y_{i}=0,} \end{array}\right. } \end{aligned}$$
    (24)

    and

    $$\begin{aligned} {v_{j}(z_{j})} = {\left\{ \begin{array}{ll} \ \ \ \ \ q_{j},& {{\text {if}}\ z_{j}=1,} \\ {1-q_{j},}& {{\text {if}}\ z_{j}=0.} \end{array}\right. } \end{aligned}$$
    (25)

Proof

(i) By the operational law of uncertain random variables (Theorem 1), we have

$$\begin{aligned} ch\{\xi =1\}=\displaystyle \left( \prod _{i=1}^{m} w_{i}(y_{i})\right) \mathcal {M}\left\{ f(g(y_{1},\ldots ,y_{m}),\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\right\} \end{aligned}$$

and

$$\begin{aligned} CH\{\xi =1\}=\displaystyle \left( \prod _{i=1}^{m} u_{i}(y_{i})\right) \mathcal {M}\left\{ f(g(y_{1},\ldots ,y_{m}),\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\right\} , \end{aligned}$$

where \((y_{1},\ldots ,y_{m})\) is the unique solution of \(g(x_{1},\ldots , x_{m})=1\). Since \(f(g(y_{1},\ldots ,y_{m}),\) \(\tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is essentially a Boolean function of uncertain variables, it follows from the operational law of uncertain variables that \(\mathcal {M}\left\{ f(g(y_{1},\ldots ,y_{m}),\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\right\} \) is determined by (21) (see Theorem 2.23 in Liu 2015), thus (17) and (18) are verified.

By an argument similar to the proof of (i), we can prove (ii); the details are omitted. The proof is completed. \(\square \)

Example 5

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent Boolean random variables under \(\mathbb {E}\) defined by (15), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent Boolean uncertain variables defined by (16).

(i) The minimum

$$\begin{aligned} \xi = \eta _{1} \wedge \eta _{2} \wedge \cdots \wedge \eta _{m} \wedge \tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n} \end{aligned}$$

is a Boolean uncertain random variable such that

$$\begin{aligned} ch\{\xi =1\} = \underline{p}_{1} \underline{p}_{2} \cdots \underline{p}_{m}(q_{1} \wedge q_{2} \wedge \cdots \wedge q_{n}), \end{aligned}$$
(26)

and

$$\begin{aligned} CH\{\xi =1\} =\overline{p}_{1} \overline{p}_{2} \cdots \overline{p}_{m}(q_{1} \wedge q_{2} \wedge \cdots \wedge q_{n}). \end{aligned}$$
(27)

(ii) The maximum

$$\begin{aligned} \xi = \eta _{1} \vee \eta _{2} \vee \cdots \vee \eta _{m} \vee \tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n} \end{aligned}$$

is a Boolean uncertain random variable such that

$$\begin{aligned} ch\{\xi =1\}&= 1-CH\{\xi =0\} \nonumber \\&= 1- (1-\underline{p}_{1})(1-\underline{p}_{2}) \cdots (1-\underline{p}_{m}) \left( 1-(q_{1} \vee q_{2} \vee \cdots \vee q_{n})\right) , \end{aligned}$$
(28)

and

$$\begin{aligned} CH\{\xi =1\}&= 1- ch\{\xi =0\} \nonumber \\&= 1- (1-\overline{p}_{1})(1-\overline{p}_{2}) \cdots (1-\overline{p}_{m}) \left( 1-(q_{1} \vee q_{2} \vee \cdots \vee q_{n})\right) . \end{aligned}$$
(29)
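In the language of reliability, (26)–(27) correspond to a series system and (28)–(29) to a parallel system. The following Python sketch (illustrative only; all component data are assumed) evaluates these bounds and confirms \(ch \le CH\):

```python
import numpy as np

# A minimal reliability-style sketch of (26)-(29) (assumed data, not from
# the paper): a series system (minimum) and a parallel system (maximum)
# with two Boolean random components whose probabilities lie in the
# intervals [p_lo, p_up], and two Boolean uncertain components with
# uncertain measures q_j.
p_lo = np.array([0.90, 0.85])
p_up = np.array([0.95, 0.92])
q = np.array([0.80, 0.70])

# series system: xi is the minimum of all components
ch_series = np.prod(p_lo) * q.min()   # (26)
CH_series = np.prod(p_up) * q.min()   # (27)

# parallel system: xi is the maximum of all components
ch_parallel = 1 - np.prod(1 - p_lo) * (1 - q.max())   # (28)
CH_parallel = 1 - np.prod(1 - p_up) * (1 - q.max())   # (29)

print(ch_series, CH_series)       # 0.5355 <= 0.6118, so ch <= CH
print(ch_parallel, CH_parallel)
```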

4 Expected value

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be uncertain variables. Then, the uncertain random variable

$$\xi = f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})$$

has the following four types of expectations:

(i) Upper expectation

$$\begin{aligned} \tilde{E} [\xi ]&= \displaystyle \mathbb {E} \big \{E \left[ f \ (y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\right] _{y_{i}=\eta _{i},i=1,2,\ldots ,m}\big \}\nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \big \{E \left[ f \ (y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\right] _{y_{i}=\eta _{i},i=1,2,\ldots ,m} \big \}, \end{aligned}$$
(30)

(ii) Lower expectation

$$\begin{aligned} \tilde{\mathcal {E}} [\xi ]&= \displaystyle \mathcal {E} \big \{E \left[ f \ (y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\right] _{y_{i}=\eta _{i},i=1,2,\ldots ,m}\big \}\nonumber \\&= \displaystyle \inf _{\theta \in \Theta } E_{P_{\theta }} \big \{E \left[ f \ (y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\right] _{y_{i}=\eta _{i},i=1,2,\ldots ,m} \big \}, \end{aligned}$$
(31)

(iii) Choquet expectation with respect to CH

$$\begin{aligned} E_{CH}[\xi ]&= \int \limits _{0}^{\infty } CH\{\xi \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} \left[ 1-CH\{\xi \ge x\}\right] \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } CH\{\xi \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} ch\{\xi \le x\} \textrm{d}x, \end{aligned}$$
(32)

(iv) Choquet expectation with respect to ch

$$\begin{aligned} E_{ch}[\xi ]&= \int \limits _{0}^{\infty } ch\{\xi \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} \left[ 1-ch\{\xi \ge x\}\right] \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } ch\{\xi \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} CH\{\xi \le x\} \textrm{d}x. \end{aligned}$$
(33)

Remark 2

(i) In (30) and (31), \(E [f (y_{1},\ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})]\) denotes the expected value of the uncertain variable \(f(y_{1}, \ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})\), and it is assumed to be finite. In (32) and (33), at least one of the two integrals is assumed to be finite.

(ii) From (30) and (31), we can verify that \(\tilde{E}\) and \(\tilde{\mathcal {E}}\) have the same properties as \(\mathbb {E}\) and \(\mathcal {E}\), respectively.

Theorem 3

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Assume that \(\{F_{\theta _{i}}(y_{i}), y_{i}\in \mathbb {R}\}_{\theta _{i} \in \Theta _{i}}\) is a family of distributions of \(\eta _{i}\) corresponding to the set of probability measures \(\{P_{\theta _{i}}\}_{\theta _{i} \in \Theta _{i}}\), for \(i=1, \ldots , m\), respectively. Then, the upper and lower expectations of uncertain random variable \(\xi = f(\eta _{1}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are

$$\begin{aligned} \tilde{E} [\xi ]&= \displaystyle \sup _{\theta _{1} \in \Theta _{1}} \int \limits _{-\infty }^{\infty } \cdots \left( \sup _{\theta _{m} \in \Theta _{m}} \int \limits _{-\infty }^{\infty } \int \limits _{0}^{1}f\left( y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ), \right. \right. \nonumber \\&\left. \left. \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m}) \right) \cdots \textrm{d}F_{\theta _{1}}(y_{1}) \end{aligned}$$
(34)

and

$$\begin{aligned} \tilde{\mathcal {E}} [\xi ]&= \displaystyle \inf _{\theta _{1} \in \Theta _{1}} \int \limits _{-\infty }^{\infty } \cdots \left( \inf _{\theta _{m} \in \Theta _{m}} \int \limits _{-\infty }^{\infty } \int \limits _{0}^{1}f\left( y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\right. \right. \nonumber \\&\left. \left. \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m}) \right) \cdots \textrm{d}F_{\theta _{1}}(y_{1}) , \end{aligned}$$
(35)

respectively, where \(f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}.\)

Proof

Without loss of generality, we only prove that (34) holds for \(m=2\); the proofs of the other cases of (34) are similar.

Suppose that \(\{P_{\delta }\}_{\delta \in \Delta }\) is a family of joint probability measures of \(\eta _{1}\) and \(\eta _{2}\). Since \(f(y_{1}, y_{2}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), it follows from Theorem 2.30 in Liu (2015) that

$$\begin{aligned}&E \left[ f(y_{1}, y_{2}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\right] \nonumber \\&= \int \limits _{0}^{\infty } \left[ 1-F(x;y_{1}, y_{2})\right] \textrm{d}x - \int \limits _{-\infty }^{0} F(x;y_{1}, y_{2}) \textrm{d}x \nonumber \\&= \int \limits _{0}^{1} f\left( y_{1}, y_{2}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha , \end{aligned}$$

where \(F(x;y_{1}, y_{2})\) is the uncertainty distribution of \(f(y_1,y_2,\tau _1,\ldots ,\tau _n)\).

Since \(\eta _{2}\) is independent of \(\eta _{1}\) under \(\mathbb {E}\), it follows from (30) that

$$\begin{aligned} \tilde{E} [\xi ]&= \displaystyle \sup _{\delta \in \Delta } E_{P_{\delta }} \left[ \int \limits _{0}^{1} f \left( \eta _{1}, \eta _{2},\Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots ,\Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha \right] \nonumber \\&= \displaystyle \sup _{\theta _{1} \in \Theta _{1}} E_{P_{\theta _{1}}}\left[ \sup _{\theta _{2} \in \Theta _{2}} \int \limits _{-\infty }^{\infty }\int \limits _{0}^{1} f \left( \eta _{1}, y_{2},\Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\right. \right. \\&\left. \left. \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha \textrm{d}F_{\theta _{2}}(y_{2})\right] \nonumber \\&=\displaystyle \sup _{\theta _{1} \in \Theta _{1}}\int \limits _{-\infty }^{\infty }\left( \sup _{\theta _{2} \in \Theta _{2}} \int \limits _{-\infty }^{\infty }\int \limits _{0}^{1} f \left( y_{1}, y_{2},\Upsilon _{i_{1}}^{-1}(\alpha ),\ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ),\right. \right. \nonumber \\&\left. \left. \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) \textrm{d}\alpha \textrm{d}F_{\theta _{2}}(y_{2})\right) \textrm{d}F_{\theta _{1}}(y_{1}). \end{aligned}$$

Hence (34) is proved. By a similar argument, we can prove that (35) holds. Thus the proof is completed. \(\square \)
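As a sanity check of (34), consider the additive function \(f(y, \tau ) = y + \tau \), for which the double integral in (34) factorizes. The following Python sketch (illustrative only; the distribution family and the uncertain variable are assumed) approximates (34) by sampling and recovers \(\sup _{\theta }(\theta + E[\tau _{1}])\):

```python
import numpy as np

# A minimal sampling check of (34) (assumed data, not from the paper):
# eta_1 ranges over the family {N(theta, 1) : theta in [0, 1]}, tau_1 is
# linear L(0, 2), and f(y, tau) = y + tau is increasing in tau.  For this
# additive f the double integral in (34) factorizes, so (34) reduces to
# sup_theta (theta + E[tau_1]) = 1 + 1 = 2.
rng = np.random.default_rng(0)
alphas = np.linspace(5e-5, 1.0 - 5e-5, 10001)
Ups_inv = 0.0 + alphas * (2.0 - 0.0)   # inverse distribution of L(0, 2)

def inner(theta, n=200000):
    y = rng.normal(theta, 1.0, size=n)   # samples from F_theta
    return y.mean() + Ups_inv.mean()     # the factorized double integral

print(max(inner(t) for t in np.linspace(0.0, 1.0, 11)))   # approximately 2.0
```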

Theorem 4

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be uncertain variables and

$$\xi = f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})$$

be an uncertain random variable. Then

$$\begin{aligned} E_{ch}[\xi ] \le \tilde{\mathcal {E}} [\xi ] \le \tilde{E} [\xi ] \le E_{CH}[\xi ]. \end{aligned}$$
(36)

Proof

Firstly, for any non-negative uncertain random variable \(\xi \), from (32) and (33), it follows that

$$\begin{aligned} E_{ch}[\xi ]&= \int \limits _{0}^{\infty } ch\{\xi \ge x\} \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} v \left\{ \omega \in \Omega | \mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} \displaystyle \inf _{\theta \in \Theta } P_{\theta } \left\{ \omega \in \Omega | \mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&\le \displaystyle \inf _{\theta \in \Theta } \left( \int \limits _{0}^{\infty } \int \limits _{0}^{1} P_{\theta } \left\{ \omega \in \Omega | \mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x\right) \nonumber \\&= \displaystyle \inf _{\theta \in \Theta }\left( \int \limits _{0}^{\infty } \int \limits _{\Omega } \mathcal {M}\{\xi \ge x\} \textrm{d}P_{\theta } \textrm{d}x \right) \nonumber \\&=\displaystyle \inf _{\theta \in \Theta } \left( \int \limits _{\Omega } \int \limits _{0}^{\infty } \mathcal {M}\{\xi \ge x\} \textrm{d}x \textrm{d}P_{\theta }\right) \nonumber \\&= \tilde{\mathcal {E}} [\xi ] \end{aligned}$$
(37)

and

$$\begin{aligned} E_{CH}[\xi ]&= \int \limits _{0}^{\infty } CH\{\xi \ge x\} \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega |\mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } \int _{0}^{1} \displaystyle \sup _{\theta \in \Theta } P_{\theta } \left\{ \omega \in \Omega |\mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&\ge \displaystyle \sup _{\theta \in \Theta } \left( \int \limits _{0}^{\infty } \int \limits _{0}^{1} P_{\theta } \left\{ \omega \in \Omega |\mathcal {M}\{\xi \ge x\}\ge r\right\} \textrm{d}r \textrm{d}x \right) \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } \left( \int \limits _{0}^{\infty } \int \limits _{\Omega } \mathcal {M}\{\xi \ge x\} \textrm{d}P_{\theta } \textrm{d}x \right) \nonumber \\&=\displaystyle \sup _{\theta \in \Theta } \left( \int \limits _{\Omega } \int \limits _{0}^{\infty } \mathcal {M}\{\xi \ge x\} \textrm{d}x \textrm{d}P_{\theta } \right) \nonumber \\&= \tilde{E} [\xi ]. \end{aligned}$$
(38)

Secondly, for any uncertain random variable \(\xi \),

$$\begin{aligned} \xi = \xi ^{+} - \xi ^{-}, \end{aligned}$$

where \(\xi ^{+} = \xi \vee 0=\max \{\xi ,0\}\), \(\xi ^{-} = -(\xi \wedge 0)=-\min \{\xi ,0\}\). Then by (32) and (33), we have

$$\begin{aligned} E_{ch}[\xi ] = E_{ch}[\xi ^{+}] - E_{CH}[\xi ^{-}],\; \ E_{CH}[\xi ] = E_{CH}[\xi ^{+}] - E_{ch}[\xi ^{-}]. \end{aligned}$$

Applying (37) and (38) yields

$$\begin{aligned} E_{ch}[\xi ] \le \tilde{\mathcal {E}} [\xi ^{+}] - \tilde{E} [\xi ^{-}],\; \ E_{CH}[\xi ] \ge \tilde{E} [\xi ^{+}] - \tilde{\mathcal {E}} [\xi ^{-}]. \end{aligned}$$

From the sub-additivity of \(\tilde{E}\) and the fact that \(\tilde{\mathcal {E}} [\xi ] =-\tilde{E} [-\xi ]\), it can be shown that

$$\begin{aligned} \tilde{E} [\xi ] \le \tilde{E} [\xi ^{+}] - \tilde{\mathcal {E}} [\xi ^{-}],\; \ \tilde{\mathcal {E}} [\xi ] \ge \tilde{\mathcal {E}} [\xi ^{+}] - \tilde{E} [\xi ^{-}]. \end{aligned}$$

Finally, by using the above arguments and noting the fact that \(\tilde{\mathcal {E}} [\xi ] \le \tilde{E} [\xi ]\), we obtain

$$\begin{aligned} E_{ch}[\xi ] \le \tilde{\mathcal {E}} [\xi ] \le \tilde{E} [\xi ] \le E_{CH}[\xi ]. \end{aligned}$$

The proof of Theorem 4 is completed. \(\square \)
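The ordering (36) can be observed numerically in the setting of Example 2. The following Python sketch (illustrative only; all data are assumed) computes \(E_{ch}\) and \(E_{CH}\) from the lower and upper distributions and compares them with \(\tilde{\mathcal {E}}\) and \(\tilde{E}\); in this simple additive case the outer bounds are attained:

```python
import numpy as np

# A minimal numerical check of (36) (assumed data, not from the paper):
# xi = eta + tau with eta maximal on [1, 2] and tau linear L(0, 4), so
# E[tau] = 2.  By Example 2, the lower and upper distributions are
# Phi1(x) = Upsilon(x - 2) and Phi2(x) = Upsilon(x - 1).
def Upsilon(z):
    return np.clip(z / 4.0, 0.0, 1.0)

xs = np.linspace(0.0, 10.0, 200001)   # xi >= 1 > 0, so only the first
dx = xs[1] - xs[0]                    # integrals in (32)-(33) contribute

# by (3), CH{xi >= x} = 1 - ch{xi < x} and ch{xi >= x} = 1 - CH{xi < x}
E_CH = np.sum(1.0 - Upsilon(xs - 2.0)) * dx   # approximately 4
E_ch = np.sum(1.0 - Upsilon(xs - 1.0)) * dx   # approximately 3

E_upper = 2.0 + 2.0   # tilde-E = E[eta] + E[tau] (cf. Theorem 5)
E_lower = 1.0 + 2.0   # tilde-cal-E = lower mean of eta + E[tau]
print(E_ch, E_lower, E_upper, E_CH)   # 3 <= 3 <= 4 <= 4, matching (36)
```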

Theorem 5

Let \(\eta \) be a random variable under \(\mathbb {E}\), and \(\tau \) be an uncertain variable. Then, we have

  1. (i)
    $$\begin{aligned} \tilde{E} [\eta + \tau ] = \mathbb {E} [\eta ] + E [\tau ] \end{aligned}$$
    (39)

    and

    $$\begin{aligned} \tilde{\mathcal {E}} [\eta + \tau ] = \mathcal {E} [\eta ] + E [\tau ]; \end{aligned}$$
    (40)
  2. (ii)
    $$\begin{aligned} \tilde{E} [\eta \tau ] = (E[\tau ])^{+} \mathbb {E}[\eta ] +( E[\tau ])^{-} \mathbb {E}[-\eta ] \end{aligned}$$
    (41)

    and

    $$\begin{aligned} \tilde{\mathcal {E}} [\eta \tau ] = (E[\tau ])^{+} \mathcal {E}[\eta ] +( E[\tau ])^{-} \mathcal {E}[-\eta ]; \end{aligned}$$
    (42)
  3. (iii)
    $$\begin{aligned} E_{CH}[\eta + \tau ]&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | 1-F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x\nonumber \\&\quad -\int \limits _{-\infty }^{0} \int \limits _{0}^{1} v \left\{ \omega \in \Omega | F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x \end{aligned}$$
    (43)

    and

    $$\begin{aligned} E_{ch}[\eta + \tau ]&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} v \left\{ \omega \in \Omega | 1-F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&\quad - \int \limits _{-\infty }^{0} \int _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x, \end{aligned}$$
    (44)

    where F(x) is the uncertainty distribution of \(\tau \).

Proof

(i) According to (30), the uncertain random variable \(\eta + \tau \) has an upper expectation

$$\begin{aligned} \tilde{E}[\eta + \tau ]&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ E[y + \tau ] | _{y=\eta }\right\} \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} [\eta ] + E[\tau ] \nonumber \\&= \mathbb {E}[\eta ] + E[\tau ]. \end{aligned}$$
(45)

Similarly, by (31), we can show that the uncertain random variable \(\eta + \tau \) has a lower expectation

$$\tilde{\mathcal {E}} [\eta + \tau ] = \mathcal {E} [\eta ] + E [\tau ].$$

(ii) From (30), the uncertain random variable \(\eta \tau \) has an upper expectation

$$\begin{aligned} \tilde{E}[\eta \tau ]&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ E[y \tau ] |_{y=\eta }\right\} \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ \eta E[\tau ]\right\} \nonumber \\&= (E[\tau ])^{+} \mathbb {E}[\eta ] +( E[\tau ])^{-} \mathbb {E}[-\eta ]. \end{aligned}$$
(46)

Similarly, by (31), we can show that the uncertain random variable \(\eta \tau \) has a lower expectation

$$\tilde{\mathcal {E}} [\eta \tau ] = (E[\tau ])^{+} \mathcal {E}[\eta ] +( E[\tau ])^{-} \mathcal {E}[-\eta ].$$

(iii) Applying (32), we easily obtain

$$\begin{aligned} E_{CH}[\eta + \tau ]&= \int \limits _{0}^{\infty } CH\{\eta + \tau \ge x\} \textrm{d}x - \int \limits _{-\infty }^{0} ch\{\eta + \tau \le x\}\textrm{d}x \nonumber \\&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | 1-F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&\quad - \int \limits _{-\infty }^{0} \int _{0}^{1} v \left\{ \omega \in \Omega | F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x, \end{aligned}$$

where F(x) is the uncertainty distribution of \(\tau \).

By a similar argument, we can verify that (44) holds. The proof of Theorem 5 is completed. \(\square \)
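Formula (41) can be checked directly when \(E[\tau ]\) is negative, where the second term matters. The following Python sketch (illustrative only; all data are assumed) compares a direct evaluation of (30) with the closed form (41):

```python
import numpy as np

# A minimal check of (41) (assumed data, not from the paper): eta has a
# maximal distribution on [1, 2], so E[eta] = 2 and E[-eta] = -1, and tau
# is linear L(-5, 1) with E[tau] = -2.  Formula (41) then gives
# tilde-E[eta * tau] = 0 * 2 + 2 * (-1) = -2.
mu_lo, mu_up = 1.0, 2.0
E_tau = 0.5 * (-5.0 + 1.0)

# direct evaluation of (30): for y > 0 one has E[y * tau] = y * E[tau],
# so take the sup over the mean parameter y in [mu_lo, mu_up]
direct = max(y * E_tau for y in np.linspace(mu_lo, mu_up, 1001))

plus, minus = max(E_tau, 0.0), -min(E_tau, 0.0)   # (E[tau])^+, (E[tau])^-
formula = plus * mu_up + minus * (-mu_lo)   # E[eta] = mu_up, E[-eta] = -mu_lo
print(direct, formula)   # both -2.0
```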

Theorem 6

Assume \(\eta _{1}\) and \(\eta _{2}\) are random variables such that \(\eta _{2}\) is independent of \(\eta _{1}\) under \(\mathbb {E}\), \(\tau _{1}\) and \(\tau _{2}\) are uncertain variables, and, for any given real numbers \(y_{1}\) and \(y_{2}\), \(f_{1}(y_{1}, \tau _{1})\) and \(f_{2}(y_{2}, \tau _{2})\) are real-valued comonotonic functions with respect to \(\tau _{1}\) and \(\tau _{2}\). Then

$$\begin{aligned} \tilde{E} [f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})] = \tilde{E} [f_{1}(\eta _{1}, \tau _{1})] + \tilde{E} [f_{2}(\eta _{2}, \tau _{2})] \end{aligned}$$
(47)

and

$$\begin{aligned} \tilde{\mathcal {E}} [f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})] = \tilde{\mathcal {E}} [f_{1}(\eta _{1}, \tau _{1})] + \tilde{\mathcal {E}} [f_{2}(\eta _{2}, \tau _{2})]. \end{aligned}$$
(48)

Proof

Since \(f_{1}(y_{1}, \tau _{1})\) and \(f_{2}(y_{2}, \tau _{2})\) are real-valued comonotonic functions with respect to uncertain variables \(\tau _{1}\) and \(\tau _{2}\), we have

$$\begin{aligned} E[f_{1}(y_{1}, \tau _{1}) + f_{2}(y_{2}, \tau _{2})] = E[f_{1}(y_{1}, \tau _{1})] + E[f_{2}(y_{2}, \tau _{2})] \end{aligned}$$

according to Definition 4. Then from (30), it follows that

$$\begin{aligned}&\tilde{E}[f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})]\nonumber \\&\qquad = \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \Big \{E[f_{1}(y_{1}, \tau _{1}) + f_{2}(y_{2}, \tau _{2})]_{y_{1}=\eta _{1}, \ y_{2}=\eta _{2}}\Big \}\nonumber \\&\qquad =\displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \Big \{E[f_{1}(y_{1}, \tau _{1})]_{y_{1}=\eta _{1}} + E [f_{2}(y_{2}, \tau _{2})] _{y_{2}=\eta _{2}}\Big \}.\nonumber \\ \end{aligned}$$
(49)

Denote

$$g_{1}(\eta _{1}) = E[f_{1}(y_{1}, \tau _{1})]_{y_{1}=\eta _{1}}$$

and

$$g_{2}(\eta _{2}) = E [f_{2}(y_{2}, \tau _{2})]_{y_{2}=\eta _{2}}.$$

Hence

$$\begin{aligned} \tilde{E}[f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})]&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ g_{1}(\eta _{1}) + g_{2}(\eta _{2})\right\} \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ \sup _{\theta \in \Theta } E_{P_{\theta }} [g_{1}(y_{1})+g_{2}(\eta _{2})] _{y_{1}=\eta _{1}}\right\} \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left\{ g_{1}(\eta _{1}) + \sup _{\theta \in \Theta } E_{P_{\theta }} [g_{2}(\eta _{2})] \right\} \nonumber \\&= \displaystyle \sup _{\theta \in \Theta } E_{P_{\theta }} \left[ g_{1}(\eta _{1})\right] + \sup _{\theta \in \Theta } E_{P_{\theta }} \left[ g_{2}(\eta _{2})\right] \nonumber \\&=\tilde{E} [f_{1}(\eta _{1}, \tau _{1})] + \tilde{E} [f_{2}(\eta _{2}, \tau _{2})] \end{aligned}$$
(50)

by Definition 13 (i) in Appendix A. In a similar way, we can verify that (48) holds, and the proof is completed. \(\square \)

5 Uncertain random programming

In this section, we suggest some classes of uncertain random optimization models, called uncertain random programming under U-S chance spaces, to solve decision-making problems in uncertain random environments.

Assume that \(\varvec{x}\) is a decision vector, \(\varvec{\xi }\) is an uncertain random vector, \(f(\varvec{x}, \varvec{\xi })\) is an objective function, and \(g_{j}(\varvec{x}, \varvec{\xi })\) are uncertain random constraint functions, \(j=1,2,\ldots ,p.\) Since the uncertain random objective function \(f(\varvec{x}, \varvec{\xi })\) cannot be maximized or minimized directly, we may instead optimize its expected value. Furthermore, since the uncertain random constraints \(g_{j}(\varvec{x}, \varvec{\xi }) \le (\ge ) 0\), \(j = 1, \ldots , p\), do not define a crisp feasible set, it is natural to require that they hold with confidence levels \(\alpha _{j}\) or \(\beta _{j}\), \(j = 1, \ldots , p\). Then we have the following two sets of chance constraints:

$$\begin{aligned} \displaystyle CH \{ g_{j}(\varvec{x}, \varvec{\xi })\ge 0\} \le \alpha _{j},\; j = 1, \ldots , p \end{aligned}$$
(51)

and

$$\begin{aligned} ch \{ g_{j} (\varvec{x}, \varvec{\xi }) \le 0\} \ge \beta _{j},\; j = 1, \ldots , p. \end{aligned}$$
(52)

Four uncertain random programming models based on U-S chance theory are introduced in the following.

5.1 Two robust uncertain random programming models

In order to obtain a decision with maximum expected objective value subject to a set of chance constraints, we suggest the following two uncertain random programming models:

$$\begin{aligned} (a)\; {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \tilde{\mathcal {E}}[f(\varvec{x},\varvec{\xi })] \\ {\text{ subject } \text{ to: }} \\ {\displaystyle CH \{ g_{j}(\varvec{x},\varvec{\xi })\ge 0\} \le \alpha _{j},\; j = 1, \ldots , p} \end{array}\right. } \end{aligned}$$
(53)

and

$$\begin{aligned} (b)\; {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}}E_{ch}[f(\varvec{x},\varvec{\xi } )] \\ {\text{ subject } \text{ to: }} \\ {\displaystyle CH\{g_{j}(\varvec{x},\varvec{\xi }) \ge 0\}\le \alpha _{j},\; j = 1, \ldots , p.} \end{array}\right. } \end{aligned}$$
(54)

Definition 8

A vector \(\varvec{x}\) is called a feasible solution to the uncertain random programming model (a) (or (b)) if

$$\begin{aligned} \displaystyle CH \{ g_{j} (\varvec{x},\varvec{\xi }) \ge 0\} \le \alpha _{j},\; j = 1, \ldots , p . \end{aligned}$$
(55)

Definition 9

(i) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (a) if

$$\begin{aligned} \tilde{\mathcal {E}}[f(\varvec{x}^{*},\varvec{\xi })] \ge \tilde{\mathcal {E}}[f(\varvec{x},\varvec{\xi })] \end{aligned}$$
(56)

for any feasible solution \(\varvec{x}\).

(ii) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (b) if

$$\begin{aligned} E_{ch}[f(\varvec{x}^{*},\varvec{\xi })] \ge E_{ch}[f(\varvec{x},\varvec{\xi })] \end{aligned}$$
(57)

for any feasible solution \(\varvec{x}\).

5.2 Two radical uncertain random programming models

In order to obtain a decision with minimum expected objective value subject to a set of chance constraints, we suggest the following two uncertain random programming models:

$$\begin{aligned} (c)\; {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}\tilde{E}[f(\varvec{x},\varvec{\xi } )] \\ {\text{ subject } \text{ to: }} \\ {\displaystyle ch \{ g_{j} (\varvec{x},\varvec{\xi })\le 0\} \ge \beta _{j},\; j = 1, \ldots , p} \end{array}\right. } \end{aligned}$$
(58)

and

$$\begin{aligned} (d)\; {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}E_{CH}[f(\varvec{x},\varvec{\xi } )] \\ {\text{ subject } \text{ to: }} \\ {\displaystyle ch \{ g_{j} (\varvec{x},\varvec{\xi }) \le 0\} \ge \beta _{j},\; j = 1, \ldots , p.} \end{array}\right. } \end{aligned}$$
(59)

Definition 10

A vector \(\varvec{x}\) is called a feasible solution to the uncertain random programming model (c) (or (d)) if

$$\begin{aligned} ch \{ g_{j} (\varvec{x},\varvec{\xi }) \le 0\} \ge \beta _{j},\; j = 1, \ldots , p. \end{aligned}$$
(60)

Definition 11

(i) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (c) if

$$\begin{aligned} \tilde{E}[f(\varvec{x}^{*},\varvec{\xi })] \le \tilde{E}[f( \varvec{x},\varvec{\xi })] \end{aligned}$$
(61)

for any feasible solution \(\varvec{x}\).

(ii) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (d) if

$$\begin{aligned} E_{CH}[f(\varvec{x}^{*},\varvec{\xi })] \le E_{CH}[f( \varvec{x},\varvec{\xi })] \end{aligned}$$
(62)

for any feasible solution \(\varvec{x}\).
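
Definitions 8–11 reduce to a simple search pattern once the decision vector ranges over a finite candidate set: discard the candidates that violate the chance constraints, then keep the one with the best expected objective. The following Python sketch illustrates this pattern for model (a) only; it is not a solution method from the paper, and the callables expected_value and upper_chances are hypothetical placeholders that would in practice be supplied through the crisp forms of Sect. 5.3. Models (b)–(d) follow the same pattern with \(E_{ch}\), \(\tilde{E}\) or \(E_{CH}\) as the objective and the chance constraints (52) where appropriate.

from typing import Callable, Iterable, Optional, Sequence, Tuple

Vector = Tuple[float, ...]

def solve_model_a(
    candidates: Iterable[Vector],                        # finite grid of decision vectors x
    expected_value: Callable[[Vector], float],           # x -> expected objective value
    upper_chances: Sequence[Callable[[Vector], float]],  # x -> CH{g_j(x, xi) >= 0}
    alphas: Sequence[float],                             # confidence levels alpha_j
) -> Optional[Vector]:
    """Brute-force search for an optimal solution of model (a)."""
    best_x, best_val = None, float("-inf")
    for x in candidates:
        # feasibility in the sense of Definition 8: CH{g_j >= 0} <= alpha_j for all j
        if all(ch(x) <= a for ch, a in zip(upper_chances, alphas)):
            val = expected_value(x)
            # optimality in the sense of Definition 9 (i): maximal expected objective
            if val > best_val:
                best_x, best_val = x, val
    return best_x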

5.3 Equivalent conditions of uncertain random programming models

In this subsection, we present equivalent crisp forms of the above four uncertain random programming models in the following two theorems.

Theorem 7

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Assume that \(\{F_{\theta _{i}}(y_{i}), y_{i}\in \mathbb {R}\}_{\theta _{i} \in \Theta _{i}}\) is a family of distributions of \(\eta _{i}\) corresponding to the set of probability measures \(\{P_{\theta _{i}}\}_{\theta _{i} \in \Theta _{i}}\), for \(i=1, \ldots , m\). If \(f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) and \(g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), for \(j = 1, \ldots , p\), then

  1. (i)

    the uncertain random programming (a)

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \tilde{\mathcal {E}}[f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})]\\ {\text{ subject } \text{ to: }} \\ \displaystyle CH \{ g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\ge 0\} \le \alpha _{j},\; j = 1, \ldots , p \end{array}\right. } \end{aligned}$$
    (63)

    is equivalent to the crisp mathematical programming

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \displaystyle \inf _{\theta _{1} \in \Theta _{1}} \int _{-\infty }^{\infty } \cdots \Big (\inf _{\theta _{m} \in \Theta _{m}} \int _{-\infty }^{\infty } \int _{0}^{1}f\Big (\varvec{x}, y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\\ \qquad \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m})\Big ) \cdots \textrm{d}F_{\theta _{1}}(y_{1})\\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1- G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \le \alpha _{j}, j = 1, \ldots , p; \end{array}\right. } \end{aligned}$$
    (64)
  2. (ii)

    the uncertain random programming (c)

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}\tilde{E}[f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle ch \{ g_{j} (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\le 0\} \ge \beta _{j},\ j = 1, \ldots , p \end{array}\right. } \end{aligned}$$
    (65)

    is equivalent to the crisp mathematical programming

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}\displaystyle \sup _{\theta _{1} \in \Theta _{1}} \int _{-\infty }^{\infty } \cdots \Big (\sup _{\theta _{m} \in \Theta _{m}} \int _{-\infty }^{\infty } \int _{0}^{1}f\Big (\varvec{x}, y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\\ \qquad \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m})\Big ) \cdots \textrm{d}F_{\theta _{1}}(y_{1})\\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} v\left\{ \omega \in \Omega | G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \ge \beta _{j}, j = 1, \ldots , p, \end{array}\right. } \end{aligned}$$
    (66)

    where for each \(j\in \{1, \ldots , p\}\), \(G_j (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(g_j(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function

    $$\begin{aligned}&G_j^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = g_j \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ). \end{aligned}$$

Proof

This follows immediately from Theorems 1 and 3. \(\square \)
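
To make the equivalence concrete, consider the simplest case \(m = n = 1\). The following Python sketch numerically evaluates the crisp objective of (64) under illustrative assumptions of ours: the family \(\{F_{\theta }\}\) is taken to be the normal distributions \(N(0, s^{2})\) with \(s^{2}\) ranging over an interval, the uncertain variable is linear \(L(a, b)\), and \(f(\varvec{x}, y, t) = x(y^{2} + t)\), which is strictly increasing in \(t\); the chance constraint is omitted for brevity. None of these concrete choices come from the paper.

import numpy as np

def linear_inverse(alpha, a, b):
    # inverse uncertainty distribution of L(a, b): Upsilon^{-1}(alpha) = a + alpha (b - a)
    return a + alpha * (b - a)

def crisp_objective(x, a=5.0, b=8.0, s2_lo=1.0, s2_hi=1.44, n_alpha=1000, n_y=4001):
    # inner uncertain integral of (64): int_0^1 Upsilon^{-1}(alpha) d alpha (midpoint rule)
    alphas = (np.arange(n_alpha) + 0.5) / n_alpha
    t_mean = linear_inverse(alphas, a, b).mean()
    ys, dy = np.linspace(-8.0, 8.0, n_y, retstep=True)
    best = np.inf
    for s2 in np.linspace(s2_lo, s2_hi, 23):           # scan the parameter set Theta
        pdf = np.exp(-ys**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
        # outer integral against F_s for f(x, y, t) = x (y^2 + t)
        outer = np.sum(x * (ys**2 + t_mean) * pdf) * dy
        best = min(best, outer)                        # inf over Theta, as in (64)
    return best

print(crisp_objective(x=2.0))   # ~ 2 * (1.0 + 6.5) = 15.0, attained at the smallest variance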

Theorem 8

Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. If \(f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) and \(g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), for \(j = 1, \ldots , p\), then

  1. (i)

    the uncertain random programming (b)

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} E_{ch} [ f (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle CH \{ g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\ge 0\} \le \alpha _{j},j = 1, \ldots , p \end{array}\right. } \end{aligned}$$
    (67)

    is equivalent to the crisp mathematical programming

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \left\{ \int _{0}^{\infty } \int _{0}^{1} v\left\{ \omega \in \Omega | 1-F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \right. \\ \qquad \left. \ \textrm{d}r \textrm{d}z -\int _{-\infty }^{0} \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r\textrm{d}z\right\} \\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1- G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \le \alpha _{j}, j = 1, \ldots , p; \end{array}\right. } \end{aligned}$$
    (68)
  2. (ii)

    the uncertain random programming (d)

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}} E_{CH} [f(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle ch \{ g_{j} (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\le 0\} \ge \beta _{j},\ j = 1, \ldots , p \end{array}\right. } \end{aligned}$$
    (69)

    is equivalent to the crisp mathematical programming

    $$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}} \left\{ \int _{0}^{\infty } \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1-F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \right. \\ \left. \ \textrm{d}r \textrm{d}z -\int _{-\infty }^{0} \int _{0}^{1} v\left\{ \omega \in \Omega | F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r\textrm{d}z\right\} \\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} v\left\{ \omega \in \Omega | G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \ge \beta _{j}, j = 1, \ldots , p, \end{array}\right. } \end{aligned}$$
    (70)

    where \(F (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(f(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function

    $$\begin{aligned}&F^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = f \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\quad \Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ), \end{aligned}$$

    and for each \(j\in \{1, \ldots , p\}\), \(G_j (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(g_j(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function

    $$\begin{aligned}&G_j^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = g_j \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\quad \Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ). \end{aligned}$$

Proof

This follows immediately from (32), (33), Theorem 1 and Axiom 2 of Definition 1. \(\square \)

6 Some applications of uncertain random programming models

In this section, two applications of the uncertain random programming models are given: stock investment in an incomplete financial market and system reliability design.

6.1 Stock investment in incomplete market

The optimal stock investment problem has long been a central issue in economics and finance, and stock investment in an incomplete financial market involves rich uncertainties that complicate the choice of optimal strategies.

In finance, it is well known that if the market is complete, then there exists a unique risk-neutral probability measure P. However, if the market is incomplete, then this risk-neutral probability measure is no longer unique; instead, there exists a family of probability measures \(\{P_{\theta }\}_{\theta \in \Theta }\). Incompleteness means that more than one probability measure acts on the market, yet we have no way of identifying which one governs it. In this case, sub-linear expectation theory can be employed to analyze problems of mathematical finance, such as optimal investment.

In a financial market, when no samples are available to estimate probability measures, we have to invite domain experts to evaluate their belief degrees about the unknown states. In this case, uncertain measures can be applied to analyze problems of mathematical finance, such as optimal investment.

Suppose that there exist two types of stocks in an incomplete financial market, and the total number of stocks is \( m+ n\). For each \(i=1,2,\ldots ,m\), the initial price of the ith stock is \(Y_{0}^{i}\), and the price process of the ith stock can be described by the following uncertain stock model (see Sect. 16.1 in Liu 2015):

$$\begin{aligned} \textrm{d}Y_{t}^{i}=\mu _{i} Y_{t}^{i} \textrm{d}t + v_{i}Y_{t}^{i}\textrm{d}C_{t}^{i}, \end{aligned}$$
(71)

where the drift coefficient \(\mu _{i}\) and the diffusion coefficient \(v_{i}\) are both constants, and \(C_{t}^{i}\) is a Liu process (see Definition 14.1 in Liu 2015). The solution of (71) is \(Y_{t}^{i} = Y_{0}^{i} \exp (\mu _{i} t + v_{i} C_{t}^{i})\). Definition 14.3 in Liu (2015) shows that the expected value of \(Y_{t}^{i}\) is

$$\begin{aligned} E[Y_{t}^{i}]={\left\{ \begin{array}{ll} \displaystyle Y_{0}^{i}\exp (\mu _{i} t) \frac{v_{i}t \sqrt{3}}{\sin (v_{i}t \sqrt{3})},\ t< \frac{\pi }{v_{i} \sqrt{3}},\\ {+\infty ,\ t\ge \frac{\pi }{v_{i} \sqrt{3}} }. \end{array}\right. } \end{aligned}$$
(72)
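
Formula (72) is straightforward to evaluate numerically; the following short Python check (the function name is ours) is used again in the sketch after Example 6.

import math

def expected_uncertain_price(y0, mu, v, t):
    """E[Y_t] for the uncertain stock model (71), by formula (72)."""
    s = v * t * math.sqrt(3)
    if s >= math.pi:                       # i.e. t >= pi / (v * sqrt(3))
        return math.inf
    return y0 * math.exp(mu * t) * s / math.sin(s)

# first stock of Example 6: Y0 = 24, mu = 0.00216, v = 0.003, T = 30
print(expected_uncertain_price(24, 0.00216, 0.003, 30))   # ~ 25.71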

In addition, for each \(j=m+1,m+2,\ldots ,m+n\), the initial price of the jth stock is \(N_{0}^{j}\), and the price process of the jth stock can be described by the following stochastic differential equation driven by the one-dimensional G-Brownian motion \(\{B_{t}^{j}\}\) (see Definition 3.1.2 in Peng 2019):

$$\begin{aligned} \textrm{d}N_{t}^{j}=e_{j} N_{t}^{j}\textrm{d}t + \sigma _{j} N_{t}^{j}\textrm{d}B_{t}^{j} + \alpha _{j} N_{t}^{j}\textrm{d}\langle B \rangle _{t}^{j}, \end{aligned}$$
(73)

where \(e_{j},\sigma _{j}, \alpha _{j}\) are constants, and \(\{\langle B \rangle _{t}^{j}\}\) is the quadratic variation process of \(\{B_{t}^{j}\}\) (see Sect. 3.4 in Peng 2019). By using G-Itô's formula (see Theorem 3.6.5 in Peng 2019), it can be calculated that

$$\begin{aligned} N_{t}^{j} = N_{0}^{j} \exp \left[ e_{j}t+ \sigma _{j} B_{t}^{j}+ \left( \alpha _{j}-\frac{1}{2} \sigma _{j}^{2}\right) \langle B \rangle _{t}^{j}\right] , \end{aligned}$$
(74)

and when \(\alpha _{j}=\frac{1}{2} \sigma _{j}^{2}\), it is known that

$$\begin{aligned} \begin{aligned} \mathcal {E} [N_{t}^{j}]&= -\mathbb {E} [-N_{t}^{j}] \\&= \frac{N_{0}^{j}\exp (e_{j}t)}{\sqrt{2\pi t \underline{\sigma }_{j}^{2}}} \int \limits _{-\infty }^{+ \infty } \exp \left[ \sigma _{j} x - \frac{x^{2}}{2t\underline{\sigma }_{j}^{2}}\right] \textrm{d}x, \end{aligned} \end{aligned}$$
(75)

from Proposition 3.1.6 in Peng (2019).
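
Completing the square in the exponent evaluates the Gaussian integral in (75) in closed form,

$$\begin{aligned} \int _{-\infty }^{+\infty } \exp \left[ \sigma _{j} x - \frac{x^{2}}{2t\underline{\sigma }_{j}^{2}}\right] \textrm{d}x = \sqrt{2\pi t \underline{\sigma }_{j}^{2}}\, \exp \left( \frac{\sigma _{j}^{2} \underline{\sigma }_{j}^{2} t}{2}\right) , \end{aligned}$$

so that (75) simplifies to \(\mathcal {E} [N_{t}^{j}] = N_{0}^{j} \exp \big (e_{j}t + \sigma _{j}^{2}\underline{\sigma }_{j}^{2}t/2\big )\), a form convenient for numerical work.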

The financial products are purchased at the moment 0 and sold at the moment T. Let \(x_{k}\) represent the number of shares of the kth stock (\(k=1,2,\ldots ,m+n\)) purchased, \(Y_{T}^{i}\) represent the price of the ith stock (\(i=1,2,\ldots ,m\)) at the moment T, and \(N_{T}^{j}\) represent the price of the jth stock (\(j=m+1,m+2,\ldots ,m+n\)) at the moment T. Denote

$$\begin{aligned} \varvec{x} = (x_{1}, x_{2}, \ldots , x_{m+n}) \end{aligned}$$

and

$$\begin{aligned} \varvec{\xi } = (Y_{T}^{1}, Y_{T}^{2}, \ldots , Y_{T}^{m}, N_{T}^{m+1}, N_{T}^{m+2}, \ldots , N_{T}^{m+n}). \end{aligned}$$

Let \({\varvec{T}} (\varvec{x}, \varvec{\xi }) \) represent the total price at the moment T of the financial products purchased, and \({\varvec{T}}_{k} (\varvec{x}, \varvec{\xi }) \) the price at the moment T of the \(x_k\) shares of the kth stock purchased. Then

$$\begin{aligned} {\varvec{T}}_{k} (\varvec{x}, \varvec{\xi }) ={\left\{ \begin{array}{ll} x_{k }Y_{T}^{k},\ k=1,2, \ldots , m,\\ x_{k}N_{T}^{k},\ k=m+1,m+2,\ldots ,m+n \end{array}\right. } \end{aligned}$$
(76)

and

$$\begin{aligned} {\varvec{T}} (\varvec{x}, \varvec{\xi }) = \displaystyle \sum _{k=1}^{m+n}{\varvec{T}}_{k} (\varvec{x}, \varvec{\xi }). \end{aligned}$$
(77)

In addition, the total cost of the financial products purchased is

$$\begin{aligned} \begin{aligned} {\varvec{C}}(\varvec{x}) = x_{1}Y_{0}^{1} + \cdots + x_{m}Y_{0}^{m} + x_{m+1}N_{0}^{m+1} + \cdots + x_{m+n}N_{0}^{m+n}. \end{aligned} \end{aligned}$$
(78)

If the initial capital is \({\varvec{C}}_{0}\), then the capital constraint is

$$\begin{aligned} {\varvec{C}}(\varvec{x}) \le {\varvec{C}}_{0}. \end{aligned}$$
(79)

Under the capital constraint, we can maximize the return at the moment T of the financial products purchased by building the following uncertain random programming model:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \tilde{\mathcal {E}}[{\varvec{T}} (\varvec{x}, \varvec{\xi })] \\ {\text{ subject } \text{ to: }} \\ {{\varvec{C}}(\varvec{x}) \le {\varvec{C}}_{0}}, \\ \varvec{x} \ge 1\ ({\text {integer vector}}). \end{array}\right. } \end{aligned}$$
(80)

Example 6

Suppose that there are five stocks in an incomplete financial market. Two of them obey the uncertain differential equation (71); the initial stock prices are \(Y_{0}^{1}=24, \ Y_{0}^{2}=18.2\), the drift coefficients are \(\mu _{1}=0.00216, \mu _{2}=0.0006,\) and the diffusion coefficients are \(v_{1}=0.003,\ v_{2}=0.0011\), respectively. The other three stocks obey the stochastic differential equation (73); the initial stock prices are \(N_{0}^{3}=16.3, \ N_{0}^{4}=17.9, N_{0}^{5}=13.6\), and the parameters are \(e_{3}=0.00153,\ e_{4}=0.00104,\ e_{5}=0.00263\), \(\sigma _{3}=0.0023,\ \sigma _{4}=0.0019,\ \sigma _{5}=0.005\), \(\alpha _{3}=7.22\times 10^{-6},\ \alpha _{4}=8\times 10^{-6},\ \alpha _{5}=9.245\times 10^{-6}\), and \(\underline{\sigma }^{2}_{3} = 1.0304,\ \underline{\sigma }^{2}_{4} =1.0201,\ \underline{\sigma }^{2}_{5} =1.0501\), respectively. The risk-free interest rate is \(r=5.4 \times 10^{-5}\), and the maturity time is \(T=30\).

Assuming the initial capital is 1000, we can calculate by using \(\textrm{MATLAB}\) that the optimal stock portfolio is

$$\begin{aligned} x^{*} = (37,2,1,1,3), \end{aligned}$$

the initial capital consumption is 999.4, and the expected income is 65.7.
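
For readers without MATLAB, the structure of this computation can be illustrated in Python under a simplifying assumption of ours: if the expectation of the total terminal price decomposes additively across the five independent stocks (in the spirit of the linearity results of Sect. 4), then model (80) reduces to an integer knapsack over the share counts, which the sketch below solves by dynamic programming using formula (72) and the closed form of (75). This additive reading is an approximation, not the authors' computation, so its output need not coincide with the portfolio reported above.

import math

def e_uncertain(y0, mu, v, t):                 # per-share expected price, formula (72)
    s = v * t * math.sqrt(3)
    return math.inf if s >= math.pi else y0 * math.exp(mu * t) * s / math.sin(s)

def e_sublinear(n0, e, sigma, s2_lo, t):       # per-share expected price, closed form of (75)
    return n0 * math.exp(e * t + 0.5 * sigma**2 * s2_lo * t)

T = 30
prices = [24.0, 18.2, 16.3, 17.9, 13.6]
values = [
    e_uncertain(24.0, 0.00216, 0.003, T),
    e_uncertain(18.2, 0.0006, 0.0011, T),
    e_sublinear(16.3, 0.00153, 0.0023, 1.0304, T),
    e_sublinear(17.9, 0.00104, 0.0019, 1.0201, T),
    e_sublinear(13.6, 0.00263, 0.005, 1.0501, T),
]

# x >= 1: buy one share of each stock first, then spend the rest optimally
counts = [1] * 5
cap = int(round((1000.0 - sum(prices)) * 10))  # remaining budget in tenths
w = [int(round(p * 10)) for p in prices]       # integer share prices in tenths

# unbounded knapsack by dynamic programming
best = [0.0] * (cap + 1)
take = [-1] * (cap + 1)
for c in range(1, cap + 1):
    best[c], take[c] = best[c - 1], -1         # allow leaving a tenth unspent
    for i in range(5):
        if w[i] <= c and best[c - w[i]] + values[i] > best[c]:
            best[c], take[c] = best[c - w[i]] + values[i], i

c = cap                                        # reconstruct the optimal purchases
while c > 0:
    if take[c] == -1:
        c -= 1
    else:
        counts[take[c]] += 1
        c -= w[take[c]]

cost = sum(n * p for n, p in zip(counts, prices))
terminal = sum(n * v for n, v in zip(counts, values))
print(counts, round(cost, 1), round(terminal - cost, 1))  # allocation, cost, expected gain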

6.2 System reliability design

Providing redundancy for components in a system is an effective method to improve system reliability. The purpose of system reliability design is to determine the optimal number of redundant elements for balancing system performance and total cost. Suppose a series system consists of n components, and each component has only one type of element. The lifetimes of elements are uncertain random variables. We also assume that the redundant elements of all components are in a standby state; that is, a redundant element begins to work only when the active element fails. This approach is usually applied in cases where replacement can be accomplished immediately. Therefore, the lifetime of a component is the sum of the lifetimes of all elements in the component. Let \(x_{i}\) be the number of elements in the ith component, and \(\xi _{ij}\) be the lifetime of the jth element in the ith component, where \(j=1,2,\ldots ,x_{i}\), \(i=1,2, \ldots ,n\). Denote

$$\begin{aligned} \varvec{x}=(x_{1}, x_{2}, \ldots , x_{n}) \end{aligned}$$

and

$$\begin{aligned} \varvec{\xi }=(\xi _{11}, \ldots , \xi _{1x_{1}},\xi _{21}, \ldots , \xi _{2x_{2}}, \ldots , \xi _{n1}, \ldots , \xi _{nx_{n}}). \end{aligned}$$

Let \({\varvec{T}} (\varvec{x}, \varvec{\xi }) \) represent the system lifetime, and \({\varvec{T}}_{i} (\varvec{x}, \varvec{\xi }) \) the lifetime of the ith component, \(i=1,2,\ldots ,n\). Then

$$\begin{aligned} {\varvec{T}}_{i}(\varvec{x}, \varvec{\xi })= \displaystyle \sum _{j=1}^{x_{i}} \xi _{ij},\ i=1,2, \ldots , n \end{aligned}$$
(81)

and

$$\begin{aligned} {\varvec{T}}(\varvec{x}, \varvec{\xi }) = \displaystyle \bigwedge _{i=1}^{n}{\varvec{T}}_{i}(\varvec{x}, \varvec{\xi }). \end{aligned}$$
(82)

In addition, if we assume that the cost of each element in the ith component is \(c_{i}\), \(i=1,2,\ldots ,n,\) respectively, then the total cost is

$$\begin{aligned} {\varvec{C}}(\varvec{x})=c_{1}x_{1}+c_{2}x_{2}+\cdots +c_{n}x_{n}. \end{aligned}$$

Suppose that the total capital available is \({\varvec{C}}_{0}\); then the cost constraint is

$$\begin{aligned} {\varvec{C}}(\varvec{x}) \le {\varvec{C}}_{0}. \end{aligned}$$
(83)

When the cost constraint is satisfied, the uncertain random redundancy model can be constructed as follows to maximize the expected system lifetime:

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}}E_{ch}[{\varvec{T}}(\varvec{x}, \varvec{\xi })] \\ {\text{ subject } \text{ to: }} \\ {{\varvec{C}}(\varvec{x}) \le {\varvec{C}}_{0}},\\ {\varvec{x} \ge 1\ ({\text {integer vector}})}. \end{array}\right. } \end{aligned}$$
(84)

Since \({\varvec{T}}(\varvec{x}, \varvec{\xi })\) is a strictly increasing function with respect to \(\varvec{\xi }\), the uncertain random redundancy model may be converted to a crisp mathematical model analogous to (68).

Example 7

Suppose a series system consists of 4 components, and each of those contains only one type of element. The lifetimes of the 4 types of elements are assumed to be

$$N \left( 8,[1,1.1]\right) , N \left( 10,[1,1.2]\right) , L \left( 5,8\right) , L \left( 7,10\right) ,$$

where \(N \left( \mu _i,[\underline{\sigma }_i^2,\overline{\sigma }_i^2]\right) \) represents a G-normal distribution with an expected value of \(\mu _i\), which can be generated by \(\mu _{i}+\eta _{i}\) satisfying \(\eta _{i} \sim N \left( 0,[\underline{\sigma }_i^2,\overline{\sigma }_i^2]\right) \), and \(L \left( a,b\right) \) represents a linear uncertain variable whose uncertainty distribution is

$$\begin{aligned} {\Upsilon (x)} = {\left\{ \begin{array}{ll} 0,& {{\text {if}} \ x\le a },\\ {(x-a)/(b-a),}& {{\text {if}} \ a\le x \le b}, \\ {1,}& {{\text {if}} \ x\ge b. } \end{array}\right. } \end{aligned}$$
(85)

The costs of the 4 types of elements are assumed to be 8, 10, 12, 11, respectively, and the total capital is 100. Then, by using \(\textrm{MATLAB}\), the optimal combination of elements for this system is

$$\begin{aligned} x^{*} = (2,2,3,2), \end{aligned}$$

the total cost is 94, and the expected system lifetime is 13.27.
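
A rough way to explore this example without MATLAB is a Monte-Carlo approximation in Python. Two deliberate simplifications of ours are built in: the variance-ambiguous G-normal lifetimes are replaced by classical normals at one fixed variance from \([\underline{\sigma }_i^2,\overline{\sigma }_i^2]\), and the capacity integrals of the crisp model (68) are replaced by a plain average, so the estimate below only approximates \(E_{ch}\). The uncertain coordinates are handled through the operational law: since the system lifetime is increasing in each element lifetime, a component of \(x_i\) summed \(L(a,b)\) lifetimes has inverse uncertainty distribution \(x_i(a + \alpha (b-a))\). A coarse sample size keeps the full enumeration of feasible allocations fast.

import math, random

COSTS = [8, 10, 12, 11]
CAPITAL = 100

def approx_lifetime(x, s2=1.05, n_mc=500, n_alpha=32, seed=1):
    """Crude estimate of the expected system lifetime for allocation x."""
    rng = random.Random(seed)                   # common random numbers across allocations
    sd = math.sqrt(s2)
    alphas = [(k + 0.5) / n_alpha for k in range(n_alpha)]
    acc = 0.0
    for _ in range(n_mc):
        c1 = sum(rng.gauss(8.0, sd) for _ in range(x[0]))    # x1 elements ~ N(8, .)
        c2 = sum(rng.gauss(10.0, sd) for _ in range(x[1]))   # x2 elements ~ N(10, .)
        # average over the uncertain coordinate at levels alpha:
        # x3 summed L(5,8) lifetimes -> x3*(5+3a), x4 summed L(7,10) -> x4*(7+3a)
        acc += sum(
            min(c1, c2, x[2] * (5 + 3 * a), x[3] * (7 + 3 * a))
            for a in alphas
        ) / n_alpha
    return acc / n_mc

feasible = [
    (x1, x2, x3, x4)
    for x1 in range(1, 13) for x2 in range(1, 10)
    for x3 in range(1, 9) for x4 in range(1, 10)
    if 8 * x1 + 10 * x2 + 12 * x3 + 11 * x4 <= CAPITAL
]
best = max(feasible, key=approx_lifetime)
print(best, round(approx_lifetime(best), 2))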

7 Conclusion

In the real world, there exists a class of complex systems in which stochasticity with non-additive characteristics and human uncertainty coexist. In this case, U-S chance theory can be employed to analyze problems arising in such complex systems. In this paper, we investigate uncertain random programming models under U-S chance theory. The operational law for uncertain random variables is proven. Based on sub-linear expectations and Choquet integrals, four types of expectations of uncertain random variables under U-S chance spaces are defined, and their relations and some properties are presented. Building on these four types of expectations, four uncertain random programming models are provided, together with their equivalent crisp forms. Furthermore, the models can be successfully applied to optimal investment in an incomplete financial market and to system reliability design.

Our future research plan is as follows. In this paper, uncertain random single-objective programming models under U-S chance theory are studied; in practical applications, however, more than one objective function may be required. In forthcoming work, we will therefore investigate uncertain random multi-objective programming models under U-S chance theory and present their compromise models and crisp equivalent models. These models will then be applied to portfolio selection in an incomplete financial market. We have already made some headway on this work.