Abstract
To handle problems in which human uncertainty coexists with stochasticity characterized by non-additive probabilities, we develop uncertain random programming models based on four different types of expectations in the framework of U-S chance theory. Firstly, the operational law for uncertain random variables is proved in this framework. Then, based on sub-linear expectations and Choquet integrals, four types of expectations of uncertain random variables are defined. Finally, four uncertain random programming models are proposed and applied to optimal investment in an incomplete financial market and to system reliability design.
1 Introduction
A programming problem seeks to optimize an objective under given constraints. However, since real-world situations are usually not deterministic, traditional mathematical programming models cannot solve all practical decision-making problems. Therefore, probability theory, fuzzy theory, and uncertainty theory have been applied to programming problems.
Stochastic programming provides a method to consider objectives and constraints with stochastic parameters. In 1955, a complete computation procedure was provided by Dantzig (1955) for a special class of two-stage linear programming models in which the first-stage allocations are made to meet an uncertain but known distribution of demands occurring in the second stage. Charnes and Cooper (1959) pioneered chance-constrained programming as a means of dealing with uncertainty, by specifying a confidence level at which the stochastic constraints are required to hold. A sequential solution procedure for stochastic linear programming problems with 0–1 variables was described by Levary (1984). Other topics in stochastic programming have been studied by many scholars, for example, Schultz (2003), Dyer and Stougie (2006), and Nemirovski et al. (2009).
Fuzzy programming provides a method for dealing with optimization problems with fuzzy parameters. The decision-making problem in fuzzy environment was presented by Bellman and Zadeh (1970), in which optimal decision-making was an alternative that maximized the membership function of fuzzy decision-making. Zimmermann (1978) gave the application of fuzzy linear programming approaches to the linear vector maximum problem. Expected values of fuzzy variables were proposed by Liu and Liu (2002), and they also constructed a spectrum of fuzzy expected value models. For recent developments of fuzzy programming, interested readers can refer to Chang (2007), Li and Liu (2015), Dalman and Bayram (2018), Ranjbar and Effati (2020), and so on.
In practice, fuzziness and randomness often appear simultaneously. To deal with this situation, Kwakernaak (1978) introduced the concepts of fuzzy random variables, expectations of fuzzy random variables, etc. He also gave a more intuitive interpretation of the notion of fuzzy random variables, and derived algorithms and examples for determining expectations, fuzzy probabilities, etc., in Kwakernaak (1979). Fuzzy random programming is an optimization theory for dealing with fuzzy random decision-making problems. By discussing a practical engineering problem, linear programming with fuzzy random variable coefficients was introduced by Wang and Zhong (1993), who also gave its simplex algorithm. In 2001, a new concept of chance of fuzzy random events and a general framework for fuzzy random chance-constrained programming were proposed by Liu (2001). Katagiri et al. (2004) investigated a multi-objective 0–1 programming problem involving fuzzy random variable coefficients and proposed an interactive satisfaction method based on the reference point approach. Fuzzy random programming remains an active topic studied by many scholars, such as Liu and Liu (2005), Li et al. (2006), Ammar (2008), and Sakawa et al. (2012).
To study human uncertainty, Liu (2007) founded uncertainty theory. Uncertain programming is the optimization theory in uncertain environments. Liu (2009) proposed uncertain programming, including chance-constrained programming, dependent-chance programming, uncertain dynamic programming, etc., and Liu (2011) applied uncertain programming to the study of project scheduling problems, machine sequencing problems, etc. Subsequently, Liu and Chen (2015) further provided uncertain multi-objective programming and uncertain goal programming. In addition, uncertain multilevel programming was given by Liu and Yao (2015).
To better deal with complex systems involving both human uncertainties and stochasticities, Liu (2013a) presented the new concept of an uncertain random variable and combined probability measure and uncertain measure into a chance measure. Meanwhile, uncertain random programming was first proposed on the basis of chance theory by Liu (2013b). As generalizations of uncertain random programming, Zhou et al. (2014) proposed uncertain random multi-objective programming, and an uncertain random project scheduling programming model was built by Ke et al. (2015).
It is well-known that the additivity of classical probability measures makes it difficult to portray the non-linear characteristics of some problems, such as risk behavior in incomplete markets and industrial production with incomplete information. Therefore, many scholars have tried to solve these problems by using non-additive probability measures. Choquet (1954) first introduced the concepts of non-additive probability (capacity) and Choquet expectation. With the rapid development of computer science and data information technology, financial risks are becoming more complex and more dynamic, and Choquet expectation is difficult to apply to the study of modern financial risk. Therefore, Peng (2007) founded sub-linear expectation theory. However, there has long existed a class of complex systems that contain both human uncertainties and stochasticities with sub-linear characteristics, such as investment behavior in an incomplete financial market influenced by government regulation and redundant design of systems. In order to describe the characteristics of those phenomena, Fu et al. (2022) combined sub-linear expectation theory with uncertainty theory to construct two product spaces, so as to use a new mathematical tool, called U-S chance theory, to deal with complex systems involving both human uncertainties and stochasticities with sub-linear characteristics. In this paper, uncertain random programming models based on U-S chance theory are investigated for the first time; they provide more reasonable solutions to the problems of optimal investment and financial risk management in incomplete markets. In addition, the uncertain random programming models proposed in this paper are also applicable to the study of system reliability design.
The paper is organized as follows. In Sect. 2 and Appendix A, some definitions and properties of uncertainty theory, U-S chance theory, and sub-linear expectation theory used in this paper are reviewed. In Sect. 3, under the framework of U-S chance theory, we present the operational law of uncertain random variables. In Sect. 4, four types of expectations of uncertain random variables are defined based on sub-linear expectations and Choquet integrals. In Sect. 5, we provide four types of uncertain random programming models. In Sect. 6, two of these models are applied to stock investment in an incomplete financial market and to system reliability design.
2 Preliminary
In this section, we introduce some basic concepts about uncertain variables and uncertain random variables under U-S chance spaces, which are used throughout the paper.
2.1 Uncertain variable
Definition 1
(Liu 2015) Let \(\mathcal {L}\) be a \(\sigma \)-algebra on a non-empty set \(\Gamma \). A set function \(\mathcal {M}\) is called an uncertain measure if it satisfies the following axioms:
Axiom 1 (Normality Axiom): \(\mathcal {M}\{\Gamma \}=1\), for the universal set \(\Gamma ;\)
Axiom 2 (Duality Axiom): \(\mathcal {M}\{\Lambda \} + \mathcal {M}\left\{ \Lambda ^{\textsf{c}}\right\} =1\), for any \(\Lambda \in \mathcal {L};\)
Axiom 3 (Sub-additivity Axiom): For every countable sequence of events \(\left\{ \Lambda _{j}\right\} \subset \mathcal {L},\) we have
$$\mathcal {M}\left\{ \bigcup _{j=1}^{\infty } \Lambda _{j}\right\} \le \sum _{j=1}^{\infty } \mathcal {M}\left\{ \Lambda _{j}\right\} .$$
The triplet \((\Gamma , \mathcal {L}, \mathcal {M})\) is called an uncertainty space, and each element \(\Lambda \) in \(\mathcal {L}\) is called an event. In order to obtain an uncertain measure of compound event, a product uncertain measure is defined as follows:
Axiom 4 (Product Axiom): Let \((\Gamma _{k}, \mathcal {L}_{k}, \mathcal {M}_{k})\) be uncertainty spaces for \(k = 1, 2, \ldots .\) The product uncertain measure \(\mathcal {M}\) is an uncertain measure satisfying
$$\mathcal {M}\left\{ \prod _{k=1}^{\infty } \Lambda _{k}\right\} = \bigwedge _{k=1}^{\infty } \mathcal {M}_{k}\left\{ \Lambda _{k}\right\} ,$$
where \(\Lambda _{k}\) are arbitrarily chosen events from \(\mathcal {L}_{k}\) for \( k = 1, 2, \ldots ,\) respectively.
Definition 2
(Liu 2015) A function \(\tau : \Gamma \mapsto \mathbb {R}\) is called an uncertain variable if it is measurable, i.e.,
$$\{\tau \in B\} = \{\gamma \in \Gamma \mid \tau (\gamma ) \in B\} \in \mathcal {L}$$
for each \(B \in \mathcal {B}(\mathbb {R})\). Its uncertainty distribution is the function given by
$$\Upsilon (x) = \mathcal {M}\{\tau \le x\}, \quad x \in \mathbb {R}.$$
Definition 3
(Liu 2015) The uncertain variables \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) are said to be independent if
$$\mathcal {M}\left\{ \bigcap _{i=1}^{n}\left( \tau _{i} \in B_{i}\right) \right\} = \bigwedge _{i=1}^{n} \mathcal {M}\left\{ \tau _{i} \in B_{i}\right\} $$
for any \(B_i \in \mathcal {B}(\mathbb {R})\), \(i=1,2,\ldots , n.\)
Definition 4
(Liu 2015) Let \(\tau \) be an uncertain variable. Then, the expected value of \(\tau \) is defined by
$$E[\tau ] = \int _{0}^{+\infty } \mathcal {M}\{\tau \ge x\}\,\textrm{d}x - \int _{-\infty }^{0} \mathcal {M}\{\tau \le x\}\,\textrm{d}x,$$
provided that at least one of the two integrals is finite.
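As a quick sanity check of this definition, the following sketch (illustrative, not from the paper) evaluates the tail integral \(E[\tau ]\) numerically for a linear uncertain variable \(\mathcal {L}(a, b)\), whose closed-form expected value \((a+b)/2\) is standard in uncertainty theory.

```python
# Numerical check of E[tau] = ∫_0^∞ M{tau >= x} dx - ∫_{-∞}^0 M{tau <= x} dx
# for a linear uncertain variable tau ~ L(a, b), with uncertainty distribution
# Upsilon(x) = (x - a) / (b - a) clipped to [0, 1].  Closed form: E[tau] = (a + b) / 2.

def linear_distribution(a, b):
    def Upsilon(x):
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)
    return Upsilon

def expected_value(Upsilon, lo=-10.0, hi=10.0, steps=20000):
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        x = lo + (k + 0.5) * h          # midpoint rule
        if x >= 0:
            total += (1.0 - Upsilon(x)) * h   # M{tau >= x} = 1 - Upsilon(x) by duality
        else:
            total -= Upsilon(x) * h           # M{tau <= x} = Upsilon(x)
    return total

print(expected_value(linear_distribution(-1.0, 3.0)))  # ≈ 1.0 = (-1 + 3) / 2
```

Note that the duality axiom is what allows \(\mathcal {M}\{\tau \ge x\}\) to be read off as \(1-\Upsilon (x)\) for a continuous distribution.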
2.2 Uncertain random variable under U-S chance spaces
In this subsection, we use the framework and notations of Fu et al. (2022).
Definition 5
(Fu et al. 2022) Let \((\Gamma , \mathcal {L}, \mathcal {M})\) be an uncertainty space, and \((\Omega , \mathcal {H}, \mathbb {E})\) be a sub-linear expectation space (see Remark 3 in Appendix A). Suppose that \(\mathbb {V}\) and v are non-additive probabilities generated by \(\mathbb {E}\). A pair of chance spaces generated by uncertainty space and sub-linear expectation space (U-S chance spaces for short) are the spaces of forms:
and
where \(\Gamma \times \Omega \) is the universal set, \(\mathcal {L} \times \mathcal {F}\) is the product \(\sigma \)-algebra, \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) are two product measures.
Here, the notations \((\Omega , \mathcal {H}, \mathbb {E})\), v and \(\mathbb {V}\) were introduced by Peng (2017, 2019) and Chen (2016). For more details, please refer to Appendix A.
Definition 6
(Fu et al. 2022) Let \(\Xi \in \mathcal {L} \times \mathcal {F}\) be an uncertain random event under U-S chance spaces. Then, chance measures ch and CH of \(\Xi \) are given by
$$ch\{\Xi \} = \int _{0}^{1} v\left\{ \omega \in \Omega \mid \mathcal {M}\{\Xi _{\omega }\} \ge x\right\} \textrm{d}x$$
and
$$CH\{\Xi \} = \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega \mid \mathcal {M}\{\Xi _{\omega }\} \ge x\right\} \textrm{d}x,$$
respectively.
Remark 1
The universal set \(\Gamma \times \Omega \) is clearly the set of all ordered pairs of the form \((\gamma , \omega )\), where \(\gamma \in \Gamma \) and \(\omega \in \Omega \). That is,
$$\Gamma \times \Omega = \left\{ (\gamma , \omega ) \mid \gamma \in \Gamma ,\ \omega \in \Omega \right\} .$$
The product \(\sigma \)-algebra \(\mathcal {L} \times \mathcal {F}\) is the smallest \(\sigma \)-algebra containing measurable rectangles of the form \(\Lambda \times A\), where \(\Lambda \in \mathcal {L}\) and \(A \in \mathcal {F}\). Any element in \(\mathcal {L} \times \mathcal {F}\) is called an event in the U-S chance spaces.
In the following, we discuss the product measures \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) by a method similar to that of Liu (2015, pp. 409–410). Suppose \(\Xi \) is an event in \(\mathcal {L} \times \mathcal {F}\). For each \(\omega \in \Omega \), it is clear that the set
$$\Xi _{\omega } = \left\{ \gamma \in \Gamma \mid (\gamma , \omega ) \in \Xi \right\} $$
is an event in \(\mathcal {L}\). Thus, the uncertain measure \(\mathcal {M}\{\Xi _{\omega }\}\) exists for each \(\omega \in \Omega \). Unfortunately, \(\mathcal {M}\{\Xi _{\omega }\}\) is not necessarily a measurable function of \(\omega \). That is, the set
$$\Xi _{x}^{*} = \left\{ \omega \in \Omega \mid \mathcal {M}\{\Xi _{\omega }\} \ge x\right\} $$
is a subset of \(\Omega \) but not necessarily an event in \(\mathcal {F}\) for a given real number x. Therefore, the upper probability \(\mathbb {V}\{\Xi _{x}^{*}\}\) and the lower probability \(v\{\Xi _{x}^{*}\}\) do not necessarily exist. In this case, we assign
and
in light of the maximum uncertainty principle. This ensures that the upper probability \(\mathbb {V}\{\Xi _{x}^{*}\}\) and the lower probability \(v\{\Xi _{x}^{*}\}\) exist for every real number x. It is now appropriate to define \(\mathcal {M} \times \mathbb {V}\) and \(\mathcal {M} \times v\) of \(\Xi \) as the expected values of \(\mathcal {M}\{\Xi _{\omega }\}\) with respect to \(\omega \in \Omega \), i.e.,
$$(\mathcal {M} \times \mathbb {V})\{\Xi \} = \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega \mid \mathcal {M}\{\Xi _{\omega }\} \ge x\right\} \textrm{d}x$$
and
$$(\mathcal {M} \times v)\{\Xi \} = \int _{0}^{1} v\left\{ \omega \in \Omega \mid \mathcal {M}\{\Xi _{\omega }\} \ge x\right\} \textrm{d}x.$$
Thus, chance measures CH and ch are well-defined.
Fu et al. (2022) also verified that chance measures ch and CH satisfy the following four properties:
-
(i)
$$ch\{A \times B\} = \mathcal {M}\{A\} \times v\{B\},\ \ CH\{A \times B\} = \mathcal {M}\{A\} \times \mathbb {V}\{B\},$$
for any \(A \in \mathcal {L}\) and \(B \in \mathcal {F};\)
-
(ii)
$$\begin{aligned} CH\{\Xi \} + ch\{\Xi ^{c}\} =1,\; \Xi \in \mathcal {L} \times \mathcal {F}; \end{aligned}$$(3)
-
(iii)
$$ch\{\Xi _{1}\} \le ch\{\Xi _{2}\},\ \ CH\{\Xi _{1}\} \le CH\{\Xi _{2}\},$$
for events \(\Xi _{1}, \Xi _{2} \in \mathcal {L} \times \mathcal {F},\) such that \(\Xi _{1} \subseteq \Xi _{2};\)
-
(iv)
$$ch\{\Xi \} \le CH\{\Xi \}, \ \ \Xi \in \mathcal {L} \times \mathcal {F}.$$
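Property (ii) can be made concrete on a finite sample space. The sketch below is illustrative: it assumes, per Remark 1, that ch and CH are the expected values of \(\mathcal {M}\{\Xi _{\omega }\}\) computed as Choquet-type integrals with respect to v and \(\mathbb {V}\), and the weights defining \(\mathbb {V}\) are hypothetical.

```python
# Finite sketch of CH{Xi} + ch{Xi^c} = 1 for a rectangle event Xi = A x B.
# Omega is a 3-point set; V is an illustrative (sub-additive, non-additive) upper
# probability and v its conjugate lower probability, v{S} = 1 - V{Omega \ S}.

Omega = frozenset({0, 1, 2})
B = frozenset({0, 1})
M_A = 0.3                       # uncertain measure of A (so M{A^c} = 1 - M_A by duality)

def V(S):
    weights = {0: 0.3, 1: 0.4, 2: 0.5}   # weights sum to > 1: V is not additive
    return min(1.0, sum(weights[w] for w in S))

def v(S):
    return 1.0 - V(Omega - S)

def choquet(capacity, section, steps=20000):
    # ∫_0^1 capacity{omega : section(omega) >= x} dx, by the midpoint rule
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) / steps
        total += capacity(frozenset(w for w in Omega if section(w) >= x)) / steps
    return total

section_Xi  = lambda w: M_A if w in B else 0.0         # M{(A x B)_omega}
section_XiC = lambda w: 1.0 - M_A if w in B else 1.0   # M{((A x B)^c)_omega}

CH_Xi  = choquet(V, section_Xi)    # equals M_A * V{B} = 0.21, cf. property (i)
ch_XiC = choquet(v, section_XiC)
print(CH_Xi + ch_XiC)              # ≈ 1, cf. property (ii)
```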
Definition 7
A function \(\xi : \Gamma \times \Omega \mapsto \mathbb {R}\) is called an uncertain random variable under U-S chance spaces if it is measurable, i.e., for each \(B\in \mathcal {B}(\mathbb {R})\),
$$\{\xi \in B\} = \left\{ (\gamma , \omega ) \in \Gamma \times \Omega \mid \xi (\gamma , \omega ) \in B\right\} \in \mathcal {L} \times \mathcal {F}.$$
Example 1
Let \(\eta \) be a Bernoulli random variable under \(\mathbb {E}\) (see Definition 16 in Appendix A) with the set of possible values \(\{a_{1}, a_{2}, \ldots , a_{n}\}\) and \(\tau _{1},\tau _{2},\ldots ,\tau _{n}\) be uncertain variables defined on \((\Gamma ,\mathcal {L},\mathcal {M})\). Suppose that f is a mapping from \(\Gamma \times \{a_{1}, a_{2}, \ldots , a_{n}\}\) to \(\mathbb {R}\) such that
$$f(\gamma , a_{k}) = \tau _{k}(\gamma ), \quad \gamma \in \Gamma ,\ k = 1, 2, \ldots , n.$$
Then
$$\xi (\gamma , \omega ) = f(\gamma , \eta (\omega ))$$
is an uncertain random variable, where \(p_{k} \in [\underline{p}_{k}, \overline{p}_{k}]\) for \(k=1,2,\ldots ,n\), satisfying \(\sum _{k=1}^{n} p_{k}=1\), and \(n \in \mathbb {N}\).
Here and in the sequel, uncertain random variables are based on the U-S chance spaces \((\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, \mathbb {V})\) and \((\Gamma , \mathcal {L}, \mathcal {M}) \times (\Omega , \mathcal {F}, v)\).
3 Operational law
Theorem 1
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, the uncertain random variable
$$\xi = f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})$$
has a lower distribution
$$\begin{aligned} \Phi _{1}(x) = \int _{0}^{1} v\left\{ \omega \in \Omega \mid F(x; \eta _{1}, \eta _{2}, \ldots , \eta _{m}) \ge r\right\} \textrm{d}r \end{aligned}$$(5)
and an upper distribution
$$\begin{aligned} \Phi _{2}(x) = \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega \mid F(x; \eta _{1}, \eta _{2}, \ldots , \eta _{m}) \ge r\right\} \textrm{d}r, \end{aligned}$$(6)
where \(F (x; y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(f(y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function (see Theorem 2.18 in Liu 2015)
$$F^{-1}(\alpha ; y_{1}, \ldots , y_{m}) = f\left( y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\right) ,$$
provided that \(f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}.\)
Proof
For any given real numbers \(y_{1}, y_{2}, \ldots , y_{m}\), it follows from the operational law of uncertain variables that \(f(y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is an uncertain variable with uncertainty distribution \(F (x;y_{1}, y_{2}, \ldots , y_{m})\). By using Definition 6, we know that \(\Phi _{1}\) and \(\Phi _{2}\) are the lower and upper distributions of \(\xi \) just with forms (5) and (6), respectively. \(\square \)
Example 2
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables (see Definition 13 (i) in Appendix A) with maximal distribution (see Definition 14 in Appendix A) under \(\mathbb {E}\), i.e.,
$$\mathbb {E}\left[ \varphi (\eta _{i})\right] = \sup _{\underline{\mu }_{i} \le y \le \overline{\mu }_{i}} \varphi (y)$$
for each Borel measurable function \(\varphi \) on \(\mathbb {R}\), where \(\overline{\mu }_{i}=\mathbb {E}[\eta _{i}]\), \(\underline{\mu }_{i}=\mathcal {E}[\eta _{i}]\), \(i = 1, \ldots , m\). And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, the sum
$$\xi = \eta _{1} + \eta _{2} + \cdots + \eta _{m} + \tau _{1} + \tau _{2} + \cdots + \tau _{n}$$
has a lower distribution
$$\Phi _{1}(x) = \Upsilon \left( x - \sum _{i=1}^{m} \overline{\mu }_{i}\right) $$
and an upper distribution
$$\Phi _{2}(x) = \Upsilon \left( x - \sum _{i=1}^{m} \underline{\mu }_{i}\right) ,$$
where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} + \tau _{2} + \cdots + \tau _{n}\) (see Theorem 2.14 in Liu 2015) determined by
$$\Upsilon (z) = \sup _{z_{1}+z_{2}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).$$
Example 3
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be IID random variables (see Definition 13 (iii) in Appendix A) with G-normal distribution (see Definition 15 in Appendix A) under \(\mathbb {E}\), i.e., \(\eta _{1} \sim N \left( 0,[\underline{\sigma }^2,\overline{\sigma }^2]\right) \), where \(\mathbb {E}[\eta _{1}] = \mathcal {E}[\eta _{1}] = 0\), \(\overline{\sigma }^2 = \mathbb {E}[\eta _1^2]\) and \(\underline{\sigma }^2 = \mathcal {E}[\eta _1^2]\). And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Then, the sum
has a lower distribution
and an upper distribution
where \(\Phi \) denotes the standard normal distribution function and \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} + \tau _{2} + \cdots + \tau _{n}\) determined by
$$\Upsilon (z) = \sup _{z_{1}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).$$
Proof
From Definition 15 and Remark 4 in Appendix A, we know that \(\eta _{1} + \eta _{2} + \cdots + \eta _{m} \overset{{\text {d}}}{=} \sqrt{m}\eta _{1}\). According to Corollary 1 in Peng and Zhou (2020), we conclude that
and
where \(\Phi \) denotes the standard normal distribution function.
Then by using the above arguments, it follows that
where \(\Upsilon (z) = \sup _{z_{1}+ \cdots +z_{n} = z} \Upsilon _{1} (z_{1}) \wedge \Upsilon _{2} (z_{2}) \wedge \cdots \wedge \Upsilon _{n} (z_{n}).\)
Hence, (9) is proved. With the similar argument, we can verify that (10) holds. Thus, the proof is completed. \(\square \)
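The sup-min convolution defining \(\Upsilon \) above can be checked by brute force on a grid. The sketch below (illustrative parameters) uses linear uncertain variables, for which the sum has the closed form \(\mathcal {L}(a_{1}+a_{2}, b_{1}+b_{2})\).

```python
# Upsilon(z) = sup_{z1+z2=z} Upsilon_1(z1) ∧ Upsilon_2(z2), computed by brute-force
# grid search and checked against the closed form L(0,1) + L(1,3) = L(1,4).

def lin(a, b):
    return lambda x: 0.0 if x <= a else 1.0 if x >= b else (x - a) / (b - a)

def supmin_sum(U1, U2, z, lo=-10.0, hi=10.0, steps=4000):
    best = 0.0
    for k in range(steps + 1):
        z1 = lo + k * (hi - lo) / steps
        best = max(best, min(U1(z1), U2(z - z1)))
    return best

U1, U2 = lin(0.0, 1.0), lin(1.0, 3.0)
Usum_exact = lin(1.0, 4.0)
for z in (1.5, 2.5, 3.5):
    print(round(supmin_sum(U1, U2, z), 3), round(Usum_exact(z), 3))
```

The supremum is attained where the two distribution values coincide, which is exactly the inverse-distribution rule \(\Upsilon ^{-1}(\alpha ) = \Upsilon _{1}^{-1}(\alpha ) + \Upsilon _{2}^{-1}(\alpha )\).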
Example 4
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively.
-
(i)
The maximum
$$\begin{aligned} \xi = \eta _{1} \vee \eta _{2} \vee \cdots \vee \eta _{m} \vee \tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n} \end{aligned}$$
has a lower distribution
$$\begin{aligned} \Phi _{1}(x) = \Upsilon (x) v \{\eta _{1} \le x\} \cdots v \{\eta _{m} \le x\} \end{aligned}$$(11)
and an upper distribution
$$\begin{aligned} \Phi _{2}(x) = \Upsilon (x) \mathbb {V} \{\eta _{1} \le x\} \cdots \mathbb {V} \{\eta _{m} \le x\}, \end{aligned}$$(12)
where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n}\) (see Exercise 2.13 in Liu 2015) determined by
$$\Upsilon (x) = \Upsilon _{1}(x) \wedge \Upsilon _{2}(x) \wedge \cdots \wedge \Upsilon _{n}(x).$$
-
(ii)
The minimum
$$\begin{aligned} \xi = \eta _{1} \wedge \eta _{2} \wedge \cdots \wedge \eta _{m} \wedge \tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n} \end{aligned}$$
has a lower distribution
$$\begin{aligned} \Phi _{1}(x) = 1-\left[ 1-\Upsilon (x)\right] \left( 1-v \{\eta _{1} \le x\}\right) \cdots \left( 1-v \{\eta _{m} \le x\}\right) \end{aligned}$$(13)
and an upper distribution
$$\begin{aligned} \Phi _{2}(x) = 1- \left[ 1-\Upsilon (x)\right] \left( 1-\mathbb {V} \{\eta _{1} \le x\}\right) \cdots \left( 1-\mathbb {V}\{\eta _{m} \le x\}\right) , \end{aligned}$$(14)
where \(\Upsilon \) is the uncertainty distribution of \(\tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n}\) (see Exercise 2.12 in Liu 2015) determined by
$$\Upsilon (x) = \Upsilon _{1}(x) \vee \Upsilon _{2}(x) \vee \cdots \vee \Upsilon _{n}(x).$$
Proof
(i) According to (5) and using the fact that \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent random variables under \(\mathbb {E}\), it can be shown that
where \(\Upsilon (x) = \Upsilon _{1}(x) \wedge \Upsilon _{2}(x) \wedge \cdots \wedge \Upsilon _{n}(x)\). Similarly, we can verify that (12) holds.
(ii) Since \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent random variables under \(\mathbb {E}\), from (3), it yields that
where \(\Upsilon (x) = \Upsilon _{1}(x) \vee \Upsilon _{2}(x) \vee \cdots \vee \Upsilon _{n}(x)\). Similarly, we can verify that (14) holds. The proof is completed. \(\square \)
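For a concrete instance of (11) and (12), the sketch below takes a single maximally distributed \(\eta \) and a linear \(\tau \); representing the maximal distribution by its family of point masses on \([\underline{\mu }, \overline{\mu }]\) is an assumption consistent with Example 2, and all numbers are illustrative.

```python
# Lower/upper distribution of xi = eta ∨ tau (Example 4 (i), m = 1):
#   Phi_1(x) = Upsilon(x) * v{eta <= x},  Phi_2(x) = Upsilon(x) * V{eta <= x}.
# For eta maximally distributed on [mu_lo, mu_hi], the scenario family of point
# masses gives V{eta <= x} = 1 iff x >= mu_lo and v{eta <= x} = 1 iff x >= mu_hi.

mu_lo, mu_hi = 0.0, 2.0

def Upsilon(x):  # tau ~ L(1, 3)
    return 0.0 if x <= 1 else 1.0 if x >= 3 else (x - 1) / 2

def V_le(x):   # sup over point masses on [mu_lo, mu_hi]
    return 1.0 if x >= mu_lo else 0.0

def v_le(x):   # inf over point masses on [mu_lo, mu_hi]
    return 1.0 if x >= mu_hi else 0.0

def Phi1(x):
    return Upsilon(x) * v_le(x)

def Phi2(x):
    return Upsilon(x) * V_le(x)

for x in (1.5, 2.5):
    print(x, Phi1(x), Phi2(x))   # Phi1 <= Phi2 pointwise, cf. ch <= CH
```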
Theorem 2
Assume that \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) are independent Boolean random variables under \(\mathbb {E}\) (see Definition 17 in Appendix A), i.e.,
$$\begin{aligned} \eta _{i}={\left\{ \begin{array}{ll} 1 & \text {with probability}\ p_{i},\\ 0 & \text {with probability}\ 1-p_{i}, \end{array}\right. } \end{aligned}$$(15)
where \(\ p_{i}\in [\underline{p}_{i}, \overline{p}_{i}],\) \( i = 1,2, \ldots , m.\) And let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent Boolean uncertain variables, i.e.,
$$\begin{aligned} \tau _{j}={\left\{ \begin{array}{ll} 1 & \text {with uncertain measure}\ q_{j},\\ 0 & \text {with uncertain measure}\ 1-q_{j}, \end{array}\right. } \end{aligned}$$(16)
for \(j=1,2,\ldots ,n.\) Suppose that f is a Boolean function from \(\{0,1\}^{n+1}\) to \(\{0,1\}\) and g is a Boolean function from \(\{0,1\}^{m}\) to \(\{0,1\}\). Then
$$\xi = f\left( g(\eta _{1}, \eta _{2}, \ldots , \eta _{m}), \tau _{1}, \tau _{2}, \ldots , \tau _{n}\right) $$
is a Boolean uncertain random variable satisfying the following properties:
-
(i)
if the equation \(g(x_{1},\ldots ,x_{m})=1\) (\(x_i \in \{0,1\},i=1,\ldots ,m\)) has a unique solution \(\{y_{1}, \ldots , y_{m}\}\) in set \(\{0,1\}^{m}\) and \(f(0,\tau _{1}, \tau _{2}, \ldots , \tau _{n})=0\), then
$$\begin{aligned} ch\{\xi =1\}=\displaystyle \prod _{i=1}^{m} w_{i}(y_{i}) Z(y_{1},\ldots ,y_{m}), \end{aligned}$$(17)
and
$$\begin{aligned} CH\{\xi =1\}=\displaystyle \prod _{i=1}^{m} u_{i}(y_{i}) Z(y_{1},\ldots ,y_{m}); \end{aligned}$$(18)
-
(ii)
if the equation \(g(x_{1},\ldots ,x_{m})=0\) (\(x_i \in \{0,1\},i=1,\ldots ,m\)) has a unique solution \(\{y_{1}, \ldots , y_{m}\}\) in set \(\{0,1\}^{m}\) and \(f(1,\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\), then
$$\begin{aligned} ch\{\xi =0\}=\displaystyle \prod _{i=1}^{m} w_{i}(y_{i}) {{\overline{Z}}}(y_{1},\ldots ,y_{m}), \end{aligned}$$(19)
and
$$\begin{aligned} CH\{\xi =0\}=\displaystyle \prod _{i=1}^{m} u_{i}(y_{i}) {{\overline{Z}}}(y_{1},\ldots ,y_{m}). \end{aligned}$$(20)
Here
$$\begin{aligned} {Z(y_{1},\ldots ,y_{m})} = {\left\{ \begin{array}{ll} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}),\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}) <0.5}, \\ {1-\displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}),}\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}) \ge 0.5,} \end{array}\right. } \end{aligned}$$(21)
$$\begin{aligned} {\overline{Z}(y_{1},\ldots ,y_{m})} = {\left\{ \begin{array}{ll} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}),\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}) <0.5}, \\ {1-\displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=1}\min _{1\le j \le n} v_{j}(z_{j}),}\\ \ \ {{\text {if}} \displaystyle \sup _{f(g(y_{1},\ldots ,y_{m}),z_{1}, \ldots , z_{n})=0}\min _{1\le j \le n} v_{j}(z_{j}) \ge 0.5,} \end{array}\right. } \end{aligned}$$(22)
$$\begin{aligned} {\displaystyle w_{i}(y_{i})} = {\left\{ \begin{array}{ll} \ \ \ \ \ \underline{p}_{i},& {{\text {if}}\ y_{i}=1,} \\ {1-\overline{p}_{i},}& {{\text {if}}\ y_{i}=0,} \end{array}\right. } \end{aligned}$$(23)
$$\begin{aligned} {\displaystyle u_{i}(y_{i})} = {\left\{ \begin{array}{ll} \ \ \ \ \ \overline{p}_{i},& {{\text {if}}\ y_{i}=1,} \\ {1-\underline{p}_{i},}& {{\text {if}}\ y_{i}=0,} \end{array}\right. } \end{aligned}$$(24)
and
$$\begin{aligned} {v_{j}(z_{j})} = {\left\{ \begin{array}{ll} \ \ \ \ \ q_{j},& {{\text {if}}\ z_{j}=1,} \\ {1-q_{j},}& {{\text {if}}\ z_{j}=0.} \end{array}\right. } \end{aligned}$$(25)
Proof
(i) By the operational law of uncertain random variables (Theorem 1), we have
and
where \((y_{1},\ldots ,y_{m})\) is the unique solution of \(g(x_{1},\ldots , x_{m})=1\). Since \(f(g(y_{1},\ldots ,y_{m}),\) \(\tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is essentially a Boolean function of uncertain variables, it follows from the operational law of uncertain variables that \(\mathcal {M}\left\{ f(g(y_{1},\ldots ,y_{m}),\tau _{1}, \tau _{2}, \ldots , \tau _{n})=1\right\} \) is determined by (21) (see Theorem 2.23 in Liu 2015), thus (17) and (18) are verified.
By an argument similar to the proof of (i), we can prove (ii); the details are omitted. The proof is completed. \(\square \)
Example 5
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent Boolean random variables under \(\mathbb {E}\) defined by (15), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent Boolean uncertain variables defined by (16).
(i) The minimum
$$\xi = \eta _{1} \wedge \eta _{2} \wedge \cdots \wedge \eta _{m} \wedge \tau _{1} \wedge \tau _{2} \wedge \cdots \wedge \tau _{n}$$
is a Boolean uncertain random variable such that
$$ch\{\xi =1\} = \underline{p}_{1}\, \underline{p}_{2} \cdots \underline{p}_{m} \left( q_{1} \wedge q_{2} \wedge \cdots \wedge q_{n}\right) $$
and
$$CH\{\xi =1\} = \overline{p}_{1}\, \overline{p}_{2} \cdots \overline{p}_{m} \left( q_{1} \wedge q_{2} \wedge \cdots \wedge q_{n}\right) .$$
(ii) The maximum
$$\xi = \eta _{1} \vee \eta _{2} \vee \cdots \vee \eta _{m} \vee \tau _{1} \vee \tau _{2} \vee \cdots \vee \tau _{n}$$
is a Boolean uncertain random variable such that
$$ch\{\xi =0\} = \left( 1-\overline{p}_{1}\right) \cdots \left( 1-\overline{p}_{m}\right) \left( 1- q_{1} \vee q_{2} \vee \cdots \vee q_{n}\right) $$
and
$$CH\{\xi =0\} = \left( 1-\underline{p}_{1}\right) \cdots \left( 1-\underline{p}_{m}\right) \left( 1- q_{1} \vee q_{2} \vee \cdots \vee q_{n}\right) .$$
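Theorem 2 (i) can be exercised mechanically on a small series system, with g and f both taken as minima; Z is evaluated by enumerating \(\{0,1\}^{n}\) exactly as in (21). The component parameters below are illustrative, and the closed form of Example 5 (i) is recovered.

```python
# Theorem 2 (i) on a series system: g = AND of the Boolean random components,
# f(y, z1..zn) = y AND z1 AND ... AND zn.  Z is computed from formula (21) by
# enumeration; the result matches ch{xi=1} = (prod p_lo_i) * min_j q_j and
# CH{xi=1} = (prod p_hi_i) * min_j q_j.

from itertools import product

p_lo = [0.90, 0.85]          # underline{p}_i for the Boolean random components
p_hi = [0.95, 0.90]          # overline{p}_i
q    = [0.80, 0.70, 0.90]    # M{tau_j = 1} for the Boolean uncertain components

def f(y, z):                 # series (minimum) structure function
    return min([y] + list(z))

def vj(j, zj):               # formula (25)
    return q[j] if zj == 1 else 1.0 - q[j]

def Z(gy):                   # formula (21), with g(y1..ym) = gy
    s1 = max((min(vj(j, z[j]) for j in range(len(q)))
              for z in product((0, 1), repeat=len(q)) if f(gy, z) == 1), default=0.0)
    if s1 < 0.5:
        return s1
    s0 = max((min(vj(j, z[j]) for j in range(len(q)))
              for z in product((0, 1), repeat=len(q)) if f(gy, z) == 0), default=0.0)
    return 1.0 - s0

ch_1 = p_lo[0] * p_lo[1] * Z(1)   # the unique solution of g = 1 is y = (1, 1)
CH_1 = p_hi[0] * p_hi[1] * Z(1)
print(ch_1, CH_1)
```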
4 Expected value
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be uncertain variables. Then, the uncertain random variable
$$\xi = f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})$$
has the following four types of expectations:
(i) Upper expectation
$$\begin{aligned} \tilde{E}[\xi ] = \mathbb {E}\left[ E\left[ f(y_{1}, \ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})\right] \big |_{y_{i}=\eta _{i},\, i=1,\ldots ,m}\right] ; \end{aligned}$$(30)
(ii) Lower expectation
$$\begin{aligned} \tilde{\mathcal {E}}[\xi ] = \mathcal {E}\left[ E\left[ f(y_{1}, \ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})\right] \big |_{y_{i}=\eta _{i},\, i=1,\ldots ,m}\right] ; \end{aligned}$$(31)
(iii) Choquet expectation with respect to CH
$$\begin{aligned} E_{CH}[\xi ] = \int _{0}^{+\infty } CH\{\xi \ge x\}\,\textrm{d}x - \int _{-\infty }^{0} ch\{\xi \le x\}\,\textrm{d}x; \end{aligned}$$(32)
(iv) Choquet expectation with respect to ch
$$\begin{aligned} E_{ch}[\xi ] = \int _{0}^{+\infty } ch\{\xi \ge x\}\,\textrm{d}x - \int _{-\infty }^{0} CH\{\xi \le x\}\,\textrm{d}x. \end{aligned}$$(33)
Remark 2
(i) In (30) and (31), \(E [f (y_{1},\ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})]\) denotes the expected value of uncertain variable \(f \ (y_{1}, \ldots , y_{m}, \tau _{1}, \ldots , \tau _{n})\), and it is finite. In (32) and (33), at least one of the two integrals is finite.
(ii) From (30) and (31), we can verify that \(\tilde{E}\) and \(\tilde{\mathcal {E}}\) have the same properties as \(\mathbb {E}\) and \(\mathcal {E}\), respectively.
Theorem 3
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Assume that \(\{F_{\theta _{i}}(y_{i}), y_{i}\in \mathbb {R}\}_{\theta _{i} \in \Theta _{i}}\) is a family of distributions of \(\eta _{i}\) corresponding to the set of probability measures \(\{P_{\theta _{i}}\}_{\theta _{i} \in \Theta _{i}}\), for \(i=1, \ldots , m\), respectively. Then, the upper and lower expectations of uncertain random variable \(\xi = f(\eta _{1}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are
and
respectively, where \(f(\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}.\)
Proof
Without loss of generality, we only prove that (34) holds for \(m=2\); the proofs of the other cases of (34) are similar.
Suppose that \(\{P_{\delta }\}_{\delta \in \Delta }\) is a family of joint probability measures of \(\eta _{1}\) and \(\eta _{2}\). Since \(f(y_{1}, y_{2}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), then by Theorem 2.30 in Liu (2015), we obtain
where \(F(x;y_{1}, y_{2})\) is the uncertainty distribution of \(f(y_1,y_2,\tau _1,\ldots ,\tau _n)\).
Since \(\eta _{2}\) is independent of \(\eta _{1}\) under \(\mathbb {E}\), it follows from (30) that
Hence (34) is proved. With the similar argument, we can prove that (35) holds. Thus the proof is completed. \(\square \)
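A miniature of Theorem 3 with \(m=1\): the sketch below (with illustrative laws) models the ambiguity of \(\eta \) by two candidate discrete distributions and takes \(f(y, \tau ) = y + \tau \), so the upper expectation (34) reduces to a supremum of classical means shifted by \(E[\tau ]\).

```python
# Theorem 3 in miniature: eta has an ambiguous distribution, one of two discrete
# laws {P_theta}, and xi = eta + tau.  The upper (lower) expectation is the
# supremum (infimum) over theta of the classical expectation of E[y + tau] = y + E[tau].

laws = {
    "theta1": [(0.0, 0.5), (2.0, 0.5)],   # (value, probability): mean 1.0
    "theta2": [(1.0, 0.7), (3.0, 0.3)],   # mean 1.6
}
E_tau = 0.5                                # expected value of the uncertain part

def classical_mean(law):
    return sum(y * p for y, p in law)

E_upper = max(classical_mean(law) + E_tau for law in laws.values())
E_lower = min(classical_mean(law) + E_tau for law in laws.values())
print(E_upper, E_lower)
```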
Theorem 4
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be uncertain variables and
be an uncertain random variable. Then
$$\begin{aligned} E_{ch}[\xi ] \le \tilde{\mathcal {E}}[\xi ] \le \tilde{E}[\xi ] \le E_{CH}[\xi ]. \end{aligned}$$(36)
Proof
Firstly, for any non-negative uncertain random variable \(\xi \), from (32) and (33), it follows that
and
Secondly, for any uncertain random variable \(\xi \),
where \(\xi ^{+} = \xi \vee 0=\max \{\xi ,0\}\), \(\xi ^{-} = -(\xi \wedge 0)=-\min \{\xi ,0\}\). Then by (32) and (33), we have
Applying (37) and (38), it yields
From the sub-additivity of \(\tilde{E}\) and the fact that \(\tilde{\mathcal {E}} [\xi ] =-\tilde{E} [-\xi ]\), it can be shown that
Finally, by using the above arguments and noting the fact that \(\tilde{\mathcal {E}} [\xi ] \le \tilde{E} [\xi ]\), we obtain
The proof of Theorem 4 is completed. \(\square \)
Theorem 5
Let \(\eta \) be a random variable under \(\mathbb {E}\), and \(\tau \) be an uncertain variable. Then, we have
-
(i)
$$\begin{aligned} \tilde{E} [\eta + \tau ] = \mathbb {E} [\eta ] + E [\tau ] \end{aligned}$$(39)
and
$$\begin{aligned} \tilde{\mathcal {E}} [\eta + \tau ] = \mathcal {E} [\eta ] + E [\tau ]; \end{aligned}$$(40) -
(ii)
$$\begin{aligned} \tilde{E} [\eta \tau ] = (E[\tau ])^{+} \mathbb {E}[\eta ] +( E[\tau ])^{-} \mathbb {E}[-\eta ] \end{aligned}$$(41)
and
$$\begin{aligned} \tilde{\mathcal {E}} [\eta \tau ] = (E[\tau ])^{+} \mathcal {E}[\eta ] +( E[\tau ])^{-} \mathcal {E}[-\eta ]; \end{aligned}$$(42) -
(iii)
$$\begin{aligned} E_{CH}[\eta + \tau ]&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | 1-F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x\nonumber \\&\quad -\int \limits _{-\infty }^{0} \int \limits _{0}^{1} v \left\{ \omega \in \Omega | F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x \end{aligned}$$(43)
and
$$\begin{aligned} E_{ch}[\eta + \tau ]&= \int \limits _{0}^{\infty } \int \limits _{0}^{1} v \left\{ \omega \in \Omega | 1-F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x \nonumber \\&\quad - \int \limits _{-\infty }^{0} \int _{0}^{1} \mathbb {V} \left\{ \omega \in \Omega | F(x-\eta )\ge r\right\} \textrm{d}r \textrm{d}x, \end{aligned}$$(44)where F(x) is the uncertainty distribution of \(\tau \).
Proof
(i) According to (30), the uncertain random variable \(\eta + \tau \) has an upper expectation
Similarly, by (31), we can show that the uncertain random variable \(\eta + \tau \) has a lower expectation
(ii) From (30), the uncertain random variable \(\eta \tau \) has an upper expectation
Similarly, by (31), we can show that the uncertain random variable \(\eta \tau \) has a lower expectation
(iii) Applying (32), it is easy to obtain that
where F(x) is the uncertainty distribution of \(\tau \).
With the similar argument, we can verify that (44) holds. The proof of Theorem 5 is completed. \(\square \)
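Formula (41) can be checked numerically. The sketch below assumes \(\eta \) is maximally distributed, so that upper expectations reduce to suprema over point masses, and uses the fact that \(E[y\tau ] = yE[\tau ]\) for every real y; all numbers are illustrative.

```python
# Numerical check of (41): E_tilde[eta*tau] = (E[tau])^+ E[eta] + (E[tau])^- E[-eta],
# for eta maximally distributed on [mu_lo, mu_hi].  Since E[y*tau] = y*E[tau] for every
# real y, the upper expectation (30) reduces to  sup_y  y * E[tau].

mu_lo, mu_hi = -0.5, 1.0

def E_up(phi, grid=10001):     # upper expectation over the point-mass scenario family
    return max(phi(mu_lo + k * (mu_hi - mu_lo) / (grid - 1)) for k in range(grid))

def lhs(E_tau):                # E_tilde[eta * tau] via (30)
    return E_up(lambda y: y * E_tau)

def rhs(E_tau):                # right-hand side of (41)
    return max(E_tau, 0) * E_up(lambda y: y) + max(-E_tau, 0) * E_up(lambda y: -y)

for E_tau in (2.0, -2.0):
    print(E_tau, lhs(E_tau), rhs(E_tau))
```

The sign split matters: when \(E[\tau ] < 0\), multiplying by \(\eta \) swaps the roles of the upper and lower expectations, which is exactly what the \((E[\tau ])^{-}\,\mathbb {E}[-\eta ]\) term captures.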
Theorem 6
Assume \(\eta _{1}\) and \(\eta _{2}\) are random variables satisfying that \(\eta _{2}\) is independent of \(\eta _{1}\) under \(\mathbb {E}\), \(\tau _{1}\) and \(\tau _{2}\) are uncertain variables, and for any given real numbers \(y_{1}\) and \(y_{2}\), \(f_{1}(y_{1}, \tau _{1})\) and \(f_{2}(y_{2}, \tau _{2})\) are real-valued comonotonic functions with respect to \(\tau _{1}\) and \(\tau _{2}\). Then
$$\begin{aligned} \tilde{E}\left[ f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})\right] = \tilde{E}\left[ f_{1}(\eta _{1}, \tau _{1})\right] + \tilde{E}\left[ f_{2}(\eta _{2}, \tau _{2})\right] \end{aligned}$$(47)
and
$$\begin{aligned} \tilde{\mathcal {E}}\left[ f_{1}(\eta _{1}, \tau _{1}) + f_{2}(\eta _{2}, \tau _{2})\right] = \tilde{\mathcal {E}}\left[ f_{1}(\eta _{1}, \tau _{1})\right] + \tilde{\mathcal {E}}\left[ f_{2}(\eta _{2}, \tau _{2})\right] . \end{aligned}$$(48)
Proof
Since \(f_{1}(y_{1}, \tau _{1})\) and \(f_{2}(y_{2}, \tau _{2})\) are real-valued comonotonic functions with respect to uncertain variables \(\tau _{1}\) and \(\tau _{2}\), we have
according to Definition 4. Then from (30), it follows that
Denote
and
Hence
by Definition 13 (i) in Appendix A. In the similar way, we can verify that (48) holds and the proof is completed. \(\square \)
5 Uncertain random programming
In this section, we suggest some classes of uncertain random optimization models, called uncertain random programming under U-S chance spaces, to solve decision-making problems in uncertain random environments.
Assume that \(\varvec{x}\) is a decision vector, \(\varvec{\xi }\) is an uncertain random vector, \(f(\varvec{x}, \varvec{\xi })\) is an objective function, and \(g_{j}(\varvec{x}, \varvec{\xi })\) are uncertain random constraint functions, \(j=1,2,\ldots ,p.\) Since the uncertain random objective function \(f(\varvec{x}, \varvec{\xi })\) cannot be directly maximized or minimized, we may maximize or minimize its expected values. Furthermore, since the uncertain random constraints \(g_{j}(\varvec{x}, \varvec{\xi }) \le (\ge ) 0\), \(j = 1, \ldots , p\) do not define a crisp feasible set, it is natural to require that the uncertain random constraints hold with confidence levels \(\alpha _{j}\) or \(\beta _{j}\), \(j = 1, \ldots , p\). Then we have the following two sets of chance constraints:
$$ch\left\{ g_{j}(\varvec{x}, \varvec{\xi }) \le 0\right\} \ge \alpha _{j}, \quad j = 1, 2, \ldots , p,$$
and
$$CH\left\{ g_{j}(\varvec{x}, \varvec{\xi }) \le 0\right\} \ge \beta _{j}, \quad j = 1, 2, \ldots , p.$$
Four uncertain random programming models based on U-S chance theory are introduced in the following.
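Before turning to the models, note that once the random part is discretized into finitely many scenarios and finitely many candidate measures, a chance constraint of the form \(CH\{g \ge 0\} = \int _{0}^{1} \mathbb {V}\{\omega \,|\, 1-G(0;\omega ) \ge r\} \textrm{d}r \le \alpha \) can be evaluated directly. The following sketch is a toy discretization; the scenario probabilities and uncertain-measure values are hypothetical.

```python
# Two scenarios for the random part and a family of two probability
# measures {P_theta}; u(omega) stands for the uncertain measure
# M{g >= 0 | omega} = 1 - G(0; omega). All numbers are hypothetical.
P = [
    {'w1': 0.4, 'w2': 0.6},   # P_theta_1
    {'w1': 0.7, 'w2': 0.3},   # P_theta_2
]
u = {'w1': 0.2, 'w2': 0.8}

def upper_prob(event):
    """V(A) = sup_theta P_theta(A) over the finite family."""
    return max(sum(p[w] for w in event) for p in P)

def chance_CH(u, n=20000):
    """CH{g >= 0} = int_0^1 V{omega : u(omega) >= r} dr, midpoint rule."""
    total = 0.0
    for k in range(n):
        r = (k + 0.5) / n
        total += upper_prob([w for w in u if u[w] >= r])
    return total / n

print(round(chance_CH(u), 3))   # prints 0.56
```

A constraint \(CH\{g \ge 0\} \le \alpha _{j}\) is then checked by comparing this value with the prescribed confidence level.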
5.1 Two robust uncertain random programming models
In order to obtain a decision with maximal expected objective value subject to a set of chance constraints, we suggest the following two uncertain random programming models:
and
Definition 8
A vector \(\varvec{x}\) is called a feasible solution to the uncertain random programming model (a) (or (b)) if
Definition 9
(i) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (a) if
for any feasible solution \(\varvec{x}\).
(ii) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (b) if
for any feasible solution \(\varvec{x}\).
5.2 Two radical uncertain random programming models
In order to obtain a decision with minimal expected objective value subject to a set of chance constraints, we suggest the following two uncertain random programming models:
and
Definition 10
A vector \(\varvec{x}\) is called a feasible solution to the uncertain random programming model (c) (or (d)) if
Definition 11
(i) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (c) if
for any feasible solution \(\varvec{x}\).
(ii) A feasible solution \(\varvec{x}^{*}\) is called an optimal solution to the uncertain random programming model (d) if
for any feasible solution \(\varvec{x}\).
5.3 Equivalent conditions of uncertain random programming models
In this subsection, we present equivalent crisp forms of the above four uncertain random programming models in the following two theorems.
Theorem 7
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Assume that \(\{F_{\theta _{i}}(y_{i}), y_{i}\in \mathbb {R}\}_{\theta _{i} \in \Theta _{i}}\) is a family of distributions of \(\eta _{i}\) corresponding to the set of probability measures \(\{P_{\theta _{i}}\}_{\theta _{i} \in \Theta _{i}}\), for \(i=1, \ldots , m\). If \(f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) and \(g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), for \(j = 1, \ldots , p\), then
-
(i)
the uncertain random programming (a)
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \tilde{\mathcal {E}}[f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})]\\ {\text{ subject } \text{ to: }} \\ \displaystyle CH \{ g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\ge 0\} \le \alpha _{j},\; j = 1, \ldots , p \end{array}\right. } \end{aligned}$$(63) is equivalent to the crisp mathematical programming
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \displaystyle \inf _{\theta _{1} \in \Theta _{1}} \int _{-\infty }^{\infty } \cdots \Big (\inf _{\theta _{m} \in \Theta _{m}} \int _{-\infty }^{\infty } \int _{0}^{1}f\Big (\varvec{x}, y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\\ \qquad \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m})\Big ) \cdots \textrm{d}F_{\theta _{1}}(y_{1})\\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1- G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \le \alpha _{j}, j = 1, \ldots , p; \end{array}\right. } \end{aligned}$$(64) -
(ii)
the uncertain random programming (c)
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}\tilde{E}[f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle ch \{ g_{j} (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\le 0\} \ge \beta _{j},\ j = 1, \ldots , p \end{array}\right. } \end{aligned}$$(65) is equivalent to the crisp mathematical programming
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}}\displaystyle \sup _{\theta _{1} \in \Theta _{1}} \int _{-\infty }^{\infty } \cdots \Big (\sup _{\theta _{m} \in \Theta _{m}} \int _{-\infty }^{\infty } \int _{0}^{1}f\Big (\varvec{x}, y_{1}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ), \ldots , \Upsilon _{i_{k}}^{-1}(\alpha ),\\ \qquad \Upsilon _{i_{k+1}}^{-1}(1-\alpha ), \ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ) \textrm{d}\alpha \textrm{d}F_{\theta _{m}}(y_{m})\Big ) \cdots \textrm{d}F_{\theta _{1}}(y_{1})\\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} v\left\{ \omega \in \Omega | G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \ge \beta _{j}, j = 1, \ldots , p, \end{array}\right. } \end{aligned}$$(66) where for each \(j\in \{1, \ldots , p\}\), \(G_j (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(g_j(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function
$$\begin{aligned}&G_j^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = g_j \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ). \end{aligned}$$
Proof
It follows from Theorem 1 and Theorem 3 immediately. \(\square \)
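To make the crisp form (64) concrete, the sketch below evaluates its objective in the simplest case \(m=n=1\): \(\eta \) takes finitely many values under a finite family of measures \(\{P_{\theta }\}\), \(\tau \sim L(a,b)\) has inverse uncertainty distribution \(\Upsilon ^{-1}(\alpha ) = a+(b-a)\alpha \), and the objective \(f(x,y,\tau )=xy+\tau \) is strictly increasing in \(\tau \). All numbers are hypothetical.

```python
def inv_linear(a, b, alpha):
    """Inverse uncertainty distribution of the linear uncertain variable L(a, b)."""
    return a + (b - a) * alpha

def crisp_objective(x, measures, support, a, b, n_alpha=1000):
    """inf_theta sum_y P_theta(y) * int_0^1 f(x, y, Upsilon^{-1}(alpha)) d alpha
    for f(x, y, tau) = x*y + tau, with the alpha-integral by the midpoint rule."""
    def inner(p):
        val = 0.0
        for y, py in zip(support, p):
            s = sum(x * y + inv_linear(a, b, (k + 0.5) / n_alpha)
                    for k in range(n_alpha)) / n_alpha
            val += py * s
        return val
    return min(inner(p) for p in measures)   # inf over the finite family

support = [1.0, 2.0]                 # possible values of eta
measures = [(0.5, 0.5), (0.8, 0.2)]  # the family {P_theta} (hypothetical)
print(round(crisp_objective(3.0, measures, support, 0.0, 2.0), 3))   # prints 4.6
```

Under \(P_{\theta _2}=(0.8,0.2)\) the inner value is \(0.8\cdot 4 + 0.2\cdot 7 = 4.6\), which is smaller than the value 5.5 under \(P_{\theta _1}\), so the infimum picks it out.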
Theorem 8
Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be random variables under \(\mathbb {E}\), and \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. If \(f(\varvec{x}, \eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) and \(g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) are strictly increasing with respect to \(\tau _{i_{1}}, \ldots , \tau _{i_{k}}\) and strictly decreasing with respect to \(\tau _{i_{k+1}}, \ldots , \tau _{i_{n}}\), for \(j = 1, \ldots , p\), then
-
(i)
the uncertain random programming (b)
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} E_{ch} [ f (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle CH \{ g_{j}(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\ge 0\} \le \alpha _{j},j = 1, \ldots , p \end{array}\right. } \end{aligned}$$(67) is equivalent to the crisp mathematical programming
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \max _{\varvec{x}} \left\{ \int _{0}^{\infty } \int _{0}^{1} v\left\{ \omega \in \Omega | 1-F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \right. \\ \qquad \left. \ \textrm{d}r \textrm{d}z -\int _{-\infty }^{0} \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r\textrm{d}z\right\} \\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1- G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \le \alpha _{j}, j = 1, \ldots , p; \end{array}\right. } \end{aligned}$$(68) -
(ii)
the uncertain random programming (d)
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}} E_{CH} [f(\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})] \\ {\text{ subject } \text{ to: }} \\ \displaystyle ch \{ g_{j} (\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\le 0\} \ge \beta _{j},\ j = 1, \ldots , p \end{array}\right. } \end{aligned}$$(69) is equivalent to the crisp mathematical programming
$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle \min _{\varvec{x}} \left\{ \int _{0}^{\infty } \int _{0}^{1} \mathbb {V}\left\{ \omega \in \Omega | 1-F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \right. \\ \left. \ \textrm{d}r \textrm{d}z -\int _{-\infty }^{0} \int _{0}^{1} v\left\{ \omega \in \Omega | F (z;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r\textrm{d}z\right\} \\ {\text{ subject } \text{ to: }} \\ \displaystyle \int _{0}^{1} v\left\{ \omega \in \Omega | G_j (0;\varvec{x},\eta _{1}, \eta _{2}, \ldots , \eta _{m})\ge r\right\} \textrm{d}r \ge \beta _{j}, j = 1, \ldots , p, \end{array}\right. } \end{aligned}$$(70) where \(F (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(f(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function
$$\begin{aligned}&F^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = f \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\quad \Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ), \end{aligned}$$and for each \(j\in \{1, \ldots , p\}\), \(G_j (z;\varvec{x}, y_{1}, y_{2}, \ldots , y_{m})\) is the uncertainty distribution of uncertain variable \(g_j(\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \tau _{1}, \tau _{2}, \ldots , \tau _{n})\) for any given real numbers \(y_{1}, y_{2}, \ldots , y_{m},\) and is determined by its inverse function
$$\begin{aligned}&G_j^{-1}(\alpha ,\varvec{x}, y_{1}, y_{2}, \ldots , y_{m}) = g_j \Big (\varvec{x},y_{1}, y_{2}, \ldots , y_{m}, \Upsilon _{i_{1}}^{-1}(\alpha ),\Big .\\&\quad \Big . \ldots ,\Upsilon _{i_{k}}^{-1}(\alpha ), \Upsilon _{i_{k+1}}^{-1}(1-\alpha ),\ldots , \Upsilon _{i_{n}}^{-1}(1-\alpha )\Big ). \end{aligned}$$
Proof
It follows from (32), (33), Theorem 1 and Axiom 2 of Definition 1 immediately. \(\square \)
6 Some applications of uncertain random programming models
In this section, two applications of the uncertain random programming models are given: stock investment in an incomplete financial market and system reliability design.
6.1 Stock investment in incomplete market
The optimal stock investment problem has long been a central issue in economics and finance, and stock investment in an incomplete financial market involves rich uncertainties that complicate the choice of optimal strategies.

In finance, it is well known that if the market is complete, then there exists a unique risk-neutral probability measure P. If the market is incomplete, however, this risk-neutral probability measure is no longer unique; instead, there is a family of probability measures \(\{P_{\theta }\}_{\theta \in \Theta }\). Incompleteness means that more than one probability measure is at work in the market, yet we have no way of knowing which one actually applies. In this case, sub-linear expectation theory can be employed to analyze problems of mathematical finance, such as optimal investment.

In a financial market, when no samples are available to estimate probability measures, we have to invite domain experts to evaluate the belief degree about the unknown state. In this case, uncertain measures can be applied to analyze problems of mathematical finance, such as optimal investment.
Suppose that there exist two types of stocks in incomplete financial market, and the total number of stocks is \( m+ n\). For each \(i=1,2,\ldots ,m\), the initial price of the ith stock is \(Y_{0}^{i}\), and the process of price change of the ith stock can be described by the following uncertain stock model (see Sect. 16.1 in Liu 2015):
where \(\mu _{i}\) is the drift coefficient, \(v_{i}\) is the diffusion coefficient, both of which are constants, and \(C_{t}^{i}\) is a Liu process (see Definition 14.1 in Liu 2015). The solution of (71) is \(Y_{t}^{i} = Y_{0}^{i} \exp (\mu _{i} t + v_{i} C_{t}^{i})\). Definition 14.3 in Liu (2015) shows that the expected value of \(Y_{t}^{i}\) is
In addition, for each \(j=m+1,m+2,\ldots ,m+n\), the initial price of the jth stock is \(N_{0}^{j}\), and the process of price change of the jth stock can be described by the following stochastic differential equation driven by a one-dimensional G-Brownian motion \(\{B_{t}^{j}\}\) (see Definition 3.1.2 in Peng 2019):
where \(e_{j},\sigma _{j}, \alpha _{j}\) are constants, and \(\{\langle B \rangle _{t}^{j}\}\) is a process of quadratic variation (see Sect. 3.4 in Peng 2019) with respect to \(\{B_{t}^{j}\}\). By using G-Itô’s formula (see Theorem 3.6.5 in Peng 2019), it can be calculated that
and when \(\alpha _{j}=\frac{1}{2} \sigma _{j}^{2}\), it is known that
from Proposition 3.1.6 in Peng (2019).
The financial product is purchased at the moment 0 and sold at the moment T. Let \(x_{k}\) represent the number of shares of the kth stock (\(k=1,2,\ldots ,m+n\)) purchased, \(Y_{T}^{i}\) represent the stock price of the ith stock (\(i=1,2,\ldots ,m\)) at the moment T, and \(N_{T}^{j}\) represent the stock price of the jth stock (\(j=m+1,m+2,\ldots ,m+n\)) at the moment T. Denote
and
Let \({\varvec{T}} (\varvec{x}, \varvec{\xi }) \) represent the total price at the moment T of financial product purchased and \({\varvec{T}}_{k} (\varvec{x}, \varvec{\xi }) \) represent the price at the moment T of \(x_k\) shares of the kth stock purchased, then
and
In addition, the total cost of financial product purchased is
If the initial capital is \({\varvec{C}}_{0}\), then the capital constraint is
Under the capital constraint, we can maximize the return of financial product purchased at the moment T by building the following uncertain random programming model:
Example 6
Suppose that there are five stocks in an incomplete financial market. Two of those obey the uncertain differential equations (71), the initial stock prices are \(Y_{0}^{1}=24, \ Y_{0}^{2}=18.2\), the drift coefficients are \(\mu _{1}=0.00216, \mu _{2}=0.0006,\) and the diffusion coefficients are \(v_{1}=0.003,\ v_{2}=0.0011\), respectively. The other three stocks obey the stochastic differential equations (73), the initial stock prices are \(N_{0}^{3}=16.3, \ N_{0}^{4}=17.9, N_{0}^{5}=13.6\), the parameters are \(e_{3}=0.00153,\ e_{4}=0.00104,\ e_{5}=0.00263\), \(\sigma _{3}=0.0023,\ \sigma _{4}=0.0019,\ \sigma _{5}=0.005\), \(\alpha _{3}=7.22\times 10^{-6},\ \alpha _{4}=8\times 10^{-6},\ \alpha _{5}=9.245\times 10^{-6}\), and \(\underline{\sigma }^{2}_{3} = 1.0304,\ \underline{\sigma }^{2}_{4} =1.0201,\ \underline{\sigma }^{2}_{5} =1.0501\), respectively. The risk-free interest rate is \(r=5.4 \times 10^{-5}\), and the maturity time is \(T=30\).
If the initial capital is assumed to be 1000, by using \(\textrm{MATLAB}\), we can calculate that the optimal stock portfolio is
the initial capital consumption is 999.4, and the expected income is 65.7.
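The search behind such a result can be organized as an unbounded knapsack over integer share counts. The sketch below follows that structure with hypothetical expected terminal prices; the actual values in Example 6 come from models (71) and (73) and the robust expectation, which we do not reproduce here. Prices are scaled to integer tenths of a unit so that dynamic programming over the budget applies.

```python
# Budget-constrained share selection as an unbounded knapsack (a sketch).
# exp_price[k] is a hypothetical stand-in for the robust expected terminal
# price of one share of stock k; price0[k] is its initial price in tenths.
price0    = [240, 182, 163, 179, 136]        # initial prices (tenths of a unit)
exp_price = [26.1, 18.9, 17.2, 18.4, 15.1]   # hypothetical expected prices
C0 = 10000                                   # initial capital 1000.0, in tenths

# best[c]: maximal expected terminal value with budget c; take[c]: last buy
best = [0.0] * (C0 + 1)
take = [-1] * (C0 + 1)                       # -1 means "carry over from c-1"
for c in range(1, C0 + 1):
    best[c] = best[c - 1]
    for k, (w, v) in enumerate(zip(price0, exp_price)):
        if w <= c and best[c - w] + v > best[c]:
            best[c], take[c] = best[c - w] + v, k

# Recover the optimal share counts x_k by walking the take pointers back
x, c = [0] * len(price0), C0
while c > 0:
    k = take[c]
    if k == -1:
        c -= 1
    else:
        x[k] += 1
        c -= price0[k]
print(x, round(best[C0], 1))   # prints [2, 0, 0, 0, 70] 1109.2
```

With these illustrative numbers the search concentrates the budget on the stocks with the best expected-price-to-cost ratios while spending the budget exactly.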
6.2 System reliability design
Providing redundancy for components in a system is an effective method to improve system reliability. The purpose of system reliability design is to determine the optimal numbers of redundant elements so as to balance system performance and total cost. Suppose a series system consists of n components, and each component has only one type of element. The lifetimes of elements are uncertain random variables. We also assume that redundant elements of all components are in a standby state; that is, a redundant element begins to work only when the active element fails. This approach is usually applied in cases where replacement can be accomplished immediately. Therefore, the lifetime of a component is the sum of the lifetimes of all elements in the component. Let \(x_{i}\) be the number of elements in the ith component, and \(\xi _{ij}\) be the lifetime of the jth element in the ith component, where \(j=1,2,\ldots ,x_{i}\), \(i=1,2, \ldots ,n\). Denote
and
Let \({\varvec{T}} (\varvec{x}, \varvec{\xi }) \) represent the system lifetime, and \({\varvec{T}}_{i} (\varvec{x}, \varvec{\xi }) \) represent the lifetime of the ith component, \(i=1,2,\ldots ,n\), respectively. Then
and
In addition, if we assume that the cost of each element in the ith component is \(c_{i}\), \(i=1,2,\ldots ,n,\) respectively, then the total cost is
Suppose that the total capital available is \({\varvec{C}}_{0}\), then the cost constraint is
When the cost constraint is satisfied, the uncertain random redundancy model can be constructed as follows to maximize the expected system lifetime:
Since \(\varvec{T(x, \xi )}\) is strictly increasing with respect to \(\varvec{\xi }\), the uncertain random redundancy model may be converted to a crisp mathematical model analogous to (68).
Example 7
Suppose a series system consists of 4 components, and each of those contains only one type of element. The lifetimes of the 4 types of elements are assumed to be
where \(N \left( \mu _i,[\underline{\sigma }_i^2,\overline{\sigma }_i^2]\right) \) represents a G-normal distribution with an expected value of \(\mu _i\), which can be generated by \(\mu _{i}+\eta _{i}\) satisfying \(\eta _{i} \sim N \left( 0,[\underline{\sigma }_i^2,\overline{\sigma }_i^2]\right) \), and \(L \left( a,b\right) \) represents a linear uncertain variable whose uncertainty distribution is
The costs of the 4 types of elements are assumed to be 8, 10, 12 and 11, respectively, and the total capital is 100. Then, by using \(\textrm{MATLAB}\), the optimal combination of elements for this system is
the cost of consumption is 94, and the expected system lifetime is 13.27.
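The enumeration behind this kind of result can be sketched as follows, simplified to purely uncertain element lifetimes \(\tau _{ij} \sim L(a_{i}, b_{i})\); the mixed uncertain random setting of Example 7 adds an outer integration over the G-normal part, which we omit here. Since the system lifetime \(\min _{i} \sum _{j} \tau _{ij}\) is strictly increasing in every lifetime, its expected value reduces to \(\int _{0}^{1} \min _{i} x_{i}\, \Upsilon _{i}^{-1}(\alpha ) \,\textrm{d}\alpha \) by the operational law. All numbers are hypothetical.

```python
from itertools import product

lifetimes = [(4.0, 6.0), (3.5, 7.0), (5.0, 5.5), (4.5, 6.5)]  # (a_i, b_i)
costs = [8, 10, 12, 11]   # cost per element of each type (hypothetical)
C0 = 100                  # total capital

def expected_system_lifetime(x, n_alpha=200):
    """int_0^1 min_i x_i * (a_i + (b_i - a_i) * alpha) d alpha, midpoint rule."""
    total = 0.0
    for k in range(n_alpha):
        alpha = (k + 0.5) / n_alpha
        total += min(xi * (a + (b - a) * alpha)
                     for xi, (a, b) in zip(x, lifetimes))
    return total / n_alpha

# Enumerate all allocations with at least one element per component
best_x, best_life = None, float('-inf')
for x in product(*(range(1, C0 // c + 1) for c in costs)):
    if sum(xi * c for xi, c in zip(x, costs)) <= C0:
        life = expected_system_lifetime(x)
        if life > best_life:
            best_x, best_life = x, life
print(best_x, round(best_life, 2))   # prints (3, 3, 2, 2) 10.33
```

The optimum balances the components: budget goes to whichever component currently limits the minimum, which is the typical shape of series-system redundancy allocation.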
7 Conclusion
In the real world, there exists a class of complex systems in which stochasticity characterized by non-additive probabilities coexists with human uncertainty. In this case, U-S chance theory can be employed to analyze problems in such complex systems. In this paper, we investigate uncertain random programming models under U-S chance theory. The operational law for uncertain random variables is proved. Based on sub-linear expectations and Choquet integrals, four types of expectations of uncertain random variables under U-S chance spaces are defined, and their relations and some properties are presented. Building on these four types of expectations, four uncertain random programming models are provided, together with their equivalent crisp forms. Furthermore, these models are successfully applied to optimal investment in an incomplete financial market and to system reliability design.
Our future research plan is as follows. In this paper, uncertain random single-objective programming models under U-S chance theory are studied. In practical applications, however, we may want more than one objective function. Therefore, in forthcoming work, we will investigate uncertain random multi-objective programming models under U-S chance theory and present their compromise models and crisp equivalent models. These multi-objective models will then be applied to portfolio selection in an incomplete financial market; we have already made some headway in this direction.
Data availability
The present research does not involve the generation of any data.
References
Ammar EE (2008) On solutions of fuzzy random multiobjective quadratic programming with applications in portfolio problem. Inf Sci 178(2):468–484. https://doi.org/10.1016/j.ins.2007.03.029
Bellman RE, Zadeh LA (1970) Decision-making in a fuzzy environment. Manage Sci 17(4):141–164. https://doi.org/10.1287/mnsc.17.4.B141
Chang CT (2007) Binary behavior of fuzzy programming with piecewise linear membership functions. IEEE Trans Fuzzy Syst 15(4):710–717. https://doi.org/10.1109/TFUZZ.2006.889917
Charnes A, Cooper WW (1959) Chance-constrained programming. Manage Sci 6(1):73–79. https://doi.org/10.1287/mnsc.6.1.73
Chen Z (2016) Strong laws of large numbers for sub-linear expectations. Sci China Math 59:945–954. https://doi.org/10.1007/s11425-015-5095-0
Choquet G (1954) Theory of capacities. Ann Inst Fourier 5:131–295. https://doi.org/10.5802/aif.53
Dalman H, Bayram M (2018) Interactive fuzzy goal programming based on Taylor series to solve multiobjective nonlinear programming problems with interval type-2 fuzzy numbers. IEEE Trans Fuzzy Syst 26(4):2434–2449. https://doi.org/10.1109/TFUZZ.2017.2774191
Dantzig GB (1955) Linear programming under uncertainty. Manage Sci 1(3–4):197–206. https://doi.org/10.1287/mnsc.1.3-4.197
Dyer M, Stougie L (2006) Computational complexity of stochastic programming problems. Math Program 106:423–432. https://doi.org/10.1007/s10107-005-0597-0
Fu X, Hu F, Meng X, Tian Y, Yang D (2022) Laws of large numbers for uncertain random variables in the framework of U-S chance theory. (manuscript submitted for publication)
Katagiri H, Sakawa M, Kato K, Nishizaki I (2004) A fuzzy random multiobjective 0–1 programming based on the expectation optimization model using possibility and necessity measures. Math Comput Model 40(3):411–421. https://doi.org/10.1016/j.mcm.2003.08.007
Ke H, Liu H, Tian G (2015) An uncertain random programming model for project scheduling problem. Int J Intell Syst 30(1):66–79. https://doi.org/10.1002/int.21682
Kwakernaak H (1978) Fuzzy random variables-I: definitions and theorems. Inf Sci 15(1):1–29. https://doi.org/10.1016/0020-0255(78)90019-1
Kwakernaak H (1979) Fuzzy random variables-II: algorithms and examples for the discrete case. Inf Sci 17(3):253–278. https://doi.org/10.1016/0020-0255(79)90020-3
Levary RR (1984) An experimental sequential solution procedure to stochastic linear programming problems with 0–1 variables. Int J Syst Sci 15(10):1073–1085. https://doi.org/10.1080/00207728408926625
Li D, Liu J (2015) A parameterized nonlinear programming approach to solve matrix games with payoffs of I-fuzzy numbers. IEEE Trans Fuzzy Syst 23(4):885–896. https://doi.org/10.1109/TFUZZ.2014.2333065
Li J, Xu J, Gen M (2006) A class of multiobjective linear programming model with fuzzy random coefficients. Math Comput Model 44(11):1097–1113. https://doi.org/10.1016/j.mcm.2006.03.013
Liu B (2001) Fuzzy random chance-constrained programming. IEEE Trans Fuzzy Syst 9(5):713–720. https://doi.org/10.1109/91.963757
Liu B (2007) Uncertainty theory, 2nd edn. Springer-Verlag, Berlin
Liu B (2009) Theory and practice of uncertain programming. Springer-Verlag, Berlin
Liu B (2011) Uncertainty theory: a branch of mathematics for modeling human uncertainty. Springer-Verlag, Berlin
Liu Y (2013) Uncertain random variables: a mixture of uncertainty and randomness. Soft Comput 17:625–634. https://doi.org/10.1007/s00500-012-0935-0
Liu Y (2013) Uncertain random programming with applications. Fuzzy Optim Decis Mak 12:153–169. https://doi.org/10.1007/s10700-012-9149-2
Liu B (2015) Uncertainty theory, 4th edn. Springer-Verlag, Berlin
Liu B, Chen X (2015) Uncertain multiobjective programming and uncertain goal programming. J Uncertain Anal Appl 3:1–8. https://doi.org/10.1186/s40467-015-0036-6
Liu B, Liu Y (2002) Expected value of fuzzy variable and fuzzy expected value models. IEEE Trans Fuzzy Syst 10(4):445–450. https://doi.org/10.1109/TFUZZ.2002.800692
Liu Y, Liu B (2005) Fuzzy random programming with equilibrium chance constraints. Inf Sci 170(2):363–395. https://doi.org/10.1016/j.ins.2004.03.010
Liu B, Yao K (2015) Uncertain multilevel programming: algorithm and applications. Comput Ind Eng 89:235–240. https://doi.org/10.1016/j.cie.2014.09.029
Nemirovski A, Juditsky A, Lan G, Shapiro A (2009) Robust stochastic approximation approach to stochastic programming. SIAM J Optim 19(4):1574–1609. https://doi.org/10.1137/070704277
Peng S (2007) G-expectation, G-Brownian motion and related stochastic calculus of Itô type. Stochastic analysis and applications. Springer-Verlag, Berlin, Heidelberg, pp 541–567
Peng S (2017) Theory, methods and meaning of nonlinear expectation theory (in Chinese). Sci Sin Math 47:1223–1254
Peng S (2019) Nonlinear expectations and stochastic calculus under uncertainty: with robust CLT and G-Brownian motion. Springer-Verlag, Berlin
Peng S, Zhou Q (2020) A hypothesis-testing perspective on the G-normal distribution theory. Stat Prob Lett 156:108623. https://doi.org/10.1016/j.spl.2019.108623
Ranjbar M, Effati S (2020) Symmetric and right-hand-side hesitant fuzzy linear programming. IEEE Trans Fuzzy Syst 28(2):215–227. https://doi.org/10.1109/TFUZZ.2019.2902109
Sakawa M, Katagiri H, Matsui T (2012) Stackelberg solutions for fuzzy random two-level linear programming through probability maximization with possibility. Fuzzy Sets Syst 188(1):45–57. https://doi.org/10.1016/j.fss.2011.07.006
Schultz R (2003) Stochastic programming with integer variables. Math Program 97:285–309. https://doi.org/10.1007/s10107-003-0445-z
Wang G, Zhong Q (1993) Linear programming with fuzzy random variable coefficients. Fuzzy Sets Syst 57(3):295–311. https://doi.org/10.1016/0165-0114(93)90025-D
Zhou J, Yang F, Wang K (2014) Multi-objective optimization in uncertain random environments. Fuzzy Optim Decis Mak 13:397–413. https://doi.org/10.1007/s10700-014-9183-3
Zimmermann HJ (1978) Fuzzy programming and linear programming with several objective functions. Fuzzy Sets Syst 1(1):45–55. https://doi.org/10.1016/0165-0114(78)90031-3
Acknowledgements
The authors would like to thank the Associate Editor and the anonymous referees for their constructive suggestions and valuable comments that greatly improved this paper. This work was supported in part by the National Natural Science Foundation of China under Grant 11801307, and the Natural Science Foundation of Shandong Province of China under Grant ZR2021MA009.
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Appendix A: Sub-linear expectation theory
In the real world, there are a number of problems in which human uncertainties coexist with stochasticities characterized by non-additive probabilities. Such stochasticity is usually constrained by a family of probabilities \(\{P_{\theta }\}_{\theta \in \Theta }\), but we do not know which one actually works. In this case, we choose to take \(\sup _{\theta \in \Theta } E_{P_{\theta }}\) and \(\inf _{\theta \in \Theta } E_{P_{\theta }}\) to analyze these problems, where \(E_{P_{\theta }}\) is the expectation under probability measure \(P_{\theta }\). Let \((\Omega , \mathcal {F})\) be a given measurable space, \(\mathbb {M}\) be the set of all probability measures on \(\Omega \), and \(\mathcal {H}\) be a set of random variables such that if \(X\in \mathcal {H}\), then \( \sup _{\theta \in \Theta } E_{P_{\theta }}[|X|] <+\infty \). For any non-empty subset \(\{P_{\theta }\}_{\theta \in \Theta } \subseteq \mathbb {M}\), \(A \in \mathcal {F}\) and \(X \in \mathcal {H}\), define an upper probability \(\mathbb {V}(A):=\sup _{\theta \in \Theta } P_{\theta }(A),\) a lower probability \(v(A):=\inf _{\theta \in \Theta } P_{\theta }(A),\) a maximum expectation \(\mathbb {E}[X]:= \sup _{\theta \in \Theta } E_{P_{\theta }}[X]\) and a minimum expectation \(\mathcal {E}[X]:= \inf _{\theta \in \Theta } E_{P_{\theta }}[X]\). Obviously, \(\mathcal {E}[X] = - \mathbb {E}[-X]\).
Remark 3
The upper probability \(\mathbb {V}\) and the lower probability v are two special non-additive probabilities. Moreover, \(\mathbb {V}\) is sub-additive, and \(\mathbb {V}(A) + v(A^{c})=1\) for any \(A \in \mathcal {F}\). In fact, given a maximum expectation \(\mathbb {E}\), \(\mathbb {V}\) and v can be generated by \(\mathbb {V}(A) = \mathbb {E}[I_A]\), \(v(A) = \mathcal {E}[I_A]= - \mathbb {E}[-I_A]\).
It is easy to show that the maximum expectation \(\mathbb {E}[X]\) is actually a sub-linear expectation on \(\mathcal {H}\). We use the framework and notations of sub-linear expectation introduced by Peng (2019) and Chen (2016). In the following, some definitions and properties of sub-linear expectation theory used in this paper are reviewed.
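These objects are easy to compute when the family of measures is finite. The toy example below (a coin whose head-probability is only known to lie in a finite set; the numbers are hypothetical) illustrates \(\mathbb {V}\), v, \(\mathbb {E}\) and \(\mathcal {E}\), together with the conjugacy \(\mathcal {E}[X] = -\mathbb {E}[-X]\) and the sub-additivity of \(\mathbb {E}\).

```python
thetas = [0.3, 0.45, 0.6]    # possible head-probabilities (hypothetical family)

def E_P(p, X):
    """Linear expectation of X = (value on head, value on tail) under P_theta."""
    return p * X[0] + (1 - p) * X[1]

def E_upper(X):              # maximum expectation  E[X] = sup_theta E_P_theta[X]
    return max(E_P(p, X) for p in thetas)

def E_lower(X):              # minimum expectation  curly-E[X] = inf_theta E_P_theta[X]
    return min(E_P(p, X) for p in thetas)

head = (1.0, 0.0)            # indicator of the event "head"
print(E_upper(head), E_lower(head))   # V(head) = 0.6, v(head) = 0.3

X, Y = (2.0, -1.0), (-1.0, 3.0)
# conjugacy: curly-E[X] = -E[-X]
assert abs(E_lower(X) + E_upper((-X[0], -X[1]))) < 1e-12
# sub-additivity: E[X + Y] <= E[X] + E[Y]
assert E_upper((X[0] + Y[0], X[1] + Y[1])) <= E_upper(X) + E_upper(Y) + 1e-12
```

Note that \(\mathbb {V}\) and v are recovered as the upper and lower expectations of indicator functions, exactly as in Remark 3.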
Definition 12
(Peng 2019) A sub-linear expectation \(\mathbb {E}\) on \(\mathcal {H}\) is a function \(\mathbb {E}: \mathcal {H} \mapsto \mathbb {R}\) satisfying the following properties: for all \(X,Y \in \mathcal {H}\), we have
-
(i)
Monotonicity: If \(X \ge Y\), then \(\mathbb {E}[X] \ge \mathbb {E}[Y]\);
-
(ii)
Constant preserving: \(\mathbb {E}[c]= c\), \(c \in \mathbb {R}\);
-
(iii)
Sub-additivity: \(\mathbb {E}[X+Y] \le \mathbb {E}[X] + \mathbb {E}[Y]\);
-
(iv)
Positive homogeneity: \(\mathbb {E}[\lambda X] = \lambda \mathbb {E}[X]\), \(\lambda \ge 0\).
The triple \((\Omega , \mathcal {H},\mathbb {E})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\mathbb {E}\), let us denote the conjugate expectation \(\mathcal {E}\) of \(\mathbb {E}\) by
From Definition 12, it is easily shown that \(\mathcal {E}[X] \le \mathbb {E}[X],\) \(\mathbb {E}[X+c] = \mathbb {E}[X] + c,\) and \(\mathbb {E}[X-Y] \ge \mathbb {E}[X] - \mathbb {E}[Y]\) for all \(X,Y \in \mathcal {H}\).
Definition 13
(Peng 2019) (i) (Independence) Suppose that \(Y_1,Y_2,\ldots ,Y_n\) is a sequence of random variables. Random variable \(Y_n\) is said to be independent to \(X:= (Y_1,Y_2,\ldots ,Y_{n-1})\) under \(\mathbb {E}\), if for each Borel measurable function \(\varphi \) on \(\mathbb {R}^n\) with \(\varphi (X,Y_n) \in \mathcal {H}\) and \(\varphi (x,Y_n) \in \mathcal {H}\) for each \(x \in \mathbb {R}^{n-1}\), we have
where \(\bar{\varphi }(x):= \mathbb {E}[\varphi (x,Y_n)]\) and \(\bar{\varphi }(X) \in \mathcal {H}\).
(ii) (Identical distribution) Random variables X and Y are said to be identically distributed, denoted by \( X \overset{{\text {d}}}{=} Y\), if for each Borel measurable function \(\varphi \) on \(\mathbb {R}\) such that \(\varphi (X),\varphi (Y) \in \mathcal {H}\),
(iii) (IID random variables) A sequence of random variables \(\{ X_i\}_{i=1}^{\infty }\) is said to be IID, if \(X_i \overset{{\text {d}}}{=} X_1\) and \(X_{i+1}\) is independent to \(Y:= (X_1,\ldots ,X_i)\) for each \(i\in \mathbb {N}\).
Definition 14
(Maximal distribution, Peng 2019) A random variable \(\eta \) on a sub-linear expectation space \((\Omega , \mathcal {H}, \mathbb {E})\) is called maximally distributed if \(\displaystyle \mathbb {E}[\varphi (\eta )] = \sup \nolimits _{\underline{\mu } \le y \le \overline{\mu } }\varphi (y)\) for each Borel measurable function \(\varphi \) on \(\mathbb {R}\), where \(\overline{\mu }=\mathbb {E}[\eta ]\) and \(\underline{\mu }=\mathcal {E}[\eta ]\).
Definition 15
(G-normal distribution, Peng 2019) A random variable \(\eta \) on a sub-linear expectation space \((\Omega , \mathcal {H}, \mathbb {E})\) is called G-normally distributed if
where \(\bar{\eta }\) is an independent copy of \(\eta \). Denote it as \(\eta \sim N(0,[\underline{\sigma }^2,\overline{\sigma }^2])\), where \(\mathbb {E}[\eta ] = \mathcal {E}[\eta ] = 0\), \(\underline{\sigma }^2 = \mathcal {E}[\eta ^2]\) and \(\overline{\sigma }^2 = \mathbb {E}[\eta ^2]\).
Remark 4
(Peng (2019)) Let \(\eta \) and \(\bar{\eta }\) be two random variables on a sub-linear expectation space \((\Omega , \mathcal {H}, \mathbb {E})\). \(\bar{\eta }\) is called an independent copy of \(\eta \) if \(\bar{\eta }\overset{{\text {d}}}{=}\eta \) and \(\bar{\eta }\) is independent to \(\eta \).
Definition 16
A random variable \(\eta \) under \(\mathbb {E}\) is said to be Bernoulli if it satisfies:
(i) The possible values of \(\eta \) are \(a_{1},a_{2}, \ldots ,a_{n}\), \(n \in \mathbb {N}\);
(ii) For any \(k \in \{1,2,\ldots ,n\}\), the possible probability measure of the event \(\eta =a_{k}\) is \(p_{k}\), with \(p_{1}+p_{2}+\cdots +p_{n}=1\), i.e.,
$$\begin{aligned} \eta \sim \begin{pmatrix} a_{1} & a_{2} & \cdots & a_{n} \\ p_{1} & p_{2} & \cdots & p_{n} \end{pmatrix}, \end{aligned}$$
where \(p_{k} \in [\underline{p}_{k}, \overline{p}_{k}]\) for \(k=1,2,\ldots ,n\), satisfying \(\sum _{k=1}^{n} p_{k}=1\), and \(n \in \mathbb {N}\).
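As an illustration (not the paper's algorithm), the upper expectation \(\mathbb {E}[\varphi (\eta )]\) of such a Bernoulli random variable amounts to maximizing \(\sum _k p_k \varphi (a_k)\) over all probability vectors with \(p_k \in [\underline{p}_k, \overline{p}_k]\) and \(\sum _k p_k = 1\). This box-constrained linear program is solved by a greedy allocation; the values and bounds below are assumed for the example:

```python
# Sketch: upper expectation of a Bernoulli random variable under
# interval probabilities:
#   E[phi(eta)] = max sum_k p_k * phi(a_k)
#   s.t. p_k in [p_lo[k], p_hi[k]],  sum_k p_k = 1.
# Start every p_k at its lower bound, then greedily push the leftover
# probability mass toward the outcomes with the largest phi(a_k).

def bernoulli_upper_expectation(values, p_lo, p_hi, phi):
    n = len(values)
    p = list(p_lo)
    mass = 1.0 - sum(p_lo)  # leftover mass to distribute
    assert mass >= -1e-12 and sum(p_hi) >= 1.0 - 1e-12, "infeasible bounds"
    # Visit outcomes sorted by phi(a_k), best first.
    for k in sorted(range(n), key=lambda k: phi(values[k]), reverse=True):
        add = min(p_hi[k] - p_lo[k], mass)
        p[k] += add
        mass -= add
    return sum(p[k] * phi(values[k]) for k in range(n))

# Assumed example: values 0, 1, 2 with interval probabilities.
up = bernoulli_upper_expectation(
    [0, 1, 2], [0.2, 0.3, 0.1], [0.5, 0.6, 0.4], lambda x: x)
print(up)  # mass is pushed onto the largest value first
```

The lower expectation follows by duality: \(\mathcal {E}[\varphi (\eta )] = -\mathbb {E}[-\varphi (\eta )]\), i.e. negate \(\varphi \) and the result.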
Remark 5
A Bernoulli random sequence \(\displaystyle \{{\eta _{i}}\}_{i=1}^{\infty }\) under \(\mathbb {E}\) is an infinite sequence of independent Bernoulli random variables under \(\mathbb {E}\).
Definition 17
A Bernoulli random variable \(\eta \) under \(\mathbb {E}\) is called a Boolean random variable under \(\mathbb {E}\) if it takes the values 1 and 0, and the possible probability measures of the events \(\eta =1\) and \(\eta =0\) are p and \(1-p\), respectively, i.e.,
$$\begin{aligned} \eta \sim \begin{pmatrix} 1 & 0 \\ p & 1-p \end{pmatrix}, \end{aligned}$$
where \( p \in [\underline{p}, \overline{p}].\)
To help readers understand Definition 16, we give the following typical example of a Bernoulli random sequence under \(\mathbb {E}\).
Example 8
Consider countably infinitely many urns, ordered and indexed by the set \(\mathbb {N}\). The ith urn contains \(w_{i}\) white balls, \(y_{i}\) yellow balls and \(b_{i}\) black balls. The exact values of \(w_{i}\), \(y_{i}\) and \(b_{i}\) are unknown; we only know that \(w_{i} + y_{i} + b_{i} = 100+(i-1),\) \(w_{i} \in [10+(i-1), 20+(i-1)],\) and \(y_{i} \in [15+(i-1), 30+(i-1)]\). Suppose we thoroughly mix the balls and then draw one ball from the ith urn. Let \(\eta _{i}\) be a random variable defined by
Then the distribution of \(\eta _{i}\) is
with the possible probability measures
\(p_{i_{1}} \in [\frac{9+i}{99+i}, \frac{19+i}{99+i}]\) and \(p_{i_{2}} \in [\frac{14+i}{99+i},\frac{29+i}{99+i}].\)
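The interval endpoints in Example 8 can be verified exactly (a small sketch, not part of the paper): for urn i the proportion of white balls ranges over \([\frac{10+(i-1)}{100+(i-1)}, \frac{20+(i-1)}{100+(i-1)}]\), which simplifies to \([\frac{9+i}{99+i}, \frac{19+i}{99+i}]\), and similarly for yellow.

```python
from fractions import Fraction

# Sketch: verify the probability intervals of Example 8 with exact
# rational arithmetic.  Urn i holds 100 + (i-1) balls, with
# w_i in [10+(i-1), 20+(i-1)] white and y_i in [15+(i-1), 30+(i-1)]
# yellow balls.

def urn_intervals(i):
    total = 100 + (i - 1)
    white = (Fraction(10 + (i - 1), total), Fraction(20 + (i - 1), total))
    yellow = (Fraction(15 + (i - 1), total), Fraction(30 + (i - 1), total))
    return white, yellow

for i in (1, 2, 10):
    white, yellow = urn_intervals(i)
    # Compare against the simplified endpoints used in the example.
    assert white == (Fraction(9 + i, 99 + i), Fraction(19 + i, 99 + i))
    assert yellow == (Fraction(14 + i, 99 + i), Fraction(29 + i, 99 + i))
    print(i, white, yellow)
```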
Hu, F., Qu, Z. & Yang, D. Uncertain random programming models in the framework of U-S chance theory and their applications. TOP (2024). https://doi.org/10.1007/s11750-024-00682-y