Introduction

Optimization problems are important in a wide variety of real-world fields (Chen 2013; Jeang 2013; Yang et al. 2014; Yang and Zhou 2014). In particular, optimization problems that are solved through several steps have been widely studied, for example by Allahverdi and Aydilek (2013) and Sun et al. (2011). These problems, usually called multi-stage optimization problems, become more complex when decisions involve uncertain parameters. A good practical result was provided by Dantzig (1995), who applied linear programming to aircraft flight planning with stochastic traffic-flow parameters. More examples, such as the farmer problem, the news vendor problem and the material procurement problem, can be found in Birge and Louveaux (1997), Mirabi et al. (2013) and Wang et al. (2012). Based on probability theory, stochastic programming has been well developed in recent years (Kall and Wallace 1994; Yang et al. 2013).

However, a fundamental premise of employing probability theory is that the estimated probability is close enough to the real frequency. When observed data are lacking, we have to invite experts to provide their belief degrees that each event will occur. Since human beings tend to overweight unlikely events (Kahneman and Tversky 1979), the belief degree may have a much larger range than the real frequency. If we insist on treating belief degrees as probabilities, counterintuitive results may follow; see Liu (2012) for examples. To overcome this disadvantage, uncertainty theory was founded by Liu (2007) and refined by Liu (2010) on the basis of the normality, duality, subadditivity and product axioms. Since then, uncertainty theory has been developed continuously, for example by Liu (2013a) and Sheng and Yao (2014). Nowadays it has become a branch of mathematics for modeling human uncertainty and has served as a theoretical foundation in various fields, such as Liu (2009b, 2013b, 2014), Wang et al. (2014, 2015a, b), Chen and Ralescu (2013) and Yang and Gao (2013, 2014).

When some parameters of the underlying model are not known with certainty and are characterized by uncertain variables whose uncertainty distributions are available, the goal is to formulate an optimization problem that explicitly takes all outcomes of the uncertain parameters into account, rather than simply replacing them by their expected values. It is also assumed that some decisions must be taken before the outcomes of the uncertain parameters are revealed, and thus must be based only on knowledge of their distributions; this is referred to as the first stage. In the second stage, the outcomes of all uncertain parameters have been observed, and some recourse (or corrective) actions may be taken. From the discussion above, the two-stage approach we present differs from existing approaches such as Liu (2001), Wang and Qiao (1993) and Arnaout et al. (2014). In this paper, a two-stage uncertain programming (UP) problem is first formulated on the basis of uncertainty theory, and some fundamental properties of the two-stage UP problem, including the convexity of the feasible set and of the objective function, are investigated; these provide a theoretical foundation for applying the two-stage UP problem. Since the two-stage UP problem is usually transformed into a deterministic nonlinear programming problem, meta-heuristic and evolutionary algorithms are well suited to this kind of problem. The artificial bee colony (ABC) algorithm, proposed by Karaboga (2005), is a meta-heuristic bionic algorithm based on the intelligent foraging behavior of honey bees. The ABC algorithm has been adopted by researchers in a variety of fields, and its effectiveness and efficiency relative to other optimization algorithms have been experimentally validated (Karaboga and Akay 2009). Therefore, in this paper, the ABC algorithm is applied to solve the two-stage UP problem, which is demonstrated by several numerical examples, and efficient solutions are obtained.

This paper is organized as follows. The next section reviews some basic results of uncertainty theory. In “Two-stage uncertain programming problem” section, the formulation of a new class of two-stage UP problem is presented. The properties of the model are then investigated in “Fundamental properties” section. “Solution method” section describes the ABC algorithm for solving the two-stage UP problem, and several numerical examples are given to demonstrate the solution method. Finally, the main results of this paper are summarized in “Conclusions” section.

Preliminaries

Let \(\mathrm {\Gamma }\) be a nonempty set, and \(\fancyscript{L}\) a \(\sigma \)-algebra over \(\mathrm {\Gamma }\). Each element \(\mathrm {\Lambda }\) in \(\fancyscript{L}\) is called an event. A set function \(\fancyscript{M}\) from \(\fancyscript{L}\) to \([ 0,1]\) is called an uncertain measure if it satisfies the following axioms (Liu 2007):

  • Axiom 1. (Normality axiom) \(\fancyscript{M}\{\mathrm {\Gamma }\}=1\) for the universal set \(\mathrm {\Gamma }\).

  • Axiom 2. (Duality axiom) \(\fancyscript{M}\{\mathrm {\Lambda }\}+\fancyscript{M}\{\mathrm {\Lambda }^{c}\}=1\) for any event \(\mathrm {\Lambda }\).

  • Axiom 3. (Subadditivity axiom) For every countable sequence of events \(\mathrm {\Lambda }_{1},\mathrm {\Lambda }_{2}, \ldots \), we have

    $$\begin{aligned} \fancyscript{M}\left\{ \mathop {\bigcup }^{\infty }_{i=1}\mathrm {\Lambda }_{i}\right\} \le \mathop {\sum }^{\infty }_{i=1}\fancyscript{M}\left\{ \mathrm {\Lambda }_{i}\right\} . \end{aligned}$$

    The triplet \((\mathrm {\Gamma },\fancyscript{L},\fancyscript{M})\) is called an uncertainty space. Furthermore, Liu (2009a) defined a product uncertain measure by the fourth axiom:

  • Axiom 4. (Product axiom) Let \((\mathrm {\Gamma }_{k},\fancyscript{L}_{k},\fancyscript{M}_{k})\) be uncertainty spaces for \(k=1,2,\ldots \). The product uncertain measure \(\fancyscript{M}\) is an uncertain measure satisfying

    $$\begin{aligned} \fancyscript{M}\left\{ \prod \limits _{k=1}^{\infty }\mathrm {\Lambda }_{k} \right\} = {\displaystyle \bigwedge _{k=1}^{\infty }}\fancyscript{M}\{\mathrm {\Lambda }_{k}\} \end{aligned}$$

    where \(\mathrm {\Lambda }_{k}\) are arbitrarily chosen events from \(\fancyscript{L}_{k}\) for \(k=1,2,\ldots \), respectively.

Definition 2.1

(Liu 2007) An uncertain variable is a measurable function \(\xi \) from an uncertainty space \((\mathrm {\mathrm {\Gamma }}, \fancyscript{L}, \fancyscript{M})\) to the set of real numbers, i.e., for any Borel set \(B\) of real numbers, the set \(\{ \xi \in B \}=\{ \gamma \in \mathrm {\Gamma } \mid \xi (\gamma ) \in B \}\) is an event.

Definition 2.2

(Liu 2007) The uncertainty distribution \(\mathrm {\Phi }\) of an uncertain variable \(\xi \) is defined by \({\mathrm {\Phi }}(x)= \fancyscript{M}\{\xi \le x\}\) for any real number \(x\).

Definition 2.3

(Liu 2009a) The uncertain variables \(\xi _{1},\xi _{2},\) \(\ldots ,\xi _{n}\) are said to be independent if \(\fancyscript{M}\{\mathop {\bigcap }^{n}_{i=1}(\xi _{i} \in B_{i})\}\) \(= \mathop {\bigwedge }^{n}_{i=1}\fancyscript{M}\{\xi _{i} \in B_{i}\}\) for any Borel sets \(B_{1},B_{2},\ldots ,B_{n}\) of real numbers.

Definition 2.4

(Liu 2010) An uncertainty distribution \(\mathrm {\Phi }\) is said to be regular if its inverse function \(\mathrm {\Phi }^{-1}(\alpha )\) exists and is unique for each \(\alpha \in (0,1)\).

Definition 2.5

(Liu 2010) Let \(\xi \) be an uncertain variable with regular uncertainty distribution \(\mathrm {\Phi }\). Then the inverse function \(\mathrm {\Phi }^{-1}\) is called the inverse uncertainty distribution of \(\xi \).

Definition 2.6

(Liu 2007) Let \(\xi \) be an uncertain variable. Then the expected value of \(\xi \) is defined by

$$\begin{aligned} \mathrm {E}[\xi ]= \int _{0}^{\infty }\fancyscript{M}\{\xi \ge x\}dx-\int _{-\infty }^{0}\fancyscript{M}\{\xi \le x\}dx \end{aligned}$$
(1)

provided that at least one of the two integrals is finite.

Theorem 2.1

(Liu 2010) Let \(\xi \) and \(\eta \) be independent uncertain variables with finite expected values. Then for any real numbers \(a\) and \(b\), we have

$$\begin{aligned} \mathrm {E}[a\xi +b\eta ]=a\mathrm {E}[\xi ]+b\mathrm {E}[\eta ]. \end{aligned}$$
(2)

Theorem 2.2

(Liu 2009b) Let \(\xi _{1},\xi _{2},\ldots ,\xi _{n}\) be uncertain variables, and \(f\) a real-valued measurable function. Then \(f(\xi _{1},\xi _{2},\ldots ,\xi _{n})\) is an uncertain variable.

Theorem 2.3

(Liu 2010) Let \(\xi \) be an uncertain variable with regular uncertainty distribution \(\mathrm {\Phi }\). If the expected value exists, then

$$\begin{aligned} \mathrm {E}[\xi ]=\int ^{1}_{0}\mathrm {\Phi }^{-1}(\alpha )\,d\alpha . \end{aligned}$$

Theorem 2.4

(Liu 2010) Let \(\xi _{1},\xi _{2},\ldots ,\xi _{n}\) be independent uncertain variables with regular uncertainty distributions \(\mathrm {\Phi }_{1},\mathrm {\Phi }_{2},\ldots ,\mathrm {\Phi }_{n}\), respectively. If the function \(f(x_{1},x_{2},\ldots ,x_{n})\) is strictly increasing with respect to \(x_{1},x_{2},\ldots ,x_{m}\) and strictly decreasing with respect to \(x_{m+1},x_{m+2},\ldots ,x_{n}\), then \(\xi =f(\xi _{1},\xi _{2},\ldots ,\xi _{n})\) is an uncertain variable with inverse uncertainty distribution

$$\begin{aligned} \varPsi ^{-1} (\alpha )&= f\left( \mathrm {\Phi }^{-1}_{1}(\alpha ),\mathrm {\Phi }^{-1}_{2}(\alpha ),\ldots ,\mathrm {\Phi }^{-1}_{m} (\alpha ),\right. \\&\quad \left. \mathrm {\Phi }^{-1}_{m+1}(1-\alpha ),\mathrm {\Phi }^{-1}_{m+2}(1-\alpha ),\ldots ,\mathrm {\Phi }^{-1}_{n}(1-\alpha )\right) \!. \end{aligned}$$
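To make Theorems 2.3 and 2.4 concrete, the following minimal Python sketch recovers an expected value numerically from inverse uncertainty distributions. The linear uncertain variables and the function \(f\) used here are illustrative assumptions, not taken from this paper.

```python
# A numerical sketch of Theorems 2.3 and 2.4: for f strictly increasing in x1
# and strictly decreasing in x2, the inverse uncertainty distribution of
# f(xi1, xi2) is f(Phi1^{-1}(alpha), Phi2^{-1}(1 - alpha)), and the expected
# value is its integral over alpha in (0, 1).
import numpy as np

phi1_inv = lambda a: 1 + 2 * a      # linear uncertain variable on [1, 3], E = 2
phi2_inv = lambda a: 2 + 4 * a      # linear uncertain variable on [2, 6], E = 4
f = lambda x1, x2: x1 - 0.5 * x2    # increasing in x1, decreasing in x2

alphas = np.linspace(1e-6, 1 - 1e-6, 100_001)
psi_inv = f(phi1_inv(alphas), phi2_inv(1 - alphas))   # Theorem 2.4
expected = psi_inv.mean()   # mean over a uniform alpha-grid ~ Theorem 2.3 integral
print(expected)             # ~ 2 - 0.5 * 4 = 0, consistent with Theorem 2.1
```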

Theorem 2.5

(Yang 2013) Let \(f\) and \(g\) be comonotonic functions. Then for any uncertain variable \(\xi \), we have

$$\begin{aligned} \mathrm {E}[f(\xi )+g(\xi )]=\mathrm {E}[f(\xi )]+\mathrm {E}[g(\xi )]. \end{aligned}$$

Two-stage uncertain programming problem

Formulation

Consider the following underlying deterministic problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _x &{} c^T x+ \min \limits _y q^T(\gamma ) y\\ \text{ s.t. }&{} A x=b\\ &{} T(\gamma ) x+W(\gamma ) y=h(\gamma )\\ &{} x\ge 0, y\ge 0. \end{array} \right. \end{aligned}$$
(3)

where \(c\) is a known vector in \(R^{n_1}\), \(b\) is a known vector in \(R^{m_1}\), and \(A\) is a known matrix of size \(m_1\times n_1\). For each \(\gamma \), \(q(\gamma )\in R^{n_2}\) and \(h(\gamma )\in R^{m_2}\), while \(T(\gamma )\) and \(W(\gamma )\) are \(m_2\times n_1\) and \(m_2\times n_2\) matrices, respectively. The decision–observation scheme can be described as follows

$$\begin{aligned} \begin{array}{c} \hbox {decision on } x\\ \hbox {observation of uncertain event } \gamma \\ \hbox {decision on } y. \end{array} \end{aligned}$$

According to this scheme, problem (3) splits into two optimization problems to be solved. Assuming that \(x\) and \(\gamma \) are given, the second-stage problem, or recourse problem, can be formulated as follows

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _y &{} q^T(\gamma ) y\\ \text{ s.t. }&{} T(\gamma ) x+W(\gamma )y=h(\gamma )\\ &{} y\ge 0, \end{array} \right. \end{aligned}$$
(4)

where \(x\) belongs to the feasible set \(S_1=\{x\mid Ax=b,\ x\ge 0\}.\)

From the above analysis, we can see that the second-stage problem is the more difficult one. For each \(\gamma \), the value \(y(\gamma )\) is the solution of a linear program. To stress this fact, we sometimes use the notion of a deterministic equivalent program. For a given realization \(\gamma \) of the uncertain variable \(\xi \), let

$$\begin{aligned} Q(x,\xi (\gamma ))=\min \{q^T(\gamma )y\mid W(\gamma )y=h(\gamma )-T(\gamma ) x,\ y\ge 0\} \end{aligned}$$

be the second-stage value function, where \(\xi \) is an uncertain vector. For convenience, \(Q(x,\xi (\gamma ))\) is denoted by \(Q(x,\gamma )\).

Example 3.1

Suppose that the second-stage problem is defined by

$$\begin{aligned}&\left\{ \begin{array}{l@{\quad }l} \min \limits _y &{} 2y\\ \text{ s.t. }&{} \xi y=x-2\\ &{} y\ge 0 \end{array} \right. \end{aligned}$$

where \(\xi \) is a zigzag uncertain variable \(\fancyscript{Z}(1,2,4)\). Discuss the second-stage value function \(Q(x,\gamma )\) and the feasible set \(S_1\).

Note that zigzag uncertain variable \(\xi \) has the following uncertainty distribution

$$\begin{aligned}&\mathrm {\Phi }(x)=\left\{ \begin{array}{l@{\quad }l} 0, &{} \text{ if } ~~x \le 1 \\ \frac{x-1}{2},&{} \text{ if } ~~1 \le x\le 2\\ \frac{x}{4},&{} \text{ if } ~~2 \le x\le 4\\ 1, &{} \text{ if } ~~ x\ge 4. \end{array} \right. \end{aligned}$$

It is evident that \(\xi (\gamma )\in [1,4]\). If \(x < 2\), then there is no solution \(y\) satisfying the constraint \(\xi y=x-2\). Hence, it is easy to obtain that \(S_1=\{x\mid x\ge 2\}.\)

In addition, for each \(x\in S_1\) and \(\gamma \), we have the optimal solution \( y^*=(x-2)/\xi (\gamma )\), and therefore \(Q(x,\gamma )=(2x-4)/\xi (\gamma )\), which implies that \(Q(x,\gamma )\) is an uncertain variable for each \(x\in S_1\). For instance, \(Q(4,\gamma )\) is the uncertain variable \(4/\xi \), and \(Q(2.25,\gamma )\) is the uncertain variable \(1/(2\xi )\).

We define the expected second-stage value function (or recourse function) as

$$\begin{aligned} {\fancyscript{Q}}_{\mathrm {E}}(x)={\mathrm {E}}_\xi [Q(x,\gamma )] \end{aligned}$$
(5)

where \({\mathrm {E}}_\xi \) is the expected value operator with respect to uncertain vector \(\xi \).

Therefore, the two-stage UP problem can be presented as follows

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _x&{}z(x)=c^Tx+\fancyscript{Q}_\mathrm {E}(x)\\ \text{ s.t. } &{} Ax=b\\ &{} x \ge 0. \end{array} \right. \text{(UP) } \end{aligned}$$
(6)

where \(\fancyscript{Q}_\mathrm {E}(x)=\mathrm {E}_\xi [Q(x,\gamma )]\), and

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} Q(x,\gamma )=\min \limits _y &{} q^T(\gamma ) y\\ ~~~~~~~~~~~~~~~~~~~\text{ s.t. }&{} T(\gamma ) x+W(\gamma )y=h(\gamma )\\ &{} y\ge 0. \end{array} \right. \end{aligned}$$
(7)

Obviously, by the above discussion, problems (6) and (7) are equivalent to the following programming problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _x &{} c^T x+ \mathrm {E}_\xi [\min \limits _y q^T(\xi ) y]\\ \text{ s.t. }&{} A x=b\\ &{} T(\xi ) x+W(\xi ) y=h(\xi )\\ &{} x\ge 0, y\ge 0 \end{array} \right. \text{(UP) }~ \end{aligned}$$
(8)

where \(\xi \) is an uncertain vector defined on the uncertainty space \((\mathrm {\Gamma }, \fancyscript{L}, \fancyscript{M})\).

Example 3.2

Let the second-stage problem be defined as the one in Example 3.1. Calculate the recourse function \(\fancyscript{Q}_\mathrm {E}(x)=\mathrm {E}[Q(x, \xi )]\).

For the uncertainty distribution of the zigzag uncertain variable \(\xi \) given in Example 3.1, since the function \(f(x)=1/x\) is strictly decreasing, the inverse uncertainty distribution \(\mathrm {\Phi }^{-1}(1-\alpha )\) can be calculated as follows

$$\begin{aligned}&\mathrm {\Phi }^{-1}(1-\alpha )=\left\{ \begin{array}{l@{\quad }l} 4-4\alpha ,~~~~&{} \text{ if } ~~0\le \alpha \le \frac{1}{2}\\ 3-2\alpha ,~~~&{} \text{ if } ~~\frac{1}{2} \le \alpha \le 1.\\ \end{array} \right. \end{aligned}$$

By Theorems 2.3 and 2.4, \(\mathrm {E}[1/\xi ]\) can be obtained as follows

$$\begin{aligned} \mathrm {E} \left[ \frac{1}{\xi }\right]&= \int _0^1\frac{1}{\mathrm {\Phi }^{-1}(1-\alpha )} d \alpha =\int _0^\frac{1}{2}\frac{1}{4-4\alpha } d \alpha \\&+\int _\frac{1}{2}^1\frac{1}{3-2\alpha } d\alpha =\frac{3}{4}\ln 2. \end{aligned}$$

In addition, by Example 3.1, we know that \(Q(x,\gamma )=(2x-4)/\xi (\gamma )\), from which \(\fancyscript{Q}_\mathrm {E}(x)\) can be obtained as follows

$$\begin{aligned} \fancyscript{Q}_\mathrm {E}(x)&= \mathrm {E}_\xi \left[ Q(x,\xi )\right] =\mathrm {E}\left[ \frac{2x-4}{\xi }\right] \\&= (2x-4)\ \mathrm {E}\left[ \frac{1}{\xi }\right] =\frac{3(x-2)\ln 2}{2}. \end{aligned}$$
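As a sanity check, the computation above can be reproduced numerically; the following sketch uses only the inverse distribution \(\mathrm {\Phi }^{-1}(1-\alpha )\) derived in this example.

```python
# Numerical check of Example 3.2: E[1/xi] for the zigzag variable Z(1, 2, 4)
# should equal (3/4) ln 2 ~ 0.5199, and Q_E(x) = (2x - 4) E[1/xi].
import numpy as np

def phi_inv_1ma(a):                 # Phi^{-1}(1 - alpha) as derived above
    return np.where(a <= 0.5, 4 - 4 * a, 3 - 2 * a)

alphas = np.linspace(1e-9, 1 - 1e-9, 1_000_001)
e_inv_xi = (1.0 / phi_inv_1ma(alphas)).mean()   # ~ integral over (0, 1)
print(e_inv_xi, 0.75 * np.log(2))               # both ~ 0.5199

q_e = lambda x: (2 * x - 4) * e_inv_xi          # recourse function Q_E(x)
print(q_e(4))                                   # ~ 3 ln 2 ~ 2.0794
```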

Feasible sets

For any given \(x\) and realization \(\gamma \) of the uncertain variable \(\xi \), the value \(Q(x, \gamma )\) of the second-stage program is obtained by

$$\begin{aligned} Q(x,\gamma )=\min \{q^T(\gamma )y| W(\gamma )y=h(\gamma )-T(\gamma ) x,y\ge 0\}. \end{aligned}$$
(9)

If problem (9) is infeasible, then \(Q(x, \gamma )\) is defined to be \(+\infty \). In order to discuss the solutions of problems (6)–(7), it is necessary to introduce additional constraints on \(x\). Let \(S_2\) be the set of all those vectors \(x\) for which problems (6)–(7) have a feasible solution for every possibly realized uncertain event \(\gamma \). It is easy to see that \(S_2\) can be expressed as

$$\begin{aligned} S_2=\{x\mid x\in R^{n_1},\ Q(x,\gamma )<\infty \text{ for every } \gamma \} \end{aligned}$$
(10)

which is defined as the elementary feasibility set.

By the above discussion, we can now restate the deterministic equivalent program as follows

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _x&{}z(x)=c^Tx+\fancyscript{Q}_\mathrm {E}(x)\\ \text{ s.t. } &{}x\in S_1\bigcap S_2. \end{array} \right. \end{aligned}$$
(11)

From a practical point of view, it is not absolutely necessary to have a complete description of the region of finiteness of \(\fancyscript{Q}_\mathrm {E}(x)\). On the other hand, it is desirable to be able to check whether a particular first-stage decision \(x\) leads to a finite second-stage value without having to calculate that value, and the definition of \(S_2\) is not useful in that respect. To illustrate this, we consider an example where the second stage is defined by \(Q(x, \xi )=\min \{y\mid \xi y=2-x,\ y\ge 0\},\) and \(\xi \) is an uncertain variable with the following uncertainty distribution

$$\begin{aligned}&\mathrm {\Phi }(x)=\left\{ \begin{array}{l@{\quad }l} 0, ~&{}~~ \text{ if } ~~x \le 0 \\ 4x^2, &{} ~~ \text{ if } ~~0 \le x\le \frac{1}{2}\\ 1,~&{}~~ \text{ if } ~~\frac{1}{2} \le x. \end{array} \right. \end{aligned}$$

By the definition of the inverse uncertainty distribution, \(\mathrm {\Phi }^{-1}(1-\alpha )\) of the uncertain variable \(\xi \) can be calculated as follows

$$\begin{aligned} \mathrm {\Phi }^{-1}(1-\alpha )=\frac{\sqrt{1-\alpha }}{2},~~~ \text{ if } ~~0\le \alpha \le 1. \end{aligned}$$

Similarly, it follows from Theorems 2.3 and 2.4 that the expected value of the uncertain variable \(1/\xi \) is as follows

$$\begin{aligned} \mathrm {E}\left[ \frac{1}{\xi }\right] =\int _0^1\frac{1}{\mathrm {\Phi }^{-1}(1-\alpha )} d \alpha =\int _0^1 \frac{2}{\sqrt{1-\alpha }} d \alpha =4. \end{aligned}$$

Note that here \(W\) reduces to a \(1\times 1\) matrix and is the only uncertain element.

For all \(\xi \) in \((0,1/2]\), the optimal solution is \(y=(2-x)/\xi \), so it is easy to obtain \(S_2(\xi )=\{x\mid x\le 2\}\) and \( Q(x, \xi )=(2-x)/\xi \) for \(x \le 2\). If \(\xi =0\), there does not exist \(y\) such that \(0\cdot y=2-x\) unless \(x=2\), so we can obtain \(S_2(0)=\{x\mid x=2\}\). Now, for \(x\ne 2\), \(Q(x,0)\) should normally be \(+\infty \). However, since \(\fancyscript{M}\{\gamma \mid \xi (\gamma )=0\}=0\), the convention is to take \(Q(x,0)=0\). Therefore, the following result can be obtained

$$\begin{aligned} S_2=S_2(\xi )\cap S_2(0)= \{x|x\le 2\}\cap \{x|x=2\}=\{x|x=2\}. \end{aligned}$$

When \(\xi \in [0, 1/2]\), the recourse function can be calculated as \(\fancyscript{Q}_\mathrm {E}(x)=\mathrm {E}[(2-x)/{\xi }]=(2-x)\mathrm {E}[1/{\xi }]=8-4x\) for all \(x\le 2.\)

Therefore, the feasibility set should also contain those \(x\) for which infeasibility occurs only with uncertain measure 0, which \(S_2\) fails to do. For this purpose, an alternative definition is considered. Define

$$\begin{aligned} S_2^P=\{x|\fancyscript{M}\{\gamma \mid Q(x,\gamma )< \infty \} =1\} \end{aligned}$$

as the weak feasibility set of the second stage. Evidently, \( S_2\subset S_2^P\) and \( S_2=\bigcap _{\xi \in \Xi } S_2(\xi )\), where \(\Xi \) denotes the set of possible realizations of the uncertain vector \(\xi \).

From the above discussion, it is necessary for us to consider the following optimization problem, which is more reasonable in the real world:

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min \limits _x&{}z(x)=c^Tx+\fancyscript{Q}_\mathrm {E}(x)\\ \text{ s.t. } &{}x\in S_1\bigcap S_2^P. \end{array} \right. \end{aligned}$$
(12)

Fundamental properties

Theorem 4.1

  1. (1)

    For each \(\xi \), the set \(S_2(\xi )\) is a closed convex polyhedron; hence the elementary feasibility set \(S_2\) is closed and convex.

  2. (2)

    If \(\Xi \) is finite, then \(S_2^P\) coincides with \(S_2\).

Proof

  1. (1)

    For each \(\xi \), \(S_2(\xi )\) is defined by a set of linear constraints. By the theory of linear programming, the first assertion follows.

  2. (2)

    A particular \(x\) belongs to \(S_2^P\) only if \(\fancyscript{Q}_\mathrm {E}(x)\) is finite. Since \(\Xi \) is finite, by the expected value of a discrete uncertain variable, \(\fancyscript{Q}_\mathrm {E}(x)\) is a positively weighted sum of finitely many values \(Q(x,\xi )\); hence \(\fancyscript{Q}_\mathrm {E}(x)\) is finite only if each \(Q(x,\xi )\) is finite, which implies that \(x\) belongs to \(S_2(\xi )\) for all \(\xi \), i.e., \(x\in S_2\). Conversely, if \(x\in S_2\), then \(Q(x,\xi )\) is finite for every \(\xi \), so \(\fancyscript{Q}_\mathrm {E}(x)\) is finite and \(x\in S_2^P\). The second assertion is proved.

\(\square \)

Theorem 4.2

For the two-stage UP problem, the weak feasibility set \(S_2^P\) is a closed convex set.

Proof

Suppose that \(x_i\in S_2^P\ (i=1,2)\), and denote

$$\begin{aligned} \mathrm {\Gamma }_i&= \{\gamma \mid Q(x_i,\gamma )<+\infty \},\quad i=1,2,\\ \mathrm {\Gamma }_\lambda&= \{\gamma \mid Q(\lambda x_1+(1-\lambda )x_2,\gamma )<+\infty \}. \end{aligned}$$

Evidently, \(\fancyscript{M}\{\mathrm {\Gamma }_1\}=\fancyscript{M}\{\mathrm {\Gamma }_2\}=1\). It follows from the Duality Axiom of the uncertain measure that \(\fancyscript{M}\{\mathrm {\Gamma }_i^c\}=1-\fancyscript{M}\{\mathrm {\Gamma }_i\}=0, (i=1,2).\) Further, by the Subadditivity Axiom, we can obtain

$$\begin{aligned} 0\le \fancyscript{M}\{\mathrm {\Gamma }_1^c\cup \mathrm {\Gamma }_2^c\}\le \fancyscript{M}\{\mathrm {\Gamma }_1^c\}+\fancyscript{M}\{\mathrm {\Gamma }_2^c\}=0, \end{aligned}$$

which implies that \(\fancyscript{M}\{\mathrm {\Gamma }_1^c\bigcup \mathrm {\Gamma }_2^c\}=0\). Hence

$$\begin{aligned} \fancyscript{M}\{\mathrm {\Gamma }_1\cap \mathrm {\Gamma }_2\}=1-\fancyscript{M}\{\mathrm {\Gamma }_1^c\cup \mathrm {\Gamma }_2^c\}=1. \end{aligned}$$

For any \(\gamma \in \mathrm {\Gamma }_1\cap \mathrm {\Gamma }_2\), there exist \(y_1\) and \(y_2\) such that

$$\begin{aligned} W(\gamma )y_1&= h(\gamma )-T(\gamma )x_1,\ y_1\ge 0,\\ W(\gamma )y_2&= h(\gamma )-T(\gamma )x_2,\ y_2\ge 0. \end{aligned}$$

For any \(\lambda \in (0,1)\), by the above two equations, we can deduce that

$$\begin{aligned} W(\gamma )(\lambda y_1+(1-\lambda )y_2)=h(\gamma )-T(\gamma )(\lambda x_1+(1-\lambda )x_2), \end{aligned}$$

and \( \lambda y_1+(1-\lambda )y_2\ge 0,\) which implies that \(\gamma \in \mathrm {\Gamma }_\lambda \), i.e.,

$$\begin{aligned} \mathrm {\Gamma }_\lambda =\{\gamma |Q(\lambda x_1+(1-\lambda )x_2,\gamma )<+\infty \}\supset \mathrm {\Gamma }_1\cap \Gamma _2. \end{aligned}$$

Thus, we have

$$\begin{aligned} 1 \ge \fancyscript{M}\{\mathrm {\Gamma }_\lambda \}\ge \fancyscript{M}\{\mathrm {\Gamma }_1\cap \mathrm {\Gamma }_2\}=1, \end{aligned}$$

that is to say, \(\fancyscript{M}\{\mathrm {\Gamma }_\lambda \}=1,\) which implies that \(S_2^P\) is a convex set.

Next, we prove that \(S_2^P\) is closed. Suppose that \(\{x_j\}\subset S_2^P\) converges to \(x_0\), and

$$\begin{aligned} \mathrm {\Gamma }_j=\{\gamma \mid Q(x_j,\gamma )<+\infty \}, \quad j=1,2,\ldots . \end{aligned}$$

It follows from the assumptions that \(\fancyscript{M}\{\mathrm {\Gamma }_j\}=1\) for \(j=1,2,\ldots .\) Denote \(\Lambda _k=\bigcap _{j=1}^k \mathrm {\Gamma }_j,\ k=1,2,\ldots \); it is evident that \(\Lambda _{k+1}\subset \Lambda _k\) and \( \lim _{k\rightarrow \infty }\Lambda _k=\bigcap _{k=1}^\infty \Lambda _k=\bigcap _{j=1}^\infty \mathrm {\Gamma }_j.\) Following the proof of the first part of the theorem, it follows by induction that

$$\begin{aligned} \fancyscript{M}\{\Lambda _k\}=1, \quad k=1,2,\ldots . \end{aligned}$$

By the Duality Axiom, we have

$$\begin{aligned} \fancyscript{M}\{\Lambda _k^c\}=1-\fancyscript{M}\{\Lambda _k\}=0, k=1,2,\ldots . \end{aligned}$$

By the Subadditivity Axiom, we can obtain

$$\begin{aligned} 0\le \fancyscript{M}\left\{ \bigcup _{k=1}^\infty \Lambda _k^c\right\} \le \sum _{k=1}^\infty \fancyscript{M}\left\{ \Lambda _k^c\right\} =0, \end{aligned}$$

that is to say, \(\fancyscript{M}\{\bigcup _{k=1}^\infty \Lambda _k^c\}=0.\) Hence,

$$\begin{aligned} \fancyscript{M}\left\{ \bigcap \limits _{j=1}^\infty \mathrm {\Gamma }_j\right\} = \fancyscript{M}\left\{ \bigcap \limits _{k=1}^\infty \Lambda _k\right\} =1-\fancyscript{M}\left\{ \bigcup \limits _{k=1}^\infty \Lambda _k^c\right\} =1. \end{aligned}$$

Next, we prove \(x_0\in S_2^P\) by contradiction. If \(x_0\notin S_2^P\), then there exist \(\gamma _0\in \bigcap _{j=1}^\infty \mathrm {\Gamma }_j\) with \(\fancyscript{M}\{\gamma _0\}>0\) and \(\rho >0\) such that

$$\begin{aligned} \parallel W(\gamma _0)y-(h(\gamma _0)-T(\gamma _0)x_0)\parallel \ge \rho >0, \end{aligned}$$

for any \(y\ge 0, \) where \(\parallel \cdot \parallel \) is the Euclidean norm in \(\mathfrak {R}^{m_2}\) defined as

$$\begin{aligned} \parallel (z_1,\ldots ,z_{m_2})\parallel =\sqrt{z_1^2+\cdots +z_{m_2}^2}, (z_1,\ldots ,z_{m_2})\in \mathfrak {R}^{m_2}. \end{aligned}$$

Denote by \(O_{x_0}= \{x\mid \parallel W(\gamma _0)y-(h(\gamma _0)-T(\gamma _0)x) \parallel \ge \rho /2>0 \text{ for all } y\ge 0\}\) a neighborhood of \(x_0\). By the hypothesis that \(\lim _{j\rightarrow \infty }x_j=x_0\), there exists \(x_{j_0}\in O_{x_0}\). Since \(\gamma _0\in \bigcap _{j=1}^\infty \mathrm {\Gamma }_j \subset \mathrm {\Gamma }_{j_0},\) we have \(Q(x_{j_0},\gamma _0)<\infty \), so there exists \(y_{j_0}\ge 0\) such that \(W(\gamma _0)y_{j_0}=h(\gamma _0)-T({\gamma _0})x_{j_0}, \) which contradicts \(x_{j_0}\in O_{x_0}\). Therefore, \(x_0\in S_2^P\). The proof is completed.\(\square \)

Theorem 4.3

For the two-stage UP problem with the fixed recourse matrix \(W\), we have

  1. (1)

    \(Q(x,\gamma )\) is almost surely a convex function with respect to \(x\in S_2^P\). Further, if \(Q(x_i,\gamma )\) and \(Q(x_j,\gamma )\) are comonotonic for all \(i,j\in I\), where \(I\) is an arbitrary index set such that \(x_i\in S_2^P\) for \(i\in I\), then \(\fancyscript{Q}_\mathrm {E}(x)\) is also a convex function on \( S_2^P\);

  2. (2)

    \(Q(x,\gamma )\) is a convex function with respect to \((h(\gamma ), T(\gamma ))\);

  3. (3)

    \(Q(x,\gamma )\) is a concave function with respect to \(q(\gamma )\).

Proof

We first prove assertion (1). Suppose that \(x_1, x_2\in S_2^P\), and

$$\begin{aligned} \mathrm {\Gamma }_i=\{\gamma \mid Q(x_i,\gamma )<+\infty \}, \quad i=1,2. \end{aligned}$$

Evidently, \(\fancyscript{M}\{\mathrm {\Gamma }_1\}=\fancyscript{M}\{\mathrm {\Gamma }_2\}=1\). It follows from the Duality Axiom of the uncertain measure that \(\fancyscript{M}\{\mathrm {\Gamma }_i^c\}=1-\fancyscript{M}\{\mathrm {\Gamma }_i\}=0.\) Further, by the Subadditivity Axiom, we can obtain

$$\begin{aligned} 0\le \fancyscript{M}\{\mathrm {\Gamma }_1^c\cup \mathrm {\Gamma }_2^c\}\le \fancyscript{M}\{\mathrm {\Gamma }_1^c\}+\fancyscript{M}\{\mathrm {\Gamma }_2^c\}=0, \end{aligned}$$

which implies that \(\fancyscript{M}\{\mathrm {\Gamma }_1^c\bigcup \mathrm {\Gamma }_2^c\}=0\), hence \(\fancyscript{M}\{\mathrm {\Gamma }_1\cap \mathrm {\Gamma }_2\}=1-\fancyscript{M}\{\mathrm {\Gamma }_1^c\cup \mathrm {\Gamma }_2^c\}=1. \)

Hence, for any \(\gamma \in \mathrm {\Gamma }_1\cap \mathrm {\Gamma }_2\) and \(\lambda \in (0,1)\), it is sufficient to prove that

$$\begin{aligned}&\begin{array}{l@{\quad }l} Q(\lambda x_1+(1-\lambda )x_2,\gamma ) \le \lambda Q( x_1,\gamma )+(1-\lambda )Q(x_2,\gamma ), \end{array} \end{aligned}$$

where

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} Q(x_i,\gamma )=&{}\min \limits _y \{q(\gamma )^Ty\}\\ \text{ s.t. } &{} Wy=h(\gamma )-T(\gamma )x_i\\ &{}y\ge 0 \end{array} \right. \end{aligned}$$
(13)

\(i=1,2\), and

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} Q(\lambda x_1+(1-\lambda ) x_2,\gamma )=\min \limits _y \{q(\gamma )^Ty\} \quad \quad \quad \\ \text{ s.t. }~~~ Wy=h(\gamma )-T(\gamma )(\lambda x_1+(1-\lambda ) x_2)\\ ~~~~~~~~y\ge 0. \end{array} \right. \end{aligned}$$
(14)

Let \(y_i^*\ (i=1,2)\) be the optimal solutions of problems (13), respectively. For any \(\lambda \in (0,1)\), it is easy to verify that \(\lambda y_1^*+(1-\lambda )y_2^*\) is a feasible solution of problem (14). Hence,

$$\begin{aligned}&Q(\lambda x_1+(1-\lambda ) x_2,\gamma )\le q(\gamma )^T(\lambda y_1^*+(1-\lambda )y_2^*)\\&\quad =\lambda Q(x_1,\gamma )+(1-\lambda )Q(x_2,\gamma ) \end{aligned}$$

which implies that \(Q(x,\gamma )\) is convex on \(S_2^P\).

Since \(Q(x_1,\gamma )\) and \(Q(x_2,\gamma )\) are comonotonic, by Theorem 2.5 and the above inequality, we have

$$\begin{aligned}&\mathrm {E}[Q(\lambda x_1+(1-\lambda ) x_2,\gamma )]\le \lambda \mathrm {E}[ Q(x_1,\gamma )]\\&\quad +\,(1-\lambda )\mathrm {E}[Q(x_2,\gamma )]. \end{aligned}$$

It follows from the definition of the expected second-stage value function that

$$\begin{aligned} \fancyscript{Q}_\mathrm {E}(\lambda x_1+(1-\lambda ) x_2)\le \lambda \fancyscript{Q}_\mathrm {E}(x_1)+(1-\lambda )\fancyscript{Q}_\mathrm {E}(x_2), \end{aligned}$$

which implies that \(\fancyscript{Q}_\mathrm {E}(x)\) is a convex function on \( S_2^P\).

We now prove the second assertion. Denote

$$\begin{aligned} (h_i, T_i)=(h_i(\gamma ), T_i(\gamma )), i=1,2, \end{aligned}$$

and

$$\begin{aligned} (h_\lambda , T_\lambda )=(\lambda h_1+(1-\lambda )h_2, \lambda T_1+(1-\lambda )T_2 ), \lambda \in (0,1). \end{aligned}$$

Suppose that \(y_i^*\ (i=1,2)\) are the optimal solutions of problem (13) for \((h_i, T_i)\ (i=1,2)\), respectively. Then, for any \( \lambda \in (0,1)\), \(\lambda y_1^*+(1-\lambda )y_2^*\) is a feasible solution of the problem with \((h_\lambda , T_\lambda )\), by the same argument as in the proof of assertion (1). Let \(y_\lambda ^*\) be the optimal solution of problem (14) with \((h_\lambda , T_\lambda )\). Then we can obtain

$$\begin{aligned} Q(x, h_\lambda , T_\lambda )&= q(\gamma )^T y_\lambda ^* \le q(\gamma )^T(\lambda y_1^*+(1-\lambda )y_2^*)\\&= \lambda q(\gamma )^T y_1^*+(1-\lambda )q(\gamma )^Ty_2^*\\&= \lambda Q(x, h_1, T_1)+ (1-\lambda ) Q(x, h_2, T_2), \end{aligned}$$

which proves the required assertion (2).

Finally, we prove the third assertion. Denote \(q_i=q_i(\gamma ),\ i=1,2,\) and \(q_\lambda =\lambda q_1+(1-\lambda )q_2,\ \lambda \in (0,1)\). Suppose that \(y_i^*\ (i=1,2)\) are the optimal solutions of the following second-stage problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} Q(x,\gamma )=&{}\min \limits _y \{q(\gamma )^Ty\}\\ \text{ s.t. } &{} Wy=h-Tx\\ &{}y\ge 0. \end{array} \right. \end{aligned}$$
(15)

for \(q_1\) and \(q_2\), respectively. Since \(q(\gamma )\) appears only in the objective, the feasible set of problem (15) is the same for \(q_\lambda \), \(q_1\) and \(q_2\). Hence, letting \(y^*\) be the optimal solution of problem (15) for \(q_\lambda \), we have

$$\begin{aligned} Q(x,q_\lambda )&= q_\lambda ^T y^*= \lambda q_1^Ty^*+(1-\lambda )q_2^Ty^*\\&\ge \, \lambda q_1^Ty_1^*+(1-\lambda )q_2^Ty_2^*=\lambda Q(x,q_1)\\&\quad +\,(1-\lambda )Q(x,q_2), \end{aligned}$$

which implies that \(Q(x,\gamma )\) is a concave function with respect to \(q(\gamma )\). The assertion (3) is proved.\(\square \)

Solution method

The expected second-stage problem can be computed via Theorem 2.3 when \(\xi \) is an uncertain variable with regular uncertainty distribution. The problem is more complex when the first-stage problem is nonlinear. In order to avoid getting stuck at a local optimal solution, we adopt a meta-heuristic algorithm, the artificial bee colony (ABC) algorithm, to solve the two-stage UP problem.

Example 5.1

Solve the following two-stage UP problem with \(W\) and \(h\) containing uncertain variables

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{} (-\ln (x_1x_2)\cos x_2+\ln (x_1x_2+1)\cos x_1)^{\frac{1}{3}}+\mathrm {E}_\xi [Q(x, \xi )]\\ \text{ s.t. } &{} 0 \le x_1\le 10\\ &{} 0 \le x_2\le 10, \end{array} \right. \end{aligned}$$
(16)

where \(Q(x, \xi )\) is the second-stage value function as follows

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{} y_1+2y_2\\ \text{ s.t. }&{} \xi y_1=2+x_1\\ &{} y_2=\eta ^{\frac{1}{3}}+x_1+5x_2\\ &{} y_1,y_2\ge 0. \end{array} \right. \end{aligned}$$
(17)

Suppose that \(\xi \sim \fancyscript{LOGN}(e,\pi /(2\sqrt{3}))\) and \(\eta \sim \fancyscript{Z}(2,8,9)\) are independent uncertain variables with uncertainty distributions

$$\begin{aligned} {\mathrm {\Phi }}_1(x)=\left( 1+\exp (2(e-\ln x))\right) ^{-1}, \quad x\ge 0, \end{aligned}$$

and

$$\begin{aligned}&\mathrm {\Phi }_2(x)=\left\{ \begin{array}{l@{\quad }l}0, &{} \text{ if } ~~x \le 2 \\ \frac{x-2}{12}, &{} \text{ if } ~~2 \le x\le 8\\ \frac{x-7}{2}, &{} \text{ if } ~~8 \le x\le 9\\ 1, &{} \text{ if } ~~ x\ge 9, \end{array} \right. \end{aligned}$$

respectively.

Since \(\xi >0\) and \(x_i\ge 0\), it is easy to obtain that

$$\begin{aligned} Q(x, \xi )=\frac{2+x_1}{\xi }+2\eta ^{\frac{1}{3}}+2x_1+10x_2 \end{aligned}$$

for any fixed \(x\in S_1.\) Because \(\xi \) and \(\eta \) are independent uncertain variables, the expected value of the second-stage problem can be deduced by Theorem 2.1 as follows

$$\begin{aligned} \mathrm {E}_\xi [Q(x, \xi )]&= \mathrm {E}_\xi \left[ \frac{2+x_1}{\xi }+2\eta ^{\frac{1}{3}}+2x_1+10x_2\right] \\&= (2+x_1)\,\mathrm {E}\left[ \frac{1}{\xi }\right] +2\mathrm {E}\left[ \eta ^{\frac{1}{3}}\right] +2x_1+10x_2. \end{aligned}$$

By the uncertainty distributions of \(\xi \) and \(\eta \), we can obtain their inverse uncertainty distributions

$$\begin{aligned} \mathrm {\Phi }_1^{-1}(\alpha )=\exp (e)\sqrt{ \frac{\alpha }{1-\alpha }}, \end{aligned}$$

and

$$\begin{aligned}&\mathrm {\Phi }_2^{-1}(\alpha )=\left\{ \begin{array}{l@{\quad }l} 12\alpha +2, &{}\text{ if } ~~0 \le \alpha \le \frac{1}{2} \\ 2\alpha +7,&{}\text{ if } ~~\frac{1}{2} \le \alpha \le 1, \end{array} \right. \end{aligned}$$

respectively. Since \(1/x\) is strictly decreasing and \(x^{1/3}\) is strictly increasing, it follows from Theorems 2.3 and 2.4 that

$$\begin{aligned}&\mathrm {E}_\xi [Q(x, \xi )]=(2+x_1)\,\mathrm {E}\left[ \frac{1}{\xi }\right] +2\mathrm {E}\left[ \eta ^{\frac{1}{3}}\right] +2x_1+10x_2\\&\quad =(2+x_1)\int _0^1\frac{1}{\mathrm {\Phi }_1^{-1}(1-\alpha )} d\alpha \\&\qquad +\,2\int _0^1\left( \mathrm {\Phi }_2^{-1}(\alpha )\right) ^\frac{1}{3} d\alpha +2x_1+10x_2\\&\quad =(2+x_1)\int _0^1\frac{1}{e^e\sqrt{\frac{1-\alpha }{\alpha }}} d\alpha +\,2\int _0^\frac{1}{2} (12\alpha +2) d\alpha \\&\qquad +\,2\int _\frac{1}{2}^1 (2\alpha +7) d\alpha + 2x_1+10x_2\\&\quad =(2+x_1) \frac{\pi }{2}e^e+ \frac{27}{2}+2x_1+10x_2\\&\quad =\left( \frac{\pi }{2}e^e+2\right) x_1+10x_2+\pi e^e+\frac{27}{2}. \end{aligned}$$

Therefore, the two-stage UP problem (16)–(17) is equivalent to the following deterministic programming problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l}\min &{}(-\ln (x_1x_2)\cos x_2+\ln (x_1x_2+1)\cos x_1)^{\frac{1}{3}}\\ &{}\quad +\left( \frac{\pi }{2}e^e+2\right) x_1+10x_2+\pi e^e+\frac{27}{2}\\ \text{ s.t. } &{} 0 \le x_1\le 10\\ &{} 0 \le x_2\le 10. \end{array} \right. \end{aligned}$$
(18)

Now we solve the problem by the ABC algorithm proposed by Karaboga (2005). In the ABC algorithm, there are three essential components: food source positions, nectar amounts, and three kinds of foraging bees (employed bees, onlookers, and scouts). Each food source position represents a feasible solution of the optimization problem under consideration, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. Each kind of foraging bee performs one particular operation for generating new candidate food source positions. Unlike in real bee colonies, the ABC algorithm assumes that the number of food sources (solutions) equals the number of employed bees. The main steps can be summarized as follows:

Step 1: Initialize the control parameters, which are given in Table 1;

Step 2: Initialize the population of food sources with random solutions, denoting \((x_{1},x_{2})\) as a food source position;

Step 3: Send the employed bees to the food sources and determine the nectar amounts \(f_{i}\) from the objective function; if a food source is not feasible, set its nectar amount \(f_{i}\) to \(+\infty \);

Step 4: Calculate the fitness value \(fit_{i}\) of each solution and its corresponding probability value as follows:

$$\begin{aligned} fit_{i}&= \left\{ \begin{array}{l@{\quad }l} 1/(1+f_{i}),&{} \text{ if } ~~f_{i}\ge 0\\ 1+abs(f_{i}), &{} \text{ if } ~~ f_{i} < 0\\ \end{array}\right. \\ p_{i}&= fit_{i}/\mathop {\sum }^{SN}_{i=1}fit_{i}, \end{aligned}$$

where \(i=1,2, \ldots , SN\), and \(SN\) is the number of food sources;

Step 5: Send the onlooker bees to their food sources according to the probability values;

Step 6: If a food source cannot be improved within “limit” trials, send a scout to abandon it and replace it with a new randomly produced solution;

Step 7: Memorize the best food source (solution) achieved so far;

Step 8: If a stopping criterion is met, then output the best food source; otherwise, go back to Step 3.

Table 1 Control parameters adopted in the ABC algorithm

Following the steps given above, we obtain that the optimal solution of problem (18) is \(( 0.0026, 0.0068)^T\), with corresponding objective value \(63.4637\).
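For readers who wish to experiment with the procedure, a compact Python sketch of the ABC loop in Steps 1–8, applied to the deterministic equivalent (18), is given below. The control parameters SN, limit and max_iter are assumed values rather than the settings of Table 1, so the output may differ slightly from the solution reported above.

```python
# A minimal sketch of the ABC algorithm (Steps 1-8) applied to problem (18).
# SN, limit and max_iter are assumed control parameters, not Table 1 settings.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    x1, x2 = x
    first = np.cbrt(-np.log(x1 * x2) * np.cos(x2) + np.log(x1 * x2 + 1) * np.cos(x1))
    return first + (np.pi / 2 * np.e**np.e + 2) * x1 + 10 * x2 + np.pi * np.e**np.e + 13.5

lb, ub = np.array([1e-6, 1e-6]), np.array([10.0, 10.0])  # keep log() defined
SN, limit, max_iter, dim = 20, 30, 500, 2

foods = rng.uniform(lb, ub, size=(SN, dim))              # Step 2
f = np.array([objective(v) for v in foods])              # Step 3
trials = np.zeros(SN, dtype=int)

def fitness(fv):                                         # Step 4
    return 1.0 / (1.0 + fv) if fv >= 0 else 1.0 + abs(fv)

def neighbor(i):                                         # perturb one coordinate
    k = rng.integers(SN)
    while k == i:
        k = rng.integers(SN)
    j = rng.integers(dim)
    v = foods[i].copy()
    v[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return np.clip(v, lb, ub)

best_x, best_f = None, np.inf
for _ in range(max_iter):
    for i in range(SN):                                  # employed bees
        v = neighbor(i)
        fv = objective(v)
        if fv < f[i]:
            foods[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1
    fits = np.array([fitness(fv) for fv in f])
    p = fits / fits.sum()                                # Step 4 probabilities
    for i in rng.choice(SN, size=SN, p=p):               # onlooker bees (Step 5)
        v = neighbor(i)
        fv = objective(v)
        if fv < f[i]:
            foods[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1
    for i in range(SN):                                  # scouts (Step 6)
        if trials[i] > limit:
            foods[i] = rng.uniform(lb, ub)
            f[i], trials[i] = objective(foods[i]), 0
    i = f.argmin()                                       # memorize best (Step 7)
    if f[i] < best_f:
        best_x, best_f = foods[i].copy(), f[i]

print(best_x, best_f)
```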

In particular, the inverse uncertainty distribution cannot be used to solve the two-stage UP problem when the uncertain variable is discrete, so the following method is introduced.

Let \(\xi \) be a discrete uncertain variable taking the values \(\xi ^1<\xi ^2<\cdots <\xi ^m\) with the following uncertainty distribution

$$\begin{aligned}&\mathrm {\Phi }(x)=\left\{ \begin{array}{l@{\quad }l} \alpha _0, ~&{} \text{ if } ~~x < \xi ^1 \\ \alpha _i,~~ &{} \text{ if } ~\xi ^i \le x< \xi ^{i+1}, ~ 1\le i<m,\\ \alpha _m, ~&{} \text{ if } ~~x \ge \xi ^m \end{array} \right. \end{aligned}$$

where \(0=\alpha _0<\alpha _1<\cdots <\alpha _m=1\).

Define

$$\begin{aligned} p_i=\alpha _i-\alpha _{i-1}, i=1,2, \ldots , m \end{aligned}$$
(19)

as the weight of the discrete point \(\xi ^i.\) It is easy to see that the weights satisfy \(p_{i}\ge 0\ (i=1,2,\ldots ,m)\) and \(\sum _{i=1}^{m}p_{i}=\alpha _m=1.\)

By Eq. (1), we can deduce that the expected value of the discrete uncertain variable \(\xi \) is given by \(\mathrm {E}[\xi ]=\sum _{i=1}^m p_i\xi ^i.\)

Without loss of generality, we assume that the second-stage value function satisfies \(Q(x,\xi ^1)\le Q(x,\xi ^2)\le \cdots \le Q(x,\xi ^m).\) For any fixed \(x\), the recourse function \(\fancyscript{Q}_\mathrm {E}(x)\) is calculated by the formula

$$\begin{aligned} \fancyscript{Q}_\mathrm {E}(x)= \sum \limits _{i=1}^m p_i Q(x, \xi ^i). \end{aligned}$$
(20)
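The following small Python sketch implements Eqs. (19)–(20); the callable Q is a stand-in for the second-stage value function of the problem at hand.

```python
# A sketch of Eqs. (19)-(20): weights p_i = alpha_i - alpha_{i-1}, and the
# recourse function as the weighted sum of the ordered second-stage values
# Q(x, xi^1) <= ... <= Q(x, xi^m). Q is a stand-in callable.
def recourse_expectation(alphas, xis, Q, x):
    # alphas = [alpha_0, ..., alpha_m] with alpha_0 = 0 and alpha_m = 1
    weights = [alphas[i] - alphas[i - 1] for i in range(1, len(alphas))]  # Eq. (19)
    return sum(p * Q(x, xi) for p, xi in zip(weights, xis))               # Eq. (20)
```

For the data of Example 5.2 below, alphas = [0, 1/6, 2/3, 1] and xis = [0, 1, 2], which gives the weights 1/6, 1/2 and 1/3.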

Example 5.2

Solve the following two-stage UP problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{} \frac{(x_1+x_2)\ln (\sqrt{x_1+x_2}+1) }{x_1x_2}+\frac{(1+\cos x_2) e^{\sin (x_1+1)}}{\sqrt{x_1+x_2}}\\ &{}\quad +\,\mathrm {E}_\xi [\min 2y_1+y_2]\\ \text{ s.t. }&{} x_1\le 1\\ &{} x_2\le 1\\ &{} y_1+y_2\ge 1-x_1\\ &{} y_1\ge \xi -x_1-x_2\\ &{} x_1,x_2,y_1,y_2\ge 0, \end{array} \right. \end{aligned}$$
(21)

where \(\xi \sim \fancyscript{D}(0,1/6,1,2/3,2,1)\) is an uncertain variable with the following uncertainty distribution

$$\begin{aligned}&\mathrm {\Phi }(x)=\left\{ \begin{array}{l@{\quad }l} 0 ~&{} \text{ if } ~~~~x < 0 \\ \dfrac{1}{6} ~&{} \text{ if } ~~~~0 \le x < 1\\ \dfrac{2}{3}~&{} \text{ if }~~~~1\le x< 2\\ 1 ~&{} \text{ if } ~~~~x \ge 2. \end{array} \right. \end{aligned}$$

For any given \(0\le x_1\le 1,\ 0\le x_2\le 1\), if the uncertain variable \(\xi \) takes the value \(0\), then the optimal second-stage solution is \(y_1^*=0,\ y_2^*=1-x_1\), so the second-stage value is \(Q(x_1,x_2,0)=1-x_1.\)

If the uncertain variable \(\xi \) takes the value \(1\), then the optimal second-stage solution is \(y_1^*=1-x_1-x_2,\ y_2^*=x_2\) provided that \(x_1+x_2\le 1\), and \(y_1^*=0,\ y_2^*=1-x_1\) provided that \(x_1+x_2>1\), which implies that the second-stage value is

$$\begin{aligned}&Q(x_1,x_2,1)=\left\{ \begin{array}{l@{\quad }l} 1-x_1~~~~~~~ ~&{} \text{ if } ~~x_1+x_2 > 1 \\ 2-2x_1-x_2~&{} \text{ if } ~~x_1+x_2 \le 1. \end{array} \right. \end{aligned}$$

Similarly, if \(\xi \) takes the value \(2\), then we can obtain \(y_1^*=2-x_1-x_2,\ y_2^*=0\), which implies that \(Q(x_1,x_2,2)=2(2-x_1-x_2).\)

Therefore, for any given \(0\le x_1\le 1,\ 0\le x_2\le 1\), by the uncertainty distribution of the uncertain variable \(\xi \), the second-stage value function takes the values \(Q(x_1,x_2,0)\), \(Q(x_1,x_2,1)\) and \(Q(x_1,x_2,2)\) with weights \(1/6\), \(1/2\) and \(1/3\) calculated by Eq. (19), respectively. In addition, it is easy to verify that \(Q(x_1,x_2,0)\le Q(x_1,x_2,1)\le Q(x_1,x_2,2)\) for any \(0\le x_1, x_2 \le 1\). Therefore, by Eq. (20), the recourse function can be obtained as follows

$$\begin{aligned} \fancyscript{Q}_\mathrm {E}(x_1, x_2)\!&= \!\frac{1}{6}Q(x_1,x_2,0)\!+\!\frac{1}{2} Q(x_1,x_2,1)\!+\!\frac{1}{3} Q(x_1,x_2,2)\\&= \left\{ \begin{array}{l@{\quad }l} -\frac{4}{3}x_1-\frac{2}{3}x_2+2 ~&{}\text{ if } ~~~~x_1+x_2>1 \\ -\frac{11}{6}x_1-\frac{7}{6}x_2+\frac{5}{2}~&{} \text{ if } ~~~~x_1+x_2\le 1. \end{array} \right. \end{aligned}$$

It follows that problem (21) is equivalent to the following deterministic programming problem

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{}\frac{(x_1+x_2)\ln (\sqrt{x_1+x_2}+1) }{x_1x_2}+\frac{(1+\cos x_2) e^{\sin (x_1+1)}}{\sqrt{x_1+x_2}} +\fancyscript{Q}_\mathrm {E}(x)\\ \text{ s.t. }&{} 0 \le x_1\le 1\\ &{} 0 \le x_2\le 1, \end{array} \right. \end{aligned}$$
(22)

whose optimal value is the smaller of the optimal values of the following two deterministic programming problems

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{} \frac{(x_1+x_2)\ln (\sqrt{x_1+x_2}+1) }{x_1x_2}\\ {} &{}+\,\frac{(1+\cos x_2) e^{\sin (x_1+1)}}{\sqrt{x_1+x_2}}\!-\!\frac{4}{3}x_1\!-\!\frac{2}{3}x_2+2\\ \text{ s.t. }&{} 0 \le x_1\le 1\\ &{} 0 \le x_2\le 1\\ &{} 1< x_1+x_2, \end{array} \right. \end{aligned}$$
(23)

and

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} \min &{} \frac{(x_1+x_2)\ln (\sqrt{x_1+x_2}+1) }{x_1x_2}\\ {} &{}+\, \frac{(1+\cos x_2) e^{\sin (x_1+1)}}{\sqrt{x_1+x_2}}\!-\!\frac{11}{6}x_1\!-\!\frac{7}{6}x_2\!+\!\frac{5}{2}\\ \text{ s.t. }&{} 0 \le x_1\le 1\\ &{} 0 \le x_2\le 1\\ &{} x_1+x_2\le 1. \end{array} \right. \end{aligned}$$
(24)

Next, we solve the two deterministic programming problems formulated above by the ABC algorithm. Following the steps presented in the solution of Example 5.1, we obtain that the optimal solution to problem (23) is \(\bar{x}^{(1)}=(1,0.7761)^T\) with corresponding optimal value \(f(\bar{x}^{(1)})=0.9371\), and the optimal solution to problem (24) is \(\bar{x}^{(2)}=(0.5828,0.4171)^T\) with corresponding optimal value \(f(\bar{x}^{(2)})=1.8570.\) Obviously, the optimal value of problem (23) is smaller than that of problem (24). Therefore, the optimal solution of problem (22) is \(\bar{x}^{(1)}=(1,0.7761)^T\).
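As a rough cross-check of this two-case decomposition, one can grid-search the deterministic problems (23) and (24) directly. The sketch below is illustrative only and is not guaranteed to reproduce the ABC solutions reported above.

```python
# Coarse grid search over the feasible boxes of problems (23) and (24);
# an illustrative sanity check, not a substitute for the ABC algorithm.
import numpy as np

def base(x1, x2):                       # common nonlinear part of (23)-(24)
    s = x1 + x2
    return (s * np.log(np.sqrt(s) + 1)) / (x1 * x2) \
        + (1 + np.cos(x2)) * np.exp(np.sin(x1 + 1)) / np.sqrt(s)

g = np.linspace(1e-3, 1.0, 1001)
x1, x2 = np.meshgrid(g, g)
s = x1 + x2

f23 = np.where(s > 1, base(x1, x2) - 4/3 * x1 - 2/3 * x2 + 2, np.inf)
f24 = np.where(s <= 1, base(x1, x2) - 11/6 * x1 - 7/6 * x2 + 5/2, np.inf)

for name, fv in (("(23)", f23), ("(24)", f24)):
    i = np.unravel_index(np.argmin(fv), fv.shape)
    print(name, (x1[i], x2[i]), fv[i])  # grid minimizer and value
```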

Conclusions

In this paper, we first presented a new class of two-stage uncertain programming problems based on uncertainty theory. Since convexity plays an important role in optimization theory, fundamental properties of the two-stage UP problem were discussed. In view of the complexity of the two-stage UP problem, a meta-heuristic algorithm, the ABC algorithm, was applied to solve the problems presented above. Several numerical examples were given to illustrate the proposed solution method.