1 Introduction

Goal programming was first introduced by Charnes and Cooper (1961) and subsequently developed by Ijiri (1965) to take several objectives into account simultaneously in a decision problem. It is essentially a compromise method that searches for a solution by minimizing the deviations between the achievement levels of the objectives and the goals set for them. At present, it has become a popular tool for solving multiobjective programming problems. Because of its simplicity and ease of use, goal programming has been applied to many different fields such as finance, production, transportation, site selection, telecommunications, quality control, accounting, human resources and so on.

Considering that the parameters in practical problems are often nondeterministic, goal programming with incomplete information has been introduced and widely investigated. Contini (1968) was the first to propose stochastic goal programming with normally distributed random parameters, and the method was later applied by Retzlaff-Roberts (1993) to solve a stochastic allocative data envelopment analysis. Afterwards, the field was further developed and extended by several researchers. Ballestero (2001) proposed a stochastic goal programming model leading to a structure of mean-variance minimization. Liu (1997) provided a theoretical framework for dependent-chance goal programming as the third type of stochastic programming. Aouni and Torre (2010) generalized stochastic goal programming by searching for stochastic solutions. Interested readers may consult Aouni et al. (2012), which highlights the main methodologies of stochastic goal programming and presents an overview of its applications in several domains.

A prerequisite for applying stochastic goal programming is that there are enough data to estimate the random parameters of the problem by statistical methods. In other words, stochastic goal programming may not work well when historical data are lacking. In this case, the parameters are often estimated by experienced experts, and the estimates they give may be described by fuzzy variables. In this circumstance, fuzzy goal programming was formulated by Narasimhan (1980) by incorporating fuzzy goals and constraints into traditional goal programming. Hannan (1981) and Tiwari et al. (1987) continued to extend the subject, and Liu (1999) provided a spectrum of dependent-chance goal programming models with fuzzy rather than crisp decisions. After that, fuzzy goal programming was applied to different areas such as the optimal planning of metropolitan solid waste management systems (Chang and Wang 1997), vendor selection (Kumar et al. 2004) and portfolio selection (Parra et al. 2001).

Different from randomness and fuzziness, Liu (2010a) proposed the concept of uncertain variable and founded uncertainty theory to describe indeterminate phenomena that behave neither like randomness nor like fuzziness. Many researchers have since contributed to this area, and uncertainty theory has been extended to several branches such as uncertain calculus (Liu 2009), uncertain control (Liu 2010b), uncertain risk analysis (Liu 2010c), uncertain logic (Liu 2011) and so forth. In particular, Liu (2010a) and Liu and Chen (2015) considered uncertain goal programming approaches and applied them to the capital budgeting problem.

As two types of indeterminacy, randomness and uncertainty often appear simultaneously in a complex system. For example, in portfolio optimization, security returns may be regarded as random variables when enough data are available, or as uncertain variables when data are lacking. In order to deal with such a case, Liu (2013) introduced the concept of uncertain random variable to characterize the parameters representing the natural state of the problem. In a sense, the uncertain random variable is an extension of both the random variable and the uncertain variable. Based on this concept, uncertain random programming was introduced by Liu (2012) and extended to uncertain random multilevel programming (Ke et al. 2014) and multi-objective uncertain random programming (Zhou et al. 2014). Like its deterministic and stochastic counterparts, goal programming is a feasible and simple tool for solving multi-objective optimization problems. Inspired by this, the paper aims to establish uncertain random goal programming models for problems with multiple objectives and uncertain random parameters. In order to facilitate the use of these approaches, the paper also derives the equivalent deterministic form of each model under the condition that the set of parameters consists of uncertain variables and random variables.

The rest of the paper is organized as follows. Section 2 recalls the basic form of goal programming. Sections 3 and 4 form the main body of the paper: they present expected value goal programming and chance-constrained goal programming, respectively, together with the corresponding equivalent forms. Section 5 illustrates the use of the proposed approaches by a numerical example. Finally, some conclusions are given in Section 6.

2 Goal programming—basic form

Assume that \({{\varvec{x}}}\) is a decision vector and \({{\varvec{s}}}\) is a state vector representing the model parameters in a decision-making problem. The return function \(f_i({{\varvec{x}}},{{\varvec{s}}})\) is the ith objective faced by the decision-maker for \(i=1,2,\ldots ,m\). In general, these m objectives are conflicting and of different importance. Therefore, the decision-maker may establish a hierarchy of importance among these incompatible goals so that as many of them as possible are satisfied in the specified order. Suppose that \(g_j({{\varvec{x}}},{{\varvec{s}}})\) are the constraint functions for \(j=1,2,\ldots ,p\).

For simplicity, we use the following notations in the rest of the paper.

m: the number of goal constraints;

p: the number of system constraints;

l: the number of priorities;

\(b_i\): the aspiration level (goal) associated with objective i;

\(P_j\): the preemptive priority factor which expresses the relative importance of the goals, with \(P_j\gg P_{j+1}\) for all j;

\(u_{ij}\): the weighting factor corresponding to the positive deviation of goal i assigned to priority j;

\(v_{ij}\): the weighting factor corresponding to the negative deviation of goal i assigned to priority j.

Then a general form of goal programming is written as follows,

$$\begin{aligned} \left\{ \begin{array}{ll} \min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^-\right) \\ \text{ s.t. } &{} f_i({{\varvec{x}}},{{\varvec{s}}})-d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} g_j({{\varvec{x}}},{{\varvec{s}}})\le 0,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m. \end{array}\right. \end{aligned}$$
(1)

where \(d_i^+\) is the positive deviation from the target of goal i, defined as \(d_i^+=[f_i({{\varvec{x}}},{{\varvec{s}}})-b_i]\vee 0\), and \(d_i^-\) is the negative deviation from the target of goal i, defined as \(d_i^-=[b_i-f_i({{\varvec{x}}},{{\varvec{s}}})]\vee 0\). The objective function represents the weighted deviations between the achievement level of each objective and the goal set for it.
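To make the role of the deviation variables concrete, the following sketch solves a small linear instance of Model (1) with a single priority level using scipy.optimize.linprog. All coefficients, targets and weights are hypothetical and serve only to show how the goal constraints \(f_i({{\varvec{x}}})-d_i^++d_i^-=b_i\) enter the linear program.

```python
# A minimal sketch of Model (1) for a linear, single-priority instance.
# Hypothetical data: goals f1 = 3*x1 + 2*x2 (target 12) and f2 = x1 + 4*x2
# (target 10), one system constraint x1 + x2 <= 6, weights u_i, v_i.
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, d1p, d1m, d2p, d2m]
u = [1.0, 2.0]          # weights on positive deviations (hypothetical)
v = [1.0, 1.0]          # weights on negative deviations (hypothetical)
c = np.array([0, 0, u[0], v[0], u[1], v[1]])   # minimize weighted deviations

# Goal constraints: f_i(x) - d_i^+ + d_i^- = b_i
A_eq = np.array([[3, 2, -1, 1, 0, 0],
                 [1, 4, 0, 0, -1, 1]])
b_eq = np.array([12.0, 10.0])

# System constraint: x1 + x2 <= 6
A_ub = np.array([[1, 1, 0, 0, 0, 0]])
b_ub = np.array([6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 6, method="highs")
print("x =", res.x[:2], "deviations =", res.x[2:])
```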

In general, the decision-maker does not know the values of the model parameters \({{\varvec{s}}}\) with certainty. For example, the vector \({{\varvec{s}}}\) may represent the demand quantities in supply chain management or the security returns in portfolio selection. In either case, the realizations of \({{\varvec{s}}}\) cannot be known in advance. As discussed in the Introduction, the state vector \({{\varvec{s}}}\) has been treated as a random, fuzzy or uncertain vector. Accordingly, Model (1) has been formulated as stochastic goal programming (Charnes and Cooper 1961), fuzzy goal programming (Narasimhan 1980) and uncertain goal programming (Liu and Chen 2015), respectively.

3 Expected value goal programming

In this section, let \({{\varvec{\xi }}}\) represent the state vector, which is assumed to be an uncertain random vector describing quantities subject to both randomness and human uncertainty. According to the operational law, the return functions \(f_i({{\varvec{x}}},{{\varvec{\xi }}})\) and the constraint functions \(g_j({{\varvec{x}}},{{\varvec{\xi }}})\) are also uncertain random variables for \(i=1,2,\ldots ,m\) and \(j=1,2,\ldots ,p\). Different from the situation of real numbers, there is no natural order between uncertain random variables; moreover, the constraints \(g_j({{\varvec{x}}},{{\varvec{\xi }}})\le 0,\ j=1,2,\ldots ,p\) do not define a crisp feasible set. The expected value is the most commonly used criterion for ranking uncertain random variables, so we first employ it as the ranking standard. The first uncertain random expected value goal programming model is formulated as follows,

$$\begin{aligned} \left\{ \begin{array}{ll} \min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} E[f_i({{\varvec{x}}},{{\varvec{\xi }}})]-d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} E[g_j({{\varvec{x}}},{{\varvec{\xi }}})]\le 0,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m \end{array}\right. \end{aligned}$$
(2)

where \(d_i^+\) is the positive deviation from the target of goal i, defined as \(d_i^+=[E[f_i({{\varvec{x}}},{{\varvec{\xi }}})]-b_i]\vee 0\), and \(d_i^-\) is the negative deviation from the target of goal i, defined as \(d_i^-=[b_i-E[f_i({{\varvec{x}}},{{\varvec{\xi }}})]]\vee 0\).

The expected value is the average value of an uncertain random variable in the sense of chance measure. Thus a constraint such as \(E[g_j({{\varvec{x}}},{{\varvec{\xi }}})]\le 0\) that holds only in expectation is a rather weak requirement. A natural alternative is to require that an uncertain random constraint holds at a given confidence level. Therefore, the constraints \(E[g_j({{\varvec{x}}},{{\varvec{\xi }}})]\le 0\) in Model (2) may be replaced with \(\mathrm{Ch}\{g_j({{\varvec{x}}},{{\varvec{\xi }}})\le 0\}\ge \alpha _j\), where \(\alpha _j\) are confidence levels for \(j=1,2,\ldots ,p\). The second uncertain random expected value goal programming model is formulated as follows,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} E[f_i({{\varvec{x}}},{{\varvec{\xi }}})]-d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} \mathrm{Ch}\{g_j({{\varvec{x}}},{{\varvec{\xi }}})\le 0\}\ge \alpha _j,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m. \end{array}\right. \end{aligned}$$
(3)

For Models (2) and (3), other objective functions are also commonly used in practice. For example, one may use the following,

$$\begin{aligned} \text{ lexmin } \left\{ \sum \limits _{i=1}^m\left( u_{i1}d_i^++v_{i1}d_i^{-}\right) , \sum \limits _{i=1}^m\left( u_{i2}d_i^++v_{i2}d_i^-\right) ,\ldots ,\sum \limits _{i=1}^m\left( u_{il}d_i^++v_{il}d_i^-\right) \right\} \end{aligned}$$

where lexmin denotes lexicographic minimization of the objective vector. Alternatively, a simpler choice is to take \(\text{ lexmin } \{S\}\) as the objective function, where S is a subset of \(\{d_i^+,d_i^-,i=1,2,\ldots ,m\}\) with a given order.
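A common way to implement the lexmin objective in practice is to solve a sequence of programs: minimize the deviations of the first priority, freeze the attained value as an extra constraint, and move on to the next priority. The sketch below applies this sequential scheme to the linear instance sketched above; the priority ordering and the tolerance eps are assumptions for illustration.

```python
# Sketch of lexicographic minimization by sequential linear programs.
# Priorities are hypothetical index sets over the deviation variables of
# z = [x1, x2, d1p, d1m, d2p, d2m] from the instance sketched earlier.
import numpy as np
from scipy.optimize import linprog

A_eq = np.array([[3, 2, -1, 1, 0, 0],
                 [1, 4, 0, 0, -1, 1]])
b_eq = np.array([12.0, 10.0])
A_ub = [[1, 1, 0, 0, 0, 0]]
b_ub = [6.0]
bounds = [(0, None)] * 6

priorities = [[3], [2], [5], [4]]   # lexmin {d1m, d1p, d2m, d2p} (indices in z)
eps = 1e-9
for idx in priorities:
    c = np.zeros(6)
    c[idx] = 1.0                    # minimize deviations of the current priority
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    # Freeze the attained level before moving to the next priority.
    A_ub = np.vstack([A_ub, c])
    b_ub = np.append(b_ub, res.fun + eps)
print("lexicographic solution:", res.x)
```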

Model (2) covers the most general case, in which the uncertain random parameters may be of any type; of course, it is then also difficult to solve. One circumstance in which randomness and uncertainty appear simultaneously is that some parameters are random variables while the others are uncertain variables. This simpler case is commonly faced by decision-makers. More specifically, we assume that the uncertain random vector is written as \({{\varvec{\xi }}}=(\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\), where \(\eta _1,\eta _2,\ldots ,\eta _n\) are independent random variables with probability distributions \(\Psi _1,\Psi _2,\ldots ,\Psi _n\), respectively, and \(\tau _1,\tau _2,\ldots ,\tau _q\) are uncertain variables. In this situation, it follows from Theorem 2 that Models (2) and (3) can be converted into the following crisp goal programming models,

$$\begin{aligned} \left\{ \begin{array}{ll} \min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}E[f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)]\mathrm{d}\Psi _1(y_1)\ldots \mathrm{d}\Psi _n(y_n)-d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}E[g_j({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)]\mathrm{d}\Psi _1(y_1)\ldots \mathrm{d}\Psi _n(y_n)\le 0,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m, \end{array}\right. \end{aligned}$$
(4)

and

$$\begin{aligned} \left\{ \begin{array}{ll} \min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}E[f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)]\mathrm{d}\Psi _1(y_1)\ldots \mathrm{d}\Psi _n(y_n)-d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}\text{ M }\left\{ g_j({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\le 0\right\} \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \alpha _j,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m, \end{array}\right. \end{aligned}$$
(5)

where \(E[f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)]\) and \(E[g_j({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)]\) are the expected values of the uncertain variables \(f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\) and \(g_j({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\), respectively.
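Numerically, the left-hand sides of Models (4) and (5) can be approximated by sampling the random components and, for each sample, evaluating the expected value (or uncertain measure) of the remaining uncertain variable. The sketch below does this for a hypothetical linear function \(f({{\varvec{x}}},y,\tau )=x_1y+x_2\tau \) with \(y\sim {\mathcal {N}}(2,1)\) and a linear uncertain variable \(\tau \sim {\mathcal {L}}(4,8)\), using the inverse uncertainty distribution to compute the inner expected value; the function and distributions are made up for illustration.

```python
# Sketch: evaluate  ∫_R E[f(x, y, τ)] dΨ(y)  for a hypothetical
# f(x, y, τ) = x1*y + x2*τ with y ~ N(2,1) random and τ ~ L(4,8) uncertain.
import numpy as np

rng = np.random.default_rng(0)

def tau_inv(alpha, a=4.0, b=8.0):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return a + (b - a) * alpha

def expected_f_given_y(x, y, n_alpha=200):
    # E of the uncertain variable f(x, y, τ) = ∫_0^1 f(x, y, tau_inv(α)) dα,
    # valid here because f is increasing in τ when x2 >= 0.
    alphas = (np.arange(n_alpha) + 0.5) / n_alpha
    return np.mean(x[0] * y + x[1] * tau_inv(alphas))

def expected_f(x, n_samples=20_000):
    ys = rng.normal(2.0, 1.0, size=n_samples)    # sample the random component
    return np.mean([expected_f_given_y(x, y) for y in ys])

x = np.array([1.0, 1.0])
print(expected_f(x))   # ≈ E[y] + E[τ] = 2 + 6 = 8
```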

Further, suppose that \(f_i({{\varvec{x}}},\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\) is strictly increasing with respect to \(\tau _1,\ldots ,\tau _{q_i'}\) and strictly decreasing with respect to \(\tau _{q_i'+1},\ldots ,\tau _q\), and that \(g_j({{\varvec{x}}},\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\) is strictly increasing with respect to \(\tau _1,\ldots ,\tau _{q_j''}\) and strictly decreasing with respect to \(\tau _{q_j''+1},\ldots ,\tau _q\). Meanwhile, we assume that \(\tau _1,\tau _2,\ldots ,\tau _q\) have regular uncertainty distributions \(\Upsilon _1,\Upsilon _2,\ldots ,\Upsilon _q\), respectively. Then it follows from Theorem 3 that Models (2) and (3) can be converted into the following two models,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}\int _0^1f_i\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{i}'}^{-1}(\alpha ),\Upsilon _{q_i'+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) \\ &{} \qquad \mathrm{d}\alpha \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n) -d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}\int _0^1g_j\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{j}''}^{-1}(\alpha ),\Upsilon _{q_j''+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) \\ &{} \qquad \mathrm{d}\alpha \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\le 0, \\ &{} \qquad j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m, \end{array}\right. \end{aligned}$$
(6)

and

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}\int _0^1f_i\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{i}'}^{-1}(\alpha ),\Upsilon _{q_i'+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) \\ &{} \qquad \mathrm{d}\alpha \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n) -d_i^++d_i^-=b_i,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}G_j({{\varvec{x}}},y_1,\ldots ,y_n)\mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \alpha _j,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m, \end{array}\right. \end{aligned}$$
(7)

where \(G_j({{\varvec{x}}},y_1,\ldots ,y_n)\) is the root \(\alpha \) of the equation

$$\begin{aligned} g_j\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{j}''}^{-1}(\alpha ),\Upsilon _{q_j''+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) =0. \end{aligned}$$
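The root \(\alpha \) appearing in \(G_j\) can be computed by standard one-dimensional root finding once the random components are fixed, and the outer integral can again be approximated by sampling. The sketch below evaluates \(\int _{R^n}\text{ M }\{g({{\varvec{x}}},y,\tau )\le 0\}\mathrm{d}\Psi (y)\) for a hypothetical constraint of the same form as \(g_1\) in the numerical example of Section 5, namely \(g({{\varvec{x}}},y,\tau )=x_1\tau +x_2y-10\) with \(y\sim {\mathcal {N}}(2,1)\) and \(\tau \sim {\mathcal {L}}(0,4)\).

```python
# Sketch: evaluate  ∫_R M{ g(x, y, τ) <= 0 } dΨ(y)  in the monotone case,
# where M{ g <= 0 } is the root α of g(x, y, Υ^{-1}(α)) = 0.
# Hypothetical g(x, y, τ) = x1*τ + x2*y - 10 with y ~ N(2,1), τ ~ L(0,4).
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

def tau_inv(alpha, a=0.0, b=4.0):
    return a + (b - a) * alpha          # inverse distribution of L(a, b)

def measure_g_le_zero(x, y):
    # g is increasing in τ, so M{g <= 0} is the root α of g(x, y, tau_inv(α)) = 0,
    # clipped to [0, 1] when no root exists.
    g = lambda alpha: x[0] * tau_inv(alpha) + x[1] * y - 10.0
    if g(0.0) >= 0.0:
        return 0.0                      # g is positive even at the smallest τ
    if g(1.0) <= 0.0:
        return 1.0                      # g is nonpositive even at the largest τ
    return brentq(g, 0.0, 1.0)

def chance_g_le_zero(x, n_samples=20_000):
    ys = rng.normal(2.0, 1.0, size=n_samples)
    return np.mean([measure_g_le_zero(x, y) for y in ys])

print(chance_g_le_zero(np.array([2.0, 2.0])))
```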

4 Chance-constrained goal programming

If the decision-maker wishes the chance of the event that the ith goal \(f_i({{\varvec{x}}},{{\varvec{\xi }}})\) is close to the target value \(b_i\) to be at least a given confidence level, then the uncertain random decision system can be formulated as a chance-constrained goal programming model according to the priority structure and target levels set by the decision-maker,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^{-}\right) \\ \text{ s.t. } &{} \mathrm{Ch}\left\{ f_i({{\varvec{x}}},{{\varvec{\xi }}})-b_i\le d_i^+\right\} \ge \beta _i^+,i=1,2,\ldots ,m\\ &{} \mathrm{Ch}\left\{ b_i-f_i({{\varvec{x}}},{{\varvec{\xi }}})\le d_i^-\right\} \ge \beta _i^-,i=1,2,\ldots ,m\\ &{} \mathrm{Ch}\{g_j({{\varvec{x}}},{{\varvec{\xi }}})\le 0\}\ge \alpha _j, j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0, i=1,2,\ldots ,m \end{array}\right. \end{aligned}$$
(8)

where \(d_i^+\) is the positive deviation from the target of goal i at confidence level \(\beta _i^+\), defined as \(\min \{d\vee 0|\mathrm{Ch}\{f_i({{\varvec{x}}},{{\varvec{\xi }}})-b_i\le d\}\ge \beta _i^+\}\), and \(d_i^-\) is the negative deviation from the target of goal i at confidence level \(\beta _i^-\), defined as \(\min \{d\vee 0|\mathrm{Ch}\{b_i-f_i({{\varvec{x}}},{{\varvec{\xi }}})\le d\}\ge \beta _i^-\}\).

In deterministic goal programming, we have \(d_i^+\cdot d_i^-=0\), which implies that at most one of the positive and negative deviations takes a positive value. In Model (8), however, it is possible that both \(d_i^+\) and \(d_i^-\) are positive.
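A quick way to see this: if \(f_i({{\varvec{x}}},{{\varvec{\xi }}})\) happens to be purely random with distribution \({\mathcal {N}}(b_i,1)\) and \(\beta _i^+=\beta _i^-=0.9\), then the smallest \(d_i^+\) and \(d_i^-\) satisfying the two chance constraints are both the 0.9-quantile of the standard normal distribution, roughly 1.28. The short check below uses scipy.stats; the distribution and confidence levels are chosen only for illustration.

```python
# Minimal check: with f ~ N(b, 1) and β+ = β- = 0.9, both deviations are positive.
from scipy.stats import norm

beta = 0.9
d_plus = norm.ppf(beta)    # smallest d with P{f - b <= d} >= 0.9
d_minus = norm.ppf(beta)   # smallest d with P{b - f <= d} >= 0.9 (by symmetry)
print(d_plus, d_minus)     # both ≈ 1.2816 > 0
```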

Next, we continue with the situation where \({{\varvec{\xi }}}=(\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\), where \(\eta _1,\eta _2,\ldots ,\eta _n\) are independent random variables with probability distributions \(\Psi _1,\Psi _2,\ldots ,\Psi _n\), respectively, and \(\tau _1,\tau _2,\ldots ,\tau _q\) are uncertain variables. In this case, Model (8) can be converted into the following crisp goal programming model,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^-\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}\text{ M }\left\{ f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\le b_i+d_i^+\right\} \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \beta _i^+,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}\text{ M }\left\{ f_i({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\ge b_i-d_i^-\right\} \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \beta _i^-,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}\text{ M }\left\{ g_j({{\varvec{x}}},y_1,\ldots ,y_n,\tau _1,\ldots ,\tau _q)\le 0\right\} \mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \alpha _j,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0, i=1,2,\ldots ,m. \end{array}\right. \end{aligned}$$
(9)

Assume that uncertain variables \(\tau _1,\tau _2,\ldots ,\tau _q\) have regular uncertainty distributions \(\Upsilon _1,\Upsilon _2,\ldots ,\Upsilon _q\), respectively. Further, suppose that \(f_i({{\varvec{x}}},\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\) is strictly increasing with respect to \(\tau _1,\ldots ,\tau _{q_i'}\) and strictly decreasing with respect to \(\tau _{q_i'+1},\ldots ,\tau _q\), and \(g_j({{\varvec{x}}},\eta _1,\eta _2,\ldots ,\eta _n,\tau _1,\tau _2,\ldots ,\tau _q)\) is strictly increasing with respect to \(\tau _1,\ldots ,\tau _{q_j''}\) and strictly decreasing with respect to \(\tau _{q_j''+1},\ldots ,\tau _q\). In this case, Model (8) is equivalent to the following crisp goal programming,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \sum \limits _{j=1}^lP_j\sum \limits _{i=1}^m\left( u_{ij}d_i^++v_{ij}d_i^-\right) \\ \text{ s.t. } &{} \displaystyle \int _{R^n}F_i'({{\varvec{x}}},y_1,\ldots ,y_n)\mathrm{d}\Psi _1(y_1)\ldots \mathrm{d}\Psi _n(y_n)\ge \beta _i^+,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}F_i''({{\varvec{x}}},y_1,\ldots ,y_n)\mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \beta _i^-,\ i=1,2,\ldots ,m\\ &{} \displaystyle \int _{R^n}G_j({{\varvec{x}}},y_1,\ldots ,y_n)\mathrm{d}\Psi _1(y_1)\cdots \mathrm{d}\Psi _n(y_n)\ge \alpha _j,\ j=1,2,\ldots ,p\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2,\ldots ,m, \end{array}\right. \end{aligned}$$
(10)

where \(F_i'({{\varvec{x}}},y_1,\ldots ,y_n)\) is the root \(\alpha \) of the equation

$$\begin{aligned}&f_i\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{i}'}^{-1}(\alpha ),\Upsilon _{q_i'+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) \\&\quad =b_i+d_i^+, \end{aligned}$$

\(F_i''({{\varvec{x}}},y_1,\ldots ,y_n)\) is the root \(\alpha \) of the equation

$$\begin{aligned}&f_i\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(1-\alpha ),\ldots ,\Upsilon _{q_{i}'}^{-1}(1-\alpha ),\Upsilon _{q_i'+1}^{-1}(\alpha ),\ldots ,\Upsilon _{q}^{-1}(\alpha )\right) \\&\quad =b_i-d_i^-, \end{aligned}$$

and \(G_j({{\varvec{x}}},y_1,\ldots ,y_n)\) is the root \(\alpha \) of the equation

$$\begin{aligned} g_j\left( {{\varvec{x}}},y_1,\ldots ,y_n,\Upsilon _1^{-1}(\alpha ),\ldots ,\Upsilon _{q_{j}''}^{-1}(\alpha ),\Upsilon _{q_j''+1}^{-1}(1-\alpha ),\ldots ,\Upsilon _{q}^{-1}(1-\alpha )\right) =0. \end{aligned}$$
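Checking the constraints of Model (10) at a candidate solution reduces to the same root-finding device for each of \(F_i'\), \(F_i''\) and \(G_j\). The sketch below checks the second constraint, which involves the reversed arguments \(\Upsilon ^{-1}(1-\alpha )\), for a hypothetical return \(f({{\varvec{x}}},y,\tau )=x_1y+x_2\tau \) increasing in \(\tau \), with \(y\sim {\mathcal {N}}(2,1)\), \(\tau \sim {\mathcal {L}}(4,8)\) and made-up values of \({{\varvec{x}}}\), \(b\) and \(d^-\); \(F_i'\) and \(G_j\) are handled analogously with \(\alpha \) in place of \(1-\alpha \).

```python
# Sketch: check the second constraint of Model (10),  ∫ F''(x, y) dΨ(y) >= β-,
# for a hypothetical f(x, y, τ) = x1*y + x2*τ increasing in τ,
# with y ~ N(2,1) and τ ~ L(4,8); x, b and d- are made-up numbers.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(1)
tau_inv = lambda alpha: 4.0 + 4.0 * alpha     # inverse distribution of L(4,8)

def F_dprime(x, y, b, d_minus):
    # Root α of f(x, y, tau_inv(1 - α)) = b - d-, i.e. M{f >= b - d-}.
    h = lambda alpha: x[0] * y + x[1] * tau_inv(1.0 - alpha) - (b - d_minus)
    if h(0.0) <= 0.0:
        return 0.0     # even the largest τ falls short of b - d-
    if h(1.0) >= 0.0:
        return 1.0     # even the smallest τ reaches b - d-
    return brentq(h, 0.0, 1.0)

def lhs(x, b, d_minus, n_samples=20_000):
    ys = rng.normal(2.0, 1.0, size=n_samples)
    return np.mean([F_dprime(x, y, b, d_minus) for y in ys])

x, b, d_minus, beta_minus = np.array([1.0, 1.0]), 10.0, 5.0, 0.9
print(lhs(x, b, d_minus) >= beta_minus)       # ≈ 0.98 >= 0.9, so True
```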

5 A numerical example

In general, the chance measure of an uncertain random event is difficult to calculate analytically. However, in the case of a mixture of random variables and uncertain variables, the proposed models can be converted into crisp ones such as Models (6), (7) and (10). In this case, numerical methods can be employed to calculate the chance measures, since they are essentially integrals.

Next, a simple numerical example is presented to illustrate the modelling idea. Assume that the uncertain random vector is \({{\varvec{\xi }}}=(\eta _1,\eta _2,\tau _1,\tau _2)\), where \(\eta _1\) and \(\eta _2\) are independent normally distributed random variables \({\mathcal {N}}(2,1)\) and \({\mathcal {N}}(4,1)\), and \(\tau _1\) and \(\tau _2\) are independent linear uncertain variables \({\mathcal {L}}(0,4)\) and \({\mathcal {L}}(4,8)\). The return functions are \(f_1({{\varvec{x}}},{{\varvec{\xi }}})=x_1\eta _1+x_2\tau _2\) and \(f_2({{\varvec{x}}},{{\varvec{\xi }}})=x_1\eta _2+x_2\tau _1\), and the constraint functions are \(g_1({{\varvec{x}}},{{\varvec{\xi }}})=x_1\tau _1+x_2\eta _1-10\) and \(g_2({{\varvec{x}}},{{\varvec{\xi }}})=x_1^2\eta _2+x_2^2\tau _2-20\). In addition, the number l of priorities is set to 1, and \(b_1=10,\ b_2=15\). Since \(E[\eta _1]=2\), \(E[\eta _2]=4\), \(E[\tau _1]=2\) and \(E[\tau _2]=6\), Model (2) is converted into the following form,

$$\begin{aligned} \left\{ \begin{array}{ll}\min &{} \left( u_{1}d_1^++v_{1}d_1^-\right) +\left( u_{2}d_2^++v_{2}d_2^-\right) \\ \text{ s.t. } &{} \displaystyle 2x_1+6x_2+d_1^--d_1^+=10\\ &{} \displaystyle 4x_1+2x_2+d_2^--d_2^+=15 \\ &{} 2x_1+2x_2\le 10\\ &{} \displaystyle 4x_1^2+6x_2^2\le 20\\ &{} d_i^+,d_i^-\ge 0,\ i=1,2. \end{array}\right. \end{aligned}$$
(11)

This is a crisp goal programming model without uncertain random parameters. Without loss of generality, we set \(u_1=0.2, v_1=0, u_2=0.7, v_2=0.1\). Then the above model becomes a single-objective programming problem and can be solved by LINGO. The optimal solution is \(x_1^*=1.86,\ x_2^*=1.01\), with \(d_1^-=0.22,\ d_1^+=0,\ d_2^-=5.53,\ d_2^+=0\). The achieved value of the first goal is 9.78 and that of the second is 9.47.
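The coefficients of Model (11) follow directly from the expected values \(E[\eta _1]=2\), \(E[\eta _2]=4\), \(E[\tau _1]=2\) and \(E[\tau _2]=6\). The sketch below recomputes them and evaluates the deviations at the reported solution; it is only a consistency check, not the LINGO run used above, and the small gap in \(d_2^-\) comes from rounding \(x_1^*\) and \(x_2^*\) to two decimals.

```python
# Consistency check for Model (11): recompute the expected-value coefficients
# from the distributions and evaluate the deviations at the reported solution.
E_eta1, E_eta2 = 2.0, 4.0                   # E of N(2,1) and N(4,1)
E_tau1, E_tau2 = (0 + 4) / 2, (4 + 8) / 2   # E of L(0,4) and L(4,8)

x1, x2 = 1.86, 1.01                         # reported optimal solution
b1, b2 = 10.0, 15.0

Ef1 = E_eta1 * x1 + E_tau2 * x2             # E[f1] = 2*x1 + 6*x2
Ef2 = E_eta2 * x1 + E_tau1 * x2             # E[f2] = 4*x1 + 2*x2
Eg1 = E_tau1 * x1 + E_eta1 * x2 - 10.0      # E[g1] = 2*x1 + 2*x2 - 10
Eg2 = E_eta2 * x1**2 + E_tau2 * x2**2 - 20.0  # E[g2] = 4*x1^2 + 6*x2^2 - 20

d1_minus, d1_plus = max(b1 - Ef1, 0.0), max(Ef1 - b1, 0.0)
d2_minus, d2_plus = max(b2 - Ef2, 0.0), max(Ef2 - b2, 0.0)
print(Ef1, Ef2, Eg1, Eg2)   # ≈ 9.78, 9.46, -4.26, -0.04 (quadratic constraint ~binding)
print(d1_minus, d1_plus, d2_minus, d2_plus)   # ≈ 0.22, 0, 5.54, 0
```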

If the decision-maker does not know how to determine the values of the weighting factors \(u_1,v_1,u_2\) and \(v_2\), we can instead lexicographically minimize a subset of \(\{d_i^-,d_i^+, i=1,2\}\). For example, we set the objective function as \(\text{ lexmin }\{d_1^-, d_1^+, d_2^-, d_2^+\}\). Then the optimal solution is \(x_1^*=1.82,\ x_2^*=1.06\), with \(d_1^-=0,\ d_1^+=0,\ d_2^-=5.60,\ d_2^+=0\). The achieved value of the first goal is 10 and that of the second is 9.4.

6 Conclusions

In this paper, we proposed two kinds of uncertain random goal programming models for problems involving randomness and uncertainty simultaneously. Equivalent deterministic forms were obtained for the special case in which the set of uncertain random parameters is composed of random variables and uncertain variables. For the purpose of illustration, a numerical example was presented to show the use of the approach.