1 Introduction

The solid transportation problem (STP) is the problem of carrying goods from numerous sources to different destinations by different conveyances so that the transportation plan is optimal. The STP was introduced by Schell (1955). A solution procedure for the STP was presented by Haley (1962). Later, Bhatia et al. (1976) presented a technique for minimizing the transportation time in the STP. Li et al. (1997) introduced a neural network approach for multi-criteria STPs.

In practice, the transportation system is often required to carry more than one item from several sources to several destinations by different conveyances. Such a transportation model is called the multi-item solid transportation problem (MISTP). A complication arises when, owing to the nature of the items, not every kind of item can be carried by every kind of conveyance. In recent years, the MISTP under different kinds of uncertainty has been studied by several scholars (Dalman et al. 2016; Liu et al. 2017).

Since data cannot be estimated accurately in real-world transportation problems, it is hard to determine the exact values of resources, demands and direct unit transportation costs. These imprecise quantities are sometimes treated as random, fuzzy or uncertain variables. Stochastic programming deals with situations where the input data are imprecise in the stochastic sense and are described by random variables with known probability distributions. In many applications, however, the probability distributions of the parameters cannot be observed, and we have to ask domain experts to estimate the belief degree that each event will happen. A common practice for such indeterminacy is to treat belief degrees as if they were probability distributions. However, Liu (2012) showed that modeling belief degrees with probability theory is inappropriate and may lead to unacceptable outcomes. In order to handle these types of uncertainty, Liu (2007) presented uncertainty theory, a branch of mathematics based on the normality, monotonicity, self-duality, countable subadditivity and product measure axioms. The uncertainty theory was later refined by Liu (2010). Since then, studies on uncertainty theory have increased in both theory and practice. It has been used in many areas such as game theory (Yang and Gao 2013; Gao et al. 2017), finance (Liu 2013; Guo and Gao 2017; Xiao et al. 2016), differential equations (Yao et al. 2013), regression (Yao and Liu 2017), optimization (Liu and Chen 2015; Zhou et al. 2014) and graph theory (Liu 2014; Shi et al. 2017; Chen et al. 2017; Cheng et al. 2017; Rosyida et al. 2018).

STP with uncertain variables has been examined by some scholars. Cui and Sheng (2013) presented uncertain programming models for the STP. Zhang et al. (2016) investigated the fixed charge solid transportation problem applying uncertainty theory. Chen et al. (2017) presented uncertain bi-criteria models for the STP. Chen et al. (2017) studied an entropy-based STP with Shannon entropy. MISTP models with uncertain variables were presented by Dalman et al. (2016). Majumder et al. (2018) presented the uncertain fixed charge MISTP with budget constraints.

None of these earlier works discussed the entropy-based MISTP with uncertain variables. Entropy is employed to produce a quantitative measure of the degree of uncertainty. Based on the Shannon entropy of random variables (Shannon and Weaver 1949), fuzzy entropy was first introduced by Zadeh (1968) to quantify fuzziness, representing the entropy of a fuzzy event as a weighted Shannon entropy. Within uncertainty theory, the concept of entropy of uncertain variables was proposed by Liu (2009) to characterize the uncertainty that results from information deficiency. Chen and Dai (2011) and Dai and Chen (2012) investigated the maximum entropy principle for uncertainty distributions of uncertain variables and gave the entropy of a function of uncertain variables.

The entropy function provides a measure of the dispersal of trips from sources to destinations via conveyances. It is useful to minimize the transportation penalties as well as to maximize the entropy (Chen et al. 2017; Ojha et al. 2009), which ensures a more even distribution of commodities between origins and destinations. The contribution of this paper is therefore an entropy-based model for the MISTP built on uncertainty theory.

Hence, in this paper an entropy-based MISTP model with uncertain variables is proposed to promote a uniform distribution of commodities. The uncertain entropy of the dispersal of trips between origins and destinations is taken as a second objective function, so the single objective MISTP with uncertain variables is transformed into a multi-objective MISTP with uncertain variables. To model the considered problem, the expected value programming model and the expected constrained programming model are used. By employing uncertainty theory, the entropy-based MISTP models with uncertain variables are turned into their deterministic equivalents, which can then be solved by two different mathematical programming methods.

The remainder of the paper is organized as follows. Sections 2 and 3 give fundamental definitions and theorems. The construction of the MISTP is given in Sect. 4. Then, an entropy-based model is developed in Sect. 5. Sections 6 and 7 contain the solution methods and a numerical experiment. The paper is concluded in Sect. 8.

2 Preliminary

Basic definitions and notations of uncertainty theory are given here.

Definition 2.1

(Liu 2007) Let \(\mathcal{L}\) be a \(\sigma \)-algebra on a nonempty set \(\Gamma \). A set function \(\mathcal{M}:\mathcal{L}\rightarrow [0,1]\) is called an uncertain measure if it satisfies the following axioms:

Axiom 1. (Normality Axiom) \(\mathcal{M}\{\Gamma \}=1\);

Axiom 2. (Duality Axiom) \(\mathcal{M}\{\Lambda \}+\mathcal{M}\{\Lambda ^{c}\}=1\) for any event \(\Lambda \);

Axiom 3. (Subadditivity Axiom) For every countable sequence of events \(\Lambda _1,\Lambda _2,\ldots \), we have

$$\begin{aligned} \mathcal{M}\left\{ \bigcup _{i=1}^{\infty }\Lambda _i\right\} \le \sum _{i=1}^{\infty }\mathcal{M}\{\Lambda _i\}. \end{aligned}$$

The triplet \((\Gamma ,\mathcal{L},\mathcal{M})\) is called an uncertainty space, and each element \(\Lambda \) in \(\mathcal{L}\) is called an event. In addition, in order to obtain the uncertain measure of a compound event, a product uncertain measure was defined by Liu (2009) via the following product axiom:

Axiom 4. (Product Axiom) Let \((\Gamma _k,\mathcal{L}_k,\mathcal{M}_k)\) be uncertainty spaces for \(k=1,2,\ldots \) The product uncertain measure \(\mathcal{M}\) is an uncertain measure satisfying

$$\begin{aligned} \mathcal{M}\left\{ \prod _{k=1}^{\infty }\Lambda _k\right\} =\bigwedge _{k=1}^{\infty }\mathcal{M}_k\{\Lambda _k\} \end{aligned}$$

where \(\Lambda _k\) are arbitrarily chosen events from \(\mathcal{L}_k\) for \(k=1,2,\ldots \), respectively.

Definition 2.2

(Liu 2007) An uncertain variable \(\xi \) is a measurable function from an uncertainty space \((\Gamma ,\mathcal{L},\mathcal{M})\) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

$$\begin{aligned} \{\xi \in B\}=\{\gamma \in \Gamma |\xi (\gamma )\in B\} \end{aligned}$$

is an event.

Definition 2.3

(Liu 2007) The uncertainty distribution \(\Phi \) of an uncertain variable \(\xi \) is defined by

$$\begin{aligned} \Phi (x)=\mathcal{M}\{\xi \le x\} \end{aligned}$$

for any real number x.

Definition 2.4

(Liu 2007) Let \(\xi \) be an uncertain variable. The expected value of \(\xi \) is defined by

$$\begin{aligned} E[\xi ]=\int _{0}^{+\infty }\mathcal{M}\{\xi \ge r\}\mathrm{d}r-\int _{-\infty }^{0}\mathcal{M}\{\xi \le r\}\mathrm{d}r \end{aligned}$$

provided that at least one of the above two integrals is finite. An uncertain variable \( \xi \) is called linear if it has a linear uncertainty distribution

$$\begin{aligned} \Phi \left( x \right) = \left\{ {\begin{array}{l} {0, \quad x \le a}\\ {\left( {x - a} \right) /\left( {b - a} \right) , \quad a \le x \le b}\\ {1, \quad x \ge b} \end{array}} \right. \end{aligned}$$

denoted by \( \mathcal {L}\left( {a,b} \right) \) where a and b are real numbers with \(a < b.\) Suppose that \( \xi _1 \) and \( \xi _2 \) are independent linear uncertain variables \( \mathcal {L}\left( {a_1,b_1} \right) \) and \( \mathcal {L}\left( {a_2,b_2} \right) .\) Then the sum \( {\xi _1} + {\xi _2} \) is also a linear uncertain variable \( \mathcal {L}\left( {{a_1} + {a_2},{b_1} + {b_2}} \right) . \)

Definition 2.5

(Liu 2007) Let \(\xi \) be an uncertain variable with a regular uncertainty distribution \(\Phi (x)\). If the expected value exists, then

$$\begin{aligned} E[\xi ]=\int _{0}^{1}{\Phi ^{-1}(\alpha )}\mathrm{d}\alpha \end{aligned}$$

where \(\Phi ^{-1}(\alpha )\) is the inverse uncertainty distribution of \(\xi \).
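
As an illustrative aside (not part of the original formulation), Definition 2.5 can be checked numerically. The following minimal Python sketch evaluates \(\int _{0}^{1}\Phi ^{-1}(\alpha )\,\mathrm{d}\alpha \) for a hypothetical linear uncertain variable \(\mathcal {L}(2,6)\) and recovers the expected value \((a+b)/2=4\).

```python
# Hypothetical check of Definition 2.5 for a linear uncertain variable L(a, b).
# The inverse uncertainty distribution of L(a, b) is Phi^{-1}(alpha) = a + alpha*(b - a).
from scipy.integrate import quad

a, b = 2.0, 6.0                       # assumed (hypothetical) parameters of L(a, b)

def inv_dist(alpha):
    """Inverse uncertainty distribution of the linear uncertain variable L(a, b)."""
    return a + alpha * (b - a)

expected_value, _ = quad(inv_dist, 0.0, 1.0)   # E[xi] = int_0^1 Phi^{-1}(alpha) d(alpha)
print(expected_value)                          # 4.0, i.e. (a + b) / 2
```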

Theorem 2.1

(Liu 2010) Assume \(\xi _1,\xi _2,\ldots ,\xi _n\) are independent uncertain variables with regular uncertainty distributions \(\Phi _1,\Phi _2,\ldots ,\Phi _n\), respectively. If the function \(f(x_1,x_2,\ldots ,x_n)\) is strictly increasing with respect to \(x_1,x_2,\ldots \), \(x_m\) and strictly decreasing with respect to \(x_{m+1},x_{m+2},\ldots ,x_n\), then \(\xi =f(\xi _1,\xi _2,\ldots ,\xi _n)\) has an inverse uncertainty distribution

$$\begin{aligned}&\Psi ^{-1}(\alpha )\\&\quad =f\left( \Phi _1^{-1}(\alpha ),\ldots , \Phi _m^{-1}(\alpha ),\Phi _{m+1}^{-1}(1-\alpha ),\ldots ,\Phi _n^{-1}(1-\alpha )\right) . \end{aligned}$$

In addition, Liu and Ha (2010) proved that the uncertain variable \(\xi \) has an expected value

$$\begin{aligned}&E[\xi ]\\&=\int _{0}^{1}f \left( \Phi _1^{-1}(\alpha ),\ldots , \Phi _m^{-1}(\alpha ),\Phi _{m+1}^{-1}(1-\alpha ), \ldots ,\Phi _n^{-1}(1-\alpha )\right) \mathrm{{d}}\alpha . \end{aligned}$$

Theorem 2.2

(Liu 2010) Let \( \xi \) and \( \eta \) be independent uncertain variables with finite expected values. Then, for any real numbers a and b, we have

$$\begin{aligned} \begin{aligned} \mathrm {E}[a\xi +b\eta ]=a\mathrm {E}[\xi ]+b\mathrm {E}[\eta ]. \end{aligned} \end{aligned}$$

Theorem 2.3

(Liu 2009) Let \( g(x, \xi _1, \xi _2, \ldots , \xi _n) \) be a constraint function that is strictly increasing with respect to \( \xi _1, \xi _2, \ldots , \xi _k \) and strictly decreasing with respect to \( \xi _{k+1}, \xi _{k+2}, \ldots , \xi _n. \) If \( \xi _1, \xi _2, \ldots , \xi _n \) are independent uncertain variables with uncertainty distributions \( \Phi _1, \Phi _2,\ldots , \Phi _n, \) respectively, then the chance constraint

$$\begin{aligned} \begin{aligned} {\mathcal {M}}\left\{ g(x, \xi _1, \xi _2, \ldots , \xi _n) \le 0 \right\} \ge \alpha \end{aligned} \end{aligned}$$

holds if and only if

$$\begin{aligned} \begin{aligned} \begin{array}{l} g\left( x, \Phi _1^{-1}(\alpha ), \ldots , \Phi _k^{-1}(\alpha ),\right. \\ \qquad \left. \Phi _{k+1}^{-1}(1- \alpha ), \ldots , \Phi _{n}^{-1}(1- \alpha ) \right) \le 0. \end{array} \end{aligned} \end{aligned}$$

3 Entropy of function of uncertain variables

Here, we give the definition of the entropy of uncertain variables and the related results used in the sequel. The concept of entropy introduced by Liu (2009) is as follows.

Theorem 3.1

(Liu 2009) Let \( \xi \) be an uncertain variable. Its entropy is defined by

$$\begin{aligned} H\left[ \xi \right] = \int _{- \infty }^\infty S\left( {\mathcal{M}\left\{ {\xi \le x} \right\} } \right) \mathrm{{d}}x\mathrm{{,}} \end{aligned}$$

where \(S\left( t \right) = - t\ln t - \left( {1 - t} \right) \ln \left( {1 - t} \right) .\)

Theorem 3.2

Let \( \xi \) be an uncertain variable with regular uncertainty distribution \( \Phi . \) If the entropy \( H\left[ \xi \right] \) exists, then

$$\begin{aligned} H\left[ \xi \right] = \int _0^1{\Phi ^{ - 1}}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha \mathrm{{.}} \end{aligned}$$
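
As a hedged numerical illustration (not part of the paper), the two representations of entropy in Theorems 3.1 and 3.2 can be compared on a hypothetical linear uncertain variable \(\mathcal {L}(2,6)\); both integrals yield \((b-a)/2=2\).

```python
# Numerical check of Theorems 3.1 and 3.2 on a hypothetical linear variable L(2, 6).
import numpy as np
from scipy.integrate import quad

a, b = 2.0, 6.0

def Phi(x):
    """Linear uncertainty distribution of L(a, b)."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def S(t):
    """S(t) = -t ln t - (1 - t) ln(1 - t), with S(0) = S(1) = 0."""
    t = np.clip(t, 1e-12, 1 - 1e-12)
    return -t * np.log(t) - (1 - t) * np.log(1 - t)

H_def, _ = quad(lambda x: S(Phi(x)), a, b)          # Theorem 3.1 (S vanishes outside [a, b])
H_inv, _ = quad(lambda al: (a + al * (b - a)) * np.log(al / (1 - al)), 0.0, 1.0)  # Theorem 3.2
print(H_def, H_inv)                                  # both approximately (b - a) / 2 = 2.0
```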

Theorem 3.3

(Dai and Chen 2012) Let \({\xi _1},{\xi _2}, \ldots ,{\xi _n}\) be independent uncertain variables with regular uncertainty distributions \({\Phi _1},{\Phi _2}, \ldots ,{\Phi _n}\), respectively. If \(f:{\mathcal{R}^n} \rightarrow \mathcal{R}\) is a strictly monotone function, then the uncertain variable \(\xi = f\left( {{\xi _1},{\xi _2}, \ldots ,{\xi _n}} \right) \) has an entropy

$$\begin{aligned} H\left[ \xi \right] = \left| {\int _0^1f\left( {\Phi _1^{ - 1}\left( \alpha \right) ,\Phi _2^{ - 1}\left( \alpha \right) , \ldots ,\Phi _n^{ - 1}\left( \alpha \right) } \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \right| \mathrm{{.}} \end{aligned}$$

Theorem 3.4

(Dai and Chen 2012) Let \({\xi _1},{\xi _2}, \ldots ,{\xi _n}\) be independent uncertain variables with regular uncertainty distributions \({\Phi _1},{\Phi _2}, \ldots ,{\Phi _n}\), respectively. If \(f:{\mathcal{R}^n} \rightarrow \mathcal{R}\) is strictly increasing with respect to \({x_1},{x_2}, \ldots ,{x_m}\) and strictly decreasing with respect to \({x_{m + 1}},{x_{m + 2}}, \ldots ,{x_n},\) then the uncertain variable \(\xi = f\left( {{\xi _1},{\xi _2}, \ldots ,{\xi _n}} \right) \) has an entropy

$$\begin{aligned} H\left[ \xi \right]= & {} \int _0^1f\left( \Phi _1^{ - 1}\left( \alpha \right) , \ldots ,\Phi _m^{ - 1}\left( \alpha \right) ,\Phi _{m + 1}^{ - 1}\left( {1 - \alpha } \right) , \ldots ,\right. \\&\left. \Phi _n^{ - 1}\left( {1 - \alpha } \right) \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha \mathrm{{.}} \end{aligned}$$

Theorem 3.5

(Dai and Chen 2012) Assume that \( \xi \) and \(\eta \) are independent uncertain variables. Then, for any \( a, b \in \mathfrak {R}\), we have

$$\begin{aligned} H\left[ {a\xi + b\eta } \right] = \left| a \right| H\left[ \xi \right] + \left| b \right| H\left[ \eta \right] \mathrm{{.}} \end{aligned}$$

4 Uncertain programming models for multi-objective multi-item solid transportation problem

In the MISTP, multiple products are to be carried from a set of origins to a set of destinations by a set of similar or distinct conveyances. Every origin may supply any of the destinations using some of the conveyances, and every destination can receive its demand from some of the origins using some of the conveyances. Thus, every origin can supply zero, one or more destinations, and the demand of each destination can be met by one or more origins. Each conveyance may likewise be employed on zero, one or more open routes from the origins to the destinations. A unit cost is charged for carrying any quantity of product between the origins and the destinations via the distinct conveyances. The purpose of the MISTP is to minimize the total transportation cost by obtaining an optimal allocation of the products shipped along the open routes by the distinct conveyances.

In order to formulate the MISTP, the following notation is employed in this paper.

M::

the number of origins,

N::

the number of destinations,

L::

the number of conveyances,

R::

the number of items,

i, j, k, p::

the indices used for source, destination, conveyance and item, respectively.

\( a^p_i \)::

the capacity of item p at origin i,

\( b^p_j \)::

the demand of products of item p at destination j,

\( e_k \)::

the total transportation capacity of conveyance k,

\(c^{p}_{ijk} \)::

the unit cost of transporting one unit of item p from source i to destination j by conveyance k,

\(x^{p}_{ijk} \)::

the amount of item p to be carried from source i to destination j by conveyance k.

Using these notations, a mathematical model of single objective MISTP can be formulated as follows:

$$\begin{aligned} \left\{ {\begin{array}{l} {{f_{}}\left( x \right) = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {c_{ijk}^p} } } } x_{ijk}^p \quad \mathrm{(a)}}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\mathop \sum \limits _{k = 1}^L } x_{ijk}^p \le a_i^p,\,\forall i \in M;\,\,\forall p \in R \quad \mathrm{(b)}}\\ {\sum \limits _{i = 1}^M {\mathop \sum \limits _{k = 1}^L } x_{ijk}^p \ge b_j^p,\forall j \in N; \quad \forall p \in R \quad \mathrm{(c)}}\\ {\sum \limits _{p = 1}^R {\mathop \sum \limits _{i = 1}^M } \mathop \sum \limits _{k = 1}^L x_{ijk}^p \le {e_k},\forall k \in L \quad \mathrm{(d)}}\\ {x_{ijk}^p \ge 0,\,\forall i \in M;\,\forall j \in N;\,\forall k \in L;\,\forall p \in R \quad \mathrm{(e)}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(1)

In this model, the objective function (a) minimizes the total transportation cost, which is the sum of the unit costs times the shipped amounts. Constraint (b) guarantees that the total amount of item p carried from each origin to all destinations does not exceed the capacity of that origin. Constraint (c) ensures that the demand of each destination is satisfied. Constraint (d) represents the capacity of each conveyance, and constraint (e) imposes nonnegativity of the decision variables. A small computational sketch of this deterministic model is given below.
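
For readers who wish to experiment with model (1), the following Python sketch (not from the paper; all sizes and data are hypothetical) builds the deterministic MISTP as a linear program and solves it with SciPy, flattening the decision variables \(x^p_{ijk}\) into a single vector.

```python
# Minimal sketch of the deterministic MISTP (model (1)) as a linear program.
# All sizes and data below are hypothetical illustration values, not the paper's data.
import numpy as np
from scipy.optimize import linprog

R, M, N, L = 2, 2, 2, 2                     # items, origins, destinations, conveyances

def idx(p, i, j, k):
    """Position of x^p_{ijk} in the flattened decision vector."""
    return ((p * M + i) * N + j) * L + k

rng = np.random.default_rng(0)
c = rng.uniform(1, 10, size=R * M * N * L)  # unit costs c^p_{ijk} (hypothetical)
a = np.full((R, M), 60.0)                   # supplies a^p_i
b = np.full((R, N), 40.0)                   # demands b^p_j
e = np.full(L, 120.0)                       # conveyance capacities e_k

A_ub, b_ub = [], []
for p in range(R):                           # supply constraints (b): sum_{j,k} x <= a^p_i
    for i in range(M):
        row = np.zeros(R * M * N * L)
        for j in range(N):
            for k in range(L):
                row[idx(p, i, j, k)] = 1.0
        A_ub.append(row); b_ub.append(a[p, i])
for p in range(R):                           # demand constraints (c): -sum_{i,k} x <= -b^p_j
    for j in range(N):
        row = np.zeros(R * M * N * L)
        for i in range(M):
            for k in range(L):
                row[idx(p, i, j, k)] = -1.0
        A_ub.append(row); b_ub.append(-b[p, j])
for k in range(L):                           # conveyance constraints (d): sum_{p,i,j} x <= e_k
    row = np.zeros(R * M * N * L)
    for p in range(R):
        for i in range(M):
            for j in range(N):
                row[idx(p, i, j, k)] = 1.0
    A_ub.append(row); b_ub.append(e[k])

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=(0, None))
print(res.status, res.fun)                   # status 0 means an optimal solution was found
```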

In order to construct its uncertain programming model, assume that the unit costs \(\xi _{ijk}^{p}, \) the capacities of the origins \( {\tilde{a}}_{i}^{p}, \) the demands of the destinations \( {\tilde{b}}_{j}^{p} \) and the capacities of the conveyances \( {\tilde{e}}_{k} \) are all uncertain variables. Then the MISTP can be formulated as the following uncertain programming model.

$$\begin{aligned} \left\{ {\begin{array}{l} {{f_{}}\left( {x,\xi } \right) = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\xi _{ijk}^p}}}}}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\mathop \sum \limits _{k = 1}^L } x_{ijk}^p \le {\tilde{a}}_i^p,\,\forall i \in M;\quad \forall p \in R}\\ {\sum \limits _{i = 1}^M {\mathop \sum \limits _{k = 1}^L } x_{ijk}^p \ge {\tilde{b}}_j^p,\forall j \in N;\quad \forall p \in R}\\ {\sum \limits _{p = 1}^R {\mathop \sum \limits _{i = 1}^M } \mathop \sum \limits _{k = 1}^L x_{ijk}^p \le {{{\tilde{e}}}_k},\forall k \in L}\\ {x_{ijk}^p \ge 0,\,\forall i \in M;\quad \forall j \in N;\,\forall k \in L;\quad \forall p \in R} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(2)

It is clear that model (2) contains uncertain parameters, so it cannot be optimized as a deterministic programming problem. In order to optimize the model with uncertain variables, we transform it into equivalent models. Therefore, two programming models, based on expected value programming and chance constrained programming, are presented.

4.1 Expected value programming model

Since the expected value is the average value of an uncertain variable in the sense of uncertain measure, the main idea of expected value programming model is to optimize the expected value of the objective function under the expected constraints.

Here, expected value programming model for optimizing MISTP can be formulated as:

$$\begin{aligned} \left\{ {\begin{array}{l} {E[{f_{}}(x,\xi )] = \min \mathrm{{E}}\left[ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {\left( {\xi _{ijk}^px_{ijk}^p} \right) } } } } } \right] }\\ {s.t.\left\{ {\begin{array}{l} {E\left[ {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - {\tilde{a}}_i^p} \right] \, \le 0,\,\forall i \in M;\forall p \in R}\\ {E\left[ {{\tilde{b}}_j^p - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } } \right] \le 0,\forall j \in N;\forall p \in R}\\ {\begin{array}{l} {E\left[ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p - {{{\tilde{e}}}_k}} } } } \right] \le 0,\forall k \in L}\\ {x_{ijk}^p \ge 0, \forall i \in M;\forall j \in N;\forall k \in L;\forall p \in R} \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(3)

where \({\xi ^{p}_{ijk}},{{\tilde{a}}_i^{p}},{{\tilde{b}}_j^{p}},{{\tilde{e}}_k}\) are all independent uncertain variables for all i, j, k, p.

Theorem 4.1

Model (3) is equivalent to the deterministic model given below.

$$\begin{aligned} {\begin{array}{l} {\min \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{d}\alpha }\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} }- \displaystyle \int \limits _0^1 {\Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p)\mathrm{d} \alpha _i^p} \le 0, \quad \forall i \in M; \forall p \in R}\\ {\displaystyle \int \limits _0^1 {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) \mathrm{d}\beta _j^p} - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0,\quad \forall j \in N; \forall p \in R}\\ {\begin{array}{l} {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p} } } - \displaystyle \int \limits _0^1 {\Phi _{{{\tilde{e}}}_k}^{ - 1} \left( {1 - {\delta _k}} \right) } \mathrm{{d}}{\delta _k} \le 0, \quad \forall k \in L}\\ {x_{ijk}^p \ge 0, \quad \forall i \in M; \forall j \in N;\forall k \in L;\forall p \in R} \end{array}} \end{array}} \right. } \end{array}} \end{aligned}$$
(4)

Proof

Since \({\xi ^{p}_{ijk}},{{\tilde{a}}_i^{p}},{{\tilde{b}}_j^{p}},{{\tilde{e}}_k}\) for all i, j, k, p are independent uncertain variables with uncertainty distributions \( {\Phi _{\xi _{ijk}^p}},{\Phi _{{\tilde{a}}_i^p}},{\Phi _{{\tilde{b}}_j^p}},{\Phi _{{{{\tilde{e}}}_k}}}, \) respectively, it follows from the linearity of the expected value operator of independent uncertain variables (Theorem 2.2) that model (3) turns into the following model.

$$\begin{aligned} \left\{ {\begin{array}{l} {E [{f_{}}(x,\xi )] = \min \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\mathrm{{E}}\left[ {\xi _{ijk}^p} \right] } } } } }\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - E\left[ {{\tilde{a}}_i^p} \right] \le 0,\,\forall i \in M;\quad \forall p \in R}\\ {E\left[ {{\tilde{b}}_j^p} \right] - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0, \forall j \in N;\quad \forall p \in R}\\ {\begin{array}{l} {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p - } } } E\left[ {{{{\tilde{e}}}_k}} \right] \le 0, \quad \forall k \in L}\\ {x_{ijk}^p \ge 0, \quad \forall i \in M;\forall j \in N;\forall k \in L;\forall p \in R} \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(5)

From Definition 2.5 and Theorem 2.1, it is seen that this model is equivalent to model (4). Thus, the theorem is verified. \(\square \)

4.2 Expected constrained programming model

Expected constrained programming is another method for dealing with optimization problems in an uncertain environment. The main idea of the model is to optimize the expected value of the objective function under chance constraints. Its mathematical model is as follows.

$$\begin{aligned} \left\{ {\begin{array}{l} {E\,[{f_{}}(x,\xi )] = \min \mathrm{{E}}\left[ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {\left( {\xi _{ijk}^px_{ijk}^p} \right) } } } } } \right] }\\ {s.t.\left\{ {\begin{array}{l} {{\mathcal {M}}\left\{ {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - {\tilde{a}}_i^p \le 0} \right\} \ge \alpha _i^p,\quad \forall i \in M;\forall p \in R}\\ {{\mathcal {M}}\left\{ {{\tilde{b}}_j^p - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0} \right\} \ge \beta _j^p,\quad \forall j \in N;\forall p \in R}\\ \begin{array}{l} {\mathcal {M}}\left\{ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p - {{{\tilde{e}}}_k} \le 0} } } } \right\} \ge {\delta _k},\quad \forall k \in L\\ x_{ijk}^p \ge 0, \quad \forall i \in M; \forall j \in N; \forall k \in L; \forall p \in R \end{array} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(6)

where \({\alpha _i^{p}}\), \({\beta _j^{p}}\) and \({\delta _k}\), for \( \forall i \in M; \forall j \in N; \forall k \in L; \forall p \in R, \) are the confidence levels of the respective constraints.

Theorem 4.2

The model given above can be converted into its equivalent deterministic form as:

$$\begin{aligned} \left\{ {\begin{array}{l} {\min \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha }\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0,\quad \forall i \in M;\forall p \in R}\\ {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0,\,\,\forall j \in N;\forall p \in R}\\ \begin{array}{l} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p} } } - \Phi _{{{\tilde{e}}}_k}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0,\,\,\forall k \in L\\ x_{ijk}^p \ge 0,\,\,\forall i \in M;\,\forall j \in N;\forall k \in L;\forall p \in R \end{array} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(7)

where \({\xi ^{p}_{ijk}},{{\tilde{a}}_i^{p}},{{\tilde{b}}_j^{p}},{{\tilde{e}}_k},\)\( \forall ijk \) are independent uncertain variables with uncertainty distributions \( {\Phi _{\xi _{ijk}^p}},{\Phi _{{\tilde{a}}_i^p}},{\Phi _{{\tilde{b}}_j^p}},{\Phi _{{{{\tilde{e}}}_k}}}. \)

Proof

Since \( \xi _{ijk}^p \) has a regular uncertainty distribution \(\Phi _{ijk}^p,\) from Theorems 2.1 and 2.2, we write

$$\begin{aligned} \begin{array}{l} \mathrm{{E}}\left[ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {\left( {\xi _{ijk}^px_{ijk}^p} \right) } } } } } \right] \\ = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p \mathrm{{E}}\left[ {\xi _{ijk}^p} \right] } } } }= \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{d}\alpha } } } } \end{array}. \end{aligned}$$

Applying Theorem 2.3 to the constraints of model (6), we have

$$\begin{aligned}&{\mathcal {M}}\left\{ {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - {\tilde{a}}_i^p \le 0} \right\} \ge \alpha _i^p \\&\quad \Leftrightarrow \,\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0,\,\forall i \in M;\forall p \in R\\&{\mathcal {M}}\left\{ {{\tilde{b}}_j^p - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0} \right\} \ge \beta _j^p \\&\quad \Leftrightarrow \Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0,\,\,\,\forall j \in N;\forall p \in R\\&{\mathcal {M}}\left\{ {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p - {{{\tilde{e}}}_k} \le 0} } } } \right\} \ge {\delta _k} \\&\quad \Leftrightarrow \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p} } } - \Phi _{{{\tilde{e}}}_k}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0, \forall k \in L \end{aligned}$$

The above equivalences show that model (6) is equivalent to model (7). \(\square \)
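
The transformations in Theorems 4.1 and 4.2 only require evaluating inverse uncertainty distributions at the chosen confidence levels. The following short Python sketch (all parameters hypothetical) computes the deterministic bounds \(\Phi ^{-1}(1-\alpha )\) for a linear supply and \(\Phi ^{-1}(\beta )\) for a linear demand, which are exactly the right-hand sides appearing in model (7); the inverse of a normal uncertain variable (used for the cost coefficients in Sect. 7) is included for completeness.

```python
# Hypothetical illustration of the deterministic bounds in model (7).
import numpy as np

def inv_linear(alpha, a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return (1 - alpha) * a + alpha * b

def inv_normal(alpha, e, sigma):
    """Inverse uncertainty distribution of a normal uncertain variable N(e, sigma)."""
    return e + sigma * np.sqrt(3) / np.pi * np.log(alpha / (1 - alpha))

# Supply a~ ~ L(50, 70) with confidence level alpha = 0.9:
# the chance constraint  M{ sum x - a~ <= 0 } >= 0.9  becomes  sum x <= Phi^{-1}(1 - 0.9).
supply_bound = inv_linear(1 - 0.9, 50, 70)     # 52.0
# Demand b~ ~ L(30, 40) with confidence level beta = 0.85:
# M{ b~ - sum x <= 0 } >= 0.85  becomes  Phi^{-1}(0.85) <= sum x.
demand_bound = inv_linear(0.85, 30, 40)        # 38.5
print(supply_bound, demand_bound)
```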

5 Entropy-based MISTP models with uncertain variables

MISTPs with an entropy function have been studied by several scholars in the literature (Chen et al. 2017; Ojha et al. 2009), who used the Shannon entropy function. They showed that the entropy function in a transportation problem acts as a measure of the dispersal of trips among origins, destinations and conveyances and is useful for achieving the minimum transportation cost together with the maximum entropy. With the entropy objective, the carried products are spread over more destinations, although the cost is higher than in the case without the entropy function. In this paper, we use the uncertain entropy function as an additional objective in order to use as many routes of the transportation network as possible.

By applying Theorems 3.4 and 3.5 to the uncertain objective function of model (2), we define the following uncertain entropy function.

Lemma 5.1

Suppose \( \xi _{ijk}^p \) are independent uncertain variables with regular uncertainty distributions \(\Phi _{\xi _{ijk}^p}.\) If \(f:{\mathcal{R}^n} \rightarrow \mathcal{R}\) is a strictly increasing function with respect to \( \xi _{ijk}^p, \) then the uncertain function \( f\left( {x,\xi } \right) \) has an entropy, i.e.,

$$\begin{aligned} H\left[ {x,\xi } \right] = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \end{aligned}$$

It is noted that if \(f:{\mathcal{R}^n} \rightarrow \mathcal{R}\) is a strictly decreasing function with respect to \( \xi _{ijk}^p,\) then the uncertain function \( f\left( {x,\xi } \right) \) has an entropy

$$\begin{aligned} H\left[ {x,\xi } \right] = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( {1 - \alpha } \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \end{aligned}$$

Proof

Since \( \xi _{ijk}^p \) has a regular uncertainty distribution \(\Phi _{ijk}^p,\) we obtain

$$\begin{aligned} H\left[ {x,\xi } \right] = \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _{ - \infty }^\infty {f\left( {\Phi _{\xi _{ijk}^p}^{}\left( x \right) } \right) \mathrm{d}x} \end{aligned}$$

From Theorem 3.5, this equality can be rewritten as:

$$\begin{aligned} H\left[ {x,\xi } \right]= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _{ - \infty }^0 {\int \limits _0^{\Phi _{\xi _{ijk}^p}^{}\left( x \right) } {f'\left( \alpha \right) \mathrm{{d}}\alpha } } \mathrm{d}x \\&+ \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _{ - \infty }^0 {\int \limits _{\Phi _{\xi _{ijk}^p}^{}\left( x \right) }^\infty { - f'\left( \alpha \right) \mathrm{{d}}\alpha \mathrm{d}x} } \end{aligned}$$

where \( f'\left( \alpha \right) = {\left( { - \alpha \ln \alpha - \left( {1 - \alpha } \right) \ln \left( {1 - \alpha } \right) } \right) ^\prime } = - \ln \frac{\alpha }{{1 - \alpha }}. \)

By applying the Fubini Theorem to the above function, we obtain

$$\begin{aligned} H\left[ {x,\xi } \right]= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _{ - \infty }^0 {\int \limits _{\Phi _{\xi _{ijk}^p}^{ - 1} \left( x \right) }^0 {f'\left( \alpha \right) \mathrm{{d}}\alpha } } \mathrm{d}x \\&+ \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _{\Phi _{\xi _{ijk}^p}^{}\left( 0 \right) }^1 {\int \limits _0^{\Phi _{\xi _{ijk}^p}^{}\left( x \right) } { - f'\left( \alpha \right) \mathrm{{d}}\alpha \mathrm{d}x} }.\\= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) f'\left( \alpha \right) \mathrm{{d}}\alpha } \\= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha }. \end{aligned}$$

\(\square \)

Example 5.1

Let us consider the entropy objective of model (8) and suppose \( \xi _{ijk}^p \) are independent uncertain variables with linear distributions \( L\left( {a_{ijk}^p,b_{ijk}^p} \right) . \) Then the entropy objective takes the following closed form

$$\begin{aligned} H\left[ {x,\xi } \right]= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha \\= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\left( {\frac{{b_{ijk}^p - a_{ijk}^p}}{2}} \right) } } } } \end{aligned}$$

where

$$\begin{aligned} \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) = \left( {1 - \alpha } \right) a_{ijk}^p + \alpha b_{ijk}^p. \end{aligned}$$

Example 5.2

Suppose \( \xi _{ijk}^p \) are independent uncertain variables with zigzag distributions \( Z\left( {a_{ijk}^p,b_{ijk}^p,c_{ijk}^p} \right) . \) Then the entropy objective of model (8) takes the following closed form

$$\begin{aligned} H\left[ {x,\xi } \right]= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha \\= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\left( {\frac{{c_{ijk}^p - a_{ijk}^p}}{2}} \right) } } } } \end{aligned}$$

where

$$\begin{aligned} \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) = \left\{ {\begin{array}{*{20}{c}} {\left( {1 - 2\alpha } \right) a_{ijk}^p + 2\alpha b_{ijk}^p,}&{}{\mathrm{{if}}\quad \alpha < 0.5}\\ {\left( {2 - 2\alpha } \right) b_{ijk}^p + \left( {2\alpha - 1} \right) c_{ijk}^p,}&{}{\mathrm{{if}}\quad \alpha \ge 0.5.} \end{array}} \right. \end{aligned}$$

Example 5.3

Suppose \( \xi _{ijk}^p \) are independent uncertain variables with normal uncertainty distributions \( N\left( {e_{ijk}^p,\sigma _{ijk}^p} \right) . \) Then the entropy objective of model (8) takes the following closed form

$$\begin{aligned} H\left[ {x,\xi } \right]= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \int \limits _0^1 \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha \\= & {} \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p\left( {\frac{{\pi \sigma _{ijk}^p }}{{\sqrt{3} }}} \right) } } } } \end{aligned}$$

where \( \Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) = e_{ijk}^p + \frac{{\sigma _{ijk}^p\sqrt{3} }}{\pi }\ln \frac{\alpha }{{1 - \alpha }}. \)
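
The closed forms in Examples 5.1–5.3 can be checked numerically. The sketch below (not part of the paper; the distribution parameters are hypothetical) evaluates \(\int _0^1 \Phi ^{-1}(\alpha )\ln \frac{\alpha }{1-\alpha }\,\mathrm{d}\alpha \) for a linear, a zigzag and a normal uncertain variable and compares the results with \((b-a)/2\), \((c-a)/2\) and \(\pi \sigma /\sqrt{3}\), respectively.

```python
# Numerical check of Examples 5.1-5.3 with hypothetical distribution parameters.
import numpy as np
from scipy.integrate import quad

def entropy(inv_dist):
    """H = int_0^1 Phi^{-1}(alpha) * ln(alpha / (1 - alpha)) d(alpha)."""
    val, _ = quad(lambda al: inv_dist(al) * np.log(al / (1 - al)), 0.0, 1.0)
    return val

inv_lin = lambda al: (1 - al) * 2 + al * 6                        # L(2, 6)
inv_zig = lambda al: ((1 - 2 * al) * 1 + 2 * al * 3 if al < 0.5
                      else (2 - 2 * al) * 3 + (2 * al - 1) * 7)   # Z(1, 3, 7)
inv_nor = lambda al: 5 + 2 * np.sqrt(3) / np.pi * np.log(al / (1 - al))  # N(5, 2)

print(entropy(inv_lin), (6 - 2) / 2)             # ~2.0 vs 2.0
print(entropy(inv_zig), (7 - 1) / 2)             # ~3.0 vs 3.0
print(entropy(inv_nor), np.pi * 2 / np.sqrt(3))  # ~3.63 vs 3.63
```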

Adding the entropy function to model (4) as a second objective turns it into the following multi-objective expected value programming model.

$$\begin{aligned} \left\{ {\begin{array}{l} \begin{array}{l} \min \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{d}\alpha \\ \max \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \end{array}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } -\displaystyle \int \limits _0^1 {\Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p)\mathrm{d}\alpha _i^p} \le 0, \forall i \in M; \quad \forall p \in R}\\ {\displaystyle \int \limits _0^1 {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) \mathrm{d}\beta _j^p} - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0, \forall j \in N;\quad \forall p \in R}\\ {\begin{array}{l} {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p} } } - \displaystyle \int \limits _0^1 {\Phi _{{{\tilde{e}}}_k}^{ - 1}\left( {1 - {\delta _k}} \right) } \mathrm{d}{\delta _k} \le 0, \forall k \in L}\\ {x_{ijk}^p \ge 0, \forall i \in M; \forall j \in N;\forall k \in L;\forall p \in R} \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(8)

Similarly, adding the entropy function to model (7) turns it into the following multi-objective expected constrained programming model.

$$\begin{aligned} \left\{ {\begin{array}{l} \begin{array}{l} \min \sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{d}\alpha \\ \max \,\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \end{array}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0, \forall i \in M;\forall p \in R}\\ {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^M {\sum \limits _{k = 1}^L {x_{ijk}^p} } \le 0, \forall j \in N;\forall p \in R}\\ {\begin{array}{l} {\sum \limits _{p = 1}^R {\sum \limits _{i = 1}^M {\sum \limits _{j = 1}^N {x_{ijk}^p} } } - \Phi _{{{\tilde{e}}}_k}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0, \forall k \in L}\\ {x_{ijk}^p \ge 0, \forall i \in M; \forall j \in N;\forall k \in L;\forall p \in R} \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(9)
Fig. 1 Representation of the network for the considered MISTP

Table 1 Transportation cost \( {\xi ^{p}_{ijk}}\) for item \( p=1 \)
Table 2 Transportation cost \( {\xi ^{p}_{ijk}}\) for item \( p=2 \)

6 Methodologies for deterministic equivalences

6.1 Minimizing distance function

This method combines the objectives \( E[f_1({\varvec{x}},{\varvec{\xi }})]\) and \(E[f_2({\varvec{x}},{\varvec{\xi }})] \) through the distance of a solution from the ideal point \( (E_1^*,E_2^*), \) where \( E_i^* \) (\(i=1,2\)) is the optimal value of the i-th objective function taken alone. Thus, the multi-objective programming models (8) and (9) can be reformulated as single objective programming problems by minimizing the distance function, as follows.

$$\begin{aligned} \left\{ \begin{array}{l} \min \limits _{ {{{\varvec{x}}}}} \left( \sqrt{(E[f_1({\varvec{x}},{\varvec{\xi }})]-E_1^*)^2+(E[f_2({\varvec{x}},{\varvec{\xi }})]-E_2^*)^2}\right) \\ \hbox {subject to:} \\ \qquad \text{ constraints } \text{ of } \text{(8) } \text{ or } \text{(9) } \end{array}\right. \end{aligned}$$
(10)

Theorem 6.1

Let \( x^*\) be an optimal solution of model (10). Then \( x^* \) is a Pareto optimal solution of model (8) or (9).

Proof

Assume that the optimal solution \( x^* \) is not a Pareto optimal solution of uncertain programming model (8) (or model (9)). Then there must exist a feasible solution x such that

$$\begin{aligned}&\left( \sqrt{(E[f_1({\varvec{x}},{\varvec{\xi }})]-E_1^*)^2+(E[f_2({\varvec{x}},{\varvec{\xi }})]-E_2^*)^2} \right) \\&\quad <\left( \sqrt{(E[f_1({\varvec{x}}^*,{\varvec{\xi }})]-E_1^*)^2 +(E[f_2({\varvec{x}}^*,{\varvec{\xi }})]-E_2^*)^2}\right) . \end{aligned}$$

This implies that \( x^* \) is not an optimal solution of model (10), a contradiction. Hence \( x^* \) is a Pareto optimal solution of model (8) (or model (9)). \(\square \)
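
A minimal Python sketch of the distance-minimization scalarization (10) is given below; it assumes the two deterministic objectives have already been reduced to linear forms \(c_1^{\top }x\) and \(c_2^{\top }x\) (as in Sect. 4) with known ideal values \(E_1^*, E_2^*\). All numbers and the single constraint block are hypothetical placeholders, not the paper's data.

```python
# Hypothetical sketch of the distance-minimization scalarization (10).
# c1, c2 are the coefficient vectors of the two reduced (deterministic) objectives.
import numpy as np
from scipy.optimize import minimize

c1 = np.array([3.0, 5.0, 4.0])          # expected-cost coefficients (hypothetical)
c2 = np.array([1.0, 2.0, 1.5])          # entropy coefficients (hypothetical)
E1_star, E2_star = 20.0, 15.0           # ideal values of the two objectives (hypothetical)

def distance(x):
    """Euclidean distance of (f1(x), f2(x)) from the ideal point (E1*, E2*)."""
    return np.sqrt((c1 @ x - E1_star) ** 2 + (c2 @ x - E2_star) ** 2)

# One illustrative linear constraint block A x <= b standing in for (8)/(9).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([10.0])
cons = [{"type": "ineq", "fun": lambda x: b - A @ x}]   # A x <= b  <=>  b - A x >= 0

res = minimize(distance, x0=np.zeros(3), method="SLSQP",
               bounds=[(0, None)] * 3, constraints=cons)
print(res.x, res.fun)
```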

6.2 Linear weighting method

The linear weighting method has been widely employed for solving multi-objective programming problems. In this method, the weights reflect the relative importance of each objective as determined by the decision makers.

Table 3 Sources \({{\tilde{a}}^p_i}\)
Table 4 Demands \({\tilde{b}}^p_j\)

The multi-objective problem can be transformed into the following single objective programming problem by the weighting method.

$$\begin{aligned} \left\{ \begin{array}{l} \min \limits _{{{{\varvec{x}}}}} \left( w_1E[f_1({\varvec{x}},{\varvec{\xi }})]+(-w_2)E[f_2({\varvec{x}},{\varvec{\xi }})]\right) \\ \hbox {subject to:}\\ \qquad \text{ constraints } \text{ of } \text{(8) } \text{ or } \text{(9) } \end{array}\right. \end{aligned}$$
(11)

where the weights \( w_1, w_2\) are positive numbers with \(w_1+w_2=1.\)

Theorem 6.2

Let \( x^* \) be an optimal solution of problem (11). Then \( x^* \) is a Pareto optimal solution of model (8) or (9).

Proof

Suppose that \( x^* \) is not a Pareto optimal solution of model (8) or (9). Then there must exist a feasible solution x such that \( \left( w_1E[f_1({\varvec{x}},{\varvec{\xi }})]+(-w_2)E[f_2({\varvec{x}},{\varvec{\xi }})]\right) < \left( w_1E[f_1({\varvec{x}}^*,{\varvec{\xi }})]+(-w_2)E[f_2({\varvec{x}}^*,{\varvec{\xi }})]\right) ,\) where the weights \( w_1, w_2\) are positive numbers with \(w_1+w_2=1.\) This indicates that \( x^* \) is not an optimal solution of model (11), a contradiction. Therefore, \( x^* \) is a Pareto optimal solution of model (8) or (9). \(\square \)
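
Since both reduced objectives in (11) are linear in x, the weighted scalarization remains a linear program. A hedged sketch follows, reusing the hypothetical coefficient vectors of the previous sketch together with illustrative constraints; none of these numbers come from the paper.

```python
# Hypothetical sketch of the linear weighting scalarization (11):
# minimize  w1 * (c1 . x) - w2 * (c2 . x)  subject to linear constraints.
import numpy as np
from scipy.optimize import linprog

c1 = np.array([3.0, 5.0, 4.0])     # expected-cost coefficients (hypothetical)
c2 = np.array([1.0, 2.0, 1.5])     # entropy coefficients (hypothetical)
w1, w2 = 0.6, 0.4                  # decision-maker weights, w1 + w2 = 1

A_ub = np.array([[1.0, 1.0, 1.0],      # illustrative capacity-type constraint
                 [-1.0, -1.0, 0.0]])   # illustrative demand-type constraint (>= rewritten as <=)
b_ub = np.array([10.0, -4.0])

res = linprog(w1 * c1 - w2 * c2, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
print(res.x, res.fun)
```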

7 A numerical example

In order to confirm the employment of these models, we give a numerical experiment in this section. Suppose that two products (items) are to be carried from three origins (\( O_1,O_2,O_3 \)) to four destinations (\( D_1,D_2,D_3,D_4 \)) through two distinct conveyances (\( C_1,C_2 \)).

The schematic illustration of the considered MISTP is given in Fig. 1, where the arcs represent all the potential routes for carrying the two different products from the origins to the destinations via the conveyances.

Let us assume that the cost parameters of the objective function are normal uncertain variables and that the parameters of the constraints are linear uncertain variables. All data of the considered model are presented in Tables 1, 2, 3, 4 and 5, respectively. The computations are carried out on a personal computer (Intel(R) Core(TM) i3-4005U @ 1.70 GHz, 4 GB memory) employing the Maple 2018 optimization toolbox.

$$\begin{aligned}&{\xi _{ijk}^p \rightarrow \mathrm{N}\left( {e_{ijk}^p,\sigma _{ijk}^p} \right) ,\quad i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] }\\&{{{{\tilde{a}}}^p}_i \rightarrow L\left( {a_i^p,b_i^p} \right) , \quad i \in \left[ {1,3} \right] ;\,p \in \left[ {1,2} \right] }\\&{{{{\tilde{b}}}^p}_j \rightarrow L\left( {b_j^p,b_j^p} \right) ,\quad \,\,j \in \left[ {1,4} \right] ;\,p \in \left[ {1,2} \right] }\\&{{{{\tilde{e}}}_k} \rightarrow L\left( {b_j^p,b_j^p} \right) ,\quad k \in \left[ {1,2} \right] } \end{aligned}$$
Table 5 Transportation capacities \({\tilde{e}}_k \)

Following from the above data tables, the expected value function for the MISTP is defined as

$$\begin{aligned}&\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{d}\alpha \\&= \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } e_{ijk}^p \end{aligned}$$

and from Lemma 5.1, it has an entropy

$$\begin{aligned}&\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \\&\quad = \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p\left( {\frac{{\pi \sigma _{ijk}^p}}{{\sqrt{3} }}} \right) } } } } \end{aligned}$$

Denote the expected value function and its entropy function by \( f_1\left( x,\xi \right) \) and \( f_2\left( x,\xi \right) , \) respectively.

Because the aim of this problem is to use as many routes as possible with minimum cost, the corresponding multi-objective expected value programming model for the entropy-based MISTP with uncertain variables is formulated as follows:

$$\begin{aligned} \left\{ {\begin{array}{l} \begin{array}{l} \min \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } }\displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) d \alpha \\ \max \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } \end{array}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \displaystyle \int \limits _0^1 {\Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p)\mathrm{{d}}\alpha _i^p} \le 0, i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\displaystyle \int \limits _0^1 {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) \mathrm{{d}}\beta _j^p} - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0, j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \displaystyle \int \limits _0^1 {\Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) } \mathrm{{d}}{\delta _k} \le 0, k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0, i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(12)

Solving problem (12) twice as a single objective programming problem, each time ignoring the other objective, the results obtained are as follows:

$$\begin{aligned}&\hbox {min E}\left[ {{f_1}} \right] =1637.5\,\hbox {and}\,\hbox {max E}\left[ {{f_1}} \right] =6340 \\&\hbox {min E}\left[ {{f_2}} \right] =913.820\,\hbox {and}\,\hbox {max E} \left[ {{f_2}} \right] =2371.850 \\ \end{aligned}$$

By applying distance minimization method (10) and linear weighted method (11) to multi-objective expected value programming problem (12), the following single objective programming problems are obtained, respectively.

$$\begin{aligned}&\left\{ {\begin{array}{l} \min \sqrt{{\left( {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha - \mathrm{{1637}}\mathrm{{.5}}} \right) }^2+ {\left( {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } - \mathrm{{2371}}\mathrm{{.850}}} \right) }^2} \\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \displaystyle \int \limits _0^1 {\Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p)\mathrm{{d}}\alpha _i^p} \le 0,\, i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\displaystyle \int \limits _0^1 {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) \mathrm{{d}}\beta _j^p} - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0, j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \displaystyle \int \limits _0^1 {\Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) } \mathrm{{d}}{\delta _k} \le 0, k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0, i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(13)
$$\begin{aligned}&\left\{ {\begin{array}{l} \min \left( {w_1}\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha \right. \\ \left. + \left( { - {w_2}\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } } \right) \right) \\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \displaystyle \int \limits _0^1 {\Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p)\mathrm{{d}}\alpha _i^p} \le 0, i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\displaystyle \int \limits _0^1 {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) \mathrm{{d}}\beta _j^p} - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0, j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \displaystyle \int \limits _0^1 {\Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) } \mathrm{{d}}{\delta _k} \le 0, k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0, i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \nonumber \\ \end{aligned}$$
(14)

where the weights \( w_1, w_2\) are positive numbers with \(w_1+w_2=1.\)

The above problems are solved by Maple 2018 optimization toolbox and then the optimal solutions are obtained as follows: Model (13):

$$\begin{aligned} x^1_{122}= & {} 52.500, x^1_{141} = 12.500, x^1_{232} = 25,\\ x^1_{312}= & {} 100, x^1_{{342}}= 25, x^2_{{131}}= 125, x^2_{{222}}= 80, x^2_{{232}}= 50, \\ x^2_{{312}}= & {} 77.500, x^2_{{342}}= 15 \end{aligned}$$

Model (14):

$$\begin{aligned} x^1_{122}= & {} 52.500, x^1_{141} = 12.500, x^1_{232} = 25,\\ x^1_{312}= & {} 100, x^1_{{342}}= 25, x^2_{{131}}= 125, x^2_{{222}}= 80, \\ x^2_{{232}}= & {} 50, x^2_{{312}}=115, x^2_{{342}}= 15 \end{aligned}$$

Moreover, the value of each objective of model (12) is given in Table 6.

Similarly, the corresponding multi-objective expected value programming model under chance constraints for entropy-based MISTP with uncertain variables is formulated as follows:

$$\begin{aligned} \left\{ {\begin{array}{l} {\begin{array}{l} {\min \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha }\\ {\max \sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } } \end{array}}\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0,\,i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0,j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0,k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0,i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(15)

Let \(\alpha ^p_i=0.9\), \(\beta ^p_j=0.85\) and \(\delta _k=0.95\) for \(i = 1,2,3\), \(j = 1,2,3,4\), \(k = 1,2\), \(p=1,2\) be the predetermined confidence levels of model (15).

Solving (15) twice as a single objective programming problem under the system constraints, each time neglecting the other objective, we obtain

$$\begin{aligned} \hbox {min E}\left[ {{f_1}} \right] =1288.250\,\hbox {and}\, \hbox {max E}\left[ {{f_1}} \right] =5580.250 \\ \hbox {min E}\left[ {{f_2}} \right] =721.759\,\hbox {and}\, \hbox {max E} \left[ {{f_2}} \right] =2083.419 \end{aligned}$$

Using distance minimization method (10) and linear weighted method (11), the above multi-objective expected value programming problem under chance constraints is transformed into the following single objective programming problems, respectively.

Minimizing distance function method under chance constraints:

$$\begin{aligned} \left\{ {\begin{array}{l} {\min \sqrt{{{\left( {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha - \mathrm{{1288}}\mathrm{{.250}}} \right) }^2} + {{\left( {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } - \mathrm{{2083}}\mathrm{{.419}}} \right) }^2}} }\\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0,\,i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0,j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0,k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0,i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(16)

where the confidence levels are \(\alpha ^p_i=0.9\), \(\beta ^p_j=0.85\) and \(\delta _k=0.95\) for \(i = 1,2,3\), \(j = 1,2,3,4\), \(k = 1,2\), \(p=1,2\), respectively.

By applying the weighting method, model (15) can be rewritten in the following form.

The linear weighted method under chance constraints:

$$\begin{aligned} \left\{ {\begin{array}{l} \min \left( {w_1}\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}} \left( \alpha \right) \mathrm{{d}}\alpha \right. \\ \left. + \left( { - {w_2}\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } } } \displaystyle \int \limits _0^1 {\Phi _{\xi _{ijk}^p}^{ - 1}\left( \alpha \right) \ln \frac{\alpha }{{1 - \alpha }}\mathrm{{d}}\alpha } } \right) \right) \\ {s.t.\left\{ {\begin{array}{l} {\sum \limits _{j = 1}^N {\sum \limits _{k = 1}^L {x_{ijk}^p} } - \Phi _{{\tilde{a}}_i^p}^{ - 1}(1 - \alpha _i^p) \le 0,\,i \in \left[ {1,3} \right] ;p \in \left[ {1,2} \right] }\\ {\Phi _{{\tilde{b}}_j^p}^{ - 1}\left( {\beta _j^p} \right) - \sum \limits _{i = 1}^3 {\sum \limits _{k = 1}^2 {x_{ijk}^p} } \le 0,j \in \left[ {1,4} \right] ;p \in \left[ {1,2} \right] }\\ {\begin{array}{l} {\sum \limits _{p = 1}^2 {\sum \limits _{i = 1}^3 {\sum \limits _{j = 1}^4 {x_{ijk}^p} } } - \Phi _{\tilde{c}_k^{}}^{ - 1}\left( {1 - {\delta _k}} \right) \le 0,k \in \left[ {1,2} \right] }\\ {x_{ijk}^p \ge 0,i \in \left[ {1,3} \right] ;j \in \left[ {1,4} \right] ;k \in \left[ {1,2} \right] ;p \in \left[ {1,2} \right] } \end{array}} \end{array}} \right. } \end{array}} \right. \end{aligned}$$
(17)

where the weights \( w_1, w_2\) are positive numbers with \(w_1+w_2=1,\) and the predetermined confidence levels are \(\alpha ^p_i=0.9\), \(\beta ^p_j=0.85\) and \(\delta _k=0.95\) for \(i = 1,2,3\), \(j = 1,2,3,4\), \(k = 1,2\), \(p=1,2\), respectively.

Problems (16) and (17) are solved by the Maple 2018 optimization toolbox, and the optimal solutions obtained are as follows: Model (16):

$$\begin{aligned} x^1_{122}= & {} 36.750, x^1_{141} = 6.25,\\ x^1_{232}= & {} 18, x^1_{312} = 82.5, x^1_{{342}}= 22.5, x^2_{{131}}= 105,\\ x^2_{{222}}= & {} 66, x^2_{{232}}= 58.5, x^2_{{312}}=61.75, x^2_{{342}}= 11.5 \end{aligned}$$
Table 6 Comparative results for different models

Model (17):

$$\begin{aligned} x^1_{122}= & {} 36.750, x^1_{141} = 6.25, x^1_{232} = 18,\\ x^1_{312}= & {} 82.5, x^1_{{342}}= 22.5, x^2_{{131}}= 105, x^2_{{222}}= 66,\\ x^2_{{232}}= & {} 10.5, x^2_{{312}}=103.5, x^2_{{342}}= 11.5 \end{aligned}$$

Moreover, the value of each objective of model (15) is given in Table 6. To determine the degree of nearness of the obtained solutions to the ideal solution, we define the following distance function (Dalman and Bayram 2017):

$$\begin{aligned} {D_p}\left( {\gamma ,i} \right) = {\left[ {\sum \limits _{i = 1}^n {\gamma _i^p\left( {1 - {\tau _i}} \right) _{}^2} } \right] ^{\frac{1}{p}}} \end{aligned}$$

where \( \tau _i \) denotes the degree of nearness of the solution derived from (12) to the ideal solution corresponding to the ith objective, \( \gamma _i \) are the weights of the objectives with \( \sum \nolimits _{i = 1}^n {\gamma _i^{}} = 1 \) (taken equal here), and p is the distance parameter with \(1 \le p \le \infty .\)

Hence, a solution is regarded as closer to the ideal solution, and thus better than the others, if it yields the minimum \( {D_p}\left( {\gamma ,i} \right) \) for some p; in this paper, the maximum entropy value provides solutions nearer to the ideal solution.

Table 6 shows that the transportation cost increases when the entropy increases. However, in terms of the distance function \( D_2 \), the solutions obtained with the entropy objective are the closest to the ideal solutions. Moreover, for both models (the expected value programming and expected constrained programming models), the results closest to the ideal were achieved with the distance minimization method.

8 Conclusions

This paper investigates the entropy-based multi-item solid transportation problem with uncertain variables. The uncertain entropy function works as a measure of the dispersal of trips among the origins, destinations and conveyances of the model. When the number of products to be carried and of route points is large enough, the uncertain entropy measure is added as an additional objective. Applying expected value programming and expected constrained programming to the MISTP with uncertain variables transforms it into deterministic multi-objective models. These deterministic models are then reduced to single objective programming problems by the distance minimization and linear weighting methods. The optimal solutions of these deterministic models are obtained with the Maple 2018 optimization toolbox. Finally, the numerical experiments show that the solutions obtained by adding the entropy objective are the closest to the ideal solutions.

The presented models and their solution procedures can be applied to various uncertain optimization models such as step fixed charge solid transportation, uncertain portfolio distribution and urban planning.