1 Introduction

The minimum spanning tree (MST) problem, which aims to find a spanning tree of least total weight in a deterministic network, was first introduced by Borůvka (1926). Extending Borůvka’s work, Narula and Ho (1980) aimed to find the MST of a deterministic network subject to a degree constraint on each node, i.e., the degree-constrained minimum spanning tree (DCMST) problem. In a deterministic network, all the weights are crisp numbers. In the past decades, the DCMST problem in deterministic networks has drawn great interest from researchers around the world, and it has been applied in many fields such as transportation, communications and logistics. Many efficient classical algorithms have been used to solve the DCMST problem, such as heuristic algorithms (Narula and Ho 1980; Volgenant 1989), ant colony optimization (Bau et al. 2008), evolutionary algorithms (Raidl 2000), and genetic algorithms (Zhou and Gen 1997; Mu and Zhou 2008).

However, in real networks the weights are often subject to indeterminacy. In order to model indeterminacy in a network, the concept of a random network was first proposed by Frank and Hakimi (1965) for modeling communication networks with random capacities. Knowles and Corne (2000) first investigated the DCMST problem with random weights. Subsequently, Torkestani (2012, 2013) proposed algorithms based on learning automata to solve the DCMST problem and the min-degree constrained MST problem in random networks. Although random networks have been successfully applied to non-deterministic DCMST problems, in reality the indeterminacy present in a network cannot always be explained by randomness, since in practice the network is inevitably affected by collisions, congestion and interference. When statistical data about the weights are lacking at decision time, the concept of a random network cannot be adopted fruitfully. In this case, we have to rely on experts to obtain their belief degrees about the possible outcomes. As a breakthrough in dealing with the lack of historical data on the weights, uncertain networks were first investigated by Liu (2010a) for modeling a project scheduling problem in which the durations of activities were uncertain variables. Later, Gao et al. (2015) investigated the distribution of the diameter of an uncertain graph, and its edge-connectivity was discussed by Gao and Qin (2016). Besides, the shortest path problem with uncertain arcs was investigated by Gao (2011), the maximum flow problem of an uncertain network by Han et al. (2014), and the uncertain minimum cost flow problem by Ding (2014); Gao et al. (2016) proposed uncertain models for a railway transportation planning problem. Related uncertain MST problems have also been solved: the inverse MST problem was investigated by Zhang et al. (2013), the quadratic MST problem by Zhou et al. (2014), and path optimality conditions for the MST problem by Zhou et al. (2015); the DCMST problem in uncertain networks was first introduced by Gao and Jia (2017), who proposed uncertain programming models for it.

In our daily life, uncertainty and randomness may coexist in a complex network. For example, some weights may have no historical data while others have enough statistical data to obtain their probability distributions. In order to deal with this phenomenon, the concept of an uncertain random network was first proposed by Liu (2014), and many network optimization problems were subsequently studied within this framework, for example, the MST problem by Sheng et al. (2017), the maximum flow problem by Sheng and Gao (2014), and the shortest path problem by Sheng and Gao (2016). In this paper, we consider the DCMST problem for an uncertain random network in which some weights are uncertain variables and the others are random variables. We propose an ideal chance distribution and formulate an uncertain random programming model to find the DCMST.

The remainder of this paper is organized as follows. In Sect. 2, we review some preliminaries of uncertainty theory, chance theory and uncertain random networks. In Sect. 3, we introduce the concept of the ideal chance distribution of the DCMST. In Sect. 4, an uncertain random programming model is formulated to find the DCMST. In Sect. 5, a numerical example is given to illustrate the effectiveness of the model. In Sect. 6, we give a brief summary of this paper.

2 Preliminary

In this section, we review some basic concepts and properties of uncertainty theory, chance theory and uncertain random networks.

2.1 Uncertainty theory

For modelling belief degrees, uncertainty theory was founded by Liu (2007) and refined by Liu (2009a) based on the normality, duality, subadditivity, and product axioms. Uncertainty theory has become a branch of mathematics. In theory, uncertain processes (Liu 2008), uncertain differential equations (Chen and Liu 2010; Yang and Yao 2016) and uncertain set theory (Liu 2010b) have been established. In practice, uncertain programming (Liu 2009b), uncertain inference (Gao et al. 2010), uncertain statistics (Liu 2010a), uncertain optimal control (Zhu 2010), and uncertain differential games (Gao 2013; Yang and Gao 2013, 2016) have also developed quickly. Nowadays, uncertainty theory has been applied to many areas such as economics (Yang and Gao 2017, 2016), management (Gao et al. 2017; Gao and Yao 2015) and finance (Chen and Gao 2013; Guo and Gao 2017).

Let \(\Gamma\) be a nonempty set and \(\mathcal {L}\) be a \(\sigma\)-algebra over \(\Gamma\). Each element \(\Lambda \in \mathcal {L}\) is called an event. Liu (2007) defined an uncertain measure \(\mathcal {M}\) by the following axioms:

Axiom 1

(Normality Axiom) \(\mathcal {M}\{\Gamma \} = 1\) for the universal set \(\Gamma\).

Axiom 2

(Duality Axiom) \(\mathcal {M}\{\Lambda \} + \mathcal {M}\{\Lambda ^{c}\} = 1\) for any event \(\Lambda\).

Axiom 3

(Subadditivity Axiom) For every countable sequence of events \(\Lambda _{1}, \Lambda _{2}, \ldots\), we have

$$\begin{aligned} \mathcal {M}\left\{ \bigcup \limits _{i=1}^{\infty }\Lambda _{i}\right\} \le \sum \limits _{i=1}^{\infty }\mathcal {M}\{\Lambda _{i}\}. \end{aligned}$$

The triplet \((\Gamma ,\mathcal {L},\mathcal {M})\) is called an uncertainty space. Furthermore, Liu (2009a) defined a product uncertain measure by the fourth axiom:

Axiom 4

(Product Axiom) Let \((\Gamma _{k},\mathcal{L}_{k},\mathcal {M}_{k})\) be uncertainty spaces for \(k=1,2,\ldots\) The product uncertain measure \(\mathcal {M}\) is an uncertain measure satisfying

$$\begin{aligned} \mathcal {M}\left\{ \prod \limits _{k=1}^{\infty }\Lambda _{k}\right\} =\bigwedge _{k=1}^{\infty }\mathcal {M}_{k}\{\Lambda _{k}\} \end{aligned}$$

where \(\Lambda _{k}\) are arbitrarily chosen events from \(\mathcal {L} _{k}\) for \(k=1,2,\ldots ,\) respectively.

Definition 1

(Liu 2007) An uncertain variable is a measurable function \(\xi\) from an uncertainty space \((\Gamma ,\mathcal {L},\mathcal {M})\) to the set of real numbers, i.e., for any Borel set B of real numbers, the set

$$\begin{aligned} \{\xi \in B\}=\{\gamma \in \Gamma \mid \xi (\gamma )\in B\} \end{aligned}$$

is an event.

The uncertainty distribution of an uncertain variable \(\xi\) is defined by \(\Phi (x)=\mathcal {M}\{\xi \le x\}\) for any real number x. For example, the linear uncertain variable \(\mathcal {L}(a,b)\) has the following uncertainty distribution, as shown in Fig. 1:

$$\Phi _{1}(x) = \left\{ \begin{array}{ll} 0, &{} \quad \text {if} \quad x < a\\ (x-a)/(b-a), &{} \quad \text {if} \quad a \le x \le b\\ 1, &{} \quad \text {if} \quad x > b \end{array}\right.$$
Fig. 1 Linear uncertainty distribution

and the zigzag uncertain variable \(\mathcal {Z}(a,b,c)\) has the following uncertainty distribution, as shown in Fig. 2:

Fig. 2 Zigzag uncertainty distribution

$$\Phi _{2}(x) = \left\{ \begin{array}{ll} 0, &{} \quad \text {if} \quad x \le a\\ (x-a)/(2(b-a)), &{} \quad \text {if} \quad a < x \le b\\ (x+c-2b)/(2(c-b)), &{} \quad \text {if} \quad b < x \le c\\ 1, &{} \quad \text {if} \quad x > c. \end{array}\right.$$

An uncertainty distribution \(\Phi\) is said to be regular if its inverse function \(\Phi ^{-1}(\alpha )\) exists and is unique for each \(\alpha \in (0,1)\). It is clear that the linear uncertain variable and zigzag uncertain variable are regular, and their inverse uncertainty distributions are as follows,

$$\begin{aligned} \Phi _{1}^{-1}(\alpha )=(1-\alpha )a+\alpha b, \end{aligned}$$
$$\begin{aligned} \Phi _{2}^{-1}(\alpha )= {\left\{ \begin{array}{ll}a+2(b-a)\alpha ,&{} \quad \text {if} \quad \alpha \le 0.5\\ 2b-c+2(c-b)\alpha ,&{} \quad \text {if} \quad \alpha> 0.5.\\ \end{array}\right. } \end{aligned}$$
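For concreteness, both inverse distributions above can be evaluated directly; the following is a minimal Python sketch (the function names are ours, not from the paper):

```python
def linear_inv(alpha, a, b):
    # Inverse uncertainty distribution of the linear variable L(a, b)
    return (1 - alpha) * a + alpha * b

def zigzag_inv(alpha, a, b, c):
    # Inverse uncertainty distribution of the zigzag variable Z(a, b, c)
    if alpha <= 0.5:
        return a + 2 * (b - a) * alpha
    return 2 * b - c + 2 * (c - b) * alpha
```

For instance, at \(\alpha =0.5\) both formulas return the midpoint b of a symmetric distribution, as expected.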

Definition 2

(Liu 2009a) The uncertain variables \(\xi _1,\) \(\xi _2,\ldots , \xi _n\) are said to be independent if

$$\begin{aligned} \mathcal {M}\left\{ \bigcap _{i=1}^{n}\{\xi _i\in B_i\}\right\} =\min _{1\le i\le n}\mathcal {M}\{\xi _i\in B_i\} \end{aligned}$$

for any Borel sets \(B_1, B_2,\ldots , B_n\) of real numbers.

It is usually assumed that all uncertainty distributions in practical applications are regular. In the following, we will see that the inverse uncertainty distribution \(\Phi ^{-1}(\alpha )\) has some good operational properties, which make it easier to obtain solutions of uncertain programming problems.

Theorem 1

(Liu 2010a) Let \(\xi _1,\xi _2,\ldots ,\xi _n\) be independent uncertain variables with regular uncertainty distributions \(\Phi _1, \Phi _2,\ldots , \Phi _n,\) respectively. If \(f(\xi _1,\xi _2,\ldots ,\xi _n)\) is strictly increasing with respect to \(\xi _1,\xi _2,\ldots ,\xi _m\) and strictly decreasing with respect to \(\xi _{m+1}, \xi _{m+2}, \ldots , \xi _n,\) then

$$\begin{aligned} \xi =f(\xi _1,\xi _2,\ldots ,\xi _n) \end{aligned}$$

has an inverse uncertainty distribution

$$\begin{aligned} \begin{array}{ll} \Psi ^{-1}(\alpha )=f(\Phi _1^{-1}(\alpha ),\Phi _2^{-1}(\alpha ),\ldots , \Phi _m^{-1}(\alpha ),\\ \Phi _{m+1}^{-1}(1-\alpha ), \ldots , \Phi _n^{-1}(1-\alpha )). \end{array} \end{aligned}$$

Example 1

Let \(\xi _1,\xi _2,\ldots ,\xi _n\) be independent uncertain variables with regular uncertainty distributions \(\Phi _1, \Phi _2,\ldots , \Phi _n,\) respectively. Then

$$\begin{aligned} \xi =\xi _1+\xi _2+\cdots +\xi _n \end{aligned}$$

has an inverse uncertainty distribution

$$\begin{aligned} \Psi ^{-1}(\alpha )=\Phi _1^{-1}(\alpha )+\Phi _2^{-1}(\alpha )+\cdots +\Phi _n^{-1}(\alpha ). \end{aligned}$$
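Assuming, for illustration, that the summands are linear uncertain variables, Example 1 reduces to a one-line computation; a hedged Python sketch (helper names are ours):

```python
def linear_inv(alpha, a, b):
    # Inverse uncertainty distribution of L(a, b)
    return (1 - alpha) * a + alpha * b

def sum_inv(alpha, params):
    # Inverse distribution of xi_1 + ... + xi_n for independent
    # linear uncertain variables L(a_i, b_i), by Example 1
    return sum(linear_inv(alpha, a, b) for a, b in params)
```

In particular, the sum of L(0, 2) and L(1, 3) behaves as L(1, 5), so `sum_inv(alpha, [(0, 2), (1, 3)])` coincides with `linear_inv(alpha, 1, 5)`.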

Definition 3

(Liu 2007) Let \(\xi\) be an uncertain variable. Then the expected value of \(\xi\) is defined by

$$\begin{aligned} E[\xi ]= \int ^{+\infty }_{0}\mathcal {M}\{\xi \ge x\}\mathrm{d}x-\int ^{0}_{-\infty }\mathcal {M}\{\xi \le x\}\mathrm{d}x \end{aligned}$$

provided that at least one of the two integrals is finite.

Theorem 2

(Liu 2007) Let \(\xi\) be an uncertain variable with uncertainty distribution \(\Phi\). Then

$$\begin{aligned} E[\xi ]=\int ^{+\infty }_{0}(1-\Phi (x))\mathrm{d}x-\int ^{0}_{-\infty }\Phi (x)\mathrm{d}x. \end{aligned}$$

According to Theorem 2, the expected value of a linear uncertain variable \(\mathcal {L}(a,b)\) is

$$\begin{aligned} E[\xi ]=(a+b)/ 2, \end{aligned}$$

and the expected value of a zigzag uncertain variable \(\mathcal {Z}(a,b,c)\) is

$$\begin{aligned} E[\xi ]=(a+2b+c)/ 4. \end{aligned}$$
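These closed forms can be cross-checked against the integral formula of Theorem 2; a rough numerical sketch (the truncation bounds and grid size are arbitrary choices of ours):

```python
def linear_cdf(x, a, b):
    # Uncertainty distribution of the linear variable L(a, b)
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def expected_value(cdf, lo=-50.0, hi=50.0, n=100000):
    # Midpoint approximation of Theorem 2:
    # E[xi] = int_0^inf (1 - Phi(x)) dx - int_{-inf}^0 Phi(x) dx
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += ((1.0 - cdf(x)) if x >= 0 else -cdf(x)) * h
    return total
```

For L(1, 3) the approximation is close to \((1+3)/2=2\), matching the closed form above.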

2.2 Chance theory

Chance theory was pioneered by Liu (2013a) for modelling complex systems with not only uncertainty but also randomness. In this theory, the concepts of uncertain random variable, chance measure and chance distribution were proposed, together with the expected value and variance of uncertain random variables. The operational law of uncertain random variables was presented by Liu (2013b). As an extension, a law of large numbers for uncertain random variables was proved by Yao and Gao (2016), and uncertain random processes were investigated by Gao and Yao (2015). In practice, uncertain random programming (Liu 2013b) and uncertain random risk analysis (Liu and Ralescu 2014) have been developed.

Let \((\Gamma , \mathcal {L} , \mathcal {M})\) be an uncertainty space and let \((\Omega , \mathcal {A},\Pr )\) be a probability space. Then the product \((\Gamma , \mathcal {L} , \mathcal {M})\times (\Omega , \mathcal {A}, \Pr )\) is called a chance space.

Definition 4

(Liu 2013a) Let \((\Gamma , \mathcal {L} , \mathcal {M})\times (\Omega , \mathcal {A}, \Pr )\) be a chance space, and \(\Theta \in \mathcal {L} \times \mathcal {A}\) be an event. Then the chance measure of \(\Theta\) is defined as

$$\begin{aligned} \mathrm{Ch}\{\Theta \}=\int ^{1}_{0}\Pr \{\omega \in \Omega |\mathcal {M}\{\gamma \in \Gamma | (\gamma , \omega ) \in \Theta \} \ge x\}\mathrm{d}x. \end{aligned}$$

Theorem 3

(Liu 2013a) Let \((\Gamma , \mathcal {L} , \mathcal {M})\times (\Omega , \mathcal {A}, \Pr )\) be a chance space. Then

$$\begin{aligned} \mathrm{Ch}\{\Lambda \times A\}=\mathcal {M}\{\Lambda \} \times \Pr \{A\} \end{aligned}$$

for any \(\Lambda \in \mathcal {L}\) and any \(A \in \mathcal {A}\). Especially, we have

$$\begin{aligned} \mathrm{Ch}\{\emptyset \} = 0,~~\mathrm{Ch}\{\Gamma \times \Omega \}=1. \end{aligned}$$

Theorem 4

(Liu 2013a, Monotonicity Theorem) The chance measure is a monotone increasing set function. That is, for any events \(\Theta _1\) and \(\Theta _{2}\) with \(\Theta _1\subset \Theta _{2}\), we have

$$\begin{aligned} \mathrm{Ch}\{\Theta _1\}\le \mathrm{Ch}\{\Theta _{2}\}. \end{aligned}$$

Theorem 5

(Liu 2013a, Duality Theorem) The chance measure is self-dual. That is, for any event \(\Theta\), we have

$$\begin{aligned} \mathrm{Ch}\{\Theta \}+\mathrm{Ch}\{\Theta ^{c}\}=1. \end{aligned}$$

Theorem 6

(Hou 2014, Subadditivity Theorem) The chance measure is subadditive. That is, for any countable sequence of events \(\Theta _1, \Theta _2, \ldots ,\) we have

$$\begin{aligned} \mathrm{Ch}\left\{ \mathop {\bigcup }_{i=1}^{\infty }\Theta _{i}\right\} \le \mathop \sum ^{\infty }_{i=1}\mathrm{Ch}\{\Theta _{i}\}. \end{aligned}$$

Definition 5

(Liu 2013a) An uncertain random variable is a function \(\xi\) from a chance space \((\Gamma , \mathcal {L} , \mathcal {M})\times (\Omega , \mathcal {A}, \Pr )\) to the set of real numbers such that \(\{\xi \in B\}\) is an event in \(\mathcal{L} \times \mathcal {A}\) for any Borel set B of real numbers.

Definition 6

(Liu 2013a) Let \(\xi\) be an uncertain random variable. Then its chance distribution is defined by

$$\begin{aligned} \Phi (x)=\mathrm{Ch}\{\xi \le x\} \end{aligned}$$

for any \(x \in \mathfrak {R}.\)

Theorem 7

(Liu 2013b) Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables with probability distributions \(\Psi _{1}, \Psi _{2}, \ldots , \Psi _{m},\) and let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be independent uncertain variables with regular uncertainty distributions \(\Upsilon _{1}, \Upsilon _{2}, \ldots , \Upsilon _{n}\), respectively. Assume \(f(\eta _{1}, \eta _{2}, \ldots , \eta _{m},\tau _{1},\) \(\tau _{2}, \ldots , \tau _{n})\) is strictly increasing with respect to \(\tau _{1}, \tau _{2}, \ldots , \tau _{k}\) and strictly decreasing with respect to \(\tau _{k+1}, \tau _{k+2}, \ldots , \tau _{n}\). Then

$$\begin{aligned} \xi =f(\eta _{1}, \eta _{2}, \ldots , \eta _{m},\tau _{1}, \tau _{2}, \ldots , \tau _{n}) \end{aligned}$$

has a chance distribution

$$\begin{aligned} \Phi (x)=\int _{\mathfrak {R}^{m}}F(x;y_{1}, y_{2}, \ldots , y_{m})\mathrm{d}\Psi _{1}(y_{1})\mathrm{d}\Psi _{2}(y_{2})\ldots \mathrm{d}\Psi _{m}(y_{m}) \end{aligned}$$

where \(F(x;y_{1}, y_{2}, \ldots , y_{m})\) is the root \(\alpha\) of the equation

$$\begin{aligned} \begin{array}{ll} f(y_{1}, y_{2}, \ldots , y_{m},\Upsilon ^{-1}_{1}(\alpha ),\ldots ,\Upsilon ^{-1}_{k}(\alpha ), \\ \Upsilon ^{-1}_{k+1}(1-\alpha ),\ldots , \Upsilon ^{-1}_{n}(1-\alpha ))=x. \end{array} \end{aligned}$$

Example 2

Let \(\eta\) be a random variable with probability distribution \(\Psi\), and let \(\tau\) be an uncertain variable with regular uncertainty distribution \(\Upsilon\). Then

$$\begin{aligned} \xi =\eta +\tau \end{aligned}$$

is an uncertain random variable with chance distribution

$$\begin{aligned} \Phi (x)=\int _{-\infty }^{+\infty }\Upsilon (x-y)\mathrm{d}\Psi (y). \end{aligned}$$
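This convolution-type integral is easy to estimate by Monte Carlo; a sketch assuming, purely for illustration, \(\eta \sim N(\mu ,\sigma ^2)\) and \(\tau \sim \mathcal {L}(a,b)\) (these particular distributions are our choice, not the paper's):

```python
import random

def linear_cdf(x, a, b):
    # Uncertainty distribution of the linear variable L(a, b)
    return 0.0 if x < a else 1.0 if x > b else (x - a) / (b - a)

def chance_cdf(x, a, b, mu, sigma, n=100000, seed=1):
    # Phi(x) = int Upsilon(x - y) dPsi(y), estimated as the sample
    # mean of Upsilon(x - eta) over draws of eta ~ N(mu, sigma^2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += linear_cdf(x - rng.gauss(mu, sigma), a, b)
    return total / n
```

By symmetry, for \(\tau \sim \mathcal {L}(0,2)\) and \(\eta \sim N(1,0.25)\) the estimate of \(\Phi (2)\) should be close to 0.5.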

Theorem 8

(Liu 2013b) Let \(\eta _{1}, \eta _{2}, \ldots , \eta _{m}\) be independent random variables with probability distributions \(\Psi _{1}, \Psi _{2}, \ldots , \Psi _{m},\) respectively, let \(\tau _{1}, \tau _{2}, \ldots , \tau _{n}\) be uncertain variables, and let f be a measurable function. Then

$$\begin{aligned} \xi =f(\eta _{1}, \eta _{2}, \ldots , \eta _{m},\tau _{1}, \tau _{2}, \ldots , \tau _{n}) \end{aligned}$$

has an expected value

$$\begin{aligned} \begin{array}{ll} E[\xi ]=\int _{\mathfrak {R}^{m}}E[f(y_{1}, y_{2}, \ldots , y_{m},\tau _{1}, \tau _{2}, \ldots , \tau _{n})]\\ \mathrm{d}\Psi _{1}(y_{1}) \mathrm{d}\Psi _{2}(y_{2})\ldots \mathrm{d}\Psi _{m}(y_{m}) \end{array} \end{aligned}$$

where \(E[f(y_{1}, y_{2}, \ldots , y_{m},\tau _{1}, \tau _{2}, \ldots , \tau _{n})]\) is the expected value of the uncertain variable \(f(y_{1}, y_{2}, \ldots , y_{m},\tau _{1},\) \(\tau _{2}, \ldots , \tau _{n})\) for any real numbers \(y_1, y_2, \ldots , y_m\).

2.3 Uncertain random network

The term network is a synonym for a weighted graph, where the weights may be understood as costs, distances, times, and so on. For modelling networks in which some weights are random variables and others are uncertain variables, Liu (2014) first proposed the concept of an uncertain random network.

Definition 7

(Liu 2014) Assume that \(\mathcal {N}\) is the collection of nodes, \(\mathcal {U}\) is the collection of uncertain arcs, \(\mathfrak {R}\) is the collection of random arcs, and \(\mathcal {W}\) is the collection of uncertain and random weights. Then the quartette \((\mathcal {N}, \mathcal {U}, \mathfrak {R}, \mathcal {W})\) is said to be an uncertain random network.

The uncertain random network becomes a random network (Frank and Hakimi 1965) if all the weights are random variables, and becomes an uncertain network (Liu 2010a) if all the weights are uncertain variables. Figure 3 shows an uncertain random network \((\mathcal {N}, \mathcal {U}, \mathfrak {R}, \mathcal {W})\) of order 5 in which

$$\begin{aligned} \begin{array}{ll} \mathcal {N}= \{1, 2, 3, 4, 5\},\\ \mathcal {U}= \{(1, 2), (2, 5), (3, 4)\},\\ \mathfrak {R}= \{(1, 5), (2, 3), (4,5)\},\\ \mathcal {W}=\{\omega _{12},\omega _{15},\omega _{23},\omega _{25},\omega _{34},\omega _{45}\}. \end{array} \end{aligned}$$
Fig. 3 An uncertain random network of 5 nodes

3 Ideal chance distribution

In a deterministic undirected network, all the edge weights are deterministic and finite. Let \((\mathcal {N},\mathrm{E},\mathcal {W})\) be a deterministic network, where \(\mathcal {N}\) is the set of vertices, \(\mathrm{E}\) is the set of edges and \(\mathcal {W}\) is the set of deterministic weights. Let \(d_i~(i\in \mathcal {N})\) be the degree value of node i, and let \(b_i~(i\in \mathcal {N})\) be the degree constraint of node i. We recall at this point that a spanning tree is a connected, acyclic subnetwork that contains all the nodes, and the MST is a spanning tree with the least weight. The DCMST problem is to find a least-weight spanning tree of a given network subject to constraints on the degrees of the nodes. Many classical algorithms, such as evolutionary algorithms (Raidl 2000) and genetic algorithms (Zhou and Gen 1997; Mu and Zhou 2008), can be used to solve the DCMST problem for a given network. Let \(w_{ij}\) be the weight of the edge (ij) for \((i,j)\in \mathrm{E}\); then the weight of the DCMST is a function of the weights \(w_{ij}\). We denote this function by \(f(\{w_{ij}|(i,j)\in \mathrm{E}\})\) or \(f(\mathcal {W})\). Assume that \(\mathcal {W}_1=\{{w^{1}_{ij}|(i,j)\in \mathrm{E}}\}\) and \(\mathcal {W}_2=\{{w^{2}_{ij}|(i,j)\in \mathrm{E}}\}\) are two sets of deterministic weights. If \(w^{1}_{ij}\le w^{2}_{ij}\) for each \((i,j)\in \mathrm{E}\), it is easy to prove that \(f(\mathcal {W}_1)\le f(\mathcal {W}_2)\). In other words, f is an increasing function of the weights \(w_{ij}\).
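For a tiny deterministic network, the function f(W) can even be computed by exhaustive search; the brute-force sketch below is ours and is practical only for very small graphs (the classical algorithms cited above scale far better):

```python
from itertools import combinations

def dcmst_weight(n, edges, b):
    # Brute-force weight of the degree-constrained MST of a small
    # deterministic network: enumerate all (n-1)-edge subsets, keep
    # those that are spanning trees meeting the degree bounds b[i].
    # edges: dict mapping (i, j) with i < j to a crisp weight w_ij.
    best = float("inf")
    for tree in combinations(edges, n - 1):
        deg = [0] * n
        for i, j in tree:
            deg[i] += 1
            deg[j] += 1
        if any(deg[k] > b[k] for k in range(n)):
            continue
        # with n-1 edges, connectivity is equivalent to being a tree
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for i, j in tree:
                v = j if i == u else i if j == u else None
                if v is not None and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) == n:
            best = min(best, sum(edges[e] for e in tree))
    return best
```

Tightening a degree bound can only increase the optimal weight, and raising any \(w_{ij}\) cannot decrease it, consistent with the monotonicity of f noted above.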

In real life, many networks are indeterminate, which means that uncertainty and randomness often appear simultaneously in a complex network. For example, some edge weights may be modeled as random variables when historical data are available, while the others are modeled as uncertain variables when data are not available and the weights are provided subjectively. Next we discuss the DCMST problem in a connected and undirected uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\). Our assumptions are as follows:

(i) The uncertain random network is connected and undirected;

(ii) The weight of each edge \((i,j)\in \mathcal {U}\cup \mathfrak {R}\) is positive and finite;

(iii) The weight of each edge \((i,j)\in \mathcal {U}\cup \mathfrak {R}\) is either an uncertain variable or a random variable;

(iv) All the uncertain variables and the random variables are independent;

(v) The degree values of the DCMST do not exceed the given degree constraints.

Given a connected and undirected uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\), where \(\mathcal {N}=\{1,2,\ldots ,n\}\) is the set of vertices and \(\mathcal {W}=\{\tau _{ij}, ~(i,j)\in \mathcal {U}; \xi _{ij}, ~(i,j)\in \mathfrak {R}\}\) is the set of weights, we assume there is a constraint on each node, i.e., the degree value \(d_i\) of node i does not exceed a given value \(b_i\) for \(i\in \mathcal {N}\). We assume that the uncertain weights \(\tau _{ij}\) for \((i,j)\in \mathcal {U}\) are defined on an uncertainty space \((\Gamma , \mathcal {L} , \mathcal {M})\) and the random weights \(\xi _{ij}\) for \((i,j)\in \mathfrak {R}\) are defined on a probability space \((\Omega , \mathcal {A}, \Pr )\). Then for any given \(\gamma \in \Gamma\) and \(\omega \in \Omega\), the realizations \(\tau _{ij}(\gamma ), (i,j)\in \mathcal {U}\) and \(\xi _{ij}(\omega ), (i,j)\in \mathfrak {R}\) are all crisp numbers. In other words, once the realizations of the random and uncertain edge weights are known, the network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) is in fact a deterministic network and its DCMST can be obtained by classical algorithms. For a given network the degree-constrained spanning tree (DCST) is not unique; however, all DCMSTs have the same weight. We denote the weight of the DCMST by \(f(\tau _{ij}, (i,j)\in \mathcal {U}; \xi _{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\), which is thus a function of \(\gamma\) and \(\omega\). In this way, each uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) corresponds to a weight f, and according to the operational law, \(f(\tau _{ij}, (i,j)\in \mathcal {U}; \xi _{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\) is an uncertain random variable.
The chance distribution of \(f(\tau _{ij}, (i,j)\in \mathcal {U}; \xi _{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\) is called the ideal chance distribution of the DCMST with respect to the uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\). Since the ideal chance distribution of the DCMST is unique for a given uncertain random network, we give the following theorem to illustrate how to calculate it.

Theorem 9

Let \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) be an uncertain random network. Assume that the uncertain weights are uncertain variables \(\tau _{ij}\) with regular uncertainty distributions \(\Upsilon _{ij}\) for \((i,j)\in \mathcal {U}\), and the random weights are random variables \(\xi _{ij}\) with probability distributions \(\Psi _{ij}\) for \((i,j)\in \mathfrak {R}\), respectively. Then the ideal chance distribution of DCMST with respect to the uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) is

$$\begin{aligned} \begin{array}{ll} \Phi (x)=\int ^{+\infty }_{0}\ldots \int ^{+\infty }_{0}F(x;y_{ij}, (i,j)\in \mathfrak {R}, \\ d_i \le b_i) \prod \limits _{(i,j)\in \mathfrak {R}}\mathrm{d}\Psi _{ij}(y_{ij}) \end{array} \end{aligned}$$

where \(F(x;y_{ij},(i,j)\in \mathfrak {R}, d_i \le b_i)\) is the uncertainty distribution of uncertain variable \(f(\tau _{ij}, (i,j)\in \mathcal {U}; y_{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\).

Proof

Let \(f(\tau _{ij}, (i,j)\in \mathcal {U}; \xi _{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\) be the weight of the DCMST. From Definition 6 and Theorem 7, we can obtain the ideal chance distribution of the DCMST as follows,

$$\begin{aligned} \Phi (x)&= \mathrm{Ch}\{ f(\tau _{ij}, (i,j)\in \mathcal {U}; \xi _{ij}, (i,j)\in \mathfrak {R}, d_{i} \le b_{i}) \le x\} \\ &= \int ^{+\infty }_{0}\cdots \int ^{+\infty }_{0}F(x;y_{ij}, (i,j)\in \mathfrak {R}, d_{i} \le b_{i})\prod \limits _{(i,j)\in \mathfrak {R}}\mathrm{d}\Psi _{ij}(y_{ij}) \end{aligned}$$

where \(F(x;y_{ij},(i,j)\in \mathfrak {R}, d_i \le b_i)\) is the uncertainty distribution of \(f(\tau _{ij}, (i,j)\in \mathcal {U}; y_{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\) for any real numbers \(y_{ij}, ~(i,j)\in \mathfrak {R}\). \(\square\)

Remark 1

The uncertainty distribution \(F(x;y_{ij},(i,j)\in \mathfrak {R}, d_i \le b_i)\) is determined by its inverse uncertainty distribution \(F^{-1}(\alpha ; y_{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\). By Theorem 7, for given \(\alpha \in (0,1)\), we have

$$\begin{aligned} \begin{array}{ll} F^{-1}(\alpha ; y_{ij}, (i,j)\in \mathfrak {R}, d_i \le b_i)\\ \quad=f(\Upsilon ^{-1}_{ij}(\alpha ), (i,j)\in \mathcal {U}; y_{ij},(i,j)\in \mathfrak {R}, d_i \le b_i) \end{array} \end{aligned}$$

which is just the weight of the DCMST in a deterministic network and can be calculated by using the genetic algorithm or other evolutionary algorithms.

Remark 2

Let \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) be an uncertain random network, and let \(\Phi (x)\) be the ideal chance distribution of DCMST. Assume \(\Phi _{T^{'}}(x)\) is the chance distribution of a DCST \(T^{'}\). Then for each DCST \(T^{'}\), we have

$$\begin{aligned} \Phi _{T^{'}}(x) \le \Phi (x). \end{aligned}$$

Remark 3

An uncertain random network degenerates to a random network if all the weights are independent random variables. Thus the ideal chance distribution of DCMST is actually the following probability distribution

$$\begin{aligned} \Phi (x)=\int _{f(y_{ij},(i,j)\in \mathfrak {R}, d_i \le b_i)\le x}\prod \limits _{(i,j)\in \mathfrak {R}}\mathrm{d}\Psi _{ij}(y_{ij}). \end{aligned}$$
Fig. 4 A simple uncertain random network

Remark 4

An uncertain random network degenerates to an uncertain network if all the weights are independent uncertain variables. Thus the ideal chance distribution of DCMST is actually an uncertainty distribution with the following inverse function

$$\begin{aligned} \Phi ^{-1}(\alpha )=f(\Upsilon ^{-1}_{ij}(\alpha ), (i,j)\in \mathcal {U}, d_i \le b_i), \quad ~\alpha \in (0,1). \end{aligned}$$

Example 3

Let us consider the uncertain random network with 4 nodes and 4 edges shown in Fig. 4, where the degree constraint of node 1 is 2. Assume that the uncertain weights \(\tau _{12},~\tau _{14}\) have regular uncertainty distributions \(\Upsilon _{12}, \Upsilon _{14}\), and the random weights \(\xi _{13},~ \xi _{23}\) have probability distributions \(\Psi _{13}, \Psi _{23}\), respectively. Then, from Theorem 9, the ideal chance distribution of the DCMST is

$$\begin{aligned} \begin{array}{ll} \Phi (x)=&\int _{0}^{+\infty }\int _{0}^{+\infty }F(x;y_{13},y_{23},d_1\le 2)\\ &\mathrm{d}\Psi _{13}(y_{13})\mathrm{d}\Psi _{23}(y_{23}) \end{array} \end{aligned}$$

where \(F(x;y_{13},y_{23},d_1\le 2)\) is determined by its inverse uncertainty distribution

$$\begin{aligned} \begin{array}{ll} F^{-1}(\alpha ;y_{13},y_{23},d_1\le 2)=&(\Upsilon ^{-1}_{14}(\alpha )+y_{13}+y_{23}) \wedge \\& (\Upsilon ^{-1}_{12}(\alpha )+\Upsilon ^{-1}_{14}(\alpha )+y_{23}). \end{array} \end{aligned}$$

Theoretically, it is difficult to obtain an analytical expression of the ideal chance distribution of the DCMST for an uncertain random network. In practice, we can calculate it by using numerical techniques.
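As an illustration of such a numerical computation, the sketch below estimates \(\Phi (x)\) for Example 3 by recovering F from its inverse via bisection (Theorem 7) and averaging over Monte-Carlo draws of the random weights; all concrete distributions (linear uncertain, uniform random) are our own illustrative choices, not taken from the paper:

```python
import random

def lin_inv(alpha, a, b):
    # Inverse uncertainty distribution of L(a, b)
    return (1 - alpha) * a + alpha * b

def F(x, y13, y23, inv12, inv14):
    # Uncertainty distribution F(x; y13, y23, d1 <= 2) of Example 3,
    # recovered from its inverse by bisection (f_inv is increasing)
    def f_inv(a):
        return min(inv14(a) + y13 + y23, inv12(a) + inv14(a) + y23)
    if x <= f_inv(1e-9):
        return 0.0
    if x >= f_inv(1.0 - 1e-9):
        return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = (lo + hi) / 2.0
        if f_inv(mid) <= x:
            lo = mid
        else:
            hi = mid
    return lo

def ideal_chance(x, n=5000, seed=0):
    # Monte-Carlo estimate of Phi(x) = E[F(x; xi13, xi23, d1 <= 2)]
    rng = random.Random(seed)
    inv12 = lambda a: lin_inv(a, 1.0, 3.0)  # tau_12 ~ L(1, 3), assumed
    inv14 = lambda a: lin_inv(a, 2.0, 4.0)  # tau_14 ~ L(2, 4), assumed
    total = 0.0
    for _ in range(n):
        y13 = rng.uniform(1.0, 2.0)         # xi_13 ~ U(1, 2), assumed
        y23 = rng.uniform(1.0, 2.0)         # xi_23 ~ U(1, 2), assumed
        total += F(x, y13, y23, inv12, inv14)
    return total / n
```

With these toy distributions the DCMST weight lies between 4 and 8, so the estimated \(\Phi\) increases from 0 to 1 on that interval.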

4 Model of uncertain random DCMST problem

In real life, many networks are indeterminate, which means that some of the edge weights are uncertain variables and others are random variables. Let us consider connected and undirected uncertain random networks that are finite and loopless. In this work, we concentrate on an uncertain random network \((\mathcal {N}, \mathcal {U}, \mathfrak {R}, \mathcal {W})\), where \(\mathcal {N}=\{1,2,\ldots ,n\}\) is the collection of vertices, \(\mathcal {U}\) is the collection of uncertain arcs, \(\mathfrak {R}\) is the collection of random arcs, and \(\mathcal {W}=\{\tau _{ij},(i,j)\in \mathcal {U}; \xi _{ij},(i,j)\in \mathfrak {R}\}\) is the collection of weights. Assume \(\tau _{ij},(i,j)\in \mathcal {U}\) are independent uncertain variables, and \(\xi _{ij},(i,j)\in \mathfrak {R}\) are independent random variables. Assume further that there is a constraint on each node, namely that the degree value \(d_{i}\) does not exceed a given value \(b_{i}\). Let \(\{x_{ij},(i,j)\in \mathcal {U}\cup \mathfrak {R}\}\) be the decision vector, where \(x_{ij}=1\) means that edge (ij) is chosen in the DCST \(T^{'}\), and \(x_{ij}=0\) otherwise. Then \(T^{'}=\{x_{ij},(i,j)\in \mathcal {U}\cup \mathfrak {R}, d_{i}\le b_{i}\}\) is a DCST if and only if

$$\left\{ \begin{array}{l} \sum \limits _{(i,j)\in \mathcal {U}\cup \mathfrak {R}} x_{ij}=n-1 \\ \sum \limits _{(i,j)\in \mathcal {U}\cup \mathfrak {R}(S)} x_{ij} \le \mid S \mid -1 ,~ \forall S \subset \mathcal {N}, ~2\le \mid S \mid \le n\\ d_{i} \le b_{i}, ~\forall i \in \mathcal {N}\\ x_{ij} \in \{0,1\} , ~\forall (i,j)\in \mathcal {U}\cup \mathfrak {R}\end{array}\right.$$
(1)

where \(\mathcal {U}\cup \mathfrak {R}(S)\) denotes the set of edges with both endpoints in S, for any subset S of \(\mathcal {N}\). The weight of a DCST \(T^{'}=\{x_{ij},(i,j)\in \mathcal {U}\cup \mathfrak {R}, d_{i}\le b_{i}\}\) is

$$\begin{aligned} \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij}+\sum \limits _{(i,j)\in \mathfrak {R}}x_{ij}\xi _{ij}. \end{aligned}$$

Obviously it is an uncertain random variable. Its chance distribution is \(\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_{i}\le b_{i})\),

$$\begin{aligned} ~&\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_{i}\le b_{i}) \\ ~&=\mathrm{Ch}\left\{ \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij}+\sum \limits _{(i,j)\in \mathfrak {R}}x_{ij}\xi _{ij}\le z\right\} . \end{aligned}$$

Under chance theory, each \((\gamma , \omega )\) corresponds to a DCMST, and different \((\gamma , \omega )\) may correspond to different DCMSTs. If a single DCST were optimal for every \((\gamma , \omega )\), its chance distribution would be the ideal chance distribution of the DCMST. However, such a tree may not exist, and in any case we do not know in advance which \((\gamma , \omega )\) will be realized. A better way is therefore to find a real DCST whose chance distribution is closest to the ideal chance distribution.

Definition 8

Let \((\mathcal {N}, \mathcal {U}, \mathfrak {R}, \mathcal {W})\) be an uncertain random network, and let \(\Phi (z)\) be its ideal chance distribution of DCMST problem. Assume that \(T^{'}= \{x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i\}\) is a DCST with chance distribution \(\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i)\). If \(\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i)\) is closest to the \(\Phi (z)\), i.e.,

$$\begin{aligned}\int ^{+\infty }_{0}\left( \Phi (z)-\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i)\right) \mathrm{d}z\end{aligned}$$

attains its minimum, then the DCST \(T^{'}\) is defined as the DCMST of the given uncertain random network.
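The area criterion above can be evaluated numerically once the two distributions are sampled on a common grid. A minimal sketch (the function names, trapezoidal discretization, and sample arrays are ours, purely for illustration, and are not taken from the paper):

```python
# Numerical evaluation of the area criterion in Definition 8:
# the integral over z of (Phi(z) - Phi_T(z)), approximated by the trapezoidal rule.

def area_between(z, phi_ideal, phi_tree):
    """Trapezoidal approximation of the integral of (phi_ideal - phi_tree) over z.

    z          -- increasing grid covering the support of both distributions
    phi_ideal  -- ideal chance distribution Phi(z) sampled on the grid
    phi_tree   -- chance distribution of one DCST sampled on the same grid
    """
    total = 0.0
    for k in range(len(z) - 1):
        diff_left = phi_ideal[k] - phi_tree[k]
        diff_right = phi_ideal[k + 1] - phi_tree[k + 1]
        total += 0.5 * (diff_left + diff_right) * (z[k + 1] - z[k])
    return total

def pick_dcmst(z, phi_ideal, tree_distributions):
    """Return the label of the DCST whose distribution minimises the area.

    tree_distributions -- dict mapping tree label -> sampled distribution
    """
    return min(tree_distributions,
               key=lambda label: area_between(z, phi_ideal, tree_distributions[label]))
```

Since the ideal distribution dominates every DCST's distribution pointwise, the integrand is nonnegative and the minimiser is the tree whose curve hugs the ideal curve most closely.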

Based on Definition 8, we formulate the following DCMST model for an uncertain random network,

$$\begin{aligned} \left\{ \begin{array}{l} \min \limits _{\{x_{ij}, (i,j)\in \mathcal {U}\cup \mathcal {R}, d_i \le b_i\}}\int ^{+\infty }_{0}(\Phi (z)-\Phi _{T^{'}}(z|x_{ij},\\ (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i)) \mathrm{d}z \\ \text{ subject } \text{ to: }\\ \qquad \sum \limits _{(i,j)\in \mathcal {U}\cup \mathcal {R}} x_{ij}=n-1 \\ \qquad \sum \limits _{(i,j)\in \mathcal {U}\cup \mathcal {R}(S)} x_{ij} \le \mid S \mid -1 , \forall S \subset \mathcal {N}, 2\le \mid S \mid \le n\\ \qquad d_i \le b_i , \forall i\in \mathcal {N}\\ \qquad x_{ij} \in \{0,1\} , \forall (i,j)\in \mathcal {U}\cup \mathfrak {R}. \end{array}\right. \end{aligned}$$
(2)

Theorem 10

Let \((\mathcal {N}, \mathcal {U}, \mathfrak {R}, \mathcal {W})\) be an uncertain random network, and let \(\Phi (z)\) be the ideal chance distribution of its DCMST problem. Assume that the uncertain weights \(\tau _{ij}\) are uncertain variables with regular uncertainty distributions \(\Upsilon _{ij}\) for \((i,j)\in \mathcal {U}\), and the random weights \(\xi _{ij}\) are random variables with probability distributions \(\Psi _{ij}\) for \((i,j)\in \mathfrak {R}\), respectively. Then model (2) is equivalent to the following model:

$$\begin{aligned} \left\{ \begin{array}{l} \begin{array}{ll}\min \limits _{\{x_{ij}, (i,j)\in \mathcal {U}\cup \mathcal {R}, d_i \le b_i\}}\displaystyle \int ^{+\infty }_{0}(\Phi (z)-\int ^{+\infty }_{-\infty }\Upsilon (z\\-\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}y_{ij})\mathrm{d}\Psi (y))\mathrm{d}z \end{array}\\ \mathrm {subject\, to:}\\ \qquad \sum \limits _{(i,j)\in \mathcal {U}\cup \mathcal {R}} x_{ij}=n-1 \\ \qquad \sum \limits _{(i,j)\in \mathcal {U}\cup \mathcal {R}(S)} x_{ij} \le \mid S \mid -1 , \forall S \subset \mathcal {N}, 2\le \mid S \mid \le n\\ \qquad d_i \le b_i , \forall i\in \mathcal {N}\\ \qquad x_{ij} \in \{0,1\}, \quad \forall (i,j)\in \mathcal {U}\cup \mathfrak {R}\end{array}\right. \end{aligned}$$
(3)

where

$$\begin{aligned} \Psi (y)=\int _{\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}y_{ij}\le y}\prod \limits _{(i,j)\in \mathcal {R}}\mathrm{d}\Psi _{ij}(x_{ij}y_{ij}), \end{aligned}$$
$$\begin{aligned} \Upsilon (r)= \sup \limits _{\sum \limits _{(i,j)\in \mathcal {U}}x_{ij}r_{ij}=r} \left( \min \limits _{(i,j)\in \mathcal {U}}\Upsilon _{ij}(x_{ij}r_{ij})\right) . \end{aligned}$$

Proof

Let \(T^{'}= \{x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i\}\) be a DCST, and let \(\Psi (y)\) be the probability distribution of \(\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}\xi _{ij}.\) According to assumption (iv), we can obtain

$$\begin{aligned} \Psi (y)=\int _{\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}y_{ij}\le y}\prod _{(i,j)\in \mathcal {R}}\mathrm{d}\Psi _{ij}(x_{ij}y_{ij}). \end{aligned}$$
(4)

From Theorem 7, we have

$$\begin{aligned} ~&\Phi _{T^{'}}(z|x_{ij}, (i,j)\in \mathcal {U}\cup \mathfrak {R}, d_i \le b_i) \\ ~&=\mathrm{Ch}\left\{ \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij}+\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}\xi _{ij}\le z\right\} \\ ~&=\displaystyle \int _{-\infty }^{+\infty }\mathcal {M}\left\{ \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij}+y \le z \right\} \mathrm{d}\Psi (y) \\ ~&=\displaystyle \int _{-\infty }^{+\infty }\mathcal {M}\left\{ \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij} \le z - y \right\} \mathrm{d}\Psi (y). \end{aligned}$$

Further, from Theorem 1,

$$\begin{array}{*{20}{l}}~\mathcal {M}\left\{ \sum \limits _{(i,j)\in \mathcal {U}}x_{ij}\tau _{ij} \le z - y \right\} \\ ~\quad =\sup \limits _{\sum \limits _{(i,j)\in \mathcal {U}}x_{ij}r_{ij}=z - y} \left( \min \limits _{(i,j)\in \mathcal {U}}\Upsilon _{ij}(x_{ij}r_{ij})\right) \\ ~\quad =\Upsilon \left( z-\sum \limits _{(i,j)\in \mathcal {R}}x_{ij}y_{ij}\right) . \end{array}$$

Substituting the above two formulas into the objective function of model (2) completes the proof. \(\square\)

Next, we propose the following algorithm to solve the model above.

Algorithm:

Step 1: For each random edge \((i,j)\in \mathfrak {R}\), construct a partition \(\Pi _{ij}\) of the interval \([a_{ij},b_{ij}]\) with step length 0.01, and restrict the random weight values to \(\Pi _{ij}\), denoted by \(y_{ij}\).

Step 2: For each \((i,j)\in \mathfrak {R}\), each \(y_{ij} \in \Pi _{ij}\), and each \(\alpha \in \{0.01,0.02,\ldots ,0.99\}\), calculate \(F^{-1}(\alpha ;y_{ij},(i,j)\in \mathfrak {R},d_i\le b_i)\).

Step 3: Obtain the uncertainty distribution \(F(x;y_{ij},(i,j)\in \mathfrak {R},d_i \le b_i)\) by linear interpolation of the discrete inverse uncertainty distribution values.

Step 4: Substitute \(F(x;y_{ij},(i,j)\in \mathfrak {R},d_i \le b_i)\) into the ideal chance distribution formula and calculate the ideal chance distribution.

Step 5: Find all the DCSTs for the given uncertain random network by using the genetic algorithm, and label each DCST.

Step 6: Calculate chance distribution of each DCST.

Step 7: Calculate the objective function of model (3) for each DCST. The DCST that attains the minimum value is the DCMST.
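The core computation behind Steps 2–6 is the chance distribution \(\Phi _{T^{'}}(z)=\int \Upsilon (z-y)\,\mathrm{d}\Psi (y)\) of a candidate DCST. A minimal sketch for a toy tree carrying a single uncertain edge with linear distribution L(a, b) and a single random edge with uniform distribution U(c, d), discretizing the random weight with the 0.01 step of Step 1 (the function names and the midpoint-rule discretization are ours, not the paper's implementation):

```python
def chance_distribution(z, a, b, c, d, step=0.01):
    """Chance distribution at z of tau + xi, where tau ~ L(a, b) is an
    uncertain variable and xi ~ U(c, d) is a random variable.
    Approximates the integral of Upsilon(z - y) dPsi(y) by the midpoint rule."""

    def upsilon(x):
        # Uncertainty distribution of the linear uncertain variable L(a, b).
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)

    n = int(round((d - c) / step))     # number of cells in the partition of [c, d]
    total = 0.0
    for k in range(n):
        y = c + (k + 0.5) * step       # midpoint of the k-th cell
        total += upsilon(z - y) * (step / (d - c))   # dPsi(y) = dy / (d - c)
    return total
```

With several random edges, the single integral becomes a nested sum over the partitions \(\Pi _{ij}\), one level per random edge, exactly as in the ideal chance distribution of the numerical example below.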

5 Numerical experiment

In this section, we present a numerical example to illustrate the effectiveness of the proposed model. Consider an uncertain random network \((\mathcal {N},\mathcal {U},\mathfrak {R},\mathcal {W})\) with 5 nodes and 8 edges as shown in Fig. 5, and assume that the degree constraint of each node is 3. Assume that the uncertain weights \(\tau _{12},\tau _{24},\tau _{25},\tau _{34},\tau _{45}\) are independent uncertain variables with regular uncertainty distributions \(\Upsilon _{12},\Upsilon _{24},\Upsilon _{25},\Upsilon _{34},\Upsilon _{45},\) and the random weights \(\xi _{15},\xi _{23},\xi _{35}\) are independent random variables with probability distributions \(\Psi _{15},\Psi _{23},\Psi _{35},\) respectively. The parameters of the weights are listed in Table 1, where U(a, b) denotes a uniform random variable, and L(a, b) and Z(a, b, c) denote linear and zigzag uncertain variables, respectively.
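The inverse distributions of the three variable classes in Table 1 have standard closed forms in uncertainty theory and probability theory; a small sketch for reference (the function names are ours):

```python
# Inverse distributions of the variable classes used in Table 1.

def linear_inv(alpha, a, b):
    """Inverse uncertainty distribution of a linear uncertain variable L(a, b)."""
    return (1 - alpha) * a + alpha * b

def zigzag_inv(alpha, a, b, c):
    """Inverse uncertainty distribution of a zigzag uncertain variable Z(a, b, c):
    piecewise linear, reaching b at alpha = 0.5."""
    if alpha < 0.5:
        return a + 2 * alpha * (b - a)
    return 2 * alpha * (c - b) + 2 * b - c

def uniform_inv(u, a, b):
    """Inverse probability distribution of a uniform random variable U(a, b)."""
    return a + u * (b - a)
```

These are the quantities \(\Upsilon _{ij}^{-1}(\alpha )\) evaluated on the grid \(\alpha \in \{0.01,\ldots ,0.99\}\) in Step 2 of the algorithm.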

Fig. 5
figure 5

An uncertain random network

Table 1 Parameters of the given network
Fig. 6
figure 6

Part of DCSTs

According to Theorems 7 and 9, the ideal chance distribution of DCMST problem with respect to the given uncertain random network can be easily obtained as follows,

$$\begin{aligned} \Phi (x) = \int _{0}^{+\infty }\int _{0}^{+\infty }\int _{0}^{+\infty } F(x;y_{15},y_{23},y_{35},d_i\le 3)\, \mathrm{d}\Psi _{15}(y_{15})\,\mathrm{d}\Psi _{23}(y_{23})\,\mathrm{d}\Psi _{35}(y_{35}) \end{aligned}$$

where \(F(x;y_{15},y_{23},y_{35},d_i\le 3)\) is determined by its inverse uncertainty distribution

$$\begin{aligned} F^{-1}(\alpha ;y_{15},y_{23},y_{35},d_{i}\le 3)=f\left( \Upsilon _{12}^{-1}(\alpha ),\Upsilon _{24}^{-1}(\alpha ),\Upsilon _{25}^{-1}(\alpha ),\Upsilon _{34}^{-1}(\alpha ),\Upsilon _{45}^{-1}(\alpha ),y_{15},y_{23},y_{35},d_{i}\le 3\right) \end{aligned}$$

where f can be obtained by the genetic algorithm for each given \(\alpha\).
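The paper computes f with a genetic algorithm; for a network as small as this one, f can equally be obtained by enumerating all degree-constrained spanning trees. A brute-force sketch (the function name is ours, and the edge weights passed in would be the values \(\Upsilon _{ij}^{-1}(\alpha )\) and \(y_{ij}\), not the placeholders used below):

```python
from itertools import combinations

def min_dc_spanning_tree(n, edges, weights, degree_cap):
    """Minimum-weight degree-constrained spanning tree by enumeration.

    n          -- number of nodes, labelled 0..n-1
    edges      -- list of (i, j) pairs
    weights    -- dict (i, j) -> weight of that edge
    degree_cap -- maximum allowed degree b_i (taken equal for every node here)
    """
    best = None
    for subset in combinations(edges, n - 1):   # candidate edge sets of size n-1
        deg = [0] * n
        parent = list(range(n))                 # union-find forest for cycle detection

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v

        acyclic = True
        for i, j in subset:
            deg[i] += 1
            deg[j] += 1
            ri, rj = find(i), find(j)
            if ri == rj:                        # edge closes a cycle
                acyclic = False
                break
            parent[ri] = rj
        if not acyclic or max(deg) > degree_cap:
            continue                            # not a spanning tree, or degree violated
        w = sum(weights[e] for e in subset)
        if best is None or w < best:
            best = w
    return best
```

Enumeration is exponential in general, which is why the paper resorts to a genetic algorithm for larger instances; on 5 nodes and 8 edges it is instantaneous.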

Fig. 7
figure 7

Ideal chance distribution and chance distribution of all DCSTs

Table 2 Difference of the area between each DCST and the ideal one

Based on the genetic algorithm, we can easily find and label all the DCSTs. There are 38 DCSTs in total; Fig. 6 shows a selection of them, from Tree 31 to Tree 36. Following the proposed algorithm, we first calculate the ideal chance distribution of the DCMST and the chance distributions of all DCSTs, as shown in Fig. 7. From Fig. 7, we can observe that Tree 36 is closest to the ideal chance distribution. According to model (2), we then calculate the area difference between each DCST and the ideal chance distribution of the DCMST; the results are listed in Table 2. Table 2 confirms that Tree 36 is closest to the ideal chance distribution, so Tree 36 can be regarded as the DCMST.

6 Conclusion

This paper first introduced the uncertain random network into the DCMST problem, in which uncertainty coexists with randomness: in an uncertain random network, some weights are uncertain variables while others are random variables. To solve the DCMST problem, we applied chance theory and uncertain random variables to this setting. We also introduced the concept of the ideal chance distribution and a new DCMST model for uncertain random networks, and solved the problem by finding the DCST closest to the ideal one. The relevance and efficacy of our method are validated through a numerical example.

Future directions may be as follows:

  1. Since the weight of the DCMST is an uncertain random variable, other uncertain random programming models can be developed to solve the problem.

  2. The study can be extended to other constrained MST problems in uncertain random networks.

  3. The proposed model and algorithm can be applied to the uncertain random traveling salesman problem.