Introduction

In the event of an emergency such as a substantial act of nature (e.g., an earthquake or flood), a large human-caused accident, or a major terrorist attack (e.g., the September 11 attacks), tremendous demand for medical supplies arises at the incident sites within a short time. Dispatching medical supplies is thus one of the fundamental problems in emergency response, and a key ingredient of an effective response is the prompt availability of the necessary supplies at the emergency sites. Accordingly, dispatch strategies in emergency response systems have been investigated for decades. For instance, Rathi et al. (1992) considered supply dispatch in conflict or emergency situations, Özdamar et al. (2004) investigated how to dispatch commodities to distribution centers in affected areas as quickly as possible after natural disasters, and Yuan and Wang (2009) presented models and algorithms for path selection in emergency events. In addition, Rodríguez et al. (2010) provided a decision support system to aid humanitarian non-governmental organizations responding to natural disasters, and Zhang et al. (2012) considered multiple secondary disasters via an integer programming model.

In the literature mentioned above, dispatch strategies are investigated in a deterministic environment; that is, demands, running distances, and running speeds are assumed to be positive crisp values. However, the highly unpredictable nature of emergencies may introduce indeterminacy in both demand and running time, so the classical models and algorithms are not suitable in such situations. Some researchers have assumed that this indeterminacy behaves like randomness, and a large number of studies have been built on this assumption. For example, Barbarosoğlu and Arda (2004) proposed a two-stage stochastic programming model to plan the transportation of vital first-aid commodities to disaster-affected areas, Beraldi et al. (2004) addressed the design of robust emergency medical services via stochastic programming, and Chang et al. (2007) formulated two stochastic programming models for flood emergency dispatch. For more research on dispatch problems in random environments, readers may consult Daskin (1987), Ingolfsson et al. (2008), and Mete and Zabinsky (2010), among others. Dispatch problems have also been studied in other settings, for example by Lin et al. (2006), Wen and Iwamura (2008), Yang et al. (2011), and Wang and Chen (2012).

As we know, a fundamental premise of applying probability theory is that the obtained probability distribution is close enough to the real frequency. In emergency circumstances, however, we frequently lack observed data, and no samples are available for estimating a probability distribution. In this case, we have to invite experts to evaluate the belief degree that each event will occur. Since human beings usually overweight unlikely events, belief degrees may deviate far from the real frequency (Kahneman and Tversky 1979). As a result, Liu (2012) pointed out that probability theory cannot be used to model belief degrees, since doing so may lead to counterintuitive results, and presented a counterexample. These observations prompted Liu (2007) to found uncertainty theory to deal with belief degrees. Accordingly, in this paper we deal with such indeterminate factors in emergencies by means of uncertainty theory.

Uncertainty theory was founded by Liu (2007) and has since been studied by many scholars. It has developed many branches, such as uncertain differential equations (Yao 2013; Gao and Yao 2014), uncertain finance (Peng and Yao 2011; Liu 2013a), uncertain programming (Liu 2009b; Zhang and Peng 2012, 2013), uncertain statistics (Liu 2010; Wang and Peng 2014), uncertain logic (Li and Liu 2009a, b; Li and Wong 2010; Chen and Ralescu 2011), uncertain optimal control (Zhu 2010), uncertain risk analysis (Liu 2013b; Peng 2013), and uncertain game theory (Yang and Gao 2013, 2014). For recent developments of uncertainty theory, readers may consult Liu (2010). In the area of vehicle dispatch, Liu (2009b) first introduced uncertainty theory into the study of the vehicle routing problem, Dong and Wang (2013) then applied uncertain optimal control to the vehicle routing problem, and Sheng and Yao (2012) studied the transportation problem via uncertain programming.

As is well known, emergencies are high-consequence, low-frequency events. Because of the scarce resources and overwhelming demands that arise during an emergency, careful pre-planning is essential for saving lives. Owing to the highly unpredictable nature of emergencies and the severity of the accident, the running times and the demands cannot be known precisely, but they can be estimated by domain experts, who give their belief degrees for these quantities. As mentioned above, uncertainty theory is a powerful tool for dealing with belief degrees. In order to deliver the medical supplies from a depot to the demanding locations as soon as possible, we construct two uncertain programming models based on uncertainty theory. In addition, we give a hybrid intelligent algorithm for solving the proposed models.

The rest of the paper is organized as follows. Section “Preliminaries” briefly reviews some basic concepts and results of uncertainty theory. Section “Problem description” first introduces the problem of dispatching medical supplies in emergency events with uncertain running times and uncertain demands, and then presents two mathematical models according to different decision criteria. Section “Theoretical analysis of the models” discusses some properties of the models. Section “Hybrid intelligent algorithm” presents an algorithm for solving the proposed models in general cases. Section “Numerical examples” gives numerical examples to illustrate the modeling ideas and to show the effectiveness of the proposed algorithm. Finally, the last section concludes the paper with a brief summary.

Preliminaries

Let \(\varGamma \) be a nonempty set and \({\mathcal {L}}\) a \(\sigma \)-algebra over \(\varGamma \). Each element \(\varLambda \in {\mathcal {L}}\) is called an event. A set function \({\mathcal {M}}: {\mathcal {L}}\rightarrow [0,1]\) is called an uncertain measure if it satisfies the following axioms (Liu 2007): (1) \({\mathcal {M}}\{\varGamma \}=1\) for the universal set \(\varGamma \); (2) \({\mathcal {M}}\{\varLambda \}+{\mathcal {M}}\{\varLambda ^{c}\}=1\) for any \(\varLambda \in {\mathcal {L}}\); (3) for every countable sequence of events \(\{\varLambda _{i}\}\), we have

$$\begin{aligned} {\mathcal {M}}\left\{ {\bigcup _{i=1}^{\infty }}\varLambda _{i}\right\} \le {\sum _{i=1}^{\infty }}{\mathcal {M}}\{\varLambda _{i}\}. \end{aligned}$$

The triplet \((\varGamma ,{\mathcal {L}},{\mathcal {M}})\) is called an uncertainty space. In order to obtain the uncertain measure of a compound event, Liu (2009a) defined a fourth axiom, called the product axiom: let \((\varGamma _{k},{\mathcal {L}}_{k},{\mathcal {M}}_{k})\) be uncertainty spaces for \(k=1,2,\ldots \). The product uncertain measure \({\mathcal {M}}\) is an uncertain measure satisfying

$$\begin{aligned} {\mathcal {M}}\left\{ {\prod _{k=1}^{\infty }}\varLambda _{k}\right\} ={\bigwedge _{k=1}^{\infty }{\mathcal {M}}_{k}\{\varLambda _{k}\}} \end{aligned}$$

where \(\varLambda _{k}\) are arbitrarily chosen events from \({\mathcal {L}}_{k}\) for \(k=1,2,\ldots ,\) respectively.

Definition 1

(Liu 2007) An uncertain variable \(\xi \) is a measurable function from an uncertainty space \((\varGamma ,{\mathcal {L}},{\mathcal {M}})\) to the set of real numbers, i.e., for any Borel set \(B\) of real numbers, the set

$$\begin{aligned} \{\xi \in B\}=\{\gamma \in \varGamma | \xi (\gamma )\in B\} \end{aligned}$$

is an event.

In order to describe an uncertain variable in practice, Liu (2007) defined the uncertainty distribution \(\varPhi : \mathfrak {R}\rightarrow [0, 1]\) of an uncertain variable \(\xi \) as the function

$$\begin{aligned} \varPhi (x)={\mathcal {M}}\{\xi \le x\},\quad x\in \mathfrak {R}. \end{aligned}$$

The inverse function \(\varPhi ^{-1}\) is called the inverse uncertainty distribution of \(\xi \).

An uncertain variable \(\xi \) is called zigzag if it has a zigzag uncertainty distribution

$$\begin{aligned} \varPhi (x)=\left\{ \begin{array}{ll} 0,&{}\quad \text{ if } \quad x\le a\\ \frac{x-a}{2(b-a)},&{}\quad \text{ if } \quad a\le x\le b\\ \frac{x+c-2b}{2(c-b)},&{}\quad \text{ if } \quad b\le x\le c\\ 1,&{}\quad \text{ if } \quad x\ge c \end{array}\right. \end{aligned}$$

denoted by \(\mathcal {Z}(a,b,c)\), where \(a,b\) and \(c\) are real numbers with \(a<b<c\). It is easy to verify that the inverse uncertainty distribution of \(\mathcal {Z}(a,b,c)\) is

$$\begin{aligned} \varPhi ^{-1}(\alpha )=\left\{ \begin{array}{ll} (1-2\alpha )a+2\alpha b,&{}\quad \text{ if } \quad \alpha <0.5\\ (2-2\alpha )b+(2\alpha -1)c,&{}\quad \text{ if } \quad \alpha \ge 0.5. \end{array}\right. \end{aligned}$$
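For readers who wish to experiment with these formulas, the following Python sketch (ours, purely illustrative and not part of the original paper) implements the zigzag uncertainty distribution and its inverse exactly as given above.

```python
# A minimal sketch of the zigzag uncertainty distribution Z(a, b, c) and its
# inverse, implementing the two formulas above (assumes a < b < c).

def zigzag_cdf(x, a, b, c):
    """Uncertainty distribution Phi(x) of Z(a, b, c)."""
    if x <= a:
        return 0.0
    if x <= b:
        return (x - a) / (2.0 * (b - a))
    if x <= c:
        return (x + c - 2.0 * b) / (2.0 * (c - b))
    return 1.0

def zigzag_inv(alpha, a, b, c):
    """Inverse uncertainty distribution Phi^{-1}(alpha) of Z(a, b, c), 0 < alpha < 1."""
    if alpha < 0.5:
        return (1.0 - 2.0 * alpha) * a + 2.0 * alpha * b
    return (2.0 - 2.0 * alpha) * b + (2.0 * alpha - 1.0) * c

# Example: for Z(1, 2, 4), Phi(2) = 0.5 and Phi^{-1}(0.9) = 0.2*2 + 0.8*4 = 3.6
print(zigzag_cdf(2, 1, 2, 4), zigzag_inv(0.9, 1, 2, 4))
```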

Theorem 1

(Peng and Iwamura 2010) A function \(\varPhi (x): \mathfrak {R}\rightarrow [0,1]\) is an uncertainty distribution if and only if it is a monotone increasing function except \(\varPhi (x)\equiv 0\) and \(\varPhi (x)\equiv 1\).

Definition 2

(Liu 2007) Let \(\xi \) be an uncertain variable. Then the expected value of \(\xi \) is defined by

$$\begin{aligned} E[\xi ]=\int ^{+\infty }_{0}{\mathcal {M}}\{\xi \ge r\}\text{ d }r-\int ^{0}_{-\infty }{\mathcal {M}}\{\xi \le r\}\text{ d }r \end{aligned}$$

provided that at least one of the two integrals is finite.

Let \(\xi \) be an uncertain variable with uncertainty distribution \(\varPhi \). Liu (2007) proved that

$$\begin{aligned} E[\xi ]=\int ^{+\infty }_{0}(1-\varPhi (x))\text{ d }x-\int ^{0}_{-\infty }\varPhi (x)\text{ d }x. \end{aligned}$$

In addition, we have the following result.

Theorem 2

(Liu 2010) Let \(\xi \) be an uncertain variable with uncertainty distribution \(\varPhi \). If its expected value exists, then

$$\begin{aligned} E[\xi ]=\int _{0}^{1}\varPhi ^{-1}(\alpha )d \alpha . \end{aligned}$$
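As a quick numerical sanity check (ours, not from the paper), Theorem 2 can be verified for a zigzag variable: integrating its inverse distribution gives the closed form \(E[\mathcal {Z}(a,b,c)]=(a+2b+c)/4\). The sketch below reuses zigzag_inv from the example following the zigzag distribution.

```python
# Sketch verifying Theorem 2 numerically: approximate
#   E[xi] = integral_0^1 Phi^{-1}(alpha) d alpha
# by a midpoint Riemann sum (reuses zigzag_inv defined earlier).

def expected_value(inv_dist, n=100_000):
    """Midpoint approximation of the integral of inv_dist over (0, 1)."""
    return sum(inv_dist((i + 0.5) / n) for i in range(n)) / n

a, b, c = 1.0, 2.0, 4.0
approx = expected_value(lambda alpha: zigzag_inv(alpha, a, b, c))
print(approx, (a + 2.0 * b + c) / 4.0)   # both approximately 2.25
```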

Definition 3

(Liu 2009a) The uncertain variables \(\xi _{1},\) \(\xi _{2},\ldots ,\) \(\xi _{n}\) are said to be independent if

$$\begin{aligned} {\mathcal {M}}\left\{ {\bigcap _{i=1}^{n}}(\xi _{i}\in B_{i})\right\} ={\bigwedge _{i=1}^{n}{\mathcal {M}}\{\xi _{i}\in B_{i}\}} \end{aligned}$$

for any Borel sets \(B_{1},B_{2},\ldots ,B_{n}\) of real numbers.

Let \(\xi \) and \(\eta \) be independent uncertain variables with finite expected values. Liu (2010) proved that for any real numbers \(a\) and \(b\), we have

$$\begin{aligned} E[a\xi +b\eta ]=aE[\xi ]+bE[\eta ]. \end{aligned}$$

A real-valued function \(f(x_{1},x_{2},\ldots ,x_{n})\) is said to be strictly increasing if

  1. (1)

    \(f(x_{1},x_{2},\ldots ,x_{n})\le f(y_{1},y_{2},\ldots ,y_{n})\) whenever \(x_{i}\le y_{i}\) for \(i=1,2,\ldots ,n\);

  2. (2)

    \(f(x_{1},x_{2},\ldots ,x_{n})< f(y_{1},y_{2},\ldots ,y_{n})\) whenever \(x_{i}<y_{i}\) for \(i=1,2,\ldots ,n\).

Theorem 3

(Liu 2010) Let \(\xi _{1}, \xi _{2}, \ldots , \xi _{n}\) be independent uncertain variables with regular uncertainty distributions \(\varPhi _{1}, \varPhi _{2}, \ldots , \varPhi _{n}\), respectively. If \(f\) is a strictly increasing function, then \(\xi =f(\xi _{1},\xi _{2},\ldots ,\xi _{n})\) is an uncertain variable with inverse uncertainty distribution

$$\begin{aligned} \varPsi ^{-1}(\alpha )=f\left( \varPhi ^{-1}_{1}(\alpha ), \varPhi ^{-1}_{2}(\alpha ),\ldots ,\varPhi ^{-1}_{n}(\alpha )\right) . \end{aligned}$$

Example 1

Let \(\xi _{1},\xi _{2}\) and \(\xi _{3}\) be independent uncertain variables with uncertainty distributions \(\varPhi _{1},\varPhi _{2}\) and \(\varPhi _{3}\), respectively. Since \(f(x_{1},x_{2},x_{3})=x_{1}+x_{2}+\exp (x_{3})\) is a strictly increasing function, \(\xi _{1} + \xi _{2} + \exp (\xi _{3})\) is an uncertain variable with inverse uncertainty distribution

$$\begin{aligned} \varPsi ^{-1}(\alpha )=\varPhi ^{-1}_{1}(\alpha )+\varPhi ^{-1}_{2}(\alpha ) +\exp \left( \varPhi ^{-1}_{3}(\alpha )\right) . \end{aligned}$$

By Theorem 3, we can easily obtain the inverse uncertainty distribution of \(f(\xi _{1},\xi _{2},\ldots ,\xi _{n})\). Furthermore, based on this theorem, we can transform a model with uncertain parameters into a deterministic one.
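To make the operational law concrete, here is a small Python sketch (ours) of Example 1, which simply plugs the component inverse distributions into \(f\). The three zigzag parameter sets are arbitrary illustrative choices, and zigzag_inv is the helper defined in the earlier sketch.

```python
import math

# Sketch of Theorem 3 applied to Example 1: the inverse distribution of
# xi1 + xi2 + exp(xi3) is f applied to the component inverse distributions.
def psi_inv(alpha, inv1, inv2, inv3):
    return inv1(alpha) + inv2(alpha) + math.exp(inv3(alpha))

# Arbitrary illustrative zigzag variables (parameters are not from the paper)
inv1 = lambda a: zigzag_inv(a, 1.0, 2.0, 3.0)
inv2 = lambda a: zigzag_inv(a, 0.0, 1.0, 2.0)
inv3 = lambda a: zigzag_inv(a, 0.0, 0.5, 1.0)
print(psi_inv(0.9, inv1, inv2, inv3))
```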

Problem description

In this paper, we consider the following problem. Suppose that at a central depot (e.g., an airport or a central inventory), \(m\) vehicles are available for \(n\) demanding locations. Our task is to plan the routes so that emergency medical supplies are transported by the vehicles to the demanding locations as soon as possible once an emergency event occurs.

To simplify the problem, we make the following assumptions: (a) each vehicle has a container with a physical limitation, so the total load of each vehicle cannot exceed its capacity; (b) each vehicle is assigned at most one route; (c) each route begins at the central depot and returns to the central depot; (d) each demanding location is serviced exactly once; and (e) the unloading time is neglected, i.e., assumed to be zero.

According to the discussion in section “Introduction”, the highly unpredictable nature of emergencies may lead to indeterminacy in both demands and running times, and these data usually cannot be obtained exactly at the moment of the emergency. Generally, we have no choice but to invite domain experts to evaluate belief degrees about them. Therefore, we use uncertain variables to describe the running times and demands. Before constructing the models, we list some notations and assumptions as follows:

  • \(i=0:\) depot (e.g. airport or central inventory);

  • \(i=1,2,\ldots ,n:\) demanding locations;

  • \(k=1,2,\ldots ,m:\) vehicles;

  • \(\xi _{ij}:\) running time from location \(i\) to location \(j\), \(i,j=0,1,2,\ldots ,n\); the \(\xi _{ij}\) are independent uncertain variables;

  • \(\eta _{i}:\) demand of demanding location \(i\), \(i=1,2,\ldots ,n\); the \(\eta _{i}\) are independent uncertain variables;

  • \(C_{k}:\) capacity of vehicle \(k, k=1,2,\ldots ,m\).

In the following, we describe the operational plan using the formulation of Liu (2009b) via two decision vectors \(\mathbf x \) and \(\mathbf y \).

  • \(\mathbf x =(x_{1},x_{2},\ldots ,x_{n})\) : integer decision vector representing \(n\) demanding locations with \(1\le x_{i}\le n\) and \(x_{i}\ne x_{j}\) for all \(i\ne j, i,j=1,2,\ldots ,n\);

  • \(\mathbf y =(y_{1},y_{2},\ldots ,y_{m-1})\) : integer decision vector with \(y_{0}=0\le y_{1}\le y_{2}\le \cdots \le y_{m-1}\le n=y_{m}\).

Clearly, the operational plan is fully determined by the decision vectors \(\mathbf x \) and \(\mathbf y \) in the following way: for \(1\le k\le m\), if \(y_{k}=y_{k-1}\), then vehicle \(k\) is not used; if \(y_{k}>y_{k-1}\), then vehicle \(k\) is used, and the tour of vehicle \(k\) is:

$$\begin{aligned} 0\rightarrow x_{y_{k-1}+1}\rightarrow x_{y_{k-1}+2}\rightarrow \cdots \rightarrow x_{y_{k}}\rightarrow 0. \end{aligned}$$
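The decoding of \((\mathbf x ,\mathbf y )\) into vehicle tours is straightforward to implement; the following Python sketch (ours, purely illustrative) returns the route of each vehicle, with the depot denoted by 0.

```python
# Sketch: decode the decision vectors (x, y) into one route per vehicle.
# Vehicle k serves locations x[y_{k-1}+1], ..., x[y_k] (1-based, as above).

def decode_routes(x, y, m):
    """Return m routes; an empty list means the vehicle is not used."""
    n = len(x)
    bounds = [0] + list(y) + [n]          # y_0 = 0 and y_m = n
    routes = []
    for k in range(1, m + 1):
        lo, hi = bounds[k - 1], bounds[k]
        if hi > lo:
            routes.append([0] + [x[j] for j in range(lo, hi)] + [0])
        else:
            routes.append([])             # vehicle k is not used
    return routes

# Example with the plan x = (4,5,2,1,7,6,3), y = (2,5) and m = 3 vehicles:
print(decode_routes([4, 5, 2, 1, 7, 6, 3], [2, 5], 3))
# -> [[0, 4, 5, 0], [0, 2, 1, 7, 0], [0, 6, 3, 0]]
```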

Generally speaking, emergencies may lead to a scarcity of relief supplies. It is natural that the decision maker would accept, to some extent, that the supplies delivered by a vehicle cannot fully meet the total demand on the corresponding route. However, at a given confidence level, which serves as a safety margin, the total demand on a route must not exceed the capacity of the assigned vehicle. Suppose that the total amount of medical supplies transported by each vehicle equals its capacity. The satisfaction constraint is then verified in the following way: for \(1\le k\le m\), if \(y_{k}>y_{k-1}\), then

$$\begin{aligned} {\mathcal {M}}\left\{ \sum _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge \alpha \end{aligned}$$

where \(\alpha \) is the predetermined confidence level.

For simplicity, we write \(\varvec{\xi }=\{\xi _{ij}: i,j=0,1,2,\ldots ,n\}\). Let \(f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\) be the total running time function of vehicle \(k\) in the corresponding route, \(k=1,2,\ldots ,m\). Then we have

$$\begin{aligned} f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })=\left\{ \begin{array}{ll} \xi _{0x_{y_{k-1}+1}}+\sum \nolimits _{i=y_{k-1}+1}^{y_{k}-1}\xi _{x_{i}x_{i+1}}+\xi _{x_{y_{k}}0}, \\ \quad \quad \quad \text{ if } \quad y_{k}>y_{k-1}\\ 0, \quad \quad \;\;\text{ if } \quad y_{k}=y_{k-1}. \end{array}\right. \end{aligned}$$

It is clear that \(f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\) is also an uncertain variable. We therefore need a criterion for ranking uncertain variables, since they cannot be ranked directly. A common approach is to use the expected value operator: the larger the expected value, the larger the corresponding uncertain variable. The goal of the problem can then be formulated as

$$\begin{aligned} \min \quad \max _{1\le k\le m} E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]. \end{aligned}$$

If the decision maker prefers to optimize the problem in the sense of expected running time, we may construct the following mathematical model for dispatching medical supplies in an emergency:

$$\begin{aligned} \left\{ \begin{array}{ll} \min \quad \max _{1\le k\le m} E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })] \\ \text{ subject } \text{ to }:\\ \;\;\quad {\mathcal {M}}\left\{ \sum \nolimits _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge \alpha ,\;\;\text{ if }\; y_{k}>y_{k-1}\\ \quad \;\, k=1,2,\ldots ,m\\ \quad \;\, 1\le x_{i}\le n,\quad i=1,2,\ldots ,n\\ \quad \;\, x_{i}\ne x_{j},\quad i\ne j,\quad i,j=1,2,\ldots ,n\\ \quad \;\, 0\le y_{1}\le y_{2}\le \cdots \le y_{m-1}\le n\\ \quad \;\, x_{i}, \quad i=1,2,\ldots ,n,\quad \text{ integers } \\ \quad \;\, y_{j}, \quad j=1,2,\ldots ,m-1,\quad \text{ integers } \end{array}\right. \end{aligned}$$
(1)

where \(\alpha \) is the predetermined confidence level.

In fact, the decision maker may consider the problem from another point of view: he may first specify a satisfactory maximal running time \(\bar{f}\), and then maximize the minimum uncertain measure that the running time is no more than this value. With this modeling idea, we may construct a belief degree-chance programming model as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} \max \quad \min _{1\le k\le m}{\mathcal {M}}\{f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\le \bar{f}\}\\ \text{ subject } \text{ to }:\\ \;\;\quad {\mathcal {M}}\left\{ \sum \nolimits _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge \alpha ,\;\;\text{ if }\; y_{k}>y_{k-1}\\ \quad \;\, k=1,2,\ldots ,m\\ \quad \;\, 1\le x_{i}\le n,\quad i=1,2,\ldots ,n\\ \quad \;\, x_{i}\ne x_{j},\quad i\ne j,\quad i,j=1,2,\ldots ,n\\ \quad \;\, 0\le y_{1}\le y_{2}\le \cdots \le y_{m-1}\le n\\ \quad \;\, x_{i}, \quad i=1,2,\ldots ,n,\quad \text{ integers } \\ \quad \;\, y_{j}, \quad j=1,2,\ldots ,m-1,\quad \text{ integers } \end{array}\right. \end{aligned}$$
(2)

where \(\alpha \) is the predetermined confidence level.

Theoretical analysis of the models

Models (1) and (2) are constructed in an uncertain environment. In order to seek their optimal solutions, we need to compute expected values and uncertain measures, so it is helpful to first analyze some mathematical properties of the models. We discuss this issue in the following.

For any feasible solution \((\mathbf x ,\mathbf y )\), we need to compute the expected value \(E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]\) for model (1).

Theorem 4

If \(\xi _{ij}\) are independent uncertain variables with uncertainty distributions \(\varPhi _{ij}, i,j=0,1,2,\ldots ,n\), respectively, then

$$\begin{aligned} E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]=\left\{ \begin{array}{ll} \int _{0}^{1}\varPhi _{0x_{y_{k-1}+1}}^{-1}(\alpha )d \alpha &{}\\ \quad +\sum \nolimits _{i=y_{k-1}+1}^{y_{k}-1}\int _{0}^{1}\varPhi ^{-1}_{x_{i}x_{i+1}}(\alpha )d \alpha &{}\\ \quad +\int _{0}^{1}\varPhi _{x_{y_{k}}0}^{-1}(\alpha )d \alpha ,&{}\quad if \,y_{k}>y_{k-1}\\ 0,&{}\quad if \quad y_{k}=y_{k-1}. \end{array}\right. \end{aligned}$$

Proof

Since \(\xi _{ij}, i,j=0,1,2,\ldots ,n\), are independent uncertain variables, it follows from the linearity of the expected value operator for independent uncertain variables that

$$\begin{aligned} E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]=\left\{ \begin{array}{ll} E[\xi _{0x_{y_{k-1}+1}}]+\sum \nolimits _{i=y_{k-1}+1}^{y_{k}-1}&{}\\ \quad E[\xi _{x_{i}x_{i+1}}]+E[\xi _{x_{y_{k}}0}],&{}\quad \text{ if } \quad y_{k}>y_{k-1}\\ 0,&{}\quad \text{ if } \quad y_{k}=y_{k-1}. \end{array}\right. \end{aligned}$$

According to Theorem 2, we can easily complete the proof. \(\square \)

In fact, if the uncertain variables \(\xi _{ij}\), \(i,j=0,1,2,\ldots ,n\), are of special types, for instance linear, zigzag, or normal uncertain variables, we can easily obtain the inverse uncertainty distributions \(\varPhi _{ij}^{-1}, i,j=0,1,2,\ldots ,n\), and thus compute the expected value \(E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]\). In most cases, however, this is difficult. Liu (2010) pointed out that an uncertain variable \(\xi _{i}\) with uncertainty distribution \(\varPhi _{i}\) can be represented, according to experts’ estimations, as in Table 1.

Table 1 The expression of uncertainty distribution of uncertain variable \(\xi _{i}\)

In Table 1, \(0.01,0.02,0.03,\ldots ,0.99\) in the first row are the values of uncertainty distribution \(\varPhi _{i}\) and \(t_{i}^{1},t_{i}^{2},t_{i}^{3},\ldots ,t_{i}^{99}\) in the second row are the corresponding values of \(\varPhi _{i}^{-1}(0.01)\), \(\varPhi _{i}^{-1}(0.02), \varPhi _{i}^{-1}(0.03),\ldots ,\) \(\varPhi _{i}^{-1}(0.99)\).

Based on Table 1, we can estimate the expected value \(E[\xi _{i}]=\int _{0}^{1}\varPhi _{i}^{-1}(\alpha )\text{ d }\alpha \) by the trapezoidal rule:

$$\begin{aligned}&\int _{0}^{1}\varPhi _{i}^{-1}(\alpha )\text{ d }\alpha \nonumber \\&\approx \sum _{s=1}^{98}\frac{0.01\left[ \varPhi _{i}^{-1}(0.01s)+\varPhi _{i}^{-1}(0.01(s+1))\right] }{2}\nonumber \\&= \sum _{s=1}^{98}\frac{t_{i}^{s}+t_{i}^{s+1}}{200}. \end{aligned}$$
(3)

Similarly, we can obtain an approximate value of \(E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]\) according to Eq. (3).
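As a concrete illustration (ours, not from the paper), Eq. (3) and Theorem 4 can be combined as follows, with each running time represented by its 99-point expert table; the table built from a zigzag variable via zigzag_inv (defined in the Preliminaries sketch) is only a stand-in for expert data.

```python
# Sketch of Eq. (3): trapezoidal estimate of E[xi] from the 99-point table
# t[0..98] = Phi^{-1}(0.01), ..., Phi^{-1}(0.99), and of E[f_k] as the sum of
# arc estimates along a route (Theorem 4).

def expected_from_table(t):
    """Trapezoidal estimate of E[xi]; t[s-1] holds Phi^{-1}(0.01 s)."""
    return sum(t[s] + t[s + 1] for s in range(98)) / 200.0

def expected_route_time(route, tables):
    """E[f_k] for route = [0, i1, ..., iq, 0]; tables[(i, j)] is the 99-point
    table of the running time from location i to location j."""
    return sum(expected_from_table(tables[(route[p], route[p + 1])])
               for p in range(len(route) - 1))

# Illustration only: a table from a zigzag variable as a stand-in for experts.
t = [zigzag_inv(0.01 * (s + 1), 10.0, 15.0, 20.0) for s in range(99)]
print(expected_from_table(t))   # prints 14.7; the exact expected value is
                                # (10 + 2*15 + 20)/4 = 15, the gap coming from
                                # Eq. (3) ignoring the tails below 0.01 and above 0.99
```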

For any \((\mathbf x ,\mathbf y )\), if \(y_{k}>y_{k-1}\), \(k=1,2,\ldots ,m\), we need to check whether it satisfies the following chance constraint:

$$\begin{aligned} {\mathcal {M}}\left\{ \sum _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge \alpha . \end{aligned}$$
(4)

Fortunately, the constraint can be transformed into its crisp equivalent when the uncertain variables \(\eta _{i}\), \(i=1,2,\ldots ,n\), are independent.

Theorem 5

If \(\eta _{i}\) are independent uncertain variables with regular uncertainty distributions \(\varPsi _{i}, i=1,2,\ldots ,n\), respectively, then the chance constraint (4) can be transformed into

$$\begin{aligned} \sum _{j=y_{k-1}+1}^{y_{k}}\varPsi _{x_{j}}^{-1}(\alpha )\le C_{k}, \quad k=1,2,\ldots ,m. \end{aligned}$$

Proof

Let

$$\begin{aligned} \tau _{k}=g_{k}\left( \eta _{x_{y_{k-1}+1}},\eta _{x_{y_{k-1}+2}},\ldots ,\eta _{x_{y_{k}}}\right) =\sum _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}, \end{aligned}$$

for \(k=1,2,\ldots ,m.\) Suppose that the uncertainty distributions of \(\tau _{k}\) are \(\varTheta _{k}, k=1,2,\ldots ,m\), respectively. Since \(\eta _{i}\) are independent uncertain variables, it follows from Theorem 3 that, for any \(0<\alpha <1\), we have

$$\begin{aligned} \varTheta _{k}^{-1}(\alpha )=\sum _{j=y_{k-1}+1}^{y_{k}}\varPsi _{x_{j}}^{-1}(\alpha ),\quad k=1,2,\ldots ,m. \end{aligned}$$

By using the inverse uncertainty distribution, from

$$\begin{aligned} {\mathcal {M}}\left\{ \sum _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge \alpha , \quad k=1,2,\ldots ,m, \end{aligned}$$

we know

$$\begin{aligned} C_{k}\ge \varTheta _{k}^{-1}(\alpha )=\sum _{j=y_{k-1}+1}^{y_{k}}\varPsi _{x_{j}}^{-1}(\alpha ), \quad k=1,2,\ldots ,m. \end{aligned}$$

The proof is completed. \(\square \)

For each confidence level \(\alpha \), we obtain an optimal objective value for model (1); choosing different values of \(\alpha \) may thus yield different optimal objective values. What, then, is the relation between the optimal objective value and \(\alpha \)? We analyze this issue next.

Theorem 6

If \(\eta _{i}\) are independent uncertain variables with regular uncertainty distributions \(\varPsi _{i}, i=1,2,\ldots ,n\), respectively, then the optimal objective value of model (1) is non-decreasing with respect to confidence level \(\alpha \).

Proof

Suppose that the feasible set of the constraint (4) with respect to \(\alpha \) is denoted by \(F(\alpha )\), and the corresponding optimal objective value is denoted by \(Opti(\alpha )\). Without loss of generality, we assume \(\alpha _{1}\ge \alpha _{2}\). It is clear that

$$\begin{aligned} \sum _{j=y_{k-1}+1}^{y_{k}}\varPsi _{x_{j}}^{-1}(\alpha _{1})\ge \sum _{j=y_{k-1}+1}^{y_{k}}\varPsi _{x_{j}}^{-1}(\alpha _{2}). \end{aligned}$$

According to Theorem 5, \(F(\alpha _{1})\subseteq F(\alpha _{2})\). Since the objective of model (1) is to minimize the maximum expected running time, the optimal objective value of model (1) with respect to \(\alpha _{1}\) is greater than or equal to that with respect to \(\alpha _{2}\); in other words, \(Opti(\alpha _{1})\ge Opti(\alpha _{2})\). The theorem is proved. \(\square \)

By an argument similar to the proof of Theorem 6, we obtain the following corollary directly.

Corollary 1

If \(\eta _{i}\) are independent uncertain variables with regular uncertainty distributions \(\varPsi _{i}, i=1,2,\ldots ,n\), respectively, then the optimal objective value of model (2) is non-increasing with respect to confidence level \(\alpha \).

Hybrid intelligent algorithm

To solve the models, we must compute the expected value \(E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]\), the inverse uncertainty distribution \(\varPsi _{x_{j}}^{-1}(\alpha )\), or the uncertain measure \({\mathcal {M}}\{f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\le \bar{f}\}\). If the uncertain variables are of special types such as linear, zigzag, or normal uncertain variables, these quantities can be obtained easily; in most cases, however, this is difficult. In addition, the proposed models are complex and difficult to solve by traditional methods. Therefore, we need an algorithm that solves the proposed models in general cases. The genetic algorithm (GA), proposed by Holland (1975), has successfully solved many complex industrial optimization problems that are hard to handle by traditional methods; it has been well discussed in numerous works, such as Fogel (1994) and Gen and Cheng (2000), to which interested readers may refer. In this paper, we design a hybrid intelligent algorithm based on uncertain simulation and GA: the technique of uncertain simulation is first applied to compute inverse uncertainty distributions, expected values, and uncertain measures, and it is then integrated into the GA to solve the uncertain models. The algorithm is introduced as follows.

Uncertain simulation

Inverse uncertainty distribution

As discussed in section “Theoretical analysis of the models”, an uncertain variable \(\xi _{i}\) can be represented as in Table 1. Then, for any \(0<\alpha <1\), the inverse uncertainty distribution \(\varPhi _{i}^{-1}(\alpha )\) can be estimated by

$$\begin{aligned} \frac{\varPhi _{i}^{-1}(0.01\lfloor 100\alpha \rfloor )+\varPhi _{i}^{-1}(0.01\lceil 100\alpha \rceil )}{2} =\frac{t_{i}^{\lfloor 100\alpha \rfloor }+t_{i}^{\lceil 100\alpha \rceil }}{2}. \end{aligned}$$

Therefore, the inverse uncertainty distribution \(\varPsi ^{-1}_{x_{j}}(\alpha )\) can be obtained in the same way.
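A direct implementation of this interpolation (ours, illustrative) reads:

```python
import math

# Sketch: estimate Phi^{-1}(alpha) from the 99-point table t, where
# t[s-1] = Phi^{-1}(0.01 s), by averaging the two neighbouring grid points.
def inverse_from_table(alpha, t):
    """Valid for 0.01 <= alpha <= 0.99."""
    pos = 100.0 * alpha
    lo = int(math.floor(pos + 1e-9))   # small tolerance against floating-point noise
    hi = int(math.ceil(pos - 1e-9))
    return (t[lo - 1] + t[hi - 1]) / 2.0

# e.g. inverse_from_table(0.905, t) averages t[89] and t[90],
# i.e. Phi^{-1}(0.90) and Phi^{-1}(0.91)
```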

Expected value

As mentioned before, the expected value \(E[\xi _{i}]\) can be estimated by (3). Thus, the algorithm for computing \(E[\xi _{i}]\) can be summarized as follows:

  1. Step 1

    Set \(E=0\), and \(s=1\).

  2. Step 2

    Let \(y_{s}=t^{s}_{i}+t^{s+1}_{i}\), and \(E\leftarrow E+y_{s}\).

  3. Step 3

    If \(s<98\), let \(s\leftarrow s+1\), and then turn back to Step 2.

  4. Step 4

    Report \(E/200\) as the estimation of \(E[\xi _{i}]\).

By the linearity of the expected value operator for independent uncertain variables, the expected value \(E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })]\) can also be estimated by the algorithm designed above.

Uncertain measure

For model (2), we need to compute the uncertain measure \({\mathcal {M}}\{f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\le \bar{f}\}\), where \(\bar{f}\) is a predetermined maximal value. Let \((\mathbf x ,\mathbf y )\) be a feasible solution. For \(1\le k\le m\), if \(y_{k}>y_{k-1}\), it is clear that \(f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\) is a strictly increasing function with respect to \(\xi _{0x_{y_{k-1}+1}},\xi _{x_{y_{k-1}+1}x_{y_{k-1}+2}},\ldots ,\xi _{x_{y_{k}}0}\). For simplicity, we assume that \(g(x_{1},x_{2},\ldots ,x_{n})\) is a strictly increasing function with respect to \(x_{1},x_{2},\ldots ,x_{n}\), and we illustrate how to compute \({\mathcal {M}}\{g(\xi _{1},\xi _{2},\ldots ,\xi _{n})\le \bar{f}\}\) as an example. We also assume that \(\xi _{i}\) are independent uncertain variables with uncertainty distributions \(\varPhi _{i}\), \(i=1,2,\ldots ,n\), presented as in Table 1. According to Theorem 3, the inverse uncertainty distribution \(\varPsi ^{-1}(\alpha )\) of \(g(\xi _{1},\xi _{2},\ldots ,\xi _{n})\) can be expressed as in Table 2.

Table 2 Inverse uncertainty distribution \(\varPsi ^{-1}(\alpha )\) of \(g(\xi _{1},\xi _{2},\ldots ,\xi _{n})\)

Now, we can design the process for computing the uncertain measure \({\mathcal {M}}\{g(\xi _{1},\xi _{2},\ldots ,\xi _{n})\le \bar{f}\}\) as follows:

  1. Step 1

    Set \(i=1\).

  2. Step 2

    Let \(y_{i}=g(t^{i}_{1},t^{i}_{2},\ldots ,t^{i}_{n})\).

  3. Step 3

    If \(y_{i}<\bar{f}\), let \(i\leftarrow i+1\), and then turn back to Step 2.

  4. Step 4

    Report \(\alpha =0.01i\) as the estimation of \({\mathcal {M}}\{g(\xi _{1}, \xi _{2},\ldots ,\xi _{n})\le \bar{f}\}\).
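The following Python sketch (ours) mirrors Steps 1–4: it scans the 99 tabulated points \(\varPsi ^{-1}(0.01 i)=g(t_{1}^{i},\ldots ,t_{n}^{i})\) and reports the first level at which the threshold \(\bar{f}\) is reached; capping the result at 0.99 when the threshold is never reached is our own convention, and the zigzag tables below are hypothetical stand-ins for expert data.

```python
# Sketch of Steps 1-4: estimate M{ g(xi_1, ..., xi_n) <= f_bar } on a 0.01 grid.
# tables[j][i] holds Phi_j^{-1}(0.01 (i + 1)) for variable xi_j (Table 1 format).

def measure_below(g, tables, f_bar):
    for i in range(99):
        if g(*(t[i] for t in tables)) >= f_bar:
            return 0.01 * (i + 1)      # first grid point where g reaches f_bar
    return 0.99                        # g stays below f_bar at every grid point

# Example: g = sum of two (hypothetical) zigzag running times
t1 = [zigzag_inv(0.01 * (i + 1), 3.0, 5.0, 8.0) for i in range(99)]
t2 = [zigzag_inv(0.01 * (i + 1), 2.0, 4.0, 6.0) for i in range(99)]
print(measure_below(lambda a, b: a + b, [t1, t2], 11.0))   # about 0.70
```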

Representation

Suppose that there are \(n\) demanding locations and \(m\) vehicles. In the GA, a decision vector \(\mathbf x =(x_{1},x_{2},\ldots ,x_{n})\) is represented by chromosome \(S=(s_{1},s_{2},\ldots ,s_{n})\), where \(\{s_{1},s_{2},\ldots ,s_{n}\}\) is a rearrangement of \(\{1,2,\ldots ,n\}\). In addition, a decision vector \(\mathbf y =(y_{1},y_{2},\ldots ,y_{m-1})\) is represented by chromosome \(T=(t_{1},t_{2},\ldots ,t_{m-1})\), where \(0\le t_{1}\le t_{2}\le \cdots \le t_{m-1}\le n\). The matching between the solution vector \((\mathbf x ,\mathbf y )\) and the chromosome vector \((S,T)\) is \(\mathbf x \equiv S, \mathbf y \equiv T\).

Initialization process

We initialize \( pop _{-} size \) feasible chromosomes. The feasibility of the chromosomes is checked by uncertain simulation.

From integer set \(I=\{1,2,\ldots ,n\}\), an integer vector \(S=(s_{1},s_{2},\ldots ,s_{n})\), where \(1\le s_{i}\le n\) and \(s_{i}\ne s_{j}\) for \(i\ne j, i,j=1,2,\ldots ,n\), is generated randomly. Also, an integer vector \(T=(t_{1},t_{2},\ldots ,t_{m-1})\), where \(0\le t_{1}\le t_{2}\le \cdots \le t_{m-1}\le n\), is generated from \(J=\{0,1,2,\ldots ,n\}\). Then the feasibility of \((S,T)\) is checked by uncertain simulation as follows,

$$\begin{aligned}&\hbox {If}\,y_{1}>0, \sum _{j=1}^{y_{1}}\varPsi _{x_{j}}^{-1}(\alpha )>C_{1},\quad \hbox {return}\,0;\\&\hbox {If}\,y_{2}>y_{1}, \sum _{j=y_{1}+1}^{y_{2}}\varPsi _{x_{j}}^{-1}(\alpha )>C_{2},\quad \hbox {return}\,0;\\&\cdots \\&\hbox {If}\,y_{m}>y_{m-1}, \sum _{j=y_{m-1}+1}^{y_{m}}\varPsi _{x_{j}}^{-1}(\alpha )>C_{m},\quad \hbox {return}\,0;\\&\hbox {Otherwise}, \quad \hbox {return}\,1; \end{aligned}$$

where \(1\) means feasible and \(0\) means infeasible. If \((S,T)\) is feasible, it is taken as an initial chromosome; otherwise, another integer vector \((S,T)\) is generated until a feasible one is obtained and taken as an initial chromosome. Then,

$$\begin{aligned} (S_{1},T_{1}),(S_{2},T_{2}),\ldots ,(S_{ pop _{-} size },T_{ pop _{-} size }) \end{aligned}$$

can be produced as the initial feasible chromosomes by repeating the above process \( pop _{-} size \) times.
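The initialization step can be summarized in Python as follows (ours, illustrative); demand_inv maps each demanding location \(i\) to a function returning \(\varPsi _{i}^{-1}(\alpha )\), for example built from Table 1 data or from a zigzag distribution.

```python
import random

# Sketch of the initialization process: draw random chromosomes (S, T) and
# keep only those satisfying the crisp capacity constraint of Theorem 5.

def feasible(S, T, capacities, demand_inv, alpha):
    """True iff, on every used route, the sum of Psi^{-1}(alpha) fits the capacity."""
    n, m = len(S), len(capacities)
    bounds = [0] + list(T) + [n]
    for k in range(1, m + 1):
        lo, hi = bounds[k - 1], bounds[k]
        if hi > lo and sum(demand_inv[S[j]](alpha) for j in range(lo, hi)) > capacities[k - 1]:
            return False
    return True

def init_population(pop_size, n, capacities, demand_inv, alpha):
    """Generate pop_size feasible chromosomes (S, T)."""
    population = []
    while len(population) < pop_size:
        S = random.sample(range(1, n + 1), n)                          # permutation of 1..n
        T = sorted(random.randint(0, n) for _ in range(len(capacities) - 1))
        if feasible(S, T, capacities, demand_inv, alpha):
            population.append((S, T))
    return population
```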

Genetic algorithm

Following crossover, mutation, and selection, the new population is ready for its next evaluation. The genetic algorithm for solving the proposed uncertain programming models is summarized as follows.

  1. Step 1

    Initialize \( pop _{-} size \) chromosome vectors randomly, in which uncertain simulation is used to check the feasibility of the chromosome vectors.

  2. Step 2

    Update the chromosome vectors by crossover and mutation operations.

  3. Step 3

    Compute the fitness of each chromosome vector in the following way: first, calculate the objective values of the chromosome vectors by uncertain simulation; second, rearrange the chromosome vectors according to their objective values; finally, compute the fitness of each chromosome vector according to the rank-based evaluation function.

  4. Step 4

    Select the chromosome vectors by spinning the roulette wheel.

  5. Step 5

    Repeat Steps 2 to 4 for a given number of cycles.

  6. Step 6

    Take the best chromosome vector as an approximate optimal solution.
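Steps 3 and 4 are the parts of the GA most specific to this framework, so we sketch them below in Python (ours, illustrative). The rank-based evaluation function is assumed to take the common form \(\text {eval}(i)=a(1-a)^{i-1}\) for the chromosome ranked \(i\), consistent with the parameter \(a\) used in the numerical experiments, although the paper does not spell out its exact form.

```python
import random

# Sketch of Steps 3-4: rank-based fitness and roulette-wheel selection.
# `ranked` must already be sorted from best to worst objective value.

def rank_fitness(ranked, a=0.05):
    """Assumed rank-based evaluation: eval(i) = a * (1 - a)^(i - 1)."""
    return [a * (1.0 - a) ** i for i in range(len(ranked))]

def roulette_select(population, fitness, count):
    """Spin the roulette wheel `count` times and return the selected chromosomes."""
    total = sum(fitness)
    selected = []
    for _ in range(count):
        r, acc = random.uniform(0.0, total), 0.0
        for chrom, fit in zip(population, fitness):
            acc += fit
            if acc >= r:
                selected.append(chrom)
                break
        else:
            selected.append(population[-1])   # guard against floating-point edge case
    return selected
```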

Numerical examples

To show the applications of the proposed models and to test the effectiveness of the designed hybrid intelligent algorithm, we shall present some examples in this section. Our test platform is a personal computer (CPU: Intel Pentium Dual-Core T3400; Memory: 1 GB DDRII).

In the procedure of the algorithm, the crossover probability is \(P_{c}=0.3\), the mutation probability is \(P_{m}=0.2\), the population size is \( pop _{-} size =10\), and the parameter \(a\) in the rank-based evaluation function is \(0.05\). We simulate an emergency situation in which a pandemic disease breaks out in a city and a certain quantity of medication needs to be delivered from the airport to major downtown hospitals as soon as possible. Assume that three vehicles are available at the airport for seven hospitals in the city, and that the capacities of the vehicles are \(C_{1}=45, C_{2}=50, C_{3}=40\), respectively. As mentioned before, the highly unpredictable nature of emergencies and the severity of the accident may lead to uncertainty in both demands and running times. In this situation, the usual way is to obtain the uncertain data by means of experience evaluation or expert advice.

In these examples, the running times \(\xi _{ij}\) \((i,j=0,1, 2,\ldots ,7)\) and the demands \(\eta _{i}\) \((i=1,2,\ldots ,7)\) are assumed to be independent zigzag uncertain variables, listed in Tables 3 and 4, respectively. In Table 3, “hospital \(i\)” is abbreviated as “\(\text{ h } i\)”, \(i=1,2,\ldots ,7\). Suppose that the satisfaction constraint for each route should hold at confidence level \(\alpha =0.9\).

Table 3 The running time \(\xi _{ij}\)
Table 4 The demand \(\eta _{i}\) of hospital \(i\)

Example 2

If the decision maker prefers optimizing the objective function in the sense of expected running time, then we can construct the following uncertain programming model:

$$\begin{aligned} \left\{ \begin{array}{ll} \min \quad \max _{1\le k\le 3} E[f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })] \\ \text{ subject } \text{ to }:\\ \;\;{\mathcal {M}}\left\{ \sum \nolimits _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge 0.9, \;\;\text{ if }\;\; y_{k}>y_{k-1}\\ \quad \;\, k=1,2,3\\ \quad \;\, 1\le x_{i}\le 7,\quad i=1,2,\ldots ,7\\ \quad \;\, x_{i}\ne x_{j},\quad i\ne j,\quad i,j=1,2,\ldots ,7\\ \quad \;\, 0\le y_{1}\le y_{2}\le 7\\ \quad \;\, x_{i}, \quad i=1,2,\ldots ,7,\quad \text{ integers } \\ \quad \;\, y_{j}, \quad j=1,2,\quad \text{ integers. } \end{array}\right. \end{aligned}$$
(5)

We use the proposed algorithm to solve the problem of Example 2. A run of the algorithm with 1,000 generations shows that the optimal solution is

$$\begin{aligned}&\mathbf x ^{*}=(4,5,2,1,7,6,3), \\&\mathbf y ^{*}=(2,5). \end{aligned}$$

In other words, the optimal operational plan is

$$\begin{aligned}&\hbox {Vehicle 1: airport}\,\rightarrow 4\rightarrow 5\rightarrow \,\hbox {airport} \\&\hbox {Vehicle 2: airport}\,\rightarrow 2\rightarrow 1\rightarrow 7\rightarrow \,\hbox {airport} \\&\hbox {Vehicle 3: airport}\,\rightarrow 6\rightarrow 3\rightarrow \,\hbox {airport} \\ \end{aligned}$$

whose objective value is \(14.095\). Figure 1 shows the evolution process.

Fig. 1 The evolution process of Example 2

As we know, the objective value will differ if we change the parameters of the algorithm. Thus, we carried out further experiments for this example. The parameter values and the corresponding objective values are shown in Table 5. To compare the objective values, we use the relative error, defined as error \(=\) (objective value \(-\) best objective value)/best objective value \(\times 100\,\%\). With different parameters in the algorithm, the relative errors of the objective values are no larger than \(1.88\,\%\), which indicates that the hybrid intelligent algorithm is stable and robust.

Table 5 Comparison of the objective values of Example 2

Example 3

If the decision maker predetermines a maximal running time of 18 and sets the goal as maximizing the minimum uncertain measure that the running time is no more than this value, then the belief degree-chance programming model can be built as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} \max \quad \min _{1\le k\le 3}{\mathcal {M}}\{f_{k}(\mathbf x ,\mathbf y ,\varvec{\xi })\le 18\}\\ \text{ subject } \text{ to }:\\ \;\;{\mathcal {M}}\left\{ \sum \nolimits _{j=y_{k-1}+1}^{y_{k}}\eta _{x_{j}}\le C_{k}\right\} \ge 0.9,\;\;\text{ if }\;\; y_{k}>y_{k-1}\\ \quad \;\, k=1,2,3\\ \quad \;\, 1\le x_{i}\le 7,\quad i=1,2,\ldots ,7\\ \quad \;\, x_{i}\ne x_{j},\quad i\ne j,\quad i,j=1,2,\ldots ,7\\ \quad \;\, 0\le y_{1}\le y_{2}\le 7\\ \quad \;\, x_{i}, \quad i=1,2,\ldots ,7,\quad \text{ integers } \\ \quad \;\, y_{j}, \quad j=1,2,\quad \text{ integers. } \end{array}\right. \end{aligned}$$
(6)

The optimal solution of Example 3, obtained by the proposed algorithm, is \(\mathbf x ^{*}=(2,3,5,6,7,1,4)\), \(\mathbf y ^{*}=(2,5)\), whose objective value is 0.8. Figure 2 shows the evolution process.

Fig. 2 The evolution process of Example 3

Conclusion

This paper investigated the problem of dispatching medical supplies in emergency events based on uncertainty theory. As is well known, many uncertain factors appear in emergency events; we frequently lack observed data, and an estimated probability distribution may be far from the cumulative frequency. In this case, uncertainty theory offers a powerful tool for dealing with such uncertain factors. In this paper, the running times and the demands were assumed to be uncertain variables. In order to deliver the medical supplies from the depot to the demanding locations as soon as possible, two uncertain programming models were presented. Since the proposed models are complex, we provided an algorithm to solve them in general cases. In addition, numerical examples were provided to illustrate the application of the models. The results of the experiments show that the designed algorithm is robust to the parameter settings of the GA.

Compared with previous works, the main contributions of this paper can be summarized as follows: (1) as a new tool for dealing with uncertain factors in the real world, uncertain programming was introduced into the problem of dispatching medical supplies in emergency events; (2) the proposed models provide a useful way to reflect fairness among demanding locations, whereas traditional models depend heavily on the total running time; (3) a hybrid intelligent algorithm based on uncertain simulation and a genetic algorithm was proposed to seek an approximate optimal solution.

Moreover, in this paper the parameters in the indeterminate environment are assumed to be uncertain variables. It would be interesting to investigate dispatch strategies in an uncertain random environment, in which uncertainty and randomness coexist.