Introduction

The hunger for new products and services provides business opportunities that are seized by investors seeking more profit. Decision makers face the general and major question of how to satisfy customers' demands while maximizing the total profit or, alternatively, minimizing the total cost. In order to meet the demand for products, several business units, including manufacturers, suppliers, transporters, warehouses, retailers, and customers, are directly or indirectly engaged in supply chain activities (Chopra and Meindel 2007). The elements that constitute a supply chain operate efficiently only if the whole supply chain is well structured (Davarzani and Rezapour 2009). In order to satisfy their customers' demands, industries are forced to restructure their supply chains in response to changes in the business environment (Dullaert et al. 2007). The introduction of Supply Chain Management (SCM) by Oliver and Webber (1982) was a response to the competitive environment of the late 1970s caused by the quality revolution (Erenguc et al. 2006). Supply chain design and planning, as one of the most critical problems in the context of SCM, has attracted both researchers and practitioners (Simchi-Levi et al. 2003).

In reality, supply chains have to deal with inexact, incomplete and estimated data coming from a complex, fluctuating and dynamic business environment. One of the major factors that affect the effectiveness of supply chains is uncertainty, which propagates up and down the supply chain (Jung et al. 2004) and influences its performance (Bhatnagar and Sohal 2005). Therefore, failing to properly incorporate uncertainty in decision making leads to suboptimal results, compromising the competency of the whole supply chain. More specifically, demand volumes and supply capacities are subject to uncertainty due to economic instability and market fluctuations, besides other endogenous and exogenous factors. The importance of uncertainty has inspired many researchers to investigate stochastic models in supply chain problems (Georgiadis et al. 2011; Mizgier et al. 2012; Yang et al. 2011). However, under certain conditions (e.g., in the case of technological innovations and changes in consumer preferences) the assumption of precisely known probability distribution parameters is questionable (Yang and Liu 2013). On the other hand, in most real-world SCM problems, where model parameters are often imprecise because of incomplete information and/or unavailable data, fuzzy programming techniques are more applicable (Paksoy et al. 2012). Specifically, fuzzy concepts have been successfully applied to supply chain design and planning problems (Fazel et al. 2012; Jouzdani et al. 2013a, b; Jung and Jeong 2012; Wei et al. 2012). The ease of use and applicability of triangular fuzzy numbers (Lee et al. 2012), in addition to the concepts of superiority and inferiority for such numbers (van Hop 2007), were our inspirations for utilizing fuzzy linear programming techniques to model the uncertainty. The theorem proved by van Hop (2007) provides a suitable tool for comparing two triangular fuzzy numbers. More specifically, by using this theorem, the two sides of a constraint in an optimization model can be regarded as triangular fuzzy numbers and compared through the superiority and inferiority concepts. One may refer to the review on supply chain uncertainty by Simangunsong et al. (2012) for further reading in this regard.

In recent decades, as the social and environmental aspects of business activities have gained prominence, supply chain decision makers have had to take these concerns into account. To highlight the importance of the subject, one may refer to the many researchers who have quantified the environmental impact of transportation in terms of \(\hbox {CO}_{2}\) emissions (Harris et al. 2011; Le and Lee 2013; Nieuwenhuis et al. 2012; Zhang et al. 2014). Specifically, as one of the most obvious and tangible socio-economic effects of supply chain activities, traffic congestion affects any business involved in road transportation, from the bio-fuel industry (Bai et al. 2011) and dairy supply chains (Jouzdani et al. 2013a, b) to urban freight distribution networks (Figliozzi 2011), and even influences the way competitive supply chains behave (Konur and Geunes 2011). Since many products in different industries are transported via the road transportation system, the impact of congestion on business units is significant and worth investigating. The problem calls for particular attention when products are transported with a high frequency or in the case of food products that need special treatment (Agustina et al. 2014). More specifically, deteriorating items such as dairy and fresh food products should be transported by refrigerated vehicles, usually on a daily basis. Therefore, traffic congestion significantly affects both the transportation system and the businesses. In addition, the concurrent consideration of uncertainty and traffic congestion is important due to the interrelationship between these two phenomena. For further investigation in this regard, one may refer to a recent study by Jouzdani et al. (2013a, b).

The supply chain design and planning problem has its roots in the facility location problem, and the former inherits its NP-hardness from the latter (Nemhauser and Wolsey 1988). Therefore, conventional methods fail to provide a solution to the problem in a reasonable amount of time. This calls for the development of meta-heuristics that can tackle the problem more effectively. Several researchers have utilized conventional or heuristic methods in the context of supply chains (see Table 1); however, research on meta-heuristics for supply chain design and planning is scarce. The recent interest of researchers in the subject has led to the development of meta-heuristics that can deal with the difficulties of the problem (Kadadevaramath et al. 2012; Soleimani et al. 2013; Subramanian et al. 2013; Venkatesan and Kumanan 2012; Wang and Watada 2012). The Electromagnetism-like Algorithm (EMA), a relatively new population-based optimization algorithm introduced by Birbil and Fang (2003), has been shown to provide satisfying results in many optimization problems. In particular, EMA has been shown to be effective and efficient in tackling nonlinear programming models (Kusiak et al. 2013; Zeng et al. 2015). In addition, it has demonstrated superior outcomes in comparison to other population-based methods (Chang et al. 2009; Debels et al. 2006; Jamili et al. 2011; Jolai et al. 2012; Naji-Azimi et al. 2010). In order to further improve its performance, EMA can easily be combined with other methods (the applications of EMA and its hybrids are discussed in more detail in the “Computational experiments” section). EMA was developed for solving continuous problems with bounded variables; however, it has also shown flexibility in solving discrete problems. The flexibility of EMA in this regard, on the one hand, and the fact that the problem is modeled as a mixed-integer zero-one optimization model, on the other, were reasons for using EMA. In addition, in the context of transportation planning in supply chains, the amounts of products are bounded between a lower and an upper bound. This makes EMA an eligible candidate for tackling the problem investigated in this paper. Moreover, combinations of EMA and Simulated Annealing (SA) have already been shown to yield satisfying results (Jamili et al. 2011; Jouzdani et al. 2012; Naderi et al. 2010). Specifically, EMA, as a population-based algorithm, has satisfying exploration capability, while SA, as an algorithm designed for finding near-optimal solutions to combinatorial optimization problems, provides exploitation of the search space. Furthermore, research on the application of EMA to supply chain design and planning is still scarce; therefore, the current research is an attempt to provide new insights into the integration of a newly developed population-based algorithm and supply chain design and planning. For further reading about the applications of population-based algorithms in supply chain design and planning, with an emphasis on green logistics, the reader is referred to a recent review by Zhang et al. (2015). These considerations inspired us to augment EMA with SA to tackle the problem of supply chain design and planning under demand and supply uncertainties and congestion. The local search procedure in EMA is replaced by SA to strengthen the exploitation capability of the algorithm. In addition, a “snap” mechanism is proposed to deal with the binary decision variables in the supply chain design and planning problem. The main contributions of our work can be summarized as follows.

  (a) Modeling: The proposed model is inspired by the lack of studies on supply chain design and planning considering the following features concurrently:

    • Multiple products

    • Demand and supply uncertainty

    • Congestion costs

    • Multiple transportation modes

    • Time value of money

  (b) Solution approach: The proposed solution approach is a hybrid meta-heuristic algorithm designed for mixed-integer non-linear problems. The Improved Electromagnetism-like Algorithm (IEMA) is the result of augmenting EMA with SA. In the case of supply chain design and planning, promising results are obtained, indicating the capability of IEMA.

The remainder of the paper is organized as follows. In the next section, a review of the most recent and most related works in the literature is presented. The “Superiority and inferiority concepts for fuzzy numbers” section presents an overview of the fuzzy linear programming approach. In “The proposed model” section, the mathematical model of the problem and related concepts of optimization under uncertainty are provided. “The solution approach” section introduces the proposed algorithm. The results of numerical examples are provided in the “Computational experiments” section and, finally, the “Conclusions and future works” section concludes the paper.

Literature review

Supply chain optimization has been a point of attraction for both researchers and practitioners. Wang et al. (2011) optimized multiple objectives considering environmental concerns. Their model captured the trade-off between the total cost of the supply chain and its environmental influence and was solved by ILOG CPLEX 9.0. Following a stochastic programming approach, Chen and Fan (2012) proposed a model for bio-ethanol supply chain planning under demand and supply uncertainties considering truck capacities and travel-time- and distance-dependent costs, and solved their proposed model by using a decomposition algorithm (DA). In a work by Pishvaee et al. (2011), a robust optimization model was proposed to handle the data uncertainty in a closed-loop supply chain network design problem. They solved their model by using ILOG CPLEX 10.1 optimization software. A detailed mathematical model was studied by Georgiadis et al. (2011) for the supply network design problem considering multiple products, shared production resources, warehouses, distribution centers and customer zones. They modeled the time-varying uncertainty in demand following a scenario-based approach and solved the model by using the ILOG CPLEX 11.2.0 solver incorporated in GAMS 22.9. Mirzapour Al-e-hashem et al. (2011) considered a multi-supplier, multi-manufacturer and multi-customer supply chain, addressing a multi-site, multi-period and multi-product aggregate production planning (APP) problem under uncertainty, and utilized LINGO 8.0 to solve their proposed model. Dal-Mas et al. (2011) presented a dynamic, spatially explicit and multi-echelon mixed integer linear programming framework for assessing the economic performance and risk on investment of biomass-based ethanol supply chains. They used the CPLEX solver of GAMS to solve their MILP model. The total costs of facility location, inventory holding, transportation and shortage are considered by Döyen et al. (2012) in a humanitarian relief logistics network problem, for which a Lagrangian Relaxation (LR) heuristic was used. Liao et al. (2011a, b) proposed an integrated model considering stochastic demand following a multi-objective approach and developed a multi-objective evolutionary algorithm (MOEA) as a solution method for their model. In a research by Bai et al. (2011), a model for bio-fuel refinery location and supply chain planning considering the impact of traffic congestion was presented and solved by applying linear programming relaxation (LPR) and Lagrangian relaxation bounding (LRB) procedures. A model for stochastic supply chain network design under uncertainty was presented as a two-stage stochastic program and solved by applying sample average approximation in a work by Bidhandi and Yusuff (2011). Le and Lee (2013) considered location, inventory, production and distribution costs and transportation mode selection along with the environmental impacts of supply chain planning and proposed a model for which LINGO 11 was utilized to find optimal solution(s). Sadjady and Davoudpour (2012) investigated a two-echelon multi-commodity supply chain network design considering mode selection, lead times and inventory costs and solved their proposed model by using a Lagrangian-based heuristic solution algorithm for real-sized problems. Wang and Watada (2012) proposed a Value-at-Risk-based fuzzy random facility location model considering uncertain demands and costs and presented a particle swarm optimization (PSO) solution method. Bashiri et al. (2012) proposed a production–distribution strategic and tactical planning model for a multiple-echelon and multiple-commodity supply chain operating in a certain environment and solved their model by using the CPLEX MIP solver. Almansoori and Shah (2012) investigated the design and operation of a stochastic hydrogen supply chain network following a scenario-based approach and considering the uncertainty caused by long-term variation in hydrogen demand. Their models were formulated as Mixed Integer Linear Programs (MILP) and were solved by using GAMS (CPLEX v9.0). The strategic planning problem of integrated bio-ethanol–sugar supply chains under uncertainty was addressed by Kostin et al. (2012) and formulated as a multi-scenario mixed-integer linear program. They applied sample average approximation to approximate the solution of the stochastic problem, which entails the calculation of two models solved iteratively.

In a recent work, Jouzdani et al. (2013a, b) proposed a dynamic dairy facility location and supply chain planning model under traffic congestion and demand uncertainty and provided a case study of Tehran, Iran. Soleimani et al. (2013) proposed a model for a multi-echelon, multi-period, multi-product closed-loop supply chain design and planning problem and utilized a GA to solve their model. Ramezani et al. (2013) proposed a robust design for a closed-loop supply chain network considering the uncertainty in demand and the rate of return. They utilized a Scenario Relaxation Algorithm (SRA) to solve their proposed model. Supply chain network design with service-level considerations in the presence of disruptions and demand uncertainty is addressed by Baghalian et al. (2013). A hybrid Memetic Algorithm (MA) is designed by Yang and Liu (2013) to obtain the fuzzy solution in a mean-risk SCND problem. To clearly illustrate the characteristics of our research, an overview of the model features and solution approaches in the recent literature is presented in Table 1, in which the last row presents the various features captured by the proposed model and its solution algorithm. In this research, an effort is made to incorporate major practical aspects which have not been considered simultaneously in the existing literature. For ease of presentation and convenience in explaining the numerical examples, multiple planning periods and various types of flows in supply networks are not included; however, these may be taken into account by following a similar logic.

Table 1 An overview of model features and solution approaches in the recent related literature

Superiority and inferiority concepts for fuzzy numbers

In this section, a brief review of the related concepts in fuzzy number theory, i.e., superiority and inferiority, is presented. These concepts are utilized for modeling the uncertainty in different parameters of the problem. Our main inspirations for applying fuzzy linear programming based on triangular fuzzy numbers are their ease of use and their applicability for modeling uncertainty.

Different phenomena may be compared based on their various attributes. Object \(O_1 \) is considered superior (inferior) to \(O_2 \) regarding an attribute A if the most (least) preferable value of that attribute for \(O_1 \) is superior (inferior) to the highest (lowest) value of that attribute for \(O_2 \). Based on this notion, van Hop (2007) proposed two comparison measures, superiority and inferiority, for fuzzy numbers. In what follows, these concepts are discussed for triangular fuzzy numbers.

Suppose that \(\tilde{U}=\left\{ {\left. {\tilde{u}=\left( {u,r,s} \right) } \right| r,s\ge 0} \right\} \) is a set of triangular fuzzy numbers, where u, r and s are the center, the left spread and the right spread of a triangular fuzzy number \(\tilde{u}\), respectively. Then the membership degree of x is given by:

$$\begin{aligned} \mu _{\tilde{u}} \left( x \right) =\left\{ \begin{array}{ll} \max \left( 0,1-\frac{u-x}{r} \right) , \qquad &{}\quad {if \quad x\le u ,\; r>0} \\ {1 ,} &{}\quad {if \quad x=u} \\ \max \left( 0,1-\frac{x-u}{s} \right) , \qquad &{}\quad {if \quad x>u ,\; s>0} \\ {0 ,} &{} \quad {otherwise} \end{array} \right. \end{aligned}$$
(1)

Triangular fuzzy numbers are well-studied and capable of quantifying various types of information. In addition, a crisp real number \(u\in \mathfrak {R}\) can be represented as a triangular fuzzy number with zero spreads as \(u=\left( {u, 0, 0} \right) \).

Theorem 1

(van Hop 2007) Consider two fuzzy triangular numbers, \(\tilde{P}=\left( {u,a,b} \right) \) and \(\tilde{Q}=\left( {v,c,d} \right) \). If \(\tilde{P}\le \tilde{Q}\), then the superiority of \(\tilde{Q}\) over \(\tilde{P}\) is

$$\begin{aligned} S\left( {\tilde{Q}, \tilde{P}} \right) =v-u+\frac{d-b}{2} \end{aligned}$$
(2)

and the inferiority of \(\tilde{P}\) to \(\tilde{Q}\) is

$$\begin{aligned} I\left( {\tilde{P}, \tilde{Q}} \right) =v-u+\frac{a-c}{2} \end{aligned}$$
(3)

The above theorem provides a valuable tool for comparing two triangular fuzzy numbers. More specifically, by means of this theorem, the two sides of a constraint in an optimization model can be regarded as triangular fuzzy numbers and compared through the superiority and inferiority concepts. In this paper, the demand for products and the supplied amounts are assumed to be subject to uncertainty and are modeled as triangular fuzzy numbers. Here, the above theorem can be applied for comparing the “incoming annual flow of a product to a demand node” with the “triangular fuzzy annual demand for that product in that demand node”, and for comparing the “outgoing annual flow of a product from a supply node” with the “triangular fuzzy annual supply for that product in that supply node”. In order to shed light on the connection between these concepts and their application in supply chain planning, a small numerical example is provided in what follows.

Suppose that the demand for a product at some demand point and the amount of that product actually transported to that point can be represented by triangular fuzzy numbers \(\tilde{P}=\left( {100,3,10} \right) \) and \(\tilde{Q}=\left( {105,0,0} \right) \), respectively. It should be noted that the transported amount of product is actually a crisp number (equal to 105) represented as a triangular fuzzy number. Then, the inferiority of \(\tilde{P}\) to \(\tilde{Q}\) represents the amount by which the fuzzy demand is less than (inferior to) the transported amount of product and is equal to 8 according to Eq. (3). Similarly, assume that the supply capacity for a product at some supply node and the amount of that product transported from that node can be represented by triangular fuzzy numbers \(\tilde{{P}'}=\left( {993,0,0} \right) \) and \(\tilde{{Q}'}=\left( {1000,14,18} \right) \), respectively. Therefore, the superiority of \(\tilde{{Q}'}\) over \(\tilde{{P}'}\) represents the amount by which the supply capacity is more than (superior to) the transported product amount and is equal to 25 according to Eq. (2).
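
To make the computation of these measures concrete, a short Python sketch is given below; it simply encodes Eqs. (2) and (3) for triangular fuzzy numbers written as (center, left spread, right spread) tuples. The function names and the sample values are ours and are used only for illustration.

```python
# A minimal sketch of the superiority/inferiority measures of Theorem 1
# (van Hop 2007) for triangular fuzzy numbers (center, left spread, right spread).
from typing import NamedTuple


class TriFuzzy(NamedTuple):
    center: float   # u (or v)
    left: float     # left spread
    right: float    # right spread


def superiority(q: TriFuzzy, p: TriFuzzy) -> float:
    """S(Q, P) = v - u + (d - b)/2, assuming P <= Q (Eq. 2)."""
    return q.center - p.center + (q.right - p.right) / 2.0


def inferiority(p: TriFuzzy, q: TriFuzzy) -> float:
    """I(P, Q) = v - u + (a - c)/2, assuming P <= Q (Eq. 3)."""
    return q.center - p.center + (p.left - q.left) / 2.0


if __name__ == "__main__":
    # Illustrative values only: a fuzzy demand compared with a crisp flow.
    demand = TriFuzzy(200.0, 6.0, 12.0)   # fuzzy annual demand
    flow = TriFuzzy(210.0, 0.0, 0.0)      # crisp incoming flow, zero spreads
    print("inferiority of demand to flow:", inferiority(demand, flow))
    print("superiority of flow over demand:", superiority(flow, demand))
```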

The proposed model

In this section, supply chain design and planning under supply and demand uncertainties and traffic congestion is formulated as a non-linear mixed-integer program. This section includes the assumptions of this research, followed by a description of the notation used in the mathematical formulation of the problem and the mathematical model itself. In addition, some auxiliary variables are defined in order to facilitate the representation of the model.

Assumptions

The wise use of realistic and simplifying assumptions is key to the successful modeling of a real-world problem. With this in mind, in this research the problem is investigated under the following assumptions.

  (1) The numbers of candidate supply nodes, potential warehouse locations, demand points, transportation modes and product types are known and fixed.

  (2) All parameters are certain except the fuzzy annual supply and the fuzzy annual demand, as described in the “Nomenclature” section.

  (3) No inventory shortage is allowed in the warehouses; however, there may be surplus quantities of products, for which a penalty cost is incurred.

  (4) The number of periods in the planning horizon is assumed to be infinite; i.e., the supply chain is designed to operate theoretically forever and, in practice, for a long period of time.

  (5) The parameters are assumed to remain unchanged, or to change negligibly, from one period to another; i.e., the supply chain is designed to operate in a stable environment.

  (6) The products are produced and shipped in continuous quantities.

Nomenclature

Indices and sets

i :

index for candidate supply points (\(i\in I\))

j :

index for candidate warehouse locations (\(j\in J\))

k :

index for demand nodes (\(k\in K\))

m :

index for transportation modes (\(m\in M\))

p :

index for products (\(p\in P\))

Parameters

\(FC_i^S \) :

Fixed cost of opening a supplier facility at candidate location i

\(FC_j^W \) :

Fixed cost of opening a warehouse at candidate location j

\(MCAP^{m}\) :

Capacity of transportation mode m

\(CONCO^{m}\) :

Traffic congestion coefficient of transportation mode m

\(\tilde{S}_i^p \) :

Triangular fuzzy annual supply for product p in supply node i

\(CP\left( {\tilde{S}_i^p } \right) \) :

The center point of triangular fuzzy annual supply for product p in supply node i

\(LS\left( {\tilde{S}_i^p } \right) \) :

The left spread of triangular fuzzy annual supply for product p in supply node i

\(RS\left( {\tilde{S}_i^p } \right) \) :

The right spread of triangular fuzzy annual supply for product p in supply node i

\(WCAP_j^p \) :

The capacity available for product p in warehouse node j

\(C_j^p \) :

The annual inventory holding cost for product p at warehouse node j

\(C_j^{p,surp} \) :

The annual cost associated with surplus quantities of product p in warehouse node j

\(\tilde{D}_k^p \) :

Triangular fuzzy annual demand for product p in demand node k

\(CP\left( {\tilde{D}_k^p } \right) \) :

The center point of triangular fuzzy annual demand for product p in demand node k

\(LS\left( {\tilde{D}_k^p } \right) \) :

Left spread of triangular fuzzy annual demand for product p in demand node k

\(RS\left( {\tilde{D}_k^p } \right) \) :

Right spread of triangular fuzzy annual demand for product p in demand node k

\(C_{k,p}^{\inf _D } \) :

Penalty cost for \(\Lambda _k^{p,\inf _D } \)

\(C_{i,p}^{\sup _S } \) :

Penalty cost for \(\Lambda _i^{p,\sup _S } \)

\(TCAP_{i,j} \) :

Annual traffic capacity of the link from node i to node j

\(BF_{i,j} \) :

Basic annual flow of the link from node i to node j

\(FFTT_{i,j} \) :

Free flow travel time of the link from node i to node j

\(C_{i,j}^m \) :

Annual transportation cost for a vehicle of mode m on the link from node i to node j

IR :

The interest rate

MVT :

Monetary value of time

\(\beta \) :

Congestion constant

Auxiliary variables

\(\Lambda _k^{p,\inf _D }\) :

The inferiority of the fuzzy annual demand for product p in demand node k to the amount of product p actually transported to demand node k

\(\Lambda _i^{p,\sup _S } \) :

The superiority of the fuzzy annual supply for product p in supply node i over the amount of product p actually transported from node i

\(F_i^{p,out_S } \) :

Outgoing annual flow of product p from supply node i

\(F_j^{p,in_W } \) :

Incoming annual flow of product p to warehouse node j

\(F_j^{p,out_W } \) :

Outgoing annual flow of product p from warehouse node j

\(F_k^{p,in_D } \) :

Incoming annual flow of product p to demand node k

\(F_{i,j} \) :

Total annual traffic flow on the link from node i to node j

\(Z_{i,j}^m \) :

The total annual number of vehicles of mode m transporting goods on the link from node i to node j

TFC :

Total fixed facility location cost

TCC :

Total annual traffic congestion cost

TTC :

Total annual transportation cost

TUC :

Total annual uncertainty cost

TIC :

Total annual inventory-related cost

Decision variables

\(Y_i^S \) :

1 if a supplier is opened in potential supply node i, 0 otherwise

\(Y_j^W \) :

1 if a warehouse is opened in candidate warehouse location j, 0 otherwise

\(X1_{i,j}^{m,p} \) :

The amount of product p annually transported by transportation mode m on the link from supply node i to warehouse node j

\(X2_{j,k}^{m,p} \) :

The amount of product p annually transported by transportation mode m on the link from warehouse node j to demand node k

Mathematical model

The mathematical model of the problem can be expressed by using the above notation. The objective of the proposed model is to minimize a cost function consisting of five parts, presented by the following equations.

$$\begin{aligned} TFC= & {} \sum _{i\in I} {FC_i^S Y_i^S } +\sum _{j\in J} {FC_j^W Y_j^W } \end{aligned}$$
(4)
$$\begin{aligned} TCC= & {} MVT\times \left[ \sum _{i\in I} {\sum _{j\in J} {FFTT_{i,j} \left( {1+0.15\left( {{F_{i,j} } /{TCAP_{i,j} }} \right) ^{\beta }} \right) } } \right. \nonumber \\&\left. +\sum _{j\in J} {\sum _{k\in K} {FFTT_{j,k} \left( {1+0.15\left( {{F_{j,k} }/{TCAP_{j,k} }} \right) ^{\beta }} \right) } } \right] \end{aligned}$$
(5)
$$\begin{aligned} TTC= & {} \sum _{i\in I} {\sum _{j\in J} {\sum _{m\in M} {C_{i,j}^m Z_{i,j}^m } } } \end{aligned}$$
(6)
$$\begin{aligned} TUC= & {} \sum _{k\in K} {\sum _{p\in P} {C_{k,p}^{\inf _D } \Lambda _k^{p,\inf _D } } } +\sum _{i\in I} {\sum _{p\in P} {C_{i,p}^{\sup _S } \Lambda _i^{p,\sup _S } } } \end{aligned}$$
(7)
$$\begin{aligned} TIC= & {} \sum _{j\in J} \sum _{p\in P} \left[ C_j^p F_j^{p,in_W } \right. \nonumber \\&\left. +\, C_j^{p,surp} \left( {F_j^{p,in_W } -F_j^{p,out_W } } \right) \right] \end{aligned}$$
(8)

Equation (4) calculates the total fixed facility cost, consisting of the supplier and warehouse opening costs. The traffic congestion cost is calculated through the link performance function provided by the U.S. Bureau of Public Roads and presented in Eq. (5), in which the total flow from a supply node to a warehouse node and the total flow from a warehouse node to a customer node are obtained from the following equations, respectively.

$$\begin{aligned} F_{i,j}= & {} BF_{i,j} +\sum _{m\in M} {CONCO^{m}Z_{i,j}^m }\quad \forall i\in I,\forall j\in J \end{aligned}$$
(9)
$$\begin{aligned} F_{j,k}= & {} BF_{j,k} +\sum _{m\in M} {CONCO^{m}Z_{j,k}^m } \quad \forall j\in J,\forall k\in K\nonumber \\ \end{aligned}$$
(10)

where

$$\begin{aligned} Z_{i,j}^m= & {} \left\lfloor {\frac{\mathop {\sum }\limits _{p\in P} {X1_{i,j}^{m,p} } }{MCAP^{m}}} \right\rfloor +1\quad \forall i\in I,\forall j\in J \end{aligned}$$
(11)
$$\begin{aligned} Z_{j,k}^m= & {} \left\lfloor {\frac{\mathop {\sum }\limits _{p\in P} {X2_{j,k}^{m,p} } }{MCAP^{m}}} \right\rfloor +1\quad \forall j\in J,\forall k\in K \end{aligned}$$
(12)

calculate the number of vehicles transporting products from supply node i to warehouse node j and the total number of vehicles travelling from warehouse node j to demand node k, respectively. In the above equations, \(\left\lfloor x \right\rfloor \) represents the largest integer less than or equal to \(x\in R\). Equation (6) calculates the transportation cost. The cost associated with the uncertainty is obtained by Eq. (7) in which

$$\begin{aligned} \Lambda _k^{p,\inf _D }= & {} F_k^{p,in_D } -CP\left( {\tilde{D}_k^p } \right) +LS\left( {\tilde{D}_k^p } \right) \hbox { and} \end{aligned}$$
(13)
$$\begin{aligned} \Lambda _i^{p,\sup _S }= & {} CP\left( {\tilde{S}_i^p } \right) -F_i^{p,out_S } +RS\left( {\tilde{S}_i^p } \right) \end{aligned}$$
(14)

are calculated by using the superiority and inferiority concepts for fuzzy triangular numbers (van Hop 2007). In Eqs. (13) and (14) we have

$$\begin{aligned} F_k^{p,in_D }= & {} \left( {\sum _{j\in J} {Y_j^W \sum _{m\in M} {X2_{j,k}^{m,p} } } } \right) \hbox { and} \end{aligned}$$
(15)
$$\begin{aligned} F_i^{p,out_S }= & {} Y_i^S \left( {\sum _{j\in J} {Y_j^W \sum _{m\in M} {X1_{i,j}^{m,p} } } } \right) \end{aligned}$$
(16)

Due to uncertainty, there might be surplus quantities of products in the warehouses. Since no shortage is allowed, only the cost (penalty) associated with the surplus amounts of products is taken into account. Hence, Eq. (8) presents the inventory-related cost, where

$$\begin{aligned} F_j^{p,in_W }= & {} Y_j^W \left( {\sum _{i\in I} {Y_i^S \sum _{m\in M} {X1_{i,j}^{m,p} } } } \right) \hbox { and} \end{aligned}$$
(17)
$$\begin{aligned} F_j^{p,out_W }= & {} Y_j^W \left( {\sum _{k\in K} {\sum _{m\in M} {X2_{j,k}^{m,p} } } } \right) \end{aligned}$$
(18)

Based on the above discussion, the objective function and the constraints can be written as follows, where \(L\) in constraints (22) and (23) denotes a sufficiently large positive constant.

$$\begin{aligned}&\min obj=TFC+\frac{1}{IR}\left( {TCC+TTC+TUC+TIC} \right) \nonumber \\ \end{aligned}$$
(19)
$$\begin{aligned}&subject~to\nonumber \\&\quad F_j^{p,in_W } \le WCAP_j^p ,\quad \forall j\in J,\forall p\in P \end{aligned}$$
(20)
$$\begin{aligned}&\quad F_j^{p,out_W } \le F_j^{p,in_W } , \quad \forall j\in J,\forall p\in P \end{aligned}$$
(21)
$$\begin{aligned}&\quad X1_{i,j}^{m,p} \le Y_i^S \times L, \quad \forall i\in I,\forall j\in J,\forall m\in M,\forall p\in P\nonumber \\ \end{aligned}$$
(22)
$$\begin{aligned}&\quad X2_{j,k}^{m,p} \le Y_j^W \times L, \quad \forall j\in J,\forall k\in K,\forall m\in M,\forall p\in P \nonumber \\\end{aligned}$$
(23)
$$\begin{aligned}&\quad \Lambda _k^{p,\inf _D } \ge 0, \quad \forall p\in P,\forall k\in K \end{aligned}$$
(24)
$$\begin{aligned}&\quad \Lambda _i^{p,\sup _S } \ge 0, \quad \forall p\in P,\forall i\in I \end{aligned}$$
(25)
$$\begin{aligned}&\quad Y_i^S \in \left\{ {0,1} \right\} ,\quad \forall i\in I \end{aligned}$$
(26)
$$\begin{aligned}&\quad Y_j^W \in \left\{ {0,1} \right\} , \quad \forall j\in J \end{aligned}$$
(27)
$$\begin{aligned}&\quad X1_{i,j}^{m,p} \ge 0,\quad \forall i\in I,\forall j\in J,\forall m\in M,\forall p\in P \end{aligned}$$
(28)
$$\begin{aligned}&\quad X2_{j,k}^{m,p} \ge 0,\quad \forall j\in J,\forall k\in K,\forall m\in M,\forall p\in P \end{aligned}$$
(29)
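
As an illustration of how the congestion-related terms are evaluated, the following Python sketch computes Eqs. (9) and (11) and the bracketed BPR expression of Eq. (5) for a single supply-to-warehouse link; all numerical values, the two-mode setting and the single-link simplification are our own illustrative assumptions.

```python
# A single-link sketch of the vehicle-count and congestion terms
# (Eqs. 5, 9 and 11). Parameter values are illustrative, not from the paper.
import math

MCAP = {1: 40.0, 2: 25.0}        # capacity of transportation modes m = 1, 2
CONCO = {1: 3.0, 2: 2.0}         # congestion coefficients of the modes
X1 = {(1, "p1"): 130.0,          # amount of product p1 shipped by mode 1
      (2, "p2"): 55.0}           # amount of product p2 shipped by mode 2
BF_ij = 500.0                    # basic annual flow on the link
TCAP_ij = 900.0                  # annual traffic capacity of the link
FFTT_ij = 1.5                    # free-flow travel time on the link
MVT = 10.0                       # monetary value of time
beta = 4.0                       # congestion constant

# Eq. (11): number of vehicles of each mode, floor(sum_p X1 / MCAP) + 1
Z = {m: math.floor(sum(x for (mm, _), x in X1.items() if mm == m) / MCAP[m]) + 1
     for m in MCAP}

# Eq. (9): total annual traffic flow on the link
F_ij = BF_ij + sum(CONCO[m] * Z[m] for m in MCAP)

# Bracketed BPR term of Eq. (5), scaled by the monetary value of time
congestion_cost = MVT * FFTT_ij * (1 + 0.15 * (F_ij / TCAP_ij) ** beta)

print(Z, F_ij, round(congestion_cost, 2))
```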

It should be noted that the fixed costs are one-time payments, while the variable costs are incurred in each period of the planning horizon. In the proposed model, we assume that variations in the variable costs over future periods are negligible. Therefore, these costs are modeled as uniform series payments and are converted to their present values by using the uniform series present worth factor expressed as

$$\begin{aligned} \left( {P/A;IR;n} \right) =\frac{\left( {1+IR} \right) ^{n}-1}{IR\left( {1+IR} \right) ^{n}} \end{aligned}$$
(30)

where n is the number of periods (White et al. 1983). When the payments are expected to continue for an unlimited number of periods, the above equation changes to the following one assuming \(n\rightarrow \infty \).

$$\begin{aligned} \left( {P/A;IR;\infty } \right) =\frac{1}{IR} \end{aligned}$$
(31)

Let us define the error of approximating the expression in Eq. (30) by the one presented in Eq. (31) as follows.

$$\begin{aligned} \delta =\left( {P/A;IR;\infty } \right) -\left( {P/A;IR;n} \right) \end{aligned}$$
(32)

Now, by definition of limit, it can be easily proved that if n is large enough, then \(\delta \) is arbitrarily small. Specifically, the following condition is sufficient for \(\delta \) being smaller than any arbitrary positive real number \(\varepsilon \).

$$\begin{aligned} n>\frac{-\ln \left( {\varepsilon \times IR} \right) }{\ln \left( {1+IR} \right) } \end{aligned}$$
(33)

More specifically, if the interest rate is large enough, then the condition holds for small values of \(\varepsilon \) and n. For example, assuming \(n=20\) and \(IR=0.25\) we have \(\delta =0.0461\); and for \(n=50\), \(\delta \) decreases to 0.0001. As another example, for \(n=50\) and \(IR=0.15\) we have \(\delta =0.0102\), and when \(n=100\) and \(IR=0.09\), \(\delta \) decreases to 0.0057. Therefore, by theoretically assuming an unlimited number of periods and approximating Eq. (30) by Eq. (31), without loss of generality, the complexity of the model is drastically reduced, because an index/set of time periods is not needed. Therefore, the assumption regarding the number of periods is justifiable.
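
The approximation error of Eq. (32) and the sufficient condition of Eq. (33) are easy to verify numerically; the short Python sketch below reproduces the \(n=20\) and \(n=50\) cases with \(IR=0.25\) quoted above.

```python
# Sketch of the present-worth approximation error (Eqs. 30-33).
import math

def pw_factor(ir: float, n: int) -> float:
    """Uniform series present worth factor (P/A; IR; n), Eq. (30)."""
    return ((1 + ir) ** n - 1) / (ir * (1 + ir) ** n)

def delta(ir: float, n: int) -> float:
    """Error of approximating (P/A; IR; n) by 1/IR, Eq. (32)."""
    return 1.0 / ir - pw_factor(ir, n)

def n_bound(ir: float, eps: float) -> float:
    """Value of n beyond which delta < eps is guaranteed, Eq. (33)."""
    return -math.log(eps * ir) / math.log(1 + ir)

print(round(delta(0.25, 20), 4))   # about 0.0461
print(round(delta(0.25, 50), 4))   # about 0.0001
print(n_bound(0.25, 0.05))         # n > ~19.6 suffices for eps = 0.05
```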

The solution approach

Exact methods, such as gradient-based approaches or the branch-and-bound (B&B) algorithm, have proved inefficient in solving most real-world problems. The emergence of meta-heuristic algorithms can be regarded as a response to the demand for fast and flexible optimization methods. Meta-heuristics may be categorized into two major groups: meta-heuristics basically designed to solve continuous problems and methods developed to deal with discrete ones. From another perspective, meta-heuristic methods may be classified into population-based and single-solution-based approaches. In what follows, a population-based algorithm known as the Electromagnetism-like Algorithm (EMA) is described to provide a basis for the introduction of IEMA, which is used as a solution approach for mixed integer problems. Our interest in EMA stems from four reasons: (1) its ability to sufficiently explore the continuous search space in comparison to many other algorithms (such as Genetic Algorithms, Simulated Annealing, etc.) (Birbil and Fang 2003; Demirhan et al. 1999), (2) the ease of augmenting the algorithm with the “snap” procedure used to handle discrete variables, (3) the ease of improvement by replacing the local search procedure with the SA algorithm and (4) its ease of implementation.

The electromagnetism-like algorithm

One of the population-based meta-heuristics originally developed for solving continuous optimization problems is the Electromagnetism-like Algorithm introduced by Birbil and Fang (2003). The main idea of EMA is based on electromagnetism theory. In the original version of EMA, the particles are placed randomly in the solution space and then moved according to their charges and the forces they exert on each other. The charges are proportional to the objective function value of each particle. Algorithm 1 presents the general procedure of the original EMA.

Algorithm 1 The general procedure of the original EMA
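
Algorithm 1 is not reproduced here; as a rough orientation only, the following simplified Python sketch outlines the generic EMA loop (random initialization, local search, charge and force calculation, and particle movement) in the spirit of Birbil and Fang (2003). It is a minimal illustration under our own simplifying choices, not the exact procedure used in this paper.

```python
# A rough, generic sketch of the original EMA loop for box-constrained
# minimization; parameter names follow the popSize/maxIter/LSIter/delta
# convention used in the text, but the details are simplified.
import random
import math

def ema(f, lb, ub, popSize=20, maxIter=100, LSIter=5, delta=0.1):
    dim = len(lb)
    pop = [[random.uniform(lb[d], ub[d]) for d in range(dim)] for _ in range(popSize)]

    def clip(x):
        return [min(max(x[d], lb[d]), ub[d]) for d in range(dim)]

    for _ in range(maxIter):
        # Coordinate-wise random local search around each particle.
        for i in range(popSize):
            for _ in range(LSIter):
                cand = clip([x + delta * random.uniform(-1, 1) * (ub[d] - lb[d])
                             for d, x in enumerate(pop[i])])
                if f(cand) < f(pop[i]):
                    pop[i] = cand
        vals = [f(x) for x in pop]
        best = min(range(popSize), key=vals.__getitem__)
        denom = sum(v - vals[best] for v in vals) or 1e-12
        q = [math.exp(-dim * (v - vals[best]) / denom) for v in vals]
        for i in range(popSize):
            if i == best:
                continue                      # keep the current best in place
            F = [0.0] * dim
            for j in range(popSize):
                if j == i:
                    continue
                dist2 = sum((pop[j][d] - pop[i][d]) ** 2 for d in range(dim)) or 1e-12
                mag = q[i] * q[j] / dist2
                sign = 1.0 if vals[j] < vals[i] else -1.0   # attract toward better particles
                for d in range(dim):
                    F[d] += sign * (pop[j][d] - pop[i][d]) * mag
            norm = math.sqrt(sum(c * c for c in F)) or 1e-12
            lam = random.random()
            pop[i] = clip([pop[i][d] + lam * F[d] / norm for d in range(dim)])
    vals = [f(x) for x in pop]
    best = min(range(popSize), key=vals.__getitem__)
    return pop[best], vals[best]

# Example: minimize the 2-D sphere function on [-5, 5]^2.
x_star, f_star = ema(lambda x: sum(c * c for c in x), [-5.0, -5.0], [5.0, 5.0])
print(x_star, f_star)
```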

EMA is originally designed to solve continuous optimization problems with bounded variables; however, it has been successfully applied to many discrete problems such as project scheduling (Debels et al. 2006), machine scheduling (Chang et al. 2009), periodic job-shop scheduling (Jamili et al. 2011), set covering (Naji-Azimi et al. 2010), cell formation (Jolai et al. 2012), flow-shop scheduling (Davoudpour and Hadji Molana 2008; Naderi et al. 2010), assembly sequences planning (Jabbarzadeh et al. 2012) and travelling salesman problem (Javadian et al. 2008; Wu et al. 2006).

The proposed improved electromagnetism-like algorithm (IEMA)

Moreover, EMA has been successfully combined with other methods (Chang et al. 2009; Debels et al. 2006). In particular, EMA has shown satisfactory results when combined with SA and applied to discrete problems (Naderi et al. 2010) and continuous search spaces (Jouzdani et al. 2012). Kirkpatrick et al. (1983) introduced SA as a general-purpose meta-heuristic method capable of escaping local minima by allowing jumps to higher energy states. The analogy between SA and the annealing process used by a craftsman in forging a sword from an alloy lies in the use of temperature to reach the desired result. In SA, the temperature is a parameter used to balance intensification and diversification of the search process. SA has proved to be an effective and efficient algorithm for solving optimization problems (Jouzdani et al. 2013a, b). However, it is known to be parameter-sensitive; i.e., the selection of the temperature parameter greatly affects the performance of the algorithm and, therefore, parameter tuning for SA is of significant importance. Solution representation is another key factor in SA, as in any other meta-heuristic. Solutions should be suitably encoded to create a homomorphism between the solution space and the set of corresponding representations. In addition, a neighbor of the current solution is the one examined in the next step of the algorithm, and the neighborhood structure determines the way in which the algorithm acts in this regard. Therefore, the neighborhood structure may greatly influence the quality of the solution and the time spent on obtaining it.

In this research, a hybridization of EMA and SA is designed and applied to the supply chain design and planning problem, formulated as a mixed integer non-linear model. More specifically, EMA is augmented by replacing the simple local search procedure with SA. Furthermore, a greedy heuristic is utilized to generate the initial population of solutions. In the proposed method, EMA is modified to deal with discrete variables as well as continuous ones. The most obvious extension of EMA updates the discrete variables together with the continuous ones. However, this approach may fail to find a satisfying solution, because imposing an equal evolution pace on discrete and continuous variables prevents the algorithm from evolving adequately in the continuous search space (Yiqing et al. 2007). Therefore, a probabilistic approach is applied to keep an appropriate balance between the update rates of discrete and continuous variables.

Figure 1 depicts the IEMA flowchart, in which the force calculation and particle movement are largely the same as in the original EMA. In what follows, IEMA and its related concepts are described in detail.

Solution representation

Each solution is encoded into a structure consisting of two matrices: a \(\left| I \right| \times \left| J \right| \times \left| M \right| \times \left| P \right| \) matrix called \(X_1 \) and a \(\left| J \right| \times \left| K \right| \times \left| M \right| \times \left| P \right| \) matrix denoted by \(X_2 \). In the context of the proposed EMA-based algorithm, in each particle, \(X_1 \) and \(X_2 \) represent \(X1_{i,j}^{m,p} \) and \(X2_{j,k}^{m,p} \), respectively. The information about the binary decision variables, \(Y_i^S \) and \(Y_j^W \), is incorporated into the particle by applying the “snap” procedure, described in the “Handling discrete and continuous variables” section, to \(X_1 \) and \(X_2 \). Therefore, each particle represents a set of all decision variables, i.e., a solution to the problem. For example, \(X_1 \left( {3,2,1,4} \right) =14.32\) means that 14.32 units of product 4 are transported by a vehicle of mode 1 from supply location 3 to warehouse node 2.
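
A minimal sketch of this particle encoding, using NumPy arrays and illustrative dimensions of our own choosing, is shown below; the binary location decisions are not stored explicitly but are implied by the flow matrices, in line with the “snap” procedure described later.

```python
# Sketch of the particle (solution) encoding: two flow matrices X1 and X2.
import numpy as np

nI, nJ, nK, nM, nP = 4, 3, 5, 2, 4        # |I|, |J|, |K|, |M|, |P| (illustrative)

rng = np.random.default_rng(0)
X1 = rng.uniform(0.0, 20.0, size=(nI, nJ, nM, nP))   # supplier -> warehouse flows
X2 = rng.uniform(0.0, 20.0, size=(nJ, nK, nM, nP))   # warehouse -> demand flows

# Binary location decisions are implied: a node is "open" if it ships anything.
Y_S = (X1.sum(axis=(1, 2, 3)) > 0).astype(int)        # Y_i^S for each supply node i
Y_W = (X2.sum(axis=(1, 2, 3)) > 0).astype(int)        # Y_j^W for each warehouse node j

# The example from the text, with 0-based indexing: X1[2, 1, 0, 3] is the amount
# of product 4 moved by a mode-1 vehicle from supply location 3 to warehouse 2.
print(X1[2, 1, 0, 3], Y_S, Y_W)
```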

Fig. 1 The flowchart of IEMA

Particle handling

Similar to the original EMA, in the proposed algorithm, the particle charges are calculated based on the corresponding objective function value for each particle through the following Equation assuming the population is composed of m particles.

$$\begin{aligned} q_i =e^{-n\dfrac{f\left( {x_i } \right) -f\left( {x_{best} } \right) }{\sum _{k=1}^m {f\left( {x_k } \right) -f\left( {x_{best} } \right) } }} \quad \forall i\in \left\{ {1,2,\ldots ,m} \right\} \nonumber \\ \end{aligned}$$
(34)

In the above Equation, \(f\left( x \right) \) represents the objective function value for solution x and n is the dimension of the solution space. The forces that particles exert are calculated according to the force calculation formula in the original EMA as follows.

$$\begin{aligned} F_{ij} =\frac{q_i q_j }{r_{ij}^2 } \quad \forall i,j\in \left\{ {1,2,\ldots ,m} \right\} \end{aligned}$$
(35)

In the above Equation, \(F_{ij} \) is the force that particle i and particle j exert on each other, \(r_{ij} \) is the distance between the particles, and \(q_i \) and \(q_j \) are the charges of particle i and particle j, respectively.

The particles are moved according to the total force exerted on each particle, calculated as

$$\begin{aligned} F_i =\sum _{\begin{array}{c} {j=1}\\ {j\ne i} \end{array}}^m \left\{ \begin{array}{ll} \left( {x_j -x_i } \right) F_{ij} , &{}\quad if \quad q_i <q_j \\ \left( {x_i -x_j } \right) F_{ij} , &{}\quad if \quad q_i \ge q_j \end{array}\right. \quad \forall i{\in } \left\{ {1,2,\ldots ,m} \right\} \nonumber \\ \end{aligned}$$
(36)

The movement of the particles is based on the total force calculated in the above Equation and follows the Equation below, where \(\lambda \) is used for randomizing the particle movement and \(\left\| {F_i } \right\| \) is the norm of the force vector.

$$\begin{aligned} x_i \leftarrow x_i +\lambda \frac{F_i }{\left\| {F_i } \right\| } \quad \forall i\in \left\{ {1,2,\ldots ,m} \right\} \end{aligned}$$
(37)

In each iteration of the algorithm, the movement of the particles may result in infeasible solutions. In such cases, the solutions are discarded and the movement of that particle is canceled in that specific iteration of the algorithm. One may argue that this may lead to slow movement of the particles; however, it should be noted that the cost of repairing a particle that represents an infeasible solution is very high in the proposed model, due to the supply/warehouse capacity and warehouse balance constraints. In addition, this limitation is compensated for by the SA search and the movement of the particles in subsequent iterations.
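
For illustration, the particle-handling step of Eqs. (34)–(37) can be sketched as follows, assuming a flattened real-valued encoding and NumPy; the feasibility check and the SA local search are omitted here.

```python
# Sketch of one particle-handling step of IEMA (Eqs. 34-37) on a
# flattened real-valued encoding; feasibility checks are omitted.
import numpy as np

def move_particles(X, fvals, rng):
    """X: (m, n) particle positions, fvals: (m,) objective values."""
    m, n = X.shape
    best = fvals.min()
    denom = max((fvals - best).sum(), 1e-12)
    q = np.exp(-n * (fvals - best) / denom)                       # Eq. (34)

    X_new = X.copy()
    for i in range(m):
        total = np.zeros(n)
        for j in range(m):
            if j == i:
                continue
            r2 = max(np.sum((X[j] - X[i]) ** 2), 1e-12)
            Fij = q[i] * q[j] / r2                                # Eq. (35)
            direction = (X[j] - X[i]) if q[i] < q[j] else (X[i] - X[j])
            total += direction * Fij                              # Eq. (36)
        norm = np.linalg.norm(total)
        if norm > 0:
            lam = rng.random()
            X_new[i] = X[i] + lam * total / norm                  # Eq. (37)
    return X_new

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(5, 8))
fvals = np.array([np.sum(x ** 2) for x in X])
X = move_particles(X, fvals, rng)
```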

Greedy heuristic for initial population generation

In order to improve the performance of the algorithm, each individual in the initial population is generated by using a two-phase greedy heuristic. In each phase, one layer of the supply chain is considered and the demands are back-propagated to the upper layers. Algorithm 2 depicts the greedy procedure for generating the initial population. In Algorithm 2, \(U\left( {a,b} \right) \) represents the uniform distribution on the interval \(\left[ {a,b} \right] \). By using this procedure, the obtained individual solutions are expected to have smaller fixed facility location costs. One may consider other greedy approaches as well, e.g., assigning the most economic transportation mode and/or link and/or product. However, acting too greedily in generating the initial population does not allow adequate exploration of the search space and may lead the algorithm to local optima.

Algorithm 2 The greedy heuristic for generating the initial population

Handling discrete and continuous variables

The individuals in the population are probabilistically “snapped” to their near discrete counterparts. More specifically, in each individual, if the total amount of all products shipped from a supply node is too small, then there is a high probability that the node is not selected as a supplier. Similarly, in each individual, if the total amount of all products shipped from a warehouse node is too small, then there is a high probability that the node is not selected as a warehouse. Mathematically,

$$\begin{aligned}&\forall i\in I,\frac{\mathop {\sum }\limits _{j\in J} {\mathop {\sum }\limits _{m\in M} {\mathop {\sum }\limits _{p\in P} {X_1 \left( {i,j,m,p} \right) } } } }{\mathop {\sum }\limits _{{i}'\in I} {\mathop {\sum }\limits _{j\in J} {\mathop {\sum }\limits _{m\in M} {\mathop {\sum }\limits _{p\in P} {X_1 \left( {{i}',j,m,p} \right) } } } } }\nonumber \\&\quad \le U\left( {0,1} \right) \Rightarrow X_1 \left( {i,j,m,p} \right) =0,\forall j,m,p \end{aligned}$$
(38)

and similarly,

$$\begin{aligned}&\forall j\in J,\frac{\mathop {\sum }\limits _{k\in K} {\mathop {\sum }\limits _{m\in M} {\mathop {\sum }\limits _{p\in P} {X_2 \left( {j,k,m,p} \right) } } } }{\mathop {\sum }\limits _{{j}'\in J} {\mathop {\sum }\limits _{k\in K} {\mathop {\sum }\limits _{m\in M} {\mathop {\sum }\limits _{p\in P} {X_2 \left( {{j}',k,m,p} \right) } } } } }\nonumber \\&\quad \le U\left( {0,1} \right) \Rightarrow X_2 \left( {j,k,m,p} \right) =0,\forall k,m,p \end{aligned}$$
(39)

This operation may create solutions with lower fixed facility location costs. In addition, the “snapping” of discrete variables provides an opportunity for exploring the discrete search space; however, it may generate infeasible solutions. In such cases, the snapping is not applied to the solution; i.e., the resulting individual is rejected and the original individual is retained.
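
A compact sketch of the “snap” step of Eqs. (38) and (39) is given below, using the same illustrative NumPy encoding as above; the feasibility test that may reject a snapped individual is represented only by a placeholder predicate.

```python
# Sketch of the probabilistic "snap" of Eqs. (38)-(39): supply or warehouse
# nodes with a small share of the total shipped amount are closed.
import numpy as np

def snap(X1, X2, is_feasible, rng):
    """X1: (I, J, M, P) flows, X2: (J, K, M, P) flows."""
    X1_new, X2_new = X1.copy(), X2.copy()

    share_S = X1.sum(axis=(1, 2, 3)) / max(X1.sum(), 1e-12)   # per supply node i
    for i, s in enumerate(share_S):
        if s <= rng.random():                                  # Eq. (38)
            X1_new[i] = 0.0                                    # close supplier i

    share_W = X2.sum(axis=(1, 2, 3)) / max(X2.sum(), 1e-12)   # per warehouse node j
    for j, s in enumerate(share_W):
        if s <= rng.random():                                  # Eq. (39)
            X2_new[j] = 0.0                                    # close warehouse j

    # Reject the snapped individual if it violates any constraint.
    return (X1_new, X2_new) if is_feasible(X1_new, X2_new) else (X1, X2)

rng = np.random.default_rng(2)
X1 = rng.uniform(0, 5, size=(4, 3, 2, 4))
X2 = rng.uniform(0, 5, size=(3, 5, 2, 4))
X1, X2 = snap(X1, X2, lambda a, b: True, rng)   # placeholder feasibility check
```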

Determining the initial and final temperature and the schedule

In SA, the temperature and its schedule have a significant impact on the performance of the algorithm and the quality of the solution. In this research, the initial and final temperatures are calculated by

$$\begin{aligned} T_0= & {} \Delta f_{\min } +\left( {{\Delta f_{\min } }/{\Delta f_{\max } }} \right) \left( {\Delta f_{\max } -\Delta f_{\min } } \right) \end{aligned}$$
(40)
$$\begin{aligned} T_f= & {} \left( {{\Delta f_{\min } }/{\Delta f_{\max } }} \right) \Delta f_{\min } \end{aligned}$$
(41)

where \(\Delta f_{\max } \) and \(\Delta f_{\min } \) are the maximum and minimum differences between the objective function values observed during a pre-processing procedure, respectively. In the pre-processing phase, feasible individuals are randomly generated and their objective values are obtained to determine the initial and final temperatures; i.e., this phase is a random search in the solution space, the aim of which is not to find the optimal solution but to find an upper and a lower bound for \(\Delta f\).

In order to gradually move from diversification toward intensification, the temperature must decrease according to a schedule. The temperature follows a schedule expressed by

$$\begin{aligned} T_i =\left( {{T_f }/{T_0 }} \right) ^{i/{maxIter}}T_0 \end{aligned}$$
(42)

where \(T_i \) is the temperature in iteration i and maxIter is the total number of iterations of the algorithm. According to this schedule, the temperature starts at \(T_0 \) and reaches \(T_f \) in the final iteration.
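
For clarity, the temperature initialization and schedule of Eqs. (40)–(42) amount to the following few lines; the values of \(\Delta f_{\min }\) and \(\Delta f_{\max }\) shown are illustrative placeholders for those obtained in the pre-processing phase.

```python
# Sketch of the SA temperature settings of Eqs. (40)-(42); the objective
# differences would come from the random pre-processing phase.
def temperatures(delta_f_min, delta_f_max):
    ratio = delta_f_min / delta_f_max
    T0 = delta_f_min + ratio * (delta_f_max - delta_f_min)      # Eq. (40)
    Tf = ratio * delta_f_min                                     # Eq. (41)
    return T0, Tf

def temperature_at(i, max_iter, T0, Tf):
    return (Tf / T0) ** (i / max_iter) * T0                      # Eq. (42)

T0, Tf = temperatures(delta_f_min=50.0, delta_f_max=400.0)       # illustrative values
print(T0, Tf, temperature_at(0, 200, T0, Tf), temperature_at(200, 200, T0, Tf))
```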

Computational experiments

To demonstrate the performance of IEMA in solving supply chain design and planning problems formulated as the proposed model, several instances of different sizes are solved and the results are presented. The problems in the literature differ from ours regarding the assumptions and parameters; therefore, using the exact same data sets and problem sizes is not reasonable, even if possible. In order to conduct the experiments, 12 test problems in 3 different size classes are randomly generated; more specifically, \(P_1 \) to \(P_5 \) are small-sized, \(P_6 \) to \(P_9 \) are medium-sized and \(P_{10} \) to \(P_{12} \) are large-sized problems (see Table 2). The problem sizes are represented as \(\left| I \right| \times \left| J \right| \times \left| K \right| \times \left| M \right| \times \left| P \right| \) where \(\left| I \right| \), \(\left| J \right| \), \(\left| K \right| \), \(\left| M \right| \), \(\left| P \right| \) are the numbers of elements in the sets I, J, K, M, P, respectively. Although the data for each problem instance are generated randomly, some rules are applied in creating the data in order to maintain the logic and usability of the resulting problem instances. Specifically, the vehicle capacities are generated as a fraction of the demand in demand nodes. Assuming that the transporting vehicles have larger congestion effects than an ordinary passenger car, the traffic congestion coefficients of the vehicles are randomly selected from the set \(\left\{ {1,2,\ldots ,5} \right\} \). In order for the problems to be feasible, for each product, the sum of the center points of the triangular fuzzy numbers representing the annual supply in supply nodes is considered larger than the sum of the capacities of the warehouses, which in turn is assumed to be greater than the sum of the center points of the triangular fuzzy numbers representing the annual demand in demand nodes. In other words, we take into account the following Equation.

$$\begin{aligned} \sum _i {CP\left( {\tilde{S}_i^p } \right) } >\sum _j {WCAP_j^p } >\sum _k {CP\left( {\tilde{D}_k^p } \right) } \quad \quad \forall p\in P \end{aligned}$$
(43)

The spreads of the triangular fuzzy numbers are set to values less than or equal to 10 % of the center point of the fuzzy number. The basic traffic flows are set to values smaller than the capacities of the links; however, in order to capture the effect of traffic congestion, the differences are not large. The interest rate is set to a random number in the interval \(\left[ {0.03,0.30} \right] \) and the other parameters are randomly generated.
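
A minimal sketch of these data-generation rules (spreads of at most 10 % of the center points and the feasibility ordering of Eq. (43)) is given below; all numeric ranges and the scaling factor are our own illustrative choices, not those used to generate \(P_1\) to \(P_{12}\).

```python
# Sketch of random instance generation obeying Eq. (43) and the 10 % spread
# rule; the numeric ranges are illustrative, not those used in the paper.
import numpy as np

def generate_instance(nI, nJ, nK, nP, rng):
    demand_cp = rng.uniform(50, 150, size=(nK, nP))            # demand center points
    # Warehouse capacities: scale so that sum_j WCAP > sum_k CP(D) per product.
    WCAP = rng.uniform(0.5, 1.5, size=(nJ, nP))
    WCAP *= 1.2 * demand_cp.sum(axis=0) / WCAP.sum(axis=0)
    # Supply center points: scale so that sum_i CP(S) > sum_j WCAP per product.
    supply_cp = rng.uniform(0.5, 1.5, size=(nI, nP))
    supply_cp *= 1.2 * WCAP.sum(axis=0) / supply_cp.sum(axis=0)
    # Spreads are at most 10 % of each center point.
    demand = np.stack([demand_cp,
                       demand_cp * rng.uniform(0, 0.1, demand_cp.shape),
                       demand_cp * rng.uniform(0, 0.1, demand_cp.shape)], axis=-1)
    supply = np.stack([supply_cp,
                       supply_cp * rng.uniform(0, 0.1, supply_cp.shape),
                       supply_cp * rng.uniform(0, 0.1, supply_cp.shape)], axis=-1)
    return supply, WCAP, demand

supply, WCAP, demand = generate_instance(4, 3, 5, 2, np.random.default_rng(3))
assert (supply[..., 0].sum(axis=0) > WCAP.sum(axis=0)).all()      # Eq. (43), left
assert (WCAP.sum(axis=0) > demand[..., 0].sum(axis=0)).all()      # Eq. (43), right
```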

Table 2 Problems and their sizes

The performance of IEMA is compared with that of the original EMA and the Global Solver of LINGO 8.0 based on several runs for each problem. The algorithms are coded in MATLAB 2011a and run on a PC equipped with an Intel® \(\hbox {Atom}^{\mathrm{TM}}\) N455 CPU @ 1.66 GHz and 2.00 GB of RAM, running the Windows 7 Starter operating system.

Parameter tuning

In order to obtain satisfying results in a reasonable time, the parameters of the algorithms should be tuned. EMA has four parameters, popSize, maxIter, LSIter and delta, as described in Algorithm 1. Similarly, in our experiments IEMA uses four parameters to perform the search for optimal solutions: (a) the population size (popSize), (b) the maximum number of algorithm iterations (maxIter), which is used as the stopping criterion of the algorithm, (c) the maximum number of SA search steps (SAIter) and (d) the SA search parameter (SAdelta). It should be noted that the initial and final temperatures in IEMA are problem-specific and do not need tuning. Therefore, to optimize the performance of the algorithms regarding solution quality and run time, response optimization experiments are conducted using a factorial design. For each set of problems, the problem with the largest possible solution space is used to tune the parameters of each algorithm. The two performance measures are given equal weights in the Response Optimizer of the Minitab® 16.1.0 software and the experimental data are analyzed to obtain the optimal parameter values used to perform the numerical experiments. Table 3 presents the final combination of parameter settings for each problem category.

Table 3 Tuned parameters for the problem categories
Table 4 Numerical experiment results for small-sized problems
Table 5 Numerical experiment results for medium-sized problems
Table 6 Numerical experiment results for large-sized problems

Numerical results

Each algorithm is tested several times on each problem and the results are collected. It is known that if the sample size is reasonably large, there is no need to make assumptions about the form of the underlying distribution to obtain valid test results (Montgomery and Runger 2010); therefore, each algorithm is run 40 times, with each run considered as a sample. In addition, EMA is also equipped with the “snap” subroutine to deal with discrete variables. IEMA is compared with EMA and the LINGO Global Solver regarding run time and the quality of the objective function value. Table 4 summarizes the outcomes of the experiments obtained from applying each method to the small-sized problems. In this Table, the rightmost column is the gap between the value obtained by IEMA and the best value found either by EMA or by LINGO. The gap is calculated using the equation

$$\begin{aligned} Gap=\frac{c_{Best} -c_{IEMA} }{c^*}\times 100 \end{aligned}$$
(44)

where \(c_{IEMA} \) is the value found by IEMA, \(c_{Best} \) is the best value obtained either by EMA or by LINGO and \(c^*\) is the best result obtained by any of the three methods. Therefore, a positive gap indicates that the performance of IEMA is superior to those of the other two methods, and a negative gap shows that IEMA is inferior regarding that criterion in the corresponding problem. The gaps for the averages in each test problem for each performance criterion are accompanied by the corresponding p value resulting from the paired t test

$$\begin{aligned} \left\{ \begin{array}{l} {H_0 :P_{IEMA} \equiv P_{Best} } \\ {H_1 :P_{IEMA} \ge P_{Best} } \\ \end{array} \right. \end{aligned}$$
(45)

where \(P_x \) is the performance of algorithm x expressed as either the mean objective function value or the run time, \(H_0 \) assumes that the performance of IEMA is statistically equal to that of the best of the other two algorithms and \(H_1 \) states that IEMA is superior. In the rightmost column of Table 4, the p values resulting from the paired t tests are also provided when applicable.
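
For transparency, the gap of Eq. (44) and a one-sided paired t test of the above hypotheses can be computed along the following lines, assuming SciPy is available; the arrays of objective values are placeholders rather than the actual experimental data.

```python
# Sketch of the gap (Eq. 44) and a one-sided paired t test comparing IEMA
# against the best of the other methods; the data below are placeholders.
import numpy as np
from scipy import stats

def gap(c_iema, c_best, c_star):
    return (c_best - c_iema) / c_star * 100.0                    # Eq. (44)

rng = np.random.default_rng(4)
obj_iema = rng.normal(1000.0, 20.0, size=40)   # 40 runs of IEMA (placeholder data)
obj_best = rng.normal(1015.0, 20.0, size=40)   # 40 runs of the best competitor

c_star = min(obj_iema.mean(), obj_best.mean())
print(gap(obj_iema.mean(), obj_best.mean(), c_star))

# One-sided paired t test; here H1 says IEMA yields smaller (better) objectives.
res = stats.ttest_rel(obj_best, obj_iema)
p_one_sided = res.pvalue / 2 if res.statistic > 0 else 1 - res.pvalue / 2
print(res.statistic, p_one_sided)
```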

According to the results in Table 4, there is no significant difference between the performances of IEMA and the other algorithms on small-sized problems. The experiments with larger problems shed light on the merit of the proposed algorithm, because as the search space grows larger, the exploration mechanism of IEMA leads to its superiority over the other methods. These claims are supported by the experiments with medium- and large-sized problems, the results of which are provided in Tables 5 and 6, respectively.

It should be noted that, due to the large size of the problems, the LINGO Global Solver is unable to reach a satisfying solution in a reasonable amount of time. Hence, IEMA is only compared with EMA on medium- and large-sized problems. Here, the gaps between the obtained objective function values may not clearly reflect the difference between the performances of IEMA and EMA. Therefore, instead of using the simple gap, for each problem instance i and for each algorithm a, a slightly modified version of the Marginal Improvement per CPU (\(MIC_{i,a}\)) criterion (Osman 2004) is calculated by

$$\begin{aligned} MIC_{i,a} =\frac{RPI_{i,a} }{CPU_{i,a} } \end{aligned}$$
(46)

where \(CPU_{i,a} \) is the average CPU time for algorithm a on problem instance i and \(RPI_{i,a} \) is the Relative Percent Improvement for algorithm a on problem instance i obtained by

$$\begin{aligned} RPI_{i,a} =\frac{c_W^i -c_B^{i,a} }{c_W^i -c_B^i }\times 100, \end{aligned}$$
(47)

in which \(c_B^{i,a} \) is the best result for performance criterion c obtained by algorithm a for problem instance i and \(c_B^i \) and \(c_W^i \) are respectively the best and the worst values for c considering both algorithms for problem instance i.
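
The RPI and MIC measures of Eqs. (46) and (47) reduce to the following few lines; the sample values are placeholders used only for illustration.

```python
# Sketch of the Relative Percent Improvement (Eq. 47) and the Marginal
# Improvement per CPU (Eq. 46); the sample values are placeholders.
def rpi(c_best_ia, c_best_i, c_worst_i):
    return (c_worst_i - c_best_ia) / (c_worst_i - c_best_i) * 100.0   # Eq. (47)

def mic(rpi_ia, cpu_ia):
    return rpi_ia / cpu_ia                                            # Eq. (46)

# Example for one problem instance: best/worst objective over both algorithms.
c_best_i, c_worst_i = 980.0, 1100.0
print(mic(rpi(990.0, c_best_i, c_worst_i), cpu_ia=120.0))   # IEMA (placeholder)
print(mic(rpi(1030.0, c_best_i, c_worst_i), cpu_ia=90.0))   # EMA  (placeholder)
```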

To compare the overall performance of IEMA and EMA, the Average Marginal Improvement per CPU (AMIC) is used. The AMIC for IEMA (\(AMIC_{IEMA} )\) and EMA (\(AMIC_{EMA} )\) are compared through a paired t test, in which \(\left( {MIC_{i,IEMA} ,MIC_{i,EMA} } \right) \) constitutes the paired sample obtained from the 40 runs of the algorithms on each problem instance i from the medium-sized and large-sized categories. Based on the t test results, the 95 % lower bound for the difference between the AMIC of the two algorithms (\(AMIC_{IEMA} -AMIC_{EMA} )\) is 0.0187. The outcomes of the test, based on the numerical results for the medium-sized and large-sized problems, show that the difference between the performances of IEMA and EMA is significant, with a p value of 0.001, where \(AMIC_{IEMA} =0.0906\) and \(AMIC_{EMA} =0.0719\).

For medium-sized and large-sized problems, according to Tables 5 and 6, IEMA takes more time than EMA. In return, the proposed algorithm yields better solutions in terms of objective function value. These results are obtained with IEMA tuned for time and objective function value equally; however, different outcomes may be observed if the IEMA parameter settings are changed. More specifically, IEMA may be tuned to operate faster at the cost of worse objective function values. Similarly, it can be set to produce better results at the expense of more computational time. Generally, considering the timeframe of the problem, run time is not an important performance factor. Therefore, it can be concluded that IEMA is preferable for solving the proposed model; however, in problems where time is prominent (e.g., real-time applications), EMA may be superior to the proposed algorithm.

Conclusions and future works

In this research, supply chain design and planning was investigated under uncertainty and traffic congestion. The uncertainty was modeled through fuzzy concepts and the congestion was regarded as a non-linear function of the flow on the network links. In addition, the common costs associated with supply chain activities were taken into account. In order to solve the problem, the Electromagnetism-like Algorithm (EMA) was augmented with Simulated Annealing (SA), resulting in the proposed method, called the Improved Electromagnetism-like Algorithm (IEMA). The outcomes of the numerical analysis of the algorithms showed that there was no significant difference between IEMA and EMA for small-sized problems; however, both IEMA and EMA outperformed the LINGO Global Solver on those test problems. The merit of the proposed algorithm over EMA was shown through a set of numerical experiments with medium- and large-sized problems. From the experimental results on medium-sized and large-sized problems and according to the AMIC criterion, it could be concluded that IEMA outperforms EMA, indicating that the proposed algorithm is a statistically significant improvement over EMA in terms of objective values (though not run times) on medium-sized and large-sized instances of the proposed model.

Future research may focus on dynamic networks and the effects of considering several planning periods in which the problem parameters may vary. In addition, incorporating uncertainty in parameters other than demand and supply would lead to a more realistic model of the problem. In this research, the decision is made based on a single objective; adopting multi-objective optimization methods for decision making would be an interesting and challenging study regarding both modeling and solution approaches. Finally, a separate study should be conducted to provide a parameter sensitivity analysis investigating the impacts of parameter changes on the optimal solution.