Abstract
Service composition and optimal selection (SCOS) is a key problem in cloud manufacturing (CMfg). The present study proposed a multi-objective hybrid artificial bee colony (HABC) algorithm to address the SCOS problem in consideration of both quality of service (QoS) and energy consumption, to which an improved solution update equation with multiple dimensions of perturbation was adopted in the employed bee phase. Likewise, a cuckoo search-inspired Lévy flight was employed in the onlooker bee phase to overcome basic artificial bee colony (ABC) drawbacks such as poor exploitation and slow convergence. Moreover, a parameter adaptive strategy was applied to adjust the perturbation rate and step size of the Lévy flight to improve the performance of the algorithm. The proposed algorithm was first tested on 21 multi-objective benchmark problems and compared with four other state-of-the-art multi-objective evolutionary algorithms (MOEAs). The effect of the improvement strategies was then experimentally verified. Finally, the HABC was applied to solve multiscale SCOS problems using comparison experiments, which resulted in more competitive results and outperformed other MOEAs.
1 Introduction
Information technology advances bring unprecedented opportunities for many industries, especially with the recent rise of several cutting-edge technologies such as cloud computing, cyber-physical systems, and the Internet of Things (IoT). The increasing integration of manufacturing things (resources), equipped with sensors, into the cloud-based Internet led to the emergence of cloud manufacturing (CMfg) [1]. Well-established connections between various manufacturing resources and capabilities (MR&Cs) and the Internet [2] allow CMfg to provide users with full sharing and on-demand use of MR&Cs in service form, changing the way enterprises execute business plans. As such, CMfg is predicted to transform the manufacturing industry by means of improved perception, faster planning, more accurate execution, and user self-provisioning based on the pay-as-you-go principle and other techniques.
CMfg involves many issues, e.g., task semantic modeling [3], formal description of manufacturing capability [4], resource virtualization [5] and sharing strategies [6], and service composition and optimal selection (SCOS) [7], wherein SCOS has been deemed a principal challenge. In a CMfg system, distributed MR&Cs are virtualized and encapsulated as cloud services to ease user searching and assembly when executing tasks. The exuberant growth of services in the CMfg resource pool has made it increasingly difficult to combine multiple single services into a more capable and powerful composite service that meets the user's requirements while maintaining optimal service performance. This difficulty is one of the most important issues requiring further examination. Due to the large-scale characteristic of SCOS, the applicability of exact approaches or exhaustive search algorithms is limited. Evolutionary algorithms have been developed to solve the problem, such as chaos optimization [7–9], genetic algorithm (GA) [10, 11], ant colony optimization (ACO) [12], and ABC [13].
Evolutionary algorithms can obtain near-optimal solutions within a short period of time and have become a thriving area of research and academic effort. Although SCOS is a multi-objective problem (MOP), almost all existing studies focus only on QoS and obtain only a single appropriate solution. However, it is more sensible to provide MOP decision-makers with a set of non-dominated solutions. The growing interest in solving sustainability issues has created an urgent need to consider the economic, environmental, and social aspects of manufacturing systems [14]. In particular, the escalating demand for energy poses serious challenges to sustainable manufacturing [15] and thus requires continuous efforts to produce more sustainable manufactured products [16], which may involve multi-objective tradeoffs between the economic and environmental aspects of SCOS. The present study aims to provide an efficient approach for SCOS in consideration of these economic and environmental aspects.
ABC is one of the most recently developed metaheuristic algorithms based on the intelligent foraging behavior of a honey bee swarm [17]. ABC has been widely used in many fields [18–20] as it is easy to implement and has few control parameters [21, 22]. ABC performs a neighborhood search of food locations throughout the whole search process. This characteristic gives ABC good exploration but poor exploitation capabilities, which results in relatively slow convergence when addressing certain complex issues. Yang and Deb recently proposed another novel evolutionary algorithm, namely the cuckoo search (CS) [23]. CS with Lévy flight performs an exploitative local or neighborhood search combined with occasional long jumps, which allow more efficient search space exploration to escape from local optima. Comparison studies illustrate a more robust and precise search using CS than by using algorithms such as GA and ABC for complex multimodal optimization problems [24]. However, the convergence speed of CS is relatively slow. The present study proposes a hybrid MOEA based on the hybridization of ABC and CS to deal with multi-objective SCOS, which ultimately takes advantage of CS and minimizes the shortcomings of ABC. In addition, the original ABC solution update modifies only a single dimension per iteration, which results in insufficient information sharing when handling complex problems. To overcome this limitation, a multi-dimension perturbation mechanism was designed for the employed bees.
The rest of the paper is organized as follows: Section 2 reviews relevant works. In Section 3, SCOS that minimizes energy consumption while simultaneously maximizing QoS is mathematically formulated. In Section 4, HABC details are provided to address SCOS. Section 5 presents the experimental studies on benchmark problems. In Section 6, HABC is applied to solve SCOS. Finally, Section 7 summarizes the research conclusions and discusses future directions.
2 Related works
Although CMfg services may expose many operations, a single service generally provides only atomic, low-level functions and cannot satisfy the various complicated requirements of many real-world cases, particularly in the presence of complex and diverse customer-generated tasks in personalized customization. Thus, service composition is one of the best approaches when encountering these challenges. However, selecting an optimal composite service is an NP-hard problem.
Traditionally, exact methods such as integer programming [25] and mixed integer programming [26] have long been applied to address these issues; they perform well in some specific situations but fail in more complex ones. SCOS is generally an NP-hard problem in an n-dimensional hyperspace, where the number of QoS variables can be large and they have a complex nonlinear influence on the objective function of SCOS. Identifying the optimal solution using exact methods is computationally expensive and cannot be accomplished within limited time constraints, which has motivated the development of evolutionary algorithms to achieve better scalability. Zhang et al. [27] designed an improved GA for enterprise partner selection in consideration of green criteria. Zhao et al. [28] proposed an improved particle swarm optimization (PSO) for SCOS. Wang et al. [12] constructed a culture max-min ant system (C-MMAS) algorithm to solve SCOS for virtual enterprises. However, research has focused on a limited set of popular approaches, such as GA, PSO, and ACO, which have already been widely reported. Minimal effort has been devoted to exploring promising new algorithms for SCOS. To address such issues, the present study introduces newly developed algorithms, namely the cuckoo search (CS) and ABC, to address SCOS. The superior performance of CS compared to other well-known algorithms has been demonstrated in [23, 24, 29]. Therefore, we attempted to introduce CS into ABC for solving SCOS.
Although service composition problems have been widely studied, research on service selection and scheduling in cloud system and CMfg is still in its initial stages. Zhang et al. [30] investigated the flexible management of service composition in CMfg. However, the formal description and algorithm for SCOS was not addressed. Wang et al. [11] proposed a cloud service composition method based on GA. In [10], a hybrid approach using GA and fruit fly optimization was introduced to solve SCOS in cloud computing. In [13], the authors adopted a global best-guided ABC to solve cloud service composition. Laili et al. [9] proposed the ranking-based chaos optimization by simultaneously considering SCOS and the allocation of computing resources. Huang et al. [8] designed chaos optimization with consideration of the energy consumption for cloud service selection. In [7], an adaptive chaos optimization called full connection based parallel adaptive chaos optimization with reflex migration (FC-PACO-RM) was developed to solve SCOS. However, these studies treated multi-objective SCOS problems as single objective optimization ones [8, 13].
MOEAs have garnered attention for simultaneously addressing multiple objectives for MOPs as well as obtaining a set of representative and well-distributed solutions in a single run, specifically with MOEAs such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II), micro-GA, Strength Pareto Evolutionary Algorithm 2 (SPEA2), and multi-objective PSO (MOPSO) [31]. Currently, some typical MOEAs have been applied to MOPs. Ramacher et al. [32] presented an improved NSGA-II to determine the Pareto solutions for service selection. In [33], SPEA2 was introduced to multi-objective SCOS to achieve Pareto solutions with a well-spread distribution. In [34], set evolution MOPSO was proposed to solve multi-objective problems. In [35], the performance of several classical MOEAs for SCOS was compared. Despite exhibiting good performance in their specified domains, classic MOEAs such as NSGA-II, micro-GA, SPEA2, and MOPSO were extended from the well-known GA and PSO, and many publications have reported on their enhancements and applications. Nevertheless, limited effort has been directed at more recently developed algorithms for SCOS, such as the multi-objective grey wolf optimizer [36], multi-objective artificial raindrop algorithm [37], multi-objective gravitational search algorithm [38], multi-objective cat swarm optimization [39], multi-objective teaching learning based optimization [40], and multi-objective ABC (MOABC) algorithm [41], some of which truly support MOPs, especially MOABC, which is easily implementable and relatively efficient in comparison with others. Despite its impressive performance in practical applications [19, 41, 42], MOABC has not yet been applied to solve a multi-objective SCOS problem.
The classic ABC exhibits some drawbacks, specifically prematurity and slow convergence, in finding near-optimal solutions for complex MOPs. To overcome these drawbacks, a hybrid ABC (HABC) is proposed. In HABC, the multi-dimension perturbation mechanism is employed to improve convergence speed, while the Lévy flight of CS is adopted to avoid prematurity. Lévy flight has an infinite mean and variance; thus it can explore the search space more efficiently than a uniform random walk. This advantage, combined with the multi-dimension perturbation mechanism designed for the employed bees, allows the search space to be explored more efficiently. Furthermore, HABC parameters are dynamically configured based on their previous performance in generating promising solutions.
3 Problem statement
Generally, a complex task in CMfg is decomposed into several subtasks and fulfilled by a composite service. SCOS involves the determination and selection of manufacturing services (MSs) for each subtask such that the obtained composite service satisfies both the functional and QoS requirements. The SCOS process can be divided into following steps: (1) Task decomposition; (2) Service searching and matching; (3) Qualified candidate services set acquisition; (4) Composition of optimal services. More detailed matching and composition flows were investigated in the authors’ previous work [43].
Unlike single-objective problems, multi-objective SCOS problems try to simultaneously optimize all objectives by balancing multiple conflicting objectives to obtain a set of non-dominated solutions. The SCOS concerns of the present study are focused on both providing the best QoS and reducing energy consumption.
3.1 QoS evaluation
QoS consists of several attributes for evaluating the MSs from different aspects, such as time (T), cost (C), availability (Av), and reliability (Re) [7, 44]. The aggregation value of QoS for a composite manufacturing service (CMS) is evaluated based on composite structures (i.e., sequential, parallel, conditional, and loop). The related mathematical model was built in the authors' previous work [45]. The QoS for a CMS can be calculated as follows:
subject to
where \(NormQ_k(CMS)\) represents the normalized value of the kth attribute for the CMS, \(r\) is the number of attributes, \(w_k\) represents the weight of the kth attribute, \(w_k \in [0, 1]\), \({\sum }_{k=1}^{r} w_{k} = 1\), and \(Q_{k0}\) denotes the lowest requirement of the kth attribute.
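The aggregation equation itself is not reproduced above (it appears as an image in the original). Purely as an illustration of the weighted-sum evaluation the text describes, the following sketch computes a normalized QoS score; treating T and C as cost-type attributes that are inverted during normalization is an assumption of this sketch, not the paper's exact formula:

```python
def normalize(value, q_min, q_max, benefit=True):
    """Map a raw attribute value into [0, 1]; 'benefit' marks
    benefit attributes (Av, Re), while cost attributes (T, C)
    are inverted so that higher is always better."""
    if q_max == q_min:
        return 1.0
    if benefit:
        return (value - q_min) / (q_max - q_min)
    return (q_max - value) / (q_max - q_min)

def qos(cms, weights, bounds, benefit_flags):
    """Aggregate the QoS of a composite service as a weighted sum
    of normalized attribute values (the weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * normalize(v, lo, hi, b)
               for v, w, (lo, hi), b
               in zip(cms, weights, bounds, benefit_flags))
```

With equal weights and attribute bounds taken from the candidate set, the score lies in [0, 1], higher values indicating better composite services.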
3.2 Energy consumption evaluation
Although many criteria have been considered in the QoS evaluation of cloud services, green criteria such as carbon emissions and energy consumption have received minimal consideration. However, increasing public awareness of environmental protection has gradually rendered green considerations a significant factor in selecting qualified manufacturing services (MSs) [8, 27, 46]; among these, energy consumption has become so significant a component that it has been selected as one of the optimization goals for SCOS in this study.
Traditionally, services are deployed into a fixed computer and require specific maintenance, whereas the cloud services in CMfg have a larger scale and are mainly supported by virtual machines (VMs) [9]. In CMfg, VM-supported cloud services not only include software services (SSs) such as simulation and design software, but also include hardware services (HSs) such as machine tools and manufacturing equipment. The modeling and evaluation of energy consumption for SSs and HSs must be analyzed separately due to their different characteristics [47].
3.2.1 Energy consumption evaluation for SSs
SSs in CMfg run with the support of VMs, which are virtual divisions of the underlying computing resources. The resource information of an SS, \(S_1\), can be represented as:
where s represents the service execution efficiency, in bits per second; v and p denote the minimum required speed and the running speed of the VM, respectively, in bits per second; c denotes the input communication amount from the predecessor node, in bytes; q denotes the transmission rate of the VM, in bits per second; f denotes the failure probability of the VM; u denotes the recovery time if the VM fails; and g denotes the average energy consumption of the VM per unit time.
If the data size of a decomposed subtask is w and the input communication amount is c, then the total energy consumption \(En(S_1)\) of the software service can be calculated as follows:
where \(T_1\) denotes the total execution time.
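The formulas for \(T_1\) and \(En(S_1)\) are not reproduced above, so the following sketch is only one plausible reading of the parameters just defined: transmission time c/q, computation time w/p, expected recovery overhead f·u, and energy as the VM's average consumption g times the total time. This decomposition is an assumption of the sketch, not the paper's exact equation:

```python
def energy_ss(w, c, p, q, f, u, g):
    """Hedged sketch of SS energy: total execution time T1 assumed as
    transmission (c/q) + computation (w/p) + expected recovery (f*u);
    energy is the VM's average power g times T1."""
    t1 = c / q + w / p + f * u
    return g * t1
```

For example, a task of 1000 bits on a 100 bit/s VM with 200 bytes of input at 50 bit/s, a 10 % failure probability, and a 10 s recovery time yields T1 = 15 s under these assumptions.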
3.2.2 Energy consumption evaluation for HSs
Unlike SSs, most HSs require supervision or control during execution, thus VMs are deemed indispensable as they are employed to control and supervise the HSs. The transmission path of the control commands for SSs is ‘user-VM’, whereas the transmission path for HSs is ‘user-VM-HS’. Such a process increases service execution time and produces a large amount of energy consumption [9]. For this reason, additional factors, namely ζ, η, and e, must be considered in the energy consumption of HSs and VMs. The resource information of an HS, \(S_2\), can be represented as
where ζ denotes the average control rate, η represents the transferring rate of data between VM and HSs, and e denotes the average energy consumption of HS per unit time.
The other parameters, namely s, v, p, c, q, f, u, and g, have been previously explained. The total energy consumption \(En(S_2)\) can be calculated as follows:
where \(T_2\) denotes the total execution time of the HS. If the HS does not require external commands, then ζ = 0.
3.2.3 Objective function of energy consumption
Let X denote the mapping between the subtasks and the corresponding services, such that X(i) = j signifies that subtask \(ST_i\) has been assigned to service \(MS_{ij}\). For i ∈ {1, 2, ..., n}, X(i) ∈ {1, 2, ..., \(m_i\)}, where n is the number of subtasks and \(m_i\) is the number of candidate services for \(ST_i\). The objective function is then stated as follows:
where
Here, \(n_{s1}\) and \(n_{s2}\) are the numbers of SSs and HSs, respectively; and \(En_1(CMS)\) and \(En_2(CMS)\) represent the energy consumption of the SSs and HSs, respectively.
3.3 Objective functions of multi-objective SCOS
As previously stated, QoS and energy consumption are considered as two objectives in SCOS, where time (T), cost (C), availability (Av), and reliability (Re) are treated as constraints. The problem can be then formalized as follows:
where the constraint in (9) requires the QoS criteria to satisfy the lowest requirement of each subtask.
4 Multi-objective hybrid ABC algorithm
This section starts with a brief review of Pareto solutions and the standard ABC and CS algorithms, and then proposes the HABC, which uses a multi-dimension perturbation mechanism and Lévy flight with adaptive parameter settings.
4.1 Pareto solutions for MOPs
For a minimization MOP, a solution \(\boldsymbol{X}_1\) is said to dominate another solution \(\boldsymbol{X}_2\) (denoted as \(\boldsymbol {X}_{\mathrm {1}}\prec \boldsymbol {X}_{\mathrm {2}}\)), if and only if:
where f denotes the objective functions and M is the number of objectives. Considering Pareto dominance, a vector \(\boldsymbol{X}_0\) is called a Pareto solution if and only if ¬∃\(\boldsymbol{X}\) ∈ S such that \(\boldsymbol {X} \prec \boldsymbol {X}_{\mathrm {0}}\). The Pareto solution set (PS) is defined as follows:
where S denotes the search space. Figure 1 graphically shows an example of Pareto solutions (marked as the black circles) and dominated solutions (marked as the empty circles) for a two-objective optimization problem, in which both objectives require minimization.
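The dominance relation can be checked directly; a minimal sketch (minimization is assumed for all objectives, matching the definition above):

```python
def dominates(x1, x2):
    """True if objective vector x1 Pareto-dominates x2 (minimization):
    x1 is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(x1, x2))
            and any(a < b for a, b in zip(x1, x2)))

def pareto_set(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]
```

In Fig. 1's terms, `pareto_set` returns the black circles and filters out the dominated empty circles.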
4.2 Basic ABC algorithm
Karaboga proposed the ABC algorithm to simulate honey bee behavior during the foraging cycle [17]. The top level pseudocode of ABC is provided in Algorithm 1.
In initialization, the food sources (solutions) of ABC are randomly generated as follows:
where \(x_{ij}\) denotes the jth dimension of the ith solution, \([x_{j}^{\min}, x_{j}^{\max}]\) is the boundary of the jth dimension, and rand(0, 1) is a random number uniformly generated within the range [0, 1].
An employed bee, i, tries to improve a solution, \(\boldsymbol{X}_i = (x_{i1}, x_{i2}, \ldots, x_{ij}, \ldots, x_{id})\), by modifying one dimension as follows:
where \(v_{ij}\) is the jth dimension of the updated solution \(\boldsymbol{V}_i = (v_{i1}, v_{i2}, \ldots, v_{id})\); \(\varphi_{ij}\) is a random number within the range [−1, 1]; and i, k, and j are randomly chosen indexes with i, k ∈ {1, 2, ..., SN} (i ≠ k) and j ∈ {1, 2, ..., d}, where SN is the number of food sources. The greedy selection is then performed between \(\boldsymbol{V}_i\) and \(\boldsymbol{X}_i\) to retain the better solution.
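The basic employed-bee move above can be sketched as follows; the boundary clamping is an implementation detail not specified in the text:

```python
import random

def employed_bee_update(X, i, bounds):
    """Basic ABC move: perturb one randomly chosen dimension j of
    solution i toward/away from a random peer k, per the update
    equation above; out-of-range values are clamped to the bounds."""
    SN, d = len(X), len(X[0])
    k = random.choice([s for s in range(SN) if s != i])
    j = random.randrange(d)
    phi = random.uniform(-1.0, 1.0)
    v = list(X[i])
    v[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    lo, hi = bounds[j]
    v[j] = min(max(v[j], lo), hi)
    return v
```

Note that exactly one dimension changes per call, which is the limitation the multi-dimension perturbation of Section 4.4.4 addresses.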
An onlooker bee chooses a solution depending on the probability value, \(p_i\), as follows:
where \(f_i\) represents the fitness of the ith solution, i ∈ {1, 2, ..., SN}, and SN is the total number of onlooker bees in the population.
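The probability rule above is a simple fitness-proportional (roulette-wheel) assignment; a minimal sketch:

```python
def selection_probabilities(fitness):
    """Fitness-proportional probabilities p_i = f_i / sum(f), used by
    the onlooker bees to pick which food source to exploit."""
    total = sum(fitness)
    return [f / total for f in fitness]
```

Better food sources thus attract proportionally more onlookers, concentrating the search around promising regions.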
If one solution has not improved over a predefined number of times in both the employed bee and onlooker phases, a scout bee generates a new random solution based on (13), which replaces the exhausted one [21, 22].
4.3 Cuckoo search with Lévy flight
Cuckoo search (CS) is one of the latest nature-inspired metaheuristic algorithms. Yang and Deb [23] developed the algorithm in 2009 after gaining inspiration from the obligate brood parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of albatrosses, fruit flies, and spider monkeys. Preliminary studies deem it very promising and far more efficient than existing algorithms such as GA, PSO, and differential evolution (DE) [24, 29, 48].
Each cuckoo in a CS nest represents an existing solution, while a cuckoo egg represents a new solution. The main steps can be summarized as follows:
Given that a new solution, \(\boldsymbol{X}_{i,new}\), for cuckoo egg i is generated from a randomly selected cuckoo/solution, \(\boldsymbol{X}_i\), a Lévy flight can then be performed as follows:
where the product ⊕ deals with an entry-wise multiplications process, and α is the step size parameter related to the scale of the problem, which can be expressed as follows:
where \(\boldsymbol{X}_{best}\) is the current best solution. For MOPs, \(\boldsymbol{X}_{best}\) can be a non-dominated solution that is randomly selected from the external Pareto archive set, and the scaling factor (SF) is a constant. Levy(β) obeys the Lévy distribution as follows:
where β ∈ (0, 2] defines the index and determines the shape of the distribution, and t denotes the step length. For convenience of calculation, a simplified scheme of Levy(β) can be written as follows:
where u and v obey the normal distributions as follows:
and
where Γ is the standard Gamma function as calculated below:
Mantegna [49] suggested specific β values, i.e., 0.75 \(\leqslant \beta \leqslant\) 1.95, to create a faster and more efficient algorithm; among these, β = 1.5 was determined to be the most suitable. According to (18), (19), (20), (21), and (22), a new solution with a Lévy distribution can be calculated as follows:
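Mantegna's scheme for drawing Lévy-distributed steps, together with the step-size rule α = SF·(x − x_best) described above, can be sketched as follows; the value SF = 0.01 is only a placeholder default, not a value prescribed by the paper:

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm:
    step = u / |v|^(1/beta), with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta
                  * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_flight(x, x_best, sf=0.01, beta=1.5):
    """New position from the Lévy-flight move, with the step size
    alpha = sf * (x - x_best) applied entry-wise (the ⊕ product)."""
    return [xi + sf * (xi - bi) * levy_step(beta)
            for xi, bi in zip(x, x_best)]
```

Most draws of `levy_step` are small, with occasional large jumps from the heavy tail; note also that a solution already at `x_best` is left unchanged, since α vanishes there.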
Lévy flights consist mainly of small steps with occasional large steps or long-distance jumps. Such long jumps can significantly improve the search performance of the cuckoo search in some cases, especially in a large-scale irregular solution space.
4.4 Proposed hybrid ABC algorithm to address multi-objective SCOS problems
A hybrid Pareto-based ABC algorithm for solving SCOS is proposed, wherein the global search ability of ABC and the exploitation ability of the Lévy flight are combined. Firstly, the multi-dimension perturbation mechanism is introduced in the employed bee phase of ABC for more efficient global exploration of the search space. The cuckoo search with Lévy flight is then incorporated in the onlooker bee phase of ABC to refine the exploitation. In addition, the control parameters gradually self-adapt by learning from their previous experience in generating promising solutions.
4.4.1 Solution encoding
The integer coding method is applied to encode the candidate services and map the composite services onto position vectors, where the integer value of each vector dimension represents the index of a concrete service from the corresponding candidate set, as depicted in Fig. 2. It is assumed that a task consists of n subtasks. Under the integer array coding scheme, the position vector of a bee is denoted by an n-dimensional array, \(\boldsymbol{X}_i = \{x_{i1}, x_{i2}, \ldots, x_{in}\}\), wherein each element \(x_{ij}\) denotes the candidate service \(MS_{i,j}\). For example, the corresponding array of the position vector shown in Fig. 2 is {2, 1, ..., m}.
4.4.2 External Pareto archive set
Unlike the general single-objective optimization process, HABC uses an external Pareto archive set, which acts as an elite archive to store the non-dominated solutions found during the search. At the end of each iteration, the newly found non-dominated solutions are stored in the set and any dominated members of the set are deleted.
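One straightforward way to maintain such an archive is to reject a candidate dominated by any member and otherwise insert it while pruning the members it dominates; a minimal sketch (minimization assumed, with the dominance test repeated here for self-containment):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization of every objective)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate):
    """Insert a candidate into the external Pareto archive (EXA):
    reject it if any member dominates it; otherwise add it and
    drop the members it now dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```

Calling this for every trial solution keeps the archive non-dominated at all times; size control is handled separately (Section 4.4.9).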
4.4.3 Initialization
A random set of feasible solutions is generated to initialize food sources, to which non-dominated solutions are added. The initialization procedure for food sources is depicted in Algorithm 2. First, a set of integer solutions is randomly generated within the range of [1, m], where m denotes the number of candidate services for each subtask. The solutions are then judged on their feasibility, which requires the satisfaction of the constraints in (9). Otherwise, the solution will be regenerated. In this way, the selected initial food sources (solutions) will have better performances [27].
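Algorithm 2 is not reproduced above; the rejection-sampling loop it describes can be sketched as follows, where `is_feasible` stands in for the constraint check in (9) and `max_tries` is a safeguard added for illustration:

```python
import random

def initialize_food_sources(sn, m, is_feasible, max_tries=1000):
    """Sketch of the initialization: repeatedly sample random integer
    solutions (element j drawn from [1, m[j]]) and keep only those
    satisfying the QoS constraints, until sn feasible food sources
    are collected or the try budget is exhausted."""
    sources = []
    tries = 0
    while len(sources) < sn and tries < max_tries:
        tries += 1
        x = [random.randint(1, mj) for mj in m]
        if is_feasible(x):
            sources.append(x)
    return sources
```

Rejecting infeasible samples up front means the colony starts from solutions that already satisfy the lowest QoS requirements.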
4.4.4 Employed bees with multi-dimension parameter perturbation
In the employed bee phase of ABC, an employed forager probabilistically modifies a single dimension of a food source position (solution) to generate a new solution. This may result in a slow convergence speed and poor exploitation ability when working with complicated composite and non-separable functions. In order to overcome the drawback caused by single-dimension parameter perturbation, a perturbation rate (PR) that controls the perturbation frequency is introduced into the solution-updating equation, such that multiple dimensions of a food source are changed at each iteration in the employed bee stage.
By means of such a modification, a new candidate food source is generated for each food source, x i j , as follows:
where \(r_{ij}\) is a random number uniformly generated within the range [0, 1]; k ∈ {1, 2, ..., SN} is a randomly chosen index and k ≠ i; the dimension j ∈ {1, ..., n} is uniformly selected at random; and PR is the perturbation rate, with a value between 0 and 1. A lower PR value results in a slow convergence rate, whereas an excessively high value may cause too much diversity in the population.
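The multi-dimension perturbation above can be sketched as follows; the sketch works on continuous variables, whereas for the integer-coded SCOS solutions rounding and clamping to the candidate index range would additionally be required:

```python
import random

def multidim_update(X, i, pr):
    """Employed-bee move with multi-dimension perturbation: each
    dimension j is perturbed with probability PR (r_ij < PR),
    instead of exactly one dimension as in basic ABC."""
    SN = len(X)
    k = random.choice([s for s in range(SN) if s != i])
    v = list(X[i])
    for j in range(len(v)):
        if random.random() < pr:
            phi = random.uniform(-1.0, 1.0)
            v[j] = X[i][j] + phi * (X[i][j] - X[k][j])
    return v
```

With PR = 0 no dimension changes, while PR = 1 perturbs every dimension; intermediate values trade convergence speed against population diversity as discussed above.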
4.4.5 Probability calculation
In the basic ABC, solutions are assigned fitness values and onlookers select the individuals for mutation based on the probability in (15). However, in the multi-objective ABC version, such a fitness function is not available given the existence of a set of non-dominated solutions. For this reason, a new fitness metric is introduced to compute the fitness of the individuals that are selected by the onlookers for exploitation, as follows:
where \(R(\boldsymbol{X}_i)\) is the Pareto rank value of individual \(\boldsymbol{X}_i\), and \(de(\boldsymbol{X}_i)\) is the crowding entropy, which is calculated using the fast non-dominated sorting crowding distance measure [50] and the distribution entropy technique [51]. The combination of the crowding distance and the distribution entropy can accurately reflect the crowding degree of a solution in the objective function space. As such, individuals are selected based on their ranks and crowding entropies; an individual with a lower rank and a higher crowding entropy is more desirable.
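The exact formula combining the rank and the crowding entropy is not reproduced above; as a reference point, the standard crowding distance of [50], one of its two ingredients, can be sketched as follows (the distribution entropy term of [51] is omitted here):

```python
def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors:
    boundary points get infinity; interior points accumulate, per
    objective, the normalized side length of the surrounding cuboid."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][obj] - front[order[0]][obj]
        if span == 0:
            continue
        for a in range(1, n - 1):
            dist[order[a]] += (front[order[a + 1]][obj]
                               - front[order[a - 1]][obj]) / span
    return dist
```

Larger distances mark less crowded (more desirable) solutions, which is why boundary points are always preserved.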
4.4.6 The onlooker bee phase improved by cuckoo search with Lévy flight
In ABC, the solution search equation for onlookers is similar to that of the employed bees. An onlooker updates its position using information provided by a randomly selected potential solution, so the obtained solution may be heavily influenced by that random choice. Meanwhile, a forager flies at random between its own position and a randomly selected solution, with the updated solution depending highly on the step size. If the step size is large, for example, when the difference between the current and randomly selected solutions is large and the absolute value of \(\varphi_{ij}\) is high, then onlookers are more likely to skip over the true solution. On the other hand, if the step size is small, the convergence speed of ABC may decrease significantly.
In order to overcome such a drawback, the cuckoo search-inspired Lévy flight is incorporated into the onlooker bee search phase. Lévy flight uses Lévy distributions, instead of uniform and/or Gaussian distributions, to generate step sizes. The frequent short steps generated by Lévy distributions intensify the exploitative local or neighborhood search around the current promising food sources, which helps the colony search for food sources more quickly and efficiently and enhances the exploitation capability of ABC. Meanwhile, the occasional long jumps produced by the Lévy distribution explore very different areas of the current search space to scout for potential solutions, which helps foragers escape from local optima via an exploratory global search.
Lévy flight motions are considered more efficient than a purely stochastic search, especially with no prior knowledge of the locations of food sources, because Lévy flight paths exhibit the characteristics of a series of scale-free moves and Lévy distributions. Various empirical and theoretical studies have validated the characteristics of these Lévy flight patterns [52]. Equation (18) illustrates the scale-free Lévy flights, which consist of sequences of independent and randomly oriented steps that obey an inverse power-law distribution with a heavy, long tail. Lévy movements comprise many short moves punctuated by rare longer moves, which can serve as an optimal local search for the onlooker bees. Thus, Lévy flight was chosen as the local search strategy to closely examine the surroundings of promising regions.
It is important to note the trade-off between the global and local searches of Lévy flights, given that the step length is controlled by a scaling factor (SF). A lower value of SF allows onlooker bees to finely tune the search process using smaller steps, at a slower convergence rate. A higher value of SF accelerates the search process and promotes the global random walk, but reduces the exploitation capability of the perturbation process. Thus, SF is tuned with the adaptive parameter configuration strategy described in the following section.
4.4.7 Self-adaptation of PR and SF
As mentioned earlier, PR and SF control the perturbation rate of the food sources and the step length of the Lévy flight, respectively. These control parameters have a great impact on the exploration/exploitation abilities of the ABC search process. PR determines the difference between the parent solution and the generated trial solution, and it largely impacts the convergence speed of the algorithm. A larger PR value promotes the perturbation of food sources such that the algorithm can explore the search space to find more promising new solutions, whereas a lower PR value favors the local search, thereby improving exploitation performance. Similarly, a high SF value is devoted to exploring new solutions, thus raising the population diversity, whereas a small SF value favors short-distance exploitation and enhances the local exploitation ability of the algorithm. For this reason, both PR and SF should be adapted to particular problems or particular phases of the search process. The present study proposes a novel adaptive dynamic parameter configuration mechanism that determines the best values of PR and SF based on previous experience, so as to balance the exploitative local search and the exploratory global search during the evolution process, as described below.
Firstly, a current parameter configuration list (CPCL) with a specified length (Fig. 3a) is generated to reserve the parameter sets of PR and SF, both of which are randomly generated within a uniform distribution range [0, 1]. A parameter set is then selected from the CPCL and assigned to the solution-updating equation of the employed bees and onlookers, respectively. If the updated solution replaces the old one by using the selected parameter set and enters into the next generation, the associated parameter set is then added into a successful parameter configuration list (SPCL) (Fig. 3b) and cleared away from the CPCL.
Once the CPCL is empty, the median values of PR and SF stored in the SPCL are calculated and denoted as mPR and mSF, respectively. The CPCL is then refilled with values of PR and SF randomly generated from the normal distributions N(mPR, 0.1) and N(mSF, 0.1), respectively. That is to say, the CPCL elements are randomly sampled around the successful parameter configurations. The above process is repeated until the termination condition is satisfied. As a result, the parameter sets for PR and SF gradually self-adapt by learning from previous experiences of generating promising solutions. If the SPCL overflows, the earliest records stored in the SPCL are removed so that the current successful parameter sets can be stored in the list, which avoids inappropriate long-term accumulation effects. The present study set the length (L) of both the CPCL and the SPCL to 200; small variations in L were verified to have no significant influence on the performance of the proposed algorithm.
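The CPCL/SPCL bookkeeping can be sketched as follows; removing a pair from the CPCL whenever it is drawn, and clipping refilled values to [0, 1], are assumptions made where the text is ambiguous:

```python
import random
import statistics

class AdaptiveParams:
    """Sketch of the CPCL/SPCL mechanism: (PR, SF) pairs are drawn
    from a current list (CPCL); pairs that produced accepted solutions
    are reported into a success list (SPCL); when the CPCL empties it
    is refilled from N(median, 0.1) around the recent successes."""
    def __init__(self, length=200):
        self.length = length
        self.cpcl = [(random.random(), random.random())
                     for _ in range(length)]
        self.spcl = []

    def draw(self):
        if not self.cpcl:
            self._refill()
        return self.cpcl.pop()

    def report_success(self, pair):
        self.spcl.append(pair)
        if len(self.spcl) > self.length:   # drop oldest records on overflow
            self.spcl.pop(0)

    def _refill(self):
        m_pr = statistics.median(p for p, _ in self.spcl) if self.spcl else 0.5
        m_sf = statistics.median(s for _, s in self.spcl) if self.spcl else 0.5
        clip = lambda x: min(max(x, 0.0), 1.0)
        self.cpcl = [(clip(random.gauss(m_pr, 0.1)),
                      clip(random.gauss(m_sf, 0.1)))
                     for _ in range(self.length)]
```

Each employed-bee or onlooker update would call `draw()` for its (PR, SF) pair and `report_success()` whenever the trial solution survives the greedy selection.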
4.4.8 Pareto greedy selection
Considering a parent solution X_i and an offspring solution V_i, the better solution is maintained by the Pareto greedy selection in Algorithm 3, where V_i is added into the external archive (EXA) only if it is not dominated by any member of the EXA. After such a selection, the number of exploitations, trial_i, is incremented by 1 if X_i is maintained (i.e., X_i dominates V_i). When trial_i exceeds the limit parameter, X_i will be explored by the scouts.
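Algorithm 3 itself is not reproduced here, but the dominance test and the greedy selection it applies can be sketched as follows. Minimization is assumed, and the rule that a non-dominated offspring replaces its parent and resets the trial counter is an illustrative assumption.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_greedy_select(x, v, f, trial, archive):
    """Keep the better of parent x and offspring v; update the trial counter
    and the external archive. f maps a solution to its objective vector."""
    fx, fv = f(x), f(v)
    if dominates(fx, fv):            # parent survives: one more failed exploitation
        return x, trial + 1
    # offspring survives (assumption: it also does so when mutually non-dominated);
    # it enters the archive only if no archive member dominates it
    if not any(dominates(f(a), fv) for a in archive):
        archive[:] = [a for a in archive if not dominates(fv, f(a))]
        archive.append(v)
    return v, 0
```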
4.4.9 Update external Pareto archive set
As stated in Section 4.4.2, an external archive (EXA) was employed to preserve the non-dominated solutions obtained during the search process, and the members of the EXA were updated at the end of each iteration. However, the number of Pareto-optimal solutions for most problems is very large and may even be infinite. A larger number of Pareto-optimal solutions results in a greater computational burden, so the EXA size must be restricted to a predefined value. The present study used the fast non-dominated sorting method to decrease the size of the archive set and maintain the diversity of the Pareto solutions. Once the number of non-dominated solutions exceeded the allowed archive size, the crowding distances [50] of all the archive members were calculated and sorted from largest to smallest. The top N_max (the maximum size of the EXA) members were maintained, whereas the remaining crowded members were removed to preserve a diverse distribution among the archive members.
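The truncation step can be sketched with the NSGA-II crowding distance [50]. This is a minimal sketch; `objs` holds the objective vectors of the archive members, and variable names are illustrative.

```python
import math

def crowding_distance(front):
    """NSGA-II crowding distance for a list of objective vectors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = math.inf   # boundary points are kept
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k]
                               - front[order[j - 1]][k]) / (hi - lo)
    return dist

def truncate_archive(archive, objs, n_max):
    """Keep the n_max most spread-out members of the external archive."""
    if len(archive) <= n_max:
        return archive
    dist = crowding_distance(objs)
    keep = sorted(range(len(archive)), key=lambda i: dist[i], reverse=True)[:n_max]
    return [archive[i] for i in keep]
```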
4.4.10 The procedure for the proposed hybrid approach
The pseudocode for the proposed HABC is presented in Algorithm 4, where each key element has been explained above.
5 Experimental studies on benchmark problems
The proposed HABC algorithm was examined on widely used benchmark problems in a computational environment with MATLAB R2013b on a 64-bit Windows 7 operating system (2.5 GHz PC with 4 GB RAM).
5.1 Test problems
For the performance comparison, the present study selected 21 benchmark problems that are often used to test MOEAs for MOPs, including:

(1) The ZDT bi-objective problems: ZDT1-ZDT4 and ZDT6 [53].

(2) The DTLZ tri-objective problems: DTLZ1-DTLZ5 and DTLZ7 [54].

(3) The CEC09 benchmark problems: UF1-UF10, wherein UF1-UF7 are bi-objective and UF8-UF10 are tri-objective [55].
5.2 Performance metrics
The performance metrics were classified into three categories depending on their ability to measure convergence, diversity of the obtained solutions, or both criteria. We adopted two widely used metrics, namely the inverted generational distance (IGD) [53] and the hyper-volume (HV) [34], to measure both the convergence and diversity of the obtained solutions.
IGD was used to evaluate the average distance between the set of true Pareto solutions, P*, and the obtained approximation set of non-dominated solutions, P, which can be defined as follows:

IGD(P*, P) = (1 / |P*|) * Σ_{v ∈ P*} d(v, P)

where d(v, P) is the minimum Euclidean distance between v and the points in P, and |P*| is the number of points in P*. A smaller IGD value is preferred.
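Under the Euclidean distance d(v, P) defined above, the IGD metric reduces to a few lines:

```python
import math

def igd(true_front, approx_front):
    """Average, over the true Pareto set P*, of the distance from each of its
    points to the nearest point of the approximation P (smaller is better)."""
    def d(v, front):
        return min(math.dist(v, p) for p in front)
    return sum(d(v, approx_front) for v in true_front) / len(true_front)
```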
The hyper-volume (HV) criterion is used to assess both convergence and diversity. Given the approximation set, P, and a reference point, R, HV measures the region that is simultaneously dominated by P and bounded above by R, which is formally described as follows:

HV(P, R) = Λ( ∪_{x ∈ P} [x, R] )

where Λ denotes the Lebesgue measure, [x, R] is the hyper-rectangle spanned by a solution x and the reference point R, and R is taken as the worst value in each objective dimension. A higher HV value signifies a better approximation set.
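In the bi-objective minimization case, the dominated region decomposes into rectangles, so HV can be computed exactly by a sweep over the points sorted by the first objective. This sketch assumes the input points are mutually non-dominated.

```python
def hv_2d(front, ref):
    """Hyper-volume dominated by a bi-objective non-dominated set (minimization),
    bounded above by the reference point ref."""
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    vol, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:               # f2 decreases as f1 increases on an ND front
        vol += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return vol
```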
5.3 Parameter setting
The proposed HABC was compared to several state-of-the-art MOEAs, namely NSGA-II [50], AbYSS [56], MOsaDE [57], and SMPSO [58]. The parameter settings are listed below.

- Public parameters:
  - Population size SN: 100 for bi-objective problems, 300 for tri-objective problems.
  - Maximal function evaluations FEs: 300,000.
- Parameters of NSGA-II, set the same as those in [50]:
  - Crossover probability: 0.9
  - SBX distribution index: 20
  - Mutation probability: 1/n
  - Mutation distribution index: 20
- Parameters of AbYSS, set the same as those in [56]:
  - Distribution index of the polynomial mutation: 20
  - Size of reference set: 10
  - Crossover probability: 1
  - Mutation probability: 1/n
- Parameters of MOsaDE, set the same as those in [57]:
  - Crossover probability CR: normally distributed with mean 0.5 and standard deviation 0.1.
  - Scaling factor SF: mean value linearly reduced from 1.0 to 0.05, with standard deviation 0.1.
- Parameters of SMPSO, set the same as those in [58]:
  - Mutation distribution index: 20
  - Mutation probability: 1/n

where n is the number of decision variables. In addition, 30 independent runs were conducted for each algorithm on each instance to obtain statistically sound conclusions. Likewise, the limit parameter of HABC was set at 100.
5.4 Experimental results
Tables 1 and 2 respectively present the mean and standard deviation of the HV and IGD metric values of the non-dominated solutions obtained by each algorithm on the 21 problems. The paired Wilcoxon signed-rank test was conducted to examine the statistical significance of the differences between HABC and the other algorithms. In Tables 1 and 2, the signs "+", "=", and "–" indicate that HABC performs significantly better than, similar to, or worse than its competitor, respectively, according to the Wilcoxon signed-rank test at the 0.05 significance level. The results are summarized as "w/t/l" in the last row of the two tables, which counts the number of problems on which the proposed HABC significantly outperforms, performs similarly to, or is outperformed by the competitor in the corresponding column. In addition, the best results, based on the average metric value, are highlighted in boldface.
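For small samples, the paired Wilcoxon signed-rank test used here admits an exact formulation by enumerating all sign patterns of the ranked differences. The following self-contained sketch (not the routine used in the study) illustrates the two-sided p-value; it assumes no zero differences and is practical only for small n.

```python
from itertools import product

def wilcoxon_signed_rank(a, b):
    """Exact two-sided paired Wilcoxon signed-rank test (small n, no zero diffs)."""
    d = [x - y for x, y in zip(a, b) if x != y]
    n = len(d)
    # rank |d|, averaging ranks over ties
    abs_sorted = sorted((abs(v), i) for i, v in enumerate(d))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs_sorted[j + 1][0] == abs_sorted[i][0]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[abs_sorted[k][1]] = avg
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    w_extreme = max(w_plus, n * (n + 1) / 2 - w_plus)
    # exact null distribution: every sign pattern is equally likely
    count = sum(1 for signs in product((0, 1), repeat=n)
                if sum(r for r, s in zip(ranks, signs) if s) >= w_extreme)
    return min(1.0, 2 * count / 2 ** n)
```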
The proposed HABC algorithm provided the best results for 16 out of the 21 problems based on the average metric values in Table 1, whereas MOsaDE performed best on two problems, SMPSO on UF10, NSGA-II on UF5, and AbYSS on UF6. In addition, HABC achieved a clear advantage over its competitors: it outperformed NSGA-II on 18 problems, AbYSS on 17 problems, MOsaDE on 18 problems, and SMPSO on 19 problems, according to the pairwise comparison results summarized in the last line of Table 1.
More detailed information can be obtained when the results are displayed as boxplots, which depict the distribution of the numerical data. Figure 4 shows the HV value distributions of each algorithm's final solutions on the tested problems; HABC exhibits a significantly higher and narrower boxplot than those of its rivals on all problems except DTLZ7, UF4-UF6, and UF10. Given that HV measures both the diversity and convergence of an algorithm, the Pareto-optimal solutions obtained by HABC are close to, and well distributed along, the true Pareto front.
A careful inspection of Fig. 4 reveals a broadly similar performance ranking of the algorithms within each type of problem. On the ZDT bi-objective problems, HABC ranked first among the contestant algorithms, whereas NSGA-II was outperformed by all the others and ranked last; AbYSS, MOsaDE, and SMPSO obtained similar results. On the DTLZ problems, however, SMPSO rather than NSGA-II was the worst algorithm, except on DTLZ5 and DTLZ7, while HABC still exhibited the best performance on all DTLZ problems; AbYSS, MOsaDE, and SMPSO again obtained similar results. On the CEC09 problems, which contain composite types of problems, the algorithms performed differently from problem to problem.
The IGD metric results of the compared algorithms in Table 2 show that HABC yielded the best results for 17 of the 21 problems, whereas SMPSO and MOsaDE achieved only one and three best results, respectively, and NSGA-II and AbYSS failed to obtain any. The Wilcoxon signed-rank test results presented in the last row of Table 2 also illustrate a better HABC performance on the majority of the 21 problems, with 18, 19, 17, and 18 significantly better results against the corresponding competitors. Given that IGD measures both the convergence and diversity of the obtained solutions, these results indicate that the approximate Pareto fronts obtained by HABC are more diversified and closer to the optimal Pareto fronts than those of the other algorithms.
The IGD value distributions obtained from the compared algorithms in Fig. 5 depict a narrower HABC boxplot than those of its peers except on UF4, UF6, and UF10, which also shows the stability of HABC in converging towards the true Pareto front. However, although HABC performed better than the other algorithms on a majority of the problems, it did not work well on several special problems such as DTLZ7, UF4, UF5, and UF10, on which HABC was outperformed by its competitors in terms of both the IGD and HV metrics. MOsaDE performed well in solving DTLZ7 and UF4, while SMPSO worked well on UF10.
These results can be explained by the No Free Lunch Theorem (NFLT) [59]. According to the NFLT, no algorithm can dominate another on all problems in all aspects, as a general-purpose universal optimization algorithm is theoretically impossible. Therefore, no single strategy can be expected to outperform another on all types of problems.
The distributions of the final solutions with the median IGD values obtained by the algorithms on UF1, UF3, and UF7 are presented in Figs. 6-8, respectively, to provide a graphical overview of the behavior of these algorithms. Figure 6 illustrates the ability of HABC to cover the whole Pareto front. The Pareto fronts obtained by NSGA-II and MOsaDE were quite similar and missed a portion of the central part of the true Pareto front, while the Pareto front from AbYSS missed both the left and right sides of the true Pareto front. The solutions obtained by SMPSO were farther from the true Pareto front despite having good diversity.
HABC still had the best convergence on UF3 based on the results presented in Fig. 7. The solutions obtained by NSGA-II and AbYSS were only partly close to the true Pareto front, whereas MOsaDE found solutions farther from the true Pareto front and SMPSO had solutions at the ends of the Pareto front.
The superiority of HABC is also observed in Fig. 8. The solutions obtained by the SMPSO algorithm were better than those of AbYSS and MOsaDE, though all of them missed a part of the true Pareto front. The worst results belonged to NSGA-II, which only obtained a small, discontinuous set of solutions on the right side of the true Pareto front.
According to the above comparisons and statistical analysis, the proposed HABC performed the best among the five algorithms in terms of convergence rate, stability, and solution diversity. The fast convergence of HABC originates from the solution-updating mechanism of the employed bee phase, where multiple dimensions of the solutions are perturbed at each iteration. Following the exploration of the employed bee phase, the onlooker bees tend to exploit the promising regions of the search space via the Lévy flight with adaptive step length. In addition, the Lévy distribution allows occasional large steps or long-distance jumps, which reemphasizes exploration and improves the diversity of the solutions across the Pareto front.
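The heavy-tailed steps discussed above are commonly drawn with Mantegna's algorithm [49]. The following sketch uses beta = 1.5 and an SF-scaled move toward a guiding solution as illustrative assumptions; the exact onlooker update of HABC is given earlier in the paper.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step via Mantegna's algorithm [49] (heavy-tailed)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_move(x, guide, sf, beta=1.5):
    """Onlooker-style move toward a promising solution, scaled by SF (a sketch)."""
    return [xi + sf * levy_step(beta) * (xi - gi) for xi, gi in zip(x, guide)]
```

Most steps are short (favoring exploitation), but the heavy tail occasionally produces long jumps that keep the search exploratory.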
5.5 Effectiveness of improvement strategies
5.5.1 Impacts of parameter settings
PR and SF are two HABC parameters that control the convergence speed and balance exploration against exploitation. The perturbation rate, PR, adjusts how frequently parameters are varied when a neighboring solution is produced, while the scaling factor, SF, determines the step length of the Lévy flight in the onlooker bee phase. To investigate the impacts of PR and SF on the performance of HABC, we tested 36 combinations of 9 PR values (from 0.1 to 0.9 with a step of 0.1) and 4 SF values (0.2, 0.5, 0.8, and 1) on the ZDT6, DTLZ7, and UF1 problems, chosen to represent the ZDT, DTLZ, and UF problems, respectively. The other parameters remained the same as in Section 5.3, and 30 independent runs were conducted for each combination of PR and SF on each tested problem.
The mean IGD-metric values obtained by HABC with all the combinations of PR and SF are presented in Fig. 9. The settings of PR and SF had the least effect on ZDT6, a significant effect on DTLZ7, and the greatest effect on UF1, given that the search space of UF1 is more difficult than those of ZDT6 and DTLZ7. The performance of HABC on a specific problem thus crucially depends on appropriate parameter settings, and employing a trial-and-error scheme to search for the most suitable combination of PR and SF is time-consuming. Therefore, adaptive PR and SF settings may be critical for improving the performance of HABC on some problems.
5.5.2 Effectiveness of the adaptive perturbation rate
To verify the effect of the adaptive perturbation rate, HABC (with the adaptive PR described above) was compared with four variants, each of which adopted a fixed PR (0.1, 0.3, 0.5, or 0.7) for the entire evolution process. The Friedman test [60], a non-parametric multiple comparisons test, was used to determine the accumulated wins of each algorithm. The computational results are presented in Fig. 10, where the horizontal axis denotes the parameter setting of PR and the vertical axis represents the accumulated wins of HABC and its variants (denoted as 0.1, 0.3, 0.5, and 0.7 for simplicity) on the ZDT, DTLZ, and UF problems, respectively.
The best fixed value of PR in Fig. 10 differs across problem types: a value of 0.7 produced the second-best result on the ZDT problems but the worst result on the DTLZ problems. Likewise, no fixed PR setting could be found that always provided the best solution for all problems. In addition, the difference between the best and worst algorithms became more evident with increasing problem complexity. Nevertheless, HABC with adaptive PR was better than its variants at all times. Therefore, the adaptive perturbation rate was concluded to enhance the effectiveness and robustness of the proposed HABC when solving different problems.
5.5.3 Effectiveness of the adaptive scaling factor
Similarly, the effect of the adaptive scaling factor was tested by comparing the proposed HABC with four variants using fixed scaling factors (denoted as 0.2, 0.5, 0.8, and 1 for simplicity) during the evolution process. The comparison results are presented in Fig. 11, where the vertical axis shows the accumulated wins of each variant and the horizontal axis denotes the compared algorithms. As in Section 5.5.2, none of the four variants with fixed scaling factors obtained the best result on all of the problems. In general, the adaptive scaling factor is concluded to be beneficial to the proposed HABC algorithm.
6 Case study on multi-objective SCOS problems in CMfg
In this section, the proposed algorithm was applied to engineering optimization problems, particularly SCOS problems in CMfg. CMfg resources include both software applications and various types of manufacturing equipment. A typical manufacturing project adapted from [7] was extended as a case study. A total of 30 tasks in the cloud manufacturing environment is presented as a directed acyclic graph (DAG) (Fig. 12), in which each node represents a task unit, the color of a node indicates its task type, and each directed edge between two nodes represents the communication relationship between the tasks. All tasks strictly observe the precedence rules: a successor task can only start after the completion of all its predecessor tasks.
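The precedence rule stated above can be checked, and a feasible execution order derived, with a standard topological sort of the task DAG; a minimal sketch:

```python
from collections import deque

def topological_order(n_tasks, edges):
    """Return a start order that respects all precedence arcs (u must finish
    before v starts), or None if the graph contains a cycle (i.e., is no DAG)."""
    succ = {u: [] for u in range(n_tasks)}
    indeg = [0] * n_tasks
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = deque(u for u in range(n_tasks) if indeg[u] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        for v in succ[u]:           # completing u releases its successors
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return order if len(order) == n_tasks else None
```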
To study the scalability of HABC in solving SCOS problems, we set the number of available cloud services for each task at 20, 50, 80, and 100, respectively. The value ranges of the parameters involved in both QoS and energy consumption for the candidate manufacturing services (MSs) are presented in Table 3. All parameter values of the candidate services were randomly generated within the specified ranges and stored in a .txt file to ensure consistent initial conditions.
NSGA-II, AbYSS, MOsaDE, and SMPSO were used as competitors to address the multi-scale SCOS problems when investigating the performance of HABC. The experimental settings were identical to those in Section 5.3, with 30 runs conducted for each tested problem. Note that the true Pareto front of the SCOS problems is unknown. Therefore, for the IGD calculation, the true Pareto front was approximated by the combined non-dominated solutions obtained from all of the considered algorithms over a large number of runs. The IGD values from this reference front to the non-dominated solutions of the respective algorithms were then calculated. Tables 4 and 5 present the mean and standard deviation of the HV-metric and IGD-metric values of the solutions obtained by the compared algorithms on the four scales of the studied case.
In addition, the Wilcoxon rank-sum test at the 5% significance level was performed to assess the significance of the differences between the compared algorithms. Based on the IGD comparison in Table 5, HABC exhibited a better performance than NSGA-II, AbYSS, MOsaDE, and SMPSO, with 4, 4, 3, and 4 wins out of 4, respectively. The signs "+", "=", and "–" in the table retain their meanings, and "w/t/l" in the last row denotes the number of wins (w), ties (t), and losses (l) of HABC against the corresponding competitor.
The above comparisons clearly illustrate a significantly better HABC performance on the studied case in terms of both the HV and IGD metrics, which validates the advantages of HABC in solving SCOS problems.
Algorithm comparisons based on the studied SCOS case at two different scales of candidate services are presented in Fig. 13. The graph on the left side of the figure shows the non-dominated solutions obtained with 20 candidate services per subtask, while the graph on the right side shows the case with 80 candidate services per subtask. HABC is observed to be significantly more advantageous in tackling problems with more services. This is because the adaptive parameter adjustment strategy is especially effective in enhancing the search capability of the proposed algorithm on complicated problems. From the above comparisons, the proposed HABC algorithm appears to be a very promising approach and can be used as an alternative tool for solving multi-objective SCOS problems in CMfg.
7 Conclusions
Although evolutionary algorithms have been widely investigated for various SCOS problems, most studies focus on single-objective optimization and rarely consider multi-objective SCOS problems in CMfg. This paper proposed HABC, a novel multi-objective optimization approach for addressing such problems. In the proposed approach, a multi-dimension perturbation mechanism was introduced into the employed bee phase to determine the number of parameters of a given solution to be perturbed. Moreover, a cuckoo search-inspired Lévy flight was employed in the onlooker bee phase, forming a random walk whose step lengths obey a heavy-tailed power-law distribution. This mechanism allows more efficient exploration of the search space on the global scale and helps avoid becoming trapped in local optima. Furthermore, the perturbation rate of the employed bee search and the step length of the Lévy flight in the onlooker bee phase were adaptively adjusted by learning from the search history. To validate the effectiveness of the proposed approach, a series of MOPs and a practical case were chosen; the comparison results revealed a better performance of the HABC algorithm in terms of solution quality.
Given the superior performance of HABC, it would be interesting to investigate other swarm intelligence-based multi-objective approaches for service composition in CMfg. Since large-scale SCOS problems may require significant CPU time to find a Pareto solution set, combining parallel computing with evolutionary algorithms appears promising. Moreover, the proposed algorithm can be further analyzed to develop enhanced methods for solving other MOPs.
References
Li BH, Zhang L, Wang SL, Tao F, Cao JW, Jiang XD, Song X, Chai XD (2010) Cloud manufacturing: a new service-oriented networked manufacturing model. Comput Integr Manuf Syst 16(1):1–16
Tao F, Cheng Y, Xu LD, Zhang L, Li BH (2014) CCIoT-CMfg: Cloud computing and Internet of Things-based cloud manufacturing service system. IEEE Trans Ind Inf 10(2):1435–1442
Tianri W, Shunsheng G, Chi-Guhn L (2014) Manufacturing task semantic modeling and description in cloud manufacturing system. Int J Adv Manuf Technol 71(9-12):2017–2031
Luo Y, Zhang L, Tao F, Ren L, Liu Y, Zhang Z (2013) A modeling and description method of multidimensional information for manufacturing capability in cloud manufacturing system. Int J Adv Manuf Technol 69(5-8):961–975
Liu N, Li X, Shen W (2014) Multi-granularity resource virtualization and sharing strategies in cloud manufacturing. J Netw Comput Appl 46:72–82
Tao F, Zuo Y, Xu LD, Zhang L (2014) Iot-based Intelligent Perception and Access of Manufacturing Resource Toward Cloud Manufacturing. IEEE Trans Ind Inf 10(2):1547–1557
Tao F, LaiLi Y, Xu L, Zhang L (2013) FC-PACO-RM: A parallel method for service composition Optimal-Selection in cloud manufacturing system. IEEE Trans Ind Inf 9(4):2023–2033
Huang B, Li C, Tao F (2014) A chaos control optimal algorithm for QoS-based service composition selection in cloud manufacturing system. Enterp Inf Syst 8(4):445–463
Laili Y, Tao F, Zhang L, Cheng Y, Luo Y, Sarker BR (2013) A Ranking Chaos Algorithm for dual scheduling of cloud service and computing resource in private cloud. Comput Ind 64(4):448–463
Seghir F, Khababa A (2016) A hybrid approach using genetic and fruit fly optimization algorithms for QoS-aware cloud service composition. J Intell Manuf. doi:10.1007/s10845-10016-11215-10840
Wang D, Yang Y, Mi Z (2015) A genetic-based approach to web service composition in geo-distributed cloud environment. Comput Electr Eng 43:129–141
Wang Z, Liu Z, Zhou X, Lou Y (2011) An approach for composite web service selection based on DGQos. Int J Adv Manuf Technol 56(9-12):1167–1179
Huo Y, Zhuang Y, Gu J, Ni S, Xue Y (2015) Discrete gbest-guided artificial bee colony algorithm for cloud service composition. Appl Intell 42(4):661–678
Zhang H, Zhu BC, Li YP, Yaman O, Roy U (2015) Development and utilization of a Process-oriented Information Model for sustainable manufacturing. J Manuf Syst 37:459–466
Dubey R, Gunasekaran A, Childe SJ, Wamba SF, Papadopoulos T (2016) The impact of big data on world-class sustainable manufacturing. Int J Adv Manuf Technol 84(1-4):631–645
Wang Z, Subramanian N, Gunasekaran A, Abdulrahman MD, Liu C (2015) Composite sustainable manufacturing practice and performance framework: Chinese auto-parts suppliers’ perspective. Int J Prod Econ 170:219–233
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39(3):459–471
Han YY, Liang JJ, Pan QK, Li JQ, Sang HY, Cao NN (2013) Effective hybrid discrete artificial bee colony algorithms for the total flowtime minimization in the blocking flowshop problem. Int J Adv Manuf Technol 67(1-4):397–414
Chaves-Gonzalez JM, Vega-Rodriguez MA, Granado-Criado JM (2013) A multiobjective swarm intelligence approach based on artificial bee colony for reliable DNA sequence design. Eng Appl Artif Intel 26(9):2045–2057
Metlicka M, Davendra D (2015) Chaos driven discrete artificial bee algorithm for location and assignment optimisation problems. Swarm Evol Comput 25:15–28
Karaboga D, Akay B (2009) A comparative study of Artificial Bee Colony algorithm. Appl Math Comput 214(1):108–132
Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8(1):687–697
Yang X-S, Deb S (2010) Engineering optimisation by cuckoo search. Int J Math Model Numerical Optimiz 1(4):330–343
Civicioglu P, Besdok E (2013) A conceptual comparison of the Cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev 39(4):315–346
Zeng LZ, Benatallah B, Ngu AHH, Dumas M, Kalagnanam J, Chang H (2004) Qos-aware middleware for Web services composition. IEEE Trans Softw Eng 30(5):311–327
Alrifai M, Risse T, Nejdl W (2012) A Hybrid Approach for Efficient Web Service Composition with End-to-End QoS Constraints. ACM T Web 6(2)
Zhang Y, Tao F, Laili Y, Hou B, Lv L, Zhang L (2013) Green partner selection in virtual enterprise based on Pareto genetic algorithms. Int J Adv Manuf Technol 67(9-12):2109–2125
Xinchao Z, Boqian S, Panyu H, Zichao W, Jialei W, Yi F (2012) An improved discrete immune optimization algorithm based on PSO for QoS-driven web service composition. Appl Soft Comput 12(8):2208–2216
Bhandari AK, Singh VK, Kumar A, Singh GK (2014) Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur’s entropy. Expert Syst Appl 41(7):3538–3560
Zhang L, Guo H, Tao F, Luo YL, Si N (2010) Flexible management of resource service composition in cloud manufacturing. Paper presented at the 2010 IEEE International Conference on Industrial Engineering & Engineering Management
Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8(3):256–279
Ramacher R, Monch L (2014) Robust Multi-criteria Service Composition in Information Systems. Bus Inform Syst Eng 6(3):141–151
Li L, Cheng P, Ou L, Zhang Z (2010) Applying multi-objective evolutionary algorithms to QoS-aware web service composition. Paper presented at the 6th International Conference on Advanced Data Mining and Applications (ADMA), Chongqing, China
Sun XY, Chen Y, Liu YP, Gong DW (2016) Indicator-based set evolution particle swarm optimization for many-objective problems. Soft Comput 20(6):2219–2232
Cremene M, Suciu M, Pallez D, Dumitrescu D (2016) Comparative analysis of multi-objective evolutionary algorithms for QoS-aware web service composition. Appl Soft Comput 39:124–139
Mirjalili S, Saremi S, Mirjalili SM, Coelho L d S (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst Appl 47:106–119
Jiang QY, Wang L, Hei XH, Yu GL, Lin YY, Lu XF (2016) MOEA/D-ARA plus SBX: a new multi-objective evolutionary algorithm based on decomposition with artificial raindrop algorithm and simulated binary crossover. Knowl-Based Syst 107:197–218
Hemmatian H, Fereidoon A, Assareh E (2014) Optimization of hybrid laminated composites using the multi-objective gravitational search algorithm (MOGSA). Eng Optimiz 46(9):1169–1182
Pradhan PM, Panda G (2012) Solving multiobjective problems using cat swarm optimization. Expert Syst Appl 39(3):2956–2964
Patel VK, Savsani VJ (2016) A multi-objective improved teaching-learning based optimization algorithm (MO-ITLBO). Inf Sci 357:182–200
Akay B (2013) Synchronous and asynchronous Pareto-based multi-objective Artificial Bee Colony algorithms. J Glob Optim 57(2):415–445
Maximiano MD, Vega-Rodriguez MA, Gomez-Pulido JA, Sanchez-Perez JM (2013) A new Multiobjective Artificial Bee Colony algorithm to solve a real-world frequency assignment problem. Neural Comput Appl 22 (7-8):1447–1459
Zhou J, Yao X (2016) A hybrid artificial bee colony algorithm for optimal selection of QoS-based cloud manufacturing service composition. Int J Adv Manuf Technol. doi:10.1007/s00170-016-9034-1
Li C, Wang S, Kang L, Guo L, Cao Y (2014) Trust evaluation model of cloud manufacturing service platform. Int J Adv Manuf Technol 75(1-4):489–501
Zhou J, Yao X (2016) DE-caABC: differential evolution enhanced context-aware artificial bee colony algorithm for service composition and optimal selection in cloud manufacturing. Int J Adv Manuf Technol. doi:10.1007/s00170-016-9455-x
Xiang F, Hu YF, Yu YR, Wu HC (2014) Qos and energy consumption aware service composition and optimal-selection based on Pareto group leader algorithm in cloud manufacturing system. Central Eur J Oper Res 22(4):663–685
Beloglazov A, Abawajy J, Buyya R (2012) Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing. Future Gener Comp Syst 28(5):755–768
Yang XS, Deb S (2014) Cuckoo search: recent advances and applications. Neural Comput Appl 24(1):169–174
Mantegna RN (1994) Fast, accurate algorithm for numerical simulation of Levy stable stochastic processes. Phys Rev E 49(5):4677–4683
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
Wang YN, Wu LH, Yuan XF (2010) Multi-objective self-adaptive differential evolution with elitist archive and crowding entropy-based diversity measure. Soft Comput 14(3):193–209
Reynolds AM (2006) Cooperative random Levy flight searches and the flight patterns of honeybees. Phys Lett A 354(5-6):384–388
Zitzler E, Deb K, Thiele L (2000) Comparison of multiobjective evolutionary algorithms: Empirical results. Evol Comput 8(2):173–195
Deb K, Thiele L, Laumanns M, Zitzler E (2002) Scalable multi-objective optimization test problems. In: Proceedings of the 2002 Congress on Evolutionary Computation (CEC 2002), Honolulu, HI, United States, pp 825–830
Zhang Q, Zhou A, Zhao S, Suganthan PN, Liu W, Tiwari S (2008) Multiobjective optimization test instances for the CEC 2009 special session and competition. University of Essex, Colchester, UK technical report
Nebro AJ, Luna F, Alba E, Dorronsoro B, Durillo JJ, Beham A (2008) AbYSS: Adapting scatter search to multiobjective optimization. IEEE Trans Evol Comput 12(4):439–457
Huang VL, Zhao SZ, Mallipeddi R, Suganthan PN (2009) Multi-objective optimization using self-adaptive differential evolution algorithm. In: 2009 IEEE Congress on Evolutionary Computation, Trondheim, Norway
Nebro AJ, Durillo JJ, Nieto G, Coello CAC, Luna F, Alba E (2009) SMPSO: A new pso-based metaheuristic for multi-objective optimization. In: 2009 IEEE Symposium on computational intelligence in multi-criteria decision-making, MCDM 2009, Nashville, TN, United states, pp 66–73
Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1 (1):67–82
Beasley TM, Zumbo BD (2003) Comparison of aligned Friedman rank and parametric methods for testing interactions in split-plot designs. Comput Stat Data Anal 42(4):569–593
Acknowledgments
The project was supported by the National Natural Science Foundation of China under grant Nos. 51675186 and 51175187, the Science & Technology Foundation of Guangdong Province under grant No. 2016A020228005, and the Science & Technology Program of Zhanjiang City under grant No. 2015A01001. The authors would like to thank the Editors and the anonymous referees for their valuable comments and suggestions.
Zhou, J., Yao, X. Multi-objective hybrid artificial bee colony algorithm enhanced with Lévy flight and self-adaption for cloud manufacturing service composition. Appl Intell 47, 721–742 (2017). https://doi.org/10.1007/s10489-017-0927-y