Abstract
The use of evolutionary algorithms (EAs) for solving multiobjective optimization problems has been very active in the last few years. The main reasons for this popularity are their ease of use with respect to classical mathematical programming techniques, their scalability, and their suitability for finding trade-off solutions in a single run. However, these algorithms may be computationally expensive because (1) many real-world optimization problems involve tasks demanding high computational resources, and (2) they aim at finding a whole front of optimal solutions instead of searching for a single optimum. Parallelizing EAs emerges as a possible way of reducing the CPU time to affordable values, but it also allows researchers to use an advanced search engine – the parallel model – that provides the algorithms with improved population diversity and enables them to cooperate with other (eventually nonevolutionary) techniques. The goal of this chapter is to provide the reader with an up-to-date review of the recent literature on parallel EAs for multiobjective optimization.
Keywords
- Graphic Processing Unit
- Pareto Front
- Multiobjective Optimization
- Message Passing Interface
- Optimal Pareto Front
1 Multiobjective Optimization and Parallelism
Multiobjective optimization arises in many real-world applications, especially in engineering, in which several performance criteria conflict with each other. Because the objectives conflict, usually no single solution can optimize all of them simultaneously. Instead, the aim of multiobjective optimization is to find a set of compromise solutions with different trade-offs among the criteria, known as the Pareto optimal set. When this set is plotted in the objective space, it is called the Pareto front [1, 2].
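As an illustration of these definitions, the sketch below filters a finite set of candidate objective vectors down to its nondominated subset, i.e., a discrete approximation of the Pareto front. Minimization of every objective is assumed, and the function names and sample points are our own illustrative choices, not taken from the chapter:

```python
# Minimal sketch (minimization assumed): extract the nondominated
# objective vectors from a finite set of candidates.

def dominates(a, b):
    """True if vector a Pareto-dominates b: a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(candidates))  # the three trade-off points survive
```

Note that the surviving points are mutually noncomparable: improving one objective worsens the other, which is exactly the trade-off structure a MOEA must approximate.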
Many different techniques have been proposed in the multiobjective research community to address multiobjective optimization problems (MOPs). Unlike classical mathematical programming approaches, metaheuristics in general, and EAs (multiobjective evolutionary algorithms or MOEAs) in particular, have attracted growing attention over the last decade because of two main facts. On the one hand, EAs have the ability to generate several members of the Pareto optimal set in a single run, as opposed to classical multicriteria decision-making techniques. They are also less sensitive to the shape of the Pareto front and can therefore deal with a large variety of MOPs. On the other hand, as randomized black-box algorithms, EAs can address optimization problems with nonlinear, nondifferentiable, or noisy objective functions.

In spite of these advantages, these algorithms might be computationally expensive because, on the one hand, they need to explore larger portions of the search space since they seek the entire Pareto front, which usually results in more function evaluations being performed; on the other hand, and even more importantly, many real-world multiobjective problems use computationally expensive methods for computing the objective functions and constraints.
These issues are usually addressed in two different ways. First, one can use surrogate models of the fitness functions instead of true fitness function evaluations [3, 4, 5]. The second, more important line lies in using parallel computing platforms to speed up the EA search [6]. This is the mainstream of this chapter.

Due to their population-based approach, EAs are very suitable for parallelization because their main operations (i.e., crossover, mutation, and in particular function evaluation) can be carried out independently on different individuals. There is a vast amount of literature on how to parallelize EAs; the reader is referred to [7, 8, 9, 10] for surveys on this topic. However, parallelism here is not only a way of solving problems more rapidly, but also of developing new and more efficient search models: a parallel EA can be more effective than a sequential one, even when executed on a single processor. The advantages that parallelism offers to single-objective optimization also hold in multiobjective optimization. Of particular interest for parallel MOEAs is the improvement in population diversity, which helps to approximate the entire Pareto front of the given optimization problems.

The contribution of this chapter is to provide the reader with a recent review of publications related to parallel MOEAs, showing the latest advances in the field. Given the sheer volume of papers, we have been forced to restrict ourselves to only those works published since 2008/09, the years to which the two best known surveys date back [11, 12]. The structure of this chapter distinguishes between the numerical model of the parallel MOEA and its physical parallelization. In seminal papers in the field [13], it was assumed that the model maps directly onto the parallel computing platform; this is no longer true, and any (MO)EA can be deployed in parallel, although not always with high performance. The next section is therefore devoted to presenting the classical models for parallel EAs and a recent proposal that considers not only EAs, but metaheuristics and exact algorithms in general. Section 50.3 dives into the details of more than publications, analyzing particular features of MOEAs (fitness assignment, diversity preservation) as well as their parallelization (model, topology, parallel platform). Finally, the last section presents the main conclusions and the trends for future research on parallel MOEAs.

2 Parallel Models for Evolutionary Multi-Objective Algorithms
Parallelism arises naturally when dealing with populations of individuals, since each individual is an independent unit. As a consequence, the performance of population-based algorithms is particularly improved when run in parallel. The main models for parallel MOEAs have been proposed within two clear scopes: EA-specific models coming from the EA community [7, 14, 15] and those proposed for parallel metaheuristics in general (of which EAs are a subclass) [12, 16]. They are briefly presented in the following subsections.

2.1 Specialized Models for Parallel EAs
The most well-known models for parallel MOEAs have been inherited directly from the single-objective parallel EA community, in which two parallelizing strategies are defined for population-based algorithms: (1) parallelization of computation, in which the operations commonly applied to each individual are performed in parallel, and (2) parallelization of population, in which the population is split into different parts, each one evolving in semi-isolation (individuals can be exchanged between subpopulations).

The simplest parallelization scheme of EAs is the well-known master–slave or global parallelization method (Fig. 50.1a). In this scheme, a central processor performs the selection operations while the associated slave processors perform the recombination, mutation, and/or the evaluation of the fitness function. This algorithm is the same as the traditional one (one population, panmictic), although it is faster, especially for time-consuming objective functions. Its simplicity has made it the most popular among practitioners.

However, other models for parallel EAs impose some kind of spatial disposition on the individuals (the population is then said to be structured) and afterward parallelize the resulting chunks in a pool of processors. Among the most widely known types of structured EAs, the distributed (dEA, or coarse-grain) and cellular (cEA, fine-grain or diffusion) algorithms are very popular optimization procedures [7]. In the case of distributed EAs (Fig. 50.1b), the population is partitioned into a set of islands in which isolated EAs run in parallel. Sparse individual exchanges are performed among these islands, with the goal of inserting some diversity into the subpopulations, thus avoiding them getting stuck in local optima. Islands may apply the same (homogeneous) or different (heterogeneous) EAs [17]. In the case of a cellular EA (Fig. 50.1c), subpopulations are typically composed of one individual, which may only interact with its nearest neighbors in the breeding loop, i.e., the concept of neighborhood is introduced. These neighborhoods overlap, which implicitly defines a migration mechanism and allows a smooth diffusion of the best solutions throughout the population. This parallel scheme was originally targeted at massively parallel computers, but nowadays it can be used sequentially on a regular computer or in parallel on graphics processing units (GPUs). Hybrid models have also been proposed (Fig. 50.1d–f), in which a two-level approach to parallelization is undertaken. In these models, the higher level of parallelization is usually a coarse-grain implementation, and each basic island runs a cEA, a master–slave method, or even another distributed one.

This taxonomy holds as well for parallel MOEAs [15], so we can consider master–slave MOEAs (msMOEAs), distributed MOEAs (dMOEAs), and cellular MOEAs (cMOEAs). Nevertheless, the two decentralized population approaches need a further particularization for MOPs [14]. As we stated before, the main goal of any multiobjective optimization algorithm is to find the optimal Pareto front for a given MOP. It is clear that in msMOEAs the management of this Pareto front is carried out by the master processor. But when the search process is distributed among different subalgorithms, as happens in dMOEAs and cMOEAs, the management of the nondominated set of solutions during the optimization procedure becomes a capital issue. Hence, one can distinguish whether the Pareto front is distributed and locally managed by each sub-EA during the computation, or whether it is a centralized element of the algorithm. These variants have been called centralized Pareto front (CPF) structured MOEAs and distributed Pareto front (DPF) structured MOEAs, respectively [16].

For distributed MOEAs, very specialized models have been proposed in the literature, aimed at capturing the different approaches for partitioning the search of each island so as to avoid overlapping their exploration [18]. On the one hand, each island may consider a different subset of the objectives and then either aggregate them into a single-objective problem [19] or use a coevolutionary approach [20]. On the other hand, the search space (either the decision space or the objective space) can be explicitly partitioned and assigned to different islands. As stated in [11], in a general multiobjective problem it is difficult to design an a priori distribution so that it:

1. Covers the entire search space,
2. Assigns regions of equal size, and
3. Adds only a minimum of complexity to constrain demes to their assigned region.
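As a rough illustration of the island (distributed) model described above, the following sketch evolves a few subpopulations on a toy biobjective problem and periodically migrates individuals ring-wise, merging the islands' solutions into a single nondominated front at the end (a DPF-style final gathering). The problem, operators, and parameter values are our own illustrative choices, not taken from any surveyed work:

```python
import random

# Toy biobjective problem (minimization): f1(x) = x^2, f2(x) = (x - 2)^2.
# Its Pareto optimal set is the interval x in [0, 2].
def evaluate(x):
    return (x * x, (x - 2.0) ** 2)

def dominates(a, b):
    return all(u <= v for u, v in zip(a, b)) and a != b

def evolve_island(pop, steps=50):
    """Very small (1+1)-style loop: mutate a random member and keep the
    child only if it dominates its parent."""
    for _ in range(steps):
        i = random.randrange(len(pop))
        child = pop[i] + random.gauss(0.0, 0.3)
        if dominates(evaluate(child), evaluate(pop[i])):
            pop[i] = child
    return pop

def island_model(n_islands=4, pop_size=10, epochs=5):
    random.seed(42)  # fixed seed for reproducibility of the sketch
    islands = [[random.uniform(-5.0, 5.0) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for _ in range(epochs):
        islands = [evolve_island(pop) for pop in islands]
        # Ring migration: each island sends one random individual to its
        # neighbor, replacing a random resident there.
        for k, pop in enumerate(islands):
            migrant = random.choice(pop)
            target = islands[(k + 1) % n_islands]
            target[random.randrange(len(target))] = migrant
    # Final gathering: merge all islands and keep the nondominated set.
    merged = [x for pop in islands for x in pop]
    return [x for x in merged
            if not any(dominates(evaluate(y), evaluate(x)) for y in merged)]

front = island_model()
print(sorted(round(x, 2) for x in front))  # nondominated x values
```

The islands here run sequentially for simplicity; in a real dMOEA each island would occupy its own process or node, and the sparse migrations are exactly the communication step that keeps subpopulations from stagnating.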
2.2 General Models for Parallel Metaheuristics
Several models have been proposed for parallelizing metaheuristics [21, 22], into which EAs, as a type of metaheuristic, perfectly fit. For parallel MOEAs, two main approaches have been proposed in the literature. In [16], the authors distinguish between single-walk and multiple-walk parallelizations. The former is aimed at speeding up the computations by parallelizing the evaluation of the objective functions or the search operators. In the latter, several search threads (EAs or any other search method) cooperate to better explore the search space (not only to accelerate the execution). The same issue with the Pareto front as in the parallel MOEA models emerges here, so the authors also subdivide these into centralized and distributed Pareto front models (CPF and DPF, respectively).

On the other hand, Talbi et al. [12] categorize parallel metaheuristics into three major hierarchical models. The self-contained parallel cooperation model is targeted at parallel computing platforms with limited communication. The search is performed by several subalgorithms in parallel, which might cooperate by exchanging some kind of information. It embraces the island model, or the dMOEAs explained before. Two main groups are distinguished: cooperating subpopulations, which are based on partitioning the objective/search space, and the multistart approach, in which several optimization algorithms run separately in parallel. In the former, subpopulations can be homogeneous or heterogeneous, explore separate regions of the search space, etc. The latter lies in running several local search algorithms in parallel. On the second and third levels of the hierarchy, the authors consider models aimed merely at speeding up the computations: problem-independent parallelization, which mainly comprises the master–slave approach of parallel fitness evaluation, and problem-dependent parallelization, which focuses on subdividing single evaluations into parallel tasks that speed up the evaluation step.

3 An Updated Review of the Literature
This section is devoted to presenting and analyzing the most recent contributions to the parallel evolutionary multiobjective optimization field. We have structured the published material according to the classical parallel EA models, i.e., master/slave, distributed, cellular, and hybrid models (Sect. 50.2.1), because this chapter is targeted precisely at EAs and, as a consequence, this classification better captures the design principles of the different contributions. Table 50.1 includes, ordered by year of publication, an updated review of the field. In order to help the reader with the terminology of this table, Table 50.2 displays the symbols used and their definitions. For each row of Table 50.1, the following information is shown:

- FA-DP (fitness assignment and diversity preservation): As two of the most important design issues in EMO algorithms, the fitness assignment and diversity preservation mechanisms make it possible, respectively, to better guide the search toward Pareto optimal solutions and to spread these solutions out along the entire Pareto front. They are frequently merged into one single measure that translates the vector of objective function values of a multiobjective problem into one scalar value used to rank solutions properly (from a Pareto optimality point of view, nondominated solutions are noncomparable).
- PM (parallel model): It can take the values MS (master/slave), Dis (distributed model), Cell (cellular model), or Hyb (hybrid), according to the classical parallel EA categorization.
- PFC (Pareto front computation): This column distinguishes between the CPF and DPF strategies defined before.
- PP (parallel platform): When applicable, this column indicates the kind of parallel computing platform on which the given algorithm is executed (GPUs, multicore, cluster, grid, etc.).
- Topology: Communication topology of the parallel MOEA (Star, Hybrid, all-to-all [A2A], etc.).
- Programming: When publicly reported, the programming language used to implement the parallel MOEA is included in this column.
- Description: The main features of the parallel MOEA in a few words.
- Application domain: The area in which the parallel MOEA has been applied.
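The FA-DP mechanism most often reported in the table, the NSGA-II-style crowding distance used for diversity preservation, can be sketched as follows for a single nondominated front (minimization assumed; this is our own compact rendering, not code from any surveyed paper):

```python
def crowding_distance(front):
    """Crowding distance of each objective vector in a nondominated
    front, as used by NSGA-II for diversity preservation: boundary
    points get infinity; interior points accumulate, per objective,
    the normalized gap between their two sorted neighbors."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective, no spread to measure
        for rank in range(1, n - 1):
            prev_v = front[order[rank - 1]][obj]
            next_v = front[order[rank + 1]][obj]
            dist[order[rank]] += (next_v - prev_v) / (hi - lo)
    return dist

front = [(0.0, 4.0), (1.0, 2.5), (2.0, 1.5), (3.0, 0.0)]
# Boundary points get inf; the two interior points tie here by symmetry.
print(crowding_distance(front))
```

Solutions with a larger crowding distance sit in less populated regions of the front and are preferred during selection, which is precisely the spreading-out role described for the FA-DP column above.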
3.1 Analysis by Year
The first point of analysis of the published material concerns the number of publications over the years considered in this chapter, i.e., the period between 2008 and 2011. Figure 50.2 displays this information not only for the period analyzed in this chapter, but also for the period 1993–2005 presented in [14]. The trend is fairly clear: this research topic has been very active during the last few years. Indeed, if one compares this evolution with that presented in [14] by 2006, where the highest number of works per year was 10, it can be seen that the published material has doubled (more than publications/year in 2009, 2010, and 2011). Despite the relative lack of novel, attractive approaches in the field, parallelism remains a powerful tool in the EMO community because of one major factor: the optimization problems addressed require execution times to be reduced to affordable values. This is emphasized by the current availability of cheap parallel computing platforms such as multicore processors and, lately, GPUs. Indeed, multicore is the second most frequent keyword in column PP of Table 50.1.

3.2 Analysis of the Parallel Models
In this section, the different contributions are analyzed from the point of view of the characteristics of the parallel model. We pay particular attention to the columns FA-DP, PM, PFC, and Topology.

Fitness assignment and diversity preservation are a major issue in parallel MOEAs because, in many cases, the Pareto front is spread over different subalgorithms (especially in the distributed models). The management of optimal solutions (via fitness assignment) and how they are distributed along the Pareto front (diversity) deserves a brief review. The FA-DP column shows that the ranking and crowding mechanisms inherited from the most widely used algorithm in the area, namely NSGA-II [95], are also the most present in the literature, as NSGA-II is the base algorithm for many of these parallel MOEAs. In the case of the distributed and cellular models, it is worth mentioning that crowding is applied locally, i.e., diversity is kept within the same subalgorithm. If no advanced mechanism is devised to partition the search space (such as in [96, 97]), the algorithm will be accepting/discarding solutions that should probably be in the same region as those computed by the other subalgorithm components. The same happens with classical FA-DP methods such as the strength raw fitness (SRF) or the indicator-based (IB) approaches in column FA-DP. As a final note, we strongly believe that algorithms based on decomposition, such as MOEA/D [98], are especially well suited to profit from parallel platforms. Indeed, they decompose the multiobjective problem into a number of scalar subproblems that are distributed along the Pareto front. The partition of the search space is therefore implicit, and these algorithms can take full advantage of multicore processors. The works [80, 81] follow this line of research.

If one analyzes the usage of the different parallel models in the revised publications (Fig. 50.3), i.e., MS, distributed (Dis), and cellular (Cell), several clear conclusions can be drawn. First, despite the simplicity of the MS model, it appears in almost half of the related literature (). Multiobjective optimization problems are becoming more and more complex and demand high-end computational resources, which makes this approach very suitable in this context. Indeed, the underlying search model remains unchanged (entirely located at the master process), because authors are usually only interested in speeding up the computations. Second, the distributed models are still receiving much attention from the multiobjective community ( of the analyzed publications use this model) in the quest for engineering an improved algorithm that reaches the Pareto front in a more effective way (not only reducing execution times). The promising results published with their single-objective counterparts [7] have pushed forward the research in this area. The major issue with distributed models is the difficult management of the Pareto front, as stated in Sect. 50.2.1 and as will be discussed below. Finally, a special comment on the cellular model: even though it has yet to be fully exploited in the literature, its percentage of use has doubled since the previous literature review in 2006 [14]. The point is that these algorithms are usually executed sequentially, with no parallelism at all, because they were originally targeted at massively parallel machines, and that kind of machine fell into disuse. There are, of course, some exceptions, such as [46], where a cellular-like MOEA for aerodynamic optimization is deployed on a cluster of computers.

The PFC column reflects a hot topic in parallel MOEAs. Handling the nondominated solutions found during the search, when the search is distributed over possibly separate processors, has promoted and is still promoting fundamental research within the community. The most widely used strategy, however, is to keep a central pool of nondominated solutions (CPF), that is, one single front. This approach appears in of the analyzed publications and matches both the master/slave and the cellular parallel models. This design option is straightforward and makes common sense.

Almost the same happens with the DPF strategy ( of the papers), which is mostly used with the distributed model of parallel MOEAs: the Pareto front is approximated separately by each of the subalgorithmic components during the search and only merged into one single front at the end of the exploration. In general, DPF strategies are complemented with occasional CPF phases, which allows the searches of the different subalgorithms to overlap [11]. A couple of exceptions are to be found in the revised literature, in which a distributed computation is endowed with a fully centralized Pareto front. In [50], a distributed MOEA has one single Pareto front that is computed by several isolated islands. Each island uses a weighted sum approach (with different weights) and is targeted at one region of the search space. The second exception appears in [80, 81], in which a multithreaded parallelization of MOEA/D is presented. The Pareto front is stored in global memory, and the different threads are in charge of separate groups of weight combinations.

The final aspect under analysis in this section is the interconnection topology of the different components of the parallel algorithms. Star topologies are widely used in two scenarios: (i) master/slave models and (ii) distributed models with periodic gathering operations required to generate a single Pareto front. A star topology captures the idea of a central master that delivers tasks to a set of worker nodes, which is why it is so popular in these two cases. The column Topology in Table 50.1 also reveals that all-to-all (A2A) and random topologies exist in the literature. The former enables the different components of the parallel algorithm to be tightly coupled, thus quickly spreading the nondominated solutions found for a faster convergence toward the optimal Pareto front. The latter implies that the genetic material may take longer to reach all the algorithmic components, thus promoting diversity.

3.3 Review of the Software Implementations
This section mainly summarizes the contents of the column Programming in Table 50.1, in which a note on the implementation of the algorithms is given. A quick look at this column clearly shows that the combination of C/C++ as the programming language and MPI (Message Passing Interface) [99] as the technology enabling communication between the different components of the parallel algorithms is the preferred option. This can be explained by the strong engineering background of most of the MOPs addressed (and researchers), a field in which C/C++ has had a dominant position for many years. Indeed, C/C++ allows researchers to include very low-level routines (even assembler code) that enable full control of all parts of their applications. MPI, in turn, is a standard (not just a library) for which many implementations exist (MPICH, LAM, etc.), so its use guarantees correctness and efficiency.

Despite this clear picture, two novel, relevant trends on this topic are worth mentioning in detail. On the one hand, even though clusters of computers are able to provide researchers with large computational power, there are MOPs that require still more resources. These resources can only be supplied by grid computing platforms [100]. This has promoted the parallelization of EMO algorithms with grid-enabled technologies such as MPICH-G2 [40] or MatGrid [34]. On the other hand, there already are several seminal works on the parallelization of multiobjective optimizers on GPUs, as stated in Sect. 50.3.1. To the best of our knowledge, only implementations with C and CUDA (compute unified device architecture) [101] have been proposed [66, 93], but nowadays other options have also emerged, such as OpenCL [102].

3.4 Main Application Domains
One of the main reasons, if not the main one, for the popularity of MOEAs is their success in solving real-world problems. Parallel EMO algorithms are no exception. As a consequence, the variability in the application domains is very large, which makes the task of classification rather difficult. Partially following the categorization proposed in [18], three main areas of application are distinguished: engineering, industrial, and scientific. Figure 50.4 summarizes the percentage of revised publications that fall into each area. Besides these three categories, we have also displayed in this figure a fourth item devoted to benchmarking. The latter is not an application, but it appears frequently in the revised literature. There are well-established testbeds, such as Zitzler–Deb–Thiele (ZDT) [103], Deb–Thiele–Laumanns–Zitzler (DTLZ) [104], and the walking fish group (WFG) [105], that have been widely used as a comparison basis when introducing new algorithmic proposals.

Among the real-world applications, engineering applications are by far the most popular domain within the parallel EMO literature (as in the entire EMO field), principally because they usually have mathematical models suitable for this kind of algorithm. Indeed, Fig. 50.4 shows that almost of the papers analyzed address an optimization problem from the engineering domain. Several relevant works among those analyzed are devoted to aerodynamic shape optimization [27, 30, 46], reconfiguration of operational satellite constellations [41], or combustion control for different types of vehicles [85, 91].

The second place in terms of popularity is occupied by the industrial applications, which appear in of the papers reviewed. These applications are related to the fields of manufacturing, scheduling, and management. Very interesting problems have been addressed in this industrial domain, such as the optimization of sonic crystal attenuation properties [52], the Camembert cheese ripening process [78], and the evaluation of the input tax to regulate pollution from agricultural production [65].
Scientific applications, the third category of real-world applications analyzed ( of the papers), group optimization problems in bioinformatics, chemistry, and computer science. The most successful applications here belong to the bioinformatics and chemistry fields, in problems on molecular docking [33], protein design [40], drug design [56], and phylogenetic inference [69, 70]. In our view, the reason these applications (whether engineering, industrial, or scientific) have been tackled with parallel MOEAs is precisely the cost of the tasks involved in computing their objective functions. When a new enhanced parallel search model is sought, authors usually rely on benchmarking functions instead. Indeed, as stated above, of the revised papers (Fig. 50.4) use these testbeds either exclusively [61, 68, 82, 94] or prior to solving a real-world application [5, 20, 51].

4 Conclusions and Future Works
4.1 Summary
In this chapter, we have carried out a comprehensive survey of the literature on parallel MOEAs since 2008/2009, the years when two of the most well-known comprehensive surveys were published. We first described the existing parallel models for MOEAs, distinguishing between those specifically targeted at EAs and those aimed at capturing the essence of parallel metaheuristics in general. Based on the former model (as we are interested in surveying parallel MOEAs), more than relevant papers have been carefully analyzed (many dozens more were studied but left out because of little relevance to this survey). Fundamental aspects such as fitness assignment and diversity preservation, the parallel model used, the management of the approximated Pareto front, the underlying parallel platform (if any), and the communication topology of the algorithms have been reviewed. The main application domains have been gathered and structured into engineering, industrial, and scientific real-world multiobjective problems.

Despite the lack of new, attractive ideas in the area, this survey has revealed that research on parallel MOEAs is still moving forward for two main reasons. On the one hand, researchers think of parallelism not only as a way to speed up computations, but also as a strategy to enhance the search engines of the algorithms. On the other hand, the computational demands of many multiobjective problems (dimensionality, uncertainty, simulations, etc.) mean that parallelism is the only suitable option for addressing them.

In the following section, some topics of future research for engineers interested in parallel MOEAs are outlined. In the authors' opinion, these topics merit particular attention so that the current state of the art can be improved.

4.2 Future Trends
We have structured this future trends section in a bottom-up fashion, proposing research lines for parallel MOEAs that range from low-level algorithmic details to more complex strategies at a higher level. We also point out several studies that are missing in the literature and that, in our opinion, might have a great impact within the community once carried out.

Let us start with one of the main issues of parallel MOEAs: in the end, the algorithms have to compute one single approximation to the Pareto front, that is, they have an inherent centralized structure. When several subalgorithms cooperate to approximate such a Pareto front (i.e., the distributed models), they must somehow coordinate in order to effectively sample different regions of the front and avoid overlaps. These overlaps are the main reason why distributed algorithms are outperformed by centralized approaches (such as the master/slave model). This issue has been partially addressed in the literature [96, 97] with some degree of success, but the cited results have not been widely adopted yet and new advances are needed.

We propose here two lines of research that might help the distributed models of parallel MOEAs to overcome this issue. The first one is based on designing a fully distributed diversity preservation method (e.g., a distributed crowding). The research question here is whether it is possible to devise a density estimator that considers both local information and global information from the other components of the distributed MOEA. We think so. Instead of trying to allocate a given portion of the Pareto front to each island, the islands should periodically broadcast a list with the objective values of their local solutions (but not the decision variables). Then, when checking whether a solution is to be stored in its local Pareto front, an island has to consider both its local information and the global information received from the other islands. To the best of our knowledge, such a mechanism does not exist in the literature. The second proposal is to use rough set theory [106] to partition the search space effectively and allocate different portions to different components of the parallel MOEA. A preliminary work [107] based on multiobjective simulated annealing does exist, and these ideas could have a great impact on the design of parallel MOEAs.
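As a minimal sketch of the first proposal above (our own illustrative Python, not code from the cited works; all names are hypothetical), an island could filter its local Pareto archive against the objective vectors broadcast by the other islands. A density estimator over the combined local-plus-remote set, omitted here for brevity, would complete the distributed crowding scheme:

```python
# Illustrative sketch (minimization assumed): an island accepts a new solution
# into its local archive only if it is nondominated with respect to both its
# own archive and the objective vectors broadcast by the other islands.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

class IslandArchive:
    def __init__(self):
        self.local = []    # (solution, objectives) pairs found locally
        self.remote = []   # objective vectors received from other islands

    def receive_broadcast(self, objective_vectors):
        # Other islands send only objective values, not decision variables.
        self.remote = list(objective_vectors)

    def try_insert(self, solution, objectives):
        # Reject if dominated by any local or remote point (global information).
        known = [obj for _, obj in self.local] + self.remote
        if any(dominates(other, objectives) for other in known):
            return False
        # Drop local solutions that the newcomer dominates, then store it.
        self.local = [(s, o) for s, o in self.local
                      if not dominates(objectives, o)]
        self.local.append((solution, objectives))
        return True
```

With this scheme an island silently discards candidates whose region of the front is already covered by a dominating point on another island, which is exactly the overlap the distributed models need to avoid.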
On a higher algorithmic level, this survey has also revealed that, despite their advanced search model and accurate results on many benchmark functions [108], cellular models of parallel MOEAs have been only marginally used in most of the application domains. We strongly believe that these algorithms might improve the state of the art in many of these unexplored research areas. On a related matter, designing heterogeneous algorithms [17] is also a line of research with high potential. Indeed, few contributions in the literature have considered parallel heterogeneous algorithms for multiobjective optimization [59]. To the best of our knowledge, there is no published analysis of the impact of heterogeneity (i.e., several different subalgorithms, such as NSGA-II, SPEA2, PAES, or MOEA/D, cooperating) on the approximation of Pareto fronts. Our experience with these rather different algorithms is that each one is usually better suited to exploring given regions of the search space, so their collaboration may result in a new, improved algorithm.

Finally, the last group of open research lines is devoted to well-grounded studies on the behavior of parallel MOEAs concerning their scalability with respect to the number of decision variables and their convergence speed toward the optimal Pareto front, as done for sequential MOEAs in [109] and [110], respectively. Again, the influence of heterogeneity on these two search capabilities of parallel MOEAs is of interest.

Last but not least, there is a distinct lack of theoretical work in this area. For example, to the best of our knowledge, there is no analysis of the takeover time [111] in the multiobjective context, and we believe that characterizing parallel MOEAs based on this metric may help researchers in future developments. The difficulty of the landscapes in MOPs and the relative theoretical advantages for different types of problems are also open research lines.

Abbreviations
- A2A: all-to-all
- CEA: cellular evolutionary algorithm
- cGA: compact genetic algorithm
- cMOEA: cellular MOEA
- CPF: centralized Pareto front
- CPU: central processing unit
- CUDA: compute unified device architecture
- dEA: distributed evolutionary algorithm
- DGA: direct genetic algorithm
- dMOEA: distributed MOEA
- DPF: distributed Pareto front
- DTLZ: Deb–Thiele–Laumanns–Zitzler
- EA: evolutionary algorithm
- EMO: evolutionary multiobjective optimization
- FA-DP: fitness assignment and diversity preservation
- FDA: factorized distribution algorithm
- GPU: graphics processing unit
- IBEA: indicator-based evolutionary algorithm
- IB: indicator based
- MOEA: multiobjective evolutionary algorithm
- MOGA: multiobjective genetic algorithm
- MOP: multiobjective optimization problem
- MPI: message passing interface
- MS: master/slave
- msMOEA: master–slave MOEA
- NSGA: nondominated sorting genetic algorithm
- PAES: Pareto-archived evolution strategy
- PFC: Pareto front computation
- PM: parallel model
- PP: parallel platform
- SRF: strength raw fitness
- WFG: walking fish group
- ZDT: Zitzler–Deb–Thiele
References
C.A. Coello Coello, D.A. Van Veldhuizen, G.B. Lamont: Evolutionary Algorithms for Solving Multi-Objective Problems (Kluwer, Boston 2002)
K. Deb: Multi-Objective Optimization Using Evolutionary Algorithms (Wiley, New York 2001)
R.R. Coelho, P. Bouillard: Multi-objective reliability-based optimization with stochastic metamodels, Evol. Comput. 19(4), 525–560 (2011)
T. Goel, R. Vaidyanathan, R. Haftka, W. Shyy: Response surface approximation of Pareto optimization front in multi-objective optimization, 10th AIAA/ISSMO Multidiscip. Anal. Optim. Conf. (2004)
A. Syberfeldt, H. Grimm, A. Ng, R.I. John: A parallel surrogate-assisted multi-objective evolutionary algorithm for computationally expensive optimization problems, IEEE Congr. Evol. Comput. (2008) pp. 3177–3184
E. Alba: Parallel Metaheuristics: A New Class of Algorithms (Wiley, New York 2005)
E. Alba, M. Tomassini: Parallelism and evolutionary algorithms, IEEE Trans. Evol. Comput. 6(5), 443–462 (2002)
E. Alba, J.M. Troya: A Survey of parallel distributed genetic algorithms, Complexity 4(4), 31–52 (1999)
E. Cantú-Paz: Efficient and Accurate Parallel Genetic Algorithms (Kluwer, New York 2000)
G. Luque, E. Alba: Parallel Genetic Algorithms: Theory and Real World Applications (Springer, Berlin, Heidelberg 2011)
A. Lopez-Jaimes, C.A. Coello Coello: Applications of parallel platforms and models in evolutionary multi-objective optimization. In: Biologically-Inspired Optimisation Methods, ed. by A. Lewis, S. Mostaghim, M. Randall (Springer, Berlin, Heidelberg 2009) pp. 23–29
E.-G. Talbi, S. Mostaghim, T. Okabe, H. Ishibuchi, G. Rudolph, C.A. Coello Coello: Parallel approaches for multiobjective optimization, Lect. Notes Comput. Sci. 5252, 349–372 (2008)
A.J. Chipperfield, P.J. Fleming: Parallel genetic algorithms. In: Parallel and Distributed Computing Handbook, ed. by A.Y. Zomaya (McGraw Hill, New York 1996) pp. 1118–1143
F. Luna, A.J. Nebro, E. Alba: Parallel evolutionary multiobjective optimization. In: Parallel Evolutionary Computations, ed. by N. Nedjah, E. Alba, L. de Macedo (Springer, Berlin, Heidelberg 2006) pp. 33–56, Chapter 2
D.A. Van Veldhuizen, J.B. Zydallis, G.B. Lamont: Considerations in engineering parallel multiobjective evolutionary algorithms, IEEE Trans. Evol. Comput. 7(2), 144–173 (2003)
A.J. Nebro, F. Luna, E.-G. Talbi, E. Alba: Parallel multiobjective optimization. In: Parallel Metaheuristics, ed. by E. Alba (Wiley, New York 2005) pp. 371–394
F. Luna, E. Alba, A.J. Nebro: Parallel heterogeneous metaheuristics. In: Parallel Metaheuristics, ed. by E. Alba (Wiley, New York 2005) pp. 395–422
C.A. Coello Coello, G.B. Lamont, D.A. Van Veldhuizen: Evolutionary Algorithms for Solving Multi-Objective Problems, Genetic and Evolutionary Computation (Springer, Berlin, Heidelberg 2007)
E. Rashidi, M. Jahandar, M. Zandieh: An improved hybrid multi-objective parallel genetic algorithm for hybrid flow shop scheduling with unrelated parallel machines, Int. J. Adv. Manuf. Technol. 49, 1129–1139 (2010)
B. Dorronsoro, G. Danoy, P. Bouvry, A.J. Nebro: Multi-objective cooperative coevolutionary evolutionary algorithms for continuous and combinatorial optimization. In: Intelligent Decision Systems in Large-Scale Distributed Environments, Studies in Computational Intelligence, Vol. 362, (Springer, Berlin, Heidelberg 2011) pp. 49–74
T.G. Crainic, M. Toulouse: Parallel strategies for metaheuristics. In: Handbook of Metaheuristics, ed. by F.W. Glover, G.A. Kochenberger (Kluwer, Boston 2003)
V.-D. Cung, S.L. Martins, C.C. Ribeiro, C. Roucairol: Strategies for the parallel implementation of metaheuristics. In: Essays and Surveys in Metaheuristics, ed. by C.C. Ribeiro, P. Hansen (Kluwer, Boston 2003) pp. 263–308
L.F. Gonzalez: Robust Evolutionary Methods for Multi-objective and Multidisciplinary Design in Aeronautics, Ph.D. Thesis (University of Sydney, Sydney 2005)
D.S. Lee, L.F. Gonzalez, J. Periaux, G. Bugeda: Double-shock control bump design optimization using hybridized evolutionary algorithms, Proc. Inst. Mech. Eng. G: J. Aerosp. Eng. (2011) pp. 1175–1192
D.S. Lee, L.F. Gonzalez, J. Periaux, K. Srinivas: Evolutionary optimisation methods with uncertainty for modern multidisciplinary design in aeronautical engineering, Notes Numer. Fluid Mech. Multidiscip. Des. 100, 271–284 (2009)
D.S. Lee, L.F. Gonzalez, J. Periaux, K. Srinivas: Efficient hybrid-game strategies coupled to evolutionary algorithms for robust multidisciplinary design optimization in aerospace engineering, IEEE Trans. Evol. Comput. 15(2), 133–150 (2011)
D.S. Lee, L.F. Gonzalez, J. Periaux, K. Srinivas, E. Onate: Hybrid-game strategies for multi-objective design optimization in engineering, Comput. Fluids 47, 189–204 (2011)
D.S. Lee, L.F. Gonzalez, K. Srinivas, J. Periaux: Robust design optimisation using multi-objective evolutionary algorithms, Comput. Fluids 37(5), 565–583 (2008)
D.S. Lee, L.F. Gonzalez, K. Srinivas, J. Periaux: Robust evolutionary algorithms for UAV/UCAV aerodynamic and RCS design optimisation, Comput. Fluids 37(5), 547–564 (2008)
D.S. Lee, J. Periaux, L.F. Gonzalez, K. Srinivas, E. Onate: Robust multidisciplinary UAS design optimisation, Struct. Multidiscip. Optim. 45(3), 433–450 (2012)
D.S. Lee, J. Periaux, E. Onate, L.F. Gonzalez, N. Qin: Active transonic aerofoil design optimization using robust multiobjective evolutionary algorithms, J. Aircr. 48(3), 1084–1094 (2011)
J.-C. Boisson, L. Jourdan, E.-G. Talbi, D. Horvath: Parallel multi-objective algorithms for the molecular docking problem, IEEE Symp. Comput. Intell. Bioinform. Comput. Biol. (2008) pp. 187–194
J.-C. Boisson, L. Jourdan, E.-G. Talbi, D. Horvath: Single- and multi-objective cooperation for the flexible docking problem, J. Math. Model. Algorith. 9, 195–208 (2010)
G. Ewald, W. Kurek, M.A. Brdys: Grid implementation of a parallel multiobjective genetic algorithm for optimized allocation of chlorination stations in drinking water distribution systems: Chojnice case study, IEEE Trans. Syst. Man Cybern. C: Appl. Rev. 38(4), 497–509 (2008)
J.J. Durillo, A.J. Nebro, F. Luna, E. Alba: Solving three-objective optimization problems using a new hybrid cellular genetic algorithm, Lect. Notes Comput. Sci. 5199, 661–670 (2008)
C. Leon, G. Miranda, E. Segredo, C. Segura: Parallel hypervolume-guided hyperheuristic for adapting the multi-objective evolutionary island model, Nat. Inspir. Coop. Strat. Optim. (2009) pp. 261–272
C. Leon, G. Miranda, C. Segura: A self-adaptive island-based model for multi-objective optimization, Genet. Evol. Comput. Conf. (2008) pp. 757–758
C. Leon, G. Miranda, C. Segura: Hyperheuristics for a dynamic-mapped multi-objective island-based model, Lect. Notes Comput. Sci. 5518, 41–49 (2009)
C. Leon, G. Miranda, C. Segura: Optimizing the configuration of a broadcast protocol through parallel cooperation of multi-objective evolutionary algorithms, Int. Conf. Adv. Eng. Comput. Appl. Sci. (2008) pp. 135–140
P. Liu, S. Dong: Parallel multi-objective GA based rotamer optimization on grid, Int. Coll. Comput. Comm. Control. Manag. (CCCM) (2008) pp. 238–241
M.P. Ferringer, D.B. Spencer, P. Reed: Many-objective reconfiguration of operational satellite constellations with the large-cluster epsilon non-dominated sorting genetic algorithm II, IEEE Congr. Evol. Comput. (2009) pp. 340–349
P.M. Reed, J.B. Kollat, M.P. Ferringer, T.G. Thompson: Parallel evolutionary multi-objective optimization on large, heterogeneous clusters: An applications perspective, J. Aerosp. Comput. Inf. Commun. 5, 460–478 (2008)
J.L. Risco-Martin, D. Atienza, J.I. Hidalgo, J. Lanchares: A parallel evolutionary algorithm to optimize dynamic data types in embedded systems, Soft Comput. 12, 1157–1167 (2008)
J.L. Risco-Martin, D. Atienza, J.I. Hidalgo, J. Lanchares: Parallel and distributed optimization of dynamic data structures for multimedia embedded systems. In: Parallel and Distributed Computational Intelligence, ed. by F.F. Vega, E. Cantú-Paz (Springer, Berlin, Heidelberg 2010) pp. 263–290
D. Sharma, K. Deb, N.N. Kishore: Towards generating diverse topologies of path tracing compliant mechanisms using a local search based multi-objective genetic algorithm procedure, IEEE Congr. Evol. Comput. (2008) pp. 2004–2011
V.G. Asouti, K.C. Giannakoglou: Aerodynamic optimization using a parallel asynchronous evolutionary algorithm controlled by strongly interacting demes, Eng. Optim. 41(3), 241–257 (2009)
S. Bharti, M. Frecker, G. Lesieutre: Optimal morphing-wing design using parallel nondominated sorting genetic algorithm II, AIAA J. 47(7), 1627–1634 (2009)
M. Camara, J. Ortega, F. de Toro: A single front genetic algorithm for parallel multi-objective optimization in dynamic environments, Neurocomputing 72, 3570–3579 (2009)
M. Camara, J. Ortega, F. de Toro: Approaching dynamic multi-objective optimization problems by using parallel evolutionary algorithms. In: Advances in Multi-Objective Nature Inspired Computing, ed. by C.A. Coello Coello, C. Dhaenes, L. Jourdan (Springer, Berlin, Heidelberg 2010) pp. 63–86
P.-C. Chang, S.-H. Chen: The development of a sub-population genetic algorithm II (SPGA II) for multi-objective combinatorial problems, Appl. Soft Comput. 9, 173–181 (2009)
J. Bader, D. Brockhoff, S. Welten, E. Zitzler: On using populations of sets in multiobjective optimization, Lect. Notes Comput. Sci. 5467, 140–154 (2009)
J.M. Herrero, S. Garcia-Nieto, X. Blasco, V. Romero-Garcia, J.V. Sanchez-Perez, L.M. Garcia-Raffi: Optimization of sonic crystal attenuation properties by ev-MOGA multiobjective evolutionary algorithm, Struct. Multidiscip. Optim. 39, 203–215 (2009)
H. Ishibuchi, Y. Sakane, N. Tsukamoto, Y. Nojima: Effects of using two neighborhood structures on the performance of cellular evolutionary algorithms for many-objective optimization, IEEE Congr. Evol. Comput. (2009) pp. 2508–2515
H. Ishibuchi, Y. Sakane, N. Tsukamoto, Y. Nojima: Implementation of cellular genetic algorithms with two neighborhood structures for single-objective and multi-objective optimization, Soft Comput. 15, 1749–1767 (2011)
N. Jozefowiez, F. Semet, E.-G. Talbi: An evolutionary algorithm for the vehicle routing problem with route balancing, Eur. J. Oper. Res. 195, 761–769 (2009)
C.C. Kannas, C.A. Nicolaou, C.S. Pattichis: A parallel implementation of a multi-objective evolutionary algorithm, 9th Int. Conf. Inform. Technol. Appl. Biomed. (2009) pp. 1–6
C. Leon, G. Miranda, E. Segredo, C. Segura: Parallel library of multi-objective evolutionary algorithms, 17th Euromicro Int. Conf. IEEE (2009) pp. 28–35
C. Leon, G. Miranda, C. Segura: METCO: A parallel plugin-based framework for multi-objective optimization, Int. J. Artif. Intell. Tools 18(4), 569–588 (2009)
A. Rama Mohan Rao: Distributed evolutionary multi-objective mesh-partitioning algorithm for parallel finite element computations, Comput. Struct. 87(3), 1469–1473 (2009)
C. Segura, A. Cervantes, A.J. Nebro, M.D. Jaraíz-Simón, E. Segredo, S. García, F. Luna, J.A. Gómez-Pulido, G. Miranda, C. Luque, E. Alba, M.Á. Vega-Rodríguez, C. León, I.M. Galván: Optimizing the DFCN broadcast protocol with a parallel cooperative strategy of multi-objective evolutionary algorithms, Lect. Notes Comput. Sci. 5467, 305–319 (2009)
E. Szlachcic, W. Zubik: Parallel distributed genetic algorithm for expensive multi-objective optimization problems, Lect. Notes Comput. Sci. 5717, 938–946 (2009)
N. Wang, C.-M. Tsai, K.-C. Cha: Optimum design of externally pressurized air bearing using cluster OpenMP, Tribol. Int. 42, 1180–1186 (2009)
T. Qiu, G. Ju: A selective migration parallel multi-objective genetic algorithm, Chin. Control Decis. Conf. (2010) pp. 463–467
Z.X. Wang, G. Ju: A parallel genetic algorithm in multi-objective optimization, Chin. Control Decis. Conf. (2009) pp. 3497–3501
G. Whittaker, R. Confesor Jr., S.M. Griffith, R. Fare, S. Grosskopf, J.J. Steiner, G.W. Mueller-Warrant, G.M. Banow: A hybrid genetic algorithm for multiobjective problems with activity analysis-based local search, Eur. J. Oper. Res. 193, 195–203 (2009)
M.L. Wong: Parallel multi-objective evolutionary algorithms on graphics processing units, Genet. Evolut. Comput. Conf. (2009) pp. 2515–2522
M.L. Wong: Data mining using parallel multi-objective evolutionary algorithms on graphics hardware, IEEE Congr. Evol. Comput. (2010) pp. 1–8
A.A. Montaño, C.A. Coello Coello, E. Mezura-Montes: pMODE-LD+SS: An effective and efficient parallel differential evolution algorithm for multi-objective optimization, Lect. Notes Comput. Sci. 6239, 21–30 (2010)
W. Cancino, L. Jourdan, E.-G. Talbi, A.C.B. Delbem: Parallel multi-objective approaches for inferring phylogenies, Lect. Notes Comput. Sci. 6023, 26–37 (2010)
W. Cancino, L. Jourdan, E.-G. Talbi, A.C.B. Delbem: Parallel multi-objective evolutionary algorithm for phylogenetic inference, Lect. Notes Comput. Sci. 6073, 196–199 (2010)
D. Becerra, A. Sandoval, D. Restrepo-Montoya, L.F. Nino: A parallel multi-objective ab initio approach for protein structure prediction, IEEE Int. Conf. Bioinform. Biomed. (2010) pp. 137–141
D. Dasgupta, D. Becerra, A. Banceanu, F. Nino, J. Simien: A parallel framework for multi-objective evolutionary optimization, IEEE Congr. Evol. Comput. (2010) pp. 1–8
J.R. Figueira, A. Liefooghe, E.-G. Talbi, A.P. Wierzbicki: A parallel multiple reference point approach for multi-objective optimization, Eur. J. Oper. Res. 205, 390–400 (2010)
L. Fourment, R. Ducloux, S. Marie, M. Ejday, D. Monnereau, T. Masse, P. Montmitonnet: Mono and multi-objective optimization techniques applied to a large range of industrial test cases using metamodel assisted evolutionary algorithms, 10th Int. Conf. Numer. Methods Ind. Form. (2010) pp. 833–840
T. Hiroyasu, T. Noda, M. Yoshimi, M. Miki, H. Yokouchi: Examination of multi-objective genetic algorithm using the concept of a peer-to-peer network, 2nd World Congr. Nat. Biol. Inspir. Comput. (2010) pp. 508–512
I. Kamkar, M.-R. Akbarzadeh-T: Multiobjective cellular genetic algorithm with adaptive fuzzy fitness granulation, IEEE Int. Conf. Syst. Man Cybern. (2010) pp. 4147–4153
A. Kandil, K. El-Rayes, O. El-Anwar: Optimization research: Enhancing the robustness of large-scale multiobjective optimization in construction, J. Constr. Eng. Manag. 136(1), 17–25 (2009)
S. Mesmoudi, N. Perrot, R. Reuillon, P. Bourgine, E. Lutton: Optimal viable path search for a cheese ripening process using a multi-objective EA, Int. Conf. Evol. Comput. (2010)
J. Montgomery, I. Moser: Parallel constraint handling in a multiobjective evolutionary algorithm for the automotive deployment problem, 6th IEEE Int. Conf. e-Sci. Workshops (2010) pp. 104–109
J.J. Durillo, Q. Zhang, A.J. Nebro, E. Alba: Distribution of computational effort in parallel MOEA/D, Learn. Intell. Optim. (2011) pp. 488–502
A.J. Nebro, J.J. Durillo: A study of the parallelization of the multi-objective metaheuristic MOEA/D, Lect. Notes Comput. Sci. 6073, 303–317 (2010)
M. Pilat, R. Neruda: Combining multiobjective and single-objective genetic algorithms in heterogeneous island model, IEEE Congr. Evol. Comput. (2010) pp. 1–8
J.C. Calvo, J. Ortega, M. Anguita: Comparison of parallel multi-objective approaches to protein structure prediction, J. Supercomput. 58, 253–260 (2011)
M. Garza-Fabre, G. Toscano-Pulido, C.A. Coello Coello, E. Rodriguez-Tello: Effective ranking + speciation = many-objective optimization, IEEE Congr. Evol. Comput. (2011) pp. 2115–2122
D. Gladwin, P. Stewart, J. Stewart: Internal combustion engine control for series hybrid electric vehicles by parallel and distributed genetic programming/multiobjective genetic algorithms, Int. J. Syst. Sci. 42(2), 249–261 (2011)
D.S. Lee, C. Morillo, G. Bugeda, S. Oller, E. Onate: Multilayered composite structure design optimisation using distributed/parallel multi-objective evolutionary algorithms, Compos. Struct. 94(3), 1087–1096 (2012)
A.L. Márquez, C. Gil, R. Baños, J. Gómez: Parallelism on multicore processors using Parallel.FX, Adv. Eng. Softw. 42, 259–265 (2011)
B.S.P. Mishra, A.K. Addy, R. Roy, S. Dehuri: Parallel multi-objective genetic algorithms for associative classification rule mining, Int. Conf. Commun. Comput. Secur. (2011) pp. 409–414
E. Segredo, C. Segura, C. Leon: On the comparison of parallel island-based models for the multiobjectivised antenna positioning problem, 15th Int. Conf. Knowl. Intell. Inf. Eng. Syst. (2011) pp. 32–41
G.N. Shinde, S.B. Jagtap, S.K. Pani: Parallelizing multi-objective evolutionary genetic algorithms, Proc. World Congr. Eng. (2011) pp. 1534–1537
M. Yagoubi, L. Thobois, M. Schoenauer: Asynchronous evolutionary multi-objective algorithms with heterogeneous evaluation costs, IEEE Congr. Evol. Comput. (2011) pp. 21–28
A. Zhang, H. Li, C. Xiao: Parallel computing model for time-varied coordinated voltage/reactive power control, J. Electr. Syst. 7(1), 1–11 (2011)
W. Zhu, Y. Li: GPU-accelerated differential evolutionary Markov chain Monte Carlo method for multi-objective optimization over continuous space, 2nd Workshop Bio-Inspir. Algorithms Distrib. Syst. (2010) pp. 1–8
W. Zhu, A. Yaseen, Y. Li: DEMCMC-GPU: An efficient multi-objective optimization method with GPU acceleration on the fermi architecture, New Gener. Comput. 29, 163–184 (2011)
K. Deb, A. Pratap, S. Agarwal, T. Meyarivan: A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Trans. Evol. Comput. 6(2), 182–197 (2002)
J. Branke, H. Schmeck, K. Deb, M.S. Reddy: Parallelizing multi-objective evolutionary algorithms: Cone separation, Congr. Evol. Comput. (2004) pp. 1952–1957
F. Streichert, H. Ulmer, A. Zell: Parallelization of multi-objective evolutionary algorithms using clustering algorithms, Lect. Notes Comput. Sci. 3410, 92–107 (2005)
Q. Zhang, H. Li: MOEA/D: A multi-objective evolutionary algorithm based on decomposition, IEEE Trans. Evol. Comput. 11(6), 712–731 (2007)
W. Gropp, E. Lusk, A. Skjellum: Using MPI: Portable Parallel Programming with the Message-Passing Interface (MIT, London 2000)
F. Berman, G.C. Fox, A.J.G. Hey: Grid Computing: Making the Global Infrastructure a Reality, Communications Networking and Distributed Systems (Wiley, New York 2003)
NVIDIA Corporation: NVIDIA CUDA Compute Unified Device Architecture Programming Guide (NVIDIA Corporation, Santa Clara 2007)
R. Tsuchiyama, T. Nakamura, T. Iizuka, A. Asahara, S. Miki: The OpenCL Programming Book (Fixstars Corporation, Sunnyvale 2010)
E. Zitzler, K. Deb, L. Thiele: Comparison of multiobjective evolutionary algorithms: Empirical results, Evol. Comput. 8(2), 173–195 (2000)
K. Deb, L. Thiele, M. Laumanns, E. Zitzler: Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary Multiobjective Optimization. Theoretical Advances and Applications, ed. by A. Abraham, L. Jain, R. Goldberg (Springer, Berlin, Heidelberg 2005) pp. 105–145
S. Huband, P. Hingston, L. Barone, L. While: A review of multiobjective test problems and a scalable test problem toolkit, IEEE Trans. Evol. Comput. 10(5), 477–506 (2006)
Z. Pawlak: Rough sets, Int. J. Parallel Program. 11, 341–356 (1982)
U. Maulik, A. Sarkar: Evolutionary rough parallel multi-objective optimization algorithm, Fundam. Inform. 99(1), 13–27 (2010)
A.J. Nebro, J.J. Durillo, F. Luna, B. Dorronsoro, E. Alba: A cellular genetic algorithm for multiobjective optimization, Int. J. Intell. Syst. 24(7), 723–725 (2009)
J.J. Durillo, A.J. Nebro, C.A. Coello, J. Garcia-Nieto, F. Luna, E. Alba: A study of multiobjective metaheuristics when solving parameter scalable problems, IEEE Trans. Evol. Comput. 14(4), 618–635 (2010)
J.J. Durillo, A.J. Nebro, F. Luna, C.A. Coello Coello, E. Alba: Convergence speed in multi-objective metaheuristics: Efficiency criteria and empirical study, Int. J. Numer. Methods Eng. 84(11), 1344–1375 (2010)
D.E. Goldberg, K. Deb: A comparative analysis of selection schemes used in genetic algorithms. In: Foundations of Genetic Algorithms, ed. by G.J.E. Rawlins (Morgan Kaufmann, San Mateo 1991) pp. 69–93
© 2015 Springer-Verlag Berlin Heidelberg
Luna, F., Alba, E. (2015). Parallel Multiobjective Evolutionary Algorithms. In: Kacprzyk, J., Pedrycz, W. (eds) Springer Handbook of Computational Intelligence. Springer Handbooks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-43505-2_50
Print ISBN: 978-3-662-43504-5
Online ISBN: 978-3-662-43505-2