
1 Multiobjective Optimization and Parallelism

Multiobjective optimization arises in many real-world applications, especially in engineering, in which several performance criteria conflict with each other. Because the objectives conflict, usually no single solution can optimize all of them simultaneously. The aim of multiobjective optimization is therefore to find a set of compromise solutions with different tradeoffs among the criteria, also known as the Pareto optimal set. When this set is plotted in the objective space, it is called the Pareto front [1, 2].
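To make the notions of Pareto dominance and nondomination concrete, the following minimal Python sketch (our own illustration, assuming all objectives are minimized; the function names are not taken from the cited literature) tests dominance between two objective vectors and extracts the nondominated set, i.e., the Pareto front, of a finite set of candidates.

```python
from typing import List, Sequence

def dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """Return True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the points not dominated by any other point: the Pareto
    front of the given finite set."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy example with two conflicting objectives f1 and f2.
candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(nondominated(candidates))  # (3.0, 4.0) is dominated by (2.0, 3.0)
```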

Many different techniques have been proposed in the multiobjective research community to address multiobjective optimization problems (MOPs). Unlike classical mathematical programming approaches, metaheuristics in general, and evolutionary algorithms (EAs), in their multiobjective form known as multiobjective evolutionary algorithms (MOEAs), in particular, have attracted growing attention over the last decade because of two main facts. On the one hand, EAs have the ability to generate several members of the Pareto optimal set in a single run, as opposed to classical multicriteria decision-making techniques. They are also less sensitive to the shape of the Pareto front and can therefore deal with a large variety of MOPs. On the other hand, as randomized black-box algorithms, EAs can address optimization problems with nonlinear, nondifferentiable, or noisy objective functions.

In spite of these advantages, these algorithms might be computationally expensive because, on the one hand, they need to explore larger portions of the search space since they seek the entire Pareto front, which usually results in more function evaluations being performed; on the other hand, and even more importantly, many real-world multiobjective problems typically use computationally expensive methods for computing the objective functions and constraints.

These issues are usually addressed in two different ways. The first is to use surrogate models of the fitness functions instead of true fitness function evaluations [3, 4, 5]. The second, and the more important here, lies in using parallel computing platforms to speed up the EA search [6]; this is the focus of this chapter.

Due to their population-based approach, EAs are very suitable for parallelization because their main operations (i.e., crossover, mutation, and in particular function evaluation) can be carried out independently on different individuals. There is a vast amount of literature on how to parallelize EAs; the reader is referred to [7, 8, 9, 10] for surveys on this topic. However, parallelism is not only a way of solving problems more rapidly, but also a way of developing new and more efficient search models: a parallel EA can be more effective than a sequential one, even when executed on a single processor. The advantages that parallelism offers to single-objective optimization also hold in multiobjective optimization. Of particular interest for parallel MOEAs is the improvement in population diversity, which should help to approximate the entire Pareto front of the given optimization problem.

The contribution of this chapter is to provide the reader with a recent review of publications related to parallel MOEAs, showing the latest advances in the field. Given the sheer volume of papers, we have been forced to restrict ourselves to works published since 2008/09, the years to which the two best-known surveys date back [11, 12]. The structure of this chapter distinguishes between the numerical model of the parallel MOEA and its physical parallelization. In the seminal papers in the field [13], it was assumed that the model maps directly onto the parallel computing platform, but this is no longer the case: any (MO)EA can be deployed in parallel, although not always with high performance. The next section is therefore devoted to presenting the classical models for parallel EAs and a recent proposal that considers not only EAs, but metaheuristics and exact algorithms in general. Section 50.3 dives into the details of more than 80 publications, analyzing particular features of the MOEAs (fitness assignment, diversity preservation) as well as of their parallelization (model, topology, parallel platform). Finally, the last section presents the main conclusions and the trends for future research on parallel MOEAs.

2 Parallel Models for Evolutionary Multi-Objective Algorithms

Parallelism arises naturally when dealing with populations of individuals, since each individual is an independent unit. As a consequence, the performance of population-based algorithms is particularly improved when they are run in parallel. The main models for parallel MOEAs have been proposed within two clear scopes: models specifically targeted at EAs, coming from the EA community [14, 15, 7], and models proposed for parallel metaheuristics in general (of which EAs are a subclass) [12, 16]. They are briefly presented in the following subsections.

2.1 Specialized Models for Parallel EAs

The most well-known models for parallel MOEAs have been inherited directly from the single-objective parallel EA community, in which two parallelizing strategies are defined for population-based algorithms: (1) parallelization of computation, in which the operations commonly applied to each individual are performed in parallel, and (2) parallelization of population, in which the population is split into different parts, each one evolving in semi-isolation (individuals can be exchanged between subpopulations).

The simplest parallelization scheme for EAs is the well-known master–slave or global parallelization method (Fig. 50.1a). In this scheme, a central processor performs the selection operations while the associated slave processors perform the recombination, mutation, and/or the evaluation of the fitness function. The algorithm is the same as the traditional (single-population, panmictic) EA, only faster, especially for time-consuming objective functions. Its simplicity has made it the most popular model among practitioners.
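As an illustration of the master-slave scheme, the following sketch keeps the population and the evolutionary operators in the master process and farms out only the costly objective evaluations to a pool of worker processes. This is our own minimal example: the toy bi-objective function, the population size, and the pool size are arbitrary choices, not taken from any of the surveyed works.

```python
import random
from multiprocessing import Pool

def evaluate(x):
    """Stand-in for an expensive bi-objective evaluation (minimization)."""
    f1 = sum(xi ** 2 for xi in x)
    f2 = sum((xi - 2.0) ** 2 for xi in x)
    return (f1, f2)

if __name__ == "__main__":
    # Master: holds the whole population and performs selection and variation.
    population = [[random.uniform(-5.0, 5.0) for _ in range(10)] for _ in range(100)]
    # Slaves: evaluate individuals in parallel; the search model itself is unchanged.
    with Pool(processes=4) as pool:
        objective_values = pool.map(evaluate, population)
    print(objective_values[0])
```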

Fig. 50.1a–f Different models of parallel EAs: (a) global parallelization, (b) coarse grain, and (c) fine grain. Many hybrids have been defined by combining parallel EAs at two levels: (d) coarse and fine grain, (e) coarse grain and global parallelization, and (f) coarse grain at the two levels

Other models for parallel EAs, however, use some kind of spatial disposition of the individuals (the population is then said to be structured) and afterward parallelize the resulting chunks on a pool of processors. Among the most widely known types of structured EAs, the distributed (dEA, or coarse-grain) and cellular (cEA, fine-grain or diffusion) algorithms are very popular optimization procedures [7]. In distributed EAs (Fig. 50.1b), the population is partitioned into a set of islands in which isolated EAs run in parallel. Sparse individual exchanges are performed among these islands, with the goal of injecting some diversity into the subpopulations and thus preventing them from getting stuck in local optima. Islands may all apply the same EA (homogeneous) or different ones (heterogeneous) [17]. In a cellular EA (Fig. 50.1c), subpopulations are typically composed of a single individual, which may only interact with its nearest neighbors in the breeding loop, i.e., the concept of neighborhood is introduced. These neighborhoods overlap, which implicitly defines a migration mechanism and allows a smooth diffusion of the best solutions throughout the population. This parallel scheme was originally targeted at massively parallel computers, but nowadays it can be used sequentially on a regular computer or in parallel on graphics processing units (GPUs). Hybrid models have also been proposed (Fig. 50.1d–f), in which parallelization is undertaken at two levels. In these models, the higher level is usually a coarse-grain implementation, and each basic island runs a cEA, a master–slave method, or even another distributed EA.
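A minimal sketch of the distributed (island) model follows. It is our own simplified illustration: the unidirectional ring topology, the number of migrants, and the best/worst replacement policy are arbitrary choices used only to show one synchronous migration step between subpopulations.

```python
import random

def migrate_ring(islands, num_migrants=2):
    """One synchronous migration step over a unidirectional ring topology.
    Each island sends copies of its best individuals (here: lowest first
    objective, an arbitrary criterion for this sketch) to its successor,
    which replaces its worst individuals with the incoming migrants."""
    outgoing = [sorted(isl, key=lambda ind: ind["f"][0])[:num_migrants]
                for isl in islands]
    for i, isl in enumerate(islands):
        incoming = outgoing[(i - 1) % len(islands)]          # receive from the predecessor
        isl.sort(key=lambda ind: ind["f"][0], reverse=True)  # worst individuals first
        isl[:num_migrants] = [dict(m) for m in incoming]     # replace worst with copies

# Four islands of ten individuals; each individual carries an objective vector "f".
islands = [[{"f": (random.random(), random.random())} for _ in range(10)]
           for _ in range(4)]
migrate_ring(islands)
print(len(islands), len(islands[0]))
```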

This taxonomy holds for parallel MOEAs as well [15], so we can consider master–slave MOEAs (msMOEAs), distributed MOEAs (dMOEAs), and cellular MOEAs (cMOEAs). Nevertheless, the two decentralized-population approaches need a further particularization for MOPs [14]. As stated before, the main goal of any multiobjective optimization algorithm is to find the optimal Pareto front for a given MOP. In msMOEAs, the management of this Pareto front is clearly carried out by the master processor. But when the search process is distributed among different subalgorithms, as happens in dMOEAs and cMOEAs, the management of the nondominated set of solutions during the optimization procedure becomes a central issue. A distinction can therefore be made between algorithms in which the Pareto front is distributed and locally managed by each sub-EA during the computation, and those in which it is a centralized element of the algorithm. They have been called centralized Pareto front (CPF) structured MOEAs and distributed Pareto front (DPF) structured MOEAs, respectively [16].
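The practical consequence of this distinction shows up when the locally managed archives of a DPF algorithm must be combined into the single approximation that is ultimately reported. The sketch below is our own illustration of that final merge step (assuming minimization; the sample fronts are made up): the local fronts are pooled and any solution dominated by a member of another island's front is filtered out.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def merge_fronts(local_fronts):
    """DPF-style final step: pool the locally managed nondominated sets and
    remove solutions dominated by members of the other islands' fronts."""
    pool = [p for front in local_fronts for p in front]
    return [p for p in pool
            if not any(dominates(q, p) for q in pool if q is not p)]

# Two islands explored overlapping regions; the merge removes the overlap.
front_a = [(1.0, 4.0), (2.0, 2.5)]
front_b = [(2.0, 3.0), (3.5, 1.0)]
print(merge_fronts([front_a, front_b]))  # (2.0, 3.0) is dominated by (2.0, 2.5)
```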

For distributed MOEAs, very specialized models have been proposed in the literature which aim to capture the different approaches for partitioning the search among the islands so as to prevent them from overlapping in their exploration [18]. On the one hand, each island may consider a different subset of the objectives and then either aggregate them into a single-objective problem [19] or use a coevolutionary approach [20]. On the other hand, the search space (either the decision space or the objective space) can be explicitly partitioned and assigned to different islands. As stated in [11], in a general multiobjective problem it is difficult to design an a priori distribution so that it:

  1. Covers the entire search space,

  2. Assigns regions of equal size, and

  3. Adds only a minimum of complexity to constrain demes to their assigned regions.

2.2 General Models for Parallel Metaheuristics

Several models have been proposed for parallelizing metaheuristics [21, 22], into which EAs, as a type of metaheuristic, fit perfectly. For parallel MOEAs, two main approaches have been proposed in the literature. In [16], the authors distinguish between single-walk and multiple-walk parallelizations. The former is aimed at speeding up the computations by parallelizing the evaluation of the objective functions or the search operators. In the latter, several search threads (EAs or any other search method) cooperate to better explore the search space (not only to accelerate the execution). The same issue with the Pareto front as in the parallel MOEA models emerges here, so the authors also distinguish between centralized and distributed Pareto front models (CPF and DPF, respectively).

Talbi et al. [12], on the other hand, categorize parallel metaheuristics into three major hierarchical models. Self-contained parallel cooperation is targeted at parallel computing platforms with limited communication. The search is performed by several subalgorithms in parallel, which might cooperate by exchanging some kind of information; it embraces the island model, or dMOEAs, explained before. Two main groups are distinguished: cooperating subpopulations, which are based on partitioning the objective/search space, and the multistart approach, in which several optimization algorithms run separately in parallel. In the former, the subpopulations can be homogeneous or heterogeneous, explore separate regions of the search space, etc. The latter consists of running several local search algorithms in parallel. On the second and third levels of the hierarchy, the authors consider those models aimed merely at speeding up the computations: problem-independent parallelization, which mainly comprises the master–slave approach of parallel fitness evaluation, and problem-dependent parallelization, which focuses on subdividing single evaluations into parallel tasks that speed up the evaluation step.

3 An Updated Review of the Literature

This section is devoted to presenting and analyzing the most recent contributions to the parallel evolutionary multiobjective optimization field. We have structured the published material according to the classical parallel EA models, i.e., master/slave, distributed, cellular, and hybrid models (Sect. 50.2.1), because this chapter is targeted precisely at EAs and this classification therefore better captures the design principles of the different contributions. Table 50.1 includes, ordered by year of publication, an updated review of the field. In order to help the reader with the terminology of this table, Table 50.2 displays the symbols used and their definitions. For each row of Table 50.1, the following information is shown:

  • FA-DP (fitness assignment and diversity preservation): As two of the most important design issues in EMO algorithms, the fitness assignment and diversity preservation mechanisms serve, respectively, to better guide the search toward Pareto optimal solutions and to spread these Pareto optimal solutions along the entire Pareto front. They are frequently merged into one single measure that translates the vector of objective function values of a multiobjective problem into a single scalar value used to rank solutions properly (from a Pareto optimality point of view, nondominated solutions are noncomparable).

  • PM (parallel model): It can take the values MS (master/slave), Dis (distributed model), Cell (cellular model), or Hyb (hybrid), according to the classical parallel EA categorization.

  • PFC (Pareto front computation): This column distinguishes between the CPF and DPF strategies defined before.

  • PP (parallel platform): When applicable, this column indicates the kind of parallel computing platform on which the given algorithm is executed (GPUs, multicore, cluster, grid, etc.).

  • Topology: Communication topology of the parallel MOEA (star, hybrid, all-to-all [A2A], etc.).

  • Programming: When publicly reported, the programming language used to implement the parallel MOEA is included in this column.

  • Description: The main features of the parallel MOEA in a few words.

  • Application domain: The area in which the parallel MOEA has been applied.

Table 50.1 Parallel MOEAs since 2008

3.1 Analysis by Year

Table 50.2 List of symbols used in Table 50.1

The first point of analysis of the published material concerns the number of publications over the years considered in this chapter, i.e., the period between 2008 and 2011. Figure 50.2 displays this information not only for the period analyzed in this chapter, but also for the period 1993–2005 covered in [14]. The trend is fairly clear: this research topic has been very active during the last few years. Indeed, if one compares this evolution with that reported in [14] up to 2006, where the highest number of works per year was 10, the volume of published material has doubled (more than 20 publications per year in 2009, 2010, and 2011). Despite the relative lack of novel, attractive approaches in the field, parallelism remains a powerful tool in the EMO community because of one major factor: the optimization problems addressed require execution times to be reduced to affordable values. This is reinforced by the current availability of cheap parallel computing platforms such as multicore processors and, lately, GPUs. Indeed, the keyword multicore is the second most frequent entry in column PP of Table 50.1.

Fig. 50.2 Number of publications on parallel MOEAs grouped by year of publication in the periods 1993–2005 (after [14]) and 2008–2011 (this chapter)

3.2 Analysis of the Parallel Models

In this section, the different contributions are analyzed from the point of view of the characteristics of the parallel model. We will pay particular attention to the columns FA-DP, PM, PFC, and Topology.

Fitness assignment and diversity preservation are a major issue in parallel MOEAs because, in many cases, the Pareto front is spread over different subalgorithms (especially in the distributed models). The management of optimal solutions (via fitness assignment) and of how they are distributed along the Pareto front (diversity) deserves a brief review. The FA-DP column shows that the ranking and crowding mechanisms inherited from the most widely used algorithm in the area, namely NSGA-II [95], are also the most frequent in the literature, since NSGA-II is the base algorithm for many of these parallel MOEAs. In the case of the distributed and cellular models, it is worth mentioning that crowding is applied locally, i.e., diversity is kept within the same subalgorithm. If no advanced mechanism is devised to partition the search space (as in [96, 97]), the algorithm will be accepting/discarding solutions that probably lie in the same region as those computed by the other subalgorithm components. The same happens with other classical FA-DP methods in that column, such as the strength raw fitness (SRF) or indicator-based (IB) approaches. As a final note, we strongly believe that algorithms based on decomposition, such as MOEA/D [98], are especially well suited to profit from parallel platforms. They are based on decomposing the multiobjective problem into a number of scalar subproblems that are distributed along the Pareto front; the partition of the search space is therefore implicit, and they can take full advantage of multicore processors. The works [80, 81] follow this line of research.
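For reference, the crowding mechanism mentioned above can be sketched as follows. This is our own minimal Python illustration of the standard NSGA-II crowding distance; in a distributed MOEA, each island would typically run such a routine only on its own local archive, which is exactly why diversity is kept per subalgorithm.

```python
def crowding_distance(front):
    """Crowding distance (NSGA-II style) for a list of mutually nondominated
    objective vectors. Boundary solutions get an infinite distance; interior
    solutions accumulate the normalized side lengths of the cuboid formed by
    their nearest neighbors in each objective."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        fmin, fmax = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / (fmax - fmin)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
print(crowding_distance(front))  # boundary points: inf; the middle point: finite
```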

If one analyzes the usage of the different parallel models in the revised publications (Fig. 50.3), i.e., MS, distributed (Dis), and cellular (Cell), several clear conclusions can be drawn. First, despite the simplicity of the MS model, it appears in almost half of the related literature (45%). Multiobjective optimization problems are becoming more and more complex and demand high-end computational resources, which makes this approach very suitable in this context. Indeed, the underlying search model remains unchanged (it is entirely located at the master process), because authors are usually only interested in speeding up the computations. Second, the distributed models are still receiving much attention from the multiobjective community (47% of the analyzed publications use this model) in the quest for an improved algorithm that reaches the Pareto front in a more effective way (not only reducing execution times). The promising results published with their single-objective counterparts [7] have pushed forward the research in this area. The major issue with distributed models is the difficult management of the Pareto front, as stated in Sect. 50.2.1 and discussed below. Finally, a special comment on the cellular model: even though it remains to be fully exploited in the literature, its percentage of use has doubled since the previous literature review in 2006 [14]. The point is that these algorithms are usually executed sequentially, with no parallelism at all, because they were originally targeted at massively parallel machines, and this kind of machine fell into disuse. There are, of course, some exceptions such as [46], where a cellular-like MOEA for aerodynamic optimization is deployed on a cluster of computers.

Fig. 50.3 Percentage of use of the different parallel MOEA models in the revised literature

The PFC column reflects a hot topic in parallel MOEAs. Handling the nondominated solutions found during the search, when the search itself is distributed over possibly separate processors, has promoted and is still promoting fundamental research within the community. The most widely used strategy, however, is to keep a central pool of nondominated solutions (CPF), that is, a single front. This approach appears in 55% of the analyzed publications and matches both the master/slave and the cellular parallel models, for which this design option is straightforward and natural.

Almost the same can be said of the DPF strategy (45% of the papers), which is mostly used with the distributed model of parallel MOEAs: the Pareto front is approximated separately by each of the subalgorithmic components during the search and only merged into one single front at the end of the exploration. In general, all DPF strategies are complemented with occasional CPF phases, which allows the searches of the different subalgorithms to overlap [11]. A couple of exceptions can be found in the revised literature, in which a distributed computation of the Pareto front is endowed with a fully centralized Pareto front. In [50], a distributed MOEA has one single Pareto front that is computed by several isolated islands; each island uses a weighted-sum approach (with different weights) and is targeted at one region of the search space. The second exception appears in [80, 81], in which a multithreaded parallelization of MOEA/D is presented: the Pareto front is stored in global memory and the different threads are in charge of separate groups of weight combinations.
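A simplified sketch of this kind of decomposition-based parallelization is given below. It is our own illustration and not the code of [80, 81]: the weighted-sum scalarization, the random-search inner loop, and the thread pool are placeholder choices that only show how groups of weight vectors can be handed to different workers while the resulting front is gathered centrally. (In CPython, threads will not speed up this CPU-bound toy; a process pool or a C/C++ implementation would be used in practice.)

```python
import random
from concurrent.futures import ThreadPoolExecutor

def f(x):
    """Toy bi-objective function (minimization)."""
    return (x * x, (x - 2.0) ** 2)

def solve_subproblem(weight, samples=2000):
    """Minimize the weighted-sum scalarization for one weight vector using
    plain random search, a placeholder for a real evolutionary operator."""
    w1, w2 = weight
    best_x = min((random.uniform(-2.0, 4.0) for _ in range(samples)),
                 key=lambda x: w1 * f(x)[0] + w2 * f(x)[1])
    return f(best_x)

# Evenly spread weight vectors: each worker handles its own scalar subproblems,
# while the resulting points form one centrally gathered front approximation.
weights = [(i / 10.0, 1.0 - i / 10.0) for i in range(11)]
with ThreadPoolExecutor(max_workers=4) as executor:
    front = list(executor.map(solve_subproblem, weights))
print(sorted(front))
```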

The final aspect under analysis in this section is the interconnection topology of the different components of the parallel algorithms. Star topologies are widely used in two scenarios: (i) master/slave models and (ii) distributed models with periodic gathering operations required to generate a single Pareto front. A star topology captures the idea of a central master that delivers tasks to a set of worker nodes, which is why it is so popular in these two cases. The column Topology in Table 50.1 also reveals that all-to-all (A2A) and random topologies exist in the literature. The former enables the different components of the parallel algorithm to be tightly coupled, thus quickly spreading the nondominated solutions found and speeding convergence toward the optimal Pareto front. The latter implies that the genetic material may take longer to reach all the algorithmic components, thus promoting diversity.

3.3 Review of the Software Implementations

This section is mainly targeted at summarizing the contents of the column Programming in Table 50.1, in which a note on the implementation of the algorithms is given. A quick look at the items in this column clearly shows that the combination of C/C++ as the programming language and MPI (Message Passing Interface) [99] as the technology enabling parallel communication between the different components of the algorithms is the preferred option. This can be explained by the strong engineering background of most of the MOPs addressed (and of the researchers), a field in which C/C++ has had a dominant position for many years. Indeed, C/C++ allows researchers to include very low-level routines (even assembler code) that enable full control of all parts of their applications. MPI, in turn, is a standard (not just a library) for which many implementations exist (MPICH, LAM, etc.), so its use always guarantees correctness and efficiency.

Despite this clear preference, two novel and relevant trends on this topic are worth mentioning in detail. On the one hand, even though clusters of computers are able to provide researchers with a large computational power, there are MOPs that require still more resources. These resources can only be supplied by grid computing platforms [100]. This has promoted the parallelization of EMO algorithms with grid-enabled technologies such as MPICH-G2 [40] or MatGrid [34]. On the other hand, there are already several seminal works on the parallelization of multiobjective optimizers on GPUs, as stated in Sect. 50.3.1. To the best of our knowledge, only implementations with C and CUDA (compute unified device architecture) [101] have been proposed [66, 93], but nowadays other options have also emerged, such as OpenCL [102].

3.4 Main Application Domains

One of the main reasons, if not the main one, for the popularity of MOEAs is their success in solving real-world problems, and parallel EMO algorithms are no exception. As a consequence, the variability of the application domains is very large, which makes the task of classification rather difficult. Partially following the categorization proposed in [18], three main areas of application are distinguished: engineering, industrial, and scientific. Figure 50.4 summarizes the percentage of revised publications that fall into each area. Besides these three categories, the figure also displays a fourth item devoted to benchmarking. The latter is not an application domain, but it appears frequently in the revised literature: there are well-established testbeds, such as Zitzler–Deb–Thiele (ZDT) [103], Deb–Thiele–Laumanns–Zitzler (DTLZ) [104], or the walking fish group (WFG) [105], that have been widely used as a comparison basis when introducing new algorithmic proposals.

Fig. 50.4 Application domains

Among the real-world applications, engineering applications are by far the most popular domain within the parallel EMO literature (as in the entire EMO field), principally because they usually have mathematical models that are well suited to this kind of algorithm. Indeed, Fig. 50.4 shows that almost 35% of the papers analyzed address an optimization problem from the engineering domain. Several relevant works among those analyzed are devoted to aerodynamic shape optimization [27, 30, 46], reconfiguration of operational satellite constellations [41], or combustion control for different types of vehicles [85, 91].

The second place in terms of popularity is occupied by industrial applications, which appear in 21% of the papers reviewed. These applications are related to the fields of manufacturing, scheduling, and management. Very interesting problems have been addressed in this domain, such as the optimization of sonic crystal attenuation properties [52], the Camembert cheese ripening process [78], and the evaluation of an input tax to regulate pollution from agricultural production [65].

Scientific applications, the third category of real-world applications analyzed (16% of the papers), group optimization problems in bioinformatics, chemistry, and computer science. The most successful applications here belong to the bioinformatics and chemistry fields, with problems on molecular docking [33], protein design [40], drug design [56], and phylogenetic inference [69, 70]. In our view, the reason these applications (whether engineering, industrial, or scientific) have been tackled with parallel MOEAs is precisely the computational effort involved in evaluating their objective functions. When a new, enhanced parallel search model is sought, authors usually rely instead on benchmarking functions. Indeed, as stated above, 30% of the revised papers (Fig. 50.4) use these testbeds either exclusively [61, 68, 82, 94] or prior to solving a real-world application [20, 5, 51].

4 Conclusions and Future Works

4.1 Summary

In this chapter, we have carried out a comprehensive survey of the literature on parallel MOEAs since 2008/2009, the years when the two most well-known comprehensive surveys were published. We first described the existing parallel models for MOEAs, distinguishing between those specifically targeted at EAs and those aimed at capturing the essence of parallel metaheuristics in general. Based on the former models (since we are interested in surveying parallel MOEAs), more than 80 relevant papers have been carefully analyzed (many dozens more were studied but left out because of their limited relevance to this survey). Fundamental aspects such as the fitness assignment and diversity preservation mechanisms, the parallel model used, the management of the approximated Pareto front, the underlying parallel platform (if any), and the communication topology of the algorithms have been reviewed. Their main application domains have been gathered and structured into engineering, industrial, and scientific real-world multiobjective problems.

Despite the lack of new, attractive ideas in the area, this survey has revealed that research on parallel MOEAs is still moving forward for two main reasons. On the one hand, researchers think of parallelism not only as a way to speed up computations, but also as a strategy to enhance the search engines of the algorithms. On the other hand, the computational demands of many multiobjective problems (dimensionality, uncertainty, simulations, etc.) mean that parallelism is the only suitable option to address them.

In the following section, some topics for future research for engineers interested in parallel MOEAs are outlined. In the authors' opinion, these topics merit particular attention so that the current state of the art can be improved.

4.2 Future Trends

We have structured this future trends section in a bottom-up fashion, proposing research lines for parallel MOEAs that range from low-level algorithmic details to more complex enhanced strategies at a higher level. We also point out several studies that are missing in the literature and that, in our opinion, once carried out, might have a great impact within the community.

Let us start with one of the main issues of parallel MOEAs: in the end, the algorithms have to compute one single approximation to the Pareto front, i.e., they have an inherently centralized structure. When several subalgorithms cooperate to approximate such a Pareto front (as in the distributed models), they must somehow coordinate in order to effectively sample different regions of the front and avoid overlaps. These overlaps are the main reason why distributed algorithms are outperformed by centralized approaches (such as the master/slave). This issue has been partially addressed in the literature [96, 97] with some degree of success, but the cited results have not been widely adopted yet and new advances are needed.

We propose here two lines of research that might help the distributed models of parallel MOEAs to overcome this issue. The first one is based on designing a fully distributed diversity preservation method (e.g., a distributed crowding). The research question here is whether it is possible to devise a density estimator that considers both local and global information from the other components of the distributed MOEA. We think so. Instead of trying to allocate a given portion of the Pareto front to each island, the islands should periodically broadcast a list with the objective values of their local solutions (but not the decision variables). Then, when checking whether a solution is to be stored in its local Pareto front, an island has to consider both its local information and the global information received from the other islands. To the best of our knowledge, such a mechanism does not exist in the literature. The second proposal is to use rough set theory [106] to effectively partition the search space and allocate different portions to different components of the parallel MOEA. A preliminary work [107] does exist that uses multiobjective simulated annealing, and these ideas will have a great impact on the design of parallel MOEAs.
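To make the first proposal slightly more concrete, the following sketch is purely illustrative of the kind of acceptance rule described above (to our knowledge no such mechanism exists in the surveyed literature, and the function names and data are ours): an island tests a candidate against its local archive and against the objective vectors broadcast by the other islands before storing it.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def accept_candidate(candidate_obj, local_front, remote_objs):
    """Accept a candidate into the island's local front only if it is not
    dominated by any local solution nor by any objective vector broadcast
    by the other islands (decision variables are never exchanged)."""
    if any(dominates(q, candidate_obj) for q in local_front + remote_objs):
        return local_front
    # Drop local solutions that the newcomer dominates, then add it.
    return [p for p in local_front
            if not dominates(candidate_obj, p)] + [candidate_obj]

local = [(1.0, 4.0), (3.0, 2.0)]
remote = [(0.5, 3.5)]  # objective vectors received from another island's broadcast
print(accept_candidate((0.8, 3.8), local, remote))  # rejected: dominated by (0.5, 3.5)
print(accept_candidate((2.0, 1.0), local, remote))  # accepted: it dominates (3.0, 2.0)
```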

On a higher algorithmic level, this survey has also revealed that, despite their advanced search model and their accurate results on many benchmarking functions [108], cellular models of parallel MOEAs have been only marginally used in most application domains. We strongly believe that these algorithms might improve the state of the art in many of these unexplored research areas. On a related matter, designing heterogeneous algorithms [17] is also a line of research with high potential. Indeed, few contributions in the literature have considered parallel heterogeneous algorithms for multiobjective optimization [59]. To the best of our knowledge, there is no published analysis of the impact of heterogeneity (i.e., several different subalgorithms, such as NSGA-II, SPEA2, PAES, or MOEA/D, cooperating) on the approximation of Pareto fronts. Our experience with these quite different algorithms is that each one is usually better suited to exploring particular regions of the search space, so their collaboration may result in a new, improved algorithm.

Finally, another group of open research lines is devoted to well-grounded studies of the behavior of parallel MOEAs concerning their scalability with respect to the number of decision variables and their convergence speed toward the optimal Pareto front, as has been done for sequential MOEAs in [109] and [110], respectively. Again, the influence of heterogeneity on these two search capabilities of parallel MOEAs is of interest.

Last but not least, there is a distinct lack of theoretical work in this area. For example, to the best of our knowledge, there is no analysis of the takeover time [111] in the multiobjective context, and we believe that characterizing parallel MOEAs based on this metric may help researchers in future developments. The difficulty of the landscapes in MOPs and the relative theoretical advantages for different types of problems are open research lines.