Introduction

A metaheuristic is a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms. The term is also used to refer to a problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in such a framework. It combines the Greek prefix meta- (μετά, beyond in the sense of high-level) with heuristic (from the Greek heuriskein, εὑρίσκειν, to search) and was coined by Fred Glover in 1986.

Most metaheuristic frameworks have their origin in the 1980s (although in some cases roots can be traced to the mid-1960s and 1970s) and were proposed as an alternative to classic methods of optimization such as branch-and-bound and dynamic programming. As a means for solving difficult optimization problems, metaheuristics have enjoyed a steady rise in both use and popularity since the early 1980s. EU/ME – the metaheuristics community – is the EURO-sponsored working group on metaheuristics and the largest platform for communication among metaheuristics researchers worldwide. Conferences and journals devoted to metaheuristics, along with some software, are described at the end of this article.

Different metaheuristics can vary significantly in their underlying foundations. Some metaheuristics mimic a process seemingly unrelated to optimization, such as natural evolution, the cooling of a crystalline solid, or the behavior of animal swarms. Alongside such variation, there is also a striking similarity among some methods that rely on a common foundation. For example, many methods have been proposed (and given different names) that differ in little more than the metaphor underlying them, which is often a close variant of an original method's metaphor. In this manner, the metaheuristic framework of ant colony optimization, for instance, has spawned a steady stream of different social insect-based methods (using bees, flies, termites, etc.). Most metaheuristic frameworks advise the use of randomness, although some propose completely deterministic strategies. In optimization, metaheuristics are most often used to solve combinatorial optimization problems, although metaheuristics for other problems exist (see below).

One of the defining characteristics of a metaheuristic framework is that the resulting methods are — as the name suggests — always heuristic in nature. Exact methods for combinatorial optimization, such as branch-and-bound or dynamic programming, are subject to combinatorial explosion, i.e., for NP-hard problems the computing time required by such methods increases as an exponential function of the problem size. By relaxing the demand that the optimal solution should be found in a finite (but often prohibitively large) amount of time, optimization methods can be built that attempt to find a solution that is good enough in a computing time that is small enough. However, there are important aspects of metaheuristics that link them more closely with exact methods and that give rise to a number of hybrids that unite these two types of methods. These aspects will be discussed later.

The required quality of a solution and the maximum allowable computing time can, of course, vary greatly across optimization problems and situations. Metaheuristic frameworks, being defined in very general terms, can be adapted to fit the needs of most real-life optimization problems, from the smallest and simplest to the largest and most complex. Additionally, metaheuristics do not put any demands on the formulation of the optimization problem (like requiring constraints or objective functions to be expressed as linear functions of the decision variables), in contrast, for example, to methods for mixed-integer programming. As a result, several commercial software vendors have implemented metaheuristics as their primary optimization engines, both in specialized software packages for production scheduling, vehicle routing (Sörensen et al. 2008), and nurse rostering (Burke et al. 2004), and in general-purpose simulation packages (April et al. 2003; Fu 2002; Glover et al. 1999).

However, the research field of metaheuristics is not without its critics, most of whom attack the perceived lack of a universally applicable design methodology for metaheuristics and the lack of scientific rigor in testing and comparing different implementations. The no free lunch theorems (Wolpert and Macready 1997) state that, when averaged over all problems, all optimization methods perform equally well. This suggests that no single metaheuristic can be considered a panacea for combinatorial optimization problems, but rather that a good deal of problem-specific tuning is necessary to achieve acceptable performance. Moreover, metaheuristics often have a large number of parameters, and tuning them is a notoriously difficult process. Consequently, computational testing to compare different metaheuristics is very difficult and often done in an ad-hoc way, rather than by established scientific standards (Barr et al. 1995; Hooker 1995; Rardin and Uzsoy 2001). This has motivated work on self-adaptive metaheuristics that automatically tune their parameters (Cotta et al. 2008; Kramer 2008; Nonobe and Ibaraki 2001, 2002). From an alternative perspective, if a research study identifies parameter values that work well for a selected class of applications — as most studies attempt to do — then for practical purposes other researchers can consider these parameters as constants. (Of course, this does not prevent future experimentation from seeking better parameter values.)

Another criticism sometimes levied at metaheuristics concerns the occasional tendency to create overly intricate methods (Michalewicz and Fogel 2004) with many different operators, where the contribution of these operators to the final quality of the solutions found may be poorly understood (Watson et al. 2006). Despite some theoretical results, such as proofs of the convergence of some metaheuristics under special assumptions – usually infinite running time (Eiben et al. 1991; Mitra et al. 1985) – or attempts to explain why genetic algorithms work (such as the building block hypothesis (Holland 1975), which has been heavily criticized by Wright et al. (2003)), research papers that attempt to capture the fundamental reasons why metaheuristics work are still few and far between.

Despite these criticisms, the ability to obtain good solutions where other methods fail has made metaheuristics the method of choice for solving a majority of large real-life optimization problems, both in academic research and in practical applications.

Metaheuristic Concepts

Like all optimization methods, metaheuristics attempt to find the best (feasible) solution out of all possible solutions of an optimization problem. In order to do this, they examine various solutions and perform a series of operations on them in order to find different, better solutions.

Metaheuristics operate on a representation or encoding of a solution, an object that can be stored in computer memory and can be conveniently manipulated by the different operators employed by the metaheuristic. Since metaheuristics are most often used to solve combinatorial optimization problems, representations too are generally combinatorial in nature (i.e., they are able to represent only a finite number of solutions). Representations used in the metaheuristics literature are quite diverse (see, e.g., Talbi (2009) for an overview) and range from vector representations (binary, integer) through permutations to more complex representations such as trees and other graphs. Many metaheuristic algorithms use a combination of different representation types, such as a vector of permutations. Contrary to exact algorithms, metaheuristics do not require the encoding of solutions to be a bijection, i.e., several solutions may share the same encoding and a single solution may be encoded in different ways. Often, an encoding is chosen on the grounds of being convenient to manipulate, although sometimes a time-consuming decoding procedure may be required to obtain the actual solution (such as the encoding used in Prins (2004)).
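
To make the notion of a representation concrete, the following minimal sketch (not taken from the literature cited above) encodes a traveling salesperson tour as a permutation of city indices and decodes it into its objective function value; the coordinates are purely illustrative. Later sketches in this article build on these two definitions.

```python
# Illustrative only: a permutation encoding for a small TSP instance.
import math

cities = [(0, 0), (3, 0), (3, 4), (0, 4)]  # hypothetical coordinates

def tour_length(perm, cities):
    """Decode a permutation into its objective value: the closed tour length."""
    return sum(
        math.dist(cities[perm[i]], cities[perm[(i + 1) % len(perm)]])
        for i in range(len(perm))
    )

print(tour_length([0, 1, 2, 3], cities))  # 14.0 for this square instance
```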

Although many different metaheuristics have been proposed, their mechanisms for obtaining good solutions primarily operate by manipulating solutions in three ways: by iteratively making small changes to a current solution (local search metaheuristics), by constructing solutions from their constituting parts (constructive metaheuristics), and by iteratively combining solutions into new ones (population-based metaheuristics). Each of these manipulation mechanisms gives rise to a class of metaheuristic frameworks that are discussed separately below. It is important to note that these classes are not mutually exclusive, and many metaheuristic algorithms combine ideas from each of them. Also, in some instances the transitions from one solution to another are achieved by solving specially generated subproblems.

Local Search Metaheuristics

Local search metaheuristics find good solutions by iteratively making small changes, called moves, to a single solution, called the current solution. The set of solutions that can be obtained by applying a single move to a given solution is called the neighborhood of that solution. In each iteration, a solution from the neighborhood of the current solution is selected to become the new current solution. The sequence of moves defines a trajectory through the search space. Hence, local search metaheuristics are also known under the names of neighborhood search methods or trajectory methods.

For almost all problem representations, different move types can be defined, resulting in different neighborhood structures. The rule used to select the new current solution is called the move strategy or search strategy and determines the aggressiveness of the search. Metaheuristics that use the steepest descent or steepest ascent strategy select the best move from the neighborhood and are often called hill-climbers. Other move strategies include selecting the first move that improves upon the current solution (called the mildest ascent/descent or first-improving strategy), as well as selecting a random improving solution.
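
As a sketch of these move strategies, the following code (assuming the tour_length helper and cities defined earlier) applies local search over a pairwise-swap neighborhood, with either the steepest-descent or the first-improving strategy.

```python
import itertools

def neighborhood(perm):
    """All solutions reachable by one move (here: swapping two positions)."""
    for i, j in itertools.combinations(range(len(perm)), 2):
        nb = perm[:]
        nb[i], nb[j] = nb[j], nb[i]
        yield nb

def local_search(perm, cities, strategy="best"):
    """Descend until the current solution is a local optimum."""
    while True:
        best_nb, best_cost = None, tour_length(perm, cities)
        for nb in neighborhood(perm):
            cost = tour_length(nb, cities)
            if cost < best_cost:
                best_nb, best_cost = nb, cost
                if strategy == "first":   # mildest descent: accept immediately
                    break
        if best_nb is None:               # no improving move: local optimum
            return perm
        perm = best_nb                    # "best": steepest descent
```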

In general, the set of allowable moves is restricted to those that result in solutions that are both feasible and improve upon the current solution. Some metaheuristics allow infeasible moves in a strategy that is called strategic oscillation. In this strategy, the search is usually only allowed to temporarily remain in the infeasible region of the search space. A striking example of the utility of this strategy is shown in Glover and Hao (2010).

A solution whose neighborhood does not contain any better solutions is called a local optimum (as opposed to a global optimum, i.e., a best possible solution to the optimization problem). When the current solution is a local optimum, the metaheuristic utilizes a strategy to escape to other regions of the search space. It is this strategy that distinguishes metaheuristics from simple heuristics and from each other. The metaheuristic’s name therefore usually refers to the strategy to prevent the search from becoming ensnared within regions whose local optima may be substantially inferior to a global optimum.

The simplest strategy to escape to potentially more fertile regions is to either start the search again from a new, usually random, solution or to make a relatively large change (called a perturbation) to the current solution. These strategies are respectively called multi-start local search (MLS) and iterated local search (ILS) (Lourenço et al. 2003).
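
A compact ILS sketch, under the same assumptions as the previous snippets: the perturbation is a few random swaps, and only improving solutions are accepted as the new incumbent (one of several possible acceptance rules).

```python
import random

def iterated_local_search(perm, cities, iterations=100, strength=3):
    best = local_search(perm[:], cities)
    for _ in range(iterations):
        candidate = best[:]
        for _ in range(strength):              # perturbation: random swaps
            i, j = random.sample(range(len(candidate)), 2)
            candidate[i], candidate[j] = candidate[j], candidate[i]
        candidate = local_search(candidate, cities)
        if tour_length(candidate, cities) < tour_length(best, cities):
            best = candidate                   # keep improving solutions only
    return best
```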

A number of metaheuristics define different move types and change the move type used once a local optimum has been reached. The rationale for this strategy is that a local optimum relative to a specific move type can often be improved by performing local search with a different move type. The global optimum, on the other hand, is a local optimum with respect to every possible move type. Metaheuristics that use this strategy are commonly called variable neighborhood search (VNS) (Mladenović and Hansen 1997) algorithms, but using more than one neighborhood is far more common in the metaheuristics literature and not restricted to algorithms labeled VNS (Sörensen et al. 2008).

Using memory structures is a third commonly encountered way for metaheuristics to avoid remaining trapped in a local optimum and to guide the search in general so as to find good solutions more quickly. Algorithms that use memory structures are commonly grouped under the umbrella term tabu search (Glover 1989, 1990, 1996) algorithms (sometimes also called adaptive memory programming algorithms). Different memory structures may be used to explicitly remember different aspects about the trajectory through the search space that the algorithm has previously undertaken, and different strategies may be devised to use this information to direct the search (Glover and Laguna 1993) to promising areas of the search space.

Often-used memory structures include the tabu list (from which the name of the metaheuristic framework derives), which records the last encountered solutions (or some attributes of them) and forbids these solutions (or attributes) from being visited again as long as they are on the list. Some variants record move attributes rather than solution attributes on the tabu list, for the purpose of preventing moves from being reversed. The tabu list is usually organized in a first-in, first-out (FIFO) fashion, i.e., the current solution replaces the oldest one on the list. The length of the tabu list is called the tabu tenure. Frequency memory records how often certain attributes have been encountered in solutions on the search trajectory, which allows the search to avoid visiting solutions that display the most often encountered attributes or to visit solutions with attributes seldom encountered. Such memory can also include an evaluative component that allows moves to be influenced by the quality of solutions previously encountered that contain various attributes or attribute combinations. Other memory structures such as an elite set of the best solutions encountered so far are also common.

Another example of the use of memory can be found in a metaheuristic called guided local search (GLS) (Voudouris and Tsang 1999). GLS introduces an augmented objective function that includes a penalty factor for each potential element. When trapped in a local optimum, GLS increases the penalty factor for all elements of the current solution, making other elements (and therefore other moves) more attractive and allowing the search to escape from the local optimum. Similarly, some variants of tabu search use penalties to determine the tabu status of moves, though drawing more strongly on memory.
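
The following condensed sketch shows one simple realization of these ideas, again building on the helpers above: a FIFO tabu list of fixed tenure stores recently visited solutions (full solutions, for brevity; practical implementations usually store solution or move attributes instead), and the best non-tabu neighbor is accepted even when it worsens the current solution.

```python
from collections import deque

def tabu_search(perm, cities, iterations=200, tenure=10):
    current, best = perm[:], perm[:]
    tabu = deque(maxlen=tenure)            # FIFO list; maxlen = tabu tenure
    tabu.append(tuple(current))
    for _ in range(iterations):
        candidates = [nb for nb in neighborhood(current)
                      if tuple(nb) not in tabu]
        if not candidates:
            break
        # Best admissible neighbor, accepted even if worse than current
        current = min(candidates, key=lambda nb: tour_length(nb, cities))
        tabu.append(tuple(current))        # oldest entry drops off the list
        if tour_length(current, cities) < tour_length(best, cities):
            best = current[:]
    return best
```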

Contrary to most other local search metaheuristics, simulated annealing uses a random move strategy, emulating the annealing process of a crystalline solid. At each iteration, this strategy selects a random solution \( {x}^{\prime} \) from the neighborhood of the current solution \( x \) and accepts \( {x}^{\prime} \) as the new current solution with probability \( {e^{{-[f({x}^{\prime})-f(x)]/T}}} \), where \( f(\cdot ) \) is the objective function value (to be minimized) of the solution and \( T \) is an exogenous parameter called the temperature. The acceptance probability thus increases as the gain in solution quality is larger (or the loss is smaller). The temperature is initially set to a high value, which leads to higher acceptance probabilities, and then gradually lowered as the search progresses (although it may be increased again at certain moments during the search). The function that describes the evolution of \( T \) throughout the different iterations is called the cooling schedule. Simulated annealing was first described in Kirkpatrick et al. (1983), based upon an algorithm by Metropolis et al. (1953).
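
A minimal simulated annealing sketch, with a geometric cooling schedule (an illustrative choice; the framework itself does not prescribe one), using the same TSP helpers as before:

```python
import math, random

def simulated_annealing(perm, cities, t0=10.0, cooling=0.99, iterations=5000):
    current, best = perm[:], perm[:]
    temperature = t0
    for _ in range(iterations):
        i, j = random.sample(range(len(current)), 2)   # random neighbor
        candidate = current[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        delta = tour_length(candidate, cities) - tour_length(current, cities)
        # Improving moves are always accepted; worsening moves with e^(-delta/T)
        if delta <= 0 or random.random() < math.exp(-delta / temperature):
            current = candidate
            if tour_length(current, cities) < tour_length(best, cities):
                best = current[:]
        temperature *= cooling                         # geometric cooling
    return best
```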

Relaxation induced neighborhood search (RINS) (Danna et al. 2005) is a metaheuristic that constructs a promising neighborhood using information contained in the continuous relaxation of the mixed integer programming (MIP) model of the optimization problem. Because it does not need problem-specific information to construct its neighborhood, RINS can be more easily built into general-purpose MIP solvers and is currently available in the latest versions of LINDO/LINGO and CPLEX. Contrary to other metaheuristics, RINS requires the problem to be formulated as a MIP, which makes it less general than other metaheuristics.

Constructive Metaheuristics

Constructive metaheuristics constitute a separate class from local search metaheuristics in that they do not operate on complete solutions, but rather construct solutions from their constituent elements, starting from an empty set and adding one element during each iteration, an operation that is also called a move. After each iteration except the last, the algorithm therefore operates on a partial solution (e.g., a traveling salesperson tour that does not visit all cities), of which it may not be possible to determine the objective function value or the feasibility status. Constructive metaheuristics are often adaptations of greedy algorithms, i.e., algorithms that add the best possible element at each iteration, a myopic strategy that may result in suboptimal solutions.

GRASP, the acronym for greedy randomized adaptive search procedure (Feo and Resende 1995), overcomes this drawback of purely greedy algorithms by adding randomness to the selection process. Several variants of GRASP have been proposed, founded on the following basic idea. At each iteration, a restricted candidate list, which contains the \( \alpha \) best elements that can be added, is updated. From the restricted candidate list, a random element is selected for addition to the partial solution, after which the list is updated to reflect the new situation. The parameter \( \alpha \) determines the greediness of the search: if \( \alpha \) equals 1, the search is completely greedy, whereas if \( \alpha \) is equal to the number of elements that can be added, the search is completely random. A particularly useful advance in GRASP algorithms has occurred by blending them with the path relinking strategy of tabu search. Notable examples of this approach include Commander et al. (2008); Nascimento et al. (2010); Resende et al. (2010).
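
A sketch of the construction phase only, for the TSP setting assumed earlier: at each step the \( \alpha \) nearest unvisited cities form the restricted candidate list, from which one is drawn at random.

```python
import math, random

def grasp_construct(cities, alpha=2):
    unvisited = list(range(1, len(cities)))
    tour = [0]                              # start from an arbitrary city
    while unvisited:
        last = tour[-1]
        # Rank candidate elements greedily by their incremental cost
        ranked = sorted(unvisited,
                        key=lambda c: math.dist(cities[last], cities[c]))
        rcl = ranked[:alpha]                # restricted candidate list
        choice = random.choice(rcl)         # randomized greedy selection
        tour.append(choice)
        unvisited.remove(choice)
    return tour
```

With alpha=1 the construction is purely greedy, and with alpha equal to the number of remaining cities it is purely random, mirroring the role of \( \alpha \) described above.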

Rather than using randomness to outperform a greedy heuristic, more strategic ways of performing constructive (or destructive) moves, once again making use of memory, are examined in Fleurent and Glover (1999); Glover et al. (2000). Another approach is embodied in a look-ahead strategy (Pearl 1984), which evaluates the elements that can be added by considering not only the next move, but several moves into the future. The pilot method (Duin and Voß 1999), for example, uses a (usually greedy) constructive heuristic to determine a pilot solution for each potential move, i.e., the value of a potential element is evaluated by determining the objective function value of the solution that results from applying the heuristic to generate a complete solution from the current partial solution with this element added. The idea of looking ahead has a long history, having been proposed in probing strategies for integer programming in Lemke and Spielberg (1967).

Ant colony optimization (ACO) (Dorigo et al. 1996, 2006) is an umbrella term for a set of related constructive metaheuristics that build solutions by imitating the foraging behavior of ants. Perhaps because of the appeal of its imagery, this class of approaches has received and continues to receive widespread attention in the popular press (e.g., Anonymous 2010). Ant colony optimization introduces an external parameter for each potential element called the pheromone level (a pheromone is a chemical factor that triggers a social response in the same species), initially set to zero for all elements. The metaheuristic uses multiple parallel artificial agents (called ants) that each construct a solution by an iterative constructive process in which elements are selected based on a combination of the value of that element and its pheromone level. Once all ants have constructed a solution, the pheromone level of the elements is updated in a way that reflects the quality of the solutions in which they appear (the elements of better solutions receive more pheromone). Each ant then constructs a new solution, but elements that were present in high-quality solutions will now have a higher probability of being selected. Periodically, the pheromone level of all elements is reduced to reflect evaporation. The process of constructing solutions in the way described above is repeated, and the best solution found is reported at the end.
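
A highly condensed ACO sketch for the TSP helpers used throughout (initializing pheromone to a small positive constant rather than zero, so that the first selection probabilities are well defined): ants build tours with probabilities biased by pheromone and inverse distance; pheromone then evaporates and is reinforced along the edges of each tour in proportion to its quality.

```python
import math, random

def aco(cities, n_ants=10, n_iter=50, rho=0.1):
    n = len(cities)
    tau = [[1.0] * n for _ in range(n)]          # pheromone level per edge
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                last, cands = tour[-1], list(unvisited)
                # Bias selection by pheromone and inverse distance
                weights = [tau[last][c] / (math.dist(cities[last], cities[c]) + 1e-9)
                           for c in cands]
                nxt = random.choices(cands, weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            tours.append(tour)
        for row in tau:                           # evaporation
            for j in range(n):
                row[j] *= 1.0 - rho
        for tour in tours:                        # reinforcement by quality
            length = tour_length(tour, cities)
            for a, b in zip(tour, tour[1:] + tour[:1]):
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
            if length < best_len:
                best, best_len = tour, length
    return best
```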

To improve the quality of the final solutions, most constructive metaheuristics include a local search phase after the construction phase.

Population-Based Metaheuristics

The main mechanism that allows population-based metaheuristics to find good solutions is the combination of existing solutions from a set, usually called the population. The fundamental reasoning behind this class of metaheuristics is that good solutions can be found by exchanging solution attributes between two or more (usually high-quality) solutions. The most important members of this class are called evolutionary algorithms because they mimic the principles of natural evolution. Following Michalewicz and Fogel (2004), the term evolutionary algorithms is used here as an umbrella term to encompass the wide range of names given to metaheuristics based on evolution. This includes genetic algorithms (Goldberg et al. 1989; Holland 1975), genetic/evolutionary programming (Koza 1992), evolutionary computation (Fogel 2006), evolution strategies (Beyer and Schwefel 2002), and many others. The literature on evolutionary algorithms is larger than that on other metaheuristics, and this field has spawned several dedicated journals and conferences.

Typical of the field of evolutionary algorithms is that its researchers tend to adopt the vocabulary of the metaphor on which the algorithms are based. The descriptions of these algorithms are therefore stated in terms of chromosomes (instead of solutions), fitness (instead of objective function value), genotype (instead of encoding), etc. The driving force behind most evolutionary algorithms is selection and recombination. Selection ensures that predominantly high-quality solutions in the population are selected for recombination, usually by biasing each solution's probability of being selected according to its objective function value. Recombination utilizes specialized operators to combine the attributes of two or more solutions into new ones. The new solutions are then added to the population by a process called reinsertion, possibly subject to feasibility or minimum quality demands, to replace (usually low-quality) solutions. In a large majority of cases, all operators (selection, recombination, and reinsertion) make heavy use of randomness. A large number of evolutionary algorithms additionally include a mutation operator that (again, randomly) changes a solution after it has been recombined. Most evolutionary algorithms iterate the selection, recombination, mutation, and reinsertion phases a number of times, and report the best solution in the population.
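
A bare-bones evolutionary algorithm sketch for the permutation setting used throughout; the choice of operators (tournament selection, order crossover, swap mutation, worst-replacement reinsertion) is illustrative rather than canonical.

```python
import random

def order_crossover(p1, p2):
    """Recombination: combine two parent permutations into a feasible child."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]                       # inherit a slice from parent 1
    rest = [g for g in p2 if g not in child]   # fill the rest in parent-2 order
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def evolve(cities, pop_size=30, generations=100, mut_rate=0.2):
    fitness = lambda ind: tour_length(ind, cities)   # lower is better here
    pop = [random.sample(range(len(cities)), len(cities))
           for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection biases reproduction toward better solutions
        parents = [min(random.sample(pop, 3), key=fitness) for _ in range(2)]
        child = order_crossover(*parents)
        if random.random() < mut_rate:               # mutation: random swap
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]
        pop.remove(max(pop, key=fitness))            # reinsertion: drop worst
        pop.append(child)
    return min(pop, key=fitness)
```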

Scatter search and path relinking (Glover et al. 2000, 2003) are population-based metaheuristics for continuous (or mixed-integer) and combinatorial optimization, respectively, proposed as a deterministic alternative to the highly stochastic evolutionary algorithms. Scatter search encodes solutions as real-valued vectors (or rounded real-valued vectors for integer values) and generates new solutions by considering convex or concave linear combinations of these vectors. Path relinking, on the other hand, generalizes this idea, making it applicable to combinatorial optimization problems, by generating paths between high-quality solutions. Paths consist of elementary moves such as the ones used in local search metaheuristics and essentially link one solution (called the initiating solution) to a second solution (called the guiding solution) in the solution space. Contrary to local search metaheuristics, path relinking uses a move strategy that chooses moves on the grounds that they bring the current solution closer to the guiding solution. In both scatter search and path relinking, the selection of both initiating and guiding solution from a population (called the reference set) is done in a deterministic way, as are the mechanisms for updating the reference set once new solutions have been generated.
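
A minimal path relinking sketch under the same assumptions: the walk from the initiating permutation to the guiding permutation proceeds one repair swap at a time (one simple way to realize the moves described above), and the best intermediate solution on the path is retained.

```python
def path_relink(initiating, guiding, cities):
    current = initiating[:]
    best, best_cost = current[:], tour_length(current, cities)
    for i in range(len(current)):
        if current[i] != guiding[i]:
            j = current.index(guiding[i])   # swap that moves toward the guide
            current[i], current[j] = current[j], current[i]
            cost = tour_length(current, cities)
            if cost < best_cost:            # keep the best point on the path
                best, best_cost = current[:], cost
    return best
```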

Hybrid Metaheuristics

Metaheuristics that combine aspects or operators from different metaheuristic paradigms are called hybrid metaheuristics. The term has lost much of its discriminatory power, however, since such combinations of operators from different metaheuristic frameworks have become the norm rather than the exception. Indeed, there is a tendency in the metaheuristics research field to look at metaheuristic frameworks as providing general ideas or components to build optimization algorithms, rather than to consider them as recipes that should be closely followed (Michalewicz and Fogel 2004). In this spirit, many metaheuristics use specialized heuristics to efficiently solve subproblems produced by the metaheuristic method (e.g., Gendreau et al. 1994). Also, a large number of local search metaheuristics use a construction phase to find an initial solution (or a set of initial solutions) from which to start the neighborhood search. In fact, the original description of the GRASP metaheuristic (Feo and Resende 1995) prescribes a local search phase to follow the greedy randomized construction phase.

Memetic algorithms (Moscato 1989) are the only class of hybrid metaheuristics that has been given a specific name. Metaheuristics belonging to this class combine recombination operators from the class of evolutionary algorithms with local search (meta)heuristics. Although they are not always labeled as such, many evolutionary algorithms either replace or complement their mutation operator with a local search phase and can therefore also be considered memetic.

Metaheuristics and Exact Methods

A more recent development has been a special focus on combining ideas from different metaheuristics, usually local search, with exact methods such as branch-and-bound or branch-and-cut. Sometimes called matheuristics, the resulting methods usually integrate existing exact procedures to solve subproblems and to guide the higher-level heuristic (Dumitrescu and Stützle 2009; Raidl and Puchinger 2008). In a similar way, ideas and operators from constraint programming techniques have been integrated with metaheuristics (Van Hentenryck and Michel 2009). The links between metaheuristics and exact methods provide examples of additional forms of combinations:

  1. There exist exact methods for solving various special classes of optimization problems, such as linear programming and certain graph (or matroid) problems, that can be incorporated to solve subproblems produced by a metaheuristic method. Such subproblems can be generated by a decomposition strategy, a restriction strategy or a relaxation strategy (see Glover and Klingman (1988); Rego (2005)).

  2. Exact methods for more complex problems can sometimes solve small instances of these problems effectively. A metaheuristic may operate by constructing collections of such small instances as a strategy for generating structured moves that transition from a given solution to a new one (see, e.g., Glover (2005)).

  3. An exact method can be run for a very long time to obtain optimal solutions (to at least some instances of a problem class), and these optimal solutions can be used in the learning approach called target analysis (Glover 1990; Glover and Laguna 1997) as a way to produce improved decision rules for both metaheuristics and exact methods.

  4. Metaheuristics can be integrated with exact methods to improve the performance of the exact methods (Friden et al. 1989; Glover 1990; Puchinger et al. 2009).

  5. By not demanding that the optimal solution be found, metaheuristics can, for example, employ a truncated optimization method in place of (or in conjunction with) generating subproblems that are structured to be easier to solve.

Metaheuristics for Different Optimization Problems

Continuous Optimization

Although metaheuristics are predominantly used for combinatorial optimization, many of them have been adapted for continuous optimization. Some metaheuristics are very naturally defined over continuous search spaces. Notable examples include scatter search (Glover et al. 2000), particle swarm optimization (Kennedy et al. 1995), and an evolutionary approach called differential evolution (Storn and Price 1997). Others, especially constructive and local search approaches, require considerable adaptation from their original formulation. Nonetheless, algorithms for continuous optimization based on tabu search (Chelouah and Siarry 2000; Glover 1994), GRASP (Hirsch et al. 2007), variable neighborhood search (Liberti and Drazič 2005), and others have been proposed.

Multi-objective Optimization

Many real-life problems have multiple objectives, for which the notion of optimality is generally replaced with the notion of dominance. A solution is said to dominate another solution if its quality is at least as good on every objective and better on at least one. In multi-objective optimization, the set of non-dominated solutions is called the Pareto set, and the projection of this set onto the objective function space is called the Pareto front or Pareto frontier. The aim of multi-objective metaheuristics, i.e., metaheuristics specifically designed to solve multi-objective optimization problems, is to approximate the Pareto front as closely as possible (Zitzler et al. 2004). The outcome of any multi-objective algorithm is therefore generally a set of mutually non-dominated solutions, the Pareto set approximation. To measure the quality of such an approximation, many different measures exist (Jaszkiewicz 2004). Although adaptations to the multi-objective paradigm of both tabu search and simulated annealing exist (Czyżak et al. 1998; Hansen 1997), most multi-objective metaheuristics are of the evolutionary type (Jones et al. 2002), a fact generally attributed to the observation that these algorithms naturally operate on a set of solutions. Evolutionary multi-objective metaheuristics include the vector evaluated genetic algorithm (VEGA) (Schaffer 1985), the non-dominated sorting algorithm (NDSA) (Srinivas and Deb 1994), the multi-objective genetic algorithm (MOGA) (Fonseca and Fleming 1993), and the improved strength Pareto evolutionary algorithm (SPEA2) (Zitzler and Thiele 1999).
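
The dominance relation translates directly into code. A minimal sketch for a minimization problem with an arbitrary number of objectives, where each solution is represented by its vector of objective values:

```python
def dominates(a, b):
    """a dominates b: at least as good on every objective, better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_set(points):
    """Filter a list of objective vectors down to the non-dominated ones."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

print(pareto_set([(1, 5), (2, 2), (5, 1), (3, 3)]))  # (3, 3) is dominated by (2, 2)
```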

Stochastic Optimization

Stochastic combinatorial optimization problems include uncertain, stochastic, or dynamic information in their parameters. Metaheuristics for such problems therefore need to take into account that the objective function value is a random variable and that the constraints may be violated with some probability. Evaluating a solution's objective function value and/or its feasibility can be done either exactly (if a closed-form expression is available), by approximation, or by Monte Carlo simulation. Metaheuristics using each of these possibilities have been proposed to solve different stochastic problems (Bianchi et al. 2009; Ribeiro and Resende 2010).
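
As an illustration of the Monte Carlo option, the following sketch estimates the expected cost of a tour under a purely hypothetical multiplicative noise model on the travel times (the helpers are those defined earlier; the noise model is an assumption made for illustration only):

```python
import random

def estimate_cost(perm, cities, samples=100, noise=0.1):
    """Monte Carlo estimate of an objective that is a random variable."""
    base = tour_length(perm, cities)
    # Hypothetical scenario model: the realized tour length is the nominal
    # length scaled by a Gaussian factor centered on 1.0.
    draws = [base * random.gauss(1.0, noise) for _ in range(samples)]
    return sum(draws) / samples
```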

Research in Metaheuristics

Conferences

The premier conference on metaheuristics is MIC, the Metaheuristics International Conference. Other conferences on metaheuristics include the yearly EU/ME meeting on a specific metaheuristics-related topic, organized by EU/ME in collaboration with a local research group, and the Hybrid Metaheuristics conference series, which focuses on combinations of different metaheuristics and the integration of AI/OR techniques. The Learning and Intelligent Optimization conferences aim at exploring the boundaries between machine learning, artificial intelligence, mathematical programming, and algorithms for optimization.

A large number of conferences focus exclusively on evolutionary algorithms, including Parallel Problem Solving From Nature (PPSN), the Genetic and Evolutionary Computation Conference (GECCO), EvoStar (a multi-conference comprising EuroGP, EvoCOP, EvoBIO, and EvoApplications), Evolutionary Multi-Criterion Optimization (EMO), and the IEEE Congress on Evolutionary Computation (CEC).

The Ants conference series is dedicated to research in swarm intelligence methods.

Journals

The field of metaheuristics has several dedicated journals: the well-established Journal of Heuristics and the newer International Journal of Metaheuristics and International Journal of Applied Metaheuristic Computing (IJAMC). However, a large majority of articles on metaheuristics are published in general OR/MS journals.

Several journals are devoted exclusively to evolutionary algorithms: Evolutionary Computation, IEEE Transactions on Evolutionary Computation, Genetic Programming and Evolvable Machines, and the Journal of Artificial Evolution and Applications.

The journal Swarm Intelligence is currently the main journal for advances in the swarm intelligence area.

Metaheuristics Software

Several vendors of commercial optimization software have included (albeit to a limited extent) metaheuristics in their packages. Frontline Systems’ Risk Solver Platform and its derivatives, an extension of the Microsoft Excel Solver, include a hybrid evolutionary solver. Tomlab/GENO is a package for static or dynamic, single- or multi-objective optimization based on a real-coded genetic algorithm. Both LINDO/LINGO and CPLEX include the relaxation induced neighborhood search (RINS) metaheuristic.

Open source metaheuristics software frameworks have recently appeared in the COIN-OR library. These include METSlib, an object oriented metaheuristics optimization framework, and Open Tabu Search (OTS), a framework for constructing tabu search algorithms.

Besides these solvers for combinatorial optimization, most commercial (stochastic) simulation packages today include an optimization tool (Fu 2002). Autostat, included in AutoMod, and Simrunner, included in ProModel, both use evolutionary algorithms. A variety of companies in the simulation industry, as well as general management service and consulting firms like Rockwell Software, Dassault Systemes, Flextronics, Halliburton, HP, Planview and CACI, employ OptQuest, which uses tabu search and scatter search.

See

Artificial Intelligence

COIN-OR Computational Infrastructure for Operations Research

Heuristics

Integer and Combinatorial Optimization

Multi-attribute Utility Theory

Neural Networks

Simulated Annealing

Simulation Optimization

Tabu Search