Abstract
This tutorial gives an overview of the basic techniques for proving convergence of metaheuristics to optimal (or sufficiently good) solutions. The presentation is kept as independent of particular metaheuristic fields as possible by introducing a generic metaheuristic algorithm. Different types of convergence of random variables are discussed, and two features of the search process to which the notion of "convergence" may refer, the best-so-far solution and the model, are distinguished. Some core proof ideas applied in the literature are outlined. We also deal with extensions of metaheuristic algorithms to stochastic combinatorial optimization, where convergence is an especially relevant issue. Finally, the important aspect of convergence speed is addressed by recapitulating some methods for analytically estimating the expected runtime until solutions of sufficient quality are found.
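The distinction between the current search point and the best-so-far solution, central to many of the convergence results surveyed here, can be illustrated with a minimal sketch. The following code is not the chapter's generic algorithm but a hypothetical annealing-style instance of it: the acceptance rule, the 1/t temperature schedule, and the OneMax-style toy objective are all illustrative assumptions. Note that the best-so-far value is monotone by construction, which is why convergence statements are often made about it rather than about the current point.

```python
import math
import random

def generic_metaheuristic(f, sample_neighbor, x0, iterations=1000, seed=0):
    """Minimize f by iterated stochastic moves, tracking the best-so-far solution.

    Illustrative sketch: the current point x may move to worse solutions
    (here via an annealing-style acceptance rule with temperature 1/t),
    while the best-so-far pair (best_x, best_f) can only improve.
    """
    rng = random.Random(seed)
    x = x0
    best_x, best_f = x0, f(x0)            # best-so-far solution and its value
    for t in range(1, iterations + 1):
        y = sample_neighbor(x, rng)       # stochastic move in the neighborhood
        delta = f(y) - f(x)
        # Accept improvements always; accept worse moves with a
        # probability that vanishes as the temperature 1/t decreases.
        if delta <= 0 or rng.random() < math.exp(-delta * t):
            x = y
        if f(x) < best_f:                 # best-so-far is monotone non-increasing
            best_x, best_f = x, f(x)
    return best_x, best_f

# Toy usage: minimize the number of ones in a bit string.
def flip_one_bit(x, rng):
    i = rng.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

x0 = (1,) * 10
best_x, best_f = generic_metaheuristic(sum, flip_one_bit, x0, iterations=2000)
```

A convergence proof for such a scheme would then concern the sequence of best-so-far values (does it reach the optimum with probability one?) or, alternatively, the distribution of the current point x, corresponding to the "model" view mentioned above.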
© 2009 Springer Science+Business Media, LLC
About this chapter
Gutjahr, W.J. (2009). Convergence Analysis of Metaheuristics. In: Maniezzo, V., Stützle, T., Voß, S. (eds) Matheuristics. Annals of Information Systems, vol 10. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-1306-7_6
Publisher Name: Springer, Boston, MA
Print ISBN: 978-1-4419-1305-0
Online ISBN: 978-1-4419-1306-7