Abstract
In the experimental analysis of metaheuristic methods, two issues are still not sufficiently treated. First, the performance of an algorithm depends on its parametrization, and also on the parametrization of the problem instances it is applied to. These dependencies, however, can be seen as a means for understanding an algorithm's behavior. Second, the nondeterminism of evolutionary and other metaheuristic methods yields result distributions, not single numbers.
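To illustrate the second point: since every run of a stochastic optimizer can return a different result, comparisons should be drawn between result distributions over repeated runs, not between single values. The following minimal sketch shows one common way to do this; the two optimizers are hypothetical stand-ins and the sample sizes are illustrative, not taken from the chapter.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical stand-ins for two stochastic optimizers; each call
# simulates the final objective value of one independent run.
def optimizer_a():
    return rng.normal(loc=1.0, scale=0.3)

def optimizer_b():
    return rng.normal(loc=1.2, scale=0.3)

runs_a = np.array([optimizer_a() for _ in range(30)])
runs_b = np.array([optimizer_b() for _ in range(30)])

# A rank-based test compares the two result distributions without
# assuming normality, which metaheuristic results rarely satisfy.
stat, p = mannwhitneyu(runs_a, runs_b)
print(f"median A = {np.median(runs_a):.3f}, median B = {np.median(runs_b):.3f}, p = {p:.4f}")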
Drawing on our experience from several tutorials on the subject, we provide a comprehensive, effective, and very efficient methodology for the design and experimental analysis of metaheuristics such as evolutionary algorithms. We rely on modern statistical techniques for tuning and understanding algorithms from an experimental perspective. To this end, we employ the sequential parameter optimization (SPO) method, which has been successfully applied as a tuning procedure to numerous heuristics for practical and theoretical optimization problems.
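As a rough illustration of the sequential idea, the sketch below implements an SPO-style tuning loop: an initial design of parameter settings is evaluated, a surrogate model is fitted to the observed performance, and the model proposes the next setting to try. The target function run_algorithm, the Gaussian process surrogate from scikit-learn, and all budgets are assumptions made for illustration; the SPO procedure described in the chapter additionally handles repeated runs, noise, and design augmentation in more detail.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def run_algorithm(param):
    # Hypothetical stand-in for one stochastic run of the algorithm
    # being tuned; returns a noisy performance value (lower is better).
    return (param - 0.3) ** 2 + rng.normal(scale=0.05)

# Initial design: a small random sample of the parameter range.
X = rng.uniform(0.0, 1.0, size=(8, 1))
y = np.array([run_algorithm(x[0]) for x in X])

for _ in range(20):  # sequential improvement phase
    # alpha > 0 lets the surrogate account for observation noise.
    model = GaussianProcessRegressor(alpha=1e-3).fit(X, y)
    # Evaluate the candidate with the best predicted performance;
    # SPO variants also use criteria such as expected improvement.
    cand = rng.uniform(0.0, 1.0, size=(200, 1))
    best = cand[np.argmin(model.predict(cand))]
    X = np.vstack([X, best])
    y = np.append(y, run_algorithm(best[0]))

print("best parameter setting found:", X[np.argmin(y)][0])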
Acknowledgements
This work has been supported by the Bundesministerium für Bildung und Forschung (BMBF) under the grant FIWA (AIF FKZ 17N2309, "Ingenieurnachwuchs") and by the Cologne University of Applied Sciences under the grant COSA.
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Bartz-Beielstein, T., Preuss, M. (2010). The Future of Experimental Research. In: Bartz-Beielstein, T., Chiarandini, M., Paquete, L., Preuss, M. (eds) Experimental Methods for the Analysis of Optimization Algorithms. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02538-9_2
DOI: https://doi.org/10.1007/978-3-642-02538-9_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02537-2
Online ISBN: 978-3-642-02538-9
eBook Packages: Computer Science, Computer Science (R0)