Abstract
When the function to be optimized is characterized by a limited and unknown number of interactions among variables, a context that applies to many real-world scenarios, it is possible to design optimization algorithms that exploit such information. Estimation of Distribution Algorithms learn a set of interactions from a sample of points and encode them in a probabilistic model, which is then used to sample new instances. In this paper, we propose a novel approach to estimate the Markov Fitness Model used in DEUM. We combine model selection and model fitting by solving an ℓ1-constrained linear regression problem. Since the number of candidate interactions grows exponentially in the size of the problem, we first reduce this set with a preliminary coarse selection criterion based on Mutual Information. Then, we employ ℓ1-regularization to further enforce sparsity in the model, estimating its parameters at the same time. Our proposal is analyzed against the 3D Ising Spin Glass function, a problem known to be NP-hard, and it outperforms other popular black-box meta-heuristics.
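The two-stage procedure in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact MFM/DEUM implementation: it uses a toy pseudo-Boolean fitness with a single pairwise interaction (a hypothetical example, not the Ising benchmark), screens candidate variable pairs by empirical mutual information on the fitter half of the sample, and then fits the surviving terms with a plain coordinate-descent lasso. All names and parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two binary samples."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((x == a) & (y == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(x == a) * np.mean(y == b)))
    return mi

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for 0.5/n * ||y - Xw||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            r_j = y - X @ w + X[:, j] * w[j]      # residual excluding feature j
            rho = X[:, j] @ r_j / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Toy pseudo-Boolean fitness with one pairwise interaction: f(x) = 2*x0*x1 - x2.
n, d = 2000, 4
X = rng.integers(0, 2, size=(n, d))
f = 2.0 * X[:, 0] * X[:, 1] - X[:, 2]

# Stage 1: coarse screening -- rank candidate pairs by mutual information
# computed on the fitter half of the sample; keep only the top k = 3.
selected = X[np.argsort(f)[n // 2:]]
pairs = [(i, j) for i in range(d) for j in range(i + 1, d)]
mis = [mutual_information(selected[:, i], selected[:, j]) for i, j in pairs]
top_pairs = [pairs[k] for k in np.argsort(mis)[-3:]]

# Stage 2: l1-regularized regression over singletons + surviving pair products;
# the lasso both selects the model and estimates its parameters.
features = [X[:, i] for i in range(d)] + [X[:, i] * X[:, j] for i, j in top_pairs]
D = np.column_stack(features)
w = lasso_cd(D, f, lam=0.01)
```

With this setup the true interaction (x0, x1) survives the screening, its coefficient is driven close to 2, the singleton term for x2 close to -1, and the irrelevant terms are shrunk toward zero, mirroring the model-selection-plus-fitting behavior described above.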
References
Aarts, E., Korst, J.: Simulated annealing and Boltzmann machines: a stochastic approach to combinatorial optimization and neural computing. John Wiley & Sons, Inc., New York (1989)
Barahona, F.: On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General 15(10), 3241–3253 (1982)
Brown, D.F., Garmendia-Doval, A.B., McCall, J.A.W.: Markov Random Field Modelling of Royal Road Genetic Algorithms. In: Collet, P., Fonlupt, C., Hao, J.-K., Lutton, E., Schoenauer, M. (eds.) EA 2001. LNCS, vol. 2310, pp. 65–76. Springer, Heidelberg (2002)
Brownlee, A.E.I., McCall, J.A.W., Shakya, S.K., Zhang, Q.: Structure Learning and Optimisation in a Markov Network Based Estimation of Distribution Algorithm. In: Chen, Y.-P. (ed.) Exploitation of Linkage Learning. ALO, vol. 3, pp. 45–69. Springer, Heidelberg (2010)
Bunea, F., Tsybakov, A., Wegkamp, M.: Sparsity oracle inequalities for the lasso. Electronic Journal of Statistics 1, 169–194 (2007)
Efron, B., Hastie, T., Johnstone, I., Tibshirani, R.: Least angle regression. The Annals of Statistics 32(2), 407–499 (2004)
Hammersley, J., Clifford, P.: Markov fields on finite graphs and lattices (1971) (unpublished)
Hansen, N., Müller, S., Koumoutsakos, P.: Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolutionary Computation 11(1), 1–18 (2003)
Höfling, H., Tibshirani, R.: Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. The Journal of Machine Learning Research 10, 883–906 (2009)
Larrañaga, P., Lozano, J.A. (eds.): Estimation of Distribution Algorithms. A New Tool for Evolutionary Computation. Number 2 in Genetic Algorithms and Evolutionary Computation. Springer (2001)
Malagò, L., Matteucci, M., Pistone, G.: Optimization of pseudo-Boolean functions by stochastic natural gradient descent. In: 9th Metaheuristics International Conference, MIC 2011 (2011)
Malagò, L., Matteucci, M., Pistone, G.: Towards the geometry of estimation of distribution algorithms based on the exponential family. In: Proceedings of the 11th Workshop on Foundations of Genetic Algorithms, FOGA 2011, pp. 230–242. ACM, New York (2011)
Malagò, L., Matteucci, M., Valentini, G.: Introducing ℓ1-regularized logistic regression in Markov Networks based EDAs. In: Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2011. IEEE Press (2011)
Pelikan, M., Goldberg, D., Ocenasek, J., Trebst, S.: Robust and scalable black-box optimization, hierarchy, and Ising spin glasses. Technical report, Illinois Genetic Algorithms Laboratory, IlliGAL (2003)
Pelikan, M., Goldberg, D.E.: A hierarchy machine: Learning to optimize from nature and humans. Complexity 8(5), 36–45 (2003)
Ravikumar, P., Wainwright, M.J., Lafferty, J.D.: High-dimensional Ising model selection using ℓ1-regularized logistic regression. The Annals of Statistics 38(3), 1287–1319 (2010)
Shakya, S., Brownlee, A., McCall, J., Fournier, F., Owusu, G.: A fully multivariate DEUM algorithm. In: IEEE Congress on Evolutionary Computation (2009)
Shakya, S., McCall, J.: Optimization by Estimation of Distribution with DEUM framework based on Markov random fields. International Journal of Automation and Computing 4(3), 262–272 (2007)
Shakya, S., McCall, J., Brown, D.: Solving the Ising spin glass problem using a bivariate EDA based on Markov random fields. In: IEEE Congress on Evolutionary Computation, pp. 908–915 (2006)
Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267–288 (1996)
Valentini, G., Malagò, L., Matteucci, M.: Evoptool: an extensible toolkit for evolutionary optimization algorithms comparison. In: Proceedings of IEEE World Congress on Computational Intelligence, pp. 2475–2482 (July 2010)
Winkler, G.: Image Analysis, Random Fields and Dynamic Monte Carlo Methods: A Mathematical Introduction, 2nd edn. Springer (2003)
Wolsey, L.A.: Integer Programming. Wiley Interscience (1998)
Yang, J., Xu, H., Cai, Y., Jia, P.: Effective structure learning for EDA via ℓ1-regularized Bayesian networks. In: Proceedings of the 12th Annual Conference on Genetic and Evolutionary Computation, GECCO 2010, pp. 327–334. ACM (2010)
Zlochin, M., Birattari, M., Meuleau, N., Dorigo, M.: Model-based search for combinatorial optimization: A critical survey. Annals of Operations Research 131(1-4), 375–395 (2004)
© 2012 Springer-Verlag Berlin Heidelberg
Valentini, G., Malagò, L., Matteucci, M. (2012). Optimization by ℓ1-Constrained Markov Fitness Modelling. In: Hamadi, Y., Schoenauer, M. (eds) Learning and Intelligent Optimization. LION 2012. Lecture Notes in Computer Science, vol 7219. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34413-8_18
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-34412-1
Online ISBN: 978-3-642-34413-8