Abstract
Rather than directly optimizing expensive objective functions such as complex engineering simulations, Bayesian optimization methodologies fit a surrogate model (typically Kriging or a Gaussian process) to evaluations of the objective function(s). To determine the next evaluation, an acquisition function (also referred to as an infill criterion or sampling policy) is optimized; it incorporates the model prediction and uncertainty, and balances exploration and exploitation. Bayesian optimization therefore replaces a single optimization of the objective function by a sequence of optimization problems: this makes sense because the acquisition function is cheap to evaluate whereas the objective is not. Depending on the goal, different acquisition functions are available. Multi-objective acquisition functions are relatively new; this chapter gives a state-of-the-art overview and illustrates some approaches based on hypervolume improvement. It is shown that the quality of the model is crucial for the performance of Bayesian optimization, and this is illustrated by using the more flexible Student-t processes as surrogate models.
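The sequential loop the abstract describes can be sketched in a few dozen lines. The following is a minimal, self-contained single-objective illustration, not the chapter's method: the sine-plus-quadratic objective, the fixed kernel hyperparameters, and the grid-based optimization of the expected-improvement acquisition are all illustrative assumptions.

```python
import numpy as np
from math import erf, sqrt

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between 1-D inputs stored as column vectors."""
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and standard deviation at candidate points Xs."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)  # prior variance is 1.0
    return mu, np.sqrt(var)

norm_cdf = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: trades off low predicted mean against high uncertainty."""
    z = (y_best - mu) / sigma
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (y_best - mu) * norm_cdf(z) + sigma * pdf

def objective(x):
    # cheap stand-in for an expensive simulation
    return np.sin(3 * x) + x ** 2 - 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1, 2, size=(4, 1))        # small initial design
y = objective(X[:, 0])
grid = np.linspace(-1, 2, 400)[:, None]    # cheap candidate set for the acquisition

for _ in range(10):                        # sequential infill loop
    mu, sigma = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, sigma, y.min())
    x_next = grid[np.argmax(ei)]           # "optimize" the acquisition on the grid
    X = np.vstack([X, x_next[None, :]])
    y = np.append(y, objective(x_next[0]))

print(f"best observed value: {y.min():.3f}")
```

Note how only 14 expensive evaluations are spent in total, while the acquisition is evaluated 400 times per iteration without concern, which is exactly the trade-off the abstract argues for.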
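The hypervolume-based improvement criteria mentioned above all build on the hypervolume indicator: the volume of objective space dominated by a Pareto front, measured against a reference point. For two objectives it can be computed exactly by sorting; the following sketch (minimization, with an illustrative front and reference point of my choosing) shows the idea, not the chapter's exact algorithm.

```python
def hypervolume_2d(front, ref):
    """Exact hypervolume of a 2-D Pareto front (minimization) w.r.t. reference point ref.

    Sorting by the first objective makes the second objective strictly decreasing
    along a valid front, so the dominated region decomposes into disjoint rectangles.
    """
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)  # rectangular slice added by this point
        prev_f2 = f2
    return hv

# three mutually non-dominated points
front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
print(hypervolume_2d(front, ref=(4.0, 4.0)))  # → 6.0
```

A hypervolume-improvement acquisition then scores a candidate by how much this quantity would grow if the candidate's (predicted) objective values were added to the front.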
Acknowledgements
Ivo Couckuyt is a post-doctoral research fellow of FWO-Vlaanderen.
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
van der Herten, J., Knudde, N., Couckuyt, I., Dhaene, T. (2020). Multi-objective Bayesian Optimization for Engineering Simulation. In: Bartz-Beielstein, T., Filipič, B., Korošec, P., Talbi, EG. (eds) High-Performance Simulation-Based Optimization. Studies in Computational Intelligence, vol 833. Springer, Cham. https://doi.org/10.1007/978-3-030-18764-4_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-18763-7
Online ISBN: 978-3-030-18764-4
eBook Packages: Intelligent Technologies and Robotics (R0)