Multi-objective Bayesian Optimization for Engineering Simulation

Chapter in High-Performance Simulation-Based Optimization

Part of the book series: Studies in Computational Intelligence (SCI, volume 833)

Abstract

Rather than optimizing expensive objective functions such as complex engineering simulations directly, Bayesian optimization methodologies fit a surrogate model (typically Kriging or a Gaussian process) to evaluations of the objective function(s). To determine the next evaluation, an acquisition function (also referred to as an infill criterion or sampling policy) is optimized; it incorporates the model prediction and uncertainty, and balances exploration and exploitation. Bayesian optimization methodologies therefore replace a single optimization of the objective function with a sequence of optimization problems: this makes sense because the acquisition function is cheap to evaluate whereas the objective is not. Depending on the goal, different acquisition functions are available. Multi-objective acquisition functions are relatively new; this chapter gives a state-of-the-art overview and illustrates some approaches based on hypervolume improvement. It is shown that the quality of the model is crucial for the performance of Bayesian optimization, which is illustrated by using the more flexible Student-t processes as surrogate models.
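
As a concrete illustration of this loop, the following is a minimal, self-contained single-objective sketch using only NumPy and SciPy: a Gaussian process surrogate is refit to all evaluations gathered so far, and the expected improvement acquisition function is maximized (here simply over a candidate grid) to pick the next expensive evaluation. The toy objective, the fixed RBF kernel hyperparameters, and the grid-based inner optimization are assumptions made for illustration; they are not the chapter's implementation, which targets multi-objective, hypervolume-based criteria.

    # Minimal Bayesian optimization loop (illustrative sketch, NumPy/SciPy only).
    # The objective, kernel hyperparameters and candidate grid are assumptions
    # made for this example, not taken from the chapter.
    import numpy as np
    from scipy.stats import norm

    def objective(x):
        # Stand-in for an expensive simulation (hypothetical toy function).
        return np.sin(3.0 * x) + 0.5 * x

    def rbf(a, b, lengthscale=0.3, variance=1.0):
        # Squared-exponential covariance between 1-D input arrays a and b.
        d = a.reshape(-1, 1) - b.reshape(1, -1)
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    def gp_posterior(X, y, Xs, jitter=1e-6):
        # Zero-mean GP regression: posterior mean and stddev at test points Xs.
        K = rbf(X, X) + jitter * np.eye(len(X))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        Ks = rbf(X, Xs)
        mu = Ks.T @ alpha
        v = np.linalg.solve(L, Ks)
        var = np.clip(1.0 - np.sum(v ** 2, axis=0), 1e-12, None)  # prior variance is 1.0
        return mu, np.sqrt(var)

    def expected_improvement(mu, sigma, best):
        # EI for minimization: trades off low predicted mean (exploitation)
        # against high predictive uncertainty (exploration).
        z = (best - mu) / sigma
        return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    rng = np.random.default_rng(0)
    X = rng.uniform(-2.0, 2.0, size=5)    # small initial design
    y = objective(X)
    grid = np.linspace(-2.0, 2.0, 500)    # candidates for the cheap inner problem

    for _ in range(15):                   # sequence of cheap optimizations
        mu, sigma = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
        X, y = np.append(X, x_next), np.append(y, objective(x_next))

    print("best x:", X[np.argmin(y)], " best f:", y.min())

Note that the expensive objective is called once per outer iteration, while each iteration solves a cheap inner problem (maximizing the acquisition over the grid); this is precisely the trade described above.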

Notes

  1. Marginal likelihood as in: marginalized over \(\mathbf{f}\) (see the closed-form expression after these notes).

  2. http://github.com/gpflow/GPflowOpt.
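
To unpack note 1 briefly: for a zero-mean GP prior with covariance matrix \(K_\theta\) and Gaussian observation noise \(\sigma_n^2\) (the standard setting, assumed here rather than quoted from the chapter), integrating out the latent function values \(\mathbf{f}\) gives the marginal likelihood in closed form:

\[
p(\mathbf{y} \mid X, \theta) = \int p(\mathbf{y} \mid \mathbf{f})\, p(\mathbf{f} \mid X, \theta)\, \mathrm{d}\mathbf{f} = \mathcal{N}\!\left(\mathbf{y} \mid \mathbf{0},\, K_\theta + \sigma_n^2 I\right),
\]

so that \(\log p(\mathbf{y} \mid X, \theta) = -\tfrac{1}{2}\,\mathbf{y}^\top (K_\theta + \sigma_n^2 I)^{-1}\mathbf{y} - \tfrac{1}{2}\log\lvert K_\theta + \sigma_n^2 I\rvert - \tfrac{n}{2}\log 2\pi\), which is the quantity typically maximized to fit the kernel hyperparameters \(\theta\).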

Acknowledgements

Ivo Couckuyt is a post-doctoral research fellow of FWO-Vlaanderen.

Author information

Correspondence to Joachim van der Herten.

Copyright information

© 2020 Springer Nature Switzerland AG

Cite this chapter

van der Herten, J., Knudde, N., Couckuyt, I., Dhaene, T. (2020). Multi-objective Bayesian Optimization for Engineering Simulation. In: Bartz-Beielstein, T., Filipič, B., Korošec, P., Talbi, EG. (eds) High-Performance Simulation-Based Optimization. Studies in Computational Intelligence, vol 833. Springer, Cham. https://doi.org/10.1007/978-3-030-18764-4_3
