Abstract
This work examines the consistency of explainable artificial intelligence (XAI), with particular attention to the Rashomon effect: the phenomenon that different machine learning (ML) models trained on the same data produce different explanations. Using concrete examples, cases of the Rashomon effect are visually demonstrated and discussed to underline how difficult it is, in practice, to produce definite and unambiguous machine learning explanations and predictions. Artificial intelligence (AI) is currently undergoing a replication and reproducibility crisis, which hinders the proper assessment of models and techniques for robustness, fairness, and safety. Studying the Rashomon effect is therefore important for understanding the causes of unintended variability in results that originates within the models and the XAI methods themselves.
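The Rashomon effect described above can be reproduced in a few lines. The following is a minimal, illustrative NumPy sketch (not taken from the paper): two linear models are fitted to the same target using two strongly correlated features, reach nearly identical accuracy, and yet each attributes the predictive signal to a different feature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two strongly correlated features carrying almost the same signal.
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)
y = x1 + 0.1 * rng.normal(size=n)

# Model A sees only x1; model B sees only x2 (plus an intercept).
X_a = np.column_stack([x1, np.ones(n)])
X_b = np.column_stack([x2, np.ones(n)])
coef_a, *_ = np.linalg.lstsq(X_a, y, rcond=None)
coef_b, *_ = np.linalg.lstsq(X_b, y, rcond=None)

def r2(X, coef):
    """Coefficient of determination of a fitted linear model."""
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

# Both models fit the data almost equally well, yet model A "explains"
# the outcome via x1 and model B via x2 -- a minimal Rashomon set.
print(f"model A: R^2 = {r2(X_a, coef_a):.3f}, attributed feature: x1")
print(f"model B: R^2 = {r2(X_b, coef_b):.3f}, attributed feature: x2")
```

Both models lie on (or near) the Rashomon set of almost-equally-good predictors, so neither explanation can be called the definitive one; this is the ambiguity the chapter demonstrates on real models and XAI methods.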
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Leventi-Peetz, AM., Weber, K. (2023). Rashomon Effect and Consistency in Explainable Artificial Intelligence (XAI). In: Arai, K. (eds) Proceedings of the Future Technologies Conference (FTC) 2022, Volume 1. FTC 2022 2022. Lecture Notes in Networks and Systems, vol 559. Springer, Cham. https://doi.org/10.1007/978-3-031-18461-1_52
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-18460-4
Online ISBN: 978-3-031-18461-1
eBook Packages: Intelligent Technologies and Robotics (R0)