Abstract
The field of explainable artificial intelligence (XAI) has grown rapidly in recent years, giving researchers the opportunity to explore in depth the benefits and drawbacks of the many models proposed to address the interpretability and explainability of machine learning models and their predictions. A number of techniques currently help researchers understand the logic behind decisions made by various models; this paper focuses on discussing and comparing two strong options, LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). The proposed comparison pipeline is given in the form of an Orange Data Mining workflow. The paper further proposes how a custom widget encapsulating the functionality of the LIME library can be integrated into Orange's graphical interface, making it more accessible to less experienced users.
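LIME's core idea is to perturb an instance, query the black-box model on the perturbed samples, and fit a proximity-weighted linear surrogate whose coefficients act as local feature attributions. A minimal sketch of that idea in plain NumPy (the `black_box` model, noise scale, and kernel width here are illustrative assumptions, not the lime library's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical nonlinear "black box" standing in for any classifier's
# probability output; LIME only needs to query it, never inspect it.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(np.sin(3 * X[:, 0]) + X[:, 1] ** 2)))

def lime_sketch(x0, n_samples=5000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around instance x0."""
    # 1. Perturb the instance with Gaussian noise.
    Z = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))
    y = black_box(Z)
    # 2. Weight each sample by its proximity to x0 (exponential kernel).
    d2 = np.sum((Z - x0) ** 2, axis=1)
    w = np.exp(-d2 / kernel_width ** 2)
    # 3. Weighted least squares: intercept plus one coefficient per feature.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # local feature attributions

x0 = np.array([0.2, 1.0])
weights = lime_sketch(x0)
print(weights)  # one local importance score per feature
```

Near this instance both features push the toy model's output upward, so both surrogate coefficients come out positive; the actual lime library adds interpretable binary representations, feature selection, and tabular/text/image variants on top of this scheme.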
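SHAP, in turn, attributes a prediction using Shapley values from cooperative game theory: each feature's contribution is its marginal effect averaged over all feature subsets. A rough illustration of the underlying computation (the `value` function is a hypothetical stand-in for a model evaluated with only a subset of features "present"; the actual shap library approximates this quantity efficiently rather than enumerating subsets):

```python
import itertools
import math

# Toy value function: main effects of 2.0 and 1.0 for features 0 and 1,
# plus a 0.5 interaction term. By symmetry, exact Shapley values split
# the interaction equally between the two features.
def value(S):
    v = 0.0
    if 0 in S:
        v += 2.0
    if 1 in S:
        v += 1.0
    if 0 in S and 1 in S:
        v += 0.5
    return v

def shapley(n, value):
    """Exact Shapley values over n players via the subset formula."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                # Weight of this coalition in the Shapley average.
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

phi = shapley(2, value)
print(phi)  # [2.25, 1.25]: each main effect plus half the interaction
```

The attributions sum to `value({0, 1}) - value(set())`, the efficiency property that makes SHAP explanations additive; this exhaustive enumeration is exponential in the number of features, which is why the shap library relies on kernel- and tree-based approximations.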
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Nikolić, M., Stanimirović, A., Stoimenov, L. (2024). Visual Programming Support for the Explainable Artificial Intelligence. In: Trajanovic, M., Filipovic, N., Zdravkovic, M. (eds) Disruptive Information Technologies for a Smart Society. ICIST 2023. Lecture Notes in Networks and Systems, vol 872. Springer, Cham. https://doi.org/10.1007/978-3-031-50755-7_18
DOI: https://doi.org/10.1007/978-3-031-50755-7_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50754-0
Online ISBN: 978-3-031-50755-7
eBook Packages: Intelligent Technologies and Robotics (R0)