Abstract
With the extensive use of machine learning in healthcare, explainable AI has become a vital part of the machine learning pipeline, helping healthcare practitioners understand critical decision-making processes. Machine learning has greatly impacted the medical field by identifying factors responsible for health risks, so the interpretation of machine learning models is extremely important in this sector. Healthcare practitioners must understand the features that drive a model's decisions for diseases such as diabetes, and must be able to verify why a particular decision was made. Explainable artificial intelligence supports this by turning black-box models into glass-box models whose decision-making processes can be inspected. In the past few years, researchers have successfully detected diabetes using machine learning models. This paper presents various interpretable machine learning methods for understanding the factors affecting decision-making in diabetes prediction, explained using model-agnostic methods.
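As a concrete illustration of the kind of model-agnostic interpretation the paper discusses, the sketch below applies permutation feature importance (one of the techniques referenced) to a black-box classifier. It is a minimal example assuming scikit-learn is available; the data is a synthetic stand-in with the same shape as the Pima Indians Diabetes dataset (8 features, binary outcome), not the authors' actual pipeline.

```python
# Minimal sketch: permutation importance as a model-agnostic explanation.
# Assumes scikit-learn; the synthetic data mimics the shape of the
# Pima Indians Diabetes dataset (8 features, binary outcome).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 8 predictors, binary diabetes-like label.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# A black-box model to be explained.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: the drop in test accuracy when a single
# feature's values are shuffled, averaged over repeats.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```

Because permutation importance only needs predictions and a scoring function, the same procedure works unchanged for any fitted classifier, which is exactly the appeal of model-agnostic methods in clinical settings.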
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Gandhi, N., Mishra, S. (2022). Explainable AI for Healthcare: A Study for Interpreting Diabetes Prediction. In: Misra, R., Shyamasundar, R.K., Chaturvedi, A., Omer, R. (eds) Machine Learning and Big Data Analytics (Proceedings of International Conference on Machine Learning and Big Data Analytics (ICMLBDA) 2021). ICMLBDA 2021. Lecture Notes in Networks and Systems, vol 256. Springer, Cham. https://doi.org/10.1007/978-3-030-82469-3_9
DOI: https://doi.org/10.1007/978-3-030-82469-3_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-82468-6
Online ISBN: 978-3-030-82469-3
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)