
Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 256)

Included in the following conference series: International Conference on Machine Learning and Big Data Analytics (ICMLBDA)

Abstract

With the extensive use of machine learning in healthcare, explainable AI has become a vital part of the machine learning process, helping healthcare practitioners understand critical decision-making. Machine learning has greatly impacted the medical field by identifying factors responsible for vulnerable health risks, so interpreting a machine learning model is extremely critical in the healthcare sector: practitioners must understand which factors drive a model's decisions for diseases such as diabetes. Explainable artificial intelligence helps clinical practitioners interpret black-box models and verify why a particular decision was taken, turning the black box into a glass box whose decision-making process can be inspected. In the past few years, researchers have successfully detected diabetes using machine learning models. This paper presents various interpretable machine learning methods, based on model-agnostic techniques, for understanding the factors that affect decision-making in diabetes prediction.
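To illustrate the model-agnostic approach described above, the sketch below computes permutation feature importance for a diabetes classifier trained on the Pima Indians Diabetes dataset from Kaggle. This is a minimal sketch under stated assumptions, not the authors' exact pipeline: the "diabetes.csv" file name, the random-forest baseline, and all hyperparameters are illustrative choices.

```python
# Minimal sketch (not the authors' pipeline): explain a diabetes
# classifier with permutation feature importance, a model-agnostic
# method. Assumes the Pima Indians Diabetes CSV from Kaggle is saved
# locally as "diabetes.csv" (an assumed file name).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Eight clinical features (Glucose, BMI, Age, ...) and a binary label.
df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# The choice of model is arbitrary: permutation importance never looks
# inside the fitted estimator, only at its predictions.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in score;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=30, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"{X.columns[i]:<25} {result.importances_mean[i]:.4f} "
          f"+/- {result.importances_std[i]:.4f}")
```

Because the method queries the model only through its predictions, the same code works unchanged for any classifier; the resulting feature ranking is the kind of clinically checkable output that lets practitioners audit a black-box prediction.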


Author information


Correspondence to Neel Gandhi or Shakti Mishra.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Gandhi, N., Mishra, S. (2022). Explainable AI for Healthcare: A Study for Interpreting Diabetes Prediction. In: Misra, R., Shyamasundar, R.K., Chaturvedi, A., Omer, R. (eds.) Machine Learning and Big Data Analytics: Proceedings of the International Conference on Machine Learning and Big Data Analytics (ICMLBDA 2021). Lecture Notes in Networks and Systems, vol. 256. Springer, Cham. https://doi.org/10.1007/978-3-030-82469-3_9

