
Visual Programming Support for the Explainable Artificial Intelligence

Conference paper, published in Disruptive Information Technologies for a Smart Society (ICIST 2023)

Abstract

The field of explainable artificial intelligence (XAI) has grown rapidly in recent years, allowing researchers to explore in depth the benefits and drawbacks of the many models proposed for addressing the interpretability and explainability of machine learning models and their predictions. A number of techniques can now assist researchers in understanding the logic behind decisions made by various models; this paper focuses on discussing and comparing two strong options, LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations). The proposed comparison pipeline is given in the form of an Orange Data Mining workflow. In addition, the paper proposes how a custom widget encapsulating the functionality of the LIME library can be integrated into Orange's graphical interface, making it more accessible to less experienced users.
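The core idea behind LIME, fitting a proximity-weighted linear surrogate to a black-box model around the instance being explained, can be sketched in plain Python. The `black_box` function, the one-dimensional setting, and all parameter values below are illustrative assumptions for this sketch, not the paper's widget implementation or the lime library's API:

```python
import math
import random

# Hypothetical black-box model standing in for a trained model's scoring
# function; the function and all constants here are illustrative only.
def black_box(x):
    return x * x + math.sin(3 * x)

def lime_1d(model, x0, n_samples=500, width=0.1, seed=42):
    """Sketch of LIME's core idea in one dimension: perturb the instance,
    weight samples by proximity to it, and fit a weighted linear surrogate."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0.0, width) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Proximity kernel: perturbations closer to x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * width ** 2)) for x in xs]
    # Closed-form weighted least squares for slope and intercept.
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = cov / var
    return slope, my - slope * mx

slope, intercept = lime_1d(black_box, x0=1.0)
# The surrogate's slope approximates the model's local derivative at x0,
# which is the kind of local feature importance a LIME explanation reports.
```

SHAP instead attributes the prediction across features via Shapley values; the design trade-off the paper's comparison explores is essentially this local-surrogate approximation versus SHAP's game-theoretic attribution.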






Corresponding author

Correspondence to Mina Nikolić.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Nikolić, M., Stanimirović, A., Stoimenov, L. (2024). Visual Programming Support for the Explainable Artificial Intelligence. In: Trajanovic, M., Filipovic, N., Zdravkovic, M. (eds) Disruptive Information Technologies for a Smart Society. ICIST 2023. Lecture Notes in Networks and Systems, vol 872. Springer, Cham. https://doi.org/10.1007/978-3-031-50755-7_18
