
Trustworthy AI Explanations as an Interface in Medical Diagnostic Systems

Conference paper in: Advances in Network-Based Information Systems (NBiS 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 526)

Abstract

We propose the “Trustworthy AI Explanations as an Interface” framework, which evaluates AI systems by comparing the explanations generated by a system with the reasoning provided by domain experts based on their experience and expertise. The framework builds on the Trustworthy Explainability Acceptance metric and helps calibrate trust in, and acceptance of, the system accordingly. It can be applied to any explainable AI system and can be used by standardization organizations to test and evaluate AI systems. We illustrate the proposed framework using an interpretable medical diagnostic system for breast cancer prediction.
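The comparison the abstract describes can be pictured as scoring the agreement between a model's explanation and an expert's reasoning. The following is a minimal sketch of that idea, not the paper's actual Trustworthy Explainability Acceptance metric: the feature names, weights, and the choice of cosine similarity over a shared feature vocabulary are all illustrative assumptions.

```python
def explanation_acceptance(model_explanation, expert_reasoning):
    """Score agreement between two feature-importance maps in [0, 1].

    Both arguments map feature names to importance weights, e.g.
    post-hoc explainer weights (LIME-style) for the model and
    expert-assigned weights for the clinician. The cosine-similarity
    choice here is an illustrative assumption, not the paper's metric.
    """
    features = sorted(set(model_explanation) | set(expert_reasoning))
    a = [model_explanation.get(f, 0.0) for f in features]
    b = [expert_reasoning.get(f, 0.0) for f in features]
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    # Clamp negative cosine (strong disagreement) to zero acceptance.
    return max(0.0, dot / (norm_a * norm_b))

# Hypothetical weights over breast-cancer-style features.
model_exp = {"radius_mean": 0.6, "texture_mean": 0.2, "concavity_mean": 0.2}
expert_exp = {"radius_mean": 0.5, "concavity_mean": 0.4, "symmetry_mean": 0.1}
score = explanation_acceptance(model_exp, expert_exp)
```

A high score would then support calibrating trust upward; a low score flags that the system's stated rationale diverges from expert reasoning even if its predictions are accurate.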




Acknowledgements

This work was partially supported by the National Science Foundation under Grant No. 1547411 and by the U.S. Department of Agriculture (USDA), National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057), via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems.

Author information

Corresponding author: Arjan Durresi.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kaur, D., Uslu, S., Durresi, A. (2022). Trustworthy AI Explanations as an Interface in Medical Diagnostic Systems. In: Barolli, L., Miwa, H., Enokido, T. (eds) Advances in Network-Based Information Systems. NBiS 2022. Lecture Notes in Networks and Systems, vol 526. Springer, Cham. https://doi.org/10.1007/978-3-031-14314-4_12
