Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems

  • Conference paper
  • First Online:
Complex, Intelligent and Software Intensive Systems (CISIS 2021)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 278)

Included in the following conference series: Complex, Intelligent and Software Intensive Systems (CISIS)

Abstract

We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with experts in the loop. The metric computes acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts based on their expertise and experience. The metric also evaluates the trust placed in the experts through our trust mechanism, so that different groups of experts can be included. The metric can be easily adapted to any interpretable AI system and used in the standardization process of trustworthy AI systems. We illustrate the proposed metric on the high-stakes medical AI application of predicting Ductal Carcinoma in Situ (DCIS) recurrence. The metric successfully captures the experts' acceptance of the explanations produced by the AI system for DCIS recurrence.
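The paper's actual formulation is not reproduced on this page, but the abstract's description (acceptance as a trust-weighted distance between the AI system's explanations and the experts' reasoning) can be illustrated. The following is a minimal sketch under stated assumptions, not the authors' method: explanations are assumed to be feature-importance vectors over the same features, trust is assumed to be a single weight per expert, and the function name explainability_acceptance and all inputs are hypothetical.

```python
import numpy as np

def explainability_acceptance(ai_explanation, expert_explanations, trust_weights):
    """Illustrative sketch only (not the paper's formula): acceptance as
    1 minus the trust-weighted mean of normalized distances between the
    AI's feature-importance vector and each expert's vector."""
    ai = np.asarray(ai_explanation, dtype=float)
    w = np.asarray(trust_weights, dtype=float)
    w = w / w.sum()  # normalize expert trust weights to sum to 1

    distances = []
    for expert in expert_explanations:
        e = np.asarray(expert, dtype=float)
        # Euclidean distance scaled into [0, 1]: 0 = identical explanations
        d = np.linalg.norm(ai - e) / (np.linalg.norm(ai) + np.linalg.norm(e) + 1e-12)
        distances.append(d)

    # High trust-weighted agreement between AI and experts -> acceptance near 1
    return 1.0 - float(np.dot(w, distances))

# Hypothetical example: feature importances for a DCIS-recurrence model
ai_expl = [0.5, 0.3, 0.2]                       # AI system's explanation
experts = [[0.6, 0.25, 0.15], [0.4, 0.4, 0.2]]  # two experts' reasoning
trust = [0.7, 0.3]                              # trust placed in each expert
print(explainability_acceptance(ai_expl, experts, trust))
```

Trust-weighting the distances captures the abstract's point that agreement with a highly trusted expert should move the acceptance score more than agreement with a less trusted one.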



Acknowledgements

This work was partially supported by the National Science Foundation under Grant No. 1547411, by the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057) via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems, and by an IUPUI iM2CS seed grant.

Author information

Corresponding author

Correspondence to Arjan Durresi.


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kaur, D., Uslu, S., Durresi, A., Badve, S., Dundar, M. (2021). Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems. In: Barolli, L., Yim, K., Enokido, T. (eds) Complex, Intelligent and Software Intensive Systems. CISIS 2021. Lecture Notes in Networks and Systems, vol 278. Springer, Cham. https://doi.org/10.1007/978-3-030-79725-6_4
