Abstract
We propose the Trustworthy Explainability Acceptance metric to evaluate explainable AI systems with experts in the loop. The metric calculates acceptance by quantifying the distance between the explanations generated by the AI system and the reasoning provided by the experts based on their expertise and experience. It also evaluates the trust of the experts through our trust mechanism, so that different groups of experts can be included. The metric can be easily adapted to any interpretable AI system and used in the standardization process of trustworthy AI systems. We illustrate the proposed metric on a high-stakes medical AI application: predicting Ductal Carcinoma in Situ (DCIS) recurrence. The metric successfully captures experts' acceptance of the explainability of AI systems for DCIS recurrence.
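The acceptance computation described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes explanations and expert reasoning are both expressed as feature-importance vectors in [0, 1], uses a normalized Euclidean distance, and weights each expert's acceptance by a trust score; the function name and all inputs are hypothetical.

```python
import math

def explainability_acceptance(ai_weights, expert_weights_list, trust_list):
    """Trust-weighted acceptance of an AI explanation.

    ai_weights          -- feature-importance vector from the AI system
    expert_weights_list -- one importance vector per expert
    trust_list          -- trust score per expert (same order)
    Returns a value in [0, 1]; 1 means expert reasoning matches exactly.
    """
    def acceptance(ai, expert):
        # Euclidean distance, normalized by the maximum possible distance
        # for vectors whose components lie in [0, 1].
        d = math.sqrt(sum((a - e) ** 2 for a, e in zip(ai, expert)))
        return 1.0 - d / math.sqrt(len(ai))

    total_trust = sum(trust_list)
    return sum(t * acceptance(ai_weights, ew)
               for ew, t in zip(expert_weights_list, trust_list)) / total_trust

# Illustrative toy example: two experts with different trust levels.
ai = [0.7, 0.2, 0.1]
experts = [[0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
trust = [0.9, 0.5]
score = explainability_acceptance(ai, experts, trust)
```

The trust weighting means that disagreement from a highly trusted expert lowers the acceptance score more than the same disagreement from a less trusted one.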
Acknowledgements
This work was partially supported by the National Science Foundation under Grant No. 1547411, by the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057) via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems, and by an IUPUI iM2CS seed grant.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kaur, D., Uslu, S., Durresi, A., Badve, S., Dundar, M. (2021). Trustworthy Explainability Acceptance: A New Metric to Measure the Trustworthiness of Interpretable AI Medical Diagnostic Systems. In: Barolli, L., Yim, K., Enokido, T. (eds) Complex, Intelligent and Software Intensive Systems. CISIS 2021. Lecture Notes in Networks and Systems, vol 278. Springer, Cham. https://doi.org/10.1007/978-3-030-79725-6_4
Print ISBN: 978-3-030-79724-9
Online ISBN: 978-3-030-79725-6