Abstract
We propose the “Trustworthy AI Explanations as an Interface” framework, which evaluates AI systems by comparing the explanations generated by a system with the reasoning that domain experts provide based on their experience and expertise. The framework builds on the Trustworthy Explainability Acceptance metric and helps calibrate trust in, and acceptance of, the system accordingly. It can be readily applied to any explainable AI system and can be used by standardization organizations to test and evaluate AI systems. We illustrate the framework using an interpretable medical diagnostic system for breast cancer prediction.
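To make the comparison concrete, the following is a minimal Python sketch of the framework's core idea: measuring agreement between the feature attributions a model-side explainer produces and importance weights elicited from a domain expert. This is an illustration under stated assumptions, not the paper's implementation: the uniform `expert_attr` vector, the cosine-similarity agreement measure, and the `ACCEPTANCE_THRESHOLD` are all hypothetical placeholders standing in for the Trustworthy Explainability Acceptance metric, which the paper defines separately.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a classifier on the Wisconsin breast cancer data, the dataset
# used in the paper's illustration.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# System-side "explanation": global feature importances from the model.
# A per-prediction explainer such as LIME could be substituted here.
system_attr = model.feature_importances_

# Expert-side "reasoning": hypothetical importance weights a clinician
# might assign to the same features (uniform here purely as a placeholder).
expert_attr = np.ones_like(system_attr) / len(system_attr)

# Hypothetical agreement score: cosine similarity between the two
# attribution vectors, standing in for the paper's acceptance metric.
agreement = system_attr @ expert_attr / (
    np.linalg.norm(system_attr) * np.linalg.norm(expert_attr)
)

# Calibrate trust/acceptance of the system against an assumed threshold.
ACCEPTANCE_THRESHOLD = 0.8  # hypothetical value, not from the paper
print(f"agreement = {agreement:.3f}; "
      f"accepted = {agreement >= ACCEPTANCE_THRESHOLD}")
```

A per-instance explainer and expert annotations gathered case by case would fit the same skeleton; only the source of the two attribution vectors changes.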
Acknowledgements
This work was partially supported by the National Science Foundation under Grant No. 1547411 and by the U.S. Department of Agriculture (USDA), National Institute of Food and Agriculture (NIFA) (Award No. 2017-67003-26057), via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kaur, D., Uslu, S., Durresi, A. (2022). Trustworthy AI Explanations as an Interface in Medical Diagnostic Systems. In: Barolli, L., Miwa, H., Enokido, T. (eds) Advances in Network-Based Information Systems. NBiS 2022. Lecture Notes in Networks and Systems, vol 526. Springer, Cham. https://doi.org/10.1007/978-3-031-14314-4_12
DOI: https://doi.org/10.1007/978-3-031-14314-4_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-14313-7
Online ISBN: 978-3-031-14314-4
eBook Packages: Intelligent Technologies and Robotics (R0)