Abstract
In recent years, research on Explainable Artificial Intelligence (XAI) has grown in response to the demand for greater transparency and trust in artificial intelligence (AI). This is particularly important because AI is deployed in sensitive domains with social, ethical, and security implications. Work in XAI focuses largely on explaining the classifications, decisions, or actions of machine learning (ML) models, and thorough systematic surveys of the field already exist. In this paper, we present a comprehensive study of the different types of XAI. We evaluate feature importance by analyzing three popular algorithms on a medical dataset. Finally, we discuss future challenges for XAI.
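The abstract does not specify which feature-importance algorithms were used; as a minimal, hedged sketch of the general idea, the following computes model-agnostic permutation feature importance with scikit-learn. A synthetic binary-classification dataset stands in for the medical data (the paper's Pima Indians diabetes dataset is not bundled with scikit-learn), and the random-forest model is an illustrative choice, not the authors' method.

```python
# Sketch of permutation feature importance, one common model-agnostic
# XAI technique: shuffle each feature on held-out data and measure the
# drop in accuracy; a large drop means the model relied on that feature.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular medical dataset (assumption).
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

result = permutation_importance(model, X_te, y_te,
                                n_repeats=10, random_state=0)

# Rank features by mean importance, most influential first.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

The same call works unchanged with any fitted estimator (e.g., logistic regression or gradient boosting), which is what makes it a convenient baseline for comparing feature rankings across models.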
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Bhuyan, B.P., Srivastava, S. (2023). Feature Importance in Explainable AI for Expounding Black Box Models. In: Saraswat, M., Chowdhury, C., Kumar Mandal, C., Gandomi, A.H. (eds) Proceedings of International Conference on Data Science and Applications. Lecture Notes in Networks and Systems, vol 552. Springer, Singapore. https://doi.org/10.1007/978-981-19-6634-7_58
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-6633-0
Online ISBN: 978-981-19-6634-7
eBook Packages: Intelligent Technologies and Robotics (R0)