Abstract
This plenary talk will explore a fascinating new research field: Explainable Artificial Intelligence (or XAI), whose goal is the building of explanatory models that try to overcome the shortcomings of pure statistical learning by providing justifications, understandable by a human, for the decisions or predictions made.
Artificial Intelligence (AI) applications are increasingly present in the professional and private worlds. This is due to the success of technologies such as machine learning (and, in particular, deep learning approaches) and automatic decision-making that allow the development of increasingly robust and autonomous AI applications. Most of these applications are based on the analysis of historical data; they learn models based on the experience recorded in this data to make decisions or predictions.
However, automatic decision-making by means of Artificial Intelligence now raises new challenges: the human understanding of processes resulting from learning, the explanation of the decisions that are made (a crucial issue when ethical or legal considerations are involved) and, also, human-machine communication.
To meet these needs, the field of Explainable Artificial Intelligence has recently developed.
Indeed, according to the literature, the notion of intelligence can be considered under four aspects: (a) the ability to perceive rich, complex and subtle information; (b) the ability to learn in a particular environment or context; (c) the ability to abstract and to create new meanings; and (d) the ability to reason, for planning and decision-making.
These four skills are implemented by what is now called Explainable Artificial Intelligence (or XAI), with the goal of building explanatory models that try to overcome the shortcomings of pure statistical learning by providing justifications, understandable by a human, for the decisions or predictions made.
During this talk we will explore this fascinating new research field.
© 2020 Springer Nature Switzerland AG
Zanni-Merk, C. (2020). On the Need of an Explainable Artificial Intelligence. In: Borzemski, L., Świątek, J., Wilimowska, Z. (eds) Information Systems Architecture and Technology: Proceedings of 40th Anniversary International Conference on Information Systems Architecture and Technology – ISAT 2019. ISAT 2019. Advances in Intelligent Systems and Computing, vol 1050. Springer, Cham. https://doi.org/10.1007/978-3-030-30440-9_1
Print ISBN: 978-3-030-30439-3
Online ISBN: 978-3-030-30440-9
eBook Packages: Intelligent Technologies and Robotics (R0)