Abstract
The main focus of an intelligent environment, as with other applications of Artificial Intelligence, is generally the provision of good decisions for the management of the environment or the support of human decision-making processes. The quality of the system is often measured in terms of accuracy or other performance metrics calculated on labeled data. Other equally important aspects are usually disregarded, such as the ability to produce an intelligible explanation for the user of the environment. That is, aside from proposing an action, prediction, or decision, the system should also propose an explanation that allows the user to understand the rationale behind the output. This is becoming increasingly important at a time when algorithms play a growing role in our lives and start to make decisions that significantly impact them, so much so that the EU recently regulated on the issue of a "right to explanation". In this paper we propose a human-centric intelligent environment that takes into consideration the domain of the problem and the mental model of the human expert to provide intelligible explanations that can improve the efficiency and quality of decision-making processes.
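The paper's own approach is not reproduced here; as a minimal, hypothetical sketch of what an "explanation alongside the prediction" can look like, the snippet below trains a shallow decision tree and, for a single sample, prints the chain of decision rules that led to its prediction. The dataset, model depth, and the `explain` helper are illustrative assumptions, not the authors' method.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
# A shallow tree keeps the resulting rule chain short and intelligible.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

def explain(clf, x, feature_names):
    """Return the decision rules the tree applied to sample x, in order."""
    tree = clf.tree_
    node_ids = clf.decision_path([x]).indices  # internal nodes visited by x
    rules = []
    for node in node_ids:
        if tree.feature[node] < 0:  # leaf node: no split to report
            continue
        name = feature_names[tree.feature[node]]
        thr = tree.threshold[node]
        op = "<=" if x[tree.feature[node]] <= thr else ">"
        rules.append(f"{name} {op} {thr:.2f}")
    return rules

sample = iris.data[100]
pred = iris.target_names[clf.predict([sample])[0]]
print(f"Prediction: {pred}")
for rule in explain(clf, sample, iris.feature_names):
    print(" -", rule)
```

Because each rule references a domain feature ("petal width > 1.75 cm" rather than an opaque weight), this style of explanation matches the paper's emphasis on aligning output with the mental model of the human expert.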
Acknowledgments
This work was supported by the Northern Regional Operational Program, Portugal 2020 and the European Union, through the European Regional Development Fund (ERDF), in the scope of project number 39900 - 31/SI/2017, and by FCT - Fundação para a Ciência e a Tecnologia, through projects UIDB/04728/2020 and UID/CEC/00319/2019.
Copyright information
© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Carneiro, D., Silva, F., Guimarães, M., Sousa, D., Novais, P. (2021). Explainable Intelligent Environments. In: Novais, P., Vercelli, G., Larriba-Pey, J.L., Herrera, F., Chamoso, P. (eds) Ambient Intelligence – Software and Applications. ISAmI 2020. Advances in Intelligent Systems and Computing, vol 1239. Springer, Cham. https://doi.org/10.1007/978-3-030-58356-9_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58355-2
Online ISBN: 978-3-030-58356-9
eBook Packages: Intelligent Technologies and Robotics (R0)