
Explainable Intelligent Environments

  • Conference paper

Ambient Intelligence – Software and Applications (ISAmI 2020)

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1239))


Abstract

The main focus of an intelligent environment, as with other applications of Artificial Intelligence, is generally on providing good decisions for the management of the environment or for the support of human decision-making processes. The quality of the system is often measured in terms of accuracy or other performance metrics calculated on labeled data. Other equally important aspects are usually disregarded, such as the ability to produce an intelligible explanation for the user of the environment. That is, aside from proposing an action, prediction, or decision, the system should also propose an explanation that allows the user to understand the rationale behind the output. This is becoming increasingly important at a time when algorithms gain growing importance in our lives and start to take decisions that significantly impact them, so much so that the EU recently legislated on the issue of a “right to explanation”. In this paper we propose a human-centric intelligent environment that takes into consideration the domain of the problem and the mental model of the human expert to provide intelligible explanations that can improve the efficiency and quality of decision-making processes.
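The idea of pairing every output with its rationale can be illustrated with a minimal sketch (not the authors' implementation): a rule-based controller for a hypothetical HVAC setting that returns an action together with a human-readable explanation of why that action was chosen. All rule names and thresholds below are illustrative assumptions.

```python
# Minimal sketch of an explainable decision: the system returns both an
# action and the rationale that produced it. Thresholds are hypothetical.

def decide_hvac(temp_c: float, occupied: bool) -> tuple[str, str]:
    """Return an (action, explanation) pair for the environment."""
    if not occupied:
        return ("standby",
                "Room is unoccupied, so heating/cooling is paused.")
    if temp_c < 19.0:
        return ("heat",
                f"Room is occupied and {temp_c} °C is below the "
                f"19 °C comfort threshold.")
    if temp_c > 25.0:
        return ("cool",
                f"Room is occupied and {temp_c} °C is above the "
                f"25 °C comfort threshold.")
    return ("hold",
            f"Room is occupied and {temp_c} °C is inside the "
            f"19–25 °C comfort band.")

action, explanation = decide_hvac(17.5, occupied=True)
print(action)       # heat
print(explanation)
```

The point is the interface, not the rules: whatever model sits behind `decide_hvac`, the user receives the rationale alongside the decision rather than the decision alone.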



Acknowledgments

This work was supported by the Northern Regional Operational Program, Portugal 2020 and the European Union, through the European Regional Development Fund (ERDF), in the scope of project number 39900 - 31/SI/2017, and by FCT - Fundação para a Ciência e a Tecnologia, through projects UIDB/04728/2020 and UID/CEC/00319/2019.

Author information

Corresponding author

Correspondence to Davide Carneiro.



Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Carneiro, D., Silva, F., Guimarães, M., Sousa, D., Novais, P. (2021). Explainable Intelligent Environments. In: Novais, P., Vercelli, G., Larriba-Pey, J.L., Herrera, F., Chamoso, P. (eds) Ambient Intelligence – Software and Applications. ISAmI 2020. Advances in Intelligent Systems and Computing, vol 1239. Springer, Cham. https://doi.org/10.1007/978-3-030-58356-9_4
