Abstract
The field of algorithmic decision-making, particularly Artificial Intelligence (AI), has been changing rapidly. With the availability of massive amounts of data and increased processing power, AI systems are now deployed in a vast number of high-stakes applications, making it vital that these systems be reliable and trustworthy. Different approaches have been proposed to make such systems trustworthy. In this paper, we review these approaches and summarize them according to the principles proposed by the European Union for trustworthy AI. This review provides an overview of the principles that are essential to making AI trustworthy.
Acknowledgments
This work was partially supported by the National Science Foundation under Grant No. 1547411 and by the U.S. Department of Agriculture (USDA) National Institute of Food and Agriculture (NIFA) (Award Number 2017-67003-26057) via an interagency partnership between USDA-NIFA and the National Science Foundation (NSF) on the research program Innovations at the Nexus of Food, Energy and Water Systems.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Kaur, D., Uslu, S., Durresi, A. (2021). Requirements for Trustworthy Artificial Intelligence – A Review. In: Barolli, L., Li, K., Enokido, T., Takizawa, M. (eds) Advances in Networked-Based Information Systems. NBiS 2020. Advances in Intelligent Systems and Computing, vol 1264. Springer, Cham. https://doi.org/10.1007/978-3-030-57811-4_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-57810-7
Online ISBN: 978-3-030-57811-4
eBook Packages: Intelligent Technologies and Robotics