Abstract
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
Notes
- 1. These are not the only problems; see Floridi (2019b).
- 2. The Montreal Declaration is currently open for comments as part of a redrafting exercise. The principles we refer to here are those which were publicly announced as of May 1, 2018.
- 3. The third version of Ethically Aligned Design will be released in 2019 following wider public consultation.
- 4. A similar evaluation of AI ethics guidelines has recently been undertaken by Hagendorff (2019), which adopts different criteria of inclusion and assessment. Note that its sample includes the sets of principles we describe here.
- 5. Of the six documents, the Asilomar Principles offer the largest number of principles with arguably the broadest scope. The 23 principles are organised under three headings: “research issues”, “ethics and values”, and “longer-term issues”. We have omitted consideration of the five “research issues” here, as they relate specifically to the practicalities of AI development in the narrower context of academia and industry. Similarly, the Partnership’s eight Tenets consist of both intra-organisational objectives and wider principles for the development and use of AI. We include only the wider principles (the first, sixth, and seventh tenets).
References
Beauchamp, T.L., and J.F. Childress. 2012. Principles of biomedical ethics. Oxford: Oxford University Press.
Boland, H. 2018. Tencent executive urges Europe to focus on ethical uses of artificial intelligence. The Telegraph, October 14. https://www.telegraph.co.uk/technology/2018/10/14/tencent-executive-urges-europe-focus-ethical-uses-artificial/
China State Council. 2017. State Council notice on the issuance of the next generation Artificial Intelligence development plan, July 8. Retrieved September 18, 2018, from http://www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm. Translation by Creemers, R., G. Webster, P. Triolo, and E. Kania. https://www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf
Corea, F. 2019. AI knowledge map: How to classify AI technologies, a sketch of a new AI technology landscape. First appeared in Medium—Artificial Intelligence. https://medium.com/@Francesco_AI/ai-knowledge-map-how-to-classify-ai-technologies-6c073b969020. Reproduced in Corea, F. 2019. An introduction to data, 26. Springer.
Cowls, J., L. Floridi, and M. Taddeo. 2018. The challenges and opportunities of ethical AI. Artificially Intelligent. https://digitransglasgow.github.io/ArtificiallyIntelligent/contributions/04_Alan_Turing_Institute.html
Cowls, J., T. C. King, M. Taddeo, and L. Floridi. 2019. Designing AI for social good: Seven essential factors. http://ssrn.com/abstract=3388669
Cowls, J., M.-T. Png, and Y. Au. n.d. Foundations for geographic representation in algorithmic ethics. Unpublished.
Delcker, J. 2018. Europe’s silver bullet in global AI battle: Ethics. Politico, March 3. https://www.politico.eu/article/europe-silver-bullet-global-ai-battle-ethics/
Ding, J. 2018. Deciphering China’s AI dream, March. https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
European Group on Ethics in Science and New Technologies. 2018. Statement on Artificial Intelligence, robotics and ‘autonomous’ systems, March. https://ec.europa.eu/info/news/ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en
Floridi, L. 2013. The ethics of information. Oxford: Oxford University Press.
———. 2019a. What the near future of Artificial Intelligence could be. Philosophy & Technology 32 (1): 1–15. https://doi.org/10.1007/s13347-019-00345-y.
———. 2019b. Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology 32 (2): 185–193. https://doi.org/10.1007/s13347-019-00354-x.
Floridi, L., M. Taddeo, and M. Turilli. 2009. Turing’s imitation game: Still an impossible challenge for all machines and some judges––An evaluation of the 2008 Loebner contest. Minds and Machines 19 (1): 145–150. https://doi.org/10.1007/s11023-008-9130-6.
Floridi, L., J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, C. Luetge, R. Madelin, U. Pagallo, F. Rossi, B. Schafer, P. Valcke, and E. Vayena. 2018. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28 (4): 689–707. https://doi.org/10.1007/s11023-018-9482-5.
Hagendorff, T. 2019. The ethics of AI ethics—An evaluation of guidelines. https://arxiv.org/abs/1903.03425
HLEGAI [High-Level Expert Group on Artificial Intelligence], European Commission. 2018. Draft ethics guidelines for trustworthy AI, December 18. https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai
———. 2019. Ethics guidelines for trustworthy AI, April 8. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
House of Lords Artificial Intelligence Committee. 2018. AI in the UK: Ready, willing and able, April 16. https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm
Jezard, A. 2018. China is now home to the world’s most valuable AI start-up. World Economic Forum, April 11. https://www.weforum.org/agenda/2018/04/chart-of-the-day-china-now-has-the-worlds-most-valuable-ai-startup/
King, T., N. Aggarwal, M. Taddeo, and L. Floridi 2018. Artificial Intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions, May 22. https://ssrn.com/abstract=3183238
Lee, K., and P. Triolo. 2017. China’s Artificial Intelligence revolution: Understanding Beijing’s structural advantages. Eurasia Group, December. https://www.eurasiagroup.net/live-post/ai-in-china-cutting-through-the-hype
McCarthy, J., M.L. Minsky, N. Rochester, and C.E. Shannon. 2006. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine 27 (4): 12. https://doi.org/10.1609/aimag.v27i4.1904.
Montreal Declaration for a Responsible Development of Artificial Intelligence. 2017. Announced at the conclusion of the Forum on the Socially Responsible Development of AI, November 3. https://www.montrealdeclaration-responsibleai.com/the-declaration
Morley, J., L. Floridi, L. Kinsey, and A. Elhalal. 2019. From what to how: An overview of AI ethics tools, methods and research to translate principles into practices. ArXiv:1905.06876 [Cs]. Retrieved from http://arxiv.org/abs/1905.06876
OECD. 2019. Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
Partnership on AI. 2018. Tenets. https://www.partnershiponai.org/tenets/
Samuel, A.L. 1960. Some moral and technical consequences of automation—A refutation. Science 132 (3429): 741–742. https://doi.org/10.1126/science.132.3429.741.
Taddeo, M., and L. Floridi. 2018. Regulate artificial intelligence to avert cyber arms race. Nature 556 (7701): 296–298.
The IEEE Initiative on Ethics of Autonomous and Intelligent Systems. 2017. Ethically aligned design, v2. https://ethicsinaction.ieee.org
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59 (236): 433–460. https://doi.org/10.1093/mind/lix.236.433.
Vodafone Institute for Society and Communications. 2018. New technologies: India and China see enormous potential—Europeans more sceptical. https://www.vodafone-institut.de/digitising-europe/digitisation-india-and-china-see-enormous-potential/
Webster, G., R. Creemers, P. Triolo, and E. Kania. 2017. China’s plan to ‘lead’ in AI: Purpose, prospects, and problems. New America, August, 1. https://www.newamerica.org/cybersecurity-initiative/blog/chinas-plan-lead-ai-purpose-prospects-and-problems/
Wiener, N. 1960. Some moral and technical consequences of automation. Science 131 (3410): 1355–1358. https://doi.org/10.1126/science.131.3410.1355.
Yang, G.Z., J. Bellingham, P.E. Dupont, P. Fischer, L. Floridi, R. Full, N. Jacobstein, V. Kumar, M. McNutt, R. Merrifield, B.J. Nelson, B. Scassellati, M. Taddeo, R. Taylor, M. Veloso, Z.L. Wang, and R. Wood. 2018. The grand challenges of Science Robotics. Science Robotics 3 (14): eaar7650. https://doi.org/10.1126/scirobotics.aar7650.
Disclosure
Floridi chaired the AI4People project and Cowls was the rapporteur. Floridi is also a member of the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEGAI).
Funding
Floridi’s work was supported by (i) Privacy and Trust Stream—Social lead of the PETRAS Internet of Things research hub—PETRAS is funded by the UK Engineering and Physical Sciences Research Council (EPSRC), grant agreement no. EP/N023013/1; (ii) Facebook; and (iii) Google. Cowls is the recipient of a Doctoral Studentship from the Alan Turing Institute.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Floridi, L., Cowls, J. (2021). A Unified Framework of Five Principles for AI in Society. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)