Abstract
This article reports the findings of AI4People, a year-long initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations – to assess, to develop, to incentivise, and to support good AI – which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
Notes
1. Besides Luciano Floridi, the members of the Scientific Committee are: Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena. Josh Cowls is the rapporteur. Thomas Burri contributed to an earlier draft.
2. The analysis in this and the following two sections is also available in Cowls and Floridi (2018). Further analysis and more information on the methodology employed will be presented in Cowls and Floridi (Forthcoming).
3. The Montreal Declaration is currently open for comments as part of a redrafting exercise. The principles we refer to here are those that were publicly announced as of 1 May 2018.
4. The third version of Ethically Aligned Design will be released in 2019 following wider public consultation.
5. Of the six documents, the Asilomar Principles offer the largest number of principles with arguably the broadest scope. The 23 principles are organised under three headings: “research issues”, “ethics and values”, and “longer-term issues”. We have omitted consideration of the five “research issues” here, as they relate specifically to the practicalities of AI development, particularly in the narrower context of academia and industry. Similarly, the Partnership’s eight Tenets consist of both intra-organisational objectives and wider principles for the development and use of AI. We include only the wider principles (the first, sixth, and seventh tenets).
6. Determining accountability and responsibility may usefully borrow from the lawyers of Ancient Rome, who followed the formula ‘cuius commoda eius et incommoda’ (‘the person who derives an advantage from a situation must also bear the inconvenience’). A principle some 2,200 years old, with a well-established tradition and elaboration, could properly set the starting level of abstraction in this field.
7. Of course, to the extent that AI systems are ‘products’, general tort law still applies to AI in the same way that it applies in any instance involving defective products or services that injure users or do not perform as claimed or expected.
References
Asilomar AI Principles. 2017. Principles developed in conjunction with the 2017 Asilomar conference [Benevolent AI 2017]. Retrieved September 18, 2018, from https://futureoflife.org/ai-principles.
Cowls, J., and L. Floridi. 2018. Prolegomena to a White Paper on Recommendations for the Ethics of AI (June 19, 2018). Available at SSRN: https://ssrn.com/abstract=3198732.
———. Forthcoming. The utility of a principled approach to AI ethics.
European Group on Ethics in Science and New Technologies. 2018, March. Statement on artificial intelligence, robotics and ‘autonomous’ systems. Retrieved September 18, 2018, from https://ec.europa.eu/info/news/ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en.
Floridi, L. 2013. The ethics of information. Oxford: Oxford University Press.
———. 2018. Soft ethics and the governance of the digital. Philosophy & Technology 2018: 1–8.
House of Lords Artificial Intelligence Committee. 2018, April 16. AI in the UK: Ready, willing and able? Retrieved September 18, 2018, from https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/10002.htm.
Imperial College London. 2017, October 11. Written submission to House of Lords Select Committee on Artificial Intelligence [AIC0214]. Retrieved September 18, 2018, from http://bit.ly/2yleuET.
King, T., N. Aggarwal, M. Taddeo, and L. Floridi. 2018, May 22. Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Available at SSRN: https://ssrn.com/abstract=3183238.
Montreal Declaration for a Responsible Development of Artificial Intelligence. 2017, November 3. Announced at the conclusion of the Forum on the Socially Responsible Development of AI. Retrieved September 18, 2018, from https://www.montrealdeclaration-responsibleai.com/the-declaration.
Partnership on AI. 2018. Tenets. Retrieved September 18, 2018, from https://www.partnershiponai.org/tenets/.
Taddeo, M. 2017. The limits of deterrence theory in cyberspace. Philosophy & Technology 2017: 1–17.
The IEEE Initiative on Ethics of Autonomous and Intelligent Systems. 2017. Ethically aligned design, v2. Retrieved September 18, 2018, from https://ethicsinaction.ieee.org.
Acknowledgements
This publication would not have been possible without the generous support of Atomium – European Institute for Science, Media and Democracy. We are particularly grateful to Michelangelo Baracchi Bonvicini, Atomium’s President, to Guido Romeo, its Editor in Chief, the staff of Atomium for their help, and to all the partners of the AI4People project and members of its Forum (http://www.eismd.eu/ai4people) for their feedback. The authors of this article are the only persons responsible for its contents and any remaining mistakes.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Floridi, L. et al. (2021). An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. In: Floridi, L. (eds) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)