
What Role for Ethics in the Law of AI?

Chapter in Artificial Intelligence, Social Harms and Human Rights

Abstract

The aim of this chapter is to explore the broader scope of the Ethics Guidelines for Trustworthy AI. In particular, the chapter focuses on the reasons that led the EU to develop an ethical approach to AI and investigates to what extent the ethical principles for trustworthy AI should be grounded in compliance with fundamental rights. It points out that the symbolic value of fundamental rights, as embedded within this non-binding tool, reflects the normative vision of the EU and mitigates possible conflicts between the institutional and private actors involved and their respective interests. It argues that neither the ethical approach nor the mere legal design of AI can effectively address the issue of algorithmic inferences and their impact on individuals and society. Finally, it contextualizes the rationales of the Ethics Guidelines within the core issues of the Proposal for an AI Regulation (Artificial Intelligence Act), investigating commonalities and differences between these two regulatory approaches.


Notes

  1. Ethics Guidelines for Trustworthy AI, https://op.europa.eu/it/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1.

  2. White Paper on Artificial Intelligence. A European Approach to Excellence and Trust, https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligencefeb2020en.pdf.

  3. Guidelines for Military and Non-military Use of Artificial Intelligence, https://www.europarl.europa.eu/news/en/press-room/20210114IPR95627/guidelines-for-military-and-nonmilitary-use-of-artificial-intelligence.

  4. See the Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52021PC0206&from=EN.

  5. On global regulatory models, see Yeung and Lodge (2019) and Bignami (2018).

  6. European Framework on Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654179/EPRSSTU(2020)654179EN.pdf; The Ethics of Artificial Intelligence: Issues and Initiatives, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRSSTU(2020)634452EN.pdf; Artificial Intelligence: From Ethics to Policy, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641507/EPRSSTU(2020)641507EN.pdf.

  7. On these aspects, see Catanzariti (2020a, 239–255; 2020b, 149–165).

  8. Cath et al. (2018, 525): “This approach to human dignity provides the much-needed grounding in a well-established, ethical, legal, political, and social concept, which can help to ensure that tolerant care and fostering respect for people (both as individuals and as groups), their cultures and their environments, play a steering role in the assessments and planning for the future of an AI-driven world. By relying on human dignity as the pivotal concept, it should become less difficult to develop a comprehensive vision of how responsibility, cooperation, and sharable values can guide the design of a ‘good AI society’”. See also McCrudden (2013, 1–58).

  9. Pariotti (2013, 31).

  10. Scheppele (2018), https://verfassungsblog.de/rule-of-law-retail-and-rule-of-law-wholesale-the-ecjs-alarming-celmer-decision/.

  11. In the 1999 edition of A Theory of Justice, Rawls revisited the concept of overlapping consensus, exploring its possible effects on ethics (Rawls, 1999, 340 ff.).

  12. Pariotti’s conceptualization is convincing in the sense that human rights rest on ethics, not the contrary, since rights are derivative concepts. See Pariotti (2013, 204). See also Eisler (1987, 287).

  13. Artificial Intelligence: From Ethics to Policy, https://www.europarl.europa.eu/RegData/etudes/STUD/2020/641507/EPRSSTU(2020)641507EN.pdf, p. 24; see also Yeung et al. (2020, 81–82).

  14. See Ethics Guidelines, paras. 36 and 46. See also Mantelero (2018, 29): “the effect of the social and ethical values on the interpretation of these human rights. These values represent the societal factors that influence the way the balance is achieved between the different human rights and freedoms, in different contexts and in different periods. Moreover, social and ethical values concur in defining the extension of rights and freedoms, making possible broader forms of protection when the regulatory framework does not provide adequate answers to emerging issues”. See also White Paper, p. 36, on the lack of a legal basis for concepts such as transparency, traceability and human intervention, https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approachexcellence-and-trusten.

  15. Orgad and Reijers (2020, 2–3), https://cadmus.eui.eu/bitstream/handle/1814/66910/RSCAS%20202028.pdf.

  16. Síthigh and Siems (2019, 17), https://cadmus.eui.eu/bitstream/handle/1814/60424/LAW201901.pdf.

  17. See § 40 of the Ethics Guidelines: “Among the comprehensive set of indivisible rights set out in international human rights law, the EU Treaties and the EU Charter, the below families of fundamental rights are particularly apt to cover AI systems. Many of these rights are, in specified circumstances, legally enforceable in the EU so that compliance with their terms is legally obligatory. But even after compliance with legally enforceable fundamental rights has been achieved, ethical reflection can help us understand how the development, deployment and use of AI systems may implicate fundamental rights and their underlying values, and can help provide more fine-grained guidance when seeking to identify what we should do rather than what we (currently) can do with technology”.

  18. Author’s translation from the Italian edition; see Böckenförde (1967, 68–69).

  19. Bradford (2019, 7, 142, 147).

  20. Cremona and Scott (2019, 11), Bradford (2015, 158), and Ebers and Cantero Gamito (2021, 8).

  21. Chiti and Marchetti (2020, 39).

  22. Ibidem, 40–41.

  23. Ibidem, 48.

  24. Waldman (2020, 107).

  25. Renda (2021, 667) and Punzi (2003, 1–428).

  26. Turing (1950, 433).

  27. Ebers (2020, 92).

  28. Hildebrandt and O’Hara (2020, 1–15). Pariotti interestingly reflects upon the difference between soft regulation and soft law (2017, 8–27).

  29. Wachter and Mittelstadt (2019, 494–620).

  30. Ibidem.

  31. Zuboff (2019, 128).

  32. Stuermer et al. (2017, 247–262).

  33. Stevens (2020, 156, 168).

  34. Floridi (2010, 4).

  35. Graber (2020, 194–213).

  36. Mittelstadt et al. (2016, 4).

  37. Ibidem, 4–10.

  38. Floridi (2013, 322).

  39. Floridi (2015, 11).

  40. Surden (2020, 721).

  41. Artificial Intelligence Act, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52021PC0206&from=EN, p. 9.

  42. Ibidem, 12.

References

  • Bignami, F. (2018). Comparative Law and Regulation: Understanding the Global Regulatory Process. Elgar.

  • Böckenförde, W. (1967). La formazione dello Stato come processo di secolarizzazione (ed. M. Nicoletti). Morcelliana.

  • Bradford, A. (2015). Exporting Standards: The Externalization of the EU’s Regulatory Power Via Markets. International Review of Law and Economics, 42, 158–173.

  • Bradford, A. (2019). The Brussels Effect: How the European Union Rules the World. Oxford University Press.

  • Catanzariti, M. (2020a). Enhancing Policing Through Algorithmic Mass-Surveillance. In L. Marin & S. Montaldo (Eds.), The Fight Against Impunity (pp. 239–255). Hart.

  • Catanzariti, M. (2020b). La razionalità algoritmica dei processi decisionali. In S. Gozzo, C. Pennisi, V. Asero, & R. Sampugnaro (Eds.), Big Data e processi decisionali (pp. 149–165). Egea.

  • Cath, C., Floridi, L., Mittelstadt, B., Taddeo, M., & Wachter, S. (2018). Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach. Science and Engineering Ethics, 24, 505–528.

  • Chiti, E., & Marchetti, B. (2020). Divergenti? Le strategie di Unione Europea e Stati Uniti in materia di intelligenza artificiale. Rivista della Regolazione dei Mercati, 1, 28–50.

  • Cremona, M., & Scott, J. (2019). Introduction. In M. Cremona & J. Scott (Eds.), EU Law Beyond EU Borders: The Extraterritorial Reach of EU Law (pp. 1–20). Oxford University Press.

  • Ebers, M. (2020). Regulating AI and Robotics: Ethical and Legal Challenges. In M. Ebers & S. Navas Navarro (Eds.), Algorithms and Law (pp. 37–99). Cambridge University Press.

  • Ebers, M., & Cantero Gamito, M. (2021). Algorithmic Governance and Governance of Algorithms: An Introduction. In M. Ebers & M. Cantero Gamito (Eds.), Algorithmic Governance and Governance of Algorithms: Legal and Ethical Challenges (pp. 1–22). Springer.

  • Eisler, R. (1987). Human Rights: Towards an Integrated Theory for Action. Human Rights Quarterly, 9, 287.

  • Floridi, L. (2010). Ethics After the Information Revolution. In The Cambridge Handbook of Information and Computer Ethics (pp. 3–19). Cambridge University Press.

  • Floridi, L. (2013). The Ethics of Information. Oxford University Press.

  • Floridi, L. (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. Springer.

  • Graber, C. B. (2020). Artificial Intelligence, Affordances and Fundamental Rights. In M. Hildebrandt & K. O’Hara (Eds.), Life and the Law in the Era of Data-Driven Agency (pp. 194–213). Elgar.

  • Hildebrandt, M., & O’Hara, K. (2020). Introduction: Life and the Law in the Era of Data-Driven Agency. In M. Hildebrandt & K. O’Hara (Eds.), Life and the Law in the Era of Data-Driven Agency (pp. 1–15). Elgar.

  • Mantelero, A. (2018). AI and Big Data: A Blueprint for a Human Rights, Social and Ethical Impact Assessment. Computer Law & Security Review, 34(4), 754–772.

  • McCrudden, C. (2013). In Pursuit of Human Dignity: An Introduction to Current Debates. In C. McCrudden (Ed.), Understanding Human Dignity (pp. 1–58). Oxford University Press.

  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(1), 1–21.

  • Orgad, L., & Reijers, W. (2020). How to Make the Perfect Citizen? Lessons from China’s Model of Social Credit System. EUI Working Paper RSCAS 28. https://cadmus.eui.eu/bitstream/handle/1814/66910/RSCAS%20202028.pdf

  • Pariotti, E. (2013). Diritti umani: concetto, teoria, evoluzione. Cedam.

  • Pariotti, E. (2017). Self-regulation, concetto di diritto, normatività giuridica. Ars Interpretandi, 2, 9–28.

  • Punzi, A. (2003). L’ordine giuridico delle macchine. La Mettrie, Helvétius, d’Holbach. L’uomo macchina verso l’intelligenza collettiva. Giappichelli.

  • Rawls, J. (1999). A Theory of Justice. Harvard University Press.

  • Renda, A. (2021). Moral Machines: The Emerging EU Policy on “Trustworthy AI”. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 667–690). Cambridge University Press.

  • Scheppele, K. L. (2018, July). Rule of Law Retail and Rule of Law Wholesale. Verfassungsblog. https://verfassungsblog.de/rule-of-law-retail-and-rule-of-law-wholesale-the-ecjs-alarming-celmer-decision/

  • Síthigh, D. M., & Siems, M. (2019). The Chinese Social Credit System: A Model for Other Countries? EUI Working Paper LAW 1. https://cadmus.eui.eu/bitstream/handle/1814/60424/LAW201901.pdf

  • Stevens, D. (2020). In Defense of ‘Toma’: Algorithmic Enhancement of a Sense of Justice. In M. Hildebrandt & K. O’Hara (Eds.), Life and the Law in the Era of Data-Driven Agency (pp. 156–174). Elgar.

  • Stuermer, M., Abu-Tayeh, G., & Myrach, T. (2017). Digital Sustainability: Basic Conditions for Sustainable Digital Artifacts and Their Ecosystems. Sustainability Science, 2, 247–262.

  • Surden, H. (2020). The Ethics of AI in Law: Basic Questions. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 720–736). Oxford University Press.

  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59, 433–460.

  • Wachter, S., & Mittelstadt, B. (2019). A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review, 2, 494–620.

  • Waldman, A. E. (2020). Algorithmic Legitimacy. In W. Barfield (Ed.), The Cambridge Handbook of the Law of Algorithms (pp. 107–120). Cambridge University Press.

  • Yeung, K., & Lodge, M. (2019). Algorithmic Regulation. Oxford University Press.

  • Yeung, K., Howes, A., & Pogrebna, G. (2020). AI Governance by Human Rights-Centered Design, Deliberation, and Oversight: An End to Ethics Washing. In M. D. Dubber, F. Pasquale, & S. Das (Eds.), The Oxford Handbook of Ethics of AI (pp. 78–106). Oxford University Press.

  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs.


Author information


Corresponding author

Correspondence to Mariavittoria Catanzariti.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Catanzariti, M. (2023). What Role for Ethics in the Law of AI?. In: Završnik, A., Simončič, K. (eds) Artificial Intelligence, Social Harms and Human Rights. Critical Criminological Perspectives. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-031-19149-7_6


  • DOI: https://doi.org/10.1007/978-3-031-19149-7_6

  • Publisher Name: Palgrave Macmillan, Cham

  • Print ISBN: 978-3-031-19148-0

  • Online ISBN: 978-3-031-19149-7

  • eBook Packages: Law and Criminology (R0)
