Abstract
Not all AI risks are new. The risk of traffic accidents generated by self-driving cars is already a reality in today’s traffic. Physical injuries a patient may suffer during medical treatment occur regardless of whether the damage is caused by an autonomous agent or a human doctor. Modern societies are already familiar with such risks. This chapter explores whether liability regimes, traditionally designed to deter physical risks and to compensate injured persons when those risks materialize, have rules apt for tackling the social risks that AI represents. In the European Union, the European Parliament has adopted a text of a Regulation on AI liability. The text is a clear step forward in adjusting liability rules to the challenges of AI. It sets out a position on who should be liable and on what basis, and provides injured persons with procedural devices to enhance their position and tackle the black-box issue. It thus, for better or worse, deals with well-known fundamental issues surrounding AI liability. However, while social risks have previously been recognized by the European Commission in the White Paper and by some scholars, the adopted text fails to address them specifically. This chapter presents the nature of the AI risks that liability rules should regulate. It asks whether traditional liability concepts are apt for regulating these novel types of risk. Just as in the case of safety regulation, this chapter attempts to demonstrate that a proper understanding of AI risks is the basis for sound regulation of liability.
Notes
- 1.
Rothstein et al. (2013), p. 16.
- 2.
Zech (2021), p. 4.
- 3.
See de Jong et al. (2018), pp. 6–13.
- 4.
Rodríguez de las Heras Ballell (2019), pp. 308 et seq.
- 5.
de Jong et al. (2018), pp. 6–13.
- 6.
Kysar (2018), p. 54.
- 7.
Ibid, p. 53.
- 8.
Nilsson (2010).
- 9.
Calo (2017), pp. 399–435.
- 10.
Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe, Brussels, 25.4.2018 COM(2018) 237 final.
- 11.
Turner (2019), p. 7.
- 12.
Russell and Norvig (2009).
- 13.
Hildt (2019).
- 14.
For instance: Reggia et al. (2015).
- 15.
Open Philanthropy (2016).
- 16.
Stone et al. (2016).
- 17.
European Commission Directorate-General for Justice and Consumers (2019).
- 18.
Ibid, p. 11.
- 19.
In general, providing a legal definition of AI seems to be a troubling task: it is hard to encompass the main characteristics of this technology in an operable definition. A rather unsatisfying attempt has been made by the European Parliament in Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence.
- 20.
European Commission Directorate-General for Justice and Consumers (2019), p. 33.
- 21.
Wendehorst (2020), p. 152.
- 22.
Abbott (2020), p. 33.
- 23.
Koch (2020), p. 120.
- 24.
Wendehorst (2020), p. 152.
- 25.
Ibid, p. 153.
- 26.
Steinrötter (2020), pp. 270–271.
- 27.
Rodríguez de las Heras Ballell (2019), pp. 308 et seq.
- 28.
The full list of risks identified by the Expert Group is included in the Report.
- 29.
Schirmer (2019), p. 131.
- 30.
van den Hoven van Genderen (2018), p. 21.
- 31.
The EU Parliament had mentioned it earlier, in the Resolution on Civil Law Rules on Robotics of 2017, but the idea was abandoned in the Resolution on a Civil Liability Regime for Artificial Intelligence of 2020.
- 32.
- 33.
European Commission Directorate-General for Justice and Consumers (2019), p. 32.
- 34.
To account for the multitude of actors who might be involved in the development of AI, the Expert Group suggested a distinction between backend and frontend operators. The EU Parliament’s Resolution of 2020 adopted it. See European Commission Directorate-General for Justice and Consumers (2019), pp. 39–42.
- 35.
Supra note 12, p. 312.
- 36.
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.
- 37.
See: European Law Institute (n.d.).
- 38.
Wagner (2019), p. 42.
- 39.
A defect under the Product Liability Directive is defined by reference to the standard of safety one is entitled to expect. See Borghetti (2019), p. 67. He notes that the traditional methods courts employ to establish a defect are incompatible with the characteristics of AI: courts typically establish a defect through proof that the product malfunctioned, proof of a violation of safety standards, a balancing of the product’s risks and benefits, or a comparison of the product with other products, and he explains why none of these is appropriate for AI.
- 40.
Craglia (2018), p. 12.
- 41.
Baldwin and Black (2016), p. 567.
- 42.
- 43.
Datenethikkommission (2019).
- 44.
Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (COM/2020/825 final).
- 45.
Macenaite (2017), p. 509.
- 46.
Black (2005), p. 510.
- 47.
Craglia (2018), p. 588.
- 48.
de Gregorio and Dunn (2021), p. 11.
- 49.
Recital 3 of the Proposal.
- 50.
- 51.
Ibid.
- 52.
The advocated approach combines a set of fully horizontal principles, a list of blacklisted AI practices, and the creation of a regulatory framework for defined high-risk applications.
- 53.
Datenethikkommission (2019).
- 54.
The official title is: Proposal of the EU Commission for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (AI Act), 21.4.2021, COM(2021) 206 final.
- 55.
Proposal of the EU Commission for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive, AIL-D), 28.9.2022, COM(2022) 496 final.
- 56.
Proposal of the EU Commission for a Directive of the European Parliament and of the Council on Liability for Defective Products (PL-D), 28.9.2022, COM(2022) 495 final.
- 57.
Wagner (2023), p. 11.
- 58.
Spindler (2023), p. 30.
- 59.
Ibid, pp. 33 et seq.
- 60.
Ibid, p. 39.
- 61.
See Hacker (2022).
- 62.
Ibid.
- 63.
European Commission Directorate-General for Justice and Consumers (2019).
- 64.
Notably, the Datenethikkommission uses the term algorithm instead of AI, arguably because the focus of its work is on the impact of algorithms on data in general, not only on the impact of AI.
- 65.
European Commission Directorate-General for Justice and Consumers (2019), p. 5.
- 66.
- 67.
Wagner (2023), p. 14.
References
Abbott R (2020) The Reasonable Robot: artificial intelligence and the law. Cambridge University Press
Alemanno A (2013) Regulating the European Risk Society. In: Alemanno A et al (eds) Better business regulation in a risk society. Springer
Antunes HS (2021) Civil liability applicable to artificial intelligence: a preliminary critique of the European Parliament Resolution of 2020. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3743242
Baldwin R, Black J (2016) Driving priorities in risk-based regulation: what’s the problem? J Law Soc 43(4):565–595
Black J (2005) The emergence of risk-based regulation and the new public risk management in the United Kingdom. Public Law Autumn:512–549
Borghetti JS (2019) How can artificial intelligence be defective? In: Lohsse S, Schulze R, Staudenmayer D (eds) Liability for Artificial Intelligence and the Internet of Things: Münster Colloquia on EU Law and the Digital Economy IV. Nomos
Calo R (2017) Artificial intelligence policy: a primer and roadmap. UC Davis Law Rev 51:399–435
Craglia M (ed) (2018) Artificial intelligence: a European perspective. Publications Office of the European Union
Datenethikkommission (2019) Opinion. https://www.bmj.de/DE/Themen/FokusThemen/Datenethikkommission/Datenethikkommission_EN_node.html
De Gregorio G, Dunn P (2021) Profiling under Risk-based Regulation: Bringing together the GDPR and the DSA. https://assets.ctfassets.net/iapmw8ie3ije/5EuxLPaUIsgGt7R6PgeuFK/c9269e55e10bb2a7a0b392624c08f4d0/De_Gregorio_Dunn_My_Data_is_Mine__1_.pdf
de Jong ER et al (2018) Judge-made risk regulation and tort law: an introduction. Eur J Risk Regul 9:6–13
Eidenmüller H (2019) Machine performance and human failure: how shall we regulate autonomous machines? J Bus Tech L 15(1):109–133
European Commission Directorate-General for Justice and Consumers (2019) Liability for artificial intelligence and other emerging digital technologies. Publications Office. https://data.europa.eu/doi/10.2838/25362
European Law Institute (2020) Response to the public consultation on the White Paper: on Artificial Intelligence – a European approach to excellence and trust. COM(2020) 65 final. https://www.europeanlawinstitute.eu/fileadmin/user_upload/p_eli/News_page/2020/ELI_Response_AI_White_Paper.pdf
European Law Institute (n.d.) Guiding principles for updating the product liability directive for the digital age. ELI innovation paper series. https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/Publications/ELI_Guiding_Principles_for_Updating_the_PLD_for_the_Digital_Age.pdf
Hacker P (2022) The European AI Liability Directives – critique of a half-hearted approach and lessons for the future. https://ssrn.com/abstract=4279796
Hildt E (2019) Artificial intelligence: does consciousness matter? Available via Frontiers in Psychology. https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01535/full
Hutter BM (2006) Risk, regulation, and management. In: Taylor-Gooby P, Zinn J (eds) Risk in social science. Oxford University Press
Koch BA (2020) Liability for emerging digital technologies: an overview. JETL 11(2):115–136
Kysar D (2018) The public life of private law: tort law as a risk regulation mechanism. Eur J Risk Regul 9(1):48–65
Macenaite M (2017) The “riskification” of European data protection law through a two-fold shift. Eur J Risk Regul 8(3):506–540
Nilsson JN (2010) The quest for artificial intelligence: a history of ideas and achievements. Cambridge University Press
Open Philanthropy (2016) What should we learn from past AI forecasts? Available via Open Philanthropy. https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts
Reggia JA, Huang DW, Katz G (2015) Beliefs concerning the nature of consciousness. J Conscious Stud 22:146–171
Rodríguez de las Heras Ballell T (2019) Legal challenges of artificial intelligence: modelling the disruptive features of emerging technologies and assessing their possible legal impact. Uniform Law Rev 24(2):302–314
Rothstein H et al (2013) Risk and the limits of governance: exploring varied patterns of risk-based governance across Europe. Regul Gov 7(2):212–235
Russell S, Norvig P (2009) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall
Schirmer JE (2019) Artificial intelligence and legal personality: introducing “Teilrechtsfähigkeit”: a partial legal status made in Germany. In: Wischmeyer T, Rademacher T (eds) Regulating artificial intelligence. Springer
Spindler G (2023) Different approaches for liability of artificial intelligence – pros and cons – the new proposal of the EU Commission on Liability for Defective Products and AI Systems. https://ssrn.com/abstract=4354468
Steinrötter B (2020) The (envisaged) legal framework for commercialisation of digital data within the EU: data protection law and data economic law as a conflicted basis for algorithm-based products and services. In: Ebers M, Navas S (eds) Algorithms and law. Cambridge University Press, Cambridge
Stone P et al (2016) Defining AI. In: Stone P et al (eds) Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: report of the 2015–2016 study panel. Stanford University, Stanford California. http://ai100.stanford.edu/2016-report
Turner J (2019) Robot rules: regulating artificial intelligence. Palgrave Macmillan
van den Hoven van Genderen R (2018) Do we need new legal personhood in the age of robots and AI? In: Corrales M, Fenwick M, Forgó N (eds) Robotics, AI and the future of law. Springer
Wagner G (2019) Robot liability. In: Lohsse S, Schulze R, Staudenmayer D (eds) Liability for artificial intelligence and the Internet of Things: Münster Colloquia on EU Law and the Digital Economy IV. Nomos
Wagner G (2023) Liability rules for the digital age - aiming for the Brussels effect. https://ssrn.com/abstract=4320285
Wendehorst C (2020) Strict liability for AI and other emerging technologies. JETL 11(2):150–180
Zech H (2021) Liability for AI: public policy considerations. ERA Forum 22:147–158
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Muftic, N. (2023). Understanding the Risks of Artificial Intelligence as a Precondition for Sound Liability Regulation. In: Kornilakis, A., Nouskalis, G., Pergantis, V., Tzimas, T. (eds) Artificial Intelligence and Normative Challenges. Law, Governance and Technology Series, vol 59. Springer, Cham. https://doi.org/10.1007/978-3-031-41081-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-41080-2
Online ISBN: 978-3-031-41081-9
eBook Packages: Law and Criminology (R0)