Abstract
This chapter explores the so-called ‘liability gaps’ that occur when, in applying existing contractual, extra-contractual, or strict liability rules to harms caused by AI, the inherent characteristics of AI result in unsatisfactory outcomes, in particular for the damaged party. The chapter explains the liability gaps, investigating which features of AI challenge the application of traditional legal solutions and why. It then explores the challenges connected to the different possible solutions, including contract law, extra-contractual law, product liability, mandatory insurance, company law, and the idea of granting legal personhood to AI and robots. The analysis is carried out using hypothetical scenarios, to highlight both the abstract and practical implications of AI, based on the roles and interactions of the various parties involved. In conclusion, the chapter offers an overview of the fundamental principles and guidelines that should be followed to elaborate a comprehensive and effective strategy to bridge the liability gaps. The argument made is that the guiding principle in designing legal solutions to the liability gaps must be the protection of individuals, particularly their dignity, rights and interests.
Notes
- 1.
Vincent (2019) Apple’s credit card is being investigated for discriminating against women https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation. Accessed 19 February 2021.
- 2.
Ibid.
- 3.
Ibid.
- 4.
Vigdor (2019) Apple Card Investigated After Gender Discrimination Complaints https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html. Accessed 19 February 2021. Please note that as of February 2021 no additional information on the investigation or its outcomes appears to have been made available.
- 5.
In this chapter, ‘AI system’ indicates a product, device, service, or machine deploying a form of AI. AI should be understood in this chapter as any machine learning or other data analytics technique capable of achieving a certain objective with a significant degree of autonomy, following supervised or unsupervised learning or other forms of software learning capability.
- 6.
- 7.
The example is a simplification. In real life, special liability regimes would create a presumption of fault on Alice’s part. Alice would have mandatory civil liability insurance to cover possible damage occurring while driving. Furthermore, additional parties might be involved, such as the manufacturer of the car or of any of its components, who might exempt Alice from any liability or from whom she could demand indemnification in case of defects or malfunctioning.
- 8.
Pagallo et al. 2018, p. 19.
- 9.
Witt 2001, p. 694.
- 10.
Witt 2001, p. 745.
- 11.
European Commission 2019, p. 21.
- 12.
Leenes et al. 2017, p. 9.
- 13.
European Commission 2019, p. 21.
- 14.
Scherer 2016, p. 363.
- 15.
- 16.
- 17.
There is no universal definition of pure economic loss, and the rules concerning it vary widely among European countries. As a general definition, pure economic loss entails suffering an economic loss not connected to a pre-existing harm. Cf. Bussani and Palmer 2003.
- 18.
- 19.
von Bar and Drobnig 2009.
- 20.
Fauvarque-Cosson and Mazeaud 2009.
- 21.
Ibid.
- 22.
- 23.
- 24.
Please note that the relationship between Alice and The Bank might be either contractual or pre-contractual, depending on the kind of documents exchanged and the rules of a given jurisdiction. The existence of a contractual or pre-contractual relationship does not automatically imply contractual liability in this case, because the damage derives from discrimination, an action prohibited by law.
- 25.
- 26.
European Commission 2019, p. 51.
- 27.
Buyuksagis and van Boom 2013, p. 609.
- 28.
Ibid.
- 29.
Royakkers and van Est 2016, p. 185.
- 30.
- 31.
Bertolini 2013, p. 219.
- 32.
- 33.
UNESCO 2017, p. 19.
- 34.
This idea was put forward at the Sixth T.M.C. Asser Annual Lecture, entitled ‘Almost Human: Law and Human Agency in the Time of Artificial Intelligence’, given by Prof. Andrew Murray online on 26 November 2020. See also Murray 2021.
- 35.
Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, of 21 April 2021 [2021/0106(COD)], Annex III.
- 36.
Royakkers and van Est 2016, p. 185.
- 37.
Marchant and Lindor 2012, p. 1326.
- 38.
European Commission 2019, p. 39.
- 39.
- 40.
Weatherill 2013.
- 41.
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [OJ L 210, 7.8.1985, pp. 29–33], article 7.
- 42.
Weatherill 2013.
- 43.
- 44.
Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [OJ L 210, 7.8.1985, pp. 29–33], article 2.
- 45.
European Commission 2020, p. 10.
- 46.
Jacquemin 2017.
- 47.
European Commission 2020, p. 4.
- 48.
Weatherill 2013; see also the definition of defect provided by article 6 of the Council Directive 85/374/EEC.
- 49.
European Commission 2020a.
- 50.
Turner 2019, p. 112.
- 51.
European Commission 2019, p. 61.
- 52.
Bayern 2015.
- 53.
Scherer 2019.
- 54.
Solum 1992.
- 55.
European Parliament 2017, p. 61.
- 56.
- 57.
- 58.
Open Letter to the European Commission: Artificial Intelligence and Robotics 2017, https://www.politico.eu/wp-content/uploads/2018/04/RoboticsOpenLetter.pdf. Accessed 19 February 2021.
- 59.
The European Commission, in the context of its AI strategy for Europe, has not embraced the view of the European Parliament concerning a special legal personality for artificial agents. Cf. European Commission 2018 and European Commission’s High-Level Expert Group on Artificial Intelligence 2019.
- 60.
These values are expressly protected by the main instruments constituting and regulating the EU, such as the Treaty on European Union and the Charter of Fundamental Rights.
- 61.
European Commission 2018, p. 2.
- 62.
European Commission 2020b.
- 63.
European Commission 2019, p. 32.
- 64.
Ibid.
- 65.
Ibid., p. 39.
- 66.
European Commission 2020b.
- 67.
De Conca 2020.
- 68.
European Commission 2019, p. 49.
- 69.
The application of contractual and extra-contractual liability has been challenged in the past by complex situations, such as pre-contractual liability or the duty of care vis-à-vis third parties. Cf., for instance, Michoński 2015.
References
Asaro P M (2011) A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics. In: Lin P et al (ed) Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press, pp 169–186.
Bayern S (2015) The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems. Stan. Tech. L. Rev. 19:93–112.
Bertolini A (2013) Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules. LIT 5(2):214–247.
Buyuksagis E, van Boom W H (2013) Strict liability in contemporary European codification: Torn between objects, activities, and their risks. Georgetown Journal of International Law 44(2):609–640.
Bussani M, Palmer V V (eds) (2003) Pure economic loss in Europe. Cambridge University Press.
De Conca S (2020) Bridging the liability gap using human-centered legal design: three scenarios to apply the liability from social contact. WeRobot 2020, Ottawa, 22–25 September 2020. https://techlaw.uottawa.ca/werobot/papers. Accessed 20 July 2021.
European Commission (2018) Artificial Intelligence for Europe: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions [COM(2018) 237].
European Commission (2019) Liability for Artificial Intelligence and other emerging digital technologies – Report from the Expert Group on Liability and New Technologies.
European Commission (2020a) Report from the commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics [COM(2020) 64].
European Commission (2020b) White Paper on Artificial Intelligence - A European approach to excellence and trust [COM(2020) 65].
European Commission’s High-Level Expert Group on Artificial Intelligence (2019) Ethics Guidelines for Trustworthy AI.
European Parliament (2017) Draft Report of the Committee on Legal Affairs with recommendations to the Commission on Civil Law Rules on Robotics [2015/2103(INL)].
Farnsworth E A (2006) Comparative Contract Law. In: Reimann M, Zimmermann R (eds) The Oxford Handbook of Comparative Law. Oxford University Press.
Fauvarque-Cosson B, Mazeaud D (eds) (2009) European Contract Law, Materials for a Common Frame of Reference: Terminology, Guiding Principles, Model Rules. Sellier European Law Publishers.
Hage J (2017) Theoretical foundations for the responsibility of autonomous agents. Artif Intell Law 25:255–271.
Hevelke A, Nida-Rümelin J (2015) Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Sci Eng Ethics 21:619–630.
Jacquemin H (2017) Digital content and sales or service contracts under EU law and Belgian/French law. Journal of Intellectual Property, Information Technology and Electronic Commerce Law 8(1):27–38.
Johnson D G (2015) Technology with No Human Responsibility? Journal of Business Ethics 127(4):707–715.
Karnow C E A (2016) The application of traditional tort theory to embodied machine intelligence. In: Calo R, Froomkin M A, Kerr I (eds) Robot Law. Edward Elgar Publishing, pp 51–77.
Leenes R, Palmerini E, Koops B J, Bertolini A, Salvini P, Lucivero F (2017) Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues. Law, Innovation and Technology 9(1):1–44.
Marchant G E, Lindor R A (2012) The coming collision between autonomous vehicles and the liability system. Santa Clara Law Review 52(4):1321–1340.
Marsh P D V (1994) Comparative Contract Law: England, France, Germany. Gower.
Michoński D (2015) Contractual or Delictual? On the Character of Pre-contractual Liability in Selected European Legal Systems. Comparative Law Review 20:151–175.
Murray A (2021) Almost Human: Law and Human Agency in the Time of Artificial Intelligence - Sixth Annual T.M.C. Asser Lecture. Annual T.M.C. Asser Lecture Series. T.M.C. Asser Press, The Hague.
Pagallo U (2012) Three Roads to Complexity, AI and the Law of Robots: On Crimes, Contracts, and Torts. In: Palmirani M et al (eds) AICOL Workshops 2011, LNAI 7639. Springer, pp 48–60.
Pagallo U (2013a) The Laws of Robots: Crimes, Contracts, and Torts. Springer.
Pagallo U (2013b) Robots in the cloud with privacy: A new threat to data protection? CLSR 29(5):501–508.
Pagallo U, Corrales M, Fenwick M, Forgó N (2018) The Rise of Robotics & AI: Technological Advances & Normative Dilemmas. In: Corrales M et al (eds) Robotics, AI and the Future of Law. Springer, pp 1–14.
Royakkers L, van Est R (2016) Just Ordinary Robots: Automation from Love to War. CRC Press.
Scherer M (2016) Regulating Artificial Intelligence Systems: Risks, Challenges, Competence, and Strategies. Harvard Journal of Law & Technology 29(2):354–400.
Scherer M (2019) Of Wild Beasts and Digital Analogues: The Legal Status of Autonomous Systems. NEV. L.J. 19:259–291.
Solum L B (1992) Legal Personhood for Artificial Intelligences. N.C. L. Rev. 70(4):1231–1287.
Teubner G (2006) Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law. Journal of Law and Society 33(4):497–521.
Teubner G (2018) Digital Personhood? The Status of Autonomous Software Agents in Private Law. Ancilla Iuris 106–149.
Turner J (2019) Robot Rules: Regulating Artificial Intelligence. Palgrave Macmillan.
UNESCO (2017) Report of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) on Robotics Ethics [SHS/YES/COMEST-10/17/2 REV].
van den Hoven van Genderen R (2019) Does Future Society Need Legal Personhood for Robots and AI? In: Ranschaert E et al (eds) Artificial Intelligence in Medical Imaging. Springer, pp 257–290.
von Bar C, Drobnig U (eds) (2009) The Interaction of Contract Law and Tort and Property Law in Europe, A Comparative Study. Sellier European Law Publishers.
Weatherill S (2013) EU Consumer Law and Policy. Edward Elgar Law.
Witt J (2001) Toward a New History of American Accident Law: Classical Tort Law and the Cooperative First-Party Insurance Movement. Harvard Law Review 114(3):690–841.
Copyright information
© 2022 T.M.C. Asser Press and the authors
Cite this chapter
De Conca, S. (2022). Bridging the Liability Gaps: Why AI Challenges the Existing Rules on Liability and How to Design Human-empowering Solutions. In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_13
DOI: https://doi.org/10.1007/978-94-6265-523-2_13
Publisher Name: T.M.C. Asser Press, The Hague
Print ISBN: 978-94-6265-522-5
Online ISBN: 978-94-6265-523-2