Understanding the Risks of Artificial Intelligence as a Precondition for Sound Liability Regulation

Artificial Intelligence and Normative Challenges

Part of the book series: Law, Governance and Technology Series (LGTS, volume 59)


Abstract

Not all AI risks are new. The risk of traffic accidents generated by self-driving cars is already a reality in today's traffic. Physical injuries a patient may suffer during medical treatment occur regardless of whether the damage is caused by an autonomous agent or by a human doctor. Modern societies are already familiar with these risks. This chapter explores whether liability regimes, traditionally designed to deter physical risks and to compensate injured persons when they materialize, have rules apt for tackling the social risks that AI represents. In the European Union, the European Parliament has adopted the text of a Regulation on AI liability. The text is a clear step forward in adjusting liability rules to the challenges of AI. It sets out a position on who should be responsible and on what basis, and it provides injured persons with procedural devices that enhance their position and tackle the black-box issue. It thus, for better or worse, deals with the well-known fundamental issues surrounding AI liability. However, while social risks have previously been recognized by the European Commission in the White Paper and by some scholars, the adopted text fails to address them specifically. This chapter presents the nature of the AI risks that liability rules should regulate and asks whether traditional liability concepts are apt for regulating these novel types of risk. Just as in the case of safety regulation, it aims to demonstrate that a proper understanding of AI risks is the basis for sound liability regulation.


Notes

  1. Rothstein et al. (2013), p. 16.

  2. Zech (2021), p. 4.

  3. See de Jong et al. (2018), pp. 6–13.

  4. Rodríguez de las Heras Ballell (2019), pp. 308 et seq.

  5. de Jong et al. (2018), pp. 6–13.

  6. Kysar (2018), p. 54.

  7. Ibid, p. 53.

  8. Nilsson (2010).

  9. Calo (2017), pp. 399–435.

  10. Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe, Brussels, 25.4.2018, COM(2018) 237 final.

  11. Turner (2019), p. 7.

  12. Russell and Norvig (2009).

  13. Hildt (2019).

  14. For instance: Reggia et al. (2015).

  15. Open Philanthropy (2016).

  16. Stone et al. (2016).

  17. European Commission Directorate-General for Justice and Consumers (2019).

  18. Ibid, p. 11.

  19. In general, providing a legal definition of AI is a troubling task, as it is hard to capture the main characteristics of this technology in an operable definition. A rather unsatisfying attempt was made by the European Parliament in Resolution 2020/2014(INL) on a Civil Liability Regime for Artificial Intelligence.

  20. European Commission Directorate-General for Justice and Consumers (2019), p. 33.

  21. Wendehorst (2020), p. 152.

  22. Abbott (2020), p. 33.

  23. Koch (2020), p. 120.

  24. Wendehorst (2020), p. 152.

  25. Ibid, p. 153.

  26. Steinrötter (2020), pp. 270–271.

  27. Rodríguez de las Heras Ballell (2019), pp. 308 et seq.

  28. The full list of risks identified by the Expert Group is included in the Report.

  29. Schirmer (2019), p. 131.

  30. van den Hoven van Genderen (2018), p. 21.

  31. The EU Parliament had mentioned it in the 2017 Resolution on Civil Law Rules on Robotics, but the idea was abandoned in the 2020 Resolution on a Civil Liability Regime for Artificial Intelligence.

  32. For example: Eidenmüller (2019), p. 109; van den Hoven van Genderen (2018), p. 21.

  33. European Commission Directorate-General for Justice and Consumers (2019), p. 32.

  34. To recognize the multitude of actors who might be involved in the development of AI, the Expert Group suggested a distinction between backend and frontend operators, which the EU Parliament's 2020 Resolution accepted. See European Commission Directorate-General for Justice and Consumers (2019), pp. 39–42.

  35. Supra note 12, p. 312.

  36. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.

  37. See: European Law Institute (n.d.).

  38. Wagner (2019), p. 42.

  39. A defect under the Product Liability Directive is defined as the standard of safety one is entitled to expect. See Borghetti (2019), p. 67. He notes that the traditional methods courts employ to establish a defect are incompatible with the characteristics of AI, and explains why none of the following approaches is appropriate: proof that the product malfunctioned, proof of a violation of safety standards, balancing the product's risks and benefits, and comparing the product with other products.

  40. Craglia (2018), p. 12.

  41. Baldwin and Black (2016), p. 567.

  42. Hutter (2006); Alemanno (2013).

  43. Datenethikkommission (2019).

  44. Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC (COM/2020/825 final).

  45. Macenaite (2017), p. 509.

  46. Black (2005), p. 510.

  47. Craglia (2018), p. 588.

  48. de Gregorio and Dunn (2021), p. 11.

  49. Recital 3 of the Proposal.

  50. Antunes (2021); European Law Institute (2020).

  51. Ibid.

  52. The advocated approach combines a set of fully horizontal principles, a list of blacklisted AI practices, and a regulatory framework for defined high-risk applications.

  53. Datenethikkommission (2019).

  54. The official title is: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (AI Act), 21.4.2021, COM(2021) 206 final.

  55. Proposal of the EU Commission for a Directive of the European Parliament and of the Council adapting the rules on non-contractual civil liability to artificial intelligence (AIL-D), 28.9.2022, COM(2022) 496 final.

  56. Proposal of the EU Commission for a Directive of the European Parliament and of the Council on Liability for Defective Products (PL-D), 28.9.2022, COM(2022) 495 final.

  57. Wagner (2023), p. 11.

  58. Spindler (2023), p. 30.

  59. Ibid, pp. 33 et seq.

  60. Ibid, p. 39.

  61. See Hacker (2022).

  62. Ibid.

  63. European Commission Directorate-General for Justice and Consumers (2019).

  64. Notably, the Datenethikkommission uses the term "algorithm" instead of "AI", arguably because the focus of its work is on the impact of algorithms on data in general, and not only on the impact of AI.

  65. European Commission Directorate-General for Justice and Consumers (2019), p. 5.

  66. For example: https://www.euractiv.com/section/digital/news/leading-meps-raise-the-curtain-on-draft-ai-rules/.

  67. Wagner (2023), p. 14.


Author information


Correspondence to Nasir Muftic.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Muftic, N. (2023). Understanding the Risks of Artificial Intelligence as a Precondition for Sound Liability Regulation. In: Kornilakis, A., Nouskalis, G., Pergantis, V., Tzimas, T. (eds) Artificial Intelligence and Normative Challenges. Law, Governance and Technology Series, vol 59. Springer, Cham. https://doi.org/10.1007/978-3-031-41081-9_6

  • DOI: https://doi.org/10.1007/978-3-031-41081-9_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41080-2

  • Online ISBN: 978-3-031-41081-9

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
