
Punishing Artificial Intelligence: Legal Fiction or Science Fiction

  • Conference paper
  • First Online:
Legal Aspects of Autonomous Systems (ICASL 2022)

Part of the book series: Data Science, Machine Intelligence, and Law (DSMIL, volume 4)


Abstract

Whether causing flash crashes in financial markets, purchasing illegal drugs, or running over pedestrians, AI is increasingly engaging in activity that would be criminal for a natural person, or even an artificial person like a corporation. We argue that criminal law falls short in cases where an AI causes certain types of harm and there are no practically or legally identifiable upstream criminal actors. This paper explores potential solutions to this problem, focusing on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement of a guilty mind. Drawing on analogies to corporate and strict criminal liability, as well as familiar imputation principles, we show how a coherent theoretical case can be constructed for AI punishment. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and it would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.

This work, copyright 2019 by Ryan Abbott and Alexander Sarch, was originally published in the UC Davis Law Review, vol. 53, copyright 2019 by The Regents of the University of California. All rights reserved. Reprinted with permission. This chapter was adapted from the original text.


Notes

  1. 1.

    See e.g. Hallevy (2014, pp. 185–229), Kingston (2016, p. 269), Mulligan (2018, pp. 579, 580), Wale and Yuratich (2015).

  2. 2.

    Hallevy (2010, pp. 171, 191).

  3. 3.

    Hallevy (2010, p. 199).

  4. 4.

    See Hallevy (2010, p. 200).

  5. 5.

    Hallevy (2010, pp. 200–201).

  6. 6.

    Hallevy (2010, p. 199).

  7. 7.

    Hu (2019, pp. 487, 531); see also Hallevy (2010, p. 490).

  8. 8.

    See Hu (2018, pp. 494, 497–498).

  9. 9.

    Sarch (2017, pp. 707, 709).

  10. 10.

    See Sect. 3.2 below.

  11. 11.

    See Model Penal Code § 2.07 (Am. Law Inst. 1962).

  12. 12.

    See Laufer (1994, pp. 647, 664–668).

  13. 13.

    See Hu (2019, pp. 529–530).

  14. 14.

    AI lacks a standard definition, but its very first definition in 1955 holds up reasonably well: ‘[T]he artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.’ McCarthy et al. (1955).

  15. 15.

    See, e.g. Yasseri (2017).

  16. 16.

    See, e.g. Castelvecchi (2016).

  17. 17.

    See, e.g. Castelvecchi (2016).

  18. 18.

    Castelvecchi (2016).

  19. 19.

    Castelvecchi (2016).

  20. 20.

    Castelvecchi (2016).

  21. 21.

    See Abbott (2018, pp. 23–28).

  22. 22.

    See Rodriguez (2018).

  23. 23.

    See Rodriguez (2018).

  24. 24.

    If and when such machines come into existence, we will certainly enjoy reading their works on AI criminal liability.

  25. 25.

    See Abbott (2018).

  26. 26.

    See above notes 17–22 and accompanying text.

  27. 27.

    See Sect. 5.1 below.

  28. 28.

    See Gurney (2015, pp. 393, 433) (discussing crimes applicable to this scenario).

  29. 29.

    See Sect. 5 below.

  30. 30.

    See Sect. 5.1 below.

  31. 31.

    The chance of being prosecuted for a cyberattack in the United States is estimated at a mere 0.05% versus 46% for a violent crime. See Dixon (2019).

  32. 32.

    See generally Berman (2012, pp. 141, 144–45) (noting the convergence on this sort of theory of punishment).

  33. 33.

    Hart (2008, pp. 4–5).

  34. 34.

    See notes 46–47 below and accompanying text.

  35. 35.

    See Hart (2008).

  36. 36.

    See Duff and Hoskins (2017) (‘It is commonly suggested that punishment can help to reduce crime by deterring, incapacitatiing [sic], or reforming potential offenders....’).

  37. 37.

    See Duff and Hoskins (2017).

  38. 38.

    See Berman (2012, p. 145) (discussing types of deterrence).

  39. 39.

    See Berman (2012).

  40. 40.

    See Hart (2008, p. 19).

  41. 41.

    See Berman (2012, p. 145) (discussing rehabilitation).

  42. 42.

    See Tadros (2011, p. 21).

  43. 43.

    See Tadros (2011, p. 60).

  44. 44.

    See Berman (2012, p. 144) (on retributivism, punishment is justified if, but only to the extent that, ‘it is deserved or otherwise fitting, right or appropriate, and not [necessarily because of] any good consequences’ it may have); see also Tadros (2011, p. 151) (discussing desert-constrained consequentialism).

  45. 45.

    Negative retributivism is the view that the desert of the offender only prohibits punishing in excess of desert (even if it has good consequences). Positive retributivism says that the offender’s desert provides an affirmative reason for punishment.

  46. 46.

    See Model Penal Code § 1.02(1)(c) (Am. Law Inst. 1962) (declaring that one of the ‘general purposes’ of the Code is ‘to safeguard conduct that is without fault from condemnation as criminal’).

  47. 47.

    See Model Penal Code § 4.01 (outlining the incapacity defense based on mental defect as when a person is unable ‘either to appreciate the criminality of his conduct or to conform [it to] the law’).

  48. 48.

    See Tadros, n 44 above, pp. 25–28.

  49. 49.

    Asaro (2011, pp. 169, 181).

  50. 50.

    See Hart (2008, p. 19).

  51. 51.

    See Berman, n. 32 above, p. 145.

  52. 52.

    Lemley and Casey (2019, pp. 1311, 1316, 1389–1393).

  53. 53.

    Mulligan (2018, p. 580); see Lewis (1989, pp. 53, 54).

  54. 54.

    Mulligan (2018, p. 593).

  55. 55.

    See Duff (2007, p. 114), Binder (2008, pp. 713, 733).

  56. 56.

    See Diamantis (2016, pp. 2049, 2078).

  57. 57.

    See Scheutz (2021).

  58. 58.

    See Diamantis, n. 56 above, pp. 2088–2089.

  59. 59.

    See, e.g. Rhodes (2017).

  60. 60.

    Lewis (1989, p. 54).

  61. 61.

    See Darling (2016, pp. 213, 228).

  62. 62.

    See Chalmers (1995, pp. 200, 201) (describing phenomenal experiences as those personally felt or experienced).

  63. 63.

    See Chalmers (1995) (discussing the hard problem of consciousness).

  64. 64.

    See notes 46–47 above and accompanying text.

  65. 65.

    See Husak and Callender (1994, pp. 32–33).

  66. 66.

    See generally Husak (2012, pp. 449, 456–457) (distinguishing narrow culpability as merely mens rea categories from broad culpability, which is the underlying normative defect that criminal law aims to respond to).

  67. 67.

    See Model Penal Code § 1.02(1)(c) (Am. Law Inst. 1962); see also Moore (1997, p. 35).

  68. 68.

    See Alschuler (2009, pp. 1359, 1367–1369) (arguing against corporate punishment).

  69. 69.

    See Kircher (2009, p. 157).

  70. 70.

    See Model Penal Code § 2.07(1)(C) (Am. Law Inst. 1962) (adopting respondeat superior but restricting it to the mental states of high corporate officials).

  71. 71.

    See notes 38–40 above (explaining the idea of justifying punishment based on its good consequences).

  72. 72.

    Duff (2018, p. 19).

  73. 73.

    LaFave (2018), § 6.1(c) (‘[C]riminal liability requires that the activity in question be voluntary.’).

  74. 74.

    LaFave (2018).

  75. 75.

    Model Penal Code § 2.01(1) (Am. Law Inst. 1962).

  76. 76.

    See Model Penal Code § 2.01(2).

  77. 77.

    Yaffe (2012, pp. 174, 175).

  78. 78.

    Model Penal Code § 2.01(3) (Am. Law Inst. 1962).

  79. 79.

    See Duff (2007, pp. 9–20).

  80. 80.

    See Sect. 3.2 above.

  81. 81.

    See Bratman (1990, pp. 15, 23–27); see also Sarch (2015, pp. 453, 467–468).

  82. 82.

    Bratman (1990, p. 26).

  83. 83.

    Bratman (1990).

  84. 84.

    See Schwitzgebel (2001), (‘Traditional dispositional views of belief assert that for someone to believe some proposition P is for [her] to possess [relevant] behavioral dispositions pertaining to P. Often cited is the disposition to assent to utterances of P in [appropriate] circumstances.... Other relevant dispositions might include the disposition to exhibit surprise should the falsity of P [become] evident, the disposition to assent to Q if... shown that P implies Q, and the disposition to depend on P’s truth in [acting]. [More generally, this amounts to] being disposed to act as though P is the case.’).

  85. 85.

    See Model Penal Code § 2.02(2)(B) (Am. Law Inst. 1962) (defining knowledge as practical certainty).

  86. 86.

    See Schwitzgebel (2001, p. 76) (defending this approach to determining when to attribute beliefs to humans).

  87. 87.

    See Model Penal Code § 2.02(2)(C) (Am. Law Inst. 1962) (defining recklessness).

  88. 88.

    See, e.g. Szigeti (2014, p. 329).

  89. 89.

    List and Pettit (2011, p. 158).

  90. 90.

    See Hart (2008, pp. 1–27).

  91. 91.

    See List and Pettit (2011, p. 165).

  92. 92.

    See Hart (2008, p. 4).

  93. 93.

    See n. 53 above and accompanying text. See Chalmers (1995, pp. 200, 215) (distinguishing intellectual capacities from phenomenal consciousness).

  94. 94.

    See Breivik (2012).

  95. 95.

    See Fletcher (2013, p. 206) (defending objective theories of well-being from familiar objections); Sarch (2012, pp. 439–441) (defending a partially objective theory of well-being, where both subjective experiences and some objective components can impact well-being).

  96. 96.

    See Harman (2009, pp. 137, 139).

  97. 97.

    See note 53 above and accompanying text.

  98. 98.

    See Foot (2001).

  99. 99.

    See Foot (2001, p. 33).

  100. 100.

    See Feinberg (1974, pp. 43, 49–51).

  101. 101.

    Feinberg (1974, p. 51).

  102. 102.

    Feinberg (1974, p. 52).

  103. 103.

    Feinberg (1974).

  104. 104.

    See Feinberg (1974, pp. 49–50).

  105. 105.

    See 18 U.S.C. § 1030(a)(1)-(7) (2019) (defining offenses such as computer trespass and computer fraud); id. § 1343 (wire fraud statute).

  106. 106.

    Hallevy (2010, pp. 179–81).

  107. 107.

    See Kadish (1985, pp. 323, 372–373).

  108. 108.

    See Alldridge (1990, pp. 70–71); 18 U.S.C. § 2(b) (2019).

  109. 109.

    See Model Penal Code § 2.02(2)(C)-(D) (Am. Law Inst. 1962).

  110. 110.

    See Model Penal Code § 210.3(a).

  111. 111.

    See Model Penal Code § 210.4.

  112. 112.

    See, e.g., Model Penal Code § 220.1(2); Model Penal Code § 220.2(2); Model Penal Code § 220.3.

  113. 113.

    See Hallevy (2010, pp. 181–184).

  114. 114.

    The rule holds that the aider and abettor ‘of an initial crime... is also liable for any consequent crime committed by the principal, even if he or she did not abet the second crime, as long as the consequent crime is a natural and probable consequence of the first crime.’ Weiss (2002, pp. 1341, 1424).

  115. 115.

    Hallevy (2010, p. 184).

  116. 116.

    See, e.g., Model Penal Code § 2.03 (Am. Law Inst. 1962).

  117. 117.

    See LaFave (2018, § 14.5).

  118. 118.

    See Model Penal Code § 221.1 (defining burglary).

  119. 119.

    See LaFave (2018, § 14.5).

  120. 120.

    See Simester (2005, pp. 21, 45).

  121. 121.

    See notes 107–108 above and accompanying text.

  122. 122.

    See Sect. 3.1 above (discussing the Eligibility Challenge).

  123. 123.

    See Simons (2003, pp. 179, 195–96).

  124. 124.

    See notes 75–80 above and accompanying text.

  125. 125.

    See Bernard (1984, pp. 3–4).

  126. 126.

    European Parliament (2017, 16).

  127. 127.

    See ibid., 18.

  128. 128.

    For instance, more than 150 AI “experts” subsequently sent an open letter to the European Commission warning that, ‘[f]rom an ethical and legal perspective, creating a legal personality for a robot is inappropriate whatever the legal status model.’ Robotics-Openletter.eu, Open Letter to the European Commission Artificial Intelligence and Robotics, http://www.robotics-openletter.eu/

  129. 129.

    See Solaiman (2017, p. 155).

  130. 130.

    Adriano (2015, pp. 363, 365).

  131. 131.

    See Hu (2018, pp. 527–528).

  132. 132.

    See Law and Versteeg (2011, pp. 1163, 1170).

  133. 133.

    See Louis K. Liggett Co. v Lee [1933] 288 U.S. 517, 549 (Brandeis, J., dissenting).

  134. 134.

    See Citizens United v Fed. Election Comm’n [2010] 558 U.S. 310, 341; Burwell v Hobby Lobby Stores, Inc. [2014] 573 U.S. 682.

  135. 135.

    See 18 U.S.C. § 1030(a) (2019).

  136. 136.

    A new criminal offense—akin to driving without a license—could be created for cases where programmers, developers, owners, or users have unreasonably failed to designate a Responsible Person for an AI.

  137. 137.

    The Responsible Person should also be liable for harms caused by an AI where the AI, if a natural person, would be criminally liable together with another individual. Otherwise, there is a risk that sophisticated AI developers could create machines that cause harm but rely on co-conspirators to escape liability.

  138. 138.

    There is precedent for such a Responsible Person registration scheme. In the corporate context, executives may be required to attest to the validity of certain SEC filings and may be held strictly liable for false statements even where they have done nothing directly negligent. Where a company owns the AI, the Responsible Person would have to be an executive, to avoid the problem of setting up a low-level employee as a “fall guy.” For this reason, the SEC requires a C-level executive to attest to certain statements on filings.

  139. 139.

    Parties with more negotiating power would also be likely to attempt to offload their liability. For instance, AI suppliers might attempt to shift liability to consumers. At least in the case of commercial products, suppliers should not be permitted to do this.

  140. 140.

    This raises potential concerns about corporations with minimal capital being used to avoid liability. However, this same concern exists now with human activities, where thinly capitalized corporations are exploited as a way to limit the liability of individuals. Still, there are familiar legal tools to block this sort of illicit liability avoidance. To the extent a bad actor is abusing the corporate form, courts can, for instance, pierce the corporate veil.

References

  • Abbott R (2018) The reasonable computer: disrupting the paradigm of tort liability. Geo Wash L Rev 86:1–45

  • Abbott R, Sarch A (2019) Punishing artificial intelligence: legal fiction or science fiction. UC Davis Law Rev 53:323–384

  • Adriano EAQ (2015) The natural person, legal entity or juridical person and juridical personality. Penn St JL Int Aff 4:364–391

  • Alldridge P (1990) The doctrine of innocent agency. Crim L F 2:45–83

  • Alschuler AW (2009) Two ways to think about the punishment of corporations. Am Crim L Rev 46:1359

  • Asaro PM (2011) A body to kick, but still no soul to damn: legal perspectives on robotics. In: Lin P et al (eds) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge, MA, pp 169–186

  • Berman MN (2012) The justification of punishment. In: Marmor A (ed) The Routledge Companion to Philosophy of Law. Routledge, London

  • Bernard TJ (1984) The historical development of corporate criminal liability. Criminology 22:3–18

  • Binder G (2008) Victims and the significance of causing harm. Pace L Rev 28:713–737

  • Bratman ME (1990) What is intention? In: Cohen PR et al (eds) Intentions in communication, vol 15, pp 23–27

  • Breivik A (2012) Anders Breivik found sane: the verdict explained. The Telegraph. https://www.telegraph.co.uk. Accessed 1 Dec 2022

  • Castelvecchi D (2016) Can we open the black box of AI? Nature. https://www.nature.com. Accessed 1 Dec 2022

  • Chalmers DJ (1995) Facing up to the problem of consciousness. J Consciousness Stud 2:200–219

  • Darling K (2016) Extending legal protection to social robots: the effects of anthropomorphism, empathy, and violent behavior towards robotic objects. In: Calo R, Froomkin AM, Kerr I (eds) Robot law. Edward Elgar, pp 213–232

  • Diamantis ME (2016) Corporate criminal minds. Notre Dame L Rev 91:2049–2090

  • Dixon W (2019) Fighting cybercrime—what happens to the law when the law cannot be enforced? World Economic Forum. https://www.weforum.org. Accessed 1 Dec 2022

  • Duff RA (2007) Answering for crime: responsibility and liability in the criminal law. Hart

  • Duff RA (2018) The realm of criminal law. Oxford University Press, Oxford

  • Duff RA, Hoskins Z (2017) Consequentialist accounts. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/legal-punishment/#PurConPun

  • European Parliament (2017) Report with recommendations to the commission on civil law rules on robotics. https://www.europarl.europa.eu. Accessed 1 Dec 2022

  • Feinberg J (1974) The rights of animals and unborn generations. In: Blackstone WT (ed) Philosophy and environmental crisis. University of Georgia Press, pp 43–68

  • Fletcher G (2013) A fresh start for the objective-list theory of well-being. Utilitas 25:206–220

  • Foot P (2001) Natural goodness. Oxford University Press, Oxford

  • Gurney JK (2015) Driving into the unknown: examining the crossroads of criminal law and autonomous vehicles. Wake Forest JL Pol’y 5:393–442

  • Hallevy G (2010) The criminal liability of artificial intelligence entities—from science fiction to legal social control. Akron Intell Prop J 4:171–201

  • Hallevy G (2014) The punishability of artificial intelligence technology. In: Liability for crimes involving artificial intelligence systems, pp 185–227

  • Harman E (2009) Harming as causing harm. In: Roberts MA, Wasserman DT (eds) Harming future persons. Springer, Berlin, pp 137–154

  • Hart HLA (2008) Punishment and responsibility: essays in the philosophy of law, 2nd edn. Oxford University Press, Oxford

  • Hu Y (2018) Robot criminal liability revisited. In: Yoon JS, Han SH, Ahn SJ (eds) Dangerous ideas in law. Bobmunsa, pp 494–509

  • Hu Y (2019) Robot criminals. Mich J L Reform 52:487–531

  • Husak D (2012) ‘Broad’ culpability and the retributivist dream. Ohio St J Crim L 9:449

  • Husak DN, Callender CA (1994) Wilful ignorance, knowledge, and the “equal culpability” thesis: a study of the deeper significance of the principle of legality. Wis L Rev

  • Kadish SH (1985) Complicity, cause and blame: a study in the interpretation of doctrine. Calif L Rev 73:323–410

  • Kingston JKC (2016) Artificial intelligence and legal liability. In: Bramer M, Petridis M (eds) Research and development in intelligent systems XXXIII: incorporating applications and innovations in intelligent systems XXIV

  • Kircher AS (2009) Corporate criminal liability versus corporate securities fraud liability: analyzing the divergence in standards of culpability. Am Crim L Rev 46:157

  • LaFave WR (2018) Substantive criminal law, 3rd edn. Thomson Reuters

  • Laufer WS (1994) Corporate bodies and guilty minds. Emory LJ 43:647

  • Law DS, Versteeg M (2011) The evolution and ideology of global constitutionalism. Calif L Rev 99:1163–1257

  • Lemley MA, Casey B (2019) Remedies for robots. U Chi L Rev 86:1311

  • Lewis D (1989) The punishment that leaves something to chance. Phil Pub Aff 18:53–67

  • List C, Pettit P (2011) Group agency. Oxford University Press, Oxford

  • McCarthy J et al (1955) A proposal for the Dartmouth summer research project on artificial intelligence. http://jmc.stanford.edu

  • Moore MS (1997) Placing blame: a general theory of the criminal law

  • Mulligan C (2018) Revenge against robots. S C L Rev 69:579

  • Rhodes M (2017) The touchy task of making robots seem human—but not too human. Wired. https://www.wired.com. Accessed 1 Dec 2022

  • Rodriguez J (2018) Gödel, consciousness and the weak vs. strong AI debate. LinkedIn. https://www.linkedin.com. Accessed 1 Dec 2022

  • Sarch A (2012) Multi-component theories of well-being and their structure. Pac Philos Q 93:439–441

  • Sarch A (2015) Double effect and the criminal law. Crim Law Philos 11:453–479

  • Sarch A (2017) Who cares what you think? Criminal culpability and the irrelevance of unmanifested mental states. Law and Philos 36(6):707–750

  • Schwitzgebel E (2001) In-between believing. Philos Q 51:76–82

  • Simester AP (2005) Is strict liability always wrong? In: Simester AP (ed) Appraising strict liability. Oxford University Press, Oxford, pp 21–50

  • Simons KW (2003) Should the Model Penal Code’s mens rea provisions be amended? Ohio St J Crim L 1:179–205

  • Solaiman SM (2017) Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif Intell L 25:155–179

  • Szigeti A (2014) Are individualist accounts of collective responsibility morally deficient? In: Institutions, emotions, and group agents: contributions to social ontology. Springer, Berlin, pp 329–342

  • Tadros V (2011) The ends of harm: the moral foundations of criminal law. Oxford University Press, Oxford

  • Wale J, Yuratich D (2015) Robot law: what happens if intelligent machines commit crimes? The Conversation. http://theconversation.com/robot-law-what-happens-if-intelligent-machines-commitcrimes-44058. Accessed 1 Dec 2022

  • Weiss B (2002) What were they thinking? The mental states of the aider and abettor and the causer under federal law. Fordham L Rev 70:1341

  • Yaffe G (2012) The voluntary act requirement. In: Marmor A (ed) The Routledge Companion to Philosophy of Law. Routledge, London

  • Yasseri T (2017) Never mind killer robots—even the good ones are scarily unpredictable. The Conversation. https://theconversation.com. Accessed 1 Dec 2022


Author information

Correspondence to Ryan Abbott.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Abbott, R., Sarch, A. (2024). Punishing Artificial Intelligence: Legal Fiction or Science Fiction. In: Moura Vicente, D., Soares Pereira, R., Alves Leal, A. (eds) Legal Aspects of Autonomous Systems. ICASL 2022. Data Science, Machine Intelligence, and Law, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-031-47946-5_6


  • DOI: https://doi.org/10.1007/978-3-031-47946-5_6

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47945-8

  • Online ISBN: 978-3-031-47946-5

  • eBook Packages: Law and Criminology (R0)
