
Judicial Power Without Judicial Responsibility: The Case Against Robot Judges

  • Conference paper

In: Legal Aspects of Autonomous Systems (ICASL 2022)

Part of the book series: Data Science, Machine Intelligence, and Law (DSMIL, volume 4)

Abstract

Is it possible that in future we will have robot judges? And would this actually be permissible? The article answers these questions with a reluctant “yes” and a strict “no”, respectively.

English Version by Margaret Hiley.


Notes

  1. 1.

    Weizenbaum (1984), 227. On this groundbreaking thinker, who was a crucial influence on the theses developed in this article and ought to be key reading for today’s AI enthusiasts, see Peters (2012), 16 ff., 27: “[Weizenbaum] ranks first among all digitization critics” (orig. “in der Riege aller Kritiker der Digitalisierung an erster Stelle”).

  2. 2.

    In the following, I make no attempt to define the term artificial intelligence, as the effort involved in developing such a definition would be disproportionate to its usefulness (on this, however, see Misselhorn (2019), 17 ff.; for a concise working definition, see Mainzer (2018), 3: “A system can be called intelligent if it is able to solve problems independently and efficiently” [orig. “Ein System heißt intelligent, wenn es selbständig und effizient Probleme lösen kann”]; also see Herberger (2018)). Accordingly, AI is simply what we understand by this term in everyday life. Moreover, “[i]f one would not know what to do with a concept until one had defined it, then all philosophizing would be in a bad way.” Kant (2013), A 731/B 759, fn.

  3. 3.

    Thereto Tegmark (2017), 86 ff., Eberl (2018), 109, Ramge (2018), 40, Du Sautoy (2019), 18 ff.

  4. 4.

    Brynjolfsson and McAfee (2016), Tegmark (2017), 82 ff., Eberl (2018); Dräger and Müller-Eiselt (2019); also see the superb prologue in Domingos (2015), XI ff., which describes how our everyday lives are already shaped by learning machines.

  5. 5.

    For an overview, see Enders (2018), 722–723, Fries (2018), 418 ff. One early such attempt was Haft and Lehmann (1989), which apparently does not form the basis of more recent endeavours; Sousa Mendes (2020), 23 ff., gives an account of other early ventures.

  6. 6.

    Thereto in general Ramge (2018), 60 ff.; from a legal perspective Hartung et al. (2017), Breidenbach and Glatz (2018), Wagner (2018).

  7. 7.

    For a current overview: Bull (2019), who argues, at 483, against extending the use of AI to court proceedings.

  8. 8.

    Introduced by the Taxation Procedure Modernization Act (Gesetz zur Modernisierung des Besteuerungsverfahrens) of 18 July 2016 (BGBl. I 1679); thereto e.g. Martini and Nink (2017), Berger (2018).

  9. 9.

    See Niiler (2019); and the report Velsberg (2019).

  10. 10.

    Thereto Dräger and Müller-Eiselt (2019), 18–19, Gless and Wohlers (2019), 154–155, Höffler (2019), 58 ff. The Supreme Court of Wisconsin ruled that using COMPAS did not violate due process rights, State v. Loomis, 881 N.W.2d 749 (Wis. 2016) (summarized in Harvard Law Review 130 [2017], 1530 ff.). Also see Berk and Bleich (2013). Rostalski and Völkening (2019), 271 ff. make a proposal on how to use AI-based sentencing decisions in Germany.

  11. 11.

    As claimed by Fries (2018), 422.

  12. 12.

    Turing (1950), 433 ff.; on the test, also see Warwick (2012), 76 ff., Ramge (2018), 29 ff.

  13. 13.

    Also see Adrian (2017), 80–81.

  14. 14.

    Also see Wischmeyer (2018), 45.

  15. 15.

    Greco (2009), 372 ff.

  16. 16.

    Hoffmann-Riem (2017), 27–28, Enders (2018), 725, Hähnchen and Bommel (2018), 338, 340, Gless and Wohlers (2019) 156 ff. A similar objection is developed in Kotsoglu (2014), 451, arguing against the attempt to formalize the application of law developed by Raabe et al. (2012).

  17. 17.

    Thus (cautiously) Hoffmann-Riem (2017), 30 (orig. “weicher Entscheidungsfaktoren”), Enders (2018), 725.

  18. 18.

    Without reference to the law Weizenbaum (1984), 71–72.

  19. 19.

    Weizenbaum (1984), 223.

  20. 20.

    Explicitly on the different styles of Japanese and American judges Weizenbaum (1984), 224 ff., Weizenbaum (2001), 11–12.

  21. 21.

    Fries (2018), 425, Gless and Wohlers (2019), 160–161.

  22. 22.

    On Deep Blue Nilsson (2010), 481 ff., Ramge (2018), 39; on decision trees Nilsson (2010), 402 ff., Domingos (2015), 85 ff., Alpaydin (2016), 77 ff., Kelleher and Tierney (2018), 136 ff. On the event as a whole, also see the report by Kasparov himself (2017), esp. 73 ff.

  23. 23.

    See on the technical contribution of the programmers Silver et al. (2016); on AlphaGo, also see Kelleher and Tierney (2018), 31 ff., Tegmark (2017), 83 ff. As is always the case in this field, AlphaGo merely marked a temporary peak; in the meantime, its developers have created AlphaGo Zero, which was able to learn Go and other games on its own, without any prior human knowledge, see Silver et al. (2017); also see Kasparov (2017), 265–266.

  24. 24.

    For a general, readily understandable account of deep learning, see Nilsson (2010), 408 ff., Warwick (2012), 92 ff., Alpaydin (2016), 86 ff., Kelleher and Tierney (2018), 121 ff., Eberl (2018), 99 ff., Ramge (2018), 46 ff., Sejnowski (2018); for a technology-focused account, see Schmidhuber (2015), Mainzer (2018), 99 ff. and Aggarwal (2018). On machine learning in general Jordan and Mitchell (2015); from a legal perspective Surden (2014); European Commission for the Efficiency of Justice (CEPEJ) (2019), 35 ff.; and—decades ahead of its time!—Phillips (1990), 820 ff.

  25. 25.

    Kahneman (2011), 20 and passim.

  26. 26.

    Du Sautoy (2019), 67 ff.

  27. 27.

    Ramge (2018), 49.

  28. 28.

    Tegmark (2017), 88–89, Du Sautoy (2019), 34–35, 219–220.

  29. 29.

    Thereto Volland (2018), 12 (“The next Rembrandt”), 27 ff.; Du Sautoy (2019), 126 ff. (“The Next Rembrandt”), 195 ff. (“Emmy” as an AI composer); as early as 1999 Kurzweil (1999), 158 ff., giving examples of music, poems, and paintings. On the question of whether this actually constitutes art, Weizenbaum (2001), 98 ff.; id., in: Weizenbaum and Haefner (1990), 86–87.

  30. 30.

    Jordan and Mitchell (2015), 257, Alpaydin (2016), 38 ff., Kelleher and Tierney (2018), 99 ff., Ramge (2018), 48–49.

  31. 31.

    On the concept of prediction, see Kelleher and Tierney (2018), 104–105: “Prediction is the task of estimating the value of a target attribute for a given instance based on the values of other attributes (or input attributes) for that instance.” Also see Alpaydin (2016), 39. Thus it is not just a matter of computers foreseeing decisions in the sense of legal realism (thereto, see Surden (2014), 102, 108 ff., Frese (2015), 2092, Bues (2018), 275 ff. [280; mn. 1183–1184]), but of computers making these decisions themselves. On the distinction between training set and validation set, see Kelleher and Tierney (2018), 147, Domingos (2015), 75 ff., Alpaydin (2016), 155.

  32. 32.

    Kelleher and Tierney (2018), 99, 104–105; a detailed explanation, written specifically for lawyers, is provided by Surden (2014), 90 ff.; also see Domingos (2015), 151–152 (describing a different architecture, naive Bayes).

  33. 33.

    Alpaydin (2016), 67–68, Kelleher and Tierney (2018), 33.

  34. 34.

    Jordan and Mitchell (2015), 255.

  35. 35.

    Eberl (2018), 103 ff.

  36. 36.

    To give just two examples: Ashley (2017), 234 ff. with numerous further references; and CEPEJ (2019), 41 ff.

  37. 37.

    Weizenbaum and Haefner (1990), 88, Welzer (2018), 142 ff.

  38. 38.

    See in detail—from the older literature on methodology—Larenz (1958), 281 ff., Larenz (1965), 3; from the current literature Möllers (2017), §13 mn. 1 ff.

  39. 39.

    Further reservations—especially the question of how learning should take place in cases of disagreement between courts or between the case law and the literature—are raised by Enders (2018), 725–726.

  40. 40.

    Roxin and Greco (2020), §12 mn. 88a ff. present an attempt at systematization.

  41. 41.

    Domingos (2015), 175, 177 ff., 184–185.

  42. 42.

    Du Sautoy (2019), 126 ff. (346 images were made available to the computer as a training set in the project “The Next Rembrandt”), 211–212 (389 Bach chorales).

  43. 43.

    See the endeavours reported by Misselhorn (2019), 114 ff.

  44. 44.

    Aletras et al. (2016); see thereto also CEPEJ (2019), 14 fn. 7, 28 et al.

  45. 45.

    Gless and Wohlers (2019), 158.

  46. 46.

    Fries (2018), 425, Wischmeyer (2018), 23 ff., who for this reason holds that AI should not be used “when bringing criminal charges” (orig. “beim strafrechtlichen Schuldvorwurf”) “for reasons of principle” (orig. “aus prinzipiellen Gründen”, 24; 35); likewise CEPEJ (2019), 53 ff.; generally Pfitzenmaier (2016), 18 ff., Dräger and Müller-Eiselt (2019), 40 ff., Misselhorn (2019), 134–135.

  47. 47.

    Eberl (2018), 117–118.

  48. 48.

    O'Neil (2016), 27, 87, 133.

  49. 49.

    Kelleher and Tierney (2018), 192, Dräger and Müller-Eiselt (2019), 46 ff.

  50. 50.

    O’Neil (2016), 27.

  51. 51.

    O’Neil (2016), 7.

  52. 52.

    Foer (2017), 71.

  53. 53.

    For a particularly impressive account, see O’Neil (2016), 3, 28 ff., who coins the pun “weapons of math destruction”; Eubanks (2018). Also see Kelleher and Tierney (2018), 190 ff., Ernst (2017), 1032 ff., Wischmeyer (2018), 26 ff., incl. many further references in Footnote 102; Misselhorn (2019), 80, Orwat (2019), Webb (2019), 254–245.

  54. 54.

    O’Neil (2016), 8.

  55. 55.

    Angwin et al. (2016); accounts of this are given in Martini (2018), Enders (2018), 726, Dräger and Müller-Eiselt (2019), 44 ff., 148 ff., Webb (2019), 114 ff., Martínez Garay (2019), 115 ff. (172 ff.), also see CEPEJ (2019), 66–67.

  56. 56.

    According to Kelleher and Tierney (2018), 65, programmers spend 79% of their time preparing their data sets.

  57. 57.

    Kelleher and Tierney (2018), 47, Wischmeyer (2018), 23.

  58. 58.

    Volland (2018), 70; Dräger and Müller-Eiselt (2019), 135–6 (problem of the “spiral of bad data” [orig. “Spirale der schlechten Daten”]).

  59. 59.

    Kelleher and Tierney (2018), 34.

  60. 60.

    Gless and Wohlers (2019), 164 insist on this procedural precaution. It is to be implemented in Estonia; see the references in Footnote 9 above.

  61. 61.

    Domingos (2015), 65.

  62. 62.

    Thereto (critically) Taplin (2017), attributing this motto to Facebook founder Mark Zuckerberg; Webb (2019), 53–54 (likewise critically; the following sentence, which is quoted verbatim, is taken from this book).

  63. 63.

    Webb (2019), 54. A similar criticism is voiced by Lanier (2010), 51, who complains about the “lack of intellectual modesty in the computer science community”: “An aeronautical engineer would never put a passenger in a plane based on an untested, speculative theory, but computer scientists commit analogous sins all the time.”

  64. 64.

    Tegmark (2017), 105; also see Wallach and Allen (2009), 71, Misselhorn (2019), 89.

  65. 65.

    Greco (2015), 44–45.

  66. 66.

    From a legal perspective Hoffmann-Riem (2017), 29–30, Enders (2018), 726, Martini (2018), 1018; for a general account Pasquale (2015); O'Neil (2016), 8–9 (“dictates from the algorithmic gods”); Misselhorn (2019), 80, Webb (2019), 111; on bots and messaging software in particular Kurz and Rieger (2017), 85 ff. (91–2); with a focus on technology Mainzer (2018), 245 ff.

  67. 67.

    Tegmark (2017), 106–107.

  68. 68.

    From a legal perspective Martini (2018), 1020 ff., Wischmeyer (2018), 22 (who recalls the obligation to provide information under Article 13(2)(f) and Article 14(2)(g) GDPR and the right to information under Article 15(1)(h) GDPR), 42 ff. (a very differentiated account); from a philosophical perspective Bostrom and Yudkowsky (2014), 316 ff., Nida-Rümelin and Weidenfeld (2018), 77–78; from the popular scientific literature O'Neil (2016), 214 and passim; Dräger and Müller-Eiselt (2019), 182 ff. (also commenting on pioneering efforts in this regard, 13 ff.).

  69. 69.

    See the German government’s Artificial Intelligence Strategy, BT-Drs. 19/5880, 13, 16, 32, 39–40; also see Husain (2017), 44 ff., Wischmeyer (2018), 61, Deeks (2019), 1829, Gaede (2019), 27, Strandburg (2019), 1851.

  70. 70.

    Likewise Wischmeyer (2018), 54 ff.

  71. 71.

    Thereto Volland (2018), 27 ff.

  72. 72.

    Gless and Wohlers (2019), 159 ff. (orig. “Nachvollziehen einer derartigen Auskunft”).

  73. 73.

    Reichenbach (1938), 36 is the seminal account on this topic; on the present debate Schickore and Steinle (2006); from a (criminal) legal theory perspective Hassemer (1990), 116 ff. (drawing a distinction between the “production” [orig. “Herstellung”] and the “portrayal” [“Darstellung”] of a decision).

  74. 74.

    See Elhardt (2016), 59 ff.

  75. 75.

    For a representative account, see Paeffgen, SK-StPO, 5th ed. 2016, §112 mn. 21c.

  76. 76.

    Wischmeyer (2018), 44–45, 54, in his discussion of the black box argument.

  77. 77.

    Wischmeyer (2018), 54 (orig. “Auch Menschen sind für andere Menschen—und für sich selbst—‘black boxes’”).

  78. 78.

    See BGHZ 200, 38 (mn. 26 ff.) with regard to the so-called score formula used in SCHUFA credit reports; critically on trade secrecy in the case of incriminating algorithmic decisions Kurz and Rieger (2017), 92, 96–97, O'Neil (2016), 29, Wischmeyer (2018), 64–65, incl. references specifically concerning criminal justice in fn. 260.

  79. 79.

    See the previous fn.

  80. 80.

    See Bartlett (2018), 31, who calls trade secrets the “modern equivalent of the recipe for Coca-Cola”.

  81. 81.

    Arguing strongly against “turn[ing] every problem into a technical problem” Weizenbaum (1984), 180 (quote), 274 ff., 227. Similarly—with regard to another question—Tegmark (2017), 159. Also Gless and Wohlers (2019), 161.

  82. 82.

    “Judicial tenure may only be given in the case of a person who 1. is a German in terms of Article 116 of the Basic Law, 2. ...”; a German is a person who possesses German citizenship, Art. 116(1) var. 1 GG, Section 1 Nationality Act (Staatsangehörigkeitsgesetz, StAG); this only includes natural persons, see BGH NJW 2018, 2742. Further regulations are mentioned by Enders (2018), 723.

  83. 83.

    For a representative commentary on this doctrine, see Grzeszick, in: Maunz/Dürig, Grundgesetz-Kommentar (status: December 2007, Lfg. 51), Art. 20 mn. 105 ff.; Schulze-Fielitz, in: Dreier, Grundgesetz Kommentar vol. II, 3rd ed. 2015, Art. 20 mn. 113 ff.

  84. 84.

    Enders (2018), 723. For this reason, Enders does not want to permit decisions even to be prepared by AI (without automatically adopting them)—ultimately rightly, as we will see shortly (2.2.2 b] aa][2]).

  85. 85.

    As stated clearly in Enders (2018), 723, whose full line of argument reads as follows: “In this regard, a lawful judge within the meaning of this norm definitely is a natural person34” (orig. “Dabei steht fest, dass gesetzlicher Richter im Sinne dieser Norm eine natürliche Person ist”). The corresponding fn. 34 opens with the words “As taken fully for granted in ...” and goes on to quote two commentaries on the Basic Law (orig. “So völlig selbstverständlich bei …”).

  86. 86.

    Bull (2015), 83, including further references on earlier versions of this rule.

  87. 87.

    On this contrast in the history of legal thought (in a still unsurpassed account), Welzel (1962).

  88. 88.

    Augustine (1998), 147.

  89. 89.

    Also see Nida-Rümelin and Weidenfeld (2018), 83 ff.: they do not act on their own reasons.

  90. 90.

    Turkle (2012), 85 ff., on the fundamental significance of the “gaze”, which is what establishes symmetry in the first place.

  91. 91.

    I adopt this phrasing from Weizenbaum’s critique of the proposal that ELIZA, a chatbot he programmed, be used as a psychotherapist (Weizenbaum [1984], 5–6; Weizenbaum [2006], 96).

  92. 92.

    Shanahan (2015), 113 ff., Nida-Rümelin and Weidenfeld (2018), 110; likewise tending towards this view Eidenmüller (2017), 775 ff. The line of argument resembles the well-known thought experiment of the Chinese room that goes back to Searle (1980), 417 (I quote from Haugeland (1997), 184 ff.) and that Nida-Rümelin and Weidenfeld (2018), 115 ff. use as the crucial point of their argument; on the discussion of this argument, concerning which there is a bewildering plethora of literature, Preston and Bishop (2002), Carter (2007), 175 ff.

  93. 93.

    Weizenbaum (1984), 270: “Respect, understanding, and love are not technical problems”.

  94. 94.

    Weizenbaum (2001), 76: “What we do with computers is almost all simulations, models” (orig. “Was wir mit Computern machen, sind fast alles Simulationen, Modelle”); Turkle (2012), 101: “... sociable technology ... promises friendship but can only deliver performances. Do we really want to be in the business of manufacturing friends that will never be friends?”; 124: “... a robot cannot pretend because it can only pretend”; Nida-Rümelin and Weidenfeld (2018), 41; Kornwachs (2019), 336 ff.; also see Misselhorn (2019), 86–87, who ascribes to machines “quasi opinions” (orig. “Quasi-Meinungen”) and “quasi wishes” (orig. “Quasi-Wünsche”). On the simulation of emotions by so-called “cobots”, which are used especially in the field of care for the elderly, Ramge (2018), 75 ff., Misselhorn (2019), 136 ff. and esp. Turkle (2012), 103 ff.—Incidentally, this constitutes another example of the denial of responsibility criticized here (for a closely related debate, see Turkle (2005), 295; an emphatic and comprehensive account is then given in Turkle (2012), 23 ff., 124–125 and passim).

  95. 95.

    As also argued by Nida-Rümelin and Weidenfeld (2018), 108 ff.: “Why AIs cannot think” (orig. “Warum KIs nicht denken können”).

  96. 96.

    A very similar point is mentioned by Turkle (2012), 286: “... knowledge of mortality and an experience of the life cycle are what make us uniquely human”. Rather tellingly, this banal fact does not feature in the long list of properties that Turing (1950), 443 ff. discusses. Bostrom (2018), 183, mentions them briefly, but asserts, without providing even the slightest justification, that “a posthuman being ... could be vulnerable, dependent, and limited.” (Article originally published in English as “Why I Want to be a Posthuman When I Grow Up”, available at: www.nickbostrom.com, last accessed: 14 February 2022 [quote on 21].) On human beings’ particular vulnerability in the context of a more general discussion of human dignity, with an attempt to link this to criminal law, Werkmeister (2015), 94 ff. incl. further references. I hope this does not mean I have committed what Weizenbaum and Haefner (1990), 90 (quotation), 101, denounces as the “grand error” (orig. “großen Fehler”) of “defining being human according to what humans can do and computers cannot” (orig. “Menschsein nach dem zu definieren, was der Mensch kann und der Computer nicht”). Also see Turkle (2005), 285: “Where once we were rational animals, now we are feeling computers, emotional machines.”

  97. 97.

    Also see Weizenbaum (1984), 282, Weizenbaum and Haefner (1990), 105, Turkle (2012), 85 ff.

  98. 98.

    Montesquieu, De l’esprit des lois, in: Oeuvres complètes (Aux Éditions du Seuil, Paris, 1964), 527 ff. (first published 1748), Livre 11, Chap. 6 (587). English translation taken from The Spirit of the Laws, ed. and transl. Anne M. Cohler, Basia Carolyn Miller, and Harold Samuel Stone, Cambridge: Cambridge University Press, 1989, 158.

  99. 99.

    This criticism likewise is already found in Weizenbaum (1984), 228 ff. (under the heading “Incomprehensible Programs”); Weizenbaum (2006), 116–117: “I contend that most of the current computer systems, the large computer systems operating on a global scale, in the military, for example, are not transparent” (orig. “Ich behaupte, dass der größte Teil der aktuellen Computersysteme, der großen weltumspannend agierenden Computersysteme, im Militärbereich zum Beispiel, nicht durchschaubar sind”, 116); Misselhorn (2019), 132: “Problem of many hands” (orig. “Problem der vielen Hände”).

  100. 100.

    For references on deep learning, see Footnote 24 above as well as Reichwald and Pfisterer (2016), 208, Kirn and Müller-Hengstenberg (2014), 225.

  101. 101.

    Fan (2019), 70–71, Webb (2019), 110–111.

  102. 102.

    Webb (2019), 8–9 and passim points out these dangers with regard to the dominant large tech companies; also see Boehme-Neßler (2017), 3036.

  103. 103.

    Gless and Wohlers (2019), 163.

  104. 104.

    See thereto the references in Footnote 8 above.

  105. 105.

    These examples are taken from Hillgruber, in: Maunz and Dürig, GG (as of December 2007, Lfg. 51), Art. 92 mn. 56. A longer list is provided by Schulze-Fielitz in: Dreier, Grundgesetz Kommentar, Vol. 3, 3rd ed. 2018, Art. 92 mn. 44 ff. By contrast, Gless and Wohlers (2019), 152 ff. do not see the transfer of “routine decisions” (orig. “Routine-Entscheidungen”) to machines as a problem, but do not categorize decisions on pre-trial detention as such routine decisions. The thoughts developed here can also be understood as an attempt to flesh out this vague concept of the routine decision. They also take into account most of the examples mentioned by Engel (2014), 1100: Engel considers using computers in the partially automated proceedings for payment orders pursuant to Section 689(1) second sentence ZPO and in requests for information in matters concerning the commercial registry, but not in decisions concerning penalty orders. His idea that computers “possibly” (orig. “womöglich”) could be deployed to review the admissibility of a lawsuit presents a more alarming prospect, however, for in this case, the machine would place itself between the citizen and his or her judge.

  106. 106.

    Thereto in detail, including numerous references, Greco (2015), 261 ff.

  107. 107.

    BVerfGE 133, 168 (204 mn. 65 ff.).

  108. 108.

    Critically on the contradictions of this rule Greco (2016), 4 ff., incl. numerous references.

  109. 109.

    O’Neil (2016), 8.

  110. 110.

    On the culpability of machines Hilgendorf (2012), 128 ff., Schuhr (2012), 43; likewise Simmler and Markwalder (2017), 41 ff., Hage (2017), 255, 261 ff., and Gaede (2019), 64–65; Dennett (2017), 397 also holds that responsible machines are possible. In summary, including references to the relevant literature, Roxin and Greco (2020), §8 mn. 66 ff.

  111. 111.

    Arguing along these lines Warwick (2012), 143, Bostrom and Yudkowsky (2014), 320 ff., Shanahan (2015), 182 ff., Bostrom (2018), 99 ff., Gaede (2019), 42 ff. (“self-aware artificial intelligence” [orig. “selbstbewusste künstliche Intelligenz”]); probably also Tegmark (2017), 109; also see Gunkel (2018).

  112. 112.

    Also see Footnote 159.

  113. 113.

    Also see Kurzweil (1999), 2: “The primary political and philosophical issue of the next century will be the definition of who we are.” Husain (2017), 167; also Turkle (2005), 29, 236–237, 294.

  114. 114.

    Highly critically, with many further references, Weizenbaum (1984), 177 ff., 187 ff., 226 ff.; also see Lanier (2010), 75, where he criticizes an ideology that denies the riddle of the existence of experiences as a “spiritual failure”; at 153 ff. he argues against “computationalism”, a theory according to which “the world can be understood as a computational process, with people as subprocesses.” Turkle (2005), 219 ff. paints an impressive picture of the first generation of the “new philosophers of artificial intelligence”.

  115. 115.

    See esp. Turing (1950), 442 ff.: the question of whether machines can think is “too meaningless to deserve discussion” beyond the context of the test.

  116. 116.

    Quoted from Weizenbaum (2006), 98, who engages critically with this and similar views; in more detail Weizenbaum (2001), 35 ff., 44 ff.; also see Turkle (2005), 233–234 (who attributes the expression “bloody mess of organic matter” to Minsky). On Minsky also see Liebig (2001), 4 ff.

  117. 117.

    On this movement also see Cordeiro (2003), 65 ff., Bostrom (2018), 38 ff. (both authors take a positive view of this philosophy). The borderline to “transhumanism”, which is primarily concerned with enhancement, is blurred; on the latter, see Sorgner (2016), Göcke and Meier-Hamidi (2018). Also see Loh (2019).

  118. 118.

    Moravec (1988), 108 ff., Kurzweil (2006), 198 ff., 383 ff., Kurzweil (2013), 247; critically on Moravec Weizenbaum (2006), 104 ff., Weizenbaum (2001), 44 ff. On uploading, also see Shanahan (2015), 196 ff., Bostrom and Yudkowsky (2014), 324 ff., Bostrom (2018), 41 ff.

  119. 119.

    Kurzweil (2006), 257; he refers to his own philosophy as “patternism”, 386. Likewise Moravec (1988), 117, Hofstadter (2007), 257–258, 288.

  120. 120.

    Kurzweil (2006), 330. Also see Kurzweil (2013), 129: “essential equivalence between a computer—with the right software—and a (conscious) mind.”

  121. 121.

    Kurzweil (2006), 203.

  122. 122.

    Kurzweil (2006), 325, Kurzweil (1999), 129: “We will be software, not hardware.”

  123. 123.

    Moravec (1988), 117.

  124. 124.

    Bostrom (2001), cited from Kurzweil (2013), 369 (who concurs).

  125. 125.

    Bostrom (2001; revised 2005). Also see Bostrom and Yudkowsky (2014), 322, who propose a “Principle of Substrate Non-Discrimination”: “If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” As in Gaede (2019), the second condition (“same conscious experience”) is missing for non-contingent reasons.

  126. 126.

    Bostrom (2018), 91 ff.

  127. 127.

    Kurzweil (2013), 415.

  128. 128.

    Weizenbaum (2001), 42 (quote), 52 ff. (orig. “Daß die Artificial-Intelligence-Elite glaubt, Gefühle wie Liebe, Kummer, Freude, Trauer und alles, was die menschliche Seele mit Gefühlen und Emotionen aufwühlt, ließen sich einfach mir nichts dir nichts in einen Maschinenartefakt mit Computergehirn transferieren, zeigt, wie mir scheint, eine Verachtung für das Leben, eine Verleugnung ihrer eigenen menschlichen Erfahrung, um es vorsichtig auszudrücken”). Another outspoken critic is Welzer (2018), 181, who remarks on Kurzweil’s idea of uploading: “the inventor of this ‘solution’, Ray Kurzweil, is generally regarded not as crazy but as a genius, which in itself is a telling indicator of the present intellectual horizon” (orig. “der Erfinder dieser ‘Lösung’, Ray Kurzweil, gilt allgemein nicht als gaga, sondern als Genie, was an sich schon ein Indikator für den geistigen Horizont unserer Gegenwart ist”). Similarly Liebig (2001), 6: “mad idea”, “ideology which is as antihuman as it is anti-progress”, “... grotesque ...”; Lanier (2010), 29 ff.; Nida-Rümelin and Weidenfeld (2018), 28: “Only in philosophy seminars, certain feature pages, and AI circles can the indistinguishability of humans and machines be asserted” (orig. “Nur im philosophischen Oberseminar oder in manchen Feuilletons und KI-Zirkeln kann die Ununterscheidbarkeit von Menschen und Maschinen behauptet werden”); also see Geraci (2010).

  129. 129.

    Vinge (1993), 14.

  130. 130.

    Shanahan (2015), 93; similarly Minsky (1994), 109: “Once delivered from the limitations of biology, we will...”.

  131. 131.

    Kurzweil (1999), 150.

  132. 132.

    Also see Shanahan (2015), 194–195: “close cousin of the Nazi fanatic”.

  133. 133.

    Thereto in greater detail Ambos (2018), §7 mn. 127 ff.

  134. 134.

    Bostrom (2014), 141.

  135. 135.

    Minsky (1994), 113; quoted approvingly by Kurzweil (2013), 260.

  136. 136.

    See the eponymous book by Moravec (1988), e.g. 1: “We humans will benefit for a time from their labors, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away. Very little need be lost in this passing of the torch...” On this matter also see Husain (2017), 181, who writes that we will become “creators of new life”, and humanity will become something like the obsolete computers we visit in retro museums (183–184).

  137. 137.

    Prinz (2012), 79 ff. (he also suggests that it is irrational to fear death, to want humanity to continue, or not to be concerned for the happiness of machines, 85–86); Bostrom (2018), 191 ff.

  138. 138.

    Once again Weizenbaum (1984), 208–209, Weizenbaum (2001), 42: “In other words, there are things that humans know only because they have a body” (orig. “Es gibt mit anderen Worten Dinge, die Menschen nur deshalb wissen, weil sie einen Körper haben”). The point is not to be confused with the well-known argument of Dreyfus (1967), 19 ff.; Dreyfus (1972), 147 ff., Dreyfus (1992), 235 ff.; his key objection that it is impossible to attribute intelligent behaviour to rules formalized in advance proceeds from a top-down perspective that by now has been superseded and no longer serves as a foundation for more recent computing successes (see 2.3.1. a] above).

  139. 139.

    Also see Weizenbaum and Haefner (1990), 103.

  140. 140.

    See Warwick (2012), 84–85 following Turing (1950), 447.

  141. 141.

    Thereto Wallach and Allen (2009), 68.

  142. 142.

    Thereto Wallach and Allen (2009), 64 ff., Warwick (2012), 10–11, 140–141, Shanahan (2015), 36 ff., Misselhorn (2019), 27 ff., 43 ff.; in monograph form Shanahan (2010); from a legal perspective Eidenmüller (2017), 768 ff., Gaede (2019), 20. The research field “artificial life” deals with questions that are so far removed from what we understand by life that it needs no further mention; for an instructive account on this topic Warwick (2012), 116 ff., Bedau (2014), 295 ff.

  143. 143.

    Warwick (2012), 146 ff. For a critical account, see Weizenbaum and Haefner (1990), 105.

  144. 144.

    See Footnote 92 above.

  145. 145.

    Minsky (1988), 27, 30, and passim; also see Minsky (2006), 298 ff.

  146. 146.

    Kurzweil (2013), esp. 458 ff., discussing the Chinese Room argument (Footnote 92 above): “I understand English, but none of my neurons do.”

  147. 147.

    E.g. Hofstadter (2007), inter alia 291–292. Also see Turkle (2005), 265–266.

  148. 148.

    For a detailed discussion of this debate, including references, Roxin and Greco (2020), §19 mn. 52a ff.

  149. 149.

    For such a discussion, see the previous fn.

  150. 150.

    Plato (1925), 294A.

  151. 151.

    Weber (1972), 140 ff.

  152. 152.

    On this logic, see e.g. (for a classical example) Montesquieu (1964), Book XI, ch. 4 (586): “Qui le dirait! la vertu même a besoin des limites” (“Who would think it! Even virtue has need of limits”); (for a modern example) Popper (1966), 131. Also see Greco (2013), 61 ff.

  153. 153.

    Article 1(1) of the 1948 Chiemsee Draft of a Basic Law for a Federation of German States (Chiemseer Entwurf eines Grundgesetzes für einen Bund deutscher Länder, orig. “um des Menschen willen da ist”).

  154. 154.

    Particularly Kurzweil (2013), who celebrates this notion; Chalmers (2016), 171 ff. (first published 2010), Husain (2017), esp. 180 ff.; Bostrom (2014) weighs the advantages and disadvantages, defining “superintelligence” as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (26); Shanahan (2015), esp. 204 ff.; Tegmark (2017), 44, 134 ff. and passim shows alarm; also see Domingos (2015), 25, who dreams of a master algorithm able to produce all past, present, and future knowledge; also see Warwick (2012), 74 ff., Ramge (2018), 81 ff. One often speaks of “strong” AI in contrast to the “weak” AI that exists thus far, see for instance Warwick (2012), 64–65, Ramge (2018), 18 ff.

  155. 155.

    Very similarly Bartlett (2018), 38–39, who speaks of a “moral singularity”, “the point at which we will start to delegate substantial moral and political reasoning to machines.” This could be a “point of no return”: “once we start relying on it, we’ll never stop.” Similar concerns are expressed by Volland (2018), 233 ff. (who makes particular use of the example of robots creating art); Foer (2017), 77, Welzer (2018), 226 ff.; and Carr, as cited in the next fn.

  156. 156.

    An impressive account is given in Carr (2014), 65 ff., whose argument likewise is not purely psychological; see Carr (2011), esp. 211–212, 223–234.

  157. 157.

    Enders reaches a different conclusion, as in Footnote 84.

  158. 158.

    Zarkadakis (2015), 99 sees this differently, dreaming of a “new social contract” according to which we are governed by machines, i.e. by “perfect reason and incorruptible goodwill”.

  159. 159.

    In the current age of externally funded research, it would certainly be more promising to argue the opposite (an interesting topic, by the way, for a courageous study in the sociology of science; first thoughts on this can be found in Schünemann (2018), 326–327).

  160. 160.

    I adopt this term from Turkle (2012), 291 ff., who uses it primarily in reference to the introduction of robots in care for the elderly.

  161. 161.

    Weizenbaum (1984), 226–227.

References

  • Adrian A (2017) Der Richterautomat ist möglich. Rechtstheorie 48:77–121
  • Aggarwal CC (2018) Neural networks and deep learning: a textbook. Springer, New York
  • Aletras N, Tsarapatsanis D, Preotiuc-Pietro D et al (2016) Predicting judicial decisions of the European Court of Human Rights. PeerJ Computer Science 2:e93. https://doi.org/10.7717/peerj-cs.93
  • Alpaydin E (2016) Machine learning. The MIT Press, Cambridge/London
  • Ambos K (2018) Internationales Strafrecht, 5th edn. Beck, München
  • Angwin J, Larson J, Mattu S et al (2016) Machine bias. Available at https://www.propublica.org. Accessed 28 July 2019
  • Ashley KD (2017) Artificial intelligence and legal analytics. Cambridge University Press, Cambridge. https://doi.org/10.1017/9781316761380
  • Augustine (1998) The city of God [De civitate Dei]. Dyson RW (ed and transl). Cambridge University Press, Cambridge
  • Bartlett J (2018) The people vs. tech. Penguin, London
  • Bedau MA (2014) Artificial life. In: Frankish K, Ramsey WM (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge, pp 296–315. https://doi.org/10.1017/CBO9781139046855.019
  • Berger A (2018) Der automatisierte Verwaltungsakt. NVwZ 37:1260–1264
  • Berk RA, Bleich J (2013) Statistical procedures for forecasting criminal behavior: a comparative assessment. Criminol Public Policy 12:513–544
  • Boehme-Neßler V (2017) Die Macht der Algorithmen und die Ohnmacht des Rechts. NJW 70:3031–3037
  • Bostrom N (2001/2005) Ethical principles in the creation of artificial minds. Available at https://nickbostrom.com/ethics/aiethics.html. Accessed 28 July 2019
  • Bostrom N (2014) Superintelligence. Paths, dangers, strategies. Oxford University Press, Oxford
  • Bostrom N (2018) Die Zukunft der Menschheit. Suhrkamp, Berlin
  • Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey WM (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge
  • Breidenbach S, Glatz F (eds) (2018) Rechtshandbuch Legal Tech. Beck, München
  • Brynjolfsson E, McAfee A (2016) The second machine age. Norton & Company, New York
  • Bues MM (2018) Artificial Intelligence im Recht. In: Hartung M, Bues MM, Halbleib G (eds) Legal Tech. Beck, München, p 275
  • Bull HP (2015) Sinn und Unsinn des Datenschutzes. Mohr Siebeck, Tübingen
  • Bull HP (2019) Über die Pläne zur flächendeckenden Technisierung der öffentlichen Verwaltung. CR 35:478–484
  • Carr N (2011) The shallows. What the internet is doing to our brains. WW Norton & Company, New York/London
  • Carr N (2014) The glass cage. How our computers are changing us. WW Norton & Company, New York/London
  • Carter M (2007) Minds and computers. An introduction to the philosophy of artificial intelligence. Edinburgh University Press, Edinburgh
  • Chalmers DJ (2016) The singularity: a philosophical analysis. In: Schneider S (ed) Science fiction and philosophy, 2nd edn. Wiley
  • Cordeiro JL (2003) Future life forms among posthumans. J Fut Stud 8:65–72
  • Deeks A (2019) The judicial demand for explainable artificial intelligence. Columbia Law Rev 119:1829–1850
  • Dennett D (2017) From bacteria to Bach and back. Penguin, New York
  • Domingos P (2015) The master algorithm. Penguin, New York
  • Dräger J, Müller-Eiselt R (2019) Wir und die intelligenten Maschinen. DVA, München
  • Dreyfus HL (1967) Why computers must have bodies in order to be intelligent. Rev Metaphysics 21:13–32
  • Dreyfus HL (1972) What computers can’t do: the limits of artificial intelligence. The MIT Press, New York
  • Dreyfus HL (1992) What computers still can’t do. A critique of artificial reason. The MIT Press, Cambridge, MA
  • Du Sautoy M (2019) The creativity code. Belknap Press, London
  • Eberl U (2018) Smarte Maschinen, 2nd edn. Hanser, München
  • Eidenmüller H (2017) The rise of robots and the law of humans. ZEuP 4:765–777
  • Elhardt E (2016) Tiefenpsychologie: Eine Einführung, 18th edn. Kohlhammer, Berlin
  • Enders P (2018) Einsatz künstlicher Intelligenz bei der juristischen Entscheidungsfindung. JA 721–727
  • Engel M (2014) Algorithmisierte Rechtsfindung als juristische Arbeitshilfe. JZ 69:1096–1100
  • Ernst C (2017) Algorithmische Entscheidungsfindung und personenbezogene Daten. JZ 72:1026–1036
  • Eubanks V (2018) Automating inequality. How high-tech tools profile, police, and punish the poor. St. Martin's Press, New York
  • European Commission for the Efficiency of Justice (CEPEJ) (2019) European ethical charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe, Strasbourg
  • Fan S (2019) Will AI replace us? Thames & Hudson, London
  • Foer F (2017) World without mind. Why Google, Amazon, Facebook and Apple threaten our future. Vintage, London
  • Frese Y (2015) Recht im zweiten Maschinenzeitalter. NJW 68:2090–2092
  • Fries M (2018) Automatische Rechtspflege. RW 2018:414–430
  • Gaede K (2019) Künstliche Intelligenz—Rechte und Strafen für Roboter? Nomos, Baden-Baden
  • Geraci R (2010) Apocalyptic AI. Oxford University Press, Oxford
  • Gless S, Wohlers W (2019) Subsumtionsautomat 2.0: Künstliche Intelligenz statt menschlicher Richter? In: Böse M, Schumann KH, Toepel F (eds) FS für Urs Kindhäuser. Nomos, Baden-Baden, pp 147–165
  • Göcke BP, Meier-Hamidi F (eds) (2018) Designobjekt Mensch. Die Agenda des Transhumanismus auf dem Prüfstand. Herder, Freiburg
  • Greco L (2009) Lebendiges und Totes in Feuerbachs Straftheorie. Duncker & Humblot, Berlin
  • Greco L (2013) Tugend im Strafverfahren. In: Zöller M et al (eds) FS Jürgen Wolter zum 70. Geburtstag. Duncker & Humblot, Berlin, pp 61–86
  • Greco L (2015) Strafprozesstheorie und materielle Rechtskraft. Duncker & Humblot, Berlin
  • Greco L (2016) Fortgeleiteter Schmerz—Überlegungen zum Verhältnis von Prozessabsprache, Wahrheitsermittlung und Prozessstruktur. GA 2016:1–15
  • Gunkel D (2018) Robot rights. MIT Press, Cambridge, MA/London
  • Haft F, Lehmann H (eds) (1989) Das LEX-Projekt. Entwicklung eines Expertensystems. Attempto, Tübingen
  • Hage J (2017) Theoretical foundations for the responsibility of autonomous agents. Artif Intell Law 25:255–271
  • Hähnchen S, Bommel R (2018) Digitalisierung und Rechtsanwendung. JZ 73:334–340
  • Hartung M, Bues MM, Halbleib G (eds) (2017) Legal Tech. Beck, München
  • Hassemer W (1990) Einführung in die Grundlagen des Strafrechts, 2nd edn. Beck, München
  • Haugeland J (1997) Mind design II. MIT Press, Cambridge, MA/London
  • Herberger M (2018) “Künstliche Intelligenz” und Recht. NJW 71:2815–2829
  • Hilgendorf E (2012) Können Roboter schuldhaft handeln? In: Beck S (ed) Jenseits von Mensch und Maschine. Nomos, Baden-Baden, pp 119–132
  • Höffler K (2019) Die Herausforderungen der globalisierten Kriminalität an die Kriminologie—am Beispiel Risikoprognosen. In: Dessecker A, Harrendorf S, Höffler K (eds) Angewandte Kriminologie—justizbezogene Forschung. Universitätsverlag Göttingen, Göttingen
  • Hoffmann-Riem W (2017) Verhaltenssteuerung durch Algorithmen. AöR 142:1–42
  • Hofstadter D (2007) I am a strange loop. Basic Books, New York
  • Husain A (2017) The sentient machine. Scribner, New York
  • Jordan MI, Mitchell TM (2015) Machine learning: trends, perspectives, and prospects. Science 349:255–260
  • Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, New York
  • Kant I (2013) Critique of pure reason. In: Guyer P, Wood AW (eds) The Cambridge edition of the works of Immanuel Kant. Cambridge University Press, Cambridge
  • Kasparov G (2017) Deep thinking. John Murray, London
  • Kelleher JD, Tierney B (2018) Data science. MIT Press, Cambridge, MA/London
  • Kirn S, Müller-Hengstenberg CD (2014) Intelligente (Software-)Agenten: Von der Automatisierung zur Autonomie? Verselbstständigung technischer Systeme. MMR 4:225–232
  • Kornwachs K (2019) Smart robots—smart ethics? DuD 43:332–341
  • Kotsoglu KN (2014) Subsumtionsautomat 2.0. Über die (Un-)Möglichkeit einer Algorithmisierung der Rechtserzeugung. JZ 69:451–457
  • Kurz C, Rieger F (2017) Autonomie und Handlungsfähigkeit in der digitalen Welt. In: Augstein J (ed) Reclaim Autonomy. Selbstermächtigung in der digitalen Weltordnung. Suhrkamp, Berlin, pp 85–98
  • Kurzweil R (1999) The age of spiritual machines. Penguin, New York
  • Kurzweil R (2006) The singularity is near. When humans transcend biology. Penguin, New York
  • Kurzweil R (2013) How to create a mind. Penguin, New York
  • Lanier J (2010) You are not a gadget. Vintage, New York
  • Larenz K (1958) Wegweiser zu richterlicher Rechtsschöpfung. In: FS Nikisch. Mohr Siebeck, Tübingen, pp 275–305
  • Larenz K (1965) Richterliche Rechtsfortbildung als methodisches Problem. NJW 18:1–10
  • Liebig G (2001) The cult of artificial intelligence vs. the creativity of the human mind. Fidelio 10:4–15
  • Loh J (2019) Trans- und Posthumanismus (Zur Einführung). Junius, Hamburg
  • Mainzer K (2018) Künstliche Intelligenz—Wann übernehmen die Maschinen?, 2nd edn. Springer, Berlin
  • Martínez Garay LM (2019) La relación entre culpabilidad y peligrosidad. In: Maraver Gómez M, Pozuelo Arquimbau L (eds) La culpabilidad. Montevideo, pp 115–200
  • Martini M (2018) Algorithmen als Herausforderung für die Rechtsordnung. JZ 72:1017–1025
  • Martini M, Nink D (2017) Wenn Maschinen entscheiden. Persönlichkeitsschutz in vollautomatisierten Verwaltungsverfahren. NVwZ 36:1–14
  • Minsky M (1988) The society of mind. Simon & Schuster, New York
  • Minsky M (1994) Will robots inherit the earth? Sci Am 271:108 ff.
  • Minsky M (2006) The emotion machine. Simon & Schuster, New York
  • Misselhorn C (2019) Grundfragen der Maschinenethik, 3rd edn. Reclam, Ditzingen
  • Möllers T (2017) Juristische Methodenlehre. Beck, München
  • Montesquieu (1964, first published 1748) De l’esprit des lois. In: Oeuvres complètes. Aux Éditions du Seuil, Paris
  • Montesquieu (1989) The spirit of the laws. Cohler AM, Miller BC, Stone HS (eds and transl). Cambridge University Press, Cambridge
  • Moravec H (1988) Mind children. The future of robot and human intelligence. Harvard University Press, Cambridge, MA/London
  • Nida-Rümelin J, Weidenfeld N (2018) Digital Humanism, 3rd edn. Springer, Berlin
  • Niiler E (2019) Can AI be a fair judge in court? Estonia thinks so. Available at https://www.wired.com. Accessed 23 July 2019
  • Nilsson N (2010) The quest for artificial intelligence. Cambridge University Press, Cambridge
  • O'Neil C (2016) Weapons of math destruction. How big data increases inequality and threatens democracy. Crown, New York
  • Orwat C (2019) Diskriminierungsrisiken durch Verwendung von Algorithmen. Nomos, Baden-Baden
  • Pasquale F (2015) The black box society. The secret algorithms that control money and information. Harvard University Press, Cambridge, MA/London
  • Peters O (2012) Kritiker der Digitalisierung. Peter Lang, Berlin
  • Pfitzenmaier G (2016) Leben auf Autopilot. Oekom, München
  • Phillips L (1990) Proximate applications of neural networks in jurisprudence. JurPC 11–12:820 ff.
  • Plato (1925) Statesman. In: Statesman. Philebus. Ion. Fowler HN, Lamb WRM (transl). Loeb Classical Library 124. Harvard University Press, Cambridge, MA
  • Popper K (1966) The open society and its enemies, vol II, 5th edn. Routledge, London
  • Preston J, Bishop M (eds) (2002) Views into the Chinese room: new essays on Searle and artificial intelligence. Clarendon Press, Oxford
  • Prinz J (2012) Singularity and inevitable doom. J Conscious Stud 19:77–86
  • Raabe O et al (2012) Recht ex machina. Springer, Berlin
  • Ramge T (2018) Mensch und Maschine. Wie Künstliche Intelligenz und Roboter unser Leben verändern, 2nd edn. Reclam, Ditzingen
  • Reichenbach H (1938) On probability and induction. Philosophy of Science 5:21 ff.
  • Reichwald J, Pfisterer D (2016) Autonomie und Intelligenz im Internet der Dinge. CR 32:208–212
  • Rostalski F, Völkening M (2019) Smart sentencing. KriPoZ 2019:265–273
  • Roxin C, Greco L (2020) Strafrecht Allgemeiner Teil, vol I, 5th edn. Beck, München
  • Schickore J, Steinle F (eds) (2006) Revisiting discovery and justification. Springer, Dordrecht
  • Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117
  • Schuhr J (2012) Willensfreiheit, Roboter und Auswahlaxiom. In: Beck S (ed) Jenseits von Mensch und Maschine. Nomos, Baden-Baden
  • Schulze-Fielitz H (2015) Art. 20. In: Dreier H (ed) Grundgesetz Kommentar, vol II, 3rd edn. Mohr Siebeck, Tübingen
  • Schünemann B (2018) Der Kampf ums Verbandsstrafrecht in dritter Neuauflage etc. StraFo 317 ff.
  • Searle J (1980) Minds, brains, and programs. Behav Brain Sci 3:417–424
  • Sejnowski T (2018) The deep learning revolution. MIT Press, Cambridge, MA/London
  • Shanahan M (2010) Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, Oxford
  • Shanahan M (2015) The technological singularity. MIT Press, Cambridge, MA/London
  • Silver D et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529:484–489
  • Silver D et al (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. Available at arXiv:1712.01815v1 [cs.AI]. Accessed 28 July 2019
  • Simmler M, Markwalder N (2017) Roboter in der Verantwortung? ZStW 129:20–47
  • Sorgner SL (2016) Transhumanismus. Herder, Freiburg/Basel/Wien
  • Sousa Mendes P (2020) Representation of legal knowledge and expert systems in law. In: Livro em Homenagem a Amilcar Sernadas. Lisboa, pp 23 ff.
  • Strandburg K (2019) Rulemaking and inscrutable automated decision tools. Columbia Law Rev 119:1851–1886
  • Surden H (2014) Machine learning and law. Washington Law Rev 89:87 ff.
  • Taplin J (2017) Move fast and break things: how Facebook, Google and Amazon have cornered culture and undermined democracy. Little, Brown and Company, New York
  • Tegmark M (2017) Life 3.0. Being human in the age of artificial intelligence. Knopf, New York
  • Turing A (1950) Computing machinery and intelligence. Mind LIX:433–460
  • Turkle S (2005) The second self. Computers and the human spirit. The MIT Press, Cambridge, MA/London
  • Turkle S (2012) Alone together. Why we expect more from technology and less from each other, 3rd edn. Basic Books, New York
  • Velsberg O (2019) “Estland: Roboter als Richter”. Available at https://www.mdr.de. Accessed 23 July 2019
  • Vinge V (1993) The coming technological singularity: how to survive in the post-human era. Available at https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022856.pdf
  • Volland H (2018) The creative power of machines
  • Wagner J (2018) Legal tech and legal robots. Springer, Berlin
  • Wallach W, Allen C (2009) Moral machines. Teaching robots right from wrong. Oxford University Press, Oxford
  • Warwick K (2012) Artificial intelligence. The basics. Routledge, London/New York
  • Webb A (2019) The big nine. How the tech giants & their thinking machines could warp humanity. PublicAffairs, New York
  • Weber M (1972) Wirtschaft und Gesellschaft, 5th edn. Mohr Siebeck, Tübingen
  • Weizenbaum J (1984) Computer power and human reason. Pelican, London
  • Weizenbaum J (2001) Computermacht und Gesellschaft. Suhrkamp, Berlin
  • Weizenbaum J (2006) Wo sind sie, die Inseln der Vernunft im Cyberstrom? Herder, Freiburg
  • Weizenbaum J, Haefner K (1990) Sind Computer die besseren Menschen? Ein Streitgespräch. Piper, München
  • Welzel H (1962) Naturrecht und materiale Gerechtigkeit, 4th edn. Vandenhoeck & Ruprecht, Göttingen
  • Welzer H (2018) Die smarte Diktatur, 2nd edn. S. Fischer Verlag, Frankfurt am Main
  • Werkmeister A (2015) Straftheorien im Völkerstrafrecht. Nomos, Baden-Baden
  • Wischmeyer T (2018) Regulierung intelligenter Systeme. AöR 143:1–66
  • Zarkadakis G (2015) In our own image. The history and future of artificial intelligence. Pegasus Books, New York/London


Author information


Correspondence to Luís Greco.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Greco, L. (2024). Judicial Power Without Judicial Responsibility: The Case Against Robot Judges. In: Moura Vicente, D., Soares Pereira, R., Alves Leal, A. (eds) Legal Aspects of Autonomous Systems. ICASL 2022. Data Science, Machine Intelligence, and Law, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-031-47946-5_12


  • DOI: https://doi.org/10.1007/978-3-031-47946-5_12


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47945-8

  • Online ISBN: 978-3-031-47946-5

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
