Abstract
Is it possible that in future we will have robot judges? And would this actually be permissible? The article answers these questions with a reluctant “yes” and a strict “no” respectively.
English Version by Margaret Hiley.
Notes
- 1.
Weizenbaum (1984), 227. On this groundbreaking thinker, who was a crucial influence on the theses developed in this article and ought to be key reading for today’s AI enthusiasts, see Peters (2012), 16 ff., 27: “[Weizenbaum] ranks first among all digitization critics” (orig. “in der Riege aller Kritiker der Digitalisierung an erster Stelle”).
- 2.
In the following, I make no attempt to define the term artificial intelligence, as the effort involved in developing such a definition would be disproportionate to its usefulness (on this, however, see Misselhorn (2019), 17 ff.; for a concise working definition, see Mainzer (2018), 3: “A system can be called intelligent if it is able to solve problems independently and efficiently” [orig. “Ein System heißt intelligent, wenn es selbständig und effizient Probleme lösen kann”]; also see Herberger (2018)). Accordingly, AI is simply what we understand by this term in everyday life. Moreover, “[i]f one would not know what to do with a concept until one had defined it, then all philosophizing would be in a bad way.” Kant (2013), A 731/B 759, fn.
- 3.
- 4.
- 5.
- 6.
- 7.
For a current overview, see Bull (2019), who argues against extending the use of AI to court proceedings (483).
- 8.
- 9.
- 10.
Thereto Draeger and Müller-Eiselt (2019), 18–19, Gless and Wohlers (2019), 154–155, Höffler (2019), 58 ff. The Supreme Court of Wisconsin ruled that using COMPAS did not violate due process rights, State v. Loomis, 881 N.W.2d 749 (Wis. 2016) (summarized in Harvard Law Review 130 [2017], 1530 ff.). Also see Berk and Bleich (2013). Rostalski and Völkening (2019), 271 ff. make a proposal on how to use AI-based sentencing decisions in Germany.
- 11.
As claimed by Fries (2018), 422.
- 12.
- 13.
Also see Adrian (2017), 80–81.
- 14.
Also see Wischmeyer (2018), 45.
- 15.
Greco (2009), 372 ff.
- 16.
- 17.
- 18.
Without reference to the law, Weizenbaum (1984), 71–72.
- 19.
Weizenbaum (1984), 223.
- 20.
- 21.
- 22.
- 23.
See on the technical contribution of the programmers Silver et al. (2016); on AlphaGo, also see Kelleher and Tierney (2018), 31 ff., Tegmark (2017), 83 ff. As is always the case in this field, AlphaGo merely marked a temporary peak; in the meantime, its developers have created AlphaGo Zero, which was able to learn Go and other games on its own, without any prior human knowledge, see Silver et al. (2017); also see Kasparov (2017), 265–266.
- 24.
For a general, readily understandable account of deep learning, see Nilsson (2010), 408 ff., Warwick (2012), 92 ff., Alpaydin (2016), 86 ff., Kelleher and Tierney (2018), 121 ff., Eberl (2018), 99 ff., Ramge (2018), 46 ff., Sejnowski (2018); for a technology-focused account, see Schmidhuber (2015), Mainzer (2018), 99 ff. and Aggarwal (2018). On machine learning in general Jordan and Mitchell (2015); from a legal perspective Surden (2014); European Commission for the Efficiency of Justice (CEPEJ) (2019), 35 ff.; and—decades ahead of its time!—Phillips (1990), 820 ff.
- 25.
Kahneman (2011), 20 and passim.
- 26.
Du Sautoy (2019), 67 ff.
- 27.
Ramge (2018), 49.
- 28.
- 29.
Thereto Volland (2018), 12 (“The next Rembrandt”), 27 ff.; Du Sautoy (2019), 126 ff. (“The Next Rembrandt”), 195 ff. (“Emmy” as an AI composer); as early as 1999 Kurzweil (1999), 158 ff., giving examples of music, poems, and paintings. On the question of whether this actually constitutes art, Weizenbaum (2001), 98 ff.; id., in: Weizenbaum and Haefner (1990), 86–87.
- 30.
- 31.
On the concept of prediction, see Kelleher and Tierney (2018), 104–105: “Prediction is the task of estimating the value of a target attribute for a given instance based on the values of other attributes (or input attributes) for that instance.” Also see Alpaydin (2016), 39. Thus it is not just a matter of computers foreseeing decisions in the sense of a legal realism (thereto, see Surden (2014), 102, 108 ff., Frese (2015), 2092, Bues (2018), 275 ff. [280; mn. 1183–1184]), but of computers making these decisions themselves. On the distinction between training set and validation set, see Kelleher and Tierney (2018), 147, Domingos (2015), 75 ff., Alpaydin (2016), 155.
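The definition of prediction and the training-set/validation-set distinction cited above can be illustrated with a minimal sketch. It is not drawn from the article or from any of the works cited: the toy data and the simple 1-nearest-neighbour rule are illustrative assumptions, chosen only to show how a target attribute is estimated from input attributes, and why the learned rule is checked on a validation set it has not seen.

```python
# A minimal sketch (illustrative, not from the article) of the prediction task
# as defined by Kelleher and Tierney: estimate a target attribute for an
# instance from its input attributes, learning from a training set and
# checking generalization on a separate validation set.

def nearest_neighbour_predict(training_set, instance):
    """Predict the target attribute of `instance` from the closest training example."""
    inputs, target = min(
        training_set,
        key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], instance)),
    )
    return target

# (input attributes) -> target attribute; purely invented toy data.
training_set = [((1.0, 1.0), "low"), ((1.2, 0.9), "low"), ((5.0, 5.1), "high")]
validation_set = [((0.9, 1.1), "low"), ((4.8, 5.0), "high")]

# Accuracy on the held-out validation set estimates how well the learned
# rule generalizes beyond the examples it was trained on.
correct = sum(
    nearest_neighbour_predict(training_set, x) == y for x, y in validation_set
)
print(correct / len(validation_set))  # prints 1.0 on this toy data
```

The point of the split is exactly the one the cited authors make: evaluating on the training set itself would reward memorization, whereas the validation set probes whether the computer can make a decision for an instance it has never encountered.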
- 32.
- 33.
- 34.
Jordan and Mitchell (2015), 255.
- 35.
Eberl (2018), 103 ff.
- 36.
- 37.
- 38.
- 39.
Further reservations—especially the question of how learning should take place in cases of disagreement between courts or between the case law and the literature—are raised by Enders (2018), 725–726.
- 40.
Roxin and Greco (2020), §12 mn. 88a ff. present an attempt at systematization.
- 41.
Domingos (2015), 175, 177 ff., 184–185.
- 42.
Du Sautoy (2019), 126 ff. (346 images were made available to the computer as a training set in the project “The Next Rembrandt”), 211–212 (389 Bach chorales).
- 43.
See the endeavours reported by Misselhorn (2019), 114 ff.
- 44.
- 45.
Gless and Wohlers (2019), 158.
- 46.
Fries (2018), 425, Wischmeyer (2018), 23 ff., who for this reason holds that AI should not be used “when bringing criminal charges” (orig. “beim strafrechtlichen Schuldvorwurf”) “for reasons of principle” (orig. “aus prinzipiellen Gründen”, 24; 35); likewise CEPEJ (2019), 53 ff.; generally Pfitzenmaier (2016), 18 ff., Draeger and Müller-Eiselt (2019), 40 ff., Misselhorn (2019), 134–135.
- 47.
Eberl (2018), 117–118.
- 48.
O’Neil (2016), 27, 87, 133.
- 49.
- 50.
O’Neil (2016), 27.
- 51.
O’Neil (2016), 7.
- 52.
Foer (2017), 71.
- 53.
For a particularly impressive account, see O’Neil (2016), 3, 28 ff., who coins the pun “weapons of math destruction”; Eubanks (2018). Also see Kelleher and Tierney (2018), 190 ff., Ernst (2017), 1032 ff., Wischmeyer (2018), 26 ff., incl. many further references in Footnote 102; Misselhorn (2019), 80, Orwat (2019), Webb (2019), 254–255.
- 54.
O’Neil (2016), 8.
- 55.
- 56.
According to Kelleher and Tierney (2018), 65, programmers spend 79% of their time preparing their data sets.
- 57.
- 58.
- 59.
Kelleher and Tierney (2018), 34.
- 60.
Gless and Wohlers (2019), 164 insist on this procedural precaution. It is to be implemented in Estonia, see the references in Footnote 9 above.
- 61.
Domingos (2015), 65.
- 62.
- 63.
Webb (2019), 54. A similar criticism is voiced by Lanier (2010), 51, who complains about the “lack of intellectual modesty in the computer science community”: “An aeronautical engineer would never put a passenger in a plane based on an untested, speculative theory, but computer scientists commit analogous sins all the time.”
- 64.
- 65.
Greco (2015), 44–45.
- 66.
From a legal perspective Hoffmann-Riem (2017), 29–30, Enders (2018), 726, Martini (2018), 1018; for a general account Pasquale (2015); O’Neil (2016), 8–9 (“dictates from the algorithmic gods”); Misselhorn (2019), 80, Webb (2019), 111; on bots and messaging software in particular Kurz and Rieger (2017), 85 ff. (91–2); with a focus on technology Mainzer (2018), 245 ff.
- 67.
Tegmark (2017), 106–107.
- 68.
From a legal perspective Martini (2018), 1020 ff., Wischmeyer (2018), 22 (who recalls the obligation to provide information under Article 13(2)(f) and Article 14(2)(g) GDPR and the right to information under Article 15(1)(h) GDPR), 42 ff. (a very differentiated account); from a philosophical perspective Bostrom and Yudkowsky (2014), 316 ff., Nida-Rümelin and Weidenfeld (2018), 77–78; from the popular scientific literature O’Neil (2016), 214 and passim; Draeger and Müller-Eiselt (2019), 182 ff. (also commenting on pioneering efforts in this regard, 13 ff.).
- 69.
- 70.
Likewise Wischmeyer (2018), 54 ff.
- 71.
Thereto Volland (2018), 27 ff.
- 72.
Gless and Wohlers (2019), 159 ff. (“comprehension of such information”, orig. “Nachvollziehen einer derartigen Auskunft”).
- 73.
Reichenbach (1938), 36 is the seminal account on this topic; on the present debate Schickore and Steinle (2006); from a (criminal) legal theory perspective Hassemer (1990), 116 ff. (drawing a distinction between the “production” [orig. “Herstellung”] and the “portrayal” [“Darstellung”] of a decision).
- 74.
See Elhardt (2016), 59 ff.
- 75.
For a representative account, see Paeffgen, SK-StPO, 5th ed. 2016, §112 mn. 21c.
- 76.
Wischmeyer (2018), 44–45, 54, in his discussion of the black box argument.
- 77.
Wischmeyer (2018), 54: “Humans, too, are ‘black boxes’ for other humans—and for themselves” (orig. “Auch Menschen sind für andere Menschen—und für sich selbst—‘black boxes’”).
- 78.
See BGHZ 200, 38 (mn. 26 ff.) with regard to the so-called score formula used in SCHUFA credit reports; critically on trade secrecy in the case of incriminating algorithmic decisions Kurz and Rieger (2017), 92, 96–97, O’Neil (2016), 29, Wischmeyer (2018), 64–65, incl. references specifically concerning criminal justice in fn. 260.
- 79.
See the previous fn.
- 80.
See Bartlett (2018), 31, who calls trade secrets the “modern equivalent of the recipe for Coca-Cola”.
- 81.
- 82.
“Judicial tenure may only be given in the case of a person who 1. is a German in terms of Article 116 of the Basic Law, 2. ...”; a German is a person who possesses German citizenship, Art. 116(1) var. 1 GG, Section 1 Nationality Act (Staatsangehörigkeitsgesetz, StAG); this only includes natural persons, see BGH NJW 2018, 2742. Further regulations are mentioned by Enders (2018), 723.
- 83.
For a representative commentary on this doctrine, see Grzeszick, in: Maunz/Dürig, Grundgesetz-Kommentar (status: December 2007, Lfg. 51), Art. 20 mn. 105 ff.; Schulze-Fielitz, in: Dreier, Grundgesetz Kommentar vol. II, 3rd ed. 2015, Art. 20 mn. 113 ff.
- 84.
- 85.
As stated clearly in Enders (2018), 723, whose full line of argument reads as follows: “In this regard, a lawful judge within the meaning of this norm definitely is a natural person34” (orig. “Dabei steht fest, dass gesetzlicher Richter im Sinne dieser Norm eine natürliche Person ist”). The corresponding fn. 34 opens with the words “As taken fully for granted in ...” and goes on to quote two commentaries on the Basic Law (orig. “So völlig selbstverständlich bei …”).
- 86.
Bull (2015), 83, including further references on earlier versions of this rule.
- 87.
On this contrast in the history of legal thought (in a still unsurpassed account), Welzel (1962).
- 88.
Augustine (1998), 147.
- 89.
Also see Nida-Rümelin and Weidenfeld (2018), 83 ff.: they do not act on their own reasons.
- 90.
Turkle (2012), 85 ff., on the fundamental significance of the “gaze”, which is what establishes symmetry in the first place.
- 91.
- 92.
Shanahan (2015), 113 ff., Nida-Rümelin and Weidenfeld (2018), 110; likewise tending towards this view Eidenmüller (2017), 775 ff. The line of argument resembles the well-known thought experiment of the Chinese room that goes back to Searle (1980), 417 (I quote from Haugeland (1997), 184 ff.) and that Nida-Rümelin and Weidenfeld (2018), 115 ff. use as the crucial point of their argument; on the discussion of this argument, concerning which there is a bewildering plethora of literature, Preston and Bishop (2002), Carter (2007), 175 ff.
- 93.
Weizenbaum (1984), 270: “Respect, understanding, and love are not technical problems”.
- 94.
Weizenbaum (2001), 76: “What we do with computers is almost all simulations, models” (orig. “Was wir mit Computern machen, sind fast alles Simulationen, Modelle”); Turkle (2012), 101: “... sociable technology ... promises friendship but can only deliver performances. Do we really want to be in the business of manufacturing friends that will never be friends?”; 124: “... a robot cannot pretend because it can only pretend”; Nida-Rümelin and Weidenfeld (2018), 41; Kornwachs (2019), 336 ff.; also see Misselhorn (2019), 86–87, who ascribes machines “quasi opinions” (orig. “Quasi-Meinungen”) and “quasi wishes” (orig. “Quasi-Wünsche”). On the simulation of emotions by so-called “cobots”, which are used especially in the field of care for the elderly, Ramge (2018), 75 ff., Misselhorn (2019), 136 ff. and especially Turkle (2012), 103 ff.—Incidentally, this constitutes another example of the denial of responsibility criticized here (for a closely related debate, see Turkle (2005), 295; an emphatic and comprehensive account is then given in Turkle (2012), 23 ff., 124–125 and passim).
- 95.
As also argued by Nida-Rümelin and Weidenfeld (2018), 108 ff.: “Why AIs cannot think” (orig. “Warum KIs nicht denken können”).
- 96.
A very similar point is mentioned by Turkle (2012), 286: “... knowledge of mortality and an experience of the life cycle are what make us uniquely human”. Rather tellingly, this banal fact does not feature in the long list of properties that Turing (1950), 443 ff. discusses. Bostrom (2018), 183, mentions them briefly, but asserts, without providing even the slightest justification, that “a posthuman being ... could be vulnerable, dependent, and limited.” (Article originally published in English as “Why I Want to be a Posthuman When I Grow Up”, available at: www.nickbostrom.com, last accessed: 14 February 2022 [quote on 21].) On human beings’ particular vulnerability in the context of a more general discussion of human dignity, with an attempt to link this to criminal law, Werkmeister (2015), 94 ff. incl. further references. I hope this does not mean I have committed what Weizenbaum and Haefner (1990), 90 (quotation), 101, denounces as the “grand error” (orig. “großen Fehler”) of “defining being human according to what humans can do and computers cannot” (orig. “Menschsein nach dem zu definieren, was der Mensch kann und der Computer nicht”). Also see Turkle (2005), 285: “Where once we were rational animals, now we are feeling computers, emotional machines.”
- 97.
- 98.
Montesquieu, De l’esprit des lois, in: Oeuvres complètes (Aux Éditions du Seuil, Paris, 1964), 527 ff. (first published 1748), Livre 11, Chap. 6 (587). English translation taken from The Spirit of the Laws, ed. and transl. Anne M. Cohler, Basia Carolyn Miller, and Harold Samuel Stone, Cambridge: Cambridge University Press, 1989, 158.
- 99.
This criticism likewise is already found in Weizenbaum (1984), 228 ff. (under the heading “Incomprehensible Programs”); Weizenbaum (2006), 116–117: “I contend that most of the current computer systems, the large computer systems operating on a global scale, in the military, for example, are not transparent” (orig. “Ich behaupte, dass der größte Teil der aktuellen Computersysteme, der großen weltumspannend agierenden Computersysteme, im Militärbereich zum Beispiel, nicht durchschaubar sind”, 116); Misselhorn (2019), 132: “Problem of many hands” (orig. “Problem der vielen Hände”).
- 100.
- 101.
- 102.
- 103.
Gless and Wohlers (2019), 163.
- 104.
See thereto the references in Footnote 8 above.
- 105.
These examples are taken from Hillgruber, in: Maunz and Dürig, GG (as of December 2007, Lfg. 51), Art. 92 mn. 56. A longer list is provided by Schulze-Fielitz in: Dreier, Grundgesetz Kommentar, Vol. 3, 3rd ed. 2018, Art. 92 mn. 44 ff. By contrast, Gless and Wohlers (2019), 152 ff. do not see the transfer of “routine decisions” (orig. “Routine-Entscheidungen”) to machines as a problem, but do not categorize decisions on pre-trial detention as such routine decisions. The thoughts developed here can also be understood as an attempt to flesh out this vague concept of the routine decision. They also take into account most of the examples mentioned by Engel (2014), 1100: Engel considers using computers in the partially automated proceedings for payment orders pursuant to Section 689(1) second sentence ZPO and in requests for information in matters concerning the commercial registry, but not in decisions concerning penalty orders. His idea that computers “possibly” (orig. “womöglich”) could be deployed to review the admissibility of a lawsuit presents a more alarming prospect, however, for in this case, the machine would place itself between the citizen and his or her judge.
- 106.
Thereto in detail, including numerous references, Greco (2015), 261 ff.
- 107.
BVerfGE 133, 168 (204 mn. 65 ff.).
- 108.
Critically on the contradictions of this rule Greco (2016), 4 ff., incl. numerous references.
- 109.
O’Neil (2016), 8.
- 110.
On the culpability of machines Hilgendorf (2012), 128 ff., Schuhr (2012), 43; likewise Stammler and Markwalder (2017), 41 ff., Hage (2017), 255, 261 ff., and Gaede (2019), 64–65; Dennett (2017), 397, also holds that responsible machines are possible. In summary, including references to the relevant literature, Roxin and Greco (2020), §8 mn. 66 ff.
- 111.
Arguing along these lines Warwick (2012), 143, Bostrom and Yudkowsky (2014), 320 ff.; Shanahan (2015), 182 ff., Bostrom (2018), 99 ff., Gaede (2019), 42 ff. (“self-aware artificial intelligence” [orig. “selbstbewusste künstliche Intelligenz”]); probably also Tegmark (2017), 109; also see Gunkel (2018).
- 112.
Also see Footnote 159.
- 113.
- 114.
Highly critically, with many further references, Weizenbaum (1984), 177 ff., 187 ff., 226 ff.; also see Lanier (2010), 75, where he criticizes an ideology that denies the riddle of the existence of experiences as a “spiritual failure”; at 153 ff. he argues against “computationalism”, a theory according to which “the world can be understood as a computational process, with people as subprocesses.” Turkle (2005), 219 ff. paints an impressive picture of the first generation of the “new philosophers of artificial intelligence”.
- 115.
See especially Turing (1950), 442 ff.: the question of whether machines can think is “too meaningless to deserve discussion” beyond the context of the test.
- 116.
- 117.
On this movement also see Cordeiro (2003), 65 ff., Bostrom (2018), 38 ff. (both authors take a positive view of this philosophy). The borderline to “transhumanism”, which is primarily concerned with enhancement, is blurred; on the latter, see Sorgner (2016), Göcke and Meier-Hamidi (2018). Also see Loh (2019).
- 118.
- 119.
- 120.
- 121.
Kurzweil (2006), 203.
- 122.
- 123.
Moravec (1988), 117.
- 124.
- 125.
Bostrom (2001; revised 2005). Also see Bostrom and Yudkowsky (2014), 322, who propose a “Principle of Substrate Non-Discrimination”, postulating: “If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” As in Gaede (2019), the second condition (“same conscious experience”) is missing for non-contingent reasons.
- 126.
Bostrom (2018), 91 ff.
- 127.
Kurzweil (2013), 415.
- 128.
Weizenbaum (2001), 42 (quote), 52 ff.: “That the artificial-intelligence elite believes that feelings such as love, grief, joy, sorrow, and everything that stirs the human soul with feelings and emotions could simply be transferred, just like that, into a machine artefact with a computer brain shows, it seems to me, a contempt for life, a denial of its own human experience, to put it cautiously” (orig. “Daß die Artificial-Intelligence-Elite glaubt, Gefühle wie Liebe, Kummer, Freude, Trauer und alles, was die menschliche Seele mit Gefühlen und Emotionen aufwühlt, ließen sich einfach mir nichts dir nichts in einen Maschinenartefakt mit Computergehirn transferieren, zeigt, wie mir scheint, eine Verachtung für das Leben, eine Verleugnung ihrer eigenen menschlichen Erfahrung, um es vorsichtig auszudrücken”). Another outspoken critic is Welzer (2018), 181, who remarks on Kurzweil’s idea of uploading: “the inventor of this ‘solution’, Ray Kurzweil, is generally regarded not as crazy but as a genius, which in itself is a telling indicator of the present intellectual horizon” (orig. “der Erfinder dieser ‘Lösung’, Ray Kurzweil, gilt allgemein nicht als gaga, sondern als Genie, was an sich schon ein Indikator für den geistigen Horizont unserer Gegenwart ist”). Similarly Liebig (2001), 6: “mad idea”, “ideology which is as antihuman as it is anti-progress”, “... grotesque ...”; Lanier (2010), 29 ff.; Nida-Rümelin and Weidenfeld (2018), 28: “Only in philosophy seminars, certain feature pages, and AI circles can the indistinguishability of humans and machines be asserted” (orig. “Nur im philosophischen Oberseminar oder in manchen Feuilletons und KI-Zirkeln kann die Ununterscheidbarkeit von Menschen und Maschinen behauptet werden”); also see Geraci (2010).
- 129.
Vinge (1993), 14.
- 130.
Shanahan (2015), 93; similarly Minsky (1994), 109: “Once delivered from the limitations of biology, we will...”.
- 131.
Kurzweil (1999), 150.
- 132.
Also see Shanahan (2015), 194–195: “close cousin of the Nazi fanatic”.
- 133.
Thereto in greater detail Ambos (2018), §7 mn. 127 ff.
- 134.
Bostrom (2014), 141.
- 135.
- 136.
See the eponymous book by Moravec (1988), e.g. 1: “We humans will benefit for a time from their labors, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away. Very little need be lost in this passing of the torch...” On this matter also see Husain (2017), 181, who writes that we will become “creators of new life”, and humanity will become something like the obsolete computers we visit in retro museums (183–184).
- 137.
- 138.
Once again Weizenbaum (1984), 208–209, Weizenbaum (2001), 42: “In other words, there are things that humans know only because they have a body” (orig. “Es gibt mit anderen Worten Dinge, die Menschen nur deshalb wissen, weil sie einen Körper haben”). The point is not to be confused with the well-known argument of Dreyfus (1967), 19 ff.; Dreyfus (1972), 147 ff., Dreyfus (1992), 235 ff.; his key objection that it is impossible to attribute intelligent behaviour to rules formalized in advance proceeds from a top-down perspective that by now has been superseded and no longer serves as a foundation for more recent computing successes (see 2.3.1. a] above).
- 139.
Also see Weizenbaum and Haefner (1990), 103.
- 140.
- 141.
Thereto Wallach and Allen (2009), 68.
- 142.
Thereto Wallach and Allen (2009), 64 ff., Warwick (2012), 10–11, 140–141, Shanahan (2015), 36 ff., Misselhorn (2019), 27 ff., 43 ff.; in monograph form Shanahan (2010); from a legal perspective Eidenmüller (2017), 768 ff., Gaede (2019), 20. The research field “artificial life” deals with questions that are so far removed from what we understand by life that it needs no further mention; for an instructive account on this topic Warwick (2012), 116 ff., Bedau (2014), 295 ff.
- 143.
- 144.
See Footnote 92 above.
- 145.
- 146.
- 147.
- 148.
For a detailed discussion of this debate, including references, Roxin and Greco (2020), §19 mn. 52a ff.
- 149.
For such a discussion, see the previous fn.
- 150.
Plato (1925), 294A.
- 151.
Weber (1972), 140 ff.
- 152.
- 153.
Article 1(1) of the 1948 Chiemsee Draft of a Basic Law for a Federation of German States (Chiemseer Entwurf eines Grundgesetzes für einen Bund deutscher Länder): the state exists “for the sake of the human being” (orig. “um des Menschen willen da ist”).
- 154.
Particularly Kurzweil (2013), who celebrates this notion; Chalmers (2016), 171 ff. (first published 2010), Husain (2017), esp. 180 ff.; Bostrom (2014) weighs the advantages and disadvantages, defining “superintelligence” as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (26); Shanahan (2015), esp. 204 ff., and Tegmark (2017), 44, 134 ff. and passim, show alarm; also see Domingos (2015), 25, who dreams of a master algorithm able to produce all past, present, and future knowledge; also see Warwick (2012), 74 ff., Ramge (2018), 81 ff. One often speaks of “strong” AI in contrast to the “weak” AI that exists thus far, see for instance Warwick (2012), 64–65, Ramge (2018), 18 ff.
- 155.
Very similarly Bartlett (2018), 38–39, who speaks of a “moral singularity”, “the point at which we will start to delegate substantial moral and political reasoning to machines.” This could be a “point of no return”: “once we start relying on it, we’ll never stop.” Similar concerns are expressed by Volland (2018), 233 ff. (who makes particular use of the example of robots creating art); Foer (2017), 77, Welzer (2018), 226 ff.; and Carr, as cited in the next fn.
- 156.
- 157.
Enders reaches a different conclusion, as in Footnote 84.
- 158.
Zarkadakis (2015), 99 sees this differently, dreaming of a “new social contract” according to which we are governed by machines, i.e. by “perfect reason and incorruptible goodwill”.
- 159.
In the current age of externally funded research, it would certainly be more promising to argue the opposite (an interesting topic, by the way, for a courageous study in the sociology of science; first thoughts on this can be found in Schünemann (2018), 326–327).
- 160.
I adopt this term from Turkle (2012), 291 ff., who uses it primarily in reference to the introduction of robots in care for the elderly.
- 161.
Weizenbaum (1984), 226–227.
References
Adrian A (2017) Der Richterautomat ist möglich. Rechtstheorie 48:77–121
Aggarwal CC (2018) Neural networks and deep learning: a textbook. Springer, New York
Aletras N, Tsarapatsanis D, Preotiuc-Pietro D et al (2016) Predicting judicial decisions of the European Court of Human Rights. PeerJ Computer Science 2:e93. https://doi.org/10.7717/peerj-cs.93
Alpaydin E (2016) Machine learning. The MIT Press, Cambridge/London
Ambos K (2018) Internationales Strafrecht, 5th edn. Beck, München
Angwin J, Larson J, Mattu S et al (2016) Machine Bias. Available at https://www.propublica.org. Accessed 28 July 2019
Ashley KD (2017) Artificial intelligence and legal analytics. Cambridge University Press. https://doi.org/10.1017/9781316761380
Augustine (1998) The city of God [De civitate Dei], Book IV. Dyson RW (ed and transl). Cambridge University Press, Cambridge
Bartlett J (2018) The people vs. tech. Penguin, London
Bedau MA (2014) Artificial life. In: Frankish K, Ramsey WM (eds) Artificial Intelligence. Cambridge University Press, Cambridge, pp 296–315. https://doi.org/10.1017/CBO9781139046855.019
Berger A (2018) Der automatisierte Verwaltungsakt. NVwZ 37:1260–1264
Berk RA, Bleich J (2013) Statistical procedures for forecasting criminal behavior: a comparative assessment. Criminol Public Policy 12:513–544
Boehme-Neßler V (2017) Die Macht der Algorithmen und die Ohnmacht des Rechts. NJW 70:3031–3037
Bostrom N (2001/2005) Ethical Principles in the creation of artificial minds. Available at https://nickbostrom.com/ethics/aiethics.html. Accessed 28 July 2019
Bostrom N (2014) Superintelligence. Paths, dangers, strategies. Oxford University Press, Oxford
Bostrom N (2018) Die Zukunft der Menschheit. Suhrkamp, Berlin
Bostrom N, Yudkowsky E (2014) The ethics of artificial intelligence. In: Frankish K, Ramsey W (eds) The Cambridge handbook of artificial intelligence. Cambridge University Press, Cambridge
Breidenbach S, Glatz F (eds) (2018) Rechtshandbuch legal tech. Beck, München
Brynjolfsson E, McAfee A (2016) The second machine age. Norton & Company, New York
Bues MM (2018) Artificial Intelligence im Recht. In: Hartung M, Bues MM, Halbleib G (eds) Legal Tech. Beck, München, p 275
Bull HP (2015) Sinn und Unsinn des Datenschutzes. Mohr Siebeck, Tübingen
Bull HP (2019) Über die Pläne zur flächendeckenden Technisierung der öffentlichen Verwaltung. CR 35:478–484
Carr N (2011) The shallows. What the internet is doing to our brains. WW Norton & Company, New York/London
Carr N (2014) The glass cage. How our computers are changing us. WW Norton & Company, New York/London
Carter M (2007) Minds and computers. An introduction to the philosophy of artificial intelligence. Edinburgh University Press, Edinburgh
Chalmers DJ (2016) The Singularity: a philosophical analysis. In: Schneider S (ed) Science fiction and philosophy, 2nd edn. Wiley Online Books
Cordeiro JL (2003) Future Life forms among Posthumans. J Fut Stud 8:65–72
Deeks A (2019) The Judicial demand for explainable artificial intelligence. Columbia Law Rev 119:1829–1850
Dennett D (2017) From Bacteria to bach and back. Penguin, New York
Domingos P (2015) The master algorithm. Penguin, New York
Draeger J, Müller-Eiselt R (2019) Wir und die intelligenten Maschinen. DVA, München
Dreyfus HL (1967) Why computers must have bodies in order to be intelligent. Rev Metaphysics 21:13–32
Dreyfus HL (1972) What Computers can’t do: the limits of artificial intelligence. The MIT Press, New York
Dreyfus HL (1992) What computers still can’t do. A critique of artificial reason. The MIT Press, Massachusetts
Du Sautoy M (2019) The creativity code. Belknap Press, London
Eberl U (2018) Smarte Maschinen, 2nd edn. Hanser, München
Eidenmüller H (2017) The rise of robots and the law of humans. ZEuP 4:765–777
Elhardt E (2016) Tiefenpsychologie: Eine Einführung, 18th edn. Kohlhammer, Berlin
Enders P (2018) Einsatz künstlicher Intelligenz bei der juristischen Entscheidungsfindung. JA 721–727
Engel M (2014) Algorithmisierte Rechtsfindung als juristische Arbeitshilfe. JZ 69:1096–1100
Ernst C (2017) Algorithmische Entscheidungsfindung und personenbezogene Daten. JZ 72:1026–1036
Eubanks V (2018) Automating inequality. How high-tech tools profile, police, and punish the poor. St. Martin's Press, New York
European Commission for the Efficiency of Justice (CEPEJ) (2019) European ethical Charter on the use of artificial intelligence in judicial systems and their environment. Council of Europe, Strasbourg
Fan S (2019) Will AI replace us? Thames & Hudson, London
Foer F (2017) World without mind. Why Google, Amazon, Facebook and Apple threaten our future. Vintage, London
Frese Y (2015) Recht im zweiten Maschinenzeitalter. NJW 68:2090–2092
Fries M (2018) Automatische Rechtspflege. RW 9:414–430
Gaede V (2019) Künstliche Intelligenz—Rechte und Strafen für Roboter? Nomos Verlag, Baden-Baden
Geraci R (2010) Apocalyptic AI. Oxford University Press, Oxford
Gless S, Wohlers W (2019) Subsumtionsautomat 2.0: Künstliche Intelligenz statt menschlicher Richter? In: Böse M, Schumann KH, Toepel F (eds) FS für Urs Kindhäuser. Nomos Verlag, Baden-Baden, pp 147–165
Göcke BP, Meier-Hamidi F (eds) (2018) Designobjekt Mensch. Die Agenda des Transhumanismus auf dem Prüfstand. Herder
Greco L (2009) Lebendiges und Totes in Feuerbachs Straftheorie. Duncker & Humblot, Berlin
Greco L (2013) Tugend im Strafverfahren. In: Zöller M et al (eds) FS Jürgen Wolter zum 70. Geburtstag. Duncker & Humblot, Berlin, pp 61–86
Greco L (2015) Strafprozesstheorie und materielle Rechtskraft. Duncker & Humblot, Berlin
Greco L (2016) Fortgeleiteter Schmerz—Überlegungen zum Verhältnis von Prozessabsprache, Wahrheitsermittlung und Prozessstruktur. GA 2016, pp 1–15
Gunkel D (2018) Robot rights. MIT Press, Cambridge, MA/London
Haft F, Lehmann H (eds) (1989) Das LEX-Projekt. Entwicklung eines Expertensystems. Attempto, Tübingen
Hage J (2017) Theoretical Foundations for the responsibility of autonomous agents. Artif Intell Law 25:255–271
Hähnchen S, Bommel R (2018) Digitalisierung und Rechtsanwendung. JZ 73:334–340
Hartung M, Bues MM, Halbleib G (eds) (2017) Legal tech. Beck, München
Hassemer W (1990) Einführung in die Grundlagen des Strafrechts, 2nd edn. Beck, München
Haugeland J (1997) Mind design II. MIT Press, Cambridge, MA/London
Herberger M (2018) “Künstliche Intelligenz” und Recht. NJW 39:2815–2829
Hilgendorf E (2012) Können Roboter schuldhaft handeln? In: Beck S (ed) Jenseits von Mensch und Maschine. Nomos, Baden-Baden, pp 119–132
Höffler K (2019) Die Herausforderungen der globalisierten Kriminalität an die Kriminologie—am Beispiel Risikoprognosen. In: Dessecker A, Harrendorf S, Höffler K (eds) Angewandte Kriminologie—justizbezogene Forschung. Universitätsverlag Göttingen
Hoffmann-Riem W (2017) Verhaltenssteuerung durch Algorithmen. AöR 142:1–42
Hofstadter D (2007) I am a strange loop. Basic Books, New York
Husain A (2017) The sentient machine. Scribner, New York
Jordan MI, Mitchell TM (2015) Machine learning: trends, perspectives, and prospects. Science 349:255–260
Kahneman D (2011) Thinking, fast and slow. Farrar, Straus and Giroux, London
Kant (2013) Critique of pure reason. In: Guyer P, Wood AW (eds) The Cambridge edition of the works of Immanuel Kant. Cambridge University Press, Cambridge
Kasparov G (2017) Deep thinking. John Murray, London
Kelleher JD, Tierney B (2018) Data science. MIT Press, Cambridge, MA/London
Kirn S, Müller-Hengstenberg CD (2014) Intelligente (Software-)Agenten: Von der Automatisierung zur Autonomie? Verselbstständigung technischer Systeme. MMR 4:225–232
Kornwachs K (2019) Smart robots—smart ethics? DuD 43:332–341
Kotsoglu KN (2014) Subsumtionsautomat 2.0. Über die (Un-)Möglichkeit einer Algorithmisierung der Rechtserzeugung. JZ 69:451–457
Kurz C, Rieger F (2017) Autonomie und Handlungsfähigkeit in der digitalen Welt. In: Augstein J (ed) Reclaim Autonomy. Selbstermächtigung in der digitalen Weltordnung. Suhrkamp, Berlin, pp 85–98
Kurzweil R (1999) The age of spiritual machines. Penguin, New York
Kurzweil R (2006) The singularity is near. When humans transcend biology. Penguin, New York
Kurzweil R (2013) How to create a mind. Penguin, New York
Lanier J (2010) You are not a gadget. Vintage, New York
Larenz K (1958) Wegweiser zu richterlicher Rechtsschöpfung. In: FS Nikisch. Mohr Siebeck, Tübingen, pp 275–305
Larenz K (1965) Richterliche Rechtsfortbildung als methodisches Problem. NJW 1:1–10
Liebig G (2001) The cult of artificial intelligence vs. the creativity of the human mind. Fidelio 10:4–15
Loh J (2019) Trans- und Posthumanismus (Zur Einführung). Junius, Hamburg
Mainzer K (2018) Künstliche Intelligenz—Wann übernehmen die Maschinen?, 2nd edn. Springer, Berlin
Martínez Garay LM (2019) La relación entre culpabilidad y peligrosidad. In: Maraver Gómez M, Pozuelo Arquimbau L (eds) La culpabilidad. Montevideo, pp 115–200
Martini M (2018) Algorithmen als Herausforderung für die Rechtsordnung. JZ 72:1017–1025
Martini M, Nink D (2017) Wenn Maschinen entscheiden. Persönlichkeitsschutz in vollautomatisierten Verwaltungsverfahren. NVwZ 36:1–14
Minsky M (1988) The society of mind. Simon & Schuster, New York
Minsky M (1994) Will robots inherit the earth? Sci Am 271:108 ff
Minsky M (2006) The emotion machine. Simon & Schuster, New York
Misselhorn C (2019) Grundfragen der Maschinenethik, 3rd edn. Reclam, Ditzingen
Möllers T (2017) Juristische Methodenlehre. Beck, München
Montesquieu (1964, first published 1748) De l'esprit des lois. In: Oeuvres complètes. Éditions du Seuil, Paris
Montesquieu (1989) The Spirit of the laws. In: Cohler AM, Miller BC, Stone HS (eds and transl). Cambridge University Press, Cambridge
Moravec H (1988) Mind children. The future of robot and human intelligence. Harvard University Press, Cambridge, MA/London
Nida-Rümelin J, Weidenfeld N (2018) Digital Humanism, 3rd edn. Springer, Berlin
Niiler E (2019) Can AI be a fair judge in court? Estonia thinks so. Available at https://www.wired.com. Accessed 23 July 2019
Nilsson N (2010) The quest for artificial intelligence. Cambridge University Press, Cambridge
O'Neil C (2016) Weapons of math destruction. How big data increases inequality and threatens democracy. Crown, New York
Orwat C (2019) Diskriminierungsrisiken durch Verwendung von Algorithmen. Nomos, Baden-Baden
Pasquale F (2015) The black box society. The secret algorithms that control money and information. Harvard University Press, Cambridge, MA/London
Peters O (2012) Kritiker der Digitalisierung. Peter Lang, Berlin
Pfitzenmaier G (2016) Leben auf Autopilot. Oekom
Phillips L (1990) Proximate applications of neural networks in jurisprudence. Jur PC 11–12:820 ff.
Plato (1925) Statesman. In: Statesman. Philebus. Ion. Fowler HN, Lamb WRM (transl). Loeb Classical Library 124. Harvard University Press, Cambridge, MA
Popper K (1966) The open society and its enemies, vol II, 5th edn. Routledge, London
Preston J, Bishop M (eds) (2002) Views into the Chinese room: new essays on Searle and artificial intelligence. Clarendon Press, Oxford
Prinz J (2012) Singularity and inevitable doom. J Conscious Stud 19:77–86
Raabe O et al (2012) Recht ex machina. Springer, Berlin
Ramge T (2018) Mensch und Maschine. Wie Künstliche Intelligenz und Roboter unser Leben verändern, 2nd edn. Reclam, Ditzingen
Reichenbach H (1938) On probability and induction. Philos Sci 5(1):21 ff.
Reichwald J, Pfisterer D (2016) Autonomie und Intelligenz im Internet der Dinge. CR 32:208–212
Rostalski F, Völkening M (2019) Smart sentencing. KriPoZ 5:265–273
Roxin C, Greco L (2020) Strafrecht Allgemeiner Teil, vol I, 5th edn. Beck, München
Schickore J, Steinle F (eds) (2002) Revisiting discovery and justification. Springer, Dordrecht
Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117
Schuhr J (2012) Willensfreiheit, Roboter und Auswahlaxiom. In: Beck S (ed) Jenseits von Mensch und Maschine. Nomos, Baden-Baden
Schulze-Fielitz H (2015) Art. 20. In: Dreier H (ed) Grundgesetz Kommentar, vol II, 3rd edn. Mohr Siebeck, Tübingen
Schünemann B (2018) Der Kampf ums Verbandsstrafrecht in dritter Neuauflage etc. StraFo 317 ff.
Searle J (1980) Minds, brains, and programs. Behav Brain Sci 3:417–424
Sejnowski T (2018) The deep learning revolution. MIT Press, Cambridge, MA/London
Shanahan M (2010) Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, Oxford
Shanahan M (2015) The technological singularity. MIT Press, Cambridge, MA/London
Silver D et al (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529:484–489
Silver D et al (2017) Mastering chess and shogi by self-play with a general reinforcement learning algorithm. Available at arXiv:1712.01815v1 [cs.AI]. Accessed 28 July 2019
Simmler M, Markwalder N (2017) Roboter in der Verantwortung? ZStW 129:20–47
Sorgner SL (2016) Transhumanismus. Herder, Freiburg/Basel/Wien
Sousa Mendes P (2020) Representation of legal knowledge and expert systems in law. In: Livro em Homenagem a Amilcar Sernadas. Lisboa, 23 ff.
Strandburg K (2019) Rulemaking and inscrutable automated decision tools. Columbia Law Rev 119:1851–1886
Surden H (2014) Machine learning and law. Washington Law Rev 89:87 ff.
Taplin J (2017) Move fast and break things: how Facebook, Google and Amazon have cornered culture and undermined democracy. Little, Brown and Company, New York
Tegmark M (2017) Life 3.0. Being human in the age of artificial intelligence. Knopf, New York
Turing A (1950) Computing machinery and intelligence. Mind LIX(236):433 ff.
Turkle S (2005) The second self. computers and the human spirit. The MIT Press, Cambridge, MA/London
Turkle S (2012) Alone together. Why we expect more from technology and less from each other, 3rd edn. Basic Books, New York
Velsberg O (2019) “Estland: Roboter als Richter”. Available at https://www.mdr.de. Accessed 23 July 2019
Vinge V (1993) The Coming technological singularity: how to survive in the post-human era. Available at https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022856.pdf
Volland H (2018) The creative power of machines
Wagner J (2018) Legal tech and legal robots. Springer, Berlin
Wallach W, Allen C (2009) Moral machines. Teaching robots right from wrong. Oxford University Press, Oxford
Warwick K (2012) Artificial intelligence. The basics. Routledge, London/New York
Webb A (2019) The big nine. How the tech giants & their thinking machines could warp humanity. Public Affairs, New York
Weber M (1972) Wirtschaft und Gesellschaft, 5th edn. Mohr Siebeck, Tübingen
Weizenbaum J (1984) Computer power and human reason. Pelican, London
Weizenbaum J (2001) Computermacht und Gesellschaft. Suhrkamp, Berlin
Weizenbaum J (2006) Wo sind sie, die Inseln der Vernunft im Cyberstrom? Herder, Freiburg
Weizenbaum J, Haefner K (1990) Sind Computer die besseren Menschen? Ein Streitgespräch. Piper, München
Welzel H (1962) Naturrecht und materiale Gerechtigkeit, 4th edn. Vandenhoeck & Ruprecht, Göttingen
Welzer H (2018) Die smarte Diktatur, 2nd edn. S. Fischer, Frankfurt am Main
Werkmeister A (2015) Straftheorien im Völkerstrafrecht. Nomos, Baden-Baden
Wischmeyer T (2018) Regulierung Intelligenter Systeme. Aör 143:1–66
Zarkadakis G (2015) In our own image. The history and future of artificial intelligence. Pegasus Book, New York/London
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
Greco, L. (2024). Judicial Power Without Judicial Responsibility: The Case Against Robot Judges. In: Moura Vicente, D., Soares Pereira, R., Alves Leal, A. (eds) Legal Aspects of Autonomous Systems. ICASL 2022. Data Science, Machine Intelligence, and Law, vol 4. Springer, Cham. https://doi.org/10.1007/978-3-031-47946-5_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47945-8
Online ISBN: 978-3-031-47946-5
eBook Packages: Law and Criminology, Law and Criminology (R0)