1 Law and Language

Law as we know it depends on the intricacies of natural language, whether spoken, written or printed. Is computer code another language or is code something else? Can legal norms be articulated in code? If so, what is gained and what is lost in translation? If not, what happens to legal norms when we transform them into computer code? Are they transformed into non-legal or alegal norms, or are they transformed into rules or algorithms that do not qualify as norms?

To answer these questions we must investigate the relationship between human language acquisition, the use of language and language as a system of signifiers that refer both to each other and to phenomena outside of language (De Saussure 1915). Language acquisition in the human species can be studied from the perspective of neo-Darwinian selection or biological predisposition (Pinker and Bloom 1990); seen as an affordance that builds on the plasticity of the human brain (Wolf 2007); or understood as the condition for and the effect of mind, self and society (Mead and Morris 1962). Though a wide variety of languages has emerged, entailing widely diverging grammatical structures that constrain and enable cognition of the world, these languages seem to share two key affordances. One is the ability to move beyond ostensive reference; the other is the capability to take the perspective of the other (Ricoeur 1992). These two constitutive features of human language allow humans to present in the here and now what is absent in present time or local space (Ricoeur 1973), and to integrate the perspective of other actors into that of a generalized other (Mead and Morris 1962). Together they permit the creation of a web of meaning that connects ‘pastness’ (Glenn 2007) with an anticipation of possible futures (Esposito 2011), thus shaping a world in which people can act, rather than merely display certain behaviours. The latter refers to what an external observer might register in terms of movements, states and shapes, whereas the former refers to what such behaviours signify to the actor and to those who need to ‘read’ her behaviour. Action involves an intentional stance and the double contingency that is typical of human interaction,Footnote 1 whereas behaviours are explained in terms of causal or statistical models. Human language indeed provides the framework for both ways of describing human gestures: (1) from a bird’s-eye view, often equated with an objective point of view, and (2) as an attribution of meaning within the context of a specific language and culture, sometimes referred to as taking an intersubjective point of view. Philosophers like Ricoeur (1973), Plessner (1975), Mead and Morris (1962), and Zizek (1991) have highlighted the ambiguity and the dynamic nature of human language, which are closely related to the inability to perform a final closure on what a word, sentence or idiom can mean. The most pointed description may be Zizek’s (1991: 30) dictum that ‘communication is a successful misunderstanding’. Subtle shifts in the meaning of our utterances coincide with the inexorable multiplicity of ‘generalized others’. We can never be sure how others will understand our performance, and this uncertainty is the germ of creative innovation. Rather than deploring it, we should embrace it as the condition of possibility for a reiterant and necessary reinvention of self and society, understood as relational and mutually constitutive. The double contingency that is implicated in the persistent and mutual anticipation of how one is anticipated calls for social norms that stabilize expectations. Such stabilization, however, may turn into intimidating forms of social control, constraining people from reinventing themselves and their society.

It seems that legal norms perform two functions at the same time: (1) they provide legal certainty in a way that (2) allows people to contest the validity, the relevance, and the interpretation of these norms in the light of concrete cases. This double instrumentality is an affordance of human language; it derives from the fact that social norms are made explicit in a way that renders them debatable. When a legal rule is formulated, its legal effect becomes apparent, and this permits deliberation on its validity, relevance and meaning. Social norms that remain tacit, unspoken, below the radar or under the skin require articulation to become disputable. A dispute assumes that parties express and communicate their differences in terms of language rather than in the form of violence or intimidation. Language thus opens a space to discuss, to differ, to change, to redefine – language thus conditions both shared meaning and dissent. Law depends on language; without the subtle shifts in meaning that are inherent in a living language, the law would become an algorithm, for instance one based on statistical pattern recognition.

2 Language and Computer Code

So, does computer code compare to human language? Is it made of different stuff; does it extend the family of human languages, or is it a subset of the assortment of human languages? Or should we distinguish between code that does and code that does not match human language? How will and how should the law respond to a hybrid society of human ‘nodes’ interacting with autonomous computing systems that run, for instance, our critical infrastructures, public and private transport, healthcare, and the food industry? To answer these questions we need to assess the transformations generated by communications that are predicated upon computer code. The most radical position has been taken by Kittler (1997), who claimed that it is a big mistake to view code as a form of language. His position is that software does not exist as such and eventually reduces to hardware (namely to differences in voltage):

Up to Hölderlin’s time, a mere mention of lightning seemed to have been sufficient evidence of its possible poetic use. After this lightning’s metamorphosis into electricity, human-made writing passes through microscopically written inscriptions which, in contrast to all historical writing tools, are able to read and write by themselves.

(…) the so-called philosophy of the computer community tends to systematically obscure hardware by software, electronic signifiers by interfaces between formal and everyday languages.Footnote 2

Kittler basically finds that our writing process is obscured by computer programmes and its results manipulated in ways that are invisible to the naked human eye. We have no access to how our writings are processed, altered and interpreted by software code. He warns that once machines determine what information we can access, we will have lost control over what counts as knowledge. This may have been a rather prophetic understanding for a text written in 1997. By now, search engines that rank the results of our queries have become the gateway to most of the information we seek. Their algorithms are part of the trade secrets of a relatively small set of companies, who can claim intellectual property rights on them, such as copyright or patents. We may recall the famous article written by the founders of Google one year after Kittler wrote his ‘There is no software’. Sergey Brin and Larry Page (1998) were, at that time, students at Stanford. In the Appendix, under the heading of ‘Advertising and Mixed Motives’, they claim that:Footnote 3

It is clear that a search engine which was taking money for showing cellular phone ads would have difficulty justifying the page that our system returned to its paying advertisers. For this type of reason and historical experience with other media, we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

(…) we believe that the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.

Both the European Commission and the Federal Trade Commission in the US have engaged in legal action to enforce transparency on Google, precisely on this issue (Geitner 2012; Lohr 2012). From the perspective of the law, one wonders how we have arrived at a point where our access to knowledge and information has come to be regulated by competition law. On the other hand, we must applaud the fact that our legal framework is instrumental in enforcing the transparency advocated by the founding fathers of one of the most powerful gatekeepers on the Web – even if this is done against the apparent will of those same founding fathers.

Manovich (2008), unlike Kittler, embraces software as the only way forward. He claims that if you do not learn how to program, your life will be programmed. In that sense he is in line with Kittler, since both emphasize the hold that software has over everyday life. Indeed, Manovich challenges the humanities insofar as they remain focused on text (2008: 6):

I completely agree with Fuller that ‘all intellectual work is now “software study”’. Yet it will take some time before the intellectuals will realize it.Footnote 4

Though lawyers may want to take a less radical stance on the importance of software study, coming to terms with the impact of code on some of the fundamental assumptions of the Rule of Law seems a pertinent challenge. To the extent that code rules what we come to read and write, the notion of human autonomy that is at the heart of our Western legal framework may require revision. We may, for instance, have to make room for some form of liability for autonomous computing systems, and to create the legal instruments that will allow us to regain a measure of autonomy in the midst of automated decision-making systems.

In her salient study How we became posthuman, Hayles (1999) looks into the computational dimension of all this, describing the history of cybernetics together with that of good old-fashioned artificial intelligence (GOFAI). She highlights the attempts to privilege syntax over semantics, flow over content, and a disembodied, acontextual and ahistorical concept of information over a situated understanding of information. At the same time she, like many others, warns against the illusion of a pure human existence uncontaminated by technological interference; in her view humans have always been cyborgs, deeply intertwined with – if not constituted by – the technologies they invent (Ihde 1990, 2008). For lawyers, the point should be how the emerging ICT infrastructure reinvents us and what we want to preserve from the affordances of the earlier infrastructure. It should be clear by now that affordances such as a certain degree of autonomy cannot be taken for granted and will require actual re-engineering and active participation in the design of the novel architecture of everyday life.

3 Law as Code: Two Strands of Research

The intriguing question of what computer code does to the mediation of law is not squarely addressed in the chapters of Part I of this volume. The authors investigate two types of research questions: (1) the implications of artificial intelligence and the use of digital technologies for the attribution of legal personhood and (2) the implications of techno-regulation for the effectiveness and the legitimacy of the law. It is important, however, to take the issue of law, language and computer code into account when navigating Part I. Both the question of what it means to be a subject rather than merely an object and the question of what it means to be regulated by machines rather than by human legislators are closely related to the issue of human language and computer code.

3.1 Artificial Intelligence and Legal Subjectivity

In the first two chapters of this volume, Hildebrandt and Pagallo inquire into the workings of artificial agents and their implications for legal subjectivity.

In her chapter, Hildebrandt recalls how in 2011 Ken Jennings, 74-time winner of the Jeopardy TV quiz, lost against Watson, the room-size IBM computer. Citing a popular ‘Simpsons’ phrase, Jennings wrote on his video screen: ‘I, for one, welcome our new computer overlords’. Markoff (2011) wrote in The New York Times that for IBM this was:

proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them.

Hildebrandt suggests that Richard Powers, in his 1995 novel Galatea 2.2, seems to have anticipated such an event. Powers introduces a so-called neural network, trained to improve its performance in mastering English literature. He describes how the network gains traction and seemingly comes alive as Helen, the machine that is being prepared to take a Master’s Comprehensive Exam in English literature. The novel traces the relationship that develops between the main character and the computer he is teaching, all the while raising and rephrasing the questions that have haunted AI research. In her chapter Hildebrandt addresses the potential implications of engaging computing systems as smart competitors or smart companions, raising issues of artificial agency and legal personhood. She undertakes a brief history of ‘intelligent’ machines, re-examining Turing’s thought experiment on what have come to be named ‘Turing machines’ and Weizenbaum’s artificial Rogerian therapist Eliza, moving on to IBM’s Deep Blue and Watson. This forms the preamble to a discussion of Searle’s Chinese Room argument, which frames the question of whether computers are capable of attributing meaning or whether meaning itself is an illusion anyway. Hildebrandt then returns to Powers’ Helen, investigating the fine line between postmodern humbug that almost invites computer-mediated simulation and a more serious study of what literature may bring to a discussion of law and artificial intelligence. In her final section Hildebrandt confronts the issue of artificial agency and legal personhood, based on Dewey’s insightful definition of what it means to be artificial. As in the case of an artificial lake, we should not mistake the artificial nature of legal subjectivity for a fiction: an artificial lake is not an imaginary lake.

Pagallo approaches a comparable issue when he examines the impact of robotics technology on today’s legal systems, more specifically investigating how a new generation of robo-traders, AI chauffeurs, artificial pop singers and autonomous lethal weapons affects people’s perception and knowledge of their changing environment. By examining aspects of contemporary literary-legal theory in combination with the regulatory tools of technology, e.g. codes, he aims to determine whether new types of responsibility must be attributed for the behaviour of such robots. He elaborates three different approaches: (1) taking the position that robotics has no substantial implications for the current legal framework of responsibility, (2) taking the opposite position that robotics requires the attribution of legal personhood to make robots liable for the damage they cause and (3) taking a middle position, i.e. that though robotics raises new legal issues, these should be framed in terms of new types of liability for the human actors that design, produce, sell or employ robots, rather than by attributing liability to the robots themselves. Pagallo focuses on notions of reasonable foreseeability, apportioned liability, and causation in criminal law and contractual obligations. He acknowledges that, on the one hand, lawyers generally deem robots not to be legally and morally responsible because, being machines, they lack the set of preconditions traditionally assumed for the attribution of criminal liability, whereas, on the other hand, such machines are already reshaping notions of agency and human responsibility in civil law. Pagallo observes that artificial agents that send bids, accept offers, request quotes, negotiate deals, and make contracts already exist. His claim is that such machines can ultimately be considered liable for some of their actions, for instance by building on the Roman law notion of peculium. This was an amount of money provided to a slave, allowing ‘it’ to run a business and be held liable on its own account, thus limiting the liability of the slave’s owner. Robots, which are considered to be ‘things’ by the law, just like slaves under Roman law, could similarly be provided with a fund from which damages could be compensated in case of liability. Pagallo explains how the distinction between civil and criminal liability is crucial here, highlighting that the intelligence that emerges within the realm of civil law obligations ‘emerges from the rules of the game rather than individual choices’ (Chap. 2 by Pagallo in this volume).

3.2 Legal and Technological Normativity

Coming from entirely different angles, Van den Berg and Leenes, and De Vries and Van Dijk, confront the clash between legal and technological normativity.

In their chapter on ‘Abort, retry, fail: Scoping techno-regulation and other techno-effects’, Van den Berg and Leenes start their investigation from the premise that technology affects human behaviour in the sense that it can be made instrumental to the enforcement of legal norms. Speed ramps or entry gates at train stations are a case in point here. Increasingly, however, norms are embedded into technology by private parties. DVD players are equipped with region codes and other Digital Rights Management systems to regulate the market for digital content consumption. This kind of regulation by technology, which they coin techno-regulation, or ‘code as code’, is swiftly becoming part of the contemporary regulator’s toolbox. The important question then becomes that of the boundaries of techno-regulation. That is to say, should all technological influences on human behaviour be understood as forms of ‘techno-regulation’? Drawing on the findings of legal philosophy, STS, human-computer interaction and regulation theory, Van den Berg and Leenes answer this question negatively. They highlight that technologies may affect the normative constraints that orient human behaviour in ways that were not intended by their designers, let alone by a public authority. The authors find that the lack of attention to such implicit, unintended impacts makes regulators vulnerable to two kinds of risks. First, regulators may miss out on technological means to influence behaviour; second, they may not be aware of the unintended effects of the technological measures they implement and may thus potentially jeopardize both the effectiveness and the legitimacy of such measures.

Based on their analysis of the way in which technology affects human behaviour, the authors present a typology of techno-effects. First, they distinguish between persuasion, nudging, affordances and techno-regulation. They note that, though all four types imply a more or less intentional effort to influence behaviour, those undergoing these effects have decreasing options to choose alternative behaviours, while at the same time being less aware of being persuaded or even forced into a certain way of acting. Second, they discuss a fifth techno-effect, which is unintended, implicit and automatic. They borrow the concept of ‘script’ from Science, Technology and Society studies to explain how constraining such unintended effects can be. Furthermore, Van den Berg and Leenes draw upon the field of Human-Computer Interaction to explain how people tend to display social responses to machines, seemingly unaware of the fact that they are anthropomorphizing. These effects, coined the Media Equation (eliciting social responses to technologies) and anthropomorphizing, form the sixth and seventh techno-effects.

The typology of techno-effects developed by Van den Berg and Leenes clearly highlights different levels of intention on the side of the regulator and different levels of choice, compulsion, and awareness on the side of those subjected to intended or unintended techno-effects. By providing a framework that goes beyond the usual dichotomy of effective or ineffective technological measures, the authors have opened a new field of research that is of import for democratic legislators, courts and citizens as well as designers, producers and users of technological artefacts.

In their chapter on ‘A bump in the road. Ruling out law from technology’, De Vries and Van Dijk are in search of the difference between legal code on the one hand and technical or social encodings on the other. They argue that the rise of techno-regulation provides a new incentive to revisit the classical question of ‘what is law’, forcing a new turn in the debate over what makes law salient as compared to other types of regulation. The novelty of the debate concerns the medium of the law, as well as the question of whether it matters. To what extent should the study of the history of law, as well as inquiries into the future of law, integrate the wider spectrum of media studies? Can lawyers and legal theorists take for granted that media do not make a difference to what they think is the essence of the law? Is there a danger in depicting modern law as a system of incorporeal rules that cannot be affected by the materiality of its longstanding medium of expression, the printing press?

These questions evoke the issue of whether law is a mere instrument of regulation, which may be better served by various types of techno-regulation. De Vries and Van Dijk criticise Lessig’s instrumentalism, inherent in the way he presents the four modalities that can be used to constrain, regulate or shape human behaviour: social norms, legal rules, market incentives, and architectural code. They then describe what they call the practice turn in legal theory, starting with Hart’s seminal The Concept of Law (sociological jurisprudence), which they find much less involved with the study of the practice of law than, for instance, Bruno Latour’s ethnographic study of the French Conseil d’Etat in The Making of Law. The authors provide an in-depth investigation of the semiotic background of Latour’s understanding of law, notably building on Greimas. This allows them to differentiate between law and technology as different modes of enunciation, with law being a matter of ‘tracing through reattachments’ and technology being a matter of ‘delegational folding’. To them, the value of Latour’s approach lies in his taking into account how the legal is constituted through an entire network of actants: files, protocols, spatial arrangements, procedures and people. This brings them to the work of Vismann (2008), who wrote an impressive study on the constitutive role of files in modern law, integrating media studies while taking an internal legal perspective on what law effects.

Interestingly, the authors find that both law and technology resist instrumentalisation by regulators. Their modi operandi (again drawing on the terminology of Latour and Greimas) afford different ways of disabling the enforcement of the intentions of whoever designed the law or the technology. This may save us from a technological future that rules out the law, but it will also save us from the regulatory dream of perfect compliance with legal prescriptions. In the end, the authors find that technological enforcement of a Rule of Law cannot sustain the legal enunciation that differentiates law from social engineering, techno-regulation and other attempts to influence human behaviour.