This book is organised around three poles:

  • Artificial intelligence applications to the modelling of reasoning about legal evidence, as well as to the handling of narratives;

  • The modelling of argumentation, especially as applied to law, and computer tools for that purpose; and

  • The specifics of disciplines within forensic science, especially in relation to actual or potential applications of computing.

The organisation of this book, by thematic clusters, is reflected accurately by the detailed table of contents. Apart from the chapters of this book, several of the entries in the “Glossary” are also substantial, and in some cases (e.g., s.vv. “mens rea”, “examination”, “time”, and “hearsay”) they can be considered short sections providing further important information on given subjects. These are things that any practitioner of computing, and in particular of artificial intelligence, ought to know when setting out to develop an application to legal evidence, other than by mere implementation of an extant design. This is all the more the case if, for the requirements analysis, the input from legal professionals is not articulate. A project leader should be very careful with such issues, lest he or she be sorry later. In fact, it will not be enough to obtain functioning software; the software will have to withstand strict scrutiny, whether from law enforcement or from legal professional practice. The incentive for finding fault is that during litigation, if a procedural or substantive legal inadequacy can be imputed to the use made of a given piece of software (and that objection is upheld), then this may have a major impact on the outcome of the judicial case at hand.

That we are able to offer such a caveat depends on a state of affairs in which information technology, as well as the pool of techniques from artificial intelligence, already has results to show in the domain of legal evidence or of police investigations. The story of how we got there is something that by itself deserves to be told. In the 1990s, AI applications to legal evidence were at most a desideratum, apart from some pioneering projects whose results catered to scholars in artificial intelligence or in cognitive science, yet were not operational in the application domain. There used to be computer tools for disparate kinds of forensic testing, but in all likelihood the present treatment in this book is the first time that the several disciplines within forensic science have been brought together with AI modelling of reasoning about legal evidence and with AI modelling of argumentation. This was a good reason to have a unified treatment.

Popular perceptions of trials, through printed or cinematic whodunit stories, emphasise evidence. Undeniably, evidence plays a major role in law. Yet it is by no means the case that research in legal computing, or more specifically in artificial intelligence and law (AI & Law), has been mainly concerned with legal evidence. Quite the contrary: until the early 2000s, evidence was a surprisingly inconspicuous subject within AI & Law. Strangely, it took AI & Law three decades for Evidence to emerge conspicuously. In the 1970s, much work in AI & Law was on deontic logics, which are modal logics of obligation and permission.

Even as impressive practical tools emerged, with an array of topics active in AI & Law, evidence remained, in a sense, the unseen Cinderella. Some reference to evidence may have occurred within treatments of other subjects in AI & Law. It can safely be stated that the turning point for the status of evidence on the stage of AI & Law was my own first initiative for a journal special issue on the subject, the proposal for which was accepted by the late Donald Berman qua regular editor of Artificial Intelligence and Law, as early as 1996. That initiative was not intended to record the state of the art as available at the time. Rather, it was about bootstrapping into existence a pool of research and papers where these had been sorely absent. The initiative involved bringing together scholars from different disciplinary compartments, and this spurred interest and collaboration before we went to press. By-products included a conference session I co-chaired in Amsterdam in December 1999: whereas the audience consisted of legal scholars, some of the speakers were from AI, not necessarily previously associated with AI & Law. Eventually, several journal special issues resulted, and other people who had not been among the authors started their own projects, or even undertook initiatives such as workshops.

Already in a guest editorial (Nissan & Martino, 2003b) of a special issue published in 2003 (namely, Nissan & Martino, 2003a), I was able to plot a graph (see Fig. 1.1) in which each theme appeared inside a circle, showing the sundry thematic relations of the papers (identified by their authors’ names) that had appeared in the journal special issues in AI & Law which I had guest-edited up to that point, including the issue whose editorial it was (and for which the thematic relations in Fig. 1.2 hold). Already at the time, I felt able to state that Fig. 1.1, “due to its intricacy, may look perhaps like a dish of pasta” (Nissan & Martino, 2003b, p. 239).

Fig. 1.1 Thematic relations of the articles that appeared in the journal special issues Artificial Intelligence and Law (AIL), 9(2/3), in 2001; Computing and Informatics (CAI), 20(6), in 2001; Cybernetics & Systems (C&S), 34(4/5) and 34(6/7), in 2003; Applied Artificial Intelligence (AAI), 18(3/4), in 2004; Information and Communications Technology Law (ICTL), 10(1), and also on pp. 231–264, ibid., 10(2), in 2001. Also included are the papers on the representation of time in legal contexts, in the special issue of Information and Communications Technology Law, 7(3), in 1998. All those journal issues were guest-edited by Antonio Martino and Ephraim Nissan, except Information and Communications Technology Law, 10(2), whose scope was more broadly in AI & Law, and whose guest-editors were Donald Peterson, John Barnden, and Ephraim Nissan

Fig. 1.2 Thematic relations of the articles that appeared in the special issue (Nissan & Martino, 2003a) on legal evidence, in Cybernetics & Systems, 34(4/5)

At present, Evidence is a viable, actively pursued subdomain within AI & Law. Even some AI & Law scholars who had chosen not to take part in the journal special issues initiative, typically and admittedly because they were at a remove from any concern with evidence, eventually turned to working on projects in which evidence features conspicuously. The trend spread into other areas pursued in AI & Law, mainly the modelling of legal argumentation. Moreover, contacts and even joint initiatives unfolded, and continue to take place, between such scholars and legal evidence scholars, who in turn had started to move in that direction after I turned to them (of course initially mainly for advice) with Don Berman’s agreement to the special issue in my hand. That the new trend keeps going in such a sustained manner is an unmistakable indicator of the successful emergence of the theme within AI & Law.

Let us turn to argumentation, which since the 1990s has been a very active field within AI & Law. It is definitely not the case that, historically, all computational techniques for handling argumentation have been applied to legal arguments, let alone to legal evidence. Until the mid-1990s it would have been strange to combine a treatment of computer tools for handling arguments (at the time, an emerging field within AI & Law, yet not about evidence) with a discussion of formal, computational approaches to legal evidence: until that time, formalisms for evidence were a hot topic among some legal scholars and statisticians, whereas within AI & Law the field had yet to emerge. Emerging trends make it cogent and stimulating to treat argumentation, as well as other kinds of models of reasoning about the evidence, within the same compass. Current developments in both fields are such that there is some synergy in dealing with both of them in the same overview.

AI & Law is more specific than the field of legal computing. Zeleznikow (2004) discussed the construction of intelligent legal decision-support systems in over fifty pages. Within AI & Law, with some seminal work from the end of the 1980s and then organically from the late 1990s, a new area has been developing, which applies AI techniques to reasoning on legal evidence; this also requires capturing, within a formal setting, at least some salient aspects of the legal narrative at hand. In turn, the subdomain of AI & Law that is mainly concerned with evidence is distinct from the application of computing, and of AI techniques in particular, within the various individual forensic disciplines: e.g., computer imaging or computer graphics techniques for reconstructing, from body remains, a set of three-dimensional faces showing what a dead person may have looked like, practically fleshing out a skull – a method that is not without its critics, by comparison to photographs of the dead person once he or she has been identified. By way of exemplification from the forensic sciences, chapters or sections devoted to a few of them are included in this book: see Chapters 8 and 9, and Sections 6.1.10 and 6.2.1.5.

AI & Law is a field that is either the sole specialty of its typical journals, or one specialty alongside the law of information and computing technology. Artificial Intelligence and Law (Kluwer/Springer) is the standard journal of the former category, whereas both areas are hosted by Information and Communications Technology Law (Taylor & Francis), a journal whose previous title was Law, Computers, and Artificial Intelligence (Carfax). An older journal than both is Informatica e Diritto (in Florence). Oxford University Press publishes the International Journal of Law and Information Technology. In Australia, the University of Tasmania publishes the Journal of Law and Information Science. The website of the University of Warwick (England) hosts an e-journal called Journal of Information Law & Technology. In 2010, Taylor & Francis launched the journal Argument & Computation, whose scope is important for the domain of AI & Law. As to Law, Probability and Risk: A Journal for Reasoning Under Uncertainty, launched by Oxford University Press in 2002, it is a journal of legal scholars and statisticians that also publishes relevant papers in AI & Law: “The journal is intended mainly for academic lawyers, mathematicians and statisticians. The journal seeks to publish papers that deal with topics on the interface of law and probabilistic reasoning” (from its blurb when launched). There is also the Kluwer journal, edited in Florence, Information Technology and the Law: An International Bibliography, in which “Artificial intelligence and legal reasoning” is category 023.

It is important to understand that, vis-à-vis artificial intelligence in general, applications to law have not been only at the receiving end: there has also been a flow of techniques which, once they proved effective in as fine-textured, “soft” and complex a field as law is, have become available within AI for an array of other applications. Basic research in AI has benefited from there being ongoing research in AI & Law. Another thing of which one should be aware is that in the process by which the modelling of reasoning on legal evidence began to emerge within AI & Law, and then to move from the periphery to the mainstream of AI & Law, it was not the latter that contributed techniques to the new subdomain; rather, the new subdomain drew on techniques from the general field of AI, and on insights from legal evidence scholarship concerned with probability and plausibility, before techniques that are conspicuous within the pool of tools already developed within AI & Law also came to fruition.

In a sense, it could have been expected that in order to make progress, one should look for AI techniques outside AI & Law: the latter had rather been neglecting evidence for the very reason that its tools had not yet been adequate for dealing with evidence thoroughly. In contrast, AI in general had been much concerned with evidentiary reasoning. It stands to reason that such results from AI could have been promptly applied, if only (and this is the crux of the matter) the status of quantitative models for decision-making in criminal cases (as opposed to civil cases) had not been a hotly disputed topic among legal scholars.

AI practitioners need to exercise care, lest methodological flaws vitiate their tools in the eyes of some legal scholars in the domain, let alone of opponents in litigation. There would be little point for computer scientists to develop tools for legal evidence, if legal scholars were to find them vitiated ab initio. This is especially true of tools that would reason about the evidence in criminal cases, in view of fact-finding in the courtroom: whether to convict or not. This is different from the situation of the police, whose aim is to detect crime and to find suspects, without having the duty of proving their guilt beyond reasonable doubt, which is the task of the prosecutors.

It was crucial to get legal scholars of evidence on board, or at least sympathetically interested, and to obtain their input and feedback when steering the new direction of research within AI & Law, in trying to promote the development of credible computer tools or abstract techniques for dealing with legal evidence. Besides, legal scholars and statisticians fiercely supporting or opposing Bayesianism in handling probabilities in judicial contexts (e.g., Allen & Redmayne, 1997; Nissan, 2001a; Tillers & Green, 1988) had come to realise the desirability of models of plausibility, rather than of just (strictly) probability. The participants in the debate about Bayesianism in law or, more generally, about probabilities in law are in practice continuing a controversy that started in the early modern period (Nissan, 2001b), with Voltaire being sceptical of probabilities in judicial decision-making, whereas in the 19th century Boole, of Boolean algebra fame, believed in the formalism’s potential applicability to law. An anonymous referee for an article by the present author has remarked: “This particular intellectual battle should also be better placed in the broader context […]. The fight has raged, and at least still burns, across the disciplines of statistics, philosophy, artificial intelligence and rather more derivatively in medicine and law”. In epistemology, too, i.e., the philosophy of knowledge, there is a controversy about Bayesianism. In Dragoni and Nissan (2004, p. 297), we remarked:

In the literature of epistemology, objections and counterobjections have been expressed concerning the adequacy of Bayesianism. One well-known critic is Alvin Plantinga (1993a, Chap. 7; 1993b, Chap. 8). In a textbook, philosopher Adam Morton (2003, Chap. 10) gave these headings to the main objections generally made by some epistemologists: “Beliefs cannot be measured in numbers”, “Conditionalization gives the wrong answers”, “Bayesianism does not define the strength of evidence”, and, most seriously, “Bayesianism needs a fixed body of propositions” (ibid., pp. 158–159). One of the Bayesian responses to the latter objection about “the difficulty of knowing what probabilities to give novel propositions” (ibid., p. 160), “is to argue that we can rationally give a completely novel proposition any probability we like. Some probabilities may be more convenient or more normal, but if the proposition is really novel, then no probability is forbidden. Then we can consider evidence and use it, via Bayes’ theorem, to change these probabilities. Given enough evidence, many differences in the probabilities that are first assigned will disappear, as the evidence forces them to a common value” (ibid.). For specific objections to Bayesian models of judicial decision making, the reader is urged to see the ones made in Ron Allen’s lead article [(1997)] in Allen and Redmayne (1997).
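
Purely by way of illustration (the following sketch is this writer’s own, not drawn from Morton or from the legal literature just cited, and all the numbers in it are invented), the Bayesian reply quoted above can be made concrete with a few lines of Python: two reasoners who start from very different priors about a hypothesis, but update on the same stream of evidence by Bayes’ theorem, end up with nearly the same posterior.

    # Illustrative sketch only: two reasoners assign very different prior
    # probabilities to the same hypothesis H, then both update on ten items
    # of evidence, each item assumed to be three times likelier if H is true.
    # All figures are invented for demonstration purposes.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Return P(H | e) by Bayes' theorem, for one item of evidence e."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    priors = {"enthusiast": 0.90, "skeptic": 0.10}
    P_E_GIVEN_H, P_E_GIVEN_NOT_H = 0.6, 0.2   # each item favours H at 3:1

    for label, p in priors.items():
        for _ in range(10):                   # ten independent items of evidence
            p = bayes_update(p, P_E_GIVEN_H, P_E_GIVEN_NOT_H)
        print(label, round(p, 4))
    # Both posteriors come out very close to 1, despite priors 0.8 apart:
    # this is the sense in which "the evidence forces them to a common value".

This arithmetic does not, of course, answer the substantive objections listed by Morton or by Allen; it merely makes explicit what the Bayesian reply amounts to.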

We shall come back to this topic later, less superficially than here in the introduction. Among the “Bayesian enthusiasts” concerning legal evidence, perhaps none are more so than Robertson and Vignaux; whereas Ron Allen is prominent, and cogently articulate, among the “Bayesio-skeptics”; see e.g. Allen (2001a) on his desiderata vis-à-vis artificial intelligence modelling of the plausibility of legal narratives (cf. Allen, 2008a, 2008b). Such charged labels are on occasion objected to, and the denotationally yet not connotationally equivalent labels, respectively “Bayesians” and “skeptics”, appear to be preferable. No application of statistics to the evaluation of evidence has won as much acclaim, even from Bayesio-skeptics, as Kadane and Schum’s (1996) evaluation of the evidence in the Sacco and Vanzetti case from the 1920s; but in a sense the Bayesio-skeptics could afford to be generous, because that project had taken years to develop, and is therefore of little “real time” practical use in ongoing judicial settings.

It was in this context that it took a systematic, organic effort to promote the new subdomain of modelling the reasoning on evidence within AI & Law. This was mainly done through several editorial initiatives, as well as workshops, of the present writer and of others (Martino & Nissan, 2001; Nissan & Martino, 2001, 2003a, 2004a; MacCrimmon & Tillers, 2002, on which see Nissan, 2004), and this in turn involved spurring scholars from disparate disciplinary quarters to develop some piece of research to specification, and then having referees from different specialties evaluate the resulting papers again and again.

For example, practitioners of AI or of logic who had never before been concerned with legal applications contributed some important applied techniques to a common pool, until there was a critical mass of research visible enough to spur scholars from within AI & Law, as it had stood before, to enter the new subdomain and contribute their own techniques. Among the latter, it was perhaps argumentation techniques, a hotly pursued area of research within AI & Law during the 1990s, that constitute the most spectacular contribution.

Let us say something about the communities of users that may benefit from advances in AI & Law technology. Most often, the computer tools used by legal professionals are technologically unambitious (at any rate, this is the case from the viewpoint of scholars in artificial intelligence, and in particular in AI & Law): legal professionals are likely to be using tools for document processing, and legal databases. Even simple tools for organising the evidence and the structure of how to argue a case may make a difference, in terms of work facilitation.

As to police officers, while in the office they may be using standard office tools, as well as (in Britain) the Police National Computer. Some police stations use computer tools for the way identity parades are carried out (see Section 4.5.2.3), and intelligence and crime analysts, too, use specialist tools (see Section 4.5.1 and Chapters 6 and 7).

Yet here, too, there are tools that may be of much help when carrying out investigations, e.g., Richard Leary’s FLINTS (Force Linked Intelligence System), a tool for criminal intelligence analysis that performs network link analysis; it was originally deployed in the West Midlands Police (see Chapter 7). There is considerable ongoing research into the use of data mining techniques to assist with criminal link analysis (see Chapter 6).
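
To convey, purely illustratively, what network link analysis involves (the sketch below is this writer’s toy example, not FLINTS nor any operational system, and the records in it are invented), one may treat the entities occurring in incident records as nodes of a graph, with co-occurrence in a record as an edge, so that indirect chains connecting two persons of interest can be surfaced:

    # Toy sketch of network link analysis (invented data; not FLINTS or any
    # operational police system): entities mentioned in incident records become
    # nodes, co-occurrence in a record becomes an edge, and a path between two
    # persons of interest reveals an indirect connection worth examining.
    import networkx as nx

    co_occurrences = [
        ("suspect_A", "phone_0771"),
        ("phone_0771", "flat_12_high_st"),
        ("suspect_B", "vehicle_KX07"),
        ("vehicle_KX07", "flat_12_high_st"),
    ]

    g = nx.Graph()
    g.add_edges_from(co_occurrences)

    if nx.has_path(g, "suspect_A", "suspect_B"):
        print(nx.shortest_path(g, "suspect_A", "suspect_B"))
    # -> ['suspect_A', 'phone_0771', 'flat_12_high_st', 'vehicle_KX07', 'suspect_B']

Operational tools add entity resolution, weighting, temporal filtering and record provenance on top of this bare idea; the actual systems are discussed in Chapters 6 and 7.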

There exists a body of research into the general organisational problems that have occurred on the ground when police forces adopted computer technology as part of intelligence-led policing. There are socio-legal studies that deal with this; see Section 4.5.1. Let us just cite here a few articles by James Sheptycki (2004) and by his collaborator Jerry Ratcliffe (2005, 2007). For example, concerning intelligence-led policing, Ratcliffe writes (2005, p. 437):

The ability to employ new methods of information management to better understand and respond to the criminal environment is not the sole domain of intelligence-led policing. There is overlap with the way that crime analysis is used within problem-oriented policing (Scott, 2000; Tilley, 2003), both for problem definition and evaluation analysis. High volume crime analysis, including the use of mapping, has become a core activity of crime analysts (Cope, 2003) and is central to CompStat. CompStat is an operational management process and is much more than just maps of crime, however, the mapping of volume crime patterns does form an integral part of the overall strategy (McGuire, 2000). CompStat combines computer technology, operational strategy, and managerial accountability, and is inherently data-driven (Walsh, 2001).

Ratcliffe described as follows some organisational problems arising from having to copy paper records into digital format (2005, pp. 442–443):

Interpretation of the criminal environment does not just require a suitable intelligence structure; it also requires appropriate data sources and analytical tools. One district commander mentioned that his officers had ‘done an internal audit and found a 50% error rate in data recording.’ Clearly any intelligence is only as good as the data it originates from, and a 50% error rate is a serious cause for concern. For example, computer simulation of crime mapping scenarios suggests that 85% is a minimum acceptable geocoding rate for basic crime mapping (Ratcliffe, 2004), placing significant doubts about a 50% error rate in basic recording. The practice of entering paper records onto the local computer system was not only error-prone, it was also time-consuming and limited to two offense categories: burglary and vehicle crime. There was no time to record other offense categories.

As there is no requirement of patrol officers to enter data onto a computer, considerable time was spent on data entry in order to digitally transcribe paper records. At least one person in every intelligence office mentioned, during interviews, problems with data entry. The main issues were the lack of personnel, and the content of data entry training that had been available to those analysts who had received training. These individuals complained that the training had not covered hard skills such as those required to operate the various mapping and record management platforms operated by the NZP [i.e., New Zealand Police]. As a result, data entry was slow and hindered the ability of the organization to identify timely intelligence. An Inspector in charge of a district-level intelligence office pointed out that in an internal study it had been shown to take sixteen minutes to enter the details of a burglary on to the records management system, and that while they record data on modus operandi and the property stolen, ‘nobody has time to analyze the stuff.’

There is also a down-to-earth way of noticing the impact of computer technology in law enforcement: the pool of skills (including computer literacy) expected of candidates for specific roles. Let us consider an advert from England, placed by the Kent Police in a local newspaper. The post advertised (in December 2007) is Criminal Justice Unit Supervisor. The pay is not impressive. What are the skills required? And which computer skills are required? The ad reads as follows:

You will be responsible for supervising the day-to-day work of the criminal justice unit, together with another supervisor, to ensure case papers are properly prepared and submitted to the prosecuting authorities within tight timescales, responding to inquiries quickly and efficiently.

You must have well developed communication skills to tactfully, but assertively, deal with witnesses and victims who can be angry and abusive in cases where they have received no compensation from the courts or feel let down by the justice system, distressed at having to attend court or just reluctant witnesses who do not feel that they have time to attend court.

An ability to communicate at all levels within the unit and as part of the wider inter-agency approach involving the court, CPS (Crown Prosecution Service), probation and other criminal justice partners is essential.

The ability to manage office work, supervising caseworkers to whom particular criminal cases are entrusted, plays an important part. One would expect some ability to exploit technology, but that only comes later on in the ad:

You must evidence your ability to work under pressure and drive through change, which is essential for the role, as all work passed to the office is subject to strict time limits. The ability to prioritise and allocate work to Caseworkers and to plan ahead is also essential for the smooth running of the unit.

The nature of this work is often distressing and sensitive as the unit deals with cases involving rape, child abuse and other sexual offences, road traffic collisions, as well as murder and offences against the person. You must have the ability to keep the caseworkers motivated, complete regular performance reviews with staff and be aware of signs of stress. You have to be fully dedicated to the role, as the unit has no control over the amount of work that has to be produced during any given week. You must also demonstrate flexibility and be prepared to stay as required at the end of the day, in order to ensure that all witnesses are warned for crown court the next day and any problems are resolved.

It is at this point in the advert that information technology skills are mentioned. This is similar to what is found in ads for other jobs with the police. Whereas a wide range of applications is mentioned, this mainly pertains to standard office software. Police databases are mentioned as well, though:

Proven evidence in the ability to utilise a full range of Microsoft Office applications is essential. Experience in the use of Genesis and Police National Computer, together with knowledge of the criminal justice procedures would be an advantage.

For a post of Restorative Justice Administrator, coordinating a young offenders’ programme, and involving “reviewing case files from officers, checking that the relevant documents have been completed”, and so forth, the ad stated: “Good IT skills are essential, with previous experience of the force’s and national databases. You should be educated to at least GCSE standards or equivalent, including English Language and Mathematics”. Realising that much is important and sobering. Fancy tools should not be such that they would become a burden to often overburdened police staff; for one thing, in the U.K. it is well known that office work takes up an inordinate percentage of the force’s time, sometimes at the expense of patrolling. But sometimes it is the tasks that intelligence or crime analysts are given that are inappropriate for their skills, and they may be using software to produce inappropriate output (such as management statistics), just as they use software for what is their proper pool of skills. See Section 4.5.1. Some other computer tools for the police are intended for training.

Now let us consider a Kent Police ad for a Training Officer: “an enthusiastic and self-motivated Area Training Officer to work within the Personnel and Training Unit”. That one carries a better salary than the post of Criminal Justice Unit Supervisor. The Area Training Officer has to work with a police college “in the arranging of centralising training courses”, and is “required to identify and analyse local training needs, arrange and deliver appropriate training, whilst prioritising the demands placed on the area”. “You should be able to co-ordinate, design and deliver training in a range of styles, as well as having the ability to demonstrate practical experience of various training techniques”. Some statistics skills are required of the post-holder:

You will be required to undertake training needs analysis on an individual or group basis and provide management reports and statistical information. This is seen by the area to be a key element of the role, as the link between performance management and training is key to the area business.

And here come the IT skills, in the ad under consideration:

You must be flexible in your working hours as there may be a requirement for you to work approximately one evening per week for ongoing training of staff. Strong IT skills are essential, along with excellent communication skills and the ability to negotiate with senior managers, supervisors and outside organisations. An understanding of police roles is desirable.

Again, there are IT skills and there are IT skills. There are IT skills that it would be reasonable to require of staff, and IT skills that would be, quite unhelpfully, an unreasonable burden if imposed as a requirement. If we are to develop useful tools at the forefront of what the state of technology affords, it is essential that the new tools not be resented, and that they not complicate the lives of users.

Lawyers and policemen have different educational backgrounds. Their attitudes to technology and to numerate skills may also be different, and are certainly different from those of academic computer scientists. It is essential to calibrate the intended use of the tools we may conceive of according to the real-world features and professional cultures of the communities of users. As a matter of fact, an array of professional communities is being addressed in this book as a readership, and an array of professional communities constitutes the intended users of both extant and potential tools within the scope of this book.

This book-form presentation was preceded by a less ambitious attempt at synthesis, in a couple of articles by this author (Nissan, 2008a, 2009a). They represent a preparatory stage in what was to become this book. The domain is now mature for a volume such as the one you are reading.

The range of techniques and tools covered in this book is explained, hopefully in an accessible manner, rather than in an overly technical one. This is mainly an introduction, with indications of what to access in order to pursue specific technical directions. Reading this book will hopefully result, for professionals in the fields concerned, in an ability to define requirements and perhaps commission a project; or else it will result in an ability, for designers who are computer scientists, to see how to usefully direct their talents in this array of application domains.