
1 Introduction: A Framework for Representing Judicial Decisions

The considerations presented in this paper stem from the author’s research on a framework for case-law semantics (see Ceci 2012; Ceci and Gordon 2012) whose goal is to exploit Semantic Web technologies in order to achieve isomorphism between the text fragment (the only legally binding expression of the norm) and the legal rule, thus filling the gap between document representation and rule modelling (Palmirani et al. 2009). More precisely, the framework models the content of judicial documents, such as decisions of courts. The consideration guiding the research is that the features of the new OWL2 standard for computational ontologiesFootnote 1 could greatly improve the modelling of, and reasoning over, legal concepts, if properly combined with defeasible rule modelling. The aim of the framework is therefore to formalize the legal concepts and the argumentation patterns contained in the judgement in order to check, validate and reuse the legal concepts as expressed by the judicial decision’s text. To achieve this, four layers along the Semantic Web stack of technologies (Fig. 32.1) are necessary:

Fig. 32.1: The Semantic Web stack of technologies

  • a document metadata structure, capturing the main parts of the judgement to create a bridge between text and semantic annotation of legal concepts;

  • a legal core ontology, describing the legal domain’s main elements in terms of general concepts through an LKIF-Core extension;

  • a legal domain ontology, an extension of the legal core ontology representing the legal concepts of a specific legal domain concerned by the case-law, including a set of sample precedents;

  • argumentation modelling and reasoning, representing the structure and dynamics of argumentation.
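As a toy illustration of the first layer’s bridging role, the structure below links a text fragment to a concept of the domain ontology. All identifiers and field names here are invented for illustration; the actual framework uses OWL2 resources and a richer metadata structure, not this flat sketch.

```python
# Hypothetical bridge between a judgement's text fragment and its
# semantic annotation; identifiers are invented for illustration.
annotation = {
    # pointer into the document metadata structure (layer 1)
    "fragment": "judgement/body/paragraph-12",
    # class from the legal domain ontology (layer 3)
    "concept": "ConsumerContract",
    # role of the annotated fragment within the judgement
    "role": "motivation",
}

def concepts_in(annotations, role):
    """Collect the ontology concepts asserted in fragments with a given role."""
    return [a["concept"] for a in annotations if a["role"] == role]

print(concepts_in([annotation], "motivation"))
```

The point of the bridge is that every semantic assertion remains traceable to the text fragment that legally grounds it.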

The research is based on a middle-out methodology: top-down for modeling the core ontology, bottom-up for modeling the domain ontology and the argumentation rules. Its sample consists of 27 decisions of Italian case-law, from different courts (Tribunal, Court of Appeal, and Cassation Court) but all concerning the same legal subject: consumer law.Footnote 2 The research relies on the previous efforts of the community in the field of legal knowledge representation (Hoekstra et al. 2009) and rule interchange for applications in the legal domain (Gordon et al. 2009). The issue of implementing logics to represent judicial interpretation has already been addressed, albeit only for the purposes of a sample case. The aim of the present research is to apply these theories to a set of real legal documents, stressing the definitions of OWL axioms as much as possible in order to provide a semantically powerful representation of the legal document for an argumentation system that relies on a defeasible subset of predicate logics.

The Legal Ontology (Palmirani and Ceci 2012) creates an environment where the knowledge extracted from the decision’s text can be processed and managed in order to perform deeper reasoning on the interpretation instances grounding the decision itself. This reasoning is based on the argumentation model of the Carneades Argumentation System.Footnote 3 The framework is capable of creating argumentation graphs in favour of (pro) or against (con) a given legal statement, and not only when all the premises for the argument are accepted in the knowledge base: Carneades is in fact capable of suggesting incomplete arguments (Ceci and Gordon 2012), thus highlighting critical aspects of the case which were not taken into consideration by the judge (in the precedent case) or by the user (in the query). This means that, given a set of judicial decisions encoded in the OWL ontology, the program is capable not only of representing the argumentation path followed by the judge, but also of exploring possible alternative paths that lead to different outcomes.
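The notion of an incomplete argument can be sketched in a few lines. The following Python fragment is a toy rendering, not the Carneades implementation: the statements are invented, loosely inspired by the consumer-law sample, and the structure is deliberately minimal.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    conclusion: str
    premises: list
    direction: str = "pro"  # "pro" or "con"

def missing_premises(argument, accepted):
    """Premises of an argument not (yet) accepted in the knowledge base."""
    return [p for p in argument.premises if p not in accepted]

# Hypothetical knowledge base extracted from a decision's text
accepted = {"the clause is in a consumer contract",
            "the clause was not individually negotiated"}

arg = Argument(
    conclusion="the clause is unfair",
    premises=["the clause is in a consumer contract",
              "the clause was not individually negotiated",
              "the clause causes a significant imbalance"])

# An incomplete pro argument: surfacing the missing premise highlights
# a critical aspect not (yet) addressed by the judge or the user
print(missing_premises(arg, accepted))
```

Suggesting such missing premises is what turns the graph from a passive record of the decision into a tool for exploring alternative paths.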

2 Representing Argumentation Patterns

2.1 Introduction

Any standard devoted to the representation of legal rules should include a unification of the logic layer into a single language. Some argue that the various aspects of argumentation can be properly represented through logics: in particular, Governatori (2011) shows how the proof standards proposed in the Carneades framework correspond to some variants of defeasible logics, which could imply that an implementation of defeasible logics is able to compute the acceptability of arguments. However, this does not seem to be the case, in the light of the following considerations:

  • Logics provide abstract formulas to represent relationships between concepts. With refined tools such as defeasible logics it is possible to successfully represent the complex relations between legal rules, but can such tools manage the application of these rules, a fundamental step towards the computation of the acceptability of an argument? In theory they could, since substituting the abstract symbols in formulas with the values of the situation we want to compute should be an automatic process in which it does not matter which material concept is adopted as the interpretation of a given abstract symbol. For example, if we have a + b = c, we can interpret this simple rule as meaning many different things (for example a = 1, b = 2, c = 3, + = addition, or a = blue, b = yellow, c = green and + = mix) and this would not affect the truth function of the equation. In the concrete legal field, on the contrary, the single elements bring with them particular conditions (minor rules or meta-rules), assumptions, exceptions and values, which can significantly alter the outcome of the abstract formula representing the rules.

  • Most of the application scenarios of legal language are centered on dialogues with two or more parties, in which claims are made and competing arguments are put forward to support or attack these claims (this includes judgements, which are the focus of this paper, but also parliamentary debates and other legal acts). Following Walton, we recognize that there are several kinds of dialogues, with different purposes and different protocols (Fig. 32.2). This view of arguments as dialogues (or processes) contrasts with the mainstream, relational conception of argument in the field of computational models of argument, typified by Dung (1995), where argumentation is viewed not as a dialogical process for making justified decisions which resolve disputed claims, but as a method for inferring consequences from an inconsistent set of propositions. To see the difference between these conceptions, notice that a proposition that has not been attacked is acceptable in the relational model of argument, whereas in most dialogues a proposition that has not been supported by some argument is typically not acceptable, since most protocols place the burden of proof on the party that made the claim.

    Fig. 32.2: Argumentation use cases in Gordon and Walton (2009)
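The contrast between the two acceptability criteria can be stated compactly. The following Python sketch is a deliberate simplification (real Dungean semantics computes extensions over an attack graph, not set membership); it only illustrates where the burden of proof enters.

```python
def acceptable_relational(claim, attacked):
    """Dung-style relational view (simplified): a proposition is
    acceptable as long as it is not attacked."""
    return claim not in attacked

def acceptable_dialogical(claim, attacked, supported):
    """Dialogue view: the burden of proof lies on the claimant,
    so an unattacked but unsupported claim is still not acceptable."""
    return claim not in attacked and claim in supported

# An unattacked claim with no supporting argument:
claim = "the clause is unfair"
print(acceptable_relational(claim, attacked=set()))                    # True
print(acceptable_dialogical(claim, attacked=set(), supported=set()))   # False
```

The single extra conjunct in the dialogical version is exactly what a purely relational model omits.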

Before exploring those two themes, a preliminary presentation of argument schemes is necessary.

2.2 Argumentation Schemes and Critical Questions

An argumentation scheme is a pattern of reasoning used in everyday conversation as well as in other contexts, such as legal and scientific argumentation. Argumentation schemes serve the same purpose as their ancestors, the τόποι (topics) of Aristotle: they are useful for creating, evaluating and classifying arguments. In recent times, the Artificial Intelligence field has become increasingly interested in argumentation schemes, due to their potential for improving the reasoning capabilities of agents (Verheij 2003; Garssen 2001; Dung and Sartor 2011). Two functions of argumentation schemes can be distinguished in the legal field: as argument patterns useful for reconstructing arguments from natural language texts, and as methods for generating arguments from argument sources, such as legislation or precedent cases.

In argumentation theory, argumentation schemes are evaluated through a set of critical questionsFootnote 4 (CQs), specific to each scheme. Each question reveals possible weak points in the argumentation and, if not answered adequately, may render that specific argument useless in supporting the speaker’s position in the dialogue. Evidently, critical points in arguments should be formalized in a dialogical structure, in order to maintain the notion of defeasibility of every argument in the scheme, including those introduced to answer one or more critical questions. The example from Walton’s analysis of expert opinion (Walton 1997) perfectly illustrates this dialogical structure. The argument from expert opinion is modelled as follows:

Source E is an expert in domain D

E asserts that proposition A is known to be true (or false)

A is within D

Therefore, A may plausibly be taken to be true (or false).

As shown by experiments in social psychology, however, there is a tendency to defer to experts, sometimes without questioning, resulting in fallacious appeals to authority. Many circumstances can prevent the apparently deductive conclusion that “if E says A, then A is true”: in particular, epistemic closure in an expert field is far from guaranteed, so an expert can never be considered to know everything in a domain, nor can his or her opinion be taken as deductively true beyond challenge. Thus for many (if not all) appeals to expert opinion, the deductivist approach does not work. Critical questions are used to ease tensions between forms of argument that are clearly reasonable in some instances but cannot be analysed as deductively valid (Reed and Walton 2001). Walton (1997) identifies six basic critical questions matching the appeal to expert opinion:

  a. How credible is E as an expert source?

  b. Is E an expert in D?

  c. Does E’s testimony imply A?

  d. Is E reliable?

  e. Is A consistent with the testimony of other experts?

  f. Is A supported by evidence?

Please notice that, in many cases, asking one of the basic critical questions above will lead to critical sub-questions at a deeper level of examination. This is one way to create argumentation graphs (Gordon 2010).
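The two legal functions of a scheme (a template and a generator of arguments) can be sketched together. The following Python fragment is a toy rendering of the expert-opinion scheme above; the representation of schemes as dictionaries and the bindings are invented for illustration and do not reflect Carneades’ actual data structures.

```python
# The scheme as a template; the critical questions travel with it
EXPERT_OPINION_SCHEME = {
    "premises": ["{E} is an expert in domain {D}",
                 "{E} asserts that {A}",
                 "{A} is within {D}"],
    "conclusion": "{A} may plausibly be taken to be true",
    "critical_questions": [
        "How credible is {E} as an expert source?",
        "Is {E} an expert in {D}?",
        "Does {E}'s testimony imply {A}?",
        "Is {E} reliable?",
        "Is {A} consistent with the testimony of other experts?",
        "Is {A} supported by evidence?"],
}

def instantiate(scheme, **bindings):
    """Fill a scheme's templates; the critical questions become the
    open issues attached to the generated argument."""
    fill = lambda template: template.format(**bindings)
    return {"premises": [fill(p) for p in scheme["premises"]],
            "conclusion": fill(scheme["conclusion"]),
            "open_questions": [fill(q) for q in scheme["critical_questions"]]}

arg = instantiate(EXPERT_OPINION_SCHEME,
                  E="the expert witness", D="medicine", A="the drug is safe")
print(arg["conclusion"])
```

Each open question, once asked, can itself be answered by a further (defeasible) argument, which is how the sub-question chains mentioned above grow into graphs.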

2.3 The Procedural Aspects of Argumentation

Robert Alexy’s discourse theory of legal argumentation explains how judicial discretion can be restricted without resorting to mechanical jurisprudence or conceptualism. In the early works of AI & Law on the subject, argumentation was modelled as deduction in a non-monotonic logic, i.e. as a defeasible consequence relation. The Pleadings Game, introduced in (Gordon 1994), still uses non-monotonic logics, and in particular defeasible logics, to represent legal reasoning, but these logics have a procedural layer on top of them which treats the whole argumentation as a process, with a sequence of moves by the players, each affected by the preceding ones.Footnote 5

Following the mathematical model of Doug Walton’s philosophy of argumentation and Aristotle’s classification, Gordon in (Gordon and Walton 2009) describes argumentation as being divided into three layers: logic, dialectic, and rhetoric. While logic deals with the so-called relational aspects of argumentation, dialectic directly addresses its procedural aspects. In the light of this distinction, the claim that defeasible logics can manage the acceptability of arguments appears to be an effort to flatten the representation of the first two layers into mere logic, taking into consideration neither the difference of tasks evoked by different argumentation patterns (or, as they will be called from now on, argumentation schemes) nor the dialectical (or procedural) aspects of argumentation.

Walton’s argumentation theory identifies a sequence of stages in a dialogue-like argument, where at each stage some moves are allowed to the players (as in The Pleadings Game) and those moves influence the possibilities of further stages. In particular, the concept of stages of the argumentation process is fundamental for the allocation of the burden of proof, which brings us back to the consideration in Sect. 32.2.1 about the relationship between Dungean semantics and the dialogical conception of argumentation contained in (Walton 1998). Proposable exceptions, tacit acceptance, second-grade preclusions, irrelevance: logics alone, no matter how powerful, cannot properly evaluate the acceptability of such arguments if they cannot identify the stage of the process at which those arguments are introduced and consequently allocate the burden of proof correctly to one of the competing parties, and this in turn is not possible without a dialogical (or procedural) conception of argumentation. Defeasible logics can effectively manage complex interactions of rules, such as the concept of proof standards. But an argument is much more than just rules, and representing the tasks and patterns presented above by relying only on a set of rules would require a huge effort, and yet produce an undesirably complicated and ungovernable result. This is because these rules would have to simulate the dialogical characteristics of argumentation, which are very different from the relational ones.
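To make the notion of proof standards concrete, the following Python sketch is a strongly simplified rendering inspired by Carneades-style standards. The weights and thresholds are illustrative only; the real system evaluates applicability of arguments over a graph, not bare lists of numbers.

```python
def scintilla(pro, con):
    """Scintilla of evidence: at least one applicable pro argument."""
    return len(pro) > 0

def preponderance(pro, con):
    """The strongest pro argument outweighs the strongest con argument."""
    return max(pro, default=0.0) > max(con, default=0.0)

def beyond_reasonable_doubt(pro, con, alpha=0.8, gamma=0.2):
    """Preponderance, plus a strong pro argument and only weak cons.
    The thresholds alpha and gamma are invented for illustration."""
    return (preponderance(pro, con)
            and max(pro, default=0.0) >= alpha
            and max(con, default=0.0) <= gamma)

# pro/con hold the weights of the applicable arguments for a statement
print(preponderance([0.7], [0.4]))            # True
print(beyond_reasonable_doubt([0.7], [0.4]))  # False: pro below alpha
```

The point made in the text stands: which standard applies, and which arguments count as applicable, depends on the stage of the dialogue and on who bears the burden of proof, information the weights alone do not carry.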

In the Pleadings Game, argumentation was viewed procedurally, as dialogues regulated by protocols, but this was accomplished by building a procedural layer on top of a non-monotonic logic. LKIF abandons the relational interpretation of rules entirely, in favor of a purely procedural view, and is thus more in line with modern argumentation theory in logics (Prakken 1995), philosophy (Walton 2006) and legal theory (Alexy 1989). Argumentation, as Gordon (2008) puts it, “cannot be reduced to logic.”

In the Carneades Application, therefore, argumentation schemes are managed in a layer above rules. In this perspective, rules are just one of many sources for argument construction, along with ontologies (OWL) and cases (CATO), whose different logics and formats are translated and merged into an argument graph. The architecture used to instantiate these sources into argument schemes is presented in (Gordon 2011).

3 Two Examples

The AI & Law community uses famous US court precedents, such as Pierson versus Post in (Gordon and Walton 2006a) and Popov versus Hayashi in (Gordon and Walton 2012; Prakken 2012), as a test field for its theories. These demonstrations aim to show how to model arguments starting from the legal concepts, and how reliance on argument schemes and critical questions is necessary in order to reconstruct the original arguments and to evaluate them correctly. However, those tests do not pay attention to the connection of those concepts with the metadata contained in the source legal document. This is also the approach of Ashley’s seminal contributions to the subject: the systems presented in (Ashley 1991, p. 34) and (Aleven 2003) are in fact oriented to the teaching of argumentation in law classes, rather than to performing automatic reasoning on the metadata contained in legal documents.

The approach of the present research is more practical, as described in Sect. 32.1, and this approach is also kept in finding evidence of the need to model the procedural aspects of argumentation in the emerging rule standards. Modelling argument schemes seems the only viable way to properly perform legal reasoning, and for this purpose the concept of argument scheme should include templates which represent procedural aspects of legal processes (such as the acts available to the parties during a trial). The two examples that follow are in fact taken from the sample of 27 decisions concerning consumer contracts which constitutes the knowledge base of the research described in Sect. 32.1.

3.1 First Example

The first example is the decision issued on October 31, 2006 by the First Section of the Tribunal of Salerno, concerning the acceptability of an arbitration clause contained in a public statute (the statute of the Italian Football Federation). The argument put forward by the defender is that the judge lacks competence, since the litigation had to be settled by means of an arbitration following article 24 of the statute. The argument, however, was presented to the court only at a late stage of the trial. The judge therefore specifies that, if the claim were formally qualified as a request for competence regulation (as the defender himself defined it), it would be inadmissible, since such claims can be presented only in the early stages of the trial. The judge, however, decides that the claim concerns the object of the trial, not a competence regulation. The claim is therefore acceptable, and the judge declares his lack of competence in favour of the arbitration court indicated in the statute.

Without resorting to argumentation schemes, representing this situation would require abstract structures for rules, which would stray from the original structure of the multi-logical process. To provide an example of this, Figs. 32.3 and 32.4 show how this information can be captured in LKIF and Clojure: the LKIF syntax does not allow for dynamicity, and the sentences are manually applied to the rules during the argumentation modelling, while with Clojure the expressiveness can be enhanced by including dynamicity in the rules, Boolean operators, and also a meta-scheme of the argument to be applied. This is a typical example of how procedural aspects of legal argumentation can influence the outcome of a claim; the application of rules (and more generally the logic layer) therefore has to take these aspects into account in order to achieve a correct evaluation of the acceptability of arguments.

Fig. 32.3: LKIF representation of the first example

Fig. 32.4: Representation of the first example in the Clojure language
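Since Figs. 32.3 and 32.4 are not reproduced here, the procedural meta-rule of the Salerno decision can be rendered in a rough Python sketch (not the actual LKIF or Clojure code): the admissibility of the defender’s claim depends jointly on how the claim is qualified and on the stage of the trial at which it is raised.

```python
def admissible(claim_qualification, stage):
    """Simplified procedural meta-rule: a request for competence
    regulation is admissible only in the early stages of the trial,
    while a claim on the object of the trial is not so restricted."""
    if claim_qualification == "competence-regulation":
        return stage == "early"
    return True

# Under the defender's own qualification the late claim fails...
print(admissible("competence-regulation", "late"))  # False
# ...but the judge requalifies it as concerning the object of the trial
print(admissible("object-of-trial", "late"))        # True
```

Note that no purely relational rule over the clause’s content could reproduce this outcome: the decisive inputs (qualification and stage) belong to the dialogue, not to the rule.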

3.2 Second Example

A second example shows how arguments, even arguments from legal rules, can be introduced in the judgement for tasks different from that of applying the rule contained in the legal norm. In the decision given by the Tribunal of Rovereto on July 13, 2006, an article of the Civil Code concerning oppressive clauses (which lists such clauses by subject, and considers as oppressive all clauses introducing “a limitation in concluding certain contracts with third parties”) is used as an argument to prove that “there is a general disfavor in the system towards all pacts introducing limitations to competition.” The argument from the oppressive clause is used together with the argument from article 81 of the EC Treaty, which explicitly forbids such pacts.

We can see how, in this case, the article of the Civil Code is evoked in the decision’s text, and must therefore be marked up and linked to the text of the law. But how do we tell the reasoner that this rule doesn’t have to be used for its general purpose (which is defining an oppressive clause) but rather for the purpose of supporting the statement that “there is a general disfavor in the system towards pacts introducing limitations to competition”? This can be done only by defining a framework for argumentation and by modeling argumentation schemes. In the example, the argument involving the article of the Civil Code would not be an argument from legal rules, but rather an argument from authority, and therefore the article of the Civil Code would not be transformed into an argument by translating the logic form of the rule it expresses, but rather by referring to the authority of the Civil Code and of the institution that issued it (which in this case is the Italian Parliament).
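The distinction can be sketched as follows. The dictionary fields and scheme names below are invented for illustration; they are not part of LKIF, LegalRuleML, or the Carneades implementation.

```python
def build_argument(source, scheme):
    """Construct an argument from the same legal source under two
    different schemes (hypothetical structures, for illustration)."""
    if scheme == "argument-from-legal-rules":
        # translate the logical form of the rule itself
        return {"premises": source["conditions"],
                "conclusion": source["effect"]}
    if scheme == "argument-from-authority":
        # rely on the authority of the issuing institution instead
        return {"premises": [source["issuer"] + " disfavors " + source["topic"]],
                "conclusion": "general disfavor towards " + source["topic"]}
    raise ValueError("unknown scheme: " + scheme)

civil_code_article = {
    "conditions": ["clause limits concluding contracts with third parties"],
    "effect": "the clause is oppressive",
    "issuer": "the Italian Parliament",
    "topic": "pacts limiting competition",
}

print(build_argument(civil_code_article, "argument-from-authority")["conclusion"])
```

The same source thus yields two different arguments; only a scheme-aware framework can tell the reasoner which one the judge actually made.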

Conclusions: The Need for a Standard in Legal Reasoning

The present paper has focused on the logic layer of the Semantic Web stack, arguing that in order to properly process legal knowledge it is necessary to account not only for deontic and defeasible extensions of logics, but also for argumentation schemes. Among the existing standards, LegalRuleML (Palmirani et al. 2011) includes most of the features required to represent legal rules, and thus represents an improved standard language in comparison to LKIF-Rules. It can be taken as a cornerstone for the requirements that a reasoner must meet in order to manage legal reasoning. However, the LegalRuleML TC has not yet introduced into its language support for concepts of argumentation theory such as argumentation schemes and critical questions. An extension of the rule language in that direction would make it possible to provide a standard set of metadata and logical operators for the reasoning layer to apply the state of the art of legal argumentation theory. This engine could in turn consist of a standard set of libraries to be implemented in existing engines in order to introduce a complete management of defeasibility, deontics, temporal dimensions and argumentation schemes. The intention, in the upcoming research in this regard, is to rely on a Drools-based application under construction at CIRSFID (Palmirani et al. 2012) and on NICTA’s SPINdle (Lam and Governatori 2009), both based on LegalRuleML.