3.1 Types of Arguments

In court, lawyers seek to persuade the adjudicator. By contrast, their attitude towards the other party is conflictual and eristic: they do not expect to persuade the other party while in court. If, however, a settlement out of court is sought, then a solution to the conflict of interests is pursued by negotiation. Walton and Krabbe (1995, p. 66) proposed a classification of the dialogues through which argumentation unfolds; see Table 3.1.1. Figure 3.1.1 shows how to determine the type of a dialogue (Walton & Krabbe, 1995, p. 81).

Fig. 3.1.1 Determining the type of a dialogue (based on Walton & Krabbe, 1995, p. 81; cf. Wooldridge, 2002, p. 156)

Table 3.1.1 Typology of argumentation (based on Walton & Krabbe, 1995, p. 66; cf. Wooldridge, 2002, p. 155)
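The decision procedure of Fig. 3.1.1 lends itself to being encoded as a small function. The following Python sketch paraphrases the flowchart's questions, based on the Walton and Krabbe typology as rendered by Wooldridge (2002); the predicate names and the exact branching order are this sketch's assumptions, not a transcription of the figure.

```python
# A hedged sketch of the dialogue-typing decision procedure (Fig. 3.1.1).
# The question wording and the branch order are paraphrased assumptions.

def dialogue_type(is_conflict: bool, resolution_sought: bool,
                  settlement_sought: bool, common_problem: bool,
                  theoretical: bool) -> str:
    """Classify a dialogue by its initial situation and its main goal."""
    if is_conflict:
        if resolution_sought:
            return "persuasion"        # conflict of points of view, to be resolved
        if settlement_sought:
            return "negotiation"       # conflict of interests, a deal is sought
        return "eristics"              # mere quarrelling, no resolution sought
    if common_problem:
        # an open problem tackled jointly by the participants
        return "inquiry" if theoretical else "deliberation"
    return "information seeking"       # one party has what the other lacks

# Section 3.1's courtroom example: towards the adjudicator the dialogue is
# persuasion; between the parties it is eristic; an out-of-court settlement
# turns it into negotiation.
print(dialogue_type(is_conflict=True, resolution_sought=True,
                    settlement_sought=False, common_problem=False,
                    theoretical=False))    # -> persuasion
```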

MacCormick (1995, pp. 467–468) defined argumentation as follows:

Argumentation is the activity of putting arguments for or against something. This can be done in speculative or in practical contexts. In purely speculative matters, one adduces arguments for or against believing something about what is the case. In practical contexts, one adduces arguments which are either reasons for or against doing something, or reasons for or against holding an opinion about what ought to be or may be or can be done.

“A reason given for acting or not acting in a certain way may be on account of what so acting or not acting will bring about. Such is teleological reasoning. All teleological reasoning presupposes some evaluation” (p. 468). “Deontological reasoning appeals to principles of right or wrong, principles about what ought or ought not to be or be done, where these principles are themselves taken to be ultimate, not derived from some form of teleological reasoning” (p. 468).

“Robert Summers [(1978)] has proposed the term ‘reasons of substance’ or ‘substantive reasons’ as a name for those reasons that have practical weight independently of authority” (MacCormick, p. 468). MacCormick discusses “institutional argumentation applying ‘authority reasons’ as grounds for legal decision” (p. 467), and “explores the three main categories of interpretative argument”, namely, linguistic arguments, systemic arguments, and teleological and deontological arguments (p. 467).

Systemic arguments are kinds of “arguments which work towards an acceptable understanding of a legal text seen particularly in its context as part of a legal system” (p. 473), e.g., the argument from precedent, the argument from analogy, and so forth.

3.2 Wigmore Charts vs. Toulmin Structure for Representing Relations Among Arguments

3.2.1 Preliminaries

John Henry Wigmore (1863–1943) was a very prominent exponent of legal evidence theory (and of comparative law) in the United States. A particular tool for structuring argumentation graphically, called Wigmore Charts and first proposed by Wigmore in the Illinois Law Review (Wigmore, 1913), has thus been in existence for the best part of the twentieth century; yet it was basically ignored among legal scholars (notwithstanding Wigmore’s acknowledged prominence in other respects) until it was resurrected in the 1980s.

Wigmore Charts are a handy tool for organising a legal argument, or, for that matter, any argument. They are especially suited for organising an argument based on a narrative. Among legal scholars, Wigmore Charts were “revived” by Terence Anderson and William Twining (1991); a preliminary circulation draft of that book was already in existence in 1984, and includes (to quote the draft’s subtitle) “text, materials and exercises based upon Wigmore’s Science of Judicial Proof” (Wigmore, 1937). Anderson (1999a) discusses an example, making use of a reduced set of symbols from his modified version of Wigmore’s original chart method.

David Schum (2001) made use of Wigmore Charts while introducing MarshalPlan, his and Peter Tillers’ prototype of a computer tool for preparing a legal case: a hypertext tool whose design had already been described in 1991, of which a prototype was being demonstrated in the late 1990s, and which currently makes use of the Revolution development software. Also see Schum (1993), on how to use probability theory with Wigmore Charts.

In computer science, in order to represent an argument, it is far more common to find Toulmin’s argument structure (Toulmin, 1958) in use, possibly charted. See Figs. 3.2.1.1, 3.2.1.2, and 3.2.1.3.

Fig. 3.2.1.1 Toulmin’s structure of argument: the abstract schema

Fig. 3.2.1.2 Toulmin’s structure of argument. An example drawn (with modifications) from a talk given by Uri Schild in Glasgow in 2002

Fig. 3.2.1.3 Toulmin’s structure of argument. An example drawn (with modifications) from a talk given by Uri Schild in Glasgow in 2002

Two or more arguments may be related to each other, in a Toulmin chart, through the overlapping of one of the elements of Toulmin’s structure. Basically, the use of Wigmore Charts and that of Toulmin’s structure are equivalent, but Schum argues strongly in favour of the former. Some AI & Law scholars, such as John Zeleznikow, use Toulmin’s structure, whereas Henry Prakken, when working on evidence, uses Wigmore Charts.
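Toulmin’s structure is also easily rendered as a record type. The following minimal Python sketch uses Toulmin’s own well-known “Harry is a British subject” example rather than the Schild examples of the figures (which are not reproduced here); the helper showing how two arguments become related through an overlapping element is this sketch’s illustration, not an established formalism.

```python
# A minimal sketch of Toulmin's (1958) argument structure as a record type.
# Field names follow Toulmin's standard terminology.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ToulminArgument:
    claim: str                       # the conclusion being argued for
    data: List[str]                  # grounds: the facts appealed to
    warrant: str                     # the rule licensing the step data -> claim
    backing: Optional[str] = None    # support for the warrant itself
    qualifier: Optional[str] = None  # e.g. "presumably", "almost certainly"
    rebuttal: Optional[str] = None   # conditions of exception

def shared_elements(a: ToulminArgument, b: ToulminArgument) -> List[str]:
    """Two Toulmin arguments are related when an element of one (say, its
    claim) reappears as an element (say, a datum) of the other."""
    elements_a = set(a.data) | {a.claim, a.warrant}
    elements_b = set(b.data) | {b.claim, b.warrant}
    return sorted(elements_a & elements_b)

a1 = ToulminArgument(claim="Harry is a British subject",
                     data=["Harry was born in Bermuda"],
                     warrant="A man born in Bermuda will generally be a British subject",
                     qualifier="presumably",
                     rebuttal="unless both his parents were aliens")
a2 = ToulminArgument(claim="Harry may vote in British elections",
                     data=["Harry is a British subject"],
                     warrant="British subjects are entitled to vote")
print(shared_elements(a1, a2))  # ['Harry is a British subject']
```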

3.2.2 The Notation of Wigmore Charts

Consider, in Toulmin’s structure from Fig. 3.2.1.1, how a rebuttal to a claim is notated. Anderson’s modified Wigmore Charts resort to an “open angle” to identify an argument that provides an alternative explanation for an inference proposed by the other party to a case. An empty circle (which can be labelled with a number) stands for circumstantial evidence or an inferred proposition, whereas an empty square stands for a testimonial assertion. For example, proposition 2 being a rebuttal of proposition 1 is notated as shown in Fig. 3.2.2.1.

Fig. 3.2.2.1 Argument 2 attacks argument 1 in Wigmore Chart notation

Had the open angle been closed, i.e., a triangle, it would have stood for an argument corroborating the inference. The triangle, however, is not the most usual way to indicate what an inference is based upon. Rather, in order to indicate the relation between a factum probans (supporting argument) and a factum probandum (what it is intended to prove), that relation is notated as a line with an arrow directed from the former to the latter. See the upper row in Table 3.2.2.1, whose remainder shows how to notate other kinds of relations between arguments.

Table 3.2.2.1 Various relations in Wigmore Chart notation

An infinity symbol notated near something indicates that it is sensory evidence (testimonial assertions being heard, or real evidence that will be perceived in court with other senses). A paragraph symbol ¶ notated near a circle stands for “facts the tribunal will judicially notice or otherwise accept without evidential support” (Anderson, 1999a, p. 57), whereas G near a circle stands for a commonsensical background generalisation “that is likely to play a significant role in an argument in a case, but that is not a proposition that will be supported by evidence or that the tribunal will be formally asked to notice judicially” (ibid., p. 57).
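The notation just described can be captured as a small vocabulary of node kinds, markers, and relation kinds. The following Python encoding is this sketch’s own (no standard machine format for Wigmore Charts is implied); it ends by encoding Fig. 3.2.2.1, in which proposition 2 rebuts proposition 1.

```python
# A small sketch of Anderson's modified Wigmore notation as data.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class NodeKind(Enum):
    CIRCLE = "circumstantial evidence or inferred proposition"
    SQUARE = "testimonial assertion"

class Marker(Enum):
    INFINITY = "sensory evidence perceivable by the tribunal"
    PILCROW = "judicially noticed, or accepted without evidential support"
    G = "commonsensical background generalisation"

class RelationKind(Enum):
    SUPPORT = "arrow from factum probans to factum probandum"
    CORROBORATION = "closed angle (triangle) beside the corroborated inference"
    COUNTERARGUMENT = "open angle: alternative explanation by the other party"

@dataclass
class Node:
    number: int
    text: str
    kind: NodeKind = NodeKind.CIRCLE
    markers: List[Marker] = field(default_factory=list)

@dataclass
class Relation:
    kind: RelationKind
    sources: List[int]   # factum probans (one or more nodes)
    target: int          # factum probandum

# Fig. 3.2.2.1: proposition 2 rebuts proposition 1.
p1 = Node(1, "proposition 1")
p2 = Node(2, "proposition 2")
fig_3_2_2_1 = Relation(RelationKind.COUNTERARGUMENT, sources=[2], target=1)
```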

3.2.3 A Wigmorean Analysis of an Example

3.2.3.1 The Case, the Propositions, and the Wigmore Chart

In this subsection, a Wigmorean analysis is given for an example of reasoning about the evidence supporting or disconfirming an accusation. It is not from the judiciary: a boy is accused of having taken and eaten sweets without his mother’s permission. On the face of it, one would think it a trivial matter. Yet the argumentation is articulate, and deserves a Wigmore Chart.

Let us develop a Wigmorean analysis for an invented case. What is special about this case is that the context is informal: a boy, Bill, is charged with having disobeyed his mother by eating sweets without her permission. The envelopes of the sweets have been found strewn on the floor of Bill’s room. Bill tries to shift the blame to his sister, Molly. The mother acts as both prosecutor and factfinder: it is she who will give a verdict. Dad is helping in the investigation, and his evidence, which may be invalid, appears to exonerate Bill. It is based on testimony which Dad elicited from Grandma (Dad’s mother), who was asked to confirm or disconfirm an account of the events given by Bill, an account which involves Grandma giving him permission to eat the sweets and share them with Molly.

Grandma’s evidence is problematic: Dad’s approach to questioning her was confirmationist. Grandma received from Dad a description of the situation. She may be eager to spare Bill punishment; perhaps this is why she is confirming his account. Yet, for Mum to suggest that the truthfulness of her mother-in-law’s testimony is questionable is politically hazardous, and potentially explosive.

Key list of Fig. 3.2.3.1.1: Circles are claims or inferred propositions. Squares are testimony. An infinity symbol associated with a circle signals the availability of evidence whose sensory perception (which may be replicated in court) is other than listening to testimony. An arrow reaches the factum probandum (which is to be demonstrated) from the factum probans (evidence or argument) in support of it, or possibly from a set of items in support (in which case the arrow has one target, but two or more sources). A triangle is adjacent to the argument in support for the item reached by the line from the triangle. An open angle identifies a counterargument, instead.

Fig. 3.2.3.1.1 A Wigmore Chart for the example of Bill and Molly

A Wigmore Chart is given in Fig. 3.2.3.1.1, showing the argumentational relations between the propositions listed below; a small computational sketch follows the list.

1. Bill disobeyed Mum.

2. Mum had instructed her children, Bill and Molly, not to eat sweets, unless they are given permission. In practice, when the children are given permission, it is Mum who is granting it.

3. Bill ate the sweets.

4. Many envelopes of sweets are strewn on the floor of Bill’s room.

5. It was Molly, not Bill, who ate the sweets whose envelopes were found in Bill’s room.

6. Bill says it was Molly who ate the sweets and placed the envelopes in his room, in order to frame him.

7. Molly is very well-behaved.

8. Bill would not have left around such damning evidence, implicating him as being the culprit.

9. The envelopes were very conspicuously strewn on the floor of Bill’s room.

10. Medical evidence suggests that Bill ate the sweets.

11. Bill’s teeth are aching, the reason being that he ate the sweets.

12. Bill has bad teeth.

13. Bill’s teeth are aching at the time the charge against him is being made.

14. Bill says that his teeth were already aching on the previous two days.

15. Mum is a nurse, and she immediately performed a blood test on Bill, and found an unusually high level of sugar in his bloodstream.

16. If there was a mix-up, then Molly is the culprit, not Bill.

17. Bill rang up Dad and claimed that he had insisted that Mum test Molly’s blood as well, not only his own, and that Mum did so, but must have mixed up the results of the two tests.

18. Mum tested both Bill and Molly for sugar in their bloodstream, and both of them tested positive.

19. Molly says she only ate sweets because Bill was doing so and convinced her to do likewise.

20. Bill was justified in eating the sweets.

21. Bill rang up Dad, related to him his version of the situation, and claimed to him that Grandma had come on a visit, and, while having some sweets herself, instructed Bill to the effect that both Bill and Molly should also have some sweets, and Bill merely complied.

22. Dad’s evidence confirms that Bill had Grandma’s permission.

23. Dad rang up Grandma, and she confirmed that she gave Bill the permission to take and eat the sweets.

24. Dad’s evidence is not valid, because Dad told Grandma about Bill’s predicament, and Grandma wanted to save Bill from punishment.

25. What Dad admitted confirms that his way of questioning Grandma may have affected whether she was being sincere.

26. Dad confirms that he told Grandma about Bill’s predicament, and didn’t just ask her whether she had come on a visit first, and next, whether sweets were being had.

27. Dad: “How dare you question Grandma’s sincerity?!”.
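To illustrate how such a chart can be processed, the following Python sketch encodes a handful of support and attack relations that are apparent from the propositions themselves; the full chart of Fig. 3.2.3.1.1 is richer, so the particular edges chosen here should be taken as this sketch’s assumptions. The traversal collects the evidential basis offered for the ultimate probandum, proposition 1.

```python
# A purely illustrative fragment of the chart of Fig. 3.2.3.1.1, re-coded as
# plain support and attack edges between proposition numbers.

# edge (source, target): source is the factum probans, target the probandum
supports = {(2, 1), (3, 1),      # the instruction and the eating prove disobedience
            (4, 3),              # envelopes on the floor suggest Bill ate the sweets
            (10, 3), (11, 10), (13, 11), (15, 10),   # the medical line of proof
            (6, 5)}              # Bill's testimony backs the Molly hypothesis
attacks  = {(5, 3),              # "it was Molly" attacks "Bill ate the sweets"
            (7, 5),              # Molly's good character attacks the Molly hypothesis
            (14, 11)}            # "teeth already aching" attacks the toothache inference

def chain_to(probandum: int) -> set:
    """All propositions from which the probandum is reachable by following
    support arrows upwards (the direction of inference in a Wigmore Chart)."""
    frontier, seen = {probandum}, set()
    while frontier:
        node = frontier.pop()
        seen.add(node)
        frontier |= {s for (s, t) in supports if t == node} - seen
    return seen - {probandum}

print(sorted(chain_to(1)))   # evidential basis offered for "Bill disobeyed Mum"

under_attack = {t for (s, t) in attacks}
print(sorted(under_attack & (chain_to(1) | {1})))  # contested links in that chain
```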

3.2.3.2 Considerations About the Situation at Hand

Proposing that excessive benevolence and leniency towards one’s grandchildren is typical behaviour for grandmothers is an example of a background generalisation. This could have been one more proposition in the list. “Children are fond of sweets” and “Children are less likely to resist temptation for something they crave” are other generalisations. “Molly is very well-behaved” is an example of character evidence, and is related to a background generalisation: “A person who on record is very well-behaved is unlikely to be a perpetrator (if a suspect), or to be an offender again (if guilt is proven, but extenuating circumstances are invoked)”.

In turn, a counterargument against this generalisation is yet another generalisation, conveyed by the English proverb “Who has once the fame to be an early riser, may sleep till noon” and equivalent proverbs from other languages (Arthaber, 1929, §476), or the more explicit Latin proverb Saepe habet malus famam boni viri (“Oftentimes, one who is wicked has a reputation of being an honest man”; cf., in the Italian dialect of Bergamo, also quoted in Augusto Arthaber’s comparative anthology: Se ’n balos l’è stimat bu, / Che ’l fassa mal, no i cred nissu – “If a despicable fellow is deemed good, / Even if he does evil, nobody would believe it”). Proverbs belong in folklore, yet they encapsulate generalisations or, as in the latter case, a caveat. Also consider the English proverb “The best horse will sometimes stumble”, which is more charitable towards Molly. Generalisations are dangerous in a judiciary context, if they are implicit and assumed uncritically.

Note that Molly is not necessarily lying, even if Grandma actually gave Bill permission for both Bill and Molly, not in Molly’s presence. Molly may simply have been suspicious of Bill’s sincerity. It may be that she topped this up by littering his room with sweets envelopes, in order to have him singled out as the one responsible. Or then, Bill may have littered his room unthinkingly.

Some inconsistency in Bill’s reports is not necessarily fatal for his case. Bill’s insisting on Mum’s testing Molly’s blood as well may be out of his desire for equal treatment as a suspect, or then out of vindictiveness and spite (children hate being pricked with a needle, and are less stoic than adults in this respect, so being pricked is already penalising for Bill, and he would want the inconvenience shared by Molly, too, even in case she is innocent).

Importantly, Mum’s having tested both Bill’s and Molly’s blood enables Bill to claim that there was a mix-up; yet, in case he is guilty, this trick only makes sense if he didn’t expect Molly, too, to have a high sugar level. Whereas their both testing positive, in as much as they both have a high sugar level, suggests that both Bill and Molly ate sweets, perhaps Bill was under the impression that Molly was reluctant to partake. She may even have succumbed to the temptation of the sweets at a later stage (not when she was witnessing, or perhaps was approached by, Bill about the sweets), of which Bill is unaware.

“How dare you question Grandma’s sincerity?!” is an example of a political consideration about the evidence. In the American law of evidence, rules of extrinsic policy (in Wigmore’s terminology) are a category of exclusionary rules (i.e., rules excluding or restricting the use of admitted evidence), such that they give priority to other values over rectitude of decision. These are rules which are not so much directed at ascertaining the truth, but rather which serve the protection of personal rights and secrets. For a discussion of evidential rules and the judicial role in criminal trials, see Stein (2000).

Nissan (forthcoming b) is a very extensive analysis, in ca. 800 propositions and 100 Wigmore Charts, of the argumentation of the closing speech to the bench (by a barrister who is also among Italy’s most prominent forensic psychologists), in February 2006, at a trial on recovered memories. The treatment is so detailed because it was done a posteriori, on the transcription of a long speech with features typical of orality. Incorporating in the Wigmore Charts not only the logical structure, but also the rhetorical tactics, is novel. It must be said that such an extensive analysis is warranted for rhetorical studies, whereas practical Wigmore Charts intended for preparing a case in court can be expected to be much more contained.

3.2.4 Another Example: An Embarrassing Situation in Court

3.2.4.1 An Episode During a Trial

We are going to analyse the situation described in the following report from the free newspaper Metro London of Friday, January 21, 2000, p. 3, col. 5 (punctuation is reproduced without modification):

Lawyer: My dog ate the evidence

AS mitigation goes, barrister Stephen Rich knew it was going to sound pretty lame.

When the defence counsel arrived at Newcastle Crown Court without vital video evidence for a criminal trial he told the judge: ‘The dog ate it, m’lud’.

The schoolboy excuse received the same cool response from judge David Hodson as it has from generations of teachers and the case was adjourned.

Mr Rich, 58, whose bull mastiff, Nalla, devoured the tape after he left a box of evidence unattended, said: ‘It was very embarrassing and the judge didn’t seem too impressed.’

Fortunately, for Mr Rich, the video came from closed-circuit TV and he was able to get another copy.

This story is amenable to interesting analysis, because of the role that background generalisations play in it. Namely, the explanation the defence lawyer gave for the missing evidence was suspiciously all too similar to the classical schoolboy’s excuse that his dog ate his homework. I have devoted a book, All the Appearance of a Pretext (Nissan, forthcoming a), to that archetype, as well as a paper, ‘The Dog Ate It’, in The American Journal of Semiotics (Nissan, 2011f). In the situation at hand, there is a mapping (Fig. 3.2.4.1.1) between patterns (the awkward real-life episode from the courtroom in Newcastle, and the cultural expectation of the classroom situation of a pupil making excuses), a mapping activated by the claim made by the defence barrister: an event which unwittingly evoked the archetypal situation of a pupil making excuses about his or her homework having been eaten by the pupil’s pet dog (see Fig. 3.2.4.1.2, cf. Fig. 3.2.4.1.3).

Fig. 3.2.4.1.1 The episode in the courtroom in Newcastle was unwittingly evocative of the archetypal situation of a pupil blaming his dog for his missing homework

Fig. 3.2.4.1.2 The archetypal situation of which the explanation given by the barrister in Newcastle was unwittingly evocative

Fig. 3.2.4.1.3 The explanation given by the defence barrister in Newcastle concerning the missing evidence

3.2.4.2 The Propositions and Their Wigmore Charts

In this subsection, we list the propositions representing the arguments involved in the episode from the courtroom in Newcastle, and we provide Wigmore Charts that capture the relations between those propositions.

1. Rich left a box of evidence unattended at home.

2. The tape was inside the box.

3. Nalla ate the tape.

4. The tape was destroyed.

5. The tape is no longer available.

6. Nalla did access the box.

7. Nalla could access the box.

8. Being able to access the box, while the tape was inside, would enable access to the tape, if the box is not safely closed.

9. Nalla is a dog.

10. Nalla is Rich’s dog.

11. Nalla lives at Rich’s home.

12. Pet dogs typically live at their owner’s home.

13. A tape is not edible for dogs.

14. Dogs sometimes chew inedible things.

15. The dog could not conceivably swallow the box.

16. The box is too large.

17. It is unnecessary for the dog to swallow the box, for it to destroy the tape.

18. The box was not safely closed.

19. Did Nalla digest the tape? (a possible objection).

20. It is unnecessary for the dog to have fully digested the tape.

21. It is enough for the tape to be destroyed, that the dog would chew and damage it beyond repair.

22. Did Rich try to repair the tape? (a possible objection).

23. It is unnecessary for Rich to have actually tried to repair the tape.

24. Rich would be able to assess at sight the unrecoverability of the tape’s functionality.

25. Rich is a barrister.

26. Rich was the defence counsel of a criminal suspect.

27. The box contained evidence for the defence at the given trial.

28. Evidence is necessary for a party to a trial to seek a favourable factfinding.

29. The unavailability of defence evidence which previously existed weakens the prospects of the defence.

30. The destroyed tape is no longer available for the defence.

31. The tape contained video evidence which was vital for the defence.

32. The defence case was harmed, if the video evidence could not be presented.

33. Rich had to justify in court why evidence announced was now unavailable.

34. Rich explained that his dog had eaten that piece of evidence.

35. Rich told the judge: “The dog ate it, m’lud”.

The propositions given thus far are organised in Figs. 3.2.4.2.1, 3.2.4.2.2, 3.2.4.2.3, 3.2.4.2.4, 3.2.4.2.5, 3.2.4.2.6, 3.2.4.2.7, and 3.2.4.2.8. Figure 3.2.4.2.2 is a refinement with respect to Fig. 3.2.4.2.1, and can replace it. Whereas Fig. 3.2.4.2.1 only considers the assertions 1–5, in Fig. 3.2.4.2.2 the assertions involved are 1–12. Figure 3.2.4.2.3 shows a possible objection (assertion 13, objecting to assertion 3), and an objection to the objection: assertion 14 retorts to assertion 13, and thus corroborates assertion 3. Note that Fig. 3.2.4.2.3 is contained in Fig. 3.2.4.2.5.

Fig. 3.2.4.2.1 A preliminary argument-structure for assertions 1–5

Fig. 3.2.4.2.2 A refinement of the argument-structure, for assertions 1–12

Fig. 3.2.4.2.3 A possible objection, and its refutation

Fig. 3.2.4.2.4 (a) One more possible objection, and its refutation. (b) An enhanced Wigmore Chart, replacing (a)

Fig. 3.2.4.2.5 A refinement of the reasoning of Fig. 3.2.4.2.3

Fig. 3.2.4.2.6 How could Rich be sure that the tape had become useless?

Fig. 3.2.4.2.7 Effect of the video evidence being unavailable

Fig. 3.2.4.2.8 What the barrister did in court

The Wigmore Chart of Fig. 3.2.4.2.4 is given in two variants, (a) and (b). The latter makes it possible to avoid repeating the same node of the graph twice. In fact, the node labelled 1 (because it stands for assertion 1) has been put in the middle, between node 16 and node 18, so that the two-pronged arrow whose sources are nodes 16 and 1, as well as the two-pronged arrow whose sources are nodes 1 and 18, can share node 1 graphically. This also illustrates the convention that the order of the nodes from left to right, in a two-pronged or multi-pronged arrow, does not matter. It does matter, instead, that arrows go upwards, and never downwards, so that one can see at a glance what the direction of the inference is.
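The conventions just described – unordered sources of a multi-pronged arrow, and a node stored once however many arrows share it – can be mirrored by giving each support link a set of sources. In the following Python sketch the targets assigned to the two arrows are hypothetical, since the figure itself is not reproduced here.

```python
# A sketch of conjunctive (multi-pronged) support links. Node sharing falls
# out for free because each node number is stored once.

from dataclasses import dataclass
from typing import FrozenSet

@dataclass(frozen=True)
class SupportLink:
    sources: FrozenSet[int]   # all prongs of the arrow; order is immaterial
    target: int               # arrows always point upwards in the chart

# Node 1 ("Rich left the box unattended") is shared by both arrows of
# Fig. 3.2.4.2.4(b); the targets below are this sketch's guesses.
link_a = SupportLink(frozenset({16, 1}), target=15)
link_b = SupportLink(frozenset({1, 18}), target=7)

# The left-to-right order of the sources does not matter:
assert SupportLink(frozenset({1, 16}), target=15) == link_a
```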

Let us continue listing the propositions concerning the episode in Newcastle:

36. When people don’t manage to get it their way, they may be prone to resort to pretexts.

37. “The dog ate it” is a famous pretext.

38. “The dog ate it” is a pretext typically associated with pupils who didn’t do their homework.

39. “The dog ate it” is a suspicious excuse, for a pupil to use.

40. “The dog ate it” is a very poor excuse for grown-ups to use, if their aim is to be believed.

41. The judge took a dim view of the barrister claiming that his dog had eaten that important piece of evidence.

Figure 3.2.4.2.9 shows a Wigmore Chart with the argument structure of propositions 36–41.

Fig. 3.2.4.2.9 The effect of the claim about the missing evidence

We could further represent (though we are not going to do so here) the argument that the loss of the tape was due to force majeure, thus beyond the control of the barrister, as well as one more factor: a barrister is expected to be careful with the evidence in his or her care. Therefore, for a dim view to emerge subsequent to the claim being made that the dog ate the evidence, the contributing factors include this being a culturally canonical typification of a poor excuse, as well as a consideration about professionalism. Fortunately for Mr. Rich, the loss was not irretrievable. Let us continue listing the propositions:

42. Later on Rich was able, within the timescale of the proceedings, to produce another copy of the video evidence.

43. The tape that was destroyed was only one copy of that given video sequence.

44. Another copy of the video evidence was in existence.

45. Being able to produce the video evidence “saved the day” for the defence, i.e., the effect was as though the evidence had been there with no delay.

46. There was no lasting negative impact of Rich making the suspicious claim about why the evidence was unavailable.

47. As the evidence was there after all, there is little reason to believe that Rich had made up the story of his dog having eaten the evidence.

Propositions 41–47 are structured in the Wigmore Chart of Fig. 3.2.4.2.10.

Fig. 3.2.4.2.10 Saving the day

Let us go on listing the propositions:

48. That Rich would claim that his dog had eaten the missing evidence had been very suspicious.

49. It was at least as likely as not that such evidence had never existed, or that a tape existed with evidence much less helpful than claimed.

50. Suppose that there was no such vital evidence in the first place.

51. The defence would have referred to it as though it had been in existence, as this would have hopefully been useful for its case.

52. Once unable to produce it, the defence could still hope for some benevolence on the part of the factfinder.

53. The judiciary may resist the hypothesis that a barrister would allow himself such misconduct as deliberately telling an outright lie.

54. A barrister is likely to be aware of such reluctance.

55. After all, at modern trials, factfinding depends on what factfinders come to believe.

56. There no longer is a rigid dependence on being able to assess and measure the evidence when coming to a verdict. (Footnote 1)

Propositions 48–56 are structured in the Wigmore Chart of Fig. 3.2.4.2.11. Figure 3.2.4.2.12 shows a unified chart replacing Figs. 3.2.4.2.1, 3.2.4.2.2, 3.2.4.2.3, 3.2.4.2.4, 3.2.4.2.5, and 3.2.4.2.6.

Fig. 3.2.4.2.11 The adverse argument about the missing evidence

Fig. 3.2.4.2.12 A unified chart replacing Figs. 3.2.4.2.1, 3.2.4.2.2, 3.2.4.2.3, 3.2.4.2.4, 3.2.4.2.5, and 3.2.4.2.6

In fact, that the barrister was able, during the same proceedings albeit on a different day, to produce a valid copy of the evidence that had been lost (eaten by his dog) is no longer evocative of the archetype of the pupil making excuses to his teacher, but rather of a different kind of situation that is also socio-culturally known (see Fig. 3.2.4.2.13).

Fig. 3.2.4.2.13 Thought to be lost, yet recovered

3.3 Pollock’s Inference Graphs and Degrees of Justification

Much of current research into argumentation resorts to graphs. Whereas Wigmore Charts are intended to be of practical use while preparing or analysing a legal case, graphs used by argumentation scholars are sometimes more formally defined. Nevertheless, the various graphical approaches tend to resemble each other.

John L. Pollock, of the Department of Philosophy of the University of Arizona, Tucson, developed OSCAR, a cognitive architecture for intelligent (artificial) agents. This agent architecture is based on defeasible reasoning (Footnote 2), which in turn is represented as a network of arguments, called an inference-graph. We quote from an article of 2009 that appeared posthumously (Pollock, 2010, p. 7):

The current state of a defeasible reasoner can be represented by an inference-graph. This is a directed graph, where the nodes represent the conclusions of arguments (or premises, which can be regarded as a special kind of conclusion). There are two kinds of links between the nodes. Support-links represent inferences, diagramming how a conclusion is supported via a single inference-scheme applied to conclusions contained in the inference-graph. Defeat-links diagram defeat relations between defeaters and what they defeat. […]

Inferences proceed via inference-schemes, which license inferences. We can take an inference scheme to be a datastructure one slot of which consists of a set of premises (written as open formulas), a second slot of which consists of the conclusion (written as an open formula), and a third slot lists the scheme variables, which are the variables occurring in the premises and conclusion. Inference schemes license new inferences, which is to say that they license the addition of nodes and inference-links to a pre-existing inference-graph. Equivalently, they correspond to clauses in the recursive definition of “inference-graph”. The inference-graph representing the current state of the cognizer’s reasoning “grows” by repeated application of inference-schemes to conclusions already present in the inference-graph and adding the conclusion of the new inference to the inference-graph. When a conclusion is added to an inference-graph, this may also result in the addition of new defeat-links to the inference-graph. A new link may either record the fact that the new conclusion is a defeater of some previously recorded inference in the inference-graph, or the fact that some previously recorded conclusion is a defeater for the new inference.
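Pollock’s definition suggests a direct encoding. In the following Python sketch the class and field names are this sketch’s own choices, not OSCAR’s actual datastructures: the graph “grows” by applying an inference-scheme to conclusions already present, and defeat-links are recorded separately. The rain example anticipates the Smith-and-Jones case quoted below.

```python
# A minimal encoding of an inference-graph in Pollock's sense: nodes are
# conclusions (premises are a special case); support is recorded as each
# node's basis; defeat-links point from a defeater to what it defeats.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class InferenceScheme:
    name: str
    premises: List[str]     # open formulas
    conclusion: str         # open formula
    variables: List[str]    # the scheme variables

@dataclass
class Node:
    conclusion: str
    basis: List[int] = field(default_factory=list)   # node-basis: premise nodes
    scheme: str = "premise"                          # scheme that licensed it

@dataclass
class InferenceGraph:
    nodes: Dict[int, Node] = field(default_factory=dict)
    defeat_links: List[Tuple[int, int]] = field(default_factory=list)

    def add(self, conclusion, basis=(), scheme="premise"):
        """Apply an inference-scheme: the graph 'grows' by one node."""
        n = len(self.nodes) + 1
        self.nodes[n] = Node(conclusion, list(basis), scheme)
        return n

    def defeat(self, defeater, target):
        self.defeat_links.append((defeater, target))

stat_syll = InferenceScheme(name="statistical syllogism",
                            premises=["prob(F|G) is high", "G(c)"],
                            conclusion="F(c)", variables=["F", "G", "c"])

g = InferenceGraph()
p = g.add("Jones says it will rain")                        # premise
r = g.add("It will rain", basis=[p], scheme=stat_syll.name)
q = g.add("Smith says it will not rain")                    # premise
d = g.add("It will not rain", basis=[q], scheme=stat_syll.name)
g.defeat(d, r)   # rebutting defeat, in both directions
g.defeat(r, d)
```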

Pollock recognised “that arguments can differ in strength and conclusions can differ in their degree of justification” (ibid., p. 8). He measured degrees of justification “by either real numbers, or more generally by the extended reals (the reals with the addition of ∞ and −∞)” (ibid.). As a matter of convenience, Pollock represented being equally justified in believing a proposition and in believing its opposite by the value 0 (though, if convenient, Pollock would assume for this a value different from zero). Moreover (ibid.):

Justification simpliciter requires the degree of justification to pass a threshold, but the threshold is contextually determined and not fixed by logic alone. When we ignore degrees of justification, a semantics for defeasible reasoning just computes a value of “defeated” or “undefeated” for a conclusion, but when we take account of degrees of justification, we can view the semantics more generally as computing the degrees of justification for conclusions. Being defeated simpliciter will consist of having a degree of justification lower than some threshold.

There are two sources of variation of degrees of justification (ibid., p. 9):

[C]hanging the degrees of justification for the premises of an argument can result in different degrees of justification for the conclusion. But there is a second source of variation. New conclusions are added to the inference-graph by applying inference-schemes (Footnote 3) to previously inferred conclusions. Some inference-schemes provide more justification than others for their conclusions even when they are applied to premises having the same degrees of justification.

If there are no arcs in the graph that reach a given node (that is to say, its node-basis is empty), then that node is initial. In particular, it gets no node-defeaters. Initial nodes are undefeated. “A non-initial node is undefeated iff all the members of its node-basis are undefeated and all node-defeaters are defeated” (ibid., p. 10). “Let us define an inference/defeat-descendant of a node to be any node that can be reached from the first node by following support-links and defeat-links (in the direction of the arrows).” (ibid.).
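For loop-free graphs, the recursion just stated can be implemented directly, at least in the simple binary case where a conclusion is either defeated or undefeated (degrees of justification are ignored). A minimal Python sketch, with hypothetical node numbering:

```python
# Pollock's recursion for loop-free inference-graphs: initial nodes (empty
# node-basis) are undefeated; a non-initial node is undefeated iff all
# members of its node-basis are undefeated and all its node-defeaters are
# defeated. This sketch assumes the graph is free of inference/defeat loops.

def undefeated(node, basis, defeaters, memo=None):
    """basis[n]     -> the nodes n is inferred from (node-basis)
       defeaters[n] -> the nodes attacking n (node-defeaters)"""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    if not basis.get(node):                     # initial node: undefeated
        memo[node] = True
        return True
    memo[node] = (all(undefeated(b, basis, defeaters, memo) for b in basis[node])
                  and all(not undefeated(d, basis, defeaters, memo)
                          for d in defeaters.get(node, [])))
    return memo[node]

# Nodes 1 and 3 are premises; 2 is inferred from 1; 4 is inferred from 3
# and defeats 2.
basis     = {2: [1], 4: [3]}
defeaters = {2: [4]}
print(undefeated(2, basis, defeaters))  # False: its defeater 4 is undefeated
print(undefeated(4, basis, defeaters))  # True
```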

Pollock discussed inference/defeat loops (ibid.):

The general problem is that a node Q can have an inference/defeat-descendant that is a defeater of Q. I will say that a node is Q-dependent iff it is an inference/defeat-descendant of a node Q. So the recursion is blocked in inference-graph […] by there being Q-dependent defeaters of Q and ∼Q-dependent defeaters of ∼Q.

where Q is a proposition, and ∼Q is “not Q”. “Iff” stands for “if and only if”. “Most of the different theories of defeasible reasoning differ in their assignments of degrees of justification only in how to handle inference/defeat-loops while making the assumption that all degrees of justification are either 0 or 1” (ibid.), but this assumption is too restrictive.

Pollock discussed “the ways in which allowing conclusions to have intermediate degrees of justification can affect the computation of degrees of justification and hence can affect the semantics for defeasible reasoning, focusing initially on loop-free inference-graphs” (ibid., p. 11). He discussed three such ways, starting with diminishers, exemplified through two persons, Smith and Jones, whose reliability may be regarded as different, and who predict whether it is going to rain (ibid., p. 12). Jones is a professional weatherman, with a track record of successful predictions (ibid.).

Suppose his predictions are correct 90% of the time. On the other hand, Smith predicts the weather on the basis of whether his bunion hurts, and although his predictions are surprisingly reliable, they are still only correct 80% of the time. Given just one of these predictions, we would be at least weakly justified in believing it. But given the pair of predictions, it seems clear that an inference on the basis of Smith’s prediction would be defeated outright. What about the inference from Jones’ prediction? Because Jones is significantly more reliable than Smith, we might still regard ourselves as weakly justified in believing that it is going to rain, but the degree of justification we would have for that conclusion seems significantly less than the degree of justification we would have in the absence of Smith’s contrary prediction, even if Smith’s predictions are only somewhat reliable. On the other hand, if Smith were almost as reliable as Jones, e.g., if Smith were right 89% of the time, then it does not seem that we would be even weakly justified in accepting Jones’ prediction. The upshot is that in cases of rebutting defeat, if the argument for the defeater is almost as strong as the argument for the defeatee, then the defeatee should be regarded as defeated. This is not to say that its degree of justification should be 0, but it should be low enough that it could never be justified simpliciter. On the other hand, if the strength of the argument for the defeater is significantly less than that for the defeatee, then the degree of justification of the defeatee should be lowered significantly, even if it is not rendered 0. In other words, the weakly justified defeaters act as diminishers.

Moreover, consider reasoning from multiple premises to a conclusion. “[M]any philosophers have found it convincing that reasoning from multiple premises can produce conclusions with degrees of justification lower than the degrees of justification of the premises.” (ibid.). This is the second of the three ways announced above. The third way concerns multiple arguments supporting the same conclusion: “A widely shared intuition is that if we have two independent arguments for a conclusion, that renders the conclusion more strongly justified than if we just had only one of the arguments. If so, a theory of defeasible reasoning must tell us how to compute the degree of justification obtained by combining multiple independent arguments.” (ibid., p. 13). This is the principle of accrual of arguments (ibid., p. 18). Pollock discussed whether this principle is true (ibid., pp. 19–20):

When we know the probability of a conclusion given each of two sets of evidence, the probability given the combined evidence is the joint probability. The preceding observation is that we often know that the joint probability is higher than either constituent probability, and this gives us a stronger reason for the conclusion, and we get this result without adopting an independent principle of the accrual of reasons. To further confirm that this is the correct diagnosis of what is going on, note that occasionally we will have evidence to the effect that the joint probability is lower than either of the constituent probabilities. For instance, suppose we know that Bill and Stew are jokesters. Each by himself tends to be reliable, but when both, in the presence of the other, tell us something surprising, it is likely that they are collaboratively trying to fool us. Knowing this, if each tells us that it is raining in Tucson in June, our wisest response is to doubt their joint testimony, although if either gave us that testimony in the absence of the other, it would justify us in believing it is raining in Tucson in June. So this is a case in which we do not get the effect of an apparent accrual of reasons, and it is explained by the fact that the instance of the statistical syllogism taking account of the combined testimony makes the conclusion less probable rather than more probable.

Pollock also considered other kinds of situations as well, and then proposed: “Having multiple arguments for a conclusion gives us only the degree of justification that the best of the arguments would give us.” (ibid., p. 21). Pollock acknowledged: “Thus far, I have been unable to find a case in which taking account of degrees of justification has a significant impact on reasoning. All cases I have discussed can be handled by appealing to simple principles for computing the degrees of justifications, the most notable being the weakest link principle. Most importantly, there is no way to make the accrual of reasons work. However, there is one final case to be discussed in which I believe that a correct account of defeasible reasoning requires us to appeal more seriously to degrees of justification.” (ibid.). That case is that of diminishers. “Perhaps the most compelling argument for diminishers is that if the degree of justification of a defeater is only marginally less than the strength of the argument it attacks, surely that should not leave the argument unscathed.” (ibid.). “The upshot is that the only cases of defeasible reasoning in which we need something more serious than the weakest link principle to handle and implement defeasible reasoning are cases involving diminishers. To handle those cases correctly, we need a principle governing how diminishers lower degrees of justification.” (ibid.).
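The principles Pollock settles on – the weakest link principle, taking the best argument instead of accruing several, and diminishers – can be summarised in a few lines. In the following Python sketch the numeric margin that decides when a rebutting defeater defeats outright, rather than merely diminishes, is an arbitrary illustrative value, not one given by Pollock.

```python
# (i) Weakest link: an argument is as justified as its least justified
#     premise or inference-scheme.
# (ii) No accrual: multiple independent arguments give only the degree of
#     justification of the best of them.
# (iii) Diminishers: a rebutting defeater almost as strong as the argument
#     defeats it outright; a markedly weaker one merely lowers its degree.
# All numbers below are illustrative assumptions.

def argument_strength(premise_degrees, scheme_strength):
    """Weakest link: min over the premises and the inference-scheme."""
    return min(list(premise_degrees) + [scheme_strength])

def best_of(argument_strengths):
    """No accrual: several independent arguments give only the best one."""
    return max(argument_strengths)

def apply_rebutter(defeatee, defeater, margin=0.05):
    """Diminisher behaviour for rebutting defeat (illustrative margin)."""
    if defeater >= defeatee - margin:
        return 0.0                 # defeated outright: below any threshold
    return defeatee - defeater     # merely diminished

jones = argument_strength([0.95], scheme_strength=0.90)  # weatherman, 90% reliable
smith = argument_strength([0.95], scheme_strength=0.80)  # bunion, 80% reliable
print(apply_rebutter(jones, smith))  # rain conclusion diminished, not zeroed
print(apply_rebutter(jones, 0.89))   # near-equal defeater: defeated outright
```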

3.4 Beliefs

3.4.1 Beliefs, in Some Artificial Intelligence Systems

In communication, agents reason about the beliefs of their interlocutor. Nested beliefs have been used in various computational systems modelling dialogic argumentation, such as Sycara’s PERSUADER (Sycara, 1989a, 1989b, 1990, 1992; Lewis & Sycara, 1993) – see Section 3.5 below – or NAG of Zukerman, McConachy, and Korb (1998), or then DAPHNE of Grasso, Cawsey, and Jones (2000). Whereas in cooperative dialogues, i.e., dialogues in which none of the participants is committed to any form of deception, three levels of nesting of beliefs are sufficient, this is not enough when it comes to modelling some situations involving deception, where deeply-nested belief levels are required, as argued by Jasper Taylor (1994a, 1994b) (Footnote 4).

Take two of the propositions, one encapsulating a background generalisation, from our Wigmorean analysis of the Newcastle episode (see Section 3.2.4):

  • Proposition 53: The judiciary may resist the hypothesis that a barrister would allow himself such misconduct as deliberately telling an outright lie.

  • Proposition 54: A barrister is likely to be aware of such reluctance.

This is part of the reasoning we may ascribe to the judge or to anybody else, on hearing the barrister make the suspicious claim about his dog having eaten the evidence.

In Fig. 3.4.1.1, we propose the nesting of beliefs involved in the reasoning about those two propositions in the context of the Newcastle episode which we have been discussing in Section 3.2.4. The notation is as follows:

  • A box with a label below stands for the set of beliefs of the agent identified by that label.

  • A box with a label above stands for a belief or a set of beliefs about the object identified by that label (in particular, this may be another agent).

  • An outer box with a label below (an agent’s box) includes logical predicates, or then one or more other boxes, and the contents of the agent’s box are what that agent believes.

Fig. 3.4.1.1 Nested beliefs for propositions 53 and 54 from the Newcastle episode

This notation is taken from Ballim, Wilks, and Barnden (1990).

Figure 3.4.1.1 means the following:

  • The overall system believes that Rich is a barrister, and that judge1 is a member of the judiciary.

  • The overall system believes that judge1 believes that Rich is a barrister, and that judge1 is a member of the judiciary.

  • The overall system believes that judge1 believes that the judiciary believes that a barrister would rather not tell a lie to the judiciary.

  • The overall system believes that judge1 believes that a barrister believes that the judiciary believes that a barrister would rather not tell a lie to the judiciary.
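The nesting can be mirrored as nested dictionaries, in the spirit of the Ballim–Wilks box notation; the following Python encoding, including the predicate spellings, is this sketch’s own.

```python
# Nested belief environments for Fig. 3.4.1.1. An inner dictionary holds the
# beliefs the enclosing believer ascribes to the agent (or category) named
# by its key; "facts" holds that believer's own propositions.

from typing import Dict, List, Union

Beliefs = Dict[str, Union[List[str], "Beliefs"]]

system: Beliefs = {
    "facts": ["barrister(Rich)", "member(judge1, judiciary)"],
    "judge1": {                                  # what the system ascribes to judge1
        "facts": ["barrister(Rich)", "member(judge1, judiciary)"],
        "judiciary": {                           # proposition 53
            "facts": ["a barrister would rather not lie to the judiciary"],
        },
        "barrister": {                           # proposition 54
            "judiciary": {
                "facts": ["a barrister would rather not lie to the judiciary"],
            },
        },
    },
}

def believes(env: Beliefs, path: List[str], proposition: str) -> bool:
    """Does the nesting hold? e.g. path ['judge1', 'barrister', 'judiciary']."""
    for agent in path:
        env = env.get(agent, {})
    return proposition in env.get("facts", [])

print(believes(system, ["judge1", "barrister", "judiciary"],
               "a barrister would rather not lie to the judiciary"))  # True
```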

Mutual beliefs are modelled in some of artificial intelligence’s models of teamwork (e.g., Sycara, 1998). What psychologists call attribution, and what in artificial intelligence is termed agent beliefs – i.e., how people (and computational cognitive models) reason about their own beliefs and the ones they ascribe to others – was applied to legal evidence in two papers that adopt different approaches: Ballim, By, Wilks, and Liske (2001), and Barnden (2001). Previously, Ballim and Wilks (1991) had proposed an AI formalism for nested beliefs, which in Ballim et al. (2001) they applied to legal narratives (Footnote 5). Barnden (2001) describes an application of simulative reasoning by agents about each other, by means of the ATT-Meta system, which deals with agents’ beliefs within a formal approach to uncertain reasoning about them. The application is to reasoning about legal evidence. It is valuable, yet may be vulnerable to a Bayesio-skeptic critique: “by adopting a bold stance about how to mathematically treat uncertainty in a legal context, one is treading on the hornet’s nest that the debate about forensic statistics is, among legal theorists. Some will applaud, some would not, and causing this by itself is beyond reproach” (Nissan & Martino, 2004b).

It is important to point out that computer tools that envisaged guessing (mindreading) the intentions of some player have not necessarily done so by explicitly incorporating a representation of agents’ nested beliefs. In particular, when just one level of ascription is involved, some other schema of representation may also be useful in practice. The following exemplifies this. BASKETBALL is an expert system that was developed in an ad hoc fashion (rather than according to some neat theory) by two students of mine under my direction. It gives advice to a basketball team on the opening five and the playing strategy for a given upcoming match. It does so by analysing the present assets of the two teams, and based also on the likely course of action of the adversary team, even though the information about the present state of the adversary team is likely to be only partial; some general information about the league or the place is also relevant (Simhon, Nissan, & Zigdon, 1992). This is relevant in our present context because the reasoning in BASKETBALL is partly based on reading the minds of the other team, by considering how they are likely to plan their play, according to their assets. Yet, no further levels of nesting are involved: BASKETBALL does not consider the possibility that the other team, too, may be trying to guess how our own team is going to plan its own strategy, based on our own assets.
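BASKETBALL’s actual rules are not reproduced here; the following generic Python sketch merely illustrates what a single level of ascription amounts to: the opponent’s plan is predicted from their (partially known) assets, and our strategy is chosen against that prediction, with no deeper nesting.

```python
# One level of mindreading, and deliberately no more: we model the opponent,
# but not the opponent modelling us. All asset names and the counter-table
# are hypothetical illustration.

def predict_opponent_plan(opponent_assets):
    """Assume the other team plays to its strongest (known) asset."""
    return max(opponent_assets, key=opponent_assets.get)

def choose_strategy(our_assets, opponent_assets, counters):
    """counters[their_plan] -> our strategies ranked against that plan."""
    their_plan = predict_opponent_plan(opponent_assets)
    suitable = [s for s in counters[their_plan] if our_assets.get(s, 0) > 0]
    return their_plan, (suitable[0] if suitable else "balanced")

ours   = {"fast_break": 2, "zone_defence": 3, "outside_shooting": 1}
theirs = {"tall_centres": 3, "perimeter_shooting": 1}   # partial information
counters = {"tall_centres": ["zone_defence", "fast_break"],
            "perimeter_shooting": ["outside_shooting"]}
print(choose_strategy(ours, theirs, counters))  # ('tall_centres', 'zone_defence')
```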

3.4.2 Dispositional Beliefs vs. Dispositions to Believe

A philosophical (epistemological) controversy on knowledge and belief is the one between Vendler (1975a, 1975b) and Aune (1975). Our propositions 53 and 54 for the Newcastle episode make it relevant to consider the difference between dispositional beliefs and dispositions to believe. Robert Audi (1994) questioned the explanatory validity of antecedent belief:

Do you believe that this sentence has more than two words? […] It would be natural to answer affirmatively. And surely, for most readers considering these questions, that would be answering truly. Moreover, in affirmatively answering them, we seem to express antecedent beliefs: after all, we are aware of several words in the first sentence by the time we are asked if it has more than two […]. Antecedent belief of the propositions in question – believing them before being asked whether we do – is also the readiest explanation of why we answer the questions affirmatively without having to think about them. These considerations incline many people to attribute to us far more beliefs than, in my judgment, we have. […] I contend that, here, what may seem to be antecedently held but as yet unarticulated dispositional beliefs are really something quite different: dispositions to believe. […] The terms ‘tacit belief’ and ‘implicit belief’ have been used for both dispositional beliefs and dispositions to believe […] (ibid., p. 419).

3.4.3 Common Knowledge, and Consequentialism

In the nested boxes of Fig. 3.4.1.1 about propositions 53 and 54 from the Newcastle episode, we see that there is some common belief which is assumed to be shared at the very least between members of the category barrister (thus including Rich, the defence barrister from the trial in Newcastle whose dog ate the evidence) and members of the category judiciary (thus including judge1). The notion of common knowledge plays a significant role both in artificial intelligence models of agents’ beliefs, and in game theory. Mokherjee and Sopher (1994) discuss real players’ belief learning behaviour in economic games (ibid., pp. 62–63):

The assumption of Nash equilibrium plays a central role in modern noncooperative game theory. This theory is usually based on the notion that players have “common knowledge” regarding the payoffs and behavior modes of each other (see Aumann (1987) and Brandenburger and Dekel (1987)). It is commonly acknowledged that the assumption of common knowledge is a demanding one, and that a satisfactory theory should also describe the process by which players arrive at their beliefs (see, e.g., Binmore (1985)). Moreover, it is unlikely that most real players rely entirely on a cognitive process of thinking in order to arrive at their beliefs, rather than past experience at playing the same or related games.

In the nested beliefs related to propositions 53 and 54 from the Newcastle episode, the approach may be too “neat”, too idealised, in that it appears to assume that players are consequentialist in how they behave. Baron (1994) investigates nonconsequentialist decisions, from a psychological viewpoint, with examples about convenience and examples in ethics:

According to a simple form of consequentialism, we should base decisions on our judgements about their consequences for achieving our goals. […] Yet some people knowingly follow decision rules that violate consequentialism. For example, they prefer harmful omissions to less harmful acts, they favor the status quo over alternatives they would otherwise judge to be better, they provide third-party compensation on the basis of the cause of an injury rather than the benefit from the compensation, […]. I suggest that nonconsequentialist principles arise from overgeneralizing rules that are consistent with consequentialism in a limited set of cases. Commitment to such rules is detached from their original purposes (ibid., p. 1).

Pietroski (1994) criticises Baron: “Because the many senses of ‘should’ are (somehow) related, it is easy to vacillate between different normative claims. I think Baron has done this, leaving his model without any clear function”. The sense of the ‘should’ in a given normative claim may be instrumental, i.e. to fulfil a (possibly implicit) desire of the agent, but desires may conflict. To Pietroski, a “pragmatic” sense of ‘should’ is that “agents should make those decisions that, all things considered, they think will satisfy their desires on the whole. Agents typically do what they should in this sense”. The attractions of Baron’s “model may result from the slogan, ‘Decisions should maximize the good’. However, what about moral readings?”. Baron’s model as a moral thesis is amenable to utilitarianism (Footnote 6). Pietroski claims “there is a ‘should’ of idealization.”

An example is the ideal gas law, which is valid if one chooses to ignore certain facts. A reading of Baron along these lines is possible, Pietroski maintains, and he proceeds to criticise it. “Finally, it is of practical importance that some decisions should be made in the idealization, but not pragmatic sense (or vice versa). But the only other sense of ‘should’ relevant to public policy that I can think of is moral. And again, we do not want Baron’s consequentialism for our moral theory”.

3.4.4 Commitment vs. Belief: Walton’s Approach

3.4.4.1 The Problem of Recognising Belief, Based on Commitment

Writing for the benefit of legal scholars in the journal International Commentary on Evidence, Douglas Walton and Fabrizio Macagno (2005) claimed that

tools of argument analysis currently being developed in artificial intelligence can be applied to legal judgments about evidence based on common knowledge. Chains of reasoning containing generalizations and implicit premises that express common knowledge are modeled using argument diagrams and argumentation schemes.

Moreover, they argued for what they conceded is a controversial thesis (ibid.): “It is the thesis that such premises can best be seen as commitments accepted by parties to a dispute, and thus tentatively accepted, subject to default should new evidence come in that would overturn them”. According to that approach, also common knowledge is commitment, rather than knowledge: “Common knowledge, on this view, is not knowledge, strictly speaking, but a kind of provisional acceptance of a proposition based on its not being disputed, and its being generally accepted as true, but subject to exceptions” (ibid.).

Walton (2010) saw the need to overcome a problem in artificial intelligence concerning beliefs, and proposed a model such that (ibid., p. 23):

a belief is defined as a proposition held by an agent that (1) is not easily changed (stable), (2) is a matter of degree (held more or less weakly or strongly), (3) guides the goals and actions of the agent, and (4) is habitually or tenaciously held in a manner that indicates a strong commitment to defend it. It is argued that the new model overcomes the pervasive conflict in artificial intelligence between the belief-desire-intention model of reasoning and the commitment model.

Walton’s model “uses argumentation schemes for practical reasoning and abductive reasoning. A belief is characterised as a stable proposition that is derived abductively by one agent in a dialogue from the commitment set (including commitments derived from actions and goals) of another agent” (ibid.). Walton’s “paper offers a definition of the notion of belief and a method for determining whether a proposition is a belief of an agent or not, based on evidence. The method is based on a formal dialogue system for argumentation that enables inferences to be drawn from commitments to beliefs using argumentation schemes” (ibid.). Walton claimed (ibid.):

The approach offers a middle ground between the two leading artificial intelligence models that have been developed for programming intelligent agents. According to the commitment model, a commitment is a proposition that an agent has gone on record as accepting (Hamblin, 1970, 1971). In the belief-desire-intention (BDI) model (Bratman, 1987), intention and desire are viewed as the pro-attitudes that drive goal-directed reasoning forward to a proposal to take action. The BDI model is based on the concept of an agent that carries out practical reasoning based on goals that represent its intentions and incoming perceptions that update its set of beliefs as it moves along (Wooldridge, 2002).

Walton explained why distinguishing between commitment and belief is important, by making the example of lying in court (ibid., p. 31):

The third reason [why do we need a notion of belief, as opposed to commitment] has to do with negative concepts like insincerity, self-deception and lying, all of which appear to require some notion of belief. For example, the speech act of telling a lie could be defined as putting forward a statement as true when one believes (or even knows) that it is false. These concepts are fundamentally important not only in ethics, but also important in law in the process of examination in trials (including cross-examination), as well as in witness testimony and the crime of perjury. It is one thing to commit yourself in a dialogue to a proposition that you are not really committed to, as judged by your prior commitments in the dialogue. There might be many reasons to explain such an inconsistency of commitments. Perhaps you just forgot, or you can somehow explain the inconsistency. Maybe you just changed your mind, as some new evidence came into the dialogue. But lying is a different thing. To lie, you have to really believe that the statement you made is false. In short, negative notions of a significant kind, like lying, self-deception, and so forth, cannot be fully understood only through applying the notion of commitment, but also require reference to belief. Lying is also closely related to notions like lying by omission, equivocation, deception, and using ambiguity in argumentation, and these notions are in turn related to the study of informal fallacies.

Walton remarked (ibid., p. 29):

Commitments, on Hamblin’s view [(Hamblin, 1970)], are public and social. If you make an assertion of a statement A in a way that indicates you are committed to it, and there is a public record of your speech act of asserting A in this manner, then that is evidence you are committed to A. For example, if you confess to a murder under police questioning, and the interview was videotaped, then the videotape provides evidence that you are committed to the statement that you murdered the victim. In law, the videotape itself is called evidence, and when it is shown in court, it provides evidence for the accusation that you are guilty of the crime as alleged. Thus, once you have committed yourself to a statement, say by asserting it in public so that your assertion can be recorded or be put ‘on record’, then that is evidence of your commitment to it. Thus, commitment is inherently a social notion that has to do with public dialogues in which two parties or more engage in public conversations. Commitment is basically public. Your commitments are inferred from what you have gone on record as saying in some context of dialogue.

Belief, although it can sometimes be public, as when we talk about commonly held beliefs, is a more private matter. If belief is an internal psychological matter of what an individual really thinks is true or false, the privacy of belief makes it more difficult to judge what an individual believes. People often lie or conceal their real beliefs. And there is good reason to think that people often do not know what their own beliefs are. If Freud was right, we also have unconscious beliefs that may be quite different from what we profess to be our beliefs. Belief is deeply internal and psychological, and public commitment to a proposition is not necessarily an indication of belief. But perhaps there is a way to infer belief from commitment.

3.4.4.2 Walton’s Argument Schemes and Critical Questions for Argument from Commitment

In their book Argumentation Schemes, Walton, Reed, and Macagno (2008, p. 335) presented two versions of an argument scheme called argument from commitment. The simpler version is as follows:

Commitment evidence premise: In this case, it was shown that a is committed to proposition A, according to the evidence of what he said or did.

Linkage of commitments premise: Generally when an arguer is committed to A, it can be inferred that he is also committed to B.

Conclusion: In this case, a is committed to B.

This first version of the argument-from-commitment scheme is associated with the following critical question:

CQ1: What evidence in the case supports the claim that a is committed to A, and does it include contrary evidence, indicating that a might not be committed to A?

The second version of the argument scheme is set in the context of a dialogue:

Major premise: If arguer a has committed herself to proposition A, at some point in a dialogue, then it may be inferred that she is also committed to proposition B, should the question of whether B is true become an issue later in the dialogue.

Minor premise: Arguer a has committed herself to proposition A at some point in a dialogue.

Conclusion: At some later point in the dialogue, where the issue of B arises, arguer a may be said to be committed to proposition B.

This second version of the argument-from-commitment scheme is associated with this other critical question (a code sketch follows):

CQ2: Is there room for questioning whether there is an exception in this case to the general rule that commitment to A implies commitment to B?
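
By way of illustration, the following minimal Python sketch shows how such a scheme might be represented as a data structure, with the first critical question attached as a check. The class name, fields, and example propositions are our own illustrative assumptions, not part of Walton, Reed, and Macagno's (2008) formalism.

from dataclasses import dataclass, field

@dataclass
class ArgumentFromCommitment:
    # First version of the scheme: from a's evidenced commitment to A,
    # and a general linkage between A and B, infer a's commitment to B.
    arguer: str
    committed_to: str                     # proposition A
    linked_to: str                        # proposition B
    evidence: list = field(default_factory=list)
    contrary_evidence: list = field(default_factory=list)

    def conclusion(self) -> str:
        return f"{self.arguer} is committed to: {self.linked_to}"

    def cq1_is_open(self) -> bool:
        # CQ1: is the commitment to A actually evidenced, and is there
        # contrary evidence suggesting a might not be committed to A?
        return (not self.evidence) or bool(self.contrary_evidence)

arg = ArgumentFromCommitment(
    arguer="a",
    committed_to="A",
    linked_to="B",
    evidence=["a asserted A on record"],
)
print(arg.conclusion())     # the conclusion holds presumptively...
print(arg.cq1_is_open())    # ...since CQ1 is answered (False: not open)

The conclusion is defeasible: an affirmative answer to the critical question would shift the burden back onto the proponent of the argument.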

3.4.4.3 Walton’s Argument Scheme and Critical Questions for Inferring Belief from Commitment

Walton conceded (2010, p. 30):

The problem is how the bridge between commitment and belief can be crossed. That is, how can one draw a rational inference from a person’s commitment to a statement to the conclusion that he believes that this statement is true? The inference is surely a hazardous one in many instances. A participant in a discussion will often make or incur commitment to some proposition for the sake of argument without really believing that proposition, or even being in a position to know for sure whether it is true or not. However, an argument from commitment is a defeasible argumentation scheme, and this aspect of it might be quite favourable for using it to argue from commitment to belief.

Questioning is a way to find out about belief from commitment (ibid., p. 37):

Suppose that you believe a particular proposition A, and A is not in your commitment set, nor is there any subset of propositions within your commitment set that logically implies A. Still, it may be the case that you believe that proposition A is true. What then is the link between your commitment set and your belief that proposition A is true? The link is that I can engage in an examination dialogue with you about proposition A, and about other factual propositions related to A, and judge from the commitments I can extract from you in this dialogue whether you believe proposition A or not. I can even ask you directly whether you believe A or not. Even if you claim not to believe A, I can ask you whether other propositions you have shown yourself to be committed to in the dialogue imply belief in A. So we can say that although there is no link of deductive logical implication between belief and explicit commitment, there can be defeasible links between sets of one’s commitments, both implicit and explicit.

Walton added, concerning examination dialogues (ibid., pp. 39–40):

Examination dialogue is a type of dialogue that has two goals (Walton, 2006[a]). One is to extract information to provide a body of data that can be used for argumentation in an embedded dialogue, like a persuasion dialogue for example. Examination dialogue can be classified as a species of information-seeking dialogue, and the primary goal is the extraction of information. However, there is also a secondary goal of testing the reliability of the information. Both goals are carried out by asking the respondent questions and then testing the reliability of the answers extracted from him. The formal analysis of the structure of the examination dialogue by Dunne et al. (2005) models this testing function of the examination dialogue. In their model, the proponent wins if she justifies her claim that she has found an inconsistency in the previous replies of the respondent. Otherwise the respondent wins. To implement this testing function, the information initially elicited is compared with other statements or commitments of the respondent, other known facts of the case, and known past actions of the respondent. This process of testing sometimes takes the form of attempts by the questioner to trap the respondent in an inconsistency, or even in using such a contradiction to attack the respondent’s ethical character. Such a character attack used in the cross-examination of a respondent can often be used as an ad hominem argument,Footnote 7 where for example, the testimony of a witness is impeached by arguing that he has lied in the past, and that therefore what he says now is not reliable as evidence.

Concluding his paper, Walton (2010, p. 43) proposed this “basic defeasible argumentation scheme for an argument from commitment to belief”:

Premise 1: a is committed to A in a dialogue D based on an explanation of a’s commitments in D in the dialogue.

Premise 2: a’s commitment to A is not easily retracted under critical questioning in D.

Premise 3: a’s commitment to A is used as a premise in a’s practical reasoning and argumentation in D.

Conclusion: Therefore a believes A (more strongly or weakly).

where a’s commitments include a’s goals, actions, and professed beliefs. Walton conceded (ibid.) that:

This scheme is built on the assumption that there is some way of ordering the comparative weakness or strength of the propositions in an agent’s set of beliefs, representing how firmly the agent is committed to that belief. Such firmness is indicated by how easily the proposition is given up under critical questioning by the other party in the dialogue, and by how prominently it is used as a premise in a’s argumentation.

Walton (ibid.) associated the scheme quoted above with the following critical questions:

CQ1: What evidence can a give that supports his belief that A is true?

CQ2: Is A consistent with a’s other commitments in the dialogue?

CQ3: How easily is a’s commitment to A retracted under critical questioning?

CQ4: Can a give evidence to support A when asked for it?

CQ5: Is there some alternative explanation of a’s commitments?

Walton (ibid.) also proposed this “comparative scheme for argument from commitment to belief with the conclusion that a believes A more strongly than B” (a sketch in code follows the scheme’s critical questions below):

Premise 1: a is committed to A more strongly than B in a dialogue D based on a’s explicit or implicit commitments in D in the sequence of dialogue.

Premise 2: a’s commitment to A is less easily retracted under critical questioning in D than a’s commitment to B.

Premise 3: a’s commitment to A is used as a premise in a’s practical reasoning and argumentation in D more often and centrally than a’s commitment to B.

Conclusion: Therefore a believes A more strongly than B.

This scheme in turn was associated with the following critical questions (ibid., p. 44):

CQ1: How stable is a’s commitment to A over B during the course of D?

CQ2: Is there evidence from the alternative explanations available so far in D suggesting that a does not believe A more strongly than B?

CQ3: How easily is a’s tenacity of commitment to A rather than to B retracted under critical questioning?

CQ4: Can a give stronger evidence to support A when asked for it, rather than to the evidence he gives to support B when asked for it?
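
The comparative scheme lends itself to a similarly simple sketch. In the following Python fragment, the scoring rule (summing retraction resistance and frequency of use as a premise) is purely an illustrative assumption of ours; Walton (2010) only requires that there be some way of ordering the firmness of commitments.

def believes_more_strongly(comm_a, comm_b):
    # Defeasibly infer that a believes A more strongly than B from how
    # resistant each commitment is to retraction and how often it is
    # used as a premise in a's own reasoning (Premises 2 and 3 above).
    def score(c):
        return c["retraction_resistance"] + c["times_used_as_premise"]
    return score(comm_a) > score(comm_b)

A = {"retraction_resistance": 3, "times_used_as_premise": 5}
B = {"retraction_resistance": 1, "times_used_as_premise": 2}
print(believes_more_strongly(A, B))   # True, subject to CQ1-CQ4 above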

Examinations may involve evasiveness. By pretending to cooperate, a person being interrogated may hide evasive action, something that has been researched in scholarship about argumentation (Galasinski, 1996). “A speaker resorting to covert evasion can be seen as trying to make her/his interlocutor believe that her/his utterance is cooperative and does answer the question posed. Covert evasion therefore is necessarily deceptive on a metadiscursive level. Thus it is a violation of what has been called by Grice [in Grice (1975, 1981)] a Cooperative Principle in general and its maxim of relation in particular” (Galasinski, 1996, p. 376). The protocol of interrogation needs to be designed skilfully enough for the examiner to take notice of, say, covert evasiveness on the part of the person interrogated.

3.4.4.4 Another Approach to Critical Questions

Bex, Bench-Capon, and Atkinson’s paper (2009) ‘Did he jump or was he pushed? Abductive practical reasoning’ adopts (ibid., p. 83) Atkinson and Bench-Capon’s (2007) formal model underlying the generation of arguments and critical questions, a model itself based upon Wooldridge and van der Hoek’s (2005) Action-based Alternating Transition System (AATS). As explained in Bex, Bench-Capon, and Atkinson (2009, p. 83):

Essentially, an AATS consists of a set of states and transitions between them, with the transitions labelled with joint actions, that is, actions comprising an action of each of the agents concerned. To represent the fact that the outcome of actions is sometimes uncertain, in the scenario we use in this paper we will add a third “agent” which will determine whether the actions had the desired or the undesired effect. The transitions will be labeled with motivations, corresponding to the values of Bench-Capon (2003b), encouraging or discouraging movement from one state to the next. […] We use a transition system which is a simplified version of the AATS used in Atkinson and Bench-Capon (2007) to ground the practical reasoning argumentation scheme, but this will still allow us to hypothesise the reasoning concerning the events that may have taken place.

A story is a chain of arcs inside the graph (Bex et al., 2009, p. 83):

Given an AATS and a number of arguments generated from the AATS, a story (a sequence of events) is a path through the AATS. An argument explains why that path was followed, and so gives coherence and hence plausibility to the story. For example, ‘John wrote a paper, John went to Florence’ is a story, but it has more coherence expressed as ‘John went to Florence because he had to present the paper he had written.’
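
A minimal Python sketch may clarify the idea of a story as a path. The states, the joint actions (here pairs consisting of John’s action and the action of “nature”, the extra agent deciding whether actions succeed), and the transition table are illustrative assumptions of ours, not the formal AATS definition of Wooldridge and van der Hoek (2005).

transitions = {
    # (state, joint_action) -> next_state
    ("start", ("write_paper", "cooperate")): "paper_written",
    ("paper_written", ("go_to_florence", "cooperate")): "in_florence",
    ("paper_written", ("go_to_florence", "thwart")): "stranded",
}

def story(start, joint_actions):
    # A story is a path through the AATS: follow the labelled transitions.
    state, path = start, [start]
    for act in joint_actions:
        if (state, act) not in transitions:
            return None               # not a possible course of events
        state = transitions[(state, act)]
        path.append(state)
    return path

print(story("start", [("write_paper", "cooperate"),
                      ("go_to_florence", "cooperate")]))
# ['start', 'paper_written', 'in_florence']: the 'John went to Florence' story

An argument generated from the AATS would then explain why this path, rather than another, was followed.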

The story Bex et al. (2009) used throughout their paper is as follows (ibid., p. 83):

Picture two people on a bridge. The bridge is not a safe place: the footpath is narrow, the safety barriers are low, there is a long drop into a river, and a tramline with frequent traffic passing quite close to the footpath. One of the persons, call him Ishmael, is standing still, whereas the other, Ahab, is running. As Ahab reaches Ishmael, Ishmael falls into the river. Did he jump or was he pushed? To answer this we will need a story explaining either why Ahab chose to push Ishmael, or why Ishmael chose to jump to his doom. If Ahab is on trial, the story we believe will be crucial: if Ahab intended Ishmael’s death it will be murder, if there is a less damning explanation for the push it may be manslaughter, and if Ishmael jumped, Ahab is completely innocent. We illustrate the critical questions by reference to this example scenario.

Given that here “‘explanation’ stands for ‘the performance of joint action A in previous circumstances R’” (ibid., p. 84), by which “we mean physical explanation, how performing an action in R caused the new state of affairs S, as opposed to a mental explanation, what motivated an agent to do a particular action”, the critical questions for choice of explanation that Bex et al. (2009) enumerate are the following (ibid., p. 84):

CQ1 “Are there alternative ways of explaining the current circumstances S?”, subdivided into (a) “Could the preceding state R have been different?”, and (b) “Could the action B have been different?”

CQ2 “Assuming the explanation, is there something which takes away the motivation?”

CQ3 “Assuming the explanation, is there another motivation which is a deterrent for doing the action?”

CQ4 “Can the current explanation be induced by some other motivation?”

CQ5 “Assuming the previous circumstances R, was one of the participants in the joint action trying to reach a different state?”

For example, the answer they provide (ibid.) for CQ5 is as follows:

Answer: in R, even though one agent performed his part of A with motivation M, the joint action was actually A′ which led to S′, where A′ ≠ A and S′ ≠ S

‘Ahab wanted to push Ishmael out of the way of the tram to get him out of danger, but nature did not cooperate (and Ishmael fell off the bridge)’

Next, Bex et al. (2009) enumerated (ibid., p. 85) critical questions for problem formulation, for example: “Assuming the previous circumstances, would the action have any consequences?” The argument scheme and all those critical questions were then expressed formally, by adopting a notation in terms of an AATS (ibid., section 3.2). A state transition diagram was drawn (ibid., p. 90) for the scenario explaining the circumstances of the Ahab and Ishmael narrative. Then, by adopting Bench-Capon’s (2003b) Value-based Argumentation Framework (VAF), a diagram was drawn (Bex et al., 2009, p. 92) showing arguments, objections and rebuttals. Different orderings of values result in a number of competing explanations. The most preferred value is important for providing an ordering of the motivations of Ahab and Ishmael. Alternatives for Ahab’s motivation are: murder, arguable manslaughter, he did not push, or mercy killing. Alternatives for Ishmael’s motivation are: suicide, sacrifice to let Ahab pass, or he did not jump (ibid., pp. 92–93). Bex et al. (2009, p. 94) acknowledged that the most relevant related work is Walton and Schafer (2006).

3.5 Arguments in PERSUADER

Negotiation involves discretionary decision making. PERSUADER is a classic example of a computer tool supporting human negotiation. Some tools for negotiation belong in AI & Law, and have proven useful for avoiding litigation in court: this is the case of Split Up, a tool developed in Australia in order to help divorcing couples; it makes use of an argument-based knowledge representation in order to meet the expectations of both spouses, so that they may be spared the expenses of litigation (Zeleznikow & Stranieri, 1998).

Some of the research into computational models of argumentation has been concerned with persuasion arguments, i.e., arguments that the parties put forth in an attempt to convince each other. Prakken (2006) provided an overview of formal systems for persuasion dialogue. Gilbert, Grasso, Groarke, Gurr, and Gerlofs (2003) described a Persuasion Machine. Persuasive political argument is modelled in Atkinson, Bench-Capon, and McBurney (2005c). For a treatment of AI modelling of persuasion in court, see Bench-Capon (2003a, 2003b). Also see Bench-Capon (2002) and Greenwood, Bench-Capon, and McBurney (2003).

One possibility – in the words of Bex et al. (2009, p. 92) – is to

form the arguments into a Value-based Argumentation Framework (VAF), introduced in Bench-Capon (2003b). A VAF is an extension of the argumentation frameworks (AFs) of Dung (1995). In an AF an argument is admissible with respect to a set of arguments S if all of its attackers are attacked by some argument in S, and no argument in S attacks an argument in S.

In contrast (ibid.):

In a VAF an argument succeeds in defeating an argument it attacks only if its value is ranked as high as, or higher than, the value of the argument attacked. In VAFs audiences are characterised by their ordering of the values.Footnote 8 Arguments in a VAF are admissibleFootnote 9 with respect to an audience A and a set of arguments S if they are admissible with respect to S in the AF which results from removing all the attacks which do not succeed with respect to the ordering on values associated with audience A. A maximal admissible set of a VAF is known as a Preferred Extension (PE).
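
The two quoted definitions are simple enough to render directly in Python; the argument names and values below are illustrative assumptions of ours.

def admissible(arg, S, attacks):
    # arg is admissible w.r.t. S iff S is conflict-free and every
    # attacker of arg is attacked by some member of S.
    conflict_free = not any((x, y) in attacks for x in S for y in S)
    defended = all(any((z, y) in attacks for z in S)
                   for (y, x) in attacks if x == arg)
    return conflict_free and defended

def vaf_reduction(attacks, value_of, rank):
    # Keep an attack x -> y only if it succeeds for this audience,
    # i.e. x's value is ranked as high as, or higher than, y's value.
    return {(x, y) for (x, y) in attacks
            if rank[value_of[x]] >= rank[value_of[y]]}

attacks = {("B", "A"), ("C", "B")}
value_of = {"A": "life", "B": "property", "C": "life"}
rank = {"life": 2, "property": 1}        # this audience ranks life higher

reduced = vaf_reduction(attacks, value_of, rank)   # B's attack on A fails
print(admissible("A", {"A", "C"}, reduced))        # True for this audience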

Katia Sycara’s PERSUADER is a computer system for argumentation-based negotiation (Sycara, 1989a, 1989b, 1990, 1992; Lewis & Sycara, 1993). Its application is to labour negotiation. Being a multi-agent system, it involved three agents: a trade union negotiating on behalf of its workers, a company, and a mediator. These try to reach an agreement, through an iterated cycle of exchanging proposals and counter-proposals. The issues negotiated in the PERSUADER project were various, including wages, pensions, seniority, and subcontracting.

For each agent, its beliefs were represented in PERSUADER; these beliefs were about that agent’s goals and the interrelationships among those goals. For a particular position, PERSUADER could generally generate more than one possible argument. The weakest type of argument was presented first, and then arguments were presented in order of increasing strength (Sycara, 1989b, p. 131), in the following order (a code sketch follows the list):

1. appeal to universal principle;

2. appeal to a theme;

3. appeal to authority;

4. appeal to “status quo”;

5. appeal to “minor standards”;

6. appeal to “prevailing practice”;

7. appeal to precedents as counter-examples;

8. threaten.
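
In code, the weakest-first strategy amounts to iterating over this ordered list; the applicability test in the Python sketch below is an illustrative stand-in for PERSUADER’s goal-graph search, not Sycara’s implementation.

ARGUMENT_TYPES = [                        # weakest first (Sycara, 1989b)
    "appeal to universal principle",
    "appeal to a theme",
    "appeal to authority",
    "appeal to 'status quo'",
    "appeal to 'minor standards'",
    "appeal to 'prevailing practice'",
    "appeal to precedents as counter-examples",
    "threaten",
]

def next_argument(applicable):
    # Present the weakest applicable argument; escalate only if needed.
    for arg_type in ARGUMENT_TYPES:
        if arg_type in applicable:
            return arg_type
    return None

print(next_argument({"threaten", "appeal to 'prevailing practice'"}))
# -> "appeal to 'prevailing practice'"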

The goals of an agent were ranked by means of an integer value quantifying their respective strengths. For example:

Importance of wage-goal1 is 6 for union1

Starting from this statement, in order to generate arguments, PERSUADER would search the goal-graph of the opposing agent (the company), and (according to Sycara, 1989b, p. 131) find out that:

Increase in wage-goal1 by company1 will result in increase in economic-concessions, labour-cost1, production-cost1
Increase in wage-goal1 by company1 will result in decrease in profits1
To compensate, company1 can decrease fringe-benefits1, decrease employment1, increase plant-efficiency1, increase sales1

How does such a remedy on the part of the company conflict with the union’s goals? PERSUADER would detect right away, based on the union’s goal-graph, that:

Only decrease fringe-benefits1, decrease employment1 violate goals of union1
Importance of fringe-benefits1 is 4 for union1
Importance of employment1 is 8 for union1
Since importance of employment1 > importance of wage-goal1
One possible argument found

The argument generated (Sycara, 1989b), made to the trade union after the latter refused a proposed wage increase, is that:

If the company is forced to grant higher wage increases, then it will decrease employment.

In fact, the company could remedy the situation by reducing employment, because it has the option to resort to subcontracting or to increase automation. The graph shown in Fig. 3.5.1 represents the beliefs of a company whose overarching goal is to maximise its profits. In order to increase profits, the company believes that it should decrease production costs or increase sales. In order to increase sales, the company should set for itself the subgoals of increasing quality or decreasing prices. In order to decrease production cost, the company should set for itself the subgoals of increasing plant efficiency, decreasing materials cost, or decreasing labour cost. In order to achieve a decrease in labour cost, the company could decrease employment (by increasing automation or subcontracting), or it should obtain economic concessions from its workforce, by decreasing wages or by decreasing fringe benefits. An increase in employee satisfaction would be beneficial for increasing plant efficiency. Employee satisfaction would increase if non-economic concessions are made, or if economic concessions are made by increasing wages.

Fig. 3.5.1
figure 3_23_191169_1_En

The hierarchy (tree) of beliefs of a company concerning what its goals are, and how they relate to each other
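
The search just described can be sketched in a few lines of Python. The relations and the importance values are those quoted from Sycara (1989b, p. 131), but their encoding as dictionaries, and the function name, are assumptions of this sketch.

company_compensations = {                 # company1's remedies per demand
    "wage-goal1": ["decrease fringe-benefits1", "decrease employment1",
                   "increase plant-efficiency1", "increase sales1"],
}
union_goal_importance = {                  # integer strengths, for union1
    "wage-goal1": 6, "fringe-benefits1": 4, "employment1": 8,
}
violates_union_goal = {                    # remedies that hurt union goals
    "decrease fringe-benefits1": "fringe-benefits1",
    "decrease employment1": "employment1",
}

def find_argument(demand):
    # Find a remedy whose violated union goal outweighs the demand itself.
    for remedy in company_compensations[demand]:
        hurt = violates_union_goal.get(remedy)
        if hurt and union_goal_importance[hurt] > union_goal_importance[demand]:
            return f"If the company is forced to grant {demand}, it will {remedy}."
    return None

print(find_argument("wage-goal1"))
# -> the employment argument (importance 8 > 6), as generated by PERSUADER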

A possible criticism that could be levelled at such workings of PERSUADER is that it makes the respective positions of the parties too rigid. Human negotiators often possess more knowledge of specifics or of contingencies than is captured in a general goal-hierarchy,Footnote 10 as well as knowledge that either party or both may be reluctant to make explicit, except when it is convenient. This is why human negotiators may find some leeway when requesting or making concessions.

A logical representation was adopted by Kraus, Sycara, and Evenchik (1998), in order to model ideas about negotiation that were present in PERSUADER. My former colleague Sarit Kraus authored a related book (2001), Strategic Negotiation in Multiagent Environments.

3.6 Representing Arguments in Carneades

3.6.1 Carneades vs. Toulmin

In Toulmin’s model, as seen in Fig. 3.2.1.1, an argument consists of a single premise (the “Datum” or “Data”), the Claim (which is the conclusion), a Qualifier which states the probative value of the inference (e.g., necessarily, or presumably), the Warrant (a kind of rule which supports the inference from the premise to the conclusion of the argument), the Backing (an additional piece of data, which provides support for the warrant), and a Rebuttal (which is an exception).

Gordon and Walton (2006)Footnote 11 described a formal model, implemented in Carneades, using a functional programming language and Semantic Web technologies. In the model underlying this tool, instead of Toulmin’s single datum there generally is a set of premises. A Rebuttal is modelled using a contrary argument. The Qualifier, which in Toulmin’s approach indicates the probative weight of the argument, in Carneades is handled by means of a degree, out of a set of proof standards (see below). Carneades treats Warrant and Backing differently from Toulmin. In fact, Carneades does not directly allow arguments about other arguments, and the conclusion of an argument must be a statement. Therefore, with Carneades the equivalent of Toulmin’s Warrant is to add a presumption for the warrant to the premises of an argument. “Backing, in turn, can be modelled as a premise of an argument supporting the warrant” (ibid.).

3.6.2 Proof Standards in Carneades

Let us consider in particular the standards of proofFootnote 12 as represented in Carneades. Gordon and Walton (2006) define four proof standardsFootnote 13 for Carneades (a code sketch follows the list below). “If a statement satisfies a proof standard, it will also satisfy all weaker proof standards”.

1. The weakest is SE (scintilla of evidence): “A statement meets this standard iff it is supported by at least one defensible pro argument”.

2. The second weakest is PE (preponderance of the evidence): “A statement meets this standard iff its strongest defensible pro argument outweighs its strongest defensible con argument”.

3. A stronger standard is DV (dialectical validity): “A statement meets this standard iff it is supported by at least one defensible pro argument and none of its con arguments is defensible”.

4. The strongest is BRD (beyond reasonable doubt: not necessarily in its legal meaning): “A statement meets this standard iff it is supported by at least one defensible pro argument, all of its pro arguments are defensible and none of its con arguments are defensible”.
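
The four standards can be rendered as a short Python function. Representing each argument as a (weight, defensible) pair, with numeric weights for the comparison required by PE, is an assumption of this sketch rather than Gordon and Walton’s (2006) exact machinery.

def meets(standard, pro, con):
    # pro, con: lists of (weight, defensible) pairs for one statement.
    dpro = [w for (w, ok) in pro if ok]    # defensible pro arguments
    dcon = [w for (w, ok) in con if ok]    # defensible con arguments
    if standard == "SE":                   # scintilla of evidence
        return bool(dpro)
    if standard == "PE":                   # preponderance of the evidence
        return bool(dpro) and (not dcon or max(dpro) > max(dcon))
    if standard == "DV":                   # dialectical validity
        return bool(dpro) and not dcon
    if standard == "BRD":                  # beyond reasonable doubt
        return bool(dpro) and all(ok for (_, ok) in pro) and not dcon
    raise ValueError(standard)

pro = [(0.8, True), (0.3, False)]
con = [(0.5, True)]
for s in ("SE", "PE", "DV", "BRD"):
    print(s, meets(s, pro, con))   # SE True, PE True, DV False, BRD False

Note how the outcomes respect the quoted ordering: a statement failing DV also fails the stronger BRD.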

3.6.3 The Notation of Carneades

In Gordon and Walton’s (2006) notation for argument graphs, a circle node is an argument and a box node is a statement. The label for the argument or the statement is inside the circle or the box. Arguments have boxes on both sides along a path: boxes and circles alternate. Edges in the graph are labelled as follows.

If there is a black filled circle (which means presumption) at the end of an edge –• which touches an argument node, this indicates that the statement in the source node (a box) is a presumption, and as such it is a premise of that argument.

If the circle is hollow, instead, then this edge ⊸ stands for an exception, and the exception statement is a premise for the argument. If the edge has an arrowhead, then the statement in its source is an ordinary premise.

In the formulae which accompany the argument graph within the same approach, each formula is labelled with an argument identifier, and each formula has a left-hand side (the set of premises), a right-hand side (a statement identifier, this being the conclusion), and an arrow from the left-hand side to the right-hand side.

The arrow indicates this is a pro argument, but if its head is not an arrow head but rather a hollow circle, then this is a con (contrary) argument.

The left-hand side of the rule is a list of premises, separated by commas. The premises may be statement identifiers with no circle prefix (then this is an ordinary premise), or a statement identifier prefixed with a black circle (then this is a presumption), or a statement identifier prefixed with a hollow circle (then this is an exception).
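
The following Python sketch shows one way of encoding such formulae; the class names and the textual rendering of the circle prefixes are assumptions of ours, not Carneades code.

from dataclasses import dataclass

@dataclass
class Premise:
    statement: str
    kind: str = "ordinary"      # "ordinary" | "presumption" | "exception"

@dataclass
class Argument:
    arg_id: str
    premises: list
    conclusion: str             # a statement identifier
    pro: bool = True            # False: a con (contrary) argument

    def render(self) -> str:
        prefix = {"ordinary": "", "presumption": "●", "exception": "○"}
        lhs = ", ".join(prefix[p.kind] + p.statement for p in self.premises)
        head = "→" if self.pro else "⊸"
        return f"{self.arg_id}: {lhs} {head} {self.conclusion}"

a1 = Argument("a1", [Premise("s1"), Premise("s2", "presumption"),
                     Premise("s3", "exception")], "s0")
print(a1.render())              # a1: s1, ●s2, ○s3 → s0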

Examples of formulae are shown in Table 3.6.3.1.

Table 3.6.3.1 Examples of notation in Carneades

These are two of the five formulae which accompany Fig. 1 in Gordon and Walton (2006); a reduced version of that figure (representing only the two formulae given above) appears here as Fig. 3.6.3.1.

Fig. 3.6.3.1
figure 3_24_191169_1_En

A tree-like representation of formulae

3.7 Some Computer Tools that Handle Argumentation

Not all computer tools handle argumentation from the same perspective, or with the same theoretical foundations, or with a similar interface structure or protocol. Take Convince Me (Schank & Ranney, 1995), one of the argumentation visualisation tools reviewed in van den Braak, van Oostendorp, Prakken, and Vreeswijk (2006). It is based on Thagard’s Theory of Explanatory Coherence (e.g., Thagard, 1989, 2000a, 2000b, 2004); the arguments consist of causal networks of nodes (which can display either evidence or hypotheses), and the conclusion which users draw from them. Convince Me predicts the user’s evaluations of the hypotheses based on the arguments produced, and gives feedback about the plausibility of the inferences which the users draw.

Some tools envisage collaboration among users. Reason!Able, developed by Tim van Gelder (2002),Footnote 14 a philosopher from the University of Melbourne, is not designed for collaboration: the intended primary usage is by one user per session. Reason!Able guides the user step-by-step through the process of constructing an argument tree,Footnote 15 containing claims, reasons, and objections, the latter two kinds being complex objects which can be unfolded to see the premises. It is well suited to its intended purpose of single-user instruction in, and learning of, argumentation techniques.

Collaborative problem identification and solving is the purpose of IBIS, an Issue-Based Information System. Problems are decomposed into issues. QuestMap (Carr, 2003) is based on IBIS, mediates discussions, supports collaborative argumentation, and creates information maps, in the context of legal education.

The convenience of displaying the structure of arguments visually has prompted the development of tools for that taskFootnote 16; e.g., Carr (2003) described the use of the already mentioned computer tool QuestMap (Conklin & Begeman, 1988) for visualising arguments, for use in teaching legal argumentation; the paper was published in a volume itself devoted to software tools for visualising argumentation. Reed and Rowe (2001), at the University of Dundee in Scotland, described an argument visualisation system called Araucaria.Footnote 17 Arguments analysed using this tool can be saved in a format called AML (Argument Markup Language), which is an XML language (concerning XML, see Section 6.1.7.2 in the present book). According to Walton et al. (2008, p. 24),

Araucaria is similar to a software tool called Reason!Able [… see above], which has been well tested and is very simple and easy to use. Where Araucaria is aimed at argument analysis, for researchers and undergraduate teaching, Reason!Able is aimed at argument construction, for more introductory teaching earlier in the curriculum. The two thus complement each other.

Araucaria is not only a software tool for argument analysis; it also makes it possible to base the analysis on argumentation schemes. This was discussed by Walton et al. (2008, pp. 367–415), who showed “how, with an understanding of defeasibility in schemes, various techniques can be used to formally describe argumentation schemes” (ibid., p. 392). Prakken, Reed, and Walton (2003), a paper on using argumentation schemes for reasoning about legal evidence, is mainly an exploration of applying Araucaria to an analysis in the style of Wigmore Charts. That article discussed appropriate argument structures for reasoning about evidence in relation to hypothesising crime scenarios. Bart Verheij (1999, 2003) described the ArguMed computer tool for visualising arguments, whereas Loui et al. (1997) proposed a tool called Room 5. ArguMed was discussed from a comparative perspective by Walton et al. (2008, pp. 397–399). In particular, they remarked about a peculiar trait (ibid., p. 398):

In ArguMed, undercutting moves, like asking a critical question, are modelled by a concept called entanglement. The question, or other rebuttal, attacks the inferential link between the premises and conclusion of the original argument, and thereby requires the retraction of the original conclusion. On a diagram, entanglement is represented as a line that meets another line at a junction marked by an X.

In his book Virtual Arguments, Verheij (2005) discussed the design of software tools being “argument assistants” for lawyers and other arguers. Several tools or approaches to argument visualisation were reported about in a paper collection edited by Kirschner, Buckingham Shum, and Carr (2003).

3.8 Four Layers of Legal Arguments

Lodder (2004) proposed a procedural model of legal argumentation. Prakken and Sartor (2002) usefully “propose that models of legal argument can be described in terms of four layers.

1. The first, logical layer defines what arguments are, i.e., how pieces of information can be combined to provide basic support for a claim.

2. The second, dialectical layer focuses on conflicting arguments: it introduces such notions as ‘counterargument’, ‘attack’, ‘rebuttal’ and ‘defeat’, and it defines, given a set of arguments and evaluation criteria, which arguments prevail.

3. The third, procedural layer regulates how an actual dispute can be conducted, i.e., how parties can introduce or challenge new information and state new arguments. In other words, this level defines the possible speech acts, and the discourse rules governing them. Thus the procedural layer differs from the first two in one crucial respect. While those layers assume a fixed set of premises, at the procedural layer the set of premises is constructed dynamically, during a debate.

4. This also holds for the final layer, the strategic or heuristic one, which provides rational ways of conducting a dispute within the procedural bounds of the third layer” (Prakken & Sartor, 2002, section 1.2).

3.9 A Survey of the Literature on Computational Models of Argumentation

3.9.1 Within AI & Law

Within AI & Law, models of argumentation are thriving. Within the compass of this book we can cite relevant work, but the extent to which we actually delve into its content is limited. Let us start by citing the literature; we shall turn to a short discussion next. A good survey from which to start is Prakken and Sartor (2002), which discusses the role of logic in computational models of legal argument. “Argumentation is one of the central topics of current research in Artificial Intelligence and Law. It has attracted the attention of both logically inclined and design-oriented researchers. Two common themes prevail. The first is that legal reasoning is defeasible, i.e., an argument that is acceptable in itself can be overturned by counterarguments. The second is that legal reasoning is usually performed in a context of debate and disagreement. Accordingly, such notions are studied as argument moves, attack, dialogue, and burden of proof” (ibid., p. 342).

“The main focus” of major projects in the “design” strand “is defining persuasive argument moves, moves which would be made by ‘good’ human lawyers. By contrast, much logic-based research on legal argument has focused on defeasible inference, inspired by AI research on nonmonotonic reasoningFootnote 18 and defeasible argumentation” (ibid., p. 343).

One should not mistakenly believe that models of argument are either logicist, or pragmatic ad hoc treatments which are not probabilistic. There is also an important category, probabilistic models of argument, with which we are not concerned in this chapter; we deal with probabilistic models elsewhere in this book.

In the literature on computational models of argumentation within AI & Law, the HYPO system, CABARET, and CATO (in chronological order) were prominent during the 1990s.Footnote 19 Other important research was conducted by Bench-Capon’s team in Liverpool, and by Prakken and his collaborators. Books include Prakken (1997) and Ashley (1991).Footnote 20 There are also several paper collections, stemming from conferences, which are devoted to computational models of argumentation, and of legal argument in particular.Footnote 21 The literature is vast.Footnote 22

For a treatment of the generation of intentions through argumentation, see Atkinson, Bench-Capon, and McBurney (2005a); cf. Atkinson, Bench-Capon, and McBurney (2005b). Kowalski and Toni (1996) discuss a logical model of abstract argumentation, in an AI & Law forum. Bondarenko, Dung, Kowalski, and Toni (1997), also stemming from Kowalski’s team at Imperial College, London, approached default reasoning by means of an abstract argumentation-theoretic framework. Toni and Kowalski (1996) apply an argumentation-theoretic approach to the transformation of logic programs. The starting point of Cayrol and Lagasquie-Schiex (2006) is bipolar argumentation frameworks, i.e., frameworks in which the interaction between arguments can be not only attack but also, explicitly, support; they go on to propose a framework “where conflicts occur between sets of arguments, characterised as coalitions of supporting arguments”.

In logic-based research, “the focus was first on reasoning with rules and exceptions and with conflicting rules. After a while, some turned their attention to logical accounts of case-based reasoning […]. Another shift in focus occurred after it was realised that legal reasoning is bound not only by the rules of logic but also by those of fair and effective procedure. Accordingly, logical models of legal argument have been augmented with a dynamic component, capturing that the information with which a case is decided is not somehow ‘there’ to be applied, but is constructed dynamically, in the course of a legal procedure” (Prakken & Sartor, 2002, p. 343).Footnote 23

Of course, legal argument is not necessarily about the evidence. Dung, Thang, and Hung (2010), a team based in Thailand, presented an application of AI & Law to the interpretation of contracts. As Grasso, Rahwan, Reed, and Simari (2010, p. 4) summarise Dung et al. (2010):

Interaction between parties needed to interpret a contract can be abstractly perceived as the exchange of arguments in support or against a given interpretation of the contract. Following this view, the main contribution of the work is an argument-based formalism that handles contract dispute resolution where the court will play the role of resolving the ongoing contract dispute by enforcing an interpretation of the contract that could be considered as representing the mutual intention of the involved parties in a fair manner. The formalism is based on modular argumentation, a recently proposed extension of assumption-based argumentation for modelling contract dispute resolution, and the appropriateness of this formalism is demonstrated by applying it to common laws. An example is developed using the system called MoDiSo (MOdular Argumentation for DIspute ReSOlution) that consists of three doctrines here modelled.

3.9.2 Within Other Research Communities

Computational modelling has concerned itself with arguments also outside the research communities of AI & Law and of communication in multi-agent systems, and outside the work of scholars who have contributed to those domains anyway. This is the case of a philosopher, Ghita Holmström-Hintikka (2001), who has applied the Interrogative Model for Truth-seeking, developed by Jaakko Hintikka for use in the philosophy of science, to legal investigation, and in particular to expert witnesses giving testimony and being interrogated in court; a previous paper of hers (Holmström-Hintikka, 1995), about expert witnesses, appeared in the journal Argumentation.Footnote 24 In 2010, Taylor and Francis launched their journal Argument & Computation.

This followed several conferences, as well as thematic issues in various journals. Journal special issues about computational models of argumentation include ones published in the journals Artificial Intelligence (Bench-Capon & Dunne, 2007); IEEE Intelligent Systems (Rahwan & McBurney, 2007); International Journal of Intelligent Systems (Reed & Grasso, 2007); Argumentation, this one on current use of Toulmin (Hitchcock & Verheij, 2005); Journal of Autonomous Agents and Multi-Agent Systems, on argumentation in multi-agent systems (Rahwan, 2005); Artificial Intelligence and Law (Bench-Capon & Dunne, 2005); Journal of Logic and Computation (Brewka, Prakken, & Vreeswijk, 2003); Informal Logic Journal (Gilbert, 2002); Computational Intelligence (Chaib-Draa & Dignum, 2002).

In the introduction to the inaugural issue of Argument & Computation, Grasso et al. (2010, p. 1) remarked:

Over the past decade or so, a new interdisciplinary field has emerged in the ground between, on the one hand, computer science – and artificial intelligence in particular – and, on the other, the area of philosophy concentrating on the language and structure of argument.

There are now hundreds of researchers worldwide who would consider themselves a part of this nascent community. Various terms have been proposed for the area, including “Computational Dialectics,” “Argumentation Technology” and “Argument-based Computing,” but the term that has stuck is simply Argument & Computation. It encompasses several specific strands of research, such as:

  • the use of theories of argument, and of dialectic in particular, in the design and implementation of protocols for multi-agent action and communication;

  • the application of theories of argument and rhetoric in natural language processing and affective computing;

  • the use of argument-based structures for autonomous reasoning in artificial intelligence, and in particular, for defeasible reasoning;

  • computer supported collaborative argumentation – the implementation of software tools for enabling online argument in domains such as education and e-government.

These strands come together to form the core of a research field that covers parts of artificial intelligence (AI), philosophy, linguistics and cognitive science, but, increasingly, is building an identity of its own.

Models for generating arguments automatically have been developed by computational linguists whose research is mainly concerned with tutorial dialogues (Carenini & Moore, 1999, 2001). ABDUL/ILANA, a tool from the early 1980s, was also developed by computational linguists: it was an AI program that simulated the generation of adversary arguments about an international conflict (Flowers et al., 1982). In a disputation with adversary arguments, the players do not actually expect to convince each other; their persuasion goals target observers. Persuasion arguments, instead, aim at persuading one’s interlocutor, too.Footnote 25 Gilbert et al. (2003) described a Persuasion Machine. Persuasive political argument is modelled in Atkinson et al. (2005c). For a treatment of AI modelling of persuasion in court, see e.g. Bench-Capon (2003a, 2003b). Also see Bench-Capon (2002) and Greenwood et al. (2003).Footnote 26

Arguments are also used by a rational agent on his own, when revising his beliefs: see on this Paglieri and Castelfranchi (2005) and Harman (1986). Work on argumentation by computer scientists may even have been as simple as a mark-up language for structuring and tagging natural language text according to the line of argumentation it propounds: Delannoy (1999) tentatively proposed that his own argumentation mark-up was unprecedented, but he was unaware of a previous proposal which Nissan and Shimony had published in a journal in 1996 (Nissan & Shimony, 1996) and demonstrated by tagging an article in biology.

Parsons and McBurney (2003) have been concerned with argumentation-based communication between agents in multiagent systems.Footnote 27 This is also the context of Paglieri and Castelfranchi (2005), even though the latter is rather concerned with an agent revising his beliefs through contact with the environment. Kibble (2004) uses Brandom’s inferential semantics and Habermas’ theory of communicative action (which are oriented to social constructs rather than mentalistic notions), “in order to develop a more fine-grained conceptualisation of notions like commitment and challenge in the context of computational modelling of argumentative dialogue”.Footnote 28 Commitments are intersubjectively observable (Singh, 1999), whereas “agent design in terms of notions such as belief and intention faces the software engineering problem that it is not generally possible to identify data structures corresponding to beliefs and intentions in heterogeneous agents [(Wooldridge, 2000)], let alone a ‘theory of mind’ enabling agents to reason about agents’ beliefs” (Kibble, 2004).

3.10 Computational Models of Legal Argumentation About Evidence

3.10.1 Some Early and Ongoing Research

David Schum (1993, p. 175) makes the following considerations:

I have often wondered how many of the subtleties in evidence presented at trial are actually recognized by factfinders [i.e., jurors or the judge] and then incorporated in their conclusions. William Twining (1984) goes even farther in wondering how skilful are advocates themselves in recognizing evidentiary subtleties and then in explaining their significance to factfinders. One thing certain is that skilful advocates do not usually offer evidence haphazardly at trial but according to some design or strategy, the objective in such strategies being the presentation of what advocates judge to be the best possible argument on behalf of their clients. […] That different arguments are possible from the same evidence is one reason why there is to be a trial in the first place.

David Schum is the scholar who first combined computing, evidence, and argumentation. A scholar prominent in applying computational, logic-based, theoretically neat models of argumentation to legal evidence is Henry Prakken, who has done so with different co-authors. Prakken did so at a time when, as well as shortly after, a body of published research on AI techniques for dealing with legal evidence started to emerge (mainly in connection with mostly separate organisational efforts by Nissan, Tillers, and Zeleznikow). Until Prakken’s efforts,Footnote 29 the only ones who applied argumentation to computer modelling of legal evidence were Schum (in several publications) and Gulotta and Zappalà (2001): the latter explored two criminal cases by resorting to an extant tool for argumentation, DART, of Freeman and Farley (1996), as well as other tools.

Prakken and Renooij (2001) explored different methods for causal reasoning: section 5 in that paper is about argument-based reconstruction of a given case involving a car accident. The main purpose of Prakken (2004) “is to advocate logical approaches as a worthwhile alternative to approaches rooted in probability theory” (Prakken, 2004), discussing in particular logics for defeasible argumentation. “What about conflicting arguments? When an argument is deductive, the only possible attack is on its premises. However, a defeasible argument can be attacked even if all its premises are accepted”: “One way to attack it is to rebut it, i.e., to state an argument with an incompatible conclusion. […] A second way to attack the argument is to undercut it, i.e., to argue that in this case the premises do not support its conclusion” (Prakken, 2004, section 3.2).

Prakken (2001) “investigates the modelling of reasoning about evidence in legal procedure. To this end, a dialogue game model of the relevant parts of Dutch civil procedure is developed with three players: two adversaries and a judge” (ibid., p. 119). “[I]n the current models the judge’s role, if modelled at all, is limited to the simple activity of determining the truth of the parties’ claims. Yet in actual legal procedures judges have a much more elaborate role. For instance, in Dutch civil procedure judges allocate the burden of proof, determine whether grounds sufficiently support a claim, complete the parties’ arguments with legal and common knowledge, decide about admissibility of evidence, and assess the evidence” (ibid., p. 119).

Limitations of the dialogue game in Prakken (2001) listed there include the following (ibid., p. 128):

Firstly, the requirement that each move replies to a preceding move excludes some useful moves, such as lines of questioning in cross-examination of witnesses, with the goal of revealing an inconsistency in witness testimony. Typically, such lines of questioning do not want to reveal what they are aiming at. Secondly, at several points, the present ways to model legal-procedural acts have no clear one-to-one correspondence with the language of legal decisions. For instance, judges often merge their decisions on internal and dialectical strength of an argument: usually they regard the presence of a defeating counterargument as evidence that the argument is not internally valid.

Prakken et al. (2003) developed an analysis of evidence in the style of Wigmore Charts, using the Araucaria software of the University of Dundee in Scotland (Reed & Rowe, 2001, 2004), and argued for the use of argumentation schemes, which capture recurrent patterns of argumentation. Examples of recurrent patterns, that paper pointed out, are to be found in “inferences from witness or expert testimonies, causal arguments, or temporal projections”. The criminal case used by way of example in Prakken et al. (2003) and Bex, Prakken, Reed, and Walton (2003) is Commonwealth v. Umilian (1901, Supreme Judicial Court of Massachusetts, 177 Mass. 582), taken from Wigmore’s Principles (1931, pp. 62–66). It is a case that David Schum, too, uses on occasion for illustrating his own methods. Umilian, a farm labourer along with Jedrusik, was accused of murdering the latter, after discovering that Jedrusik was the author of a letter which falsely advised a priest that Umilian had a wife and children in the old country, so that Umilian’s marriage to a local maid at the farm would not take place. Umilian’s wedding was eventually celebrated, but he threatened to take revenge on Jedrusik, who disappeared and whose body was then found. For the period around the murder, Umilian and Jedrusik had been isolated in the area of the barn where the body was eventually found. It is an interesting case, and its argumentation lends itself to being displayed in Wigmore Charts.

Selmer Bringsjord, with Shilliday, Taylor, Clark and Khemlani (2006), described Slate, a computer tool for supporting reasoning by argumentation, which produces explanations in simplified English. Part of the exemplification is about reasoning about hypotheses in criminal investigation.Footnote 30 It is unclear to me whether Slate can be usefully applied to serious crime analysis and intelligence analysis in real-world situations, as the exemplification in the given paper was rather like a whodunit puzzle,Footnote 31 but reportedly Bringsjord has been working on real case studies in intelligence analysis for the United States’ ARDA.Footnote 32

3.10.2 Stevie

Susan van den Braak and Gerard Vreeswijk, computer scientists from the University of Utrecht, and Prakken’s colleagues, have developed Stevie (van den Braak & Vreeswijk, 2006). Stevie is a knowledge representation architecture, “based on known argument ontologies and argumentation logics”, “to be used as a support tool to analyse criminal cases” “by allowing case analysts to visualize evidence in order to construct coherent stories. It allows them to maintain overview over all information during an investigation, so that different scenarios can be compared. Moreover, they are able to express the reasons why certain evidence supports the scenarios”. “Stevie is able to represent multiple cases and to support multiple users”. Permanent links to external source documents can be set. Other links, to external databases, enable “to retrieve simple factual information such as quotes from witness testimonies and other original source documents”.

In the Stevie approach, stories are “hypothetical reconstructions of what might have happened”. “Stevie uses defeasible reasoning […] to distill stories out of large quantities of information”, where “a story is a conflict-free and self-defending collection of claims (I-nodes). A story is conflict-free if (and only if) it does not contain a conflicting pair of I-nodes”. Moreover, “a story is self-defending if (and only if) every argument (made up of I-nodes and S-nodes) against an element of that story can be countered with an argument made up of I-nodes that belong to that story”. Besides, “a third constraint on stories” is “that they must be temporally consistent”.

An I-node is “an elementary piece of information that is used in modeling cases”, and is either a quotation node or an interpretation node. An S-node is a scheme instance, where schemes are “predefined patterns of reasoning. A single scheme describes an inference, the necessary prerequisites for that inference, and possible critical questions that might undercut the inference”.
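
A minimal Python sketch of the two story constraints quoted above may be useful; the encoding of conflicts, attacks, and counterarguments as sets and dictionaries is an illustrative assumption of ours, not Stevie’s internal representation.

def conflict_free(story, conflicts):
    # No conflicting pair of I-nodes may occur within the story.
    return not any((a, b) in conflicts for a in story for b in story)

def self_defending(story, attacks_on, counters):
    # Every argument against an element of the story must be countered
    # by an argument made up of I-nodes belonging to the story.
    countered = {arg for c in story for arg in counters.get(c, ())}
    return all(arg in countered
               for c in story for arg in attacks_on.get(c, ()))

story = {"suspect_was_at_scene", "witness_w_is_reliable"}
conflicts = {("suspect_was_at_scene", "suspect_was_abroad")}
attacks_on = {"witness_w_is_reliable": ["w_has_lied_before"]}
counters = {"witness_w_is_reliable": ["w_has_lied_before"]}

print(conflict_free(story, conflicts))               # True
print(self_defending(story, attacks_on, counters))   # True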

3.11 Argumentation for Dialectical Situations, vs. for Structuring Knowledge Non-dialectically, and an Integration of the Two

3.11.1 Three Categories of Concepts Grouping Concepts of Argumentation

The present Section 3.11 is based on an article by the same authors, Stranieri, Zeleznikow, and Yearwood (2001). Let us begin by saying something about conceptualisations of argumentation. According to James Freeman (1991), argumentation involves a family of concepts that can be broadly grouped into three categories:

  • concepts related to the process of engaging in an argument,

  • procedures or rules adopted to regulate the argument process, and

  • argument as a product or artefact of an argument process.

The first two categories, process and procedures, are intimately linked to a dialectical situation within a community of social agents. Freeman (1991, p. 20) defines a dialectical situation as

one that involves some opposition among participants to a discourse over some claim, that it involves interactive questioning for critically testing this claim and this process proceeds in a regimented, rule governed manner.

A dialectical situation need not occur between two independent human agents, in that monologues can be represented dialectically. For instance, a mathematician engaged in a solo demonstration that a proposition follows from axioms does not overtly engage in a discourse. Nevertheless the reasoning can be seen as a linguistic reconstruction of an imaginary discursive exchange within a community of mathematicians. Argumentation as a product or artefact of an argument process involves viewing the linguistic reconstruction of what the argumentation process and procedure have generated: it involves laying out the premises, the claims, and the layout of claims. For Freeman (1991), the distinction between the three views of argumentation – process, procedure and product – is largely illusory and unnecessarily confusing, particularly for his objective of identifying diagramming techniques for the clear articulation of arguments.

Argumentation concepts have been applied from the 1990s in a variety of knowledge engineering applications, typically without a clear delineation of argumentation as process, procedure or product, according to Freeman’s (1991) classification. The central claim advanced in this Section 3.11 (and in Stranieri et al., 2001) is that benefits inherent in the use of argumentation frameworks for information system knowledge engineering can be substantially enhanced if key features of the distinction between argumentation as process, procedure and product are maintained.

3.11.2 From the Toulmin Argument Structure, to the Generic Actual Argument Model

The rise of argumentation research within artificial intelligence, as early as the 1990s, has taken various forms. A variety of logics have been developed to represent argumentation in the context of a dialectical situation such as a dialogue. In contrast to the dialectical approach, argumentation has also been used non-dialectically, in order to provide structure for knowledge. As already seen in this book, the Toulmin Argument Structure (Toulmin, 1958) – see Fig. 3.11.2.1 – has been popular among those computer scientists who have devoted some attention to argumentation: the Toulmin structure has often been adopted to structure knowledge non-dialectically. Nevertheless, most studies that apply the Toulmin structure do not use the original structure, but vary one or more components. Variations to the Toulmin structure can be understood as different ways to integrate a dialectical perspective into one which is essentially non-dialectical.

Fig. 3.11.2.1
figure 3_25_191169_1_En

Our version of the Toulmin Argument StructureFootnote 33

Figure 3.11.2.1 represents the basic template for the knowledge representation we call a generic argument. A generic argument is an instantiation of the template that models a group of arguments. The generic argument includes: (a) a variable-value representation of the claim with a certainty slot; (b) a variable-value representation of the data items (with certainty slots) as the grounds on which such claims are made; (c) reasons for relevance of the data items; (d) inference procedures that may be used to infer a claim value from data values; (e) reasons for the appropriateness of the inference procedure.

The idea is that the generic argument sets up a template for arguments that allows the representation of the claim and the grounds for the claim. The claim of a generic argument is a predicate with an unspecified value (which can be chosen from a set when an actual argument is being made). Each data item is also a predicate with an unspecified value which can be taken from a specified set of values. The connection between the data variables and the claim variable is called an inference procedure. An inference procedure is a relation between the data space and the claim space.
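
A minimal Python sketch of this two-level idea follows; the variable names, value sets, and the inference rule are illustrative assumptions of ours, not the GAAM notation of Stranieri, Zeleznikow, and Yearwood (2001).

generic_argument = {
    # The claim and each data item are variables with fixed value sets.
    "claim_var": ("eligibility", {"eligible", "not eligible"}),
    "data_vars": {"income": {"low", "high"}, "merit": {"strong", "weak"}},
    # The inference procedure relates the data space to the claim space.
    "infer": lambda d: ("eligible" if d["income"] == "low"
                        and d["merit"] == "strong" else "not eligible"),
}

def make_actual_argument(generic, data_values):
    # An actual argument instantiates the template with specific values.
    for var, val in data_values.items():
        assert val in generic["data_vars"][var], f"{val!r} not allowed for {var}"
    claim_value = generic["infer"](data_values)
    return {"data": data_values,
            "claim": (generic["claim_var"][0], claim_value)}

print(make_actual_argument(generic_argument,
                           {"income": "low", "merit": "strong"}))
# {'data': {...}, 'claim': ('eligibility', 'eligible')}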

In this Section 3.11, the label dialectical argumentation is used to describe the modelling of discourse. This is contrasted with non-dialectical argumentation. Drawing the dialectical/non-dialectical distinction enables the specification of a framework, called the Generic Actual Argument Model (GAAM), that is expressly non-dialectical.Footnote 34 The framework enables the development of knowledge-based systems that integrate a variety of inference procedures, combine information retrieval with reasoning and facilitate automated document drafting. Furthermore, the non-dialectical framework provides the foundation for simple dialectical models. Systems based on our approach have been developed in family law, refugee law, determining eligibility for government legal aid, copyright law, and eTourism.

The central theme of the present Section 3.11 is that a distinction between argument as process and procedure, called here dialectical, and argument as product, called non-dialectical, serves useful purposes for knowledge engineering, in that it has motivated the development of a knowledge representation framework that clearly separates the two perspectives. A framework for knowledge engineering that expressly supports the non-dialectical perspective is described. This non-dialectical framework, the Generic Actual Argument Model (GAAM), directly facilitates the development of hybrid systems, intelligent document drafting, data mining and intelligent information retrieval, and provides a knowledge representation base that is the foundation for dialectical models.

The Generic Actual Argument Model (GAAM) is a variant of the layout of arguments advanced by Toulmin (1958). Arguments for non-dialectical purposes are represented at two levels of abstraction: the generic and the actual level. The generic level is sufficiently general so as to represent claims made by all members of a discursive community. All participants use the same generic arguments to construct, by instantiation, their own actual arguments. The generic arguments represent a detailed layout of arguments acceptable to all participants, whereas the actual arguments capture a participant’s position with respect to each argument. The actual arguments that one participant advances are more easily compared with those advanced by another in a dialectical exercise because, in both cases, the actual arguments have been derived from a generic template that all participants share.

3.11.3 Dialectical vs. Non-Dialectical Argumentation

Recall that we agreed that in this Section 3.11, the label dialectical argumentation is used to describe the modelling of discourse. This is contrasted with non-dialectical argumentation. Argumentation as dialectic (process and procedure) is used in order to model situations that involve discourse within a community of agents. The agents need not be independent human agents engaged in group discussion but may even be a single software agent that has internal processes that involve dialectical exchange. In contrast, non-dialectical argumentation describes the use of argumentation to order, organise or structure knowledge without directly modelling a dialectical exchange.

Until recent decades, argumentation theories have been advanced for philosophical pursuits and not specifically to enhance knowledge engineering. As a consequence, the distinction between dialectical and non-dialectical use of argumentation concepts is rarely prominent.

For example, Aristotle presented three types of arguments: demonstrations, dialectical deductions and contentious deductions (Topics, Book 1, 100a, 27–30). Although each of Aristotle’s three types of argument can be seen as arising out of discursive exchanges, there is an implicit emphasis on the dialectical perspective for dialectical deductions, because these arguments are made on the basis of premises that are debatable. They typically concern opinions that are adhered to with variable intensity by community members, whereas demonstrations are assumed to have more of a ring of universal acceptance. Demonstrations are arguments whose claims are made from premises that are true and primary, known in more modern terminology as analytic proofs. Contentious deductions are arguments that appear acceptable at first sight but, upon closer inspection, are not.

The analysis of argument advanced by Toulmin (1958) does not distinguish dialectical from non-dialectical argumentation. By illustrating that logic could be seen as a kind of generalised jurisprudence rather than as a science, Toulmin (1958) advanced a structure of argument that captures the layout of arguments. Jurisprudence focuses attention on procedures by which legal claims are advanced and attacked and, in a similar way, Toulmin sought to identify procedures by which any claim, in general, is advanced. He identified a layout of arguments that was constant regardless of the content of the argument.

As already seen earlier in Section 3.2 in this book, Toulmin (1958) concluded that most arguments, regardless of the domain, have a structure which consists of six basic invariants:

  • claim,

  • data,

  • modality,

  • rebuttal,

  • warrant, and

  • backing.

Every argument makes a claim based on some data. Let us consider an example. The argument in Fig. 3.11.3.1 is drawn from reasoning regarding refugee status according to the 1951 United Nations Convention relating to the Status of Refugees (as amended by the 1967 United Nations Protocol relating to the Status of Refugees), and relevant High Court of Australia rulings. The claim of the argument in Fig. 3.11.3.1 is the statement that Reff has a well founded fear of persecution. This claim is made on the basis of two data items, that Reff has a real chance of persecution and that relocation within Reff’s country of origin is not appropriate. A mechanism is required to act as a justification for why the claim follows from data. This justification is known as the warrant, which is, in Fig. 3.11.3.1, the statement that “The test for well founded fear is real chance of persecution unless relocation affords protection”. The backing provides authority for the warrant and, in a legal argument, is typically a reference to a statute or a precedent case. The rebuttal component specifies an exception or condition that obviates the claim. Reff may well have a real chance of persecution and relocation within the country of origin is unlikely to lead to protection; however, the claim that his fear is well founded does not hold if Reff’s persecution is due to criminal activities.

Fig. 3.11.3.1 Toulmin argument for well founded fear
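
For concreteness, the six-component layout can be rendered as a simple data structure. The following Python sketch is ours and purely illustrative (the class and field names are not drawn from any of the systems discussed in this book); it encodes the well founded fear argument of Fig. 3.11.3.1.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ToulminArgument:
        """A single argument in Toulmin's (1958) six-component layout."""
        claim: str                    # the assertion being argued for
        data: List[str]               # grounds on which the claim rests
        warrant: str                  # justification linking data to claim
        backing: str                  # authority for the warrant
        modality: str = "presumably"  # force with which the claim is made
        rebuttals: List[str] = field(default_factory=list)  # defeating conditions

    # The refugee argument of Fig. 3.11.3.1, encoded as an instance.
    well_founded_fear = ToulminArgument(
        claim="Reff has a well founded fear of persecution",
        data=["Reff has a real chance of persecution",
              "Relocation within Reff's country of origin is not appropriate"],
        warrant=("The test for well founded fear is real chance of persecution "
                 "unless relocation affords protection"),
        backing="1951 UN Convention; High Court of Australia rulings (e.g. Chan)",
        rebuttals=["Reff's persecution is due to criminal activities"],
    )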

The validity of the dialectical/non-dialectical distinction for knowledge engineering is demonstrated by noting that many applications of the Toulmin structure to knowledge modelling during the 1990s have varied the structure in one way or another. However ad hoc the variations seem at first sight, they can be understood if seen as attempts to emphasise the dialectical as opposed to the non-dialectical perspective, to different extents.

In Section 3.11.4, diverse applications of the Toulmin argument structure are compared and contrasted in order to demonstrate that the variations are best understood as attempts to integrate dialectical argumentation with non-dialectical argumentation. In Section 3.11.5, the GAAM is presented. By specifically attempting to develop a non-dialectical model at a level that is generic to a discursive community, a variation of the Toulmin structure is derived that does not itself model dialectical exchanges. Rather, it enables dialectical exchanges to be readily modelled once communal knowledge is organised using the non-dialectical model. Applications developed with the use of the GAAM are discussed in Section 3.11.6 together with some insights regarding the dialectical model that is to be developed on the basis of the non-dialectical frame.

3.11.4 Variations of Toulmin’s Structure

Argumentation has been used in knowledge engineering in two distinct ways: with a focus on the use of argumentation to structure knowledge (i.e. non-dialectical emphasis) or with a focus on the use of argumentation to model discourse (i.e. dialectical emphasis). Dialectical approaches typically automate the construction of an argument and counterarguments, normally with the use of a nonmonotonic logic in which operators are defined to implement discursive primitives such as attack, rebut, or accept. Carbogim, Robertson and Lee (2000) presented a comprehensive survey of defeasible argumentation.

Dialectical models have been advanced by Cohen (1985), Fox (1986), Vreeswijk (1993), Dung (1995), Prakken (1993a, 1993b), Prakken and Sartor (1996a), Gordon (1995), Fox and Parsons (1998) and many others. In general, these approaches include a concept of conflict between arguments and the notion that some arguments defeat others. Most applications that follow a dialectical approach represent knowledge as first order predicate clauses, though they engage a nonmonotonic logic to allow contradictory clauses. Mechanisms are typically required to identify implausible arguments and to evaluate the better argument of two or more plausible ones. For example, Fox and Parsons (1998) analyse and extend the non-standard logic LA of Krause, Ambler, Elvang-Goransson, and Fox (1995). In that formalisation, an argument is a tuple with three components:

$$\left({\textrm{Sentence}}:{\textrm{Grounds}}:{\textrm{Sign}}\right).$$

The sentence is the Toulmin claim though this may be a simple claim or a rule. The sign is a number or symbol that indicates the confidence warranted in the claim. The grounds are the sentences involved in asserting the claim and can be seen as the reasoning steps used to ultimately reach the conclusion.

The preference for one argument over others has been modelled in a variety of ways. Prakken (1993a, 1993b) extends the framework proposed by Poole (1988) by using a concept of specificity. The claim that a penguin flies because it is a bird and all birds fly is less specific than the claim that a penguin does not fly. Preference relations between rules are elicited from experts and explicitly specified in the defeasible reasoning logic described by Antoniou (1997).
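
The specificity criterion can be pictured in a few lines of Python. This is a deliberately crude sketch of our own (comparing antecedent sets rather than derivations, as the formal systems of Poole and Prakken do), intended only to convey the intuition.

    # Each defeasible rule pairs a set of antecedent conditions with a conclusion.
    bird_rule = ({"bird"}, "flies")
    penguin_rule = ({"bird", "penguin"}, "does not fly")

    def more_specific(rule1, rule2):
        """True if rule1's antecedent strictly contains rule2's antecedent.

        A stand-in for the specificity orderings of defeasible argumentation;
        real systems compare derivations, not antecedent sets.
        """
        return rule2[0] < rule1[0]  # strict-subset test on sets

    print(more_specific(penguin_rule, bird_rule))  # True: the penguin rule is preferred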

In applications of argumentation to model dialectical reasoning, argumentation is used specifically to model discourse and only indirectly used to structure knowledge. Concepts of conflict and of argument preferences map directly onto a discursive situation where participants are engaged in dispute. In contrast, many uses of argumentation for knowledge engineering application do not model discourse. This corresponds more closely to a non-dialectical perspective.

A non-dialectical representation facilitated the organisation of complex legal knowledge for information retrieval by Dick (1987, 1991). She illustrates how relevant cases for an information retrieval query can be retrieved, despite sharing no surface features, if the arguments used in case judgements are represented as Toulmin argument structures. Marshall (1989), Ball (1994) and Loui et al. (1997) have built hypertext based computer implementations that draw on knowledge organised as Toulmin arguments. Hypertext links connect an argument’s assertions with the warrants, backing and data of the same argument and also link the data of one argument with the assertion of other arguments. Complex reasoning can be represented succinctly, enabling convenient search and retrieval of relevant information.

Clark (1991) represented the opinions of individual geologists as Toulmin argument structures so that his group decision support system could identify points of disagreement between experts. Matthijssen (1999) provides a further example of benefits that arise from the use of the original Toulmin structure. He represented user tasks as Toulmin arguments and associated a list of keywords to the structure. These keywords were used as information retrieval queries into a range of databases. Results indicate considerable advantages in precision and recall of documents as a result of this approach compared with approaches that require the user to invent queries.

Johnson, Zualkernan, and Tukey (1993) identified different types of expertise using this structure, and Bench-Capon, Lowes, and McEnery (1991) used the Toulmin argument structure to explain logic programming conclusions. Branting (1994) expands the warrant component of the Toulmin argument structure as a model of the legal concept of ratio decidendi, that is to say, the rationale of a decision. In the Split Up project, Zeleznikow and Stranieri (1995), and Stranieri, Zeleznikow, Gawler, and Lewis (1999) used the Toulmin argument structure to represent family law knowledge in a manner that facilitated rule/neural hybrid development.

Toulmin (1958) proposed his views on argumentation informally and never claimed to have advanced a theory of argumentation. He does not rigorously define key terms such as warrant and backing. He only loosely specifies how arguments relate to other arguments and provides no guidance as to how to evaluate the best argument or identify implausible ones. Nevertheless, the structure was found to be useful as a tool for organising knowledge.

According to James Freeman (1991), the Toulmin layout does not explicitly model discourse. Operators to question, attack or qualify opposition assertions are not explicit. Nor is there the facility to represent an agent’s beliefs as they differ from another agent’s. Not surprisingly, many knowledge engineering applications of the Toulmin framework have not modelled discursive exchanges at all, but have applied the framework to structure knowledge.

Despite the immediate appeal of the Toulmin argument structure as a convenient frame for structuring knowledge, most researchers that use the Toulmin layout vary the original structure. Each variation can be seen to be an attempt to integrate some aspects of dialectical reasoning into a structure that, for knowledge engineering purposes, is largely non-dialectical. In the following section three variations are presented. These can be understood as attempts to integrate a dialectical approach into a non-dialectical one.

3.11.4.1 Johnson’s Variation of the Toulmin Layout

Johnson, Zualkernan, et al. (1993) claimed that any argument’s backing can be classified into one of five distinct types of backing which they label Type 1 to Type 5. Each type of backing corresponds to a distinct type of expertise and also to a particular philosophical paradigm of reasoning as follows:

  • Type 1 arguments reflect axiomatic reasoning. Data and claim for these arguments are analytic truths. The supporting evidence derives from a system of axioms such as Peano’s axioms of arithmetic. Examples of what Aristotle called demonstrations would be captured as Type 1 arguments.

  • Type 2 arguments assert a particular medical diagnosis on the basis of empirical judgements from a number of patients who have presented with similar symptoms in the past.

  • Type 3 arguments are characterised by backings which reflect alternate representations of a problem. A medical diagnosis based on a model of the heart as a pump analyses symptoms to be consistent with that model. An alternate representation that treats the heart as a muscle provides other evidence.

  • Type 4 arguments differ from Type 3 arguments in that the alternate representations are conflicting. In this case the argument involves supporting evidence that is conflicting. An assertion is made by creating a composite representation from conflicting ones.

  • Type 5 backings refer to paradigms that reflect a process of inquiry.

The Type 1 and 2 backings that Johnson, Zualkernan, et al. (1993) identify are markedly different from Types 3, 4 and 5. In the latter group, a claim is ultimately backed by recourse to alternate representations of a problem.

The resolution of conflicting representations is akin to a dialectical process. A common solution is sought from the exchange that is stimulated from conflicting representations. In Type 1 (axiomatic) or Type 2 (empirical) arguments, the backing is made from one perspective. There are no alternate representations and no common solutions. This is an example of a non-dialectical perspective.

The variation as per Johnson, Zualkernan, et al. (1993) does not introduce or eliminate components of the original Toulmin layout. However, by discerning non-dialectical backings from dialectical ones, it imposes a typology of backing that can be seen as an attempt to extend the structure toward a somewhat more dialectical application. The approach is limited by the unclear nature of the Toulmin warrant.

Broadly speaking, Toulmin formulates the warrant as an inference procedure. It is a procedure for inferring a claim given data. For example, the statement that “Most Italians are Catholic” can be used as an inference rule to infer the claim that Mario is (probably) a Catholic, given the data that he is Italian. However, the statement that “Most Italians are Catholic” can also be interpreted as a reason for the relevance of the data item “Mario is Italian” in the argument.

The distinction between a warrant as an inference rule and a warrant as a reason for relevance can be seen in the refugee argument of Fig. 3.11.3.1. The warrant statement that reflects that the High Court case of Chan introduced a “real chance of persecution” as the test for well founded fear is readily seen as a reason for the relevance of the real chance data item. It is less obviously viewed as an inference rule that can be applied to infer the claim.

Below, issues related to what James Freeman (1991) calls the problematic notion of warrant are discussed. However, it is important to note that the Johnson typology applies to backings for warrants that are inference procedures, but may not apply in the same way to warrants that are statements indicating a reason for the relevance of a data item.

3.11.4.2 The Freeman and Farley Variation on Toulmin Warrants

Arthur Farley and Kathleen Freeman (1995) recognised the need to extend the warrant component in order to develop a model of dialectical reasoning more formal than that proposed by Toulmin. Their main objective was to develop a system that could model the burden of proof concept in legal reasoning. The concept of burden of proof is often used to refer to the onus a discourse participant has to supply evidence. So, as Prakken (2001) notes in modelling this form of burden of proof using a dialogue game model, a judge directs the pleadings phase of proceedings by requiring one litigant or another to supply evidence to support their claims. However, the form of burden of proof that was the focus of attention for Farley and Freeman (1995) involves the extent to which evidence is required in order to draw a conclusion. This varies with the severity of the misdemeanour. In a civil case, except as otherwise provided by the law, the burden of proof requires proof by a preponderance of the evidence. In a criminal case, the state must prove all elements of the crime to a beyond reasonable doubt standard. In tax fraud cases, the burden of proof is generally on the taxpayer (Black, 1990).

In an earlier paper than Farley and Freeman (1995), Kathleen Freeman (1994) described two types of warrants she called wtype1 and wtype2. The first warrant type, wtype1, classifies the relationship between assertion and data with category labels she calls explanatory or sign. Causal links are examples of explanatory warrants because they explain an assertion given data. Fire causes smoke. The consequent is explained by recourse to a cause/effect link. Other types of explanatory warrants include definitional relationships or property/attribute relationships. A sign relationship represents a correlational link between data and assertion.

The second warrant type, wtype2, represents the strength with which the assertion can be drawn from data. Examples of this type of warrant proposed by Kathleen Freeman represent the strength with which the consequent can be drawn from the antecedent. Default type warrants represent default relationships such as birds fly. Evidential warrants are less certain. Sufficient warrants are certain and typically stem from definitions.

Kathleen Freeman explicitly represents reasoning methods in addition to the two types of warrant. The reasoning types reside outside the Toulmin argument structure but interact with warrants in order to produce credible outcomes. Her model incorporates four reasoning mechanisms: modus ponens, modus tollens, abduction and contrapositive abduction. For example, some reasoning mechanisms are stronger than others according to heuristics she devised. Modus ponens and modus tollens are assigned a strong link qualification if used with sufficient warrants, whereas the same reasoning types are assigned a “credible” qualification if used with evidential warrants.
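
The interaction of reasoning types with warrant strengths can be summarised as a lookup table. The sketch below uses Kathleen Freeman’s qualification labels for the combinations stated in the text; the remaining entries are our own assumptions for illustration and do not reproduce her full heuristics.

    # Link qualifications indexed by (reasoning type, wtype2 strength).
    LINK_QUALIFICATION = {
        ("modus_ponens", "sufficient"): "strong",
        ("modus_tollens", "sufficient"): "strong",
        ("modus_ponens", "evidential"): "credible",
        ("modus_tollens", "evidential"): "credible",
        ("abduction", "sufficient"): "credible",  # assumed for illustration
        ("abduction", "evidential"): "weak",      # assumed for illustration
    }

    def qualify(reasoning_type, warrant_strength):
        return LINK_QUALIFICATION.get((reasoning_type, warrant_strength), "unknown")

    print(qualify("modus_ponens", "sufficient"))  # strong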

Reasoning types interact with warrant types to control the generation of arguments according to reasoning heuristics. For example, modus ponens/abduction combinations are not permitted for two explanatory warrants unless both are evidential. Kathleen Freeman (1994) demonstrates a capacity her model has for dialectical reasoning. An assertion is initially argued for with the use of heuristics she defined. Then, an alternate argument is compared with the initial argument constructed, and support for it is ascertained. The comparisons require the notion of level of proof, the levels including beyond reasonable doubt, scintilla of evidence and preponderance of evidence. (Cf. Section 3.6.2 in this book.)

Kathleen Freeman’s model is a sophisticated extension to the Toulmin argument structure that displays impressive dialectical reasoning results. She advances types of relationships between consequents and antecedents (wtype1) and assigns the link a strength (wtype2). The discernment of two types of warrant is essential for her, because her model of burden of proof relies on it. By specifying reasoning types and heuristics for their interaction with warrants, Farley and Freeman (1995) can be seen to provide a way to extend the Toulmin structure so that it can be applied to model dialogue. The ambiguity in the original Toulmin warrant is dealt with by reserving one type of warrant for the inference rule and the other to indicate the strength of the rule. This adds a representation of uncertainty to some extent but, as we shall describe below, the strength of the data items and the strength of claims are not represented. Furthermore, there is no attempt to incorporate information regarding the broader context of the argument.

In contrast, the issue of context is paramount for Bench-Capon (1998), who is not intent on modelling the burden of proof in legal reasoning but on implementing a dialogue game that engages players in constructing arguments for and against assertions initially made by one party.

3.11.4.3 Bench-Capon’s Variation of the Toulmin Layout

Bench-Capon (1998) does not distinguish types of backing as Johnson, Zualkernan, et al. (1993) do, or types of warrant following Farley and Freeman (1995). Instead, he introduces an additional component to the Toulmin argument structure. The presupposition component of the Toulmin argument structure represents assumptions made that are necessary for the argument but are not the object of dispute, so they remain outside the core of the argument. A presupposition for the refugee argument illustrated in Fig. 3.11.3.1 would indicate that the country in which the argument is raised is a signatory to the United Nations Convention. As Australia is a signatory to the Convention, the data items and warrant that relate to the UN Convention are entirely appropriate. If Australia were not a signatory then those data items may not be as appropriate. This is illustrated in Fig. 3.11.4.3.1.

Fig. 3.11.4.3.1 Toulmin Argument Structure with presupposition component

Making explicit presuppositions in the argument structure is important for the use Bench-Capon (1998) makes of the Toulmin argument structure. A program that plays the part of one or both players in a dialogue game is often exposed to utterances in discourse that represent presuppositions and are not central to the discussion at hand.

The presuppositions can become critical if parties to a game do not share them. Bench-Capon (1998) interprets the warrant as an inference procedure much as Toulmin originally did. The dialogue game does not directly add dialectical operators such as rebut, attack or accept into the structure; these are instead encoded into the control mechanism that represents the rules of the dialogue game. The inherent ambiguity in the Toulmin warrant is not addressed; however, the context of the argument is modelled by the addition of a presupposition component.

3.11.4.4 Considerations Concerning Toulmin Variations

The three variations to the Toulmin argument structure presented thus far in Section 3.11.4 can be seen to be attempts at clarifying how the structure can be used within a dialogue. This objective motivated Johnson, Zualkernan, et al. (1993) to add types of backings. Each new backing type derives from the use of arguments by a discursive community. Farley and Freeman (1995) were more direct and developed specific reasoning heuristics so that an argument and counterargument are constructed as they would be within a discursive community. Bench-Capon (1998) defined a dialogue game that regulated the dialogue between two players who each encode their utterances as Toulmin components.

In the next section, Section 3.11.5, a variation of the Toulmin argument structure is proposed that specifically aims to model the structure of arguments in a non-dialectical manner. This is done at a sufficiently high level of abstraction so as to represent shared understanding between participants in a discourse, which ultimately simplifies the specification of a dialectical model. However, even without extension into a dialectical model, the non-dialectical frame facilitates hybrid system development, document drafting and intelligent information retrieval.

3.11.5 A Generic Non-dialectical Model of Argumentation: The Generic Actual Argument Model (GAAM)

3.11.5.1 The Argument Template

Figure 3.11.5.1.1 represents a template for knowledge representation that varies the Toulmin argument structure. The template differs from the Toulmin structure in that it includes:

  • a variable-value representation of claim and data items,

  • a certainty variable associated with each variable-value rather than a modality or force associated with the entire argument,

  • reasons for the relevance of the data items in place of the warrant,

  • a list of inference procedures that can be used to infer a claim value from data values in place of the warrant,

  • reasons for the appropriateness of each inference procedure,

  • context variables,

  • the absence of the rebuttal component present in the original formulation,

  • the inclusion of a claim value reason component.

Fig. 3.11.5.1.1 Non-dialectical argument template

The argument template represents knowledge at a very high level of abstraction. There are two levels of instantiation made in applying the template to model arguments within a domain: the generic level and the actual level. A generic argument is an instantiation of the template where the following components are set:

  • claim, data and context variables are specified but not assigned values,

  • relevance reason statements and backing statements are specified,

  • inference procedures are listed but a commitment to any one procedure is avoided,

  • inference procedure reasons are specified for each procedure,

  • claim and data variables are not assigned certainty values.

The generic argument is sufficiently general so as to capture the variety of perspectives displayed by members of a discursive community.

Figure 3.11.5.1.2 illustrates the refugee argument above, as a generic argument. The claim variable has been labelled Well founded fear and acceptable values specified. There are three inference procedures known to be appropriate in this example: the first is a rule set that derives from heuristics an immigration expert uses, the second is a neural network trained from past cases and the third is a human inference. This latter inference indicates that a human is empowered with sufficient discretion to infer a claim value from data item values in any way he or she likes.

Fig. 3.11.5.1.2 Generic argument for well founded fear

In the Generic Actual Argument Model (GAAM), the Toulmin warrant has been replaced with two components: an inference procedure and a reason for relevance. This relates to two different roles a warrant can play in an argument from a non-dialectical perspective. As described above, on the one hand the warrant indicates a reason for the relevance of a data item; on the other hand, the warrant can be interpreted as a rule which, when applied to the data items, leads to a claim inference.

An inference procedure is an algorithm or method used to infer a claim value from data item values. Under this interpretation, an inference procedure is a relation between data variable values and claim variable values. It is any procedure that will perform a mapping from data items to claim items. A mathematical function, an algorithm, a rule set, a neural network, or procedures yet to be discovered are examples of inference procedures.

Actual arguments made are instances of a generic argument where each data slot has a value (data item value), and an inference procedure is chosen and executed to deliver a value for the claim slot (claim value). Each generic argument has a claim, data items, reasons for why each data item is relevant, the names of the associated inference procedures and reasons for their appropriateness. Figure 3.11.5.1.3 shows a generic argument in detail. It consists of: a conjunction of data items or slots, each with a reason for its relevance and the backing for this; a choice of inference procedures and the reasons for each one of these mechanisms; and, of course, the claim slot. All data slots act as input to the inference procedures. Each inference mechanism in the inference procedure slot provides a means of reaching a claim value from the input data values. Inference mechanisms may include rule sets, trained neural networks, case-based reasoners or human reasoning. The choice of a particular inference mechanism (other than human inferencing) and the reasons for that inference procedure provide a reason for arriving at a particular claim value. In the case of human inferencing there will still be a need to provide a justification for the claim. At the generic argument level this explanation cannot be given.

Fig. 3.11.5.1.3 Full representation of a generic argument

Figure 3.11.5.1.3 also includes certainty slots for each data item, claim and inference procedure. These recognise that there is uncertainty in the processes of developing actual arguments. The certainty values are assigned when values are assigned in the process of constructing an actual argument. A generic argument is an agreed approximation to a world but still may only be partial knowledge. We do not explicitly put a certainty or confidence value on a generic argument, although we permit generic arguments to change over time. The structure of generic arguments that describe a domain will not be static. As knowledge within the domain evolves, new versions of the generic argument structure will be required. New factors emerge as being relevant to some arguments, and new inference procedures may be needed as new legal rules emerge or new cases become precedents. Most actual arguments in a domain are then underpinned by a particular version of the generic argument structure.

Figure 3.11.5.1.3 also depicts variables that are required to capture the context of the generic argument. Context variables are conceptualised as factors that are critical for the appropriate instantiation of actual arguments from the generic template. However, context variables do not directly take part in the reasoning within an argument. For example, the reasoning used to infer claims about tours does not include the geographical region as a data item because the reasoning applies regardless of region.

3.11.5.2 Discussion

Many inference procedures can be implemented in software. Thus, they can be automated in computer based systems. However, this need not necessarily be the case for a knowledge engineering framework. Claims can sometimes be inferred from data items by human agents without the explicit specification of an inference procedure. This occurs frequently in discretionary fields of law where, as Christie (1986) notes, decision makers weight and combine relevant factors in their own way without articulating precisely how claims were inferred. This situation is accommodated within the Generic Actual Argument framework with the specification of an inference type labelled, simply, human.

The original Toulmin warrant can also be seen to be a reason for relevance or an inference procedure. Past contributions to a marriage are relevant in Australian family law. Past contributions appears as a data item in a generic argument regarding property distribution following divorce because a statute dictates that contributions are relevant. The wealth level of a marriage in Australia is made relevant by past cases and not by statute. The hair colour of the wife is considered irrelevant because there is no statutory or precedent basis for its relevance. Further, domain experts can think of no reason that would make this feature relevant.

The concept of relevance is in itself difficult to define generally. See Section 4.6 in this book. van Dijk (1989) describes the concept of relevance, as it applies to a class of modal logics broadly called relevance logics, as a concept grounded firmly in the pragmatics, and not the semantics or syntax, of language. Within a discursive community, the data items in a generic argument must be relevant to the claim to the satisfaction of members of the community.

A generic argument in the field of family law property division may include hair colour as a relevant data item for inferring property division if a reason for its relevance is advanced that is acceptable to, even if not held by, many in the community. Perhaps the utterance

Blonde women will remarry more readily.

as a reason for the relevance of hair colour as a data item may not be held by all participants to a discourse but reflects a belief that is understood as plausible by many.

The argumentation framework advanced here not only departs from the Toulmin formulation by distinguishing inference procedure from reason for relevance but it also represents context explicitly. Figure 3.11.5.1.2 illustrates two context variables; the Determining country and the Person about which the argument is being made. The respective values are a list of world nations for the Determining Country and Reff or the more universal X for the Person.

Context variables represent something of the background knowledge that impacts on the generic argument. For example, the context variable Determining country in Fig. 3.11.5.1.2 represents a scope constraint on the argument. This indicates that an actual argument can be made based on the generic argument; however, the determining country sets a context for the argument. The context variable is an articulation of the presuppositions that underpin the generic argument.

The context variable can also represent the scope of variables used in the generic argument. For example, the Person context variable will be assigned the value X for a discourse participant intent on making the more universal argument that relates to well founded fear of anyone. The participant that restricts the argument to Reff does so by setting the context variable to Reff. In general, context is a difficult concept to define. In the framework defined here, context is defined as presupposition and variable scope. However, other definitions can also be accommodated, as long as they can be captured as variable-value tuples.

There is no rebuttal component in the generic argument. The rebuttal is more clearly regarded as a dialectical component and is therefore omitted from this essentially non-dialectical frame. For instance, discursive participants may create actual arguments as instances of the same generic argument in ways that are quite different from others. Participant A may assert a different claim value than B, yet have perfect agreement on all data item values, because a different inference procedure was selected. Any discussion regarding this difference, including exchanges that make the point that the difference constitutes an attack, or exchanges that seek to defend A or B’s assertion, or exchanges that seek to identify the stronger argument, involves dialectical exchange and is omitted from the non-dialectical frame.

3.11.5.3 Representing Actual Arguments

Figure 3.11.5.3.1 represents an actual argument. This is the second level instantiation of the argument template in Fig. 3.11.5.1.1. An actual argument corresponds to a position held by a participant in a discourse. It is an instantiation of a generic argument. The context variable Person in the generic argument is instantiated to “Reff”, indicating that the claim only applies to him and not to others.

Fig. 3.11.5.3.1 Actual argument for Reff has well founded fear

The data item value in Fig. 3.11.5.3.1 represents the situation that “Reff is likely to have a well founded fear”. The inference procedure for the actual argument is the ruleset called myRules. As a consequence of applying that ruleset, the claim value is instantiated to represent that Reff is likely to have well founded fear.

The claim value reason for this actual argument provides a reason for the specific claim value inferred, rather than for other claim values. The claim value reason in Fig. 3.11.5.3.1 expresses a reason for why well founded fear is likely, given the data items and inference procedure selected. The claim value reason is not a reason for the inference rule. First of all, the inference procedure need not be a rule. If it is a mathematical function or has mechanisms that are not visible, such as a neural network, then the articulation of a reason for the inference procedure is impossible. Conceptually, it is more correct to say it is a reason for a particular value that has arisen as a result of the application of an inference procedure.

Certainty values are assigned when a participant creates an actual argument. The certainty value represents the degree of certainty the participant has that the claim (or data) variable value selected is the true value. A certainty value may be set directly by the participant or, if the variable value is set by an inference procedure, calculated by that procedure. The certainty value of 80% associated with the data item value, “likely”, for the well founded fear variable in Fig. 3.11.5.3.1 is read as a high (80%) degree of certainty that well founded fear of persecution is likely. This is calculated by the inference procedure selected, myRules. However, if the inference procedure selected does not calculate certainty values (e.g., human inferences), then the participant must set a certainty value. The way in which the data item certainty values are combined is a feature of the mapping performed by the particular inference procedure selected, so is not made explicit in the GAAM.
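
Continuing the sketch begun above, an actual argument binds values and certainties to the generic template and records which inference procedure produced the claim value. Again, the names (including the stand-in for the myRules ruleset) are ours and purely illustrative.

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class ActualArgument:
        generic: "GenericArgument"        # the shared template being instantiated
        context_values: Dict[str, str]
        data_values: Dict[str, str]
        data_certainty: Dict[str, float]  # per-value certainty, 0..1
        inference_procedure: str
        claim_value: str = ""
        claim_certainty: float = 0.0
        claim_value_reason: str = ""

    def my_rules(data_values):
        """Stand-in for the myRules ruleset of Fig. 3.11.5.3.1 (illustrative)."""
        if (data_values["Real chance of persecution"] == "yes"
                and data_values["Relocation is appropriate"] == "no"):
            return "likely", 0.8  # claim value and the certainty the ruleset computes
        return "unlikely", 0.8

    arg = ActualArgument(
        generic=well_founded_fear_generic,
        context_values={"Person": "Reff", "Determining country": "Australia"},
        data_values={"Real chance of persecution": "yes",
                     "Relocation is appropriate": "no"},
        data_certainty={"Real chance of persecution": 0.9,
                        "Relocation is appropriate": 0.7},
        inference_procedure="myRules",
    )
    arg.claim_value, arg.claim_certainty = my_rules(arg.data_values)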

Linguistic variable values such as very elderly, elderly, middle aged, young and very young seem to represent certainty in themselves, so as to make the specification of a certainty value redundant. However, the inclusion of a certainty value slot in the GAAM enables the specification of membership function values if fuzzy reasoning were selected as the inference procedure, conditional probabilities if a Bayesian inference were selected, or certainty factors if MYCIN-like rule inferences (i.e., rules of the kind made popular in expert systems following the MYCIN expert system for medical diagnosis) were used as the inference procedure.

Generic and actual argument structures correspond to a non-dialectical perspective. They do not directly model an exchange of views between discursive participants but rather describe assertions made from premises and the way in which multiple claims are organised. Claim variables are inferred from data item values using an inference procedure, which need not necessarily be automated. The reasoning occurs within a context, and the extent to which the data items correspond to true values, according to the proponent of the argument, is captured by certainty values.

The generic argument provides a level of abstraction that accommodates most points of view within a discursive community and anticipates the creation of actual arguments, by participants, as instantiations of a generic argument. However, it is conceivable, given the open textured nature of reasoning, that a participant will seek to advance an actual argument that departs from the generic argument. This is a manifestation of discretion and can be realised with the introduction of a new variable (data, claim or context) value, with the use of a new inference procedure, or with a new claim value reason.

A non-dialectical argumentation model must model discretion and open texture. The concept of open texture was introduced by Waismann (1951) to assert that empirical concepts are necessarily indeterminate. A definition for open textured terms cannot be advanced with absolute certainty unless terms are defined axiomatically, as they are, for example, in mathematics. Gold may be defined as that substance which has spectral emission lines, X, and is coloured deep yellow. However, because the possibility that a substance with the same spectral emission as gold, but without the colour of gold, will appear in the future cannot be ruled out, the concept of gold is open textured.

The concept of open texture is significant in the legal domain because new uses for terms and new situations constantly arise in legal cases. Prakken (1993a) discerns three sources of open texture: reasoning which involves defeasible rules, vague terms, and classification ambiguities. Judicial discretion is conceptualised by Christie (1986) and Bayles (1990) as the flexibility decision-makers have in weighing relevant factors when exercising discretion, although articulating an assignment of weights is typically difficult. This view of discretion does not derive from defeasible rules, vague terms or classification ambiguities, so it is regarded as a fourth type of situation that contributes to the open textured nature of law.

The link between the GAAM and discretion is described in detail by Stranieri, Yearwood, and Meikl (2000). Broadly, discretion manifests itself as the flexibility for a participant to construct an actual argument from a generic argument by:

  • Adding data item factors into the actual argument that are not in the generic tree.

  • Removing a data item factor from the actual argument that is in the generic tree.

  • Selecting a data, claim, or context variable value from those specified in the generic tree.

  • Selecting a data, claim, or context variable value that has not been specified in the generic tree.

  • Selecting an inference procedure from the list specified in the generic tree.

  • Selecting an inference procedure not specified in the generic tree.

  • Leaving data items, reasons for relevance, inference procedure, and reasons for the appropriateness of inference procedures implicit.

  • Introducing a claim value reason statement.

  • Selecting certainty values.
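
One practical reading of this enumeration is that departures from the generic template can be detected mechanically and flagged rather than forbidden. The toy validator below is our illustration, continuing the earlier sketches, not the GAAM specification; note that it flags the myRules procedure of the earlier actual argument, an instance of selecting an inference procedure not specified in the generic tree.

    def departures(actual, generic):
        """List the discretionary departures of an actual argument from its
        generic template (illustrative only)."""
        notes = []
        generic_data = {v.name: set(v.values) for v in generic.data}
        for name, value in actual.data_values.items():
            if name not in generic_data:
                notes.append("new data item: " + name)
            elif value not in generic_data[name]:
                notes.append("unlisted value " + value + " for " + name)
        if actual.inference_procedure not in generic.inference_procedures:
            notes.append("unlisted inference procedure: " + actual.inference_procedure)
        return notes

    print(departures(arg, well_founded_fear_generic))
    # ['unlisted inference procedure: myRules']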

This framework, including the generic/actual distinction, the clear separation of inference procedure from other components, and the inclusion of reasons for relevance and context, introduces a non-dialectical structure that represents knowledge applicable to a discursive community, but does not include elements that are clearly needed to model dialectical exchanges. In the next Section 3.11.6, the way in which the specification of a comprehensive non-dialectical structure facilitates hybrid reasoning, document drafting and information retrieval is described, before illustrating steps toward a dialectical model based on the GAAM non-dialectical frame.

3.11.6 Applications of the Generic/Actual Argument Model

The use of the GAAM for facilitating hybrid reasoning is illustrated with the knowledge based system called Split Up, which predicts the marital property distribution decisions made by judges of the Family Court of Australia following divorce. This research is reported by Stranieri, Zeleznikow, Gawler, and Lewis (1999); cf. Stranieri (1999).

3.11.6.1 The Split Up System for Negotiating a Divorce

The Split Up project (Stranieri et al., 1999) collected data from cases heard in the Family Court of Australia dealing with property distribution following divorce. The objective was to predict the percentage split of assets that a judge in the Family Court of Australia would be likely to award both parties of a failed marriage. Australian Family Law is generally regarded as highly discretionary. The statute presents a “shopping list” of factors to be taken into account in arriving at a property order. The relative importance of each factor remains unspecified and many crucial terms are not defined. The age, state of health and financial resources of the litigants are explicitly mentioned in the statute as relevant factors, yet their relative weightings are unspecified. The Act clearly allows the decision-maker a great deal of discretion in interpreting and weighing factors.

In the Split Up system, the relevant variables were structured as data and claim items, following the generic argument outlined above, into 35 interlocking arguments. The ultimate claim, representing the percentage split of assets a judge would be likely to award the husband and wife, was the root of an argument tree. Unlike in dialogical argumentation, the use of an argument tree in non-dialogical argumentation, namely in order to structure knowledge, aims at securing the following benefit: the argument tree is a hierarchy of relevant factors, and it enables the decomposition of one large data mining exercise into many smaller ones.

Fig. 3.11.6.1.1 The argument tree in the Split Up system (details are shown in Figs. 3.11.6.1.2, 3.11.6.1.3, and 3.11.6.1.4)

Fig. 3.11.6.1.2 The downstream part of the argument tree in the Split Up system

Fig. 3.11.6.1.3 An enlarged detail of the argument tree in the Split Up system (the top left part of Fig. 3.11.6.1.1)

Fig. 3.11.6.1.4 An enlarged detail of the argument tree in the Split Up system (the bottom left part of Fig. 3.11.6.1.1)

Nodes in the argument tree of Split Up, illustrated as Fig. 3.11.6.1.1, are claim/data items. Variable values, inference procedures, reason for relevance and context are omitted from this diagram. The arguments interlock in that the claim of one argument is a data item for another, higher up a tree such as the one depicted in Fig. 3.11.6.1.1. For example, the variable Contributions of the husband relative to the wife is a data item for the ultimate claim and also the claim for an argument that has four data items.

In the Split Up system all claim variable values were inferred using automated inference procedures from the data item values. In 15 of the 35 arguments, claim values were inferred from data items with an inference procedure that involved the use of small rule-sets that represent expert heuristics whereas neural networks, trained on data from past Court cases, were used to infer claim values in the remaining 20 arguments.
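
The interlocking of arguments suggests a simple bottom-up evaluation: each node’s claim value is computed, by that node’s own inference procedure, from the claim values of its children. The sketch below illustrates the idea with two made-up rule sets; the actual Split Up rule sets and neural networks are of course far richer, and the node names here are invented.

    from typing import Callable, Dict, List, Optional

    class ArgumentNode:
        """A node in an argument tree: its claim is a data item of its parent."""
        def __init__(self, name: str, children: List["ArgumentNode"],
                     infer: Optional[Callable[[Dict[str, str]], str]] = None):
            self.name, self.children, self.infer = name, children, infer

        def evaluate(self, leaf_values: Dict[str, str]) -> str:
            if not self.children:  # a leaf: its value is supplied by the user
                return leaf_values[self.name]
            data = {c.name: c.evaluate(leaf_values) for c in self.children}
            return self.infer(data)  # rule set, neural network, human, ...

    # Two toy inference procedures standing in for rule sets or trained networks.
    contrib_rules = lambda d: ("wife higher" if d["homemaker contributions"] == "substantial"
                               else "equal")
    split_rules = lambda d: ("60/40 to wife" if d["relative contributions"] == "wife higher"
                             else "50/50")

    tree = ArgumentNode("percentage split", [
        ArgumentNode("relative contributions",
                     [ArgumentNode("homemaker contributions", [])],
                     contrib_rules),
    ], split_rules)

    print(tree.evaluate({"homemaker contributions": "substantial"}))  # 60/40 to wife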

The Split Up application illustrated that the generic/actual argument model captures knowledge in a way that leads to readily maintainable knowledge bases, a requirement that is particularly important in law. The tree of arguments underpinning Split Up was first elicited with the assistance of domain experts in 1994. Since then, property division in family law has changed in that domestic violence is now recognised as a relevant consideration in property proceedings following a divorce. The framework localises this change to a single argument that does not impact on any other argument. Furthermore, an examination of the process that led to the introduction of domestic violence illustrated that the generic argument framework can clarify judicial reasoning.

Behind the argument tree, there is an argument which is represented by means of the Toulmin argument structure, in Figs. 3.11.6.1.5, 3.11.6.1.6, and 3.11.6.1.7.

Fig. 3.11.6.1.5 Why the wife deserves more, in a given case (data to claim)

Fig. 3.11.6.1.6 Why the wife deserves more, in a given case (claim to data)

Fig. 3.11.6.1.7 Why the wife deserves more, in a given case (with another claim)

In her thesis about family law in Australia, Renata Alexander (2000) noted that during the 1990s, numerous unsuccessful attempts were made to persuade judges to award a more generous property settlement to victims of domestic violence. This corresponds to the situation where an argument is advanced that departs from existing generic arguments by the introduction of a new data item. In recent years, a small number of Family Court judges began to accept the domestic violence argument. Many of the early cases were appealed and precedents were set by higher Courts, so that domestic violence is now undeniably a relevant consideration in property division. However, as Alexander (2000) notes, there is still some ambiguity in practice, in that some judges have treated domestic violence as a past contribution factor whereas others have recognised it as a factor that increases the victim’s future needs. The ambiguity corresponds to a situation where a new factor has currently been inserted in two places in the argument tree. The discursive community (judges, lawyers and analysts of Australian family law) awaits the resolution of this apparent conflict.

The Split Up application also demonstrates that the generic/actual model provides a convenient frame for task decomposition that is particularly useful for data mining. Data mining is restricted to an exercise in discovering the inference procedure within each argument. Although the total number of relevant variables is large (103 in the Split Up system), most arguments have a small number of data items. Mining for an inference procedure involving a small number of variables is far more tractable than mining a large set. Furthermore, a flat list of all variables requires huge numbers of cases and often includes missing values. For example, values associated with children are empty for childless marriages and appear in a flat list as null values. Null values severely hamper data-mining attempts. However, if the variables are organised into a generic tree, each argument has a small number of variables (data items). This means that relatively small numbers of cases can be used to discover inference procedures that are accurate.

Accurate inference procedures are particularly important in the Split Up system because users (typically, a couple) are non-experts and need the system to prompt them for all relevant facts in order to infer all claims leading up to the culminating claim. This is in contrast to the Embrace system, which is configured to make no automated inferences at all, yet illustrates the document drafting and information retrieval benefits of the GAAM.

3.11.6.2 The Embrace System for Assessing Refugee Status

Yearwood, Stranieri, and Anjaria (1999) report the application of the generic actual model to supporting reasoning regarding the assessment of refugee status in Australia. Refugee law is highly discretionary and extremely difficult to model. The main statute, the United Nations Convention, lists factors to be taken into account in reaching a determination on the refugee status of an applicant, but does not specify the weighting that factors should have.

Ensuring that decision making is as consistent as possible in this complex and discretionary domain is critical for just outcomes. Yearwood et al. (1999) have modelled reasoning in this field using over 200 generic arguments derived from members of the body established to hear appeals from unsuccessful applicants, the Refugee Review Tribunal. Inference procedures have not been specified for any generic argument, in order to ensure that the information system facilitates decision making but does not directly infer outcomes, which are left entirely to Tribunal members. Nevertheless, the argument structures have proven to be useful in modelling refugee decisions and in generating XML documents that are plausible first draft determinations.

Refugee Review Tribunal determinations are documents that express the reasoning steps a member of the Tribunal followed in order to infer conclusions regarding the status of an applicant. Although it is reasonable to expect that a mapping between the reasoning steps used by judges and the structure of the judgement produced would be clearly apparent, Branting, Callaway, Mott, and Lester (1999) note that such a mapping is by no means obvious. They make some progress beyond boilerplate templates with the sophisticated use of discourse analysis using speech act and rhetorical structure theory.

Yearwood and Stranieri (1999) have identified a simple heuristic for traversing a tree of actual arguments that leads to a plausible document structure. This is achieved without the use of discourse analysis methods, largely because the generic/actual framework is a succinct, yet expressive frame for capturing reasoning.

The document generation facility has been implemented as a module of an argument shell, called ArgumentDeveloper. Yearwood and Stranieri (2000) describe the program that has been written to facilitate the development of knowledge based systems that use the generic actual argument model. This module traverses the actual argument tree for a user in the order specified by the algorithm and generates an XML document with an appropriate document type definition file. When this is paired with a style sheet customised for refugee law, a determination is automatically generated that expresses the flow of reasoning in a manner that is quite plausible despite using no discourse analysis techniques.
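
The traversal-to-document idea can be conveyed in a few lines. The sketch below is ours: the element names and the pre-order heuristic are illustrative, and the real ArgumentDeveloper document type definition is not reproduced here. It walks the toy argument tree from the earlier Split Up sketch and emits nested XML elements.

    import xml.etree.ElementTree as ET

    def to_xml(node, leaf_values, parent=None):
        """Pre-order traversal of an argument tree into nested <argument> elements."""
        elem = (ET.SubElement(parent, "argument") if parent is not None
                else ET.Element("argument"))
        elem.set("claim", node.name)
        elem.set("value", node.evaluate(leaf_values))
        for child in node.children:
            to_xml(child, leaf_values, elem)
        return elem

    doc = to_xml(tree, {"homemaker contributions": "substantial"})
    print(ET.tostring(doc, encoding="unicode"))

Paired with a style sheet, such a nested structure already dictates a plausible ordering of sections in a draft determination.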

Yearwood and Stranieri (2000) developed and implemented their “Argument Developer Agent” shell which allows the building and storage of versions of the generic argument framework within a domain and an interface for the development of actual arguments. The argument shell consists of the following components:

  • A generic argument editor that enables a knowledge engineer to enter a tree of generic arguments within a domain.

  • An actual argument editor that enables a user to enter actual arguments. This identifies the appropriate argument in the generic structure based on the text used by the user in a notepad interface. The notepad interface was later replaced by a dialogue interface to interact with the TOURIST agent, in an application to tourism which we briefly discuss in Section 3.11.6.4 below.

  • An inference engine that can infer a value for a claim from data item values by invoking the procedure embedded in an argument.

  • A dialogue generator that models the relationships between arguments, such as A supports B, A rebuts C and D, A extends G. This is important for modelling the way in which two or more parties apply arguments in a dialogue.

A knowledge engineer using the argumentation shell first maps out all the generic arguments. The claim of each generic argument except for the culminating one, is a data item for another argument so a tree of arguments is constructed.

In addition to the need to generate draft determinations rapidly, the Embrace project provided the vehicle for demonstrating that the generic actual model improves information retrieval. This is implemented through the integration of an information retrieval module into the ArgumentDeveloper shell. This module automatically generates a search engine query by assembling all terms used in an argument with a list of keywords associated with the argument. Matthijssen (1999) demonstrated improved precision and recall figures using keyword lists attached to the original Toulmin argument structure. The information retrieval query takes all variable names and values, in addition to a list of terms associated with each generic argument, in order to generate a query.
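
The query-assembly step is straightforward to picture. The following sketch is ours, continuing the generic argument example from Section 3.11.5; the real module’s query syntax is not documented here. It concatenates the variable names and values of a generic argument with an attached keyword list.

    def ir_query(generic, extra_keywords):
        """Assemble a bag-of-words query from a generic argument (illustration)."""
        terms = [generic.claim.name] + list(generic.claim.values)
        for v in generic.data:
            terms.append(v.name)
            terms.extend(v.values)
        terms.extend(extra_keywords)
        return " ".join(sorted({t.lower() for t in terms}))

    print(ir_query(well_founded_fear_generic, ["refugee", "persecution", "Chan"]))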

3.11.6.3 The GetAid System for Legal Aid Eligibility

The GetAid system operates in the field of legal aid eligibility, where rapid prototyping of a web based application was a priority. The generic actual framework has been applied to acquire knowledge regarding decisions made by officers of Victoria Legal Aid, a government funded provider of legal services for disadvantaged clients, in assessing whether an applicant should receive legal aid. Applicants for legal aid must pass a merits test which involves a prediction about the likely outcome of the case in Court. This assessment involves considerable discretion and is performed by grants officers who have extensive experience in the practices of Victorian Courts.

A web-based knowledge-based system called GetAid was rapidly developed using the WebShell shell reported by Stranieri and Zeleznikow (2001a). Knowledge was modelled using two distinct techniques: decision trees for procedural-type tasks, and generic argument trees for tasks that are more complex, ambiguous or uncertain.

The GetAid development demonstrated that the generic actual argument model (GAAM) is a useful representation for rapid knowledge acquisition. In order to construct a generic argument tree, the expert is initially prompted to articulate factors (data item variables) that may be relevant in determining the ultimate claim, without any concern about how the factors combine to actually infer a claim value. For every factor (data item variable) articulated, a reason for the item's relevance must also be articulable. The possible values for each data item are then identified. The next step in the knowledge acquisition exercise involves viewing each data item as a claim and eliciting the data items that are used to infer its value, as sketched below.
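One way to picture this elicitation cycle is as a simple recursion. The sketch below is an assumption-laden stand-in for a knowledge engineer's dialogue with the expert: the ask stub, the prompts and the depth limit are all hypothetical, not part of GetAid.

```python
# Hypothetical sketch of the elicitation cycle described above.
MAX_DEPTH = 2

def ask(prompt, canned=()):
    print(prompt)          # stands in for a dialogue with the expert;
    return list(canned)    # a real session would return the expert's answers

def elicit_argument(claim_variable, depth=0):
    argument = {"claim": claim_variable, "children": []}
    for factor in ask(f"Which factors are relevant to '{claim_variable}'?"):
        ask(f"Why is '{factor}' relevant?")       # a relevance reason must be articulable
        ask(f"What values can '{factor}' take?")  # identify the possible values
        if depth < MAX_DEPTH:
            # each data item is now viewed as a claim in its own right
            argument["children"].append(elicit_argument(factor, depth + 1))
    return argument

tree = elicit_argument("grant legal aid")
```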

Once the tree is developed as far back as the expert regards appropriate for the task at hand, attention can then be focussed on identifying one or more inference procedures that may be used to infer a claim value from data item values. This proved difficult for the GetAid experts to articulate as rules, because the way in which the factors combine is rarely made explicit but forms part of the expertise gathered over many cases. Although it is feasible to attempt to derive heuristics, the approach we used was to present a panel of experts with an exhaustive list of all combinations of data item values as hypothetical cases and prompt for a likely decision. The decisions from the panel of experts were merged to form a dataset of records that were used to train a neural network for each generic argument.
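The sketch below illustrates this step under stated assumptions: the variables, the toy ask_panel stub and the majority-vote merge are illustrative only, not GetAid's actual elicitation tool.

```python
# Hypothetical sketch: enumerate every combination of data item values as a
# hypothetical case, collect one decision per expert, and merge by majority
# vote into a training set (one labelled record per combination).
from itertools import product
from collections import Counter

data_item_values = {
    "merits": ["strong", "weak"],          # illustrative variables only
    "prior_convictions": ["yes", "no"],
}

def ask_panel(case):
    # Placeholder for prompting the expert panel; a toy rule stands in here.
    return ["grant" if case["merits"] == "strong" else "refuse"] * 3

def majority(decisions):
    return Counter(decisions).most_common(1)[0][0]

dataset = []
for combo in product(*data_item_values.values()):
    case = dict(zip(data_item_values, combo))
    dataset.append((case, majority(ask_panel(case))))

# `dataset` now holds one labelled record per combination and could be used
# to train a neural network for this generic argument.
print(dataset)
```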

The construction of the GetAid, Split Up and Embrace systems illustrates the benefits of using the non-dialectical framework. These include hybrid reasoning, task decomposition, information retrieval, document generation and knowledge acquisition. These benefits can be seen to derive from the effectiveness of the generic actual model in structuring reasoning. In the e-Tourism application, first steps have been made toward the development of a dialectical model that is based on the non-dialectical model.

3.11.6.4 An Application Outside Law: eTourism

In the eTourism system, developed by Avery, Yearwood, and Stranieri (2001), dialogue occurs between three types of software agents: tourists, tour advisors and tour operators. The human tourist invokes an instance of a tourist agent on commencing a consultation session. The tour advisor has no human counterpart. The dialogue between the tourist and advisor agents is aimed at realising the community goal of recommending tours the tourist will enjoy. The tour operator invokes an operator agent in order to inform the advisor of tours it operates. A key feature of the approach presented here is that all agents share the same generic argument tree but can instantiate their own actual arguments. In this way, each agent’s beliefs are represented by actual arguments, but because these are instances drawn from a common generic argument tree, negotiation can be simplified.

Jennings, Parsons, Sierra, and Faratin (2000) noted that negotiation underpins any attempt at coordinating multiple agents (human or software). Accordingly, the architecture for the eTourism application is based on an agent-oriented approach in which each software agent represents world knowledge as arguments and interacts with other agents according to dialogue rules. An agent-based framework that places emphasis on negotiation must include three main components:

  • a negotiation protocol,

  • a negotiation object, and

  • an agent’s decision-making model.

Generic arguments are used in the eTourism project as a means of representing the knowledge shared by the agent community. As noted above, each agent's beliefs are represented by actual arguments drawn from a common generic argument tree, which simplifies negotiation between agents. This mapping between the negotiation protocol, the negotiation object and the agent's decision-making model lays the groundwork for developing applications based on multiple agents negotiating outcomes, because knowledge represented as generic/actual arguments helps to constrain the negotiation protocol, the negotiation objects, and the agent's decision-making model.

The generic argument constrains negotiation protocols in a manner convenient for agent-oriented architectures (this kind of software is discussed below in Section 6.1.6). The actual arguments of multiple agents can be readily compared and contrasted because each actual argument is an instantiation of the same generic argument. Operators that appear in dialectical argumentation, such as attack and accept, are readily implemented. An argument, A, attacks another argument, B, if A has a different claim value than B for the same claim variable. The source of the attack can be readily isolated: it may be due to different data item values, different certainties, or different inference procedures. An argument, C, accepts another argument, D, if it has the same claim variable and value. Identical acceptance is operationalised as the same claim, data and inference procedures, whereas similar acceptance occurs if the claims are the same but the data or inferences are not. Research was conducted to develop the dialectical model based on the generic/actual split.
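A minimal sketch of these operators, assuming a simple dictionary representation of actual arguments (the field names are hypothetical, not drawn from the eTourism system), might look as follows:

```python
# Sketch of the attack/accept operators over actual arguments instantiated
# from the same generic argument; representation is illustrative only.
def attacks(a, b):
    """A attacks B: same claim variable, different claim value."""
    return a["claim_var"] == b["claim_var"] and a["claim_val"] != b["claim_val"]

def accepts(a, b):
    """A accepts B: same claim variable and value. 'identical' if data and
    inference procedure also match, otherwise 'similar'."""
    if a["claim_var"] != b["claim_var"] or a["claim_val"] != b["claim_val"]:
        return None
    if a["data"] == b["data"] and a["inference"] == b["inference"]:
        return "identical"
    return "similar"

a = {"claim_var": "tour feasible", "claim_val": "yes",
     "data": {"budget": "low"}, "inference": "db_query"}
b = {"claim_var": "tour feasible", "claim_val": "no",
     "data": {"budget": "low"}, "inference": "neural_net"}
print(attacks(a, b))   # True: same claim variable, different claim values
```

Because every actual argument instantiates the same generic template, the source of an attack can be isolated simply by comparing the data, certainty and inference fields.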

Figure 3.11.6.4.1 illustrates an actual argument with data values set and a particular inference mechanism selected. It is an instantiated generic argument from Tourism where the claim is “The tour is feasible for the client”, based on the data items and values given in the diagram. The inference procedure may simply be a query against a database of information on tours. The justification can be given as one of the answers that satisfies the query, together with the appropriate information.

Fig. 3.11.6.4.1

An actual argument in the Tourism domain to support customised delivery of Tourism Information

Stranieri and Zeleznikow (2001b) proposed an agent-based, knowledge-based approach to help regulate copyright. Five knowledge-based systems were described that are sufficiently flexible to protect authors' rights without denying the public access to works for fair use purposes. The owner of a work and the users who wish to copy a portion of the work are participants in the discursive community and share the same generic arguments. In order to copy the work, users construct their own actual arguments. The agent representing the owner determines whether or not to release the work by constructing its own actual argument. The generic/actual framework simplifies the negotiation protocol and assists in the deployment of an agent-oriented approach.

3.11.7 Envoi

Argumentation can be seen to have been applied to knowledge engineering in the 1990s and 2000s in two ways: with an emphasis on the dialectical nature of argumentation, or with an emphasis on the structure of reasoning from a non-dialectical perspective. From the dialectical perspective, the way in which two or more participants in a discourse propose arguments that attack, rebut, defeat, subsume or accept others is paramount. From a non-dialectical perspective, the way in which claims are laid out and inferred from premises is the object of attention. The argument structure proposed by Toulmin (1958) does not clearly delineate a dialectical perspective from a non-dialectical one. Many applications of the Toulmin layout of arguments for knowledge engineering purposes vary the structure.

The variations made by Kathleen Freeman (1994), Trevor Bench-Capon (1998), Johnson, Zualkernan, et al. (1993), and others can be understood as the result of different emphases on the dialectical or non-dialectical perspective, though in many cases the distinction is still blurred. A variation to the Toulmin structure called the Generic Actual Argument Model has been advanced in the present Section 3.11, where the distinction between dialectical and non-dialectical argumentation concepts is clearly defined. The GAAM is a non-dialectical model that facilitates hybrid reasoning, information retrieval, document drafting, knowledge acquisition and data mining. The non-dialectical GAAM has been applied in the construction of systems in refugee law, family law, eligibility for legal aid, and copyright law. A dialectical model based on the GAAM is under investigation, though early results with the automated provision of e-Tourism advice using an agent architecture indicate that a dialogue model is more readily realised if built on the non-dialectical base.