
2.1 Introduction

In this chapter, a comprehensive review of the literature is presented, focusing on two important aspects of the research problem. The first aspect (discussed in Sect. 2.3) is the study of argumentation models, frameworks and applications in different areas of research. The objective of this study is to identify the key elements of argumentation, its strengths and weaknesses, and to exploit them to address the challenges faced by Semantic Web applications. The second aspect (discussed in Sect. 2.7) is the study and categorization of existing approaches for reasoning in Semantic Web applications. In Sect. 2.8, a critical evaluation of the existing literature is given and seven critical research issues that need attention are identified in order to provide a framework for argumentation support in Semantic Web applications.

2.2 Basic Definitions

In this section, some important definitions are outlined in order to prepare the reader for a better understanding of the concepts discussed in this chapter.

2.2.1 Argumentation

Argumentation is defined as “a verbal and social activity of reason aimed at increasing (or decreasing) the acceptability of a controversial standpoint for the listener or reader, by putting forward a constellation of propositions intended to justify (or refute) the standpoint before a rational judge”. It is the field of study in which rhetoric, logic and dialectic meet (Rahwan et al. 2007b).

2.2.2 Argumentation Systems

The applications governed by the rules of argumentation are known as argumentation systems (Munoz and Botia 2008). Different argumentation models and frameworks have been used in applications to address issues in different domains of research, and all of them share the following important notions:

  • the definition of argument;

  • the notion of conflict between arguments;

  • the notion of defeat;

  • an argumentation semantics that selects acceptable (justified) arguments (possibly including an underlying logical language and a notion of logical consequence).

2.2.3 Argument, Rebuttal, Undercut and Acceptable Arguments

Argumentation is inherently a process rather than an instant picture and the building blocks of argumentation are arguments and the relationships between those arguments. According to the definitions in the literature by Walton (2009); Palau and Moens (2009); Besnard and Hunter (2008), an argument is a set of statements made up of a minimum of three parts:

  • a conclusion, also known as a claim, which is a proposition that can be either true or false; claims may in turn be used to derive other claims;

  • a set of premises used to support the conclusion;

  • inference or reasoning steps from premises to conclusion.

The support of an argument provides the reason (justification) for the claim of the argument. An argument can be supported by other arguments, known as its sub-arguments. Counter-arguments or rebuttals are arguments that attack an argument with a contradictory claim. Counter-arguments, in turn, may be defeated and the process may continue, resulting in the construction of argumentation lines (Garcia and Simari 2004). An undercutting argument is an argument whose claim contradicts some of the assumptions or inference steps of another argument.

For arguments to be acceptable, they must be weighed, compared and evaluated to identify the set of warrants and a conclusion which convinces all decision makers. An acceptable set of arguments is coherent and strong enough to defend itself against any attacking argument.

2.2.4 Argumentation Scheme

During the process of argumentation, relationships among arguments link them with one another in a certain pattern to support the ultimate conclusion. Such linking patterns are called Argumentation Schemes (Walton 2005). A leading example of an argumentation scheme is that which represents the argument from expert opinion (Walton 1997). Argument from expert opinion can be a reasonable argument if it meets the conditions displayed in the following argument form, where A is a proposition, E is an expert, and D is a domain of knowledge: E is an expert in domain D. E asserts that A is known to be true. A is within D. Therefore, A may plausibly be taken to be true.

2.2.5 Argumentation Life Cycle

According to Eemeren and Grootendorst (2004); Walton (2009), four tasks fall under the umbrella of argumentation: identification, analysis, evaluation and invention. The identification task involves detecting a difference of opinion, constructing an argument and, if possible, attaching it to an argumentation scheme. In the analysis phase, the participant tries to find implicit premises and conclusions and to make them explicit in order to better evaluate the argument. Arguments missing some premises or, in some instances, a conclusion are termed enthymemes. In the evaluation phase, the strength of an argument, i.e. strong or weak, is determined in accordance with the general criteria applicable to that argument. The last phase is invention, in which new arguments that can be used to prove a specific conclusion are constructed.

2.2.6 Types of Arguments

According to Walton (2006), three major types of argument are as follows:

  • In a deductive argument (e.g. mathematical proof in propositional logic), if the premises are true, then the conclusion must be true. The reasoning process based on deductive arguments is known as deductive reasoning.

  • An inductive argument involves a kind of generalization from the empirical evidence gathered. Inductive arguments sometimes use statistical techniques to establish the strength (or confidence) of the supported claim. The reasoning based on inductive arguments is known as inductive reasoning.

  • In a presumptive argument, the conclusions are said to be plausible given the premises. Plausibility is different from probability. While probability is determined by reasoning from statistical evidence, plausibility states that the conclusion holds by default provided no adequate evidence supports the contrary view.

2.2.7 Patterns of Arguments

During the argumentation process, arguments can be arranged in three ways or patterns, called complex argumentation patterns, as discussed by Eemeren et al. (2002); Zarefsky (2009) and Reed et al. (2007).

  • Subordinative argumentation: In this pattern of argumentation, arguments are arranged in a serial structure and depend on one another in a specific order to carry the resolution.

  • Coordinative/linked argumentation: Arguments are arranged in a linked structure; each argument on its own is insufficient, and the entire group of arguments taken together is needed to carry the resolution.

  • Multiple/Parallel/Convergent argumentation: Each argument is independent of the others and each is sufficient to carry the resolution.

2.2.8 Monological and Dialogical Argumentation

According to Rotstein et al. (2010); Besnard and Hunter (2008), argumentation is monological if a single agent or entity has collated the knowledge to construct arguments for and against a particular conclusion. If a set of entities or agents interacts to construct arguments for and against a particular claim, then such argumentation is called dialogical argumentation. Newspaper articles, political speeches, review articles, or problem analysis by an individual seeking to draw a conclusion are examples of monological argumentation, whereas lawyers arguing in court, trader negotiations and debates on an issue are examples of dialogical argumentation.

2.2.9 Static and Dynamic Argumentation Framework

The argumentation framework is considered to be dynamic if the knowledge-base from which the arguments are derived is dynamic, i.e. it can be changed during the argumentation process, either by external changes or via guided changes. In a static argumentation framework, by contrast, a single set of evidence is used in the argumentation process, i.e. the knowledge-base does not change during the process of argumentation. As a result, only one instance of the argumentation framework exists (Rotstein et al. 2010).

2.3 Argumentation-Based Models, Frameworks and Applications

In Chap. 1, a general introduction to argumentation was given. In this section, it is elaborated in greater detail. The current literature on argumentation models, frameworks and applications can be divided into two broad categories:

  1. Philosophical argumentation: models, frameworks and applications emphasising enrichment of the internal structure of an argument, as in the work of Toulmin (2003), are considered philosophical models of argumentation.

  2. Logic-based argumentation: frameworks and applications built on a logic-based argumentation framework are grouped under the umbrella of logical models of argumentation. The current frameworks and applications that exploit argumentation models for reasoning on the WWW are studied and compared.

2.4 Philosophical Models of Argumentation

“I see what your premises are”, says the philosopher, “and I see your conclusion. But I just don’t see how you get there. I don’t see the argument”. These statements distinguish the notion of argument in philosophy from the technical notion of argument in logic by placing greater emphasis on the internal reasoning structure that leads from the premises to a conclusion (Parsons 1996). The history of argumentation in philosophy can be traced back to the beginnings of rhetoric in ancient Greece. Rhetoric is the art of using language to communicate effectively. Citizens learned techniques to argue in court so that they could defend themselves. Aristotle carried out a systematic treatment of argumentation and rhetoric. Until the 1950s, argumentation was based on rhetoric and logic, but in 1958, Toulmin provided a logical structure of arguments and explained how the process works, using it as a tool to analyze various kinds of philosophically-problematic reasoning. Perelman (1969) sought to describe the techniques of argumentation used to obtain the approval of others for one’s opinions, calling this the ‘new rhetoric’. Both Toulmin and Perelman tried to present an alternative to formal logic that was better suited to analyzing everyday communication. Eemeren and Grootendorst (2004) studied argumentation as a means of resolving differences of opinion by considering argumentation as a discourse activity. They proposed pragma-dialectical theory which views argumentation as ideally being part of a critical discussion which progresses through four discussion stages to resolve a difference of opinion: the confrontation stage, opening stage, argumentation stage and concluding stage.

In this chapter, the existing literature on argumentation, based on philosophical concepts, is grouped into one of the following two categories:

  1. Theoretical models of argumentation.

  2. Argumentation frameworks and applications.

In the following section, each category is discussed in detail.

2.4.1 Theoretical Models of Argumentation

2.4.1.1 Toulmin’s Model and its Extensions

Toulmin, a British philosopher, pointed out that formal logic relies on the rigorous testing of arguments against mathematical rules to declare them either valid or invalid, which is of very little practical value (Toulmin 2003; Freeley and Steinberg 2008). He proposed a model to better understand the structure of the practical reasoning that occurs in any argument. He believed that reasoning is much more closely associated with the activity of testing and sifting existing ideas through the process of justification than with using inference to discover new ideas. He categorized premises in such a way that an argument was provided with a richer structure, one which corresponds more closely to the way in which arguments are actually presented.

Fig. 2.1 An illustration of Toulmin’s model of argument structure (Toulmin 2003)

He distinguished six parts in an argument’s structure and presented these in a diagrammatic representation, as depicted in Fig. 2.1. The elements are listed below, followed by a small structured sketch:

  1. Claims: Every argument makes assertions based on data. The assertion of an argument is the claim of the argument.

  2. Grounds: Data and hard facts, plus the reasoning behind the claim, used to establish the foundation of the claim.

  3. Warrants: Evidence and reasoning that justify the move from grounds to claim. Warrants are not self-validating.

  4. Backing: The backing (or support) for an argument gives additional support to the warrant by answering different questions.

  5. Modal qualification or degree of cogency: Qualifying the claim to express the degree of cogency or modal specification, that is, the extent to which the argument is both sound and intellectually compelling. Toulmin used modal qualification to express the concept of degree of cogency. The degrees of cogency are certainty, probability, plausibility and possibility.

  6. Rebuttals or counter-arguments: Any rebuttal is an argument in itself, and thus may include a claim, warrant, backing and so on; it can also have a rebuttal of its own.
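To make the relationship between these six elements concrete, the sketch below encodes Toulmin’s standard British-citizenship example (revisited in Sect. 2.4.2.2) as a plain Python data structure; the class, the field names and the exact wording are illustrative assumptions, not part of Toulmin’s own notation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    """Container mirroring the six elements of Toulmin's model (Fig. 2.1)."""
    claim: str                  # the assertion the argument makes
    grounds: List[str]          # data and hard facts supporting the claim
    warrant: str                # reasoning licensing the step from grounds to claim
    backing: str                # additional support for the warrant
    qualifier: str              # degree of cogency (certainty, probability, ...)
    rebuttals: List[str] = field(default_factory=list)  # counter-arguments / exceptions

# Toulmin's standard citizenship example, encoded here purely for illustration.
citizenship = ToulminArgument(
    claim="Harry is a British subject",
    grounds=["Harry was born in Bermuda"],
    warrant="A man born in Bermuda will generally be a British subject",
    backing="The relevant statutes and legal provisions on nationality",
    qualifier="presumably",
    rebuttals=["Both his parents were aliens",
               "He has become a naturalised American"],
)
print(f"{citizenship.qualifier}, {citizenship.claim}")
```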

According to Baroni et al. (1998), Toulmin’s conceptual model of argumentation can help to classify the various ways in which an argument can be challenged. According to Toulmin, three possible strategies are:

  1. If the initial data of the opponent is wrong, all the conclusions derived from that data will be undermined.

  2. If there is a flaw in the line of reasoning that relates data to the conclusion, i.e. the warrant, this might mean questioning the knowledge used in the current context or questioning the inference rules.

  3. If inconsistencies can be detected in the opponent’s background knowledge, challenge the backing.

Table 2.1 summarizes the different extensions made to Toulmin’s model of argument representation. Each of these extensions was made with a specific domain and purpose in view, as illustrated in the table.

2.4.1.2 Argumentation Schemes Proposed by Walton and Reed

Argumentation schemes provide a way to perform reasoning over a set of premises and a conclusion. These argumentation schemes have emerged from informal logic and help to categorize the way arguments are built, aiming to fill the gap between logic-based applications and human reasoning by providing schemes which capture stereotypical patterns of human reasoning, e.g., the argument from expert opinion scheme. Formally, an argumentation scheme is composed of a set of premises \(\mathrm{{A}}_{i}\), a conclusion C, and a set of critical questions \(\mathrm{{CQ}}_{i}\) whose role is to challenge, and potentially defeat, the derivation of the conclusion (Rahwan et al. 2007a; Letia and Groza 2008).
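As a concrete reading of this definition, the following sketch represents a scheme as a record of premises, a conclusion and critical questions, instantiated with the argument from expert opinion of Sect. 2.2.4; the class and the exact wording of the critical questions are illustrative assumptions rather than a fixed formalization.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ArgumentationScheme:
    """A scheme: premises A_i, a conclusion C and critical questions CQ_i."""
    name: str
    premises: List[str]
    conclusion: str
    critical_questions: List[str]

# Argument from expert opinion (Sect. 2.2.4); the critical questions are
# paraphrased for illustration only.
expert_opinion = ArgumentationScheme(
    name="Argument from expert opinion",
    premises=[
        "E is an expert in domain D",
        "E asserts that A is known to be true",
        "A is within D",
    ],
    conclusion="A may plausibly be taken to be true",
    critical_questions=[
        "Is E a genuine expert in D?",
        "Did E really assert A?",
        "Is A consistent with what other experts in D assert?",
    ],
)
```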

The aim of an argument in presumptive or plausible reasoning is to shift the burden of proof in a dialogue. Blair (1999) describes and discusses approximately thirty such argumentation schemes. For each scheme, he provides a description, a formulation, a set of associated critical questions, at least one (and often several) actual or invented examples of the scheme in use, and a discussion of the scheme, in which he typically draws attention to its salient properties, relates it to other schemes, discusses the fallacies associated with it, comments on its presumptive force, and mentions typical contexts of its use. Fallacies are violations of the rules of critical discussion that hinder the resolution of a difference of opinion. Blair listed six characteristics of fallacy: (1) dialectical, (2) pragmatic, (3) commitment-based, (4) presumptive, (5) pluralistic and (6) functional. These characteristics help in the identification, classification and evaluation of fallacies. Subsequently, Walton (2005) addressed the justification of particular schemes.

Table 2.1 Extension to Toulmin’s model of argument structure

Reed and Walton (2003) also showed that argumentation schemes help users to identify and evaluate common types of argumentation in daily discourse, but the ways in which argumentation schemes drive a dialogue onwards, through a combination of critical questioning and relevance maintenance, is largely unaddressed. Therefore, the authors explored the relationship between the argument-as-process and argument-as-product representations, using, as a focus, the roles that argumentation schemes play in the two approaches. Arguments found in text are considered to be products because they are already there, and when the argument is used to fill the unstated premises or conclusions, the task is seen as argument-as-process. To understand this notion, suppose that Bob and Helen are having a critical discussion on tipping, and that Helen is against tipping. She thinks that tipping is a bad practice that ought to be discontinued. Suppose in this context, Helen puts forward the following argument: Dr. Phil says that tipping lowers self-esteem.

Dr. Phil is an expert psychologist, so the argument is, at least implicitly, an appeal to expert opinion. It is also, evidently, an instance of argument from consequences. Helen is telling her opponent, Bob, that lowering self-esteem is a bad consequence of an action. Her argument is based on the assumption that since this bad outcome is a consequence of tipping, tipping itself is a bad thing. Thus, Helen’s argument is an enthymeme, that is, it is a chain of argumentation that can be reconstructed as follows: The self-esteem argument:

  • Dr. Phil says that tipping lowers self-esteem. Dr. Phil is an expert in psychology, a field that has knowledge about self-esteem.

  • Tipping lowers self-esteem.

  • Lowering self-esteem is a bad thing.

  • Anything that leads to bad consequences is itself bad as a practice.

  • Tipping is a bad practice.

Fig. 2.2 Illustration of the self-esteem argument

How can one know this? How can one fill in the unstated premises and link them with other premises and conclusions in a chain of argumentation that represents Helen’s line of argument? One tool which is needed is the argumentation scheme. Figure 2.2 illustrates the self-esteem argument in which the argumentation scheme is used to reach a conclusion.

2.4.2 Argumentation Frameworks and Applications

2.4.2.1 Zeno Argumentation Framework

The Zeno argumentation framework (Gordon and Karacapilidis 1997) is a formal model of argumentation based on the informal models of Toulmin’s and Rittel’s Issue-Based Information Systems (IBIS). The Zeno model contains the argumentation elements: issue, position, pro-argument, contra-argument, preference, decision, and comment, as illustrated in Fig. 2.3. A message in the Zeno discussion forum (mediation system) may contain more than one such argumentation element, if the author expresses complex information in a single contribution. Most contributions in a forum will arise as replies to existing arguments, so that an argumentation tree develops. Zeno uses five standards of proof:

  1. Scintilla of Evidence: the choice has some pros.

  2. Preponderance of Evidence: the pros outweigh the cons, given the preference constraints.

  3. No Better Alternative: no other choice is preferred on the basis of the preference constraints.

  4. Best Choice: one choice is preferred to every alternative choice on the basis of the preference constraints.

  5. Beyond Reasonable Doubt: there is no con reason against the choice, and no pro reason for an alternative.

Fig. 2.3 Zeno argumentation model

2.4.2.2 Carneades Argumentation Framework

The Carneades argumentation framework (Gordon and Walton 2006; Gordon et al. 2007) is a formal, mathematical model of argument evaluation which applies proof standards to determine the defensibility of arguments and the acceptability of statements on an issue-by-issue basis. It carries features from both the Zeno framework and argumentation schemes. The framework uses three kinds of premises (ordinary premises, presumptions and exceptions) and information about the dialectical status of statements (undisputed, at issue, accepted or rejected) to model critical questions in such a way as to allow the burden of proof to be allocated to the proponent or the respondent, as appropriate. The proof standards of Carneades are listed below, followed by a minimal computational sketch:

  1. Scintilla of Evidence: supported by at least one defensible pro argument.

  2. Preponderance of Evidence: the strongest defensible pro argument outweighs the strongest defensible con argument, if there is one.

  3. Dialectical Validity: supported by at least one defensible pro argument, and none of the con arguments are defensible.

  4. Beyond Reasonable Doubt: supported by at least one defensible pro argument; all of the pro arguments are defensible and none of the con arguments are defensible.
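The sketch below gives a minimal computational reading of these four standards, reducing each pro or con argument to a pair (defensible?, weight); this flattening, and the use of numeric weights for the preponderance comparison, are simplifying assumptions rather than the full Carneades model.

```python
from typing import List, Tuple

# Each argument is simplified to (defensible?, weight); weights only matter
# for the preponderance standard in this sketch.
Arg = Tuple[bool, float]

def scintilla(pro: List[Arg], con: List[Arg]) -> bool:
    # at least one defensible pro argument
    return any(d for d, _ in pro)

def preponderance(pro: List[Arg], con: List[Arg]) -> bool:
    # strongest defensible pro outweighs strongest defensible con (if any)
    best_pro = max((w for d, w in pro if d), default=None)
    best_con = max((w for d, w in con if d), default=0.0)
    return best_pro is not None and best_pro > best_con

def dialectical_validity(pro: List[Arg], con: List[Arg]) -> bool:
    # one defensible pro argument and no defensible con argument
    return scintilla(pro, con) and not any(d for d, _ in con)

def beyond_reasonable_doubt(pro: List[Arg], con: List[Arg]) -> bool:
    # all pro arguments defensible, none of the con arguments defensible
    return bool(pro) and all(d for d, _ in pro) and not any(d for d, _ in con)
```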

Fig. 2.4 Carneades argumentation model

Figure 2.4 is a reconstruction of Toulmin’s standard example about British citizenship in the Carneades framework. The ‘rebuttal’ is modeled as an exception and the backing as an assumption. Both the datum and warrant are ordinary premises. Alternatively, backing could be modeled as the premise of an additional argument pro the warrant, by generalizing the concept of an argumentation scheme to cover patterns with multiple arguments. Carneades provides tools that support a variety of argumentation tasks, including:

  1. argument mapping and visualization;

  2. argument evaluation, applying proof standards and respecting the distribution of the burden of proof;

  3. argument construction from OWL ontologies and defeasible rules;

  4. argument interchange in XML, using the Legal Knowledge Interchange Format (LKIF).

2.4.2.3 Sense-Making Tool: Araucaria

Argument diagramming is often claimed to be a powerful method for analysing and evaluating arguments (Reed and Rowe 2007). Ongoing work is being conducted on building software for the analysis of arguments, resulting in the development of software for specific groups of users with particular needs, leading to a plethora of such tools. As a result, there is a need for a tool that can support different theoretical approaches to analyze arguments.

The Araucaria tool aims to do this. The Araucaria system (Reed and Rowe 2004) has been used to mark up and diagram textual arguments, supporting analysts’ work in reconstruction and identification. Araucaria is a freely available, open source software package which allows the text of an argument to be loaded from an Argument Markup Language (AML) file, and provides numerous tools for marking up this text and producing Standard, Toulmin, and Wigmore diagrams. Araucaria supports different styles of argumentation, as well as translation features from one style to another. It is currently being used in the construction of an online repository of arguments drawn from newspaper editorials, parliamentary reports and judicial summaries from around the world. Online Visualization of Argument (OVA) is the web-based version of Araucaria.

It is evident from the discussion of the philosophical models of argumentation that argumentation plays a pivotal role as a reasoning methodology in the field of philosophy. In the next section, logic-based models of argumentation and their exploitation in the field of Artificial Intelligence are discussed.

2.5 Logic-Based Models of Argumentation and Applications

Traditional models of reasoning are monotonic and unable to cope with incomplete, uncertain and dynamic information. These reasoning models are built on first-order predicate logic, or a subset thereof, and perform reasoning under certain assumptions, such as:

  1. the given problem is fully specified (the solution to the problem lies in the specified information);

  2. the specifications are consistent;

  3. new facts are also consistent with the existing specifications;

  4. new facts do not lead to the retraction of previous conclusions.

If \({\mathcal {T}}\) and \({\mathcal {G}}\) are sets of statements and \({\mathcal {F}}\) is a statement, then monotonic reasoning (the monotonicity of entailment) can be expressed formally as follows:

$$\begin{aligned} {\mathcal {T}} \, \models \, {\mathcal {F}} \;\Longrightarrow \; {\mathcal {T}}\,\cup \, {\mathcal {G}} \, \models \, {\mathcal {F}}. \end{aligned}$$
(2.1)

It is evident from Eq. (2.1) that if the set of axioms in monotonic reasoning is enlarged, previously derived conclusions cannot be retracted. Such reasoning does not add to our knowledge base and merely rearranges it (Nute 1994). This is a basic property which makes sense for mathematical knowledge but is not, in general, desirable for knowledge representation.
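A trivial propositional instance makes the property concrete: taking \({\mathcal {T}}=\{p,\; p\rightarrow q\}\),

$$\begin{aligned} {\mathcal {T}} \,\models \, q \quad \text {and}\quad {\mathcal {T}}\cup \{r\} \,\models \, q, \end{aligned}$$

i.e. adding the unrelated statement \(r\) can never force the retraction of the conclusion \(q\).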

In the 1970s, argumentation was considered to be another way to formalise defeasible reasoning or non-monotonic reasoning because of its close resemblance to human patterns of reasoning, indicating that argumentation is the way in which a person takes a standpoint and defends this standpoint. It is much more related to day-to-day argumentation than the reasoning of logicians, who tend to concentrate on the way in which conclusions are derived from premises (Eemeren et al. 1996). Some examples are as follows:

  1. A well-known example from Artificial Intelligence is as follows:

     Argument: Tweety flies because Tweety is a bird.

     Counter-argument: Tweety is different, therefore it does not fly.

  2. In epistemology, the standard example is:

     Argument: This looks red, therefore it is red.

     Counter-argument: But the ambient light is red, therefore it is not red.

In recent years, argumentation has gained considerable attention from the artificial intelligence research community, which has led to the investigation of argumentation and its applications in various domains. From its theoretical foundations, argumentation can be integrated into a number of real-world applications, such as planning, multi-agent systems (MAS), legal reasoning, knowledge engineering, the analysis of news reports, clustering, argumentation support systems, mediation systems and computer-supported collaborative argumentation (Chesnevar et al. 2006b).

In the field of AI, researchers are not particularly interested in the internal structure of an argument. In contrast, they consider an argument to be a single entity and hence are much more interested in modeling and evaluating the relationships between arguments to reach a conclusion. In this section, I broadly divide the current literature into two categories as follows:

  1. Argumentation frameworks.

  2. Argumentation systems or applications.

2.5.1 Argumentation Frameworks

Broadly speaking, I can divide argumentation frameworks into the following five categories:

  1. Abstract argumentation framework.

  2. Bipolar argumentation framework.

  3. Preference-based argumentation framework.

  4. Value-based argumentation framework.

  5. Assumption-based argumentation framework.

2.5.1.1 Abstract Argumentation Framework

Dung (1995) proposed a very influential semantic foundation for argumentation frameworks based on the notion of the acceptability of arguments. He defined the characteristics of the framework according to the relationships between arguments and between sets of arguments. He defined an argumentation framework, emerging from logic programming, as a pair \(AF=<{\mathcal {A}},attack>\) where \({\mathcal {A}}\) is the set of arguments and \(attack\) is a binary relation on \({\mathcal {A}} \times {\mathcal {A}}\) representing the conflict between them. If \((A,B) \in attack\), then argument \(A\) attacks (defeats) argument \(B\). The notion of \(defence\) is defined from the notion of defeat as follows: an argument \(A_{i}\) defends \(A_{j}\) against \(B\) iff \((B,A_{j})\in attack\) and \((A_{i},B)\in attack\). He defined certain properties of the argumentation framework which help to categorize arguments into different extensions, such as preferred, stable and grounded extensions. These properties are:

  1. Conflict-free: Given an AF \(F=({\mathcal {A}},attacks)\), a set \(S\subseteq {\mathcal {A}}\) is conflict-free in \(F\) if, for each \(a,b\,\in \,S\), \((a,b)\,\notin \,attacks\).

  2. Admissible set: Given an AF \(F = ({\mathcal {A}},attacks)\), a set \(S\subseteq \,{\mathcal {A}}\) is admissible in \(F\) if

     (a) \(S\) is conflict-free in \(F\), and

     (b) each \(a\in S\) is defended by \(S\) in \(F\), i.e. for each \(b\in {\mathcal {A}}\) with \((b,a)\in \,attacks\), there exists a \(c\in S\) such that \((c,b)\in \,attacks\).

The framework abstracts away the details of the underlying language, argument structure, origin and nature of arguments and argumentation rules. The presented semantics are therefore clearer and more precise and, as a result, the relationships between arguments can be analyzed in isolation from other relationships (e.g. implications). Additionally, this framework encompasses a large variety of specific formalisms, such as non-monotonic reasoning and game theory; as a result, it can be regarded as a powerful tool for comparing different systems. Although his work elaborated in detail the semantics of the argumentation network, Dung took an argument as an atomic entity, and his notion of attack is also weak, because it considers all arguments to be of the same strength. If an argumentation framework contains no even cycles, the dispute is resolvable and this resolution can be achieved in time linear in the number of attacks. The framework also assumes that a complete set of arguments is given together with the set of conflicts between arguments, and focuses on the definition of the status of an argument. Argumentation semantics define the properties required for a set of arguments to be acceptable. A set of arguments exhibiting these properties is called an extension of the argumentation framework. The main semantics are listed below, followed by a minimal computational sketch:

  • Admissible semantics A set \(E \subseteq A\) is admissible if and only if \(E\) is conflict-free and \(E\) defends all its elements.

  • Preferred semantics A set \(E \subseteq A\) is a preferred extension if and only if \(E\) is maximal for set inclusion among the admissible sets.

  • Stable semantics A set \(E \subseteq A\) is a stable extension if and only if \(E\) is conflict-free and every \(a \in A \setminus E\) is attacked by an element of \(E\).

  • Grounded semantics The grounded extension of \(<\) A, R \(>\) is the least extension with respect to set inclusion among the complete extensions, i.e. the smallest admissible set that contains every argument it defends.
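The following sketch, assuming a small finite AF encoded as a set of argument names and a set of attack pairs, computes the grounded extension by iterating the characteristic function (the set of arguments defended by the current set) from the empty set; the function names and the three-argument example are illustrative.

```python
from typing import Set, Tuple

Attack = Tuple[str, str]   # (attacker, attacked)

def defended_by(s: Set[str], attacks: Set[Attack], a: str) -> bool:
    """True iff every attacker of a is itself attacked by some member of s."""
    attackers = {b for (b, x) in attacks if x == a}
    return all(any((c, b) in attacks for c in s) for b in attackers)

def grounded_extension(arguments: Set[str], attacks: Set[Attack]) -> Set[str]:
    """Iterate F(S) = {a | S defends a} from the empty set up to a fixed point."""
    s: Set[str] = set()
    while True:
        nxt = {a for a in arguments if defended_by(s, attacks, a)}
        if nxt == s:
            return s
        s = nxt

# Illustrative AF: B attacks A, C attacks B; C is unattacked.
arguments = {"A", "B", "C"}
attacks = {("B", "A"), ("C", "B")}
print(grounded_extension(arguments, attacks))   # {'A', 'C'}
```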

Table 2.2 presents the syntax used later in this chapter to represent arguments, sets of arguments and the relationships between arguments. Table 2.3 gives a comparison of abstract argumentation frameworks on the basis of the notion of attack, argument acceptability criteria, extension and miscellaneous features. The notion of attack is defined as a tuple of the following form:

$$\begin{aligned}&\{(argument \;OR\;set\;of\;arguments),(argument\; OR \;set\;of\;counter\_argument),\\&\quad nature\;of\;attack, set\; of\; constraints\} \end{aligned}$$
Table 2.2 Symbols with their respective description

As proposed by Dung (1995), in Table 2.3, the notion of attack is represented as (A, B, \(\rightarrow \)), where \(A\) is an argument and \(B\) is its counter-argument and there is a direct attack between A and B. Similarly, (\({\mathcal {S}},{\mathcal {Y}},\rightarrow \)) represents a direct attack between the set of arguments \({\mathcal {S}}\) and the set of arguments \({\mathcal {Y}}\). Similarly, (\({\mathcal {S}}\), (A, B) | A, \(\circlearrowleft \)) represents an attack between the set of arguments \({\mathcal {S}}\) and an argument \(A\) and indicates that there is also a recursive attack between argument \(A\) and its counter-argument B. It is evident from Table 2.3 that researchers built these argumentation frameworks on top of Dung’s framework by adding different flavours of attack, whereas the acceptability criteria and extensions remain quite consistent. Each of these frameworks will be discussed briefly below.

Table 2.3 Comparison of abstract argumentation frameworks

Bochman (2003) extended Dung’s work by the direct representation of global conflicts between sets of arguments, whereas Nielsen and Parsons (2007) introduced the notion of joint attacks in which a set of arguments can attack other arguments. Atkinson (2008) analyzed the two computational models of argumentation, i.e. the Abstract Argumentation Framework (AAF) and argumentation schemes. The AAF is the best framework to use when completely identified sets of arguments are available and a binary relationship exists between them. Very often, however, such a set is not available, in which case argumentation schemes can help to identify the ways in which arguments can be attacked or defended and assist in the evaluation of arguments with respect to a certain context. Once the contextual issues are resolved, arguments can be abstracted to an argumentation framework and evaluation can be carried out with respect to the logical relations between arguments. The author proposed an abstract argumentation scheme framework that represents the components of argumentation schemes in an argumentation framework. As a result, the structure of schemes is used to guide the dialogue and provide contextual elements of evaluation, whilst retaining the desirable properties of abstract frameworks to enable evaluation with respect to the logical relations between arguments.

Coste-Marquis et al. (2005) extended Dung’s framework with prudent semantics to better handle controversial arguments. Under prudent semantics, no two arguments belong to the same extension if one of them indirectly attacks the other. Coste-Marquis et al. (2006) also extended Dung’s framework to take into account several additional constraints on the admissible sets of arguments, expressed as a propositional formula over the set of arguments, called the constrained argumentation framework. All the frameworks discussed above are static argumentation frameworks. Cayrol et al. (2008) overcame the limitations of Dung’s framework by introducing the dynamic argumentation framework and studied the impact on the framework’s set of extensions of adding a new argument that interacts with one existing argument.

2.5.1.2 Bipolar Argumentation Frameworks

Most argumentation systems define only one type of relationship between arguments, i.e. an attack/defeat relationship. However, different studies reveal that another type of relationship may exist between arguments: the “support” relationship. Such an argumentation framework is called a bipolar argumentation framework (BAF) and is defined as \(\langle {\mathcal {A}},attacks_{def},attacks_{sup}\rangle \) where \({\mathcal {A}}\) is a set of arguments, \(attacks_{def}\) is a binary relation on \({\mathcal {A}}\) called the defeat relation and \(attacks_{sup}\) is another binary relation on \({\mathcal {A}}\) called the support relation.
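A minimal data-structure sketch of a BAF is given below; it simply records the two relations side by side and derives the one-step “supported attack” (a supports b and b attacks c), which is only one of the indirect interaction notions discussed in the BAF literature, and the class and method names are illustrative.

```python
from dataclasses import dataclass
from typing import Set, Tuple

@dataclass
class BipolarAF:
    arguments: Set[str]
    attacks: Set[Tuple[str, str]]    # attacks_def: (attacker, attacked)
    supports: Set[Tuple[str, str]]   # attacks_sup: (supporter, supported)

    def supported_attacks(self) -> Set[Tuple[str, str]]:
        """One-step supported attacks: a supports b and b attacks c => (a, c)."""
        return {(a, c)
                for (a, b) in self.supports
                for (b2, c) in self.attacks
                if b2 == b}

baf = BipolarAF(arguments={"a", "b", "c"},
                attacks={("b", "c")},
                supports={("a", "b")})
print(baf.supported_attacks())   # {('a', 'c')}
```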

Amgoud et al. (2008) provided a comprehensive survey on the use of bipolarity in argumentation frameworks and elaborated its importance in argumentation processes in real-world applications. Cayrol and Lagasquie-Schiex (2009, 2010) discussed bipolarity at the interaction level in the argumentation process. They defined the meta-argumentation framework and introduced the concept of coalition in BAF, based on the coherence of the admissible set. The arguments in a coalition cannot be used separately in the attack process. Oren et al. (2007) describe the evidential bipolar argumentation framework that supports argument schemes, burden of proof, and accrual of arguments. Table 2.4 provides a comparison of different bipolar argumentation systems. Most of these bipolar argumentation frameworks consider different types of attacks, such as direct and indirect. Some BAFs consider joint attacks between arguments, whereas others also consider attacks on an argument by a set of arguments. The BAF provides a more enriched structure for argument representation, such as coalitions.

2.5.1.3 Preference-Based Argumentation Frameworks

In argumentation frameworks, one argument may be preferred over another when it is more specific or has a higher probability or certainty. Such an argumentation framework is called a preference-based argumentation framework. It is defined as a triplet \(\langle {\mathcal {A}},attacks,\succeq \rangle \) where \({\mathcal {A}}\) is a set of arguments, \(attacks\) is the binary attack relation defined on \({\mathcal {A}} \times {\mathcal {A}}\) and \(\succeq \) is a (total or partial) pre-order (preference relation) defined on \({\mathcal {A}} \times {\mathcal {A}}\).
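One common way such a framework is used is to derive a defeat relation from the attack relation and the preference pre-order: an attack (a, b) succeeds as a defeat unless the attacked argument b is strictly preferred to the attacker a. The sketch below assumes the pre-order is encoded as a numeric rank per argument, which is a simplification made only for illustration.

```python
from typing import Dict, Set, Tuple

def defeats(attacks: Set[Tuple[str, str]],
            rank: Dict[str, int]) -> Set[Tuple[str, str]]:
    """Keep an attack (a, b) as a defeat unless b is strictly preferred to a.

    rank encodes the preference pre-order: higher rank = more preferred.
    """
    return {(a, b) for (a, b) in attacks
            if not rank.get(b, 0) > rank.get(a, 0)}

# b attacks a, but a is strictly preferred to b, so the attack does not defeat.
print(defeats({("b", "a")}, {"a": 2, "b": 1}))   # set()
```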

Table 2.5 provides a comparison of preference-based argumentation systems. Modgil (2009) extended Dung’s theory of argumentation to integrate meta-level argumentation about preferences between arguments to add more semantics to attack relationships between arguments. The result of an attack of one argument on another argument depends on the existence of a preference argument, stating the preference of the attacking argument over the attacked argument. The preferences between arguments are not predefined; instead, arguments claim them.

Table 2.4 Comparison of bipolar argumentation frameworks
Table 2.5 Comparison of preference-based argumentation frameworks

Amgoud and Cayrol (2002) address the acceptability of arguments in preference-based argumentation frameworks, proposing a proof theory for this preference-based argumentation framework. The proof theory verifies whether a given argument A is acceptable or not. The proof theory is presented as a dialogue tree between two players, PRO and OPP. Martinez et al. (2006) extended the notion of defeat in an argumentation framework. Depending on the outcome of the preference relation, an argument may be a proper defeater or a blocking defeater of another argument. Martinez et al. (2008) equipped the argumentation framework with a set of abstract attack relations of varied strength, such as strong defender, weak defender, normal defender and unqualified defenders.

2.5.1.4 Value-Based Argumentation Framework

In preference-based argumentation frameworks, it is not always possible to absolutely define the preference for an argument over its counter-argument, especially in practical reasoning, such as in law, politics and ethics. To address problems in such domains, a value-based argumentation framework has been proposed. A value-based argumentation framework (VAF) is a 5-tuple: \(VAF=<{\mathcal {A}},attacks,{\mathcal {V}},val,valpref{>}\) where \({\mathcal {A}}\) is the set of arguments, \(attacks\) is the binary attack relation defined on \({\mathcal {A}} \times {\mathcal {A}}\), \({\mathcal {V}}\) is a non-empty set of values, \(val\) is a function which maps from elements of \({\mathcal {A}}\) to elements of \({\mathcal {V}}\), and \(valpref\) is a preference relation on \({\mathcal {V}} \times {\mathcal {V}}\).
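Analogously, for a fixed audience a VAF can be read as inducing a defeat relation in which an attack (a, b) fails whenever the value promoted by b is preferred, under valpref, to the value promoted by a; encoding val as a dictionary and valpref as a rank over values is an illustrative simplification.

```python
from typing import Dict, Set, Tuple

def vaf_defeats(attacks: Set[Tuple[str, str]],
                val: Dict[str, str],
                valpref: Dict[str, int]) -> Set[Tuple[str, str]]:
    """An attack (a, b) defeats b unless val(b) is strictly preferred to val(a)."""
    return {(a, b) for (a, b) in attacks
            if not valpref[val[b]] > valpref[val[a]]}

# Illustrative audience that prefers the value 'life' over 'property':
# b's attack on a fails because a promotes the preferred value.
attacks = {("b", "a")}
val = {"a": "life", "b": "property"}
valpref = {"life": 2, "property": 1}
print(vaf_defeats(attacks, val, valpref))   # set()
```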

Table 2.6 provides a comparison of different value-based argumentation systems. Bench-Capon (2003) proposed a value-based argumentation framework to quantify the strength of arguments and discussed the possibility of persuasion in the face of uncertainty and disagreement. He argued that persuasion is pivotal in argumentation, and that the strength of an argument depends on the social values it advances; the success of one argument over another depends on the strength of the values advanced by the argument concerned. Haenni (2009) presented a formal theory of probabilistic argumentation to handle uncertain premises for which respective probabilities are known. Probability is used to measure the credibility (weight) of possible arguments and counter-arguments; thereafter, the overall probabilistic judgment of the uncertain proposition in question is carried out to reach a certain conclusion.

Table 2.6 Comparison of value-based argumentation frameworks

2.5.1.5 Assumption-Based Argumentation Framework

Assumption-based argumentation addresses the issues of how to find arguments, identify attacks, and exploit premises shared by different arguments. Formally, an assumption-based argumentation framework is a tuple \(\langle {\mathcal {L}},{\mathcal {R}},{\mathcal {A}},{\mathcal {M}}\rangle \) where

  • \(\langle {\mathcal {L}},{\mathcal {R}}\rangle \) is a deductive system, with a language \({\mathcal {L}}\) and a set of inference rules \({\mathcal {R}}\)

  • \({\mathcal {A}}\subseteq {\mathcal {L}}\) is a (non-empty) set, whose elements are referred to as \(assumptions\)

  • \({\mathcal {M}}\) is a total mapping from \({\mathcal {A}}\) into \({\mathcal {L}}\), where \({\mathcal {M}}(\alpha )\), written \(\lnot \alpha \), denotes the contrary of the assumption \(\alpha \).

In an assumption-based argumentation framework (ABF), arguments are deductions supported by assumptions. Bondarenko et al. (1993) presented an ABF in which a sentence is a non-monotonic consequence of a theory if it can be derived monotonically from a theory extended by means of acceptable assumptions. The notion of acceptability for such assumptions is formulated in terms of their ability to successfully counter-attack any attacking set of assumptions. The authors investigated applications of the proposed framework to logic programming, abductive logic programming, logic programs extended with classical negation, default logic, autoepistemic logic and non-monotonic modal logic. Dung et al. (2009a) provided a review of ABF and stated that ABF makes use of under-cutting as the only way in which one argument attacks another argument. Table 2.7 presents a comparison of two assumption-based argumentation frameworks.

Table 2.7 Comparison of assumption-based argumentation frameworks

2.5.2 Argumentation Systems

2.5.2.1 Abstract Argumentation System

Vreeswijk (1997) defined an abstract argumentation system that is capable of dealing with a number of problems of defeasible reasoning. He defined an abstract argumentation system as a triple \(({\mathcal {L}},R,\preceq )\) composed of a language \({\mathcal {L}}\) that allows negation in the head of a rule to express contradiction, a set of inference rules \(R\), and an ordering relation \(\preceq \) between arguments. He called this argumentation system a collection of defeasible proofs, or arguments of varying conclusive force. Although he assumed a predefined order among the rules, he also pointed out that conclusive force is not determined solely by syntactic structure; rather, further information is needed from the semantics of the discourse domain to establish whether one argument is stronger than another. He identified two types of rules, strict rules and defeasible rules, which can be chained together to form arguments.

2.5.2.2 Defeasible Logic Programming (DeLP) Server

The DeLP server, proposed by Garcia and Simari (2004), is based on Defeasible Logic Programming (DeLP), which is a general-purpose defeasible argumentation formalism based on logic programming, and is intended to model inconsistent and potentially contradictory knowledge. A defeasible logic program has the form \(\psi = (\Pi ,\Delta )\), where \(\Pi \) and \(\Delta \) stand for strict knowledge and defeasible knowledge, respectively. The set \(\Pi \) involves strict rules of the form \(P\,\leftarrow \,Q_{1},\ldots ,Q_{n}\) and facts (strict rules with an empty body), and is assumed to be non-contradictory (i.e., no complementary literals \(P\) and \(\thicksim P\) can be inferred, where \(\thicksim P\) denotes the contrary of \(P\)). The set \(\Delta \) involves defeasible rules of the form \(P\leftarrowtail Q_{1},\ldots ,Q_{n}\), which stands for “\(Q_{1},\ldots ,Q_{n}\) provide a tentative reason to believe \(P\)”. Rules in DeLP are defined in terms of literals. A literal is an atom \(A\) or the strict negation (\(\thicksim A\)) of an atom. Default negation (denoted as \(not\,A\)) is also allowed in the body of defeasible rules. Deriving literals in DeLP results in the construction of arguments. Let \(h\) be a literal, and \(\psi = (\Pi ,\Delta )\) a DeLP program. The pair \(\langle A,h\rangle \) is an argument structure for \(h\) if \(A\) is a set of defeasible rules of \(\Delta \) such that:

  1. there exists a defeasible derivation for \(h\) from \(\Pi \cup A\);

  2. the set \(\Pi \cup A\) is non-contradictory; and

  3. \(A\) is minimal: there is no proper subset \(A'\) of \(A\) such that \(A'\) satisfies conditions (1) and (2).

In short, an argument structure \(\langle A,h\rangle \), or simply an argument \(A\) for \(h\), is a minimal non-contradictory set of defeasible rules obtained from a defeasible derivation for a given literal \(h\). The literal \(h\) is also called the conclusion supported by \(A\). Note that strict rules are not part of an argument structure.

A derivation for a literal can be defeasible or strict. Let \(\psi =(\Pi ,\Delta )\) be a DeLP program and \(L\) a ground literal. A defeasible derivation of \(L\) from \(\psi \), denoted \(\psi \rightsquigarrow L\), consists of a finite sequence \(L_{1},L_{2},\ldots ,L_{n}=L\) of ground literals, such that each literal \(L_{i}\) is in the sequence because:

(a) \(L_{i}\) is a fact in \(\Pi \), or

(b) there exists a rule \(R_{i}\) in \(\psi \) (strict or defeasible) with head \(L_{i}\) and body \(B_{1},B_{2},\ldots ,B_{k}\) and every literal of the body is an element \(L_{j}\) of the sequence appearing before \(L_{i}(j<i)\).

The derivation of a literal \(L\) is a strict derivation, denoted by \(\psi \rightarrow L\), if either \(L\) is a fact or all the rules used to obtain the sequence \(L_{1},L_{2},\ldots ,L_{n}=L\) are strict rules; a strict derivation does not use defeasible rules.

In a DeLP program, the set \(\Pi \) cannot be contradictory, whereas the whole program \(\psi =(\Pi ,\Delta )\) may be. Given a DeLP program \(\psi =(\Pi ,\Delta )\), two literals \(h\) and \(h_{1}\) disagree if and only if the set \(\Pi \cup \{h,h_{1}\}\) is contradictory. When contradictory goals can be derived defeasibly, argumentation formalism is used to decide between them. DeLP, being declarative, does not capture interactions between pieces of knowledge and the burden of defeasible inference falls on the language processor; however, priorities could be used as an alternative approach. In DeLP, the construction of argument structures is non-monotonic: adding facts or strict rules to the program may cause some argument structures to be invalidated because they become contradictory.
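As a small illustration, the Tweety example of Sect. 2.5 can be written as a DeLP program \(\psi =(\Pi ,\Delta )\) as follows; treating the penguin as the “exceptional bird” is an assumption made only for the sake of the example:

$$\begin{aligned} \Pi&=\{\,bird(tweety),\; penguin(tweety),\; bird(X)\leftarrow penguin(X)\,\},\\ \Delta&=\{\,flies(X)\leftarrowtail bird(X),\;\; \thicksim \! flies(X)\leftarrowtail penguin(X)\,\}. \end{aligned}$$

Here arguments for both \(flies(tweety)\) and \(\thicksim flies(tweety)\) can be built, and the dialectical process must decide between them; under generalized specificity, the argument based on \(penguin(tweety)\) would typically prevail because it uses more specific information.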

In DeLP, answers (yes, no, undecided, or unknown) to queries are supported by arguments. However, an argument may be defeated by another argument: an argument \(\langle A_{1},h_{1}\rangle \) counter-argues, rebuts, or attacks \(\langle A_{2},h_{2}\rangle \) at a literal \(h\) if and only if there exists a sub-argument \(\langle A,h\rangle \) of \(\langle A_{2},h_{2}\rangle \) such that \(h\) and \(h_{1}\) disagree. To compare arguments, two criteria are available:

  1. Generalized specificity: this criterion favors two aspects of arguments: (a) prefer an argument with greater information content, or (b) prefer an argument with less use of rules (more precise, more concise).

  2. Argument comparison using rule priorities.

DeLP uses argumentation formalism to treat contradictory information by identifying contradictory information in the knowledge base and applying a dialectical process to decide which information prevails. Some formalisms define explicit priorities among rules and use these priorities to decide between competing conclusions. The use of these priorities is usually embedded in the derivation mechanism and competing rules are compared individually during the derivation process. In such formalisms, the derivation notion is bound to one single comparison criterion. In DeLP, to decide between competing conclusions, the arguments that support the conclusions are compared. Thus, the comparison criterion is independent of the derivation process, and could be replaced in a modular way.

2.5.2.3 Defeasible Reasoning-Based Argumentation Engines

Bryant and Krause (2008) provided a very comprehensive survey of existing practical implementations of both defeasible and argumentation-based reasoning engines in the literature and emphasized the need for well-designed empirical evaluation and well-formed complexity analysis to justify the practical applicability of reasoning engines. Nathan proposed an early implementation of a defeasible reasoner using specificity criteria to resolve conflicts between generated arguments. To determine the support for a conclusion, a warrant procedure based on a series of incremental steps is used to classify an argument as “in” or “out” in a series of levels. This bottom-up approach to reasoning determines that an argument is warranted when it is “in” at all remaining levels.

2.5.2.4 OSCAR

OSCAR proposed by Pollock (2000) is an agent-based architecture implemented in the LISP programming language for rational agents, i.e. it is equipped with practical and epistemic cognition. Practical cognition is about what to do and OSCAR directs the agent’s interaction with the world, whereas epistemic cognition is about what to believe; most of the work in rational cognition is performed by epistemic cognition. The reasoning consists of the construction of arguments (nodes) to support the conclusion, linked to one another through atomic reasons (dependencies) and forming an inference graph. Two kinds of defeaters are identified in OSCAR. The first is the rebutting defeater which attacks the conclusion of inference, and the second is the undercutting defeater which attacks the connection between the premises and the conclusion. The resolution of conflict is carried out by reasoning schemas to compute the defeat status and the degree of justification, given the set of arguments constructed. The defeasible reasoner of OSCAR is allowed to draw conclusions tentatively, and as a result, an argument may be justified in one stage of reasoning and unjustified later, without any additional input. However, an argument is warranted when the reasoner reaches a stage where, for any new stage of reasoning, the argument remains undefeated. This is useful when dealing with limited resources, providing three possible statuses to a subset of arguments: defeated, undefeated, and justified.

2.5.2.5 IACAS

Vreeswijk (1995) presents IACAS, another argumentation system based on defeasible argumentation, designed to carry out interactive argumentation on computers and allowing a person to start a dispute with a given number of facts, rules and cases. The fact that it is interactive, is capable of finding the right number of arguments to reach a conclusion, and is capable of analysing the epistemic status of propositions sets it apart from other argumentation systems. The building blocks of the argumentation process are propositions, and strict and defeasible rules. Strict rules represent deductive argumentation steps and defeasible rules represent plausible argumentation steps. Arguments are displayed in a tree-like structure with a conclusion on the left-hand side and its premises on the right-hand side. It has a command line interface and allows the status of an argument to be evaluated as certain, beyond reasonable doubt, some presumption in favor, balanced or undetermined. The IACAS implementation shows that defeasible arguments are essential for carrying out formal argumentation.

2.5.2.6 Critical and Recommender Systems (C&R)

Chesnevar et al. (2006b) identified that the current C&R systems are incapable of dealing with the defeasible nature of information. These systems are based on machine learning and information retrieval algorithms. With no inference capabilities, decisions rely on heuristics. Systems based on these quantitative approaches have been criticized for their inability to generate easily understandable and logically clear results; therefore, much of the implicit information remains uncovered. In that work, the authors present a novel approach for the integration of user-supported systems such as critics and recommender systems with a defeasible argumentation framework to enhance the practical reasoning capabilities of such systems. Formalisms such as description logics can be integrated to achieve this objective but they lack the capability to deal with the defeasible nature of user preferences. DeLP has therefore proven to constitute the simplest yet most expressive language for encoding rule-based knowledge with incomplete and potentially inconsistent information. The user preference criteria are modeled as facts, strict rules and defeasible rules which, in turn, with the addition of background information, are used by the argumentation framework to prioritize suggestions, thus enhancing the final results provided to users. Gomez et al. (2005) tried to integrate ontology theory, defeasible argumentation and belief revision to define ontology algebra, and suggested how different aspects of ontology integration can be defined in terms of defeasible argumentation and belief revision. An OWL ontology is simply a collection of information comprising classes and properties, which is associated with a DeLP program for representing knowledge in which facts and strict rules are distinguished. More formally, an Ad-Ontology is a DeLP program P = (KP, KG, \(\Delta \)) where KP stands for particular knowledge (facts about individuals), KG stands for general knowledge (strict rules about relations held among individuals), and \(\Delta \) stands for defeasible knowledge (defeasible rules).

2.5.2.7 Miscellaneous Applications

Rahwan and Larson (2008) identified that little work exists on understanding the strategic aspects of argumentation among self-interested agents and introduced an argumentation mechanism design (ArgMD) which enables the design and analysis of argumentation for self-interested agents. This work lies at the intersection of game theory and formal argumentation theory. In this mechanism, the agent must decide which arguments to reveal simultaneously and the mechanism calculates the set of accepted arguments based on acceptability criteria.

Rahwan et al. (2004) also identified the role of argumentation in agent negotiation. They identified the elements of the environment (e.g. communication language, domain language, negotiation protocol) that hosts the agents and proposed a conceptual framework which outlines the core features required by agents for argumentation-based negotiation. Such negotiation enables agents to operate in dynamic, uncertain and unpredictable environments.

Dung et al. (2009b) also proposed a framework and described the extensive application of argument-based decision making and negotiation to a real-world scenario in which an investor agent and an estate manager agent negotiate to lease land for a computer-assembly factory. Agents are equipped with beliefs, goals, preferences, and argument-based decision-making mechanisms, and take uncertainties into account. Argumentation techniques are used in multi-agent systems to specify autonomous agent’s reasoning, which involves forming and revising beliefs and actions according to inconsistent, uncertain and contradictory information. Such techniques have been used to facilitate multi-agent interactions which involve the dialogue process between software agents who have contradictory views about certain domains of discourse.

2.6 Comparison Between Philosophical and Logic-Based Argumentation Frameworks and Applications

Many similarities as well as differences exist between philosophical and logic-based argumentation frameworks and applications.

Table 2.8 provides a comparative study of both types of argumentation frameworks and applications based on a number of distinct features. Of the various features, the most important are the representation of an argument structure and the reasoning methodology. Philosophical models consider an enriched and complex argument structure which comprises a set of elements that facilitate the subjective assessment of an argument by the participants in decision making. The acceptability of arguments is again subjective in nature, e.g. strong, moderate, weak etc., and is dependent on human computation. Logic-based argumentation models and applications, by contrast, consider an argument to be a very simple structure and try to define explicit semantics so that a reasoning engine can evaluate the arguments; the evaluation of arguments is then very simple, i.e. an argument is either true or false. As a result, it is very easy to compute an acceptable set of arguments.

Argument structure and reasoning methodology are therefore the two features that are of paramount importance for the design and development of new software applications. A comparative study of both eases the decision about which argumentation paradigm to use for the development of new software applications.

Table 2.8 Comparison of logic-based argumentation frameworks/applications with philosophical models of argumentation/applications

In Sect. 2.7, the current reasoning approaches on the Semantic Web are categorized.

2.7 Categorization of Reasoning Approaches on the Semantic Web

As pointed out in Chap. 1, the ontology languages layer of the Semantic Web has reached a level of maturity and efforts are now being focused on the development of the logic layer. The logic layer provides a foundation for Semantic Web applications to apply advanced reasoning techniques for automated information extraction, reasoning and integration in order to facilitate the decision-making process.

Broadly speaking, the current reasoning approaches on the Semantic Web can be divided into the following two categories:

  1. Monotonic reasoning

    Reasoning is monotonic if, once a conclusion has been asserted during the reasoning process, it cannot be retracted later in the presence of new information. Monotonic reasoning follows the Open-World Assumption (OWA), under which any information that is not present in the model is considered undefined rather than false.

    The current monotonic reasoning-based approaches on the Semantic Web can be classified into the following three sub-categories:

    (a) Ontology-driven reasoning: Approaches that make use of ontologies for knowledge representation and reasoning. Section 2.7.1.1 elaborates on ontology-driven reasoning in detail.

    (b) Semantic Web rule-based reasoning: Approaches that make use of Semantic Web rule-based languages to represent and reason over information present on the Semantic Web. These approaches are presented in Sect. 2.7.1.2.

    (c) Fuzzy logic-based approaches: Approaches that make use of fuzzy logic to represent and reason over information present on the Semantic Web. These approaches are presented in Sect. 2.7.1.3.

  2. Non-monotonic reasoning

    Reasoning is non-monotonic if a conclusion asserted during the reasoning process can be retracted later in the presence of new information. Non-monotonic reasoning follows the Closed-World Assumption (CWA), under which any information that is not present in the model is considered false. A minimal contrast between monotonic and non-monotonic behaviour is sketched after this list. The current non-monotonic reasoning-based approaches can be classified into the following two sub-categories:

    (a) Defeasible logic-based approaches: Approaches that make use of defeasible logic-based rule languages to represent and reason over information present on the Web. These approaches are discussed in Sect. 2.7.2.1.

    (b) Argumentation-based approaches: Approaches that make use of argumentation techniques to represent and reason over information present on the Web. These approaches are presented in Sect. 2.7.2.2.
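
The following minimal Python sketch contrasts the two reasoning modes using the classic bird/penguin example; the example is an illustrative assumption rather than one drawn from the approaches reviewed below. The strict rule's conclusion survives the arrival of new information, whereas the default rule's conclusion is retracted once an exception becomes known.

```python
# Monotonic rule: every conclusion survives the arrival of new facts.
# Non-monotonic (default) rule: a conclusion may be withdrawn when an
# exception becomes known, reflecting closed-world style reasoning.

def monotonic_conclusions(facts):
    # strict rule: bird(x) -> has_feathers(x)
    return {f"has_feathers({x})" for x in facts.get("bird", set())}

def nonmonotonic_conclusions(facts):
    # default rule: bird(x) => flies(x), unless penguin(x) is known
    birds, penguins = facts.get("bird", set()), facts.get("penguin", set())
    return {f"flies({x})" for x in birds if x not in penguins}

kb = {"bird": {"tweety"}}
print(monotonic_conclusions(kb), nonmonotonic_conclusions(kb))
# {'has_feathers(tweety)'} {'flies(tweety)'}

kb["penguin"] = {"tweety"}          # new information arrives
print(monotonic_conclusions(kb), nonmonotonic_conclusions(kb))
# {'has_feathers(tweety)'} set()  -> flies(tweety) has been retracted
```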

In the following Sect. 2.7.1, the different sub-categories of monotonic reasoning are discussed in detail, followed by non-monotonic reasoning sub-categories in Sect. 2.7.2.

2.7.1 Sub-Categories of Monotonic Reasoning

2.7.1.1 Ontology-Driven Reasoning

Ontologies (Fensel 2003) are the core of the Semantic Web and provide a formal and explicit specification of a certain domain. They comprise classes, their relationships or properties, instances and axioms, all defined in some formal language. The W3C has proposed two ontology languages for representing knowledge on the Semantic Web. The first is RDFS, a lightweight ontology language based on XML and logic programming. The second is OWL, which is based on description logic and provides constructs for cardinality restrictions, Boolean expressions and restrictions on properties. OWL ontologies come in three species, Lite, DL and Full, ordered by increasing expressivity.

In addition to serving the purpose of representation, an ontology also enables logical inference over facts through axiomatization. Hence, ontologies on the Web should provide constructs for effective binding with logical inference primitives and options to support a variety of expressiveness and computational complexity requirements. Table 2.9 depicts a set of axioms defined for OWL-Lite; these are exploited by DL reasoning engines, such as FaCT++ (Tsarkov and Horrocks 2006) and Pellet (Parsia and Sirin 2007), to achieve the inference objective. A number of architectures and Web applications have been built by modeling domain knowledge in the form of ontologies and using a DL reasoner as the inference engine, such as a reasoning agent for the Semantic Web (Oguz et al. 2008) and an OSGi-based infrastructure to manage context-aware services (Gu et al. 2004).
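
As a concrete illustration of the kind of entailment such reasoners automate, the following self-contained Python sketch applies two RDFS-level entailment rules, subclass transitivity and type propagation, to a handful of toy triples. The class and instance names are hypothetical, and the sketch does not reflect the internals of FaCT++ or Pellet.

```python
# Minimal ontology-driven inference: two RDFS entailment rules applied naively.
triples = {
    ("Camera", "rdfs:subClassOf", "Device"),
    ("Device", "rdfs:subClassOf", "Product"),
    ("d40",    "rdf:type",        "Camera"),
}

def rdfs_closure(triples):
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        sub = {(s, o) for s, p, o in inferred if p == "rdfs:subClassOf"}
        typ = {(s, o) for s, p, o in inferred if p == "rdf:type"}
        # rdfs11: subclass transitivity; rdfs9: type propagation along subclass links
        new = {(a, "rdfs:subClassOf", c) for a, b in sub for b2, c in sub if b == b2}
        new |= {(i, "rdf:type", c) for i, a in typ for a2, c in sub if a == a2}
        if not new <= inferred:
            inferred |= new
            changed = True
    return inferred

for t in sorted(rdfs_closure(triples) - triples):
    print(t)
# ('Camera', 'rdfs:subClassOf', 'Product')
# ('d40', 'rdf:type', 'Device')
# ('d40', 'rdf:type', 'Product')
```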

Table 2.9 OWL ontology reasoning semantics

2.7.1.2 Semantic Web Rule-Based Reasoning

Proposals for the integration of rule languages and ontology languages can be classified by their degree of integration (Antoniou et al. 2005). In the hybrid approach, there is a strict separation between rule predicates and ontology predicates, and reasoning is performed by interfacing an existing rule reasoner with an ontology reasoner. In the homogeneous approach, both rules and ontologies are embedded in the same logical language \({\mathcal {L}}\) without a prior distinction between rule predicates and ontology predicates, and a single reasoner can be used for reasoning.

Reasoning with such rules is typically performed by a production rule engine based on the Rete algorithm, which involves the following two steps:

  • Compilation of the rules into a Rete network.

  • A matching phase, i.e. data-driven reasoning performed by passing the facts present in the working memory through the Rete network.

The Rete algorithm (Forgy 1982) thus involves two steps. The first is the compilation of the rules into a network called a Rete network. The second is the matching phase, in which the rule engine matches the conditions of the rules in the knowledge base against the facts in the working memory. As a result of this match, a set of rule activations is produced and, after conflict resolution, a single rule instance fires. Firing the rule instance adds new facts to the working memory. The matching phase then starts again: only the newly inferred facts filter through the compiled rule network, possibly resulting in the firing of another rule, and so the process continues. The process stops when no more rules match the newly inferred facts.
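
The following deliberately simplified Python sketch shows only the match-fire cycle; the rules, facts and the naive agenda construction are illustrative assumptions, and a real Rete engine avoids re-matching everything by compiling the rule conditions into a shared network of nodes.

```python
# Simplified match-fire loop in the spirit of a production rule engine.
rules = [
    ("r1", {"order(o1)", "amount_high(o1)"}, "needs_approval(o1)"),
    ("r2", {"needs_approval(o1)"},           "notify_manager(o1)"),
]
working_memory = {"order(o1)", "amount_high(o1)"}

fired = set()
while True:
    # matching phase: collect activations whose conditions hold in working memory;
    # 'name not in fired' plays the role of refraction (an activation fires only once)
    agenda = [(name, head) for name, body, head in rules
              if body <= working_memory and name not in fired]
    if not agenda:
        break                      # no rule matches the (newly) inferred facts
    name, head = agenda[0]         # conflict resolution: pick a single activation
    fired.add(name)
    working_memory.add(head)       # firing adds a new fact; matching restarts

print(working_memory)
# {'order(o1)', 'amount_high(o1)', 'needs_approval(o1)', 'notify_manager(o1)'}
```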

The importance of Semantic Web-based decision support systems (DSS) in business applications has been identified by a number of researchers over the years (Vahidov and Kersten 2004; Silverman et al. 2001; Toni 2007). Kartha and Novstrup (2009) proposed a combination of ontologies and decision rules for building a decision support application for time-sensitive targeting. They represented knowledge with the help of rules known as 'decision rules' which: (a) include primitives from multiple ontologies as well as primitives defined by algorithms outside the rule framework; (b) are time-dependent; and (c) incorporate default assumptions. They developed the Sentinel system, which is general enough to support a wide variety of DSS tasks.

Ceccaroni et al. (2004) present an environmental decision support system (called OntoWEDSS) for waste water treatment that improves the diagnosis of faults in a treatment plant, provides support for complex problem solving and facilitates knowledge modeling and reuse. The system is based on the integration of case-based and rule-based reasoning with an ontology, the Waste-Water Ontology (WaWO), used for representing the domain and for reasoning. Nicolicin-Georgescu et al. (2010) present an approach to managing data warehouse cache allocations via DSS using autonomic computing and Semantic Web technologies. They present heuristics for autonomic computing adoption, using ontologies for DSS system modeling and ontology-based rules for heuristic implementation.

Similarly, Salam (2007) presents a supplier performance contract monitoring and execution DSS, using OWL-DL for knowledge representation and SWRL to express rules on top of OWL-DL ontologies. Cheung and Cheong (2007) address the challenges of market operations using a rule-based approach for mission-critical decisions, and Garcia-Crespo et al. (2011) propose a semantic model for knowledge representation in e-business. Yang et al. (2009) proposed a Semantic Web-based DSS that provides ontology-based static and dynamic semantics for quantitative decision making comprising three steps: publishing decision requirements, bidding, and role-based collaboration among decision peers (each Semantic Web-DSS is a peer) to negotiate decision models.

2.7.1.3 Fuzzy Logic-Based Reasoning

A number of researchers have used fuzzy logic-based quantitative approaches to address the issues of group decision making. Subsorn et al. (2008) proposed a Web-based group decision support system framework to deal with imprecise decision-making problems. The framework is based on a fuzzy analytic hierarchy process for group decision making; it enables group members to develop satisfactory group solutions and allows group leaders to form a final, acceptable group solution. Ma et al. (2010) proposed 'Decider', a fuzzy multi-criteria group decision-making (MCGDM) process model that aims to support preference-based decisions over available alternatives that are characterized by multiple criteria within a group. The model can handle information expressed as linguistic terms, Boolean values and numeric values to assess and rank a set of alternatives within a group of decision makers.

Noor-E-Alam et al. (2010) also addressed the issue of multi-criteria decision making (MCDM) involving multiple experts and pointed out that the participation of many experts makes the conflict aggregation process difficult. They developed a DSS based on two types of fuzzy conflict aggregation algorithms, namely a possibility measure and averaging conflict aggregation. Yue (2011) addressed the issue of multiple attribute decision making (MADM) and developed an algorithm for determining the weights of decision makers within a group decision environment, in which the information regarding each individual decision is expressed as a matrix of interval numbers. He also defined the positive ideal and negative ideal solutions of group opinion, the separation measures, and the relative closeness to the positive ideal solution.

Cabrerizo et al. (2010) used fuzzy logic to address the issue of consensus building among experts when information is incomplete. They developed a consensus model for group decision-making problems with incomplete unbalanced fuzzy linguistic information. The model is supported by consistency and consensus measures and, with the help of a feedback mechanism, personalized advice is provided to the experts on how to modify their unbalanced fuzzy linguistic preference relations. Similarly, efforts have been made to present the results of the decision-making process to the end user in an easily comprehensible form, such as the work of Li et al. (2001), who proposed a visualized information retrieval engine based on fuzzy control.
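
To give a flavour of how such quantitative aggregation works, the following minimal Python sketch maps hypothetical linguistic ratings to triangular fuzzy numbers, averages them, and ranks two alternatives by centroid defuzzification. The scale, ratings and alternative names are assumptions for illustration and are not drawn from the models cited above.

```python
# Linguistic scale mapped to triangular fuzzy numbers (a, b, c).
SCALE = {"low": (0.0, 0.0, 0.3), "medium": (0.2, 0.5, 0.8), "high": (0.7, 1.0, 1.0)}

# ratings[alternative] = linguistic judgements collected from experts/criteria
ratings = {
    "supplier_a": ["high", "medium", "high", "medium"],
    "supplier_b": ["medium", "low", "medium", "high"],
}

def aggregate(terms):
    """Element-wise average of the corresponding triangular fuzzy numbers."""
    fuzzy = [SCALE[t] for t in terms]
    return tuple(sum(v[i] for v in fuzzy) / len(fuzzy) for i in range(3))

def centroid(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0       # simple defuzzification of a triangular number

scores = {alt: centroid(aggregate(terms)) for alt, terms in ratings.items()}
for alt, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(alt, round(score, 3))    # supplier_a 0.7, supplier_b 0.5
```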

The aforementioned fuzzy logic-based approaches, being quantitative, have been criticized for their inability to generate easy-to-understand and logically clear results for justification purposes. These approaches also follow monotonic behaviour, whereby once a conclusion has been drawn, it cannot be retracted. Additionally, they lack the capability to reason over contradictory information, whereas business intelligence (BI) requires such inference mechanisms.

2.7.1.4 Description Logic Programs (DLP)

Logic programming is a predominant paradigm for expressing knowledge with rules, and for making inferences and answering queries. It provides both a declarative reading of rules (a programming paradigm that expresses the logic of a computation without describing its control flow) and an operational reading of rules (with implementations). Its semantics largely underpin four families of rule systems, i.e. SQL relational databases, OPS5-heritage production rules, Prolog, and Event-Condition-Action rules, and it has been used as the basis of proposals for rules in the context of the Semantic Web.

Many efforts have focused on the mapping, intersection or combination of description logics (DLs) and logic programs (LP) to overcome the shortcomings that emerged during the development of practical OWL applications (Patel-Schneider and Horrocks 2007). To overcome the limitations of reasoning with OWL, Grosof et al. (2003) proposed Description Logic Programs (DLP), which lie at the intersection of LP and DLs (as shown in Fig. 2.5), instead of using full First Order Logic (FOL) to address OWL issues. FOL can express positive disjunctions, which are inexpressible in LP, but it does not provide support for negation-as-failure (for representing incomplete information) or procedural attachments (e.g. the association of an action-performing procedural invocation with the drawing of a conclusion about a particular predicate). Logic programs, on the other hand, do provide these features, which support the non-monotonic behaviour of the system.
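
As a small illustration of the direction from DL to LP, the sketch below translates a few axiom patterns that fall inside the DLP fragment into Horn-rule strings, in the spirit of the mapping described by Grosof et al. (2003). The vocabulary is hypothetical and the translation covers only the three patterns shown.

```python
# Illustrative translation of DLP-fragment axioms into Horn rules.
def dlp_to_rules(axiom):
    kind = axiom[0]
    if kind == "subClassOf":          # C subClassOf D         =>  d(X) :- c(X).
        _, c, d = axiom
        return [f"{d}(X) :- {c}(X)."]
    if kind == "domain":              # domain(R, C)           =>  c(X) :- r(X, Y).
        _, r, c = axiom
        return [f"{c}(X) :- {r}(X, Y)."]
    if kind == "someValuesFrom":      # (exists R.C) subCl. D  =>  d(X) :- r(X, Y), c(Y).
        _, r, c, d = axiom
        return [f"{d}(X) :- {r}(X, Y), {c}(Y)."]
    raise ValueError(f"axiom outside the illustrated DLP fragment: {axiom}")

axioms = [
    ("subClassOf", "camera", "device"),
    ("domain", "manufactures", "company"),
    ("someValuesFrom", "manufactures", "camera", "cameraMaker"),
]
for ax in axioms:
    print(*dlp_to_rules(ax))
```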

Fig. 2.5 Expressive overlaps among knowledge representation languages (Grosof et al. 2003)

2.7.2 Sub-Categories of Non-Monotonic Reasoning

2.7.2.1 Defeasible Logic-Based Reasoning

Nute (1988) highlighted the importance of defeasible reasoning in decision support systems and developed a logic for defeasible reasoning by extending Prolog. The logic comprises facts and presumptions, absolute (strict) rules and defeasible rules, and introduces a further kind of weak rule known as a 'defeater'; a sketch of how such rule types interact is given below. Causey (1994) developed 'EVID', a system for interactive defeasible reasoning, and Johnston and Governatori (2003) developed an algorithm that integrates defeasible logic into a decision support system by automatically deriving its knowledge from databases of precedents.
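
The following minimal Python sketch is only in the spirit of such defeasible logics: it encodes two hypothetical defeasible rules with opposing conclusions and shows how a superiority relation settles the clash. Defeaters, rule chaining and the full proof theory are deliberately omitted.

```python
facts = {"bird(tweety)", "penguin(tweety)"}

# (name, body, head); all rules here are defeasible
defeasible_rules = [
    ("r_fly",   {"bird(tweety)"},    "flies(tweety)"),
    ("r_nofly", {"penguin(tweety)"}, "~flies(tweety)"),
]
superior = {("r_nofly", "r_fly")}      # the penguin rule beats the bird rule

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_holds(goal):
    supporters = [r for r in defeasible_rules if r[2] == goal and r[1] <= facts]
    attackers  = [r for r in defeasible_rules if r[2] == negate(goal) and r[1] <= facts]
    # goal holds if some applicable supporting rule beats every applicable attacker
    return any(all((s[0], a[0]) in superior for a in attackers) for s in supporters)

print(defeasibly_holds("flies(tweety)"))    # False: r_nofly defeats r_fly
print(defeasibly_holds("~flies(tweety)"))   # True
```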

Dr-Prolog (Antoniou and Bikakis 2007) is a Prolog-based implementation for carrying out defeasible reasoning on the Web. It is a declarative system that supports rules, facts, ontologies, RuleML, and both monotonic and non-monotonic rules. It takes into consideration both open-world and closed-world assumptions and provides features for reasoning with inconsistencies. The system supports a number of variants, such as ambiguity blocking, ambiguity propagation and contradictory literals. Defeasible theories are imported in defeasible logic or RuleML syntax and translated into logic programs with the help of a logic translator. The reasoning engine compiles the logic program together with the meta-program corresponding to the defeasible logic variant selected by the user (ambiguity blocking/propagating) and evaluates the answers to the user's queries. The authors extended the RuleML DTDs to represent defeasible theories in XML format. Dr-Brokering (Antoniou et al. 2007) is a Dr-Prolog-based software agent implementation that addresses the problem of brokering and matchmaking, i.e. how a requester's requirements and preferences can be matched against a set of offerings collected by a broker.

Dr-Device (Kontopoulos et al. 2011; Bassiliades et al. 2004) is a CLIPS-based defeasible reasoning implementation provided with the VDR-Device reasoning system, an RDF loader/translator and a rule loader/translator component. VDR-Device is an integrated development environment equipped with a graphical front end used for deploying defeasible rules on top of RDF Schema ontologies. The rule base is initially submitted to the rule loader, which transforms the rules into a CLIPS-like syntax through an XSLT stylesheet. The resulting program is forwarded to the rule translator, where the defeasible logic rules are compiled into a set of CLIPS production rules. In parallel, the RDF loader downloads the RDF documents and translates them into CLIPS objects according to the RDF-to-object scheme. The reasoning system performs inference on the translated RDF metadata using the defeasible rules and generates the objects that constitute the result of the initial rule program. The RDF extractor exports the resulting objects in the form of RDF/XML to the user. Dr-Device integrates well with RuleML and RDF. Unlike Dr-Prolog, Dr-Device supports only one variant, ambiguity blocking, and at present it does not support OWL ontologies. In addition, Dr-Prolog translates defeasible theories into logic programs under well-founded semantics, a translation that is formally equivalent to the underlying formal model, whereas Dr-Device uses the logic meta-program only as a guiding principle and there is no formal proof of the correctness of the implementation. On the other hand, Dr-Device has the relative advantage of easier integration with mainstream software technologies.

SweetJess (Grosof et al. 2002) is another defeasible reasoning system, based on Jess, which closely resembles courteous logic programs. It integrates well with RuleML, but it can only perform reasoning over DAML+OIL ontologies and not over RDF data, as Dr-Device and Dr-Prolog do. It does, however, allow for procedural attachments. It implements only one reasoning variant and, moreover, imposes a number of restrictions on the programs so that they can be mapped onto Jess. Table 2.10 presents a comparison of defeasible reasoning-based information systems.

2.7.2.2 Argumentation-Based Approaches

The WWW, being distributed and ubiquitous, provides a universal platform for Internet users to interact with each other. Previously, content flowed in one direction on the WWW: content contributors provided information based mainly on their own thinking, observations and knowledge, and readers were not able to reply to an author's arguments if a difference of opinion existed.

Web 2.0 has revolutionized the WWW by providing a platform that converts readers into content developers. This development has enabled argumentation among users of the WWW, with blogs being one of the best examples. Semantic Web technology enriches Web content further with semantics that make the content processable by machines, automate interaction and support the decision-making process. On the basis of their level of functionality, the current applications of argumentation on the WWW can be divided into the following three categories:

  1. Web-based argument-assistance systems.

  2. Semantic Web-based argumentation support frameworks and applications.

  3. Semantic Web-based argumentation support applications with a shared ontology (AIF).

In the following sections, each of these categories is discussed in detail.

Table 2.10 Comparison of defeasible logic-based Web IDSS applications

2.7.3 Web-Based Argument-Assistance Systems

Web 2.0 is a powerful paradigm for designing argumentation tools to solve challenges in collaboration on a global scale. However, there is a large gap between Web 2.0 technologies and argumentation formalisms. Argumentation formalisms focus on a particular kind of semantic structure that organizes elements so that computation and inference can be performed to reach a conclusion. Web 2.0, by contrast, moves the emphasis away from such features: it imposes no predefined information organization scheme and focuses instead on self-organization and community-driven indexation of elements, e.g. folksonomies that can be rendered as tag clouds (Shum 2008).

To bridge this gap, argument assistance applications offer a step forward; for instance, argument assistance systems overcome the limitations of threaded discussion forums by making a clear distinction between unsupported premises and supported premises, known as claims. To evaluate the existing applications, a scale for argument evaluation and argument acceptability is defined, as depicted in Table 2.11.

Table 2.11 Scale for evaluation and acceptability of arguments
Table 2.12 Comparison of Web 2.0 based argument assistance systems

Table 2.12 shows a comparative analysis of different web-based argument assistance applications, from which the following important observations can be made:

  1. Most argumentation assistance applications are based on dialogical argumentation, with the exception of Debatabase, which follows monological argumentation.

  2. Argument structures vary from very simple structures, such as premises and conclusions, to very complex ones, such as the argument structures in ConvinceMe, Debatepoint and Truthmapping.

  3. Argumentation is mostly used for assistance in persuasion and debate.

  4. The evaluation of arguments is either fully or partially dependent on humans. None of the systems have the semantics to automate the process of argument evaluation.

  5. The acceptability of an argument is either fully human-dependent or not dependent on humans at all; in the latter case, different mechanisms are used, such as voting.

  6. Content contributors are not as prolific as they are on social networks.

2.7.4 Semantic Web-Based Argumentation Support Frameworks and Applications

The Semantic Web is an extension of the WWW in which information is annotated with meta-data or ontologies to make it processable by machines, and it plays an important role in automating user interaction. Realizing the importance of argumentation, Sprado and Gottfried (2009) defined an argumentation-based framework for a decision support system in the context of spatio-temporal systems. The framework is based on two paradigms, argumentation and description logics. Argumentation is applied to identify and analyze consistent sets of arguments, whereas description logics help to define the terminological knowledge needed to categorize the arguments at a semantic level. In this framework, arguments refer to a conceptual description of a given state of affairs (concept-based argumentation) and the preferences among them are used to resolve conflicts at a conceptual level. Similarly, CoAKTinG (Bachler et al. 2004) provides tools to assist scientific collaboration by integrating intelligent meeting spaces, ontologically annotated media streams from online meetings, decision rationale and group memory capture, meeting facilitation, issue handling, planning and coordination support, constraint satisfaction, and instant messaging/presence.

The HCONE argumentation ontology (Kotis 2010) supports the capture of the structure of an entire argumentation dialogue as it evolves among collaborating parties over a period of time. It allows the tracking and identification of the rationale behind atomic changes and/or ontology versions. CoPe_it! represents another interesting development, providing a mechanism to evaluate the strength of a position. Positions or alternatives are posted after the completion of an appropriate form and, each time a user posts a discourse item, CoPe_it! re-evaluates the whole discussion and indicates a solution.

Table 2.13 provides a comparative analysis of different argumentation-based Semantic Web applications, summarized as follows:

  1. Apart from debate, they are used to predict trends and cluster information.

  2. The applications follow dialogical argumentation.

  3. Current applications are not fully autonomous because they are partly dependent on humans for their functionality.

Table 2.13 Comparison of Semantic Web-based argumentation support applications

2.7.5 Semantic Web-Based Argumentation Support Applications with a Shared Ontology (AIF)

Currently, a large number of interactions occurring on the WWW need to be captured in semantic structures so that they can be explored by others (to back up an argument's support or rebuttal) and so that the process of argument build-up and analysis can be automated. The Argument Interchange Format (AIF) is one step towards providing a standard ontology for capturing such interactions (Rahwan et al. 2007b; Chesnevar et al. 2006a; Rahwan 2009). Table 2.14 depicts different argumentation applications that share a common ontology.
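
To make the idea of a shared argument ontology concrete, the following minimal Python sketch stores a toy AIF-style graph with information nodes and scheme nodes for inference and conflict applications. The node contents, identifiers and the helper function are illustrative assumptions rather than the normative AIF specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    node_id: str
    kind: str        # "I" (information), "RA" (rule application), "CA" (conflict)
    content: str = ""

nodes = [
    Node("i1", "I", "The vendor meets the delivery deadline"),
    Node("i2", "I", "The vendor should be selected"),
    Node("i3", "I", "The vendor failed a recent audit"),
    Node("ra1", "RA"),      # i1 supports i2 via an inference scheme
    Node("ca1", "CA"),      # i3 conflicts with i2 via a conflict scheme
]
edges = [("i1", "ra1"), ("ra1", "i2"), ("i3", "ca1"), ("ca1", "i2")]

def attackers(target_id):
    """I-nodes that attack target_id via a CA node."""
    ca_in = {src for src, dst in edges if dst == target_id
             and any(n.node_id == src and n.kind == "CA" for n in nodes)}
    return {src for src, dst in edges if dst in ca_in}

print(attackers("i2"))   # {'i3'}
```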

Table 2.14 Comparison of Semantic Web-based argumentation support systems with a shared ontology

2.8 Critical Evaluation of the Existing Approaches to Support Monological Argumentation in Semantic Web Applications

In this section, a critical evaluation of the existing approaches in the literature for information representation and reasoning on the Semantic Web is presented in order to build an integrated view and identify the key issues that need to be addressed to obtain a complete methodology providing monological argumentation-driven automated reasoning support in Semantic Web applications. Such a methodology would enable enterprises to consider information that is potentially incomplete and/or contradictory, whether it exists within the enterprise or in other enterprises; to represent and perform automated reasoning over it in order to identify and resolve any conflicts which may arise; and to integrate and present the reasoning results so as to assist decision makers in the enterprise-wide decision-making process.

As seen from the discussion of the reasoning approaches in the literature, decision makers are heavily dependent on software applications to assist them in the process of decision making (Carlsson and Turban 2002; Shim et al. 2002). They need intelligent applications that can transform information (which may be incomplete and/or contradictory) into useful knowledge and provide qualitative insights, so that a human style of reasoning can be expected from software applications. To address this, researchers in the field of Artificial Intelligence (AI) have long been striving to realize human-like decision-making power in software applications, and the vision of Semantic Web applications also derives its concepts from AI. However, as discussed in Sect. 2.7.1.1, the logic-based languages that lie at the logic layer of the Semantic Web are deductive in nature and perform monotonic reasoning, i.e. reasoning under the assumption that the underlying information for decision making is consistent and that the addition of new information does not result in contradictions with existing information (Antoniou and Van Harmelen 2004; Horrocks et al. 2005). In other words, they assume that

  (i) no conflicts will arise during the process of decision making, and

  (ii) new information will not result in a different output.

To overcome these limitations of the Semantic Web, defeasible reasoning-based approaches have been proposed in the literature which enable Semantic Web applications to perform non-monotonic reasoning over incomplete and/or contradictory information (Antoniou and Bikakis 2007; Kontopoulos et al. 2011; Bassiliades et al. 2004). As pointed out in Sects. 1.3 and 2.7.2.1, even though defeasible reasoning seems to be a good option for addressing the issues of non-monotonic reasoning in Semantic Web applications, the superiority relation over defeasible rules consists of hard-coded preferences specified by a single user before reasoning is performed, and if a conflict between rules arises during reasoning, the existing defeasible reasoning-based approaches do not provide a way to resolve it. As a result, Semantic Web applications built using such reasoning approaches are inflexible in responding to dynamic situations and lack the ability to make judgments in such situations, unlike humans, who may be able to make decisions even when the information is incomplete and/or contradictory.

To address this problem, the concept of 'argumentation' has been studied in the AI literature. Argumentation is much more closely related to a human style of reasoning: it takes into account concepts from the study of arguments to support opinions, claims and proposals, and ultimately leads to justifiable decisions and conclusions (Prakken and Vreeswijk 2002; Obeid 1992). Toulmin (2003) was the first to provide the logical structure of an argument, and his work has been extended by a number of researchers to enrich the argument structure and address a variety of reasoning problems in the philosophy of law and other disciplines. The formal foundations of argumentation have been well explored in the academic literature (as discussed in Sects. 2.4 and 2.5). However, a major drawback is that most software applications based on logic-based argumentation formalisms are built as stand-alone, proprietary systems. As a result, their code is not available for enhancement and general use and cannot be applied directly to address the issues of non-monotonic reasoning in Semantic Web applications.

Approaches have been proposed in the literature that apply the concepts of argumentation to the Semantic Web. It can be seen from the discussion in Sect. 2.7.2.2 that argumentation-based reasoning approaches have proven to be very useful in empowering Semantic Web applications: they enable such applications to take into account potentially incomplete and/or contradictory information and, through argumentative reasoning, bring it to an agreeable conclusion, if possible. However, it is evident from the discussion in Sects. 2.7.3–2.7.5 that most argumentation-based Web applications are dialogical in nature, where the reasoning mechanism is driven by the decision makers involved in the discussion. As a result, argumentation-based Semantic Web applications are missing a very important and reusable component, namely a reasoning engine capable of performing monological argumentation over underlying information that may be incomplete and/or contradictory. Such a component is considered an integral part of Semantic Web applications for product recommendation, auctions, identification of requirements, vendor selection, negotiation, agent communication and information integration (Deng and Wibowo 2008; Cheung and Cheong 2007; Shim et al. 2002; Assche et al. 1988; Wen et al. 2008; Dong et al. 2011; Xue et al. 2012). Due to the lack of such a reusable component (i.e. a monological argumentation-driven reasoning engine), most existing Semantic Web applications follow philosophical argumentation-based frameworks where reasoning is performed by humans, who cogitate on and evaluate the arguments and take action.

In short, the current approaches discussed in the literature do not provide a reasoning engine that performs monological argumentation in Semantic Web applications. The main inadequacies of the existing approaches with respect to an argumentation-based approach for reasoning in Semantic Web applications, one that addresses all the aspects required to take into account potentially incomplete and/or contradictory information either within an enterprise or in other enterprises, can be summarized as follows:

  1. Incapability of logic-based languages to represent information that is potentially incomplete and/or contradictory and that comes from different sources, either within an enterprise or in other enterprises.

  2. Absence of a monological argumentation-driven reasoning engine (i.e. a hybrid reasoning engine) to identify and resolve conflicts in the underlying information.

  3. No methodology for information and knowledge integration and their graphical representation to assist the decision maker in enterprise-wide decision making.

In the following sub-sections, each of these issues is discussed in detail.

2.8.1 Incapability of Logic-Based Languages to Represent Information that is Potentially Incomplete and/or Contradictory Coming from Different Sources

The reasoning approaches proposed in the literature, such as ontology-driven reasoning, Semantic Web rule-based reasoning and DLP, discussed in Sects. 2.7.1.1, 2.7.1.2 and 2.7.1.4 respectively, are based on description logic (DL), which is a subset of predicate logic; they therefore inherit the limitations of predicate logic, i.e. they perform only monotonic reasoning under the following assumptions:

  1. The given problem can be fully addressed with the available information (i.e. the solution to the problem lies within the available information).

  2. The information or specification of rules required for decision making is consistent. In other words, it is assumed that no contradictory information will emerge during the decision-making process.

  3. If new information is added to the application, it will be consistent with the already available information or specifications.

  4. New information does not lead to a retraction of previous conclusions.

These assumptions limit their capability to represent, and reason over, information present on the Semantic Web that could be potentially incomplete and/or contradictory. To overcome this problem, defeasible logic-based implementations have been proposed in the literature that provide a formalism for representing incomplete and/or contradictory information from a single user/source. In this approach, a decision maker can define his or her preferences over the contradictory rules at design time, and these preferences are used to resolve conflicts during automated reasoning. However, these approaches do not provide a solution for information representation when incomplete and/or contradictory information comes from different sources and when more than one user is involved in the decision-making process. Hence, it can be inferred from the above discussion that the existing languages at the logic layer of the Semantic Web stack are incapable of representing incomplete and/or contradictory information that may exist within the enterprise or in different enterprises and of making it available for reasoning purposes. Semantic Web applications built using these languages fail to represent information when contradictory information may come from different users/sources, yet such a capability is needed to capture all the information and the decision makers' opinions during the decision-making process.

In Chap. 3, the problem associated with the representation of information which is potentially incomplete and/or contradictory is identified and defined, and in Chap. 4, a solution is proposed to address the problem defined in the existing literature.

2.8.2 Absence of a Monological Argumentation-Driven Reasoning Engine to Identify and Resolve Conflicts Present in Information Coming from Different Sources

This issue, i.e. the absence of a monological argumentation-driven reasoning engine to identify and resolve conflicts present in information coming from different sources, can be subdivided into the following sub-issues:

  1. The Rete network and its limitations.

  2. Lack of hybrid reasoning in Semantic Web reasoning engines.

  3. Lack of different argumentation-driven conflict resolution strategies.

In the next sub-sections, each of these issues is discussed in detail.

2.8.2.1 Rete Network and its Limitations

Semantic Web application reasoning engines use the Rete network for the compilation of rules and work in close coordination with the working memory. However, the compilation of rules that may represent incomplete and/or contradictory information is not possible in the existing Rete network due to the following limitations:

  1. The general Rete network works only for predicate logic-based rule languages that follow monotonic reasoning. It is therefore not capable of representing potentially incomplete and/or contradictory information as Rete nodes.

  2. A Rete network executes only one rule in a single match-execute cycle. If two rules are activated, only the rule with the higher execution preference, defined by an individual (the owner of the contradictory rules), will be executed. When the underlying information is potentially incomplete, capturing it may require the execution of both contradictory rules, each of which may represent a different viewpoint; the current Rete network fails to meet this objective.

Hence, there is a need to extend the Rete network so that incomplete and/or contradictory information can be represented as Rete nodes and both contradictory production rules can fire, with instances of both production rules (i.e. arguments) added to the argument set. In Chap. 3, the problem associated with the Rete network is formally identified and defined, and in Chap. 4, a solution to the problem defined in the existing literature is proposed, together with extensions to the Rete network for compiling business rules (representing incomplete and/or contradictory information) into a Rete network.

2.8.2.2 Lack of Hybrid Reasoning in Semantic Web Reasoning Engines

Attempts have been made in the literature to perform reasoning over incomplete and/or contradictory information in order to realize non-monotonic reasoning in Semantic Web applications, such as Dr-Prolog (Antoniou and Bikakis 2007), Dr-Device (Kontopoulos et al. 2011; Bassiliades et al. 2004) and Situated Courteous Logic (Grosof et al. 2002). These defeasible logic-based applications use either data-driven or goal-driven reasoning. Data-driven reasoning moves from the current facts to a conclusion, whereas goal-driven reasoning is backward-chaining reasoning that moves from a conclusion to the facts. In the case of Semantic Web applications, both types of reasoning are needed: data-driven reasoning for the construction of arguments from the underlying information, and goal-driven reasoning to identify and resolve conflicts that exist between arguments (a minimal illustration of the two reasoning modes is sketched at the end of this section). However, none of these attempts provides a solution that combines data-driven and goal-driven reasoning to reason over incomplete and/or contradictory information. Another requirement for the reasoning engine is the capability to resolve conflicts using different criteria, either automatically or guided by the members of the decision-making process in order to achieve their goals. Defeasible logic-based attempts in Semantic Web applications provide only goal-driven reasoning, with the objective of identifying the facts that support a conclusion (Antoniou and Bikakis 2007). Their methodology does not provide any support for reasoning in an environment where conflicts may arise at run time, such as group decision making. When conflicts arise between rules, these formalisms represent and handle only individual preferences in the form of priorities. These priorities are usually embedded in the derivation mechanism, and competing rules are compared individually during the derivation process. The derivation notion is therefore bound to a single comparison criterion (defined by a single user) and fails to take into account the multiple factors that are important for making an informed decision.

To address the abovementioned drawbacks of defeasible reasoning-based Semantic Web applications, argumentation-based reasoning approaches in the existing literature have been discussed that take into account incomplete and/or contradictory information and reach an agreeable solution if possible. However, it is evident from the discussion in Sect. 2.7.2.2 that the Semantic Web and Web 2.0 are influenced by the philosophical view of argumentation, in which considerable emphasis is given to building arguments through human participation. Less importance has been given to monological argumentation, i.e. the automated construction of arguments, automated conflict resolution, and the determination of the acceptability of arguments by a reasoning engine in order to reach a conclusion. Therefore, such Semantic Web applications do not provide a solution for automated reasoning over the underlying information. There is thus a need for a system equipped with monological argumentation, with an automated built-in mechanism for argument construction that, through a reasoning process, identifies and resolves conflicts and recommends a decision. However, such an approach has not been proposed in the existing literature. Hence, based on the above discussion, it can be inferred that existing Semantic Web-based approaches fail to provide a solution for reasoning over information that may be potentially incomplete and/or contradictory, whether it resides within the enterprise or in other enterprises. In Chap. 3, the problem associated with hybrid reasoning is formally identified and defined, and in Chap. 4, a solution to the problem defined in the existing literature is proposed, together with a hybrid reasoning methodology for argument construction and conflict resolution.
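
The following minimal Python sketch illustrates, over hypothetical propositional rules, the difference between the two reasoning modes referred to above: a data-driven pass saturates the fact base, while a goal-driven pass checks whether a specific conclusion is supported. It is an illustrative sketch only and does not represent the hybrid reasoning methodology proposed in later chapters.

```python
rules = [({"a", "b"}, "c"), ({"c"}, "d")]   # (body, head) pairs; names are illustrative
facts = {"a", "b"}

def forward(facts, rules):
    """Data-driven: saturate the fact base (argument construction step)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

def backward(goal, facts, rules):
    """Goal-driven: check whether a specific conclusion is supported (acyclic rules assumed)."""
    if goal in facts:
        return True
    return any(all(backward(b, facts, rules) for b in body)
               for body, head in rules if head == goal)

print(forward(facts, rules))          # {'a', 'b', 'c', 'd'}
print(backward("d", facts, rules))    # True
```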

2.8.2.3 Lack of Different Argumentation-Driven Conflict Resolution Strategies

The hybrid reasoning engine discussed in the previous section generates a set of arguments which may conflict with each other. An argument may attack its counter-argument and defeat it on the basis of certain criteria, such as the strength or weight of the argument, i.e. the argument with greater strength defeats its counter-argument. The criteria for establishing defeat between an argument and its counter-argument are also context-dependent. As pointed out in Sect. 2.5.1, different argumentation frameworks have been proposed which use different defeat criteria. Within the working environment of an enterprise, different kinds of Semantic Web applications operate and each may have a different reasoning context, as discussed in Sect. 1.2. To enable these applications to reason over information and resolve conflicts, different conflict resolution strategies are required so that each application can use its own strategy for establishing priority between an argument and its counter-argument. Hence, there is a need for different argumentation-driven conflict resolution strategies, each using different criteria to establish defeat between an argument and its counter-argument.
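
As an illustration of how such pluggable strategies might look, the following Python sketch defines three hypothetical defeat criteria (strength, source authority and recency) and applies each to the same pair of arguments. The attribute names and strategies are assumptions made for illustration only, not those developed in later chapters.

```python
from dataclasses import dataclass

@dataclass
class Argument:
    claim: str
    weight: float        # strength of the argument
    source_rank: int     # lower is more authoritative
    timestamp: int       # larger is more recent

# Each strategy decides whether argument a defeats argument b by a different criterion.
STRATEGIES = {
    "by_strength":  lambda a, b: a.weight > b.weight,
    "by_authority": lambda a, b: a.source_rank < b.source_rank,
    "by_recency":   lambda a, b: a.timestamp > b.timestamp,
}

def defeats(a, b, strategy):
    return STRATEGIES[strategy](a, b)

pro = Argument("approve_loan",  weight=0.8, source_rank=2, timestamp=10)
con = Argument("~approve_loan", weight=0.6, source_rank=1, timestamp=12)

for name in STRATEGIES:
    winner = pro if defeats(pro, con, name) else con
    print(name, "->", winner.claim)
# by_strength -> approve_loan; by_authority and by_recency -> ~approve_loan
```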

In Chap. 3, the problem associated with conflict resolution between arguments is identified and defined, and in Chap. 4, a solution to the problem defined in the existing literature is proposed, together with different argumentation-driven conflict resolution strategies to address the needs of different applications in an enterprise.

2.8.3 No Methodology for Knowledge Integration or the Graphical Representation of the Reasoning Process and Results to Assist in Enterprise-Wide Decision Making

Semantic Web applications within enterprises today publish their information on the Web, either on an intranet or on the World Wide Web, which triggers the need to integrate the knowledge produced by the information systems of different enterprises in order to obtain a better picture for enterprise-wide decision making. However, current Semantic Web applications do not provide an enterprise-level knowledge integration methodology, especially when the results on a subject are potentially incomplete and inconsistent across the information systems of different enterprises. Most Semantic Web-based reasoning engines differ from each other in the following aspects:

  1. each has a different knowledge representation;

  2. each has different reasoning semantics;

  3. each has a different output format.

This results in enterprises being unable to share, reason over and integrate information coming from different Semantic Web applications, either within the enterprise or in different enterprises. Additionally, the decision maker in an enterprise always needs in-depth visibility of the reasoning process in order to take into account the rationale behind a conclusion and make appropriate decisions. The monotonic reasoning systems discussed in Sect. 2.7.1 and the non-monotonic reasoning-based systems discussed in Sect. 2.7.2 provide no visibility of, or information about, the reasoning process or how the results are reached. Similarly, they provide no graphical representation of the reasoning process in the form of a reasoning chain, which would help the decision maker trace the path from the evidence to the final conclusion and easily identify the basis on which the decision was reached.

Hence, it is evident from the above discussion that there is a need for a methodology that provides a solution for knowledge integration and depicts the reasoning process in a graphical representation format, in order to provide a better analysis environment for the decision maker so that appropriate decisions can be made. In Chap. 3, the problem associated with knowledge integration is formally identified and defined, and in Chap. 4, a solution to the problem defined in the existing literature is proposed, together with a methodology for knowledge integration and its graphical representation to assist the decision maker in enterprise-wide decision making.

2.9 Conclusion

In this chapter, a survey of the existing literature on argumentation and its adoption in the fields of Philosophy and AI was presented. A critical analysis of the existing reasoning approaches deployed on the Semantic Web was also given, categorizing them as either monotonic or non-monotonic. It is evident from the critical evaluation of these approaches that monotonic reasoning has a number of limitations that inhibit its ability to reason over information that could be potentially incomplete and contradictory. Non-monotonic reasoning, especially defeasible reasoning, is a good option, but it works under certain constraints which curtail its adoption in Semantic Web applications for business intelligence.