1 Introduction

Roughly speaking, the scientific method can be understood as a body of techniques for investigating phenomena, acquiring new information, and correcting and integrating previous data. Clearly, different reasoning processes take place during this activity: the derivation of what the established theory implies, the generation of hypotheses to explain observations not predicted by the theory, and the inference of consequences of such hypotheses that can later be tested for accuracy, among others. All these reasoning processes are essential to science, and understanding them is fundamental for understanding how science is done.

Among the many reasoning processes featured in scientific activities, three stand out. Deductive reasoning, the inference that preserves truth, is the most studied reasoning process. Default reasoning, which allows inferences to be drawn when the current information is incomplete, has been one of the most studied forms of ‘common sense’ reasoning. Abductive reasoning, roughly described as the search for an explanation of a surprising observation, is considered the most important process in the generation of new scientific theories. And however important each of these reasoning processes is on its own, their interaction matters even more: it is their combination that produces scientific results. Still, these processes have frequently been studied from dissimilar perspectives, with deduction mostly secluded within logic and computer science, default reasoning a subject mainly in artificial intelligence, and abductive reasoning studied mainly in the philosophy of science.

The present work proposes an understanding of deductive, default and abductive reasoning that puts these processes under the same umbrella. Its goal is to show how these forms of reasoning, all of them essential in the ‘process of science’, can be interpreted as different instances of the same phenomenon: epistemic dynamics. Reasoning is, after all, about managing knowledge and beliefs, and this paper’s aim is to make this idea precise. In a few words, the proposal’s main idea is that while deductive reasoning can be seen as an inference that generates knowledge from knowledge (Sect. 2), default reasoning can be seen as a generalisation that yields beliefs when the premises involve beliefs (Sect. 3), and abductive reasoning can be seen as a process that generates beliefs from knowledge (Sect. 4).

1.1 Epistemic Logic and Dynamic Epistemic Logic

The discussion in this work will be formalised with tools from dynamic epistemic logic, the ‘dynamic’ extension of epistemic logic. The basic ideas of these frameworks are recalled below. Still, it is important to emphasise that this paper’s main goal is not the use of these frameworks for representing the discussed forms of reasoning, but rather to show how the latter can be put under the same umbrella when they are understood as different instances of the epistemic dynamics phenomenon. Dynamic epistemic logic, understood as the epistemic language interpreted over possible worlds plus operations representing epistemic changes, is just a tool used here to show the kind of results that can be obtained by taking a dynamic epistemic approach to these forms of reasoning. The ideas of this paper can also be formalised within other formal frameworks, such as representing an agent’s information with two plain sets of propositional formulae, \(K\) (knowledge set) and \(B\) (belief set). Less interesting technical results might be obtained, but the main idea and its consequences would remain.

Epistemic logic Epistemic logic (EL; Hintikka 1962; Fagin et al. 1995) is a formal framework for reasoning about an agent’s knowledge. Given a set of atomic propositions \({\texttt{P}}\), the EL language extends the propositional one with formulae of the form \(K{\varphi}\), read as “the agent knows \(\varphi\)”. The classical semantic model for EL-formulae is a possible worlds model, a tuple \(M = \left\langle W, R, V \right\rangle\) with \(W\) a non-empty set of possible worlds, \(R \subseteq (W \times W)\) a binary accessibility relation (typically assumed to be at least reflexive) indicating which worlds the agent considers epistemically possible from each one of them, and \(V:{{\texttt{P}}}\rightarrow \wp ({W})\) an atomic valuation function indicating the possible worlds in which each atomic proposition is the case.

Formulae are evaluated on pointed models \((M, w)\) with \(M\) a possible worlds model and \(w \in W\) a possible world (the evaluation point). Atomic propositions are evaluated by following the atomic valuation function, and Boolean connectives are interpreted in the standard way. The key clause, the one for formulae of the form \({K}{\varphi }\), states that the agent knows \(\varphi\) at \(w\), \((M, w) \Vdash {K}{\varphi }\), if and only if \(\varphi\) is true in all the worlds she considers epistemically possible from \(w\), that is,

$$\begin{aligned} (M, w) \Vdash {K}{\varphi }\quad\hbox {iff for all}\;u \in W, Rwu\;\hbox {implies}\;(M, u) \Vdash \varphi \end{aligned}$$

Formula \(\varphi\) is true at \((M, w)\) when \((M, w) \Vdash \varphi\). The fact that \(\varphi\) is true at every possible world of a given model \(M\) is denoted by \(M \Vdash \varphi\); the fact that \(\varphi\) is valid (true at every possible world of every model) is denoted by \(\Vdash \varphi\).
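To make these clauses concrete, here is a minimal Python sketch (our own illustration, not part of any standard library; the tuple representation of formulae and the name `holds` are our assumptions) that evaluates formulae on a small possible worlds model:

```python
from itertools import product

# Formulae as nested tuples: ('atom', 'p'), ('not', f), ('and', f, g),
# ('imp', f, g) and ('K', f).
def holds(W, R, V, w, f):
    """Evaluate formula f at world w of the possible worlds model (W, R, V)."""
    if f[0] == 'atom':
        return w in V[f[1]]
    if f[0] == 'not':
        return not holds(W, R, V, w, f[1])
    if f[0] == 'and':
        return holds(W, R, V, w, f[1]) and holds(W, R, V, w, f[2])
    if f[0] == 'imp':
        return (not holds(W, R, V, w, f[1])) or holds(W, R, V, w, f[2])
    if f[0] == 'K':   # the key clause: f[1] holds in every R-accessible world
        return all(holds(W, R, V, u, f[1]) for u in W if (w, u) in R)
    raise ValueError(f'unknown connective: {f[0]}')

# Two indistinguishable worlds; p holds only in w1, so at w1 the agent
# does not know p even though p is true there.
W = {'w1', 'w2'}
R = set(product(W, W))    # full uncertainty (reflexive, as required)
V = {'p': {'w1'}}
p = ('atom', 'p')
print(holds(W, R, V, 'w1', p))           # True
print(holds(W, R, V, 'w1', ('K', p)))    # False
```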

The use of a formal framework makes it possible to study the consequences of the stated definition of knowledge. As a first example, take any formulae \(\varphi\) and \(\psi\): if an agent knows both \(\varphi \rightarrow \psi\) and \(\varphi\), then both formulae are true in every epistemic possibility, and hence so is \(\psi\); thus, the agent knows \(\psi\). This is the famous \(K\) axiom, and most of the discussion here will be exemplified by variations of it. As a second example, if \(\varphi\) is valid, then it holds in every world and, in particular, in every world the agent considers possible, thus making it part of her knowledge. This is known as the necessitation rule. In symbols,

$$\begin{aligned} \Vdash {K}{(\varphi \rightarrow \psi )} \rightarrow ({K}{\varphi } \rightarrow {K}{\psi }) \quad \hbox {and}\quad \hbox {if}\;\Vdash \varphi , \hbox {then}\;\Vdash {K}{\varphi } \end{aligned}$$

From these two properties it follows that, under the stated definition, the notion of knowledge is closed under logical consequence.
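Spelling out the two-step argument behind this closure property (a small worked instance, not an additional result): from the validity of \(\varphi \rightarrow \psi\), the necessitation rule gives the validity of \({K}{(\varphi \rightarrow \psi )}\), and then the \(K\) axiom distributes the knowledge operator over the implication:

$$\begin{aligned} \hbox {if}\;\Vdash \varphi \rightarrow \psi ,\;\hbox {then}\;\Vdash {K}{(\varphi \rightarrow \psi )},\;\hbox {and hence}\;\Vdash {K}{\varphi } \rightarrow {K}{\psi } \end{aligned}$$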

The ability to discuss formally the properties of the notion of knowledge makes EL a suitable tool not only in logic but also in epistemology (Hendricks 2006), artificial intelligence (Meyer and Hoek 1995; Wheeler and Pereira 2004), game theory (Bacharach et al. 1997; de Bruin 2010; Perea 2012) and other fields. Still, as useful as it is, EL has its limitations, one of them being that it represents the knowledge of an agent at a single moment in time, without looking at how this knowledge changes, either abstractly through time or concretely via specific epistemic actions. Its ‘dynamic’ extension, dynamic epistemic logic, follows the second path, defining concrete representations of diverse epistemic actions that change an agent’s information.

Dynamic epistemic logic If a possible worlds model represents an agent’s knowledge at some stage, then changes in the agent’s knowledge can be represented as changes in such model. Following this idea, dynamic epistemic logic (DEL; van Ditmarsch et al. 2007; van Benthem 2011) defines different model operations representing diverse epistemic actions, ranging from public and private versions of announcements (the latter being meaningful in multi-agent situations) to different forms of belief revision.

As an example, consider the public announcement case (Plaza 1989; Gerbrandy and Groeneveld 1997): after a public (and truthful!) announcement of a formula \(\chi\), the agent can discard those possibilities where \(\chi\) is not the case: they are not possible anymore. Formally, given a model \(M = \left\langle W, R, V \right\rangle\) representing an agent’s knowledge, the model \({M}_{\chi !} = \left\langle W', R', V' \right\rangle\) representing the agent’s knowledge after the public announcement of \(\chi\) is such that \(W'\) contains only those worlds in \(W\) where \(\chi\) holds (formally, \(W' := \left\{ w \in W \mid (M, w) \Vdash \chi \right\}\)), while \(R'\) and \(V'\) are the restrictions of \(R\) and \(V\) to the new domain (formally, \(R' := R \cap (W' \times W')\) and, for every \(p \in {\texttt{P}}\), \(V'(p) := V(p) \cap W'\)).

Syntactically, the language is extended with modalities of the form \(\langle \chi !\rangle\) for expressing the effects of public announcements. Their semantic interpretation states that at \((M, w)\) it is possible to announce publicly \(\chi\) such that afterwards \(\varphi\) is the case, \((M, w) \Vdash \langle \chi !\rangle \,{\varphi }\) , if and only if \(\chi\) is true, \((M, w) \Vdash \chi\) , and after \(\chi\) ’s public announcement \(\varphi\) is the case, \(({M}_{\chi !}, w) \Vdash \varphi\). In symbols,

$$\begin{aligned} (M, w) \Vdash \langle \chi !\rangle \,{\varphi } \quad \hbox {iff}\quad (M, w) \Vdash \chi \quad \hbox {and}\quad ({M}_{\chi !}, w) \Vdash \varphi \end{aligned}$$
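Continuing the Python sketch from above (the function name `announce` is again our own), the announcement operation is just a restriction of the three model components:

```python
def announce(W, R, V, chi):
    """Public announcement of chi: keep only the worlds where chi holds."""
    W2 = {w for w in W if holds(W, R, V, w, chi)}
    R2 = {(w, u) for (w, u) in R if w in W2 and u in W2}
    V2 = {atom: worlds & W2 for atom, worlds in V.items()}
    return W2, R2, V2

# After truthfully announcing p at w1, the ¬p-world w2 is discarded,
# so the agent comes to know p.
W1, R1, V1 = announce(W, R, V, p)
print(holds(W1, R1, V1, 'w1', ('K', p)))   # True
```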

2 Deductive Reasoning

One of the main concerns in logic has been the concept of inference, i.e., the general process of drawing a conclusion from some given premises/assumptions. Among the many forms of inference one can conceive, deduction, also known as valid inference or logical/classical consequence, has been the most extensively studied (see, e.g., Troelstra and Schwichtenberg 2000). The reason is not difficult to find: the conclusion of a deductive reasoning step is true in every single case in which all the premises are true, so deductive reasoning preserves truth. The following is a very simple example of this form of reasoning.

(Figure a: a simple deductive inference; for instance, from “all birds have feathers” and “Chilly Willy is a bird”, conclude “Chilly Willy has feathers”.)

If truth-preservation is the characteristic property of deductive reasoning, then it is not difficult to reformulate this process in epistemic terms. Assuming, as is normally done, that knowledge is truthful, deductive reasoning can be seen as a reasoning process that preserves knowledge or, more precisely, as a reasoning process that makes the conclusion known in all those cases in which all the premises are already known.

Now, note how, even though this process is easily represented when an agent’s knowledge is displayed as a plain set of formulae (adding the conclusion of the applied deductive step to the set is enough), it cannot be represented properly within standard EL: as mentioned before, the knowledge of every agent in this framework is closed under logical consequence, and hence a deductive reasoning step does not give any agent any new information.

But, as discussed in van Benthem and Velázquez-Quesada (2010), this does not imply that epistemic logic is an inadequate tool for epistemological concerns. Under the possible worlds semantics, the \(K\) operator really describes the agent’s potential implicit semantic information (what she can eventually obtain), which definitely has the mentioned closure property. However, this property need not hold for a related but different intuitive notion: actual explicit knowledge (what the agent currently has). This idea originated in several works, such as Konolige (1984), Levesque (1984), Lakemeyer (1986), Vardi (1986) and Fagin and Halpern (1988). Among them, the framework of Fagin and Halpern (1988) has the advantage of being an extension of EL, and its key definitions are as follows. Semantically, a possible worlds model is extended with an acknowledgement set function \({\mathsf{A}}\) that assigns a set of formulae \({\mathsf{A}}(w)\) to each possible world \(w\); intuitively, this set contains exactly those formulae the agent has acknowledged as true at \(w\). Syntactically, the language is extended with an operator \({\mathrm{A}}\), semantically interpreted as

$$\begin{aligned} (M, w) \Vdash {\mathrm{A}}{\varphi } \quad \hbox {iff}\quad \varphi \in {\mathsf{A}}(w) \end{aligned}$$

Then, while the notion of implicit knowledge is defined as truth in every epistemic possibility, as the notion of knowledge was defined before,

$$\begin{aligned} {K_{\mathrm{Im }}}{\varphi } := {K}{\varphi } \end{aligned}$$

the notion of explicit knowledge can be defined as truth plus acknowledgement of truth in every such possibility:

$$\begin{aligned} {K_{\mathrm{Ex }}}{\varphi } := {K}{(\varphi \wedge {\mathrm{A}}{\varphi })} \end{aligned}$$

These definitions imply some interesting validities. First, if some \(\varphi\) is explicitly known, then it is also implicitly known, that is,

$$\begin{aligned} \Vdash {K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\varphi } \end{aligned}$$

And, even though implicit knowledge is closed under modus ponens,

$$\begin{aligned} \Vdash {K_{\mathrm{Im }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Im }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\psi }) \end{aligned}$$

explicit knowledge does not need to, so

$$\begin{aligned} \not \Vdash {K_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Ex }}}{\psi }) \end{aligned}$$

In other words, it is possible for an agent to know explicitly an implication and its antecedent without knowing explicitly its consequent. But the fact that the implication’s consequent is not explicitly known does not mean that such ‘truth’ is unreachable for the agent. First, knowing explicitly an implication and its antecedent implies the agent knows the consequent implicitly:

$$\begin{aligned} \Vdash {K_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\psi }) \end{aligned}$$

And there is more: such implicit knowledge can be turned into explicit knowledge, and this is precisely where deductive reasoning plays its crucial role. Within this framework, this form of reasoning can be represented in the following way.

A model operation for deductive reasoning can be defined as one that adds some formula to the \({\mathsf{A}}\)-set of a given world when some other formulae are already present. The simplest instance of this idea, an operation representing a modus ponens step, just adds a formula \(\chi\) to the \({\mathsf{A}}\)-set of those worlds in which both \(\eta \rightarrow \chi\) and \(\eta\) are already present. Formally, a modus ponens reasoning step with \(\eta \rightarrow \chi\) can be represented as an operation that takes a model \(M = \left\langle W, R, V, {\mathsf{A}} \right\rangle\) and returns a model \({M}_{{\overset{\eta \rightarrow \chi }{\hookrightarrow }}} = \left\langle W, R, V, {\mathsf{A}}' \right\rangle\) that differs from \(M\) only in the acknowledgement set function, which is defined as

$$\begin{aligned} {\mathsf{A}}'(w) := \left\{ \begin{array}{ll} {\mathsf{A}}(w) \cup \left\{ \chi \right\} & \text {if } \left\{ \eta \rightarrow \chi , \eta \right\} \subseteq {\mathsf{A}}(w) \\ {\mathsf{A}}(w) & \text {otherwise} \\ \end{array} \right. \end{aligned}$$

In the new model the agent will acknowledge \(\chi\) precisely in those worlds in which she had acknowledged \(\eta \rightarrow \chi\) and \(\eta\). This operation’s effect can be expressed in the language by adding modalities \(\langle {\overset{\eta \rightarrow \chi }{\hookrightarrow }}\rangle\), semantically interpreted as

$$\begin{aligned} (M, w) \Vdash \langle {\overset{\eta \rightarrow \chi }{\hookrightarrow }}\rangle \,{\varphi } \quad \hbox {iff}\quad (M, w) \Vdash {K_{\mathrm{Ex }}}{(\eta \rightarrow \chi )} \wedge {K_{\mathrm{Ex }}}{\eta } \quad \hbox {and}\quad ({M}_{{\overset{\eta \rightarrow \chi }{\hookrightarrow }}}, w) \Vdash \varphi \end{aligned}$$

With this, it is now possible to express the key effect of a deductive reasoning step: if an agent knows explicitly an implication and its antecedent, then after a modus ponens deductive reasoning step she will know explicitly the consequent. This yields the following dynamic (and more realistic) version of the \(K\) axiom:

$$\begin{aligned} \Vdash {K_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\hookrightarrow }}\rangle \,{{K_{\mathrm{Ex }}}{\chi }} {\big )}\end{aligned}$$
(1)
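A minimal sketch of this machinery, continuing the Python illustration (the names `knows_ex` and `mp_step`, and the representation of acknowledgement sets as Python sets of formula tuples, are our own assumptions):

```python
def knows_ex(W, R, V, Ack, w, f):
    """K_Ex f at w: f is true and acknowledged in every R-accessible world."""
    return all(holds(W, R, V, u, f) and f in Ack[u]
               for u in W if (w, u) in R)

def mp_step(Ack, eta, chi):
    """Modus ponens step: add chi to Ack(w) wherever both eta -> chi and eta
    are already acknowledged; leave every other acknowledgement set as it is."""
    imp = ('imp', eta, chi)
    return {w: (s | {chi}) if imp in s and eta in s else s
            for w, s in Ack.items()}

# One-world model where p and q are true; the agent acknowledges p and
# p -> q, so she knows q only implicitly. One modus ponens step makes q
# explicit, as validity (1) predicts.
W3, R3 = {'w'}, {('w', 'w')}
V3 = {'p': {'w'}, 'q': {'w'}}
p3, q3 = ('atom', 'p'), ('atom', 'q')
Ack = {'w': {p3, ('imp', p3, q3)}}
print(knows_ex(W3, R3, V3, Ack, 'w', q3))                    # False: only implicit
print(knows_ex(W3, R3, V3, mp_step(Ack, p3, q3), 'w', q3))   # True: now explicit
```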

More details of this particular framework for deductive inference in the setting of implicit and explicit knowledge can be found in Grossi and Velázquez-Quesada (2009, 2010).

Caveat The use of alternative semantic models might produce different results. For example, it is also possible to define the notions of implicit and explicit knowledge on neighbourhood models (Scott 1970; Montague 1970; see Pacuit 2007 or Chapter 7 of Chellas 1980 for detailed presentations). In that framework (Velázquez-Quesada 2013), if some \(\varphi\) is explicitly known, it need not be implicitly known, that is,

$$\begin{aligned} \not \Vdash {K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\varphi } \end{aligned}$$

and, in consequence, knowing explicitly an implication and its antecedent does not make an agent know the consequent implicitly:

$$\begin{aligned} \not \Vdash {K_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\psi }) \end{aligned}$$

The failure of these properties is counterintuitive, but it has an explanation. In the possible worlds framework, both implicit knowledge (the primitive concept) and explicit knowledge (the derived concept) are defined ‘statically’, via formulas that do not involve actions. In contrast, in the neighbourhood framework, while explicit knowledge (now the primitive concept) is defined statically, implicit knowledge is defined as what the agent will know explicitly after performing deductive inference. Hence, although at some stage the agent might know that she does not know some \(p\) \(({K_{\mathrm{Ex }}}{\lnot {K_{\mathrm{Ex }}}{p}})\), further reasoning might tell her \(p\)’s truth value \(({K_{\mathrm{Ex }}}{p})\), making the previous piece of knowledge obsolete.

Nevertheless, some key properties still hold. For example, implicit knowledge is closed under modus ponens,

$$\begin{aligned} \Vdash {K_{\mathrm{Im }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Im }}}{\varphi } \rightarrow {K_{\mathrm{Im }}}{\psi }) \end{aligned}$$

and explicit knowledge does not need to:

$$\begin{aligned} \not \Vdash {K_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Ex }}}{\varphi } \rightarrow {K_{\mathrm{Ex }}}{\psi }) \end{aligned}$$

More importantly, it is also possible to define a model operation \({\overset{\eta \rightarrow \chi }{\hookrightarrow }}\) that represents deductive reasoning properly:

$$\begin{aligned} \Vdash {K_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\hookrightarrow }}\rangle \,{{K_{\mathrm{Ex }}}{\chi }} {\big )}\end{aligned}$$

Epistemic logic, omniscience and ampliative inference As mentioned, every epistemic logic agent’s knowledge is closed under logical consequence. This has been called the logical omniscience problem of epistemic logic (Hintikka 1962; Stalnaker 1991), and it is one of the main reasons why several authors (e.g., Hocutt 1972) have challenged the applicability of logic to any realistic account of knowledge. This discussion is related to the more general scandal of deduction (Hintikka 1973; Sequoiah-Grayson 2008; D’Agostino and Floridi 2009), which states that deductive reasoning does not provide new information: whatever is concluded was already present in the information given by the premises, and thus such a reasoning process is not really informative. In fact, it has been argued that only non-truth-preserving inferences can be considered ampliative since, if the concluded information is genuinely new, its truth cannot be guaranteed by the old information (Hintikka and Sandu 2007).

In this sense, and as stated before, the approach used in this section follows the idea that standard epistemic logic only describes implicit semantic information. What is really needed, then, is not “epistemic logic bashing”, but rather a richer account of an agent’s attitudes. In particular, the approach avoids logical omniscience by asking the agent to acknowledge the truth of a formula (in all epistemic possibilities) in order to make it part of her explicit knowledge, a method that can be seen as a particular case of the general idea behind logics for justifications and/or evidence (e.g., Artëmov and Nogina 2005; van Benthem and Pacuit 2011). By distinguishing between implicit and explicit knowledge, the approach aligns with the distinction between surface information and depth information made in Hintikka (1970), thus making truth-preserving inference definitely ampliative: though it does not generate new implicit knowledge, it definitely increases explicit knowledge, and it is explicit knowledge (what the agent currently has, and not what she could eventually derive from it) that plays a role in ‘real’ decision-making scenarios.

3 Default Reasoning

Though reasoning with full certainty (i.e., knowledge) is useful in certain areas (e.g., mathematics, computer science), most of the information real agents deal with is not absolutely certain but only very plausible. Instead of having information stating “\(\varphi\) is true”, agents typically have information of the form “\(\varphi\) is plausible”, or “normally, \(\varphi\) is the case”. A classical example is the following.

(Figure b: a default inference: from “Chilly Willy is a bird” and “normally, birds fly”, conclude that, plausibly, Chilly Willy flies.)

From the unquestionable information that Chilly Willy is a bird and the plausible information that birds fly, this inference concludes that it is very plausible that Chilly Willy flies. Still, the fact that Chilly Willy flies cannot be taken as an absolute truth: even if the two premises are true, Chilly Willy might be a penguin, or it might have broken wings. In such scenarios, it will not fly.

The aim of default reasoning (Reiter 1980; Delgrande et al. 1994; Boutilier 1994; Segerberg 1999) is to represent this and other similar forms of reasoning, including those based on general statements (e.g., “under typical circumstances, \(\varphi\)s are \(\psi\)s”), lack of information to the contrary (e.g., “if a \(\varphi\) were not a \(\psi\), you would know it”), conventional uses (e.g., “a \(\varphi\) is a \(\psi\) unless otherwise indicated”), persistence (e.g., “a \(\varphi\) is a \(\psi\) unless something changes it”) and so on. These inferences, mostly studied within artificial intelligence, make it possible to draw conclusions in cases where the information is incomplete but no contradictory evidence is present. One of their characteristics is that such conclusions are not absolute: they might be withdrawn when the information becomes complete.

For this paper’s purposes, it is important to note that default reasoning cannot be captured by a deductive inference. Consider the Chilly Willy example: the conclusion can be taken as an absolute truth only if the premises include, besides the fact that Chilly Willy is a bird, statements discarding each one of the (possibly infinite) reasons for which Chilly Willy might not fly. Moreover, in order to use such an inference, the agent would need to verify that none of these ‘flying-impossibility’ situations holds: she would need to know that Chilly Willy is not a penguin, that it does not have broken wings, and so on.

But there are other epistemic notions besides knowledge. Public transport in Amsterdam is highly reliable, usually conforming to its schedule, and nevertheless we cannot say in the absolute sense that we know the bus will be at the stop on time: many unpredictable factors, such as snow, mechanical failures or car crashes, may intervene. If we had to act based only on what we know, we would have very little manoeuvring space. Fortunately, our attitudes towards information go beyond just ‘knowing’ and ‘not knowing’. Most of our behaviour is led not by what we know, but rather by what we believe.

With this in mind, and assuming that beliefs do not need to be true (another natural assumption), default reasoning can be understood as an inference process that involves not only knowledge but also beliefs. More precisely, default reasoning can be understood as an inference whose premises do not need to be known (i.e., they do not need to be absolutely true); they might be just believed (i.e., just plausibly true). The inference then produces a conclusion that will be known when all the premises are known, but that otherwise will be just believed, and hence subject to revision in the light of further information.

In order to represent this form of reasoning in DEL style, it is necessary to extend the possible worlds model so that it represents both knowledge and beliefs. This paper’s approach follows the plausibility models of Baltag and Smets (2008): possible worlds structures in which the accessibility relation is interpreted as a plausibility relation, representing the agent’s plausibility order among the worlds she considers epistemically possible. In such structures, denoted here by \(M = \left\langle W, \le , V \right\rangle\), an agent knows a formula \(\varphi\), \({K}{\varphi }\), when \(\varphi\) is true in all her epistemically possible worlds; on the other hand, for her to believe \(\varphi\), \({B}{\varphi }\), the formula only needs to be true in the most plausible of them.

The precise semantic structure used here, plausibility acknowledgement models (Velázquez-Quesada 2014), is built by extending plausibility models with the function \({\mathsf{A}}\) used in Sect. 2: \({\mathsf{A}}(w)\) is the set of formulae the agent has acknowledged as true at world \(w\). The language for these structures extends the propositional one with the operator \({\mathrm{A}}\) from the previous section and with two modalities: one for the plausibility relation \(\le\), and another for a relation \(\sim\) defined as the union of \(\le\) and its converse \(\ge\). This relation \(\sim\) makes it possible to look at every epistemically possible world, regardless of whether it is more \((\le )\) or less \((\ge )\) plausible than the evaluation point. Thus, it allows us to define the notions of implicit and explicit knowledge following the previous section’s idea [i.e., \({K_{\mathrm{Im }}}{\varphi } := [{\scriptstyle \sim }]\,{\varphi }\) and \({K_{\mathrm{Ex }}}{\varphi } := [{\scriptstyle \sim }]\,{(\varphi \wedge {\mathrm{A} }{\varphi })}\)].

With respect to the notion of belief, thanks to the properties of the plausibility relation, a formula \(\varphi\) is true in the most plausible worlds from a given world \(w\) if and only if \(\langle {\scriptstyle \le }\rangle \,{[{\scriptstyle \le }]\,{\varphi }}\) holds at it. Then, while the notion of implicit belief can be defined as truth in the most plausible situations,

$$\begin{aligned} {B_{\mathrm{Im }}}{\varphi } := \langle {\scriptstyle \le }\rangle \,{[{\scriptstyle \le }]\,{\varphi }} \end{aligned}$$

the notion of explicit belief can be defined as truth plus acknowledgement of truth in such worlds:

$$\begin{aligned} {B_{\mathrm{Ex }}}{\varphi } := \langle {\scriptstyle \le }\rangle \,{[{\scriptstyle \le }]\,{(\varphi \wedge {\mathrm{A} }{\varphi })}} \end{aligned}$$
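As before, a small Python sketch may help fix ideas; here the plausibility order is a set of pairs with \((w, u)\) read as “\(u\) is at least as plausible as \(w\)”, assumed connected, and all formulae are propositional (the names `best` and `believes` are ours; `holds` and `product` come from the first sketch):

```python
def best(W, leq):
    """The most plausible worlds: those lying above every world in the order."""
    return {w for w in W if all((u, w) in leq for u in W)}

def believes(W, leq, V, Ack, f, explicit=False):
    """B_Im f (and B_Ex f when explicit=True): truth, plus acknowledgement of
    truth in the explicit case, in all the most plausible worlds."""
    return all(holds(W, set(), V, w, f) and (not explicit or f in Ack[w])
               for w in best(W, leq))

# Chilly Willy: in the more plausible world wf the bird flies, in the less
# plausible wn it does not; the agent believes 'fly' but does not know it.
Wp = {'wf', 'wn'}
leq = {('wf', 'wf'), ('wn', 'wn'), ('wn', 'wf')}   # wn below wf: wf on top
Vp = {'fly': {'wf'}}
fly = ('atom', 'fly')
Ackp = {'wf': {fly}, 'wn': set()}
print(believes(Wp, leq, Vp, Ackp, fly, explicit=True))         # True:  B_Ex fly
print(holds(Wp, set(product(Wp, Wp)), Vp, 'wf', ('K', fly)))   # False: no knowledge
```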

This framework generates interesting validities. For example, given \(\sim\)’s definition, knowledge implies belief:

$$\begin{aligned} \Vdash \left( {K_{\mathrm{Im }}}{\varphi } \rightarrow {B_{\mathrm{Im }}}{\varphi }\right) \;\wedge \; \left( {K_{\mathrm{Ex }}}{\varphi } \rightarrow {B_{\mathrm{Ex }}}{\varphi }\right) \end{aligned}$$

And, just as explicit knowledge implies implicit knowledge, if some \(\varphi\) is explicitly believed, then it is also implicitly believed:

$$\begin{aligned} \Vdash {B_{\mathrm{Ex }}}{\varphi } \rightarrow {B_{\mathrm{Im }}}{\varphi } \end{aligned}$$

Moreover, even though implicit beliefs are closed under modus ponens,

$$\begin{aligned} \Vdash {B_{\mathrm{Im }}}{(\varphi \rightarrow \psi )} \rightarrow ({B_{\mathrm{Im }}}{\varphi } \rightarrow {B_{\mathrm{Im }}}{\psi }) \end{aligned}$$

explicit beliefs do not need to:

$$\begin{aligned} \not \Vdash {B_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({B_{\mathrm{Ex }}}{\varphi } \rightarrow {B_{\mathrm{Ex }}}{\psi }) \end{aligned}$$

In other words, it is possible for an agent to believe explicitly an implication and its antecedent without believing explicitly its consequent, thus making a modus ponens reasoning step a useful tool.

Consider now, for the purposes of this section, a situation in which an agent believes explicitly an implication \(\varphi \rightarrow \psi\) (i.e., \({B_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )}\)) and knows explicitly its antecedent \(\varphi\) (i.e., \({K_{\mathrm{Ex }}}{\varphi }\)). In the described framework, this implies that the agent believes implicitly the implication’s consequent. This is because both explicit knowledge and explicit beliefs imply implicit beliefs; hence, \({B_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )}\) and \({K_{\mathrm{Ex }}}{\varphi }\) imply \({B_{\mathrm{Im }}}{(\varphi \rightarrow \psi )}\) and \({B_{\mathrm{Im }}}{\varphi }\), respectively. But implicit beliefs are closed under modus ponens, so \({B_{\mathrm{Im }}}{\psi }\) follows. Thus,

$$\begin{aligned} \Vdash {B_{\mathrm{Ex }}}{(\varphi \rightarrow \psi )} \rightarrow ({K_{\mathrm{Ex }}}{\varphi } \rightarrow {B_{\mathrm{Im }}}{\psi }) \end{aligned}$$

Now observe how a modus ponens inference here is conceptually different from the one discussed in the previous section, where both the implication and its antecedent were known. In that case, since knowledge implies truth, the antecedent is true and the implication preserves truth; hence, the implication’s consequent must be true. In other words, situations where the implication and its antecedent hold but the consequent does not are not possible.

The case is different when the implication is only believed: in such a scenario, even though it is reasonable for the agent to consider very likely those situations in which the implication and its antecedent (and hence its consequent) hold, she should not discard those situations in which the antecedent holds but the implication (and hence its consequent) fails. In this framework, such a reasoning step can be represented by a model operation (denoted by \({\overset{\eta \rightarrow \chi }{\rightharpoonup }}\)) that creates new epistemic possibilities when the agent is not absolutely certain of some of the involved premises. This allows her to use her beliefs to draw inferences without forgetting that beliefs might fail, producing knowledge when both the involved implication and its antecedent are known, but producing only beliefs when the implication or its antecedent is just believed (see Velázquez-Quesada 2014 for details). In symbols,

$$\begin{aligned} \Vdash {K_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\rightharpoonup }}\rangle \,{{K_{\mathrm{Ex }}}{\chi }} {\big )}\nonumber \\ \Vdash {K_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{B_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\rightharpoonup }}\rangle \,{{B_{\mathrm{Ex }}}{\chi }} {\big )}\nonumber \\ \Vdash {B_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\rightharpoonup }}\rangle \,{{B_{\mathrm{Ex }}}{\chi }} {\big )}\nonumber \\ \Vdash {B_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }{\big (}{B_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\rightharpoonup }}\rangle \,{{B_{\mathrm{Ex }}}{\chi }} {\big )}\end{aligned}$$
(2)
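The operation \({\overset{\eta \rightarrow \chi }{\rightharpoonup }}\) of Velázquez-Quesada (2014) also creates new epistemic possibilities; the following drastically simplified sketch ignores that part and only traces the acknowledgement bookkeeping, which is enough to see the third validity of (2) in miniature: an implication that is merely believed (acknowledged only in the most plausible world) plus a known antecedent yields a merely believed conclusion. It reuses the functions and the model defined in the earlier sketches.

```python
# The agent explicitly knows 'bird' everywhere, but acknowledges the
# implication bird -> fly only in the most plausible world wf.
bird = ('atom', 'bird')
Vd = {'bird': {'wf', 'wn'}, 'fly': {'wf'}}
Ackd = {'wf': {bird, ('imp', bird, fly)}, 'wn': {bird}}
Ackd2 = mp_step(Ackd, bird, fly)              # the (simplified) default step
Rfull = set(product(Wp, Wp))                  # the ~ relation: all worlds
print(believes(Wp, leq, Vd, Ackd2, fly, explicit=True))   # True:  B_Ex fly
print(knows_ex(Wp, Rfull, Vd, Ackd2, 'wf', fly))          # False: not K_Ex fly
```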

This interpretation is faithful to the spirit of default reasoning: in the most plausible situations, a given bird flies; hence, given that Chilly Willy is a bird, in the most plausible situations Chilly Willy flies. Thus, this setting provides a very general perspective on the workings of inferences that mix knowledge and belief, far beyond the specifics of particular consequence relations.

On ampliative inference processes As mentioned before, distinguishing between explicit and implicit notions of information makes truth-preserving (i.e., deductive) inference ampliative, as it increases explicit knowledge. The generalisation of the current section to inferences involving both knowledge and beliefs shows how these forms of non-truth-preserving inference can also be considered ampliative, but in a different sense. On the one hand, truth-preserving inference is internally ampliative: though it does not change the number of situations the agent considers, it does increase the information the agent has about each one of these possibilities. On the other hand, the key characteristic of the discussed non-truth-preserving inferences is that they are externally ampliative: they increase the number of possibilities the agent considers.

4 Abductive Reasoning

Introduced to modern logic by Charles S. Peirce, abductive reasoning (Paul 1993; Lipton 2004; Magnani 2001; Aliseda 2006) is typically understood as the process of looking for an explanation for a surprising observation. Many intellectual tasks, such as medical and fault diagnosis, scientific discovery, legal reasoning, and natural language understanding, belong to this category, making abduction one of the most important reasoning processes. A very simple example of this form of reasoning is the following:

(Figure c: a simple abductive inference: given a surprising observation \(\chi\) and a known implication \(\eta \rightarrow \chi\), the hypothesis \(\eta\) is proposed as an explanation.)

Most formal approaches to abductive reasoning follow a syntactic perspective, with the typical definitions of an abductive problem and its explanation(s) given in terms of a theory and a formula. Accordingly, most of the work on the subject has focused on (1) discussing what a theory and a formula should satisfy in order to form an abductive problem, and what a formula should satisfy in order to be an abductive explanation (Aliseda 2006); (2) proposing algorithms to find abductive explanations (Kakas et al. 1992; Mayer and Pirri 1993, 1995; Reyes-Cabello et al. 2006; Klarman 2008); and (3) analysing the structural properties of abductive consequence relations (Lobo and Uzcátegui 1997; Aliseda 2003; Walliser et al. 2004).

Even though abduction has traditionally been linked to scientific theories, in its most basic forms it deals with an agent’s (or a set of agents’) information and the way this information changes due to a surprising observation, a fact already observed in, e.g., Aliseda (2000), Gabbay and Woods (2005) and Woods (2012). At heart, abductive reasoning deals with epistemic changes triggered by an action, and thus it makes sense to look for dynamic epistemic representations.

Different from deductive and default reasoning, abductive reasoning can be seen as a process that involves more than one action, and thus more than two stages. This section, based on Velázquez-Quesada et al. (2013), distinguishes the following three stages (refinements are, of course, possible).

1. The moment before the surprising observation \(\chi\), denoted here by \({\mathbf {s_{1}}}\).

2. The moment after the surprising observation but before incorporating its chosen solution \(\eta\) into the agent’s information, denoted here by \({\mathbf {s_{2}}}\).

3. The moment after the solution has been incorporated into the agent’s information, denoted here by \({\mathbf {s_{3}}}\).

This understanding of the abductive process can be represented with a diagram in the following way:

(Figure d: a diagram of the three stages, \({\mathbf {s_{1}}} \rightarrow {\mathbf {s_{2}}} \rightarrow {\mathbf {s_{3}}}\), with the observation of \(\chi\) as the first transition and the acceptance of the chosen solution as the second.)

If each stage \({\mathbf {s_{1}}}, {\mathbf {s_{2}}}\) and \({\mathbf {s_{3}}}\) represents a still picture of the agent’s information (e.g., by using possible worlds models), then the transitions between them can be represented by the epistemic actions that change the agent’s information: the surprising observation of \(\chi\) takes the agent from \({\mathbf {s_{1}}}\) to \({\mathbf {s_{2}}}\), and the acceptance of a chosen solution takes her from \({\mathbf {s_{2}}}\) to \({\mathbf {s_{3}}}\). Within DEL, a (surprising) observation can be represented by the public announcement operation defined in Sect. 1.1. However, the act of accepting a given explanation \(\eta\) cannot be represented with the same epistemic action: it would eliminate every \(\lnot \eta\) possibility, thus making the agent know \(\eta\). This is not reasonable because abductive reasoning is a non-monotonic process: the chosen explanation does not need to be the case, and in fact it might be discarded in the light of further information. But, as discussed in Sect. 3, epistemic notions are not restricted to knowledge. Instead of integrating the chosen solution as part of her knowledge, it is more reasonable for the agent to integrate it as part of her beliefs.

The action of incorporating a given \(\eta\) as a belief rather than as knowledge can be represented by means of a belief revision step (Gärdenfors 1992; Gärdenfors and Rott 1994; Williams and Rott 2001; Rott 2001). Within DEL, the plausibility models recalled in Sect. 3 allow us to represent an agent’s knowledge and beliefs. As mentioned, in such a framework beliefs are what is true in the most plausible worlds; hence, an act of belief revision towards \(\eta\) can be represented by a model operation that changes the plausibility order, making worlds that satisfy the given formula the most plausible ones. Of course, there are several ways in which such a new order can be defined: for example, a drastic approach would produce a plausibility order with only two layers, the topmost one with all the \(\eta\)-worlds, leaving all the \(\lnot \eta\)-worlds below. On the other hand, a very conservative option would simply add a topmost layer containing only the ‘best’ \(\eta\)-worlds. Each of these possibilities can be seen simply as one of the many different policies an agent has for revising her beliefs, thus allowing us to represent the behaviour of adventurous as well as cautious minds.

In order to make the discussion precise, consider the so-called radical upgrade. After applying this policy to revise the agent’s beliefs with the formula \(\eta\), “all \(\eta\)-worlds become more plausible than all \(\lnot \eta\)-worlds, and within the two zones, the old ordering remains” (van Benthem 2007). More precisely, if \(M = \left\langle W, \le , V \right\rangle\) is the current plausibility model, a radical upgrade with \(\eta\) produces the plausibility model \({M}_{{\eta \;}\!\Uparrow } = \left\langle W, \le ', V \right\rangle\) in which the new plausibility ordering is such that \(w \le ' u\) if and only if (1) \(w \le u\) and \(u\) is an \(\eta\)-world, or (2) \(w \le u\) and \(w\) is a \(\lnot \eta\)-world, or (3) \(w \sim u\), \(w\) is a \(\lnot \eta\)-world and \(u\) is an \(\eta\)-world.

The language is accordingly extended with modalities \(\langle {\eta }\!\Uparrow \rangle\) for expressing the effects of a radical revision with a formula \(\eta\). Their semantic interpretation states that at \((M, w)\) it is possible to perform a revision with \(\eta\) after which \(\varphi\) is the case, \((M, w) \Vdash \langle {\eta }\!\Uparrow \rangle \,{\varphi }\), if and only if after \(\eta\)’s radical revision \(\varphi\) is the case, \(({M}_{{\eta \;}\!\Uparrow }, w) \Vdash \varphi\). In symbols,

$$\begin{aligned} (M, w) \Vdash \langle {\eta }\!\Uparrow \rangle \,{\varphi } \quad \hbox {iff}\quad ({M}_{{\eta \;}\!\Uparrow }, w) \Vdash \varphi \end{aligned}$$
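In the running Python sketch, the three clauses of the definition translate directly (the name `radical_upgrade` is ours; \(\eta\) is assumed propositional):

```python
def radical_upgrade(W, leq, V, eta):
    """All eta-worlds become more plausible than all non-eta-worlds; within
    each zone the old ordering remains (clauses (1)-(3) above)."""
    sat = {w for w in W if holds(W, set(), V, w, eta)}   # the eta-worlds
    return {(w, u) for w in W for u in W
            if ((w, u) in leq and u in sat)                      # clause (1)
            or ((w, u) in leq and w not in sat)                  # clause (2)
            or (((w, u) in leq or (u, w) in leq)                 # clause (3):
                and w not in sat and u in sat)}                  # w ~ u

# Two initially indifferent worlds; upgrading with 'rain' puts the
# rain-world on top of the plausibility order.
Wa = {'wr', 'wd'}
leqa = set(product(Wa, Wa))
Va = {'rain': {'wr'}}
rain = ('atom', 'rain')
print(best(Wa, radical_upgrade(Wa, leqa, Va, rain)))   # {'wr'}
```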

With these tools it is possible to provide an epistemic and dynamic approach to abductive reasoning. Following Velázquez-Quesada et al. (2013), the agent is said to have an abductive problem \(\chi\) at \({\mathbf {s_{2}}}\) when she knows \(\chi\) at \({\mathbf {s_{2}}}\) but did not know it at \({\mathbf {s_{1}}}\); this reflects the idea of a ‘surprising observation’. For the definition of an abductive solution, a simple and yet useful approach is to say that \(\eta\) is one of the agent’s solutions for the abductive problem \(\chi\) if she knew \(\eta \rightarrow \chi\) before \(\chi\) became an abductive problem, i.e., at \({\mathbf {s_{1}}}\). This reflects the idea that a solution is a piece of information that would have allowed the agent to predict the surprising \(\chi\) before it was observed. With these definitions, the fact that an agent has an abductive problem \(\chi\) and that \(\eta\) is the chosen abductive solution can be reflected in the previous diagram in the following way.

(Figure e: the previous diagram, now indicating that \(\chi\) is not known at \({\mathbf {s_{1}}}\) but known at \({\mathbf {s_{2}}}\), that \(\eta \rightarrow \chi\) is known at \({\mathbf {s_{1}}}\), and that \(\eta\) is believed at \({\mathbf {s_{3}}}\).)

The dynamic epistemic approach makes it possible to look at features of the abductive process in a new light. For example, following Aliseda (2006), it is still possible to classify abductive problems, but now in terms of the agent’s attitude towards the surprising observation \(\chi\) before it was observed: the abductive problem \(\chi\) is said to be novel when the agent believed neither \(\chi\) nor \(\lnot \chi\) before observing it \(({\mathbf {s_1}} \Vdash \lnot {B}{\chi } \wedge \lnot {B}{\lnot \chi })\), and it is said to be anomalous when the agent believed \(\lnot \chi\) before observing \(\chi\) \(({\mathbf {s_1}} \Vdash {B}{\lnot \chi })\). It is even possible to say that \(\chi\) is an expected problem when the agent believed \(\chi\) before the observation \(({\mathbf {s_1}} \Vdash {B}{\chi })\). Of course, the reader might not call this case an abductive problem: the observation does not trigger any further epistemic action, working rather as a confirmation. Nevertheless, this case shows how the proposal allows such situations to be considered.

Similarly, abductive solutions can be classified, some of them in terms of the agent’s attitude towards the chosen explanation after the surprising observation (e.g., an explanation \(\eta\) is said to be consistent when the agent considers it epistemically possible after the surprising observation: \({\mathbf {s_2}} \Vdash \lnot {K}{\lnot \eta })\), and some others in terms of the effect that its acceptance has on the agent’s information (e.g., an explanation is said to be explanatory when accepting it does change the agent’s information or, in more technical terms, when there is no bisimulation between \({\mathbf {s_2}}\) and \({\mathbf {s_3}}\)).

More importantly, this approach yields a validity that describes the abductive reasoning process as one that takes knowledge and produces beliefs, thus making it in this sense the only one among those discussed in this paper that truly opens new possibilities. If both \(\chi\) and \(\eta\) are propositional formulae, then

$$\begin{aligned} \Vdash {K}{{\big (}\eta \rightarrow \chi {\big )}} {\rightarrow }[\chi !]\,{{\big (}{K}{\chi } \rightarrow \langle {\eta }\!\Uparrow \rangle \,{{B}{\eta }} {\big )}} \end{aligned}$$

stating that if an observation makes the agent know \(\chi\), then an abductive process will allow her to propose \(\eta\) as an abductive solution if she knew \(\eta \rightarrow \chi\) before the observation, and will make her believe \(\eta\) once she accepts it as the chosen solution.
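The validity can be traced end to end with the pieces of the running Python sketch (the model and all names are, again, only an illustration): the agent knows rain \(\rightarrow\) wet, the announcement of wet makes wet known (the abductive problem), and the upgrade with rain makes the rain-world most plausible (the accepted solution, now believed).

```python
# Stage s1: three worlds; it rains only in w1, the lawn is wet in w1 and
# w2 (in w2 for some other reason), and in w3 it is dry.
Ws = {'w1', 'w2', 'w3'}
Rs = set(product(Ws, Ws))
Vs = {'rain': {'w1'}, 'wet': {'w1', 'w2'}}
rain_s, wet = ('atom', 'rain'), ('atom', 'wet')
print(holds(Ws, Rs, Vs, 'w1', ('K', ('imp', rain_s, wet))))  # True:  K(rain -> wet)
print(holds(Ws, Rs, Vs, 'w1', ('K', wet)))                   # False: wet unknown at s1

# Stage s2: the surprising observation of wet discards w3.
Ws, Rs, Vs = announce(Ws, Rs, Vs, wet)
print(holds(Ws, Rs, Vs, 'w1', ('K', wet)))                   # True: an abductive problem

# Stage s3: accepting rain as the solution via a radical upgrade.
leqs = set(product(Ws, Ws))                # plausibility: initially indifferent
print(best(Ws, radical_upgrade(Ws, leqs, Vs, rain_s)))       # {'w1'}: rain now believed
```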

5 Logical Pluralism, Logical Dynamics and Epistemic Dynamics

By giving an epistemic interpretation to deductive, default and abductive reasoning, this paper has shown how these reasoning processes, typically studied from dissimilar perspectives, can be put under the same umbrella. This idea, placing together different forms of reasoning, is by no means new. Logical pluralism (Beall and Restall 2000, 2006) holds the view that there is not one true logic but rather many, and hence there is not always a single answer to the question “is this argument valid?”. This approach is often (but not exclusively) marked by the definition of new consequence relations \(|\!\!\sim\) satisfying non-classical structural rules of inference (the so-called substructural logics).

Another approach, one that shares more similarities with the general idea this paper proposes, is that of logical dynamics (van Benthem 1996), which suggests that “the main issue is not a variety of reasoning styles but rather the variety of informational tasks performed by intelligent interacting agents, [involving inference], observation, questions and answers, dialogue [and] general communication”. Indeed, instead of looking at alternative and novel consequence relations, logical dynamics and, in particular, epistemic dynamics work by enriching the basic language with modalities describing not only propositional attitudes but also the actions that affect them. Consider, for contrast, the general concept of non-monotonic reasoning (Kraus et al. 1990). As its name indicates, its key feature is that a previously concluded fact might be withdrawn after further information. Formal proposals for studying such forms of reasoning typically involve the definition of new consequence relations \(|\!\!\sim\) under which it is possible to have both \(\varphi _1, \ldots , \varphi _n |\!\!\sim \psi\) and \(\varphi _1, \ldots , \varphi _n, \varphi _{n+1} \not |\!\!\sim \psi\). However, as discussed in van Benthem (2008), another possibility is to look at enriched languages involving modalities for beliefs and belief revision, asking instead under which conditions formulas of the form \([{\varphi _{1}}\!\Uparrow ]\,{\cdots [{\varphi _{n}}\!\Uparrow ]\,{{B}{\psi }}}\) are valid, “thus ‘deconstructing’ [substructural phenomena] into classical logic plus an explicit account of the relevant [propositional attitudes and] informational events”.

The difference between these two approaches is similar to the conceptual difference between two approaches for representing uncertainty/ignorance. Multi-valued propositional logics work by allowing more than two truth values, with proposals ranging from three alternatives (true, false and unknown; Łukasiewicz 1920; Kleene 1938) to an infinite number of them (Klir and Yuan 1995; Adams 1998). Thus, it is possible to state the uncertainty about a given atomic proposition \(p\)’s truth value simply by assigning ‘unknown’ as its truth value. Another alternative, however, is to increase the language’s expressivity. This is the approach followed by epistemic logic (and, in general, by all modal logics), where, thanks to the modal operator \(K\), it is possible to express uncertainty about \(p\) by stating that neither the formula nor its negation is known, \(\lnot {K}{p} \wedge \lnot {K}{\lnot p}\), thus emphasising that there is no ambiguity about \(p\)’s truth value (at least in a classical world, \(p\) should be either true or false); it is rather that the involved agent does not have enough information to make a proper statement about it.

6 Summary

The main aim of this paper has been to describe deductive, default and abductive reasoning as different instances of the same phenomenon: epistemic dynamics. The discussion has proposed to understand deductive reasoning as an inference whose conclusion will be known when all the premises are known, its main characteristic being that it makes explicit what so far has been only implicit (Sect. 2). Default reasoning, in turn, has been described as a process that allows an inference to take place even when not all the premises are known, producing conclusions whose epistemic attachment is that of the least epistemically attached premise: an inference in which the implication and the antecedent are known will produce a known conclusion, but an inference in which the implication is just believed will produce only a believed conclusion (Sect. 3). Finally, abductive reasoning has been understood not as a single inference step but rather as a process that, in one of its simplest ‘deconstructions’, involves two epistemic actions: the observation that makes the agent know the abductive problem, and the revision that allows her to integrate the chosen explanation as part of her beliefs (Sect. 4). The proposal has been formalised within the dynamic epistemic logic framework, with the following validities describing the key ideas:

Deductive reasoning:

\({K_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} \to {\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\hookrightarrow }}\rangle \,{{K_{\mathrm{Ex }}}{\chi }} {\big )}\)

Default reasoning:

\({B_{\mathrm{Ex }}}{{\big (}\eta \rightarrow \chi {\big )}} \to {\big (}{K_{\mathrm{Ex }}}{\eta } \rightarrow \langle {\overset{\eta \rightarrow \chi }{\rightharpoonup }}\rangle \,{{B_{\mathrm{Ex }}}{\chi }} {\big )}\)

Abductive reasoning:

\({K}{{\big (}\eta \rightarrow \chi {\big )}} \to [\chi !]\,{{\big (}{K}{\chi } \rightarrow \langle {\eta }\!\Uparrow \rangle \,{{B}{\eta }} {\big )}}\)

Describing different reasoning processes from the same perspective highlights their relationship, thus allowing us to understand how they interact. One can imagine a situation in which an agent who knows that Chilly Willy is a bird uses the known implication “all birds have feathers” to get to know that Chilly Willy has feathers. Then, feeling adventurous, she can also use default reasoning with the believed implication “all birds fly” to come to believe that Chilly Willy flies, without forgetting that it might not. Finally, if she finds out that Chilly Willy does not fly, she can use abductive reasoning with the known fact “penguins do not fly” to come to believe that Chilly Willy is a penguin.

This dynamic epistemic analysis can be extended to other reasoning processes, such as inductive reasoning, understood as a progression from individual instances to broader generalisations; to broader interpretations of abduction (e.g., allowing changes in the underlying logical consequence relation, Soler-Toscano et al. 2010, or the creation and/or modification of concepts, Quilici-Gonzalez and Haselager 2005); or to those formalising common sense assumptions (e.g., that of things being as expected unless otherwise specified: the circumscription of McCarthy 1980). A broader epistemic approach might even involve the concept of learning (Kelly 1996), which plays a key role in human information dynamics and has also been studied from a dynamic epistemic perspective (e.g., Gierasimczuk 2009). All in all, the current proposal is an alternative that, we believe, can help to shed light on the relationship between different forms of reasoning, and thus on the foundations and methods of science.