1 Introduction

Introducing their “reason-based theory of choice”, Dietrich and List (2013) observed that although the philosophical literature has amply illustrated the usefulness of the concepts of reasons and arguments for thinking through actions and decisions, decision theory strives to account for the latter exclusively in terms of preferences and beliefs. Despite Dietrich and List’s (2013, 2016) efforts, the gap between philosophical and choice-theoretic approaches remains large.

This gap echoes a classical dichotomy in “moral sciences” between, on the one hand, first-person justifications of one’s acts in terms of reasons and arguments structuring these reasons, and on the other hand, third-person representations in terms of beliefs and preferences (Hausman 2011). By neglecting reason-based and other argumentative accounts, decision theory tends to devalue decision-makers’ understanding of their own actions.

This gap has tended to insulate decision theory from important philosophical debates in the past 30–40 years. Among the most influential approaches in these debates, Scanlon (2000) highlighted the links between reasons, justification and moral notions such as fairness and responsibility, Habermas’ (1981) “theory of communicative action” articulated the importance of justification and argumentation as distinctive features of rational action, and Rawls (2005) launched the debates on the “acceptability” (Estlund 2009) of reasons and arguments for public justification.

This gap also has important practical implications for decision analysis, because it complicates the analysts’ task of explaining the recommendations they give to their clients. This, in turn, casts doubt on these recommendations, which appear to be imposed on, rather than endorsed by, decision-makers.

In this article, we aim to help unlock this situation by elaborating a framework designed to allow decision analysts to provide recommendations that decision-makers truly endorse, in empirical reality.

For that purpose, we introduce, as a commendable basis for recommendation, the “deliberated judgments” of the decision-maker. Roughly stated, these “deliberated judgments” represent the propositions that the decision-maker will consider to be well grounded, if he duly takes into account all the relevant arguments. This concept is inspired by Goodman’s (1983) and Rawls’ (1999) notion of reflective equilibrium. It also owes much to Roy’s (1996) view that an important part of the decision support interaction consists, for the analyst, in ensuring that the aided individual understands and accepts the reasoning on which the prescription is based.

This article is organized as follows. In Sect. 2, we define our core concepts, including the central concept of deliberated judgments. In Sect. 3, we then explore the issue of how empirical data come into play and are involved in the validation of models. This illustrates the empirical aspect of our framework, which distinguishes it from standard prescriptive approaches. Obviously enough, at this stage, the pivotal issue is to determine how one can say anything about “deliberated judgments”, given that, for any non-trivial decision, the potentially relevant arguments are infinitely numerous. Lastly, Sect. 4 discusses the significance of our approach for the practice of decision analysis and outlines future empirical applications.

2 Core concepts and notations

In this section, we start by presenting the general setting of our approach, including our understanding of arguments and of the topic on which the individual aims to take a stance. We then introduce our formalization of argumentative disposition, capturing an individual’s attitude towards arguments. This eventually allows us to present our notion of “deliberated judgment”.

2.1 General setting

Our approach starts from and is largely structured by the point of view of decision analysis. We accordingly assume that a decision situation has been identified: we admit that there is an individual i who requests decision support to answer questions such as: “is action a better than action b?”, or “which beliefs should I have about such or such matter?”. We consider that a topic T—a set of propositions on which the decision analysis process aims to lead the decision-maker to take a stance—is defined.Footnote 1 We do not formally define propositions and simply understand the notion in its ordinary sense. For example, a proposition can be a claim spelled out in a text in a natural language, such as the claim that action a is the most appropriate action for i in a given decision situation.

We also consider arguments that can be used by i to make up her mind about propositions in T. Here we understand the notion of argument in a broad sense: anything that can be used to support a proposition, or undermine the effectiveness of such a support, is an argument. In the latter case, we talk about a counter-argument. Arguments as we understand them can encompass a huge diversity, ranging from very basic arguments that can be stated in a couple of words, to intricate arguments embedding numerous sub-arguments associated with one another in complex ways.

Let us then define the set \(S^*\) that contains all the arguments that one uses when trying to make up one’s mind about T. \(S^*\) can be understood in a “pragmatic” sense, as the set of all the arguments available around the temporal window of the decision process. It can also be understood in an “idealistic” sense, as the set of all the arguments that can possibly be raised, including those that humankind has not yet discovered.Footnote 2

Observe that under both interpretations, in all decision situations but the most trivial, it will be untenable to assume that the analyst knows all of \(S^*\): the analyst will only know a strict subset \(S\subset S^*\), containing the arguments that she has been able to gather.Footnote 3 An important part of our work in this article will be to identify conditions that allow one to draw conclusions about \(S^*\) despite the fact that no one ever knows more than a strict subset of \(S^*\).

Example 1

(Ranking) Let us simply illustrate the content of the concepts introduced so far. Let \({{\mathcal {A}}}\) be a set of alternatives that i is interested in ranking. For all \(a_1 \ne a_2 \in {{\mathcal {A}}}\), define \(t_{a_1 \succ a_2}\) as the sentence: “\(a_1\) ought to be ranked above \(a_2\)”, and \(t_{a_1 \sim a_2}\) as “\(a_1\) ought to be ranked ex-æquo with \(a_2\)”. Define \(T = \bigcup _{a_1 \ne a_2 \in {{\mathcal {A}}}} \{t_{a_1 \succ a_2}, t_{a_1 \sim a_2}\}\) as the set of all such sentences. The topic T represents the propositions on which i is interested in making up her mind. Define \(S^*\) as the set of all strings corresponding to sentences in English. This set contains formulations of all the arguments that people can think about and use to make up their mind about the topic, and much more. An example of an argument is \(s= \) “Alternative \(a_1\) ought to be ranked above \(a_2\) because \(a_1\) is better than \(a_2\) on every criterion relevant to this problem”. \(\triangle \)
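To fix intuitions, the construction of T in this example can be rendered with elementary finite structures. The following sketch (the alternative names and the sample argument set S are our own illustration, not part of the example) builds the topic for three alternatives:

```python
from itertools import permutations

# Toy rendering of Example 1 (alternative names are invented): build the
# topic T of pairwise ranking claims, as plain strings standing for
# propositions ("x>y" for strict ranking, "x~y" for a tie).
alternatives = ["a1", "a2", "a3"]

T = {f"{x}>{y}" for x, y in permutations(alternatives, 2)}
T |= {f"{x}~{y}" for x, y in permutations(alternatives, 2)}

# S* would be the set of all English sentences; an analyst only ever
# holds a small finite sample S of it, such as:
S = {
    "a1 ought to be ranked above a2 because a1 is better than a2 "
    "on every criterion relevant to this problem",
}

print(len(T))  # 6 strict claims plus 6 tie claims
```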

Our aim in the remainder of this section is to define formally i’s perspective towards the topic after he has considered all the arguments that are possibly relevant to the situation. We term this: i’s Deliberated Judgment (DJ).

2.2 Argumentative disposition

To define i’s DJ, we need to capture i’s attitude towards arguments. Importantly, we also need to capture the fact that i may change her opinion about arguments and their relative strengths. She can change her mind because of reasons independent of her endeavor to tackle the problem she addresses, for example depending on her mood. More interestingly, i will possibly change her mind when confronted with new arguments. For example, imagine that i has heard about two arguments, \(s_1\) and \(s_2\), and she thinks that \(s_2\) turns \(s_1\) into an ineffective argument. But then she comes to realize that \(s_2\) is in turn rendered ineffective by a third argument, \(s_3\). After having thought about \(s_3\), it might be that i no longer considers that \(s_2\) undermines \(s_1\).

Note that for simplicity’s sake, we say that an argument becomes ineffective (because of another argument) to mean that it becomes ineffective in its ability to support some proposition or to render other arguments ineffective.

Let us introduce our formalism to account for such situations.Footnote 4

Let us start by defining a set of possible perspectives P that i can have towards the topic T. A perspective \(p \in P\) captures all the elements determining how i would react to arguments in \(S^*\). In p, i has a specific set of arguments in mind, which can partly determine his reaction to other arguments in \(S^*\). But other elements can come into play, such as (to come back to our example above) his mood.

If the decision analyst provides i with a new argument \(s\), this might lead i to switch from p to another perspective \(p'\) integrating both \(s\) and the arguments that i had in mind in p, and possibly other arguments that i might have been led to construct when trying to make up his mind about \(s\) and its implications. i’s perspective can also change over time, because he forgets some arguments.

We forcefully emphasize that we do not claim to be able to provide a complete account of all the elements encapsulated in this notion of perspective. In fact, our approach does not even require anyone to believe that it is possible to capture the content of perspectives, or more generally to directly measure details about i’s internal states of mind. The notion of perspective merely serves as an abstract device that grounds the idea that i may have changing attitudes towards some pairs of arguments.

Based on these notions, given T and \(S^*\), define i’s argumentative disposition towards T as the triple \((\mathbin {\vdash }, \mathbin {\vartriangleright _\exists }, \mathbin {\ntriangleright _\exists })\). These three relations, described here below, constitute the formal primitives of our concept of argumentative disposition.

\(\mathbin {\vdash }\):

is a relation from \(S^*\) to T. An argument \(s\) supports a proposition t, denoted by \(s \mathbin {\vdash } t\), iff i considers that \(s\) is an argument in favor of t. We emphasize that this definition should be understood in a conditional sense: \(s \mathbin {\vdash } t\) means that i considers that, if \(s\) holds in her eyes, then she should endorse t, but this does not say anything about whether she thinks that \(s\) holds. An argument \(s\) may support several propositions in i’s view, or none.

\(\mathbin {\vartriangleright _\exists }\):

is a binary relation over \(S^*\) representing whether i considers that a given argument trumps another one in some perspective. Let \(s_1, s_2 \in S^*\) be two arguments. We note \(s_2 \mathbin {\vartriangleright _\exists }s_1\) (\(s_2\) trumps \(s_1\)) iff there is at least one perspective within which i considers that \(s_2\) turns \(s_1\) into an ineffective argument.Footnote 5 Let us emphasize that we are concerned with how i sees \(s_2\) and \(s_1\), not with whether \(s_2\) should be considered a good argument to trump \(s_1\) by any independent standard.

\(\mathbin {\ntriangleright _\exists }\):

is a binary relation over \(S^*\) defined in a similar way: \(s_2 \mathbin {\ntriangleright _\exists }s_1\) iff there is at least one perspective within which i does not consider that \(s_2\) turns \(s_1\) into an ineffective argument.

We assume that \(\mathbin {\vartriangleright _\exists } \cup \mathbin {\ntriangleright _\exists } = S^*\times S^*\): for every pair of arguments, at least one of the two relations holds.

We consider that it is possible to query i about the trump relation between two arguments, and thus obtain information about \(\mathbin {\vartriangleright _\exists }\) and \(\mathbin {\ntriangleright _\exists }\), to the following limited extent: i may be presented with two arguments, \(s_1\) and \(s_2\), and asked whether he thinks that \(s_2\) trumps \(s_1\), or \(s_1\) trumps \(s_2\), or neither. In any case, we consider that i answers from the perspective he is currently in (to which we have no other access than through this query). Thus, if i answers that \(s_2\) trumps \(s_1\), we know that \(s_2 \mathbin {\vartriangleright _\exists }s_1\). Indeed, in such a case we know that there is at least one perspective within which he thinks that \(s_2\) trumps \(s_1\): namely, the perspective that he currently has. Conversely, if i answers that \(s_2\) does not trump \(s_1\), we know that \(s_2 \mathbin {\ntriangleright _\exists }s_1\).Footnote 6

Remark 1

Whereas the two relations \(\mathbin {\vartriangleright _\exists }\) and \(\mathbin {\ntriangleright _\exists }\) allow us to capture i’s changes of mind about whether a given argument can undermine another argument, the simple support relation adopted here does not permit capturing changes of mind about whether a given argument supports a given proposition. We assume that, in practice, when implementing our approach, propositions will be sufficiently simple and clear to make it safe to assume that i will not change her mind concerning support during the decision process. This is a point to which the analyst will have to pay attention when applying our approach. If it appears, in real-life implementations, that this assumption is ill-advised, the framework will have to be extended by applying the approach used for \(\mathbin {\vartriangleright _\exists }\) to the support relation (this would not raise any specific difficulty). For the time being, in the absence of empirical reasons to believe that the added generality is needed, we choose to use a single support relation for simplicity.\(\triangle \)

Example 2

(Ranking (cont.)) Consider a set of criteria J. Consider the argument \(s_b = \) “Alternative \(a_1\) ought to be ranked above \(a_2\) because \(a_1\) is better than \(a_2\) on three criteria while \(a_2\) is better than \(a_1\) on only one criterion”, and \(s_c = \) “It does not make sense to treat all criteria equally in this problem”. Then (depending on i’s disposition), it might hold that \(s_b \mathbin {\vdash } t_{a_1 \succ a_2}\), and it might hold that \(s_c \mathbin {\vartriangleright _\exists }s_b\). Note that both may very well hold together. \(\triangle \)

Definition 1

(Decision situation) We denote a decision situation by the tuple \((T, S^*, \mathbin {\vdash }, \mathbin {\vartriangleright _\exists }, \mathbin {\ntriangleright _\exists })\), with \(\mathbin {\vdash }\), \(\mathbin {\vartriangleright _\exists }\) and \(\mathbin {\ntriangleright _\exists }\) defined as above.

The part of i’s argumentative disposition that remains stable as i changes perspectives is of distinctive interest for decision analysis purposes. Indeed, recall that the emergence of new arguments may lead i to switch perspective. The stable part of her argumentative disposition is, therefore, a stance that proves resistant to the emergence of new arguments and is, in this sense, argumentatively well grounded from i’s point of view.

Let us, therefore, define the corresponding stable relations: \(\mathbin {\vartriangleright _\forall }\) is defined as \(s_2 \mathbin {\vartriangleright _\forall }s_1 \Leftrightarrow \lnot (s_2 \mathbin {\ntriangleright _\exists }s_1)\). In plain words, \(s_2 \mathbin {\vartriangleright _\forall }s_1\) if and only if there is no perspective within which \(s_2\) does not trump \(s_1\), or equivalently, \(s_2 \mathbin {\vartriangleright _\forall }s_1\) if and only if \(s_2\) trumps \(s_1\) in all perspectives. Relatedly, \(s_2 \mathbin {\ntriangleright _\forall }s_1\) is defined as: \(\lnot (s_2 \mathbin {\vartriangleright _\exists }s_1)\). Hence, \(s_2 \mathbin {\ntriangleright _\forall }s_1\) indicates that \(s_2\) never trumps \(s_1\). This implies, but is not equivalent to, \(\lnot (s_2 \mathbin {\vartriangleright _\forall }s_1)\).

Example 3

(Ranking (cont.)) Consider alternatives \(a_1\) and \(a_2\) such that \(a_1\) Pareto-dominates \(a_2\) on criteria J. Define \(s_d\) as an argument that states that \(a_1\) ought to be ranked above \(a_2\) because of the Pareto-dominance situation considering criteria in J. Then, it might hold that \(s_d \mathbin {\vdash } t_{a_1 \succ a_2}\). Define \(s_{f}\) as “this is an incorrect reasoning because an important aspect to be considered in the problem is fairness and \(a_1\) is worse than \(a_2\) in this respect”. Then it might be that \(s_{f} \mathbin {\vartriangleright _\forall }s_d\) (assuming that i indeed considers fairness as important and that J does not include fairness). If i later changes her mind about the importance of fairness, then it will not hold that \(s_{f} \mathbin {\vartriangleright _\forall }s_d\). \(\triangle \)

This enables us to define a decisive argument as one that is never trumped by any argument in \(S^*\).

Definition 2

(Decisive argument) Given a decision situation \((T, S^*, \mathbin {\vdash }, \mathbin {\vartriangleright _\exists }, \mathbin {\ntriangleright _\exists })\), we say that an argument \(s\in S^*\) is decisive iff \(\forall s' \in S^*\): \(s' \mathbin {\ntriangleright _\forall }s\).

Notice that decisive arguments can be of very different sorts. Some are so simple and straightforward that i will accept them whatever the perspective. By contrast, others are very elaborate, taking many aspects of the topic into account and anticipating all sorts of arguments that could trump them, and are accordingly never trumped by any other argument.

Example 4

(Weather forecast) Assume that individual i holds that t = “it will rain tomorrow” is supported by the argument \(s_1\) = “one can expect that it will rain tomorrow because weather forecast predicts so”. (See Fig. 1.) But imagine that i also holds, at least from some perspective, that \(s_2\) = “weather forecast is unreliable to infer what the weather will be like tomorrow because weather forecast is often wrong” is a counter-argument that trumps \(s_1\). Imagine further that i would accept that an argument \(s_3\) = “although it is often wrong, weather forecast is reliable because it is more often right than wrong” trumps \(s_2\). Imagine, finally, that no argument trumps \(s_3\) from any perspective.

In such a case, for i, \(s_1\) is not a decisive argument. However, one can elaborate a more complex argument \(s\) = “weather forecast predicts that it will rain tomorrow. This may be an incorrect prediction, but weather forecast is more often right than wrong, thus its predictions constitute a sufficient basis to think that it will rain tomorrow”. Notice that \(s\) includes the reasoning given by \(s_1\) and \(s_3\). Since \(s\) anticipates that \(s_2\) could be envisaged to trump it, \(s\) could be decisive in supporting t (as assumed in Fig. 1). \(\triangle \)

Fig. 1 Illustration for Examples 4 and 5. The symbol under \(s_3\) and \(s\) indicates a decisive argument
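The relations of this subsection can be made concrete on a finite toy model. The following sketch assumes a finite set of perspectives with an explicit trump relation for each (all contents are invented for illustration, echoing the weather-forecast arguments of Example 4 above); the existential and universal trump relations, and decisiveness, then fall out by quantifying over perspectives.

```python
# Finite toy model of perspectives (all contents invented), echoing the
# weather-forecast arguments s1, s2, s3 of Example 4. Each perspective
# maps to the set of trump pairs ("a trumps b" stored as (a, b)) that i
# endorses from that perspective.
perspectives = {
    "p1": {("s2", "s1"), ("s3", "s2")},  # i takes s2 to defeat s1 here
    "p2": {("s3", "s2")},                # ...but not from this perspective
}
arguments = {"s1", "s2", "s3"}

def trumps_exists(a, b):
    """a trumps b in at least one perspective (existential trump)."""
    return any((a, b) in ps for ps in perspectives.values())

def trumps_always(a, b):
    """a trumps b in every perspective (universal trump)."""
    return all((a, b) in ps for ps in perspectives.values())

def decisive(s):
    """Definition 2: s is decisive iff no argument ever trumps it."""
    return not any(trumps_exists(s2, s) for s2 in arguments)

print(decisive("s3"), decisive("s1"))
```

Note how s2's effect on s1 is perspective-dependent, while s3's effect on s2 is stable: this is exactly the distinction between the existential and universal trump relations.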

2.3 Deliberated judgment

Given a decision situation, we are now in a position to characterize i’s stance towards the propositions in T once he has considered all the relevant arguments. We say that a proposition is justifiable if it is supported by a decisive argument. A proposition is said to be untenable when each argument supporting it is always trumped by a decisive argument.

Definition 3

(Justifiable and untenable propositions) Given a decision situation, a proposition t is:

  • justifiable iff \(\exists s\in S^*\), with \(s\) decisive, such that \(s \mathbin {\vdash } t\);

  • untenable iff \(\forall s\in S^*\) such that \(s \mathbin {\vdash } t\), \(\exists s' \in S^*\), with \(s'\) decisive, such that \(s' \mathbin {\vartriangleright _\forall }s\).

Three important aspects of this definition are worth emphasizing.

First, we use modal terms to name these notions: we talk about “justifiable” rather than “justified” propositions. This is because, at a given point in time, individual i might well fail to accept, as a matter of brute empirical fact, a proposition supported by a decisive argument, for example because she does not know this argument. Similarly, she might accept an untenable proposition. All this is despite the fact that the decisive arguments referred to in the definitions of justifiable and untenable propositions are decisive according to i’s argumentative disposition—that is, by i’s own standards.

Second, notice that, according to our definition, a proposition cannot be both justifiable and untenable, but it may be neither justifiable nor untenable. This may be the case if all the arguments supporting t have counter-arguments, but at least one argument supporting t has no decisive counter-argument.

Lastly, according to our definition, it is possible for a proposition t to be justifiable and for not-t, or more generally any proposition \(t'\) in logical contradiction with t or empirically incompatible with t, to be justifiable too. This definition allows us to encompass situations in which there is intrinsically no more reason to accept t than \(t'\). This can happen even when it is clear and evident for i that t and \(t'\) are incompatible, and even in situations where this incompatibility between t and \(t'\) is highlighted in some argument examined during the decision process.Footnote 7 This is a consequence of our definition of the trump relation, and it reflects the important idea that, as a matter of fact, in some decision situations, even if one takes all the relevant arguments into account, it can happen that several mutually incompatible propositions are equally supported. It is part of the very aim of decision-aid, in such situations, to unveil the fact that mutually incompatible propositions are equally supported.Footnote 8

Decision situations in which every proposition in the topic can be unambiguously classified as justifiable or untenable are of distinctive interest. Let us term such decision situations “clear-cut”.

Definition 4

(Clear-cut situation) A decision situation is clear-cut iff each proposition in T is either justifiable or untenable.

Given a decision situation, we can now define i’s DJ as those propositions \(t \in T\) that are justifiable.

Definition 5

(DJ of i) The Deliberated Judgment corresponding to a decision situation is: \(T_i = \{t \in T \mid t \text { is justifiable}\}\).

This notion of DJ, as we define it, captures what we take to be an important idea underlying Goodman’s (1983) and Rawls’ (1999) concept of “reflective equilibrium”. This idea is that, if i manages, through an iterative process of revision of her opinion through the integration of new elements or arguments, to reach an “equilibrium” which is stable with respect to the integration of new elements, then the opinion reached at “equilibrium” is of distinctive interest—it captures i’s “well-considered” or “true” opinion in some sense.Footnote 9

Notice that the meaning of this definition depends on the interpretation given to \(S^*\) (see the beginning of Sect. 2). In the idealistic interpretation, i’s DJ is unique and fixed once and for all. In the pragmatic interpretation, i’s DJ may evolve over time, as new arguments emerge.

Example 5

(Weather forecast (cont.)) To explain this definition clearly, it is useful to come back to our previous example (Fig. 1) of individual i who holds that “weather forecast is often wrong” (\(s_2\)) is a counter-argument that trumps “it will rain tomorrow because weather forecast predicts so” (\(s_1\)). We have seen that a more complex argument (\(s\)), including both “weather forecast predicts that it will rain tomorrow” and an additional sub-argument that trumps \(s_2\), can turn out to be a decisive argument to support “it will rain tomorrow” (t). In such a case, t belongs to i’s Deliberated Judgment, despite the fact that he might claim otherwise if not confronted with the complex argument above. \(\triangle \)

Example 6

(Weather forecast (variant)) In this example T contains two propositions: \(t_1\) is the proposition according to which it will rain tomorrow, and \(t_2\) is the contrary proposition. Two corresponding arguments are \(s_1\) and \(s_2\)—two weather forecasts from different sources that predict, respectively, that it will rain and that it will not. Assuming that i attributes equal credibility to both sources and considers no other argument to be relevant, he might end up with both \(t_1\) and \(t_2\) in his deliberated judgment. This should not be interpreted as meaning that i is incoherent, but simply as a situation where different propositions are equally justified for lack of means to tell them apart. Similarly, scientists can consider two contradictory hypotheses plausible, for lack of current knowledge; or someone may hold that two incompatible acts are equally (im)moral. \(\triangle \)
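The definitions of this section lend themselves to a mechanical check on finite samples. The following sketch (all relation contents are invented, loosely mirroring the weather-forecast example) computes justifiability, untenability and the resulting DJ, under the simplifying assumption that the listed pairs exhaust the relevant relations:

```python
# Finite toy model: s1 and s support t; s2 trumps s1 in some perspective;
# s3 always trumps s2; nothing ever trumps s3 or s.
arguments = {"s1", "s2", "s3", "s"}
propositions = {"t"}
supports = {("s1", "t"), ("s", "t")}
trumps_exists = {("s2", "s1"), ("s3", "s2")}   # existential trump (sample)
trumps_always = {("s3", "s2")}                 # universal trump (sample)

def decisive(a):
    # Definition 2: no argument ever trumps a.
    return all((b, a) not in trumps_exists for b in arguments)

def justifiable(t):
    # Definition 3: t is supported by some decisive argument.
    return any((a, t) in supports and decisive(a) for a in arguments)

def untenable(t):
    # Definition 3: every supporter of t is always trumped by a
    # decisive argument (vacuously true if t has no supporter).
    return all(
        any(decisive(b) and (b, a) in trumps_always for b in arguments)
        for a in arguments if (a, t) in supports
    )

DJ = {t for t in propositions if justifiable(t)}   # Definition 5
print(DJ)
```

In this toy situation the complex argument s is decisive, so t is justifiable and belongs to the DJ even though its simpler supporter s1 can be trumped.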

3 Issues of empirical validation

The previous section clarified definitions and explained the connections between the key concepts of our framework at a rather abstract level. We now want to investigate how this framework can be confronted with empirical reality. For that purpose, we will examine how one can test a model of the support and trump relations built by a decision analyst trying to capture the deliberated judgment of a decision-maker.

Let us define a model \(\eta \) of a decision situation as a pair of relations \(\mathbin {\vdash _{\eta }}\subseteq S^*\times T\) and \(\mathbin {\vartriangleright _{\eta }}\subseteq S^*\times S^*\). These relations are not necessarily an approximation of the real relations characterizing i. Indeed, the chief aim of the model is to identify i’s DJ, not to reflect in detail what i thinks about all arguments, which would arguably not be achievable (we will come back to this important point below).

Define \(T_\eta \) as the set of propositions that the model \(\eta \) claims are supported: \(T_\eta = \{t \in T \mid \exists s\in S^*: s \mathbin {\vdash _{\eta }} t\}\).

Example 7

(Ranking (cont.)) We have already defined a set of alternatives \({{\mathcal {A}}}\), propositions T representing possible comparisons of the alternatives, and criteria J. Consider further a set of criteria functions \((g_j)_{j \in J}\) evaluating all the alternatives \(a\in {{\mathcal {A}}}\) using real numbers: \(g_j: {{\mathcal {A}}}\rightarrow {\mathbb {R}}\).

Imagine that i’s problem is to decide which kind of vegetable to grow in his backyard. Assume an analyst providing decision-aid to i considers that the problem can be reduced to a ranking between three candidates: carrots, lettuce and pumpkins, denoted by \(c, l, p \in {{\mathcal {A}}}\). The analyst believes that i is ready to rank vegetables according to exactly two criteria. The analyst has obtained six real numbers \(g_j(a)\), representing the performance of each alternative on each criterion, and believes that i is ready to rank vegetables according to the sum of their performances on the two criteria, \(v(a) = g_1(a) + g_2(a)\).

The analyst can now try to represent i’s attitude using a model by producing sentences that explain to i the “reasoning” underlying the definition of v. Assume the values given by v position carrots as winners. The analyst could define an argument \(s_{(c, l)}\) “carrots are a better choice than lettuce because carrots score \(g_1(c)\) on criterion one, and \(g_2(c)\) on criterion two, which gives it a value \(v(c)\), whereas lettuce scores \(g_1(l)\) on criterion one, and \(g_2(l)\) on criterion two, which gives it an inferior value \(v(l)\)”. In the model of the analyst, this argument supports the proposition that carrots are ranked higher than lettuce: \(s_{(c, l)} \mathbin {\vdash _{\eta }} t_{c \succ l}\). The model contains similar arguments in favor of other propositions \(t \in T\) that agree with the values given by v. In our example, the analyst furthermore believes that no counter-arguments are necessary and thus defines \(\mathbin {\vartriangleright _{\eta }}= \emptyset \). \(\triangle \)
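Under the stated assumptions, the analyst’s model in this example can be generated programmatically. The sketch below (the performance numbers are our own invention) builds the model’s support relation and the set of propositions the model claims are supported:

```python
# Toy rendering of Example 7 (the performance numbers are invented).
# Performances of carrots, lettuce and pumpkins on two criteria:
g = {"c": (7.0, 5.0), "l": (6.0, 4.0), "p": (3.0, 6.0)}
v = {a: g1 + g2 for a, (g1, g2) in g.items()}  # additive value model

# Build the model's support relation: one explanatory argument per
# pairwise comparison agreeing with v ("a>b" stands for the proposition
# that a ought to be ranked above b). The model uses no counter-arguments.
supports_eta = set()
for a in v:
    for b in v:
        if a != b and v[a] > v[b]:
            arg = (f"{a} is a better choice than {b} because it scores "
                   f"{v[a]} overall versus {v[b]}")
            supports_eta.add((arg, f"{a}>{b}"))
trumps_eta = set()

# T_eta: the propositions the model claims are supported.
T_eta = {t for (_, t) in supports_eta}
print(sorted(T_eta))
```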

3.1 Validity and the problem of observability

Since the point of carving out \(\eta \) is to capture i’s Deliberated Judgment \(T_i\), we can define a valid model as one that correctly captures \(T_i\).

Definition 6

(Validity) A model \(\eta \) is valid iff \(T_\eta =T_i\).

How can the analyst determine if a given model \(\eta \) is a valid one?

Let us assume that the only information he can use for that purpose is what he can obtain by querying i—and is, in that sense, “observable” for him. DJs are not observable in that sense. Indeed, i’s DJ is defined in terms of \(\mathbin {\ntriangleright _\forall }\). But observing \(\mathbin {\ntriangleright _\forall }\) would require i to take successively all the possible perspectives she can have, which is unrealistic.Footnote 10

In the remainder of this section, we explain how we handle this conundrum in two steps. First, Sect. 3.2 introduces a provisional solution, by identifying conditions that guarantee the existence of a model allowing one to identify i’s DJ on the basis of what we will call an “operational” validity criterion—that is, a criterion based on observable data. Then, Sect. 3.3 explores how these conditions can be weakened.

3.2 Existence of a valid model and its conditions

In this subsection, we introduce apparently reasonable conditions about the way i reasons and about the decision situation. Our theorem will then guarantee that a model exists and correctly captures i’s DJ if those conditions are satisfied on \(S^*\) and if the model satisfies a validity criterion that, as opposed to validity itself, can be directly checked on the basis of observable data (an “operational validity” criterion).

3.2.1 Conditions

A first condition, about \(\mathbin {\vartriangleright _\exists }\), mandates a certain form of stability. It assumes that i may change her mind about whether an argument \(s'\) trumps another one only when there exists another argument that trumps \(s'\).

Condition 1

(Answerability) A decision situation satisfies Answerability iff, for all pairs of arguments \((s, s')\):

$$\begin{aligned} (s' \mathbin {\vartriangleright _\exists }s \wedge s' \mathbin {\ntriangleright _\exists }s) \Rightarrow \exists s'' \in S^*: s'' \mathbin {\vartriangleright _\exists }s'. \end{aligned}$$

Let us now turn to the second condition. It has to do with the way i reasons. Imagine that i finds himself in the following uneasy situation. He declares that \(s_1\) is trumped by \(s_2\). However, i is also ready to declare that \(s_2\) is in turn trumped by \(s_3\), a decisive argument. In such a situation, it seems natural enough to assume that, if we carve out an argument \(s\), playing the same argumentative role as \(s_1\), but anticipating and defeating attempts to trump it using \(s_2\), i will endorse \(s\).

This assumption is formalized by the condition Closed under reinstatement below. To write it down, we first need to formalize, thanks to the following notion of replacement, the idea that a set of arguments is at least as powerful as another argument, from the point of view of its argumentative role. We say a set of arguments \(S\subseteq S^*\) replaces an argument \(s\in S^*\) whenever all the arguments trumped by \(s\) are also trumped by some argument \(s' \in S\), and all the propositions supported by \(s\) are also supported by some argument \(s' \in S\).Footnote 11

Definition 7

(Replacing arguments) A set of arguments \(S\subseteq S^*\) replaces \(s\in S^*\) iff \(\mathbin {\vartriangleright _\exists }(s) \subseteq \mathbin {\vartriangleright _\exists }(S)\) and \(\mathbin {\vdash }(s) \subseteq \mathbin {\vdash }(S)\), writing \(\mathbin {\vartriangleright _\exists }(S)\) for the set of arguments trumped by some member of S and \(\mathbin {\vdash }(S)\) for the set of propositions supported by some member of S. We say that \(s'\) replaces \(s\), with \(s, s' \in S^*\), to mean that \(\{s'\}\) replaces \(s\).

Condition 2

(Closed under reinstatement) A decision situation is closed under reinstatement iff, \(\forall s_1 \ne s_2 \ne s_3 \ne s_1 \in S^*\) such that \(s_2 \mathbin {\vartriangleright _\exists }s_1\) and \(s_3 \mathbin {\vartriangleright _\forall }s_2\), with \(s_3\) decisive:

$$\begin{aligned} \exists s\;|\;s\text { replaces } s_1 \text { and } {\mathbin {\vartriangleright _\exists ^{-1}}}(s) \subseteq \mathbin {\vartriangleright _\exists ^{-1}}(s_1) {\setminus } \{s_2\}. \end{aligned}$$

The condition mandates that, whenever some decisive argument always trumps \(s_2\), which in turn trumps \(s_1\), it is possible to replace \(s_1\) by an argument that is no longer trumped by \(s_2\) and is not trumped by any argument other than those trumping \(s_1\).Footnote 12

Finally, we introduce two conditions on the size of the relation \(\mathbin {\vartriangleright _\exists }\).

Let us call a chain of length k in \(\mathbin {\vartriangleright _\exists }\) a finite sequence \(s_i\) of arguments in \(S^*\), \(1 \le i \le k\), such that \(s_{i+1} \mathbin {\vartriangleright _\exists }s_i\) for \(1 \le i \le k - 1\). An infinite chain is an infinite sequence \(s_i\) such that \(s_{i+1} \mathbin {\vartriangleright _\exists }s_i\) for all \(i \in {\mathbb {N}}\).

Condition 3

(Bounded width) A decision situation has a bounded width iff there is no argument that is trumped by an infinite number of counter-arguments.

Condition 4

(Bounded length) A decision situation has a bounded length iff there is no infinite chain in \(\mathbin {\vartriangleright _\exists }\). (Cycles in \(\mathbin {\vartriangleright _\exists }\) are, therefore, excluded as well.)
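When the analyst works with a finite sample of arguments, an infinite chain in the existential trump relation can only arise from a cycle, so Bounded length can be tested on the sample by a standard cycle search. A sketch (the relation contents are invented):

```python
# Cycle detection over a finite sample of the existential trump relation.
# Pairs (a, b) mean "a trumps b in some perspective"; contents invented.
trumps_exists = {("s2", "s1"), ("s3", "s2"), ("s4", "s3")}

def has_cycle(pairs):
    """Depth-first search for a cycle in the directed graph of trump pairs."""
    graph = {}
    for a, b in pairs:
        graph.setdefault(a, set()).add(b)
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / done
    color = {}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, ()):
            c = color.get(nxt, WHITE)
            if c == GRAY or (c == WHITE and dfs(nxt)):
                return True   # back edge: a cycle exists
        color[node] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and dfs(n) for n in graph)

print(has_cycle(trumps_exists))
```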

3.2.2 Operational validity criterion

Let us now define the following “operational” validity criterion for a model \(\eta \) intended to capture i’s DJ. We term it “operational” to emphasize that, as opposed to the definition of validity (Definition 6), it can be checked on the sole basis of observable data.

Definition 8

(Operational validity criterion) A model \(\eta \) of a decision situation is operationally valid iff, whenever \(s \mathbin {\vdash _{\eta }} t\), it holds that \(s \mathbin {\vdash } t\) and, for every \(s_c\) such that \(s_c \mathbin {\vartriangleright _\exists }s\), there exists \(s'\) such that \(s' \mathbin {\vartriangleright _{\eta }}s_c\) and \(s' \mathbin {\vartriangleright _\exists }s_c\); and whenever t is not supported by \(\eta \), for every \(s\) such that \(s \mathbin {\vdash } t\), there exists \(s'\) such that \(s' \mathbin {\vartriangleright _{\eta }}s\) and \(s' \mathbin {\vartriangleright _\exists }s\).

This criterion amounts to partially comparing, on the one hand, i’s argumentative disposition towards propositions and arguments and, on the other hand, \(\eta \)’s representations of i’s argumentative disposition.Footnote 13 More precisely, a model satisfies the operational validity criterion (for short: is operationally valid) iff:

  (i) arguments that, according to the model, support a proposition t are indeed considered by i to support t;

  (ii) whenever the model uses an argument \(s\) to support a proposition, and that argument is trumped by a counter-argument \(s_c\), the model can answer with a counter-counter-argument that i confirms indeed trumps \(s_c\);

  (iii) whenever an argument \(s\) supports a proposition that the model does not consider to be supported, the model is able to counter that argument using a counter-argument that i confirms indeed trumps \(s\).

As required, this criterion is uniquely based on observable data. Indeed, recall that the only observable data available to the analyst are the ones obtained by querying i, asking her whether a given argument \(s_2\) trumps another argument \(s_1\). If she replies that it does, this is enough to conclude that, according to her, \(s_2 \mathbin {\vartriangleright _\exists }s_1\). Indeed, in such a case, we know that there is at least one perspective within which she thinks that \(s_2\) trumps \(s_1\): namely, the perspective that she currently has. Querying i can thus provide the information needed to check whether a model is operationally valid.

3.2.3 Theorem

Since querying i will not give enough information to know that \(s_2 \mathbin {\vartriangleright _\forall }s_1\) (if indeed \(s_2 \mathbin {\vartriangleright _\forall }s_1\)), querying i will never suffice to directly establish that a model satisfies the definition of validity (Definition 6). What we need, therefore, is a means to ensure that an operationally valid model is a valid one. This is provided by the following theorem.

Theorem 1

Assume a decision situation is Closed under reinstatement, Answerable and has Bounded length and width. Then: (i) the decision situation is clear-cut; (ii) there exists an operationally valid model of that decision situation; (iii) any operationally valid model \(\eta \) satisfies \(T_i = T_\eta \).

Theorem 2 (in Sect. 3.3) generalizes this theorem. It is proven in section A.

Example 8

(Budget reform) Let us take a non-trivial example that will be used to illustrate how Theorem 1 can be used and why we need to go beyond this first theorem. Imagine that i is a political decision-maker. She wants to run for an election, and is elaborating her policy agenda. She has heard about Meinard et al.’s (2017) (hereafter referred to as “M”) argument that, according to a popular survey, biodiversity should be ranked after retirement schemes and public transportation, but before relations with foreign countries, order and security, and culture and leisure in the expenses of the State. Assume that i wants to make up her mind about the single proposition \(t =\) “I should include in my agenda a reform to increase public spending on biodiversity conservation so as to rank biodiversity higher than relations with foreign countries in the State budget”.

She requests the help of a decision analyst. The latter starts by reviewing the literature to identify a set of arguments with which he will work. (The arguments are illustrated in Fig. 2.) He thereby identifies that proposition t can be considered to be supported by \(s=\) “M’s finding (stated above) is based on a large scale survey and quantitative statistical analysis, and their protocol was designed to track the preferences that citizens express in popular votes. There are, therefore, scientific reasons to think that a policy package including the corresponding reform will gather support among voters.” Pursuing his exploration of the recent economic literature on environmental valuation methods, the analyst could identify only two counter-arguments to \(s\):

  • \(s_{c1}=\) “M’s measure is extremely rough as compared to more classical economic valuations, such as contingent valuations and the like (Kontoleon et al. 2007), which makes it non credible as a guide for policy”;

  • \(s_{c2}=\) “M claim to value biodiversity per se. The very meaning of such an endeavor is questionable because it is too abstract. More classical economic valuations are focused on concrete objects and projects, which is more promising”.

But he also found a counter-counter-argument to each of these counter-arguments:

  • \(s_{c1c}=\) “Biodiversity is not the kind of thing about which people make decisions in their everyday life. Their preferences about it are accordingly likely to be rough. The exceedingly precise measurements provided by contingent valuations and the like are therefore more a weakness than a strength”;

  • \(s_{c2c}=\) “Abstract notions such as biodiversity are an important determining factor for many people when they make decisions. Eschewing to value them is ill-founded”.

Imagine further that the analyst has not found any argument liable to trump either \(s_{c1c}\) or \(s_{c2c}\).

Fig. 2 Illustration for Example 8. (Only the arguments used by the model \(\eta \) are displayed.)

Define \(s_{1, \text {reinstated}}\) as: “[content of \(s\)]; this is a rough measure but [content of \(s_{c1c}\)]”; similarly, define \(s_{2, \text {reinstated}}\) as “[content of \(s\)]; the very meaning could be questioned because it is highly abstract, but [content of \(s_{c2c}\)]”; and define \(s_\text {reinstated}\) as “[content of \(s\)]; this is a rough measure but [content of \(s_{c1c}\)]; the very meaning could be questioned because it is highly abstract, but [content of \(s_{c2c}\)]”. Define \(S\subseteq S^*\) as the set of arguments comprising \(s\), \(s_{c1}\), \(s_{c2}\), \(s_{c1c}\), \(s_{c2c}\), \(s_{1, \text {reinstated}}\), \(s_{2, \text {reinstated}}\) and \(s_\text {reinstated}\).

Assume that the analyst is justified in thinking that i’s reasoning is such that \(S^*\) satisfies Closed under reinstatement, Answerability, Bounded length and Bounded width. Recall now that, to identify the propositions lying in \(T_i\), the analyst must identify arguments supporting propositions in \(T_i\), such that these arguments can resist counter-arguments from the whole of \(S^*\). In other words, the analyst must test the claims of the model not only against the counter-arguments in \(S\), but against the whole of \(S^*\), which the analyst does not know.

Imagine now that the analyst assumes that, even though \(S\) is a strict subset of \(S^*\), \(S\) is a good enough approximation of \(S^*\), in the sense that there is no argument in \(S^*{\setminus } S\) that trumps any argument in \(S\) or that supports t. Thanks to Theorem 1, the analyst can then deduce that the situation is clear-cut and that there exists an operationally valid model of the decision situation.

The next step for him is to carve out a model \(\eta \) reproducing the relations between arguments that he found in the literature, and then to test whether his model is operationally valid using Definition 8. To validate \(\eta \), he would first ask i whether she agrees that \(s\) supports t. If so, he would then check whether i considers that \(s_{c1}\) is a counter-argument to \(s\), in which case the analyst would check that the counter-counter-argument that he envisaged, \(s_{c1c}\), is considered by i to trump \(s_{c1}\). The analyst would then proceed in a similar way with the second chain of counter-arguments (\(s_{c2}\) and \(s_{c2c}\)), and verify that, as \(\eta \) hypothesizes, i does not take any other argument in \(S\) to trump \(s\). This would, eventually, allow him to conclude that the model \(\eta \) is operationally valid. Should it prove operationally valid, the analyst could then conclude that \(T_i=\{t\}\) (using Theorem 1 and \(T_\eta =\{t\}\)).
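The validation loop just described can be made concrete in a short sketch (our own illustration; the data structures and names are hypothetical, and the oracle simulating i's answers would in practice be replaced by actually querying i). It encodes the arguments of Example 8 and runs the three checks of the operational validity criterion.

```python
class Oracle:
    """Simulates i's answers to queries; in a real decision analysis
    these answers would be collected by interviewing i."""
    def __init__(self, supports, trumps):
        self.sup = supports  # set of (argument, proposition) pairs
        self.tr = trumps     # set of (attacker, target) pairs
    def supports(self, s, t):
        return (s, t) in self.sup
    def trumps(self, s2, s1):
        return (s2, s1) in self.tr
    def counter_arguments(self, s):
        return {a for (a, b) in self.tr if b == s}
    def supporters(self, t):
        return {a for (a, b) in self.sup if b == t}

def operationally_valid(model, oracle, propositions):
    # (i) supporting arguments of the model are confirmed by i
    for t, s in model["support"].items():
        if not oracle.supports(s, t):
            return False
        # (ii) every counter-argument raised against s is answered
        for sc in oracle.counter_arguments(s):
            scc = model["answers"].get((s, sc))
            if scc is None or not oracle.trumps(scc, sc):
                return False
    # (iii) for rejected propositions, every supporter is countered
    for t in propositions - set(model["support"]):
        for s in oracle.supporters(t):
            sc = model["counters"].get((t, s))
            if sc is None or not oracle.trumps(sc, s):
                return False
    return True

# Example 8: s supports t; sc1, sc2 trump s; sc1c, sc2c trump them back.
i = Oracle(supports={("s", "t")},
           trumps={("sc1", "s"), ("sc2", "s"),
                   ("sc1c", "sc1"), ("sc2c", "sc2")})
eta = {"support": {"t": "s"},
       "answers": {("s", "sc1"): "sc1c", ("s", "sc2"): "sc2c"},
       "counters": {}}
```

Running `operationally_valid(eta, i, {"t"})` succeeds; removing one of the answers makes the check fail, mirroring the step of the analyst's querying protocol at which \(\eta \) would be refuted.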

But notice that this whole story only works because we assumed that arguments in \(S^*{\setminus } S\) never trump any argument in \(S\). This assumption is clearly unrealistic: any slight reformulation of \(s_{c1}\), for example, will most likely also trump \(s\). This is not the only unrealistic assumption in our hypothetical scenario: it is also unlikely that the whole set \(S^*\) indeed satisfies Bounded length, for example. This condition requires the absence of cycles in the trump relation. While this may be considered to hold on \(S\), it is possible that some ambiguous or poorly phrased arguments in \(S^*\) would confuse i in such a way that she will declare, for example, that \(s_1 \mathbin {\vartriangleright _\exists }s_2\), \(s_2 \mathbin {\vartriangleright _\exists }s_3\) and \(s_3 \mathbin {\vartriangleright _\exists }s_1\) for some triple of such unclear arguments. Hence the need to go beyond Theorem 1. \(\triangle \)

Theorem 1 embodies an important step towards being able to confront models of deliberated judgment with empirical reality, by spelling out sufficient conditions under which unrolling the procedures of refutation is not a pure waste of time and energy, because there is something to be found. It also illustrates the potential usefulness of the notion of operational validity. Indeed, since the point of the modeling endeavor in our context is to capture \(T_i\), we know by virtue of (iii) in Theorem 1 that, if the corresponding conditions are met, and if we have good reasons to believe that we have an operationally valid model, then we can admit that it captures \(T_i\).

However, establishing this theorem cannot be more than just a first step. As illustrated in Example 8, the conditions above are quite heroic. One cannot realistically expect that real-life decision situations will fulfill these conditions. The most important issue is that we need a means to distinguish \(S^*\) from the restricted set of arguments with which the analyst works in practice. And we need means to make sure that the restricted set indeed “covers” the matter “sufficiently”, so as to escape the predicament in which the analyst finds himself locked in Example 8, condemned to make wildly unrealistic assumptions. The next subsection tackles this pivotal issue.

3.3 Weakening of some conditions

To obtain the results we want, all we actually need is that it should be possible to define a subset of arguments \(S_\gamma \subseteq S^*\) that satisfies conditions akin to the ones defined above, and which are sufficient to cover the topic at hand.

Let us start by formalizing the requirement, for \(S_\gamma \), to cover the topic at hand. What we want is that all the arguments needed for the decision-maker to make up her mind about the topic should be encapsulated in \(S_\gamma \). This means that, if arguments \(s\in S^*{\setminus } S_\gamma \) are brought to bear, it should be possible either to discard them or to show that they can be replaced by arguments in \(S_\gamma \).

This is done thanks to the following formal definitions and condition.

Definition 9

(Unnecessary argument) Given a decision situation and a subset \(S_\gamma \subseteq S^*\) of arguments, we say that \(S\subseteq S^*\) essentially replaces \(s\in S^*\) iff every proposition supported by \(s\) is supported by some argument in \(S\) and \(\bigcup _{s' \in S} \mathbin {\vartriangleright _\exists ^{-1}}(s') \subseteq \mathbin {\vartriangleright _\exists ^{-1}}(s)\).

Let \(S_{\gamma \text {dec}}\) denote the decisive arguments in \(S_\gamma \). We say that an argument \(s\in S^*\) is resistant iff it is not trumped by any argument in \(S_{\gamma \text {dec}}\). Let \(S_{\gamma \text {res}}\) denote the resistant arguments in \(S_\gamma \).

We say that an argument \(s\in S^*\) is unnecessary iff \(s\) is trumped by a resistant argument from \(S_\gamma \) or \(s\) is essentially replaced by \(S_{\gamma \text {res}}\). In formal terms: \(\exists s' \in S_{\gamma \text {res}}\;|\;s' \mathbin {\vartriangleright _\exists }s\), or \(S_{\gamma \text {res}}\) essentially replaces \(s\).

Condition 5

(Covering set of arguments) Given a decision situation and a set of arguments \(S_\gamma \subseteq S^*\), \(S_\gamma \) is covering iff all arguments \(s\in S^*{\setminus } S_\gamma \) are unnecessary.
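As a computational illustration of resistance and covering (our own sketch with hypothetical names; the decisive arguments and the essential-replacement test are taken as inputs rather than recomputed, since their definitions involve material beyond this section):

```python
def resistant(args, trumps, gamma_decisive):
    """Arguments not trumped by any decisive argument of S_gamma.

    `trumps` is a set of (attacker, target) pairs."""
    return {a for a in args
            if not any((d, a) in trumps for d in gamma_decisive)}

def is_covering(gamma, universe, trumps, gamma_decisive, essentially_replaced):
    """S_gamma is covering iff every argument outside it is unnecessary:
    trumped by a resistant argument of S_gamma, or essentially replaced
    (the latter test is supplied by the caller)."""
    gamma_res = resistant(gamma, trumps, gamma_decisive)

    def unnecessary(s):
        return (any((r, s) in trumps for r in gamma_res)
                or essentially_replaced(s))

    return all(unnecessary(s) for s in universe - gamma)
```

For instance, with `trumps = {("d", "x")}`, the set `{"d", "r"}` covers the universe `{"d", "r", "x"}` once `d` is decisive, because the only outside argument `x` is trumped by a resistant argument.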

Let us now relax the conditions of Theorem 1 by formulating weaker requirements confined to \(S_\gamma \). This adaptation is straightforward for Conditions 1 and 2.

Condition 6

(Set of arguments allowing answerability) Given a decision situation and a subset \(S_\gamma \subseteq S^*\) of arguments, we say that the set \(S_\gamma \) satisfies Answerability iff, for all \(s\in S^*, s' \in S_\gamma \): \(s' \mathbin {\vartriangleright _\exists }s \Rightarrow s' \mathbin {\vartriangleright _\forall }s\).

Condition 7

(Set of arguments closed under reinstatement) Given a decision situation and a subset \(S_\gamma \subseteq S^*\) of arguments, we say that the set \(S_\gamma \) is closed under reinstatement iff, \(\forall s_1, s_3 \in S_\gamma , s_1 \ne s_3, s_3\) not trumping \(s_1\), \(s_3\) decisive:

$$\begin{aligned} \exists s\in S_\gamma \;|\;s\text { replaces } s_1 \text { and } \mathbin {\vartriangleright _\exists ^{-1}}(s) \subseteq \mathbin {\vartriangleright _\exists ^{-1}}(s_1) {\setminus } \mathbin {\vartriangleright _\forall }(s_3). \end{aligned}$$

This condition is vacuous when there is no \(s_2\) such that \(s_3 \mathbin {\vartriangleright _\forall }s_2\) and \(s_2 \mathbin {\vartriangleright _\exists }s_1\): in that case, \(s_1\) replaces itself.

Similarly, we can relax Condition 3 and apply it to a subset of arguments. When an argument has very numerous counter-arguments, one may think that their vast number springs from some common reasoning that they share. For example, an argument might involve a numerical value as part of its reasoning, and be multiplied into infinitely many similar arguments of the same kind using tiny variations of that value. If so, and if we know that we can convincingly rebut each of these counter-arguments, we might believe that a small number of counter-counter-arguments will suffice to rebut them all.

Definition 10

(Defense) We say \(s\in S^*\) is \(S_\gamma \)-defended iff all the arguments \(s_c\) trumping \(s\) are trumped by a decisive argument in \(S_\gamma \), or formally, \(\forall s_c \mathbin {\vartriangleright _\exists }s, \exists s_d \in S_{\gamma \text {dec}}\;|\;s_d \mathbin {\vartriangleright _\forall }s_c\). We say \(s\in S^*\) is \((j, S_\gamma )\)-defended iff there exists a set \(S\subseteq S_\gamma \) of arguments of cardinality at most j such that \(s\) is \(S\)-defended (thus, if j arguments from \(S_\gamma \) suffice to defend \(s\)).

Condition 8

(Set of arguments with width bounded by j) Given a decision situation and a natural number j, a set of arguments \(S_\gamma \subseteq S^*\) has width bounded by j iff, for each argument \(s\in S_\gamma \), if \(s\) is \(S_\gamma \)-defended, then it is \((j, S_\gamma )\)-defended.

The condition is vacuously true when no argument in \(S^*\) is trumped by more than j counter-arguments.
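A brute-force check of defense and of the bounded-width condition can be sketched as follows (our own illustration; which trump relation defense uses and how decisiveness is determined are supplied as inputs here, since they are specified elsewhere in the paper):

```python
from itertools import combinations

def defended_by(s, defenders, trumps, decisive):
    """s is S-defended: every counter-argument of s is trumped by a
    decisive member of the defender set S."""
    counters = {c for (c, b) in trumps if b == s}
    return all(any((d, c) in trumps for d in defenders & decisive)
               for c in counters)

def j_defended(s, gamma, j, trumps, decisive):
    """s is (j, S_gamma)-defended: some subset of S_gamma of
    cardinality at most j defends it (checked exhaustively)."""
    pool = sorted(gamma)
    return any(defended_by(s, set(combo), trumps, decisive)
               for size in range(j + 1)
               for combo in combinations(pool, size))
```

With two counter-arguments, each rebutted by a different decisive defender, the argument is (2, \(S_\gamma \))-defended but not (1, \(S_\gamma \))-defended, which is the distinction the width bound tracks.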

Our last condition relaxes Condition 4. We want to exclude some of the long chains in \(S^*\). But we want to tolerate long chains, including cycles, among unclear arguments. Indeed, anecdotal evidence from ordinary argumentation situations suggests that in many (otherwise interesting) decision situations, cycles do appear in trump relations among arguments (for example, because arguments can use ambiguous terms). However, this does not necessarily prevent the situation from being modelizable in our sense. What we do need is to avoid some of the cycles or chains that involve “too many” arguments from \(S_\gamma \), in a somewhat technical sense captured by the following condition.

Condition 9

(Set of arguments with length bounded by k) Given a decision situation, a natural number k, and a set of arguments \(S_\gamma \), define a binary relation Q over \(S_\gamma \) as \(s_2 Q s_1\) iff \(s_2 \mathbin {\vartriangleright _\exists }s_1\) or \(s_2 \mathbin {\vartriangleright _\exists }s\) and \(s\mathbin {\vartriangleright _\exists }s_1\) for some \(s\in S^*\), thus, iff \(s_2\) trumps \(s_1\) directly or through some intermediate argument. Let \(Q^1 = Q\) and \(Q^{k+1} = Q^k \circ Q\) for any natural number k. The set \(S_\gamma \) has length bounded by k iff \(\not \exists s_2, s_1 \in S_\gamma \;|\;s_2 Q^{k+1} s_1\), thus, iff it is impossible to reach an argument from \(S_\gamma \), starting from an argument from \(S_\gamma \), following Q more than k times.

This condition tolerates cyclesFootnote 14 in \(\mathbin {\vartriangleright _\exists }\) that involve only arguments picked outside the chosen set \(S_\gamma \). It only forbids a subset of the situations where a cycle (or a too long chain) is built that involves arguments from \(S_\gamma \). For example, it excludes a situation where \(s_1 \mathbin {\vartriangleright _\exists }s\), \(s\mathbin {\vartriangleright _\exists }s_2\), \(s_2 \mathbin {\vartriangleright _\exists }s\) and \(s\mathbin {\vartriangleright _\exists }s_1\) for some \(s_1, s_2 \in S_\gamma \) and \(s\notin S_\gamma \).Footnote 15
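The relation Q and the bounded-length test lend themselves to a computational sketch (our own illustration; the encoding is hypothetical): Q is the one- or two-step trump relation with both endpoints in \(S_\gamma \), and the bound is checked by composing Q with itself k times.

```python
def q_relation(gamma, trumps):
    """s2 Q s1: s2 trumps s1 directly, or via one intermediate
    argument (which may lie outside S_gamma); endpoints in S_gamma."""
    targets = {}
    for (a, b) in trumps:
        targets.setdefault(a, set()).add(b)
    q = set()
    for s2 in gamma:
        for mid in targets.get(s2, ()):
            if mid in gamma:
                q.add((s2, mid))
            q.update((s2, s1) for s1 in targets.get(mid, ()) if s1 in gamma)
    return q

def length_bounded_by(gamma, trumps, k):
    """True iff no pair of S_gamma arguments is related by Q^(k+1)."""
    q = q_relation(gamma, trumps)
    reach = q                      # Q^1
    for _ in range(k):             # after the loop: Q^(k+1)
        reach = {(a, c) for (a, b) in reach for (b2, c) in q if b == b2}
    return not reach
```

A cycle routed through an argument outside \(S_\gamma \) but touching two arguments of \(S_\gamma \) yields a Q-cycle and therefore violates the bound for every k, while cycles entirely outside \(S_\gamma \) are ignored, as the condition intends.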

Thanks to Conditions 5–9, we are now in a position to define our set of arguments of interest.

Definition 11

(CAC arguments) Given a decision situation and a set \(S_\gamma \subseteq S^*\), we say that \(S_\gamma \) is clear and covering, or CAC, iff it is Closed under reinstatement and Answerable, has width bounded by some number j and length bounded by some number k, and is such that all arguments \(s\in S^*{\setminus } S_\gamma \) are unnecessary.

Following the same rationale, we can define an operational criterion echoing Definition 8.

Definition 12

(\(S_\gamma \)-operational validity) Given a decision situation and a set \(S_\gamma \subseteq S^*\), we define a model \(\eta \) as \(S_\gamma \)-operationally valid iff, whenever an argument \(s'\) supports a proposition t according to \(\eta \), it holds that \(s'\) supports t according to i and that, for every \(s\in S^*\) with \(s\mathbin {\vartriangleright _\exists }s'\), the model provides a counter-counter-argument \(s_{cc} \in S_\gamma \) with \(s_{cc} \mathbin {\vartriangleright _\exists }s\); and when t is not supported by \(\eta \), the model provides, for every argument \(s\in S^*\) supporting t, a counter-argument \(s' \in S_\gamma \) with \(s' \mathbin {\vartriangleright _\exists }s\).

A theorem echoing Theorem 1 can then be proved.

Theorem 2

Given a decision situation and \(S_\gamma \subseteq S^*\), if \(S_\gamma \) is CAC, then (i) the decision situation is clear-cut; (ii) there exists an \(S_\gamma \)-operationally valid model \(\eta \); (iii) any \(S_\gamma \)-operationally valid model \(\eta \) satisfies \(T_i = T_\eta \).

This theorem is a strengthened version of Theorem 1 since it produces the same results based on (i) the conditions encapsulated in the definition of CAC arguments, and (ii) \(S_\gamma \)-operational validity. Those conditions are implied by the ones assumed by Theorem 1. Indeed, when the conditions of Theorem 1 hold, taking \(S_\gamma = S^*\) satisfies the conditions of Theorem 2.Footnote 16

4 Significance of the deliberated judgment framework for decision theory and the practice of decision analysis

Section 2 displayed the conceptual core of our framework and Sect. 3 explained how this framework can be confronted with empirical reality. The present section reflects on the meaning, promises and limits of our approach. We start by pondering how the various conditions spelled out in Sect. 3 can be interpreted (Sect. 4.1). We then take a broader view to discuss how our framework relates to the larger literature in decision science (Sect. 4.2).

4.1 The meaning of our conditions

To understand the precise meaning of the conditions of Theorem 1 and, more importantly, of Theorem 2, an almost trivial but nonetheless very important first step is to spell out what it means if these conditions are not fulfilled.

We already stressed that the conditions of Theorem 1 are certainly too strong to be fulfilled. The conditions of Theorem 2 are, by construction, much weaker. But still, there certainly are situations where they are not fulfilled. In such cases, we do not claim that decision analysis is impossible. Neither is our general framework, as presented in Sect. 2, rendered bogus. The sole implication is that our approach to operational empirical validation cannot be implemented. This does not prevent, for example, the analyst from trying to directly identify decisive arguments, and this does not render irrelevant a decision analysis based on decisive arguments. Nor does it completely prevent other approaches to decision analysis from being implemented. The only implication is that a full-fledged implementation of our approach, including operational empirical validation, is not guaranteed to be possible in such situations. It is no part of our claim that our approach can be applied all the time and provides an all-encompassing framework liable to supersede all other approaches to decision analysis. Our approach has a specific domain of application.

Beyond these simple, negative comments, how are our conditions to be understood? In general terms, these various conditions can be interpreted in three different ways:

  (i) as axioms capturing minimal properties concerning arguments and the way i reasons;

  (ii) as empirical hypotheses;

  (iii) as rules governing the decision process (rules that i can commit to abide by, or can consider to be well-founded safeguards for the proper unfolding of the process).

Example 9

(Budget reform (cont.)) We can now improve Example 8 by relaxing the assumptions it contains. One can envisage in turn the three possibilities spelled out above.

In interpretation (i), instead of assuming that i always reasons in such a way that \(S^*\) in its entirety satisfies the conditions of Theorem 1, we only assume that the set of arguments \(S= \{s\), \(s_{c1}\), \(s_{c2}\), \(s_{c1c}\), \(s_{c2c}\), \(s_{1, \text {reinstated}}\), \(s_{2, \text {reinstated}}\), \(s_\text {reinstated}\}\) is CAC.

In interpretation (ii), we have to take advantage of empirical data to claim that the above set is CAC. Imagine, for example, that we have been able to show that the overwhelming majority of people do reason with respect to the arguments in this set in such a way that it can be considered CAC. This would provide strong empirical support for admitting that this set can be considered CAC for the purpose of the decision process at issue (assuming the pragmatic interpretation of \(S^*\)). In the present article, we leave aside the important difficulties that such concrete empirical applications would face.

In interpretation (iii), the analyst would start by explaining to i the content of the requirements encapsulated in the definition of a CAC set of arguments and ask her whether she is willing to commit herself to reason in such a way as to fulfill these requirements when thinking about the arguments to be discussed in the process. For example, for the Answerability of the set of arguments (Condition 6), the analyst would ask i whether she would agree to commit not to change her mind depending on her mood or any other non-argumentative factor. Notice that i might realize at some point that it was not a good idea after all to commit to these various things, and in such a case the decision analysis process would fail. \(\triangle \)

Some of the conditions of our theorems are arguably more congenial to a given interpretation. For example, it seems natural enough to interpret Condition 2 as a rationality requirement of the kind that it makes sense to use as an axiom (interpretation (i)). By contrast, Condition 1 is the kind of condition that can easily be translated into rules that decision-makers can be asked to abide by when they engage in a decision process (interpretation (iii)). By construction, Conditions 6 and 7 are weakened versions of the above stronger conditions. They accordingly inherit the preferred interpretations suggested above. Conditions 8 and 9 can easily be seen as empirical hypotheses (interpretation (ii)).

However, although it is tempting to draw such connections between specific conditions and specific interpretations, at a more abstract level all the conditions above can be read under any of the three interpretations. The different conditions can even be interpreted differently in the context of different implementations. In the present, largely theoretical work, we want to leave all these possibilities open. Future, more applied work should assess if and when these different interpretations can be used, in particular by elaborating and implementing suitable empirical validation protocols for interpretation (ii) and suitable participatory procedures for interpretation (iii).

4.2 The deliberated judgment framework in perspective

Now that the meaning of the conditions of our theorems is clarified, we are in a firmer position to discuss the nature of our contribution to the literature.

The central, distinctive concept of our approach is that of the deliberated judgments of an individual. Deliberated judgments are the propositions that the individual herself considers, on due consideration, to be based on decisive arguments. This formulation highlights the two key features of the concept.

The first key feature is that deliberated judgments are the result of a careful examination of arguments and counter-arguments. This echoes the approach to the notion of rationality developed most prominently by Habermas (1981). In this approach, actions, attitudes or utterances can be termed “rational” so long as the actor(s) performing or having them can account for them, explain them and use arguments and counter-arguments to withstand criticisms that other people could raise against them. Variants of this vision of rationality play a key role in other prominent philosophical frameworks, such as Scanlon’s (2000) and Sen’s (2009). Having in mind this approach to rationality, in the remainder of this discussion, we will therefore simply talk about “rationality” when referring to this first idea underlying our framework.

The second key feature is that deliberated judgments are nevertheless the individual’s own judgments, in the sense that they do not reflect the application of any exogenous criterion. This second idea can also be nicknamed, for brevity’s sake, “non-paternalism”.

Our approach, when applied in a decision analysis perspective, requires admitting the soundness of these two normative notions of rationality and non-paternalism.

Our approach, however, also has a strong descriptive dimension, which is a direct implication of the very meaning of non-paternalism. Though we are interested in deliberated judgments rather than in the “shallow” preferences that individuals spontaneously express, the deliberated judgments that we are interested in are those of real, empirical individuals, who are not constrained by our framework to adhere to a specific set of exogenous stances. These descriptive aspects feed a normative approach that accordingly owes its normative credentials both to its normative foundations and to its reference to empirical reality.

Due to this double anchorage in normative and descriptive aspects, our approach opens avenues to overcome perennial difficulties facing decision theory concerning its descriptive vs. normative status. Indeed, our framework sets the stage for decision-aiding practices that could have a crucial strength as compared with more standard approaches, by including rigorous tests of whether individuals endorse various arguments and argumentative lines, thereby avoiding both actively advocating them (a purely normative approach) and leaving the individual ignorant of their existence (a purely descriptive approach). Decision analyses based on deliberated judgments thereby provide compelling reasons for the aided individual to think that the decisions he makes once he has been aided are better than the ones he would have made otherwise. Such reasons are liable to play a key role in strengthening the legitimacy and validity of decision analysis—two requirements largely discussed in the literature (Landry et al. 1983, 1996).

To illustrate this idea, it is useful to compare our framework to more classical approaches, such as utility theory. Proponents of utility theory could claim that utility functions provide arguments that individuals will consider convincing (Savage 1972; Morgenstern 1979; Raiffa 1985), and that, therefore, our approach will converge towards utility theory. However, the convincing power of utility-based arguments is debatable (Ellsberg 1961; Allais 1979). Psychologists have tried to test it experimentally (Slovic and Tversky 1974; MacCrimmon and Larsson 1979). But such tests can hardly be considered conclusive: the meaning of their results depends on how arguments have been presented to the individuals and on whether counter-arguments have been presented, as Slovic and Tversky (1974) themselves point out. Such a systematic confrontation with counter-arguments is precisely what our proposed framework makes it possible to implement.

The formal framework presented in this article will, however, only live up to its promises if empirical applications are developed. Researchers in artificial intelligence (Labreuche 2011) and persuasion (Carenini and Moore 2006) have produced ways of “translating” formal Multi-Attribute Value Theory models into textual arguments, which could provide promising tools to develop such applications.