
1 Introduction

1.1 General Philosophy

In decision-making involving multiple criteria, the basic problem stated by analysts and Decision-Makers (DMs) concerns the way that the final decision should be made. In many cases, however, this problem is posed in the opposite way: assuming that the decision is given, how is it possible to find the rational basis for the decision being made? Or equivalently, how is it possible to assess the DM’s preference model leading to exactly the same decision as the actual one or at least the most “similar” decision? The philosophy of preference disaggregation in multicriteria analysis is to assess/infer preference models from given preferential structures and to address decision-aiding activities through operational models within the aforementioned framework.

Under the term “multicriteria analysis” two basic approaches have been developed involving:

  1. a set of methods or models enabling the aggregation of multiple evaluation criteria to choose one or more actions from a set A, and

  2. an activity of decision-aid to a well-defined DM (individual, organization, etc.).

In both cases, the set A of potential actions (or objectives, alternatives, decisions) is analyzed in terms of multiple criteria in order to model all the possible impacts, consequences or attributes related to the set A.

Roy [108] outlines a general modeling methodology of decision-making problems, which includes four modeling steps starting with the definition of the set A and ending with the activity of decision-aid, as follows:

  • Level 1: Object of the decision, including the definition of the set of potential actions A and the determination of a problem statement on A.

  • Level 2: Modeling of a consistent family of criteria assuming that these criteria are non-decreasing value functions, exhaustive and non-redundant.

  • Level 3: Development of a global preference model, to aggregate the marginal preferences on the criteria.

  • Level 4: Decision-aid or decision support, based on the results of level 3 and the problem statement of level 1.

In level 1, Roy [108] distinguishes four reference problem statements, each of which does not necessarily preclude the others. These problem statements can be employed separately, or in a complementary way, in all phases of the decision-making process. The four problem statements are the following:

  • Problem statement α: Choosing one action from A (choice).

  • Problem statement β: Sorting the actions into predefined and preference ordered categories.

  • Problem statement γ: Ranking the actions from the best one to the worst one (ranking).

  • Problem statement δ: Describing the actions in terms of their performances on the criteria (description).

In level 2, the modeling process must conclude with a consistent family of criteria \(\{g_{1},g_{2},\ldots,g_{n}\}\). Each criterion is a non-decreasing real-valued function defined on A, as follows:

$$\displaystyle\begin{array}{rcl} g_{i}: A \rightarrow [g_{i^{{\ast}}},g_{i}^{{\ast}}] \subset \mathfrak{R}/a \rightarrow g_{i}(a) \in \mathfrak{R}& &{}\end{array}$$
(9.1)

where \([g_{i^{{\ast}}},g_{i}^{{\ast}}]\) is the criterion evaluation scale, \(g_{i^{{\ast}}}\) and \(g_{i}^{{\ast}}\) are the worst and the best levels of the i-th criterion respectively, \(g_{i}(a)\) is the evaluation or performance of action a on the i-th criterion and \(\mathbf{g}(a)\) is the vector of performances of action a on the n criteria.

From the above definitions, the following preferential situations can be determined:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} g_{i}(a)> g_{i}(b) \Leftrightarrow a \succ b&(a\mbox{ is preferred to }b) \\ g_{i}(a) = g_{i}(b) \Leftrightarrow a \sim b&(a\mbox{ is indifferent to }b)\end{array} \right.& &{}\end{array}$$
(9.2)

So, having a weak-order preference structure on a set of actions, the problem is to adjust additive value or utility functions based on multiple criteria, in such a way that the resulting structure would be as consistent as possible with the initial structure. This principle underlies the disaggregation-aggregation approach presented in the next section.

This chapter is devoted to UTA methods, which are regression-based approaches developed as an alternative to multiattribute utility theory (MAUT). UTA methods not only adopt the aggregation-disaggregation principles, but they may also be considered as the main initiatives and the most representative examples of preference disaggregation theory. Another, more recent example of preference disaggregation theory is the dominance-based rough set approach (DRSA), leading to a decision rule preference model via inductive learning (see Chap. 9.5 of this book).

1.2 The Disaggregation-Aggregation Paradigm

In the traditional aggregation paradigm, the criteria aggregation model is known a priori, while the global preference is unknown. On the contrary, the philosophy of disaggregation involves the inference of preference models from given global preferences (Fig. 9.1).

Fig. 9.1
figure 1

The aggregation and disaggregation paradigms in MCDA [57]

The disaggregation-aggregation approach [56, 116, 128, 130] aims at analyzing the behavior and the cognitive style of the DM. Special iterative interactive procedures are used, where the components of the problem and the DM’s global judgment policy are analyzed and then aggregated into a value system (Fig. 9.2). The goal of this approach is to aid the DM in improving his/her knowledge about the decision situation and his/her way of preferring, so that a consistent decision can be achieved.

Fig. 9.2
figure 2

The disaggregation-aggregation approach [127]. (a) The value system approach; (b) the outranking relation approach; (c) the disaggregation-aggregation approach; (d) the multiobjective optimization approach

In order to use given global preference data, Jacquet-Lagrèze and Siskos [57] note that the clarification of the DM’s global preference necessitates the use of a set of reference actions A R . Usually, this set could be:

  1. a set of past decision alternatives (\(A_{R}\): past actions),

  2. a subset of decision actions, especially when A is large (\(A_{R} \subset A\)),

  3. a set of fictitious actions, consisting of performances on the criteria, which can be easily judged by the DM to perform global comparisons (\(A_{R}\): fictitious actions).

In each of the above cases, the DM is asked to externalize and/or confirm his/her global preferences on the set A R taking into account the performances of the reference actions on all criteria.

1.3 Historical Background

The history of the disaggregation principle in multidimensional/multicriteria analyses begins with the use of goal programming techniques, a special form of linear programming structure, in assessing/inferring preference/aggregation models or in developing linear or non-linear multidimensional regression analyses [118].

Charnes et al. [16] proposed a linear model of optimal estimation of executive compensation by analyzing or disaggregating pairwise comparisons and given measures (salaries); the model was estimated so that it could be as consistent as possible with the data from the goal programming point of view.

Karst [65] minimized the sum of absolute deviations via goal programming in linear regression with one variable, while Wagner [147] generalized Karst’s model to the multiple regression case. Later, Kelley [68] proposed a similar model to minimize the Tchebycheff criterion in linear regression.

Srinivasan and Shocker [143] outlined the ORDREG ordinal regression model to assess a linear value function by disaggregating pairwise judgments. Freed and Glover [34] proposed goal programming models to infer the weights of linear value functions in the frame of discriminant analysis (problem statement β).

The research on handling ordinal criteria began with the studies of Young et al. [148], and Jacquet-Lagrèze and Siskos [55]. The latter research refers to the presentation of the UTA method in the “Cahiers du LAMSADE” series and indicates the actual initiation of the development of disaggregation methods. Both research teams faced the same problem: to infer additive value functions by disaggregating a ranking of reference alternatives. Young et al. [148] proposed alternating least squares techniques, without ensuring, however, that the additive value function is optimally consistent with the given ranking. In the case of the UTA method, optimality is ensured through linear programming techniques.

2 The UTA Method

2.1 Principles and Notation

The UTA (UTilité Additive) method proposed by Jacquet-Lagrèze and Siskos [56] aims at inferring one or more additive value functions from a given ranking on a reference set A R . The method uses special linear programming techniques to assess these functions so that the ranking(s) obtained through these functions on A R is (are) as consistent as possible with the given one.

The criteria aggregation model in UTA is assumed to be an additive value function of the following form [56]:

$$\displaystyle\begin{array}{rcl} u(\mathbf{g}) =\sum _{ i=1}^{n}p_{ i}u_{i}(g_{i})& &{}\end{array}$$
(9.3)

subject to normalization constraints:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{lll} \sum _{i=1}^{n}p_{ i} = 1 & & \\ u_{i}(g_{i^{{\ast}}}) = 0,&u_{i}(g_{i}^{{\ast}}) = 1,&\forall i = 1,2,\ldots,n\end{array} \right.& &{}\end{array}$$
(9.4)

where \(u_{i}\), \(i = 1,2,\ldots,n\), are non-decreasing real-valued functions, named marginal value or utility functions, which are normalized between 0 and 1, and \(p_{i}\) is the weight of \(u_{i}\) (Fig. 9.3).

Fig. 9.3
figure 3

The normalized marginal value function

Both the marginal and the global value functions have the monotonicity property of the true criterion. For instance, in the case of the global value function the following properties hold:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} u[\mathbf{g}(a)]> u[\mathbf{g}(b)] \Leftrightarrow a \succ b&\mbox{ (preference)}\\ u[\mathbf{g} (a)] = u[\mathbf{g} (b)] \Leftrightarrow a \sim b&\mbox{(indifference)} \end{array} \right.& &{}\end{array}$$
(9.5)

The UTA method infers an unweighted form of the additive value function, equivalent to the form defined from relations (9.3) and (9.4), as follows:

$$\displaystyle\begin{array}{rcl} u(\mathbf{g}) =\sum _{ i=1}^{n}u_{ i}(g_{i})& &{}\end{array}$$
(9.6)

subject to normalization constraints:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} \sum _{i=1}^{n}u_{ i}(g_{i}^{{\ast}}) = 1& \\ u_{i}(g_{i^{{\ast}}}) = 0, &\forall i = 1,2,\ldots,n\end{array} \right.& &{}\end{array}$$
(9.7)

Of course, the existence of such a preference model assumes the preferential independence of the criteria for the DM [67], while other conditions for additivity can be found in [32, 33].

2.2 Development of the UTA Method

On the basis of the additive model (9.6)–(9.7) and taking into account the preference conditions (9.5), the value of each alternative a ∈ A R may be written as:

$$\displaystyle\begin{array}{rcl} u'[\mathbf{g}(a)] =\sum _{ i=1}^{n}u_{ i}[g_{i}(a)] +\sigma (a)\ \forall a \in A_{R}& &{}\end{array}$$
(9.8)

where \(\sigma (a)\) is a potential error relative to u′[g(a)].

Moreover, in order to estimate the corresponding marginal value functions in a piecewise linear form, Jacquet-Lagrèze and Siskos [56] propose the use of linear interpolation. For each criterion, the interval \([g_{i^{{\ast}}},g_{i}^{{\ast}}]\) is cut into \((\alpha _{i} - 1)\) equal intervals, and thus the end points \(g_{i}^{j}\) are given by the formula:

$$\displaystyle\begin{array}{rcl} g_{i}^{j} = g_{ i^{{\ast}}} + \frac{j - 1} {\alpha _{i} - 1} (g_{i}^{{\ast}}- g_{ i^{{\ast}}})\ \forall j = 1,2,\ldots,\alpha _{i}& &{}\end{array}$$
(9.9)

The marginal value of an action a is approximated by linear interpolation, and thus, for \(g_{i}(a) \in [g_{i}^{j},g_{i}^{j+1}]\):

$$\displaystyle\begin{array}{rcl} u_{i}[g_{i}(a)] = u_{i}(g_{i}^{j}) + \frac{g_{i}(a) - g_{i}^{j}} {g_{i}^{j+1} - g_{i}^{j}}[u_{i}(g_{i}^{j+1}) - u_{ i}(g_{i}^{j})]& &{}\end{array}$$
(9.10)
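To make the discretization and interpolation concrete, the following short sketch (Python with illustrative names; not part of the original method description) computes the breakpoints of formula (9.9) and the interpolated marginal value of formula (9.10):

```python
# Sketch of formulas (9.9)-(9.10): breakpoints of a criterion scale and
# linear interpolation of a marginal value at an arbitrary performance level.
import numpy as np

def breakpoints(g_worst, g_best, alpha):
    """End points g_i^j, j = 1..alpha, of (alpha - 1) equal intervals (formula 9.9)."""
    j = np.arange(1, alpha + 1)
    return g_worst + (j - 1) / (alpha - 1) * (g_best - g_worst)

def marginal_value(g_a, points, u_points):
    """Marginal value u_i[g_i(a)] by linear interpolation (formula 9.10).
    points   : breakpoints g_i^1, ..., g_i^alpha (increasing)
    u_points : estimated values u_i(g_i^j) at the breakpoints
    Note: for a cost-type criterion whose scale decreases in preference (e.g. price),
    both arrays would be reversed first, since np.interp expects increasing abscissae."""
    return float(np.interp(g_a, points, u_points))

# Illustrative scale on [0, 1] with four breakpoints and hypothetical marginal values
pts = breakpoints(0.0, 1.0, 4)            # [0, 1/3, 2/3, 1]
u_pts = [0.0, 0.2, 0.5, 1.0]
print(marginal_value(0.5, pts, u_pts))    # 0.35, interpolated between 0.2 and 0.5
```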

The set of reference actions \(A_{R} =\{ a_{1},a_{2},\ldots,a_{m}\}\) is also “rearranged” in such a way that \(a_{1}\) is the head of the ranking (best action) and \(a_{m}\) its tail (worst action). Since the ranking has the form of a weak order R, for each pair of consecutive actions \((a_{k},a_{k+1})\) it holds either \(a_{k} \succ a_{k+1}\) (preference) or \(a_{k} \sim a_{k+1}\) (indifference). Thus, if

$$\displaystyle\begin{array}{rcl} \Delta (a_{k},a_{k+1}) = u'[\mathbf{g}(a_{k})] - u'[\mathbf{g}(a_{k+1})]& &{}\end{array}$$
(9.11)

then one of the following holds:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} \Delta (a_{k},a_{k+1}) \geq \delta &\mbox{ iff}\ a_{k} \succ a_{k+1} \\ \Delta (a_{k},a_{k+1}) = 0&\mbox{ iff}\ a_{k} \sim a_{k+1}\end{array} \right.& &{}\end{array}$$
(9.12)

where δ is a small positive number so as to discriminate significantly two successive equivalence classes of R.

Taking into account the hypothesis on monotonicity of preferences, the marginal values \(u_{i}(g_{i})\) must satisfy the set of the following constraints:

$$\displaystyle\begin{array}{rcl} u_{i}(g_{i}^{j+1}) - u_{ i}(g_{i}^{j}) \geq s_{ i}\ \forall j = 1,2,\ldots,\alpha _{i} - 1,\ i = 1,2,\ldots,n& &{}\end{array}$$
(9.13)

with \(s_{i} \geq 0\) being indifference thresholds defined on each criterion \(g_{i}\). Jacquet-Lagrèze and Siskos [56] note that it is not necessary to use these thresholds in the UTA model (\(s_{i} = 0\)), but they can be useful in order to avoid phenomena such as \(u_{i}(g_{i}^{j+1}) = u_{i}(g_{i}^{j})\) when \(g_{i}^{j+1} \succ g_{i}^{j}\).

The marginal value functions are finally estimated by means of the following Linear Program (LP) with (9.6), (9.7), (9.12), (9.13) as constraints and with an objective function depending on the \(\sigma (a)\) and indicating the amount of total deviation:

$$\displaystyle{ \left \{\begin{array}{ll} \mbox{ [min]}F =\sum _{a\in A_{R}}\sigma (a) & \\ \mbox{ subject to} & \\ \left.\begin{array}{ll} \Delta (a_{k},a_{k+1}) \geq \delta &\mbox{ if}\ a_{k} \succ a_{k+1} \\ \Delta (a_{k},a_{k+1}) = 0&\mbox{ if}\ a_{k} \sim a_{k+1}\\ \end{array} \right \} & \forall k \\ u_{i}(g_{i}^{j+1}) - u_{i}(g_{i}^{j}) \geq 0 &\forall i\mbox{ and }j \\ \sum _{i=1}^{n}u_{ i}(g_{i}^{{\ast}}) = 1 & \\ u_{i}(g_{i^{{\ast}}}) = 0,u_{i}(g_{i}^{j}) \geq 0,\sigma (a) \geq 0 &\forall a \in A_{R},\forall i\ \mbox{ and}\ j \end{array} \right. }$$
(9.14)

The stability analysis of the results provided by LP (9.14) is considered as a post-optimality analysis problem. As Jacquet-Lagrèze and Siskos [56] note, if the optimum \(F^{{\ast}} = 0\), the polyhedron of admissible solutions for \(u_{i}(g_{i})\) is not empty and many value functions lead to a perfect representation of the weak order R. Even when the optimal value \(F^{{\ast}}\) is strictly positive, other solutions, less good for F, can improve other satisfactory criteria, like Kendall’s τ.

As shown in Fig. 9.4, the post-optimal solutions space is defined by the polyhedron:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{l} F \leq F^{{\ast}} + k(F^{{\ast}}) \\ \mbox{ all the constraints of LP (9.14)}\end{array} \right.& &{}\end{array}$$
(9.15)

where \(k(F^{{\ast}})\) is a positive threshold which is a small proportion of \(F^{{\ast}}\).

Fig. 9.4
figure 4

Post-optimality analysis [56]

The algorithms which could be used to explore the polyhedron (9.15) are branch and bound methods, like the reverse simplex method [146], or techniques dealing with the notion of the labyrinth in graph theory, such as Tarry’s method [15] or the method of Manas and Nedoma [77]. Jacquet-Lagrèze and Siskos [56], in the original UTA method, propose the partial exploration of polyhedron (9.15) by solving the following LPs:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} \mbox{ [min]}u_{i}(g_{i}^{{\ast}})\mbox{ and }\mbox{ [max]}u_{i}(g_{i}^{{\ast}}) \\ \mbox{ in} &\forall i = 1,2,\ldots,n \\ \mbox{ polyhedron (9.15)}\\ \end{array} \right.& &{}\end{array}$$
(9.16)

The average of the solutions of the previous LPs may be considered as the final solution of the problem. In case of instability, a large variation of the provided solutions appears, and this average solution is less representative. In any case, the solutions of the above LPs give the internal variation of the weights of all criteria \(g_{i}\), and consequently give an idea of the importance of these criteria in the DM’s preference system.

2.3 The UTASTAR Algorithm

The UTASTAR method [128] is an improved version of the original UTA model presented in the previous section. In the original version of UTA [56], a single error \(\sigma (a)\) to be minimized is introduced for each ranked action a ∈ A R . This error function is not sufficient to minimize completely the dispersion of points all around the monotone curve of Fig. 9.5. The problem is posed by points situated on the right of the curve, from which it would be suitable to subtract an amount of value/utility rather than increase the values/utilities of the others.

Fig. 9.5
figure 5

Ordinal regression curve (ranking versus global value)

In the UTASTAR method, Siskos and Yannacopoulos [128] introduced a double positive error function, so that formula (9.8) becomes:

$$\displaystyle\begin{array}{rcl} u'[\textit{g}(a)] =\sum _{ i=1}^{n}u_{ i}[g_{i}(a)] -\sigma ^{+}(a) +\sigma ^{-}(a)\quad \forall \ a \in A_{ R}& &{}\end{array}$$
(9.17)

where \(\sigma ^{+}\) and \(\sigma ^{-}\) are the underestimation and the overestimation error respectively.

Moreover, another important modification concerns the monotonicity constraints of the criteria, which are taken into account through the transformations of the variables:

$$\displaystyle\begin{array}{rcl} w_{ij} = u_{i}(g_{i}^{j+1}) - u_{ i}(g_{i}^{j}) \geq 0\ \forall \ i = 1,2,\ldots,n\ \mbox{ and}& & \\ \ j = 1,2,\ldots,\alpha _{i} - 1& &{}\end{array}$$
(9.18)

and thus, the monotonicity conditions (9.13) can be replaced by non-negativity constraints on the variables \(w_{ij}\) (for \(s_{i} = 0\)).

Consequently, the UTASTAR algorithm may be summarized in the following steps:

Step 1: :

Express the global value of the reference actions u[g(a k )], \(k = 1,2,\ldots,m\), first in terms of marginal values \(u_{i}(g_{i})\), and then in terms of the variables \(w_{ij}\), according to formula (9.18), by means of the following expressions:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} u_{i}(g_{i}^{1}) = 0 &\forall i = 1,2,\ldots,n \\ u_{i}(g_{i}^{j}) =\sum _{ t=1}^{j-1}w_{ it}&\forall i = 1,2,\ldots,n\ \mbox{ and}\ j = 2,3,\ldots,\alpha _{i}\end{array} \right.& & {}\end{array}$$
(9.19)
Step 2: :

Introduce two error functions \(\sigma ^{+}\) and \(\sigma ^{-}\) on A R by writing for each pair of consecutive actions in the ranking the analytic expressions:

$$\displaystyle\begin{array}{rcl} \begin{array}{lll} \Delta (a_{k},a_{k+1})& =&u[\mathbf{g}(a_{k})] -\sigma ^{+}(a_{k}) +\sigma ^{-}(a_{k}) \\ & & - u[\mathbf{g}(a_{k+1})] +\sigma ^{+}(a_{k+1}) -\sigma ^{-}(a_{k+1})\end{array} & & {}\end{array}$$
(9.20)
Step 3: :

Solve the LP:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]z =\sum _{ k=1}^{m}[\sigma ^{+}(a_{ k}) +\sigma ^{-}(a_{ k})] & \\ \mbox{ subject to} & \\ \left.\begin{array}{ll} \Delta (a_{k},a_{k+1}) \geq \delta &\mbox{ if}\ a_{k} \succ a_{k+1} \\ \Delta (a_{k},a_{k+1}) = 0&\mbox{ if}\ a_{k} \sim a_{k+1}\\ \end{array} \right \} & \forall k \\ \sum _{i=1}^{n}\sum _{ j=1}^{\alpha _{i}-1}w_{ ij} = 1 & \\ w_{ij} \geq 0,\sigma ^{+}(a_{k}) \geq 0,\sigma ^{-}(a_{k}) \geq 0 &\forall i,j,\mbox{ and }k\\ \end{array}\right.& & {}\end{array}$$
(9.21)

with δ being a small positive number.

Step 4: :

Test the existence of multiple or near optimal solutions of the LP (9.21) (stability analysis); in case of non-uniqueness, find the mean additive value function of those (near) optimal solutions which maximize the objective functions:

$$\displaystyle\begin{array}{rcl} u_{i}(g_{i}^{{\ast}}) =\sum _{ j=1}^{\alpha _{i}-1}w_{ ij}\ \forall i = 1,2,\ldots,n& & {}\end{array}$$
(9.22)

on the polyhedron of the constraints of the LP (9.21) bounded by the new constraint:

$$\displaystyle\begin{array}{rcl} \sum _{k=1}^{m}[\sigma ^{+}(a_{ k}) +\sigma ^{-}(a_{ k})] \leq z^{{\ast}}+\varepsilon & & {}\end{array}$$
(9.23)

where \(z^{{\ast}}\) is the optimal value of the LP in step 3 and \(\varepsilon\) is a very small positive number.
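The core of steps 2 and 3 can be sketched with a generic LP solver. The fragment below is a minimal Python/scipy sketch (all names are illustrative); it assumes that the w-coefficient vector of every reference action has already been computed from formulas (9.9), (9.10) and (9.19), as done explicitly in the numerical example of Sect. 9.2.5.

```python
# A minimal sketch of UTASTAR steps 2-3 (not a full implementation).
# coef[k] gives u[g(a_k)] as a linear form in the variables w_ij;
# rel[k] is '>' (preference) or '=' (indifference) between a_k and a_{k+1}.
import numpy as np
from scipy.optimize import linprog

def utastar_lp(coef, rel, delta=0.05):
    coef = np.asarray(coef, dtype=float)
    m, W = coef.shape
    n_var = W + 2 * m                            # w's, then sigma+, then sigma-
    c = np.r_[np.zeros(W), np.ones(2 * m)]       # [min] sum of sigma+ and sigma- (9.21)

    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for k, r in enumerate(rel):
        row = np.zeros(n_var)                    # Delta(a_k, a_{k+1}) as in (9.20)
        row[:W] = coef[k] - coef[k + 1]
        row[W + k], row[W + m + k] = -1.0, 1.0           # -sigma+(a_k) + sigma-(a_k)
        row[W + k + 1], row[W + m + k + 1] = 1.0, -1.0   # +sigma+(a_{k+1}) - sigma-(a_{k+1})
        if r == '>':
            A_ub.append(-row); b_ub.append(-delta)       # Delta >= delta
        else:
            A_eq.append(row); b_eq.append(0.0)           # Delta = 0
    norm = np.zeros(n_var); norm[:W] = 1.0
    A_eq.append(norm); b_eq.append(1.0)                  # sum of all w_ij = 1

    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:W], res.fun                            # w estimates and z*
```

Step 4 then amounts to re-running the same LP with the objective replaced, in turn, by the maximization of each criterion weight (9.22), after adding the constraint (9.23) on the total error.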

A comparison analysis between the UTA and UTASTAR algorithms is presented in [128] through a variety of experimental data. The UTASTAR method provided better results on a number of comparison indicators, such as:

  1. The number of the necessary simplex iterations for arriving at the optimal solution.

  2. Kendall’s τ between the initial weak order and the one produced by the estimated model.

  3. The minimized criterion z (sum of errors) taken as an indicator of the dispersion of the observations.

2.4 Robustness Analysis

UTA-based methods include robustness analysis to take account of the gap between the DM’s “true” model and the model resulting from the disaggregation computational mechanism. Roy [109] considers robustness as an enabling tool for decision analysts to resist the phenomena of approximations and ignorance zones. It should be emphasized that robustness refers mainly to the decision model, in the light of the assertion “robust models produce a fortiori robust results”.

However, robustness should also refer to the results and the decision support activities (e.g. conclusions, argumentation). UTA methods use LP as the main inference mechanism, and their robustness analysis is built on it. In this spirit, several UTA-type methods have been developed, such as UTA-GMS [39], GRIP [31], and RUTA [64], which provide the DM with robust conclusions; Extreme Ranking Analysis [62], which determines the extreme ranking positions taken by the actions; and, finally, robustness measurement control based on Monte Carlo sampling techniques (see [60, 61] for stochastic ordinal regression and [41] for entropy measurement control).

Additional developments of robustness analysis in the context of UTA-type methods can be found in [17, 40, 63].

As presented in the previous section, in the UTA models, robustness refers to the post/near-optimality analysis. In the context of preference disaggregation approaches, Siskos and Grigoroudis [125] propose a general methodological framework for applying robustness analysis (Fig. 9.6).

Fig. 9.6
figure 6

Robustness analysis in preference disaggregation approaches [125]

The assessment of the robustness measures may depend on the post-optimality analysis results, and especially on the form and the extent of the polyhedron of the LP (9.14) or the LP (9.21). In particular, the observed variance in the post-optimality matrix indicates the degree of instability of the results. Following this approach, Siskos and Grigoroudis [125] proposed an Average Stability Index (ASI) based on the average of the normalized standard deviation of the estimated values \(u_{i}(g_{i}^{{\ast}})\) [42]. Instead of exploring only the extreme values \(u_{i}(g_{i}^{{\ast}})\), the post-optimality analysis may investigate every value \(u_{i}(g_{i}^{j})\) of each criterion. In this case, during the post-optimality stage, T LPs are formulated and solved, which maximize and minimize repeatedly \(u_{i}(g_{i}^{j})\), and the ASI for the i-th criterion is assessed as follows:

$$\displaystyle{ ASI(i) = 1 - \frac{1} {\alpha _{i} - 1}\sum \limits _{j=1}^{\alpha _{i}-1}\frac{\sqrt{T\sum \limits _{k=1 }^{T }(u_{i }^{jk })^{2 } - \left (\sum \limits _{k=1 }^{T }u_{i }^{jk } \right ) ^{2}}} { \frac{T} {\alpha _{i}-1}\sqrt{\alpha _{i } - 2}} }$$
(9.24)

where \(T = 2\sum _{i}(\alpha _{i} - 1)\) and \(u_{i}^{jk}\) is the estimated value of \(u_{i}(g_{i}^{j})\) in the k-th post-optimality analysis LP (\(j = 1,2,\ldots,\alpha _{i}\)).
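A direct transcription of formula (9.24) is sketched below, assuming the post-optimality results for criterion i are stored in a matrix with one row per LP solved (names are illustrative):

```python
# Sketch of the Average Stability Index (9.24) for criterion i.
# U is a (T, alpha_i - 1) matrix with U[k, j] = u_i^{jk}, the value estimated
# for the j-th level of criterion i in the k-th post-optimality LP.
import numpy as np

def asi(U):
    T, p = U.shape                    # p = alpha_i - 1 value levels considered
    num = np.sqrt(T * np.sum(U**2, axis=0) - np.sum(U, axis=0)**2)  # one term per j
    den = (T / p) * np.sqrt(p - 1)    # T/(alpha_i - 1) * sqrt(alpha_i - 2)
    return 1.0 - np.mean(num / den)   # averaging over j divides by alpha_i - 1
```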

The global robustness measure may be assessed as the average of the individual ASI(i) values. Since ASI measures are normalized in the interval [0, 1], high levels of robustness are achieved when ASI is close to 1. However, if the analyst is not satisfied with the value of the ASI measures, several alternative rules of robustness analysis may be applied, including new global preference judgments, enumeration and management of the hyperpolyhedron vertices in post-optimality analysis, new preference relations on the set A during the extrapolation phase, etc. (see [125] for more details).

2.5 A Numerical Example

The implementation of the UTASTAR algorithm is illustrated by a practical example taken from [128]. The problem concerns a DM who wishes to analyze the choice of transportation means during the peak hours (home-work place). Suppose that the DM is interested only in the following three criteria:

  1. price (in monetary units),

  2. time of journey (in minutes), and

  3. comfort (possibility to have a seat).

The evaluation in terms of the previous criteria is presented in Table 9.1, where it should be noted that the following qualitative scale has been used for the comfort criterion: 0 (no chance of finding a seat), + (little chance of finding a seat), ++ (great chance of finding a seat), and +++ (seat assured). Also, the last column of Table 9.1 shows the DM’s ranking with respect to the five alternative means of transportation.

Table 9.1 Criteria values and ranking of the DM

The first step of UTASTAR, as presented in the previous section, consists of making explicit the utilities of the five alternatives. For this reason the following scales have been chosen:

$$\displaystyle\begin{array}{rcl} [g_{1{\ast}},g_{1}^{{\ast}}]& =& [30,16,2] {}\\ {}[g_{2{\ast}},g_{2}^{{\ast}}]& =& [40,30,20,10] {}\\ {}[g_{3{\ast}},g_{3}^{{\ast}}]& =& [0,+,++,+ + +] {}\\ \end{array}$$

Using linear interpolation for the first criterion according to formula (9.10), the value of each alternative may be written as:

$$\displaystyle{ \begin{array}{l} u[\mathbf{g}(\mbox{ RER})] = 0.07u_{1}(16) + 0.93u_{1}(2) + u_{2}(10) + u_{3}(+) \\ u[\mathbf{g}(\mbox{ METRO1})] = 0.14u_{1}(16) + 0.86u_{1}(2) + u_{2}(20) + u_{3}(++) \\ u[\mathbf{g}(\mbox{ METRO2})] = u_{1}(2) + u_{2}(20) + u_{3}(0) = u_{1}(2) + u_{2}(20) \\ \begin{array}{rl} u[\mathbf{g}(\mbox{ BUS})]& = 0.29u_{1}(16) + 0.71u_{1}(2) + u_{2}(40) + u_{3}(0) \\ & = 0.29u_{1}(16)\! +\! 0.71u_{1}(2)\end{array} \\ u[\mathbf{g}(\mbox{ TAXI})] = u_{1}(30) + u_{2}(30) + u_{3}(+ + +) = u_{2}(30) + u_{3}(+ + +)\end{array} }$$

where the following normalization conditions for the marginal value functions have been used: \(u_{1}(30) = u_{2}(40) = u_{3}(0) = 0\).

Also, according to formula (9.19), the global value of the alternatives may be expressed in terms of the variables w ij :

$$\displaystyle\begin{array}{rcl} & & u[\mathbf{g}(\mbox{ RER})] = w_{11} + 0.93w_{12} + w_{21} + w_{22} + w_{23} + w_{31} {}\\ & & u[\mathbf{g}(\mbox{ METRO1})] = w_{11} + 0.86w_{12} + w_{21} + w_{22} + w_{31} + w_{32} {}\\ & & u[\mathbf{g}(\mbox{ METRO2})] = w_{11} + w_{12} + w_{21} + w_{22} {}\\ & & u[\mathbf{g}(\mbox{ BUS})] = w_{11} + 0.71w_{12} {}\\ & & u[\mathbf{g}(\mbox{ TAXI})] = w_{21} + w_{31} + w_{32} + w_{33} {}\\ \end{array}$$

According to the second step of the UTASTAR algorithm, the following expressions are written, for each pair of consecutive actions in the ranking:

$$\displaystyle{ \begin{array}{l} \begin{array}{rl} \Delta (\mbox{ RER, METRO1})& = 0.07w_{12} + w_{23} - w_{32} -\sigma _{RER}^{+} +\sigma _{ RER}^{-} \\ &\quad +\sigma _{ METRO1}^{+} -\sigma _{METRO1}^{-}\end{array} \\ \begin{array}{rl} \Delta (\mbox{ METRO1, METRO2})& = -0.14w_{12} + w_{31} + w_{32} -\sigma _{METRO1}^{+} \\ &\quad +\sigma _{ METRO1}^{-} +\sigma _{ METRO2}^{+} -\sigma _{METRO2}^{-}\end{array} \\ \begin{array}{rl} \Delta (\mbox{ METRO2, BUS})& = 0.29w_{12} + w_{21} + w_{22} \\ & -\sigma _{METRO2}^{+} +\sigma _{ METRO2}^{-} +\sigma _{ BUS}^{+} -\sigma _{BUS}^{-}\end{array} \\ \begin{array}{rl} \Delta (\mbox{ BUS, TAXI})& = w_{11} + 0.71w_{12} - w_{21} - w_{31} - w_{32} - w_{33} \\ & -\sigma _{BUS}^{+} +\sigma _{ BUS}^{-} +\sigma _{ TAXI}^{+} -\sigma _{TAXI}^{-}\end{array}\end{array} }$$

Based on the aforementioned expressions, an LP according to (9.21) is formulated, with δ = 0.05. An optimal solution is: \(w_{11} = 0.5\), \(w_{21} = 0.05\), \(w_{23} = 0.05\), \(w_{33} = 0.4\) (all other \(w_{ij} = 0\)) with \([min]z = z^{{\ast}} = 0\). This solution corresponds to the marginal value functions presented in Table 9.2 and produces a ranking which is consistent with the DM’s initial weak order.
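A quick arithmetic check (a sketch, using the Δ expressions given above) confirms that this solution satisfies the constraints of LP (9.21) for δ = 0.05; since the total error is zero and Δ(METRO1, METRO2) equals zero, METRO1 and METRO2 are treated as indifferent in the reference ranking.

```python
# Verify the reported optimum against the Delta expressions of step 2.
w = {'w11': 0.5, 'w12': 0.0, 'w21': 0.05, 'w22': 0.0, 'w23': 0.05,
     'w31': 0.0, 'w32': 0.0, 'w33': 0.4}

d_rer_m1  = 0.07*w['w12'] + w['w23'] - w['w32']                     # >= 0.05 (preference)
d_m1_m2   = -0.14*w['w12'] + w['w31'] + w['w32']                    # =  0    (indifference)
d_m2_bus  = 0.29*w['w12'] + w['w21'] + w['w22']                     # >= 0.05 (preference)
d_bus_tax = (w['w11'] + 0.71*w['w12'] - w['w21']
             - w['w31'] - w['w32'] - w['w33'])                      # >= 0.05 (preference)

print(d_rer_m1, d_m1_m2, d_m2_bus, d_bus_tax)   # 0.05, 0.0, 0.05, 0.05 (up to rounding)
print(sum(w.values()))                          # normalization: 1.0
```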

Table 9.2 Marginal value functions (initial solution)

It should be emphasized that this solution is not unique. Through post-optimality analysis (step 4), the UTASTAR algorithm searches for multiple optimal solutions, or more generally, for near optimal solutions corresponding to error values between \(z^{{\ast}}\) and \(z^{{\ast}}+\varepsilon\). For this reason, the error objective should be transformed to a constraint of the type (9.23).

In the presented numerical example, the initial LP has multiple optimal solutions, since \(z^{{\ast}} = 0\). Thus, in the post-optimality analysis step, the algorithm searches for more characteristic solutions, which maximize the expressions (9.22), i.e. the weights of each criterion. Furthermore, in this particular case we have:

$$\displaystyle{z^{{\ast}} = 0\Longleftrightarrow\sigma ^{+}(a_{ k}) =\sigma ^{-}(a_{ k}) = 0\ \forall \ k}$$

so the error variables may be excluded from the LPs of the post-optimality analysis. Table 9.3 presents the formulation of the LP that has to be solved during this step.

Table 9.3 Linear programming formulation (post-optimality analysis)

The solutions obtained during post-optimality analysis are presented in Table 9.4. The average of these three solutions is also calculated in the last row of Table 9.4. This centroid is taken as the final utility function, since it is considered a more representative solution for this particular problem.

Table 9.4 Post-optimality analysis and final solution

This final solution corresponds to the marginal value functions presented in Table 9.5. Also, the utilities for each alternative are calculated as follows:

$$\displaystyle\begin{array}{rcl} & & u[\mathbf{g}(\mbox{ RER})] = 0.856 {}\\ & & u[\mathbf{g}(\mbox{ METRO1})] = 0.523 {}\\ & & u[\mathbf{g}(\mbox{ METRO2})] = 0.523 {}\\ & & u[\mathbf{g}(\mbox{ BUS})] = 0.473 {}\\ & & u[\mathbf{g}(\mbox{ TAXI})] = 0.152 {}\\ \end{array}$$

where it is obvious that these values are consistent with the DM’s weak order.

Table 9.5 Marginal value functions (final solution)

These marginal utilities may be normalized by dividing every value \(u_{i}(g_{i}^{j})\) by \(u_{i}(g_{i}^{{\ast}})\). In this case the additive utility function can be written as:

$$\displaystyle{u(\mathbf{g}) = 0.506u_{1}(g_{1}) + 0.342u_{2}(g_{2}) + 0.152u_{3}(g_{3})}$$

where the normalized marginal value functions are presented in Fig. 9.7.

Fig. 9.7
figure 7

Normalized marginal value functions

3 Variants of the UTA Method

3.1 Alternative Optimality Criteria

Several variants of the UTA method have been developed, incorporating different forms of global preference or different forms of optimality criteria used in the linear programming formulation.

An extension of the UTA methods, where u[g(a)] is inferred from pairwise comparisons, is proposed by Jacquet-Lagrèze and Siskos [56]. The subjective preference obtained by pairwise judgments is most often not transitive, and thus, the modified model may be written as the following LP:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]F =\sum _{(a,b):a\succ b}\lambda _{ab}z_{ab} +\sum _{(a,b):a\sim b}\lambda _{ab}z_{ba} & \\ \mbox{ subject to} & \\ \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + z_{ab} \geq 0 &\mbox{ if}\ a \succ b \\ \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + z_{ab} - z_{ba} = 0&\mbox{ if}\ a \sim b\ (\Rightarrow b \sim a) \\ u_{i}(g_{i}^{j+1}) - u_{i}(g_{i}^{j}) \geq s_{i} &\forall i,j \\ \sum _{i=1}^{n}u_{ i}(g_{i}^{{\ast}}) = 1 & \\ u_{i}(g_{i{\ast}}) = 0,u_{i}(g_{i}^{j}) \geq 0, &\forall i,j\ \\ z_{ab} \geq 0 &\ \forall (a,b) \in R\end{array} \right.& &{}\end{array}$$
(9.25)

with \(\lambda _{ab}\) being a non-negative weight reflecting the degree of confidence in the judgment between a and b.

An alternative optimality criterion would be to minimize the number of pairs of the order R provided by the DM that are violated in the ranking R′ given by the model, which is equivalent to maximizing Kendall’s τ between the two rankings. This extension is given by the mixed-integer LP (9.26), where \(\gamma _{ab} = 0\) if \(u[\mathbf{g}(a)] - u[\mathbf{g}(b)] \geq \delta\) for a pair (a, b) ∈ R and the judgment is respected; otherwise \(\gamma _{ab} = 1\) and the judgment is violated. Thus, the objective function in this LP represents the number of violated pairs in the overall preference aggregated by u(g).

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]F =\sum _{(a,b)\in R}\gamma _{ab} \Leftrightarrow [max]\tau (R,R') & \\ \mbox{ subject to} & \\ \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + M \cdot \gamma _{ab} \geq \delta &\forall (a,b) \in R \\ u_{i}(g_{i}^{j+1}) - u_{i}(g_{i}^{j}) \geq s_{i} &\forall i,j \\ \sum _{i=1}^{n}u_{ i}(g_{i}^{{\ast}}) = 1 & \\ u_{i}(g_{i{\ast}}) = 0,u_{i}(g_{i}^{j}) \geq 0 &\forall i,j \\ \gamma _{\alpha b} = 0\ \mbox{ or}\ 1 &\forall (a,b) \in R\end{array} \right.& &{}\end{array}$$
(9.26)

where M is a large number. Beuthe and Scannella [11] propose to handle separately the preference and indifference judgments, and modify the previous LP using the constraints:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + M \cdot \gamma _{ab} \geq \delta &\mbox{ if }a \succ b \\ \left.\begin{array}{ll} \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + M \cdot \gamma _{ab} \geq 0& \\ \sum _{i=1}^{n}\{u_{ i}[g_{i}(a)] - u_{i}[g_{i}(b)]\} + M \cdot \gamma _{ba} \geq 0&\\ \end{array} \right \}&\mbox{ if }a \sim b \end{array} \right.& &{}\end{array}$$
(9.27)

The assumption of monotonicity of preferences, in the context of separable value functions, means that the marginal values are monotonic functions of the criteria. This assumption, although widely used, is sometimes not applicable to real-world situations. One way to deal with non-monotonic preferences is to divide the range of the criteria into intervals, so that the preferences are monotonic in each interval, and then treat each interval separately [67]. In the same spirit, Despotis and Zopounidis [22] present a variation of the UTASTAR method for the assessment of non-monotonic marginal value functions. In this model, the range of each criterion is divided into two intervals (see also Fig. 9.8):

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{l} G_{i}^{1} =\{ g_{i^{{\ast}}} = g_{i}^{1},g_{i}^{2},\ldots,g_{i}^{p_{i}} = d_{i}\} \\ G_{i}^{2} =\{ d_{i} = g_{i}^{p_{i}},g_{i}^{p_{i}+1},\ldots,g_{i}^{p_{i}+q_{i}} = g_{i}^{{\ast}}\}\\ \end{array} \right.& &{}\end{array}$$
(9.28)

where d i is the most desirable value of g i , and the parameters p i and q i are determined according to the dispersion of the input data; of course it holds that \(p_{i} + q_{i} =\alpha _{i}\). In this approach, the main modification concerns the assessment of the decision variables w ij of the LP (9.21). Hence, formula (9.19) becomes:

$$\displaystyle\begin{array}{rcl} u_{i}(g_{i}^{j}) = \left \{\begin{array}{ll} \sum _{t=1}^{j-1}w_{ it} &\mbox{ if}\ 1 <j \leq p_{i} \\ \sum _{t=1}^{p_{i}-1}w_{ it} -\sum _{t=p_{i}}^{j-1}w_{ it}&\mbox{ if}\ p_{i} <j \leq \alpha _{i} \end{array} \right.& &{}\end{array}$$
(9.29)

without considering the conditions \(u_{i}(g_{i}^{1}) = 0\).

Fig. 9.8
figure 8

A non-monotonic partial utility function [22]
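A small sketch of formula (9.29) follows, rebuilding a non-monotonic marginal value function from the step variables \(w_{ij}\) for an illustrative peak position \(p_{i}\) (names and numbers are hypothetical):

```python
# Sketch of formula (9.29): marginal values increase up to the most desirable
# level g_i^{p_i} and decrease afterwards; u_i(g_i^1) = 0 is taken as the base
# of the cumulative sums for illustration.
import numpy as np

def marginal_values(w, p):
    """w: steps w_i1, ..., w_i(alpha_i - 1); p: 1-based index of the peak g_i^{p_i}."""
    u = np.empty(len(w) + 1)
    u[0] = 0.0
    for j in range(1, len(w) + 1):        # u[j] corresponds to u_i(g_i^{j+1})
        u[j] = u[j - 1] + w[j - 1] if j < p else u[j - 1] - w[j - 1]
    return u

print(marginal_values(np.array([0.1, 0.3, 0.2, 0.15]), p=3))
# rises over the first two steps, then decreases: [0. , 0.1, 0.4, 0.2, 0.05]
```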

Another extension of the UTA methods refers to the intensity of the DM’s preferences, similar to the context proposed in [143]. In this case, a series of constraints may be added during the LP formulation. For example, if the preference of alternative a over alternative b is stronger than the preference of b over c, then the following condition may be written:

$$\displaystyle\begin{array}{rcl} [u'[\mathbf{g}(a)] - u'[\mathbf{g}(b)]] - [u'[\mathbf{g}(b)] - u'[\mathbf{g}(c)]] \geq \phi & &{}\end{array}$$
(9.30)

where ϕ > 0 is a measure of preference intensity and u′(g) is given by formula (9.8). Thus, using formula (9.11), the following constraint should be added in LP (9.14):

$$\displaystyle\begin{array}{rcl} \Delta (a,b) - \Delta (b,c) \geq \phi & &{}\end{array}$$
(9.31)

In general, if the DM wishes to expand these preferences to the whole set of alternatives, a minimum number of m − 2 constraints of type (9.31) is required.

Despotis and Zopounidis [22] consider the case where the DM ranks the alternatives using an explicit overall index I. Thus, formula (9.12) may be replaced by the following condition:

$$\displaystyle\begin{array}{rcl} \Delta (a_{k},a_{k+1}) = I_{k} - I_{k+1}\ \forall k = 1,2,\ldots,m - 1& &{}\end{array}$$
(9.32)

Besides establishing the succession of the alternatives in the preference ranking, these constraints state that the difference in global value of any successive alternatives in the ranking should be consistent with the difference of their evaluations on the ratio scale.

In the same context, Oral and Kettani [103] propose the optimization of lexicographic criteria without discretisation of the criteria scales \(G_{i}\), where a ratio scale is used in order to express the intensity of preferences.

Other variants of the UTA method concerning different forms of global preference are mainly focused on:

  • additional properties of the assessed value functions, like concavity [22];

  • construction of fuzzy outranking relations based on multiple value functions provided by UTA’s post-optimality analysis [117].

The dimensions of the aforementioned UTA models affect the computational complexity of the formulated LPs. In most cases it is preferable to solve the dual LP due to the structure of these LPs [56]. Table 9.6 summarizes the size of all LPs presented in the previous sections, where | P | and | I | denote the number of preference and indifference relations respectively, considering all possible pairwise comparisons in R. Also, it should be noted that LP (9.26) has \(m(m - 1)/2\) binary variables.

Table 9.6 LP size of UTA models

3.2 Meta-UTA Techniques

Other techniques, named meta-UTA, aim at improving the value function with respect to near-optimality analysis or at exploiting it for decision support.

Despotis et al. [23] propose to minimize the dispersion of errors (Tchebycheff criterion) within UTASTAR’s step 4 (see Sect. 9.2.3). In case of a strictly positive error \(z^{{\ast}}\), the aim is to investigate the existence of near optimal solutions of the LP (9.21) which give rankings R′ such that \(\tau (R',R)>\tau (R^{{\ast}},R)\), with \(R^{{\ast}}\) being the ranking corresponding to the optimal value functions. The experience with the model [21] confirms that, apart from the total error \(z^{{\ast}}\), it is also the dispersion of the individual errors that is crucial for \(\tau (R^{{\ast}},R)\). Therefore, in the proposed post-optimality analysis, the difference between the maximum \((\sigma _{max})\) and the minimum error is minimized. Since the individual errors are non-negative, this requirement can be satisfied by minimizing the maximum individual error (the \(L_{\infty }\) norm) according to the following LP:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]\sigma _{max} & \\ \mbox{ subject to} & \\ \mbox{ all the constraints of LP (9.21)} & \\ \sum _{k=1}^{m}[\sigma ^{+}(a_{ k}) +\sigma ^{-}(a_{ k})] \leq z^{{\ast}}+\varepsilon & \\ \left.\begin{array}{ll} \sigma _{max} -\sigma ^{+}(a_{k}) \geq 0& \\ \sigma _{max} -\sigma ^{-}(a_{k}) \geq 0& \end{array} \right \}&\forall k \\ \sigma _{max} \geq 0 &\\ \end{array} \right.& &{}\end{array}$$
(9.33)

With the incorporation of the model (9.33) in UTASTAR, the value function assessment process becomes a lexicographic optimization process. That is, the final solution is obtained by minimizing successively the L 1 and the \(L_{\infty }\) norms.

Another approach concerning meta-UTA techniques refers to the UTAMP models. Beuthe and Scannella [9, 11] note that the values given to the parameters s and δ in the UTA and UTASTAR methods, respectively, influence the results as well as the predictive quality of the models. Hence, in the framework of the research by Srinivasan and Shocker [143], they look for optimal values of s and/or δ both in the case of positive errors (\(z^{{\ast}}> 0\)) and when UTA gives a sum of errors equal to zero (\(z^{{\ast}} = 0\)).

In the post-optimality analysis step of UTASTAR (see Sect. 9.2.3), the UTAMP1 model maximizes δ, which is the minimum difference between the global values of two consecutive reference actions. The name of the model denotes that, on the basis of UTA, maximizing δ leads to a better identification of the preference relations between actions.

Beuthe and Scannella [9] have also proposed to maximize the sum (δ + s) in order to stress not only the differences of utilities between actions, but also the differences between values at successive bounds. This more general approach was named UTAMP2. Note that s corresponds to the minimum marginal value step \(w_{ij}\) in the UTASTAR algorithm. Although the simple addition of these parameters is legitimate, since both of them are defined in the same value units, Beuthe and Scannella [11] note that a weighted sum formula may also be considered.

The UTAMP models, as well as the UTASTAR method, are based on the idea of centrality, although these approaches use a different interpretation of this notion. Bous et al. [13] propose an alternative method where the final solution is obtained by using an optimality criterion that directly implements the idea of centrality. They propose the ACUTA method, which is based on the computation of the analytic center of a polyhedron. In this approach, the product of the slack variables of constraints (9.12)–(9.13), or equivalently the sum of their logarithms, is maximized. This non-linear objective function guarantees the uniqueness of the provided solution.

3.3 Stochastic UTA Method

Within the framework of multicriteria decision-aid under uncertainty, Siskos [118] developed a specific version of UTA (Stochastic UTA), in which the aggregation model to infer from a reference ranking is an additive utility function of the form:

$$\displaystyle\begin{array}{rcl} u(\mathbf{d}^{a}) =\sum _{ i=1}^{n}\sum _{ j=1}^{\alpha _{i} }d_{i}^{a}(g_{ i}^{j})u_{ i}(g_{i}^{j})& &{}\end{array}$$
(9.34)

subject to normalization constraints (9.7), where \(d_{i}^{a}\) is the distributional evaluation of action a on the i-th criterion, \(d_{i}^{a}(g_{i}^{j})\) is the probability that the performance of action a on the i-th criterion is \(g_{i}^{j}\), \(u_{i}(g_{i}^{j})\) is the marginal value of the performance \(g_{i}^{j}\), \(\mathbf{d}^{a}\) is the vector of distributional evaluations of action a, and \(u(\mathbf{d}^{a})\) is the global utility of action a (see also Fig. 9.9).

Fig. 9.9
figure 9

Distributional evaluation and marginal value function
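As a small illustration of formula (9.34), the expected-utility aggregation can be sketched as a sum of dot products between each distributional evaluation and the corresponding marginal values (all data below are hypothetical):

```python
# Sketch of the stochastic UTA aggregation (9.34): the global utility of an
# action is the expectation of its marginal values under the distributions d_i^a.
import numpy as np

def expected_utility(dists, marginals):
    """dists[i][j]     : probability that the action performs at level g_i^j
       marginals[i][j] : marginal value u_i(g_i^j)"""
    return sum(float(np.dot(d, u)) for d, u in zip(dists, marginals))

# Two criteria, three levels each; each distribution sums to 1 (formula 9.35)
dists     = [np.array([0.2, 0.5, 0.3]), np.array([0.0, 0.4, 0.6])]
marginals = [np.array([0.0, 0.3, 0.6]), np.array([0.0, 0.1, 0.4])]
print(expected_utility(dists, marginals))   # 0.33 + 0.28 = 0.61
```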

This global utility is of the von Neumann-Morgenstern form [66], in the case of discrete g i , where:

$$\displaystyle\begin{array}{rcl} \sum _{j=1}^{\alpha _{i} }d_{i}^{a}(g_{ i}^{j}) = 1& &{}\end{array}$$
(9.35)

Of course, the additive utility function (9.34) has the same properties as the value function:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} u(\mathbf{d}^{a})> u(\mathbf{d}^{b}) \Leftrightarrow a \succ b&\mbox{ (preference)} \\ u(\mathbf{d}^{a}) = u(\mathbf{d}^{b}) \Leftrightarrow a \sim b&\mbox{ (indifference) }\end{array} \right.& &{}\end{array}$$
(9.36)

Similarly to the cases of UTA and UTASTAR described in Sects. 9.2.2 and 9.2.3, the stochastic UTA method disaggregates a ranking of reference actions [122]. The algorithmic procedure can be expressed in the following way:

Step 1: :

Express the global expected utilities of the reference actions \(u(\mathbf{d}^{a_{k}})\), \(k = 1,2,\ldots,m\), in terms of the variables:

$$\displaystyle\begin{array}{rcl} w_{ij} = u_{i}(g_{i}^{j+1}) - u_{ i}(g_{i}^{j}) \geq 0& & {}\end{array}$$
(9.37)
Step 2: :

Introduce two error functions \(\sigma ^{+}\) and \(\sigma ^{-}\) by writing the following expressions for each pair of consecutive actions in the ranking:

$$\displaystyle\begin{array}{rcl} \begin{array}{rl} \Delta (a_{k},a_{k+1}) =&u(\mathbf{d}^{a_{k}}) -\sigma ^{+}(a_{ k}) +\sigma ^{-}(a_{ k}) \\ & - u(\mathbf{d}^{a_{k+1}}) +\sigma ^{+}(a_{ k+1}) -\sigma ^{-}(a_{ k+1})\end{array} & & {}\end{array}$$
(9.38)
Step 3: :

Solve the LP (9.21) by using formulae (9.37) and (9.38).

Step 4: :

Test the existence of multiple or near optimal solutions.

Of course, the ideas employed in all variants of the UTA method are also applicable in the same way in the case of the stochastic UTA.

3.4 UTA-Type Sorting Methods

The extension of the UTA method to the case of a discriminant analysis model was first discussed by Jacquet-Lagrèze and Siskos [56]. The aim is to infer u from assignment examples in the context of problem statement β [108]. In the presence of two classes, if the model is without errors, the following inequalities must hold:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} a \in A_{1} \Leftrightarrow u[\mathbf{g}(a)] \geq u_{0} \\ a \in A_{2} \Leftrightarrow u[\mathbf{g}(a)] <u_{0}\\ \end{array} \right.& &{}\end{array}$$
(9.39)

with \(u_{0}\) being the acceptance/rejection level, which must be found in order to distinguish the set of accepted actions \(A_{1}\) from the set of rejected actions \(A_{2}\).

Introducing the error variables \(\sigma (a)\), a ∈ A R , the objective is to minimize the sum of deviations from the threshold u 0 for the ill classified actions (see Fig. 9.10). Hence, u(g) can be estimated by means of the LP:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} \mbox{ [min]}F =\sum _{a\in A_{R}}\sigma (a) & \\ \mbox{ subject to} & \\ \sum _{i=1}^{n}u_{ i}[g_{i}(a)] - u_{0} +\sigma (a) \geq 0 &\forall \ a \in A_{1} \\ \sum _{i=1}^{n}u_{ i}[g_{i}(a)] - u_{0} -\sigma (a) \leq 0 &\forall \ a \in A_{2} \\ u_{i}(g_{i}^{j+1}) - u_{i}(g_{i}^{j}) \geq s_{i} &\forall i\mbox{ and }j \\ \sum _{i=1}^{n}u_{ i}(g_{i}^{{\ast}}) = 1 & \\ u_{i}(g_{i^{{\ast}}}) = 0,u_{0} \geq 0,u_{i}(g_{i}^{j}) \geq 0,\sigma (a) \geq 0&\forall \ a \in A_{R},\forall \ i\ \mbox{ and}\ j\end{array} \right.& &{}\end{array}$$
(9.40)
Fig. 9.10
figure 10

Distribution of the actions A 1 and A 2 on u(g) [56]

In the general case, the DM’s evaluation is expressed in terms of a classification of the reference alternatives into homogeneous ordinal groups \(A_{1} \succ A_{2} \succ \ldots \succ A_{q}\) (i.e. group \(A_{1}\) includes the most preferred alternatives, whereas group \(A_{q}\) includes the least preferred ones). Within this context, the assessed additive value model will be consistent with the DM’s global judgment, if the following conditions are satisfied:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} u[\mathbf{g}(a)] \geq u_{1} & \forall \ a \in A_{1} \\ u_{l} \leq u[\mathbf{g}(a)] <u_{l-1} & \forall \ a \in A_{l}\quad (l = 2,3,\ldots,q - 1) \\ u[\mathbf{g}(a)] <u_{q-1} & \forall \ a \in A_{q}\end{array} \right.& &{}\end{array}$$
(9.41)

where \(u_{1}> u_{2}> \cdots> u_{q-1}\) are thresholds defined in the global value scale [0, 1] to discriminate the groups, and \(u_{l}\) is the lower bound of group \(A_{l}\).
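A sketch of the resulting assignment rule (9.41) is given below; the thresholds and values used are purely illustrative:

```python
# Sketch of the UTADIS assignment rule (9.41): an action is sorted into group A_l
# by comparing its global value u[g(a)] with the thresholds u_1 > ... > u_{q-1}.
def assign(u_value, thresholds):
    """thresholds: [u_1, ..., u_{q-1}] in decreasing order; returns group index l (1 = best)."""
    for l, t in enumerate(thresholds, start=1):
        if u_value >= t:
            return l
    return len(thresholds) + 1      # below u_{q-1}: the least preferred group A_q

print(assign(0.72, [0.65, 0.40]))   # group 1
print(assign(0.55, [0.65, 0.40]))   # group 2
print(assign(0.10, [0.65, 0.40]))   # group 3
```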

This approach is named UTADIS method (UTilités Additives DIScriminantes) and is presented by Devaud et al. [24] (see also [28, 53, 152, 158]). Similarly to the UTASTAR method, two error variables are employed in the UTADIS method to measure the differences between the model’s results and the predefined classification of the reference alternatives. The additive value model is developed to minimize these errors using a linear programming formulation of type (9.40). In this case, the two types of errors are defined as follows:

  1. \(\sigma _{k}^{+} =\max \{ 0,u_{l} - u[\mathbf{g}(a_{k})]\}\ \ \forall a_{k} \in A_{l}\ (l = 1,2,\ldots,q - 1)\) represents the error associated with the violation of the lower bound \(u_{l}\) of a group \(A_{l}\) by an alternative \(a_{k} \in A_{l}\),

  2. \(\sigma _{k}^{-} =\max \{ 0,u[\mathbf{g}(a_{k})] - u_{l-1}\}\ \ \forall a_{k} \in A_{l}\ (l = 2,3,\ldots,q)\) represents the error associated with the violation of the upper bound \(u_{l-1}\) of a group \(A_{l}\) by an alternative \(a_{k} \in A_{l}\).

Recently, several new variants of the original UTADIS method have been proposed (UTADIS I, II, III) to consider different optimality criteria during the development of the additive value classification model [28, 152, 158]. The UTADIS I method considers both the minimization of the classification errors, as well as the maximization of the distances of the correctly classified alternatives from the value thresholds. The objective in the UTADIS II method is to minimize the number of misclassified alternatives, whereas UTADIS III combines the minimization of the misclassified alternatives with the maximization of the distances of the correctly classified alternatives from the value thresholds.

In the same context, Zopounidis and Doumpos [155] proposed the MHDIS method (Multi-group Hierarchical DIScrimination), extending the preference disaggregation analysis framework of the UTADIS method to complex sorting/classification problems involving multiple groups. MHDIS addresses sorting problems through a hierarchical (sequential) procedure, starting by discriminating group \(A_{1}\) from all the other groups \(\{A_{2},A_{3},\ldots,A_{q}\}\), and then proceeding to the discrimination between the alternatives belonging to the other groups. At each stage of this sequential/hierarchical process, two additive value functions are developed for the classification of the alternatives. Assuming that the classification of the alternatives should be made into q ordered classes \(A_{1} \succ A_{2} \succ \cdots \succ A_{q}\), 2(q − 1) additive value functions are developed in total. These value functions have the following additive form:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} u_{l}(\mathbf{g}) =\sum _{ i=1}^{n}u_{ li}(g_{i}) & \\ u_{\sim l}(\mathbf{g}) =\sum _{ i=1}^{n}u_{ \sim li}(g_{i})&\end{array} \right.& &{}\end{array}$$
(9.42)

where u l measures the value for the DM of a decision to assign an alternative into group A l , whereas the \(u_{\sim l}\) corresponds to the classification into the set of groups \(A_{\sim l} =\{ A_{l+1},A_{l+2},\ldots,A_{q}\}\) and both functions are normalized in the interval [0, 1].

The rules used to perform the classification of the alternatives have the following form:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{l} \mbox{ if }u_{1}(a_{k})> u_{\sim 1}(a_{k})\mbox{ then }a_{k} \in A_{1} \\ \ \ \mbox{ else if }u_{2}(a_{k})> u_{\sim 2}(a_{k})\mbox{ then }a_{k} \in A_{2} \\ \ \ \ \ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \cdots \\ \ \ \ \ \ \ \ \ \mbox{ else if }u_{q-1}(a_{k})> u_{\sim (q-1)}(a_{k})\mbox{ then }a_{k} \in A_{q-1} \\ \ \ \ \ \ \ \ \ \ \ \ \mbox{ else }a_{k} \in A_{q}\end{array} \right.& &{}\end{array}$$
(9.43)

The development of all value functions in the MHDIS method is performed through the solution of three mathematical programming problems at each stage l of the discrimination process (\(l = 1,2,\ldots,q - 1\)). Initially, an LP is solved to minimize the magnitude of the classification errors (in distance terms, similarly to the UTADIS approach). Then, a mixed-integer LP is solved to minimize the total number of misclassifications among those that occur after the solution of the initial LP, while retaining the correct classifications. Finally, a second LP is solved to maximize the clarity of the classification obtained from the solutions of the previous LPs.
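The hierarchical rule (9.43) itself is straightforward to apply once the value functions have been assessed; a minimal sketch (with illustrative names, the value functions passed as callables) is:

```python
# Sketch of the hierarchical MHDIS assignment rule (9.43), assuming the 2(q-1)
# value functions u_l and u_~l have already been assessed.
def mhdis_assign(a, stages):
    """stages: [(u_1, u_not_1), ..., (u_{q-1}, u_not_{q-1})]; returns group index (1 = best)."""
    for l, (u_l, u_not_l) in enumerate(stages, start=1):
        if u_l(a) > u_not_l(a):
            return l
    return len(stages) + 1          # otherwise the action falls into the last group A_q

# Toy functions for a 3-group problem (values chosen arbitrarily)
stages = [(lambda a: 0.7, lambda a: 0.3), (lambda a: 0.2, lambda a: 0.6)]
print(mhdis_assign("a1", stages))   # 1: u_1 > u_~1 already at the first stage
```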

3.5 Other Variants and Extensions

In all previous approaches, the value function was built in a one-step process by formulating an LP that requires only the DM’s global preferences. In some cases, however, it would be more appropriate to build such a function from a two-step questioning process, by dissociating the construction of the marginal value functions and the assessment of their respective scaling constants.

In the first step, the various marginal value functions are built outside the UTA algorithm. This construction may be facilitated, for instance, by proposing specific parametrical marginal value functions to the DM and asking him/her to choose the one that matches his/her preferences on that specific criterion. These functions should be normalized according to the conditions (9.4). Generally, the approaches applied in this construction step are:

  (a) techniques based on MAUT [67, 70],

  (b) the MACBETH method [35],

  (c) the Quasi-UTA method [12], which uses “recursive exponential” marginal value functions, and

  (d) the MIIDAS system (see Sect. 9.4), which combines artificial intelligence and visual procedures in order to extract the DM’s preferences [135].

In the second step, after the assessment of these value functions, the DM is asked to give a global ranking of the alternatives, in a similar way as in the basic UTA method. From this information, the problem may be formulated as an LP, in order to assess only the weighting factors p i of the criteria (scaling constants of the criteria). Through this approach, initially named the UTA II model [116], formula (9.11) becomes:

$$\displaystyle\begin{array}{rcl} \begin{array}{lll} \Delta (a_{k},a_{k+1})& =&\sum _{i=1}^{n}p_{ i}\{u_{i}[g_{i}(a_{k})] - u_{i}[g_{i}(a_{k+1})]\} \\ & & -\sigma ^{+}(a_{k}) +\sigma ^{-}(a_{k}) +\sigma ^{+}(a_{k+1}) -\sigma ^{-}(a_{k+1})\end{array} & &{}\end{array}$$
(9.44)

and the LP (9.14) is modified as follows:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]F =\sum _{ k=1}^{m}[\sigma ^{+}(a_{ k}) +\sigma ^{-}(a_{ k})] & \\ \mbox{ subject to} & \\ \left.\begin{array}{ll} \Delta (a_{k},a_{k+1}) \geq \delta &\mbox{ if}\ a_{k} \succ a_{k+1} \\ \Delta (a_{k},a_{k+1}) = 0&\mbox{ if}\ a_{k} \sim a_{k+1}\\ \end{array} \right \} & \forall k \\ \sum _{i=1}^{n}p_{ i} = 1 & \\ p_{i} \geq 0,\sigma ^{+}(a_{k}) \geq 0,\sigma ^{-}(a_{k}) \geq 0 &\ \forall \ i,k.\\ \end{array}\right.& &{}\end{array}$$
(9.45)
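Under the same assumptions as the UTASTAR sketch of Sect. 9.2.3, the weight-assessment LP (9.45) can be sketched as follows, with the (already assessed) marginal values of the reference actions stored in a matrix V with one row per action (all names are illustrative):

```python
# Sketch of the UTA II LP (9.45): only the criteria weights p_i are inferred.
import numpy as np
from scipy.optimize import linprog

def uta2_weights(V, rel, delta=0.05):
    """V: (m, n) matrix of marginal values u_i[g_i(a_k)] of the ranked reference actions;
       rel: list of '>' (preference) or '=' (indifference) between consecutive actions."""
    V = np.asarray(V, dtype=float)
    m, n = V.shape
    nv = n + 2 * m                                 # weights p_i, then sigma+, then sigma-
    c = np.r_[np.zeros(n), np.ones(2 * m)]         # [min] total error
    A_ub, b_ub, A_eq, b_eq = [], [], [], []
    for k, r in enumerate(rel):
        row = np.zeros(nv)
        row[:n] = V[k] - V[k + 1]                  # weighted value difference (9.44)
        row[n + k], row[n + m + k] = -1.0, 1.0     # -sigma+(a_k) + sigma-(a_k)
        row[n + k + 1], row[n + m + k + 1] = 1.0, -1.0
        if r == '>':
            A_ub.append(-row); b_ub.append(-delta)     # Delta >= delta
        else:
            A_eq.append(row); b_eq.append(0.0)         # Delta = 0
    A_eq.append(np.r_[np.ones(n), np.zeros(2 * m)]); b_eq.append(1.0)  # sum p_i = 1
    res = linprog(c, A_ub=A_ub or None, b_ub=b_ub or None,
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return res.x[:n], res.fun                      # inferred weights and optimal error
```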

The main principles of the UTA methods are also applicable in the specific field of multiobjective optimization, mainly in the field of linear programming with multiple objective functions. For instance, in the classical methods of Geoffrion et al. [35] and Zionts and Wallenius [150], the weights of the linear combinations of the objectives are inferred locally from trade-offs or pairwise judgments given by the DM at each iteration of the methods. Thus, these methods exploit in a direct way the DM’s value functions and seek the best compromise solution through successive maximization of these assessed value functions.

Stewart [145] proposed a procedure for pruning the decision alternatives using the UTA method. In this approach, a sequence of alternatives is presented to the DM, who places each newly presented alternative in rank order relative to the previously evaluated alternatives. This ranking of elements in a subset of the decision space is used to eliminate other alternatives from further consideration. In the same context, Jacquet-Lagrèze et al. [58] developed a disaggregation method, similar to UTA, to assess a whole value function of multiple objectives for linear programming systems. This methodology makes it possible to find compromise solutions and is mainly based on the following steps:

  1. Generation of a limited subset of feasible efficient solutions, as representative as possible of the efficient set.

  2. Assessment of an additive value function using the PREFCALC system (see Sect. 9.4).

  3. Optimization of the additive value function on the original set of feasible alternatives.

Finally, Siskos and Despotis [123], in the context of UTA-based approaches to multiobjective optimization problems, proposed the ADELAIS method. This is an interactive method that uses UTA iteratively, in order to optimize an additive value function within the feasible region defined on the basis of the satisfaction levels determined in each iteration.

3.6 Other Disaggregation Methods

The main principles of the aggregation-disaggregation approach may be combined with outranking relation methods. The most important efforts concern the problem of determining the values of several parameters used by these methods. The set of these parameters is used to construct a preference model which the DM accepts as a working hypothesis in the decision-aid study. In several real-world applications, the assumption that the DM is able to give explicitly the values of each parameter is not realistic.

In this framework, the ELECCALC system has been developed [69], which estimates indirectly the parameters of the ELECTRE II method. The process is based on the DM’s responses to questions of the system regarding his/her global preferences.

Furthermore, concerning problem statement β, several approaches consist in inferring the parameters of ELECTRE TRI through holistic information on DM’s judgments. These approaches aim at substituting assignment examples for direct elicitation of the model parameters. Usually, the values of these parameters are inferred through a regression-type analysis on assignment examples.

Mousseau and Słowiński [99] propose an interactive aggregation-disaggregation approach that infers ELECTRE TRI parameters simultaneously starting from assignment examples. In this approach, the determination of the parameters’ values (except the veto thresholds) that best restore the assignment examples is formulated through a non-linear optimization program.

Several efforts have tried to overcome the limitations of the aforementioned approach (computational difficulty, estimation of the veto threshold):

  1. (a)

    Mousseau et al. [100, 101] consider the subproblem of determining the weights only, assuming that the thresholds and category limits have been fixed. This leads to the formulation of an LP (rather than the non-linear program of the global inference model); a simplified sketch of such a weight-inference LP is given after this list. Through experimental analysis, they show that this approach is able to infer weights that restore the assignment examples in a stable way, and that it can also identify possible inconsistencies in these examples.

  2. (b)

    Doumpos and Zopounidis [29] use linear programming formulations in order to estimate all the parameters of the outranking relation classification model. However, in this approach, the parameters are estimated sequentially rather than through a global inference process. Thus, the proposed methodology does not specify the optimal parameters of the outranking relation (i.e. the ones that lead to a global minimum of the classification error). The results of this approach (“reasonable” specification of the parameters) serve rather as a basis for a thorough decision-aid process.
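
To illustrate the flavor of such a weight-inference LP, the sketch below is a deliberately simplified, hypothetical variant (not the exact formulation of [100, 101]). It assumes a pessimistic assignment rule without veto thresholds, so that, with the category profiles and discrimination thresholds fixed, the partial concordance indices of each assignment example with the profiles are constants; the requirements "the example outranks the lower profile of its category" and "it does not outrank its upper profile" then become linear in the weights w_i and the cutting level λ, and the smallest constraint slack ε is maximized.

```python
# A simplified, hypothetical sketch of inferring ELECTRE TRI criteria weights
# from assignment examples (in the spirit of [100, 101]), assuming a
# pessimistic assignment rule, no veto thresholds, and fixed profiles and
# thresholds, so that partial concordances are precomputed constants.
import numpy as np
from scipy.optimize import linprog

def infer_electre_tri_weights(c_lower, c_upper):
    """c_lower[j][i]: partial concordance of example j with the lower profile of
    its assigned category; c_upper[j][i]: with its upper profile.  All examples
    are assumed to be assigned to interior categories (both profiles exist)."""
    c_lower, c_upper = np.asarray(c_lower, float), np.asarray(c_upper, float)
    m, n = c_lower.shape
    # x = [w_1..w_n, lambda, eps]; maximize eps  <=>  minimize -eps
    cost = np.zeros(n + 2); cost[-1] = -1.0
    A_ub, b_ub = [], []
    for j in range(m):
        # outrank the lower profile:  sum_i w_i c_i >= lambda + eps
        A_ub.append(np.r_[-c_lower[j], 1.0, 1.0]); b_ub.append(0.0)
        # do not outrank the upper profile:  sum_i w_i c_i <= lambda - eps
        A_ub.append(np.r_[c_upper[j], -1.0, 1.0]); b_ub.append(0.0)
    A_eq = [np.r_[np.ones(n), 0.0, 0.0]]                     # weights sum to 1
    bounds = [(0, None)] * n + [(0.5, 1.0), (None, None)]    # lambda in [0.5, 1]
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=[1.0], bounds=bounds, method="highs")
    w, lam, eps = res.x[:n], res.x[n], res.x[n + 1]
    return w, lam, eps    # eps < 0 signals inconsistent assignment examples
```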

The problem of robustness and sensitivity analysis, as an extension of the previous research efforts, is discussed in [26]. The authors consider the case where the DM cannot provide exact values for the parameters of the ELECTRE TRI method, due to uncertain, imprecise or inaccurately determined information, or due to a lack of consensus among multiple DMs. The proposed methodology combines the following approaches:

  1. 1.

    The first approach infers the value of parameters from assignment examples provided by the DM, as an elicitation aid.

  2. 2.

    The second approach considers a set of constraints on the parameter values reflecting the imprecise information that the DM is able to provide.

In the context of UTA-based ordinal regression analysis [119], the MUSA method has been developed in order to measure and analyze customer satisfaction [42, 134]. The method assesses a set of marginal satisfaction functions in such a way that the global satisfaction criterion becomes as consistent as possible with the customers' judgments. Thus, the main objective of the method is the aggregation of individual judgments into a collective value function.

The MUSA method assesses global and partial satisfaction value functions \(Y^{{\ast}}\) and \(X_{i}^{{\ast}}\) respectively, given customers' ordinal judgments Y and \(X_{i}\) (for the i-th criterion). The ordinal regression analysis equation has the following form:

$$\displaystyle\begin{array}{rcl} \hat{Y }^{{\ast}} =\sum _{ i=1}^{n}b_{ i}X_{i}^{{\ast}}-\sigma ^{+} +\sigma ^{-}& &{}\end{array}$$
(9.46)

where \(\hat{Y}^{{\ast}}\) is the estimation of the global value function \(Y^{{\ast}}\), n is the number of criteria, b i is a positive weight of the i-th criterion, \(\sigma ^{+}\) and \(\sigma ^{-}\) are the overestimation and the underestimation errors, respectively, and the value functions \(Y^{{\ast}}\) and \(X_{i}^{{\ast}}\) are normalized in the interval [0, 100]. In the MUSA method the notation of ordinal regression analysis is adopted, where a criterion g i is considered as a monotone variable \(X_{i}\) and its value function is denoted by \(X_{i}^{{\ast}}\).

Similarly to the UTASTAR algorithm, the following transformation equations are used:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} z_{m} = y^{{\ast}m+1} - y^{{\ast}m} &\mbox{ for }m = 1,2,\ldots,\alpha -1 \\ w_{ik} = b_{i}x_{i}^{{\ast}k+1} - b_{i}x_{i}^{{\ast}k}&\mbox{ for }k = 1,2,\ldots,\alpha _{i} - 1\mbox{ and }i = 1,2,\ldots,n\end{array} \right.& &{}\end{array}$$
(9.47)

where \(y^{{\ast}m}\) is the value of the \(y^{m}\) satisfaction level, \(x_{i}^{{\ast}k}\) is the value of the \(x_{i}^{k}\) satisfaction level, and α and α i are the numbers of global and partial satisfaction levels, respectively.

According to the previous definitions and assumptions, the MUSA estimation model can be written in an LP formulation, as follows:

$$\displaystyle\begin{array}{rcl} \left \{\begin{array}{ll} [min]F =\sum _{ j=1}^{M}[\sigma _{ j}^{+} +\sigma _{ j}^{-}] & \\ \mbox{ subject to} & \\ \sum _{i=1}^{n}\sum _{ k=1}^{x_{i}^{j}-1 }w_{ik} -\sum _{m=1}^{y^{j}-1 }z_{m} -\sigma _{j}^{+} +\sigma _{ j}^{-} = 0&\mbox{ for }j = 1,2,\ldots,M \\ \sum _{m=1}^{\alpha -1}z_{ m} = 100 & \\ \sum _{i=1}^{n}\sum _{ k=1}^{\alpha _{i}-1}w_{ ik} = 100 & \\ z_{m} \geq 0,\ w_{ik} \geq 0,\ \sigma _{j}^{+} \geq 0,\ \sigma _{j}^{-} \geq 0 &\ \forall \ m,i,j,k\\ \end{array} \right.& &{}\end{array}$$
(9.48)

where M is the size of the customer sample, and x i j and y j are the levels on which the variables X i and Y are estimated for the j-th customer (i.e. the partial and global satisfaction judgments of the j-th customer). The MUSA method also includes a post-optimality analysis stage, similar to step 4 of the UTASTAR algorithm.
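
A minimal sketch of LP (9.48), assuming for simplicity a common α-level satisfaction scale for all criteria (the method itself allows a different α i per criterion) and omitting the preference-threshold and post-optimality refinements, could look as follows:

```python
# A minimal sketch of the MUSA linear program (9.48) with scipy.
# Assumptions: every criterion uses the same ordinal satisfaction scale with
# `alpha` levels; X[j][i] and Y[j] are the j-th customer's partial and global
# satisfaction judgments, coded as levels 1..alpha.
import numpy as np
from scipy.optimize import linprog

def musa(X, Y, alpha):
    X, Y = np.asarray(X, dtype=int), np.asarray(Y, dtype=int)
    M, n = X.shape
    n_z = alpha - 1                       # z_m : steps of the global value function Y*
    n_w = n * (alpha - 1)                 # w_ik: steps of the weighted marginal functions
    n_vars = n_z + n_w + 2 * M            # plus sigma+_j and sigma-_j per customer
    w_idx = lambda i, k: n_z + i * (alpha - 1) + k

    cost = np.zeros(n_vars)
    cost[n_z + n_w:] = 1.0                # minimize sum_j (sigma+_j + sigma-_j)

    A_eq, b_eq = [], []
    for j in range(M):                    # one equality constraint per customer
        row = np.zeros(n_vars)
        for i in range(n):
            for k in range(X[j, i] - 1):  # sum of w_ik up to level x_i^j - 1
                row[w_idx(i, k)] = 1.0
        row[:Y[j] - 1] -= 1.0             # minus sum of z_m up to level y^j - 1
        row[n_z + n_w + j] = -1.0         # -sigma+_j
        row[n_z + n_w + M + j] = 1.0      # +sigma-_j
        A_eq.append(row); b_eq.append(0.0)
    A_eq.append(np.r_[np.ones(n_z), np.zeros(n_w + 2 * M)]); b_eq.append(100.0)   # Y* norm.
    A_eq.append(np.r_[np.zeros(n_z), np.ones(n_w), np.zeros(2 * M)]); b_eq.append(100.0)

    res = linprog(cost, A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=[(0, None)] * n_vars, method="highs")
    z = res.x[:n_z]                                   # steps of Y*
    w = res.x[n_z:n_z + n_w].reshape(n, alpha - 1)    # steps of b_i X_i*
    return z, w, res.fun                              # plus total estimation error F*

# Example: 5 customers, 2 criteria, 3-level ordinal satisfaction scale
z, w, F = musa(X=[[3, 2], [2, 2], [1, 1], [3, 3], [2, 1]],
               Y=[3, 2, 1, 3, 2], alpha=3)
```

The cumulative sums of z and of each row of w reconstruct the global and the weighted marginal satisfaction functions on the 0–100 scale.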

An analytical development of the method and the provided results is given in [42], while the presentation of the MUSA DSS can be found in [43, 46].

The problem of building non-additive utility functions may also be considered in the context of the aggregation-disaggregation approach. A characteristic case is that of positive interaction (synergy) or negative interaction (redundancy) among criteria. Two or more criteria are synergic (redundant) when their joint weight is greater (smaller) than the sum of the weights given to the criteria considered separately.

In order to represent interaction among criteria, some specific formulations of the utility functions expressed in terms of fuzzy integrals have been proposed [38, 81, 102]. In this context, Angilella et al. [2] propose a methodology that allows the inclusion of additional information such as interaction among criteria. The method searches for a utility function representing the DM's preferences, the resulting functional form being a specific fuzzy integral (the Choquet integral). As a result, the obtained weights may be interpreted as the "importance" of coalitions of criteria, exploiting the potential interaction between criteria. As a by-product of the methodology, the method can also provide the marginal utility functions of each of the considered criteria, evaluated on a common scale.
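
To fix ideas about how such a fuzzy-integral model captures interaction, the following small illustration (with hypothetical values, not the formulation of [2]) computes a discrete Choquet integral of marginal utilities with respect to a capacity μ defined on coalitions of criteria; the capacity is chosen so that one pair of criteria is synergic and another pair is redundant:

```python
# A minimal illustration of Choquet-integral aggregation of marginal
# utilities with respect to a capacity mu on coalitions of criteria.
# The capacity values below are hypothetical, chosen to show synergy
# (mu({0,1}) > mu({0}) + mu({1})) and redundancy (mu({1,2}) < mu({1}) + mu({2})).

def choquet(u, mu):
    """u: marginal utilities per criterion; mu: dict frozenset -> capacity value,
    with mu(empty set) = 0 and mu(all criteria) = 1, monotone w.r.t. inclusion."""
    order = sorted(range(len(u)), key=lambda i: u[i])   # criteria sorted by utility
    total, prev = 0.0, 0.0
    for pos, i in enumerate(order):
        coalition = frozenset(order[pos:])              # criteria with utility >= u[i]
        total += (u[i] - prev) * mu[coalition]          # standard Choquet summand
        prev = u[i]
    return total

# Hypothetical capacity on 3 criteria
mu = {frozenset(): 0.0,
      frozenset({0}): 0.2, frozenset({1}): 0.3, frozenset({2}): 0.3,
      frozenset({0, 1}): 0.7,    # > 0.2 + 0.3 : positive interaction (synergy)
      frozenset({1, 2}): 0.45,   # < 0.3 + 0.3 : negative interaction (redundancy)
      frozenset({0, 2}): 0.5,
      frozenset({0, 1, 2}): 1.0}
print(choquet([0.6, 0.9, 0.4], mu))
```

When the capacity is additive (the value of every coalition equals the sum of its singletons), the Choquet integral reduces to the usual weighted sum, which is why the additive model appears as a special case of this representation.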

Hurson and Siskos [49] present a synergy of three complementary techniques to assess additive models over the whole criteria space. Their research includes a revised MACBETH technique, the standard MAUT trade-off analysis, and UTA-based methods for the assessment of both the marginal value functions, which are piecewise linear, and the weighting factors. The approach also uses a set of robustness measures and rules associated with MACBETH and UTA, in order to manage multiple LP solutions and extract robust conclusions from them. Several combinations of techniques are proposed that can facilitate the construction of the additive representation of the DM's preferences. Thus, according to the properties of the DM's preferences and to the specific technical aspects of the decision-making problem, the analyst can choose an adequate combination of methods. More recently, Roy and Słowiński [110] presented a general framework to guide analysts and DMs in choosing the “right method”.

The general scheme of the disaggregation philosophy is also employed in other approaches, including rough sets [27, 105, 140, 149], machine learning [106], and neural networks [76, 144]. All these approaches are used to infer some form of decision model (a set of decision rules or a network) from given decision results involving assignment examples, ordinal or measurable judgments.

4 Applications and UTA-Based DSS

The methods presented in the previous sections adopt the aggregation-disaggregation approach. This approach constitutes a basis for the interaction between the analyst and the DM, which includes:

  • the consistency between the assessed preference model and the a priori preferences of the DM,

  • the assessed model parameters (values, weights, utilities, …), and

  • the overall evaluation of potential actions (extrapolation output).

A general interaction scheme for this decision support process is given in Fig. 9.11.

Fig. 9.11 Simplified decision support process based on the disaggregation approach [57]

Several decision support systems (DSSs), based on the UTA model and its variants, have been developed on the basis of disaggregation methods. These systems include:

  1. (a)

    The PREFCALC system [52] is a DSS for interactive assessment of preferences using holistic judgments. The interactive process includes the classical aggregation phase where the DM is asked to estimate directly the parameters of the model (i.e. weights, trade-offs, etc.), as well as the disaggregation phase where the DM is asked to express his/her holistic judgments (i.e. global preference order on a subset of the alternatives) enabling an indirect estimation of the parameters of the model.

  2. (b)

    MINORA (Multicriteria Interactive Ordinal Regression Analysis) is a multicriteria interactive DSS with a wide spectrum of supported decision making situations [130, 131]. The core of the system is based on the UTASTAR method and it uses special interaction techniques in order to guide the DM to reach a consistent preference system.

  3. (c)

    MIIDAS (Multicriteria Interactive Intelligence Decision Aiding System) is an interactive DSS that implements the extended UTA II method [135]. In the first step of the decision-aid process, the system assesses the DM's value functions, while in the next step it estimates the DM's preference model from his/her global preferences on a reference set of alternative actions. The system uses artificial intelligence and visual techniques in order to improve the user interface and the interactive process with the DM.

  4. (d)

    The UTA PLUS software [71] is an implementation of the UTA method, which allows the user to interactively modify the marginal value functions within limits set by a sensitivity analysis of the formulated ordinal regression problem. During these modifications, a user-friendly graphical interface helps the DM reach an accepted preference model.

  5. (e)

    MUSTARD (Multicriteria Utility-based Stochastic Aid for Ranking Decisions) is an interactive DSS developed by Beuthe and Scannella [10], which incorporates several variants of the UTA method. The system provides several visual tools in order to structure the DM’s preferences to a specific problem (see also [121]). The interactive process with the DM contains the following main steps: problem structuring, preference questionnaire, optimization solver-parameter computing, final results (full rankings and graphs).

  6. (f)

    RUTA is a new UTA-based DSS proposed by Kadziński et al. [64], which additionally allows DMs to express preference information in terms of rank-related statements (e.g. action a should be ranked in the top 3, action b should be placed in the bottom 5, etc.).

UTA methods have also been used in several works for conflict resolution in multi-actor decision situations [14, 54, 88]. In the same context, the MEDIATOR system was developed [59, 114, 115], a negotiation support system based on Evolutionary Systems Design (ESD) and a database-centered implementation. ESD views negotiations as a collective process of searching for and designing a mutually acceptable solution. Participants are seen as playing a dynamic difference game in which a coalition of players is formed if it can achieve a set of agreed-upon goals. In MEDIATOR, negotiations are supported by consensus seeking through the exchange of information and, where consensus is incomplete, by compromise. The system assists in consensus seeking by helping the players build a group joint problem representation of the negotiations, that is, joint mappings from control space to goal space and (through marginal utility functions) to utility space. Individual marginal utility functions are estimated by applying the UTA method. Players can arrive at a common coalition utility function through the exchange of information and negotiation, until the players' marginal utility functions are identical. In addition to exchanging information and negotiating to expand targets, players can consider the use of axioms to contract the feasible region.

The UTA methods may be extended in the case of multiple DMs, taking into account different input information (criteria values) and preferences for a group of DMs. Two alternative approaches may be found in the literature [125]:

  1. 1.

    Application of the UTA/UTASTAR methods in order to optimally infer marginal value functions of individual DMs; the approach enables each DM to analyze his/her behavior according to the general framework of preference disaggregation.

  2. 2.

    Application of the UTA/UTASTAR methods in order to assess a set of collective additive value functions; these value functions are as consistent as possible with the preferences of the whole set of DMs and are thus able to aggregate individual value systems (a stylized sketch of this option is given after this list).
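
A stylized sketch of this second option (a simplification for illustration, not the algorithm of [125]) is the following: a single set of criteria weights is inferred from the rankings of several DMs at once, with one non-negative error term per pairwise comparison, assuming marginal value functions that are already assessed and common to all DMs.

```python
# A stylized illustration of inferring a collective preference model from the
# rankings of several DMs (a simplification, not the algorithm of [125]):
# shared criteria weights, one error term per pairwise comparison.
import numpy as np
from scipy.optimize import linprog

def infer_collective_weights(u, rankings, delta=0.05):
    """u[k][i]: (already assessed) marginal value of alternative k on criterion i;
    rankings: one list of alternative indices per DM, best first (strict orders)."""
    u = np.asarray(u, dtype=float)
    n = u.shape[1]
    # every consecutive pair of every DM's ranking yields one preference constraint
    pairs = [(r[t], r[t + 1]) for r in rankings for t in range(len(r) - 1)]
    P = len(pairs)
    n_vars = n + P                           # shared weights + one error term per pair
    cost = np.zeros(n_vars); cost[n:] = 1.0  # minimize total violation over all DMs
    A_ub, b_ub = [], []
    for q, (a, b) in enumerate(pairs):       # sum_i p_i [u_i(a) - u_i(b)] + e_q >= delta
        row = np.zeros(n_vars)
        row[:n] = -(u[a] - u[b]); row[n + q] = -1.0
        A_ub.append(row); b_ub.append(-delta)
    A_eq = [np.r_[np.ones(n), np.zeros(P)]]  # weights sum to 1
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq),
                  b_eq=[1.0], bounds=[(0, None)] * n_vars, method="highs")
    return res.x[:n], res.fun                # collective weights and total inconsistency
```

The total inconsistency returned by this sketch can be read as a rough indicator of how far the group is from a value system acceptable to all DMs.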

In the context of the first approach, Matsatsinis et al. [96] propose a general methodology for collective decision-making combining different MCDA approaches and incorporating several criteria in order to measure the DMs’ satisfaction over the aggregated rank-order of alternatives. Also, Matsatsinis and Delias [85] developed a general multicriteria protocol for multi-agent negotiations based on the UTA II method.

On the other hand, Siskos and Grigoroudis [125] propose the modification of the UTASTAR algorithm in order to infer a collective preference system for a group of DMs.

In the area of intelligent multicriteria DSSs, the MARKEX system has been proposed by Siskos and Matsatsinis [126] and Matsatsinis and Siskos [89, 91]. The system includes the UTASTAR algorithm and is used for the new product development process. It acts as a consultant for marketers, providing visual support to enhance understanding and to overcome lack of expertise. The databases of the system contain the results of consumer surveys, as well as financial information on the enterprises involved in the decision-making process. The system's model base encompasses statistical analysis, preference analysis, and brand choice models. Figure 9.12 presents a general methodological flowchart of the system. MARKEX also incorporates partial knowledge bases to support DMs in different stages of the product development process. The system incorporates three partial expert systems, functioning independently of each other. These expert systems use dedicated knowledge bases for the:

Fig. 9.12 Methodological flowchart of MARKEX [89]

  • selection of data analysis method,

  • selection of brand choice model, and

  • evaluation of the financial status of enterprises.

Furthermore, an intelligent web-based DSS, named DIMITRA, has been developed by Matsatsinis and Siskos [90]. The system is a consumer survey-based DSS, focusing on the decision-aid process for agricultural product development. Besides the implementation of the UTASTAR method in the preference analysis module, the DIMITRA system comprises several statistical analysis tools and consumer choice models. The system provides visual support to the DM (agricultural cooperatives, agribusiness firms, etc.) for several complex tasks, such as:

  • evaluation of current and potential market shares,

  • determination of the appropriate communication and penetration strategies, based on consumer attitudes and beliefs,

  • adjustment of production according to the product's demand, and

  • detection of the most promising markets.

In the same context, new research efforts have combined UTA-based DSSs with intelligent agent technology. In general, the proposed methodologies embed the UTA models in a multi-agent architecture in order to assess the DM's preference system. These research efforts mainly include the following:

  1. (a)

    An intelligent agent-based DSS, focusing on the determination of product penetration strategies, has been developed [85, 93–95]. The system implements an original consumer-based methodology, in which intelligent agents operate simultaneously at a functional and a structural level. Task, information and interface agents are included in the functional level in order to coordinate, collect the necessary information, and communicate with the DM. The structural level includes elementary agents based on a generic reusable architecture, as well as complex agents that aim at developing a dynamic agent organization in a recursive way.

  2. (b)

    A multi-agent architecture is proposed by Manouselis and Matsatsinis [80] for modeling electronic consumer behavior. The implementation of the system refers to electronic marketplaces and incorporates a step-by-step methodology for intelligent systems analysis and design, used in the particular decision-aid process. The system develops consumer behavioral models for the purchasing and negotiation processes, adopting additional operational research tools and techniques. The presented application refers to the case of Internet radio.

  3. (c)

    The AgentAllocator system [84] implements the UTA II method for the task allocation problem. Such problems are very common in multi-agent systems in the context of Artificial Intelligence. The system is an intelligent-agent DSS that allows the DM to model his/her preferences in order to reach and employ an optimal allocation plan.

The need to combine data and knowledge in order to solve complex and ill-structured decision problems is a major concern in modern marketing-management science. Matsatsinis [83] has proposed a DSS that implements the UTASTAR algorithm along with rule-induction data mining techniques. The main aim of the system is to derive and apply a set of rules that relate the global and the marginal value functions. A comparison between the original and the rule-based global values is used in the validity and stability analysis of the proposed methodology.

Furthermore, in the area of financial management, a variety of UTA-based DSSs has been developed, including mainly the following systems:

  1. (a)

    The FINEVA system [159] is a multicriteria knowledge-based DSS developed for the assessment of corporate performance and viability. The system implements multivariate statistical techniques (e.g. principal components analysis), expert systems technology [92], and the UTASTAR method to provide integrated support in evaluating the corporate performance.

  2. (b)

    The FINCLAS system [153] is a multicriteria DSS developed to study financial decision-making problems in which a classification (sorting) of the alternatives is required. The present form of the system is devoted to corporate credit risk assessment, and it can be used to develop classification models to assign a set of firms into predefined credit risk classes. The analysis performed by the system is based on the family of the UTADIS methods.

  3. (c)

    The INVESTOR system [156] was developed to study problems related to portfolio selection and management. The system implements the UTADIS method, as well as goal programming techniques, to support portfolio managers and investors in their daily practice.

  4. (d)

    The PREFDIS system [157] is a multicriteria DSS developed to address classification problems. The system implements a series of preference disaggregation analysis techniques, namely the family of the UTADIS methods, in order to develop an additive utility function to be used for classification purposes.

  5. (e)

    The INTELLIGENT INVESTOR system [111, 112] is an intelligent system which aims to support investment decision-making. The system integrates MCDA methods (UTASTAR algorithm) and artificial intelligence technologies (expert system), incorporating several portfolio management tools (Fundamental Analysis, Technical Analysis, and Market Psychology).

Also, as presented in Sect. 9.3.5, Siskos and Despotis [123] have developed the ADELAIS system, which is designed to provide decision-aid in multiobjective linear programming (MOLP) problems.

Over the past two decades, UTA-based methods have been applied to several real-world decision-making problems in the fields of financial management, marketing, environmental management, and human resources management, as presented in Table 9.7. These applications have provided insight into the applicability and effectiveness of preference disaggregation analysis in addressing real-world decision problems.

Table 9.7 Indicative applications of the UTA methods

Finally, the following real-world applications, with emphasis on the synergy between UTA methods and other MCDA approaches, may be found in the literature:

  1. (a)

    Hurson et al. [51] present a case study regarding the portfolio selection problem and the evaluation of stocks in the Athens Stock Exchange. The assessment of the additive value model is done by combining MACBETH at the single-criterion level and MAUT for the determination of the inter-criteria parameters.

  2. (b)

    Siskos et al. [138, 139] propose a multicriteria methodology for e-government benchmarking in Europe. The proposed assessment procedure is supported by the MIIDAS DSS to visually determine the marginal value functions and elicit the set of admissible weights using the UTA II method. Finally, a set of complementary robustness analysis techniques is utilized to handle both the robustness of the evaluation model and the extreme ranking positions of the alternatives (i.e. countries).

  3. (c)

    Demesouka et al. [20] present S-UTASTAR (spatial UTASTAR), a robust ordinal regression DSS for land-use suitability analyses. S-UTASTAR is applied in a raster-based case study to identify appropriate municipal solid waste landfill sites. Moreover, Stochastic Multiobjective Acceptability Analysis (SMAA) is applied, based on a probability distribution of the additive model parameters, to indicate the frequency with which an alternative obtains the best ranks, thus aiding the decision-making process.

  4. (d)

    Doumpos et al. [30] present a UTADIS-based methodology for monitoring the postoperative behavior of patients who have received treatment for atrial fibrillation (AF). The model classifies the patients into seven categories according to their relapse risk, on the basis of seven criteria related to the AF type and pathology conditions, the treatment received by the patients, and their medical history. A two-stage robust multicriteria model development procedure is used to minimize the number and magnitude of the misclassifications.

  5. (e)

    Lakiotaki and Matsatsinis [73] analyze movie user profiles obtained from a multi-criteria recommendation methodology, applied to real user data, in order to reveal hidden aspects of user behavior that could eventually improve the current system's performance.

  6. (f)

    Delias et al. [19] propose a recommendation approach to match the customized needs of an organization against existing technologies (innovative products or services). The system is able to create a profile based on the organization's needs and preferences. This profile is used to guide a recommendation process, according to which available technologies are evaluated against the profile and proposed to the organization in descending order.

  7. (g)

    Krassadaki et al. [72] propose a methodological framework based on a multicriteria clustering approach that identifies different assessment behaviors, in order to adopt the most common student assessment policy.

5 Concluding Remarks and Future Research

The UTA methods presented in this chapter belong to the family of ordinal regression analysis models aiming to assess a value system as a model of the preferences of the DM. This assessment is implemented through an aggregation-disaggregation process. With this process the analyst is able to infer an analytical model of preferences, which is as consistent as possible with the DM's preferences. The acceptance of such a preference model is accomplished through a repetitive interaction between the model and the DM. This approach contributes towards an alternative line of reasoning for decision-aid (see Fig. 9.2).

Future research regarding UTA methods aims to explore further the potentials of the preference disaggregation philosophy within the context of multicriteria decision-aid. Jacquet-Lagrèze and Siskos [57] propose that potential research developments may be focused on:

  1. (a)

    the inference of more sophisticated aggregation models by disaggregation, and

  2. (b)

    the experimental evaluation of disaggregation procedures.

Finally, it would be interesting to explore the relationship between aggregation and disaggregation procedures in terms of similarities and/or dissimilarities in the evaluation results obtained by the two approaches [57]. This would enable the identification of the reasons and the conditions under which aggregation and disaggregation procedures lead to the same or different results.