1 Introduction

Many multicriteria decision-making/aiding (MCDM/A) problems arise in organizations, and the solutions found for them can have different impacts on an organization’s strategies.

Therefore, in order to support the decision maker’s (DM’s) evaluation of MCDM/A problems, MCDM/A building models have been developed to serve as guides that lead to solutions.

The main focus of this chapter is to present and discuss issues raised by the use of MCDM/A building models, including issues that arise in problems in the RRM (risk, reliability, and maintenance) context.

2 Building Multicriteria Decision Models

An MCDM/A building model offers a formal and simplified representation of an MCDM/A problem. It consists of structured steps to represent the problem in line with the DM’s preferences during the decision-making process. According to Box and Draper (1987) all models are wrong since they are simplifications of the “real world”, but some models are useful as they make it possible to describe, study, and analyze problem situations. The key is to evaluate how wrong a model can be, i.e., to identify the point after which it is no longer useful.

One of the first MCDM/A building models was developed by Simon in 1960. This model proposed three steps for solving problems. The first was the intelligence step, which is related to identifying future conflict situations in an organization. The second was the design step, which was about constructing the model by formalizing the important aspects present in the problem. Finally, the last was the choice step, which sought to indicate the solution to the problem.

Moreover, two further steps can be integrated into this model, namely the review step, which is used to review the definitions made in the previous steps, and the implementation step, in which the solution is put into practice (Pomerol and Barba-Romero 2000).

Currently, there are many MCDM/A building models in the literature, such as those of Roy (1996), Pomerol and Barba-Romero (2000), Belton and Stewart (2002), and de Almeida et al. (2015).

In Belton and Stewart (2002), the building model developed had five steps. The first was identifying the problem, which is equivalent to the intelligence step in Simon’s model. The second and third steps dealt with structuring the problem and constructing the decision model; these are equivalent to the design step in Simon’s model. The fourth and fifth steps were making a recommendation and implementing it.

In de Almeida et al. (2015), the decision model has twelve steps, aggregated into three major phases. The initial phase is the preliminary phase, during which the problem is structured. The next phase is preference modeling, which is about choosing an adequate MCDM/A method to solve the problem. Lastly, the finalization phase is used to review and implement the solution.

Based on these models, it can be seen that they present structured steps to formally represent the problem based on the DM’s preferences expressed during the process. According to Guitouni and Martel (1998), no building model will ever be able to characterize all decision-making problems. Thus, for each problem, a decision model should be constructed that takes the DM’s preferences into account.

It is while the model is being built that the MCDM/A method most appropriate for solving the MCDM/A problem is indicated. These methods deal with real problems by formalizing them through well-structured steps, with a view to producing a solution that can be applied to the problem.

In this context, according to Keisler and Noonan (2012), problems are present in the “real world” and are transferred to the “model world”. In the “model world”, these problems are structured and processed, and a solution is found. This solution is then returned to the “real world” to be implemented.

Moreover, the models constructed are particular for each specific MCDM/A problem. In other words, for each preference expressed by the DM in the steps, the model is shaped for the specific problem. As illustrated in Fig. 1, at the beginning of the process, there are many possible models, but during the steps for selecting a model, assumptions are made, sets of approaches are selected and simplifications are introduced, resulting in some models being eliminated.

Fig. 1 Selecting the model in a funnel of the building process. (Adapted from de Almeida et al. 2015)

Finally, to support the model-building process, problem structuring methods (PSMs) can be used (Rosenhead and Mingers 2004; Eden 1988; Eden and Ackermann 2004; Ackermann and Eden 2001; Franco et al. 2004). According to Eden (1988), problem structuring seeks to build a formal representation of the problem, which includes identifying the objective and subjective factors of the decision-making process.

Among PSMs, the value-focused thinking (VFT) approach (Keeney 1992) aims to investigate the DM’s values in order to guide the decision process. In this approach, the DM needs to address two issues: deciding what he/she wants from the decision situation, i.e., what his/her objectives for the problem are, and evaluating how he/she will achieve these objectives, which is represented by the alternatives that may be solutions to the problem.

Thus, based on the answers to these two questions, this approach offers a structured way of thinking about the decision-making process and the DM’s subjective judgments.

3 A Framework for Building Multicriteria Models in RRM

In this section, the building model presented in de Almeida et al. (2015) is discussed in order to highlight important concepts of the MCDM/A approach, and it is extended by including steps that arise in RRM decision situations. This model has four phases, namely:

  • Phase 1 – Preliminary phase

  • Phase 2 – Probabilistic Modeling phase

  • Phase 3 – Preference Modeling and choice of MCDM/A method phase

  • Phase 4 – Finalization phase

The first phase, called the preliminary phase, comprises four steps of this building model and seeks to define the problem situation. The steps in this phase present the basic elements of an MCDM/A problem, such as the problem objectives, the attributes associated with each objective, and the alternatives.

The second phase, called probabilistic modeling, consists of three steps that are used to define important elements present in probabilistic problems. This phase was included in this adapted building model, based on that of de Almeida et al. (2015), in order to provide a structured process to evaluate the RRM problem.

The third phase also has three steps. This phase is responsible for modeling the DM’s preferences with regard to the elements presented in the previous steps and is an important phase of the decision model. At the end of this phase, the building model has been defined, as illustrated in Fig. 1. Moreover, it is at the end of this phase that the model indicates the appropriate MCDM/A method to be applied to find the solution to the problem.

The fourth phase, called the finalization phase, has four steps and is responsible for presenting a recommendation for the MCDM/A problem. In this phase, the MCDM/A method identified is applied to produce a recommendation for the problem. This recommendation is then tested, reviewed, and implemented for the problem situation. The framework for the model set out in this chapter, based on de Almeida et al. (2015), is illustrated in Fig. 2.

Fig. 2 Framework for building an MCDM/A model. (Adapted from de Almeida et al. 2015)

Compared to Simon’s model, this model does not have the intelligence phase, but its steps are broadly equivalent to the phases of Simon’s model as follows: steps 1 to 10 correspond to the design phase and step 11 to the choice phase, while steps 12 and 13 are equivalent to the review step added to Simon’s model and step 14 to the implementation step added to it.

However, note that reviewing is not performed only in steps 12 and 13. It is present throughout the model, as a procedure that prompts successive refinements (Ackoff and Sasieni 1968). These refinements permit returning to previous steps to review the preferences expressed and the definitions made. Moreover, because refinements are possible, some steps can first be evaluated in a simplified way and then reviewed later, after more information becomes available from the subsequent steps. These refinements are identified by the dashed arrows between the steps.

3.1 Step 1 – Identify DM and Other Actors in the Decision-Making Process

In MCDM/A problems, the main figure is the decision maker (DM). The DM is the person who is responsible for the decision. The whole building model is based on the preferences that the DM expresses for the problem situation.

Decision problems may involve only one DM, when the decision is an individual one, or more than one DM, when a group decision needs to be taken. The focus of this model is on individual decisions, but adaptations can be made to it for group decisions.

In addition to the DM, other actors may be present in the decision scenario. Therefore, it is important to identify these actors and their role in the decision-making process. The other actors are: the analyst, the client, one or more experts, and the stakeholders.

The analyst has knowledge about the MCDM/A approach; his/her role is to provide methodological support to the DM throughout the decision-making process. The analyst must interact with the DM during all the steps that are followed to find the adequate MCDM/A method, based on the DM’s preferences.

The client can be considered a close advisor to the DM, who may deputize temporarily for the DM when the DM is absent. The client does not express his/her preferences, but only communicates the DM’s preferences to the analyst.

The expert has factual information about the behavior of some variables which are not under the control of the DM. He/she should not declare his/her preferences, but only give factual information to help the DM acquire a fuller understanding of the problem scenario.

Stakeholders represent a group of people who may be affected by the decision; they do not participate in the decision-making process but can influence the DM’s preferences by reinforcing the importance of certain themes to them.

3.2 Step 2 – Identify Objectives

In MCDM/A problems, multiple objectives are present and the DM wishes to meet the whole set of objectives. Thus, the second step in the framework is to identify the objectives of the problem.

During this step, the DM must identify which objectives are the basis of interest for his/her decision. The reason why the problem must be solved is so that these objectives can be met, and it is based on these objectives that the DM will express his/her preferences.

Since the identification of objectives impacts all the future steps, this step can be considered the most important in the framework. If the definition of the objectives is incomplete or vague, potential problems will arise in future stages of the model.

The value-focused thinking (VFT) approach, developed by Keeney (1992), can be used to support this step because it provides relevant guidance on how to assess objectives correctly.

According to VFT, the process of finding the right objectives is not an easy task, and some devices can be used to assist it, such as drawing up wish lists. This approach also classifies objectives into two categories: fundamental objectives and means objectives. Fundamental objectives are those that underlie the problem; identifying and achieving them is the reason for solving the problem. Means objectives are those that lead to the fundamental objectives.

Besides the conceptual separation of these objectives, a hierarchical structure of them can be developed, which facilitates understanding how they relate to each other, as illustrated in Fig. 3. Therefore, defining the set of objectives is a relevant step, as it provides a complete understanding of the problem situation and usefully supports the subsequent steps, namely identifying the criteria and the alternatives.

Fig. 3 Hierarchical structure of objectives. (Adapted from Keeney 1992)

3.3 Step 3 – Define Family of Criteria

For each objective identified, some criteria must be established to represent it. A criterion can be considered as a function that measures the level of achievement that some alternative obtains in the objective. According to Keeney (1992), criteria are characterized by the degree to which their related objective is successfully met.

Attributes are characterized as the lowest level to which a fundamental objective can be broken down and seek to measure the performance level of a given objective for a given situation (Keeney 1992).

At this stage of the model, criteria should be established in a non-redundant, exhaustive, and coherent form for all objectives (Roy 1996). Also, criteria must have three properties: they must be measurable, operational, and understandable. Their meaning is understood to be as follows: measurable means that criteria have to represent the objectives in detail; operational means that criteria should provide a common basis for value judgment; and as to understandable, it is assumed that criteria cannot be ambiguous when evaluating the alternatives (Keeney 1992).

As with identifying objectives, if the criteria do not conform to these properties and definitions, problems will arise in the subsequent steps of the model, which may lead to an unsuitable MCDM/A method being used and, consequently, an unrepresentative solution being indicated.

In addition, according to Keeney (1992), three types of attributes can be observed: natural attributes, constructed attributes, and proxy attributes. This classification depends on the values that will be presented within each criterion.

Natural attributes have the same interpretation for all DMs and are clearly defined independently of the decision context; examples include price, distance, and duration. Constructed attributes are used when it is not possible to use natural ones; however, they are suitable only for the context of a specific decision. An example is when a subjective assessment needs to be made and a scale can be constructed to represent how the alternatives are assessed in the criterion. Finally, proxy attributes are used when neither of the former is suitable, as an indirect measurement associated with the objective.

Moreover, the criteria can be deterministic or probabilistic. In problems in which the information about consequences is known with certainty, i.e., the evaluation of each alternative in a specific criterion is represented by a constant level of performance, the criterion can be characterized as deterministic.

On the other hand, in a problem where information about consequences is probabilistic, the evaluation for each alternative in a specific criterion is based on information that might use a probability density function (PDF). For these problems, the probabilistic modeling phase has to be considered.

RRM decision problems require a probabilistic modeling phase, although in some cases, simplifications can be made in order to represent probabilistic consequences as deterministic indices, which in general can be some statistic of the PDF (e.g., means, percentiles, etc.).

3.4 Step 4 – Establish Alternatives

To establish problem alternatives, the first evaluation that must be made is about identifying the characteristics of the alternatives that will be used in the problem. To identify these characteristics, three questions need to be answered:

  • Is the set of alternatives discrete or continuous?

  • Are the alternatives stable or can they change throughout the process (Vincke 1992)?

  • Can the problem be solved by choosing one alternative as a solution and excluding the rest, or by combining alternatives?

After defining these characteristics, the problematic adopted in the problem has to be defined. This concerns how the DM intends to evaluate the set of alternatives. Some types of problematic are:

  • Choice Problematic: this is used when the DM desires to reduce the initial set of alternatives to a smaller subset.

  • Ranking problematic: this is used when the DM desires to rank the alternatives from best to worst.

  • Sorting Problematic: this is used when the DM desires to classify alternatives into previously defined categories.

  • Description Problematic: this is used when the DM desires to describe the alternatives.

  • Portfolio Problematic: this problematic finds a combination of a subset of alternatives that maximizes the objectives and is limited by constraints.

Finally, the alternatives for the problem are established: alternatives already present in the environment can be used, or new alternatives can be generated. The VFT methodology emphasizes that the DM should create alternatives, and not only accept those that already exist and are available to him/her when the problem occurs.

For each criterion an alternative is given an outcome (or consequence), which will be evaluated in the MCDM/A approach. The consequences can be deterministic or probabilistic. Deterministic consequences are those for which an exact value can be defined as the evaluation of the alternative in the criterion. Probabilistic consequences are used when problems are in an uncertain scenario. In this case, the evaluation of an alternative in a specific criterion is based on a probability distribution which represents this criterion.

Therefore, when this step is concluded, a consequence matrix can be obtained. The consequence matrix for the decision problem presents the evaluation of each alternative in each criterion, as illustrated in the sketch below.
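
As a simple illustration, the following sketch assembles a hypothetical consequence matrix for a maintenance-related choice problem; the alternatives, criteria, and values are invented purely for illustration.

```python
# A hypothetical consequence matrix: rows are alternatives, columns are criteria,
# and each entry is the (deterministic) outcome of an alternative in a criterion.
import pandas as pd

consequences = pd.DataFrame(
    {
        "cost (k$)": [120.0, 95.0, 150.0],
        "mean time to repair (h)": [4.0, 6.5, 3.2],
        "availability (%)": [97.5, 96.0, 98.8],
    },
    index=["outsource", "in-house team", "hybrid contract"],
)
print(consequences)
```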

3.5 Step 5 – Define State of Nature

This step will deal with problems in which some variables (state of nature) are not under the control of the DM, and thus cause random changes in the consequences matrix. The State of Nature is a typical ingredient in the traditional Decision Theory approach (Raiffa 1968; Berger 1985; Edwards et al. 2007; Goodwin and Wright 2004).

In these cases, the presence of the experts is very important since they give factual information about such variables to the DM. For example, in problems where the failure mode has to be evaluated, an expert’s knowledge about the situation can be useful to support the DM in obtaining the evaluation of each alternative in the specific criterion.

Some precautionary measures should be taken in this step. For example, for the state of nature, the analyst has to consider a probabilistic modeling of such information. Also, experts have to supply only factual information about these variables, since it is inappropriate to include preference information from experts in the decision model.

3.6 Step 6 – Establish a Priori Probability

For these problems, a priori information about the state of nature (θ) is characterized as an important element which should be defined in order to construct the model. This quantification can be provided by using probability distributions of θ, π(θ), called a priori probability distributions (Berger 1985).

Therefore, as stated in the previous step, an expert’s knowledge about the problem scenario can be used to quantify the a priori probability distribution π(θ). Some procedures for eliciting an expert’s prior knowledge are set out in the literature.

Keeney and von Winterfeldt (1991) proposed the following steps to elicit a priori probabilities:

  • Identify and select the problem

  • Identify and select experts

  • Discuss and refine the problem

  • Train experts to provide the elicitation, evaluating the reason to perform the elicitation

  • Conduct the elicitation process

  • Analyze the results

  • Solve disagreements

  • Document the results

One of the elicitation procedures is the equiprobable intervals method (Raiffa 1968). This method constructs equal intervals of probability by estimating values of the state of nature (θ) at given cumulative probabilities. The method follows these steps (a minimal sketch in code is given after the list):

  • Define the range of the minimum and the maximum values of the state of nature based on the value of an event that is unlikely to occur, with a probability of 0.001, and an event that is likely to occur, with a probability of 0.999.

  • Divide this range into equal probability intervals in order to define other values of the state of nature; the third value defined is the intermediate value, with a cumulative probability of 0.5.

  • Repeat the step, again dividing the intervals into equal parts and estimating further values of the state of nature; this gives the values of the state of nature corresponding to cumulative probabilities of 0.25 and 0.75.

  • After some points have been defined, a consistency test should be performed with the expert to confirm if the values estimated are consistent.

  • Finally, having defined the points, a statistical analysis can be performed in order to discover the probability distribution which best fits the points.
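
As an illustration of this last step, the sketch below fits a parametric prior to points elicited with the equiprobable intervals method. The elicited values, the choice of a lognormal prior, and the least-squares fit of the CDF are assumptions made only for this example; in practice the analyst would test several candidate distributions.

```python
# Fit a candidate prior distribution to hypothetical points elicited with the
# equiprobable intervals method (pairs of cumulative probability and theta value).
import numpy as np
from scipy import stats, optimize

probs = np.array([0.001, 0.25, 0.50, 0.75, 0.999])     # cumulative probabilities
theta = np.array([0.002, 0.010, 0.015, 0.022, 0.060])  # hypothetical elicited values

# Candidate prior: lognormal. Find the (shape, scale) whose CDF best matches
# the elicited points in a least-squares sense.
def lognorm_cdf(x, shape, scale):
    return stats.lognorm.cdf(x, shape, scale=scale)

(shape, scale), _ = optimize.curve_fit(
    lognorm_cdf, theta, probs, p0=(0.5, 0.015), bounds=(1e-6, np.inf)
)
prior = stats.lognorm(shape, scale=scale)

# Consistency check: compare the fitted quantiles with the elicited values.
for p, t in zip(probs, theta):
    print(f"p={p:.3f}  elicited={t:.4f}  fitted quantile={prior.ppf(p):.4f}")
```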

Therefore, these problems can be presented in a risk scenario. In these cases, it is appropriate to conduct the probabilistic modeling phase in order to formalize the problem and to support the DM’s understanding of the problem. If no probabilities are obtained, then an uncertain scenario is considered.

In general, for problems presented in a risk scenario, Bayesian Decision Theory (Berger 1985) is used to support the decision process. On the other hand, for problems in an uncertain scenario, it is recommended that procedures such as MaxMin or MinMax be used (Raiffa 1968; Berger 1985).

3.7 Step 7 – Establish Consequence Function

As is well known, the expected utility E(a) of an alternative a, i.e., the expectation $E_{\theta}[u(\theta, a)]$ over the state of nature, is given by Eq. 1 as follows:

$$ E(a)=\int \pi \left(\theta \right)u\left(\theta, a\right) d\theta $$
(1)

where:

  • u(θ,a) is the utility of alternative a when the state of nature is θ.

Then, one can obtain the utility u(a) using the a priori probability π(θ).

The utility u(θ,a) is obtained by Eq. 2

$$ u\left(\theta, a\right)=\int P\left(x|\theta, a\right)u(x) dx $$
(2)

where:

  • P(x|θ,a) is the consequence function.

  • u(x) is the utility function of x which is obtained by preference modeling as dealt with in steps 8, 9, and 10.

The focus of this step is the consequence function, which associates the consequence to the state of nature and the chosen alternative. It is the probability P(x|θ,a) of obtaining x given θ and the alternative a (Berger 1985).

In general, P(x|θ,a) is obtained from statistical data analysis or from assumptions about its behavior, as illustrated in de Almeida and Souza (2001) in a problem of service supply selection for maintenance, in which the consequence is the time to repair and θ is μ, the parameter of f(x), which is assumed to be an exponential probability function. This is the probabilistic model, which is the usual case in RRM.
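
As a rough numerical illustration of Eqs. 1 and 2 for a single alternative, the sketch below assumes an exponential consequence function P(x|θ,a) for the time to repair, a gamma a priori distribution π(θ), and a utility u(x) = e^(−x/10); the prior and the utility are hypothetical choices made only for this example.

```python
# Numerical evaluation of Eqs. 1 and 2 for one alternative a.
import numpy as np
from scipy import stats
from scipy.integrate import quad

prior = stats.gamma(a=2.0, scale=0.5)   # pi(theta): hypothetical prior on the repair rate
u = lambda x: np.exp(-x / 10.0)         # u(x): shorter repair times are preferred

def u_theta(theta):
    # Eq. 2: u(theta, a) = integral of P(x | theta, a) * u(x) dx,
    # with P(x | theta, a) exponential with rate theta.
    return quad(lambda x: stats.expon(scale=1.0 / theta).pdf(x) * u(x), 0, np.inf)[0]

def expected_utility():
    # Eq. 1: E(a) = integral of pi(theta) * u(theta, a) dtheta
    return quad(lambda th: prior.pdf(th) * u_theta(th), 0, np.inf)[0]

print(f"expected utility of alternative a: {expected_utility():.4f}")
```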

3.8 Step 8 – Preference Modeling

Preference modeling is the first step of the third phase of the decision model. This phase involves greater interaction between the analyst and the DM, and the flexibility of the model allows not only the previous steps to be reviewed but also these three steps to be integrated with one another.

This phase plays an important role in building the model, and has to be developed with care, because at the end of it the MCDM/A method is defined that will be used to solve the problem. Modeling preferences with the DM is one of the main steps within the decision-making process.

Based on this step, the DM’s preference structure will be characterized. A preference relationship system, or preference structure, is represented by a collection of preference relations applied to the set of alternatives, constructed so that the comparisons between alternatives are exhaustive and mutually exclusive.

Thus, some of the main preference structures of a DM are: Structure (P, I); Structure (P, Q, I); and Structure (P, Q, I, R). The preference relations on which these structures are based are:

  • Indifference (I): for the DM there are clear reasons for declaring equivalence between two alternatives.

  • Strict Preference (P): for the DM there are clear reasons to justify that one alternative is preferable to another.

  • Weak preference (Q): for the DM there is no clear reason for declaring either indifference or strict preference; the DM’s preference lies between the P and I relations.

  • Incomparability (R): for the DM there are no reasons to justify any of the other three relations. Incomparability is useful when the DM is unable or unwilling to establish a comparison between two alternatives.

Thus, in this step, it is necessary to evaluate which preference structure best represents the DM’s preferences for the problem. For example, Structure (P, I) should be used when the DM can define one of these relations for every comparison of consequences. For this structure, the property of orderability (comparability), which is related to the possibility of providing a comparison for each pair, is the first to be tested. If this property holds, the transitivity property should then be tested: if x, y, and z are consequences and x P y and y P z, then x P z.
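
As a small illustration of the transitivity test, the sketch below checks a set of hypothetical strict-preference statements elicited from a DM; the alternatives a1, a2, a3 and the stated relation P are invented for this example.

```python
# Check transitivity of a strict-preference relation P: x P y and y P z must imply x P z.
P = {("a1", "a2"), ("a2", "a3"), ("a1", "a3")}   # hypothetical elicited statements

violations = [
    (x, z)
    for (x, y1) in P
    for (y2, z) in P
    if y1 == y2 and x != z and (x, z) not in P
]
print("transitivity violations:", violations or "none")
```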

On the other hand, the structure (P, Q, I, R) allows the DM to have doubts about the comparisons between alternatives: the DM may remain undecided between two relations, which corresponds to Q, or may not be willing to express his/her preferences over some pair, which corresponds to R.

Moreover, in this step, one more important consideration that must be taken into account concerns the rationality considered by the DM in the problem, which can be: compensatory or non-compensatory. The terms compensatory and non-compensatory are associated with studies by Fishburn (1976).

Compensatory rationality exists when a worse performance of an alternative in criterion i can be compensated for by a better performance of the same alternative in criterion j. Under this rationality, trade-offs between the consequences are made.

Non-compensatory rationality is the opposite: compensations between performances are not relevant for the DM. In this case, the size of the difference in performance between two consequences is not what matters to the DM; the relevant information is which alternative wins on the criterion, even if the difference between them is very small.

Depending on the structure defined and the rationality that the DM presents, at the end of this step, a set of coherent MCDM/A methods is pre-selected, according to Fig. 4.

Fig. 4 Evaluation of compensatory and non-compensatory rationality. (Adapted from de Almeida et al. 2015)

Figure 4 presents a flowchart that illustrates this step. From this figure, it can be seen that establishing whether the rationality is compensatory or non-compensatory is very important, since this defines which family of MCDM/A methods is indicated and will therefore be pre-selected.

According to de Almeida et al. (2015), MCDM/A methods are characterized as a methodological formulation or a theory, which has an axiomatic structure. These methods are generic and can be applied in different problem situations in order to help find a solution.

Regarding compensatory rationality, a unique criterion of synthesis method (Roy 1996) is recommended, the most usual being the additive aggregation based on MAVT (Multi-Attribute Value Theory) or MAUT (Multi-Attribute Utility Theory) (Keeney and Raiffa 1976). The additive aggregation combines the criteria and generates a global value for each alternative. For non-compensatory rationality, it is recommended that outranking methods be used (Roy 1996). These methods make pairwise comparisons between the alternatives, as discussed in the first chapter of this book.
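
As a minimal sketch of the additive aggregation, the example below combines hypothetical marginal values (already normalized to [0, 1]) with hypothetical scale constants to produce a global value for each alternative.

```python
# Additive aggregation: v(a) = sum over j of k_j * v_j(a).
import numpy as np

v = np.array([           # marginal values v_j(a): rows = alternatives, columns = criteria
    [0.8, 0.3, 0.6],     # a1
    [0.5, 0.9, 0.4],     # a2
    [0.2, 0.7, 1.0],     # a3
])
k = np.array([0.5, 0.3, 0.2])    # scale constants, summing to one

global_values = v @ k
print("global values:", global_values.round(3))
print("recommended alternative: a%d" % (global_values.argmax() + 1))
```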

However, in practice, the careful definition of the rationality, based on the DM’s preferences for the problem situation, is often not even considered. In such inappropriate cases, a familiar MCDM/A method is selected before all of the DM’s preferences have been evaluated, i.e., at the very beginning of the model building.

According to Wallenius (1975), in general, DMs do not feel comfortable using decision models which they consider difficult. In the same context, Bouyssou et al. (2006) comment that heuristics can be suggested to facilitate solving the problem. Therefore, in these cases, the analyst should be alert and ensure that the method used to characterize the DM’s preferences is appropriate and therefore yields a recommendation which brings benefits to the decision situation.

3.9 Step 9 – Conducting an Intra-Criterion Evaluation

Intra-criterion evaluation is the evaluation of each alternative in each criterion by assigning a marginal utility function. Within the intra-criterion evaluation, important concepts are the scale used and scale transformations. For a utility function, an interval scale is considered, in which a utility of zero is assigned to the worst consequence.

Sometimes the marginal utility function may be constructed over a consequence expressed on a verbal scale. A widely used quantitative verbal scale is the Likert scale (Likert 1932).

Depending on the pre-selected family of MCDM/A methods, the form of evaluating the intra-criterion will be developed in different ways.

As to compensatory rationality, where the methods of unique criterion of synthesis are adequate, the evaluation of each alternative in each criterion is represented by a value function for deterministic consequences or a utility function for probabilistic consequences. The value or utility functions can be linear or non-linear.

To construct the value function, a few procedures are presented in Belton and Stewart (2002); modeling the problem is simpler in this case. On the other hand, in order to elicit utility functions, the DM’s behavior regarding risk has to be investigated. When the DM is risk averse or risk prone, the utility function is non-linear; for a DM who is risk neutral, the utility function is linear.
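
As a simple illustration of how the DM’s attitude to risk shapes the utility function, the sketch below contrasts a linear (risk-neutral) utility with a concave exponential (risk-averse) one; the risk-aversion coefficient is a hypothetical value that would, in practice, be elicited from the DM.

```python
# Linear (risk-neutral) versus exponential (risk-averse) utility over a
# consequence x already rescaled to [0, 1].
import numpy as np

def u_linear(x):
    return x

def u_exponential(x, c=2.0):
    # Concave for c > 0, i.e., a risk-averse DM; c would be elicited in practice.
    return (1.0 - np.exp(-c * x)) / (1.0 - np.exp(-c))

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"x={x:.2f}  linear={u_linear(x):.2f}  risk-averse={u_exponential(x):.2f}")
```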

Regarding non-compensatory rationality, where it is appropriate to use outranking methods, this step is conducted in a different way. If the preferences for the consequences expressed for each criterion are simply ordered, there is no need for further evaluation in this step, although threshold estimation is part of the intra-criterion evaluation. On the other hand, when probabilities are assigned to consequences, a utility function might be applied, incorporating the DM’s attitude to risk. This makes it necessary to integrate marginal utility functions with outranking methods, as has already been done (de Almeida 2005, 2007; Brito et al. 2010).

3.10 Step 10 – Conducting an Inter-Criteria Evaluation

The last step of this phase is the inter-criteria evaluation. Inter-criterion information allows the criteria to be combined quantitatively in an aggregation process. This step involves elicitation procedures to obtain the criteria weights (de Almeida et al. 2015).

Different mechanisms of aggregations are presented in the literature; the mechanism selected depends on the MCDM/A method that will be used.

As to methods of unique criterion of synthesis, scale constants ($k_j$) are used to aggregate the criteria. Scale constants do not represent how important the criteria are to DMs and cannot be directly determined. They represent the ratio between criteria, considering the set of consequences present in each one of them. The main differences between the MCDM/A methods in this class lie in the elicitation procedure applied to obtain the scale constants.

An example of an elicitation procedure for deterministic consequences is the tradeoff procedure (Keeney and Raiffa 1976), which has a robust axiomatic structure and seeks indifference points in order to formulate (n-1) equalities, where n is the number of criteria in the problem. These equalities are used to find the exact values of the scaling constants, as sketched below. The FITradeoff method (de Almeida et al. 2016) uses the same robust axiomatic structure with some advantages, needing only partial information from the DM, as mentioned in the first chapter of this book.
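
The sketch below gives a purely numeric illustration of this idea: two hypothetical indifference statements provide the (n-1) = 2 equalities which, together with the normalization of the scale constants, form a linear system that can be solved directly. The marginal values v_i(x_i') are invented for this example.

```python
# Two hypothetical indifference statements from the tradeoff procedure:
#   k1 * v1(x1') = k2, with v1(x1') = 0.6
#   k2 * v2(x2') = k3, with v2(x2') = 0.5
# together with the normalization k1 + k2 + k3 = 1.
import numpy as np

A = np.array([
    [0.6, -1.0,  0.0],
    [0.0,  0.5, -1.0],
    [1.0,  1.0,  1.0],
])
b = np.array([0.0, 0.0, 1.0])
k = np.linalg.solve(A, b)
print("scale constants:", k.round(3))   # approximately [0.526, 0.316, 0.158]
```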

For probabilistic consequences, MAUT (Multi-Attribute Utility Theory) or prospect theory could be applied, as explained in the first chapter of this book.

As for outranking methods, the weights are defined directly by the DM. They represent the level of importance that each criterion in the problem has for the DM. The weights are normalized so that they sum to one.

At the end of this step, the decision model has been built and an appropriate MCDM/A method is indicated to solve the problem. In other words, the end of this step represents the end of the funnel, illustrated in Fig. 1. The next phase deals with applying the method procedure, testing the robustness of the solution, reviewing the decision-making process and implementing the recommendation.

3.11 Step 11 – Evaluate Alternatives to Find a Solution

This step is the first step of the finalization phase. In it, the algorithm of the MCDM/A method selected is processed and presents the solution to the problem. The MCDM/A method selected is not personalized for the problem, since it is generic and can be applied to many different situations.

On the other hand, the decision model built, which is implemented in order to indicate the adequate MCDM/A method, is personalized for each problem, being constructed based on the DM’s preferences expressed in the previous steps.

3.12 Step 12 – Conduct a Sensitivity Analysis

Sensitivity analysis is a relevant step which aims to test the robustness of the decision model. Thus, after the sensitivity analysis, the recommendation found in the last step will be confirmed or reevaluations will be indicated for the building model.

The sensitivity analysis is characterized as being used to change problem inputs in order to analyze how these changes impact the recommendation made for solving the problem. In other words, this step verifies if the recommendation found in step 11 is sensitive to variations in the data of the problem, such as the consequence matrix and the criteria weights.

Sensitivity analysis can be conducted in two ways: individually, by changing one parameter at a time, and simultaneously by changing several parameters of the decision model.

With regard to the former, changes can be made to the values of the scale constants (or weights) or of some consequences. For example, a variation can be generated by applying a change of 10% to the nominal value, thereby generating values that are higher or lower than the original ones.

In MCDM/A problems, the values of consequences may be generated using approximations, since it is quite difficult to have access to all the data with full accuracy. Therefore, it is of interest to modify the values of the consequences in order to test the robustness of the model, given that it relies on these approximations. Many such modifications can be performed to test the robustness of the model built.

For a complete evaluation, several changes must be made simultaneously. Monte Carlo simulation is an approach used for simultaneous sensitivity analysis. In this approach, a random variation of the data is applied to test the decision model, and the solution found for each simulated problem is compared with the initial recommendation found in step 11.

To test the robustness of the model, the frequency of changes in the initial recommendation is calculated after a large number of simulations have been conducted, as sketched below. Moreover, to complement this analysis, statistical hypothesis tests can be applied in order to evaluate the significance of these changes (Daher and de Almeida 2012).
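
A minimal sketch of such a Monte Carlo sensitivity analysis, assuming an additive model with hypothetical marginal values and scale constants, and a ±10% random perturbation of the weights:

```python
# Monte Carlo sensitivity analysis for an additive model: perturb the scale
# constants by +/-10%, renormalize, and count how often the recommendation changes.
import numpy as np

rng = np.random.default_rng(42)
v = np.array([[0.8, 0.3, 0.6],
              [0.5, 0.9, 0.4],
              [0.2, 0.7, 1.0]])          # hypothetical marginal values
k = np.array([0.5, 0.3, 0.2])            # nominal scale constants
baseline = int((v @ k).argmax())

n_runs, changes = 10_000, 0
for _ in range(n_runs):
    k_pert = k * rng.uniform(0.9, 1.1, size=k.size)
    k_pert /= k_pert.sum()
    if int((v @ k_pert).argmax()) != baseline:
        changes += 1

print(f"recommendation changed in {100 * changes / n_runs:.1f}% of the simulations")
```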

Therefore, this step is important because, based on its results, it is confirmed whether the model built can be used to formally represent the MCDM/A problem and to find a representative solution for it, or whether the model has to be reevaluated by returning to some previous step in order to review the preferences expressed. It is worth mentioning that the approximations made in some steps of the decision model can be reevaluated based on the impact that they have on the recommendation found. This places the responsibility on the DM for determining whether to keep these approximations or to revise them in earlier steps of the model.

3.13 Step 13 – Draw Up Recommendation

In this step the recommendation found in step 11 and tested in step 12 is presented to the DM, especially with regard to its degree of accuracy as investigated in the last step. If the recommendation is favorable for the DM, it can be implemented, i.e., the solution can be applied in the real problem situation.

If the recommendation and its robustness analysis are not favorable for the DM, the decision model must be reviewed in order to identify steps where the DM’s preferences were not coherent or changed during the process, and steps where the approximations made have impacted the recommendation found. As already stated, there is no single right model, and the DM can always review the assumptions made previously.

3.14 Step 14 – Implement the Solution

Finally, after the solution is found and accepted by the DM, it must be implemented. Brunsson (2007) presented important matters related to the implementation process, and emphasized that the implementation step depends on the decision situation and the decision model built.

Owing to the magnitude of the decision problem, the implementation process can be complex and can take more time than the process of building the decision model. In this case, changes can occur in the problem scenario, thereby modifying consequences and producing new solutions for the problem. It may then be useful for the DM to review the decision model built in order to update the problem elements and the preferences expressed.

4 Conclusions

This chapter has presented a framework for building decision models in the RRM context. The framework provides well-structured steps to support the DM in the evaluation of MCDM/A problems.

The framework developed was adapted from de Almeida et al. (2015), whose model has three phases. The preliminary phase aims to present the important elements of the problem; the preference modeling phase deals with modeling the DM’s preferences regarding the elements defined; and the finalization phase is when the recommendation found for the problem is identified and tested.

In this framework, an additional phase was included in order to improve the earlier framework. This new phase was the probabilistic modeling phase which has important features to support the DM when he/she is dealing with probabilistic problems.

Therefore, the framework developed in this chapter can be used to formalize MCDM/A problems in order to present an adequate recommendation. It is important to highlight that building models are always wrong, since they are simplifications of the problem reality, but some of them are useful for representing the problem elements and supporting the DM in solving the problem by following a rational process (Box and Draper 1987).