1 Introduction: the decision analysis process under uncertainty

Probability encoding in uncertain situations is an old problem. Bernoulli [1] raised the issue of subjective probability but did not offer any specific elicitation methodology. Subjective probabilities were introduced in [2] and then axiomatized in [3], where the certainty equivalent method, CE, was used to encode the unique subjective probability distribution. Later, Spetzler and Staël Von Holstein [4] examined different alternative encoding procedures for obtaining subjective probabilities. Encoding procedures require that the subject respond to a set of questions either directly, by providing numbers as answers (i.e., either values or probabilities; this is known as the judged-probabilities method), or indirectly, by choosing between simple bets (the choice-based method). In the indirect method, the bets are adjusted according to the subject’s responses until he is indifferent between them. Moreover, Spetzler and Staël Von Holstein [4] drew a further distinction according to the response mode chosen, i.e., whether a probability or a quantity had to be the answer to the question.

Meanwhile, decision models under risk evolved to consider decision weights. Indeed, in experimental and real-life situations, people do not conform to the expected utility theory of von Neumann and Morgenstern [5] and violate its axioms. Thus, Quiggin [6] introduced cardinal utility and decision weights to generalize expected utility theory, the aim being to analyze the phenomena associated with the distortion of subjective probability. Decision weights can be obtained from the decumulative probability distribution through a probability weighting (or probability transformation) function, as in the rank-dependent model [6] or cumulative prospect theory [7]. In the latter, one probability transformation function is elicited for gains and another for losses, gains and losses being measured with respect to some anchored origin, as in [8] or in prospect theory [9].

Parametric methods have been most frequently used to elicit probability weighting functions. Many authors presented approaches that specify parametric forms for these functions and then estimate them through standard techniques [7, 10,11,12]. However, these approaches impose inferences about the functional forms. Therefore, Abdellaoui and Munier [13] presented a nonparametric method for a decision model using a univariate value function. Wu and Gonzalez [14] avoided the parametric estimation problems by testing simple preference conditions for standard von Neumann–Morgenstern utility functions. Abdellaoui [15] used nonparametric methods at the level of individuals to elicit both the utility function—using Wakker and Deneffe’s method [16]—and the probability weighting function.

Later, in the mid-nineties, models were proposed to deal with uncertainty by means of decision weights, and methods were proposed to elicit those weights. Several authors [12, 17,18,19] used variants of a decomposition model in which the decision weight assigned to some uncertain event results from a two-stage process: a subjective probability of the event results from the agent’s judgment and is then transformed into a decision weight by a transformation function known from earlier experiments under rank-dependent expected utility, RDEU, under risk. Wakker [20] provided a decomposition model of decision weights, which [21] operationalized within one single experiment under uncertainty.

Few authors have tackled obtaining the probability weighting function in the case of many attributes. Using RDEU, Beaudouin et al. [22] elicited a probability weighting function under risk for every attribute and found the weighting functions to be different according to the attribute considered. However, the latter method cannot be reconciled with utility independence in the sense of Keeney and Raiffa [23], which is respected in the present paper.

Abdellaoui et al. [21] used a procedure to elicit and decompose decision weights for gains and losses under uncertainty, i.e., when “objective” probabilities do not exist. However, the procedure was not extended to the multi-attribute theory; moreover, it could not elicit the joint weighting function.

Recently, many interesting works have been proposed in multi-attribute utility theory (MAUT). Abbas and Bell [24] propose a new independence assumption to help the assessment of multi-attribute utility functions. Bleichrodt et al. [25] demonstrate that standard sequences can also be used in MAUT when risk is assumed. Bosi and Herden [26] argue that a continuous multi-utility representation exists under adequate concepts of continuity of a preorder. Durbach and Stewart [27] review multiple criteria decision analysis models used when the evaluation of attributes is uncertain. Ekeland et al. [28] propose a multivariate extension of the notion of comonotonicity, which consists of simultaneous optimal rearrangements of two vectors of risk. Engel and Wellman [29] offer a new utilization of preference structure in multi-attribute auctions. Galaabaatar and Karni [30] axiomatize expected multi-utility representations of incomplete preferences under risk and under uncertainty. Galaabaatar and Karni [31] provide new axiomatizations of preference relations that exhibit incompleteness in both beliefs and tastes. Mongin and Pivato [32] present a ranking of multidimensional alternatives. Andersen et al. [33] discuss intertemporal utility.

Some works specifically proposed to elicit multi-attribute utility functions [34, 35]. Others addressed proper scoring rules [36,37,38]. Many discussed expected utility, such as [39,40,41,42,43,44,45], and [46] examined conditional expected utility. Others discussed ambiguity [47,48,49,50,51,52,53]. Wakker and Yang [54] analyzed concave/convex utility and weighting functions. Some authors debated preferences, as in [55,56,57,58,59,60]. Others discussed probabilities [61,62,63,64,65].

Nevertheless, no method has been proposed to solve the problem of eliciting a joint weighting function. Hence, an innovative method to directly encode joint weights for the multi-attribute utility function under uncertainty is suggested in the present paper. A nonparametric (point-by-point) choice-based method is employed. This method helps identify and describe the true attitude of the decision maker toward probabilities when dealing with multiple attributes. Including people in the decision process allows them to comprehend and embrace their own way of thinking (or decision making) and to decide on that basis.

Moreover, this paper aims to improve the decision analysis model used in the performance-based earthquake engineering methodology (PBEE), which intends to mitigate the seismic risk likely to be encountered by a specific construction. In this context, the newly proposed decision analysis method involves the owner in the decision-making process and helps him embrace his own decision. It enables him to select among building rehabilitation projects based on his personal, subjectively elicited utilities and probability functions. Thus, when informed not only about the risk but also about his risk attitude toward this specific situation, he will hopefully be more involved and can willingly take measures to help mitigate the seismic threat faced by his structure. Such processes have been used in the medical field since the mid-twentieth century [66]. As noted by Charles et al. [67] and Parsons [68], the goal is to move away from a paternalist pattern and include the patient in the decision-making procedure. This method will likewise help the engineering profession shift away from the paternalist pattern when dealing with seismic risk and involve the community of owners, who are the ones funding the strengthening measures needed to mitigate those risks.

The proposed decision analysis method is not limited to the PBEE context, and numerous other applications of this encoding methodology can easily be envisioned.

To that end, Sect. 2 presents the existing decision analysis methods used in multi-attribute utility theory and notes their limitations and needed developments. Section 3 proposes the innovative methodology to obtain the joint weighting function attached to the multi-attribute utility function. Section 4 offers the case study of performance-based earthquake engineering and the improvements brought by the already existing decision analysis models, and specifically by the decision analysis method proposed in this paper. Section 5 shows the validation of the proposed decision analysis method through experimental economics applied to the PBEE case study and presents the results of the experiment. Finally, Sect. 6 concludes the paper.

2 Decision analysis in multi-attribute utility theory: existing methods

One of the models still widely used in risk analysis is the von Neumann–Morgenstern model, in which the risk attitude of the decision maker is taken into account through the von Neumann–Morgenstern utility function. The score of some project is then defined over the possible outcomes xi, with discrete probabilities pi for each event i (i = 1, 2, …, n), and a von Neumann–Morgenstern utility function u(.):

$$\sum\limits_{i = 1}^{n} {p_{i} } u(x_{i} )$$
(1)

The von Neumann–Morgenstern utility reflects the preferences with respect to each attribute or outcome considered. But as already mentioned, in experimental and real-life situations, people do not conform to the expected utility theory of von Neumann and Morgenstern [5] and violate its axioms. This violation highlighted the fact that people might also subjectively consider the probabilities and not just the outcomes. The research that followed the “Allais Paradox” has shown that individuals do transform given probabilities into nonadditive decision weights πi (case of risk) and implicitly make use of risk measures similar to decision weights, sometimes called nonadditive probabilities [69]. Aiming to study the phenomena associated with the distortion of subjective probability, Allais [70] proposed a decision analysis model that is the risk version of the rank-dependent utility model (RDU) proposed by Quiggin [71, 72]. The latter makes use of both the probability transformation function w(p) and the standard utility function u(x). w(p) is defined on the domain [0, 1] of the decumulative probability distribution, such that w(1) = 1, w(0) = 0, and for all p in this domain, w(p) > 0. Provided that the outcomes and their associated probabilities are indexed such that x1 < x2 < ··· < xn, the corresponding score functional for some lottery P can be described by:

$$V_{\text{RDU}} (P) = \sum\limits_{i = 1}^{n} {\pi_{i} } u(x_{i} )$$
(2)
$${\text{where}}\quad \forall i = 1,2, \ldots ,n - 1,\quad \pi_{i} = w\left( {\sum\limits_{j = i}^{j = n} {p_{j} } } \right) - w\left( {\sum\limits_{j = i + 1}^{j = n} {p_{j} } } \right),\,{\text{and}}\,{\text{for}}\,i = n\,\,\pi_{n} = w(p_{n} )$$
(3)
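For concreteness, a minimal numerical sketch of (2) and (3) follows (in Python; the power-form weighting function is purely an illustrative assumption, not one elicited in this paper). Setting w(p) = p recovers the expected utility score (1).

```python
# Sketch: RDU decision weights (Eq. 3) and score (Eq. 2).
# The weighting function below is an illustrative assumption only.

def w(p):
    # Any w with w(0) = 0 and w(1) = 1 can be plugged in here.
    return p ** 0.7

def rdu_score(outcomes, probs, u):
    """outcomes ranked so that x1 < x2 < ... < xn; probs align with them."""
    score = 0.0
    for i in range(len(outcomes)):
        tail = sum(probs[i:])           # decumulative probability of x_i or better
        tail_next = sum(probs[i + 1:])  # decumulative probability of x_{i+1} or better
        pi_i = w(tail) - w(tail_next)   # Eq. (3); for i = n this equals w(p_n)
        score += pi_i * u(outcomes[i])  # Eq. (2)
    return score

# Usage: three ranked loss outcomes with hypothetical probabilities.
print(rdu_score([-100.0, -50.0, -10.0], [0.1, 0.3, 0.6], u=lambda x: x / 100.0))
```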

The uncertainty version of this type of decision model retains only (2), without the possibility of connecting the decision weights to probabilities as in (3). If the outcomes were mixed, i.e., losses and gains, the cumulative prospect theory model, CPT, with different probability weighting functions w+ for gains and w− for losses, would be the best choice. In the case where only losses are addressed (the PBEE case study presented in Sect. 4 deals with losses), RDU is the most suitable choice.

Given that the set of consequences is multidimensional, a multi-attribute utility function u over the set X of all attributes is needed. The multiplicative form of the utility function is presented in Keeney and Raiffa [23]. Even though it can be developed for the case of n attributes, the case of n = 3 and X = {X1, X2, X3} is presented here (as the PBEE case study given in Sect. 4 deals with three attributes). It rests on the assumption that “if X1 is utility independent of {X2, X3}, and if {X1, X2} and {X1, X3} are preferentially independent of X3 and X2, respectively, then

$$K\,u\left( {x_{1} ,x_{2} ,x_{3} } \right) + 1 = \prod\limits_{j = 1}^{3} {\left[ {K\,k_{j} u_{j} \left( {x_{j} } \right) + 1} \right]}$$
(4)
$${\text{with}}\,k_{1} + k_{2} + k_{3} + Kk_{1} k_{2} + Kk_{1} k_{3} + Kk_{2} k_{3} + K^{2} k_{1} k_{2} k_{3} = 1$$
(5)

If the interval of the jth attribute is taken as being [\(x_{j}^{0}\), \(x_{j}^{ * }\)], then:

$$u_{j} (x_{j}^{0} ) = 0,\quad u_{j} (x_{j}^{ * } ) = 1,\quad j = 1,2,3,\,{\text{and}}\,{\text{similarly}}\,u(x_{1}^{0} ,x_{2}^{0} ,x_{3}^{0} ) = 0,\quad u(x_{1}^{*} ,x_{2}^{*} ,x_{3}^{*} ) = 1.$$

All the kj’s are scaling constants. K is an additional scaling constant given by Eq. (5). If K = 0, then (4) reduces to the additive form described by:

$$u_{{}} (x_{1}^{{}} ,x_{2}^{{}} ,x_{3}^{{}} ) = k_{1} u_{1} (x_{1}^{{}} ) + k_{2} u_{2} (x_{2}^{{}} ) + k_{3} u_{3} (x_{3}^{{}} ),\,{\text{with}}\,k_{1} + k_{2} + k_{3} = 1.$$

Equation (4) can be used in the framework defined by (2) or (3), as shown in Miyamoto and Wakker [73] and in Dyckerhoff [74]. The difficulties arising when the w(p) function takes a specific form for each attribute were shown in Beaudouin et al. [22].
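As a brief sketch (Python; the partial utilities and scaling constants below are hypothetical, with K chosen to roughly satisfy constraint (5) for these kj’s), the multiplicative form (4) can be evaluated by solving it for u(x1, x2, x3), the additive form being the limiting case K = 0:

```python
# Sketch: evaluating the multiplicative multi-attribute utility of Eq. (4),
# u(x1, x2, x3) = (prod_j [K*k_j*u_j(x_j) + 1] - 1) / K,
# with the additive form recovered as K -> 0. All inputs are hypothetical.

def multi_utility(us, ks, K):
    """us: partial utilities u_j(x_j) in [0, 1]; ks: scaling constants k_j."""
    if abs(K) < 1e-9:                       # K = 0: additive form
        return sum(k * u for k, u in zip(ks, us))
    prod = 1.0
    for k, u in zip(ks, us):
        prod *= K * k * u + 1.0             # factors of Eq. (4)
    return (prod - 1.0) / K                 # Eq. (4) solved for u(x1, x2, x3)

# Usage: here sum(ks) > 1, so K must be negative to satisfy constraint (5).
print(multi_utility([0.4, 0.7, 0.2], ks=[0.2, 0.3, 0.6], K=-0.286))
```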

Now that the utility function can be computed using [23], the distorted probability function w(p) is needed to calculate VRDU in multi-attribute utility theory (MAUT). A method was proposed to estimate the distorted probability of each outcome (pi), as in [15], but no technique has so far been proposed to compute w(p) when this probability is attached to all the outcomes. The method proposed in Sect. 3 allows computing the function w(p) related to all outcomes, or the joint probability, in multi-attribute utility theory, which helps to obtain VRDU in MAUT.

3 The suggested procedure: multi-attribute utility function and joint weighting function

In this section, a nonparametric method is proposed to elicit both a multi-attribute utility function and a joint weighting function associated with this multi-attribute utility. It enables the evaluation of some prospect P using the RDU [70, 71] or, if the probabilities are unknown (uncertainty), nonadditive multi-attribute utility weights—the weighting function denoted, as in [74], by w or w(p). Even though the method is presented here for three attributes (since the PBEE case study is limited to three attributes), the theory similarly holds and can be straightforwardly extended to n attributes.

3.1 Elicitation of the partial utility functions

The three chosen attributes (cost, downtime, and deaths) are indexed by j and denoted by xj (j = 1, 2, 3). As a first step (step 1), the three partial utility functions related to the three attributes were elicited. For this purpose, the method used in [15, 16] was followed. Abdellaoui’s [15] process follows the approach suggested in Wakker and Deneffe [16] to elicit the utility functions: a “standard sequence” of outcomes, i.e., a sequence of equally spaced outcomes in terms of utility, is constructed. The primary advantage of the method is its robustness to any transformation of probabilities. Indeed, the procedure works even when the probabilities are unknown.

The standard sequence obtained from the elicitation of some partial utility function uj(xj) is constructed as follows (j = 1, 2, 3). An outcome \(x_{j}^{1}\) was determined to make the subject indifferent between the prospects (\(x_{j}^{0}\), p; Rj, 1 − p) and (\(x_{j}^{1}\), p; rj, 1 − p), denoted (\(x_{j}^{0}\), p; Rj) and (\(x_{j}^{1}\), p; rj), respectively, where \(x_{j}^{1}\) < \(x_{j}^{0}\) < Rj < rj ≤ 0 are negative real numbers in the case of losses, with some discretionary p ∈ (0, 1). The amounts rj, Rj, and \(x_{j}^{0}\) are held fixed for any given j. Then, an outcome \(x_{j}^{2}\) was determined to make the subject indifferent between the prospects (\(x_{j}^{1}\), p; Rj) and (\(x_{j}^{2}\), p; rj). Under RDU, the two obtained indifferences lead to the following two equations, for some given j:

$$w\left( p \right)u\left( {x_{j}^{0} } \right) + \, \left( {1 - w\left( p \right)} \right)u\left( {R_{j} } \right) = w\left( p \right)u\left( {x_{j}^{1} } \right) + \left( {1 - w\left( p \right)} \right)u\left( {r_{j} } \right)$$
(6)
$$w\left( p \right)u\left( {x_{j}^{1} } \right) + \left( {1 - w\left( p \right)} \right)u\left( {R_{j} } \right) = w\left( p \right)u\left( {x_{j}^{2} } \right) + \left( {1 - \, w\left( p \right)} \right)u\left( {r_{j} } \right)$$
(7)
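Indeed, rearranging (6) and (7) eliminates the unknown w(p):

$$w\left( p \right)\left[ {u\left( {x_{j}^{0} } \right) - u\left( {x_{j}^{1} } \right)} \right] = \left( {1 - w\left( p \right)} \right)\left[ {u\left( {r_{j} } \right) - u\left( {R_{j} } \right)} \right] = w\left( p \right)\left[ {u\left( {x_{j}^{1} } \right) - u\left( {x_{j}^{2} } \right)} \right]$$

so that \(u(x_{j}^{0}) - u(x_{j}^{1}) = u(x_{j}^{1}) - u(x_{j}^{2})\) holds whatever the (unknown) value of w(p).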

The elicitation of the jth partial utility function related to the jth attribute leads to:

$$u_{j} \left( {x_{j}^{1} } \right) - u_{j} \left( {x_{j}^{0} } \right) = u_{j} \left( {x_{j}^{2} } \right) - u_{j} \left( {x_{j}^{1} } \right) = \cdots = {\text{constant}} .$$
(8)

Therefore, \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{n}\) can be determined as a decreasing standard sequence of losses, i.e., \(\forall j,\) a sequence \(x_{j}^{n}\) < \(x_{j}^{n - 1}\) < ··· < \(x_{j}^{1}\) < \(x_{j}^{0}\) of equally spaced outcomes in terms of utility. Thus, the following equation is obtained:

$$u(x_{j}^{n} ) - u(x_{j}^{n - 1} ) = u(x_{j}^{n - 1} ) - u(x_{j}^{n - 2} ) = \cdots = u(x_{j}^{1} ) - u(x_{j}^{0} ).$$
(9)

The same procedure is repeated to construct the other standard sequences in eliciting the other partial utility functions uj(xj), with the same property holding for all attributes j. Finally, we note that in the case where the subject is an EU maximizer (i.e., does not distort probabilities, but considers them objectively), the same concept is applied, except that the particular case of \(w(p) = p\) is obtained, where w is the probability weighting function.

3.2 Elicitation of the joint weighting function

This paper proposes an innovation in decision analysis theory, namely the direct elicitation of the joint weighting function. To the best of our knowledge, no existing method solves this problem.

In [15], the standard sequence of outcomes constructed above in step 1 (eliciting the partial utility function) is used in simple risky choices to obtain a standard sequence of probabilities, i.e., equally spaced probabilities, for each attribute successively (j = 1, 2, 3) in terms of the weighting function. The method proposed here, however, tackles the joint probability weighting function of the multi-attribute utility function. It derives from [16] as above, and also from the procedure in [23] used in the multi-objective case, but it does not reduce to either of those methods. Let us consider the above-defined sequences \(x_{j}^{0}\), \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{n}\), for j = 1, 2, 3. Consider now the probabilities pi, i = 1, …, n − 1, satisfying indifference between the following lotteries, where each lottery encompasses elements of all three attributes:

$$[(x_{1}^{n} ,x_{2}^{n} ,x_{3}^{n} ),p_{i} ; \, (x_{1}^{0} ,x_{2}^{0} ,x_{3}^{0} )]\sim[(x_{1}^{i} ,x_{2}^{i} ,x_{3}^{i} ),1;\,(x_{1}^{0} ,x_{2}^{0} ,x_{3}^{0} )]$$
(10)

Relation (10) means that the subject is indifferent between, on the one hand, the lottery giving with probability pi the worst outcome (\(x_{1}^{n}\), \(x_{2}^{n}\), \(x_{3}^{n}\)) and with probability (1 − pi) the best outcome (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\)) and, on the other hand, the certain outcome (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) obtained from the three standard sequences previously elicited, for every i = 1, …, n − 1, where \(x_{j}^{n}\) < ··· < \(x_{j}^{0}\) < 0, for j = 1, 2, 3. Such indifference implies:

$$w\left( {p_{i} } \right) = [u(x_{1}^{i} ,x_{2}^{i} ,x_{3}^{i} ) - u(x_{1}^{0} ,x_{2}^{0} ,x_{3}^{0} )]/[u(x_{1}^{n} ,x_{2}^{n} ,x_{3}^{n} ) - u(x_{1}^{0} ,x_{2}^{0} ,x_{3}^{0} )],\quad \left( {i = 1, \ldots ,n - 1} \right)$$
(11)

Knowing that u(\(x_{1}^{n}\), \(x_{2}^{n}\), \(x_{3}^{n}\)) = 1, and u(\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\)) = 0 (hence, all our partial utilities are in fact disutilities, the best outcome having u = 0, the worst u = 1), we obtain:

$$w\left( {p_{i} } \right) = u(x_{1}^{i} ,x_{2}^{i} ,x_{3}^{i} );\quad i = 1, \ldots ,n - 1$$
(12)

Because u (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) can be computed from the global utility function, w(pi) can be obtained easily.

Therefore, the assessment of a standard sequence of outcomes and the construction of the indifferences (10) allow direct elicitation of the joint weighting function w related to the multi-attribute utility function.
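A minimal sketch of this step follows (Python; all elicited values below are hypothetical placeholders for one subject):

```python
# Sketch: pointwise construction of the joint weighting function via Eq. (12).
# Each pair combines an indifference probability p_i (from relation (10)) with
# the global utility u_i = u(x_1^i, x_2^i, x_3^i) computed from Eq. (4).
# All numbers are hypothetical.

elicited = [
    (0.10, 0.16),  # (p_1, u(x_1^1, x_2^1, x_3^1))
    (0.25, 0.31),
    (0.40, 0.45),
    (0.55, 0.58),
    (0.75, 0.72),
    (0.90, 0.88),
]

# With u(x^n) = 1 and u(x^0) = 0, Eq. (12) gives w(p_i) = u_i directly,
# so each pair is one point of the graph of the joint weighting function w.
for p_i, u_i in elicited:
    print(f"w({p_i:.2f}) = {u_i:.2f}")
```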

4 An improved decision analysis model for performance-based earthquake engineering

The method we propose in this article aims to improve the decision analysis model proposed for the performance-based earthquake engineering (PBEE) methodology developed at the Pacific Earthquake Engineering Research Center (PEER), among others. It is described in the PEER report 2005/2011 [66]: “This method is defined as design, evaluation, and construction of engineered facilities whose performance under common and extreme loads responds to the diverse needs and objectives of owner–user and society.” The performance assessment addresses a facility defined by its location, design (structural and non-structural), and site conditions. It embodies four stages: the hazard analysis, the structural analysis, the damage analysis, and the loss analysis. Each stage considers one of four variables: intensity measure (IM), engineering demand parameter (EDP), damage measure (DM), and decision variable (DV), such as the total repair cost, the repair duration, and the number of casualties (for example, dollars, downtime, and deaths).

One is thus led to the framework equation for performance assessment of the desired realization of the decision variable, namely the mean annual frequency (MAF) of the decision variable DV, λ(DV), in accordance with the total probability theorem:

$$\lambda \left( {\text{DV}} \right) = \iiint {G\left\langle {{\text{DV}}\left| {\text{DM}} \right.} \right\rangle }{\text{d}}G\left\langle {{\text{DM}}\left| {\text{EDP}} \right.} \right\rangle {\text{d}}G\left\langle {{\text{EDP}}\left| {\text{IM}} \right.} \right\rangle {\text{d}}\lambda \left( {\text{IM}} \right)$$
(13)

This integration implies that the conditional probabilities G(EDP|IM), G(DM|EDP), and G(DV|DM) need to be assessed parametrically over a suitable range of levels of the damage measure DM, the engineering demand parameter EDP, and the intensity measure IM. PBEE is thus a probabilistic method accounting for all uncertainties in all four assessed stages. It is not just based on “building codes” but considers decision analysis and risk management. Thus, it helps implement more cost-effective solutions and avoids the uneconomical solution of strictly applying the current seismic code provisions proposed for new buildings. Indeed, uneconomical retrofitting solutions are usually rejected by the owners, who cannot afford the price. Moreover, earthquakes are mainly low-probability high-consequence events; therefore, the preferences of the owner need to be considered in the decision-making process, i.e., the distortion of his perception of consequences and of probabilities, when the latter are known.
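As a rough discretized sketch of Eq. (13) (Python; the grids, hazard increments, and conditional models below are crude hypothetical placeholders, since the real ones come from the four PBEE analysis stages):

```python
# Sketch: discretized evaluation of the PBEE framing equation (Eq. 13).
# All grids and probabilistic models below are hypothetical placeholders.

import numpy as np

ims  = np.linspace(0.1, 1.5, 15)     # intensity measure levels (e.g., Sa in g)
edps = np.linspace(0.005, 0.05, 10)  # demand levels (e.g., interstory drift)
dms  = np.arange(1, 4)               # discrete damage states 1..3

# |d lambda(IM)|: increments of a hypothetical hazard curve.
d_lam_im = np.abs(np.gradient(0.02 * np.exp(-3.0 * ims)))

def p_edp_given_im(e, im):           # placeholder for G(EDP | IM)
    return float(np.exp(-((e - 0.03 * im) ** 2) / 2e-4)) / len(edps)

def p_dm_given_edp(d, e):            # placeholder for G(DM | EDP)
    q = min(1.0, e * 20.0)
    return q ** d * (1.0 - q) ** (3 - d)

def g_dv_given_dm(dv, d):            # placeholder for G(DV > dv | DM)
    return float(np.exp(-dv / (d * 1e6)))

def mean_annual_freq(dv):
    """MAF that the decision variable exceeds dv (discretized Eq. 13)."""
    lam = 0.0
    for im, dlam in zip(ims, d_lam_im):
        for e in edps:
            for d in dms:
                lam += (g_dv_given_dm(dv, d) * p_dm_given_edp(d, e)
                        * p_edp_given_im(e, im) * dlam)
    return lam

print(mean_annual_freq(5e5))         # MAF of repair cost exceeding 0.5 M€
```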

4.1 Evaluating alternatives

To evaluate alternatives, measures of effectiveness must be specified, because they explicitly describe the potential impact on each of the involved agents (the owner, such as the builder or building contractor, and the users of the considered facilities). The decision model (named the 3D model) chosen by the PEER committee considers three attributes (or decision variables): deaths (D), downtime (D), and cost in dollars (D). The attributes are listed in the order of their importance to owners; this order was obtained from a statistical referendum [66]. The PBEE methodology helps assess their values while the specific building faces likely scenarios of a future earthquake.

The chosen model of cost, downtime, and deaths is widespread; it has been the classical loss model used in performance-based earthquake engineering worldwide since PBEE was proposed. Many have used it, as in [75, 76], and many codes have adapted it, such as [77] or [78]. Moreover, this model is used for consequence-based risk management methodology at the city level, for instance in Ergo, a global platform for loss estimation, in Hazus for the USA, or in the SYNER-G methodology in Europe using CAPRA, among others.

This paper limits itself to the three mentioned attributes, which are the most decisive ones, because technological advances allow a better understanding of each of them. Indeed, many methods are available to evaluate the damage fragility functions needed to evaluate those attributes. Other, less important attributes, such as injuries and environmental components, might currently be more difficult to quantify [79].

4.2 The decision analysis process

The PBEE decision analysis process [66] singles out only one decision variable from the multi-criteria decision model. The procedure can be improved by establishing that the optimal decision can be obtained from a multi-criteria model that considers the subjective evaluation of the criteria cost, downtime, and deaths. Such preferences (i.e., the subjective view of the owner regarding the three attributes) may exclude strengthening of the whole structure, which is very costly and might be unaffordable for the owner, who would then reject the strengthening project.

Some authors, as in [80], have considered expected utility in making seismic risk management decisions for individual buildings, using the assembly-based vulnerability methodology. In [80], Porter used a parametric method to elicit an exponential utility function. Recently, Cha and Ellingwood [81] studied the role of risk aversion in the seismic risk mitigation of building structures. They used a parametric cumulative prospect theory model, CPT, which has the advantage of handling mixed lotteries, i.e., loss and gain prospects [7].

In this paper, we choose a more suitable decision analysis model to apply to the specific case of the PBEE decision analysis process, allowing the mitigation of seismic risk. As noted in Sect. 2, in the case where only losses are addressed, RDU is the most suitable choice. Moreover, it was shown that RDU is suitable for low-probability high-consequence events [82]. Since the 3D model of cost, downtime, and deaths is used in the PBEE case study, the attributes or outcomes are limited to the case of n = 3.

In the end, after performing all elicitations and computations, it will be possible to compare all proposed strengthening projects based on VRDU. Thus, instead of using the mathematical expectation through the formula \(U(P) = \sum\nolimits_{i = 1}^{n} {p_{i} x_{i} }\), or the maximization of expected utility with u being a logarithmic (or nonlinear) function, \(U(P) = \sum\nolimits_{i = 1}^{n} {p_{i} u(x_{i} )}\), it is possible to compare the total subjective evaluations of the decision maker through the formulas presented in Eqs. (2) and (3). The project with the highest VRDU is retained as the best from the point of view of the decision maker.
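An illustrative sketch of this comparison follows (Python; the additive utility, the weighting function, and the two project loss distributions are hypothetical stand-ins for the elicited ones):

```python
# Sketch: ranking strengthening projects by V_RDU (Eqs. 2-3) with a
# multi-attribute utility and a joint weighting function. All inputs are
# hypothetical stand-ins for the elicited functions of Sects. 3 and 5.

def v_rdu(scenarios, u, w):
    """scenarios: list of (probability, (cost, downtime, deaths))."""
    ranked = sorted(scenarios, key=lambda s: u(*s[1]))      # worst utility first
    probs = [p for p, _ in ranked]
    score = 0.0
    for i, (_, x) in enumerate(ranked):
        pi_i = w(sum(probs[i:])) - w(sum(probs[i + 1:]))    # Eq. (3)
        score += pi_i * u(*x)                               # Eq. (2)
    return score

w = lambda p: p ** 0.6                     # hypothetical joint weighting function
u = lambda c, t, d: (0.2 * (1 - c / 605) + 0.3 * (1 - t / 10810)
                     + 0.5 * (1 - d / 2405))   # hypothetical additive utility

projects = {  # loss magnitudes (M EUR, days, deaths) with their probabilities
    "retrofit A": [(0.9, (50, 200, 1)), (0.1, (300, 4000, 100))],
    "retrofit B": [(0.95, (120, 800, 0)), (0.05, (200, 2000, 20))],
}
best = max(projects, key=lambda name: v_rdu(projects[name], u, w))
print(best)
```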

Finally, in Sect. 5, the decision analysis method proposed in Sect. 3 is validated through experimental economics applied to the PBEE real case study. We note that in this paper, the values of the alternatives of the 3D model were not computed, since they are already calculated in the report [66]. Even though the PBEE methodology is a lengthy process and needs specific competence, it is already available to all. The work in this paper is limited to improving the decision analysis model applied to PBEE.

5 Applying the procedure: the experimental protocol

As argued previously, the functional that best represents our case is the RDU functional; therefore, a probability weighting function is needed in addition to the utility function. In this case, the only method that can elicit the utility function without prior knowledge of the weighting function w(.) is the trade-off method (TO) [16].

To elicit the probability weighting function and the utility function of the decision maker, a new method was used through a computer algorithm, which was developed borrowing from the method in [16] and the bisection method in [15]. The conducted experiment and the employed protocol are described. Figure 1 shows the flowchart of the experimental protocol.

Fig. 1 Flowchart of the experimental protocol

5.1 Subjects, stimuli, and procedure

All subjects were civil engineers (either project managers, PhDs, engineers endowed with substantial working experience in the field, or students in their last year of a civil engineering school). Their engineering background and practical experience allowed them to understand the real engineering concepts behind the decision-making process. Participants were motivated by their willingness to help scientific research in their field move forward; therefore, they were not paid. As noted in [83], “the results of choice-based experiments rather than judgment-based experiments do not most of the time substantially depend on a compensation scheme.”

Participants were explicitly told and also read the following statement: in this experiment, you are, as a civil engineer, in charge of choosing a civil engineering rehabilitation project taking into consideration three factors: the cost of rehabilitation, the downtime (time needed to accomplish this task), and the deaths that could result should a disaster, i.e., an earthquake, occur. You are responsible for managing the situation, finding the best compromise between the cost of strengthening, downtime, and likely losses of human lives.

This section describes two experiments that elicit, under RDU, the utility function \(u(x_{1} ,x_{2} ,x_{3} )\) and the probability weighting function. A significant effort was made to collect high-quality data from the 30 subjects recruited to participate. In this experiment, all prospects entailed loss outcomes; the first attribute (cost) varied from − 5 M€ to − 605 M€, the second attribute (downtime) from − 10 to − 10,810 days, and the third attribute (number of deaths) from − 5 to − 2405 deaths. As noted in [15], “to make the curvature of the utility function sufficiently pronounced, it is necessary to investigate a sufficiently wide interval of outcomes.” The intervals considered for each outcome were chosen to ensure that the obtained standard sequence reached or contained the real estimated value of the project obtained through the loss evaluation models in the PEER report 2005/2011 [66]. Indeed, as noted in [15], the outcomes need to be chosen such that the range between them includes all outcomes of interest.

The experiments were conducted at the individual level. Subjects were seated in front of a personal computer and were encouraged to take their time. Many were not familiar with probabilities and expectations. Thus, they were all given the needed explanations regarding “choice trials” approximately 10–15 min before the experiment started. Then, subjects participated in a 30–40 min session comprising trade-off experiments, denoted TO experiments, which consisted of outright choices between two prospects, followed by probability weighting experiments, denoted PW experiments, then by the experiments to elicit the scaling constants, and finally by the consistency checkup. The answers to each set of questions were used by the subsequent ones. Before applying the procedure, the assumptions must be assessed.

5.2 Assessing the assumptions

5.2.1 Utility independence

The definition of utility independence is presented in [84]. A method to test this assumption is presented in [23], where it was clearly noted that utility independence is a necessary condition to build the multi-attribute utility function, as per the definition offered in Sect. 2. In the following, \(x_{1}\) denotes cost, \(x_{2}\) downtime, and \(x_{3}\) the number of deaths. Utility independence (UI) of downtime \(x_{2}\) and deaths \(x_{3}\) was assessed, i.e., \(x_{3}\) UI \(x_{2}\) and conversely \(x_{2}\) UI \(x_{3}\). Additionally, cost and deaths are UI: \(x_{1}\) UI \(x_{3}\) and \(x_{3}\) UI \(x_{1}\). Finally, cost and downtime were considered UI (\(x_{1}\) UI \(x_{2}\) and \(x_{2}\) UI \(x_{1}\)) with a margin of error of approximately 10%. Indeed, only one person noticed that downtime is primarily related to cost, whereas two others evoked the relation as having very weak meaning, hence the 10% rough estimate. This margin is acceptable for stating the independence of those two outcomes [23]. The procedure is detailed in [79], following the methodology and recommendations presented in [23].

5.2.2 Stochastic independence

Because the usual methods compute multi-attribute scores from the partial utilities’ scores, stochastic independence is required if one wants to avoid very complex computations. In our method, we do not elicit three subjective probability functions, each attached to its related attribute. Instead, the proposed method directly addresses the multi-attribute utility and its joint probability distribution, so that the stochastic independence assumption is not needed.

5.3 Encoding utility functions

For the first attribute, the outcomes x0, R, and r (see the section above) were fixed at the following amounts: − 5 M€, − 4 M€, and − 3 M€, respectively. For the second attribute, the outcomes were fixed at − 10 days, − 3 days, and − 1 day, respectively. Finally, for the third attribute, the outcomes were fixed at − 5 deaths, − 3 deaths, and − 1 death, respectively. Indeed, Wakker and Deneffe [16] noted that the reference outcomes r, R should be chosen close enough to each other so that the revealed sequence x1, x2, …, xn is sufficiently narrow and gives the utility to the desired level of accuracy.

In these experiments, subjects were asked to choose within each pair of lotteries. Based on the answers to our choice questions for each of the three attributes, a standard sequence encompassing seven outcomes was constructed: first (\(x_{1}^{1}\), \(x_{1}^{2}\), …, \(x_{1}^{7}\)), then (\(x_{2}^{1}\), \(x_{2}^{2}\), …, \(x_{2}^{7}\)), and finally (\(x_{3}^{1}\), \(x_{3}^{2}\), …, \(x_{3}^{7}\)). The reason the choice-based method was used is clarified in [85]: “Choice is more consistent than matching.”

Seven iterations (questions) were needed to assess each outcome \(x_{1}^{i}\), i = 1, …, 7, of the standard sequence using the bisection method, which is described as follows. Suppose that \(x_{1}^{i - 1}\) is a known outcome. To determine \(x_{1}^{i}\), the subject is asked in the kth choice to choose between the prospects A = (\(x_{1}^{i - 1}\), p; − 4) and B = (\(x_{1k}^{i}\), p; − 3), where \(x_{1k}^{i}\) is taken as the middle point of the interval of “feasible outcomes” corresponding to the kth iteration (question). The interval corresponding to the first iteration is [\(x_{1}^{i - 1}\), \(x_{1}^{i - 1} + \Delta\)]. The computer algorithm used seven iterations to determine \(x_{1}^{i}\) because convergence was already attained at the seventh iteration. If the subject expresses a strict preference for prospect A, the next choice situation modifies prospect B to be more attractive by replacing \(x_{1k}^{i}\) by the midpoint of the interval [\(x_{1}^{i - 1}\), \(x_{1}^{i - 1} + \Delta /2\)]. If the subject expresses a strict preference for prospect B, the next choice situation modifies prospect B to be less attractive by replacing \(x_{1k}^{i}\) by the midpoint of the interval [\(x_{1}^{i - 1} + \Delta /2\), \(x_{1}^{i - 1} + \Delta\)]. As introduced by Abdellaoui and Munier [86] in the “closing in” method, this process aims to reduce the interval containing \(x_{1}^{i}\). Finally, the middle point of the last interval at the seventh iteration is taken to be \(x_{1}^{i}\).

This procedure is repeated for every i = 1, 2, …, 7; thus \(x_{1}^{2}\), \(x_{1}^{3}\), \(x_{1}^{4}\), \(x_{1}^{5}\), \(x_{1}^{6}\), \(x_{1}^{7}\) were obtained successively. A similar procedure is also used in [21].
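A sketch of this bisection loop follows (Python; the subject’s choice is abstracted as a callable, and the simulated answer used in the demo is an assumption for illustration):

```python
# Sketch: seven-iteration bisection used to elicit one element x^i of the
# standard sequence. `prefers_A` stands for the subject's answer to one
# choice trial; here it is simulated, in the experiment it comes from the
# screen. The same loop, run with the worse bound p = 1 passed as lo and
# the better bound 0 as hi, elicits the probabilities p_i of Sect. 5.4.

def bisect_indifference(lo, hi, prefers_A, steps=7):
    """Shrink the feasible interval around the indifference point;
    lo is the bound that makes prospect B worst."""
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if prefers_A(mid):      # A preferred: make B more attractive next time
            lo = mid
        else:                   # B preferred: make B less attractive next time
            hi = mid
    return (lo + hi) / 2.0      # middle point of the last interval

# Demo: losses, x_prev = -5 M EUR, Delta = -100 M EUR, and a simulated
# subject whose true indifference point is at -45 M EUR.
x_prev, delta = -5.0, -100.0
answer = lambda x_B: x_B < -45.0         # prefers A when B's loss is too severe
print(bisect_indifference(x_prev + delta, x_prev, answer))   # approx. -45
```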

For the first attribute, |Δ| was fixed at 100 M€; for the second, at 1800 days; and for the third, at 400 deaths. Once the standard sequence \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{7}\) is obtained, the computer algorithm checks the subject’s reliability by asking him to choose again between the two prospects corresponding to the fourth iteration for each \(x_{j}^{i}\), for j = 1, 2, 3 and i = 1, 2, …, 7.

For selecting the probability p used in the elicitation of the utility functions, [16] did not provide a precise technique. In [87], p had the value 1/3; in [86], 1/2; and in [15], 2/3. The last author noted that “All of the recent experimental studies using the trade-off method produced very similar results.” It was believed that probabilities close to one or equal to one half should be discarded to avoid heuristic biases. Therefore, the probability 2/3 was used, and the three partial utility functions based on the seven items of the standard sequence in each case were obtained.

5.4 Eliciting the joint probability weighting function

Using a single joint probability weighting function for the three attributes ensures that the global preference in the studied case is taken into consideration and guarantees that Keeney and Raiffa’s utility independence assumptions are respected.

The procedure is as follows. In the PW experiments, the objective was to determine the probabilities p1, …, p6, p7; therefore, each subject was asked a new series of questions, the goal of each being to determine the pi, for i = 1, 2, …, 7, that makes the subject indifferent between the certain outcome \(A_{k}^{i}\) = (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) and the prospect \(B_{k}^{i}\) = ((\(x_{1}^{7}\), \(x_{2}^{7}\), \(x_{3}^{7}\)), pik; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))), k = 1, …, 7. For the first set of questions, to obtain p1, we have i = 1, and we start with p11 = 1/2 and the first outcome \(A_{k}^{1}\) = (\(x_{1}^{1}\), \(x_{2}^{1}\), \(x_{3}^{1}\)). Here \(x_{j}^{0}\) and \(x_{j}^{7}\) (j = 1, 2, 3) are the two interval limits of each attribute, and \(x_{j}^{1}\) is the first value obtained from the standard sequence of each attribute. p1k is taken as the middle point of the “feasible interval” corresponding to the kth iteration (question); the interval corresponding to the first iteration is [0, 1]. The procedure used by the computer algorithm is as follows: if the subject expresses a strict preference for prospect A (B), the next choice situation modifies prospect B to be more (less) attractive by taking p12 as the midpoint of the interval [0, 1/2] ([1/2, 1]). A series of seven trade-offs is thus performed, and finally, the probability p1 is taken to be the middle point of the last interval. The bisection procedure followed by the computer algorithm to determine the probability p1 is thus similar to the one used for obtaining the outcome \(x_{1}^{1}\).

Then, for the second set of questions, we have i = 2, and we use the same method to obtain the probability p2. However, in the lottery, the first value obtained from the standard sequences is replaced with the second value, so that \(A_{k}^{2}\) = (\(x_{1}^{2}\), \(x_{2}^{2}\), \(x_{3}^{2}\)), and we start with p21 = 3/4 for k = 1.

Seven choice questions were needed to assess each probability of the standard sequence. By repeating the trade-off series for every value obtained from the standard sequences \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{7}\), for each j = 1, 2, 3, the values p1, p2, p3, p4, p5, p6, p7 were obtained successively. Overall, each subject thus had to address seven series of seven choice questions, convergence being attained at the seventh iteration. The reliability test in the PW experiments is as follows: the subject is asked to choose again between the two prospects \(A_{4}^{i}\) and \(B_{4}^{i}\) for every i = 1, 2, …, 7.

5.5 Assessing the scaling factors kj’s and the constant K

Assessing the scaling constants is not an easy task. The frequently used method suggested in [23], which is reliable and practical in many case studies, did not give appropriate results in the present application. The attribute “number of deaths” took all of the attention in the trade-off experiments and made it challenging to obtain logical and acceptable results. This shortcoming, encountered even by experts familiar with elicitation procedures due to the delicateness of the process, is noted by Keeney and Raiffa [23] as follows: “A major shortcoming of both questions I & II is the use of extreme levels of the attributes, …., we must force the decision-maker to respond to questions that are much more difficult than would be theoretically necessary, … rank the ki’s for less complexity, ….”. Therefore, Serquin’s [88] method was used here, with an adjustment in steps two and three: the order of the equations was changed but not the essential characteristics of the methodology. Given this delicate situation, the assessment, presented in the following, was performed on a subject-by-subject basis [79]:

The first step requires the decision maker to arrange the three attributes cost, downtime, and deaths in the order of importance he associates with each of them. Assume this order to be k3 > k2 > k1. As a consequence of this order, the decision maker then assesses \(x_{2}^{'}\) using the indifference (\(x_{1}^{*}\), \(x_{2}^{0}\)) ~ (\(x_{1}^{0}\), \(x_{2}^{'}\)), which leads to the equation k1 = k2 u2(\(x_{2}^{'}\)). Thus, k1 is obtained as a function of k2.

Step two requires the decision maker to choose between ((\(x_{1}^{*}\), \(x_{2}^{*}\), \(x_{3}^{*}\)), p1; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))) and (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{*}\)) until the value of p1 producing indifference is obtained. Then, replacing p1 in the following, when the subject reaches, in terms of \(x_{3}^{'}\), the indifference ((\(x_{2}^{0}\), \(x_{3}^{\text{REF}}\)), p1; (\(x_{2}^{0}\), \(x_{3}^{0}\))) ~ (\(x_{2}^{0}\), \(x_{3}^{'}\)), then k3 = u3(\(x_{3}^{'}\))/u3(\(x_{3}^{\text{REF}}\)) can be computed, where \(x_{3}^{\text{REF}}\) = (\(x_{3}^{0}\) + \(x_{3}^{*}\))/2. The expression of k3 is free of w(p1), which cancels during the computation, as shown by Serquin [88]. In step three, the same procedure is repeated: the decision maker chooses between ((\(x_{1}^{*}\), \(x_{2}^{*}\), \(x_{3}^{*}\)), p2; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))) and (\(x_{1}^{0}\), \(x_{2}^{*}\), \(x_{3}^{0}\)) until the value of p2 producing indifference is obtained. Then, replacing p2 in the following, when the subject reaches, in terms of \(x_{2}^{''}\), the indifference ((\(x_{1}^{0}\), \(x_{2}^{\text{REF}}\)), p2; (\(x_{1}^{0}\), \(x_{2}^{0}\))) ~ (\(x_{1}^{0}\), \(x_{2}^{''}\)), then k2 = u2(\(x_{2}^{''}\))/u2(\(x_{2}^{\text{REF}}\)) can be computed, where \(x_{2}^{\text{REF}}\) = (\(x_{2}^{0}\) + \(x_{2}^{*}\))/2. The latter indifference is free of w(p2), which cancels during the computation.

To calculate the utility function, K must be computed using the method in [23]. K results from solving Eq. (5), which can also be written as:

$$K + 1 = \left( {Kk_{1} + 1} \right)\left( {Kk_{2} + 1} \right)\left( {Kk_{3} + 1} \right)$$
(14)

Because k1, k2, and k3 are known, the equation is left with only one unknown, K. As previously noted, if \(\sum {k_{i} } \ne 1\) (here \(\sum {k_{i} } > 1\)), the utility function is multiplicative rather than additive.
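A short sketch of this computation (Python; the kj values are hypothetical, standing in for the elicited ones):

```python
# Sketch: solving Eq. (14), K + 1 = (K*k1 + 1)(K*k2 + 1)(K*k3 + 1), for the
# nontrivial root K != 0 by bisection. When sum(k_j) > 1 the root lies in
# (-1, 0); when sum(k_j) < 1 it is positive. The k_j below are hypothetical.

def solve_K(ks, lo=-1.0 + 1e-9, hi=-1e-9, tol=1e-12):
    def f(K):
        prod = 1.0
        for k in ks:
            prod *= K * k + 1.0
        return prod - (K + 1.0)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:   # sign change: root in the lower half
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

print(solve_K([0.2, 0.3, 0.6]))     # approx. -0.286 for these hypothetical k_j
```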

5.6 Experimental results

Following the simple procedure described above, the joint probability weighting function was elicited. Therefore, it was possible to translate the subjective evaluations of the engineers into equations, which allowed assessing the strategies to strengthen buildings to withstand future earthquakes.

For most subjects, the loss attributes cost, downtime, and number of deaths were found to have convex utility functions or utility functions close to linear, reflecting what is called, under EU, neutral behavior. Moreover, for most subjects, a very concave joint probability weighting function was obtained; therefore, when combined with the utility function, risk aversion could be expressed.

The largest value of the scaling constants was given to the attribute number of deaths, whereas the scaling constants related to cost and downtime varied widely from one person to another. Experienced engineers gave a higher scaling constant to the attribute downtime than to the attribute cost. Typical results and diagrams for four subjects are offered; for the other 26 subjects, similar results and diagrams were obtained.

As shown in Fig. 2, for the first attribute, cost, subjects 1 and 2 have an almost linear utility function, expressing a neutral attitude toward this attribute, whereas subjects 3 and 4 have a slightly convex utility function. For the second attribute, downtime, subject 2 has an almost linear utility function expressing a neutral attitude toward this attribute, whereas subjects 1, 3, and 4 have a slightly convex utility function. For the third attribute, number of deaths, all subjects have a slightly convex utility function. Subjects 1, 3, and 4, who are presented in the graphs below, have a concave joint probability weighting function, which, when combined with the utility function, might illustrate the case of a slightly risk-averse person. Subject 2 has an inverse-S joint probability weighting function with an inflection point near 1/3, which, when combined with the utility function, might illustrate the case of a slightly risk-averse person for low probabilities and a risk-seeking person for moderate-to-high probabilities.

Fig. 2 Utility functions of the attributes cost, downtime, and deaths, and the joint weighting function for the four subjects

The scaling constant K for the first, third, and fourth subjects tended toward zero, reflecting that their utility functions are close to the additive form; therefore, some meaning concerning the ki’s can be deduced. All three subjects gave the greatest importance to the attribute number of deaths. The first then gave more importance to the attribute cost and finally to downtime; the third gave more importance to the attribute downtime and finally to cost; and the fourth subject gave similar importance to the attributes downtime and cost. For the second subject, K was close to 0.5, which is far from the additive form; therefore, nothing can be deduced concerning the meaning of the ki’s. The results are shown in Table 1.

Table 1 Coefficients ki and K for four subjects

Experienced civil engineers agreed that the suggested decision path reflected the way that they believe that they approach their practical decisions. Inexperienced students in civil engineering were happy to discover their risk profile; they agreed that the results reflected what they had been trying to choose during the experiment.

More consistency emerged among practicing engineers than among student engineers (all 17 engineers were consistent, whereas only nine out of 13 students were consistent), which may pertain to the fact that the former are more accustomed to, and expert in, handling such types of decisions.

Finally, we compared the results obtained using classical expected utility theory (Eq. 1) to those obtained by rank-dependent utility theory (RDU) in multi-attribute utility theory, MAUT (Eqs. 2 and 3). This comparison only became possible with the joint weighting function obtained using the method demonstrated in this paper. The results are given in Table 2 for all four subjects. It can be seen that subjects evaluated the project differently under expected utility (EU) and under RDU. For example, subjects 1 and 2 have quite similar assessed values under EU and, since their joint weighting functions are nearly identical, the VRDU values obtained were also nearly identical. Subjects 2 and 3, by contrast, have similar assessed values under EU but very different VRDU values, since their joint weighting functions differ. This will likely lead subjects 2 and 3 to make different decisions, because they evaluated the joint weighting function differently as a result of their different perceptions of probabilities and their behavior in uncertain environments. Had we used only expected utility theory, we would have missed this distinction and suggested that they would reach a similar final evaluation, failing to describe their decision-making processes correctly. Therefore, the joint weighting function allowed a more accurate description of their behavior, which will likely help them embrace the strengthening solution proposed through the PBEE method.

Table 2 Results of the expected utility theory and the rank-dependent theory

6 Conclusion

This paper proposes an innovative approach to directly encode the joint probability weighting function attached to a multi-attribute utility function when probabilities are unknown (uncertainty) or transformed. A nonparametric (point-by-point) elicitation method was used at the level of individual subjects, which allows accurately describing the decision-making process of each subject in the context of a specific decision situation. The method respects the axioms of Keeney and Raiffa, and the constant K was elicited without using the probability transformation in the computation. This method helps capture and describe the real attitude of the decision maker toward probabilities when dealing with multiple attributes. The innovative decision analysis method is proposed to improve the decision process used in the performance-based earthquake engineering methodology (PBEE), which aims to mitigate the seismic risk likely to be encountered by a specific building. The decision-making process now includes the preferences of the owner through his subjective evaluation of the low-probability, catastrophic event of an earthquake. Thus, the decision maker can now recommend the optimal rehabilitation project according to the owner's preferences. By using this method, the owner is involved in the decision-making process, which will help him embrace his own decision analysis process. It allows him to select among projects of building rehabilitation based on his personal, subjectively elicited utilities and probability functions. Moreover, this method helps the engineering profession shift away from the paternalist pattern when dealing with seismic risk and involves the community of owners, who are the ones funding the measures to mitigate those risks. Finally, the proposed decision analysis method was validated through experimental economics applied to a real case study of performance-based earthquake engineering. It demonstrated that the joint weighting function described the decision maker's attitude more adequately and captured his preferences better than the expected utility model. Moreover, the decision analysis process presented in this paper, even though introduced to solve the problem of strengthening structures, is not restricted to the earthquake engineering field but can be used in any multi-criteria decision analysis application.