Abstract
This paper provides a new, simple, nonparametric method that directly elicits a joint weighting function to be immediately associated with multi-attribute utility under uncertainty. The method allows the subjective perception of probabilities and consequences to be included in the decision-making process when dealing with multiple attributes. Such elicitation was previously possible only for a single outcome; this study demonstrates that it can be achieved for multiple outcomes by means of a nonparametric method. First, the newly proposed procedure is described. Then, the experimental protocol is presented as applied to the performance-based earthquake engineering (PBEE) methodology. The decision-making process of PBEE is improved by including the preferences of the decision maker, who is usually the owner. This improvement matters especially in seismic risk mitigation, because the problem involves low probabilities, which may be inaccurately estimated or even unknown, coupled with catastrophic consequences. When such factors are weighed purely objectively, the resulting decisions run counter to the distorted perception of probabilities and consequences that reflects the implicit desires of the decision maker. Using the newly proposed method, the owner, whose own preferences are elicited, is more likely to embrace the solution, since he is involved in the decision process. The results and implications of the analysis are discussed, showing how the results differ when the joint weighting function is included in the decision-making procedure. The method, a step forward in multi-attribute utility theory, is not restricted to the decision-making process applied to PBEE but can be implemented in any multi-attribute decision analysis problem.
1 Introduction: the decision analysis process under uncertainty
Probability encoding in uncertain situations is an old problem. Bernoulli [1] raised the issue of subjective probability but did not offer a specific elicitation methodology. Subjective probabilities were introduced in [2] and then axiomatized in [3], where the certainty equivalent (CE) method was used to encode the unique subjective probability distribution. Later, Spetzler and Staël Von Holstein [4] examined alternative encoding procedures for obtaining subjective probabilities. Encoding procedures require the subject to respond to a set of questions either directly, by providing numbers as answers (values or probabilities; known as the judged-probabilities method), or indirectly, by choosing between simple bets (known as the choice-based method). In the indirect method, the bets are adjusted according to the subject's responses until he is indifferent between them. Moreover, Spetzler and Staël Von Holstein [4] distinguished further according to the response mode chosen, i.e., whether a probability or a quantity had to be the answer to the question.
Meanwhile, decision models under risk evolved to consider decision weights. Indeed, in experimental and real-life situations, people do not conform to the expected utility theory of von Neumann and Morgenstern [5] and violate its axioms. Thus, Quiggin [6] introduced a theory of cardinal utility and decision weights to generalize expected utility theory, the aim being to analyze the phenomena associated with the distortion of subjective probability. Decision weights can be obtained from the decumulative probability distribution through a probability weighting (or probability transformation) function, as in the rank-dependent model [6] or cumulative prospect theory [7]. In the latter, one probability transformation function is elicited for gains and another for losses, gains and losses being measured with respect to some anchored origin, as in [8] or in prospect theory [9].
Parametric methods have been most frequently used to elicit probability weighting functions. Many approaches specify parametric forms for these functions and then estimate them through standard techniques [7, 10,11,12]. However, such approaches impose assumptions about functional form. Therefore, Abdellaoui and Munier [13] presented a nonparametric method for a decision model using a univariate value function. Wu and Gonzalez [14] avoided parametric estimation problems by testing simple preference conditions for standard von Neumann Morgenstern utility functions. Abdellaoui [15] used nonparametric methods at the level of individuals to elicit both the utility function—using Wakker and Deneffe's method [16]—and the probability weighting function.
Later, in the mid-nineties, models were proposed to deal with uncertainty by means of decision weights, and methods were proposed to elicit those weights. Several authors [12, 17,18,19] used variants of a decomposition model in which the decision weight assigned to an uncertain event results from a two-stage process: a subjective probability of the event results from the agent's judgment and is then transformed into a decision weight by a transformation function known from earlier experiments under rank-dependent expected utility (RDEU) under risk. Wakker [20] provided a decomposition model of decision weights, which [21] operationalized within a single experiment under uncertainty.
Few authors have tackled obtaining the probability weighting function in the case of many attributes. Using RDEU, Beaudouin et al. [22] elicited a probability weighting function under risk for every attribute and found the weighting functions to be different according to the attribute considered. However, the latter method cannot be reconciled with utility independence in the sense of Keeney and Raiffa [23], which is respected in the present paper.
Abdellaoui et al. [21] used a procedure to elicit and decompose decision weights for gains and losses under uncertainty, i.e., when “objective” probabilities do not exist. However, the procedure was not extended to the multi-attribute theory; moreover, it could not elicit the joint weighting function.
Recently, many interesting works have been proposed in multi-attribute utility theory (MAUT). Abbas and Bell [24] propose a new independence assumption to aid the assessment of multi-attribute utility functions. Bleichrodt et al. [25] demonstrate that standard sequences can also be used in MAUT where risk is assumed. Bosi and Herden [26] argue that a continuous multi-utility representation exists under adequate concepts of continuity of a preorder. Durbach and Stewart [27] review multiple criteria decision analysis models used when the evaluation of attributes is uncertain. Ekeland et al. [28] propose a multivariate extension of the notion of comonotonicity, which consists of simultaneous optimal rearrangements of two vectors of risk. Engel and Wellman [29] offer a new utilization of preference structure in multi-attribute auctions. Galaabaatar and Karni [30] axiomatize expected multi-utility representations of incomplete preferences under risk and under uncertainty. Galaabaatar and Karni [31] provide new axiomatizations of preference relations that exhibit incompleteness in both beliefs and tastes. Mongin and Pivato [32] present a ranking of multidimensional alternatives. Andersen et al. [33] discuss intertemporal utility.
Some works specifically proposed methods to elicit multi-attribute utility functions [34, 35]. Others addressed proper scoring rules [36,37,38]. Many discussed expected utility, such as [39,40,41,42,43,44,45], and [46] examined conditional expected utility. Others discussed ambiguity [47,48,49,50,51,52,53]. Wakker and Yang [54] analyzed concave/convex utility and weighting functions. Some authors debated preferences, as in [55,56,57,58,59,60]. Others discussed probabilities [61,62,63,64,65].
Nevertheless, no work has been proposed to solve the problem of eliciting a joint weighting function. Hence, an innovative method to directly encode joint weights for the multi-attribute utility function under uncertainty is suggested in the present paper. A nonparametric (point-by-point) choice-based method is employed. This method helps identify and describe the true attitude of the decision maker toward probabilities when dealing with multiple attributes. Including people in the decision process allows them to comprehend and embrace their own way of thinking (or decision making) and to decide based on it.
Moreover, this paper aims to improve the decision analysis model used in the performance-based earthquake engineering (PBEE) methodology. The methodology intends to mitigate the seismic risk likely to be encountered by a specific construction. In this context, the newly proposed decision analysis method involves the owner in the decision-making process and helps him embrace his own decision. It enables him to select among building rehabilitation projects based on his personally elicited subjective utilities and probability functions. Thus, when informed not only about the risk but also about his risk attitude toward this specific situation, he will hopefully be more involved and can willingly take measures to help mitigate the seismic threat faced by his structure. Such processes have been used in the medical field since the mid-twentieth century [66]. As noted by Charles et al. [67] and Parsons [68], the goal is to move from a paternalistic pattern to including the patient in the decision-making procedure. This method will help the engineering profession shift from the paternalistic pattern when dealing with seismic risk and involve the community of owners, who are the ones funding the strengthening measures needed to mitigate those risks.
The proposed decision analysis method is not limited to the PBEE context; numerous other applications of this encoding methodology can easily be envisioned.
For that purpose, Sect. 2 presents the existing decision analysis methods used in multi-attribute utility theory and notes their limitations and needed developments. Section 3 proposes the innovative methodology to obtain the joint weighting function attached to the multi-attribute utility function. Section 4 offers the case study of performance-based earthquake engineering and the improvements obtained by using existing decision analysis models, and specifically the decision analysis method proposed in this paper. Section 5 validates the proposed decision analysis method through experimental economics applied to the PBEE case study, and the results of the experiment are presented. Finally, Sect. 6 concludes the paper.
2 Decision analysis in multi-attribute utility theory: existing methods
One of the models still widely used in risk analysis is that of von Neumann and Morgenstern, where the risk attitude of the decision maker is taken into account through the von Neumann Morgenstern utility function. The score of a project is then defined for each possible outcome xi, with discrete probability pi for each event i (i = 1, 2, …, n), and a von Neumann Morgenstern utility function ui(.):

\(V(P) = \sum\nolimits_{i = 1}^{n} {p_{i} u_{i} (x_{i} )} \quad (1)\)
The von Neumann Morgenstern utility reflects the preferences with respect to each attribute or outcome considered. But, as already mentioned, in experimental and real-life situations, people do not conform to the expected utility theory of von Neumann and Morgenstern [5] and violate its axioms. This violation highlighted the fact that people might also subjectively consider the probabilities and not just the outcomes. Indeed, the research that followed the "Allais Paradox" has shown that individuals do transform given probabilities into nonadditive decision weights πi (case of risk) and implicitly make use of risk measures similar to decision weights, sometimes called nonadditive probabilities [69]. Aiming to study the phenomena associated with the distortion of subjective probability, Allais [70] proposed a decision analysis model which is the risk version of the rank-dependent utility model (RDU) proposed by Quiggin [71, 72]. The latter makes use of both the probability transformation function w(p) and the standard utility function u(x). w(p) is defined on the domain [0, 1] of the decumulative probability distribution, such that w(1) = 1, w(0) = 0, and for all p in this domain, w(p) > 0. Noting that the outcomes and their associated probabilities are indexed such that x1 < x2 < ··· < xn, the corresponding score functional for some lottery P can be described by:

\(V_{RDU} (P) = \sum\nolimits_{i = 1}^{n} {\left[ {w\left( {\sum\nolimits_{k = i}^{n} {p_{k} } } \right) - w\left( {\sum\nolimits_{k = i + 1}^{n} {p_{k} } } \right)} \right]u(x_{i} )} \quad (2)\)

equivalently written with decision weights πi as

\(V_{RDU} (P) = \sum\nolimits_{i = 1}^{n} {\pi_{i} u(x_{i} )} \quad (3)\)

\(\pi_{i} = w\left( {\sum\nolimits_{k = i}^{n} {p_{k} } } \right) - w\left( {\sum\nolimits_{k = i + 1}^{n} {p_{k} } } \right) \quad (4)\)
The uncertainty version of this type of decision model is only contained in (3), without the possibility to connect the decision weights to probabilities, as in (4). If the outcomes were mixed, i.e., losses and gains, the cumulative prospect theory model, CPT, with different probability weighting functions w+ for gains and w− for losses, would be the best choice. In the case where only losses are addressed (since the PBEE case study presented in Sect. 4 deals with losses), RDU is the most suitable choice.
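To make the rank-dependent machinery concrete, the following sketch (in Python; the utility and weighting functions shown are hypothetical stand-ins) computes the RDU score of a lottery by applying w to the decumulative probability distribution, in the spirit of Eqs. (3) and (4); when w is the identity, the score reduces to expected utility.

```python
def rdu_value(outcomes, probs, u, w):
    """Rank-dependent utility of a lottery (a sketch).

    Outcomes are ranked worst to best (x1 < x2 < ... < xn); the decision
    weight of outcome i is w applied to the decumulative probability of
    obtaining xi or better, minus w of obtaining strictly better.
    """
    pairs = sorted(zip(outcomes, probs))          # rank worst -> best
    xs = [x for x, _ in pairs]
    ps = [p for _, p in pairs]
    n = len(xs)
    # decumulative probabilities G_i = P(X >= x_i), with G_{n+1} = 0
    G = [sum(ps[i:]) for i in range(n)] + [0.0]
    # decision weights pi_i = w(G_i) - w(G_{i+1}); they sum to w(1) = 1
    return sum((w(G[i]) - w(G[i + 1])) * u(xs[i]) for i in range(n))

identity = lambda p: p
linear_u = lambda x: x
# With the identity weighting, RDU coincides with expected utility:
# rdu_value([-100.0, -10.0, 0.0], [0.1, 0.3, 0.6], linear_u, identity) == -13.0
```

Any strictly increasing w with w(0) = 0 and w(1) = 1 can be plugged in; a distorted w changes the score only through the decision weights, never through u.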
Given that the set of consequences is multidimensional, a multi-attribute utility function f over the set X of all attributes is needed. The multiplicative form of the utility function is presented in Keeney and Raiffa [23]. Even though it can be developed for the case of n attributes, the case of n = 3 and X = {X1, X2, X3} is presented here (as the PBEE case study given in Sect. 4 deals with three attributes). It rests on the assumption that "if X1 is utility independent of {X2, X3}, and if {X1, X2} and {X1, X3} are preferentially independent of X3 and X2, respectively, then
If the interval of the jth attribute is taken as being [\(x_{j}^{0}\), \(x_{j}^{ * }\)], then:
All the kj’s are scaling constants. K is an additional scaling constant given by Eq. (6). If K = 0, then (5) reduces to an additive form described by:
Equation (5) can be used in the framework defined by (3) or (4), as shown in Miyamoto and Wakker [73], and in Dyckerhoff [74]. The difficulties, arising when the w(p) function takes a specific form for each attribute, were shown in Beaudouin et al. [22].
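As a numerical illustration of the multiplicative form and its scaling constant K, the sketch below (Python; the scaling constants kj are hypothetical) solves 1 + K = Π(1 + K kj) for the nonzero root and then combines three partial utilities; when Σ kj = 1, K = 0 and the additive form applies.

```python
import math

def solve_K(k, iters=200):
    """Solve 1 + K = prod(1 + K * k_j) for the nonzero root K > -1
    (Keeney-Raiffa). Returns 0.0 in the additive case sum(k) == 1."""
    s = sum(k)
    if abs(s - 1.0) < 1e-9:
        return 0.0
    f = lambda K: math.prod(1.0 + K * kj for kj in k) - (1.0 + K)
    if s < 1.0:                       # root lies in (0, infinity)
        lo, hi = 1e-9, 1.0
        while f(hi) <= 0.0:
            hi *= 2.0
    else:                             # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    for _ in range(iters):            # plain bisection on the sign change
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def multiattribute_utility(k, K, us):
    """Multiplicative form: K*u + 1 = prod(K*k_j*u_j + 1); additive if K = 0."""
    if K == 0.0:
        return sum(kj * uj for kj, uj in zip(k, us))
    return (math.prod(K * kj * uj + 1.0 for kj, uj in zip(k, us)) - 1.0) / K

# Hypothetical scaling constants for cost, downtime, deaths:
k = [0.6, 0.3, 0.2]                   # sum > 1, so K lies in (-1, 0)
K = solve_K(k)
```

The normalization is built in: with all partial utilities at 1 the multi-attribute utility equals 1, and with all at 0 it equals 0, whatever K solves the scaling equation.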
The utility function can thus be computed using [23]. What remains is the distorted probability function w(p) needed to calculate VRDU in multi-attribute utility theory (MAUT). Methods such as [15] estimate the distorted probability of each outcome pi, but no technique has so far been proposed to compute w(p) when this probability is attached to all the outcomes. The method proposed in Sect. 3 allows computing the function w(p) related to all outcomes, i.e., the joint probability, in multi-attribute utility theory, which helps to obtain VRDU in MAUT.
3 The suggested procedure: multi-attribute utility function and joint weighting function
In this section, a nonparametric method is proposed to elicit both a multi-attribute utility function and a joint weighting function associated with this multi-attribute utility. It enables the evaluation of some prospect P using the RDU [70, 71] or, if the probabilities are unknown (uncertainty), nonadditive multi-attribute utility weights—the weighting function denoted, as in [74], by w or w(p). Even though the method is presented here for three attributes (since the PBEE case study is limited to three attributes), the theory holds similarly and extends straightforwardly to n attributes.
3.1 Elicitation of the partial utility functions
The three chosen attributes (cost, downtime, and deaths) are indexed by j and denoted by xj (j = 1, 2, 3). As a first step (step 1), the three partial utility functions related to the three attributes were elicited, following the method used in [15, 16]. Abdellaoui's [15] process follows the approach suggested by Wakker and Deneffe [16] to elicit the utility functions: a "standard sequence" of outcomes, i.e., a sequence of equally spaced outcomes in terms of utility, is constructed. The primary advantage of the method is its robustness to any transformation of probabilities; indeed, the procedure works even when the probabilities are unknown.
The standard sequence obtained from the elicitation of some partial utility function uj(xj) is constructed as follows (j = 1, 2, 3). An outcome \(x_{j}^{1}\) was determined to make the subject indifferent between the prospects (\(x_{j}^{0}\), p; Rj, 1 − p) and (\(x_{j}^{1}\), p; rj, 1 − p), denoted (\(x_{j}^{0}\), p; Rj) and (\(x_{j}^{1}\), p; rj), respectively, where \(x_{j}^{1}\) < \(x_{j}^{0}\) < Rj < rj ≤ 0 are negative real numbers (losses), with some discretionary p ∈ (0, 1). The amounts rj, Rj, and \(x_{j}^{0}\) are held fixed for any given j. Then, an outcome \(x_{j}^{2}\) was determined to make the subject indifferent between the prospects (\(x_{j}^{1}\), p; Rj) and (\(x_{j}^{2}\), p; rj). Under RDU, the two obtained indifferences lead to the following two equations, for a given j:
The elicitation of the jth partial utility function related to the jth attribute leads to:
Therefore, \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{n}\) can be determined as a decreasing standard sequence of losses, i.e., \(\forall j,\) a sequence \(x_{j}^{n}\) < \(x_{j}^{n - 1}\) < ··· < \(x_{j}^{1}\) < \(x_{j}^{0}\) of equally spaced outcomes in terms of utility. Thus, the following equation is obtained:
The same procedure is repeated to construct the other standard sequences in eliciting the other partial utility functions uj(xj), with the same property holding for all attributes j. Finally, we note that in the case where the subject is an EU maximizer (i.e., does not distort probabilities, but considers them objectively), the same concept is applied, except that the particular case of \(w(p) = p\) is obtained, where w is the probability weighting function.
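Because the trade-off indifferences make the elicited outcomes equally spaced in utility, assigning (dis)utility values to a standard sequence requires no knowledge of w(p). The sketch below (Python, with a hypothetical elicited cost sequence) normalizes u at 0 for the best outcome \(x_{j}^{0}\) and 1 for the worst \(x_{j}^{n}\), and interpolates linearly between the elicited points.

```python
def standard_sequence_utilities(seq):
    """seq = [x^0, x^1, ..., x^n], a decreasing sequence of losses.
    Equal utility spacing implies u(x^i) = i/n on a disutility scale
    (0 at the best outcome x^0, 1 at the worst x^n), whatever w(p) is."""
    n = len(seq) - 1
    return {x: i / n for i, x in enumerate(seq)}

def partial_utility(x, seq):
    """Piecewise-linear interpolation between elicited points (a sketch)."""
    n = len(seq) - 1
    for i in range(n):
        hi, lo = seq[i], seq[i + 1]          # hi > lo (losses decrease)
        if lo <= x <= hi:
            return (i + (hi - x) / (hi - lo)) / n
    raise ValueError("outcome outside the elicited range")

# Hypothetical elicited cost sequence (M euro):
cost_seq = [-5.0, -9.0, -14.0, -20.0]
# standard_sequence_utilities assigns u(-9) = 1/3 and u(-14) = 2/3
```

The interpolation step is only needed for outcomes that fall between elicited points; the sequence itself carries the exact utility levels.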
3.2 Elicitation of the joint weighting function
This paper proposes an innovation in decision analysis, namely the direct elicitation of the joint weighting function. To the best of our knowledge, no method exists to solve this problem.
In [15], the standard sequence of outcomes constructed above in step 1 (eliciting the partial utility function) is used in simple risky choices to obtain a standard sequence of probabilities, i.e., equally spaced probabilities of each attribute successively (j = 1, …,m) in terms of the weighting function. However, the proposed method tackles the joint probability weighting function of the multi-attribute utility function. It derives from [16] as above, and also from the procedure in [23] used in the multi-objective case, but it does not reduce to any of those methods. Let us consider the above-defined sequences \(x_{j}^{0}\), \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{n}\), for j = 1, 2, 3. Consider now the probabilities pi, i = 1, …, n − 1, satisfying indifference between the following lotteries, where each lottery encompasses elements of all three attributes:
Relation (11) means that the subject is indifferent between, on one hand, the lottery giving with probability pi the worst outcome (\(x_{1}^{n}\), \(x_{2}^{n}\), \(x_{3}^{n}\)) and with probability (1 − pi) the best outcome (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\)), and, on the other hand, the certain outcome (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) obtained from the three standard sequences previously elicited, for every i = 1, …, n − 1, where \(x_{j}^{n}\) < ··· < \(x_{j}^{0}\) < 0, for j = 1, 2, 3. Such indifference implies:
Knowing that u(\(x_{1}^{n}\), \(x_{2}^{n}\), \(x_{3}^{n}\)) = 1, and u(\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\)) = 0 (hence, all our partial utilities are in fact disutilities, the best outcome having u = 0, the worst u = 1), we obtain:
Because u (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) can be computed from the global utility function, w(pi) can be obtained easily.
Therefore, the assessment of a standard sequence of outcomes and the construction of the indifferences (12) allow direct elicitation of the joint weighting function w related to the multi-attribute utility function.
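The point-by-point elicitation of Sect. 3.2 can be summarized in a few lines. Under RDU, the indifference underlying (11)–(12) gives w(pi) = u(\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) directly; the sketch below (Python, with hypothetical elicited data) collects these points into a weighting curve, checks monotonicity, and interpolates between them.

```python
def elicit_joint_weighting(points):
    """points: list of (p_i, u_i) pairs, where u_i is the multi-attribute
    disutility of the sure triple (x1^i, x2^i, x3^i) computed from the
    global utility function. The indifference
        w(p_i) * 1 + (1 - w(p_i)) * 0 = u_i
    pins w down point by point; the endpoints w(0)=0 and w(1)=1 are added."""
    curve = [(0.0, 0.0)] + sorted(points) + [(1.0, 1.0)]
    ws = [wv for _, wv in curve]
    assert all(a <= b for a, b in zip(ws, ws[1:])), "w must be nondecreasing"
    return curve

def w_at(p, curve):
    """Linear interpolation of the elicited joint weighting function."""
    for (p0, w0), (p1, w1) in zip(curve, curve[1:]):
        if p0 <= p <= p1:
            return w0 + (w1 - w0) * (p - p0) / (p1 - p0)
    raise ValueError("p must lie in [0, 1]")

# Hypothetical elicited indifference points (p_i, u(x^i)):
curve = elicit_joint_weighting([(0.25, 0.40), (0.50, 0.55), (0.75, 0.70)])
```

The hypothetical points above trace the inverse-S shape commonly observed, with small probabilities overweighted; the elicitation itself imposes no parametric form.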
4 An improved decision analysis model for performance-based earthquake engineering
The method we propose in this article aims to improve the decision analysis model proposed for the performance-based earthquake engineering (PBEE) methodology developed at the Pacific Earthquake Engineering Research Center (PEER), among others. It is described in the PEER report 2005/2011 [66]: "This method is defined as design, evaluation, and construction of engineered facilities whose performance under common and extreme loads responds to the diverse needs and objectives of owner–user and society." The performance assessment addresses a facility defined by its location, design (structural and non-structural), and site conditions. It embodies four stages: hazard analysis, structural analysis, damage analysis, and loss analysis. Each stage considers one of four variables: intensity measure (IM), engineering demand parameter (EDP), damage measure (DM), and decision variable (DV), the latter being, for example, the total repair cost, the repair duration, and the number of casualties (dollars, downtime, and deaths).
One is thus led to the framework equation for performance assessment for the desired realization of the decision variable, such as the mean annual frequency (MAF) of the decision variable DV, λ(DV), in accordance with the total probability theorem:

\(\lambda (DV) = \iiint {G(DV|DM)\left| {{\text{d}}G(DM|EDP)} \right|\left| {{\text{d}}G(EDP|IM)} \right|\left| {{\text{d}}\lambda (IM)} \right|}\)
This integration implies that the conditional probabilities G(EDP|IM), G(DM|EDP), and G(DV|DM) need to be assessed parametrically over a suitable range of levels of the damage measure DM, the engineering demand parameter EDP, and the intensity measure IM. PBEE is thus a probabilistic method accounting for all uncertainties in all four assessed stages. It is not just based on "building codes" but considers decision analysis and risk management. Thus, it helps implement more cost-effective solutions and avoids the uneconomical solution of strictly applying the current seismic code provisions proposed for new buildings. Indeed, uneconomical retrofit solutions are usually rejected by the owners, who cannot afford the price. Moreover, earthquakes are mainly low-probability high-consequence events; therefore, the preferences of the owner need to be considered in the decision-making process, i.e., the distortion of the perception of consequences and of probabilities, if the latter are known.
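In discretized form, the framing equation chains the four analysis stages by total probability. The following sketch (Python; all level names and numbers are hypothetical illustrations, not PEER data) sums the exceedance contributions over discrete IM, EDP, and DM levels.

```python
def mean_annual_frequency(p_dv_given_dm, p_dm_given_edp, p_edp_given_im,
                          d_lambda_im):
    """Discretized sketch of the PBEE framing equation:
    lambda(DV) = sum over im, edp, dm of
        P(DV | dm) * P(dm | edp) * P(edp | im) * |d lambda(im)|."""
    total = 0.0
    for im, dlam in d_lambda_im.items():
        for edp, p_edp in p_edp_given_im[im].items():
            for dm, p_dm in p_dm_given_edp[edp].items():
                total += p_dv_given_dm[dm] * p_dm * p_edp * dlam
    return total

# Hypothetical two-level hazard discretization:
d_lambda_im    = {"moderate": 0.02, "severe": 0.002}     # MAF increments
p_edp_given_im = {"moderate": {"small_drift": 1.0},
                  "severe":   {"large_drift": 1.0}}
p_dm_given_edp = {"small_drift": {"minor": 1.0},
                  "large_drift": {"collapse": 1.0}}
p_dv_given_dm  = {"minor": 0.1, "collapse": 0.9}         # P(DV exceeds dv)
```

In practice each conditional distribution would span many levels; the nested-sum structure stays the same, mirroring the nested integral.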
4.1 Evaluating alternatives
To evaluate alternatives, measures of effectiveness must be specified, because they explicitly describe the potential impact on each of the involved agents (the owner, such as the builder or building contractor, and the users of the considered facilities). The decision model (named the 3D model) chosen by the PEER committee considers three attributes (or decision variables): deaths (D), downtime (D), and cost in dollars (D). The attributes are presented in order of their importance to owners, an order obtained from a statistical referendum [66]. The PBEE methodology helps assess their values while the specific building faces likely scenarios of a future earthquake.
The chosen model of cost, downtime, and deaths is widespread; it has been the classical loss model used in performance-based earthquake engineering worldwide since PBEE was proposed. Many have used it, such as [75, 76], and many codes have adapted it, such as [77] or [78]. Moreover, this model is used for consequence-based risk management at the city level, for example in Ergo, a global platform for loss estimation, in Hazus for the USA, or in the SYNER-G methodology in Europe using Capra, among others.
This paper limits itself to the three mentioned attributes, which are the most decisive ones, because technological advances allow a good understanding of each of them. Indeed, many methods are available to evaluate the damage fragility functions needed to evaluate those attributes. Other, less important attributes, such as injuries and environmental components, might currently be more difficult to quantify [79].
4.2 The decision analysis process
The PBEE decision analysis process [66] singles out only one decision variable from the multi-criteria decision model. The procedure can be improved by establishing the optimal decision through a multi-criteria model that considers the subjective evaluation of the criteria: cost, downtime, and deaths. Such preferences (i.e., the subjective view of the owner regarding the three attributes) may exclude strengthening of the whole structure, which is very costly and might be unaffordable for the owner, who would then reject the strengthening project.
Some authors, such as [80], have considered expected utility in making seismic risk management decisions for individual buildings, using the assembly-based vulnerability methodology. In [80], Porter used a parametric method to elicit an exponential utility function. Recently, Cha and Ellingwood [81] studied the role of risk aversion in seismic risk mitigation of building structures. They used a parametric cumulative prospect theory (CPT) model, which has the advantage of handling mixed lotteries, i.e., loss and gain prospects [7].
In this paper, we choose a more suitable decision analysis model for the specific case of the PBEE decision analysis process, enabling mitigation of the seismic risk. As noted in Sect. 2, when only losses are addressed, RDU is the most suitable choice. Moreover, RDU has been shown to be suitable for low-probability high-consequence events [82]. Since the 3D model of cost, downtime, and deaths is used in the PBEE case study, the attributes or outcomes are limited to the case of n = 3.
In the end, after performing all elicitations and computations, it will be possible to compare all proposed strengthening projects based on VRDU. Thus, instead of using the mathematical expectation \(U(P) = \sum\nolimits_{i = 1}^{n} {p_{i} x_{i} }\), or maximizing expected utility with u a logarithmic (or other nonlinear) function, \(U(P) = \sum\nolimits_{i = 1}^{n} {p_{i} u(x_{i} )}\), it is possible to compare the total subjective evaluations of the decision maker through the formulas presented in Eqs. (3) and (4). The project with the highest VRDU is retained as the best from the point of view of the decision maker.
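The practical payoff of this comparison can be sketched as follows (Python; the two projects and all numbers are hypothetical). Under objective expectation the rare catastrophic loss looks cheaper than retrofitting, but a weighting function that overweights small probabilities, as elicited subjects typically exhibit, can reverse the ranking.

```python
def expected_utility(lottery, u):
    """lottery: list of (outcome, probability) pairs."""
    return sum(p * u(x) for x, p in lottery)

def rdu_losses(lottery, u, w):
    """RDU for pure-loss lotteries: rank outcomes worst first and apply w
    to the cumulative probability of doing at least that badly (a sketch)."""
    value, cum_prev = 0.0, 0.0
    for x, p in sorted(lottery):                # most negative loss first
        cum = cum_prev + p
        value += (w(cum) - w(cum_prev)) * u(x)
        cum_prev = cum
    return value

# Two hypothetical alternatives (losses in M euro):
do_nothing = [(-200.0, 0.01), (0.0, 0.99)]      # rare catastrophic loss
strengthen = [(-4.0, 1.0)]                      # certain retrofit cost
u = lambda x: x                                 # linear utility, for brevity
w = lambda p: p ** 0.5                          # overweights small p

# EU prefers doing nothing (-2 vs -4); RDU with this w prefers strengthening.
```

The project ranking therefore depends on the elicited w, which is precisely why the joint weighting function must come from the decision maker rather than from an objective assumption.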
Finally, in Sect. 5, the decision analysis method proposed in Sect. 3 is validated through experimental economics applied to the PBEE real case study. We note that the values of the alternatives of the 3D model were not computed in this paper, since they are already calculated in the report [66]. Even though the PBEE methodology is a lengthy process requiring specific competence, it is already available to all. The work in this paper is limited to improving the decision analysis model applied to PBEE.
5 Applying the procedure: the experimental protocol
As shown previously, the functional representing our case best is the RDU functional; therefore, a probability weighting function is needed in addition to the utility function. In this case, the only method that can elicit the utility function without prior knowledge of the weighting function w(.) is the trade-off (TO) method [16].
To elicit the probability weighting function and the utility function of the decision maker, a new method was used through a computer algorithm, which was developed borrowing from the method in [16] and the bisection method in [15]. The conducted experiment and the employed protocol are described. Figure 1 shows the flowchart of the experimental protocol.
5.1 Subjects, stimuli, and procedure
All subjects were civil engineers (either project managers, PhDs or practitioners with substantial working experience in the field, or students in their last year of a civil engineering school). Their engineering background and practical experience allowed them to understand the real engineering concepts behind the decision-making process. Participants were motivated by their willingness to help move scientific research in their field forward; therefore, they were not paid. As noted in [83], "the results of choice-based experiments rather than judgment-based experiments do not most of the time substantially depend on a compensation scheme."
Participants were explicitly told, and also read, the following statement: in this experiment, you are, as a civil engineer, in charge of choosing a civil engineering rehabilitation project taking into consideration three factors: the cost of rehabilitation, the downtime (the time needed to accomplish this task), and the deaths that could result should a disaster, i.e., an earthquake, occur. You are responsible for managing the situation, finding the best compromise between the cost of strengthening, downtime, and likely losses of human lives.
This section describes two experiments that elicit, under RDU, the utility function \(u_{{}} (x_{1}^{{}} ,x_{2}^{{}} ,x_{3}^{{}} )\) and the probability weighting function. A significant effort was made to collect high-quality data from the 30 subjects recruited to participate. In these experiments, all prospects entailed loss outcomes; the first attribute (cost) varied from − 5 M€ to − 605 M€, the second attribute (downtime) from − 10 to − 10,810 days, and the third attribute (number of deaths) from − 5 to − 2405 deaths. As noted in [15], "to make the curvature of the utility function sufficiently pronounced, it is necessary to investigate a sufficiently wide interval of outcomes." The intervals considered for each outcome were chosen to ensure that the obtained standard sequence reached or contained the real estimated value of the project through the loss evaluation models in the PEER report 2005/2011 [66]. Indeed, as noted in [15], the outcomes need to be chosen such that the range between them includes all outcomes of interest.
The experiments were conducted at an individual level. Subjects were seated in front of a personal computer and were encouraged to take their time. Many were not familiar with probabilities and expectations; thus, they were all given the needed explanations regarding "choice trials" approximately 10–15 min before the experiment started. Then, subjects participated in a 30–40 min session comprising trade-off (TO) experiments, consisting of outright choices between two prospects, followed by probability weighting (PW) experiments, then the experiments to elicit the scaling constants, and finally a consistency check. The answers to each set of questions were used in the subsequent ones. Before applying the procedure, the assumptions must be assessed.
5.2 Assessing the assumptions
5.2.1 Utility independence
The definition of utility independence is presented in [84]. A method to test this assumption is presented in [23], where it is clearly noted that utility independence is a necessary condition to build the multi-attribute utility function, as per the definition offered in Sect. 2. In the following, \(x_{1}^{{}}\) indexes cost, \(x_{2}^{{}}\) stands for downtime, and \(x_{3}^{{}}\) represents the number of deaths. Utility independence (UI) of downtime \(x_{2}^{{}}\) and deaths \(x_{3}^{{}}\) was assessed, i.e., \(x_{3}^{{}}\) UI \(x_{2}^{{}}\) and conversely \(x_{2}^{{}}\) UI \(x_{3}^{{}}\). Additionally, cost and deaths are UI: \(x_{1}^{{}}\) UI \(x_{3}^{{}}\) and \(x_{3}^{{}}\) UI \(x_{1}^{{}}\). Finally, cost and downtime were considered UI (\(x_{1}^{{}}\) UI \(x_{2}^{{}}\) and \(x_{2}^{{}}\) UI \(x_{1}^{{}}\)) with a margin of error of approximately 10%. Indeed, only one person noted that downtime is primarily related to cost, whereas two others evoked the relation as having very weak meaning, hence the rough 10% estimate. This margin is acceptable for stating the independence of those two outcomes [23]. The procedure is detailed in [79], following the methodology and recommendations presented in [23].
5.2.2 Stochastic independence
Because the usual methods compute multi-attribute scores from the partial utility scores, stochastic independence is required to avoid very complex computations. In the present method, we do not elicit three subjective probability functions, each attached to its related attribute. Instead, the proposed method directly addresses the multi-attribute utility and its joint probability distribution, so the stochastic independence assumption is not needed.
5.3 Encoding utility functions
For the first attribute, the outcomes x0, R, and r (see the section above) were fixed at − 5 M€, − 4 M€, and − 3 M€, respectively. For the second attribute, the outcomes were fixed at − 10 days, − 3 days, and − 1 day, respectively. Finally, for the third attribute, the outcomes were fixed at − 5 deaths, − 3 deaths, and − 1 death, respectively. Indeed, Wakker and Deneffe [16] noted that the reference outcomes r and R should be chosen close enough to each other so that the revealed sequence x1, x2, …, xn is sufficiently narrow and yields the utility at the desired level of accuracy.
In these experiments, subjects were asked to choose between the lotteries in each pair. Based on the answers to our choice questions for each of the three attributes, a standard sequence of seven outcomes was constructed for each attribute: (\(x_{1}^{1}\), \(x_{1}^{2}\), …, \(x_{1}^{7}\)), then (\(x_{2}^{1}\), \(x_{2}^{2}\), …, \(x_{2}^{7}\)), and finally (\(x_{3}^{1}\), \(x_{3}^{2}\), …, \(x_{3}^{7}\)). The choice-based method was used because, as clarified in [85], “Choice is more consistent than matching.”
Seven iterations (questions) were needed to assess each outcome \(x_{1}^{i}\), i = 1, …, 7, of the standard sequence using the bisection method, described as follows. Suppose that \(x_{1}^{i - 1}\) is a known outcome. To determine \(x_{1}^{i}\), the subject is asked in the kth choice to choose between the prospects A = (\(x_{1}^{i - 1}\), p; − 4) and B = (\(x_{1k}^{i}\), p; − 3), where \(x_{1k}^{i}\) is the midpoint of the interval of “feasible outcomes” corresponding to the kth iteration (question). The interval corresponding to the first iteration is [\(x_{1}^{i - 1}\), \(x_{1}^{i - 1} + \Delta\)]. If the subject expresses a strict preference for prospect A, the next choice situation makes prospect B more attractive by replacing \(x_{1k}^{i}\) with the midpoint of the half-interval [\(x_{1}^{i - 1}\), \(x_{1}^{i - 1} + \Delta /2\)] nearer \(x_{1}^{i - 1}\). If the subject expresses a strict preference for prospect B, the next choice situation makes prospect B less attractive by replacing \(x_{1k}^{i}\) with the midpoint of the other half-interval [\(x_{1}^{i - 1} + \Delta /2\), \(x_{1}^{i - 1} + \Delta\)]. As introduced by Abdellaoui and Munier [86] in the “closing-in” method, this process progressively narrows the interval containing \(x_{1}^{i}\). The computer algorithm used seven iterations because convergence was attained by the seventh iteration; the midpoint of the last interval is taken as \(x_{1}^{i}\).
This procedure is repeated for every i = 2, …, 7; thus \(x_{1}^{2}\), \(x_{1}^{3}\), \(x_{1}^{4}\), \(x_{1}^{5}\), \(x_{1}^{6}\), \(x_{1}^{7}\) were obtained successively. A similar procedure is also used in [21].
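The bisection scheme above can be sketched in a few lines of Python. This is a minimal illustration, not the paper’s protocol (which queried real subjects): the `prefers_A` callback and the numerical values below are hypothetical stand-ins for the subject’s answers.

```python
def elicit_outcome(x_prev, delta, prefers_A, iterations=7):
    """Bisection over the feasible interval [x_prev, x_prev + delta] for the
    next standard-sequence outcome; returns the midpoint of the final interval.
    prefers_A answers the k-th choice: True if A = (x_prev, p; R) is
    strictly preferred to B = (candidate, p; r)."""
    lo, hi = x_prev, x_prev + delta
    for _ in range(iterations):
        candidate = (lo + hi) / 2.0
        if prefers_A(candidate):
            hi = candidate  # A preferred: make B more attractive (half nearer x_prev)
        else:
            lo = candidate  # B preferred: make B less attractive
    return (lo + hi) / 2.0

# Simulated subject indifferent at -4.37: prefers A whenever B's candidate
# outcome is a deeper loss than the indifference point.
x1 = elicit_outcome(x_prev=-4.0, delta=-1.0, prefers_A=lambda c: c < -4.37)
```

After seven choices, the final interval has width \(|\Delta|/2^{7}\), so the returned midpoint pins the indifference point down to that accuracy.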
For the first attribute, |Δ| was fixed at 100 M€; for the second, at 1800 days; and for the third, at 400 deaths. Once the standard sequence \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{7}\) was obtained, the computer algorithm checked the subject’s reliability by asking him to choose again between the two prospects corresponding to the fourth iteration for each \(x_{j}^{i}\), j = 1, 2, 3 and i = 1, 2, …, 7.
Reference [16] does not prescribe a precise technique for selecting the probability p used in eliciting the utility functions. In [87], p was 1/3; in [86], 1/2; and in [15], 2/3. The latter author noted that “All of the recent experimental studies using the trade-off method produced very similar results.” Probabilities close to one, or equal to one half, were discarded to avoid heuristic biases. Therefore, the probability 2/3 was used, and the three partial utility functions, each based on the seven items of its standard sequence, were obtained.
5.4 Eliciting the joint probability weighting function
Using a single joint probability weighting function for the three attributes ensures that the global preference in the studied case is taken into consideration and guarantees that Keeney and Raiffa’s utility independence assumptions are respected.
The procedure is as follows. In the PW experiments, the objective was to determine the probabilities p1, …, p7. Each subject was therefore asked a new series of questions, the goal of each being to determine the pi, i = 1, 2, …, 7, that makes the subject indifferent between the certain outcome \(A_{k}^{i}\) = (\(x_{1}^{i}\), \(x_{2}^{i}\), \(x_{3}^{i}\)) and the prospect \(B_{k}^{i}\) = ((\(x_{1}^{7}\), \(x_{2}^{7}\), \(x_{3}^{7}\)), pik; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))), k = 1, …, 7. For the first set of questions, to obtain p1, we have i = 1; we start with p11 = 1/2 and the first outcome \(A_{k}^{1}\) = (\(x_{1}^{1}\), \(x_{2}^{1}\), \(x_{3}^{1}\)). Here, \(x_{j}^{0}\) and \(x_{j}^{7}\), j = 1, 2, 3, are the two interval limits of each attribute, and \(x_{j}^{1}\) is the first value obtained from the standard sequence of each attribute. p1k is the midpoint of the “feasible interval” corresponding to the kth iteration (question); the interval corresponding to the first iteration is [0, 1]. The procedure used by the computer algorithm is as follows: if the subject expresses a strict preference for prospect A (respectively B), the next choice situation makes prospect B more (less) attractive by replacing p1k with the midpoint of the interval [0, 1/2] (respectively [1/2, 1]). A series of seven trade-offs is thus performed, and the probability p1 is finally taken to be the midpoint of the last interval. The bisection procedure followed by the computer algorithm to determine each pi is the same as the one used to obtain the outcome \(x_{1}^{1}\).
Then, for the second set of questions, we have i = 2, and the same method is used to obtain the probability p2. In the comparison, however, the certain outcome becomes the second value of the standard sequences, \(A_{k}^{2}\) = (\(x_{1}^{2}\), \(x_{2}^{2}\), \(x_{3}^{2}\)), and we start with p21 = 3/4 for k = 1.
Seven choice questions were needed to assess each probability of the standard sequence. By repeating this trade-off series for every value of the standard sequences \(x_{j}^{1}\), \(x_{j}^{2}\), …, \(x_{j}^{7}\), j = 1, 2, 3, the probabilities p1, p2, …, p7 were obtained successively. Each subject thus answered a series of seven choice questions per probability, because convergence was attained at the seventh iteration. The reliability test in the PW experiments is as follows: the subject is asked to choose again between the two prospects \(A_{4}^{i}\) and \(B_{4}^{i}\) for every i = 1, 2, …, 7.
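The probability elicitation is the same bisection idea applied to [0, 1]. The sketch below simulates it for a hypothetical subject; the weighting function w(p) = p**0.7 is invented for illustration, and the reading w(p_i) = i/7 rests on the sketch’s assumption that the standard-sequence outcomes are equally spaced in (dis)utility, normalized to 0 at the (x0) triple and 1 at the (x7) triple.

```python
def elicit_probability(prefers_sure, iterations=7):
    """Bisection on [0, 1] for the probability making the subject
    indifferent between the sure triple A_i and the lottery B_i."""
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        p = (lo + hi) / 2.0
        if prefers_sure(p):
            hi = p  # sure outcome preferred: make the lottery more attractive
        else:
            lo = p  # lottery preferred: make it less attractive
    return (lo + hi) / 2.0

# Hypothetical subject whose joint weighting is w(p) = p**0.7, applied to the
# probability of the worse extreme triple; indifference then occurs at
# w(p_i) = i/7, so the pairs (p_i, i/7) trace the weighting function pointwise.
w = lambda p: p ** 0.7
graph = []
for i in range(1, 8):
    p_i = elicit_probability(lambda p: w(p) > i / 7.0)
    graph.append((p_i, i / 7.0))
```

Each elicited p_i lands within 1/2⁸ of the simulated subject’s true indifference probability, and the seven pairs form the point-by-point graph of the joint weighting function described above.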
5.5 Assessing the scaling factors kj’s and the constant K
Assessing the scaling constants is not an easy task. The frequently used method suggested in [23], although reliable and practical in many case studies, did not give appropriate results in this application: the attribute “number of deaths” absorbed all of the attention in the trade-off experiments and made it challenging to obtain logical and acceptable results. This shortcoming, encountered even by experts familiar with elicitation procedures because of the delicateness of the process, is noted by Keeney and Raiffa [23] as follows: “A major shortcoming of both questions I & II is the use of extreme levels of the attributes, …., we must force the decision-maker to respond to questions that are much more difficult than would be theoretically necessary, … rank the ki’s for less complexity, ….”. Therefore, Serquin’s method [88] was used here, with an adjustment in steps two and three: the order of the equations was changed, but not the essential characteristics of the methodology. Given this delicate situation, the assessment presented in the following was performed on a subject-by-subject basis [79]:
The first step requires the decision maker to rank the three attributes cost, downtime, and deaths in order of importance. Assume this order to be k3 > k2 > k1. As a consequence of this order, the decision maker next assesses \(x_{2}^{'}\) through the indifference (\(x_{1}^{*}\), \(x_{2}^{0}\)) ~ (\(x_{1}^{0}\), \(x_{2}^{'}\)), which yields the equation k1 = k2 u2(\(x_{2}^{'}\)). Thus, k1 is obtained as a function of k2.
Step two requires the decision maker to choose between ((\(x_{1}^{*}\), \(x_{2}^{*}\), \(x_{3}^{*}\)), p1; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))) and (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{*}\)) until the value of p1 producing indifference is obtained. Then, replacing p1 in the following: when the subject reaches, in terms of \(x_{3}^{'}\), the indifference ((\(x_{2}^{0}\), \(x_{3}^{\text{REF}}\)), p1; (\(x_{2}^{0}\), \(x_{3}^{0}\))) ~ (\(x_{2}^{0}\), \(x_{3}^{'}\)), then k3 = u3(\(x_{3}^{'}\))/u3(\(x_{3}^{\text{REF}}\)) can be computed, where \(x_{3}^{\text{REF}}\) = (\(x_{3}^{0}\) + \(x_{3}^{*}\))/2. The expression of k3 is free of w(p1), which cancels during the computation, as shown by Serquin [88]. In step three, the same procedure is repeated: the decision maker chooses between ((\(x_{1}^{*}\), \(x_{2}^{*}\), \(x_{3}^{*}\)), p2; (\(x_{1}^{0}\), \(x_{2}^{0}\), \(x_{3}^{0}\))) and (\(x_{1}^{0}\), \(x_{2}^{*}\), \(x_{3}^{0}\)) until the value of p2 producing indifference is obtained. Then, replacing p2: when the subject reaches, in terms of \(x_{2}^{''}\), the indifference ((\(x_{1}^{0}\), \(x_{2}^{\text{REF}}\)), p2; (\(x_{1}^{0}\), \(x_{2}^{0}\))) ~ (\(x_{1}^{0}\), \(x_{2}^{''}\)), then k2 = u2(\(x_{2}^{''}\))/u2(\(x_{2}^{\text{REF}}\)) can be computed, where \(x_{2}^{\text{REF}}\) = (\(x_{2}^{0}\) + \(x_{2}^{*}\))/2. The latter expression is likewise free of w(p2), which cancels during the computation.
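As a numerical illustration of the three steps, the sketch below computes k1, k2, and k3 from invented indifference points, assuming (purely for illustration) linear partial utilities; none of these numbers are the subjects’ elicited data, and `u_norm` is a hypothetical helper, not part of the paper’s protocol.

```python
def u_norm(x, worst, best):
    """Hypothetical linear partial utility, 0 at the worst level, 1 at the best."""
    return (x - worst) / (best - worst)

# Hypothetical attribute ranges (worst, best) and midpoint reference levels
x3_0, x3_star = -400.0, 0.0           # deaths
x2_0, x2_star = -1800.0, 0.0          # downtime (days)
x3_ref = (x3_0 + x3_star) / 2.0       # x3_REF
x2_ref = (x2_0 + x2_star) / 2.0       # x2_REF

# Hypothetical elicited indifference outcomes
x3_prime = -280.0                     # from the step-two indifference
x2_second = -1350.0                   # from the step-three indifference
x2_prime = -900.0                     # from the step-one indifference

k3 = u_norm(x3_prime, x3_0, x3_star) / u_norm(x3_ref, x3_0, x3_star)   # k3 = u3(x3')/u3(x3_REF)
k2 = u_norm(x2_second, x2_0, x2_star) / u_norm(x2_ref, x2_0, x2_star)  # k2 = u2(x2'')/u2(x2_REF)
k1 = k2 * u_norm(x2_prime, x2_0, x2_star)                              # k1 = k2 * u2(x2')
```

With these illustrative inputs the constants come out as k3 = 0.6, k2 = 0.5, k1 = 0.25, respecting the assumed ranking k3 > k2 > k1, and their sum exceeds 1, which points to the multiplicative form discussed next.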
To calculate the utility function, K must be computed using the method in [23]. K results from Eq. (6), which can also be written as:

\(1 + K = \prod_{j=1}^{3} \left( 1 + K k_{j} \right)\)
Because k1, k2, and k3 are known, the equation has only one unknown, K. As previously noted, since \(\sum {k_{i} } \ne 1\) (here \(\sum {k_{i} } > 1\)), the utility function is multiplicative rather than additive.
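The non-trivial root K of the multiplicative-form equation can be found numerically. The bisection sketch below is one possible implementation, not the paper’s code; it uses the standard bracketing facts that K = 0 is always a trivial root, that Σki > 1 places the relevant root in (−1, 0), and that Σki < 1 places it in (0, ∞).

```python
def solve_K(k, tol=1e-12):
    """Bisection for the non-trivial root of 1 + K = prod(1 + K*k_j).
    Assumes sum(k) != 1; the trivial root K = 0 is excluded by the bracket."""
    def f(K):
        prod = 1.0
        for kj in k:
            prod *= 1.0 + K * kj
        return prod - (1.0 + K)

    if sum(k) > 1.0:
        lo, hi = -1.0 + 1e-9, -1e-9   # multiplicative case: -1 < K < 0
    else:
        lo, hi = 1e-9, 1.0            # sum(k) < 1: K > 0
        while f(hi) < 0.0:
            hi *= 2.0                 # expand the bracket until f changes sign
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if f(lo) * f(mid) <= 0.0:
            hi = mid                  # root lies in [lo, mid]
        else:
            lo = mid                  # root lies in [mid, hi]
    return (lo + hi) / 2.0

# Illustrative constants k = (0.6, 0.5, 0.25): sum(k) = 1.35 > 1 and the
# non-trivial root is K = -2/3, since 0.6 * (2/3) * (5/6) = 1/3 = 1 + K.
K = solve_K([0.6, 0.5, 0.25])
```

The sign change that the bisection exploits follows from f(−1) = Π(1 − kj) > 0 and f(−ε) ≈ −ε(Σki − 1) < 0 when Σki > 1.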
5.6 Experimental results
Following the simple procedure described above, the joint probability weighting function was elicited. It thus became possible to translate the subjective evaluations of engineers into equations, allowing the strategies for strengthening buildings against future earthquakes to be assessed.
For most subjects, the loss attributes cost, downtime, and number of deaths were found to have convex utility functions or utility functions close to linear, the latter reflecting what is called, under EU, risk-neutral behavior. Moreover, for most subjects, a very concave joint probability weighting function was obtained; combined with the utility function, it could therefore express risk aversion.
The largest value of the scaling constant was given to the attribute number of deaths, whereas the scaling constants related to cost and downtime varied widely from one person to another. Experienced engineers gave a higher scaling constant to downtime than to cost. Typical results and diagrams for four subjects are presented; similar results were obtained for the other 26 subjects.
As shown in Fig. 2, for the first attribute, cost, subjects 1 and 2 have an almost linear utility function, expressing a neutral attitude toward this attribute, whereas subjects 3 and 4 have a slightly convex utility function. For the second attribute, downtime, subject 2 has an almost linear utility function expressing a neutral attitude, whereas subjects 1, 3, and 4 have a slightly convex utility function. For the third attribute, number of deaths, all subjects have a slightly convex utility function. Subjects 1, 3, and 4, presented in the graphs below, have a concave joint probability weighting function, which, combined with the utility function, might illustrate a slightly risk-averse person. Subject 2 has an inverse-S joint probability weighting function with an inflection point near 1/3, which, combined with the utility function, might illustrate a person who is slightly risk averse for low probabilities and risk seeking for moderate-to-high probabilities.
The scaling constant K for the first, third, and fourth subjects tended toward zero, reflecting that the utility function is close to the additive form; some meaning can therefore be attached to the ki’s. All three subjects gave the greatest importance to the attribute number of deaths. The first subject then gave more importance to cost and finally to downtime; the third gave more importance to downtime and finally to cost; and the fourth gave similar importance to downtime and cost. For the second subject, K was close to 0.5, far from the additive form, so nothing can be deduced about the meaning of the ki’s. The results are shown in Table 1.
Experienced civil engineers agreed that the suggested decision path reflected the way they believe they approach their practical decisions. Inexperienced civil engineering students were happy to discover their risk profiles and agreed that the results reflected the choices they had been trying to make during the experiment.
More consistency emerged among practicing engineers than among student engineers (all 17 engineers were consistent, whereas only nine of the 13 students were), which may reflect the fact that the former are accustomed to, and more expert in, handling such decisions.
Finally, we compared the results obtained using classical expected utility theory (Eq. 1) with those obtained using rank-dependent utility (RDU) in multi-attribute utility theory (MAUT) (Eqs. 2 and 3). This comparison only became possible with the joint weighting function obtained using the method demonstrated in this paper. The results for the four subjects are given in Table 2. Subjects evaluated the project differently under expected utility (EU) and under RDU. For example, subjects 1 and 2 have quite similar assessed values under EU and, because their joint weighting functions are nearly identical, their VRDU values are also nearly identical. Subjects 2 and 3, in contrast, have similar assessed values under EU but very different VRDU values, because their joint weighting functions differ. Subjects 2 and 3 are therefore likely to make different decisions, since their joint weighting functions reflect different perceptions of probabilities and different behaviors in uncertain environments. Had we used only expected utility theory, this distinction would have been missed, and we would have wrongly suggested that the two subjects reach a similar final evaluation, thereby failing to describe their decision-making processes correctly. The joint weighting function thus describes their behavior more accurately, which is likely to help them embrace the strengthening solution proposed through the PBEE method.
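The EU-versus-RDU comparison can be reproduced on a small scale. The sketch below uses Quiggin’s rank-dependent form, ranking outcomes best-first and weighting cumulative probabilities through w; the lottery, the linear utility, and the weighting function are hypothetical, not the elicited data behind Table 2.

```python
def eu(outcomes, probs, u):
    """Classical expected utility of a finite lottery."""
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def rdu(outcomes, probs, u, w):
    """Rank-dependent utility: outcomes ranked best-first; the decision
    weights are differences of w applied to cumulative probabilities."""
    ranked = sorted(zip(outcomes, probs), key=lambda t: u(t[0]), reverse=True)
    value, cum_prev = 0.0, 0.0
    for x, p in ranked:
        cum = cum_prev + p
        value += (w(cum) - w(cum_prev)) * u(x)
        cum_prev = cum
    return value

# Hypothetical lottery over the attribute "number of deaths"
outcomes, probs = [-5, -3, -1], [0.2, 0.3, 0.5]
u = lambda x: (x + 5) / 5.0           # illustrative linear utility, 0 at -5

v_eu = eu(outcomes, probs, u)                            # 0.52
v_rdu = rdu(outcomes, probs, u, lambda p: p ** 0.7)      # differs from v_eu
assert abs(rdu(outcomes, probs, u, lambda p: p) - v_eu) < 1e-12  # identity w: RDU = EU
```

With the identity weighting, RDU collapses to EU; any nonlinear w drives the two valuations apart, which is exactly the divergence between the EU and VRDU columns discussed above.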
6 Conclusion
This paper proposes an innovative approach to directly encode a joint probability weighting function, when probabilities are unknown (uncertainty) or transformed, for the multi-attribute utility function under conditions of uncertainty. A nonparametric elicitation method (point by point) was used at the level of individual subjects, which allows the decision-making process to be described accurately for each subject in the context of a specific decision situation. The method respects the axioms of Keeney and Raiffa, and the constant K was elicited without using the probability transformation in the computation. It helps capture and describe the real attitude of the decision maker toward probabilities when dealing with multiple attributes. The innovative decision analysis method is proposed to improve the decision process used in the performance-based earthquake engineering (PBEE) methodology, which aims to mitigate the seismic risk likely to be encountered by a specific building. The decision-making process now includes the preferences of the owner, who subjectively evaluates the low-probability, catastrophic event of an earthquake. Thus, the decision maker can recommend the optimal rehabilitation project according to the owner’s preferences. The owner is thereby involved in the decision-making process, which helps him embrace his own decision analysis: he selects among building rehabilitation projects based on his personally elicited utilities and probability weighting function. Moreover, this method helps the engineering profession shift away from the paternalist pattern in dealing with seismic risk and involves the community of owners, who are the ones funding the measures to mitigate those risks. Finally, the proposed decision analysis method was validated through experimental economics applied to a real case study of performance-based earthquake engineering.
It demonstrated that the joint weighting function described the decision maker’s attitude more adequately and captured his preferences better than the expected utility model. Moreover, the decision analysis process presented in this paper, although introduced to solve the problem of strengthening structures, is not restricted to the earthquake engineering field and can be used in any multi-criteria decision analysis application.
References
Bernoulli J (1713) Ars conjectandi. Thurnisiorum, Basel
De Finetti B (1937) Foresight: its logical laws, its subjective sources. Translated from the French by Henry Kyburg Jr. 1993, www.books.google.com, Skotz & NL Johnson Editors
Savage LJ (1954) The foundation of statistics. Wiley, New York
Spetzler C, Staël Von Holstein CS (1975) Exceptional paper—probability encoding in decision analysis. Manag Sci 22:340–358
Von Neumann J, Morgenstern O (1944) Theory of games and economic behavior. Princeton University Press, Princeton
Quiggin J (1982) A theory of anticipated utility. J Econ Behav Organ 3:323–343
Tversky A, Kahneman D (1992) Advances in prospect theory: cumulative representation of uncertainty. J Risk Uncertain 5:297–323
Allais M (1953) Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école Américaine. Econometrica 21:503–546
Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econometrica 47:263–291
Camerer CF, Ho T-H (1994) Nonlinear weighting of probabilities and violation of betweenness axiom. J Risk Uncertain 8:167–196
Hey JD, Orme C (1994) Investigating generalizations of expected utility theory using experimental data. Econometrica 62:1291–1326
Fox CR, Tversky A (1998) A belief-based account of decision under uncertainty. Manag Sci 44:879–895
Abdellaoui M, Munier B (1995) Transformation subjective des très faibles probabilités face au risque d’irradiation ionisante: Etude expérimentale préliminaire. Rapport scientifique intermédiaire, CEPN, GRID, Fontenay aux Roses
Wu G, Gonzalez R (1996) Curvature of the probability weighting function. Manag Sci 42:1676–1690
Abdellaoui M (2000) Parameter-free elicitation of utility and probability weighting functions. Manag Sci 46:1497–1512
Wakker P, Deneffe D (1996) Eliciting von Neumann–Morgenstern utilities when probabilities are distorted or unknown. Manag Sci 42:1131–1150
Tversky A, Fox C (1995) Weighing risk and uncertainty. Psychol Rev 102:269–283
Wu G, Gonzalez R (1999) Nonlinear decision weights in choice under uncertainty. Manag Sci 45:74–85
Kilka M, Weber M (2001) What determines the shape of the probability weighting function under uncertainty. Manag Sci 47:1712–1726
Wakker P (2004) On the composition of risk preference and belief. Psychol Rev 111:236–241
Abdellaoui M, Vossman F, Weber M (2005) Choice-based elicitation and decomposition of decision weights for gains and losses under uncertainty. Manag Sci 51:1384–1399
Beaudouin F, Munier B, Serquin Y (1999) Multi-attribute decision making and generalized expected utility in nuclear power plant maintenance. In: Machina MJ, Munier B (eds) Beliefs, interactions and preferences in decision making. Kluwer Academic Publishers, Boston
Keeney RL, Raiffa H (1993) Decisions with multiple objectives preferences and value tradeoffs. Cambridge University Press, Cambridge
Abbas AE, Bell DE (2011) One-switch independence for multiattribute utility functions. Oper Res 59:764–771
Bleichrodt H, Doctor JN, Filko M, Wakker PP (2011) Utility independence of multiattribute utility theory is equivalent to standard sequence invariance of conjoint measurement. J Math Psychol 55:451–456
Bosi G, Herden G (2012) Continuous multi-utility representations of preorders. J Math Econ 48:212–218
Durbach IN, Stewart TJ (2012) Modeling uncertainty in multi-criteria decision analysis. Eur J Oper Res 223:1–14
Ekeland I, Galichon A, Henry M (2012) Comonotonic measures of multivariate risks. Math Finance 22:109–132
Engel Y, Wellman MP (2010) Multiattribute auctions based on generalized additive independence. J Artif Intell Res 37:479–525
Galaabaatar T, Karni E (2012) Expected multi-utility representations. Math Soc Sci 64:242–246
Galaabaatar T, Karni E (2013) Subjective expected utility theory with incomplete preferences. Econometrica 81:255–284
Mongin P, Pivato M (2015) Ranking multidimensional alternatives and uncertain prospects. J Econ Theory 157:146–171
Andersen S, Harrison GW, Lau MI, Rutström EE (2018) Multiattribute utility theory, intertemporal utility, and correlation aversion. Int Econ Rev 59:537–555
André FJ (2009) Indirect elicitation of non-linear multi-attribute utility functions: a dual procedure combined with DEA. Omega 37:883–895
Barreda-Tarrazona I, Jaramillo-Gutierrez A, Navarro-Martinez D, Sabater-Grande G (2011) Risk attitude elicitation using a multi-lottery choice task: real vs. hypothetical incentives. J Finance Account 40:609–624
Kothiyal A, Spinu V, Wakker PP (2011) Comonotonic proper scoring rules to measure ambiguity and subjective beliefs. J Multi-Criteria Decis Anal 17:101–113
Carvalho A (2015) Tailored proper scoring rules elicit decision weights. Judgm Decis Mak 10:86–96
Carvalho A (2016) An overview of applications of proper scoring rules. Decis Anal 13:223–234
Abdellaoui M, Wakker P (2019) Savage’s subjective expected utility simplified and generalized. Working paper
Charles-Cadogan G (2018) Probability interference in expected utility theory. J Math Econ 78:163–175
Gorno L (2017) A strict expected multi-utility theorem. J Math Econ 71:92–95
Heath A, Manolopoulou I, Baio G (2017) A review of methods for analysis of the expected value of information. Med Decis Mak 37:747–758
Izhakian Y (2017) Expected utility with uncertain probabilities theory. J Math Econ 69:91–103
Halpern JY, Leung S (2016) Maxmin weighted expected utility: a simpler characterization. Theor Decis 80:581–610
Baccelli J (2018) Risk attitudes in axiomatic decision theory—a conceptual perspective. Theor Decis 84:61–82
Amarante M (2017) Conditional expected utility. Theor Decis 83:175–193
Amarante M, Ghossoub M, Phelps E (2017) Contracting on ambiguous prospects. Econ J 127:2241–2246
Baillon A, Bleichrodt H, Li C, Wakker P (2019) Belief hedges: applying ambiguity measurements to all events and all ambiguity models. Working paper
Li C, Turmunkh U, Wakker P (2019) Trust as a decision under ambiguity. Exp Econ 22:51–75
Li C, Turmunkh U, Wakker P (2019) Social and strategic ambiguity versus betrayal aversion. Working paper
Baillon A, Emirmahmutoglu A (2018) Zooming in on Ambiguity Attitudes. Int Econ Rev 59:2107–2131
Baillon A, Huang Z, Selim A, Wakker P (2018) Measuring ambiguity attitudes for all (natural) events. Econometrica 86:1839–1858
Abdellaoui M, Bleichrodt H, l’Haridon O, van Dolder D (2016) Measuring loss aversion under ambiguity: a method to make prospect theory completely observable. J Risk Uncertain 52:1–20
Wakker P, Yang J (2019) A powerful tool for analyzing concave/convex utility and weighting functions. J Econ Theory 181:143–159
Vieider FM, Martinsson P, Nam PK, Truong N (2019) Risk preferences and development revisited. Theor Decis 86:1–21
Assa H, Zimper A (2018) Preferences over all random variables: incompatibility of convexity and continuity. J Math Econ 75:71–83
Carbone E, Dong X, Hey J (2017) Elicitation of preferences under ambiguity. J Risk Uncertain 54:87–102
Polisson M, Quah J, Renou L (2017) Revealed preferences over risk and uncertainty. School of Economics and Finance Discussion Paper No 1706
Javanmardi L, Lawryshyn Y (2016) A new rank dependent utility approach to model risk averse preferences in portfolio optimization. Ann Oper Res 237:161–176
Araujo A, Chateauneuf A, Gama GP, Novinski R (2018) General equilibrium with uncertainty loving preferences. Econometrica 86:1859–1871
Dimmock SG, Kouwenberg R, Mitchell OS, Peijnenburg K (2018) Household portfolio underdiversification and probability weighting: evidence from the field. NBER Working Paper 24928, http://www.nber.org/papers/w24928
Aydogan I (2017) Decisions from experience and from description: beliefs and probability weighting. Ph.D. Thesis
Buchak L (2016) Decision theory. In: Hájek A, Hitchcock C (eds) Oxford handbook of probability and philosophy, Chapter 13. Oxford University Press, New York, pp 789–814
Kopylov I (2016) Subjective probability, confidence, and Bayesian updating. Econ Theor 62:635–658
Bruhin A, Maha M, Santos-Pinto L (2019) Risk and rationality: the relative importance of probability weighting and choice set dependence. No 19.01. Université de Lausanne, Faculté de HEC
Krawinkler H (2005) Van Nuys building testbed report: exercising seismic performance assessment. Pacific Earthquake Engineering Research Center, PEER 2005/11, Department of Civil and Environmental Engineering, Stanford University, USA
Charles C, Gafni A, Whelan T (1999) Decision-making in the physician-patient encounter: revisiting the shared treatment decision-making model. Soc Sci Med 49:651–661
Parsons T (1951) The social system. Free Press, Glencoe
Schmeidler D (1989) Subjective probability and expected utility without additivity. Econometrica 57:571–587
Allais M (1988) The general theory of random choices in relation to the invariant cardinal utility function and the specific probability function, the (U, θ) model: a general overview. In: Munier B (ed) Risk, decision and rationality. Reidel, Dordrecht, pp 231–289
Quiggin J (1981) Risk perception and risk aversion among Australian farmers. Aust J Agric Econ 25:160–169
Quiggin J (1993) Generalized expected utility theory. Kluwer Academic Publishers, Dordrecht
Miyamoto J, Wakker P (1996) Multi-attribute utility theory without expected utility foundations. Oper Res 44:313–326
Dyckerhoff R (1994) Decomposition of multivariate utility functions in non-additive expected utility theory. Multi Criteria Decis Anal 3:41–58
Moehle J, Deierlein GG (2004) A framework methodology for performance-based earthquake engineering. In: Proceedings of the 13th world conference on earthquake engineering. Vancouver, Canada, Paper No. 679
Porter KA (2003) An overview of PEER’s performance-based earthquake engineering methodology. In: Proceedings of the ninth international conference on applications of statistics and probability in civil engineering (ICASP9) July 6–9, 2003, San Francisco
FEMA P-58 (2012) Seismic performance assessment of buildings. Federal Emergency Management Agency, Washington, DC
Marsh ML, Stringer SJ (2013) Performance-based seismic bridge design. Volume 440 of NCHRP synthesis and Volume 440 of synthesis of highway practice, Publisher Transportation Research Board, (contributors: National Research Council), USA
Makhoul N (2010) Seismic risk of buildings: Multidisciplinary method in decision analysis and strengthening of structures. Ph.D. dissertation, Number: 2010 ENSAM 0026, the pastel number: “pastel-00521545, version 1 - 27 Sep 2010”, GRID, Arts et Métiers ParisTech-Paris, France. French
Porter K, Kiremidjian A (2001) Assembly-based vulnerability of buildings and its uses in seismic performance evaluation and risk management decision-making. Ph.D. dissertation, Department of civil and environmental engineering, Stanford University, USA
Cha EJ, Ellingwood BR (2013) Seismic risk mitigation of building structures: the role of risk aversion. Struct Saf 40:11–19
Etchart N (2003) Traitement subjectif du risque et comportement individuel devant les pertes: une étude expérimentale. Ph.D. Thesis, ENSAM-Paris, France. French
Camerer CF, Hogarth R (1999) The effect of financial incentives in experiments: a review, and capital–labor–production frame work. J Risk Uncertain 19:7–45
Keeney RL (1973) A decision analysis with multiple objectives: the Mexico City airport. Bell J Econ Manag Sci 4:101–117
Bostic R, Herrnstein RJ, Luce RD (1990) The effect on the preference reversal phenomenon of using choice indifferences. J Econ Behav Organ 13:193–212
Abdellaoui M, Munier B (1994) The closing-in method: an individual tool to investigate individual choice patterns under risk. In: Munier B, Machina MJ (eds) Models and experiments in risk and rationality. Dordrecht, Boston, pp 141–155
Fennema H, van Assen M (1998) Measuring the utility of losses by means of the trade-off method. J Risk Uncertain 17:277–295
Serquin Y (1998) Scientific management of large system maintenance: The contribution of decision analysis using the generalized multi-attribute utility. Ph.D. Thesis, ENS de Cachan, France
Acknowledgements
I dedicate this article to the memory of my beloved mother, Katia Tohme Makhoul, who was a mathematics professor and from whom I have inherited the passion for mathematics. This work is the decision analysis part achieved for obtaining my PhD degree; it was published in my thesis in French in 2010, Number: 2010 ENSAM 0026, and since the work was publishable, it has the pastel number: “pastel-00521545, version 1 - 27 Sep 2010.” I thank “Ecole Nationale Supérieur d’Arts et Métiers – Paris” and the Eiffel Excellence Scholarship, which funded this research through the “Ministère de l’Enseignement Supérieur et de la Recherche, France.” I thank Prof. Georges Zaccour for helpful comments on an earlier draft.
Funding
This research was funded by “Ecole Nationale Supérieur d’Arts et Métiers – Paris” through a Research Scholarship and the Eiffel Excellence Scholarship both from the “Ministère de l’Enseignement Supérieur et de la Recherche, France.”
Ethics declarations
Conflict of interest
The author declares that there is no conflict of interest.
Makhoul, N. Seismic risk mitigation in buildings using a new method to encode a joint weighting function in multi-attribute utility theory. SN Appl. Sci. 1, 1103 (2019). https://doi.org/10.1007/s42452-019-1136-6