
1 Introduction

Many applications require the evaluation of spatial information in order to make decisions: examples are the location and allocation of environmental, social, and economic resources, the definition of environmental status indicators, and the evaluation of impact factors.

Often, a group of experts is asked to select or rate a set of alternatives, which are distinct spatial units (locations or areas) of a territory, for the target purpose [1]. This is done by evaluating some of the territory’s properties, represented in distinct registered aligned thematic images of the territory, based on distinct criteria. Generally, the criteria are expressed by linguistic terms such as low, medium, high, whose semantics can be represented within the fuzzy framework by soft constraints on the distinct properties of the territory [2]. For example, the disposition of an area to the occurrence of fires is greater if the area is highly arid, while it is low if the area is a wetland. Each expert can associate a distinct semantics with the linguistic terms, so that the experts’ opinions can be coherent with each other on some properties but may conflict on others.

In this chapter, a soft fusion approach for coherent evaluations of spatial alternatives is described. The coherent evaluations of each property are aggregated to generate the group evaluation of the spatial alternatives through a soft fusion process consisting of two steps: in the first, each expert evaluates each alternative, i.e., each pixel of an image, by specifying soft constraints; in the second, the coherent evaluations of the experts on each criterion are aggregated so as to weight their impact on the global judgment of each alternative differently, according to the distinct trust of the experts, determined based on the coherence of their evaluations.

Linguistic quantifier-guided Ordered Weighted Averaging (OWA) operators [3, 4] have been indicated as a powerful means to define soft fusion strategies of spatial information based on the concept of fuzzy majority. In [5] a fuzzy majority approach has been proposed to model a land suitability problem based on multi criteria group decision making, where the group solution map synthesizes the majority of the decision-makers’ preferences. In that approach the quantifier-guided fusion function generating the group map is defined by an Induced Ordered Weighted Averaging (IOWA) operator, as proposed in [6], so that the opinions of the decision makers supporting each other in the fuzzy majority are taken into account in generating the group map. However, in this approach the evaluations on each criterion are expressed by linguistic terms that are internally converted into fuzzy numbers and aggregated by each expert independently of the other experts. Thus, the subjective interpretation of the semantics of the linguistic terms by each expert is not modeled but is assumed to be homogeneous. Further, the distinct trust of the experts is not modeled.

In our approach, the opinions of the experts on each criterion are compared to assess their coherence so that the final evaluation of each pixel is generated by a distinct fuzzy majority of experts on each criterion. In fact, the coherent evaluation is obtained by a quantifier-guided fusion, defined by an IOWA, where both the weights of the IOWA and the order of the arguments are based on the coherence and the distinct trust of the experts.

In Sect. 2, the classic approach to model group decision making is analyzed and compared with our proposal. In Sect. 3, the evaluation of alternatives on single criteria by each expert is described. In Sect. 4, we propose the computation of a soft coherence measure. In Sect. 5, the proposal to compute the coherent evaluation of each criterion expressing the opinion of a fuzzy majority of experts is described. In Sect. 6, the approaches to determine the overall evaluation of alternatives are outlined. The conclusions summarize the main contents.

2 Rationale of the Approach with Respect to the Classic Group Decision Making Procedure

The objective of a group decision activity is the identification of the alternative(s) which is (are) judged the best by the majority of the experts.

In a spatial context, the alternatives can be distinct locations, i.e., pixels or areas of a territory that is described by several registered aligned thematic maps.

Experts give their opinions on the basis of some evaluation scheme (see Fig. 1); this can be either implicitly assumed by the experts or explicitly specified in the form of a set of predefined qualitative criteria, as in the case considered in this chapter.

Fig. 1 Classic schema of a multi criteria group decision process

Examples of qualitative criteria are high slope, mostly sand lithology, etc. Each expert can specify his/her subjective meaning of a qualitative criterion by defining its semantics through a soft constraint that the pixels of an image representing a theme must satisfy, such as a minimum, maximum, acceptance level or range of values of slope, types of lithology, etc.

For each alternative, i.e., a pixel, a performance judgment is obtained by each expert by evaluating each soft constraint, defined on the domain of the pixel values of an image representing a property of the territory.

At this point, the first goal of most group decision making approaches defined in the literature is the computation of each expert’s performance judgment of each alternative, obtained by aggregating the scores of all the soft constraints defined on the available images. To this end, a desired trade-off among the satisfaction degrees of the soft constraints can be adopted by each expert, so that the expert’s performance judgment reflects the desired concurrency or compensation among the criteria.

Other techniques to evaluate the alternatives are based, for example, on the specification of preference relations, subjective probabilities, or linguistic terms defined on an ordinal scale [7–10].

Once the alternatives have been evaluated by all the experts, the main problem is to compare experts’ judgments to verify the consensus among them [11]. In case of unanimous consensus, the evaluation process ends with the selection of the best alternative(s) that corresponds to the alternative(s) with the greatest consensual performance judgment.

However, in real situations humans rarely come to unanimous agreement: this has led to evaluating not only crisp degrees of consensus (degree 1 for full and unanimous agreement) but also intermediate degrees between 0 and 1, corresponding to partial agreement among all experts. Furthermore, full consensus (degree = 1) need not be the result of unanimous agreement among all the experts, but can be obtained even in the case of agreement among a fuzzy majority of the experts, for example most experts. An outline of this two-step decision process is depicted in Fig. 1, where each matrix contains the performance judgments of an alternative by each expert on each criterion: the first step computes the global evaluation of each alternative by each expert on all or most criteria. Consensus is then computed based on a similarity measure among the experts’ performance judgments of the alternatives.

In our approach, we focus on the fact that, with the classic multi criteria group decision making schema, two experts may obtain the same performance judgment for an alternative even if their individual performance judgments on the criteria are completely different.

In fact, what matters in this approach is the overall performance of each alternative according to each expert, and their comparison is made irrespective of conflicts or incoherence between the performance judgments on the single criteria.

This scheme is suited to compute a global evaluation of the alternatives by a group of experts in two distinct situations:

  • in case of implicit criteria, when each expert uses a different set of criteria to evaluate the alternatives;

  • in the case of explicit criteria, when their aggregation can be performed freely by each expert based on his/her distinct decision attitude. For example, one expert can consider all criteria as completely non-compensative with one another, i.e., they must all be satisfied, while, on the contrary, another expert considers the lack of satisfaction of a criterion completely replaceable by the satisfaction of any other criterion, as proposed in [5].

Nevertheless, this decision schema is not adequate in three other cases:

  • in the cases in which we need to introduce uniformity in considering the compensation/concurrency of the performance judgments of the given criteria. This can be necessary to provide distinct scenarios modelling the decision maker’s distinct attitudes to risk: for example, one taking into account just the best satisfied criterion, and the opposite one considering all the criteria;

  • in the case in which one wants a robust decision, determined by taking into account the performance judgments on each criterion of the experts who expressed coherent evaluations of the criterion;

  • in order to take into account the distinct subjective semantics of the same linguistic terms used by experts to judge the alternatives based on each criterion.

These last three cases are those that most often occur in the case of decisions involving the evaluation of spatial alternatives, generally performed based on a set of explicit criteria. In such cases the decision requires uniformity of the decision attitude in determining the disposition/hazard of regions of a territory to natural events. For example, in the case of the evaluation of susceptibility maps of environmental phenomena, the experts are indeed models, implemented by distinct software tools, and specialized to compute the performance judgment of a criterion, i.e., a factor score contributing to the occurrence of the phenomenon, which is due to a property of the territory. A more robust global performance judgement of the criterion can be assessed by taking into account the coherent evaluations of the single tools or experts.

2.1 Outline of the Group Decision Making Process

Figure 1 recaps the classic group decision making process and Fig. 2 our proposed approach. It can be seen that in the classic schema the fusion of the criteria evaluations for each expert is performed first, followed by the fusion of the experts’ judgements on all criteria, while in our proposal the coherent experts’ performance judgements on each criterion are computed first and are successively fused to compute the overall coherent performance of each alternative.

Fig. 2 Proposed schema of decision process computing the overall performance of alternatives by the coherent evaluation of criteria by a fuzzy majority of experts

In this last case, we first compute, for each criterion, the coherence of the performance judgments among all (or most) experts. Notice that, for each criterion and each alternative, a distinct subset of experts can express coherent evaluations.

In doing this, central to our approach is the definition of coherence among the experts’ evaluations over a single criterion.

Then, the global satisfaction of each criterion for each alternative is computed by aggregating the coherent performance judgments of the experts. Once we have these global satisfactions of all the criteria for each alternative, we can aggregate them to derive a final ranking of the alternatives: this last aggregation can have distinct semantics, depending on whether we want the criteria to be completely compensative with one another, partially compensative, or not compensative at all.

In the next section, we are going to describe this multi criteria group decision process which is soft at distinct levels:

  • in modelling the distinct reliability of spatial data and trusts of the experts;

  • in modelling the distinct interpretations of coherence among the experts’ performance judgments on each criterion;

  • in computing the coherent performance judgement on each criterion by the fuzzy majority of coherent experts;

  • finally, in modelling the decision attitude that is most appropriate.

3 Evaluating Alternatives on Single Criteria

The objective of the decision process is ranking the locations (spatial units) of a given territory that have characteristics suitable for the allocation or location of resources, or for the identification of spatial objects, or even of areas susceptible to the occurrence of a phenomenon, according to a group of experts or models. This is a multi criteria decision making process where the criteria are the factors that must be assessed independently and then combined to assess the suitability of the areas of the territory.

The decision process is performed on each spatial unit independently from the others.

For the sake of simplicity, as sources of information we assume to have spatial data consisting of M registered aligned images, each one describing a physical or administrative property of the territory under study.

The domains of the pixel values in each image can be different: they can be numeric continuous or discrete, ordinal, or nominal.

We consider a group of K experts that are called to evaluate the spatial units based on a set of N common criteria.

Each criterion c i is expressed by a linguistic predicate whose semantics is defined by a soft constraint on the domain of values of an image. For example, “high slope” is defined by a non-decreasing membership function on the domain [0, 90] of the slope map.

Each expert can specify his/her subjective definition of the membership functions of the soft constraints on the domain of the property. This way, the experts’ evaluations of the distinct criteria are made consistent one another and can be aggregated.
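As an illustration, the minimal Python sketch below shows how two experts might encode different subjective semantics of the criterion “high slope” as non-decreasing membership functions on the domain [0, 90]; the membership breakpoints are illustrative assumptions, not values taken from the chapter.

# Minimal sketch: two experts' subjective soft constraints for the criterion
# "high slope", modeled as non-decreasing membership functions on [0, 90].
# The breakpoints (20/40 and 30/60 degrees) are illustrative assumptions.

def high_slope_expert_1(slope_deg):
    # Expert 1: slopes above 40 degrees are fully "high", below 20 not at all.
    if slope_deg <= 20:
        return 0.0
    if slope_deg >= 40:
        return 1.0
    return (slope_deg - 20) / 20.0

def high_slope_expert_2(slope_deg):
    # Expert 2: a stricter interpretation of "high slope".
    if slope_deg <= 30:
        return 0.0
    if slope_deg >= 60:
        return 1.0
    return (slope_deg - 30) / 30.0

# Evaluating one pixel (slope = 35 degrees) under both interpretations:
print(high_slope_expert_1(35.0))   # 0.75
print(high_slope_expert_2(35.0))   # ~0.17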

Further, to each expert k we can associate a trust value t k ∈ [0, 1], k = 1, …, K, representing either his/her skill and ability in performing the evaluations, or the validity of the evaluation, which may be limited for several reasons: because the source of the data cannot be completely trusted, because the means of acquisition are known to be not sophisticated enough and to generate systematic errors, or, not least, because the data are the result of a subjective analysis, such as surveyed data.

4 Evaluating the Coherence of Performance Judgments of a Fuzzy Majority of Experts

4.1 Definition of Coherence

Let us define a coherence measure between two vectors A = (a 1, …, a K) and B = (b 1, …, b K), with a i, b i ∈ [0, 1], based on the weighted Minkowski distance, by considering the trust scores (t 1, …, t K) (such that they sum to 1) associated with each dimension of the two vectors, as follows:

$$ coherence\left( {A,B} \right) = \left( {\sum\limits_{k = 1}^{K} {t_{k} \left( {1 - \left| {a_{k} - b_{k} } \right|} \right)^{\alpha } } } \right)^{{\frac{1}{\alpha }}} $$
(1)

with −∞ < α < +∞.

We can obtain a range of special similarity measures by setting distinct values for the norm parameter α. For example, when α = 1 and all trust scores are equal, we obtain the inner product; when α = 2, the complement of the weighted Euclidean distance; when α = 0, the complement of the weighted geometric distance; and when α = −1, the complement of the weighted harmonic distance.
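A minimal sketch of formula (1) is given below; the vectors and the equal trust scores in the demo are illustrative, and the direct formula is applied only for α ≠ 0 (the geometric case α = 0 is a limiting case not handled here).

# Minimal sketch of the trust-weighted coherence measure of formula (1).
# A and B contain evaluations in [0, 1]; the trust scores sum to 1.

def coherence(A, B, trust, alpha=1.0):
    assert abs(sum(trust) - 1.0) < 1e-9
    return sum(t * (1.0 - abs(a - b)) ** alpha
               for a, b, t in zip(A, B, trust)) ** (1.0 / alpha)

A = [0.1, 0.6, 0.8, 0.2, 0.9]       # illustrative evaluations
B = [0.1] * 5                       # constant reference vector (e.g., R_1)
T = [0.2] * 5                       # equal trust scores, for illustration

print(coherence(A, B, T, alpha=1))  # trust-weighted average of similarities
print(coherence(A, B, T, alpha=2))  # complement of the weighted Euclidean distance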

Our first purpose is to compute the coherence of an expert with respect to all the other experts of the group in evaluating each single alternative a with respect to each single criterion c, taking into account the distinct trust scores of the experts (t 1, …, t K). To this end we assume that a K dimensional space is defined in which all the experts’ judgments on alternative a with respect to criterion c are represented by a vector:

$$ P = \left( {p_{\text{ac1}} , \ldots ,p_{\text{acK}} } \right). $$

The coherence of the i-th expert evaluation p aci with respect to all the other experts is defined as coherence(P, R i), according to definition (1), in which R i = (p aci, …, p aci) is the K dimensional reference vector whose components all equal the opinion p aci of the i-th expert.

Coherence(P, R i ) considers all the K experts (all single components of the two vectors P and R i ).

A more flexible definition of the coherence measure taking into account a fuzzy majority Q of the experts and their distinct trust can be obtained by computing it through a Minkowski similarity OWA operator (MOWA) of dimension K with importance weights as defined in the following subsections [12].

4.2 Definition of the MOWA Operator

MOWA: R K × R K → R has an associated weighting vector W of dimension K such that \( \sum_{i = 1}^{K} {w_{i} } = 1 \) and w i ∈ [0, 1], and

$$ MOWA(s_{1} , \ldots ,s_{K} ) = \left( {\sum\limits_{i = 1}^{K} {w_{i} } S_{i}^{\alpha } } \right)^{{\frac{1}{\alpha }}} \quad \quad {\text{with}} -\!\infty < \alpha < +\!\infty $$
(2)

where S i is the i-th largest of the s j = 1 − |a j − b j|; s j is the individual similarity between the j-th components of the two vectors A and B.

The main aspect of the MOWA operator is the reordering of the arguments based upon their values. This means that the weights of W, instead of being associated with a specific argument, are associated with a particular position in the ordering. This reordering makes the MOWA operator a non linear operator. Thus, while the weights (t 1, …, t K) in formula (1) indicate the reliability of the vector components, the semantics of W in formula (2) is the importance of an ordered position of the vector components; thus its choice determines the semantics of the aggregation.

In particular, if the weighting vector W is such that w 1 = 1, then MOWA(s 1, …, s K) = max(s 1, …, s K), whereas when w K = 1 we obtain MOWA(s 1, …, s K) = min(s 1, …, s K).

To compute an “or-like” similarity between two vectors, which means weighting more heavily the contributions of the greatest individual similarities, we can set w i > w j for i < j, while, on the contrary, if we want an “and-like” semantics in order to minimize the overall similarity between two vectors we can weight more heavily the smallest individual similarities by setting w i < w j for i < j.

When all w i = 1/K, i = 1, …, K, the MOWA operator reduces to definition (1) with equal trust scores.

In [13] it has been proposed that a monotone non-decreasing linguistic quantifier [14] Q: [0, 1] → [0, 1] can be used to specify the concept of a fuzzy majority for modeling the group decision making process. By a linguistic quantifier it is possible to define flexible notions of majority. The crisp notion of majority (linguistically expressed by greater than 50 %) corresponds to Q(x) = 1 for x ≥ round(K/2)/K and Q(x) = 0 otherwise. If Q 1(x) ≤ Q 2(x) ∀x in [0, 1], the linguistic quantifier Q 1 defines a stricter fuzzy majority than Q 2, as in the case of the quantifier almost all with respect to most.

Here we use this notion to generate a quantifier-guided MOWA operator computing a value reflecting the truth of a proposition:

Q most trusted dimensions of the two vectors A and B are similar.

This is achieved by computing the weights w i, for i = 1, …, K, of the MOWA as it has been done for the OWA operator with distinct importances in [15, 16]:

$$ w_{i} = Q\left( {\sum\limits_{j = 1}^{i} {e_{j} } } \right) - Q\left( {\sum\limits_{j = 1}^{i - 1} {e_{j} } } \right)\quad {\text{with}}\quad \sum\limits_{i = 1}^{K} {e_{i} } = \sum\limits_{i = 1}^{K} {t_{i} } = 1 $$
(3)

where e j = t h is the trust score of the argument s h that occupies the j-th position in the decreasing ordering of the arguments s 1, …, s K, i.e., S j = s h in definition (2).

It can be shown that the w i obtained by applying (3) are defined in [0, 1] and their sum is 1.

This way w i increases with the trust of the argument to which it is associated. The arguments with null trust play no role and have a zero weight.

4.3 Definition of Qcoherence

Now we can define a quantifier notion of coherence measure between two vectors A and B, Qcoherence, based on the MOWA operator with weighting vector obtained by applying definition (3) as follows:

$$ Qcoherence\left( {A,B} \right) = MOWA(1 - \left| {a_{1} - b_{1} } \right|, \ldots ,1 - \left| {a_{K} - b_{K} } \right|) $$
(4)

Formula (4) reduces to the coherence measure in (1) when Q = averagely all, i.e., the identity function Q(x) = x, ∀x ∈ [0, 1].

By defining two fuzzy majorities Q 1 and Q 2 such that Q 1(x) ≤ x ≤ Q 2(x), ∀x in [0, 1] we have:

$$ Q_{ 1} coherence\left( {A,B} \right) \le coherence\left( {A,B} \right) \le Q_{ 2} coherence\left( {A,B} \right) $$
(5)

which means that, for the same two vectors A and B with the same trust scores, Q 1 coherence provides a lower bound and Q 2 coherence an upper bound for the coherence measure defined in (1).

The coherence of the i-th expert evaluation p aci of alternative a with respect to criterion c, considering a fuzzy majority Q of experts with distinct trust, can be computed as Qcoherence(P, R i) according to definitions (2)–(4), in which R i = (p aci, …, p aci) is the K dimensional reference vector representing the opinion of the i-th expert.
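The sketch below chains definitions (2)–(4): the quantifier-guided, trust-aware weights of formula (3) feed the MOWA of formula (2) to yield Qcoherence. It assumes the quantifier most of formula (12) and α = 1; since some details (e.g., how ties in the ordering are broken) are interpretation choices, it is not guaranteed to reproduce exactly the Qcoherence values reported in Sect. 5.4.

# Minimal sketch of Qcoherence (formula 4): a MOWA (formula 2) with weights
# derived from a linguistic quantifier Q and the trust scores (formula 3).

def most(y):                       # the quantifier "most" of formula (12)
    if y >= 0.8:
        return 1.0
    if y <= 0.3:
        return 0.0
    return 2.0 * y - 0.6

def qcoherence(A, B, trust, Q=most, alpha=1.0):
    # individual similarities between corresponding components of A and B
    sims = [1.0 - abs(a - b) for a, b in zip(A, B)]
    # reorder the similarities decreasingly, carrying each one's trust along
    order = sorted(range(len(sims)), key=lambda i: sims[i], reverse=True)
    s_sorted = [sims[i] for i in order]
    e = [trust[i] for i in order]                  # the e_j of formula (3)
    # quantifier-guided weights (formula 3)
    w, cum = [], 0.0
    for ej in e:
        w.append(Q(cum + ej) - Q(cum))
        cum += ej
    # MOWA aggregation (formula 2)
    return sum(wi * si ** alpha for wi, si in zip(w, s_sorted)) ** (1.0 / alpha)

P = [0.1, 0.6, 0.8, 0.2, 0.9]      # experts' judgments of one alternative
T = [0.4, 0.2, 0.0, 0.4, 0.0]      # trust scores (they sum to 1)
R1 = [P[0]] * len(P)               # reference vector of expert 1
print(qcoherence(P, R1, T))        # coherence of expert 1 with the fuzzy majority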

5 Soft Fusion of Coherent Performance Judgements

5.1 Fusing the Coherent Evaluations on Each Criterion

Let us indicate by coherence i = Qcoherence(P, R i) the coherence of expert i with the fuzzy majority Q of the other experts. If coherence i is low, the i-th expert does not belong to the fuzzy majority Q of the group of experts expressing a coherent performance p ac of criterion c for alternative a.

The values coherence i can be used to weight the i-th expert’s evaluation p aci in determining the group coherent evaluation of alternative a with respect to criterion c by a fuzzy majority Q of coherent trusted experts. This corresponds to computing p ac of Q coherent trusted experts, i.e., the opinion of the Q majority of most trusted experts that expressed coherent evaluations of alternative a with respect to criterion c.

This can be done as proposed in [6] by applying an IOWA operator defined in the following subsection.

5.2 Definition of the IOWA Operator

The IOWA operator of dimension K is a non linear aggregation operator IOWA:[0, 1]K → [0, 1] with a weighting vector W = (w 1 , w 2 ,…, w K), with w j ∈ [0, 1] and \( \sum\nolimits_{{{\text{j}} = 1}}^{\text{K}} {w_{j} } = 1, \) defined as:

$$ IOWA\left( { < x_{1} ,u_{1} > , \ldots , < x_{K} ,u_{K} > } \right) = \sum\limits_{i = 1}^{K} {w_{i} } x_{u - index(i)} $$
(6)

in which X = (x 1 ,…, x K ) is the argument vector to be aggregated and U = (u 1 ,…, u K ) is the inducing order vector such that it determines the order in which the elements of X have to be taken into account in the aggregation. Specifically, x u-index(i) is the element of vector X associated with the i-th smallest inducing order value u among the values (u 1 ,…, u K ).

Example

For example, given W = (0, 0, 0.5, 0.5), X = (0.7, 0.2, 0.8, 1), U = (3, 8, 6, 2) we obtain:

IOWA(<0.7, 3>, <0.2, 8>, <0.8, 6>, <1, 2>) = 0*1 + 0*0.7 + 0.5*0.8 + 0.5*0.2 = 0.5, while considering a distinct inducing order vector U = (8, 1, 2, 4) we obtain: IOWA(<0.7, 8>, <0.2, 1>, <0.8, 2>, <1, 4>) = 0*0.2 + 0*0.8 + 0.5*1 + 0.5*0.7 = 0.85.
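The two computations above can be reproduced with the following minimal sketch of formula (6) (ties among inducing values, if any, are broken by the original argument order).

# Minimal sketch of the IOWA operator (formula 6), reproducing the example above.

def iowa(X, U, W):
    # sort the arguments by increasing inducing value, then apply the
    # positional weights W
    ordered = [x for _, x in sorted(zip(U, X), key=lambda p: p[0])]
    return sum(w * x for w, x in zip(W, ordered))

W = [0.0, 0.0, 0.5, 0.5]
X = [0.7, 0.2, 0.8, 1.0]

print(iowa(X, [3, 8, 6, 2], W))   # 0.5
print(iowa(X, [8, 1, 2, 4], W))   # 0.85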

In order to compute the opinion of a fuzzy majority Q, in [6] it was proposed to derive the inducing order vector U and the weighting vector W based on the support Supp of each argument to aggregate, which is a measure of the proximity of the argument to the other arguments, defined as follows:

$$ Supp\left( {x_{i} , \, X} \right) = \sum\limits_{{{\text{j}} = 1\ldots ,{\text{K}}}} {( 1\;{\text{if }}\left| {x_{i} {-}x_{j} } \right| \, <\upbeta\;{\text{else }}0)} . $$
(7)

Then, the inducing order vector is defined as:

U = (u 1 ,…, u K ) = (Supp(x 1 , X),…, Supp(x K , X))

Further, given the quantifier Q and U, the weighting vector W = (w 1, …, w K) of the IOWA operator is derived as follows:

$$ w_{i} = \frac{{Q(m_{i} /K)}}{{\sum_{j = 1}^{K} {Q(m_{j} /K)} }} $$
(8)

in which m i  = argmin i (u 1,…, u K) is the i-th smallest element among (Supp(x 1 , X),…, Supp(x K , X)).

With this definition the IOWA operator does not take into account the distinct trust associated with the arguments to aggregate. Specifically, U is defined independently of the distinct trust scores (t 1, …, t K) of the elements in X.

In order to compute the performance of alternative a with respect to criterion c by Q most trusted experts so as to take into account the trust score t i of each expert i and his/her coherence Qcoherence(P, R i ) with Q other trusted experts, we propose to set the induced ordering vector U as follows:

$$ U = (Qcoherence\left( {P, \, R_{1} } \right), \ldots , \, Qcoherence(P, \, R_{K} )) $$
(9)

and then we compute p ac of Q coherent trusted experts by applying an IOWA operator:

$$ p_{ac\;of\;Q\;coherent\;trusted\;experts} = IOWA( < p_{ac1} ,u_{1} > , \ldots , < p_{acK} ,u_{K} > ) $$
(10)

in which the weighting vector W is obtained by following definition (8), in which we have computed:

$$ m_{i} = \frac{{{\text{argmin}}_{i} \left( {u_{1} *t_{1} , \ldots , u_{K} *t_{K} } \right)}}{{\sum_{i = 1}^{K} {{\text{argmin}}_{i} \left( {u_{1} *t_{1} , \ldots , u_{K} *t_{K} } \right)} }} $$
(11)

this way m i is the i-th smallest element among the Qcoherence values of the experts multiplied by their trust degrees, normalized so that they sum to 1 (hence, in definition (8), Q is applied directly to m i):

(Qcoherence(P, R 1)*t 1,…, Qcoherence(P, R K)* t K).

5.3 Example of Computation of the Opinion of a Fuzzy Majority of Experts Without Considering Their Trust

Consider the following quantifier most

$$ most(y) = \left\{ {\begin{array}{ll} 1 & {y \ge 0.8} \\ {2y - 0.6} & {0.3 < y < 0.8} \\ {0} & {y \le 0.3} \\ \end{array} } \right. $$
(12)

and the performance judgments of the experts.

P = (0.1, 0.6, 0.8, 0.2, 0.9) with trust T = (0.4, 0.2, 0, 0.4, 0).

Notice that experts 3 and 5 are not trusted at all, t 3 = t 5 = 0. Thus, their evaluations 0.8 and 0.9 should be disregarded in computing the group evaluation, while the evaluation of the second expert, having trust t 2 = 0.2, should not influence the result too much.

Let us set β = 0.3 for computing U = Supp in formula (7).

By applying (8) and then (6) we obtain:

W = (0.143, 0.143, 0.143, 0.143, 0.428) and, finally, IOWA(P, U) = 0.599, which is closer to the performance judgements of the non-trusted experts than to those of the most trusted ones.
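This example can be reproduced end to end with the following minimal sketch, which chains formulas (7), (8) and (6) using the quantifier most of formula (12).

# Minimal sketch reproducing the example above with the basic method of [6].

def most(y):
    if y >= 0.8:
        return 1.0
    if y <= 0.3:
        return 0.0
    return 2.0 * y - 0.6

P = [0.1, 0.6, 0.8, 0.2, 0.9]
beta, K = 0.3, len(P)

# formula (7): support of each judgment
U = [sum(1 for xj in P if abs(xi - xj) < beta) for xi in P]   # [2, 2, 3, 2, 2]

# formula (8): weights derived from the ordered (smallest first) support values
m = sorted(U)
q = [most(mi / K) for mi in m]
W = [qi / sum(q) for qi in q]      # ~(0.143, 0.143, 0.143, 0.143, 0.428)

# formula (6): IOWA, ordering the arguments by increasing support
ordered = [x for _, x in sorted(zip(U, P), key=lambda p: p[0])]
print(sum(w * x for w, x in zip(W, ordered)))                 # ~0.599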

5.4 Example of Computation of the Opinion of a Fuzzy Majority of Experts by Considering Their Distinct Trust

Now, let us repeat the same example discussed above with our method, with most as defined in formula (12),

P = (0.1, 0.6, 0.8, 0.2, 0.9) and trust T = (0.4, 0.2, 0, 0.4, 0).

We first compute the Qcoherence of each expert by applying definition (4), based on formula (3) for computing W and on definition (2) with parameter α = 1:

Qcoherence = (0.86, 0.64, 0.44, 0.88, 0.34).

Then we set the inducing order vector U = Qcoherence as defined in (9).

By applying definition (11) taking into account Qcoherence and T = (0.4, 0.2, 0, 0.4, 0) we obtain: M = (0, 0, 0.155, 0.417, 0.427).

By applying definition (8) we obtain the weighting vector W = (0, 0, 0, 0.48, 0.52).

Finally, we can apply the fusion defined by the IOWA in definition (6):

IOWA(<0.1,0.86>, <0.6,0.64>, <0.8,0.44>, <0.2,0.88>, <0.9,0.34>) = 0.1*0.48 + 0.2*0.52 = 0.152.

This result 0.152 is much lower than 0.599, obtained by the basic method proposed in [6], and better synthesises the coherent evaluations of the trusted majority of experts.
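The trust-aware fusion above can be reproduced with the sketch below, which takes the Qcoherence values reported in the text as given and chains formulas (11), (8) and (10); consistently with the weighting vector (0, 0, 0, 0.48, 0.52) obtained above, Q is applied directly to the already normalized m i of formula (11).

# Minimal sketch reproducing the trust-aware example above (formulas 9-11).

def most(y):
    if y >= 0.8:
        return 1.0
    if y <= 0.3:
        return 0.0
    return 2.0 * y - 0.6

P = [0.1, 0.6, 0.8, 0.2, 0.9]
T = [0.4, 0.2, 0.0, 0.4, 0.0]
U = [0.86, 0.64, 0.44, 0.88, 0.34]   # Qcoherence values from the text, used as U (formula 9)

# formula (11): trust-discounted coherences, sorted increasingly and normalized
ut = sorted(u * t for u, t in zip(U, T))
M = [v / sum(ut) for v in ut]        # ~(0, 0, 0.155, 0.417, 0.427)

# formula (8), with Q applied to the already normalized m_i
q = [most(mi) for mi in M]
W = [qi / sum(q) for qi in q]        # ~(0, 0, 0, 0.48, 0.52)

# formulas (10)/(6): IOWA, ordering the judgments by increasing Qcoherence
ordered = [p for _, p in sorted(zip(U, P), key=lambda x: x[0])]
print(sum(w * p for w, p in zip(W, ordered)))                 # ~0.152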

6 Computing the Coherent Evaluation of Each Alternative

Once we have the group coherent evaluations on all criteria for each alternative, computed as proposed in the previous section, we can aggregate them in order to compute the overall coherent evaluation of each alternative. This last aggregation can be defined as outlined in Sect. 1 by considering the objective of the decision process and the decision attitude.

Either total or partial compensation among the criteria can be modeled by aggregations defined, for example, by OWA operators [4, 16] and Generalized Conjunction/Disjunction (GCD) operators [18]. Requiring that all criteria are fully and simultaneously satisfied in order to state the disposition of a spatial unit to the occurrence of a critical natural event models a cautious decision, in the sense that one does not want to overestimate the risk in the given area. On the contrary, requiring full compensation among the criteria models an alarming decision, in the sense that the satisfaction of one criterion is enough to set an alarm on the spatial unit.
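As a simple illustration (not part of the chapter's formalization), the two attitudes can be contrasted with OWA weighting vectors: a cautious aggregation places all the weight on the worst criterion score (a min), while an alarming one places it on the best (a max); the criterion scores used here are illustrative.

# Minimal sketch contrasting a cautious (min-like) and an alarming (max-like)
# decision attitude via OWA weighting vectors; the scores are illustrative.

def owa(scores, weights):
    ordered = sorted(scores, reverse=True)   # OWA reorders arguments decreasingly
    return sum(w * s for w, s in zip(weights, ordered))

criteria_scores = [0.9, 0.7, 0.2]            # coherent criterion evaluations of one spatial unit

cautious = [0.0, 0.0, 1.0]                   # all weight on the worst score -> min
alarming = [1.0, 0.0, 0.0]                   # all weight on the best score  -> max

print(owa(criteria_scores, cautious))        # 0.2
print(owa(criteria_scores, alarming))        # 0.9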

Another aspect of the aggregation of the criteria evaluations is the modeling of their partial optionality, so that the overall evaluation of an alternative satisfying the mandatory criteria is rewarded or penalized depending on the degree of satisfaction of the optional criteria. This kind of aggregation can be modeled, for example, by discounting operators [19] and non monotonic operators [20].

A further aspect is modeling the hierarchical structure of the criteria: the criteria can be organized in a tree hierarchy that represents the criteria and sub-criteria desired aggregation flow, reflecting the decision preferences regarding the combination of the different criteria evaluations. This can be modeled by Logic Scoring of Preference (LSP) operators [21] as in [1, 22].

Finally, the overall evaluation of an alternative could synthesize the coherent evaluations of a fuzzy majority of the criteria. This can be done by specifying a linguistic quantifier Q and applying the procedure based on the IOWA operator proposed in the previous section.

7 Conclusions

The chapter proposes a multi criteria group decision making process for ranking spatial alternatives based on a soft fusion of coherent evaluations. The approach computes for each spatial unit of a territory an overall evaluation that reflects the coherent performance judgments of distinct groups of experts. This allows modeling the fact that the experts may have inhomogeneous knowledge of the area they are called to evaluate, i.e., they may know the conditions characterizing a given place better than those of other places.

This process could be applied also to rank any type of alternative in the cases in which it is important to take into account the coherence among the experts’ evaluations on each criterion, or when the aggregation of the criteria evaluations must be homogeneous for all experts, reflecting a given decision attitude or need.

The novelty of the proposal is the definition of the coherence of a fuzzy majority of the experts’ performance judgments based on MOWA operators, and the definition of a new approach based on IOWA operators for computing the representative opinion of a fuzzy majority of a group of experts with distinct trust.