1 Introduction

The weighted average (WA) is arguably the earliest and still most widely used form of aggregation or fusion. We remind the reader of the well-known formula for the WA, i.e.,

$$\begin{aligned} y=\frac{\sum _{i=1}^n x_iw_i}{\sum _{i=1}^n w_i}, \end{aligned}$$
(1)

in which \(w_i\) are the weights (real numbers) that act upon the sub-criteria \(x_i\) (real numbers). In this chapter, the term sub-criteria can mean data, features, decisions, recommendations, judgments, scores, etc. In (1), normalization is achieved by dividing the weighted numerator sum by the sum of all of the weights.
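For concreteness, (1) can be evaluated in a few lines of Python (a minimal sketch; the data are the crisp sub-criteria and weights used later in Example 1 of Sect. 2.1):

```python
x = [5, 7.5, 7, 6.5, 2]   # sub-criteria x_i
w = [4, 2.5, 8, 1.8, 6]   # weights w_i
y = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
print(round(y, 2))        # 5.31, matching Example 1 in Sect. 2.1
```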

The arithmetic WA (AWA) is the one we are all familiar with and is the one in which all sub-criteria and weights in (1) are real numbers. In many situations [1,2,3,4,5,6,7], however, providing crisp numbers for either the sub-criteria or the weights is problematic (there could be uncertainties about them), and it is more meaningful to provide intervals, type-1 fuzzy sets (T1 FSs), words modeled by interval type-2 fuzzy sets (IT2 FSs), or a mixture of all of these, for the sub-criteria and weights. The resulting WAs are called novel weighted averages (NWAs), which have been introduced in [2, 3, 6].

The ordered weighted average (OWA) operator [8,9,10,11,12,13,14,15,16], a generalization of the linear WA operator, was proposed by Yager to aggregate experts’ opinions in decision making:

Definition 1

An OWA operator of dimension n is a mapping \(y_{_{OWA}}: R^n\rightarrow R\), which has an associated set of weights \(\mathbf {w} = \{w_1, \ldots ,w_n\}\) for which \(w_i\in [0, 1]\), i.e.,

$$\begin{aligned} y_{_{OWA}}=\frac{\sum _{i=1}^nw_ix_{\sigma (i)}}{\sum _{i=1}^nw_i} \end{aligned}$$
(2)

where \(\sigma : \{1, \ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{x_{\sigma (1)},x_{\sigma (2)},\ldots ,x_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

The key feature of the OWA operator is the ordering of the sub-criteria by value, a process that introduces a nonlinearity into the operation. It can be shown that the OWA operator is in the class of mean operators [17] as it is commutative, monotonic, and idempotent. It is also easy to see that for any \(\mathbf {w}\), \(\min _ix_i\le y_{OWA}\le \max _ix_i\).

The most attractive feature of the OWA operator is that it can implement different aggregation operators through different choices of the weights [8]: setting \(w_i=1/n\) gives the mean operator; setting \(w_1=1\) and \(w_i=0\) (\(i=2,\ldots ,n\)) gives the maximum operator; setting \(w_i=0\) (\(i=1,\ldots ,n-1\)) and \(w_n=1\) gives the minimum operator; and setting \(w_1=w_n=0\) and \(w_i=1/(n-1)\) (\(i=2,\ldots ,n-1\)) gives the so-called olympic aggregator, which is often used to aggregate judges' scores in Olympic events such as gymnastics and diving.
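The following Python sketch evaluates (2) for these weight choices (the function name owa is ours, chosen for illustration):

```python
def owa(x, w):
    """Crisp OWA of (2): sort the sub-criteria in descending order,
    then take the normalized weighted average with the weights w."""
    xs = sorted(x, reverse=True)
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

x = [5, 7.5, 7, 6.5, 2]
n = len(x)
print(owa(x, [1 / n] * n))                          # mean: 5.6
print(owa(x, [1] + [0] * (n - 1)))                  # maximum: 7.5
print(owa(x, [0] * (n - 1) + [1]))                  # minimum: 2
print(owa(x, [0] + [1 / (n - 1)] * (n - 2) + [0]))  # olympic: 6.17
```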

Yager’s original OWA operator [12] considers only crisp numbers. Again, in many situations, it is more meaningful to provide intervals, T1 FSs, words modeled by IT2 FSs, or a mixture of all of these, for the sub-criteria and weights. Ordered NWAs (ONWAs) are the focus of this chapter.

The rest of this chapter is organized as follows: Sect. 2 introduces the NWAs. Section 3 proposes the ONWAs. Section 4 compares the ONWAs with the NWAs and with Zhou et al.'s fuzzy extensions of the OWA. Finally, conclusions are drawn in Sect. 5.

2 Novel Weighted Averages (NWAs)

Definition 2

A NWA is a WA in which at least one sub-criterion or weight is not a single real number, but instead is an interval, a T1 FS, or an IT2 FS, in which case such sub-criteria or weights are called novel models. \(\blacksquare \)

How to compute (1) for these novel models is described in this section. Because the sub-criteria and the weights can each be modeled in four possible ways (crisp numbers, intervals, T1 FSs, or IT2 FSs), there are 16 different WAs, as summarized in Fig. 1.

Fig. 1 Matrix of possibilities for a WA

Definition 3

When at least one sub-criterion or weight is modeled as an interval, and no sub-criterion or weight is modeled by anything more complex than an interval (i.e., the rest are crisp numbers or intervals), the resulting WA is called an Interval WA (IWA). \(\blacksquare \)

Definition 4

When at least one sub-criterion or weight is modeled as a T1 FS, and no sub-criterion or weight is modeled by anything more complex than a T1 FS (i.e., the rest are crisp numbers, intervals, or T1 FSs), the resulting WA is called a Fuzzy WA (FWA). \(\blacksquare \)

Definition 5

When at least one sub-criterion or weight is modeled as an IT2 FS, the resulting WA is called a Linguistic WA (LWA). \(\blacksquare \)

Definition 2

(Continued) By a NWA is meant an IWA, FWA or LWA. \(\blacksquare \)

In order to reduce the number of possible derivations from 15 (the AWA is excluded) to three, it is assumed that: for the IWA all sub-criteria and weights are modeled as intervals, for the FWA all sub-criteria and weights are modeled as T1 FSs, and for the LWA all sub-criteria and weights are modeled as IT2 FSs.

2.1 Interval Weighted Average (IWA)

The IWA is defined as:

$$\begin{aligned} Y_{IWA}\equiv \frac{\sum _{i=1}^n X_i W_i }{\sum _{i=1}^n W_i }=[l,r] \end{aligned}$$
(3)

where

$$\begin{aligned} X_i&= [a_i , b_i ]\quad i=1,...,n \end{aligned}$$
(4)
$$\begin{aligned} W_i&= [c_i , d_i ]\quad i=1,...,n \end{aligned}$$
(5)

and \(Y_{IWA}\) is also an interval completely determined by its two end-points l and r,

$$\begin{aligned} l&=\min \limits _{x_i\in X_i\atop w_i\in W_i}\frac{\sum _{i=1}^nx_iw_i}{\sum _{i=1}^nw_i}=\min \limits _{ w_i\in W_i}\frac{\sum _{i=1}^na_iw_i}{\sum _{i=1}^nw_i}\end{aligned}$$
(6)
$$\begin{aligned} r&=\max \limits _{x_i\in X_i\atop w_i\in W_i}\frac{\sum _{i=1}^nx_iw_i}{\sum _{i=1}^nw_i}=\max \limits _{w_i\in W_i}\frac{\sum _{i=1}^nb_iw_i}{\sum _{i=1}^nw_i} \end{aligned}$$
(7)

and they can easily be computed by the KM or EKM Algorithms [18,19,20,21].
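The KM/EKM algorithms themselves are described in [18,19,20,21]. As a minimal sketch, the switch-point structure that they exploit can also be enumerated directly, which is slower but exact and adequate for small n. The Python code below (function names are ours) computes l in (6) and r in (7); applied to the data of Example 1 below, it returns approximately [3.49, 7.12]:

```python
import numpy as np

def iwa_endpoints(a, b, c, d):
    """End-points [l, r] of the IWA in (3), for X_i = [a_i, b_i] and
    W_i = [c_i, d_i], found by enumerating all switch points instead
    of iterating as the KM/EKM algorithms do."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    n = len(a)

    def extreme(x, find_min):
        # With the x_i sorted in ascending order, the optimizing
        # weights switch once between their end-points: the minimum
        # uses d_i for small x_i and c_i for large x_i; the maximum
        # is the mirror image. Trying every switch point k is exact.
        idx = np.argsort(x)
        xs, cs, ds = x[idx], c[idx], d[idx]
        vals = []
        for k in range(n + 1):
            w = np.concatenate([ds[:k], cs[k:]] if find_min else [cs[:k], ds[k:]])
            vals.append(np.dot(xs, w) / np.sum(w))
        return min(vals) if find_min else max(vals)

    return extreme(a, True), extreme(b, False)

# Data of Example 1:
a = [4.5, 7.0, 4.2, 6.0, 1.0]; b = [5.5, 8.0, 9.8, 7.0, 3.0]
c = [2.8, 2.0, 7.6, 0.9, 5.0]; d = [5.2, 3.0, 8.4, 2.7, 7.0]
print(iwa_endpoints(a, b, c, d))   # approximately (3.49, 7.12)
```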

Example 1

Suppose for \(n=5\), \(\{x_i \}|_{i=1,...,5} =\{5, 7.5, 7, 6.5, 2\}\) and \(\{w_i\}|_{i=1,...,5} =\{4, 2.5, 8, 1.8, 6\}\), so that the arithmetic WA \(y_{_{AWA}} =5.31\). Let \(\lambda \) denote any of these crisp numbers. In this example, for the IWA, \(\lambda \rightarrow [\lambda -\delta ,\lambda +\delta ]\), where \(\delta \) may be different for different \(\lambda \), i.e.,

$$\{x_i \} |_{i=1,...,5} \rightarrow \{[4.5,5.5], [7.0, 8.0], [4.2,9.8], [6.0,7.0], [1.0,3.0]\}$$
$$\begin{aligned} \{w_i \}|_{i=1,...,5} \rightarrow \{[2.8, 5.2], [2.0, 3.0], [7.6, 8.4], [0.9, 2.7], [5.0, 7.0]\}. \end{aligned}$$

It follows that \(Y_{IWA}=[3.49, 7.12]\). The important difference between \(y_{_{AWA}}\) and \(Y_{_{IWA}}\) is that the uncertainties about the sub-criteria and weights have led to an uncertainty band for the IWA, and such a band may play a useful role in subsequent decision making. \(\blacksquare \)

2.2 Fuzzy Weighted Average (FWA)

The FWA [2, 3, 22,23,24,25,26,27] is defined as:

$$\begin{aligned} Y_{FWA}\equiv \frac{\sum _{i=1}^n X_i W_i }{\sum _{i=1}^n W_i } \end{aligned}$$
(8)

where \(X_i\) and \(W_i\) are T1 FSs, and \(Y_{FWA}\) is also a T1 FS. Note that (8) is an expressive rather than a computational formula: \(Y_{FWA}\) is not obtained by carrying out the multiplications, additions and divisions that it displays. Instead, it has been shown [2,3,4, 27] that the FWA can be computed by using the \(\alpha \)-cut Decomposition Theorem [28], where each \(\alpha \)-cut on \(Y_{FWA}\) is the IWA of the corresponding \(\alpha \)-cuts on \(X_i\) and \(W_i\).

Denote the \(\alpha \)-cut on \(Y_{FWA}\) as \([y_L(\alpha ),y_R(\alpha )]\), and the \(\alpha \)-cut on \(X_i\) and \(W_i\) as \([a_i(\alpha ),b_i(\alpha )]\) and \([c_i(\alpha ),d_i(\alpha )]\), respectively, as shown in Fig. 2. Then [2,3,4, 27],

$$\begin{aligned} y_L(\alpha )&=\min \limits _{\forall w_i(\alpha )\in [c_i(\alpha ),d_i(\alpha )]} \frac{\sum _{i=1}^n a_i(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )}\end{aligned}$$
(9)
$$\begin{aligned} y_R(\alpha )&=\max \limits _{\forall w_i(\alpha )\in [c_i(\alpha ),d_i(\alpha )]} \frac{\sum _{i=1}^n b_i(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )} \end{aligned}$$
(10)

\(y_L(\alpha )\) and \(y_R(\alpha )\) can be computed by the KM or EKM algorithms.

Fig. 2 \(\alpha \)-cuts on a \(X_i\), b \(W_i\), and c \(Y_{FWA}\)

The following algorithm is used to compute \(Y_{FWA}\):

1. For each \(\alpha \in [0,1]\), compute the corresponding \(\alpha \)-cuts of the T1 FSs \(X_i\) and \(W_i\), i.e.,

   $$\begin{aligned} X_i(\alpha )&=[a_i(\alpha ),b_i(\alpha )]\quad i=1,...,n \end{aligned}$$
   (11)
   $$\begin{aligned} W_i(\alpha )&=[c_i(\alpha ),d_i(\alpha )]\quad i=1,...,n \end{aligned}$$
   (12)

2. For each \(\alpha \in [0,1]\), compute \(y_L(\alpha )\) in (9) and \(y_R(\alpha )\) in (10) using the KM or EKM Algorithms.

3. Connect all left-coordinates \((y_L(\alpha ),\alpha )\) and all right-coordinates \((y_R(\alpha ),\alpha )\) to form the T1 FS \(Y_{FWA}\).
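A direct transcription of this algorithm into Python, for the common case of triangular T1 FSs, might look as follows (a sketch reusing iwa_endpoints from Sect. 2.1; each fuzzy number is given as a triple of its left end, apex, and right end):

```python
def tri_cut(tri, alpha):
    """Alpha-cut [left(alpha), right(alpha)] of a normal triangular
    T1 FS given as (l, p, r)."""
    l, p, r = tri
    return l + alpha * (p - l), r - alpha * (r - p)

def fwa_alpha_cuts(x_tris, w_tris, m=101):
    """FWA of triangular T1 FSs via the alpha-cut algorithm: each
    alpha-cut of Y_FWA is the IWA (9)-(10) of the input alpha-cuts."""
    curve = []
    for j in range(m):
        alpha = j / (m - 1)
        a, b = zip(*(tri_cut(t, alpha) for t in x_tris))
        c, d = zip(*(tri_cut(t, alpha) for t in w_tris))
        y_l, y_r = iwa_endpoints(a, b, c, d)
        curve.append((alpha, y_l, y_r))  # (y_L(alpha), alpha), (y_R(alpha), alpha)
    return curve
```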

Example 2

This is a continuation of Example 1 in which each interval is assigned a symmetric triangular T1 FS that is centered at the mid-point (\(\lambda \)) of the interval, has membership grade equal to one at that point, and is zero at the interval end-points (\(\lambda -\delta \) and \(\lambda +\delta \)) (see the triangle in Fig. 3). The resulting \(X_i\) and \(W_i\) are plotted in Fig. 4a and Fig. 4b, respectively. The FWA is depicted in Fig. 4c as the solid curve. Although \(Y_{FWA}\) appears to be triangular, its sides are actually slightly curved.

Fig. 3 Illustration of a T1 FS used in Example 2

Fig. 4 Example 2: a \(X_i\), b \(W_i\), and c \(Y_{FWA}\) (solid curve), \(Y_{OFWA}\) (dashed curve) and \(Y_{T1FOWA}\) (dotted curve)

The support of \(Y_{FWA}\) is [3.49, 7.12], which is the same as \(Y_{IWA}\) (see Example 1). This will always occur because the support of \(Y_{FWA}\) is the \(\alpha =0\) \(\alpha \)-cut, and this is \(Y_{IWA}\). The T1 FS \(Y_{FWA}\) indicates that more emphasis should be given to values of variable y that are closer to its apex, whereas the interval \(Y_{IWA}\) indicates that equal emphasis should be given to all values of variable y in its interval. The former reflects the propagation of the non-uniform uncertainties through the FWA, and can be used in future decisions. \(\blacksquare \)

2.3 Linguistic Weighted Average (LWA)

The LWA is defined as:

$$\begin{aligned} \tilde{Y}_{LWA}\equiv \frac{\sum _{i=1}^n \tilde{X}_i \tilde{W}_i }{\sum _{i=1}^n \tilde{W}_i } \end{aligned}$$
(13)

where \(\tilde{X}_i\) and \(\tilde{W}_i\) are IT2 FSs, and \(\tilde{Y}_{LWA}\) is also an IT2 FS. Again, (13) is an expressive rather than a computational formula for the LWA. To compute \(\tilde{Y}_{LWA}\), one only needs to compute its LMF \(\underline{Y}_{LWA}\) and UMF \(\bar{Y}_{LWA}\).

Let \(W_i\) be an embedded T1 FS [19] of \(\tilde{W}_i\), as shown in Fig. 5b. Because in (13) \(\tilde{X}_i\) only appears in the numerator of \(\tilde{Y}_{LWA}\), it follows that

$$\begin{aligned} \underline{Y}_{LWA}&=\min _{\forall \, W_i\in [\underline{W}_i, \bar{W}_i]}\frac{\sum _{i=1}^n\underline{X}_iW_i}{\sum _{i=1}^nW_i}\end{aligned}$$
(14)
$$\begin{aligned} \bar{Y}_{LWA}&=\max _{\forall \, W_i\in [\underline{W}_i, \bar{W}_i]}\frac{\sum _{i=1}^n\bar{X}_iW_i}{\sum _{i=1}^nW_i} \end{aligned}$$
(15)

Fig. 5 a \(\tilde{X}_i\) and an \(\alpha \)-cut, b \(\tilde{W}_i\) and an \(\alpha \)-cut, and c \(\tilde{Y}_{LWA}\) and an \(\alpha \)-cut

The \(\alpha \)-cut based approach [4, 5] is also used to compute \(\underline{Y}_{LWA}\) and \(\bar{Y}_{LWA}\). First, the heights of \(\underline{Y}_{LWA}\) and \(\bar{Y}_{LWA}\) need to be determined. Because all UMFs are normal T1 FSs, \(h_{\bar{Y}_{LWA}}=1\). Denote the height of \(\underline{X}_i\) as \(h_{\underline{X}_i}\) and the height of \(\underline{W}_i\) as \(h_{\underline{W}_i}\). Let

$$\begin{aligned} h_{\min }=\min \{\min _{\forall i}h_{\underline{X}_i},\min _{\forall i}h_{\underline{W}_i}\} \end{aligned}$$
(16)

Then [5], \(h_{\underline{Y}_{LWA}}=h_{\min }\).

Let \([a_{il}(\alpha ), b_{ir}(\alpha )]\) be an \(\alpha \)-cut on \(\bar{X}_i\), \([a_{ir}(\alpha ), b_{il}(\alpha )]\) be an \(\alpha \)-cut on \(\underline{X}_i\) [see Fig. 5a], \([c_{il}(\alpha ), d_{ir}(\alpha )]\) be an \(\alpha \)-cut on \(\bar{W}_i\), \([c_{ir}(\alpha ), d_{il}(\alpha )]\) be an \(\alpha \)-cut on \(\underline{W}_i\) [see Fig. 5b], \([y_{Ll}(\alpha ), y_{Rr}(\alpha )]\) be an \(\alpha \)-cut on \(\bar{Y}_{LWA}\), and \([y_{Lr}(\alpha ), y_{Rl}(\alpha )]\) be an \(\alpha \)-cut on \(\underline{Y}_{LWA}\) [see Fig. 5c], where the subscripts l and L mean left and r and R mean right. The end-points of the \(\alpha \)-cuts on \(\tilde{Y}_{LWA}\) are computed as solutions to the following four optimization problems [4, 5]:

$$\begin{aligned} y_{Ll}(\alpha )&= \min \limits _{\forall w_i\in [c_{il}(\alpha ),d_{ir}(\alpha )]} \frac{\sum _{i=1}^na_{il}(\alpha )w_i}{\sum _{i=1}^nw_i},\quad \alpha \in [0,1] \end{aligned}$$
(17)
$$\begin{aligned} y_{Rr}(\alpha )&= \max \limits _{\forall w_i\in [c_{il}(\alpha ),d_{ir}(\alpha )]} \frac{\sum _{i=1}^nb_{ir}(\alpha )w_i}{\sum _{i=1}^nw_i},\quad \alpha \in [0,1] \end{aligned}$$
(18)
$$\begin{aligned} y_{Lr}(\alpha )&= \min \limits _{\forall w_i\in [c_{ir}(\alpha ),d_{il}(\alpha )]} \frac{\sum _{i=1}^na_{ir}(\alpha )w_i}{\sum _{i=1}^nw_i},\quad \alpha \in [0,h_{\min }] \end{aligned}$$
(19)
$$\begin{aligned} y_{Rl}(\alpha )&= \max \limits _{\forall w_i\in [c_{ir}(\alpha ),d_{il}(\alpha )]} \frac{\sum _{i=1}^nb_{il}(\alpha )w_i}{\sum _{i=1}^nw_i},\quad \alpha \in [0,h_{\min }] \end{aligned}$$
(20)

(17)–(20) are again computed by the KM or EKM Algorithms.

Observe from (17), (18), and Figs. 5a and b that \(y_{Ll}(\alpha )\) and \(y_{Rr}(\alpha )\) only depend on the UMFs of \(\tilde{X}_i\) and \(\tilde{W}_i\), i.e., they are only computed from the corresponding \(\alpha \)-cuts on the UMFs of \(\tilde{X}_i\) and \(\tilde{W}_i\); so,

$$\begin{aligned} \bar{Y}_{LWA}=\frac{\sum _{i=1}^n\bar{X}_i\bar{W}_i}{\sum _{i=1}^n\bar{W}_i}. \end{aligned}$$
(21)

Because all \(\bar{X}_i\) and \(\bar{W}_i\) are normal T1 FSs, \(\bar{Y}_{LWA}\) is also normal. The algorithm for computing \(\bar{Y}_{LWA}\) is:

1. Select appropriate m \(\alpha \)-cuts for \(\overline{Y}_{LWA}\) (e.g., divide [0, 1] into \(m-1\) intervals and set \(\alpha _j =(j-1)/(m-1)\), \(j=1,2,...,m\)).

2. For each \(\alpha _j\), find the corresponding \(\alpha \)-cuts \([a_{il}(\alpha _j),b_{ir}(\alpha _j)]\) and \([c_{il}(\alpha _j),d_{ir}(\alpha _j)]\) on \(\overline{X}_i\) and \(\overline{W}_i\) (\(i=1,...,n\)). Use a KM or EKM algorithm to find \(y_{Ll}(\alpha _j)\) in (17) and \(y_{Rr}(\alpha _j)\) in (18).

3. Connect all left-coordinates \((y_{Ll}(\alpha _j),\alpha _j)\) and all right-coordinates \((y_{Rr}(\alpha _j),\alpha _j)\) to form the T1 FS \(\overline{Y}_{LWA}\).

Similarly, observe from (19), (20), and Fig. 5a and b that \(y_{Lr}(\alpha )\) and \(y_{Rl}(\alpha )\) only depend on the LMFs of \(\tilde{X}_i\) and \(\tilde{W}_i\); hence,

$$\begin{aligned} \underline{Y}_{LWA}=\frac{\sum _{i=1}^n\underline{X}_i\underline{W}_i}{\sum _{i=1}^n\underline{W}_i}. \end{aligned}$$
(22)

Unlike \(\bar{Y}_{LWA}\), which is a normal T1 FS, the height of \(\underline{Y}_{LWA}\) is \(h_{\min }\), the minimum height of all \(\underline{X}_i\) and \(\underline{W}_i\). The algorithm for computing \(\underline{Y}_{LWA}\) is:

1. Determine \(h_{\underline{X}_i}\) and \(h_{\underline{W}_i}\), \(i=1,\ldots ,n\), and \(h_{\min }\) in (16).

2. Select appropriate p \(\alpha \)-cuts for \(\underline{Y}_{LWA}\) (e.g., divide [\(0, h_{\min }\)] into \(p-1\) intervals and set \(\alpha _j =h_{\min } (j-1)/(p-1)\), \(j=1,2,...,p\)).

3. For each \(\alpha _j\), find the corresponding \(\alpha \)-cuts \([a_{ir}(\alpha _j),b_{il}(\alpha _j)]\) and \([c_{ir}(\alpha _j),d_{il}(\alpha _j)]\) on \(\underline{X}_i\) and \(\underline{W}_i\). Use a KM or EKM algorithm to find \(y_{Lr}(\alpha _j)\) in (19) and \(y_{Rl}(\alpha _j)\) in (20).

4. Connect all left-coordinates \((y_{Lr}(\alpha _j),\alpha _j)\) and all right-coordinates \((y_{Rl}(\alpha _j),\alpha _j)\) to form the T1 FS \(\underline{Y}_{LWA}\).

In summary, computing \(\tilde{Y}_{LWA}\) is equivalent to computing two FWAs, \(\bar{Y}_{LWA}\) and \(\underline{Y}_{LWA}\). A flowchart for computing \(\underline{Y}_{LWA}\) and \(\bar{Y}_{LWA}\) is given in Fig. 6. For triangular or trapezoidal IT2 FSs, it is possible to reduce the number of \(\alpha \)-cuts for both \(\underline{Y}_{LWA}\) and \(\bar{Y}_{LWA}\) by choosing them only at turning points, i.e., points on the LMFs and UMFs of \(\tilde{X}_i\) and \(\tilde{W}_i\) (\(i=1,2,...,n\)) at which the slope of these functions changes.
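In code, this "two FWAs" view is immediate. The sketch below is ours; it assumes triangular, normal LMFs and UMFs (so that \(h_{\min }=1\) in (16), as in Example 3) and reuses fwa_alpha_cuts from Sect. 2.2:

```python
def lwa_alpha_cuts(x_umfs, x_lmfs, w_umfs, w_lmfs, m=101, h_min=1.0):
    """LWA as two FWAs, per (21) and (22): the UMF of Y_LWA comes from
    the input UMFs, and the LMF of Y_LWA from the input LMFs (kept
    only for alpha <= h_min)."""
    upper = fwa_alpha_cuts(x_umfs, w_umfs, m)
    lower = [(alpha, yl, yr)
             for (alpha, yl, yr) in fwa_alpha_cuts(x_lmfs, w_lmfs, m)
             if alpha <= h_min]
    return upper, lower
```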

Fig. 6 A flowchart for computing the LWA [2, 3, 5]

Example 3

This is a continuation of Example 2 in which each sub-criterion and weight is now assigned an FOU that is a 50% symmetrical blurring of the T1 MF depicted in Fig. 3 (see Fig. 7). The left half of each FOU has support on the x (w)-axis given by the interval \([(\lambda -\delta )-0.5 \delta , (\lambda -\delta )+0.5\delta ]\), and the right half has support given by the interval \([(\lambda +\delta )-0.5\delta ,(\lambda +\delta )+0.5 \delta ]\). The UMF is a triangle defined by the three points \((\lambda -\delta -0.5 \delta ,0),(\lambda ,1),(\lambda +\delta +0.5 \delta ,0)\), and the LMF is a triangle defined by the three points \((\lambda -\delta +0.5 \delta ,0),(\lambda ,1),(\lambda +\delta -0.5\delta ,0)\). The resulting sub-criterion and weight FOUs are depicted in Figs. 8a and b, respectively, and \(\tilde{Y}_{LWA}\) is depicted in Fig. 8c as the solid curve. Although \(\tilde{Y}_{LWA}\) appears to be symmetrical, it is not.

Comparing Figs. 8c and 4c, observe that \(\tilde{Y}_{LWA}\) is spread out over a larger range of values than is \(Y_{FWA}\), reflecting the additional uncertainties in the LWA due to the blurring of sub-criteria and weights. This information can be used in future decisions.

Another way to interpret \(\tilde{Y}_{LWA}\) is to associate values of y that have the largest vertical intervals (i.e., primary memberships) with values of greatest uncertainty; hence, there is no uncertainty at the three vertices of the UMF, and, e.g., for the right-half of \(\tilde{Y}_{LWA}\) uncertainty increases from the apex of the UMF reaching its largest value at the right vertex of the LMF and then decreases to zero at the right vertex of the UMF. \(\blacksquare \)

Fig. 7 Illustration of the IT2 FS used in Example 3. The dashed lines are the corresponding T1 FS used in Example 2

Fig. 8 Example 3: a \(\tilde{X}_i\), b \(\tilde{W}_i\), and c \(\tilde{Y}_{LWA}\) (solid curve), \(\tilde{Y}_{OLWA}\) (dashed curve) and \(\tilde{Y}_{IT2FOWA}\) (dotted curve)

3 Ordered Novel Weighted Averages (ONWAs)

ONWAs, including ordered IWAs, ordered FWAs and ordered LWAs, are proposed in this section.

3.1 The Ordered Interval Weighted Average (OIWA)

As its name suggests, the OIWA is a combination of the OWA and the IWA.

Definition 6

An OIWA is defined as

$$\begin{aligned} Y_{OIWA}=\frac{\sum _{i=1}^nW_iX_{\sigma (i)}}{\sum _{i=1}^n W_{\sigma (i)}} \end{aligned}$$
(23)

where \(X_i\) and \(W_i\) are intervals defined in (4) and (5), respectively, and \(\sigma : \{1, \ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{X_{\sigma (1)},X_{\sigma (2)},\ldots ,X_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

Definition 7

A group of intervals \(\{X_i\}_{i=1}^n\) are in descending order if \(X_i\succeq X_j\) for \(\forall i<j\) by a ranking method. \(\blacksquare \)

Any interval ranking method can be used to find \(\sigma \). In this chapter, we first compute the center of each interval and then rank them to obtain the order of the corresponding intervals. This is a special case of Yager’s first method [29] for ranking T1 FSs, where the T1 FSs degrade to intervals.

To compute \(Y_{OIWA}\), we first sort \(X_i\) in descending order and call them by the same name, but now \(X_1\succeq X_2\succeq \cdots \succeq X_n\) (\(W_i\) are not changed during this step); then, the OIWA becomes an IWA.
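A Python sketch of this procedure (ours), reusing iwa_endpoints from Sect. 2.1, with intervals ranked by their centers as described above:

```python
def oiwa(X, W):
    """OIWA of (23): sort the sub-criterion intervals by their centers
    in descending order (the weights stay in place), then compute an
    ordinary IWA."""
    Xs = sorted(X, key=lambda iv: (iv[0] + iv[1]) / 2, reverse=True)
    a, b = zip(*Xs)
    c, d = zip(*W)
    return iwa_endpoints(a, b, c, d)

# Data of Example 1; reproduces Example 4: approximately (4.17, 6.66)
X = [(4.5, 5.5), (7.0, 8.0), (4.2, 9.8), (6.0, 7.0), (1.0, 3.0)]
W = [(2.8, 5.2), (2.0, 3.0), (7.6, 8.4), (0.9, 2.7), (5.0, 7.0)]
print(oiwa(X, W))
```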

Example 4

For the same crisp \(x_i\) and \(w_i\) used in Example 1, the OWA \(y_{_{OWA}} =5.40\), which is different from \(y_{_{AWA}}=5.31\). For the same interval \(X_i\) and \(W_i\) used in Example 1, the OIWA \(Y_{_{OIWA}} =[4.17,6.66]\), which is different from \(Y_{_{IWA}}=[3.49,7.12]\). \(\blacksquare \)

3.2 The Ordered Fuzzy Weighted Average (OFWA)

As its name suggests, the OFWA is a combination of the OWA and the FWA.

Definition 8

An OFWA is defined as

$$\begin{aligned} Y_{OFWA}=\frac{\sum _{i=1}^nW_iX_{\sigma (i)}}{\sum _{i=1}^n W_{\sigma (i)}} \end{aligned}$$
(24)

where \(X_i\) and \(W_i\) are T1 FSs, and \(\sigma : \{1, \ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{X_{\sigma (1)},X_{\sigma (2)},\ldots ,X_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

Definition 9

A group of T1 FSs \(\{X_i\}_{i=1}^n\) are in descending order if \(X_i\succeq X_j\) for \(\forall i<j\) by a ranking method. \(\blacksquare \)

Any T1 FS ranking method can be used to find \(\sigma \). In this chapter, Yager's first method [29] is used, which first computes the centroid of each T1 FS and then ranks these centroids to obtain the order of the corresponding T1 FSs.

To compute \(Y_{OFWA}\), we first sort \(X_i\) in descending order and call them by the same name, but now \(X_1\succeq X_2\succeq \cdots \succeq X_n\) (\(W_i\) are not changed during this step); then, the FWA algorithm introduced in Sect. 2.2 can be used to compute \(Y_{OFWA}\).
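In Python this is again a one-step reduction, here to the FWA (a sketch for triangular T1 FSs, whose centroid is the mean of the three defining points; fwa_alpha_cuts is from Sect. 2.2):

```python
def ofwa(x_tris, w_tris, m=101):
    """OFWA of (24): rank the sub-criterion T1 FSs once by centroid
    (for a triangle (l, p, r) the centroid is (l + p + r) / 3), then
    compute an ordinary FWA with the weights left in place."""
    xs = sorted(x_tris, key=lambda t: sum(t) / 3, reverse=True)
    return fwa_alpha_cuts(xs, w_tris, m)
```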

Example 5

For the same T1 FSs \(X_i\) and \(W_i\) used in Example 2, the OFWA \(Y_{_{OFWA}}\) is shown as the dashed curve in Fig. 4c, which is different from \(Y_{_{FWA}}\) [solid curve in Fig. 4c]. \(\blacksquare \)

3.3 The Ordered Linguistic Weighted Average (OLWA)

As its name suggests, the OLWA is a combination of the OWA and the LWA.

Definition 10

An OLWA is defined as

$$\begin{aligned} \tilde{Y}_{OLWA}=\frac{\sum _{i=1}^n\tilde{W}_i\tilde{X}_{\sigma (i)}}{\sum _{i=1}^n \tilde{W}_{\sigma (i)}} \end{aligned}$$
(25)

where \(\sigma : \{1, \ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{\tilde{X}_{\sigma (1)},\tilde{X}_{\sigma (2)},\ldots , \tilde{X}_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

Definition 11

A group of IT2 FSs \(\{\tilde{X}_i\}_{i=1}^n\) are in descending order if \(\tilde{X}_i\succeq \tilde{X}_j\) for \(\forall i<j\) by a ranking method. \(\blacksquare \)

Any IT2 FS ranking method can be used to find \(\sigma \). In this chapter, the centroid-based ranking method [30] is used, which first computes the center of centroid of each IT2 FS and then ranks them to obtain the order of the corresponding IT2 FSs.
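The centroid of an IT2 FS is itself an interval whose end-points solve the same switch-point problem as the IWA, so the earlier sketch can be reused. In the code below (ours), umf and lmf are membership functions evaluated at the domain samples xs, and the LMF is assumed not identically zero:

```python
def it2_centroid_center(umf, lmf, xs):
    """Center of the centroid interval [c_l, c_r] of an IT2 FS: the
    domain samples xs play the role of sub-criteria, and the
    membership bounds [lmf(x), umf(x)] play the role of weights."""
    lo = [lmf(x) for x in xs]
    hi = [umf(x) for x in xs]
    c_l, c_r = iwa_endpoints(xs, xs, lo, hi)
    return (c_l + c_r) / 2
```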

To compute the OLWA, we first sort all \(\tilde{X}_i\) in descending order and call them by the same name, but now \(\tilde{X}_1\succeq \tilde{X}_2\succeq \cdots \succeq \tilde{X}_n\) (note that \(\tilde{W}_i\) are not changed during this step); then, the LWA algorithm introduced in Sect. 2.3 can be used to compute the OLWA.

Example 6

For the same IT2 FSs \(\tilde{X}_i\) and \(\tilde{W}_i\) used in Example 3, the OLWA \(\tilde{Y}_{_{OLWA}}\) is shown as the dashed curve in Fig. 8c, which is different from \(\tilde{Y}_{_{LWA}}\) [solid curve in Fig. 8c]. \(\blacksquare \)

4 Other Fuzzy Extensions of the OWA

There have been many works on fuzzy extensions of the OWA, e.g., linguistic ordered weighted averaging [31,32,33,34], uncertain linguistic ordered weighted averaging [35], and fuzzy linguistic ordered weighted averaging [36]; however, in these extensions only the sub-criteria are modeled as T1 FSs, whereas the weights are still crisp numbers. To the authors' best knowledge, Zhou et al. [14,15,16] are the first to consider fuzzy weights. Their approaches are introduced in this section for comparison purposes.

4.1 T1 Fuzzy OWAs

Zhou et al. [15, 16, 37] defined a T1 fuzzy OWA (T1FOWA) as:

Definition 12

Given T1 FSs \(\{W_i\}_{i=1}^n\) and \(\{X_i\}_{i=1}^n\), the membership function of a T1FOWA is computed by:

$$\begin{aligned} \mu _{Y_{T1FOWA}}(y)=\sup _{\displaystyle \frac{\sum _{i=1}^n w_ix_{\sigma (i)}}{\sum _{i=1}^n w_i}=y}\min (\mu _{W_1}(w_1),\ldots ,\mu _{W_n}(w_n),\mu _{X_1}(x_1),\ldots ,\mu _{X_n}(x_n)) \end{aligned}$$
(26)

where \(\sigma :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{x_{\sigma (1)},x_{\sigma (2)},\ldots ,x_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

\(\mu _{Y_{T1FOWA}}(y)\) can be understood from the Extension Principle [38], i.e., first all combinations of \(w_i\) and \(x_i\) whose OWA is y are found, and for the jth combination, the resulting \(y_j\) has a membership grade \(\mu (y_j)\) which is the minimum of the corresponding \(\mu _{X_i}(x_i)\) and \(\mu _{W_i}(w_i)\). Then, \(\mu _{Y_{T1FOWA}}(y)\) is the maximum of all these \(\mu (y_j)\).

\(Y_{T1FOWA}\) can be computed efficiently using \(\alpha \)-cuts [14], similar to the way they are used in computing the FWA. Denote \(Y_{T1FOWA}(\alpha )=[y_L'(\alpha ),y_R'(\alpha )]\) and use the same notations for \(\alpha \)-cuts on \(X_i\) and \(W_i\) as in Fig. 2. Then,

$$\begin{aligned} y_L'(\alpha )&=\min \limits _{\forall w_i(\alpha )\in [c_i(\alpha ),d_i(\alpha )]} \frac{\sum _{i=1}^n a_{\sigma (i)}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )}\end{aligned}$$
(27)
$$\begin{aligned} y_R'(\alpha )&=\max \limits _{\forall w_i(\alpha )\in [c_i(\alpha ),d_i(\alpha )]} \frac{\sum _{i=1}^n b_{\sigma (i)}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )} \end{aligned}$$
(28)

\(y_L'(\alpha )\) and \(y_R'(\alpha )\) can also be computed using KM or EKM algorithms. Generally \(\sigma \) is different for different \(\alpha \) in (27) and (28), because for each \(\alpha \) the \(a_i(\alpha )\) or \(b_i(\alpha )\) are ranked separately.
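The contrast with the OFWA is easiest to see in code: in the sketch below (ours, for triangular T1 FSs, reusing tri_cut and iwa_endpoints from the earlier sketches), the sorting happens inside the alpha loop, so the permutation may change with alpha:

```python
def t1fowa_alpha_cuts(x_tris, w_tris, m=101):
    """T1FOWA via (27)-(28): at EACH alpha, the left end-points (and,
    separately, the right end-points) of the sub-criterion alpha-cuts
    are sorted in descending order before optimizing over the weights."""
    curve = []
    for j in range(m):
        alpha = j / (m - 1)
        a = sorted((tri_cut(t, alpha)[0] for t in x_tris), reverse=True)
        b = sorted((tri_cut(t, alpha)[1] for t in x_tris), reverse=True)
        c, d = zip(*(tri_cut(t, alpha) for t in w_tris))
        y_l, y_r = iwa_endpoints(a, b, c, d)
        curve.append((alpha, y_l, y_r))
    return curve
```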

Generally the OFWA and the T1FOWA give different outputs, as indicated by the following:

Theorem 1

The OFWA and the T1FOWA have different results when at least one of the following two conditions occurs:

1. The left leg of \(X_i\) intersects the left leg of \(X_j\), \(i\ne j\).

2. The right leg of \(X_i\) intersects the right leg of \(X_j\), \(i\ne j\). \(\blacksquare \)

Proof: Because the proof for Condition 2 is very similar to that for Condition 1, only the proof for Condition 1 is given here.

Assume the left leg of \(X_i\) intersects the left leg of \(X_j\) at \(\alpha =\lambda \in (0, 1)\), as shown in Fig. 9. Then, \(a_i(\alpha )>a_j(\alpha )\) when \(\alpha \in [0,\lambda )\) and \(a_i(\alpha )<a_j(\alpha )\) when \(\alpha \in (\lambda ,1]\).

For an \(\alpha _1\in [0, \lambda )\), \(y_L'(\alpha _1)\) in (27) is computed as

$$\begin{aligned} y_L'(\alpha _1)&=\min \limits _{\forall w_i(\alpha _1)\in [c_i(\alpha _1),d_i(\alpha _1)]} \frac{\sum _{i=1}^n a_{\sigma _1(i)}(\alpha _1)w_i(\alpha _1)}{\sum _{i=1}^n w_i(\alpha _1)} \end{aligned}$$
(29)

where \(\sigma _1:\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{a_{\sigma _1(1)}(\alpha _1),a_{\sigma _1(2)}(\alpha _1),\ldots ,a_{\sigma _1(n)}(\alpha _1)\}\) are in descending order. Because \(a_i(\alpha _1)>a_j(\alpha _1)\), it follows that \(\sigma _1(i)<\sigma _1(j)\).

For an \(\alpha _2\in (\lambda ,1]\), \(y_L'(\alpha _2)\) in (27) is computed as

$$\begin{aligned} y_L'(\alpha _2)&=\min \limits _{\forall w_i(\alpha _2)\in [c_i(\alpha _2),d_i(\alpha _2)]} \frac{\sum _{i=1}^n a_{\sigma _2(i)}(\alpha _2)w_i(\alpha _2)}{\sum _{i=1}^n w_i(\alpha _2)} \end{aligned}$$
(30)

where \(\sigma _2:\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{a_{\sigma _2(1)}(\alpha _2),a_{\sigma _2(2)}(\alpha _2),\ldots ,a_{\sigma _2(n)}(\alpha _2)\}\) are in descending order. Because \(a_i(\alpha _2)<a_j(\alpha _2)\), it follows that \(\sigma _2(i)>\sigma _2(j)\), i.e., \(\sigma _1\ne \sigma _2\).

On the other hand, for \(Y_{OFWA}\), no matter which ranking method is used, the permutation function \(\sigma \) is the same for all \(\alpha \in [0,1]\). Without loss of generality, assume \(X_j\succeq X_i\) by a ranking method. Then, in (24) \(\sigma (i)>\sigma (j)\), and, for any \(\alpha \in [0, 1]\), \(y_L(\alpha )\) is computed as

$$\begin{aligned} y_L(\alpha )&=\min \limits _{\forall w_i(\alpha )\in [c_i(\alpha ),d_i(\alpha )]} \frac{\sum _{i=1}^n a_{\sigma (i)}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )} \end{aligned}$$
(31)

Clearly, for any \(\alpha \in [0,\lambda )\), \(y_L(\alpha )\ne y_L'(\alpha )\) because \(\sigma \ne \sigma _1\). Consequently, the left legs of \(Y_{OFWA}\) and \(Y_{T1FOWA}\) are different. \(\blacksquare \)

Fig. 9 Illustration of intersecting \(X_i\) and \(X_j\)

The following example illustrates Theorem 1.

Example 7

\(X_i\) and \(W_i\) shown in Figs. 4a and b are used in this example to illustrate the difference between \(Y_{T1FOWA}\) and \(Y_{OFWA}\). \(Y_{T1FOWA}\) is shown as the dotted curve in Fig. 4c. Note that it is quite different from \(Y_{OFWA}\) [dashed curve in Fig. 4c]. The difference is caused by the fact that the legs of \(X_3\) cross the legs of \(X_1\), \(X_2\) and \(X_4\), which causes the permutation function \(\sigma \) to change as \(\alpha \) increases. \(\blacksquare \)

Finally, observe two important points from Theorem 1:

1. Only the intersection of a left leg with another left leg, or a right leg with another right leg, would definitely lead to different \(Y_{T1FOWA}\) and \(Y_{OFWA}\). The intersection of a left leg with a right leg does not lead to different \(Y_{T1FOWA}\) and \(Y_{OFWA}\), as illustrated by Example 8.

2. Only the intersections of \(X_i\) may lead to different \(Y_{T1FOWA}\) and \(Y_{OFWA}\). The intersections of \(W_i\) have no effect on this because the permutation function \(\sigma \) does not depend on \(W_i\).

Example 8

Consider \(X_i\) shown in Fig. 10a and \(W_i\) shown in Fig. 10b. \(Y_{FWA}\) is shown as the solid curve in Fig. 10c, \(Y_{OFWA}\) the dashed curve, and \(Y_{T1FOWA}\) the dotted curve (the latter two are covered by the solid curve). Though \(X_i\) have some intersections, \(Y_{T1FOWA}\) is the same as \(Y_{OFWA}\) because no left (right) legs of \(X_i\) intersect. \(\blacksquare \)

Fig. 10 Example 8, where \(Y_{FWA}\), \(Y_{OFWA}\) and \(Y_{T1FOWA}\) give the same result: a \(X_i\), b \(W_i\), and c \(Y_{FWA}\) (solid curve), \(Y_{OFWA}\) (dashed curve) and \(Y_{T1FOWA}\) (dotted curve)

4.2 IT2 Fuzzy OWAs

Zhou et al. [16] defined the IT2 fuzzy OWA (IT2FOWA) as:

Definition 13

Given IT2 FSs \(\{\tilde{W}_i\}_{i=1}^n\) and \(\{\tilde{X}_i\}_{i=1}^n\), the membership function of an IT2FOWA is computed by:

$$\begin{aligned} \mu _{\tilde{Y}_{IT2FOWA}}(y)=\bigcup \limits _{\forall W_i^e, X_i^e}\left[ \sup _{\displaystyle \frac{\sum _{i=1}^n w_ix_{\sigma (i)}}{\sum _{i=1}^n w_i}=y} \min (\mu _{W_1^e}(w_1),\ldots ,\mu _{W_n^e}(w_n),\mu _{X_1^e}(x_1),\ldots ,\mu _{X_n^e}(x_n))\right] \end{aligned}$$
(32)

where \(W_i^e\) and \(X_i^e\) are embedded T1 FSs of \(\tilde{W}_i\) and \(\tilde{X}_i\), respectively, and \(\sigma :\{1,\ldots ,n\}\rightarrow \{1,\ldots ,n\}\) is a permutation function such that \(\{x_{\sigma (1)},x_{\sigma (2)},\ldots ,x_{\sigma (n)}\}\) are in descending order. \(\blacksquare \)

Comparing (32) with (26), observe that the bracketed term in (32) is a T1FOWA, and the IT2FOWA is the union of all possible T1FOWAs computed from the embedded T1 FSs of \(\tilde{X}_i\) and \(\tilde{W}_i\). The Wavy Slice Representation Theorem [39] for IT2 FSs is used implicitly in this definition.

\(\tilde{Y}_{IT2FOWA}\) can be computed efficiently using \(\alpha \)-cuts, similar to the way they were used in computing the LWA. Denote the \(\alpha \)-cut on the UMF of \(\tilde{Y}_{IT2FOWA}\) as \(\overline{Y}_{IT2FOWA}(\alpha )=[y_{Ll}'(\alpha ), y_{Rr}'(\alpha )]\) for \(\forall \alpha \in [0,1]\), and the \(\alpha \)-cut on the LMF of \(\tilde{Y}_{IT2FOWA}\) as \(\underline{Y}_{IT2FOWA}(\alpha )=[y_{Lr}'(\alpha ),y_{Rl}'(\alpha )]\) for \(\forall \alpha \in [0,h_{\min }]\), where \(h_{\min }\) is defined in (16). Using the same notations for \(\alpha \)-cuts on \(\tilde{X}_i\) and \(\tilde{W}_i\) as in Fig. 5, it is easy to show that

$$\begin{aligned} y_{Ll}'(\alpha )&=\min \limits _{\forall w_i(\alpha )\in [c_{il}(\alpha ),d_{ir}(\alpha )]} \frac{\sum _{i=1}^n a_{\sigma (i),l}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )},\alpha \in [0,1]\end{aligned}$$
(33)
$$\begin{aligned} y_{Rr}'(\alpha )&=\max \limits _{\forall w_i(\alpha )\in [c_{il}(\alpha ),d_{ir}(\alpha )]} \frac{\sum _{i=1}^n b_{\sigma (i),r}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )}, \alpha \in [0,1]\end{aligned}$$
(34)
$$\begin{aligned} y_{Lr}'(\alpha )&=\min \limits _{\forall w_i(\alpha )\in [c_{ir}(\alpha ),d_{il}(\alpha )]} \frac{\sum _{i=1}^n a_{\sigma (i),r}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )}, \alpha \in [0,h_{\min }]\end{aligned}$$
(35)
$$\begin{aligned} y_{Rl}'(\alpha )&=\max \limits _{\forall w_i(\alpha )\in [c_{ir}(\alpha ),d_{il}(\alpha )]} \frac{\sum _{i=1}^n b_{\sigma (i),l}(\alpha )w_i(\alpha )}{\sum _{i=1}^n w_i(\alpha )}, \alpha \in [0,h_{\min }] \end{aligned}$$
(36)

\(y_{Ll}'(\alpha )\), \(y_{Rr}'(\alpha )\), \(y_{Lr}'(\alpha )\) and \(y_{Rl}'(\alpha )\) can also be computed using KM or EKM algorithms. Because \(\tilde{Y}_{IT2FOWA}\) computes the permutation function \(\sigma \) for each \(\alpha \) separately, generally \(\sigma \) is different for different \(\alpha \).
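Mirroring the LWA, the IT2FOWA amounts to two per-alpha-sorted T1FOWAs; a sketch (ours, again assuming triangular, normal LMFs and UMFs so that \(h_{\min }=1\)):

```python
def it2fowa_alpha_cuts(x_umfs, x_lmfs, w_umfs, w_lmfs, m=101):
    """IT2FOWA via (33)-(36): apply the per-alpha-sorted T1FOWA to the
    UMFs and to the LMFs separately."""
    return (t1fowa_alpha_cuts(x_umfs, w_umfs, m),
            t1fowa_alpha_cuts(x_lmfs, w_lmfs, m))
```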

Generally the OLWA and the IT2FOWA give different outputs, as indicated by the following:

Theorem 2

The OLWA and the IT2FOWA have different results when at least one of the following four conditions occurs:

1. The left leg of \(\overline{X}_i\) intersects the left leg of \(\overline{X}_j\), \(i\ne j\).

2. The left leg of \(\underline{X}_i\) intersects the left leg of \(\underline{X}_j\), \(i\ne j\).

3. The right leg of \(\overline{X}_i\) intersects the right leg of \(\overline{X}_j\), \(i\ne j\).

4. The right leg of \(\underline{X}_i\) intersects the right leg of \(\underline{X}_j\), \(i\ne j\). \(\blacksquare \)

The correctness of Theorem 2 can be easily seen from Theorem 1, i.e., Condition 1 leads to different \(y_{Ll}(\alpha )\) and \(y_{Ll}'(\alpha )\) for certain \(\alpha \), Condition 2 leads to different \(y_{Lr}(\alpha )\) and \(y_{Lr}'(\alpha )\) for certain \(\alpha \), Condition 3 leads to different \(y_{Rr}(\alpha )\) and \(y_{Rr}'(\alpha )\) for certain \(\alpha \), and Condition 4 leads to different \(y_{Rl}(\alpha )\) and \(y_{Rl}'(\alpha )\) for certain \(\alpha \). Example 9 illustrates Theorem 2.

Example 9

\(\tilde{X}_i\) and \(\tilde{W}_i\) shown in Figs. 8a and b are used in this example to illustrate the difference between \(\tilde{Y}_{OLWA}\) and \(\tilde{Y}_{IT2FOWA}\). \(\tilde{Y}_{IT2FOWA}\) is shown as the dotted curve in Fig. 8c. Note that it is quite different from \(\tilde{Y}_{OLWA}\) [dashed curve in Fig. 8c]. The difference is caused by the legs of \(\tilde{X}_3\) crossing the legs of \(\tilde{X}_1\), \(\tilde{X}_2\) and \(\tilde{X}_4\), which causes the permutation function \(\sigma \) to change as \(\alpha \) increases. \(\blacksquare \)

Finally, observe also two important points from Theorem 2:

1. Only the intersection of a left leg with another left leg, or a right leg with another right leg, would definitely lead to different \(\tilde{Y}_{IT2FOWA}\) and \(\tilde{Y}_{OLWA}\). The intersection of a left leg with a right leg may not lead to different \(\tilde{Y}_{IT2FOWA}\) and \(\tilde{Y}_{OLWA}\), as illustrated by Example 10.

2. Only the intersections of \(\tilde{X}_i\) may lead to different \(\tilde{Y}_{IT2FOWA}\) and \(\tilde{Y}_{OLWA}\). The intersections of \(\tilde{W}_i\) have no effect on this because the permutation function \(\sigma \) does not depend on \(\tilde{W}_i\).

Example 10

Consider \(\tilde{X}_i\) shown in Fig. 11a and \(\tilde{W}_i\) shown in Fig. 11b. \(\tilde{Y}_{LWA}\) is shown as the solid curve in Fig. 11c, \(\tilde{Y}_{OLWA}\) the dashed curve, and \(\tilde{Y}_{IT2FOWA}\) the dotted curve (the latter two are covered by the solid curve). Though \(\tilde{X}_i\) have some intersections, \(\tilde{Y}_{IT2FOWA}\) is the same as \(\tilde{Y}_{OLWA}\). \(\blacksquare \)

Fig. 11 Example 10, where the IT2FOWA and OLWA give the same result: a \(\tilde{X}_i\), b \(\tilde{W}_i\), and c \(\tilde{Y}_{LWA}\) (solid curve), \(\tilde{Y}_{OLWA}\) (dashed curve) and \(\tilde{Y}_{IT2FOWA}\) (dotted curve)

Example 11

In this final example, we compare the results of LWA, OLWA and IT2FOWA when

$$\begin{aligned} \{\tilde{X}_i \} |_{i=1,...,4}&\rightarrow \{\text {Tiny, Maximum amount, Fair amount, Medium}\}\\ \{\tilde{W}_i \}|_{i=1,...,4}&\rightarrow \{\text {Small, Very little, Sizeable, Huge amount}\}. \end{aligned}$$

where the word FOUs are depicted in Figs. 12a and b. They are extracted from the 32-word vocabulary in [2, 3, 40], which was constructed from actual survey data. The corresponding \(\tilde{Y}_{LWA}\) is shown in Fig. 12c as the solid curve, \(\tilde{Y}_{OLWA}\) as the dashed curve, and \(\tilde{Y}_{IT2FOWA}\) as the dotted curve. Observe that they are different from each other. \(\blacksquare \)

Fig. 12 Example 11: a \(\tilde{X}_i\), b \(\tilde{W}_i\), and c \(\tilde{Y}_{LWA}\) (solid curve), \(\tilde{Y}_{OLWA}\) (dashed curve) and \(\tilde{Y}_{IT2FOWA}\) (dotted curve)

4.3 Discussions

The T1 and IT2 fuzzy OWAs have been derived by considering each \(\alpha \)-cut separately, whereas the OFWA and OLWA have been derived by considering each sub-criterion as a whole. Generally the two approaches give different results, so a natural question is: which approach should be used in practice?

We believe that it is more intuitive to consider an FS in its entirety during ranking of FSs. To the best of our knowledge, all ranking methods based on \(\alpha \)-cuts deduce a single number to represent each FS and then sort these numbers to obtain the ranks of the FSs (see the Appendix). Each of these numbers is computed based only on the FS under consideration, i.e., no \(\alpha \)-cuts on other FSs to be ranked are considered. Because in OFWA and OLWA the FSs are first ranked and then the WAs are computed, they coincide with our “FS in its entirety” intuition, and hence they are preferred in this chapter. Interestingly, this “FS in its entirety” intuition was also used implicitly in developing the linguistic ordered weighted averaging [32], the uncertain linguistic ordered weighted averaging [35], and the fuzzy linguistic ordered weighted averaging [36].

5 Conclusions

In this chapter, ordered novel weighted averages, including the ordered interval weighted average, the ordered fuzzy weighted average and the ordered linguistic weighted average, as well as procedures for computing them, have been introduced. They were compared with novel weighted averages and with Zhou et al.'s fuzzy extensions of the OWA. Examples showed that our ONWAs may give different results from Zhou et al.'s extensions when the legs of the FSs intersect. Because our extensions coincide with the “FS in its entirety” intuition, they are the ones suggested for use.

Table 1 Summary of ranking methods for T1 FSs. Note that for Classes 1 and 2, each T1 FS is first mapped into a crisp number, and then these numbers are sorted to obtain the ranks of the corresponding T1 FSs. For Class 3, the pairwise ranks are computed directly