1 OWA Operators

The process of information aggregation appears in many applications related to the development of intelligent systems. In 1988 Yager introduced a new aggregation technique based on the ordered weighted averaging (OWA) operators [44]. The determination of the OWA operator weights is an important issue in applying the OWA operator to decision making. One of the first approaches, suggested by O’Hagan [34], determines a special class of OWA operators having maximal entropy of the OWA weights for a given level of orness; algorithmically it is based on the solution of a constrained optimization problem. In 2001, using the method of Lagrange multipliers, Fullér and Majlender [12] solved this constrained optimization problem analytically and determined the optimal weighting vector. In 2003, using the Karush-Kuhn-Tucker second-order sufficiency conditions for optimality, Fullér and Majlender [13] computed the exact minimal variability weighting vector for any level of orness. In 2003 Carlsson, Fullér and Majlender [7] derived an algorithm for solving the (nonlinear) constrained OWA aggregation problem. In this work we shall give a short survey of some later works that extend and develop these models.

In a decision process the idea of trade-offs corresponds to viewing the global evaluation of an action as lying between the worst and the best local ratings. This occurs in the presence of conflicting goals, when a compensation between the corresponding compatibilities is allowed. Averaging operators realize trade-offs between objectives by allowing a positive compensation between ratings. The concept of ordered weighted averaging operators was introduced by Yager in 1988 [44] as a way of providing aggregations which lie between the maximum and minimum operators. The structure of this operator involves a nonlinearity in the form of an ordering operation on the elements to be aggregated. The OWA operator provides a new information aggregation technique and has already aroused considerable research interest [49].

Definition 1.1

([44]) An OWA operator of dimension n is a mapping \(F :{\mathbb R}^{n} \rightarrow {\mathbb R}\) that has an associated weighting vector \(W = (w_{1}, w_{2}, \ldots , w_{n})^{T}\) such that \(w_{i} \in [0,1], \ 1 \le i \le n\), and \(w_1 + \cdots + w_n = 1\). Furthermore,

$$ F(a_{1}, \ldots , a_{n}) = w_1b_1 + \cdots + w_nb_n = \sum _{j=1}^n w_{j}b_{j}, $$

where \(b_{j}\) is the jth largest element of the bag \(\langle a_1, \ldots , a_n \rangle \).

A fundamental aspect of this operator is the re-ordering step; in particular, an aggregate \(a_{i}\) is not associated with a particular weight \(w_{i}\), but rather a weight is associated with a particular ordered position of the aggregates. It is noted that different OWA operators are distinguished by their weighting function. In order to classify OWA operators with regard to their location between and and or, a measure of orness associated with any vector W was introduced by Yager [44] as follows,

$$ \mathrm{orness}(W) = \frac{1}{n-1} \sum _{i=1}^n (n-i)w_i. $$

It is easy to see that for any W the \(\mathrm{orness}(W)\) is always in the unit interval. Furthermore, note that the nearer W is to an or, the closer its measure is to one; while the nearer it is to an and, the closer its measure is to zero. It can easily be shown that \(\mathrm{orness}(W^*) = 1\), \(\mathrm{orness}(W_*) = 0\) and \(\mathrm{orness}(W_A) = 0.5\), where \(W^* = (1, 0, \ldots , 0)^T\), \(W_* = (0, \ldots , 0, 1)^T\) and \(W_A = (1/n, \ldots , 1/n)^T\) denote the weighting vectors of the maximum, the minimum and the arithmetic mean operators, respectively. A measure of andness is defined as \(\mathrm{andness}(W) = 1 - \mathrm{orness}(W)\). Generally, an OWA operator with most of its nonzero weights near the top will be an orlike operator, that is, \(\mathrm{orness}(W) \ge 0.5\), and when most of its nonzero weights are near the bottom, the OWA operator will be andlike, that is, \(\mathrm{andness}(W) \ge 0.5\). In [44] Yager defined the measure of dispersion (or entropy) of an OWA vector by,

$$ \mathrm{disp}(W) = - \sum _{i=1}^nw_i \ln w_i. $$

We can see that, when using the OWA operator as an averaging operator, \(\mathrm{disp}(W)\) measures the degree to which all the aggregates are used equally.
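To make the definitions concrete, here is a minimal Python sketch (NumPy assumed; all function names are illustrative) that computes the OWA aggregate, the orness and the dispersion of a given weighting vector.

```python
import numpy as np

def owa(weights, values):
    """OWA aggregation: weights are applied to the values sorted in descending order."""
    w = np.asarray(weights, dtype=float)
    b = np.sort(np.asarray(values, dtype=float))[::-1]  # b_j = j-th largest element
    return float(w @ b)

def orness(weights):
    """orness(W) = 1/(n-1) * sum_i (n-i) w_i."""
    w = np.asarray(weights, dtype=float)
    n = len(w)
    i = np.arange(1, n + 1)
    return float(np.sum((n - i) * w) / (n - 1))

def dispersion(weights):
    """disp(W) = -sum_i w_i ln w_i (zero weights contribute nothing)."""
    w = np.asarray(weights, dtype=float)
    w = w[w > 0]
    return float(-np.sum(w * np.log(w)))

# Example with W = (0.4, 0.3, 0.2, 0.1)
W = [0.4, 0.3, 0.2, 0.1]
print(owa(W, [0.6, 1.0, 0.3, 0.5]))  # 0.4*1.0 + 0.3*0.6 + 0.2*0.5 + 0.1*0.3 = 0.71
print(orness(W))                     # (3*0.4 + 2*0.3 + 1*0.2)/3 = 2/3
print(dispersion(W))
```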

2 Obtaining OWA Operator Weights

One important issue in the theory of OWA operators is the determination of the associated weights. One of the first approaches, suggested by O’Hagan [34], determines a special class of OWA operators having maximal entropy of the OWA weights for a given level of orness; algorithmically it is based on the solution of a constrained optimization problem. Another consideration that may be of interest to a decision maker involves the variability associated with a weighting vector. In particular, a decision maker may desire low variability associated with a chosen weighting vector. It is clear that the actual type of aggregation performed by an OWA operator depends upon the form of the weighting vector [45]. A number of approaches have been suggested for obtaining the associated weights, e.g., quantifier guided aggregation [44, 45], exponential smoothing and learning [50]. O’Hagan’s approach is based on the solution of the following mathematical programming problem,

$$\begin{aligned} \mathrm{maximize \ }&\qquad \mathrm{disp}(W) = - {\displaystyle \sum _{i=1}^n w_i \ln w_i} \nonumber \\ \mathrm{subject \ to}&\mathrm{orness}(W)= {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} \cdot w_i} = \alpha , \ 0 \le \alpha \le 1 \\&w_1 + \cdots + w_n =1, \, 0 \le w_i, \, i=1,\ldots , n. \nonumber \end{aligned}$$
(1)

In 2001, using the method of Lagrange multipliers, Fullér and Majlender [12] transformed the constrained optimization problem (1) into a polynomial equation, which was then solved to determine the maximal entropy OWA operator weights. By their method, the associated weighting vector is easily obtained from

$$ \ln w_j = \frac{j-1}{n-1} \ln w_n + \frac{n-j}{n-1} \ln w_1 \Longrightarrow w_j = \root n-1 \of {w_1^{n-j} w_n^{j-1}} $$

and

$$ w_n = \frac{((n-1)\alpha -n)w_1 +1}{(n-1)\alpha +1 - nw_1} $$

where \(w_1\) satisfies

$$ w_1[(n-1)\alpha +1 - nw_1]^n = ((n-1)\alpha )^{n-1} [((n-1)\alpha -n)w_1 +1] $$

for \(n \ge 3\). For \(n=2\), \(\mathrm{orness}(w_1, w_2) =\alpha \) directly gives the optimal weights \(w_1^*= \alpha \) and \(w_2^*= 1-\alpha \). Furthermore, if \(\alpha = 0\) or \(\alpha =1\) then the associated weighting vectors are uniquely defined as \((0,0, \ldots , 0, 1)^T\) and \((1,0, \ldots , 0,0)^T\), respectively.
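As a minimal numerical sketch of this construction (Python with NumPy/SciPy assumed), the weights can be recovered by locating the root \(w_1\) of the polynomial equation above and then applying the two displayed formulas; the scan-and-validate root bracketing below is a practical device of the sketch, not part of the original derivation.

```python
import numpy as np
from scipy.optimize import brentq

def max_entropy_owa_weights(n, alpha):
    """Maximal entropy OWA weights for a given orness level alpha, via the
    Fuller-Majlender formulas. Illustrative sketch: the root w_1 of the
    polynomial equation is located numerically and the candidates are validated."""
    if alpha in (0.0, 1.0):
        w = np.zeros(n)
        w[-1 if alpha == 0.0 else 0] = 1.0
        return w
    if n == 2:
        return np.array([alpha, 1.0 - alpha])
    if abs(alpha - 0.5) < 1e-12:
        return np.full(n, 1.0 / n)          # entropy is maximal for the uniform vector

    a = (n - 1) * alpha
    f = lambda w1: w1 * (a + 1 - n * w1) ** n - a ** (n - 1) * ((a - n) * w1 + 1)

    grid = np.linspace(1e-9, 1 - 1e-9, 2000)
    vals = [f(x) for x in grid]
    for x0, x1, f0, f1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if f0 * f1 < 0:
            w1 = brentq(f, x0, x1)
            wn = ((a - n) * w1 + 1) / (a + 1 - n * w1)
            if not (0 <= wn <= 1):
                continue
            j = np.arange(1, n + 1)
            w = (w1 ** (n - j) * wn ** (j - 1)) ** (1.0 / (n - 1))
            if abs(w.sum() - 1) < 1e-6:      # keep only the candidate giving a valid weight vector
                return w
    raise RuntimeError("no valid root found")

# Example: n = 5, orness level 0.7
print(max_entropy_owa_weights(5, 0.7))
```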

An interesting question is to determine the minimal variability weighting vector under a given level of orness [48]. The variance of a given weighting vector is computed as follows

$$\begin{aligned} D^2(W)&= \sum _{i=1}^n \frac{1}{n} (w_i - E(W))^2 = \frac{1}{n} \sum _{i=1}^n w_i^2 - \bigg ( \frac{1}{n} \sum _{i=1}^n w_i\bigg )^2 = \frac{1}{n} \sum _{i=1}^n w_i^2 - \frac{1}{n^2}. \end{aligned}$$

where \(E(W) = (w_1 + \cdots + w_n)/n = 1/n\) stands for the arithmetic mean of weights.

In 2003 Fullér and Majlender [13] suggested a minimum variance method to generate OWA operator weights with minimal variability. Their approach requires the solution of the following mathematical programming problem:

$$\begin{aligned} \mathrm{minimize \ }&\qquad D^2(W) = {\displaystyle \frac{1}{n} \cdot \sum _{i=1}^n w_i^2 - \frac{1}{n^2}} \nonumber \\ \mathrm{subject \ to}&\mathrm{orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1}\cdot w_i} = \alpha , \ 0 \le \alpha \le 1, \\&w_1 + \cdots + w_n =1, \, 0 \le w_i, \, i=1,\ldots , n. \nonumber \end{aligned}$$
(2)

Fullér and Majlender [13] computed the exact minimal variability weighting vector for any level of orness using the Karush-Kuhn-Tucker second-order sufficiency conditions for optimality.
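A minimal sketch that solves problem (2) numerically (SciPy's SLSQP solver assumed) is given below; it does not reproduce the exact analytic solution of [13], but is convenient for checking particular instances.

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_owa_weights(n, alpha):
    """Numerically solve problem (2): minimize the variance of the weights
    subject to a fixed orness level and the simplex constraints."""
    i = np.arange(1, n + 1)
    c = (n - i) / (n - 1)                              # orness coefficients
    objective = lambda w: np.sum(w ** 2) / n - 1.0 / n ** 2
    constraints = (
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "eq", "fun": lambda w: c @ w - alpha},
    )
    w0 = np.full(n, 1.0 / n)                           # start from the uniform vector
    res = minimize(objective, w0, bounds=[(0.0, 1.0)] * n,
                   constraints=constraints, method="SLSQP")
    return res.x

print(min_variance_owa_weights(5, 0.7))
```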

Yager [47] considered the problem of maximizing an OWA aggregation of a group of variables that are interrelated and constrained by a collection of linear inequalities and he showed how this problem can be modeled as a mixed integer linear programming problem. The constrained OWA aggregation problem [47] can be expressed as the following mathematical programming problem

$$\begin{aligned} \max&\ w^Ty \\ \text {subject to}&\ Ax \le b, x \ge 0, \end{aligned}$$

where \(w^Ty = w_1 y_1 + \cdots + w_n y_n\) and \(y_j\) denotes the jth largest element of the bag \(\langle x_1, \ldots , x_n \rangle \).

In 2003 Carlsson, Fullér and Majlender [7] presented an algorithm for solving the (nonlinear) constrained OWA aggregation problem

$$\begin{aligned} \max&\ w^Ty \\ \text {subject to}&\ x_1 + \cdots + x_n \le 1, \, x \ge 0, \end{aligned}$$
(3)

where \(y_j\) denotes the jth largest element of the bag \(\langle x_1, \ldots , x_n \rangle \).
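Because the objective of problem (3) depends on x only through its ordered version, one may restrict attention to vectors with \(x_1 \ge \cdots \ge x_n \ge 0\) and \(x_1 + \cdots + x_n \le 1\); the extreme points of this region are \((1/k, \ldots , 1/k, 0, \ldots , 0)\), \(k = 1, \ldots , n\), so an optimum can be found by checking these candidates. The following brute-force sketch (Python assumed, names illustrative) exploits this observation; it is not necessarily the algorithm of [7].

```python
import numpy as np

def constrained_owa_max(weights):
    """Maximize w^T y over x >= 0 with x_1 + ... + x_n <= 1, where y is x sorted
    in descending order (problem (3)).

    Since the objective depends on x only through its ordered version, it suffices
    to check the candidates x = (1/k, ..., 1/k, 0, ..., 0), k = 1..n.
    (Illustrative brute-force sketch, not necessarily the algorithm of [7].)
    """
    w = np.asarray(weights, dtype=float)
    n = len(w)
    best_val, best_x = 0.0, np.zeros(n)
    for k in range(1, n + 1):
        x = np.zeros(n)
        x[:k] = 1.0 / k
        val = np.cumsum(w)[k - 1] / k       # (w_1 + ... + w_k) / k
        if val > best_val:
            best_val, best_x = val, x
    return best_val, best_x

print(constrained_owa_max([0.1, 0.5, 0.4]))   # the best k maximizes the mean of the top-k weights
```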

3 Recent Advances

In this section we will give a short chronological survey of some later works that extend and develop the maximal entropy, the minimal variability and the constrained OWA operator weights models. We will mention only those works in which the authors extended, improved or used the findings of our original papers [7, 12, 13].

In 2004 Liu and Chen [21] introduced the concepts of the parametric geometric OWA operator (PGOWA) and the parametric maximum entropy OWA operator (PMEOWA), and showed the equivalence of the PGOWA and PMEOWA weights. Carlsson et al. [8] showed how to evaluate the quality of elderly care services by OWA operators.

In 2005 Wang and Parkan [39] presented a minimax disparity approach, which minimizes the maximum disparity between two adjacent weights under a given level of orness. Their approach was formulated as

$$\begin{aligned} \mathrm{minimize \ } \max _{i=1,2, \ldots , n-1} \mid w_{i} - w_{i+1} \mid \\ \mathrm{subject \ to \ } \mathrm{orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$
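A minimal sketch of this linear program (SciPy's linprog assumed; an auxiliary variable \(\delta\) bounds all adjacent disparities, and all names are illustrative) could look as follows.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_disparity_owa_weights(n, alpha):
    """Solve the Wang-Parkan minimax disparity LP: the variables are (w_1..w_n, delta)
    and delta is an upper bound on |w_i - w_{i+1}| for all adjacent pairs."""
    c = np.zeros(n + 1)
    c[-1] = 1.0                                     # minimize delta
    # |w_i - w_{i+1}| <= delta  <=>  +(w_i - w_{i+1}) - delta <= 0  and  -(w_i - w_{i+1}) - delta <= 0
    A_ub, b_ub = [], []
    for i in range(n - 1):
        for sign in (+1.0, -1.0):
            row = np.zeros(n + 1)
            row[i], row[i + 1], row[-1] = sign, -sign, -1.0
            A_ub.append(row)
            b_ub.append(0.0)
    A_eq = np.zeros((2, n + 1))
    A_eq[0, :n] = 1.0                                        # weights sum to one
    A_eq[1, :n] = (n - np.arange(1, n + 1)) / (n - 1)        # orness(w) = alpha
    b_eq = [1.0, alpha]
    bounds = [(0.0, 1.0)] * n + [(0.0, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n]

print(minimax_disparity_owa_weights(5, 0.7))
```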

Majlender [32] developed a maximal Rényi entropy method for generating a parametric class of OWA operators and the maximal Rényi entropy OWA weights. His approach was formulated as

$$\begin{aligned} \mathrm{maximize \ } H_\beta (w) = \frac{1}{1-\beta } \log _2 \sum _{i=1}^n w_i^\beta \\ \mathrm{subject \ to \ } \mathrm{orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$

where \(\beta \in {\mathbb R}\) and \(H_1(w) = - \sum _{i=1}^n w_i \log _2 w_i\). Liu [22] extended the properties of the OWA operator to the RIM (regular increasing monotone) quantifier, which is represented by a monotone function instead of the OWA weighting vector. He also introduced a class of parameterized equidifferent RIM quantifiers with a minimum variance generating function. This equidifferent RIM quantifier is consistent with its orness level for any aggregated elements and can be used to represent the decision maker’s preference. Troiano and Yager [37] pointed out that the OWA weighting vector and fuzzy quantifiers are strongly related. An intuitive way of shaping a monotonic quantifier is by means of the threshold that separates the region of what is satisfactory from what is not. Therefore, the characteristics of a threshold can be directly related to the OWA weighting vector and to its metrics: the attitudinal character and the entropy. Usually these two metrics are supposed to be independent, although some limitations on their values arise when they are considered jointly. They argued that these two metrics are strongly related by the definition of the quantifier threshold, and they showed how the two can be used jointly to verify and validate a quantifier and its threshold.

In 2006 Xu [43] investigated the dependent OWA operators and developed a new argument-dependent approach to determining the OWA weights, which can relieve the influence of unfair arguments on the aggregated results. Zadrozny and Kacprzyk [54] discussed the use of Yager’s OWA operators within a flexible querying interface. The key issue is the adaptation of an OWA operator to the specifics of a user’s query. They considered some well-known approaches to the manipulation of the weights vector and proposed a new one that is simple and efficient. They discussed the tuning (selection of weights) of the OWA operators and proposed an algorithm that is effective and efficient in the context of their FQUERY for Access package. Wang et al. [40] developed a query system for a practical hemodialysis database of a regional hospital in Taiwan, which can help doctors make more accurate decisions in hemodialysis. They built fuzzy membership functions of hemodialysis indices based on experts’ interviews. They proposed a fuzzy OWA query method in which the decision makers (doctors) only need to change the weights of the attributes dynamically; the method then revises the weight of each attribute based on the aggregation situation, and the system provides synthetic suggestions to the decision makers. Chang et al. [10] proposed a dynamic fuzzy OWA model to deal with problems of group multiple criteria decision making (MCDM). Their model can help users solve MCDM problems in situations of fuzzy or incomplete information. Amin and Emrouznejad [6] introduced an extended minimax disparity model to determine the OWA operator weights as follows,

$$\begin{aligned} \mathrm{minimize \ } \delta \\ \mathrm{subject \ to \ } \mathrm{orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_j - w_i + \delta \ge 0, \, i=1, \ldots , n-1, j=i+1, \ldots , n\\ w_i - w_j + \delta \ge 0, \, i=1, \ldots , n-1, j=i+1, \ldots , n \\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$

In this model the deviation \(|w_i - w_j| \) is bounded by the same \(\delta \) for all pairs \(i \ne j\), not only for adjacent weights.

In 2007 Liu [23] proved that the solutions of the minimum variance OWA operator problem under a given orness level and of the minimax disparity problem for OWA operators are equivalent; both have the same form of maximum-spread equidifferent OWA operator. He also introduced the concept of the maximum-spread equidifferent OWA operator and proved its equivalence to the minimum variance OWA operator. Llamazares [30] proposed determining OWA operator weights according to the class of majority rule one wishes to obtain when individuals do not grade their preferences between the alternatives. Wang et al. [41] introduced two models that determine OWA operator weights that are as equally important as possible for a given orness degree. Their models can be written as

$$\begin{aligned} \mathrm{minimize \ } J_1 = \sum _{i=1}^{n-1} (w_{i} - w_{i+1} )^2 \\ \text {subject to } \text {orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$

and

$$\begin{aligned} \mathrm{minimize \ } J_2 = {\displaystyle \sum _{i=1}^{n-1} \bigg ( \frac{w_i}{w_{i+1}} - \frac{w_{i+1}}{w_i}} \bigg )^2 \\ \text {subject to } \text {orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$

Yager [51] used stress functions to obtain OWA operator weights. With a stress function, a user can “stress” which argument values they want to give more weight in the aggregation. An important feature of the stress function is that it is only required to be a nonnegative function on the unit interval. This allows a user to focus completely on where to put the stress in the aggregation without having to consider the satisfaction of any other requirements.

In 2008 Liu [24] proposed a general optimization model with a strictly convex objective function to obtain the OWA operator weights under a given orness level,

$$\begin{aligned} \mathrm{minimize \ } \sum _{i=1}^{n} F(w_i) \\ \text {subject to } \text {orness}(w) = {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} w_i} = \alpha , \ 0 \le \alpha \le 1,\\ w_1 + \cdots + w_n =1, \, 0 \le w_i \le 1, \, i=1, \ldots , n. \end{aligned}$$

where F is a strictly convex function on [0, 1] that is at least twice differentiable. His approach includes the maximum entropy (for \(F(x) = x \ln x\)) and the minimum variance (for \(F(x) = x^2\)) problems as special cases. More generally, when \(F(x) = x^\alpha \), \(\alpha >0\), it becomes the Rényi entropy OWA problem [32], which includes the maximum entropy and the minimum variance OWA problems as special cases. Liu also brought into this general model the solution methods and the properties of the maximum entropy and minimum variance problems that had been studied separately earlier. The consistency property, that the aggregation value for any aggregated set monotonically increases with the given orness value, is still kept, which gives more alternatives to represent preference information in the aggregation of decision making. Then, with the conclusion that the RIM quantifier can be seen as the continuous case of the OWA operator with infinite dimension, Liu [25] further suggested a general RIM quantifier determination model and solved it analytically with optimal control techniques. Ahn [2] developed some new quantifier functions for aiding quantifier-guided aggregation. They are related to weighting functions whose weights are strictly ranked and whose orness value is constant, independently of the number of criteria considered. These new quantifiers show the same properties as the weighting functions and can be used for the quantifier-guided aggregation of multiple-criteria input. The proposed RIM and regular decreasing monotone (RDM) quantifiers produce the same orness as the weighting functions from which each quantifier function originates. The quantifier orness rapidly converges to the orness value of the weighting functions having a constant value of orness. This result indicates that a quantifier-guided OWA aggregation will result in a similar aggregate provided that the number of criteria is not too small.
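For any admissible F, Liu's general model above can be solved numerically; the following sketch (SciPy assumed, names illustrative) recovers the maximum entropy and minimum variance weights as the special cases \(F(x) = x \ln x\) and \(F(x) = x^2\).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import xlogy

def general_owa_weights(n, alpha, F):
    """Numerically solve the general model: minimize sum_i F(w_i) under a fixed
    orness level and the simplex constraints, for a strictly convex F on [0, 1].
    (Illustrative sketch; F is supplied by the caller as a vectorized function.)"""
    c = (n - np.arange(1, n + 1)) / (n - 1)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: c @ w - alpha})
    w0 = np.full(n, 1.0 / n)
    res = minimize(lambda w: np.sum(F(w)), w0, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

# F(x) = x ln x recovers the maximum entropy problem, F(x) = x^2 the minimum variance one.
entropy_F  = lambda x: xlogy(x, x)   # x*ln(x), with 0*ln(0) = 0
variance_F = lambda x: x ** 2
print(general_owa_weights(5, 0.7, entropy_F))
print(general_owa_weights(5, 0.7, variance_F))
```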

In 2009 Wu et al. [42] used a linear programming model for determining ordered weighted averaging operator weights with maximal Yager’s entropy [46]. By analyzing the desirable properties of this measure of entropy, they proposed a novel approach to determine the weights of the OWA operator. Ahn [3] showed that the closed form of weights obtained by the least-squared OWA (LSOWA) method is equivalent to the minimax disparity approach solution when a condition ensuring all positive weights is added to the formulation of the minimax disparity approach. Liu [26] presented some methods of OWA determination with different dimension instantiations, that is, methods that produce an OWA operator series which can be used in application cases of the same type but of different dimensions. He also showed some OWA determination methods that make the elements follow monotonic, symmetric or arbitrary function shapes for different dimensions. Using Yager’s stress function method [51] he managed to extend an OWA operator to another dimension with the same aggregation properties.

In 2010 Ahn [4] presented a general method for obtaining OWA operator weights via an extreme point approach. The extreme points are identified by the intersection of an attitudinal character constraint and a fundamental ordered weight simplex that is defined as

$$ K = \{w \in {\mathbb R}^n \mid w_1 + w_2 +\cdots + w_n = 1, w_j \ge 0, \, j = 1, \ldots ,n\}. $$

The parameterized OWA operator weights, which lie in the convex hull of the identified extreme points, can then be determined by selecting an appropriate parameter. Vergara and Xia [38] proposed a new method to find the weights of an OWA operator for uncertain information sources. Given a set of uncertain data, the proposed method finds the combination of weights that reduces the aggregated uncertainty for a predetermined orness level. Their approach assures the best information quality and precision by reducing uncertainty. Yager [52] introduced a measure of diversity related to the problem of selecting n objects from a pool of candidates lying in q categories.

In 2011 Liu [27], summarizing the main OWA determination methods (the optimization criteria methods, the sample learning methods, the function based methods, the argument dependent methods and the preference methods), showed some relationships between methods of the same kind as well as between different kinds. Gong [15] generated minimal disparity OWA operator weights by minimizing the combination disparity between any two adjacent weights and its expectation. Ahn [5] showed that the weights generated by the maximum entropy method perform comparably to the rank order centroid weights under certain conditions. Hong [17] proved a relationship between the minimum-variance and minimax disparity RIM quantifier problems.

In 2012 Zhou et al. [55] introduced the concept of the generalized ordered weighted logarithmic proportional averaging (GOWLPA) operator and proposed the generalized logarithm chi-square method to obtain the GOWLPA operator weights. Zhou et al. [56] presented a new aggregation operator called the generalized ordered weighted exponential proportional averaging (GOWEPA) operator and introduced the least exponential squares method to determine the GOWEPA operator weights based on its orness measure. Yari and Chaji [53] used a maximum Bayesian entropy method for determining ordered weighted averaging operator weights. Liu [28] provided analytical solutions of the maximum entropy and minimum variance problems with given linear medianness values.

In 2013 Cheng et al. [11] proposed a new time series model for forecasting stock prices in Taiwanese stock markets, which employs the ordered weighted averaging operator to fuse high-order data into the aggregated values of single attributes together with a fusion adaptive network-based fuzzy inference system procedure. Luukka and Kurama [31] showed how to apply OWA operators to a similarity classifier. The newly derived classifier was examined with four different medical data sets taken from the UCI Machine Learning Repository. Liu et al. [29] introduced a new aggregation operator, the induced ordered weighted averaging standardized distance (IOWASD) operator. The IOWASD operator includes in its formulation a parameterized family of standardized distance aggregation operators ranging from the minimum to the maximum standardized distance. By using the IOWA operator in the VIKOR method, it is possible to deal with complex attitudinal characters (or complex degrees of orness) of the decision maker and provide a more complete picture of the decision making process.

In 2014 Sang and Liu [36] showed an analytic approach to obtain the least square deviation OWA operator weights. Kim and Singh [19] outlined an entropy-based hydrologic alteration assessment of biologically relevant flow regimes using gauged flow data. The maximum entropy ordered weighted averaging method is used to aggregate non-commensurable, biologically relevant flow regimes to fit an eco-index such that the harnessed level of the ecosystem is reflected. Kishor et al. [20] introduced orness measures in an axiomatic framework and proposed an alternative definition of orness based on these axioms. The proposed orness measure satisfies a more general set of axioms than Yager’s orness measure.

In 2015 Zhou et al. [57] introduced the generalized least squares method to determine the generalized ordered weighted logarithmic harmonic averaging (GOWLHA) operator weights based on its orness measure. Gao et al. [14] proposed a new operator named the generalized ordered weighted utility averaging-hyperbolic absolute risk aversion (GOWUA-HARA) operator and constructed a new optimization model to determine its optimal weights. Aggarwal [1] presented a method to learn the criteria weights in multi-criteria decision making by applying emerging learning-to-rank machine learning techniques.

In 2016 Kaur et al. [18] applied minimal variability OWA operator weights to reduce the computational complexity of high dimensional data, and used ANFIS with fuzzy c-means clustering to produce understandable rules for investors. They verified their model through an empirical analysis of stock data sets collected from the Bombay stock market to forecast the Bombay Stock Exchange Index. Gong et al. [16] presented two new disparity models to obtain the associated weights, which are determined by considering the absolute and relative deviations of any distinct pair of weights. Mohammed [33] demonstrated the application of a Laplace-distribution-based ordered weighted averaging operator to the problem of breast tumor classification.

In 2017 Reimann et al. [35] performed a large-scale empirical study to test whether preferences exhibited by subjects can be represented better by the OWA operator or by a more standard multi-attribute decision model. Chaji [9] presented an analytic approach to obtain maximal Bayesian entropy OWA weights. His approach is based on the solution of the following mathematical programming problem,

$$\begin{aligned} \mathrm{maximize \ }&W = - {\displaystyle \sum _{i=1}^n w_i \ln \frac{w_i}{\beta _i/\min \{\beta _1, \ldots , \beta _n\} }} = - {\displaystyle \sum _{i=1}^n w_i \ln \frac{w_i}{\beta _i}} - \ln \min \{\beta _1, \ldots , \beta _n\} \nonumber \\ \mathrm{subject \ to}&\qquad \mathrm{orness}(W)= {\displaystyle \sum _{i=1}^n \frac{n-i}{n-1} \cdot w_i} = \alpha , \ 0 \le \alpha \le 1 \\&w_1 + \cdots + w_n =1, \, 0 \le w_i, \, i=1,\ldots , n. \nonumber \end{aligned}$$

where \(\beta _1, \ldots , \beta _n\) are given prior OWA weights, such that \(\beta _1 + \cdots + \beta _n =1\), \(\beta _i >0, \, i=1,\ldots , n\).
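A minimal numerical sketch of this problem (SciPy assumed; the constant term \(-\ln \min \{\beta _1, \ldots , \beta _n\}\) does not affect the maximizer and is omitted) could look as follows.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import xlogy

def max_bayesian_entropy_owa_weights(prior, alpha):
    """Numerically maximize -sum_i w_i ln(w_i / beta_i) for given prior weights beta and
    orness level alpha (the constant -ln min(beta) is dropped; it does not change the argmax)."""
    beta = np.asarray(prior, dtype=float)
    n = len(beta)
    c = (n - np.arange(1, n + 1)) / (n - 1)
    objective = lambda w: np.sum(xlogy(w, w / beta))   # = sum_i w_i ln(w_i/beta_i), to be minimized
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: c @ w - alpha})
    res = minimize(objective, beta.copy(), bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

print(max_bayesian_entropy_owa_weights([0.1, 0.2, 0.3, 0.4], 0.6))
```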