1 Introduction

Multiple criteria decision-making (MCDM) is a topic of massive interest in all areas of research and practice because of its usefulness and application prospects in almost every field. MCDM methods have been applied in economics, human resources, business, finance, actuarial science, hydrology, water management/water reservoir management, energy/energy management/energy planning, agriculture, vehicle insurance, medicine and health care, engineering, engineering design, utilities, road safety, supply chain management, transportation, logistics, marketing, retail, the environment, wildlife management, social sciences, management, construction, manufacturing and assembly, manufacturing systems, production planning, scheduling, portfolio selection, distribution systems, chemistry, the military [48], weapons system evaluation, material selection, and risk assessment [49]. This highlights the significance of research on MCDM methods and their applications. In MCDM problems, making trade-offs between conflicting criteria and reaching a scientific decision largely depends on the decision-maker’s experience [49]; however, if the decision-makers’ experience is limited in light of the problem on which their opinions are sought, they may be unable to record their judgment (in the form of, e.g., scores) on questions about which they feel they do not have enough information to judge. In such cases, the final sheet of their responses may contain some empty spaces, leading to incomplete data and thus uncertainty.

Uncertainty is an essential aspect of MCDM that, in principle, arises from failures, assumptions, or the unavailability or incompleteness of data ([22], p. 8). Most of the conventional MCDM methods in practice today are not fully equipped to handle the uncertainties arising primarily from incomplete data sets. If one goes through the MCDM literature available in all major databases, one can easily observe that in most studies scholars have tested their models on complete data sets, as incompleteness is a challenge for MCDM. However, in our daily lives, incomplete data are not as rare as they are in the literature. When a surveyor encourages an interviewee to fill in all the questions of a questionnaire (even though the interviewee may be unable to answer a few questions and may want to leave them empty), an attempt is being made to minimize the possibility of receiving incomplete data sheets. If missing entries are still found against a few respondents in the data sheet, a convenient and popular approach is to delete those responses and consider only the completely filled questionnaires for data analysis. However, some MCDM methods, like Grey Relational Analysis (GRA), are popular primarily for their ability to handle small and incomplete data [19, 35, 51]. Thus, when the sample size is small, deleting incomplete data sets can further reduce the size of the sample, which may not be acceptable in cases where the sample was already deemed “too small”. Therefore, a method capable of handling incomplete information by extracting maximum information from the available data is more likely to solve real-life MCDM problems.

Generating missing information to solve MCDM problems more effectively is not a new concept in the literature. In their study involving four MCDM techniques, Park et al. [38] used binomial and uniform distributions to generate missing information. Truxillo [45] mentioned maximum likelihood parameter estimation and multiple imputation methods for handling incomplete data.
In Grey System Theory, incomplete data sets can be managed through the execution of Average Operators (or, where applicable, the Stepwise Ratio Operator) ([30], Chapter 4). Another important consideration in MCDM approaches is the normalization of data, which aims at obtaining comparable scales for the criteria values, thus making the data dimensionless [46]. Different criteria can have different scales, so normalization or standardization of the data is a prerequisite in most MCDM techniques. However, different normalization techniques might yield different decision outcomes [25, 36]. Thus, the development of an MCDM model in which the effect of normalization on the results is negligible is an important issue.

A major reason why, even in an era when MCDM techniques are in abundance, organizations may still avoid using MCDM methods and prefer to decide intuitively has been reported to be the unacceptability to rational managers of the ranking produced by an MCDM method [3]. The complexity of the algorithm (higher computational cost) can also be added to the list. In the current study, a novel approach, hereby called Grey Absolute Decision Analysis (GADA), is proposed that not only handles multi-attribute decision-making problems containing incomplete data sets without necessitating the standardization of incommensurable criteria, but also provides a convenient and user-friendly algorithm. The algorithm has a low computational cost and does not require highly sophisticated software. Further, when compared with other MCDM methods, the results are both logical and convincing, and the rationale on which the incomplete data sets are managed is less likely to produce a ranking that may be deemed unacceptable by organizations and their managers. Therefore, the GADA method, with all its limitations, has particular strengths which make it very suitable for convenient, reliable, multi-attribute organizational decision-making. In the current study, the proposed method has been tested in different environments and its feasibility has been demonstrated with applications to different cases.

2 Literature Review

2.1 Multi-criteria Decision-Making Under Uncertainty

Multiple criteria decision-making (MCDM) is an integral part of decision theory and has been widely applied in numerous fields [31, 43]. MCDM is a structured framework for analyzing decision problems characterized by complex multiple objectives [2]. One of the goals of MCDM is to help decision-makers integrate objective measurements with value judgments that are based not on individuals’ opinions but on collective group ideas [15]. In MCDM methods, the process of judgment usually involves making pairwise comparisons between alternatives according to a given criterion [1]. The basic idea behind an MCDM method is to combine the criteria values and weights to obtain a single point of reference for evaluation (criteria) [52]. Roszkowska [39] highlights the main steps in MCDM as

  • establish system evaluation criteria that relate system capabilities to goals,

  • develop alternative systems for attaining the goals (generating alternatives),

  • evaluate alternatives in terms of criteria,

  • apply one of the normative multiple criteria analysis methods,

  • accept one alternative as “optimal” (preferred),

  • if the final solution is not acceptable, gather new information and go to the next iteration of multiple criteria optimization.

In some complex decision-making problems, however, decision-makers (DMs) cannot precisely express their decision information in quantitative terms and instead provide qualitative descriptions [43]. This impreciseness or greyness in the input data can create uncertainty and further complicate the data analysis and the decision-making process. Further, in real life, information is usually imprecise or vague; each MCDM model tries to model an ideal situation, yet no model can represent an accurate picture of a real phenomenon. These models are merely rough approximations of reality, so an ideal model should be the one that produces results closest to reality. However, the fuzziness or greyness of qualitative information, along with other issues like the incompleteness of data, can prevent an MCDM model from producing rational rankings acceptable to the decision-makers, especially when the uncertainties resulting from these factors are not considered by the model. To handle MCDM problems under uncertainty, scholars have produced many theories, each with its own strengths and limitations; e.g., probability theory has been successful for its ability to handle problems containing randomness, yet uncertainty is not probabilistic in nature but rather imprecise [26]. Most of us are well familiar with fuzzy logic for its ability to handle uncertainty [29]. Along with fuzzy logic, Grey System Theory is also an effective but relatively overlooked approach for uncertain environments [14]. Unlike statistics and probability theory, Grey System Theory allegedly requires neither a large sample size nor a typical probability distribution [18, 19, 33, 51]. Further, unlike fuzzy theory, which investigates phenomena possessing “the characteristic of clear intension and unclear extension”, Grey System Theory investigates phenomena that possess “clear extension and unclear intension” ([30], p. 10). For example, “good investment” is a fuzzy concept with clear intension, but the range or extension of a good investment is unclear. However, when one says that a company is going to invest $10–$12 million in a certain product, then, because of the clearly defined extension, one can say this investment range is a grey concept with unclear intension, as it is hard to know the exact intension, i.e., the exact amount of intended investment. The exact amount is randomly distributed across the clear extension. Thus, in the case of “random uncertainty”, Grey System Theory offers a very suitable approach to decision-making [41]. Further, its flexibility reportedly allows it to handle fuzzy situations as well, by transforming a fuzzy set environment into a grey number environment [46].

Traditionally, the foundations of decision analyses under risk and uncertainty are provided in expected utility theory [2]. Simple Additive Weighting (SAW) is a representative method under utility theory. Methods based on initial qualitative measurements are also becoming popular. Analytic Hierarchy Process (AHP) and fuzzy set theory-based methods are representative methods in this regard [46]. AHP and SAW are two of the most frequently used MCDM methodologies in practice [15, 39, 49]. The SAW method multiplies the normalized value of the criteria for the alternatives with the importance of the criteria and the alternative with the highest score is selected as the preferred one [39]. AHP uses a hierarchical structure and pairwise comparisons. An AHP hierarchy has at least three levels: the main objective of the problem at the top, multiple criteria that define alternatives in the middle and competing alternatives at the bottom [39]. A brief discussion on AHP and its steps of algorithm can be found in Ananda and Herath [2].

Because of strengths and weaknesses associated with different MCDM methods, these methods should be selected and used in specific situations where they can produce the most reliable results [38, 39] as there is no “one size fits all” MCDM approach. Some of the difficulties and challenges associated with existing MCDM methods, which can cause uncertainty in MCDM and can hamper the reliability of the results produced from a given MCDM method are listed below:

  i. Uncertainty and complexity in the decision-making process [2].

  ii. Complexity in algorithms/calculations and requirement of intricate details [6, 20].

  iii. The problem arising from the aggregation of observations, because different methods of averaging can produce different results [25]; e.g., the arithmetic mean may improve the consistency of the judgments but can rescind the initial logic expected by the respondents [7]. The arithmetic mean (and likewise the geometric mean) is useful, but its suitability depends on the situation [27].

  iv. Different methods to normalize or standardize the data sequences can produce different decision outcomes [25].

  v. Inadaptability to different levels of measurement [6], or problems associated with non-identical units of measure. An MCDM process typically involves range standardization to transform criterion scales with different units into a commensurable (dimensionless) form for convenient comparison of their weights, as some methods can only work with identical units ([11, 22], p. 8; [44]).

  vi. Inappropriate selection of an MCDM method for a problem better suited to another method, and/or incorrect execution of the method. There is no single method suitable for all situations [38, 39]; e.g., a method fit for group decision-making may not fit non-group decision-making, and a method suited to situations with little uncertainty may not work well in highly uncertain situations. Also, some methods assume the independence of criteria while others consider their dependence on feedback/responses ([47], p. 105). Thus, the improper use of a method can also restrict its effectiveness or suitability [7].

  vii. Impreciseness, incompleteness and indeterminateness of input data or available information [2, 23, 31, 50] and uncertain information in the decision-making process [10].

  viii. Information can be exact or inexact. Numerical input data cannot fully explain a qualitative phenomenon, which gives rise to ambiguity, uncertainty, greyness or fuzziness in the judgement of a decision-maker ([2, 6, 10, 22, 28, 47, 54], p. 8).

  ix. Decision objectives can be complicated, uncertain and even conflicting ([22], p. 8).

  x. Decision criteria can be cardinal (continuous) or ordinal (categorical) ([22], p. 8). Also, in a study, the criteria can be either all positive, all negative or mixed.

  xi. Unavailability of a sufficient amount of data. Even though a method may not necessarily need a large amount of data, a small sample might only provide a rough picture, especially in academic research [7]. However, one cannot rule out the possibility of only a few decision-makers being involved in some of the most important decisions (especially high-impact decisions that are likely to influence an entire organization, or country).

  xii. Different methods of weighting and scoring can cause loss of information to a varying degree; e.g., direct scoring methods may have low computational cost and can be easily calculated, but may be susceptible to loss of information because of the use of ordinal scales ([13], p. 87).

  xiii. The respondents recording opinions arbitrarily, mistakenly, non-professionally or carelessly, or because they were not reliable experts qualified to report their opinions on the MCDM problem under study [7]. Their inability to properly comprehend a questionnaire, or part of it, can also be added to the list.

  xiv. The ranking produced by an MCDM method can go against the intuition of the decision-makers, resulting in a conflict that prompts them to decide intuitively [3]. Even two MCDM methods can produce conflicting ranks.

  xv. Some methods, e.g., the TOPSIS and VIKOR methods, produce only a ranking, not relative weights for the decision alternatives.

One can see that most of these challenges can both directly and indirectly contribute to uncertainties in the multi-criteria decision-making process. Table 1 presents an overview of the discussion and classifies the uncertainties associated with multi-criteria decision-making problems into five categories along with their possible causes. The table shows that, for optimum decision-making under the MCDM-under-uncertainty paradigm, the minimization of these “threats” is crucial, and the threats can only be managed if the causes of uncertainty are effectively managed. Therefore, the development of a method that can overcome most of the causes of uncertainty in multi-criteria decision-making can be a commendable scientific contribution.

Table 1 Five dimensional uncertainties in multi-criteria decision-making

Here a point needs emphasis. Studies argue that in the MCDM process, the question under study is usually “how to improve the decision?” rather than “what is the right decision?”; therefore, the concept of an optimum decision does not exist in a multi-criteria decision analysis framework, and thus multi-criteria analysis cannot be justified within the optimization paradigm frequently adopted in traditional operational research/management sciences [31]. The current study, however, argues that the concept of an optimum decision can be integrated into the discrete MCDM under uncertainty framework by redefining the optimum alternative not as the “best alternative” but as the “best alternative among all available alternatives”. There is no “best solution” to any problem. Even the solution that is regarded as “optimum” or “best” by all MCDM methods today can appear “sub-optimum” tomorrow!

2.2 Absolute Grey Relational Analysis

Grey Relational Analysis (GRA) models, also called Grey Incidence Analysis (GIA) or Grey Correlation Analysis models, are one of the most critical parts of Grey System Theory [53], a scientific theory propounded by the Chinese scientist Dr. Deng Julong in the 1980s [9] that deals with systems with uncertain or incomplete information by referring to them as “grey systems” [9, 17, 34]. The earliest GRA model was proposed by Deng in the 1980s, and it is still the most influential one ([24, 30], p. 68). The underlying concept of GRA is to determine the extent of similarity between data sequences by using the degree of similarity of the geometric curves of the sequences ([30], p. 68). GRA can be used to expound a grey system whose operating mechanism and physical prototype are unclear [53]. Liu Sifeng’s Absolute GRA model, originally proposed in 1992, is one of the most promising GRA models and is the soul of several newer grey multi-criteria decision-making models like Grey Incidence Decision-Making, Grey Target Decision-Making, etc. The Absolute GRA model extends definite integral models to multiple integral models and can be used for high-dimensional data. Its output is a single value, called the Absolute Grey Relational Grade (ɛ), which measures the geometric proximity among data sequences [18]. If the data sequences are geometrically more similar, the value will be larger; if the sequences are less similar, the value will be smaller. However, no two sequences can ever be absolutely unrelated. These properties of the Absolute GRA model, along with others, are mentioned in Liu et al. ([30], p. 81). For two equal-time-interval sequences Xi and Xj, the steps to calculate ɛ are shown below [18, 30].

  • Step I: Calculate the zero-starting point images (\(X_i^0\) and \(X_j^0\)) of the sequences Xi and Xj.

Let Xi = (xi(1), xi(2),…, xi(n)) be the data sequence of a system’s behavior and D the sequence operator which satisfies XiD = (xi(1)d, xi(2)d,…, xi(n)d) and xi(k)d = xi(k) − xi(1); k = 1, 2,…, n. Then D is referred to as a zero-starting point operator and XiD is the zero-starting point image of Xi. XiD is often written as \(X_i D = X_i^0 = (x_i^0(1), x_i^0(2), \ldots, x_i^0(n))\).

  • Step II: Calculate |si|, |sj|, and |si − sj|.

If the zero-starting point images of Xi and Xj are \(X_i^0 = (x_i^0(1), x_i^0(2), \ldots, x_i^0(n))\) and \(X_j^0 = (x_j^0(1), x_j^0(2), \ldots, x_j^0(n))\),

Then,

$$\left| {s_{i} } \right| = \left| {\mathop \sum \limits_{k = 2}^{n - 1} x_{i}^{0} \left( k \right) + \frac{1}{2}x_{i}^{0} \left( n \right)} \right|,$$
$$\left| {s_{j} } \right| = \left| {\mathop \sum \limits_{k = 2}^{n - 1} x_{j}^{0} \left( k \right) + \frac{1}{2}x_{j}^{0} \left( n \right)} \right|,$$
$$\left| {s_{i} - s_{j} } \right| = \left| {\sum\limits_{k = 2}^{n - 1} {\left( {x_{i}^{0} \left( k \right) - x_{j}^{0} \left( k \right)} \right) + \frac{1}{2}\left( {x_{i}^{0} \left( n \right) - x_{j}^{0} \left( n \right)} \right)} } \right|.$$
  • Step III: Calculate Absolute Grey Relational Grade (ɛij) between the sequence Xi and Xj using the following formula

    $$\varepsilon_{ij} = \frac{{1 + \left| {s_{i} } \right| + \left| {s_{j} } \right|}}{{1 + \left| {s_{i} } \right| + \left| {s_{j} } \right| + \left| {s_{i} - s_{j} } \right|}}.$$

One of the properties of the model is that it always yields a non-zero value; thus the absolute grade never implies a zero relationship between two data sequences, i.e., \(\varepsilon_{ij} = \varepsilon_{ji} \ne 0\). Recent studies (e.g., [18]) have reported that \(\frac{1}{2} < \varepsilon \le 1\). Also, to apply this method the lengths of the data sequences should be the same. For the detailed method on how to solve each step, and to view the complete list of properties and definitions associated with this model, Liu et al. ([30], Chapter 5) and Javed and Liu [18] can be consulted.
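To make these steps concrete, the following minimal Python sketch computes ɛ for two equal-length sequences following Steps I–III above (the function name and the toy sequences are illustrative and not taken from the cited sources):

```python
import numpy as np

def absolute_grey_relational_grade(xi, xj):
    """Absolute Grey Relational Grade (epsilon) between two equal-length
    sequences, following Steps I-III above."""
    xi, xj = np.asarray(xi, dtype=float), np.asarray(xj, dtype=float)
    if xi.shape != xj.shape:
        raise ValueError("Sequences must have the same length")
    # Step I: zero-starting point images
    x0i, x0j = xi - xi[0], xj - xj[0]
    # Step II: |si|, |sj| and |si - sj|
    si = abs(x0i[1:-1].sum() + 0.5 * x0i[-1])
    sj = abs(x0j[1:-1].sum() + 0.5 * x0j[-1])
    sij = abs((x0i - x0j)[1:-1].sum() + 0.5 * (x0i[-1] - x0j[-1]))
    # Step III: absolute grey relational grade
    return (1 + si + sj) / (1 + si + sj + sij)

print(absolute_grey_relational_grade([1, 2, 3, 4], [1, 2, 3, 4]))    # 1.0 (identical curves)
print(absolute_grey_relational_grade([1, 2, 3, 4], [10, 2, 40, 3]))  # ~0.63 (less similar)
```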

2.3 Handling Incomplete Data

Producing reliable results from insufficient or incomplete data under uncertain and complex environments is a challenge for almost all MCDM techniques. Ananda and Herath [2] highlighted the need to develop new ways to elicit responses under incomplete information, considering the frequent incompleteness of information in forestry. Levy and Taji [23], in their study involving the Group Analytic Network Process for hazards planning and emergency management, discussed the frequency of group decision-making with incomplete information (both in terms of quantity and quality). Pankratova and Nedashkovskaya [37] also discussed the incompleteness of qualitative information like experts’ estimates and stressed that decision-making support techniques must include methods for processing such information. In real-world problems, decision-making often takes place in fuzzy environments where the preference information provided by decision-makers is often incomplete, indeterminate and inconsistent, and can create uncertainty about preferences [5, 15, 32, 39]. Therefore, handling incomplete information by extracting maximum information from the available data is necessary for solving real-life MCDM problems effectively.

The incompleteness of data, and the uncertainty associated with decision-making based on incomplete data, are problems of decision-making in real life. Generating missing information is one of the most acceptable solutions for handling incomplete information. Grey System Theory (GST) is an emerging and intelligent field for dealing with problems containing incomplete information and uncertainty. In GST, there is a process called whitenization, whereby grey data (incomplete data) are made whiter (more complete) [16]; here, grey implies something between black (completely unknown) and white (completely known). In the grey systems literature, however, there are several ways to do so. Liu et al. ([30], Chapter 4), in their influential work on grey data analysis, discussed the use of Average Operators to create new data by filling the blanks in the available data. They discussed three types of Average Operators: a non-adjacent neighbor generating operator, a mean (average) generation operator using the non-adjacent neighbors, and an even generation operator using adjacent neighbors (pp. 57–58). However, if one neighbor is missing, as in the case of the first or last data value in a sequence, Average Operators cannot be applied and the Stepwise Ratio Operator is suggested (p. 59). In different situations, different operators should be used; therefore, the selection of an appropriate operator depends on the situation and the needs of the data analysts. The discussion in their work gives the impression that they suggested those operators for situations where the position of data values in a sequence is important and cannot be changed. For example, if a blank is at the first or last position in a sequence, they suggested the Stepwise Ratio Operator; however, if the position of data values is not important, then simply by interchanging the column (or row, as per the layout of the data sheet) containing the blank (Ø) with another one (e.g., bringing the blank from the edge to the center), it can easily be filled using a suitable Average Operator. Here a few assumptions can be made.
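To illustrate the adjacent-neighbour (even generation) idea in code, a minimal Python sketch is given below; it handles only internal blanks, the function name is illustrative, and it is not a full implementation of the three operators of Liu et al.:

```python
def fill_blank_with_adjacent_mean(sequence, blank=None):
    """Fill an internal blank with the mean of its two adjacent neighbours,
    one simple whitenization strategy in the spirit of the average operators
    discussed by Liu et al. (Chapter 4). Blanks at the first or last position
    are left untouched here; for those, the Stepwise Ratio Operator (or
    repositioning the column) is suggested in the text."""
    filled = list(sequence)
    for k in range(1, len(filled) - 1):
        if filled[k] is blank and filled[k - 1] is not blank and filled[k + 1] is not blank:
            filled[k] = 0.5 * (filled[k - 1] + filled[k + 1])
    return filled

print(fill_blank_with_adjacent_mean([3, None, 5, 4]))  # [3, 4.0, 5, 4]
```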

Assumption I

Incompleteness of data increases uncertainty for decision-makers, as the reliability of insight drawn from incomplete data is likely to be lower than that of insight drawn from complete data.

Assumption II

Incompleteness of data, or the blanks in a data sequence, can be filled through whitenization that can be done through the aggregation operators.

According to Saaty and Vargas [40], “to achieve a decision with which the group is satisfied, the group members must accept the judgments, and ultimately the priorities.” According to them, this requires the judgments to be homogeneous and the priorities of the individual group members to be compatible with the group priorities. Otherwise, the decision-makers may disagree with the resultant solution and choose to decide intuitively rather than trusting the MCDM approach [3]. Thus, the homogeneity of the decision-makers’ judgments and the compatibility of each decision-maker’s priorities with the priorities of the group are vital for satisfactory group decision-making and acceptable decision outcomes.

Assumption III

If a group decision follows from a judgment that meets the conditions of homogeneity and compatibility then it is more likely to be acceptable.

3 Definitions

3.1 Definition I: Decision Objects (Aj)

Decision objects are the entities that are to be evaluated. In MCDM problems, alternatives, choices, solutions, options, possibilities, objectives, experiments, etc., are the decision objects.

3.2 Definition II: Decision Measures (aij)

A decision measure (aij) is the numeric value (or a linguistic term before its transformation into a numeric value) that is used for the evaluation of the decision object(s). A decision measure aij expresses the relationship between a decision subject Ei and a decision object Aj. It is the score assigned by a decision subject to define the importance of a decision object. In MCDM problems, the values against which the decision objects are to be evaluated are the decision measures of the decision objects. For example, on a 5-point Likert scale, the values 1 to 5 are the decision measures used by the decision subject (respondent/expert) to evaluate a decision object.

3.3 Definition III: Decision Subjects (Ei)

A decision subject is the entity that produces the decision measure (characteristic/property/worth) of a decision object. If the data are primary, the decision subjects could be respondents or experts. If the data are secondary, then the sources of the data (journals, reports, etc.) are the decision subjects. The machines and tools used for recording data can also be decision subjects. If the source of a data point/reading in an observation/experiment is not available, but different sub-criteria under the cost/benefit-type criteria are available, then these sub-criteria can be considered the decision subjects. The reliability of decision subjects determines the reliability of decision measures, and vice versa.

3.4 Definition IV: Decision Criteria (C(k))

A decision criterion is the yardstick against which the decision objects are to be evaluated, e.g., attributes and objectives. Without this yardstick, the decision measures are hard to interpret. Usually, a criterion can be positive or negative (and sometimes moderate). Positive criteria are the benefit-type criteria and have the maximization property with respect to decision-makers, i.e., the higher, the better, e.g., productivity, quality, efficiency, performance, sales, return on investment, good governance, etc. Negative criteria are the cost-type criteria and have the minimization property with respect to decision-makers, i.e., the lower, the better, e.g., cost, expenses, price, inflation, liability, poverty, corruption, pain, infections, etc. If all the criteria on which the decision-makers are evaluating different alternatives are of the same type, then all these criteria are identical criteria.

3.5 Definition V: Criteria Weights (β)

Criteria weights are the weights assigned to the decision criteria by the decision subject(s). The sum of weights of all criteria should be equal to one.

3.6 Definition VI: Reciprocating Operator (RO)

Let Xi = (xi(1), xi(2), …, xi(n)) be the behavioral sequence of factor Xi. If the sequence operator D satisfies XiD = (xi(1)d, xi(2)d, …, xi(n)d), and

$$x_{i} \left( k \right)d = 1/x_{i} \left( k \right);\quad x_{i} \left( k \right) \ne 0;\quad k = 1,2, \ldots ,n,$$

then D is referred to as the Reciprocating Operator with XiD as its image, called the reciprocating image of Xi ([30], p. 72).

3.7 Definition VII: Arithmetic Mean Operator (AMO)

Let us say x1, x2, …., xn are n numerals and their aggregation (\(\bar{x}\)) is needed. If the Arithmetic Mean Operator (AMO) is deployed, then the aggregation would be given by

$$\bar{x} = {\text{AMO}}\left( {x_{1} , x_{2} , \ldots ,x_{n} } \right) = \frac{1}{n} \left( {x_{1} + x_{2} + \ldots + x_{n} } \right).$$

3.8 Definition VIII: Geometric Mean Operator (GMO)

Let us say x1, x2, …., xn are n numerals and their aggregation (\(\bar{x}\)) is needed. If the Geometric Mean Operator (GMO) is deployed, then the aggregation would be given by

$$\bar{x} = {\text{GMO}}\left( {x_{1} , x_{2} , \ldots ,x_{n} } \right) = \left( {x_{1} \times x_{2} \times \ldots \times x_{n} } \right)^{{\frac{1}{n}}} .$$

AMO and GMO are the two data aggregation operators associated with the GADA method, but they should be used appropriately to get the desired benefits from the model. Building upon the literature [1, 8, 12, 27, 40], it is suggested that when the group of respondents is treated as separate individuals (aggregation of individual priorities), AMO can be deployed, and when the group of respondents is treated as one decision-maker (aggregation of individual judgments), GMO should be deployed. For a given criterion, the execution of AMO and GMO is shown in Table 2, where Ø represents a blank in the data sheet.

Table 2 Execution of the two aggregation operators for data excavation
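As a rough illustration of how the two operators can treat a blank (Ø), the Python sketch below simply excludes the blank from the aggregation; the aggregate can then be used to whitenize (fill) that blank. The function names and scores are illustrative assumptions, not the exact entries of Table 2:

```python
from math import prod

def amo(values, blank=None):
    """Arithmetic Mean Operator over the available (non-blank) entries."""
    xs = [v for v in values if v is not blank]
    return sum(xs) / len(xs)

def gmo(values, blank=None):
    """Geometric Mean Operator over the available (non-blank) entries."""
    xs = [v for v in values if v is not blank]
    return prod(xs) ** (1 / len(xs))

scores = [4, None, 5, 3]         # hypothetical Likert scores with one blank
print(amo(scores), gmo(scores))  # 4.0  3.914...
```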

3.9 Definition IX: GADA Indexes and GADA Weights

The GADA Indexes, \(\grave{\text{r}}\) and ȑ, and the GADA Weights, \(\grave{\text{R}}\) and Ȑ, are given by

$${\grave{\text{R}}_j} = \frac{{{\grave{\text{r}}_j}}}{{\mathop \sum \nolimits_{j = 1}^S {\grave{\text{r}}_j}}};\quad {\grave{\text{r}}_j} = {\left( {\mathop \prod \limits_{i = 1}^N r_{ij}^{\alpha_i}} \right)^{1/\mathop \sum \limits_{i = 1}^N {\alpha_i}}},$$

where \(\alpha_{i}\) is the alpha of Javed [16], which is obtained through the aggregation of Absolute Grey Relational Grades (AGRGs) in the ith row (or column) of the AGRG Pairwise Comparison Matrix. \(\delta \left( k \right)\) is the Simulated Weight of Criteria, which is given by the geometric mean of \(\theta_{i} \left( k \right)\), where \(\theta_{i} \left( k \right) = \beta_{i} \left( k \right) \times \sqrt {\alpha_{i}\left( k \right) }\). \(\beta_{i} \left( k \right)\) is the Perceived Weight of Criteria, i.e., the weight of the kth criterion as perceived by the ith expert. For all values of i, \(\mathop \sum \limits_{k = 1}^{M} \beta \left( k \right) = 1\); \(\mathop \sum \limits_{k = 1}^{M} \theta \left( k \right) \le 1\); \(\mathop \sum \limits_{k = 1}^{M} \delta \left( k \right) \le 1\); \(\mathop \sum \limits_{k = 1}^{M} \delta \left( k \right) \le \mathop \sum \limits_{k = 1}^{M} \beta \left( k \right)\). The normalized output of the GADA method produces a set of relative weights (\({\grave{\text{R}}_j}\), Ȑj), where ∑ \({\grave{\text{R}}_j}\) = 1 and ∑Ȑj = 1. These two relative weights are represented by \({\grave{\text{R}}}\) for inter-criteria (local) ranking and by Ȑ for global/overall ranking, and are referred to as the GADA Weights. The GADA Weights reveal the relative weights of the decision objects (alternatives).

4 Grey Absolute Decision Analysis

4.1 The Construct

The Grey Absolute Decision Analysis (GADA) method is a multiple attribute (discrete multi-criteria) group decision-making model that prioritizes the available alternatives while assigning them relative weights, called the GADA Weights. The GADA method defines an MCDM problem through four parameters: Decision Objects (Aj), Decision Subjects (Ei), Decision Criteria (C(k)) and Decision Measures (aij). These four parameters define the system in which the GADA method operates (see Definitions I–IV). The relationship among the four parameters is shown in Table 3 for the sake of convenient interpretation.

Table 3 Relationship between four decision parameters

4.2 Algorithm

If M represents the number of criteria (C(k); k = 1,2,…,M), N the number of subjects/experts (Ei; i = 1,2,…,N) and S the number of objects/alternatives (Aj; j = 1,2,…,S), then the GADA method involves the following steps, which are summarized in Fig. 1. In the current study, criteria imply attributes.

Fig. 1
figure 1

The GADA method

  • Step 1: Data reporting and preparation

Record the responses in the form of Response Matrix of Decision Measures [aij] for “higher the better” criteria C(k) against each alternative Aj.

For “lower the better” criteria, apply the Reciprocating Operator (Definition VI) on the raw data; the decision matrix thus evolves into the revised decision matrix [rij].

Here it should be noted that in the case of “higher the better” criteria, the actual responses are used for data analysis, thus [rij] = [aij]; however, in the case of “lower the better” criteria, the actual responses after going through the Reciprocating Operator (RO) are used for data analysis, thus [rij] = RO([aij]). A key advantage of the GADA method is that there is no step to normalize the data, i.e., if one criterion’s values are in dollars and another criterion’s values are in gallons, the GADA method stays unconcerned and is thus effectively dimensionless.
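A minimal sketch of this data-preparation step is given below, assuming raw Likert-type scores; the variable names are illustrative:

```python
def reciprocate(sequence):
    """Reciprocating Operator (Definition VI): x -> 1/x for non-zero x,
    applied to 'lower the better' (cost-type) criteria."""
    return [1.0 / x for x in sequence]

# Hypothetical raw scores of one expert for four alternatives:
quality = [4, 5, 3, 2]        # benefit-type criterion: r_ij = a_ij
cost    = [2, 1, 4, 5]        # cost-type criterion:    r_ij = RO(a_ij)
print(reciprocate(cost))      # [0.5, 1.0, 0.25, 0.2]
```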

  • Step 2: Determining Absolute Grey Relational Pairwise Comparison Matrix and the alpha values

Calculate the Absolute Grey Relational Grade (ɛ) between Ei and all other Es; i = 1, 2,…,N. Thus, a pairwise comparison matrix is obtained, named the Absolute Grey Relational Pairwise Comparison Matrix [ɛ]. The average of each row yields a measure (α), first introduced by Javed [16]. If the value of ɛ approaches 1, it implies strong association and greater consistency; if it approaches 0.5, it implies weak association. The ɛ value of Ei with itself is always one. When the value of ɛ is too near to 0.5, the corresponding data should either be used cautiously or recollected. For one criterion,

where \(\alpha_{i} = \frac{1}{N}\left( {\varepsilon_{i1} + \varepsilon_{i2} + \cdots + \varepsilon_{iN} } \right)\). For M criteria, C(k); k = 1,2,…,M,
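The construction of the matrix [ɛ] and of the alpha values for one criterion can be sketched in Python as follows (the data are hypothetical and the function names illustrative):

```python
import numpy as np

def agrg(xi, xj):
    """Absolute Grey Relational Grade between two equal-length sequences
    (see the sketch in Sect. 2.2)."""
    x0i, x0j = np.asarray(xi, float) - xi[0], np.asarray(xj, float) - xj[0]
    si = abs(x0i[1:-1].sum() + 0.5 * x0i[-1])
    sj = abs(x0j[1:-1].sum() + 0.5 * x0j[-1])
    sij = abs((x0i - x0j)[1:-1].sum() + 0.5 * (x0i[-1] - x0j[-1]))
    return (1 + si + sj) / (1 + si + sj + sij)

def alpha_values(responses):
    """Rows of `responses` hold each expert's scores over the alternatives
    for ONE criterion. Returns the AGRG pairwise comparison matrix and the
    alpha of each expert (row average)."""
    n = len(responses)
    eps = np.array([[agrg(responses[i], responses[j]) for j in range(n)]
                    for i in range(n)])
    return eps, eps.mean(axis=1)

# Hypothetical scores of three experts for four alternatives (one criterion)
R = [[4, 5, 3, 2],
     [4, 4, 3, 2],
     [5, 5, 2, 1]]
eps, alpha = alpha_values(R)
print(np.round(eps, 3))
print(np.round(alpha, 3))  # alpha nearer to 1 => judgments more consistent with the group
```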

  • Step 3: Calculate the Perceived Weights of the Criteria

Define the Perceived Weights of the Criteria (β) assigned by the subjects/experts; k = 1,2,…,M; M is the number of criteria; ∑β(k) = 1, such as

where \(\beta \left( k \right) = {\text{geometric}}\;{\text{mean}}\left( {\beta_{1} \left( k \right),\beta_{2} \left( k \right), \ldots ,\beta_{N} \left( k \right)} \right).\)

  • Step 4: Calculate the Simulated Weights of the Criteria

Multiply \(\beta_{i} \left( k \right)\) and \(\sqrt {\alpha_{i} \left( k \right)}\) to get θi(k), which can be used to estimate the “Simulated Weights of Criteria” δ(k). This approach of determining θi(k) was inspired by the work of Azadeh et al. [4].

where \(\delta \left( k \right) = {\text{geometric }}\;{\text{mean}}\left( {\theta_{1} \left( k \right),\theta_{2} \left( k \right), \ldots ,\theta_{N} \left( k \right)} \right)\); \(\theta_{i} \left( k \right) = \beta_{i} \left( k \right) \times \sqrt {\alpha_{i} \left( k \right)}\). If one is interested in precise relative simulated weights of the criteria, then normalization of \(\delta \left( k \right)\) can be done.
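A minimal sketch of Steps 3–4, assuming the perceived weights and alpha values are already arranged as N × M arrays (the numbers are hypothetical):

```python
import numpy as np

def simulated_weights(beta, alpha):
    """Simulated Weights of Criteria: delta(k) is the geometric mean over the
    experts of theta_i(k) = beta_i(k) * sqrt(alpha_i(k)).
    beta, alpha: arrays of shape (N experts, M criteria)."""
    theta = np.asarray(beta) * np.sqrt(np.asarray(alpha))
    return np.exp(np.log(theta).mean(axis=0))  # column-wise geometric mean

# Hypothetical perceived weights (each row sums to 1) and alpha values per criterion
beta = np.array([[0.2, 0.8],
                 [0.3, 0.7],
                 [0.4, 0.6]])
alpha = np.array([[0.95, 0.90],
                  [0.92, 0.88],
                  [0.97, 0.93]])
delta = simulated_weights(beta, alpha)
print(delta, delta.sum())  # sum(delta) <= 1, as noted in the text
```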

  • Step 5: Aggregation of the individual weights of the criteria against each alternative to get the overall weight of each criterion

In order to obtain the weight of each criterion, the simplest procedure in MCDM research is the transformation of the individual weights of the criteria into group weights, i.e., the aggregation of the individual weights of the criteria with the aid of the geometric mean [42]. In the GADA method this is executed as,

where for each criterion C(k) and, for j = 1,2,3,…,S,

$${{\grave{\text{r}}}_j} = {\left( {\mathop \prod \limits_{i = 1}^N r_{ij}^{\alpha_i}} \right)^{1/\mathop \sum \limits_{i = 1}^N {\alpha_i}}}.$$

The normalization of aforementioned weights would yield the relative weights (\({{\grave{\text{R}}}_j}\)) of the alternatives within each criterion such as

$${{\grave{\text{R}}}_j} = \frac{{{{\grave{\text{r}}}_j}}}{{\mathop \sum \nolimits_{j = 1}^S {{\grave{\text{r}}}_j}}}.$$

These weights can be arranged in a definite order to get inter-criteria (local) ranking of alternatives.
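Reading the aggregation in this step as an α-weighted geometric mean of the experts’ (possibly reciprocated) scores for each alternative, in line with the definitions above, a sketch of the local GADA weights for one criterion could look as follows (an illustrative interpretation, not a verified reference implementation):

```python
import numpy as np

def local_gada_weights(r, alpha):
    """For one criterion: r[i][j] is expert i's (possibly reciprocated) score
    for alternative j, alpha[i] is that expert's alpha value. Returns the
    GADA indexes and the normalized local GADA weights (summing to 1)."""
    r, alpha = np.asarray(r, float), np.asarray(alpha, float)
    log_r = (alpha[:, None] * np.log(r)).sum(axis=0) / alpha.sum()
    idx = np.exp(log_r)          # alpha-weighted geometric mean per alternative
    return idx, idx / idx.sum()  # GADA indexes and local GADA weights

# Hypothetical data: 3 experts x 4 alternatives, with the experts' alphas
r = [[4, 5, 3, 2],
     [4, 4, 3, 2],
     [5, 5, 2, 1]]
alpha = [0.95, 0.93, 0.90]
idx, weights = local_gada_weights(r, alpha)
print(np.round(weights, 3))  # larger weight = locally better alternative
```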

  • Step 6: Aggregation of the weights of each individual criteria to get the relative weight of each alternative

To get the overall (global) ranking of alternatives, the simulated weights of the criteria will be aggregated such as,

where

with the normalized vector,

Later, Ȑj can be used to rank the S alternatives.

The properties associated with the GADA method are:

  i. When \(\alpha \to 1\), then \(\delta \left( k \right) \to \beta \left( k \right)\).

  ii. The GADA method can provide relative weights (and rankings) of alternatives (\({{\grave{\text{R}}}_j}\), Ȑj) and criteria (δ(k)).

  iii. Lower computational cost because of simplicity in calculation steps.

  iv. No need for normalization of criteria scales, as each criterion can have different units and ranges, i.e., dollars can be compared with hours, etc.

  v. Both complete and incomplete data sheets can be handled.

  vi. Different numbers of respondents can evaluate different criteria.

  vii. No separate consistency test is needed, as the alpha (α) allows automatic monitoring of consistency.

  viii. It can be used to determine the simulated/optimum weights of the criteria (δ(k)). However, ∑δ(k) is not necessarily 1 because of the margin of error. These optimum weights can be used in other MCDM methods to enhance their performance.

  ix. The method is a mixed-method discrete MCDM methodology that can handle both qualitative and quantitative information to solve a problem. If data are qualitative, the method suggests their transformation into quantitative form, e.g., replacing “highly agree” with 5 and “least agree” with 1, etc.

5 Applications

5.1 The 1st Case: Pilot Testing

This is a case of group decision-making and was designed as one of the pilot tests for evaluating the performance of the GADA method. Let us assume that we have 4 alternatives/possibilities (A: four products of a factory) and 5 independent decision-makers (E: experts), who evaluate them against 2 unequally weighted criteria (C(1): quality of the products; C(2): cost of production). Let us say the five experts gave the following weights to the set of two criteria: (0.2, 0.8), (0.3, 0.7), (0.4, 0.6), (0.2, 0.8) and (0.2, 0.8). The scores in Table 4 were recorded on a 5-point Likert scale, where 1 represents the lowest score and 5 the highest. For quality, 5 implies the highest quality, and for cost, 1 implies the least expensive. Thus, with respect to the quality criterion 1 is the worst score, and with respect to the cost criterion 5 is the worst. Tables 5, 6, 7, and 8 present the execution of the GADA method.

Table 4 The response sheet
Table 5 The revised response sheet and aggregations
Table 6 Estimating \(\alpha_{i}\) and \(\sqrt {\alpha_{i} \left( k \right)}\)
Table 7 Determining the simulated weights of criteria
Table 8 Local and global rankings of alternatives
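As a small illustration of Step 3 on this case, the group perceived weights of the two criteria can be obtained as the geometric mean of the five experts’ weight vectors; the sketch below shows only this aggregation step, and its output illustrates the computation rather than reproducing the table entries:

```python
import numpy as np

# Perceived criteria weights (quality, cost) given by the five experts
beta = np.array([[0.2, 0.8],
                 [0.3, 0.7],
                 [0.4, 0.6],
                 [0.2, 0.8],
                 [0.2, 0.8]])

# Group perceived weight of each criterion: geometric mean across experts
beta_group = np.exp(np.log(beta).mean(axis=0))
print(np.round(beta_group, 3))  # approx. [0.249, 0.735]
```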

5.1.1 Comparison with AHP and SAW Methods

The results obtained through AHP and SAW methods are shown in Table 9 along with their respective differences when compared with the GADA Weights.

Table 9 AHP and SAW methods based rankings

Figure 2 illustrates a comparison among obtained weights from AHP, SAW, and GADA methods. As can be seen from the bar graph, the results from the GADA method are reasonable.

Fig. 2
figure 2

Comparison among obtained weights from AHP, SAW, and GADA

It should be noted that the GADA method is not just an MCDM technique, but also a method to obtain optimum relative weights of the criteria given by the “simulated weights of the criteria” δ(k), which can be normalized to get the precise relative weights. Thus, it can be used to improve the ranking of other methods as well. For this purpose, the improved AHP ranking, when δ(1) = 0.213908 and δ(2) = 0.62455 are used instead of AHP’s own weights, is shown in Table 10.

Table 10 GADA’s simulated weights-driven AHP ranking
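For reference, normalizing these two simulated weights so that they sum to one gives relative criteria weights of approximately 0.255 and 0.745:

$$\frac{0.213908}{0.213908 + 0.62455} \approx 0.255,\qquad \frac{0.62455}{0.213908 + 0.62455} \approx 0.745.$$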

Here, at least four insights can be drawn from the analyses. Firstly, different methods are likely to produce different rankings. Secondly, variation in the weights influences the rankings. Thirdly, the GADA method’s Simulated Weights of Criteria are comparable to the weights obtained through other methods, although with varying degrees of closeness. Lastly, if one uses the Simulated Weights of Criteria, the rankings obtained through other MCDM methods are likely to become more comparable to the ranking obtained through the GADA method.

5.2 The 2nd Case: A Case of Hazards Planning and Emergency Management

This is a case of group decision-making using incomplete data. The incomplete data for this case, shown in Table 11, have been taken from Levy and Taji [23], where C(1) is physiological discomfort, C(2) is emergency response cost, and C(3) is the safety criterion.

Table 11 The incomplete data sheet

Through the application of Arithmetic Mean Operator, the missing information was generated, as shown in Table 12.

Table 12 The data sheet after handling blanks through AMO

Since C(1) and C(2) are negative criteria, they need to go through the Reciprocating Operator, as per the algorithm of the GADA method. The outcome is shown in Table 13.

Table 13 The revised data sheet

Absolute Grey Relational Pairwise Comparison Matrices for the three criteria, along with their alpha values, are shown in Tables 14, 15 and 16. The rest of the procedure is shown in Tables 17, 18, and 19. The weights, \(\beta_{i} \left( k \right)\), have been taken from Levy and Taji ([23], p. 915).

Table 14 Estimating \(\alpha_{i}\) and \(\sqrt {\alpha_{i} \left( k \right)}\) for C(1)
Table 15 Estimating \(\alpha_{i}\) and \(\sqrt {\alpha_{i} \left( k \right)}\) for C(2)
Table 16 Estimating \(\alpha_{i}\) and \(\sqrt {\alpha_{i} \left( k \right)}\) for C(3)
Table 17 Determining the simulated weights of criteria
Table 18 Inter-criteria evaluation
Table 19 Global ranking

Table 18 merely presents \(\grave{\text{r}}\), but if one desires weighted inter-criteria ranking, \(\grave{\text{R}}\) can also be calculated.

Figure 3 also sheds light on the ranks obtained from the GADA method and from Levy and Taji’s Group Analytic Network Process (GANP) model. As can be seen from the line graph, the results are quite different; but which of them is more reasonable? The results are analyzed in the following paragraph.

Fig. 3
figure 3

Comparison between GADA and GANP methods

Here it is worth mentioning that in Levy and Taji ([23]: Table 1), who used GANP for multi-criteria decision-making, for decision-maker DM2 (the only complete series), in C(3) (a positive criterion), A2, A3 and A6 are the worst and A5 is the best. In C(2) (a negative criterion), A1 and A6 are the worst while A5 is the best. In C(1) (a negative criterion), A6 is the worst while A4 and A5 are the best. Also, in Levy and Taji ([23]: Table 2), in all three criteria A6 is either the worst or second-worst; how, then, can it be third best in the overall ranking? Therefore, their overall results seem irrational. Also, in their Table 1, if we compare A4 and A5 with respect to DM2 and DM4 (as the other DMs did not assign any score to these two alternatives), then with respect to DM2, in two cases A5 is superior to A4 and in one case both are equally important, and with respect to DM4, in two cases A5 is superior to A4 and in one case A4 is superior to A5. Thus, conclusively, the likelihood of A5’s superiority over A4 is greater than vice versa, according to the judgments of the decision-makers. However, Levy and Taji’s [23] analysis surprisingly revealed A4 to be the best alternative, which is very unlikely. Thus, the ranking obtained through the GADA method seems rational and convincing.

Variation in weights (and rankings) obtained through different MCDM methods is not an astounding fact. A review of different studies involving multiple MCDM methods can show that for one problem multiple methods can produce different, though comparable, weights. For instance, Kou et al. [21] used five MCDM methods and reported a slight variation between the weights for the magic gamma telescope dataset obtained through each method. They also reported variation in rankings obtained through five methods.

6 Conclusion and Recommendations

In multi-criteria decision-making (MCDM), the criteria can be both conflicting and incommensurable. At the same time, the data collected from the respondents can be either complete or incomplete. Furthermore, decision outcomes may or may not appear rational to the managers/decision-makers. These are just a few of the factors that can complicate the attainment of an optimal solution to MCDM problems, and thus can increase the uncertainty in the decision-making environment. MCDM in general, and MCDM under uncertainty in particular, can be argued to be a crucial skill that distinguishes humans from other species. It is a core function of all individuals and of the groups (or organizations) made up of these individuals. However, MCDM under uncertain and complex situations is more complicated than under simple and less challenging situations. The GADA method is a modest attempt to handle discrete MCDM (MADM) problems in such situations while avoiding the loss of information, minimizing the possibility of selecting sub-optimal solutions, and maximizing the possibility of selecting an optimum solution that is equally rational for organizational decision-makers and managers. During our past experience with different organizations and their managers, it was felt that despite the abundance of MCDM methods in the literature, organizations and managers usually do not resort to these methods for decision-making under uncertainty; rather, they prefer intuitive decision-making. Thus, frankly speaking, the primary purpose was not merely the publication of another MCDM model, but precisely the development of a model that can be considered feasible, smart, convenient and rational by managers and their organizations, i.e., to produce a ranking through group judgment that is satisfactory for the group involved in decision-making.

In essence, the GADA method is a convenient method of group multi-criteria decision-making that enables decision-making with both conflicting and incommensurable (having different units) criteria. Further, the key benefit of the proposed method is its ability to handle both complete and incomplete data sets without any need to standardize the incommensurable criteria. Standardization of incommensurable criteria is a primary concern in almost every MCDM problem. Thus, considering the number of problems associated with existing multi-criteria decision-making methodologies that have been highlighted in this paper, the proposed method is a humble step ahead. Other benefits include its adaptability to different scales of measurement under uncertainty, which may arise from a small or incomplete sample, fuzziness or greyness in judgments, improper execution of complex algorithms/calculations, and inconsistencies among the opinions of different experts that may also lead to outliers.

However, no development is free of shortcomings. What prevents \(\delta \left( k \right)\) from approaching \(\beta \left( k \right)\) is an interesting problem for future research. Further strengths and limitations of the proposed method will be identified over time as it is applied in various multiple criteria decision-making environments.