1 Introduction

An information system (IS) is a key investment to support and enhance business operations. Despite the benefits of IS adoption, such as improvements in profitability and organizational performance, the failure rates of IS implementation remain generally high (Dwivedi et al. 2014). The evaluation and selection of a suitable IS that fits business requirements can reduce the risk of failure, yet this remains a challenge for IS adoption success.

One of the more popular decision-making tools, the Analytic Hierarchy Process (AHP), proposed by Saaty (1977, 1980, 1990), has been applied in many domains. The tool and its various hybrid forms have been used in many studies of IS. For example, Muralidhar et al. (1990) used the AHP for IS project selection, Yang and Huang (2000) applied the AHP to an IS sourcing decision, and Wang and Yang (2007) combined the PROMETHEE with the AHP for IS sourcing. Several studies (Wei et al. 2005; Ahn and Choi 2007; Yazgan et al. 2009; Chang et al. 2010; Sarkis and Sundarraj 2003) presented applications of the AHP to Enterprise Resource Planning (ERP) system evaluation and selection. The AHP has also been applied to digital video recorder systems (Chang et al. 2007), knowledge management tools (Ngai and Chan 2005), and AHP-software applications (Ossadnik and Lange 1999), and the fuzzy AHP has been applied to software management (Yuen and Lau 2011). More recently, Razavi et al. (2010) proposed an AHP-based approach to analyze the quality attributes of enterprise architecture, while Ecer (2020) proposed an interval Type-2 fuzzy AHP for the supplier selection of a home appliance manufacturer.

While the use of AHP applications is increasing, many authors who apply the AHP appear to be unaware of the studies arguing against it. Belton and Gear (1983) presented two hypothetical examples demonstrating how the AHP produces reversed ranks when repeated copies of an existing alternative are added. Harker and Vargas (1987) argued that the deletion of copies and the addition of criteria to differentiate alternatives were essential in both the AHP and some utility models, and thus that Belton and Gear's (1983) counterexample was vacuous. Dyer (1990) argued that the results produced by the AHP are arbitrary when the principle of hierarchic composition is assumed. In another study, Forman (1993) listed and responded to 18 criticisms of the AHP.

Smith and von Winterfeldt (2004), who were co-editors of the Decision Analysis department of Management Science in 2004 (Gass 2005), indicated that the axioms of the AHP were met with resistance from decision analysts because they conflict with the axioms of expected utility theory. Gass (2005) briefly reviewed the MAUT-versus-AHP debates, and urged the OR/MS community not to reject the use of the AHP on the basis of Smith and von Winterfeldt (2004). Barzilai (1998) argued that the AHP generates non-equivalent value functions and rankings from equivalent decompositions, but Whitaker (2007) argued that Barzilai (1998) drew incorrect conclusions, proved false theorems, and drew misleading attention to examples in which he did not show any fault. As the debate over the appropriateness of the AHP is still ongoing, alternatives to the AHP should be considered and encouraged.

Koczkodaj et al. (2016) argued that the AHP should not be equated with pairwise comparisons, pointing out that “the first use of the method of Pairwise Comparisons (PC method) is often attributed to Ramon Llull, the 13th-century mystic and philosopher”. Yuen (2009, 2012, 2014) proposed the use of a paired interval scale for conducting pairwise comparisons. Several examples and comparisons presented in (Yuen 2009, 2012, 2014) indicated that the AHP rating scale fails to reflect a rater's cognition of the difference between two objects, especially when the difference is not significant enough to be measured in multiples under the paired ratio scale schema, in which 1, 2, …, 9 are the default settings for most applications of the AHP. For example, under the default AHP scale, the statement that Jason is slightly heavier than Peter (e.g., Peter is 60 kg and Jason is 61 kg) would be interpreted as the statement that Jason is two times as heavy as Peter (or Jason is one time heavier than Peter) (Yuen 2014). Yuen (2014) presented more examples, as seen below:

“If Peter is 1.4 m, Jason will be 2.8 m if Jason is slightly higher than Peter (Jason is 1.5 m in fact). Providing that Peter is 15 years old, Jason will be 30 years old if Jason is slightly older than Peter (Jason is 16 in fact). Providing that Peter's IQ is 120, Jason's IQ will be 240 if Jason is slightly more intelligent than Peter (Jason's IQ is 123 in fact).”

This research argues that the AHP rating scale potentially leads to misapplications, and that a paired interval scale should be used for assessing pairwise comparisons when selecting among competitive alternatives, which is where decision making is most challenging. The Primitive Cognitive Network Process (PCNP), an ideal alternative to the AHP, has received little attention from IS researchers. As only a few IS research papers using the AHP present their complete source data for reproducibility purposes, this paper identified three reproducible studies that applied the AHP to IS problems in reputed journals (Muralidhar et al. 1990; Yang and Huang 2000; Wang and Yang 2007), and revisited them using the proposed PCNP.

The rest of this study is organized as follows. Section 2 details the primitive cognitive network process for general IS decision problems. Section 3 presents an application to IS project selection using the baseline version of the PCNP with calculation details. Section 4 demonstrates the use of the PCNP with absolute measurement for an IS outsourcing decision problem. Section 5 presents a hybrid approach combining the CNP and PROMETHEE II for an IS outsourcing decision problem. Section 6 discusses the comparison results against the AHP. Finally, Sect. 7 concludes the study, discusses its implications and proposes future research directions.

2 PCNP for IS decision problem

For a complex and significant decision, the best practice is to structure the decision-making process in a scientific and systematic way. The decision-making process normally includes the following major steps:

  1. Structuring an IS decision problem;

  2. Obtaining data by comparisons and assessments according to an organized decision structure;

  3. Calculating the assessment data to form a weighted decision table;

  4. Producing the decision results by aggregating and ranking the weighted decision criteria.

Three IS decision problems incorporating the PCNP are presented in the following subsections. The summary of notations is presented in Appendix 3.

2.1 Structuring an IS decision problem

Various criteria can be applied when evaluating an IS project, and the selected criteria can be organized in a structured presentation format known as the Structural Assessment Network (SAN). Several examples depicting SANs for various IS planning problems are shown in Figs. 1, 3 and 4. The decision goal is to select the best plan for an IS project, which may be an IS adoption project, an IS outsourcing project, or an IS development project. For an IS project, a set of candidate solutions for the decision problem \(T = \left( {T_{1} , \ldots ,T_{i} , \ldots ,T_{m} } \right)\) is evaluated with respect to the measurable criteria, i.e., the external leaves of the criteria structure. Subject to the complexity of the criteria, the criteria structure can be organized as a hierarchical tree. A set of structured decision criteria, \(C = \left( {C_{1} , \ldots ,C_{i} , \ldots ,C_{n} } \right)\), is defined to measure the decision objective of selecting an information system. A criterion \(C_{i}\) is measured by aggregating a set of its sub-level criteria \(\left( {C_{i,1} ,...,C_{i,j} ,...,C_{{i,N_{i} }} } \right)\), and a sub-level criterion \(C_{i,j}\) has a set of its sub-level criteria \(\left( {C_{i,j,1} ,...,C_{i,j,k} ,...,C_{{i,j,N_{i,j} }} } \right)\).

Fig. 1

Structural Assessment Network for Information System Project Selection

2.2 Comparisons and assessments

The weights of the criteria and the utility values of the alternatives with respect to the SAN are assessed by a questionnaire survey with cognitive pairwise comparisons. A typical question in the survey is illustrated in Fig. 2. Each pair of criteria is systematically compared and evaluated by giving a rating score. An example of a paired interval scale schema used in the questionnaire is shown in Table 1. The normal utility, \(\kappa\), represents the mean of the utility values of the comparison objects, such that \(\kappa > 0\); by default, \(\kappa\) is set to the highest value in the paired interval scale, especially when the mean of the individual utility values of the comparison objects is unknown. In other words, all comparison objects are initially assumed to be competitive and to have a high score. The definition of the rating scale can be disassociated from \(\kappa\), especially when the mean and the highest difference among the candidate objects are provided. Adjusting the \(\kappa\) value can lead to zero, negative or positive priority values; to prevent negative or zero priority values, \(\kappa\) should not be too small.

Fig. 2

Evaluation for criteria importance for IS projects selection

Table 1 Paired interval scale schema for pairwise opposite comparison

Once the comparison scores for a question are filled in by a rater, a Pairwise Opposite Matrix (POM) for the PC question, denoted by \(B = \left[ {b_{ij} } \right]\), is derived from those scores, which are chosen from the rating schema shown in Table 1. The rater only fills the entries of the upper triangular matrix (\(B^{ + }\)) with rating scores chosen from the paired interval scale. The lower triangular matrix (\(B^{ - }\)) is the opposite of the upper triangular matrix, i.e., \(b_{ij} = - b_{ji}\). The POM is formed as \(B = B^{ + } + B^{ - }\).
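As a sketch, the assembly of a POM from a rater's upper-triangular entries can be written as follows (the scores here are hypothetical, chosen only to illustrate \(b_{ij} = - b_{ji}\)):

```python
import numpy as np

def build_pom(upper, n):
    """Assemble a POM from upper-triangular ratings {(i, j): b_ij}, i < j."""
    B = np.zeros((n, n))
    for (i, j), score in upper.items():
        B[i, j] = score    # B+: the rater's entry
        B[j, i] = -score   # B-: the opposite of the upper triangle
    return B               # B = B+ + B-

# Hypothetical ratings for three comparison objects on a paired interval scale
B = build_pom({(0, 1): 3, (0, 2): -2, (1, 2): 1}, n=3)
```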

If the data of the Pairwise Reciprocal Matrices (PRMs) from the AHP are given, a PRM can be converted to a POM by using the conversion schema presented in Table 2. For example, in Case 1, the PRMs presented in Table 14 are converted to the POMs presented in Tables 3 and 4; in Case 2, the PRMs presented in Table 16 are converted to the POMs presented in Table 7; in Case 3, the PRM presented in Table 19 is converted to the POM presented in Table 10. The conversion mapping between the AHP and the PCNP is summarized in Table 13. By using this conversion, AHP applications using the default scale can be revisited with the PCNP method.
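Assuming the default 1–9 scale, a plausible reading of the Table 2 conversion is that a ratio score \(a\) maps to the interval score \(a - 1\), and a reciprocal \(1/a\) to \(-(a - 1)\); this matches the example in Sect. 3.2, where the AHP score 1/9 between C1 and C2 becomes the POM score \(-8\). A hedged sketch:

```python
def ratio_to_interval(a):
    """Convert a default-scale AHP ratio score to a paired interval score.

    Assumed mapping (consistent with the 1/9 -> -8 example in Sect. 3.2):
    a in {1, ..., 9} maps to a - 1; a reciprocal 1/k maps to -(k - 1).
    """
    if a >= 1:
        return round(a) - 1
    return -(round(1 / a) - 1)

print(ratio_to_interval(1 / 9))  # -8, as in the C1-C2 example
```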

Table 2 Conversion between paired ratio scale and paired interval scale
Table 3 POM and computational steps by RAU for priority vector of B0
Table 4 Pairwise opposite matrices for comparing IS projects with respect to four criteria

The Accordance Index is applied to evaluate the validity of a POM, indicating the appropriateness of the survey data. A complete POM needs \(n\left( {n - 1} \right)/2\) ratings. B is validated by the Accordance Index (AI) of the form below:

$$AI = \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {d_{ij} } } ,\;{\text{where}}\;d_{ij} = \sqrt {{\text{Mean}}\left( {\left( {\frac{1}{\kappa }\left( {B_{i} + B_{j}^{T} - b_{ij} } \right)} \right)^{2} } \right)}$$
(1)

where \(AI \ge 0\), \(\kappa\) is the normal utility, \(B_{i}\) is the \(i\)-th row vector of B, and \(B_{j}^{T}\) is the \(j\)-th row vector of \(B^{T}\), i.e., the \(j\)-th column vector of B. If \(AI = 0\), B is perfectly accordant, i.e., \(\tilde{B} \equiv B\); if \(0 < AI \le 0.1\), then B is satisfactory; if \(AI > 0.1\), then B is unsatisfactory. The matrix form of Eq. 1 is equivalent to the element form below.

$$AI = \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {\sqrt {\frac{1}{n}\sum\limits_{p = 1}^{n} {\left( {\frac{{\left( {b_{ip} + b_{pj} - b_{ij} } \right)}}{\kappa }} \right)^{2} } } } }$$
(2)
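Eq. 2 can be transcribed directly; as a sanity check, a POM built from exact utility differences (\(b_{ij} = v_i - v_j\)) is perfectly accordant, giving \(AI = 0\). A minimal sketch:

```python
import numpy as np

def accordance_index(B, kappa):
    """Accordance Index of a POM (element form, Eq. 2)."""
    B = np.asarray(B, dtype=float)
    n = B.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            # residuals (b_ip + b_pj - b_ij) / kappa over all p
            d = (B[i, :] + B[:, j] - B[i, j]) / kappa
            total += np.sqrt(np.mean(d ** 2))
    return total / n ** 2

# A POM derived exactly from utilities v is perfectly accordant (AI = 0)
v = np.array([10.0, 8.0, 7.0])
B = v[:, None] - v[None, :]          # b_ij = v_i - v_j
ai = accordance_index(B, kappa=8.0)  # 0.0
```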

2.3 Form a weighted decision table

A weighted decision table, presented in Form (3), comprises an m-by-n matrix of score values, i.e., \(\left[ {r_{ij} } \right]\), and a set of n weights, i.e., \(\left\{ {\omega_{j} } \right\}\).

$$\begin{array}{*{20}c} {\begin{array}{*{20}c} {} \\ {} \\ \end{array} } \\ {\begin{array}{*{20}c} {T_{1} } \\ \vdots \\ {T_{i} } \\ \vdots \\ {T_{m} } \\ \end{array} } \\ \end{array} \begin{array}{*{20}c} {\begin{array}{*{20}c} {\left( {\omega_{1} } \right.} & \ldots & {\omega_{j} } & \ldots & {\left. {\omega_{n} } \right)} \\ {c_{1} } & \cdots & {c_{j} } & \cdots & {c_{n} } \\ \end{array} } \\ {\left( {\begin{array}{*{20}c} {} & {} & {} & {} & {} \\ {} & {} & {} & {} & {} \\ {} & {} & {r_{ij} } & {} & {} \\ {} & {} & {} & {} & {} \\ {} & {} & {} & {} & {} \\ \end{array} } \right)} \\ \end{array}$$
(3)

\(c_{j} \in C\) is a criterion to measure an IS project. \(T_{i} \in T\) is an IS proposal alternative. \(r_{ij}\) is the score value for IS proposal alternative i with respect to IS measurement criterion j; \(r_{ij}\) can be derived from relative measurement (Case 1 in Sect. 3), absolute measurement (Case 2 in Sect. 4) or direct input (Case 3 in Sect. 5). \(\omega_{j} \in \omega\) is the weight of the corresponding criterion \(c_{j}\). In this paper, the weights for the criteria of IS proposal selection and the relative strength values of the alternatives are derived from pairwise opposite matrices (POMs). A pairwise opposite matrix \(B\) is reduced to a priority vector V by a cognitive prioritization function, as described in the following paragraphs.

Let an ideal utility set be \(V = \left\{ {v_{1} , \ldots ,v_{n} } \right\}\). A comparison rating score \(b_{ij}\) (approximately) measures the difference between two objects, i.e., \(b_{ij} \cong v_{i} - v_{j}\). The Primitive Least Squares (PLS) optimization function is a cognitive prioritization function that finds the V approximating B with the minimal total squared error.

$${\text{PLS}}\left( {B,\kappa } \right){ = }{\text{Min}} \, \overline{\Delta }{ = }\sum\nolimits_{i = 1}^{n} {\sum\nolimits_{j = i + 1}^{n} {\left( {b_{ij} - v_{i} + v_{j} } \right)^{2} } }$$
(4)
$${\text{s.t.}}\quad \sum\nolimits_{{i = 1}}^{n} {v_{i} = n\kappa }$$

where n is the number of comparison objects in a group, and \(\kappa\) is the normal utility. The closed form solution of Form (4), which can easily be solved manually, is the Row Average plus the normal Utility (RAU), given by:

$$V = RAU\left( {B,\kappa } \right) = rowMean(B) + \kappa$$
(5)

Explicitly,

$$v_{i} = \left( {\frac{1}{n}\sum\limits_{j = 1}^{n} {b_{ij} } } \right) + \kappa ,\forall i \in \left\{ {1, \ldots ,n} \right\}$$
(6)

For many decision problems, the summation of priorities is equal to one, i.e., \(\sum {V^{\prime}} = 1\). V′ is said to be a normalized priority vector (or a priority vector in short) from V. The normalization function is a scaling function defined as below.

$$v^{\prime}_{i} = \frac{{v_{i} }}{{\sum\limits_{{i \in \left\{ {1, \ldots ,n} \right\}}} {v_{i} } }} = \frac{{v_{i} }}{n\kappa },\forall i \in \left\{ {1, \ldots ,n} \right\}$$
(7)
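The RAU prioritization of Eqs. 5–7 amounts to a row mean shifted by \(\kappa\), followed by scaling; a minimal sketch with a hypothetical POM:

```python
import numpy as np

def rau(B, kappa):
    """Row Average plus the normal Utility (Eqs. 5-6)."""
    return np.asarray(B, dtype=float).mean(axis=1) + kappa

def normalize(V):
    """Scale priorities to sum to one (Eq. 7); the raw sum equals n * kappa."""
    V = np.asarray(V, dtype=float)
    return V / V.sum()

B = np.array([[ 0.0,  3.0, -2.0],
              [-3.0,  0.0,  1.0],
              [ 2.0, -1.0,  0.0]])   # hypothetical POM
V = rau(B, kappa=8.0)                # row means 1/3, -2/3, 1/3, shifted by 8
W = normalize(V)                     # normalized priority vector V'
```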

2.4 Aggregation and ranking

After the weights for the criteria of IS proposal selection and the score values for the IS proposal alternatives are derived, a weighted decision table is formed. To yield the best alternatives, aggregation and ranking are performed on the weighted decision table. The weighted arithmetic mean in Eq. (8) is applied to aggregate the values in the weighted decision table owing to its popularity, simplicity, and efficiency.

$$T_{i} = \sum\nolimits_{j = 1}^{n} {\omega_{j} r_{ij} } ,\;i = {1}, \ldots ,m$$
(8)

After the values of the IS alternatives are aggregated, the final decision \(t^{*}\) can be selected from the candidate list, i.e., \(t^{*} \in T = \left[ {T_{1} , \ldots ,T_{m} } \right]\). The best alternative is determined by the highest score, and its position \(\gamma\) is returned by \(\arg \max\) as below.

$$t^{*} = T_{\gamma } ,{\text{where}}\;\gamma = \mathop {\arg \max }\limits_{{i \in \left\{ {1,2, \ldots ,m} \right\}}} \left( {\left\{ {T_{1} ,T_{2} , \ldots ,T_{i} , \ldots ,T_{m} } \right\}} \right)$$
(9)

The form above is used for selection problems with a single winner. If more alternatives are needed, sorting algorithms are used to find the order or rank of the candidates. Alternatively, other aggregation and ranking methods can rank the alternatives from the input of the weighted decision table. For example, Case 3 in Sect. 5 demonstrates the application of PROMETHEE II to compute rank values from the weighted decision table.
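The weighted-mean aggregation of Eq. (8) and the arg max selection of Eq. (9) reduce to a matrix-vector product followed by an index lookup; a minimal sketch with hypothetical weights and scores:

```python
import numpy as np

w = np.array([0.4, 0.35, 0.25])        # hypothetical criterion weights (sum to 1)
R = np.array([[0.2, 0.5, 0.3],         # hypothetical r_ij: rows are alternatives
              [0.5, 0.2, 0.4],
              [0.3, 0.3, 0.3]])
T = R @ w                              # Eq. (8): T_i = sum_j w_j * r_ij
gamma = int(np.argmax(T))              # Eq. (9): position of the best alternative
```

For a full ranking rather than a single winner, `np.argsort(T)[::-1]` returns the alternatives in descending order of aggregated value.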

3 Case 1: IS project selection using PCNP

An IS project selection problem using the AHP (Muralidhar et al. 1990) is examined using the baseline version of the proposed PCNP approach. In this case, all evaluations are based solely on pairwise comparisons.

3.1 Structuring information system decision problem

The careful selection of appropriate IS projects that are well aligned with the organizational objectives is one of the most essential activities in the IS planning process, owing to limited organizational resources. Muralidhar et al. (1990) proposed four important criteria, namely increased accuracy in clerical operations (C1), information processing efficiency (C2), promotion of organizational learning (C3), and implementation costs (C4). The objective was to choose the best IS project from six competitive IS project alternatives (T1 … T6) with respect to the four criteria. The measurable Structural Assessment Network (SAN) is presented in Fig. 1.

3.2 Comparisons and assessments

In this stage, the decision maker evaluates the criteria and alternatives through a set of predefined survey questions. Figure 2 shows a questionnaire form used to evaluate criteria importance for the IS project selection. The rating scores are based on the rating scale schema in Table 1, and \(\kappa\) is 8 by default. The rating scores form the pairwise reciprocal matrices shown in Table 14 in Appendix 1, obtained from (Muralidhar et al. 1990). The conversion of rating scores from PRMs to POMs follows the rating scale schema in Table 2. For example, regarding the rating score between C1 and C2 in Table 3, the AHP interprets C1's importance as 1/9 of C2's importance, whilst the PCNP interprets C1's importance as 8 units less than C2's importance. In other words, the difference between two objects measured by the AHP is much bigger than under the PCNP interpretation, as the AHP counts in multiplicative units rather than interval units.

The Accordance Index (AI) is used to check whether the values in a POM are reasonable. In Table 4, no POM is perfectly accordant, and only two are within the recommended range. The steps to compute the AI of B0 using Eq. 1 are summarized in Table 5. When the AI value of a POM is within the recommended range between 0 and 0.1, the POM is appropriate for prioritization. In this case, the POMs are converted from the PRMs in Table 14 in Appendix 1, obtained from (Muralidhar et al. 1990), and the POMs with unsatisfactory accordance are retained for consideration in the next step.

Table 5 Computational steps for Accordance Index of B0 using Eq. 1

3.3 Forming a weighted decision table

A weighted decision table is obtained from the prioritization results of all POMs. Table 3 shows the steps to compute the priority set derived from B0 using the RAU of Eqs. 5–7. The prioritization values can be used as priorities, weights, utilities, or importance values. The priority set from B0 is used as the weights of the criteria in evaluating IS projects, i.e., \(\omega\). The prioritization results of the POMs (B1,…,B4) are used as the criteria scores of the IS project alternatives, i.e., \(\left[ {r_{ij} } \right]\). Based on the prioritization results of all POMs, the weighted decision table is presented in the first five columns of Table 6.

Table 6 Cognitive prioritization results, aggregation results, and ranks using PCNP

3.4 Aggregation and ranking

After the weighted decision table is derived, aggregation and ranking are performed. The results are shown in the last two columns of Table 6. For example, the objective priority of T1 is calculated by a weighted linear combination, \(0.109 \times 0.09 + 0.352 \times 0.201 + 0.305 \times 0.149 + 0.234 \times 0.194 = 0.172\), and the same computational step is applied to the other alternatives. T6 has the highest value of 0.186, whilst T3 has the lowest value of 0.121. The aggregation results of T4, T1, T5 and T2 are similar and close to 0.17. T6 does not have a superior result over the other alternatives. The close differences among the alternatives imply the difficulty of choosing an IS proposal; hence, a decision tool is needed.
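The worked T1 figure can be checked directly from the quoted (rounded) values; the raw sum is 0.1714, matching the reported 0.172 only up to the rounding of the displayed inputs:

```python
# Quoted weight/score pairs for T1 from Table 6 (rounded to three decimals)
terms = [(0.109, 0.09), (0.352, 0.201), (0.305, 0.149), (0.234, 0.194)]
t1 = sum(w * r for w, r in terms)   # approximately 0.1714
```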

4 Case 2: IS outsourcing decision problem using PCNP with absolute measurement

In brief, IS outsourcing means that external providers take responsibility for all or part of an organization's IS functions. An inappropriate outsourcing strategy may incur huge losses, leading to diminished user confidence. In this case, an IS outsourcing decision strategy using the AHP, adapted from Yang and Huang (2000), is revisited with the use of the PCNP. Two levels of weights are evaluated by pairwise comparisons, while the score values \(\left\{ {r_{ij} } \right\}\) are based on an absolute measurement, i.e., direct rating scores for all criteria with respect to all alternatives.

4.1 Structuring information system outsourcing decision problem

According to the demonstration presented by Yang and Huang (2000), a business bank planned to outsource part of its IS functions. The objective, criteria and alternatives of the outsourcing decision problem are reproduced as the SAN shown in Fig. 3. Three system alternatives were evaluated in the outsourcing decision:

  • Facilities management (T1) such as network facilities, host and some PCs;

  • Maintenance of management information system (T2) such as the online transaction processing system;

  • New system development (T3) such as internet homepage, unmanned bank and interactive voice response system.

Fig. 3

Structural Assessment Network for Information System Outsourcing Selection Strategy

Due to a limited budget, the objective was to select the best system for outsourcing with respect to the five major criteria listed below.

  • Management (C1): a) stimulate IS department to improve their performance and enhance morale (C1,1); b) solve floating and scarcity of employees (C1,2);

  • Strategy (C2): shared risks;

  • Economics (C3): reduce the cost of developing and maintaining IS;

  • Technology (C4): a) acquire a new technology (C4,1); b) learn a new technology such as software management and development from vendors (C4,2);

  • Quality (C5): a) procure higher reliability and performance of IS (C5,1); b) achieve higher service level (C5,2).

4.2 Comparisons and assessments

According to the SAN for the IS outsourcing selection strategy presented in Fig. 3, four POMs are used to measure the weights with respect to the criteria, organized in two levels. These POMs, presented in Table 7, are converted from the PRMs presented in Table 16 in Appendix 1, obtained from (Yang and Huang 2000). The AI values of all POMs are acceptable; for a comparison of only two objects, the POM is always accordant. Unlike Case 1 presented in Sect. 3, the score values \(\left\{ {r_{ij} } \right\}\) are obtained from direct rating scores for the alternatives with respect to the criteria on the basis of the absolute measurement presented in Table 18 in Appendix 1, obtained from (Yang and Huang 2000). The scores from the absolute measurements are based on a Likert scale ranging from 1 to 5.

Table 7 Pairwise opposite matrices for comparing IS projects with respect to different criteria

4.3 Forming a weighted decision table

The weighted decision table in Table 9 consists of the weights \(\omega\) and the scores \(\left\{ {r_{ij} } \right\}\). The score values for the criteria with respect to the alternatives, \(\left\{ {r_{ij} } \right\}\), are measured on a five-point Likert scale and presented in Table 18 in Appendix 1, obtained from (Yang and Huang 2000). The weights, derived from aggregating the prioritization results of the POMs in Table 7, are presented in Table 8.

Table 8 Aggregation of weights from PCNP results

4.4 Aggregation and ranking

A simple weighted aggregation is performed on the inputs of the criteria weights and alternative scores. According to the aggregation results, the ranks of all the alternatives are calculated and presented in Table 9. As the score values are based on a five-point Likert scale and the sum of the weights is equal to one, the aggregation results are between 1 and 5. Providing that the decision maker sets the threshold aggregation score for outsourcing at 3, the maintenance of the management information system (T2 = 3.71) and new system development (T3 = 3.23) should be outsourced.
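The threshold rule above reduces to a simple filter on the aggregated scores; the T2 and T3 values below are from Table 9, while the T1 value is hypothetical (the source only implies it falls below the threshold):

```python
# T1 value assumed for illustration; T2 and T3 are the reported aggregates
results = {"T1": 2.9, "T2": 3.71, "T3": 3.23}
threshold = 3.0
outsource = [t for t, score in results.items() if score >= threshold]
# outsource keeps the alternatives at or above the decision maker's cut-off
```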

Table 9 Cognitive prioritization results, aggregation results, and ranks using PCNP

5 Case 3: IS outsourcing decision problem using hybrid CNP

In this case, an IS outsourcing decision using the hybrid approach of the AHP and PROMETHEE proposed by Wang and Yang (2007) is revisited. A hybrid approach of the PCNP and PROMETHEE is demonstrated for the IS outsourcing decision strategy. A POM is used to evaluate the weights of the criteria. The score values \(\left\{ {r_{ij} } \right\}\) come from direct rating scores and from values produced by a calculated indicator function for the criteria with respect to the alternatives. Unlike the previous two cases, which used only positive (or maximal) criteria, negative criteria are considered in this case. Instead of the weighted arithmetic mean in Eq. (8), PROMETHEE II (Brans et al. 2005; Brans 1982), presented in Appendix 2, is used for aggregation and ranking on the basis of the decision matrix.

5.1 Structuring information system decision problem

According to the demonstration in (Wang and Yang 2007), a bookstore planned to outsource parts of its IS functions. The objective, criteria, and alternatives of the outsourcing decision problem are organized as the SAN presented in Fig. 4. Five IS outsourcing projects were considered: facilities management (T1), internet homepage development (T2), a customer relationship management system (T3), a supplier relationship management system (T4), and an online transaction processing system (T5). Due to the limited budget, the objective is to select the best system for outsourcing with respect to the six major criteria listed in the following:

  • Economics (C1): cost reduction by a vendor;

  • Resource (C2): new technologies and knowledge workers from a vendor;

  • Strategy (C3): retaining the core activities of the company and outsourcing noncore activities to the vendor;

  • Risk (C4): loss of core competence, technical knowledge, flexibility, innovative capability, and increase of management complexity;

  • Management (C5): improvement of communication and management;

  • Quality (C6): improvement of quality and services of an internal IS department.

Fig. 4

Structural Assessment Network for IS Outsourcing Projects Selection

5.2 Comparisons and assessments

The scores of the comparisons and assessments are presented in one POM and one score-values matrix, respectively. A POM is used to measure the weights among the six criteria. The POM presented in Table 10 is converted from the PRM in Table 19 in Appendix 1, obtained from (Wang and Yang 2007). The AI of the POM is not within the recommended range, but it is still considered in this case for the purpose of comparison with the AHP. To form the score-values matrix \(\left\{ {r_{ij} } \right\}\) presented in Table 11, the criteria of resource (C2), strategy (C3), risk (C4), management (C5), and quality (C6) applied absolute measurements on the basis of a five-point Likert scale, whilst the economics criterion (C1) was based on the ratio of saved costs to in-house development and maintenance costs.

Table 10 Pairwise opposite matrix for weights (AI = 0.164)
Table 11 Score values matrix and related parameter settings for PROMETHEE

5.3 Forming a weighted decision table

As seen in Table 11, the weighted decision table consists of a weight set (W) and a scores matrix \(\left\{ {r_{ij} } \right\}\). The scores matrix was discussed in Sect. 5.2. The weights derived from the prioritization of the POM for the evaluation criteria in Table 10 are (0.160, 0.167, 0.194, 0.174, 0.142, 0.163). Strategy (C3) is the most important criterion, while the other criteria are only slightly less important.

5.4 Aggregation and ranking

Instead of the weighted arithmetic mean in Eq. (8), PROMETHEE II (Brans et al. 2005; Brans 1982), presented in Appendix 2, is used for the aggregation and ranking based on the weighted decision table. According to (Wang and Yang 2007), the parameter settings for PROMETHEE II are presented in the second and third rows of Table 11. It is important to note that the ranking flow functions presented in (Wang and Yang 2007) were not consistent with the results presented in their paper. According to those results, an average should be used for the ranking flow, as presented in Eqs. A2–A3 in Appendix 2. It may be that (Wang and Yang 2007) used an established software application whose version of PROMETHEE II did not perfectly match the one described in their paper; this paper instead implements PROMETHEE II in the R language as the author's own package, without calling any external package for PROMETHEE II.

The ranking results with the PCNP are presented in Table 12. The positive outranking flow \(\phi^{ + }\) expresses how strongly an alternative \(T_{i}\) outranks all the others; a higher \(\phi^{ + }\) indicates a better alternative. Conversely, the negative outranking flow \(\phi^{ - }\) expresses how strongly an alternative \(T_{i}\) is outranked by the others; a lower \(\phi^{ - }\) indicates a better alternative. To balance \(\phi^{ + }\) and \(\phi^{ - }\), the net ranking flow \(\phi\) is used for the final ranking decision; a higher \(\phi\) indicates a better alternative. Therefore, the ranking order is facilities management (T1), online transaction processing system (T5), internet homepage development (T2), supplier relationship management system (T4), and then customer relationship management system (T3).
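A minimal PROMETHEE II sketch in Python (the authors' own implementation is an R package), assuming the simple "usual" preference function (\(P = 1\) if the difference favours \(a\), else 0) and averaged flows (division by \(m - 1\), as discussed above); the case study itself uses the criterion-specific preference functions and parameters of Table 11, and the data below are hypothetical:

```python
import numpy as np

def promethee2(R, w, maximize):
    """Net outranking flows with the 'usual' preference function."""
    m = R.shape[0]
    pi = np.zeros((m, m))                        # aggregated preference indices
    for a in range(m):
        for b in range(m):
            if a != b:
                # flip the sign of differences for criteria to be minimized
                d = np.where(maximize, R[a] - R[b], R[b] - R[a])
                pi[a, b] = np.sum(w * (d > 0))   # usual preference: 1 if d > 0
    phi_plus = pi.sum(axis=1) / (m - 1)          # positive outranking flow
    phi_minus = pi.sum(axis=0) / (m - 1)         # negative outranking flow
    return phi_plus - phi_minus                  # net flow phi

R = np.array([[3.0, 0.20],                       # hypothetical scores, 2 criteria
              [4.0, 0.15],
              [2.0, 0.30]])
w = np.array([0.6, 0.4])
maximize = np.array([True, False])               # second criterion is a cost
phi = promethee2(R, w, maximize)
```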

Table 12 PROMETHEE flows with PCNP

6 Comparisons and Discussions

As the PCNP is proposed as an enhanced replacement for the AHP in IS project assessments, the comparisons between the two methods and their impacts are discussed in this section. Table 13 summarizes the mapping of data, weights, and results between the PCNP and the AHP with respect to the three cases presented in this paper. Regarding Case 1, the PCNP recommends T6 with a preference value only slightly higher than the others, i.e., the largest difference is only 0.186 − 0.121 = 0.065 (Table 6), whereas the AHP indicates T1 with a much higher preference value over the other alternatives, i.e., the largest difference is 0.225 − 0.048 = 0.177 (Table 15). The aggregated values among the six alternatives under the PCNP are much closer together than under the AHP.

Table 13 Mapping of data, weights, and results between PCNP and AHP

Regarding Case 2, the ranks of the PCNP and the AHP are the same, although the weights of the two methods vary. The population standard deviation of the PCNP results is 0.06, while that of the AHP results is 0.1, with respect to the same mean 1/8 = 0.125. The AHP (0.245–0.023 = 0.222) produces a larger weight difference than the PCNP (0.19–0.061 = 0.13). Both methods produce the same rank because the differences among the score values \(\left\{ {r_{ij} } \right\}\) are significant, so the influence of the weights on the aggregation is less pronounced. Regarding Case 3, the AHP (sd = 0.082) also produces a larger weight difference than the PCNP (sd = 0.015), whilst the mean value of both weight sets is 0.167. Here the weight values have a significant influence on the PROMETHEE aggregation, so the two methods yield different ranks.
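The dispersion comparison above can be reproduced in a few lines. The following Python sketch uses hypothetical eight-criterion weight vectors (not the paper's actual weights) that merely mimic the reported extremes, to illustrate how the population standard deviation contrasts a ratio-scale-style weight set with an interval-scale-style one:

```python
import numpy as np

# Hypothetical weight vectors for illustration only (not the paper's data).
# Both sum to 1 over eight criteria (mean 1/8 = 0.125), but the ratio-scale
# style vector is far more dispersed than the interval-scale style one.
w_ratio = np.array([0.245, 0.205, 0.170, 0.140, 0.100, 0.070, 0.047, 0.023])
w_interval = np.array([0.190, 0.165, 0.145, 0.125, 0.110, 0.105, 0.099, 0.061])

for name, w in [("ratio-scale style", w_ratio),
                ("interval-scale style", w_interval)]:
    sd = np.std(w)  # population standard deviation (ddof = 0)
    print(f"{name}: mean = {w.mean():.3f}, sd = {sd:.3f}, "
          f"range = {w.max() - w.min():.3f}")
```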

The three cases confirm that the AHP produces larger weight differences than the PCNP: in the AHP, the lowest-ranked item receives a much lower value and the highest-ranked item a much higher value. Paired ratio scales usually exaggerate human perception by describing paired differences as multiples, resulting in a much wider variance (Yuen 2009, 2012, 2014). Thus, using the AHP's paired ratio scale to represent cognitive paired differences is questionable.

Since the priorities produced by the PCNP are much closer than those of the AHP, the PCNP reflects the challenging nature of the decision problem, in which the alternatives are highly competitive. The AHP, by contrast, portrays the decision problem as trivial: quite often one alternative appears several times more important than the others. Take Case 1 with the AHP as an example. If an expert considers C2 very important (0.524) and T1 has the largest value (0.321) with respect to this criterion, T1 can be chosen directly without performing any complex decision process. If a strong alternative is obvious, any decision tool becomes redundant for the selection problem, since the expert can make an informed decision immediately. The proposed decision problems are challenging, but the AHP's results may not be effective for such problems because the strengths of the candidates should not be compared as multiples. On the other hand, when several similarly competitive alternatives are shortlisted, the PCNP is typically useful, as the decision is very difficult to make due to uncertainty, insufficient information, lack of knowledge, and/or similar strengths among alternatives. Further examples and discussions can be found in (Yuen 2009, 2012, 2014).

Despite these issues, the many applications using the AHP can still appear sound. All potential alternatives satisfying the baseline requirements are shortlisted before they are included in the AHP decision hierarchy for the further evaluation and selection processes. In other words, irrelevant or weak alternatives and criteria should be removed from consideration at an early stage. Although the best alternative may not be selected, at least a suitable one, quite close to the best, is selected after the screening process. To date, no valid measurement or follow-up action can benchmark an AHP decision, as there is no well-recognized, established evaluation measure, especially when the scenario is not static and cannot easily be controlled.

In applying the PCNP, however, there is one limitation, or assumption. According to the paired interval scale in Table 1, the biggest difference between two objects is \(2\kappa\) units when transitivity is applied across two levels. In other words, among a set of alternatives, if the difference between the best and the worst alternative is more than \(2\kappa\) units, a negative value will be produced for the worst alternative. This is the out-of-boundary problem; by contrast, the biggest difference between two objects in the AHP is 8 × 8 = 64 times. More discussion can be found in (Yuen 2009, 2012).
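The out-of-boundary behaviour can be sketched numerically. The snippet below is a simplified Python illustration, assuming the primitive prioritization form \(v_{i} = \kappa + \frac{1}{n}\sum\nolimits_{j} b_{ij}\) associated with Yuen's CNP work; the function name and both comparison matrices are hypothetical:

```python
import numpy as np

KAPPA = 8  # normal utility: each alternative starts from kappa marks

def pcnp_priorities(B, kappa=KAPPA):
    """Sketch of a primitive prioritization: each alternative starts
    from kappa and gains or loses the average of its row of paired
    interval differences, where B[i, j] is how many units alternative
    i is preferred over j (so B = -B.T and diag(B) = 0)."""
    return kappa + B.sum(axis=1) / B.shape[0]

# Within the scale: the implied best-worst difference equals 2*kappa = 16
B_ok = np.array([[0.0, 4.0, 8.0],
                 [-4.0, 0.0, 4.0],
                 [-8.0, -4.0, 0.0]])

# Out of boundary: the best-worst judgement (26) exceeds 2*kappa,
# so the worst alternative's priority becomes negative
B_bad = np.array([[0.0, 18.0, 26.0],
                  [-18.0, 0.0, 8.0],
                  [-26.0, -8.0, 0.0]])

print(pcnp_priorities(B_ok))   # all priorities stay positive
print(pcnp_priorities(B_bad))  # last priority is negative
```

Since B is antisymmetric, the priorities always sum to \(n\kappa\), so a negative entry means another alternative has absorbed more than its share of marks.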

This limitation can be exemplified in terms of the out-of-boundary problem. For example, in a job recruitment process, candidates below standard should not be invited. If a candidate is outstanding, he or she may immediately be given an offer, and the other invited candidates placed on a waiting list. Quite often, however, all shortlisted candidates are strong enough to be invited for interviews. As they are very competitive, or quite close, the selection decision tends to be quite challenging. In such cases, the PCNP is an ideal tool for making a careful decision. As the AHP is based on a paired ratio scale, it does not make sense to measure the differences between such candidates as multiples.

7 Conclusions

Although there is a growing body of research and applications using the AHP, many studies have not addressed the potential problems of the AHP. Focusing on pairwise comparisons for decision making, this paper proposes the PCNP for IS decision problems. In general, this paper revisits three established cases (Muralidhar et al. 1990; Yang and Huang 2000; Wang and Yang 2007) of IS decision problems using the AHP. Case 1 presents the baseline version of the PCNP method, based merely on pairwise comparisons, to evaluate IS project selection by revisiting the study (Muralidhar et al. 1990) using the AHP. The weighted decision table is derived from all evaluations based on pairwise comparisons, and detailed calculation steps are presented to demonstrate usability. Case 2 proposes the PCNP with hierarchical criteria and absolute measurement for the IS outsourcing decision by revisiting the problem presented in (Yang and Huang 2000). Case 3 proposes a hybrid method combining the PCNP and PROMETHEE for the IS outsourcing solution, in comparison with (Wang and Yang 2007).

The major problem of the AHP is that the comparison of two objects represented by the ratio scale usually exaggerates the paired difference. The PCNP, which is based on the primitive paired interval difference between two objects, better matches human cognition of difference. For a single POM, 8 marks on average can be assigned to each alternative (\(\kappa = 8\)); after evaluation, some marks are moved from some candidates to others, producing the ranks. The limitation of such a comparison is that the biggest difference between two objects should not be more than \(2\kappa\) units. Due to this limitation, the PCNP is the ideal MCDM tool for a challenging decision-making process in which the competing alternatives are close, but it is less suitable for comparing candidates with large differences.

The proposed method provides new insights for the replacement of the AHP and can be applied to IS decision making, yielding more reliable results. As the scope of this research is limited to revisiting three previously established IS studies using the AHP and revising their analytics with the PCNP, future studies could consider other state-of-the-art IS planning projects, including Artificial Intelligence adoption projects, cloud sourcing planning projects, and mobile deployment projects, through the examination of the CNP and its hybrid methods, subject to the complexity of the decision problems.