1 Introduction

Today's enterprises can survive in a highly competitive environment only by providing high-quality products. To improve quality, enterprises should improve their development processes and perform established defect management activities. In this sense, quality control methods are important for businesses seeking sustainable success. Among the many popular approaches to quality management, Six Sigma stands out because it provides an infrastructure for making measurable improvements. The Six Sigma strategy applies statistical tools within a structured methodology to deliver products and services that are less costly, faster, and better.

Six Sigma is a widely used approach that applies statistical tools within a structured methodology to reduce development costs, increase productivity, and develop high-quality products. However, companies have to develop many products at the same time, and conducting Six Sigma projects for all of their products may not be possible due to limited time and resources. Within this context, project prioritization and resource allocation for Six Sigma are two major concerns that must be handled with great attention. A systematic Six Sigma project prioritization methodology is the key factor in successfully managing quality improvement activities [18, 54, 55]. Many studies in the literature state that Six Sigma project prioritization is the most critical and complicated task of Six Sigma [9, 27, 39, 41, 45, 56]. Moreover, the success of quality improvement via Six Sigma may depend on the accuracy of the project prioritization methods used [1, 2, 4, 30, 31, 32, 37, 45, 46, 47, 50].

Software quality is a very important issue for software engineers. As the quality of the software increases, its complexity and error/fault rate decrease and its maintainability increases, saving enterprises time and cost.

Software development companies are also among the companies that should keep product quality in the foreground, and Six Sigma is one of the preferred methods for improving software quality, alongside Object-Oriented Analysis, System Development Life Cycle (SDLC) prototyping, and Total Quality Management [14, 28, 67]. However, to the best of our knowledge, there is no Six Sigma project selection study in the literature proposed for software products, although there are studies proposing Six Sigma project selection methodologies for various other industries. The goal of this study is to fill this gap and propose a Six Sigma prioritization methodology for software development projects. For this purpose, we utilized 7 completed industrial software development projects of a CMMI (Capability Maturity Model Integration) Level 3 certified company in Turkey. CMMI is a process-level improvement training and appraisal program. All of the analyzed projects are similar software projects developed by the same teams in the same organization; therefore, their software architectures and coding styles are very similar to each other.

Proposing a prioritization methodology involves two steps: the first is to determine the metrics to be used, and the second is to select a decision-making method to evaluate the obtained metrics.

There are many object-oriented software quality metrics in the literature for measuring the quality of software projects. The most widely accepted metric set for measuring software quality is the Chidamber and Kemerer (C&K) metric suite [59]. In this study, the C&K software quality metrics are used, and their values are determined with the static code analysis tool “Understand 6.0” [61]. The C&K suite consists of 6 metrics: Lack of Cohesion in Methods (LCOM), Coupling between Objects (CBO), Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Number of Children (NOC), and Response for a Class (RFC) [12].

Many studies have been conducted to investigate the value of the C&K metrics. These metrics have often been associated with product outcomes such as quality, fault-proneness, complexity, and maintenance effort [35].

Each C&K metric serves as a criterion for prioritization, and there are 6 C&K metrics. Hence, applying a Multi-Criteria Decision-Making (MCDM) method is a suitable approach for performing the prioritization. MCDM methods are important tools for choosing the best alternative under multiple criteria in decision-making processes in different fields. In this study, the ARAS method was used for the prioritization and selection of software projects. ARAS is an MCDM method that has been introduced to the literature in recent years [69]. First, the CRITIC and Entropy methods were used to determine criterion weights, and then the ARAS method was used to rank the alternatives, i.e., the software development projects. The ARAS method was applied with the weights obtained from each of the two weighting methods, and the results were compared. We believe that our proposed methodology will be beneficial to software development companies for their resource allocation management in quality improvement processes.

The next sections are organized as follows. First, Sect. 2 briefly presents a literature survey about a general view of Six Sigma in the software development industry. Next, Sect. 3 demonstrates the proposed method in this study. In Sect. 4, results are discussed. Finally, the conclusion is given in Sect. 5.

2 Literature review

Harry and Schroeder, who pioneered Six Sigma at Motorola in the 1980s, described Six Sigma as “a combination of defects and errors in production and service processes to find the reasons and eliminate the cost and process to reduce cycle time, improve efficiency, meet customer expectations and return of investment (ROI)” and proposed an approach that focuses on achieving business improvement [24]. The main purpose of the Six Sigma methodology is to enable businesses to provide services with near-zero defects (3.4 defects per million parts), and the basic methodology of Six Sigma is DMAIC (Define, Measure, Analyze, Improve, and Control) [7].

The Six Sigma method has been applied in production processes for more than 30 years. However, the adoption of the Six Sigma methodology in the software development industry has been slower [42]. In software development processes, the aim is to consistently produce software products with a very high level of customer satisfaction and a minimum number of defects. However, there have been difficulties in achieving this goal for years. Pournaghshband and Watson [42] emphasize that “Another way of looking at Six Sigma in a software context would be to achieve a defect-removal efficiency level of about 99.9999%. Since the average defect-removal efficiency level in the United States is only about 85% and less than one project in 1000 has ever topped 98%, it can be seen that actual Six Sigma results are beyond the current state of the art.”

In the software development industry, Six Sigma is used at the requirements collection stage and provides useful tools for developing a project concept, defining the problem, and analyzing the stakeholders of the software project. There is a strong relation between the sigma level and software characteristics such as deceleration cost, errors per product, efficiency, and cycle time [65].

The use of Six Sigma in software companies encourages software developers to be more process-centered, with the aim of reducing the probability of producing errors per product or service. Companies like Allied Signal, Motorola, and General Electric have saved billions of dollars using Six Sigma [22, 23, 25].

Although Six Sigma is a beneficial strategy, companies have to develop many products at the same time, and conducting Six Sigma projects for all of their products may not be possible due to limited time and resources. Therefore, a prioritization methodology for Six Sigma project selection is crucial for companies that aim to improve quality with smart resource management [41, 53]. In addition, some studies on Six Sigma and software development project selection are given in Table 1, and MCDM methods used in Six Sigma project selection are given in Table 2.

Table 1 Literature review for Six Sigma and project selection for software development
Table 2 Review of MCDM methods used in Six Sigma project selection

Six Sigma project prioritization and selection methods were comprehensively reviewed in a recent survey paper [38]. Pakdil [38] states that decision-makers extensively use MCDM, and researchers recommend using MCDM methods for prioritizing and selecting projects [30, 36, 64, 66]. MCDM methods are used to solve decision problems involving multiple criteria, ensuring that the alternatives are ranked by considering the determining criteria. In our study, we also chose MCDM for the selection process. Hence, in this part, we focus on the notable research on Six Sigma project selection and the MCDM applications related to our work.

In the literature survey given in Pakdil [38], the common MCDM methodologies used for selection processes are stated as AHP [3, 34, 66], fuzzy VIKOR [10, 63], fuzzy TOPSIS [3, 68], DEA [21, 34], ANP [63], and DEMATEL [62, 63].

In addition to the studies given above, Tuş and Adalı [60] used a new unified decision-making approach based on the CRITIC and Weighted Aggregated Sum Product Assessment (WASPAS) methods for the hospital time and attendance software selection problem. Bošković et al. [5], on the other hand, used the CRITIC and ARAS methods for the selection of mobile network operators.

Puzovic et al. [44] addressed Product Lifecycle Management (PLM) software selection, an essential part of the PLM concept implementation, by integrating the Fuzzy Analytic Hierarchy Process (FAHP) and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE). Khan and Ansari [29] proposed an intuitionistic fuzzy (IF) improved score function method to deal with MCDM problems under intuitionistic fuzzy sets (IFSs). Recent studies using the ARAS method in combination with CRITIC and Entropy are listed in Table 3. However, to the authors' knowledge, there are no studies in the literature that use CRITIC-ARAS or Entropy-ARAS to rank software development projects.

Table 3 Review of recent studies using the ARAS method with CRITIC and entropy

In the CRITIC method, objective weights are obtained using the actual data of each evaluation criterion; therefore, the data in the decision matrix are sufficient to calculate the criterion weights [15]. In this method, the correlations between criteria express how much each criterion affects the others in the decision process. Thus, the weights are obtained objectively, yielding an unbiased ranking free of subjective evaluation effects. In this way, criterion weights can be obtained without relying on expert opinion [15, 60]. Entropy is an objective weighting method like CRITIC, and it measures the amount of information available from the decision matrix for each criterion [16, 51]. The ARAS method, on the other hand, identifies a utility function value for each alternative; this value is directly proportional to the relative effect of the criteria values and weights considered in a project [69]. The ARAS method was preferred in this study because it can be used when the criteria have different measurement units, it is easy to implement, and it reflects the difference between the alternatives and the ideal solution [33].

Our study is the first to implement and compare the results of the ARAS method with CRITIC and Entropy weighting in the Six Sigma project prioritization and selection process for software development projects.

We analyzed 7 completed industrial software development projects of a CMMI Level 3 certified company in Turkey to evaluate our proposed methodology. To select and prioritize these 7 software development projects, we used the C&K metric suite, which is the most widely used in the literature. These metrics were measured using the static code analysis tool Understand, and the obtained results were discussed. The set of C&K metrics used in the study consists of 6 metrics. The WMC metric is defined as the number of methods in a class. As the number of methods in a class increases, so does the potential impact on child classes, since child classes inherit all defined methods; an increase in the average WMC increases the error density and decreases the quality of the software [12, 17, 71]. The DIT metric is the length of the path from a class to the root of the inheritance tree. The greater the inheritance tree depth for a class, the more difficult it is to predict its behavior because of the interaction between inherited and newly defined features [57]. NOC is the number of subclasses derived from a class. Having many subclasses of a class results in high reuse but also a high risk of error; this criterion helps measure software properties such as efficiency, testability, and reusability [57]. The RFC metric is the total number of methods that can potentially be invoked in response to a message received by an object of the class; a class that calls many methods indicates high complexity [17, 57, 72]. The CBO metric is the number of classes on which a class depends [72]. As CBO increases, the reusability of a class is likely to decrease; in general, the CBO value of each class should be kept low [36]. The LCOM metric measures the lack of cohesion among the methods of a class; the LCOM value should be kept low so that cohesion within the class remains high [43].

Thapaliyal et al. [58] stated that a high WMC value increases the error density and decreases quality. Breesam [6] reported that the depth of the inheritance tree indicates the complexity of the design, that a system with many inheritance layers is difficult to understand, and that a high DIT value increases errors. Chidamber and Kemerer [12] stated that a high number of subclasses, i.e., a high NOC value, generally means a class is more complex, more difficult to maintain, and more error-prone, so its value should be kept low. They also stated that, for the RFC criterion, triggering too many methods in response to a message makes it difficult to detect errors in classes and to test them, and that low cohesion, as indicated by the LCOM criterion, increases complexity and therefore the errors introduced during the development phase.

Apart from these studies, Briand et al. [8] showed, using logistic regression, that the CBO, RFC, and LCOM criteria can be used to predict the fault-proneness of classes. Zhou et al. [72] showed that the C&K criteria other than DIT, together with the number of lines of code, are important in determining fault-proneness.

The studies mentioned above are mainly manufacturing-oriented, and none of them addresses Six Sigma project selection in the software development industry, which constitutes a gap in the literature. To fill this gap, we propose a Six Sigma prioritization methodology for software development projects. For this purpose, we utilized 7 completed industrial software development projects of a CMMI Level 3 certified company in Turkey.

3 Research methodology

The prioritization and selection of software projects were carried out in two phases. First, the criteria weights were determined using the CRITIC and Entropy methods; then, the software projects were ranked according to the C&K metrics using the ARAS method with the weights obtained from each method (Fig. 1).

Fig. 1 Steps of analysis

3.1 Phase 1.1 Determining criteria weights using CRITIC

The CRITIC method was developed by Diakoulaki et al. [15]. In this study, instead of subjective weights obtained from expert opinion, the CRITIC method was preferred, in which objective weights are obtained from the standard deviations of the criteria and the correlations between them [15]. The application steps of the CRITIC method, which was applied to compute the importance weights of the C&K metrics, are given below [15, 40, 48].

3.1.1 Step 1.1.1 Determining the initial decision matrix

Let \(\left[X\right]\) denote the initial decision matrix; it can be represented as in Eq. (1)

$$\left[ X \right] = \left[ {x_{ij} } \right]_{n \times m} = \left[ {\begin{array}{*{20}c} {x_{11} } & {x_{12} } & \cdots & {x_{1m} } \\ {x_{21} } & {x_{22} } & \cdots & {x_{2m} } \\ \vdots & \vdots & \cdots & \vdots \\ {x_{n1} } & {x_{n2} } & \cdots & {x_{nm} } \\ \end{array} } \right]$$
(1)

where \({x}_{ij}\) is an element of \([X]\), \(i\) indexes the alternatives \(({A}_{i}, i=1,2,\dots ,n)\), and \(j\) indexes the criteria \(\left({C}_{j}, j=1,2,\dots ,m\right)\).

In the application, data are obtained for seven software projects \(({A}_{i}, i=1,2,\dots ,7)\) on six criteria \(\left({C}_{j}, j=1,2,\dots ,6\right)\), namely the C&K metrics introduced in Sect. 1. The considered criteria are WMC (\({C}_{1}\)), DIT (\({C}_{2}\)), NOC (\({C}_{3}\)), CBO (\({C}_{4}\)), RFC (\({C}_{5}\)), and LCOM (\({C}_{6}\)). All criteria are cost-type criteria. The initial decision matrix in Table 4 was created from the mean of the metric values obtained over all classes of each software project by the “Understand 6.0” static code analysis tool.

Table 4 Initial decision matrix
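
For illustration, a minimal sketch of how such a decision matrix could be assembled in practice, assuming the class-level metric values have been exported by the static analysis tool to a CSV file; the file name and column names (project, WMC, DIT, NOC, CBO, RFC, LCOM) are hypothetical:

```python
import pandas as pd

# Hypothetical export: one row per class, with a project identifier and the six C&K metrics.
class_metrics = pd.read_csv("class_metrics.csv")

# Average each metric over all classes of a project to obtain one row per
# alternative, i.e. the initial decision matrix [X] (projects x criteria).
criteria = ["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"]
decision_matrix = class_metrics.groupby("project")[criteria].mean()
print(decision_matrix)
```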

3.1.2 Step 1.1.2. Normalizing the initial decision matrix

The initial decision matrix is normalized for benefit-type and cost-type criteria using Eqs. (2) and (3), respectively.

$$n_{ij} = \frac{{x_{ij} - \min \left( {x_{j} } \right)}}{{\max \left( {x_{j} } \right) - \min \left( {x_{j} } \right)}}$$
(2)
$$n_{ij} = \frac{{\max \left( {x_{j} } \right) - x_{ij} }}{{\max \left( {x_{j} } \right) - \min \left( {x_{j} } \right)}}$$
(3)

where \({n}_{ij}\) is the normalized value of \({x}_{ij}\), \(\mathrm{min}({x}_{j})\) is the minimum value of the jth criterion over the alternatives, and \(\mathrm{max}({x}_{j})\) is the maximum value of the jth criterion over the alternatives.

The normalized initial decision matrix in Table 5 is obtained by Eq. (3) since all criteria are cost types.

Table 5 Normalized initial decision matrix
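
A minimal sketch of this normalization, assuming the decision matrix is stored as a NumPy array with alternatives in rows and cost-type criteria in columns (each criterion is assumed to take at least two distinct values):

```python
import numpy as np

def minmax_normalize_cost(X):
    """Eq. (3): min-max normalization for cost-type criteria (smaller is better)."""
    col_min = X.min(axis=0)
    col_max = X.max(axis=0)
    return (col_max - X) / (col_max - col_min)

# Illustrative values only (not the study's data): 3 alternatives, 3 criteria.
X = np.array([[12.0, 1.5, 0.4],
              [ 8.0, 2.0, 0.9],
              [10.0, 1.0, 0.6]])
N = minmax_normalize_cost(X)
```

For benefit-type criteria, Eq. (2) would be applied analogously, with the column minima subtracted from the raw values in the numerator.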

3.1.3 Step 1.1.3. Determining the level of relationship between the criteria

Pairwise correlation coefficients are calculated from the normalized decision matrix to determine the level of relationship between the criteria, and an \(m \times m\) correlation matrix is obtained. The Pearson correlation coefficient captures the linear relationship between two variables under the assumption that the variables are normally distributed [7]. In this study, since the criteria values of the alternatives are not normally distributed, the Spearman correlation coefficient based on rank orders, given in Eq. (4), was used to calculate the relationships between criteria. The correlation matrix for the criteria is given in Table 6.

$$\rho_{jk} = 1 - \frac{{6\mathop \sum \nolimits_{i = 1}^{n} d_{i}^{2} }}{{n\left( {n^{2} - 1} \right)}},\quad \left( {j,k = 1,2, \ldots ,m} \right)$$
(4)

where \(d_{i}\) is the difference between the ranks of the ith alternative on criteria \(j\) and \(k\).
Table 6 Relationship coefficient matrix
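
The rank correlations of Eq. (4) can be computed, for example, with SciPy's rank-correlation routine; a sketch assuming N is the normalized decision matrix from the previous step:

```python
from scipy.stats import spearmanr

# Spearman rank correlation between criteria (the columns of N).
# With a 2-D input and more than two columns, spearmanr returns an m x m matrix.
rho, _ = spearmanr(N, axis=0)
```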

3.1.4 Step 1.1.4. Calculating the amount of information (\({S}_{j}\))

The standard deviation is used to measure contrast intensity and to obtain objective criteria weights [70]. A criterion on which the scores of the alternatives differ widely carries more information and therefore receives a higher weight. Conversely, a criterion on which all alternatives perform the same provides no additional information and is of no use in the decision-making process [15].

The total information amount in the criteria is given in Eq. (5), and the measure of contrast intensity is given with the standard deviation calculated by Eq. (6).

$$S_{j} = \sigma_{j} \mathop \sum \limits_{k = 1}^{m} \left( {1 - \rho_{jk} } \right),\quad j = 1,2, \ldots ,m$$
(5)
$$\sigma_{j} = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {n_{ij} - \overline{n}_{j} } \right)^{2} }}{n}} ,\quad j = 1,2, \ldots ,m$$
(6)

The standard deviation and total information are given in Table 7.

Table 7 Values of (\(1-{\rho }_{jk}\)), \({S}_{j}\) and \({\sigma }_{j}\)

3.1.5 Step 1.1.5 Calculating the criteria weights (\({w}_{j}\))

The weight of each criterion is computed by Eq. (7), and the resulting \({w}_{j}\) values for the metrics are shown in Table 8.

$$w_{j} = \frac{{S_{j} }}{{\mathop \sum \nolimits_{k = 1}^{m} S_{k} }} ,\quad j = 1,2, \ldots ,m$$
(7)
Table 8 Criteria weights
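
Combining the quantities above, a sketch of Eqs. (5)-(7), assuming N is the normalized decision matrix and rho the criteria correlation matrix computed in the previous steps:

```python
import numpy as np

def critic_weights(N, rho):
    """CRITIC weights from a normalized decision matrix N (n x m)
    and an m x m criteria correlation matrix rho."""
    sigma = N.std(axis=0)                # Eq. (6): population standard deviation per criterion
    S = sigma * (1.0 - rho).sum(axis=1)  # Eq. (5): information amount S_j
    return S / S.sum()                   # Eq. (7): weights w_j

w_critic = critic_weights(N, rho)
```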

3.2 Phase 1.2 Determining criteria weights using entropy

3.2.1 Step 1.2.1 Determining initial decision matrix

Let \(\left[X\right]\) denote the initial decision matrix, represented as in Eq. (1). The initial decision matrix in Table 4 is used.

3.2.2 Step 1.2.2 Normalizing the initial decision matrix

The elements of the normalized decision matrix \([Z]\), denoted \({z}_{ij}\) for \(i=1,2,\dots ,n\) and \(j=1,2,\dots ,m\), give the normalized value of the \(j\)th criterion for the \(i\)th software project and are computed by Eq. (8).

$${z}_{ij}=\frac{{x}_{ij}}{\sum_{i=1}^{n}{x}_{ij}}$$
(8)

The normalized initial decision matrix in Table 9 is obtained by Eq. (8).

Table 9 Normalized initial decision matrix

3.2.3 Step 1.2.3 Calculating entropy values and importance weights

The entropy value of each criterion, \({e}_{j}, j=1,2,\dots ,m\), was calculated with Eq. (9) and is given in Table 10.

$$e_{j} = \frac{{ - \mathop \sum \nolimits_{i = 1}^{n} \left[ {z_{ij} \ln \left( {z_{ij} } \right)} \right]}}{\ln \left( n \right)}.$$
(9)

For each criterion, the importance weight \({w}_{j}, j=1,2,\dots ,m\), was calculated by Eq. (10) and is given in Table 10.

Table 10 Entropy value and importance weight for each criterion
$${w}_{j}=\frac{1-{e}_{j}}{\sum_{j=1}^{m}\left(1-{e}_{j}\right)}.$$
(10)
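
A sketch of the entropy weighting of Eqs. (8)-(10), assuming X is the initial decision matrix as a NumPy array with alternatives in rows and criteria in columns:

```python
import numpy as np

def entropy_weights(X):
    """Entropy-based objective weights for a decision matrix X (n x m)."""
    Z = X / X.sum(axis=0)                # Eq. (8): column-wise normalization
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        # Eq. (9): entropy of each criterion; 0*ln(0) terms are treated as 0.
        e = -np.nansum(Z * np.log(Z), axis=0) / np.log(n)
    d = 1.0 - e                          # degree of divergence of each criterion
    return d / d.sum()                   # Eq. (10): weights w_j

w_entropy = entropy_weights(X)
```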

3.3 Phase 2: Determining the rankings of software projects using the ARAS method

The ARAS method was introduced to the literature by Zavadskas and Turskis [69]. This method compares the utility function values of the decision alternatives with the utility function value of an optimal alternative added to the decision problem by the decision-maker [69].

3.3.1 Step 2.1 Formulating the initial decision matrix

The initial decision matrix is given in Table 4 using Eq. (1). In the ARAS method, unlike other MCDM methods, the initial decision matrix contains a row consisting of the optimal values of each criterion. If the optimal value of the \(j\)th criterion is unknown, then

$$x_{0j} = \left\{ {\begin{array}{*{20}c} {\mathop {\max }\limits_{i} x_{ij} ,} & {\quad {\text{if}}\,\mathop {\max }\limits_{i} x_{ij} \,{\text{is}}\,{\text{preferable}},} \\ {\mathop {\min }\limits_{i} x_{ij} ,} & {\quad {\text{if}}\,\mathop {\min }\limits_{i} x_{ij} \,{\text{is}}\,{\text{preferable}},} \\ \end{array} } \right.$$
(11)

As shown in Table 11, since the optimal values of the criteria used to measure software quality are not known and all criteria are cost types, the minimum value among the alternatives for each criterion was taken as the optimal value.

Table 11 Optimal values

3.3.2 Step 2.2 Normalizing the initial decision matrix

The initial decision matrix in Table 4 is normalized as in Eq. (12), and the normalized initial decision matrix in Table 12 is obtained.

$$\overline{x}_{ij} = \left\{ {\begin{array}{*{20}c} {\frac{{x_{ij} }}{{\mathop \sum \nolimits_{i = 0}^{n} x_{ij} }},} & {\quad {\text{benefit}}\,{\text{type}}\,{\text{criteria}}} \\ {\frac{{y_{ij} }}{{\mathop \sum \nolimits_{i = 0}^{n} y_{ij} }},} & {{\text{cost}}\,{\text{type}}\,{\text{criteria}}} \\ \end{array} } \right.$$
(12)

where \({y}_{ij}=\frac{1}{{x}_{ij}} .\)

Table 12 The normalized initial decision matrix
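
A sketch of Steps 2.1-2.2 for cost-type criteria, assuming X is the initial decision matrix as a NumPy array with strictly positive entries: the optimal row of Eq. (11) is taken as the vector of column minima, and Eq. (12) normalizes the reciprocals of the extended matrix column by column.

```python
import numpy as np

def aras_normalize_cost(X):
    """Prepend the optimal row (column minima, Eq. (11)) and apply the
    cost-type normalization of Eq. (12) via reciprocals."""
    X_ext = np.vstack([X.min(axis=0), X])  # row 0 is the optimal alternative A_0
    Y = 1.0 / X_ext                        # y_ij = 1 / x_ij
    return Y / Y.sum(axis=0)               # column-wise normalization

X_bar = aras_normalize_cost(X)
```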

3.3.3 Step 2.3 Calculating weighted normalized initial decision matrix

Let \(\left[\widehat{X}\right]\) denote the weighted normalized decision matrix, as in Eq. (13)

$$\left[ {\hat{X}} \right] = \left[ {\hat{x}_{ij} } \right]_{n \times m} = \left[ {\begin{array}{*{20}c} {\hat{x}_{11} } & {\hat{x}_{12} } & \cdots & {\hat{x}_{1m} } \\ {\hat{x}_{21} } & {\hat{x}_{22} } & \cdots & {\hat{x}_{2m} } \\ \vdots & \vdots & \cdots & \vdots \\ {\hat{x}_{n1} } & {\hat{x}_{n2} } & \cdots & {\hat{x}_{nm} } \\ \end{array} } \right]$$
(13)

where weighted normalized values of all the criteria are calculated by

$$\hat{x}_{ij} = \overline{x}_{ij} w_{j} ,\quad i = 0,1, \ldots ,n$$
(14)

where \({w}_{j}\) are the criteria importance weights obtained from CRITIC in Phase 1.1 and from Entropy in Phase 1.2; the corresponding weighted normalized decision matrices are given in Tables 13 and 14, respectively.

Table 13 Weighted normalized initial decision matrix with CRITIC
Table 14 Weighted normalized initial decision matrix with entropy

3.3.4 Step 2.4 Determining optimality function values

Optimality function values are calculated as follows

$$O_{i} = \mathop \sum \limits_{j = 1}^{m} \hat{x}_{ij} ,\quad i = 0,1,2, \ldots ,n$$
(15)

where \({O}_{i}\) is the value of the optimality function of ith alternative for each weighting method. Optimality function values are given in Table 15.

Table 15 Optimality function values

3.3.5 Step 2.5 Calculating utility degree

The utility degree of an alternative is determined by comparing its optimality function value with that of the ideal alternative \({O}_{0}\). The utility degree is calculated by Eq. (16) for each alternative.

$$K_{i} = \frac{{O_{i} }}{{O_{0} }},\quad i = 1, \ldots ,n$$
(16)

where \({O}_{i}\) and \({O}_{0}\) are the optimality function values from Eq. (15).
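
The remaining steps (Eqs. (14)-(16)) can be sketched as follows, assuming X_bar is the normalized matrix from Step 2.2 (with row 0 corresponding to the optimal alternative) and w is a weight vector obtained from either CRITIC or Entropy:

```python
import numpy as np

def aras_rank(X_bar, w):
    """ARAS Steps 2.3-2.5: weighting, optimality function, utility degree, ranking."""
    X_hat = X_bar * w              # Eq. (14): weighted normalized matrix
    O = X_hat.sum(axis=1)          # Eq. (15): optimality function values; O[0] is O_0
    K = O[1:] / O[0]               # Eq. (16): utility degrees of the real alternatives
    order = np.argsort(-K) + 1     # 1-based alternative numbers, best to worst
    return K, order

K_critic, order_critic = aras_rank(X_bar, w_critic)
K_entropy, order_entropy = aras_rank(X_bar, w_entropy)
```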

To measure and interpret the quality of software development projects with the C&K metrics, a threshold value would have to be determined for each metric; however, very different threshold values are reported in the literature. Therefore, we instead used the criteria weights given in Tables 8 and 10. According to Table 8, calculated by CRITIC, the NOC metric has the highest weight, whereas the criteria weights calculated by the Entropy method are close to each other. The utility degrees and ranks of the projects are given in Table 16.

Table 16 Utility degree in CRITIC and entropy

According to the results obtained, as shown in Table 16, Project 7 was found to be the highest-priority project among the 7 software development projects for both weighting methods. When the C&K metric values of Project 7 were examined, it was seen that its utility degree obtained from the MCDM method was the highest among the projects.

4 Results and discussion

Continuous improvement in technology-intensive areas requires that the relevant variables be measurable. In this context, the measurability of software metrics enables the use of quantitative decision-making methods and objective evaluations when improving software quality. This leads to the selection of the right project within the scope of continuous improvement, thus preventing loss of cost and time.

It is crucial to prioritize the projects for Six Sigma application in order to save time and allocate resources efficiently. In this study, we proposed a prioritization method for software development projects. Prioritization was performed based on the 6 C&K metrics. Since no C&K metric takes precedence over the others, it is not possible to prioritize the software projects simply by inspecting the C&K measurements; the MCDM-based prioritization methodology allows these 6 C&K metrics to be evaluated together analytically. As shown in Table 16, after applying our proposed methodology with both weighting methods, Project 7 was found to have the highest priority for Six Sigma application among the 7 software development projects.

The application of the methodology was shown step by step so that any practitioner can apply the proposed process easily. Our proposed method can be applied to all object-oriented software projects since the C&K metrics can easily be obtained by using static code analysis tools such as Understand and SonarQube. Since these metrics are obtained without user interpretation, the prioritization results are also objective, making the proposed methodology reliable as well.

We used CRITIC, Entropy, and ARAS as the MCDM methods; other MCDM methods may also be used for selection. We selected CRITIC, Entropy, and ARAS because they enable objective evaluation: they provide an objective ranking based solely on the values of the initial decision matrix.

Our proposed methodology fills the gap caused by the absence of a Six Sigma project selection methodology in the software development industry. We believe that it will help decision-makers reduce the effort and resources spent on quality improvement activities.

5 Conclusion

Six Sigma is a common approach used for quality improvement. Unfortunately, companies may not be able to apply Six Sigma to all of their projects due to limited time and resources, so they are obliged to make a selection among the projects to which they can apply Six Sigma. A methodological way of prioritizing projects for Six Sigma selection is therefore crucial for effective resource management.

In this study, we proposed a prioritization methodology for Six Sigma selection in software development projects. For this purpose, we utilized 7 completed industrial software development projects of a CMMI Level 3 certified company in Turkey. The proposed methodology is based on CRITIC, Entropy, and ARAS, which are MCDM methods. To the best of our knowledge, our study is the first to propose a prioritization methodology for Six Sigma selection in software development projects, and the first to apply the CRITIC, Entropy, and ARAS MCDM methods to the selection of software development projects. We used the C&K metrics to measure the quality of the software projects; these metrics provide an interpretation of software quality characteristics such as efficiency, complexity, understandability, reusability, and maintainability. The values of the C&K metrics were measured with the static code analysis tool Understand.

Since the projects selected in the study are from the same field, it is expected that the project selected for the implementation of Six Sigma as a result of the prioritization will also serve as a pilot for the improvement efforts of the other projects.

Our study is explained in a systematic way, giving the implementation steps. We believe that these steps will help software companies efficiently achieve their goals, such as quality, process improvement, resource allocation, and customer satisfaction.

In future work, we are planning to increase the number of projects and apply different MCDM methods to the different industrial sectors.