1 Introduction

Accreditation is a system that ensures that goods or services are provided according to certain standards. In higher education, accreditation is an assurance system that aims to demonstrate that a higher education institution or program meets crucial standards. Accreditation is long-term and is based on periodic internal and external evaluations (Aktan & Gencel, 2010). During the accreditation process, an accreditation panel must evaluate whether the higher education program meets the predetermined standards (Hamutoğlu et al., 2020). Accreditation studies in engineering education have become an increasingly important tool for improving engineering programs and facilitating the circulation of engineers across countries (Özgüler et al., 2013). There are national institutions as well as global accreditation organizations such as ENAEE (European Network for Accreditation of Engineering Education), IEA (International Engineering Alliance), and ABET (Accreditation Board for Engineering and Technology). In Turkey, the Engineering Programs Evaluation and Accreditation Board (MÜDEK) was established in 2002 as an independent civil initiative to systematically address the evaluation of engineering education, improve its quality, and foster self-evaluation in institutions that provide engineering education. It was officially accepted as the national accreditation body for engineering programs in Turkey on 16 November 2007 (Platin, 2011).

MÜDEK evaluations aim to determine whether a program applying for accreditation meets a set of criteria. The basic criteria are published in the “Engineering Undergraduate Programs Evaluation Criteria, Version 2.2” document on MÜDEK’s official website. The third criterion is “Program Outcomes,” defined as “expressions defining the knowledge, skills, and behaviors that students must acquire by the time they graduate from the program.” Under this heading, criterion 3.3 states that “engineering programs should prove that students who have reached the graduation stage attain the program outcomes” (MÜDEK, 2020).

Among the deficiencies encountered in MÜDEK evaluations are the lack of a systematic assessment and evaluation system for the program outcomes criterion and the fact that the measurement methods used consisted only of questionnaires (Platin, 2011). Insufficient evidence that students reaching the graduation stage attain the program outcomes is the second most frequently observed deficiency, at 26% (Özgüler et al., 2013).

The main goal of this study is to develop a decision support system that helps program managers and evaluators determine more precisely the program outcome achievement levels of individual students who have reached the graduation stage. The developed system also aims to guide the search for systematic assessment and evaluation approaches by programs preparing for the accreditation process.

Section 2 presents a literature survey. The methodology is presented in Section 3. Section 4 discusses the findings and results. The main results and benefits of the proposed tool are summarized in the last section.

2 Literature review

Several studies in the literature address the systematic measurement and monitoring of the level at which engineering programs achieve their program outcomes. For example, Politis and Siskos (2004) proposed an institutional evaluation system based on the principles of multi-criteria decision making (MCDM) analysis for the Production Engineering and Management department of the Technical University of Crete. Abu-Jdayil and Al-Attar (2010) proposed a curriculum assessment system using direct and indirect tools for ABET accreditation of a Chemical Engineering program. As direct tools, they evaluated the course/curriculum, graduation exams, and success scores of the main courses; as indirect qualitative tools, they used an internship counselor questionnaire, a graduate survey, an employer survey, graduation interviews with students, and advisory board meetings (Abu-Jdayil & Al-Attar, 2010).

Aldowaisan and Allahverdi (2016a) used control charts developed for statistical process control in the accreditation quality studies of an Industrial and Management Systems Engineering program. They applied control charts to academic staff reports and graduate student survey scores to monitor the performance values of the 11 program outcomes; exceeding the control limits is treated as an early-warning signal (Aldowaisan & Allahverdi, 2016a). In another study, Aldowaisan and Allahverdi (2016b) showed how tools such as alumni surveys, alumni databases, and alumni meetings can be summarized to indicate the level at which the program educational objectives are achieved (Aldowaisan & Allahverdi, 2016b).

Kirkavak et al. (2019) presented a simple model for an industrial engineering program that measures individual students’ success with respect to the program outcomes at the micro level and the program’s success at the macro level. In the proposed model, an accreditation scorecard is created from the grades taken by the students and the level of relationship between the courses and the program outcomes, taking the ECTS credits of the courses into account (Kirkavak et al., 2019).

Examining the studies in the literature, we can see that they aim to measure the level of achieving the program outcomes for a program as a whole. The one study that tried to reveal students’ success individually used the simple weighted average method (Kirkavak et al., 2019). To the best of our knowledge, no study has used an MCDM methodology to develop a DSS for determining students’ program outcome achievement levels. In contrast, in our study a decision support system is developed using the MOORA technique to rank the POs more precisely. Decision support systems are mainly of two types, rule-based and mathematical model-based; we combine them in one DSS, which incorporates both the educational requirements with their rules and the MOORA model that conveniently converts these rules into measurement scores. Our study makes five main contributions: (i) the proposed MOORA model reduces the calculation steps required of the user in the evaluation process; (ii) the MOORA method requires no additional calculation coefficients, which reduces the complexity of the mathematical calculations; (iii) the study proposes a flexible and user-friendly decision support system (DSS); (iv) the MOORA method is sufficiently robust despite needing fewer application steps, reaches efficient results compared with other MCDM methods such as TOPSIS, VIKOR, GRA, and AHP, and has a simple system structure that is open to development and can be modified for process improvement in different engineering education programs; (v) most importantly, unlike other studies, we present a systematic, objective, and consistent way to determine individual PO achievement levels.

3 Methodology

3.1 Accreditation process in Turkey

The Association for Evaluation and Accreditation of Engineering Programs (MÜDEK) is an organization that operates to enhance the quality of engineering education through the accreditation and evaluation of, and the provision of information services for, engineering education programs in various disciplines.

MÜDEK has been a member of the European Network for Accreditation of Engineering Education (ENAEE) since November 17, 2006, and a full signatory of the Washington Accord (WA) since June 15, 2011. MÜDEK conducts two types of evaluations for institutions that apply voluntarily: “General Reviews (GR)” and “Interim Reviews (IR).” A GR is an evaluation covering all criteria for institutions applying for the first time or after five years of accreditation. It consists of three phases: before the institution visit, the institution visit itself, and after the institution visit. The first of these serves to qualitatively evaluate the elements that cannot be documented in the self-evaluation report; the second, to examine in detail the documents requested from the institution as previously agreed; and the last, to inform the institution of a preliminary assessment of its strengths and inadequacies. Activities after the institution visit continue until the institution is notified of the outcome of the accreditation decision meeting. At this stage, all findings and additional opinions identified during the visit are analyzed in detail and included in the final report, and consistent decisions are made by comparing them with evaluations of similar institutions. An IR is an assessment for programs accredited for a short term, usually two years; emphasis is placed on the weaknesses, concerns, and observations identified during the GR (MÜDEK, 2019).

The evaluations made during the accreditation process are based on the program outcomes. The acronym PO stands for Program Outcomes, not Program Objectives; the two terms have different meanings in MÜDEK accreditation. Program Objectives define the strategic targets of the program: the career goals and expectations that graduates can achieve within 3–5 years after graduation. Program Outcomes, by contrast, refer to the knowledge, skills, and behaviors that students should acquire by the time they graduate. The Program Outcomes (POs) (Criterion 3) are one of MÜDEK’s evaluation criteria for four-year (first-cycle) engineering programs and specify the knowledge, skills, and attitudes that students have to acquire. These outcomes were adapted by MÜDEK in 2003 from the ABET (a)-to-(k) outcomes. Every engineering program to be evaluated must define its POs, including the mandatory MÜDEK outcomes given below. For the “X” engineering program, 11 POs were determined, including the program outcomes published in version 2.2. The relevant POs are defined below (Özgüler et al., 2013):

“ …

• PO1: Adequate knowledge in subjects specific to mathematics, science, and related engineering discipline; ability to use theoretical and applied knowledge in these fields in complex engineering problems.

• PO2: Ability to identify, define, formulate, and solve complex engineering problems; ability to select and apply appropriate analysis and modeling methods for this purpose.

• PO3: Ability to design a complex system, process, device, or product under realistic constraints and conditions to meet certain requirements; ability to apply modern design methods for this purpose.

• PO4: Ability to develop, select and use modern techniques and tools required for the analysis and solution of complex problems encountered in specific engineering applications; ability to use information technologies effectively.

• PO5: Ability to design experiments, conduct experiments, collect data, analyze and interpret results to investigate complex engineering problems or research topics specific to X Engineering.

• PO6: Ability to work effectively in disciplinary and multi-disciplinary teams; ability to work individually.

• PO7: Ability to communicate effectively in Turkish, both orally and in writing; knowledge of at least one foreign language; the ability to write and understand written reports effectively, prepare design and production reports, make effective presentations, give and receive clear and understandable instructions.

• PO8: Awareness of the necessity of lifelong learning; the ability to access information, to follow developments in science and technology, and to constantly renew oneself.

• PO9: Acting by ethical principles, professional and ethical responsibility awareness; information about standards used in engineering applications.

• PO10: Information about business life practices such as project management, risk management, and change management; awareness of entrepreneurship, innovation; information about sustainable development.

• PO11: Information about the global and social effects of X Engineering applications on health, environment, and safety and the problems of the age reflected in the field of engineering; awareness of the legal consequences of engineering solutions.

… ”

Higher education program qualifications at the international level were developed according to the European Qualifications Framework (EQF) and completed under the Bologna Process. The framework aims to increase transparency, recognition, and mobility in the higher education systems of the signatory countries of the Bologna Process (CoHE, 2021). In 2008, Turkey defined its national qualifications in the National Qualifications Framework for Higher Education in Turkey (NQF-HETR; TYYC is its Turkish acronym) and associated it with the EQF (Hamutoğlu et al., 2020). The definition of the NQF-HETR is crucial for many reasons, such as clarifying qualifications and achievements in higher education, clarifying lateral and vertical transfers between degrees, and increasing the attractiveness and recognition of Turkish higher education abroad. Under the NQF-HETR, the academic and vocational qualifications of the main fields and sub-fields are defined separately. The POs published by MÜDEK are fully compatible with the NQF-HETR Engineering Area Undergraduate Qualifications (Academic Based), whose pilot applications started in 2011; thanks to this full compatibility, MÜDEK-accredited programs are guaranteed to meet the NQF-HETR qualifications (Platin, 2011). The MÜDEK accreditation process was preferred as an exemplary practice because MÜDEK is a member of the world’s umbrella accreditation organizations, fully complies with the ABET and EQF criteria, and is the only institution authorized in Turkey to accredit engineering programs: it is a full member of ENAEE and a full Member Signatory of the Washington Accord under the umbrella of the International Engineering Alliance. We therefore chose the MÜDEK criteria when developing a DSS to determine engineering students’ individual program outcome achievement levels during the accreditation process.

3.2 MOORA method

We developed a flow chart to select the most suitable MCDM method for our study (Fig. 1). The path highlighted in red in Fig. 1 shows the route followed to choose the most appropriate method. As a result, the Multi-Objective Optimization by Ratio Analysis (MOORA) method was chosen as the multi-criteria decision-making method for grading the provision levels of the POs.

Fig. 1 MCDM model selection flow chart (adapted from Koçak et al., 2021; Ic & Şimşek, 2019; Sen and Yang, 1998)

The MOORA method, one of the multi-criteria decision making (MCDM) methods widely used in the literature, was developed by Brauers and Zavadskas (2006). Its main principle is to rank the alternatives from best to worst. Its solution process is shorter, more flexible, and more easily programmable than those of other MCDM methods.

The advantage of the MOORA method becomes evident when the number of calculations required to reach the MOORA index (Yi) is considered (Table 1). As the number of alternatives and criteria in the problem increases, other methods (such as TOPSIS, VIKOR, and GRA) will require more mathematical calculations.

Table 1 Comparisons of some MCDM methods

Table 1 presents a detailed comparison between the distance-based MCDM methods (TOPSIS, VIKOR, and GRA) and the hierarchical decision-making model AHP. According to Table 1, the MOORA method performs better than the other methods in solving complex multi-criteria decision-making problems, which is why we preferred it for the DSS in this study. It carries a lighter mathematical computational load and offers modeling and computational simplicity, which in turn eases the coding process. MOORA needs only three computational steps to reach the ranking results, whereas its closest competitor, TOPSIS, needs six; its computation time is therefore significantly shorter than that of the other MCDM methods. Moreover, the VIKOR and GRA methods require the extra coefficients v and ξ, respectively, while MOORA needs no additional coefficients. If necessary, new criteria or alternatives can easily be added to the MOORA model; in AHP, by contrast, adding a new alternative or criterion is difficult and requires a completely new model development process. All of these advantages make the MOORA method simple to program and code (Table 1).

The MOORA method has been applied to a wide range of problems, from selection problems (Karande & Chakraborty, 2012) to credit evaluation problems (Görener et al., 2013). Iç (2020) proposed an integrated credit evaluation model using the MOORA and GP methods (Iç, 2020). Thao (2021) proposed a ranking function based on polynomial and exponential functions to rank fuzzy numbers with the MOORA method (Thao, 2021). Aytaç Adalı and Tuş Işık (2017) addressed the laptop selection problem using the MOORA-integrated multiplicative form (MULTIMOORA) and multi-objective optimization based on simple ratio analysis (MOOSRA) (Aytaç Adalı & Tuş Işık, 2017). Karande and Chakraborty (2012) presented a material selection model using the MOORA method (Karande & Chakraborty, 2012). Moslem and Çelikbilek (2020) presented an approach integrating the Analytic Hierarchy Process (AHP) with MOORA based on grey optimization, with a case study of the public bus transport system in Budapest, Hungary (Moslem & Çelikbilek, 2020). Dorfeshan et al. (2018) proposed a decision methodology for critical path determination considering project cost, risk, quality, and safety; they used interval type-2 fuzzy sets (IT2FSs) to address uncertain project environments and applied versions of the MULTIMOORA and MOOSRA methods extended to IT2F uncertainty (Dorfeshan et al., 2018). Luo et al. (2019) proposed a distance-based intuitionistic multiplicative MULTIMOORA method and illustrated its applicability with a medical equipment selection case study (Luo et al., 2019).

3.2.1 Application steps of the MOORA method

The solution process of the MOORA method consists of four steps, explained below. The first step, creating the decision matrix, is an input step common to all MCDM methods; the remaining three are the computational steps counted in Table 1.

Step 1: Creating the Decision Matrix

The program outcomes to be ranked are placed in the rows of the decision matrix as the alternatives. The courses used in decision making are placed in the columns as the evaluation criteria. The decision matrix A is the initial matrix created by the decision-maker. In the decision matrix in Eq. (1), i = 1, ..., m indexes the alternatives and j = 1, ..., n the evaluation criteria.

$$A=\begin{bmatrix}a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \cdots & a_{mn}\end{bmatrix},\kern0.5em i=1,\dots, m;\ j=1,\dots, n$$
(1)

The course data showing the PO provision levels obtained for each course are transferred to the MOORA method. In this application of the MOORA method, the POs represent the alternatives and the courses represent the criteria. Within the scope of the Bologna Process, engineering programs were asked to prepare a relationship matrix between the courses in the program and the POs on a course basis (Table 2). The ratings in Table 2 take values between 0 and 3, similar to the literature (Felder & Brent, 2003): zero (0) means “No relationship (does not support),” one (1) “Low level of relationship,” two (2) “Moderate relationship,” and three (3) “High level of relationship.”

Table 2 Relationships between courses and POs

Table 3 presents the data of a student who has reached the graduation stage in a program of the engineering faculty, as used by the MOORA method to determine the PO provision levels. Each element of the decision matrix was created by multiplying the numerical value of the student’s letter grade by the level of relationship, given in Table 2, between the course and the relevant PO (see Appendix 1). The letter grades used in the program and their numerical equivalents are listed in Table 4; a small sketch of this construction follows the table captions.

Table 3 Student data for the decision matrix
Table 4 Letter grades and values
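To make Step 1 concrete, the following minimal Python sketch builds the decision matrix from hypothetical data (3 POs and 4 courses instead of the real 11 POs); all relationship levels and grades here are illustrative assumptions, not the program’s actual data.

```python
import numpy as np

# Hypothetical relationship matrix (Table 2 style): entry [i][j] is the
# 0-3 relationship level between PO i and course j.
relationship = np.array([
    [3, 2, 0, 1],
    [1, 3, 2, 0],
    [0, 1, 3, 2],
])

# Hypothetical numerical grade values (Table 4 style) for the 4 courses.
grade = np.array([2.0, 3.0, 3.5, 2.0])

# Decision matrix A (Eq. 1): grade value times relationship level.
# Broadcasting scales each column j by grade[j].
A = relationship * grade
print(A)
```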
Step 2: Creating the Normalized Decision Matrix

Using the linear normalization principle, the Normalized Decision Matrix (R), consisting of the normalized values of the elements of matrix A, is determined (Table 5). For the normalization of the elements of matrix A, the linear normalization formula in Eq. (2) is used, where a_j* represents the largest-valued element of criterion j.

Table 5 Normalized matrix
Table 6 Weighted normalized matrix
$${r}_{ij}=\frac{a_{ij}}{a_j^{\ast }},\kern0.5em i=1,\dots, m;\ j=1,\dots, n$$
(2)

The R matrix is obtained as in Eq. (3).

$$R=\begin{bmatrix}r_{11} & r_{12} & \cdots & r_{1n}\\ r_{21} & r_{22} & \cdots & r_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ r_{m1} & r_{m2} & \cdots & r_{mn}\end{bmatrix},\kern0.5em i=1,\dots, m;\ j=1,\dots, n$$
(3)

Essentially, the MOORA method uses vector normalization; however, we use linear normalization in the normalization step. Linear normalization is more suitable than vector normalization for calculating program outcome achievement because it supports a comparative analysis between the best-graded program outcome and the others. We therefore designate the best PO as the ideal (threshold) level to be reached by the other POs.
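A minimal sketch of this normalization step, continuing the hypothetical data above (the matrix A is carried over from the Step 1 sketch):

```python
import numpy as np

# Decision matrix from the Step 1 sketch (3 POs x 4 courses).
A = np.array([
    [6.0, 6.0, 0.0,  2.0],
    [2.0, 9.0, 7.0,  0.0],
    [0.0, 3.0, 10.5, 4.0],
])

# Linear normalization (Eq. 2): divide each column (course) by its largest
# element, so the best-scoring PO becomes the threshold value 1 for that course.
col_max = A.max(axis=0)
col_max[col_max == 0] = 1.0  # guard: courses unrelated to every PO
R = A / col_max
print(R)
```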

Step 3: Creating the Weighted Normalized Decision Matrix

First, the weight values wi for the evaluation criteria are determined. The important point here is that the sum of the weights over all criteria must equal 1, as in Eq. (4).

$$\sum_{i=1}^n{w}_i=1$$
(4)

The weights of the criteria, namely the courses, were determined according to the ECTS values of the courses. The weight of a course (wi) is calculated as in Eq. (5) by simply dividing the ECTS value of the course by the total ECTS value of all courses taken by the student up to graduation.

$${w}_i=\frac{ECTS_i}{\sum_{i=1}^{\# COURSE}{ECTS}_i},\kern0.5em \forall i=1,\dots \# COURSE$$
(5)

where ECTSi is the course credit value and #COURSE is the number of courses the student completed by graduation.
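A small sketch of the weight calculation in Eq. (5), with hypothetical ECTS values:

```python
# Hypothetical ECTS credits of the 4 courses in the running example.
ects = [6, 5, 4, 5]
total_ects = sum(ects)

# Eq. (5): each course's weight is its ECTS share of the total.
w = [e / total_ects for e in ects]
print(w, sum(w))  # [0.3, 0.25, 0.2, 0.25], sums to 1 as Eq. (4) requires
```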

The normalized values in each column are multiplied by the weight of the corresponding criterion, and the weighted normalized decision matrix (V) is obtained:

$$V=\begin{bmatrix}w_1{r}_{11} & w_2{r}_{12} & \cdots & w_n{r}_{1n}\\ w_1{r}_{21} & w_2{r}_{22} & \cdots & w_n{r}_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ w_1{r}_{m1} & w_2{r}_{m2} & \cdots & w_n{r}_{mn}\end{bmatrix},\kern0.5em i=1,\dots, m;\ j=1,\dots, n$$
(6)
$$V=\begin{bmatrix}z_{11} & z_{12} & \cdots & z_{1n}\\ z_{21} & z_{22} & \cdots & z_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ z_{m1} & z_{m2} & \cdots & z_{mn}\end{bmatrix}$$
(7)
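Here zij = wj · rij. Continuing the running sketch, the weighting step of Eqs. (6) and (7) is a single column-wise multiplication:

```python
import numpy as np

# Normalized matrix R and ECTS weights w from the earlier sketches.
R = np.array([
    [1.0,   6 / 9, 0.0,      0.5],
    [2 / 6, 1.0,   7 / 10.5, 0.0],
    [0.0,   3 / 9, 1.0,      1.0],
])
w = np.array([0.3, 0.25, 0.2, 0.25])

# Weighted normalized matrix V (Eqs. 6-7): scale column j by w[j].
V = R * w
print(V)
```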
Step 4: Calculating the MOORA Index (Yi)

Weighted normalized data are added in the case of maximization (benefit-type criteria) and subtracted in the case of minimization (cost-type criteria) (Eq. 8).

$${Y}_i=\sum_{j=1}^g{z}_{ij}^{\ast }-\sum_{j=g+1}^n{z}_{ij}^{\ast }$$
(8)

where g is the number of criteria to be maximized, (n − g) is the number of criteria to be minimized, and Yi is the normalized assessment value of alternative i (here, PO i) with respect to all criteria.

All courses in our study are benefit-type criteria, so Eq. (8) reduces to Eq. (9):

$${Y}_i=\sum_{j=1}^n{z}_{ij}^{\ast }$$
(9)

Finally, the Yi values for the proposed case study are easily calculated, as shown in Table 7; a sketch of this final step is given after the table caption.

Table 7 Individual PO levels are calculated by the MOORA method
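A sketch of this final step, continuing the running example: with benefit-type criteria only, Yi is a plain row sum (Eq. 9), and sorting the Yi values yields the PO ranking.

```python
import numpy as np

# Weighted normalized matrix V from the Step 3 sketch (3 POs x 4 courses).
V = np.array([
    [0.3, 1 / 6,  0.0,    0.125],
    [0.1, 0.25,   2 / 15, 0.0],
    [0.0, 1 / 12, 0.2,    0.25],
])

# Eq. (9): the MOORA index of each PO is the sum of its row.
Y = V.sum(axis=1)

# Rank the POs from best to worst by their Yi values.
for rank, idx in enumerate(np.argsort(-Y), start=1):
    print(f"Rank {rank}: PO{idx + 1} (Y = {Y[idx]:.4f})")
```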

3.3 Development of DSS

A Decision Support System (DSS) can be defined as a computer-based solution used to support complex decision-making and problem-solving; its main purpose is to assist decision-makers (Shim et al., 2002). A classic DSS, consisting of a database, a model base, and an interface that provides reports and graphs to the user, was designed using MS Access software. The developed DSS was required to support managers in making the following decisions:

  • What are the individual program output levels of the students at the graduation stage?

  • What is the position of students on the same graduation date relative to each other?

  • What is the effect of student preferences in technical and social elective courses on PO levels?

The entity-relationship (ER) diagram of the database designed for the developed DSS is shown in Fig. 2.

Fig. 2 ER diagram

The ER diagram in Fig. 2 uses crow’s foot notation. There are six tables in total in the database. The crow’s foot symbols used in Fig. 2 are explained in Table 8. We used one-to-many relationships between the tables in the database.

Table 8 The symbols and explanations used in Fig. 2

We structured the DSS to be as modular and flexible as possible, so that it can be used in different universities and programs. Detailed information about the purposes of the tables in the ER diagram (Fig. 2) is given below; a schema sketch follows the table descriptions.

tblPO: It stores the names of the program outputs (PO1, PO2, etc.) and short descriptions of the outcomes.

tblCoursePO: It stores the degree-of-relationship values given in Table 2 between each course in the program and each PO.

tblCourses: It stores data about the courses in the program (course code, course name, ECTS value of the course, etc.).

tblStundentSGraduate: It stores the personal data of the graduate candidate (student number, name, surname, graduation date, etc.).

tblStudentScores: It stores the courses (course code) and letter grades (grade) taken by the student until graduation.

tblLetterGrade: Stores letter grades and coefficient values. Since these values may vary from institution to institution, they are stored in a separate table.
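As an illustration of the six-table design, the following sketch recreates the schema in SQLite (the actual DSS uses MS Access with SQL and VBA); every column name and type beyond the table names listed above is an assumption, and the example query shows how the raw decision-matrix entries of Step 1 can be produced by a join.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblPO (
    po_id   INTEGER PRIMARY KEY,
    po_name TEXT,                 -- e.g. 'PO1'
    po_desc TEXT                  -- short description of the outcome
);
CREATE TABLE tblCourses (
    course_code TEXT PRIMARY KEY,
    course_name TEXT,
    ects        INTEGER
);
CREATE TABLE tblCoursePO (        -- relationship levels of Table 2
    course_code TEXT REFERENCES tblCourses(course_code),
    po_id       INTEGER REFERENCES tblPO(po_id),
    relation    INTEGER CHECK (relation BETWEEN 0 AND 3)
);
CREATE TABLE tblStundentSGraduate (  -- table name kept as in the DSS
    student_no    TEXT PRIMARY KEY,
    name          TEXT,
    surname       TEXT,
    graduate_date TEXT
);
CREATE TABLE tblLetterGrade (
    letter TEXT PRIMARY KEY,
    value  REAL                   -- numerical coefficient (Table 4)
);
CREATE TABLE tblStudentScores (
    student_no  TEXT REFERENCES tblStundentSGraduate(student_no),
    course_code TEXT REFERENCES tblCourses(course_code),
    grade       TEXT REFERENCES tblLetterGrade(letter)
);
""")

# Example query: raw decision-matrix entries (grade value x relation level)
# for one student, one row per (PO, course) pair; the student number is
# a placeholder.
rows = conn.execute("""
    SELECT cp.po_id, s.course_code, lg.value * cp.relation AS entry
    FROM tblStudentScores AS s
    JOIN tblCoursePO    AS cp ON cp.course_code = s.course_code
    JOIN tblLetterGrade AS lg ON lg.letter = s.grade
    WHERE s.student_no = ?
""", ("20180001",)).fetchall()
```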

In the model base of the DSS, a wide variety of models can be used to analyze the stored data (Laudon & Laudon, 2012). We incorporated the MOORA methodology into the developed DSS: the MOORA steps described in Section 3.2 are implemented with SQL (Structured Query Language) queries and programs written in VBA (Visual Basic for Applications). The main menu of the developed software is shown in Fig. 3. The settings section on the left side of the main menu provides options for adding a new course to the program (“New Course”), listing the courses in the program (“Course List”), editing the program outcomes (“Program Outcomes-PO”), entering the degrees of relationship given in Table 2 (“PO and Course Relationship”), and changing the letter grade values and coefficients (“Letter Grades”). Thus, the settings can be adapted to the characteristics of each program.

Fig. 3 Main menu

On the right side of the main menu in Fig. 3, we can add a graduated student’s information (“New Graduate Student”) to the DSS and transfer the student’s grades. The “Enter Scores” button is clicked to enter the personal information and scores for the four-year period from the main menu (Fig. 4). The courses and letter grades taken by the student can be selected from the selection boxes. The form in Fig. 4 is also used to compare against and verify the student’s transcript. The grade point average (GPA) is calculated automatically as course codes and letter grades are entered on the form. When a letter grade is entered, the numerical value given in Table 4 is displayed in the “Grade” column, and the ECTS credit value of the selected course appears in the “ECTS” column. The “Point” column is the product of the “Grade” and “ECTS” columns. The totals of the “ECTS” and “Points” columns are shown in the “Total ECTS” and “Total Points” fields, and the GPA is determined by dividing the “Total Points” value by the “Total ECTS” value; a small sketch of this calculation follows the figure.

Fig. 4 Letter grades registration screen
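A minimal sketch of the GPA calculation described above, with hypothetical course data:

```python
# Each tuple: (course code, ECTS, numerical grade value) - hypothetical data.
courses = [("MAT101", 6, 2.0), ("FIZ101", 5, 3.0), ("END201", 4, 3.5)]

# 'Point' per course is Grade x ECTS; GPA = Total Points / Total ECTS.
total_points = sum(ects * grade for _, ects, grade in courses)
total_ects = sum(ects for _, ects, _ in courses)
print(f"GPA = {total_points / total_ects:.2f}")  # (12 + 15 + 14) / 15 = 2.73
```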

Once the grade information of a graduated student has been entered, the student’s individual levels of achieving the program outcomes (“Show Individuals PO”) can be viewed, for any student selected from the student list accessed from the main menu, on a form as in Fig. 5.

Fig. 5 Levels of individually achieving the program outcomes

4 Findings

The developed DSS was tested for an imaginary student who received an average letter grade (C, 2.0/4.0) in all courses (Fig. 4). The detailed sample data set and related sub-calculations are provided in Appendix 1. The PO values and ranks are summarized in Table 9.

Table 9 An example calculation

4.1 Comparison with simple arithmetic average method

In the literature, some researchers recommend the simple arithmetic (weighted) average method for calculating a student’s PO levels at graduation (Kirkavak et al., 2019). We calculated this measure for the same student using Eq. (10) (Table 9). In Eq. (10), Gradej is the numerical value of the letter grade received for course j, Relationshipji is the level of relationship between course j and PO i, ECTSj is the ECTS credit of course j, and #COURSE is the number of courses the student completed by graduation. A sketch of this calculation follows the equation.

$${PO}_i=\frac{\sum_{j=1}^{\# COURSE}{Relationship}_{ji}\cdot {ECTS}_j\cdot {Grade}_j}{\sum_{j=1}^{\# COURSE}{Relationship}_{ji}\cdot {ECTS}_j\cdot 4},\kern0.75em \forall i=1,\dots, m$$
(10)
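A sketch of Eq. (10), reusing the hypothetical data of the MOORA sketches; note that for a flat transcript (all grades equal) every PO receives the same value, which is the shortcoming discussed below.

```python
import numpy as np

# Hypothetical relationship levels (3 POs x 4 courses), ECTS, and grades.
relationship = np.array([
    [3, 2, 0, 1],
    [1, 3, 2, 0],
    [0, 1, 3, 2],
])
ects = np.array([6, 5, 4, 5])
grade = np.array([2.0, 3.0, 3.5, 2.0])

# Eq. (10): weighted average of grades, normalized by the maximum grade 4.
weight = relationship * ects                    # Relationship_ji * ECTS_j
po = (weight * grade).sum(axis=1) / (weight * 4).sum(axis=1)
print(po)  # a flat transcript (e.g. all C) would give identical PO values
```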

As Table 10 shows, the simple arithmetic average method does not produce discriminating results for the student’s PO achievement levels. Eq. (10) compares the POs against a threshold under the assumption that a student who receives the maximum letter grade (A) in the related courses has high achievement in the PO; in effect, it simply derives another letter grade from the grades of the related courses. The results obtained in our case study are therefore unsuitable for comparative analysis between the POs. The MOORA-based DSS, by contrast, presents more acceptable results by evaluating the student relative to each PO.

Table 10 Results obtained by simple arithmetic mean or weighted average

4.2 Comparative analysis

As a further analysis, we compared the obtained results with TOPSIS results, using Spearman’s rank correlation test to compare the MOORA ranking with the TOPSIS ranking. The Spearman rank correlation test computes two parameters that measure the level of correlation between different rank sets, according to Eqs. (11)–(13) (Iç, 2020):

$${d}^k={x}^k-{y}^k,\kern0.5em k=1,\dots, K$$
(11)
$${r}_s=1-\left\{6.\left[\ \frac{\sum_{k=1}^K{\left({d}^k\right)}^2}{K.\left({K}^2-1\right)}\ \right]\right\}$$
(12)
$$Z={r}_s.\sqrt{\left(K-1\right)}$$
(13)

where dk is the difference between the k-th elements of the two rank sets, K is the number of data points, rs is the consistency (rank correlation) measure, and Z is the test statistic.
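A small sketch of the test in Eqs. (11)–(13), applied to hypothetical MOORA and TOPSIS rank sets for 11 POs:

```python
import math

def spearman(x, y):
    """Rank correlation r_s and test statistic Z per Eqs. (11)-(13)."""
    K = len(x)
    d2 = sum((xk - yk) ** 2 for xk, yk in zip(x, y))  # squared d^k, Eq. (11)
    r_s = 1 - (6 * d2) / (K * (K ** 2 - 1))           # Eq. (12)
    z = r_s * math.sqrt(K - 1)                        # Eq. (13)
    return r_s, z

# Hypothetical PO rankings (1 = best) produced by MOORA and TOPSIS.
moora  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
topsis = [1, 2, 4, 3, 5, 6, 8, 7, 9, 10, 11]
r_s, z = spearman(moora, topsis)
print(f"r_s = {r_s:.3f}, Z = {z:.3f}")  # 0.982 > 0.5 and 3.105 > 1.645
```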

If the calculated rs value is higher than 0.5, we can conclude that the two rank sets are significantly correlated. In addition, the Z value must be higher than 1.645 (the table value at 95% confidence for the normal distribution) for a significant correlation between the rank sets. The calculated rs and Z values show that the MOORA and TOPSIS results are statistically correlated and that all ranking results are statistically similar (Table 11). The MOORA method thus not only provides sensible ranking results but is also an easily programmable tool for the developed DSS (Fig. 6).

Table 11 Comparison of TOPSIS and MOORA

Fig. 6 Comparative analysis of ranking differences

On the other hand, some of the PO rankings differ between the methods. One reason, as Table 12 suggests, is that the weighted normalized values of PO3, PO6, PO7, and PO10 are very close to each other, so these POs are sensitive to the minor differences arising from the mathematical calculations of the different methods. This is not a crucial problem for the ranking results: the comparative analysis and Spearman’s rank correlation test give statistically similar ranking results across the methods.

Table 12 Weighted normalized values summation results for POs

5 Conclusions

As can be seen from the findings presented in Section 4, the PO level values obtained using simple average methods are equal to one another; therefore, program managers, in their role as decision-makers, cannot obtain sufficient support for curriculum changes from them. The proposed DSS yields more precise rating results to support managers, and with it one can easily see how the PO levels and ratings change under different social and elective course preferences.

In this study, a DSS was developed for the accreditation studies of the industrial engineering program of the engineering faculty of a university in Turkey. The decision support system is structured to systematically measure and evaluate a graduating student’s achievement of the program outcomes. The developed DSS contributes to the continuous improvement of the education system by providing reliable, scientifically based models for monitoring and controlling all steps of the process.