1 Introduction

Multi-Criteria Decision Analysis (MCDA), or Multi-Criteria Decision-Making (MCDM), methods have attracted the attention of many researchers during the last few decades. The methods that have emerged in this regard sometimes provide effective results even through small changes to existing ones. MCDM is concerned with structuring and solving decision and planning problems involving multiple criteria. The goal is to assist decision-makers who face such challenges. Unfortunately, in most cases, decision-makers are unable to arrive at a single objectively best decision; the final decision therefore becomes preference-based.

It is hard to deny how swiftly technology is being adopted by industry, forcing businesses and organizations to quickly learn and adapt their decision-making processes. Decision-making involves a multifaceted process of gathering, interpreting, and analyzing data to choose a course of action among numerous alternatives [1]. In order to make the best choice, it develops priorities based on the information at hand [2]. As a result, several methodologies, procedures, and techniques for MCDM have been proposed. These techniques, often grounded in optimization approaches, have found successful applications across diverse fields, including social sciences, psychology, natural sciences, and artificial intelligence [1]. More robust optimization strategies are also available to boost the effectiveness and reliability of the outcomes.

The landscape of MCDM is multifaceted, involving both symmetric and asymmetric problems. Addressing symmetric solutions simplifies the transformation of the decision space with less computational effort. Many incarnations of Optimization Techniques (OTs) have been proposed in the literature to handle such symmetric problems under MCDM scenarios [3].

Mostly, research interest in decision-making focuses on the necessity of evaluating data and selecting the optimal outcome. The appropriate implementation of OTs and multi-criteria approaches is essential for this. Though quite a number of traditional OTs are available, they are handicapped in handling such complex problems because of the non-linearity and irregularity of the objective function [4, 5]. Traditional optimization methods (such as linear programming or quadratic programming) are valuable tools for solving well-structured problems with linear or quadratic objective functions and constraints. However, many real-world decision-making scenarios are characterized by highly nonlinear, irregular, and often discontinuous objective functions. In such cases, the application of traditional OTs becomes challenging, as continuity and differentiability of the functions are usually required for many traditional OTs to converge to the global optimal solution. Moreover, it is often impossible to derive such a strategy because of the high complexity of the optimization problem at hand. Therefore, metaheuristic approaches have been created as an alternative paradigm to address this issue. Over time, these approaches have become popular for producing near-optimal solutions (if not exact ones) to complex problems [1]. Metaheuristic Optimization Algorithms (MOAs) have now attracted huge attention in MCDM [6,7,8,9], as most traditional techniques struggle to find optimal solutions in higher-dimensional spaces. The beauty of such algorithms lies in iterative steps that mimic natural behavior in order to pick the best of the available alternatives. As a result, the optimal solutions generated by MOAs become valuable input datasets for MCDM techniques, enhancing decision-making processes.
Despite a large number of MOAs cited in the literature, this study specifically highlights a selection of popular algorithms: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC). These algorithms are employed as candidates in decision-making mechanisms, showcasing their applicability and effectiveness.

The primary goal of this comprehensive study is to offer a panoramic synthesis of the evolving context in which the competency of MOAs, specifically GA, PSO, ABC, and ACO, converges with the intricate domain of MCDM. In a landscape where previous works have meticulously examined MOAs [10,11,12,13] and MCDM [14,15,16,17] in isolation, this study stands out as the first of its kind. It comprehensively gathers, analyzes, and elaborates on research articles that not only employ MCDM methods alongside MOAs but also delve into MOA algorithms that integrate MCDM principles. The focus on algorithms that seamlessly integrate these domains aims to enhance the understanding of computational methods in engineering by unraveling profound insights, innovations, and applications at the fascinating intersection of the two. This paper therefore not only showcases past achievements but also charts a course for future explorations by illuminating the synergistic potential of MOAs and MCDM, aligning seamlessly with the interests of the computational engineering community. In doing so, it provides a comprehensive understanding of their symbiotic relationship, presenting a roadmap for researchers and practitioners to enhance decision support systems and multi-criteria optimization solutions. After presenting the methodology followed for the literature review, the following contributions are claimed:

  • the existing literature often segregates MOAs and MCDM, lacking a comprehensive exploration of their synergies. This paper bridges that gap by providing a detailed analysis of the intricate relationship between the two domains.

  • real-world examples of how MOAs affect decision-making are scarce in the literature. To demonstrate the usefulness of these methods, this study provides instances of decision-making systems where GA-MCDM takes the lead, offering a novel perspective.

  • it is necessary to focus on specific Evolutionary Algorithms (EAs) to reduce information overload, as numerous methods exist. This paper considers four highly cited algorithms: GA, PSO, ACO, and ABC.

  • while the intersection of MOAs and MCDM is gaining momentum, temporal analysis is lacking. This paper provides an in-depth exploration of practical applications, highlighting instances where the synthesis of past achievements and current trends lays the groundwork for future explorations.

This article is partitioned into five crucial sections apart from the introduction. The methodology for the literature review is described in Sect. 2, which also quantitatively analyzes scientific articles on MCDM in the Science Direct, IEEE Xplore, and Google Scholar databases. MCDM techniques and fuzzy approaches are analyzed in detail in Sect. 3. After that, a brief account of metaheuristic approaches and their applications along with MCDM methods is presented in Sect. 4. Section 5 contains an overview of the study. The conclusion and future scope of the paper are discussed in Sect. 6.

2 Methodology and Quantitative Analysis of the Literature

The literature witnesses the development of multiple MCDM techniques and several optimization algorithms. The search is confined to articles on metaheuristics such as GA, PSO, ACO, ABC, and their related (hybrid) domains. Therefore, the search flow mechanism is restricted to specific keywords, which are presented in Fig. 1. Combinations of these keywords, when entered into the search engines, help researchers find publications on MCDM methods, metaheuristic methods, and their combinations.

Fig. 1
figure 1

Search components for the survey of MCDM approaches

This section critically reviews the quantity of publications in different domains of MCDM, based on a variety of popularly available databases. The databases considered throughout this section are Science Direct (https://www.sciencedirect.com), IEEE Xplore (https://ieeexplore.ieee.org), and Google Scholar (https://scholar.google.com). Research works on MCDM with different optimization methods are considered between the years 1978 and 2022. The total period is subdivided into five intervals, each nine years long. The analyses presented in Figs. 2, 3 and 4 use the same five time-intervals with different notions.

Considering the MCDM research articles across all the databases mentioned above, the publication counts for the five periods are shown in Fig. 2. It clarifies that less than 40% of the work was done in the first four time-intervals, while more than 60% was performed in the last (most recent) one alone.

Fig. 2
figure 2

Publications of MCDM articles over different time-period

For further analysis, the keyword ‘multi-criteria decision making’ has been used to search for research articles, and the results are presented in Table 1. The table holds the counts of published research papers, categorized by time-interval for each database separately. A marked increase in the rate of publications is observed during the period 2014–2022. It can be seen that Science Direct has reported the highest rate of increase in the field of MCDM.

Table 1 Quantitative analysis of publications on MCDM from separate databases over different time durations

Figure 3 represents the percentage of published articles on ‘MCDM methods’ with respect to the different time-intervals between 1978 and 2022 for each database cited above. The legend patterns used to differentiate the time-intervals are shown in blue; the same patterns are followed for the other databases (colors). The figure shows that during the past nine years, Science Direct and Google Scholar have attracted more articles of this kind. In Fig. 3, the growth in research contributions in the field of MCDM methods between 2014 and 2022 can be easily seen.

Fig. 3
figure 3

Quantitative analysis of published articles on MCDM methods in different databases

Similarly, Table 2 reports the counts of published research papers in the field of metaheuristic methods, categorized by time-interval for each database separately. It is clear that Google Scholar holds the top position in the number of publications in the metaheuristic methods and optimization field.

Table 2 Quantitative analysis of publications on MOAs from separate databases over different time durations

Similar to Fig. 3, Fig. 4 follows the same pattern and shows the percentage of published articles based on the search keyword ‘metaheuristic methods and optimization methods’ in the aforesaid databases, with respect to the different time-intervals between 1978 and 2022. Figure 4 reports exponential growth in the percentage of research contributions in the recent decade, which indicates the increasing interest of the scientific community in the field of optimization.

Fig. 4
figure 4

Quantitative analysis of published articles on optimization and MOAs in different databases

Henceforth, the analysis shifts towards the use of popular MOAs such as GA and PSO in the field of MCDM. Figure 5 shows the evolution of the four most famous algorithms (GA, PSO, ABC, ACO) over different time intervals between 2000 and 2023, covering the last two decades. The drastic growth in the number of publications reflects the interest of researchers. The research implications of these algorithms are provided in Sect. 5. Of these articles, Fig. 6 presents the approximate number of research articles in which MOAs and MCDM methods have been combined, comparing the number of articles published in two different decades. This timeframe captures the dynamic growth and latest advancements in the integration of metaheuristics and MCDM, ensuring the paper reflects the most current and relevant developments in this emerging field. Hardly any paper combining the two appears in the literature before this period.

Fig. 5
figure 5

Evolution of famous MOAs in different periods of time

Fig. 6
figure 6

Quantitative comparison of research articles published during last two decades

Figure 6 shows an increase of nearly 85% in published articles during 2012–2023 as compared to the time frame 2000–2011. This reflects the usefulness of MOAs in solving real-world problems for the modern scientific community.

3 MCDM Approaches

The goal of MCDM approaches is to assist decision-makers in selecting the best option out of all the available alternatives while satisfying specific criteria [16]. Here, ‘alternatives’ means the set of choices from which the decision-maker selects the best fit, and ‘criteria’ means the set of restrictions imposed while the choices are made. Proper consideration of multiple criteria leads to an accurate and appropriate decision [6]. MCDM methods are essential for problems that have many solutions and whose final choice cannot be expressed with a simple yes or no [18]. These methods help to compare, evaluate, and classify a finite set of alternatives with respect to a finite group of attributes [14]. Unfortunately, the decisions that decision-makers make often become inconsistent [19]. Thus, a considerable risk is involved for the decision-maker when deciding on an MCDM problem. Therefore, researchers have attempted to establish many MCDM approaches over the years to overcome the difficulties in making more accurate decisions. As a result, many research articles with new algorithms and mathematical tools have been proposed to find more accurate optimal solutions.

Though many MCDM approaches exist in the literature, some of the popular ones are listed in Tables 3, 4, 5 and 6. The methods are categorized into four parts based on their approach, namely scoring-based approaches (Table 3), distance-based approaches (Table 4), pairwise comparison based approaches (Table 5), and outranking-based approaches (Table 6). Each table contains the advantages and disadvantages of a particular contribution, along with the name of the proposed method, the year of its appearance, and references against the authors’ names.

3.1 Scoring-Based Approaches

The scoring-based approaches (Table 3) are considered the most straightforward MCDM techniques. Their mechanism is based on elementary arithmetic operations to evaluate the alternatives [20]. SAW and COPRAS are two examples of scoring approaches. These techniques make it simple to calculate the sum of weighted normalized values over the criteria included in the model. SAW considers maximizing criteria, while COPRAS is a development of SAW that handles both maximizing and minimizing criteria [21].
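The weighted-sum mechanism of SAW can be sketched in a few lines. The decision matrix, weights, and max-normalization below are illustrative assumptions for benefit-type criteria, not data from any surveyed study:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 3 benefit criteria
X = np.array([
    [250.0, 16.0, 12.0],
    [200.0, 16.0, 8.0],
    [300.0, 32.0, 16.0],
])
w = np.array([0.5, 0.3, 0.2])  # criteria weights, summing to 1

# SAW: normalize each benefit criterion by its column maximum,
# then score each alternative by the weighted sum of normalized values.
R = X / X.max(axis=0)
scores = R @ w
best = int(np.argmax(scores))
```

With these values, the third alternative dominates every criterion and therefore obtains the maximum score of 1.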

Table 3 Some popular MCDMs based on ‘Scoring Approaches’

3.2 Distance-Based Approaches

Another category is the distance-based approach (Table 4), which calculates the distance between each alternative and an ideal point to obtain the results. The methods that fall under this category include Goal Programming (GP), which targets the alternative that satisfies all goals, and Compromise Programming (CP), which chooses the alternative most similar to the ideal best alternative. Further, DEA can be considered an upgrade of GP. On the other hand, TOPSIS and VIKOR behave similarly to CP [14].

Table 4 Some popular MCDMs based on ‘Distance Approaches’

3.3 Pairwise Comparison Approaches

Pairwise comparison approaches (Table 5) compare alternatives on each individual criterion and calculate the corresponding criteria weights. These approaches have the flaw of relying solely on the decision-maker’s knowledge; thus, different experts can provide different opinions about the same issue. The first and most popular pairwise approach to decision-making issues is the Analytic Hierarchy Process (AHP) [32]. ANP is a technique that seeks to address the issue of the AHP’s criteria independence. The Multi-Objective Optimization Method by Ratio Analysis (MOORA) is also a pairwise technique that simultaneously optimizes two or more conflicting attributes (objectives). Similarly, BWM is an alternative method based on optimizing the feasible scenarios [33].

Table 5 Some popular MCDMs based on ‘Pairwise Comparison Approach’

3.4 Outranking-Based Approaches

The outranking-based approaches (Table 6) involve creating a preference relation for a group of alternatives that identifies their relative dominance. These techniques can deal with ambiguous and insufficient data and, when used, produce partial priority rankings of possibilities rather than a cardinal indicator of their relative preferences. Elimination and Choice Translating Reality (ELECTRE) is treated as the first method of its kind. Preference Ranking Organization Method for Enrichment of Evaluation (PROMETHEE), developed later, has outperformed ELECTRE in many situations [40].

Table 6 Some popular MCDMs based on ‘Outranking Approaches’

3.5 Few Popular MCDM Approaches

In the earlier sections, the pros and cons of many existing MCDM approaches were discussed. Though a large number of MCDM approaches are available in the literature, a few of them became the most efficient and popular, as they were frequently employed by researchers. Figure 7 presents a comparison of the number of research articles on popular MCDM approaches (Source: Web of Science).

Fig. 7
figure 7

Quantitative comparison of research articles on popular MCDM methods (Source: WoS)

Based on their increasing citations, each of these seven popular approaches is reviewed in the following subsections along with its future aspects.

3.5.1 Elimination and Choice Translating Reality (ELECTRE)

ÉLimination Et Choix Traduisant la REalité (ELECTRE) is mainly based on concordance analysis along with its various iterations [41, 44]. Uncertainty and vagueness can be handled in a more advanced way by this method, though its mechanism and outcomes can be hard to clarify sometime. This method determines concordance and discordance indices and then create outranking relationships based on thresholds. Next it establishes a set of outranking criteria and assigns preference indices to the alternatives for final rankings. This outranking strategy makes it difficult to identify the advantages and disadvantages of the possibilities immediately. It also makes it challenging to cross-verify its impacts [45]. Since it takes uncertainty and vagueness into account, it is therefore widely used in economics, energy, water management, transportation and environmental problems. ELECTRE preserves many modifications, including ELECTRE-I, II, III, ELECTRE TRI, TRI-B, and TRI-C. Most recently, ELECTRE has been less modified by researchers, rather propositions with hybrid modes made it efficient. For instance, in the area of pharmaceuticals, a technique called AHP-ELECTRE-DEMATEL is proposed to examine the issues preventing industries from acquiring the 5.0 mode [46]. First, following various expert interviews, the impediments and issues are ranked using the integrated AHP. Then, DEMATEL method helps in linking virtual reality to actual reality. After that, the traditional ELECTRE approach determines the sets of synchronization and contrast by using a pair of alternating comparisons. In order to filter out the less desirable solutions and choose the best, this method generates a variety of metrics by combining synchronization and contrast. Future paths for study might include analyzing ELECTRE’s performance in dynamic decision aspect and investigating its flexibility in changing industries. 
Additionally, investigating hybrid models and novel applications, as demonstrated in pharmaceuticals, can open doors for further advancements.
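The concordance and discordance indices at the core of ELECTRE can be sketched as follows. The weighted decision matrix and criteria weights are illustrative assumptions, and threshold-based outranking (a further ELECTRE step) is omitted for brevity:

```python
import numpy as np

# Hypothetical weighted normalized decision matrix: 3 alternatives x 3 criteria
V = np.array([
    [0.40, 0.25, 0.30],
    [0.35, 0.30, 0.35],
    [0.30, 0.35, 0.25],
])
w = np.array([0.40, 0.35, 0.25])  # criteria weights, summing to 1

n = V.shape[0]
C = np.zeros((n, n))  # concordance index c(a, b)
D = np.zeros((n, n))  # discordance index d(a, b)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        # Concordance: total weight of criteria on which a is at least as good as b.
        C[a, b] = w[V[a] >= V[b]].sum()
        # Discordance: worst disadvantage of a versus b, scaled by the
        # largest absolute difference between the two alternatives.
        diff = V[b] - V[a]
        D[a, b] = max(diff.max(), 0.0) / np.abs(V[a] - V[b]).max()
```

An outranking relation would then be declared for pairs where the concordance exceeds a chosen threshold and the discordance stays below another.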

3.5.2 Technique for Order Preference by Similarity to Ideal Solution (TOPSIS)

TOPSIS is a method for choosing the option that is ‘nearest to the positive ideal solution and farthest from the negative ideal solution’ in a multidimensional computing space [25]. The ideal solution is determined from the given input data. The method is user-friendly, as it is easily programmable and simple to apply. First, it normalizes the decision matrix and assigns weights to the criteria. Then, it calculates the ideal and anti-ideal solutions from the matrix. Finally, it computes the Euclidean distance and closeness coefficient of each alternative and ranks the alternatives by closeness coefficient. The detailed mechanism of TOPSIS is provided in Table 7. Fields such as design, logistics and supply chain management, engineering and manufacturing systems, water resource management, business and marketing management, and human resource management have already benefited from the application of TOPSIS. The advantage of this approach is its simplicity and its ability to handle large datasets with a fixed number of steps, unaffected by the size of the problem [47]. These advantages attracted researchers to adopt it quickly as a decision-making tool or to benchmark other methods against it. In the field of biomass energy, a modified version of TOPSIS called Grey-TOPSIS (G-TOPSIS) has been used to prioritize energy barriers and choose the best-fitting alternative [48]. The AHP and Delphi approaches helped in weighting and prioritizing the energy barriers, respectively. Then, by the upgraded ranking technique G-TOPSIS, all the biomass energy alternatives are sorted according to their closeness scores. Recently, a combination of PSO and TOPSIS has successfully solved the problem of installing energy storage in electric systems [49]. Despite its numerous advantages, TOPSIS struggles with a few drawbacks. It becomes a challenge to balance additional factors while maintaining judgement consistency. Also, correlation among attributes cannot be handled well by its Euclidean distance. Future studies might look into ways to improve TOPSIS in order to overcome these drawbacks and broaden its range of applications. Furthermore, examining variants such as G-TOPSIS in various settings might provide insightful information on their functionality and future directions.
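The TOPSIS steps summarized above can be sketched as follows. The decision matrix and weights are illustrative assumptions, and all criteria are taken as benefit-type to keep the ideal/anti-ideal computation short:

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives x 3 benefit criteria
X = np.array([
    [7.0, 9.0, 9.0],
    [8.0, 7.0, 8.0],
    [9.0, 6.0, 8.0],
    [6.0, 7.0, 8.0],
])
w = np.array([0.5, 0.3, 0.2])  # criteria weights, summing to 1

# 1. Vector-normalize the decision matrix and apply the weights.
V = (X / np.linalg.norm(X, axis=0)) * w

# 2. Ideal and anti-ideal solutions (for benefit criteria: column max / min).
v_pos = V.max(axis=0)
v_neg = V.min(axis=0)

# 3. Euclidean distances to both, then the closeness coefficient.
d_pos = np.linalg.norm(V - v_pos, axis=1)
d_neg = np.linalg.norm(V - v_neg, axis=1)
cc = d_neg / (d_pos + d_neg)          # in [0, 1]; larger is better
ranking = np.argsort(-cc)             # best alternative first
```

Cost-type criteria would simply swap the roles of column maxima and minima when forming the ideal and anti-ideal points.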

3.5.3 Analytic Hierarchy Process (AHP)

In 1981, one more method, called AHP, was developed. AHP is ‘a theory of measurement through pairwise comparisons and relies on the judgments of experts to derive priority scales’ [34]. The key feature of AHP is how well pairwise comparisons can be performed. The method builds a hierarchy of criteria and alternatives and formulates pairwise comparisons for the criteria and alternatives at the same level of the hierarchy. Next, by creating the Pairwise Comparison Matrix (PCM), it calculates the priority vector and the Consistency Ratio (CR). Finally, it synthesizes the priority vectors to determine the overall ranking. A step-by-step explanation of the AHP method is captured in Table 7. It considers both kinds of comparisons, i.e., (i) the alternatives (also called variables) with respect to the several criteria and (ii) the criteria with respect to the goal, to estimate criteria weights. AHP appeared frequently in the literature reviewed for this investigation. It is easy to use, and its pairwise comparison approach helps decision-makers measure criteria weights and compare the alternatives. Due to its hierarchical nature, it is scalable and can quickly grow large enough to accommodate substantial decision-making problems. However, criteria and alternatives are often not independent of each other, and pairwise comparison can create inconsistencies in the ranking criteria and the judgments passed by the decision-makers. Also, since AHP is prone to rank reversal, the insertion of alternatives at the end of the process can cause the final ranks to change or reverse. AHP has direct applications in the fields of public policy, planning, resource management, political strategy, and corporate policy [50]. After the success of MAUT, AHP also established milestones in MCDM problems and their applications. The AHP approach produces alternative rankings that are similar to influential rankings. In the future, AHP can be explored to address inconsistencies in pairwise comparisons and mitigate rank reversal issues. Furthermore, examining how well AHP adapts to dynamic decision-making contexts and strengthening its resistance to changes in alternative sequences may be worthwhile directions.
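The PCM-to-priority-vector step and the consistency check described above can be sketched as follows. The comparison matrix is an illustrative assumption on Saaty's 1–9 scale; the random index 0.58 is the standard value for three criteria:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (PCM) for 3 criteria;
# A[i, j] is the judged importance of criterion i relative to j.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 3.0],
    [1/5.0, 1/3.0, 1.0],
])
n = A.shape[0]

# Priority vector: principal eigenvector of the PCM, normalized to sum 1.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Consistency Ratio: CR = CI / RI with CI = (lambda_max - n) / (n - 1);
# RI = 0.58 is the random index for n = 3; CR < 0.1 is acceptable.
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58
```

For this matrix the priorities come out to roughly (0.64, 0.26, 0.10) with a CR well under the 0.1 threshold, so the judgments would be accepted as consistent.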

AHP was later expanded in the form of ANP [35, 51]. AHP maintains hierarchy and linearity: the objective always remains at the top level, and the alternatives occupy the lower levels. ANP, however, is nonlinear and is essentially a generic version of AHP. In the past few years, ANP has become one of the preferred MCDM approaches, as it has many advantages. It has the ability to prioritize large groups or clusters of elements by forming only one matrix. Moreover, unlike AHP, it can better handle the interdependency of components. With the help of various arbitrary criteria, it supports complex networked decision-making problems. Project selection, green supply chain management, and problems of optimal scheduling and product planning are the major fields where ANP is utilized adequately [50].

Table 7 Mechanism of AHP and TOPSIS

3.5.4 Multi-criteria Optimization and Compromise Solution (VIKOR)

The VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method came into existence in 1988. Ten years later, Opricovic [27] introduced the method to the world as an MCDM approach for selecting and ranking a set of alternatives under conflicting and non-comparable criteria, renaming it ‘Multi-criteria Optimization and Compromise Solution’. The method finds a compromise solution that is the most feasible and closest to the ideal solution while the alternatives are evaluated according to all established criteria. Initially, it normalizes the decision matrix and assigns weights; then it calculates the individual and group performance scores. From these, it identifies the maximum group utility and the minimum individual regret and ranks the alternatives based on compromise solutions. VIKOR is used with almost all MCDM techniques to make the solution more accurate. In order to solve problems in a fuzzy environment where both criteria and weights can be Fuzzy Sets (FSs), the fuzzy VIKOR approach was created [52]. The method is widely used in the fields of sustainable and renewable energy, machinery and engineering [53], performance evaluation [54], risk and supply chain management [55], human resource management, and water resources planning [52]. Despite its wide area of application, VIKOR is not without limitations. Its sensitivity to changes in criteria weights might have an impact on the ranking. Furthermore, VIKOR may encounter difficulties when handling complicated decisions or circumstances with a high level of ambiguity. Subsequent studies might examine methods to improve VIKOR’s resilience to changing criteria weights and examine its suitability for dynamic and unpredictable decision-making situations.
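The group-utility and individual-regret computation outlined above can be sketched as follows. The decision matrix, weights, and the strategy weight v = 0.5 are illustrative assumptions, with all criteria taken as benefit-type:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 3 benefit criteria
X = np.array([
    [8.0, 7.0, 2.0],
    [5.0, 3.0, 7.0],
    [7.0, 5.0, 6.0],
])
w = np.array([0.4, 0.4, 0.2])  # criteria weights, summing to 1
v = 0.5                        # weight of the group-utility strategy

f_best = X.max(axis=0)   # best value per (benefit) criterion
f_worst = X.min(axis=0)

# Normalized regret of each alternative on each criterion.
norm = (f_best - X) / (f_best - f_worst)

S = (w * norm).sum(axis=1)  # group utility (smaller is better)
R = (w * norm).max(axis=1)  # individual regret (worst single criterion)

# Compromise index Q mixes S and R; the alternative with the
# smallest Q is proposed as the compromise solution.
Q = (v * (S - S.min()) / (S.max() - S.min())
     + (1 - v) * (R - R.min()) / (R.max() - R.min()))
ranking = np.argsort(Q)
```

The full VIKOR procedure additionally checks acceptable-advantage and acceptable-stability conditions before declaring the top-ranked alternative the compromise solution.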

3.5.5 Preference Ranking Organization Method for Enrichment of Evaluation (PROMETHEE)

Similar to ELECTRE, PROMETHEE is an outranking approach with numerous iterations [40]. It was developed in 1982, just after the appearance of AHP [15]. The method starts by establishing pairwise comparisons and preference functions and calculates net outranking flows for each alternative. It applies a chosen preference function to obtain positive and negative flows, and by aggregating the net flows it generates the final ranking. Later, many variations of this approach were developed, namely PROMETHEE I, II, III, IV, V and VI. Besides these, the method also has hybrid versions such as PROMETHEE-GAIA and PROMETHEE-GDSS. PROMETHEE I provides a partial ranking of the alternatives, while PROMETHEE II is used for a complete ranking. For interval-based rankings of the alternatives, PROMETHEE III is applied. If a continuous set of viable solutions is found, then PROMETHEE IV is used to rank the alternatives partially or entirely. Segmentation-constraint problems can be solved with PROMETHEE V, and PROMETHEE VI was developed to handle the process of human brain representation [43]. PROMETHEE does not have any explicit procedure for assigning weights to the criteria, so there is no easy way to justify the values when they are assigned. However, PROMETHEE does not require the criteria to be proportionate. The easy implementation of its iterative steps has made PROMETHEE popular. It has been extensively used in various fields, including agriculture, business management, chemistry, financial management, manufacturing, transportation, hydrology and water management, and energy management [50]. In the contemporary world, self-driving vehicles are one of the most attractive areas of intelligent transportation systems. Since these are automated vehicles, their cyber security and the risks of physical implementation are treated as serious issues. PROMETHEE and AHP, together with MABAC, have been used with SVNS alternatives to rank the associated risks; moreover, the resulting model provides high security and safety for pedestrians and drivers [56]. The problem with PROMETHEE, despite its extensive use, is that it lacks a clear mechanism for allocating weights to criteria, which may affect the accuracy of the decision-making process. Subsequent investigations may delve into inventive methodologies to tackle this constraint and augment the suitability of PROMETHEE in intricate decision-making situations. Furthermore, resolving PROMETHEE’s limits in particular application areas and looking at how to integrate it with developing technologies might pave the way for future developments.
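The flow computation behind the complete ranking of PROMETHEE II can be sketched as follows. The evaluation table and weights are illustrative assumptions, and the simplest ("usual") preference function is used in place of the richer generalized criteria the method supports:

```python
import numpy as np

# Hypothetical evaluation table: 3 alternatives x 2 benefit criteria
X = np.array([
    [80.0, 90.0],
    [65.0, 58.0],
    [83.0, 60.0],
])
w = np.array([0.6, 0.4])  # criteria weights, summing to 1

n = X.shape[0]
pi = np.zeros((n, n))  # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        # "Usual" preference function: full preference (1) on any
        # criterion where a is strictly better than b, else 0.
        P = (X[a] > X[b]).astype(float)
        pi[a, b] = (w * P).sum()

phi_plus = pi.sum(axis=1) / (n - 1)   # positive (leaving) flow
phi_minus = pi.sum(axis=0) / (n - 1)  # negative (entering) flow
phi = phi_plus - phi_minus            # net flow; PROMETHEE II ranks by phi
ranking = np.argsort(-phi)
```

PROMETHEE I would instead compare phi_plus and phi_minus separately, leaving pairs with conflicting flows incomparable.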

3.5.6 Multi-objective Optimization Method by Ratio Analysis (MOORA)

MOORA was proposed in 2006 [37]. In this process, two or more conflicting objectives (attributes) are simultaneously optimized subject to certain constraints. A matrix of judgements of the alternatives on the objectives is necessary. After that, a ratio system is developed to compare each judgement of an alternative on an objective against a denominator; this denominator serves as a representative for all alternatives on the corresponding objective. The ratio system is not the only process MOORA follows; the reference point technique is equally competent [57]. The algorithmic mechanism of MOORA is presented in Table 8. MOORA also has updated versions such as MULTIMOORA and MOOSRA. Brauers and Zavadskas [58] added the complete multiplicative form in MULTIMOORA. The MOOSRA method, on the other hand, utilizes a straightforward ratio between the sum of the performance values for beneficial criteria and the sum of the performance values for non-beneficial criteria. It is also less susceptible to significant differences in the criteria’s values. The implementation steps of both the MOOSRA and MULTIMOORA methods are the same as those of the MOORA method. However, it is essential to note a potential drawback of MOORA, which lies in its sensitivity to the specific form of the decision matrix and the assigned weights. In recent years, the method has been hybridized with many other MCDM and metaheuristic methods to obtain relevant results. It is possible to increase MOORA’s usefulness and applicability by looking at how effectively it integrates with new technologies and adapts to changing and dynamic contexts. For example, Irvanizam et al. [59] have recently extended MULTIMOORA using trapezoidal fuzzy neutrosophic sets to overcome the weaknesses of the method.
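The MOORA ratio system described above can be sketched as follows. The decision matrix, the benefit/cost split, and the weights are illustrative assumptions:

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 3 criteria;
# the first two are benefit criteria, the last is a cost criterion.
X = np.array([
    [60.0, 0.40, 2.0],
    [55.0, 0.55, 3.0],
    [70.0, 0.30, 4.0],
])
benefit = np.array([True, True, False])
w = np.array([0.4, 0.4, 0.2])  # criteria weights, summing to 1

# Ratio system: the common "denominator" for each objective is the
# square root of the sum of squares of that column, so every judgement
# is expressed as a dimensionless ratio.
R = X / np.sqrt((X ** 2).sum(axis=0))

# Overall assessment: add weighted benefit ratios, subtract cost ratios.
WR = w * R
y = WR[:, benefit].sum(axis=1) - WR[:, ~benefit].sum(axis=1)
ranking = np.argsort(-y)  # larger assessment value is better
```

The reference point variant would instead rank alternatives by their worst deviation from the best ratio on each objective.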

3.5.7 Combinative Distance Based Assessment (CODAS)

CODAS was developed by Ghorabaee et al. [30]. Here, the deserving alternatives are evaluated by employing two different measures. The primary measure captures how far away alternatives are from the negative-ideal solution in Euclidean space; this kind of distance demands an l2-norm indifference space for the criteria. The taxicab distance, which is connected to the l1-norm indifference space, serves as the supplementary/secondary measure. Evidently, the alternative further away from the negative-ideal solution is more acceptable. In this strategy, the taxicab distance is employed as a supplementary measure when two alternatives are incomparable on the basis of Euclidean distance. The complete explanation of CODAS is given in Table 8. Recently, CODAS has been extended to the IVIF environment to solve a route selection problem in the field of transportation [60]. There have been many similar improvements to CODAS, namely CRITIC-CODAS, PL-CODAS, etc. Of course, CODAS has also been used with other MCDM techniques to increase the accuracy rate. The method has a potential limitation, particularly in its sensitivity to the choice of distance metrics and indifference spaces. Unlike MOORA, CODAS has not witnessed extensive exploration of its integration with metaheuristic optimization algorithms. Future research could delve into methodologies to enhance the robustness and generalizability of CODAS across diverse decision-making contexts.
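The two-distance idea can be sketched as follows (a simplified reading of [30], in which the taxicab term is switched on only when two Euclidean distances are nearly tied, per the narrative above; the threshold `tau` and all names are illustrative, and the exact threshold convention in [30] may differ):

```python
import numpy as np

def codas(matrix, weights, beneficial, tau=0.02):
    """Rank alternatives with a simplified CODAS sketch (positive data)."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    b = np.asarray(beneficial)
    # Linear (max/min) normalization, then weighting.
    N = np.where(b, X / X.max(axis=0), X.min(axis=0) / X)
    R = w * N
    ns = R.min(axis=0)                        # negative-ideal solution
    E = np.sqrt(((R - ns) ** 2).sum(axis=1))  # Euclidean (l2) distance
    T = np.abs(R - ns).sum(axis=1)            # taxicab (l1) distance
    # Pairwise relative assessment: the taxicab distance counts only when
    # the Euclidean distances are too close to separate two alternatives.
    dE = E[:, None] - E[None, :]
    dT = T[:, None] - T[None, :]
    psi = (np.abs(dE) < tau).astype(float)
    H = (dE + psi * dT).sum(axis=1)
    return H, np.argsort(-H)  # higher assessment score ranks first
```

An alternative dominating on every criterion is furthest from the negative ideal and receives the highest score.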

Table 8 Mechanism of MOORA and CODAS

3.6 Fuzzy Based Approaches

Since a decision-maker can sometimes be vague or unsure about his judgements while analyzing the information, the subjective selection of the weights by the decision-makers is considered a drawback of MCDM [14]. Wherever the situation involves uncertainty in decision-making, the Fuzzy Set (FS) can deliver more accurate results. In order to handle such vagueness, Zadeh [61] developed FSs in the year 1965. The methodology of the FS is based on a membership function. Later, in 1986, a more sophisticated FS, namely the Intuitionistic Fuzzy Set (IFS), came into existence, which deals with both membership and non-membership functions [62, 63].

Interestingly, a new concept of Bipolar Fuzzy Set (BFS) was proposed by Zhang [64]. Basically, it works with both positive and negative membership function values. Due to its dynamic behavior, some notable applications of BFS in MCDM are seen in the literature. In the year 2006, a Fuzzy PSO hybridization was developed with the benefit of a faster rate of convergence [65]. Other EAs have likewise been synergized with different MCDM approaches. A similar method, namely the Fuzzy TOPSIS approach, was designed in 2008 to compare the alternatives’ criteria. The mechanism of this method is to determine the proximity of the variables: in order to classify the advantages and disadvantages of the alternatives, the Euclidean distances between the alternatives and the ideal solution are compared. Meanwhile, the mechanism of IFS has been improved to capture uncertainty and ambiguity using linguistic concepts; this form of FS is known as the Pythagorean Fuzzy Set (PFS) [66].
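The proximity mechanism behind (fuzzy) TOPSIS can be illustrated with the crisp closeness coefficient (a sketch with our own names; fuzzy TOPSIS applies the same idea to fuzzy numbers via fuzzy distance measures):

```python
import numpy as np

def topsis_closeness(matrix, weights, beneficial):
    """Closeness coefficient of classical TOPSIS (crisp sketch)."""
    X = np.asarray(matrix, dtype=float)
    b = np.asarray(beneficial)
    # Vector normalization followed by weighting.
    R = X / np.sqrt((X ** 2).sum(axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(b, R.max(axis=0), R.min(axis=0))  # ideal solution
    anti = np.where(b, R.min(axis=0), R.max(axis=0))   # anti-ideal solution
    d_pos = np.sqrt(((R - ideal) ** 2).sum(axis=1))    # distance to ideal
    d_neg = np.sqrt(((R - anti) ** 2).sum(axis=1))     # distance to anti-ideal
    return d_neg / (d_pos + d_neg)  # closer to 1 is better
```

An alternative that is best on every criterion coincides with the ideal solution and obtains a closeness of 1.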

In recent times, the q-Rung Orthopair Fuzzy set (q-ROF), proposed in 2017 [67] and based on IFS and PFS, is the latest such method. It uses the decision-makers’ degrees of membership, non-membership, and indeterminacy. Another FS, called the Spherical Fuzzy Set (SFS), was introduced by Kutlu Gundogdu and Kahraman [68]. This set deals with membership, non-membership and hesitancy parameters.
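These generalizations differ mainly in the constraint tying the membership degree \(\mu\) and the non-membership degree \(\nu\); a small checker makes the hierarchy concrete (an illustration: q = 1 recovers IFS, q = 2 recovers PFS, and larger q gives q-ROF sets, which admit progressively more pairs):

```python
def valid_pair(mu, nu, q=1):
    """Check whether (mu, nu) is admissible under the q-rung
    orthopair constraint mu**q + nu**q <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu ** q + nu ** q <= 1
```

For example, (0.8, 0.5) violates the IFS constraint (sum 1.3) but satisfies the PFS one (0.64 + 0.25 = 0.89).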

Though some single optimization methods are more effective for making decisions, they suffer from computational burdens [69]. Of course, due to the ‘No Free Lunch Theorem [70]’, no single optimization method solves all sorts of real-world problems. However, hybrid methods sometimes provide surprisingly better results with higher accuracy for a larger range of problems. One such example is the synergy of IAHP and CODAS [71].

Future research in the realm of fuzzy logic and MCDM holds promising avenues for advancing computational efficiency, particularly by addressing the computational burdens associated with fuzzy methodologies and optimization algorithms. Fuzzy numbers with high uncertainty can help in constructing objective functions with highly volatile demand, cost, etc. Constraints involving production capacity, resource availability, or market demand can employ IFSs/PFSs to represent conditions with inherent uncertainty. Algorithms to solve these optimization models can be designed according to the level of uncertainty present in the problem. Simulation can also be performed using fuzzy logic to capture the range of possible outcomes. Some notable works in the literature solve MCDM problems by using fuzzy logic and MOAs. Such contributions are listed in Table 9.

Table 9 FSs used to handle the uncertainty in MCDM methods and their inventors

After learning about well-known MCDM techniques and how fuzzy logic is used in MCDM, the subsequent section turns to an exploration of the challenges encountered in the development of MCDM methods. This examination delves into the intricacies and hurdles faced, providing valuable insights into the complexities of enhancing decision-making processes in a multi-criteria context.

3.7 The Challenges of Developing Multi-criteria Decisions and Methods

MCDM is a field that deals with complex problems requiring the consideration of multiple, often conflicting, criteria. The challenge lies in the inherent intricacies of such decision-making processes. One of the primary challenges is finding solutions that effectively balance diverse and sometimes contradictory objectives. For instance, in urban planning, decisions need to account for economic growth, environmental sustainability, and social equity simultaneously. Finding a solution that optimally satisfies all criteria can be exceptionally challenging, as optimizing for one criterion might lead to compromises in another. The same kind of challenge can be witnessed in the domain of healthcare policy-making. Decision-makers must weigh the need for cost-efficiency against the quality of patient care, public health outcomes, and equitable access to services. Striking the right balance becomes a complex task, as improvements in one area may indeed come at the expense of another, requiring sophisticated algorithms and decision support methodologies to navigate these trade-offs effectively.

Furthermore, MCDM often involves dealing with imperfect, uncertain, or incomplete information. This uncertainty can manifest in various forms, from ambiguous data and imprecise measurements to volatile external factors, posing a substantial challenge for decision-making. Decision-makers must navigate this uncertainty and make choices that are robust to changing conditions. In healthcare, for instance, when evaluating the effectiveness of medical treatments, there is often limited and ambiguous clinical data, and medical professionals must make critical decisions under this information scarcity. MCDM methods need to incorporate robust techniques for handling this uncertainty, such as Bayesian modelling, sensitivity analysis, and probabilistic assessments.
In the context of investment decisions in finance, market conditions are inherently uncertain, and the future performance of assets is unpredictable. MCDM methods must grapple with this uncertainty, employing techniques such as probabilistic modelling, scenario analysis, and sensitivity assessments to enable decision-makers to make informed choices in the face of imperfect information. Additionally, as MCDM methods become more sophisticated and nuanced, the computational complexity of solving MCDM problems also grows, posing a significant challenge in terms of time and resource requirements.

This escalating complexity of decision scenarios across diverse domains drives a wave of imperative evolution in MCDM processes. Balancing numerous criteria, addressing conflicting objectives, and incorporating ethical considerations are essential aspects of contemporary decision-making. The challenges posed by imperfect information, ambiguous scenarios, and emerging technologies underscore the need for continuous innovation in MCDM processes. Collaboration among experts from various domains is pivotal, ensuring adaptability and effectiveness in addressing the dynamic landscape of multi-criteria decision problems.

4 Metaheuristic Optimization Algorithms (MOAs)

Any decision-making model’s accuracy primarily rests on how well it determines each component’s priority, weight, or relative importance for the decision objective. MCDM can be considered a feasible choice in this regard. These MCDM methods determine a priority value or weight of importance for each factor, which helps in differentiating the factors from one another with respect to the decision objective. These priorities or weights of importance are determined for common scenarios, which include both optimal and non-optimal cases. Further, pairwise comparisons are more challenging when comparing qualitative and quantitative factor information. The MCDM methods performed in this context are also called compensatory methods [77], where a trade-off among the criteria is permissible. As an illustration, a product with a high price and excellent sound quality is acceptable because the high expenditure is offset by the excellent quality. In non-compensatory techniques, the attributes are considered distinct from one another and no trade-off among the criteria is allowed. For instance, in order to earn a driving license, a practical driving test, a driving rules test, and an eye test are mandatory components; strictly, one cannot be compensated by another.

The priority of the parameters should always be set for an optimal condition in order to create an optimal scenario. Though MCDM can exploit differences in the ‘impacts’ of the components, it cannot prevent the objective function from being dragged into an infeasible region. Therefore, even when factors can be distinguished, the optimality of the objective function cannot be guaranteed. To address these drawbacks of MCDM methods, OTs can be suitably employed to achieve a set of better solutions (namely, the pareto-optimal front), over which MCDM can easily be fit to find the best solution based on priority weights. However, at least one objective is needed to use an OT; thus, an objective function is framed using the available factors. This new objective function must be nonlinear, since it is built using a weighted ratio of the importance of beneficiary and non-beneficiary components. OTs always produce the priority value (pv) for each element in the optimal scenario; in fact, the pvs for each component are calculated in normalized form. Hence, an OT normalizes decision variables, objectives and constraints. Therefore, some researchers choose OT, followed by MCDM, to make the best decision. Basically, OTs fall into two classes, traditional and heuristic approaches, which provide exact and approximate solutions respectively. The classification is briefly presented in Fig. 8. The approximation methods and MOAs belong to heuristic approaches, and their primary distinction is the number of iterations they employ [11, 78]. Of course, it can be noted that an exact method provides a straightforward solution (but is not always applicable), whereas an approximate method provides at least a near-optimal solution, without fail.

Fig. 8
figure 8

Categorization of optimization algorithms

Additionally, Multi-Attribute Decision-Making (MADM) and Multi-Objective Decision-Making (MODM) approaches are introduced as the two basic classes of MCDM methods. MADM and MODM typically decide the best choice of attributes and optimize a number of objectives, respectively. MADM techniques are absolutely crucial for the fields of management, engineering, and similar sectors where the decision-makers have to choose the best-fit alternative among a finite set of existing alternatives [79]. On the other hand, MODM techniques consider ‘the criteria as constraints’ and ‘the alternatives as objectives’ to construct an optimization problem. Unfortunately, the problem so designed becomes complex in the presence of nonlinear factors and a large number of objectives. Thus, in order to handle such MODM problems, the traditional techniques fall short. Therefore, researchers use MOAs, at least to obtain a near-optimal solution, if not the best one. These algorithms enable hassle-free efforts for researchers to handle a large number of alternative solutions at a time.

In order to deal with multi-objective optimization problems, most of the population/swarm-based algorithms provide dominance-guided solutions. These are not really the optimal (alternative) solutions, but rather near-optimal ones. MCDM techniques help to rank these solutions according to the preferences of the decision-makers. Hence, the synergy of metaheuristic approaches with MCDM techniques makes the solution robust and effective. An overview of such vital hybridizations available in the literature is highlighted in Fig. 9.

Fig. 9
figure 9

Categorization of hybrid algorithms on metaheuristics with MCDM in the literature based on the author’s perspective

There are some advantages of MOAs. They mostly improve the optimal results significantly. These algorithms are based on individual intelligence over the population [78, 80]. Applying such algorithms usually helps in lowering prices, assigning duties, and optimizing the route to reach a destination [69]. Some recent applications of such algorithms include computer security, engineering, economics, and science [78]. It has now become a challenge to decide which MOA is the most effective one. Sometimes researchers use hybridization to improve the quality of the solution, where one operator in the algorithm overcomes the drawback of another. Based on their inherent features, MOAs are categorized into the following types: (i) nature-inspired [69, 80,81,82,83], (ii) population-based [3, 78, 81, 84, 85], (iii) memory-based [82, 86,87,88], (iv) iterative [69, 80, 81, 84, 87], (v) greedy [87] and (vi) unique-solution-based [82, 87] metaheuristics. The interactive Venn diagram of these categories is reflected in Fig. 10.

Fig. 10
figure 10

Categorization of metaheuristics

Out of these metaheuristic approaches, the authors consider the four most popular algorithms, namely GA, PSO, ACO and ABC, in the next section. For each algorithm, its working mechanism is described along with its applicability to MCDM problems in the recent literature, where the gradual developments and methodologies are traced. Inspired by seminal works in the field [6, 16], the present study undertakes a comprehensive meta-analysis of current literature, emphasizing the synergy of MA and MCDM methodologies. This analysis culminates in the meticulous tabulation of key insights and findings, serving as a structured framework to elucidate the evolution and trends in MCDM research.

4.1 Genetic Algorithm (GA)

A GA is a mathematical model that belongs to the family of MOAs [89]. The mechanism of GA is based on Darwin’s principle of the “survival of the fittest”. Holland [90] is credited with introducing GA with the help of some effective operators. Initially, GA starts with a population where each individual is called a chromosome and each decision variable a gene. The algorithm repeatedly chooses parent chromosomes (a pair of individual solutions) from the current population, based on their ‘fitness function [Eq. (1)]’, in order to update the current population. Following specific ‘crossover [Eq. (2)]’ and ‘mutation [Eq. (3)]’ operators, the chosen parent chromosomes are utilized for breeding, and the resulting pair of children is employed to create the population for the next generation. The sequential mechanism of GA is demonstrated in Table 11. Although many modifications of GA have been registered to date, the fundamental equations for the fitness function [Eq. (1)], crossover [Eq. (2)] and mutation [Eq. (3)] can be given by

$$\begin{aligned} \text {fitness value} = \dfrac{1}{1+f_i} \end{aligned}$$
(1)

here, \(f_i\) is the function value of the chromosome i.

$$\begin{aligned} Offspring_1&= Parent_1[Crossover Point] \nonumber \\ & \quad +\, Parent_2[Crossover Point] \nonumber \\ Offspring_2&= Parent_2[Crossover Point] \nonumber \\ &\quad +\, Parent_1[Crossover Point] \end{aligned}$$
(2)

here, crossover point is the point from which the chromosome will change its value.

$$\begin{aligned} \text {Mutated Chromosome}&= \text {Chromosome} \nonumber \\ & \quad +\, \text {Mutation Rate} \times \text {Random Change} \end{aligned}$$
(3)
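Eqs. (1)–(3) can be sketched as three minimal operators (a hedged illustration: Eq. (2) is read as single-point crossover and Eq. (3) as a real-coded mutation, both common interpretations of the notation; names are ours):

```python
import random

def fitness(f_i):
    # Eq. (1): smaller objective values map to larger fitness.
    return 1.0 / (1.0 + f_i)

def crossover(parent1, parent2, point):
    # Eq. (2): single-point crossover, swapping gene segments at `point`.
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(chromosome, mutation_rate):
    # Eq. (3): each gene is nudged by a random change scaled by the
    # mutation rate (a real-coded interpretation).
    return [g + mutation_rate * random.uniform(-1, 1) for g in chromosome]
```

One generation would then select parents in proportion to `fitness`, apply `crossover`, and `mutate` the offspring.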

Tournament selection, a technique introduced in 1983 [13], has helped GA to deal with population size appropriately. Goldberg [91] then updated GA’s parameters and produced the partially mapped crossover, and in 1989 he again revised crossover and mutation in a very effective way. In the same year, RGAs were proposed by Lucasius et al. [92] in the area of chemometrics. A binary GA was created by Payne and Glen [93] to determine how similar different compounds were. In the same year, a more improved version of simple GA, called Multi-objective GA (MOGA), came into existence [94]; it is categorized into Pareto-based and decomposition-based MOGAs. Another technique, named Niched Pareto GA (NPGA), based on pareto dominance and tournament selection, was proposed by Horn et al. [95] in the very next year. Using GA, Srinivas and Deb [96] proposed the Non-dominated Sorting GA (NSGA), which has since been followed by many of its versions, namely NSGA-II, NSGA-III, etc.

Even in its fundamental version, GA has solved many milestone problems. In spite of its disadvantages, the technique has provided solutions and developments in the fields of operations management [97], wireless networks [98], scheduling problems [99], engineering and technology advancements [100], medical science [101], etc. In 2011, GA was used for solving the Single Row Facility Layout Problem (SRFLP) [102]; the approach solved complex instances with 60–80 samples. GA has been used in hybrid forms many times to address the issues mentioned above. For example, the multi-product and multi-period problem was solved by a GA-PSO hybrid technique [103]. During the transfer of multimedia data (photos, videos, and audio) over the internet, the data may be damaged or stolen; as a result, methods of image protection like encryption and cryptography are needed. In 2018, the right control parameters were chosen using GA and its variations. A multi-objective EA was created by Kaur and Kumar [104] to optimize the chaotic map’s control parameters; the chaotic beta map was used to generate the secret key, which helps to encrypt the data, and the image was encrypted using parallel GAs. In the same year, owing to its superior search capabilities, GA was employed to reduce the processing time for the decomposition (split) of an image, which helped enlarge images to improve natural contrast [12]. In order to de-noise images, GA and fuzzy logic have been hybridized. Haze, fog, and smog can all be eliminated from an image using a GA-based restoration technique [105]. The algorithm has also helped in upgrading control parameters, improving their performance throughout the detection and recognition process [106]. Van Thai et al. [107] address a literature gap by conducting multi-objective optimization for CCC floors with notched connectors. Using the NSGA-II algorithm, the research minimizes total thickness, weight, and material cost while considering structural, vibration comfort, and fire condition constraints. The findings, presented as pareto fronts, offer optimal solutions across various floor spans and cost ratios of timber to concrete.

This section especially deals with a review of applications of GA with MCDM hybridization approaches. Such a hybrid technique has been well applied for digital machining scheme selection [108]. The MCDM approach AHP first determined the relative importance of the evaluation criteria, providing weight values for the GA to optimize machining scheme selection. This integration ensures that the GA focuses on criteria according to their significance, enhancing the efficiency and effectiveness of the optimization process. A notable work has been done by Wang et al. [109] to select the best among various maintenance strategies for a power plant. After applying the FAHP method for the selection, they developed a novel nonlinear fuzzy optimization model for deriving the crisp priorities from the fuzzy judgement matrix and solved it with the help of a basic GA. Application to the case study shows that the appropriate strategy for boilers is the estimated one. Recently, Goyal and Kaushal [5] have updated the optimization function proposed by Wang et al. [109] and solved it using an improved GA, generating better optimum values. An order distribution problem has also been solved by combining GA and BWM considering real case data [110]. In the field of Operations Research (OR), GA has been hybridized with PROMETHEE II to develop a design for the assembly line in supply chain management [97]. In another work, TOPSIS and GA were combined to investigate the efficiency of cargo transportation for the Brazilian Rail Cargo System [111]. GA has also helped in selecting the best vaccine against COVID-19, implemented with ELECTRE III and TOPSIS [101]. The concordance index can be understood using the ELECTRE III approach, while GA is renowned for its ability to separate individual decision-making preferences from overall decisions. The application of TOPSIS tailored the appropriate ranking considering an ideal and an anti-ideal solution. It has been observed that combining ELECTRE III-GA and TOPSIS is an ideal model to evaluate pandemic vaccinations. A MOSTP has been solved by MOGA to produce a Pareto solution set, and AHP was then applied to prioritize these results [112]. In another supply chain management study, GA is hybridized with FAHP-TOPSIS to control the demand and cost of material routing between supplier, producer and distributor [113]. In the year 2011, fuzzy logic was implemented with GA in the field of computer science to address the criteria’s weights, and TOPSIS was then applied to find the best result [114]. A new meta-search engine called Meta-Fusion was introduced in 2016 by Gupta and Singh [98], offering simultaneous access to other search engines; as a result, it gives the user a single comprehensive list as the final results of the best search engines. The suggested algorithm combines GA and Fuzzy-AHP (FAHP). The AGA-AHP combination evaluates the regional water resource carrying capacity [115]. In the field of engineering and neural networks, FAHP was also implemented with GA to select good-quality questions for web-based test sheets [100]. In the same year, GA was used to modify an unacceptable CR value of the FAHP approach and reduce the inconsistency to less than 0.1 [76]. With a subjective approach used to establish the concept of highest portfolio social return, Fernandez et al. [116] proposed an application of an extended non-outranked sorting GA and ELECTRE III to the challenge of allocating public funding to competing policies, projects or programs. Recently, MBGA, the combination of MOORA and GA, has solved the flow-shop scheduling problem [99]. A water resources risk assessment model combining subjective and objective weighting methods has been addressed by Zhao et al. [7] using an improved AHP with an accelerating GA; it constructs a judgment matrix for the evaluation indices and determines combination weights. The model generates a systematic comprehensive evaluation index and proposes water resources development plans based on risk levels. From all the applications above, it can be claimed that the synergy of GA and MCDM performs better than the state-of-the-art algorithms by a clear margin. A tabular representation of all these articles is given in Table 10 to summarize the key contributions.

Table 10 A meta-analysis of the literature of GA

4.2 Particle Swarm Optimization (PSO)

Another mathematical model that belongs to the family of MOAs is PSO [89]. Mimicking the collective intelligence of bird flocks and fish schools during hunting and feeding, PSO chooses the ideal locations [117]. The PSO algorithm was first introduced in 1995 by Kennedy and Eberhart [118]. PSO begins with the current positions of the swarm and updates each position [Eq. (4)] over the iterations [119]. Three years later, the authors revised PSO to include the inertia weight and the optimal condition for the particle and the swarm [81]. The method operates sequentially with several alternatives (particles) in the set of solutions (swarm). As a working process, each alternative improves its velocity [Eq. (5)] while considering its past and present locations in the swarm and then determines the best overall (global) position [10]. The equations that update the particle position and velocity are given by

$$\begin{aligned} x_{ij}(t+1)= x_{ij}(t) + v_{ij}(t+1) \end{aligned}$$
(4)
$$\begin{aligned} v_{ij}(t+1)&= w \cdot v_{ij}(t) + c_1 \cdot r_1 \cdot (pbest_{ij} - x_{ij}(t))\nonumber \\ & \quad + c_2 \cdot r_2 \cdot (gbest_{ij} - x_{ij}(t)) \end{aligned}$$
(5)

Each particle keeps track of the location in the search area where it has so far found the best answer, which is referred to as its ‘personal best’ or ‘pbest (pb)’. In addition to the pb value, particles also store a ‘global best position’, or ‘gbest (gb)’. The best solution to date in that alternative’s topological neighborhood is represented by gb. Besides pb and gb, a particle’s movement in the swarm also depends on a third factor called ‘velocity’. The pb and gb are updated for each particle during each iteration. The velocity is also updated with random components toward the pb and gb positions. Each particle modifies its position based on these three variables to see if it achieves a better fit [10]. The procedure continues iteratively until the termination requirements are satisfied. The step-by-step flow of PSO is drawn in Table 11. In general, the termination criteria can be the number of function evaluations, the number of runs or the least tolerable error.
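One iteration of Eqs. (4)–(5) for a whole swarm can be sketched as follows (typical parameter values are assumed; the function and variable names are illustrative):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO iteration per Eqs. (4)-(5).

    x, v, pbest: (particles x dimensions) arrays; gbest: (dimensions,).
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    # Eq. (5): inertia + cognitive pull (pbest) + social pull (gbest).
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Eq. (4): position update.
    return x + v_new, v_new
```

After each step, pbest and gbest would be refreshed from the new fitness values before the next call.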

In solving complicated problems, the PSO algorithm has been shown to be effective in its classical version [120]. It has proved itself a fast-converging algorithm in moving the particles toward the problem’s optimum [10]. PSO has been successfully applied in several fields like agriculture, health, social and natural science, engineering and material science [1, 121]. However, PSO has some drawbacks, notwithstanding its effectiveness in complex optimization tasks: it could not perform well on the stochastic excess problem [122], and it suffers from overly quick convergence [117] and premature stalling [122]. As a result, hybrid algorithms have arisen that improve the balance between exploration and exploitation to upgrade the quality of the solution [120, 123]. In 2009, PSO was used to recognize 3D objects by observing them from various perspectives [124].

A considerable number of applications have also been proposed in the literature with PSO and fuzzy-based approaches. In 2010, FARG and PSO were applied together in identifying groups of images, in which PSO matches the graphs among the images [125]. PSO, with cognitive Bayesian reasoning handling uncertainty in the data and generating decisions, classifies visual images by searching in the directed region [126]. FARG has also been implemented in the behaviour recognition of objects in video sequencing for arranging the scenes in the organization module; PSO has classified objects in video data [127]. An improved PSO has also been employed in image registration [119]. As a result, the comparison of test picture features with references is no longer required, which has increased the convergence rate and lowered the computational cost of the comparison. Qiu et al. [128] proposed a novel approach, the MO-PSO algorithm, to optimize forest harvesting practices by considering tree-level neighborhood interactions. MO-PSO led to improved spatial distribution patterns, increased tree species mixing, and reduced stand competition. The study provides insights into optimizing forest management decisions by addressing multi-dimensional spatial characteristics at the tree level.

This section deals with the application of PSO and its variations in combination with MCDM techniques in different fields of engineering, science and technology. In a personnel selection problem in 2012, PSO was utilized to solve an MCDM system using F-AHP [4]. The model transformed a prioritization problem into a nonlinear constrained optimization problem: first, F-AHP produced the judgement matrix, from which a fuzzy preference programming model drew the criteria equations; these equations were later transformed into a constrained nonlinear optimization model, which PSO solved with better results. Khani et al. [72] have also operated on a modified version of the Wang et al. [109] optimization model. They added an additional inequality to get the consistent solution directly from the optimization function. The method has been successfully applied in the field of distribution networks: an adaptive mutation aggregated PSO method solves the optimization model and successfully locates the optimal HIFDs in the distribution feeders. A shaft blasting quality evaluation model has been worked out successfully by integrating AHP and PSO [53]; this approach is smoothly implemented in artificial intelligence and computer programming technology, and improved accuracy has been achieved. In the field of OR, PSO has also been combined with AHP and TOPSIS to identify and improve the benefits of HTz in the economic growth of a country [129]: AHP constructs the judgement matrix, PSO solves the optimization problem (converted from the judgement matrix), and TOPSIS evaluates the results. In order to predict the crude oil production level, a model was developed based on AHP-PSO [130], where AHP selects the higher-weight parameters and PSO then optimizes them. In 2018, in the energy storage area, a hybrid approach combining TOPSIS and PSO was used for allocating energy storage in electric power systems [49]. AHP and PROMETHEE, combined with PSO, have been used as a methodology for supplier selection under disruption risk [131]. In a ballistic missile design parameter optimization problem, PSO constructed the variable weights and TOPSIS calculated the Euclidean distance between particles of each group [132]. One more MCDM approach, MOORA, has helped PSO in evaluating the performance of perforated pin fin heat sinks [133]: PSO verified the outcomes and the MCDM method identified the most suitable perforated fin structure. VIKOR and PSO have built a unique strategy and solved a multi-robot box-pushing problem [54]. An extended VIKOR is implemented with PSO to solve an MCDM problem with probabilistic linguistic information [134]; here, PSO helps each particle reach its best position and then VIKOR, by its distance-based technique, decides the optimal best. Recently, in 2022, TOPSIS was implemented in a dam reservoir problem with ChoA-PSO [135], which successfully optimized an objective function built on the sum of the squares of water scarcity during the operation period. A brick-up model has also been proposed for recombining distinct MOAs by using AHP [8], which was later validated on the CEC 2015 benchmark function sets. Among all the applications discussed above, the collaboration between PSO and MCDM surpasses the performance of various famous algorithms. The key contributions of all these articles have been summarized in Table 12.

Table 11 Mechanism of GA and PSO
Table 12 A meta-analysis of the literature of PSO

4.3 Ant Colony Optimization (ACO)

ACO is a population-based, decentralized, and probabilistic SI metaheuristic. Dorigo [137] first proposed it in his Ph.D. thesis with the goal of finding the best path across a graph based on ant behavior. Ants search for food guided by the quantity of pheromone left behind by their predecessors in the colony, which helps them locate the food source faster. Unlike GA, ACO does not apply evolutionary operators to solutions; rather, it creates an entirely new set of solutions for the next iteration. The movements of the ants are mapped onto the set of decision variables to change the population in the immediately following iteration. Importantly, in ACO the relative pheromone concentration determines the relative fitness of a path: the higher the concentration, the higher the probability of selecting that path (corresponding to a fitter solution). The rule for choosing a particular path, however, varies from problem to problem. The successive steps of ACO are arranged in Table 14. For choosing the better candidates based on their fitness, the roulette wheel has proven to be the most popular approach [138]. The amount of pheromone \((\tau )\) is updated [Eq. (6)] using the pheromone evaporation rate \((\rho )\) and the pheromone deposited by the ants. The equation is given by

$$\begin{aligned} \tau _{ij}(t+1) = (1 - \rho ) \cdot \tau _{ij}(t) + \sum _{k=1}^{m} \Delta \tau _{ij}^k(t) \end{aligned}$$
(6)

here, \(m\) is the number of ants and \(\Delta \tau _{ij}^k(t)\) is the pheromone deposited by ant \(k\) on edge \((i,j)\) at iteration \(t\).
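The update in Eq. (6) and the roulette-wheel path choice can be illustrated with a minimal Python sketch. This is not any reviewed paper's implementation; the function names and the dict-based edge representation are illustrative assumptions.

```python
import random

def update_pheromone(tau, delta, rho):
    """Apply Eq. (6): evaporate the existing pheromone, then add deposits.

    tau   -- dict mapping edge (i, j) -> current pheromone level tau_ij(t)
    delta -- dict mapping edge (i, j) -> total pheromone deposited by all
             m ants this iteration, i.e. the sum over k of delta-tau_ij^k(t)
    rho   -- pheromone evaporation rate in (0, 1)
    """
    return {edge: (1 - rho) * tau[edge] + delta.get(edge, 0.0)
            for edge in tau}

def roulette_wheel(edges, tau):
    """Pick the next edge with probability proportional to its pheromone."""
    total = sum(tau[e] for e in edges)
    r = random.uniform(0, total)
    cumulative = 0.0
    for e in edges:
        cumulative += tau[e]
        if cumulative >= r:
            return e
    return edges[-1]

# Example: two edges, rho = 0.5; ants deposited 1.0 on edge (0, 1) only
tau = {(0, 1): 2.0, (0, 2): 2.0}
tau = update_pheromone(tau, {(0, 1): 1.0}, rho=0.5)
print(tau)  # {(0, 1): 2.0, (0, 2): 1.0}
```

The reinforced edge (0, 1) keeps a higher pheromone level after evaporation, so the roulette wheel will favor it in the next construction step, which is exactly the positive-feedback mechanism described above.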

ACO has excelled in many practical applications, including combinatorial optimization problems [9], routing problems [139], scheduling [140], production management [141], machine learning [142], feature selection [143], etc. One of the primary success stories of ACO is its use for dynamic problems. The first such application was proposed in [144], dealing with circuit-switched network routing (e.g., classical telephone networks). ACO has also been implemented in AntNet, which Di Caro and Dorigo [139] used for packet-switched network routing (e.g., the Internet); it has been experimentally demonstrated to outperform a full set of state-of-the-art algorithms on various benchmark problems and has led to several advancements. Mavrovouniotis and Yang [145] used the ACO method to solve dynamic vehicle routing problems in 2015, with better outcomes in both academic and real-world cases. As a recent development in this field, ACO has managed to solve the spatial TSP, or TSP-3D [146]. Assignment problems have also been successfully solved using ACO; it has solved a knapsack assignment problem in combination with an intuitionistic fuzzy pheromone [147]. In another recent work, Falcon-Cardona and Coello Coello [148] used an established framework to provide a novel method for multi-objective problems, a variant of the ACO algorithm known as MOACO-RR. A hybridization of ACO and PSO [140] successfully minimized a test scenario of 600 benchmark instances in an ordered flow shop scheduling problem. In 2022, the CACO-LD technique was extended to solve the constrained ELD problem [149]. Recently, a new algorithm has been introduced [142] to address the global concern of increasing wildfires, employing deep learning models coupled with optimization algorithms for accurate prediction.
In this work, a hybridization of the BBO and ACO algorithms emphasized the importance of multi-factorial analysis in mapping fire-susceptible areas, providing valuable insights for wildfire prevention and land management.

Over the years, ACO with MCDM has been applied successfully in many real-life scenarios. In 2009, ACO and FAHP jointly designed an optimal Unmanned Aerial Vehicle (UAV) resource management scheme: ACO first finds the optimal solution, and FAHP then identifies the best configuration for the UAV [75]. In 2013, a hybridization of ABC and ACO was developed in combination with the decision-making method VIKOR, which optimized a multi-response parameter design problem for a TIR lens [150]. Another hybrid optimization system, combining ACO and a steady-state GA together with TOPSIS, tackled the ELD problem, ensuring robustness and stability in identifying Pareto optimal solutions [151]; TOPSIS then extracted a compromise set of best alternatives from these Pareto optimal solutions. Another hybrid MCDM approach, for green supplier selection in large group settings, has been presented by Quan et al. [152]: it integrates interval-valued intuitionistic uncertain linguistic sets, with ACO assisting in decision-maker clustering, linear programming determining the objective weights, and an extended MULTIMOORA method ranking the suppliers. Moreover, a comprehensive approach has been introduced for evaluating service quality in power systems, incorporating the power customer satisfaction index system [153]; it uses the TOPSIS method to determine relative satisfaction and ACO to determine the sorting weights, enhancing the credibility and scientific rigor of the assessment. Recently, a method called IO-ACO, based on interval outranking, has been developed and tested on two benchmark instances [9]; it approximates the region of interest better than competing methods, with ACO working as a multi-objective optimizer and the outranking MCDM method ELECTRE handling the vagueness. Additionally, using MOORA, an enhancement of the ACO algorithm based on an ensemble of heuristics has been performed [143].
Here, the ant migration is predicted based on the opinions of multiple experts. The method resolves the exploration/exploitation dilemma and improves the algorithm's stability; it has been applied to the ensemble feature selection problem to gauge the effectiveness of the suggested strategy. Across the applications explored in this section, the coordination between ACO and MCDM outperforms state-of-the-art algorithms by a notable margin. A meta-analysis of these articles is provided in Table 13.

Table 13 A meta-analysis of the literature of ACO

4.4 Artificial Bee Colony (ABC)

SI algorithms are motivated by the swarm behavior of social insects. The ABC algorithm, a relatively recent SI algorithm, was proposed by Karaboga et al. [154]. It is a technique for locating the best solution in numerical optimization: a relatively fast and simple stochastic search algorithm that mimics the foraging behavior of honey bees. In ABC, food sources represent the candidate solutions, and the quality (quantity of nectar) of a food source determines how fit a solution is. Three types of bees make up the population of the hive: employed, scout, and onlooker bees. An employed bee travels alone to a food source that has previously been visited, while an onlooker bee waits in the dance area to decide which food source to choose; a scout bee, in contrast, searches for food sources randomly. Each cycle of the ABC algorithm consists of three steps:

  1. identifying the scout bees for searching the potential random food sources;

  2. sending the employed bees to the selected food sources to measure their nectar amounts;

  3. sharing the information and selecting the food sources based on the nectar amounts.

It is worth noting that there is just one employed bee per food source; this implies that the number of actively working bees in the hive equals the number of nearby food sources. The position of a randomly selected employed bee is modified using a mutation process [Eq. (7)]. The sequential mechanism of ABC is given in Table 14. The equation for the mutated position \((x'_{ij})\) is given by

$$\begin{aligned} x'_{ij} = x_{ij} + \phi \cdot (x_{ij} - x_{kj}) \end{aligned}$$
(7)

here, \(x_{ij}\) is the current position, \(x_{kj}\) is the position of another randomly selected employed bee, and \(\phi\) is a random number between \(-1\) and 1.
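The employed-bee step of Eq. (7), followed by the greedy selection that ABC applies between the old and mutated positions, can be sketched in Python. This is a minimal illustrative sketch, not the implementation of any cited work; the function name, the list-based population, and the bound-clamping choice are assumptions.

```python
import random

def abc_mutate(positions, fitness, i, lower, upper):
    """One ABC employed-bee step implementing Eq. (7).

    A random dimension j and a random partner k != i are chosen; the
    candidate x'_ij = x_ij + phi * (x_ij - x_kj), with phi ~ U(-1, 1),
    replaces the old position only if it improves fitness (greedy selection).
    """
    dim = len(positions[i])
    j = random.randrange(dim)
    k = random.choice([s for s in range(len(positions)) if s != i])
    phi = random.uniform(-1, 1)

    candidate = list(positions[i])
    candidate[j] = positions[i][j] + phi * (positions[i][j] - positions[k][j])
    candidate[j] = min(max(candidate[j], lower), upper)  # keep within bounds

    # Greedy selection: keep the better of the old and mutated positions
    if fitness(candidate) < fitness(positions[i]):  # minimization
        positions[i] = candidate
    return positions[i]

# Example: minimize the sphere function f(x) = sum of x_j^2
sphere = lambda x: sum(v * v for v in x)
pop = [[1.0, 2.0], [0.5, -1.0], [3.0, 0.0]]
new_pos = abc_mutate(pop, sphere, i=0, lower=-5.0, upper=5.0)
print(sphere(new_pos) <= sphere([1.0, 2.0]))  # True: never worse
```

Because only one dimension is perturbed per step and the greedy rule never accepts a worse position, the food source (solution) improves monotonically; scout bees handle the case where a source is exhausted and must be replaced randomly.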

ABC has a wide research scope in fields such as neural networks [155], electrical engineering, supply chains [55], ranking frameworks [156], image processing [157, 158], etc. Karaboga and Ozturk [155] used ABC to train feed-forward neural networks. In 2011, to forecast stock prices, Hsieh et al. [159] developed an integrated system that combines wavelet transforms with an ABC-RNN. Bacanin et al. [160] recently enhanced ABC to optimize the hidden units and connection weights of artificial neural networks; the enhanced strategy overcomes the shortcomings of the original methodology by incorporating guided best-solution-constrained mechanisms and quasi-reflection-based learning. A unique clustering method built on ABC was proposed by Karaboga and Ozturk [161], who tested it on 13 representative data sets from the UCI machine learning repository. To distinguish between benign and malignant bone cancer, Lefteh et al. [157] presented a technique using fuzzy C-means clustering and the Modified Adaptive Neuro-Fuzzy Inference System (MANFIS) with the ABC algorithm. Image quality has been improved by Adlin Sharo and Raimond [162] utilizing fuzzy logic and the ABC approach; the method converts the image's RGB color values into HSV color values.

Many vital applications of ABC and its variants, along with MCDM, are discussed here. In 2013, a hybridization of ABC was developed in combination with the MCDM method VIKOR, which optimized multi-response parameter design problems [150]. ABC has also been used to classify a multi-criteria inventory problem, optimizing the weights of the criteria [55]; the VIKOR technique subsequently employed these weights as input parameters. ABC has been combined with AHP to maximize the accuracy of gene selection in cancer research [163]: AHP first filters the most relevant genes, and ABC then reduces them to a set of quality genes. More recently, the fuzzy TOPSIS model and the ABC algorithm have been combined in many applications, such as an effective way of recommending good hotels based on client preferences and real data [156]. In another study, influential users were identified in a social network to improve an advertising system [164]; ABC generated several solutions, from which the best one was selected using TOPSIS. Chang et al. [165] suggest two new representational frameworks for the trapezoidal and triangular membership functions; as a starting point, their study compares MCGP with NIOM using the ABC algorithm. This collaboration of ABC and MCDM surpasses the performance of many state-of-the-art algorithms. A summary of the key contributions is presented in Table 15.

Table 14 Mechanism of ACO and ABC
Table 15 A meta-analysis of the literature of ABC

The meticulous compilation and analysis of the data in Tables 10, 12, 13, and 15 provide a comprehensive overview of the application of GA, PSO, ACO, and ABC in conjunction with MCDM approaches. This structured meta-analysis not only offers valuable insights into the current state of research but also highlights emerging trends and potential avenues for future exploration in the field of MCDM. It is observed, however, that hybridization with an efficient local search clearly dominates the individual metaheuristics in most cases [113, 130, 134, 135, 150].

In this article, the research has succeeded in conceptualizing and classifying the MCDM approaches, the metaheuristic approaches, and their combinations. The most pertinent MCDM approaches fall into two categories: (a) classical forms of MCDM and (b) MCDM with fuzzy extensions. MCDM approaches also succeed in picking the most suitable solution from the set of solutions (the Pareto front) obtained by the metaheuristic approaches. Consequently, this field has become increasingly popular among researchers in recent times. Figure 11a, b gives quantitative information about these articles, almost all published in the last two decades.

Fig. 11
figure 11

a Classification of research articles, algorithm-wise. b Classification of research articles, area-wise

Figure 11a categorizes the articles by algorithm and shows that, among the four algorithms described in Sect. 4, more than 60% of the work has been performed by combining MCDM with GA alone. Figure 11b classifies the number of articles according to the major fields of application for each algorithm and shows that the most cited work is in the area of Computer Science and Artificial Intelligence (CS & AI). In Fig. 11a, b, the abbreviations used are Engg. (Engineering), OR (Operations Research), and E&E (Energy and Environment).

5 Overview of the Study

The study provides a pioneering exploration at the intersection of MOAs and MCDM methods. It distinguishes itself by methodically classifying and analyzing modern literature, illuminating the evolving terrain of these interconnected domains. The deliberate focus on the most prominent EAs (GA, PSO, ACO, and ABC) is one of its distinguishing characteristics. The chronological scope, covering studies from the past decade, not only guarantees an up-to-date and pertinent summary but also reflects the growing interest of researchers. The strategic emphasis on highly cited articles provides a concise yet insightful snapshot of major developments, streamlining the abundance of available methods; however, this selectiveness may inadvertently overlook valuable insights from less-cited works. The review's novelty lies in its ability to synthesize and organize a wealth of information, offering readers a comprehensive guide to the evolving landscape of MOA and MCDM integration. While recognizing the strengths and advancements in the reviewed literature, it also critically assesses limitations, contributing to a nuanced understanding of the state of the art in this dynamic research area.

The extensive literature survey has unveiled a substantial amount of research at the intersection of metaheuristic and MCDM approaches. Notably, the outlook for the key metaheuristic algorithms, including GA, PSO, ACO, and ABC, appears exceptionally promising in terms of adaptability and problem-solving capability, reflecting sustained popularity and a trajectory of diverse applications in the foreseeable future. The genetic interchange of the GA operators, the swarm intelligence of PSO, the pheromone following in ACO, and the information interchange through the waggle dance in ABC have substantially contributed to finding better optimal solutions to complex problems. Ongoing research activities and successful applications across industries suggest that these algorithms will continue evolving, finding relevance in emerging technologies, addressing complex optimization challenges, and contributing to diverse scientific disciplines. The future holds considerable potential for these metaheuristic optimization algorithms to remain instrumental in advancing both computational methodologies and decision-making processes. For a comprehensive understanding, the paper considers a curated selection of the top 15 most highly cited research articles (per Google Scholar citations), providing a detailed breakdown of their key contributions, employed algorithms, and application areas, organized year-wise in Table 16.

Table 16 Summary of some highly cited articles on metaheuristic-based MCDM approaches

The table above illustrates the versatility of hybridizing metaheuristics with MCDM methods. It is clear that GA has acquired more than half of the total citations. The publications also span a wide range of application areas. Feature selection stands out as a prominent application field, its publications carrying the highest citation counts.

6 Conclusion and Future Scope

In the last two decades alone, 60% of the total research articles on Multi-Criteria Decision-Making (MCDM) have been published, indicating a gradual improvement in methodologies for solving related problems. This study has focused on the symbiotic relationship between Metaheuristic Optimization Algorithms (MOAs) and MCDM, paving the way for future explorations in decision support systems and multi-criteria optimization solutions. Approximately 85% of the relevant articles have emerged in the last decade, signifying the growing popularity of combining metaheuristics with MCDM. Notably, researchers have embraced the synergistic power of hybridizing OTs with MCDM, particularly favoring metaheuristic approaches (which yield a set of solutions) over traditional optimization methods (which yield a single solution). Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and Artificial Bee Colony (ABC) exemplify this trend, with GA-MCDM prevailing in 60% of cases. The paper acknowledges the challenges and complexities in decision-making scenarios, emphasizing the imperative need for developing advanced MCDM processes. It recognizes the ongoing evolution of the field, including the influence of emerging technologies, demands for adaptation to dynamic environments, and the call for transparency and accountability. In response to these challenges, the study envisions the continuous innovation of MCDM processes to ensure their relevance and effectiveness in addressing diverse multi-criteria decision problems.

While feature selection problems and AI applications stand out, there remains ample room for improvement. The inclusion of effective operators in the cycle of Evolutionary Algorithms (EAs), or significant modifications to their framework (crossover, mutation, etc.), may be combined with MCDM to provide improved solutions. The working steps of two or more MCDMs may be hybridized to analyze the net effect with EAs and fine-tuned to achieve better output. Many other freshly developed metaheuristic algorithms (Fig. 10) with better exploitation and exploration capacity can also be hybridized with MCDM approaches. Most importantly, in all these approaches, the vagueness and uncertainty in the objectives or constraints of the metaheuristic problems can be effectively mitigated by incorporating fuzzy approaches. This review not only consolidates existing knowledge but also lays the groundwork for future endeavors, encouraging researchers and practitioners to explore new frontiers at the intersection of MOAs and MCDM.