1 Introduction

Human resources, a crucial factor in production and service industries, are constantly threatened by several hazards, one of the most important of which is occupational accidents. Every year, millions of people succumb to such accidents or to work-related diseases, which amounts to a direct loss of human resources (Moatari-Kazerouni et al. 2015). Such incidents have disastrous economic consequences for workers and their families, employers, insurance companies, and the community (Smith et al. 2015). Occupational accidents are also associated with mental, psychological, and financial problems for individuals, and the disability or death of a worker exacerbates the situation. At the global economic level, the most important effects are the withdrawal of active labor from the production cycle and a reduction in the global Gross Domestic Product (GDP) of 4–6% or even more (Soleimani and Fattahi 2017; ILO 2019). According to the International Labour Organization (ILO), occupational accidents and work-related diseases account for 2.78 million deaths annually, of which 86.3 percent (2.4 million) are disease-related; this corresponds to 5 to 7 percent of deaths worldwide (ILO 2019). Figure 1 shows the common occupational diseases that result in death, by continent. In addition, 374 million workers suffer injuries from non-fatal occupational accidents every year (Hämäläinen et al. 2017). For these reasons, implementing Health, Safety, and Environment (HSE) management systems has been increasingly recognized as a priority among industries and organizations.

Fig. 1 Work-related mortality rate derived from occupational disease, by continent (Hämäläinen et al. 2017)

Since the automotive industry generally comprises various sections such as assembly, painting, and mold-making lines, workers are at risk of being poisoned by toxic chemicals and injured during physical activities. Hazards include chemical inhalation, high-frequency noise, electric shock, poor ergonomics, and falling molds or parts during work, all of which are liable to result in serious accidents or disease (Yousefi et al. 2018). This underlines the necessity of implementing an HSE management system. Such a system significantly mitigates the likelihood of risk factors occurring by improving health and safety levels and raising environmental awareness at all organizational levels (Rezaee et al. 2020). This systematic framework provides methods and guidelines to identify hazardous risks and, consequently, to control or alleviate the side effects of such threats in the workplace (Pourreza et al. 2018). Hence, how to carry out an efficient risk assessment process has received significant attention from managers and researchers.

There is a diverse array of risk assessment approaches, of which Failure Mode and Effects Analysis (FMEA) is one of the most widely used standard techniques. This proactive technique is used to identify, evaluate, and prevent or eliminate the causes and effects of potential failures in a system (Liu 2016). Despite its numerous shortcomings, in most studies the FMEA technique has been applied based on the conventional Risk Priority Number (RPN) score (Liu et al. 2019a; Huang et al. 2020), which is obtained by multiplying three risk factors: Severity (S), Occurrence (O), and Detection (D). There is, however, no scientific logic behind this multiplication, and risk prioritization based on this score cannot be considered reliable (Kutlu and Ekmekçioğlu 2012). In addition, in real-world applications the SOD factors do not have equal importance and should not be treated as such when ranking risks or failure modes, whereas the conventional RPN score neglects the relative importance of risk factors (Chanamool and Naenna 2016). Indeed, in terms of improvement efforts, this score places a premium on the risk with the higher RPN, even though that risk may have a lower severity than other risks with lower RPN values (Baghery et al. 2018). As another critical defect, the conventional RPN fails to distinguish risks with the same RPN score even when the values of their risk factors differ (Rezaee et al. 2018b). Furthermore, the RPN formula ignores other essential factors, such as the cost of risk occurrence, which can play a critical role in risk assessment (Rezaee et al. 2017a). Moreover, the traditional FMEA technique cannot provide a framework for comparing risks directly based on the values of the risk factors rather than on their product (Tay et al. 2015). A risk assessment approach should provide decision-makers with outputs that have high separability and reliability. These features enable managers, given limited organizational resources, to plan corrective and preventive measures accurately and to avoid focusing on non-critical risks. Therefore, an effective approach is needed to identify critical risks more appropriately.

This study aims to present a novel approach that addresses the mentioned shortcomings of the traditional FMEA technique. The proposed approach is performed in two main phases using FMEA, the Fuzzy Best–Worst Method (FBWM), the Fuzzy C-means (FCM) algorithm, and the Combined Compromise Solution (CoCoSo) method. In the first phase, risks are identified through the FMEA technique. After determining the values of the risk factors, including SOD and two additional managerial criteria, FBWM is applied to calculate the weights of these factors. The reason for using this method is threefold (Guo and Zhao 2017): first, to address the weighting problem of the traditional FMEA technique; second, to capture the uncertainty arising from ambiguous and equivocal expert opinions; and third, it requires fewer pairwise comparisons than the Analytic Hierarchy Process (AHP), achieves better consistency than the conventional Best–Worst Method (BWM), and avoids the complexity of expressing preferences on fine-grained scales (e.g., the 1–9 scale). After obtaining the final weights of the risk factors, the unsupervised FCM algorithm is used to cluster the identified risks into distinct risk categories. In this phase, instead of calculating the RPN score, risks are compared directly on the basis of their risk factors, allowing a sensible and detailed comparison among them. In other words, clustering the risks enables Decision-Makers (DMs) to compare them quickly without losing any information, especially when there are resource or time limitations (Zhang et al. 2018; Duan et al. 2019).

After clustering the risks, to determine the critical cluster while taking the relative importance of the risk factors into account, the weighted Euclidean distance is calculated for each cluster, such that the greater a cluster's distance, the more critical it is. Each cluster contains a different number of risks, which may be large, especially in large-scale problems. Since providing corrective/preventive actions for all risks in the most critical cluster is likely to be time- and cost-consuming, DMs face the challenge of prioritizing these risks. To address this, the CoCoSo method, a novel and powerful ranking method (Yazdani et al. 2019), is used in the second phase to increase the separability of risk priorities and the reliability of the results compared with the FMEA. The weights derived from the FBWM in the first phase are applied in the calculations of this method. To illustrate the applicability of the proposed approach, it is applied in a manufacturing company. Although various ranking Multi-Criteria Decision Making (MCDM) methods have been proposed, in this study the results of several other methods are provided for comparison to demonstrate the robustness of the proposed one. The main contribution of the proposed approach is to focus on critical risks by clustering and prioritizing them, thereby reducing their negative effects on the system significantly compared with the FMEA technique and other similar approaches. Additional contributions are the application of different weights to the risk factors to determine the critical cluster using the FCM-FBWM approach, and the consideration of these weights and of the uncertainty in experts' opinions in the risk prioritization process to increase the reliability and separability of the outputs.

The rest of this study is organized as follows: Sect. 2 is dedicated to reviewing some studies about FMEA applications. In Sect. 3, supplementary explanations of the FCM algorithm, FBWM, and CoCoSo method are presented. In Sect. 4, the proposed approach is explained in detail. In Sect. 5, results are presented and discussed comprehensively along with a comparison with other methods and sensitivity analysis. Finally, in Sect. 6, concluding remarks and suggestions for future studies are presented.

2 Literature review

FMEA, a proactive, group-oriented technique for risk assessment and reliability analysis, has been successfully applied in a wide range of real case studies (Liu et al. 2018; Rastayesh et al. 2019). To implement this technique, a multi-disciplinary team in the field of study is first assembled (Carpitella et al. 2018). The following stages are then typically executed (Yousefi et al. 2018): (I) identifying potential failures in a system and determining the values of the SOD factors for those failures by the FMEA team; (II) calculating the RPN score for each failure and prioritizing the failures accordingly; (III) planning corrective/preventive actions for the failures with the highest priority. With the growing number of studies on FMEA, the number of developed or modified methods and applications is increasing significantly. Some of these methods are based on artificial intelligence algorithms (Jee et al. 2015; Adar et al. 2017; Rezaee et al. 2017b; Renjith et al. 2018; Yousefi et al. 2020) and mathematical programming (Garcia et al. 2013; Chang et al. 2013; Behraftar et al. 2017; Rezaee et al. 2017a; Bakhtavar and Yousefi 2019). Since the conventional RPN does not show acceptable performance in risk prioritization, and because multiple risk factors are involved in the FMEA technique, MCDM techniques are required to solve this problem (Liu et al. 2019a; Huang et al. 2020; Karunathilake et al. 2020).

Various MCDM methodologies have been integrated with the traditional FMEA technique; some of them are as follows. Helvacioglu and Ozen (2014) presented a combined approach based on the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and trapezoidal fuzzy numbers for ranking failure modes in the yacht system design process. Ilangkumaran et al. (2014) adopted fuzzy AHP to prioritize critical risks in the paper industry. Liu et al. (2015) utilized fuzzy AHP and the entropy method to weight risk factors, and applied the VIekriterijumsko KOmpromisno Rangiranje (VIKOR) method to rank failure modes. Tooranloo and Sadat-Ayatollah (2016) addressed uncertain concepts and insufficient data in failure mode analysis; they used fuzzy linguistic terms and an intuitionistic fuzzy hybrid TOPSIS to assign the weights of risk factors and to rank failure modes, respectively. Safari et al. (2016), instead of using the conventional RPN score, applied fuzzy VIKOR for risk prioritization. Liu et al. (2016) applied expert judgment and the entropy method to weight risk factors and used the ELimination Et Choice Translating REality (ELECTRE) method to rank the risks. Wang et al. (2017) proposed a house-of-reliability-based rough VIKOR approach to determine the order of failures while accounting for the ambiguity in the opinions stated by the FMEA team. Bian et al. (2018) took advantage of D-number theory to handle the uncertainty of the FMEA team members' opinions and then applied TOPSIS to rank the identified failures. Tian et al. (2018) presented a hybrid approach using the FBWM, relative entropy, and VIKOR methods to rank failures.

Sakthivel et al. (2018) employed multiple methods to achieve an acceptable risk prioritization: AHP to determine the weights of risk factors, and fuzzy TOPSIS and fuzzy VIKOR for ranking. To improve the risk assessment process using the FMEA technique, Liu et al. (2019b) used the AHP method to weight risk factors, and a fuzzy Graph Theory and Matrix (GTM) approach together with the DEcision-MAking Trial and Evaluation Laboratory (DEMATEL) technique to prioritize failure modes. Yazdi (2019) presented an interactive risk assessment approach based on the AHP and entropy methods, modifying the conventional RPN score on the basis of fuzzy set theory. Dabbagh and Yousefi (2019), in their hybrid approach, utilized a fuzzy cognitive map and the Multi-Objective Optimization based on Ratio Analysis (MOORA) method to rank risks, in which the former was used to weight the decision criteria while considering their causal relationships and the latter to determine the order of the risks. Ghoushchi et al. (2019) investigated combining Z-theory with the FMEA technique to consider both uncertainty and reliability in the risk assessment process; they used FBWM to weight the SOD factors and the MOORA method based on Z-theory to determine the critical failures. In a more recent study, Lo et al. (2020) presented an integrated approach based on the DEMATEL technique and the TOPSIS method to determine the critical failure mode. Yucesan and Gul (2021) introduced a holistic FMEA approach based on the FBWM and a fuzzy Bayesian network to determine the weights of risk factors and to assess the occurrence probabilities of the identified failure modes in an industrial kitchen equipment manufacturing facility. Celik and Gul (2021) developed an approach using the BWM and the Measurement of Alternatives and Ranking according to Compromise Solution (MARCOS) method in the context of interval type-2 fuzzy sets to assess dam construction safety. Yucesan et al. (2021) developed an extended version of the FMEA technique based on neutrosophic AHP to address the shortcomings of the RPN score in a case study of the textile industry.

Nowadays, owing to their inherent characteristics, applications of machine learning and data mining techniques have become widespread in various fields, including energy (Faizollahzadeh Ardabili et al. 2018; Shamshirband et al. 2019; Fan et al. 2020), environmental science (Wu and Chau 2013; Taormina and Chau 2015), aquaculture (Banan et al. 2020), medical science (Rezaee et al. 2021; Onari et al. 2021), and manufacturing (Wang et al. 2020; Dogan and Birant 2021). One of the issues that can be addressed using data mining techniques is clustering. In the following, this study focuses on applications of clustering techniques together with the FMEA approach in the risk analysis field. Tay et al. (2015) applied the fuzzy Adaptive Resonance Theory (ART) technique to cluster failure modes; in their study, a Euclidean distance-based similarity measure was first used to calculate the degree of similarity among failure modes, which were then clustered. Chang et al. (2017) applied a Self-Organizing Map (SOM) to cluster the corrective actions of failure modes and employed RPN intervals to order the components of each group. Duan et al. (2019) categorized failure modes using the k-means clustering algorithm: Double Hierarchy Hesitant Fuzzy Linguistic Term Sets (DHHFLTSs) were first used to describe the FMEA team members' linguistic opinions on failure modes, the weights of the SOD factors were then determined by the maximizing deviation method, and finally the failure modes were clustered into high-, medium-, and low-risk groups.

As can be seen, a significant number of studies rely on RPN-based approaches, and, as mentioned in the previous section, the conventional RPN has numerous shortcomings. As a viable alternative, a methodology based on the original values of the risk factors is likely to yield outcomes more compatible with practical applications (Tay et al. 2015). Moreover, in real-world problems, especially HSE risk assessment, in addition to the SOD factors, other essential factors such as the treatment cost and treatment duration resulting from the occurrence of each risk should be taken into consideration (Yousefi et al. 2018). Additionally, only limited studies have paid attention to weighting the risk factors in the FMEA technique. To deal with these drawbacks, this study provides a risk clustering approach using the FCM algorithm based on the S, O, D, C, and T (SODCT) factors. This algorithm determines a membership degree for each risk per cluster, which enables DMs to specify the number of risks in each cluster based on this degree, according to the problem studied (Zeraatpisheh et al. 2019). Also, in the presented hybrid approach, the FBWM, one of the most recent and powerful weighting methods (Guo and Zhao 2017), is used to weight the SODCT factors.

It should be noted that, since uncertainty in the FMEA team members' opinions on the risk factors is inevitable, the fuzzy form of the BWM is applied. Furthermore, in the studies that use clustering algorithms within the FMEA technique, prioritization of the risks in the critical cluster has not been implemented, although, owing to resource and time limitations, determining the most critical risks is of cardinal importance. To solve this problem, this study uses the CoCoSo method, one of the newest and most robust MCDM techniques (Ulutaş et al. 2020), to determine the order of the risks in the critical cluster, thereby increasing the separability of risk priorities and the reliability of the results compared with the FMEA.

3 Methodology

As stated, this study aims to present an approach to cluster and prioritize critical HSE risks based on the FMEA technique using FBWM, FCM algorithm, and CoCoSo method. Further explanations of these methods are provided in the following subsections.

3.1 FBWM

Rezaei (2015) introduced the BWM, a novel MCDM technique based on pairwise comparisons. Because it determines the weights of the decision criteria with crisp values (a 1–9 scale), this model cannot be implemented when the decision data are uncertain. Guo and Zhao (2017) extended the BWM and presented the FBWM to model ambiguity and uncertainty in human judgments. In this method, DMs express their opinions on the criteria in the form of linguistic variables: Absolutely Important (AI) indicates that, in a pairwise comparison, one criterion is much more important than the other, while Equally Important (EI) indicates that the compared pair are of the same importance. In the FBWM, after determining the decision-making criteria \(c_{j} , j = 1,2,...,n\), the best and the worst criteria are denoted \(c_{B}\) and \(c_{W}\), respectively. In this study, the risk factors are considered as the decision-making criteria, and pairwise comparisons are made between these factors. In the next step, the fuzzy preference vector of the best criterion over the others and the fuzzy preference vector of the others over the worst criterion are determined as \(\tilde{A}_{B}\) and \(\tilde{A}_{W}\), respectively, using the linguistic variables given in Table 1.

Table 1 Transformation rules of linguistic variables of DMs (Tian et al. 2018)

If \(\tilde{A}_{B} = (\tilde{a}_{{B1}} ,\tilde{a}_{{B2}} , \ldots ,\tilde{a}_{{Bn}} )\) and \(\tilde{A}_{W} = (\tilde{a}_{{1W}} ,\tilde{a}_{{2W}} , \ldots ,\tilde{a}_{{nW}} )\), the fuzzy preference of \(c_{B}\) over \(c_{j}\) is represented as \(\tilde{a}_{{Bj}} = (l_{{Bj}} ,m_{{Bj}} ,u_{{Bj}} )\), and the fuzzy preference of \(c_{j}\) over \(c_{W}\) as \(\tilde{a}_{{jW}} = (l_{{jW}} ,m_{{jW}} ,u_{{jW}} )\), where \(l\), \(m\), and \(u\) indicate the lower, middle, and upper values, respectively. Note that \(\tilde{a}_{{BB}} = (1,1,1)\) and \(\tilde{a}_{{WW}} = (1,1,1)\). Considering \(\bar{w}_{j}\), \(\bar{w}_{W}\), and \(\bar{w}_{B}\) as triangular fuzzy numbers, \(\bar{w}_{j} = (l_{j}^{w} ,m_{j}^{w} ,u_{j}^{w} )\) defines the fuzzy weight of \(c_{j}\). With these definitions, the fuzzy weights \((\bar{w}_{1} ^{*} ,\bar{w}_{2} ^{*} , \ldots ,\bar{w}_{n} ^{*} )\) can be obtained from the following model (Guo and Zhao 2017):

$$\begin{gathered} \min \,\tilde{\xi }_{{}} \hfill \\ s.t: \hfill \\ \end{gathered}$$
(1)
$$|\frac{{(l_{B}^{w} ,m_{B}^{w} ,u_{B}^{w} )}}{{(l_{j}^{w} ,m_{j}^{w} ,u_{j}^{w} )}} - (l_{{Bj}} ,m_{{Bj}} ,u_{{Bj}} )| \le (k^{*} ,k^{*} ,k^{*} )$$
(2)
$$|\frac{{(l_{j}^{w} ,m_{j}^{w} ,u_{j}^{w} )}}{{(l_{W}^{w} ,m_{W}^{w} ,u_{W}^{w} )}} - (l_{{jW}} ,m_{{jW}} ,u_{{jW}} )| \le (k^{*} ,k^{*} ,k^{*} )$$
(3)
$$\sum\limits_{{j = 1}}^{n} {R(\bar{w}_{j} } ) = 1$$
(4)
$$l_{j}^{w} \le m^{w} _{j} \le u_{j}^{w} \,\,\forall \,j,j = 1,2, \ldots ,n$$
(5)
$$l_{j}^{w} \ge 0,\forall \,j,j = 1,2, \ldots ,n$$
(6)

where \(\tilde{\xi } = (l^{\xi } ,m^{\xi } ,u^{\xi } )\) with \(l^{\xi } \le m^{\xi } \le u^{\xi }\), and \(\tilde{\xi }^{*} = (k^{*} ,k^{*} ,k^{*} )\) with \(k^{*} \le l^{\xi }\). Here, \(R\) denotes the set of real numbers, and a fuzzy number \(\tilde{a}\) on \(R\) is a triangular fuzzy number if its membership function \(\mu _{{\tilde{a}}} (x):\,\,R \to [0,1]\) is given by Eq. (7).

$$\mu _{{\tilde{a}}} (x) = \left\{ {\begin{array}{*{20}c} 0 & {x \in ( - \infty ,l)} \\ {\frac{{x - l}}{{m - l}}} & {x \in \left[ {l,m} \right]} \\ {\frac{{u - x}}{{u - m}}} & {x \in \left[ {m,u} \right]} \\ 0 & {x \in (u,\infty )} \\ \end{array} } \right.$$
(7)
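For readers who wish to compute these membership degrees numerically, the following short Python sketch implements Eq. (7) directly; the function name and the sample input are ours, chosen for illustration only.

```python
def triangular_membership(x: float, l: float, m: float, u: float) -> float:
    """Membership degree of x in the triangular fuzzy number (l, m, u), per Eq. (7)."""
    if x < l or x > u:
        return 0.0
    if x <= m:
        return (x - l) / (m - l) if m != l else 1.0  # rising edge on [l, m]
    return (u - x) / (u - m) if u != m else 1.0      # falling edge on [m, u]


# Example: the fuzzy number (3/2, 2, 5/2) used for one of the linguistic terms
print(triangular_membership(1.75, 1.5, 2.0, 2.5))  # 0.5
```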

After the fuzzy weights are obtained, the Graded Mean Integration Representation (GMIR) \(R(\tilde{a})\) is used to transform the fuzzy weight of each criterion into a crisp weight. The GMIR \(R(\tilde{a}_{j} )\) formula is as follows:

$$R(\tilde{a}_{j} ) = \frac{{l_{j} + 4m_{j} + u_{j} }}{6}$$
(8)

In the final stage, the Consistency Ratio (CR) can be computed as \(CR = \tilde{\xi }^{*} /CI\) to evaluate the degree of consistency of the pairwise comparisons. In this formulation, \(\tilde{\xi }^{*}\) is the optimal objective value of the FBWM, and the Consistency Index (CI) is the maximum possible value given in Table 1. A value of \(CR \le 0.1\) is considered acceptable (Tian et al. 2018).
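Solving model (1)–(6) requires a nonlinear solver (LINGO is used later in this study); the defuzzification and consistency check that follow the optimization, however, are simple enough to sketch in Python. The numeric values below are illustrative, except the pair (0.4258, 8.04), which reproduces the CR reported in subsection 5.1.2.

```python
def gmir(l: float, m: float, u: float) -> float:
    """Graded Mean Integration Representation of a triangular fuzzy number, Eq. (8)."""
    return (l + 4 * m + u) / 6


def consistency_ratio(xi_star: float, ci: float) -> float:
    """CR = xi*/CI; pairwise comparisons are considered acceptable when CR <= 0.1."""
    return xi_star / ci


# Defuzzifying an illustrative fuzzy weight (0.35, 0.40, 0.45)
print(round(gmir(0.35, 0.40, 0.45), 4))           # 0.4
print(round(consistency_ratio(0.4258, 8.04), 3))  # 0.053, as in Sect. 5.1.2
```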

3.2 FCM algorithm

Clustering is an automated data analysis process in which a given data set is divided into clusters such that the data points within a cluster are more similar to one another than to the points in other clusters. In real-world applications, however, some data points may belong to multiple clusters. To handle this, fuzzy clustering algorithms have been introduced, which can separate overlapping data points using fuzzy logic (Rezaee et al. 2018a). The FCM algorithm, first presented by Duda (1973), is one of the most common fuzzy clustering algorithms. In this algorithm, each data point can belong to more than one cluster with different membership degrees; that is, the membership of a data point has a fuzzy (uncertain) nature and takes a value between 0 and 1. After defining the data point vector \(X = (x_{1} ,x_{2} , \ldots ,x_{n} )^{T} \subseteq R\) as input and determining the number of clusters, \(k\), the FCM algorithm is implemented and the matrices \(U\) and \(V\) are calculated. The matrix \(V = (v_{1} ,v_{2} , \ldots ,v_{k} )^{T}\) contains the cluster centers, and \(U = [u_{{ij}} ]_{{\kappa \times n}}\) is the membership matrix. The objective function of this algorithm is as follows (Bezdek 2013):

$$\min J_{m} (u,v;x) = \sum\limits_{{i = 1}}^{k} {\sum\limits_{{j = 1}}^{n} {u_{{ij}}^{m} \left\| {x_{j} - v_{i} } \right\|^{2} } }$$
(9)

In Eq. (9), \(u_{{ij}}\) and \(v_{i}\) represent the membership of \(x_{j}\) in the \(i\)th cluster and the center of the \(i\)th cluster, respectively. Also, \(\left\| {x_{j} - v_{i} } \right\|\) denotes the Euclidean distance between \(x_{j}\) and \(v_{i}\), and \(m\) is a parameter, which can be any real number greater than 1, used to fuzzify the memberships. Note that \(\sum\nolimits_{{i = 1}}^{k} {u_{{ij}} } = 1\) for all \(j = 1,2, \ldots ,n\). Equation (9) cannot be minimized directly, so the Alternating Optimization (AO) algorithm, an iterative technique, is used for this purpose. Based on the AO, the memberships minimizing \(J_{m} (u,v;x)\) and the center of the \(i\)th cluster are given by Eqs. (10) and (11), respectively.

$$u_{{ij}} = \frac{1}{{\sum\nolimits_{{p = 1}}^{k} {\left( {\frac{{\left\| {x_{j} - v_{i} } \right\|}}{{\left\| {x_{j} - v_{p} } \right\|}}} \right)^{{2/(m - 1)}} } }},\quad 1 \le i \le k,\;1 \le j \le n$$
(10)
$$v_{i} = \frac{{\sum\nolimits_{{j = 1}}^{n} {(u_{{ij}} )^{m} x_{j} } }}{{\sum\nolimits_{{j = 1}}^{n} {(u_{{ij}} )^{m} } }}$$
(11)
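A compact NumPy sketch of the alternating-optimization loop defined by Eqs. (9)–(11) is given below. The fuzzifier m, the stopping tolerance, and the small demonstration matrix are illustrative choices, not values prescribed by the algorithm or by this study.

```python
import numpy as np


def fcm(X: np.ndarray, k: int, m: float = 2.0, max_iter: int = 200,
        tol: float = 1e-6, seed: int = 0):
    """Fuzzy C-means via alternating optimization (Eqs. 9-11).
    X: (n_points, n_features). Returns memberships U (k x n_points) and centers V."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((k, n))
    U /= U.sum(axis=0, keepdims=True)              # memberships of each point sum to 1
    for _ in range(max_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)               # Eq. (11)
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # ||x_j - v_i||
        d = np.fmax(d, 1e-12)                                      # guard against zero distance
        inv = d ** (-2.0 / (m - 1))
        U_new = inv / inv.sum(axis=0, keepdims=True)               # Eq. (10)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return U, V


# Toy SODCT-like matrix (rows are risks rated on a 1-5 scale), clustered into two groups
X = np.array([[5, 4, 4, 5, 4], [1, 2, 1, 1, 1], [3, 3, 2, 3, 2], [5, 5, 4, 4, 5]], float)
U, V = fcm(X, k=2)
print(U.argmax(axis=0))  # hard assignment of each risk to its highest-membership cluster
```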

3.3 CoCoSo method

The CoCoSo method was recently proposed by Yazdani et al. (2019); it can compete with other MCDM techniques such as TOPSIS, COmplex PRoportional ASsessment (COPRAS), VIKOR, and MOORA, and produces more robust results. The final ranking in the CoCoSo method is based on three aggregation strategies. To apply the method, the alternatives and the related criteria must first be defined; in this study, the identified risks and the five risk factors are considered as the alternatives and the decision-making criteria, respectively, to determine the priority of each critical risk. If \(x_{{ij}}\) denotes the value of criterion \(j, j = 1,2, \ldots ,n\), for alternative \(i, i = 1,2, \ldots ,m\), the decision matrix is

$$X = \left[ {\begin{array}{*{20}c} {x_{{11}} } & {x_{{12}} } & \cdots & {x_{{1n}} } \\ {x_{{21}} } & {x_{{22}} } & \cdots & {x_{{2n}} } \\ \vdots & \vdots & \ddots & \vdots \\ {x_{{m1}} } & {x_{{m2}} } & \cdots & {x_{{mn}} } \\ \end{array} } \right]$$

which is normalized using the compromise normalization equations as follows:

$$r_{{ij}} = \frac{{x_{{ij}} - \mathop {\min x_{{ij}} }\limits_{i} }}{{\mathop {\max x_{{ij}} }\limits_{i} - \mathop {\min x_{{ij}} }\limits_{i} }} \forall i\forall j\;{\text{Positive }}\left( {{\text{benefit}}} \right){\text{ criterion}}$$
(12)
$$r_{{ij}} = \frac{{\mathop {\max x_{{ij}} }\limits_{i} - x_{{ij}} }}{{\mathop {\max x_{{ij}} }\limits_{i} - \mathop {\min x_{{ij}} }\limits_{i} }} \forall i\forall j\;{\text{Negative }}\left( {{\text{cost}}} \right){\text{ criterion}}$$
(13)

After obtaining the normalized matrix, the weight of each criterion \(w_{j}\) is determined by the DMs; then \(S_{i}\) and \(P_{i}\), denoting the weighted comparability sequence and the exponential weight of the comparability sequence, respectively, are calculated for each alternative by Eqs. (14) and (15).

$$S_{i} = \sum\limits_{{j = 1}}^{n} {(w_{j} r_{{ij}} )} \forall i$$
(14)
$$P_{i} = \sum\limits_{{j = 1}}^{n} {(r_{{ij}} )} ^{{w_{j} }} \forall i$$
(15)

Note that Eqs. (14) and (15) are based on the Simple Additive Weighting (SAW) and Exponentially Weighted Product (EWP) methods, respectively. In the next stage, three aggregation strategies (\(k_{ia}\), \(k_{ib}\), \(k_{ic}\)) are used to compute the relative weights of the alternatives, according to Eqs. (16) to (18).

$$k_{{ia}} = \frac{{P_{i} + S_{i} }}{{\sum\limits_{{i = 1}}^{m} {(P_{i} + S_{i} )} }}\;\forall i$$
(16)
$$k_{{ib}} = \frac{{S_{i} }}{{\mathop {\min S_{i} }\limits_{i} }} + \frac{{P_{i} }}{{\mathop {\min P_{i} }\limits_{i} }}\;\forall i$$
(17)
$$k_{{ic}} = \frac{{\lambda (S_{i} ) + (1 - \lambda )(P_{i} )}}{{(\lambda \mathop {\max S_{i} + }\limits_{i} (1 - \lambda )\mathop {\max }\limits_{i} P_{i} )}}\;0 \le \lambda \le 1 \forall i$$
(18)

where \(\lambda\) is determined by the DMs (usually set to 0.5). The final ranking is based on Eq. (19): after this score is calculated for each alternative, the scores are sorted in decreasing order, and the larger the score, the better the alternative.

$$k_{i} = (k_{{ia}} k_{{ib}} k_{{ic}} )^{{\frac{1}{3}}} + \frac{1}{3}(k_{{ia}} + k_{{ib}} + k_{{ic}} )\;\forall i$$
(19)
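The computations in Eqs. (12)–(19) can be collected into a single routine. The NumPy sketch below is ours: the epsilon-style guard on constant columns and the treatment of all five risk factors as benefit-type criteria in the illustrative call are assumptions, not prescriptions of the method.

```python
import numpy as np


def cocoso(X: np.ndarray, w: np.ndarray, benefit: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """CoCoSo scores (Eqs. 12-19). X: (m_alternatives, n_criteria) decision matrix,
    w: criteria weights summing to one, benefit: True where a criterion is benefit-type."""
    span = X.max(axis=0) - X.min(axis=0)
    span = np.where(span == 0, 1.0, span)                      # avoid 0/0 on constant columns
    r = np.where(benefit,
                 (X - X.min(axis=0)) / span,                   # Eq. (12)
                 (X.max(axis=0) - X) / span)                   # Eq. (13)
    S = (r * w).sum(axis=1)                                    # Eq. (14)
    P = (r ** w).sum(axis=1)                                   # Eq. (15)
    ka = (P + S) / (P + S).sum()                               # Eq. (16)
    kb = S / S.min() + P / P.min()                             # Eq. (17), assumes S.min, P.min > 0
    kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())  # Eq. (18)
    return (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3      # Eq. (19)


# Illustrative call: three alternatives rated on five factors, weights roughly like Table 3
X = np.array([[5, 4, 3, 5, 4], [4, 5, 5, 3, 3], [3, 3, 4, 4, 5]], float)
w = np.array([0.40, 0.09, 0.15, 0.24, 0.12])
scores = cocoso(X, w, benefit=np.ones(5, dtype=bool))
print(np.argsort(-scores))  # alternatives from most to least critical
```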

4 Proposed approach

In this section, the proposed decision support system for prioritizing critical HSE risks is explained in detail. The approach is implemented in two combined phases: (I) FMEA, FBWM, and the FCM algorithm; (II) FBWM and CoCoSo. In the first phase, the HSE risks are identified, and the values of their risk factors are determined by the multi-disciplinary FMEA team, forming the initial decision matrix. These factors comprise the SOD factors and two extra factors, C and T, representing the treatment cost and treatment duration, respectively, which are included in this study because of their importance in the HSE risk assessment process. The team also determines the best and the worst risk factors in terms of importance and then makes pairwise comparisons between the risk factors and the best and worst factors using the linguistic variables presented in Table 1.

In practical applications, the ambiguity in individual judgments cannot be neglected, and ignoring it may lead to unreliable results. To address this problem, the FBWM is employed in this phase. To perform this model, the linguistic variables are first converted into fuzzy values according to the fuzzy numbers defined for each linguistic term in Table 1. Then, by implementing the model described in subsection 3.1, the weights of the risk factors are obtained. Since the calculated weights are fuzzy numbers, Eq. (8) is used to obtain crisp values. One of the main aims of this study is to provide a ranking approach based on the original values of the risk factors instead of the conventional RPN score. Therefore, in the next stage of this phase, all risks are clustered using an unsupervised algorithm, since such algorithms can be applied regardless of whether the data are labeled. The FCM algorithm is employed owing to its soft assignments compared with the k-means algorithm (Cardone and Di Martino 2020). By performing this algorithm, the membership degrees of the identified HSE risks per cluster are obtained, enabling DMs to specify the number of risks in each cluster according to the problem studied. Then, the Euclidean distance between the center point of each cluster and the origin of the coordinate system is calculated. Since the aim is to identify the critical cluster based on the weights of the SODCT factors, the weighted Euclidean distance is computed, and the larger this measure, the more critical the cluster.
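The cluster-selection step can be sketched as follows. The paper does not spell out the weighted-distance formula, so the common form \(\sqrt{\sum_j w_j c_j^2}\) is assumed here; the membership threshold is likewise a placeholder for the value chosen in Sect. 5. In this sketch, U and V are the membership matrix and cluster centers returned by the FCM step, and w is the crisp SODCT weight vector from the FBWM.

```python
import numpy as np


def weighted_distance(center: np.ndarray, w: np.ndarray) -> float:
    """Weighted Euclidean distance of a cluster center from the origin of the factor
    space (assumed form: sqrt(sum_j w_j * c_j**2))."""
    return float(np.sqrt(np.sum(w * center ** 2)))


def critical_cluster(U: np.ndarray, V: np.ndarray, w: np.ndarray, threshold: float = 0.45):
    """Return the index of the most critical cluster (largest weighted distance)
    and the indices of risks whose membership in it meets the minimum threshold."""
    distances = np.array([weighted_distance(v, w) for v in V])
    crit = int(distances.argmax())
    members = np.where(U[crit] >= threshold)[0]
    return crit, members, distances
```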

In the second phase, the risks belonging to the critical cluster are prioritized. This is necessary for several reasons, including time, budget, and human resource limitations on taking corrective/preventive actions for all risks; prioritizing the critical risks is therefore of considerable importance in effectively reducing their negative impacts on the system. To this end, the CoCoSo method, a recent MCDM technique that produces robust results compared with similar techniques (Yazdani et al. 2019), is applied. This technique is implemented using the outputs of the previous steps: the risks in the critical cluster and their five risk factors are considered as the alternatives and evaluation criteria, respectively. In the first step of the CoCoSo method, the decision matrix is formed and normalized using Eqs. (12) and (13). Then, using the weights obtained from the FBWM, the aggregation strategies \(k_{{ia}}\), \(k_{{ib}}\), and \(k_{{ic}}\) are calculated. Finally, the score \(k_{i}\) is obtained, and the most critical risks are recognized. This clearer prioritization enables DMs to handle the critical risks of a system more efficiently with respect to the available resources. The implementation stages of the proposed approach are shown in Fig. 2.

Fig. 2 The flowchart of the proposed approach

5 The results analysis

The results of applying the proposed approach to prioritize the HSE risks in the studied company are discussed in subsection 5.1. To validate the approach, the obtained results are compared with those of other similar methods in subsection 5.2. In the last subsection, 5.3, a sensitivity analysis is carried out to show the applicability of the approach.

5.1 The results of performing the proposed approach

As described in the previous sections, the proposed approach is based on the FMEA technique, FBWM, FCM algorithm, and CoCoSo method. The results of each phase are explained in the three subsequent parts.

5.1.1 Risks identification by the FMEA technique

The proposed approach has been employed in a manufacturing company active in the automotive industry to prioritize the identified HSE risks. Following HSE rules in the automotive industry is important because workers are at risk of exposure to many toxic chemicals and physical hazards, such as unpleasant odors, chemical inhalation, high-frequency noise, inappropriate lighting, and electric shock. These risks may lead to accidents or chronic occupational diseases. Therefore, this study implements the proposed approach in this industry. The 62 risks in this field of study were identified by the FMEA team (Yousefi et al. 2018), and the values of the risk factors for each risk, as determined by the team, are shown in Table 2.

Table 2 The identified HSE risks and the value of their risk factors

In addition to the values of the three main FMEA factors (SOD), this table shows the values of the two extra factors, C and T, assigned by the FMEA team. As can be seen, these values range from 1 to 5: a value of 1 for one of the SOD factors indicates very low importance, and 5 indicates very high importance. That is, a rating of 1 means that the severity of injuries arising from the related risk is very low, that it occurs rarely, and that it is easily detectable, whereas a rating of 5 means that the related risk can cause death or permanent disability, happens frequently, and is only partially detectable. If the treatment cost of a risk is less than 5,000,000 or more than 20,000,000 currency units, its relative importance is 1 or 5, respectively (Dabbagh and Yousefi 2019). In terms of T, ratings of 1 and 5 mean that the treatment duration of the risk is less than one week or more than eight weeks, respectively.

5.1.2 Determining the weight of risk factors by the FBWM

To weight the risk factors under uncertainty, the FBWM has been used in this study. This model enables DMs to determine the weights of the decision criteria more flexibly and to achieve more reliable and precise results than the conventional BWM. According to the implementation stages of the FBWM, the FMEA team first determined the best and the worst factors and then made pairwise comparisons using the linguistic variables shown in Table 1. Risk factors S and O were selected as the best and the worst criteria, respectively; the preferences of the best criterion over all criteria \((\tilde{A}_{B} = (\tilde{a}_{{SS}} ,\tilde{a}_{{SO}} ,\tilde{a}_{{SD}} ,\tilde{a}_{{SC}} ,\tilde{a}_{{ST}} ))\) and of all criteria over the worst criterion \((\tilde{A}_{W} = (\tilde{a}_{{SO}} ,\tilde{a}_{{OO}} ,\tilde{a}_{{DO}} ,\tilde{a}_{{CO}} ,\tilde{a}_{{TO}} ))\) are reported in Table 3. The linguistic terms for the risk factors are converted into fuzzy numbers using Table 1, giving \(\tilde{A}_{B}\) = [(1,1,1), (7/2,4,9/2), (5/2,3,7/2), (3/2,2,5/2), (5/2,3,7/2)] and \(\tilde{A}_{W}\) = [(7/2,4,9/2), (1,1,1), (3/2,2,5/2), (5/2,3,7/2), (2/3,1,3/2)]. Finally, the FBWM described in subsection 3.1 was solved using LINGO 17.0 software, and after the fuzzy weights were obtained, the final crisp weights were calculated based on Eq. (8). The results are shown in Table 3.

Table 3 The FMEA team’s preferences and the weight of risk factors obtained by the FBWM

According to the results, risk factor S, with a weight of 0.4024, and O, with a weight of 0.0935, have the highest and the lowest weights, respectively, as would be expected. Risk factors C, D, and T, with weights of 0.2397, 0.1465, and 0.1179, occupy the next positions, respectively. Given \(\tilde{\xi }^{*}\) = (0.4258, 0.4258, 0.4258) and the CI shown in Table 1, \(CR\) = 0.4258/8.04 = 0.053, which is less than 0.1 and thus indicates an acceptable consistency ratio.

5.1.3 Identifying the cluster of critical risks by the FCM algorithm

According to the description of the proposed approach, the aim is to provide a clearer prioritization than the traditional FMEA technique in order to reduce the implementation costs of corrective/preventive actions. Therefore, the identified risks shown in Table 2 must be clustered based on the values of the SODCT factors. The FCM clustering algorithm handles overlapping data points more flexibly, since each point can belong to two or more clusters with different memberships; DMs can therefore determine the number of data points in each cluster by choosing different minimum membership thresholds. To implement the FCM algorithm, the number of clusters is first specified, and the values of the SODCT factors for each risk are then given to the algorithm as inputs. In this study, four clusters were considered, and the algorithm was implemented with the help of MATLAB 2016a software.

The outputs are the membership degrees of the risks per cluster. By setting a minimum membership threshold, it can be specified which risks belong to the cluster examined. Since the data are unlabeled, it must be determined which cluster is critical. To this end, the center point of each cluster was calculated based on Eq. (11), and the Euclidean distance between the center point and the origin of the coordinate system was then computed for each cluster. To provide a clustering more compatible with practical problems, the weights of the SODCT factors obtained from the FBWM were used to compute the weighted Euclidean distance. On the basis of this distance, the clusters were labeled in four classes: Intolerable (unacceptable), Major, Tolerable, and Minor (acceptable). The larger this distance, the more critical the risks of that cluster; the Intolerable and Minor clusters have the largest and the smallest weighted Euclidean distances, respectively. This measure enables DMs to label the clusters based on the importance of the risk factors in the system studied. The results are listed in Table 4.

Table 4 Center points and weighted Euclidean distance per cluster

As can be seen, cluster four, with a weighted Euclidean distance of 3.7816, contains the intolerable risks and was therefore considered the critical cluster. In the next stage, the risks belonging to this cluster must be determined so that they can be prioritized in the next phase of the presented approach. To do so, the minimum membership threshold was set to 0.45, and the risks of this cluster were identified using the membership degrees; they are highlighted in Table 5.

Table 5 Risks belonging to the critical cluster

5.1.4 Prioritizing the critical risks by the CoCoSo method

In this part, the results of the second phase are explained. According to Table 5, 17 risks are assigned to the critical cluster and have to be ranked by the CoCoSo method. The method is implemented based on the outputs of both the FBWM and the FCM algorithm: the alternatives are the 17 critical risks, and the criteria weights are the weights of the SODCT factors obtained from the FBWM.

To implement this method, following subsection 3.3, the normalized decision matrix was first formed, and the weighted comparability sequence \(S_{i}\) and the exponential weight of the comparability sequence \(P_{i}\) were then calculated using Eqs. (14) and (15) as follows.

$$\begin{aligned} S_{i} = \sum\limits_{{j = 1}}^{5} {(w_{j} r_{{ij}} )} = (w_{1} r_{{i1}} + w_{2} r_{{i2}} + w_{3} r_{{i3}} + w_{4} r_{{i4}} + w_{5} r_{{i5}} ) \\ & = (w_{S} r_{{i1}} + w_{O} r_{{i2}} + w_{D} r_{{i3}} + w_{C} r_{{i4}} + w_{T} r_{{i5}} )\, \forall i = 1,2, \ldots ,17 \\ \end{aligned}$$
$$P_{i} = \sum\limits_{{j = 1}}^{5} {(r_{{ij}} )} ^{{w_{j} }} = r_{{i1}}^{{w_{1} }} + r_{{i2}}^{{w_{2} }} + r_{{i3}}^{{w_{3} }} + r_{{i4}}^{{w_{4} }} + r_{{i5}}^{{w_{5} }} = r_{{i1}}^{{w_{S} }} + r_{{i2}}^{{w_{O} }} + r_{{i3}}^{{w_{D} }} + r_{{i4}}^{{w_{C} }} + r_{{i5}}^{{w_{T} }} \forall i = 1,2, \ldots ,17$$

Afterward, the three aggregation strategies \(k_{{ia}}\), \(k_{{ib}}\), and \(k_{{ic}}\) were computed based on Eqs. (16) to (18), respectively, as follows. Note that in this study \(\lambda = 0.5\).

$$k_{{ia}} = \frac{{P_{i} + S_{i} }}{{\sum\limits_{{i = 1}}^{{17}} {(P_{i} + S_{i} )} }} = \frac{{P_{i} + S_{i} }}{{(P_{1} + S_{1} ) + (P_{2} + S_{2} ) + \cdots + (P_{{17}} + S_{{17}} )}} \;\forall i = 1,2, \ldots ,17$$
$$k_{{ib}} = \frac{{S_{i} }}{{\mathop {\min S_{i} }\limits_{i} }} + \frac{{P_{i} }}{{\mathop {\min P_{i} }\limits_{i} }} = \frac{{S_{i} }}{{\min (S_{1} ,S_{2} , \ldots ,S_{{17}} )}} + \frac{{P_{i} }}{{\min (P_{1} ,P_{2} , \ldots ,P_{{17}} )}} \;\forall i = 1,2, \ldots ,17$$
$$k_{{ic}} = \frac{{\lambda (S_{i} ) + (1 - \lambda )(P_{i} )}}{{(\lambda \mathop {\max S_{i} + }\limits_{i} (1 - \lambda )\mathop {\max }\limits_{i} P_{i} )}} = \frac{{0.5(S_{i} ) + 0.5(P_{i} )}}{{0.5\mathop {\max (S_{1} ,S_{2} , \ldots ,S_{{17}} ) + }\limits_{{}} 0.5\mathop {\max }\limits_{{}} (P_{1} ,P_{2} , \ldots ,P_{{17}} )}} \;\forall i = 1,2, \ldots ,17$$

After obtaining these values, the final ranking score \(k_{i}\), \(\forall i = 1,2, \ldots ,17\), was calculated using Eq. (19). The results are shown in Table 6.

Table 6 The results of the implementation of the CoCoSo method

As can be seen, the 17 risks belonging to the critical cluster were prioritized across 14 ranks: risk R10, risks R17 and R18, and risk R27, with scores of 2.1276, 2.05, and 2.027, occupy the first, second, and third ranks, respectively. In addition, the pairs (R01, R38), (R17, R18), and (R11, R51) share the same ranks because their SODCT factors have equal values (see Table 2). This clearer prioritization allows DMs to manage the critical risks more effectively with respect to human and financial resources as well as time limitations.

5.2 Comparison with other methods

In this part, the results of the proposed approach are compared with those of other conventional methods to demonstrate its applicability. To this end, the prioritization results of the FBWM-CoCoSo are compared with those of the traditional FMEA and of some popular FMEA-based MCDM methods, namely TOPSIS, VIKOR, and MOORA. The results are shown in Table 7. To make a fair comparison, the weights of the SODCT risk factors obtained from the FBWM were applied in all the MCDM-based risk prioritization methods.

Table 7 Comparison of the proposed approach and other MCDM-based risk prioritization methods

According to Table 7, risks R10 and R27, with RPN = 800, are placed in the top priority, risks R17 and R18 are jointly second, and risk R26 stands in the third priority. Since the conventional RPN score does not consider the weights of the risk factors, unlike the other methods, and relies merely on the multiplication of the factor values, it fails to distinguish the priorities of the risks. R10 and R27 have different S and D values yet share the same priority, which shows the inability of this score to distinguish the ranks of such risks. In other words, this technique creates only 10 distinct positions for all the studied risks, which can confound DMs in deciding the order in which corrective/preventive actions should be implemented given the resource limitations. For this reason, drawing finer distinctions among risk priorities by considering the weights of the risk factors, while retaining the advantages of the FMEA technique, can boost the decision-making power of DMs. In this regard, various researchers have tried to modify or extend the FMEA technique using other methods, such as MCDM techniques. Most of these, however, fail to preserve the merits of the traditional technique from the FMEA team's point of view, namely compatibility with its opinions and with practical applications. In other words, from the viewpoint of the FMEA team, a method is effective in the risk assessment process if it not only avoids unrealistic changes when the weights of the risk factors are applied but also creates greater separability among risk priorities. To measure this, the RPN-Based Spearman Correlation Coefficient (RBSCC) can be used. The results of the four FMEA-based MCDM methods in Table 7 show that all of them produce a higher distinction than the traditional FMEA technique, while the proposed approach, with RBSCC = 0.96, shows that its results, in addition to creating greater separability among risk priorities, are more reliable than those of the other methods because of their strong consistency with the nature of the FMEA technique (see Fig. 3).
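Given two rank vectors, the RBSCC is an ordinary Spearman correlation computed against the RPN-based ranking; a minimal sketch follows (the rank vectors below are made up for illustration, not taken from Table 7).

```python
from scipy.stats import spearmanr

# Ranks from the conventional RPN and from a candidate method (illustrative values)
rpn_ranks    = [1, 1, 2, 2, 3, 4, 5, 6, 7, 8]
method_ranks = [1, 3, 2, 4, 5, 6, 7, 8, 9, 10]

rbscc, _ = spearmanr(rpn_ranks, method_ranks)
print(round(rbscc, 2))  # values closer to 1 indicate stronger agreement with the RPN ordering
```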

Fig. 3 Separability and compatibility of the proposed approach compared to the traditional FMEA technique

For example, risks R10 and R27, which were jointly in first place according to the conventional RPN score, are ranked first and third, respectively, by the proposed approach. Also, risks R12 and R50 occupy two distinct ranks, fourth and fifth, in the proposed approach instead of a joint fourth rank. On the other hand, based on the TOPSIS method, the critical risk R27 is in the 13th priority, whereas it was first according to the traditional FMEA technique. Although this can partly be attributed to applying the risk factor weights in the prioritization process, examining the factor values in Table 2 shows that this risk has the highest possible values of the O, D, C, and T factors, so the assigned rank does not reflect reality. This rank change can be attributed to the value allocated to the severity factor of risk R27 and to the structure of the TOPSIS method: this risk has the lowest severity value, while severity is the most important factor in the risk assessment (see Table 3). This issue leads to RBSCC = 0.48 for the TOPSIS method, while the VIKOR method has RBSCC = 0.46. This lower value shows the reduced performance of VIKOR compared with the other methods; in addition to ranking risk R27 in the 12th priority, it drops risk R26 (third according to the FMEA technique) to the 9th priority. Meanwhile, the MOORA method shows, to some extent, acceptable performance, with RBSCC = 0.76; it assigns the 6th and 7th priorities to risks R26 and R27, respectively, whereas the VIKOR method assigns the 9th and 12th priorities to these critical risks.

5.3 Sensitivity analysis

In this section, a sensitivity analysis is designed to evaluate the results of the proposed approach under weight replacement scenarios (for the weights of the SODCT factors). For this purpose, three scenarios are defined for each risk factor, in which the weight of the evaluated factor is increased by 0.05, 0.15, and 0.25, respectively, compared with the original, and the weights of the other factors are each decreased by 0.0125, 0.0375, and 0.0625, respectively. The 15 resulting scenarios are shown in Table 8, and the results of implementing the proposed approach under these scenarios are presented in Tables 9, 10, and 11.
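The 15 scenarios can be generated mechanically from the base weights; the sketch below uses the crisp weights reported in Table 3 and redistributes each increment exactly as described above (an increase of δ on the evaluated factor, a decrease of δ/4 on each of the other four factors).

```python
import numpy as np

factors = ["S", "O", "D", "C", "T"]
base_w = np.array([0.4024, 0.0935, 0.1465, 0.2397, 0.1179])  # crisp weights from Table 3

scenarios = {}
for i, name in enumerate(factors):
    for delta in (0.05, 0.15, 0.25):
        w = base_w - delta / 4     # decrease every factor by delta/4 ...
        w[i] = base_w[i] + delta   # ... then raise the evaluated factor by delta
        scenarios[f"{name}+{delta}"] = w

# Every scenario keeps the weight vector summing to one
assert all(abs(w.sum() - 1.0) < 1e-9 for w in scenarios.values())
```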

Table 8 Scenarios defined to evaluate the effect of weight changes in risk prioritization
Table 9 Risk prioritization based on the sensitivity analysis on factors S and O
Table 10 Risk prioritization based on the sensitivity analysis on factors D and C
Table 11 Risk prioritization based on the sensitivity analysis on the factor T

Considering the increasing trend in the weight of factor S (scenarios 1 to 3), the Spearman Correlation Coefficient (SCC) index experiences a relatively large decrease, from 1 to 0.92 (see Table 9). This reduction indicates that the results of the original state are relatively sensitive to changes in the weight of factor S, especially in Scenario 3. Since factor S already has the highest weight in the original state, further increasing its weight reduces the importance of the other factors in the prioritization process. Further investigation shows that risk R27, which had the third priority in the original state, falls to the 8th priority: although R27 has the highest values of the O, D, C, and T factors, it has the lowest value of factor S, so further increasing the weight of S pushes its priority down to the 8th rank. By contrast, risk R41 reaches the third priority in Scenario 3 because it has the highest value of factor S (see Fig. 4).

Fig. 4 Risk prioritization changes based on increasing the weight of factor S

As for increasing the weight of factor O (scenarios 4 to 6 in Table 9), the SCC index shows only a tolerable decrease, indicating marginal changes in the risk priorities produced by the proposed approach.

According to scenarios 7 to 9 (see Table 10), related to the weight replacement of factor D, the SCC index remains unchanged in Scenario 7, despite the increase in the weight of this factor, and then decreases from 1 to 0.93. In other words, although the risk priorities produced by the proposed approach stay steady in Scenario 7, they change considerably in Scenario 9. Take R10 as an example: its priority drops from first place in the original state to second place in scenarios 8 and 9, while risk R27 jumps to the first rank. The reason is that, as the weight of factor D increases, this factor becomes the most important one instead of factor S in Scenario 9; having the highest value of factor D, risk R27 therefore reaches the top priority.

Examining the increasing trend in the weight of factor C through scenarios 10 to 12 (see Table 10 and Fig. 5), the risk priorities remain fairly stable in Scenario 10. After that, the order of the risks changes noticeably, and the SCC index shrinks from 1 to 0.9, since in scenarios 11 and 12 the weight of factor C exceeds those of the other factors. As a result, risk R05, with C = 5, rises from the 10th priority in the original case to the 6th, and risk R26, with C = 3, drops from the 6th priority to the 10th.

Fig. 5 Risk prioritization changes based on increasing the weight of factor C

Finally, according to scenarios 13 to 15 (see Table 11), which show the changing trend of factor T, the SCC index remains the same in Scenario 13 and then gradually decreases from 1 to 0.94. These changes illustrate a gradual sensitivity to increasing the weight of factor T. The reason can be attributed to the high values of factor T across the risks and to the influence of factors S and C in the prioritization process, since these two factors have the highest weights and play a significant role in the risk prioritization according to the FMEA team members. In this case, risk R12, despite having the same SOD values as risk R50, rises from the fifth to the fourth priority solely because T = 5, while R50 drops from the fourth priority in the original state to the 6th.

6 Discussion and conclusion

Occupational accidents and work-related diseases, which in some cases lead to death, can produce short-term and long-term undesirable outcomes both for individuals and in economic terms. In this regard, the HSE management system can contribute significantly to creating a healthy and safe workplace. To implement this system efficiently, potential risks must be identified and the critical risks controlled to reduce their adverse effects on activities and workers. Although the FMEA technique is one of the most popular methods for identifying critical risks, its score, the conventional RPN, has numerous shortcomings, such as failing to create a distinct prioritization of risks and considering only the SOD factors, with equal importance assigned to them. In real-world applications, this can baffle DMs when corrective/preventive actions must be taken under financial and human resource limitations. Therefore, it is of cardinal importance to propose a more efficient approach that addresses these shortcomings. This study aimed to prioritize HSE risks based on the FMEA technique using the FCM clustering algorithm and a hybrid MCDM method. The approach was implemented in two phases. First, after the potential risks were identified and the values of the SOD factors and the two extra factors C and T were determined, the FCM algorithm was used to cluster the risks, and the weighted Euclidean distance was calculated using the weights obtained from the FBWM to specify the critical cluster. In the second phase, the FBWM-CoCoSo model was solved to prioritize the risks of the critical cluster. That is, in addition to considering uncertainty in the risk assessment process, this study recognized the critical risks based on the values of the risk factors instead of the RPN score, thereby boosting the separability of the risk priorities. The results of implementing the proposed approach in a company active in the automotive industry show the high separability and reliability of its risk prioritization in comparison with the traditional FMEA and other common FMEA-based MCDM methods. Since this study considers uncertainty in the critical risk prioritization process, future research could simultaneously incorporate the uncertainty and reliability concepts by developing the CoCoSo technique based on Z-number theory to further increase the reliability of the outputs and the separability of the risk priorities. Also, for large data sets, unlike the data set used in this study, other clustering algorithms such as density-based spatial clustering of applications with noise (DBSCAN) could be applied to identify the critical cluster.