1 Introduction

As one of the basic activities of human society, decision-making problems (DMPs) exist in all aspects of social life, from states, governments, enterprises and institutions down to individuals (Blanco-Mesa et al. 2017). When using multi-attribute group decision-making (MAGDM) theory to solve DMPs, decision makers (DMs) must first express their evaluation information in an appropriate way.

In some cases, the evaluation information can be given in the form of deterministic values, such as students' grades or car prices. In many more cases, however, due to the complexity of DMPs themselves and the limitations of human cognition, it is difficult to give evaluations as crisp values; consider, for example, the service quality of merchants in e-commerce. For this problem, Zadeh proposed the concept of the fuzzy set (FS), pioneering the study of uncertain phenomena from the perspective of fuzzy membership (Zadeh 1975). However, an FS contains only a membership degree (MD), which cannot handle more complex situations. Therefore, building on FS, Atanassov introduced the theory of the intuitionistic fuzzy set (IFS) (Atanassov 1986).

Being composed of an MD and a nonmembership degree (NMD), IFS is more suitable for describing uncertain phenomena than the classical FS, and over the past few decades it has been successfully applied in many fields. However, IFS cannot address indeterminate information. For example, when we invite an expert to judge the accuracy of a statement, he may say that the MD of the statement is 0.5, the NMD is 0.6, and the uncertainty degree is 0.2; such a case is beyond the scope of IFS. To solve this problem, Smarandache introduced the concept of the neutrosophic set (NS) (Smarandache 1999). An NS is composed of three independent memberships: truth-membership (TM), falsity-membership (FM) and indeterminacy-membership (IM). NS is more general and extensive, but it is still difficult to apply directly. To overcome these application problems, several subclasses of NS have been proposed over the years, including the interval neutrosophic set (Wang et al. 2005), single-valued neutrosophic set (SVNS) (Wang et al. 2010), simplified neutrosophic set (Ye 2014) and neutrosophic soft set (Deli and Broumi 2015).

When the actual problem is too complicated or unclear, it is difficult to fully express the information with quantitative expressions. Linguistic variables (LVs) are the most direct form of describing fuzzy information. Compared with precise values, LVs are closer to the way humans express knowledge and can better reflect the fuzzy and uncertain information in people's cognition. In recent years, combinations of LVs with other fuzzy sets have been continuously proposed, such as the linguistic IFS (Garg and Kumar 2019), linguistic hesitant fuzzy set (Liao 2015), linguistic NS (Ji et al. 2018), and so on. To avoid the loss of information in the process of aggregation, Herrera and Martínez (2000) proposed the 2-tuple symbolic representation, which converts linguistic information into continuous 2-tuple linguistic terms; it has therefore gradually become a research hotspot and has been applied in many domains. However, using LVs to express a decision maker's preference implies that the degree to which a criterion belongs to a linguistic variable is 1, which cannot describe the degree of nonmembership or the decision maker's hesitation. This defect hinders its application in DMPs.

To improve this limitation, Ye (2015) introduced a new powerful fuzzy tool called the single-valued neutrosophic linguistic set (SVNLS). It combines the advantages of linguistic terms and SVNS, and can describe uncertain information comprehensively and reasonably. For example, a decision maker's assessment of the performance of a certain vehicle's engine may be given as \(\langle s_5,(0.4,0.2,0.2)\rangle \), which indicates that the mark of the vehicle with respect to the performance of the engine is about the linguistic term \(s_5\) with satisfaction degree 0.4, dissatisfaction degree 0.2, and indeterminacy degree 0.2. Judging from the latest research trends, SVNLS is widely used in DMPs. After the expression of evaluation information is determined, aggregation operators can be used to aggregate the information provided by the DMs (Blanco-Mesa 2019), and an aggregation operator or fuzzy decision method can then be used to rank the candidates.

With regard to aggregation operators (AOs), Wu et al. (2018) proposed the SVN 2-tuple linguistic Hamacher weighted averaging (SVN2TLHWA) operator based on the Hamacher t-norm and t-conorm. Yu et al. (2020) proposed an SVN linguistic set (SVNLS) induced ordered weighted averaging distance operator. Wang et al. (2017) proposed a series of Maclaurin symmetric mean (MSM) operators under SVNLS environments. Wang et al. (2019) developed a weighted dual Muirhead mean operator under the SVN2TL environment. Guo and Sun (2019) proposed an SVNLS-PT operator based on SVNLS and prospect theory. Zhang et al. (2020) developed a weighted distance operator under the SVNLS environment. Ju et al. (2018) put forward a weighted MSM operator under the SVN interval 2-tuple linguistic environment. With regard to methods, Ye (2015) developed an extended TOPSIS method under the SVNLS environment. Chen et al. (2018) proposed a TOPSIS method based on ordered weighted averaging distance under the SVNLS environment. Ji et al. (2018) proposed a MABAC-ELECTRE method using SVNLS to handle the problem of outsourcing provider selection.

Recently, Pamučar and Ćirović developed a new reliable method called MABAC (Pamučar and Ćirović 2015). Compared with existing methods, it has the merits of considering potential losses and gains, easy coding, a systematic process and sound logic. Since then, scholars have studied combinations of the MABAC method with various fuzzy sets to solve MAGDM problems, such as the Pythagorean fuzzy set (Peng and Yang 2016), interval-valued intuitionistic fuzzy set (Xue et al. 2016), hesitant fuzzy linguistic set (Sun et al. 2017), SVN set (Peng and Dai 2018), probability multi-valued neutrosophic set (Liu and Cheng 2020), q-rung orthopair fuzzy set (Wang et al. 2020), 2-tuple linguistic neutrosophic set (Wang et al. 2019), probabilistic uncertain linguistic information (Wei et al. 2020), and picture 2-tuple linguistic set (Zhang et al. 2020). However, there is no research on the MABAC method under the single-valued neutrosophic 2-tuple linguistic set (SVN2TLS) environment in the current literature.

Besides, all the AOs mentioned above are mainly based on the algebraic t-norm and t-conorm, whose most widely used operations are the algebraic product and sum. Although these are convenient for computation, they lack flexibility and robustness. With a parameter, the Frank t-norm and t-conorm (Frank 1979) offer this flexibility and can adapt to MAGDM problems through different parameter selections. What is more, they can degenerate to the probabilistic and Łukasiewicz t-norms and t-conorms, which makes them more flexible and robust (Ji et al. 2018; Nancy 2016; Peng et al. 2018; Qin et al. 2016). Ji et al. (2018) studied the Frank prioritized Bonferroni mean operator under the SVNL environment and applied it to selecting third-party logistics providers. Nancy (2016) investigated some operations of SVN under Frank norm operations. Peng et al. (2018) studied the Frank Heronian mean operator under the linguistic intuitionistic environment and used it to evaluate coal mine safety. However, there is still little research on the application of Frank operators, and we have not found any research in the SVN2TLS environment. Therefore, this paper attempts to fill this research gap by extending the Frank t-norm and t-conorm to the SVN2TLS environment.

What is more, since the rationality of the weights in MAGDM directly affects the accuracy of the decision-making results, research on weights plays an important role in MAGDM. Generally, the weights in MAGDM include attribute weights and expert weights; however, previous studies have usually assumed that the expert or attribute weights are already known. Ji et al. (2018) and Guo and Sun (2019) applied the mean-squared deviation method to calculate objective attribute weights. Wu et al. (2018) applied the method of maximizing deviation to calculate objective attribute weights. Guo and Sun (2019) also calculated subjective attribute weights based on prospect values.

The best worst method (BWM) was first suggested by Rezaei (2016). In the BWM, the best and worst criteria are first identified by the DMs. After that, the pairwise comparison vectors of the best and worst criteria over all the other criteria are determined, and the optimal weights are finally obtained by solving a programming model. In comparison with the AHP method, the number of pairwise comparisons in the BWM is greatly reduced: for a problem with \(n\) criteria, the AHP method requires \(n^2-n\) comparisons, while the BWM only requires \(2n-3\). Therefore, it reduces the confusion caused by too many comparisons, making the final result more likely to pass the consistency test. At present, there are very few applications of the BWM in MADM, and only five articles (Ecer and Pamucar 2020; Li et al. 2019; Maghsoodi et al. 2020; Wang et al. 2020; Yang et al. 2020) were retrieved through Web of Science. Current scholars mainly study the application of the BWM to solving attribute weights, but none has applied the BWM to the calculation of expert weights. In actual situations, there is usually an organizer who invites experts to evaluate the attributes of the alternatives. The organizer is usually familiar with the invited experts, so he or she can select the best and worst experts from the invited list and compare them in pairs. Motivated by these ideas, in this paper we first apply the BWM to derive subjective expert weights. Second, since current hybrid methods often simply synthesize subjective and objective weights without forming a feedback loop to obtain a stable solution, the results may be biased.

Motivated by the above situation, we propose an iterative algorithm that integrates the BWM and an objective method to obtain stable expert weights. In addition, although the MABAC method considers potential gains and losses in the ranking of alternatives, it cannot reflect the fact that DMs are more sensitive to losses than to gains and are not completely rational (Wang 2020). Therefore, in this article, MABAC is improved by introducing risk preference parameters to reflect the DMs' risk attitudes.

In this paper, the main innovations and contributions are summarized as:

  1. 1.

    First, the Frank t-norm and t-conorm are extended to the SVN2TLS environment. Then, two new AOs are proposed, namely the SVN2TL Frank weighted averaging (SVN2TLFWA) operator and the SVN2TL Frank weighted geometric (SVN2TLFWG) operator, and some of their desirable properties are discussed.

  2. 2.

    Combined with the BWM, an iterative algorithm is proposed that integrates an objective method to obtain stable expert weights.

  3. 3.

    Based on the improved MABAC method, two new approaches are proposed under the SVN2TLS environment, and some special cases are investigated to verify the superiority of the proposed approaches.

To achieve the above, the remainder of this paper is organized as follows. In Sect. 2, we present some basic concepts, including the BWM, Frank operators, SVN2TLS, and score functions. In Sect. 3, the SVN2TLFWA and SVN2TLFWG operators are proposed based on the SVN2TLS environment and Frank operators. In Sect. 4, we discuss the DMs' weights and the attribute weights, and a new MAGDM approach based on the Frank aggregation operators is presented. In Sect. 5, some examples are given to illustrate the effectiveness and superiority of the proposed method. The last section concludes the paper with future research directions.

2 Preliminaries

In this section, some basic concepts regarding Frank operator, SVN2TLS and BWM method are introduced.

2.1 Frank operator

Definition 1

(Nancy 2016) Frank product \(\otimes _F\) and Frank sum \(\oplus _F\) can be defined as:

$$\begin{aligned} a\oplus _Fb&=1-\log _\lambda \left( 1+\frac{(\lambda ^{1-a}-1)(\lambda ^{1-b}-1)}{\lambda -1}\right) , \end{aligned}$$
(1)
$$\begin{aligned} a\otimes _Fb&=\log _\lambda \left( 1+\frac{(\lambda ^{a}-1)(\lambda ^{b}-1)}{\lambda -1}\right) . \end{aligned}$$
(2)

where \(\lambda >1, a,b\in [0,1]\).

It can easily be proved that when \(\lambda \rightarrow 1\), \(a\oplus _Fb \rightarrow a+b-ab\) and \(a \otimes _Fb \rightarrow ab\); when \(\lambda \rightarrow \infty \), \(a\oplus _Fb \rightarrow \min (a+b,1)\) and \(a \otimes _Fb \rightarrow \max (0,a+b-1)\).
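These limits can be checked numerically. The sketch below is a minimal plain-Python rendering of Eqs. (1)–(2); the values \(a=0.4\), \(b=0.7\) are ours, chosen only for the check, and since \(\lambda \) cannot equal 1 or \(\infty \) we approach them with \(\lambda =1.0001\) and \(\lambda =10^{12}\).

```python
import math

def frank_sum(a, b, lam):
    """Frank sum, Eq. (1); requires lam > 1 and a, b in [0, 1]."""
    return 1 - math.log(1 + (lam ** (1 - a) - 1) * (lam ** (1 - b) - 1) / (lam - 1), lam)

def frank_product(a, b, lam):
    """Frank product, Eq. (2)."""
    return math.log(1 + (lam ** a - 1) * (lam ** b - 1) / (lam - 1), lam)

a, b = 0.4, 0.7
# lam -> 1: the operations approach the probabilistic sum a+b-ab and product ab.
near_one = (frank_sum(a, b, 1.0001), frank_product(a, b, 1.0001))
# lam -> infinity: they approach the Lukasiewicz pair min(a+b, 1) and max(0, a+b-1).
large = (frank_sum(a, b, 1e12), frank_product(a, b, 1e12))
print(near_one, large)
```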

Definition 2

(Liu et al. 2020) If function \(T_{E-F}\): \([0,t]^2 \rightarrow [0,t]\) satisfies:

$$\begin{aligned} T_{E-F}(a,b)={t}\log _\lambda \left( \frac{(\lambda ^{a/t}-1)(\lambda ^{b/t}-1)}{\lambda -1}+1\right) . \end{aligned}$$
(3)

then \(T_{E-F}\) is called the extended Frank t-norm, where \(\lambda \in (1,\infty )\).

Definition 3

(Liu et al. 2020) If function \(S_{E-F}\): \([0,t]^2 \rightarrow [0,t]\) satisfies:

$$\begin{aligned} S_{E-F}(a,b)=t\left( 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-a/t}-1)(\lambda ^{1-b/t}-1)}{\lambda -1}\right) \right) . \end{aligned}$$
(4)

then \(S_{E-F}\) is called the extended Frank t-conorm, where \(\lambda \in (1,\infty )\).

2.2 Linguistic 2-tuple

Definition 4

(Herrera and Martnez 2000) Let \(S=\{s_0,s_1,\cdots , s_g\}\) be a linguistic term set with odd granularity \(g+1\), and \(\beta \in [0,g]\). Then, a linguistic 2-tuple \((s_i,\alpha )\) that expresses the equivalent information to \(\beta \) can be obtained with the following function:

$$\begin{aligned}&\varDelta : [0,g]\rightarrow S \times [-0.5,0.5),\\&\quad \varDelta (\beta )=(s_i,\alpha )\ \text {with}\ {\left\{ \begin{array}{ll} s_i,&{}i=\mathrm{round}(\beta ),\\ \alpha =\beta -i,&{}\alpha \in [-0.5,0.5). \end{array}\right. } \end{aligned}$$

where \(\mathrm{round}(\cdot )\) is the common rounding operation. Conversely, \(\varDelta \) has an inverse function \(\varDelta ^{-1}\):

$$\begin{aligned}&\varDelta ^{-1}: S \times [-0.5,0.5)\rightarrow [0,g],\\&\varDelta ^{-1}(s_i,\alpha )=i+\alpha . \end{aligned}$$
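In code, \(\varDelta \) and \(\varDelta ^{-1}\) are one-liners. The sketch below is a plain-Python rendering of Definition 4; one caveat is that Python's `round()` uses banker's rounding at exact .5 boundaries, whereas the definition assumes common rounding.

```python
def delta(beta, g):
    """Delta of Definition 4: map beta in [0, g] to a 2-tuple (i, alpha) ~ (s_i, alpha)."""
    i = min(max(round(beta), 0), g)   # index of the closest linguistic term, clamped to [0, g]
    return i, beta - i                # symbolic translation alpha in [-0.5, 0.5)

def delta_inv(i, alpha):
    """Inverse Delta: recover the numeric value beta = i + alpha."""
    return i + alpha

i, alpha = delta(3.5546, 6)           # roughly (s_4, -0.4454)
print(i, alpha)
```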

2.3 BWM method

Generally, to obtain a ranking among several alternatives, an organizer gathers experts from different knowledge fields, collects their evaluation information on the alternatives in a certain way using scientific and effective theoretical methods, and finally obtains the ranking of the alternatives.

Because experts differ in knowledge level, personality and character, they cannot be treated equally; in the decision-making process this is reflected by assigning different weights to the experts. In the actual decision-making process, there is usually an organizer, and the organizer is relatively clear about the backgrounds of the invited experts. Therefore, the organizer can confidently provide the experts' preference information through pairwise comparisons. With preference-relation information, a decision can usually be reached within the framework of the AHP (Saaty 1980). The AHP method has obvious advantages, such as being easy to understand and widely applicable, but it also has certain shortcomings: pairwise comparison may cause the organizer to fall into a certain degree of confusion when there are many experts and the differences between them are not very large. Under these circumstances, it is difficult to pass the consistency test, and if the consistency requirement is not met, the AHP method becomes useless.

In 2015, Rezaei improved the AHP method and introduced the BWM (Rezaei 2016). The BWM requires less comparative data, and the scoring process for decision makers is simpler. Using the BWM, the best and worst items are first determined, and the degrees of importance of the other items relative to the best and worst items are then evaluated (a scale of 1–9 is usually used). For a comparison problem with \(n\) items, there are \(n-2\) items to compare with both the best and worst items, so the BWM only requires \(2n-3\) comparisons (versus \(n^2-n\) for the AHP). The number of comparisons is greatly reduced, making it easier to pass the consistency test.

In this part, the steps for BWM to calculate the weights of experts are summarized as:

Step 1  Determine the expert set \(\{E_1, E_2,\ldots ,E_l\}\);

Step 2  The organizer conducts a comprehensive analysis on the set of experts and selects the best and worst experts;

Step 3  Determine the preferences of the best expert over the other experts and of the other experts over the worst expert using numbers from 1 to 9, obtaining two vectors;

$$\begin{aligned} U_B=(u_{B1},\ldots , u_{Bl}), V_W=(v_{1W},\ldots , v_{lW}), \end{aligned}$$

where l is the number of experts, \(u_{Bj}\) indicates the preference degree of the best expert over expert \(E_j\), \(v_{jW}\) means the preference degree of expert \(E_j\) over the worst expert.

Step 4  Obtain the optimal weights \((\mu _1^s,\mu _2^s,\cdots , \mu _l^s)\) and the optimal objective value \(\varepsilon ^*\) by solving the following programming model:

$$\begin{aligned}&\min \varepsilon \nonumber \\&\mathrm{s.t.} {\left\{ \begin{array}{ll} |\frac{\mu _B^s}{\mu _j^s}-u_{Bj}|\le \varepsilon ,&{}\quad j=1,\ldots ,l\\ |\frac{\mu _j^s}{\mu _W^s}-v_{jW}|\le \varepsilon ,&{}\quad j=1,\ldots ,l\\ \mu _j^s\ge 0,&{}\quad j=1,\ldots ,l\\ \sum _{j=1}^l\mu _j^s=1&{}. \end{array}\right. } \end{aligned}$$
(5)

The consistency ratio (CR) can be calculated as follow:

$$\begin{aligned} CR=\frac{\varepsilon ^*}{\mathrm{Consistency\, index}}. \end{aligned}$$
(6)

Step 5  Compute the CR using Eq. (6). If \(CR\le 0.1\), output the optimal weights and optimal objective value; else, return to Step 3 and adjust the vectors \(U_B\) and \(V_W\).

Table 1 Consistency index (Rezaei 2016)

It should be noted that the greater the \(\varepsilon ^*\), the less reliable the comparisons are.
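Model (5) is small enough to solve by brute force for a handful of experts. The sketch below is our own illustration (a real implementation would call an optimization solver): it grid-searches the weight simplex for three experts with fully consistent comparison vectors, where \(u_{Bj}\cdot v_{jW}=u_{BW}=4\) and the closed-form weights are \((4/7, 2/7, 1/7)\). The consistency-index value 1.63 for \(u_{BW}=4\) is assumed from Rezaei's table.

```python
def bwm_weights(u_B, v_W, best, worst, step=0.002):
    """Approximate model (5) by a grid search over the 3-expert weight simplex."""
    l = len(u_B)
    n = int(round(1 / step))
    best_w, best_eps = None, float("inf")
    for i in range(1, n):
        for j in range(1, n - i):
            w = (i * step, j * step, 1 - (i + j) * step)
            eps = max(max(abs(w[best] / w[k] - u_B[k]) for k in range(l)),
                      max(abs(w[k] / w[worst] - v_W[k]) for k in range(l)))
            if eps < best_eps:
                best_w, best_eps = w, eps
    return best_w, best_eps

# E1 is the best expert, E3 the worst; the vectors are fully consistent.
w, eps = bwm_weights(u_B=[1, 2, 4], v_W=[4, 2, 1], best=0, worst=2)
CR = eps / 1.63   # consistency index for u_BW = 4 (value assumed from Rezaei's table)
print(w, eps, CR)
```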

3 Some new AOs based on SVN2TLS and Frank t-norm and t-conorm

3.1 SVN2TLS

Definition 5

(Wu et al. 2018) An SVN2TLS \(A\) in \(X\) is defined as:

$$\begin{aligned} A=\{\langle x,(s_{\theta (x)},\alpha _x),(T_A(x),I_A(x),F_A(x))\rangle |x\in X\}, \end{aligned}$$
(7)

where \(s_{\theta (x)} \in S, \alpha _x \in [-0.5, 0.5), T_A(x), I_A(x), F_A(x)\in [0,1] \), \(0\le T_A(x)+I_A(x)+F_A(x)\le 3\).

Definition 6

(Wu et al. 2018) Let \(a_p=\langle (s_{\theta (a_p)},\alpha _p),(T_{(a_p)},I_{(a_p)},F_{(a_p)})\rangle (p=1,2)\) be two SVN2TLNs; then the operations are defined as:

  1. 1.

    \(a_1\oplus a_2=\langle \varDelta (\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)+\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)), (T_{a1}+T_{a2}-T_{a1}T_{a2},I_{a1}I_{a2},F_{a1}F_{a2})\rangle ;\)

  2. 2.

    \(a_1\otimes a_2=\langle \varDelta (\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)), (T_{a1}T_{a2},I_{a1}+I_{a2}-I_{a1}I_{a2},F_{a1}+F_{a2}-F_{a1}F_{a2})\rangle ;\)

  3. 3.

    \(\lambda a_1=\langle \varDelta (\lambda \varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)), (1-(1-T_{a1})^\lambda ,(I_{a1})^\lambda , (F_{a1})^\lambda )\rangle ;\)

  4. 4.

    \((a_1)^\lambda =\langle \varDelta (\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)^\lambda ), ((T_{a1})^\lambda , 1-(1-I_{a1})^\lambda , 1-(1-F_{a1})^\lambda )\rangle .\)

Definition 7

Let \(a_p(p=1,2)\) be two SVN2TLNs; then the Frank operational rules of SVN2TLNs are defined as:

$$\begin{aligned} a_1\oplus a_2&= \bigg \langle \varDelta \left( t\left( 1- \log _\lambda \left( 1+\frac{(\lambda ^{1-\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)/t}-1)(\lambda ^{1-\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)/t}-1)}{\lambda -1}\right) \right) \right) , \\&\qquad 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-T_{a1}}-1)(\lambda ^{1-T_{a2}}-1)}{\lambda -1}\right) , \\&\qquad \log _\lambda \left( 1+\frac{(\lambda ^{I_{a1}}-1)(\lambda ^{I_{a2}}-1)}{\lambda -1}\right) , \log _\lambda \left( 1+\frac{(\lambda ^{F_{a1}}-1)(\lambda ^{F_{a2}}-1)}{\lambda -1}\right) \bigg \rangle . \\ a_1\otimes a_2&=\bigg \langle \varDelta \left( t\log _\lambda \left( 1+\frac{(\lambda ^{\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)/t}-1)(\lambda ^{\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)/t}-1)}{\lambda -1}\right) \right) ,\\&\qquad \log _\lambda \left( 1+\frac{(\lambda ^{T_{a1}}-1)(\lambda ^{T_{a2}}-1)}{\lambda -1}\right) , 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-I_{a1}}-1)(\lambda ^{1-I_{a2}}-1)}{\lambda -1}\right) ,\\&\qquad 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-F_{a1}}-1)(\lambda ^{1-F_{a2}}-1)}{\lambda -1}\right) \bigg \rangle . \\ na_1&=\bigg \langle \varDelta \left( t\left( 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)/t}-1)^n}{(\lambda -1)^{n-1}}\right) \right) \right) ,\\&\qquad 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-T_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) , \log _\lambda \left( 1+\frac{(\lambda ^{I_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) ,\\&\qquad \log _\lambda \left( 1+\frac{(\lambda ^{F_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) \bigg \rangle . \\ (a_1)^n&= \bigg \langle \varDelta \left( t\log _\lambda \left( 1+\frac{(\lambda ^{\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)/t}-1)^n}{(\lambda -1)^{n-1}}\right) \right) , \log _\lambda \left( 1+\frac{(\lambda ^{T_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) ,\\&\qquad 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-I_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) , 1-\log _\lambda \left( 1+\frac{(\lambda ^{1-F_{a1}}-1)^n}{(\lambda -1)^{n-1}}\right) \bigg \rangle . \end{aligned}$$
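As a sanity check on the \(\oplus \) rule above, the plain-Python sketch below flattens an SVN2TLN to a tuple \((\beta , T, I, F)\) with \(\beta =\varDelta ^{-1}(s_\theta ,\alpha )\) (our own encoding, for illustration) and verifies that as \(\lambda \rightarrow 1\) the rule tends to the probabilistic sum on the truth part and the product on the indeterminacy and falsity parts, with the linguistic part following the t-scaled form.

```python
import math

def _fsum(x, y, lam):    # Frank sum on [0, 1]
    return 1 - math.log(1 + (lam ** (1 - x) - 1) * (lam ** (1 - y) - 1) / (lam - 1), lam)

def _fprod(x, y, lam):   # Frank product on [0, 1]
    return math.log(1 + (lam ** x - 1) * (lam ** y - 1) / (lam - 1), lam)

def frank_add(a1, a2, lam, t):
    """a1 (+) a2 of Definition 7; each argument is (beta, T, I, F), beta = Delta^{-1}(s, alpha)."""
    b1, T1, I1, F1 = a1
    b2, T2, I2, F2 = a2
    return (t * _fsum(b1 / t, b2 / t, lam),
            _fsum(T1, T2, lam), _fprod(I1, I2, lam), _fprod(F1, F2, lam))

a1, a2 = (4.0, 0.2, 0.2, 0.3), (4.0, 0.4, 0.3, 0.1)
res = frank_add(a1, a2, lam=1.0001, t=6)   # lam near 1: close to the algebraic rules
print(res)
```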

Theorem 1

Let \(a_p(p=1,2,3)\) be three SVN2TLNs and \(\eta , \eta _1,\eta _2\ge 0\); then we have:

  1. 1.

    \(a_1\oplus a_2=a_2\oplus a_1 ;\)

  2. 2.

    \(a_1\otimes a_2=a_2\otimes a_1 ;\)

  3. 3.

    \((a_1\oplus a_2)\oplus a_3=a_1 \oplus ( a_2\oplus a_3) ;\)

  4. 4.

    \((a_1\otimes a_2)\otimes a_3=a_1 \otimes ( a_2\otimes a_3) ;\)

  5. 5.

    \(\eta (a_1\oplus a_2)=\eta a_2\oplus \eta a_1; \)

  6. 6.

    \(\eta _1a_1\oplus \eta _2a_1=(\eta _1+\eta _2)a_1 ;\)

  7. 7.

    \(a_1^{\eta _1}\otimes a_1^{\eta _2}=a_1^{\eta _1+\eta _2}; \)

  8. 8.

    \((a_1\otimes a_2)^{\eta }=a_1^{\eta }\otimes a_2^{\eta }.\)

Definition 8

Let \(a\) be an SVN2TLN; the score function \(S(\cdot )\), accuracy function \(A(\cdot )\) and certainty function \(C(\cdot )\) for \(a\) are defined as:

  1. 1.

    \(S(a)=\langle \varDelta (\varDelta ^{-1}(s_{\theta (a)},\alpha )\frac{(2+T_a-I_a-F_a)}{3})\rangle \);

  2. 2.

    \(A(a)=\langle \varDelta (\varDelta ^{-1}(s_{\theta (a)},\alpha )(T_a-F_a))\rangle \);

  3. 3.

    \(C(a)=\langle \varDelta (\varDelta ^{-1}(s_{\theta (a)},\alpha )T_a)\rangle \).

Definition 9

Let \(a_p(p=1,2)\) be two SVN2TLNs; then:

  1. 1.

    \(\mathrm{if}\, S(a_1)>S(a_2), \mathrm{then}\, a_1> a_2\);

  2. 2.

    \(\mathrm{if}\, S(a_1)=S(a_2), \mathrm{and}\, A(a_1)>A(a_2), \,\mathrm{then}\, a_1> a_2\);

  3. 3.

    \(\mathrm{if}\, S(a_1)=S(a_2), \,\mathrm{and}\, A(a_1)=A(a_2), \,\mathrm{and}\, C(a_1)>C(a_2) \,\mathrm{then}\, a_1> a_2 \);

  4. 4.

    \(\mathrm{if}\, S(a_1)=S(a_2), \,\mathrm{and}\, A(a_1)=A(a_2), \,\mathrm{and}\, C(a_1)=C(a_2) \,\mathrm{then}\, a_1= a_2 \).
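Definitions 8 and 9 translate directly into a comparison routine. The sketch below is our own plain-Python rendering, again flattening an SVN2TLN to \((\beta , T, I, F)\) with \(\beta =\varDelta ^{-1}(s_\theta ,\alpha )\), and ranks two of the numbers used later in Example 1.

```python
def score(beta, T, I, F):
    """S(a) of Definition 8, computed on beta = Delta^{-1}(s_theta, alpha)."""
    return beta * (2 + T - I - F) / 3

def accuracy(beta, T, I, F):
    """A(a) of Definition 8."""
    return beta * (T - F)

def certainty(beta, T, I, F):
    """C(a) of Definition 8."""
    return beta * T

def compare(a1, a2):
    """Definition 9: 1 if a1 > a2, -1 if a1 < a2, 0 if all three functions tie."""
    for f in (score, accuracy, certainty):
        v1, v2 = f(*a1), f(*a2)
        if v1 != v2:
            return 1 if v1 > v2 else -1
    return 0

a1 = (4.0, 0.2, 0.2, 0.3)   # <(s_4, 0), (0.2, 0.2, 0.3)>
a2 = (3.0, 0.6, 0.2, 0.1)   # <(s_3, 0), (0.6, 0.2, 0.1)>
print(score(*a1), score(*a2), compare(a1, a2))
```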

3.2 SVN2TLFWA operator

Definition 10

Assume \(a_p (p=1,2,\ldots ,n)\) is a family of SVN2TLNs. The SVN2TLFWA operator is a mapping from \(\varOmega ^n\) to \(\varOmega \) with associated weight vector \(w\) (\(0\le w_k\le 1\), \(\sum _{k=1}^n w_k=1\)), such that

$$\begin{aligned} SVN2TLFWA(a_1,a_2,\ldots , a_n)=\bigoplus _{i=1}^{n} w_ia_i. \end{aligned}$$
(8)

Theorem 2

Let \(a_p (p=1,2,\ldots ,n)\) be n SVN2TLNs; then the Frank weighted average of n SVN2TLNs is still an SVN2TLN, and

$$\begin{aligned} SVN2TLFWA(a_1,a_2,\ldots ,a_n)= & {} \bigoplus _{i=1}^{n} w_ia_i\nonumber \\= & {} \bigg \langle \varDelta \left( t\left( 1- \log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{1-\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)/t}-1)^{w_i}\right) \right) \right) , \nonumber \\&1-\log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{1-T_{a_i}}-1)^{w_i}\right) , \nonumber \\&\log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{I_{a_i}}-1)^{w_i}\right) , \log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{F_{a_i}}-1)^{w_i}\right) \bigg \rangle . \nonumber \\ \end{aligned}$$
(9)

Proof

See “Appendix A”. \(\square \)

Example 1

Let \(a_1=\langle s_4,(0.2,0.2,0.3)\rangle , a_2=\langle s_4,(0.4,0.3,0.1)\rangle , a_3=\langle s_3,(0.6,0.2,0.1)\rangle \) be three SVN2TLNs, and \(w=(0.3,0.4,0.2),\lambda =2, t=6\), then according to Eq. (9), we have

$$\begin{aligned} SVN2TLFWA(a_1, a_2, a_3)= & {} 0.3a_1\oplus 0.4a_2\oplus 0.2a_3\\= & {} \bigg \langle \varDelta \left( t\left( 1- \log _2\left( 1+(2^{1-(4/6)}-1)^{0.3}\cdot (2^{1-(4/6)}-1)^{0.4}\cdot (2^{1-(3/6)}-1)^{0.2}\right) \right) \right) , \\&1-\log _2\left( 1+(2^{1-0.2}-1)^{0.3}\cdot (2^{1-0.4}-1)^{0.4}\cdot (2^{1-0.6}-1)^{0.2}\right) , \\&\log _2\left( 1+(2^{0.2}-1)^{0.3}\cdot (2^{0.3}-1)^{0.4}\cdot (2^{0.2}-1)^{0.2}\right) , \\&\log _2\left( 1+(2^{0.3}-1)^{0.3}\cdot (2^{0.1}-1)^{0.4}\cdot (2^{0.1}-1)^{0.2}\right) \bigg \rangle \\= & {} \bigg \langle \varDelta \left( t\left( 1- \log _2\left( 1.3265\right) \right) \right) , 1-\log _2\left( 1.5582\right) , \log _2\left( 1.2146\right) ,\log _2\left( 1.1327\right) \bigg \rangle \\= & {} \bigg \langle \varDelta \left( 3.5546\right) , 0.3601, 0.2805,0.1798\bigg \rangle \\= & {} \bigg \langle (s_4,-0.4454), 0.3601, 0.2805,0.1798\bigg \rangle . \end{aligned}$$
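The computation of Example 1 can be reproduced mechanically from Eq. (9). The sketch below is a minimal plain-Python implementation, with each SVN2TLN flattened to a tuple \((\beta , T, I, F)\) (our own encoding); it reproduces the values of Example 1 up to rounding.

```python
import math

def svn2tlfwa(items, w, lam, t):
    """Eq. (9): Frank weighted average of SVN2TLNs given as tuples (beta, T, I, F)."""
    def prod(exponents):
        p = 1.0
        for v, wi in zip(exponents, w):
            p *= (lam ** v - 1) ** wi
        return p
    beta = t * (1 - math.log(1 + prod([1 - a[0] / t for a in items]), lam))
    T = 1 - math.log(1 + prod([1 - a[1] for a in items]), lam)
    I = math.log(1 + prod([a[2] for a in items]), lam)
    F = math.log(1 + prod([a[3] for a in items]), lam)
    return beta, T, I, F

items = [(4.0, 0.2, 0.2, 0.3), (4.0, 0.4, 0.3, 0.1), (3.0, 0.6, 0.2, 0.1)]
beta, T, I, F = svn2tlfwa(items, w=[0.3, 0.4, 0.2], lam=2, t=6)
print(round(beta, 4), round(T, 4), round(I, 4), round(F, 4))
```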

Theorem 3

(Idempotency).  Let \(a_p (p=1,\ldots ,n)\) be n SVN2TLNs; if \(a_p=a=\langle (s_{\theta (a)},\alpha ),(T_{(a)},I_{(a)},F_{(a)})\rangle \) for all p, then

$$\begin{aligned} \mathrm{SVN2TLFWA}(a_1, \ldots , a_n)=\mathrm{SVN2TLFWA}(a, \ldots , a)=a. \end{aligned}$$
(10)

Proof

See “Appendix B”. \(\square \)

Theorem 4

(Monotonicity).  Let \(a_{x_i}=\langle (s_{\theta (a_{x_i})},\alpha _{x_i}),(T_{(a_{x_i})},I_{(a_{x_i})},F_{(a_{x_i})})\rangle \) and \(a_{y_i}=\langle (s_{\theta (a_{y_i})},\alpha _{y_i}),(T_{(a_{y_i})},I_{(a_{y_i})},F_{(a_{y_i})})\rangle (i=1,2,\ldots ,n)\) be two set of SVN2TLNs, if \(\varDelta ^{-1}(s_{\theta (a_{x_i})},\alpha _{x_i})\le \varDelta ^{-1}(s_{\theta (a_{y_i})},\alpha _{y_i}), T_{(a_{x_i})} \le T_{(a_{y_i})}, I_{(a_{x_i})} \ge I_{(a_{y_i})}, F_{(a_{x_i})} \ge F_{(a_{y_i})}\) for all i, then

$$\begin{aligned} \mathrm{SVN2TLFWA}(a_{x_1}, a_{x_2}, \ldots , a_{x_n})\le \mathrm{SVN2TLFWA}(a_{y_1}, a_{y_2}, \ldots , a_{y_n}). \end{aligned}$$
(11)

Proof

See “Appendix C”. \(\square \)

Theorem 5

(Boundedness).  Let \(a_-=\langle (s_{\theta (a)},\alpha )_-,(T_{(a_-)},I_{(a_-)},F_{(a_-)})\rangle \) and \(a_+=\langle (s_{\theta (a)},\alpha )_+,(T_{(a_+)},I_{(a_+)},F_{(a_+)})\rangle \), where \((s_{\theta (a)},\alpha )_-=\min \nolimits _{i} \{(s_{\theta (a_i)},\alpha _i)\}\), \((s_{\theta (a)},\alpha )_+=\max \nolimits _{i} \{(s_{\theta (a_i)},\alpha _i)\}\), \(T_{(a_-)}=\min \nolimits _{i} \{T_{(a_i)}\}\), \(T_{(a_+)}=\max \nolimits _{i} \{T_{(a_i)}\}\), \(I_{(a_-)}=\max \nolimits _{i} \{I_{(a_i)}\}\), \(I_{(a_+)}=\min \nolimits _{i} \{I_{(a_i)}\}\), \(F_{(a_-)}=\max \nolimits _{i} \{F_{(a_i)}\}\), \(F_{(a_+)}=\min \nolimits _{i} \{F_{(a_i)}\}\); then

$$\begin{aligned} a_-\le \mathrm{SVN2TLFWA}(a_1, a_2, \ldots , a_n )\le a_+. \end{aligned}$$
(12)

From Theorem 3,

$$\begin{aligned}&\mathrm{SVN2TLFWA}(a_-, a_-, \ldots , a_- )= a_-,\\&\mathrm{SVN2TLFWA}(a_+, a_+, \ldots , a_+ )= a_+. \end{aligned}$$

From Theorem 4,

$$\begin{aligned} a_-\le \mathrm{SVN2TLFWA}(a_1, a_2, \ldots , a_n )\le a_+. \end{aligned}$$

Theorem 6

Let a and \(a_p(p=1,2,\ldots , n)\) be SVN2TLNs, and \(0\le w_k\le 1\), \(\sum _{k=1}^n w_k=1, r>0\), then

$$\begin{aligned}&\mathrm{SVN2TLFWA}(ra_1,\ldots ,ra_n)=r\cdot \mathrm{SVN2TLFWA}(a_1,\ldots ,a_n);\\&\mathrm{SVN2TLFWA}(a_1\oplus a,\ldots ,a_n\oplus a)=\mathrm{SVN2TLFWA}(a_1,\ldots ,a_n)\oplus a;\\&\mathrm{SVN2TLFWA}(ra_1\oplus a,\ldots ,ra_n\oplus a)=r\cdot \mathrm{SVN2TLFWA}(a_1,\ldots ,a_n)\oplus a. \end{aligned}$$

Proof

The proof is similar to that in Nancy (2016) and is omitted here. \(\square \)

In the following, we will discuss some special values of parameter \(\lambda \).

  1. 1.

    When \(\lambda \rightarrow \) 1

    $$\begin{aligned} \lim _{\lambda \rightarrow 1}\mathrm{SVN2TLFWA}(a_1,a_2,\ldots ,a_n)= & {} \mathrm{SVN2TLWA}(a_1,a_2,\ldots ,a_n)\\= & {} \left\langle \varDelta \left( t\left( 1- \prod _{i=1}^n (1-\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)/t)^{w_i}\right) \right) ,\right. \\&\left. 1-\prod _{i=1}^n({1-T_{a_i}})^{w_i}, \prod _{i=1}^n(I_{a_i})^{w_i}, \prod _{i=1}^n(F_{a_i})^{w_i}\right\rangle . \end{aligned}$$
  2. 2.

    When \(\lambda \rightarrow \infty \)

    $$\begin{aligned}&\lim _{\lambda \rightarrow \infty }\mathrm{SVN2TLFWA}(a_1,a_2,\ldots ,a_n)\\&\quad =\left\langle \varDelta \left( \sum _{i=1}^n w_i\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)\right) , \sum _{i=1}^n{w_i}T_{a_i}, \sum _{i=1}^n{w_i}I_{a_i}, \sum _{i=1}^n{w_i}F_{a_i}\right\rangle . \end{aligned}$$

Proof

See “Appendix D”. \(\square \)
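Both limiting cases can be observed numerically on the membership component alone. The sketch below applies the scalar Frank weighted average (the T-component of Eq. (9)) with \(\lambda \) close to 1 and with a very large \(\lambda \); the data and weights are ours, chosen only for the check.

```python
import math

def frank_wa(xs, w, lam):
    """Scalar Frank weighted average on [0, 1] (the T-component of Eq. (9))."""
    p = 1.0
    for x, wi in zip(xs, w):
        p *= (lam ** (1 - x) - 1) ** wi
    return 1 - math.log(1 + p, lam)

xs, w = [0.2, 0.4, 0.6], [0.3, 0.4, 0.3]
near_one = frank_wa(xs, w, 1.0001)   # approaches the algebraic weighted average
large = frank_wa(xs, w, 1e8)         # approaches the arithmetic mean sum(w_i * x_i)
print(near_one, large)
```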

3.3 SVN2TLFWG operator

Theorem 7

Let \(a_p(p=1,\ldots ,n)\) be n SVN2TLNs, then the Frank weighted geometric operator of n SVN2TLNs is still a SVN2TLN, and

$$\begin{aligned}&\mathrm{SVN2TLFWG}(a_1,a_2,\ldots ,a_n)\nonumber \\&\quad = \left\langle \varDelta \left( t\log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)/t}-1)^{w_i}\right) \right) , \log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{T_{a_i}}-1)^{w_i}\right) ,\nonumber \\&\qquad 1-\log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{1-I_{a_i}}-1)^{w_i}\right) , 1-\log _\lambda \left( 1+\prod _{i=1}^n(\lambda ^{1-F_{a_i}}-1)^{w_i}\right) \right\rangle . \end{aligned}$$
(13)
  1. 1.

    When \(\lambda \rightarrow \) 1

    $$\begin{aligned}&\lim _{\lambda \rightarrow 1}\mathrm{SVN2TLFWG}(a_1,a_2,\ldots ,a_n)=\mathrm{SVN2TLWG}(a_1,a_2,\ldots ,a_n)\\&\quad =\left\langle \varDelta \left( t\prod _{i=1}^n (\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)/t)^{w_i}\right) , \prod _{i=1}^n(T_{a_i})^{w_i}, 1-\prod _{i=1}^n({1-I_{a_i}})^{w_i}, 1-\prod _{i=1}^n({1-F_{a_i}})^{w_i}\right\rangle . \end{aligned}$$
  2. 2.

    When \(\lambda \rightarrow \infty \):

    $$\begin{aligned}&\lim _{\lambda \rightarrow \infty }\mathrm{SVN2TLFWG}(a_1,a_2,\ldots ,a_n)\\&\quad =\left\langle \varDelta \left( \sum _{i=1}^n w_i\varDelta ^{-1}(s_{\theta (a_i)},\alpha _i)\right) , \sum _{i=1}^n{w_i}T_{a_i}, \sum _{i=1}^n{w_i}I_{a_i}, \sum _{i=1}^n{w_i}F_{a_i}\right\rangle . \end{aligned}$$

Example 2

Let \(a_1=\langle s_4,(0.2,0.2,0.3)\rangle , a_2=\langle s_4,(0.4,0.3,0.1)\rangle , a_3=\langle s_3,(0.6,0.2,0.1)\rangle \) be three SVN2TLNs, and \(w=(0.3,0.4,0.2),\lambda =2, t=6\); then according to Eq. (13), we have

$$\begin{aligned} \mathrm{SVN2TLFWG}(a_1, a_2, a_3)= & {} a_1^{0.3}\otimes a_2^{0.4}\otimes a_3^{0.2}\\= & {} \bigg \langle \varDelta \left( t\log _2\left( 1+(2^{(4/6)}-1)^{0.3}\cdot (2^{(4/6)}-1)^{0.4}\cdot (2^{(3/6)}-1)^{0.2}\right) \right) , \\&\log _2\left( 1+(2^{0.2}-1)^{0.3}\cdot (2^{0.4}-1)^{0.4}\cdot (2^{0.6}-1)^{0.2}\right) , \\&1-\log _2\left( 1+(2^{(1-0.2)}-1)^{0.3}\cdot (2^{(1-0.3)}-1)^{0.4}\cdot (2^{(1-0.2)}-1)^{0.2}\right) , \\&1-\log _2\left( 1+(2^{(1-0.3)}-1)^{0.3}\cdot (2^{(1-0.1)}-1)^{0.4}\cdot (2^{(1-0.1)}-1)^{0.2}\right) \bigg \rangle \\= & {} \bigg \langle \varDelta \left( t\log _2\left( 1.5777\right) \right) , \log _2\left( 1.3133\right) , 1-\log _2\left( 1.7131\right) ,1-\log _2\left( 1.7965\right) \bigg \rangle \\= & {} \bigg \langle \varDelta \left( 3.9469\right) , 0.3932, 0.2234,0.1548\bigg \rangle \\= & {} \bigg \langle (s_4,-0.0531), 0.3932, 0.2234,0.1548\bigg \rangle . \end{aligned}$$
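The same flattened-tuple sketch reproduces Example 2 from Eq. (13) (plain Python, our own encoding of an SVN2TLN as \((\beta , T, I, F)\)):

```python
import math

def svn2tlfwg(items, w, lam, t):
    """Eq. (13): Frank weighted geometric mean of SVN2TLNs given as (beta, T, I, F)."""
    def agg(exponents):
        p = 1.0
        for v, wi in zip(exponents, w):
            p *= (lam ** v - 1) ** wi
        return math.log(1 + p, lam)
    beta = t * agg([a[0] / t for a in items])
    T = agg([a[1] for a in items])
    I = 1 - agg([1 - a[2] for a in items])
    F = 1 - agg([1 - a[3] for a in items])
    return beta, T, I, F

items = [(4.0, 0.2, 0.2, 0.3), (4.0, 0.4, 0.3, 0.1), (3.0, 0.6, 0.2, 0.1)]
beta, T, I, F = svn2tlfwg(items, w=[0.3, 0.4, 0.2], lam=2, t=6)
print(round(beta, 4), round(T, 4), round(I, 4), round(F, 4))
```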

Theorem 8

Let a and \(a_p(p=1,2,\ldots , n)\) be SVN2TLNs, and \(0\le w_k\le 1\), \(\sum _{k=1}^n w_k=1, r>0\), then

$$\begin{aligned}&\mathrm{SVN2TLFWG}(a_1^r,\ldots ,a_n^r)= (\mathrm{SVN2TLFWG}(a_1,\ldots ,a_n))^r;\\&\mathrm{SVN2TLFWG}(a_1\otimes a,\ldots ,a_n\otimes a)=\mathrm{SVN2TLFWG}(a_1,\ldots ,a_n)\otimes a;\\&\mathrm{SVN2TLFWG}(a_1^r\otimes a,\ldots ,a_n^r\otimes a)=(\mathrm{SVN2TLFWG}(a_1,\ldots ,a_n))^r\otimes a. \end{aligned}$$

4 Approach for MAGDM issue with proposed AOs

In an MAGDM problem, assume that (1) the alternative set is \(A=\{A_1, \cdots , A_m\}\); (2) the attribute (criteria) set is \(C=\{C_{1}, \cdots , C_{n}\}\); (3) the weight vector (WV) of the attributes, \(w\), satisfies \(w_i\in [0, 1]\) and \(\sum _{i=1}^nw_i=1\); (4) the expert set is \(E\), and the WV of the experts, \(\mu \), satisfies \(\mu _k\in [0, 1]\) and \(\sum _{k=1}^l\mu _k=1\). The evaluation of alternative \(A_i\) under attribute \(C_j\) by expert \(E_k\) can be expressed as \((s_{\theta _{(ij)^k}},\alpha _{{ij}^k}),(T_{(ij)^k}, I_{{(ij)^k}}, F_{(ij)^k})\), and the corresponding evaluation matrix as \(R_k=(R_{ij}^k)_{m\times n}\).

The steps of an MAGDM problem can be summarized as follows: construct the evaluation index system, obtain the evaluation values of the alternatives, calculate the expert weights, aggregate the evaluation information, determine the attribute weights, rank the alternatives by the decision-making method, and determine the optimal one.

Definition 11

Let \(a_p\,(p=1,2)\) be two SVN2TLNs; then the Hamming distance is defined as

$$\begin{aligned} d(a_1,a_2)= & {} |\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)T_{a_1}-\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)T_{a_2}|+|\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)(1-I_{a_1})\nonumber \\&-\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)(1-I_{a_2})|+|\varDelta ^{-1}(s_{\theta (a_1)},\alpha _1)(1-F_{a_1})\nonumber \\&-\varDelta ^{-1}(s_{\theta (a_2)},\alpha _2)(1-F_{a_2})|. \end{aligned}$$
(14)
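As an illustration, the Hamming distance of Eq. (14) can be sketched as below. This is a minimal sketch, assuming each SVN2TLN is stored as a tuple `(v, T, I, F)`, where `v` is the numeric value \(\varDelta ^{-1}(s_\theta ,\alpha )\) of the 2-tuple part; the function name and representation are illustrative, not the authors' implementation.

```python
def hamming_distance(a1, a2):
    """Hamming distance of Eq. (14) between two SVN2TLNs.

    Each argument is (v, T, I, F), where v stands for the numeric
    2-tuple value delta^{-1}(s_theta, alpha).
    """
    v1, t1, i1, f1 = a1
    v2, t2, i2, f2 = a2
    return (abs(v1 * t1 - v2 * t2)                      # truth term
            + abs(v1 * (1 - i1) - v2 * (1 - i2))        # indeterminacy term
            + abs(v1 * (1 - f1) - v2 * (1 - f2)))       # falsity term
```

For example, the distance of an SVN2TLN to itself is zero, and the distance is symmetric in its arguments.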

4.1 Determine DMs’ weights

The subjective weighting method relies on the experience and professional knowledge of decision makers, while the objective weighting method relies on the actual evaluation information. To take into account both the experience and professional knowledge of DMs and the evaluation information, the comprehensive weights of DMs are determined by a linear combination. Let \(\mu _j^s\) and \(\mu _j^o\) be the subjective and objective weights of the experts, respectively; then we have:

$$\begin{aligned} \mu _j=\rho \mu _j^s+(1-\rho )\mu _j^o. \end{aligned}$$
(15)
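Eq. (15) is a simple convex combination of the two weight vectors; a minimal sketch (with illustrative weight vectors) is:

```python
def combine_weights(subj, obj, rho=0.5):
    """Eq. (15): comprehensive weight as a convex combination of
    subjective weights subj and objective weights obj, with mixing
    parameter rho in [0, 1]."""
    return [rho * s + (1 - rho) * o for s, o in zip(subj, obj)]
```

If both input vectors sum to one, the combined weights also sum to one for any \(\rho \in [0,1]\).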

Definition 12

Let \((R_{ij}^k)_{m\times n}\) be the evaluation matrix of expert \(E_k\) using SVN2TLNs, and let \((R_{ij}^\circ )_{m\times n}\) be the overall matrix, which is defined as:

$$\begin{aligned} R_{ij}^\circ =\sum _{k=1}^l\mu _kR_{ij}^k. \end{aligned}$$
(16)

In group decision-making problems, it is generally considered that there is a consistent trend in the decisions of experts. If an expert’s evaluation of the alternative set is highly similar to that of the group, it means that the expert holds the same opinion as the other experts or that the expert’s opinion is generally supported by the group; such an expert should therefore be assigned a higher objective weight.

Definition 13

Let \(R_k\) and \(R^\circ \) be the evaluation matrices of expert \(E_k\) and of the expert group, respectively. For attribute \(C_j\) of alternative \(A_i\), the relative deviation between \(R_{ij}^k\) and \(R^\circ _{ij}\) is denoted by \(\psi _{ij}^k\) and defined as

$$\begin{aligned} \psi (R_{ij}^k,R^\circ _{ij})=d\left( R_{ij}^k,R^\circ _{ij}\right) , \end{aligned}$$
(17)

then, at the \(r\)-th iteration, the sum of the deviations between the evaluation matrix of expert \(E_k\) and that of the expert group is denoted by \(\varDelta \psi _k^{(r)}\):

$$\begin{aligned} \varDelta \psi _k^{(r)}=\sum _{i=1}^m\sum _{j=1}^n\psi \left( R_{ij}^k,R^\circ _{ij}\right) . \end{aligned}$$
(18)

According to the consistency principle, the smaller \(\varDelta \psi _k^{(r)}\) is, the higher the similarity between expert \(E_k\) and the expert group, and hence the higher the objective weight \(\mu _k^o\) that should be assigned. Therefore, the normalized objective weight \(\mu _k^{o(r)}\) of expert \(E_k\) at the \(r\)-th iteration can be obtained as follows:

$$\begin{aligned} \mu _k^{o(r)}=\frac{\sum _{t=1}^l\varDelta \psi _t^{(r)}-\varDelta \psi _k^{(r)}}{\sum _{t=1}^l\varDelta \psi _t^{(r)}}. \end{aligned}$$
(19)

In this paper, considering both the subjective and objective weights of experts, an iterative algorithm is used to achieve a stable solution. The stopping condition of the iterative algorithm is given in Definition 14.

Definition 14

A norm is used to calculate the distance between the objective weight vectors of the \(r\)-th and \((r-1)\)-th iterations:

$$\begin{aligned} \varDelta \mu ^o=\left( \sum _{k=1}^l\left( \mu _k^{o(r)}-\mu _k^{o(r-1)}\right) ^2\right) ^{\frac{1}{2}}, \end{aligned}$$
(20)

When \(\varDelta \mu ^o\le \varepsilon \) (e.g., \(\varepsilon =10^{-4}\)), the iteration process is finished. The specific steps are summarized as follows:

  • Step 1  Input the expert evaluation information and the subjective weights of experts, and initialize the parameters \(\rho = 0.5\), \(r = 1\), \(\varepsilon =10^{-4}\).

  • Step 2  If the iteration number \(r = 1\), initialize the objective weights as \(\mu _k^o = \frac{1}{l}\) and go to Step 3. If \(r\ne 1\), substitute the objective weights of the \((r-1)\)-th iteration \(\mu _k^{o(r-1)}\) into formula (15) to calculate the comprehensive weights.

  • Step 3  Calculate the weighted expert evaluation matrix \(R_{ij}^\circ \) according to Definition 12.

  • Step 4  Calculate the sum of the deviations of expert \(E_k\), \(\varDelta \psi _k^{(r)}\) according to Definition 13.

  • Step 5  Calculate the objective weight \(\mu _k^{o(r)}\) of expert \(E_k\) after the \(r\)-th iteration according to Eq. (19).

  • Step 6  Calculate the distance \(\varDelta \mu ^o\) between the objective weight vectors of the \(r\)-th and \((r-1)\)-th iterations according to Definition 14. If \(\varDelta \mu ^o\le \varepsilon \), stop the iteration and go to the next step; otherwise, set \(r=r+1\) and return to Step 2.

  • Step 7  Output the comprehensive expert weight \(\mu _k\).
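The seven steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: scalar evaluations with absolute-value distance stand in for SVN2TLNs and the Hamming distance of Eq. (14), and the objective weights of Eq. (19) are additionally rescaled to sum to one (an added assumption, since Eq. (19) alone sums to \(l-1\)).

```python
def comprehensive_weights(matrices, subj, rho=0.5, eps=1e-4, max_iter=100):
    """Iterative comprehensive expert weights (Steps 1-7, Eqs. (15)-(20)).

    matrices : one m x n evaluation matrix per expert (scalars here,
               as a stand-in for SVN2TLNs).
    subj     : subjective expert weights summing to one.
    """
    l = len(matrices)
    m, n = len(matrices[0]), len(matrices[0][0])
    obj = [1.0 / l] * l                                   # Step 2, r = 1
    for _ in range(max_iter):
        mu = [rho * s + (1 - rho) * o for s, o in zip(subj, obj)]  # Eq. (15)
        # Step 3: weighted group matrix R° of Eq. (16)
        group = [[sum(mu[k] * matrices[k][i][j] for k in range(l))
                  for j in range(n)] for i in range(m)]
        # Step 4: total deviation of each expert from the group, Eq. (18)
        dev = [sum(abs(matrices[k][i][j] - group[i][j])
                   for i in range(m) for j in range(n)) for k in range(l)]
        total = sum(dev)
        # Step 5: objective weights by Eq. (19), rescaled to sum to one
        raw = [(total - d) / total for d in dev]
        new_obj = [x / sum(raw) for x in raw]
        # Step 6: stop when the weight change is small (Definition 14)
        change = sum((a - b) ** 2 for a, b in zip(new_obj, obj)) ** 0.5
        obj = new_obj
        if change <= eps:
            break
    return [rho * s + (1 - rho) * o for s, o in zip(subj, obj)]   # Step 7
```

With consistent input data the procedure converges quickly, and the returned comprehensive weights sum to one whenever the subjective weights do.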

4.2 Determine the attribute weights

In actual DMPs, the weights of attributes may not be known in advance; therefore, we adapt the maximizing deviation method (Wu et al. 2018; Wu and Chen 2007) to the SVN2TL environment.

Case 1  The weight information of attributes is completely unknown:

$$\begin{aligned}&\max D(w)=\sum _{k=1}^l\sum _{j=1}^n\sum _{i=1}^m\sum _{r=1}^m\mu _kw_jd(a_{ij}^{(k)},a_{rj}^{(k)})\nonumber ,\\&\mathrm{s.t.}\quad w_j\in [0,1], j=1,2, \ldots , n, \sum _{j=1}^nw_j^2=1. \end{aligned}$$
(21)

Constructing the Lagrange function, we have

$$\begin{aligned} L(w,\eta )=\sum _{k=1}^l\sum _{j=1}^n\sum _{i=1}^m\sum _{r=1}^m\mu _kw_jd(a_{ij}^{(k)},a_{rj}^{(k)})+\frac{\eta }{2} \left( \sum _{j=1}^nw_j^2-1\right) . \end{aligned}$$

Taking the partial derivatives with respect to \(w_j\) and \(\eta \), we have:

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{\partial L(w,\eta )}{\partial w_j}=\sum _{k=1}^l\sum _{i=1}^m\sum _{r=1}^m\mu _kd(a_{ij}^{(k)},a_{rj}^{(k)})+\eta w_j=0,\\ \frac{\partial L(w,\eta )}{\partial \eta }= \sum _{j=1}^nw_j^2-1=0. \end{array}\right. } \end{aligned}$$

By solving the above formula and normalizing it, we can get

$$\begin{aligned} w_j=\frac{\sum _{k=1}^l\sum _{i=1}^m\sum _{r=1}^m\mu _kd\left( a_{ij}^{(k)},a_{rj}^{(k)}\right) }{\sum _{k=1}^l\sum _{j=1}^n\sum _{i=1}^m\sum _{r=1}^m\mu _kd\left( a_{ij}^{(k)},a_{rj}^{(k)}\right) }. \end{aligned}$$
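The normalized closed-form weights displayed above can be sketched as below; this is an illustrative sketch in which scalar evaluations and absolute difference stand in for SVN2TLNs and the distance of Eq. (14), which can be supplied through the `d` argument.

```python
def max_deviation_weights(matrices, mu, d=lambda a, b: abs(a - b)):
    """Maximizing-deviation attribute weights (Case 1, closed form).

    matrices : one m x n evaluation matrix per expert (scalars here).
    mu       : expert weights.
    d        : per-element distance; absolute difference by default.
    """
    l, m, n = len(matrices), len(matrices[0]), len(matrices[0][0])
    # total weighted deviation of attribute j over all alternative pairs
    dev = [sum(mu[k] * d(matrices[k][i][j], matrices[k][r][j])
               for k in range(l) for i in range(m) for r in range(m))
           for j in range(n)]
    total = sum(dev)
    return [x / total for x in dev]   # normalized so the weights sum to one
```

Attributes on which the alternatives differ more receive larger weights, which is exactly the rationale of the maximizing deviation method.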

Case 2  When the weights of attributes are partially known, we have:

$$\begin{aligned}&\max D(w)=\sum _{k=1}^l\sum _{j=1}^n\sum _{i=1}^m\sum _{r=1}^m\mu _kw_jd(a_{ij}^{(k)},a_{rj}^{(k)})\nonumber \\&\mathrm{s.t.}\quad w_j\in [0,1], w_j\in \varTheta , j=1,2, \ldots , n, \sum _{j=1}^nw_j=1. \end{aligned}$$
(22)

where \(\varTheta \) represents the set of constraints that the attribute weights must satisfy.

4.3 Approach for MAGDM under SVN2TLS environment

In this section, the proposed SVN2TLFWA and SVN2TLFWG operators are integrated with the BWM and MABAC methods to deal with the MAGDM problem. The steps are summarized as follows (see Fig. 1).

  • Step 1  Obtain the weight \(\mu \) of each DM by the iterative algorithm proposed in Sect. 4.1.

  • Step 2  Obtain the overall decision matrix \(R=(R_{ij})_{m\times n}\) by proposed SVN2TLFWA or SVN2TLFWG operator.

  • Step 3  Obtain the attribute weights \(W\) based on the method proposed in Sect. 4.2.

  • Step 4  Normalize the decision matrix \(R=(R_{ij})_{m\times n}\).

    for benefit attributes:

    $$\begin{aligned} N_{ij}=R_{ij}=\left( s^\prime _{\theta _{ij}},\alpha ^\prime _{ij}\right) ,\left( T^\prime _{ij}, I^\prime _{ij}, F^\prime _{ij}\right) =(s_{\theta _{ij}},\alpha _{ij}),(T_{ij}, I_{ij}, F_{ij}), \end{aligned}$$
    (23)

    for cost attributes:

    $$\begin{aligned} N_{ij}=\varDelta (t-\varDelta ^{-1}(s_{\theta _{ij}},\alpha _{ij})),(F_{ij},1-I_{ij},T_{ij}), \end{aligned}$$
    (24)
  • Step 5  Calculate the weighted normalized matrix \(V=[v_{ij}]_{m\times n}\) as below:

    (25)
  • Step 6  Determine the BBA vector \(B=[g_j]_{1\times n}\) by the SVN2TLFWG operator.

    (26)
  • Step 7  Calculate the distance matrix \(D=[d_{ij}]_{m\times n}\).

Although the MABAC method considers the potential value of “gain” and “loss” in the ranking of alternatives, it cannot reflect the fact that DMs are more sensitive to “loss” than to “gain”, nor can it describe DMs’ attitude of risk avoidance. However, DMs often show incomplete rationality, that is, bounded rational behavior, in the real decision-making process. Therefore, the risk preference parameter \(\theta \) is introduced to reflect the characteristics of DMs’ risk attitude, as follows:

$$\begin{aligned} d_{ij}= {\left\{ \begin{array}{ll} d(v_{ij},g_j),&{}\text {if } S(v_{ij})\ge S(g_j),\\ -\theta d(v_{ij},g_j),&{}\text {if } S(v_{ij})< S(g_j), \end{array}\right. } \end{aligned}$$
(27)

where \(\theta \,(\theta >0)\) is the risk preference of the decision maker: \(\theta =1\), \(\theta >1\), and \(\theta <1\) indicate that the DM is risk neutral, risk averse, and risk seeking, respectively.

  • Step 8  Rank the alternatives and get the best option. \(Q_i\) is defined as the row sum of matrix \(D\):

$$\begin{aligned} Q_i=\sum _{j=1}^n d_{ij},\quad i=1,\ldots ,m, \end{aligned}$$
(28)

The better alternative is the one with the larger value of \(Q_i\).
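Steps 7 and 8 of the modified MABAC ranking (Eqs. (27) and (28)) can be sketched as follows. In this minimal sketch, scalar values and absolute distance are illustrative stand-ins for the SVN2TLN score function and the Hamming distance, and the returned ranking lists 0-based alternative indices from best to worst.

```python
def mabac_rank(V, g, theta=1.0):
    """Modified MABAC: distance matrix of Eq. (27) and row sums of Eq. (28).

    V     : m x n weighted normalized matrix (scalars here).
    g     : border approximation area vector, one entry per attribute.
    theta : risk preference parameter (> 0); theta > 1 penalizes losses.
    """
    D = []
    for row in V:
        # Eq. (27): losses (below the border) are scaled by -theta
        D.append([abs(v - b) if v >= b else -theta * abs(v - b)
                  for v, b in zip(row, g)])
    Q = [sum(r) for r in D]                                # Eq. (28)
    ranking = sorted(range(len(Q)), key=lambda i: -Q[i])   # larger Q_i is better
    return Q, ranking
```

Setting \(\theta =1\) recovers the classical MABAC distance; \(\theta >1\) models the loss-averse DM described above.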

Fig. 1
figure 1

Flowchart of the proposed approach

5 Case study

To show the validity and merits of the proposed approach, some examples are given in this section.

5.1 The procedure of the proposed method

Example 3

The purchase of a car is a complex fuzzy decision-making problem with many influencing factors, and the factors considered will differ for different needs. In this part, we only consider the general and decisive factors: space (\(C_1\)), engine performance (\(C_2\)), use cost (\(C_3\)), and safety (\(C_4\)). Four cars (alternatives) were evaluated by four DMs \(d^t(t=1,2,3,4)\) based on SVN2TLNNs with LTs \(\varOmega =\{s_0^7,s_1^7,\ldots ,s_6^7\}\), as shown in Table 1.

Table 2 Evaluation matrix of Example 3

Step 1  Suppose the organizer provides the following pairwise comparison vectors: \(U_B=[1, 1, 2, 3]\) and \(V_W=[3, 2, 2, 1]\). Solving this problem by the BWM method (Eq. (5)), we obtain the subjective weights of the experts as \(\mu _1^s=0.3717,\mu _2^s=0.3007,\mu _3^s=0.1931,\mu _4^s=0.1345\), with \(\varepsilon ^*=0\). The provided comparison vectors meet the consistency condition, so the solution is feasible. Let \(\rho =0.5,\lambda =2,\varepsilon =10^{-8}\); using the iterative algorithm, we get the comprehensive expert weights \(\mu _1=0.3097,\mu _2=0.2807,\mu _3=0.2155,\mu _4=0.1942\).

Step 2  Based on the SVN2TLFWA operator, the synthesized decision matrix \(R\) can be obtained as shown in Table 3.

Table 3 Synthesized decision matrix R of Example 3

Step 3  Calculate the attribute weights by the maximizing deviation method. Utilizing the optimization model of Eq. (21), the attribute weights are obtained as \([w_i]_{1\times 4}=[0.3650,0.1987,0.2630,0.1733]\).

Step 4  Normalize the synthesized decision matrix based on Eq. (23); the result is shown in Table 4.

Table 4 Normalized matrix of Example 3

Step 5  Construct the weighted normalized matrix V.

Step 6  Compute the BBA vector B which is shown as:

$$\begin{aligned}&g_1=\left\langle \left( s_2^7,0.1383\right) ,\left( 0.2338,0.5453,0.6070\right) \right\rangle ,\\&g_2=\left\langle \left( s_1^7,0.0704\right) ,\left( 0.1315,0.7306,0.7289\right) \right\rangle ,\\&g_3=\left\langle \left( s_1^7,-0.2279\right) ,\left( 0.0559,0.9537,0.8761\right) \right\rangle ,\\&g_4=\left\langle \left( s_2^7,-0.4916\right) ,\left( 0.1146,0.7930,0.7999\right) \right\rangle , \end{aligned}$$

Step 7  Obtain the distance matrix D as

$$\begin{aligned} D= \begin{pmatrix} 4.0433 &{}0.0525 &{} 0.0498 &{} -0.3555 \\ -1.3613 &{}-0.0831 &{} 0.0996 &{} -0.2519 \\ -0.9639 &{}-0.1398 &{} 0.0557 &{} -0.2527 \\ 0.7511 &{}0.1345 &{} -0.1080 &{} 1.9316 \end{pmatrix}, \end{aligned}$$

Step 8  Compute the sum of the row elements of matrix \(D\); then we have \([Q_i]_{1\times 4}=[3.7900, -1.5968,-1.3008,2.7092]\). Therefore, the ranking of the alternatives is \(\wp _1\succ \wp _4\succ \wp _3\succ \wp _2\).

5.2 Sensitivity analysis

In this part, we analyze the influence of different values of the parameters (\(\lambda ,\theta ,\rho \)) on the ranking results. Table 5 and Fig. 2 show the ranking results using SVN2TLFWA or SVN2TLFWG with different values of parameter \(\lambda \). From them, we can see that with different AOs and \(\lambda \), the ordering of the alternatives remains the same, namely \(\wp _1\succ \wp _4\succ \wp _3\succ \wp _2\). By further analysis, we can conclude that the row sums \(Q_1\) and \(Q_4\) of the SVN2TLFWA operator decrease as the parameter \(\lambda \) increases, whereas the trend of \(Q_2\) and \(Q_3\) is the opposite.

In Step 1, the BWM method is applied to obtain the subjective weights of experts. To overcome the shortcomings of purely subjective methods, an iterative algorithm combining the evaluation information provided by the experts was proposed to calculate the comprehensive expert weights, with the linear combination method used to merge the subjective and objective weights. Therefore, the parameter \(\rho \) in Eq. (15) affects the comprehensive weights of experts. For the BWM method, the result is determined once a pairwise comparison vector is given. It should also be noted that \(\rho \) is the proportion assigned to the subjective weight. The influence of parameter \(\rho \) on the weights of experts and the number of iterations is shown in Table 6.

From Table 6, we can see that as \(\rho \) increases, the weights of expert 1 and expert 2 increase while the corresponding weights of expert 3 and expert 4 decrease. The reason is that among the subjective weights calculated by the BWM method, the weights of expert 1 and expert 2 are relatively large, 0.3717 and 0.3007, respectively.

Table 5 Ranking order with different parameter \(\lambda \) (\(\mu =0.5,\theta =1\))
Table 6 WV of experts and ranking order with different parameter \(\rho \) (SVN2TLFWA)
Fig. 2
figure 2

Value of \(g_i\) based on SVN2TLFWA operator

5.3 The reliability of the proposed method

Table 7 Evaluation values of alternative \(\phi _4'\) for different DMs

The relative performance of the proposed method will be tested based on three criteria established by Wang and Triantaphyllou (2005).

  • Criterion 1  The relative order of the alternatives will not change by replacing a non-optimal alternative with a worse one.

  • Criterion 2  The transitive property should be followed.

  • Criterion 3  When the problem is decomposed into smaller subproblems and all other data remain unchanged, the combined ranking should be identical to that of the original problem.

With regard to Criterion 1, the non-optimal alternative \(\wp _4\) is substituted by a worse one, \(\wp _4'\), shown in Table 7. The values of \(Q_i\) in Step 8 are generated as \([Q_i]_{1\times 4}=[4.5836, -0.2348, 0.1417, -1.3287]\); then the ranking order is \(\wp _1\succ \wp _3\succ \wp _2\succ \wp _4'\), and the best alternative is still \(\wp _1\). Therefore, the proposed method is reliable under Criterion 1.

With regard to Criteria 2 and 3, the original problem is decomposed into six groups: \(\{\wp _1, \wp _2\}, \{\wp _1, \wp _3\},\{\wp _1, \wp _4\},\{\wp _2, \wp _3\},\{\wp _2, \wp _4\},\{\wp _3, \wp _4\}\). By calculation, the corresponding values of \(Q_i\) are \([Q_1, Q_2]=[4.3605, -2.0429]\), \([Q_1, Q_3]=[3.1623, -1.5227]\), \([Q_1, Q_4]=[1.0724, 0.3172]\), \([Q_2, Q_3]=[-0.1944, 0.2416]\), \([Q_2, Q_4]=[-1.3645, 2.8092]\), \([Q_3, Q_4]=[-0.9497, 1.8993]\), so the pairwise orders of alternatives can be derived as \(\wp _1\succ \wp _2, \wp _1\succ \wp _3, \wp _1\succ \wp _4, \wp _3\succ \wp _2, \wp _4\succ \wp _2, \wp _4\succ \wp _3\). Therefore, it can be deduced that the overall ranking of the alternatives is consistent with the original one. Thus, the proposed method is reliable under Criteria 2 and 3.

5.4 Comparative analysis

In the following, the proposed approach is compared with existing methods, including the SVN2TLHWA operator (Wu et al. 2018), the MABAC-ELECTRE method under single-valued neutrosophic linguistic environments (MESVNL) (Ji et al. 2018), the SVNLS-PT operator (Guo and Sun 2019), the INULWAA operator (Ye 2017), and the WSVNLMSM operator (Wang et al. 2017).

Example 4

This is an emergency management problem from Wu et al. (2018) (see Table 8). SVNLNs were used by the DMs to express their preferences. In the following comparative analysis, we assume that the weights of the experts are \([\mu _1,\mu _2,\mu _3]=[0.37, 0.33, 0.3]\).

Table 8 Evaluation matrix of Example 4

5.4.1 Calculation process of the proposed method

For reasons of space, some intermediate steps are omitted here.

  • Step 1   Assume \(\lambda =2\) and that the WV of DMs is consistent with that in Wu et al. (2018), namely (0.37, 0.33, 0.3).

  • Step 3   The WV of attributes can be obtained as \(w=(0.2926, 0.3147, 0.3927)\).

  • Step 4   Because all the attributes are benefit, this step can be omitted.

  • Step 6  Compute the BBA vector B which is shown as:

    $$\begin{aligned}&g_1=\langle (s_2,-0.4358),(0.1772,0.6166,0.7021)\rangle ,\\&g_2=\langle (s_1,-0.0984),(0.2048,0.6141,0.6841)\rangle ,\\&g_3=\langle (s_1,0.4707),(0.1934,0.5162,0.6093)\rangle . \end{aligned}$$
  • Step 7  Obtain the distance matrix D as:

    $$\begin{aligned} D= \begin{pmatrix} 0.1155 &{}0.0944 &{} -0.5022 \\ -0.3029 &{}0.3884 &{} 0.3298 \\ -0.1265 &{}-0.1212 &{} 0.2789 \\ 0.3859 &{}-0.3279 &{} 0.1694 \end{pmatrix}. \end{aligned}$$
  • Step 8   Assuming \(\theta =1\), we can get \([Q_i]_{1\times 4}=[-0.2923,0.4153,0.0312, 0.2274]\). Therefore, the ordering of the alternatives is \(\wp _2\succ \wp _4\succ \wp _3\succ \wp _1\).

It should be noted that when changing the parameter values of \(\lambda \) or \(\theta \), such as \(\lambda =2,\theta =4\) or \(\lambda =3,\theta =3\), the ranking order will be changed to \(\wp _2\succ \wp _3\succ \wp _4\succ \wp _1\).

5.4.2 Calculation process based on SVNLS-PT operator (Guo and Sun 2019)

In the following, the SVNLS-PT operator (Guo and Sun 2019) is taken into consideration. For better comparative analysis, the data are consistent with that in Example 4.

  • Step 1 Normalize decision matrices. Since all attributes are benefit, this step is omitted here.

  • Step 2 Obtain prospect matrices. Let \(\alpha =\beta =0.88,\theta =2.25,\gamma =0.61, \delta =0.72, p=1\) and the reference point is \(\langle s_3^7,(0.5,0.5, 0.5)\rangle \), then the prospect matrix V can be obtained as:

    $$\begin{aligned} V= \begin{pmatrix} 0.3324 &{} 0.3848 &{}-0.5015 \\ 0.3448 &{} 0.4456 &{} 0.4363 \\ 0.2605 &{} 0.4200 &{} 0.4304 \\ 0.6552 &{} 0.3955 &{}-0.1740 \end{pmatrix}, \end{aligned}$$
  • Step 3 Calculate attribute weights. To unify the comparison, we assume the attribute weights are consistent with those in Wu et al. (2018), namely, \(w=(0.2926, 0.3147, 0.3927)\).

  • Step 4 Obtain the integrated prospect values \(\bar{V}=Vw=[0.0214, 0.4125, 0.3774, 0.2479]\).

  • Step 5   The larger the value of \(\bar{V_i}\), the better the alternative \(\wp _i\). So we get \(\wp _2\succ \wp _3\succ \wp _4\succ \wp _1\).

5.4.3 Calculation process based on TOPSIS method

In the following, we consider replacing the MABAC method with the TOPSIS method, a ranking method based on closeness to the ideal solution. The detailed steps of this method can be found in Chen et al. (2018).

  • Step 1 Assume that the WV of DMs is consistent with that in Wu et al. (2018), namely (0.37, 0.33, 0.3), and suppose \(\lambda =2\).

  • Step 2 Let \(s^+=\langle (s_6^7,0),(1,0,0)\rangle \) and \(s^-=\langle (s_0^7,0),(1,0,0)\rangle \) be the positive and negative ideal solutions, respectively. Then, calculate the distance of each alternative to \(s^+\) and \(s^-\) as below.

    $$\begin{aligned} d_i^+=\sum _{j=1}^n {w_jd(R_{ij},s^+)},d_i^-=\sum _{j=1}^n {w_jd(R_{ij},s^-)}. \end{aligned}$$
    (29)
  • Step 3 Calculate the comprehensive evaluation index of each alternative as

    $$\begin{aligned} CC_i=\frac{d_i^-}{d_i^++d_i^-}. \end{aligned}$$
  • Step 4  Get the ranking result according to the value of \(CC_i\). We have \(CC_i=[0.4303,0.4491,0.4304,0.4351]\); then \(\wp _2\succ \wp _4\succ \wp _3\succ \wp _1\).
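The TOPSIS comparison above (Eq. (29) and the closeness coefficient) can be sketched as below, with scalar evaluations standing in for SVN2TLNs; `topsis_rank` and its pluggable distance argument are illustrative, not the implementation of Chen et al. (2018).

```python
def topsis_rank(R, w, s_plus, s_minus, d=lambda a, b: abs(a - b)):
    """Weighted distances of Eq. (29) and closeness coefficients CC_i.

    R       : m x n decision matrix (scalars here).
    w       : attribute weights.
    s_plus  : positive ideal value; s_minus: negative ideal value.
    """
    CC = []
    for row in R:
        d_p = sum(wj * d(rij, s_plus) for wj, rij in zip(w, row))   # to s^+
        d_m = sum(wj * d(rij, s_minus) for wj, rij in zip(w, row))  # to s^-
        CC.append(d_m / (d_p + d_m))   # closeness coefficient
    return CC
```

Alternatives with larger \(CC_i\) are closer to the positive ideal solution and therefore ranked higher.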

The influence of parameter \(\lambda \) on the closeness coefficient and the ranking result is shown in Fig. 3. From Fig. 3, we can draw the following conclusions: (1) the closeness coefficient based on the SVN2TLFWA operator is monotonically decreasing with respect to the parameter \(\lambda \); (2) the change of the parameter \(\lambda \) does not affect the ranking result; (3) compared with the MABAC-based method, the TOPSIS-based method has a lower degree of discrimination between alternatives and cannot reflect the risk preference of DMs.

Fig. 3
figure 3

Closeness coefficient for alternatives obtained by TOPSIS method

Table 9 Ranking results of different methods

Note: experts and attribute weights (EAW).

The comparison results are shown in Table 9. The merits of the proposed method are as follows:

Fig. 4
figure 4

The gap value between two adjacent ranked alternatives by different methods

  1.

    Wu et al. (2018) proposed a novel MAGDM method based on Hamacher t-norms under the SVN2TL environment in which the WV of experts is assumed to be known. In practical problems, the weights of experts are usually unknown. In this paper, we fully consider this situation and propose an iterative algorithm that accounts for both the subjective and objective weights of experts, making the decision results more reasonable. Besides, to reflect the risk preference of DMs, the MABAC method was modified by introducing a risk preference parameter. Thus, compared with Wu et al. (2018), the proposed method has the merits of simple calculation, a systematic process, and logic in line with human decision-making principles.

    In Wu et al. (2018), the ranking results are consistent (\(\wp _2\succ \wp _3\succ \wp _4\succ \wp _1\)) when the parameter \(\lambda \) of the Hamacher t-norm lies in the set \(\{0,1,2,\infty \}\). Taking the SVN2TLGWA operator as an example, the score values are \(S_i(i=1,\ldots ,4)=(2.2130, 2.5013, 2.4489, 2.4385)\). Figure 4 shows the gap value between two adjacently ranked alternatives; Gap\(_i\) represents the gap between the \(i\)-th and \((i+1)\)-th ranked alternatives. From Fig. 4, we can see that the gap values of the proposed method are much larger than those of the method in Wu et al. (2018) and the other methods, so the proposed method can distinguish alternatives more clearly.

  2.

    Methods based on the Maclaurin symmetric mean operator (Wang et al. 2017) or the dual Muirhead mean operator (Wang et al. 2019) have the advantage of capturing the interrelationships among attributes, but at a higher computational cost. Compared with the WSVNLMSM operator (Wang et al. 2017), although the method proposed in this paper contains three parameters, \(\rho \) is merely the proportional coefficient balancing the subjective and objective expert weights and \(\theta \) is the risk attitude index of DMs; only one parameter, \(\lambda \), is involved in aggregation, which gives the proposed operators higher flexibility and consistency. What's more, the SVN2TLWDMM operator in Wang et al. (2019) cannot be used to deal with MAGDM problems.

  3.

    From the calculation with the SVNLS-PT operator proposed by Guo and Sun (2019), we can see that too many parameters will affect the ranking result. Moreover, these parameters are determined by experiments, so it is uncertain whether they are suitable for new problems or scenarios.

  4.

    Without the introduction of the 2-tuple, the aggregation results of Wang et al. (2017), Guo and Sun (2019) and Ji et al. (2018) may not match any of the LTs; for example, \(s_{4.2}\) is meaningful only in comparisons. Therefore, the proposed operator is more effective. What's more, the SVNLWA operators proposed by Guo and Sun (2019) and Ji et al. (2018) are only special cases of the proposed operator with parameter \(\lambda \rightarrow \infty \).

6 Conclusions

In this paper, we investigated MAGDM under the SVN2TLS environment. First, the Frank triangular norms were extended to the SVN2TLS environment, and on this basis some new operational rules and their related properties were investigated. Compared with the existing literature (Ji et al. 2018; Ye 2015; Wang et al. 2017; Guo and Sun 2019; Chen et al. 2018), the Frank operations allow an appropriate parameter \(\lambda \) to be selected according to the actual situation and the preference of the DMs, making the aggregation process more flexible and robust. Then, two new AOs (SVN2TLFWA and SVN2TLFWG) were proposed, and some desirable properties of the proposed AOs were discussed. By combining the improved MABAC method with SVN2TLN information, two new approaches were proposed to solve MAGDM problems, and the computing steps were briefly depicted. Based on the BWM method, an iterative algorithm was developed to determine unknown expert weights. What's more, by introducing risk attitude parameters, the improved MABAC method can flexibly reflect DMs' attitudes toward risk in the assessment. Finally, some examples were given to show the detailed calculation process and the advantages of the proposed method. In the future, we will extend the proposed operators to large-group decision models and algorithms, such as consensus models and clustering models.