1 Introduction

Multi-criteria decision-making (MCDM) methods and multi-criteria group decision-making (MCGDM) methods are widely used in real-life decision-making problems. However, these situations often involve uncertain, incomplete or indeterminate decision-making information. To address this problem, Zadeh [1] proposed fuzzy sets (FSs), which can provide a better representation of reality. Since then, various extensions of fuzzy sets have emerged, such as interval-valued fuzzy sets (IVFSs) [2], intuitionistic fuzzy sets (IFSs), interval-valued intuitionistic fuzzy sets (IVIFSs) [3–5], hesitant fuzzy sets (HFSs) [6] and intuitionistic hesitant fuzzy sets (IHFSs) [7], all of which have been used to solve MCDM [8–10] and MCGDM problems [11–13]. IFSs and IVIFSs have generated further extensions to cope with the vagueness and hesitancy of knowledge or decision information, including triangular intuitionistic fuzzy numbers (TrIFNs) [7, 14, 15], trapezoidal intuitionistic fuzzy numbers (TIFNs) and trapezoidal interval-valued intuitionistic fuzzy numbers (TIVIFNs) [16]. These extensions possess a notable advantage in that they extend the domain of IFSs from a discrete set to a continuous one. For example, TIFNs are defined using trapezoidal fuzzy numbers (TFNs) to express membership and non-membership functions, which helps describe decision makers' (DMs) information precisely in different dimensions [17]. Still, many uncertainties exist in real decision-making processes, including indeterminate, inconsistent, imprecise, incomplete and even unknown information, which are beyond the scope of FSs and IFSs.

Smarandache developed his seminal theory of neutrosophic logic and neutrosophic sets (NSs) [18, 19] and pointed out that the NS is a generalization of the IFS [20]. The prominent characteristic of an NS is the independence among the truth-membership, indeterminacy-membership and falsity-membership degrees, which allows NSs to express more abundant and flexible information than FSs and IFSs [21]. However, NSs cannot be applied in real scientific or engineering areas unless they are specifically described. Since they were proposed, work on NS theory has progressed rapidly, and a number of applications have been identified [22, 23]. Furthermore, many extensions have been developed, such as simplified neutrosophic sets, comprising single-valued neutrosophic sets (SVNSs) and interval neutrosophic sets (INSs) [24, 25], multi-valued neutrosophic sets (MVNSs) [26] and normal neutrosophic sets (NNSs) [27]. Moreover, some researchers have attempted to combine NSs with other traditional sets in order to enhance the ability to represent uncertainty; examples include single-valued neutrosophic graphs [28], interval-valued neutrosophic graphs [29] and interval-valued neutrosophic parameterized (IVNP-) soft sets [30]. In addition, interval-valued neutrosophic hesitant fuzzy sets (IVNHFSs) [31] and simplified neutrosophic linguistic sets (SNLSs) [32–35] have been proposed. However, in SNLSs the three membership degrees are defined relative to a fuzzy concept such as “Excellent” or “Good”, which is a discrete set; naturally, this may lead to information loss, so it is worthwhile to extend the discrete set to a continuous one. Two studies [36, 37] addressed this by proposing a method to transform linguistic information into triangular fuzzy numbers (TrFNs). Deli and Şubaş [21] defined single-valued triangular neutrosophic numbers (SVTrN-numbers) as a generalization of TrFNs and TrIFNs, allowing the DMs' information to be expressed completely in different dimensions [17].

Ye [38] proposed single-valued trapezoidal neutrosophic numbers (SVTNNs) as an extension of SVTrN-numbers in order to improve the ability to describe indeterminate and inconsistent information. SVTNNs have attracted a great deal of research attention because of their advantages in representing incomplete and inconsistent information while avoiding information loss and distortion in complex decision-making problems. For example, Deli and Şubaş [39] proposed a new ranking method for SVTNNs, which they applied to tackle MCDM problems. Broumi and Smarandache defined single-valued neutrosophic trapezoidal linguistic numbers (SVNTLNs) by combining SVTNNs with trapezoidal fuzzy linguistic variables, and they also defined the neutrosophic trapezoidal linguistic weighted arithmetic averaging aggregation operator, the neutrosophic trapezoidal linguistic weighted geometric aggregation operator [40], the interval neutrosophic trapezoidal linguistic weighted arithmetic averaging aggregation operator and the interval neutrosophic trapezoidal linguistic weighted geometric averaging aggregation operator [41]. Ye [38] also introduced the concept of a trapezoidal neutrosophic number (TNN), defined the basic operations of TNNs and, based on this work, developed the trapezoidal neutrosophic weighted arithmetic averaging (TNWAA) operator and the trapezoidal neutrosophic weighted geometric averaging (TNWGA) operator. However, in the method proposed by Ye [38], the TFNs and the three membership degrees are treated independently, such that their complementary effects might be ignored; this could lead to information distortion and conservative results. Furthermore, the method does not take into account interrelationships among criteria, which widely exist in real-world situations.

Aggregation operators have attracted considerable attention in information fusion. Many efficient aggregation operators have been proposed and applied to MCGDM problems [42, 43]. They can be roughly divided into two categories [17]: aggregation operators with independent criteria, as introduced in the research described above [21, 38–41], and aggregation operators that consider interdependent inputs, which widely exist in real decision-making problems. Bonferroni [44] initially proposed the Bonferroni mean (BM) operator, which is prominently characterized by its capacity to capture the interrelationships of input arguments. Many extensions of the BM operator have been applied in various fields. For example, Li et al. [45] introduced the geometric BM operator, applying it to environments with IFNs; meanwhile, Liu et al. [46] applied the BM operator to MVNSs, Liu and Jin [47] introduced a trapezoidal fuzzy linguistic BM operator, and Zhu et al. [48] developed triangular fuzzy BM operators and applied them to MCDM problems. Chen et al. [49] generalized the extended BM operator to explore its aggregation mechanism explicitly, and Tian et al. [32] proposed the simplified neutrosophic linguistic normalized weighted BM operator and the simplified neutrosophic linguistic normalized geometric weighted BM operator, as well as the gray linguistic weighted BM operator [42], to handle MCDM problems. Finally, Zhang et al. [50] constructed an improved decision support model that introduced IVNSs to denote online reviews and utilized BM operators to consider interrelationships among criteria.

As these examples illustrate, the BM operator has found applications in many fields, such as FSs, IFSs, linguistic information, NSs and various extensions of these. At the same time, SVTNNs can express indeterminate and inconsistent information more flexibly and have therefore gained some attention. However, little research has combined these concepts to address MCGDM problems using BM operators under SVTNN environments. Previous studies [38, 53] have focused on applying the traditional arithmetic mean operator or geometric mean operator to SVTNNs, meaning that they can only deal with independent criteria. Moreover, the existing operations and comparison methods for SVTNNs have some drawbacks. To overcome these deficiencies, this paper proposes new operations and a new comparison method for SVTNNs. In addition, expert weights are determined using an entropy-weighted method. Furthermore, the single-valued trapezoidal neutrosophic normalized weighted BM (SVTNNWBM) operator is proposed. Finally, a group decision-making problem for satisfaction assessment is solved using the approach based on the SVTNNWBM operator and the new comparison method.

The rest of this paper is organized as follows. Section 2 briefly reviews some concepts regarding SVTNNs and their operations. Section 3 defines the new operations and comparison method. In Sect. 4, some SVTNN aggregation operators are introduced, including the single-valued trapezoidal neutrosophic BM (SVTNNBM) operator and the SVTNNWBM operator. Section 5 introduces the entropy-weighted method and develops a single-valued trapezoidal neutrosophic MCGDM approach by integrating the SVTNNWBM operator. Section 6 provides an illustrative example to demonstrate the feasibility and applicability of the proposed approach, together with sensitivity and comparative analyses and discussions. Section 7 presents conclusions.

2 Preliminaries

This section introduces some basic concepts and comparison methods related to SVTNNs, which will be utilized in the subsequent analyses.

Definition 1

[51] Let \(K = \left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right]\) be a TFN on the real number set \(R\), with \(a_{1} \le a_{2} \le a_{3} \le a_{4}\). Then the membership function \(\mu_{K} :R \to \left[ {0,1} \right]\) is defined as follows:

$$\mu_{K} \left( x \right) = \left\{ {\begin{array}{*{20}l} {{{\left( {x - a_{1} } \right)\mu_{K} } \mathord{\left/ {\vphantom {{\left( {x - a_{1} } \right)\mu_{K} } {\left( {a_{2} - a_{1} } \right)}}} \right. \kern-0pt} {\left( {a_{2} - a_{1} } \right)}},} \hfill & {a_{1} \le x < a_{2} ;} \hfill \\ {\mu_{K} ,} \hfill & {a_{2} \le x \le a_{3} ;} \hfill \\ {{{\left( {a_{4} - x} \right)\mu_{K} } \mathord{\left/ {\vphantom {{\left( {a_{4} - x} \right)\mu_{K} } {\left( {a_{4} - a_{3} } \right)}}} \right. \kern-0pt} {\left( {a_{4} - a_{3} } \right)}},} \hfill & {a_{3} < x \le a_{4} ;} \hfill \\ {0,} \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right.$$

When \(a_{2} = a_{3}\), the TFN \(K = \left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right]\) reduces to a TrFN.
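For concreteness, the piecewise membership function above can be evaluated directly. The following minimal Python sketch is an illustration only (not code from the source); the height \(\mu_{K}\) is passed as an explicit parameter:

```python
def tfn_membership(x, a1, a2, a3, a4, mu=1.0):
    """Membership of x in the TFN K = [a1, a2, a3, a4] with height mu (Definition 1)."""
    if a1 <= x < a2:
        return (x - a1) * mu / (a2 - a1)   # linearly rising edge
    if a2 <= x <= a3:
        return mu                          # plateau
    if a3 < x <= a4:
        return (a4 - x) * mu / (a4 - a3)   # linearly falling edge
    return 0.0                             # outside the support
```

For \(K = [1, 2, 3, 4]\) with full height, the plateau yields 1 and the two edges interpolate linearly; setting \(a_{2} = a_{3}\) recovers a TrFN without any change to the code.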

Ye [38] extended the concept of TFNs to SVNSs and defined SVTNNs. In what follows, we will first introduce SVNSs.

Definition 2

[52] Let X be a space of points (objects), with a generic element in X denoted by x. A SVNS V in X is characterized by three independent parts, namely the truth-membership function \(T_{V}\), indeterminacy-membership function \(I_{V}\) and falsity-membership function \(F_{V}\), where \(T_{V} :X \to \left[ {0,1} \right]\), \(I_{V} :X \to \left[ {0,1} \right]\) and \(F_{V} :X \to \left[ {0,1} \right]\). For simplification, V is denoted by \(V = \left\{ {\left\langle {x,\left( {T_{V} \left( x \right),I_{V} \left( x \right),F_{V} \left( x \right)} \right)} \right\rangle |x \in X} \right\}\).

SVNSs are a subclass of NSs, and the sum of \(T_{V} \left( x \right)\), \(I_{V} \left( x \right)\) and \(F_{V} \left( x \right)\) satisfies \(0 \le T_{V} \left( x \right) + I_{V} \left( x \right) + F_{V} \left( x \right) \le 3\).

As single-valued neutrosophic numbers (SVNNs) are denoted by crisp numbers, they cannot represent much fuzzy information; the SVTNN was therefore proposed to extend the discrete set to a continuous one.

Definition 3

[38] Let \(T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} \in \left[ {0,1} \right]\). A SVTNN \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) is a special NS on the real number set R, whose truth-membership function \(\mu_{{\tilde{a}}}\), indeterminacy-membership function \(\nu_{{\tilde{a}}}\) and falsity-membership function \(\lambda_{{\tilde{a}}}\) are defined as follows:

$$\mu_{{\tilde{a}}} \left( x \right) = \left\{ {\begin{array}{*{20}l} {{{\left( {x - a_{1} } \right)T_{{\tilde{a}}} } \mathord{\left/ {\vphantom {{\left( {x - a_{1} } \right)T_{{\tilde{a}}} } {\left( {a_{2} - a_{1} } \right)}}} \right. \kern-0pt} {\left( {a_{2} - a_{1} } \right)}}} \hfill & {a_{1} < x < a_{2} ,} \hfill \\ {T_{{\tilde{a}}} } \hfill & {a_{2} < x < a_{3} ,} \hfill \\ {{{\left( {a_{4} - x} \right)T_{{\tilde{a}}} } \mathord{\left/ {\vphantom {{\left( {a_{4} - x} \right)T_{{\tilde{a}}} } {\left( {a_{4} - a_{3} } \right)}}} \right. \kern-0pt} {\left( {a_{4} - a_{3} } \right)}}} \hfill & {a_{3} < x < a_{4} ,} \hfill \\ 0 \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right.$$
$$\nu_{{\tilde{a}}} \left( x \right) = \left\{ {\begin{array}{*{20}l} {{{\left( {a_{2} - x + I_{{\tilde{a}}} \left( {x - a_{1} } \right)} \right)} \mathord{\left/ {\vphantom {{\left( {a_{2} - x + I_{{\tilde{a}}} \left( {x - a_{1} } \right)} \right)} {\left( {a_{2} - a_{1} } \right)}}} \right. \kern-0pt} {\left( {a_{2} - a_{1} } \right)}}} \hfill & {a_{1} < x < a_{2} ,} \hfill \\ {I_{{\tilde{a}}} } \hfill & {a_{2} < x < a_{3} ,} \hfill \\ {{{\left( {x - a_{3} + I_{{\tilde{a}}} \left( {a_{4} - x} \right)} \right)} \mathord{\left/ {\vphantom {{\left( {x - a_{3} + I_{{\tilde{a}}} \left( {a_{4} - x} \right)} \right)} {\left( {a_{4} - a_{3} } \right)}}} \right. \kern-0pt} {\left( {a_{4} - a_{3} } \right)}}} \hfill & {a_{3} < x < a_{4} ,} \hfill \\ 1 \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right.$$
$$\lambda_{{\tilde{a}}} \left( x \right) = \left\{ {\begin{array}{*{20}l} {{{\left( {a_{2} - x + F_{{\tilde{a}}} \left( {x - a_{1} } \right)} \right)} \mathord{\left/ {\vphantom {{\left( {a_{2} - x + F_{{\tilde{a}}} \left( {x - a_{1} } \right)} \right)} {\left( {a_{2} - a_{1} } \right)}}} \right. \kern-0pt} {\left( {a_{2} - a_{1} } \right)}}} \hfill & {a_{1} < x < a_{2} ,} \hfill \\ {F_{{\tilde{a}}} } \hfill & {a_{2} < x < a_{3} ,} \hfill \\ {{{\left( {x - a_{3} + F_{{\tilde{a}}} \left( {a_{4} - x} \right)} \right)} \mathord{\left/ {\vphantom {{\left( {x - a_{3} + F_{{\tilde{a}}} \left( {a_{4} - x} \right)} \right)} {\left( {a_{4} - a_{3} } \right)}}} \right. \kern-0pt} {\left( {a_{4} - a_{3} } \right)}}} \hfill & {a_{3} < x < a_{4} ,} \hfill \\ 1 \hfill & {{\text{otherwise}}.} \hfill \\ \end{array} } \right.$$

When \(a_{1} > 0\), \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) is called a positive SVTNN, denoted by \(\tilde{a} > 0\). Similarly, when \(a_{4} \le 0\), \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) is a negative SVTNN, denoted by \(\tilde{a} < 0\). When \(0 \le a_{1} \le a_{2} \le a_{3} \le a_{4} \le 1\) and \(T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} \in \left[ {0,1} \right]\), \(\tilde{a}\) is called a normalized SVTNN.

When \(I_{{\tilde{a}}} = 1 - T_{{\tilde{a}}} - F_{{\tilde{a}}}\), the SVTNN is reduced to a TIFN. When a 2 = a 3, then \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) becomes a single-valued triangular neutrosophic number (SVTrNN). If \(I_{{\tilde{a}}} = 0\) and \(F_{{\tilde{a}}} = 0\), then the SVTNN is reduced to a generalized TFN, \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],T_{{\tilde{a}}} } \right\rangle\).
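The three piecewise functions of Definition 3 can likewise be sketched in code. The snippet below is illustrative only; as an assumption of this sketch, the plateau values \((T_{{\tilde{a}}}, I_{{\tilde{a}}}, F_{{\tilde{a}}})\) are assigned on the closed interval \([a_{2}, a_{3}]\), since the strict inequalities above leave the breakpoints to the otherwise branch:

```python
def svtnn_memberships(x, a, T, I, F):
    """(truth, indeterminacy, falsity) degrees of x for the SVTNN <[a1,a2,a3,a4],(T,I,F)>.
    Assumption: the closed middle interval [a2, a3] is used at the breakpoints."""
    a1, a2, a3, a4 = a
    if a1 < x < a2:                       # left leg
        t = (x - a1) * T / (a2 - a1)
        i = (a2 - x + I * (x - a1)) / (a2 - a1)
        f = (a2 - x + F * (x - a1)) / (a2 - a1)
    elif a2 <= x <= a3:                   # plateau
        t, i, f = T, I, F
    elif a3 < x < a4:                     # right leg
        t = (a4 - x) * T / (a4 - a3)
        i = (x - a3 + I * (a4 - x)) / (a4 - a3)
        f = (x - a3 + F * (a4 - x)) / (a4 - a3)
    else:                                 # outside the support
        t, i, f = 0.0, 1.0, 1.0
    return t, i, f
```

For example, for \(\tilde{a} = \left\langle {\left[ {0.1,0.2,0.4,0.5} \right],\left( {0.6,0.3,0.2} \right)} \right\rangle\), any x on the plateau returns (0.6, 0.3, 0.2), while the legs interpolate between these degrees and (0, 1, 1).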

Definition 4

[38] Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {b_{1} ,b_{2} ,b_{3} ,b_{4} } \right],\left( {T_{{\tilde{b}}} ,I_{{\tilde{b}}} ,F_{{\tilde{b}}} } \right)} \right\rangle\) be two SVTNNs, and \(\zeta \ge 0\). Their operations are defined as follows:

  1.

    \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {a_{1} + b_{1} ,a_{2} + b_{2} ,a_{3} + b_{3} ,a_{4} + b_{4} } \right],\left( {T_{{\tilde{a}}} + T_{{\tilde{b}}} - T_{{\tilde{a}}} T_{{\tilde{b}}} ,I_{{\tilde{a}}} I_{{\tilde{b}}} ,F_{{\tilde{a}}} F_{{\tilde{b}}} } \right)} \right\rangle ;\)

  2.

    \(\tilde{a} \otimes \tilde{b} = \left\langle {\left[ {a_{1} b_{1} ,a_{2} b_{2} ,a_{3} b_{3} ,a_{4} b_{4} } \right],\left( {T_{{\tilde{a}}} T_{{\tilde{b}}} ,T_{{\tilde{a}}} + T_{{\tilde{b}}} - T_{{\tilde{a}}} T_{{\tilde{b}}} ,F_{{\tilde{a}}} + F_{{\tilde{b}}} - F_{{\tilde{a}}} F_{{\tilde{b}}} } \right)} \right\rangle ;\)

  3.

    \(\zeta \tilde{a} = \left\langle {\left[ {\zeta a_{1} ,\zeta a_{2} ,\zeta a_{3} ,\zeta a_{4} } \right],\left( {1 - \left( {1 - T_{{\tilde{a}}} } \right)^{\zeta } ,\left( {I_{{\tilde{a}}} } \right)^{\zeta } ,\left( {F_{{\tilde{a}}} } \right)^{\zeta } } \right)} \right\rangle ;\)

  4.

    \(\tilde{a}^{\zeta } = \left\langle {\left[ {a_{1}^{\zeta } ,a_{2}^{\zeta } ,a_{3}^{\zeta } ,a_{4}^{\zeta } } \right],\left( {\left( {T_{{\tilde{a}}} } \right)^{\zeta } ,1 - \left( {1 - I_{{\tilde{a}}} } \right)^{\zeta } ,1 - \left( {1 - F_{{\tilde{a}}} } \right)^{\zeta } } \right)} \right\rangle .\)

The following example illustrates some drawbacks in the operations described in Definition 4.

Example 1

Let \(\tilde{a} = \left\langle {\left[ {0.1,0.1,0.2,0.3} \right],\left( {0,0,1} \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {0.1,0.1,0.15,0.4} \right],\left( {1,0,0} \right)} \right\rangle\) be two SVTNNs. According to Definition 4, the following result can be calculated: \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {0.2,0.2,0.35,0.7} \right],\left( {1,0,0} \right)} \right\rangle\). This result, however, is inaccurate: the falsity-membership of \(\tilde{a}\) is ignored entirely, and the correlations between the TFN parts and the membership degrees of \(\tilde{a}\) and \(\tilde{b}\) are not considered. Therefore, these operations can be impractical.

Example 2

Let \(\tilde{a}_{1} = \left\langle {[0.03,0.05,0.07,0.09],(0.3,0.5,0.5)} \right\rangle\) be a SVTNN and \(\zeta = 10\). Then the result \(\zeta \tilde{a}_{1}\) obtained using Definition 4 is

$$10\tilde{a}_{1} = \left\langle {\left[ {0.3,0.5,0.7,0.9} \right],\left( {0.9718,0.001,0.001} \right)} \right\rangle .$$

In this result, the three membership degrees of the SVTNN are operated on repeatedly as \(\zeta\) grows, which significantly distorts the result and conflicts with intuition.
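As a check, the scalar multiplication of Definition 4 can be reproduced with a few lines. This is an illustrative sketch (not code from the source):

```python
def scalar_mult_ye(zeta, a, T, I, F):
    """zeta * a for an SVTNN per Definition 4 (Ye's operations)."""
    tfn = [zeta * ai for ai in a]
    return tfn, (1 - (1 - T) ** zeta, I ** zeta, F ** zeta)

# Example 2's data: the degrees are exponentiated ten times over.
tfn, (T, I, F) = scalar_mult_ye(10, [0.03, 0.05, 0.07, 0.09], 0.3, 0.5, 0.5)
```

The truth-membership jumps from 0.3 to roughly 0.972 while the other two degrees collapse toward 0.001, which is exactly the distortion criticized above.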

Therefore, some new operations for SVTNNs must be defined in order to overcome these operational anomalies. The new operations will be discussed in Sect. 3.

In order to compare two different SVTNNs, several comparison methods have previously been proposed.

Definition 5

[53] Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) be a SVTNN. The score function and accuracy function of \(\tilde{a}\) are defined, respectively, as

$$S\left( {\tilde{a}} \right) = \frac{1}{16}\left[ {a_{1} + a_{2} + a_{3} + a_{4} } \right] \times \left( {2 + T_{{\tilde{a}}} - I_{{\tilde{a}}} - F_{{\tilde{a}}} } \right),$$
(1)
$$H\left( {\tilde{a}} \right) = \frac{1}{16}\left[ {a_{1} + a_{2} + a_{3} + a_{4} } \right] \times \left( {2 + T_{{\tilde{a}}} - I_{{\tilde{a}}} + F_{{\tilde{a}}} } \right).$$
(2)

Let ≻ and ~ be two binary relations on SVTNNs, denoted by \(\tilde{a} \succ \tilde{b}\) if \(\tilde{a}\) is preferred to \(\tilde{b}\), and \(\tilde{a}\sim\tilde{b}\) if \(\tilde{a}\) is indifferent to \(\tilde{b}\).

Definition 6

[53] Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {b_{1} ,b_{2} ,b_{3} ,b_{4} } \right],\left( {T_{{\tilde{b}}} ,I_{{\tilde{b}}} ,F_{{\tilde{b}}} } \right)} \right\rangle\) be two SVTNNs. Then,

  1.

    If \(S\left( {\tilde{a}} \right) < S\left( {\tilde{b}} \right)\), then \(\tilde{a} \prec \tilde{b}\);

  2.

    If \(S\left( {\tilde{a}} \right) = S\left( {\tilde{b}} \right)\) and \(H\left( {\tilde{a}} \right) < H\left( {\tilde{b}} \right)\), then \(\tilde{a} \prec \tilde{b}\);

  3.

    If \(S\left( {\tilde{a}} \right) = S\left( {\tilde{b}} \right)\) and \(H\left( {\tilde{a}} \right) = H\left( {\tilde{b}} \right)\), then \(\tilde{a}\sim\tilde{b}\).

However, there are some limitations to Definition 5, which will be illustrated in Example 3.

Example 3

Let \(\tilde{a} = \left\langle {\left[ {0.2,0.3,0.5,0.8} \right],\left( {0.1,0.8,0} \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {0.1,0.4,0.5,0.8} \right],\left( {0.2,0.9,0} \right)} \right\rangle\) be two SVTNNs. It is clear that \(\tilde{a} \ne \tilde{b}\). The following results can be obtained according to Definition 5: \(S\left( {\tilde{a}} \right) = S\left( {\tilde{b}} \right) = 0.146\), \(H\left( {\tilde{a}} \right) = H\left( {\tilde{b}} \right) = 0.146\), and \(\tilde{a}\sim\tilde{b}\). However, these results are counterintuitive.
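The tie can be verified numerically. The following sketch (illustrative, not from the source) implements Eqs. (1) and (2) and evaluates them on Example 3's data:

```python
def score(a, T, I, F):
    """Score function of Definition 5, Eq. (1)."""
    return sum(a) / 16 * (2 + T - I - F)

def accuracy(a, T, I, F):
    """Accuracy function of Definition 5, Eq. (2)."""
    return sum(a) / 16 * (2 + T - I + F)

a_tfn, a_deg = [0.2, 0.3, 0.5, 0.8], (0.1, 0.8, 0.0)
b_tfn, b_deg = [0.1, 0.4, 0.5, 0.8], (0.2, 0.9, 0.0)
# Both functions evaluate to ~0.146 for both numbers,
# so Definition 6 cannot separate these two distinct SVTNNs.
```

The failure arises because the two TFN parts have the same coordinate sum and the two degree triples have the same value of \(2 + T - I - F\), which is all the two functions can see.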

Definition 7

[38] Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) be a SVTNN. The score function of \(\tilde{a}\) is defined as follows:

$$S^{\prime } \left( {\tilde{a}} \right) = \frac{1}{12}\left[ {a_{1} + a_{2} + a_{3} + a_{4} } \right] \times \left( {2 + T_{{\tilde{a}}} - I_{{\tilde{a}}} - F_{{\tilde{a}}} } \right).$$
(3)

Definition 8

[38] Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {b_{1} ,b_{2} ,b_{3} ,b_{4} } \right],\left( {T_{{\tilde{b}}} ,I_{{\tilde{b}}} ,F_{{\tilde{b}}} } \right)} \right\rangle\) be two SVTNNs. Then,

  1.

    If \(S^{\prime } \left( {\tilde{a}} \right) > S^{\prime } \left( {\tilde{b}} \right)\), then \(\tilde{a} \succ \tilde{b}\);

  2.

    If \(S^{\prime } \left( {\tilde{a}} \right) = S^{\prime } \left( {\tilde{b}} \right)\), then \(\tilde{a}\sim\tilde{b}\).

However, some drawbacks also exist in Definition 7, and they will be discussed in the following example.

Example 4

Let \(\tilde{a} = \left\langle {\left[ {0.3,0.4,0.5,0.8} \right],\left( {0.5,0.3,0.7} \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {0.5,0.7,0.8,1} \right],\left( {0.2,0.8,0.4} \right)} \right\rangle\) be two SVTNNs. It is clear that \(\tilde{a} \ne \tilde{b}\). However, according to Definitions 7 and 8, \(S^{\prime } \left( {\tilde{a}} \right) = S^{\prime } \left( {\tilde{b}} \right) = 0.25\), and \(\tilde{a}\sim\tilde{b}\); these results do not conform to our intuition.
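A similar numerical check applies here; the sketch below (illustrative only) implements Eq. (3) and evaluates it on Example 4's data:

```python
def score_ye(a, T, I, F):
    """Score function S' of Definition 7, Eq. (3)."""
    return sum(a) / 12 * (2 + T - I - F)

s_a = score_ye([0.3, 0.4, 0.5, 0.8], 0.5, 0.3, 0.7)
s_b = score_ye([0.5, 0.7, 0.8, 1.0], 0.2, 0.8, 0.4)
# Both evaluate to 0.25, so Definition 8 declares the two
# clearly different SVTNNs indifferent.
```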

3 New operations and comparison method for SVTNNs

In order to overcome the limitations discussed in Sect. 2, this section defines several new operations. Moreover, a new comparison method is proposed on the basis of score, accuracy and certainty functions.

3.1 New operations for SVTNNs

Definition 9

Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {b_{1} ,b_{2} ,b_{3} ,b_{4} } \right],\left( {T_{{\tilde{b}}} ,I_{{\tilde{b}}} ,F_{{\tilde{b}}} } \right)} \right\rangle\) be two SVTNNs, and \(\zeta \ge 0\); then, the new operations for SVTNNs are defined as follows:

  1.

    \(\tilde{a} \oplus \tilde{b}\)

    (i)

      If \(a_{3} + a_{4} \ne a_{1} + a_{2}\) and \(b_{3} + b_{4} \ne b_{1} + b_{2}\), \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {a_{1} + b_{1} ,a_{2} + b_{2} ,a_{3} + b_{3} ,a_{4} + b_{4} } \right],\left( {\frac{{\varphi \left( {\tilde{a}} \right)T_{{\tilde{a}}} + \varphi \left( {\tilde{b}} \right)T_{{\tilde{b}}} }}{{\varphi \left( {\tilde{a}} \right) + \varphi \left( {\tilde{b}} \right)}},\frac{{\varphi \left( {\tilde{a}} \right)\left( {1 - I_{{\tilde{a}}} } \right) + \varphi \left( {\tilde{b}} \right)\left( {1 - I_{{\tilde{b}}} } \right)}}{{\varphi \left( {\tilde{a}} \right) + \varphi \left( {\tilde{b}} \right)}},\frac{{\varphi \left( {\tilde{a}} \right)\left( {1 - F_{{\tilde{a}}} } \right) + \varphi \left( {\tilde{b}} \right)\left( {1 - F_{{\tilde{b}}} } \right)}}{{\varphi \left( {\tilde{a}} \right) + \varphi \left( {\tilde{b}} \right)}}} \right)} \right\rangle\), where \(\varphi \left( {\tilde{a}} \right) = \frac{{a_{3} - a_{2} + a_{4} - a_{1} }}{2}\) and \(\varphi \left( {\tilde{b}} \right) = \frac{{b_{3} - b_{2} + b_{4} - b_{1} }}{2}\);

    (ii)

      If a 1 = a 2 = a 3 = a 4 = a and b 3 + b 4 ≠ b 1 + b 2, \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {a + b_{1} ,a + b_{2} ,a + b_{3} ,a + b_{4} } \right],\left( {\frac{{aT_{{\tilde{a}}} + \varphi \left( {\tilde{b}} \right)T_{{\tilde{b}}} }}{{a + \varphi \left( {\tilde{b}} \right)}},\frac{{a\left( {1 - I_{{\tilde{a}}} } \right) + \varphi \left( {\tilde{b}} \right)\left( {1 - I_{{\tilde{b}}} } \right)}}{{a + \varphi \left( {\tilde{b}} \right)}},\frac{{a\left( {1 - F_{{\tilde{a}}} } \right) + \varphi \left( {\tilde{b}} \right)\left( {1 - F_{{\tilde{b}}} } \right)}}{{a + \varphi \left( {\tilde{b}} \right)}}} \right)} \right\rangle ;\)

    (iii)

      If a 3 + a 4 ≠ a 1 + a 2 and b 1 = b 2 = b 3 = b 4 = b, \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {a_{1} + b,a_{2} + b,a_{3} + b,a_{4} + b} \right],\left( {\frac{{\varphi \left( {\tilde{a}} \right)T_{{\tilde{a}}} + bT_{{\tilde{b}}} }}{{\varphi \left( {\tilde{a}} \right) + b}},\frac{{\varphi \left( {\tilde{a}} \right)\left( {1 - I_{{\tilde{a}}} } \right) + b\left( {1 - I_{{\tilde{b}}} } \right)}}{{\varphi \left( {\tilde{a}} \right) + b}},\frac{{\varphi \left( {\tilde{a}} \right)\left( {1 - F_{{\tilde{a}}} } \right) + b\left( {1 - F_{{\tilde{b}}} } \right)}}{{\varphi \left( {\tilde{a}} \right) + b}}} \right)} \right\rangle ;\)

    (iv)

      If a 1 = a 2 = a 3 = a 4 = a and b 1 = b 2 = b 3 = b 4 = b, \(\tilde{a} \oplus \tilde{b} = \left\langle {a + b,\left( {\frac{{aT_{{\tilde{a}}} + bT_{{\tilde{b}}} }}{a + b},\frac{{a\left( {1 - I_{{\tilde{a}}} } \right) + b\left( {1 - I_{{\tilde{b}}} } \right)}}{a + b},\frac{{a\left( {1 - F_{{\tilde{a}}} } \right) + b\left( {1 - F_{{\tilde{b}}} } \right)}}{a + b}} \right)} \right\rangle ;\)

  2.

    \(\tilde{a} \otimes \tilde{b} = \left\langle {\left[ {a_{1} b_{1} ,a_{2} b_{2} ,a_{3} b_{3} ,a_{4} b_{4} } \right],\left( {T_{{\tilde{a}}} T_{{\tilde{b}}} ,I_{{\tilde{a}}} + I_{{\tilde{b}}} - I_{{\tilde{a}}} I_{{\tilde{b}}} ,F_{{\tilde{a}}} + F_{{\tilde{b}}} - F_{{\tilde{a}}} F_{{\tilde{b}}} } \right)} \right\rangle ;\)

  3.

    \(\zeta \tilde{a} = \left\langle {\left[ {\zeta a_{1} ,\zeta a_{2} ,\zeta a_{3} ,\zeta a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle ,\zeta \ge 0;\)

  4.

    \(\tilde{a}^{\zeta } = \left\langle {\left[ {a_{1}^{\zeta } ,a_{2}^{\zeta } ,a_{3}^{\zeta } ,a_{4}^{\zeta } } \right],\left( {\left( {T_{{\tilde{a}}} } \right)^{\zeta } ,1 - \left( {1 - I_{{\tilde{a}}} } \right)^{\zeta } ,1 - \left( {1 - F_{{\tilde{a}}} } \right)^{\zeta } } \right)} \right\rangle ,\zeta \ge 0;\)

  5.

    \({\text{neg}}\left( {\tilde{a}} \right) = \left\langle {\left[ {1 - a_{4} ,1 - a_{3} ,1 - a_{2} ,1 - a_{1} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle .\)
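To make case (i) of the new addition concrete, the following Python sketch implements it and the new scalar multiplication exactly as written above. It is an illustration under the stated case conditions, not code from the source, and the function names are our own:

```python
def phi(a):
    """Weight phi(a) = (a3 - a2 + a4 - a1) / 2 from Definition 9."""
    a1, a2, a3, a4 = a
    return (a3 - a2 + a4 - a1) / 2

def add_svtnn(a, da, b, db):
    """a (+) b per case (i) of Definition 9 (both TFN parts non-degenerate)."""
    wa, wb = phi(a), phi(b)
    avg = lambda u, v: (wa * u + wb * v) / (wa + wb)   # phi-weighted average
    tfn = [x + y for x, y in zip(a, b)]
    return tfn, (avg(da[0], db[0]),
                 avg(1 - da[1], 1 - db[1]),
                 avg(1 - da[2], 1 - db[2]))

def scalar_mult(zeta, a, deg):
    """zeta * a per Definition 9: only the TFN part is scaled."""
    return [zeta * x for x in a], deg

# Illustration with the SVTNNs of Example 3 (our own choice of data):
a, da = [0.2, 0.3, 0.5, 0.8], (0.1, 0.8, 0.0)
b, db = [0.1, 0.4, 0.5, 0.8], (0.2, 0.9, 0.0)
tfn1, deg1 = add_svtnn(a, da, b, db)
```

Unlike Definition 4, scalar multiplication leaves the three membership degrees untouched, so repeated scaling no longer drives them toward 0 or 1, and the addition is commutative because the phi-weighted average is symmetric in its operands.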

Example 5

Using Definition 9 and the data in Example 1, let \(\zeta = 2\). The calculated results are as follows:

  1.

    \(\tilde{a} \oplus \tilde{b} = \left\langle {\left[ {0.2,0.2,0.35,0.7} \right],\left( {0.538,0,0.538} \right)} \right\rangle ;\)

  2.

    \(\tilde{a} \otimes \tilde{b} = \left\langle {\left[ {0.01,0.01,0.03,0.12} \right],\left( {0,0,0} \right)} \right\rangle ;\)

  3.

    \(2\tilde{a} = \left\langle {\left[ {0.2,0.2,0.4,0.6} \right],\left( {0,0,1} \right)} \right\rangle ;\)

  4.

    \(\tilde{a}^{2} = \left\langle {\left[ {0.01,0.01,0.04,0.09} \right],\left( {0,0,1} \right)} \right\rangle .\)

Compared with the operations proposed by Ye [38] and Deli and Şubaş [53], the newly proposed SVTNN operations have the following advantages: (1) they capture the correlations between the TFN parts and the three membership degrees of SVTNNs, and (2) they effectively avoid repeated operations, minimizing information loss and distortion.

Using the corresponding operations for SVTNNs, the following theorem can be easily proved.

Theorem 1

Let \(\tilde{a}\), \(\tilde{b}\), and \(\tilde{c}\) be three SVTNNs, and \(\zeta \ge 0\) ; then, the following equations are true:

  1.

    \(\tilde{a} \oplus \tilde{b} = \tilde{b} \oplus \tilde{a};\)

  2.

    \(\left( {\tilde{a} \oplus \tilde{b}} \right) \oplus \tilde{c} = \tilde{a} \oplus \left( {\tilde{b} \oplus \tilde{c}} \right);\)

  3.

    \(\tilde{a} \otimes \tilde{b} = \tilde{b} \otimes \tilde{a};\)

  4.

    \(\left( {\tilde{a} \otimes \tilde{b}} \right) \otimes \tilde{c} = \tilde{a} \otimes \left( {\tilde{b} \otimes \tilde{c}} \right);\)

  5.

    \(\zeta \tilde{a} \oplus \zeta \tilde{b} = \zeta \left( {\tilde{a} \oplus \tilde{b}} \right);\)

  6.

    \(\left( {\tilde{a} \otimes \tilde{b}} \right)^{\zeta } = \tilde{a}^{\zeta } \otimes \tilde{b}^{\zeta } .\)

It is easy to prove Theorem 1 according to Definition 9, so the proof is omitted here.

3.2 New comparison methods for SVTNNs

Motivated by the comparison method proposed by Broumi and Smarandache [40], which is based on the expected, accuracy and certainty functions of SVNTLNs, this subsection defines a new comparison method and shows that it is reasonable and practical.

Definition 10

Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) be a SVTNN; then, the score function, accuracy function and certainty function of SVTNN \(\tilde{a}\) are defined, respectively, as follows:

$$E\left( {\tilde{a}} \right) = \frac{{a_{1} + 2a_{2} + 2a_{3} + a_{4} }}{6} \times \left( {\frac{{2 + T_{{\tilde{a}}} - I_{{\tilde{a}}} - F_{{\tilde{a}}} }}{3}} \right),$$
(4)
$$A\left( {\tilde{a}} \right) = \frac{{a_{1} + 2a_{2} + 2a_{3} + a_{4} }}{6} \times \left( {T_{{\tilde{a}}} - F_{{\tilde{a}}} } \right),$$
(5)
$$C\left( {\tilde{a}} \right) = \frac{{a_{1} + 2a_{2} + 2a_{3} + a_{4} }}{6}T_{{\tilde{a}}} .$$
(6)

Assume that \(\tilde{a}\) and \(\tilde{b}\) are two SVTNNs; then, they can be compared using the following rules.

Definition 11

Let \(\tilde{a} = \left\langle {\left[ {a_{1} ,a_{2} ,a_{3} ,a_{4} } \right],\left( {T_{{\tilde{a}}} ,I_{{\tilde{a}}} ,F_{{\tilde{a}}} } \right)} \right\rangle\) and \(\tilde{b} = \left\langle {\left[ {b_{1} ,b_{2} ,b_{3} ,b_{4} } \right],\left( {T_{{\tilde{b}}} ,I_{{\tilde{b}}} ,F_{{\tilde{b}}} } \right)} \right\rangle\) be two SVTNNs. The comparison method for \(\tilde{a}\) and \(\tilde{b}\) can be defined as follows:

  1.

    If \(E\left( {\tilde{a}} \right) > E\left( {\tilde{b}} \right)\), then \(\tilde{a} \succ \tilde{b}\), meaning that \(\tilde{a}\) is superior to \(\tilde{b}\).

  2.

    If \(E\left( {\tilde{a}} \right) = E\left( {\tilde{b}} \right)\) and \(A\left( {\tilde{a}} \right) > A\left( {\tilde{b}} \right)\), then \(\tilde{a} \succ \tilde{b}\), meaning that \(\tilde{a}\) is superior to \(\tilde{b}\); if \(E\left( {\tilde{a}} \right) = E\left( {\tilde{b}} \right)\) and \(A\left( {\tilde{a}} \right) < A\left( {\tilde{b}} \right)\), then \(\tilde{a} \prec \tilde{b}\), meaning that \(\tilde{a}\) is inferior to \(\tilde{b}\).

  3.

    If \(E\left( {\tilde{a}} \right) = E\left( {\tilde{b}} \right)\) and \(A\left( {\tilde{a}} \right) = A\left( {\tilde{b}} \right)\), then \(\tilde{a} \succ \tilde{b}\) if \(C\left( {\tilde{a}} \right) > C\left( {\tilde{b}} \right)\), meaning that \(\tilde{a}\) is superior to \(\tilde{b}\); however, \(\tilde{a} \prec \tilde{b}\) if \(C\left( {\tilde{a}} \right) < C\left( {\tilde{b}} \right)\), meaning that \(\tilde{a}\) is inferior to \(\tilde{b}\); and \(\tilde{a}\sim\tilde{b}\) if \(C\left( {\tilde{a}} \right) = C\left( {\tilde{b}} \right)\), meaning that \(\tilde{a}\) is indifferent to \(\tilde{b}\).

Example 6

Utilizing the data in Example 3, we can determine that \(E\left( {\tilde{a}} \right) = 0.188\) and \(E\left( {\tilde{b}} \right) = 0.195\). Then \(\tilde{b} \succ \tilde{a}\); in other words, \(\tilde{b}\) is superior to \(\tilde{a}\), which is consistent with our intuition.
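The computation behind this example can be reproduced with a short sketch of Eqs. (4)–(6) (illustrative, not code from the source):

```python
def E(a, T, I, F):
    """Score function, Eq. (4)."""
    a1, a2, a3, a4 = a
    return (a1 + 2 * a2 + 2 * a3 + a4) / 6 * (2 + T - I - F) / 3

def A(a, T, I, F):
    """Accuracy function, Eq. (5)."""
    a1, a2, a3, a4 = a
    return (a1 + 2 * a2 + 2 * a3 + a4) / 6 * (T - F)

def C(a, T, I, F):
    """Certainty function, Eq. (6)."""
    a1, a2, a3, a4 = a
    return (a1 + 2 * a2 + 2 * a3 + a4) / 6 * T

e_a = E([0.2, 0.3, 0.5, 0.8], 0.1, 0.8, 0.0)   # ~0.188
e_b = E([0.1, 0.4, 0.5, 0.8], 0.2, 0.9, 0.0)   # ~0.195
```

Because Eq. (4) weights the inner points \(a_{2}\) and \(a_{3}\) twice, the two SVTNNs that tied under Definitions 5 and 7 now receive different scores and can be ranked.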

4 Single-valued trapezoidal neutrosophic aggregation operators

This section reviews the traditional BM operator and the normalized weighted Bonferroni mean (NWBM) operator, along with some of their prominent characteristics. Then, the SVTNNWBM operator is proposed for the SVTNN environment.

4.1 BM and NWBM operators

The BM operator [44] is a traditional aggregation operator that can capture the interrelationships of the individual input arguments.

Definition 12

[54] Let \(p,q \ge 0\), and let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) be a collection of non-negative numbers. Then the aggregation function

$${\text{BM}}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \left( {\frac{1}{{n\left( {n - 1} \right)}}{\mathop{\mathop{\sum}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} {a_{i}^{p} a_{j}^{q} } } \right)^{{\frac{1}{p + q}}}$$
(7)

is called the BM operator.

The BM operator has the following obvious properties:

  1.

    \({\text{BM}}^{p,q} \left( {0,0, \ldots ,0} \right) = 0\).

  2.

    (Commutativity). Let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) and \(a_{i}^{{\prime }} \left( {i = 1,2, \ldots ,n} \right)\) be two sets of non-negative numbers. If \(\left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right)\) is any permutation of \(\left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\), then \({\text{BM}}^{p,q} \left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right) = {\text{BM}}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\).

  3.

    (Idempotency). Let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) be a set of non-negative numbers. If \(a_{i} = a\) for all i, then \({\text{BM}}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = a\).

  4.

    (Monotonicity). Let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) and \(a_{i}^{\prime } \left( {i = 1,2, \ldots ,n} \right)\) be two sets of non-negative numbers. If \(a_{i} \ge a_{i}^{\prime }\) for all i, then \({\text{BM}}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) \ge {\text{BM}}^{p,q} \left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right)\).

  5.

    (Boundedness). Let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) be a set of non-negative numbers, with \(a^{ - } = \hbox{min} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\) and \(a^{ + } = \hbox{max} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\); then \(a^{ - } \le {\text{BM}}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) \le a^{ + }\).

Some special cases of the BM operator with respect to the parameters p and q are discussed as follows:

  1.

    If p = 1 and q = 1, then the BM operator is reduced to the following:

    $${\text{BM}}^{1,1} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \left( {\frac{1}{{n\left( {n - 1} \right)}}{\mathop{\mathop{\sum}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} {a_{i} a_{j} } } \right)^{{\frac{1}{2}}} .$$
    (8)
  2.

    If q = 0, then the BM operator is reduced to the generalized mean operator,

    $${\text{BM}}^{p,0} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {a_{i}^{p} } } \right)^{1/p} .$$
    (9)
  3.

    If p = 1 and q = 0, then the BM operator is reduced to the arithmetic mean operator,

    $${\text{BM}}^{1,0} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \frac{1}{n}\sum\limits_{i = 1}^{n} {a_{i} } .$$
    (10)
  4.

    If p → 0 and q = 0, then the BM operator is reduced to the geometric mean operator,

    $$\mathop {\lim }\limits_{p \to 0} {\text{BM}}^{p,0} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \left( {\prod\limits_{i = 1}^{n} {a_{i} } } \right)^{1/n} .$$
    (11)
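These special cases are easy to verify numerically. Below is a minimal Python sketch of Eq. (7); the function name `bm` is ours, and the checks mirror the reductions in Eqs. (10) and (11).

```python
def bm(args, p, q):
    # Bonferroni mean of Eq. (7): average a_i^p * a_j^q over all ordered
    # pairs with i != j, then take the 1/(p+q)-th root.
    n = len(args)
    s = sum(args[i] ** p * args[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1 / (p + q))

data = [1.0, 2.0, 3.0]
print(bm(data, 1, 0))     # arithmetic mean, Eq. (10): 2.0
print(bm(data, 1e-8, 0))  # approaches the geometric mean (1*2*3)**(1/3), Eq. (11)
```

The limit in Eq. (11) is approximated here by a very small p rather than taken symbolically; boundedness can be checked the same way for any p, q.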

Definition 13

[55] Let \(p,q \ge 0\), and let \(a_{i} \left( {i = 1,2, \ldots ,n} \right)\) be a collection of non-negative numbers with the weight vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)\) such that \(w_{i} \in \left[ {0,1} \right]\) and \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\). If

$${\text{NWBM}}_{w}^{p,q} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right) = \left( {\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left( {a_{i}^{p} \otimes a_{j}^{q} } \right) \right)^{{\frac{1}{p + q}}} ,$$
(12)

then \({\text{NWBM}}_{w}^{p,q}\) is called the NWBM operator.
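For crisp arguments, the ⊕ and ⊗ in Eq. (12) are plain addition and multiplication, so the NWBM operator can be sketched directly (the name `nwbm` is ours). A useful reducibility check: with equal weights w_i = 1/n, the pair weights w_i w_j/(1 − w_i) collapse to 1/(n(n − 1)) and the unweighted BM operator is recovered.

```python
def nwbm(args, w, p, q):
    # Normalized weighted Bonferroni mean of Eq. (12) for crisp numbers:
    # each ordered pair (i, j), i != j, carries weight w_i * w_j / (1 - w_i).
    n = len(args)
    s = sum(w[i] * w[j] / (1 - w[i]) * args[i] ** p * args[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return s ** (1 / (p + q))

# Equal weights reduce NWBM^{2,1} to BM^{2,1}([1,2,3]) = (48/6)^(1/3) = 2.
print(nwbm([1.0, 2.0, 3.0], [1 / 3] * 3, 2, 1))
```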

4.2 Single-valued trapezoidal neutrosophic normalized weighted BM operator

This subsection extends the traditional BM and NWBM operators to accommodate situations in which the input arguments are SVTNNs. Furthermore, a SVTNNBM operator and a SVTNNWBM operator are developed, and some of their desirable properties are analyzed.

Definition 14

Let p, q ≥ 0, and let \(\tilde{a}_{i} = \left\langle {\left[ {a_{i1} ,a_{i2} ,a_{i3} ,a_{i4} } \right],\left( {T_{{\tilde{a}_{i} }} ,I_{{\tilde{a}_{i} }} ,F_{{\tilde{a}_{i} }} } \right)} \right\rangle\) (i = 1, 2,…, n) be a set of SVTNNs. If

$${\text{SVTNNBM}}^{p,q} \left( {\tilde{a}_{1} ,\tilde{a}_{2} , \ldots ,\tilde{a}_{n} } \right) = \left( {\frac{1}{{n\left( {n - 1} \right)}}{\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \left( {\tilde{a}_{i}^{p} \otimes \tilde{a}_{j}^{q} } \right)} \right)^{{\frac{1}{p + q}}} ,$$
(13)

then \({\text{SVTNNBM}}^{p,q}\) is called the SVTNNBM operator.

The following definition formally introduces the SVTNNWBM operator.

Definition 15

Let p, q ≥ 0, and let \(\tilde{a}_{i} = \left\langle {\left[ {a_{i1} ,a_{i2} ,a_{i3} ,a_{i4} } \right],\left( {T_{{\tilde{a}_{i} }} ,I_{{\tilde{a}_{i} }} ,F_{{\tilde{a}_{i} }} } \right)} \right\rangle\) (i = 1, 2,…, n) be a set of SVTNNs. If

$${\text{SVTNNWBM}}_{w}^{p,q} \left( {\tilde{a}_{1} ,\tilde{a}_{2} , \ldots ,\tilde{a}_{n} } \right) = \left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left( {\tilde{a}_{i}^{p} \otimes \tilde{a}_{j}^{q} } \right) \right)^{{\frac{1}{p + q}}},$$
(14)

where w = (w 1, w 2,…, w n ) is the weight vector of \(\tilde{a}_{i}\), w i  ∈ [0, 1], and \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\), then \({\text{SVTNNWBM}}_{w}^{p,q}\) is called the SVTNNWBM operator.

Theorem 2

Let p, q ≥ 0, and let \(\tilde{a}_{i} = \left\langle {\left[ {a_{i1} ,a_{i2} ,a_{i3} ,a_{i4} } \right],\left( {T_{{\tilde{a}_{i} }} ,I_{{\tilde{a}_{i} }} ,F_{{\tilde{a}_{i} }} } \right)} \right\rangle\) (i = 1, 2,…, n) be a set of SVTNNs. Then, the aggregated result using Eq. (14) is also a SVTNN, and

$$\begin{aligned} SVTNNWBM_{w}^{p,q} \left( {\tilde{a}_{1},\tilde{a}_{2} , \ldots ,\tilde{a}_{n} } \right) &= \,\left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left( {\tilde{a}_{i}^{p} \otimes\tilde{a}_{j}^{q} } \right) \right)^{{\frac{1}{p + q}}} \hfill \\ &= \, \left\langle {\left[{\left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}a_{i1}^{p} a_{j1}^{q} \right)^{{\frac{1}{p + q}}},\left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}a_{i2}^{p} a_{j2}^{q} \right)^{{\frac{1}{p + q}}} ,\left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}a_{i3}^{p} a_{j3}^{q} \right)^{{\frac{1}{p + q}}} ,\left({\mathop{\mathop{\oplus}\limits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{{w_{i} w_{j} }}{{1 - w_{i} }}a_{i4}^{p} a_{j4}^{q} \right)^{{\frac{1}{p + q}}} } \right],} \right. \hfill \\ &\qquad \left( {\left( {\frac{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]\left( {T_{{\tilde{a}_{i} }} } \right)^{p} \left( {T_{{\tilde{a}_{j} }} } \right)^{q} }{{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]}}} \right)^{{\frac{1}{p + q}}} ,} \right. 
\hfill \\&\qquad 1 - \left( {1 - \frac{{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]\left( {1 - I_{{\tilde{a}_{i} }} } \right)^{p} \left( {1 - I_{{\tilde{a}_{j} }} } \right)^{q} }}{{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]}}} \right)^{{\frac{1}{p + q}}} , \hfill \\ &\qquad\left. {\left. { 1 - \left( {1 - \frac{{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]\left( {1 - F_{{\tilde{a}_{i} }} } \right)^{p} \left( {1 - F_{{\tilde{a}_{j} }} } \right)^{q} }}{{{\mathop{\mathop{\oplus}\nolimits_{i,j = 1}}\limits_{ i \ne j}^{n}} \frac{1}{2}\frac{{w_{i} w_{j} }}{{1 - w_{i} }}\left[ {a_{i3}^{p} a_{j3}^{q} - a_{i2}^{p} a_{j2}^{q} + a_{i4}^{p} a_{j4}^{q} - a_{i1}^{p} a_{j1}^{q} } \right]}}} \right)^{{\frac{1}{p + q}}} } \right)} \right\rangle . \hfill \\\end{aligned}$$
(15)

“Appendix 1” details the proof of Theorem 2.
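For readers who wish to experiment, Eq. (15) can be transcribed directly into Python. The sketch below is our transcription, not the authors' code: the ⊕ over pairs becomes an ordinary sum for the crisp quantities involved, `svtnns` is a list of `([a1, a2, a3, a4], (T, I, F))` tuples, and `w` is the weight vector.

```python
def svtnnwbm(svtnns, w, p, q):
    n = len(svtnns)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    # Pair weight w_i * w_j / (1 - w_i), as in Eq. (14).
    wt = {(i, j): w[i] * w[j] / (1 - w[i]) for (i, j) in pairs}

    # Trapezoidal part: a weighted Bonferroni mean of each endpoint.
    trap = [sum(wt[i, j] * svtnns[i][0][k] ** p * svtnns[j][0][k] ** q
                for (i, j) in pairs) ** (1 / (p + q)) for k in range(4)]

    # The bracketed endpoint-spread factor shared by the T, I and F parts.
    def m(i, j):
        a, b = svtnns[i][0], svtnns[j][0]
        return 0.5 * wt[i, j] * (a[2] ** p * b[2] ** q - a[1] ** p * b[1] ** q
                                 + a[3] ** p * b[3] ** q - a[0] ** p * b[0] ** q)

    msum = sum(m(i, j) for (i, j) in pairs)
    T = (sum(m(i, j) * svtnns[i][1][0] ** p * svtnns[j][1][0] ** q
             for (i, j) in pairs) / msum) ** (1 / (p + q))
    I = 1 - (1 - sum(m(i, j) * (1 - svtnns[i][1][1]) ** p
                     * (1 - svtnns[j][1][1]) ** q
                     for (i, j) in pairs) / msum) ** (1 / (p + q))
    F = 1 - (1 - sum(m(i, j) * (1 - svtnns[i][1][2]) ** p
                     * (1 - svtnns[j][1][2]) ** q
                     for (i, j) in pairs) / msum) ** (1 / (p + q))
    return trap, (T, I, F)
```

Aggregating n identical SVTNNs returns the same trapezoid, reflecting the normalization \(\sum\nolimits_{i \ne j} {w_{i} w_{j} /\left( {1 - w_{i} } \right)} = 1\).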

The traditional NWBM operator has the properties of reducibility, commutativity, idempotency, monotonicity and boundedness. It is easy to see that the SVTNNWBM operator also satisfies these properties. Only the monotonicity property is stated and proved below; the others can be proved in a similar way and are omitted.

Theorem 3

(Monotonicity) Let \(\tilde{a}_{i} = \left\langle {\left[ {a_{i1} ,a_{i2} ,a_{i3} ,a_{i4} } \right],\left( {T_{{\tilde{a}_{i} }} ,I_{{\tilde{a}_{i} }} ,F_{{\tilde{a}_{i} }} } \right)} \right\rangle \;\left( {i = 1,2, \ldots ,n} \right)\) and \(\tilde{b}_{i} = \left\langle {\left[ {b_{i1} ,b_{i2} ,b_{i3} ,b_{i4} } \right],\left( {T_{{\tilde{b}_{i} }} ,I_{{\tilde{b}_{i} }} ,F_{{\tilde{b}_{i} }} } \right)} \right\rangle \;\left( {i = 1,2, \ldots ,n} \right)\) be two sets of SVTNNs. Suppose \(a_{i1} \ge b_{i1}\), \(a_{i2} \ge b_{i2}\), \(a_{i3} \ge b_{i3}\), \(a_{i4} \ge b_{i4}\), \(T_{{\tilde{a}_{i} }} \ge T_{{\tilde{b}_{i} }}\), \(I_{{\tilde{a}_{i} }} \le I_{{\tilde{b}_{i} }}\), and \(F_{{\tilde{a}_{i} }} \le F_{{\tilde{b}_{i} }}\) for all i; then, \({\text{SVTNNWBM}}_{w}^{p,q} \left( {\tilde{a}_{1} ,\tilde{a}_{2} , \ldots ,\tilde{a}_{n} } \right) \ge {\text{SVTNNWBM}}_{w}^{p,q} \left( {\tilde{b}_{1} ,\tilde{b}_{2} , \ldots ,\tilde{b}_{n} } \right)\).

“Appendix 2” details the proof of Theorem 3.

5 MCGDM method based on the SVTNNWBM operator

This section develops an approach based on the SVTNNWBM operator and the new comparison method in order to solve MCGDM problems with SVTNN information.

For a group decision-making problem with a finite set of m alternatives, let D = {D 1, D 2,…, D s } be the set of DMs, A = {A 1, A 2,…, A m } be the set of alternatives, and C = {C 1, C 2,…, C n } be the set of criteria. Assume that the subjective weight vector of the criteria provided by each DM D k (k = 1, 2,…, s) is \(\varpi^{k} = \left( {\varpi_{1}^{k} ,\varpi_{2}^{k} , \ldots ,\varpi_{n}^{k} } \right)\), such that \(\varpi_{j}^{k} \in \left[ {0,1} \right]\) and \(\sum\nolimits_{j = 1}^{n} {\varpi_{j}^{k} } = 1\). Similarly, the weight vector of the DMs is specified as w = (w 1, w 2,…, w s ), where w k  ≥ 0 and \(\sum\nolimits_{k = 1}^{s} {w_{k} } = 1\). The evaluation values provided by the experts are converted into SVTNNs through two questionnaires, and \(\tilde{a}_{ij}^{k} = \left\langle {\left[ {a_{ij1}^{k} ,a_{ij2}^{k} ,a_{ij3}^{k} ,a_{ij4}^{k} } \right],\left( {T_{{\tilde{a}_{ij}^{k} }} ,I_{{\tilde{a}_{ij}^{k} }} ,F_{{\tilde{a}_{ij}^{k} }} } \right)} \right\rangle\), \(\left( {k = 1,2, \ldots ,s;\;j = 1,2, \ldots ,n;\;i = 1,2, \ldots ,m} \right)\) stands for the evaluation value given by DM D k (k = 1, 2,…, s) for alternative A i (i = 1, 2,…, m) under criterion C j (j = 1, 2,…, n).

To elucidate the proposed methodology, this section is divided into two parts: determining each DM’s weight using an entropy-weighted method and describing the algorithm of the proposed approach.

5.1 Determining each DM’s weight using entropy-weighted method

Shannon [56] introduced the term “entropy” to measure the degree of uncertainty in information. It is a useful tool in decision-making and has found many applications [57–60]. Drawing on existing entropy-weighted methods, this subsection proposes a new method to obtain the objective weights of DMs.

We can first identify the decision matrix \(\tilde{a}^{k}\) and the weight vector of criteria \(\varpi^{k}\), which are provided by DM D k (k = 1, 2,…, s).

$$\tilde{a}^{k} = \left[ {\begin{array}{*{20}c} {\tilde{a}_{11}^{k} } & {\tilde{a}_{12}^{k} } & \cdots & {\tilde{a}_{1n}^{k} } \\ {\tilde{a}_{21}^{k} } & {\tilde{a}_{22}^{k} } & \cdots & {\tilde{a}_{2n}^{k} } \\ \vdots & \vdots & {} & \vdots \\ {\tilde{a}_{m1}^{k} } & {\tilde{a}_{m2}^{k} } & \cdots & {\tilde{a}_{mn}^{k} } \\ \end{array} } \right],$$
(16)
$$\varpi^{k} = \left( {\varpi_{1}^{k} ,\varpi_{2}^{k} , \ldots ,\varpi_{n}^{k} } \right),\quad \left( {k = 1,2, \ldots ,s} \right),$$
(17)

where the elements of the decision matrix \(\tilde{a}^{k}\) are characterized by SVTNNs.

The main procedures are as follows:

  1.

    Two types of criteria exist in decision matrices: benefit and cost criteria. In order to make the criterion type uniform, the cost criteria must be transformed into benefit criteria using the negation operator defined in Definition 9. The normalized evaluation information matrix is

    $$\bar{a}^{k} = \left[ {\begin{array}{*{20}c} {\bar{a}_{11}^{k} } & {\bar{a}_{12}^{k} } & \cdots & {\bar{a}_{1n}^{k} } \\ {\bar{a}_{21}^{k} } & {\bar{a}_{22}^{k} } & \cdots & {\bar{a}_{2n}^{k} } \\ \vdots & \vdots & {} & \vdots \\ {\bar{a}_{m1}^{k} } & {\bar{a}_{m2}^{k} } & \cdots & {\bar{a}_{mn}^{k} } \\ \end{array} } \right].$$
    (18)
  2.

    Using Definition 9, the jth criterion weight \(\varpi_{j}^{k}\) (k = 1, 2,…, s) is assigned to the jth criterion value \(\bar{a}_{ij}^{k}\) in decision matrix \(\bar{a}^{k}\) in Eq. (18). The weighted decision matrix is then identified as follows:

    $$U^{k} = \left( {\tilde{u}_{ij}^{k} } \right)_{m \times n} = \left( {\varpi_{j}^{k} \otimes \bar{a}_{ij}^{k} } \right)_{m \times n} = \left[ {\begin{array}{*{20}c} {\tilde{u}_{11}^{k} } & {\tilde{u}_{12}^{k} } & \cdots & {\tilde{u}_{1n}^{k} } \\ {\tilde{u}_{21}^{k} } & {\tilde{u}_{22}^{k} } & \cdots & {\tilde{u}_{2n}^{k} } \\ \vdots & \vdots & {} & \vdots \\ {\tilde{u}_{m1}^{k} } & {\tilde{u}_{m2}^{k} } & \cdots & {\tilde{u}_{mn}^{k} } \\ \end{array} } \right],\quad \left( {k = 1,2, \ldots ,s} \right),$$
    (19)

    where the elements of the weighted decision matrix are denoted as \(\tilde{u}_{ij}^{k} = \left\langle {\left[ {u_{ij1}^{k} ,u_{ij2}^{k} ,u_{ij3}^{k} ,u_{ij4}^{k} } \right],\left( {T_{{\tilde{u}_{ij}^{k} }} ,I_{{\tilde{u}_{ij}^{k} }} ,F_{{\tilde{u}_{ij}^{k} }} } \right)} \right\rangle\).

  3.

    Let I k be the entropy of the kth DM; then,

    $$I_{k} = - \frac{1}{\ln m}\sum\limits_{i = 1}^{m} {\left( {\frac{{e_{i}^{k} }}{{\sum\nolimits_{i = 1}^{m} {e_{i}^{k} } }} * \ln \left| {\frac{{e_{i}^{k} }}{{\sum\nolimits_{i = 1}^{m} {e_{i}^{k} } }}} \right|} \right)} ,$$
    (20)

    where \(e_{i}^{k}\) is calculated according to Definition 10, and its form is presented as follows:

    $$e_{i}^{k} = \sum\limits_{j = 1}^{n} {E\left( {\tilde{u}_{ij}^{k} } \right)} ,$$
    (21)
    $$E\left( {\tilde{u}_{ij}^{k} } \right) = \frac{{u_{ij1}^{k} + 2u_{ij2}^{k} + 2u_{ij3}^{k} + u_{ij4}^{k} }}{6} \times \left( {\frac{{2 + T_{{\tilde{u}_{ij}^{k} }} - I_{{\tilde{u}_{ij}^{k} }} - F_{{\tilde{u}_{ij}^{k} }} }}{3}} \right),$$
    (22)

    where \(E\left( {\tilde{u}_{ij}^{k} } \right)\) denotes the score value of the assessment information \(\tilde{u}_{ij}^{k}\) of alternative A i (i = 1, 2,…, m) with respect to C j (j = 1, 2,…, n) for DM D k (k = 1, 2,…, s). If \(e_{i}^{k} = 0\), it is assumed that

    $$\frac{{e_{i}^{k} }}{{\sum\nolimits_{i = 1}^{m} {e_{i}^{k} } }} * \ln \left| {\frac{{e_{i}^{k} }}{{\sum\nolimits_{i = 1}^{m} {e_{i}^{k} } }}} \right| = 0,\quad \left( {i = 1,2, \ldots ,m;\;j = 1,2, \ldots ,n;\;k = 1,2, \ldots ,s} \right).$$
  4.

    We can elicit the objective expert weight as follows:

    $$w_{k} = \frac{{1 - I_{k} }}{{\sum\nolimits_{k = 1}^{s} {\left( {1 - I_{k} } \right)} }},$$
    (23)

    in which w k  ≥ 0, and \(\sum\nolimits_{k = 1}^{s} {w_{k} } = 1\).

5.2 The algorithm of the proposed approach

The procedures of the MCGDM approach involve the following steps:

Step 1:

Establish the decision matrices and weight vector of the criteria.

According to Eqs. (16) and (17), we can get the decision matrix \(\tilde{a}^{k}\) and weight vector of criteria \(\varpi^{k}\) (k = 1, 2,…, s) provided by each DM D k (k = 1, 2,…, s).

Step 2:

Normalize the decision matrices.

Decision matrices include benefit criteria and cost criteria. Using Definition 9, the cost criteria can be transformed into benefit criteria.

Step 3:

Obtain the weighted decision matrices.

According to Eq. (19), the weighted decision matrices can be constructed by weighting the decision matrices with the subjective criteria weight vectors \(\varpi^{k} = \left( {\varpi_{1}^{k} ,\varpi_{2}^{k} , \ldots ,\varpi_{n}^{k} } \right)\) (k = 1, 2,…, s) provided by the DMs.

Step 4:

Obtain expert weights through the entropy-weighted method.

We can identify the objective expert weights using Eqs. (20) through (23).

Step 5:

Calculate the comprehensive criteria weights.

Utilizing the weighted arithmetic mean operator, we can identify the comprehensive criteria weights \(\tilde{\varpi } = \left( {\tilde{\varpi }_{1} ,\tilde{\varpi }_{2} , \ldots ,\tilde{\varpi }_{n} } \right)\) with \(\tilde{\varpi }_{j} \in \left[ {0,1} \right]\) (j = 1, 2,…, n) and \(\sum\nolimits_{j = 1}^{n} {\tilde{\varpi }_{j} } = 1\).

$$\tilde{\varpi }_{j} = \sum\limits_{k = 1}^{s} {w_{k} \varpi_{j}^{k} } ,\quad \left( {k = 1,2, \ldots ,s} \right).$$
(24)
Step 6:

Obtain the aggregated decision matrix.

According to the new operations described in Definition 9 and using both the objective expert weights obtained in Step 4 and the weighted arithmetic mean operator, we can calculate the aggregated decision matrix as follows:

$$M = \left( {\bar{\bar{a}}_{ij} } \right)_{m \times n} = \left( {\sum\limits_{k = 1}^{s} {w_{k} \bar{a}_{ij}^{k} } } \right)_{m \times n} = \left[ {\begin{array}{*{20}c} {\bar{\bar{a}}_{11} } & {\bar{\bar{a}}_{12} } & \cdots & {\bar{\bar{a}}_{1n} } \\ {\bar{\bar{a}}_{21} } & {\bar{\bar{a}}_{22} } & \cdots & {\bar{\bar{a}}_{2n} } \\ \vdots & \vdots & {} & \vdots \\ {\bar{\bar{a}}_{m1} } & {\bar{\bar{a}}_{m2} } & \cdots & {\bar{\bar{a}}_{mn} } \\ \end{array} } \right].$$
(25)
Step 7:

Obtain the overall value of A i .

Utilizing Eq. (15), the overall value of alternative A i can be aggregated.

Step 8:

Calculate the score values.

Utilizing Eqs. (4) through (6), the score values can be obtained for comparison.

Step 9:

Rank all alternatives.

Comparing the values obtained in Step 8 yields the final ranking results, from which the optimal alternative(s) can be selected.

6 A numerical example

This section uses a numerical example adapted from Yue [61] to demonstrate the applicability of the proposed method.

A year-end report is required to assess various constituencies’ satisfaction with respect to the institutional leaders at Chinese universities. The following four leaders of a university in Guangdong, China, must be assessed: (1) A 1 represents the president; (2) A 2 is the first vice president; (3) A 3 is the second vice president; (4) A 4 is the third vice president. Teams are assembled from several constituencies to serve as DMs (reviewers), including teachers (D 1), researchers (D 2) and undergraduate students (D 3). These DMs use the three criteria C 1 (working experience), C 2 (academic performance) and C 3 (personality) to evaluate the four alternatives.

Reviewers can evaluate the four alternatives with respect to each criterion according to a hundred-point scale, in which 100 is the maximum grade and 0 is the minimum grade. As the evaluation team includes a large number of people, we must first obtain an interval number representing the common opinion. Second, we should take into consideration the minimum and maximum scores from the reviewers, such that a TFN can be obtained. Furthermore, reviewers can evaluate the obtained TFN by voting in favor, voting against or abstaining on each evaluation index. The final result is a SVTNN. For example, the assessment value of alternative A 1 is denoted as \(\tilde{a}_{11}^{1} = \left\langle {\left[ {0.6,0.7,0.8,0.9} \right],\left( {0.36,0.3,0.27} \right)} \right\rangle\). This result is obtained by DM D 1 with respect to criterion C 1 using the two questionnaires. Initially, we can identify an interval number representing the common opinion, denoted as [0.7, 0.8]. Meanwhile, a few people offered remarkably low or high assessment values, denoted as 0.6 and 0.9, respectively; obviously, these values should also be taken into consideration. This process yields the TFN [0.6, 0.7, 0.8, 0.9]. The constituents are then asked to evaluate this TFN by voting in favor, voting against or abstaining on each evaluation index, which yields the three membership degrees of the SVTNN. This produces the final assessment information \(\tilde{a}_{11}^{1} = \left\langle {\left[ {0.6,0.7,0.8,0.9} \right],\left( {0.36,0.3,0.27} \right)} \right\rangle\).

The four possible alternatives are evaluated according to the three criteria listed above in the form of SVTNNs, which are transformed from evaluation values, as shown in the following three decision matrices:

$$\tilde{a}^{1} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.6,0.7,0.8,0.9} \right],\left( {0.36,0.3,0.27} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.72,0.5,0.8,0.86} \right],\left( {0.53,0.3,0.28} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.85,0.85,0.9,0.92} \right],\left( {0.57,0.35,0.22} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.77,0.77,0.8,0.81} \right],\left( {0.72,0.3,0.28} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.69,0.7,0.8,0.93} \right],\left( {0.91,0.5,0.07} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.83,0.85,0.85,0.88} \right],\left( {0.80,0.2,0.10} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.8,0.85,0.9,0.96} \right],\left( {0.63,0.5,0.19} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.59,0.7,0.8,0.87} \right],\left( {0.88,0.3,0.12} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.68,0.7,0.8,0.85} \right],\left( {0.86,0.4,0.14} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.6,0.6,0.8,0.9} \right],\left( {0.65,0.3,0.33} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.58,0.6,0.8,0.9} \right],\left( {0.72,0.3,0.23} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.6,0.7,0.8,0.9} \right],\left( {0.77,0.45,0.23} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$
$$\tilde{a}^{2} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.77,0.8,0.8,0.83} \right],\left( {0.53,0.3,0.26} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.68,0.7,0.8,0.86} \right],\left( {0.54,0.4,0.35} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.82,0.85,0.9,0.9} \right],\left( {0.68,0.35,0.32} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.93,0.94,0.95,0.98} \right],\left( {0.85,0.3,0.15} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.76,0.8,0.8,0.86} \right],\left( {0.86,0.5,0.13} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.65,0.7,0.8,0.87} \right],\left( {0.69,0.2,0.3} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.79,0.8,0.84,0.85} \right],\left( {0.83,0.4,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.72,0.8,0.9,0.92} \right],\left( {0.76,0.5,0.24} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.81,0.85,0.9,0.97} \right],\left( {0.73,0.4,0.13} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.7,0.78,0.8,0.9} \right],\left( {0.9,0.3,0.07} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.58,0.6,0.8,0.9} \right],\left( {0.91,0.2,0.03} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.7,0.7,0.8,0.9} \right],\left( {0.66,0.4,0.12} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$
$$\tilde{a}^{3} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.85,0.85,0.9,0.96} \right],\left( {0.81,0.3,0.18} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.76,0.8,0.8,0.86} \right],\left( {0.76,0.5,0.24} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.8,0.85,0.9,0.97} \right],\left( {0.74,0.35,0.19} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.79,0.8,0.8,0.87} \right],\left( {0.75,0.3,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.75,0.75,0.8,0.89} \right],\left( {0.84,0.5,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.81,0.85,0.9,0.93} \right],\left( {0.97,0.2,0.03} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.62,0.7,0.8,0.82} \right],\left( {0.89,0.1,0.11} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.84,0.85,0.85,0.89} \right],\left( {0.78,0.5,0.21} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.78,0.8,0.8,0.82} \right],\left( {0.74,0.4,0.11} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.6,0.6,0.8,0.9} \right],\left( {0.66,0.3,0.18} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.64,0.7,0.8,0.9} \right],\left( {0.63,0.3,0.27} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.6,0.65,0.8,0.9} \right],\left( {0.71,0.4,0.29} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$

The subjective criteria weights offered by the DMs are \(\varpi^{1} = \left( {0.4,0.2,0.4} \right)\), \(\varpi^{2} = \left( {0.3,0.3,0.4} \right)\), and \(\varpi^{3} = \left( {0.4,0.4,0.2} \right)\), respectively.

6.1 Evaluation steps for the new MCGDM method based on the SVTNNWBM operator

The following steps describe the procedure for assessing DMs’ satisfaction with respect to their leaders and obtaining a final ranking order for the four alternatives.

Step 1:

Establish the decision matrices and weight vector of the criteria.

The decision matrices and weight vector of criteria are listed in the previous subsection.

Step 2:

Normalize the decision matrices.

Since all the criteria are benefit criteria, there is no need for normalization.

Step 3:

Obtain the weighted decision matrices.

Since every DM offers different subjective weights for the criteria, we must weight the initial decision matrices by each DM’s subjective criteria weight vector. Utilizing Definition 9, we can acquire the following results:

$$U^{1} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.24,0.28,0.32,0.36} \right],\left( {0.36,0.3,0.27} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.144,0.1,0.16,0.172} \right],\left( {0.53,0.3,0.28} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.34,0.34,0.36,0.368} \right],\left( {0.57,0.35,0.22} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.308,0.308,0.32,0.324} \right],\left( {0.72,0.3,0.28} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.138,0.14,0.16,0.18} \right],\left( {0.91,0.5,0.07} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.332,0.34,0.34,0.352} \right],\left( {0.80,0.2,0.10} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.32,0.34,0.36,0.384} \right],\left( {0.63,0.5,0.19} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.118,0.14,0.16,0.174} \right],\left( {0.88,0.3,0.12} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.272,0.28,0.32,0.34} \right],\left( {0.86,0.4,0.14} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.24,0.24,0.32,0.36} \right],\left( {0.65,0.3,0.33} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.116,0.12,0.16,0.18} \right],\left( {0.72,0.3,0.23} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.24,0.28,0.32,0.36} \right],\left( {0.77,0.45,0.23} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$
$$U^{2} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.231,0.24,0.24,0.249} \right],\left( {0.53,0.3,0.26} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.204,0.21,0.24,0.258} \right],\left( {0.54,0.4,0.35} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.328,0.34,0.36,0.36} \right],\left( {0.68,0.35,0.32} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.279,0.282,0.285,0.294} \right],\left( {0.85,0.3,0.15} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.228,0.24,0.24,0.258} \right],\left( {0.86,0.5,0.13} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.26,0.28,0.32,0.348} \right],\left( {0.69,0.2,0.3} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.237,0.24,0.252,0.255} \right],\left( {0.83,0.4,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.216,0.24,0.27,0.276} \right],\left( {0.76,0.5,0.24} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.324,0.34,0.36,0.388} \right],\left( {0.73,0.4,0.13} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.21,0.234,0.24,0.27} \right],\left( {0.9,0.3,0.07} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.174,0.18,0.24,0.27} \right],\left( {0.91,0.2,0.03} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.28,0.28,0.32,0.36} \right],\left( {0.66,0.4,0.12} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$
$$U^{3} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.34,0.34,0.36,0.384} \right],\left( {0.81,0.3,0.18} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.304,0.32,0.32,0.344} \right],\left( {0.76,0.5,0.24} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.16,0.17,0.18,0.194} \right],\left( {0.74,0.35,0.19} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.316,0.32,0.32,0.348} \right],\left( {0.75,0.3,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.3,0.3,0.32,0.356} \right],\left( {0.84,0.5,0.16} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.162,0.17,0.18,0.186} \right],\left( {0.97,0.2,0.03} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.248,0.28,0.32,0.328} \right],\left( {0.89,0.1,0.11} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.336,0.34,0.34,0.356} \right],\left( {0.78,0.5,0.21} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.156,0.16,0.16,0.164} \right],\left( {0.74,0.4,0.11} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.24,0.24,0.32,0.36} \right],\left( {0.66,0.3,0.18} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.256,0.28,0.32,0.36} \right],\left( {0.63,0.3,0.27} \right)} \right\rangle } \hfill & {\left\langle {\left[ {0.12,0.13,0.16,0.18} \right],\left( {0.71,0.4,0.29} \right)} \right\rangle } \hfill \\ \end{array} } \right)$$
Step 4:

Obtain expert weights through the entropy-weighted method.

We can identify the objective expert weights using Eqs. (20) through (23). The results are calculated as follows:

  (i)

    According to Eq. (22), we have

    $$\left[ {E\left( {\tilde{u}_{ij}^{1} } \right)} \right]_{m \times n} = \left[ {\begin{array}{*{20}c} {0.179} & {0.0906} & {0.2342} \\ {0.2245} & {0.1193} & {0.2839} \\ {0.2268} & {0.1219} & {0.2335} \\ {0.193} & {0.1041} & {0.209} \\ \end{array} } \right],\quad \left[ {E\left( {\tilde{u}_{ij}^{2} } \right)} \right]_{m \times n} = \left[ {\begin{array}{*{20}c} {0.2749} & {0.2164} & {0.1288} \\ {0.2473} & {0.2296} & {0.1438} \\ {0.2644} & {0.236} & {0.1189} \\ {0.2083} & {0.2078} & {0.0988} \\ \end{array} } \right],\;{\text{and}}\;\left[ {E\left( {\tilde{u}_{ij}^{3} } \right)} \right]_{m \times n} = \left[ {\begin{array}{*{20}c} {0.179} & {0.0906} & {0.2342} \\ {0.2245} & {0.1193} & {0.2839} \\ {0.2268} & {0.1219} & {0.2335} \\ {0.193} & {0.1041} & {0.209} \\ \end{array} } \right].$$
  (ii)

    According to Eqs. (20) through (22), we have

    $$I_{1} = 0.996789,\quad I_{2} = 0.997812,\quad {\text{and}}\quad I_{3} = 0.996789.$$
  (iii)

    According to Eq. (23), the expert weights can be identified as follows:

    $$w_{1} = 0.3729,\quad w_{2} = 0.2542,\quad {\text{and}}\quad w_{3} = 0.3729.$$
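As a minimal sketch of step (iii), assuming Eq. (23) assigns each expert a weight proportional to the divergence $1 - I_k$ (an assumption that reproduces the values above; the function name is ours):

```python
def expert_weights(I):
    """Weight each expert proportionally to the divergence 1 - I_k."""
    divergence = [1 - Ik for Ik in I]
    total = sum(divergence)
    return [d / total for d in divergence]

# Overall entropy values from step (ii)
I = [0.996789, 0.997812, 0.996789]
w = expert_weights(I)  # ≈ [0.3729, 0.2542, 0.3729], matching step (iii) up to rounding
```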
Step 5:

Calculate the comprehensive criteria weights.

The following comprehensive criteria weights can be obtained using Eq. (24):

$$\varpi_{1} = 0.3746,\quad \varpi_{2} = 0.3,\quad {\text{and}}\quad \varpi_{3} = 0.3254.$$
Step 6:

Obtain the aggregated decision matrix.

Based on the initial decision matrices and the expert weights obtained in Step 4, and using the new operations proposed in Definition 9, we can acquire the following aggregated decision matrix:

$$M = \left( {\begin{array}{*{20}l} {\left\langle \begin{aligned} \left[ {0.737,0.781,0.837,0.905} \right], \hfill \\ \left( {0.491,0.406,0.416} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.725,0.663,0.8,0.86} \right], \hfill \\ \left( {0.564,0.353,0.364} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.824,0.85,0.9,0.934} \right], \hfill \\ \left( {0.68,0.504,0.543} \right) \hfill \\ \end{aligned} \right\rangle } \hfill \\ {\left\langle \begin{aligned} \left[ {0.792,0.824,0.838,0.876} \right], \hfill \\ \left( {0.75,0.423,0.431} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.73,0.744,0.8,0.897} \right], \hfill \\ \left( {0.882,0.5,0.321} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.777,0.812,0.856,0.896} \right], \hfill \\ \left( {0.811,0.433,0.537} \right) \hfill \\ \end{aligned} \right\rangle } \hfill \\ {\left\langle \begin{aligned} \left[ {0.73,0.781,0.847,0.88} \right], \hfill \\ \left( {0.789,0.696,0.55} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.716,0.781,0.844,0.89} \right], \hfill \\ \left( {0.833,0.38,0.212} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.75,0.775,0.825,0.869} \right], \hfill \\ \left( {0.808,0.418,0.203} \right) \hfill \\ \end{aligned} \right\rangle } \hfill \\ {\left\langle \begin{aligned} \left[ {0.625,0.646,0.8,0.9} \right], \hfill \\ \left( {0.687,0.474,0.509} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.602,0.637,0.8,0.9} \right], \hfill \\ \left( {0.748,0.388,0.318} \right) \hfill \\ \end{aligned} \right\rangle } \hfill & {\left\langle \begin{aligned} \left[ {0.625,0.681,0.8,0.9} \right], \hfill \\ \left( {0.723,0.504,0.413} \right) \hfill \\ \end{aligned} \right\rangle } \hfill 
\\ \end{array} } \right).$$
Step 7:

Obtain the overall value of $A_i$.

For simplicity, we assume that p = q = 1; then, utilizing Eq. (15), we can identify the overall value of each alternative $A_i$:

$$A_{i} = \begin{array}{*{20}c} {A_{1} } \\ {A_{2} } \\ {A_{3} } \\ {A_{4} } \\ \end{array} \left( {\begin{array}{*{20}l} {\left\langle {\left[ {0.761,0.7653,0.8459,0.9} \right],\left( {0.5718,0.1847,0.171} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.7675,0.7948,0.8319,0.8891} \right],\left( {0.8145,0.1628,0.1772} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.7325,0.7794,0.8391,0.8796} \right],\left( {0.8092,0.124,0.2467} \right)} \right\rangle } \hfill \\ {\left\langle {\left[ {0.6182,0.6547,0.8,0.9} \right],\left( {0.7179,0.16,0.1852} \right)} \right\rangle } \hfill \\ \end{array} } \right).$$
Step 8:

Calculate the score values.

$$E\left( {A_{1} } \right) = 0.6012,\quad E\left( {A_{2} } \right) = 0.675,\quad E\left( {A_{3} } \right) = 0.6569,\quad {\text{and}}\quad E\left( {A_{4} } \right) = 0.5836.$$
Step 9:

Rank all alternatives.

Based on the score values obtained in Step 8, we can set forth the final ranking: $A_2 \succ A_3 \succ A_1 \succ A_4$. These results show that alternative $A_2$ is the best choice.
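The ranking in Step 9 follows directly from sorting the Step 8 score values in descending order; a trivial sketch:

```python
# Score values from Step 8
scores = {"A1": 0.6012, "A2": 0.675, "A3": 0.6569, "A4": 0.5836}

# Rank alternatives by descending score
ranking = sorted(scores, key=scores.get, reverse=True)
print(" > ".join(ranking))  # A2 > A3 > A1 > A4
```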

6.2 The influence of parameters p and q on the final order of the alternatives

In order to illustrate the influence of the parameters $p$ and $q$, different values should be evaluated to check their effect on the example's decision-making results. Several combinations of $p$ and $q$ were considered in order to gain a comprehensive view, and the results are shown in Table 1.

Table 1 Ranking orders with different values of p and q in SVTNNWBM operator

An analysis of the results in Table 1 reveals that different values of $p$ and $q$ in the SVTNNWBM operator can lead to different ranking results. In the two situations $p = 1, q = 0$ and $p = q = 0.5$, the final ranking order is $A_2 \succ A_3 \succ A_4 \succ A_1$; under all other conditions, the ranking order is $A_2 \succ A_3 \succ A_1 \succ A_4$, as shown in Table 1. The best alternative is always $A_2$, while the worst alternative alternates between $A_1$ and $A_4$.

The reasons for this inconsistency are as follows. In the special case where one of the two parameters $p$ and $q$ takes the value of zero, the SVTNNWBM operator cannot capture the interrelationships among the individual arguments, which produces a different ranking order; this is why the final ranking when $p = 1, q = 0$ differs from the other cases. Moreover, when $p = q = 0.5$, both parameters are smaller than 1, so the aggregation value may be amplified when calculating the comprehensive value of $A_i$; as a result, the ranking order of $A_1$ and $A_4$ switches. In general, we can take $p = q = 1$; not only is this intuitive and simple, but it also considers the interrelationships among criteria. Thus, the proposed method enables the DMs to select the desirable alternative according to their interests and actual needs.
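The role of $p$ and $q$ can be illustrated with the classical scalar Bonferroni mean, of which the SVTNNWBM operator is a weighted extension to SVTNNs. This is an illustrative sketch on plain numbers (the function name is ours), not the operator itself:

```python
from itertools import permutations

def bonferroni_mean(a, p, q):
    """Classical scalar Bonferroni mean BM^{p,q}(a_1, ..., a_n)."""
    n = len(a)
    s = sum((ai ** p) * (aj ** q) for ai, aj in permutations(a, 2))
    return (s / (n * (n - 1))) ** (1 / (p + q))

a = [0.6, 0.7, 0.8]
# With q = 0 the cross-terms a_j^q collapse to 1, so the BM reduces to the
# arithmetic mean and no longer reflects pairwise interrelationships:
bonferroni_mean(a, 1, 0)  # = 0.7 (arithmetic mean)
bonferroni_mean(a, 1, 1)  # ≈ 0.6976 (cross-terms now contribute)
```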

6.3 Comparison analysis and discussion

In this subsection, a comparative study is conducted to validate the practicality and effectiveness of the proposed approach.

Case 1

Comparative analysis in the context of SNLS environments.

In order to verify its feasibility, the method proposed in this paper was used to solve the example in Tian et al. [33], which features an environment characterized by SNLSs. An analysis is conducted here to compare the proposed method and the method in [33].

The method proposed in [33] incorporates power aggregation operators and a TOPSIS-based QUALIFLEX method to solve green product design selection problems using neutrosophic linguistic information. Following the method proposed in this paper, the first step is to translate the data in [33] into SVTNNs as defined in [62], so that the decision information remains consistent. Next, the expert weights are obtained using the entropy-weighted method, and the comprehensive decision matrix is obtained using the SVTNNWBM operator. Then, the ranking results are obtained based on the new comparison method described in Sect. 3.2. The example in [33] can be solved as follows (Table 2):

Table 2 Ranking results of different methods in SNLS environments

Using the two different methods, the example in [33] yields the same ranking result, $A_2 \succ A_3 \succ A_4 \succ A_1$, when $p = 1, q = 0$. There are subtle differences under other conditions, where the ranking becomes $A_2 \succ A_3 \succ A_1 \succ A_4$, but alternative $A_2$ remains the optimum design. This can be explained as follows.

Using the proposed method, SNLS information is first converted into SVTNN information using the technique developed in [62]. In SNLSs, the membership degree, non-membership degree and indeterminacy degree are relative to a fuzzy concept such as "Excellent" or "Good", which forms a discrete set and can cause information distortion and loss. SVTNNs, by contrast, are defined over a continuous set, which is better able to express uncertainty and maintain the completeness of information. The discrepancy could also be caused by the distinct inherent characteristics of the aggregation operators and comparison methods utilized by the two approaches. Although both power average operators and BM operators take into account relationships among the arguments being aggregated, they accomplish this differently, as stated in [33]. Given the above analysis, SVTNNs may reflect the assessment information better than SNLSs because they transform the linguistic terms into TFNs. Therefore, the results obtained in this paper can be considered relatively convincing.

Case 2

Comparative analysis in the context of SVTNN environments.

In order to validate the accuracy and superior performance of the proposed method, the method in Ye [38] was applied to deal with the example in Sect. 6. A comparative study is conducted here between the proposed approach and the method developed in [38], based on the illustrative example described in this paper.

The method proposed in [38] handles an MCDM problem through three main procedures. First, the trapezoidal neutrosophic weighted arithmetic averaging (TNWAA) operator and the trapezoidal neutrosophic weighted geometric averaging (TNWGA) operator are used to aggregate the evaluation values. Second, the score function is calculated for each alternative's collective overall value. Third, the best choice is selected according to the score values. When solving the example in Sect. 6 using the approach in [38], the first five steps are the same as in our proposed method. However, in Step 6, the aggregated decision matrix is obtained using the operations in [38]; then, the TNWAA and TNWGA operators are applied to identify the overall evaluation values of each alternative in Step 7. Finally, the score values can be calculated, and the ranking results can be obtained.
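For contrast, the averaging forms underlying the TNWAA and TNWGA operators can be sketched on scalars (the function names `waa` and `wga` are ours; the actual operators in [38] apply these forms to trapezoidal neutrosophic numbers):

```python
def waa(vals, w):
    """Weighted arithmetic average (scalar analogue of TNWAA)."""
    return sum(wi * vi for wi, vi in zip(w, vals))

def wga(vals, w):
    """Weighted geometric average (scalar analogue of TNWGA)."""
    out = 1.0
    for wi, vi in zip(w, vals):
        out *= vi ** wi
    return out

vals = [0.6, 0.7, 0.8]
w = [0.3746, 0.3, 0.3254]  # comprehensive criteria weights from Step 5
avg = waa(vals, w)  # ≈ 0.6951
geo = wga(vals, w)  # slightly below avg (AM-GM inequality)
# Neither form contains cross-terms between different arguments, which is
# why these operators cannot model pairwise interrelationships.
```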

As shown in Table 3, different ranking orders are obtained using the different methods, but the differences are subtle. The reasons for the inconsistency can be summarized as follows.

Table 3 Ranking results of different methods in SVTNN environments

From the perspective of operations, the improved operations for SVTNNs in this paper take into consideration the correlation between the TFNs and the three membership degrees of SVTNNs. This is a reliable principle that can effectively avoid information loss. The operations in [38], however, treat the TFNs and the three membership degrees of SVTNNs as two separate parts, which may cause the aggregated results to deviate from reality.

In terms of comparison methods, the new comparison method for SVTNNs proposed in this paper has some notable advantages over the corresponding method based on the score function in [38]. The details were discussed in Sect. 3.2.

In terms of aggregation operators, the SVTNNWBM operator takes the interrelationships of the input arguments into consideration and allows the user to obtain different results by adjusting the values of the parameters $p$ and $q$, which adds flexibility to the proposed method. The TNWAA and TNWGA operators used in [38], however, cannot capture the pairwise influence of different input arguments. Therefore, the ranking results in this paper are more reasonable, and the proposed method is more flexible than the method in [38].

The results of the comparative analysis validate the proposed approach and confirm that it is practical and effective in addressing MCGDM problems.

7 Conclusion

SVTNNs have a strong ability to represent incomplete and inconsistent information, and they can avoid information loss and distortion in complex decision-making problems. MCGDM methods with SVTNNs therefore have extensive application prospects in many domains. The BM operator can take into consideration the interrelationships among input arguments, and the entropy-weighted method is an appropriate tool for determining objective weights, which is significant in solving decision-making problems. This paper developed a new approach to MCGDM problems using SVTNNs. We redefined the improved operations and proposed a new comparison method for SVTNNs. We obtained expert weights through the entropy-weighted method, and we applied the BM operator. Then, we proposed the SVTNNWBM operator to aggregate the decision information expressed by SVTNNs. We further studied some properties of the BM operator and discussed some special cases. In addition, a sensitivity analysis was conducted to assess the impact of changing the values of the parameters $p$ and $q$. Finally, we confirmed the new MCGDM approach to be practical and effective by applying it to a numerical example and comparing it with two different methods from the literature. This study considered the interrelationships among input arguments and acquired expert weights objectively; however, the risk preferences of DMs were ignored, and our next study aims to address this deficiency. In future research, the proposed method can be applied to more practical cases, including personnel selection, green supplier selection and medical diagnosis problems, to further illustrate its efficiency and effectiveness. Because SVTNNs can be easily and intuitively obtained in education evaluation processes, this method should also find applications in that field.