1 Introduction

The fuzzy linguistic approach, first introduced by Zadeh [42], has been extended into several different models [5, 27, 28, 32]. It has attracted much attention recently due to its effectiveness and practicality in representing uncertainty and vagueness of meanings whose nature is qualitative rather than quantitative [41]. In recent years, the most popular extended fuzzy linguistic approaches have been based on hesitant fuzzy linguistic term sets (HFLTSs) [23] or probabilistic linguistic term sets (PLTSs) [22]. HFLTSs, built on hesitant fuzzy sets (HFSs) [26] and linguistic term sets (LTSs) [42], focus on comparative linguistic expressions in which the decision makers (DMs) can propose several possible linguistic terms at the same time, and thus increase the flexibility and capability of eliciting and representing linguistic information. PLTSs, on the other hand, can express different importance degrees of the linguistic terms among which the DMs hesitate; that is to say, they permit the DMs to use several linguistic terms to assess a linguistic variable, and these linguistic terms can be further calculated together with their associated probabilities. Since HFLTSs and PLTSs have merits in depicting the DMs' cognitions and preferences, many researchers have studied them in qualitative decision making. For example, based on HFLTSs, much research has been carried out on basic operational laws [23, 31], aggregation operations [11], and information measures including distance [9, 17], correlation [18] and entropy [12]. For PLTSs, the basic operations [22], improved operations [11], decision-making approaches [16], etc., have also been studied.

It should be noted that HFLTSs and PLTSs were introduced to handle decision-making problems posed in qualitative settings. However, in increasingly complicated environments, the uncertainty in a given problem often involves both qualitative and quantitative information. For instance, when evaluating the "performance" of a sensor, qualitative terms such as "good," "medium" and "bad" can be used; meanwhile, when evaluating the "error" of the sensor, numerical information should be given. Additionally, HFLTSs and PLTSs are limited in that they can only express a single layer of evaluation information about the relevant attributes. Some complex multi-attribute decision-making problems, however, contain nested information, and the whole process needs to be evaluated twice so that the experts can make full use of the decision information to obtain more accurate results. Motivated by PLTSs, Wang et al. [29] proposed the concept of nested probabilistic-numerical linguistic term sets (NPNLTSs), which provide a different and more powerful form to fully represent the DMs' preferences with both qualitative and quantitative information in the decision-making process. Moreover, an extended TOPSIS method based on NPNLTSs has been proposed to deal with multi-attribute group decision-making problems [29]. Later, with nested probabilistic-numerical linguistic information, an optimization problem about tracking a maneuvering target with multiple sensors has been solved [30].

In order to apply NPNLTSs to multi-attribute decision-making problems more effectively, we should pay closer attention to their basic characteristics, in particular distance and similarity measures, which are the basis of some well-known methods such as TOPSIS [3], VIKOR [20] and ELECTRE [4]. These measures are also fundamentally important in many scientific fields, such as decision making [34], machine learning [13] and pattern recognition [2]. Distance and similarity measures are thus common tools that have been widely used to measure the deviations and closeness degrees of different arguments [17]. Up to now, many scholars have paid great attention to this issue and achieved many results, which can be roughly classified into two sorts: one sort is based mainly on traditional distance measures, such as the Hamming distance [40], the Euclidean distance [40] and the Hausdorff metric [10]; the other is based on weighted distance operators, such as the ordered weighted distance measures [36, 38, 39], the hybrid weighted distance measures [35] and the fuzzy ordered distance measures [24]. Furthermore, all these distance measures have been extended to fuzzy sets [21], intuitionistic fuzzy sets [25], interval-valued intuitionistic fuzzy sets [1, 19], linguistic fuzzy sets [8], hesitant fuzzy sets [37] and HFLTSs [17]. In this paper, we therefore investigate distance and similarity measures for NPNLTSs, not only over a single aspect but also over multiple aspects in the discrete case, the continuous case and the ordered weighted situation. Additionally, an approach based on these measures is proposed to deal with a multi-attribute decision-making problem with NPNLTSs. Moreover, in order to understand the effects of different distance measures, various decision-making methods and changed focal parameters on the results, we carry out experiment simulations on a case study concerning the evaluation of medical treatment. The contributions of this paper lie in the following aspects:

(1) Based on some well-known traditional distance measures over a single aspect, distance and similarity measures between two NPNLTSs are proposed, together with their properties and proofs.

(2) Considering multiple aspects in three situations, namely the discrete case, the continuous case and the ordered weighted case, distance and similarity measures between two collections of NPNLTSs are provided.

(3) A decision-making approach based on the proposed distance and similarity measures is provided and applied to a case study concerning the evaluation of medical treatment.

(4) Comparisons and analyses by experiment simulations are carried out from three angles: various decision-making methods, various distance and similarity measures, and changed focal parameters.

The rest of this paper is organized as follows: Sect. 2 presents the concepts of linguistic term sets and nested probabilistic-numerical linguistic term sets. In Sect. 3, we give the definitions and properties of distance and similarity measures for two NPNLTSs and for two collections of NPNLTSs in three cases. In Sect. 4, we propose a decision-making approach based on the proposed distance measures. Section 5 provides comparative analyses and discussions by experiment simulations from three aspects. Section 6 ends the paper with some conclusions.

2 Linguistic Term Sets and Nested Probabilistic-Numerical Linguistic Term Sets

Since linguistic terms express the DMs' knowledge more naturally and straightforwardly in the process of decision making, they have been considered to be closer to human beings' cognitive processes [41] and have provided good application results in many fields [6, 7, 12, 33]. In the following, we recall the concept of linguistic term sets and then introduce the nested probabilistic-numerical linguistic term sets.

2.1 Linguistic Term Sets

Zadeh first proposed the fuzzy linguistic approach to represent linguistic information; its definition is given as follows [41]:

Definition 1

[41] A linguistic variable is characterized by a quintuple \(\left( {H,T\left( H \right),U,G,M} \right)\), where \(H\) is the name of the variable; \(T\left( H \right)\) denotes the term set of \(H\), i.e., the set of its linguistic values; \(U\) is a universe of discourse; \(G\) is a syntactic rule for generating the terms in \(T\left( H \right)\); and \(M\) is a semantic rule for associating each linguistic value \(X\) with its meaning \(M\left( X \right)\), which is a fuzzy subset of \(U\).

Obviously, a linguistic variable depends on its linguistic descriptors and semantics. There are different ways to choose the linguistic descriptors, including the ordered structure approach and the context-free grammar approach [14]. Additionally, the corresponding semantics can be defined in three ways: (1) based on an ordered structure of the linguistic term set; (2) based on membership functions and a semantic rule; and (3) mixed semantics. Up to now, researchers have mainly studied the ordered structure approach together with the semantics based on the ordered structure of the linguistic term set. Under this method, all the terms of the linguistic term set are distributed on a scale [14, 15]. For example, a well-known set of seven linguistic terms is given as follows (see Fig. 1):

Fig. 1 The set \(S\) of seven linguistic terms

Moreover, under this approach, the semantics are introduced directly over the ordered structure of the linguistic term set. For instance, the above linguistic term set of seven terms with its syntax and fuzzy semantics representation is shown in Fig. 2.

Fig. 2 The set \(S\) of seven linguistic terms with its semantics

2.2 Nested Probabilistic-Numerical Linguistic Term Sets

Probabilistic linguistic term sets (PLTSs), which permit the DMs to hesitate among several linguistic terms with associated probabilities, provide a powerful structure for reflecting the importance degrees of the corresponding linguistic terms.

Definition 2

[22] Let \(S_{1} = \left\{ {s_{0} ,s_{1} , \ldots ,s_{\tau } } \right\}\) be a linguistic term set. Then a PLTS is defined as:

$$L(p) = \left\{ {L^{(k)} (p^{(k)} )|L^{(k)} \in S_{1} ,\;p^{(k)} \ge 0,\;k = 1,2, \ldots ,\# L(p),\;\;\sum\limits_{k = 1}^{\# L(p)} {p^{(k)} } \le 1} \right\}$$
(1)

where \(L^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\) is the linguistic term \(L^{(k)}\) associated with the probability \(p^{(k)}\), and \(\# L(p)\) is the number of all different linguistic terms in \(L(p)\).
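To make Eq. (1) concrete, the following minimal Python sketch checks the defining conditions of a PLTS. The encoding of each element \(L^{(k)}(p^{(k)})\) as a (subscript, probability) pair and the function name are our own illustrative assumptions, not part of the cited definition.

```python
# A minimal sketch of Eq. (1): a PLTS is a list of (term_subscript, probability)
# pairs over S1 = {s_0, ..., s_tau}. Names here are illustrative assumptions.

def is_valid_plts(plts, tau):
    """Check the conditions of Eq. (1) for a PLTS."""
    terms = [k for k, _ in plts]
    probs = [p for _, p in plts]
    return (
        all(0 <= k <= tau for k in terms)   # each L^(k) belongs to S1
        and len(set(terms)) == len(terms)   # all linguistic terms are distinct
        and all(p >= 0 for p in probs)      # p^(k) >= 0
        and sum(probs) <= 1 + 1e-9          # sum of probabilities <= 1
    )

# Example: {s_1(0.4), s_2(0.3)} on an LTS with tau = 4
print(is_valid_plts([(1, 0.4), (2, 0.3)], tau=4))  # True
```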

Similar to PLTSs, where the DMs may hesitate among several linguistic terms with associated probabilities when evaluating an alternative, in a complex circumstance containing both qualitative and quantitative information the DMs may not only hesitate among several linguistic terms, but also possess nested numerical information about these linguistic terms. Hence, motivated by PLTSs, Wang et al. [29] introduced the nested probabilistic-numerical linguistic term sets (NPNLTSs) and the normalized NPNLTSs (N-NPNLTSs) as follows:

Definition 3

[29] Let \({\text{NPN}} = \{ {\text{OL}}\left( p \right)\left\{ {{\text{IL}}\left( v \right)} \right\}\}\) be a nested probabilistic-numerical linguistic term set (NPNLTS), which consists of an outer-layer probabilistic linguistic term set (OPLTS) \({\text{OL}}\left( p \right)\) and an inner-layer numerical linguistic term set (INLTS) \({\text{IL}}\left( v \right)\), i.e.,

$${\text{OL}}\left( p \right) = \left\{ {{\text{OL}}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)|{\text{OL}}^{\left( k \right)} \in {\text{OS}},\;p^{\left( k \right)} \ge 0,\;k = 1,2, \ldots ,\# {\text{OL}}\left( p \right),\sum\limits_{k = 1}^{\# OL\left( p \right)} {p^{\left( k \right)} \le 1} } \right\}$$
(2)
$${\text{IL}}\left( v \right) = \left\{ {{\text{IL}}_{\left( k \right)}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)|{\text{IL}}_{\left( k \right)}^{\left( l \right)} \in {\text{IS}},\;v_{\left( k \right)}^{\left( l \right)} \ge 0,\;k = 1,2, \ldots ,\# {\text{OL}}\left( p \right),l = 1,2, \ldots ,\# {\text{IL}}\left( v \right)} \right\}$$
(3)

where \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) are called an outer-layer linguistic term set (OLTS) and an inner-layer linguistic term set (ILTS), respectively, in the nested linguistic term set (NLTS) \({\text{NS}} = \{ s_{\alpha } \left\{ {n_{\beta } } \right\}\}\). \({\text{OL}}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\) is the kth outer-layer linguistic term element (OLTE) in the OLTS associated with the probability \(p^{\left( k \right)}\), and \(\# {\text{OL}}\left( p \right)\) is the number of linguistic term elements in \({\text{OL}}\left( p \right)\). \({\text{IL}}_{\left( k \right)}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)\) is the lth inner-layer linguistic term element (ILTE) in the ILTS associated with the value \(v_{\left( k \right)}^{\left( l \right)}\) under the kth OLTE, and \(\# {\text{IL}}\left( v \right)\) is the number of linguistic term elements in \({\text{IL}}\left( v \right)\).
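For illustration, the nested structure of Definition 3 can be encoded in Python as a list of triples, one per outer-layer element, each carrying its own inner-layer set. This encoding is an assumption of ours (it is not prescribed by the paper) and is reused in the later sketches.

```python
# A sketch of the NPNLTS structure of Definition 3: each (alpha, p, inner)
# triple encodes one OLTE s_alpha(p), and inner lists the pairs (beta, v)
# standing for the ILTEs n_beta(v). The encoding is an illustrative assumption.

npn = [
    (0, 0.3, [(0, 0.7), (1, 0.5), (2, 0.8)]),  # s_0(0.3){n_0(0.7), n_1(0.5), n_2(0.8)}
    (1, 0.4, [(0, 0.8), (1, 0.6), (2, 0.8)]),  # s_1(0.4){n_0(0.8), n_1(0.6), n_2(0.8)}
    (2, 0.3, [(0, 0.9), (1, 0.7), (2, 0.7)]),  # s_2(0.3){n_0(0.9), n_1(0.7), n_2(0.7)}
]
```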

Definition 4

[29] Let \(\overline{\text{NPN}} { = }\left\{ {{\text{OL}}^{N} \left( p \right)\left\{ {{\text{IL}}^{N} \left( v \right)} \right\}} \right\}\) be a normalized NPNLTS (N-NPNLTS) that consists of a normalized OPLTS \({\text{OL}}^{N} \left( p \right)\) and a normalized INLTS \({\text{IL}}^{N} \left( v \right)\):

$${\text{OL}}^{N} \left( p \right) = \left\{ {{\text{OL}}^{N\left( k \right)} \left( {p^{N\left( k \right)} } \right)|{\text{OL}}^{N\left( k \right)} \in {\text{OS}},\;p^{N\left( k \right)} \ge 0,\;k = 1,2, \ldots ,\tau + 1,\sum\limits_{k = 1}^{\tau + 1} {p^{N\left( k \right)} = 1} } \right\}$$
(4)
$${\text{IL}}^{N} \left( v \right) = \left\{ {{\text{IL}}_{\left( k \right)}^{N\left( l \right)} \left( {v_{\left( k \right)}^{N\left( l \right)} } \right)|{\text{IL}}_{\left( k \right)}^{N\left( l \right)} \in {\text{IS}},\;1 \ge v_{\left( k \right)}^{N\left( l \right)} \ge 0,\;k = 1,2, \cdots ,\tau + 1,l = 1,2, \ldots ,\varsigma + 1} \right\}$$
(5)

where \(p^{N(k)} = {{p^{(k)} } \mathord{\left/ {\vphantom {{p^{(k)} } {\sum\nolimits_{k = 1}^{\tau + 1} {p^{(k)} } }}} \right. \kern-0pt} {\sum\nolimits_{k = 1}^{\tau + 1} {p^{(k)} } }}\) and \(v_{\left( k \right)}^{N\left( l \right)} = {{v_{\left( k \right)}^{\left( l \right)} } \mathord{\left/ {\vphantom {{v_{\left( k \right)}^{\left( l \right)} } {\sum\nolimits_{k = 1}^{\tau + 1} {v_{\left( k \right)}^{\left( l \right)} } }}} \right. \kern-0pt} {\sum\nolimits_{k = 1}^{\tau + 1} {v_{\left( k \right)}^{\left( l \right)} } }}\).
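As a small illustration of the outer-layer normalization \(p^{N(k)} = p^{(k)} / \sum\nolimits_{k} p^{(k)}\), the sketch below rescales the probabilities of the encoding introduced above. Only the probability rule is implemented; the inner-layer values are merely checked against \(0 \le v \le 1\), as in Example 1 below, where they are used as given. The function name is an illustrative assumption.

```python
# A minimal sketch of Definition 4's outer-layer normalization, assuming the
# (alpha, p, inner) encoding from the previous sketch.

def normalize_npnlts(npn):
    total_p = sum(p for _, p, _ in npn)
    if total_p <= 0:
        raise ValueError("at least one positive outer-layer probability is required")
    for _, _, inner in npn:
        if not all(0.0 <= v <= 1.0 for _, v in inner):
            raise ValueError("inner-layer values must lie in [0, 1]")
    # p^{N(k)} = p^{(k)} / sum_k p^{(k)}
    return [(alpha, p / total_p, inner) for alpha, p, inner in npn]

# Example: probabilities 0.2, 0.2 rescale to 0.5, 0.5
print(normalize_npnlts([(0, 0.2, [(0, 0.9)]), (1, 0.2, [(1, 0.4)])]))
```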

Remark 1

According to the definitions above, the OLTS and the ILTS in an NPNLTS can consist not only of ordinal variables, as in PLTSs, but also of nominal variables. Therefore, there are obviously four cases. Case 1: the elements of the OLTS and the ILTS are both ordinal variables; Case 2: the elements of the OLTS and the ILTS are ordinal variables and nominal variables, respectively; Case 3: the elements of the OLTS and the ILTS are nominal variables and ordinal variables, respectively; Case 4: the elements of the OLTS and the ILTS are both nominal variables.

Since several popular types of fuzzy sets have been used to describe complex and uncertain information, we compare NPNLTSs with the fuzzy sets presented in the Introduction. Table 1 shows the merits of NPNLTSs [29] compared with other popular types of fuzzy sets, such as HFLTSs [23] and PLTSs [22].

Table 1 The comparison of some popular fuzzy sets

Example 1

Let \({\text{OS}}\) and \({\text{IS}}\) be an OLTS and an ILTS in an NLTS, respectively, i.e.,

$${\text{OS}} = \left\{ {s_{0} :{\text{poor}}\,{\text{student}},s_{1} :{\text{average}}\,{\text{student}},s_{2} :{\text{top}}\,{\text{student}}} \right\}$$
$${\text{IS}} = \left\{ {n_{0} :{\text{attention}}\,{\text{score}},n_{1} :{\text{attitude}}\,{\text{score}},{\kern 1pt} n_{2} :{\text{intelligence}}\,{\text{score}}} \right\}$$

Then, there are two N-NPNLTSs as follows:

$$\overline{\text{NPN}}_{1} { = }\left\{ \begin{aligned} s_{0} (0.3)\left\{ {n_{0} \left( {0.7} \right),n_{1} \left( {0.5} \right),n_{2} \left( {0.8} \right)} \right\}, \hfill \\ s_{1} (0.4)\left\{ {n_{0} \left( {0.8} \right),n_{1} \left( {0.6} \right),n_{2} \left( {0.8} \right)} \right\}, \hfill \\ s_{2} (0.3)\left\{ {n_{0} \left( {0.9} \right),n_{1} \left( {0.7} \right),n_{2} \left( {0.7} \right)} \right\} \hfill \\ \end{aligned} \right\},\quad \overline{\text{NPN}}_{2} { = }\left\{ \begin{aligned} s_{0} (0.6)\left\{ {n_{0} \left( {0.7} \right),n_{1} \left( {0.5} \right),n_{2} \left( {0.8} \right)} \right\}, \hfill \\ s_{1} (0.4)\left\{ {n_{0} \left( {0.8} \right),n_{1} \left( {0.6} \right),n_{2} \left( {0.8} \right)} \right\} \hfill \\ \end{aligned} \right\}$$

where \({\text{OL}}_{1}^{N} \left( p \right)\) and \({\text{OL}}_{2}^{N} \left( p \right)\) are \({\text{OL}}_{1}^{N} \left( p \right){ = }\left\{ {s_{0} (0.3),s_{1} (0.4),s_{2} (0.3)} \right\}\), \({\text{OL}}_{2}^{N} \left( p \right){ = }\left\{ {s_{0} (0.6),s_{1} (0.4)} \right\}\) and \({\text{IL}}^{N} \left( v \right)\) is \({\text{IL}}^{N} \left( v \right) = \left\{ {\left\{ {n_{0} \left( {0.7} \right),n_{1} \left( {0.5} \right),n_{2} \left( {0.8} \right)} \right\},\left\{ {n_{0} \left( {0.8} \right),n_{1} \left( {0.6} \right),n_{2} \left( {0.8} \right)} \right\},\left\{ {n_{0} \left( {0.9} \right),n_{1} \left( {0.7} \right),n_{2} \left( {0.7} \right)} \right\}} \right\}\).

Here, for a student, \(\overline{\text{NPN}}_{1}\) means that the probabilities of being a poor student, an average student and a top student are 0.3, 0.4 and 0.3, respectively. Meanwhile, the attention, attitude and intelligence scores are 0.7, 0.5 and 0.8 for a poor student, 0.8, 0.6 and 0.8 for an average student, and 0.9, 0.7 and 0.7 for a top student, respectively. The meaning of \(\overline{\text{NPN}}_{2}\) can be interpreted similarly.

3 Distance and Similarity Measures with NPNLTSs

Since distance and similarity measures are very important for calculating the deviation and closeness degrees of different arguments, and the existing measuring methods cannot deal with NPNLTSs directly, it is necessary to study distance and similarity measures over NPNLTSs. In this section, we propose a family of distance and similarity measures between two NPNLTSs and between two collections of NPNLTSs.

3.1 Distance and Similarity Measures Between Two NPNLTSs

Inspired by the existing research analyzed in the Introduction, we carry out our investigation from two aspects: extensions of the traditional distance and similarity measures, and measures based on different types of weighted forms. For the sake of convenient calculation, all NPNLTSs in this paper are assumed to be N-NPNLTSs. In the following, we first put forward the axioms of distance and similarity measures for NPNLTSs:

Definition 5

Let \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) be an OLTS and an ILTS, respectively, and let \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) be two NPNLTSs. Then the distance measure between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is defined as \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right)\), which satisfies:

(1) \(0 \le d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) \le 1\);

(2) \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = 0\) if and only if \({\text{NPN}}_{1} = {\text{NPN}}_{2}\);

(3) \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = d\left( {{\text{NPN}}_{2} ,{\text{NPN}}_{1} } \right)\).

As the complementary concept of distance measure, the similarity measure between two NPNLTSs can be described in the next definition:

Definition 6

Let \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) be an OLTS and an ILTS, respectively, and let \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) be two NPNLTSs. Then the similarity measure between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is defined as \(\rho \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right)\), which satisfies:

(1) \(0 \le \rho \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) \le 1\);

(2) \(\rho \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = 1\) if and only if \({\text{NPN}}_{1} = {\text{NPN}}_{2}\);

(3) \(\rho \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \rho \left( {{\text{NPN}}_{2} ,{\text{NPN}}_{1} } \right)\).

These axioms are similar to those of the distance and similarity measures for HFSs and HFLTSs given by Xu [37] and Liao [17], respectively, and the three conditions of each axiom are easy to understand and essential for the definitions of the measures. It is easy to see that the relationship between the distance and similarity measures is:

$$\rho \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = 1 - d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right)$$
(6)

Hence, in this paper, we mainly discuss the distance measures of N-NPNLTSs, and the corresponding similarity measures can be obtained by Eq. (6).

In general, different NPNLTSs have different numbers of elements in their OPLTSs or INLTSs in real applications, as in Example 1. In order to operate correctly when comparing two NPNLTSs, we first propose a method to add elements to an NPNLTS under the different cases of Remark 1.

Definition 7

Let \({\text{NPN}}_{1} = \{ {\text{OL}}_{1}^{{\left( {k_{1} } \right)}} \left( {p_{1}^{{\left( {k_{1} } \right)}} } \right)\left\{ {{\text{IL}}_{1}^{{\left( {l_{1} } \right)}} \left( {v_{1}^{{\left( {l_{1} } \right)}} } \right)} \right\}\}\) and \({\text{NPN}}_{2} = \left\{ {{\text{OL}}_{2}^{{\left( {k_{2} } \right)}} \left( {p_{2}^{{\left( {k_{2} } \right)}} } \right)\left\{ {{\text{IL}}_{2}^{{\left( {l_{2} } \right)}} \left( {v_{2}^{{\left( {l_{2} } \right)}} } \right)} \right\}} \right\}\) be two NPNLTSs, where \(k_{1} = 1,2, \ldots ,\# {\text{OL}}_{1} \left( {p_{1} } \right)\), \(k_{2} = 1,2, \ldots ,\# {\text{OL}}_{2} \left( {p_{2} } \right)\), \(l_{1} = 1,2, \ldots ,\# {\text{IL}}_{1} \left( {v_{1} } \right)\) and \(l_{2} = 1,2, \ldots ,\# {\text{IL}}_{2} \left( {v_{2} } \right)\), and \(\# {\text{OL}}_{1} \left( {p_{1} } \right)\), \(\# {\text{OL}}_{2} \left( {p_{2} } \right)\), \(\# {\text{IL}}_{1} \left( {v_{1} } \right)\) and \(\# {\text{IL}}_{2} \left( {v_{2} } \right)\) are the numbers of elements in \({\text{OL}}_{1} \left( {p_{1} } \right)\), \({\text{OL}}_{2} \left( {p_{2} } \right)\), \({\text{IL}}_{1} \left( {v_{1} } \right)\) and \({\text{IL}}_{2} \left( {v_{2} } \right)\), respectively. Suppose that \(\# {\text{OL}}_{1} \left( {p_{1} } \right) < \# {\text{OL}}_{2} \left( {p_{2} } \right)\) and \(\# {\text{IL}}_{1} \left( {v_{1} } \right) < \# {\text{IL}}_{2} \left( {v_{2} } \right)\); then, for the sake of convenient calculation, we can add elements under the different cases as follows (a sketch of Case 1 appears after this list):

Case 1 Add \(\hbox{min} \left( {{\text{OL}}_{i} } \right)\) and \(\hbox{min} \left( {{\text{IL}}_{j} } \right)\), where \({\text{OL}}_{i} \in {\text{OL}}\) and \({\text{IL}}_{j} \in {\text{IL}}\), with \(p_{i} = 0\) and \(v_{j} = 0\) until \(\# {\text{OL}}_{1} \left( {p_{1} } \right) = \# {\text{OL}}_{2} \left( {p_{2} } \right)\) and \(\# {\text{IL}}_{1} \left( {v_{1} } \right) = \# {\text{IL}}_{2} \left( {v_{2} } \right)\), respectively.

Case 2 Add \(\hbox{min} \left( {{\text{OL}}_{i} } \right),{\text{OL}}_{i} \in {\text{OL}}\) with \(p_{i} = 0\) until \(\# {\text{OL}}_{1} \left( {p_{1} } \right) = \# {\text{OL}}_{2} \left( {p_{2} } \right)\) and add the missing linguistic terms in the ILTS with the estimated values until \(\# {\text{IL}}_{1} \left( {v_{1} } \right) = \# {\text{IL}}_{2} \left( {v_{2} } \right)\).

Case 3 Add the missing linguistic terms in the OLTS with the estimated probabilities until \(\# {\text{OL}}_{1} \left( {p_{1} } \right) = \# {\text{OL}}_{2} \left( {p_{2} } \right)\), and add \(\hbox{min} \left( {{\text{IL}}_{j} } \right),{\text{IL}}_{j} \in {\text{IL}}\) with \(v_{j} = 0\) until \(\# {\text{IL}}_{1} \left( {v_{1} } \right) = \# {\text{IL}}_{2} \left( {v_{2} } \right)\).

Case 4 Add the missing linguistic terms in the OLTS and the ILTS with the estimated probabilities and values until \(\# {\text{OL}}_{1} \left( {p_{1} } \right) = \# {\text{OL}}_{2} \left( {p_{2} } \right)\),\(\# {\text{IL}}_{1} \left( {v_{1} } \right) = \# {\text{IL}}_{2} \left( {v_{2} } \right)\), respectively.
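As a sketch of the Case 1 rule (both layers ordinal), assuming the (alpha, p, inner) encoding used earlier: the shorter OPLTS is extended by its minimal outer term with probability 0, whose inner set replicates that of the minimal outer term (this replication follows Example 2 below rather than an explicit statement in Definition 7), and each shorter INLTS is extended by its minimal inner term with value 0. Function names are our own.

```python
# A sketch of the Case 1 padding of Definition 7: extend both NPNLTSs until
# their numbers of outer and inner elements agree.

def pad_case1(npn1, npn2):
    def pad_inner(inner, target_len):
        beta_min = min(b for b, _ in inner)          # min(IL_j)
        return inner + [(beta_min, 0.0)] * (target_len - len(inner))

    n_outer = max(len(npn1), len(npn2))
    n_inner = max(len(inner) for _, _, inner in npn1 + npn2)

    def pad_outer(npn):
        alpha_min = min(a for a, _, _ in npn)        # min(OL_i)
        padded = [(a, p, pad_inner(inner, n_inner)) for a, p, inner in npn]
        # the added outer element replicates the inner set of the minimal
        # outer term (as in Example 2), with probability 0
        filler_inner = pad_inner(min(npn, key=lambda t: t[0])[2], n_inner)
        while len(padded) < n_outer:
            padded.append((alpha_min, 0.0, filler_inner))
        return padded

    return pad_outer(npn1), pad_outer(npn2)
```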

Then, we further give the definitions and properties of different distance measures between two NPNLTSs under different cases as follows:

Definition 8

Let \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) be an OLTS and an ILTS in an NPNLTS, respectively, where \({\text{OL}}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\) is the kth OLTE in the OLTS associated with the probability \(p^{\left( k \right)}\), \(\# {\text{OL}}\left( p \right)\) is the number of linguistic term elements in \({\text{OL}}\left( p \right)\), \({\text{IL}}_{\left( k \right)}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)\) is the lth ILTE in the ILTS associated with the value \(v_{\left( k \right)}^{\left( l \right)}\), and \(\# {\text{IL}}\left( v \right)\) is the number of linguistic term elements in \({\text{IL}}\left( v \right)\). Let \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) be two NPNLTSs on \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}\), denoted as:

$${\text{NPN}}_{1} \left( {x_{i} } \right) = \bigcup\nolimits_{{{\text{OL}}_{1}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}_{1}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)} \right\} \in {\text{NPN}}_{1} }} {\left\{ {{\text{OL}}_{1}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}_{1}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)} \right\}|k = 1,2, \ldots ,\# {\text{OL}}_{1} \left( p \right),l = 1,2, \ldots ,\# {\text{IL}}_{1} \left( v \right)} \right\}}$$
$${\text{NPN}}_{2} \left( {x_{i} } \right) = \bigcup\nolimits_{{{\text{OL}}_{2}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}_{2}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)} \right\} \in {\text{NPN}}_{2} }} {\left\{ {{\text{OL}}_{2}^{\left( k \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}_{2}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)} \right\}|k = 1,2, \ldots ,\# {\text{OL}}_{2} \left( p \right),l = 1,2, \ldots ,\# {\text{IL}}_{2} \left( v \right)} \right\}}$$

where \(\# {\text{OL}}_{1} \left( p \right) = \# {\text{OL}}_{2} \left( p \right) = \# {\text{OL}}\left( p \right)\) and \(\# {\text{IL}}_{1} \left( v \right) = \# {\text{IL}}_{2} \left( v \right) = \# {\text{IL}}\left( v \right)\). (Otherwise, we can extend the shorter one by Definition 7.)

Case 1 Suppose that the subscripts in the OLTS and the ILTS are arranged in ascending order. Inspired by Definition 5, the NPN-Hamming distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} }$$
(7)

and the NPN-Euclidean distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{e} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{2} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(8)

In addition, we can get the following NPN-generalized distance measure motivated by the generalized idea [38]:

$$d_{g} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(9)

where \(\lambda > 0\). In particular, if \(\lambda = 1\), then the above generalized distance becomes the NPN-Hamming distance; if \(\lambda = 2\), then the above generalized distance becomes the NPN-Euclidean distance.
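A minimal sketch of Eq. (9) under the earlier encoding, assuming both NPNLTSs have already been aligned (e.g., with the `pad_case1` sketch above): \(\lambda = 1\) recovers the NPN-Hamming distance of Eq. (7) and \(\lambda = 2\) the NPN-Euclidean distance of Eq. (8). The Case 2 to Case 4 variants of Eqs. (10)-(18) follow by dropping the corresponding nominal components from the deviation and adjusting the divisor.

```python
# A sketch of the Case 1 NPN-generalized distance of Eq. (9); names and the
# (alpha, p, inner) encoding are illustrative assumptions.

def d_generalized(npn1, npn2, tau, varsigma, lam=1.0):
    n_outer, n_inner = len(npn1), len(npn1[0][2])
    total = 0.0
    for (a1, p1, in1), (a2, p2, in2) in zip(npn1, npn2):
        for (b1, v1), (b2, v2) in zip(in1, in2):
            # deviation of Eq. (9): outer term, probability, inner term and
            # value, normalized and averaged by the factor 1/4
            dev = (abs(a1 - a2) / (tau + 1) + abs(p1 - p2)
                   + abs(b1 - b2) / (varsigma + 1) + abs(v1 - v2)) / 4
            total += dev ** lam
    return (total / (n_outer * n_inner)) ** (1.0 / lam)
```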

Case 2 Suppose that the subscripts in the OLTS are arranged in ascending order. The NPN-Hamming distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} }$$
(10)

and the NPN-Euclidean distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{e} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} \right)^{2} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(11)

and the NPN-generalized distance measure is defined as:

$$d_{g} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(12)

where \(\lambda > 0\). In particular, if \(\lambda = 1\), then the above generalized distance becomes the NPN-Hamming distance; if \(\lambda = 2\), then the above generalized distance becomes the NPN-Euclidean distance.

Case 3 Suppose that the subscripts in the ILTS are arranged in ascending order. The NPN-Hamming distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} }$$
(13)

and the NPN-Euclidean distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{e} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {IL_{1}^{\left( l \right)} - IL_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {IL_{1}^{\left( l \right)} - IL_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} \right)^{2} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(14)

and the NPN-generalized distance measure is defined as:

$$d_{g} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 3}} \right. \kern-0pt} 3}} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(15)

where \(\lambda > 0\). In particular, if \(\lambda = 1\), then the above generalized distance becomes the NPN-Hamming distance; if \(\lambda = 2\), then the above generalized distance becomes the NPN-Euclidean distance.

Case 4 The NPN-Hamming distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 2}} \right. \kern-0pt} 2}} }$$
(16)

and the NPN-Euclidean distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{e} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 2}} \right. \kern-0pt} 2}} \right)^{2} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(17)

and the NPN-generalized distance measure is defined as:

$$d_{g} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 2}} \right. \kern-0pt} 2}} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(18)

where \(\lambda > 0\). In particular, if \(\lambda = 1\), then the above generalized distance becomes the NPN-Hamming distance; if \(\lambda = 2\), then the above generalized distance becomes the NPN-Euclidean distance.

Remark 2

Since the ILTSs characterize the corresponding OLTSs, \({\text{IL}}_{\left( k \right)}^{\left( l \right)} \left( {v_{\left( k \right)}^{\left( l \right)} } \right)\) is invariant for a given \({\text{OL}}^{\left( k \right)}\); that is to say, if \({\text{OL}}_{1}^{\left( k \right)} = {\text{OL}}_{2}^{\left( k \right)}\), then \({\text{IL}}_{1}^{\left( l \right)} \left( {v_{1}^{\left( l \right)} } \right) = {\text{IL}}_{2}^{\left( l \right)} \left( {v_{2}^{\left( l \right)} } \right)\). Hence, under Case 3 and Case 4, the NPN-Hamming distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) reduces to:

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right|} }$$
(19)

and the NPN-Euclidean distance of \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) reduces to:

$$d_{e} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right|} \right)^{2} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(20)

and the NPN-generalized distance measure reduces to:

$$d_{g} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right|} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(21)

where \(\lambda > 0\). In particular, if \(\lambda = 1\), then the above generalized distance becomes the NPN-Hamming distance; if \(\lambda = 2\), then the above generalized distance becomes the NPN-Euclidean distance.

Theorem 1

Let \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) be two NPNLTSs on the universe \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}\), defined by \({\text{OL}}_{1} \left( p \right)\), \({\text{IL}}_{1} \left( v \right)\) and \({\text{OL}}_{2} \left( p \right)\), \({\text{IL}}_{2} \left( v \right)\), where \({\text{OL}}_{1} \left( p \right)\), \({\text{OL}}_{2} \left( p \right)\) and \({\text{IL}}_{1} \left( v \right)\), \({\text{IL}}_{2} \left( v \right)\) are their OPLTSs and INLTSs, respectively. Then, the family of distance measures \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right)\) defined by Eqs. (7)-(18) satisfies the following conditions:

(1) \(0 \le d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) \le 1\).

(2) \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = d\left( {{\text{NPN}}_{2} ,{\text{NPN}}_{1} } \right)\).

(3) \(d\left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = 0 \Leftrightarrow {\text{NPN}}_{1} = {\text{NPN}}_{2}\).

Proof

Here, we take Eq. (7) as an example to prove the properties above; Eqs. (8)-(18) can be verified in a similar way. From Eq. (7), we have

$$d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) - {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( {x_{i} } \right) - p_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) - {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( {x_{i} } \right) - v_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) - {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( {x_{i} } \right) - p_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) - {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( {x_{i} } \right) - v_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} }$$
(1) Since \(0 \le {\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) \le \tau\) and \(0 \le {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right) \le \tau\), we have \(0 \le \left| {{\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) - {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| \le \tau < \tau + 1\).

Similarly, since \(0 \le {\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) \le \varsigma\) and \(0 \le {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right) \le \varsigma\), we have \(0 \le \left| {{\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) - {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right| \le \varsigma < \varsigma + 1\).

Because \(0 \le p_{1}^{\left( k \right)} \left( {x_{i} } \right) \le 1,0 \le p_{2}^{\left( k \right)} \left( {x_{i} } \right) \le 1,0 \le v_{1}^{\left( l \right)} \left( {x_{i} } \right) \le 1\) and \(0 \le v_{2}^{\left( l \right)} \left( {x_{i} } \right) \le 1\), we have \(0 \le \left| {p_{1}^{\left( k \right)} \left( {x_{i} } \right) - p_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| \le 1\) and \(0 \le \left| {v_{1}^{\left( l \right)} \left( {x_{i} } \right) - v_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right| \le 1\). Hence each summand in Eq. (7), after division by 4, lies in \(\left[ {0,1} \right]\).

Thus, \(0 \le d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) \le 1\).

(2) The symmetry of \(d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right)\) with respect to its arguments is obvious. Thus, \(d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = d_{h} \left( {{\text{NPN}}_{2} \left( {x_{i} } \right),{\text{NPN}}_{1} \left( {x_{i} } \right)} \right)\).

(3) We have

    $$\begin{aligned} & d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = 0 \Leftrightarrow \left\{ {\begin{array}{*{20}l} {\left| {{\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) - {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| = 0,\forall x_{i} \in X} \\ {\left| {p_{1}^{\left( k \right)} \left( {x_{i} } \right) - p_{2}^{\left( k \right)} \left( {x_{i} } \right)} \right| = 0,\forall x_{i} \in X} \\ {\left| {{\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) - {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right| = 0,\forall x_{i} \in X} \\ {\left| {v_{1}^{\left( l \right)} \left( {x_{i} } \right) - v_{2}^{\left( l \right)} \left( {x_{i} } \right)} \right| = 0,\forall x_{i} \in X} \\ \end{array} } \right. \\ & \quad \Leftrightarrow {\text{OL}}_{1}^{\left( k \right)} \left( {x_{i} } \right) = {\text{OL}}_{2}^{\left( k \right)} \left( {x_{i} } \right),p_{1}^{\left( k \right)} \left( {x_{i} } \right) = p_{2}^{\left( k \right)} \left( {x_{i} } \right),{\text{IL}}_{1}^{\left( l \right)} \left( {x_{i} } \right) = {\text{IL}}_{2}^{\left( l \right)} \left( {x_{i} } \right),v_{1}^{\left( l \right)} \left( {x_{i} } \right) = v_{2}^{\left( l \right)} \left( {x_{i} } \right),\forall x_{i} \in X \\ {\kern 1pt} & \quad \Leftrightarrow {\text{NPN}}_{1} \left( {x_{i} } \right) = {\text{NPN}}_{2} \left( {x_{i} } \right). \\ \end{aligned}$$

Therefore, \(d_{h} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right)\) is a distance measure between NPNLTSs, and so are the other distance measures defined by Eqs. (8)-(18). □

Example 2

Let \({\text{NPN}}_{1} { = }\left\{ \begin{aligned}& s_{0} (0.6)\left\{ {n_{0} \left( {0.1} \right)} \right\}, \hfill \\& s_{1} (0.4)\left\{ {n_{0} \left( {0.3} \right),n_{1} \left( {0.5} \right)} \right\} \hfill \\ \end{aligned} \right\}\) and \({\text{NPN}}_{2} { = }\left\{ {s_{1} (1)\left\{ {n_{0} \left( {0.3} \right),n_{1} \left( {0.5} \right)} \right\}} \right\}\) be two NPNLTSs under Case 1. Then, we add the elements by Definition 7 to extend \({\text{NPN}}_{1} ,{\text{NPN}}_{2}\) as \({\text{NPN}}_{1} { = }\left\{ \begin{aligned}& s_{0} (0.6)\left\{ {n_{0} \left( {0.1} \right),n_{0} \left( 0 \right)} \right\}, \hfill \\& s_{1} (0.4)\left\{ {n_{0} \left( {0.3} \right),n_{1} \left( {0.5} \right)} \right\} \hfill \\ \end{aligned} \right\}\) and \({\text{NPN}}_{2} { = }\left\{ \begin{aligned}& s_{1} (1)\left\{ {n_{0} \left( {0.3} \right),n_{1} \left( {0.5} \right)} \right\}, \hfill \\& s_{1} (0)\left\{ {n_{0} \left( {0.3} \right),n_{1} \left( {0.5} \right)} \right\} \hfill \\ \end{aligned} \right\}\), respectively. Thus, the generalized distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{g} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\frac{ 1}{4} \times \left( \begin{aligned} \left( {\frac{{\frac{{\left| {0 - 1} \right|}}{2} + \left| {0.6 - 1} \right| + \frac{{\left| {0 - 0} \right|}}{2} + \left| {0.1 - 0.3} \right|}}{4}} \right)^{\lambda } + \left( {\frac{{\frac{{\left| {0 - 1} \right|}}{2} + \left| {0.6 - 1} \right| + \frac{{\left| {0 - 1} \right|}}{2} + \left| {0 - 0.5} \right|}}{4}} \right)^{\lambda } \hfill \\ + \left( {\frac{{\frac{{\left| {1 - 1} \right|}}{2} + \left| {0.4 - 0} \right| + \frac{{\left| {0 - 0} \right|}}{2} + \left| {0.3 - 0.3} \right|}}{4}} \right)^{\lambda } + \left( {\frac{{\frac{{\left| {1 - 1} \right|}}{2} + \left| {0.4 - 0} \right| + \frac{{\left| {1 - 1} \right|}}{2} + \left| {0.5 - 0.5} \right|}}{4}} \right)^{\lambda } \hfill \\ \end{aligned} \right)} \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

If \(\lambda = 1\), then the NPN-Hamming distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{h} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \frac{1}{4} \times \left( {\frac{1.1}{4} + \frac{1.9}{4} + \frac{0.4}{4} + \frac{0.4}{4}} \right) = 0.2375$$

If \(\lambda = 2\), then the NPN-Euclidean distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{e} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\frac{1}{4} \times \left( {\left( {\frac{1.1}{4}} \right)^{2} + \left( {\frac{1.9}{4}} \right)^{2} + \left( {\frac{0.4}{4}} \right)^{2} + \left( {\frac{0.4}{4}} \right)^{2} } \right)} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} = 0.2834$$
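For a quick numerical check, the `pad_case1` and `d_generalized` sketches given earlier reproduce the values of Example 2 (with \(\tau = 1\) and \(\varsigma = 1\)):

```python
# Reproducing Example 2 with the earlier sketches.
npn1 = [(0, 0.6, [(0, 0.1)]),                 # s_0(0.6){n_0(0.1)}
        (1, 0.4, [(0, 0.3), (1, 0.5)])]       # s_1(0.4){n_0(0.3), n_1(0.5)}
npn2 = [(1, 1.0, [(0, 0.3), (1, 0.5)])]       # s_1(1){n_0(0.3), n_1(0.5)}

a, b = pad_case1(npn1, npn2)                  # Definition 7, Case 1
print(round(d_generalized(a, b, tau=1, varsigma=1, lam=1.0), 4))  # 0.2375
print(round(d_generalized(a, b, tau=1, varsigma=1, lam=2.0), 4))  # 0.2834
```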

Next, we mainly discuss other distance measures under Case 1; the other cases can be deduced from Definition 8. The generalized NPN-Hausdorff distance measure is defined as:

$$d_{gh} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$
(22)

where \(\lambda > 0\).

In particular, if \(\lambda = 1\), then the above generalized NPN-Hausdorff distance becomes the NPN-Hamming-Hausdorff distance:

$$d_{hgh} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# OL \\ l = 1,2, \ldots ,\# IL \end{subarray} } {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}$$
(23)

If \(\lambda = 2\), then the generalized NPN-Hausdorff distance becomes the NPN-Euclidean–Hausdorff distance:

$$d_{egh} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{2} } \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}}$$
(24)
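Under the same aligned encoding, a sketch of the generalized NPN-Hausdorff distance of Eq. (22) replaces the averaged sum of Eq. (9) by a maximum over all \(k,l\); \(\lambda = 1\) and \(\lambda = 2\) yield Eqs. (23) and (24), respectively. The name is an illustrative assumption.

```python
# A sketch of the generalized NPN-Hausdorff distance of Eq. (22) under Case 1.

def d_gen_hausdorff(npn1, npn2, tau, varsigma, lam=1.0):
    worst = 0.0
    for (a1, p1, in1), (a2, p2, in2) in zip(npn1, npn2):
        for (b1, v1), (b2, v2) in zip(in1, in2):
            dev = (abs(a1 - a2) / (tau + 1) + abs(p1 - p2)
                   + abs(b1 - b2) / (varsigma + 1) + abs(v1 - v2)) / 4
            worst = max(worst, dev ** lam)   # take the maximal deviation
    return worst ** (1.0 / lam)
```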

The hybrid NPN-Hamming distance between \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{hh} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \frac{1}{2}\left( \begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} } \\ & + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4} \\ \end{aligned} \right)$$
(25)

The hybrid NPN-Euclidean distance between \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{he} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{2}\left( \begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{2} } } \\ & + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{2} \\ \end{aligned} \right)} \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 2}}\right.\kern-0pt} \!\lower0.7ex\hbox{$2$}}}}$$
(26)

The generalized hybrid distance between \({\text{NPN}}_{1} \left( {x_{i} } \right)\) and \({\text{NPN}}_{2} \left( {x_{i} } \right)\) is defined as:

$$d_{hg} \left( {{\text{NPN}}_{1} \left( {x_{i} } \right),{\text{NPN}}_{2} \left( {x_{i} } \right)} \right) = \left( {\frac{1}{2}\left( \begin{aligned} \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{\# OL} {\sum\limits_{l = 1}^{\# IL} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } \hfill \\ + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} - {\text{OL}}_{2}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} - p_{2}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} - {\text{IL}}_{2}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} - v_{2}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } \hfill \\ \end{aligned} \right)} \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 \lambda }}\right.\kern-0pt} \!\lower0.7ex\hbox{$\lambda $}}}}$$
(27)

where \(\lambda > 0\).

It is easy to prove that Eqs. (22)–(27) also satisfy the properties in Theorem 1, so the proofs are omitted here.

Example 3

(Continued from Example 2) According to Eq. (22), the generalized NPN-Hausdorff distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{gh} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\hbox{max} \left\{ \begin{aligned} \left( {\frac{{\frac{{\left| {0 - 1} \right|}}{2} + \left| {0.6 - 1} \right| + \frac{{\left| {0 - 0} \right|}}{2} + \left| {0.1 - 0.3} \right|}}{4}} \right)^{\lambda } ,\left( {\frac{{\frac{{\left| {0 - 1} \right|}}{2} + \left| {0.6 - 1} \right| + \frac{{\left| {0 - 1} \right|}}{2} + \left| {0 - 0.5} \right|}}{4}} \right)^{\lambda } , \hfill \\ \left( {\frac{{\frac{{\left| {1 - 1} \right|}}{2} + \left| {0.4 - 0} \right| + \frac{{\left| {0 - 0} \right|}}{2} + \left| {0.3 - 0.3} \right|}}{4}} \right)^{\lambda } ,\left( {\frac{{\frac{{\left| {1 - 1} \right|}}{2} + \left| {0.4 - 0} \right| + \frac{{\left| {1 - 1} \right|}}{2} + \left| {0.5 - 0.5} \right|}}{4}} \right)^{\lambda } \hfill \\ \end{aligned} \right\}} \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

If \(\lambda = 1\), then the NPN-Hamming–Hausdorff distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{hgh} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \hbox{max} \left\{ {\frac{1.1}{4},\frac{1.9}{4},\frac{0.4}{4},\frac{0.4}{4}} \right\} = 0.475$$

If \(\lambda = 2\), then the NPN-Euclidean–Hausdorff distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{egh} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\hbox{max} \left\{ {\left( {\frac{1.1}{4}} \right)^{2} ,\left( {\frac{1.9}{4}} \right)^{2} ,\left( {\frac{0.4}{4}} \right)^{2} ,\left( {\frac{0.4}{4}} \right)^{2} } \right\}} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} = 0.475$$

The generalized hybrid distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{hg} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\frac{1}{2} \times \left( {\frac{ 1}{4} \times \left( {\left( {\frac{1.1}{4}} \right)^{\lambda } + \left( {\frac{1.9}{4}} \right)^{\lambda } + \left( {\frac{0.4}{4}} \right)^{\lambda } + \left( {\frac{0.4}{4}} \right)^{\lambda } } \right) + \hbox{max} \left\{ {\left( {\frac{1.1}{4}} \right)^{\lambda } ,\left( {\frac{1.9}{4}} \right)^{\lambda } ,\left( {\frac{0.4}{4}} \right)^{\lambda } ,\left( {\frac{0.4}{4}} \right)^{\lambda } } \right\}} \right)} \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

If \(\lambda = 1\), then the hybrid NPN-Hamming distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{hh} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \frac{1}{2} \times \left( {0.2375 + 0.475} \right) = 0.3563$$

If \(\lambda = 2\), then the hybrid NPN-Euclidean distance between \({\text{NPN}}_{1}\) and \({\text{NPN}}_{2}\) is:

$$d_{he} \left( {{\text{NPN}}_{1} ,{\text{NPN}}_{2} } \right) = \left( {\frac{1}{2} \times \left( {0.0803 + 0.2256} \right)} \right)^{{{1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-0pt} 2}}} = 0.3911$$
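
To make the computations in Example 3 easy to verify, the following is a minimal Python sketch (ours, not part of the original paper) that encodes the four \((k,l)\) deviation cells read off from the expansion of \(d_{gh}\) above and evaluates Eqs. (23), (25) and (26); all variable names (`cells`, `delta`, and so on) are hypothetical.

```python
# A minimal sketch (not from the original paper) reproducing Example 3.
# Each cell holds (OL1, OL2, p1, p2, IL1, IL2, v1, v2) for one (k, l) pair,
# read off from the expansion of d_gh above; here tau = varsigma = 1, so
# tau + 1 = varsigma + 1 = 2.
tau, varsigma = 1, 1

cells = [
    (0, 1, 0.6, 1.0, 0, 0, 0.1, 0.3),  # k = 1, l = 1  ->  1.1 / 4
    (0, 1, 0.6, 1.0, 0, 1, 0.0, 0.5),  # k = 1, l = 2  ->  1.9 / 4
    (1, 1, 0.4, 0.0, 0, 0, 0.3, 0.3),  # k = 2, l = 1  ->  0.4 / 4
    (1, 1, 0.4, 0.0, 1, 1, 0.5, 0.5),  # k = 2, l = 2  ->  0.4 / 4
]

def delta(c):
    """The bracketed deviation term of Eqs. (22)-(27), divided by 4."""
    ol1, ol2, p1, p2, il1, il2, v1, v2 = c
    return (abs(ol1 - ol2) / (tau + 1) + abs(p1 - p2)
            + abs(il1 - il2) / (varsigma + 1) + abs(v1 - v2)) / 4

ds = [delta(c) for c in cells]
mean = lambda xs: sum(xs) / len(xs)

d_hgh = max(ds)                                                # Eq. (23): ~0.475
d_hh = 0.5 * (mean(ds) + max(ds))                              # Eq. (25): ~0.3563
d_he = (0.5 * (mean([d**2 for d in ds]) + max(ds)**2)) ** 0.5  # Eq. (26): ~0.3911
print(round(d_hgh, 4), round(d_hh, 4), round(d_he, 4))
```

Running this reproduces, up to rounding, the values 0.475, 0.3563 and 0.3911 computed by hand above.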

3.2 Distance and Similarity Measures Between Two Collections of NPNLTSs

In Sect. 3.1, we considered the distance and similarity measures of NPNLTSs over a single aspect. However, in many real applications such as multi-attribute decision making, the alternatives are often evaluated with respect to different attributes, and the weighting information of the attributes is also very important. Therefore, all aspects and the corresponding weights need to be considered. Since the evaluation information of the alternatives is often represented by several collections of NPNLTSs, in this subsection we mainly study the generalized distance and similarity measures between two collections of NPNLTSs; the corresponding Hamming and Euclidean distances are obtained when the parameter \(\lambda = 1\) and \(\lambda = 2\), respectively.

3.2.1 Distance and Similarity Measures Between Two Collections of NPNLTSs in Discrete Case

Let \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) be OLTS and ILTS in the NLTS. For two collections of NPNLTSs \({\mathbf{NPN}}_{1} = \left\{ {{\text{NPN}}_{11} ,{\text{NPN}}_{12} , \ldots ,{\text{NPN}}_{1m} } \right\}\) and \({\mathbf{NPN}}_{2} = \left\{ {{\text{NPN}}_{21} ,{\text{NPN}}_{22} , \ldots ,{\text{NPN}}_{2m} } \right\}\) with the associated weighting vector \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{m} } \right)^{T}\), where \(0 \le \omega_{j} \le 1\) and \(\sum\nolimits_{j = 1}^{m} {\omega_{j} } = 1\), the generalized weighted distance measure between \({\mathbf{NPN}}_{{\mathbf{1}}}\) and \({\mathbf{NPN}}_{{\mathbf{2}}}\) is defined as:

$$d_{gw} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\sum\limits_{j = 1}^{m} {\frac{{\omega_{j} }}{{\# {\text{OL}} \times \# {\text{IL}}}}} \sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

and the generalized weighted NPN-Hausdorff distance measure is defined as:

$$d_{gwh} \left( {{\mathbf{NPN}}_{{\mathbf{1}}} ,{\mathbf{NPN}}_{{\mathbf{2}}} } \right) = \left( {\sum\limits_{j = 1}^{m} {\omega_{j} } \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# OL \\ l = 1,2, \ldots ,\# IL \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\).

Similarly, we can derive some hybrid weighted distance measures by combining the above distance measures. For example, the generalized hybrid weighted distance is as follows:

$$d_{ghw} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\sum\limits_{j = 1}^{m} {\frac{{\omega_{j} }}{2}} \left( \begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } \\ & + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1j}^{\left( k \right)} - {\text{OL}}_{2j}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1j}^{\left( k \right)} - p_{2j}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1j}^{\left( l \right)} - {\text{IL}}_{2j}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1j}^{\left( l \right)} - v_{2j}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } \\ \end{aligned} \right)} \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 \lambda }}\right.\kern-0pt} \!\lower0.7ex\hbox{$\lambda $}}}}$$

where \(\lambda > 0\).
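
As a sketch of how this discrete weighted family could be implemented, the following Python functions take, for each attribute \(j\), the list of per-\((k,l)\) deviations (the bracketed term divided by 4, as computed by `delta` in the previous sketch) together with the weight \(\omega_{j}\); the names `d_gw`, `d_gwh` and `d_ghw` mirror the subscripts used above but are otherwise our own.

```python
# A sketch of the discrete weighted family (names assumed, not from the paper).
# deltas[j] lists the per-(k, l) deviations for attribute j (cf. delta above);
# weights[j] is the attribute weight omega_j, with the weights summing to 1.
mean = lambda xs: sum(xs) / len(xs)

def d_gw(deltas, weights, lam=1.0):
    """Generalized weighted distance between two collections of NPNLTSs."""
    return sum(w * mean([d**lam for d in ds])
               for w, ds in zip(weights, deltas)) ** (1 / lam)

def d_gwh(deltas, weights, lam=1.0):
    """Generalized weighted NPN-Hausdorff distance (max over the (k, l) cells)."""
    return sum(w * max(ds) ** lam for w, ds in zip(weights, deltas)) ** (1 / lam)

def d_ghw(deltas, weights, lam=1.0):
    """Generalized hybrid weighted distance: averages the two terms above."""
    return sum(0.5 * w * (mean([d**lam for d in ds]) + max(ds) ** lam)
               for w, ds in zip(weights, deltas)) ** (1 / lam)
```

With a single attribute of weight 1 and the `ds` list from the previous sketch, `d_ghw([ds], [1.0], lam=1)` recovers the hybrid NPN-Hamming value 0.3563 computed in Example 3.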

3.2.2 Distance and Similarity Measures Between Two Collections of NPNLTSs in Continuous Case

Let \(x \in \left[ {a,b} \right]\), and let the weight of \(x\) be \(\omega \left( x \right)\), where \(\omega \left( x \right) \in \left[ {0,1} \right]\) and \(\int_{a}^{b} {\omega \left( x \right){\text{d}}x = 1}\). Let \({\text{OS}} = \{ s_{\alpha } |\alpha = 0,1,2, \ldots ,\tau \}\) and \({\text{IS}} = \{ n_{\beta } |\beta = 0,1,2, \ldots ,\varsigma \}\) be OLTS and ILTS in the NLTS. For two collections of NPNLTSs \({\mathbf{NPN}}_{1} = \left\{ {{\text{NPN}}_{11} ,{\text{NPN}}_{12} , \ldots ,{\text{NPN}}_{1m} } \right\}\) and \({\mathbf{NPN}}_{2} = \left\{ {{\text{NPN}}_{21} ,{\text{NPN}}_{22} , \ldots ,{\text{NPN}}_{2m} } \right\}\) over the element \(x\), in analogy to the above analysis, we introduce the generalized continuous weighted distance measure between \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{2}\) as follows:

$$d_{gcw} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\int\limits_{a}^{b} {\frac{\omega \left( x \right)}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } {\text{d}}x} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\). If \(\omega \left( x \right) = {1 \mathord{\left/ {\vphantom {1 {\left( {b - a} \right)}}} \right. \kern-0pt} {\left( {b - a} \right)}},\forall x \in \left[ {a,b} \right]\), then the above equation reduces to the generalized continuous normalized distance measure between two collections of NPNLTSs:

$$d_{gcn} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\frac{1}{b - a}\int\limits_{a}^{b} {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } {\text{d}}x} } } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\).

The generalized continuous weighted NPN-Hausdorff distance measure between two collections of NPNLTSs \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{2}\) is:

$$d_{gcwh} \left( {{\mathbf{NPN}}_{{\mathbf{1}}} ,{\mathbf{NPN}}_{{\mathbf{2}}} } \right) = \left( {\int\limits_{a}^{b} {\omega \left( x \right)\mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } {\text{d}}x} } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\).

If \(\omega \left( x \right) = {1 \mathord{\left/ {\vphantom {1 {\left( {b - a} \right)}}} \right. \kern-0pt} {\left( {b - a} \right)}},\forall x \in \left[ {a,b} \right]\), then the generalized continuous normalized NPN-Hausdorff distance measure between two collections of NPNLTSs \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{2}\) is obtained:

$$d_{gcnh} \left( {{\mathbf{NPN}}_{{\mathbf{1}}} ,{\mathbf{NPN}}_{{\mathbf{2}}} } \right) = \left( {\frac{1}{b - a}\int_{a}^{b} {\mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)}^{\lambda } {\text{d}}x} \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\).

Naturally, a generalized hybrid continuous weighted distance between two collections of NPNLTSs \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{ 2}\) is shown below:

$$d_{ghcw} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\int_{a}^{b} {\frac{\omega \left( x \right)}{2}\left( \begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{\# IL} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } \\ & + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } \\ \end{aligned} \right){\text{d}}x} } \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 \lambda }}\right.\kern-0pt} \!\lower0.7ex\hbox{$\lambda $}}}}$$

where \(\lambda > 0\).

If \(\omega \left( x \right) = {1 \mathord{\left/ {\vphantom {1 {\left( {b - a} \right)}}} \right. \kern-0pt} {\left( {b - a} \right)}},\forall x \in \left[ {a,b} \right]\), then a generalized hybrid continuous normalized distance measure between two collections of NPNLTSs \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{2}\) is:

$$d_{ghcn} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\frac{1}{{2\left( {b - a} \right)}}\int_{a}^{b} {\left( \begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } \\ & + \,\mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1}^{\left( k \right)} \left( x \right) - {\text{OL}}_{2}^{\left( k \right)} \left( x \right)} \right|}}{\tau + 1} + \left| {p_{1}^{\left( k \right)} \left( x \right) - p_{2}^{\left( k \right)} \left( x \right)} \right| + \frac{{\left| {{\text{IL}}_{1}^{\left( l \right)} \left( x \right) - {\text{IL}}_{2}^{\left( l \right)} \left( x \right)} \right|}}{\varsigma + 1} + \left| {v_{1}^{\left( l \right)} \left( x \right) - v_{2}^{\left( l \right)} \left( x \right)} \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } \\ \end{aligned} \right)} {\text{d}}x} \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 \lambda }}\right.\kern-0pt} \!\lower0.7ex\hbox{$\lambda $}}}}$$

where \(\lambda > 0\).
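
In practice the continuous case has to be discretized. The sketch below approximates \(d_{gcn}\) and \(d_{gcnh}\) with a midpoint rule, assuming a user-supplied function `deltas_of(x)` that returns the per-\((k,l)\) deviations at \(x\); the paper leaves this deviation profile abstract, so the interface and names here are hypothetical.

```python
# A sketch of the continuous normalized measures (assumed interface):
# deltas_of(x) returns the list of per-(k, l) deviations at x, and the
# integral over [a, b] with omega(x) = 1/(b - a) is approximated by a
# midpoint rule with `steps` subintervals.
mean = lambda xs: sum(xs) / len(xs)

def d_gcn(deltas_of, a, b, lam=1.0, steps=1000):
    h = (b - a) / steps
    acc = sum(mean([d**lam for d in deltas_of(a + (i + 0.5) * h)]) * h
              for i in range(steps))
    return (acc / (b - a)) ** (1 / lam)

def d_gcnh(deltas_of, a, b, lam=1.0, steps=1000):
    h = (b - a) / steps
    acc = sum(max(deltas_of(a + (i + 0.5) * h)) ** lam * h for i in range(steps))
    return (acc / (b - a)) ** (1 / lam)
```

For a non-uniform weight density, the factor \(1/(b-a)\) would simply be replaced by \(\omega(x)\) evaluated at each midpoint inside the sum.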

3.2.3 Ordered Weighted Distance and Similarity Measures Between Two Collections of NPNLTSs

Since the ordered weighted distance can alleviate or intensify the influence of unduly large or small deviations on the aggregation results by assigning them low or high weights, it is very useful in realistic decision-making problems. Therefore, in this subsection, we mainly consider the ordered weighted distance measures in the context of NPNLTSs.

Motivated by Liao [13], the generalized ordered weighted distance between two collections of NPNLTSs \({\mathbf{NPN}}_{1}\) and \({\mathbf{NPN}}_{2}\) can be obtained as follows:

$$d_{gow} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\sum\limits_{j = 1}^{m} {\omega_{j} } \left( {\frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{1\sigma \left( j \right)}^{\left( k \right)} - {\text{OL}}_{2\sigma \left( j \right)}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1\sigma \left( j \right)}^{\left( k \right)} - p_{2\sigma \left( j \right)}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1\sigma \left( j \right)}^{\left( l \right)} - {\text{IL}}_{2\sigma \left( j \right)}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1\sigma \left( j \right)}^{\left( l \right)} - v_{2\sigma \left( j \right)}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{1\sigma \left( j \right)}^{\left( k \right)} - {\text{OL}}_{2\sigma \left( j \right)}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1\sigma \left( j \right)}^{\left( k \right)} - p_{2\sigma \left( j \right)}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1\sigma \left( j \right)}^{\left( l \right)} - {\text{IL}}_{2\sigma \left( j \right)}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1\sigma \left( j \right)}^{\left( l \right)} - v_{2\sigma \left( j \right)}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } } \right)} \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\) and \(\sigma \left( j \right):\left( {1,2, \ldots ,m} \right) \to \left( {1,2, \ldots ,m} \right)\) is a permutation such that, for \(j = 1,2, \ldots ,m - 1\):

$$\begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{1\sigma \left( j \right)}^{\left( k \right)} - {\text{OL}}_{2\sigma \left( j \right)}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{1\sigma \left( j \right)}^{\left( k \right)} - p_{2\sigma \left( j \right)}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{1\sigma \left( j \right)}^{\left( l \right)} - {\text{IL}}_{2\sigma \left( j \right)}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{1\sigma \left( j \right)}^{\left( l \right)} - v_{2\sigma \left( j \right)}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } } } \\ & \ge \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\sigma \left( {j + 1} \right)}}^{\left( k \right)} - {\text{OL}}_{{2\sigma \left( {j + 1} \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\sigma \left( {j + 1} \right)}}^{\left( k \right)} - p_{{2\sigma \left( {j + 1} \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\sigma \left( {j + 1} \right)}}^{\left( l \right)} - {\text{IL}}_{{2\sigma \left( {j + 1} \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\sigma \left( {j + 1} \right)}}^{\left( l \right)} - v_{{2\sigma \left( {j + 1} \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } } } \\ \end{aligned}$$

and the generalized ordered weighted NPN-Hausdorff distance measure is defined as:

$$d_{gowh} \left( {{\mathbf{NPN}}_{{\mathbf{1}}} ,{\mathbf{NPN}}_{{\mathbf{2}}} } \right) = \left( {\sum\limits_{j = 1}^{m} {\omega_{j} } \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } \right)^{{{1 \mathord{\left/ {\vphantom {1 \lambda }} \right. \kern-0pt} \lambda }}}$$

where \(\lambda > 0\) and \(\dot{\sigma }\left( j \right):\left( {1,2, \ldots ,m} \right) \to \left( {1,2, \ldots ,m} \right)\) is a permutation such that, for \(j = 1,2, \ldots ,m - 1\):

$$\begin{aligned} & \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\dot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\dot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\dot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\dot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } \\ & \ge \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\dot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - {\text{OL}}_{{2\dot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\dot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - p_{{2\dot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\dot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - {\text{IL}}_{{2\dot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\dot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - v_{{2\dot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } \\ \end{aligned}$$

Certainly, we can derive some hybrid ordered weighted distance measures by combining the above distance measures. For example, the generalized hybrid ordered weighted distance is:

$$d_{ghow} \left( {{\mathbf{NPN}}_{1} ,{\mathbf{NPN}}_{2} } \right) = \left( {\sum\limits_{j = 1}^{m} {\frac{{\omega_{j} }}{2}} \left( \begin{aligned} & \frac{1}{\# OL \times \# IL}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {{{\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } } } \\ & + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {{{\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \mathord{\left/ {\vphantom {{\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} 4}} \right. \kern-0pt} 4}} \right)^{\lambda } \\ \end{aligned} \right)} \right)^{{{\raise0.7ex\hbox{$1$} \!\mathord{\left/ {\vphantom {1 \lambda }}\right.\kern-0pt} \!\lower0.7ex\hbox{$\lambda $}}}}$$

where \(\lambda > 0\) and \(\ddot{\sigma }\left( j \right):\left( {1,2, \ldots ,m} \right) \to \left( {1,2, \ldots ,m} \right)\) is a permutation such that, for \(j = 1,2, \ldots ,m - 1\):

$$\begin{aligned} & \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } } } \\ & \quad + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( j \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( j \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( j \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( j \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } \\ & \ge \frac{1}{{\# {\text{OL}} \times \# {\text{IL}}}}\sum\limits_{k = 1}^{{\# {\text{OL}}}} {\sum\limits_{l = 1}^{{\# {\text{IL}}}} {\left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } } } \\ & \quad + \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}} \\ l = 1,2, \ldots ,\# {\text{IL}} \end{subarray} } \left( {\frac{1}{4}\left( {\frac{{\left| {{\text{OL}}_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - {\text{OL}}_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right|}}{\tau + 1} + \left| {p_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} - p_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( k \right)} } \right| + \frac{{\left| {{\text{IL}}_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - {\text{IL}}_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|}}{\varsigma + 1} + \left| {v_{{1\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} - v_{{2\ddot{\sigma }\left( {j + 1} \right)}}^{\left( l \right)} } \right|} \right)} \right)^{\lambda } \\ \end{aligned}$$
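
The permutations \(\sigma\), \(\dot{\sigma }\) and \(\ddot{\sigma }\) simply sort the per-attribute aggregate terms in descending order before the position weights \(\omega_{j}\) are applied; the following sketch (assumed names, same input convention as the earlier snippets) makes this explicit.

```python
# A sketch of the ordered weighted family (names assumed): the per-attribute
# aggregate terms are sorted in descending order, so that omega_j is attached
# to the j-th largest deviation, exactly as the permutations above require.
mean = lambda xs: sum(xs) / len(xs)

def d_gow(deltas, weights, lam=1.0):
    terms = sorted((mean([d**lam for d in ds]) for ds in deltas), reverse=True)
    return sum(w * t for w, t in zip(weights, terms)) ** (1 / lam)

def d_gowh(deltas, weights, lam=1.0):
    terms = sorted((max(ds) ** lam for ds in deltas), reverse=True)
    return sum(w * t for w, t in zip(weights, terms)) ** (1 / lam)

def d_ghow(deltas, weights, lam=1.0):
    terms = sorted((0.5 * (mean([d**lam for d in ds]) + max(ds) ** lam)
                    for ds in deltas), reverse=True)
    return sum(w * t for w, t in zip(weights, terms)) ** (1 / lam)
```

With equal weights the ordering has no effect, while a weight vector concentrated on the first position recovers the attribute with the largest deviation, which is how these measures alleviate or intensify the influence of extreme deviations.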

4 A Decision-Making Approach Based on the Proposed Distance Measures

Multi-attribute decision making is the process of selecting the best alternative from a set of alternatives with respect to some attributes, which arises widely in daily life. In this section, we propose an approach based on the proposed distance measures to handle multi-attribute decision-making problems with NPNLTSs.

4.1 A Decision-Making Approach with NPNLTSs

A multi-attribute decision-making problem with NPNLTS information can be interpreted as follows: Let \(X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}\) be a set of alternatives and \(C = \left\{ {c_{1} ,c_{2} , \ldots ,c_{m} } \right\}\) be a set of attributes with a weighting vector \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{m} } \right)^{T}\), where \(0 \le \omega_{j} \le 1\) and \(\sum\nolimits_{j = 1}^{m} {\omega_{j} } = 1\;\left( {j = 1,2, \ldots ,m} \right)\). Many practical decision-making problems involve both quantitative and qualitative information; for example, when evaluating a student, the grades (quantitative information) and the behaviors (qualitative information) are both needed. Such information can be transformed into NPNLTSs, and a judgment matrix with NPNLTS information can then be obtained as follows:

$${\mathbf{NPN}} = \left[ {\begin{array}{*{20}c} {{\text{NPN}}_{11} } & {{\text{NPN}}_{12} } & \cdots & {{\text{NPN}}_{1m} } \\ {{\text{NPN}}_{21} } & {{\text{NPN}}_{22} } & \cdots & {{\text{NPN}}_{2m} } \\ \vdots & \vdots & \ddots & \vdots \\ {{\text{NPN}}_{n1} } & {{\text{NPN}}_{n2} } & \cdots & {{\text{NPN}}_{nm} } \\ \end{array} } \right]$$

where \({\text{NPN}}_{ij} = \left\{ {{\text{OL}}_{ij}^{\left( k \right)} \left( p \right)\left\{ {{\text{IL}}_{ij}^{\left( l \right)} \left( v \right)} \right\}|k = 1,2, \ldots ,\# {\text{OL}}\left( p \right),l = 1,2, \ldots ,\# {\text{IL}}\left( v \right)} \right\}\left( {i = 1,2, \ldots ,n;j = 1,2, \ldots ,m} \right)\) is an NPNLTS, denoting the degree to which the alternative \(x_{i}\) satisfies the attribute \(c_{j}\).

Firstly, for the ease of comparison under different cases, we define the bounds of a judgment matrix with NPNLTSs:

Definition 9

Let \({\mathbf{NPN}} = \{ {\text{OL}}_{ij} \left( p \right)\left\{ {{\text{IL}}_{ij} \left( v \right)} \right\}\}\) be a matrix with NPNLTSs, then the lower bound and the upper bound under different cases are defined as:

Case 1 and Case 2

  (1) Lower bound: \({\text{NPN}}^{ - } = \hbox{min} \{ {\text{OL}}^{\left( i \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}\left( v \right)} \right\}\} = {\text{OL}}^{\left( j \right)} \left( {p^{\left( l \right)} } \right)\left\{ {{\text{IL}}\left( v \right)} \right\},{\text{OL}}^{\left( i \right)} \in {\text{OS}}\) and \({\text{OL}}^{\left( i \right)} \ge {\text{OL}}^{\left( j \right)} ,p^{\left( k \right)} \le p^{\left( l \right)} ,\forall i,k.\)

  (2) Upper bound: \({\text{NPN}}^{ + } = \hbox{max} \{ {\text{OL}}^{\left( i \right)} \left( {p^{\left( k \right)} } \right)\left\{ {{\text{IL}}\left( v \right)} \right\}\} = {\text{OL}}^{\left( j \right)} \left( {p^{\left( l \right)} } \right)\left\{ {{\text{IL}}\left( v \right)} \right\},{\text{OL}}^{\left( i \right)} \in {\text{OS}}\) and \({\text{OL}}^{\left( i \right)} \le {\text{OL}}^{\left( j \right)} ,p^{\left( k \right)} \le p^{\left( l \right)} ,\forall i,k.\)

Case 3 and Case 4

  (1) Lower bound: \({\text{NPN}}^{ - } = \hbox{min} \{ {\text{OL}}\left( p \right)\left\{ {{\text{IL}}^{\left( i \right)} \left( {v^{\left( k \right)} } \right)} \right\}\} = {\text{OL}}\left( p \right)\left\{ {{\text{IL}}^{\left( j \right)} \left( {v^{\left( l \right)} } \right)} \right\},{\text{IL}}^{\left( i \right)} \in {\text{IS}}\) and \({\text{IL}}^{\left( i \right)} \ge {\text{IL}}^{\left( j \right)} ,v^{\left( k \right)} \le v^{\left( l \right)} ,\forall i,k.\)

  (2) Upper bound: \({\text{NPN}}^{ + } = \hbox{max} \{ {\text{OL}}\left( p \right)\left\{ {{\text{IL}}^{\left( i \right)} \left( {v^{\left( k \right)} } \right)} \right\}\} = {\text{OL}}\left( p \right)\left\{ {{\text{IL}}^{\left( j \right)} \left( {v^{\left( l \right)} } \right)} \right\},{\text{IL}}^{\left( i \right)} \in {\text{IS}}\) and \({\text{IL}}^{\left( i \right)} \le {\text{IL}}^{\left( j \right)} ,v^{\left( k \right)} \le v^{\left( l \right)} ,\forall i,k.\)

Next, the notions of the nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\) and the nested probabilistic-numerical linguistic negative ideal solution \(x^{ - }\) can be defined as follows, respectively:

$$x^{ + } = \left\{ {{\text{NPN}}^{1 + } ,{\text{NPN}}^{2 + } , \ldots ,{\text{NPN}}^{m + } } \right\}\,{\text{and}}\,x^{ - } = \left\{ {{\text{NPN}}^{1 - } ,{\text{NPN}}^{2 - } , \ldots ,{\text{NPN}}^{m - } } \right\}$$

where

$${\text{NPN}}^{j + } = \left\{ {\begin{array}{*{20}l} {\mathop {\hbox{max} }\limits_{i = 1,2, \ldots ,n} {\text{NPN}}^{ij + } = \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}}\left( p \right) \\ l = 1,2, \ldots ,\# {\text{IL}}\left( v \right) \end{subarray} } \left\{ {{\text{OL}}_{ij}^{\left( k \right)} \left( p \right)\left\{ {{\text{IL}}_{ij}^{\left( l \right)} \left( v \right)} \right\}} \right\},{\text{ for benefit attribute }} c_{j} } \\ {\mathop {\hbox{min} }\limits_{i = 1,2, \ldots ,n} {\text{NPN}}^{ij - } = \mathop {\hbox{min} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}}\left( p \right) \\ l = 1,2, \ldots ,\# {\text{IL}}\left( v \right) \end{subarray} } \left\{ {{\text{OL}}_{ij}^{\left( k \right)} \left( p \right)\left\{ {{\text{IL}}_{ij}^{\left( l \right)} \left( v \right)} \right\}} \right\},{\text{ for cost attribute }}c_{j} } \\ \end{array} } \right.,\quad {\text{for}}\quad {\kern 1pt} j = 1,2, \ldots ,m$$

and

$${\text{NPN}}^{j - } = \left\{ {\begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{i = 1,2, \ldots ,n} {\text{NPN}}^{ij - } = \mathop {\hbox{min} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}}\left( p \right) \\ l = 1,2, \ldots ,\# {\text{IL}}\left( v \right) \end{subarray} } \left\{ {{\text{OL}}_{ij}^{\left( k \right)} \left( p \right)\left\{ {{\text{IL}}_{ij}^{\left( l \right)} \left( v \right)} \right\}} \right\},{\text{ for benefit attribute }} c_{j} } \\ {\mathop {\hbox{max} }\limits_{i = 1,2, \ldots ,n} {\text{NPN}}^{ij + } = \mathop {\hbox{max} }\limits_{\begin{subarray}{l} k = 1,2, \ldots ,\# {\text{OL}}\left( p \right) \\ l = 1,2, \ldots ,\# {\text{IL}}\left( v \right) \end{subarray} } \left\{ {{\text{OL}}_{ij}^{\left( k \right)} \left( p \right)\left\{ {{\text{IL}}_{ij}^{\left( l \right)} \left( v \right)} \right\}} \right\},{\text{ for cost attribute }}c_{j} } \\ \end{array} } \right.,\quad {\text{for}}\quad {\kern 1pt} j = 1,2, \ldots ,m$$

Remark 3

The nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\) and the nested probabilistic-numerical linguistic negative ideal solution \(x^{ - }\) are also NPNLTSs. More specifically, according to Definition 9, they can be taken as special NPNLTSs with only one OPLTS under Case 1 and Case 2, while the number of OPLTSs will be \(\tau + 1\) under Case 3 and Case 4.

For the sake of selecting the best alternative, motivated by the TOPSIS method [3], we can calculate the distance between each alternative \(x_{i}\) and the nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\), as well as the distance between each alternative \(x_{i}\) and the nested probabilistic-numerical linguistic negative ideal solution \(x^{ - }\). As we can see, the smaller the distance \(d\left( {x_{i} ,x^{ + } } \right)\), the better the alternative; while the larger the distance \(d\left( {x_{i} ,x^{ - } } \right)\), the better the alternative. Obviously, these distances can be calculated by using the distance measures proposed in Sect. 3. In order to make full use of the distance information, we take both \(d\left( {x_{i} ,x^{ + } } \right)\) and \(d\left( {x_{i} ,x^{ - } } \right)\) into consideration simultaneously and use the satisfaction degree given by Liao [13] as follows:

Definition 10

[13] The satisfaction degree of a given alternative \(x_{i}\) is defined as:

$$\eta \left( {x_{i} } \right) = \frac{{\left( {1 - \theta } \right)d\left( {x_{i} ,x^{ - } } \right)}}{{\theta d\left( {x_{i} ,x^{ + } } \right) + \left( {1 - \theta } \right)d\left( {x_{i} ,x^{ - } } \right)}}$$

where the parameter \(\theta\) denotes the risk preference of the DM: \(\theta > 0.5\) means that the DM is pessimistic, \(\theta < 0.5\) means that the DM is optimistic, and \(\theta = 0.5\) means that the DM is neutral. The value of \(\theta\) should be provided by the DM in advance.

Remark 4

For any \(\theta \in \left[ {0,1} \right]\), \(d\left( {x_{i} ,x^{ + } } \right) \in \left[ {0,1} \right]\) and \(d\left( {x_{i} ,x^{ - } } \right) \in \left[ {0,1} \right]\), \(i = 1,2, \ldots ,n\), it is easy to see that \(0 \le \eta \left( {x_{i} } \right) \le 1\), and the higher the satisfaction degree, the better the alternative. The satisfaction degrees can be calculated with the different measures proposed in Sect. 3.
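As a quick numerical illustration of Definition 10 with hypothetical distances, take \(d\left( {x_{i} ,x^{ + } } \right) = 0.3\), \(d\left( {x_{i} ,x^{ - } } \right) = 0.2\) and a neutral DM (\(\theta = 0.5\)):

$$\eta \left( {x_{i} } \right) = \frac{(1 - 0.5) \times 0.2}{0.5 \times 0.3 + (1 - 0.5) \times 0.2} = \frac{0.1}{0.25} = 0.4$$

A pessimistic DM with \(\theta = 0.8\) would instead obtain \(\eta \left( {x_{i} } \right) = 0.04/0.28 \approx 0.143\), since more weight then falls on the distance to the positive ideal solution.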

4.2 Application Flow

Based on the discussion above, a decision-making approach can be established. The algorithm of the whole process with NPNLTS information is presented as follows (an illustrative code sketch follows the listing):

Algorithm

  • Input: The NPNLTS matrix \({\text{NPN}} = \{ {\text{OL}}\left( p \right)\left\{ {{\text{IL}}\left( v \right)} \right\}\}_{n \times m}\); the weight vector \(\omega = \left( {\omega_{1} ,\omega_{2} , \ldots ,\omega_{m} } \right)^{T}\); and the risk-preference parameter \(\theta\).

  • Output: The satisfaction degrees \(\eta \left( {x_{i} } \right)\left( {i = 1,2, \ldots ,n} \right)\) of the alternatives.

  • Step 1. Select the nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\) and the negative ideal solution \(x^{ - }\) from \({\text{NPN}} = \{ {\text{OL}}\left( p \right)\left\{ {{\text{IL}}\left( v \right)} \right\}\}_{n \times m}\). Go to Step 2.

  • Step 2. Choose the appropriate distance measure for NPNLTSs according to the type of decision-making problem. Go to Step 3.

  • Step 3. Calculate the satisfaction degree \(\eta \left( {x_{i} } \right)\left( {i = 1,2, \ldots ,n} \right)\) of each alternative by Definition 10. Go to Step 4.

  • Step 4. Rank the alternatives \(x_{i} \left( {i = 1,2, \ldots ,n} \right)\) and select the optimal alternative. Go to Step 5.

  • Step 5. End.
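A minimal, self-contained Python sketch of Steps 1–4 is given below. It again assumes scalar scores in place of the nested evaluations, and a weighted Hamming-style distance stands in for whichever NPN distance measure is chosen in Step 2; all data are hypothetical.

```python
# A sketch of the Algorithm, assuming nested evaluations reduced to
# scalar scores in [0, 1]; the actual NPN distance measures operate on
# the nested structures themselves.

def weighted_hamming(a, b, weights):
    """Weighted Hamming-style distance between two score vectors."""
    return sum(w * abs(x - y) for w, x, y in zip(weights, a, b))

def satisfaction_degrees(scores, weights, is_benefit, theta=0.5):
    # Step 1: positive and negative ideal solutions per attribute
    m = len(weights)
    x_plus  = [(max if is_benefit[j] else min)(row[j] for row in scores) for j in range(m)]
    x_minus = [(min if is_benefit[j] else max)(row[j] for row in scores) for j in range(m)]
    # Steps 2-3: distances to the ideal solutions, then Definition 10
    etas = []
    for row in scores:
        d_plus  = weighted_hamming(row, x_plus,  weights)
        d_minus = weighted_hamming(row, x_minus, weights)
        etas.append((1 - theta) * d_minus / (theta * d_plus + (1 - theta) * d_minus))
    return etas

# Step 4: rank the alternatives by satisfaction degree (hypothetical data)
scores  = [[0.6, 0.5, 0.7, 0.8], [0.4, 0.6, 0.5, 0.7], [0.9, 0.4, 0.6, 0.9]]
weights = [0.2, 0.1, 0.2, 0.5]
etas = satisfaction_degrees(scores, weights, [True] * 4)
ranking = sorted(range(len(etas)), key=lambda i: etas[i], reverse=True)
print([f"x{i + 1}" for i in ranking])   # e.g. ['x3', 'x1', 'x2']
```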

To understand the algorithm clearly, the application flow for decision-making problems based on the proposed distance measures with NPNLTSs is depicted in Fig. 3.

Fig. 3 The application flow for decision-making problems with NPNLTSs

5 A Case Study

In order to illustrate the effectiveness and reliability of the approach with the proposed distance measures, in the following we consider a multi-attribute decision-making problem concerning the evaluation of treatment plans and make some comparisons and analyses of the proposed distance measures.

5.1 Problem Description

Owing to imperfections in the public medical management system, problems such as high medical costs, few channels and low coverage trouble people's livelihoods. For example, famous hospitals are overcrowded while community hospitals are neglected, and patients' medical procedures are cumbersome. These problems, caused by poor medical information, the polarization of medical resources and an incomplete medical supervision mechanism, have become an important factor affecting the harmonious development of society. Wise Information Technology of 120 (WIT 120) makes use of a self-service, interactive mode and computer-assisted decision making to improve the efficiency and accuracy of treatment evaluation. The aim of WIT 120 is to allow patients to enjoy safe, convenient and high-quality medical services with a relatively short waiting time and basic medical expenses, to fundamentally solve the problem of "difficult and expensive medical services," and to truly achieve the goal of "health for all."

When people suffer from a major illness, due to the uncertainty of information and the urgency of time, doctors usually cannot make comprehensive judgments about the illness immediately. However, through the WIT 120 platform, patients can input information such as symptoms and medical history, and the system will then recommend the best treatment via machine learning. To select the best medical treatment plan, four crucial factors need to be considered:

  • Operable coefficient The implementation of a treatment is related to the safety of the patient. Thus, it is necessary to choose an operable and feasible treatment.

  • Comfort level The treatment process is supposed to consider the patient’s tolerance level. The more comfortable the patient’s experience is, the better the treatment should be.

  • Cost performance Different treatments cost different amounts, such as conservative treatment versus surgical treatment. Therefore, cost is one of the factors in evaluating a treatment.

  • Cure rate In implementing a treatment plan, it is necessary to consider the cure rate of each treatment. Under the same circumstances, the higher the cure rate is, the better the medical treatment should be.

5.2 Solve the Problem

Suppose that five medical treatments \(\left\{ {x_{1} ,x_{2} ,x_{3} ,x_{4} ,x_{5} } \right\}\) are put forward to cure the disease. Four attributes \(\left\{ {c_{1} ,c_{2} ,c_{3} ,c_{4} } \right\}\) are considered: \(c_{1}\): Operable coefficient, \(c_{2}\): Comfort level, \(c_{3}\): Cost performance, \(c_{4}\): Cure rate. The weighting vector of these four attributes is \(\omega = \left( {0.2,0.1,0.2,0.5} \right)^{T}\). The OLTS and the ILTS are given as follows:

$${\text{OS}} = \left\{ {s_{0} = {\text{very}}\,{\text{bad}},s_{1} = {\text{bad}},s_{2} = {\text{medium}},s_{3} = {\text{well}},s_{4} = {\text{very}}\,{\text{well}}} \right\}$$
$${\text{IS}} = \left\{ {n_{0} = {\text{higher}}\,{\text{cost}}\,{\text{performance}},n_{1} = {\text{medium}}\,{\text{cost}}\,{\text{performance}},n_{2} = {\text{lower}}\,{\text{cost}}\,{\text{performance}}} \right\}$$

As we can see, the elements of the OLTS and the ILTS are both ordinal variables, which shows that the problem belongs to Case 1. In order to obtain the relevant evaluation information, the WIT 120 system sets up a decision organization containing a group of DMs to assess the treatment plans. In the process of evaluation, the DMs evaluate the different treatment plans with NPNLTSs, which contain OPLTSs and INLTSs. More specifically, the DMs express their linguistic information with respect to the attributes in the OLTS, and the corresponding probabilities can then be calculated. In addition, the DMs need to discuss the numerical information with respect to each OLTE in the OLTS to fully express the nested information, such as the hundred-mark system (a 0-to-100 scale) used in this example. On this basis, a nested probabilistic-numerical linguistic judgment matrix can be constructed, as shown in Table 2.

Table 2 The nested probabilistic-numerical linguistic judgment matrix

Next, we normalize the NPNLTSs with respect to the attributes for the different alternatives, as shown in Table 3.

Table 3 The normalized nested probabilistic-numerical linguistic judgment matrix

From Table 3, it is noted that all four attributes are benefit-type attributes. In order to select the desired treatment plan, we first establish the nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\) and the nested probabilistic-numerical linguistic negative ideal solution \(x^{ - }\), shown in Table 4.

Table 4 The positive ideal solution \(x^{ + }\) and the negative ideal solution \(x^{ - }\)

Then, considering the weighting vector of the attributes, we calculate the distance between each alternative \(x_{i}\) and the nested probabilistic-numerical linguistic positive ideal solution \(x^{ + }\), and the distance between each alternative \(x_{i}\) and the nested probabilistic-numerical linguistic negative ideal solution \(x^{ - }\), respectively. Furthermore, the satisfaction degree \(\eta \left( {x_{i} } \right)\) of each alternative \(x_{i}\) can be calculated by Definition 10. Without loss of generality, we choose \(\theta = 0.5\), and the results based on the NPN-Hamming distance measure are:

$$\eta \left( {x_{1} } \right) = 0.2682,\;\eta \left( {x_{2} } \right) = 0.2496,\;\eta \left( {x_{3} } \right) = 0.2876,\;\eta \left( {x_{4} } \right) = 0.2760,\;\eta \left( {x_{5} } \right) = 0.2910$$

Therefore, the ranking of alternative medical treatments is \(x_{5} > x_{3} > x_{4} > x_{1} > x_{2}\), and the best medical treatment is \(x_{5}\).
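The ranking can be read off directly by sorting the satisfaction degrees reported above; as a small sanity check in Python:

```python
# Reproducing the ranking from the satisfaction degrees reported above.
etas = {"x1": 0.2682, "x2": 0.2496, "x3": 0.2876, "x4": 0.2760, "x5": 0.2910}
print(" > ".join(sorted(etas, key=etas.get, reverse=True)))   # x5 > x3 > x4 > x1 > x2
```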

5.3 Comparative Analysis and Discussion

To gain a deeper understanding of the proposed distance measures and show the superiority of the proposed method, we make some comparisons and analyses through simulation experiments from three aspects:

  • The impact of using various decision-making methods.

  • The impact of using various distance measures.

  • The impact of changing the focal parameters \(\lambda\) and \(\theta\).

5.3.1 The Impact of Using Various Decision-Making Methods

In this part, we compare the proposed method with the three other popular decision-making methods mentioned in the Introduction, namely the TOPSIS method, the VIKOR method and the ELECTRE method. Table 5 shows the rankings of the medical treatments obtained by these four methods based on the Hamming distance measure.

Table 5 Rankings based on four methods

From Table 5, the ranking results of the four decision-making methods differ only slightly. All four methods select \(x_{5}\) as the best medical treatment, and the proposed method and the ELECTRE method produce identical rankings. Therefore, compared with the other three methods, the ranking result of the proposed method is effective and reliable.

In the following, we study the average operation time (AOT) of the four methods when dealing with decision-making problems. Suppose that the numbers of alternatives and experts are denoted by \(m\) and \(k\), respectively; simulation experiments are then conducted with the four decision-making methods based on the Hamming distance measure. When \(m\) and \(k\) are taken from 3 to 14, the AOTs of the four methods over 1000 simulation runs are shown in Figs. 4 and 5, respectively.

Fig. 4 The average operation time when both \(m\) and \(k\) are taken from 3 to 14

Fig. 5 The average operation time when both \(m\) and \(k\) are taken from 3 to 14

From Figs. 4 and 5, there are apparent differences in AOT among the four methods when \(m\) and \(k\) are taken from 3 to 14. As we can see, the proposed method yields the minimum AOT and the ELECTRE method the maximum. Moreover, Fig. 4 shows that increasing the number of alternatives takes more time than increasing the number of experts. Hence, the proposed method can obtain stable results for decision-making problems without taking as much time as the other three methods.
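Such an AOT experiment can be organized as a simple timing harness; the sketch below uses a placeholder `solve_proposed` function standing in for the actual method implementations, which are not shown here.

```python
# A sketch of the AOT experiment: time a solver over repeated random
# instances. `solve_proposed` is a placeholder, not the real method.
import random, time

def solve_proposed(matrix, weights):
    # Placeholder: a weighted aggregation only, so that the harness runs;
    # the real solver would follow the Algorithm in Sect. 4.2.
    return [sum(w * v for w, v in zip(weights, row)) for row in matrix]

def average_operation_time(solver, m, k, runs=1000):
    """Average wall-clock time of `solver` over `runs` random m x k instances."""
    total = 0.0
    for _ in range(runs):
        matrix = [[random.random() for _ in range(k)] for _ in range(m)]
        weights = [1.0 / k] * k
        start = time.perf_counter()
        solver(matrix, weights)
        total += time.perf_counter() - start
    return total / runs

# Collect the AOT surface for m, k = 3..14, as in Figs. 4 and 5
aot = {(m, k): average_operation_time(solve_proposed, m, k)
       for m in range(3, 15) for k in range(3, 15)}
```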

5.3.2 The Impact of Using Various Distance Measures

Next, we discuss the impact of the various proposed distance measures with NPNLTSs on the ranking result. In order to compare and analyze the results clearly, we use the generalized weighted distance measure, the generalized weighted NPN-Hausdorff distance measure and the generalized hybrid weighted distance measure, respectively, to deal with the same problem as in Sect. 5.1. The satisfaction degrees and ranking results under the different distance measures are shown in Tables 6, 7 and 8, respectively.

Table 6 The satisfaction degree results with the generalized weighted distance measure
Table 7 The satisfaction degree results with the generalized weighted NPN-Hausdorff distance measure
Table 8 The satisfaction degree results with the generalized hybrid weighted distance measure

As we can see, the ranking results differ only slightly across the proposed distance measures as the parameter \(\lambda\) changes. More specifically, when \(\lambda = 1\) and \(\lambda = 2\), the rankings are the same, namely \(x_{5} > x_{3} > x_{4} > x_{1} > x_{2}\); when \(\lambda = 4\) and \(\lambda = 6\), the rankings are also the same, namely \(x_{5} > x_{3} > x_{2} > x_{4} > x_{1}\); but when \(\lambda = 10\), slight changes take place between \(x_{2}\) and \(x_{5}\), and between \(x_{1}\) and \(x_{4}\), respectively. Therefore, the ranking results are strongly stable under the different proposed distance measures.

Furthermore, it is noteworthy that when the focal parameter \(\lambda\) is smaller than a certain value, the best medical treatment is always \(x_{5}\). As the parameter \(\lambda\) increases, the satisfaction degree of \(x_{2}\) grows the fastest; hence, when \(\lambda = 10\), the best medical treatment is \(x_{2}\). In some sense, the parameter \(\lambda\) represents the preferences of the decision makers, and we will discuss the focal parameters in the next part.
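The role of \(\lambda\) can be seen in a simplified numeric version of a generalized weighted distance: larger \(\lambda\) puts more emphasis on the largest single deviation, which is one way to understand why the ranking shifts as \(\lambda\) grows. The vectors below are hypothetical stand-ins for reduced NPNLTS evaluations.

```python
# A numeric stand-in for the generalized weighted distance with parameter
# lam; the actual NPN version aggregates nested terms, but lam plays the
# same role: larger lam emphasizes the largest deviations.

def generalized_weighted_distance(a, b, weights, lam=1.0):
    """lam = 1 and lam = 2 recover weighted Hamming- and Euclidean-type
    distances, respectively."""
    return sum(w * abs(x - y) ** lam for w, x, y in zip(weights, a, b)) ** (1.0 / lam)

a, b = [0.6, 0.5, 0.7, 0.8], [0.9, 0.6, 0.7, 0.9]   # hypothetical scores
w = [0.2, 0.1, 0.2, 0.5]
for lam in (1, 2, 4, 6, 10):
    print(lam, round(generalized_weighted_distance(a, b, w, lam), 4))
```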

5.3.3 The Impact of Changing Focal Parameters

Now, we study the impact of changing some focal parameters in the decision-making process. In order to see the satisfaction degrees of the different alternatives when using the various distance measures with the parameter \(\lambda\), we plot the satisfaction degrees against the changed parameter \(\lambda\) for the different alternatives \(x_{i} \left( {i = 1,2,3,4,5} \right)\), as shown in Figs. 6, 7 and 8, respectively.

Fig. 6 The satisfaction degree results with the generalized weighted distance measure

Fig. 7 The satisfaction degree results with the generalized weighted NPN-Hausdorff distance measure

Fig. 8 The satisfaction degree results with the generalized hybrid weighted distance measure

From the above figures, there are also some interesting results:

  1. When using a certain distance measure, the satisfaction degrees increase or decrease as the parameter \(\lambda\) changes. More specifically, when we use the generalized weighted distance measure and the generalized hybrid weighted distance measure to calculate the distances, shown in Figs. 6b and 8b, respectively, the satisfaction degrees of all the alternatives increase monotonically with the parameter \(\lambda\). However, when we use the generalized weighted NPN-Hausdorff distance measure, shown in Fig. 7b, the satisfaction degrees of \(x_{1}\) and \(x_{2}\) increase monotonically, while those of \(x_{3}\), \(x_{4}\) and \(x_{5}\) decrease monotonically with increasing \(\lambda\).

  2. It is noted that, under all the distance measures, the alternative with the largest increase is \(x_{2}\), which explains why \(x_{5}\) is the best alternative and \(x_{2}\) the second best when \(\lambda = 1,2,4,6\), whereas \(x_{2}\) is at the top of the figures when \(\lambda = 10\). Therefore, from this point of view, the parameter \(\lambda\) can be regarded as a DM's risk attitude; thus, the proposed distance measures give the DMs more freedom to express their risk preferences through the parameter \(\lambda\).

In the above analysis, the value of the parameter \(\theta\) was fixed at 0.5 without loss of generality. In the following, we further consider the influence of the changed parameter \(\theta\) on the satisfaction degrees under different distance measures. We therefore use the generalized weighted distance measure, the generalized weighted NPN-Hausdorff distance measure and the generalized hybrid weighted distance measure, respectively, to calculate the distances with the parameter \(\theta\) ranging from 0 to 1 in steps of 0.001, as shown in Figs. 9a, 10 and 11a. In order to see the differences under the changed parameter \(\lambda\) clearly, we also draw the satisfaction degrees with \(\theta\) from 0.5 to 0.6 in steps of 0.0001, as shown in Figs. 9b, 10 and 11b, respectively.
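The \(\theta\)-sweep itself is a simple grid evaluation of Definition 10; a minimal sketch with hypothetical distances follows.

```python
# Sweeping theta over [0, 1] in steps of 0.001, as in the experiment above,
# for one alternative with hypothetical distances to the ideal solutions.
d_plus, d_minus = 0.12, 0.14   # hypothetical d(x_i, x+), d(x_i, x-)

etas = []
for i in range(1001):
    theta = i / 1000
    denom = theta * d_plus + (1 - theta) * d_minus
    etas.append((1 - theta) * d_minus / denom)

print(etas[0], etas[500], etas[1000])   # theta = 0, 0.5, 1
```

In this sketch \(\eta\) falls from 1 at \(\theta = 0\) to 0 at \(\theta = 1\), consistent with reading large \(\theta\) as a pessimistic attitude in Definition 10.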

Fig. 9 The satisfaction degree results with the generalized weighted distance measure

Fig. 10 The satisfaction degree results with the generalized weighted NPN-Hausdorff distance measure

Fig. 11 The satisfaction degree results with the generalized hybrid weighted distance measure

From the above figures, some interesting phenomena can also be observed.

  1. If we use a certain distance measure with the parameter \(\theta\) ranging from 0 to 1, shown in Figs. 9a, 10 and 11a, respectively, the surfaces almost coincide for the five different values of the parameter \(\lambda\); that is to say, overall, the differences in the satisfaction degrees under the changed parameter \(\lambda\) are very small. However, when the parameter \(\theta\) is limited to a narrower range, such as from 0.5 to 0.6, shown in Figs. 9b, 10 and 11b, respectively, the differences in the satisfaction degrees under the different values of the parameter \(\lambda\) become visible.

  2. The magnitudes of these differences vary across the proposed distance measures. More specifically, the difference when using the generalized weighted distance measure is larger than when using the generalized weighted NPN-Hausdorff distance measure. Hence, from another point of view, the parameter \(\theta\) can also be regarded as reflecting the risk preference of the DM.

  3. Combined with the figures and Definition 10, we can see that the larger the parameter \(\theta\) is, the more pessimistic the DM is, and, conversely, the smaller it is, the more optimistic the DM is. Therefore, when the DMs deal with multi-attribute decision-making problems with nested probabilistic-numerical linguistic information, they can express their risk preferences in three respects: the distance measure, the parameter \(\lambda\) and the parameter \(\theta\).

6 Conclusions

In this paper, we have investigated various types of distance and similarity measures for NPNLTSs based on the basic axioms. A family of distance and similarity measures for NPNLTSs has been developed from well-known distances, such as the Hamming distance, the Euclidean distance, the Hausdorff distance and their generalizations. Then, we have studied the distance and similarity measures with respect to two collections of NPNLTSs in three aspects: the discrete case, the continuous case and the ordered weighted case. Because of the relationship between the distance measures and the similarity measures for NPNLTSs, we have focused our attention on the distance measures, and the corresponding similarity measures can be obtained naturally. It should be noted that the lengths of two NPNLTSs often differ in real applications, and we have discussed how to extend the shorter one under various cases until both have the same length, so that their distance can be calculated. Additionally, we have proposed an approach to deal with multi-attribute decision-making problems based on the proposed distance and similarity measures. In order to show the applicability and efficiency of the approach with the proposed distance measures, a case study concerning the evaluation of medical treatments has been presented. After some comparisons and analyses, we have discussed three angles, namely the impact of using various decision-making methods, of using various distance measures and of changing the focal parameters, and we have obtained some interesting results. For example, the parameters \(\lambda\) and \(\theta\) reflect the DM's risk preferences in different respects. As a result, the proposed distance measures give the DMs more choices in expressing their risk preferences.

There are some interesting topics for further research. For example, the hybrid weighted distance and similarity measures between two collections of NPNLTSs can be investigated. Furthermore, we can apply our distance and similarity measures to other decision-making methods and study how to determine the weights in the decision-making problems.