1 Introduction

The world we live in is full of uncertainty, which traditional crisp values cannot express. Therefore, Zadeh (1965) proposed fuzzy sets to represent such uncertain, fuzzy concepts. As the discipline developed and was applied to decision making, Zadeh (1975) proposed linguistic variables to represent linguistic information more conveniently and clearly. Linguistic variables such as “a little bit”, “very well” and “unlikely” are closer to a decision maker’s (DM’s) intentions than numerical values, which helps in decision making.

Decision making with linguistic information has been studied extensively. Herrera et al. (1996, 1997) established a series of group decision making models under linguistic evaluation. Xu (2004) extended the traditional single linguistic variable into an interval linguistic variable and proposed the uncertain linguistic variable. Multi-attribute group decision making (MAGDM) has a wide range of applications in uncertain linguistic environments. Based on distance and similarity theory, Xian and Sun (2014) further proposed the Euclidean and Minkowski ordered weighted averaging distance operators (FLIEOWAD and FLIOWAMD) and applied them to linguistic group decision making.

There are decision preferences in decision making, and a single linguistic term cannot accurately express them. To express DMs’ hesitation, Rodriguez et al. (2012) proposed hesitant fuzzy linguistic term sets (HFLTSs) based on hesitant fuzzy sets (HFSs) (Torra 2010) and linguistic term sets (LTSs) (Zadeh 1975), which can express several possible linguistic terms simultaneously. The TOPSIS method for HFLTSs (Beg and Rashid 2013) and correlation operators (Wei et al. 2014) were also proposed and applied to decision making. Subsequently, Wang (2015) developed extended hesitant fuzzy linguistic term sets (EHFLTSs) based on HFLTSs. Scholars found that, in actual decision making, the linguistic information given by different DMs should carry different weights. Therefore, Pang et al. (2016) proposed the probabilistic linguistic term set (PLTS) to present possible linguistic terms together with the corresponding probabilistic information. Gou and Xu (2016) explored its operational rules. Bai et al. (2017) and Xian et al. (2019) put forward different methods to rank PLTSs. Since PLTSs reflect the actual situation of decision making, they are also widely used in group decision making (GDM) (Xing, Li and Liao 2018). To show the preference of DMs, Zhang et al. (2016) proposed the probabilistic linguistic preference relation (PLPR) and applied it to risk assessment. Hu et al. (2020) made a comprehensive analysis of the current research status, applications and future development directions of PLTSs.

The above concepts all assume that the uncertain information is true and reliable. But in the real world, information released by government departments is obviously more reliable than that transmitted by individuals; the difference lies in the credibility of the information. To remedy the neglect of information credibility in the fuzzy domain, Zadeh (2011) proposed Z-numbers. A Z-number is expressed as \(Z=(A,B)\), where A is the evaluation and B is the credibility of A. For example, when evaluating the quality of a product, we use the Z-number \((good,\;sure)\) to express “the DM is sure the quality is good”: “good” is the evaluation and “sure” is its credibility. Kang et al. (2016) proposed a multi-attribute decision making (MADM) method to solve the supplier selection problem. Yaakob and Gegov (2016) combined the TOPSIS method with Z-numbers. Xiao (2014) applied Z-numbers to MADM. Z-numbers have also been studied as representations of linguistic information: Wang et al. (2017) and Xian et al. (2019) proposed the linguistic Z-number and the \({\mathcal {Z}}\) linguistic variable, respectively, to represent linguistic evaluation information containing credibility.

In decision making, information credibility is influenced by two aspects: the DM’s objective experience and subjective decision preference. DMs have different experience and familiarity with the different fields involved: the credibility of evaluations in familiar fields is higher, while that in unfamiliar fields is lower. Moreover, some DMs are accustomed to giving higher evaluations, while others usually give lower ones. Therefore, the credibility of evaluation information obviously differs.

However, most current group decision making (GDM) research on PLTSs focuses on the representation of the linguistic terms and the calculation of the probability distribution, without considering the influence of information credibility. Due to the subjective or objective reasons mentioned above, we often encounter situations in which the credibility of the evaluation information differs. For example, suppose ten experts assess the risk of a project: seven of them evaluate it as “high” and three as “very high”. Four of the ten are senior experts with years of experience in risk assessment, and the credibility of their assessments is “very likely”; the other six are young and inexperienced, and their credibility is “likely”. In this case, the decision information is expressed as: \((\{high(0.7),\;very\;high(0.3)\}, \{likely(0.6),\;very\;likely(0.4)\})\).

In the above example, if we do not consider the credibility of the information, then all evaluations are treated as completely credible by default. Obviously, such incomplete decision information may lead to inaccurate final decision results. In addition, existing methods for calculating attribute weights, such as the maximum deviation method, do not take information credibility into account, although assigning higher weights to attributes with high credibility is more in line with people’s decision making habits. In this paper, we propose a novel concept, the Z probabilistic linguistic term set (ZPLTS). It exploits the ability of Z-numbers to represent information credibility, and thus addresses the inaccurate decision results caused by the lack of credibility information in PLTS-based group decision problems. As a new linguistic information representation tool, a ZPLTS can simultaneously represent multiple possible linguistic terms, their probability distribution and the information credibility, making decision information more complete.

The main contributions of this paper are:

First, inspired by Z-numbers, we propose a concept called the ZPLTS. A ZPLTS can represent not only the linguistic evaluations of DMs but also their hesitation and preference. The normalization, operational rules, ranking method and distance measure are also proposed.

Second, combining the characteristics of information credibility, we propose a credibility-based weight calculation method.

Finally, we build a MAGDM model based on ZPLTSs, including an operator-based method and an extended TOPSIS method.

The remainder of the paper is organized as follows. In Sect. 2, we review some concepts of PLTSs and Z-numbers. The concept of ZPLTSs is proposed in Sect. 3, where the normalization, operational rules, ranking method and distance measure are also put forward. In Sect. 4, a MAGDM model is proposed to solve decision making problems in the Z probabilistic linguistic environment. In Sect. 5, we give a numerical example and conduct a comparative analysis to demonstrate the necessity and validity of the proposed methods. Section 6 concludes.

2 Preliminaries

In this section, we review some concepts used in the remainder of the paper.

2.1 Linguistic term sets

Definition 1

(Xu 2005) Let \(S = \{ {s_i}|i = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \}\) be an LTS, where \({s_i}\) represents a possible value of a linguistic variable.

Example 1

Let \(\varsigma = 3\) for S and \(\varsigma = 2\) for \(S^{\prime }\). Then S and \(S^{\prime }\) are expressed as

$$\begin{aligned}&\begin{array}{l} S = \{ {s_{ - 3}} = very\;bad,\;{s_{ - 2}} = \;bad,{s_{ - 1}} = slight\;bad,\\ {s_0} = \;fair,\;{s_1} = slight\;good,\;{s_2} = good,\;{s_3} = very\;good\} \end{array}.\\&{S^{\prime }} = \{ \;s_{ - 2}^{\prime } = \;impossible,\;s_{ - 1}^{\prime } = slight\;impossible,s_0^{\prime } = \;fair,\\&s_1^{\prime } = slight\;possible,\;s_2^{\prime } = possible\;\}. \end{aligned}$$

2.2 Hesitant fuzzy linguistic term sets

Building on LTSs, HFLTSs take the hesitant psychology of DMs into consideration: in actual decision making, a DM may consider several possible linguistic term values simultaneously. This led to the concept of HFLTSs.

Definition 2

(Rodriguez et al. 2012) Let \(S = \{ {s_i}|i = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \}\) be an LTS. An HFLTS \({b_S}\) is an ordered finite subset of consecutive linguistic terms of S.

Example 2

Continuing Example 1, \(b_S\) and \(b_{{S^{\prime }}}\) are expressed as follows:

\({b_S} = \{ {s_{ - 1}} = slight\;bad,\;{s_0} = fair\}\).

\({b_{{S^{\prime }}}} = \{ s_0^{\prime } = \;fair,\;{s_1^{\prime }} = slight\;possible,\;s_2^{\prime } = possible\}\).

If we simplify them, we obtain \({b_S} = \{ {s_{ - 1}},\;{s_0}\}\), \({b_{{S^{\prime }}}} = \{ s_0^{\prime },\;{s_1^{\prime }},\;s_2^{\prime }\} \).

2.3 Probabilistic linguistic term sets

In HFLTSs, all linguistic terms have the same weight. However, preferences exist in actual decision problems. To express the preferences of DMs, Pang et al. (2016) proposed PLTSs to make decision information more realistic and complete.

Definition 3

(Pang et al. 2016) Let \(S = \{ {s_i}|i = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \}\) be an LTS; then a PLTS is defined as

$$\begin{aligned} L(p) = \{ {L^{(m)}}({p^{(m)}})|{L^{(m)}} \in S,{p^{(m)}} \ge 0,m = 1,2, \ldots ,\# L(p),\sum \limits _{m = 1}^{\# L(p)} {{p^{(m)}} \le 1} \}, \end{aligned}$$
(1)

where \(L^{(m)}\) is a linguistic term, \(p^{(m)}\) is its probability, and \({\# L(p)}\) denotes the number of all different linguistic terms in L(p).

The operational rules of PLTSs are defined as follows:

Definition 4

(Gou and Xu 2016) Let \({L_1}(p)\) and \({L_2}(p)\) be two PLTSs, and let \(\chi > 0\) be a real number. Then,

$$\begin{aligned}&\begin{array}{l} (1)\;{L_1}(p) \oplus {L_2}(p) = {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_1}),{\varepsilon _2}^{(n)} \in g({L_2})} \{ ({\varepsilon _1}^{(m)} + {\varepsilon _2}^{(n)} - {\varepsilon _1}^{(m)}{\varepsilon _2}^{(n)})(p_1^{(m)}p_2^{(n)})\} } \right) ,\\ \quad m = 1,2, \ldots ,\# {L_1}(p),n = 1,2, \ldots ,\# {L_2}(p); \end{array}\\&\begin{array}{l} (2)\;{L_1}(p) \otimes {L_2}(p) = {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_1}),{\varepsilon _2}^{(n)} \in g({L_2})} \{ ({\varepsilon _1}^{(m)}{\varepsilon _2}^{(n)})(p_1^{(m)}p_2^{(n)})\} } \right) ,\\ \;\;\;m = 1,2, \ldots ,\# {L_1}(p),n = 1,2, \ldots ,\# {L_2}(p); \end{array}\\&\begin{array}{l} (3)\;\chi L(p) = {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g(L)} \{ (1 - {{(1 - {\varepsilon ^{(m)}})}^\chi })({p^{(m)}})\} } \right) , \;\;\;m = 1,2, \ldots ,\# L(p); \end{array}\\&\begin{array}{l} (4)\;{(L(p))^\chi } = {g^{ - 1}}(\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g(L)} \{ {({\varepsilon ^{(m)}})^\chi }({p^{(m)}})\} ), \;\;\;m = 1,2, \ldots ,\# L(p); \end{array} \end{aligned}$$

where \(g:[ - \varsigma ,\varsigma ] \rightarrow [0,1],\;g({s_i}) = \frac{i}{{2\varsigma }} + \frac{1}{2} = \sigma \); \({g^{ - 1}}:[0,1] \rightarrow [ - \varsigma ,\varsigma ],\;{g^{ - 1}}(\sigma ) = {s_{(2\sigma - 1)\varsigma }} = {s_i}\).
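To make the transformation concrete, the following Python sketch implements \(g\), \(g^{-1}\) and addition rule (1). The function names and the dictionary representation (subscripts mapped to probabilities) are our own choices, not part of the formal definition:

```python
def g(i, vs):
    # Definition 4: map the subscript i of s_i in [-vs, vs] to [0, 1].
    return i / (2 * vs) + 0.5

def g_inv(sigma, vs):
    # Inverse transformation: map sigma in [0, 1] back to a (possibly
    # non-integer) subscript in [-vs, vs].
    return (2 * sigma - 1) * vs

def plts_add(L1, L2, vs):
    # Rule (1): probabilistic sum of the term values, product of the
    # probabilities; coincident resulting terms have probabilities merged.
    out = {}
    for i, p1 in L1.items():
        for j, p2 in L2.items():
            e1, e2 = g(i, vs), g(j, vs)
            k = round(g_inv(e1 + e2 - e1 * e2, vs), 2)
            out[k] = out.get(k, 0.0) + p1 * p2
    return out

# {s_0(0.2), s_1(0.4), s_3(0.4)} + {s_-1(0.4), s_0(0.6)} with varsigma = 3
print(plts_add({0: 0.2, 1: 0.4, 3: 0.4}, {-1: 0.4, 0: 0.6}, 3))
```

On these inputs the sketch reproduces the evaluation component of \({{\hat{Z}}_1} + {{\hat{Z}}_2}\) computed in Example 4 below.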

2.4 Z-number

 Zadeh (2011) proposed Z-numbers by adding credibility to the traditional evaluation.

Definition 5

(Zadeh 2011) A Z-number is denoted as \(Z = (A,B)\), which is an ordered pair of fuzzy numbers. The first component, A, is a description or restriction of a value, and B is a measure of the credibility of A.

3 Z probabilistic linguistic term sets

To address the neglect of information credibility in existing studies of PLTSs, we propose ZPLTSs. In this section, the definition, the relevant operational rules, the ranking method, the normalization and the distance measure are put forward.

3.1 The concept of ZPLTSs

In GDM, as the example in Sect. 1 illustrates, ignoring credibility leads to inaccurate decision results in the probabilistic linguistic environment. Therefore, each DM gives the corresponding credibility while evaluating, which forms a linguistic Z-number. The evaluations and the credibilities of multiple DMs are then summarized into two PLTSs, respectively, forming a novel ZPLTS. It contains three kinds of decision information: possible linguistic terms, their probability distribution, and credibility.

Definition 6

Let X be a non-empty set; then a ZPLTS \({\hat{Z}}^\# \) is defined as:

$$\begin{aligned} {{\hat{Z}}^\# } = \{ < x,{\hat{Z}} > |x \in X\}, \end{aligned}$$
(2)

where \({\hat{Z}}\) is a Z probabilistic linguistic value (ZPLV).

Definition 7

Let \({L_A}(p)\) and \({L_B}(p)\) be two PLTSs, then a ZPLV is defined as

$$\begin{aligned} {\hat{Z}} = ({\hat{A}},{\hat{B}}) = ({L_{{\hat{A}}}}(p),{L_{{\hat{B}}}}(p)), \end{aligned}$$
(3)

where

$$\begin{aligned} {L_{{\hat{A}}}}(p)= & {} \left\{ {L_{{\hat{A}}}}^{(m)}({p_{{\hat{A}}}}^{(m)})|{L_{{\hat{A}}}}^{(m)} \in {S}, {p_{{\hat{A}}}}^{(m)} \ge 0,m = 1,2, \ldots ,\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})\right. ,\\&\left. \quad \sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{p_{{\hat{A}}}}^{(m)} \le 1} \right\} ,\\ {L_{{\hat{B}}}}(p)= & {} \left\{ {L_{{\hat{B}}}}^{(n)}({p_{{\hat{B}}}}^{(n)})|{L_{{\hat{B}}}}^{(n)} \in {S^{\prime }},{p_{{\hat{B}}}}^{(n)} \ge 0,n = 1,2, \ldots ,\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})\right. , \\&\left. \quad \sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{p_{{\hat{B}}}}^{(n)} \le 1} \right\} , \end{aligned}$$

\(S = \{ {s_i}|i = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \}\) and \(S' = \{ {s'_j}|j = - \zeta , \ldots , - 1,0,1, \ldots ,\zeta \}\) are two LTSs.

Remark 1

(1) If \({L_{{\hat{B}}}}(p) = \{ {s'_\zeta }(1)\} \), then the credibility is the highest and can be ignored. The ZPLV reduces to a PLTS, expressed as:

$$\begin{aligned} {\hat{Z}} = ({L_{{\hat{A}}}}(p),\{ {s'_\zeta }(1)\} ) = {L_{{\hat{A}}}}(p). \end{aligned}$$
(4)

(2) If \(\# {L_{{\hat{A}}}}({p_{{\hat{A}}}}) = \# {L_{{\hat{B}}}}({p_{{\hat{B}}}}) = 1\) and \({p_{{\hat{A}}}}^{(m)} = {p_{{\hat{B}}}}^{(n)} = 1\), then both \({\hat{A}}\) and \({\hat{B}}\) contain only one linguistic term, each with probability 1, so the ZPLV reduces to a linguistic Z-number (Wang et al. 2017), expressed as:

$$\begin{aligned} {\hat{Z}} = (\{ {s_i}(1)\} ,\{ {s'_j}(1)\} ) = ({s_i},{s'_j}), \end{aligned}$$
(5)

where \(i = - \varsigma , \ldots , - 1,0,1, \ldots ,\varsigma \) and \(j = - \zeta , \ldots , - 1,0,1, \ldots ,\zeta \).

Example 3

Continuing Example 1, assume that four experts evaluate a product, and their evaluations with corresponding credibilities are: \((good,\;slight\;possible)\), \((very\;good,\;possible)\), \((good,\;fair)\), and \((good,\;possible)\). Representing them by the linguistic terms in Example 1 yields four linguistic Z-numbers: \(({s_2},{{s'}_1})\), \(({s_3},{{s'}_2})\), \(({s_2},{{s'}_0})\), \(({s_2},{{s'}_2})\). Summarizing them into a ZPLTS, we obtain \((\{ {s_2}(0.75),{s_3}(0.25)\} ,\{ {{s'}_0}(0.25),{{s'}_1}(0.25),{{s'}_2}(0.5)\} )\).
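The aggregation step above is just a relative-frequency count. A minimal Python sketch (the representation and names are ours, with each linguistic Z-number given as a pair of subscripts):

```python
from collections import Counter

def aggregate(z_numbers):
    # Each expert supplies a linguistic Z-number (i, j): evaluation subscript
    # i and credibility subscript j. Relative frequencies yield the two
    # probability distributions of the resulting ZPLTS.
    n = len(z_numbers)
    L_A = {i: c / n for i, c in Counter(i for i, _ in z_numbers).items()}
    L_B = {j: c / n for j, c in Counter(j for _, j in z_numbers).items()}
    return L_A, L_B

# The four experts of Example 3: (s2, s'1), (s3, s'2), (s2, s'0), (s2, s'2)
print(aggregate([(2, 1), (3, 2), (2, 0), (2, 2)]))
```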

3.2 The normalization of ZPLVs

For the normalization of ZPLVs, both \({L_{{\hat{A}}}}(p)\) and \({L_{{\hat{B}}}}(p)\) should be normalized. The process has two parts: the normalization of the probability distribution and the normalization of the number of linguistic terms.

The first part is the normalization of the probabilities. When the sums of the probabilities in \({L_{{\hat{A}}}}(p)\) and \({L_{{\hat{B}}}}(p)\) are less than 1, the remaining probability needs to be allocated.

Definition 8

Let \({\hat{Z}} = ({L_{{\hat{A}}}}(p),{L_{{\hat{B}}}}(p))\) be a ZPLV with \(\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{p_{{\hat{A}}}}^{(m)} < 1} \) and \(\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{p_{{\hat{B}}}}^{(n)} < 1} \), then the associated ZPLV is:

$$\begin{aligned} {\hat{Z}} = ({\dot{L}_{{\hat{A}}}}(p),{\dot{L}_{{\hat{B}}}}(p)) = (\{ {L_{{\hat{A}}}}^{(m)}({\dot{p}_{{\hat{A}}}}^{(m)})\} ,\{ {L_{{\hat{B}}}}^{(n)}({\dot{p}_{{\hat{B}}}}^{(n)})\} ), \end{aligned}$$
(6)

where

$$\begin{aligned}&{{\dot{p}}_{{\hat{A}}}}^{(m)} = {{{p_{{\hat{A}}}}^{(m)}} \Big /{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{p_{{\hat{A}}}}^{(m)}} }},\\&{{\dot{p}}_{{\hat{B}}}}^{(n)} = {{{p_{{\hat{B}}}}^{(n)}} \Big /{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{p_{{\hat{B}}}}^{(n)}} }}, \end{aligned}$$

\(m = 1,2, \ldots ,\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})\) and \(n = 1,2, \ldots ,\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})\).
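Definition 8 is a simple rescaling; a short Python sketch (the dictionary representation and names are ours):

```python
def normalize_probs(L):
    # Definition 8: divide each probability by the current total so that the
    # probabilities of the PLTS component sum to 1.
    total = sum(L.values())
    return {term: p / total for term, p in L.items()}

# Credibility part of Z1 in Example 4: {s'_-2(0.1), s'_0(0.4)}
print(normalize_probs({-2: 0.1, 0: 0.4}))
```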

Next is the normalization of the number of linguistic terms. To facilitate the calculation of the distance between ZPLVs, we need to make the numbers of linguistic terms in \({L_{{\hat{A}}}}(p)\) and \({L_{{\hat{B}}}}(p)\) of different ZPLVs equal.

Definition 9

Let \({{\hat{Z}}_1} = ({L_{{\hat{A}}1}}(p),{L_{{\hat{B}}1}}(p))\) and \({{\hat{Z}}_2} = ({L_{{\hat{A}}2}}(p),{L_{{\hat{B}}2}}(p))\) be two ZPLVs. \(\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\) and \(\# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})\) are the numbers of linguistic terms in \({L_{{\hat{A}}1}}(p)\) and \({L_{{\hat{A}}2}}(p)\), respectively. \(\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})\) and \(\# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}})\) are the numbers of the linguistic terms in \({L_{{\hat{B}}1}}(p)\) and \({L_{{\hat{B}}2}}(p)\), respectively. If \(\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}}) > \# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})\), then

(1) add \(\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}}) - \# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})\) linguistic terms to \({L_{{\hat{A}}2}}(p)\), where the added terms may be any linguistic terms already in \({L_{{\hat{A}}2}}(p)\);

(2) let the probabilities of the added linguistic terms be 0.

Similarly, when \(\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}}) > \# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}})\), the normalization proceeds analogously.
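Because a padded component may repeat a linguistic term (with probability 0), lists of (term, probability) pairs are a convenient representation. A sketch of this padding step under our own naming:

```python
def pad_terms(L_short, L_long):
    # Definition 9: append copies of a term already present in L_short, each
    # with probability 0, until both components have equally many terms.
    padded = list(L_short)
    filler = L_short[0][0]          # any linguistic term already in L_short
    while len(padded) < len(L_long):
        padded.append((filler, 0.0))
    return padded

# Padding Z2's evaluation part against Z1's, as in Example 4
print(pad_terms([(-1, 0.4), (0, 0.6)], [(0, 0.2), (1, 0.4), (3, 0.4)]))
```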

3.3 Basic operational rules of ZPLVs

Inspired by Xian et al. (2019) and Gou and Xu (2016), the operational rules of ZPLVs are defined in this part.

Definition 10

Let \({{\hat{Z}}_1} = ({{\hat{A}}_1},{{\hat{B}}_1}) = ({L_{{\hat{A}}1}}(p),{L_{{\hat{B}}1}}(p))\) and \({{\hat{Z}}_2} = ({{\hat{A}}_2},{{\hat{B}}_2}) = ({L_{{\hat{A}}2}}(p),{L_{{\hat{B}}2}}(p))\) be two ZPLVs, where

$$\begin{aligned} {L_{{\hat{A}}1}}(p)= & {} \left\{ {L_{{\hat{A}}1}}^{(m)}({p_{{\hat{A}}1}}^{(m)})|{L_{{\hat{A}}1}}^{(m)} \in S,{p_{{\hat{A}}1}}^{(m)} \ge 0,m = 1,2, \ldots ,\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\right. ,\\&\left. \quad \sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {{p_{{\hat{A}}1}}^{(m)} \le 1} \right\} ,\\ {L_{{\hat{B}}1}}(p)= & {} \left\{ {L_{{\hat{B}}1}}^{(n)}({p_{{\hat{B}}1}}^{(n)})|{L_{{\hat{B}}1}}^{(n)} \in {S^{\prime }},{p_{{\hat{B}}1}}^{(n)} \ge 0,n = 1,2, \ldots ,\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})\right. ,\\&\left. \quad \sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{p_{{\hat{B}}1}}^{(n)} \le 1} \right\} ,\\ {L_{{\hat{A}}2}}(p)= & {} \left\{ {L_{{\hat{A}}2}}^{(h)}({p_{{\hat{A}}2}}^{(h)})|{L_{{\hat{A}}2}}^{(h)} \in S,{p_{{\hat{A}}2}}^{(h)} \ge 0,h = 1,2, \ldots ,\# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})\right. ,\\&\left. \quad \sum \limits _{h = 1}^{\# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})} {{p_{{\hat{A}}2}}^{(h)} \le 1} \right\} , \end{aligned}$$

and

$$\begin{aligned} {L_{{\hat{B}}2}}(p)= & {} \left\{ {L_{{\hat{B}}2}}^{(k)}({p_{{\hat{B}}2}}^{(k)})|{L_{{\hat{B}}2}}^{(k)} \in {S^{\prime }},{p_{{\hat{B}}2}}^{(k)} \ge 0,k = 1,2, \ldots ,\# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}})\right. ,\\&\left. \quad \sum \limits _{k = 1}^{\# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}})} {{p_{{\hat{B}}2}}^{(k)} \le 1} \right\} , \end{aligned}$$

then some basic operational rules are as follows:

$$\begin{aligned}&\begin{array}{l} (1)\left. \quad {{{\hat{Z}}}_1} + {{{\hat{Z}}}_2} = ({{{\hat{A}}}_1} + {{{\hat{A}}}_2},\frac{{{{{\hat{B}}}_1} + {{{\hat{B}}}_2}}}{2}) = ({L_{{\hat{A}}1}}(p) + {L_{{\hat{A}}2}}(p),\frac{{{L_{{\hat{B}}1}}(p) + {L_{{\hat{B}}2}}(p)}}{2})\right. \\ \qquad =\left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ ({\varepsilon _1}^{(m)} + {\varepsilon _2}^{(h)} - {\varepsilon _1}^{(m)}{\varepsilon _2}^{(h)})(p_{{\hat{A}}1}^{(m)}p_{{\hat{A}}2}^{(h)})\} } \right) \right. ,\\ \qquad \quad \left. \left. \frac{1}{2}{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}}),{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ ({\varepsilon _1}^{(n)} + {\varepsilon _2}^{(k)} - {\varepsilon _1}^{(n)}{\varepsilon _2}^{(k)})(p_{{\hat{B}}1}^{(n)}p_{{\hat{B}}2}^{(k)})\} } \right) \right. \right) ; \end{array}\\&\begin{array}{l} (2)\left. \quad {{{\hat{Z}}}_1} \otimes {{{\hat{Z}}}_2} = ({{{\hat{A}}}_1} \otimes {{{\hat{A}}}_2},{{{\hat{B}}}_1} \otimes {{{\hat{B}}}_2}) = ({L_{{\hat{A}}1}}(p) \otimes {L_{{\hat{A}}2}}(p),{L_{{\hat{B}}1}}(p) \otimes {L_{{\hat{B}}2}}(p)\right. )\\ \qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ ({\varepsilon _{{\hat{A}}1}}^{(m)}{\varepsilon _{{\hat{A}}2}}^{(h)})(p_{{\hat{A}}1}^{(m)}p_{{\hat{A}}2}^{(h)})\} } \right) \right. ,\\ \qquad \quad \left. {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}}),{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ ({\varepsilon _{{\hat{B}}1}}^{(n)}{\varepsilon _{{\hat{B}}2}}^{(k)})(p_{{\hat{B}}1}^{(n)}p_{{\hat{B}}2}^{(k)})\} } \right) \right) ; \end{array}\\&\begin{array}{l} (3)\left. \chi {{{\hat{Z}}}_1} = (\chi {{{\hat{A}}}_1},{{{\hat{B}}}_1}) = (\chi {L_{{\hat{A}}1}}({p_{{\hat{A}}1}}),{L_{{\hat{B}}1}}({p_{{\hat{B}}1}}))\right. 
\\ \qquad =\left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}1}})} \{ (1 - {{(1 - {\varepsilon ^{(m)}})}^\chi })({p_{{\hat{A}}1}}^{(m)})\} } \right) ,{L_{{\hat{B}}1}}({p_{{\hat{B}}1}})\right) ; \end{array}\\&\begin{array}{l} (4)\left. {{{\hat{Z}}}_1}^\chi = ({{{\hat{A}}}_1}^\chi ,{{{\hat{B}}}_1}^\chi ) = ({({L_{{\hat{A}}1}}({p_{{\hat{A}}1}}))^\chi },{({L_{{\hat{B}}1}}({p_{{\hat{B}}1}}))^\chi })\right. \\ \qquad =\left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}1}})} \{ {{({\varepsilon ^{(m)}})}^\chi }({p_{{\hat{A}}1}}^{(m)})\} } \right) ,{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(h)}} \in g({L_{{\hat{B}}1}})} \{ {{({\varepsilon ^{(h)}})}^\chi }({p_{{\hat{B}}1}}^{(h)})\} } \right) \right) . \end{array} \end{aligned}$$
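As an illustration, rule (3) can be sketched in Python (the names and the dictionary representation are ours; the credibility component is passed through unchanged, as the rule prescribes):

```python
def g(i, vs):
    # Map the subscript i of s_i in [-vs, vs] to [0, 1] (Definition 4).
    return i / (2 * vs) + 0.5

def g_inv(sigma, vs):
    # Map sigma in [0, 1] back to a (possibly non-integer) subscript.
    return (2 * sigma - 1) * vs

def zplv_scalar_mult(chi, L_A, L_B, vs):
    # Rule (3): apply 1 - (1 - eps)^chi to each evaluation term value;
    # the credibility PLTS L_B is returned untouched.
    new_A = {}
    for i, p in L_A.items():
        eps = g(i, vs)
        k = round(g_inv(1 - (1 - eps) ** chi, vs), 2)
        new_A[k] = new_A.get(k, 0.0) + p
    return new_A, dict(L_B)

# 2 * Z1 from Example 4 (normalized), with varsigma = 3
print(zplv_scalar_mult(2, {0: 0.2, 1: 0.4, 3: 0.4}, {-2: 0.2, 0: 0.8}, 3))
```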

Theorem 1

Let \({\hat{Z}},{{\hat{Z}}_1},{{\hat{Z}}_2}\) be three ZPLVs, and let \(\chi ,{\chi _1},{\chi _2} > 0\) be real numbers. Then

\((1)\;{{\hat{Z}}_1} + {{\hat{Z}}_2} = {{\hat{Z}}_2} + {{\hat{Z}}_1}\);

\((2)\;{{\hat{Z}}_1} \otimes {{\hat{Z}}_2} = {{\hat{Z}}_2} \otimes {{\hat{Z}}_1}\);

\((3)\;\chi ({{\hat{Z}}_1} + {{\hat{Z}}_2}) = \chi {{\hat{Z}}_1} + \chi {{\hat{Z}}_2}\);

\((4)\;{({{\hat{Z}}_1} \otimes {{\hat{Z}}_2})^\chi } = {{\hat{Z}}_1}^\chi \otimes {{\hat{Z}}_2}^\chi \);

\((5)\;{\chi _1}{\hat{Z}} + {\chi _2}{\hat{Z}} = ({\chi _1} + {\chi _2}){\hat{Z}}\), when \({{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})}\) and \(m=n\);

\((6)\;{{\hat{Z}}^{{\chi _1}}} \otimes {{\hat{Z}}^{{\chi _2}}} = {{\hat{Z}}^{{\chi _1} + {\chi _2}}}\), when \({{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})}\), \({{\varepsilon ^{(h)}},{\varepsilon ^{(k)}} \in g({L_{{\hat{B}}}})}\), \(m=n\) and \(h=k\).

Proof

The proofs of (1) and (2) are obvious, so we omit them here.

$$\begin{aligned}&(3)\;\chi \left( {{{\hat{Z}}}_1} + {{{\hat{Z}}}_2})\right. \\&\qquad = \chi ({g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ ({\varepsilon _1}^{(m)} + {\varepsilon _2}^{(h)} - {\varepsilon _1}^{(m)}{\varepsilon _2}^{(h)})(p_{{\hat{A}}1}^{(m)}p_{{\hat{A}}2}^{(h)})\} } \right) ,\\&\qquad \left. \frac{1}{2}{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}}),{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ ({\varepsilon _1}^{(n)} + {\varepsilon _2}^{(k)} - {\varepsilon _1}^{(n)}{\varepsilon _2}^{(k)})(p_{{\hat{B}}1}^{(n)}p_{{\hat{B}}2}^{(k)})\} } \right) \right) \\&\qquad =\left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ (1 - {{(1 - {\varepsilon _1}^{(m)})}^\chi }{{(1 - {\varepsilon _2}^{(h)})}^\chi })({p_{{\hat{A}}1}}^{(m)}{p_{{\hat{A}}2}}^{(h)})\} } \right) \right. ,\\&\qquad \left. \frac{{{L_{{\hat{B}}1}}(p) + {L_{{\hat{B}}2}}(p)}}{2}\right) \\&\qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}})} \{ (1 - {{(1 - {\varepsilon _1}^{(m)})}^\chi })({p_{{\hat{A}}1}}^{(m)})\} } \right) , {L_{{\hat{B}}1}}(p)\right) \\&\qquad \quad + \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ (1 - {{(1 - {\varepsilon _2}^{(h)})}^\chi })({p_{{\hat{A}}2}}^{(h)})\} } \right) ,{L_{{\hat{B}}2}}(p)\right) = \chi {{{\hat{Z}}}_1} + \chi {{{\hat{Z}}}_2} \\&(4)\;{({{{\hat{Z}}}_1} \otimes {{{\hat{Z}}}_2})^\chi } \\&\qquad =\left( {\left( {{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ ({\varepsilon _{{\hat{A}}1}}^{(m)}{\varepsilon _{{\hat{A}}2}}^{(h)})(p_{{\hat{A}}1}^{(m)}p_{{\hat{A}}2}^{(h)})\} } \right) } \right) ^\chi }\right. , \\&\qquad \left. 
{\left( {{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}}),{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ ({\varepsilon _{{\hat{B}}1}}^{(n)}{\varepsilon _{{\hat{B}}2}}^{(k)})(p_{{\hat{B}}1}^{(n)}p_{{\hat{B}}2}^{(k)})\} } \right) } \right) ^\chi }\right) \\&\qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}}),{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ {{({\varepsilon _{{\hat{A}}1}}^{(m)})}^\chi }{{({\varepsilon _{{\hat{A}}2}}^{(h)})}^\chi }(p_{{\hat{A}}1}^{(m)}p_{{\hat{A}}2}^{(h)})\} } \right) ,\right. \\&\qquad \left. {g^{ - 1}}\left( (\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}}),{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ {{({\varepsilon _{{\hat{B}}1}}^{(n)})}^\chi }{{({\varepsilon _{{\hat{B}}2}}^{(k)})}^\chi }(p_{{\hat{B}}1}^{(n)}p_{{\hat{B}}2}^{(k)})\} \right) \right) \\&\qquad = \left( {\left( {{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _1}^{(m)} \in g({L_{{\hat{A}}1}})} \{ ({\varepsilon _{{\hat{A}}1}}^{(m)})(p_{{\hat{A}}1}^{(m)})\} } \right) } \right) ^\chi },\right. \\&\qquad \left. {\left( {{g^{ - 1}}\left( {(\mathop \cup \limits _{{\varepsilon _1}^{(n)} \in g({L_{{\hat{B}}1}})} \{ ({\varepsilon _{{\hat{B}}1}}^{(n)})(p_{{\hat{B}}1}^{(n)})\} } \right) } \right) ^\chi }\right) \\&\qquad \quad \otimes \left( {\left( {{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon _2}^{(h)} \in g({L_{{\hat{A}}2}})} \{ ({\varepsilon _{{\hat{A}}2}}^{(h)})(p_{{\hat{A}}2}^{(h)})\} } \right) } \right) ^\chi },\right. \\&\qquad \left. {\left( {{g^{ - 1}}\left( {(\mathop \cup \limits _{{\varepsilon _2}^{(k)} \in g({L_{{\hat{B}}2}})} \{ ({\varepsilon _{{\hat{B}}2}}^{(k)})(p_{{\hat{B}}2}^{(k)})\} } \right) } \right) ^\chi }\right) \;\\&\qquad = {{{\hat{Z}}}_1}^\chi \otimes {{{\hat{Z}}}_2}^\chi \end{aligned}$$

(5) When \({{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})}\) and \(m=n\), then

$$\begin{aligned}&\begin{array}{l} {\chi _1}{\hat{Z}} + {\chi _2}{\hat{Z}}\\ \qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}}})} \{ (1 - {{(1 - {\varepsilon ^{(m)}})}^{{\chi _1}}})({p_{{\hat{A}}}}^{(m)})\} } \right) ,{L_{{\hat{B}}}}({p_{{\hat{B}}}})\right) \\ \qquad \quad + \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})} \{ (1 - {{(1 - {\varepsilon ^{(n)}})}^{{\chi _2}}})({p_{{\hat{A}}}}^{(n)})\} } \right) ,{L_{{\hat{B}}}}({p_{{\hat{B}}}})\right) \\ \qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})} \{ (1 - {{(1 - {\varepsilon ^{(m)}})}^{{\chi _1}}}{{(1 - {\varepsilon ^{(n)}})}^{{\chi _2}}})({p_{{\hat{A}}}}^{(m)}{p_{{\hat{A}}}}^{(n)})\} } \right) ,\frac{{{L_{{\hat{B}}}}({p_{{\hat{B}}}}) + {L_{{\hat{B}}}}({p_{{\hat{B}}}})}}{2}\right) \\ \qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}}}),m = n} \{ (1 - {{(1 - {\varepsilon ^{(m)}})}^{{\chi _1} + {\chi _2}}})({p_{{\hat{A}}}}^{(m)})\} } \right) ,{L_{{\hat{B}}}}({p_{{\hat{B}}}})\right) = ({\chi _1} + {\chi _2}){\hat{Z}} \end{array} \end{aligned}$$

(6) When \({{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})}\), \({{\varepsilon ^{(h)}},{\varepsilon ^{(k)}} \in g({L_{{\hat{B}}}})}\), \(m=n\) and \(h=k\), then

$$\begin{aligned}&{{{\hat{Z}}}^{{\chi _1}}} \otimes {{{\hat{Z}}}^{{\chi _2}}} = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}}})} \{ {{({\varepsilon ^{(m)}})}^{{\chi _1}}}({p_{{\hat{A}}}}^{(m)})\} } \right) ,{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(h)}} \in g({L_{{\hat{B}}}})} \{ {{({\varepsilon ^{(h)}})}^{{\chi _1}}}({p_{{\hat{B}}}}^{(h)})\} } \right) \right) \\&\qquad \otimes \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})} \{ {{({\varepsilon ^{(n)}})}^{{\chi _2}}}({p_{{\hat{A}}}}^{(n)})\} } \right) ,{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(k)}} \in g({L_{{\hat{B}}}})} \{ {{({\varepsilon ^{(k)}})}^{{\chi _2}}}({p_{{\hat{B}}}}^{(k)})\} } \right) \right) \\&\qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}},{\varepsilon ^{(n)}} \in g({L_{{\hat{A}}}})} \{ {{({\varepsilon ^{(m)}})}^{{\chi _1}}}{{({\varepsilon ^{(n)}})}^{{\chi _2}}}({p_{{\hat{A}}}}^{(m)}{p_{{\hat{A}}}}^{(n)})\} } \right) ,{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(h)}},{\varepsilon ^{(k)}} \in g({L_{{\hat{B}}}})} \{ {{({\varepsilon ^{(h)}})}^{{\chi _1}}}{{({\varepsilon ^{(k)}})}^{{\chi _2}}}({p_{{\hat{B}}}}^{(h)}{p_{{\hat{B}}}}^{(k)})\} } \right) \right) \\&\qquad = \left( {g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(m)}} \in g({L_{{\hat{A}}}}),m = n} \{ {{({\varepsilon ^{(m)}})}^{{\chi _1} + {\chi _2}}}({p_{{\hat{A}}}}^{(m)})\} } \right) ,{g^{ - 1}}\left( {\mathop \cup \limits _{{\varepsilon ^{(h)}} \in g({L_{{\hat{B}}}}),h = k} \{ {{({\varepsilon ^{(h)}})}^{{\chi _1} + {\chi _2}}}({p_{{\hat{B}}}}^{(h)})\} } \right) \right) \\&\qquad = {{{\hat{Z}}}^{{\chi _1} + {\chi _2}}} \end{aligned}$$

\(\square \)

Example 4

Let \( {{\hat{Z}}_1} = ({{\hat{A}}_1},{{\hat{B}}_1}) = ({L_{{\hat{A}}1}}(p),{L_{{\hat{B}}1}}(p)) = (\{ {s_0}(0.2),{s_1}(0.4),{s_3}(0.4)\} , \{ {s'_{ - 2}}(0.1), {s'_0}(0.4)\} ) \) and \( {{\hat{Z}}_2} = ({{\hat{A}}_2},{{\hat{B}}_2}) = ({L_{{\hat{A}}2}}(p),{L_{{\hat{B}}2}}(p)) = (\{ {s_{ - 1}}(0.4),{s_0}(0.6)\} ,\{ {s'_1}(0.5),{s'_2}(0.5)\} ) \) be two ZPLVs, and let \(\chi = 2\).

The normalized \({{{\hat{Z}}}_1}\) and \({{{\hat{Z}}}_2}\) are:

$$\begin{aligned} {{\hat{Z}}_1}= & {} (\{ {s_0}(0.2),{s_1}(0.4),{s_3}(0.4)\} ,\{ {s'_{ - 2}}(0.2),{s'_0}(0.8)\} ),\\ {{\hat{Z}}_2}= & {} (\{ {s_{ - 1}}(0.4),{s_0}(0.6),{s_{ - 1}}(0)\} ,\{ {s'_1}(0.5),{s'_2}(0.5)\} ). \end{aligned}$$
$$\begin{aligned}&\begin{array}{l} (1)\;{{{\hat{Z}}}_1} + {{{\hat{Z}}}_2} = ({g^{ - 1}}\{ \frac{2}{3}(0.08),\frac{3}{4}(0.12),\frac{7}{9}(0.16),\frac{5}{6}(0.24),1(0.4)\},\frac{1}{2}{g^{ - 1}}\{ \frac{{13}}{{18}}(0.1),\\ \quad \frac{{31}}{{36}}(0.1),\frac{5}{6}(0.4),\frac{{11}}{{12}}(0.4)\} ) = (\{ {s_1}(0.08),{s_{1.5}}(0.12),{s_{1.67}}(0.16),\\ \quad {s_2}(0.24),{s_3}(0.4)\},\{ {{s'}_{ - 0.18}}(0.1),{{s'}_{0.54}}(0.4),{{s'}_{0.78}}(0.1),{{s'}_{1.26}}(0.4)\} ) \end{array}\\&\begin{array}{l} (2)\;{{{\hat{Z}}}_1} \otimes {{{\hat{Z}}}_2} = ({g^{ - 1}}\{ \frac{1}{6}(0.08),\frac{1}{4}(0.12),\frac{2}{9}(0.16),\frac{1}{2}(0.24),\frac{1}{3}(0.4)\} ,{g^{ - 1}}\{ \frac{1}{9}(0.1),\\ \qquad \frac{5}{{36}}(0.1),\frac{1}{3}(0.4), \frac{5}{{12}}(0.4)\} ) = (\{ {s_{ - 2}}(0.08),{s_{ - 1.5}}(0.12),{s_{ - 1.67}}(0.16),\\ \qquad {s_{ - 1}}(0.4),{s_0}(0.24)\} , \{ {{s'}_{ - 2.33}}(0.1),{{s'}_{ - 2.17}}(0.1),{{s'}_{ - 1}}(0.4),{{s'}_{ - 0.5}}(0.4)\} ) \end{array}\\&\begin{array}{l} (3)\;2{{{\hat{Z}}}_1}\; = ({g^{ - 1}}\{ \frac{3}{4}(0.2),\frac{8}{9}(0.4),1(0.4)\} ,\{ {{s'}_{ - 2}}(0.2),{{s'}_0}(0.8)\} ) \\ \qquad = (\{ {s_{1.5}}(0.2),{s_{2.33}}(0.4),{s_3}(0.4)\} , \{ {{s'}_{ - 2}}(0.2),{{s'}_0}(0.8)\} ) \end{array}\\&\begin{array}{l} (4)\;{{{\hat{Z}}}_1}^2 = ({g^{ - 1}}\{ \frac{1}{4}(0.2),\frac{4}{9}(0.4),1(0.4)\} , {g^{ - 1}}\{ \frac{1}{{36}}(0.2),\frac{1}{4}(0.8)\} ) \\ \qquad = (\{ {s_{ - 1.5}}(0.2),{s_{ - 0.33}}(0.4),{s_3}(0.4)\} , \{ {{s'}_{ - 2.83}}(0.2),{{s'}_{ - 1.5}}(0.8)\} ) \end{array} \end{aligned}$$

3.4 The comparison method of ZPLVs

In this section, we propose a comparison method for ZPLVs. First, we define the score of a ZPLV: the linguistic values of \({\hat{A}}\) and \({\hat{B}}\) are represented along the two axes, and the resulting area is taken as the score of the ZPLV, as shown in Fig. 1.

Definition 11

Let

$$\begin{aligned}&{\hat{Z}} = ({\hat{A}},{\hat{B}}) = ({L_{{\hat{A}}}}(p),{L_{{\hat{B}}}}(p)) = (\{ {L_{{\hat{A}}}}^{(m)}({p_{{\hat{A}}}}^{(m)})|m = 1,2, \ldots ,\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})\} ,\\&\quad \{ {L_{{\hat{B}}}}^{(n)}({p_{{\hat{B}}}}^{(n)})|n = 1,2, \ldots ,\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})\} ), \end{aligned}$$

\(v_{{\hat{A}}}^{(m)}\) and \(v_{{\hat{B}}}^{(n)}\) are the subscripts of \({L_{{\hat{A}}}}^{(m)}\) and \({L_{{\hat{B}}}}^{(n)}\). The score of \({\hat{Z}}\) is

$$\begin{aligned} S({\hat{Z}}) = ({\bar{\alpha }} + \varsigma )({{\bar{\alpha }} ^{\prime }} + \zeta ), \end{aligned}$$
(7)

where

$$\begin{aligned} {\bar{\alpha }}= & {} \frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{v_{{\hat{A}}}}^{(m)}} {p_{{\hat{A}}}}^{(m)}}}{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{p_{{\hat{A}}}}^{(m)}} }} \quad \text {and}\quad {{\bar{\alpha }} ^{\prime }} = \frac{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{v_{{\hat{B}}}}^{(n)}} {p_{{\hat{B}}}}^{(n)}}}{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{p_{{\hat{B}}}}^{(n)}} }}. \end{aligned}$$
Fig. 1
figure 1

The score of a ZPLV

The higher the score of the ZPLV is, the larger the ZPLV is. However, it is also possible for the scores of two ZPLVs to be equal, as shown in Fig.2. Therefore, we introduce another concept, the deviation degree of the ZPLV.

Fig. 2
figure 2

Two ZPLVs have the same score

The definition of the deviation degree of the ZPLV is as follows:

Definition 12

Let

$$\begin{aligned}&{\hat{Z}} = ({\hat{A}},{\hat{B}}) = ({L_{{\hat{A}}}}(p),{L_{{\hat{B}}}}(p)) = (\{ {L_{{\hat{A}}}}^{(m)}({p_{{\hat{A}}}}^{(m)})|m = 1,2, \ldots ,\\&\qquad \# {L_{{\hat{A}}}}({p_{{\hat{A}}}})\} ,\{ {L_{{\hat{B}}}}^{(n)}({p_{{\hat{B}}}}^{(n)})|n = 1,2, \ldots ,\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})\} ), \end{aligned}$$

\({v_{{\hat{A}}}}^{(m)}\) and \({v_{{\hat{B}}}}^{(n)}\) are the subscripts of \({L_{{\hat{A}}}}^{(m)}\) and \({L_{{\hat{B}}}}^{(n)}\). The score of \({\hat{Z}}\) is \(S({\hat{Z}}) = ({\bar{\alpha }} + \varsigma )({{\bar{\alpha }} ^{\prime }} + \zeta )\), where \({\bar{\alpha }} = \frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{v_{{\hat{A}}}}^{(m)}} {p_{{\hat{A}}}}^{(m)}}}{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {{p_{{\hat{A}}}}^{(m)}} }}\) and \({{\bar{\alpha }} ^{\prime }} = \frac{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{v_{{\hat{B}}}}^{(n)}} {p_{{\hat{B}}}}^{(n)}}}{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{p_{{\hat{B}}}}^{(n)}} }}\).

The deviation degree of \({\hat{Z}}\) is

$$\begin{aligned} D({\hat{Z}}) = \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{{\left( {({p_{{\hat{A}}}}^{(m)}{v_{{\hat{A}}}}^{(m)} + \varsigma )({p_{{\hat{B}}}}^{(n)}{v_{{\hat{B}}}}^{(n)} + \zeta ) - ({\bar{\alpha }} + \varsigma )({{{\bar{\alpha }} }^{\prime }} + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})}}}. \end{aligned}$$
(8)
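As an illustrative check (not part of the original formulation), the score of Eq. (7) and the deviation degree of Eq. (8) can be sketched in Python; the helper names are hypothetical, and \(\varsigma = \zeta = 3\) as used in the worked examples of this paper.

```python
from math import sqrt

def score(A, B, sigma=3.0, zeta=3.0):
    """Score of a ZPLV (Eq. 7): S = (alpha_bar + sigma)(alpha_bar' + zeta),
    where each alpha_bar is the probability-weighted mean subscript.
    A and B are lists of (subscript, probability) pairs."""
    a = sum(v * p for v, p in A) / sum(p for _, p in A)
    b = sum(v * p for v, p in B) / sum(p for _, p in B)
    return (a + sigma) * (b + zeta)

def deviation(A, B, sigma=3.0, zeta=3.0):
    """Deviation degree of a ZPLV (Eq. 8)."""
    s = score(A, B, sigma, zeta)
    total = sum(((p_a * v_a + sigma) * (p_b * v_b + zeta) - s) ** 2
                for v_a, p_a in A for v_b, p_b in B)
    return sqrt(total / (len(A) * len(B)))

# Normalized Z1 of Example 4: A = {s0(0.2), s1(0.4), s3(0.4)}, B = {s'-2(0.2), s'0(0.8)}
A1 = [(0, 0.2), (1, 0.4), (3, 0.4)]
B1 = [(-2, 0.2), (0, 0.8)]
print(score(A1, B1))      # approximately 11.96, matching Example 5
print(deviation(A1, B1))  # approximately 2.59, matching Example 5
```

The two printed values reproduce \(S({{\hat{Z}}_1}) = 11.96\) and \(D({{\hat{Z}}_1}) \approx 2.59\) of Example 5.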

Then, the comparison method of ZPLVs is summarized as follows:

Definition 13

Let \({{\hat{Z}}_1}\) and \({{\hat{Z}}_2}\) be two ZPLVs, then

(1):

If \(S({{\hat{Z}}_1}) > S({{\hat{Z}}_2})\), then \({{\hat{Z}}_1} > {{\hat{Z}}_2}\).

(2):

If \(S({{\hat{Z}}_1}) < S({{\hat{Z}}_2})\), then \({{\hat{Z}}_1} < {{\hat{Z}}_2}\).

(3):

If \(S({{\hat{Z}}_1}) = S({{\hat{Z}}_2})\), then

(\(\dot{1}\)):

If \(D({{\hat{Z}}_1}) > D({{\hat{Z}}_2})\), then \({{\hat{Z}}_1} < {{\hat{Z}}_2}\).

(\(\dot{2}\)):

If \(D({{\hat{Z}}_1}) < D({{\hat{Z}}_2})\), then \({{\hat{Z}}_1} > {{\hat{Z}}_2}\).

(\(\dot{3}\)):

If \(D({{\hat{Z}}_1}) = D({{\hat{Z}}_2})\), then \({{\hat{Z}}_1} \approx {{\hat{Z}}_2}\).

(\(\dot{4}\)):

If \({L_{{\hat{A}}1}}(p) = {L_{{\hat{A}}2}}(p)\) and \({L_{{\hat{B}}1}}(p) = {L_{{\hat{B}}2}}(p)\), then \({{\hat{Z}}_1} = {{\hat{Z}}_2}\).
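The ranking rules of Definition 13 reduce to a simple lexicographic test on precomputed scores and deviation degrees; a minimal illustrative sketch (function name hypothetical):

```python
def compare_zplv(s1, d1, s2, d2):
    """Compare two ZPLVs per Definition 13: higher score wins;
    when scores tie, the smaller deviation degree wins."""
    if s1 != s2:
        return ">" if s1 > s2 else "<"
    if d1 != d2:
        return "<" if d1 > d2 else ">"
    return "="

# Values from Example 5: S(Z1)=11.96, D(Z1)=2.59 and S(Z2)=10.8, D(Z2)=1.07
print(compare_zplv(11.96, 2.59, 10.8, 1.07))  # ">"
```

The result ">" agrees with the conclusion \({{\hat{Z}}_1} > {{\hat{Z}}_2}\) of Example 5.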

Example 5

Continuing Example 4, we compare \({{\hat{Z}}_1}\) and \({{\hat{Z}}_2}\) by calculating their scores and deviation degrees.

$$\begin{aligned}&{{\bar{\alpha }} _1} = \frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {{v_{{\hat{A}}1}}^{(m)}} {p_{{\hat{A}}1}}^{(m)}}}{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {{p_{{\hat{A}}1}}^{(m)}} }} = \frac{{0 \times 0.2 + 1 \times 0.4 + 3 \times 0.4}}{{0.2 + 0.4 + 0.4}} = 1.6\\&{{\bar{\alpha }} ^{\prime }}_1 = \frac{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{v_{{\hat{B}}1}}^{(n)}} {p_{{\hat{B}}1}}^{(n)}}}{{\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{p_{{\hat{B}}1}}^{(n)}} }} = \frac{{ - 2 \times 0.2 + 0 \times 0.8}}{{0.2 + 0.8}} = - 0.4\\&S({{\hat{Z}}_1}) = ({{\bar{\alpha }} _1} + \varsigma )({{\bar{\alpha }} ^{\prime }}_1 + \zeta )\; = (1.6 + 3)( - 0.4 + 3) = 11.96\\&\begin{array}{l} D({{{\hat{Z}}}_1}) = \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}}}({p_{{\hat{A}}}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}}}({p_{{\hat{B}}}})} {{{\left( {({p_{{\hat{A}}1}}^{(m)}{v_{{\hat{A}}1}}^{(m)} + \varsigma )({p_{{\hat{B}}1}}^{(n)}{v_{{\hat{B}}1}}^{(n)} + \zeta ) - ({{{\bar{\alpha }} }_1} + \varsigma )({{{\bar{\alpha }} }_1}^{\prime } + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}}}({p_{{\hat{A}}1}})\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})}}}\\ = \sqrt{\frac{{{{(3 \times 2.6 - 11.96)}^2} + {{(3 \times 3 - 11.96)}^2} + {{(3.4 \times 2.6 - 11.96)}^2} + {{(3.4 \times 3 - 11.96)}^2} + {{(4.2 \times 2.6 - 11.96)}^2} + {{(4.2 \times 3 - 11.96)}^2}}}{{3 \times 2}}} \\ \approx 2.59 \end{array} \end{aligned}$$

With the same method, we obtain

$$\begin{aligned} S({{\hat{Z}}_2})= & {} 10.8,\quad D({{\hat{Z}}_2}) \approx 1.07.\\&\because S({{\hat{Z}}_1})> S({{\hat{Z}}_2}),\quad \therefore {{\hat{Z}}_1} > {{\hat{Z}}_2}. \end{aligned}$$

3.5 The distance between ZPLVs

The distance between ZPLVs is defined based on the deviation degree of ZPLVs and a modified Euclidean distance.

Definition 14

Let \( {{\hat{Z}}_1} = ({{\hat{A}}_1},{{\hat{B}}_1}) = ({L_{{\hat{A}}1}}(p),{L_{{\hat{B}}1}}(p)) = (\{ {L_{{\hat{A}}1}}^{(m)}({p_{{\hat{A}}1}}^{(m)})|m = 1,2, \ldots ,\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\} ,\{ {L_{{\hat{B}}1}}^{(n)} ({p_{{\hat{B}}1}}^{(n)})|n = 1,2, \ldots ,\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})\} )\) and \( {{\hat{Z}}_2} = ({{\hat{A}}_2},{{\hat{B}}_2}) = ({L_{{\hat{A}}2}}(p),{L_{{\hat{B}}2}}(p)) = (\{ {L_{{\hat{A}}2}}^{(h)}({p_{{\hat{A}}2}}^{(h)})|h = 1,2, \ldots , \# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}})\} ,\{ {L_{{\hat{B}}2}}^{(k)}({p_{{\hat{B}}2}}^{(k)})|k = 1,2, \ldots ,\# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}})\} ) \) be two ZPLVs with \( \# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}}) = \# {L_{{\hat{A}}2}}({p_{{\hat{A}}2}}) \) and \( \# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}}) = \# {L_{{\hat{B}}2}}({p_{{\hat{B}}2}}) \). Then the distance between \({{\hat{Z}}_1}\) and \({{\hat{Z}}_2}\) is

$$\begin{aligned}&d({{\hat{Z}}_1},{{\hat{Z}}_2}) =\nonumber \\&\quad \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{{\left( {({p_{{\hat{A}}1}}^{(m)}{v_{{\hat{A}}1}}^{(m)} + \varsigma )({p_{{\hat{B}}1}}^{(n)}{v_{{\hat{B}}1}}^{(n)} + \zeta ) - ({p_{{\hat{A}}2}}^{(m)}{v_{{\hat{A}}2}}^{(m)} + \varsigma )({p_{{\hat{B}}2}}^{(n)}{v_{{\hat{B}}2}}^{(n)} + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})}}}.\nonumber \\ \end{aligned}$$
(9)
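The distance of Eq. (9) can likewise be sketched in Python as an illustrative check (helper name hypothetical; \(\varsigma = \zeta = 3\) as in the examples); both ZPLVs must first be normalized to equal cardinalities.

```python
from math import sqrt

def zplv_distance(A1, B1, A2, B2, sigma=3.0, zeta=3.0):
    """Distance between two normalized ZPLVs (Eq. 9).
    Each part is a list of (subscript, probability) pairs."""
    assert len(A1) == len(A2) and len(B1) == len(B2)
    total = 0.0
    for (va1, pa1), (va2, pa2) in zip(A1, A2):
        for (vb1, pb1), (vb2, pb2) in zip(B1, B2):
            t1 = (pa1 * va1 + sigma) * (pb1 * vb1 + zeta)
            t2 = (pa2 * va2 + sigma) * (pb2 * vb2 + zeta)
            total += (t1 - t2) ** 2
    return sqrt(total / (len(A1) * len(B1)))

# Normalized Z1 and Z2 of Example 4 (Z2's A part padded with s-1(0))
A1 = [(0, 0.2), (1, 0.4), (3, 0.4)];   B1 = [(-2, 0.2), (0, 0.8)]
A2 = [(-1, 0.4), (0, 0.6), (-1, 0.0)]; B2 = [(1, 0.5), (2, 0.5)]
print(zplv_distance(A1, B1, A2, B2))  # approximately 1.30, matching Example 6
```

The printed value reproduces \(d({{\hat{Z}}_1},{{\hat{Z}}_2}) \approx 1.3\) of Example 6.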

Theorem 2

Let \({{\hat{Z}}_1},{{\hat{Z}}_2},{{\hat{Z}}_3}\) be three ZPLVs; then the following three properties hold:

(1):

\(d({{\hat{Z}}_1},{{\hat{Z}}_2}) \ge 0;\)

(2):

\(d({{\hat{Z}}_1},{{\hat{Z}}_2}) = d({{\hat{Z}}_2},{{\hat{Z}}_1});\)

(3):

If \({{\hat{Z}}_1} \le {{\hat{Z}}_2} \le {{\hat{Z}}_3}\), then \( d({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}) \le d({{{\hat{Z}}}_1},{{{\hat{Z}}}_3}) \) and \( d({{{\hat{Z}}}_2},{{{\hat{Z}}}_3}) \le d({{{\hat{Z}}}_1},{{{\hat{Z}}}_3}). \)

Proof

(1) and (2) are obvious.

(3)

$$\begin{aligned}&d({{\hat{Z}}_1},{{\hat{Z}}_2}) =\\&\quad \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{{\left( {({p_{{\hat{A}}1}}^{(m)}{v_{{\hat{A}}1}}^{(m)} + \varsigma )({p_{{\hat{B}}1}}^{(n)}{v_{{\hat{B}}1}}^{(n)} + \zeta ) - ({p_{{\hat{A}}2}}^{(m)}{v_{{\hat{A}}2}}^{(m)} + \varsigma )({p_{{\hat{B}}2}}^{(n)}{v_{{\hat{B}}2}}^{(n)} + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})}}} . \end{aligned}$$
$$\begin{aligned}&d({{\hat{Z}}_1},{{\hat{Z}}_3}) =\\&\quad \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{{\left( {({p_{{\hat{A}}1}}^{(m)}{v_{{\hat{A}}1}}^{(m)} + \varsigma )({p_{{\hat{B}}1}}^{(n)}{v_{{\hat{B}}1}}^{(n)} + \zeta ) - ({p_{{\hat{A}}3}}^{(m)}{v_{{\hat{A}}3}}^{(m)} + \varsigma )({p_{{\hat{B}}3}}^{(n)}{v_{{\hat{B}}3}}^{(n)} + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})}}} . \end{aligned}$$

Since \({{\hat{Z}}_2} \le {{\hat{Z}}_3}\), we have

$$\begin{aligned} ({p_{{\hat{A}}2}}^{(m)}{v_{{\hat{A}}2}}^{(m)} + \varsigma )({p_{{\hat{B}}2}}^{(n)}{v_{{\hat{B}}2}}^{(n)} + \zeta ) \le ({p_{{\hat{A}}3}}^{(m)}{v_{{\hat{A}}3}}^{(m)} + \varsigma )({p_{{\hat{B}}3}}^{(n)}{v_{{\hat{B}}3}}^{(n)} + \zeta ). \end{aligned}$$

Therefore, \( d({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}) \le d({{{\hat{Z}}}_1},{{{\hat{Z}}}_3}). \)

Similarly, \( d({{{\hat{Z}}}_2},{{{\hat{Z}}}_3}) \le d({{{\hat{Z}}}_1},{{{\hat{Z}}}_3}). \) \(\square \)

Example 6

Continuing Example 4, the distance between the normalized \({{{\hat{Z}}}_1}\) and \({{{\hat{Z}}}_2}\) is:

$$\begin{aligned} \begin{array}{l} d({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}) = \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})} {\sum \limits _{n = 1}^{\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})} {{{\left( {({p_{{\hat{A}}1}}^{(m)}{v_{{\hat{A}}1}}^{(m)} + \varsigma )({p_{{\hat{B}}1}}^{(n)}{v_{{\hat{B}}1}}^{(n)} + \zeta ) - ({p_{{\hat{A}}2}}^{(m)}{v_{{\hat{A}}2}}^{(m)} + \varsigma )({p_{{\hat{B}}2}}^{(n)}{v_{{\hat{B}}2}}^{(n)} + \zeta )} \right) }^2}} } }}{{\# {L_{{\hat{A}}1}}({p_{{\hat{A}}1}})\# {L_{{\hat{B}}1}}({p_{{\hat{B}}1}})}}} \\ = \sqrt{\frac{{{{(3 \times 2.6 - 2.6 \times 3.5)}^2} + {{(3 \times 3 - 2.6 \times 4)}^2} + {{(3.4 \times 2.6 - 3 \times 3.5)}^2} + {{(3.4 \times 3 - 3 \times 4)}^2} + {{(4.2 \times 2.6 - 3 \times 3.5)}^2} + {{(4.2 \times 3 - 3 \times 4)}^2}}}{{3 \times 2}}} \\ \approx 1.3 \end{array} \end{aligned}$$

4 MAGDM with ZPLTSs

Assume that there is a discrete set of alternatives \(X = \{ {x_1},{x_2}, \ldots ,{x_s}\} (s \ge 2)\) and a set of attributes \(C = \{ {c_1},{c_2}, \ldots ,{c_t}\} (t \ge 2)\). The weighting vector of the attributes is \(W = {({\omega _1},{\omega _2}, \ldots ,{\omega _t})^T}\), where \({\omega _\theta } \in [0,1]\), \(\theta = 1,2, \ldots ,t\), and \(\sum \limits _{\theta = 1}^t {{\omega _\theta }} = 1\). The attribute weights are unknown. According to the evaluations and the corresponding credibilities given by the DMs, we can obtain ZPLVs. All of these ZPLVs constitute the ZPLV decision matrix, denoted by R.

$$\begin{aligned} R = {[{{\hat{Z}}_{\gamma \theta }}]_{s \times t}} = \left[ {\begin{array}{*{20}{c}} {{{{\hat{Z}}}_{11}}}&{} \cdots &{}{{{{\hat{Z}}}_{1\theta }}}&{} \cdots &{}{{{{\hat{Z}}}_{1t}}}\\ \vdots &{} \ddots &{} \vdots &{} \ddots &{} \vdots \\ {{{{\hat{Z}}}_{\gamma 1}}}&{} \cdots &{}{{{{\hat{Z}}}_{\gamma \theta }}}&{} \cdots &{}{{{{\hat{Z}}}_{\gamma t}}}\\ \vdots &{} \ddots &{} \vdots &{} \ddots &{} \vdots \\ {{{{\hat{Z}}}_{s1}}}&{} \cdots &{}{{{{\hat{Z}}}_{s\theta }}}&{} \cdots &{}{{{{\hat{Z}}}_{st}}} \end{array}} \right] , \end{aligned}$$
(10)

where \( {{\hat{Z}}_{\gamma \theta }} = ({{\hat{A}}_{\gamma \theta }},{{\hat{B}}_{\gamma \theta }}) = ({L_{{{{\hat{A}}}_{\gamma \theta }}}}(p),{L_{{{{\hat{B}}}_{\gamma \theta }}}}(p)) = (\{ {L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)})|m = 1,2, \ldots ,\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\} , \{ {L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)})| \) \(n = 1,2, \ldots ,\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\} )\). It represents the decision information of attribute \(\theta (\theta = 1,2, \ldots ,t)\) of alternative \(\gamma (\gamma = 1,2, \ldots ,s)\). \({L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}\) is the mth value of \({L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\), \({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}\) is the corresponding probability, \(\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\) is the number of different linguistic terms in \({L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\). \({L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}\) is the nth value of \({L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\), \({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}\) is the corresponding probability, \(\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\) is the number of different linguistic terms in \({L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\).

4.1 Calculation method for attribute weights based on the credibility

The determination of the attribute weights is a very important part of MAGDM. However, due to the complexity and uncertainty of the decision environment, the attribute weights are often unknown. Therefore, we propose a method to determine the attribute weights in the Z probabilistic linguistic (ZPL) environment based on the credibility of the information. The main idea of this method is that the higher the credibility of the evaluation information, the more credible the information is and the higher the weight assigned to it.

The information credibility of attribute \(\theta \) is denoted by:

$$\begin{aligned} {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}}) = \{ {L_{{{{\hat{B}}}_\theta }}}^{(f)}({p_{{{{\hat{B}}}_\theta }}}^{(f)})|f = 1,2, \ldots ,\# {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})\}, \end{aligned}$$
(11)

where \({L_{{{{\hat{B}}}_\theta }}}^{(f)}\) is the fth value of \({L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})\), \({p_{{{{\hat{B}}}_\theta }}}^{(f)}\) is the corresponding probability, \(\# {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})\) is the number of different linguistic terms in \({L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})\), and \({v_{{{{\hat{B}}}_\theta }}}^{(f)}\) is the subscript of \({L_{{{{\hat{B}}}_\theta }}}^{(f)}\). Moreover,

$$\begin{aligned} {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}}) = \frac{{\sum \limits _{\gamma = 1}^s {{L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})} }}{s}. \end{aligned}$$
(12)

Using the score of PLTSs in Pang et al. (2016), the score of \({L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})\) is:

$$\begin{aligned} E({L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})) = {s'_{{{\bar{\alpha }}_{_\theta }}}}, \end{aligned}$$
(13)

where \( {{\bar{\alpha }} _{_\theta }} = {{\sum \limits _{f = 1}^{\# {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})} {{p_{{{{\hat{B}}}_\theta }}}^{(f)}{v_{{{{\hat{B}}}_\theta }}}^{(f)}} } \Big / {\sum \limits _{f = 1}^{\# {L_{{{{\hat{B}}}_\theta }}}({p_{{{{\hat{B}}}_\theta }}})} {{p_{{{{\hat{B}}}_\theta }}}^{(f)}} }}. \)

Then the attribute weight \({\omega _\theta }\) is:

$$\begin{aligned} {\omega _\theta } = \frac{{g({{s'}_{{{{\bar{\alpha }} }_{_\theta }}}})}}{{\sum \nolimits _{\theta = 1}^t {g({{s'}_{{{{\bar{\alpha }} }_{_\theta }}}})} }},(\theta = 1,2, \ldots ,t), \end{aligned}$$
(14)

where \({g({{s'}_{{{{\bar{\alpha }} }_{_\theta }}}})}\) is calculated by the equation given in Definition 4. It is easy to verify that \({\omega _\theta } > 0\).

By normalizing \({\omega _\theta }(\theta = 1,2, \ldots ,t)\) so that its sum is 1, the following equation is obtained:

$$\begin{aligned} \omega _\theta ^* = \frac{{{\omega _\theta }}}{{\sum \nolimits _{\theta = 1}^t {{\omega _\theta }} }}. \end{aligned}$$
(15)
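The weight-determination steps of Eqs. (12)–(15) can be sketched as follows. This is an illustrative Python sketch with hypothetical helper names; it assumes the linear scale function \(g({s_\alpha }) = (\alpha + \tau )/(2\tau )\) on a scale \({s_{ - \tau }}, \ldots ,{s_\tau }\), which is consistent with the \(g^{-1}\) values used in Example 4 (e.g., \({g^{ - 1}}(2/3) = {s_1}\) with \(\tau = 3\)).

```python
def g(alpha, tau=3.0):
    # Assumed linear scale function of Definition 4: subscript in [-tau, tau] -> [0, 1]
    return (alpha + tau) / (2 * tau)

def credibility_weights(B_cols, tau=3.0):
    """Attribute weights from mean credibility (Eqs. 12-15).
    B_cols[theta] is the averaged credibility PLTS of attribute theta,
    given as a list of (subscript, probability) pairs."""
    # Eq. 13: probability-weighted mean subscript of each credibility PLTS
    alphas = [sum(v * p for v, p in col) / sum(p for _, p in col) for col in B_cols]
    raw = [g(a, tau) for a in alphas]  # Eq. 14 numerators
    total = sum(raw)
    return [r / total for r in raw]   # Eqs. 14-15: normalized so weights sum to 1

# Hypothetical averaged credibility information for three attributes
cols = [[(1, 0.5), (2, 0.5)], [(0, 1.0)], [(-1, 0.4), (0, 0.6)]]
w = credibility_weights(cols)
print(w, sum(w))
```

As expected, a higher mean credibility yields a higher weight, and the weights sum to 1.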

In this way, we obtain the attribute weight vector \(W = {({\omega _1},{\omega _2}, \ldots ,{\omega _t})^T}\). It is used in the following operators and the extended TOPSIS method for ZPLVs.

4.2 Some aggregate operators based on ZPLTSs

In decision making, information aggregation is an indispensable part. To integrate ZPLTSs in decision making, we propose three operators to aggregate decision information.

4.2.1 ZPLWA operator

First, we propose the Z probabilistic linguistic weighted averaging (ZPLWA) operator.

Definition 15

Let \({{\hat{Z}}^\# }\) be a ZPLTS, \({{\hat{Z}}_\theta } = ({{\hat{A}}_\theta },{{\hat{B}}_\theta }) \in {{\hat{Z}}^\# }\), \(\theta = 1,2, \ldots ,t\). The ZPLWA operator is defined as follows:

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }}, \end{aligned}$$
(16)

where \({\omega _\theta }\) is calculated by the method proposed in Sect. 4.1, \({\omega _\theta }\in [0,1]\), and \(\sum \limits _{\theta = 1}^t {{\omega _{\theta } } = 1} \).

Theorem 3

Let \( {{\hat{Z}}_\theta } = ({{\hat{A}}_\theta },{{\hat{B}}_\theta }) = ({L_{{\hat{A}}\theta }}({p_{{\hat{A}}\theta }}),{L_{{\hat{B}}\theta }}({p_{{\hat{B}}\theta }})) = (\{ {L_{{\hat{A}}\theta }}^{(m)}({p_{{\hat{A}}\theta }}^{(m)})|m = 1,2, \ldots , \# {L_{{\hat{A}}\theta }}({p_{{\hat{A}}\theta }})\} ,\{ {L_{{\hat{B}}\theta }}^{(n)}({p_{{\hat{B}}\theta }}^{(n)})|n = 1,2, \ldots ,\# {L_{{\hat{B}}\theta }}({p_{{\hat{B}}\theta }})\} ) \) \((\theta = 1,2, \ldots ,t)\) be \(t\) ZPLVs; then their aggregated value obtained by the ZPLWA operator is also a ZPLV, and

$$\begin{aligned}&\begin{array}{l} {\phi _{ZPLWA}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }} = \sum \limits _{\theta = 1}^t {{\omega _\theta }\left( {{{\hat{A}}}_\theta },{{{\hat{B}}}_\theta }\right) } \\ \qquad = \left( \sum \limits _{\theta = 1}^t {{\omega _\theta }} {{{\hat{A}}}_\theta },\sum \limits _{\theta = 1}^t {{{{\hat{B}}}_\theta }} \right) \\ \qquad = \left( \sum \limits _{\theta = 1}^t {{\omega _\theta }{L_{{\hat{A}}\theta }}(p)} ,\frac{{\sum \limits _{\theta = 1}^t {{L_{{\hat{B}}\theta }}(p)} }}{t}\right) . \end{array} \end{aligned}$$
(17)

Proof

For \(t = 2\), \( {\omega _1}{{{\hat{Z}}}_1} + {\omega _2}{{{\hat{Z}}}_2} = {\omega _1}({{{\hat{A}}}_1},{{{\hat{B}}}_1}) + {\omega _2}({{{\hat{A}}}_2},{{{\hat{B}}}_2})\; = {\omega _1}({L_{{\hat{A}}1}}(p),{L_{{\hat{B}}1}}(p)) + {\omega _2}({L_{{\hat{A}}2}}(p),{L_{{\hat{B}}2}}(p)) = \left( {\omega _1}{L_{{\hat{A}}1}}(p) + {\omega _2}{L_{{\hat{A}}2}}(p),\frac{{{L_{{\hat{B}}1}}(p) + {L_{{\hat{B}}2}}(p)}}{2}\right) . \)

Suppose the equation holds for \(t = k\), i.e.,

$$\begin{aligned} \begin{array}{l} {\omega _1}{{{\hat{Z}}}_1} + {\omega _2}{{{\hat{Z}}}_2} +\cdots + {\omega _k}{{{\hat{Z}}}_k} = \sum \limits _{\theta = 1}^k {{\omega _\theta }{{{\hat{Z}}}_\theta }} = \sum \limits _{\theta = 1}^k {{\omega _\theta }({{{\hat{A}}}_\theta },{{{\hat{B}}}_\theta })} \\ \quad \quad = \left( \sum \limits _{\theta = 1}^k {{\omega _\theta }} {{{\hat{A}}}_\theta },\sum \limits _{\theta = 1}^k {{{{\hat{B}}}_\theta }} \right) \\ \quad \quad \quad = \left( \sum \limits _{\theta = 1}^k {{\omega _\theta }{L_{{\hat{A}}\theta }}(p)} ,\frac{{\sum \limits _{\theta = 1}^k {{L_{{\hat{B}}\theta }}(p)} }}{k}\right) . \end{array} \end{aligned}$$

When \(t = k + 1\), we have

$$\begin{aligned}&\begin{array}{l} {\omega _1}{{{\hat{Z}}}_1} + {\omega _2}{{{\hat{Z}}}_2} +\cdots + {\omega _{k + 1}}{{{\hat{Z}}}_{k + 1}} = \sum \limits _{\theta = 1}^k {{\omega _\theta }{{{\hat{Z}}}_\theta }} + {\omega _{k + 1}}{{{\hat{Z}}}_{k + 1}}\\ \qquad = \sum \limits _{\theta = 1}^k {{\omega _\theta }({{{\hat{A}}}_\theta },{{{\hat{B}}}_\theta })} + {\omega _{k + 1}}({{{\hat{A}}}_{k + 1}},{{{\hat{B}}}_{k + 1}})\\ \qquad = \left( \sum \limits _{\theta = 1}^k {{\omega _\theta }} {{{\hat{A}}}_\theta },\sum \limits _{\theta = 1}^k {{{{\hat{B}}}_\theta }} \right) + ({\omega _{k + 1}}{{{\hat{A}}}_{k + 1}},{{{\hat{B}}}_{k + 1}})\\ \qquad = \left( \sum \limits _{\theta = 1}^{k + 1} {{\omega _\theta }{L_{{\hat{A}}\theta }}(p)} ,\frac{{\sum \limits _{\theta = 1}^{k + 1} {{L_{{\hat{B}}\theta }}(p)} }}{{k + 1}}\right) . \end{array} \end{aligned}$$

Therefore, by induction, the equation holds for all \(t\). \(\square \)

Theorem 4

(Monotonicity) Let \(({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\) and \(({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t)\) be two ZPLTSs. If \({{\hat{Z}}_\theta } < {{\hat{Z}}^*}_\theta \) for all \(\theta = 1,2, \ldots ,t\), then

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) < {\phi _{ZPLWA}}({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t). \end{aligned}$$
(18)

Proof

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})= & {} \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }} \\ {\phi _{ZPLWA}}({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t)= & {} \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}^*}_\theta } \end{aligned}$$

Since \({{\hat{Z}}_\theta } < {{\hat{Z}}^*}_\theta \) holds for all \(\theta = 1,2, \ldots ,t\), we have \({\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) < {\phi _{ZPLWA}}({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t)\). \(\square \)

Theorem 5

(Idempotency) If \({{\hat{Z}}_\theta },{\hat{Z}} \in {{\hat{Z}}^\# }\) and \({{\hat{Z}}_\theta } = {\hat{Z}}\), \(\theta = 1,2, \ldots ,t\) , then

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = {\hat{Z}}. \end{aligned}$$
(19)

Proof

Since for all \(\theta = 1,2, \ldots ,t\), we have \({{\hat{Z}}_\theta } = {\hat{Z}}\),

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }} = \sum \limits _{\theta = 1}^t {{\omega _\theta }{\hat{Z}}}. \end{aligned}$$

According to \(\sum \limits _{\theta = 1}^t {{\omega _\theta } = 1} \),

$$\begin{aligned} {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = {\hat{Z}}. \end{aligned}$$

\(\square \)

Theorem 6

(Boundedness) Let \({{\hat{Z}}_m} = \min ({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\), \({{\hat{Z}}_M} = \max ({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\). Then

$$\begin{aligned} {{\hat{Z}}_m} \le {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) \le {{\hat{Z}}_M}. \end{aligned}$$
(20)

Proof

For \({{\hat{Z}}_m} \le {{\hat{Z}}_\theta } \le {{\hat{Z}}_M}\), \(\theta = 1,2, \ldots ,t\), and \(\sum \limits _{\theta = 1}^t {{\omega _\theta } = 1} , \)

$$\begin{aligned} \begin{array}{l} {\phi _{ZPLWA}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }} \ge \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_m}} = {{{\hat{Z}}}_m}. \end{array}\\ \begin{array}{l} {\phi _{ZPLWA}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_\theta }} \le \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_M}} = {{{\hat{Z}}}_M}. \end{array} \end{aligned}$$

Therefore, \({{\hat{Z}}_m} \le {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) \le {{\hat{Z}}_M}\). \(\square \)

Remark 2

When \({\omega _\theta } = \frac{1}{t}\), the ZPLWA operator is reduced to the Z probabilistic linguistic averaging (ZPLA) operator.

$$\begin{aligned}&\begin{array}{l} {\phi _{ZPLA}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = \sum \limits _{\theta = 1}^t {\frac{1}{t}{{{\hat{Z}}}_\theta }} = \frac{1}{t}\sum \limits _{\theta = 1}^t {({{{\hat{A}}}_\theta },{{{\hat{B}}}_\theta })} = \left( \frac{1}{t}\sum \limits _{\theta = 1}^t {{{{\hat{A}}}_\theta }} ,\sum \limits _{\theta = 1}^t {{{{\hat{B}}}_\theta }} \right) \\ \qquad =\left( \frac{{\sum \limits _{\theta = 1}^t {{L_{{\hat{A}}\theta }}(p)} }}{t},\frac{{\sum \limits _{\theta = 1}^t {{L_{{\hat{B}}\theta }}(p)} }}{t}\right) . \end{array} \end{aligned}$$
(21)
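As a simplified numeric illustration of the ZPLWA structure in Eq. (17), the sketch below aggregates the expected subscripts of the A and B parts instead of the full PLTS probability distributions: the A parts combine with the attribute weights, while the B (credibility) parts are averaged. This is an approximation for intuition only, not the full ZPLV arithmetic; the helper names are hypothetical.

```python
def zplwa_expected(zplvs, weights):
    """Expected-subscript sketch of the ZPLWA operator (Eq. 17).
    Each ZPLV is a pair (A, B) of lists of (subscript, probability)."""
    def mean(part):
        # probability-weighted mean subscript of one PLTS part
        return sum(v * p for v, p in part) / sum(p for _, p in part)
    a = sum(w * mean(A) for w, (A, B) in zip(weights, zplvs))  # A: weighted
    b = sum(mean(B) for A, B in zplvs) / len(zplvs)            # B: averaged
    return a, b

# Z1 and Z2 of Example 4 (normalized), with hypothetical weights (0.6, 0.4)
z1 = ([(0, 0.2), (1, 0.4), (3, 0.4)], [(-2, 0.2), (0, 0.8)])
z2 = ([(-1, 0.4), (0, 0.6)], [(1, 0.5), (2, 0.5)])
print(zplwa_expected([z1, z2], [0.6, 0.4]))  # approximately (0.8, 0.55)
```

With equal weights \(1/t\), the same routine mirrors the ZPLA operator of Eq. (21).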

4.2.2 ZPLWG operator

Second, we propose the Z probabilistic linguistic weighted geometric (ZPLWG) operator.

Definition 16

Let \({{\hat{Z}}^\# }\) be a ZPLTS, \({{\hat{Z}}_\theta } = ({{\hat{A}}_\theta },{{\hat{B}}_\theta }) \in {{\hat{Z}}^\# }\), \(\theta = 1,2, \ldots ,t\). The ZPLWG operator is defined as follows:

$$\begin{aligned} {\phi _{ZPLWG}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \prod \limits _{\theta = 1}^t {{{{\hat{Z}}}^{{\omega _\theta }}}_\theta }, \end{aligned}$$
(22)

where \({\omega _\theta }\) is calculated by the method proposed in Sect. 4.1, \({\omega _\theta }\in [0,1]\), and \(\sum \limits _{\theta = 1}^t {{\omega _{\theta } } = 1} \).

Theorem 7

Let

$$\begin{aligned}&{{\hat{Z}}_\theta } = ({{\hat{A}}_\theta },{{\hat{B}}_\theta }) = ({L_{{\hat{A}}\theta }}({p_{{\hat{A}}\theta }}),{L_{{\hat{B}}\theta }}({p_{{\hat{B}}\theta }})) = (\{ {L_{{\hat{A}}\theta }}^{(m)}({p_{{\hat{A}}\theta }}^{(m)})|m = 1,2, \ldots ,\\&\# {L_{{\hat{A}}\theta }}({p_{{\hat{A}}\theta }})\} ,\{ {L_{{\hat{B}}\theta }}^{(n)}({p_{{\hat{B}}\theta }}^{(n)})|n = 1,2, \ldots ,\# {L_{{\hat{B}}\theta }}({p_{{\hat{B}}\theta }})\} ), \end{aligned}$$

\(\theta = 1,2, \ldots ,t\); then their aggregated value obtained by the ZPLWG operator is also a ZPLV, and

$$\begin{aligned} \begin{array}{l} {\phi _{ZPLWG}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = \prod \limits _{\theta = 1}^t {{{{\hat{Z}}}^{{\omega _\theta }}}_\theta } = \prod \limits _{\theta = 1}^t {{{({{{\hat{A}}}_\theta },{{{\hat{B}}}_\theta })}^{{\omega _\theta }}}} \\ \quad \quad \quad \quad = \left( \prod \limits _{\theta = 1}^t {{{{\hat{A}}}_\theta }^{{\omega _\theta }}} ,\prod \limits _{\theta = 1}^t {{{{\hat{B}}}_\theta }^{{\omega _\theta }}} \right) \\ \quad \quad \quad \quad = \left( \prod \limits _{\theta = 1}^t {{L_{{{{\hat{A}}}_\theta }}}{{({p_{{\hat{A}}\theta }})}^{{\omega _\theta }}}} ,\prod \limits _{\theta = 1}^t {{L_{{{{\hat{B}}}_\theta }}}{{({p_{{\hat{B}}\theta }})}^{{\omega _\theta }}}} \right) .\\ \end{array} \end{aligned}$$
(23)

Theorem 8

(Monotonicity) Let \(({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\) and \(({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t)\) be two ZPLTSs. If \({{\hat{Z}}_\theta } < {{\hat{Z}}^*}_\theta \) for all \(\theta = 1,2, \ldots ,t\), then

$$\begin{aligned} {\phi _{ZPLWG}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) < {\phi _{ZPLWG}}({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t). \end{aligned}$$
(24)

Theorem 9

(Idempotency) If \({{\hat{Z}}_\theta },{\hat{Z}} \in {{\hat{Z}}^\# }\) and \({{\hat{Z}}_\theta } = {\hat{Z}}\), \(\theta = 1,2, \ldots ,t\), then

$$\begin{aligned} {\phi _{ZPLWG}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = {\hat{Z}}. \end{aligned}$$
(25)

Theorem 10

(Boundedness) Let \({{\hat{Z}}_m} = \min ({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\), \({{\hat{Z}}_M} = \max ({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\). Then

$$\begin{aligned} {{\hat{Z}}_m} \le {\phi _{ZPLWG}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) \le {{\hat{Z}}_M}. \end{aligned}$$
(26)

Remark 3

When \({\omega _\theta } = \frac{1}{t}\), the ZPLWG operator is reduced to the Z probabilistic linguistic geometric (ZPLG) operator.

$$\begin{aligned} \begin{array}{l} {\phi _{ZPLG}}({{{\hat{Z}}}_1},{{{\hat{Z}}}_2}, \ldots ,{{{\hat{Z}}}_t}) = {{{\hat{Z}}}_1}^{\frac{1}{t}} \otimes {{{\hat{Z}}}_2}^{\frac{1}{t}} \otimes \cdots \otimes {{{\hat{Z}}}_t}^{\frac{1}{t}}= (\prod \limits _{\theta = 1}^t {{L_{{\hat{A}}\theta }}{{(p)}^{\frac{1}{t}}}} ,\prod \limits _{\theta = 1}^t {{L_{{\hat{B}}\theta }}{{(p)}^{\frac{1}{t}}}} ). \end{array} \end{aligned}$$
(27)

4.2.3 ZPLWD operator

Finally, we propose the Z probabilistic linguistic weighted distance (ZPLWD) operator.

For ZPLVs, the positive ideal solution is reached when \({\hat{A}}\) and \({\hat{B}}\) are both the largest, that is, when the evaluation is the highest and the credibility is the highest.

Definition 17

Let \(R = {[{{\hat{Z}}_{\gamma \theta }}]_{s \times t}}\) be a ZPLV decision matrix, where

$$\begin{aligned}&{{\hat{Z}}_{\gamma \theta }} = ({L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}}),{L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}}))\\&= (\{ {L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)})|m = 1,2, \ldots ,\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\} ,\\&\{ {L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)})|n = 1,2, \ldots ,\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\} ). \end{aligned}$$

Then the positive ideal solution of the alternatives is \({{\hat{Z}}^ + } = ({\hat{Z}}_1^ + ,{\hat{Z}}_2^ + , \cdots ,{\hat{Z}}_t^ + )\), where

$$\begin{aligned}&{\hat{Z}}_\theta ^ + = ({L_{{{{\hat{A}}}_\theta }}}^ + ({p_{{{{\hat{A}}}_\theta }}}),{L_{{{{\hat{B}}}_\theta }}}^ + ({p_{{{{\hat{B}}}_\theta }}}))\\&= (\{ {({L_{{{{\hat{A}}}_\theta }}}^{(m)})^ + }|m = 1,2, \ldots ,\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\} ,\{ {({L_{{{{\hat{B}}}_\theta }}}^{(n)})^ + }|n = 1,2, \ldots ,\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\} ) \end{aligned}$$

and \(({({L_{{{{\hat{A}}}_\theta }}}^{(m)})^ + },{({L_{{{{\hat{B}}}_\theta }}}^{(n)})^ + }) = ({s_{\mathop {\max }\limits _\gamma \{ p_{\gamma \theta }^{(m)}v_{\gamma \theta }^{(m)}\} }}, {s'_{\mathop {\max }\limits _\gamma \{ p_{\gamma \theta }^{(n)}v_{\gamma \theta }^{(n)}\} }})\), \(v_{\gamma \theta }^{(m)}\) and \(v_{\gamma \theta }^{(n)}\) are the subscripts of \({L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}\) and \({L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}\), respectively.

Definition 18

Let \({{\hat{Z}}^\# }\) be a ZPLTS, \({{\hat{Z}}_\theta } = ({{\hat{A}}_\theta },{{\hat{B}}_\theta }) \in {{\hat{Z}}^\# }\), and \(\theta = 1,2, \ldots ,t\). The ZPLWD operator is defined as follows:

$$\begin{aligned} {\phi _{ZPLWD}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\; = \sum \limits _{\theta = 1}^t {{\omega _\theta }} d({{\hat{Z}}_\theta },{{\hat{Z}}_\theta }^ + ), \end{aligned}$$
(28)

where \({{\hat{Z}}_\theta }^ + \) is the positive ideal solution, which is calculated by Definition 17. \({\omega _\theta }\) is calculated by the method proposed in Sect. 4.1, \({\omega _\theta }\in [0,1]\), and \(\sum \limits _{\theta = 1}^t {{\omega _{\theta } } = 1} \).

Their aggregated value obtained by the ZPLWD operator is the distance between \({x_\gamma }\) and the optimal alternative, denoted by \(d({x_\gamma },{{\hat{Z}}^ + })\). \(d({{\hat{Z}}_\theta },{{\hat{Z}}_\theta }^ + )\) is abbreviated as \({d_\theta }\).

Theorem 11

(Monotonicity) Let \(({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t})\) and \(({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t)\) be two ZPLTSs. If \({d_\theta } < {d_\theta }^*\) for all \(\theta = 1,2, \ldots ,t\), then

$$\begin{aligned} {\phi _{ZPLWD}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) < {\phi _{ZPLWD}}({{\hat{Z}}^*}_1,{{\hat{Z}}^*}_2, \ldots ,{{\hat{Z}}^*}_t). \end{aligned}$$
(29)

Theorem 12

(Idempotency) If \({{\hat{Z}}_\theta } \in {{\hat{Z}}^\# }\) and \({d_\theta } = d\) for all \(\theta = 1,2, \ldots ,t\), then

$$\begin{aligned} {\phi _{ZPLWD}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = d. \end{aligned}$$
(30)

Theorem 13

(Boundedness) Let \({d_m} = \min ({d_1},{d_2}, \ldots ,{d_t})\), \({d_M} = \max ({d_1},{d_2}, \ldots ,{d_t})\). Then

$$\begin{aligned} {d_m} \le {\phi _{ZPLWD}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) \le {d_M}. \end{aligned}$$
(31)
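As a quick sanity check, Eq. (28) and Theorems 11–13 can be verified numerically. The following Python sketch uses purely illustrative distances and weights (none of these values come from the paper's example):

```python
# ZPLWD aggregation (Eq. 28): a weighted sum of the per-attribute distances
# d_theta = d(Z_theta, Z_theta^+) to the positive ideal solution.
def zplwd(distances, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * d for w, d in zip(weights, distances))

d = [0.3, 0.5, 0.2]       # hypothetical d_theta values
w = [0.4, 0.35, 0.25]     # hypothetical attribute weights

print(round(zplwd(d, w), 3))                          # 0.345
# Idempotency (Theorem 12): equal distances aggregate to that distance.
assert abs(zplwd([0.4, 0.4, 0.4], w) - 0.4) < 1e-9
# Boundedness (Theorem 13): the result lies between min(d) and max(d).
assert min(d) <= zplwd(d, w) <= max(d)
```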

4.3 An extended TOPSIS method for ZPLTSs

We proposed a series of operators in the last section, but the aggregation process increases computational complexity and loses decision information. To overcome these shortcomings, we propose an extended TOPSIS method for ZPLTSs to solve MAGDM problems in the Z probabilistic linguistic environment. The main idea of the TOPSIS method is to find the alternative that is closest to the positive ideal solution and farthest from the negative ideal solution.

Definition 19

Let \(R = {[{{\hat{Z}}_{\gamma \theta }}]_{s \times t}}\) be a ZPLV decision matrix. Then the attribute values vector of the alternative \({x_\gamma }(\gamma = 1,2, \ldots ,s)\) is \({{\hat{Z}}_\gamma } = ({{\hat{Z}}_{\gamma 1}},{{\hat{Z}}_{\gamma 2}}, \ldots ,{{\hat{Z}}_{\gamma t}})\).

The definition of the positive ideal solution is proposed in Definition 17. The negative ideal solution requires the worst evaluation and the highest credibility.

Definition 20

Let \(R = {[{{\hat{Z}}_{\gamma \theta }}]_{s \times t}}\) be a ZPLV decision matrix, where \( {{\hat{Z}}_{\gamma \theta }} = ({L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}}),{L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})) = (\{ {L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)})|m = 1,2, \ldots , \# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\} ,\{ {L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)})|n = 1,2, \ldots , \# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\} ). \) Then the negative ideal solution of the alternatives is \({{\hat{Z}}^ - } = ({\hat{Z}}_1^ - ,{\hat{Z}}_2^ - , \ldots ,{\hat{Z}}_t^ - )\), where \( {\hat{Z}}_\theta ^ - = ({L_{{{{\hat{A}}}_\theta }}}^ - ({p_{{{{\hat{A}}}_\theta }}}),{L_{{{{\hat{B}}}_\theta }}}^ + ({p_{{{{\hat{B}}}_\theta }}})) = (\{ {({L_{{{{\hat{A}}}_\theta }}}^{(m)})^ - }|m = 1,2, \ldots , \# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\} ,\{ {({L_{{{{\hat{B}}}_\theta }}}^{(n)})^ + }|n = 1,2, \ldots , \# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})\} )\) and \(({({L_{{{{\hat{A}}}_\theta }}}^{(m)})^ - },{({L_{{{{\hat{B}}}_\theta }}}^{(n)})^ + })= ({s_{\mathop {\min }\limits _\gamma \{ p_{\gamma \theta }^{(m)}v_{\gamma \theta }^{(m)}\} }},{s'_{\mathop {\max }\limits _\gamma \{ p_{\gamma \theta }^{(n)}v_{\gamma \theta }^{(n)}\} }})\), where \({v_{\gamma \theta }^{(m)}}\) and \({v_{\gamma \theta }^{(n)}}\) are the subscripts of \({L_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}\) and \({L_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}\), respectively.
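The construction of the ideal subscripts in Definitions 17 and 20 can be sketched as follows. Note that for the negative ideal solution the evaluation part takes the minimum while the credibility part still takes the maximum, so the function below handles one part at a time; all numeric values are hypothetical:

```python
# For each element position m, take the max (positive ideal) or min
# (negative ideal) over alternatives gamma of the product p * v, where v is
# the linguistic subscript and p its probability.
def ideal_subscripts(pv_by_alternative, mode):
    # pv_by_alternative: list over alternatives, each a list of
    # (probability, subscript) pairs for the m-th linguistic term.
    pick = max if mode == "positive" else min
    n_terms = len(pv_by_alternative[0])
    return [pick(p * v for p, v in (alt[m] for alt in pv_by_alternative))
            for m in range(n_terms)]

# Three alternatives, two possible terms each, given as (p, v) pairs:
alts = [[(0.4, 0), (0.6, 1)],
        [(0.5, 1), (0.5, 2)],
        [(1.0, 1), (0.0, 3)]]
print(ideal_subscripts(alts, "positive"))  # [1.0, 1.0]
print(ideal_subscripts(alts, "negative"))  # [0.0, 0.0]
```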

The distance between \({x_\gamma }\) and the positive ideal solution is:

$$\begin{aligned} {d_\gamma }^ + = d({x_\gamma },{{\hat{Z}}^ + }) = \sum \limits _{\theta = 1}^t {{\omega _\theta }} d({{\hat{Z}}_{\gamma \theta }},{{\hat{Z}}_\theta }^ + ), \end{aligned}$$
(32)

where

$$\begin{aligned}&d({{\hat{Z}}_{\gamma \theta }},{{\hat{Z}}_\theta }^ {+} ) =\\&\quad \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})} {\sum \limits _{n = 1}^{\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})} {{{\left( {({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}{v_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)} {+} \varsigma )({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}{v_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)} {+} \zeta ) {-} ({{({p_{{{{\hat{A}}}_\theta }}}^{(m)}{v_{{{{\hat{A}}}_\theta }}}^{(m)})}^ {+} } {+} \varsigma )({{({p_{{{{\hat{B}}}_\theta }}}^{(n)}{v_{{{{\hat{B}}}_\theta }}}^{(n)})}^ {+} } {+} \zeta )} \right) }^2}} } }}{{\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})}}}, \end{aligned}$$

\({\omega _\theta }\) is calculated by the method proposed in Sect. 4.1.

The distance between \({x_\gamma }\) and the negative ideal solution is:

$$\begin{aligned} {d_\gamma }^ - = d({x_\gamma },{{\hat{Z}}^ - }) = \sum \limits _{\theta = 1}^t {{\omega _\theta }} d({{\hat{Z}}_{\gamma \theta }},{{\hat{Z}}_\theta }^ - ), \end{aligned}$$
(33)

where

$$\begin{aligned}&d({{\hat{Z}}_{\gamma \theta }},{{\hat{Z}}_\theta }^ - ) = \\&\quad \sqrt{\frac{{\sum \limits _{m = 1}^{\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})} {\sum \limits _{n = 1}^{\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})} {{{\left( {({p_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)}{v_{{{{\hat{A}}}_{\gamma \theta }}}}^{(m)} + \varsigma )({p_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)}{v_{{{{\hat{B}}}_{\gamma \theta }}}}^{(n)} + \zeta ) - ({{({p_{{{{\hat{A}}}_\theta }}}^{(m)}{v_{{{{\hat{A}}}_\theta }}}^{(m)})}^ - } + \varsigma )({{({p_{{{{\hat{B}}}_\theta }}}^{(n)}{v_{{{{\hat{B}}}_\theta }}}^{(n)})}^ + } + \zeta )} \right) }^2}} } }}{{\# {L_{{{{\hat{A}}}_{\gamma \theta }}}}({p_{{{{\hat{A}}}_{\gamma \theta }}}})\# {L_{{{{\hat{B}}}_{\gamma \theta }}}}({p_{{{{\hat{B}}}_{\gamma \theta }}}})}}} , \end{aligned}$$

\({\omega _\theta }\) is calculated by the method proposed in Sect. 4.1.
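The distance inside Eqs. (32) and (33) can be read as a root-mean-square deviation over all term pairs \((m,n)\), where each term compares the product of the alternative's shifted evaluation and credibility components against the ideal's. A sketch under this reading, with \(\varsigma\) and \(\zeta\) as the shift constants from the formula (their concrete values, like all numbers below, are assumptions for illustration):

```python
import math

# RMS-style distance between an alternative's ZPLV and an ideal solution,
# following the double sum in Eqs. (32)-(33). pv_A, pv_B hold the p*v
# products of the evaluation (A) and credibility (B) parts; ideal_A,
# ideal_B hold the corresponding ideal products.
def zpl_distance(pv_A, pv_B, ideal_A, ideal_B, varsigma=0.0, zeta=3.0):
    total = 0.0
    for m, a in enumerate(pv_A):
        for n, b in enumerate(pv_B):
            dev = ((a + varsigma) * (b + zeta)
                   - (ideal_A[m] + varsigma) * (ideal_B[n] + zeta))
            total += dev ** 2
    return math.sqrt(total / (len(pv_A) * len(pv_B)))

# Identical arguments give zero distance, as expected of a distance measure.
print(zpl_distance([0.6, 1.2], [0.8], [0.6, 1.2], [0.8]))  # 0.0
```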

It is not difficult to see that the smaller \({d_\gamma }^ + \) and the larger \({d_\gamma }^ - \) are, the better the alternative is.

The smallest distance between \({x_\gamma }\) and the positive ideal solution is:

$$\begin{aligned} d_{\min }^ + = \mathop {\min }\limits _{1 \le \gamma \le s} {d_\gamma }^ + . \end{aligned}$$
(34)

The largest distance between \({x_\gamma }\) and the negative ideal solution is:

$$\begin{aligned} d_{\max }^ - = \mathop {\max }\limits _{1 \le \gamma \le s} {d_\gamma }^ - . \end{aligned}$$
(35)

Then, the closeness coefficient of the alternative \({x_\gamma }(\gamma = 1,2, \ldots ,s)\) is:

$$\begin{aligned} C({x_\gamma }) = \frac{{{d_\gamma }^ - }}{{d_{\max }^ - }} - \frac{{{d_\gamma }^ + }}{{d_{\min }^ + }},\;(C({x_\gamma }) \le 0). \end{aligned}$$
(36)

The best alternative is:

$$\begin{aligned} {x_\gamma }^ * = \{ {x_\gamma }|\mathop {\max }\limits _{1 \le \gamma \le s} C({x_\gamma })\} . \end{aligned}$$
(37)
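Eqs. (34)–(37) combine into a short computation. The sketch below uses the distance values obtained later in the numerical example (Sect. 5.1.2) and reproduces its closeness coefficients:

```python
# Closeness coefficient (Eq. 36) and best-alternative selection (Eq. 37).
def closeness(d_plus, d_minus):
    d_min_plus = min(d_plus)      # Eq. (34)
    d_max_minus = max(d_minus)    # Eq. (35)
    return [dm / d_max_minus - dp / d_min_plus
            for dp, dm in zip(d_plus, d_minus)]

d_plus = [5.352, 5.630, 6.236]    # d_gamma^+ from the example in Sect. 5.1.2
d_minus = [5.127, 4.422, 4.862]   # d_gamma^-
c = closeness(d_plus, d_minus)
print([round(x, 3) for x in c])   # [0.0, -0.189, -0.217]
best = max(range(len(c)), key=lambda g: c[g])
print(best + 1)                   # alternative x1 is optimal
```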

4.4 A MAGDM model based on ZPLTSs

ZPLTSs can be widely used in MAGDM because they not only show the preferences of DMs, but also represent information in a more comprehensive way. We build a MAGDM model based on ZPLTSs in this section. The flow chart of the model is shown in Fig. 3.

Fig. 3 A MAGDM model based on ZPLTSs

In a MAGDM problem, assume there is a discrete set of alternatives \(X = \{ {x_1},{x_2}, \ldots ,{x_s}\} (s \ge 2)\), a set of DMs \(D = \{ {d_1},{d_2}, \ldots ,{d_q}\} (q \ge 2)\), and a set of attributes \(C = \{ {c_1},{c_2}, \ldots ,{c_t}\} (t \ge 2)\). Each DM gives a ZPLV decision matrix \({R^\eta }(\eta = 1,2, \ldots ,q)\) to evaluate every attribute of the different alternatives, where \({R^\eta } = {[{{\hat{Z}}^\eta }_{\gamma \theta }]_{s \times t}}\) and \( {{\hat{Z}}^\eta }_{\gamma \theta } = ({{\hat{A}}^\eta }_{\gamma \theta },{{\hat{B}}^\eta }_{\gamma \theta }) = ({L^\eta }_{{{{\hat{A}}}_{\gamma \theta }}}({p^\eta }_{{{{\hat{A}}}_{\gamma \theta }}}),{L^\eta }_{{{{\hat{B}}}_{\gamma \theta }}}({p^\eta }_{{{{\hat{B}}}_{\gamma \theta }}})) = (\{ {L^\eta }{_{{{{\hat{A}}}_{\gamma \theta }}}^{(m)}}({p^\eta }{_{{{{\hat{A}}}_{\gamma \theta }}}^{(m)}})|m = 1,2, \ldots ,\# {L^\eta }_{{{{\hat{A}}}_{\gamma \theta }}}({p^\eta }_{{{{\hat{A}}}_{\gamma \theta }}})\} , \{ {L^\eta }{_{{{{\hat{B}}}_{\gamma \theta }}}^{(n)}}({p^\eta }{_{{{{\hat{B}}}_{\gamma \theta }}}^{(n)}})|n = 1,2, \ldots ,\# {L^\eta }_{{{{\hat{B}}}_{\gamma \theta }}}({p^\eta }_{{{{\hat{B}}}_{\gamma \theta }}})\} ). \)
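Concretely, one entry of such a decision matrix pairs a probabilistic evaluation with a probabilistic credibility. A minimal representation (the field names are ours, not the paper's):

```python
from dataclasses import dataclass

# A ZPLV: a probabilistic linguistic evaluation over S plus a probabilistic
# linguistic credibility over S'. Keys are term subscripts, values are
# probabilities.
@dataclass
class ZPLV:
    eval_part: dict   # e.g. {0: 0.4, 1: 0.6}: s0 with p=0.4, s1 with p=0.6
    cred_part: dict   # e.g. {2: 0.4, 3: 0.6}: s'2 with p=0.4, s'3 with p=0.6

z = ZPLV(eval_part={0: 0.4, 1: 0.6}, cred_part={2: 0.4, 3: 0.6})
# In a normalized ZPLV both probability distributions sum to 1.
assert abs(sum(z.eval_part.values()) - 1.0) < 1e-9
assert abs(sum(z.cred_part.values()) - 1.0) < 1e-9
```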

Step 1. Combine all of the DMs’ evaluations and corresponding credibilities to construct the final ZPLV decision matrix \(R = {[{{\hat{Z}}_{\gamma \theta }}]_{s \times t}}\).

Step 2. Calculate the attribute weight vector by Eq. (16). If DMs choose the method of the operators, then go to Step 3. If DMs choose the extended TOPSIS method, then go to Step 5.

Step 3. DMs choose the appropriate operator according to the actual situation.

(1) Calculate the ZPLTS \({{\hat{Z}}_{x\gamma }}(\gamma = 1,2, \ldots ,s)\) of the alternative \({x_\gamma }\) by the ZPLWA operator.

$$\begin{aligned} {{\hat{Z}}_{x\gamma }} = ({{\hat{A}}_{x\gamma }},{{\hat{B}}_{x\gamma }}) = {\phi _{ZPLWA}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }{{{\hat{Z}}}_{\gamma \theta }}}. \end{aligned}$$

(2) Calculate the ZPLTS \({{\hat{Z}}_{x\gamma }}(\gamma = 1,2, \ldots ,s)\) of the alternative \({x_\gamma }\) by the ZPLWG operator.

$$\begin{aligned} {{\hat{Z}}_{x\gamma }} = ({{\hat{A}}_{x\gamma }},{{\hat{B}}_{x\gamma }}) = {\phi _{ZPLWG}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \prod \limits _{\theta = 1}^t {{{{\hat{Z}}}_{\gamma \theta }}^{{\omega _\theta }}}. \end{aligned}$$

(3) Calculate the distance \(d({x_\gamma },{{\hat{Z}}^ + })\) between each alternative and the positive ideal solution by the ZPLWD operator.

$$\begin{aligned} d({x_\gamma },{{\hat{Z}}^ + }) = {\phi _{ZPLWD}}({{\hat{Z}}_1},{{\hat{Z}}_2}, \ldots ,{{\hat{Z}}_t}) = \sum \limits _{\theta = 1}^t {{\omega _\theta }} d({{\hat{Z}}_{\gamma \theta }},{{\hat{Z}}_\theta }^ + ). \end{aligned}$$

Step 4. If DMs choose the ZPLWA operator or the ZPLWG operator, then rank \({\hat{Z}}_{x\gamma }\) with the ranking method proposed in Sect. 3.3. If DMs choose the ZPLWD operator, then the smaller \(d({x_\gamma },{{\hat{Z}}^ + })\) is, the better the alternative is. Select the best alternative(s) and go to Step 9.

Step 5. Determine the positive ideal solution \({{\hat{Z}}^ +}\) and the negative ideal solution \({{\hat{Z}}^ - }\) by Definitions 17 and 20.

Step 6. Calculate the distance \(d_\gamma ^ +\) between \({x_\gamma }\) and the positive ideal solution by Eq. (32), and the distance \(d_\gamma ^ -\) between \({x_\gamma }\) and the negative ideal solution by Eq. (33).

Step 7. Calculate \(d_{\min }^ + \) and \(d_{\max }^ - \) by Eq. (34) and Eq. (35), respectively.

Step 8. Obtain \(C({x_\gamma })\) of each alternative by Eq. (36) and select the best one(s) by Eq. (37).

Step 9. End.

The MAGDM model contains two methods: the operator-based method and the extended TOPSIS method. It is easy to see from Fig. 3 that the steps of the operator-based method are relatively simple, while the extended TOPSIS method reduces the loss of decision information. Each method has its advantages, so DMs can choose according to their needs.

5 Illustrative example and discussion

We illustrate ZPLTSs with a numerical example adapted from Pang et al. (2016).

5.1 A numerical example

A company plans a project for the following years, and there are three candidate projects. Five company decision makers must decide on one integrated optimal project to be the development plan of the company. The assessment mainly evaluates the following four aspects:

(1) Financial perspective (\({c_1}\));

(2) Customer satisfaction (\({c_2}\));

(3) Internal business process perspective (\({c_3}\));

(4) Learning and growth perspective (\({c_4}\)).

DMs provide the decision information represented by ZPLVs according to their experience, which includes the evaluation and the corresponding credibility. The evaluation is based on the LTS

$$\begin{aligned}&S = \{ {s_0} = none,{s_1} = very\;low,\;{s_2} = low,\;{s_3} = medium,\\&{s_4} = high,\;{s_5} = very\;high,\;{s_6} = perfect\;\}, \end{aligned}$$

and the credibility is based on the LTS

$$\begin{aligned} \begin{array}{l} {S^{\prime }} = \{ \;s_{ - 3}^{\prime } = \;not\;at\;all\;sure,\;s_{ - 2}^{\prime } = \;not\;sure,\;s_{ - 1}^{\prime } = a\;little\;\;not\;sure,\;\\ s_0^{\prime } = \;medium\;sure,\;s_1^{\prime } = a\;little\;sure,\;s_2^{\prime } = sure,\;s_3^{\prime } = very\;sure\;\}. \end{array} \end{aligned}$$

Five linguistic decision matrices provided by the five DMs are listed in Tables 1, 2, 3, 4 and 5. Next, we use the operators and the extended TOPSIS method to calculate.

Table 1 The first DM’s linguistic decision matrix
Table 2 The second DM’s linguistic decision matrix
Table 3 The third DM’s linguistic decision matrix
Table 4 The fourth DM’s linguistic decision matrix
Table 5 The fifth DM’s linguistic decision matrix

5.1.1 The aggregate operators

Step 1. The linguistic decision matrices of the five DMs are summarized into a ZPLV decision matrix. For example, for the first attribute of the first alternative, the evaluation information given by the five DMs is \(({s_0},{s'_2})\), \(({s_1},{s'_3})\), \(({s_1},{s'_2})\), \(({s_1},{s'_3})\), and \(({s_0},{s'_3})\). Since there are two \({s_0}\) and three \({s_1}\), the probabilities of \({s_0}\) and \({s_1}\) are 0.4 and 0.6, respectively. Similarly, the probabilities of \({s'_2}\) and \({s'_3}\) are 0.4 and 0.6. Therefore, the summarized ZPLV is \((\{ {s_0}(0.4),{s_1}(0.6)\} ,\{ {s'_2}(0.4),{s'_3}(0.6)\} )\). The ZPLV decision matrix is shown in Table 6. Then, Table 6 is normalized to obtain the normalized matrix shown in Table 7.
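The frequency-based aggregation in Step 1 can be sketched as follows (the string encoding of the terms is ours):

```python
from collections import Counter

# Convert the DMs' single linguistic terms into a probabilistic linguistic
# part by relative frequency: two s0 and three s1 among five DMs give
# probabilities 0.4 and 0.6, as in the example above.
def to_plts(terms):
    counts = Counter(terms)
    total = len(terms)
    return {term: counts[term] / total for term in sorted(counts)}

evaluations = ["s0", "s1", "s1", "s1", "s0"]     # the five DMs' evaluations
credibilities = ["s'2", "s'3", "s'2", "s'3", "s'3"]
print(to_plts(evaluations))     # {'s0': 0.4, 's1': 0.6}
print(to_plts(credibilities))   # {"s'2": 0.4, "s'3": 0.6}
```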

Table 6 The group linguistic decision matrix
Table 7 The normalized group linguistic decision matrix

Step 2. Calculate the attribute weight vector by Eq. (16).

$$\begin{aligned} W = {(0.278,0.257,0.238,0.226)^T}. \end{aligned}$$

Step 3. Select the optimal project with the operators based on ZPLTSs.

(1) Aggregate \({{\hat{Z}}_\gamma }(\gamma = 1,2,3)\) with the ZPLWA operator and select the optimal project.

$$\begin{aligned} S({{\hat{Z}}_{x1}})\approx & {} 22.826,\quad S({{\hat{Z}}_{x2}}) \approx 15.258,\quad S({{\hat{Z}}_{x3}}) \approx 13.1\\&\because S({{\hat{Z}}_{x1}})> S({{\hat{Z}}_{x2}})> S({{\hat{Z}}_{x3}}),\quad \therefore {x_{1}}> {x_{2}} > {x_{3}} . \end{aligned}$$

Therefore, \({x_{1}}\) is the optimal project.

(2) Aggregate \({{\hat{Z}}_\gamma }(\gamma = 1,2,3)\) with the ZPLWG operator and select the optimal project.

$$\begin{aligned} S({{\hat{Z}}_{x1}})\approx & {} 19.71,\quad S({{\hat{Z}}_{x2}}) \approx 10.192,\quad S({{\hat{Z}}_{x3}}) \approx 3.467\\&\because S({{\hat{Z}}_{x1}})> S({{\hat{Z}}_{x2}})> S({{\hat{Z}}_{x3}}),\quad \therefore {x_1}> {x_2} > {x_3}. \end{aligned}$$

Therefore, \({x_{1}}\) is the optimal project.

(3) Aggregate \({{\hat{Z}}_\gamma }(\gamma = 1,2,3)\) with the ZPLWD operator and select the optimal project.

Determine the positive ideal solution.

$$\begin{aligned}&\begin{array}{l} {{{\hat{Z}}}^ + } = ({\hat{Z}}_1^ + ,{\hat{Z}}_2^ + ,{\hat{Z}}_3^ + ,{\hat{Z}}_4^ + ) = ((\{ {s_{0.4}},{s_{0.6}},{s_0}\} ,\\ \qquad \{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{1.2}}\} ),(\{ {s_0},{s_{0.6}},{s_{0.25}}\} ,\{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{0.5}}\} ),\\ \qquad (\{ {s_0},{s_{0.8}},{s_{0.66}}\} ,\{ {{s'}_{0.5}},{{s'}_{0.5}},{{s'}_1}\} ),(\{ {s_{0.8}},{s_{1.2}},{s_0}\} ,\{ {{s'}_{1.6}},{{s'}_{1.6}},{{s'}_0}\} )) \end{array} \end{aligned}$$

Aggregate with the ZPLWD operator.

$$\begin{aligned} d({x_1},{Z^ + })\approx & {} 2.479,\quad d({x_2},{Z^ + }) \approx 4.818,\quad d({x_3},{Z^ + }) \approx 5.724\\&\because d({x_1},{Z^ + })< d({x_2},{Z^ + }) < d({x_3},{Z^ + }),\quad \therefore {x_1}> {x_2} > {x_3}. \end{aligned}$$

Therefore, \({x_{1}}\) is the optimal project.

5.1.2 The extended TOPSIS method

Step 1. Construct the ZPLV decision making matrix as Table 7.

Step 2. Calculate the attribute weight vector by Eq. (16).

$$\begin{aligned} W = {(0.278,0.257,0.238,0.226)^T}. \end{aligned}$$

Step 3. Determine \({{\hat{Z}}^ +}\) and \({{\hat{Z}}^ -}\) by Definitions 17 and 20.

$$\begin{aligned}&\begin{array}{l} {{{\hat{Z}}}^ + } = ({\hat{Z}}_1^ + ,{\hat{Z}}_2^ + ,{\hat{Z}}_3^ + ,{\hat{Z}}_4^ + ) = ((\{ {s_{0.4}},{s_{0.6}},{s_0}\} ,\{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{1.2}}\} ),(\{ {s_0},{s_{0.6}},{s_{0.25}}\} ,\\ \{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{0.5}}\} ),(\{ {s_0},{s_{0.8}},{s_{0.66}}\} ,\{ {{s'}_{0.5}},{{s'}_{0.5}},{{s'}_1}\} ),\\ \qquad (\{ {s_{0.8}},{s_{1.2}},{s_0}\} ,\{ {{s'}_{1.6}},{{s'}_{1.6}},{{s'}_0}\} )). \end{array}\\&\begin{array}{l} {{{\hat{Z}}}^ - } = ({\hat{Z}}_1^ - ,{\hat{Z}}_2^ - ,{\hat{Z}}_3^ - ,{\hat{Z}}_4^ - ) = ((\{ {s_0},{s_0},{s_0}\} ,\{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{1.2}}\} ),\\ \qquad (\{ {s_{ - 0.25}},{s_0},{s_0}\} ,\{ {{s'}_{0.8}},{{s'}_{1.8}},{{s'}_{0.5}}\} ), (\{ {s_{ - 0.5}},{s_{ - 0.5}},{s_0}\} ,\{ {{s'}_{0.5}},{{s'}_{0.5}},{{s'}_1}\} ),\\ \quad \quad (\{ {s_0},{s_0},{s_0}\} ,\{ {{s'}_{1.6}},{{s'}_{1.6}},{{s'}_0}\} )). \end{array} \end{aligned}$$

Step 4. Calculate \(d_\gamma ^ + \) and \(d_\gamma ^ - \) by Eq. (32) and Eq. (33), respectively.

$$\begin{aligned}&d_1^ + = 5.352,\;d_2^ + = 5.630,\;d_3^ + = 6.236\\&d_1^ - = 5.127,\;d_2^ - = 4.422,\;d_3^ - = 4.862 \end{aligned}$$

Step 5. Calculate \(d_{\min }^ + \) and \(d_{\max }^ - \) by Eq. (34) and Eq. (35), respectively.

$$\begin{aligned} d_{\min }^ + = 5.352,\;d_{\max }^ - = 5.127 \end{aligned}$$

Step 6. Obtain \(C({x_\gamma })\) of each project by Eq. (36) and select the best one by Eq. (37).

$$\begin{aligned}&C({x_1}) = 0,\;C({x_2}) = - 0.189,\;C({x_3}) = - 0.217\\&\quad \because C({x_1})> C({x_2})> C({x_3}),\;\therefore {x_1}> {x_2} > {x_3}. \end{aligned}$$

Therefore, \({x_1}\) is the optimal project.

5.2 Analysis and discussion

5.2.1 Comparison with particular ZPLVs

In this section, we explore the impact of the information credibility on the ranking results. In the numerical example above, we set all of the credibilities to \(\{ {s'_3}(1),{s'_3}(0),{s'_3}(0)\} \), and the normalized ZPLV decision matrix is then constructed as Table 8. In this case, the credibility is at its maximum and can be ignored, since the ZPLVs degenerate into PLTSs. We repeat the calculation of the previous section and compare the results with those obtained in Sect. 5.1, as shown in Table 9.

Table 8 The normalized group linguistic decision matrix when credibility is the maximum
Table 9 Comparison of results before and after credibility maximization

As can be seen from Table 9, the attribute weight vectors are the same. This is because all of the credibilities are equal. When the credibility of each evaluation differs, as in the example in Sect. 5.1, the obtained attribute weights differ as well. However, other weighting methods (such as the maximum deviation method) ignore the important role of the information credibility.

Besides the attribute weights, the change of the credibility also leads to changes of the ranking and the optimal alternative. These results further prove the necessity of the ZPLTSs and ZPLVs proposed in this paper.

5.2.2 Comparison with PLTSs

We use PLTSs (Pang et al. 2016) to calculate the above example as a comparison. The process is as follows:

First, we take the probabilistic linguistic weighted averaging (PLWA) operator as an example.

Step 1. Construct a probabilistic linguistic decision making matrix and normalize it. The normalized matrix is shown in Table 10.

Step 2. Calculate the attribute weight vector by the maximizing deviation method.

$$\begin{aligned} W = {(0.158,0.157,0.359,0.326)^T}. \end{aligned}$$

Step 3. Aggregate \({{\tilde{Z}}_\gamma }(\omega )\) with the PLWA operator:

$$\begin{aligned} {{\tilde{Z}}_\gamma }(\omega ) = {\omega _1}{L_{\gamma 1}}(p) \oplus {\omega _2}{L_{\gamma 2}}(p) \oplus {\omega _3}{L_{\gamma 3}}(p) \oplus {\omega _4}{L_{\gamma 4}}(p). \end{aligned}$$

Then,

$$\begin{aligned} {{\tilde{Z}}_1}(\omega ) = \{ {s_0},{s_{0.8674}},{s_0}\} ,\;{{\tilde{Z}}_2}(\omega ) = \{ {s_{ - 0.090}},{s_{ - 0.180}},{s_{0.039}}\} ,\;{{\tilde{Z}}_3}(\omega ) = \{ {s_{0.261}},{s_{0.417}},{s_{0.237}}\}. \end{aligned}$$

Step 4. The score of each alternative is

$$\begin{aligned} E({{\tilde{Z}}_1}(\omega )) = 0.289,\;E({{\tilde{Z}}_2}(\omega )) = - 0.077,\;E({{\tilde{Z}}_3}(\omega )) = 0.305 \end{aligned}$$
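The scores above are reproduced by taking the mean of the aggregated subscripts (our reading of the score function, which matches the reported values):

```python
# Score of an aggregated PLTS: with the probabilities already folded into the
# subscripts, the score is taken here as the mean subscript.
def score(subscripts):
    return sum(subscripts) / len(subscripts)

z1 = [0.0, 0.8674, 0.0]          # subscripts of Z~1(omega)
z2 = [-0.090, -0.180, 0.039]     # subscripts of Z~2(omega)
z3 = [0.261, 0.417, 0.237]       # subscripts of Z~3(omega)
print(round(score(z1), 3), round(score(z2), 3), round(score(z3), 3))
# 0.289 -0.077 0.305
```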

Step 5. Rank and select the optimal project.

$$\begin{aligned} \because E({{\tilde{Z}}_3}(\omega ))> E({{\tilde{Z}}_1}(\omega ))> E({{\tilde{Z}}_2}(\omega )),\;\therefore {x_3}> {x_1} > {x_2}. \end{aligned}$$

Therefore, \({x_3}\) is the optimal project.

Then, the extended TOPSIS method with PLTSs is used to calculate the numerical example.

Step 1. Construct a probabilistic linguistic decision making matrix and normalize it. The normalized matrix is shown in Table 10.

Table 10 The normalized group linguistic decision matrix with PLTSs

Step 2. Calculate the attribute weight vector by the maximizing deviation method.

$$\begin{aligned} W = {(0.158,0.157,0.359,0.326)^T}. \end{aligned}$$

Step 3. Determine the PIS \(L{(p)^ + }\) and the NIS \(L{(p)^ - }\).

$$\begin{aligned}&L{(p)^ + } = (L(p)_1^ + ,L(p)_2^ + ,L(p)_3^ + ,L(p)_4^ + ) = (\{ {s_{0.4}},{s_{0.6}},{s_0}\} ,\\&\{ {s_0},{s_{0.6}},{s_{0.25}}\} ,\{ {s_0},{s_{0.8}},{s_{0.66}}\} ,\{ {s_{0.8}},{s_{1.2}},{s_0}\} ).\\&L{(p)^ - } = (L(p)_1^ - ,L(p)_2^ - ,L(p)_3^ - ,L(p)_4^ - ) = (\{ {s_0},{s_0},{s_0}\} ,\\&\{ {s_{ - 0.25}},{s_0},{s_0}\} ,\{ {s_{ - 0.5}},{s_{ - 0.5}},{s_0}\} ,\{ {s_0},{s_0},{s_0}\} ). \end{aligned}$$

Step 4. Calculate the deviation degree \(d({x_\gamma },L{(p)^ + })\) between each project and the PIS, and the deviation degree \(d({x_\gamma },L{(p)^ - })\) between each project and the NIS.

$$\begin{aligned} d({x_1},L{(p)^ + })= & {} 0.347,\;d({x_2},L{(p)^ + }) = 0.86,\;d({x_3},L{(p)^ + }) = 0.290\\ d({x_1},L{(p)^ - })= & {} 0.628,\;d({x_2},L{(p)^ - }) = 0.097,\;d({x_3},L{(p)^ - }) = 0.450 \end{aligned}$$

Step 5. Calculate \({d_{\min }}({x_\gamma },L{(p)^ + })\) and \({d_{\max }}({x_\gamma },L{(p)^ - })\).

$$\begin{aligned} {d_{\min }}({x_\gamma },L{(p)^ + }) = 0.290,\;{d_{\max }}({x_\gamma },L{(p)^ - }) = 0.628 \end{aligned}$$

Step 6. Obtain the improved closeness coefficient \(CI({x_\gamma })\) of each project and select the best one.

$$\begin{aligned}&CI({x_1}) = - 0.194,\;CI({x_2}) = - 2.209,\;CI({x_3}) = - 0.204\\&\because CI({x_1})> \;CI({x_3})> CI({x_2}),\;\therefore {x_1}> {x_3} > {x_2}. \end{aligned}$$

Therefore, \({x_1}\) is the optimal project.

The ranking results of PLTSs are compared with those of ZPLVs when the credibility is the highest. The comparison results are shown in Table 11.

Table 11 Comparison of the results obtained by ZPLTSs and PLTSs
Table 12 Comparison with other linguistic representation methods

The attribute weights calculated in this paper are different from those calculated by Pang et al. (2016). This is due to the different emphases of the two methods: this paper emphasizes the influence of the credibility on the weights, while the maximum deviation method emphasizes the deviation degree of the evaluations.

Table 11 shows that the ranking and the optimal alternative obtained by both the operator method and the extended TOPSIS method are the same. This is because ZPLVs can degenerate to PLTSs under special circumstances (that is, when the credibility is the highest). The results in the table further confirm the theory. ZPLVs in this paper can represent not only the evaluation and probability of traditional PLTSs, but also the corresponding credibility. This diversity of information representation is an advantage over PLTSs.

5.2.3 Comparison with other linguistic representation methods

We compare several linguistic representation methods in this section, including the HFLTS, the EHFLTS, the PLTS, the uncertain probabilistic linguistic term set (UPLTS) (Jin et al. 2019), the linguistic Z-number, the \(\mathrm{{{\mathcal {Z}}}}\) linguistic variable, and the ZPLTS proposed in this paper. The comparison covers the characteristics, the probability information, and the credibility. The comparison results are shown in Table 12.

As can be seen from Table 12, the ZPLTS not only includes the linguistic evaluation and the probability of preference, but also represents the credibility of information, making the decision information more complete and the decision results more accurate. A single PLTS or Z-number cannot achieve this.

6 Conclusion

In GDM with PLTSs, considering only the linguistic terms and the probability distribution while ignoring the credibility of the information leads to the loss of decision information, thereby affecting the accuracy of the decision results. Therefore, in this paper, we propose the ZPLTS by combining existing linguistic representation tools. Its normalization, operations, comparison method and distance measure are studied. We then put forward a new weight calculation method based on the credibility, and a MAGDM model based on ZPLTSs that combines the proposed operators with an extended TOPSIS method. Finally, a numerical example and comparisons with other methods demonstrate its effectiveness.

In the future, we will discuss new methods for calculating the probability distribution in ZPLTSs, since the current algorithm assumes the same weight for all DMs. In addition, it is of great significance to further study the relevant theories of ZPLTSs and to explore their application in public health emergency decision making.