1 Introduction

In real life, the social and economic environment is complex, and human thinking is inherently fuzzy. Decision makers (DMs) therefore often prefer to evaluate objects with linguistic terms rather than precise numbers (Feylizadeh et al. 2018; Wei et al. 2020a, 2020b; Yu et al. 2017, 2018; Zhang et al. 2020). For instance, DMs might use “poor”, “moderate” and “good” to describe their satisfaction with a car. To facilitate the description of qualitative evaluation information, Herrera and Martinez (2000) defined the 2-tuple linguistic term sets (2TLTSs) for handling linguistic evaluation information, and Herrera and Martinez (2001) extended the linguistic 2-tuple to multi-granular hierarchical linguistic environments for MAGDM. In particular, hesitant fuzzy linguistic term sets (HFLTSs) were introduced by Rodriguez et al. (2012) on the basis of HFSs (Torra 2010) and LTSs (Zadeh 1975), allowing DMs to provide several possible linguistic terms. After HFLTSs were defined, Wei et al. (2014) proposed some operations for this fuzzy environment and applied possibility degree formulas to compare two HFLTSs. Liao et al. (2015) developed a VIKOR method for HFLTSs to study qualitative decision-making problems. Inspired by this work, Wang et al. (2016) proposed probabilistic multi-hesitant fuzzy linguistic information to evaluate logistics outsourcing based on the TODIM model. The entropy and cross-entropy of HFLTSs were defined by Gou et al. (2017a). Zhang et al. (2018) developed a novel consistency construction process for MAGDM using HFLTSs. Wu et al. (2019) defined compromise solutions for MAGDM using HFLTSs. Liao et al. (2018a, b) carefully studied the ELECTRE II technique in the HFLTS context and developed two novel methods: an ELECTRE II model based on fractional deviation and an ELECTRE II model based on positive and negative ideal points. Wei (2019) used HFLTS information to explore generalized Dice similarity measures for MADM.

However, most existing studies on HFLTSs assume that all possible values offered by the DMs carry equal weight or importance (Chakraborty and Zavadskas 2014; Rikhtegar et al. 2014; Zavadskas 2013). Obviously, this is inconsistent with reality: the possible values may be distributed differently in both individual and group decisions. Pang et al. (2016) therefore proposed the probabilistic linguistic term sets (PLTSs) to overcome this limitation and defined a computational model for ranking PLTSs by score or deviation degree. By applying two equivalent transformation functions, Gou and Xu (2016) proposed operational laws for HFLEs as well as PLTSs. Based on the geometric Bonferroni mean, Liang et al. (2018) developed probabilistic linguistic grey relational analysis (PL-GRA) for MAGDM. Lin et al. (2019) extended the ELECTRE II technique to PLTSs for edge computing. Liao et al. (2019) studied novel operations of PLTSs in combination with the ELECTRE III method. Chen et al. (2019) used a PLTS-based MULTIMOORA method for the selection of cloud-based ERP systems. Cheng et al. (2018) studied interactive multi-attribute group decision problems of venture capital in a probabilistic linguistic context. Zhai et al. (2016) proposed the PLVTSs to expand the applicability of multi-granular linguistic information. Lu et al. (2019) used a TOPSIS method with entropy weights for probabilistic linguistic MAGDM and applied it to supplier selection for new agricultural machinery products.

Keshavarz Ghorabaee et al. (2015) defined the EDAS (Evaluation based on Distance from Average Solution) method to solve MCIC problems; it can also be used for MADM or MAGDM. Compared with other decision-making methods, EDAS is highly efficient and requires little computation. Keshavarz Ghorabaee et al. (2016) extended the EDAS method to MADM problems and applied it to supplier selection. Stevic, Vasiljevic, Zavadskas, Sremac and Turskis (2018) adopted a fuzzy EDAS method to select a wood manufacturer. Liang (2020) designed an EDAS method for MAGDM in an intuitionistic fuzzy environment. Keshavarz Ghorabaee et al. (2017a) proposed an extended EDAS technique with ITFSs to tackle MAGDM in multi-criteria subcontractor evaluation problems. Based on normally distributed data, a stochastic EDAS method was proposed by Keshavarz Ghorabaee et al. (2017b). Zhang et al. (2019a) combined the EDAS method with P2TL information and applied it to a practical problem. Keshavarz-Ghorabaee et al. (2018) analyzed the rank reversal (RR) phenomenon in the EDAS method and conducted a simulation analysis against the TOPSIS method. Zhang et al. (2019b) designed an EDAS method for MCGDM that uses picture fuzzy information to select green suppliers. In a q-rung orthopair fuzzy environment, Li et al. (2020) employed the EDAS method for MAGDM.

However, there is currently no research on the application of the EDAS method to MAGDM with PLTSs, so it is necessary to address this issue. The purpose of this paper is to extend the EDAS technique to the PLTS context in order to solve MAGDM problems. The contributions of this manuscript can be summarized as follows: (1) the EDAS technique is extended with PLTSs; (2) the PL-EDAS technique is proposed to cope with decision-making problems characterized by PLTSs; (3) a numerical example on green supplier selection is carried out to test the designed approach; (4) comparative studies with the PLWA operator, the PL-TOPSIS technique and the PL-GRA technique are given to check the rationality of the PL-EDAS technique.

The remainder of this manuscript is organized as follows. Section 2 presents some fundamental principles related to PLTSs. In Sect. 3, the EDAS technique with PLTSs is derived. In Sect. 4, a numerical example on green supplier selection is given and a comparative analysis is made. In Sect. 5, the primary conclusions of this manuscript are presented.

2 Preliminaries

First, Xu (2005) proposed the additive linguistic scale. Subsequently, the transformation function between linguistic terms and \(\left[ {0,1} \right]\) was proposed by Gou et al. (2017b).

Definition 1

(Gou et al. 2017b; Xu 2005). Let \(L = \left\{ {l_{\alpha } \left| {\alpha = { - }\theta , \ldots , - 2, - 1,0,1,2, \ldots \theta } \right.} \right\}\) be an LTS (Xu 2005). The linguistic term \(l_{\alpha }\) expresses information equivalent to the value \(\beta\), which is obtained through the transformation function \(g\) (Gou et al. 2017b):

$$ g:\left[ {l_{ - \theta } ,l_{\theta } } \right] \to \left[ {0,1} \right],\;g\left( {l_{\alpha } } \right) = \frac{\alpha + \theta }{{2\theta }} = \beta $$
(1)

Conversely, \(\beta\) expresses information equivalent to the linguistic term \(l_{\alpha }\), which is obtained through the inverse transformation function \(g^{ - 1}\):

$$ g^{ - 1} :\left[ {0,1} \right] \to \left[ {l_{ - \theta } ,l_{\theta } } \right],\;g^{ - 1} \left( \beta \right) = l_{{\left( {2\beta - 1} \right)\theta }} = l_{\alpha } $$
(2)
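For illustration, the transformation functions of Definition 1 can be sketched in Python as follows; the helper names and the choice \(\theta = 3\) (a seven-term LTS) are illustrative assumptions, not part of the original definition.

```python
def g(alpha, theta):
    """Eq. (1): map the linguistic subscript alpha in [-theta, theta] to beta in [0, 1]."""
    return (alpha + theta) / (2 * theta)

def g_inv(beta, theta):
    """Eq. (2): map beta in [0, 1] back to the linguistic subscript alpha."""
    return (2 * beta - 1) * theta

theta = 3
beta = g(0, theta)       # the middle term l_0 maps to 0.5
alpha = g_inv(beta, theta)  # and 0.5 maps back to alpha = 0
```

Note that \(g\) and \(g^{-1}\) are mutual inverses, so any subscript survives a round trip.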

Definition 2

(Pang et al. 2016). For a given LTS \(L = \left\{ {l_{j} \left| {j = { - }\theta , \cdots , - 2, - 1,0,1,2, \cdots \theta } \right.} \right\}\), a PLTS is defined as:

$$ L\left( p \right) = \left\{ {l^{\left( \phi \right)} \left( {p^{\left( \phi \right)} } \right)\left| {l^{\left( \phi \right)} \in L,p^{\left( \phi \right)} \ge 0,\;\phi = 1,2, \ldots ,\# L\left( p \right),\sum\limits_{\phi = 1}^{\# L\left( p \right)} {p^{\left( \phi \right)} \le 1} } \right.} \right\} $$
(3)

where \(l^{\left( \phi \right)} \left( {p^{\left( \phi \right)} } \right)\) denotes the \(\phi {\text{th}}\) linguistic term \(l^{\left( \phi \right)}\) associated with the probability \(p^{\left( \phi \right)}\), and \(\# L\left( p \right)\) is the number of linguistic terms in \(L\left( p \right)\). The terms \(l^{\left( \phi \right)}\) in \(L\left( p \right)\) are ranked in ascending order.

For ease of calculation, Pang et al. (2016) normalized the PLTS \(L\left( p \right)\) as \(\tilde{L}\left( p \right) = \left\{ {l^{\left( \phi \right)} \left( {\tilde{p}^{\left( \phi \right)} } \right)\left| {l^{\left( \phi \right)} \in L,\tilde{p}^{\left( \phi \right)} \ge 0,\;\phi = 1,2, \ldots ,\# L\left( {\tilde{p}} \right),\sum\limits_{\phi = 1}^{\# L\left( p \right)} {\tilde{p}^{\left( \phi \right)} = 1} } \right.} \right\}\), where \(\tilde{p}^{\left( \phi \right)} = {{p^{\left( \phi \right)} } \mathord{\left/ {\vphantom {{p^{\left( \phi \right)} } {\sum\limits_{\phi = 1}^{\# L\left( p \right)} {p^{\left( \phi \right)} } }}} \right. \kern-\nulldelimiterspace} {\sum\limits_{\phi = 1}^{\# L\left( p \right)} {p^{\left( \phi \right)} } }}\) for all \(\phi = 1,2, \ldots ,\# L\left( {\tilde{p}} \right)\).
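This normalization amounts to rescaling the probabilities so that they sum to one. A minimal sketch, representing a PLTS as a list of \((\alpha, p)\) pairs (an illustrative encoding, not part of the original definition):

```python
def normalize(plts):
    """Rescale the probabilities of a PLTS so they sum to one."""
    total = sum(p for _, p in plts)
    return [(alpha, p / total) for alpha, p in plts]

# Example: probabilities summing to 0.8 are rescaled to sum to 1.
result = normalize([(-1, 0.2), (0, 0.4), (1, 0.2)])
# result is [(-1, 0.25), (0, 0.5), (1, 0.25)]
```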

Definition 3

(Pang et al. 2016). Let \(L = \left\{ {l_{\alpha } \left| {\alpha = { - }\theta , \ldots , - 1,0,1, \ldots \theta } \right.} \right\}\) be an LTS, and let \(\tilde{L}_{1} \left( {\tilde{p}} \right) = \left\{ {l_{1}^{\left( \phi \right)} \left( {\tilde{p}_{1}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# \tilde{L}_{1} \left( {\tilde{p}} \right)} \right.} \right\}\) and \(\tilde{L}_{2} \left( {\tilde{p}} \right) = \left\{ {l_{2}^{\left( \phi \right)} \left( {\tilde{p}_{2}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \cdots ,\# \tilde{L}_{2} \left( {\tilde{p}} \right)} \right.} \right\}\) be two PLTSs, where \(\# \tilde{L}_{1} \left( {\tilde{p}} \right)\) and \(\# \tilde{L}_{2} \left( {\tilde{p}} \right)\) are the numbers of linguistic terms in \(\tilde{L}_{1} \left( {\tilde{p}} \right)\) and \(\tilde{L}_{2} \left( {\tilde{p}} \right)\), respectively. If \(\# \tilde{L}_{1} \left( {\tilde{p}} \right) > \# \tilde{L}_{2} \left( {\tilde{p}} \right)\), then \(\# \tilde{L}_{1} \left( {\tilde{p}} \right) - \# \tilde{L}_{2} \left( {\tilde{p}} \right)\) linguistic terms should be added to \(\tilde{L}_{2} \left( {\tilde{p}} \right)\). The newly added terms should be the smallest linguistic term in \(\tilde{L}_{2} \left( {\tilde{p}} \right)\), and their probabilities should be zero.
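The padding rule of Definition 3 can be sketched as follows; the \((\alpha, p)\)-pair encoding and the helper name are illustrative assumptions:

```python
def pad_to(plts, target_len):
    """Definition 3: append copies of the smallest linguistic term,
    each with probability zero, until the PLTS has target_len entries."""
    smallest = min(alpha for alpha, _ in plts)
    return plts + [(smallest, 0.0)] * (target_len - len(plts))

# A two-term PLTS padded to length 3 gains one zero-probability copy of l_0.
padded = pad_to([(0, 0.5), (1, 0.5)], 3)
# padded is [(0, 0.5), (1, 0.5), (0, 0.0)]
```

Padding with zero-probability terms leaves the score and deviation degree of Definition 4 unchanged, which is why it is a safe way to equalize lengths before comparison.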

Definition 4

(Pang et al. 2016). For a PLTS \(\tilde{L}\left( {\tilde{p}} \right) = \left\{ {l_{{}}^{\left( \phi \right)} \left( {\tilde{p}_{{}}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# \tilde{L}_{{}} \left( {\tilde{p}} \right)} \right.} \right\}\), the score \(s\left( {\tilde{L}\left( {\tilde{p}} \right)} \right)\) and deviation degree \(\sigma \left( {\tilde{L}\left( {\tilde{p}} \right)} \right)\) of \(\tilde{L}\left( {\tilde{p}} \right)\) are defined as:

$$ s\left( {\tilde{L}\left( {\tilde{p}} \right)} \right) = {{\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {g\left( {l^{\left( \phi \right)} } \right)\tilde{p}^{\left( \phi \right)} } } \mathord{\left/ {\vphantom {{\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {g\left( {l^{\left( \phi \right)} } \right)\tilde{p}^{\left( \phi \right)} } } {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\tilde{p}^{\left( \phi \right)} } }}} \right. \kern-\nulldelimiterspace} {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\tilde{p}^{\left( \phi \right)} } }} $$
(4)
$$ \sigma \left( {\tilde{L}\left( {\tilde{p}} \right)} \right) = {{\sqrt {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\left( {g\left( {l^{\left( \phi \right)} } \right)\tilde{p}^{\left( \phi \right)} - s\left( {\tilde{L}\left( {\tilde{p}} \right)} \right)} \right)^{2} } } } \mathord{\left/ {\vphantom {{\sqrt {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\left( {g\left( {l^{\left( \phi \right)} } \right)\tilde{p}^{\left( \phi \right)} - s\left( {\tilde{L}\left( {\tilde{p}} \right)} \right)} \right)^{2} } } } {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\tilde{p}^{\left( \phi \right)} } }}} \right. \kern-\nulldelimiterspace} {\sum\limits_{\phi = 1}^{{\# \tilde{L}\left( {\tilde{p}} \right)}} {\tilde{p}^{\left( \phi \right)} } }} $$
(5)

Using Eqs. (4)–(5), two PLTSs are compared as follows: (1) if \(s\left( {\tilde{L}_{1} \left( {\tilde{p}} \right)} \right) > s\left( {\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\), then \(\tilde{L}_{1} \left( {\tilde{p}} \right) > \tilde{L}_{2} \left( {\tilde{p}} \right)\); (2) if \(s\left( {\tilde{L}_{1} \left( {\tilde{p}} \right)} \right) = s\left( {\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\), then: if \(\sigma \left( {\tilde{L}_{1} \left( {\tilde{p}} \right)} \right) = \sigma \left( {\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\), then \(\tilde{L}_{1} \left( {\tilde{p}} \right) = \tilde{L}_{2} \left( {\tilde{p}} \right)\); if \(\sigma \left( {\tilde{L}_{1} \left( {\tilde{p}} \right)} \right) < \sigma \left( {\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\), then \(\tilde{L}_{1} \left( {\tilde{p}} \right) > \tilde{L}_{2} \left( {\tilde{p}} \right)\); and if \(\sigma \left( {\tilde{L}_{1} \left( {\tilde{p}} \right)} \right) > \sigma \left( {\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\), then \(\tilde{L}_{1} \left( {\tilde{p}} \right) < \tilde{L}_{2} \left( {\tilde{p}} \right)\).
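A minimal sketch of Eqs. (4)–(5), applying the transformation \(g\) of Eq. (1) to each linguistic term \(l^{(\phi)}\); the \((\alpha, p)\)-pair encoding and \(\theta = 3\) are illustrative assumptions:

```python
import math

def g(alpha, theta=3):
    """Eq. (1): map a linguistic subscript to [0, 1]."""
    return (alpha + theta) / (2 * theta)

def score(plts, theta=3):
    """Eq. (4): probability-weighted mean of the transformed terms."""
    return sum(g(a, theta) * p for a, p in plts) / sum(p for _, p in plts)

def deviation(plts, theta=3):
    """Eq. (5): spread of the weighted transformed terms around the score."""
    s = score(plts, theta)
    num = math.sqrt(sum((g(a, theta) * p - s) ** 2 for a, p in plts))
    return num / sum(p for _, p in plts)
```

For a singleton PLTS such as \(\{l_0(1.0)\}\), the score is simply \(g(l_0) = 0.5\) and the deviation degree is zero.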

Definition 5.

Let \(L = \left\{ {l_{\alpha } \left| {\alpha = { - }\theta , \cdots , - 1,0,1, \cdots \theta } \right.} \right\}\) be an LTS, and let \(\tilde{L}_{1} \left( {\tilde{p}} \right) = \left\{ {l_{1}^{\left( \phi \right)} \left( {\tilde{p}_{1}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# \tilde{L}_{1} \left( {\tilde{p}} \right)} \right.} \right\}\) and \(\tilde{L}_{2} \left( {\tilde{p}} \right) = \left\{ {l_{2}^{\left( \phi \right)} \left( {\tilde{p}_{2}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# \tilde{L}_{2} \left( {\tilde{p}} \right)} \right.} \right\}\) be two PLTSs with \(\# \tilde{L}_{1} \left( {\tilde{p}} \right) = \# \tilde{L}_{2} \left( {\tilde{p}} \right)\). Then the Hamming distance \(d\left( {\tilde{L}_{1} \left( {\tilde{p}} \right),\tilde{L}_{2} \left( {\tilde{p}} \right)} \right)\) between \(\tilde{L}_{1} \left( {\tilde{p}} \right)\) and \(\tilde{L}_{2} \left( {\tilde{p}} \right)\) is defined as follows:

$$ d\left( {\tilde{L}_{1} \left( {\tilde{p}} \right),\tilde{L}_{2} \left( {\tilde{p}} \right)} \right) = \frac{{\sum\nolimits_{\phi = 1}^{{\# \tilde{L}_{1} \left( {\tilde{p}} \right)}} {\left( {\tilde{p}_{1}^{\left( \phi \right)} g\left( {l_{1}^{\left( \phi \right)} } \right) - \tilde{p}_{2}^{\left( \phi \right)} g\left( {l_{2}^{\left( \phi \right)} } \right)} \right)} }}{{\# \tilde{L}_{1} \left( {\tilde{p}} \right)}} $$
(6)
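Note that, as printed, Eq. (6) is a signed average deviation rather than an absolute distance: it can be negative, and the EDAS steps of Sect. 3 clip negative values with \(\max(0,\cdot)\). The signed form is therefore reproduced in this sketch; the \((\alpha, p)\)-pair encoding and \(\theta = 3\) are illustrative assumptions:

```python
def g(alpha, theta=3):
    """Eq. (1): map a linguistic subscript to [0, 1]."""
    return (alpha + theta) / (2 * theta)

def d(plts1, plts2, theta=3):
    """Eq. (6): signed average deviation between two equal-length PLTSs.
    Negative values are later clipped by max(0, .) in the EDAS steps."""
    total = sum(p1 * g(a1, theta) - p2 * g(a2, theta)
                for (a1, p1), (a2, p2) in zip(plts1, plts2))
    return total / len(plts1)
```

For example, \(d(\{l_3(1.0)\}, \{l_0(1.0)\})\) is positive while the reversed call is negative, which is exactly what the \(\max(0,\cdot)\) clipping in Eqs. (15)–(18) relies on.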

3 EDAS method for probabilistic linguistic MAGDM problems

In this section, the extended PL-EDAS method is introduced, and we describe how it solves probabilistic linguistic MAGDM problems. Let \(A = \left\{ {A_{1} ,A_{2} , \ldots ,A_{m} } \right\}\) be a set of alternatives, \(G = \left\{ {G_{1} ,G_{2} , \ldots ,G_{n} } \right\}\) a set of attributes with weight vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)\), where \(w_{j} \in \left[ {0,1} \right]\), \(j = 1,2, \ldots ,n\), \(\sum\limits_{j = 1}^{n} {w_{j} } = 1\), and \(d = \left\{ {d_{1} ,d_{2} , \ldots ,d_{q} } \right\}\) a set of experts. Assume that the \(n\) attributes \(G = \left\{ {G_{1} ,G_{2} , \ldots ,G_{n} } \right\}\) are qualitative; each expert evaluates the alternatives and expresses the assessments as linguistic values \(l_{ij}^{k}\).

The calculation steps of the PL-EDAS technique are then as follows:

Step 1 Transform the linguistic evaluation information \(l_{ij}^{k}\) into the PL decision matrix \(L = \left( {L_{ij}^{{}} \left( p \right)} \right)_{m \times n}\), \(L_{ij}^{{}} \left( p \right) = \left\{ {l_{ij}^{\left( \phi \right)} \left( {p_{ij}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# L_{ij} \left( p \right)} \right.} \right\}\)\(\left( {i = 1,2, \ldots ,m,j = 1,2, \ldots ,n} \right)\).

Step 2 The standard probabilistic linguistic decision matrix \(\tilde{L} = \left( {\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)} \right)_{m \times n}\) is determined.

Step 3 Determine the attributes’ weight.

Entropy is a classical concept in information theory, known as the average (expected) amount of information contained in each attribute (Ding and Shi 2005). The higher the entropy of an attribute, the smaller the score differences among the alternatives under that attribute; such an attribute provides less information and therefore receives less weight. First, the normalized decision values \(N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)\) are obtained as follows:

$$ N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right) = \frac{{\sum\nolimits_{\phi = 1}^{{\# \tilde{L}_{ij} \left( {\tilde{p}} \right)}} {\left( {\tilde{p}_{ij}^{\left( \phi \right)} g\left( {l_{ij}^{\left( \phi \right)} } \right)} \right)} }}{{\sum\limits_{i = 1}^{m} {\sum\nolimits_{\phi = 1}^{{\# \tilde{L}_{ij} \left( {\tilde{p}} \right)}} {\left( {\tilde{p}_{ij}^{\left( \phi \right)} g\left( {l_{ij}^{\left( \phi \right)} } \right)} \right)} } }},\;\;j = 1,2, \ldots ,n, $$
(7)

Thereafter, the vector of entropy \(E_{j}\) is computed:

$$ E_{j} = - \frac{1}{\ln m}\sum\nolimits_{i = 1}^{m} {N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)\ln N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)} $$
(8)

where \(N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)\ln N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)\) is taken to be 0 if \(N\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right) = 0\).

Finally, the weight vector \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)\) is computed:

$$ w_{j} = \frac{{1 - E_{j} }}{{\sum\nolimits_{j = 1}^{n} {\left( {1 - E_{j} } \right)} }},\;\;j = 1,2, \ldots ,n. $$
(9)
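The entropy weighting of Eqs. (7)–(9) can be sketched as follows, assuming each matrix entry `V[i][j]` is the aggregated expectation \(\sum\nolimits_{\phi } {\tilde{p}_{ij}^{\left( \phi \right)} g\left( {l_{ij}^{\left( \phi \right)} } \right)}\) of alternative \(i\) under attribute \(j\); the data in the example are illustrative:

```python
import math

def entropy_weights(V):
    """Eqs. (7)-(9): entropy-based attribute weights from an m x n value matrix."""
    m, n = len(V), len(V[0])
    raw = []
    for j in range(n):
        col = [V[i][j] for i in range(m)]
        col_sum = sum(col)
        N = [x / col_sum for x in col]                               # Eq. (7)
        E = -sum(x * math.log(x) for x in N if x > 0) / math.log(m)  # Eq. (8)
        raw.append(1 - E)
    total = sum(raw)
    return [x / total for x in raw]                                  # Eq. (9)
```

An attribute whose column is constant has maximal entropy and receives (near-)zero weight, matching the interpretation given above.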

Step 4 Compute the probabilistic linguistic average value (PLAV) with respect to each attribute:

$$ {\text{PLAV}} = \left( {{\text{PLAV}}_{j} } \right)_{1 \times n} $$
(10)
$$ {\text{PLAV}}_{j} = \left\{ {l_{j}^{\left( \phi \right)} \left( {p_{j}^{\left( \phi \right)} } \right)\left| {\;\phi = 1,2, \ldots ,\# L_{ij} \left( p \right)} \right.} \right\} $$
(11)
$$ l_{j}^{\left( \phi \right)} \left( {p_{j}^{\left( \phi \right)} } \right) = \frac{{\sum\nolimits_{i = 1}^{m} {l_{ij}^{\left( \phi \right)} } }}{m}\left( {\frac{{\sum\nolimits_{i = 1}^{m} {p_{ij}^{\left( \phi \right)} } }}{m}} \right) $$
(12)
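Eq. (12) averages both the \(\phi\)th linguistic subscripts and the \(\phi\)th probabilities over all \(m\) alternatives. A minimal sketch, assuming all PLTSs in a column have equal length and ascending order (the \((\alpha, p)\)-pair encoding is illustrative):

```python
def plav_column(column):
    """Eq. (12): per-attribute average value from a column of m PLTSs,
    averaging the phi-th subscript and the phi-th probability separately."""
    m = len(column)
    return [(sum(pl[phi][0] for pl in column) / m,
             sum(pl[phi][1] for pl in column) / m)
            for phi in range(len(column[0]))]

# Averaging {l_0(1.0)} and {l_2(1.0)} gives the (fractional) subscript 1.0
# with probability 1.0, i.e. a virtual linguistic term between l_1 and l_1.
average = plav_column([[(0, 1.0)], [(2, 1.0)]])
```

Note that the averaged subscript need not be an integer; as in Eq. (12), it is treated as a virtual linguistic term.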

Step 5 The PLPDA and PLNDA matrices can be obtained by Eqs. (13)–(18) according to the attribute type (benefit or cost).

$$ {\text{PLPDA}} = \left[ {{\text{PLPDA}}_{ij} } \right]_{m \times n} $$
(13)
$$ {\text{PLNDA}} = \left[ {{\text{PLNDA}}_{ij} } \right]_{m \times n} $$
(14)

If the jth attribute is of benefit type,

$$ {\text{PLPDA}}_{ij} = \frac{{\max \left( {0,d\left( {\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right),{\text{PLAV}}_{j} } \right)} \right)}}{{s\left( {{\text{PLAV}}_{j} } \right)}} $$
(15)
$$ {\text{PLNDA}}_{ij} = \frac{{\max \left( {0,d\left( {{\text{PLAV}}_{j} ,\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)} \right)} \right)}}{{s\left( {{\text{PLAV}}_{j} } \right)}} $$
(16)

If the jth attribute is of cost type,

$$ {\text{PLPDA}}_{ij} = \frac{{\max \left( {0,d\left( {{\text{PLAV}}_{j} ,\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right)} \right)} \right)}}{{s\left( {{\text{PLAV}}_{j} } \right)}} $$
(17)
$$ {\text{PLNDA}}_{ij} = \frac{{\max \left( {0,d\left( {\tilde{L}_{ij}^{{}} \left( {\tilde{p}} \right),{\text{PLAV}}_{j} } \right)} \right)}}{{s\left( {{\text{PLAV}}_{j} } \right)}} $$
(18)
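For a benefit attribute, Eqs. (15)–(16) can be sketched as follows. The \((\alpha, p)\)-pair encoding, \(\theta = 3\), and the helper names are illustrative assumptions; \(d\) is the signed deviation of Eq. (6), so \(\max(0,\cdot)\) keeps only the side of the average on which the alternative lies:

```python
def g(alpha, theta=3):
    """Eq. (1): map a linguistic subscript to [0, 1]."""
    return (alpha + theta) / (2 * theta)

def d(plts1, plts2, theta=3):
    """Eq. (6): signed average deviation between equal-length PLTSs."""
    return sum(p1 * g(a1, theta) - p2 * g(a2, theta)
               for (a1, p1), (a2, p2) in zip(plts1, plts2)) / len(plts1)

def score(plts, theta=3):
    """Eq. (4): score of a PLTS."""
    return sum(g(a, theta) * p for a, p in plts) / sum(p for _, p in plts)

def pda_nda_benefit(L_ij, PLAV_j, theta=3):
    """Eqs. (15)-(16): positive/negative distance from the average value."""
    s_av = score(PLAV_j, theta)
    pda = max(0.0, d(L_ij, PLAV_j, theta)) / s_av
    nda = max(0.0, d(PLAV_j, L_ij, theta)) / s_av
    return pda, nda
```

For a cost attribute, Eqs. (17)–(18) simply swap the two arguments of \(d\), so an evaluation above the average counts as a negative distance instead of a positive one.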

Step 6 The weighted sums of PLPDA and PLNDA can be obtained by Eqs. (19) and (20), respectively:

$$ {\text{PLSP}}_{i} = \sum\limits_{j = 1}^{n} {w_{j} } {\text{PLPDA}}_{ij} , $$
(19)
$$ {\text{PLSN}}_{i} = \sum\limits_{j = 1}^{n} {w_{j} } {\text{PLNDA}}_{ij} . $$
(20)

Step 7 The values of \({\text{PLSP}}_{i}\) and \({\text{PLSN}}_{i}\) for each alternative are normalized by Eqs. (21) and (22):

$$ {\text{PLNSP}}_{i} = \frac{{{\text{PLSP}}_{i} }}{{\mathop {\max }\limits_{i} \left( {{\text{PLSP}}_{i} } \right)}} $$
(21)
$$ {\text{PLNSN}}_{i} = 1 - \frac{{{\text{PLSN}}_{i} }}{{\mathop {\max }\limits_{i} \left( {{\text{PLSN}}_{i} } \right)}} $$
(22)

Step 8 The probabilistic linguistic appraisal score (PLAS) can be determined by Eq. (23):

$$ {\text{PLAS}}_{i} = \frac{{{\text{PLNSP}}_{i} {\text{ + PLNSN}}_{i} }}{2}. $$
(23)

where \(0 \le {\text{PLAS}}_{i} \le 1\).

Step 9 Sort the alternatives in decreasing order of \({\text{PLAS}}_{i}\); the alternative with the highest value is the best.
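Steps 6–9 reduce to a standard EDAS aggregation once the PLPDA and PLNDA matrices are available. A minimal sketch, assuming those matrices and the weight vector from Step 3 are given (the data in the example are illustrative):

```python
def edas_rank(PDA, NDA, w):
    """Steps 6-9, Eqs. (19)-(23): appraisal scores and ranking
    from precomputed m x n PDA/NDA matrices and attribute weights w."""
    m = len(PDA)
    SP = [sum(wj * PDA[i][j] for j, wj in enumerate(w)) for i in range(m)]  # Eq. (19)
    SN = [sum(wj * NDA[i][j] for j, wj in enumerate(w)) for i in range(m)]  # Eq. (20)
    NSP = [sp / max(SP) for sp in SP]                                       # Eq. (21)
    NSN = [1 - sn / max(SN) for sn in SN]                                   # Eq. (22)
    AS = [(nsp + nsn) / 2 for nsp, nsn in zip(NSP, NSN)]                    # Eq. (23)
    order = sorted(range(m), key=lambda i: AS[i], reverse=True)
    return AS, order
```

Each appraisal score lies in \([0, 1]\), and the first index in `order` identifies the best alternative.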

4 A numerical example and comparative analysis

4.1 A numerical example

Since China joined the WTO, its economy has maintained rapid development and a relatively high growth rate. Meanwhile, the economic situation faces serious challenges: on the one hand, because the international economic situation changes constantly, international green production standards will restrict the development of many Chinese enterprises; on the other hand, economic progress also causes shortages of environmental and natural resources, which in turn restrict social and economic development. Therefore, more and more enterprises have begun to realize the dialectical relationship between environmental protection and green development. Green supplier selection can be regarded as a classical MAGDM issue (Lei et al. 2020; Liao et al. 2018a; Si et al. 2018; Wang et al. 2020; Zhai et al. 2018). Accordingly, in this section, we present a numerical example of green supplier selection to illustrate the technique developed in this manuscript. There are five potential green suppliers \(A_{i} \left( {i = 1,2,3,4,5} \right)\) to be evaluated. Four benefit attributes are selected by the experts to assess the five candidate suppliers: ①G1, environmental quality improvement; ②G2, the price capability of the supplier; ③G3, green image, human resources and financial status; ④G4, environmental capability. The following linguistic term set is utilized by the five DMs to assess the five potential green suppliers \(A_{i} \left( {i = 1,2,3,4,5} \right)\) with respect to the four attributes:

\(\begin{gathered} S = \{ s_{ - 3} = {\text{extremely}}\;{\text{poor}}(EP),s_{ - 2} = {\text{very}}\;{\text{poor}}(VP),\;s_{ - 1} = {\text{poor}}(P),s_{0} = {\text{medium}}(M),\; \hfill \\ s_{1} = {\text{good}}(G),\;s_{2} = {\text{very}}\;{\text{good}}(VG),s_{3} = {\text{extremely}}\;{\text{good}}(EG)\} \hfill \\ \end{gathered}\)

The resulting linguistic evaluations are recorded in Tables 1, 2, 3, 4 and 5.

Table 1 Linguistic decision matrix by the first DM
Table 2 Linguistic decision matrix by the second DM
Table 3 Linguistic decision matrix by the third DM
Table 4 Linguistic decision matrix by the fourth DM
Table 5 Linguistic decision matrix by the fifth DM

In the following, the PL-EDAS technique is applied to green supplier selection.

Step 1 The linguistic variables are transformed into the PL decision matrix (Table 6).

Table 6 PL decision matrix

Step 2 Calculate the standard PL decision matrix (Table 7).

Table 7 Standard PL decision matrix

Step 3 Compute the weight of each attribute from Eqs. (7)–(9): \(w_{1} = 0.1489,w_{2} = 0.1587,w_{3} = 0.3108,w_{4} = 0.3816\).

Step 4 Obtain the \({\text{PLAV}}_{j} \left( {j = 1,2,3,4} \right)\) (Table 8).

Table 8 PLAV in accordance with all attributes

Step 5 Compute the PLPDA and PLNDA matrices by Eqs. (13)–(18); they are listed in Tables 9 and 10.

Table 9 PLPDA matrix
Table 10 PLNDA matrix

Step 6 Calculate the \({\text{PLSP}}_{i} ,{\text{PLSN}}_{i} \left( {i = 1,2,3,4,5} \right)\) by Eqs. (19)–(20) (Table 11).

Table 11 PLSP and PLSN values

Step 7 Obtain the \({\text{PLNSP}}_{i} ,{\text{PLNSN}}_{i} \left( {i = 1,2,3,4,5} \right)\) by Eqs. (21)–(22) (Table 12).

Table 12 PLNSP and PLNSN values

Step 8 Calculate the \({\text{PLAS}}_{i} \left( {i = 1,2,3,4,5} \right)\) by Eq. (23) (Table 13).

Table 13 PLAS value

Step 9 On the basis of the \({\text{PLAS}}_{i}\), the ranking order is \(A_{4} > A_{2} > A_{3} > A_{5} > A_{1}\), so the best green supplier is \(A_{4}\).

4.2 Comparative analysis

Furthermore, a comparative analysis between the proposed method and the PLWA operator (Pang et al. 2016), the probabilistic linguistic TOPSIS method (Pang et al. 2016) and the probabilistic linguistic GRA method (Liang et al. 2018) (with \(\rho = 0.5\)) is carried out; the results are shown in Table 14.

Table 14 Ordering from different methods

The following conclusions can be drawn from the above ranking results: although the rankings produced by the four methods differ slightly, the optimal green supplier is \(A_{2}\) in all of them, which verifies the rationality and validity of the proposed technique. Each of the four methods has its own merits: (1) the PL-TOPSIS method considers the closeness to both the positive and the negative ideal solution; (2) the PL-GRA method considers only the shape similarity to the positive ideal solution; (3) the PLWA operator considers only the degree of group influence; (4) the proposed PL-EDAS method, on the one hand, uses probabilistic linguistic information to describe the evaluations, which conforms to human decision-making habits, and on the other hand, calculates the distance between all attribute values and the average value, which is more scientific and accurate.

5 Conclusion

In this manuscript, by combining the EDAS method with PLTSs, we expand the scope of its application in MAGDM. First, the fundamental definitions and the distance formula of PLTSs are briefly reviewed. Next, inspired by the classical EDAS technique, the extended EDAS method is brought into the PLTS setting to solve MAGDM problems; its prominent feature is that it emphasizes the distance of each alternative from the average solution. Finally, a numerical example of green supplier selection is given to show the validity of the developed technique, and comparative analyses are carried out to verify its feasibility for real-world MAGDM problems. The presented method can serve as an efficient tool for developing new decision software or decision support systems with PLTSs. In this research, the EDAS method is designed for individual or group decision problems in uncertain environments. In the future, the proposed model will be explored in other MAGDM settings and in other uncertain and fuzzy contexts.