1 Introduction

Group decision making (GDM) is a problem-solving process devoted to identifying and choosing alternatives based on the knowledge (or opinions) expressed by a set of decision makers (DMs) (Li et al. 2021; Liu et al. 2018b, 2019c). In GDM problems, the concept of preference relations is one of the most common tools used to express individuals’ opinions through pairwise comparisons of alternatives (Kou et al. 2016). Various kinds of preference relations have been proposed, such as additive preference relations (also called reciprocal fuzzy preference relations) (Herrera-Viedma et al. 2007a; Pérez et al. 2010; Tanino 1984), multiplicative preference relations (Fan et al. 2006; Saaty 1980; Srdjevic 2005; Srdjevic and Srdjevic 2013), and linguistic preference relations (Dong et al. 2015; Li et al. 2018a; Wu et al. 2015, 2020).

In real-world decision problems, DMs may not be entirely self-confident about their opinions because of time pressure or limited expertise regarding the problem domain. In these situations, incomplete preference relations are used to deal with preferences that lack self-confidence. An incomplete preference relation implicitly conveys only two self-confidence levels: (i) the individuals have self-confidence when they provide the preference information, and (ii) the individuals lack self-confidence when the preference information is missing. Recently, a more general theoretical framework was developed by Liu et al. (2017), namely self-confident preference relations, which allow DMs to express multiple self-confidence levels when providing their opinions. In a self-confident preference relation, each element includes two components: the first part represents the preference evaluation over a pair of alternatives, and the second part represents the self-confidence level over the first part. Meanwhile, Dong et al. (2019) discussed the validity of self-confident preference relations through a large number of comparative simulation experiments, and indicated that self-confident preference relations exhibit better performance than incomplete preference relations in most cases. More studies on self-confident preference relations can be found in Refs. (Liu et al., 2018a, 2019a, 2019b; Zhang et al. 2021a).

Individual consistency and group consensus are two important topics in classical GDM with preference relations (Chiclana et al. 2008; Jin et al. 2020). In a rational GDM process, consistency is a formal property of individuals’ preferences, and it is associated with the transitivity property (Herrera-Viedma et al. 2007b; Chiclana et al. 2008). A lack of consistency may lead to inconsistent and unreliable conclusions (Li et al. 2018b, 2019). Therefore, individual consistency should be sought in order to reach rational decisions. Individual consistency is usually applied to help DMs analyze their preferences and to ensure that the DMs are neither illogical nor random in the decision-making process. Group consensus refers to the agreement among the group of DMs regarding the collective solution (Gong et al. 2017; Wu and Xu 2012; Xiao et al. 2021; Zhang et al. 2019b). Some interesting discussions have been held on the efficiency and effectiveness of group consensus (Moreno-Jiménez et al. 2013; Roberto 2005; Susskind et al. 1999). Even if consensus is achieved, it does not necessarily imply that the best solution will be obtained. Efficiency (optimal use of resources or a correct decisional procedure) is a key criterion that must be met, as it legitimizes GDM as a correct decision procedure with the following advantages: (i) promoting communication among the DMs, and (ii) more effective implementation.

To maintain the individual consistency in the consensus reaching process, numerous consistency and consensus models have been proposed in GDM problems (Cabrerizo et al. 2018; Chiclana et al. 2008; Wu and Xu 2016; Zhang and Guo 2016; Zhang et al. 2014). For instance, Chiclana et al. (2008) presented a consensus reaching model that deals with both consistency and consensus in GDM. Zhang and Guo (2016) developed consistency and consensus models for GDM with uncertain 2-tuple linguistic preference relations. Zhang et al. (2014) then developed a consensus framework to manage both individual consistency and group consensus issues. Meanwhile, Cabrerizo et al. (2018) proposed a multi-objective optimization approach that considers both consistency and consensus for GDM with linguistic preference relations. Moreno-Jiménez et al. (2008) presented a consistency and consensus based method in Analytic Hierarchy Process-GDM, which was used in a real-life experiment on e-democracy developed for the City Council of Zaragoza. Gong et al. (2020) proposed two types of consensus models under consistency constraints for linear uncertain preference relations, which were applied to the sensitivity assessment of the meteorological industry in a region of China. To date, many studies have been conducted on consistency and consensus in GDM, with recent literature reviews on GDM provided by Li et al. (2019) and Zhang et al. (2020).

Although the existing related studies have made considerable progress regarding consistency and consensus issues, some challenges still remain:

  1. (1)

    As is well known, consistency has an important influence on GDM. Liu et al. (2017) proposed self-confident preference relations to help DMs express multiple self-confidence levels in preference relations. Dong et al. (2019) analyzed the validity of self-confident preference relations by comparing the performance of complete preference relations, incomplete preference relations, and self-confident preference relations. However, there are few studies on the consistency measurement of self-confident preference relations.

  2. (2)

    As mentioned above, consistency and consensus are two important issues in GDM with preference relations, and some consensus frameworks have been investigated to simultaneously manage individual consistency and consensus. It is therefore natural to seek to maintain individual consistency in the consensus reaching process in GDM with self-confident additive preference relations.

  3. (3)

    In many situations, preference adjustment implies a loss of original preference information and a cost of reaching consensus. In recent years, consensus approaches with minimum adjustment have been investigated and widely employed in various GDM contexts (Xu et al. 2020; Zhang et al. 2020). Based on the ideas of minimum adjustment and minimum cost, an optimization-based problem is proposed: minimizing the preference and self-confidence loss in GDM with self-confident additive preference relations.

Motivated by these challenges, this study concentrates on obtaining optimally adjusted self-confident additive preference relations with acceptable consistency and consensus levels, in which the decision makers’ initial preferences are preserved as much as possible. In particular, to focus on the main research objective of this study, we design the nonlinear optimization method within the context of consensus reaching without a feedback process (Palomares et al. 2014; Zhang et al. 2020). Specifically, the main research contributions and innovations of this work are summarized in the following three aspects:

  1. (1)

    A consistency measure method is developed to evaluate the individual consistency level of a self-confident additive preference relation by considering the self-confidence levels. Moreover, a new consensus index is presented to measure the group consensus level of self-confident additive preference relations.

  2. (2)

    Based on the new consistency index, a nonlinear optimization model is presented to derive self-confident additive preference relations with acceptable consistency, which seeks to minimize the adjustment of preference and self-confidence values.

  3. (3)

    Considering the new consensus index, a nonlinear optimization approach is proposed to improve the group consensus level that simultaneously manages individual consistency in GDM with self-confident additive preference relations.

Moreover, the proposed optimization-based models optimally preserve the original preference information and self-confidence levels according to the required consistency and consensus levels.

This paper is organized as follows: Sect. 2 provides a review of some basic information about the 2-tuple linguistic model, additive preference relations, and self-confident additive preference relations. The consistency index and consensus index of self-confident additive preference relations are defined in Sect. 3. Section 4 presents two nonlinear optimization models. The first model is used to improve individual consistency, and the second model is used to improve the group consensus level that simultaneously manages individual consistency in GDM. Section 5 provides a hypothetical application with comparative analysis to show the usability and validity of the proposed methods. Finally, Sect. 6 provides concluding remarks.

2 Preliminaries

This section reviews some basic knowledge of the 2-tuple linguistic model, additive preference relations and self-confident additive preference relations.

2.1 The 2-tuple Linguistic Model

As mentioned before, the self-confidence level is often characterized using a linguistic rating scale. In this study, the 2-tuple linguistic model is used as an effective tool to handle operations on linguistic self-confidence levels. The basic concepts and mathematical formulations of the 2-tuple linguistic model are given by Herrera and Martínez (2000).

Let \(S = \{ s_{i} |i = 0, \, 1, \, ..., \, g\}\) be an ordinal linguistic term set with an odd number \(g + 1\) of rating options, where \(s_{i}\) represents a possible value of a linguistic variable, and \(S = \{ s_{i} |i = 0, \, 1, \, ..., \, g\}\) satisfies the following condition:

$$s_{i} > s_{j}\,\,\text{if}\,\text{and}\,\text{only}\,\text{if}\,\,i > j.$$

Based on the linguistic term set \(S = \{ s_{i} |i = 0, \, 1, \, ..., \, g\}\), Herrera and Martínez proposed the 2-tuple linguistic representation model (Herrera and Martínez 2000) as follows.

Definition 1 (Herrera and Martínez 2000).

Let \(\beta \in [0, \, g]\) be a number in the granularity interval of the linguistic term set \(S = \{ s_{0} , \, ..., \, s_{g} \}\) and let \(i = round(\beta )\) and \(\alpha = \beta - i\) be two values such that \(i \in [0, \, g]\) and \(\alpha \in [ - 0.5, \, 0.5)\). Then, \(\alpha\) is called a symbolic translation, with round being the standard rounding operation.

The 2-tuple linguistic model represents the linguistic information by a 2-tuple \((s_{i} , \, \alpha )\), where \(s_{i} \in S\) and \(\alpha \in [ - 0.5, \, 0.5)\). A one-to-one mapping between linguistic 2-tuples and numerical values in \([0, \, g]\) is possible.

Definition 2 (Herrera and Martínez 2000).

Let \(S = \{ s_{0} , \, ..., \, s_{g} \}\) be a linguistic term set and \(\beta \in [0, \, g]\) be a value representing the result of a symbolic aggregation operation; then the 2-tuple that expresses the equivalent information to \(\beta\) is obtained with the following function \(\Delta :[0, \, g] \to S \times [ - 0.5, \, 0.5)\), where

\(\Delta (\beta ) = (s_{i} , \, \alpha )\), with \(\left\{ \begin{gathered} s_{i} , \, i = round(\beta ) \hfill \\ \alpha = \beta - i, \, \alpha \in [ - 0.5, \, 0.5). \hfill \\ \end{gathered} \right.\)

For convenience, denote \(\overline{S} = S \times [ - 0.5, \, 0.5)\). The function \(\Delta\) is a one-to-one mapping whose inverse function \(\Delta^{ - 1} :\overline{S} \to [0, \, g]\) is defined as \(\Delta^{ - 1} ((s_{i} , \, \alpha )) = i + \alpha\). For notational simplicity, this study sets \(\Delta^{ - 1} ((s_{i} , \, 0)) = \Delta^{ - 1} (s_{i} )\). Clearly, an ordering on the set of 2-tuples and a negation operator can be defined as follows:

1) Let \((s_{k} , \, \alpha )\) and \((s_{l} , \, \gamma )\) be two 2-tuples. Then:

i) if \(k < l\), then \((s_{k} , \, \alpha )\) is smaller than \((s_{l} , \, \gamma )\).

ii) if \(k = l\), then

  1. (a)

    if \(\alpha = \gamma\), then \((s_{k} , \, \alpha )\) and \((s_{l} , \, \gamma )\) represent the same information.

  2. (b)

    if \(\alpha < \gamma\), then \((s_{k} , \, \alpha )\) is smaller than \((s_{l} , \, \gamma )\).

2) A 2-tuple negation operator:

$$ Neg((s_{i} {, }\alpha )) = \Delta (g - (\Delta^{ - 1} (s_{i} , \, \alpha ))). $$
(1)
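
For illustration, the mappings \(\Delta\) and \(\Delta^{ - 1}\) of Definition 2 and the negation operator of Eq. (1) can be coded in a few lines. The following Python sketch is illustrative only; the function names and the choice of \(g = 8\) (the 9-point scale used later in Table 1) are our own assumptions.

```python
# Illustrative sketch of Delta, Delta^{-1}, and Neg (Definitions 1-2 and Eq. (1)),
# assuming granularity g = 8 as in the 9-point scale of Table 1.

def delta(beta, g=8):
    """Map beta in [0, g] to a 2-tuple (i, alpha) with alpha in [-0.5, 0.5)."""
    i = int(round(beta))
    alpha = beta - i
    if alpha >= 0.5:          # keep alpha inside [-0.5, 0.5)
        i, alpha = i + 1, alpha - 1.0
    return i, alpha

def delta_inv(two_tuple):
    """Map a 2-tuple (i, alpha) back to the numerical value i + alpha."""
    i, alpha = two_tuple
    return i + alpha

def neg(two_tuple, g=8):
    """Negation operator of Eq. (1): Neg((s_i, alpha)) = Delta(g - Delta^{-1}((s_i, alpha)))."""
    return delta(g - delta_inv(two_tuple), g)

# Example: the value 5.4 corresponds to (l_5, 0.4); its negation is (l_3, -0.4).
print(delta(5.4), neg(delta(5.4)))
```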

2.2 Additive Preference Relations

Additive preference relations are often used to represent the decision maker’s preference opinions in GDM problems. Let \(X = \{ x_{1} , \, x_{2} , \, ..., \, x_{n} \}\) be a finite set of alternatives. The concept of additive preference relation is provided in Definition 3.

Definition 3 (Orlovsky 1978; Tanino 1984).

An additive preference relation on a set of alternatives \(X\) is represented by a matrix \(F = (f_{ij} )_{n \times n}\), with \(f_{ij} \in [0, \, 1]\) being the preference intensity of alternative \(x_{i}\) over \(x_{j}\), satisfying the following reciprocity property: \(f_{ij} + f_{ji} = 1\) for \(i, \, j = 1, \, 2, \, ..., \, n\).

Specifically, \(f_{ij} = 0.5\) signifies indifference between \(x_{i}\) and \(x_{j}\), \(f_{ij} > 0.5\) indicates a definite preference for \(x_{i}\) over \(x_{j}\), and \(f_{ij} < 0.5\) indicates a definite preference for \(x_{j}\) over \(x_{i}\).

In decision making with additive preference relations, transitivity is an important concept that helps avoid inconsistent conclusions. Therefore, the consistency of additive preference relations is based on the concept of transitivity. Compared with ordinal consistency, cardinal consistency is considered a stronger concept because it extends the transitivity of preferences by taking the intensity of preferences into account.

Let \(F = (f_{ij} )_{n \times n}\) be an additive preference relation. Some transitive properties of additive preference relations can be described as follows (Herrera-Viedma et al. 2007b; Orlovsky 1978; Tanino 1984):

  1. (a)

    Weak stochastic transitivity. \(f_{ij} \ge 0.5, \, f_{jk} \ge 0.5 \Rightarrow f_{ik} \ge 0.5, \, \forall i, \, j, \, k\).

  2. (b)

    Strong stochastic transitivity. \(f_{ij} \ge 0.5, \, f_{jk} \ge 0.5 \Rightarrow f_{ik} \ge \max (f_{ij} , \, f_{jk} ), \, \forall i, \, j, \, k\).

  3. (c)

    Additive transitivity. \(f_{ij} = f_{ik} - f_{jk} + 0.5, \, \forall i, \, j, \, k\).

Based on additive transitivity, an estimated preference value associated with the pair of alternatives \((x_{i} , \, x_{j} )\) can be calculated as \(f_{ik} - f_{jk} + 0.5\). Then, by computing the error between a preference value \(f_{ij}\) and its estimated value \(f_{ik} - f_{jk} + 0.5\), a cardinal consistency measurement method for additive preference relations is presented in Definition 4.

Definition 4 (Herrera-Viedma et al. 2007a).

Let \(F = (f_{ij} )_{n \times n}\) be an additive preference relation. The consistency level (\(CL\)) of \(F = (f_{ij} )_{n \times n}\) is defined as follows:

$$ CL(F) = 1 - \frac{2}{3n(n - 1)(n - 2)}\sum\limits_{i,j,k = 1}^{n} {|f_{ij} + f_{jk} - f_{ik} - 0.5|} $$
(2)

Clearly, \(CL(F) \in [0, \, 1]\). When \(CL(F) = 1\), the additive preference relation \(F\) is fully consistent; otherwise, the lower the value of \(CL(F)\) the more inconsistent \(F\) is.
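
To make the computation of Eq. (2) concrete, a short Python sketch is given below; it is merely illustrative, and the function name and the example matrix are our own choices.

```python
import numpy as np

# Illustrative computation of the consistency level CL(F) of Eq. (2) for an
# n x n additive (reciprocal) preference relation F.

def consistency_level(F):
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += abs(F[i, j] + F[j, k] - F[i, k] - 0.5)
    return 1.0 - 2.0 * total / (3.0 * n * (n - 1) * (n - 2))

# An additively transitive relation, for which CL is (up to rounding error) 1.
F = [[0.5, 0.6, 0.7],
     [0.4, 0.5, 0.6],
     [0.3, 0.4, 0.5]]
print(consistency_level(F))
```

Note that triples with repeated indices contribute zero to the sum, so summing over all \(i, \, j, \, k\) is consistent with the normalization factor \(n(n - 1)(n - 2)\).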

2.3 Self-Confident Additive Preference Relations

Self-confident preference relations are a new and effective tool to express the self-confidence levels of DMs in preference relations (Liu et al. 2017). In self-confident additive preference relations, a linguistic term set \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) is often pre-established to help DMs express self-confidence levels. In particular, the linguistic term set \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) has \(g + 1\) rating options (Alwin and Krosnick 1985; Wu et al. 2021). For example, a possible linguistic term set \(S^{SL} = \{ l_{0} , \, l_{1} , \, l_{2} , \, l_{3} , \, l_{4} , \, l_{5} , \, l_{6} , \, l_{7} , \, l_{8} \}\) contains nine linguistic terms. Detailed information about this 9-point linguistic term set is listed in Table 1.

Table 1 Detailed information about the 9-point linguistic term set

Let \(X = \{ x_{1} , \, x_{2} , \, ..., \, x_{n} \}\) be a set of \(n\) alternatives and \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) be a linguistic term set. The concept of self-confident additive preference relation is defined as follows:

Definition 5 (Liu et al. 2017).

A self-confident additive preference relation on a set of alternatives \(X\) is represented by a matrix \(R = ((f_{ij} , \, s_{ij} ))_{n \times n}\), with each element \((f_{ij} , \, s_{ij} )\) containing two parts: the first part, \(f_{ij} \in [0, \, 1]\), represents the preference intensity of alternative \( \, x_{i}\) to that of \( \, x_{j}\), and the second part, \(s_{ij} \in S^{SL}\), represents the self-confidence level over the preference value \(f_{ij}\). The following conditions are assumed: \(f_{ij} + f_{ji} = 1\), \(s_{ij} = s_{ji}\), and \(s_{ii} = l_{g}\) for \(i, \, j = 1, \, 2, \, ..., \, n\).

In Definition 5, \( \, f_{ii} = 0.5\) is the preference intensity of alternative \(x_{i}\) over itself, and \(s_{ii}\) is the self-confidence level over \(f_{ii}\); thus, \(s_{ii} = l_{g}\). If the linguistic terms in Table 1 are used to represent the DMs’ self-confidence levels, then \(s_{ii} = l_{8}\).

Example 1

Let \(X = \{ x_{1} , \, x_{2} , \, x_{3} \}\) be a set of three alternatives, and let \(S^{SL} = \{ l_{0} , \, l_{1} , \, l_{2} , \, l_{3} , \, l_{4} , \, l_{5} , \, l_{6} , \, l_{7} , \, l_{8} \}\) be the 9-point linguistic term set described in Table 1. Then, a self-confident additive preference relation on the set of alternatives \(X\) can be represented by the following matrix \(R\):

$$ R = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.2, \, l_{5} )} & {(0.8, \, l_{2} )} \\ {(0.8, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{3} )} \\ {(0.2, \, l_{2} )} & {(0.4, \, l_{3} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$

In \(R\), \(f_{12} = 0.2\) indicates that the preference intensity of alternative \(x_{1}\) over \(x_{2}\) is 0.2, and \(s_{12} = l_{5}\) means that the self-confidence level over the preference value \(f_{12}\) is \(l_{5}\) (i.e., slightly high confidence). The other elements in \(R\) can be interpreted similarly.

Let \(R = ((f_{ij} , \, s_{ij} ))_{n \times n}\) be a self-confident additive preference relation. Some transitive properties of self-confident additive preference relations can be described as follows (Liu et al. 2017):

  1. (a)

    Weak stochastic transitivity at self-confidence level \(l \in S^{SL}\).

    \(f_{ij} \ge 0.5, \, f_{jk} \ge 0.5 \Rightarrow f_{ik} \ge 0.5, \, \forall i, \, j, \, k{\text{ and }}s_{ij} \ge l, \, \forall i, \, j\).

  2. (b)

    Strong stochastic transitivity at self-confidence level \(l \in S^{SL}\).

    \(f_{ij} \ge 0.5, \, f_{jk} \ge 0.5 \Rightarrow f_{ik} \ge \max (f_{ij} , \, f_{jk} ), \, \forall i, \, j, \, k{\text{ and }}s_{ij} \ge l, \, \forall i, \, j\).

  3. (c)

    Additive transitivity at self-confidence level \(l \in S^{SL}\).

    \(f_{ij} = f_{ik} - f_{jk} + 0.5, \, \forall i, \, j, \, k{\text{ and }}s_{ij} \ge l, \, \forall i, \, j\).

Clearly, the additive transitivity condition is stronger than strong stochastic transitivity, and the strong stochastic transitivity condition is stronger than weak stochastic transitivity.

Note 1: Tanino (1984) analyzed individual preferences from the perspective of individual utility values. When individual preferences are represented by utility values, the preference relations can be derived from the utility functions (Tanino 1990). In GDM problems, the preference relation is one of the most commonly used representation formats because the pairwise comparison mode is more accurate than non-pairwise methods (Herrera-Viedma et al. 2021; Millet 1997). This paper focuses on improving individual consistency and on the modification of preference relations in GDM. Therefore, the relationship between preference relations and utility functions is not the focus of this paper.

3 Consistency and Consensus Indexes

In this section, we present a method to measure the consistency level of a self-confident additive preference relation. Moreover, a new consensus measure method is developed to measure the consensus level among DMs in GDM with self-confident additive preference relations.

3.1 Individual Consistency Index

Consistency is an important issue regarding preference relations because it has a direct impact on the final decision results. Traditionally, the individual consistency degree of an additive preference relation is evaluated by calculating the difference between the estimated values and the preference values. Herrera-Viedma et al. (2007b) introduced a method to measure the individual consistency level (\(CL\)) of additive preference relations (i.e., Eq. (2)). In Eq. (2), the larger the value of \(|f_{ij} + f_{jk} - f_{ik} - 0.5|\), the worse the consistency of the preference values over the alternatives \((x_{i} , \, x_{j} , \, x_{k} )\). In this case, when providing a self-confident additive preference relation, it is reasonable to assign smaller self-confidence levels to the preference values \((f_{ij} , \, f_{jk} , \, f_{ik} )\), and such low-confidence inconsistency should reduce the overall consistency level of the self-confident additive preference relation less. Thus, by extending Herrera-Viedma et al.’s individual consistency measuring method (Herrera-Viedma et al. 2007b), we propose a method to evaluate the individual consistency level of a self-confident additive preference relation by taking the self-confidence levels into account.

Definition 6

Let \(R = ((f_{ij} , \, s_{ij} ))_{n \times n}\) be a self-confident additive preference relation, and \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) be a linguistic term set to express self-confidence levels. Then, the consistency index for self-confident additive preference relation is defined as,

$$ SCL(R) = 1 - \frac{2}{3n(n - 1)(n - 2)}\sum\limits_{i,j,k = 1}^{n} {\frac{{\Delta^{ - 1} (s_{ij} ) + \Delta^{ - 1} (s_{jk} ) + \Delta^{ - 1} (s_{ik} )}}{{3\Delta^{ - 1} (l_{g} )}}|f_{ij} + f_{jk} - f_{ik} - 0.5|} $$
(3)

In Eq. (3), the parameters \(f_{ij} , \, f_{jk} , \, f_{ik} \in [0, \, 1]\) represent the preference values in \(R\). The parameters \(s_{ij} , \, s_{jk} , \, s_{ik} \in S^{SL}\) represent the self-confidence levels over the preference values \(f_{ij}\), \(f_{jk}\), and \(f_{ik}\), respectively. The parameter \(l_{g} \in S^{SL}\) is the largest linguistic term, i.e., the self-confidence level over \(f_{ii}\).

Clearly, \(SCL(R) \in [0, \, 1]\). The larger the value of \(SCL(R)\), the more consistent \(R\) is. If \(SCL(R) = 1\), then \(R\) is a fully consistent self-confident additive preference relation. Depending on the actual decision situation, a consistency threshold \(\overline{SCL} \in [0, \, 1]\) is often established for \(SCL(R)\). If \(SCL(R) \ge \overline{SCL}\), we conclude that the consistency of \(R\) is acceptable; otherwise, the consistency of \(R\) is unacceptable.

The individual consistency index \(SCL\) extends the concept of \(CL\) (i.e., Eq. (2)): when \(s_{ij} = s_{jk} = s_{ik} = l_{g}\) for all \(i, \, j, \, k\), the index \(SCL\) reduces to the index \(CL\).
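
For illustration, the index \(SCL\) of Eq. (3) can be computed as in the following Python sketch, where the self-confidence levels are stored as their numerical indices \(\Delta^{ - 1} (s_{ij} )\); the function name and the use of the matrix from Example 1 are our own illustrative choices.

```python
import numpy as np

# Illustrative computation of the consistency index SCL(R) of Eq. (3). F holds the
# preference values f_ij and S holds the numerical indices Delta^{-1}(s_ij) in {0,...,g}.

def self_confident_consistency_level(F, S, g=8):
    F, S = np.asarray(F, dtype=float), np.asarray(S, dtype=float)
    n = F.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                weight = (S[i, j] + S[j, k] + S[i, k]) / (3.0 * g)
                total += weight * abs(F[i, j] + F[j, k] - F[i, k] - 0.5)
    return 1.0 - 2.0 * total / (3.0 * n * (n - 1) * (n - 2))

# The self-confident additive preference relation R of Example 1.
F = [[0.5, 0.2, 0.8],
     [0.8, 0.5, 0.6],
     [0.2, 0.4, 0.5]]
S = [[8, 5, 2],
     [5, 8, 3],
     [2, 3, 8]]
print(self_confident_consistency_level(F, S))
```

Setting all entries of S to \(g\) reproduces the value of \(CL(F)\) from Eq. (2).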

3.2 Consensus Index

In the GDM problem, when the difference in preference information among DMs is sufficiently large, it will be difficult to obtain a consensus solution. In general, the consensus index is applied to measure the degree of agreement among DMs. By computing the difference between preference values, we can evaluate the consensus level among DMs. Let \(F^{(r)} = (f_{ij}^{(r)} )_{n \times n}\) and \(F^{(t)} = (f_{ij}^{(t)} )_{n \times n}\) be two additive preference relations. In the work developed by Chiclana et al. (2008), a distance function is presented to measure the similarity degree of the preference values between DMs \(d_{r}\) and \(d_{t}\), on a pair of alternatives \(x_{i}\) and \(x_{j}\), as follows:

$$ s\left(f_{ij}^{(r)} , \, f_{ij}^{(t)}\right ) = 1 - |f_{ij}^{(r)} - f_{ij}^{(t)} | $$
(4)

In self-confident additive preference relations, when the preference value \(f_{ij}\) has a self-confidence level \(s_{ij}\) (\(s_{ij} \in S^{SL}\)), the similarity degree of preference information between two DMs needs to be measured at self-confidence levels. Let \(R^{(r)} = ((f_{ij}^{(r)} , \, s_{ij}^{(r)} ))_{n \times n}\) and \(R^{(t)} = ((f_{ij}^{(t)} , \, s_{ij}^{(t)} ))_{n \times n}\) be two self-confident additive preference relations. A new distance function with self-confidence levels to measure the similarity degree between \((f_{ij}^{(r)} , \, s_{ij}^{(r)} )\) and \((f_{ij}^{(t)} , \, s_{ij}^{(t)} )\) is proposed as follows,

$$ ss((f_{ij}^{(r)} , \, s_{ij}^{(r)} ), \, (f_{ij}^{(t)} , \, s_{ij}^{(t)} )) = 1 - \frac{{\Delta^{ - 1} (s_{ij}^{(r)} ) + \Delta^{ - 1} (s_{ij}^{(t)} )}}{{2\Delta^{ - 1} (l_{g} )}}|f_{ij}^{(r)} - f_{ij}^{(t)} | $$
(5)

It is clear that \(0 \le \frac{{\Delta^{ - 1} (s_{ij}^{(r)} ) + \Delta^{ - 1} (s_{ij}^{(t)} )}}{{2\Delta^{ - 1} (l_{g} )}} \le 1\). The value of \(\frac{{\Delta^{ - 1} (s_{ij}^{(r)} ) + \Delta^{ - 1} (s_{ij}^{(t)} )}}{{2\Delta^{ - 1} (l_{g} )}}\) in Eq. (5) determines the weight placed on the difference between the preference values \(f_{ij}^{(r)}\) and \(f_{ij}^{(t)}\). When \( \, s_{ij}^{(r)} = \, s_{ij}^{(t)} = l_{g}\), the new distance function with self-confidence levels in Eq. (5) reduces to Eq. (4).

Consider a GDM problem in the context of self-confident additive preference relations. Let \(D = \{ d_{1} , \, d_{2} , \, ..., \, d_{m} \}\) be a set of \(m\) DMs and \(X = \{ x_{1} , \, x_{2} , \, ..., \, x_{n} \}\) (\(n \ge 2\)) be a set of \(n\) alternatives. The DMs use the linguistic term set \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) to express self-confidence levels. Based on the new distance function \(ss((f_{ij}^{(r)} , \, s_{ij}^{(r)} ), \, (f_{ij}^{(t)} , \, s_{ij}^{(t)} ))\) (i.e., Eq. (5)), we present a new consensus index to evaluate the group consensus level of self-confident additive preference relations. The underlying idea of the new consensus index is to fuse the similarities of the preferences of all DMs on each pair of alternatives.

Definition 7

Let \(\{ R^{(1)} , \, R^{(2)} , \, ..., \, R^{(m)} \}\) be a set of the self-confident additive preference relations provided by \(m\) DMs, where \(R^{(k)} = ((f_{ij}^{(k)} , \, s_{ij}^{(k)} ))_{n \times n}\) represents the preference relation provided by the decision maker \(d_{k}\)(\(k = 1, \, 2, \, ..., \, m\)). The consensus level among all the DMs is given by,

$$ \begin{aligned} CCL\{ R^{{(1)}} ,{\text{ }}R^{{(2)}} ,{\text{ }}...,{\text{ }}R^{{(m)}} \} & = \frac{1}{{n(n - 1)}}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1,j \ne i}}^{n} {(\frac{2}{{m(m - 1)}}\sum\limits_{{t \ge r}}^{m} {\sum\limits_{{r = 1}}^{m} {ss((f_{{ij}}^{{(r)}} ,{\text{ }}s_{{ij}}^{{(r)}} ),{\text{ }}(f_{{ij}}^{{(t)}} ,{\text{ }}s_{{ij}}^{{(t)}} ))} } )} } \\ & = 1 - \frac{2}{{nm(m - 1)(n - 1)}}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1,j \ne i}}^{n} {\sum\limits_{{t \ge r}}^{m} {\sum\limits_{{r = 1}}^{m} {\frac{{\Delta ^{{ - 1}} (s_{{ij}}^{{(r)}} ) + \Delta ^{{ - 1}} (s_{{ij}}^{{(t)}} )}}{{2\Delta ^{{ - 1}} (l_{g} )}}|f_{{ij}}^{{(r)}} - f_{{ij}}^{{(t)}} |.} } } } \\ \end{aligned} $$
(6)

When \(CCL\{ R^{(1)} , \, R^{(2)} , \, ..., \, R^{(m)} \} = 1\), a complete group consensus is reached; otherwise, a larger \(CCL\{ R^{(1)} , \, R^{(2)} , \, ..., \, R^{(m)} \}\) value indicates a higher consensus level among all DMs \(\{ d_{1} , \, d_{2} , \, ..., \, d_{m} \}\). In actual decision-making situations, a threshold value \(\overline{CCL} \in [0, \, 1]\) is usually established for the consensus index. If \(CCL\{ R^{(1)} , \, R^{(2)} , \, ..., \, R^{(m)} \} \ge \overline{CCL}\), the group is considered to have reached an acceptable consensus level; otherwise, the consensus level is considered unacceptable.
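
The consensus index of Eq. (6) can be computed analogously, fusing the pairwise similarities of Eq. (5) over all pairs of DMs. The following Python sketch is illustrative; the function name and the toy data are our own assumptions.

```python
import numpy as np

# Illustrative computation of the group consensus level CCL of Eq. (6). Fs[k] and Ss[k]
# hold the preference values and the numerical indices Delta^{-1}(s_ij) of decision maker k.

def group_consensus_level(Fs, Ss, g=8):
    Fs = [np.asarray(F, dtype=float) for F in Fs]
    Ss = [np.asarray(S, dtype=float) for S in Ss]
    m, n = len(Fs), Fs[0].shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            for r in range(m):
                for t in range(r + 1, m):   # each pair of decision makers counted once
                    weight = (Ss[r][i, j] + Ss[t][i, j]) / (2.0 * g)
                    total += weight * abs(Fs[r][i, j] - Fs[t][i, j])
    return 1.0 - 2.0 * total / (n * m * (m - 1) * (n - 1))

# A toy example with two decision makers comparing two alternatives.
F1, F2 = [[0.5, 0.2], [0.8, 0.5]], [[0.5, 0.4], [0.6, 0.5]]
S1, S2 = [[8, 5], [5, 8]], [[8, 6], [6, 8]]
print(group_consensus_level([F1, F2], [S1, S2]))
```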

4 Nonlinear Optimization Models to Manage Individual Consistency and Group Consensus

As mentioned in the Introduction, there are two basic issues in GDM: consistency and consensus (Chiclana et al. 2008). When the consistency level or the consensus level is unacceptable, the DMs need to adjust their individual preferences. Preference adjustment usually incurs a cost in GDM problems. In this section, two nonlinear optimization models are proposed to manage consistency and consensus issues in GDM with self-confident additive preference relations. The first optimization-based model is used to derive self-confident additive preference relations with acceptable consistency, and the second optimization-based model is used to improve the group consensus level while simultaneously managing individual consistency.

Before discussing the nonlinear optimization models, we first introduce the Manhattan distance, a function used to measure the distance between two additive preference relations. Let \(E = (e_{ij} )_{n \times n}\) and \(F = (f_{ij} )_{n \times n}\) be two additive preference relations. The Manhattan distance between \(E\) and \(F\) is:

$$ d(E, \, F) = \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|e_{ij} - f_{ij} |} } $$
(7)

In this section, an extended Manhattan distance is used to measure the distance between two self-confident additive preference relations. Let \(R^{(r)} = ((f_{ij}^{(r)} , \, s_{ij}^{(r)} ))_{n \times n}\) and \(R^{(t)} = ((f_{ij}^{(t)} , \, s_{ij}^{(t)} ))_{n \times n}\) be two self-confident additive preference relations. The extended Manhattan distance between \(R^{(r)}\) and \(R^{(t)}\) is:

$$ ds(R^{(r)} , \, R^{(t)} ) = \alpha_{1} \times \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|f_{ij}^{(r)} - f_{ij}^{(t)} |} } + \alpha_{2} \times \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|\Delta^{ - 1} (s_{ij}^{(r)} ) - \Delta^{ - 1} (s_{ij}^{(t)} )|} } $$
(8)

where \(\alpha_{1}\) and \(\alpha_{2}\) are normalization coefficients. The preference values \(f_{ij}\) and the self-confidence levels \(s_{ij}\) have different domains, so there could be a large difference in magnitude between \(f_{ij}\) and \(s_{ij}\). In order to eliminate the influence of these different domains on the distance measurement, \(\alpha_{1}\) and \(\alpha_{2}\) are used to normalize the distance between preference values \(f_{ij}\) and the distance between self-confidence levels \(s_{ij}\). According to matrix theory, the normalization coefficient \(\alpha_{k}\) is given by

$$ \alpha_{k} = \frac{1}{{sp_{k} }}, \, k = 1, \, 2. $$
(9)

where \(sp_{k}\) is the Frobenius norm of the matrix \(G^{k}\), i.e., \(sp_{k} = ||G^{k} ||_{2} = \sqrt {\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {(g_{ij}^{k} )^{2} } } }\).
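
As a sketch, the extended Manhattan distance of Eq. (8) with the normalization of Eq. (9) can be computed as below. The sketch assumes that \(G^{1}\) and \(G^{2}\) are the matrices of preference-value differences and of self-confidence-index differences, respectively; this reading of \(G^{k}\), together with the function name and the zero-distance guard, is our own assumption.

```python
import numpy as np

# Illustrative computation of the extended Manhattan distance of Eqs. (8)-(9), assuming
# G^1 = (f^(r) - f^(t)) and G^2 = (Delta^{-1}(s^(r)) - Delta^{-1}(s^(t))) are the matrices
# whose Frobenius norms define the normalization coefficients alpha_1 and alpha_2.

def extended_manhattan_distance(F_r, S_r, F_t, S_t):
    F_r, S_r = np.asarray(F_r, dtype=float), np.asarray(S_r, dtype=float)
    F_t, S_t = np.asarray(F_t, dtype=float), np.asarray(S_t, dtype=float)
    n = F_r.shape[0]
    G1 = F_r - F_t                          # preference-value differences
    G2 = S_r - S_t                          # self-confidence-index differences

    def norm_coeff(G):
        sp = np.linalg.norm(G)              # Frobenius norm, Eq. (9)
        return 1.0 / sp if sp > 0 else 0.0  # identical matrices contribute zero distance

    return (norm_coeff(G1) * np.abs(G1).sum() / n**2 +
            norm_coeff(G2) * np.abs(G2).sum() / n**2)
```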

All notations are summarized in Table 2.

Table 2 Notation summary

4.1 Improving Individual Consistency

Generally, a lack of consistency will lead to unreliable conclusions. When a preference relation has unacceptable consistency, the decision maker needs to modify his or her individual preferences. A consistency-improving process can be used to help the DMs improve individual consistency. In the present work, an optimization-based method is proposed to obtain self-confident additive preference relations with acceptable consistency.

Let \(\overline{SCL}\) be a predefined individual consistency threshold for the consistency index of a self-confident additive preference relation \(R = ((f_{ij} , \, s_{ij} ))_{n \times n}\). If \(SCL(R) < \overline{SCL}\), \(R\) is considered to be of unacceptable consistency. In this subsection, we present a nonlinear optimization model to construct a new self-confident additive preference relation \(\overline{R}\) with acceptable consistency. The main idea of this optimization model is to minimize the information loss between \(R\) and \(\overline{R}\), including the loss of preference information and the loss of self-confidence information, and also to find a self-confident additive preference relation \(\overline{R} = ((\overline{{f_{ij} }} , \, \overline{{s_{ij} }} ))_{n \times n}\) that satisfies \(SCL(\overline{R} ) \ge \overline{SCL}\). Therefore, based on the extended Manhattan distance (i.e., Eq. (8)), the objective function of the proposed nonlinear optimization model can be constructed as follows:

$$ \min \, \alpha_{1} \times \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|f_{ij} - \overline{{f_{ij} }} |} } + \alpha_{2} \times \frac{1}{{n^{2} }}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|\Delta^{ - 1} (s_{ij} ) - \Delta^{ - 1} (\overline{{s_{ij} }} )|} } $$
(10)

where \(\alpha_{1}\) and \(\alpha_{2}\) are the normalization coefficients. Because different scales are adopted for preference and self-confidence, there could be a large difference in magnitude between the preference values and the self-confidence values. In order to eliminate the influence of the different scales, \(\alpha_{1}\) and \(\alpha_{2}\) are used to standardize the measurement of the preference information loss and the self-confidence information loss (Ma et al. 2006; Zhang et al. 2019a).

Meanwhile, it is natural that the constructed self-confident additive preference relation \(\overline{R}\) should have an acceptable consistency level, that is,

\(SCL(\overline{R} ) = 1 - \frac{2}{3n(n - 1)(n - 2)}\sum\limits_{i,j,k = 1}^{n} {\frac{{\Delta^{ - 1} (\overline{{s_{ij} }} ) + \Delta^{ - 1} (\overline{{s_{jk} }} ) + \Delta^{ - 1} (\overline{{s_{ik} }} )}}{{3\Delta^{ - 1} (l_{g} )}}|\overline{{f_{ij} }} + \overline{{f_{jk} }} - \overline{{f_{ik} }} - 0.5|} \ge \overline{SCL}\).

In this way, a nonlinear optimization model to obtain the self-confident additive preference relation with acceptable consistency can be constructed as follows:

$$ \left\{ \begin{gathered} \min {\text{ }}\alpha _{1} \times \frac{1}{{n^{2} }}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {|f_{{ij}} - \overline{{f_{{ij}} }} |} } + \alpha _{2} \times \frac{1}{{n^{2} }}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {|\Delta ^{{ - 1}} (s_{{ij}} ) - \Delta ^{{ - 1}} (\overline{{s_{{ij}} }} )|} } \quad \quad \quad \quad \quad \quad \quad \;\;\quad \quad (1) \hfill \\ s.t.\left\{ \begin{gathered} \overline{{f_{{ij}} }} + \overline{{f_{{ji}} }} = 1\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad (2) \hfill \\ \Delta ^{{ - 1}} (\overline{{s_{{ij}} }} ) = \Delta ^{{ - 1}} (\overline{{s_{{ji}} }} )\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;(3) \hfill \\ \frac{2}{{3n(n - 1)(n - 2)}}\sum\limits_{{i,j,k = 1}}^{n} {\frac{{\Delta ^{{ - 1}} (\overline{{s_{{ij}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{jk}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{ik}} }} )}}{{3\Delta ^{{ - 1}} (l_{g} )}}|\overline{{f_{{ij}} }} + \overline{{f_{{jk}} }} - \overline{{f_{{ik}} }} - 0.5|} \le 1 - \overline{{SCL}} \quad \;(4) \hfill \\ \overline{{f_{{ij}} }} \ge 0,{\text{ }}\overline{{s_{{ij}} }} \in S^{{SL}} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;(5) \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. $$
(11)

where \(\overline{{f_{ij} }}\)(\(i, \, j = 1, \, 2, \, ..., \, n\)) and \(\overline{{s_{ij} }}\)(\(i, \, j = 1, \, 2, \, ..., \, n\)) are the decision variables. In model (11), the objective function is to minimize the information and self-confidence loss between \(R\) and \(\overline{R}\). Formulas (2) and (3) are used to yield the new self-confident additive preference relation \(\overline{R}\). Formula (4) is used to ensure that the consistency level of \(\overline{R}\) is acceptable. Model (11) not only guarantees that the adjusted individual self-confident additive preference relation can reach an acceptable consistency but also seeks the optimal self-confident additive preference relation with minimum information and self-confidence loss from decision maker’s original preference relation.

Note 2: The nonlinear optimization model (11) is a convex nonlinear programming problem, and it may admit multiple optimal solutions. The uniqueness of the solution is not the core of our proposal, so this paper does not focus on this issue. In future studies, we will discuss how to select a solution from the set of optimal solutions of this model; for example, an additional index can be introduced to yield a unique optimal solution. The nonlinear optimization models proposed in this study can be solved with readily available software such as LINGO.

We use seven transformed variables in model (11): \(a_{ijk} = \overline{{f_{ij} }} + \overline{{f_{jk} }} - \overline{{f_{ik} }} - 0.5\), \(b_{ijk} = |a_{ijk} |\), \(s_{ijk} = (\Delta^{ - 1} (\overline{{s_{ij} }} ) + \Delta^{ - 1} (\overline{{s_{jk} }} ) + \Delta^{ - 1} (\overline{{s_{ik} }} ))/3\Delta^{ - 1} (l_{g} )\), \(x_{ij} = f_{ij} - \overline{{f_{ij} }}\), \(c_{ij} = |x_{ij} |\), \(y_{ij} = \Delta^{ - 1} (s_{ij} ) - \Delta^{ - 1} (\overline{{s_{ij} }} )\), and \(d_{ij} = |y_{ij} |\). In this way, model (11) can be transformed into the following nonlinear programming model:

$$ \left\{ \begin{gathered} \min {\text{ }}\alpha _{1} \times \frac{1}{{n^{2} }}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {c_{{ij}} } } + \alpha _{2} \times \frac{1}{{n^{2} }}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {d_{{ij}} } } \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;(1) \hfill \\ s.t.\left\{ \begin{gathered} \overline{{f_{{ij}} }} + \overline{{f_{{ji}} }} = 1\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;i,j = 1,2,...,n\quad \quad \;(2) \hfill \\ \Delta ^{{ - 1}} (\overline{{s_{{ij}} }} ) = \Delta ^{{ - 1}} (\overline{{s_{{ji}} }} )\quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;i,j = 1,2,...,n\quad \quad \;(3) \hfill \\ a_{{ijk}} = \overline{{f_{{ij}} }} + \overline{{f_{{jk}} }} - \overline{{f_{{ik}} }} - 0.5\quad \quad \quad \quad \quad \quad i,j,k = 1,2,...,n\quad \quad \;(4) \hfill \\ b_{{ijk}} - a_{{ijk}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \;\;i,j,k = 1,2,...,n\quad \quad \;\;(5) \hfill \\ b_{{ijk}} + a_{{ijk}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j,k = 1,2,...,n\quad \quad (6) \hfill \\ s_{{ijk}} = \frac{{\Delta ^{{ - 1}} (\overline{{s_{{ij}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{jk}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{ik}} }} )}}{{3\Delta ^{{ - 1}} (l_{g} )}}\quad \quad \quad i,j,k = 1,2,...,n\quad \quad (7) \hfill \\ \frac{2}{{3n(n - 1)(n - 2)}}\sum\limits_{{i,j,k = 1}}^{n} {s_{{ijk}} b_{{ijk}} } \le 1 - \overline{{SCL}} \quad \;\;i,j,k = 1,2,...,n\quad \quad (8) \hfill \\ x_{{ij}} = f_{{ij}} - \overline{{f_{{ij}} }} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad (9) \hfill \\ c_{{ij}} - x_{{ij}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \;\;{\text{(10)}} \hfill \\ c_{{ij}} + x_{{ij}} \ge 0{\text{ }}\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad \quad \;{\text{(11)}} \hfill \\ y_{{ij}} = \Delta ^{{ - 1}} (s_{{ij}} ) - \Delta ^{{ - 1}} (\overline{{s_{{ij}} }} )\quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad {\text{(12)}} \hfill \\ d_{{ij}} - y_{{ij}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad \quad \;{\text{(13)}} \hfill \\ d_{{ij}} + y_{{ij}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad \quad \;{\text{(14)}} \hfill \\ \overline{{f_{{ij}} }} \ge 0{\text{ }}\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad \quad \quad \;\;\;{\text{(15)}} \hfill \\ \overline{{s_{{ij}} }} \in S^{{SL}} \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n\quad \quad \quad \quad \quad \quad \;\;{\text{(16)}} \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. $$
(12)

In model (12), constraints (4)−(6) guarantee that \(b_{ijk} = |a_{ijk} | = |\overline{{f_{ij} }} + \overline{{f_{jk} }} - \overline{{f_{ik} }} - 0.5|\), constraint (8) guarantees that \(\overline{R}\) is of acceptable consistency \(SCL(\overline{R} ) \ge \overline{SCL}\), constraints (9)−(11) guarantee that \(c_{ij} = |x_{ij} | = |f_{ij} - \overline{{f_{ij} }} |\), and constraints (12)−(14) guarantee that \(d_{ij} = |y_{ij} | = |\Delta^{ - 1} (s_{ij} ) - \Delta^{ - 1} (\overline{{s_{ij} }} )|\).

Solving the nonlinear optimization model (12) yields the adjusted self-confident additive preference relation \(\overline{R}\) with acceptable consistency, \(SCL(\overline{R} ) \ge \overline{SCL}\). When \(\overline{SCL} = 1\), the obtained self-confident additive preference relation \(\overline{R}\) is fully consistent. For different consistency thresholds \(\overline{SCL}\), the proposed approach generates different adjusted matrices that can serve as feedback for DMs to improve individual consistency.
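
To give a flavour of how model (12) can be solved numerically, the following rough Python sketch solves a continuous relaxation of model (11)/(12) with a general-purpose solver: the self-confidence indices are treated as continuous values in \([0, \, g]\) and rounded back to \(S^{SL}\) afterwards, and the \(\alpha\) coefficients are replaced by a simple \(1/g\) scaling. All of these simplifications, the use of scipy, and the function names are our own assumptions; the sketch does not reproduce the LINGO-based procedure of this paper.

```python
import numpy as np
from scipy.optimize import minimize

# Rough sketch: a continuous relaxation of model (12). F0 holds the original preference
# values and S0 the original numerical self-confidence indices Delta^{-1}(s_ij).

def repair_consistency(F0, S0, scl_min=0.9, g=8):
    F0, S0 = np.asarray(F0, dtype=float), np.asarray(S0, dtype=float)
    n = F0.shape[0]
    iu = np.triu_indices(n, k=1)                 # only the upper triangle is free
    p = len(iu[0])

    def unpack(z):
        f, s = z[:p], z[p:]
        F, S = np.full((n, n), 0.5), np.full((n, n), float(g))
        F[iu], F[(iu[1], iu[0])] = f, 1.0 - f    # reciprocity: f_ji = 1 - f_ij
        S[iu], S[(iu[1], iu[0])] = s, s          # symmetric self-confidence
        return F, S

    def scl(z):                                  # consistency index of Eq. (3)
        F, S = unpack(z)
        total = sum((S[i, j] + S[j, k] + S[i, k]) / (3.0 * g) *
                    abs(F[i, j] + F[j, k] - F[i, k] - 0.5)
                    for i in range(n) for j in range(n) for k in range(n))
        return 1.0 - 2.0 * total / (3.0 * n * (n - 1) * (n - 2))

    def loss(z):                                 # simplified version of objective (10)
        F, S = unpack(z)
        return np.abs(F - F0).sum() / n**2 + np.abs(S - S0).sum() / (g * n**2)

    z0 = np.concatenate([F0[iu], S0[iu]])
    bounds = [(0.0, 1.0)] * p + [(0.0, float(g))] * p
    cons = [{"type": "ineq", "fun": lambda z: scl(z) - scl_min}]
    res = minimize(loss, z0, bounds=bounds, constraints=cons, method="SLSQP")
    F_bar, S_bar = unpack(res.x)
    return F_bar, np.round(S_bar)                # heuristically map indices back to S^SL

# Example: repairing the relation of Example 1 with consistency threshold 0.9.
F0 = [[0.5, 0.2, 0.8], [0.8, 0.5, 0.6], [0.2, 0.4, 0.5]]
S0 = [[8, 5, 2], [5, 8, 3], [2, 3, 8]]
print(repair_consistency(F0, S0, scl_min=0.9))
```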

Note 3: In this paper, the preference relations in the group decision-making process are homogeneous (i.e., all individuals are assumed to have equal weights), so we do not design weights for the decision makers; instead, we focus on developing a feedback adjustment with minimum information loss, which generates adjustment suggestions that DMs can use as a reference to modify their individual opinions. In future research, we may discuss using the individual consistency level as a penalty factor to assign importance to DMs in the ranking process.

4.2 Consensus Model

In real-life scenarios, the DMs may have different opinions about the decision-making problem, making it difficult to directly obtain an acceptable collective solution. Thus, a consensus reaching process is usually designed to build connections among DMs and to improve the group consensus level (Dong et al. 2018; Li et al. 2020; Zhang et al. 2019c). To ensure individual consistency in the consensus reaching process, Chiclana et al. (2008) proposed a consensus model based on individual consistency and consensus measures, and Zhang et al. (2012) developed a linear optimization model controlling both consistency and consensus for additive preference relations. Based on the frameworks of Chiclana et al. (2008) and Zhang et al. (2012), we propose an optimization-based consensus model to manage consistency and consensus issues in GDM with self-confident additive preference relations.

There are two basic elements in our optimization model: a set of \(m\) DMs, \(D = \{ d_{1} , \, d_{2} , \, ..., \, d_{m} \}\), and a set of \(n\) alternatives, \(X = \{ x_{1} , \, x_{2} , \, ..., \, x_{n} \}\) (\(n \ge 2\)). Let \(\lambda = (\lambda^{(1)} , \, \lambda^{(2)} , \, ..., \, \lambda^{(m)} )^{T}\) be the weight vector of the DMs. Let \(S^{SL} = \{ l_{0} , \, l_{1} , \, ..., \, l_{g} \}\) be a linguistic term set provided for the DMs to express their self-confidence levels. Each decision maker uses a self-confident additive preference relation as the decision-making tool to express preference information and self-confidence levels. Let \(R^{(k)} = ((f_{ij}^{(k)} , \, s_{ij}^{(k)} ))_{n \times n}\) (\(k = 1, \, 2, \, ..., \, m\)) be a group of self-confident additive preference relations with an unacceptable group consensus level, and let \(\overline{CCL}\) be the predefined consensus threshold. To reach consensus among the DMs \(\{ d_{1} , \, d_{2} , \, ..., \, d_{m} \}\), we need to find a new group of self-confident additive preference relations \(\overline{{R^{(k)} }} = ((\overline{{f_{ij}^{(k)} }} , \, \overline{{s_{ij}^{(k)} }} ))_{n \times n}\) (\(k = 1, \, 2, \, ..., \, m\)) with acceptable consensus and consistency levels. To preserve as much information of \(R^{(k)}\) (\(k = 1, \, 2, \, ..., \, m\)) as possible, the distance between \(R^{(k)}\) and \(\overline{{R^{(k)} }}\) should be as small as possible. Based on this idea, the objective function of the proposed optimization-based consensus model can be constructed as follows:

$$ \min \, \frac{1}{{n^{2} }}\sum\limits_{k = 1}^{m} {\alpha_{1}^{(k)} } \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|f_{ij}^{(k)} - \overline{{f_{ij}^{(k)} }} |} } + \frac{1}{{n^{2} }}\sum\limits_{k = 1}^{m} {\alpha_{2}^{(k)} } \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|\Delta^{ - 1} (s_{ij}^{(k)} ) - \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )|} } $$
(13)

Meanwhile, it is natural that the consistency of \(\overline{{R^{(k)} }}\) should be acceptable, that is,

\(SCL(\overline{{R^{(k)} }} ) = 1 - \frac{2}{3n(n - 1)(n - 2)}\sum\limits_{i,j,c = 1}^{n} {\frac{{\Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{jc}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{ic}^{(k)} }} )}}{{3\Delta^{ - 1} (l_{g} )}}|\overline{{f_{ij}^{(k)} }} + \overline{{f_{jc}^{(k)} }} - \overline{{f_{ic}^{(k)} }} - 0.5|} \ge \overline{SCL} ,\) and the group DMs should have an acceptable consensus level, i.e.,

$$ CCL\{ \overline{{R^{(1)} }} , \, \overline{{R^{(2)} }} , \, ..., \, \overline{{R^{(m)} }} \} = 1 - \frac{2}{nm(m - 1)(n - 1)}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1,j \ne i}^{n} {\sum\limits_{t \ge r}^{m} {\sum\limits_{r = 1}^{m} {\frac{{\Delta^{ - 1} (\overline{{s_{ij}^{(r)} }} ) + \Delta^{ - 1} (\overline{{s_{ij}^{(t)} }} )}}{{2\Delta^{ - 1} (l_{g} )}}|\overline{{f_{ij}^{(r)} }} - \overline{{f_{ij}^{(t)} }} |} } } } \ge \overline{CCL} . $$

In this way, the following nonlinear optimization model to promote consensus reaching in GDM with self-confident additive preference relations is constructed:

$$ \left\{ \begin{gathered} \min \ \frac{1}{{n^{2} }}\sum\limits_{k = 1}^{m} {\alpha_{1}^{(k)} } \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|f_{ij}^{(k)} - \overline{{f_{ij}^{(k)} }} |} } + \frac{1}{{n^{2} }}\sum\limits_{k = 1}^{m} {\alpha_{2}^{(k)} } \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{n} {|\Delta^{ - 1} (s_{ij}^{(k)} ) - \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )|} } \quad (1) \hfill \\ s.t.\left\{ \begin{gathered} \overline{{f_{ij}^{(k)} }} + \overline{{f_{ji}^{(k)} }} = 1 \quad (2) \hfill \\ \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} ) = \Delta^{ - 1} (\overline{{s_{ji}^{(k)} }} ) \quad (3) \hfill \\ \frac{2}{{3n(n - 1)(n - 2)}}\sum\limits_{i,j,c = 1}^{n} {\frac{{\Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{jc}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{ic}^{(k)} }} )}}{{3\Delta^{ - 1} (l_{g} )}}|\overline{{f_{ij}^{(k)} }} + \overline{{f_{jc}^{(k)} }} - \overline{{f_{ic}^{(k)} }} - 0.5|} \le 1 - \overline{SCL} \quad (4) \hfill \\ \frac{2}{{nm(m - 1)(n - 1)}}\sum\limits_{i = 1}^{n} {\sum\limits_{j = 1,j \ne i}^{n} {\sum\limits_{t \ge r}^{m} {\sum\limits_{r = 1}^{m} {\frac{{\Delta^{ - 1} (\overline{{s_{ij}^{(r)} }} ) + \Delta^{ - 1} (\overline{{s_{ij}^{(t)} }} )}}{{2\Delta^{ - 1} (l_{g} )}}|\overline{{f_{ij}^{(r)} }} - \overline{{f_{ij}^{(t)} }} |} } } } \le 1 - \overline{CCL} \quad (5) \hfill \\ \overline{{f_{ij}^{(k)} }} \ge 0, \ \overline{{s_{ij}^{(k)} }} \in S^{SL} \quad (6) \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. $$
(14)

where \(\overline{{f_{ij}^{(k)} }}\)(\(k = 1, \, 2, \, ..., \, m\), \(i, \, j = 1, \, 2, \, ..., \, n\)) and \(\overline{{s_{ij}^{(k)} }}\)(\(k = 1, \, 2, \, ..., \, m\), \(i, \, j = 1, \, 2, \, ..., \, n\)) are the decision variables. In model (14), the objective function is to minimize the total information and self-confidence loss between \(R^{(k)}\) and \(\overline{{R^{(k)} }}\). Formulas (2) and (3) are used to yield the new self-confident additive preference relation \(\overline{{R^{(k)} }}\). Formula (4) is used to ensure that the consistency level of \(\overline{{R^{(k)} }}\) is acceptable. Formula (5) is used to ensure that the consensus level among DMs is acceptable. Model (14) guarantees that the individual self-confident additive preference relation can reach an acceptable consistency level and that the group DMs can reach a predefined consensus level. Meanwhile, Model (14) seeks the optimal self-confident additive preference relations \(\overline{{R^{(k)} }}\)(\(k = 1, \, 2, \, ..., \, m\)) with minimum information and self-confidence loss from the original preference relations of the DMs.

We use ten transformed variables in model (14): \(a_{ijc}^{(k)} = \overline{{f_{ij}^{(k)} }} + \overline{{f_{jc}^{(k)} }} - \overline{{f_{ic}^{(k)} }} - 0.5\), \(b_{ijc}^{(k)} = |a_{ijc}^{(k)} |\), \(s_{ijc}^{(k)} = (\Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{jc}^{(k)} }} ) + \Delta^{ - 1} (\overline{{s_{ic}^{(k)} }} ))/3\Delta^{ - 1} (l_{g} )\), \(f_{ij}^{(rt)} = \overline{{f_{ij}^{(r)} }} - \overline{{f_{ij}^{(t)} }}\),\(g_{ij}^{(rt)} = |f_{ij}^{(rt)} |\),\(e_{ij}^{(rt)} = (\Delta^{ - 1} (\overline{{s_{ij}^{(r)} }} ) + \Delta^{ - 1} (\overline{{s_{ij}^{(t)} }} ))/2\Delta^{ - 1} (l_{g} )\), \(x_{ij}^{(k)} = f_{ij}^{(k)} - \overline{{f_{ij}^{(k)} }}\),\(c_{ij}^{(k)} = |x_{ij}^{(k)} |\), \(y_{ij}^{(k)} = \Delta^{ - 1} (s_{ij}^{(k)} ) - \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )\), \(d_{ij}^{(k)} = |y_{ij}^{(k)} |\). In this way, model (14) is transformed into the following nonlinear programming model:

$$ \left\{ \begin{gathered} \min \frac{1}{{n^{2} }}\sum\limits_{{k = 1}}^{m} {\alpha _{1}^{{(k)}} \sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {c_{{ij}}^{{(k)}} } } } + \frac{1}{{n^{2} }}\sum\limits_{{k = 1}}^{m} {\alpha _{2}^{{(k)}} \sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1}}^{n} {d_{{ij}}^{{(k)}} } } } \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad (1) \hfill \\ s.t.\left\{ \begin{gathered} \overline{{f_{{ij}}^{{(k)}} }} + \overline{{f_{{ji}}^{{(k)}} }} = 1\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(2)}} \hfill \\ \Delta ^{{ - 1}} (\overline{{s_{{ij}}^{{(k)}} }} ) = \Delta ^{{ - 1}} (\overline{{s_{{ji}}^{{(k)}} }} )\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(3)}} \hfill \\ a_{{ijc}}^{{(k)}} = \overline{{f_{{ij}}^{{(k)}} }} - \overline{{f_{{ic}}^{{(k)}} }} + \overline{{f_{{jc}}^{{(k)}} }} - 0.5\quad \quad \quad \quad i,j,c = 1,2,...,n;k = 1,2,...,m\quad {\text{(4)}} \hfill \\ b_{{ijc}}^{{(k)}} - a_{{ijc}}^{{(k)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j,c = 1,2,...,n;k = 1,2,...,m\quad {\text{(5)}} \hfill \\ b_{{ijc}}^{{(k)}} + a_{{ijc}}^{{(k)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j,c = 1,2,...,n;k = 1,2,...,m\quad {\text{(6)}} \hfill \\ s_{{ijc}}^{{(k)}} = \frac{{\Delta ^{{ - 1}} (\overline{{s_{{ij}}^{{(k)}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{jc}}^{{(k)}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{ic}}^{{(k)}} }} )}}{{3\Delta ^{{ - 1}} (l_{g} )}}{\text{ }}i,j,c = 1,2,...,n;k = 1,2,...,m\quad (7) \hfill \\ \frac{2}{{3n(n - 1)(n - 2)}}\sum\limits_{{i,j,c = 1}}^{n} {s_{{ijc}}^{{(k)}} b_{{ijc}}^{{(k)}} } \le 1 - \overline{{SCL}} {\text{ }}i,j,c = 1,2,...,n;k = 1,2,...,m\quad {\text{(8)}} \hfill \\ f_{{ij}}^{{(rt)}} = \overline{{f_{{ij}}^{{(r)}} }} - \overline{{f_{{ij}}^{{(t)}} }} {\text{ }}\quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;r,t = 1,2,...,m\quad (9) \hfill \\ g_{{ij}}^{{(rt)}} - f_{{ij}}^{{(rt)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;r,t = 1,2,...,m\quad (10) \hfill \\ g_{{ij}}^{{(rt)}} + f_{{ij}}^{{(rt)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;r,t = 1,2,...,m\quad (11) \hfill \\ e_{{ij}}^{{(rt)}} = \frac{{\Delta ^{{ - 1}} (\overline{{s_{{ij}}^{{(r)}} }} ) + \Delta ^{{ - 1}} (\overline{{s_{{ij}}^{{(t)}} }} )}}{{2\Delta ^{{ - 1}} (l_{g} )}}\quad \quad \quad \quad i,j = 1,2,...,n;r,t = 1,2,...,m\quad \quad (12) \hfill \\ \frac{2}{{nm(m - 1)(n - 1)}}\sum\limits_{{i = 1}}^{n} {\sum\limits_{{j = 1,j \ne i}}^{n} {\sum\limits_{{t \ge r}}^{m} {\sum\limits_{{r = 1}}^{m} {e_{{ij}}^{{(rt)}} g_{{ij}}^{{(rt)}} } } } } \le 1 - \overline{{CCL}} \quad \quad \quad \quad \quad \quad \quad \quad (13) \hfill \\ x_{{ij}}^{{(k)}} = f_{{ij}}^{{(k)}} - \overline{{f_{{ij}}^{{(k)}} }} \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(14)}} \hfill \\ c_{{ij}}^{{(k)}} - x_{{ij}}^{{(k)}} \ge 0{\text{ }}\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(15)}} \hfill \\ c_{{ij}}^{{(k)}} + x_{{ij}}^{{(k)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(16)}} \hfill \\ y_{{ij}}^{{(k)}} = \Delta ^{{ - 1}} (s_{{ij}}^{{(k)}} ) - \Delta ^{{ - 1}} (\overline{{s_{{ij}}^{{(k)}} }} )\quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(17)}} 
\hfill \\ d_{{ij}}^{{(k)}} - y_{{ij}}^{{(k)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(18)}} \hfill \\ d_{{ij}}^{{(k)}} + y_{{ij}}^{{(k)}} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad {\text{ }}i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(19)}} \hfill \\ \overline{{f_{{ij}}^{{(k)}} }} \ge 0\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(20)}} \hfill \\ \overline{{s_{{ij}}^{{(k)}} }} \in S^{{SL}} \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad i,j = 1,2,...,n;k = 1,2,...,m\quad {\text{(21)}} \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. $$
(15)

In model (15), constraints (4)−(6) guarantee that \(b_{ijc}^{(k)} = |a_{ijc}^{(k)} | = |\overline{{f_{ij}^{(k)} }} + \overline{{f_{jc}^{(k)} }} - \overline{{f_{ic}^{(k)} }} - 0.5|\), constraint (8) guarantees that \(\overline{{R^{(k)} }}\) is of acceptable consistency \(SCL(\overline{{R^{(k)} }} ) \ge \overline{SCL}\), constraints (9)−(11) guarantee that \(g_{ij}^{(rt)} = |f_{ij}^{(rt)} | = |\overline{{f_{ij}^{(r)} }} - \overline{{f_{ij}^{(t)} }} |\), constraint (13) guarantees that \(\{ \overline{{R^{(1)} }} , \, \overline{{R^{(2)} }} , \, ..., \, \overline{{R^{(m)} }} \}\) is of acceptable consensus level \(CCL\{ \overline{{R^{(1)} }} , \, \overline{{R^{(2)} }} , \, ..., \, \overline{{R^{(m)} }} \} \ge \overline{CCL}\), constraints (14)−(16) guarantee that \(c_{ij}^{(k)} = |x_{ij}^{(k)} | = |f_{ij}^{(k)} - \overline{{f_{ij}^{(k)} }} |\), and constraints (17)−(19) guarantee that \(d_{ij}^{(k)} = |y_{ij}^{(k)} | = |\Delta^{ - 1} (s_{ij}^{(k)} ) - \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )|\).

The optimal self-confident additive preference relations \(\overline{{R^{(k)} }} = ((\overline{{f_{ij}^{(k)} }} , \, \overline{{s_{ij}^{(k)} }} ))_{n \times n}\) (\(k = 1, \, 2, \, ..., \, m\)) with minimum information and self-confidence loss can be yielded by solving model (15). Let \(R^{(c)} = ((f_{ij}^{(c)} ,s_{ij}^{(c)} ))_{n \times n}\) be the collective preference matrix obtained from \(\overline{{R^{(k)} }}\)(\(k = 1, \, 2, \, ..., \, m\)) using the weighted average method, where

$$ f_{ij}^{(c)} = \sum\limits_{k = 1}^{m} {\lambda^{(k)} \overline{{f_{ij}^{(k)} }} } $$
(16)
$$ s_{ij}^{(c)} = \Delta (\sum\limits_{k = 1}^{m} {\lambda^{(k)} \Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )} ) $$
(17)
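
For illustration, the weighted-average aggregation of Eqs. (16) and (17) can be sketched as follows, assuming the self-confidence levels are handled through their numerical indices \(\Delta^{ - 1} (\overline{{s_{ij}^{(k)} }} )\) and that the aggregated index is kept as a value in \([0, \, g]\) prior to the \(\Delta\) mapping; the function and variable names are our own choices.

```python
import numpy as np

# Illustrative aggregation of the adjusted relations into the collective matrix R^(c)
# via Eqs. (16)-(17). Fs has shape (m, n, n); Ss holds the numerical indices
# Delta^{-1}(s_ij) of each decision maker; weights is the vector lambda.

def aggregate(Fs, Ss, weights):
    Fs, Ss = np.asarray(Fs, dtype=float), np.asarray(Ss, dtype=float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    F_c = (w * Fs).sum(axis=0)     # Eq. (16)
    S_c = (w * Ss).sum(axis=0)     # Eq. (17), before applying the Delta mapping
    return F_c, S_c
```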

In Liu et al. (2017), a linear programming method is presented to estimate the collective preference vector based on the heterogeneous self-confident preference relations. Inspired by the model of Liu et al. (2017), we use the following model to obtain the collective preference vector \(w^{(c)} = (w_{1}^{(c)} , \, w_{2}^{(c)} , \, ..., \, w_{n}^{(c)} )^{T}\) from \(R^{(c)}\):

$$ \left\{ \begin{gathered} \min z = \sum\limits_{i = 1}^{n - 1} {\sum\limits_{j = i + 1}^{n} {z_{ij}^{(c)} } } \hfill \\ s.t.\left\{ \begin{gathered} \varepsilon_{ij}^{(c)} = \frac{1}{2}(w_{i}^{(c)} - w_{j}^{(c)} ) + 0.5 - f_{ij}^{(c)} \hfill \\ z_{ij}^{(c)} = |\Delta^{ - 1} (s_{ij}^{(c)} )||\varepsilon_{ij}^{(c)} | \hfill \\ \sum\nolimits_{i = 1}^{n} {w_{i}^{(c)} } = 1 \hfill \\ \end{gathered} \right. \hfill \\ \end{gathered} \right. $$
(18)
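After the absolute values are linearized with auxiliary variables, as in model (15), model (18) is an ordinary linear program. The following is a minimal sketch using scipy.optimize.linprog; the nonnegativity bounds on the weights and the function name are our own assumptions, not part of the original model.

```python
import numpy as np
from scipy.optimize import linprog

def collective_priority_vector(F, S):
    """Sketch of model (18): minimize sum_{i<j} Delta^{-1}(s_ij^(c)) * |eps_ij|
    with eps_ij = 0.5*(w_i - w_j) + 0.5 - f_ij^(c) and sum_i w_i = 1.

    F : (n x n) array of collective preference values f_ij^(c)
    S : (n x n) integer array of label indices Delta^{-1}(s_ij^(c))
    """
    n = F.shape[0]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    p = len(pairs)
    # Decision variables: [w_1, ..., w_n, t_1, ..., t_p], where t_k >= |eps_ij|.
    cost = np.concatenate([np.zeros(n), np.array([S[i, j] for i, j in pairs], float)])
    A_ub, b_ub = [], []
    for k, (i, j) in enumerate(pairs):
        row = np.zeros(n + p)
        row[i], row[j], row[n + k] = 0.5, -0.5, -1.0   #  eps_ij - t_k <= 0
        A_ub.append(row); b_ub.append(F[i, j] - 0.5)
        row = np.zeros(n + p)
        row[i], row[j], row[n + k] = -0.5, 0.5, -1.0   # -eps_ij - t_k <= 0
        A_ub.append(row); b_ub.append(0.5 - F[i, j])
    A_eq = [np.concatenate([np.ones(n), np.zeros(p)])]  # sum_i w_i = 1
    bounds = [(0, 1)] * n + [(0, None)] * p              # assumed: 0 <= w_i <= 1
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x[:n]
```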

5 Hypothetical Application and Comparative Analysis

In this section, we use a hypothetical application, a venture capital investment scenario, to show the usability of the proposed methods. We then conduct a comparative analysis to show the influence of self-confidence levels on the decision-making results.

5.1 Hypothetical Application

Venture capital investment is a form of financing that investors provide to startup companies or small businesses that are believed to have long-term growth potential. Characterized by high risk, high growth, and high yield, venture capital investment plays an important role in fostering emerging industries in both developed and developing countries (Ewens et al. 2021). Several departments/experts are usually involved in this activity in order to make a better investment decision. Therefore, it is natural to formulate venture capital investment as a GDM problem.

A venture capital firm plans to invest in Internet projects. Through the firm's preliminary screening, four high-quality projects are selected for the final selection stage. These projects are:

\(x_{1}\): Internet live broadcast project.

\(x_{2}\): Internet parking project.

\(x_{3}\): Internet delivery system project.

\(x_{4}\): Internet education system project.

These four project proposals are submitted to the investment committee, which consists of six experts \(\{ d_{1} , \, d_{2} , \, ..., \, d_{6} \}\) from different departments: technology, product, marketing, financing, team, and management. For fairness among the experts, we suppose that all experts have equal weight, \(\lambda^{(1)} = \lambda^{(2)} = ... = \lambda^{(6)} = 1/6\). The experts are required to consider the risk and yield factors and then give their opinions on the four projects. It is assumed that the experts express their opinions using self-confident additive preference relations, as follows:

$$ R_{VC}^{(1)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.1, \, l_{7} )} & {(0.6, \, l_{7} )} & {(0.7, \, l_{6} )} \\ {(0.9, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{1} )} & {(0.5, \, l_{3} )} \\ {(0.4, \, l_{7} )} & {(0.8, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{6} )} \\ {(0.3, \, l_{6} )} & {(0.5, \, l_{3} )} & {(0.8, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) \,R_{VC}^{(2)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.8, \, l_{3} )} & {(0.7, \, l_{6} )} \\ {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} \\ {(0.2, \, l_{3} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{6} )} \\ {(0.3, \, l_{6} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC}^{(3)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.6, \, l_{5} )} & {(0.4, \, l_{4} )} & {(0.1, \, l_{3} )} \\ {(0.4, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.7, \, l_{6} )} & {(0.8, \, l_{4} )} \\ {(0.6, \, l_{4} )} & {(0.3, \, l_{6} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{7} )} \\ {(0.9, \, l_{3} )} & {(0.2, \, l_{4} )} & {(0.7, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\, R_{VC}^{(4)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.2, \, l_{3} )} & {(0.5, \, l_{1} )} & {(0.4, \, l_{6} )} \\ {(0.8, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{4} )} & {(0.7, \, l_{2} )} \\ {(0.5, \, l_{1} )} & {(0.7, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{4} )} \\ {(0.6, \, l_{6} )} & {(0.3, \, l_{2} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC}^{(5)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.5, \, l_{7} )} & {(0.4, \, l_{6} )} \\ {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.1, \, l_{3} )} & {(0.7, \, l_{4} )} \\ {(0.5, \, l_{7} )} & {(0.9, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{7} )} \\ {(0.6, \, l_{6} )} & {(0.3, \, l_{4} )} & {(0.8, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC}^{(6)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{5} )} & {(0.4, \, l_{7} )} & {(0.6, \, l_{4} )} \\ {(0.6, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.9, \, l_{2} )} \\ {(0.6, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{7} )} \\ {(0.4, \, l_{4} )} & {(0.1, \, l_{2} )} & {(0.4, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$

In the individual consistency improving process, we need to measure the consistency of \(R_{VC}^{(k)}\) (\(k = 1,2,...,6\)). In this hypothetical example, we assume that the consistency threshold is \(\overline{SCL} = 0.90\). Based on Eq. (3), the individual consistency levels are \(SCL(R_{VC}^{(1)} ) = 0.756\), \(SCL(R_{VC}^{(2)} ) = 0.891\), \(SCL(R_{VC}^{(3)} ) = 0.847\), \(SCL(R_{VC}^{(4)} ) = 0.937\), \(SCL(R_{VC}^{(5)} ) = 0.797\) and \(SCL(R_{VC}^{(6)} ) = 0.937\). Since \(SCL(R_{{_{VC} }}^{(k)} ) < \overline{SCL}\) for \(k = 1,2,3,5\), the relations \(R_{VC}^{(k)}\) (\(k = 1,2,3,5\)) are of unacceptable consistency. We therefore apply the consistency improving method (model (12)) to repair the inconsistency of \(R_{VC}^{(k)}\) (\(k = 1,2,3,5\)) with minimum adjustment. The adjusted relations \(\overline{{R_{VC}^{(k)} }}\) with acceptable consistency are obtained:

$$ \overline{{R_{VC}^{(1)} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.303, \, l_{7} )} & {(0.6, \, l_{7} )} & {(0.303, \, l_{6} )} \\ {(0.697, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.223, \, l_{1} )} & {(0.5, \, l_{3} )} \\ {(0.4, \, l_{7} )} & {(0.777, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.202, \, l_{6} )} \\ {(0.697, \, l_{6} )} & {(0.5, \, l_{3} )} & {(0.798, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,\overline{{R_{VC}^{(2)} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.8, \, l_{3} )} & {(0.7, \, l_{6} )} \\ {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{4} )} & {(0.539, \, l_{8} )} \\ {(0.2, \, l_{3} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{6} )} \\ {(0.3, \, l_{6} )} & {(0.461, \, l_{8} )} & {(0.4, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{R_{VC}^{(3)} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.6, \, l_{5} )} & {(0.4, \, l_{4} )} & {(0.1, \, l_{3} )} \\ {(0.4, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.7, \, l_{6} )} & {(0.534, \, l_{4} )} \\ {(0.6, \, l_{4} )} & {(0.3, \, l_{6} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{7} )} \\ {(0.9, \, l_{3} )} & {(0.466, \, l_{4} )} & {(0.7, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,\overline{{R_{VC}^{(5)} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.5, \, l_{7} )} & {(0.4, \, l_{6} )} \\ {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.297, \, l_{3} )} & {(0.647, \, l_{4} )} \\ {(0.5, \, l_{7} )} & {(0.703, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{7} )} \\ {(0.6, \, l_{6} )} & {(0.353, \, l_{4} )} & {(0.6, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$

Based on Eq. (3), we obtain \(SCL(\overline{{R_{VC}^{(k)} }} ) = 0.9\) (\(k = 1,2,3,5\)), so all experts now satisfy the required individual consistency level. Notably, in this example we assume that the experts accept the adjustment suggestions.
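To make these consistency figures reproducible, the consistency index implied by constraints (4)−(8) of model (15) can be sketched as follows. This is a minimal sketch under our own assumptions (nine-label scale \(l_{0} ,...,l_{8}\) with \(\Delta^{ - 1} (l_{t} ) = t\) and \(l_{g} = l_{8}\); the function name is ours); applied to \(R_{VC}^{(4)}\), it returns a value close to the reported 0.937.

```python
import numpy as np

def scl(F, S, g=8):
    """Individual consistency level of a self-confident additive preference
    relation, as implied by constraints (4)-(8) of model (15):
        SCL(R) = 1 - 2/(3n(n-1)(n-2)) * sum_{i,j,c} s_ijc * |f_ij + f_jc - f_ic - 0.5|,
    where s_ijc averages the three label indices, normalized by g.
    """
    n = F.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            for c in range(n):
                deviation = abs(F[i, j] + F[j, c] - F[i, c] - 0.5)
                confidence = (S[i, j] + S[j, c] + S[i, c]) / (3.0 * g)
                total += confidence * deviation
    return 1.0 - 2.0 * total / (3 * n * (n - 1) * (n - 2))

# The fourth expert's relation R_VC^(4) from the example above.
F4 = np.array([[0.5, 0.2, 0.5, 0.4],
               [0.8, 0.5, 0.3, 0.7],
               [0.5, 0.7, 0.5, 0.5],
               [0.6, 0.3, 0.5, 0.5]])
S4 = np.array([[8, 3, 1, 6],
               [3, 8, 4, 2],
               [1, 4, 8, 4],
               [6, 2, 4, 8]])
print(round(scl(F4, S4), 3))  # approximately 0.937
```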

Next, in the consensus reaching process, let \(\overline{{R_{VC}^{(4)} }} = R_{VC}^{(4)}\) and \(\overline{{R_{VC}^{(6)} }} = R_{VC}^{(6)}\). According to Eq. (6), the group consensus level among the six experts is \(CCL\{ \overline{{R_{VC}^{(1)} }} , \, \overline{{R_{VC}^{(2)} }} , \, \overline{{R_{VC}^{(3)} }} , \, \overline{{R_{VC}^{(4)} }} , \, \overline{{R_{VC}^{(5)} }} , \, \overline{{R_{VC}^{(6)} }} \} = 0.874\). Assuming the consensus threshold is \(\overline{CCL} = 0.92\), we have \(CCL\{ \overline{{R_{VC}^{(1)} }} , \, ..., \, \overline{{R_{VC}^{(6)} }} \} < \overline{CCL}\), which implies that the group consensus level does not satisfy the requirement. Applying consensus model (15), the adjusted self-confident additive preference relations \(\overline{{\overline{{R_{VC}^{(1)} }} }}\), \(\overline{{\overline{{R_{VC}^{(2)} }} }}\), \(\overline{{\overline{{R_{VC}^{(3)} }} }}\), \(\overline{{\overline{{R_{VC}^{(4)} }} }}\), \(\overline{{\overline{{R_{VC}^{(5)} }} }}\) and \(\overline{{\overline{{R_{VC}^{(6)} }} }}\) are obtained:

$$ \overline{{\overline{{R_{VC}^{(1)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.303, \, l_{7} )} & {(0.548, \, l_{7} )} & {(0.303, \, l_{6} )} \\ {(0.697, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.223, \, l_{1} )} & {(0.5, \, l_{3} )} \\ {(0.452, \, l_{7} )} & {(0.777, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{6} )} \\ {(0.697, \, l_{6} )} & {(0.5, \, l_{3} )} & {(0.6, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC}^{(2)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.8, \, l_{3} )} & {(0.6, \, l_{6} )} \\ {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{4} )} & {(0.539, \, l_{8} )} \\ {(0.2, \, l_{3} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{6} )} \\ {(0.4, \, l_{6} )} & {(0.461, \, l_{8} )} & {(0.5, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC}^{(3)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.55, \, l_{5} )} & {(0.4, \, l_{4} )} & {(0.303, \, l_{3} )} \\ {(0.45, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{6} )} & {(0.534, \, l_{4} )} \\ {(0.6, \, l_{4} )} & {(0.5, \, l_{6} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{7} )} \\ {(0.697, \, l_{3} )} & {(0.466, \, l_{4} )} & {(0.6, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC}^{(4)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{3} )} & {(0.5, \, l_{1} )} & {(0.4, \, l_{6} )} \\ {(0.7, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{4} )} & {(0.7, \, l_{2} )} \\ {(0.5, \, l_{1} )} & {(0.7, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{4} )} \\ {(0.6, \, l_{6} )} & {(0.3, \, l_{2} )} & {(0.5, \, l_{4} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC}^{(5)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.55, \, l_{8} )} & {(0.5, \, l_{7} )} & {(0.4, \, l_{6} )} \\ {(0.45, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.297, \, l_{3} )} & {(0.647, \, l_{4} )} \\ {(0.5, \, l_{7} )} & {(0.703, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{7} )} \\ {(0.6, \, l_{6} )} & {(0.353, \, l_{4} )} & {(0.6, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC}^{(6)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{5} )} & {(0.4, \, l_{7} )} & {(0.6, \, l_{4} )} \\ {(0.6, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.9, \, l_{2} )} \\ {(0.6, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{7} )} \\ {(0.4, \, l_{4} )} & {(0.1, \, l_{2} )} & {(0.5, \, l_{7} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$

The consensus level among the adjusted self-confident additive preference relations is acceptable because \(CCL\{ \overline{{\overline{{R_{VC}^{(1)} }} }} , \, \overline{{\overline{{R_{VC}^{(2)} }} }} , \, \overline{{\overline{{R_{VC}^{(3)} }} }} , \, \overline{{\overline{{R_{VC}^{(4)} }} }} , \, \overline{{\overline{{R_{VC}^{(5)} }} }} , \, \overline{{\overline{{R_{VC}^{(6)} }} }} \} = 0.92 \ge \overline{CCL}\). The individual consistency levels are \(SCL(\overline{{\overline{{R_{VC}^{(1)} }} }} ) = 0.9\), \(SCL(\overline{{\overline{{R_{VC}^{(2)} }} }} ) = 0.914\), \(SCL(\overline{{\overline{{R_{VC}^{(3)} }} }} ) = 0.945\), \(SCL(\overline{{\overline{{R_{VC}^{(4)} }} }} ) = 0.935\), \(SCL(\overline{{\overline{{R_{VC}^{(5)} }} }} ) = 0.9\) and \(SCL(\overline{{\overline{{R_{VC}^{(6)} }} }} ) = 0.913\), which are acceptable because \(SCL(\overline{{\overline{{R_{VC}^{(k)} }} }} ) \ge \overline{SCL}\) (\(k = 1,2,...,6\)). The adjusted self-confident additive preference relations are then used to ensure the rationality of the project investment decision.
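Analogously, the group consensus index implied by constraint (13) of model (15) can be sketched as follows (same scale assumptions as before; the function name is ours). It can be used to check the consensus levels reported in this subsection.

```python
import numpy as np

def ccl(F_list, S_list, g=8):
    """Group consensus level of m self-confident additive preference relations,
    as implied by constraint (13) of model (15):
        CCL = 1 - 2/(n*m*(m-1)*(n-1)) * sum_{i != j} sum_{r < t} e_ij^(rt) * |f_ij^(r) - f_ij^(t)|,
    where e_ij^(rt) averages the two label indices, normalized by g.
    """
    m = len(F_list)
    n = F_list[0].shape[0]
    total = 0.0
    for r in range(m):
        for t in range(r + 1, m):
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    e = (S_list[r][i, j] + S_list[t][i, j]) / (2.0 * g)
                    total += e * abs(F_list[r][i, j] - F_list[t][i, j])
    return 1.0 - 2.0 * total / (n * m * (m - 1) * (n - 1))
```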

Based on Eqs. (16) and (17), the collective preference matrix is the following:

$$ R_{VC}^{(c)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{5} )} & {(0.525, \, l_{5} )} & {(0.434, \, l_{5} )} \\ {(0.6, \, l_{5} )} & {(0.5, \, l_{8} )} & {(0.387, \, l_{4} )} & {(0.637, \, l_{4} )} \\ {(0.475, \, l_{5} )} & {(0.613, \, l_{4} )} & {(0.5, \, l_{8} )} & {(0.45, \, l_{6} )} \\ {(0.566, \, l_{5} )} & {(0.363, \, l_{4} )} & {(0.55, \, l_{6} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right). $$
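For instance, for the pair \((x_{1} ,x_{2} )\), Eqs. (16) and (17) give

$$ f_{12}^{(c)} = \frac{0.303 + 0.3 + 0.55 + 0.3 + 0.55 + 0.4}{6} \approx 0.4,\quad s_{12}^{(c)} = \Delta \left( {\frac{7 + 2 + 5 + 3 + 8 + 5}{6}} \right) = \Delta (5) = l_{5} . $$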

The collective preference vector obtained from \(R_{VC}^{(c)}\) using model (18) is \(w_{VC}^{(c)} = (0.159, \, 0.359, \, 0.191, \, 0.291)^{T}\), which ranks the projects as \(x_{2} \succ x_{4} \succ x_{3} \succ x_{1}\), i.e., the Internet parking project \(x_{2}\) would be selected.

5.2 Comparative Analysis

To show the influence of different self-confidence levels on the decision-making results, a comparative analysis is conducted for the hypothetical application. Compared with the corresponding six self-confident additive preference relations \(R_{VC}^{(k)}\) (\(k = 1,2,...,6\)), the following twelve matrices have the same preference values but different self-confidence levels:

$$ R_{VC1}^{(1)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.1, \, l_{1} )} & {(0.6, \, l_{3} )} & {(0.7, \, l_{7} )} \\ {(0.9, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.4, \, l_{3} )} & {(0.8, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{5} )} \\ {(0.3, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.8, \, l_{5} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(1)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.1, \, l_{8} )} & {(0.6, \, l_{8} )} & {(0.7, \, l_{8} )} \\ {(0.9, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.8, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{8} )} \\ {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.8, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC1}^{(2)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.8, \, l_{5} )} & {(0.7, \, l_{1} )} \\ {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{3} )} \\ {(0.2, \, l_{5} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{4} )} \\ {(0.3, \, l_{1} )} & {(0.5, \, l_{3} )} & {(0.4, \, l_{4} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(2)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{8} )} & {(0.8, \, l_{8} )} & {(0.7, \, l_{8} )} \\ {(0.7, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.2, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC1}^{(3)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.6, \, l_{2} )} & {(0.4, \, l_{2} )} & {(0.1, \, l_{3} )} \\ {(0.4, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.7, \, l_{1} )} & {(0.8, \, l_{4} )} \\ {(0.6, \, l_{2} )} & {(0.3, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{3} )} \\ {(0.9, \, l_{3} )} & {(0.2, \, l_{4} )} & {(0.7, \, l_{3} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(3)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.1, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.8, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{8} )} \\ {(0.9, \, l_{8} )} & {(0.2, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC1}^{(4)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.2, \, l_{2} )} & {(0.5, \, l_{6} )} & {(0.4, \, l_{3} )} \\ {(0.8, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.7, \, l_{4} )} \\ {(0.5, \, l_{6} )} & {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{3} )} \\ {(0.6, \, l_{3} )} & {(0.3, \, l_{4} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(4)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.2, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.8, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{8} )} & {(0.7, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC1}^{(5)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.7, \, l_{1} )} & {(0.5, \, l_{3} )} & {(0.4, \, l_{4} )} \\ {(0.3, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.1, \, l_{2} )} & {(0.7, \, l_{3} )} \\ {(0.5, \, l_{3} )} & {(0.9, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{1} )} \\ {(0.6, \, l_{4} )} & {(0.3, \, l_{3} )} & {(0.8, \, l_{1} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(5)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.7, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.3, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.1, \, l_{8} )} & {(0.7, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.9, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.3, \, l_{8} )} & {(0.8, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ R_{VC1}^{(6)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{3} )} & {(0.4, \, l_{1} )} & {(0.6, \, l_{2} )} \\ {(0.6, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{2} )} & {(0.9, \, l_{4} )} \\ {(0.6, \, l_{1} )} & {(0.5, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{2} )} \\ {(0.4, \, l_{2} )} & {(0.1, \, l_{4} )} & {(0.4, \, l_{2} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right)\,R_{VC2}^{(6)} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.9, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.1, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$

The 12 matrices are divided into two groups: the first group contains \(R_{VC1}^{(1)}\), \(R_{VC1}^{(2)}\), \(R_{VC1}^{(3)}\), \(R_{VC1}^{(4)}\), \(R_{VC1}^{(5)}\) and \(R_{VC1}^{(6)}\), and the second group contains \(R_{VC2}^{(1)}\), \(R_{VC2}^{(2)}\), \(R_{VC2}^{(3)}\), \(R_{VC2}^{(4)}\), \(R_{VC2}^{(5)}\) and \(R_{VC2}^{(6)}\). The weight of each DM is equal, \(\lambda^{(1)} = \lambda^{(2)} = ... = \lambda^{(6)} = 1/6\). Let \(\overline{SCL} = 0.9\) be the consistency threshold and \(\overline{CCL} = 0.92\) be the consensus threshold, the same values as in the hypothetical application. The comparative analysis results of the hypothetical example are listed in Table 2. The adjusted matrices \(\overline{{\overline{{R_{VCi}^{(k)} }} }}\) (\(i = 1, \, 2\); \(k = 1,2,...,6\)) are obtained as follows (Table 3):

$$ \overline{{\overline{{R_{VC1}^{(1)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.186, \, l_{1} )} & {(0.6, \, l_{3} )} & {(0.699, \, l_{7} )} \\ {(0.814, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.4, \, l_{3} )} & {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.599, \, l_{5} )} \\ {(0.301, \, l_{7} )} & {(0.5, \, l_{8} )} & {(0.401, \, l_{5} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC1}^{(2)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.775, \, l_{5} )} & {(0.7, \, l_{1} )} \\ {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{3} )} \\ {(0.225, \, l_{5} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{4} )} \\ {(0.3, \, l_{1} )} & {(0.5, \, l_{3} )} & {(0.4, \, l_{4} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC1}^{(3)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.6, \, l_{2} )} & {(0.4, \, l_{2} )} & {(0.1, \, l_{3} )} \\ {(0.4, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.7, \, l_{1} )} & {(0.8, \, l_{4} )} \\ {(0.6, \, l_{2} )} & {(0.3, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{3} )} \\ {(0.9, \, l_{3} )} & {(0.2, \, l_{4} )} & {(0.7, \, l_{3} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC1}^{(4)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.2, \, l_{2} )} & {(0.5, \, l_{6} )} & {(0.4, \, l_{3} )} \\ {(0.8, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.3, \, l_{2} )} & {(0.7, \, l_{4} )} \\ {(0.5, \, l_{6} )} & {(0.7, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{3} )} \\ {(0.6, \, l_{3} )} & {(0.3, \, l_{4} )} & {(0.5, \, l_{3} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC1}^{(5)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.7, \, l_{1} )} & {(0.5, \, l_{3} )} & {(0.4, \, l_{4} )} \\ {(0.3, \, l_{1} )} & {(0.5, \, l_{8} )} & {(0.1, \, l_{2} )} & {(0.7, \, l_{3} )} \\ {(0.5, \, l_{3} )} & {(0.9, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.2, \, l_{1} )} \\ {(0.6, \, l_{4} )} & {(0.3, \, l_{3} )} & {(0.8, \, l_{1} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC1}^{(6)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{3} )} & {(0.4, \, l_{1} )} & {(0.6, \, l_{2} )} \\ {(0.6, \, l_{3} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{2} )} & {(0.8, \, l_{4} )} \\ {(0.6, \, l_{1} )} & {(0.5, \, l_{2} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{2} )} \\ {(0.4, \, l_{2} )} & {(0.2, \, l_{4} )} & {(0.4, \, l_{2} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(1)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.699, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.301, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(2)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.531, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.416, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.469, \, l_{8} )} & {(0.584, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(3)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.411, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.699, \, l_{8} )} & {(0.5, \, l_{8} )} \\ {(0.589, \, l_{8} )} & {(0.301, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(4)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.411, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.411, \, l_{8} )} & {(0.626, \, l_{8} )} \\ {(0.589, \, l_{8} )} & {(0.589, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.416, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.374, \, l_{8} )} & {(0.584, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(5)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.176, \, l_{0} )} & {(0.5, \, l_{8} )} \\ {(0.5, \, l_{8} )} & {(0.824, \, l_{0} )} & {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
$$ \overline{{\overline{{R_{VC2}^{(6)} }} }} = \left( {\begin{array}{*{20}c} {(0.5, \, l_{8} )} & {(0.4, \, l_{8} )} & {(0.411, \, l_{8} )} & {(0.6, \, l_{8} )} \\ {(0.6, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.626, \, l_{8} )} \\ {(0.589, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.5, \, l_{8} )} & {(0.416, \, l_{8} )} \\ {(0.4, \, l_{8} )} & {(0.374, \, l_{8} )} & {(0.584, \, l_{8} )} & {(0.5, \, l_{8} )} \\ \end{array} } \right) $$
Table 3 Comparative analysis results

We can observe that, under the same required consistency and consensus levels, different self-confidence levels produce different adjusted self-confident additive preference relations. Furthermore, they lead to different collective preference vectors over the projects, and thus to different decision-making results.

6 Conclusion

The self-confident additive preference relation is a recently introduced type of preference relation. In this study, we investigate the consistency and consensus issues in GDM with self-confident additive preference relations. First, based on the individual consistency measuring method presented by Herrera-Viedma et al. (2007b), we propose a consistency index to measure the individual consistency level of a self-confident additive preference relation, and we present a new consensus index to evaluate the group consensus level of self-confident additive preference relations. Second, we propose two nonlinear optimization models to manage consistency and consensus issues in GDM with self-confident additive preference relations. The first model is used to derive self-confident additive preference relations with acceptable consistency. The second model is used to promote consensus reaching while simultaneously managing individual consistency. Moreover, the proposed optimization-based models optimally preserve the original preference information and self-confidence levels, subject to the required consistency and consensus levels. Finally, the proposal is applied to a venture capital investment scenario, and a comparative analysis is conducted to show the influence of self-confidence levels on the decision-making results. The proposal in this study can provide group decision support to help DMs manage individual consistency and promote consensus building.

In future work, we plan to study the consensus problem with heterogeneous self-confident preference relations in a large-scale decision-making context (Chen et al. 2015; Zhang et al. 2019a). It would also be interesting to design a consensus reaching model with a feedback process, in which the optimal adjusted self-confident additive preference relations are employed as references for decision makers to modify their opinions. Moreover, decision makers may exhibit non-cooperative behaviors (Dong et al. 2016; Xu et al. 2019; Zhang et al. 2021b) when accepting adjustment suggestions in such a feedback-based consensus reaching process, so investigating non-cooperative behaviors in the feedback-based consensus reaching process with self-confident additive preference relations is another interesting research direction.