
1 Introduction

Decision-making plays an important part in management and business. Supporting the decision-making activity with systematic methods and models for analyzing decisions is made necessary by the limits of the human decision-maker. Filip (2002) presents the main types of generally valid limits:

  • cognitive limits linked to the manager’s limited capacity to store and process information and knowledge;

  • economic limits linked to the cost involved in obtaining and processing information;

  • time limits, revealed in the poor quality of certain decisions taken under time pressure in a competitive environment.

These limits are generally amplified by the characteristics of the economic environment. Decision-making is a difficult process owing to incomplete and scarce information. Most real-world situations involve multi-attribute decision-making, which occurs when an alternative or action plan must be chosen while the decision-maker considers several objectives simultaneously. The objectives are different in nature and may conflict with one another. Managers face numerous and contradictory multi-attribute decision problems: maximizing profits while minimizing costs and risks, for example in investment processes, when selecting human resources, or when choosing markets. Another important class of multi-attribute decision problems is encountered in political decision-making, which takes into account the public interests of society and of the citizens, as well as the interests of interest groups or the decision-makers' personal objectives. Examples include applying for a public position, forming alliances or mergers between political parties, setting fiscal policies, and setting policies to stimulate certain economic sectors (Filip 2002). These decision-making situations lead to problems with a limited number of discrete alternatives. In many situations, the attributes cannot be evaluated precisely, which calls for the development of models based on fuzzy sets.

In 1965, Zadeh introduced the theory of fuzzy sets. Since then, many researchers have worked on fuzzy decision-making problems by applying fuzzy set theory.

2 Preliminaries

In Zadeh (1965), a fuzzy set is defined as follows: Let \( X = \left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} \) be a universe of discourse. A fuzzy set A is characterized by a membership function \( \mu_{A}:X \to \left[ {0,1} \right] \), which associates to each element \( x_{j} \in X \) the degree of membership \( \mu_{A} \left( {x_{j} } \right) \):

$$ A = \left\{ {\left( {x_{j} ,\mu_{A} \left( {x_{j} } \right)} \right)\,,x_{j} \in X} \right\} $$
(1)

In the particular case when \( \mu_{A} \) takes only the values 0 or 1, the fuzzy set A reduces to a classical subset of X.

The definition of fuzzy sets clarifies the distinction between randomness and fuzziness: a random phenomenon is the result of uncertainty regarding the membership or non-membership of an object to a class, whereas in a fuzzy phenomenon there are several intermediate degrees of membership between full membership and non-membership.

Let \( e_{i} = \left( {x_{i} ,\mu_{A} \left( {x_{i} } \right)} \right) \) and \( e_{j} = \left( {x_{j} ,\mu_{A} \left( {x_{j} } \right)} \right) \) be two fuzzy elements from fuzzy set A. We say

$$ e_{i} < e_{j} \;\;{\text{if}}\;\;\mu_{A} \left( {x_{i} } \right) < \mu_{A} \left( {x_{j} } \right) $$
(2)

An intuitionistic fuzzy set (IFS) A in X is (Atanasov 1986):

$$ A = \left\{ {\left( {x_{j} ,\mu_{A} \left( {x_{j} } \right)\,,v_{A} \left( {x_{j} } \right)} \right),x_{j} \in X} \right\} $$
(3)

which is characterized by a membership function \( \mu_{A} \) and a non-membership function \( v_{A} \), where

$$ \mu_{A}:X \to \left[ {0,1} \right]\;,\quad x_{j} \in X \to \mu_{A} \left( {x_{j} } \right) \in \left[ {0,1} \right] $$
(4)
$$ v_{A}:X \to \left[ {0,1} \right]\;,\quad x_{j} \in X \to v_{A} \left( {x_{j} } \right) \in \left[ {0,1} \right] $$
(5)

on condition that

\( \mu_{A} \left( {x_{j} } \right) + v_{A} \left( {x_{j} } \right) \le 1, \) for all \( x_{j} \in X \)

For each IFS A in X, if

$$ \pi_{A} \left( {x_{j} } \right) = 1 - \mu_{A} \left( {x_{j} } \right) - v_{A} \left( {x_{j} } \right) $$
(6)

then \( \pi_{A} \left( {x_{j} } \right) \) is called the degree of indeterminacy of \( x_{j} \) to A.

If \( \pi_{A} \left( {x_{j} } \right) = 1 - \mu_{A} \left( {x_{j} } \right) - \nu_{A} \left( {x_{j} } \right) = 0 \), for each \( x_{j} \in X \) the IFS A is reduced to a fuzzy set (Xu 2007a).

Xu (2007b) calls \( \alpha = \left( {\mu_{\alpha } ,\nu_{\alpha } } \right) \) an intuitionistic fuzzy number (IFN), where \( \mu_{\alpha } \in \left[ {0,1} \right] \), \( \nu_{\alpha } \in \left[ {0,1} \right] \), and \( \mu_{\alpha } + \nu_{\alpha } \le 1 \). We define:

  • the score of α:

$$ s\left( \alpha \right) = \mu_{\alpha } - \nu_{\alpha } \;\;{\text{where}}\;\;s\left( \alpha \right) \in \left[ { - 1,1} \right] $$
(7)
  • the degree of accuracy of the IFN α:

$$ h\left( \alpha \right) = \mu_{\alpha } + \nu_{\alpha } \;\;{\text{where}}\;\;h\left( \alpha \right) \in \left[ {0,1} \right] $$
(8)

Let \( s\left( {\alpha_{1} } \right) = \mu_{{\alpha_{1} }} - \nu_{{\alpha_{1} }} \) and \( s\left( {\alpha_{2} } \right) = \mu_{{\alpha_{2} }} - \nu_{{\alpha_{2} }} \) be the scores of \( \alpha_{1} \) and \( \alpha_{2} \), respectively, and let \( h\left( {\alpha_{1} } \right) = \mu_{{\alpha_{1} }} + \nu_{{\alpha_{1} }} \) and \( h\left( {\alpha_{2} } \right) = \mu_{{\alpha_{2} }} + \nu_{{\alpha_{2} }} \) be the accuracy degrees of \( \alpha_{1} \), and \( \alpha_{2} \), respectively, then define (Xu 2007b):

\( \alpha_{1} < \alpha_{2} \) if \( s\left( {\alpha_{1} } \right) < s\left( {\alpha_{2} } \right) \) or

$$ {\text{if}}\;\;s\left( {\alpha_{1} } \right) = s\left( {\alpha_{2} } \right)\;\;{\text{and}}\;\;h\left( {\alpha_{1} } \right) < h\left( {\alpha_{2} } \right) $$
(9)
$$ \alpha_{1} = \alpha_{2} \;\;{\text{if}}\;\;s\left( {\alpha_{1} } \right) = s\left( {\alpha_{2} } \right)\;\;{\text{and}}\;\;h\left( {\alpha_{1} } \right) = h\left( {\alpha_{2} } \right) $$
(10)

which obviously implies \( \mu_{{\alpha_{1} }} = \mu_{{\alpha_{2} }} \) and \( \nu_{{\alpha_{1} }} = \nu_{{\alpha_{2} }}. \)
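To make the comparison rule concrete, the following minimal Python sketch (illustrative only; the class and function names are not part of the original text) encodes an IFN as a pair \( (\mu_{\alpha}, \nu_{\alpha}) \) and orders two IFNs by the score (7) and, on equal scores, by the accuracy (8), following (9) and (10).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """Intuitionistic fuzzy number (mu, nu), with mu + nu <= 1."""
    mu: float
    nu: float

    def score(self) -> float:       # relation (7)
        return self.mu - self.nu

    def accuracy(self) -> float:    # relation (8)
        return self.mu + self.nu

def compare(a: IFN, b: IFN) -> int:
    """Return -1 if a < b, 1 if a > b and 0 if a = b, per relations (9)-(10)."""
    if a.score() != b.score():
        return -1 if a.score() < b.score() else 1
    if a.accuracy() != b.accuracy():
        return -1 if a.accuracy() < b.accuracy() else 1
    return 0

# (0.5, 0.3) and (0.6, 0.4) have the same score 0.2;
# the tie is broken by the higher accuracy of (0.6, 0.4).
print(compare(IFN(0.5, 0.3), IFN(0.6, 0.4)))   # -> -1
```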

3 Applying Fuzzy Techniques in the Delphi Method

The Delphi Consultation Method was developed in 1964–1965 by O. Helmer at the RAND Corporation in Santa Monica. It is a group technique that is successfully applied in management and marketing decision-making. The Delphi Method emerged as an improvement over the committee method, which involves several rounds of discussion for choosing a solution. It has been noted that oratorically gifted people, or those with a particular scientific reputation, succeed in imposing their opinion even when better solutions exist. There is also an almost general reluctance to admit that one's view can change from one round to the next.

The Delphi Consultation Method is successfully applied in the following situations (Linstone and Turoff 2002; Dick 2000; Turoff 2002):

  • The problem does not require precise analytical techniques, but it can benefit from subjective collective judgments;

  • People involved in the analysis of complex problems do not have a record of adequate communication and may represent diverse backgrounds in terms of experience and expertise;

  • The number of people consulted is greater than the number allowing for effective face-to-face interaction;

  • The animosity between the participants is so serious that the communication process must be mediated and/or anonymity needs to be ensured; the participants’ heterogeneity must be maintained with a view to ensuring the validity of results, which means that domination by quantity or by strength of personality should be avoided.

The Delphi Method involves the following iterative process:

  1. Step 1.

    The problem must be defined. The group of experts is chosen in the field encompassing the problem discussed (they will be consulted separately and independently). The questionnaire is prepared and distributed.

  2. Step 2.

    The questionnaire responses are analyzed. The information obtained, which is subjective, is statistically analyzed. The results are communicated to group members.

  3. Step 3.

    Group members analyze the results and make new estimates, providing explanations for those opinions that differ significantly from the other participants’.

The process defined by Steps 2 and 3 is repeated until the responses are stabilized, that is, they converge to a reasonable solution in terms of management. Thus, the technique allows experts to tackle a complex problem in a systematic way. From one stage to another, the relevant information is communicated and group members are further instructed. In this way, recommendations can be provided based on more complete information.

In Bojadziev and Bojadziev (1995), a method is proposed for analyzing the information provided by the target group, given a questionnaire whose answers are quantitative (numerical). The method consists in providing answers by means of fuzzy numbers.

A fuzzy number A is defined by its associated membership function \( f_{A} \left( x \right) \), with domain of definition \( A = \left[ {a_{1} ,a_{2} } \right] \subset R \) and \( f_{A} \left( x \right) \in \left[ {0,1} \right] \):

$$ f_{A} \left( x \right) = \left\{ {\begin{array}{*{20}{l}} {\frac{{x - a_{1} }}{{a_{M} - a_{1} }},} \hfill & {\text{if}} \hfill & {a_{1} \le x \le a_{M} } \hfill \\ {\frac{{x - a_{2} }}{{a_{M} - a_{2} }},} \hfill & {\text{if}} \hfill & {a_{M} \le x \le a_{2} } \hfill \\ {0,} \hfill & {\text{otherwise}} \hfill & {} \hfill \\ \end{array}} \right.$$
(11)

Suppose that for an uncertain value the lowest and highest possible values can be specified, that is, the interval \( A = \left[ {a_{1} ,a_{2} } \right] \). If, in addition, the most probable value \( a_{M} \) can be specified, then the maximum of the membership function is the point \( \left( {a_{M} ,1} \right) \). Having the three values, the triangular fuzzy number \( \left( {a_{1} ,a_{M} ,a_{2} } \right) \) can be built and the associated function defined as in (11).
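As an illustration of (11), a short Python sketch is given below (an assumption for illustration, not part of the source); it evaluates the membership degree of a value x in the triangular fuzzy number \( \left( {a_{1} ,a_{M} ,a_{2} } \right) \).

```python
def triangular_membership(x: float, a1: float, aM: float, a2: float) -> float:
    """Membership degree of x in the triangular fuzzy number (a1, aM, a2), relation (11)."""
    if a1 <= x <= aM:
        return (x - a1) / (aM - a1)
    if aM < x <= a2:
        return (x - a2) / (aM - a2)
    return 0.0

# The most probable value aM has full membership degree 1:
print(triangular_membership(14.0, 12.0, 14.0, 18.0))   # -> 1.0
```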

The application of the fuzzy technique in the Delphi Method, for quantitative answers (numeric), consists of the following steps (Bojadziev and Bojadziev 1995):

Algorithm 1 (Delphi FS-1)

    1. Step 1.

      The experts \( E_{i} ,\;i = 1,2, \ldots ,n, \) are interviewed about the realization of an event (e.g., estimating the duration of an activity when applying the critical path method in project management, or forecasting a financial measure such as the inflation rate).

The data are provided by each expert \( E_{i} \) in the form of a triangular fuzzy number:

$$ A^{\left( i \right)} = \left( {a_{1}^{\left( i \right)} ,a_{M}^{\left( i \right)} ,a_{2}^{\left( i \right)} } \right),\quad i = 1,2, \ldots ,n $$
  2. Step 2.

    The average value \( A_{m} \) is calculated for the n estimated fuzzy numbers \( A^{\left( i \right)} \):

$$ A_{m} = \left( {m_{1} ,m_{M} ,m_{2} } \right) = \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {a_{1}^{\left( i \right)} ,\;\,\frac{1}{n}\sum\limits_{i = 1}^{n} {a_{M}^{\left( i \right)} ,\;\,\frac{1}{n}\sum\limits_{i = 1}^{n} {a_{2}^{\left( i \right)} } } } } \right) $$
(12)

Each expert will receive back the differences:

\( \left( {m_{1} - a_{1}^{\left( i \right)} ,m_{M} - a_{M}^{\left( i \right)} ,m_{2} - a_{2}^{\left( i \right)} } \right) \) and the distance

$$ d\left( {A^{\left( i \right)} ,A_{m} } \right) = \frac{1}{2}\left\{ {\hbox{max} \left( {\left| {m_{1} - a_{1}^{\left( i \right)} } \right|,\;\left| {m_{2} - a_{2}^{\left( i \right)} } \right|} \right) + \left| {m_{M} - a_{M}^{\left( i \right)} } \right|} \right\} $$
(13)
  3. Step 3.

    After analyzing the data received, each expert will provide a new triangular fuzzy number:

$$ B^{\left( i \right)} = \left( {b_{1}^{\left( i \right)} ,b_{M}^{\left( i \right)} ,b_{2}^{\left( i \right)} } \right),\quad i = 1,2, \ldots ,n $$

The average value \( B_{m} \) is calculated analogously to (12), and the distance between the two fuzzy numbers \( A_{m} \) and \( B_{m} \) is calculated using relation (13).

Step 2 is repeated until the successive evaluations of the means \( A_{m} \), \( B_{m} \), … stabilize, that is, until the distance between two successive means is smaller than a required value \( \varepsilon \): \( d\left( {A_{m} ,B_{m} } \right) \le \varepsilon \).
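One round of Algorithm 1 can be sketched in Python as follows (an illustrative sketch; the function names and the sample estimates are assumptions). It computes the mean (12) of the experts' triangular fuzzy numbers and the distances (13) that are fed back to the experts.

```python
from typing import List, Tuple

Triangular = Tuple[float, float, float]   # (a1, aM, a2)

def mean_tfn(estimates: List[Triangular]) -> Triangular:
    """Component-wise average of triangular fuzzy numbers, relation (12)."""
    n = len(estimates)
    return (sum(e[0] for e in estimates) / n,
            sum(e[1] for e in estimates) / n,
            sum(e[2] for e in estimates) / n)

def distance_tfn(a: Triangular, m: Triangular) -> float:
    """Distance (13) between an expert's estimate a and the mean m."""
    return 0.5 * (max(abs(m[0] - a[0]), abs(m[2] - a[2])) + abs(m[1] - a[1]))

# One round with three hypothetical experts (durations in months):
estimates = [(10, 12, 15), (11, 14, 18), (9, 13, 16)]
A_m = mean_tfn(estimates)
feedback = [distance_tfn(a, A_m) for a in estimates]
print(A_m, feedback)   # the process stops once d(A_m, B_m) <= epsilon
```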

3.1 Numerical Example

A group of 12 experts is consulted to estimate the time needed for accomplishing an objective (in months).

The triangular fuzzy numbers provided \( A^{\left( i \right)} ,\;i = 1,2, \ldots ,12 \) are given in Table 1.

Table 1 Triangular fuzzy numbers

\( A_{m} \) is evaluated according to relation (12) and the distances \( d\left( {A^{\left( i \right)} ,A_{m} } \right),\;i = 1,2, \ldots ,12 \), according to relation (13). It can be noticed that the opinions of experts E4, E5, and E6 are the most remote from the mean value \( A_{m} \) obtained in the first iteration. Table 2 shows the triangular fuzzy numbers \( B^{\left( i \right)} = \left( {b_{1}^{\left( i \right)} ,\;b_{M}^{\left( i \right)} ,\;b_{2}^{\left( i \right)} } \right),\quad i = 1,2, \ldots ,n \), reviewed and proposed by the experts after analyzing the received data. It can be noticed that some experts (E3, E7, E8, E9, E12) have not changed their opinions, while others (E1, E2, E6) have made only minor changes.

Table 2 Triangular fuzzy numbers reviewed and proposed

Given that, according to (13), \( d\left( {A_{m} ,B_{m} } \right) = 0.915 \), the manager can stop the decision-making process and accept \( B_{m} \).

In the case of problems with qualitative (non-numeric) formulations, we suggest using fuzzy sets, as a fuzzy technique in the Delphi Method, to define the membership to a certain class (property). This involves the following steps:

Algorithm 2 (Delphi FS-2)

  1. Step 1.

    The experts \( E_{i} ,\;i = 1,2, \ldots ,n, \) are interviewed with regard to the degree of membership of an element to a particular property A. The data provided by the experts \( E_{i} \) form a fuzzy set: \( A = \left\{ {\left( {E_{i} ,\mu_{A} \,\left( {E_{i} } \right)} \right)\left| {i = 1,2, \ldots ,n} \right.} \right\} \)

  2. Step 2.

    The average value of the membership degrees in A is estimated as follows:

$$ \bar{A} = \left( {\frac{1}{n} \cdot \sum\limits_{i = 1}^{n} {\mu_{A} \left( {E_{i} } \right)} } \right) $$
(14)

Each expert will receive back the average value \( \bar{A} \) and the difference:

$$ d_{i} = \left| {\bar{A} - \mu_{A} \left( {E_{i} } \right)} \right| $$
(15)
  3. Step 3.

    After analyzing the data received, each expert will provide a new membership degree \( \mu_{B} \left( {E_{i} } \right) \). The data provided by the experts \( E_{i} \) form a new fuzzy set B:

$$ B = \left\{ {\left( {E_{i} ,\mu_{B} \left( {E_{i} } \right)} \right)|i = 1,2, \ldots ,n} \right\} $$

We calculate a similarity measure based on the set-theoretic approach (Xu 2007a).

We denote \( \pi_{A} \left( {E_{i} } \right) = 1 - \mu_{A} \left( {E_{i} } \right) \) and \( \pi_{B} \left( {E_{i} } \right) = 1 - \mu_{B} \left( {E_{i} } \right) \).

$$ S\left( {A,B} \right) = \frac{{\sum\limits_{i = 1}^{n} {\left( {\hbox{min} \left( {\mu_{A} \left( {E_{i} } \right),\mu_{B} \left( {E_{i} } \right)} \right) + \hbox{min} \left( {\pi_{A} \left( {E_{i} } \right),\pi_{B} \left( {E_{i} } \right)} \right)} \right)} }}{{\sum\limits_{i = 1}^{n} {\left( {\hbox{max} \left( {\mu_{A} \left( {E_{i} } \right),\mu_{B} \left( {E_{i} } \right)} \right) + \hbox{max} \left( {\pi_{A} \left( {E_{i} } \right),\pi_{B} \left( {E_{i} } \right)} \right)} \right)} }} $$
(16)

If S(A, B) is close to 1, for example \( 0.85 \le S\left( {A,B} \right) \le 1 \), the evaluations obtained can be considered similar and the consultation process by means of the Delphi Method is over. Otherwise, the process returns to Step 2 with a new consultation of the expert group.
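The convergence test of Algorithm 2 can be sketched as follows (illustrative Python; the 0.85 threshold is the one suggested above, and the function names are assumptions). The sketch computes the set-theoretic similarity (16) between the membership degrees of two consecutive rounds.

```python
from typing import Sequence

def similarity(mu_a: Sequence[float], mu_b: Sequence[float]) -> float:
    """Set-theoretic similarity (16) between two rounds of membership degrees."""
    pi_a = [1 - m for m in mu_a]    # pi_A(E_i) = 1 - mu_A(E_i)
    pi_b = [1 - m for m in mu_b]
    num = sum(min(a, b) + min(pa, pb) for a, b, pa, pb in zip(mu_a, mu_b, pi_a, pi_b))
    den = sum(max(a, b) + max(pa, pb) for a, b, pa, pb in zip(mu_a, mu_b, pi_a, pi_b))
    return num / den

def converged(mu_a: Sequence[float], mu_b: Sequence[float], threshold: float = 0.85) -> bool:
    """Stop the Delphi consultation once the two rounds are similar enough."""
    return similarity(mu_a, mu_b) >= threshold

# Hypothetical second round close to the first one:
print(converged([0.7, 0.8, 0.6], [0.7, 0.75, 0.65]))   # -> True
```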

3.2 Numerical Example

The same group of 12 experts is consulted to estimate the degree of membership of an element to a particular property A. For example, given the economic crisis, property A can be defined as compliance with the rules set by the European Union to reduce the budget deficit. The results of consulting the economic experts about a particular country of the European Union are shown in Table 3. \( \bar{A} \) and \( d_{i} \) are evaluated according to relations (14) and (15), and the experts are notified. After re-evaluation, they provide the new values of the membership function for the property studied (Table 4).

Table 3 Results of consulting the economic experts
Table 4 The new values of the membership function to the property studied

The similarity measure \( S\left( {A,B} \right) \) is calculated, according to the relation (16):

$$ S\left( {A,B} \right) = \frac{11.30}{12.70} = 0.8897 $$

and so, the consultation process using the Delphi Method can be considered finished.

4 Applying Fuzzy Techniques in Multi-Attribute Decision Models

We consider a classical multi-attribute decision problem. Let

\( A = \left\{ {A_{1} ,A_{2} , \ldots ,A_{m} } \right\} \)—be the alternatives set

\( C = \left\{ {C_{1} ,C_{2} , \ldots ,C_{n} } \right\} \)—be the characteristics (attributes) set

\( w = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \)—be the weights of the attributes (one for each characteristic),

where \( w_{j} \ge 0,\quad j = 1,2, \ldots ,n \) and \( \sum\limits_{j = 1}^{n} {w_{j} = 1} \).

The vector w of the weights reflects the relative importance given to each characteristic (attribute). Xu (2007b) presents several methods for setting the attributes weights in the case of multi-attribute decision models based on IFS with several decision-makers.

The classical problem of multi-attribute decision-making resides in ranking the alternatives set taking into account the characteristics (attributes) set, considering the weights associated to each characteristic.

4.1 Determining the Weights

The algorithm (Lixăndroiu 2011b) presented in what follows for determining an aggregated system of weights relies on the model proposed by Hung and Chen (2009), which determines the objective weights of the attributes using the concept of Shannon's entropy, and on the model proposed by Li and Yang (2003) for optimizing the values of the weights proposed by the decision-maker (the subjective weights).

In 1972 De Luca and Termini defined a non-probabilistic entropy formula of a fuzzy set based on Shannon’s entropy function:

$$ E_{LT} \left( A \right) = - k\sum\limits_{i = 1}^{n} {\left[ {\mu_{A} \left( {x_{i} } \right)\ln \mu_{A} \left( {x_{i} } \right) + \left( {1 - \mu_{A} \left( {x_{i} } \right)} \right)\ln \left( {1 - \mu_{A} \left( {x_{i} } \right)} \right)} \right]} $$
(17)

Vlachos and Sergiadis (2007) defined a measure of intuitionistic fuzzy entropy:

$$ E_{LT}^{IFS} \left( A \right) = - \frac{1}{n \cdot \ln 2}\sum\limits_{i = 1}^{n} {\left[ {\mu_{A} \left( {x_{i} } \right)\ln \mu_{A} \left( {x_{i} } \right) + \nu_{A} \left( {x_{i} } \right)\ln \nu_{A} \left( {x_{i} } \right) - \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right)\ln \left( {1 - \pi_{A} \left( {x_{i} } \right)} \right) - \pi_{A} \left( {x_{i} } \right)\ln 2} \right]} $$
(18)

Algorithm 3

  1. Step 1.

    Input:

    m-number of alternatives

    n-number of attributes (characteristics) for each alternative

    $$ A_{i} = \left\{ { < C_{j} ,\mu_{{A_{i} }} \left( {C_{j} } \right)\,,v_{{A_{i} }} \left( {C_{j} } \right) > ,C_{j} \in C} \right\},\quad i = 1,2, \ldots ,m $$

    -the attribute values for the m alternatives represented by IFSs; \( \mu_{{A_{i} }} \left( {C_{j} } \right) \) represents the degree of membership, while \( v_{{A_{i} }} \left( {C_{j} } \right) \) represents the degree of non-membership of the alternative \( A_{i} \) to the attribute (characteristic) \( C_{j} \)

    \( \left\{ {\;\left( {\rho_{j} ,\tau_{j} } \right)\;,j = 1,2, \ldots ,n} \right\} \)-the weights given by the decision-maker for the n attributes represented by IFSs

  2. Step 2.

    Let \( \pi_{ij} = 1 - \mu_{ij} - \nu_{ij} ,i = 1,2, \ldots ,m,j = 1,2, \ldots ,n \)

    $$ \begin{aligned} w^{\prime}_{j} & = \rho_{j} ,\quad j = 1,2, \ldots ,n \\ w^{\prime\prime}_{j} & = 1 - \tau_{j} ,\quad j = 1,2, \ldots ,n \\ \end{aligned} $$

    We denote by \( \left( {wo_{1} ,wo_{2} , \ldots ,wo_{n} } \right) \) the optimized weights of the attributes; they are obtained as the solution of the following linear programming problem:

    $$ \hbox{max} \left\{ {z = \sum\limits_{j = 1}^{n} {\sum\limits_{i = 1}^{m} {\pi_{ij} \cdot wo_{j} } } } \right\} $$
    (19)

    under the restrictions

    $$ \left\{ {\begin{array}{*{20}c} {w^{\prime}_{j} \le wo_{j} \le w^{\prime\prime}_{j} ,j = 1,2, \ldots ,n} \\ {\sum\limits_{j = 1}^{n} {wo_{j} = 1} } \\ \end{array} } \right. $$
  3. Step 3.

    We calculate according to (18):

    $$ E_{LT}^{IFS} \left( {C_{j} } \right) = - \frac{1}{m \cdot \ln 2}\sum\limits_{i = 1}^{m} {\left[ {\mu_{ij} \ln \mu_{ij} + \nu_{ij} \ln \nu_{ij} - \left( {1 - \pi_{ij} } \right)\ln \left( {1 - \pi_{ij} } \right) - \pi_{ij} \ln 2} \right]} $$
    (20)

    where \( j = 1,2, \ldots ,n \), and the constant \( \frac{1}{m \cdot \ln 2} \) ensures \( 0 \le E_{LT}^{IFS} \left( {C_{j} } \right) \le 1. \)

  4. Step 4.

    The degree of divergence \( \left( {d_{j} } \right) \) of the average intrinsic information provided by the corresponding performance ratings on criterion C j can be defined as (Hung and Chen 2009):

    $$ d_{j} = 1 - E_{LT}^{IFS} \left( {C_{j} } \right),\;j = 1,2, \ldots ,n $$
    (21)
  5. Step 5.

    The entropy weight of the attribute C j is

    $$ we_{j} = \frac{{d_{j} }}{{\sum\limits_{j = 1}^{n} {d_{j} } }},\;j = 1,2, \ldots ,n $$
    (22)
  6. Step 6.

    The aggregate weight value of the attribute C j is

    $$ W_{j} = \frac{{wo_{j} \cdot we_{j} }}{{\sum\limits_{j = 1}^{n} {wo_{j} \cdot we_{j} } }},\;j = 1,2, \ldots ,n $$
    (23)

    Output: values of the aggregated weights \( \left( {W_{1} ,W_{2} , \ldots ,W_{n} } \right) \). STOP
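A possible Python sketch of Algorithm 3 is given below (illustrative only; it assumes NumPy and SciPy are available and uses scipy.optimize.linprog for the linear program (19) in place of the Quantitative Management package mentioned in Sect. 4.2). On the data of Sect. 4.2, the linear program returns \( wo = (0.25, 0.35, 0.40) \), in agreement with Step 2 of that example.

```python
import numpy as np
from scipy.optimize import linprog

def aggregated_weights(mu, nu, rho, tau):
    """Sketch of Algorithm 3. mu, nu: (m x n) membership / non-membership degrees
    of the alternatives; (rho, tau): the decision-maker's IFS weights of the attributes."""
    mu, nu = np.asarray(mu, float), np.asarray(nu, float)
    m, n = mu.shape
    pi = 1.0 - mu - nu                                # degrees of indeterminacy

    # Step 2: optimized subjective weights wo_j, LP (19) with bounds [rho_j, 1 - tau_j].
    res = linprog(c=-pi.sum(axis=0),                  # linprog minimizes, hence the minus sign
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(r, 1.0 - t) for r, t in zip(rho, tau)])
    wo = res.x

    # Steps 3-5: entropy (20) per attribute, divergence (21) and entropy weights (22).
    def xlogx(x):
        return np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0)
    E = -(xlogx(mu) + xlogx(nu) - xlogx(1.0 - pi) - pi * np.log(2)).sum(axis=0) / (m * np.log(2))
    we = (1.0 - E) / (1.0 - E).sum()

    # Step 6: aggregated weights (23).
    W = wo * we
    return W / W.sum()

# Data of Sect. 4.2 (the attribute weights are given as IFS pairs (rho_j, tau_j)):
mu = [[0.75, 0.45, 0.60], [0.50, 0.65, 0.70], [0.80, 0.55, 0.50], [0.70, 0.80, 0.40]]
nu = [[0.10, 0.50, 0.30], [0.30, 0.10, 0.20], [0.10, 0.20, 0.10], [0.20, 0.05, 0.45]]
print(aggregated_weights(mu, nu, rho=[0.25, 0.30, 0.35], tau=[0.25, 0.50, 0.60]))
```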

4.2 Numerical Example

We consider a multi-attribute decision problem with m = 4 alternatives \( \left\{ {A_{1} ,A_{2} ,A_{3} ,A_{4} } \right\} \) and n = 3 attributes (characteristics) \( \left\{ {C_{1} ,C_{2} ,C_{3} } \right\}. \) The values of the attributes for each alternative are presented under the form of IFS.

  1. Step 1.

    Input: m = 4, n = 3

    $$ \begin{aligned} A_{1} & = \left\{ { < C_{1} ,0.75,0.10 > , < C_{2} ,0.45,0.50 > , < C_{3} ,0.60,0.30 > } \right\} \\ A_{2} & = \left\{ { < C_{1} ,0.50,0.30 > , < C_{2} ,0.65,0.10 > , < C_{3} ,0.70,0.20 > } \right\} \\ A_{3} & = \left\{ { < C_{1} ,0.80,0.10 > , < C_{2} ,0.55,0.20 > , < C_{3} ,0.50,0.10 > } \right\} \\ A_{4} & = \left\{ { < C_{1} ,0.70,0.20 > , < C_{2} ,0.80,0.05 > , < C_{3} ,0.40,0.45 > } \right\} \\ \end{aligned} $$

    The importance of the criteria is given by the weight vector presented under the form of IFS:

    $$ \left\{ { < 0.25, \, 0.25 > , \, < 0.30, \, 0.50 > , \, < 0.35, \, 0.60 > } \right\} $$
  2. Step 2.

    We solve the linear programming problem (19) with the software package Quantitative Management:

    $$ \hbox{max} \left\{ {0.55 \cdot wo_{1} + 0.70 \cdot wo_{2} + 0.75 \cdot wo_{3} } \right\} $$

    under the restrictions

    $$ \left\{ {\begin{array}{*{20}l} {0.25 \le wo_{1} \le 0.75} \\ {0.30 \le wo_{2} \le 0.50} \\ {0.35 \le wo_{3} \le 0.40} \\ {wo_{1} + wo_{2} + wo_{3} = 1} \\ \end{array} } \right. $$

    We obtain the optimized values of the weights:

    $$ wo_{1} = 0.25\quad wo_{2} = 0.35\quad wo_{3} = 0.40 $$
  3. Step 3.

    The entropy values for the attributes, according to (20), are as follows:

    $$ E_{LT}^{IFS} \left( {C_{1} } \right) = 0.7337,\quad E_{LT}^{IFS} \left( {C_{2} } \right) = 0.7437,\quad E_{LT}^{IFS} \left( {C_{3} } \right) = 0.8755 $$
  4. Step 4.

    The degree of divergence for the attribute, according to (21), is as follows:

    $$ d_{1} = 0.2663,\quad d_{2} = 0.2563,\quad d_{3} = 0.1245 $$
  5. Step 5.

    The entropy weight for the attribute, according to (22), is as follows:

    $$ we_{1} = 0.4115,\quad we_{2} = 0.3961,\quad we_{3} = 0.1924 $$
  6. Step 6.

    The aggregate weight values for the attributes, according to (23), are as follows:

    $$ W_{1} = 0.3230,\quad W_{2} = 0.4353,\quad W_{3} = 0.2417 $$

The algorithm for calculating the weights of the attributes combines objective weights, based on the entropy of intuitionistic fuzzy sets, with subjective weights given as intuitionistic fuzzy sets. These subjective weights are first adjusted (optimized) according to the intuitionistic fuzzy decision matrix. Then, the final hierarchy of the decision-making alternatives may be obtained by applying different models.

In what follows, two models of multi-attribute decision-making using fuzzy techniques are presented.

4.3 The Diameter Method

The Diameter Method is a direct method that ranks the alternatives by taking into account the homogeneity of their appreciations across the attributes. In order to avoid compensation effects, the classical diameter method defines two functions, φ (appreciation) and δ (diameter), whose aggregation yields the ranking of the alternatives. The smaller the diameter, the more homogeneous an alternative is; the greater the appreciation, the better the alternative is.

The algorithm of the classical method is as follows:

Algorithm 4 (Diameter)

  1. Step 1.

    Input: m-number of alternatives

    n-number of characteristics (attributes) for each alternative

    \( w = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \)-attributes weights

    \( V_{ij} ,\;i = 1,2, \ldots ,m,\quad j = 1,2, \ldots ,n \)-set of alternatives values for each characteristic (attribute).

  2. Step 2.

    We define the appreciation function φ:

    $$ \varphi :A \to R $$
    $$ \varphi \left( {A_{i} } \right) = \sum\limits_{j = 1}^{n} {\left( {m - pos\left( {A_{i} ,C_{j} } \right)} \right) \cdot w_{j} } ,\;i = 1,2, \ldots ,m $$
    (24)

    where \( pos:A \times C \to \left\{ {1,2, \ldots ,m} \right\} \) and \( pos\left( {A_{i} ,C_{j} } \right) = k \) represents the position held by the value \( V_{ij} \) in the ascending/descending order of the values of characteristic \( C_{j} \), according to the minimum/maximum criterion. We calculate the values of the appreciation function \( \varphi \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \).

  3. Step 3.

    We define the diameter function δ:

    $$ \delta :A \to N $$
    $$ \delta \left( {A_{i} } \right) = \mathop {\hbox{max} }\limits_{j} \left( {pos\left( {A_{i} ,C_{j} } \right)} \right) - \mathop {\hbox{min} }\limits_{j} \left( {pos\left( {A_{i} ,C_{j} } \right)} \right)\quad i = 1,2, \ldots ,m\quad j = 1,2, \ldots ,n $$
    (25)
  4. Step 4.

    We calculate the aggregate function:

    $$ \varphi\,\&\,\delta :A \to R $$
    $$ \varphi\,\&\,\delta \left( {A_{i} } \right) = \frac{{\left( {\varphi \left( {A_{i} } \right) + \left( {m - \delta \left( {A_{i} } \right)} \right)} \right)}}{2}\quad i = 1,2,\ldots,m $$
    (26)

    The alternatives ranking is given by the descending values of function \( \varphi\,\&\,\delta \). STOP.

Adapting the diameter method to the case where the values of the alternatives for each characteristic form a fuzzy set leads to the following algorithm for calculating the aggregate function \( \varphi\,\&\,\delta \left( {A_{i} } \right),i = 1,2, \ldots ,m \), which allows the ranking of the alternatives.

Algorithm 5 (Diameter FS)

  1. Step 1.

    Input: m-number of alternatives

    n-number of characteristics (attributes) for each alternative

    \( w = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \)-attributes weights

    $$ A_{i} = \left\{ {\left( {C_{j} ,\mu_{{A_{i} }} \left( {C_{j} } \right)} \right),C_{j} \in C} \right\},\;\;i = 1,2, \ldots ,m $$

    where \( \mu_{{A_{i} }} \left( {C_{j} } \right) \) indicates the degree to which the alternative \( A_{i} \) satisfies the attribute \( C_{j} \).

  2. Step 2.

    We determine the matrix \( P = \left( {pos\left( {A_{i} ,C_{j} } \right),\;i = 1,2, \ldots ,m,\;j = 1,2, \ldots ,n} \right) \) according to (2), and we calculate \( \varphi \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (24).

  3. Step 3.

    We calculate \( \delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (25).

  4. Step 4.

    We calculate \( \varphi\,\&\,\delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (26), and we determine the alternatives ranking. STOP.
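Algorithm 5 can be sketched in Python as follows (illustrative; the names are assumptions). Ties are handled as in the numerical example of Sect. 4.4: equal membership degrees share the same position (dense ranking). Run on the data of that example, the sketch reproduces the values φ&δ = (1.65, 2.7, 1.45, 1.95, 1.95).

```python
from typing import List, Sequence

def position_matrix(values: Sequence[Sequence[float]]) -> List[List[int]]:
    """Dense descending ranks per attribute: equal degrees share the same position."""
    m, n = len(values), len(values[0])
    pos = [[0] * n for _ in range(m)]
    for j in range(n):
        column = sorted({values[i][j] for i in range(m)}, reverse=True)
        for i in range(m):
            pos[i][j] = 1 + column.index(values[i][j])
    return pos

def diameter_ranking(values: Sequence[Sequence[float]], w: Sequence[float]) -> List[float]:
    """Aggregate phi & delta (26) of each alternative from its membership degrees."""
    m = len(values)
    scores = []
    for row in position_matrix(values):
        phi = sum((m - p) * wj for p, wj in zip(row, w))   # relation (24)
        delta = max(row) - min(row)                        # relation (25)
        scores.append((phi + (m - delta)) / 2)             # relation (26)
    return scores

# FS data of the numerical example in Sect. 4.4:
V = [[0.8, 0.7, 0.7, 0.4],
     [0.6, 0.6, 0.8, 0.7],
     [0.3, 0.8, 0.5, 0.6],
     [0.7, 0.2, 0.5, 0.8],
     [0.5, 0.3, 0.4, 0.9]]
print([round(s, 2) for s in diameter_ranking(V, [0.2, 0.3, 0.2, 0.3])])
# -> [1.65, 2.7, 1.45, 1.95, 1.95]
```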

The diameter method, modified for characteristic values given in the form of IFSs, leads to the following algorithm:

Algorithm 6 (Diameter IFS)

  1. Step 1.

    Input: m-number of alternatives

n-number of characteristics (attributes) for each alternative

\( w = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \)-attributes weights

$$ A_{i} = \left\{ {\left( {C_{j} ,\mu_{{A_{i} }} \left( {C_{j} } \right)\,,v_{{A_{i} }} \left( {C_{j} } \right)} \right),C_{j} \in C} \right\},\;\;i = 1,2, \ldots ,m $$

where \( \mu_{{A_{i} }} \left( {C_{j} } \right) \) indicates the degree to which the alternative \( A_{i} \) satisfies the attribute \( C_{j} \), and \( \nu_{{A_{i} }} \left( {C_{j} } \right) \) indicates the degree to which the alternative \( A_{i} \) does not satisfy the attribute \( C_{j} \).

  2. Step 2.

    We calculate the score of IFN \( \alpha_{ij} = \left( {\mu_{{A_{i} }} \left( {C_{j} } \right),\nu_{{A_{i} }} \left( {C_{j} } \right)} \right) \) for \( i = 1,2, \ldots ,m \) and \( j = 1,2, \ldots ,n \) according to (7).

We calculate the degree of accuracy of the IFN \( \alpha_{ij} = \left( {\mu_{{A_{i} }} \left( {C_{j} } \right),\nu_{{A_{i} }} \left( {C_{j} } \right)} \right) \) for \( i = 1,2, \ldots ,m \) and \( j = 1,2, \ldots ,n \) according to (8).

  3. Step 3.

    We determine the matrix \( P = \left( {pos\left( {A_{i} ,C_{j} } \right),\;i = 1,2, \ldots ,m,\;j = 1,2, \ldots ,n} \right) \) according to (9) and (10). We calculate \( \varphi \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (24).

  4. Step 4.

    We calculate \( \delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (25).

  5. Step 5.

    We calculate \( \varphi\,\&\,\delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \), according to (26), and determine the alternatives ranking. STOP.
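For Algorithm 6, the only change with respect to the sketch above is the key used to build the position matrix: per attribute, the alternatives are ordered by the score (7) of their IFN and, on equal scores, by the accuracy (8). A minimal sketch of this step (illustrative; names are assumptions):

```python
from typing import List, Sequence

def ifn_position_matrix(mu: Sequence[Sequence[float]],
                        nu: Sequence[Sequence[float]]) -> List[List[int]]:
    """Dense descending ranks per attribute, ordering IFNs by (score, accuracy)."""
    m, n = len(mu), len(mu[0])
    pos = [[0] * n for _ in range(m)]
    for j in range(n):
        # Round to avoid floating-point noise when detecting ties.
        keys = [(round(mu[i][j] - nu[i][j], 9),      # score, relation (7)
                 round(mu[i][j] + nu[i][j], 9))      # accuracy, relation (8)
                for i in range(m)]
        order = sorted(set(keys), reverse=True)
        for i in range(m):
            pos[i][j] = 1 + order.index(keys[i])
    return pos
```

Relations (24)–(26) are then applied to this position matrix exactly as in the previous sketch; on the IFS data of Sect. 4.4 it reproduces the matrix P obtained at Step 3 of that example.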

4.4 Numerical Example

For a specific position, a company receives 5 applicants, who represent the alternatives set \( A = \left\{ {A_{1} ,A_{2} ,A_{3} ,A_{4} ,A_{5} } \right\} \). The selection committee wants to choose the candidate who best satisfies the characteristics: (C1) experience, (C2) computer skills, and (C3) age (as young as possible). The committee has one restriction: (C4) the salary offered and accepted should be as small as possible. After analyzing the CVs and the letters of recommendation, interviews were held, which eventually allowed the candidates to be evaluated with respect to the four characteristics. This example is presented and solved in Bojadziev and Bojadziev (1995) using the maximin method modified for attribute values given in the form of fuzzy sets (FS).

For the fuzzy set diameter model presented above, the values of the characteristics for each alternative are given in the form of FSs. Applying Algorithm 5 leads to:

  1. Step 1.

    Input: m = 5, n = 4

    $$ \begin{aligned} A_{1} & = \left\{ {\left( {C_{1} ,0.8} \right),\left( {C_{2} ,0.7} \right),\left( {C_{3} ,0.7} \right),\left( {C_{4} ,0.4} \right)} \right\} \\ A_{2} & = \left\{ {\left( {C_{1} ,0.6} \right),\left( {C_{2} ,0.6} \right),\left( {C_{3} ,0.8} \right),\left( {C_{4} ,0.7} \right)} \right\} \\ A_{3} & = \left\{ {\left( {C_{1} ,0.3} \right),\left( {C_{2} ,0.8} \right),\left( {C_{3} ,0.5} \right),\left( {C_{4} ,0.6} \right)} \right\} \\ A_{4} & = \left\{ {\left( {C_{1} ,0.7} \right),\left( {C_{2} ,0.2} \right),\left( {C_{3} ,0.5} \right),\left( {C_{4} ,0.8} \right)} \right\} \\ A_{5} & = \left\{ {\left( {C_{1} ,0.5} \right),\left( {C_{2} ,0.3} \right),\left( {C_{3} ,0.4} \right),\left( {C_{4} ,0.9} \right)} \right\} \\ \end{aligned} $$

The importance of the criteria is given by the weight vector: w = {0.2, 0.3, 0.2, 0.3}.

  2. Step 2.

    We calculate matrix P:

 

      C1   C2   C3   C4
A1     1    2    2    5
A2     3    3    1    3
A3     5    1    3    4
A4     2    5    3    2
A5     4    4    4    1

We calculate \( \varphi \left( {A_{i} } \right),\quad i = 1,2, \ldots ,m \)

$$ \varphi \left( {A_{1} } \right) = \left( {5 - 1} \right) \cdot 0.2 + \left( {5 - 2} \right) \cdot 0.3 + \left( {5 - 2} \right) \cdot 0.2 + \left( {5 - 5} \right) \cdot 0.3 = 2.3 $$

Analogously, \( \varphi \left( {A_{2} } \right) = 2.4 \), \( \varphi \left( {A_{3} } \right) = 1.9 \), \( \varphi \left( {A_{4} } \right) = 1.9 \), \( \varphi \left( {A_{5} } \right) = 1.9 \)

  3. Step 3.

    We calculate \( \delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \)

This yields \( \delta \left( {A_{1} } \right) = 5 - 1 = 4 \), \( \delta \left( {A_{2} } \right) = 2 \), \( \delta \left( {A_{3} } \right) = 4 \), \( \delta \left( {A_{4} } \right) = 3 \), \( \delta \left( {A_{5} } \right) = 3 \)

  4. Step 4.

    We calculate \( \varphi\,\&\,\delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \)

This yields \( \varphi\,\&\,\delta \left( {A_{1} } \right) = \left[ {2.3 + \left( {5 - 4} \right)} \right]/2 = 1.65 \)

\( \varphi\,\&\,\delta \left( {A_{2} } \right) = 2.70 \), \( \varphi\,\&\,\delta \left( {A_{3} } \right) = 1.45 \), \( \varphi\,\&\,\delta \left( {A_{4} } \right) = 1.95 \), \( \varphi\,\&\,\delta \left( {A_{5} } \right) = 1.95 \)

The order of the alternatives is \( A_{2} \succ A_{4} = A_{5} \succ A_{1} \succ A_{3} \) and, consequently, candidate number 2 will be selected. STOP.

For the intuitionistic fuzzy set diameter model (Algorithm 6) presented above, the values of the characteristics for each alternative are given in the form of IFSs. Applying Algorithm 6 leads to:

  1. Step 1.

    Input: m = 5, n = 4

$$ \begin{aligned} A_{1} & = \left\{ {\left( {C_{1} ,0.8,0.1} \right),\left( {C_{2} ,0.7,0.1} \right),\left( {C_{3} ,0.7,0} \right),\left( {C_{4} ,0.4,0.3} \right)} \right\} \\ A_{2} & = \left\{ {\left( {C_{1} ,0.6,0.3} \right),\left( {C_{2} ,0.6,0.1} \right),\left( {C_{3} ,0.8,0.1} \right),\left( {C_{4} ,0.7,0.2} \right)} \right\} \\ A_{3} & = \left\{ {\left( {C_{1} ,0.3,0.5} \right),\left( {C_{2} ,0.8,0.1} \right),\left( {C_{3} ,0.5,0.3} \right),\left( {C_{4} ,0.6,0.3} \right)} \right\} \\ A_{4} & = \left\{ {\left( {C_{1} ,0.7,0.1} \right),\left( {C_{2} ,0.2,0.5} \right),\left( {C_{3} ,0.5,0.3} \right),\left( {C_{4} ,0.8,0.1} \right)} \right\} \\ A_{5} & = \left\{ {\left( {C_{1} ,0.5,0.2} \right),\left( {C_{2} ,0.3,0.6} \right),\left( {C_{3} ,0.4,0.2} \right),\left( {C_{4} ,0.9,0} \right)} \right\} \\ \end{aligned} $$

The importance of the criteria is given by the weight vector: w = {0.2, 0.3, 0.2, 0.3}.

  2. Step 2.

    We calculate the score of the intuitionistic fuzzy number:

 

      C1    C2    C3    C4
A1    0.7   0.6   0.7   0.1
A2    0.3   0.5   0.7   0.5
A3   −0.2   0.7   0.2   0.3
A4    0.6  −0.3   0.2   0.7
A5    0.3  −0.3   0.2   0.9

and the degree of accuracy of the intuitionistic fuzzy number:

 

      C1    C2    C3    C4
A1    0.9   0.8   0.7   0.7
A2    0.9   0.7   0.9   0.9
A3    0.8   0.9   0.8   0.9
A4    0.8   0.7   0.8   0.9
A5    0.7   0.9   0.6   0.9

  3. Step 3.

    We calculate the matrix P:

 

      C1   C2   C3   C4
A1     1    2    2    5
A2     3    3    1    3
A3     5    1    3    4
A4     2    5    3    2
A5     4    4    4    1

The matrix P thus calculated is identical to the one determined in the previous numerical example. This leads to the same values of the aggregate function \( \varphi\,\&\,\delta \left( {A_{i} } \right),\;i = 1,2, \ldots ,m \). The order of the alternatives is \( A_{2} \succ A_{4} = A_{5} \succ A_{1} \succ A_{3} \) and, consequently, candidate number 2 will be selected. STOP.

4.5 The TOPSIS Method

The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) (Hwang and Yoon 1981) is based on the idea that the optimal alternative should have the minimum distance to the ideal solution.

The set of attribute values, which forms the matrix of attribute values, is represented for each alternative \( A_{i} \) by the following IFS (Xu 2007a):

$$ A_{i} = \left\{ {\;\left( {C_{j} ,\mu_{{A_{i} }} \left( {C_{j} } \right)\,,v_{{A_{i} }} \left( {C_{j} } \right)} \right),\;C_{j} \in C} \right\},\;i = 1,2, \ldots ,m $$
(27)

where \( \mu_{{A_{i} }} \left( {C_{j} } \right) \) indicates the degree to which the alternative \( A_{i} \) satisfies the attribute \( C_{j} \),

\( \nu_{{A_{i} }} \left( {C_{j} } \right) \) indicates the degree to which the alternative \( A_{i} \) does not satisfy the attribute \( C_{j} \)

and \( \mu_{{A_{i} }} \left( {C_{j} } \right) \in \left[ {0,1} \right] \), \( v_{{A_{i} }} \left( {C_{j} } \right) \in \left[ {0,1} \right] \), \( \mu_{{A_{i} }} \left( {C_{j} } \right) + v_{{A_{i} }} \left( {C_{j} } \right) \le 1. \)

We note \( \pi_{{A_{i} }} \left( {C_{j} } \right) = 1 - \mu_{{A_{i} }} \left( {C_{j} } \right) - v_{{A_{i} }} \left( {C_{j} } \right) \), for all \( C_{j} \in C \).

The TOPSIS method, modified for characteristic values given in the form of IFSs, leads to the following algorithm:

Algorithm 7 (TOPSIS IFS)

  1. Step 1.

    Input: m-number of alternatives

    n-number of characteristics for each alternative

    \( w = \left\{ {w_{1} ,w_{2} , \ldots ,w_{n} } \right\} \)-weights of the attributes

    \( A_{i} = \left\{ {\;\left( {C_{j} ,\mu_{{A_{i} }} \left( {C_{j} } \right)\,,v_{{A_{i} }} \left( {C_{j} } \right)} \right),\;C_{j} \in C} \right\},\;i = 1,2, \ldots ,m \)-attribute values for the m alternatives represented by IFSs

  2. Step 2.

    The positive ideal solution IFS \( \left( {A^{ + } } \right) \) is calculated, defined as follows:

$$ A^{ + } = \left\{ {\left( {C_{j} ,\mu_{{A^{ + } }} \left( {C_{j} } \right)\,,v_{{A^{ + } }} \left( {C_{j} } \right)} \right)\;,C_{j} \in C} \right\} $$
(28)

where \( \mu_{{A^{ + } }} \left( {C_{j} } \right) = \mathop {\hbox{max} }\limits_{i} \left\{ {\mu_{{A_{i} }} \left( {C_{j} } \right)} \right\} \) and \( v_{{A^{ + } }} \left( {C_{j} } \right) = \mathop {\hbox{min} }\limits_{i} \left\{ {v_{{A_{i} }} \left( {C_{j} } \right)} \right\} \)

and the negative ideal solution IFS \( \left( {A^{ - } } \right) \) is defined as follows:

$$ A^{ - } = \left\{ {\left( {C_{j} ,\mu_{{A^{ - } }} \left( {C_{j} } \right),v_{{A^{ - } }} \left( {C_{j} } \right)} \right),C_{j} \in C} \right\} $$
(29)

where \( \mu_{{A^{ - } }} \left( {C_{j} } \right) = \mathop {\hbox{min} }\limits_{i} \left\{ {\mu_{{A_{i} }} \left( {C_{j} } \right)} \right\} \) and \( v_{{A^{ - } }} \left( {C_{j} } \right) = \mathop {\hbox{max} }\limits_{i} \left\{ {v_{{A_{i} }} \left( {C_{j} } \right)} \right\} \)

  3. Step 3.

    The degree of indeterminacy, corresponding to the two solutions calculated at Step 2, is calculated.

$$ \begin{aligned} \pi_{{A^{ + } }} \left( {C_{j} } \right) & = 1 - \mu_{{A^{ + } }} \left( {C_{j} } \right) - v_{{A^{ + } }} \left( {C_{j} } \right) \\ \pi_{{A^{ - } }} \left( {C_{j} } \right) & = 1 - \mu_{{A^{ - } }} \left( {C_{j} } \right) - v_{{A^{ - } }} \left( {C_{j} } \right) \\ \end{aligned} $$
  4. Step 4.

    The Euclidean distances are calculated between each alternative and the positive and negative ideal solutions (Xu 2007a):

$$ d\left( {A^{ + } ,A_{i} } \right) = \sqrt {\frac{1}{2} \cdot \sum\limits_{j = 1}^{n} {w_{j} } \cdot \left( {\left( {\mu_{{A^{ + } }} \left( {C_{j} } \right) - \mu_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} + \left( {v_{{A^{ + } }} \left( {C_{j} } \right) - v_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} + \left( {\pi_{{A^{ + } }} \left( {C_{j} } \right) - \pi_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} } \right)} $$
(30)
$$ d\left( {A^{ - } ,A_{i} } \right) = \sqrt {\frac{1}{2} \cdot \sum\limits_{j = 1}^{n} {w_{j} } \cdot \left( {\left( {\mu_{{A^{ - } }} \left( {C_{j} } \right) - \mu_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} + \left( {v_{{A^{ - } }} \left( {C_{j} } \right) - v_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} + \left( {\pi_{{A^{ - } }} \left( {C_{j} } \right) - \pi_{{A_{i} }} \left( {C_{j} } \right)} \right)^{2} } \right)} $$
(31)
  5. Step 5.

    The relative closeness of each alternative to the positive ideal solution is calculated:

$$ d_{i} = 1- \frac{{d\left( {A^{ + } ,A_{i} } \right)}}{{d\left( {A^{ + } ,A_{i} } \right) + d\left( {A^{ - } ,A_{i} } \right)}}\quad {\text{with}}\quad {\text{d}}_{i} \in \left[ {0,1} \right],\;i = 1,2, \ldots ,m $$
(32)

Obviously, the greatest value of \( d_{i} \) corresponds to the best alternative \( A_{i} \).

  6. Step 6.

    The alternatives are ranked in decreasing order of the values \( d_{i} \) calculated at Step 5. STOP.
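Algorithm 7 can be sketched in Python as follows (illustrative only; the function names are assumptions). It computes the ideal solutions (28)–(29), the weighted Euclidean distances (30)–(31), and the relative closeness (32) used for the final ranking.

```python
import math
from typing import List, Sequence, Tuple

IFSValue = Tuple[float, float]   # (membership, non-membership)

def topsis_ifs(alternatives: Sequence[Sequence[IFSValue]], w: Sequence[float]) -> List[float]:
    """Relative closeness (32) of each alternative to the positive ideal IFS solution."""
    n = len(w)
    # Step 2: positive and negative ideal solutions, relations (28) and (29).
    a_pos = [(max(a[j][0] for a in alternatives), min(a[j][1] for a in alternatives)) for j in range(n)]
    a_neg = [(min(a[j][0] for a in alternatives), max(a[j][1] for a in alternatives)) for j in range(n)]

    def pi(x: IFSValue) -> float:              # degree of indeterminacy
        return 1.0 - x[0] - x[1]

    def dist(a: Sequence[IFSValue], b: Sequence[IFSValue]) -> float:
        """Weighted Euclidean distance, relations (30) and (31)."""
        return math.sqrt(0.5 * sum(
            wj * ((x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2 + (pi(x) - pi(y)) ** 2)
            for wj, x, y in zip(w, a, b)))

    closeness = []
    for a in alternatives:
        d_pos, d_neg = dist(a_pos, a), dist(a_neg, a)
        closeness.append(1.0 - d_pos / (d_pos + d_neg))   # relation (32)
    return closeness

# IFS data of the numerical example in Sect. 4.6:
A = [[(0.8, 0.1), (0.7, 0.1), (0.7, 0.0), (0.4, 0.3)],
     [(0.6, 0.3), (0.6, 0.1), (0.8, 0.1), (0.7, 0.2)],
     [(0.3, 0.5), (0.8, 0.1), (0.5, 0.3), (0.6, 0.3)],
     [(0.7, 0.1), (0.2, 0.5), (0.5, 0.3), (0.8, 0.1)],
     [(0.5, 0.2), (0.3, 0.6), (0.4, 0.2), (0.9, 0.0)]]
print([round(d, 4) for d in topsis_ifs(A, [0.2, 0.3, 0.2, 0.3])])
```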

4.6 Numerical Example

We resume the same numerical example presented for the Diameter Method (Sect. 4.4). The values of the characteristics for each alternative are given in the form of IFSs:

  1. Step 1.

    Input: m = 5, n = 4

$$ \begin{aligned} A_{1} & = \, \left\{ {\left( {C_{1} , \, 0.8, \, 0.1} \right), \, \left( {C_{2} , \, 0.7, \, 0.1} \right), \, \left( {C_{3} , \, 0.7, \, 0} \right), \, \left( {C_{4} , \, 0.4, \, 0.3} \right)} \right\} \\ A_{2} & = \, \left\{ {\left( {C_{1} , \, 0.6, \, 0.3} \right), \, \left( {C_{2} , \, 0.6, \, 0.1} \right), \, \left( {C_{3} , \, 0.8, \, 0.1} \right), \, \left( {C_{4} , \, 0.7, \, 0.2} \right)} \right\} \\ A_{3} & = \, \left\{ {\left( {C_{1} , \, 0.3, \, 0.5} \right), \, \left( {C_{2} , \, 0.8, \, 0.1} \right), \, \left( {C_{3} , \, 0.5, \, 0.3} \right), \, \left( {C_{4} , \, 0.6, \, 0.3} \right)} \right\} \\ A_{4} & = \, \left\{ {\left( {C_{1} , \, 0.7, \, 0.1} \right), \, \left( {C_{2} , \, 0.2, \, 0.5} \right), \, \left( {C_{3} , \, 0.5, \, 0.3} \right), \, \left( {C_{4} , \, 0.8, \, 0.1} \right)} \right\} \\ A_{5} & = \, \left\{ {\left( {C_{1} , \, 0.5, \, 0.2} \right), \, \left( {C_{2} , \, 0.3, \, 0.6} \right), \, \left( {C_{3} , \, 0.4, \, 0.2} \right), \, \left( {C_{4} , \, 0.9, \, 0} \right)} \right\} \\ \end{aligned} $$

The importance of the criteria is given by the weight vector: w = {0.2, 0.3, 0.2, 0.3}.

  2. Step 2.

    The positive ideal solution IFS \( A^{ + } \) is calculated, according to (28):

$$ A^{ + } = \, \left\{ {\left( {C_{1} , \, 0.8, \, 0.1} \right), \, \left( {C_{2} , \, 0.8, \, 0.1} \right), \, \left( {C_{3} , \, 0.8, \, 0} \right), \, \left( {C_{4} , \, 0.9, \, 0} \right)} \right\} $$

The negative ideal solution IFS \( A^{ - } \) is calculated, according to (29):

$$ A^{ - } = \, \left\{ {\left( {C_{1} , \, 0.3, \, 0.5} \right), \, \left( {C_{2} , \, 0.2, \, 0.6} \right), \, \left( {C_{3} , \, 0.4, \, 0.3} \right), \, \left( {C_{4} , \, 0.4, \, 0.3} \right)} \right\} $$
  3. Step 3.

    The degree of indeterminacy is calculated as follows:

$$ \pi_{{A^{ + } }} = \left( {0.1, \, 0.1, \, 0.2, \, 0.1} \right)\;{\text{and}}\;\pi_{{A^{ - } }} = \left( {0.2,\;0.2,\;0.3,\;0.3} \right) $$
  4. Step 4.

    The Euclidean distances are calculated between each alternative and the positive and negative ideal solutions, according to (30) and (31):

$$ \begin{aligned} d\left( {A^{ + } ,A_{1} } \right) & = 0.2489\quad d\left( {A^{ + } ,A_{2} } \right) = 0.1843\quad d\left( {A^{ + } ,A_{3} } \right) = 0.2949\quad d\left( {A^{ + } ,A_{4} } \right) = 0.3271 \\ & d\left( {A^{ + } ,A_{5} } \right) = 0.2983 \\ d\left( {A^{ - } ,A_{1} } \right) & = 0.3674\quad d\left( {A^{ - } ,A_{2} } \right) = 0.3492\quad d\left( {A^{ - } ,A_{3} } \right) = 0.3271\quad d\left( {A^{ - } ,A_{4} } \right) = 0.2701 \\ & d\left( {A^{ - } ,A_{5} } \right) = 0.2966 \\ \end{aligned} $$
  5. Step 5.

    The relative closeness of each alternative to the positive ideal solution is calculated, according to (32): \( d_{1} = 0.5960 \), \( d_{2} = 0.6544 \), \( d_{3} = 0.5258 \), \( d_{4} = 0.4523 \), \( d_{5} = 0.4985 \)

  6. Step 6.

    The order of the alternatives is \( A_{2} \succ A_{1} \succ A_{3} \succ A_{5} \succ A_{4} \) and, consequently, candidate number 2 will be selected. STOP.

Remark

In Bojadziev and Bojadziev (1995), the same numerical example, solved by means of classical fuzzy sets,

$$ \begin{aligned} A_{1} & = \left\{ {\left( {C_{1} ,0.8} \right),\left( {C_{2} ,0.7} \right),\left( {C_{3} ,0.7} \right),\left( {C_{4} ,0.4} \right)} \right\} \\ A_{2} & = \left\{ {\left( {C_{1} ,0.6} \right),\left( {C_{2} ,0.6} \right),\left( {C_{3} ,0.8} \right),\left( {C_{4} ,0.7} \right)} \right\} \\ A_{3} & = \left\{ {\left( {C_{1} ,0.3} \right),\left( {C_{2} ,0.8} \right),\left( {C_{3} ,0.5} \right),\left( {C_{4} ,0.6} \right)} \right\} \\ A_{4} & = \left\{ {\left( {C_{1} ,0.7} \right),\left( {C_{2} ,0.2} \right),\left( {C_{3} ,0.5} \right),\left( {C_{4} ,0.8} \right)} \right\} \\ A_{5} & = \left\{ {\left( {C_{1} ,0.5} \right),\left( {C_{2} ,0.3} \right),\left( {C_{3} ,0.4} \right),\left( {C_{4} ,0.9} \right)} \right\} \\ \end{aligned} $$

leads to the hierarchy of the alternatives \( A_{2} \succ A_{1} \succ A_{3} = A_{5} \succ A_{4} \) and, consequently, the same candidate number 2 will be selected.

5 Conclusion

Applying several multi-attribute fuzzy decision-making models to the same numerical example has led to the alternative rankings shown in Table 5.

Table 5 Decision-making fuzzy models

Table 6 presents an analysis of the ranks obtained by each alternative.

Table 6 Analysis of the ranks

Ranking the alternatives based on the positions obtained in the different classification models, as presented in Table 6, leads to the following final ranking: \( A_{2} \succ A_{1} \succ A_{5} \succ A_{4} \succ A_{3} \).

Classical fuzzy sets and intuitionistic fuzzy sets are powerful tools for modeling complex phenomena that exhibit shades of difference and imprecise information. However, these models should be regarded as tools assisting the decision-making process. The final decision belongs to the managers and will be influenced by their own intuition and experience, which generally play an important part in this process.

The application of fuzzy techniques in the Delphi Method emphasizes the possibility of structuring a decision-making process that is generally vague by using fuzzy tools.