1 Introduction

The projection measure is a suitable tool for dealing with decision-making problems because it considers not only the distance but also the included angle between the objects evaluated [1, 2]. Hence, projection methods have been applied successfully to decision making. For example, [1] and [2] proposed two projection methods for uncertain multiple attribute decision making with preference information. Then, Xu and Hu [3] and Xu and Cai [4] introduced projection model-based approaches for intuitionistic fuzzy multiple attribute decision making. Xu and Liu [5] put forward a group decision-making approach based on interval multiplicative and fuzzy preference relations by using projection. Zeng et al. [6] presented a projection method for multiple attribute group decision making with intuitionistic fuzzy information. Also, Yue [7, 8] developed group decision-making methods based on projection measures with intuitionistic fuzzy information, and Yue and Jia [9] further introduced a group decision-making method based on a projection measure with hybrid intuitionistic fuzzy information. However, as indicated in [5], projection methods exhibit drawbacks in some cases.

As a generalization of the concepts of the classic set, fuzzy set, interval-valued fuzzy set, intuitionistic fuzzy set, and interval-valued intuitionistic fuzzy set, Smarandache [10] first proposed the concept of a neutrosophic set from a philosophical point of view. However, it is difficult to apply to real science and engineering fields because its truth-membership, indeterminacy-membership, and falsity-membership functions are defined in real standard or nonstandard subsets of ]−0, 1+[. Hence, Ye [11] presented the simplified neutrosophic set (SNS), whose truth-membership, indeterminacy-membership, and falsity-membership functions are defined in the real standard interval [0, 1], so that it can be used easily in real science and engineering applications with incomplete, indeterminate, and inconsistent information. The SNS is a subclass of the neutrosophic set and includes the concepts of the single-valued neutrosophic set (SVNS) [12] and the interval neutrosophic set (INS) [13]. Meanwhile, Ye [11] proposed two weighted aggregation operators of SNSs for multicriteria decision-making problems with simplified neutrosophic information. Then, Ye [14] developed three vector similarity measures between SNSs as generalizations of the Jaccard, Dice, and cosine similarity measures between two vectors, and applied them to multicriteria decision-making problems in a simplified neutrosophic setting. Furthermore, Peng et al. [15, 16] introduced some aggregation operators for simplified neutrosophic multicriteria group decision making and an outranking approach for simplified neutrosophic multicriteria decision making, respectively. Zhang et al. [17] also introduced an outranking approach for multicriteria decision-making problems with interval neutrosophic information. 
For the subclasses of SNSs, such as SVNSs and INSs, many researchers have proposed various similarity measures, cross-entropy measures, correlation coefficients, and aggregation operators, and have applied them successfully to decision-making problems [18–24]. Furthermore, some researchers have extended the concepts of SNSs. Tian et al. [25] proposed a simplified neutrosophic linguistic normalized weighted Bonferroni mean operator for multicriteria decision-making problems. Ye [26] put forward a multiple attribute group decision-making method based on interval neutrosophic uncertain linguistic variables. Peng et al. [27] presented multi-valued neutrosophic sets and power aggregation operators for multicriteria group decision-making problems.

However, the cosine similarity measure of SNSs introduced in vector space, which is a basic mathematical tool for projection methods, also yields unreasonable results in some cases (see Example 1 in the “Preliminaries” section). Moreover, although the projection measure is a useful method in decision making, among the aforementioned decision-making methods there is no research on a projection-based decision-making method under neutrosophic environments. Motivated by the existing projection methods, this paper extends them to SNSs and proposes a new harmonic averaging projection method of SNSs for decision-making problems with simplified neutrosophic information. The rest of the paper is organized as follows. The “Preliminaries” section briefly describes projection measures of real vectors and interval numbers, some concepts and operations of SNSs, and a cosine similarity measure of simplified neutrosophic values in vector space. The “Simplified neutrosophic harmonic averaging projection measure” section proposes a simplified neutrosophic harmonic averaging projection measure. In the “Simplified neutrosophic harmonic averaging projection-based decision-making method” section, we introduce a multiple attribute decision-making method based on the proposed projection measure. In the “Illustrative example” section, an illustrative example is presented to demonstrate the application and effectiveness of the proposed method. Finally, the “Conclusion” section contains conclusions and future work.

2 Preliminaries

2.1 Projection measures of real vectors and interval numbers

This subsection briefly reviews the basic concepts of the projection measures of real vectors and interval numbers.

Definition 1

(Xu and Da [1], Xu [2]). Let A = (a 1, a 2, …, a n ) and B = (b 1, b 2, …, b n ) be two real vectors, then the cosine of the included angle between A and B is defined as

$$ \cos (A,B) = \frac{A \cdot B}{\left\| A \right\|\left\| B \right\|} , $$
(1)

where the modules of the real vectors A and B are \( \left\| A \right\| = \sqrt {\sum\nolimits_{j = 1}^{n} {a_{j}^{2} } } \) and \( \left\| B \right\| = \sqrt {\sum\nolimits_{j = 1}^{n} {b_{j}^{2} } } \) and the inner product between A and B is \( A \cdot B = \sum\nolimits_{j = 1}^{n} {a_{j} b_{j} } \).

Definition 2

(Xu and Da [1], Xu [2]). Let A = (a 1, a 2, …, a n ) and B = (b 1, b 2, …, b n ) be two real vectors, then the projection of the vector A on the vector B is defined as:

$$ \Pr {\text{o}}j_{B} (A) = \left\| A \right\|\cos (A,B) = \frac{A \cdot B}{\left\| B \right\|} . $$
(2)

The projection Proj B (A) is a measure, which can consider not only the distance but also the included angle between A and B.

In general, the larger the value of Proj B (A) is, the closer A is to B (Xu and Da [1], Xu [2]).
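As a minimal numerical sketch of Definitions 1 and 2 (the function name `projection` is our own, not from [1, 2]), the projection of one real vector on another can be computed as:

```python
import math

def projection(a, b):
    """Projection of vector a on vector b: Proj_b(a) = (a . b) / ||b||, Eq. (2)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / norm_b

# The larger Proj_b(a) is, the closer a is to b.
print(projection((1.0, 2.0), (3.0, 4.0)))  # (1*3 + 2*4) / 5 = 2.2
```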

Kaufmann and Gupta [28] introduced the concept of an interval number, defined as follows.

Definition 3

(Kaufmann and Gupta [28]). If \( a = [a^{l} ,a^{u} ] = \{ x|a^{l} \le x \le a^{u} ,a^{l} ,a^{u} \in R\} \), then a is called an interval number. If \( a = [a^{l} ,a^{u} ] = \{ x|0 < a^{l} \le x \le a^{u} \} \), then a is called a positive interval number. If a l = a u, then a is reduced to a real number.

As a generalization of Definition 2, when an interval number is considered as a two-dimensional vector, the projection measure between positive interval numbers is given as follows [1, 2].

Definition 4

Let \( a = [a^{l} ,a^{u} ] \) and \( b = [b^{l} ,b^{u} ] \) be two positive interval numbers, then the projection of interval number a on b is defined as [1, 2]:

$$ P_{b} (a) = \frac{{a^{l} b^{l} + a^{u} b^{u} }}{{\sqrt {(b^{l} )^{2} + (b^{u} )^{2} } }} . $$
(3)

Similarly, the larger the value of P b (a) is, the closer the interval number a is to b.
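Equation (3) can be sketched in the same way (illustrative code with our own naming; the interval values below are chosen for illustration only):

```python
import math

def interval_projection(a, b):
    """Projection of a positive interval number a = [al, au] on b = [bl, bu], Eq. (3)."""
    al, au = a
    bl, bu = b
    return (al * bl + au * bu) / math.sqrt(bl ** 2 + bu ** 2)

# Projection of [0.2, 0.4] on [0.3, 0.5], treating each interval as a 2-D vector.
print(round(interval_projection((0.2, 0.4), (0.3, 0.5)), 4))  # 0.4459
```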

2.2 Basic concepts of SNSs and their cosine similarity measure

To apply a neutrosophic set to science and engineering areas, Ye [11] introduced a SNS, which is a subclass of the neutrosophic set, and gave the following definition of SNS.

Definition 5

(Ye [11]): Let X be a space of points (objects), with a generic element in X denoted by x. A neutrosophic set N in X is characterized by a truth-membership function T N (x), an indeterminacy-membership function I N (x), and a falsity-membership function F N (x). If the functions T N (x), I N (x) and F N (x) are singleton subintervals/subsets of the real standard interval [0, 1], i.e., T N (x): X → [0, 1], I N (x): X → [0, 1], and F N (x): X → [0, 1], then a simplification of the neutrosophic set N is denoted by

$$ N = \left\{ {\left\langle {x,T_{N} (x),I_{N} (x),F_{N} (x)} \right\rangle |x \in X} \right\}, $$

which is called a SNS. It is a subclass of the neutrosophic set and includes the concepts of a SVNS and an INS. Obviously, the intervals T N (x) = [inf T N (x), sup T N (x)], I N (x) = [inf I N (x), sup I N (x)], and F N (x) = [inf F N (x), sup F N (x)] ⊆ [0, 1] satisfy the condition 0 ≤ sup T N (x) + sup I N (x) + sup F N (x) ≤ 3 for any x ∈ X.

Then, there are the following relations for SNSs M and N [11, 14]:

  1) Complement of N: N c = {〈[inf F N (x), sup F N (x)], [1 − sup I N (x), 1 − inf I N (x)], [inf T N (x), sup T N (x)]〉|x ∈ X};

  2) Containment: M ⊆ N if and only if inf T M (x) ≤ inf T N (x), sup T M (x) ≤ sup T N (x), inf I M (x) ≥ inf I N (x), sup I M (x) ≥ sup I N (x), inf F M (x) ≥ inf F N (x), and sup F M (x) ≥ sup F N (x) for any x in X;

  3) Equality: M = N if and only if M ⊆ N and N ⊆ M;

  4) Intersection: M ∩ N = {〈[min(inf T M (x), inf T N (x)), min(sup T M (x), sup T N (x))], [max(inf I M (x), inf I N (x)), max(sup I M (x), sup I N (x))], [max(inf F M (x), inf F N (x)), max(sup F M (x), sup F N (x))]〉|x ∈ X};

  5) Union: M ∪ N = {〈[max(inf T M (x), inf T N (x)), max(sup T M (x), sup T N (x))], [min(inf I M (x), inf I N (x)), min(sup I M (x), sup I N (x))], [min(inf F M (x), inf F N (x)), min(sup F M (x), sup F N (x))]〉|x ∈ X}.

For convenience, the three interval pairs T N (x) = [inf T N (x), sup T N (x)], I N (x) = [inf I N (x), sup I N (x)], F N (x) = [inf F N (x), sup F N (x)] ⊆ [0, 1] in the SNS N are denoted by a simplified neutrosophic value (SNV) \( \alpha = \left\langle {[T^{l} ,T^{u} ],[I^{l} ,I^{u} ],[F^{l} ,F^{u} ]} \right\rangle \), which is a basic component in the SNS N.

Definition 6

Let \( \alpha_{1} = \left\langle {[T_{1}^{l} ,T_{1}^{u} ],[I_{1}^{l} ,I_{1}^{u} ],[F_{1}^{l} ,F_{1}^{u} ]} \right\rangle \) and \( \alpha_{2} = \left\langle {[T_{2}^{l} ,T_{2}^{u} ],[I_{2}^{l} ,I_{2}^{u} ],[F_{2}^{l} ,F_{2}^{u} ]} \right\rangle \) be two SNVs. Then, the cosine similarity measure between SNVs (also called the cosine of the included angle between SNVs α 1 and α 2) is defined in the 6-dimensional vector space as follows [14]:

$$ \begin{aligned} S_{c} (\alpha_{1} ,\alpha_{2} ) & = \frac{{\alpha_{1} \cdot \alpha_{2} }}{{\left\| {\alpha_{1} } \right\|\left\| {\alpha_{2} } \right\|}} \\ & = \frac{{T_{1}^{l} T_{2}^{l} + T_{1}^{u} T_{2}^{u} + I_{1}^{l} I_{2}^{l} + I_{1}^{u} I_{2}^{u} + F_{1}^{l} F_{2}^{l} + F_{1}^{u} F_{2}^{u} }}{{\sqrt {\left( {T_{1}^{l} } \right)^{2} + \left( {T_{1}^{u} } \right)^{2} + \left( {I_{1}^{l} } \right)^{2} + \left( {I_{1}^{u} } \right)^{2} + \left( {F_{1}^{l} } \right)^{2} + \left( {F_{1}^{u} } \right)^{2} } \sqrt {\left( {T_{2}^{l} } \right)^{2} + \left( {T_{2}^{u} } \right)^{2} + \left( {I_{2}^{l} } \right)^{2} + \left( {I_{2}^{u} } \right)^{2} + \left( {F_{2}^{l} } \right)^{2} + \left( {F_{2}^{u} } \right)^{2} } }} \\ \end{aligned} . $$
(4)

It satisfies the following properties [14]:

  • (P1) 0 ≤ S c (α 1, α 2) ≤ 1;

  • (P2) S c (α 1, α 2) = S c (α 2, α 1);

  • (P3) S c (α 1, α 2) = 1 if α 1 = α 2, i.e. \( T_{1}^{l} = T_{2}^{l} \), \( T_{1}^{u} = T_{2}^{u} \), \( I_{1}^{l} = I_{2}^{l} \), \( I_{1}^{u} = I_{2}^{u} \), \( F_{1}^{l} = F_{2}^{l} \), and \( F_{1}^{u} = F_{2}^{u} \).

However, the cosine similarity measure is not always reasonable. To show the flaw, we give the following example.

Example 1

Let α 1 = 〈[1, 1], [0, 0], [0, 0]〉, α 2 = 〈[0.8, 0.8], [0, 0], [0, 0]〉 be two SNVs. Then, the cosine measure between α 1 and α 2 is considered.

According to Eq. (4), we obtain S c (α 1, α 2) = 1. Obviously, α 1 is clearly different from α 2. Hence, the cosine measure yields an unreasonable result. In this case, it is difficult to apply it to pattern recognition.
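The result of Example 1 can be verified numerically; the sketch below (helper names are ours) implements Eq. (4) for SNVs represented as triples of interval pairs:

```python
import math

def to_vector(snv):
    """Flatten an SNV <[Tl,Tu],[Il,Iu],[Fl,Fu]> into a 6-dimensional vector."""
    (tl, tu), (il, iu), (fl, fu) = snv
    return (tl, tu, il, iu, fl, fu)

def cosine_similarity(a1, a2):
    """Cosine similarity measure S_c between two SNVs, Eq. (4)."""
    v1, v2 = to_vector(a1), to_vector(a2)
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(y * y for y in v2))
    return dot / (n1 * n2)

a1 = ((1.0, 1.0), (0.0, 0.0), (0.0, 0.0))
a2 = ((0.8, 0.8), (0.0, 0.0), (0.0, 0.0))
# S_c evaluates to 1 although a1 differs from a2 -- the unreasonable case above.
print(cosine_similarity(a1, a2))
```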

The cosine measure is a basic mathematical tool in the projection method, so the projection method also exhibits this flaw in some cases, as indicated in [5]. To overcome the flaw and to extend the projection method, we propose a simplified neutrosophic harmonic averaging projection measure in the following section.

3 Simplified neutrosophic harmonic averaging projection measure

In this section, we propose a harmonic averaging projection measure between SNVs to extend and improve the projection measure between two positive interval numbers in Definition 4.

Based on Definition 4, we give the following definition of projection measures between SNVs:

Definition 7

Let \( \alpha_{1} = \left\langle {[T_{1}^{l} ,T_{1}^{u} ],[I_{1}^{l} ,I_{1}^{u} ],[F_{1}^{l} ,F_{1}^{u} ]} \right\rangle \) and \( \alpha_{2} = \left\langle {[T_{2}^{l} ,T_{2}^{u} ],[I_{2}^{l} ,I_{2}^{u} ],[F_{2}^{l} ,F_{2}^{u} ]} \right\rangle \) be two SNVs. Then,

$$ \alpha_{1} \cdot \alpha_{2} = T_{1}^{l} T_{2}^{l} + T_{1}^{u} T_{2}^{u} + I_{1}^{l} I_{2}^{l} + I_{1}^{u} I_{2}^{u} + F_{1}^{l} F_{2}^{l} + F_{1}^{u} F_{2}^{u} $$
(5)

is called the inner product between SNVs α 1 and α 2,

$$ \left\| {\alpha_{1} } \right\| = \sqrt {(T_{1}^{l} )^{2} + (T_{1}^{u} )^{2} + (I_{1}^{l} )^{2} + (I_{1}^{u} )^{2} + (F_{1}^{l} )^{2} + (F_{1}^{u} )^{2} } , $$
(6)
$$ \left\| {\alpha_{2} } \right\| = \sqrt {(T_{2}^{l} )^{2} + (T_{2}^{u} )^{2} + (I_{2}^{l} )^{2} + (I_{2}^{u} )^{2} + (F_{2}^{l} )^{2} + (F_{2}^{u} )^{2} } $$
(7)

are called the modules of α 1 and α 2, respectively, and then

$$ P_{{\alpha_{2} }} (\alpha_{1} ) = \left\| {\alpha_{1} } \right\|\cos (\alpha_{1} ,\alpha_{2} ) = \frac{{\alpha_{1} \cdot \alpha_{2} }}{{\left\| {\alpha_{2} } \right\|}} , $$
(8)
$$ P_{{\alpha_{1} }} (\alpha_{2} ) = \left\| {\alpha_{2} } \right\|\cos (\alpha_{1} ,\alpha_{2} ) = \frac{{\alpha_{1} \cdot \alpha_{2} }}{{\left\| {\alpha_{1} } \right\|}} $$
(9)

are called the projections of the SNV α 1 on the SNV α 2 and of the SNV α 2 on the SNV α 1, respectively.

Then, based on the two projection measures (bidirectional projection measures), we can introduce the following harmonic averaging measure of the two projection measures:

$$ \begin{aligned} P(\alpha_{1} ,\alpha_{2} ) & = \frac{1}{{\frac{1}{2}\left( {\frac{1}{{P_{{\alpha_{2} }} (\alpha_{1} )}} + \frac{1}{{P_{{\alpha_{1} }} (\alpha_{2} )}}} \right)}} = \frac{{2\alpha_{1} \cdot \alpha_{2} }}{{\left\| {\alpha_{2} } \right\| + \left\| {\alpha_{1} } \right\|}} \\ & = \frac{{2\left( {T_{1}^{l} T_{2}^{l} + T_{1}^{u} T_{2}^{u} + I_{1}^{l} I_{2}^{l} + I_{1}^{u} I_{2}^{u} + F_{1}^{l} F_{2}^{l} + F_{1}^{u} F_{2}^{u} } \right)}}{{\sqrt {\left( {T_{2}^{l} } \right)^{2} + \left( {T_{2}^{u} } \right)^{2} + \left( {I_{2}^{l} } \right)^{2} + \left( {I_{2}^{u} } \right)^{2} + \left( {F_{2}^{l} } \right)^{2} + \left( {F_{2}^{u} } \right)^{2} } + \sqrt {\left( {T_{1}^{l} } \right)^{2} + \left( {T_{1}^{u} } \right)^{2} + \left( {I_{1}^{l} } \right)^{2} + \left( {I_{1}^{u} } \right)^{2} + \left( {F_{1}^{l} } \right)^{2} + \left( {F_{1}^{u} } \right)^{2} } }} \\ \end{aligned} , $$
(10)

which is called the harmonic averaging projection measure. Clearly, it shows the following properties:

  • (P1) P(α 1, α 2) = P(α 2, α 1);

  • (P2) 0 ≤ P(α 1, α 2).

In general, the larger the value of P(α 1, α 2) is, the closer the two SNVs α 1 and α 2 are.

To show the effectiveness of the harmonic averaging projection measure, we give the following two examples.

Example 2

It is known that α * = 〈[1, 1], [0,0], [0,0]〉 is the maximum of SNVs. Let α 1 = 〈[0.8,0.8], [0.4, 0.4], [0.3, 0.3]〉 and α 2 = 〈[0.8,0.8], [0,0], [0,0]〉 be two SNVs. Then, we consider the ranking order between α 1 and α 2.

First, we can obtain P(α 1, α *) = 1.1643 and P(α 2, α *) = 1.2571 according to Eq. (10). Thus, α 2 is closer to α * than α 1, i.e., α 2 is superior to α 1.

Then, we can obtain S c (α 1, α *) = 0.848 and S c (α 2, α *) = 1 according to Eq. (4). Therefore, α 2 is closer to α * than α 1, i.e., α 2 is superior to α 1. However, the result of S c (α 2, α *) = 1 is unreasonable since α 2 is different from α *.

Example 3

Since α * = 〈[1, 1], [0,0], [0,0]〉 is the maximum of SNVs, let α 1 = 〈[0.6,0.8], [0.1,0.2], [0.2, 0.3]〉 and α 2 = 〈[0,0], [0,0], [0,0]〉 be two SNVs. Then, we consider the ranking order between α 1 and α 2.

First, we can obtain P(α 1, α *) = 1.1198 and P(α 2, α *) = 0 according to Eq. (10). Thus, α 1 is closer to α * than α 2, i.e., α 1 is superior to α 2.

Then, according to Eq. (4), we obtain S c (α 1, α *) = 0.9113, while S c (α 2, α *) is undefined (meaningless). Therefore, α 1 and α 2 cannot be ranked by the cosine measure in this case.

Obviously, the ranking results in the above two examples show that the harmonic averaging projection measure is superior to the cosine measure.
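The values in Examples 2 and 3 can be reproduced with the following sketch of Eq. (10) (helper names are ours):

```python
import math

def to_vector(snv):
    """Flatten an SNV <[Tl,Tu],[Il,Iu],[Fl,Fu]> into a 6-dimensional vector."""
    (tl, tu), (il, iu), (fl, fu) = snv
    return (tl, tu, il, iu, fl, fu)

def harmonic_projection(a1, a2):
    """Harmonic average of the bidirectional projections, Eq. (10):
    P(a1, a2) = 2 (a1 . a2) / (||a1|| + ||a2||)."""
    v1, v2 = to_vector(a1), to_vector(a2)
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(y * y for y in v2))
    return 2 * dot / (n1 + n2)

ideal = ((1.0, 1.0), (0.0, 0.0), (0.0, 0.0))  # the maximum SNV alpha*

# Example 2: alpha2 ranks above alpha1, and neither value is forced to 1.
print(round(harmonic_projection(((0.8, 0.8), (0.4, 0.4), (0.3, 0.3)), ideal), 4))  # 1.1643
print(round(harmonic_projection(((0.8, 0.8), (0.0, 0.0), (0.0, 0.0)), ideal), 4))  # 1.2571

# Example 3: the zero SNV yields P = 0 instead of an undefined cosine value.
print(round(harmonic_projection(((0.6, 0.8), (0.1, 0.2), (0.2, 0.3)), ideal), 4))  # 1.1198
print(harmonic_projection(((0.0, 0.0), (0.0, 0.0), (0.0, 0.0)), ideal))  # 0.0
```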

4 Simplified neutrosophic harmonic averaging projection-based decision-making method

In this section, we propose a simplified neutrosophic harmonic averaging projection-based method for multiple attribute decision-making problems with simplified neutrosophic information.

Let Y = {y 1, y 2,…, y m } be a set of alternatives and X = {x 1, x 2,…, x n } be a set of attributes. Then, the characteristic of an alternative y i (i = 1, 2,…, m) with respect to an attribute x j (j = 1, 2,…, n) is expressed as the form of a SNS:

$$ y_{i} = \{ \langle x_{j} ,T_{{y_{i} }} (x_{j} ),I_{{y_{i} }} (x_{j} ),F_{{y_{i} }} (x_{j} )\rangle |x_{j} \in X\} , $$

where \( T_{{y_{i} }} (x_{j} ) \) = [inf \( T_{{y_{i} }} (x_{j} ) \), sup \( T_{{y_{i} }} (x_{j} ) \)], \( I_{{y_{i} }} (x_{j} ) \) = [inf \( I_{{y_{i} }} (x_{j} ) \), sup \( I_{{y_{i} }} (x_{j} ) \)], and \( F_{{y_{i} }} (x_{j} ) \) = [inf \( F_{{y_{i} }} (x_{j} ) \), sup \( F_{{y_{i} }} (x_{j} ) \)] ⊆ [0, 1] are the three interval pairs in a SNS y i , with 0 ≤ sup \( T_{{y_{i} }} (x_{j} ) \) + sup \( I_{{y_{i} }} (x_{j} ) \) + sup \( F_{{y_{i} }} (x_{j} ) \) ≤ 3 for x j ∈ X, j = 1, 2, …, n, and i = 1, 2, …, m. Especially, when \( T_{{y_{i} }} (x_{j} ) \) = inf \( T_{{y_{i} }} (x_{j} ) \) = sup \( T_{{y_{i} }} (x_{j} ) \), \( I_{{y_{i} }} (x_{j} ) \) = inf \( I_{{y_{i} }} (x_{j} ) \) = sup \( I_{{y_{i} }} (x_{j} ) \), and \( F_{{y_{i} }} (x_{j} ) \) = inf \( F_{{y_{i} }} (x_{j} ) \) = sup \( F_{{y_{i} }} (x_{j} ) \) in a SNS y i , we can still express the three interval pairs with equal upper and lower ends, i.e. \( T_{{y_{i} }} (x_{j} ) \) = [\( T_{{y_{i} }} (x_{j} ) \), \( T_{{y_{i} }} (x_{j} ) \)], \( I_{{y_{i} }} (x_{j} ) \) = [\( I_{{y_{i} }} (x_{j} ) \), \( I_{{y_{i} }} (x_{j} ) \)], and \( F_{{y_{i} }} (x_{j} ) \) = [\( F_{{y_{i} }} (x_{j} ) \), \( F_{{y_{i} }} (x_{j} ) \)] for x j ∈ X.

For convenience, the three interval pairs \( T_{{y_{i} }} (x_{j} ) \) = [inf \( T_{{y_{i} }} (x_{j} ) \), sup \( T_{{y_{i} }} (x_{j} ) \)], \( I_{{y_{i} }} (x_{j} ) \) = [inf \( I_{{y_{i} }} (x_{j} ) \), sup \( I_{{y_{i} }} (x_{j} ) \)], and \( F_{{y_{i} }} (x_{j} ) \) = [inf \( F_{{y_{i} }} (x_{j} ) \), sup \( F_{{y_{i} }} (x_{j} ) \)] ⊆ [0, 1] are denoted by a SNV α ij  = 〈\( [T_{ij}^{l} ,T_{ij}^{u} ] \), \( [I_{ij}^{l} ,I_{ij}^{u} ] \), \( [F_{ij}^{l} ,F_{ij}^{u} ] \)〉 (i = 1, 2, …, m; j = 1, 2,…, n), which is a basic component in a SNS y i , derived usually from the evaluation of an alternative y i with respect to an attribute x j by the expert or decision maker. Thus, we can establish a simplified neutrosophic decision matrix D = (α ij ) m×n .

In a multiple attribute decision-making environment, the concept of ideal point has been used to help identify the best alternative in the decision set. Although the ideal alternative does not exist in real world, it does provide a useful theoretical construct against which to evaluate alternatives [14].

Generally speaking, evaluation attributes consist of benefit attributes and cost attributes. Let B be a collection of benefit attributes and C be a collection of cost attributes. In the multiple attribute decision-making method, an ideal alternative can be determined by using the maximum operator for the benefit attributes and the minimum operator for the cost attributes to obtain the best SNV (ideal solution) of each attribute among all alternatives. Therefore, we can give an ideal SNV for a benefit attribute in the ideal alternative y * by

$$ \begin{array}{*{20}l} {\alpha_{j}^{*} = \left\langle {\left[ {T_{j}^{l*} ,T_{j}^{u*} } \right],\left[ {I_{j}^{l*} ,I_{j}^{u*} } \right],\left[ {F_{j}^{l*} ,F_{j}^{u*} } \right]} \right\rangle } \hfill \\ {\quad \, = \left\langle {\left[ {\mathop {\hbox{max} }\limits_{i} (T_{ij}^{l} ),\mathop {\hbox{max} }\limits_{i} (T_{ij}^{u} )} \right],\left[ {\mathop {\hbox{min} }\limits_{i} (I_{ij}^{l} ),\mathop {\hbox{min} }\limits_{i} (I_{ij}^{u} )} \right],} \right.\left. {\left[ {\mathop {\hbox{min} }\limits_{i} (F_{ij}^{l} ),\mathop {\hbox{min} }\limits_{i} (F_{ij}^{u} )} \right]} \right\rangle } \hfill \\ \end{array} {\text{for}}\;j \in B, $$
(11)

while for a cost attribute, we can give an ideal SNV in the ideal alternative y * by

$$ \begin{array}{*{20}l} {\alpha_{j}^{*} = \left\langle {\left[ {T_{j}^{l*} ,T_{j}^{u*} } \right],\left[ {I_{j}^{l*} ,I_{j}^{u*} } \right],\left[ {F_{j}^{l*} ,F_{j}^{u*} } \right]} \right\rangle } \hfill \\ {\quad \; = \left\langle {\left[ {\mathop {\hbox{min} }\limits_{i} (T_{ij}^{l} ),\mathop {\hbox{min} }\limits_{i} (T_{ij}^{u} )} \right],\left[ {\mathop {\hbox{max} }\limits_{i} (I_{ij}^{l} ),\mathop {\hbox{max} }\limits_{i} (I_{ij}^{u} )} \right],} \right.\left. {\left[ {\mathop {\hbox{max} }\limits_{i} (F_{ij}^{l} ),\mathop {\hbox{max} }\limits_{i} (F_{ij}^{u} )} \right]} \right\rangle } \hfill \\ \end{array} {\text{for}}\;j \in C. $$
(12)

If the weight of the attribute x j (j = 1, 2,…, n), entered by the decision-maker, is w j , with w j ∈ [0, 1] and \( \sum\nolimits_{j = 1}^{n} {w_{j} } = 1 \), based on Eq. (10), the weighted harmonic averaging projection measure between an alternative y i and the ideal alternative y * is introduced by

$$ WP(y_{i} ,y^{*} ) = \sum\limits_{j = 1}^{n} {w_{j} \frac{{2\left( {T_{ij}^{l} T_{j}^{l*} + T_{ij}^{u} T_{j}^{u*} + I_{ij}^{l} I_{j}^{l*} + I_{ij}^{u} I_{j}^{u*} + F_{ij}^{l} F_{j}^{l*} + F_{ij}^{u} F_{j}^{u*} } \right)}}{{\left( \begin{aligned} \sqrt {\left( {T_{ij}^{l} } \right)^{2} + \left( {T_{ij}^{u} } \right)^{2} + \left( {I_{ij}^{l} } \right)^{2} + \left( {I_{ij}^{u} } \right)^{2} + \left( {F_{ij}^{l} } \right)^{2} + \left( {F_{ij}^{u} } \right)^{2} } \hfill \\ + \sqrt {\left( {T_{j}^{l*} } \right)^{2} + \left( {T_{j}^{u*} } \right)^{2} + \left( {I_{j}^{l*} } \right)^{2} + \left( {I_{j}^{u*} } \right)^{2} + \left( {F_{j}^{l*} } \right)^{2} + \left( {F_{j}^{u*} } \right)^{2} } \hfill \\ \end{aligned} \right)}}} . $$
(13)

Clearly, the larger the value of WP(y i , y *) is, the closer the alternative y i is to the ideal alternative y *, and hence the better the alternative y i is. Therefore, according to the values of WP(y i , y *) (i = 1, 2, …, m), the ranking order of all alternatives can be determined and the best one can be easily selected.

5 Illustrative example

In this section, an example on investment alternatives is provided as the multiple attribute decision-making problem to demonstrate the application and effectiveness of the simplified neutrosophic harmonic averaging projection-based decision-making method.

Let us consider the decision-making problem adapted from [14] for convenient comparison. An investment company wants to invest a sum of money in the best option. There is a panel with four possible alternatives: (1) y 1 is a car company; (2) y 2 is a food company; (3) y 3 is a computer company; (4) y 4 is an arms company. The investment company must take a decision according to three attributes: (1) x 1 is the risk; (2) x 2 is the growth; (3) x 3 is the environmental impact, where x 1 and x 2 are benefit attributes and x 3 is a cost attribute. The weight vector of the attributes is given by W = (0.35, 0.25, 0.40)T. The four possible alternatives are evaluated under the above three attributes in the form of SNVs, structured as the following simplified neutrosophic decision matrix D:

$$ \begin{gathered} D = \left( {\alpha _{{ij}} } \right)_{{4 \times 3}} \hfill \\ \quad = \left[ {\begin{array}{*{20}c} {\left\langle {[0.4,0.5],[0.2,0.3],[0.3,0.4]} \right\rangle } \\ {\left\langle {[0.6,0.7],[0.1,0.2],[0.2,0.3]} \right\rangle } \\ {\left\langle {[0.3,0.6],[0.2,0.3],[0.3,0.4]} \right\rangle } \\ {\left\langle {[0.7,0.8],[0.0,0.1],[0.1,0.2]} \right\rangle } \\ \end{array} } \right. \hfill \\ \;\;\;\;\;\;\;\;\begin{array}{*{20}c} {\left\langle {[0.4,0.6],[0.1,0.3],[0.2,0.4]} \right\rangle } \\ {\left\langle {[0.6,0.7],[0.1,0.2],[0.2,0.3]} \right\rangle } \\ {\left\langle {[0.5,0.6],[0.2,0.3],[0.3,0.4]} \right\rangle } \\ {\left\langle {[0.6,0.7],[0.1,0.2],[0.1,0.3]} \right\rangle } \\ \end{array} \hfill \\ \;\;\;\;\;\;\;\;\left. {\begin{array}{*{20}l} {\left\langle {[0.7,0.9],[0.2,0.3],[0.4,0.5]} \right\rangle } \hfill \\ {\left\langle {[0.3,0.6],[0.3,0.5],[0.8,0.9]} \right\rangle } \hfill \\ {\left\langle {[0.4,0.5],[0.2,0.4],[0.7,0.9]} \right\rangle } \hfill \\ {\left\langle {[0.6,0.7],[0.3,0.4],[0.8,0.9]} \right\rangle } \hfill \\ \end{array} } \right]. \hfill \\ \end{gathered} $$

Then, the developed approach is applied to the decision-making problem to obtain the most desirable alternative(s).

By using Eqs. (11) and (12) for the simplified neutrosophic decision matrix D, we can obtain the following ideal alternative:

$$ \begin{aligned} y^{*} = \left\{ {\left\langle {[0.7,0.8],[0.0,0.1],[0.1,0.2]} \right\rangle ,} \right. \hfill \\ \quad \quad \;\left\langle {[0.6,0.7],[0.1,0.2],[0.1,0.3]} \right\rangle , \hfill \\ \left. {\quad \quad \;\left\langle {[0.3,0.5],[0.3,0.5],[0.8,0.9]} \right\rangle } \right\}. \hfill \\ \end{aligned} $$

By using Eq. (13), we can obtain the values of the projection measure WP(y i , y *) (i = 1, 2, 3, 4) as follows:

$$ WP\left( {y_{1} ,y^{*} } \right) = 0.9699,\;WP\left( {y_{2} ,y^{*} } \right) = 1.1996,\;WP\left( {y_{3} ,y^{*} } \right) = 1.0914,\;{\text{and}}\;WP\left( {y_{4} ,y^{*} } \right) = 1.2260. $$

From the above results, the ranking order of the four alternatives is y 4 > y 2 > y 3 > y 1. Clearly, amongst them y 4 is the best alternative.
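The whole procedure can be sketched end to end (function and variable names are our own, illustrative choices; the data are the decision matrix and weights above):

```python
import math

def flat(snv):
    """Flatten an SNV <[Tl,Tu],[Il,Iu],[Fl,Fu]> into a 6-dimensional vector."""
    (tl, tu), (il, iu), (fl, fu) = snv
    return (tl, tu, il, iu, fl, fu)

def harmonic_projection(a, b):
    """Harmonic averaging projection between two SNVs, Eq. (10)."""
    va, vb = flat(a), flat(b)
    dot = sum(x * y for x, y in zip(va, vb))
    return 2 * dot / (math.sqrt(sum(x * x for x in va)) + math.sqrt(sum(y * y for y in vb)))

# Decision matrix D: rows are alternatives y1..y4, columns are attributes x1..x3.
D = [
    [((0.4, 0.5), (0.2, 0.3), (0.3, 0.4)), ((0.4, 0.6), (0.1, 0.3), (0.2, 0.4)), ((0.7, 0.9), (0.2, 0.3), (0.4, 0.5))],
    [((0.6, 0.7), (0.1, 0.2), (0.2, 0.3)), ((0.6, 0.7), (0.1, 0.2), (0.2, 0.3)), ((0.3, 0.6), (0.3, 0.5), (0.8, 0.9))],
    [((0.3, 0.6), (0.2, 0.3), (0.3, 0.4)), ((0.5, 0.6), (0.2, 0.3), (0.3, 0.4)), ((0.4, 0.5), (0.2, 0.4), (0.7, 0.9))],
    [((0.7, 0.8), (0.0, 0.1), (0.1, 0.2)), ((0.6, 0.7), (0.1, 0.2), (0.1, 0.3)), ((0.6, 0.7), (0.3, 0.4), (0.8, 0.9))],
]
weights = [0.35, 0.25, 0.40]
benefit = [True, True, False]  # x1 and x2 are benefit attributes, x3 is a cost attribute

def ideal_snv(column, is_benefit):
    """Ideal SNV of one attribute column, Eqs. (11) and (12)."""
    t, i, f = ([s[k] for s in column] for k in range(3))
    lo = lambda xs: (min(x[0] for x in xs), min(x[1] for x in xs))
    hi = lambda xs: (max(x[0] for x in xs), max(x[1] for x in xs))
    return (hi(t), lo(i), lo(f)) if is_benefit else (lo(t), hi(i), hi(f))

ideal = [ideal_snv([row[j] for row in D], benefit[j]) for j in range(3)]

# Weighted harmonic averaging projection of each alternative on y*, Eq. (13).
wp = [sum(w * harmonic_projection(row[j], ideal[j]) for j, w in enumerate(weights)) for row in D]
ranking = sorted(range(4), key=lambda i: wp[i], reverse=True)
print([f"y{i + 1}" for i in ranking])  # ['y4', 'y2', 'y3', 'y1']
```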

For comparative convenience, we introduce the decision results based on the Jaccard measure WS J(y i , y *), the Dice measure WS D(y i , y *), and the cosine measure WS C(y i , y *) from [14]. Then, all the obtained results are shown in Table 1.

Table 1 Results of various measure methods

The results in Table 1 show that the harmonic averaging projection measure-based ranking order is in accordance with the cosine measure-based ranking order and differs from the ranking orders based on the Jaccard and Dice measures in [14] only in the positions of y 2 and y 4. As we can see, the results may differ depending on the measure method used [6, 14]. Compared with the simplified neutrosophic decision-making method in [14], the method proposed in this paper uses the harmonic averaging projection measure, which considers not only the distance and the included angle between the objects evaluated but also the bidirectional projection magnitudes between them, whereas the decision-making method presented in [14] uses the cosine measure, which considers only the included angle between the objects evaluated and exhibits a flaw in some cases. As mentioned before, the harmonic averaging projection measure is superior to the cosine measure. Therefore, the proposed decision-making method is superior to that in [14].

6 Conclusion

This paper has developed a simplified neutrosophic harmonic averaging projection measure and its multiple attribute decision-making method in simplified neutrosophic setting. Through the harmonic averaging projection measure values between each alternative and the ideal alternative, the ranking order of all alternatives can be determined and the best alternative can be easily identified as well. Finally, an illustrative example demonstrated the application and effectiveness of the developed method.

The technique proposed in this paper can extend existing simplified neutrosophic decision-making methods and provide a new way for multiple attribute decision-making problems with simplified neutrosophic information. In the future work, we shall extend the proposed projection measure to other areas, such as pattern recognition and medical diagnosis.