1 Introduction

Because of the many types of uncertainty present in the real world, a variety of mathematical tools have been developed for dealing with incomplete, indeterminate and inconsistent information. Zadeh (1965) first proposed the theory of fuzzy sets, which has been applied successfully in various fields. Subsequently, several new concepts of higher-order fuzzy sets have been presented. Among them, the intuitionistic fuzzy set (IFS) introduced by Atanassov (1986) is a typical generalization of the fuzzy set. An IFS consists of a membership function and a non-membership function on the universe and provides a flexible mathematical framework for uncertain information processing. Smarandache (1998) originally proposed the notion of a neutrosophic set, which is a generalization of the fuzzy set and the intuitionistic fuzzy set (Smarandache 2005). A neutrosophic set is characterized independently by a truth membership function, a falsity membership function and an indeterminacy membership function, and is thus better suited to handling incomplete, indeterminate and inconsistent information. In order to apply the neutrosophic set more easily in real scientific and engineering fields, Wang et al. (2010) proposed the notion of the single-valued neutrosophic set (SVNS), which is an instance of the neutrosophic set, and provided some set-theoretic operations on SVNSs. Single-valued neutrosophic set theory has proven useful in many scientific fields, such as multi-attribute decision making, machine learning, medical diagnosis and fault diagnosis (see Deli 2017; Guo et al. 2014; Guo and Cheng 2009; Liu et al. 2014; Peng et al. 2016; Ye 2015, 2016, 2017a; Zhan et al. 2017; Zhang 2017; Zhang et al. 2017, 2018c). Moreover, several new neutrosophic theories have been proposed, for example, the neutrosophic cubic set (Jun et al. 2017), the neutrosophic rough set (Crispin and Arockiarani 2017; Liu and Yang 2017; Yang et al. 2017) and the neutrosophic concept lattice (Singh 2017).

Entropy, similarity and cross-entropy are three important and closely related information measures. They have been widely used in feature selection, clustering analysis, pattern recognition and so on (Abualigah 2019; Abualigah and Khader 2017). Entropy is usually designed to measure the degree of uncertainty of information, while similarity and cross-entropy are mainly used to measure the degree of discrimination between two objects. Usually, entropy can be constructed from similarity or cross-entropy. The study of similarity measures is of particular importance because, in many practical situations, we need to compare two objects in order to determine whether they are identical or approximately identical, or at least to what degree they are identical (Abualigah and Hanandeh 2015; Abualigah et al. 2018a, b, c). Up to now, much research has been done on information measures and their applications in neutrosophic set theory. Broumi and Smarandache (2013) presented a method to calculate the distance between SVNSs on the basis of the Hausdorff distance and proposed some similarity measures using distances and matching functions. Ye (2013) presented the correlation coefficient of SVNSs based on the correlation of intuitionistic fuzzy sets and proposed a decision-making method using the weighted correlation coefficient and the weighted cosine similarity measure of SVNSs. Majumdar and Samanta (2014) presented several similarity measures for SVNSs based on the Hamming (Euclidean) distance and the normalized Hamming (Euclidean) distance between two SVNSs; furthermore, an entropy function measuring the uncertainty involved in an SVNS was also presented. Ye (2014b) presented three vector similarity measures for simplified neutrosophic sets (SNSs), namely the Jaccard, Dice and cosine similarity measures, and applied them to multicriteria decision-making problems under a simplified neutrosophic environment.
By combining interval neutrosophic sets and interval-valued hesitant fuzzy sets, Liu and Shi (2015) proposed the notion of interval neutrosophic hesitant sets. They also developed some new aggregation operators for interval neutrosophic hesitant fuzzy information. Sahin (2017) proposed two techniques for converting an interval neutrosophic set into a fuzzy set and a single-valued neutrosophic set, respectively. Based on extensions of the fuzzy cross-entropy and the single-valued neutrosophic cross-entropy, the interval neutrosophic cross-entropy was constructed. Additionally, two multi-criteria decision-making methods were developed using the interval neutrosophic cross-entropy between an alternative and the ideal alternative. Ye (2017b) proposed two cotangent similarity measures for SVNSs based on the cotangent function and applied them to the fault diagnosis of steam turbines. Wu et al. (2018) proposed some formulas for constructing information measures on the basis of the cosine function; the relationships among entropy, similarity measure and cross-entropy, as well as their mutual transformations, were further discussed, and an approach to multi-attribute decision making based on these information measures was presented. Pramanik et al. (2018) proposed a new cross-entropy measure in the SVNS environment, namely the NS-cross entropy, and developed a novel multi-attribute group decision-making strategy capable of dealing with unknown weights of attributes and of decision-makers.

The definitions and construction methods of entropy, similarity and cross-entropy are closely related to the inclusion relation for neutrosophic sets. There are two widely used definitions of the inclusion relation (Smarandache 1998; Wang et al. 2010; Borzooei et al. 2014), called the type-1 and type-2 inclusion relations. Recently, Zhang et al. (2018a, b) noted some shortcomings of these existing inclusion relations: they actually divide the three membership functions into two groups and therefore do not fully exploit all three membership functions. Accordingly, Zhang et al. (2018a) proposed a new kind of inclusion relation for SVNSs (called the type-3 inclusion relation) and presented the corresponding union and intersection operations on SVNSs. The algebraic structure of SVNSs has also been investigated.

Under the type-3 inclusion relation, the truth membership function, falsity membership function and indeterminacy membership function of an SVNS are considered to be of different importance. The existing similarity measures and entropies for SVNSs are mainly designed with respect to the type-1 and type-2 inclusion relations and are not suitable for the type-3 inclusion relation. So, in the present paper, we study the entropy and similarity measures of SVNSs with respect to this new kind of inclusion relation. The axiomatic definitions of similarity and entropy for single-valued neutrosophic values (SVNVs) with respect to the type-3 inclusion relation are proposed, and some construction methods for similarity and entropy are presented. By using these information measures and aggregation operators, some similarity and entropy measures for SVNSs are examined. The paper is organized as follows: In Sect. 2, we recall some notions and properties related to SVNSs and their algebraic structure. In Sect. 3, we show by examples that, with respect to the type-3 inclusion relation, the three membership functions of an SVNS are not of the same importance and the existing similarity and entropy measures for SVNSs are not suitable. We then propose axiomatic definitions of similarity and entropy measures for SVNVs. Using the Hamming distance, the cosine function and the cotangent function, three similarity measures and three entropies for SVNVs are constructed. In Sect. 4, we extend the definitions and construction methods of similarity and entropy from SVNVs to SVNSs. In Sect. 5, we present a multi-attribute decision-making method using the new similarity and entropy measures proposed in this paper, which shows that these new information measures are effective and efficient. The paper is completed with some concluding remarks.

2 Overview of neutrosophic set

In this section, we recall some fundamental notions and properties related to neutrosophic set.

Definition 1

(Smarandache 1998) Let X be a space of points (objects), with a generic element in X denoted by x. A neutrosophic set A in X is characterized by a truth-membership function \(T_{A}(x)\), an indeterminacy-membership function \(I_{A}(x)\), and a falsity-membership function \(F_{A}(x)\), where \(T_{A}(x)\), \(I_{A}(x)\) and \(F_{A}(x)\) are real standard or non-standard subsets of \(]^{-}0,1^{+}[\) such that \(T_{A}(x):X\rightarrow ]^{-}0,1^{+}[\), \(I_{A}(x):X\rightarrow ]^{-}0,1^{+}[\) and \(F_{A}(x):X\rightarrow ]^{-}0,1^{+}[\), and the sum of \(T_{A}(x)\), \(I_{A}(x)\) and \(F_{A}(x)\) satisfies the condition \(^{-}0\le {\text {sup}}T_{A}(x)+{\text {sup}}I_{A}(x)+{\text {sup}}F_{A}(x)\le 3^{+}\).

In order to easily apply neutrosophic set theory to science and engineering, Wang et al. (2010) presented the concept of single-valued neutrosophic set (SVNS) as follows.

Definition 2

(Wang et al. 2010) Let X be a space of points (objects), with a generic element in X denoted by x. A single-valued neutrosophic set A in X is characterized by a truth-membership function \(T_{A}(x)\), an indeterminacy-membership function \(I_{A}(x)\) and a falsity-membership function \(F_{A}(x)\). A single-valued neutrosophic set A can be denoted by

$$\begin{aligned} A=\{(x, T_{A}(x), I_{A}(x), F_{A}(x))|x\in X\} \end{aligned}$$
(1)

where \(T_{A}(x), I_{A}(x), F_{A}(x)\in [0,1]\) for each \(x\in X\).

In this paper, a single-valued neutrosophic set A in X is also denoted by

$$\begin{aligned} A=\{(x,A(x))|x\in X\} \end{aligned}$$
(2)

where \(A(x)=(T_{A}(x), I_{A}(x), F_{A}(x))\) and \(T_{A}(x), I_{A}(x), F_{A}(x)\in [0,1]\) for each \(x\in X\). We use the symbol \({\text {SVNS}}(X)\) to denote the set of all single-valued neutrosophic sets in X.

Two single-valued neutrosophic sets A and B are equal, written as \(A=B\), if and only if \(T_{A}(x)=T_{B}(x)\), \(I_{A}(x)=I_{B}(x)\) and \(F_{A}(x)=F_{B}(x)\) for any \(x\in X\). For the inclusion relation of neutrosophic sets, the original definition was proposed by Smarandache (see Smarandache 1998, 2005). It is called the type-1 inclusion relation in Zhang et al. (2018a, b) and denoted by \(\subseteq _{1}\). Another one is denoted by \(\subseteq _{2}\) and is called the type-2 inclusion relation.

Definition 3

(Smarandache 1998, 2005) Let X be a finite set and \(A,B\in {\text {SVNS}}(X)\). A is contained in B, denoted by \(A\subseteq _{1} B\), if \(T_{A}(x)\le T_{B}(x), I_{A}(x)\ge I_{B}(x)\) and \(F_{A}(x)\ge F_{B}(x)\) for any \(x\in X\).

Definition 4

(Wang et al. 2010; Borzooei et al. 2014) Let X be a finite set and \(A,B\in {\text {SVNS}}(X)\). A is contained in B, denoted by \(A\subseteq _{2} B\), if \(T_{A}(x)\le T_{B}(x), I_{A}(x)\le I_{B}(x)\) and \(F_{A}(x)\ge F_{B}(x)\) for any \(x\in X\).
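As an illustrative sketch (ours, not from the cited papers), the two inclusion tests can be coded by representing an SVNS over a finite universe as a dict mapping each point to its \((T, I, F)\) triple; the function names `subseteq_1` and `subseteq_2` are hypothetical:

```python
def subseteq_1(A, B):
    """A ⊆₁ B (Definition 3): T_A ≤ T_B, I_A ≥ I_B, F_A ≥ F_B at every point."""
    return all(A[x][0] <= B[x][0] and A[x][1] >= B[x][1] and A[x][2] >= B[x][2]
               for x in A)

def subseteq_2(A, B):
    """A ⊆₂ B (Definition 4): T_A ≤ T_B, I_A ≤ I_B, F_A ≥ F_B at every point."""
    return all(A[x][0] <= B[x][0] and A[x][1] <= B[x][1] and A[x][2] >= B[x][2]
               for x in A)

A = {"x1": (0.3, 0.6, 0.5), "x2": (0.2, 0.4, 0.7)}
B = {"x1": (0.5, 0.2, 0.4), "x2": (0.6, 0.1, 0.3)}
print(subseteq_1(A, B))  # True: I decreases pointwise, so type-1 holds
print(subseteq_2(A, B))  # False: type-2 requires I to increase, so it fails
```

The same pair of sets being comparable under \(\subseteq _{1}\) but not under \(\subseteq _{2}\) shows that the two relations treat the indeterminacy degree in opposite directions.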

With respect to inclusion relations \(\subseteq _{1}\) and \(\subseteq _{2}\), there are two kinds of union and intersection operations on single-valued neutrosophic sets.

Definition 5

(Smarandache 1998, 2005) Let X be a finite set and \(A,B\in {\text {SVNS}}(X)\) with

$$\begin{aligned} A= & {} \{(x, T_{A}(x), I_{A}(x), F_{A}(x))|x\in X\}, \\ B= & {} \{(x, T_{B}(x), I_{B}(x), F_{B}(x))|x\in X\}. \end{aligned}$$

(1) The type-1 union of A and B is a single-valued neutrosophic set C, written as \(C=A\cup _{1} B\), whose truth-membership, indeterminacy-membership and falsity-membership functions are given by: for any \(x\in X\),

$$\begin{aligned} T_{C}(x)= & {} \max \{T_{A}(x), T_{B}(x)\},\\ I_{C}(x)= & {} \min \{I_{A}(x), I_{B}(x)\},\\ F_{C}(x)= & {} \min \{F_{A}(x), F_{B}(x)\}. \end{aligned}$$

(2) The type-1 intersection of A and B is a single-valued neutrosophic set D, written as \(D=A\cap _{1} B\), whose truth-membership, indeterminacy-membership and falsity-membership functions are given by: for any \(x\in X\),

$$\begin{aligned} T_{D}(x)= & {} \min \{T_{A}(x), T_{B}(x)\},\\ I_{D}(x)= & {} \max \{I_{A}(x), I_{B}(x)\},\\ F_{D}(x)= & {} \max \{F_{A}(x), F_{B}(x)\}. \end{aligned}$$

Definition 6

(Wang et al. 2010; Borzooei et al. 2014) Let X be a finite set and \(A,B\in {\text {SVNS}}(X)\) with

$$\begin{aligned} A= & {} \{(x, T_{A}(x), I_{A}(x), F_{A}(x))|x\in X\},\\ B= & {} \{(x, T_{B}(x), I_{B}(x), F_{B}(x))|x\in X\}. \end{aligned}$$

(1) The type-2 union of A and B is a single-valued neutrosophic set C, written as \(C=A\cup _{2} B\), whose truth-membership, indeterminacy-membership and falsity-membership functions are given by: for any \(x\in X\),

$$\begin{aligned} T_{C}(x)= & {} \max \{T_{A}(x), T_{B}(x)\},\\ I_{C}(x)= & {} \max \{I_{A}(x), I_{B}(x)\},\\ F_{C}(x)= & {} \min \{F_{A}(x), F_{B}(x)\}. \end{aligned}$$

(2) The type-2 intersection of A and B is a single-valued neutrosophic set D, written as \(D=A\cap _{2} B\), whose truth-membership, indeterminacy-membership and falsity-membership functions are given by: for any \(x\in X\),

$$\begin{aligned} T_{D}(x)= & {} \min \{T_{A}(x), T_{B}(x)\},\\ I_{D}(x)= & {} \min \{I_{A}(x), I_{B}(x)\},\\ F_{D}(x)= & {} \max \{F_{A}(x), F_{B}(x)\}. \end{aligned}$$
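The four operations of Definitions 5 and 6 act pointwise, so they can be sketched in the same dict-based representation (an SVNS as a dict of \((T, I, F)\) triples; the function names are ours):

```python
def union_1(A, B):
    """Type-1 union (Definition 5): max T, min I, min F."""
    return {x: (max(A[x][0], B[x][0]), min(A[x][1], B[x][1]), min(A[x][2], B[x][2]))
            for x in A}

def intersection_1(A, B):
    """Type-1 intersection (Definition 5): min T, max I, max F."""
    return {x: (min(A[x][0], B[x][0]), max(A[x][1], B[x][1]), max(A[x][2], B[x][2]))
            for x in A}

def union_2(A, B):
    """Type-2 union (Definition 6): max T, max I, min F."""
    return {x: (max(A[x][0], B[x][0]), max(A[x][1], B[x][1]), min(A[x][2], B[x][2]))
            for x in A}

def intersection_2(A, B):
    """Type-2 intersection (Definition 6): min T, min I, max F."""
    return {x: (min(A[x][0], B[x][0]), min(A[x][1], B[x][1]), max(A[x][2], B[x][2]))
            for x in A}

A = {"x1": (0.3, 0.6, 0.5)}
B = {"x1": (0.5, 0.2, 0.4)}
print(union_1(A, B))  # {'x1': (0.5, 0.2, 0.4)}
print(union_2(A, B))  # {'x1': (0.5, 0.6, 0.4)}
```

The two unions differ only in how the indeterminacy degree is aggregated (min for type-1, max for type-2), mirroring the two inclusion relations.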

For any \((\beta _{1}, \beta _{2}, \beta _{3})\in [0,1]^{3}\), we denote by \(\overline{(\beta _{1}, \beta _{2}, \beta _{3})}\) the single-valued neutrosophic set \(\{(x,\beta _{1}, \beta _{2}, \beta _{3})|x\in X\}\). That is, \(\overline{(\beta _{1}, \beta _{2}, \beta _{3})}(x)=(\beta _{1}, \beta _{2}, \beta _{3})\) for each \(x\in X\).

Definition 7

(Zhang et al. 2018a) \((M, \vee , \wedge , ^{-}, 0, 1)\) is called a generalized De Morgan algebra if \((M, \vee , \wedge , 0, 1)\) is a bounded lattice with bottom element 0 and top element 1, and \(^{-}: M\rightarrow M\) is a unary operation satisfying the identities:

GM1: \(x=(x^{-})^{-}\);

GM2: \((x\wedge y)^{-}=x^{-}\vee y^{-}\);

GM3: \(1^{-}=0\).

For a generalized De Morgan algebra \((M, \vee , \wedge , ^{-}, 0, 1)\), if \((M, \vee , \wedge , 0, 1)\) is a bounded distributive lattice, then \((M, \vee , \wedge , ^{-}, 0, 1)\) is called a De Morgan algebra. The following theorem presents the algebraic structure of single-valued neutrosophic sets.

Theorem 1

(Zhang et al. 2018a) Let X be a finite set.

  1. (1)

    \(({\text {SVNS}}(X), \cup _{1}, \cap _{1}, ^{c}, \overline{(0,1,1)}, \overline{(1,0,0)})\) is a De Morgan algebra;

  2. (2)

    \(({\text {SVNS}}(X), \cup _{2}, \cap _{2}, ^{c}, \overline{(0,0,1)}, \overline{(1,1,0)})\) is a De Morgan algebra;

    where the complement \(A^{c}\) of a single-valued neutrosophic set A is denoted by \(A^{c}=\{(x, T_{A^{c}}(x), I_{A^{c}}(x), F_{A^{c}}(x))|x\in X\}\) and defined by \(T_{A^{c}}(x)=F_{A}(x)\), \(I_{A^{c}}(x)=1-I_{A}(x)\), \(F_{A^{c}}(x)=T_{A}(x)\).

Zhang et al. (2018a) made a theoretical study of the inclusion relations \(\subseteq _{1}\) and \(\subseteq _{2}\) and pointed out some of their shortcomings. It is noted that, with respect to the type-1 and type-2 inclusion relations, the three membership functions of an SVNS are actually divided into two groups, and the order relation is then determined by a method similar to that for intuitionistic fuzzy sets. In other words, the two inclusion relations do not fully exploit the three membership functions. Accordingly, a new kind of inclusion relation is proposed and its basic properties are examined.

Let \(D^{*}=\{(x_{1},x_{2},x_{3})|x_{1},x_{2},x_{3}\in [0,1]\}\). An element of \(D^{*}\) is called a single-valued neutrosophic value (SVNV, or single-valued neutrosophic number). Actually, for any single-valued neutrosophic set A, \((T_{A}(x), I_{A}(x), F_{A}(x))\in D^{*}\). In order to discuss the inclusion relation between single-valued neutrosophic sets, Zhang et al. (2018a) proposed an order relation \(\le \) on single-valued neutrosophic values as follows: for any \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\),

$$\begin{aligned} x\le y\Leftrightarrow&((x_{1}<y_{1})\wedge (x_{3}\ge y_{3}))\vee ((x_{1}=y_{1})\wedge (x_{3}> y_{3}))\\&\vee ((x_{1}=y_{1})\wedge (x_{3}=y_{3})\wedge (x_{2}\le y_{2})) \end{aligned}$$
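The comparison above reads lexicographically: the truth and falsity degrees decide first, and the indeterminacy degree only breaks ties. A minimal sketch (the name `leq` is ours) is:

```python
def leq(x, y):
    """Type-3 order ≤ on D* (Zhang et al. 2018a): T and F decide; I breaks ties."""
    x1, x2, x3 = x
    y1, y2, y3 = y
    return ((x1 < y1 and x3 >= y3)
            or (x1 == y1 and x3 > y3)
            or (x1 == y1 and x3 == y3 and x2 <= y2))

print(leq((0.3, 0.7, 0.6), (0.4, 0.2, 0.6)))  # True: 0.3 < 0.4 and 0.6 >= 0.6
print(leq((0.4, 0.2, 0.6), (0.3, 0.7, 0.6)))  # False
```

Note that the indeterminacy components 0.7 and 0.2 play no role in the first comparison, which is exactly the point of the type-3 order.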

Based on the order relation \(\le \) on \(D^{*}\), Zhang et al. (2018a) proposed the following inclusion relation on SVNSs.

Definition 8

(Zhang et al. 2018a) Let X be a finite set, \(A,B\in {\text {SVNS}}(X)\) and \(A=\{(x, A(x))|x\in X\}\), \(B=\{(x, B(x))|x\in X\}\). A is contained in B, denoted by \(A\subseteq B\), if \(A(x)\le B(x)\) for each \(x\in X\).

Theorem 2

(Zhang et al. 2018a) Let X be a space of points (objects), with a generic element in X denoted by x. \(({\text {SVNS}}(X), \cup , \cap , ^{c}, \overline{(0,0,1)}, \overline{(1,1,0)})\) is a generalized De Morgan algebra, where \(A\cup B\) and \(A\cap B\) are the least upper bound and greatest lower bound of A and B with respect to \(\subseteq \) given by: for any \(x\in X\),

$$\begin{aligned} (A\cup B)(x)= & {} \left\{ \begin{array}{ll}A(x), &{} \text{ if }\quad B(x)\le A(x); \\ B(x), &{} \text{ if }\quad A(x)\le B(x);\\ (T_{A}(x)\vee T_{B}(x), 0, F_{A}(x)\wedge F_{B}(x)),&{}\text{ otherwise }\quad \end{array} \right. \\ (A\cap B)(x)= & {} \left\{ \begin{array}{ll}B(x), &{} \text{ if }\quad B(x)\le A(x); \\ A(x), &{} \text{ if }\quad A(x)\le B(x);\\ (T_{A}(x)\wedge T_{B}(x), 1, F_{A}(x)\vee F_{B}(x)),&{}\text{ otherwise }\quad \end{array} \right. \end{aligned}$$
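The case analysis in Theorem 2 can be sketched pointwise as follows (our hypothetical `join`/`meet` naming; the third branch handles values that are incomparable under \(\le \)):

```python
def leq(x, y):
    """Type-3 order ≤ on D*: T and F decide; I breaks ties."""
    return ((x[0] < y[0] and x[2] >= y[2])
            or (x[0] == y[0] and x[2] > y[2])
            or (x[0] == y[0] and x[2] == y[2] and x[1] <= y[1]))

def join(a, b):
    """Pointwise least upper bound (A ∪ B)(x) from Theorem 2."""
    if leq(b, a):
        return a
    if leq(a, b):
        return b
    return (max(a[0], b[0]), 0.0, min(a[2], b[2]))  # incomparable case

def meet(a, b):
    """Pointwise greatest lower bound (A ∩ B)(x) from Theorem 2."""
    if leq(b, a):
        return b
    if leq(a, b):
        return a
    return (min(a[0], b[0]), 1.0, max(a[2], b[2]))  # incomparable case

a, b = (0.6, 0.3, 0.2), (0.4, 0.8, 0.1)  # incomparable under ≤
print(join(a, b))  # (0.6, 0.0, 0.1)
print(meet(a, b))  # (0.4, 1.0, 0.2)
```

In the incomparable case the indeterminacy component is forced to 0 (for the join) or 1 (for the meet), which is what makes the resulting triple an upper (lower) bound of both arguments under \(\le \).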

3 Similarity and entropy measures of single-valued neutrosophic values

Information measures are very useful tools for coping with uncertainty and vagueness. In general, there are three important information measures in uncertain information processing: similarity, entropy and cross-entropy, and they are closely related. Usually, the similarity between two single-valued neutrosophic sets is constructed from the similarity between single-valued neutrosophic values by using some aggregation operators. Wu et al. (2018) proposed an axiomatic definition of similarity measure for single-valued neutrosophic values.

Definition 9

(Wu et al. 2018) A function \(S: D^{*}\times D^{*}\rightarrow [0,1]\) is called a similarity measure for single-valued neutrosophic values if it satisfies the following properties:

  1. (1)

    \(S(x,y)=0\) if and only if \(x_{t}-y_{t}=1\) or \(x_{t}-y_{t}=-1\) (\(t=1,2,3\));

  2. (2)

    \(S(x,y)=1\) if and only if \(x=y\);

  3. (3)

    \(S(x,y)=S(y,x)\);

  4. (4)

    \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\), if \(x_{t}\le y_{t}\le z_{t}\) or \(x_{t}\ge y_{t}\ge z_{t}\) (\(t=1,2,3\));

    where \(x=(x_{1},x_{2},x_{3}), y=(y_{1},y_{2},y_{3}), z=(z_{1},z_{2},z_{3})\in D^{*}\).

There are some formulas for computing the similarity of single-valued neutrosophic values. For example, for \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\):

\(S_{1}(x,y)=1-\frac{1}{3}(|x_{1}-y_{1}|+|x_{2}-y_{2}|+|x_{3}-y_{3}|)\) (Majumdar and Samanta 2014).

\(S_{2}(x,y)=\frac{1}{3(\sqrt{2}-1)}\sum _{t=1}^{3}(\sqrt{2}\cos \frac{x_{t}-y_{t}}{4}\pi -1)\) (Wu et al. 2018).

\(S_{3}(x,y)=\cot (\frac{\pi }{4}+\frac{\pi }{12}(|x_{1}-y_{1}|+|x_{2}-y_{2}|+|x_{3}-y_{3}|))\) (Ye 2017b).

The similarity measure is closely related to the inclusion relation (see Definition 18 in the next section). The above-mentioned similarity measures were proposed on the basis of the inclusion relations \(\subseteq _{1}\) or \(\subseteq _{2}\) and have been applied successfully to some decision-making problems. However, these similarity measures are not suitable for the inclusion relation \(\subseteq \) defined in Definition 8 or the order relation \(\le \). For example, let \(x=(0.3,0.7,0.6)\), \(y=(0.4,0.2,0.6)\), \(z=(0.6,0.7,0.6)\). It follows that \(x\le y\le z\). By routine computation, we have \(S_{1}(x,y)=0.8\), \(S_{1}(y,z)=0.77\), \(S_{1}(x,z)=0.9\). Consequently, \(S_{1}(x,y)<S_{1}(x,z)\) and \(S_{1}(y,z)<S_{1}(x,z)\), contrary to the monotonicity one would expect with respect to \(\le \). Furthermore,

$$\begin{aligned} S_{2}(x,y)= & {} \frac{1}{3(\sqrt{2}-1)}(\sqrt{2}\cos \frac{\pi }{40}+\sqrt{2}\cos \frac{\pi }{8}+\sqrt{2}-3),\\ S_{2}(y,z)= & {} \frac{1}{3(\sqrt{2}-1)}(\sqrt{2}\cos \frac{\pi }{20}+\sqrt{2}\cos \frac{\pi }{8}+\sqrt{2}-3),\\ S_{2}(x,z)= & {} \frac{1}{3(\sqrt{2}-1)}(\sqrt{2}\cos \frac{3\pi }{40}+2\sqrt{2}-3). \end{aligned}$$

It is trivial that \(S_{2}(y,z)<S_{2}(x,y)\). Furthermore,

$$\begin{aligned} S_{2}(x,y)= & {} \frac{1}{3(\sqrt{2}-1)}(2\sqrt{2}\cos \frac{6\pi }{80}\cos \frac{4\pi }{80}+\sqrt{2}-3),\\ S_{2}(x,z)= & {} \frac{1}{3(\sqrt{2}-1)}(2\sqrt{2}\cos ^{2}\frac{3\pi }{80}+\sqrt{2}-3), \end{aligned}$$

and consequently, \(S_{2}(y,z)<S_{2}(x,y)<S_{2}(x,z)\).

As for \(S_{3}\), we have \(S_{3}(x,y)=\cot \frac{3}{10}\pi \), \(S_{3}(y,z)=\cot \frac{37}{120}\pi \), \(S_{3}(x,z)=\cot \frac{11}{40}\pi \) and consequently, \(S_{3}(y,z)<S_{3}(x,y)<S_{3}(x,z)\).
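These counterexample computations can be checked numerically. The following sketch (our naming) implements \(S_{1}\), \(S_{2}\) and \(S_{3}\) and confirms that each gives \(S(x,z)>S(x,y)\) although \(x\le y\le z\), so the monotonicity expected under \(\le \) fails:

```python
import math

def S1(x, y):  # Hamming-distance similarity (Majumdar and Samanta 2014)
    return 1 - sum(abs(a - b) for a, b in zip(x, y)) / 3

def S2(x, y):  # cosine similarity (Wu et al. 2018)
    r2 = math.sqrt(2)
    return sum(r2 * math.cos((a - b) / 4 * math.pi) - 1
               for a, b in zip(x, y)) / (3 * (r2 - 1))

def S3(x, y):  # cotangent similarity (Ye 2017b)
    d = sum(abs(a - b) for a, b in zip(x, y))
    return 1 / math.tan(math.pi / 4 + math.pi / 12 * d)

x, y, z = (0.3, 0.7, 0.6), (0.4, 0.2, 0.6), (0.6, 0.7, 0.6)
# x ≤ y ≤ z under the type-3 order, yet every measure ranks the
# extreme pair (x, z) as the most similar:
for S in (S1, S2, S3):
    print(S.__name__, S(x, z) > S(x, y) > S(y, z))  # all True
```

The underlying reason is that x and z differ only in the truth degree, while x and y (and y and z) also differ in the indeterminacy degree, which these measures weight equally.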

The inclusion relations \(\subseteq _{1}\) and \(\subseteq _{2}\) of single-valued neutrosophic sets are based on the following order relations \(\le _{1}\) and \(\le _{2}\) on \(D^{*}\) respectively: for any \(x=(x_{1},x_{2},x_{3}), y=(y_{1},y_{2},y_{3})\in D^{*}\),

$$\begin{aligned} x\le _{1} y\Leftrightarrow & {} (x_{1}\le y_{1})\wedge (x_{2}\ge y_{2})\wedge (x_{3}\ge y_{3}),\\ x\le _{2} y\Leftrightarrow & {} (x_{1}\le y_{1})\wedge (x_{2}\le y_{2})\wedge (x_{3}\ge y_{3}). \end{aligned}$$

Actually, for two single-valued neutrosophic sets A and B in X, we have \(A\subseteq _{1} B\) if and only if \(A(x)\le _{1} B(x)\) for each \(x\in X\), and \(A\subseteq _{2} B\) if and only if \(A(x)\le _{2} B(x)\) for each \(x\in X\).

The order relation \(\le \) on single-valued neutrosophic values proposed by Zhang et al. (2018a) is essentially different from \(\le _{1}\) and \(\le _{2}\). The order relations \(\le _{1}\) and \(\le _{2}\) are based on the assumption that, for a single-valued neutrosophic value, the truth-membership degree, the indeterminacy-membership degree and the falsity-membership degree are independent and of the same importance. But sometimes this is not the case. For example, in voting, the votes in favour or against may be more important than the abstentions; that is to say, the three degrees are not of the same importance. The order relation \(\le \) takes this case into consideration. If two single-valued neutrosophic values \(x=(x_{1},x_{2},x_{3})\) and \(y=(y_{1},y_{2},y_{3})\) can be distinguished by their truth-membership and falsity-membership degrees, that is, \((x_{1}<y_{1})\wedge (x_{3}\ge y_{3})\) or \((x_{1}=y_{1})\wedge (x_{3}> y_{3})\), then they are ordered using just these two degrees; in this case, the indeterminacy-membership degree is not taken into consideration. If x and y cannot be distinguished by their truth-membership and falsity-membership degrees, that is, \(x_{1}=y_{1}\) and \(x_{3}=y_{3}\), then they are ordered by their indeterminacy-membership degrees. Thus, the order relation \(\le \) is based on the assumption that the truth-membership degree (or falsity-membership degree) is more important than the indeterminacy-membership degree.

Based on the above analysis, we propose the following definition of similarity measure in accordance with the order relation \(\le \).

Definition 10

A function \(S: D^{*}\times D^{*}\rightarrow [0,1]\) is called a similarity measure for single-valued neutrosophic values if it satisfies the following properties:

  1. (S1)

    \(S(x,y)=0\), if and only if \(x_{t}-y_{t}=1\) or \(x_{t}-y_{t}=-1\), \(t=1,3\);

  2. (S2)

    \(S(x,y)=1\), if and only if \(x=y\);

  3. (S3)

    \(S(x,y)=S(y,x)\);

  4. (S4)

    \(S(x,z)\le S(x,y)\), \(S(x,z)\le S(y,z)\) if \(x\le y\le z\).

Let \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\). We know that, with respect to the order relation \(\le \), the truth-membership and falsity-membership degrees are more important than the indeterminacy-membership degree. That is to say, if \(x_{1}=y_{1}\) and \(x_{3}=y_{3}\), then x and y should have a relatively large similarity degree. For example, one may think that (0.3, 0.9, 0.4) and (0.3, 0.1, 0.4) are more similar than (0.3, 0.9, 0.4) and (0.5, 0.8, 0.4). We consider the following information measure formula \(S(x,y)\) for x and y:

$$\begin{aligned} S(x,y)=\left\{ \begin{array}{ll}1-\frac{|x_{2}-y_{2}|}{2}, &{} \text{ if }\quad (x_{1}=y_{1})\wedge (x_{3}=y_{3}), \\ \frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4},&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(3)
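Eq. (3) is straightforward to compute; the sketch below (the helper name `S` is ours) also reproduces the intuition discussed above, where agreement in T and F dominates:

```python
def S(x, y):
    """Similarity of Eq. (3): pairs agreeing in T and F score in [0.5, 1];
    all other pairs score in [0, 0.5)."""
    if x[0] == y[0] and x[2] == y[2]:
        return 1 - abs(x[1] - y[1]) / 2
    return (2 - abs(x[0] - y[0]) - abs(x[2] - y[2])) / 4

# The pair differing only in I is judged more similar than the pair
# differing in T, matching the example above:
print(S((0.3, 0.9, 0.4), (0.3, 0.1, 0.4)))  # 0.6
print(S((0.3, 0.9, 0.4), (0.5, 0.8, 0.4)))  # 0.45
```

The two branches never overlap in value (the first is at least 0.5, the second strictly below 0.5), which is what drives Cases 3, 6, 7 and 8 in the proof of Theorem 3.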

Theorem 3

\(S(x,y)\), defined by Eq. (3), is a similarity measure between x and y.

Proof

Let \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\). We note that if \(x_{1}=y_{1}\) and \(x_{3}=y_{3}\), by \(S(x,y)=1-\frac{|x_{2}-y_{2}|}{2}\) it follows that \(0.5\le S(x,y)\le 1\); if \(x_{1}\ne y_{1}\) or \(x_{3}\ne y_{3}\), by \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\) we conclude that \(0\le S(x,y)< 0.5\).

  1. (S1)

    If \(S(x,y)=0\), then \(|x_{1}-y_{1}|=1\) and \(|x_{3}-y_{3}|=1\), it follows that \(x_{t}-y_{t}=1\) or \(x_{t}-y_{t}=-1\) (\(t=1,3\)). Conversely, if \(x_{t}-y_{t}=1\) or \(x_{t}-y_{t}=-1\) (\(t=1,3\)), then \(S(x,y)=0\) is trivial.

  2. (S2)

    \(S(x,y)=1\) if and only if \(x_{1}=y_{1}\), \(x_{3}=y_{3}\) and \(x_{2}=y_{2}\). It follows that \(S(x,y)=1\) if and only if \(x=y\).

  3. (S3)

    \(S(x,y)=S(y,x)\) is trivial.

  4. (S4)

    Assume that \(x=(x_{1},x_{2},x_{3})\), \(y=(y_{1},y_{2},y_{3})\), \(z=(z_{1},z_{2},z_{3})\) and \(x\le y\le z\).

Case 1: If \((x_{1}<y_{1})\wedge (x_{3}\ge y_{3})\) and \((y_{1}<z_{1})\wedge (y_{3}\ge z_{3})\), then \((x_{1}<z_{1})\wedge (x_{3}\ge z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). By \(x_{1}<y_{1}<z_{1}\) and \(x_{3}\ge y_{3}\ge z_{3}\), it follows that \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\).

Case 2: If \((x_{1}<y_{1})\wedge (x_{3}\ge y_{3})\) and \((y_{1}=z_{1})\wedge (y_{3}> z_{3})\), then \((x_{1}<z_{1})\wedge (x_{3}> z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). By \(x_{1}<y_{1}=z_{1}\) and \(x_{3}\ge y_{3}> z_{3}\), it follows that \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\).

Case 3: If \((x_{1}<y_{1})\wedge (x_{3}\ge y_{3})\) and \((y_{1}=z_{1})\wedge (y_{3}=z_{3})\wedge (y_{2}\le z_{2})\), then \((x_{1}<z_{1})\wedge (x_{3}\ge z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=1-\frac{|y_{2}-z_{2}|}{2}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). It follows that \(S(x,z)=S(x,y)\) and \(S(x,z)<0.5\le S(y,z)\).

Case 4: If \((x_{1}=y_{1})\wedge (x_{3}> y_{3})\) and \((y_{1}<z_{1})\wedge (y_{3}\ge z_{3})\), then \((x_{1}<z_{1})\wedge (x_{3}> z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). By \(x_{1}=y_{1}<z_{1}\) and \(x_{3}> y_{3}\ge z_{3}\), it follows that \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\).

Case 5: If \((x_{1}=y_{1})\wedge (x_{3}> y_{3})\) and \((y_{1}=z_{1})\wedge (y_{3}> z_{3})\), then \((x_{1}=z_{1})\wedge (x_{3}> z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). It follows that \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\).

Case 6: If \((x_{1}=y_{1})\wedge (x_{3}> y_{3})\) and \((y_{1}=z_{1})\wedge (y_{3}=z_{3})\wedge (y_{2}\le z_{2})\), then \((x_{1}=z_{1})\wedge (x_{3}> z_{3})\). Consequently, \(S(x,y)=\frac{2-|x_{1}-y_{1}|-|x_{3}-y_{3}|}{4}\), \(S(y,z)=1-\frac{|y_{2}-z_{2}|}{2}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). It follows that \(S(x,z)=S(x,y)\) and \(S(x,z)<0.5\le S(y,z)\).

Case 7: If \((x_{1}=y_{1})\wedge (x_{3}=y_{3})\wedge (x_{2}\le y_{2})\) and \((y_{1}<z_{1})\wedge (y_{3}\ge z_{3})\), then \((x_{1}<z_{1})\wedge (x_{3}\ge z_{3})\). Consequently, \(S(x,y)=1-\frac{|x_{2}-y_{2}|}{2}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). By \(x_{1}=y_{1}<z_{1}\) and \(x_{3}=y_{3}\ge z_{3}\), it follows that \(S(x,z)<0.5 \le S(x,y)\) and \(S(x,z)=S(y,z)\).

Case 8: If \((x_{1}=y_{1})\wedge (x_{3}=y_{3})\wedge (x_{2}\le y_{2})\) and \((y_{1}=z_{1})\wedge (y_{3}> z_{3})\), then \((x_{1}=z_{1})\wedge (x_{3}> z_{3})\). Consequently, \(S(x,y)=1-\frac{|x_{2}-y_{2}|}{2}\), \(S(y,z)=\frac{2-|y_{1}-z_{1}|-|y_{3}-z_{3}|}{4}\) and \(S(x,z)=\frac{2-|x_{1}-z_{1}|-|x_{3}-z_{3}|}{4}\). By \(x_{1}=y_{1}=z_{1}\) and \(x_{3}=y_{3}> z_{3}\), it follows that \(S(x,z)<0.5 \le S(x,y)\) and \(S(x,z)=S(y,z)\).

Case 9: If \((x_{1}=y_{1})\wedge (x_{3}=y_{3})\wedge (x_{2}\le y_{2})\) and \((y_{1}=z_{1})\wedge (y_{3}=z_{3})\wedge (y_{2}\le z_{2})\), then \((x_{1}=z_{1})\wedge (x_{3}=z_{3})\wedge (x_{2}\le z_{2})\). Consequently, \(S(x,y)=1-\frac{|x_{2}-y_{2}|}{2}\), \(S(y,z)=1-\frac{|y_{2}-z_{2}|}{2}\) and \(S(x,z)=1-\frac{|x_{2}-z_{2}|}{2}\). It follows that \(S(x,z)\le S(x,y)\) and \(S(x,z)\le S(y,z)\). \(\square \)

Theorem 3 presents a method to construct the similarity between two SVNVs with respect to the order relation \(\le \) by using the Hamming distance. The following theorem shows that the similarity between two SVNVs can also be constructed by using the cosine and cotangent functions. Let \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\). \(S^{\prime }(x,y)\) and \(S^{\prime \prime }(x,y)\) are defined as:

$$\begin{aligned} S^{\prime }(x,y)= & {} \left\{ \begin{array}{ll}\frac{1}{2}(1+\cos \frac{(x_{2}-y_{2})}{2}\pi ), &{} \text{ if }\quad (x_{1}=y_{1})\wedge (x_{3}=y_{3}); \\ \frac{1}{4}(\cos \frac{(x_{1}-y_{1})}{2}\pi +\cos \frac{(x_{3}-y_{3})}{2}\pi ),&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(4)
$$\begin{aligned} S^{\prime \prime }(x,y)= & {} \left\{ \begin{array}{ll}\frac{1}{2}(1{+}\cot (\frac{\pi }{4}{+}|x_{2}{-}y_{2}|\frac{\pi }{4})), &{} \text{ if }\, (x_{1}=y_{1})\wedge (x_{3}=y_{3}); \\ \frac{1}{4}(\cot (\frac{\pi }{4}+|x_{1}-y_{1}|\frac{\pi }{4})+\cot (\frac{\pi }{4}+|x_{3}-y_{3}|\frac{\pi }{4})),&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(5)
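Eqs. (4) and (5) can be sketched directly (our naming; Python's standard library has no cotangent, so we use \(1/\tan \)):

```python
import math

def S_prime(x, y):
    """Cosine-based similarity, Eq. (4)."""
    if x[0] == y[0] and x[2] == y[2]:
        return (1 + math.cos((x[1] - y[1]) / 2 * math.pi)) / 2
    return (math.cos((x[0] - y[0]) / 2 * math.pi)
            + math.cos((x[2] - y[2]) / 2 * math.pi)) / 4

def S_double_prime(x, y):
    """Cotangent-based similarity, Eq. (5)."""
    def cot(t):
        return 1 / math.tan(t)
    if x[0] == y[0] and x[2] == y[2]:
        return (1 + cot(math.pi / 4 + abs(x[1] - y[1]) * math.pi / 4)) / 2
    return (cot(math.pi / 4 + abs(x[0] - y[0]) * math.pi / 4)
            + cot(math.pi / 4 + abs(x[2] - y[2]) * math.pi / 4)) / 4

x, y = (0.3, 0.9, 0.4), (0.3, 0.1, 0.4)  # equal T and F, so the I-branch applies
print(round(S_prime(x, y), 3))         # ≈ 0.655
print(round(S_double_prime(x, y), 3))  # ≈ 0.579
```

As with Eq. (3), the first branch of each measure stays at or above the second branch's range, preserving the priority of T and F over I.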

Theorem 4

\(S^{\prime }(x,y)\) and \(S^{\prime \prime }(x,y)\) are similarity measures between x and y.

The proof of this theorem is similar to that of Theorem 3. Besides similarity, entropy is also an important uncertainty measure in uncertain information analysis. In accordance with the order relation \(\le \), we propose the notion of an entropy measure for single-valued neutrosophic values as follows:

Definition 11

A function \(E: D^{*}\rightarrow [0,1]\) is called an entropy measure for single-valued neutrosophic values if it satisfies the following properties:

  1. (E1)

    \(E(x)=0\) if and only if \(x_{t}=1\) or \(x_{t}=0\) (\(t=1,3\));

  2. (E2)

    \(E(x)=1\) if and only if \(x_{t}=0.5\) (\(t=1,2,3\));

  3. (E3)

    \(E(x)=E(x^{c})\);

  4. (E4)

    \(E(x)\le E(y)\) if y is more uncertain than x, that is, \(x_{t}\le y_{t}\) when \(y_{t}\le 0.5\) and \(x_{t}\ge y_{t}\) when \(y_{t}\ge 0.5\) (\(t=1,2,3\)).

Usually, entropy can be computed by using the similarity degree \(S(x,x^{c})\) between a single-valued neutrosophic value x and its complement \(x^{c}\). But here it seems unreasonable to take \(S(x,x^{c})\) as the entropy. We consider the similarity measure \(S(x,y)\) defined by (3). For \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(x^{c}=(x_{3},1-x_{2},x_{1})\) and hence

$$\begin{aligned} S(x,x^{c})=\left\{ \begin{array}{ll}1-\frac{|2x_{2}-1|}{2}, &{} \text{ if }\quad x_{1}=x_{3}; \\ \frac{1}{2}(1-|x_{1}-x_{3}|),&{}\text{ otherwise. }\quad \end{array} \right. \end{aligned}$$

It follows that \(S(x,x^{c})=1\) if and only if \(x_{1}=x_{3}\) and \(x_{2}=0.5\). Thus, for \(x=(0.2,0.5,0.2)\) we have \(S(x,x^{c})=1\). We conclude that E2 does not hold. Furthermore, we note that \(S(x,x^{c})>0\) for any \(x\in D^{*}\).

In order to construct an entropy measure by using a similarity measure, we consider another kind of complement of single-valued neutrosophic values presented by Peng et al. (2014). For any \(x=(x_{1},x_{2},x_{3})\in D^{*}\), let \(x^{c}=(1-x_{1},1-x_{2},1-x_{3})\). It follows that

$$\begin{aligned} E(x)=S(x,x^{c})=\left\{ \begin{array}{ll}1-\frac{|2x_{2}-1|}{2}, &{} \text{ if }\quad x_{1}=x_{3}=0.5; \\ \frac{2-|2x_{1}-1|-|2x_{3}-1|}{4},&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(6)
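A minimal Python sketch of Eq. (6) (the function name is ours):

```python
def entropy(x):
    """Entropy E(x) = S(x, x^c) of a single-valued neutrosophic value
    x = (x1, x2, x3), where x^c = (1 - x1, 1 - x2, 1 - x3), per Eq. (6)."""
    x1, x2, x3 = x
    if x1 == 0.5 and x3 == 0.5:
        return 1 - abs(2 * x2 - 1) / 2
    return (2 - abs(2 * x1 - 1) - abs(2 * x3 - 1)) / 4
```

For instance, the value \(x=(0.2,0.5,0.2)\) discussed above now receives entropy \((2-0.6-0.6)/4=0.2\) rather than 1.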

Theorem 5

E(x), defined by Eq. (6), is an entropy measure for single-valued neutrosophic values.

Proof

  1. (E1):

    \(E(x)=0\) if and only if \(|2x_{1}-1|=1\) and \(|2x_{3}-1|=1\), if and only if \(x_{1}=1\) or \(x_{1}=0\), and \(x_{3}=1\) or \(x_{3}=0\).

  2. (E2):

    \(E(x)=1\) if and only if \(x_{1}=x_{3}=0.5\) and \(x_{2}=0.5\).

  3. (E3):

    \(E(x)=E(x^{c})\) is trivial.

  4. (E4)

    Let \(x=(x_{1},x_{2},x_{3})\in D^{*}\), \(y=(y_{1},y_{2},y_{3})\in D^{*}\) and y is more uncertain than x, that is \(x_{t}\le y_{t}\) if \(y_{t}\le 0.5\) and \(x_{t}\ge y_{t}\) if \(y_{t}\ge 0.5\) (\(t=1,2,3\)).

Case 1: \(x_{1}=x_{3}=0.5\). It follows that \(y_{1}=y_{3}=0.5\). In fact, if \(y_{1}>0.5\), then \(x_{1}\ge y_{1}>0.5\), and if \(y_{1}<0.5\), then \(x_{1}\le y_{1}<0.5\); either case contradicts \(x_{1}=0.5\). Hence \(y_{1}=0.5\) and, similarly, \(y_{3}=0.5\). It follows that \(E(x)=S(x,x^{c})=1-\frac{|2x_{2}-1|}{2}\) and \(E(y)=S(y,y^{c})=1-\frac{|2y_{2}-1|}{2}\). Since \(x_{2}\le y_{2}\) if \(y_{2}\le 0.5\) and \(x_{2}\ge y_{2}\) if \(y_{2}\ge 0.5\), we conclude that \(|2x_{2}-1|\ge |2y_{2}-1|\) and hence \(E(x)\le E(y)\).

Case 2: \(x_{1}\ne 0.5\) or \(x_{3}\ne 0.5\), \(y_{1}=y_{3}=0.5\). It follows that \(E(x)=S(x,x^{c})=\frac{2-|2x_{1}-1|-|2x_{3}-1|}{4}<0.5\le S(y,y^{c})=E(y)\).

Case 3: \(x_{1}\ne 0.5\) or \(x_{3}\ne 0.5\), \(y_{1}\ne 0.5\) or \(y_{3}\ne 0.5\). It follows that \(E(x)=S(x,x^{c})=\frac{2-|2x_{1}-1|-|2x_{3}-1|}{4}\), \(E(y)=S(y,y^{c})=\frac{2-|2y_{1}-1|-|2y_{3}-1|}{4}\). By \(x_{t}\le y_{t}\) if \(y_{t}\le 0.5\) and \(x_{t}\ge y_{t}\) if \(y_{t}\ge 0.5\) (\(t=1,3\)) we have \(|2y_{1}-1|\le |2x_{1}-1|\) and \(|2y_{3}-1|\le |2x_{3}-1|\). Consequently, \(E(x)\le E(y)\). \(\square \)

Similar to Eq. (6), by using (4) and (5), for any \(x=(x_{1},x_{2},x_{3})\in D^{*}\) we have

$$\begin{aligned} S^{\prime }(x,x^{c})= & {} \left\{ \begin{array}{ll}\frac{1}{2}(1+\cos \frac{(2x_{2}-1)}{2}\pi ), &{} \text{ if }\quad x_{1}=x_{3}=0.5; \\ \frac{1}{4}(\cos \frac{(2x_{1}-1)}{2}\pi +\cos \frac{(2x_{3}-1)}{2}\pi ),&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(7)
$$\begin{aligned} S^{\prime \prime }(x,x^{c})= & {} \left\{ \begin{array}{ll}\frac{1}{2}(1+\cot (\frac{\pi }{4}+|2x_{2}-1|\frac{\pi }{4})), &{} \text{ if }\quad x_{1}=x_{3}=0.5; \\ \frac{1}{4}(\cot (\frac{\pi }{4}+|2x_{1}-1|\frac{\pi }{4})+\cot (\frac{\pi }{4}+|2x_{3}-1|\frac{\pi }{4})),&{}\text{ otherwise. }\quad \end{array} \right. \end{aligned}$$
(8)

Theorem 6

\(E^{\prime }(x)\) and \(E^{\prime \prime }(x)\) are entropy measures for single-valued neutrosophic values, where \(E^{\prime }(x)=S^{\prime }(x,x^{c})\), \(E^{\prime \prime }(x)=S^{\prime \prime }(x,x^{c})\).
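In the same spirit, Eqs. (7) and (8) can be sketched as follows (Python; function names are ours):

```python
import math

def entropy_cos(x):
    """E'(x) = S'(x, x^c), per Eq. (7)."""
    x1, x2, x3 = x
    if x1 == 0.5 and x3 == 0.5:
        return 0.5 * (1 + math.cos((2 * x2 - 1) * math.pi / 2))
    return 0.25 * (math.cos((2 * x1 - 1) * math.pi / 2)
                   + math.cos((2 * x3 - 1) * math.pi / 2))

def entropy_cot(x):
    """E''(x) = S''(x, x^c), per Eq. (8)."""
    x1, x2, x3 = x
    cot = lambda t: 1 / math.tan(t)
    if x1 == 0.5 and x3 == 0.5:
        return 0.5 * (1 + cot(math.pi / 4 + abs(2 * x2 - 1) * math.pi / 4))
    return 0.25 * (cot(math.pi / 4 + abs(2 * x1 - 1) * math.pi / 4)
                   + cot(math.pi / 4 + abs(2 * x3 - 1) * math.pi / 4))
```

Both measures reach 1 at \((0.5,0.5,0.5)\) and vanish when the truth and falsity degrees are crisp (0 or 1), matching (E1) and (E2).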

4 Similarity and entropy measures of single-valued neutrosophic sets

In this section, we extend the notions of similarity measure and entropy measure of single-valued neutrosophic values to single-valued neutrosophic sets.

Definition 12

Let X be a finite set of objects. A function \(S: {\text {SVNS}}(X)\times {\text {SVNS}}(X)\rightarrow [0,1]\) is called a similarity measure for single-valued neutrosophic sets if it satisfies the following properties:

  1. (S1)

    \(S(A,B)=0\), if and only if \(|T_{A}(x)-T_{B}(x)|=1\) and \(|F_{A}(x)-F_{B}(x)|=1\) for any \(x\in X\);

  2. (S2)

    \(S(A,B)=1\), if and only if \(A=B\);

  3. (S3)

    \(S(A,B)=S(B,A)\);

  4. (S4)

    \(S(A,C)\le S(A,B)\), \(S(A,C)\le S(B,C)\) if \(A\subseteq B\subseteq C\).

Definition 13

Let X be a finite set of objects. A function \(E: {\text {SVNS}}(X)\rightarrow [0,1]\) is called an entropy measure for single-valued neutrosophic sets if it satisfies the following properties:

  1. (E1)

    \(E(A)=0\) if and only if \(T_{A}(x)=1\) or \(T_{A}(x)=0\), and \(F_{A}(x)=1\) or \(F_{A}(x)=0\) for any \(x\in X\);

  2. (E2)

    \(E(A)=1\) if and only if \(A(x)=(0.5,0.5,0.5)\) for any \(x\in X\);

  3. (E3)

    \(E(A)=E(A^{c})\);

  4. (E4)

\(E(A)\le E(B)\) if B is more uncertain than A, that is, \(|T_{A}(x)-0.5|\ge |T_{B}(x)-0.5|\), \(|I_{A}(x)-0.5|\ge |I_{B}(x)-0.5|\) and \(|F_{A}(x)-0.5|\ge |F_{B}(x)-0.5|\) for any \(x\in X\).

Theorem 7

Let \(X=\{x_{1},x_{2},\ldots ,x_{n}\}\) and assume that \(s: D^{*}\times D^{*}\rightarrow [0,1]\) is a similarity measure for single-valued neutrosophic values. Define \(S: {\text {SVNS}}(X)\times {\text {SVNS}}(X)\rightarrow [0,1]\) by: for any \(A,B\in {\text {SVNS}}(X)\),

$$\begin{aligned} S(A,B)=\frac{1}{n}\sum _{i=1}^{n}s(A(x_{i}), B(x_{i})) \end{aligned}$$
(9)

Then, \(S(A,B)\) is a similarity measure between A and B.

Proof

Let \(A=\{(x_{i}, A(x_{i}))|1\le i\le n\}\), \(B=\{(x_{i}, B(x_{i}))|1\le i\le n\}\), \(A(x_{i})=(T_{A}(x_{i}), I_{A}(x_{i}), F_{A}(x_{i}))\) and \(B(x_{i})=(T_{B}(x_{i}), I_{B}(x_{i}), F_{B}(x_{i}))\).

(S1) Assume that \(S(A,B)=0\). It follows that \(s(A(x_{i}), B(x_{i}))=0\) for each \(x_{i}\) (\(1\le i\le n\)). We note that s is a similarity measure for SVNVs and hence \(|T_{A}(x_{i})-T_{B}(x_{i})|=1\) and \(|F_{A}(x_{i})-F_{B}(x_{i})|=1\).

Conversely, assume that \(|T_{A}(x_{i})-T_{B}(x_{i})|=1\) and \(|F_{A}(x_{i})-F_{B}(x_{i})|=1\) for each \(x_{i}(1\le i\le n)\). It follows that \(s(A(x_{i}), B(x_{i}))=0\) and consequently \(S(A,B)=0\).

(S2) Assume that \(S(A,B)=1\). By \(0\le s(A(x_{i}), B(x_{i}))\le 1\), it follows that \(s(A(x_{i}), B(x_{i}))=1\) and hence \(A(x_{i})=B(x_{i})\) for each \(x_{i}\) (\(1\le i\le n\)). Consequently, \(A=B\) as required.

Conversely, assume that \(A=B\). It follows that \(A(x_{i})=B(x_{i})\) for each \(x_{i}\) and hence \(s(A(x_{i}), B(x_{i}))=1\). Consequently, \(S(A,B)=1\).

(S3) is trivial.

(S4) Assume that \(A\subseteq B\subseteq C\). It follows that \(A(x_{i})\le B(x_{i})\le C(x_{i})\) for each \(x_{i}\). We conclude that \(s(A(x_{i}), C(x_{i}))\le s(A(x_{i}), B(x_{i}))\) and hence \(S(A,C)\le S(A,B)\). Similarly, we have \(S(A,C)\le S(B,C)\). \(\square \)
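The construction of Theorem 7 can be sketched in Python; here the per-value similarity s is the Hamming-distance-based measure of Eq. (3), reconstructed from the formulas used elsewhere in the text, and SVNSs over a common finite universe are represented as aligned lists of value triples (all names are ours):

```python
def sim_value(x, y):
    """Similarity s(x, y) between two SVNVs; Eq. (3), reconstructed."""
    if x[0] == y[0] and x[2] == y[2]:
        return 1 - abs(x[1] - y[1]) / 2
    return (2 - abs(x[0] - y[0]) - abs(x[2] - y[2])) / 4

def sim_set(A, B, s=sim_value):
    """Similarity S(A, B) between two SVNSs, per Eq. (9): the arithmetic
    mean of the value-level similarities over the universe."""
    assert len(A) == len(B)
    return sum(s(a, b) for a, b in zip(A, B)) / len(A)
```

For example, `sim_set([(0.4, 0.2, 0.3), (0.2, 0.2, 0.5)], [(1, 1, 0), (1, 1, 0)])` evaluates to \((0.275+0.175)/2=0.225\).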

This theorem shows that the similarity between two single-valued neutrosophic sets can be constructed by aggregating the similarities between related single-valued neutrosophic values. The following theorem presents a general method to construct the entropy of a single-valued neutrosophic set by aggregating entropies of related single-valued neutrosophic values.

Theorem 8

Let \(X=\{x_{1},x_{2},\ldots ,x_{n}\}\) and assume that \(e: D^{*}\rightarrow [0,1]\) is an entropy measure for single-valued neutrosophic values. Define \(E: {\text {SVNS}}(X)\rightarrow [0,1]\) by: for any \(A\in {\text {SVNS}}(X)\),

$$\begin{aligned} E(A)=\frac{1}{n}\sum _{i=1}^{n}e(A(x_{i})) \end{aligned}$$
(10)

Then, E(A) is an entropy measure for single-valued neutrosophic sets.

Proof

(E1) Assume that \(E(A)=0\). It follows that \(e(A(x_{i}))=0\) for each \(x_{i}(1\le i\le n)\). We note that e is an entropy measure for SVNVs and hence \(T_{A}(x_{i})=1\) or \(T_{A}(x_{i})=0\), and \(F_{A}(x_{i})=1\) or \(F_{A}(x_{i})=0\).

Conversely, assume that \(T_{A}(x_{i})=1\) or \(T_{A}(x_{i})=0\), and \(F_{A}(x_{i})=1\) or \(F_{A}(x_{i})=0\) for each \(x_{i}\). It follows that \(e(A(x_{i}))=0\) and hence \(E(A)=0\) as required.

(E2) Assume that \(E(A)=1\). By \(0\le e(A(x_{i}))\le 1\) it follows that \(e(A(x_{i}))=1\) for each \(x_{i}\). Since e is an entropy measure for SVNVs, we have \(T_{A}(x_{i})=I_{A}(x_{i})=F_{A}(x_{i})=0.5\).

Conversely, assume that \(T_{A}(x_{i})=I_{A}(x_{i})=F_{A}(x_{i})=0.5\) for each \(x_{i}(1\le i\le n)\). It follows that \(e(A(x_{i}))=1\) and hence \(E(A)=1\).

(E3) is trivial.

(E4) Assume that \(A,B\in {\text {SVNS}}(X)\) and B is more uncertain than A. It follows that, for each \(x_{i}\), \(|T_{A}(x_{i})-0.5|\ge |T_{B}(x_{i})-0.5|\), \(|I_{A}(x_{i})-0.5|\ge |I_{B}(x_{i})-0.5|\) and \(|F_{A}(x_{i})-0.5|\ge |F_{B}(x_{i})-0.5|\). We note that e is an entropy measure for SVNVs, so \(e(A(x_{i}))\le e(B(x_{i}))\) for each \(x_{i}\) and hence \(E(A)=\frac{1}{n}\sum _{i=1}^{n}e(A(x_{i}))\le \frac{1}{n}\sum _{i=1}^{n}e(B(x_{i}))=E(B)\). \(\square \)

The formulas (9) and (10) can be extended to a weighted average model. Assume that \(W=(w_{1}, w_{2}, \ldots , w_{n})\) is a weight vector with \(w_{i}\in [0,1]\) and \(\sum _{i=1}^{n}w_{i}=1\). Then, the similarity and entropy measures can be extended to the following (11) and (12), respectively:

$$\begin{aligned}&S(A,B)=\sum _{i=1}^{n}w_{i}\cdot s(A(x_{i}), B(x_{i})) \end{aligned}$$
(11)
$$\begin{aligned}&E(A)=\sum _{i=1}^{n}w_{i}\cdot e(A(x_{i})) \end{aligned}$$
(12)
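The weighted variants (11) and (12) admit the same sketch; the per-value measures are passed in as functions (names are ours):

```python
def weighted_similarity(A, B, w, s):
    """S(A, B) per Eq. (11): weighted average of value-level similarities.
    A and B are lists of SVNVs over the same universe; the weights w sum to 1."""
    return sum(wi * s(a, b) for wi, a, b in zip(w, A, B))

def weighted_entropy(A, w, e):
    """E(A) per Eq. (12): weighted average of value-level entropies."""
    return sum(wi * e(a) for wi, a in zip(w, A))
```

With equal weights \(w_{i}=1/n\), Eqs. (11) and (12) reduce to Eqs. (9) and (10).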

5 Applications in multi-attribute decision making

Neutrosophic set theory has been applied to multi-attribute decision making problems and several related decision making methods have been proposed. Many of these methods are based on similarity measures. In this section, we propose a single-valued neutrosophic set based decision making method by using the new similarity measures presented in this paper. We begin with the neutrosophic set based multi-attribute decision making method presented in (Ye 2014a).

5.1 Ye’s multi-attribute decision making method with analysis

Ye (2014a) presented a multi-attribute decision making method by using single-valued neutrosophic sets. The approach can be described as follows; some modifications of notation and terminology have been made to fit the context of our discussion.

Assume that \(A=\{A_{i}|1\le i\le m\}\) is the set of alternatives and \(C=\{C_{j}|1\le j\le n\}\) is the collection of attributes. For decision making, the degree to which the alternative \(A_{i}\) satisfies the attribute \(C_{j}\) is required; this information can be represented as a single-valued neutrosophic value \(\alpha ^{ij}=(\alpha ^{ij}_{1}, \alpha ^{ij}_{2}, \alpha ^{ij}_{3})\) (Ye 2014a; Wu et al. 2018). When all the performances of the alternatives are provided, the single-valued neutrosophic decision matrix \(D=(\alpha ^{ij})_{m\times n}\) can be constructed. Assume that the weight vector of attributes \(W=(w_{1},w_{2},\ldots ,w_{n})\) is given by domain experts, where \(0\le w_{j}\le 1\) \((1\le j\le n)\) and \(\sum _{j=1}^{n}w_{j}=1\).

In order to obtain the optimal alternative, the description of each alternative \(A_{i}\) (the ith row of the decision matrix D) will be aggregated to a single-valued neutrosophic value \(\alpha ^{i}\) by using the aggregation operator (Ye 2014a):

$$\begin{aligned} \alpha ^{i}=(1-\prod _{j=1}^{n}(1-\alpha ^{ij}_{1})^{w_{j}}, 1-\prod _{j=1}^{n}(1-\alpha ^{ij}_{2})^{w_{j}}, 1-\prod _{j=1}^{n}(1-\alpha ^{ij}_{3})^{w_{j}})\nonumber \\ \end{aligned}$$
(13)

To rank alternatives in the decision-making process, \(\alpha ^{*}=(1,0,0)\) is taken as the ideal alternative, and then, based on the cosine similarity measure, the similarity degree \(S(\alpha ^{i}, \alpha ^{*})\) between \(\alpha ^{i}\) and \(\alpha ^{*}\) is computed:

$$\begin{aligned} S(\alpha ^{i}, \alpha ^{*})=\frac{\alpha ^{i}_{1}}{\sqrt{(\alpha ^{i}_{1})^{2}+(\alpha ^{i}_{2})^{2}+(\alpha ^{i}_{3})^{2}}} \end{aligned}$$
(14)

where \(\alpha ^{i}=(\alpha ^{i}_{1}, \alpha ^{i}_{2}, \alpha ^{i}_{3})\). The bigger the measure value \(S(\alpha ^{i}, \alpha ^{*})\), the better the alternative \(A_{i}\), since \(A_{i}\) is then closer to the ideal alternative \(\alpha ^{*}\). Through the cosine similarity measure between each alternative and the ideal alternative, the ranking order of all alternatives can be determined and the best one can be identified as well. For illustration, Ye (2014a) considered the following example.

Example 1

Let us consider the following decision-making problem. There is an investment company, which wants to invest a sum of money in the best option. There is a panel with four possible alternatives to invest the money: (1) \(A_{1}\) is a car company; (2) \(A_{2}\) is a food company; (3) \(A_{3}\) is a computer company; (4) \(A_{4}\) is an arms company. The investment company must take a decision according to the following three criteria: (1) \(C_{1}\) is the risk; (2) \(C_{2}\) is the growth; (3) \(C_{3}\) is the environmental impact. Then, the weight vector of the criteria is given by \(W=(0.35, 0.25, 0.4)\).

The evaluations of an alternative \(A_{i}\) \((i=1,2,3,4)\) with respect to a criterion \(C_{j}\) \((j=1,2,3)\) are obtained from the questionnaire of a domain expert. For example, when we ask the opinion of an expert about the alternative \(A_{1}\) with respect to the criterion \(C_{1}\), he or she may say that the possibility in which the statement is good is 0.4, the possibility in which the statement is poor is 0.3 and the degree in which he or she is not sure is 0.2. Using the neutrosophic notation, this can be expressed as \(\alpha ^{11}=(0.4,0.2,0.3)\). Thus, when the four possible alternatives with respect to the above three criteria are evaluated by the expert, we obtain the following simplified neutrosophic decision matrix D:

$$\begin{aligned} D=\left( \begin{array}{ccc} (0.4,0.2,0.3) &{} (0.4,0.2,0.3) &{} (0.2,0.2,0.5)\\ (0.6,0.1,0.2) &{} (0.6,0.1,0.2) &{} (0.5,0.2,0.2)\\ (0.3,0.2,0.3) &{} (0.5,0.2,0.3) &{} (0.5,0.3,0.2)\\ (0.7,0.0,0.1) &{} (0.6,0.1,0.2) &{} (0.4,0.3,0.2) \end{array} \right) \end{aligned}$$

By using Eq. (13), the weighted arithmetic average values (aggregated single-valued neutrosophic values) \(\alpha ^{i}\) for \(A_{i}\) \((i=1,2,3,4)\) are computed:

$$\begin{aligned} \alpha ^{1}= & {} (0.3268, 0.2000, 0.3881),\quad \alpha ^{2}=(0.5627, 0.1414, 0.2000),\\ \alpha ^{3}= & {} (0.4375, 0.2416, 0.2616),\quad \alpha ^{4}=(0.5746, 0.1555, 0.1663). \end{aligned}$$

Then, the cosine similarity measure is used to compute the similarity between each \(\alpha ^{i}\) and the ideal alternative \(\alpha ^{*}=(1,0,0)\): \(S(\alpha ^{1}, \alpha ^{*})=0.5992\), \(S(\alpha ^{2}, \alpha ^{*})=0.9169\), \(S(\alpha ^{3}, \alpha ^{*})=0.7756\), \(S(\alpha ^{4}, \alpha ^{*})=0.9297\). From these similarity measures, the ranking order of the four alternatives is \(A_{4}\succ A_{2}\succ A_{3}\succ A_{1}\). Therefore, the alternative \(A_{4}\) is the best choice among all the alternatives.
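Ye’s procedure (Eqs. (13) and (14)) can be sketched in Python as follows; the decision matrix entries are reconstructed from the similarity values computed later in the text, and all function names are ours:

```python
# Decision matrix: rows = alternatives A1..A4, columns = criteria C1..C3,
# entries = (T, I, F); weights W as in Example 1.
D = [
    [(0.4, 0.2, 0.3), (0.4, 0.2, 0.3), (0.2, 0.2, 0.5)],
    [(0.6, 0.1, 0.2), (0.6, 0.1, 0.2), (0.5, 0.2, 0.2)],
    [(0.3, 0.2, 0.3), (0.5, 0.2, 0.3), (0.5, 0.3, 0.2)],
    [(0.7, 0.0, 0.1), (0.6, 0.1, 0.2), (0.4, 0.3, 0.2)],
]
W = [0.35, 0.25, 0.4]

def aggregate(row, w):
    """Weighted arithmetic average of a row of SVNVs, per Eq. (13)."""
    out = []
    for t in range(3):
        prod = 1.0
        for a, wj in zip(row, w):
            prod *= (1 - a[t]) ** wj
        out.append(1 - prod)
    return tuple(out)

def cos_to_ideal(a):
    """Cosine similarity to the ideal alternative (1, 0, 0), per Eq. (14)."""
    return a[0] / (a[0] ** 2 + a[1] ** 2 + a[2] ** 2) ** 0.5

scores = [cos_to_ideal(aggregate(row, W)) for row in D]
# rounded to 4 decimals: 0.5992, 0.9169, 0.7756, 0.9297
```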

The above decision making method is a useful method for selecting the optimal alternative in decision making problems based on neutrosophic sets. It is implicit that the alternative with the maximum similarity degree to the ideal single-valued neutrosophic value should be selected as the optimal candidate. On the other hand, the computation of the aggregated single-valued neutrosophic values is relatively complicated and time-consuming. Additionally, we note that, for this decision making problem, Zhang et al. (2018a) obtained exactly the same results as above by using score and accuracy functions.

5.2 Multi-attribute decision making based on the new similarity measure

In this subsection, we apply the new similarity measure proposed in this paper to the decision making problem described in the last subsection. We note that the decision making method presented in (Ye 2014a) is relatively complicated because of the computation of the aggregated single-valued neutrosophic values. To reduce the computational complexity, we first compute the similarity degrees between the related descriptions and the ideal single-valued neutrosophic value, and then aggregate these similarities to obtain the similarity degree between each alternative and the ideal single-valued neutrosophic set. Finally, all the alternatives are ranked by these similarities. The computational procedure can be summarized as follows:

Step 1: Based on the decision matrix \(D=(\alpha ^{ij})_{m\times n}\), we compute the similarity \(S(\alpha ^{ij}, \alpha ^{*})\) by using Eq. (3). We note that the top element and bottom element in \(D^{*}\) with respect to order relation \(\le \) are (1, 1, 0) and (0, 0, 1) respectively. Thus, in this case, the ideal single-valued neutrosophic value should be \(\alpha ^{*}=(1,1,0)\).

Step 2: We compute the similarity \(S(A_{i}, \alpha ^{*})\) by

$$\begin{aligned} S(A_{i}, \alpha ^{*})=\sum _{j=1}^{n}w_{j}S(\alpha ^{ij},\alpha ^{*}) \end{aligned}$$
(15)

Step 3: Select the best alternative(s) in accordance with the similarity degrees \(S(A_{i}, \alpha ^{*})\). The best alternative(s) is the one with \(\max _{i=1}^{m}S(A_{i}, \alpha ^{*})\).
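These three steps can be sketched in Python; the similarity is Eq. (3), reconstructed from the formulas in the text, with the matrix entries of Example 1 (all names are ours):

```python
def sim(x, y):
    """Similarity between two SVNVs; Eq. (3), reconstructed."""
    if x[0] == y[0] and x[2] == y[2]:
        return 1 - abs(x[1] - y[1]) / 2
    return (2 - abs(x[0] - y[0]) - abs(x[2] - y[2])) / 4

# Decision matrix of Example 1 and attribute weights.
D = [
    [(0.4, 0.2, 0.3), (0.4, 0.2, 0.3), (0.2, 0.2, 0.5)],
    [(0.6, 0.1, 0.2), (0.6, 0.1, 0.2), (0.5, 0.2, 0.2)],
    [(0.3, 0.2, 0.3), (0.5, 0.2, 0.3), (0.5, 0.3, 0.2)],
    [(0.7, 0.0, 0.1), (0.6, 0.1, 0.2), (0.4, 0.3, 0.2)],
]
W = [0.35, 0.25, 0.4]
ideal = (1, 1, 0)  # top element of D* under the type-3 order

# Steps 1-2: value-level similarities to the ideal, then the weighted sum of Eq. (15).
scores = [sum(wj * sim(a, ideal) for wj, a in zip(W, row)) for row in D]

# Step 3: the best alternative maximizes the score.
best = max(range(len(scores)), key=scores.__getitem__)  # index 3, i.e. A4
```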

Example 2

We reconsider the decision making problem presented in Example 1.

Step 1: By Eq. (3), we have \(S(\alpha ^{11}, \alpha ^{*})=\frac{2-|1-0.4|-|0-0.3|}{4}=0.275\). Similarly, we have \(S(\alpha ^{12}, \alpha ^{*})=0.275\), \(S(\alpha ^{13}, \alpha ^{*})=0.175\); \(S(\alpha ^{21}, \alpha ^{*})=0.35\), \(S(\alpha ^{22}, \alpha ^{*})=0.35\), \(S(\alpha ^{23}, \alpha ^{*})=0.325\); \(S(\alpha ^{31}, \alpha ^{*})=0.25\), \(S(\alpha ^{32}, \alpha ^{*})=0.3\), \(S(\alpha ^{33}, \alpha ^{*})=0.325\); \(S(\alpha ^{41}, \alpha ^{*})=0.4\), \(S(\alpha ^{42}, \alpha ^{*})=0.35\), \(S(\alpha ^{43}, \alpha ^{*})=0.3\).

Step 2: By Eq. (15), we have

$$\begin{aligned} S(A_{1}, \alpha ^{*})=0.35\times 0.275+0.25\times 0.275+0.4\times 0.175=0.235. \end{aligned}$$

Similarly, \(S(A_{2}, \alpha ^{*})=0.34\), \(S(A_{3}, \alpha ^{*})=0.2925\), \(S(A_{4}, \alpha ^{*})=0.3475\).

Step 3: According to these similarity measures, the ranking order of four alternatives is \(A_{4}\succ A_{2}\succ A_{3}\succ A_{1}\). Therefore, the alternative \(A_{4}\) is the best choice among all the alternatives.

For this example, by using the new similarity measure proposed in this paper, we obtained the same ranking order of alternatives as in (Ye 2014a). This shows that the new similarity measures proposed in this paper are effective and efficient. We note that, if we directly compute the similarity \(S(\alpha ^{i},\alpha ^{*})\) between the aggregated single-valued neutrosophic values \(\alpha ^{i}\) and the ideal single-valued neutrosophic value \(\alpha ^{*}\) by using Eq. (3), then we have \(S(\alpha ^{1}, \alpha ^{*})=0.2347\), \(S(\alpha ^{2}, \alpha ^{*})=0.3407\), \(S(\alpha ^{3}, \alpha ^{*})=0.2940\), \(S(\alpha ^{4}, \alpha ^{*})=0.3521\). Thus, the ranking order of the four alternatives is again \(A_{4}\succ A_{2}\succ A_{3}\succ A_{1}\).

Recently, Wu et al. (2018) presented an approach to single-valued neutrosophic set based multi-attribute decision making by using similarity and entropy measures. Here, besides the ideal alternative, the anti-ideal alternative is also taken into consideration. Similar to Eq. (15), we compute the similarity measure between the alternative \(A_{i}\) and the anti-ideal alternative \(\alpha _{*}=(0,0,1)\) as follows:

$$\begin{aligned} S(A_{i}, \alpha _{*})=\sum _{j=1}^{n}w_{j}S(\alpha ^{ij},\alpha _{*}) \end{aligned}$$
(16)

Then, compute the closeness degree of the alternative \(A_{i}\) to the ideal alternative by using (Wu et al. 2018)

$$\begin{aligned} T(A_{i})=\frac{S(A_{i}, \alpha ^{*})}{S(A_{i}, \alpha ^{*})+S(A_{i}, \alpha _{*})},&\quad i=1,2,\ldots ,m \end{aligned}$$
(17)

The best alternative(s) will be selected in accordance with the closeness degrees \(T(A_{i})\) (\(i=1,2,\ldots ,m\)). The best alternative(s) is the one with \(\max _{i}T(A_{i})\). In some cases, the multi-attribute decision making approach presented in (Wu et al. 2018) can derive more accurate results because the anti-ideal alternative is also taken into consideration.

In the above example, if we compute the similarity degree between the alternative \(A_{i}\) and the anti-ideal alternative \(\alpha _{*}=(0,0,1)\) by using Eq. (3), then we obtain \(S(A_{1}, \alpha _{*})=0.265\), \(S(A_{2}, \alpha _{*})=0.16\), \(S(A_{3}, \alpha _{*})=0.2075\) and \(S(A_{4}, \alpha _{*})=0.1525\). Consequently, we have

$$\begin{aligned} T(A_{1})= & {} \frac{S(A_{1}, \alpha ^{*})}{S(A_{1}, \alpha ^{*})+S(A_{1}, \alpha _{*})}=0.47,\,\\ T(A_{2})= & {} \frac{S(A_{2}, \alpha ^{*})}{S(A_{2}, \alpha ^{*})+S(A_{2}, \alpha _{*})}=0.68,\\ T(A_{3})= & {} \frac{S(A_{3}, \alpha ^{*})}{S(A_{3}, \alpha ^{*})+S(A_{3}, \alpha _{*})}=0.585,\,\\ T(A_{4})= & {} \frac{S(A_{4}, \alpha ^{*})}{S(A_{4}, \alpha ^{*})+S(A_{4}, \alpha _{*})}=0.695. \end{aligned}$$

In this case, the ranking order of the four alternatives is again \(A_{4}\succ A_{2}\succ A_{3}\succ A_{1}\). This is no coincidence: whenever the second case of Eq. (3) applies, \(S(x, \alpha ^{*})+S(x, \alpha _{*})=\frac{(1+x_{1}-x_{3})+(1-x_{1}+x_{3})}{4}=0.5\), so \(T(A_{i})=2S(A_{i}, \alpha ^{*})\) and the closeness-based ranking coincides with the ranking derived from Eq. (15).
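The whole comparison (Eqs. (15)-(17)) can be sketched as follows (Python; Eq. (3) reconstructed from the text, matrix entries as in Example 1, names ours):

```python
def sim(x, y):
    """Similarity between two SVNVs; Eq. (3), reconstructed."""
    if x[0] == y[0] and x[2] == y[2]:
        return 1 - abs(x[1] - y[1]) / 2
    return (2 - abs(x[0] - y[0]) - abs(x[2] - y[2])) / 4

# Decision matrix of Example 1 and attribute weights.
D = [
    [(0.4, 0.2, 0.3), (0.4, 0.2, 0.3), (0.2, 0.2, 0.5)],
    [(0.6, 0.1, 0.2), (0.6, 0.1, 0.2), (0.5, 0.2, 0.2)],
    [(0.3, 0.2, 0.3), (0.5, 0.2, 0.3), (0.5, 0.3, 0.2)],
    [(0.7, 0.0, 0.1), (0.6, 0.1, 0.2), (0.4, 0.3, 0.2)],
]
W = [0.35, 0.25, 0.4]
ideal, anti_ideal = (1, 1, 0), (0, 0, 1)

s_plus = [sum(wj * sim(a, ideal) for wj, a in zip(W, row)) for row in D]        # Eq. (15)
s_minus = [sum(wj * sim(a, anti_ideal) for wj, a in zip(W, row)) for row in D]  # Eq. (16)
closeness = [p / (p + m) for p, m in zip(s_plus, s_minus)]                      # Eq. (17)
```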

6 Concluding remarks

The study of information measures is an important topic in uncertain information processing. This paper is devoted to the study of similarity and entropy measures in the SVNS environment. In some practical cases, the truth-membership, indeterminacy-membership and falsity-membership functions of a SVNS are not of the same importance. The type-3 inclusion relation between SVNSs proposed recently by Zhang et al. (2018a, b) is designed to deal with this situation. However, we showed by illustrative examples that the existing similarity and entropy measures are not suitable for the type-3 inclusion relation between SVNSs. To cope with this issue, we proposed axiomatic definitions of similarity and entropy for SVNVs with respect to the type-3 inclusion relation. Based on the Hamming distance, the cosine function and the cotangent function, we constructed three similarity measures for SVNVs. The entropies are constructed by using the similarity between a SVNV and its complement. Accordingly, similarity and entropy measures for SVNSs are obtained by aggregating the similarities and entropies of the related SVNVs. Based on the new similarity and entropy measures proposed in this paper, we presented a multi-attribute decision making method, which demonstrates that these new information measures are applicable and efficient.

It is worth noticing that the similarity and entropy measures presented in this study may not be suitable for the type-1 and type-2 inclusion relations. Therefore, in further research, similarity and entropy measures which fit all three kinds of inclusion relations deserve in-depth investigation. Moreover, the generalization of the information measures presented in this paper may be an interesting topic. For example, the similarity measure presented in Eq. (3) can be extended to

$$\begin{aligned} S(x,y)=\left\{ \begin{array}{ll}1-(1-\alpha )|x_{2}-y_{2}|, &{} \text{ if }\quad (x_{1}=y_{1})\wedge (x_{3}=y_{3}), \\ 0.5\alpha (2-|x_{1}-y_{1}|-|x_{3}-y_{3}|),&{}\text{ otherwise. }\quad \end{array} \right. \nonumber \\ \end{aligned}$$
(18)

where \(\alpha \in (0,1)\) is a threshold value. Furthermore, the application of single-valued neutrosophic information measures in areas such as pattern recognition and information fusion systems is an important issue to be addressed.
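A sketch of the parameterized measure (18) (Python; the function name is ours):

```python
def sim_alpha(x, y, alpha=0.5):
    """Parameterized similarity of Eq. (18); the threshold alpha in (0, 1)
    trades the truth/falsity part off against the indeterminacy part."""
    if x[0] == y[0] and x[2] == y[2]:
        return 1 - (1 - alpha) * abs(x[1] - y[1])
    return 0.5 * alpha * (2 - abs(x[0] - y[0]) - abs(x[2] - y[2]))
```

For \(\alpha =0.5\), Eq. (18) reduces to the similarity measure of Eq. (3).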