1 Introduction

The relevance of the theory of fuzzy sets to cluster analysis has been well documented by many research works and applications [1, 8, 10–12, 14, 16]. A fuzzy partition U of n objects into c clusters is usually described by a c × n matrix with elements from the unit interval. The use of the unit interval makes it possible to express the membership grades of U with infinitely many values from 0 to 1. However, this detailed information is not always needed. A large number of different membership grades complicates interpretation and increases computational cost in further applications. In decision-making, an approximation of U often provides sufficient information. There has been extensive research related to the approximation of fuzzy sets [3–5, 9, 13, 15]. However, the approximation of fuzzy partitions has not received comparable attention. Some work in this direction can be found in [1, 2, 6–8].

In practice, an approximation of a fuzzy partition U is usually performed by the maximum membership method [1] or by an α-cut [5, 15] of U, where α is a real number from the unit interval. Both methods produce crisp matrices with elements from the set {0, 1} and therefore most of the detailed information provided by the original fuzzy partition is lost. The aim of this contribution is to explore some methods of gradual approximations of U by non-crisp matrices. Two approaches, based on two different interpretations of the notion of α-cut of U, are considered. First, the α-cut can be interpreted as a sharper approximation of U in which the large membership grades (at least as large as α) are elevated to 1, while the small membership grades (smaller than α) are reduced to 0. Second, it may be viewed as a coarser approximation of U in which the total number of distinct membership grades of U is reduced to only two values, namely to 0 and 1. In this case, 0 and 1 are the labels for the image of U in the partition of the unit interval into subintervals [0, α) and [α, 1], respectively.

The maximum membership approximation of U is a generalization of the α-cut of U in which the single parameter α is replaced by an n-tuple of parameters T = (t 1, …, t n ) derived from U. Therefore, the concept of α-sharpness can be extended to T-sharpness. Also, the maximum membership approximation of U can be obtained from a sequence of its minimum membership approximations [2]. T-sharper approximations and the minimum membership approximation based on T-sharpness will be explored in Sect. 3. Analogously, when the set T = {α} is extended to a set T = {t 1, …, t m } of parameters from (0, 1), the α-cut of U is generalized to the T-level cut of U, which is a coarser image of U in the partition of the unit interval into at most m + 1 subintervals. T-level cuts will be studied in Sect. 4. T-sharper approximations and T-level cuts of a fuzzy partition U represent a compromise between the original matrix U (too fuzzy) and its crisp approximation (too coarse). The parameters in the n-tuple T or in the set T can be derived from U or supplied by the researcher. The rest of this paper is organized as follows. Section 2 reviews some basic concepts of fuzzy partitions. Section 5 provides some concluding remarks.

2 Preliminaries

We assume that X is a finite universal set with cardinality | X | = n and that 2 ≤ c ≤ n. Let V cn denote the set of all real c × n matrices. Then the hard (crisp) c-partition space for X is the set [1]

$$\displaystyle{ M_{hcn} =\{ U \in V _{cn}: u_{ij} \in \{ 0,1\},\;\text{for all}\;i,j;\;\;\sum _{i=1}^{c}u_{ ij} = 1,\;\text{for all}\;j;\;\;0 <\sum _{ j=1}^{n}u_{ ij} < n,\;\text{for all}\;i\}, }$$
(1)

the probabilistic fuzzy c-partition space for X is the set

$$\displaystyle\begin{array}{rcl} M_{fcn} =\{ U \in V _{cn}: u_{ij} \in [0,1],\;\text{for all}\;i,j;\;\;\sum _{i=1}^{c}u_{ ij} = 1,\;\text{for all}\;j;\;\;0 <\sum _{ j=1}^{n}u_{ ij} < n,\;\text{for all}\;i\},& &{}\end{array}$$
(2)

and the possibilistic fuzzy c-partition space for X is the set

$$\displaystyle\begin{array}{rcl} M_{pcn} =\{ U \in V _{cn}: u_{ij} \in [0,1],\;\text{for all}\;i,j;\;\;\sum _{i=1}^{c}u_{ ij} > 0,\;\text{for all}\;j;\;\;0 <\sum _{ j=1}^{n}u_{ ij} < n,\;\text{for all}\;i\}.& &{}\end{array}$$
(3)

The degenerate partition spaces M hcno , M fcno , and M pcno are supersets of M hcn , M fcn , and M pcn , respectively, obtained by relaxing the last condition to

$$\displaystyle{ 0 \leq \sum _{j=1}^{n}u_{ ij} < n,\;\text{for all}\;i. }$$
(4)
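As a quick illustration (not from the paper; the function names are ours), the membership conditions of Eqs. (1)–(3) can be checked mechanically:

```python
# Illustrative sketch: membership tests for the partition spaces
# M_hcn, M_fcn, M_pcn of Eqs. (1)-(3); names are ours, not the paper's.

def _row_sums_ok(U):
    # 0 < sum_j u_ij < n for all i: no cluster is empty or all-absorbing
    n = len(U[0])
    return all(0 < sum(row) < n for row in U)

def in_M_fcn(U):
    # probabilistic fuzzy partition: grades in [0,1], each column sums to 1
    c, n = len(U), len(U[0])
    cols_ok = all(abs(sum(U[i][j] for i in range(c)) - 1.0) < 1e-9
                  for j in range(n))
    return (all(0.0 <= u <= 1.0 for row in U for u in row)
            and cols_ok and _row_sums_ok(U))

def in_M_hcn(U):
    # hard (crisp) partition: additionally every grade is 0 or 1
    return in_M_fcn(U) and all(u in (0, 1) for row in U for u in row)

def in_M_pcn(U):
    # possibilistic partition: column sums need only be positive
    c, n = len(U), len(U[0])
    cols_ok = all(sum(U[i][j] for i in range(c)) > 0 for j in range(n))
    return (all(0.0 <= u <= 1.0 for row in U for u in row)
            and cols_ok and _row_sums_ok(U))

U = [[0.7, 0.2], [0.3, 0.8]]
print(in_M_fcn(U), in_M_hcn(U), in_M_pcn(U))
```

Dropping the lower bound in `_row_sums_ok` to `0 <=` would give the corresponding degenerate spaces of condition (4).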

The notation I will be used for the set of integers {1, …, c} and the notation J for the set of integers {1, …, n}. For U, W ∈ V cn ,

$$\displaystyle{ U \leq W\mbox{ if and only if }u_{ij} \leq w_{ij}\;\mbox{ for all }(i,j) \in I \times J. }$$
(5)

Fuzziness of a fuzzy partition U ∈ M fco can be evaluated by the partition entropy defined by

$$\displaystyle{ H(U) = \frac{1} {n}\sum _{i\in I}\sum _{j\in J}h(u_{ij}), }$$
(6)

where h(u ij ) = u ij log 1∕c (u ij ) when u ij > 0 and h(0) = 0. Then

  1. 0 ≤ H(U) ≤ 1
  2. H(U) = 0 if and only if U ∈ M hco
  3. H(U) = 1 if and only if U has all elements equal to 1∕c
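The partition entropy (6) is straightforward to compute; in the sketch below (our own helper) the logarithm to base 1∕c is rewritten as −ln u ∕ ln c, and the two boundary cases above are reproduced:

```python
import math

# Sketch of the partition entropy H(U) of Eq. (6); h(u) = u*log_{1/c}(u)
# is rewritten as -u*ln(u)/ln(c), with h(0) = 0 by convention.

def partition_entropy(U):
    c, n = len(U), len(U[0])
    h = lambda u: -u * math.log(u) / math.log(c) if u > 0 else 0.0
    return sum(h(u) for row in U for u in row) / n

uniform = [[0.5, 0.5], [0.5, 0.5]]   # every grade equals 1/c, so H = 1
crisp   = [[1.0, 0.0], [0.0, 1.0]]   # hard partition, so H = 0
print(partition_entropy(uniform), partition_entropy(crisp))
```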

For α ∈ [0, 1], the α-cut of U ∈ M fco is the matrix U α ∈ V cn with elements

$$\displaystyle{ (u_{\alpha })_{ij} = \left \{\begin{array}{ll} 1&\mbox{ if }u_{ij} \geq \alpha, \\ 0&\mbox{ otherwise}. \end{array} \right. }$$
(7)

The strong α-cut of U is the matrix \(U_{\alpha ^{+}} \in V _{cn}\) with elements

$$\displaystyle{ (u_{\alpha ^{+}})_{ij} = \left \{\begin{array}{ll} 1&\mbox{ if }u_{ij} >\alpha, \\ 0&\mbox{ otherwise }. \end{array} \right. }$$
(8)

When α = 1, U α is the core of U, denoted by cor U, and when α = 0, \(U_{\alpha ^{+}}\) is the support of U, denoted by supp U.
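In code, the two cuts (7) and (8) differ only in the comparison operator; a minimal sketch (function names are ours):

```python
# Sketch of the alpha-cut (7) and strong alpha-cut (8); with alpha = 0 the
# strong cut gives supp U, and with alpha = 1 the ordinary cut gives cor U.

def alpha_cut(U, alpha):
    return [[1 if u >= alpha else 0 for u in row] for row in U]

def strong_alpha_cut(U, alpha):
    return [[1 if u > alpha else 0 for u in row] for row in U]

U = [[0.7, 0.4], [0.3, 0.6]]
print(alpha_cut(U, 0.4))         # grades >= 0.4 are elevated to 1
print(strong_alpha_cut(U, 0.0))  # the support of U
```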

The maximum membership approximation of U ∈ M fco is the partition MM(U) ∈ M fco defined for all (i, j) ∈ I × J by

$$\displaystyle{ MM(u_{ij}) = \left \{\begin{array}{ll} 1&\mbox{ if }u_{ij} =\max \{ u_{kj};\;k \in I\}, \\ 0&\mbox{ otherwise. } \end{array} \right. }$$
(9)

Note that U may have several different maximum membership approximations if the maximum membership of a classified object in U is not unique.
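A sketch of Eq. (9) in code (ours, not the paper's). Since MM(U) need not be unique, the implementation below makes one concrete tie-breaking choice, the smallest row index, which selects one of the possible maximum membership approximations:

```python
# Sketch of the maximum membership approximation MM(U) of Eq. (9).
# Ties are broken by the smallest row index.

def max_membership(U):
    c, n = len(U), len(U[0])
    MM = [[0] * n for _ in range(c)]
    for j in range(n):
        i_star = max(range(c), key=lambda i: U[i][j])  # first maximizer
        MM[i_star][j] = 1
    return MM

U = [[0.7, 0.4], [0.3, 0.6]]
print(max_membership(U))
```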

3 T-Sharper Approximations of Fuzzy Partitions

Sharpening of the membership grades of a fuzzy set is achieved by moving large membership grades closer to 1 and small membership grades closer to 0. The notions of large and small are understood with respect to a given threshold α ∈ [0, 1]. The following generalization of α-sharpening can be considered in the case of fuzzy partitions.

Definition 1

Consider U, W ∈ M fco and an n-tuple T = (t 1, …, t n ), t j ∈ [0, 1] for all j ∈ J. Then W is T-sharper than U, denoted by W ≺ T U, if the membership grades of W satisfy the following property: for all (i, j) ∈ I × J,

$$\displaystyle{ w_{ij} \leq u_{ij} \leq t_{j}\quad \mbox{ or }\quad t_{j} \leq u_{ij} \leq w_{ij}. }$$
(10)

Further in this paper, the notation T = (t j ) will be used instead of T = (t 1, …, t n ).
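Condition (10) can be tested directly; a small sketch (the predicate name is ours):

```python
# Sketch of Definition 1: W is T-sharper than U when every w_ij lies on the
# same side of the column threshold t_j as u_ij, but no closer to it.

def is_T_sharper(W, U, T):
    c, n = len(U), len(U[0])
    return all(W[i][j] <= U[i][j] <= T[j] or T[j] <= U[i][j] <= W[i][j]
               for i in range(c) for j in range(n))

U = [[0.6, 0.3], [0.4, 0.7]]
W = [[0.8, 0.1], [0.2, 0.9]]
print(is_T_sharper(W, U, (0.5, 0.5)))   # W pushes grades away from 0.5
```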

Remark 1

If T = (α), α ∈ [0, 1], then T-sharpness reduces to α-sharpness. In order to obtain a T-sharper approximation of U that is a fuzzy partition different from U, the n-tuple T must contain at least one parameter t j such that

$$\displaystyle{ \min \{u_{ij}\vert u_{ij} > 0,i \in I\} \leq t_{j} \leq \max \{ u_{ij}\vert u_{ij} < 1,i \in I\}. }$$
(11)

Proposition 1

Given T = (t 1, …, t n ) and U, V, W ∈ M fco , the following properties hold:

  1. U ≺ T U
  2. If U ≺ T V and V ≺ T U, then U = V
  3. If U ≺ T V and V ≺ T W, then U ≺ T W

Corollary 1

M fco is partially ordered by the relation ≺ T .

T-sharpening of fuzzy partitions reduces their fuzziness and therefore it can be considered as a method of partial defuzzification.

Proposition 2

Let U, W ∈ M fco and W ≺ T U. Then H(W) ≤ H(U).

Note that H(W) ≤ H(U) does not imply that W is a T-sharper approximation of U.

The notion of T-sharpness can be used to characterize the minimum membership approximation of a fuzzy partition. Recall that the maximum membership method generates a crisp approximation of U in which the maximum membership grade in each column of U is replaced by 1. In contrast, the minimum membership approximation should be a fuzzy partition in which the minimum positive membership grade in each column of U is replaced by 0. Since this approximation should reduce the partition entropy of U, we require that the minimum membership approximation of U be T-sharper than U.

Definition 2

Assume U ∈ M fco . For each j ∈ J, let s j = max{u ij | i ∈ I} and r j = min{u ij | u ij > 0, i ∈ I}. Create a matrix W ∈ V cn as follows: for all (i, j) ∈ I × J,

$$\displaystyle{ w_{ij} = \left \{\begin{array}{ll} u_{ij} &\mbox{ if }r_{j} = s_{j}, \\ 0 &\mbox{ if }u_{ij} \leq r_{j} < s_{j}, \\ \delta _{ij} > 0&\mbox{ otherwise, } \end{array} \right. }$$
(12)

where \(\sum _{i=1}^{c}w_{ij} = 1\) for all j ∈ J and W ≺ T U for some n-tuple T. Then W is a minimum membership approximation of U, denoted by mm(U).

Note that the minimum membership approximation of U need not be unique.

Proposition 3

Assume U ∈ M fco . For each j ∈ J, let s j = max{u ij | i ∈ I}, r j = min{u ij | u ij > 0, i ∈ I}, I j ∗ = {i ∈ I: u ij = r j }, and I j ∗∗ = {i ∈ I: u ij > r j }. Consider the mapping φ 1 : M fco → M fco which assigns to each u ij ∈ U the element φ 1 (u ij ) ∈ φ 1 (U) as follows:

$$\displaystyle{ \varphi _{1}(u_{ij}) = \left \{\begin{array}{ll} u_{ij} &\mathit{\mbox{ if }}r_{j} = s_{j}, \\ 0 &\mathit{\mbox{ if }}u_{ij} \leq r_{j} < s_{j}, \\ u_{ij} +\gamma _{j}&\mathit{\mbox{ otherwise, }} \end{array} \right. }$$
(13)

where γ j = r j | I j ∗ | ∕ | I j ∗∗ | . Then φ 1 (U) = mm(U) and φ 1 (U) ≺ T U, where T = (r j ).
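A sketch of φ 1 (our implementation of the formula above). Applied, for instance, to the column (0.55, 0.15, 0.30, 0) it returns (0.625, 0, 0.375, 0), in agreement with the second column of mm 1 (U) in Example 1 below:

```python
# Sketch of phi_1 (Proposition 3): in each non-constant column the grades at
# or below the minimal positive grade r_j are set to 0 and the freed mass is
# spread uniformly over the grades above r_j (gamma_j = r_j*|I*_j|/|I**_j|).

def phi1(U):
    c, n = len(U), len(U[0])
    W = [row[:] for row in U]
    for j in range(n):
        col = [U[i][j] for i in range(c)]
        r = min(u for u in col if u > 0)
        s = max(col)
        if r == s:
            continue                             # column left unchanged
        n_min = sum(1 for u in col if u == r)    # |I*_j|
        n_up = sum(1 for u in col if u > r)      # |I**_j|
        gamma = r * n_min / n_up
        for i in range(c):
            W[i][j] = 0.0 if col[i] <= r else col[i] + gamma
    return W

print(phi1([[0.55], [0.15], [0.30], [0.0]]))
```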

Another way of obtaining a minimum membership approximation of U is based on linear intensification [2].

Proposition 4

Assume U ∈ M fco . For each j ∈ J, let s j = max{u ij | i ∈ I}, r j = min{u ij | u ij > 0, i ∈ I}, I j = {i ∈ I | u ij > 0}, and m j = | I j | . Consider the mapping (linear intensification) φ 2 : M fco → M fco which assigns to each u ij ∈ U the element φ 2 (u ij ) ∈ φ 2 (U) as follows:

$$\displaystyle{ \varphi _{2}(u_{ij}) = \left \{\begin{array}{ll} \frac{1} {m_{j}} + \frac{1} {1-m_{j}r_{j}}(u_{ij} - \frac{1} {m_{j}})&\mathit{\mbox{ if }}r_{j} < s_{j}, \\ u_{ij} &\mathit{\mbox{ otherwise }}. \end{array} \right. }$$
(14)

Then φ 2(U) = mm(U) and φ 2(U) ≺ T U, where T = (1∕m j ).
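A sketch of φ 2 (our code; we apply formula (14) to the positive grades only, so that zero grades stay zero). Applied to the column (0.55, 0.15, 0.30, 0) it reproduces the second column of the corresponding mm 1 (U) in Example 1, approximately (0.7273, 0, 0.2727, 0):

```python
# Sketch of linear intensification phi_2 (Proposition 4): positive grades in
# each non-constant column are rescaled linearly so that r_j maps to 0 while
# the crossover value 1/m_j is kept fixed.

def phi2(U):
    c, n = len(U), len(U[0])
    W = [row[:] for row in U]
    for j in range(n):
        col = [U[i][j] for i in range(c)]
        pos = [u for u in col if u > 0]
        r, s, m = min(pos), max(pos), len(pos)
        if r == s:
            continue                 # column left unchanged
        for i in range(c):
            if col[i] > 0:
                W[i][j] = 1/m + (col[i] - 1/m) / (1 - m*r)
    return W

print(phi2([[0.55], [0.15], [0.30], [0.0]]))
```

Note that r j < s j implies r j < 1∕m j , so the denominator 1 − m j r j is always positive.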

A slight modification of φ 1 from Proposition 3 also generates mm(U).

Proposition 5

Assume U ∈ M fco . For each j ∈ J, let r j = min{u ij | u ij > 0, i ∈ I}, I j ∗ = {i ∈ I | u ij = r j }, s j = max{u ij | i ∈ I}, and I j ∗∗ = {i ∈ I | u ij = s j }. Consider the mapping φ 3 : M fco → M fco which assigns to each u ij ∈ U the element φ 3 (u ij ) ∈ φ 3 (U) as follows:

$$\displaystyle{ \varphi _{3}(u_{ij}) = \left \{\begin{array}{ll} u_{ij} &\mathit{\mbox{ if }}r_{j} = s_{j}\mathit{\mbox{ or }}r_{j} < u_{ij} < s_{j}, \\ 0 &\mathit{\mbox{ if }}u_{ij} \leq r_{j} < s_{j}, \\ u_{ij} +\gamma _{j}&\mathit{\mbox{ otherwise, }} \end{array} \right. }$$
(15)

where γ j = r j | I j | ∕ | I j ∗∗ | . Then φ 3(U) = mm(U) and φ 3(U) ≺ T U, where T = (r j ).

A sequence of gradual approximations of U by mm(U) can be created according to Algorithm 1 below.

Algorithm 1

Let U ∈ M fco and φ: M fco → M fco be such that φ(U) = mm(U).

Step 1:

Put k := 0 and mm 0 (U) := U.

Step 2:

Create mm k+1 (U) = φ(mm k (U)).

Step 3:

If mm k+1 (U) = mm k (U) or k = c − 2, stop. Else put k := k + 1 and go to Step 2.
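Algorithm 1 is a simple fixed-point iteration; the sketch below (our code, with a compact φ 1-style mapping plugged in as φ) returns the whole sequence U, mm 1 (U), mm 2 (U), …:

```python
# Sketch of Algorithm 1 with a phi_1-style mapping plugged in as phi.
# It returns [U, mm^1(U), mm^2(U), ...], stopping at a fixed point or
# after c-2 steps.

def phi1(U):
    c, n = len(U), len(U[0])
    W = [row[:] for row in U]
    for j in range(n):
        col = [U[i][j] for i in range(c)]
        r, s = min(u for u in col if u > 0), max(col)
        if r == s:
            continue
        gamma = r * sum(u == r for u in col) / sum(u > r for u in col)
        for i in range(c):
            W[i][j] = 0.0 if col[i] <= r else col[i] + gamma
    return W

def algorithm1(U, phi):
    seq = [U]
    for k in range(len(U) - 1):      # Step 3 stops at k = c - 2
        nxt = phi(seq[-1])
        if nxt == seq[-1]:           # fixed point reached
            break
        seq.append(nxt)
    return seq

seq = algorithm1([[0.5, 0.2], [0.3, 0.3], [0.2, 0.5]], phi1)
print(len(seq) - 1, seq[-1])         # number of sharpening steps, final mm
```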

Proposition 6

Assume a sequence mm k (U), k = 0, …, c − 2, of approximations of U generated by Algorithm 1. Then \(mm^{k+1}(U) \prec _{T_{k}}mm^{k}(U)\) and T k+1 ≥ T k .

Corollary 2

Assume a sequence mm k (U), k = 0, …, c − 2, of approximations of U generated by Algorithm 1. Then H(mm k+1 (U)) ≤ H(mm k (U)).

Remark 2

When each column of U contains only non-repeated positive membership grades, then

  • the fuzzy partition mm k (U), k = 1, …, c − 1, created by Algorithm 1 is the kth minimum membership approximation of U; consequently, mm c−1 (U) = MM(U),

  • if φ 2 from Proposition 4 is used in Algorithm 1, then \(mm^{k+1}(U) \prec _{T_{k}}mm^{k}(U),\;k = 0,\ldots,c - 2\), where T k = (1∕(c − k)), and hence T 0 = (1∕c) and T c−2 = (1∕2).

Each mapping φ: M fco → M fco which creates a minimum membership approximation of a fuzzy partition U reduces the number m j of clusters to which object x j ∈ X is classified (2 ≤ m j ≤ c). In decision-making, the following guidelines can then be used for the choice of φ.

  • If there is no other condition placed on mm(U), use the mapping φ 1 from Proposition 3. In this case, the membership grades of x j greater than the minimal one are uniformly elevated.

  • If the goal is to create a fuzzy partition mm(U) such that the membership grades of x j distributed to m j clusters of U are sharpened with respect to the threshold 1∕m j , use the mapping φ 2 from Proposition 4.

  • If the goal is to create a fuzzy partition mm(U) with more distinct maximum membership grades, use the mapping φ 3 from Proposition 5. Note that φ 3 can be considered as a combination of the minimum membership and the maximum membership methods, because the minimal membership grade of x j is reduced to zero and only its maximal membership grade is elevated.

Example 1

Let U be the fuzzy partition of elements from the set X = {x 1, x 2, x 3, x 4, x 5, x 6} represented by the matrix

$$\displaystyle{U = \left (\begin{array}{*{10}c} x_{1}\, & x_{2}\, & x_{3}\, & x_{4}\, & x_{5}\, & x_{6} \\ 0.27\,&0.55\,&0.1\,& 0\, &0.25\,&0.40\\ 0.03\, &0.15\, &0.1\, &0.60\, &0.25\, &0.30 \\ 0.35\,&0.30\,&0.2\,&0.28\,&0.25\,&0.09\\ 0.35\, & 0\, &0.6\, &0.12\, &0.25\, &0.21 \end{array} \right ).}$$

Due to repeated maximum membership grades in column 1 and column 5, there are 8 different maximum membership approximations of U. When Algorithm 1 with the mapping φ 1 from Proposition 3 is applied to U, the following non-crisp approximations are obtained. First,

$$\displaystyle{mm^{1}(U) = \left (\begin{array}{*{10}c} 0.12\,&0.625\,& 0\, & 0\, & 0.25\, &0.43\\ 0\, & 0\, & 0\, &0.66\, & 0.25\, &0.33 \\ 0.44\,&0.375\,&0.3\,&0.34\,& 0.25\, & 0\\ 0.44\, & 0\, &0.7\, & 0\, &0.25\, &0.24 \end{array} \right ),}$$

where \(mm^{1}(U) \prec _{T_{0}}U\) and T 0 = (0.03, 0.15, 0.1, 0.12, 0.25, 0.09). The partition entropy H(U) = 0.821 was reduced to H(mm 1 (U)) = 0.643. The second and third approximations are

$$\displaystyle{mm^{2}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,&0.25\,&0.55\\ 0\, &0\, &0\, &1\, &0.25\, &0.45 \\ 0.5\,&0\,&0\,&0\,&0.25\,& 0\\ 0.5\, &0\, &1\, &0\, &0.25\, & 0 \end{array} \right ),\quad mm^{3}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,& 0.25\, &1\\ 0\, &0\, &0\, &1\, & 0.25\, &0 \\ 0.5\,&0\,&0\,&0\,& 0.25\, &0\\ 0.5\, &0\, &1\, &0\, &0.25\, &0 \end{array} \right ),}$$

where \(mm^{3}(U) \prec _{T_{2}}mm^{2}(U) \prec _{T_{1}}mm^{1}(U),\) with T 1 = (0.12, 0.375, 0.3, 0.34, 0.25, 0.24) and T 2 = (0.5, 1, 1, 1, 0.25, 0.45), and H(mm 2 (U)) = 0.333 and H(mm 3 (U)) = 0.250.

When Algorithm 1 with the mapping φ 2 from Proposition 4 is applied to U, the following non-crisp approximations are obtained. First,

$$\displaystyle{mm^{1}(U) = \left (\begin{array}{*{10}c} 0.2728\,&0.7273\,& 0\, & 0\, &0.25\,&0.4844\\ 0\, & 0\, & 0\, &0.75\, &0.25\, &0.3281 \\ 0.3636\,&0.2727\,&0.1667\,&0.25\,&0.25\,& 0\\ 0.3636\, & 0\, &0.8333\, & 0\, &0.25\, &0.1875 \end{array} \right ),}$$

where \(mm^{1}(U) \prec _{T_{0}}U\) and T 0 = (1∕4, 1∕3, 1∕4, 1∕3, 1∕4, 1∕4). The partition entropy H(U) = 0.821 was reduced to H(mm 1 (U)) = 0.614. The second and third approximations are

$$\displaystyle{mm^{2}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,&0.25\,& 0.324\\ 0\, &0\, &0\, &1\, &0.25\, &0.6786 \\ 0.5\,&0\,&0\,&0\,&0.25\,& 0\\ 0.5\, &0\, &1\, &0\, &0.25\, & 0 \end{array} \right ),\quad mm^{3}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,&0.25\,&1\\ 0\, &0\, &0\, &1\, &0.25\, &0 \\ 0.5\,&0\,&0\,&0\,&0.25\,&0\\ 0.5\, &0\, &1\, &0\, &0.25\, &0 \end{array} \right ).}$$

where \(mm^{3}(U) \prec _{T_{2}}mm^{2}(U) \prec _{T_{1}}mm^{1}(U),\) with T 1 = (1∕3, 1∕2, 1∕2, 1∕2, 1∕4, 1∕3) and T 2 = (1∕2, 1, 1, 1, 1∕4, 1∕2), and H(mm 2 (U)) = 0.325 and H(mm 3 (U)) = 0.250. When Algorithm 1 with the mapping φ 3 from Proposition 5 is applied to U, the following non-crisp approximations are obtained. First,

$$\displaystyle{mm^{1}(U) = \left (\begin{array}{*{10}c} 0.270\,&0.7\,& 0\, & 0\, &0.25\,&0.49\\ 0\, & 0\, & 0\, &0.72\, &0.25\, &0.30 \\ 0.365\,&0.3\,&0.2\,&0.28\,&0.25\,& 0\\ 0.365\, & 0\, &0.8\, & 0\, &0.25\, &0.21 \end{array} \right ),}$$

where \(mm^{1}(U) \prec _{T_{0}}U\) and T 0 = (0.03, 0.15, 0.1, 0.12, 0.25, 0.09). The partition entropy H(U) = 0.821 was reduced to H(mm 1 (U)) = 0.627. The second and third approximations are

$$\displaystyle{mm^{2}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,&0.25\,&0.55\\ 0\, &0\, &0\, &1\, &0.25\, &0.45 \\ 0.5\,&0\,&0\,&0\,&0.25\,& 0\\ 0.5\, &0\, &1\, &0\, &0.25\, & 0 \end{array} \right ),\quad mm^{3}(U) = \left (\begin{array}{*{10}c} 0\, &1\,&0\,&0\,&0.25\,&1\\ 0\, &0\, &0\, &1\, &0.25\, &0 \\ 0.5\,&0\,&0\,&0\,&0.25\,&0\\ 0.5\, &0\, &1\, &0\, &0.25\, &0 \end{array} \right ).}$$

where \(mm^{3}(U) \prec _{T_{2}}mm^{2}(U) \prec _{T_{1}}mm^{1}(U),\) with T 1 = (0.27, 0.3, 0.2, 0.28, 0.25, 0.21) and T 2 = (0.5, 1, 1, 1, 0.25, 0.45), and H(mm 2 (U)) = 0.333 and H(mm 3 (U)) = 0.250.

4 T-Level Cuts of Fuzzy Partitions

Another way of approximating a fuzzy partition U is to transform U into a matrix with a reduced number of distinct membership grades from the interval (0, 1). This type of transformation is called coarsening. Given U ∈ M fco , the set

$$\displaystyle{ \varLambda _{U} =\{\delta \in (0,1):\,\delta = u_{ij}\,\mbox{ for some }u_{ij} \in U\} }$$
(16)

is called the level set of U. Obviously, the level set of a crisp partition is empty.

Definition 3

Consider a matrix W ∈ V cn with elements from [0, 1]. The coarseness of W is evaluated by the coefficient

$$\displaystyle{ \kappa (W) = 1 - \frac{\vert \varLambda _{W}\vert } {n\cdot c}. }$$
(17)
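The level set (16) and the coarseness coefficient (17) are easy to compute; a sketch (helper names are ours):

```python
# Sketch of the level set (16) and the coarseness coefficient (17) for a
# c x n matrix with entries in [0, 1].

def level_set(W):
    return {u for row in W for u in row if 0 < u < 1}

def kappa(W):
    c, n = len(W), len(W[0])
    return 1 - len(level_set(W)) / (n * c)

fuzzy = [[0.7, 0.4], [0.3, 0.6]]   # four distinct grades in (0, 1)
crisp = [[1, 0], [0, 1]]           # empty level set
print(kappa(fuzzy), kappa(crisp))  # -> 0.0 1.0
```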

Proposition 7

Assume a fuzzy partition UM fco . Then

  1. 0 ≤ κ(U) ≤ 1
  2. κ(U) = 1 if and only if U ∈ M hco (crisp partition)
  3. κ(U) = 0 if and only if all membership grades of U are non-repeated values between zero and one

An α-cut of U can be considered as a coarser image of U in the partition of the unit interval into subintervals [0, α) and [α, 1], or simply as the level cut of U based on the set of parameters T = {α}, α ∈ (0, 1). Let {δ 1, δ 2, …, δ k } be the set of all distinct membership grades of U such that 1 > δ 1 > δ 2 > ⋯ > δ k > 0. Then the level cuts \(U_{\delta _{1}},U_{\delta _{2}},\ldots,U_{\delta _{k}}\) create a sequence of gradual crisp approximations of U such that

$$\displaystyle{ cor\,U < U_{\delta _{1}} < U_{\delta _{2}} <\ldots < U_{\delta _{k}} < supp\;U. }$$
(18)

Our goal is to find a sequence of gradual non-crisp coarser approximations of U satisfying inequality (18). Based on the concept of p-level sets [3], the concept of T-level cuts of a fuzzy partition U is introduced in the next definition.

Definition 4

Let U ∈ M fco and let T = {t 1, t 2, …, t m } be a set of parameters such that 0 < t 1 ≤ t 2 ≤ … ≤ t m < 1. Then the T-level cut of U is the matrix T(U) ∈ V cn defined for all (i, j) ∈ I × J as follows:

$$\displaystyle{ \mathbf{T}(u_{ij}) = \frac{\vert \{t \in \mathbf{T}: t < u_{ij}\}\vert } {\vert \mathbf{T}\vert }. }$$
(19)

The membership grade T(u ij ) evaluates the proportion of parameters from T which are dominated by u ij . If T includes r distinct parameters from (0, 1), then T(U) is an image of U in the partition of the unit interval into r + 1 subintervals. In general, T(U) is a possibilistic fuzzy partition.
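Definition 4 in code (a sketch; the function name is ours). With a single parameter the cut coincides with the strong α-cut, as Proposition 8 below states:

```python
# Sketch of the T-level cut (19): each grade is replaced by the fraction of
# parameters in T that it strictly dominates.

def t_level_cut(U, T):
    m = len(T)
    return [[sum(1 for t in T if t < u) / m for u in row] for row in U]

U = [[0.7, 0.4], [0.3, 0.6]]
print(t_level_cut(U, [0.5]))        # single parameter: the strong 0.5-cut
print(t_level_cut(U, [0.2, 0.5]))   # grades mapped onto {0, 1/2, 1}
```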

Proposition 8

Let UM fco . Assume a T -level cut T(U) with the membership grades T(u ij ). Then

  1. T(0) = 0 and T(1) = 1
  2. T(u ij ) ∈ {0, 1∕m, 2∕m, …, (m − 1)∕m, 1}, where m = | T |
  3. If u ij ≤ u pq , then T(u ij ) ≤ T(u pq )
  4. When T = {t 1, t 2, …, t m }, S = {s 1, s 2, …, s m }, and t j ≤ s j for all j = 1, …, m, then S(U) ≤ T(U)
  5. If T = {α}, α ∈ (0, 1), then \(\mathbf{T}(U) = U_{\alpha ^{+}}\)

Corollary 3

Let | Λ U | = m. If the set T has fewer than m parameters (not necessarily distinct), then T(U) is a coarser approximation of U.

A sequence of gradual approximations of U by coarser T-level cuts of U can be created according to Algorithm 2 below.

Algorithm 2

Consider UM fco .

Step 1:

Let k be the number of crisp columns in U (i.e., columns containing only elements equal to zero and one). Put n ∗ := n − k.

Step 2:

Remove from U all crisp columns and arrange the elements in each remaining column from the largest to the smallest. Denote the resulting matrix by U ∗. Let c ∗ be the total number of distinct rows in U ∗ with all elements from (0, 1).

Step 3:

For q = 1, …, c ∗, arrange the elements in the qth row of U ∗ from the smallest to the largest. Denote the resulting matrix by U ∗∗. Assign the qth row of U ∗∗ to the n ∗-tuple T q and create T q (U).
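A sketch of Algorithm 2 (our code; `t_level_cut` implements Eq. (19)). It returns the list T 1 (U), …, T c∗ (U):

```python
# Sketch of Algorithm 2: drop crisp columns, sort the remaining columns in
# descending order (the rows of U*), and use each distinct row of U* with
# all elements in (0,1), sorted ascending, as the parameter tuple T_q.

def t_level_cut(U, T):
    m = len(T)
    return [[sum(1 for t in T if t < u) / m for u in row] for row in U]

def algorithm2(U):
    c, n = len(U), len(U[0])
    # Steps 1-2: remove crisp columns and sort each remaining column
    cols = [[U[i][j] for i in range(c)] for j in range(n)]
    cols = [sorted(col, reverse=True) for col in cols
            if not all(u in (0, 1) for u in col)]
    if not cols:
        return []                 # U is crisp: nothing to coarsen
    rows = [[col[q] for col in cols] for q in range(c)]   # rows of U*
    # Step 3: each distinct all-(0,1) row yields one T_q and one cut
    cuts, seen = [], []
    for row in rows:
        if all(0 < u < 1 for u in row) and row not in seen:
            seen.append(row)
            cuts.append(t_level_cut(U, sorted(row)))
    return cuts

cuts = algorithm2([[0.7, 0.4], [0.3, 0.6]])
print(len(cuts), cuts[0])
```

Consistently with Remark 3 below, the first cut of this small example has an all-zero column.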

Proposition 9

Consider the T-level cuts of U ∈ M fco created by Algorithm 2. Then

  1. \(cor\;U < \mathbf{T}_{1}(U) < \mathbf{T}_{2}(U) <\ldots < \mathbf{T}_{c^{{\ast}}}(U) < supp\;U\)
  2. κ(T q (U)) > κ(U) for all q = 1, …, c ∗
  3. T q (U) ∈ M pco for q ≥ 2

Remark 3

T 1 (U) created by Algorithm 2 contains one column with all elements equal to zero (the column in which the smallest of all n maximal membership grades is located). Therefore, T 1 (U) is not a possibilistic fuzzy partition of n objects.

Example 2

Consider the fuzzy partition U from Example 1. Then Algorithm 2 applied to U creates the following matrix U

$$\displaystyle{U^{{\ast}} = \left (\begin{array}{*{10}c} 0.35\;&0.55\;&0.6\;&0.60\;&0.25\;&0.40\\ 0.35\; &0.30\; &0.2\; &0.28\; &0.25\; &0.30 \\ 0.27\;&0.15\;&0.1\;&0.12\;&0.25\;&0.21\\ 0.03\; & 0\; &0.1\; & 0\; &0.25\; &0.09 \end{array} \right ).}$$

Then T 1 = {0.25, 0.35, 0.4, 0.55, 0.6, 0.6} and

$$\displaystyle{T_{1}(U) = \left (\begin{array}{*{10}c} 1/6\;&3/6\;& 0\; & 0\; &0\;&2/6 \\ 0\; & 0\; & 0\; &4/6\;&0\;&1/6 \\ 1/6\;&1/6\;& 0\; &1/6\;&0\;& 0 \\ 1/6\;& 0\; &4/6\;& 0\; &0\;& 0 \end{array} \right ).}$$

The coefficient of coarseness κ(U) = 0.375 is increased to κ(T 1 (U)) = 0.833. The second and third approximations are

$$\displaystyle{T_{2}(U) = \left (\begin{array}{*{10}c} 2/6\;& 1\; &0\;& 0\; &1/6\;& 1 \\ 0\; & 0\; &0\;& 1\; &1/6\;&3/6 \\ 5/6\;&3/6\;&0\;&2/6\;&1/6\;& 0 \\ 5/6\;& 0\; &1\;& 0\; &1/6\;&1/6 \end{array} \right ),\quad T_{3}(U) = \left (\begin{array}{*{10}c} 5/6\;& 1\; & 0\; & 0\; &4/6\;& 1 \\ 0\; &2/6\;& 0\; & 1\; &4/6\;& 1 \\ 1\; & 1\; &3/6\;& 1\; &4/6\;& 0 \\ 1\; & 0\; & 1\; &1/6\;&4/6\;&3/6 \end{array} \right ).}$$

with T 2 = {0.2, 0.25, 0.28, 0.3, 0.3, 0.35} and T 3 = {0.1, 0.12, 0.15, 0.21, 0.25, 0.27}, and κ(T 2 (U)) = κ(T 1 (U)) = 0.833. Obviously,

$$\displaystyle{cor\;U < T_{1}(U) < T_{2}(U) < T_{3}(U) < supp\;U.}$$

5 Conclusion

Two methods of gradual non-crisp approximation of a fuzzy partition U, based on parameters which may be derived from U, were proposed. While the first method focuses on reducing the fuzziness of U, the second method increases the coarseness of U. The relation of T-sharpness and the coefficient of coarseness were introduced. Although the work in this paper refers to a fuzzy partition of n objects into c clusters, it can be applied to any situation where information is represented by a matrix with elements from the unit interval. This includes, e.g., tables of relative frequencies or normalized evaluations of n objects by c experts. In future work, approximations of fuzzy partitions based on a combination of sharpening and coarsening will be studied.