
1 Introduction

Data clustering is one of the most important approaches used to discover naturally occurring structures in a dataset. Clustering refers to grouping objects into meaningful clusters so that the elements within a cluster are similar to one another, whereas elements belonging to different clusters are dissimilar. Nowadays, a variety of large collections of data are being created, which poses a great challenge for clustering algorithms; consequently, new clustering algorithms and their configurations are being intensively developed, e.g. [12,13,14,15, 26]. Data clustering is applied in many areas, such as biology, spatial data analysis, business, and others. It should be noted that there is no single clustering algorithm that creates the right data partition for all datasets. Moreover, the same algorithm can produce different results depending on the input parameters applied. Therefore, cluster validation should also be used to assess the results of data clustering. So far, a number of authors have proposed different cluster validity indices or modifications of existing ones, e.g. [11, 25, 29,30,31].

Among clustering algorithms four categories can be distinguished: partitioning, hierarchical, grid-based and density-based clustering. Well-known partitioning algorithms include K-means, Partitioning Around Medoids (PAM) [6, 34] and Expectation Maximization (EM) [19], whereas hierarchical clustering includes agglomerative and divisive approaches, e.g. Single-linkage, Complete-linkage, Average-linkage or DIvisive ANAlysis Clustering (DIANA) [20, 24]. The grid-based approach includes methods such as the Statistical Information Grid-based method (STING) or Wavelet-based Clustering (WaveCluster) [21, 28, 32]. The next category of clustering algorithms is the density-based approach. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) is the most famous density-based algorithm [10]. It can discover clusters of arbitrary shape and size, but it is rarely used to cluster multidimensional data due to the so-called “curse of dimensionality”. Consequently, it has many extensions, e.g. [5, 7, 8, 18, 27, 33]. However, DBSCAN requires two input parameters, i.e. the radius eps and the density threshold MinPts. Determination of these parameters is crucial to the correct performance of this clustering method. Especially the eps radius is very difficult to determine correctly. Recently, new concepts have been proposed for determining these input parameters [16]. It is important to note that clustering methods can also be used in the process of designing various neural networks [1,2,3] as well as fuzzy and rule-based systems [4, 9, 17, 22, 23].

In this paper, a new approach to determining the eps radius is proposed. It is based on an analysis of the knee that appears in the sorted values of the distance function computed for the dataset. This paper is organized as follows: Sect. 2 presents a detailed description of the DBSCAN clustering algorithm. In Sect. 3 the new method to determine the eps radius is outlined, while Sect. 4 illustrates experimental results on several datasets. Finally, Sect. 5 presents conclusions.

2 The Concept of the DBSCAN Clustering Algorithm

In this section the basic concept of the DBSCAN algorithm is described. As mentioned above, it is a very popular algorithm because it can find clusters of arbitrary shapes and requires only two input parameters, i.e. the eps radius and the density threshold MinPts. To understand the basic concept of the algorithm, several terms should be explained. Let us denote the eps radius by \(\varepsilon \) and a dataset by X, where a point \(p\in X\). The \(\varepsilon \) is usually determined by the user, and the right choice of this parameter is a key issue for this algorithm. MinPts is the minimal number of points required in the \(\varepsilon \)-neighborhood of a point for it to be a so-called core point.

Definition 1:

The \(\varepsilon \)-neighborhood of a point \(p \in X\), denoted \(N_{\varepsilon }(p)\), is defined as follows: \(N_{\varepsilon } \left( p \right) = \left\{ {q \in X|dist(p,q) \le \varepsilon } \right\} \), where dist(p, q) is a distance function between p and q.

When the number of points belonging to the \(\varepsilon \)-neighborhood of p is greater than or equal to MinPts, p is called a core point.
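As a concrete illustration, the following is a minimal Python sketch of the \(\varepsilon \)-neighborhood and the core-point test from Definition 1, assuming the Euclidean distance and a dataset stored as a NumPy array; the helper names are illustrative and not part of the original algorithm description.

```python
import numpy as np

def eps_neighborhood(X, p_idx, eps):
    """Indices of all points of X lying within distance eps of point p (Definition 1)."""
    dists = np.linalg.norm(X - X[p_idx], axis=1)   # Euclidean dist(p, q) for every q in X
    return np.where(dists <= eps)[0]

def is_core_point(X, p_idx, eps, min_pts):
    """p is a core point when its eps-neighborhood contains at least MinPts points."""
    return len(eps_neighborhood(X, p_idx, eps)) >= min_pts
```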

Definition 2:

A point p is directly density-reachable from a point q with respect to \(\varepsilon \) and the MinPts when q is a core point and p belongs to the \(\varepsilon \)-neighborhood of q.

When a point p is directly density-reachable from a point q and the number of points belonging to the \(\varepsilon \)-neighborhood of p is smaller than MinPts, p is called a border point.

Definition 3:

A point p is a noise point if it is neither a core point nor a border point.

Definition 4:

A point p is density-reachable from a point q with respect to the \(\varepsilon \) and the MinPts when there is a chain of points \(p_1, p_2, \ldots ,p_n\), \(p_1=q\), \(p_n=p\) so that \(p_{i+1}\) is directly density-reachable from \(p_i\).

Definition 5:

A point p is density-connected to a point q with respect to the \(\varepsilon \) and the MinPts when there is a point o such that p and q are density-reachable from the point o.

Definition 6:

A cluster C with respect to the \(\varepsilon \) and the MinPts is a non-empty subset of X, where the following conditions are satisfied:

  1. \(\forall p,q\): if \(p \in C\) and q is density-reachable from p with respect to the \(\varepsilon \) and the MinPts, then \(q \in C\).

  2. \(\forall p,q \in C\): p is density-connected to q with respect to the \(\varepsilon \) and the MinPts.

The DBSCAN algorithm creates clusters according to Definition 6. First, a point p is selected randomly and if \(|N_{\varepsilon }(p)|\ge MinPts\), then the point p is a core point and is marked as a new cluster. Next, the new cluster is expanded by the points which are density-reachable from p. This process is repeated until no new cluster can be found. On the other hand, if \(|N_{\varepsilon }(p)| < MinPts\), then the point p is considered to be noise. However, this point can later be included in a cluster if it is density-reachable from some core point.
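To make the cluster-formation procedure more tangible, the sketch below expands clusters from core points as described above. It is only an assumption-laden illustration, not a reference implementation: points are scanned in index order rather than randomly (the scan order does not change which points are core points), and the Euclidean distance is assumed.

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch: labels[i] is the cluster id of X[i], or -1 for noise."""
    n_points = len(X)
    labels = np.full(n_points, -1)             # -1 = noise / not yet assigned
    visited = np.zeros(n_points, dtype=bool)
    cluster_id = 0

    def region_query(i):
        # N_eps(x_i): indices of all points within distance eps of x_i (Definition 1)
        return np.where(np.linalg.norm(X - X[i], axis=1) <= eps)[0]

    for p in range(n_points):
        if visited[p]:
            continue
        visited[p] = True
        neighbors = region_query(p)
        if len(neighbors) < min_pts:
            continue                           # p stays noise unless a cluster later absorbs it
        labels[p] = cluster_id                 # p is a core point: start a new cluster
        seeds = list(neighbors)
        while seeds:                           # expand by all density-reachable points
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster_id         # border or core point joins the cluster
            if not visited[q]:
                visited[q] = True
                q_neighbors = region_query(q)
                if len(q_neighbors) >= min_pts:
                    seeds.extend(q_neighbors)  # q is also a core point: keep expanding
        cluster_id += 1
    return labels
```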

Fig. 1. An example of a 2-dimensional dataset consisting of three clusters.

Fig. 2. Sorted values of the \(k_{dist}\) function with respect to \(k=5\) and \(k=6\) for the dataset.

Fig. 3. Partition of the range into four equal parts: part1, part2, part3 and part4.

3 Explanation of the New Approach to Determine the Eps Parameter

As mentioned above, the right choice of the eps (\(\varepsilon \)) parameter is a fundamental issue for the high performance of DBSCAN. However, it is a very difficult task. One of the most popular ideas is based on a distance function which computes the distance between each point \(p \in X\) and its k-th nearest neighbor. This function can be denoted by \(k_{dist}\). It requires an input parameter k, which is the number of the nearest neighbors of the point p. For instance, Fig. 1 shows an example of a 2-dimensional dataset consisting of three clusters. The clusters contain 150, 100 and 50 elements, respectively. The \(k_{dist}\) function can be applied to the dataset, and the results are then sorted in ascending order. Figure 2 presents the sorted results for two values of the k parameter, i.e. k = 5 and k = 6. It can be observed that the plots include a “knee”, where the distances change significantly. Moreover, the number of calculated distances for k = 6 is greater than for k = 5, because additional neighbor distances have to be calculated for each point. The fundamental issue is the appropriate determination of a threshold point, which defines the maximal \(k_{dist}\) value within the clusters of a dataset. All the points with \(k_{dist}\) values higher than this maximal value are considered to be noise. It can be noted that the threshold point refers to the “knee”, but it is very difficult to determine this point correctly. To solve this problem, a new method, which consists of a few steps, is proposed. Let us denote the set of all the sorted values of the \(k_{dist}\) function for the dataset X by \(V_{sdist}\). The \(\varepsilon \) (eps), mentioned above, depends on the threshold point occurring in the dataset for the given number k of the nearest neighbors of a point. It should be noted that in Fig. 2 the values of the \(k_{dist}\) function increase abruptly to the right of the “knee”. This means that these points lie beyond the threshold point of the dataset and can be interpreted as noise. Moreover, the “knee” usually appears at the end of the sorted values of the \(k_{dist}\) function and its size depends on the properties of the dataset. Among the sorted values, it is possible to determine a range which indicates the knee more precisely. It can be defined by the \(v_{start}\) and \(v_{stop}\) points as follows:

$$\begin{aligned} \begin{aligned} v_{start} = \left| {V_{sdist}} \right| - \left| X \right| \\ v_{stop} = \left| {V_{sdist}} \right| \quad \quad \ \end{aligned} \end{aligned}$$
(1)

where \(\left| {V_{sdist}} \right| \) is the number of elements of \(V_{sdist}\) and \(\left| X \right| \) is the number of elements of X. Furthermore, the range is divided into four equal parts, i.e. part1, part2, part3 and part4. The size of each part is equal to \(\left| X \right| /4\). For example, Fig. 3 shows the partition of the range into four equal parts. Next, for each of the parts the average value is calculated as follows:

$$\begin{aligned} \begin{aligned} S_{v1} = \frac{1}{n}\sum \limits _{i = v1}^{v1+n-1} {{k_{dist}} \left( i \right) }\\ S_{v2} = \frac{1}{n}\sum \limits _{i = v2}^{v2+n-1} {{k_{dist}} \left( i \right) }\\ S_{v3} = \frac{1}{n}\sum \limits _{i = v3}^{v3+n-1} {{k_{dist}} \left( i \right) }\\ S_{v4} = \frac{1}{n}\sum \limits _{i = v4}^{v4+n-1} {{k_{dist}} \left( i \right) } \end{aligned} \end{aligned}$$
(2)

where n is the number of the \(k_{dist}\) values occurring in each part and \(v_j\) is the start point of the j-th part, \(j=1,\ldots ,4\). These start points are defined as follows: \(v1= v_{start}\), \(v2=v1+n\), \(v3=v2+n\) and \(v4=v3+n\). Next, the “knee” can be analyzed by means of these calculated average values. First, three factors a, b and c are computed. They can be expressed as follows:

$$\begin{aligned} a = \frac{{S_{v2}}}{{S_{v1}}} \quad b = \frac{{S_{v3}}}{{S_{v2}}} \quad c = \frac{{S_{v4}}}{{S_{v3}}} \end{aligned}$$
(3)

These factors play a key role in the analysis of the “knee”. For instance, when the values of the \(k_{dist}\) function increase very slowly in part1, part2 and part3, the average values \(S_{v1}\), \(S_{v2}\) and \(S_{v3}\) do not change significantly. Thus, the values of a and b are almost equal. Furthermore, if the values of the \(k_{dist}\) function increase abruptly in part4, then the factor c has a large value. In this case, the \(\varepsilon \) should be equal to \(S_{v4}\), because there is a high probability that \(S_{v4}\) refers to the threshold point occurring in the dataset. This can be expressed as:

$$\begin{aligned} \begin{aligned} if(a \approx b)\; \wedge \;\left( {c \ge T} \right) \;\;then\\ \varepsilon = S_{v4} \quad \quad \quad \quad \end{aligned} \end{aligned}$$
(4)

where T is a constant value determined experimentally (\(T=1.4\)). On the other hand, when \(c<T\), the values of the \(k_{dist}\) function increase slowly in part4 and the “knee” can be quite wide, so smaller values of the \(k_{dist}\) function refer to the threshold point. In this case, the \(\varepsilon \) should be calculated with respect to part2, part3 and part4, so it is defined as follows:

$$\begin{aligned} \begin{aligned} if(a \approx b)\; \wedge \;\left( {c < T} \right) \;\; then\\ \varepsilon = \frac{S_{v2}+S_{v3}+S_{v4}}{3} \quad \end{aligned} \end{aligned}$$
(5)

Next, when the values of the \(k_{dist}\) function increase significantly in part1, part2 and part3, the values of a and b differ, i.e. b is much bigger than the factor a. This means that the threshold point refers to the values from part3 and part4. Thus, the \(\varepsilon \) can be defined as follows:

$$\begin{aligned} \begin{aligned} if(a \ne b)\;\; then\\ \quad \varepsilon = \frac{S_{v3}+S_{v4}}{2} \end{aligned} \end{aligned}$$
(6)
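The following is a compact sketch of the whole procedure (Eqs. 1–6). It assumes, as suggested by the discussion of Fig. 2, that \(V_{sdist}\) collects the sorted distances from every point to each of its k nearest neighbors, and it tests \(a \approx b\) with a relative tolerance; both the tolerance tol and the helper name estimate_eps are illustrative choices, not fixed by the method itself.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_eps(X, k=6, T=1.4, tol=0.1):
    """Sketch of the proposed eps estimation (Eqs. 1-6); tol and the use of
    scikit-learn's NearestNeighbors are assumptions made for this illustration."""
    # k_dist values: distances from every point to its k nearest neighbors (self excluded)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, _ = nn.kneighbors(X)
    v_sdist = np.sort(dists[:, 1:].ravel())        # sorted values of the k_dist function

    # Eq. (1): the range containing the "knee" is the last |X| sorted values
    v_start = len(v_sdist) - len(X)
    knee = v_sdist[v_start:]

    # Eq. (2): split the range into four equal parts and average each part
    n = len(X) // 4
    s_v1, s_v2, s_v3, s_v4 = (knee[j * n:(j + 1) * n].mean() for j in range(4))

    # Eq. (3): growth factors between consecutive parts
    a, b, c = s_v2 / s_v1, s_v3 / s_v2, s_v4 / s_v3

    # Eqs. (4)-(6): choose eps depending on how sharply the k_dist values grow
    if abs(a - b) <= tol * b:                      # a is approximately equal to b
        if c >= T:
            return s_v4                            # Eq. (4): sharp knee in part4
        return (s_v2 + s_v3 + s_v4) / 3.0          # Eq. (5): wide knee
    return (s_v3 + s_v4) / 2.0                     # Eq. (6): a and b differ noticeably
```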

In the next section, the results of the experimental studies are presented to confirm the effectiveness of this new approach.

Table 1. A detailed description of the artificial datasets
Fig. 4. Examples of 2-dimensional artificial datasets: (a) Data 1, (b) Data 2, (c) Data 3, (d) Data 4, (e) Data 5, (f) Data 6, (g) Data 7, (h) Data 8 and (i) Data 9.

4 Experimental Results

In this section, several experiments conducted on 2-dimensional artificial datasets using the original DBSCAN algorithm are presented. This algorithm is one of the most popular clustering methods, because it can recognize clusters with arbitrary shapes. The artificial datasets include clusters of various sizes and shapes. The first parameter k (MinPts) was set to 6 in all the experiments. Such a value of k guarantees that the algorithm does not create clusters with too low a density threshold. To automatically determine the radius \(\varepsilon \), the new approach described above is used. Moreover, the evaluation of the accuracy of the DBSCAN algorithm is conducted by visual inspection. It can be noted that this algorithm is rarely used to cluster multidimensional data due to the so-called “curse of dimensionality”. Recently, however, a modification of this algorithm has been proposed to solve this problem [7].

4.1 Datasets

In the conducted experiments nine 2-dimensional datasets are used. Most of them come from the R package. The artificial data are called Data 1, Data 2, Data 3, Data 4, Data 5, Data 6, Data 7, Data 8 and Data 9, respectively.

Fig. 5. Results of the DBSCAN clustering algorithm for 2-dimensional datasets: (a) Data 1, (b) Data 2, (c) Data 3, (d) Data 4, (e) Data 5, (f) Data 6, (g) Data 7, (h) Data 8 and (i) Data 9.

They consist of various numbers of clusters, i.e. 2, 3, 4, 6, and 7 clusters. The scatter plots of these data are presented in Fig. 4. As can be observed in the plots, the distances between the clusters vary considerably: the clusters are located in different areas, and while some of them lie very close to each other, others are quite far apart. For instance, in Data 5 the elements create Gaussian, square, triangle and wave shapes, Data 6 consists of two Gaussian eyes, a trapezoid nose and a parabola mouth (with a vertical Gaussian one), and Data 7 is the so-called spirals problem, where points lie on two entangled spirals. Moreover, the sizes of the clusters are different and they contain various numbers of elements. Table 1 shows a detailed description of the datasets used in the experiments.

4.2 Experiments

The experimental analysis is designed to evaluate the performance of the new method for automatically specifying the \(\varepsilon \) parameter. As mentioned above, this parameter is very important for the DBSCAN algorithm to work correctly, and it is usually determined by visual inspection of the sorted values of the \(k_{dist}\) function. On the other hand, the new approach described in Sect. 3 allows us to determine this parameter in an automatic way. In these experiments the nine 2-dimensional datasets Data 1 to Data 9 are used. It needs to be noted that the value of the k (MinPts) parameter equals 6 in all the experiments. Once the \(\varepsilon \) parameter is specified by the new method, the DBSCAN algorithm can be used to cluster these datasets. Figure 5 shows the results of the DBSCAN algorithm, where each cluster is marked with a different sign. The data elements classified as noise are marked with a circle. It should be noted that the new approach provides correct values of the \(\varepsilon \) in all the experiments. Thus, despite the fact that the differences in distances and shapes between clusters are significant, all the datasets are clustered correctly by DBSCAN. Moreover, the number of data elements classified as noise is small in all the datasets.
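For illustration only, a hedged end-to-end usage sketch is given below. It reuses the estimate_eps sketch from Sect. 3 together with scikit-learn's DBSCAN, and substitutes a generated stand-in dataset, since the original Data 1–9 sets from the R package are not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

# Stand-in 2-dimensional data with three clusters of 150, 100 and 50 elements,
# loosely mirroring the example of Fig. 1 (not one of the Data 1-9 sets).
X, _ = make_blobs(n_samples=[150, 100, 50], cluster_std=0.5, random_state=0)

k = 6                                        # MinPts, as in all the experiments
eps = estimate_eps(X, k=k)                   # eps obtained automatically (sketch in Sect. 3)
labels = DBSCAN(eps=eps, min_samples=k).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters found:", n_clusters, "| noise points:", int(np.sum(labels == -1)))
```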

5 Conclusions

In this paper a new method for computing the \(\varepsilon \) parameter of the DBSCAN algorithm is proposed. This method uses the \(k_{dist}\) function, which computes the distance between each point \(p \in X\) and its k-th nearest neighbor. The fundamental issue is to correctly determine the threshold point, which defines the maximal \(k_{dist}\) value within the dataset clusters. To solve this problem, the new method first locates the region of the \(k_{dist}\) values forming the knee and divides it into four parts. Next, the average values of these parts are determined. This makes it possible to calculate the right value of \(\varepsilon \). In the conducted experiments, several 2-dimensional datasets were used, where the number of clusters, their sizes and shapes varied within a wide range. From the perspective of the conducted experiments, this automatic way of computing \(\varepsilon \) is useful and easy to apply. All the presented results confirm the very high efficiency of the newly proposed approach.