Abstract
Cluster analysis is an important field in pattern recognition and machine learning, which attempts to distribute a set of data patterns into groups considering only the inner properties of those data. One of the most popular techniques for data clustering is the K-Means algorithm, due to its simplicity and ease of implementation. However, K-Means is strongly dependent on the initial point of the search, which may lead to suboptimal (local optima) solutions. In the past few decades, Evolutionary Algorithms (EAs), like Group Search Optimization (GSO), have been adapted to the context of cluster analysis, given their global search capabilities and flexibility to deal with hard optimization problems. However, given their stochastic nature, EAs may be slower to converge than traditional clustering models (like K-Means). In this work, three hybrid memetic approaches combining K-Means and GSO are presented, named FMKGSO, MKGSO and TMKGSO, in such a way that the global search capabilities of GSO are combined with the fast local search performance of K-Means. The degree of influence of K-Means on the behavior of the GSO method is evaluated by a set of experiments considering both real-world problems and synthetic data sets, using five clustering metrics to assess how good and robust the proposed hybrid memetic models are.
1 Introduction
In the past few decades, the amount of data produced daily by electronic devices, such as smartphones, tablets, computers, cars, GPS devices, smart TVs, Internet of Things applications, and so on, has increased exponentially, so that automatic and scalable computational systems are ever more required. To extract useful information from the large data sets such systems are based on, it is impossible to rely on human analysis alone, since the need for precise and reliable information in a short period of time has become mandatory (Naldi and Campello 2014).
Data clustering is one of the most important and primitive activities in pattern recognition, serving as an important mechanism for exploratory data analysis. Clustering is characterized by an unsupervised attempt to categorize a set of data patterns into clusters, in such a way that observations belonging to the same cluster are more closely related (according to their feature set) than observations from different clusters. In clustering, no prior knowledge about the data set at hand is required, so clustering models decide how to group the observations taking into consideration only the inner properties of those observations. Clustering algorithms have been successfully employed in many real-world applications, given their flexibility and adaptability, in fields such as: Agriculture (Rahamathunnisa et al. 2020), Data Mining (Sapkota et al. 2019), Ecology (Sreepathi et al. 2017), Image Understanding (Saraswathi and Allirani 2013; Wan et al. 2017; Diderot et al. 2019; Wei et al. 2019), Medicine (Li et al. 2016; Bruse et al. 2017; Premalatha and Subasree 2017; Beddad et al. 2019), Text Mining (Liu and Xiong 2011), Web Services (Parimalam and Sundaram 2017), Wireless Sensor Networks (WSN) (Misra and Kumar 2016; Dhiviya et al. 2017; Masoud et al. 2019), and so on. An interesting survey on data clustering algorithms can be found in (Xu and Tian 2015).
From an optimization perspective, clustering is considered a particular kind of NP-hard grouping problem. The most popular clustering approaches are the partitional clustering algorithms, which provide a partition of the data set into a prefixed number of clusters (an input parameter for the models). Each cluster is represented by its centroid vector, and the clustering process iteratively optimizes a criterion function by updating these cluster centroids, seeking to improve the quality of the final solution (final partition) provided by the algorithm.
One of the most popular partitional clustering methods is the K-Means algorithm (MacQueen et al. 1967). Although traditional partitional clustering models are able to achieve fast convergence speeds, these methods are known for their sensitivity to the initial centroid positions, which may lead to weak solutions (i.e., the model may be trapped in a local minimum point) if the algorithm starts in a poor region of the problem space. Standard partitional clustering approaches lack mechanisms to escape from local optima points, performing only local searches on the problem space.
Evolutionary Algorithms (EAs) have been increasingly applied to solve a great variety of difficult problems, such as hard function optimization (He et al. 2009; Oliveira et al. 2013; Pacifico and Ludermir 2014; Jain et al. 2019), weight and architecture optimization in Neural Networks (Silva et al. 2011; Idrissi et al. 2016; Darwish et al. 2020), feature weighting and selection (Ramos and Vellasco 2018; Taşci et al. 2018; Xu et al. 2020), and so on, given their global search capability and their mechanisms to avoid local optima points. In EAs, a set (population) of candidate solutions (individuals) for the problem at hand is kept and evolved according to a generational process, seeking the optimization of a criterion function (the fitness function). In EAs such as the Genetic Algorithm (GA) (Holland 1992), Evolutionary Programming (EP) (Fogel et al. 1966; Fogel 2009), Evolution Strategies (ES) (Rechenberg 1973; Schwefel 1993) and Differential Evolution (DE) (Storn and Price 1995, 1997), the search is driven by operators that simulate biological processes like mutation, recombination and selection. In this context, Swarm Intelligence (SI) methods are extensions of EAs whose search simulates the self-organizing collective behavior of social animals, like swarming, flocking and herding (Bonabeau et al. 1999). Examples of SI algorithms are Ant Colony Optimization (ACO) (Dorigo et al. 1996), Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995; Kennedy et al. 2001) and Group Search Optimization (GSO) (He et al. 2006, 2009). Table 1 presents some of the most recent and popular algorithms from the Evolutionary Computing literature. Interesting texts on Evolutionary Computing can be found in (Kennedy 2006; Simon 2013; Eiben and Smith 2015).
EAs have been increasingly applied to clustering (Hruschka et al. 2009; Inkaya et al. 2016; Canuto et al. 2018; Figueiredo et al. 2019), but, given their stochastic nature, such techniques may be too slow to converge in comparison to standard partitional clustering algorithms. It is also known that, in general, the local search mechanisms of evolutionary algorithms are not very effective (Ren et al. 2014).
To overcome this drawback, Hybrid Intelligent Systems (HISs) and Memetic Algorithms (MAs) combining evolutionary approaches and K-Means have been proposed in the literature, such that global searches performed by EAs are complemented by local searches performed by K-Means. Some examples of hybrid evolutionary systems that use K-Means as a local searcher are found in (Ahmadyfard and Modares 2008; Abdel-Kader 2010; Bhavani et al. 2011; Pacifico and Ludermir 2018, 2019). However, most works in the data clustering literature only employ K-Means after the generational process of the selected EA is concluded, which means that the best solution found by the EA may already be trapped in a local minimum point, making the application of K-Means sub-optimal (it will only exploit the problem region that contains that local minimum). To avoid this problem, in this work three hybrid memetic algorithms are proposed, named FMKGSO, MKGSO and TMKGSO, which combine the global search capabilities of GSO with the local search speed of K-Means, using K-Means (in different ways) to improve the population of GSO during the generational process. GSO was proposed as a method for continuous optimization problems based on the behavior of social animals searching for food resources, and, in the past few years, GSO has been successfully applied to many real-world applications (Chen et al. 2014b; Krishnaprabha and Aloor 2014; Li et al. 2015; Pacifico et al. 2018), presenting competitive optimization performances in comparison to other evolutionary algorithms, such as GA, PSO, EP and ES.
The proposed approaches are implemented in an attempt to understand the influence of K-Means on the behavior of GSO when dealing with the data clustering task, and, for that, a test bed consisting of 20 (twenty) real-world problems (obtained from the UCI Machine Learning Repository; Asuncion and Newman 2007) and 10 synthetic data sets is proposed. Five evaluation metrics from the clustering literature are employed to validate the quality of the solutions achieved by the proposed models, and five partitional evolutionary algorithms are selected for comparison purposes.
This work is organized as follows. First, some basic concepts are introduced (Sect. 2): K-Means (Sect. 2.1), partitional evolutionary algorithms (Sect. 2.2) and GSO (Sect. 2.3). After that, the proposed hybrid memetic GSO approaches are explained (Sect. 3), followed by the experimental evaluation (Sect. 4). Finally, some conclusions and interesting trends for future work are presented (Sect. 5).
2 Preliminaries
The baseline models adopted in this work are presented in the following sections (Sects. 2.1, 2.2 and 2.3).
2.1 K-Means
K-Means (MacQueen et al. 1967) has been proposed as a partitional clustering algorithm for continuous data, which groups real-valued data vectors into a predefined number of clusters. Consider a partition \(P_{C}\) of a data set with N data patterns (each data pattern is represented by a vector \({\mathbf{x} }_j\in {\mathfrak {R}^{m}}\), where \(j = 1, 2, \ldots , N\)) in C clusters, where C is a required input parameter for the algorithm. Each cluster is represented by its centroid vector \({\mathbf{g} }_{c}\in {\mathfrak {R}^{m}}\) (where \(c = 1, 2, \ldots , C\)).
In K-Means, clusters are formed based on a dissimilarity measure, the Euclidean distance [Eq. (1)]:

\[ d({\mathbf{x}}_j, {\mathbf{g}}_c) = \sqrt{\sum _{k=1}^{m} (x_{jk} - g_{ck})^2} \quad (1) \]

For each iteration (until a maximum number of iterations \(t_{maxkmeans}\) is reached or another stopping criterion is satisfied), a new cluster centroid vector is calculated, for each cluster, as the mean of its current data vectors (i.e., the data patterns currently assigned to the cluster) [Eq. (2)]:

\[ {\mathbf{g}}_c = \frac{1}{N_c} \sum _{{\mathbf{x}}_j \in \text {cluster } c} {\mathbf{x}}_j \quad (2) \]

where \(N_c\) is the number of patterns associated to cluster c. After that, the new partition is formed, and each pattern is associated to the cluster with the nearest centroid. The criterion function for K-Means is the Within-Cluster Sum of Squares, given in Eq. (3):

\[ J = \sum _{c=1}^{C} \sum _{{\mathbf{x}}_j \in \text {cluster } c} \Vert {\mathbf{x}}_j - {\mathbf{g}}_c \Vert ^2 \quad (3) \]
K-Means algorithm is presented in Algorithm 1.
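The assignment and centroid-update steps above can be sketched in a few lines of Python (a minimal illustration only, not the authors' implementation; the function names and the partition-based convergence test are our choices):

```python
import math
import random

def euclidean(x, g):
    """Euclidean distance between two feature vectors (Eq. 1)."""
    return math.sqrt(sum((xk - gk) ** 2 for xk, gk in zip(x, g)))

def kmeans(data, C, t_max=100, seed=0):
    """Minimal K-Means sketch: centroids start as C random patterns,
    then assignment and centroid-update (Eq. 2) steps alternate until
    the partition stops changing or t_max iterations are reached."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(data, C)]
    labels = [-1] * len(data)
    for _ in range(t_max):
        new_labels = [min(range(C), key=lambda c: euclidean(x, centroids[c]))
                      for x in data]
        if new_labels == labels:          # partition unchanged: converged
            break
        labels = new_labels
        for c in range(C):                # centroid = mean of its patterns
            members = [x for x, l in zip(data, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    # Within-Cluster Sum of Squares (Eq. 3), the criterion being minimized
    J = sum(euclidean(x, centroids[l]) ** 2 for x, l in zip(data, labels))
    return centroids, labels, J
```

On two well-separated point clouds, the sketch recovers the expected two-cluster partition with a small J value.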
2.2 Evolutionary algorithms for data clustering
In this section, we explain the most commonly adopted representation schema for evolutionary algorithms, when adapted as partitional clustering methods (Chen and Ye 2004).
Once more, consider a partition \(P_{C}\) of a data set with N patterns \({\mathbf{x} }_j\in {\mathfrak {R}^{m}}\) (\(j = 1, 2,\ldots , N\)) in C clusters. Each cluster is represented by its centroid vector \({\mathbf{g} }_{c}\in {\mathfrak {R}^{m}}\) (\(c = 1, 2, \ldots , C\)). Each population individual \({\mathbf{X} }_i\in {\mathfrak {R}^n}\) (where \(n = m \times C\)) in population G represents C cluster centroids at the same time, one for each cluster (Chen and Ye 2004). For instance, if \(m = 3\) and \(C = 5\), each individual will be a vector \({\mathbf{X} }_i\in {\mathfrak {R}^{15}}\), where the first three features represent the centroid \({\mathbf{g} }_1\), features 4 to 6 represent the centroid \({\mathbf{g} }_2\), features 7 to 9 represent the centroid \({\mathbf{g} }_3\), features 10 to 12 represent the centroid \({\mathbf{g} }_4\), and the last three features (features 13 to 15) represent the centroid \({\mathbf{g} }_5\), as in Fig. 1.
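This flat centroid encoding can be illustrated with a small sketch (hypothetical helper names; the layout follows the description above, with C consecutive blocks of m features each):

```python
def decode(individual, C, m):
    """Split a flat individual of length n = m*C into C centroid vectors."""
    assert len(individual) == m * C
    return [individual[c * m:(c + 1) * m] for c in range(C)]

def encode(centroids):
    """Concatenate C centroid vectors back into one flat individual."""
    return [v for g in centroids for v in g]
```

With m = 3 and C = 5 as in the running example, a 15-dimensional individual decodes into five 3-dimensional centroids, and encoding them again recovers the original vector.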
The population of evolutionary algorithms is generally initialized by a random process; in the context of partitional data clustering, however, initializing each individual by randomly choosing C patterns from the data set under analysis to compose the initial cluster centroids (just like in K-Means; see Sect. 2.1) leads to a faster exploration of the problem search space.
As the fitness function, many works adopt the Within-Cluster Sum of Squares [Eq. (3)] or some alternative function that takes such criterion as its main component, just like in (Chen and Ye 2004; Ahmadyfard and Modares 2008; Hruschka et al. 2009; Prabha and Visalakshi 2014; Pacifico and Ludermir 2019, 2019b). But sometimes, different functions are adopted as the fitness function (Das et al. 2007; Liu et al. 2011; Wong et al. 2011; He and Tan 2012; José-García and Gómez-Flores 2016; Pacifico and Ludermir 2016).
Once the initial population is obtained and the fitness value for each individual X\(_i^{(0)}\) in population G is computed, the evolutionary operators for the selected evolutionary algorithm are applied to the population, evolving the cluster centroids represented by each individual through a generational process, until a termination condition is reached. The global best individual found by the EA is provided as the clustering solution. A generic evolutionary algorithm for partitional data clustering is presented in Algorithm 2.
Although EAs are able to find global solutions even when dealing with complex optimization problems, the search performed by such strategies may be too slow, so in many applications (e.g., Cui et al. 2005; Abdel-Kader 2010; Ahmadi et al. 2010; Chen et al. 2014a; Pacifico and Ludermir 2018, 2019) the EAs are hybridized with K-Means, in such a way that the EAs are used to find a better set of starting points (i.e., cluster centroids) for K-Means. Algorithm 3 illustrates this process.
2.3 Group search optimization
Group search optimization is inspired by animal social searching behavior and group living theory. GSO employs the Producer-Scrounger (PS) model as a framework. The PS model was first proposed by Barnard and Sibly (1981) to analyze the social foraging strategies of group living animals. The PS model assumes that there are two foraging strategies within groups: producing (e.g., searching for food) and joining (scrounging, e.g., joining resources uncovered by others). Foragers are assumed to use producing or joining strategies exclusively. Under this framework, concepts of resource searching from the animal visual scanning mechanism are used to design optimum searching strategies in the GSO algorithm (He et al. 2009).
In GSO, the population G of S individuals is called a group, and each individual is called a member. In an n-dimensional search space, the i-th member at the t-th searching iteration (generation) has a current position \({\mathbf{X}}_i^t \in {\mathfrak {R}^n}\) and a head angle \(\alpha _i^t\in {\mathfrak {R}^{n-1}}\). The search direction of the i-th member, which is a vector \({\mathbf{D}}_i^t(\alpha _i^t) = (d_{i1}^t, \ldots , d_{in}^t)\), can be calculated from \(\alpha _i^t\) via a polar to Cartesian coordinate transformation [Eq. (4)]:

\[ d_{i1}^t = \prod _{q=1}^{n-1} \cos (\alpha _{iq}^t) \]

\[ d_{ij}^t = \sin (\alpha _{i(j-1)}^t) \prod _{q=j}^{n-1} \cos (\alpha _{iq}^t) \quad (j = 2, \ldots , n-1) \]

\[ d_{in}^t = \sin (\alpha _{i(n-1)}^t) \quad (4) \]
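This polar-to-Cartesian transformation (following the GSO formulation of He et al. 2009) can be sketched as follows; the function name is ours, and the resulting direction is a unit vector:

```python
import math

def search_direction(alpha):
    """Polar-to-Cartesian transform of a head angle alpha (length n-1)
    into a unit search direction d (length n): each coordinate is the
    sine of one angle times the cosines of all the remaining angles."""
    n = len(alpha) + 1
    d = []
    for j in range(n):
        # product of cos(alpha_q) for q = j .. n-2 (empty product = 1)
        prod = math.prod(math.cos(a) for a in alpha[j:])
        # first coordinate has no sine factor
        d.append(prod if j == 0 else math.sin(alpha[j - 1]) * prod)
    return d
```

For any head angle, the returned direction has Euclidean norm 1, which is what makes the pursuit distance \(l_{max}\) meaningful as a step length.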
A group in GSO consists of three types of members: producers, scroungers and dispersed members (or rangers) (He et al. 2009). The rangers were introduced by the GSO model, extending the standard PS framework.
During each GSO search iteration, the group member which has found the best fitness value so far (the most promising area from the problem search space) is chosen as the producer (\({\mathbf{X}}_p\)) (Couzin et al. 2005), and the remaining members are scroungers or rangers. Standard GSO admits only one producer in each iteration, but some GSO variants use multiple producers at the same time (Junaed et al. 2013; Pacifico and Ludermir 2013).
The producer employs a scanning strategy (producing) based on its vision field, generalized to an n-dimensional space, which is characterized by a maximum pursuit angle \({\theta _{max}} \in {\mathfrak {R}^{n-1}}\) and a maximum pursuit distance \(l_{max}\in {\mathfrak {R}}\), given by Eq. (5):

\[ l_{max} = \sqrt{\sum _{k=1}^{n} (U_{k} - L_{k})^2} \quad (5) \]

where \(U_{k}\) and \(L_{k}\) denote the upper bound and lower bound of the k-th dimension of the problem space, respectively.
In GSO, at the t-th iteration the producer \({\mathbf{X}}_p^t\) will scan laterally by randomly sampling three points in the scanning field: one at zero degree [Eq. (6)], one in the right hand side hypercube [Eq. (7)] and one in the left hand side hypercube [Eq. (8)]:

\[ {\mathbf{X}}_z = {\mathbf{X}}_p^t + r_1 l_{max} {\mathbf{D}}_p^t(\alpha _p^t) \quad (6) \]

\[ {\mathbf{X}}_r = {\mathbf{X}}_p^t + r_1 l_{max} {\mathbf{D}}_p^t(\alpha _p^t + {\mathbf{r}}_2 \theta _{max}/2) \quad (7) \]

\[ {\mathbf{X}}_l = {\mathbf{X}}_p^t + r_1 l_{max} {\mathbf{D}}_p^t(\alpha _p^t - {\mathbf{r}}_2 \theta _{max}/2) \quad (8) \]

where \(r_1\in {\mathfrak {R}}\) is a normally distributed random number (mean 0 and standard deviation 1) and \({\mathbf{r}}_2\in {\mathfrak {R}^{n-1}}\) is a uniformly distributed random sequence in the range (0, 1).
If the producer is able to find a better resource than its current position, it will fly to this point; if no better point is found, the producer will stay in its current position and turn its head to a new randomly generated angle [Eq. (9)]:

\[ \alpha ^{t+1} = \alpha ^{t} + {\mathbf{r}}_2 \alpha _{max} \quad (9) \]

where \(\alpha _{max}\in {\mathfrak {R}}\) is the maximum turning angle.
If after \(a\in {\mathfrak {R}}\) iterations the producer cannot find a better area, it will turn its head back to zero degree [Eq. (10)]:

\[ \alpha ^{t+a} = \alpha ^{t} \quad (10) \]
All scroungers will join the resource found by the producer, performing the scrounging strategy according to Eq. (11):

\[ {\mathbf{X}}_i^{t+1} = {\mathbf{X}}_i^t + {\mathbf{r}}_3 \circ ({\mathbf{X}}_p^t - {\mathbf{X}}_i^t) \quad (11) \]

where \({\mathbf{r}}_3\in {\mathfrak {R}^n}\) is a uniform random sequence in the range (0, 1) and \(\circ\) is the Hadamard (or Schur) product, which computes the entrywise product of two vectors.
The rangers will perform random walks through the problem space (Higgins and Strauss 2004), according to Eq. (12):

\[ {\mathbf{X}}_i^{t+1} = {\mathbf{X}}_i^t + l_i {\mathbf{D}}_i^t(\alpha ^{t+1}) \quad (12) \]

where the random walk length \(l_i\) is given by Eq. (13):

\[ l_i = a \cdot r_1 l_{max} \quad (13) \]
In GSO, when a member escapes from the search space bounds, it will turn back to its previous position inside the search space (Dixon 1959). Some studies considering alternative treatments to deal with out-bounded population individuals can be found in (Xu and Shu 2006; Silva et al. 2011; Pacifico et al. 2018).
GSO algorithm is presented in Algorithm 4.
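As an illustration of two of these mechanisms, the scrounging movement toward the producer [Eq. (11)] and the turn-back bound-handling rule can be sketched as follows (a minimal sketch with hypothetical helper names; the random generator and its seed are our choices):

```python
import random

rng = random.Random(42)

def scrounge(x_i, x_p):
    """Scrounging (Eq. 11): move toward the producer by a random
    entrywise (Hadamard) fraction of the difference vector."""
    return [xi + rng.random() * (xp - xi) for xi, xp in zip(x_i, x_p)]

def turn_back(x, prev, lower, upper):
    """Bound handling: a member that leaves the search space bounds
    returns to its previous position inside the search space."""
    inside = all(l <= v <= u for v, l, u in zip(x, lower, upper))
    return x if inside else prev
```

Because each random fraction lies in (0, 1), a scrounger always lands componentwise between its old position and the producer, which is what focuses the group on the most promising region found so far.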
GSO scrounging operator focuses the search performed by the group in the most promising areas from the problem space, corresponding to the main exploration/exploitation strategy employed by many EAs (like crossover strategy in Genetic Algorithms and particle movement in Particle Swarm Optimization).
Producing and ranging are the main mechanisms employed by GSO for escaping local minima points. When the producer of one generation is trapped in a local minimum point (a situation in which all scroungers would be trapped by following the producer, resulting in premature convergence of the group), the producing operator may find a better point in the search space, escaping from that local minimum. Even when that does not happen, rangers keep performing random walks that do not depend on the results found by the producer, which may lead to more promising areas than the ones found so far by the whole group, evading local minima points.
Some works have already adapted GSO to the context of partitional clustering, just like in (Pacifico and Ludermir 2014a, 2016, 2019b).
3 Proposed approaches
This section presents the proposed hybrid memetic GSO and K-Means models, elaborated to test the influence of K-Means when employed as a local searcher to improve the capabilities of partitional Group Search Optimizer. Three memetic algorithms are presented: FMKGSO, MKGSO and TMKGSO. In this work, we opt to combine GSO and K-Means due to some advantages promoted by these algorithms, such as:
1. GSO has been proven to be a good global optimizer in many real-world applications;
2. GSO presents good mechanisms to escape from local optima points, represented by the producing behavior of producers and by the ranging behavior of dispersed members;
3. Ranging and producing behaviors are also employed to prevent the premature convergence of the group, a well-known and hard-to-handle problem that affects some global search nature-inspired metaheuristics, such as Particle Swarm Optimization;
4. Standard GSO implements an effective mechanism to handle out-of-bounds members, avoiding individuals that would not codify an actual solution to the problem at hand by extrapolating the problem search space boundaries;
5. K-Means is a good local searcher, performing fast exploitation of problem space regions.
The proposed models are described as follows.
Firstly, as in generic partitional framework for EAs (see Sect. 2.2), for all three memetic algorithms, each group member is represented as a continuous vector X\(_i \in {\mathfrak {R}^n}\) (with \(n = m \times C\)), codifying C cluster centroids at the same time, and at the t-th generation, X\(_i\) will determine its own partition X\(_i^t.P_C\).
For each algorithm, the initial population will be obtained by the random selection of C patterns from the data set being analyzed to compose each member X\(_i^{(0)}\). After random initialization, each member X\(_i^{(0)}\) has its fitness value computed to determine the quality of the partition X\(_i^{(0)}.P_C\) it represents, by associating each data pattern x\(_j\) to its closest centroid vector X\(_i^{(0)}\).g\(_c\) in X\(_i^{(0)}\).
After that, GSO generational process is initialized. Each memetic algorithm will employ K-Means for some degree during its generational process to improve the quality of the group.
As in standard GSO, for all three memetic approaches, the generational process starts, in each generation t, by choosing the best member found so far (according to the selected fitness function) as the producer \({\mathbf{X}}_p^t\). The producer performs the producing operator as in standard GSO, by evaluating three random points from its visual scanning field [see Eqs. (6), (7) and (8)]. If a better point is found, \({\mathbf{X}}_p^t\) will migrate to that point in the next generation.
The scrounging and ranging operators for the memetic approaches are executed following Eqs. (11) and (12), respectively. After executing all standard GSO operators, each memetic model adopts K-Means to improve the GSO group in a different way, as described below.
In FMKGSO, the algorithm keeps track of the quality of each member separately. For each new member \({\mathbf{X}}_i^{t+1}\) in the t-th generation (after the execution of the producing, scrounging and ranging operators), its partition \({\mathbf{X}}_i^{t+1}.P_C\) is determined by associating each data pattern \({\mathbf{x}}_j\) to its closest centroid vector \({\mathbf{X}}_i^{t+1}.{\mathbf{g}}_c\), and its fitness is computed. If \({\mathbf{X}}_i^{t+1}\) has not improved (in terms of fitness value) for a predefined number of consecutive generations (\(t_{maxkmeans}\)), its cluster centroids are refined by the application of \(t_{maxkmeans}\) K-Means steps. FMKGSO thus applies K-Means only to members that have been unable to improve their partition for a period of time. This combination seeks to speed up the search performed by each member, since GSO operators may present slow convergence rates, as in all EAs, while the K-Means steps can considerably accelerate the convergence of GSO members, promoting a better exploration and exploitation of the problem search space. The generational process executes until a termination condition is met, and the final best improved solution \({\mathbf{X}}_p^{t_{max}}\) is returned. The FMKGSO algorithm is presented in Algorithm 5.
In MKGSO, after the new group is determined by the execution of the producing, scrounging and ranging operators, each member \({\mathbf{X}}_i^{t+1}\) is improved by the execution of exactly one K-Means step in each generation, before its fitness evaluation. All members are refined that way, regardless of whether the GSO operators were able to improve them. This operation seeks to speed up the exploitation performed by each GSO member by smoothly aggregating local information into the GSO generational process. After that, the partition represented by each member is determined, and its fitness value is computed. The generational process executes until a termination condition is met, and the final best improved solution \({\mathbf{X}}_p^{t_{max}}\) is returned. The MKGSO algorithm is presented in Algorithm 6.
The last proposed approach, TMKGSO, applies K-Means to each member periodically, for a limited number of steps. After a prefixed number of generations (\(t_{maxkmeans}\), an input parameter for the algorithm), each new group member \({\mathbf{X}}_i^{t+1}\) (obtained by the application of the producing, scrounging or ranging operator) is refined by the application of \(t_{maxkmeans}\) K-Means steps. By alternating executions of GSO and K-Means, the new groups are improved by the interchange of global and local information: while GSO is executing, its operators prevent the group from getting trapped in local optima regions, promoting the global exploration of the problem space, while during the K-Means steps, the group is guided through fast local exploitations of the region of the problem space where each member is placed. Each algorithm complements the search performed by the other. The generational process executes until a termination condition is met, and the final best improved solution \({\mathbf{X}}_p^{t_{max}}\) is returned. The TMKGSO algorithm is presented in Algorithm 7.
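The three approaches differ only in when, and for how many steps, each member receives K-Means refinement; that schedule can be summarized in a small sketch (a hypothetical helper reflecting our reading of the three policies described above, not the authors' code):

```python
def kmeans_steps_for(model, t, stagnation, t_maxkmeans=5):
    """Number of K-Means refinement steps a member receives at generation t:
    FMKGSO - only members stagnant for t_maxkmeans consecutive generations
             are refined, with t_maxkmeans steps;
    MKGSO  - every member gets exactly one step every generation;
    TMKGSO - every member gets t_maxkmeans steps every t_maxkmeans
             generations (periodic alternation of GSO and K-Means)."""
    if model == "FMKGSO":
        return t_maxkmeans if stagnation >= t_maxkmeans else 0
    if model == "MKGSO":
        return 1
    if model == "TMKGSO":
        return t_maxkmeans if t % t_maxkmeans == 0 else 0
    raise ValueError(model)
```

The sketch makes the trade-off explicit: FMKGSO spends K-Means effort only where GSO stagnates, MKGSO spreads a light refinement over every generation, and TMKGSO alternates whole phases of global and local search.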
By employing three different strategies to use K-Means during the GSO generational process, we can evaluate the actual influence of that technique as a local searcher. In that sense, in this work the total number of intended clusters C is supposed to be known a priori (Niu et al. 2017), so that no other factors influence the behavior of the proposed models. The automatic determination of the optimal number of clusters for each problem is ongoing research, and that issue will be addressed in a future work. For more information on the automatic clustering issue, please refer to Das et al. (2007), José-García and Gómez-Flores (2016), Elaziz et al. (2019).
4 Experimental results
In this section, we test the clustering capabilities of the proposed approaches, in comparison to other partitional evolutionary algorithms from the literature, by means of 20 (twenty) real-world and 10 synthetic data sets. All real-world data sets are benchmark classification and clustering problems acquired from the UCI Machine Learning Repository (Asuncion and Newman 2007). The features of the selected real data sets are shown in Table 2; they present different degrees of difficulty, such as unbalanced and overlapping classes, different numbers of classes and features, and so on.
The proposed synthetic data sets are split into two main groups: the Disp group (containing five data sets), where the classes vary in their degree of dispersion, from a well-separated scenario (Disp01) to a scenario where all classes present some degree of overlap; and the Prox group (containing five data sets), where the overlap is obtained by the approximation of class centers, from a scenario where all class centers are well separated in the problem search space (Prox01) to a scenario where all class centers are close to each other (Prox05). The configurations of the proposed synthetic data sets are presented in Table 3 and illustrated in Fig. 2.
For comparison purposes, five clustering measures are employed: the Within-Cluster Sum of Squares [Eq. (3)], the Intra-Cluster Distance [\(D_{max}\), Eq. (14)] and the Inter-Cluster Separation [\(D_{min}\), Eq. (15)] (Wong et al. 2011), the Weighted Quantization Error [\(J_{e_{2}}\), Eq. (16)] (Esmin et al. 2008) and the Corrected Rand Index [CR, Eq. (17)] (Hubert and Arabie 1985).
The CR assesses the degree of similarity between an a priori partition and a partition provided by the clustering algorithm. Given that all data sets adopted in this work are real-valued classification problems (i.e., all data patterns are labeled in each data set), CR represents a robust comparison metric for clustering studies, since its analysis takes into consideration only the relationships among the data patterns from the a priori and a posteriori partitions, without considering the categories themselves.
Considering a partition \(U_R = \{u_1, u_2, \ldots , u_R\}\) provided by a clustering algorithm and an a priori partition defined by classification \(V_C = \{v_1, v_2, \ldots , v_C\}\), CR is defined by Eq. (17):

\[ CR = \frac{\sum _{i,j}\binom{n_{ij}}{2} - \binom{n}{2}^{-1}\sum _{i}\binom{n_{i.}}{2}\sum _{j}\binom{n_{.j}}{2}}{\frac{1}{2}\left[ \sum _{i}\binom{n_{i.}}{2} + \sum _{j}\binom{n_{.j}}{2}\right] - \binom{n}{2}^{-1}\sum _{i}\binom{n_{i.}}{2}\sum _{j}\binom{n_{.j}}{2}} \quad (17) \]

where \(n_{ij}\) represents the number of objects that are in both clusters \(u_i\) and \(v_j\); \(n_{i.}\) indicates the number of objects in cluster \(u_i\); \(n_{.j}\) indicates the number of objects in cluster \(v_j\); and n is the total number of objects. CR takes its values from the interval [− 1, 1], in which the value 1 indicates perfect agreement between partitions, whereas values near 0 (or negative) correspond to cluster agreement found by chance (Arabie et al. 1996).
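The CR computation from the contingency counts \(n_{ij}\) can be sketched in a few lines (a minimal Python sketch of the Hubert and Arabie index; the function name is ours):

```python
from math import comb
from collections import Counter

def corrected_rand(labels_u, labels_v):
    """Corrected Rand index (Hubert and Arabie 1985) between two labelings,
    computed from the contingency counts n_ij and the marginals n_i., n_.j."""
    n = len(labels_u)
    n_ij = Counter(zip(labels_u, labels_v))                   # joint counts
    a = sum(comb(c, 2) for c in Counter(labels_u).values())   # rows: n_i.
    b = sum(comb(c, 2) for c in Counter(labels_v).values())   # cols: n_.j
    index = sum(comb(c, 2) for c in n_ij.values())
    expected = a * b / comb(n, 2)      # agreement expected by chance
    max_index = (a + b) / 2
    return (index - expected) / (max_index - expected)
```

Note that CR is invariant to cluster relabeling: a partition identical to the reference up to a permutation of labels still scores 1.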
The evaluation procedure includes a rank system employed through the application of the Friedman hypothesis test (Friedman 1937, 1940) for all the comparison clustering measures. The Friedman test is a non-parametric hypothesis test that ranks all algorithms for each data set separately. If the null hypothesis (that all ranks are not significantly different) is rejected, the Nemenyi test (Nemenyi 1962) is adopted as the post-hoc test. According to the Nemenyi test, the performances of two algorithms are considered significantly different if the corresponding average ranks differ by at least the critical difference

\[ CD = q_a \sqrt{\frac{n_{alg}(n_{alg}+1)}{6 n_{data}}} \]
where \(n_{data}\) represents the number of data sets, \(n_{alg}\) represents the number of compared algorithms and \(q_a\) are critical values based on a Studentized range statistic divided by \(\sqrt{2}\) (Demšar 2006). Since J, \(J_{e_{2}}\) and \(D_{max}\) are minimization metrics (indicated by \(\downarrow\)), the best methods will obtain lower ranks for the Friedman-Nemenyi test, while for \(D_{min}\) and CR (maximization metrics, indicated by \(\uparrow\)), the best methods will find higher average ranks for the Friedman-Nemenyi hypothesis test.
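The critical difference computation can be sketched as follows; the \(q_a\) values below are the two-tailed Nemenyi critical values at significance level 0.05 as tabulated by Demšar (2006), and should be treated as an assumption to verify against that source:

```python
import math

# Two-tailed Nemenyi critical values q_a at significance 0.05,
# indexed by the number of compared algorithms (Demsar, 2006).
Q_ALPHA_05 = {2: 1.960, 3: 2.343, 4: 2.569, 5: 2.728, 6: 2.850,
              7: 2.949, 8: 3.031, 9: 3.102, 10: 3.164}

def critical_difference(n_alg, n_data, alpha_table=Q_ALPHA_05):
    """Nemenyi critical difference: two average Friedman ranks differ
    significantly if they are at least CD apart."""
    q_a = alpha_table[n_alg]
    return q_a * math.sqrt(n_alg * (n_alg + 1) / (6.0 * n_data))
```

As expected, CD shrinks as the number of data sets grows, so larger test beds can detect smaller rank differences.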
The proposed hybrid memetic GSO approaches are compared to five other EAs: Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO), standard Group Search Optimization and Backtracking Search Optimization (BSA). The selected approaches are state-of-the-art models from the evolutionary computing and data clustering literature, having been successfully applied in many recent applications: GA (Shi and Xu 2018; Akbari et al. 2019; Islam et al. 2019; Mortezanezhad and Daneshifar 2019; Toman et al. 2020), DE (Wang 2018; Li and Dong 2019; Cho and Nyunt 2020; Zhang and Cao 2020; Zhu et al. 2020), PSO (Souza et al. 2018; Wang et al. 2018; Li et al. 2019; Pacifico and Ludermir 2019; Miranda and Prudêncio 2018), GSO (Pacifico and Ludermir 2018; Lin and Huang 2019; Pacifico and Ludermir 2019b; Taj and Basu 2019; Abualigah 2020), and BSA (Latiff et al. 2016; Günen et al. 2017; Li et al. 2019; Toz et al. 2019; Hassan and Rashid 2020).
All selected algorithms have been adapted as hybrid partitional clustering models according to the schema presented in Algorithm 3. In this work, all algorithms used the Within-Cluster Sum of Squares [Eq. (3)] as the fitness function, for simplicity, since our main objective is to evaluate the impact of K-Means on the behavior of memetic GSO, and not the influence of the fitness function on the behavior of EAs. For that issue, please refer to Wong et al. (2011), Pacifico and Ludermir (2016).
All algorithms were run in a MATLAB 7.6 environment. Thirty independent tests were executed for each data set, and all evolutionary methods started with the same initial population in each test, obtained by a random process (as explained in Sect. 2), to guarantee a fair evaluation and comparison among the selected techniques. Also, since the computational costs were evaluated in terms of the average execution time of each model on each data set, each algorithm was run and tested on a computer with an i7-7700K CPU, an NVIDIA GeForce GTX 1060 6GB GPU and 32 GB of RAM, independently (one algorithm at a time), and no programs but the Operating System were executed during the tests, granting the same environmental conditions to each method. For all tests, the adopted number of clusters C is equal to the number of classes per data set.
The hyperparameters for each evolutionary algorithm are presented in Table 4, and have been obtained from the literature: GA (Abdel-Kader 2010), DE (Das et al. 2007), PSO (Abdel-Kader 2010; Pacifico and Ludermir 2019), GSO (He et al. 2009; Pacifico and Ludermir 2014a, 2018), and BSA (Civicioglu 2013). The population size (S) and the maximum number of generations (\(t_{max}\)) have been chosen after a trial-and-error process, using the values from Pacifico and Ludermir (2018), Pacifico and Ludermir (2019) as the starting point. Since the selected approaches did not improve significantly with larger population sizes or higher numbers of generations, and since the computational costs associated with such parameters may grow very fast for high values, we kept the same parameter values as presented in Pacifico and Ludermir (2018), Pacifico and Ludermir (2019) for the current evaluation. The maximum number of K-Means executions (\(t_{maxkmeans}\)) has been obtained from Pacifico and Ludermir (2018).
4.1 Discussion
The discussion is divided into three parts: first, we evaluate the experimental results for the real-world data sets (Sect. 4.1.1), followed by the evaluation of the experiments on the synthetic data sets (Sect. 4.1.2). An overall evaluation is also performed, based on the Friedman-Nemenyi hypothesis tests and on the computational costs (in terms of the average execution times) obtained by each model (Sect. 4.1.3).
4.1.1 Real-world data sets
The experimental results for the real-world data sets are presented in Tables 5, 6, 7, 8 and 9. The best results for each metric in each data set are boldfaced.
In an empirical analysis, we can observe that the proposed hybrid memetic GSO approaches are able to find better values for the fitness function (J), in most cases, than hybrid partitional models that only use K-Means to refine the best solution found by the generational process of the EA. As illustrated in Fig. 3, the comparison hybrid partitional approaches are sometimes so slow in performing the global search that the only significant improvement is obtained after the generational process, when their final solution is refined by the K-Means method (which presents fast local search capabilities). Fig. 3 also shows that even the use of just one K-Means step per member during the generational process (the MKGSO approach) is enough to significantly improve the convergence speed of GSO. The empirical analysis also showed that the hybrid memetic GSO models are more stable than the comparison hybrid partitional EAs and the K-Means algorithm.
The overall evaluation on the real-world problems, obtained by the Friedman-Nemenyi hypothesis tests (Table 10), showed that the proposed memetic approaches are able to find clusters that are, on average, better represented by their final centroids (according to the J and \(J_{e_{2}}\) indices), more similar to the actual classes (CR index), and well-split in the problem space (according to \(D_{min}\)). The hypothesis tests also showed that, although MKGSO and TMKGSO use about the same number of K-Means steps per member in the current experimentation, MKGSO presented smoother convergence rates than TMKGSO, which may help MKGSO to avoid local minima points. Nevertheless, according to the current set of experiments, the performance of TMKGSO has been considered significantly better than the average results of MKGSO (in terms of the optimization of the fitness function J).
According to the Friedman-Nemenyi tests, considering all five clustering metrics, the best results have been found by TMKGSO, followed by MKGSO and FMKGSO (second and third places, respectively).
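The rank system behind these comparisons can be reproduced as follows: each algorithm is ranked per data set (rank 1 = best, ties share the average rank), the per-data-set ranks are averaged, and under the Nemenyi test two algorithms differ significantly only when their average ranks differ by more than the Critical Distance \(CD = q_{\alpha}\sqrt{k(k+1)/(6N)}\). A sketch, where the \(q_{\alpha}\) value must be looked up in a Studentized-range table (e.g., in Demšar 2006; 3.031 is the usual \(\alpha=0.05\) value for k = 8 algorithms):

```python
import numpy as np
from math import sqrt

def average_ranks(scores):
    """Friedman average ranks. scores[i, j] = metric value of algorithm j
    on data set i; lower is better; ties share the average rank."""
    scores = np.asarray(scores, dtype=float)
    ranks = np.zeros_like(scores)
    for i, row in enumerate(scores):
        for j, v in enumerate(row):
            # rank = (#strictly better) + midpoint of the tied block
            ranks[i, j] = (row < v).sum() + ((row == v).sum() + 1) / 2.0
    return ranks.mean(axis=0)

def nemenyi_cd(k, n, q_alpha):
    """Nemenyi Critical Distance for k algorithms compared over n data sets."""
    return q_alpha * sqrt(k * (k + 1) / (6.0 * n))

print(average_ranks([[0.1, 0.2, 0.3], [0.2, 0.1, 0.3]]))  # ranks 1.5, 1.5, 3.0
print(nemenyi_cd(k=8, n=20, q_alpha=3.031))               # about 2.35
```

Two algorithms whose average ranks lie within one Critical Distance of each other (as happens for some memetic pairs later in the paper) are treated as statistically tied.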
When considering the average execution time obtained by each method on each data set (Table 11), we can see that FMKGSO and TMKGSO (which apply K-Means refinements only under determined conditions) presented average times compatible with the other state-of-the-art EAs from the literature. On the other hand, MKGSO presented average execution times about four times higher than the other evaluated approaches, which may limit its application when the data set is big (in terms of the number of data patterns). Such limitation may have occurred because this method calls K-Means for each individual of the population in every generation, requiring many allocations and deallocations of computational resources.
4.1.2 Synthetic data sets
The results for both groups of synthetic data sets are presented in Tables 12, 13, 14, 15 and 16. The best results for each metric in each data set are boldfaced.
The empirical analysis considering both the Disp and Prox sets showed that all algorithms have been significantly affected as the degree of class overlapping increased, especially when the overlapping has been obtained by bringing the centers of the class distributions closer together (the Prox data sets). In spite of that, the memetic approaches have once again been able to find the best average performances and the best stability among all approaches. In particular, FMKGSO was able to find the best values in almost all tests, considering the five evaluation metrics, showing its capabilities and competitiveness. Fig. 4 illustrates the partitions obtained by each algorithm.
Considering the ranks provided by the Friedman-Nemenyi hypothesis tests (Table 17), the memetic approaches have found the best values for all evaluated indices. Although, in an overall analysis on the synthetic data sets, FMKGSO presented ranks slightly better than those of MKGSO, the Friedman-Nemenyi tests showed no statistically significant difference between the performances of FMKGSO and MKGSO (their differences are not greater than the Critical Distance), so both memetic algorithms are considered to have achieved the best results jointly. TMKGSO reached third place in the evaluation on the synthetic data sets but, for all five clustering metrics, it found better performances than all the selected state-of-the-art comparison approaches, reinforcing the advantages of memetic models.
The evaluation of the average execution times of each model (Table 18) showed that, once more, the computational costs of FMKGSO and TMKGSO, in terms of time, are close to those of the state-of-the-art algorithms, while MKGSO, although presenting better performances on the clustering metrics than the non-memetic comparison approaches, is still somewhat slower in execution.
4.1.3 Overall evaluation
At this point, an overall evaluation considering both the real-world and the synthetic data sets is provided. For that, an evaluation based on the Friedman-Nemenyi hypothesis tests considering all data sets (real and synthetic) is performed (Table 19). By considering both collections of data sets, we can get a better understanding of the behavior of all selected algorithms (hybrid partitional approaches and hybrid memetic partitional approaches) when dealing with both controlled and uncontrolled testing scenarios, showing how robust such approaches actually are.
The overall analysis ranked TMKGSO as the best approach among all the tested algorithms, considering the rank values for all metrics, followed by FMKGSO and MKGSO, respectively. The overall evaluation showed that including some K-Means steps during the generational process of EAs is quite advantageous (the memetic approach), and it is preferable to using K-Means only to refine the best solution found so far by the EA. Considering the five selected clustering metrics, we can conclude that, for the current testing bed, the memetic approaches are able to find final solutions with clusters that are, on average, better represented by their centroid vectors (according to the J and \(J_{e_{2}}\) indices), more similar to the actual classes (CR index), more compact (according to the \(D_{max}\) index), and well-split in the problem space in relation to each other (according to \(D_{min}\)). The proposed memetic approaches have also presented the best stability throughout the experimentation, reinforcing their reliability.
For the current testing bed, the best hybrid model has been the TMKGSO approach, which executes some K-Means steps after a prefixed number of generations for each group member: it found good performances according to the clustering indices, and it executed with a computational cost, in terms of average execution time, close to what is expected from any state-of-the-art Evolutionary Algorithm. On the other hand, although able to find good average performances, the MKGSO approach has been quite slow in its execution, which may represent a limitation. But if time is not the main concern when a new data clustering system is designed, MKGSO would still be a better choice than the non-memetic approaches, since it preserves both the good global search capabilities of GSO and the fast convergence of K-Means; the designer of such a system should keep this trade-off in mind when making the decision.
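The TMKGSO-style schedule discussed above can be outlined as interleaving ordinary EA generations with short K-Means refinements of every individual each `refine_every` generations. The sketch below uses our own naming, with `evolve` standing in for the GSO generational operators; it is an illustration of the schedule, not the authors' implementation:

```python
import numpy as np

def kmeans_steps(X, centroids, steps=1):
    """A few Lloyd iterations: assign each pattern to its nearest centroid,
    then move every centroid to the mean of its assigned patterns."""
    centroids = centroids.copy()
    for _ in range(steps):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for c in range(len(centroids)):
            members = X[assign == c]
            if len(members):              # leave empty clusters unchanged
                centroids[c] = members.mean(axis=0)
    return centroids

def memetic_loop(X, population, evolve, fitness, t_max, refine_every, steps):
    """TMKGSO-style schedule (our outline): run the EA operators every
    generation and, each `refine_every` generations, refine every
    individual with a few K-Means steps."""
    for t in range(1, t_max + 1):
        population = evolve(population, fitness)  # GSO/EA operators (user-supplied)
        if t % refine_every == 0:
            population = np.array([kmeans_steps(X, ind, steps)
                                   for ind in population])
    return min(population, key=lambda ind: fitness(X, ind))
```

With `evolve` set to the GSO producer/scrounger/ranger operators and `fitness` to the WCSS, this reduces to the TMKGSO schedule; FMKGSO and MKGSO differ only in the condition that triggers the K-Means refinement.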
5 Conclusions
This paper presented an evaluation of the behavior of partitional Group Search Optimization when employing the K-Means algorithm as a local search operator to deal with the data clustering problem. In that sense, three hybrid memetic algorithms are proposed (FMKGSO, MKGSO and TMKGSO), which use different ways to combine GSO and K-Means in an attempt to improve the GSO population during the generational process of the method. Although K-Means is easily trapped in local optima points, it is known to be a good local searcher when combined with global search metaheuristics, like evolutionary algorithms. Also, GSO, which is known to present good global search capabilities and mechanisms to escape from local optima points, benefits considerably from adopting K-Means to speed up convergence when dealing with data clustering problems.
To evaluate the proposed methods, five state-of-the-art partitional clustering evolutionary algorithms are tested: GA, DE, PSO, standard GSO and BSA. As is current and usual practice in the data clustering literature, the comparison approaches are hybridized with K-Means in such a way that their best final solutions are refined by K-Means after the generational process, so we could assess the actual influence of this technique on the behavior of EAs.
The experimental testing bed adopted in this work included 20 real-world and 10 synthetic data sets. To assess the potential of the proposed memetic models, five clustering metrics have been employed (within-cluster sum of squares, corrected Rand index, weighted quantization error, intra-cluster distance and inter-cluster separation), and the experimental analysis has been performed by means of both an empirical analysis and an overall evaluation through the application of a rank system based on the Friedman-Nemenyi hypothesis tests on each clustering metric.
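Among the adopted metrics, the corrected Rand index of Hubert and Arabie (1985) measures the agreement between a found partition and the actual classes, corrected for chance: 1 for identical partitions (regardless of label names) and about 0 for random agreement. A self-contained sketch:

```python
import numpy as np
from math import comb

def corrected_rand(labels_a, labels_b):
    """Corrected (adjusted) Rand index between two labelings of the
    same patterns, following Hubert and Arabie (1985)."""
    a_vals, b_vals = np.unique(labels_a), np.unique(labels_b)
    # contingency table: n_ij = patterns in cluster i of A and cluster j of B
    table = np.array([[int(np.sum((labels_a == a) & (labels_b == b)))
                       for b in b_vals] for a in a_vals])
    sum_ij = sum(comb(int(n), 2) for n in table.ravel())
    sum_a = sum(comb(int(n), 2) for n in table.sum(axis=1))
    sum_b = sum(comb(int(n), 2) for n in table.sum(axis=0))
    total = comb(len(labels_a), 2)
    expected = sum_a * sum_b / total          # chance-agreement correction
    max_index = (sum_a + sum_b) / 2.0
    return (sum_ij - expected) / (max_index - expected)

y_true = np.array([0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 0])   # same partition, clusters relabeled
print(corrected_rand(y_true, y_pred))  # 1.0
```

Because the index only depends on which patterns are grouped together, it is insensitive to the arbitrary ordering of cluster labels produced by the EAs.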
The experimental results showed that, when K-Means is executed within the generational process of GSO for at least a few steps, it is able to aggregate more local information than when it is adopted only to improve the final solution found by the EA, speeding up the exploration/exploitation promoted by the memetic system. EAs are known to present slow convergence speeds; when K-Means is only employed after the generational process of the EA, given K-Means' lack of mechanisms to recover from local optima points, the final solution found by the hybrid model may still lie in a region of the problem space that does not contain the global optimum, so K-Means will remain trapped in that region.
All memetic approaches have been able to improve on the comparison approaches. The best evaluated memetic combination between K-Means and GSO (according to the Friedman-Nemenyi ranks and the evaluation of the average execution times) has been achieved when K-Means executes for a limited number of steps, for each population individual, after a prefixed number of GSO generations (the TMKGSO algorithm), followed by FMKGSO (where individuals are improved by K-Means only if they fail to improve for a prefixed number of consecutive generations). This means that the interchange between the global search offered by the EA operators and the local search promoted by K-Means (i.e., the memetic framework) may represent a robust approach to explore and exploit the problem search space, leading to better optimization performances. On the other hand, the use of just one K-Means step for each individual of the population in every generation (the MKGSO approach) represented the worst memetic combination in terms of computational costs (measured by the average execution time); still, this combination kept the good performances of the other evaluated memetic approaches (FMKGSO and TMKGSO), showing its potential in terms of performance (considering all five selected clustering metrics) in comparison to all the state-of-the-art non-memetic hybrid algorithms. In a future investigation, mechanisms such as a selection process over the population individuals to be improved by K-Means in each GSO generation will be explored in an attempt to reduce MKGSO's computational costs.
As future work, we intend to evaluate the influence of the adopted distance function on the behavior of the proposed memetic algorithms, so that the proposed approaches become more robust when dealing with clusters of different shapes, overcoming some limitations of the standard Euclidean distance. Also, new fitness functions will be introduced and tested on a larger testing bed, to be obtained by the development of alternative synthetic data set configurations and scenarios. Finally, mechanisms for the automatic determination of the best number of clusters will be implemented, in such a way that this parameter is estimated by the memetic evolutionary model itself, instead of being provided as an a priori input parameter to the algorithm.
References
Abdel-Kader RF (2010) Genetically improved pso algorithm for efficient data clustering. In: Machine Learning and Computing (ICMLC), 2010 Second International Conference on, pp. 71–75. IEEE
Abualigah L (2020) Group search optimizer: a nature-inspired meta-heuristic optimization algorithm with its results, variants, and applications. Neural Computing and Applications pp. 1–24
Ahmadi A, Karray F, Kamel MS (2010) Flocking based approach for data clustering. Nat Comput 9(3):767–791
Ahmadyfard A, Modares H (2008) Combining pso and k-means to enhance data clustering. In: Telecommunications, 2008. IST 2008. International Symposium on, pp. 688–691. IEEE
Akbari M, Izadkhah H (2019) Gakh: A new evolutionary algorithm for graph clustering problem. In: 2019 4th International Conference on Pattern Recognition and Image Analysis (IPRIA), pp. 159–162. IEEE
Arabie P, Hubert LJ, De Soete G (1996) Clustering and classification. World Scientific, Singapore
Asuncion A, Newman D (2007) UCI machine learning repository
Barnard C, Sibly R (1981) Producers and scroungers: a general model and its application to captive flocks of house sparrows. Anim Behav 29(2):543–550
Beddad B, Hachemi K, Postaire JG, Jabloncik F, Messai O (2019) An improvement of spatial fuzzy c-means clustering method for noisy medical image analysis. In: 2019 6th International Conference on Image and Signal Processing and their Applications (ISPA), pp. 1–5. IEEE
Bhavani R, Sadasivam GS, Kumaran R (2011) A novel parallel hybrid k-means-de-aco clustering approach for genomic clustering using mapreduce. In: Information and Communication Technologies (WICT), 2011 World Congress on, pp. 132–137. IEEE
Bonabeau E, Dorigo M, Theraulaz G (1999) Swarm intelligence: from natural to artificial systems, vol 4. Oxford University Press, New York
Bruse JL, Zuluaga MA, Khushnood A, McLeod K, Ntsinjana HN, Hsia TY, Sermesant M, Pennec X, Taylor AM, Schievano S (2017) Detecting clinically meaningful shape clusters in medical image data: metrics analysis for hierarchical clustering applied to healthy and pathological aortic arches. IEEE Trans Biomed Eng 64(10):2373–2383
Canuto A, Neto AF, Silva HM, Xavier-Júnior JC, Barreto CA (2018) Population-based bio-inspired algorithms for cluster ensembles optimization. Natural Computing pp. 1–18
Chen CY, Ye F (2004) Particle swarm optimization algorithm and its application to clustering analysis. In: Networking, Sensing and Control, 2004 IEEE International Conference on, vol. 2, pp. 789–794. IEEE
Chen G, Luo W, Zhu T (2014) Evolutionary clustering with differential evolution. In: Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 1382–1389. IEEE
Chen J, Zheng J, Liu Y, Wu Q (2014) Dynamic economic dispatch with wind power penetration using group search optimizer with adaptive strategies. In: IEEE PES Innovative Smart Grid Technologies, Europe, pp. 1–6. IEEE
Cho PPW, Nyunt TTS (2020) Data clustering based on differential evolution with modified mutation strategy. In: 2020 17th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 222–225. IEEE
Civicioglu P (2013) Backtracking search optimization algorithm for numerical optimization problems. Appl Math Comput 219(15):8121–8144
Couzin ID, Krause J, Franks NR, Levin SA (2005) Effective leadership and decision-making in animal groups on the move. Nature 433(7025):513–516
Cui X, Potok TE, Palathingal P (2005) Document clustering using particle swarm optimization. In: Proceedings 2005 IEEE Swarm Intelligence Symposium, 2005. SIS 2005., pp. 185–191. IEEE
Darwish A, Ezzat D, Hassanien AE (2020) An optimized model based on convolutional neural networks and orthogonal learning particle swarm optimization algorithm for plant diseases diagnosis. Swarm Evol Comput 52:100616
Das S, Abraham A, Konar A (2007) Automatic clustering using an improved differential evolution algorithm. IEEE Trans Syst Man Cybern Part A Syst Hum 38(1):218–237
Demšar J (2006) Statistical comparisons of classifiers over multiple data sets. J Mach Learn Res 7:1–30
Dhiviya S, Sariga A, Sujatha P (2017) Survey on wsn using clustering. In: 2017 Second International Conference on Recent Trends and Challenges in Computational Models (ICRTCCM), pp. 121–125. IEEE
Diderot PKG, Vasudevan N, Sankaran KS (2019) An efficient fuzzy c-means clustering based image dissection algorithm for satellite images. In: 2019 International Conference on Communication and Signal Processing (ICCSP), pp. 0806–0809. IEEE
Dixon A (1959) An experimental study of the searching behaviour of the predatory coccinellid beetle adalia decempunctata (l.). The Journal of Animal Ecology pp. 259–281
Dorigo M, Maniezzo V, Colorni A (1996) Ant system: optimization by a colony of cooperating agents. Syst Man Cybern Part B Cybern IEEE Trans 26(1):29–41
Eiben AE, Smith JE (2015) Introduction to evolutionary computing. Springer, Berlin
Elaziz MA, Nabil N, Ewees AA, Lu S (2019) Automatic data clustering based on hybrid atom search optimization and sine-cosine algorithm. In: 2019 IEEE Congress on Evolutionary Computation (CEC), pp. 2315–2322. IEEE
Esmin AAA, Pereira DL, De Araujo F (2008) Study of different approach to clustering data by using the particle swarm optimization algorithm. In: Evolutionary Computation, 2008. CEC 2008.(IEEE World Congress on Computational Intelligence). IEEE Congress on, pp. 1817–1822. IEEE
Figueiredo E, Macedo M, Siqueira HV, Santana CJ Jr, Gokhale A, Bastos-Filho CJ (2019) Swarm intelligence for clustering-a systematic review with new perspectives on data mining. Eng Appl Artif Intell 82:313–329
Fogel D (2009) Artificial intelligence through simulated evolution. Wiley-IEEE Press, New Jersy
Fogel DB (2006) Evolutionary computation: toward a new philosophy of machine intelligence, vol 1. Wiley, New Jersy
Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through simulated evolution. Wiley, New Jersy
Friedman M (1937) The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J Am Stat Assoc 32(200):675–701
Friedman M (1940) A comparison of alternative tests of significance for the problem of m rankings. Ann Math Stat 11(1):86–92
Geem ZW (2010) Recent advances in harmony search algorithm, vol 270. Springer, Berlin
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
Günen MA, Atasever ÜH, Beşdok E (2017) A novel edge detection approach based on backtracking search optimization algorithm (bsa) clustering. In: 2017 8th International Conference on Information Technology (ICIT), pp. 116–122. IEEE
Hassan BA, Rashid TA (2020) Operational framework for recent advances in backtracking search optimisation algorithm: a systematic review and performance evaluation. Appl Math Comput 370:124919
He H, Tan Y (2012) A two-stage genetic algorithm for automatic clustering. Neurocomputing 81:49–59
He S, Wu Q, Saunders J (2006) A novel group search optimizer inspired by animal behavioural ecology. In: 2006 IEEE Congress on Evolutionary Computation (CEC), pp. 1272–1278. IEEE
He S, Wu QH, Saunders JR (2009) Group search optimizer: an optimization algorithm inspired by animal searching behavior. IEEE Trans Evol Comput 13(5):973–990
Higgins CL, Strauss RE (2004) Discrimination and classification of foraging paths produced by search-tactic models. Behav Ecol 15(2):248–254
Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–72
Hruschka ER, Campello RJ, Freitas AA et al (2009) A survey of evolutionary algorithms for clustering. IEEE Trans Syst Man Cybern Part C Appl Rev 39(2):133–155
Hubert L, Arabie P (1985) Comparing partitions. J Classif 2(1):193–218
Idrissi MAJ, Ramchoun H, Ghanou Y, Ettaouil M (2016) Genetic algorithm for neural network architecture optimization. In: 2016 3rd International Conference on Logistics Operations Management (GOL), pp. 1–4. IEEE
Inkaya T, Kayalıgil S, Özdemirel NE (2016) Swarm intelligence-based clustering algorithms: A survey. In: Unsupervised learning algorithms, pp. 303–341. Springer
Islam MT, Basak PK, Bhowmik P, Khan M (2019) Data clustering using hybrid genetic algorithm with k-means and k-medoids algorithms. In: 2019 23rd International Computer Science and Engineering Conference (ICSEC), pp. 123–128. IEEE
Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: Squirrel search algorithm. Swarm Evol Comput 44:148–175
José-García A, Gómez-Flores W (2016) Automatic clustering using nature-inspired metaheuristics: a survey. Appl Soft Comput 41:192–213
Junaed A, Akhand M, Murase K, et al (2013) Multi-producer group search optimizer for function optimization. In: 2013 International Conference on Informatics, Electronics and Vision (ICIEV), pp. 1–4. IEEE
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (abc) algorithm. J Global Optim 39(3):459–471
Kennedy J (2006) Swarm intelligence. Handbook of nature-inspired and innovative computing. Springer, Berlin, pp 187–219
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Neural Networks, 1995. Proceedings., IEEE International Conference on, vol. 4, pp. 1942–1948. IEEE
Kennedy J, Eberhart RC, Shi Y (2001) Swarm intelligence. Kaufmann, San Francisco
Koza JR, Koza JR (1992) Genetic programming: on the programming of computers by means of natural selection, vol 1. MIT press, Cambridge
Krishnaprabha R, Aloor G (2014) Group search optimizer algorithm in wireless sensor network localization. In: 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI), pp. 1953–1957. IEEE
Latiff NA, Malik NNA, Idoumghar L (2016) Hybrid backtracking search optimization algorithm and k-means for clustering in wireless sensor networks. In: 2016 IEEE 14th Intl Conf on Dependable, Autonomic and Secure Computing, 14th Intl Conf on Pervasive Intelligence and Computing, 2nd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), pp. 558–564. IEEE
Li L, Liang Y, Li T, Wu C, Zhao G, Han X (2019) Boost particle swarm optimization with fitness estimation. Nat Comput 18(2):229–247
Li L, Xu S, Wang S, Ma X (2016) The diseases clustering for multi-source medical sets. In: 2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI), pp. 294–298. IEEE
Li T, Dong H (2019) Unsupervised feature selection and clustering optimization based on improved differential evolution. IEEE Access 7:140438–140450
Li XL (2002) An optimizing method based on autonomous animats: fish-swarm algorithm. Syst Eng Theory Pract 22(11):32–38
Li YZ, Zheng XW, Lu DJ (2015) Virtual network embedding based on multi-objective group search optimizer. In: 2015 10th International Conference on Broadband and Wireless Computing, Communication and Applications (BWCCA), pp. 598–601. IEEE
Li Z, Hu Z, Miao Y, Xiong Z, Xu X, Dai C (2019) Deep-mining backtracking search optimization algorithm guided by collective wisdom. Mathematical Problems in Engineering 2019
Lin CJ, Huang ML (2019) Efficient hybrid group search optimizer for assembling printed circuit boards. AI EDAM 33(3):259–274
Liu F, Xiong L (2011) Survey on text clustering algorithm-research present situation of text clustering algorithm. In: 2011 IEEE 2nd International Conference on Software Engineering and Service Science, pp. 196–199. IEEE
Liu Y, Wu X, Shen Y (2011) Automatic clustering using genetic algorithms. Appl Math Comput 218(4):1267–1279
MacQueen J, et al (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, vol. 1, pp. 281–297. California, USA
Masoud MZ, Jaradat Y, Zaidan D, Jannoud I (2019) To cluster or not to cluster: A hybrid clustering protocol for wsn. In: 2019 IEEE Jordan International Joint Conference on Electrical Engineering and Information Technology (JEEIT), pp. 678–682. IEEE
Miranda PB, Prudêncio RB (2018) A novel context-free grammar for the generation of pso algorithms. Natural Computing pp. 1–19
Mirjalili S (2016) Sca: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Misra S, Kumar R (2016) A literature survey on various clustering approaches in wireless sensor network. In: 2016 2nd international conference on communication control and intelligent systems (CCIS), pp. 18–22. IEEE
Mortezanezhad A, Daneshifar E (2019) Big-data clustering with genetic algorithm. In: 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI), pp. 702–706. IEEE
Naldi MC, Campello RJGB (2014) Evolutionary k-means for distributed data sets. Neurocomputing 127:30–42
Nemenyi P (1962) Distribution-free multiple comparisons. Biometrics 18(2):263
Niu B, Duan Q, Liu J, Tan L, Liu Y (2017) A population-based clustering technique using particle swarm optimization and k-means. Nat Comput 16(1):45–59
Oliveira JFL, Pacifico LDS, Ludermir TB (2013) A hybrid group search optimization based on fish swarms. In: 2013 Brazilian Conference on Intelligent Systems, pp. 51–56. IEEE
Pacifico LDS, Ludermir TB (2013) Cooperative group search optimization. In: 2013 IEEE Congress on Evolutionary Computation, pp. 3299–3306. IEEE
Pacifico LDS, Ludermir TB (2014) A group search optimization method for data clustering. In: Intelligent Systems (BRACIS), 2014 Brazilian Conference on, pp. 342–347. IEEE
Pacifico LDS, Ludermir TB (2014) Improved cooperative group search optimization based on divide-and-conquer strategy. In: 2014 Brazilian Conference on Intelligent Systems, pp. 420–425. IEEE
Pacifico LDS, Ludermir TB (2016) Data clustering using group search optimization with alternative fitness functions. In: 2016 5th Brazilian Conference on Intelligent Systems (BRACIS), pp. 301–306. IEEE
Pacifico LDS, Ludermir TB (2018) Hybrid k-means and improved group search optimization methods for data clustering. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE
Pacifico LDS, Ludermir TB (2019) Hybrid k-means and improved self-adaptive particle swarm optimization for data clustering. In: 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1–7. IEEE
Pacifico LDS, Ludermir TB (2019) A partitional cooperative coevolutionary group search optimization approach for data clustering. In: 2019 8th Brazilian Conference on Intelligent Systems (BRACIS), pp. 347–352. IEEE
Pacifico LDS, Ludermir TB, Britto LFS (2018) A hybrid improved group search optimization and otsu method for color image segmentation. In: 2018 7th Brazilian Conference on Intelligent Systems (BRACIS), pp. 296–301. IEEE
Pacifico LDS, Ludermir TB, Oliveira JFL (2018) Evolutionary elms with alternative treatments for the population out-bounded individuals. In: 2018 7th Brazilian Conference on Intelligent Systems (BRACIS), pp. 151–156. IEEE
Parimalam T, Sundaram KM (2017) Efficient clustering techniques for web services clustering. In: 2017 ieee international conference on computational intelligence and computing research (iccic), pp. 1–4. IEEE
Passino KM (2002) Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst Mag 22(3):52–67
Prabha KA, Visalakshi NK (2014) Improved particle swarm optimization based k-means clustering. In: 2014 International Conference on Intelligent Computing Applications, pp. 59–63. IEEE
Premalatha P, Subasree S (2017) Performance analysis of clustering algorithms in medical datasets. In: 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT), pp. 1–6. IEEE
Rahamathunnisa U, Nallakaruppan M, Anith A, Kumar KS (2020) Vegetable disease detection using k-means clustering and svm. In: 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), pp. 1308–1311. IEEE
Ramos AC, Vellasco M (2018) Quantum-inspired evolutionary algorithm for feature selection in motor imagery eeg classification. In: 2018 IEEE Congress on Evolutionary Computation (CEC), pp. 1–8. IEEE
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) Gsa: a gravitational search algorithm. Inf Sci 179(13):2232–2248
Rechenberg I (1973) Evolution strategy: optimization of technical systems by means of biological evolution. Fromman Holzboog Stuttgart 104:15–16
Ren Z, Zhang A, Wen C, Feng Z (2014) A scatter learning particle swarm optimization algorithm for multimodal problems. Cybern IEEE Trans 44(7):1127–1140
Sapkota N, Alsadoon A, Prasad P, Elchouemi A, Singh AK (2019) Data summarization using clustering and classification: Spectral clustering combined with k-means using nfph. In: 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), pp. 146–151. IEEE
Saraswathi S, Allirani A (2013) Survey on image segmentation via clustering. In: 2013 International Conference on Information Communication and Embedded Systems (ICICES), pp. 331–335. IEEE
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Schwefel HPP (1993) Evolution and optimum seeking: the sixth generation. Wiley, New Jersy
Shi H, Xu M (2018) A data classification method using genetic algorithm and k-means algorithm with optimizing initial cluster center. In: 2018 IEEE International Conference on Computer and Communication Engineering Technology (CCET), pp. 224–228. IEEE
Silva DNG, Pacifico LDS, Ludermir TB (2011) An evolutionary extreme learning machine based on group search optimization. In: 2011 IEEE Congress of Evolutionary Computation (CEC), pp. 574–580. IEEE
Simon D (2013) Evolutionary optimization algorithms. Wiley, New Jersey
Souza E, Santos D, Oliveira G, Silva A, Oliveira AL (2018) Swarm optimization clustering methods for opinion mining. Nat Comput, pp. 1–29
Sreepathi S, Kumar J, Mills RT, Hoffman FM, Sripathi V, Hargrove WW (2017) Parallel multivariate spatio-temporal clustering of large ecological datasets on hybrid supercomputers. In: 2017 IEEE International Conference on Cluster Computing (CLUSTER), pp. 267–277. IEEE
Storn R, Price K (1995) Differential evolution–a simple and efficient adaptive scheme for global optimization over continuous spaces. Tech. Rep. TR-95-012, International Computer Science Institute, Berkeley, CA
Storn R, Price K (1997) Differential evolution-a simple, efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
Taj N, Basu A (2019) Hybridization of genetic and group search optimization algorithm for deadline-constrained task scheduling approach. J Intell Syst 28(1):153–171
Taşci E, Gökalp O, Uğur A (2018) Development of a novel feature weighting method using CMA-ES optimization. In: 2018 26th Signal Processing and Communications Applications Conference (SIU), pp. 1–4. IEEE
Toman SH, Abed MH, Toman ZH (2020) Cluster-based information retrieval by using (K-means)-hierarchical parallel genetic algorithms approach. arXiv preprint arXiv:2008.00150
Toz G, Yücedağ İ, Erdoğmuş P (2019) A fuzzy image clustering method based on an improved backtracking search optimization algorithm with an inertia weight parameter. J King Saud Univ Comput Inf Sci 31(3):295–303
Wan C, Ye M, Yao C, Wu C (2017) Brain MR image segmentation based on Gaussian filtering and improved FCM clustering algorithm. In: 2017 10th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1–5. IEEE
Wang F (2018) A weighted K-means algorithm based on differential evolution. In: 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), pp. 1–2274. IEEE
Wang H, Zuo L, Liu J, Yi W, Niu B (2018) Ensemble particle swarm optimization and differential evolution with alternative mutation method. Nat Comput, pp. 1–14
Wei Y, Niu C, Wang H, Liu D (2019) The hyperspectral image clustering based on spatial information and spectral clustering. In: 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), pp. 127–131. IEEE
Wong MT, He X, Yeh WC (2011) Image clustering using particle swarm optimization. In: 2011 IEEE Congress on Evolutionary Computation (CEC), pp. 262–268. IEEE
Xu D, Tian Y (2015) A comprehensive survey of clustering algorithms. Ann Data Sci 2(2):165–193
Xu H, Xue B, Zhang M (2020) A duplication analysis based evolutionary algorithm for bi-objective feature selection. IEEE Trans Evol Comput
Xu Y, Shu Y (2006) Evolutionary extreme learning machine-based on particle swarm optimization. International Symposium on Neural Networks. Springer, Berlin, pp 644–652
Yang XS (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms, pp. 169–178. Springer
Yang XS (2010) Firefly algorithm, Lévy flights and global optimization. In: Research and development in intelligent systems XXVI, pp. 209–218. Springer
Zhang M, Cao J (2020) An elitist-based differential evolution algorithm for multiobjective clustering. In: 2020 3rd International Conference on Artificial Intelligence and Big Data (ICAIBD), pp. 161–166. IEEE
Zhu L, Ma Y, Bai Y (2020) A self-adaptive multi-population differential evolution algorithm. Nat Comput 19(1):211–235
Acknowledgements
The authors would like to thank FACEPE, CNPq and CAPES (Brazilian Research Agencies) for their financial support.
Cite this article
Pacifico, L.D.S., Ludermir, T.B. An evaluation of k-means as a local search operator in hybrid memetic group search optimization for data clustering. Nat Comput 20, 611–636 (2021). https://doi.org/10.1007/s11047-020-09809-z