Abstract
Traditionally, feature selection has been required as a preliminary step for many pattern recognition problems. In recent years, distributed learning has received much attention due to the proliferation of big databases, which are in some cases distributed across different nodes. However, most existing feature selection algorithms were designed to work in a centralized manner, i.e., using the whole dataset at once. In this research, a new approach for applying filter methods in a distributed manner is presented. The approach splits the data horizontally, i.e., by samples. A filter is applied at each partition over several rounds to obtain a stable set of features, and a merging procedure then combines the results into a single subset of relevant features. Five of the most well-known filters were used to test the approach. Experimental results on six representative datasets show that execution time is shortened while performance is maintained or even improved compared to the standard algorithms applied to the non-partitioned datasets.
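The scheme the abstract describes (horizontal sample partitioning, per-partition filtering over several rounds, then a merge step) can be illustrated with the following sketch. This is not the authors' implementation: the filter here is a simple absolute-correlation ranking standing in for the five filters used in the paper, and all function names, the number of rounds, and the vote threshold used for merging are illustrative assumptions.

```python
import numpy as np

def filter_rank(X, y, k):
    """Toy univariate filter: indices of the k features most correlated with y."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(scores)[-k:])

def distributed_select(X, y, n_parts=4, rounds=3, k=5, min_votes=2, seed=0):
    """Horizontal (by-sample) partitioning with a per-partition filter and a
    vote-based merge, loosely following the abstract's description."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(X.shape[1], dtype=int)
    for _ in range(rounds):                        # several rounds for stability
        idx = rng.permutation(len(y))
        for part in np.array_split(idx, n_parts):  # split the samples, not the features
            for f in filter_rank(X[part], y[part], k):
                votes[f] += 1
    # merge: keep features selected often enough across partitions and rounds
    return sorted(np.flatnonzero(votes >= min_votes))

# toy data: features 0 and 3 carry the class signal, the rest are noise
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
selected = distributed_select(X, y)
```

Because each partition sees only a slice of the samples, the per-round filter calls are independent and could run on separate nodes; the vote counter is the only state the merge step needs.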
Keywords
- Feature Selection
- Feature Subset
- Feature Selection Method
- Feature Selection Algorithm
- Pattern Recognition Problem
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Bolón-Canedo, V., Sánchez-Maroño, N., Cerviño-Rabuñal, J. (2013). Scaling Up Feature Selection: A Distributed Filter Approach. In: Bielza, C., et al. Advances in Artificial Intelligence. CAEPIA 2013. Lecture Notes in Computer Science, vol. 8109. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40643-0_13
DOI: https://doi.org/10.1007/978-3-642-40643-0_13
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-40642-3
Online ISBN: 978-3-642-40643-0