Abstract
We experimentally evaluated bagging and six other randomization-based ensemble tree methods. Bagging uses randomization to create multiple training sets. Other approaches, such as Randomized C4.5, apply randomization in selecting a test at a given node of a tree. Still other approaches, such as random forests and random subspaces, apply randomization in selecting the attributes used to build the tree. Boosting, by contrast, incrementally builds classifiers by focusing on examples misclassified by the existing classifiers. Experiments were performed on 34 publicly available data sets. While each of the other six approaches has some strengths, we find that none of them is consistently more accurate than standard bagging when the differences are tested for statistical significance.
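The bagging procedure the abstract describes admits a very short implementation. The following is a minimal sketch, not the paper's actual setup: it assumes scikit-learn decision trees as the base learner (the paper used C4.5-style trees) and the UCI iris data purely for illustration. The essential ingredients are bootstrap resampling with replacement and majority voting.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    # Illustrative assumptions: iris data and 25 trees; the paper's
    # experiments used other data sets and base learners.
    rng = np.random.default_rng(0)
    X, y = load_iris(return_X_y=True)

    n_trees = 25
    trees = []
    for _ in range(n_trees):
        # Bootstrap sample: n examples drawn with replacement,
        # giving each tree a randomized training set.
        idx = rng.integers(0, len(X), size=len(X))
        trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

    # Combine the ensemble by majority vote over per-tree predictions.
    votes = np.stack([t.predict(X) for t in trees])
    ensemble_pred = np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), axis=0, arr=votes)
    print("training accuracy:", (ensemble_pred == y).mean())

The other methods compared in the paper differ only in where the randomization enters: random subspaces and random forests randomize the attributes considered, and Randomized C4.5 randomizes the choice of test at each node.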
Cite this paper
Banfield, R.E., Hall, L.O., Bowyer, K.W., Bhadoria, D., Kegelmeyer, W.P., Eschrich, S. (2004). A Comparison of Ensemble Creation Techniques. In: Roli, F., Kittler, J., Windeatt, T. (eds) Multiple Classifier Systems. MCS 2004. Lecture Notes in Computer Science, vol 3077. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-25966-4_22
Print ISBN: 978-3-540-22144-9
Online ISBN: 978-3-540-25966-4