Abstract
One approach to learning classification rules from examples is to build decision trees. A review and comparison paper by Mingers (1989) looked at the first stage of tree building, which uses a “splitting rule” to grow trees with a greedy recursive partitioning algorithm. That paper considered a number of different measures and experimentally examined their behavior on four domains. Its main conclusion was that a random splitting rule does not significantly decrease classification accuracy. This note suggests an alternative experimental method and presents additional results on further domains. Our results indicate that random splitting leads to increased error. These results are at variance with those presented by Mingers.
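To make the setting concrete, the following is a minimal Python sketch of greedy recursive partitioning with a pluggable splitting rule; it is not the implementation used in the paper, and all names (`grow_tree`, `information_gain`, `random_rule`) are illustrative. The point it demonstrates is the one under comparison: an informed measure such as information gain versus a rule that picks the split attribute at random.

```python
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label multiset, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Expected entropy reduction from splitting on attribute `attr`."""
    n = len(labels)
    split = {}
    for row, y in zip(rows, labels):
        split.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

def grow_tree(rows, labels, attrs, measure):
    """Greedy recursive partitioning with a pluggable splitting rule.

    `measure(rows, labels, attr)` scores each candidate attribute.
    A random rule returns an arbitrary score, so the chosen split
    attribute carries no information about the class.
    """
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels).most_common(1)[0][0]  # leaf: majority class
    best = max(attrs, key=lambda a: measure(rows, labels, a))
    node = {"attr": best, "children": {}}
    for value in set(row[best] for row in rows):
        subset = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*subset)
        node["children"][value] = grow_tree(
            list(sub_rows), list(sub_labels),
            [a for a in attrs if a != best], measure)
    return node

# The two splitting rules being contrasted:
gain_rule = information_gain
random_rule = lambda rows, labels, attr: random.random()
```

Growing trees on the same training data with `gain_rule` and with `random_rule`, then measuring error on held-out examples, reproduces the shape of the comparison this note makes.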
References
Breiman, L., Friedman, J., Olshen, R., & Stone, C. (1984). Classification and regression trees. Belmont, CA: Wadsworth.
Buntine, W. (1989). Learning classification rules using Bayes. In Proceedings of the Sixth International Machine Learning Workshop. Cornell, New York: Morgan Kaufmann.
Cestnik, B., Kononenko, I., & Bratko, I. (1987). Assistant86: A knowledge-elicitation tool for sophisticated users. In I. Bratko, & N. Lavrač (Eds.), Progress in machine learning: Proceedings of EWSL-87, (pp. 31–45), Bled, Yugoslavia. Sigma Press.
Clark, P., & Niblett, T. (1989). The CN2 induction algorithm. Machine Learning, 3, 261–283.
Fisher, D., & McKusick, K. (1989). An empirical comparison of ID3 and back-propagation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, (pp. 788–793). Detroit: Morgan Kaufmann.
Fisher, R. (1936). The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, 179–188.
Kibler, D., & Langley, P. (1988). Machine learning as an experimental science. In D. Sleeman (Ed.), Proceedings of the Third European Working Session on Learning, (pp. 81–92). Glasgow: Pitman Publishing.
Mingers, J. (1989). An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3, 319–342.
Mooney, R., Shavlik, J., Towell, G., & Gove, A. (1989). An experimental comparison of symbolic and connectionist learning algorithms. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, (pp. 775–780). Detroit: Morgan Kaufmann.
Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.
Quinlan, J. R. (1988). Simplifying decision trees. In B. Gaines & J. Boose (Eds.), Knowledge acquisition for knowledge-based systems, (pp. 239–252). London: Academic Press.
Quinlan, J. R. (1989). Unknown attribute values in induction. In Proceedings of the Sixth International Machine Learning Workshop. Cornell, New York: Morgan Kaufmann.
Quinlan, J. R., Compton, P., Horn, K., & Lazarus, L. (1987). Inductive knowledge acquisition: A case study. In J. R. Quinlan (Ed.), Applications of expert systems. London: Addison-Wesley.
Schlimmer, J., & Granger Jr., R. (1986). Incremental learning from noisy data. Machine Learning, 1, 317–354.
Weiss, S., & Kapouleas, I. (1989). An empirical comparison of pattern recognition, neural nets, and machine learning classification methods. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, (pp. 781–787). Detroit: Morgan Kaufmann.
Cite this article
Buntine, W., Niblett, T. A Further Comparison of Splitting Rules for Decision-Tree Induction. Machine Learning 8, 75–85 (1992). https://doi.org/10.1023/A:1022686419106