Abstract
Ensemble methods are general techniques for improving the accuracy of any given learning algorithm. Boosting is a learning method that builds a classifier ensemble incrementally. In this work we propose an improvement of the classical and inverse AdaBoost algorithms to deal with the presence of outliers in the data. Our Robust Alternating AdaBoost (RADA) algorithm alternates between classic and inverse AdaBoost to create a more stable procedure. RADA bounds the influence of outliers on the empirical distribution, detects and diminishes the empirical probability of "bad" samples, and achieves more accurate classification under contaminated data.
We report performance results on synthetic and real datasets, the latter obtained from a benchmark site.
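The full RADA algorithm is given in the paper body; as a rough illustration of the idea described above, the following sketch alternates the classic AdaBoost weight update (which up-weights misclassified samples) with the inverse update (which down-weights them), and caps each sample's weight so that no single outlier can dominate the empirical distribution. All names (`alternating_boost`, the stump helpers, the `w_cap` parameter) are hypothetical and not taken from the paper.

```python
import numpy as np

def stump_train(X, y, w):
    """Exhaustively fit a 1-D decision stump minimizing weighted error."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = stump_predict(X, j, t, s)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def stump_predict(X, j, t, s):
    # Predict +s for feature j >= threshold t, -s otherwise.
    pred = s * np.sign(X[:, j] - t + 1e-12)
    pred[pred == 0] = s
    return pred

def alternating_boost(X, y, rounds=10, w_cap=0.2):
    """Toy alternating boosting: even rounds use the classic AdaBoost
    update, odd rounds the inverse update; weights are capped at a
    fraction w_cap of the total mass to bound outlier influence."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for r in range(rounds):
        err, j, t, s = stump_train(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(X, j, t, s)
        # classic update (+1): mistakes gain weight;
        # inverse update (-1): mistakes lose weight.
        direction = 1.0 if r % 2 == 0 else -1.0
        w *= np.exp(-direction * alpha * y * pred)
        w = np.minimum(w, w_cap * w.sum())          # bound single-sample influence
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def ensemble_predict(ensemble, X):
    score = sum(a * stump_predict(X, j, t, s) for a, j, t, s in ensemble)
    return np.sign(score)
```

On a toy separable problem, e.g. `X = np.arange(6.0).reshape(-1, 1)` with labels `y = np.array([-1, -1, -1, 1, 1, 1])`, the alternating ensemble classifies the training set correctly; the weight cap only matters once heavily misclassified (outlying) samples start accumulating mass.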
This work was supported in part by Research Grants Fondecyt 1061201 and 1070220, and in part by International Cooperation Research Grants Fondecyt 7070262 and 7070093.
© 2007 Springer-Verlag Berlin Heidelberg
Allende-Cid, H., Salas, R., Allende, H., Ñanculef, R. (2007). Robust Alternating AdaBoost. In: Rueda, L., Mery, D., Kittler, J. (eds) Progress in Pattern Recognition, Image Analysis and Applications. CIARP 2007. Lecture Notes in Computer Science, vol 4756. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76725-1_45
Print ISBN: 978-3-540-76724-4
Online ISBN: 978-3-540-76725-1