Abstract
A classifier is said to have good generalization ability if it performs on test data almost as well as it does on the training data. The main result of this paper provides a sufficient condition for a learning algorithm to have good finite-sample generalization ability. The criterion applies even in some cases where the set of all possible classifiers has infinite VC dimension. The result is used to prove the good generalization ability of support vector machines by exploiting a sparse-representation property.
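To indicate the flavor of such a result, here is a minimal sketch of a classical sample-compression bound, the kind of argument a sparse-representation property makes available; the notation (sample size $m$, representation size $d$, confidence level $\delta$) is illustrative, and this is not the paper's exact statement. Suppose the classifier $h$ returned by training on $m$ i.i.d. examples can be reconstructed from a subsample of only $d$ of them (as a support vector machine can be reconstructed from its support vectors), and that $h$ classifies the remaining $m-d$ training examples correctly. Then with probability at least $1-\delta$,

$$\mathrm{err}(h) \;\le\; \frac{1}{m-d}\left(\ln\binom{m}{d} + \ln\frac{1}{\delta}\right).$$

Since $\binom{m}{d} \le (em/d)^d$, the right-hand side is of order $(d/m)\ln(m/d)$ when $d \ll m$: a sparse representation (small $d$) yields a nontrivial finite-sample guarantee even though the class of all possible classifiers may have infinite VC dimension.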
About this article
Cite this article
Gat, Y. A Learning Generalization Bound with an Application to Sparse-Representation Classifiers. Machine Learning 42, 233–239 (2001). https://doi.org/10.1023/A:1007605716762