Abstract
We consider a framework for learning additive classifiers based on regularized empirical risk minimization, where the regularization favors “smooth” functions. We present representations of classifiers for which the optimization problem can be solved efficiently. The first family of such classifiers is derived from the penalized spline formulation of Eilers and Marx, modified to enable linearization. The second is a novel family of classifiers based on classes of orthogonal basis functions with orthogonal derivatives. Both families lead to explicit feature embeddings that can be used with off-the-shelf linear solvers such as LIBLINEAR to obtain additive classifiers. The proposed classifiers offer better trade-offs between training time, memory overhead, and accuracy than the state of the art in additive classifier training.
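To make the pipeline described above concrete, here is a minimal sketch, not the paper's penalized-spline or orthogonal-basis construction: each input dimension is encoded with a handful of triangular (degree-1 B-spline) basis functions, and a linear SVM is trained on the concatenated responses via scikit-learn's LinearSVC, which wraps LIBLINEAR. The function hat_embed, the number of bins, and the toy data are illustrative assumptions, and the smoothness regularizer from the paper is omitted.

```python
# Minimal sketch: an additive classifier obtained by embedding each input
# dimension into a few basis responses and training a linear SVM on the
# concatenation. Triangular "hat" functions stand in for the paper's
# spline or orthogonal-basis embeddings; no smoothness penalty is used.
import numpy as np
from sklearn.svm import LinearSVC  # LIBLINEAR under the hood

def hat_embed(X, n_bins=8):
    """Encode each column of X (assumed scaled to [0, 1]) with n_bins
    triangular basis functions at uniformly spaced centers.
    Returns an array of shape (n_samples, n_features * n_bins)."""
    centers = np.linspace(0.0, 1.0, n_bins)
    width = centers[1] - centers[0]
    # Each hat is 1 at its center and decays linearly to 0 at distance `width`.
    Z = np.clip(1.0 - np.abs(X[..., None] - centers) / width, 0.0, None)
    n, d = X.shape
    return Z.reshape(n, d * n_bins)

# Toy usage on synthetic data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
X = rng.random((500, 5))
y = (np.sin(2 * np.pi * X[:, 0]) + X[:, 1] > 0.8).astype(int)
clf = LinearSVC(C=1.0).fit(hat_embed(X), y)
print("training accuracy:", clf.score(hat_embed(X), y))
```

Because the decision function is linear in the per-dimension basis responses, the learned classifier is additive: a piecewise-linear function of each input dimension, summed over dimensions.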
Keywords
- Training Time
- Computer Vision Application
- Memory Overhead
- Empirical Risk Minimization
- Orthogonal Basis Function
References
Eilers, P., Marx, B.: Generalized linear additive smooth structures. Journal of Computational and Graphical Statistics 11(4), 758–783 (2002)
Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR (2006)
Maji, S., Berg, A.C.: Max margin additive classifiers for detection. In: ICCV (2009)
Vedaldi, A., Zisserman, A.: Efficient additive kernels via explicit feature maps. In: CVPR (2010)
Rahimi, A., Recht, B.: Random features for large-scale kernel machines. In: NIPS (2007)
Hein, M., Bousquet, O.: Hilbertian metrics and positive definite kernels on probability measures. In: AISTATS (2005)
Hastie, T., Tibshirani, R.: Generalized Additive Models. Chapman & Hall/CRC (1990)
Pearce, N., Wand, M.: Penalized splines and reproducing kernel methods. The American Statistician 60(3), 233–240 (2006)
Wahba, G.: Spline models for observational data, vol. 59. Society for Industrial and Applied Mathematics (1990)
Eilers, P., Marx, B.: Splines, knots, and penalties. Wiley Interdisciplinary Reviews: Computational Statistics 2(6), 637–653 (2010)
Webster, M.: Orthogonal polynomials with orthogonal derivatives. Mathematische Zeitschrift 39, 634–638 (1935)
Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: Primal estimated sub-gradient solver for SVM. In: ICML (2007)
LeCun, Y., Cortes, C.: The MNIST database of handwritten digits (1998)
Munder, S., Gavrila, D.M.: An experimental study on pedestrian classification. IEEE TPAMI 28(11) (2006)
Maji, S., Berg, A.C., Malik, J.: Classification using intersection kernel support vector machines is efficient. In: CVPR (2008)
Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: CVPR (2005)
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Maji, S. (2012). Linearized Smooth Additive Classifiers. In: Fusiello, A., Murino, V., Cucchiara, R. (eds) Computer Vision – ECCV 2012. Workshops and Demonstrations. ECCV 2012. Lecture Notes in Computer Science, vol 7583. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33863-2_24
DOI: https://doi.org/10.1007/978-3-642-33863-2_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-33862-5
Online ISBN: 978-3-642-33863-2
eBook Packages: Computer Science