Introduction
In a supervised learning task we are interested in finding a function that maps a set of given examples into a set of classes or categories. This function, called a classifier, is later used to classify new examples, which in general differ from the given ones. The process of modeling the classifier is called training, and the algorithm used during training is called the learning algorithm. The initial examples given to the learning algorithm are called training examples; they are usually provided to the robot by an external agent, also called a tutor. Once the classifier is learned, it should be evaluated on new, unseen examples to measure its performance. These new examples are called test examples.
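The train/test workflow described above can be sketched in a few lines of Python. The one-nearest-neighbour rule, the toy 2-D data, and the function names below are illustrative assumptions for this sketch, not the learners discussed in this book.

```python
# Sketch of the supervised-learning workflow: a learning algorithm
# builds a classifier from labeled training examples, and the
# classifier is then evaluated on unseen test examples.
# The 1-nearest-neighbour rule is used here only as a stand-in learner.

def train(training_examples):
    """'Training' for 1-NN simply stores the labeled examples."""
    return list(training_examples)

def classify(model, x):
    """Predict the class of x as the label of its closest stored example."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(model, key=lambda ex: sq_dist(ex[0], x))
    return label

def accuracy(model, test_examples):
    """Fraction of test examples the classifier labels correctly."""
    correct = sum(classify(model, x) == y for x, y in test_examples)
    return correct / len(test_examples)

# Toy data: 2-D points labeled by which side of the x = 0 line they lie on.
train_set = [((-2.0, 0.5), "left"), ((-1.0, -1.0), "left"),
             ((1.5, 0.0), "right"), ((2.0, 1.0), "right")]
test_set = [((-1.5, 0.0), "left"), ((1.0, -0.5), "right")]

model = train(train_set)               # learning algorithm produces the classifier
print(accuracy(model, test_set))       # → 1.0 on this separable toy data
```

The separation into `train`, `classify`, and `accuracy` mirrors the three stages in the text: learning from the tutor's examples, applying the classifier, and testing it on unseen data.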
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this chapter
Mozos, Ó.M. (2010). Supervised Learning. In: Semantic Labeling of Places with Mobile Robots. Springer Tracts in Advanced Robotics, vol 61. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-11210-2_2
Print ISBN: 978-3-642-11209-6
Online ISBN: 978-3-642-11210-2