Abstract
Bayesian learning (Tipping, 2004; Barber, 2012) is the name commonly used to identify a set of computational methods for supervised learning based on Bayes’ Theorem. Broadly speaking, Bayes’ Theorem deals with the modification of our assessment of the probability of an event as a consequence of the occurrence of one or more facts. For instance, what probability would you assign right now to the event “somebody stole my car”? Of course, this can depend on many different factors, but on a normal day, one may argue that this probability is generally rather low. Now, imagine that you go looking for your car, and the car is not in the place where you remember parking it. What is now the probability of the event “somebody stole my car”? The fact that the car is not where it was parked clearly changes the probability that it was stolen. This property is general: the realization of some events can modify the probability of others. It can be exploited to tackle Machine Learning tasks, for instance classification: data, interpreted as events, can be used to change the probability that a given observation belongs to a given class. Before studying this mechanism in detail, let us first present Bayes’ Theorem and its most immediate use in Machine Learning.
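The car example above can be made concrete with Bayes’ Theorem, P(A | B) = P(B | A) P(A) / P(B). A minimal sketch in Python, where all the numeric values (the prior probability of theft, the likelihood of the car being missing under each hypothesis) are hypothetical figures chosen purely for illustration:

```python
# Hypothetical values, for illustration only.
p_stolen = 0.001                     # prior: P(stolen) on a normal day
p_missing_given_stolen = 0.95        # P(car missing | stolen)
p_missing_given_not_stolen = 0.01    # P(car missing | not stolen), e.g. misremembered spot

# Marginal probability of the observed fact, via the law of total probability:
# P(missing) = P(missing | stolen) P(stolen) + P(missing | not stolen) P(not stolen)
p_missing = (p_missing_given_stolen * p_stolen
             + p_missing_given_not_stolen * (1 - p_stolen))

# Bayes' Theorem: posterior P(stolen | missing)
p_stolen_given_missing = p_missing_given_stolen * p_stolen / p_missing

print(round(p_stolen_given_missing, 4))  # the posterior is far larger than the prior
```

With these made-up numbers, observing that the car is missing raises the probability of theft from 0.001 to roughly 0.087, an increase of nearly two orders of magnitude, which is exactly the updating mechanism the chapter builds on.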
© 2023 Springer Nature Switzerland AG
Cite this chapter
Vanneschi, L., Silva, S. (2023). Bayesian Learning. In: Lectures on Intelligent Systems. Natural Computing Series. Springer, Cham. https://doi.org/10.1007/978-3-031-17922-8_9
Print ISBN: 978-3-031-17921-1
Online ISBN: 978-3-031-17922-8