Abstract
The motivation for the work presented in this book arises from the problem of time series prediction. A standard approach is to train a neural network to predict a single future value as a function of a so-called lag vector of m past observations or measurements. The crucial requirement for the successful application of such a scheme is that the probability distribution of the targets conditional on the inputs be unimodal and symmetric. However, even when a series of past measurements is subject only to Gaussian observational noise, this assumption does not necessarily hold. On the contrary, for reasons discussed in Chapter 1, the distribution is likely to be distorted and may be multimodal. This suggests that, in general, it is not sufficient to train a network to predict only a single value; rather, the complete probability distribution of the target conditional on the input vector should be modelled.
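The two ideas in the abstract — the lag-vector training scheme and the failure of single-value prediction under a multimodal conditional distribution — can be illustrated with a short sketch. This is not code from the book; it is a minimal, hypothetical example in numpy. The `make_lag_data` helper and the quadratic toy model are assumptions chosen only to make the point concrete: when the observed input relates to the target through a many-to-one map plus noise, the conditional distribution of the target has two modes, and the conditional mean (what a squared-error-trained network would learn) lies between them.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_lag_data(series, m):
    """Build (lag vector, target) pairs: m past values predict the next one.

    Hypothetical helper illustrating the standard scheme described above.
    """
    series = np.asarray(series)
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]
    return X, y

# Toy illustration of a multimodal conditional distribution (an assumed
# example, not the book's): if the target x relates to the observed value
# through obs = x**2 + Gaussian noise, then p(x | obs) has two modes near
# +sqrt(obs) and -sqrt(obs).
xs = rng.uniform(-2.0, 2.0, 50_000)
obs = xs**2 + rng.normal(0.0, 0.05, size=xs.shape)

near_one = np.abs(obs - 1.0) < 0.05       # condition on obs ≈ 1
cond_mean = xs[near_one].mean()           # ≈ 0: falls between the two modes
mode_magnitude = np.abs(xs[near_one]).mean()  # ≈ 1: where the modes actually sit
```

Here `cond_mean` is close to zero — a value the target essentially never takes — while the probability mass sits near ±1. A network trained to output a single value would predict the useless conditional mean, which is why modelling the full conditional distribution is argued to be necessary.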
Copyright information
© 1999 Springer-Verlag London Limited
Cite this chapter
Husmeier, D. (1999). Summary. In: Neural Networks for Conditional Probability Estimation. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-0847-4_17
DOI: https://doi.org/10.1007/978-1-4471-0847-4_17
Publisher Name: Springer, London
Print ISBN: 978-1-85233-095-8
Online ISBN: 978-1-4471-0847-4