Abstract
Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because of its efficient use in greedy layer-wise pre-training of deep neural networks. Compared to Neural Networks (NNs), Gaussian Processes (GPs) have been shown to be superior in model inference, optimization and performance. GPs have been successfully applied in nonlinear Dimensionality Reduction (DR) algorithms such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the novel model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the smooth back-constraint function is represented by a GP. Experiments verify the performance of the newly proposed model.
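The abstract's core idea — a GP encoder mapping observed data to latent coordinates (playing the role of the smooth back constraint) and a GP decoder mapping latents back to data space, with reconstruction error as the objective — can be sketched in a minimal form. Note this is an illustrative sketch only, not the paper's actual GPAM training procedure: the RBF kernel, fixed hyperparameters, PCA initialization of the latents, and the single encoder smoothing pass are all assumptions made here for brevity; the real model would optimize latents and kernel hyperparameters jointly.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel matrix between row-sets A and B.
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_posterior_mean(X_train, Y_train, X_test, noise=1e-3):
    # GP regression posterior mean: K(X*,X) (K(X,X) + noise*I)^{-1} Y.
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    return Ks @ np.linalg.solve(K, Y_train)

# Toy data: a noisy 1-D curve embedded in 3-D.
rng = np.random.default_rng(0)
t = np.linspace(-2.0, 2.0, 50)[:, None]
Y = np.hstack([np.sin(t), np.cos(t), t]) + 0.01 * rng.standard_normal((50, 3))

# Latent initialization by PCA (an assumption of this sketch).
Y_centered = Y - Y.mean(0)
_, _, Vt = np.linalg.svd(Y_centered, full_matrices=False)
X = Y_centered @ Vt[:1].T            # 1-D latent coordinates

# Encoder GP: smooth map from data space Y to the latents,
# analogous to the GP back-constraint in BC-GPLVM.
X_enc = gp_posterior_mean(Y, X, Y)

# Decoder GP: map the latents back to data space; the reconstruction
# error below is the quantity a GP autoencoder would minimize.
Y_rec = gp_posterior_mean(X_enc, Y, X_enc)
err = np.mean((Y - Y_rec) ** 2)
```

On this smooth toy curve the decoder GP reconstructs the data almost exactly, showing that a 1-D latent suffices; in the full model the encoder and decoder would share the objective rather than being fitted in two separate passes.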
Copyright information
© 2014 Springer International Publishing Switzerland
Cite this paper
Jiang, X., Gao, J., Hong, X., Cai, Z. (2014). Gaussian Processes Autoencoder for Dimensionality Reduction. In: Tseng, V.S., Ho, T.B., Zhou, ZH., Chen, A.L.P., Kao, HY. (eds) Advances in Knowledge Discovery and Data Mining. PAKDD 2014. Lecture Notes in Computer Science(), vol 8444. Springer, Cham. https://doi.org/10.1007/978-3-319-06605-9_6
DOI: https://doi.org/10.1007/978-3-319-06605-9_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-06604-2
Online ISBN: 978-3-319-06605-9
eBook Packages: Computer Science, Computer Science (R0)