Abstract
Knowledge transfer from multiple source domains to a target domain is crucial in transfer learning. Most existing methods either learn weights for the different domains based on the similarity between each source domain and the target domain, or jointly learn more precise classifiers from the source-domain data by maximizing the consensus of their predictions on the target-domain data. However, these methods only measure similarities or build classifiers in the original data space, and fail to discover a more powerful feature representation of the data when transferring knowledge from multiple source domains to the target domain. In this paper, we propose a new framework for transfer learning with multiple source domains. Specifically, in the proposed framework, we adopt autoencoders to construct a feature mapping from an original instance to a hidden representation, and train multiple classifiers on the source-domain data jointly by imposing an entropy-based consensus regularizer on their predictions on the target domain. Based on this framework, a particular solution is proposed to learn the hidden representation and the classifiers simultaneously. Experimental results on real-world image and text datasets demonstrate the effectiveness of our proposed method compared with state-of-the-art methods.
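To make the framework concrete, the following is a minimal NumPy sketch of an objective of the kind the abstract describes: a one-layer autoencoder supplies a shared hidden representation, one logistic classifier is fit per source domain on that representation, and the entropy of the averaged target-domain predictions serves as a consensus regularizer. The exact losses, regularizer form, and trade-off parameters (`lam`, `gamma` here) in the paper may differ; all function and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def autoencoder_loss(X, W, b, b_rec):
    """Reconstruction error of a one-layer autoencoder with tied weights."""
    H = sigmoid(X @ W + b)            # hidden representation of each instance
    X_rec = sigmoid(H @ W.T + b_rec)  # reconstruction back to input space
    return np.mean((X - X_rec) ** 2), H


def consensus_entropy(probs_per_source):
    """Binary entropy of the source classifiers' averaged target predictions.

    Low entropy means the classifiers agree and are confident, so
    minimizing this term encourages consensus on the target domain."""
    p = np.clip(np.mean(probs_per_source, axis=0), 1e-12, 1 - 1e-12)
    return float(np.mean(-p * np.log(p) - (1 - p) * np.log(1 - p)))


def objective(Xs_list, ys_list, Xt, W, b, b_rec, thetas, lam=1.0, gamma=1.0):
    """Combined objective: reconstruction + source losses + consensus term."""
    # (1) autoencoder reconstruction over all source and target instances
    rec, _ = autoencoder_loss(np.vstack(Xs_list + [Xt]), W, b, b_rec)

    # (2) logistic loss of each source classifier on its own hidden features
    clf = 0.0
    for Xs, ys, theta in zip(Xs_list, ys_list, thetas):
        Hs = sigmoid(Xs @ W + b)
        p = np.clip(sigmoid(Hs @ theta), 1e-12, 1 - 1e-12)
        clf += -np.mean(ys * np.log(p) + (1 - ys) * np.log(1 - p))

    # (3) entropy-based consensus regularizer on target-domain predictions
    Ht = sigmoid(Xt @ W + b)
    probs = np.array([sigmoid(Ht @ theta) for theta in thetas])
    return rec + lam * clf + gamma * consensus_entropy(probs)
```

In a full method all parameters (`W`, `b`, `b_rec`, and the `thetas`) would be optimized jointly, e.g. by gradient descent on this objective, so that the hidden representation and the classifiers are learned simultaneously as the abstract states.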
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
Cite this paper
Zhuang, F., Cheng, X., Pan, S.J., Yu, W., He, Q., Shi, Z. (2014). Transfer Learning with Multiple Sources via Consensus Regularized Autoencoders. In: Calders, T., Esposito, F., Hüllermeier, E., Meo, R. (eds) Machine Learning and Knowledge Discovery in Databases. ECML PKDD 2014. Lecture Notes in Computer Science, vol 8726. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-44845-8_27
Print ISBN: 978-3-662-44844-1
Online ISBN: 978-3-662-44845-8