Abstract
The availability of large amounts of unlabeled data, together with the practical difficulty of annotating datasets for different domains, has led to the development of models that acquire knowledge in one domain (called the source domain) and apply it in a similar domain (called the target domain). This forms the core of transfer learning. Self-taught learning is a popular transfer learning paradigm frequently used for classification tasks. In a typical self-taught learning setting, the source domain contains a large amount of unlabeled data and the target domain contains a limited number of labeled instances. Self-taught learning then proceeds as follows: given ample unlabeled instances in the source domain, we learn a transformation that maps them to an optimal representation. The transformation learnt in the source domain is then applied to target domain instances, and the transformed instances, along with their corresponding labels, are used for supervised classification. In our work, we apply self-taught learning to image classification. We use stacked autoencoders (for grayscale images) and convolutional autoencoders (for colored images) to obtain an optimal representation of the images in the source domain. The transformation function learnt in the source domain is then used to transform target domain images; the transformed images, together with their labels, are used to build the supervised classifier (in our case, an SVM). Rigorous experiments on the MNIST, CIFAR10, and CIFAR100 datasets show that our self-taught learning approach performs well against the baseline model (where no transfer learning is used), even when the number of labeled instances in the target domain is limited.
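The pipeline described above can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: it uses a single-hidden-layer autoencoder (rather than a stacked or convolutional one) trained with plain gradient descent, random arrays standing in for the source and target image sets, and scikit-learn's SVC as the supervised classifier. All variable names and sizes are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the datasets: the source domain is
# plentiful but unlabeled, the target domain is small and labeled.
X_source = rng.normal(size=(500, 64))   # unlabeled source images (flattened)
X_target = rng.normal(size=(40, 64))    # few labeled target images
y_target = rng.integers(0, 2, size=40)  # their labels

def train_autoencoder(X, n_hidden=16, lr=0.01, epochs=200):
    """One-hidden-layer autoencoder trained by gradient descent on MSE."""
    n_in = X.shape[1]
    W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))
    b2 = np.zeros(n_in)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)        # encoder: learnt representation
        X_hat = H @ W2 + b2             # linear decoder: reconstruction
        err = X_hat - X
        # Backpropagate the mean-squared reconstruction loss.
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H**2)  # tanh derivative
        dW1 = X.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return W1, b1                       # keep only the encoder

# 1. Learn the representation from unlabeled source-domain data.
W1, b1 = train_autoencoder(X_source)

# 2. Transform the labeled target-domain data with the learnt encoder.
Z_target = np.tanh(X_target @ W1 + b1)

# 3. Train the supervised classifier (SVM) on the transformed features.
clf = SVC(kernel="rbf").fit(Z_target, y_target)
print(Z_target.shape)  # transformed target set: (40, 16)
```

The decoder is discarded after training; only the encoder (the learnt transformation) is carried from the source domain to the target domain, which is the essence of the self-taught learning setup.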
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Singh, U.P., Chavan, S., Hindwani, S., Singh, K.P. (2020). Self-taught Learning: Image Classification Using Stacked Autoencoders. In: Nagar, A., Deep, K., Bansal, J., Das, K. (eds) Soft Computing for Problem Solving 2019. Advances in Intelligent Systems and Computing, vol 1138. Springer, Singapore. https://doi.org/10.1007/978-981-15-3290-0_1
DOI: https://doi.org/10.1007/978-981-15-3290-0_1
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-3289-4
Online ISBN: 978-981-15-3290-0
eBook Packages: Intelligent Technologies and Robotics