Abstract
Multi-modal data extracted from different sensors in a smart home can be fused to build models that recognize the daily living activities of residents. This paper proposes a deep Convolutional Neural Network (CNN) for the activity recognition task, using multi-modal data collected from a smart residential home. The dataset contains accelerometer data (the three perpendicular components of acceleration and the strength of the accelerometer signal received by four receivers), video data (15 time series describing the 2D and 3D center of mass and bounding box extracted from an RGB-D camera), and Passive Infra-Red (PIR) sensor data. The performance of the CNN is compared to that of a Deep Belief Network (DBN), which uses Restricted Boltzmann Machines to pre-train the network. Experimental results revealed that a CNN with two pairs of convolutional and max-pooling layers achieved better classification accuracy than the DBN: when trained on classes with a high number of training samples, the DBN achieved 65.97% classification accuracy, whereas the CNN achieved 75.33%. The results demonstrate the challenges of dealing with multi-modal data and highlight the importance of having a suitable number of samples within each class for sufficiently training and testing deep learning models.
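The "two pairs of convolutional and max-pooling layers" architecture described above can be illustrated with a minimal forward-pass sketch. This is not the authors' implementation: the channel count (23), window length (128), kernel widths, filter counts, and the 20-class output layer are all illustrative assumptions, written in plain NumPy so the layer shapes are explicit.

```python
import numpy as np

def conv1d_relu(x, kernels, bias):
    """Valid 1-D convolution over a (channels, length) input, ReLU activation."""
    out_ch, in_ch, k = kernels.shape
    length = x.shape[1] - k + 1
    out = np.zeros((out_ch, length))
    for o in range(out_ch):
        for t in range(length):
            out[o, t] = np.sum(kernels[o] * x[:, t:t + k]) + bias[o]
    return np.maximum(out, 0.0)

def maxpool1d(x, p=2):
    """Non-overlapping max pooling along the time axis."""
    length = x.shape[1] // p
    return x[:, :length * p].reshape(x.shape[0], length, p).max(axis=2)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)

# Hypothetical input window: 23 fused sensor channels x 128 time steps
# (e.g. accelerometer components, RSSI, video-derived series, PIR)
x = rng.standard_normal((23, 128))

# First convolution + max-pooling pair (illustrative sizes)
h = maxpool1d(conv1d_relu(x, 0.1 * rng.standard_normal((16, 23, 5)), np.zeros(16)))
# Second convolution + max-pooling pair
h = maxpool1d(conv1d_relu(h, 0.1 * rng.standard_normal((32, 16, 5)), np.zeros(32)))

# Fully connected softmax classifier over an assumed 20 activity classes
flat = h.ravel()
probs = softmax(0.01 * rng.standard_normal((20, flat.size)) @ flat)
print(probs.shape)  # (20,)
```

With these sizes the feature map shrinks from (23, 128) to (16, 62) after the first pair and to (32, 29) after the second, before the flattened vector is classified. In practice the weights would be learned by backpropagation rather than drawn at random.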
Acknowledgment
G. Cosma, A. Taherkhani and T.M. McGinnity acknowledge the financial support of The Leverhulme Trust (Research Project Grant RPG-2016-252).
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Taherkhani, A., Cosma, G., Alani, A.A., McGinnity, T.M. (2019). Activity Recognition from Multi-modal Sensor Data Using a Deep Convolutional Neural Network. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Intelligent Computing. SAI 2018. Advances in Intelligent Systems and Computing, vol 857. Springer, Cham. https://doi.org/10.1007/978-3-030-01177-2_15
DOI: https://doi.org/10.1007/978-3-030-01177-2_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01176-5
Online ISBN: 978-3-030-01177-2
eBook Packages: Intelligent Technologies and Robotics (R0)