Abstract
Automation systems have brought revolutionary changes to our lives. Food packaging activity recognition can add a new dimension to industrial automation. However, identifying packaging activities from upper-body skeleton data alone is challenging because of the similarities between activities and subject-dependent variation in results. The Bento Packaging Activity Recognition Challenge 2021 provides a dataset of ten different activities performed during Bento box packaging in a laboratory, recorded with MoCap (motion capture) sensors. A Bento box is a single-serving packed meal that is very popular in Japanese cuisine. In this paper, we develop methods based on classical machine learning, as the given dataset is small compared with other skeleton datasets. After preprocessing, we extract various hand-crafted features, train several models such as extremely randomized trees, random forest, and XGBoost classifiers, and select the best model based on its cross-validation score. We then explore different combinations of features and use the best combination for prediction. With this methodology, we achieve 64% accuracy in tenfold cross-validation and 53.66% average accuracy in leave-one-subject-out cross-validation.
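The model-selection step described in the abstract (hand-crafted features, several tree ensembles compared by cross-validation score) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the feature matrix is a synthetic stand-in for the hand-crafted MoCap features, and XGBoost is omitted in favour of scikit-learn ensembles so the sketch stays self-contained.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 windows x 30 hand-crafted features (e.g. joint
# statistics); the real challenge data are upper-body MoCap skeleton sequences.
X = rng.normal(size=(200, 30))
# Ten activity classes, 20 windows each.
y = np.repeat(np.arange(10), 20)

# Tenfold cross-validation, as in the paper's evaluation.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

candidates = {
    "extra_trees": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

# Score each candidate and keep the one with the best mean CV accuracy.
scores = {name: cross_val_score(model, X, y, cv=cv).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
```

For the leave-one-subject-out evaluation the paper also reports, `sklearn.model_selection.LeaveOneGroupOut` with per-subject group labels would replace the `StratifiedKFold` splitter.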
Appendix
See Table 3.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Anwar, A., Islam Tapotee, M., Saha, P., Ahad, M.A.R. (2022). Identification of Food Packaging Activity Using MoCap Sensor Data. In: Ahad, M.A.R., Inoue, S., Roggen, D., Fujinami, K. (eds) Sensor- and Video-Based Activity and Behavior Computing. Smart Innovation, Systems and Technologies, vol 291. Springer, Singapore. https://doi.org/10.1007/978-981-19-0361-8_11
DOI: https://doi.org/10.1007/978-981-19-0361-8_11
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-0360-1
Online ISBN: 978-981-19-0361-8
eBook Packages: Intelligent Technologies and Robotics (R0)