Cooking Activity Recognition with Varying Sampling Rates Using Deep Convolutional GRU Framework

Chapter in: Human Activity Recognition Challenge

Abstract

Activity recognition is one of the most researched topics in machine learning-based recognition, and Human Activity Recognition poses many challenges. Among the most important is the simultaneous recognition of complex activities and of the smaller activities that compose them. The dataset used in this work is part of the Cooking Activity Recognition Challenge. It contains three classes of complex (macro) activities and ten classes of smaller (micro) activities; the macro activities are mutually exclusive, whereas multiple micro activities can occur in sequence as parts of a particular macro activity. The dataset is challenging because its recorded segments have varying sampling rates, which we address through preprocessing, and because several recorded segments contain missing data. The task of the challenge is to classify macro and micro activities separately from this dataset. We introduce a deep learning framework combining a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) to extract spatial and temporal features for recognizing macro and micro activities. The proposed model outperforms conventional and existing deep learning models on this dataset, with classification accuracies of 83.76% and 59.39% for macro- and micro-activity classification, respectively.
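The abstract notes that the recorded segments have varying sampling rates and were preprocessed before classification. The chapter body is not reproduced here, so the snippet below is only an illustrative sketch of one common way to handle that situation: linearly interpolating each irregularly timestamped sensor channel onto a uniform time grid. The function name `resample_segment` and the 50 Hz default target rate are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def resample_segment(t, x, target_hz=50.0):
    """Interpolate one irregularly sampled sensor channel onto a uniform grid.

    t: per-sample timestamps in seconds (monotonically increasing)
    x: sensor readings aligned with t
    target_hz: desired uniform sampling rate (assumed value, not from the paper)
    Returns the uniform timestamps and the resampled signal.
    """
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    duration = t[-1] - t[0]
    # Number of samples needed to cover the segment at the target rate.
    n = int(round(duration * target_hz)) + 1
    t_uniform = t[0] + np.arange(n) / target_hz
    # Piecewise-linear interpolation onto the uniform grid.
    return t_uniform, np.interp(t_uniform, t, x)
```

After a step like this, every segment shares one sampling rate, so fixed-length windows can be cut and fed to a CNN-GRU classifier of the kind the chapter proposes.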




Correspondence to Md. Sadman Siraj.


Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Siraj, M.S., Shahid, O., Ahad, M.A.R. (2021). Cooking Activity Recognition with Varying Sampling Rates Using Deep Convolutional GRU Framework. In: Ahad, M.A.R., Lago, P., Inoue, S. (eds) Human Activity Recognition Challenge. Smart Innovation, Systems and Technologies, vol 199. Springer, Singapore. https://doi.org/10.1007/978-981-15-8269-1_10
