
Multi-class Multi-label Classification for Cooking Activity Recognition

Chapter in: Human Activity Recognition Challenge

Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 199)

Abstract

In this paper, we present an automatic approach to recognize cooking activities from acceleration and motion data. We rely on a dataset that contains three-axis acceleration and motion data collected with multiple devices, including two wristbands, two smartphones, and a motion capture system. The data was collected from three participants while they prepared sandwich, fruit salad, and cereal recipes. The participants performed several fine-grained activities, such as cutting and peeling, while preparing each recipe. We propose to use a multi-class classification approach to distinguish between cooking recipes and a multi-label classification approach to identify the fine-grained activities. Our approach achieves 81% accuracy in recognizing fine-grained activities and 66% accuracy in distinguishing between recipes using leave-one-subject-out cross-validation. The multi-class and multi-label classification results are 27 and 50 percentage points higher than the baseline, respectively. We further investigate the effect of different strategies for coping with missing data on classification performance and show that imputing missing data with an iterative approach yields a 3 percentage point improvement in identifying fine-grained activities. We confirm findings from the literature that extracting features from multiple sensors achieves higher performance than using single-sensor features.
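The sketch below illustrates, with scikit-learn, the kind of pipeline the abstract describes: iterative imputation of missing sensor data, feature standardization, a multi-class classifier for the recipe label, and a multi-label setup (one binary classifier per fine-grained activity), evaluated with leave-one-subject-out cross-validation. It is a minimal sketch under stated assumptions, not the authors' implementation: the feature matrix, label arrays, and the random-forest classifier are placeholders.

# Minimal sketch of the two classification setups described in the abstract.
# NOT the authors' pipeline: X, y_recipe, Y_micro, subjects and the
# random-forest classifier are placeholder assumptions for illustration.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_windows, n_features = 300, 20
X = rng.normal(size=(n_windows, n_features))        # per-window sensor features
X[rng.random(X.shape) < 0.1] = np.nan               # simulate missing sensor data
subjects = rng.integers(0, 3, size=n_windows)       # three participants
y_recipe = rng.integers(0, 3, size=n_windows)       # sandwich / fruit salad / cereal
Y_micro = rng.integers(0, 2, size=(n_windows, 5))   # co-occurring fine-grained activities

logo = LeaveOneGroupOut()                            # leave-one-subject-out CV

# Multi-class setup: exactly one recipe label per window.
recipe_clf = make_pipeline(IterativeImputer(), StandardScaler(),
                           RandomForestClassifier(random_state=0))
print(cross_val_score(recipe_clf, X, y_recipe, groups=subjects, cv=logo).mean())

# Multi-label setup: several fine-grained activities can co-occur in a window,
# so one binary classifier is trained per activity label.
micro_clf = make_pipeline(IterativeImputer(), StandardScaler(),
                          MultiOutputClassifier(RandomForestClassifier(random_state=0)))
test = subjects == 0                                 # hold out one subject
micro_clf.fit(X[~test], Y_micro[~test])
print(micro_clf.score(X[test], Y_micro[test]))       # exact-match (subset) accuracy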

Author information

Corresponding author: Shkurta Gashi.

Copyright information

© 2021 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Gashi, S., Di Lascio, E., Santini, S. (2021). Multi-class Multi-label Classification for Cooking Activity Recognition. In: Ahad, M.A.R., Lago, P., Inoue, S. (eds) Human Activity Recognition Challenge. Smart Innovation, Systems and Technologies, vol 199. Springer, Singapore. https://doi.org/10.1007/978-981-15-8269-1_7
