Abstract
The use of anthropometric measurements, to understand an individual’s body shape and size, is an increasingly common approach in health assessment, product design, and biomechanical analysis. Non-contact, three-dimensional (3D) scanning, which can obtain individual human models, has been widely used as a tool for automatic anthropometric measurement. Recently, Alldieck et al. (2018) developed a video-based 3D modelling technique, enabling the generation of individualised human models for virtual reality purposes. As the technique is based on standard video images, hardware requirements are minimal, increasing the flexibility of the technique’s applications. The aim of this study was to develop an automated method for acquiring anthropometric measurements from models generated using a video-based 3D modelling technique and to determine the accuracy of the developed method. Each participant’s anthropometry was measured manually by accredited operators to provide reference values. Sequential images of each participant were captured and used as input data to generate personal 3D models with the video-based 3D modelling technique. Bespoke scripts were developed to obtain the corresponding anthropometric data from the generated 3D models. When comparing manual measurements with those extracted using the developed method, the developed method was shown to be a potential alternative to existing commercial anthropometric solutions. However, further development, aimed at improving modelling accuracy and processing speed, is still warranted.
1 Introduction
The use of anthropometric measurements, to understand an individual’s body shape and size, is an increasingly common approach in health assessment, product design, and biomechanical analysis. For instance, Streng et al. (2018) indicated that female heart failure patients with a high waist-hip ratio have a high risk of mortality. Verwulgen et al. (2018) introduced a new workflow for using 3D anthropometry to design close-fitting products for users. Pandis and Bull (2017) and Smith and Bull (2018) used low-cost 3D scanning to acquire body segment parameters for biomechanical analysis.
Manual anthropometric techniques are a traditional and widely used approach as the equipment is easy to access and calibrate (Hume et al. 2018). However, technical expertise is required to ensure the accuracy of measurement (Perini et al. 2005). Sebo et al. (2017) indicated that waist girth measurements collected by untrained general practitioners could have errors of more than 2 cm. Furthermore, this approach cannot obtain more complex anthropometric measurements, such as body volume, surface area, or body segment parameters directly.
Three-dimensional (3D) scanning systems use specific sensors, such as depth-cameras and stereo-cameras, to obtain individual human models. Simple and complex anthropometric data can be extracted virtually from the resulting scanning output (i.e. the individual 3D human models). Ma et al. (2011a, b) used this technique to obtain individual body segment inertia parameters and developed mathematical models to estimate the body segment inertia parameters from body mass and stature. Recently, advanced computer vision techniques such as 3D correspondence approaches (Groueix et al. 2018; Zuffi and Black 2015) have been developed, enabling anatomical landmarks to be identified without manual palpation or placing any markers on the participants. Furthermore, some cost-effective 3D scanning systems, such as the KX-16, Proscanner and Styku scanners, have been developed for non-contact anthropometric measurement. Consequently, these non-contact 3D scanning techniques have been widely used as a tool to carry out anthropometric procedures.
All of these 3D scanning systems generally require specific hardware (scanning booth, depth camera and turntable). Therefore, users typically travel to specialist facilities to complete 3D scanning for anthropometric assessments, which in turn reduces the accessibility of 3D scanning based anthropometry. A mobile 3D scanning solution, which could be used in flexible environments (i.e. with minimal specialist equipment or facilities), would enhance the application of 3D scanning based anthropometry, particularly in health and biomechanical assessment. For instance, using a mobile 3D scanning system in their own home, patients could complete comprehensive anthropometric assessments to independently monitor health conditions such as obesity, without needing to travel to specialist facilities. Further, biomechanists could use mobile 3D scanning solutions to model the body segment inertia parameters of athletes in the field and in sports facilities, rather than requiring athletes to travel to expensive and complex laboratories. Such a technological advancement would save both users and practitioners time and cost when conducting anthropometric assessments.
Recently, Alldieck et al. (2018) developed a video-based 3D modelling technique, enabling the generation of individualised human models for virtual reality purposes. As the technique is based on standard video images, hardware requirements (e.g. depth camera, turntable, etc.) are minimal, increasing the flexibility and range of the technique’s applications. However, whilst good levels of accuracy for point-to-point distances have been demonstrated (Alldieck et al. 2018), the accuracy of anthropometric measurements derived using this technique must be examined before its use in anthropometric applications. Furthermore, there is no software developed to obtain body measurements from the 3D models built from this video-based 3D modelling technique. Thus, the aim of this study was to develop an automated method to acquire anthropometric measurements from the models generated using this video-based 3D modelling technique and determine the accuracy of the developed method.
2 Method
2.1 Participants
The study was approved by the ethics committee at Sheffield Hallam University. Five male and six female healthy participants were recruited (stature: 1.71 ± 0.09 m; mass: 77.2 ± 13.8 kg). All participants gave written consent before participating. They were requested to wear close-fitting clothing during all test procedures.
2.2 Manual Anthropometric Measurement
Traditional anthropometric data, including stature, mass, and waist and hip girths, were measured manually by accredited operators according to the International Society for the Advancement of Kinanthropometry (ISAK) protocols (Stewart et al. 2011). Establishing the accuracy of waist and hip girths obtained from the developed method was considered the initial stage in determining its potential for further anthropometric measurement. Thus, the ISAK manual measurements of waist and hip girths were regarded as the reference values against which the accuracy of the developed method was evaluated in this study.
2.3 Anthropometric Measurement with 3D Modelling Techniques
The technique developed by Alldieck et al. (2018) enables 3D human modelling with moving participants. However, movement artefacts resulting from participant breathing and the self-rotation data acquisition procedure might increase modelling error. To determine the optimal accuracy of the developed technique, a bespoke capturing system with a moving camera (Chiu et al. 2019) was used to minimise the effect of human movement and breathing during image capture. Operating the bespoke capturing system involves an operator rotating a single camera around a stationary participant whilst image data are acquired over approximately 10 s, as shown in Fig. 1. Participants were requested to stand still and hold their breath at end-tidal volume during image capture.
The technique developed by Alldieck et al. (2018) uses a standard camera to obtain image data and applies a convolutional neural network-based (CNN-based) program to extract silhouette and joint data for generating individual 3D models. Nevertheless, publicly available CNN-based models such as Deeplabv3+ (Chen et al. 2018) cannot discriminate between participants and other humans who may be present in the background of the captured images, as they were not developed for the specific cases in this study. Thus, a Microsoft Kinect V2 was used as the capturing device to enable accurate silhouettes of participants to be generated for determining the accuracy of the developed method. All tasks were completed without training a new CNN-based model.
The depth images captured by the Microsoft Kinect V2 were processed with distance threshold and random sample consensus algorithms (Derpanis 2010) to remove background pixels (e.g. floor, wall) and extract silhouette images, as shown in Fig. 2(a) and (b). The OpenPose algorithm (Wei et al. 2016; see Note 1) was applied to the images captured by the infrared camera of the Microsoft Kinect V2 to extract the joint position data, as shown in Fig. 2(c) and (d). The silhouette images, joint position data and manually measured statures were then used as the input to the video-based 3D modelling technique developed by Alldieck et al. (2018) to generate individual models, as shown in Fig. 3(a). When applying the 3D modelling technique, the weights (parameter values) in the source code provided by Alldieck et al. (2018) (see Note 2) were adopted to improve the accuracy of 3D reconstruction.
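The background-removal step can be sketched as follows. This is an illustrative reconstruction rather than the authors' implementation: the threshold value, the floor-fitting heuristic (a simple RANSAC plane fit on the lowest image rows) and all function names are assumptions.

```python
import numpy as np

def extract_silhouette(depth, max_depth_mm=2000.0, floor_tol_mm=30.0):
    """Segment a rough participant silhouette from a depth image.

    depth: (H, W) array of depth readings in millimetres (0 = no reading).
    A distance threshold removes the back wall; a RANSAC-style plane fit
    on the lowest rows removes the floor. Returns a boolean mask.
    """
    mask = (depth > 0) & (depth < max_depth_mm)  # distance threshold

    h, w = depth.shape
    ys, xs = np.nonzero(mask)

    # Candidate floor points: retained pixels in the bottom 10% of rows.
    bottom = ys > int(0.9 * h)
    fy, fx = ys[bottom], xs[bottom]
    pts = np.column_stack([fx, fy, depth[fy, fx]]).astype(float)

    # Simple RANSAC plane fit on the candidate floor points.
    rng = np.random.default_rng(0)
    best_inliers, best_plane = None, None
    for _ in range(100):
        if len(pts) < 3:
            break
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((pts - sample[0]) @ normal)
        inliers = dist < floor_tol_mm
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, sample[0])

    # Remove all retained pixels lying close to the fitted floor plane.
    if best_inliers is not None and best_inliers.sum() >= 3:
        normal, p0 = best_plane
        all_pts = np.column_stack([xs, ys, depth[ys, xs]]).astype(float)
        mask[ys, xs] &= np.abs((all_pts - p0) @ normal) >= floor_tol_mm
    return mask
```

In practice the wall is usually further from the camera than the participant, so a single depth threshold removes it; the floor, which spans all depths, needs the plane fit.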
Bespoke scripts were developed to identify the regions for measuring both the waist and hip girths on the generated individual models, as shown in Fig. 3(b). The scripts obtained 2D cross-section profiles along the length of the body scan and calculated the circumference of each 2D cross-section to determine the waist and hip girths of the generated 3D models, following the corresponding ISAK manual measurement protocols (Stewart et al. 2011). In other words, the waist girth was measured at the level of the narrowest circumference of the torso, and the hip girth was measured at the level of the greatest posterior prominence.
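The slicing procedure can be illustrated with the following sketch. This is not the authors' script: the slab width, the torso search band, the use of SciPy's convex hull as a circumference approximation, and the assumption that +z points posteriorly are all illustrative choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

def slice_perimeter(vertices, y_low, y_high):
    """Perimeter of the convex hull of vertices in a horizontal slab.

    vertices: (N, 3) array, y-up, metres. Returns 0.0 for empty slices.
    A convex-hull perimeter slightly overestimates a concave cross-section
    but is a robust approximation for girth measurement.
    """
    band = vertices[(vertices[:, 1] >= y_low) & (vertices[:, 1] < y_high)]
    if len(band) < 3:
        return 0.0
    xz = band[:, [0, 2]]
    hull_pts = xz[ConvexHull(xz).vertices]
    edges = np.roll(hull_pts, -1, axis=0) - hull_pts
    return float(np.sum(np.linalg.norm(edges, axis=1)))

def waist_and_hip_girth(vertices, n_slices=200):
    """Estimate waist and hip girths following ISAK-style definitions:
    waist = narrowest torso circumference; hip = circumference at the
    level of the most posterior prominence (largest +z, by assumption)."""
    y = vertices[:, 1]
    edges = np.linspace(y.min(), y.max(), n_slices + 1)
    perims = np.array([slice_perimeter(vertices, lo, hi)
                       for lo, hi in zip(edges[:-1], edges[1:])])

    # Restrict the waist search to a rough torso band (illustrative).
    torso = perims[int(0.45 * n_slices):int(0.70 * n_slices)]
    waist = torso[torso > 0].min() if (torso > 0).any() else 0.0

    # Hip level: the slice containing the most posterior vertex.
    hip_y = vertices[np.argmax(vertices[:, 2]), 1]
    hip_idx = min(max(np.searchsorted(edges, hip_y) - 1, 0), n_slices - 1)
    return waist, perims[hip_idx]
```

On a closed mesh one would ideally intersect the cutting plane with the triangle faces rather than binning vertices, but vertex binning is adequate for densely sampled body scans.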
2.4 Statistical Analysis
The accuracy of anthropometric measurements obtained using the developed method was quantified using the root mean square error (RMSE) and the relative inter-method technical error of measurement (relative inter-method TEM, or %TEM) with respect to the reference waist and hip girths obtained using manual measurement, as shown in Eqs. (1) and (2):

$$ {\text{RMSE}} = \sqrt{\frac{1}{n}\sum\nolimits_{i = 1}^{n} \left( m_{i} - d_{i} \right)^{2}} \quad (1) $$

$$ \% {\text{TEM}} = \frac{\sqrt{\sum\nolimits_{i = 1}^{n} \left( m_{i} - d_{i} \right)^{2} /2n}}{\frac{1}{2n}\sum\nolimits_{i = 1}^{n} \left( m_{i} + d_{i} \right)} \times 100 \quad (2) $$

where \( n \) is the number of participants, \( m_{i} \) is the ISAK manual measurement obtained from the \( i \)-th participant, and \( d_{i} \) is the corresponding measurement obtained from the \( i \)-th participant using the developed method.
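The two error measures referenced above can be computed as in the sketch below. The girth values are fabricated for illustration only; the standard inter-method TEM is normalised by the grand mean of all measurements from both methods.

```python
import math

def rmse(manual, derived):
    """Root mean square error between paired measurements (Eq. 1)."""
    n = len(manual)
    return math.sqrt(sum((m - d) ** 2 for m, d in zip(manual, derived)) / n)

def relative_tem(manual, derived):
    """Relative inter-method TEM (%TEM, Eq. 2): the inter-method TEM
    expressed as a percentage of the grand mean of all measurements."""
    n = len(manual)
    tem = math.sqrt(sum((m - d) ** 2 for m, d in zip(manual, derived)) / (2 * n))
    grand_mean = sum(manual + derived) / (2 * n)
    return 100.0 * tem / grand_mean

# Fabricated waist girths in cm: manual reference vs model-derived.
manual = [78.2, 85.0, 92.4, 70.1]
derived = [80.0, 83.5, 97.0, 72.3]
print(round(rmse(manual, derived), 2))          # RMSE in cm
print(round(relative_tem(manual, derived), 2))  # %TEM
```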
3 Results
Table 1 shows the results of this study. When comparing manually measured and video-based 3D modelled anthropometric data, the RMSEs for waist and hip girths were both around 5 cm. The relative inter-method TEMs were larger than 3.5%. For male participants, the accuracy of waist girths was worse than that of hip girths. By contrast, for female participants, the accuracy of waist girths was better than that of hip girths.
4 Discussion
The aim of this study was to develop an automated method to acquire anthropometric measurements from individual models generated using a video-based 3D modelling technique (Alldieck et al. 2018) and determine the accuracy of the developed method.
The RMSEs of waist girth measures obtained using the developed method were similar to those acquired from existing commercial solutions, such as the Proscanner and Styku scanners (Bourgeois et al. 2017), as shown in Table 2. The results show that the developed method, which applies the video-based 3D modelling technique (Alldieck et al. 2018), is a potential alternative to existing commercial solutions for anthropometric measurement. However, the accuracy of hip girths obtained using the developed method was worse than that of existing commercial solutions (Bourgeois et al. 2017), as shown in Table 2. The pose of the model generated from the video-based technique (standing with feet apart) could increase the error of hip girth measurement, which should be performed with feet together according to ISAK protocols (Stewart et al. 2011). Thus, further development should consider correcting this pose effect to obtain more accurate hip girth measurements. A complete anthropometric validation test should also be conducted to determine which measurements can be obtained accurately using the developed method. The relative inter-method TEM of both measurements (>3.5%) exceeded the acceptable range for some anthropometric applications (ISAK Level 1: 1.5% and Level 2: 1.0%). Furthermore, the bespoke capturing system was used in this study to minimise movement artefacts caused by breathing and self-rotation of the participant during the scanning procedure; the accuracy of extracted measurements could decrease when applying the developed method with self-rotating participants. Thus, further development is required to improve the accuracy of the developed method before applying this technique in applications which require accurate anthropometric measurements to be extracted from image data of a moving participant, such as at-home health assessment and biomechanical analysis.
The results of this study showed that the accuracy for male and female participants differed. Although male and female template models were used in the technique developed by Alldieck et al. (2018), the modelling algorithm used to generate individual 3D human models appears to be identical for both. Further development might therefore incorporate sex-specific penalty functions in the optimisation process when generating individual models for applications that require accurate anthropometric data.
The use of video cameras represents a unique and flexible opportunity for estimating human morphometrics. However, the typical processing time for generating one individual model (on an Azure F1s virtual machine) exceeded two hours. This long processing time might limit the potential application of the technique for general-purpose or research use. Alldieck et al. (2019) have since presented a novel approach which applies machine learning to reduce the processing time from two hours to 10 s. Therefore, further development should consider adopting these updated machine learning techniques (Alldieck et al. 2019) to improve processing speed.
Notes
- 1.
The algorithm was applied using the code provided at https://github.com/ildoonet/tf-pose-estimation.
- 2.
References
Alldieck, T., Magnor, M., Lal Bhatnagar, B., Theobalt, C., Pons-Moll, G.: Learning to reconstruct people in clothing from a single RGB camera. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1175–1186 (2019)
Alldieck, T., Magnor, M., Xu, W., Theobalt, C., Pons-Moll, G.: Video based reconstruction of 3D people models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8387–8397 (2018)
Bourgeois, B., et al.: Clinically applicable optical imaging technology for body size and shape analysis: comparison of systems differing in design. Eur. J. Clin. Nutr. (2017). https://doi.org/10.1038/ejcn.2017.142
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 801–818 (2018)
Chiu, C.-Y., Thelwell, M., Senior, T., Choppin, S., Hart, J., Wheat, J.: Comparison of depth cameras for three-dimensional reconstruction in medicine. Proc. Inst. Mech. Eng. Part H: J. Eng. Med. 233, 938–947 (2019). https://doi.org/10.1177/0954411919859922
Derpanis, K.G.: Overview of the RANSAC Algorithm. Image Rochester NY 4, 2–3 (2010)
Groueix, T., Fisher, M., Kim, V.G., Russell, B.C., Aubry, M.: 3D-CODED: 3D correspondences by deep deformation. In: Proceedings of the European Conference on Computer Vision (ECCV), pp. 230–246 (2018)
Hume, P.A., Sheerin, K.R., de Ridder, J.H.: Non-imaging method: surface anthropometry. In: Hume, P.A., Kerr, D.A., Ackland, T.R. (eds.) Best Practice Protocols for Physique Assessment in Sport. Springer Singapore, pp. 61–70 (2018). https://doi.org/10.1007/978-981-10-5418-1_6
Ma, Y., Kwon, J., Mao, Z., Lee, K., Li, L., Chung, H.: Segment inertial parameters of Korean adults estimated from three-dimensional body laser scan data. Int. J. Ind. Ergon. 41, 19–29 (2011a). https://doi.org/10.1016/j.ergon.2010.11.004
Ma, Y., Lee, K., Li, L., Kwon, J.: Nonlinear regression equations for segmental mass-inertial characteristics of Korean adults estimated using three-dimensional range scan data. Appl. Ergon. 42, 297–308 (2011b). https://doi.org/10.1016/j.apergo.2010.07.005
Pandis, P., Bull, A.M.: A low-cost three-dimensional laser surface scanning approach for defining body segment parameters. Proc. Inst. Mech. Eng. Part H: J. Eng. Med. 231, 1064–1068 (2017). https://doi.org/10.1177/0954411917727031
Perini, T.A., de Oliveira, G.L., de Ornellas, J.S., de Oliveira, F.P.: Technical error of measurement in anthropometry. Revista Brasileira de Medicina do Esporte 11, 81–85 (2005)
Sebo, P., Herrmann, F.R., Haller, D.M.: Accuracy of anthropometric measurements by general practitioners in overweight and obese patients. BMC Obes. 4, 23 (2017). https://doi.org/10.1186/s40608-017-0158-0
Smith, S.H.L., Bull, A.M.J.: Rapid calculation of bespoke body segment parameters using 3D infra-red scanning. Med. Eng. Phys. (2018). https://doi.org/10.1016/j.medengphy.2018.10.001
Stewart, A., Marfell-Jones, M., Olds, T., de Ridder, H.: International Standards for Anthropometric Assessment. International Society for the Advancement of Kinanthropometry, Lower Hutt (2011)
Streng, K.W., et al.: Waist-to-hip ratio and mortality in heart failure. Eur. J. Heart Fail. 20, 1269–1277 (2018). https://doi.org/10.1002/ejhf.1244
Verwulgen, S., Lacko, D., Vleugels, J., Vaes, K., Danckaers, F., De Bruyne, G., Huysmans, T.: A new data structure and workflow for using 3D anthropometry in the design of wearable products. Int. J. Ind. Ergon. 64, 108–117 (2018). https://doi.org/10.1016/j.ergon.2018.01.002
Wei, S.-E., Ramakrishna, V., Kanade, T., Sheikh, Y.: Convolutional pose machines. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4724–4732 (2016)
Zuffi, S., Black, M.J.: The stitched puppet: a graphical model of 3D human shape and pose. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3537–3546 (2015)
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG

Chiu, C.-Y., Thelwell, M., Goodwill, S., Dunn, M. (2020). Accuracy of Anthropometric Measurements by a Video-Based 3D Modelling Technique. In: Ateshian, G., Myers, K., Tavares, J. (eds.) Computer Methods, Imaging and Visualization in Biomechanics and Biomedical Engineering. CMBBE 2019. Lecture Notes in Computational Vision and Biomechanics, vol. 36. Springer, Cham. https://doi.org/10.1007/978-3-030-43195-2_29