Abstract
This article elaborates on the theory of monocular measurement for a walker and validates the approach experimentally. The method measures the movement parameters of a walker with a single camera: the walker's 3-D pose and position are obtained from at least two endpoints on the target, the walker's vertex and heel, after which the position of any other point on the walker can be calculated. If the walker's stature is unknown, the measured result differs from the true result by a coefficient of proportionality. When the stature is known, the walker's movement parameters can be measured absolutely with a monocular camera, which makes the method useful for studying a person's movement or for monitoring people's actions in public places through surveillance video.
1 Introduction
One of the tasks of human engineering is to measure the structure or movement of a person. Measuring an object's structure or movement parameters is important work in computer vision and photogrammetry. When the measurement system comprises several cameras, the three-dimensional pose and position can be obtained by intersecting rays from the cameras at no fewer than three characteristic points on the target's surface [1, 2]. The three-dimensional pose and position can also be calculated when at least three characteristic points are known in the body's coordinate system. Three-dimensional structure and motion can be reconstructed from a single camera's image sequence when the target offers enough non-cooperative characteristic points [3,4,5]. Many methods have been applied to measure the three-dimensional attitude of targets such as missiles, including the axis method, the ellipticity method, the length–width ratio method, and the spiral method [6, 7].
This article elaborates on a method of monocular measurement for a walker, treated as a translation-only one-dimensional target. The approach measures the 3-D pose and position parameters of the walker from two endpoints, after which the position of any other point on the target can be calculated. If the distance between the two endpoints is unknown, the results carry an unknown coefficient of proportionality. When the distance between the two points, the walker's vertex and heel, is known, the walker's movement parameters can be measured in this way and the true result is obtained.
2 Measurement Method
We adopt a central perspective projection imaging model and take the camera frame as the reference coordinate system, so that the camera lens is at the origin and the camera's rotation matrix R is the identity matrix. \( {\mathbf{M}} = \left[ {\begin{array}{*{20}c} X & Y & Z \\ \end{array} } \right]^{\text{T}} \), whose augmented vector is \( \widetilde{{\mathbf{M}}} = \left[ {\begin{array}{*{20}c} X & Y & Z & 1 \\ \end{array} } \right]^{\text{T}} \), is the target's coordinate in the reference frame, and \( {\mathbf{m}} = \left[ {\begin{array}{*{20}c} u & v \\ \end{array} } \right]^{\text{T}} \), whose augmented vector is \( {\tilde{\mathbf{m}}} = \left[ {\begin{array}{*{20}c} u & v & 1 \\ \end{array} } \right]^{\text{T}} \), is the target's image coordinate. The imaging relation is:

\( s{\tilde{\mathbf{m}}} = {\mathbf{A}}\left[ {\begin{array}{*{20}c} {\mathbf{R}} & {\mathbf{T}} \\ \end{array} } \right]\widetilde{{\mathbf{M}}},\quad {\mathbf{A}} = \left[ {\begin{array}{*{20}c} \alpha & \gamma & {u_{0} } \\ 0 & \beta & {v_{0} } \\ 0 & 0 & 1 \\ \end{array} } \right] \)
where \( \left( {u_{0} ,v_{0} } \right) \) is the principal point of the image, \( \alpha \) and \( \beta \) are the equivalent focal lengths along the horizontal and vertical image directions, and \( \gamma \) is the skew factor.
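The imaging relation above can be sketched numerically. This is a minimal illustration with the camera at the origin (R = I, T = 0); the intrinsic values (α = β = 1200 px, principal point (640, 480), zero skew) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def project(M, alpha, beta, gamma, u0, v0):
    """Central perspective projection with the camera at the origin
    (R is the identity, T is the zero vector): u = (alpha*X + gamma*Y)/Z + u0,
    v = beta*Y/Z + v0."""
    X, Y, Z = M
    u = (alpha * X + gamma * Y) / Z + u0
    v = beta * Y / Z + v0
    return np.array([u, v])

# Example: a point 10 m in front of the camera, 1 m right and 0.5 m up.
m = project(np.array([1.0, -0.5, 10.0]),
            alpha=1200.0, beta=1200.0, gamma=0.0, u0=640.0, v0=480.0)
# -> pixel coordinates [760.0, 420.0]
```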
As shown in Fig. 1, two end points of a one-dimensional target are considered, and the target is translation-only. The camera coordinate system S-XYZ is the reference frame, and the image plane is I. The target's characteristic points are \( A_{i} \) and \( B_{i} \) at time i (i = 0, 1, …, m − 1), the distance from \( A_{i} \) to \( B_{i} \) is L, and the image points corresponding to the characteristic points are \( a_{i} \) and \( b_{i} \).
Suppose the positions of the target’s characteristic points A and B are \( {\mathbf{M}}_{A0} = \left[ {\begin{array}{*{20}c} {X_{A0} } & {Y_{A0} } & {Z_{A0} } \\ \end{array} } \right]^{\text{T}} \) and \( {\mathbf{M}}_{B0} = \left[ {\begin{array}{*{20}c} {X_{B0} } & {Y_{B0} } & {Z_{B0} } \\ \end{array} } \right]^{\text{T}} \) at time 0, and that their positions at time i are \( {\mathbf{M}}_{Ai} = \left[ {\begin{array}{*{20}c} {X_{Ai} } & {Y_{Ai} } & {Z_{Ai} } \\ \end{array} } \right]^{\text{T}} \) and \( {\mathbf{M}}_{Bi} = \left[ {\begin{array}{*{20}c} {X_{Bi} } & {Y_{Bi} } & {Z_{Bi} } \\ \end{array} } \right]^{\text{T}} \). The translation vector of the position at time i corresponding to the position at time 0 is \( {\mathbf{T}}_{i} = \left[ {\begin{array}{*{20}c} {T_{Xi} } & {T_{Yi} } & {T_{Zi} } \\ \end{array} } \right]^{\text{T}} \). Thus
Suppose the image coordinates of A and B are \( {\mathbf{m}}_{Ai} = \left[ {\begin{array}{*{20}c} {u_{Ai} } & {v_{Ai} } \\ \end{array} } \right]^{\text{T}} \) and \( {\mathbf{m}}_{Bi} = \left[ {\begin{array}{*{20}c} {u_{Bi} } & {v_{Bi} } \\ \end{array} } \right]^{\text{T}} \) at time i.
According to Formula (2), the imaging relation of the target’s characteristic points A and B can then be expressed as:
\( {\tilde{\mathbf{m}}}_{A0} , {\tilde{\mathbf{m}}}_{B0} , {\tilde{\mathbf{m}}}_{Ai} , {\tilde{\mathbf{m}}}_{Bi} , \widetilde{{\mathbf{M}}}_{A0} \) and \( \widetilde{{\mathbf{M}}}_{B0} \) are the augmented vectors of \( {\mathbf{m}}_{A0} \text{, }{\mathbf{m}}_{B0} \text{, }{\mathbf{m}}_{Ai} \text{, }{\mathbf{m}}_{Bi} \text{, }{\mathbf{M}}_{A0} \) and \( {\mathbf{M}}_{B0} \), and \( \widetilde{{\mathbf{T}}}_{i} \) is the augmented vector of \( {\mathbf{T}}_{i} \). \( s_{A0} , s_{B0} , s_{Ai} , s_{Bi} \) are scale factors. \( {\mathbf{A}} \) is the matrix of the camera's intrinsic parameters, \( {\mathbf{R}} \) is the 3 × 3 identity matrix, and \( {\mathbf{T}} \) is the 3 × 1 zero vector.
Introduce the intermediate variables:
We can then set up the linear equations relating \( g_{0} , g_{1} , g_{2} , g_{3} , g_{4} \) and \( g_{5,i} , g_{6,i} , g_{7,i} \). Suppose the distance between A and B is L; then
Therefore,
Furthermore, we can obtain results for other parameters
When the exact distance between A and B is known, the positions of A and B at every time can be solved precisely (that is, the initial position and the translation vector at each time).
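How a known length fixes the scale can be seen in a simplified special case, not the paper's general solution: if the segment \( A_iB_i \) is vertical and parallel to the image plane, both endpoints share the same depth Z, and the projected span \( v_{Bi} - v_{Ai} = \beta L / Z \), so Z follows directly. The focal length and pixel values below are illustrative assumptions.

```python
def depth_from_length(v_a, v_b, L, beta):
    """Depth of a vertical segment of known length L that lies parallel to
    the image plane: the projected vertical span |v_b - v_a| = beta * L / Z,
    so Z = beta * L / |v_b - v_a|."""
    return beta * L / abs(v_b - v_a)

# A 1.7 m tall segment imaged with beta = 1200 px spans 204 px vertically:
Z = depth_from_length(378.0, 582.0, L=1.7, beta=1200.0)
# -> Z = 10.0 m
```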
However, if the distance between A and B is unknown, a coefficient of proportionality remains between the measured result and the true result. In practice, if the distance between each target point and the camera is known, the true result can also be calculated. The pose of a one-dimensional target can be solved if the characteristic points' coordinates in the target frame are known [8]; in that case, no scale information is needed.
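The coefficient of proportionality arises because a monocular camera cannot distinguish a scene from a uniformly scaled copy of it. A minimal sketch of this ambiguity, using assumed intrinsic values rather than the paper's calibration:

```python
import numpy as np

def project(M, alpha=1200.0, beta=1200.0, u0=640.0, v0=480.0):
    """Central perspective projection with the camera at the origin."""
    X, Y, Z = M
    return np.array([alpha * X / Z + u0, beta * Y / Z + v0])

M = np.array([1.0, -0.5, 10.0])
k = 2.5  # an arbitrary unknown scale factor

# Scaling every 3-D position (and every translation vector) by k leaves
# the projected image points unchanged, so images alone cannot fix k.
same = np.allclose(project(M), project(k * M))
# -> True
```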
3 Experimental Validation
The target to be measured is a person walking steadily about 100 m in front of the camera. The medial axis of the person is taken as the one-dimensional target. The vertex of the head and the intersection of the medial axis with the line connecting the two touchdown points are chosen as the starting point and the end point. The camera remains stationary on the floor and takes images, some of which are shown in Fig. 2. The small circles are the projections onto the image plane of the computed positions of the walker's end characteristic points; the large circles are the result for the current frame. The results for all times are drawn in every image. The triangle is the calculated position of the head projected onto the image plane. All the projected results are consistent with the target's current pose and position.
The person pauses at every measured position, and while the images are taken, the 3-D coordinates are obtained with surveying equipment. The true and calculated trajectories in the XZ plane of the reference frame are shown in Fig. 3, in which the solid line represents the true trajectory and the broken line the calculated trajectory.
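One simple way to quantify the agreement between the surveyed and calculated XZ trajectories is the root-mean-square position error over the measured points. The coordinates below are hypothetical placeholders, not the paper's measurements:

```python
import numpy as np

# Hypothetical XZ trajectory samples in metres; in the experiment these
# would be the surveyed and the monocular-measured walker positions.
true_xz = np.array([[0.00, 10.00], [0.50, 10.40], [1.00, 10.80]])
calc_xz = np.array([[0.02, 10.05], [0.48, 10.43], [1.03, 10.76]])

# RMS of the Euclidean distance between paired positions.
rms = np.sqrt(np.mean(np.sum((true_xz - calc_xz) ** 2, axis=1)))
```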
4 Conclusion
This article elaborates on the theory of monocular measurement for a walker, a translation-only one-dimensional target, and validates the approach with practical experiments. A single camera takes at least two images and uses at least two characteristic points on the target to measure the walker's movement parameters. Provided the distance, or another metric measurement, between the two characteristic points is known, the three-dimensional position and pose can be calculated, and the positions of points other than the two characteristic points can also be obtained. If no metric information is available, a coefficient of proportionality remains between the measured result and the true result.
References
Atkinson KB (1996) Close-range photogrammetry and machine vision. University College London, UK
Yu Q, Shang Y (2009) Videometrics: principles and researches. Science Press, Beijing
Hartley R, Zisserman A (2000) Multiple view geometry in computer vision. Cambridge University Press, Cambridge
Mayer H (2006) 3D reconstruction and visualization of urban scenes from uncalibrated wide-baseline image sequences. Int Arch Photogramm, Remote Sens Spa Inf Sci 36(5):207–212
Shang Y, Sun X, Yang X et al (2013) A camera calibration method for large field optical measurement. Optik 124:6553–6558
Yu Q, Sun X, Chen G (2000) A new method of measure the pitching and yaw of the axes symmetry object through the optical image. J Natl Univ Defense Technol 22(2):15–19
Yu Q, Sun X, Qiu Z (2002) Approach of determination of object's 3D pose from mono-view. Opt Tech 28(1):77–79
Shang Y, Liu J, Xie T et al (2014) A monocular pose measurement method of a translation-only one-dimensional object without scene information. Optik 125:4051–4056
Acknowledgements
This study was supported by a National Key Scientific Instrument and Equipment Development Project (Grant No. 2013YQ140517) and a National Natural Science Foundation of China (Grant No. 11472302).
Compliance with Ethical Standards
The study was approved by the Logistics Department for Civilian Ethics Committee of the National University of Defense Technology.
All subjects who participated in the experiment were provided with and signed an informed consent form.
All relevant ethical safeguards have been met with regard to subject protection.
Copyright information
© 2018 Springer Nature Singapore Pte Ltd.
Cite this paper
Shang, Y. (2018). Measurement of a Walker’s Movement Parameters by a Monocular Camera. In: Long, S., Dhillon, B. (eds) Man–Machine–Environment System Engineering. MMESE 2017. Lecture Notes in Electrical Engineering, vol 456. Springer, Singapore. https://doi.org/10.1007/978-981-10-6232-2_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-10-6231-5
Online ISBN: 978-981-10-6232-2