
11.1 Introduction

This chapter introduces a new implementation of feature detection for measuring the Foot Progression Angle (FPA). The FPA is an important parameter in the treatment of people with medial knee osteoarthritis, and an abnormal FPA is a clinical indicator used in gait monitoring and retraining [1]. Altering the FPA has been proposed as a possible way of reducing knee loading and knee pain in individuals with knee osteoarthritis. In addition, changes in the FPA have been correlated with changes in the foot eversion moment [2], knee adduction moment [3], hip joint moment [4], foot pressure distribution [5], and foot medial loading [6, 7]. A comprehensive review of gait modification strategies for altering medial knee joint load is given in [8]. The importance of exercise, gait retraining, footwear, and insoles for knee osteoarthritis treatment is also highlighted in the literature [9, 10].

Currently, the FPA can only be computed while walking in a laboratory equipped with marker-based or IMU-based motion capture systems. Both methods require specialized cameras or sensors installed by technicians. They are too complex and therefore not appropriate for long-term, non-clinical FPA monitoring and walking-habit retraining.

In our initial investigation we considered various wearable sensors, such as accelerometers and gyroscopes. In addition, touch sensors were considered for flat foot phase detection. All of these would have to be installed and used in non-clinical environments. For this reason, we present a different approach in this study.

First, let us describe the positions the body passes through while walking naturally in a straight line. Walking consists of a sequence of gait cycles. Each gait cycle has two phases: the Stance Phase, which takes about 60% of the cycle, and the Swing Phase, which takes the remaining 40%. During the Stance Phase the leg and foot bear the body weight, while during the Swing Phase the foot is not touching the ground. In this study, only the Stance Phase is considered, since the FPA can be measured only when a foot is on the ground (see Fig. 11.1).

Fig. 11.1 Stance Phase is 60% of the gait cycle and consists of four stages: a heel strike, b early flat foot, c late flat foot (mid stance), d toe off (push off)

Since the foot is off the ground during the Swing Phase, no measurements can be taken then. We propose the Visual Feature Matching (VFM) system to estimate the FPA using a monocular camera as the sole sensor in the model. The FPA is calculated as the angle between the foot vector, i.e. the line joining the center of the heel and the second metatarsal head, and the forward Progression Direction (PD), visualized as a pair of parallel lines in the transverse plane. Measurements are conducted during the foot flat phase of walking. The average time available for FPA measurements is between 15 and 50% of stance [11]. The VFM method is proposed here, for the first time, for measuring the FPA via a chest-mounted smartphone in non-clinical scenarios. The VFM model detects the FPA without installing any devices inside or outside the shoes and keeps intrusiveness and technician intervention to a minimum. In addition, the problem of matching shoe features is addressed in this study. Image rectification is performed to remove the distortion caused by the monocular camera. The VFM modeling results are validated through comparison with the results of Digital Inclinometer Measurement (DIM).
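As a minimal illustration of this geometric definition (not part of the VFM implementation itself), the following MATLAB sketch computes the planar angle between a foot vector and the PD once both are expressed as 2D direction vectors in a rectified image; the variable values and the sign convention are our own assumptions.

```matlab
% Illustrative computation (assumed sign convention): signed planar angle, in
% degrees, between the foot vector and the progression direction (PD), both
% given as 2D direction vectors in the rectified image.
footVec = [0.95 0.31];                            % example foot direction
pdVec   = [1.00 0.00];                            % example PD direction
footAng = atan2d(footVec(2), footVec(1));
pdAng   = atan2d(pdVec(2),  pdVec(1));
fpa     = mod(footAng - pdAng + 180, 360) - 180;  % wrapped to [-180, 180) degrees
```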

This chapter is organized as follows. Experiment scenarios and system setup are described in Sect. 11.2. Image calibration and rectification are discussed in Sect. 11.3. Section 11.4 describes the VFM algorithm modules. Validation and results are presented in Sect. 11.5. Conclusions are given in Sect. 11.6.

11.2 Experiment Scenarios and Setup

The research presented here was approved by the Institutional Ethics Committee and all participants provided written informed consent. In order to achieve the Data AcQuisition (DAQ) goal while maintaining low intrusion, low maintenance, high portability, and accessibility at this stage of the research, the mobile phone is turned into a webcam via standard vision applications, and a laptop is used for image processing. The mobile phone and laptop are on the same Wi-Fi network so that they can communicate. Currently, the VFM model of FPA analysis runs in the MATLAB 2014a environment on a laptop computer. A personal computer platform could also be used to analyze and present the gait data. In later stages of the research, the system will be implemented entirely on the mobile phone platform, and the biofeedback medical procedure will be developed on the mobile phone platform as well.

While developing the computer vision and image processing pipeline, the choice of an optimal system setup became important. Mounting the camera downward on the torso (Fig. 11.2a, b) has several advantages. First, the torso is directly above the feet, which gives the best angle for detecting the FPA. Second, the camera can monitor both feet at close range, which is better than the vision range obtained from head or ceiling mounting positions. Third, the position is close to the body's center of mass and offers high accessibility and stability, better than other body-mounting solutions. The chest-mounted camera has similar properties to the torso-mounted one and is applied as an alternative solution.

Fig. 11.2 Monitoring system: a front view, b top view of chest mounted camera and harness, c walking aid as platform

Figure 11.2c shows a camera mounted on a walking aid. This approach is the subject of a separate investigation devoted to people who need some kind of walking aid. The downward-facing camera is mounted on the torso via a harness, as shown in Fig. 11.2a, b. The camera axis does not have to be orthogonal to the plane of motion.

The experimental system is designed to be used on a track, in a hallway, on a tiled floor, or on any floor with visible straight lines or edges (see Fig. 11.3). We assume that the PD is the same as the direction defined by those parallel lines, which must appear in the camera image acquired for image rectification and line detection. A smartphone camera is the only DAQ hardware needed for the VFM model.

Fig. 11.3 Experimental environments with straight lines used for reference direction: a floor mat, b hallway, c tracks

11.3 Image Calibration and Rectification

The VFM method of FPA estimation is composed of seven modules, as shown in Fig. 11.4. Module 1, "Baseline data collection", is the starting module of the whole model and is executed only once in each walking test. We now analyze this initial data collection; all other modules and the complete algorithm are explained later. The onboard camera captures sequences of images that carry a large amount of geometric information, such as the PD and the FPA. There are also two types of distortion: lens distortion and perspective distortion, caused by the optical lens and by the position of the camera relative to the subject [12]. Image calibration and rectification are the tools used to eliminate these two types of distortion. After the distortion is removed from the perspective image, the FPA is detectable by calculating the angle between the foot line and the PD.

Fig. 11.4 The flow chart of the VFM model. Module 1 collects baseline data such as \(FPA_{ref}\) and \(VF_{ref}\). Module 2 extracts visual features from the current image frame. Module 3 matches \(VF_{new}\) and \(VF_{ref}\). Module 4 aligns the shoes of two successive images into the same orientation. Module 5 detects the PD in \(Im_{new}\) and \(Im_{ref}\). Module 6 measures \(PD_{dif}\) in the aligned image. Module 7 calculates \(FPA_{new}\) from \(PD_{dif}\) and \(FPA_{ref}\)

Lens distortion can often be corrected by applying suitable algorithmic transformations to the digital photograph [13, 14]. Using those transformations, the lens distortion can be corrected in retrospect and the resulting measurement error eliminated. In the starting module, the calibration is conducted over the calibration pattern before each experiment.

Camera calibration is the process of estimating the camera parameters using images of a special calibration pattern, as shown in Fig. 11.5. The camera parameters, including the intrinsics, extrinsics, and distortion coefficients, are extracted by the flexible calibration method [15].

Fig. 11.5 Calibration images and 3D camera position rebuild

During calibration, photographs were taken by the onboard camera from different angles and positions in front of the calibration pattern board. The visualization results are shown in Fig. 11.6. The left column plots the locations of the calibration pattern in the camera's coordinate system and the right column plots the locations of the camera in the pattern's coordinate system. Based on these, the 3D information matrix is built and the camera's extrinsic parameters are calculated. The measurements were accurate to within 0.2 mm when 10 images were collected; when more images are analyzed, higher accuracy can be achieved, as reported in the literature [15].

Fig. 11.6 Visualization results of camera calibration
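For readers who wish to reproduce this step, a minimal MATLAB sketch of checkerboard-based calibration and distortion removal is given below; it assumes the Computer Vision System Toolbox, and the file names and square size are placeholders rather than the values used in this study.

```matlab
% Sketch of module 1's lens calibration: estimate intrinsics, extrinsics and
% distortion coefficients from checkerboard views, then undistort a frame.
calibFiles = {'calib01.jpg', 'calib02.jpg', 'calib03.jpg'};     % >= 10 views in practice
[imagePoints, boardSize] = detectCheckerboardPoints(calibFiles);
squareSize  = 25;                                               % pattern square size in mm (assumed)
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
cameraParams = estimateCameraParameters(imagePoints, worldPoints);

% Remove lens distortion from a walking-test frame before rectification.
rawFrame    = imread('frame_ffp.jpg');                          % placeholder file name
undistorted = undistortImage(rawFrame, cameraParams);
```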

Imaging geometry and perspective projection have already been investigated comprehensively for computer vision applications [16]. Under perspective imaging, a plane is mapped to the image by a plane projective transformation [12]. This transformation is used in many computer vision applications, including planar object recognition [17, 18]. The projective transformation is determined uniquely if the Euclidean world coordinates of four or more image points are known [19].
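As an illustration of this four-point case, the MATLAB sketch below estimates a plane projective transformation from the pixel coordinates of four ground-plane points with known world coordinates; all coordinate values and file names are placeholders, and the VFM pipeline itself relies on the detected parallel lines rather than on manually measured points.

```matlab
% Plane projective transformation from four image-to-world point correspondences.
undistorted = imread('frame_undistorted.png');      % frame after lens-distortion removal (placeholder)
imagePts = [412 633; 980 641; 955 212; 450 220];    % pixel coordinates on the floor plane (assumed)
worldPts = [0 0; 600 0; 600 900; 0 900];            % corresponding world coordinates in mm (assumed)
tform    = fitgeotrans(imagePts, worldPts, 'projective');

% Warping with this transform yields a rectified view of the ground plane in
% which planar angles can be measured directly.
rectified = imwarp(undistorted, tform);
```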

Once the transformation is determined, Euclidean measurements, such as lengths and angles, can be calculated on the world plane directly. In this research, only planar angle measurement is studied, which requires a pair of parallel lines on the ground to rectify the images (Fig. 11.7a, c). While walking along the parallel lines of a track, hallway, or tile-defined direction, the parallel lines are detectable by the Hough transform [20, 21]. The Hough transform is applied in module 5, "Detect PD line". The PD lines are shown in Fig. 11.7 before and after the Hough transform. The detected parallel lines are used as reference lines to transfer the perspective image into a rectified image with metric angle information. The rectified image, as shown in Fig. 11.7b, d, allows metric properties, such as planar angles, to be measured on the world plane from the perspective images [19, 22].

Fig. 11.7 Angle measurement in the rectified images of the planar surfaces: a and c original input images (tile, tape), b and d rectified images
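A minimal MATLAB sketch of module 5 ("Detect PD line") and the subsequent planar angle measurement is shown below; the file name, edge detector, and number of Hough peaks are assumptions made for illustration, not the settings used in the study.

```matlab
% Detect the two strongest straight lines in a rectified frame and measure
% the planar angle between them (e.g. PD line vs. a strip or foot line).
rectified = imread('frame_rectified.png');           % placeholder file name
gray  = rgb2gray(rectified);
edges = edge(gray, 'canny');
[H, theta, rho] = hough(edges);
peaks = houghpeaks(H, 2);                            % two strongest line candidates
lines = houghlines(edges, theta, rho, peaks);

% Orientation (degrees) of each detected line from its two end points.
ang = @(L) atan2d(L.point2(2) - L.point1(2), L.point2(1) - L.point1(1));
pdAng    = ang(lines(1));                            % taken here as the PD line
otherAng = ang(lines(2));                            % e.g. the second reference line or a strip
relAngle = mod(otherAng - pdAng + 180, 360) - 180;   % signed planar angle in degrees
```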

In the Planar Angle Measurement (PAM) demonstration, two paper strips were placed on the same plane and rotated continuously. The relative angle of the two strips was calculated in real time by the PAM method (Fig. 11.8a, b). In the demo, the paper strips were segmented out of each image frame in the video stream, as shown in Fig. 11.8d [23, 24]. After calibration and rectification, the angles between the two strips were detected accurately, with an average error \(E_{aver}\) of ±0.05°, as shown in Fig. 11.8c, d. The PAM method is applied in module 6, "Measure planar angle", to measure \(PD_{dif}\), as shown in Fig. 11.8. This average error is well below the step-to-step FPA differences, i.e. every step is different and so is every FPA.

Fig. 11.8 Angle measurement in the rectified planar surface: a and c input images, b strip edges, d angle detection

11.4 VFM Algorithm Modules

The whole procedure is shown in Fig. 11.4. The first module has already been explained comprehensively above. The other six modules run continuously until the end of the walking test. The baseline data collection module collects data such as \(Im_{ref}\), \(VF_{ref}\), and \(FPA_{ref}\), where \(Im_{ref}\) is the static on-ground shoe image taken by the onboard camera, \(VF_{ref}\) refers to the shoe visual features extracted from \(Im_{ref}\), and \(FPA_{ref}\) is the reference FPA measured in \(Im_{ref}\).

In the FPA enhancement method, the DFPA is developed to provide accurate FPA measurement. The DFPA uses the reference FPA (\(FPA_{ref}\)), which is fixed and known in advance, as shown in Fig. 11.9, to obtain a precise angle of an unknown rotation by relating it to the known object \(Im_{ref}\) via \(VF_{ref}\).

Fig. 11.9 The known FPA, measured with a protractor, is used to define the reference angle \(FPA_{ref}\)

\(Im_{new}\) is the image frame containing the FPA information at the Foot Flat Phase (FFP) moment in each step, which is selected by the FFP estimation algorithm. The FFP optimization is the subject of a parallel investigation. The main approach in that complementary research is the detection of a sequence of still foot images, which implies a stationary phase in the foot motion. This sequence appears from the Early Flat Foot stage, shown in Fig. 11.1b, to the Late Flat Foot stage, shown in Fig. 11.1c. In that foot flat position, occurring during the stance phase after heel contact, the foot stays in contact with the ground and its location should not change. For a real-time algorithm, the duration of the FFP is an important factor. Several other parameters, such as the FPA, step length, and step width, are detectable in this period.
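One simple way to realize the "still foot image" idea, sketched below in MATLAB, is to flag frames whose content barely changes from the previous frame as candidate FFP frames; the video file name and the difference threshold are illustrative assumptions, and this is not the FFP algorithm developed in the parallel investigation.

```matlab
% Candidate FFP frames found by frame differencing: small inter-frame change
% suggests the foot is stationary on the ground.
v = VideoReader('walk.mp4');                 % placeholder file name
prev = [];
stillThresh = 2.0;                           % mean absolute difference threshold (assumed)
ffpFrames = [];
k = 0;
while hasFrame(v)
    frame = rgb2gray(readFrame(v));
    k = k + 1;
    if ~isempty(prev)
        d = mean(abs(double(frame(:)) - double(prev(:))));
        if d < stillThresh
            ffpFrames(end+1) = k;            %#ok<AGROW> candidate FFP frame index
        end
    end
    prev = frame;
end
```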

Another valuable option would be the application of additional tactile sensors. The most reliable solution will be selected and will become part of the biofeedback system. The vision-based FFP algorithm that we currently use not only detects the FFP image frame in the real-time video stream but also identifies the left and right foot. Therefore, the FPA is detectable in each image indicated as an FFP moment.

In what follows, we consider foot feature extraction and matching, an alternative feature extraction method, and the FPA measurement in Sects. 11.4.1–11.4.3, respectively.

11.4.1 Foot Feature Extraction and Matching

The Speeded-Up Robust Features (SURF) algorithm is based on the extraction of feature points from an image [25]. This method is also used for the description of feature points and the comparison of these points in module 2, "Detect shoe features", and module 3, "Matching features", as shown in Fig. 11.10. The main advantage of SURF is its speed; higher processing speed is achieved using the integral convolution method and an approximation of the Gaussian function [15]. The feature points of the shoes were extracted by SURF in images \(Im_{ref}\) and \(Im_{new}\) (Fig. 11.10c). The shoe transformation between the images \(Im_{ref}\) and \(Im_{new}\) is found from the matched feature point pairs using the statistically robust M-estimator SAmple Consensus (MSAC) algorithm [26]. The shoes from images \(Im_{ref}\) and \(Im_{new}\) are aligned to the same size and orientation using the matched feature point pairs for the \(PD_{dif}\) measurement. The process is conducted in module 2 of the VFM model, as shown in Fig. 11.4.

Fig. 11.10 Shoe alignment: a initial image \(Im_{ref}\), b obtained image \(Im_{new}\), c feature point extraction, d matched features, e shoe alignment and \(PD_{dif}\) measurement
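A minimal MATLAB sketch of the feature detection, matching, and alignment steps (modules 2-4) is given below; it assumes the Computer Vision System Toolbox, and the image file names are placeholders for \(Im_{ref}\) and \(Im_{new}\).

```matlab
% SURF features in the reference and new images (modules 2 and 3).
imRef = rgb2gray(imread('im_ref.png'));              % placeholder for Im_ref
imNew = rgb2gray(imread('im_new.png'));              % placeholder for Im_new
ptsRef = detectSURFFeatures(imRef);
ptsNew = detectSURFFeatures(imNew);
[fRef, vptsRef] = extractFeatures(imRef, ptsRef);
[fNew, vptsNew] = extractFeatures(imNew, ptsNew);
idxPairs   = matchFeatures(fRef, fNew);
matchedRef = vptsRef(idxPairs(:, 1));
matchedNew = vptsNew(idxPairs(:, 2));

% Similarity transform between the two shoe images, estimated robustly with
% MSAC, and used to align Im_new onto Im_ref (module 4).
tformShoe  = estimateGeometricTransform(matchedNew, matchedRef, 'similarity');
alignedNew = imwarp(imNew, tformShoe, 'OutputView', imref2d(size(imRef)));
```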

11.4.2 Alternative Feature Extraction Method

Since the color and texture of a participant's shoes are unpredictable, as are other variable environmental conditions, the shoe may not be identifiable in the image. A pair of paper strips, rather than the shoes themselves, is therefore used in the investigations (Fig. 11.11). There are two functional areas in each strip: an outer solid-color area and an inner feature area. The outer solid-color area provides better identification in an image [11]. Color segment detection crops the foot area out of the whole photo, as shown in Fig. 11.11a, b [23, 24]. Furthermore, the inner feature area reduces the VFM model's detection time without affecting the angle accuracy. This method allows participants to monitor the FPA in their own shoes and in familiar environments.

Fig. 11.11 Shoe indicator (an alternative feature extraction method): a foot indicator strip, b segmented foot area, c matched features
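The color-based cropping of the strip area can be sketched in MATLAB as follows; the choice of a red strip and the threshold values are assumptions made for illustration only.

```matlab
% Segment a solid-color (here: red) strip and crop the foot area from the frame.
frame = imread('frame_ffp.jpg');                 % placeholder file name
R = double(frame(:, :, 1));
G = double(frame(:, :, 2));
B = double(frame(:, :, 3));
mask = (R > 120) & (R - G > 40) & (R - B > 40);  % crude "red strip" mask (assumed thresholds)
mask = bwareaopen(mask, 500);                    % drop small speckles

stats = regionprops(mask, 'BoundingBox');        % bounding box of the strip region
if ~isempty(stats)
    footRegion = imcrop(frame, stats(1).BoundingBox);
end
```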

11.4.3 FPA Measurement

In the calibrated and rectified image, the FPA is detectable by calculating the angle between the foot orientation and the PD. The number and positions of the SURF feature points in each image are variable. Accordingly, there is no fixed geometric shape formed by a group of feature points that describes the same shoe in every image. Instead of measuring the angle between the feature points and the PD, the angle \(VF_{dif}\) is measured in the aligned \(Im_{ref}\) and \(Im_{new}\) via \(VF_{ref}\) and \(VF_{new}\), where \(VF_{dif}\) is the relative position between \(VF_{ref}\) and \(VF_{new}\) in \(Im_{ref}\) and \(Im_{new}\). Because the human body is not a rigid fixture, the relative position between the onboard camera and the shoes varies between \(Im_{ref}\) and \(Im_{new}\). For example, when the camera rotates with the body while the foot stays still on the ground, \(FPA_{dif}\) is theoretically zero degrees, yet \(VF_{dif}\) reports a rotation value due to the body sway. Therefore, \(FPA_{dif}\) is defined to be the same as \(PD_{dif}\), the rotation offset of the PD measured in \(Im_{ref}\) and \(Im_{new}\) in the shoe-aligned picture. This is different from the scenario shown in Fig. 11.10e.

Thus, \(FPA_{new}\) is calculated using \(PD_{dif}\) and \(FPA_{ref}\), as given by Eq. 11.1.

$$FPA_{new} = PD_{dif} + FPA_{ref}$$
(11.1)
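A direct numerical reading of Eq. 11.1, with the PD orientation difference wrapped to avoid discontinuities, could look as follows; the numbers are example values only, not measured data.

```matlab
% Eq. (11.1): FPA_new = PD_dif + FPA_ref, with PD_dif taken as the rotation
% offset of the PD between Im_ref and the shoe-aligned Im_new.
fpaRef   = 4.0;                                       % reference FPA from module 1 (example value)
pdAngRef = 90.0;                                      % PD orientation in Im_ref (example value)
pdAngNew = 92.5;                                      % PD orientation in aligned Im_new (example value)
pdDif    = mod(pdAngNew - pdAngRef + 180, 360) - 180; % PD_dif, wrapped to [-180, 180) degrees
fpaNew   = pdDif + fpaRef;                            % FPA_new = 6.5 degrees in this example
```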

11.5 Validation and Results

To test the performance of the proposed VFM model of FPA estimation, validation tests were conducted both on static protractor patterns and during walking along a straight track. The accuracy of the VFM model was evaluated based on the average errors with respect to the FPA computed from the DIM and Protractor Measurement (PM) methods. One-way ANalysis Of VAriance (ANOVA) was used to determine whether there was any difference in the FPA estimation errors between the DIM and the VFM model.
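The statistical comparison can be reproduced with a one-way ANOVA along the lines of the MATLAB sketch below; it assumes the Statistics Toolbox, and the per-step error vectors are placeholders rather than the measured data.

```matlab
% One-way ANOVA comparing per-step FPA errors of the DIM and VFM methods.
errDIM = [ 0.01 -0.01  0.00  0.01 -0.02];        % placeholder per-step errors (deg)
errVFM = [-0.20  0.10 -0.15  0.05 -0.25];        % placeholder per-step errors (deg)
errors = [errDIM(:); errVFM(:)];
groups = [repmat({'DIM'}, numel(errDIM), 1); repmat({'VFM'}, numel(errVFM), 1)];
p = anova1(errors, groups, 'off');               % p > 0.05 => no significant difference
```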

In the static validation, the shoe was placed on a large protractor lying on the track and rotated in the transverse plane from 0° to 40°, as shown in Fig. 11.12. The FPA results estimated by the PM, the DIM, and the VFM are shown in Table 11.1. The average FPA errors of the DIM and the VFM were 0.0063 ± 0.000009° and 0.312578 ± 0.0494°, respectively. As the digital inclinometer and the protractor lie in the same plane, the DIM and the PM recorded FPA errors within 0.64%, which is negligible. Furthermore, since it is not feasible to place a protractor under each step, only the DIM is used to validate the VFM model in the walking test.

Fig. 11.12 Validation methods: a the DIM, b the PM

Table 11.1 Validation results of the PM, the DIM, and the VFM

From the error column of the VFM in Table 11.1, we can see that the error of the VFM estimation increases as the foot rotates away from the original orientation (0°). This error can be minimized by setting the original reference shoe orientation to the middle of the subject's FPA range. For example, if the subject's FPA is observed to range from 20° to 40°, the reference shoe orientation should be set to 30°, which minimizes the error range.

In the walking test, the subject’s nature FPA was observed to be around 4°, which was set as original reference of shoes orientation. The DIM and the VFM estimations of the FPA during a trial are shown in Table 11.2. In general, the FPA estimations from the VFM model closely followed the DIM estimations under the walking test. The average FPA errors were −0.15 ± 0.13° for normal gait.

Table 11.2 Walking validation results of the DIM and the VFM estimations

For comparison, a recently published foot-worn magneto-inertial sensing system yielded a corresponding error of −0.15 ± 0.24° [2]. Our proposed FPA estimation uses only a smartphone as the input sensor and achieves similar error rates, while being substantially less complex to implement.

Validation results are shown in Fig. 11.13. The solid and dashed lines represent the DIM and VFM estimations, respectively, over 20 steps in the trial; see Fig. 11.13a. There are no significant differences between the DIM and VFM FPA estimates for the natural walking condition. The average errors of the VFM are slightly larger than those of the DIM method, and one outlier (6.4°) is found at the 11th step, as shown in Fig. 11.13a. This is likely due to the curled edges of the carpet, which could affect the accuracy of the PD and FPA measurements.

Fig. 11.13 FPA validation results and error analysis: a plots, b variance

The VFM model was also tested by analyzing the FPA with respect to three variables: step length, step width, and Body Swing Angle (BSA) [27]. A 10-step free-cadence walk at a self-selected velocity was first recorded as a baseline. Cadence is the walking rate expressed in steps per minute. Everyone can calculate his or her free-cadence walking rate from the walking speed in km per hour and the step length; the average is in the range of 100–130 steps/min, though individual differences are large.

Ten real-time numeric outputs of gait information for free-cadence walking were collected and are presented in Fig. 11.14a, while their corresponding graphic patterns are displayed in Fig. 11.14b. The graphic patterns, one per step, are updated instantly on the computer screen. Gait information such as the foot progression angle, body swing angle, and flat-foot stance time of each step is displayed in the text box of Fig. 11.14b. Gait information relating two successive steps, such as the step length and step width, is illustrated between two frames.

Fig. 11.14 Real-time numeric outputs: a numerical outputs, b graphical outputs collected during the walk. Axis OY corresponds to the PD. Motion time is expressed in ms. Axes OX and OY together show the 2D layout of the experimental area. Time t is the elapsed time from the beginning of the video, while the FPA and the BSA, like all other quantities, are shown for each step (# is the step number)

The step length, step width, and BSA were then varied to show their effect on the baseline FPA, as shown in Fig. 11.15. It can be seen that three groups of FPA values are computed corresponding to the variation of the participant's step length. The first increment of step length is 100 mm, and the second is 200 mm. The blue dots present results for walking with the natural step length, which have the largest FPA. As the step length increases, the FPA is observed to become smaller, i.e. the feet are more aligned with the progression direction. The regression function between the FPA variability and the step length variable is given by Eq. 11.2, where a 1° increase in the FPA variability is associated with a 129.29 mm reduction in step length variability.

Fig. 11.15 Statistical charts of the FPA with respect to: a step length, b step width, c the BSA. White dots with blue circles represent the gait info of free cadence as a baseline; green and red dots represent data collected in the first and second gait modifications

$$FPA = 13.203 - 1.2929 \cdot step\_length$$
(11.2)

Figure 11.15b shows the participant's FPA with respect to step width, at his free cadence and when targeting 10 mm and 20 mm larger step widths, respectively. Changing the step width affects the FPA dramatically, as the FPA ranges from 2° to 12° in Fig. 11.15b. The FPA is affected, but not always in the same direction: the mean FPA is 1.5° less than the baseline FPA when the participant targets a 10 mm larger step width, and 6° larger when targeting a 20 mm larger step width. The outcome depends on how the participant performs the gait modification in his or her own most comfortable way.

Figure 11.15c plots the effect of the BSA on the FPA for three walks: the free-cadence walking manner, and walks with the BSA increased by 10° and 20° above it. The FPA stays at a stable value, since the participant can keep balance by swinging other parts of the body. The chart does not address the relative importance of the BSA's contribution to the FPA. The FPA shows low correlation with body sway compared with step length and step width. The differences between the FPA values measured for the various BSAs are not significant. Thus, all three groups of data show essentially the same mean FPA value as the BSA increases.

11.6 Conclusions

The research presented here is part of a larger project called the mobile foot progression angle correction system. The investigation includes flat foot phase detection, then foot progression angle estimation, and finally the design of the correction system. A monocular vision sensor captures a large amount of data simultaneously, and a smartphone can play multiple roles in the FPA estimation process. The visual feature matching model for FPA estimation presented here has produced output results comparable with recently published work conducted in dedicated laboratory environments. Compared with the foot-worn IMU sensing system, the VFM estimate has an advantage in detecting movement disorders with abnormal gait: the VFM model neither needs heavy real-time computation to predict the approaching movement nor has to cope with the IMU sensor's drift over time. While the PM and DIM methods of FPA estimation are accurate for static FPA measurements, they are not feasible for testing a participant's natural gait during walking. Therefore, the VFM model is a solution for long-term, real-time FPA monitoring in home or community-like environments. In the future, patients with movement disorders or abnormal gait could be diagnosed and treated with a system based on the presented VFM method, including FPA estimation during non-straight-line walking. In preparation for that, we will conduct comprehensive testing of the biofeedback method with a large number of participants.