
1 Related Work

Gait recognition is the task of identifying a person from his or her style of walking. Its significance lies in its ability to deliver useful results at low resolution and from long distances, which makes it attractive for patient condition monitoring and for security. Gait describes the movement of a human being, so recognition is carried out from both the appearance of the subject and the manner in which the individual walks. In recent years many approaches to gait recognition have been reported: individual identification with Hidden Markov models, recognition based on features selected by canonical and eigen transformations, and measurement of the correlation between frame pairs. Model-based methods are also common: a three-dimensional model is built from the extracted features, but such methods require a larger number of cameras, set up to satisfy the external and internal camera calibration requirements, and their accuracy depends on the features that are selected. In other work, lower-body regions are extracted from the gait sequence to construct a gait sequence model, and the angular postures obtained over a gait cycle are used to set up a 3D gait recognition model. Whichever features are selected should be invariant so that the efficiency of the system can be improved, and almost all of these methods are evaluated under controlled environmental conditions.

The proposed method uses silhouettes obtained from video sequences. The main challenges are recognizing a user who chooses to walk in an arbitrary direction, in which case the silhouettes do not match those in the database [1], and recognizing an individual whose appearance changes because of the clothes he is wearing or the items he is carrying. Appearance-based methods use spatio-temporal features and rely entirely on the image sequences captured by the camera [2]. In related work, a singular value decomposition methodology is adopted to evaluate gait features from the GEI: the walking pattern is first traced out and the user is then recognized from the extracted features. For walking direction identification the Gait Energy Image (GEI) of the leg region is taken, and user recognition can be performed with learning algorithms such as Random Space Learning (RSL) or Linear Discriminant Analysis (LDA) [4, 5]. Other works identify the walking direction by means of a Poisson random walk, or use a mask that assigns higher and lower weights to unaltered and altered areas, respectively.

1.1 Motivation

This paper proposes a unique way of identifying the walking direction and recognizing a user whose appearance has changed. PHash values over the leg region are computed to identify the walking direction, and GEI decomposition is performed for user recognition. The paper is therefore organized around two parts: walking pattern identification and user identification.

2 Proposed Work

The entire work can be divided into a training section and a testing section. First, the database is built from the silhouettes and walking patterns of the users. To compute the PHash values, the leg region of the silhouette is separated from the full GEI. Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are used to reduce the dimensionality. Once training has been completed for the whole data set, walking patterns are identified with the k-nearest neighbor (k-NN) algorithm. For user recognition, the dissimilarity between the test and trained images is calculated, and a final majority vote decides the recognized user.

2.1 Identification of Walking Pattern

The general shape of the leg region is captured by calculating PHash values of the GEI, and the values so obtained are compared through the Hamming distance. The entire silhouette is not considered; only the leg region of the GEI is filtered out, so that the computational complexity is reduced to some extent. The Discrete Cosine Transform (DCT) is applied to this leg-region GEI. The DCT separates the image into low-frequency elements, which contain the relevant information, and high-frequency elements, which can be discarded. By concatenating the selected DCT coefficients the one-dimensional PHash descriptor is obtained, and it is converted into binary form by the following equation (Fig. 1).

Fig. 1 Proposed method for user recognition using gait. Flowchart: from the GEIs, the walking-direction branch performs leg region selection, PHash generation, and Hamming distance computation against the walking direction database; the user-recognition branch performs GEI decomposition with unaltered-section selection, PCA computation, recognition using LDA against the user database for that direction, and majority voting to produce the result.

$$\mathrm{PHash}\left( I_{c} \right) = \begin{cases} 0, & \text{if } \mathrm{DCT}\left( I_{c} \right) \le \overline{\mathrm{DCT}} \\ 1, & \text{if } \mathrm{DCT}\left( I_{c} \right) > \overline{\mathrm{DCT}} \end{cases}$$
(1)
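As a concrete illustration, the sketch below shows one way Eq. (1) could be realized; the helper names, the bottom-third leg crop, the 22 × 22 low-frequency block, and the use of the mean DCT coefficient as the threshold are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np
from scipy.fftpack import dct

def leg_region(gei, fraction=0.33):
    """Keep only the lower part of the GEI (assumed to be the leg region)."""
    h = gei.shape[0]
    return gei[int(h * (1.0 - fraction)):, :]

def phash_bits(gei, block=22):
    """Binary PHash descriptor of the GEI leg region, following Eq. (1).

    Each retained low-frequency DCT coefficient becomes 1 if it exceeds
    the mean of the retained block (assumed threshold), otherwise 0.
    """
    region = leg_region(gei).astype(np.float64)
    # 2-D DCT: apply the 1-D transform along rows and then along columns
    coeffs = dct(dct(region, axis=0, norm='ortho'), axis=1, norm='ortho')
    low = coeffs[:block, :block]                  # low-frequency elements only
    bits = (low > low.mean()).astype(np.uint8)    # threshold against the mean coefficient
    return bits.flatten()                         # concatenated one-dimensional descriptor
```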

The PHash values of the testing GEIs are compared with the stored training data set, and the walking pattern is identified using k-NN.
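A minimal sketch of this matching step is given below, assuming the training descriptors and their walking-direction labels are already available; the function names and the default value of k are hypothetical.

```python
from collections import Counter
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary PHash descriptors."""
    return int(np.count_nonzero(a != b))

def knn_direction(test_bits, train_bits, train_dirs, k=3):
    """Identify the walking direction of a test GEI by k-NN over Hamming distance."""
    dists = [hamming(test_bits, t) for t in train_bits]
    nearest = np.argsort(dists)[:k]                   # k closest training descriptors
    votes = Counter(train_dirs[i] for i in nearest)   # vote among their direction labels
    return votes.most_common(1)[0][0]
```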

2.2 User Recognition

The PHash values are compared with the data set; if there is no difference, the leg-region GEI matches and the walking direction is thereby identified. After the walking direction has been identified, the next step is to match the test GEI against the database for that direction. Even then, the stored features may fail to match, and the reason for the mismatch may be that the user is wearing a coat or carrying a bag. The GEI is therefore decomposed into horizontal sections, on the assumption that a coat or a bag does not change the entire GEI; decomposing allows the sections containing the coat or bag to be removed while the rest of the region is compared. Each GEI section is flattened into a one-dimensional vector, and all the vectors together form a matrix. The dimensionality of this matrix is reduced by PCA, which selects the components with high variance, and data decorrelation is obtained by applying LDA to the projections obtained from PCA.
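The following sketch illustrates this decomposition and dimensionality-reduction step using scikit-learn's PCA and LDA as stand-ins; the number of horizontal sections and the retained variance are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def horizontal_sections(gei, n_sections=8):
    """Split a GEI into horizontal strips and flatten each strip into a vector."""
    strips = np.array_split(gei, n_sections, axis=0)
    return [s.flatten().astype(np.float64) for s in strips]

def fit_section_models(train_geis, labels, n_sections=8, variance=0.95):
    """Fit one PCA + LDA model per horizontal section of the training GEIs."""
    models = []
    for s in range(n_sections):
        X = np.array([horizontal_sections(g, n_sections)[s] for g in train_geis])
        pca = PCA(n_components=variance).fit(X)        # keep the high-variance components
        lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
        models.append((pca, lda))
    return models
```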

In the proposed method an average GEI is computed over all training sequences for each walking direction. A threshold is applied to the difference between the average image and the test GEI so that the unaltered sections can be traced out. The sections so obtained are projected onto the selected principal components, and the last step of user recognition is a majority vote among the labels assigned to the selected sections.
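A sketch of the unaltered-section selection and the final vote is shown below, reusing the horizontal_sections helper and the per-section PCA + LDA models from the previous sketch; the difference threshold is a hypothetical value, not one reported by the authors.

```python
import numpy as np
from collections import Counter

def unaltered_sections(test_gei, avg_gei, n_sections=8, threshold=30.0):
    """Indices of horizontal sections that differ little from the average GEI."""
    test_strips = horizontal_sections(test_gei, n_sections)
    avg_strips = horizontal_sections(avg_gei, n_sections)
    return [s for s in range(n_sections)
            if np.mean(np.abs(test_strips[s] - avg_strips[s])) < threshold]

def recognise_user(test_gei, avg_gei, models, n_sections=8):
    """Classify each unaltered section with its PCA + LDA model and take a majority vote."""
    votes = []
    for s in unaltered_sections(test_gei, avg_gei, n_sections):
        pca, lda = models[s]
        x = horizontal_sections(test_gei, n_sections)[s].reshape(1, -1)
        votes.append(lda.predict(pca.transform(x))[0])
    return Counter(votes).most_common(1)[0][0] if votes else None
```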

3 Experimental Results

In this work the CASIA Gait database is used. GEIs are obtained for 11 different angles varying from 0° to 180°, with a step of 18° between adjacent angles. For every angle, 10 different image sequences are taken: some with normal walking and others with the person carrying a bag or wearing a coat. For each angle, one part of the sequences is set aside for testing and the other is reserved for training. The parameters to be chosen prior to walking direction identification are (i) the percentage of the GEI retained as the leg region, (ii) the number of DCT coefficients, and (iii) the value of k in k-NN. It is observed that better results are obtained when 33% of the GEI is selected as the leg region and the 22 × 22 lower-frequency DCT coefficients are used to evaluate the PHash values, together with a suitable value of k for the k-NN algorithm.

Figure 2 shows the selected GEI, the GEI obtained for the leg region, the division into horizontal sections, the leg region selection, the DCT coefficients obtained, and the PHash values generated. The user is recognized by training on four of the image sequences obtained, while the remaining six sequences are used for testing. Among the test sequences, two correspond to normal walking, two to the person wearing a coat, and two to the person carrying a bag. Figure 3 shows the outcome when the user walks normally: images are captured in 11 different angles, and the accuracy is observed to be above 95%. The output obtained when the user is carrying a bag is shown in Fig. 4; the accuracy measured for the different angles is again above 95%. The accuracy obtained while the user is wearing a coat is given in Fig. 5; as before, the accuracy is calculated for 11 different angles, and the result is beyond 90%.

Fig. 2 GEI obtained and DCT and PHash bits (panels: GEI of a person, GEI leg region, GEI section selections, GEIs of a subject walking with a bag, and the corresponding DCT coefficients and PHash bits).

Fig. 3 Normal walking direction accuracy obtained in 11 different angles (confusion matrix of output class versus target class using PCA and LDA; per-angle accuracy and sample count: 100.0%, 69; 87.2%, 68; 91.0%, 71; 96.6%, 143; 98.3%, 291; 97.4%, 295; 97.0%, 290; 97.7%, 297; 99.0%, 295; 99.0%, 296; 99.7%, 291).

Fig. 4 Accuracy results obtained in 11 different angles while carrying a bag (confusion matrix of output class versus target class using PCA and LDA; per-angle accuracy and sample count: 100.0%, 69; 89.7%, 70; 93.3%, 70; 95.2%, 140; 97.7%, 293; 98.3%, 295; 97.7%, 291; 98.0%, 294; 99.7%, 295; 98.0%, 294; 98.3%, 296).

Fig. 5 Accuracy results obtained in 11 different angles while wearing a coat (confusion matrix of output class versus target class using PCA and LDA; per-angle accuracy and sample count: 100.0%, 69; 87.2%, 68; 91.0%, 71; 96.6%, 143; 98.3%, 291; 97.4%, 295; 97.0%, 290; 97.7%, 297; 99.0%, 295; 99.0%, 296; 99.7%, 291).

4 Conclusion

This paper presents a new methodology for identifying a human being based on the pattern of walking. In most biometric methods, such as fingerprint and retina recognition, user cooperation is essential, whereas gait recognition can identify a subject from body postures alone, so active participation of the subject is not needed. The work can be extended further to criminal investigation and security by analyzing video samples obtained from surveillance cameras; radar images can also be processed to identify the subject.