Abstract
Many medical studies have shown that human gait is individual. As a biometric, gait can be used for identity recognition, and it has valuable properties for this purpose: it is hard to forge, requires no contact and is unobtrusive. However, gait recognition based on inertial sensors is difficult to apply in real scenes, because models, algorithms and parameters are derived from specific training datasets, most of which come from the laboratory, and therefore cannot adapt to real, unconstrained situations. This paper explores how a gait recognition model can quickly adapt to different real scenes. First, we designed a network model for gait recognition based on a convolutional neural network (CNN), including its structure and parameters. Second, when identifying a person with a mobile phone, the most frequently changing scene elements are the movement pattern (such as walking on a flat road or going upstairs) and the placement of the sensor, both of which produce different gait characteristics. We explored which parts of the model extract and recognize these features. Further, based on a motion-composition hypothesis, we studied the directionality of model transfer. Experiments showed that some layers of the model are closely related to identity recognition while others are not. Based on this conclusion, fine-tuning only the relevant layers achieves similar accuracy and shortens convergence time. In addition, the direction of transfer also affects recognition accuracy.
1 Introduction
Due to the popularity of portable smart devices such as mobile phones, the security of the devices and of their private data has attracted users' close attention. Establishing a more comprehensive identification mechanism is therefore becoming an urgent need. For example, when a mobile phone is lost, its owner hopes that the phone can automatically lock itself and protect its data.
Medical and psychological research has shown that everyone's gait has unique features. Gait recognition refers to identifying a person based on gait features. It is hard to forge, contactless and implicit, and gait has therefore become a new biometric for person identification. At present, gait-based identification technology falls into two main categories: one based on video data, the other based on wearable sensors.
This paper studies gait identification based on acceleration sensors. Specifically, we collected the signal of the built-in acceleration sensor of a mobile phone and built a gait identification application on top of it, intended as a supplement to other person identification technologies in order to strengthen privacy protection on mobile phones. However, achieving this goal faces a difficult problem: a recognition model trained on specific data cannot adapt to changing real scenes. The environment in which a phone is used changes constantly, including the wearer's movement, the road conditions and the placement of the phone. Data from different environments differ considerably, so a model trained on one kind of data does not work effectively in another environment. Moreover, it is difficult to train a single high-precision recognition model that suits all environments from such heterogeneous data.
Therefore, the idea of this paper is to make the recognition model adapt quickly to a changing environment. For example, a user carrying a mobile phone is sometimes walking, sometimes going up or down stairs, and sometimes running; the system needs to derive a recognition model for the new environment quickly from a general model. Likewise, the phone may be placed on different parts of the body: sometimes in the hand, sometimes at the waist, and sometimes in a trouser pocket. Our method is to find the relationship between the model and the environmental variables, so that when these variables change, the general model can adapt rapidly through transfer learning.
These environmental variables should be easy to monitor. This paper mainly studies two scene elements, namely the wearer's movement pattern and the placement position of the sensor. We investigate which parts of the recognition model play the key role in person identification across changing scenes. The corresponding parts of the model are then quickly adjusted according to the changed environmental characteristics, so that the model adapts to different scenarios even with only small samples.
The remainder of this paper is organized as follows. Related work is introduced in the second section. A gait recognition model based on a CNN is designed in the third section. In the fourth section, we explore which parts of the model extract and recognize the gait features induced by placement and activity patterns; based on these findings, optimized transfer strategies can be designed. In addition, based on the composition of motion, choosing the transfer direction of the model can also improve its convergence speed. The last section concludes the paper.
2 Related Works
Since the beginning of the 21st century, gait identification based on wearable sensors has been studied by many scholars. Gait recognition systems using acceleration sensors are mainly based on three classes of techniques: (1) similarity matching; (2) classification based on machine learning; (3) deep neural networks.
Similarity matching is usually based on a distance or correlation measure. The collected gait data are compared with a stored gait template, and the user's identity is recognized according to the similarity score.
The similarity measures used for gait data include histogram similarity, Manhattan distance [1], Euclidean distance [6], correlation coefficient [4], Tanimoto distance [3], Hamming distance [28] and cosine distance [2]. Some literature introduces more advanced measures, including dynamic time warping (DTW), a derivative algorithm based on DTW [9], the Cyclic Rotation Metric (CRM) [26], cycle matching fusing multiple similarity scores with the Min rule [32], a statistical Measure of Similarity (MOS) [37], fuzzy finite state machine rules [14], Jaccard distance [11], sparse-code collection (CSCC) [34], a weighted voting scheme [13], geometric template matching [38], box approximation geometry [36], weighted Pearson correlation coefficients [33], curve aligning [16], the computational theory of perceptions [14], statistical significance analysis [37], Gaussian membership functions [14], etc.
Recognition algorithms based on machine learning treat identification as a classification process: gait features are used to train a recognition model, which is then used for classification prediction. Most of these methods are supervised, although some advanced unsupervised algorithms have also been applied to gait recognition and still need experimental verification. Machine learning methods used in gait recognition include: k-nearest neighbours (KNN) [30], support vector machines (SVM) [29], decision trees (DT) [35], random forests [25, 31], neural networks [18], hidden Markov model (HMM) classifiers [22, 27], Gaussian mixture model (GMM) classifiers [10], logistic regression [17] and Bayesian network classifiers [23]. Other advanced methods are also mentioned, such as linear discriminant analysis (LDA) [15], the i-vector approach [7], principal component analysis (PCA) [12, 21], Fisher discriminant analysis [20], gradient-boosted trees [39], sparse-code collection [34], and the Gaussian Mixture Model-Universal Background Model (GMM-UBM) [19].
Compared with the former two, research on gait identification based on deep neural networks is relatively scarce. Dehzangi et al. designed a deep convolutional neural network (DCNN) to extract discriminative features from 2D expanded gait cycles and jointly optimized the identification model [40]. Gadaleta and Rossi presented IDNet, the first system that exploited deep learning as a universal feature extractor for gait recognition and combined classification results from subsequent walking cycles in a multi-stage decision-making framework [41].
3 Proposed Approach
3.1 Structure of Network HIR-Net
Our convolutional network model (HIR-Net) adopts a five-layer convolutional structure. Each layer contains a one-dimensional convolution followed by a ReLU activation. The first three layers are followed by max pooling; the last layer is followed by batch normalization and dropout. Final recognition is carried out through a fully connected layer. The convolution layers extract deep spatial and temporal features. The structure of the model is shown in Fig. 1 and the model parameters in Table 1.
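As a concrete illustration, the structure above can be sketched in PyTorch. The kernel sizes, channel counts and dropout rate below are assumptions for illustration only; the actual values are those given in Table 1.

```python
import torch
import torch.nn as nn

class HIRNet(nn.Module):
    """Sketch of the five-layer 1D CNN of Sect. 3.1.
    Kernel sizes and channel counts are illustrative assumptions;
    the paper's exact hyperparameters are listed in its Table 1."""
    def __init__(self, n_classes=34, in_channels=2, window=128):
        super().__init__()
        def block(cin, cout, pool):
            layers = [nn.Conv1d(cin, cout, kernel_size=5, padding=2), nn.ReLU()]
            if pool:                      # first three layers use max pooling
                layers.append(nn.MaxPool1d(2))
            return layers
        feats = []
        feats += block(in_channels, 32, pool=True)
        feats += block(32, 64, pool=True)
        feats += block(64, 64, pool=True)
        feats += block(64, 128, pool=False)
        # last conv layer: batch normalization and dropout
        feats += [nn.Conv1d(128, 128, kernel_size=5, padding=2),
                  nn.BatchNorm1d(128), nn.ReLU(), nn.Dropout(0.5)]
        self.features = nn.Sequential(*feats)
        # three poolings halve the 128-sample window three times (128 -> 16)
        self.fc = nn.Linear(128 * (window // 8), n_classes)

    def forward(self, x):                 # x: (batch, 2, 128)
        h = self.features(x)
        return self.fc(h.flatten(1))

model = HIRNet()
out = model(torch.randn(4, 2, 128))       # logits for 34 subjects
```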
3.2 Transfer Strategies of HIR-Net Model
Generally, the first convolutional layer of a CNN extracts small-scale features, and the representations become progressively more abstract from layer to layer. The features extracted by the deeper convolution layers therefore correspond to large-scale patterns and are closely related to the specific task.
The wearer's movement mode and the placement of the phone are the most common factors affecting gait characteristics. We tried to find out which layers capture the features related to person identification under these varying conditions. The detailed procedures are explained in the experiment section.
4 Experiments
4.1 Data Acquisition and Preprocessing
(1) Tester and Environment
The 34 testers were between 22 and 27 years old: 18 males and 16 females. The height and weight of the males were 163–181 cm and 52–82 kg respectively; those of the females were 150–169 cm and 43–58 kg respectively. The testers wore sports pants or jeans. The test environment was indoors. Four Android mobile phones (Huawei Honor V8, OPPO Find 5, Redmi Note 1, Meizu Pro 5) were selected as the data-collecting devices. The data acquisition time was 1 min.
(2) Normalization and Filtering
The original data obtained from the accelerometer were the x-y-z triaxial accelerations. They were normalized in units of g (one acceleration of gravity). Formula 1 takes the X-axis acceleration as an example: \(OAccx_i\) is the original data and \(Accx_i\) the normalized data.
$$\begin{aligned} Acc{x_i} = \frac{{OAcc{x_i}}}{{9.8\,m/{s^2}}}g \end{aligned}$$(1)

(3) Organization of Data
In order to avoid differences in the X-Y-Z axis data caused by the orientation of the phone, two orientation-independent features were adopted in this paper. They are defined in formula 2 and formula 3. \(Acc_i = \left( {Accx_i,Accy_i,Accz_i} \right) \) is the acceleration of the i-th sample, \(Accg_i = \left( {Accgx_i,Accgy_i,Accgz_i} \right) \) the gravitational acceleration of the i-th sample, and \(Accu_i = \left( {Accux_i,Accuy_i,Accuz_i} \right) \) the wearer's acceleration of the i-th sample, each comprising X-Y-Z axis components. \( Accux_i,Accuy_i,Accuz_i\) are defined in formula 4. The input to the model contains the two-dimensional components \(< Acct_i,Accp_i > \).
$$\begin{aligned} Acct_i = \sqrt{Accx_i^2 + Accy_i^2 + Accz_i^2} \end{aligned}$$(2)

$$\begin{aligned} Accp_i = Accux_i \times Accgx_i + Accuy_i \times Accgy_i + Accuz_i \times Accgz_i \end{aligned}$$(3)

$$\begin{aligned} Accux_i=Accx_i-Accgx_i,\ Accuy_i=Accy_i-Accgy_i,\ Accuz_i=Accz_i-Accgz_i \end{aligned}$$(4)
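Formulas 2-4 can be computed directly from the triaxial samples. A minimal NumPy sketch follows; it assumes the gravitational component \(Accg_i\) has already been estimated (e.g. by low-pass filtering), which the paper does not specify.

```python
import numpy as np

def orientation_invariant_features(acc, grav):
    """acc, grav: (N, 3) arrays of total and gravitational acceleration,
    already normalized to units of g. Returns the two direction-independent
    components <Acct_i, Accp_i> per formulas 2 and 3."""
    acct = np.linalg.norm(acc, axis=1)     # magnitude (formula 2)
    accu = acc - grav                      # wearer's acceleration (formula 4)
    accp = np.sum(accu * grav, axis=1)     # projection onto gravity (formula 3)
    return np.stack([acct, accp], axis=1)  # (N, 2) model input

# Toy example: second sample is pure gravity, so Accu = 0 and Accp = 0.
acc = np.array([[0.1, 0.2, 1.0], [0.0, 0.0, 1.0]])
grav = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
feat = orientation_invariant_features(acc, grav)
```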
(4) Data Segmentation
The collected data were long time series. In sample preparation, we used a sliding window to segment them; the window size, determined experimentally, was 128 sample points.
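A sliding-window segmentation along these lines might look as follows; the 50% overlap (step of 64 samples) is an assumption, as the paper fixes only the window length of 128.

```python
import numpy as np

def segment(series, window=128, step=64):
    """Cut an (N, 2) feature series into overlapping windows.
    window=128 follows the paper; the step size is an assumption."""
    n = (len(series) - window) // step + 1
    return np.stack([series[i * step : i * step + window] for i in range(n)])

x = np.zeros((1000, 2))        # e.g. 1000 samples of <Acct_i, Accp_i>
samples = segment(x)           # (n_windows, 128, 2)
```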
4.2 Test Method
First, the data were divided into a source domain and a target domain. In each test, we trained the model on the source domain and then transferred it to the target domain. To find the layers most strongly correlated with a specific environmental factor, the parameters of some layers were fixed while the others were fine-tuned. The layers most strongly correlated with that factor were then identified from the convergence rate and accuracy of the model.
4.3 Transferring on Movement Patterns
A person's movement pattern (such as walking or running) is random and individual, and in daily life movement patterns change constantly. The dataset in this section contains four movements: walking, running, going upstairs and going downstairs. We took the gait data collected while walking as the source-domain dataset and the data from the other three movement patterns as the target-domain dataset. Both datasets contained data from the same 34 subjects. The collection environment was indoors and the sensors were placed at the waist. The experimental elements of the source and target domains are compared in Table 2. The goal of the experiment was to find the layers most strongly correlated with person identification under a changing movement pattern.
First, the model was trained on the source-domain dataset and then transferred to the target-domain dataset. In Fig. 2, Fig. 3, and Fig. 4, the notation \(finetune_-X+\) means that the parameters of all layers were imported from the source-domain model and the parameters of the layers before layer X were fixed during target training; that is, the parameters of layer X and the layers after it were tuned on the target domain. For example, \(finetune_-4+\) indicates that the parameters of layers 1, 2 and 3 were imported from the source-domain model and fixed during training on the target domain, while the later layers were tuned. The notation \(base_-model\) means retraining the model on the target-domain dataset from randomly initialized parameters.
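Operationally, the \(finetune_-X+\) strategy amounts to freezing the parameters of the layers before X after importing the source-domain weights. A hedged PyTorch sketch with toy layer sizes (not the paper's Table 1 values) is:

```python
import torch.nn as nn

# Toy stand-in for the five convolution layers of HIR-Net;
# the channel sizes here are illustrative only.
convs = nn.ModuleList([nn.Conv1d(2 if i == 0 else 8, 8, 3, padding=1)
                       for i in range(5)])

def finetune_from(layer_x):
    """finetune_-X+ strategy: freeze conv layers 1..X-1 (weights imported
    from the source-domain model) and leave layer X and later trainable."""
    for i, conv in enumerate(convs, start=1):
        trainable = i >= layer_x
        for p in conv.parameters():
            p.requires_grad = trainable

finetune_from(3)   # finetune_-3+: layers 1-2 fixed, layers 3-5 tuned
trainable_layers = [i + 1 for i, c in enumerate(convs)
                    if all(p.requires_grad for p in c.parameters())]
```

An optimizer built for the target domain would then be given only the parameters with `requires_grad=True`.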
The following can be seen from Fig. 2, Fig. 3, and Fig. 4. First, when the source-domain model was used directly for prediction on the target dataset, performance was very poor, as shown by the curve \(finetune_-6+\).
Second, the transfer-learning-based method adapted better to different movement patterns and converged faster. As can be seen from Fig. 2, Fig. 3, and Fig. 4, the curve \(base_-model\) performed poorly.
Third, \(finetune_-1+\), \(finetune_-2+\) and \(finetune_-3+\) had very similar performance. We therefore believe that the small-scale features extracted by the first two layers have little relationship with identity recognition under different motion modes; the identity-related features are mainly further combinations of these low-level features, captured by the third and later layers.
Therefore, when the model needs to switch between different motion modes, a transfer strategy that fixes the first two layers can be adopted. This fine-tuning strategy does not reduce the recognition rate and shortens the convergence time.
4.4 Transferring on Placement of the Mobile Phone
A mobile phone is often placed at different positions on the body: sometimes held in the hand, sometimes worn at the waist, and sometimes put in a trouser pocket. It is therefore very meaningful for the model to adapt quickly to data collected from different placements. We used four placement positions (waist, upper arm, trouser pocket and hand). The source-domain dataset contains data collected from a phone at the waist; the target-domain dataset contains data from phones at the other three positions (upper arm, trouser pocket and hand). Both datasets contain data collected from the same 34 subjects while walking. The experimental elements of the source and target domains are compared in Table 3.
From Fig. 5, Fig. 6, and Fig. 7, we obtained similar results. First, when the source-domain model was used directly for prediction on the target dataset, performance was very poor, as shown by the curve \(finetune_-6+\).
Second, the transfer-learning-based method adapted better to different placement positions. As can be seen from Fig. 5, Fig. 6, and Fig. 7, the curve \(base_-model\) was not among the best.
Third, \(finetune_-1+\), \(finetune_-2+\) and \(finetune_-3+\) had very similar performance. We therefore believe that the small-scale features extracted by the first two convolution layers have little relationship with identity recognition; when the parameters of the first two layers were fixed, the transferred model achieved similar performance. The identity-related features are mainly further combinations of the low-level features, obtained by the third and later layers. As the figures show, fixing the first two layers yielded good performance in most cases.
Therefore, when the model needs to adapt to data collected from different positions, the transfer strategy of fixing the first two layers can be adopted. This fine-tuning strategy does not reduce the recognition rate and shortens the convergence time.
4.5 Transferring Based on Motion Composition
First, if acceleration data were input directly into a model trained with data from a different position, the recognition rate was very low, as can be seen from Table 4.
Second, from the perspective of human physiology and behavior, the movements of different parts of the body are related to each other. For example, hand motion can be regarded as a combination of waist motion and the motion of the arm relative to the waist. We therefore hypothesized that data collected from the hand contain the characteristics of the waist data, and we tested this hypothesis in two ways.
First, we input the waist data into the hand model to test its accuracy, input the hand data into the waist model in the same way, and compared the two accuracies. Because hand motion includes waist motion, the recognition rate obtained by inputting waist data into the hand model was higher than that obtained by inputting hand data into the waist model.
Second, we used the waist data to fine-tune the hand model (i.e. transferred the hand model to the waist) and the hand data to fine-tune the waist model (i.e. transferred the waist model to the hand), and compared the two accuracies. The results are shown in Fig. 8 and Fig. 9. Transferring from the hand model to the waist model worked better than transferring from the waist model to the hand model: because the hand data contain the characteristic information of the waist data, convergence was faster and accuracy higher, whereas the waist data lack some details present in the hand data, so the reverse transfer performed worse.
5 Conclusion
The automatic feature extraction ability of convolutional neural networks is well suited to extracting hidden individual features from gait data. To make the recognition model adapt quickly to a variety of datasets, we explored rapid adaptation based on fine-tuning. We identified which parts of the model extract and recognize the gait features related to placement and activity patterns; based on these findings, optimized fine-tuning strategies can be designed. Moreover, based on the composition of motion, choosing the transfer direction of the model can also improve its convergence speed and accuracy.
References
Ngo, T.T., Makihara, Y., Nagahara, H., Mukaigawa, Y., Yagi, Y.: Orientation-compensative signal registration for owner authentication using an accelerometer. IEICE Trans. Inf. Syst. 97(3), 541–553 (2016)
Zhong, Y., Deng, Y.: Sensor orientation invariant mobile gait biometrics. In: International Joint Conference on Biometrics, IJCB (2014)
Subramanian, R., et al.: Orientation invariant gait matching algorithm based on the Kabsch alignment. In: IEEE International Conference on Identity, Security and Behavior Analysis (ISBA 2015), pp. 1–8 (2015)
Sprager, S., Juric, M.B.: Inertial sensor-based gait recognition: a review. Sensors 15(9), 22089–22127 (2015)
Johnston, A.H., Weiss, G.M.: Smartwatch-based biometric gait recognition. In: 2015 IEEE 7th International Conference on Biometrics Theory, Applications and Systems (BTAS), pp. 1–6 (2015)
Gafurov, D., Snekkenes, E.: Arm swing as a weak biometric for unobtrusive user authentication. In: 2008 International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 1080–1087 (2008)
San-Segundo, R., Cordoba, R., Ferreiros, J., D’Haro-Enriquez, L.: Frequency features and GMM-UBM approach for gait-based person identification using smartphone inertial signals. Pattern Recogn. Lett. 73(April 1), 60–67 (2016)
Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
Muaaz, M., Mayrhofer, R.: Orientation independent cell phone based gait authentication. In: Proceedings of the 12th International Conference on Advances in Mobile Computing and Multimedia, pp. 161–164 (2014)
Lu, H., Huang, J., Saha, T., Nachman, L.: Unobtrusive gait verification for mobile phones. In: Proceedings of the 2014 ACM International Symposium on Wearable Computers, pp. 91–98 (2014)
Damaševičius, R., Maskeliūnas, R., Venčkauskas, A., Woźniak, M.: Smartphone user identity verification using gait characteristics. Symmetry 8(10), 100 (2016)
Sprager, S., Zazula, D.: Impact of different walking surfaces on gait identification based on higher-order statistics of accelerometer data. In: 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), pp. 360–365 (2011)
Sun, B., Wang, Y., Banda, J.: Gait characteristic analysis and identification based on the iPhone’s accelerometer and gyrometer. Sensors 14(9), 17037–17054 (2014)
Trivino, G., Alvarez-Alvarez, A., Bailador, G.: Application of the computational theory of perceptions to human gait pattern recognition. Pattern Recogn. 43(7), 2572–2581 (2010)
Lin, B.-S., Liu, Y.-T., Yu, C., Jan, G.E., Hsiao, B.-T.: Gait recognition and walking exercise intensity estimation. Int. J. Environ. Res. Public Health 11(4), 3822–3844 (2014)
Sun, H., Yuao, T.: Curve aligning approach for gait authentication based on a wearable accelerometer. Physiol. Meas. 33(6), 1111 (2012)
Primo, A., Phoha, V.V., Kumar, R., Serwadda, A.: Context-aware active authentication using smartphone accelerometer measurements. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 98–105. Springer, New York (2014)
Mondal, S., Nandy, A., Chakraborty, P., Nandi, G.C.: Gait based personal identification system using rotation sensor. J. Emerg. Trends Comput. Inf. Sci. 3(2), 395–402 (2012)
San-Segundo, R., Echeverry-Correa, J.D., Salamea-Palacios, C., Lutfi, S.L., Pardo, J.M.: I-vector analysis for gait-based person identification using smartphone inertial signals. Pervasive Mob. Comput. 38, 140–153 (2017)
Kobayashi, T., Hasida, K., Otsu, N.: Rotation invariant feature extraction from 3-D acceleration signals. In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3684–3687 (2011)
Sprager, S., Zazula, D.: A cumulant-based method for gait identification using accelerometer data with principal component analysis and support vector machine. WSEAS Trans. Signal Process. 5(11), 369–378 (2009)
Nickel, C., Brandt, H., Busch, C.: Benchmarking the performance of SVMs and HMMs for accelerometer-based biometric gait recognition. In: 2011 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), pp. 281–286 (2011)
Watanabe, Y.: Influence of holding smart phone for acceleration-based gait authentication. In: 2014 Fifth International Conference on Emerging Security Technologies, pp. 30–33 (2014)
Gafurov, D., Helkala, K., Søndrol, T.: Biometric gait authentication using accelerometer sensor. J. Comput. 1(7), 51–59 (2006)
Frank, J., Mannor, S., Pineau, J., Precup, D.: Time series analysis using geometric template matching. IEEE Trans. Pattern Anal. Mach. Intell. 35(3), 740–754 (2012)
Gafurov, D., Bours, P.: Improved hip-based individual recognition using wearable motion recording sensor. In: Security Technology, Disaster Recovery & Business Continuity-International Conferences, pp. 179–186 (2010)
Nickel, C., Busch, C.: Does a cycle-based segmentation improve accelerometer-based biometric gait recognition?. In: 2012 11th International Conference on Information Science, Signal Processing and their Applications (ISSPA), pp. 746–751 (2012)
Hoang, T., Choi, D., Nguyen, T.: Gait authentication on mobile phone using biometric cryptosystem and fuzzy commitment scheme. Int. J. Inf. Secur. 14(6), 549–560 (2015). https://doi.org/10.1007/s10207-015-0273-1
Ren, Y., Chen, Y., Chuah, M.C., Yang, J.: User verification leveraging gait recognition for smartphone enabled mobile healthcare systems. IEEE Trans. Mob. Comput. 14(9), 1961–1974 (2014)
Sprager, S., Juric, M.B.: An efficient HOS-based gait authentication of accelerometer data. IEEE Trans. Inf. Forensics Secur. 10(7), 1486–1498 (2015)
Zeng, Y., Pande, A., Zhu, J., Mohapatra, P.: WearIA: wearable device implicit authentication based on activity information. In: 2017 IEEE 18th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1–9 (2017)
Gafurov, D., Snekkenes, E., Bours, P.: Improved gait recognition performance using cycle matching. In: 2010 IEEE 24th International Conference on Advanced Information Networking and Applications Workshops, pp. 836–841 (2010)
Ren, Y., Chen, Y., Chuah, M.C., Yang, J.: Smartphone based user verification leveraging gait recognition for mobile healthcare systems. In: 2013 IEEE International Conference on Sensing, Communications and Networking (SECON), pp. 149–157 (2013)
Zhang, Y., Pan, G., Jia, K., Lu, M., Wang, Y., Wu, Z.: Accelerometer-based gait recognition by sparse representation of signature points with clusters. IEEE Trans. Cybern. 45(9) 1864–1875 (2014)
Kwapisz, J.R., Weiss, G.M., Moore, S.A.: Cell phone-based biometric identification. In: 2010 Fourth IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–7 (2010)
Samà, Al., Ruiz, F.J., Agell, N., Pérez-López, C., Català, A., Cabestany, J.: Gait identification by means of box approximation geometry of reconstructed attractors in latent space. Neurocomputing 121(Dec. 9), 79–88 (2013)
Bächlin, M., Schumm, J., Roggen, D., Tröster, G.: Quantifying gait similarity: user authentication and real-world challenge. In: International Conference on Biometrics, pp. 1040–104 (2009)
Frank, J., Mannor, S., Precup, D.: Activity and gait recognition with time-delay embeddings. In: AAAI, pp. 1581–1586 (2010)
Preuveneers, D., Joosen, W., et al.: Improving resilience of behaviometric based continuous authentication with multiple accelerometers. In: IFIP Annual Conference on Data and Applications Security and Privacy, pp. 473–485 (2017)
Dehzangi, O., Taherisadr, M., ChangalVala, R.: IMU-based gait recognition using convolutional neural networks and multi-sensor fusion. Sensors 17(12), 2735 (2017)
Gadaleta, M., Rossi, M.: IDNet: smartphone-based gait recognition with convolutional neural networks. Pattern Recogn. 74, 25–37 (2018)
Acknowledgments
This work was supported by the Department of Science and Technology of Sichuan Province under Grants 2020YFS0083 and 2021YJ0081.
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Zhang, X., Zeng, J., Wang, G. (2022). A Fine-Tuning Strategy Based on Real Scenes in Gait Identification. In: Wang, G., Choo, KK.R., Ko, R.K.L., Xu, Y., Crispo, B. (eds) Ubiquitous Security. UbiSec 2021. Communications in Computer and Information Science, vol 1557. Springer, Singapore. https://doi.org/10.1007/978-981-19-0468-4_25
Print ISBN: 978-981-19-0467-7
Online ISBN: 978-981-19-0468-4