Abstract
The 3D palmprint biometric recognition system is a promising way to address the limitations of 2D palmprint recognition systems. This paper introduces a novel method for 3D palmprint recognition based on the Tan and Triggs (TT) normalization technique for data preprocessing and the GIST descriptor for feature extraction. The TT technique can effectively and efficiently eliminate not only the low frequencies containing the undesirable effects of shadows but also the high frequencies containing aliasing and noise in 3D palmprints. A holistic feature extraction method has been employed to attain salient characteristics. Finally, the cosine Mahalanobis distance has been used for feature matching in 3D palmprint recognition. The proposed system has been evaluated on a publicly available 3D palmprint database of 8000 samples. Experimental analyses show that the proposed method can effectively eradicate the effect of uneven illumination and greatly improve the performance of the recognition system. Moreover, the experimental results demonstrate that our method competes with, and often outperforms, many existing state-of-the-art 3D palmprint recognition techniques.
1 Introduction
Robust and dependable identity management systems have become an international demand in various fields. In fact, security and privacy have become a delicate problem for citizens, companies and governments due to large-scale information and data theft [13, 14]. Biometrics is a method of identifying individuals from biological characteristics. Nowadays, various biometric traits are employed, such as the face, fingerprint, palmprint, DNA, voice and iris [2]. The design of a reliable, efficient and resilient biometric identification system still has a long way to go. Identification of the individual is essential to ensure the security of systems and organizations [3, 12]. Identification or recognition systems based on biometrics prove to be potentially effective in responding to security needs. To this end, remarkable efforts have recently been made on emerging identification systems, i.e., palmprint-based individual recognition models. Research on palmprint recognition has principally concentrated on 2D palmprint traits. Such systems have proved effective in various applications, but 2D palmprint images can be effortlessly copied and counterfeited [1, 2]. Thus, 2D palmprint-based systems are susceptible to diverse sorts of attacks. To defeat these attacks, 3D palmprint-based individual recognition methods have been introduced [16, 17, 23, 26, 28, 29, 30]. Zhang et al. [29] presented a palmprint recognition method that extracted features such as Gaussian curvature images (GCI), mean curvature images (MCI) and surface type (ST) maps, while two measures similar to the Hamming distance were used for matching. In [16], MCI were extracted from the original depth data; both orientation and line features were then extracted from the MCI, and these two kinds of features were fused at the feature level and the score level. Zhang et al.
[30] realized a multi-level scheme for individual authentication by merging features of both 3D and 2D palmprints. To compare two 3D palmprints, surface curvature maps were extracted and the normalized local correlation was then used for matching. In [26], Yang et al. employed the shape index representation to describe the geometry of local regions in a 3D palmprint. The authors also extracted LBP (local binary pattern) and Gabor wavelet features from the shape index image; these two features were later combined at the score level. Liu and Li [17] applied the Orthogonal Line Ordinal Feature operator, denoted OLOF [23], on MCI derived from a 3D palmprint. To suppress small misalignments between two palmprint traits, the authors utilized a cross-correlation-based technique to align the two feature maps. Cui [8] proposed a combined 2D and 3D palmprint recognition system using PCA and the TPTSR (Two-Phase Test Sample Representation) framework. Meraoumia et al. [18] combined the 2D and 3D information of palmprints to build an improved multimodal biometric system with score-level fusion: PCA and the DWT (Discrete Wavelet Transform) were applied to the palmprint for feature extraction, and an HMM (Hidden Markov Model) was used for modeling the feature vectors. Zhang et al. [31] developed a 3D palmprint recognition system in which the block-wise features method was used for feature extraction and collaborative representation was used to describe these features. In turn, Chaa et al. [6] integrated the 2D and 3D information of palmprints to build an efficient multimodal biometric system based on fusion at the matching score level; the 2D palmprint features were extracted using the B-BSIF (Bank-Binarized Statistical Image Features) method, and the 3D palmprint features were extracted using the self-quotient image algorithm and Gabor wavelets. In the literature, few deep learning methods for 3D palmprint recognition have been proposed.
For instance, in our previous work [7], we developed a new personal recognition system for 3D palmprints using PCANet to extract features and an SVM for classification. Samai et al. [22] presented a novel scheme to recognize people using their 2D and 3D palmprints: the scores of the 2D and 3D palmprints were integrated at the matching score level, and the Discrete Cosine Transform Net (DCT-Net), Mean Curvature (MC) and Gauss Curvature (GC) were used to extract features from the 2D and 3D palmprints.
In this work, a new individual recognition system using 3D palmprints is proposed in order to overcome the weaknesses of 2D palmprint recognition systems. Evolving systems usually operate in dynamically changing environments, and their aim is to have better generalization capability. Biometric systems by definition operate in changing environments, e.g., the ambient lighting changes between data capture and authentication. There are many works on adaptive biometric systems. For example, the author in [27] presented a palmprint recognition system that uses an adaptive mechanism to fuse match scores for better performance, while the authors in [11] employed adaptive thresholding to handle changing data. Both the [27] and [11] studies are vulnerable to changes in input sample size. To address some of the issues in [27] and [11], we employ the GIST descriptor, which summarizes the gradient information (scales and orientations) of the input sample and suppresses the variations caused by evolving/changing environments or sample sizes. To reconstruct illumination-invariant 3D palmprint images, the TT normalization technique is applied to the 3D palmprint. The GIST descriptor is then used to extract discriminative features from the TT images. Next, the PCA + LDA method is employed to reduce the feature dimension, and finally the cosine Mahalanobis distance is utilized as the matching strategy. To evaluate the performance of the introduced algorithm, extensive experiments have been performed on a 3D palmprint database with 8000 range images from 200 individuals. The experimental results demonstrate that the proposed system achieves better results than prior methods. The rest of this paper is structured as follows. A general overview of the proposed framework is given in Section 2; in particular, the TT normalization technique, the GIST descriptor and the PCA + LDA technique are described there.
In Section 3, the experimental results are reported with a detailed discussion. The conclusion and future work are described in Section 4.
2 Proposed method
An automatic 3D palmprint recognition system is a processing chain split into two stages: (i) enrollment and (ii) identification/authentication, as shown in Fig. 1. During enrollment, the biometric trait (here, the palmprint) of the user is captured and processed using the Tan and Triggs (TT) technique. Features are extracted using the GIST descriptor, and dimensionality reduction (PCA + LDA) is then applied. The outputs are saved in a database as templates and a reference model. During identification or authentication, the same biometric trait of the user is captured and processed with the TT technique, and the features are extracted. This time, the extracted features are compared with those stored in the database using the Mahalanobis distance to measure their correspondence. If the distance is less than a threshold, the user is accepted as genuine; otherwise, the user is rejected as an impostor.
The main ideas brought in this paper are as follows:
-
The Tan and Triggs (TT) normalization technique is used to eliminate not only the low frequencies containing the undesirable effects of shadows but also the high frequencies containing aliasing and noise in 3D palmprints. To the best of our knowledge, the TT algorithm has never before been studied or tested for 3D palmprint images.
-
The GIST descriptor is applied to the resultant TT image to extract discriminative features of the 3D palmprint image, which are then used for identification/authentication and classification. One of the foremost advantages of GIST-based feature extraction is its discriminative power: it extracts an overall spatial envelope corresponding to the different frequencies and orientations contained in the image, which is the source of the descriptor's comprehensiveness. Because it gives a global description of the essential information in the image, it is possible to discard the details of the image and retain only its main frequencies and orientations in order to classify it.
2.1 Region of interest extraction
In this section, the region of interest (ROI) extraction process for 3D palmprints is described. Li et al. [16] designed a 3D palmprint acquisition device based on structured-light technology, with which the 3D and 2D palmprint images are simultaneously recorded from the palm. The process of ROI extraction is shown in Fig. 2. First, a Gaussian smoothing operation is applied to the original image, and the smoothed image is then binarized with a threshold T (see Fig. 2a, b). This threshold is calculated automatically using Otsu's method [20] and is employed to convert the gray-level image into a binary image. Second, the boundaries of the binary image can easily be extracted by a boundary tracking algorithm, as shown in Fig. 2c. Third, the boundary image is processed to locate the points P1 and P2 for selecting the 2D ROI template. Finally, the ROI image is extracted, where the rectangle indicates the region of the ROI (see Fig. 2d). The extracted 2D ROI is illustrated in Fig. 2e. Figure 2f illustrates the 3D palmprint image, and Fig. 2g illustrates the 3D ROI obtained by grouping the cloud points corresponding to the pixels of the 2D ROI, as described in [29].
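The thresholding step above can be sketched as follows; this is a minimal NumPy implementation of Otsu's histogram-based criterion (function and variable names are ours, not taken from the paper):

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Pick the threshold that maximizes the between-class variance of
    the gray-level histogram (equivalently, minimizes the within-class
    variance), following Otsu's method."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)                  # probability of class 0 up to each bin
    m = np.cumsum(p * centers)         # cumulative mean up to each bin
    mT = m[-1]                         # global mean
    # between-class variance; guard against empty classes (w0 = 0 or 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        var_b = (mT * w0 - m) ** 2 / (w0 * (1.0 - w0))
    var_b = np.nan_to_num(var_b)
    return centers[np.argmax(var_b)]
```

Pixels above the returned threshold are then set to 1 to produce the binary palm mask.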
2.2 Tan and Triggs normalization technique
The TT technique was introduced by Tan and Triggs in the field of face recognition [24]. Here, it is used to increase the contrast of the dark regions in the depth image of the 3D palmprint and to attenuate that of the bright regions, to suppress noise and illumination gradients, and to normalize the contrast. Figure 3 shows the three processing steps on an example depth image of a 3D palmprint.
2.2.1 Gamma correction
The goal is to increase the contrast of dark regions while attenuating that of bright regions. The first step is described by the mathematical relation:

I′(i, j) = I(i, j)^τ

where I(i, j) represents the input image, I′(i, j) denotes the image resulting from the first step of preprocessing, and τ stands for the gamma value.
Figure 4 shows the depth image corrected with different gamma values. It is clear from this figure that a gamma value of 0.1 gives bright images and a gamma value of 3 gives dark images, whereas a gamma value of 1.1 yields the best representation of the depth images of 3D palmprints.
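The gamma-correction step can be sketched as follows (a minimal version; rescaling the depth image to [0, 1] before the power law, and the function name, are our assumptions):

```python
import numpy as np

def gamma_correction(img, tau=1.1):
    """Power-law correction: I'(i, j) = I(i, j) ** tau.

    The depth image is first rescaled to [0, 1] (our assumption);
    tau < 1 brightens dark regions relative to bright ones, while
    tau > 1 darkens them.
    """
    img = img.astype(np.float64)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    return img ** tau
```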
2.2.2 Difference of Gaussians
The difference of Gaussians (DoG) is a band-pass filter obtained as the difference of two Gaussians with different but close standard deviations (σ1 and σ2, with σ2 > σ1), implemented using the Gaussian kernel Gσ(i, j). Smoothing with the narrow Gaussian (σ1) acts as a low-pass filter that reduces the aliasing and noise existing in the image without erasing too much of the underlying recognition signal, while subtracting the wide-Gaussian blur (σ2) acts as a high-pass filter that removes the undesirable low-frequency effects of shadows. In this work, the parameters of the DoG filter have been empirically selected as σ1 = 1 and σ2 = 2.
The resulting image I′′(i, j) of the DoG filter is calculated by the following equation:

I′′(i, j) = (Gσ1 − Gσ2) ∗ I′(i, j)

where ∗ denotes 2D convolution.
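A minimal sketch of the DoG step, using SciPy's Gaussian filter (function name is ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(img, sigma1=1.0, sigma2=2.0):
    """Band-pass DoG filtering: I'' = (G_sigma1 - G_sigma2) * I'.

    Smoothing with the narrow Gaussian (sigma1) suppresses high-frequency
    aliasing and noise; subtracting the wide-Gaussian blur (sigma2) removes
    low-frequency shading such as shadows.
    """
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)
```

On a constant (shadow-free, noise-free) image the two blurred versions coincide and the DoG response is zero, which is the expected behavior of a band-pass filter.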
2.2.3 Contrast equalization
The last step of the Tan and Triggs normalization is contrast equalization. This stage robustly rescales an image that has already been photometrically normalized. It is carried out in the following three steps (equations):

I(i, j) ← I(i, j) / (mean(|I(i, j)|^α))^(1/α)

I(i, j) ← I(i, j) / (mean(min(ρ, |I(i, j)|)^α))^(1/α)

I(i, j) ← ρ · tanh(I(i, j) / ρ)
In this work, the parameters of the contrast equalization step were empirically selected as ρ = 1 and α = 2. Figure 5 shows samples of the depth images of 3D palmprints with their processed images using the Tan and Triggs normalization technique (TT images).
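The three equalization steps can be sketched as follows. The defaults follow the values reported here (α = 2, ρ = 1); note that the original face-recognition formulation of Tan and Triggs commonly uses α = 0.1 and ρ = 10, so treat these defaults as paper-specific assumptions:

```python
import numpy as np

def contrast_equalization(img, alpha=2.0, rho=1.0):
    """Three-stage contrast equalization of the TT pipeline."""
    eps = 1e-12
    # stage 1: global rescaling by the alpha-th moment of |I|
    img = img / (np.mean(np.abs(img) ** alpha) ** (1.0 / alpha) + eps)
    # stage 2: same rescaling, but with large magnitudes trimmed at rho first,
    # so outliers do not dominate the normalizer
    img = img / (np.mean(np.minimum(rho, np.abs(img)) ** alpha) ** (1.0 / alpha) + eps)
    # stage 3: squash any remaining extreme values into (-rho, rho)
    return rho * np.tanh(img / rho)
```

The final tanh guarantees the output is bounded, which stabilizes the subsequent Gabor filtering.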
2.3 GIST feature extraction
The GIST descriptor has been demonstrated to be an excellent solution for scene categorization problems [19]. It attempts to characterize scenes using spectral and coarsely localized information, and its benefit is being very compact and fast to compute. In this work, we have used the GIST descriptor for feature extraction with 8 orientations at 5 different scales and a spatial grid of resolution 6 × 6. The feature vector of each TT image was computed with the GIST descriptor as follows. First, each TT image was convolved with a bank of Gabor filters using 8 orientations and 5 scales. In the spatial domain, a Gabor filter is the product of a complex sinusoid and a Gaussian envelope; a 2D Gabor filter is defined continuously by the function Hμ,ν as follows:

Hμ,ν(x, y) = (fμ² / (π n λ)) exp(−(fμ²/n²) xp² − (fμ²/λ²) yp²) exp(j 2π fμ xp)

with xp = x cos(θν) + y sin(θν) and yp = −x sin(θν) + y cos(θν). The parameters of the Gabor filter are fμ = fmax / 2^(μ/2) and θν = νπ/8, where fμ and fmax are, respectively, the center and maximal frequencies, and θν is the orientation. In the experiments, the parameters of the 2D Gabor filters were empirically selected as fmax = 0.25 and n = λ = √2, where n and λ represent the size of the Gaussian envelope along the x-axis and y-axis, respectively.

Then, the 40 filter responses (filtered images) are windowed with a 6 × 6 grid (36 regions), and the average value within each region is calculated. Finally, the 36 averaged values of all 40 filtered images (8 × 5) are concatenated, which results in a feature vector of length 36 × 40 = 1440. The steps of feature extraction from a sample image using the GIST descriptor are shown in Fig. 6, and the GIST descriptor for different scales, orientations and regions is shown in Fig. 7.
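The pipeline above can be sketched as follows. This is a simplified illustration, not the reference GIST implementation (which builds the filter bank in the frequency domain with pre-filtering); the spatial kernel size and the envelope width tied to the frequency are our assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(f, theta, size=15):
    """Spatial-domain Gabor filter: complex sinusoid times a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    sigma = np.sqrt(2.0) / f                      # envelope width (assumption)
    env = np.exp(-(xp ** 2 + yp ** 2) / (2.0 * sigma ** 2))
    return env * np.exp(2j * np.pi * f * xp)

def gist_descriptor(img, n_scales=5, n_orient=8, grid=6, f_max=0.25):
    """Grid-averaged Gabor magnitudes: grid * grid * n_scales * n_orient values."""
    feats = []
    h, w = img.shape
    for mu in range(n_scales):
        f = f_max / (2.0 ** (mu / 2.0))           # f_mu = f_max / 2^(mu/2)
        for nu in range(n_orient):
            theta = nu * np.pi / n_orient         # theta_nu = nu * pi / 8
            resp = np.abs(fftconvolve(img, gabor_kernel(f, theta), mode="same"))
            # average the response inside each cell of a grid x grid window
            for gy in np.array_split(np.arange(h), grid):
                for gx in np.array_split(np.arange(w), grid):
                    feats.append(resp[np.ix_(gy, gx)].mean())
    return np.asarray(feats)    # 36 * 40 = 1440 values with the defaults
```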
2.4 Reduction of the dimensionality
High-dimensional feature vectors are sometimes difficult to process or classify. To address this issue, dimensionality reduction techniques have been used. Such techniques reduce computing time, improve prediction performance, and help us to better understand the data. One of the most common techniques is PCA + LDA, which is fast, simple and popular. PCA is first used to project the images into a lower-dimensional data space [25]. The goal of LDA is then to maximize inter-class distances while minimizing intra-class distances, which amounts to finding the transformation matrix W that maximizes the Fisher criterion [4]:

T(W) = |Wᵀ SB W| / |Wᵀ SW W|

where SB and SW denote the between-class and within-class scatter matrices, respectively. T(W) is the Fisher discriminant criterion that is maximized, where W is built by concatenating the d leading eigenvectors obtained by solving the generalized eigenvalue problem:

SB wj = λj SW wj,   j = 1, 2, …, d.

Applying the PCA + LDA method to the selected data, the 399 most important features have been retained for the 3D palmprint database.
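The PCA + LDA projection can be sketched with scikit-learn (an illustration, not the authors' implementation; the PCA dimension is our assumption). Note that LDA retains at most n_classes − 1 directions, which with 400 palm classes corresponds to the 399 features mentioned above:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def fit_pca_lda(X, y, n_pca=200):
    """PCA followed by LDA.

    PCA first projects the feature vectors (e.g. 1440-dim GIST vectors)
    onto a lower-dimensional subspace; LDA then finds at most
    (n_classes - 1) directions maximizing the ratio of between-class
    to within-class scatter.
    """
    model = make_pipeline(PCA(n_components=n_pca),
                          LinearDiscriminantAnalysis())
    model.fit(X, y)
    return model
```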
2.5 The matching module
In this method, the cosine Mahalanobis distance has been used as the matching measure for all the test images. Given two feature vectors Vi and Vj representing the query and database images, respectively, the distance between Vi and Vj is obtained by the following relation:

d(Vi, Vj) = − (Viᵀ C⁻¹ Vj) / √((Viᵀ C⁻¹ Vi)(Vjᵀ C⁻¹ Vj))
In the above equation, C represents the covariance matrix.
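The matching measure can be sketched as follows; this is one common formulation of the cosine Mahalanobis distance (the cosine of the angle between the two vectors in the space whitened by C⁻¹), expressed here as 1 − cosine so that smaller values mean more similar. The sign convention and function name are our assumptions:

```python
import numpy as np

def cosine_mahalanobis(vi, vj, C_inv):
    """Cosine similarity in the Mahalanobis-whitened space, returned as
    a distance (1 - cosine, so smaller means more similar).

    C_inv is the inverse of the covariance matrix C estimated on the
    gallery feature vectors.
    """
    num = vi @ C_inv @ vj
    den = np.sqrt((vi @ C_inv @ vi) * (vj @ C_inv @ vj))
    return 1.0 - num / den
```

Like the ordinary cosine distance, this measure is invariant to the scale of either feature vector; the whitening by C⁻¹ additionally de-emphasizes directions of high variance.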
3 Experiments
Here, we provide the experimental analysis of the proposed 3D palmprint recognition framework.
3.1 Database
In order to evaluate the performance of the proposed system described in the previous section, the publicly available PolyU 3D palmprint database has been used. This database was collected by The Hong Kong Polytechnic University [21]. It contains 8000 samples from 400 different palms (classes) belonging to 200 individuals (136 males and 64 females). For each palm, 20 different palmprint samples are available. The 3D palmprint traits were recorded in two sessions with an average interval of one month between them. In each session, 10 samples were collected from both the left and right palms of each volunteer.
3.2 Experimental protocol
In all experiments, we took the 3D palmprint samples of the first session as the gallery set and the images of the second session as the probe set. Both 3D palmprint identification and verification are illustrated in this work. For the identification experiment, we provide results in the form of the Rank-1 recognition rate, calculated as:

Rank-1 = (Ni / N) × 100%

where Ni stands for the number of images correctly assigned to the right identity and N refers to the overall number of images for which an identity assignment is attempted. For the verification experiment, we report results in the form of the Equal Error Rate (EER), i.e., the operating point where the False Accept Rate (FAR) equals the False Reject Rate (FRR). Also, the CMC (Cumulative Match Characteristic) curve is typically used for identification (one-to-many search), while a Receiver Operating Characteristic (ROC) curve, a plot of the Genuine Accept Rate (GAR) against the FAR for all achievable thresholds, is plotted for the verification mode.
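The two reported metrics can be sketched as follows (a minimal illustration; the variable names and the simple threshold sweep used to locate the EER are ours):

```python
import numpy as np

def rank1_rate(dist, gallery_ids, probe_ids):
    """Rank-1 = (correctly identified probes / total probes) * 100.

    dist[i, j] is the distance between probe i and gallery sample j;
    each probe is assigned the identity of its nearest gallery sample.
    """
    nearest = gallery_ids[np.argmin(dist, axis=1)]
    return 100.0 * np.mean(nearest == probe_ids)

def equal_error_rate(genuine, impostor):
    """EER: the operating point where FAR equals FRR.

    Scores are distances: a comparison is accepted when its distance
    falls below the threshold t.
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor < t).mean() for t in thresholds])  # accepted impostors
    frr = np.array([(genuine >= t).mean() for t in thresholds])  # rejected genuines
    idx = np.argmin(np.abs(far - frr))
    return 100.0 * (far[idx] + frr[idx]) / 2.0
```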
3.3 Best GIST descriptor and TT normalization parameters
In this experiment, our goal is to determine the best GIST descriptor and TT normalization parameters for 3D palmprint recognition. Four parameters need to be determined: the gamma value τ of the TT normalization technique, and the number of scales, the number of orientations and the grid size of the GIST descriptor. We first vary the grid size while keeping the other parameters fixed as follows: number of scales = 5, number of orientations = 8 and gamma value τ = 1.2. Table 1 presents the performance of the proposed 3D palmprint recognition system as the grid size varies from 4 × 4 to 10 × 10.
From Table 1, it is easy to see that the best performance was attained with a grid size of 6 × 6: the lowest EER equals 0.00%, the verification rate (VR) at 1% FAR equals 100% in verification mode, and Rank-1 equals 99.98% in identification mode. Next, the gamma value τ was varied while keeping the GIST descriptor parameters fixed as follows: number of scales = 5, number of orientations = 8 and a grid of size 6 × 6. Figure 8 shows the performance of the proposed 3D palmprint recognition system with gamma values ranging from 0.1 to 1.5 in steps of 0.1.
We can observe in Fig. 8 that the best performance was obtained with a gamma value of 1.1: the lowest EER attained is 0.00%, the verification rate (VR) at 1% FAR reaches 100% in verification mode, and Rank-1 reaches 100.00% in identification mode. Finally, we fixed the gamma value at 1.1 and the grid size at 6 × 6, and varied the number of scales and the number of orientations. The results are shown in Fig. 9a in the form of a CMC (Cumulative Match Characteristic) curve; with 5 scales and 8 orientations, the system attains the best performance. The CMC curves in Fig. 9b show the recognition rate of the proposed system for different sigma values (σ1, σ2). First, we varied σ1 between 0.1 and 1 with σ2 fixed at 2; the curves demonstrate that σ1 has only a slight effect on the recognition rate, and the best result was obtained with σ1 = 1.
Second, we fixed σ1 = 1 and varied σ2 among 1.2, 1.7 and 2; the best result was obtained with σ2 = 2. Therefore, we chose (σ1, σ2) = (1, 2) in our work.
Based on the results achieved in this experiment, we decided to utilize the GIST descriptor with 5 scales, 8 orientations and a grid of size 6 × 6, and the TT normalization technique with gamma = 1.1, σ1 = 1 and σ2 = 2 in the following experiments.
3.4 Analysis of feature extraction techniques with and without TT normalization
In this set of experiments, we investigated several feature extraction algorithms on the 3D palmprint depth image and their efficacy with and without the TT normalization scheme. The results are reported in Table 2. As can be seen in Table 2, the TT + GIST + PCA + LDA method performs significantly better than the other six methods, i.e., PCA + LDA, Gabor + PCA + LDA, GIST + PCA + LDA, GCI + GIST + PCA + LDA, MCI + GIST + PCA + LDA and TT + Gabor + PCA + LDA. Here, the MCI and GCI images have been enhanced using a Butterworth low-pass filter [9] of order 4 with cutoff frequency 20. The system using the TT + GIST + PCA + LDA method attained an EER of 0.00%, a VR at 1% FAR of 100.00% and a Rank-1 of 100.00% under verification and identification mode, respectively. We also compare the Rank-1 (identification mode) and EER (verification mode) of the proposed method with three classifiers: a Support Vector Machine with the 'One-Against-One' strategy (OAO-SVM) [10], K-nearest neighbors (KNN) and Random Forest [5]. The results of this comparison are summarized in Table 3. We can notice from Table 3 that the KNN classifier with the Mahalanobis distance achieved a lower EER and a higher Rank-1 and is therefore more efficient than the SVM and Random Forest classifiers. The OAO-SVM using the radial basis function (RBF) kernel gives much better results than the Gaussian kernel, linear function and polynomial function; thus, only the performances (Rank-1/EER) of the RBF kernel are reported in Table 3. Also, the result of the Random Forest with a number of trees (NT) equal to 1000 is reported in Table 3.
3.5 Comparison with existing state of the art 3D palmprint recognition techniques
In this section, Table 4 provides a performance comparison between the proposed system and prior works in the literature. It is worth noticing that the proposed system outperforms the other existing frameworks. In addition, we evaluated the running speed of the proposed algorithm and compared it with the computational complexity of the baseline algorithms. To this end, the proposed method was implemented in MATLAB R2018a under the Windows 7 operating system, and experiments were conducted on a PC with a Core i3-2375M CPU and 4 GB of RAM. Considering only one test sample, the runtime for the identification operation comprises the time needed for the feature extraction step and the time required for the matching step [23]. The computational time for the one-sample identification scenario is presented in Table 5. As shown in Table 5, the proposed framework runs faster than other methods such as MCI [29], the local correlation (LC)-based method [30], the MCI + GCI + ST method [28], joint line and orientation features (JLOF) [16], the PCA + TPTSR method [8] and the method in [6], with the exception of the block-wise features and collaborative representation (BWFCR) method [31]. Specifically, the computational cost of the proposed framework is only 2.7 s, while the MCI + GCI + ST method proposed in [28] requires 76.53 s.
4 Conclusion
In this paper, a novel method for 3D palmprint recognition has been proposed. In particular, the TT normalization technique, the GIST feature descriptor with PCA + LDA, and the cosine Mahalanobis distance for matching were employed to build an efficient biometric recognition system. The experimental results on a publicly available dataset with 400 classes demonstrate that the proposed 3D palmprint recognition system achieves better performance than prior frameworks. Moreover, the experiments conducted on the PolyU 3D palmprint database demonstrate that the GIST descriptor is a powerful texture descriptor that is also useful for the 3D palmprint biometric trait. Future work will include integrating other modalities (e.g., the face) with the 3D palmprint modality in order to build security systems with higher accuracy.
References
Akhtar Z, Alfarid N (2011) Secure learning algorithm for multimodal biometric systems against spoof attacks. In: Proc. international conference on information and network technology (IPCSIT), pp 52–57
Akhtar Z, Foresti GL (2016) Face spoof attack recognition using discriminative image patches. Journal of electrical and Computer Engineering. 1–14. DOI:10.1155/2016/4721849.
Bardwell WE (2005) Biometric identification system using biometric images and personal identification number stored on a magnetic stripe and associated methods. U.S. patent no. 6,959,874. 1
Belhumeur PN, Hespanha JP, Kriegman DJ (1997) Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7):711–720. https://doi.org/10.1109/34.598228
Breiman L (2001) Random forests. Mach Learn 45(1):5–32. https://doi.org/10.1023/A:1010933404324
Chaa M, Boukezzoula N-E, Attia A (2017) Score-level fusion of two-dimensional and three-dimensional palmprint for personal recognition systems. Journal of Electronic Imaging 26(1):013018. https://doi.org/10.1117/1.JEI.26.1.013018
Chaa M, Akhtar Z, Attia A (2019) 3D palmprint recognition using unsupervised convolutional deep learning network and SVM classifier. IET Image Process 13(5):736–745. https://doi.org/10.1049/iet-ipr.2018.5642
Cui J (2014) 2D and 3D palmprint fusion and recognition using PCA plus TPTSR method. Neural Comput & Applic 24(3–4):497–502. https://doi.org/10.1007/s00521-012-1265-y
Gonzalez RC, Woods RE (2002) Digital image processing (preview).
Hsu C-W, Lin C-J (2002) A comparison of methods for multiclass support vector machines. IEEE Trans Neural Netw 13(2):415–425. https://doi.org/10.1109/72.991427
Jaafar H, Ibrahim S, Ramli DA (2015) A robust and fast computation touchless palm print recognition system using LHEAT and the IFkNCN classifier. Computational intelligence and neuroscience 2015:1–17. https://doi.org/10.1155/2015/360217
Jain A, Hong L, Pankanti S (2000) Biometric identification. Commun ACM 43(2):90–98. https://doi.org/10.1145/328236.328110
Jain AK, Ross A, Prabhakar S (2004) An introduction to biometric recognition. IEEE Transactions on Circuits and Systems for Video Technology 14(1):4–20. https://doi.org/10.1109/TCSVT.2003.818349
Jain AK, Flynn P, Ross AA (2008) Handbook of biometrics. Springer Science & Business Media X, 556. DOI:10.1007/978-0-387-71041-9
Li W, Zhang L, Zhang D et al. (2010) Efficient joint 2D and 3D palmprint matching with alignment refinement. 2010 I.E. Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2010), San Francisco, CA, USA. IEEE Computer Society, Washington, DC, USA, p. 795–801. DOI:10.1109/ CVPR.2010.5540134
Li W, Zhang D, Zhang L, Lu G, Yan J (2011) 3D palmprint recognition with joint line and orientation features. IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews 42(2):274–279. https://doi.org/10.1109/TSMCC.2010.2055849
Liu M, Li L (2012) Cross-correlation based binary image registration for 3D palmprint recognition. In: 2012 IEEE 11th International Conference on Signal Processing. IEEE, pp 1597-1600. DOI: 10.1109/ICoSP.2012.6491885.
Meraoumia A, Chitroub S, Bouridane A (2013) 2D and 3D palmprint information, PCA and HMM for an improved person recognition performance. Integrated Computer-Aided Engineering 20(3):303–319. https://doi.org/10.3233/ICA-130431
Oliva A, Torralba A (2001) Modeling the shape of the scene: a holistic representation of the spatial envelope. Int J Comput Vis 42(3):145–175. https://doi.org/10.1023/A:1011139631724
Otsu N (1979) A threshold selection method from gray-level histograms. IEEE transactions on systems, man, and cybernetics 9(1):62–66. https://doi.org/10.1109/TSMC.1979.4310076
PolyU 2D and 3D Palmprint Database (2017) available at: www.comp.polyu.edu.hk/~biometrics/, Accessed on Jan. 01
Samai D, Bensid K, Meraoumia A, Taleb-Ahmed A, Bedda M (2018) 2d and 3d palmprint recognition using deep learning method. In: 2018 3rd International Conference on Pattern Analysis and Intelligent Systems (PAIS), IEEE, pp 1-6. DOI: 10.1109/PAIS.2018.8598522
Sun Z, Tan T, Wang Y et al. (2005) Ordinal palmprint represention for personal identification. 2005 I.E. Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA. IEEE Computer Society, Washington, DC, p. 279–284. DOI:10.1109/CVPR.2005.267.
Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19(6):1635–1650. https://doi.org/10.1109/TIP.2010.2042645
Turk M, Pentland A (1991) Eigenfaces for recognition. J Cogn Neurosci 3(1):71–86. https://doi.org/10.1162/jocn.1991.3.1.71
Yang B, Wang X, Yao J, Yang X, Zhu W (2013) Efficient local representations for three-dimensional palmprint recognition. Journal of Electronic Imaging 22(4):043040. https://doi.org/10.1117/1.JEI.22.4.043040
Zhang S (2013) Palmprint recognition method based on adaptive fusion. In: 2013 Second international conference on robot, vision and signal processing, IEEE, pp 115-119. DOI: 10.1109/RVSP.2013.33
Zhang D, Lu G, Li W, Zhang L, Luo N (2008) Three dimensional palmprint recognition using structured light imaging. In: 2008 IEEE Second International Conference on Biometrics: Theory, Applications and Systems, IEEE, pp 1-6. DOI: 10.1109/BTAS.2008.4699346
Zhang D, Lu G, Li W et al (2009) Palmprint recognition using 3-D information. IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews 39(5):505–519. https://doi.org/10.1109/TSMCC.2009.2020790
Zhang D, Kanhangad V, Luo N, Kumar A (2010) Robust palmprint verification using 2D and 3D features. Pattern Recogn 43(1):358–368. https://doi.org/10.1016/j.patcog.2009.04.026
Zhang L, Shen Y, Li H, Lu J (2015) 3D palmprint identification using block-wise features and collaborative representation. IEEE Trans Pattern Anal Mach Intell 37(8):1730–1736. https://doi.org/10.1109/TPAMI.2014.2372764
Chaa, M., Akhtar, Z. 3D Palmprint recognition using Tan and Triggs normalization technique and GIST descriptors. Multimed Tools Appl 80, 2263–2277 (2021). https://doi.org/10.1007/s11042-020-09689-6