1 Introduction

Robust and dependable identity management systems have become an international demand in various fields. Indeed, security and privacy have become delicate problems for citizens, companies, and governments due to large-scale information and data theft [13, 14]. Biometrics is a method of identifying individuals from their biological characteristics. Nowadays, various biometric traits are employed, such as the face, fingerprint, palmprint, DNA, voice, and iris [2]. The design of a reliable, efficient, and resilient biometric identification system still has a long way to go. Identifying individuals is essential to ensure the security of systems and organizations [3, 12], and identification or recognition systems based on biometrics have proven potentially effective in responding to these security needs. To this end, remarkable efforts have recently been devoted to emerging identification systems, in particular palmprint-based individual recognition models.

Research on palmprint recognition has principally concentrated on 2D palmprint traits. Such systems have proven effective in various applications, but 2D palmprint images can be effortlessly copied and counterfeited [1, 2]. Thus, 2D palmprint-based systems are susceptible to diverse sorts of attacks. To defeat these attacks, 3D palmprint-based individual recognition methods have been introduced [16, 17, 23, 26, 28,29,30]. Zhang et al. [29] presented a palmprint recognition method that extracted features such as Gaussian curvature images (GCI), mean curvature images (MCI), and surface type (ST) maps; for matching, two measures similar to the Hamming distance were used. In [16], MCI were extracted from the original depth data; then, both orientation and line features were extracted from the MCI and fused at the feature level and the score level. Zhang et al. [30] realized a multi-level scheme for individual authentication by merging features of 3D and 2D palmprints; to compare two 3D palmprints, surface curvature maps were extracted and the normalized local correlation was used for matching. In [26], Yang et al. employed the shape index representation to describe the geometry of local regions in a 3D palmprint; the authors also extracted local binary pattern (LBP) and Gabor wavelet features from the shape index image and combined these two features at the score level. Liu and Li [17] applied the Orthogonal Line Ordinal Feature operator (OLOF) [23] to MCI derived from a 3D palmprint; to suppress small misalignments between two palmprint traits, they utilized a cross-correlation-based technique to register the two feature maps. Cui [8] proposed a combined 2D and 3D palmprint recognition system using PCA and the Two-Phase Test Sample Representation (TPTSR) framework. Meraoumia et al. [18] combined the 2D and 3D information of palmprints to build an improved multimodal biometric system with score-level fusion; PCA and the Discrete Wavelet Transform (DWT) were applied to the palmprint for feature extraction, and a Hidden Markov Model (HMM) was used for modeling the feature vectors. Zhang et al. [31] developed a 3D palmprint recognition system in which a block-wise features method was used for feature extraction and collaborative representation was used to describe these features.
In turn, Chaa et al. [6] integrated the 2D and 3D information of palmprints to build an efficient multimodal biometric system based on fusion at the matching score level. The 2D palmprint features were extracted using the Bank-Binarized Statistical Image Features (B-BSIF) method, and the 3D palmprint features were extracted using the self-quotient image algorithm and Gabor wavelets. In the literature, few deep learning methods for 3D palmprint recognition have been proposed. For instance, in our previous work [7], we developed a new personal recognition system for 3D palmprints using PCANet to extract features and an SVM for classification. Samai et al. [22] presented a novel scheme to recognize people using their 2D and 3D palmprints: the scores of the 2D and 3D palmprints were integrated at the matching score level, and the Discrete Cosine Transform Net (DCT-Net), Mean Curvature (MC), and Gauss Curvature (GC) were used to extract features from the 2D and 3D palmprints.

In this work, a new individual recognition system using 3D palmprints is proposed in order to overcome the weaknesses of 2D palmprint recognition systems. Evolving systems usually operate in dynamically changing environments, and their aim is to achieve better generalization capability. Biometric systems by definition operate in changing environments, e.g., the ambient lighting may change between data capture and authentication. There are many works on adaptive biometric systems. For example, the authors in [27] presented a palmprint recognition system that uses an adaptive mechanism to fuse match scores for better performance, while the authors in [11] employed adaptive thresholding to handle changing data. Both studies [27] and [11] are vulnerable to changes in input sample size. To address some of the issues in [27] and [11], we employ the GIST descriptor, which summarizes the gradient information (scales and orientations) of the input sample and suppresses the variations caused by evolving/changing environments or sample sizes. To reconstruct illumination-invariant 3D palmprint images, the TT normalization technique is applied to the 3D palmprint. The GIST descriptor is then used to extract discriminative features from the TT images. Next, the PCA + LDA method is employed to reduce the feature dimension, and finally the cosine Mahalanobis distance is utilized as the matching strategy. To evaluate the performance of the introduced algorithm, extensive experiments have been performed on a 3D palmprint database with 8000 range images from 200 individuals. The experimental results demonstrate that the proposed system achieves better results than prior methods. The rest of this paper is structured as follows. An overview of the proposed framework is given in Section 2; in particular, the TT normalization technique, the GIST descriptor, and the PCA + LDA technique are described there. In Section 3, the experimental results are reported with a detailed discussion. The conclusion and future work are presented in Section 4.

2 Proposed method

An automatic 3D palmprint recognition system is a processing chain split into two stages: (i) enrollment and (ii) identification/authentication, as shown in Fig. 1. During enrollment, the biometric trait (here, the palmprint) of the user is captured and processed using the Tan and Triggs (TT) technique. The features are extracted using the GIST descriptor, and then dimensionality reduction with the PCA + LDA technique is applied. The outputs are saved in a database as templates and reference models. During identification or authentication, the same biometric trait of the user is captured and processed with the TT technique, and the features are extracted. This time, the extracted features are compared with those stored in the database using the Mahalanobis distance to calculate their correspondence. If the distance is less than a threshold, the user is accepted as genuine; otherwise, the user is rejected as an impostor.

Fig. 1 Architecture of a 3D palmprint identification or authentication system

The main ideas brought in this paper are as follows:

  • The Tan and Triggs (TT) normalization technique is used to eliminate not only the low frequencies carrying the undesirable effects of shadows, but also the high frequencies carrying aliasing and noise in 3D palmprints. To the best of our knowledge, the potential of the TT algorithm has never been studied or tested in 3D palmprint biometrics.

  • The GIST descriptor is applied to the resulting TT image to extract discriminative features of the 3D palmprint image, which are then used for identification/authentication and classification. A foremost advantage of GIST-based feature extraction is that it captures discriminative information by extracting an overall spatial envelope corresponding to the different frequencies and orientations contained in the image; hence the comprehensiveness of this descriptor. Because it gives an overall description of the essential information in the image, the fine details of an image can be discarded and only its main frequencies and orientations retained in order to classify it.

2.1 Region of interest extraction

In this section, the region of interest (ROI) extraction process for 3D palmprints is described. Li et al. [16] designed a 3D palmprint acquisition device based on structured-light technology, with which the 3D and 2D palmprint images are simultaneously recorded from the palm. The ROI extraction process is shown in Fig. 2. First, a Gaussian smoothing operation is applied to the original image, and the smoothed image is binarized with a threshold T (see Fig. 2a, b); this threshold is calculated automatically using Otsu's method [20] and is used to convert the gray-level image into a binary image. Second, the boundaries of the binary image are easily extracted by a boundary tracking algorithm, as shown in Fig. 2c. Third, the boundary image is processed to locate the key points P1 and P2 for selecting the 2D ROI template. Finally, the ROI image is extracted, where the rectangle indicates the ROI region (see Fig. 2d). The extracted 2D ROI is illustrated in Fig. 2e. Figure 2f shows the 3D palmprint image, and Fig. 2g shows the 3D ROI obtained by grouping the cloud points corresponding to the pixels of the 2D ROI, as described in [29].
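As an illustrative sketch of the first steps of this pipeline (Gaussian smoothing, Otsu binarization, and boundary tracking), the fragment below uses OpenCV; the smoothing kernel size is an assumption of ours, and the localization of the key points P1 and P2 and the ROI cropping are device-specific, so they are only noted in comments.

```python
import cv2
import numpy as np

def extract_palm_boundary(gray: np.ndarray):
    """Gaussian smoothing, Otsu binarization and boundary tracking."""
    # Step 1: Gaussian smoothing of the original image (Fig. 2a)
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.0)
    # Step 2: automatic threshold T via Otsu's method, then binarization (Fig. 2b)
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 3: boundary tracking; the largest external contour is the palm outline (Fig. 2c)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea)
    # Locating the key points P1/P2 and cropping the ROI (Fig. 2d-e) are
    # device-specific and therefore omitted from this sketch.
    return binary, boundary
```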

Fig. 2 The steps of ROI extraction for a 3D palmprint image

2.2 Tan and Triggs normalization technique

The TT technique was introduced by Tan and Triggs in the field of face recognition [24]. Here, the TT technique is used to increase the contrast of the dark regions in the depth image of a 3D palmprint while attenuating that of the luminous regions, to suppress the noise and illumination gradients, and to normalize the contrast. Figure 3 shows the three steps of the technique applied to an example depth image of a 3D palmprint.

Fig. 3 The three steps of the TT technique, from left to right: input depth image; image after gamma correction; image after DoG filtering; image after contrast equalization

2.2.1 Gamma correction

The goal is to increase the contrast of dark regions, while attenuating that of bright regions. The first step is described by the mathematical relation:

$$ I^{\prime}(i,j)=\begin{cases} I(i,j)^{\tau} & \text{for } \tau>0\\[2pt] \log\bigl(I(i,j)\bigr) & \text{for } \tau=0 \end{cases} $$
(1)

where I(i, j) represents the input image, I′(i, j) denotes the image resulting from this first preprocessing step, and τ stands for the gamma value.
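A minimal sketch of this gamma-correction step, assuming the depth image has been normalized to the [0, 1] range (the epsilon guard for the logarithm is our addition):

```python
import numpy as np

def gamma_correction(img: np.ndarray, tau: float = 1.1) -> np.ndarray:
    """Gamma correction of Eq. (1); img is assumed to lie in [0, 1]."""
    img = img.astype(np.float64)
    if tau > 0:
        return img ** tau          # Eq. (1), first case: power-law mapping
    return np.log(img + 1e-8)      # tau == 0: log mapping (epsilon avoids log 0)
```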

Figure 4 shows the depth image corrected with different gamma values. It is clear from this figure that gamma = 0.1 gives bright images and gamma = 3 gives dark images, while gamma = 1.1 yields the best representation of the 3D palmprint depth images.

Fig. 4 (a) Example of a depth image; (b)-(d) depth images processed with gamma correction: (b) gamma = 0.1, (c) gamma = 1.1, (d) gamma = 3

2.2.2 Difference of Gaussians

In the difference of Gaussians (DoG) step, a low-pass component suppresses the aliasing and noise present in the image without erasing too much of the underlying recognition signal, while a high-pass component removes the undesirable low-frequency effects of shadows. Both components are implemented using the Gaussian filter Gσ(i, j). In this work, the parameters of the DoG filter have been empirically selected as σ1 = 1 and σ2 = 2.

$$ {G}_{\sigma}\left(i,j\right)=\frac{1}{2\pi {\sigma}^2}{e}^{-\left({i}^2+{j}^2\right)/2{\sigma}^2} $$
(2)

The DoG is obtained as the difference of two Gaussians with distinct but close values of σ (σ1 and σ2, with σ2 > σ1):

$$ g_1(i,j)=G_{\sigma_1}(i,j)\ast I^{\prime}(i,j) $$
(3)
$$ g_2(i,j)=G_{\sigma_2}(i,j)\ast I^{\prime}(i,j) $$
(4)

The DoG-filtered image I′′(i, j) is then computed as:

$$ I^{\prime\prime}(i,j)=g_1(i,j)-g_2(i,j)=\left(G_{\sigma_1}(i,j)-G_{\sigma_2}(i,j)\right)\ast I^{\prime}(i,j) $$
(5)
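A short sketch of this band-pass filtering step, with σ1 = 1 and σ2 = 2 as selected above; scipy's gaussian_filter stands in for the convolution with Gσ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(img: np.ndarray, sigma1: float = 1.0,
               sigma2: float = 2.0) -> np.ndarray:
    g1 = gaussian_filter(img, sigma1)  # Eq. (3): G_sigma1 * I'
    g2 = gaussian_filter(img, sigma2)  # Eq. (4): G_sigma2 * I'
    return g1 - g2                     # Eq. (5): band-pass DoG response
```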

2.2.3 Contrast equalization

The last step of the Tan and Triggs normalization is contrast equalization, which rescales the overall intensity of the image after it has been photometrically normalized. This is carried out in the following three steps:

$$ \mathrm{I}(i,j)\leftarrow \frac{\mathrm{I}(i,j)}{\left(\operatorname{mean}\left(\left|\mathrm{I}(i^{\prime},j^{\prime})\right|^{\alpha}\right)\right)^{1/\alpha}} $$
(6)
$$ \mathrm{I}(i,j)\leftarrow \frac{\mathrm{I}(i,j)}{\left(\operatorname{mean}\left(\min\left(\rho,\left|\mathrm{I}(i^{\prime},j^{\prime})\right|\right)^{\alpha}\right)\right)^{1/\alpha}} $$
(7)
$$ \mathrm{I}(i,j)\leftarrow \rho\,\tanh\left(\frac{\mathrm{I}(i,j)}{\rho}\right) $$
(8)

In this work, the parameters of the contrast equalization step were empirically selected as ρ = 1 and α = 2. Figure 5 shows samples of the 3D palmprint depth images together with their processed images obtained using the Tan and Triggs normalization technique (TT images).
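The three equalization steps map directly onto array operations; a minimal sketch with ρ = 1 and α = 2 as chosen above:

```python
import numpy as np

def contrast_equalize(img: np.ndarray, rho: float = 1.0,
                      alpha: float = 2.0) -> np.ndarray:
    # Eq. (6): normalize by the alpha-mean of absolute intensities
    img = img / np.mean(np.abs(img) ** alpha) ** (1.0 / alpha)
    # Eq. (7): repeat, with large values clipped at rho before averaging
    img = img / np.mean(np.minimum(rho, np.abs(img)) ** alpha) ** (1.0 / alpha)
    # Eq. (8): tanh squashing limits the output to (-rho, rho)
    return rho * np.tanh(img / rho)
```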

Fig. 5 Samples of 3D palmprint depth images with their corresponding TT images

2.3 GIST feature extraction

The GIST descriptor has been demonstrated to be an excellent solution for scene categorization problems [19]. It attempts to characterize scenes using spectral and coarsely localized information, and its benefits are compactness and speed of computation. In this work, we used GIST descriptors for feature extraction with 8 orientations at 5 different scales and a spatial grid of resolution 6 × 6. The feature vector of each TT image was computed using the GIST descriptor as follows: first, each TT image was convolved with a bank of Gabor filters with 8 orientations and 5 scales. In the spatial domain, a Gabor filter is the product of a complex sinusoid and a Gaussian envelope; a 2D Gabor filter is defined by the function Hμ,ν as follows:

$$ H_{\mu,\nu}(x,y)=\frac{f_{\mu}^{2}}{\pi n\lambda}\exp\left[-\left(\frac{f_{\mu}^{2}}{n^{2}}\right)x_p^{2}-\left(\frac{f_{\mu}^{2}}{\lambda^{2}}\right)y_p^{2}\right]\exp\left(j2\pi f_{\mu}x_p\right) $$
(9)

with xp = x cos(θν) + y sin(θν) and yp = −x sin(θν) + y cos(θν). The parameters of the Gabor filter are fμ = fmax/2^(μ/2) and θν = νπ/8, where fμ and fmax denote the center and the maximal frequency, respectively, and θν is the orientation. In the experiments, the parameters of the 2D Gabor filters were empirically selected as the maximal frequency fmax = 0.25 and n = λ = √2, where n and λ represent the size of the Gaussian envelope along the x-axis and y-axis, respectively.

Then, the 40 responses (filtered images) are windowed with a 6 × 6 grid (36 regions), and the average value within each region is calculated. Finally, the 36 averaged values of all 40 filtered images (8 × 5) are concatenated, resulting in a feature vector of length 36 × 40 = 1440. The steps of feature extraction from a sample image using the GIST descriptor are shown in Fig. 6. GIST descriptors for different scales, orientations, and regions are shown in Fig. 7.
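Under the stated parameters (5 scales, 8 orientations, 6 × 6 grid, fmax = 0.25, n = λ = √2), the whole GIST pipeline can be sketched as follows; the kernel size and the FFT-based convolution are implementation choices of ours, not specifications from the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(mu: int, nu: int, size: int = 32,
                 fmax: float = 0.25, n: float = np.sqrt(2),
                 lam: float = np.sqrt(2)) -> np.ndarray:
    """2D Gabor filter of Eq. (9) for scale mu and orientation nu."""
    f = fmax / 2 ** (mu / 2.0)                  # center frequency f_mu
    theta = nu * np.pi / 8.0                    # orientation theta_nu
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xp = x * np.cos(theta) + y * np.sin(theta)
    yp = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(f ** 2 / n ** 2) * xp ** 2
                      - (f ** 2 / lam ** 2) * yp ** 2)
    return (f ** 2 / (np.pi * n * lam)) * envelope * np.exp(2j * np.pi * f * xp)

def gist_features(img: np.ndarray, scales: int = 5,
                  orientations: int = 8, grid: int = 6) -> np.ndarray:
    """Average Gabor responses over a grid; length = scales*orients*grid^2."""
    h, w = img.shape
    feats = []
    for mu in range(scales):
        for nu in range(orientations):
            resp = np.abs(fftconvolve(img, gabor_kernel(mu, nu), mode='same'))
            for r in range(grid):                # window the response with a grid
                for c in range(grid):
                    cell = resp[r * h // grid:(r + 1) * h // grid,
                                c * w // grid:(c + 1) * w // grid]
                    feats.append(cell.mean())    # average value of the region
    return np.asarray(feats)                     # 5 * 8 * 36 = 1440 values
```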

Fig. 6 Feature extraction using the GIST descriptor: (a) the input TT palmprint image; (b) Gabor filter bank with 8 orientations at 5 different scales; (c) the filtered images for different scales and orientations; (d) windowing of the filtered images and computation of the average value of each region; (e) concatenation of the averaged values of all filtered images to form the GIST feature vector

Fig. 7 GIST descriptor for different scales, orientations, and regions: (a) 64 regions, 8 orientations, 6 scales (feature vector length 3072 per image); (b) 64 regions, 2 orientations, 2 scales (length 256); (c) 64 regions, 1 orientation, 1 scale (length 64)

2.4 Dimensionality reduction

High-dimensional feature vectors are sometimes difficult to process or classify. To address this issue, dimensionality reduction techniques are used; they reduce computing time, improve prediction performance, and help us better understand the data. One of the most common techniques is PCA + LDA, which is fast, simple, and popular. PCA is first used to project the images into a lower-dimensional space [25]. The goal of LDA is then to maximize inter-class distances while minimizing intra-class distances, which amounts to finding the transformation matrix W that maximizes the criterion [4]:

$$ \mathrm{T}(\mathrm{W})=\mathrm{W}_{\mathrm{opt}}=\underset{\mathrm{W}}{\arg\max}\,\frac{\left|\mathrm{W}^{\mathrm{T}}\mathrm{S}_{\mathrm{B}}\mathrm{W}\right|}{\left|\mathrm{W}^{\mathrm{T}}\mathrm{S}_{\mathrm{W}}\mathrm{W}\right|}=\left[\mathrm{W}_1\,\mathrm{W}_2\cdots\mathrm{W}_{\mathrm{d}}\right] $$
(10)

T(W) is the Fisher discriminant criterion to be maximized, and W is built by concatenating the d leading eigenvectors. Note that W is obtained by solving the following system:

$$ {\mathrm{S}}_{\mathrm{W}}^{-1}{\mathrm{S}}_{\mathrm{B}}{\mathrm{W}}_{\mathrm{j}}={\mathrm{W}}_{\mathrm{j}}{\uplambda}_{\mathrm{j}} $$
(11)

where j = 1, 2, …, d. By applying the PCA + LDA method to the selected data, the 399 most important features were retained for the 3D palmprint database.
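A hedged sketch of this projection chain using scikit-learn; the number of PCA components retained before LDA (here 600) is an illustrative assumption of ours, as the paper only reports the final 399 LDA features:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_pca_lda(X_train: np.ndarray, y_train: np.ndarray, n_pca: int = 600):
    """PCA followed by LDA; LDA keeps at most C - 1 = 399 axes for 400 classes."""
    # PCA first maps to a lower-dimensional space so that the within-class
    # scatter matrix S_W of Eq. (11) becomes invertible
    pca = PCA(n_components=n_pca).fit(X_train)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)
    return pca, lda

def project(pca, lda, X: np.ndarray) -> np.ndarray:
    return lda.transform(pca.transform(X))
```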

2.5 The matching module

In this method, the cosine Mahalanobis distance is used as the classifier for all test images. Given two feature vectors Vi and Vj representing the query and a database image, respectively, the distance between Vi and Vj is obtained by the following relation:

$$ {d}_{Ma}\left({V}_i,{V}_j\right)={\left({V}_i-{V}_j\right)}^T{C}^{-1}\left({V}_i-{V}_j\right) $$
(12)

In the above equation, C represents the covariance matrix.
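A minimal sketch of this matching computation; estimating C from the gallery features and using a pseudo-inverse are our assumptions:

```python
import numpy as np

def inv_covariance(gallery: np.ndarray) -> np.ndarray:
    """Inverse covariance of the gallery features (n_samples x n_features)."""
    C = np.cov(gallery, rowvar=False)
    return np.linalg.pinv(C)       # pseudo-inverse guards against a singular C

def mahalanobis_distance(v_i: np.ndarray, v_j: np.ndarray,
                         C_inv: np.ndarray) -> float:
    """Eq. (12): (v_i - v_j)^T C^{-1} (v_i - v_j)."""
    d = v_i - v_j
    return float(d @ C_inv @ d)
```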

3 Experiments

Here, we provide the experimental analysis of the proposed 3D palmprint recognition framework.

3.1 Database

To evaluate the performance of the proposed system described in the previous section, the publicly available PolyU 3D palmprint database, collected by The Hong Kong Polytechnic University [21], has been used. It contains 8000 samples acquired from 400 different palms (classes) of 200 individuals (136 males and 64 females), with 20 samples per palm. The 3D palmprint traits were recorded in two sessions with an average interval of one month between them. In each session, 10 samples were collected from both the left and right palms of each volunteer.

3.2 Experimental protocol

In all experiments, we took the 3D palmprint samples of the first session as the gallery set and the images of the second session as the probe set. Both 3D palmprint identification and verification are evaluated in this work. For the identification experiments, we report results in the form of the recognition rate, i.e., the Rank-1 recognition rate, which is calculated as:

$$ \text{Rank-1}=\frac{N_i}{N}\times 100\ (\%) $$
(13)

where Ni stands for the number of probe images correctly assigned to the right identity and N refers to the total number of probe images. For the verification experiments, we report results in the form of the Equal Error Rate (EER), i.e., the operating point at which the False Accept Rate (FAR) equals the False Reject Rate (FRR). The CMC (Cumulative Match Characteristic) curve is commonly used for identification (one-to-many search), while the Receiver Operating Characteristic (ROC) curve, a plot of the Genuine Accept Rate (GAR) against the FAR over all achievable thresholds, is used for verification.
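The two reported measures can be computed as below; this sketch assumes a precomputed probe-gallery distance matrix and pooled genuine/impostor score arrays:

```python
import numpy as np

def rank1_rate(dist: np.ndarray, probe_ids: np.ndarray,
               gallery_ids: np.ndarray) -> float:
    """Eq. (13): percentage of probes whose nearest gallery entry is correct."""
    nearest = gallery_ids[np.argmin(dist, axis=1)]
    return 100.0 * np.mean(nearest == probe_ids)

def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: distance threshold where FAR (impostor accepted) equals FRR."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([np.mean(impostor <= t) for t in thresholds])
    frr = np.array([np.mean(genuine > t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 100.0 * (far[i] + frr[i]) / 2.0
```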

3.3 Best GIST descriptor and TT normalization parameters

In this experiment, our goal is to determine the best GIST descriptor and TT normalization parameters for 3D palmprint recognition. Four parameters of the proposed system need to be determined: the gamma value τ of the TT normalization technique, and the number of scales, the number of orientations, and the grid size of the GIST descriptor. We first vary the grid size while keeping the other parameters fixed as follows: number of scales = 5, number of orientations = 8, and gamma τ = 1.2. Table 1 presents the performance of the proposed 3D palmprint recognition system when the grid size varies from 4 × 4 to 10 × 10.

Table 1 Performance for different grid sizes

From Table 1, it is easy to see that the best performance was attained with a grid size of 6 × 6: the lowest EER of 0.00% and a verification rate (VR) at 1% FAR of 100% were obtained in verification mode, and a Rank-1 rate of 99.98% in identification mode. Next, the gamma value τ was varied while keeping the GIST descriptor parameters fixed as follows: number of scales = 5, number of orientations = 8, and a 6 × 6 grid. Figure 8 shows the performance of the proposed 3D palmprint recognition system with gamma values ranging from 0.1 to 1.5 with a step of 0.1.

Fig. 8 Performance for different values of gamma: (a) identification mode, (b) verification mode

We can observe in Fig. 8 that the best performance was obtained with a gamma value of 1.1: the lowest EER attained is 0.00% and the verification rate (VR) at 1% FAR reached 100% in verification mode, while the Rank-1 rate reached 100.00% in identification mode. Finally, we fixed the gamma value to 1.1 and the grid size to 6 × 6, and varied the number of scales and the number of orientations. The results are shown in Fig. 9a in the form of CMC (Cumulative Match Characteristic) curves; we can notice that with 5 scales and 8 orientations, the system achieves the best performance. The CMC curves in Fig. 9b show the recognition rate of the proposed system for different sigma values (σ1, σ2). First, we varied σ1 from 0.1 to 1 while fixing σ2 = 2; the results demonstrate that σ1 has only a slight effect on the recognition rate, with the best result obtained for σ1 = 1.

Fig. 9 CMC curves: (a) for different numbers of scales and orientations, (b) for different values of σ1 and σ2

Second, we fixed σ1 = 1 and varied σ2 over the values 1.2, 1.7, and 2. The best result was obtained with σ2 = 2. Therefore, we chose (σ1, σ2) = (1, 2) in our work.

Based on the results of this experiment, we decided to use the GIST descriptor with 5 scales, 8 orientations, and a 6 × 6 grid, together with the TT normalization technique with gamma = 1.1, σ1 = 1, and σ2 = 2, in the following experiments.

3.4 Analysis of feature extraction techniques with and without TT normalization

In this set of experiments, we investigated several feature extraction algorithms on 3D palmprint depth images and assessed their efficacy with and without the TT normalization scheme. The results are reported in Table 2. As can be seen in Table 2, the TT + GIST + PCA + LDA method performs significantly better than the other six methods, i.e., PCA + LDA, Gabor + PCA + LDA, GIST + PCA + LDA, GCI + GIST + PCA + LDA, MCI + GIST + PCA + LDA, and TT + Gabor + PCA + LDA. Here, the MCI and GCI images have been enhanced using a Butterworth low-pass filter [9] of order 4 with a cutoff frequency of 20. The system using the TT + GIST + PCA + LDA method attained an EER of 0.00% and a VR at 1% FAR of 100.00% in verification mode, and a Rank-1 rate of 100.00% in identification mode. We also compared the Rank-1 rate (identification mode) and the EER (verification mode) of the proposed method with three classifiers: the Support Vector Machine with the 'One-Against-One' strategy (OAO-SVM) [10], the K-nearest neighbor (KNN), and the Random Forest [5]. The results of this comparison are summarized in Table 3. We can notice from Table 3 that the KNN classifier with the Mahalanobis distance achieved a lower EER and a higher Rank-1 rate, and is therefore more efficient than the SVM and Random Forest classifiers. The OAO-SVM using the radial basis function (RBF) kernel gives much better results than the Gaussian kernel, the linear function, and the polynomial function; thus, only the performances (Rank-1/EER) of the RBF kernel are reported in Table 3. Also, the result of the Random Forest with a number of trees (NT) equal to 1000 is reported in Table 3.

Table 2 Rank-1/EER obtained by the proposed method for both modes (identification and verification) on the 3D palmprint database
Table 3 Comparison of the proposed personal recognition method with three classifiers

3.5 Comparison with existing state-of-the-art 3D palmprint recognition techniques

In this section, Table 4 provides a performance comparison between the proposed system and prior works in the literature. It is worth noting that the proposed system outperforms the other existing frameworks. In addition, we also evaluated the running speed of the proposed algorithm and compared it with the computational cost of the baseline algorithms. To this end, the proposed method was implemented in MATLAB R2018a under the Windows 7 operating system, on a PC with a Core i3-2375M CPU and 4 GB of RAM. Considering a single test sample, the runtime of the identification operation comprises the time needed for the feature extraction step and the time required for the matching step [23]. The computational time for the one-sample identification scenario is presented in Table 5. As shown in Table 5, the proposed framework runs faster than the other methods, such as MCI [29], the local correlation (LC)-based method [30], the MCI + GCI + ST method [28], the joint line and orientation features (JLOF) method [16], the PCA + TPTSR method [8], and the method in [6], with the exception of the block-wise features and collaborative representation (BWFCR) method [31]. Specifically, the computational cost of the proposed framework is only 2.7 s, whereas the MCI + GCI + ST method proposed in [28] requires 76.53 s.

Table 4 Comparison of the proposed personal recognition method with existing approaches
Table 5 Computation time of the proposed scheme and existing approaches

4 Conclusion

In this paper, a novel method for 3D palmprint recognition has been proposed. In particular, the TT normalization technique, the GIST feature descriptor with PCA + LDA, and the cosine Mahalanobis distance for matching were employed to obtain an efficient biometric recognition system. The experimental results on a publicly available dataset with 400 classes demonstrate that the proposed 3D palmprint recognition system achieves better performance than prior frameworks. Moreover, the experiments conducted on the PolyU 3D palmprint database demonstrate that the GIST descriptor is a powerful texture descriptor that is also useful for the 3D palmprint biometric trait. Future work will include integrating other modalities (e.g., the face) with the 3D palmprint modality in order to build security systems with higher accuracy.