
1 Introduction

The 3D fingerprint identification system is an emerging biometric authentication method with strong resistance to spoofing attacks. Beyond the 2D image domain, a finger object built on a point cloud and a curved surface can be displayed in 3D space, which provides a stronger stereo sense [5]. Unlike traditional 2D fingerprint identification systems, 3D fingerprint identification systems hold the following advantages.

  • Contactless Fingerprint Image Acquisition: Traditional 2D fingerprint images have to be captured by placing, pressing, or rolling fingers against optical scanning devices or finger trace cards. Such operations usually result in distorted, incomplete, and low-resolution fingerprint images. In contrast, 3D fingerprint imaging systems provide a contactless scanning setting, which prevents the finger from touching the imaging device. The resulting fingerprint images are non-distorted, complete, multi-posed, and of guaranteed resolution. The 2D multi-posed fingerprint images captured by a 3D fingerprint imaging system are shown in Fig. 1.

  • Man-made Finger Model Prevention: Traditional 2D fingerprint identification systems have been challenged by man-made finger models in recent years. That is, impostor finger models, made by printing finger ridge-valley patterns onto rubber, plastic, and other soft materials, are intended to fool 2D fingerprint identification systems. Consequently, 2D fingerprint identification systems are unable to distinguish whether the fingers under investigation are genuine human fingers. In contrast, 3D fingerprint identification systems can easily validate whether the investigated fingers are artificially made.

Fig. 1. 2D multi-posed fingerprint images (left thumb): (a) left lateral view; (b) frontal view; and (c) right lateral view.

The 3D fingerprint construction process plays an essential role in the 3D fingerprint identification system. Within the system, the target finger is captured by multiple built-in cameras. Since these built-in cameras are posed differently, the resulting pictures are differently posed as well. To generate the 3D finger model, the 3D construction process works on the obtained multi-posed 2D images [2, 4]. To be explicit, as an initial step, the correspondence between the differently posed images should be discovered. In most cases, correspondence discovery relies on feature points obtained by a specific feature point detection method such as SIFT. Afterward, based on the discovered correspondence in the 2D domain, the spatial coordinates of the detected feature points in 3D space are calculated by triangulation. Furthermore, the 2D correspondence of other points can be inferred from the previously established correspondence, and their 3D coordinates are likewise calculated via triangulation. In the end, the point cloud is generated and further used to form the curved surface of the 3D finger model [1, 3].
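As an illustration of the triangulation step, the sketch below reconstructs 3D points from two matched 2D point sets with OpenCV. The projection matrices and pixel coordinates are placeholder values, and cv2.triangulatePoints stands in for whichever triangulation routine the actual imaging system uses.

```python
# A minimal two-view triangulation sketch; all numeric inputs are placeholders.
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices for two of the built-in cameras.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])        # reference camera
R, _ = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))      # second camera, rotated about y
t = np.array([[-1.0], [0.0], [0.0]])                 # second camera, shifted along x
P2 = np.hstack([R, t])

# Matched points as 2xN arrays (row 0: x coordinates, row 1: y coordinates).
pts1 = np.array([[100.0, 120.0], [80.0, 95.0]])      # two points in view 1
pts2 = np.array([[ 90.0, 111.0], [82.0, 97.0]])      # the same points in view 2

# Triangulate to homogeneous 4xN coordinates, then dehomogenize to Nx3.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T                             # rows are 3D point-cloud points
print(X)
```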

SIFT is one of the most popular methods for discovering feature-level correspondence between two differently posed images. After the SIFT point detection process, image points whose multi-scale Gaussian filtering responses are salient are labeled as feature points. Each feature point holds a high-dimensional feature vector, which summarizes the local gradient information in terms of gradient magnitude and direction. Two points whose feature vectors have high similarity are matched between the different images, while two points with low feature vector similarity are not. The correspondence between the images is then discovered via the matched feature points [5]. For fingerprint images, however, the feature vectors are generally similar, as the local regions centered at the feature points have very similar ridge-valley structures (their summarized local gradient information is also very similar). Such local structural similarity significantly reduces the discriminative power of feature vector-based matching. Therefore, massive mismatches occur between the differently posed fingerprint images, and the SIFT-based feature point matching approach fails. Figure 2 shows an example in which SIFT-based feature matching is applied to the left and frontal images in Fig. 1.
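For reference, this baseline can be reproduced in a few lines with OpenCV; the file names below are placeholders for two differently posed views of the same finger. On fingerprint images, most candidate matches fail the ratio test or pass it wrongly, which is exactly the failure mode shown in Fig. 2.

```python
# A minimal sketch of SIFT matching with Lowe's ratio test (OpenCV).
import cv2

img1 = cv2.imread("left_view.png", cv2.IMREAD_GRAYSCALE)     # placeholder paths
img2 = cv2.imread("frontal_view.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)    # keypoints + 128-d descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Keep a match only if the best candidate is clearly better than the runner-up;
# near-identical ridge-valley patches make many fingerprint matches fail this.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn = matcher.knnMatch(des1, des2, k=2)
good = [pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
print(f"{len(good)} tentative matches from {len(kp1)} / {len(kp2)} keypoints")
```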

Fig. 2. An example: SIFT-based feature matching fails to find the correspondence between the left and frontal views in Fig. 1.

To overcome this deficiency of SIFT, we propose a direct feature point correspondence discovery algorithm based on evolutionary computing. As an initial step, the minutiae in the fingerprint images are utilized instead of SIFT feature points. The proposed technique then treats the feature point matching problem as an optimization problem: to maximize the number of matched feature points between two images, differential evolution, a fast-searching evolutionary computing algorithm, is adopted. The rest of this paper is organized as follows: in Sect. 2, the differential evolution-based minutiae matching method is introduced; in Sect. 3, experiments on matching differently posed 2D minutiae sets and on 3D minutiae point cloud construction are conducted; in Sect. 4, the conclusion and future research directions are given.

2 Proposed Method

The minutiae matching process can be regarded as a search for as many mated minutiae points as possible across differently posed fingerprint images. To actualize this process, we utilize the differential evolution algorithm [6, 7]. A summary of the proposed differential evolution-based minutiae matching procedure is provided as follows.

  • Step 1: Input two differently posed 2D fingerprint images.

  • Step 2: Employ the VeriFinger SDK to extract minutiae points from the input images. The outputs obtained from the VeriFinger SDK are the minutiae points’ coordinates in the 2D image domain.

  • Step 3: Generate the fundamental matrix and initialize its elements. The elements of the fundamental matrix are treated as the parameters to be optimized by the differential evolution algorithm. The generated fundamental matrix must be a rank-two matrix.

  • Step 4: Define the objective function for optimization. The objective function must represent the underlying relationship between the fundamental matrix’s elements and the number of matched minutiae points between the two images.

  • Step 5: Define the epipolar constraint and the spatial constraint for point-to-point matching.

  • Step 6: Iteratively run the differential evolution algorithm until the termination conditions are satisfied.

  • Step 7: Export the optimized fundamental matrix’s parameters. Based on these optimized parameters, the minutiae point correspondence can be obtained (see the sketch after this list).
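A minimal end-to-end sketch of Steps 3 to 7, built on SciPy's differential_evolution, is given below. The paper does not name a particular implementation, so the library choice, the parameter bounds, the tolerance, and the epipolar-only match count (the spatial constraints of Sect. 2.4 are omitted here) are all assumptions; mA and mB are the Nx2 and Mx2 minutiae coordinate arrays produced in Step 2.

```python
# A simplified sketch of the DE-based minutiae matcher; bounds, tol and
# maxiter are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.optimize import differential_evolution

def params_to_F(p):
    """Step 3: build a rank-2 fundamental matrix F = A (3x2) @ B (2x3)."""
    p = np.asarray(p)
    F = p[:6].reshape(3, 2) @ p[6:].reshape(2, 3)
    n = np.linalg.norm(F)
    return F / n if n > 0 else F     # normalize away the trivial all-zero F

def count_matches(F, mA, mB, tol=1e-2):
    """Steps 4-5 (epipolar part only): count pairs with a small residual
    |[P_B,1] F [P_A,1]^T|; the full method also applies Sect. 2.4's checks."""
    hA = np.hstack([mA, np.ones((len(mA), 1))])   # Nx3 homogeneous, view A
    hB = np.hstack([mB, np.ones((len(mB), 1))])   # Mx3 homogeneous, view B
    res = np.abs(hB @ F @ hA.T)                   # MxN epipolar residuals
    return int((res.min(axis=0) < tol).sum())     # one match per view-A point

def fitness(p, mA, mB):
    # DE minimizes, so negate the matched-pair count (Step 4).
    return -count_matches(params_to_F(p), mA, mB)

def match_minutiae(mA, mB, seed=None):
    bounds = [(-1.0, 1.0)] * 12                   # the twelve entries of A and B
    result = differential_evolution(fitness, bounds, args=(mA, mB),
                                    maxiter=300, seed=seed)       # Step 6
    return params_to_F(result.x), -result.fun     # Step 7: F and match count
```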

To be explicit, the technical details involved in each individual step are described as follows.

2.1 Minutiae Extraction

Minutiae extraction is the first step for the subsequent procedures. To ensure the reliability of the extracted minutiae points, the VeriFinger SDK, a widely used commercial software development kit for biometric applications, is employed in this paper. To the best of our knowledge, the VeriFinger SDK is the most reliable publicly available minutiae extractor; for the multi-posed fingerprint images captured by the 3D imaging system, we are not aware of any other software for fingerprint minutiae detection and extraction that is competitive with it. Figure 3 shows the extracted minutiae for the left and frontal views in Fig. 1.

Fig. 3. The extracted minutiae obtained by the VeriFinger SDK for the left and frontal views in Fig. 1: (a) left view’s minutiae points; and (b) frontal view’s minutiae points.

2.2 Fundamental Matrix

The fundamental matrix is a core concept in computer vision; it describes the projective geometric relationship between associated 2D views. It is a low-rank matrix of dimension \(3 \times 3\) whose rank must equal two. This strict equality imposed on the matrix rank ensures that the epipolar lines in one view converge to a single point (called the epipole). To guarantee that the fundamental matrix’s rank equals two, the following matrix decomposition strategy is applied.

$$\begin{aligned} F = A \cdot B = \begin{bmatrix} A_1 &{} A_2 \\ A_3 &{} A_4 \\ A_5 &{} A_6 \end{bmatrix} \cdot \begin{bmatrix} B_1 &{} B_2 &{} B_3 \\ B_4 &{} B_5 &{} B_6 \end{bmatrix} \end{aligned}$$
(1)

where the elements of matrices A (dimension \(3 \times 2\)) and B (dimension \(2 \times 3\)) can be assigned arbitrary numeric values (all zeros excluded) and \(rank\left( F \right) = 2\). Due to the multiplication of A and B, the number of parameters of F that need to be optimized is twelve.
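As a quick numerical illustration of this parameterization: for any choice of A and B, rank(F) = rank(AB) is at most two, and for generic nonzero values it is exactly two.

```python
# Rank check for the decomposition in Eq. (1); the random values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))                  # the six parameters A_1..A_6
B = rng.standard_normal((2, 3))                  # the six parameters B_1..B_6
F = A @ B
print(np.linalg.matrix_rank(F))                  # 2, since rank(AB) <= min(2, 2)
```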

2.3 Objective Function

The objective function pushes the differential evolution algorithm toward the optimal parameters and solutions. By evaluating each individual’s fitness within the whole population, the algorithm preserves the individuals with higher fitness and weeds out those with lower fitness. To drive the differential evolution algorithm toward the optimum, an objective function is created that maps the twelve parameters of the fundamental matrix F onto the number of matched minutiae points. In this way, an increase in the number of matched minutiae points provides positive feedback to the evolution of the parameter values. When the maximal number of matched minutiae is achieved, the differential evolution algorithm has converged to the optimal solution for the target parameters.

2.4 Epipolar Constraint and Spatial Constraint

Both the epipolar and spatial constraints must be considered simultaneously to avoid invalid solutions for the target parameters during the differential evolution search. Figure 4 illustrates how the epipolar constraint works.

Fig. 4. An example showing how the epipolar constraint works.

In Fig. 4, the images are captured from the left and frontal views respectively. A point located at the same position on the chessboard appears in the two views; we denote it as \(P_A\) in the left view image and \(P_B\) in the frontal view image. Under the projective transformation of the fundamental matrix, \(P_A\) is projected onto an epipolar line \(L_A\) in the frontal view image and \(P_B\) is mapped to an epipolar line \(L_B\) in the left view image. In the left view image, the epipolar line \(L_B\) passes through \(P_A\); likewise, in the frontal view image, the epipolar line \(L_A\) passes through \(P_B\). This observation evidences a very important property of the fundamental matrix: \(P_A\) can be matched against \(P_B\) when \(P_A\)’s epipolar line passes through \(P_B\). This property directs the differential evolution algorithm to seek points whose epipolar lines pass through points in the other view. Mathematically, a valid point-to-point match is counted when the following condition is satisfied.

$$\begin{aligned} \left[ P_B, 1 \right] \cdot F \cdot \left[ P_A, 1 \right]^T = \left[ P_B, 1 \right] \cdot L_A = 0 \end{aligned}$$
(2)

where \(P_A = \left[ P_A^{\left( x \right) }, P_A^{\left( y \right) } \right] \) and \(P_B = \left[ P_B^{\left( x \right) }, P_B^{\left( y \right) } \right] \) are \(1 \times 2\) row vectors.
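In code, the condition in Eq. (2) could be checked as follows. Because detected minutiae coordinates are noisy, the exact equality is relaxed to a small tolerance; the value of tol is an illustrative assumption, not a value from the paper.

```python
# A sketch of the epipolar check from Eq. (2).
import numpy as np

def epipolar_residual(F, p_a, p_b):
    """|[P_B, 1] . F . [P_A, 1]^T|: how far P_B lies from the epipolar
    line L_A = F . [P_A, 1]^T (up to the scale of the line vector)."""
    xa = np.array([p_a[0], p_a[1], 1.0])
    xb = np.array([p_b[0], p_b[1], 1.0])
    return abs(xb @ F @ xa)

def satisfies_epipolar(F, p_a, p_b, tol=1e-2):   # tol is a placeholder value
    return epipolar_residual(F, p_a, p_b) < tol
```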

Unlike the epipolar constraint, the spatial constraint focuses only on the minutiae points’ relative locations in the horizontal and vertical directions. Figure 5 shows how the horizontal spatial constraint works: the matched minutiae points’ relative shifts along the horizontal axis should be the same. The relative shift of a point is calculated according to the following formula, where the yellow and pink reference lines in Fig. 5 are determined by the fingertip.

$$\begin{aligned} \tilde{x}_{point} = \frac{x_{point} - x_{yellow}}{x_{pink} - x_{yellow}} \end{aligned}$$
(3)

where \(\tilde{x}_{point}\) is the horizontal relative shift of the minutiae point, and \(x_{point}\), \(x_{yellow}\), and \(x_{pink}\) stand for the x coordinates of the minutiae point, the yellow reference line, and the pink reference line respectively.

Fig. 5. An example showing how the horizontal spatial constraint works. (Color figure online)

In addition, Fig. 6 exhibits how the vertical spatial constraint works. For the vertical-axis constraint, the matched minutiae points’ y coordinates should be the same; that is, the matched minutiae points lie at the same height in the image domain.
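Both spatial checks can be sketched as below, assuming each view provides the x coordinates of its yellow and pink reference lines; the two tolerances are illustrative placeholders, since minutiae coordinates are never exactly equal in practice.

```python
# A sketch of the horizontal (Eq. (3)) and vertical spatial constraints.
def relative_shift(x_point, x_yellow, x_pink):
    """Eq. (3): horizontal position of a minutia relative to the
    fingertip-derived reference lines of its own view."""
    return (x_point - x_yellow) / (x_pink - x_yellow)

def satisfies_spatial(p_a, refs_a, p_b, refs_b, tol_x=0.05, tol_y=2.0):
    """refs_* = (x_yellow, x_pink) for each view; tolerances are placeholders."""
    # Horizontal constraint: equal relative shifts in the two views.
    same_shift = abs(relative_shift(p_a[0], *refs_a)
                     - relative_shift(p_b[0], *refs_b)) < tol_x
    # Vertical constraint: matched minutiae sit at (almost) the same height.
    same_height = abs(p_a[1] - p_b[1]) < tol_y
    return same_shift and same_height
```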

Fig. 6. An example showing how the vertical spatial constraint works.

Fig. 7. Results of the proposed direct minutiae point matching algorithm on a right index finger, compared with the SIFT matching result: (a) a single-run result; (b) an aggregated result after multiple runs; and (c) the SIFT matching result.

3 Experiments

To validate the feasibility and effectiveness of the proposed direct minutiae matching algorithm, we conduct two experiments: (i) minutiae-level correspondence discovery between two differently posed 2D fingerprint images; and (ii) minutiae point cloud construction in 3D space.

3.1 Minutiae Point Correspondence Discovery

The data set used in this study was collected from volunteers at the University of New South Wales. It consists of 150 volunteers’ 3D fingerprint imaging records. Each volunteer case includes all 10 fingers, from the left hand to the right hand. For each finger, we capture its left, frontal, and right view images with the 3 built-in cameras of the 3D fingerprint imaging system, and, for robustness and reliability, we scan every finger twice. In total, we collect \(10 \times 3 \times 2 = 60\) 2D fingerprint images per volunteer. Afterwards, we extract the minutiae points using the VeriFinger SDK, obtaining 60 minutiae point files for each volunteer case.

As an example, we demonstrate the minutiae point matching results in Fig. 7. The result in Fig. 7(a) is obtained from a single run of the proposed algorithm. To find more matched minutiae points, a Monte Carlo simulation based on multiple runs is conducted (Fig. 7(b)). Each independent trial of the proposed differential evolution-based algorithm starts from a group of randomly initialized solutions; these initial solutions differ between runs because the random number generator built into the differential evolution approach is seeded differently each time. The Monte Carlo simulation repeatedly runs the proposed algorithm so that the randomly initialized solutions are approximately uniformly distributed. After the multiple runs, we aggregate the final result by selecting the repeatedly matched minutiae pairs. Compared with the results achieved by the proposed algorithm, the feature point correspondence discovered by SIFT is unreliable (Fig. 7(c)).
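The aggregation step can be sketched as follows, assuming a hypothetical single-run matcher match_once that returns matched index pairs; the run count and vote threshold are illustrative choices, not values reported in the paper.

```python
# A sketch of the Monte Carlo aggregation over independent DE runs.
from collections import Counter

def aggregate_matches(mA, mB, match_once, runs=50, min_votes=10):
    """Keep only pairs that recur across many runs; match_once(mA, mB, seed)
    is a hypothetical single-run matcher yielding index pairs (i, j)."""
    votes = Counter()
    for seed in range(runs):                      # independent DE trials
        for pair in match_once(mA, mB, seed=seed):
            votes[pair] += 1
    return [pair for pair, v in votes.items() if v >= min_votes]
```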

Fig. 8. The constructed 3D minutiae point cloud based on the calculated camera parameter matrices and the minutiae point correspondence discovered in Fig. 7(b).

3.2 Minutiae Point Cloud Construction

The minutiae point cloud construction is based on the minutiae point correspondence discovered in Sect. 3.1. To calculate the minutiae points’ spatial coordinates in 3D space, the camera parameter matrix of each built-in camera in the 3D fingerprint imaging system is needed first. Mathematically, the camera parameter matrix can be written as follows.

$$\begin{aligned} C = K \cdot M = K \cdot \left[ {R,T} \right] \end{aligned}$$
(4)

where C stands for the camera parameter matrix. K is the \(3 \times 3\) camera internal parameter matrix, which can be obtained via camera calibration based on chessboard testing. M is the \(3 \times 4\) camera external parameter matrix, which contains two components: (i) the camera pose matrix R (dimension \(3 \times 3\)); and (ii) the camera shift vector T (dimension \(3 \times 1\)). The camera pose matrix R is determined by the camera’s rotations \(\left( {{\theta _x},{\theta _y},{\theta _z}} \right) \) about the x, y, and z axes of the 3D coordinate system. The camera shift vector \(T = [{T_x},{T_y},{T_z}]\) is the offset from the origin of the 3D coordinate system. The values of \(\left( {{\theta _x},{\theta _y},{\theta _z},{T_x},{T_y},{T_z}} \right) \) are provided by the manufacturer of the 3D fingerprint imaging system, so the camera external parameter matrix M can be calculated.
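As a sketch of Eq. (4), the camera parameter matrix could be assembled as follows; all numeric values (internal parameters, rotation angles, and shifts) are placeholders rather than the actual calibration or manufacturer data.

```python
# Assembling C = K . [R, T] from Euler angles and a shift vector.
import numpy as np
from scipy.spatial.transform import Rotation

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])            # internal parameters (placeholder)

theta_x, theta_y, theta_z = 0.10, 0.30, 0.00     # rotations about x, y, z (radians)
R = Rotation.from_euler("xyz", [theta_x, theta_y, theta_z]).as_matrix()  # 3x3 pose
T = np.array([[10.0], [0.0], [50.0]])            # 3x1 shift from the origin

C = K @ np.hstack([R, T])                        # Eq. (4): 3x4 camera matrix
print(C.shape)                                   # (3, 4)
```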

Furthermore, given the camera parameter matrices and the minutiae point correspondence in Fig. 7(b), the minutiae points’ 3D coordinates can be computed via the triangulation reconstruction method. As an example, we demonstrate the resultant 3D minutiae point cloud in Fig. 8.

4 Conclusion and Future Work

This paper proposes a novel minutiae point correspondence discovery strategy based on differential evolution for 3D fingerprint imaging systems. The algorithm aligns the minutiae point set in one view against the point set in another view. As an alternative solution for feature point-level matching, it produces reliable matching results where the SIFT-based matching approach fails entirely, and it solidly supports the subsequent 3D minutiae point cloud construction. The effectiveness and superiority of the proposed algorithm are evidenced by the experiments.

However, the proposed algorithm still needs improvement in the following respects: (i) the number of matched minutiae pairs is still limited, even with the Monte Carlo simulation; and (ii) a minutiae point matching method in 3D space is in demand.