Introduction

For many years, fingerprints have been one of the most reliable biometrics for identifying a person [1], because fingerprint features remain unchanged throughout a person's lifetime. Fingerprints have been the most preferred biometric over other biometric traits [2]. They are used in a wide range of applications, including forensics [3], secure mobile payments [4], and verifying the identity of citizens at borders [5]. The size of these databases can exceed several million records, and it grows further when different fingerprint impressions of the same person are registered. Performing one-to-one fingerprint verification over such a large-scale database poses several challenges, so there is a need to build a fingerprint indexing-based automated fingerprint identification system (AFIS) to scale down the search size. AFIS has been widely used in the above-mentioned applications. In AFIS, latent fingerprint matching is performed against rolled/plain fingerprints. Rolled fingerprints (Fig. 1a) are obtained by coating the finger surface with ink and rolling it on paper from one side to the other. This approach captures complete ridge data, and the number of minutiae collected from this type of fingerprint is about 100. Plain fingerprints (Fig. 1b), in contrast, are obtained by pressing the inked finger against paper or pressing the finger on a fingerprint-scanning device. The capture area of plain fingerprints is smaller than that of rolled fingerprints, and the number of minutiae obtained is around 50.

Fig. 1 Fingerprint impressions collected using different methods. a Rolled, b plain, and c latent

Rolled/ten-print fingerprint matching using AFIS [6] has achieved matching accuracy of around 100% with a background database of 10 K fingerprints. On the other hand, latent fingerprints (Fig. 1c) are collected from the surfaces of objects on which they were unintentionally left at crime scenes, and they become extremely important evidence in crime scene investigations to convict criminals. Latent fingerprint investigations using AFIS have the highest social significance, and any false conviction has a great social impact. There have been many instances where wrong convictions have brought embarrassment to investigating agencies, and many agencies have had to make public apologies.

The murder case of Marion Ross dates back more than 18 years and is one of the most controversial cases encountered by the Scottish government [7]. Although Ms. McKie denied ever entering the home of the murder victim, she was wrongly accused of the murder based on fingerprints found at the crime scene. After a detailed public inquiry, it was concluded that the accusation was caused by "human error", which raised several questions about the integrity of the fingerprint-matching system. Ms. McKie was compensated with £750,000, and the Scottish police agency apologized for the wrong accusation.

In another case, a terrorist attack on March 11, 2004, in Madrid caused the death of 191 people. An American lawyer named "Brandon Mayfield" [8] was detained for 14 days based on latent fingerprints found on a bag containing detonators and explosives recovered from the crime scene. In police custody, Brandon claimed that he did not have a passport and had not left the USA for 10 years. The forensic agencies were unable to prove the source of the fingerprints, and after a few months of arguments the court acquitted him. The error made by AFIS led the Federal Bureau of Investigation (FBI) to issue a public apology.

Currently, latent fingerprint identification is carried out manually by forensic experts, and the development of a fully automatic, accurate latent fingerprint identification system is still in its early stages. Researchers have to address the challenges of processing low-quality latent fingerprints. Some of these challenges are explained next.

Latent Fingerprint Identification Challenges: Science vs. Fiction

Forensic investigations became popular after the airing of several crime-investigation television programs, including Crime Scene Investigation (CSI) [9]. These programs showcased forensic laboratories equipped with highly trained personnel working with state-of-the-art instrumentation and solving cases in a timely fashion. Some of the illusions created by these television series are listed below:

  • Prosecutors, judges, and police officers come to expect an unreasonable quality and quantity of physical evidence at crime scenes.

  • Cases can be easily proved in courtrooms using such evidence.

Things look quite different when an actual crime scene investigation is carried out. One of the major challenges in latent fingerprint identification research is the lack of publicly available latent fingerprint databases. The CSI programs have encouraged viewers to take up research in this area, and the collection and distribution of physical evidence in the public domain is gradually increasing.

Apart from this, researchers have to deal with incomplete and partial latent fingerprints. Latent fingerprints contain partial ridge information due to smudged ridges [10], which results in blurred images. Overall, the quality of these fingerprints is poor, and they suffer from non-linear distortions caused by variations in the pressure applied on the surface from which the fingerprints are obtained. Researchers [11, 12] used ridge frequency and ridge orientation to represent the global ridge patterns. The major limitations of these global features are that they do not handle non-linear distortions well and are not invariant to geometric transformations. Another problem with global features is that they require prior knowledge of core points to align fingerprints, and core points may not always be present in latent fingerprints. Hence, global features are better suited for image classification.

Related Work Carried Out

Many researchers [13, 14] working in the field of AFIS used manually marked local minutia points, core points, delta points, and the Region of Interest (ROI) to perform fingerprint search. On the NIST SD27 database, manually annotated fingerprints with orientation field and accuracy maps [13] produced the highest Rank-1 identification accuracy of 79.5%. A dictionary of patches [14] helped to boost the quality of fingerprints, and the highest Rank-1 matching accuracy of 32.51% was produced on manually marked minutia points of the enhanced fingerprints. Based on the top Rank-50 matching accuracy, the researchers concluded that matching with manually marked points produced better results than automatically extracted minutiae. A fusion of manually marked minutia points with minutiae obtained from automatic minutiae extraction [15] produced a Rank-1 identification accuracy of 40%, and the results were found to be better than those of methods developed using global fingerprint features. A Gaussian-filter-based method [16] was proposed for fingerprint core point extraction and localization. The major limitation of the above methods is that manually marked minutia points are not always accurate [17], and core points are not always available in latent fingerprints. Moreover, these features are not tolerant to geometric transformations.

Many researchers [18,19,20,21] used geometric features that are robust against rotation and translation. These geometric features were derived from neighboring minutia points, and a hashing technique was used to speed up the search. For latent fingerprint identification, an algorithm called the "Descriptor-based Hough Transform (DBHT)" [22,23,24] was proposed. This algorithm uses Minutiae Cylinder Codes (MCC) [25] to perform fingerprint alignment using the Hough Transform, after which local fingerprint matching is performed. MCC has proved to be an effective way of representing a minutia for fingerprint matching and indexing. However, the MCC representation requires prior knowledge about fingerprint alignment, and the minutiae neighborhood representation is not robust to rotation and scale. The system recorded a Rank-1 identification accuracy of 53.5% [24].

Clustering algorithms for latent fingerprints were also developed [26]. To cope with non-linear distortions, they used minutiae descriptors, and the minutiae clusters were combined to locate the corresponding minutiae. The minutiae descriptors were rotated between 0 and π/4 to perform matching. Different algorithms were developed to address the problem of non-linear distortions. The neighbouring minutiae-based descriptor (NMD) algorithm [26] produced a Rank-1 identification accuracy of 85.66%. Minutiae neighborhood groups within a fixed pixel radius [27] were created, and fingerprint alignment based on the best-matching pairs was subsequently performed. To cope with fingerprint deformations, rotation, and scale, a robust algorithm using Delaunay triangles [28] was proposed to locate the orientation and location of minutiae.

Convolutional neural networks (ConvNets) were proposed [29] to estimate ridge flow and to extract minutiae descriptors using two minutiae templates plus a texture template. The reference templates stored in the background database are compared with the query latent fingerprint, and the highest Rank-1 recognition accuracy of 64.7% was reported on the NIST SD27 database. A "Latent Fingerprint Identification" (LFI) algorithm [30] with a supervised classifier was proposed, in which fingerprint matching was carried out with global features and deformable minutiae descriptors independent of the local minutiae cylinder code (MCC) and m-triplet descriptors. The highest Rank-1 identification accuracy of 73.26% was achieved with rank-level fusion.

An end-to-end solution for fingerprint matching [31] was introduced, which includes automated ROI cropping, pre-processing, feature extraction, and matching. For each minutia, a 96-dimensional descriptor was extracted from the minutia neighbourhood, and virtual minutiae were used to quantize the product and reduce the descriptor length to 16. A Rank-1 identification accuracy of 70% was produced with rank-level fusion.

To summarize, latent fingerprints contain a limited number of minutia points and poor-quality ridges, and they suffer from fingerprint deformations and non-linear distortions. The approaches proposed by various researchers make use of core points for fingerprint alignment; however, owing to its partial capture area, a latent fingerprint may not always contain a core point or delta point. The existing systems are also not robust against geometric transformations of the fingerprint. Hence, there is a need to develop a system that identifies a person from the existing small number of minutia features and without requiring fingerprint pre-alignment. In addition, the system should be robust against fingerprint deformations and scale. A nearest minutiae neighbourhood-based latent fingerprint identification algorithm called "Ratio of Minutiae Triangles (RMT)" is proposed to solve these problems. The main goal of the proposed work is to develop an automated fingerprint identification algorithm that is robust to fingerprint deformation, rotation, and scale, and does not rely on any fingerprint alignment. We test our proposed work on the FVC2004 DB1 [35] and NIST SD27 [36] latent fingerprint databases. The performance of the algorithms is discussed in the later sections.

The paper is organized as follows. The next section introduces the ratio of minutiae triangles (RMT) algorithm, which is built on nearest minutiae pairs and makes use of latent fingerprint descriptors. The following section analyses the performance of the proposed algorithms on the databases under different settings and compares it with state-of-the-art algorithms. The last section discusses the results and future scope.

Ratio of Minutiae Triangles (RMT)

We propose a minutia-descriptor-based matching system called "Ratio of Minutiae Triangles (RMT)". RMT uses local minutiae triangular features that are invariant to minutiae deformations, rotation, and scale [18]. The proposed RMT algorithm does not depend on fingerprint pre-alignment information for matching, which eliminates the need for fingerprint alignment using singular points such as the core point. After the minutiae feature extraction phase, the feature vectors for the gallery and query fingerprints are built. For matching, the feature vectors of query fingerprints are compared with the feature vectors of gallery fingerprints. The feature extraction and RMT algorithms are explained next.

Dealing with Distortions and Geometric Transformations

The triangular features obtained from triangle angles, lengths, and sides have been proven to be invariant to translation, rotation, and scale [18]. The RMT algorithm uses these minutiae triangle features to produce latent minutiae feature vectors. Latent fingerprints suffer from distortions, which results in minutia points being shifted relative to the corresponding minutia points in the ground truth (true mate) fingerprint.

Dealing with Distortions

The minutiae distance measure is invariant under translation and rotation, and fairly tolerant to scale. The ratio of minutiae distances defines an angle, and this ratio is invariant under the above-mentioned transformations. Since latent fingerprints are most of the time affected by non-linear distortions, the minutia points extracted from a query fingerprint look slightly different from those of its true mate stored in the background gallery database. From Fig. 2a, it can be observed that 'O' represents the vertex of the minutiae triangle at the reference (0, 0). Minutia points A and B are shifted away from their original positions when the fingerprint suffers from distortion. This results in the coordinates \(x_{1}\), \(x_{2}\), \(y_{2}\) being distributed within the maximum minutiae distance (L) of 150 pixels such that 0 < \(x_{1}\) < L, 0 < \(y_{2}\) < L and \(x_{2}\) < L (Table 1).

Fig. 2 Triangle variables

Table 1 Tolerable angle (\(\alpha_{\min }\), \(\alpha_{med}\), \(\alpha_{\max }\)) variations under different threshold values

Several experiments were conducted to test the robustness of the system against distortions [18]. The researchers observed that the minimum (\(\alpha_{\min }\)) and median (\(\alpha_{{{\text{med}}}}\)) minutiae triangle angles can tolerate distortions better than the maximum minutiae triangle angle (\(\alpha_{\max }\)). Angle deviation thresholds (\(\alpha\)) between 2° and 4° tolerate most distortions and help to reduce the search time.

Let minutia points P1, P2, and P3 represent the vertices of the minutiae triangle, and let \(\alpha_{i}\) represent the three angles of the triangle, where i = 1, 2, 3. The triangle angles are given by

$$ \alpha_{\max } = \max \{ \alpha_{i} \} ,\;\alpha_{\min } = \min \{ \alpha_{i} \} \;{\text{and}}\;\alpha_{{{\text{med}}}} = 180^\circ - \alpha_{\max } - \alpha_{\min } $$

From Fig. 2b, it can be observed that minutia point 'P1' represents the vertex with angle \(\alpha_{\max }\), 'P2' represents the vertex with angle \(\alpha_{\min }\), and 'P3' is the vertex with angle \(\alpha_{{{\text{med}}}}\). Hence, a minutiae triangle formed using the minimum (\(\alpha_{\min }\)) and median (\(\alpha_{{{\text{med}}}}\)) angles provides robustness towards variations in the triangle angles and satisfies

$$ 0^\circ < \alpha_{\min } \le 60^\circ \;{\text{and}}\;\alpha_{\min } \le \alpha_{{{\text{med}}}} < 90^\circ . $$

These triangle angles are used to build minutiae invariant feature vectors in our proposed algorithms.
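To make these constraints concrete, the following Python sketch (our own illustration; function and variable names are not from the original implementation) computes \(\alpha_{\min }\), \(\alpha_{{{\text{med}}}}\), and \(\alpha_{\max }\) from three minutia coordinates via the law of cosines and checks the angle conditions stated above.

```python
import math

def triangle_angles(p1, p2, p3):
    """Return (alpha_min, alpha_med, alpha_max) in degrees for the minutiae
    triangle with vertices p1, p2, p3, each given as an (x, y) coordinate."""
    # Side lengths opposite to the vertices p1, p2, p3.
    a, b, c = math.dist(p2, p3), math.dist(p1, p3), math.dist(p1, p2)

    def angle(opposite, s1, s2):
        # Interior angle facing the side 'opposite' (law of cosines).
        return math.degrees(math.acos((s1 ** 2 + s2 ** 2 - opposite ** 2) / (2 * s1 * s2)))

    return tuple(sorted([angle(a, b, c), angle(b, a, c), angle(c, a, b)]))

def satisfies_rmt_angle_conditions(a_min, a_med):
    """Check 0 < alpha_min <= 60 and alpha_min <= alpha_med < 90 (degrees)."""
    return 0 < a_min <= 60 and a_min <= a_med < 90

a_min, a_med, a_max = triangle_angles((0, 0), (120, 15), (40, 90))
print(a_min, a_med, a_max, satisfies_rmt_angle_conditions(a_min, a_med))
```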

Dealing with Geometric Transformations

We use the following geometric features to match minutia points under different rotations and scales.

  • Relative translation: Let Zi = xi + j yi be the complex number with \(j = \sqrt { - 1}\) corresponding to the minutia location point Pi (xi, yi), where i = 1, 2, 3.

    Then, we define Z21 = Z2 − Z1, Z32 = Z3 − Z2, and Z13 = Z1 − Z3.

    Further, the centroid of the triangle ‘ZC’ is defined as follows:

    $$ Z_{{\text{C}}} \, = \,\left( {Z_{{\text{1}}} \, + \,Z_{{\text{2}}} \, + \,Z_{{\text{3}}} } \right)/{\text{3}}\;{\text{and}}\;\left| {Z\, - \,Z^{\prime} } \right| < \,\delta _{t} , $$

    where Z and Z′ are the centroids of the triangle (ZC) found in two different impressions and |Z − Z′| is the translation between the centroids of these two triangles.

  • Relative rotation: Let θ13 = ∠Z13, θ21 = ∠Z21, and θ32 = ∠Z32, where ∠Z denotes the phase angle.

    Then,

    $$ \left| {\theta \, - \,\theta ^{\prime } } \right| \,< \,\delta _{r} , $$

    where θ and θ′ are the corresponding phase angles (θ13, θ21, or θ32) found in two different impressions.

    These features help to reject false correspondences between gallery minutia points and their candidate matches in the query fingerprint; a minimal sketch of these checks follows this list.
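As a minimal sketch of these two checks (our own illustration; the threshold values \(\delta _{t}\) and \(\delta _{r}\) below are hypothetical), the minutia points of a triangle can be represented as Python complex numbers, and the centroid translation and relative phase angles compared against the tolerances:

```python
import cmath

def triangle_complex_features(p1, p2, p3):
    """Centroid Zc and phase angles of Z13, Z21, Z32 for a minutiae triangle,
    with each point given as an (x, y) coordinate."""
    z1, z2, z3 = complex(*p1), complex(*p2), complex(*p3)
    z21, z32, z13 = z2 - z1, z3 - z2, z1 - z3
    zc = (z1 + z2 + z3) / 3
    return zc, (cmath.phase(z13), cmath.phase(z21), cmath.phase(z32))

def is_consistent(triangle_a, triangle_b, delta_t=15.0, delta_r=0.1):
    """Accept a triangle correspondence only if the centroid translation and
    every relative phase angle stay within the (hypothetical) tolerances."""
    zc_a, phases_a = triangle_complex_features(*triangle_a)
    zc_b, phases_b = triangle_complex_features(*triangle_b)
    if abs(zc_a - zc_b) >= delta_t:
        return False
    return all(abs(pa - pb) < delta_r for pa, pb in zip(phases_a, phases_b))
```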

Ratio of Triangles

A related set of experiments was conducted by Dunn et al. [24] to reconstruct dental radiographic images captured with different imaging arrangements. Using projective invariants, the algorithm locates each pixel in the rotated image. Initially, feature points p1, p2, p3, and p4 are chosen in the reference image, and their corresponding points P1, P2, P3, and P4 are identified in the rotated image. Then a fifth point, p5, is scanned through the reference image. The values of two ratios of triangles can be determined from these five point positions in the reference image as follows:

$$ x = \frac{{\vartriangle \left( {1,2,5} \right)\, \cdot \, \vartriangle \left( {3,4,5} \right)}}{{\vartriangle \left( {1,3,5} \right)\, \cdot \,\vartriangle \left( {2,4,5} \right)}}, $$
(1)
$$ y = \frac{{\vartriangle \left( {1,4,5} \right)\, \cdot \, \vartriangle \left( {2,3,4} \right)}}{{\vartriangle \left( {2,4,5} \right)\, \cdot \,\vartriangle \left( {1,3,4} \right)}}, $$
(2)

where the eight triangular areas are defined as shown in Fig. 3. After an initial alignment, the images were rotated by 0°, 2°, 4°, 8°, and 16° relative to the reference image, and the algorithm produced low standard deviations for the difference image. The transformation performance of the algorithm depends on the initial selection of feature points. To overcome this limitation, we propose an alignment-free fingerprint-matching algorithm applied after the minutiae feature extraction step, which is explained in "Latent Fingerprint Registration".
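As a small illustration of Eqs. (1) and (2), the Python sketch below (ours, not from [24]) computes the two projective-invariant ratios from five (x, y) pixel positions, obtaining each triangle area from a cross product:

```python
def tri_area(p, q, r):
    """Unsigned area of the triangle (p, q, r) computed from the cross product."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def projective_ratios(p1, p2, p3, p4, p5):
    """Two projective-invariant ratios of triangle areas, Eqs. (1) and (2)."""
    x = (tri_area(p1, p2, p5) * tri_area(p3, p4, p5)) / (tri_area(p1, p3, p5) * tri_area(p2, p4, p5))
    y = (tri_area(p1, p4, p5) * tri_area(p2, p3, p4)) / (tri_area(p2, p4, p5) * tri_area(p1, p3, p4))
    return x, y
```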

Fig. 3 Representations of the ratio of triangles from eight triangles. a Ratio of triangle representation (x) obtained from areas of triangles \(\vartriangle \left( {1,2,5} \right), \vartriangle \left( {3,4,5} \right), \vartriangle \left( {1,3,5} \right), \vartriangle \left( {2,4,5} \right)\). b Ratio of triangle representation (y) obtained from areas of triangles \(\vartriangle \left( {1,4,5} \right), \vartriangle \left( {2,3,4} \right), \vartriangle \left( {2,4,5} \right), \vartriangle \left( {1,3,4} \right)\)

Latent Minutiae Feature Extraction

Latent fingerprints contain incomplete, partial fingerprint information. Accurate minutiae detection plays a very important role in obtaining good matching results and, in turn, helps to correctly identify a person. We used NIST's open-source FpMV minutiae viewer [37] to obtain reliable minutiae features and achieve better matching accuracy. The software is built on top of the MINDTCT application, and its extraction results are close to manually marked minutiae. The block diagrams in Figs. 4 and 5 explain the steps used to extract minutiae from the FVC2004 and NIST SD27 databases using FpMV, respectively. Since the FVC2004 query fingerprints contain good-quality ridges, their minutiae features are extracted using the FpMV tool without any additional pre-processing. Minutiae features are extracted as x, y coordinates and are later read by the RMT algorithm for the matching operation.

Fig. 4 Minutiae feature extraction steps for FVC2004 database

Fig. 5 Minutiae feature extraction steps for NIST SD27 database

Compared to the FVC2004 fingerprints, the quality of the latent NIST SD27 fingerprints is very low. Hence, these fingerprints need to be pre-processed to remove background noise and bring out the important foreground ridge information. We use a deep neural network model called "Pre-ProcessNet" [33] to perform automatic segmentation (ROI cropping) and enhancement. Pre-ProcessNet generates orientation maps from a learned dictionary of orientation patches. Similarly, a segmentation map is generated to suppress background noise. The orientation information, together with the segmentation information, is used to enhance the fingerprint by passing it through a bank of Gabor filters, as shown in Fig. 5. The fingerprint skeleton is extracted from the enhanced image and a binarization operation is carried out before extracting minutiae features. Finally, the minutiae features are extracted using the FpMV tool, and the x, y minutiae coordinates are passed on to the RMT algorithm to perform the fingerprint-matching operation.
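For completeness, the sketch below shows one way the extracted minutiae could be read into the matcher, assuming an .xyt-style text file with one 'x y theta [quality]' line per minutia (the format produced by MINDTCT-based tools); the file name and helper function are ours:

```python
def load_minutiae_xy(path):
    """Read minutia (x, y) coordinates from an .xyt-style text file.
    Only the coordinates are used by the RMT algorithm."""
    points = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 2:
                points.append((int(fields[0]), int(fields[1])))
    return points

# Example usage (hypothetical file name):
# query_minutiae = load_minutiae_xy("G008_query.xyt")
```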

Figure 6 indicates the complete implementation details of the proposed pre-processing, feature extraction, and matching steps. The RMT algorithm is explained next.

Fig. 6 A complete flow diagram indicating proposed fingerprint pre-processing, extraction, and matching steps

Ratio of Minutiae Triangles (RMT) Algorithm

We propose an RMT algorithm that uses a distinctive fixed-length latent minutia descriptor built from nearest-minutiae-neighborhood invariants. The process begins by choosing 'm' minutia points among the 'n' nearest neighbors and then calculating the minutiae invariants. The process is explained in detail next.

Latent Fingerprint Registration

We start the fingerprint registration process with the gallery fingerprints stored in the background database. From the minutia points extracted earlier with the FpMV application, we select a reference minutia point. To generate minutiae invariants from the set of neighborhood minutia points, we select 'n' minutia points (n = 8) around a reference minutia 'P' and arrange them in clockwise sequence P1, P2, P3, P4, P5, P6, P7, and P8 as shown in Fig. 7. We then select 'm' minutia points (m = 7) from the 'n' points, with m < n, to obtain a set of minutiae combinations (MC). With n = 8 and m = 7 we obtain \(\left( {\begin{array}{*{20}c} n \\ m \\ \end{array} } \right) = nCm = 8\) minutiae combinations, namely p, q, r, s, t, u, v, and w, as seen in Fig. 7. For each MC, we select A, B, C, D, and E coplanar minutia points (Mp = 5) and calculate the minutiae invariant 'I' as the ratio of minutiae triangle products using the following equation, and store it in a minutiae arrangement vector (MAV):

$$ I = \frac{{\vartriangle \left( {{\text{A}},{\text{B}},{\text{C}}} \right)\, \cdot \,\vartriangle \left( {{\text{A}},{\text{D}},{\text{E}}} \right)}}{{\vartriangle \left( {{\text{A}},{\text{B}},{\text{D}}} \right)\, \cdot \,\vartriangle \left( {{\text{A}},{\text{C}},{\text{E}}} \right)}}, $$
(3)

where (A, B, C), (A, D, E), (A, B, D), and (A, C, E) are the areas of the triangles formed from points A, B, C, D, and E. To calculate the remaining invariant values, rotate points A, B, C, D, and E in the clockwise direction and store the resulting invariants in the MAV as shown in Fig. 8.
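The combination and invariant generation described above can be sketched as follows (our reconstruction, not the authors' code); here the 21 invariants of a combination are generated from its C(7, 5) five-point subsets, which matches the mC5 = 21 count given later:

```python
import math
from itertools import combinations

def tri_area(p, q, r):
    """Unsigned triangle area via the cross product."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def invariant(a, b, c, d, e):
    """Ratio of minutiae triangle areas, Eq. (3); returns 0 for degenerate subsets."""
    denom = tri_area(a, b, d) * tri_area(a, c, e)
    return (tri_area(a, b, c) * tri_area(a, d, e)) / denom if denom else 0.0

def nearest_neighbors_clockwise(ref, minutiae, n=8):
    """The n minutiae nearest to 'ref', ordered clockwise around it."""
    nearest = sorted((p for p in minutiae if p != ref), key=lambda p: math.dist(ref, p))[:n]
    return sorted(nearest, key=lambda p: -math.atan2(p[1] - ref[1], p[0] - ref[0]))

def minutiae_arrangement_vectors(ref, minutiae, n=8, m=7, mp=5):
    """One MAV (list of C(m, mp) = 21 invariants) per minutiae combination (MC)."""
    neighbors = nearest_neighbors_clockwise(ref, minutiae, n)
    mavs = []
    for mc in combinations(neighbors, m):                 # C(n, m) = 8 combinations
        mavs.append([invariant(*subset) for subset in combinations(mc, mp)])
    return mavs
```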

Fig. 7 'n' nearest minutiae points (n = 8) around a reference minutia 'P'. Clockwise ordering of 'm' minutia points (m = 7) over 'n' points to obtain p, q, r, s, t, u, v, and w minutiae combinations (MC)

Fig. 8 Minutiae invariants. a Arrangement of 'n' nearest minutiae points (n = 8) in a clockwise direction. b Possible combinations of 'm' minutiae points (m = 7) around 'n' points. c A possible minutiae invariant with 5 points chosen around 'm'

The total number of invariants stored in the MAV of one MC is equal to \({}_{m}{\text{C}}_{{{\text{M}}_{{\text{P}}} }}\) = 21; e.g., the invariants p(0), p(1), p(2), …, p(20) of combination p are stored in MAV 'P'. These steps are repeated to calculate the minutiae invariants of the p, q, r, s, t, u, v, and w minutiae combinations (MC), which are stored as the MAVs P, Q, R, S, T, U, V, and W, respectively. These MAVs are arranged as PQRSTUVW, QRSTUVWP, RSTUVWPQ, …, WPQRSTUV vectors in the hash-table as shown in Fig. 9.

Fig. 9 Process of hash-table construction during fingerprint registration

To perform hashing, each MAV is used to calculate the hash value and is stored on the hash-table using the following equation:

$$ {\text{HTindex}} = \left( {\mathop \sum \limits_{x = 0}^{mC5 - 1} {\text{MA}}_{{\text{V}}} \left[ x \right]\, \cdot \,k^{x} } \right){\text{mod}}\;{\text{HTsize ,}} $$
(4)

where \({\text{MA}}_{{\text{V}}}\left[ x \right]\) is the xth element of the MAV, which contains the \(mC5\) invariants obtained from Eq. 3, m is the number of minutia points selected from the n points (m = 7), k is the quantization level (k = 15), and HTsize is the hash-table size (kept as 1 TB).

The continuous values contained in the MAV are converted into discrete values by quantizing them (k = 15) into the range [0, 15]. At the resulting HTindex value, the Latent Fingerprint-ID (LFP-ID), Latent Minutia-ID (LM-ID), and MAV are stored in the corresponding cell of the hash-table. These steps are repeated for the remaining minutia points present in the fingerprint, and the hash-table is constructed. The RMT fingerprint registration algorithm is explained next.

Algorithm 1 RMT fingerprint registration algorithm
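A minimal sketch of this registration step (Algorithm 1) is given below, building on the minutiae_arrangement_vectors helper sketched earlier. It is written under our own assumptions: the invariant values are clipped at an assumed bound before quantization to [0, k], and the hash-table is a Python dict keyed by HTindex rather than a fixed 1 TB array.

```python
def quantize(mav, k=15, max_value=10.0):
    """Map continuous invariants to integers in [0, k]; max_value is an
    assumed clipping bound for the invariant ratios."""
    return [min(k, max(0, round(k * min(v, max_value) / max_value))) for v in mav]

def ht_index(mav_q, k=15, ht_size=2 ** 30):
    """Hash index of Eq. (4): base-k polynomial of the quantized MAV, mod HTsize."""
    return sum(v * k ** i for i, v in enumerate(mav_q)) % ht_size

def register_fingerprint(hash_table, lfp_id, minutiae):
    """Algorithm 1 sketch: store (LFP-ID, LM-ID, quantized MAV) under each hash index."""
    for lm_id, ref in enumerate(minutiae):
        for mav in minutiae_arrangement_vectors(ref, minutiae):
            mav_q = quantize(mav)
            hash_table.setdefault(ht_index(mav_q), []).append((lfp_id, lm_id, mav_q))
```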

Latent Fingerprint Retrieval

The HTindex values for query fingerprints are calculated in the same way as the hash-index (HTindex) values computed from the minutiae invariant feature vectors of the gallery fingerprints. The query HTindex values are compared with the HTindex values stored in the hash-table, and the candidate list (LFP-ID, LM-ID, and MAV) is retrieved. The vote count of each LM-ID with matching feature vectors, belonging to the corresponding LFP-ID, is increased; repeated matches of the same LM-ID and LFP-ID pair do not contribute additional votes. The final matching result is obtained by sorting the fingerprint IDs (LFP-ID) in descending order of votes received. The fingerprint with the highest number of votes is the Rank-1 match. The experimental steps and results are explained in the next section.

Algorithm 2 RMT fingerprint retrieval algorithm
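A companion sketch of the retrieval and voting step (Algorithm 2), using the same helpers and assumptions as the registration sketch above; repeated matches of the same (LFP-ID, LM-ID) pair are counted only once, as described in the text.

```python
from collections import defaultdict

def retrieve(hash_table, query_minutiae, top_k=20):
    """Algorithm 2 sketch: look up query MAVs, vote per gallery fingerprint,
    and return LFP-IDs sorted by descending vote count."""
    votes = defaultdict(int)
    seen = set()                      # avoid re-voting the same (LFP-ID, LM-ID) pair
    for ref in query_minutiae:
        for mav in minutiae_arrangement_vectors(ref, query_minutiae):
            mav_q = quantize(mav)
            for lfp_id, lm_id, gallery_mav in hash_table.get(ht_index(mav_q), []):
                if gallery_mav == mav_q and (lfp_id, lm_id) not in seen:
                    seen.add((lfp_id, lm_id))
                    votes[lfp_id] += 1
    ranked = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]             # ranked[0][0] is the Rank-1 candidate
```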

RMT Fingerprint Registration and Retrieval Computational Complexity

Let us consider ‘M’ as the average number of minutia points present in a fingerprint. Let ‘N’ represent the number of gallery fingerprints used for registration, and ‘MP’ represent the number of points used to determine the minutiae invariant features.

  • Initially ‘m’ points over ‘n’ are arbitrarily chosen and all possible minutiae combinations of ‘m’ points over ‘n’ \(\left( {\begin{array}{*{20}c} n \\ m \\ \end{array} } \right) \) are calculated. This step increases the computational cost by ‘n’ times.

  • Next, ‘\(M_{{\text{P}}}\)’ points are chosen from ‘m’ minutia points belonging to a particular minutiae combination, and \(\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)\) minutiae invariants are obtained.

  • This step is repeated for the ‘n’ minutiae combinations. Total \(\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)n\) minutiae invariants for all ‘n’ minutiae combinations are obtained.

  • This step is repeated by selecting other minutia points ‘M’ in a fingerprint. Total \(\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)nM\) minutiae invariants for all minutia points in a fingerprint are obtained.

  • The computational cost for registering all 'N' fingerprints in the hash-table is \(O\left( {\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)nMN} \right)\), because \(\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)nM\) minutiae invariants are calculated for every fingerprint in a database containing 'N' fingerprints.

  • Similarly, the computational cost of retrieving a matching fingerprint is \(O\left( {\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)nM} \right)\), because \(\left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right)n\) minutiae invariants for each of the 'M' minutia points in a query fingerprint are computed and looked up in the hash-table. A short worked example with the chosen parameter values follows this list.
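For the parameter values adopted later (n = 8, m = 7, \(M_{{\text{P}}}\) = 5), these counts evaluate as follows (our arithmetic; M and N remain symbolic):

$$ \left( {\begin{array}{*{20}c} n \\ m \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 8 \\ 7 \\ \end{array} } \right) = 8,\quad \left( {\begin{array}{*{20}c} m \\ {M_{{\text{P}}} } \\ \end{array} } \right) = \left( {\begin{array}{*{20}c} 7 \\ 5 \\ \end{array} } \right) = 21, $$

$$ {\text{invariants per reference minutia}} = 21 \times 8 = 168,\quad {\text{registration cost}} = O\left( {168\,MN} \right),\quad {\text{retrieval cost}} = O\left( {168\,M} \right). $$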

Choosing ‘m’ Minutiae Points from ‘n’ Points and Establishing Rotation Invariance

In the RMT algorithm, we match the MAVs of a query fingerprint against the MAVs in the hash-table. When producing minutiae invariants (I) for query fingerprints, selecting the same starting minutiae combination (MC) as in the gallery fingerprints is not assured. Hence, to make the algorithm alignment-free and robust against global transformations, we generate minutiae invariants for all minutiae combinations (MC) in both query and gallery fingerprints.

To start the RMT algorithm, we identify the 'n' neighborhood minutia points around a reference minutia and select 'm' minutia points to obtain '\(nCm\)' minutiae combinations (MC). For each MC, we calculate the minutiae invariants and store them in minutiae arrangement vectors (MAV). To perform these operations, the values of n and m are carefully selected to satisfy the following conditions.

  1. If the value of n = m, then the number of \(M_{{\text{C}}} = nCm = 1\). Hence, m must be less than n to generate a sufficient number of \(M_{{\text{C}}}\) as well as minutiae invariants (I).

  2. For every '\(M_{{\text{C}}}\)', we select A, B, C, D, and E co-planar points (\(M_{{\text{P}}}\) = 5) and calculate the minutiae invariants (I). The value of n must be at least \(M_{{\text{P}}}\) to obtain at least one minutiae invariant for a reference central minutia.

  3. A large value of 'n' produces a greater number of \(M_{{\text{C}}}\), which results in good discriminative power. However, it adds the complexity of calculating a large number of invariants (I) for each reference minutia and also increases the memory requirement of the hash-table.

Since n > m ≥ \(M_{{\text{P}}}\) must hold, we select n = 8, m = 7, and k = 15 to satisfy the above conditions.

Results and Discussion

We test the performance of our proposed RMT algorithm on the FVC2004 [35] and NIST SD27 [36] databases separately. We store the ground truth fingerprints in the gallery database and apply the RMT fingerprint registration algorithm (Algorithm 1) to generate hash-indexes (HTindex). We then store the minutiae arrangement vectors (MAV) at the hash-table locations indexed by the HTindex values. Similarly, for query input fingerprints we apply the RMT fingerprint retrieval algorithm (Algorithm 2) to generate hash-indexes (HTindex) and retrieve the matching MAVs. Gallery fingerprints with matching MAVs receive votes, and the fingerprints are sorted in descending order of the votes received. The fingerprint with the highest number of votes represents the top Rank-1 result. For both the FVC and NIST databases, we set the values of n, m, and k (Algorithms 1 and 2) to 8, 7, and 15, respectively.

FVC 2004 Setup

We store the ground truth fingerprints in the gallery database, as seen in Fig. 10d. To increase the size of the background database, we extract minutiae features using different extraction methods and store them in the gallery database. We accomplish this by extracting fingerprint features with a MATLAB application called "Simple Fingerprint Matching (SFM)" [32] and a deep neural network model called "ExtractNet" [33], as shown in Fig. 10b, c. The total size of the gallery database thus formed (including ground truth) is 2560 images.

Fig. 10 Minutiae extracted from 101_1 fingerprint (FVC2004_DB1). a Original fingerprint. b Fingerprint extracted using SFM [32]. c Fingerprint extracted using ExtractNet [33]. d Ground truth fingerprint

To perform matching, we extract minutiae features from the query fingerprints using the NIST FpMV application. We achieved the highest Rank-1 identification accuracy of 100% on the FVC2004 query fingerprints. To test the robustness of our proposed RMT algorithm, we compress (RMT-WSQ) and rotate the query fingerprint images of the FVC2004 database using the NBIS MINDTCT application [38], as shown in Fig. 11a–c.

Fig. 11 Minutiae extraction of query fingerprint 101_1 (FVC 2004_DB1). a Original fingerprint. b Fingerprint extracted using FpMV. c Fingerprint compressed and rotated using MINDTCT [38] application

We obtained a Rank-1 identification accuracy of 78.75% and Rank-4 identification accuracy of 100%. Figure 12 shows the CMC curve of top Rank-20 FVC2004 fingerprint retrievals.

Fig. 12 Rank-1 identification accuracy of RMT algorithm tested on FVC 2004 database query fingerprints with WSQ compression and unrestricted rotation

NIST SD27 Setup

We store the manually annotated NIST SD27 ground truth fingerprints in the gallery database as shown in Fig. 13d. To increase the size of the gallery database, we extract minutiae features using the COTS application [34]. Figure 13b, c indicates the minutiae features extracted for G008 (top) and G004 (bottom) ground truth fingerprints using COTS1 and COTS2 matchers. A total of 3096 fingerprints are created and stored in the background gallery database. Minutiae features for query fingerprints are extracted using the FpMV minutiae extractor. Figure 13a illustrates the minutiae extraction operation on good-quality G008 and G004 query fingerprints using the FpMV tool.

Fig. 13 Minutia points extracted from good-quality G008 (top) and G004 NIST SD27 latent fingerprints using different methods. a Pre-processed, enhanced, and extracted query fingerprints using FpMV application [37]. b Gallery fingerprint database constructed using COTS1 [34] matcher. c Gallery fingerprint database constructed using COTS2 [34] matcher. d Manually annotated NIST SD27 ground truth fingerprints corresponding to a

As seen in Fig. 13a, the query fingerprint contains many spurious and missing minutia points compared to its corresponding manually annotated ground truth minutiae image (Fig. 13d). To further test the robustness of the proposed algorithm, we randomly rotate the query fingerprint images of the NIST SD27 database between 0° and 360° and scale them by 10% (see Fig. 14a–c).

Fig. 14 NIST SD27 query fingerprint (G004) rotation and scaling operation. a Original fingerprint. b Fingerprint extracted using FpMV. c Fingerprint randomly rotated and scaled by 10%

The performance of state-of-the-art algorithms developed for NIST SD27 fingerprint matching is listed in Table 2. The highest Rank-1 matching accuracy of 85.66% was achieved by clustering-based cylinder codes [26]. This algorithm uses minutiae descriptors that are rotated up to a maximum of π/4 for fingerprint alignment, whereas our proposed RMT does not require any prior information about fingerprint pre-alignment. It can be observed from Fig. 16a that our proposed RMT algorithm achieves the maximum Rank-1 identification accuracy of 86.82%. Compared to the state-of-the-art cylinder codes [26], the RMT algorithm improved the Rank-1 accuracy by 1.16%. Further, the RMT algorithm showed improvements in Rank-1 identification accuracy of 12.82%, 20.42%, and 16.82% compared to the state-of-the-art NMD-2 [27], DBHT [23], and LM [31] algorithms, respectively.

Table 2 The Rank-1 identification accuracy reported by the proposed RMT algorithm and state-of-the-art algorithms on the NIST SD27 [36] public dataset using minutiae features

Further, we conduct experiments on the different quality categories of NIST SD27 latent fingerprints. Based on the quality of the latent fingerprints in NIST SD27, latent examiners classified them as "good", "bad", and "ugly". The classification is subjective, and examples of different quality NIST SD27 latent fingerprints are shown in Fig. 15.

Fig. 15 Different quality NIST SD27 latent fingerprints. a Good, b bad, and c ugly

The good-, bad-, and ugly-quality NIST SD27 fingerprints were tested individually and the results reported. For good-quality fingerprints, we obtained a Rank-1 identification rate of 100%, compared with 85.5% (LM), 83% (NMD), and 58% (NMD-2), as shown in Fig. 16b. For the bad category, we obtained a Rank-1 identification rate of 77.65%, compared with 68% (LM), 50% (NMD), and 35% (NMD-2), as shown in Fig. 16c. Finally, for the ugly category, we obtained a Rank-1 identification rate of 82.35%, compared with 44% (LM), 38% (NMD), and 10% (NMD-2), as seen in Fig. 16d.

Fig. 16 CMC curves displaying latent fingerprint-matching results of the proposed RMT algorithm vs different state-of-the-art algorithms on the NIST SD27 database. a All fingerprints, b good quality, c bad quality, and d ugly quality fingerprints

As discussed earlier, we restricted our experiments to the NIST SD27 gallery images. The experiments can be extended to a larger background gallery database created by adding NIST SD4, SD14, and synthetic fingerprints to the existing NIST SD27 background database.

Confusion Matrix

The confusion matrix is used to measure performance when the fingerprints belong to two different classes. The main diagonal elements correspond to correctly classified fingerprints (true-positive and true-negative instances), whereas the off-diagonal elements correspond to misclassified fingerprints (false-positive and false-negative instances).

We select 258 latent fingerprints from the NIST SD27 database and another 258 fingerprint images from the FVC2002 database to measure the performance on this two-class problem. Hence, a total of 516 query fingerprints from two different datasets is formed.

The confusion matrix shown in Table 3 is used to measure the performance of a two-class problem and yields four possible outcomes. In the fingerprint-matching problem, a correct match is treated as the "positive class" and an incorrect match as the "negative class". The main diagonal elements contain the correctly classified instances, True Positives (TP) and True Negatives (TN), while the off-diagonal elements contain the misclassified instances, False Positives (FP) and False Negatives (FN). TP counts fingerprints correctly predicted as positive, FP counts fingerprints wrongly predicted as positive, TN counts fingerprints correctly predicted as negative, and FN counts fingerprints wrongly predicted as negative.

Table 3 Confusion matrix which classifies NIST SD27 fingerprints belonging to two different classes as 4 possible results

A gallery fingerprint dataset with two distinct classes is formed to obtain a confusion matrix for our proposed RMT algorithm. A gallery dataset with 512 instances is created for the NIST SD27 query fingerprints by selecting 256 fingerprints from NIST SD27 and 256 fingerprints from FVC2002. The fingerprints from these two sources are selected randomly during processing. The actual and expected classifications are shown in Table 3. The TP, FP, TN, and FN values were obtained as 224, 34, 238, and 20 fingerprints, respectively. Various measures derived from the confusion matrix are shown in Table 4.
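Using the reported counts (TP = 224, FP = 34, TN = 238, FN = 20), the standard measures can be reproduced with a few lines of Python; the metric set below follows the usual definitions and is not necessarily the exact list given in Table 4.

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard measures derived from a two-class confusion matrix."""
    total = tp + fp + tn + fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # true positive rate / sensitivity
    return {
        "accuracy": (tp + tn) / total,
        "precision": precision,
        "recall": recall,
        "specificity": tn / (tn + fp),
        "f1": 2 * precision * recall / (precision + recall),
    }

print(confusion_metrics(tp=224, fp=34, tn=238, fn=20))
# accuracy ≈ 0.895, precision ≈ 0.868, recall ≈ 0.918,
# specificity ≈ 0.875, f1 ≈ 0.892
```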

Table 4 Different measures derived from the confusion matrix

Latent Fingerprints Unidentified by RMT

Our proposed RMT algorithm was able to identify the good-quality (G005) fingerprint, which was not identified by the state-of-the-art MCC SDK v1.4 [39] and M3gl [40] applications. Similarly, the ugly-quality (U213) fingerprint was identified by our RMT algorithm but not by the state-of-the-art algorithm [26]. The RMT algorithm was able to generate a sufficient number of discriminative minutiae arrangement vectors (MAV) and vote for the corresponding ground truth fingerprints during retrieval. However, our proposed RMT algorithm and the state-of-the-art algorithms [26, 27] were unable to identify the correct ground truth fingerprint for the bad-quality (B119) fingerprint, owing to the large number of spurious minutia points it contains. For our algorithm to work, the minutiae extraction framework should detect at least eight minutia points (n = 8) to generate the invariant feature vectors (MAV). The fingerprints identified and unidentified by our RMT algorithm are illustrated in Fig. 17.

Fig. 17 Identified and un-identified NIST SD27 latent fingerprints by RMT. a Good-quality G005 fingerprint un-identified by the MCC SDK v1.4 [39] and M3gl [40] and identified by RMT. b Ugly-quality U213 (overlapping) fingerprint un-identified by the state-of-art algorithm [26] and identified by RMT. c Bad-quality B119 fingerprint un-identified by our proposed RMT algorithm as well as the state-of-art algorithms [26, 27]

Conclusion and Future Scope

Unlike normal fingerprints, latent fingerprints present distinct challenges, primarily because they contain incomplete and low-quality ridge data. The core points required for fingerprint alignment may not always be present because of the partial capture area and incomplete minutiae information. For these reasons, latent fingerprint identification is considered a more challenging research field. We have proposed a geometric-transformation-invariant automatic latent fingerprint identification system that can identify a person without fingerprint pre-alignment. The proposed system can identify a person with missing or partial minutiae information. To achieve this, we proposed the "Ratio of Minutiae Triangles" (RMT) fingerprint registration and retrieval algorithms. The algorithm makes use of the existing nearest minutiae neighbors to generate unique invariant feature vectors. We used the ratio of triangles on minutiae neighborhoods and obtained discriminative feature vectors. These feature vectors were stored in the hash-table and later compared with the feature vectors of query fingerprints to perform matching. The proposed method does not use any fingerprint pre-alignment information and works well on partial fingerprints. To test the robustness of the algorithms, we compressed and rotated the FVC2004 query fingerprints and matched them against the gallery fingerprint database. To test performance on the forensic NIST SD27 database, we randomly rotated and scaled the query fingerprints and performed matching with the gallery database. The RMT algorithm produced the highest Rank-1 identification accuracies of 78.75% on the WSQ-compressed FVC2004 dataset and 86.82% on the NIST SD27 dataset. The reported experimental results showed that, compared to the state-of-the-art algorithms, the RMT algorithm produced improved results on all NIST SD27 fingerprints as well as on the good-, bad-, and ugly-quality fingerprints separately. The performance of the proposed RMT algorithm shows that it is robust against rotation and scale.

However, the proposed method performs minutiae feature extraction and matching separately, which affects the matching time. The system was tested with a gallery database of 3 K images, and the match time was moderate. For large fingerprint databases, the match time will increase with the existing separate extraction and matching steps. An improved end-to-end automatic latent fingerprint identification system with integrated automatic feature extraction and matching can be developed to overcome this limitation. Since the bad- and ugly-quality fingerprints contain very few minutia points, an improved enhancement algorithm could help to extract a sufficient number of genuine minutia points and improve matching performance.