Abstract
A fingerprint classification system groups fingerprints according to their characteristics and therefore helps in matching a fingerprint against a large database of fingerprints. Data preparation is one of the most important tasks in research work: unless we build a quality database, we cannot ensure quality output from the model developed to achieve the objectives of this research. Hence, in this chapter, we give our full attention to the preparation of a primary database for fingerprint recognition. Additionally, we use fingerprint databases commonly available in the public domain to validate our models.
1 Introduction
Fingerprint analysis is usually carried out on the many standard databases available online [1]. This work focuses on creating a fingerprint database from collected real-time fingerprint images, called the real-time database. The fingerprints are collected from a group of students of Silicon Institute of Technology, Bhubaneswar, through a fingerprint sensor, covering the five basic classes of the Henry system. The fingerprint images are captured via fingerprint sensors available in the market. A collection of 50 fingerprints spanning the five basic classes is stored, and the database is finally created in terms of extracted features for further processing tasks. In this work, arch and tented arch (class 4 and class 5) are merged into a single class, class 4, to avoid misclassification, so the database consists of four classes of fingerprints. A subset of the samples is shown in Fig. 1.
In Fig. 1a–f, the different classes of fingerprints are shown with their respective class values. Scrutinizing these patterns at various levels discloses two types of characteristics: global features and local features. The features of each individual fingerprint are collected in the form of feature vectors.
2 Feature Extraction
A ‘feature’ is a minute physical detail present in the fingerprint in the form of ridges and furrows, which defines the category of the fingerprint in terms of its class. Each fingerprint differs from every other in these minute details, which must be extracted using different methods to identify the fingerprint [2, 3]. The process of drawing out such hidden attributes from a 2-dimensional image is called feature extraction.
A fingerprint can be described by quantitative measures associated with its pattern flow, or oriented texture, called the features of the fingerprint [4]. Analysis of oriented texture is an important aspect of research in practical applications. Farrokhnia et al. [5] described a Gabor filter bank method that describes the global representation of texture through image decomposition. Daugman [6] presented a translation- and scale-invariant texture representation for the human iris using Gabor wavelet coefficients, but it is not considered efficient here, as it is not rotation-invariant. Prabhakar et al. [7] presented a method for fingerprint texture representation that extracts a reference point; the region surrounding the reference point is then divided into cells [8]. The information contained in each cell is extracted using spatial frequency channels and arranged in an ordered fashion to form the features. The steps involved in the Gabor filter bank (spatial frequency) representation of features, shown in Fig. 2 [9], are:
- Image acquisition (fingerprint capture),
- Normalization and segmentation of the fingerprint image pixels,
- Orientation field estimation from the normalized image,
- Core point estimation from the orientation field,
- Circular division of the selected portion of the image,
- Gabor filtering of the sectors in different angular directions, and
- Collection of the features over all angles to form the feature vector or fingercode [10].
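As a concrete illustration, the listed steps can be sketched in miniature with NumPy. This is not the authors' implementation: the kernel size, frequency, and sigma values below are illustrative assumptions, and a single per-orientation statistic stands in for the full sector-wise feature code.

```python
import numpy as np

def even_gabor(theta, freq=0.1, sigma=3.0, size=9):
    """Even-symmetric Gabor kernel oriented at `theta` radians."""
    h = size // 2
    y, x = np.mgrid[-h:h + 1, -h:h + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the oscillation
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def convolve2d(img, k):
    """Naive 'valid' 2-D correlation; enough for a small demonstration."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def toy_fingercode(img, angles=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Miniature version of the listed steps: normalize the image, filter it
    at four orientations, and collect one statistic per orientation."""
    img = (img - img.mean()) / (img.std() + 1e-8)            # normalization
    return np.array([convolve2d(img, even_gabor(t)).std() for t in angles])

# A synthetic image whose ridges run vertically (intensity varies along x):
img = np.tile(np.sin(2 * np.pi * 0.1 * np.arange(32)), (32, 1))
code = toy_fingercode(img)   # strongest response at the matching orientation
```

On this synthetic ridge pattern, the 0-degree channel responds most strongly because its oscillation direction matches the ridge frequency.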
During feature extraction, we followed these steps in this research. A generic technique is proposed for representing fingerprint texture: one (or more) invariant reference points of the texture are located from its orientation field [7, 11]. The region around the core point, or point of reference, is divided into six sectors, and each sector is sub-divided into six subsectors, forming 36 subsectors. Each subsector is then analyzed for the information present in its spatial frequency channels. The features collected from the subsectors are used as the representation of the fingerprint class [12]. These feature representations capture the local information of the tessellated sectors and reflect the invariant characteristics of the global relationships present in local fingerprint patterns.
The following steps are used in the proposed feature extraction algorithm.
First, the core point (also called the center or reference point) is located, and taking this point as the center, the surrounding region is spatially tessellated into sectors.
Then, the tessellated image is converted into component images that describe the ridge structure, and from these component images the feature vector is generated by combining the sectors.
2.1 Feature Code Generation
The steps of the feature code generation algorithm are as follows.
Select a reference point or core point and perform spatial tessellation (circular in our case) of the region around it, taking it as the center; this region is called the region of interest (ROI). Divide the ROI into a set of component sub-images, called sectors, so as to preserve global ridge and furrow structures. Calculate the standard deviation within each sector to create the code [13, 14].
2.2 Core Point Location
The orientation field is smoothened over local neighborhoods; the smoothened field is denoted O′. Initialize E, an image containing the sine components of the smoothened orientation field.
For every pixel E(i, j), integrate the pixel intensities over regions R1 and R2, as shown in Fig. 4, and assign the corresponding pixel the value of their difference.
The maximum value of this difference map is then located and defined as the core point (Fig. 3).
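A minimal sketch of this core-point search, assuming the orientation field has already been smoothened. The shapes of R1 and R2 here, the half-windows above and below each pixel, are an assumed stand-in for the regions of Fig. 4, and the window size is illustrative.

```python
import numpy as np

def core_point(orientation, win=5):
    """Core point as the maximum of the R1-minus-R2 sine-difference map.
    R1/R2 are taken as the half-windows above and below each pixel."""
    E = np.sin(orientation)          # sine component of the smoothened field
    H, W = E.shape
    h = win // 2
    diff = np.full((H, W), -np.inf)  # border pixels are never selected
    for i in range(h, H - h):
        for j in range(h, W - h):
            r1 = E[i - h:i, j - h:j + h + 1].sum()          # region above
            r2 = E[i + 1:i + h + 1, j - h:j + h + 1].sum()  # region below
            diff[i, j] = r1 - r2
    return np.unravel_index(np.argmax(diff), diff.shape)

# Synthetic field: orientations of pi/2 above row 10 and 0 below it,
# so the sine component drops sharply at the transition row.
field = np.zeros((20, 20))
field[:10] = np.pi / 2
core = core_point(field)
```

On this synthetic field the detector lands on the transition row, where the sine component falls from 1 to 0.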
2.3 Tessellation
Tessellation is the process of dividing the circularly extracted region of the orientation field into equal-area sectors, as shown in Fig. 5. It is mainly used to make the fingerprint features rotation-invariant [7, 15]. The tessellated sectors are obtained using the following definitions:
\({S}_{i}=\left\{\left(x,y\right) : b\left({T}_{i}+1\right)\le r<b\left({T}_{i}+2\right),\ {\theta }_{i}\le \theta <{\theta }_{i}+\frac{2\pi }{k}\right\}\)

where b is the width of each circular band, k is the number of angular slices per band, and

\({T}_{i}=i\ \mathrm{div}\ k\),

\({\theta }_{i}=\left(i\ \mathrm{mod}\ k\right)\left(\frac{2\pi }{k}\right)\),

\(r=\sqrt{{\left(x-{x}_{c}\right)}^{2}+{\left(y-{y}_{c}\right)}^{2}}\), \(\theta ={\mathrm{tan}}^{-1}\left(\frac{y-{y}_{c}}{x-{x}_{c}}\right)\)

with \(\left({x}_{c},{y}_{c}\right)\) the detected core point.
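The mapping from a pixel to its sector can be sketched directly from these definitions. The band width b and the choice of k = 6 angular slices below are illustrative parameters, not values fixed by the text.

```python
import numpy as np

def sector_index(x, y, xc, yc, b=20.0, k=6):
    """Sector number of pixel (x, y) around core (xc, yc): the band index
    comes from the radius r, the angular index from theta."""
    r = np.hypot(x - xc, y - yc)
    theta = np.arctan2(y - yc, x - xc) % (2 * np.pi)    # theta in [0, 2*pi)
    band = int(r // b)                   # which concentric band
    ang = int(theta // (2 * np.pi / k))  # which angular slice
    return band * k + ang                # sector number i

s0 = sector_index(10, 0, 0, 0)    # inner band, angle 0
s7 = sector_index(0, 30, 0, 0)    # second band, second angular slice
```

Because the angular index resets every k sectors, the band and slice can be recovered as i div k and i mod k, matching the definitions above.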
2.4 Filtering
Gabor filtering is performed using spatial frequency channels; the even-symmetric Gabor filter can be represented by the following expression:

\(G\left(x,y;f,\theta \right)=\mathrm{exp}\left\{-\frac{1}{2}\left[\frac{{{x}^{\prime}}^{2}}{{\delta }_{x}^{2}}+\frac{{{y}^{\prime}}^{2}}{{\delta }_{y}^{2}}\right]\right\}\mathrm{cos}\left(2\pi f{x}^{\prime}\right)\)

where \({x}^{\prime}=x\,\mathrm{sin}\,\theta +y\,\mathrm{cos}\,\theta\) and \({y}^{\prime}=x\,\mathrm{cos}\,\theta -y\,\mathrm{sin}\,\theta\), f is the ridge frequency, θ is the filter orientation, and \({\delta }_{x}\), \({\delta }_{y}\) are the space constants of the Gaussian envelope.
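A direct NumPy transcription of an even-symmetric Gabor kernel can make the roles of f, θ, and the space constants concrete. The parameter values and kernel size here are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(theta, f=0.1, dx=4.0, dy=4.0, size=17):
    """Even-symmetric Gabor kernel: a Gaussian envelope (space constants
    dx, dy) modulated by a cosine of frequency f along the rotated axis."""
    h = size // 2
    y, x = np.mgrid[-h:h + 1, -h:h + 1]
    xp = x * np.sin(theta) + y * np.cos(theta)   # rotated coordinates
    yp = x * np.cos(theta) - y * np.sin(theta)
    return (np.exp(-0.5 * (xp**2 / dx**2 + yp**2 / dy**2))
            * np.cos(2 * np.pi * f * xp))

k = gabor_kernel(np.pi / 4)
```

Since both the envelope and the cosine are even functions of the rotated coordinates, the kernel is symmetric under a 180-degree flip, which is what "even-symmetric" refers to.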
Figure 6 shows the output of the Gabor filter for different angular orientations. For the pixels in sector \({S}_{i}\), the normalized image is defined as

\({N}_{i}\left(x,y\right)=\begin{cases}{M}_{0}+\sqrt{\frac{{V}_{0}{\left(I\left(x,y\right)-{M}_{i}\right)}^{2}}{{V}_{i}}}, & \text{if } I\left(x,y\right)>{M}_{i}\\ {M}_{0}-\sqrt{\frac{{V}_{0}{\left(I\left(x,y\right)-{M}_{i}\right)}^{2}}{{V}_{i}}}, & \text{otherwise}\end{cases}\)
where \({M}_{0}\) and \({V}_{0}\) are the desired mean and variance, respectively [16, 17]. In this work, we have taken both \({M}_{0}\) and \({V}_{0}\) to be 100. \({M}_{i}\) and \({V}_{i}\), the mean and variance of sector \({S}_{i}\), are determined as follows:

\({M}_{i}=\frac{1}{{n}_{i}}\sum_{k=1}^{{n}_{i}}{I}_{k}\left(x,y\right),\qquad {V}_{i}=\frac{1}{{n}_{i}}\sum_{k=1}^{{n}_{i}}{\left({I}_{k}\left(x,y\right)-{M}_{i}\right)}^{2}\)
where \({I}_{k}\left(x,y\right)\) is the kth pixel in sector \({S}_{i}\) and \({n}_{i}\) is the number of pixels in \({S}_{i}\).
The normalization is done sector-wise. Normalized and filtered images are shown in Fig. 7.
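The sector-wise normalization can be sketched as follows; the code assumes the sector is not perfectly uniform (so its variance is nonzero).

```python
import numpy as np

def normalize_sector(pixels, m0=100.0, v0=100.0):
    """Normalize a sector's pixels to desired mean m0 and variance v0:
    shift each pixel away from m0 by a deviation scaled to variance v0."""
    mi = pixels.mean()    # sector mean M_i
    vi = pixels.var()     # sector variance V_i (must be > 0)
    dev = np.sqrt(v0 * (pixels - mi) ** 2 / vi)
    return np.where(pixels > mi, m0 + dev, m0 - dev)

sector = np.array([10.0, 20.0, 30.0, 40.0])
norm = normalize_sector(sector)   # mean 100 and variance 100 afterwards
```

The transform preserves each pixel's side of the sector mean while forcing the sector statistics to exactly (m0, v0), which is what makes subsequent Gabor responses comparable across sectors.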
2.5 Feature Vector
Each sector \(S_i\) is further sub-divided into six subsectors, so the six sectors yield 36 subsectors. The core point region is treated as one additional subsector and the region outside the circular ROI as another, giving 38 subsectors in total, for which component images in four angular directions (0°, 45°, 90°, 135°) are computed. The feature vector or feature code consists of the standard deviations \(F_{i\theta }\) of the component images, given for sector \(S_i\) and orientation θ as

\({F}_{i\theta }=\sqrt{\frac{1}{{k}_{i}}\sum_{\left(x,y\right)\in {S}_{i}}{\left({C}_{i\theta }\left(x,y\right)-{M}_{i\theta }\right)}^{2}}\)
where \({k}_{i}\) is the number of pixels in \({S}_{i}\) and \({M}_{i\theta }\) is the mean pixel intensity of the component image \({C}_{i\theta }\left(x,y\right)\) over \({S}_{i}\). Since the innermost circle is one sector and the region outside the ROI another, the 38 sectors and 4 orientations yield the 152-dimensional feature vector shown in Fig. 8.
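The assembly of the 152-dimensional code can be sketched as below. The label map and the component images here are illustrative placeholders standing in for the tessellation and Gabor filtering stages.

```python
import numpy as np

def feature_code(component_images, sector_labels, n_sectors=38):
    """One standard deviation per (orientation, sector) pair.
    `component_images` maps each angle to its filtered image;
    `sector_labels` assigns every pixel a sector id in [0, n_sectors)."""
    feats = []
    for angle, c in component_images.items():
        for s in range(n_sectors):
            pix = c[sector_labels == s]
            feats.append(pix.std() if pix.size else 0.0)   # F_{i,theta}
    return np.array(feats)

labels = (np.arange(1600) % 38).reshape(40, 40)             # placeholder sectors
image = (np.arange(1600).reshape(40, 40) % 97).astype(float)
components = {a: image for a in (0, 45, 90, 135)}           # placeholder responses
code = feature_code(components, labels)                     # 38 sectors x 4 angles
```

With 38 sectors and 4 orientations the result has exactly 152 entries, matching the dimensionality stated in the text.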
3 Statistical Analysis of the Dataset
Statistical analysis is the process of collecting, exploring, and presenting large amounts of data to discover patterns and trends [6, 18]. It is used in research and in government and industrial applications for decision-making. For example, the mean gives the average value, while the standard deviation describes the spread of the values: a high standard deviation means the data are widely spread, while a low one means the values lie close to the average. The margin of error is commonly taken as twice the standard deviation. Researchers typically treat values lying more than two or three standard deviations from the mean as important [3, 19]. The statistical analysis of the fingerprints is given in Table 1.
Table 1 shows the mean and standard deviation values collected from the feature vectors for the different angular orientations and, finally, for the total feature vector.
Mean and standard deviation plots are used to check the variation of the mean across different groups of data. Mean plots can also be used with ungrouped data to determine whether the mean changes over time. From Fig. 9, we can see that the mean plots of the individual angular orientations are almost identical, whereas the total feature vector combining all angles changes continuously. Mean plots are used alongside standard deviation plots: the mean plot checks for shifts in location, while the standard deviation plot checks for shifts in scale. Figure 10 likewise reflects the change in variance with the shift in scale for the feature vectors.
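The statistics discussed above, per-column mean, standard deviation, and the two-standard-deviation rule for important values, can be computed in a few lines; the feature matrix below is an illustrative stand-in for the extracted feature vectors.

```python
import numpy as np

def summarize(features):
    """Column-wise mean and standard deviation of a feature matrix, plus a
    mask of entries lying more than two standard deviations from the mean
    (the rule for 'important values' mentioned in the text)."""
    mu = features.mean(axis=0)
    sd = features.std(axis=0)
    outliers = np.abs(features - mu) > 2 * sd
    return mu, sd, outliers

# Five ordinary samples and one extreme sample in a single feature column:
data = np.array([[1.0], [1.0], [1.0], [1.0], [1.0], [100.0]])
mu, sd, outliers = summarize(data)
```

Only the extreme sample exceeds the two-standard-deviation margin, illustrating how the rule separates spread-defining values from those close to the average.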
4 Conclusion
In this chapter, we performed feature extraction on real-time fingerprint images using the Gabor filter bank method. Each fingerprint is divided into six sectors, each of which is again divided into six equal divisions, forming 36 subsectors. Together with the core point region as one subsector and the outside region as another, these form 38 subsectors, and each subsector is passed through the Gabor filter bank at four angles: 0°, 45°, 90°, and 135°. Each angle yields the features extracted in that direction, and combining the four angular features forms the feature vector or fingercode. The generated fingercode is then used for classification with different classification algorithms.
References
Almajmaie L, Ucan ON, Bayat O (2019) Fingerprint recognition system based on modified multi-connect architecture (MMCA). Cogn Syst Res 58:107–113
Li R, Li C-T, Guan Yu (2018) Inference of a compact representation of sensor fingerprint for source camera identification. Pattern Recogn 74:556–567
Karu K, Jain AK (1996) Fingerprint Classification. Pattern Recogn 29(3):389–404
Janecek A, Gansterer W, Demel M, Ecker G (2008) On the relationship between feature selection and classification accuracy. In New challenges for feature selection in data mining and knowledge discovery, pp 90–105
Landau PM (2012) Multiuser health monitoring using biometric identification. U.S. Patent Application 13/281,233
Rehman YA, Po LM, Liu M (2018) LiveNet: improving features generalization for face liveness detection using convolution neural networks. Expert Syst Appl 108:159–169
Prabhakar S, Pankanti S, Jain AK (2003) Biometric recognition: Security and privacy concerns. IEEE Secur Priv 1(2):33–42
Chouhan SS, Kaul A, Singh UP (2019) Image segmentation using computational intelligence techniques. Arch Computat Methods Eng 26(3): 533–596
Hsu HC, Wang MS (2012) Detection of copy-move forgery image using Gabor descriptor. In: Anti-counterfeiting, security, and identification, pp 1–4. IEEE
Dash SK, Dash AP, Dehuri S, Cho SB (2013) Feature selection for designing a novel differential evolution trained radial basis function network for classification. Int J Appl Metaheuristic Comput (IJAMC) 4(1): 32–49
Liu W, Chen Y, Wan F (2008) Fingerprint classification by ridgeline and singular point analysis. In: 2008 Congress on image and signal processing (CISP ’08), vol 4, pp 594–598
Yu Y, Zuo C, Qian K (2018) Sixth International Conference on Optical and Photonic Engineering (icOPEN 2018). SPIE
Meng Q, Lu X, Zhang B, Gu Y, Ren G, Huang X (2018) Research on the ROI registration algorithm of the cardiac CT image time series. Biomed Signal Process Control 40:71–82
Karthi G, Ezhilarasan M (2019) Multimodal biometrics authentication using multiple matching algorithm. In Cognitive informatics and soft computing, pp 361–369. Springer, Singapore
Cai H, Hu Z, Chen Z, Zhu D (2018) A driving fingerprint map method of driving characteristic representation for driver identification. IEEE Access 6:71012–71019
Kumar PS, Valarmathy S (2012) Development of a novel algorithm for SVMBDT fingerprint classifier based on clustering approach. In 2012 International conference on advances in engineering, science and management (ICAESM), pp 256–261
Tarjoman M, Zarei S (2008) Automatic fingerprint classification using graph theory. In: Proceedings of the world academy of science, engineering and technology, vol 30, pp 831–835
Kamijo M (1993) Classifying fingerprint images using neural network: deriving the classification state. In: IEEE international conference on neural networks, pp 1932–1937
Enright MK, Morley M, Sheehan KM (2002) Items by design: the impact of systematic feature variation on item statistical characteristics. Appl Measur Educ 15(1):49–74
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Mishra, A., Dehuri, S., Mallick, P.K. (2022). Feature Extraction and Feature Sheet Preparation of Real-Time Fingerprints for Classification Application. In: Mallick, P.K., Bhoi, A.K., González-Briones, A., Pattnaik, P.K. (eds) Electronic Systems and Intelligent Computing. Lecture Notes in Electrical Engineering, vol 860. Springer, Singapore. https://doi.org/10.1007/978-981-16-9488-2_7
Print ISBN: 978-981-16-9487-5
Online ISBN: 978-981-16-9488-2