Abstract
Face recognition is considered one of the finest aspects of computer vision, and various feature extraction and classification techniques, including neural network architectures, have made it even more interesting. In this paper, an attempt towards developing a model for better feature representation/extraction and cascading it with a neural network classifier is presented. To make face recognition systems more useful for fast and reliable surveillance, an analysis is carried out that provides greater insight into the entire process and clarifies the various parameters affecting the system. Popular single-layer neural networks such as the generalized regression neural network (GRNN) and the probabilistic neural network (PNN) are used with different subspace methods to provide a detailed analysis. The experimental results in this work reveal that combining a subspace method with a neural network increases the robustness and speed of the face recognition system. Performance of the proposed model is analysed by conducting experiments on three benchmark databases: ORL, Yale, and FERET.
1 Introduction
Face recognition is a derivative application of computer vision, pattern recognition, and biometrics. The face is an essential feature of an individual and presents major distinguishing traits. Face recognition extracts these traits, which are distinct across images, in order to recognize a particular individual; this makes it easier to recognize and track an individual through computers.
In recent years, most face recognition approaches have achieved better and faster results. Here, an effort is made to provide an analysis for better face recognition, considering traditional feature extraction techniques such as PCA and KPCA together with neural network architectures such as GRNN and PNN; results are reported in terms of recognition rate [1–3]. The results are discussed and analysed to identify the better combination of architectures. In this regard, the literature is as follows. Subspace methods in face recognition are widely discussed: principal components are obtained from face images so that they can be used to recognize faces [4, 5]. A non-linear way of deriving principal components with a kernel was best described in [6]. In [7] a class-specific method was introduced, and the scatter matrix was divided into a within-class scatter matrix and a between-class scatter matrix to further reduce the feature space.
An overview of subspace methods used with neural networks for face recognition follows. PCA for feature extraction and a back-propagation neural network (BPNN) for classification were proposed in [8]. In [9], a two-level Haar wavelet transform decomposes the frontal face image into sub-bands; Eigenface features are then extracted and used as input to a BPNN classifier. In [1], Gabor filter responses are used as input to a feed-forward neural network (FFNN). The use of a BPNN, the transformation of different inputs, and the comparison of an unknown face with the faces in the database are presented in [2]. In [3], the curvelet transform and linear discriminant analysis (LDA) are used to extract features, and a radial basis function network (RBFN) is used for classification.
These works employ various subspace methods with neural networks for face recognition, but the impact of the choice of neural network on a given subspace method is not well discussed. Considering this, an attempt is made here to present an analysis of various single-layer neural networks combined with subspace methods.
The organization of the paper is as follows: in Sect. 2, the proposed feature extraction and the different classification techniques for face recognition are presented; in Sect. 3, experimental results are discussed and the analysis is briefed; finally, conclusions are drawn.
2 Proposed Method
In this section, an overview of different subspace methods used in face recognition with different Neural Network architectures is presented.
2.1 Subspace Methods for Feature Extraction
2.1.1 Principal Component Analysis
The image space of a face is represented as f(a, b), a two-dimensional array. PCA, as described in [4, 5], represents each image of the training set as a single column; these columns are data points that are mapped onto a smaller image space, the subspace.
Let the training set of faces be \(A = ({A_1},{A_2}, \ldots ,{A_m})\). The mean face of this training set is calculated as \(z = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {A_i}\), and each face differs from the mean by \({\phi_i} = {A_i} - z\).
The image array obtained is a high-dimensional vector space, which is then analysed by Principal Component Analysis. The eigenvalues and eigenvectors of this space are obtained from the covariance matrix

\(C = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\phi_i}\phi_i^T = L{L^T}\)

where the matrix \(L = [{\phi_1},{\phi_2}, \ldots ,{\phi_m}]\). The covariance matrix yields eigenvectors and their associated eigenvalues; the projections of the training faces onto the leading eigenvectors form the feature matrix. This feature matrix is then separated into individual face classes by a neural network classifier.
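The Eigenfaces-style feature extraction described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' exact code; the function name `pca_features` and the use of the small \(m \times m\) matrix \(L^T L\) in place of the full covariance (the standard Eigenfaces trick) are assumptions.

```python
import numpy as np

def pca_features(train_images, num_components):
    """Project flattened face images onto the top principal components.

    train_images: (m, d) array, one flattened face per row.
    Returns the projection matrix W (d, num_components) and the mean face.
    """
    mean_face = train_images.mean(axis=0)
    phi = train_images - mean_face                 # mean-subtracted faces
    # Eigen-decompose the small m x m matrix phi @ phi.T instead of the
    # d x d covariance matrix (classic Eigenfaces trick).
    small = phi @ phi.T
    vals, vecs = np.linalg.eigh(small)             # eigenvalues ascending
    order = np.argsort(vals)[::-1][:num_components]
    # Map the small eigenvectors back to image space and normalise.
    W = phi.T @ vecs[:, order]
    W /= np.linalg.norm(W, axis=0)
    return W, mean_face

# The feature matrix fed to the classifier would then be
# (train_images - mean_face) @ W, one row per face.
```

The columns of `W` are orthonormal eigenfaces; projecting a face onto them gives its low-dimensional feature vector.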
2.1.2 Kernel Principal Component Analysis
The covariance matrix ‘C’ calculated from the training image space ‘A’ is subjected to the kernel idea [6]. A kernel is a non-linear mapping function used to compute principal components in a high-dimensional feature space without evaluating the mapping explicitly. The centred kernel matrix is computed as

\(X = K - {1_N}K - K{1_N} + {1_N}K{1_N}\)

where \(K\) is the kernel matrix of the training images and \({({1_N})_{i,j}} = \frac{1}{N}\).
The kernel matrix ‘X’ is a square matrix whose order equals the number of training images. The non-linear principal components obtained using the Gaussian kernel of KPCA on every image collectively form a feature matrix. The obtained kernel feature matrix is further classified using neural networks.
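The Gaussian-kernel KPCA features described here can be sketched as below. This is a minimal NumPy sketch under assumptions: the function name `kpca_features` and the kernel width `gamma` are illustrative, and the eigenvector scaling by \(1/\sqrt{\lambda}\) follows the standard KPCA formulation of [6] rather than any implementation detail stated in the paper.

```python
import numpy as np

def kpca_features(train_images, num_components, gamma=1e-3):
    """Non-linear principal components via Gaussian-kernel KPCA.

    train_images: (N, d) array of flattened faces.
    Returns the (N, num_components) feature matrix of the training set.
    """
    # Gaussian (RBF) kernel matrix K_ij = exp(-gamma * ||a_i - a_j||^2)
    sq = np.sum(train_images ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * train_images @ train_images.T
    K = np.exp(-gamma * d2)
    # Centre the kernel matrix:  X = K - 1_N K - K 1_N + 1_N K 1_N
    N = K.shape[0]
    one_n = np.full((N, N), 1.0 / N)               # (1_N)_{i,j} = 1/N
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Leading eigenvectors, scaled so feature-space components are unit-norm.
    vals, vecs = np.linalg.eigh(K_c)
    order = np.argsort(vals)[::-1][:num_components]
    alphas = vecs[:, order] / np.sqrt(np.maximum(vals[order], 1e-12))
    return K_c @ alphas                            # projected training features
```

Each row of the returned matrix is the kernel feature vector of one training face, ready to be fed to the GRNN or PNN classifier.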
2.2 Neural Network Architectures for Classification
2.2.1 Single-Layer Neural Network
A single-layer network involves only one layer of synapses, owing to its single hidden layer. Despite this simplicity, single-layer neural networks can attain substantial adaptation.
2.2.2 Generalized Regression Neural Network
A GRNN is a variation of the radial basis neural network and is based on kernel regression networks [10, 11]. Unlike back-propagation networks, a GRNN does not require an iterative training procedure.
GRNN consists of four layers: input layer, pattern layer, summation layer, and output layer. The feature matrices ‘C’ and ‘X’ obtained from PCA and KPCA, respectively, are fed to the GRNN for classification. The summation layer consists of two summation neurons P and Q: P computes the weighted sum of the pattern-layer outputs, whereas Q computes their unweighted sum. The output layer merely divides P by Q, predicting via the pattern-layer activations \({K'_i}\) for an unknown input vector L as

\(\hat Y(L) = \frac{{\sum\nolimits_{i = 1}^{m} {Y_i}{K'_i}}}{{\sum\nolimits_{i = 1}^{m} {K'_i}}},\quad {K'_i} = \exp \left( { - \frac{{{{(L - {A_i})}^T}(L - {A_i})}}{{2{\sigma^2}}}} \right)\)

where \(\sigma\) is the spread (weight) of the network.
This neural network assigns inputs to individual face classes. The trained network is then simulated with the feature matrix of the test images to obtain the recognition rate.
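The one-pass GRNN classification above can be sketched as follows. This is an illustrative NumPy version, not the authors' MATLAB-toolbox implementation; the function name `grnn_predict` is hypothetical, and class labels are one-hot encoded so that the P/Q ratio acts as a per-class score.

```python
import numpy as np

def grnn_predict(train_feats, train_labels, test_feats, spread=0.2):
    """One-pass GRNN classifier: P (weighted) and Q (unweighted)
    summation neurons computed for every test vector at once."""
    classes = np.unique(train_labels)
    # One-hot targets: regressing these makes the GRNN act as a classifier.
    Y = (train_labels[:, None] == classes[None, :]).astype(float)
    # Squared distances from each test vector to each stored training vector.
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * spread ** 2))        # pattern-layer activations K'_i
    P = H @ Y                                  # weighted summation neuron
    Q = H.sum(axis=1, keepdims=True)           # unweighted summation neuron
    return classes[np.argmax(P / np.maximum(Q, 1e-300), axis=1)]
```

Sweeping `spread` over 0.2, 0.4, …, 1.4 reproduces the weight-variation experiment described in Sect. 3.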
2.2.3 Probabilistic Neural Network
PNN was first proposed in [12]; the architecture contains many interconnected processing units (neurons) stacked in successive layers. The feature matrix obtained by the subspace methods is fed into the input layer. The input layer does not perform any computation; it passes the values on to the pattern layer, which computes its output as

\({\phi_{ij}}(x) = \frac{1}{{{{(2\pi )}^{d/2}}{\sigma^d}}}\exp \left( { - \frac{{{{(x - {x_{ij}})}^T}(x - {x_{ij}})}}{{2{\sigma^2}}}} \right)\)

where \({x_{ij}}\) is the jth training pattern of class \({C_i}\) and \(\sigma\) is the smoothing (spread) parameter.
The successive summation layer then estimates the likelihood of pattern ‘x’ belonging to class \({C_j}\) by averaging the pattern-layer outputs of that class. The Bayes decision rule then gives

\(\hat j(x) = \arg \mathop {\max }\limits_j \left\{ {\frac{1}{{{N_j}}}\sum\nolimits_{i = 1}^{{N_j}} {{\phi_{ji}}(x)} } \right\}\)
where \(\hat j(x)\) denotes the classified output. This classified network is simulated with feature matrix of testing images to obtain recognition result.
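The PNN classification just described can be sketched as below. Again this is a minimal NumPy sketch rather than the authors' implementation; the function name `pnn_predict` is hypothetical, and the normalising constant of the Gaussian window is dropped since it does not change the arg max.

```python
import numpy as np

def pnn_predict(train_feats, train_labels, test_feats, spread=0.2):
    """Probabilistic neural network: pattern layer of Gaussian kernels on
    stored exemplars, per-class averaging in the summation layer, and the
    Bayes decision rule (arg max over class scores) in the output layer."""
    classes = np.unique(train_labels)
    # Pattern layer: Gaussian kernel between each test and training vector.
    d2 = ((test_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2 * spread ** 2))
    # Summation layer: average the pattern outputs belonging to each class,
    # giving a class-conditional density estimate f_j(x) per class.
    scores = np.stack(
        [H[:, train_labels == c].mean(axis=1) for c in classes], axis=1
    )
    return classes[np.argmax(scores, axis=1)]  # \hat{j}(x) = argmax_j f_j(x)
```

Note how close the PNN is to the GRNN: both are one-pass networks over the same Gaussian pattern layer, which is consistent with the near-identical recognition results reported in Sect. 3.3.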
3 Experimental Results and Performance Analysis
The proposed work is evaluated on three databases: ORL, Yale, and FERET.
3.1 Subspace Methods and Generalized Regression Neural Network (GRNN)
To classify the training-image feature matrix obtained from the subspace methods PCA and KPCA, a GRNN classification algorithm is applied. To evaluate the recognition results, the experiment was conducted for different weights, varying the weight in steps of 0.2 from 0.2 to 1.4 so as to find the threshold value. In this section, the experimental results of PCA and KPCA with GRNN are discussed [13–19].
3.1.1 PCA and GRNN
The results of PCA and GRNN for ORL, Yale and FERET database are shown in Table 1.
3.1.2 KPCA and GRNN
The recognition results of KPCA and GRNN for ORL, Yale and FERET databases are shown in Table 2.
Tables 1 and 2 clearly show that KPCA with GRNN outperforms PCA with GRNN on all three databases. Increasing the number of training images increases the recognition rate, and the network performs better with smaller weights.
3.2 Subspace Methods and Probabilistic Neural Network (PNN)
To obtain better classification of face images, a PNN is applied with the subspace methods. In this section, the recognition results of PCA and KPCA with PNN are discussed. Recognition rates were measured by varying the weight in steps of 0.2, i.e., from 0.2 to 1.4.
3.2.1 PCA and PNN
The recognition results of PCA and PNN for ORL, Yale and FERET databases are shown in Table 3.
3.2.2 KPCA and PNN
The recognition results of KPCA and PNN for ORL, Yale and FERET databases are shown in Table 4.
Tables 3 and 4 show that KPCA with PNN outperforms PCA with PNN. The experiments thus clearly show that KPCA as a subspace method combined with a single-layer neural network outperforms PCA.
3.3 Performance Analysis
Analysing the experimental results, the combination of KPCA and GRNN yields better recognition rates of 90.83, 93.33, and 93.33 % for the ORL, Yale, and FERET databases, respectively, compared to the other subspace method with GRNN. With PNN, the recognition results were the same as with GRNN; as such, the combination of KPCA and PNN exhibits similar performance.
GRNN is a highly parallel structure with a one-pass learning algorithm: it approximates an arbitrary function between input and output vectors, estimating it directly from the training data. PNN likewise creates its network directly for pattern classification and implements a one-pass learning algorithm. The non-linear mapping of KPCA, combined with GRNN (a function approximation architecture) or PNN (a classification network), both of which are simple supervised neural networks, gives an added advantage for better classification and recognition.
Moreover, the comparative study shows that the better recognition rates are obtained for the combinations of KPCA with GRNN and KPCA with PNN. Hence, for achieving the highest recognition rate in a face recognition system, KPCA with GRNN and KPCA with PNN are the more suitable choices.
4 Conclusion
A successful face recognition system mainly depends on the feature extraction method and the pattern classifier. Extracting a representative feature set increases performance efficiency by reducing the dimension of the space. Neural networks are non-linear processors that perform learning and classification, and they also perform well in recognizing facial expressions. In this regard, for efficient face recognition, we performed an analysis with each of the subspace methods PCA and KPCA as feature extraction methods and the neural networks GRNN and PNN as classifiers. As a result, KPCA with GRNN and KPCA with PNN exhibit better performance. The main purpose of the proposed analysis is to identify the appropriate neural network model, together with the better feature extraction method, for achieving a better recognition rate, so that the system can work reliably with good learning capacity and insensitivity to small or gradual changes in the face image. The applicability and capability of the more suitable neural network models are thereby described in terms of their recognition rates.
References
Kumar, K.: Artificial neural network based face detection using gabor feature extraction. Int. J. Adv. Technol. Eng. Res. (IJATER) 2, 220–225 (2012)
Revathy, N., Guhan, T.: Face recognition system using back propagation artificial neural network. Int. J. Adv. Eng. Technol. (IJAET) 3, 321–324 (2012)
Radha, V., Nallammal, N.: Neural network based face recognition using RBFN classifier. In: Proceedings of the World Congress on Engineering and Computer Science (WCECS), vol. 1 (2011)
Turk, M.A., Pentland, A.P.: Eigenfaces for recognition. J. Cognit. Neurosci. 3, 71–86 (1991)
Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1991)
Schölkopf, B., Smola, A., Müller, K.R.: Kernel principal component analysis. In: Advances in Kernel Methods: Support Vector Learning, pp. 327–352 (1999)
Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. Fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
Chaudhary, U., Mubarak, C.M., Rehman, A., Riyaz, A., Mazhar, S.: Face recognition using PCA-BPNN algorithm. Int. J. Modern Eng. Res. (IJMER) 2, 1366–1370 (2012)
Daramola, S.A., Odeghe, O.S.: Efficient face recognition system using artificial neural network. Int. J. Comput. Appl. 41(21), 12–15 (2012)
Specht, D.F.: A general regression neural network. IEEE Trans. Neural Networks 2(6), 568–576 (1991)
Demuth, H.B., Beale, M.: Neural network toolbox for use with MATLAB Users Guide Version 4. Mathworks (2002)
Specht, D.F.: Probabilistic neural networks. IEEE Int. Conf. Neural Networks, pp. 525–532 (1990)
Beale, H., Hagan, M.T., Demuth, H.B.: Neural network toolbox users guide 2013a, Mathworks (2013)
http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
Sahoolizadeh, A.H., Heidari, B.Z., Dehghani, C.H.: A new face recognition method using PCA, LDA and neural network. Int. J. Electr. Electron. Eng. 2(5), 6–12 (2008)
Esbati, H., Shirazi, J.: Face recognition with PCA and KPCA using Elman neural network and SVM. Int. J. Electr. Electron. Eng. 5(10), 135–140 (2011)
Hoang, L.T.: Applying artificial neural networks for face recognition. In: Advances in Artificial Neural Systems, vol. 2011. Hindawi Publishing Corporation (2011)
© 2016 Springer India
Sudhanva, E.V., Manjunath Aradhya, V.N., Naveena, C. (2016). Analysis of Different Neural Network Architectures in Face Recognition System. In: Satapathy, S., Raju, K., Mandal, J., Bhateja, V. (eds) Proceedings of the Second International Conference on Computer and Communication Technologies. Advances in Intelligent Systems and Computing, vol 380. Springer, New Delhi. https://doi.org/10.1007/978-81-322-2523-2_45
Print ISBN: 978-81-322-2522-5
Online ISBN: 978-81-322-2523-2