
1 Introduction

Face recognition is an application at the intersection of computer vision, pattern recognition, and biometrics. The face is an essential feature of an individual and carries major distinguishing traits. Face recognition extracts these traits, which are distinct in the images, in order to recognize a particular individual, making it easier to recognize and track a person by computer.

In recent years, most face recognition approaches have achieved better and faster results. This work provides an analysis for better face recognition by combining traditional feature extraction techniques, such as PCA and KPCA, with neural network architectures, such as GRNN and PNN, and reports results in terms of recognition rate [13]. The results are discussed and analysed to identify the better combination. In this regard, the literature study is as follows: subspace methods in face recognition are discussed in [4, 5], where principal components are obtained from face images so that they can be used for recognizing faces. A non-linear way of deriving principal components with a kernel is described in [6]. In [7] a class-specific method was introduced, and the scatter matrix was divided into a within-class scatter matrix and a between-class scatter matrix to further reduce the feature space.

An overview of subspace methods used with neural networks for face recognition follows. PCA for feature extraction and BPNN for classification were proposed in [8]. In [9], a two-level Haar wavelet transform decomposes the frontal face image into sub-bands; eigenface features are then extracted and used as the input to a BPNN classifier. In [1] Gabor filter outputs are used as the input to a feed-forward neural network (FFNN). The use of a back-propagation neural network (BPNN), the transformation of different inputs, and the comparison of an unknown face with the faces stored in the database are presented in [2]. In [3] the curvelet transform and linear discriminant analysis (LDA) are used to extract features, and a radial basis function network (RBFN) is used for classification.

These works pair various subspace methods with neural networks for face recognition, but the impact of the choice of neural network on each subspace method is not well discussed. Considering this, an attempt is made here to present an analysis of various single-layer neural networks combined with subspace methods.

The organization of the paper is as follows: in Sect. 2, the proposed feature extraction and classification techniques for face recognition are presented; in Sect. 3, experimental results are discussed and the analysis is briefed; finally, conclusions are drawn.

2 Proposed Method

In this section, an overview of the different subspace methods used in face recognition together with the different neural network architectures is presented.

2.1 Subspace Methods for Feature Extraction

2.1.1 Principal Component Analysis

The image space of a face is represented as f(a, b), a two-dimensional array. PCA as described in [4, 5] represents each training image as a single column vector; these vectors are data points that are mapped onto a smaller image space, the subspace.

Let the training set of faces be \(A = ({A_1},{A_2}, \ldots ,{A_m})\). The mean face of this training set is calculated as \(z = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {A_i}\), and each face differs from it by \({\phi_i} = {A_i} - z\).

Each image array is thus a vector in a high-dimensional space, which is then analysed by principal component analysis. The eigenvalues and eigenvectors of the high-dimensional space A are obtained from the following covariance matrix:

$$C = \frac{1}{m}\sum\limits_{i = 1}^{m} {\phi_i}\phi_i^T = L\,{L^T}$$
(1)

where the matrix \(L = [{\phi_1},{\phi_2}, \ldots ,{\phi_m}]\). The eigen-decomposition of the covariance matrix yields the eigenvectors and eigenvalues, and the projections onto the leading eigenvectors form the feature matrix. The feature matrix is then separated into individual face classes by using neural networks for classification.
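For concreteness, a minimal NumPy sketch of this eigenface-style feature extraction is given below. It is a sketch under assumptions, not the authors' implementation: the function name, the image layout (one flattened face per row), and the number of retained components are illustrative.

```python
import numpy as np

def pca_features(faces, num_components=50):
    """Project flattened face images onto their principal components.

    faces: array of shape (m, h*w), one flattened training image per row.
    Returns the feature matrix (m, num_components), the eigenfaces, and
    the mean face.
    """
    m = faces.shape[0]
    z = faces.mean(axis=0)                # mean face z
    phi = faces - z                       # difference faces phi_i = A_i - z
    # Eigen-decompose the small m x m Gram matrix instead of the huge
    # (h*w) x (h*w) covariance matrix C = L L^T of Eq. (1).
    gram = (phi @ phi.T) / m
    eigvals, eigvecs = np.linalg.eigh(gram)
    order = np.argsort(eigvals)[::-1][:num_components]
    # Map Gram eigenvectors back to image space and normalise them.
    eigenfaces = phi.T @ eigvecs[:, order]
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    features = phi @ eigenfaces           # feature matrix for the classifier
    return features, eigenfaces, z
```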

2.1.2 Kernel Principal Component Analysis

The covariance matrix ‘C’ calculated from the training image space ‘A’ is subjected to the kernel trick [6]. A kernel is a non-linear mapping function used for estimating the mapping of the covariance structure. The centered kernel matrix is computed as

$$\hat X = X - {1_N}X - X{1_N} + {1_N}X{1_N}$$
(2)

where \({({1_N})_{ij}} = \frac{1}{N}.\)

The kernel matrix ‘X’ is a square matrix of the same dimension as the covariance matrix ‘C’. The non-linear principal components obtained by applying Gaussian-kernel KPCA to each and every image collectively form a feature matrix. The obtained kernel feature matrix is then classified using neural networks.
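A minimal sketch of this step, assuming a Gaussian kernel and the centering of Eq. (2), might look as follows; the kernel width gamma, the function name, and the number of components are hypothetical choices.

```python
import numpy as np

def kpca_features(faces, num_components=50, gamma=1e-7):
    """Gaussian-kernel KPCA: centre the kernel matrix X as in Eq. (2)
    and project the training set onto its leading eigenvectors."""
    # Pairwise squared distances between the flattened face images.
    sq_dists = ((faces[:, None, :] - faces[None, :, :]) ** 2).sum(-1)
    X = np.exp(-gamma * sq_dists)        # Gaussian kernel matrix
    N = X.shape[0]
    one_n = np.full((N, N), 1.0 / N)     # (1_N)_{ij} = 1/N
    X_hat = X - one_n @ X - X @ one_n + one_n @ X @ one_n   # Eq. (2)
    eigvals, eigvecs = np.linalg.eigh(X_hat)
    order = np.argsort(eigvals)[::-1][:num_components]
    # Non-linear principal components form the kernel feature matrix.
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 1e-12))
```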

2.2 Neural Network Architectures for Classification

2.2.1 Single-Layer Neural Network

A single-layer neural network contains only one layer of synaptic weights, owing to its single hidden layer, yet it is able to attain substantial adaptation to the training data.

2.2.2 Generalized Regression Neural Network

A GRNN is a variation of the radial basis neural network, based on kernel regression networks [10, 11]. Unlike back-propagation networks, a GRNN does not require an iterative training procedure.

A GRNN consists of four layers: input layer, pattern layer, summation layer, and output layer. The feature matrices obtained from ‘C’ and ‘X’ (from PCA and KPCA, respectively) are fed to the GRNN for classification. The summation layer consists of two summation neurons, P and Q: P computes the weighted outputs of the pattern layer, whereas Q computes the unweighted outputs. The output layer merely divides P by Q, predicting the output \(K'(x)\) for an unknown input vector x as

$$K'(x) = \frac{\sum\nolimits_{i = 1}^{n} a_i \exp\left(-F(x, x_i)\right)}{\sum\nolimits_{i = 1}^{n} \exp\left(-F(x, x_i)\right)}$$
(3)
$$F(x, x_i) = \sum\limits_{j = 1}^{d} \left( \frac{x_j - x_{ij}}{\sigma} \right)^2.$$
(4)

where \(x_{ij}\) denotes the j-th component of the training pattern \(x_i\) and \(\sigma\) is the spread (weight) parameter. This network assigns the inputs to individual face classes. The trained network is then simulated with the testing feature matrix to obtain the recognition rate.
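A minimal sketch of this one-pass prediction, following Eqs. (3) and (4) directly, is given below; the one-hot target encoding and the default sigma are assumptions made for illustration.

```python
import numpy as np

def grnn_predict(train_x, train_y, test_x, sigma=0.2):
    """One-pass GRNN prediction following Eqs. (3)-(4).

    train_x: training feature matrix (n, d); train_y: one-hot class
    targets (n, c); sigma plays the role of the weight varied in Sect. 3.
    """
    preds = []
    for x in test_x:
        # Eq. (4): squared scaled distance to every training pattern.
        f = (((x - train_x) / sigma) ** 2).sum(axis=1)
        w = np.exp(-f)                   # pattern-layer activations
        p = w @ train_y                  # weighted summation neuron P
        q = w.sum()                      # unweighted summation neuron Q
        preds.append(p / q)              # Eq. (3): output K'(x) = P / Q
    return np.argmax(preds, axis=1)      # predicted face class per test image
```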

2.2.3 Probabilistic Neural Network

The PNN was first proposed in [12]; its architecture contains many interconnected processing units (neurons) stacked in successive layers. The feature matrix obtained by the subspace methods is fed into the input layer. The input layer does not perform any computation; it passes the values on to the pattern layer, which computes its output by

$${\phi_{ij}}(x) = \frac{1}{{{{(2\pi )}^\frac{d}{2}}{\sigma^d}}}{ \exp }\left[ { - \frac{{{{(x - {x_{ij}})}^T}(x - {x_{ij}})}}{{2{\sigma^2}}}} \right].$$
(5)

The summation layer then accumulates the pattern-layer outputs into the likelihood of pattern ‘x’ being classified into class \(C_j\). Bayes’s decision rule then yields

$$\hat j(x) = {\text{argmax}}\left\{ {{p_i}(x)} \right\},i = 1,2, \ldots ,m$$
(6)

where \(\hat j(x)\) denotes the classified output. The trained network is simulated with the feature matrix of the testing images to obtain the recognition result.
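The pattern and summation layers of Eqs. (5) and (6) can be sketched as follows; the integer label encoding, the equal class priors, and the smoothing value are illustrative assumptions.

```python
import numpy as np

def pnn_classify(train_x, train_labels, test_x, sigma=0.2):
    """PNN classification following Eqs. (5)-(6).

    train_x: training feature matrix (n, d); train_labels: integer
    class labels (n,); sigma is the smoothing parameter.
    """
    d = train_x.shape[1]
    norm = 1.0 / ((2 * np.pi) ** (d / 2) * sigma ** d)   # Eq. (5) constant
    classes = np.unique(train_labels)
    preds = []
    for x in test_x:
        sq = ((x - train_x) ** 2).sum(axis=1)
        phi = norm * np.exp(-sq / (2 * sigma ** 2))      # pattern layer, Eq. (5)
        # Summation layer: average phi over each class's pattern units;
        # Bayes's decision rule (Eq. 6) then picks the most likely class.
        p = np.array([phi[train_labels == c].mean() for c in classes])
        preds.append(classes[np.argmax(p)])
    return np.array(preds)
```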

3 Experimental Results and Performance Analysis

The proposed work is experimented on three databases: ORL, Yale, and FERET.

3.1 Subspace Methods and Generalized Regression Neural Network (GRNN)

To classify the training-image feature matrices obtained from the subspace methods PCA and KPCA, the GRNN classification algorithm is applied. To evaluate the recognition results, the experiment was conducted for different weights, varying the weight in steps of 0.2 from 0.2 to 1.4 so as to find the threshold value. In this section the experimental results of the subspace methods PCA and KPCA with GRNN are discussed [13–19].
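Under these settings, the weight sweep could be scripted as in the sketch below, which reuses the hypothetical grnn_predict helper from Sect. 2.2.2; the argument names are placeholders for the train/test feature matrices and labels.

```python
import numpy as np

def sweep_weights(train_x, train_y, test_x, test_labels):
    """Vary the GRNN weight (spread) from 0.2 to 1.4 in steps of 0.2 and
    report the recognition rate, i.e. the percentage of test faces
    assigned to the correct face class."""
    for sigma in np.arange(0.2, 1.4 + 1e-9, 0.2):
        pred = grnn_predict(train_x, train_y, test_x, sigma)
        rate = 100.0 * np.mean(pred == test_labels)
        print(f"weight={sigma:.1f}  recognition rate={rate:.2f}%")
```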

3.1.1 PCA and GRNN

The results of PCA and GRNN for ORL, Yale and FERET database are shown in Table 1.

Table 1 Recognition results for PCA and GRNN

3.1.2 KPCA and GRNN

The recognition results of KPCA and GRNN for ORL, Yale and FERET databases are shown in Table 2.

Table 2 Recognition results for KPCA and GRNN

From Tables 1 and 2 it is clear that KPCA with GRNN outperforms PCA with GRNN on all three databases. Increasing the number of training images increases the recognition rate, and the smaller the weight, the better the network performs.

3.2 Subspace Methods and Probabilistic Neural Network (PNN)

To obtain a better classification of face images, a PNN is applied with the subspace methods. In this section the recognition results of the subspace methods PCA and KPCA with PNN are discussed. Recognition rates were measured by varying the weight in steps of 0.2, i.e., from 0.2 to 1.4.

3.2.1 PCA and PNN

The recognition results of PCA and PNN for ORL, Yale and FERET databases are shown in Table 3.

Table 3 Recognition results for PCA and PNN

3.2.2 KPCA and PNN

The recognition results of KPCA and PNN for ORL, Yale and FERET databases are shown in Table 4.

Table 4 Recognition results for KPCA and PNN

Considering Tables 3 and 4, KPCA with PNN outperforms PCA with PNN. The conducted experiments clearly show that KPCA as a subspace method with a single-layer neural network outperforms PCA.

3.3 Performance Analysis

Analysing the experimental results, the combination of KPCA and GRNN yields better recognition rates of 90.83, 93.33, and 93.33 % for the ORL, Yale, and FERET databases, respectively, when compared to the other subspace method with GRNN. With the combination of subspace methods and PNN, the recognition results were the same as those of the subspace methods with GRNN; as such, the combination of KPCA and PNN exhibits similar performance.

GRNN is a highly parallel structure with a one-pass learning algorithm. It approximates the function between input and output vectors, estimating it directly from the training data. PNN likewise creates its network directly for pattern classification and implements a one-pass learning algorithm. The non-linear mapping nature of KPCA, combined with GRNN as a function approximation architecture and PNN as a classification network, both simple supervised neural networks, gives an added advantage for better classification and recognition.

Overall, the comparative study shows that the better recognition rates are obtained for the combinations of KPCA with GRNN and of KPCA with PNN. Hence, for achieving the highest recognition rate in a face recognition system, KPCA with GRNN and KPCA with PNN are the more suitable choices.

4 Conclusion

A successful face recognition system mainly depends on the feature extraction method and the pattern classifier. Extracting a representative feature set increases performance by reducing the dimension of the space. Neural networks are non-linear processors that perform learning and classification, and they also perform well in recognizing facial expressions. In this regard, for efficient face recognition, we performed an analysis pairing each of the subspace methods PCA and KPCA, as feature extraction methods, with the neural networks GRNN and PNN as classifiers. As a result, KPCA with GRNN and KPCA with PNN exhibit the better performance. The main purpose of the proposed analysis is to identify the appropriate neural network model and the better feature extraction method for achieving a better recognition rate, so that a system can work reliably, with good learning capacity and insensitivity to small or gradual changes in the face image. With this, the applicability and capability of the better-suited neural network models are described in terms of their recognition rates.