
1 Introduction

Humans recognize people by their faces, and computers can be made to do the same automatically. Earlier, simple mathematical models were used for face recognition, but in recent decades it has grown into a discipline of its own. With the advent of engineering techniques in this area, face recognition systems have become one of the major attractions in computing. Such systems are used in two ways, viz. face verification and face identification. Face recognition has been studied extensively over the past 25 years across two-dimensional images, three-dimensional data and video, which has contributed immensely to research and development in the field. Still, factors such as varying pose, facial expression, poor lighting, a person’s age and occlusions degrade the performance of a face recognition system [1].

The field of face recognition systems (FRS) offers a host of prospects that can be exploited to advance research. Our study therefore tries to address the reasons that degrade the performance of an FRS and explores better techniques. The proposed system tries to reduce the effect of factors such as varied pose, poor lighting and differing expressions [2]. An FRS is simply pattern recognition applied to faces, used to decide whether a face is known or unknown. Because the face is a dynamic biometric, a large number of problems must be solved. Developers and researchers in image processing, human–computer interaction and AI have proposed many alternative remedies to reduce these problematic factors and make the FRS more robust and accurate.

In a broader sense, approaches to face recognition may be classified as either feature-based or holistic. The sets of features used in the two approaches are fundamentally different. In the holistic approach, recognition is carried out on the basis of features extracted globally from the face, whereas the feature-based approach uses local features. Holistic techniques capture the optimum variance of the pixel data of face images to identify a subject, while facial features such as the mouth, nose and eyes are used by the feature-based approach for identification [3].

2 FR Using Eigenfaces

The main inspiration for creating eigenfaces was face recognition, and the eigenface approach has an edge over other available techniques with respect to the effectiveness and speed of a face recognition system. Because the eigenface approach is essentially a dimension-reduction method, an FRS can represent data for many subjects with a very small amount of data. Reducing the image size has no significant impact on the performance of the FRS, but performance degrades significantly when the probe image differs strongly from the seen images [4].

For the recognition process, the images seen by the FRS are stored as collections of weights that describe each image in terms of the eigenfaces [3]. When a new input face is given to the FRS for identification, its weights are calculated by representing the image as a combination of eigenfaces, yielding the weight vector of the probe image. These weights are then compared against all weights stored in the database in order to identify the closest match. This nearest-neighbour rule is a very simple technique: the Euclidean distance between two weight vectors is computed, and the minimum distance determines the closest match [4].
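As a minimal sketch of this matching step (the array shapes, helper name and optional rejection threshold are our own illustrations, not the authors' implementation), the nearest-neighbour rule could look as follows:

```python
import numpy as np

def match_face(probe_weights, stored_weights, labels, threshold=None):
    """Return the label of the closest stored weight vector (nearest neighbour).

    probe_weights  : (M',) projection of the probe image onto the eigenfaces
    stored_weights : (K, M') one weight vector per enrolled image
    labels         : length-K sequence of person identifiers
    threshold      : optional distance above which the probe is rejected as unknown
    """
    distances = np.linalg.norm(stored_weights - probe_weights, axis=1)  # Euclidean distance to each stored face
    best = int(np.argmin(distances))                                    # index of the closest match
    if threshold is not None and distances[best] > threshold:
        return None  # probe treated as an unknown face
    return labels[best]
```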

Facial sets, their average face, the eigenface of each facial image and the matching eigenvalues are shown in the figures below. Every eigenface deviates from uniform grey where a facial feature differs across the training set; eigenfaces are essentially a map of the variation among faces [5, 6] (Figs. 1, 2, 3 and 4).

Fig. 1 Facial sets

Fig. 2 Matching eigenfaces

Fig. 3 Reconstruction of first image with eigenface

Fig. 4 Corresponding eigenvalues

Consider a facial image X as a two-dimensional N × N matrix of 8-bit intensity values. The image can equally be viewed as a vector of dimension N². For example, a common image of size 256 × 256 becomes a vector of dimension 65,536. A group of images is then mapped to a group of points in this space. Because facial images are similar in overall configuration, they are not distributed randomly in the image space and can be described by a relatively low-dimensional subspace. The basic aim of PCA is to find the vectors that best account for the distribution of facial images within the whole image space [2].

These vectors define the subspace of facial images, called the face space. Each vector has length N², describes an N × N facial image, and is a linear combination of the original facial images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original facial images, and because they have a face-like appearance, we call them eigenfaces; see, for example, the eigenfaces shown in Fig. 2.
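To make the construction concrete, the following is a rough sketch of how the eigenfaces and the weight (feature) vectors can be computed with PCA, using the small M × M matrix trick summarized in Sect. 4; the function and variable names are our own illustrations, not the paper's implementation:

```python
import numpy as np

def compute_eigenfaces(images, num_components):
    """Compute eigenfaces from a stack of face images.

    images         : (M, N, N) array of M training faces
    num_components : number of eigenfaces M' to keep
    Returns (mean_face, eigenfaces) where eigenfaces has shape (M', N*N).
    """
    M = images.shape[0]
    X = images.reshape(M, -1).astype(np.float64)   # each face becomes a vector of length N*N
    mean_face = X.mean(axis=0)
    A = X - mean_face                              # mean-centred data, shape (M, N*N)

    # Eigen-decompose the small M x M matrix L = A A^T instead of the
    # huge (N*N) x (N*N) covariance matrix.
    L = A @ A.T
    eigvals, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:num_components]

    # Map the small eigenvectors back to image space and normalise each eigenface.
    eigenfaces = (A.T @ eigvecs[:, order]).T       # shape (M', N*N)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean_face, eigenfaces

def project(image, mean_face, eigenfaces):
    """Represent a face as its vector of eigenface weights."""
    return eigenfaces @ (image.reshape(-1).astype(np.float64) - mean_face)
```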

3 Neural Network Simulation

Neural networks are trained to perform complex functions in various fields of application, such as speech, vision, control, identification, classification and pattern recognition [7]. We have built a separate neural network for every person present in the facial database. Once the eigenfaces are obtained, the feature vectors of the facial images in the database are calculated and provided as input for training each neural network.
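One possible way to set up such per-person networks is sketched below using scikit-learn's MLPClassifier; the paper does not name a library or training algorithm, so the one-vs-rest target encoding and the specific classifier here are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_person_networks(feature_vectors, labels, hidden_neurons=15):
    """Train one small network per person: target 1 for that person's vectors, 0 for everyone else's.

    feature_vectors : (K, M') eigenface weight vectors of all training images
    labels          : length-K array of person identifiers
    Returns a dict mapping person id -> trained MLPClassifier.
    """
    networks = {}
    for person in np.unique(labels):
        targets = (labels == person).astype(int)   # one-vs-rest target encoding
        net = MLPClassifier(hidden_layer_sizes=(hidden_neurons,), max_iter=2000)
        net.fit(feature_vectors, targets)
        networks[person] = net
    return networks
```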

The feature vectors of a given individual are used to train that individual's neural network, and they are also used in training the other networks. Whenever an input image is given to the facial recognition process, its feature vector is computed from the already calculated eigenfaces, yielding the descriptors of the input image [5]. These descriptors are fed to all the neural networks, each network is simulated with them, and the outputs of the networks are compared. If the maximum output exceeds the predefined threshold, the input facial image is considered to belong to the person whose network produced that maximum output.
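A hedged sketch of this decision rule is given below; each network is treated simply as a callable returning a score between 0 and 1, and the threshold value is illustrative:

```python
def identify(descriptor, person_networks, threshold=0.5):
    """Feed the probe descriptor to every per-person network and pick the winner.

    descriptor      : eigenface weight vector of the probe image
    person_networks : dict mapping person id -> callable returning a scalar score in [0, 1]
    threshold       : minimum winning score; below it the face is reported as unknown
    """
    scores = {person: float(net(descriptor)) for person, net in person_networks.items()}
    best_person = max(scores, key=scores.get)
    if scores[best_person] < threshold:
        return None, scores            # no network is confident enough -> unknown face
    return best_person, scores
```

With the scikit-learn networks from the previous sketch, each callable could be, for example, `lambda d, net=net: net.predict_proba(d.reshape(1, -1))[0, 1]`.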

4 Summary of Eigenface-Based Face Recognition

A summary of the eigenface-based face recognition approach is presented below; a minimal sketch combining these steps follows the list:

  • Create a facial image database of known persons.

  • Select a training set of M images covering every individual, with variation in facial expressions and lighting conditions.

  • Calculate the M × M matrix L and its eigenvectors and eigenvalues. Select the M’ eigenvectors with the highest corresponding eigenvalues.

  • Combine the normalized training-set images to produce the M’ eigenfaces and save the corresponding values.

  • Compute and save a feature vector for each individual in the database.

  • Build a neural network for every individual present in the facial image database.
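Tying these steps together, one possible enrollment sketch is given below; it reuses the compute_eigenfaces and project helpers from the sketch in Sect. 2, and the database layout (a dict of image stacks per person) is an assumption, not the paper's data format:

```python
import numpy as np

def enroll(database, num_components):
    """Build the eigenface model and one stored feature vector per individual.

    database : dict mapping person id -> (M_i, N, N) array of that person's training faces
    Returns (mean_face, eigenfaces, feature_vectors) where feature_vectors maps
    person id -> mean eigenface weight vector of that person's training images.
    """
    all_faces = np.concatenate(list(database.values()), axis=0)
    mean_face, eigenfaces = compute_eigenfaces(all_faces, num_components)

    feature_vectors = {
        person: np.mean([project(img, mean_face, eigenfaces) for img in faces], axis=0)
        for person, faces in database.items()
    }
    return mean_face, eigenfaces, feature_vectors
```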

5 Experimental Results

We have used the ORL face image database to test our method. The ORL database contains multiple facial images of every individual, captured under varying conditions. Below, we describe the details of the ORL database and the corresponding performance of the proposed FRS. A separate neural network is used for each individual in the database.

The ORL database also exhibits frequent variation in head position. All facial images were captured against a dark homogeneous background. For each of 40 different persons, 10 images were taken, varying with respect to

  • time of capture,

  • lighting conditions,

  • facial expression, viz. open/closed eyes and smiling/not smiling,

  • facial detail, viz. with/without glasses,

  • head pose, viz. tilt and rotation.

Figure 5 below shows the ORL database: the whole set of 40 persons with 10 different images of each individual. Since the number of neural networks equals the number of individuals in the facial image database, 40 neural networks were built, viz. one for every individual. Of the 10 face images per person, the first 4 were used to train the neural networks, which were then tested. After testing, the parameters of the networks are updated to minimize the squared-error function. The trained networks are then used for the facial recognition process.

Fig. 5 ORL face database
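The train/test split described above could be realized as in the following sketch, assuming the usual AT&T/ORL directory layout (s1/1.pgm … s40/10.pgm); the helper name is ours:

```python
from pathlib import Path

def split_orl(root, num_train=4):
    """Split each ORL subject's 10 images into training and test file lists.

    Assumes the usual AT&T/ORL layout: root/s1/1.pgm ... root/s40/10.pgm.
    Returns two dicts mapping subject directory name -> list of image paths.
    """
    train, test = {}, {}
    for subject_dir in sorted(Path(root).glob("s*")):
        images = sorted(subject_dir.glob("*.pgm"), key=lambda p: int(p.stem))
        train[subject_dir.name] = images[:num_train]   # first images train the network
        test[subject_dir.name] = images[num_train:]    # the rest are used for testing
    return train, test
```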

The mean face of the entire ORL database, the corresponding eigenvalues and the top 30 eigenfaces of the ORL database are shown in Figs. 6, 7 and 8, respectively.

Fig. 6 Mean face for ORL face database

Fig. 7 The corresponding eigenvalues

Fig. 8 The ORL eigenfaces

The overall accuracy of face recognition improves with the number of facial images used to train the neural networks. Table 1 below reports the recognition rates for varying numbers of training images.

Table 1 Recognition rates with respect to varying number of images to train and test networks
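For reference, the recognition rate reported in these tables is simply the percentage of correctly identified test images, e.g.:

```python
def recognition_rate(predicted_labels, true_labels):
    """Percentage of test images whose predicted identity matches the true identity."""
    correct = sum(p == t for p, t in zip(predicted_labels, true_labels))
    return 100.0 * correct / len(true_labels)
```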

The facial recognition rate is affected by the number of neurons in the hidden layer and by the number of eigenfaces used to describe a face, so we conducted tests with varying values of both parameters; the corresponding results are given in Table 2. The contrast of a facial image can be enhanced considerably by histogram equalization, which remaps the intensity values of the face image. Table 2 reports the recognition rates with and without histogram equalization.

Table 2 Recognition rate with varying numbers of neurons and eigenfaces (histogram equalization applied; five images used for training and testing)
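As one possible form of the histogram equalization preprocessing mentioned above (the paper does not specify its exact implementation), a plain NumPy version is sketched here:

```python
import numpy as np

def histogram_equalize(image):
    """Spread an 8-bit greyscale image's intensity values over the full 0-255 range."""
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
    lookup = np.ma.filled(cdf_scaled, 0).astype(np.uint8)
    return lookup[image]                               # map every pixel through the equalized CDF
```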

6 Conclusion

The proposed method was applied to the ORL database. We experimented with varying numbers of neurons in the hidden layer and varying numbers of eigenfaces in the neural networks. We find that, for the entire database with five images per individual used to train the networks, a hidden layer of 15 neurons and 50 eigenfaces are adequate to achieve acceptable recognition rates of about 93%.

The eigenface approach is quite susceptible to variation in head position: facial image variance is large for face images with significant head-position disparity. A comparative analysis of the proposed method and established approaches is given in Table 3.

Table 3 Comparative efficiency analysis on the ORL database