
1 Introduction

Face recognition can be used for both verification and identification. In 1988, Kirby and Sirovich applied PCA, a standard linear algebra technique, to the face recognition problem. The work is regarded as a milestone because it showed that fewer than one hundred values are enough to code a normalized face image accurately.

For the past few years, a systematic investigation has been underway into the design of robust security/authentication mechanisms. With the advent of miniaturized imaging systems, the design process for security systems has improved. These devices are application specific and supply the biometric data to be incorporated into the design. Many researchers have shown that features extracted from face images aid in designing robust security/authentication systems. A successful face recognition system [1] was proposed using the Eigenface approach. This conventional method assumes frontal, high-contrast faces, but in real time faces may not be frontal, and intrinsic capture properties of the device (illumination variation) make detection difficult. Thus, in security and other computer vision applications, pose and illumination variation play a critical role, and the Eigenface approach does not address these problems satisfactorily.

In recent works [24], face recognition has been carried out with the PCA method with good success, but it degrades as the input space grows and also has difficulty discriminating between faces of similar persons, such as twins.

Face feature extraction suffers from

  (a) pose and expression variation,
  (b) resolution variation, and
  (c) illumination problems.

Methods designed using PCA [25] work well for either (a) or (b) but not for all three issues together. In biometric home security applications in particular, the above issues are unavoidable. Kernel Principal Component Analysis (KPCA) [5] is a nonlinear extension of PCA: the input data is first mapped into a new feature space using a nonlinear mapping (kernel), and PCA is then performed on the kernel-transformed data to extract feature vectors [2, 4]. The kernel mapping provides a mechanism to address pose and expression variation (Figs. 60.1, 60.2).
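As a brief illustration of this background idea only (it is not the method proposed in this paper), the sketch below applies scikit-learn's KernelPCA, i.e. a nonlinear kernel mapping followed by PCA, to flattened face vectors. The RBF kernel, its gamma value, and the random stand-in data are assumptions made purely for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Random data stands in for 40 flattened 128 x 128 face images.
rng = np.random.default_rng(0)
X = rng.random((40, 128 * 128))

# Nonlinear mapping via an RBF kernel, followed by PCA in the induced feature space.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=1e-4)
features = kpca.fit_transform(X)   # 40 x 20 kernel-PCA feature vectors
print(features.shape)
```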

Fig. 60.1
figure 1

Sample images of the ORL database with different poses

Fig. 60.2
figure 2

ORL face database

This paper concentrates on face recognition using the bit-planes of an image; Sect. 60.2 explains bit-plane slicing.

To aid the recognition process, a nearest neighbor classifier is used; this method assigns an image to the class whose features are closest to it with respect to the Euclidean norm.

The performance of the proposed algorithm is verified on databases available on the internet, namely the ORL face database [7] and the YALE database [11]. The ORL face database consists of 400 images of 40 individuals; each subject has 10 images in different poses. The YALE face database consists of 5760 images of ten individuals in nine poses, each pose under 64 illumination conditions. The paper is organised into seven sections. Section 60.2 is devoted to bit-plane slicing, Sect. 60.3 to principal component analysis, Sect. 60.4 to the nearest neighbor classifier, and Sect. 60.5 to the proposed method. Section 60.6 presents the experimental results and discussion, comparing the proposed method with PCA and showing accurate results on challenging databases, and in Sect. 60.7 conclusions are drawn.

2 Bit-Plane Slicing

Bit-plane slicing is a technique in which an image is sliced into different planes, ranging from bit level 0, the least significant bit (LSB), to bit level 7, the most significant bit (MSB). The input to the method is an 8-bit-per-pixel image. This is an important method in image processing.

The advantage of this method is that it reveals the relative importance of each bit of the image; it highlights the contribution made by specific bits. Only the four higher-order bit-planes contain visually significant data [7]; the lower-order bit-planes do not give much detail because they carry low-contrast information. Bit-plane 7 corresponds to the most significant bit of the original image. The running time of the bit-plane algorithm for one image ranges from 2 s to 1 min on a Pentium IV CPU using MATLAB code; execution time varies from one image to another depending on the size of the image (Figs. 60.3, 60.4).
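A minimal sketch of this decomposition, assuming an 8-bit grayscale image stored as a NumPy array (the synthetic image below is only a stand-in for a real face image), is:

```python
import numpy as np

def bit_planes(img: np.ndarray) -> np.ndarray:
    """Return the 8 bit-planes of an 8-bit grayscale image.

    img     : 2-D uint8 array (H x W)
    returns : 8 x H x W array of 0/1 planes, index 0 = LSB, index 7 = MSB
    """
    assert img.dtype == np.uint8
    return np.stack([(img >> b) & 1 for b in range(8)], axis=0)

# Example with a synthetic 128 x 128 image; real use would load a face image instead.
img = (np.arange(128 * 128, dtype=np.uint32) % 256).astype(np.uint8).reshape(128, 128)
planes = bit_planes(img)
print(planes.shape)     # (8, 128, 128)
print(planes[7].max())  # the MSB plane carries the most significant information
```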

Fig. 60.3
figure 3

8 bit-planes of an image

Fig. 60.4
figure 4

Bit plane decomposition

3 Principal Component Analysis

A 2-D facial image can be represented as a 1-D vector by concatenating each row (or column) into a long thin vector. Suppose we have M vectors of size N (= rows of image × columns of image) representing a set of sampled images, where the \( p_{j} \)'s represent the pixel values.

$$ x_{i} = [p_{1} \; p_{2} \; \cdots \; p_{N} ]^{T} , \quad i = 1, \ldots ,M $$
(60.1)

The images are mean centered by subtracting the mean image from each image vector. Let m represent the mean image.

$$ m = \frac{1}{M}\sum\limits_{i = 1}^{M} {x_{i} } $$
(60.2)

and let \( w_{i} \) be defined as the mean-centered image

$$ w_{i} = x_{i} - m $$
(60.3)

Our goal is to find a set of \( e_{i} \)'s which have the largest possible projection onto each of the \( w_{i} \)'s. We wish to find a set of M orthonormal vectors \( e_{i} \) for which the quantity

$$ \lambda_{i} = \frac{1}{M}\sum\limits_{n = 1}^{M} {(e_{i}^{T} w_{n} )^{2} } \, $$

is maximized with the orthonormality constraint

$$ e_{l}^{T} e_{k} = \delta_{lk} $$

It has been shown that the \( e_{i} \)'s and \( \lambda_{i} \)'s are given by the eigenvectors and eigenvalues of the covariance matrix

$$ C = WW^{T} $$
(60.4)

where W is a matrix composed of the column vectors \( w_{i} \) placed side by side. The eigenvectors corresponding to nonzero eigenvalues of the covariance matrix form an orthonormal basis for the subspace within which most image data can be represented with a small amount of error. The eigenvectors are sorted from high to low according to their corresponding eigenvalues. The eigenvector associated with the largest eigenvalue is the one that reflects the greatest variance in the image; conversely, the smallest eigenvalue is associated with the eigenvector that captures the least variance. The eigenvalues decrease roughly exponentially, so that about 90 % of the total variance is contained in the first 5–10 % of the dimensions.

A facial image can be projected onto M′ (\( < \) M) dimensions by computing

$$ \Omega = \left[ {v_{1} ,v_{2} , \ldots ,v_{M'} } \right]^{T} , \quad v_{i} = e_{i}^{T} (x - m) $$
(60.5)
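The computation described in this section can be sketched as follows (an illustrative Python version, not the authors' MATLAB implementation). It uses the common "snapshot" trick of diagonalizing the small M × M matrix \( W^{T} W \) rather than the N × N covariance \( WW^{T} \), which yields the same nonzero eigenvalues, and random data stands in for the flattened face images.

```python
import numpy as np

def pca_eigenfaces(X: np.ndarray, n_components: int):
    """Eigenface-style PCA.

    X            : M x N matrix, one flattened image per row
    n_components : number of eigenvectors M' to keep (M' < M)
    returns      : (mean image m, N x M' eigenvector matrix E, kept eigenvalues)
    """
    M, N = X.shape
    m = X.mean(axis=0)                  # mean image (Eq. 60.2)
    W = (X - m).T                       # N x M matrix of mean-centered columns w_i (Eq. 60.3)

    # Diagonalize the small M x M matrix W^T W instead of the N x N covariance W W^T.
    vals, vecs = np.linalg.eigh(W.T @ W / M)        # ascending order
    order = np.argsort(vals)[::-1][:n_components]   # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]

    E = W @ vecs                        # map back to image space: eigenvectors of W W^T
    E /= np.linalg.norm(E, axis=0)      # orthonormalize the columns e_i
    return m, E, vals

def project(x: np.ndarray, m: np.ndarray, E: np.ndarray) -> np.ndarray:
    """Project a flattened image onto the M' eigenfaces: v_i = e_i^T (x - m) (Eq. 60.5)."""
    return E.T @ (x - m)

# Toy usage with random data standing in for 200 flattened 128 x 128 face images.
rng = np.random.default_rng(0)
X_train = rng.random((200, 128 * 128))
m, E, vals = pca_eigenfaces(X_train, n_components=20)
omega = project(X_train[0], m, E)       # 20-dimensional feature vector
print(omega.shape)
```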

4 Euclidean Classifier

Several distance metrics can be defined in 2-D. The city-block distance between \( (x_{1}, y_{1}) \) and \( (x_{2}, y_{2}) \) is \( \left| {x_{1} - x_{2} } \right| + \left| {y_{1} - y_{2} } \right| \). The chessboard distance is \( \max (\left| {x_{1} - x_{2} } \right|,\left| {y_{1} - y_{2} } \right|) \). The quasi-Euclidean distance [18] is \( \left| {x_{1} - x_{2} } \right| + (\sqrt 2 - 1)\left| {y_{1} - y_{2} } \right| \) when \( \left| {x_{1} - x_{2} } \right| > \left| {y_{1} - y_{2} } \right| \), and \( (\sqrt 2 - 1)\left| {x_{1} - x_{2} } \right| + \left| {y_{1} - y_{2} } \right| \) otherwise. The Euclidean distance is \( \sqrt {(x_{1} - x_{2} )^{2} + (y_{1} - y_{2} )^{2} } \). In this work we use a nearest neighbor classifier, a minimum-distance classifier also called the Euclidean classifier, to recognize the image: the smaller the distance from the test feature vector to a training feature vector, the more likely the match is correct. If \( X_{i} \) and \( Y_{j} \) represent test and training image features, then

$$ \left\| {X_{i} - Y_{i} } \right\| \equiv \sqrt {\left( {X_{i} - Y_{i} } \right)^{T} \left( {X_{i} - Y_{i} } \right)} < \left\| {X_{i} - Y_{j} } \right\| $$
(60.6)

where \( \left\| \cdot \right\| \) represents the Euclidean norm.

Because of its simplicity, this classifier assigns a test image to the class whose features are closest to it with respect to the Euclidean norm.
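A minimal sketch of this nearest neighbor rule over such feature vectors (illustrative Python; the variable and function names are placeholders) is:

```python
import numpy as np

def nearest_neighbor(test_feat: np.ndarray,
                     train_feats: np.ndarray,
                     train_labels: np.ndarray):
    """Assign the label of the training feature vector closest in Euclidean norm.

    test_feat    : M'-dimensional feature vector of the test image
    train_feats  : K x M' matrix of training feature vectors
    train_labels : length-K array of class labels (subject identities)
    """
    dists = np.linalg.norm(train_feats - test_feat, axis=1)  # ||X_i - Y_j|| for all j
    j = np.argmin(dists)
    return train_labels[j], dists[j]

# Toy usage: 3 training vectors of 20 features each.
train_feats = np.array([[0.0] * 20, [1.0] * 20, [2.0] * 20])
train_labels = np.array([7, 12, 25])
label, d = nearest_neighbor(np.full(20, 0.9), train_feats, train_labels)
print(label, round(d, 3))   # nearest is the all-ones vector -> label 12
```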

5 Proposed Method

The proposed method starts by splitting the ORL database into two sections: a test database and a training database. The test database consists of 200 images covering all 40 subjects, and likewise the training database consists of 5 images for each of the 40 subjects. The algorithm is checked by picking an image from the test database and finding its nearest image in the training database. After splitting the database, the next step is decomposing each image into its 8 bit-planes. In previous works, dimensionality reduction is applied to the image itself; in our method, the dimensionality reduction step is applied after decomposing the image into its bit-planes. A sketch of this pipeline is given below.
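One possible reading of the pipeline is sketched below (illustrative Python, not the authors' code). The 5/5 split per subject, the use of the 7th bit-plane, the 128 × 128 image size and the choice of 20 PCA features follow the description in this paper; the random stand-in data and all names are assumptions made only so that the sketch runs on its own. The final line computes the recognition rate used in Sect. 60.6.

```python
import numpy as np

# Stand-in for the ORL database: 40 subjects x 10 images of 128 x 128 pixels.
# (Real use would load the actual face images; random data keeps the sketch self-contained.)
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(40, 10, 128, 128), dtype=np.uint8)

# 1. Split: first 5 images per subject for training, last 5 for testing (200 each).
train_imgs, test_imgs = images[:, :5], images[:, 5:]
labels = np.repeat(np.arange(40), 5)

# 2. Bit-plane decomposition: keep only the 7th (most significant) bit-plane.
def msb_plane(batch):
    return ((batch >> 7) & 1).reshape(batch.shape[0], -1).astype(float)

X_train = msb_plane(train_imgs.reshape(-1, 128, 128))   # 200 x 16384
X_test = msb_plane(test_imgs.reshape(-1, 128, 128))

# 3. PCA on the bit-plane vectors (snapshot method, 20 features).
m = X_train.mean(axis=0)
W = (X_train - m).T
vals, vecs = np.linalg.eigh(W.T @ W)
E = W @ vecs[:, np.argsort(vals)[::-1][:20]]
E /= np.linalg.norm(E, axis=0)
F_train = (X_train - m) @ E                              # 200 x 20 training features
F_test = (X_test - m) @ E

# 4. Nearest neighbor classification and recognition rate.
pred = labels[np.argmin(
    np.linalg.norm(F_test[:, None, :] - F_train[None, :, :], axis=2), axis=1)]
recognition_rate = np.mean(pred == labels)               # successful / total attempts
print(f"recognition rate: {recognition_rate:.2%}")
```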

6 Experimental Results

The results of the paper are evaluated using the recognition rate, calculated as the ratio of the number of successful attempts of the algorithm to the total number of attempts. All 200 test images are tested and the recognition rate is calculated. The final recognition rates obtained by varying the number of features are listed below. From the table it is observed that the recognition rate increases as the feature size increases. It is also concluded that the optimum feature size for this algorithm is 20, because increasing the feature size beyond 20 leaves the recognition rate unchanged.

7 Conclusion

This paper proposed a novel approach for face recognition that extracts features from constant-illumination, pose-variant images using bit-plane slicing and PCA. All the images are cropped from uneven sizes to 128 × 128 pixels. As a next step, the 7th bit-plane of each image is mapped from the input space to the feature space by the PCA method [4].

PCA is a powerful linear model for feature extraction, and the nearest-neighbor distance classifier enhances the recognition process.

Experiments were conducted on the ORL face database. Compared to the existing approach [2], a high recognition rate is reported in this paper with only a marginal increase in computational cost and time. The computed results are tabulated in Table 60.1 and shown graphically in Fig. 60.5. The experimental results in Table 60.1 show that as the feature space grows beyond 10, performance increases, but the cost of the algorithm increases at the same time. Applying the proposed method to deal with illumination variation remains future work (Fig. 60.6).

Table 60.1 Recognition rate versus number of eigen vectors
Fig. 60.5
figure 5

Proposed algorithm

Fig. 60.6
figure 6

Recognition rate versus features