1 Introduction

Face detection is the technique used to detect any human face in a given image or video, and most human–computer interaction systems use it as a pre-processing step. Currently, face detection techniques fall into four categories. The first is knowledge-based techniques, which rely on predefined rules about human facial features but have trouble handling varied head poses. The second is feature-based techniques, which detect a human face from features such as the eyebrows or eyes; this approach generally gives good detection results, but it depends on the head pose and background variation and may fail for images of poor resolution [1, 2]. Most of the time, face detection is restricted to data obtained from still images or short video clips. Such approaches do not inherently use 3D information, so they are susceptible to changes in pose and viewpoint. This problem can be overcome by generating 3D models from still-image or video data, which can then be used to test any probe image. However, most of these algorithms produce a low-quality model even when the input is of high quality, so this approach is not well suited for face recognition, because the generated face model is far from perfect [3]. To obtain a better face detection algorithm, this article uses a cascade classifier built from existing Haar-like features, with the AdaBoost algorithm selecting features together with a new structural feature that behaves more like a face recognition feature. These new features form strong classifiers that remove non-face regions while preserving the discriminative power of the original algorithm by recognizing the texture features of ordinary human faces [4,5,6].

2 Related Work

Dey et al. [7] proposed the Roberts edge detector combined with a set of arithmetic operations to detect face edges between an initial frame and the nearest frames. In the pre-processing step, undesired edges and noise are removed with Gaussian filtering. The authors described a novel algorithm applied to face images/videos to identify the face region; however, the proposed algorithm does not work on complex backgrounds or under different lighting conditions [7]. Lin et al. [8] described a multi-scale histogram algorithm for face recognition. This strategy gives precision rates equivalent to LBP-based algorithms and is about ten times faster than previous techniques [8]. Liuliu et al. [9] proposed a novel skin-color algorithm to identify the human face. The algorithm demonstrates a high detection rate as well as a low false-detection rate, and it provides an outline of real face detection for a limited range of angles [9]. Chihaoui et al. [10] described a combination of a neural network, skin color, and Gabor filter techniques to identify the human face. The main idea of this methodology is to apply skin-color segmentation before the Gabor filters and the neural network in order to reduce computation time. The proposed algorithm still has a high computational cost, and its face detection rate is poor [10]. Muttu et al. [11] proposed an algorithm for identifying parts of the human face as well as the complete face; the Viola–Jones technique was used to detect the face [11]. Kulkarni et al. [12] proposed a Weighted Least Squares filtering algorithm to identify human facial expressions. Gabor and Log-Gabor functions are used for the facial components, and an SVM classifier is used to recognize the face [12].

In the proposed work, there are five stages of processing:

  1. Image acquisition

  2. Face detection

  3. Pre-processing

  4. Feature extraction

  5. Classification

  1. Image acquisition: The input is taken either from a webcam or from a still image stored in a folder. If the webcam is selected, an image frame is returned using the getsnapshot function; otherwise, an image is browsed from one of the databases [2, 3]. MathWorks MATLAB provides live video capture as a library function that can be used to call a camera or webcam, and this library is used in the application [13]. A minimal acquisition sketch is given after this list.

  2. Face detection: After image acquisition, image enhancement is applied to the face image to compensate for low illumination, poor contrast, and long distance, which improves the visibility of the image. The fdlibmex detector is then used to detect the face [2]; this also addresses the issue of dull lighting. Finally, the detected face is cropped and saved into the database. The algorithm is able to detect a human face with up to 60° of rotation [14].

  3. Pre-processing: After face detection, the image is resized to a standard size. Noisy pixels are removed from the image by median filtering. With the help of the genetic algorithm combined with fuzzy C-means (GAFCM), pixels whose values are nearly the same across two neighbouring class levels can be identified [6, 15, 16].

  4. Feature extraction: In this stage, the filtered image is used for feature extraction. The features of the image are extracted using the DWT method, which decomposes the image into four sub-bands by low-pass and high-pass filtering along the rows and columns. The decomposition is lossless, which means that the quality of the image is not degraded [15].

  5. Classification: In the classification stage, the data are classified using an RBF-SVM. It enables pattern recognition between two classes by searching for a decision surface that has the maximum distance from the closest points in the training set, which are called support vectors.
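For the image acquisition stage referred to in item 1, a minimal MATLAB sketch is given below. It assumes the Image Acquisition Toolbox with a 'winvideo' webcam adaptor; the folder and file name used in the still-image branch are hypothetical placeholders, not the paper's actual database paths.

```matlab
% Sketch of the image acquisition stage (assumptions: Image Acquisition
% Toolbox with a 'winvideo' webcam adaptor; placeholder file path).
useWebcam = true;                     % choose webcam or stored image
if useWebcam
    vid = videoinput('winvideo', 1);  % open the first available webcam
    img = getsnapshot(vid);           % return one image frame
    delete(vid);                      % release the device
else
    img = imread(fullfile('database', 'subject01.jpg'));  % browse a stored image
end
```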

3 Proposed Methodology

3.1 Contrast Enhancement

Contrast enhancement is one of the most important stages in face recognition and detection; it improves the face image before face detection is performed, as shown in Fig. 1. Face images exhibit some variation compared with other inputs. In the proposed work, a contrast stretching process is used, which adjusts the brightness of the objects in the image so that they become clearly visible. In a high-contrast image, the grey-level values span the full range; therefore, remapping the grey values of a low-contrast image turns it into a high-contrast image whose grey values span the full range [17].

Fig. 1 Block diagram of the proposed methodology
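As a rough illustration of the contrast stretching step, the sketch below assumes the Image Processing Toolbox; the file name is a placeholder. The imadjust/stretchlim call remaps the grey values of a low-contrast image onto the full range, and the manual min–max stretch shows the same idea without toolbox functions.

```matlab
% Sketch of contrast stretching (assumptions: Image Processing Toolbox,
% grayscale input, placeholder file name; stretchlim saturates 1% of the
% darkest and brightest pixels by default).
Ig = imread('input_face_gray.png');          % grayscale face image (placeholder)
Ic = imadjust(Ig, stretchlim(Ig), []);       % remap grey values to the full range

% Equivalent manual min-max stretch on a double image:
D  = im2double(Ig);
Is = (D - min(D(:))) / (max(D(:)) - min(D(:)));   % grey values now span [0, 1]
```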

3.2 Face Detector

The fdlibmex face detector [6, 18] is based on the Nilsson algorithm and is provided as a compiled library containing two methods: the Successive Mean Quantization Transform (SMQT) and the Sparse Network of Winnows (SNoW). The split-up SNoW makes good use of the result of the original SNoW classifier and achieves rapid detection by forming a cascade of classifiers. The proposed methodology is three-fold: first, local SMQT features are used for illumination- and sensor-insensitive object detection; second, the split-up SNoW preserves the speed-up of the original classifier; finally, the combination of these features and the classifier performs detection of frontal faces and other face poses [19].
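To give an intuition for the local SMQT features, the sketch below computes a level-1 SMQT on 3x3 patches: each pixel is compared with the patch mean, which removes local gain and bias caused by illumination and the sensor, and the binary pattern is packed into a code per pixel. This is only an illustration of the idea, not Nilsson's full SMQT/SNoW implementation used by fdlibmex; the file name is a placeholder.

```matlab
% Rough sketch of level-1 local SMQT features on 3x3 patches
% (illustration only; placeholder grayscale image file).
I = im2double(imread('gray_face.png'));
[r, c] = size(I);
code = zeros(r - 2, c - 2);              % one 9-bit code per interior pixel
w = 2.^(0:8);                            % bit weights for the 9 patch pixels
for y = 2:r - 1
    for x = 2:c - 1
        p = I(y-1:y+1, x-1:x+1);         % local 3x3 patch
        b = p(:)' > mean(p(:));          % level-1 SMQT: compare to the patch mean
        code(y-1, x-1) = sum(w(b));      % pack the binary pattern into a code
    end
end
```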

3.3 Median Filtering

The noise found in the image is reduced using a median filter [14, 20]. The median is the centre value of the sorted pixel values in the neighbourhood, as shown in Fig. 2. In median filtering, each pixel value is replaced with the median of its neighbouring values, i.e.

Fig. 2 Median value calculation in a given pixel matrix

  • Neighbourhood values: 05, 12, 19, 28, 31, 49, 69, 75, 87

  • Median Value: 31

Noise is any undesirable or unwanted signal. The electrical systems used for the storage, transmission, and processing of data are the main contributors of noise at each of these steps. Most of the time, we want to improve the quality of an image that has been corrupted by noise.
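The median filtering step can be sketched as follows, assuming a 3x3 window and the Image Processing Toolbox function medfilt2; the neighbourhood from Fig. 2 is reproduced to show that its median is indeed 31. The variable croppedFace is an assumed grayscale face image from the detection stage.

```matlab
% Sketch of median filtering (assumptions: 3x3 window, Image Processing
% Toolbox, grayscale input from the detection stage).
nbhd = [5 12 19; 28 31 49; 69 75 87];    % the neighbourhood shown in Fig. 2
m    = median(nbhd(:));                  % returns 31, the centre of the sorted values

If = medfilt2(croppedFace, [3 3]);       % replace each pixel with its 3x3 median
```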

3.4 Genetic Algorithm (GA)

GA is a population-based optimization technique that acts as an alternative to traditional optimization methods [21, 22]. A candidate solution to the problem is represented by a chromosome in the population. GA produces a series of populations for successive generations, using crossover and mutation as the principal search operators, and the aim of the algorithm is to optimize a given objective (fitness) function. An encoding mechanism maps each feasible solution to a chromosome, and the objective function evaluates how well each chromosome solves the problem [23].
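A minimal generational GA is sketched below. The binary chromosomes and the toy fitness function are placeholders; in the proposed GAFCM pipeline the fitness would instead score a candidate clustering of the pre-processed face image.

```matlab
% Sketch of a generational GA (assumptions: binary chromosomes, toy fitness).
popSize = 20; nGenes = 16; nGen = 50;
pc = 0.8; pm = 0.02;                               % crossover / mutation rates
fitnessFcn = @(x) sum(x, 2);                       % toy fitness: count of ones
pop = rand(popSize, nGenes) > 0.5;                 % random initial population

for g = 1:nGen
    f = fitnessFcn(pop);                           % evaluate fitness of each chromosome
    % Tournament selection: pick the fitter of two randomly chosen parents.
    idxA = randi(popSize, popSize, 1);
    idxB = randi(popSize, popSize, 1);
    better = f(idxB) > f(idxA);
    sel = idxA;  sel(better) = idxB(better);
    parents = pop(sel, :);
    % Single-point crossover between consecutive parent pairs.
    child = parents;
    for k = 1:2:popSize - 1
        if rand < pc
            cp = randi(nGenes - 1);
            child(k,   cp+1:end) = parents(k+1, cp+1:end);
            child(k+1, cp+1:end) = parents(k,   cp+1:end);
        end
    end
    % Bit-flip mutation.
    flip = rand(popSize, nGenes) < pm;
    child(flip) = ~child(flip);
    pop = child;
end
[bestFit, bestIdx] = max(fitnessFcn(pop));         % best solution in the final population
best = pop(bestIdx, :);
```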

3.5 Fuzzy C-Means (FCM) Methods

In the proposed method, the calculation of the membership function is carried out using GA together with the FCM method [24]. The main procedures of the FCM algorithm are the calculation of the membership degrees and the update of the cluster centres. The membership degree indicates the extent to which each data point belongs to each cluster, and the cluster centre values are updated using this information.

$$u_{ik} = \frac{1}{\sum\nolimits_{j = 1}^{c} \left( \dfrac{d_{ik}}{d_{jk}} \right)^{2/(m - 1)}} \quad (1 \le i \le c),\ (1 \le k \le n)$$
(1)

u_{ik} denotes the degree of membership of x_k in the i-th cluster, d_{ik} is the distance between data point x_k and the centre of the i-th cluster, c is the number of clusters, and n is the number of data points. The degree of fuzziness is controlled by the parameter m > 1. Thus, every data pattern has a degree of membership in every cluster.
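A direct implementation of the membership update in Eq. (1) can be sketched as follows, assuming Euclidean distances between the data points and the current cluster centres; the variable names are illustrative only.

```matlab
% Sketch of the FCM membership update of Eq. (1) (assumptions: Euclidean
% distances; X is n-by-p data, C is c-by-p cluster centres, m > 1).
function U = fcm_membership(X, C, m)
    n = size(X, 1);  c = size(C, 1);
    D = zeros(c, n);
    for i = 1:c
        D(i, :) = sqrt(sum((X - C(i, :)).^2, 2))';       % d_ik for cluster i
    end
    D = max(D, eps);                                      % avoid division by zero
    U = zeros(c, n);
    for i = 1:c
        U(i, :) = 1 ./ sum((D(i, :) ./ D).^(2/(m - 1)), 1);   % Eq. (1)
    end
end
```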

3.6 Feature Extraction Using Haar Wavelet Features

Two-dimensional Haar wavelets are frequently used for feature extraction in digital image processing. 2-D Haar wavelets can take several forms, each used to detect edges along a different direction, and several of these forms are used to compute the feature vector [25]. The wavelet feature vector of a sub-image is calculated by convolving these wavelets with the cropped sub-image, so a number of Haar feature values are obtained for each sub-image. This feature vector is passed to the SVM for classification [10, 26,27,28].
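The feature extraction and classification stages can be sketched as follows. The single-level dwt2 decomposition and the fitcsvm call assume the Wavelet Toolbox and the Statistics and Machine Learning Toolbox; the variables faces and labels (cropped grayscale face images and the two expression classes of one expression pair) are assumptions, not the paper's actual data structures.

```matlab
% Sketch of Haar-wavelet feature extraction followed by RBF-SVM training
% (assumptions: dwt2 from the Wavelet Toolbox, fitcsvm from the Statistics
% and Machine Learning Toolbox, assumed 'faces' and 'labels' variables).
nImg  = numel(faces);
feats = zeros(nImg, 4 * 32 * 32);                 % 4 sub-bands of a 64x64 image
for k = 1:nImg
    I = im2double(imresize(faces{k}, [64 64]));   % resize to a standard size
    [cA, cH, cV, cD] = dwt2(I, 'haar');           % LL, LH, HL and HH sub-bands
    feats(k, :) = [cA(:); cH(:); cV(:); cD(:)]';  % concatenate into one feature vector
end
svmModel = fitcsvm(feats, labels, 'KernelFunction', 'rbf', ...
                   'KernelScale', 'auto', 'Standardize', true);
```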


4 Result Analysis

MATLAB is a powerful tool through which the required operations can be analysed and visualized. The proposed method was evaluated in two different steps:

  • Face detection

  • Face recognition

For face detection, the proposed methodology was evaluated on five standard databases, namely JAFFE [29], FEI [30,31,32,33,34], LFW-a [35,36,37], CMU + MIT [38,39,40,41,42], and our own database (Figs. 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12). For face recognition, the proposed methodology was evaluated on the Cohn–Kanade (CK+) standard database [43] and the JAFFE database [29], as shown in Figs. 13, 14, 15, 16, 17 and 18. Five-fold cross-validation was used to evaluate the performance. Finally, an SVM classifier was trained on the concatenated Haar wavelet features, which capture the discriminative characteristics between a given pair of expression classes.
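The 5-fold cross-validation can be sketched as below, reusing the assumed feats and labels variables from the sketch in Sect. 3.6.

```matlab
% Sketch of 5-fold cross-validation (assumptions: 'feats' and 'labels' as
% in the Sect. 3.6 sketch; Statistics and Machine Learning Toolbox).
cv       = cvpartition(labels, 'KFold', 5);            % stratified 5-fold split
cvModel  = fitcsvm(feats, labels, 'KernelFunction', 'rbf', ...
                   'KernelScale', 'auto', 'CVPartition', cv);
accuracy = 1 - kfoldLoss(cvModel);                      % mean accuracy over the folds
```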

Fig. 3 Framework for the proposed system on the JAFFE dataset. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 4 Framework for the proposed system on the JAFFE dataset. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 5 Framework for the proposed system on the FEI dataset at 45°. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 6 Framework for the proposed system on the FEI dataset at 60°. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 7 Framework for the proposed system on the LFW-a dataset at 60°. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 8 Framework for the proposed system on the LFW-a dataset at 45°. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 9 Framework for the proposed system on the CMU + MIT dataset at 45°. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 10 Framework for the proposed system on the CMU + MIT dataset. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 11 Framework for the proposed system on our own dataset. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 12 Framework for the proposed system on our own dataset. a Actual image, b improved image, c detected face, d cropped face image, e noise-removed image, f face segmentation using GA

Fig. 13 Result on the CK+ dataset using the GUI

Fig. 14 Accuracy comparison of various existing methods with the proposed algorithm on the CK+ dataset

Fig. 15 Overall performance of the proposed methodology on the CK+ database

Fig. 16 Overall performance measure comparison between existing algorithms and the proposed algorithm

Fig. 17 Accuracy comparison of various existing methods with the proposed algorithm on the JAFFE dataset

Fig. 18 Overall performance of the proposed methodology on the JAFFE database

Table 1 Accuracy comparison of various existing methodologies with the proposed methodology on the CK+ database
Table 2 Overall performance of the proposed methodology on the CK+ database
Table 3 Overall performance measures of various existing methodologies compared with the proposed methodology on the CK+ database
Table 4 Accuracy comparison of various existing methodologies with the proposed methodology on the JAFFE database
Table 5 Overall performance of the proposed methodology on the JAFFE database

The average precision (P) and recall (R) of the system are calculated using the formulas below:

$$\begin{aligned} P &= \frac{\text{Number of relevant images matched}}{\text{Total number of images matched}} \\ R &= \frac{\text{Number of relevant images matched}}{\text{Number of images in the database}} \end{aligned}$$

The matching rate (MR) of the system is calculated as:

$$MR = 100 \times \frac{TP + TN}{N}$$

where TP is the number of true positives, TN is the number of true negatives, and N is the size of the dataset.
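These measures can be computed directly from the match counts, as in the sketch below; all counts are purely hypothetical placeholders, not results reported in this paper.

```matlab
% Sketch of the evaluation measures (all counts are hypothetical placeholders).
relevantMatched = 180;  totalMatched = 200;  dbSize = 250;
TP = 120;  TN = 110;  N = 250;

P  = relevantMatched / totalMatched;    % average precision
R  = relevantMatched / dbSize;          % recall
MR = 100 * (TP + TN) / N;               % matching rate in percent
```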

4.1 Face Recognition Result

Figure 16 shows that the proposed methodology performs better than the existing models on the CK+ dataset. When the best results are selected, the highest classification rate of the proposed method reaches approximately 100%. The experimental results in Tables 1, 2 and 3 likewise show that the proposed methodology outperforms the existing models on the CK+ dataset.

Similarly, Fig. 18 shows that the proposed methodology performs better on the JAFFE dataset than the existing methodologies, and the experimental results in Tables 4 and 5 confirm this.

5 Conclusion

In this paper, we proposed an algorithm for face detection and recognition. In this methodology, the Haar Wavelet Transform (HWT) with GAFCM is used for feature extraction and an RBF-SVM classifier is used for classification. The RBF-SVM classifier uses attributes such as the left eye, right eye, mouth, face, and nose to identify the various expressions of a particular face. We obtained better results on both datasets, 95.6% and 88.08% on the CK+ and JAFFE datasets respectively, and the average time is also lower than that of the existing methodologies. For face detection, the detection rate reached approximately 100% on all the standard databases. In future work, we will apply this algorithm for feature extraction and classification on further datasets.