
1 Introduction

Natural face images are captured under diverse environmental conditions, which cause various illumination problems [1, 2]. Illumination may affect the image entirely or only partially. Partial feature changes are difficult to analyze and repair, which increases the criticality of recognition time. Some illumination-adjustment methods exist, including gamma correction and contrast-adjustment approaches. However, if the database images also suffer from illumination variation, repairing each DB image during every recognition pass is both difficult and performance degrading, and any repair must be made relative to the contrast observations of the input image. Naturally captured images also suffer from improper or discriminative capturing [3–6]. Such capturing results in partial capture [7], pose variation, loss of facial information [8], etc. Subject-specific variation and data loss require a series of filter operations [9, 10] to repair the image, and even then the similarity score is affected. A classified facial-mapping process can be applied to represent the entire face-recognition process; the accuracy of such a method depends on the captured facial segments and features. The different kinds of deficiencies that can be identified in naturally captured images are shown in Fig. 1. This paper provides a solution to all of these problems.

Fig. 1 Spotted deficiencies in natural images: a head position deficiencies, b illumination variation deficiencies, c missing information

Even if a face image is captured in a normalized environment and without any capturing mistakes, it can still hide some facial features. Such high-level feature loss occurs when features are covered by a wearable object such as glasses, a scarf, or a cap. A change of pose or head position also affects the facial features. This kind of feature variation and feature loss is not recoverable by any filter. Some researchers have provided sparse methods with multiple-instance-based recognition [11, 12]. Dynamic feature selection [13] and transformation methods have also been applied to recognize images [14] under these disruptions.

In this paper, an image-transformation model based on multiple feature aspects is provided. The method converts the image information into an analog curve by applying a segmented feature-derivation method. This analog-form frequency-point mapping is able to map the selected feature points over the facial image, providing a solution to most captured-image irregularities. In this section, the deficiencies identified in the facial image capturing and recognition process have been discussed. In Sect. 2, the proposed work model is presented with a description of each of its stages. In Sect. 3, comparative experimentation is provided to evaluate the accuracy of the model.

2 MFAST Model

In this section, the proposed MFAST model is described as a solution to improper facial capturing and hidden partial facial regions. The model is robust against pose variation, illumination variation, and wearable-object problems. As discussed earlier, the model transforms each DB image and the input image into analog feature curves. Instead of comparing high-resolution face images, the analog curves obtained from the MFAST model are compared. To make the recognition process accurate, a weighting method specific to varying window ratios is applied to the frequency points of the analog curve. This section describes the integrated process stages.

2.1 MFAST: Feature Extraction

The natural image, taken either as input or from the DB, is in raw form and must be transformed into an analog feature form. The MFAST model for feature conversion to analog form is shown in Fig. 2. In the first stage of this model, a transformation is applied to acquire the segmented features and represent them in analog form. A normalization process is then applied to the analog curve to obtain the key feature points. Finally, the feature curves are combined into a composite feature form, and a second-level filtration is applied to normalize the composite feature signal.

Fig. 2 MFAST model

2.1.1 Transformation

To apply the transformation, the complete image is divided into smaller blocks of equal size. For each block, a single feature point is obtained per feature specification. The block size determines the number of feature points, which are presented as the feature curve. Experimentation indicates that a smaller block size improves the accuracy of the recognition process.
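The block segmentation step can be sketched as follows; this is an illustrative Python fragment, assuming a grayscale image stored as a NumPy array and a hypothetical block_size parameter (the paper does not prescribe a specific block size or API).

```python
import numpy as np

def split_into_blocks(image, block_size=8):
    """Divide a grayscale image into non-overlapping, equal-sized blocks.

    Blocks are read row by row; the per-block feature values, taken in
    this order, form the 1-D analog feature curve.
    """
    h, w = image.shape
    # Crop so that both dimensions are multiples of the block size,
    # keeping every block the same size.
    h_crop, w_crop = h - h % block_size, w - w % block_size
    blocks = []
    for r in range(0, h_crop, block_size):
        for c in range(0, w_crop, block_size):
            blocks.append(image[r:r + block_size, c:c + block_size])
    return blocks
```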

2.1.1.1 AF1-Moment Analysis

The AF (Analog Feature) moment is a quantified form of the mass value of a segmented region. This quantized vector can be described in different forms depending on the order specification: the zeroth-order moment represents the mass value itself, the first-order moment is the center of mass, and the second-order moment represents rotational inertia. In the same way, each order generates structural and quantified features of the image. In this work, the mass-based rotational inertia is considered as the feature data. The statistical derivation of this feature is given below

$$ M_{k} = \frac{1}{N}\mathop \sum \limits_{i = 1}^{N} \left( {Blk_{i} - \mu } \right)^{k} $$
(1)

Here Eq. (1) gives the moment derived with respect to the mean. The continuous formulation incorporating the rotational derivation is shown in Eq. (2)

$$ M_{k} = \int\limits_{ - \infty }^{\infty } {\left( {Blk - \mu } \right)^{k} \,dBlk} $$
(2)

Here Blk is the segmented block and µ is the center intensity point with respect to which the rotational mass-value derivation is considered.
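A minimal sketch of the AF1 computation for one block, following Eq. (1) with k = 2 (the rotational-inertia style moment used in this work), is given below; the function name and NumPy usage are illustrative assumptions rather than part of the original formulation.

```python
import numpy as np

def af1_moment(block, k=2):
    """k-th central moment of a block (Eq. 1): mean of (Blk_i - mu)^k."""
    pixels = block.astype(float).ravel()
    mu = pixels.mean()                  # block mean intensity
    return np.mean((pixels - mu) ** k)  # k = 2 gives the AF1 feature
```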

2.1.1.2 AF2-Skewness Analysis

AF (Analog Feature) skewness is applied to the image to identify the degree of asymmetry of the quantized image block. When the probability distribution of the block intensities is plotted, the two tails around the center are observed: a stretched left tail indicates a negatively skewed block, whereas a stretched right tail indicates a positively skewed block. The skewness formulation for a quantized image block is shown in Eq. (3)

$$ Sk = \frac{{\frac{1}{N}\mathop \sum \nolimits_{i = 1}^{N} \left( {Blk_{i} - \mu } \right)^{3} }}{{\left( {\sqrt {\frac{1}{N}\mathop \sum \nolimits_{i = 1}^{N} \left( {Blk_{i} - \mu } \right)^{2} } } \right)^{3} }} $$
(3)

This equation gives the mean-specific coefficient derivation considered for symmetric feature identification.
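The AF2 value for a block can be computed directly from Eq. (3); the following sketch (with illustrative names, and a small epsilon added to guard flat blocks) shows one way to do so.

```python
import numpy as np

def af2_skewness(block):
    """Sample skewness of a block (Eq. 3); negative values indicate a
    left-stretched intensity distribution, positive values a right-stretched one."""
    pixels = block.astype(float).ravel()
    mu = pixels.mean()
    m2 = np.mean((pixels - mu) ** 2)        # second central moment
    m3 = np.mean((pixels - mu) ** 3)        # third central moment
    return m3 / (np.sqrt(m2) ** 3 + 1e-12)  # epsilon avoids division by zero
```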

2.1.1.3 AF3-Kurtosis Analysis

The third block-segmented Analog Feature (AF) parameter taken here is kurtosis, which evaluates how outlier-prone a block is. The evaluation is based on a normal distribution applied to the intensity regulation of the image block, with 3 taken as the comparative decision value. A kurtosis value higher than 3 indicates a more outlier-prone block, whereas a lower value signifies lesser error proneness. The formulation of this feature vector is shown in Eq. (4)

$$ Kr = \frac{{E\left( {Blk - \mu } \right)^{4} }}{{\sigma^{4} }} $$
(4)

Here E(·) is the expectation operator evaluated over the block pixel population, µ is the block mean, and σ is the block standard deviation.
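The AF3 value follows Eq. (4) directly; the sketch below uses the Pearson definition of kurtosis, for which a normal distribution yields exactly 3 (the decision value mentioned above). The function name and the epsilon guard are illustrative assumptions.

```python
import numpy as np

def af3_kurtosis(block):
    """Pearson kurtosis of a block (Eq. 4): E[(Blk - mu)^4] / sigma^4.
    Values above 3 mark the block as outlier-prone."""
    pixels = block.astype(float).ravel()
    mu, sigma = pixels.mean(), pixels.std()
    return np.mean((pixels - mu) ** 4) / (sigma ** 4 + 1e-12)
```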

2.1.2 Normalization

Each individually evaluated block feature value is quantized and transformed into a curve form. The generated analog curve is irregular and requires removal of the included impurities and disturbances. Band-pass filtration along with dynamic thresholding is applied here for probabilistic feature evaluation. This feature-vector-independent normalization process is shown in Table 1.

Table 1 AF normalization

Table 1 describes the method used to normalize the analog feature curve. The procedure comprises successive low-pass and high-pass filtration; the derivative-coefficient signal is then obtained through a derivative evaluation, which is followed by a squaring transformation to obtain the absolute signal value taken as the processed signal form.
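The Table 1 pipeline can be sketched as follows. The filter order and cutoff values are illustrative placeholders, not values specified in the paper; only the sequence (low-pass, high-pass, derivative, squaring) follows Table 1.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def normalize_af_curve(curve):
    """Normalization sketch following Table 1: low-pass and high-pass
    filtration (together a band-pass), a derivative stage, and squaring."""
    curve = np.asarray(curve, dtype=float)
    b, a = butter(2, 0.4, btype='low')      # low-pass filtration
    filtered = filtfilt(b, a, curve)
    b, a = butter(2, 0.05, btype='high')    # high-pass filtration
    filtered = filtfilt(b, a, filtered)
    derivative = np.gradient(filtered)      # derivative-coefficient signal
    return derivative ** 2                  # squaring gives the processed signal
```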

2.1.3 Composition

The final step of the MFAST model is to combine these AF feature curves into a combined feature form, which is employed in a later stage for facial recognition. The composed-curve formulation is shown in Eq. (5)

$$ {\text{CAF}} = {\text{AF}}1 \, \left| {{\text{AF}}2 \, } \right|{\text{AF}}3. $$
(5)

The composed curve is finally passed through the feature-normalization phase to remove the aggregated impurities over the signal curve. This aggregate normalized curve is obtained by Eq. (6), which applies the algorithmic process of Table 1.

$$ {\text{CFAF}} = {\text{Normalization}}\left( {\text{CAF}} \right) $$
(6)

This CFAF feature is obtained for the input image and for each database image. The recognition process applied to the composition is defined in the next subsection.
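Reading the '|' in Eq. (5) as concatenation of the three per-block feature curves (an assumption of this sketch), the CFAF composition can be expressed as:

```python
import numpy as np

def compose_cfaf(af1_curve, af2_curve, af3_curve):
    """Compose the AF curves into CAF (Eq. 5) and normalize to CFAF (Eq. 6)."""
    caf = np.concatenate([af1_curve, af2_curve, af3_curve])
    return normalize_af_curve(caf)  # the normalization routine sketched above
```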

2.2 MFAST: Recognition

The composed CFAF curve of the input is finally compared with that of each DB image through curve-point analysis. Matching is identified here in terms of the ratio between mapped points and total curve points. If the input and target CFAF curves are the same, each point of the input CFAF curve will overlap the target CFAF curve. If the difference lies only in illumination values, some of the input CFAF curve points will be above or below the target curve with a small difference; threshold-window-based matching is able to perform recognition on such curves. If the input image has hidden or obscured features, some points will map exactly, some will map within the window difference, and some completely differing points will not map under the window formulation. To cover all aspects of recognition, the formulation of the recognition process is shown in Eq. (7), and its visualization is shown in Figs. 3 and 4.

$$ {\text{RRatio}} = {\text{EM}} + {\text{FWM}}\,*\,0.9 + {\text{SWM}}\,*\, 0.8 $$
(7)

Here

EM: exactly matched feature ratio

FWM: first window (3 points) specified matched feature ratio

SWM: second window (5 points) specified matched feature ratio
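A sketch of the scoring in Eq. (7) is given below. It assumes that the first and second windows denote ±1 and ±2 index neighborhoods on the target curve and that "matching" means agreement within a small tolerance eps; both readings, and the final division by the curve length to obtain a ratio, are assumptions of this illustration and not specifics given in the paper.

```python
import numpy as np

def recognition_ratio(input_cfaf, target_cfaf, eps=1e-3):
    """Window-weighted matching ratio (Eq. 7): EM + 0.9*FWM + 0.8*SWM."""
    n = min(len(input_cfaf), len(target_cfaf))
    em = fwm = swm = 0
    for i in range(n):
        v = input_cfaf[i]
        if abs(v - target_cfaf[i]) <= eps:                       # exact match
            em += 1
        elif any(abs(v - target_cfaf[j]) <= eps
                 for j in range(max(0, i - 1), min(n, i + 2))):  # 3-point window
            fwm += 1
        elif any(abs(v - target_cfaf[j]) <= eps
                 for j in range(max(0, i - 2), min(n, i + 3))):  # 5-point window
            swm += 1
    return (em + 0.9 * fwm + 0.8 * swm) / n
```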

Fig. 3 Recognition process of MFAST model

Figure 4 shows the input CFAF mapping applied to a DB CFAF curve. The blue line represents the DB image curve points and the red line represents the input image curve points. In this example, the result clearly shows disqualification of the mapping. The experimentation applied to the different databases is reported in Sect. 3.

Fig. 4 MFAST recognition process: a example of input and target images, b CFAF curve map of input and target image

3 Experimentation

The MFAST model is designed in this paper to improve recognition accuracy against illumination problems, pose variation, and partial-occlusion problems. To obtain natural images, the experimentation is applied to the FERET (Face Recognition Technology), LFW (Labeled Faces in the Wild), and Indian face databases. The first experiment is applied to the FERET database, which consists of about 14,126 images of 1199 individuals. The dataset contains images of mixed age and gender with different resolutions, illumination, and poses. For the experimentation, three different training sets of 200, 500, and 700 images are considered, with test sets of 100, 200, and 300 images, respectively. The average accuracy over the three training-testing pairs is shown in Fig. 5. The MFAST model provides a recognition rate of about 92.5 %. Comparative observations are taken against the PCA, LDA, PCA–LDA, and SVM approaches. Because the existing methods compare symmetric features, their recognition rates are lower; the highest recognition rate among the existing approaches is 80.1 %, obtained by the PCA–LDA approach.

Fig. 5 Recognition accuracy analysis

LFW is another online-acquired database considered for its size and scope. The database contains about 13,233 images of 5749 individuals. The images are captured in natural environments with related variables, so all real-world problems and impurities are part of the database. This highly diverse feature database is used to assess the accuracy improvement of the MFAST model. For the analysis, two training-test set pairs are taken: the first consists of 200 training images and 50 test images, whereas the second is wider, with 500 training images and 150 test images. The proposed model evaluates the structural and asymmetric formulation of the facial image, and based on these constraint-specific observations it provides an average accuracy of 96 %. The comparative observations show that the existing models provide at most 84 % average accuracy, obtained by the PCA–LDA method.

The final experimentation is applied to the more complex Indian face database. The dataset contains images of 61 individuals with 11 instances each, varying in illumination, head position, pose, and expression. For the experimentation, two training sets of 300 and 400 images are taken with 50 and 200 test images, respectively. The MFAST model provides a 94 % accuracy rate, which is much higher than the highest recognition rate of the existing approaches, as shown in Fig. 5.

4 Conclusion

This paper has presented the MFAST model, based on an image-to-segmented analog feature transformation. The model applies a segmented feature derivation for three main vectors: moment analysis, skewness analysis, and kurtosis analysis. These features are used to obtain symmetric observations, fault-proneness analysis, and structural feature extraction. After extraction, the quantized features are transformed into analog curve form, and a band-pass-filtration-based constraint derivation is applied to normalize the feature curve. The composed CFAF feature is formed by collecting all three feature curves. Finally, the feature points of the input and target DB CFAF curves are mapped under a window-threshold specification. The experimentation is carried out on the FERET, LFW, and Indian face databases. Comparative observations against the PCA, SVM, LDA, and PCA–LDA methods show that the model enhances the recognition accuracy because of its asymmetric-feature-based evaluation. The average accuracy provided by the model under multiple variations is up to 96 %, whereas the existing methods provide a maximum accuracy of up to 83 %.