Abstract
In this paper, we investigate two relatively new optimization algorithms for facial recognition, the grasshopper optimization algorithm (GOA) and the binary dragonfly algorithm (BDA), which performed best out of 13 optimization algorithms that we compared. We investigate the effectiveness of both optimization algorithms alongside two classifiers, k-nearest neighbor (KNN) and support vector machine (SVM). Performance evaluation of the four combinations, BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM, indicates near-ideal recognition rates, with the GOA variants slightly outperforming their BDA counterparts. When compared to other recently proposed facial recognition approaches, the proposed algorithms achieve improved accuracy.
Keywords
- Biometrics
- Binary dragonfly algorithm
- Classification
- Facial recognition
- Grasshopper algorithm
- Optimization algorithm
1 Introduction
Facial recognition (FR) has many practical applications due to advantages such as uniqueness, immutability, social acceptance, ease of use and low cost [1]. It is a nonintrusive method for identifying or verifying individuals. FR algorithms involve training classifiers using facial features. Unfortunately, many redundant and irrelevant features negatively affect the performance of FR approaches. Approaches such as local binary patterns (LBP) can be used to extract local spatial patterns as opposed to global features [2, 3]. LBP is a feature descriptor for facial expression representation. Its main advantages are tolerance against illumination changes and computational simplicity [4]. For further performance improvements, feature selection methods can be employed to reduce feature space dimensionality. Feature selection attempts to solve the problem of redundant, irrelevant, and inaccurate features, and can be performed with the aid of various optimization algorithms such as ant colony optimization [5], particle swarm optimization [6], bacteria foraging optimization [7] and the firefly algorithm [8].
FR can be implemented using classifiers such as artificial neural networks (ANNs), the support vector machine (SVM) and the k-nearest neighbor (KNN) algorithm. These are machine learning approaches that are commonly used for pattern recognition. Many researchers have shown that KNN and SVM outperform other classifiers for FR purposes [9,10,11]. KNN is a simple, reliable, and computationally efficient algorithm for FR [12], whereas SVM is a machine learning approach widely used in image processing applications. KNN has a high recognition rate and can quickly identify items from a large dataset [10]. For facial recognition, KNN leverages distance metrics to identify the closest person in the dataset [11, 13]. SVM is an effective discriminative classifier first applied to face recognition by Phillips [14]. The input to SVM is a set \(X,Y\) of labeled training data, where \(X\) is the data and \(Y = \left[ { - 1,1} \right]\) is the label. The output of an SVM algorithm is a set of \(N\) support vectors. The main advantage of SVM is stability: small changes in the data do not greatly affect the hyperplane, leading to a stable model [15, 16]. SVM can be used to develop classifier or regression models. For FR, SVM attempts to generate a decision surface that separates images of the same individual from images of different individuals [11].
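As a concrete illustration of the distance-based voting that KNN performs, consider the following minimal Python sketch. The paper's experiments use MATLAB; the toy 2-D "feature vectors", subject labels and the choice of Euclidean distance here are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among the k nearest training samples.

    Distances are plain Euclidean; a real FR system would compare the
    selected LBP feature vectors instead of raw 2-D points.
    """
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D "feature vectors" for two subjects.
X = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
y = ["alice", "alice", "bob", "bob"]
print(knn_predict(X, y, (0.05, 0.1)))  # -> alice
```

The `k` parameter trades robustness against locality: with k = 3, a single mislabeled neighbor cannot flip the vote.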
Various FR algorithms have been proposed in recent literature, all with the goal of maximizing recognition rate by adopting a variety of techniques. Gao and Lee’s approach is based on the scale-invariant feature transform (SIFT), a method to extract local features [17]. Experimental results showed an average recognition rate of 95% when tested on the FERET dataset. Agarwal and Bhanot’s approach involved identifying the center hidden-layer neurons of a radial basis function neural network for facial recognition [8]. They used the firefly optimization algorithm as the feature selection method. Experimental results showed decent recognition rates for various databases such as ORL (97.75%), Yale (99.83%), AR (93.15%) and LFW (60.50%). Zhu and Xue presented a novel random subspace method for FR [18]. The tensor subspace approach was used for feature selection to achieve a recognition rate of 98.32%. Lu et al. used a sparse representation method with rank decomposition to achieve a robust recognition rate of 96% [19].
Other researchers developed FR methods to address issues of high-dimensional features and the multitude of variations present in face images. One method uses GOA to extract relevant features from high-dimensional feature vectors [20]. Their experiments on the ORL dataset led to an accuracy of 91.5%. Sasirekha and Thangavel proposed a novel FR algorithm based on KNN with particle swarm optimization (PSO) [21]. LBP and PSO were used to extract and select features respectively, leading to a best-case accuracy of 97.41%. Maheshwari et al. developed an FR approach based on the local directional pattern, a feature extraction method [14]. Genetic and differential evolution algorithms were then used for feature selection to eliminate irrelevant features. Finally, SVM was used to classify the identity of facial images. Their experimental analysis showed that differential evolution outperforms the genetic algorithm. Gupta and Goel developed an FR approach that extracts features using a Gabor filter [22]. Principal component analysis (PCA) was then used for dimension reduction. A modified version of the artificial bee colony (ABC) algorithm is then used on the feature vectors to search for the best match for a test image in a given database, achieving an accuracy of 97%. Abd et al. proposed an FR approach also based on the Gabor filter for feature extraction, followed by feature selection using the grey wolf optimization (GWO) algorithm. By training a KNN classifier, a recognition rate of 97% was achieved on the Yale dataset [23]. The FR approach by Kumar, based on PCA and the bat optimization algorithm, achieved a recognition accuracy of 96% when tested on the Yale database [12].
More recently, Aro et al. proposed an FR algorithm based on enhanced Gabor filters and the ant colony optimization algorithm [24]. The proposed method aimed to solve the high dimensionality problem of Gabor filters, which leads to low performance and high time complexity. The ant colony optimization algorithm was used to remove noisy, redundant and irrelevant Gabor features. They achieved accuracies of 97.14% and 95.71% using the Mahalanobis and Chebyshev classifiers, respectively. Benamara et al. proposed a multispectral face recognition method using random feature selection and PSO-SVM [25]. The proposed method addressed intra-variation conditions, which negatively affect the performance of FR systems, by using both infrared and visible spectra. A new feature selection algorithm was introduced that reduces the feature space dimensionality to make it suitable for real-time applications.
Eleyan proposed the PSO metaheuristic as a feature selection method for face recognition systems that reduces the dimensionality of extracted feature vectors [26]. Experimental analysis was performed using two well-known face databases. The PSO approach depicted high performance in terms of accuracy, specificity and sensitivity as compared to other algorithms such as principal component analysis (PCA). Malhotra and Kumar proposed an optimized facial recognition approach that combines DCT and PCA to extract features, leading to a high recognition accuracy of 96.5% [27]. Cuckoo search was used in the feature selection stage to remove irrelevant features. Král et al. proposed another face recognition system based on an improved local binary patterns (LBP) approach [28]. The enhanced LBP considers more pixels and different neighborhoods while computing the features. The proposed approach was evaluated using the UFI and FERET face datasets, depicting improved performance as compared to other state-of-the-art approaches. Table 1 shows a summary of the related work.
In this paper, we investigate the use of two relatively new optimization algorithms in facial recognition. We selected these algorithms after studying 13 different optimization algorithms from the perspectives of accuracy and time complexity when used for feature selection. Based on our experiments, the binary dragonfly algorithm (BDA) and grasshopper optimization algorithm (GOA) outperformed their peers in both aspects. Both optimization algorithms are used for feature selection prior to training KNN and SVM classifiers. We denote the four FR approaches as BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM. The proposed FR algorithms depict desirable performance in terms of both recognition rate and time complexity, outperforming other recently proposed FR algorithms.
The remainder of this paper is organized as follows: Sect. 1 discusses related work in FR, followed by Sect. 2, which investigates 13 optimization algorithms for feature selection. Section 3 then describes the four proposed FR approaches, whereas Sect. 4 provides experimental analysis of those methods. Finally, the paper concludes with some final remarks in Sect. 5.
2 Optimization Algorithms
2.1 Binary Dragonfly Algorithm
The dragonfly algorithm (DA) is a relatively new swarm intelligence optimization algorithm proposed in 2016 [29]. There are many versions of DA such as BDA, the multi-objective dragonfly algorithm and the single-dragonfly algorithm. The relevant swarming behaviors for BDA are listed below, where \(N\) is the number of neighboring individuals, \(X_{i} , X_{j} , X^{ + } , X^{ - }\) denote the positions of the current individual, \(j\)th individual, food source and enemy respectively, and \(t\) denotes the iteration number:

- Separation: \(S_{i} = - \sum\nolimits_{j = 1}^{N} {\left( {X_{i} - X_{j} } \right)}\)
- Alignment: \(A_{i} = \frac{1}{N}\sum\nolimits_{j = 1}^{N} {V_{j} }\), where \(V_{j}\) is the velocity of the \(j\)th individual
- Cohesion: \(C_{i} = \frac{1}{N}\sum\nolimits_{j = 1}^{N} {X_{j} } - X_{i}\)
- Attraction to food: \(F_{i} = X^{ + } - X_{i}\)
- Distraction from enemy: \(E_{i} = X^{ - } + X_{i}\)
To update the positions of dragonflies in a search space and formulate their movements, two vectors are considered: the step vector \(\Delta X\) and the position \(X\). The step vector denotes the direction of dragonfly movement and is calculated as

\(\Delta X_{t + 1} = \left( {sS_{i} + aA_{i} + cC_{i} + fF_{i} + eE_{i} } \right) + w\Delta X_{t}\)
After calculating the step vector, the position vectors are calculated as

\(X_{t + 1} = X_{t} + \Delta X_{t + 1}\)
Then, to enhance the randomness of the dragonflies, a Lévy flight is applied when no neighboring solutions exist,

\(X_{t + 1} = X_{t} + {\text{Levy}}\left( d \right) \times X_{t} ,\quad {\text{Levy}}\left( x \right) = 0.01 \times \frac{{r_{1} \times \alpha }}{{\left| {r_{2} } \right|^{1/\beta } }}\)
where \(r_{1}\), \(r_{2}\) denote two random numbers in [0,1], \(\beta = 1.5\) and \(\alpha\) is calculated as

\(\alpha = \left( {\frac{{\Phi \left( {1 + \beta } \right) \times \sin \left( {\pi \beta /2} \right)}}{{\Phi \left( {\frac{1 + \beta }{2}} \right) \times \beta \times 2^{{\left( {\beta - 1} \right)/2}} }}} \right)^{1/\beta }\)
where \(\Phi \left( x \right) = \left( {x - 1} \right)!\). Finally, a transfer function is used to calculate the probability of a dragonfly changing position,

\(T\left( {\Delta x} \right) = \left| {\frac{\Delta x}{{\sqrt {\Delta x^{2} + 1} }}} \right|\)
To update the position of search agents in binary search spaces,

\(X_{t + 1} = \left\{ {\begin{array}{ll} {\neg X_{t} ,} & {r < T\left( {\Delta X_{t + 1} } \right)} \\ {X_{t} ,} & {\text{otherwise}} \\ \end{array} } \right.\)
where \(r\) denotes a random number in the interval [0,1]. The BDA algorithm considers all the dragonflies as one swarm and simulates exploration/exploitation by adaptively tuning the swarming factors (\(s\), \(a\), \(c\), \(f\), and \(e\)) as well as the inertia weight (\(w\)). The pseudocode of BDA is shown in Algorithm 1.
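The transfer function and binary position update can be sketched in a few lines of Python. This is an illustration only: the helper names `transfer` and `binary_update` are ours, and the full swarm dynamics driven by the \(s\), \(a\), \(c\), \(f\), \(e\) weights are omitted:

```python
import math
import random

def transfer(delta):
    """V-shaped transfer function T(dx) = |dx / sqrt(dx^2 + 1)|, in [0, 1)."""
    return abs(delta / math.sqrt(delta * delta + 1))

def binary_update(position, step, rng=random.random):
    """BDA binary update: flip bit d with probability T(step_d)."""
    return [
        1 - x if rng() < transfer(d) else x
        for x, d in zip(position, step)
    ]

# A large step component makes a flip very likely; a zero step never flips.
print(binary_update([0, 1, 0], [9.0, 9.0, 0.0]))
```

Because `transfer` is bounded below 1, even a huge step vector leaves a small chance that a bit stays put, which preserves some exploration late in the search.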
2.2 Grasshopper Optimization Algorithm
The grasshopper optimization algorithm (GOA) is a new optimization algorithm proposed by Saremi et al. in 2017 [30]. A multi-objective version of the grasshopper algorithm was later proposed in 2018 by Mirjalili et al. [31]. As its name implies, GOA is inspired by the behavior of grasshoppers. It is generally used to search for optimal solutions to constrained and unconstrained problems [32]. The pseudocode of GOA is shown in Algorithm 2. The position of the ith grasshopper, \(X_{i}\), is calculated as

\(X_{i} = S_{i} + G_{i} + A_{i}\)
where \(S_{i}\) is the social interaction, \(G_{i}\) is the gravitational force on the ith grasshopper and \(A_{i}\) is the wind advection. Social interaction is the main parameter that dictates the grasshoppers’ movement which can be calculated as
where \(N\) is the number of grasshoppers, \(d_{ij}\) is the distance between the ith and jth grasshoppers, \(\widehat{{d_{ij} }}\) is a unit vector from the ith to the jth grasshopper, and \(s\) is a function that represents social attraction. These parameters are defined as

\(d_{ij} = \left| {x_{j} - x_{i} } \right|\), \(\widehat{{d_{ij} }} = \frac{{x_{j} - x_{i} }}{{d_{ij} }}\) and \(s\left( r \right) = fe^{ - r/l} - e^{ - r}\)
respectively, where \(f\) and \(l\) are the attraction intensity and the attractive length scale respectively, and \(x_{i}\) represents the ith grasshopper within the entire population. The final mathematical model of the grasshopper position in the dth dimension is described as

\(X_{i}^{d} = c\left( {\sum\nolimits_{j = 1,j \ne i}^{N} {c\frac{{ub_{d} - lb_{d} }}{2}s\left( {\left| {x_{j}^{d} - x_{i}^{d} } \right|} \right)\frac{{x_{j} - x_{i} }}{{d_{ij} }}} } \right) + \widehat{{T_{d} }}\)
where \(ub_{d}\), \(lb_{d}\) and \(\widehat{{T_{d} }}\) are the upper bound, lower bound and best solution found so far, respectively. \(c\) is a control parameter that balances exploitation and exploration and is calculated as

\(c = c_{max} - l\frac{{c_{max} - c_{min} }}{L}\)
where \(c_{max}\), \(c_{min}\), \(l\) and \(L\) are the maximum value, minimum value, current iteration and maximum number of iterations respectively, with \(c_{max} = 1\) and \(c_{min} = 0.00001\).
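The decreasing coefficient \(c\) and the social force \(s(r)\) can be sketched as follows. This is a Python illustration with assumed defaults \(f = 0.5\) and \(l = 1.5\) (the values suggested in the original GOA paper); the function names are ours:

```python
import math

def shrink_coeff(l, L, c_max=1.0, c_min=1e-5):
    """c = c_max - l * (c_max - c_min) / L: decays linearly with iteration l,
    shifting the swarm from exploration toward exploitation."""
    return c_max - l * (c_max - c_min) / L

def social(r, f=0.5, l_scale=1.5):
    """s(r) = f * exp(-r / l) - exp(-r): repulsive at short range,
    attractive at intermediate range, vanishing at long range."""
    return f * math.exp(-r / l_scale) - math.exp(-r)

print(shrink_coeff(0, 100), shrink_coeff(100, 100))  # full -> minimal range
print(social(0.0))  # negative: grasshoppers at zero distance repel
```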
2.3 Comparison of Optimization Algorithms
Prior to selecting BDA and GOA for our work, we compared 13 optimization algorithms on 12 test functions to determine their accuracy and efficiency for feature selection purposes. The 12 test functions used for comparison include Rastrigin, Ackley, Camel3, Dejong5, Levy, Sphere, Rosenbrock, Griewank, Zakharov, Schaffer2, Rothyp and Shubert [33]. Experiments were performed using MATLAB 2018 on an Intel Core-i5 CPU with 2 GB RAM. The experiments were executed 1000 times, and the accuracy (cost function value) and time taken (in seconds) for each execution were noted; for both measures, a lower value is desired. Search space dimensions of 10, 20 and 30 were used, with lower and upper bounds of \(\left[ { - 5,5} \right]\), \(\left[ { - 10,10} \right]\) and \(\left[ { - 15,15} \right]\) respectively. Only the unimodal category (single-solution problems) is used to determine the best algorithm for feature selection. The results are tabulated in Table 2, where the dragonfly and grasshopper optimization algorithms outperformed their peers in both metrics.
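For reference, two of the benchmark functions used in such comparisons can be written compactly: Sphere is unimodal while Rastrigin is multimodal, and both have a global minimum of 0 at the origin. This is a Python sketch, not the MATLAB harness used in our experiments:

```python
import math

def sphere(x):
    """Unimodal benchmark: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal benchmark: 10n + sum(x_i^2 - 10 cos(2 pi x_i)),
    minimum 0 at the origin with many surrounding local minima."""
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

print(sphere([0.0] * 10), rastrigin([0.0, 0.0]))  # both 0 at the optimum
```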
3 Proposed Method
In the proposed work, features of the human face are first extracted using uniform LBP (ULBP). Features are the significant characteristics from a face image which may be its shape, texture, or context. Relevant features are then selected by using BDA and GOA to train two classifiers, KNN and SVM. Classifiers trained using features selected by BDA are denoted as BDA-KNN and BDA-SVM whereas the classifiers trained using features selected by GOA are denoted as GOA-KNN and GOA-SVM. The following subsections provide details regarding the steps involved in developing these algorithms.
3.1 Preprocessing
Illumination and pose normalization techniques are used in the preprocessing stage to mitigate their negative effects on the overall performance of the algorithm. The normalization technique divides the face image into four sub-segments which are each processed independently. The location of the nose is considered the middle point of the image where this image splitting occurs. Illumination normalization is performed for each segment based on the probability density function of its pixels’ grey levels. Upon completing the normalization process, the sub segments are merged and subjected to pixel averaging followed by the application of filters. Details of the entire process are available in [34].
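The grey-level normalization step can be approximated by histogram equalization over each sub-segment's pixels. The following is a minimal Python sketch of per-segment equalization via the empirical CDF; the segment splitting around the nose and the subsequent merging and filtering from [34] are omitted, and the function name is ours:

```python
def equalize(levels, max_level=255):
    """Histogram-equalize a flat list of grey levels.

    Maps each grey level g to round(CDF(g) * max_level), spreading the
    segment's intensities over the full dynamic range.
    """
    n = len(levels)
    hist = [0] * (max_level + 1)
    for g in levels:
        hist[g] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total / n)
    return [round(cdf[g] * max_level) for g in levels]

# A low-contrast segment gets stretched across the grey-level range.
print(equalize([100, 101, 101, 102]))
```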
3.2 Feature Extraction
Conventional LBP is typically computed for each pixel \(\left( {x_{c} ,y_{c} } \right)\) of an image by considering a small circular neighborhood (with radius \(R\) pixels). Let \(g_{c}\) denote the gray level value of that pixel; then \(LBP_{P,R} \left( {x_{c} ,y_{c} } \right)\) is defined as follows

\(LBP_{P,R} \left( {x_{c} ,y_{c} } \right) = \sum\nolimits_{p = 0}^{P - 1} {s\left( {g_{p} - g_{c} } \right)2^{p} } ,\quad s\left( x \right) = \left\{ {\begin{array}{ll} {1,} & {x \ge 0} \\ {0,} & {x < 0} \\ \end{array} } \right.\)
where \(P\) corresponds to the number of pixels in the neighborhood with radius \(R\), and \(g_{p}\) is the gray level of the pth neighboring pixel. A subset of these \(2^{P}\) binary patterns, known as uniform patterns, have at most two transitions from 0 to 1 (or vice versa). These uniform patterns play an important role in improving recognition. Thus, the total number of output labels generated by mapping patterns of \(P\) bits is \(P\left( {P - 1} \right) + 3\). We can mathematically define the uniform LBP (\(LBP_{P,R}^{u2}\)) as:

\(LBP_{P,R}^{u2} = \left\{ {\begin{array}{ll} {I\left( {LBP_{P,R} } \right),} & {U\left( {LBP_{P,R} } \right) \le 2} \\ {\left( {P - 1} \right)P + 2,} & {\text{otherwise}} \\ \end{array} } \right.\)
where \(I\left( z \right) \in \left[ {0,P\left( {P - 1} \right) + 1} \right]\) and

\(U\left( {LBP_{P,R} } \right) = \left| {s\left( {g_{P - 1} - g_{c} } \right) - s\left( {g_{0} - g_{c} } \right)} \right| + \sum\nolimits_{p = 1}^{P - 1} {\left| {s\left( {g_{p} - g_{c} } \right) - s\left( {g_{p - 1} - g_{c} } \right)} \right|}\)
\(U\left( {LBP_{P,R} } \right)\) denotes the pattern’s number of spatial bitwise transitions (1/0 changes). If \(U\left( {LBP_{P,R} } \right) \le 2\), the corresponding pixel is labeled by the index function \(I\left( z \right)\). Otherwise, the pixel is assigned a value of \(\left( {P - 1} \right)P + 2\). Each uniform pattern is assigned an index based on the index function \(I\left( z \right)\), which contains \(\left( {P - 1} \right)P + 2\) indices [35]. The global high-dimensional feature descriptor is then generated by concatenating all the features.
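A minimal Python sketch of the basic LBP code and the uniformity measure \(U\) for \(P = 8\), \(R = 1\) on a 3×3 window follows. The clockwise neighbor ordering is an illustrative assumption; the function names are ours:

```python
def lbp_code(window):
    """Basic LBP_{8,1} code for a 3x3 window of grey levels: the centre
    pixel is thresholded against its 8 neighbours and the bits form a byte."""
    gc = window[1][1]
    # Clockwise neighbour order starting at the top-left corner (assumed).
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if window[r][c] >= gc else 0 for r, c in coords]
    return sum(b << p for p, b in enumerate(bits))

def transitions(code, P=8):
    """U(LBP): number of circular 0/1 transitions in the P-bit pattern."""
    bits = [(code >> p) & 1 for p in range(P)]
    return sum(bits[p] != bits[(p + 1) % P] for p in range(P))

flat = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
print(lbp_code(flat), transitions(lbp_code(flat)))  # 255 0 (uniform)
```

Patterns with `transitions(code) <= 2` get their own histogram bin; all others share the single non-uniform bin, which is exactly how the feature vector stays at \(P(P-1)+3\) labels.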
3.3 Feature Selection and Classification
Extracting features using ULBP is sensitive to noise and can lead to irrelevant features. The feature extraction method results in a high-dimensional feature vector, which affects the accuracy and computational cost of a classifier. An efficient FR method can be built by identifying the most important features of the face image. These problems are solved via feature selection, which we perform using BDA and GOA (presented in Sects. 2.1 and 2.2, respectively). The parameters used for BDA and GOA are summarized below:
- BDA
  - Test Size = 1
  - Maximum Iterations = 50
  - Number of Particles = 5
- GOA
  - Maximum Number of Generations = 50
  - Number of Search Agents = 5
  - Lower Bound = −10
  - Upper Bound = 10
The candidate population (number of particles/search agents) for each optimization algorithm is first initialized, then the search for the best features is performed. After each iteration, features which have been identified will be used as inputs to the KNN or SVM classifiers. The resulting recognition accuracy will be used as the fitness function to compare the new set of features to the previous one. Features that lead to the highest accuracy will be selected for facial recognition purposes. We use each optimization algorithm separately alongside each classification algorithm to identify the combination that maximizes recognition accuracy. Feature selection based on the four combinations, BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM follow similar steps as shown in Algorithm 3.
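The wrapper-style search described above can be sketched as follows. This Python illustration replaces the BDA/GOA position updates with plain random resampling of the feature mask, so it demonstrates only the fitness-driven selection loop, not the swarm dynamics; all names and the toy fitness function are ours:

```python
import random

def wrapper_select(n_features, evaluate, iters=50, pop=5, seed=0):
    """Wrapper feature selection: keep the bit-mask whose classifier
    accuracy (the fitness `evaluate(mask)`) is highest.

    Stand-in for BDA/GOA: the real algorithms move the masks with swarm
    dynamics instead of resampling them uniformly at random.
    """
    rng = random.Random(seed)
    best_mask, best_fit = None, float("-inf")
    for _ in range(iters):          # Maximum Iterations / Generations
        for _ in range(pop):        # Number of Particles / Search Agents
            mask = [rng.random() < 0.5 for _ in range(n_features)]
            fit = evaluate(mask)    # e.g. KNN/SVM accuracy on this subset
            if fit > best_fit:
                best_mask, best_fit = mask, fit
    return best_mask, best_fit

# Toy fitness: score is highest when exactly features 0 and 2 are kept.
target = [True, False, True, False]
mask, fit = wrapper_select(4, lambda m: sum(a == b for a, b in zip(m, target)))
print(mask, fit)
```

In the actual pipeline, `evaluate` would train KNN or SVM on the masked ULBP features and return the resulting recognition accuracy.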
4 Results and Discussion
All experiments described in this section were performed using Windows 10 on an Intel Core-i5 CPU with 2 GB RAM and MATLAB 2018. We use three datasets for comparative purposes, the first of which is the Olivetti-Oracle Research Lab (ORL) face database. The database contains 400 frontal faces, each with a size of 112 × 92 pixels. They can be subdivided into 10 tightly cropped images of 40 individuals with variations in pose, illumination, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). The second dataset used is the AR face database created by Aleix Martinez and Robert Benavente. The AR database consists of 4000 color images of 126 different people, 56 females and 70 males. The facial images were taken under restricted conditions but with variations in illumination, facial expression and occlusion with sunglasses, scarves, and hair styles. Labeled Faces in the Wild (LFW) is the third and final dataset used in this work [36]. It consists of 5749 different subjects, of which 1680 subjects have two or more images, resulting in a total of 13,233 images. Similar to the previously discussed datasets, the images have differences in terms of pose, lighting, expression, background, race, age, gender, clothing, occlusions, camera, focus, and other parameters. This database is considered one of the most vital datasets for analyzing the robustness of FR against uncontrolled conditions.
We evaluate the performance of the four combinations, BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM in terms of their time complexity and accuracy. We first compare the optimized algorithms with their unoptimized counterparts to show the performance gains in terms of both metrics. As seen in Tables 3 and 4, the optimized algorithms displayed significant performance improvements. Reduction of the features from feature selection leads to improved accuracy (by preventing overfitting) and improved time complexity.
Prior to using the reduced feature set, SVM is generally more accurate than KNN albeit slower. The performance gap between both algorithms is reduced by applying the optimization algorithms for feature selection. In addition, BDA-KNN and GOA-KNN now slightly outperform BDA-SVM and GOA-SVM respectively. The experiments also indicate that the GOA variants of both classifiers slightly outperform their BDA counterparts. One explanation for this phenomenon is that GOA is better suited to identifying global optima, whereas BDA tends to generate locally optimal results. We also compare the proposed work against other recently proposed approaches based on recognition rate, as shown in Table 5. For all datasets, BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM generally outperform their peers.
The new optimization algorithms were effective in removing irrelevant, noisy, and redundant features that were extracted using ULBP. This is apparent from the high prediction accuracy of the proposed method as compared to other FR proposals in Table 5. This result also supports our findings in Table 2, which identified that the dragonfly and grasshopper algorithms outperform other optimization algorithms. To the best of our knowledge, the proposed work is one of the first in investigating the use of both dragonfly and grasshopper algorithms specifically for facial recognition purposes.
5 Conclusion
In this paper, we investigated the application of two relatively new optimization algorithms in facial recognition: the dragonfly and grasshopper optimization algorithms. We selected these algorithms after a thorough comparison of 13 optimization algorithms in terms of feature selection capability. Both algorithms were then used for feature selection alongside two classifiers, k-nearest neighbor and support vector machine. We denote the resulting approaches as BDA-KNN, BDA-SVM, GOA-KNN and GOA-SVM. As expected, significant performance improvements were obtained when the optimized algorithms were compared to their unoptimized counterparts. Interestingly, we also found that the KNN-based approaches outperformed their SVM counterparts after applying the optimization algorithms for feature selection, whereas the inverse held true prior to feature selection. We also found that the GOA-based classifiers outperform their BDA counterparts due to GOA’s capability of identifying globally optimal solutions as compared to the locally optimal solutions generated by BDA. Performance comparison against other similar approaches in the literature depicts the superiority of the proposed methods in terms of both accuracy and time complexity. Moving forward, our findings imply that future facial recognition algorithms should leverage grasshopper optimization for feature selection to maximize performance.
References
Ma H, Celik T (2019) FER-Net: facial expression recognition using densely connected convolutional network. Electron Lett 55(4):184–186
Chen Z, Huang W, Lv Z (2017) Towards a face recognition method based on uncorrelated discriminant sparse preserving projection. Multimed Tools Appl 76(17):17669–17683
Chengeta K, Viriri S (2018) A survey on facial recognition based on local directional and local binary patterns. In: 2018 conference on information communications technology and society (ICTAS), pp 1–6. IEEE
Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
Yan Z, Yuan C (2004) Ant colony optimization for feature selection in face recognition. In: International conference on biometric authentication, pp 221–226. Springer, Heidelberg
Connolly JF, Granger E, Sabourin R (2012) Evolution of heterogeneous ensembles through dynamic particle swarm optimization for video-based face recognition. Pattern Recogn 45(7):2460–2477
Jakhar R, Kaur N, Singh R (2011) Face recognition using bacteria foraging optimization-based selected features. Int J Adv Comput Sci Appl 1(3)
Agarwal V, Bhanot S (2018) Radial basis function neural network-based face recognition using firefly algorithm. Neural Comput Appl 30(8):2643–2660
Islam KT, Raj RG, Al-Murad A (2017) Performance of SVM, CNN, and ANN with BoW, HOG, and image pixels in face recognition. In: 2017 2nd international conference on electrical & electronic engineering (ICEEE), pp 1–4. IEEE
Kumar M, Jindal MK, Sharma RK (2011) k-nearest neighbor based offline handwritten Gurmukhi character recognition. In: 2011 international conference on image information processing, pp 1–4. IEEE
Parveen P, Thuraisingham B (2006) Face recognition using multiple classifiers. In: 2006 18th IEEE international conference on tools with artificial intelligence (ICTAI 2006), pp 179–186. IEEE
Kumar D (2017) Feature selection for face recognition using DCT-PCA and Bat algorithm. Int J Inf Technol 9(4):411–423
Sinha P, Sinha P (2015) Comparative study of chronic kidney disease prediction using KNN and SVM. Int J Eng Res Technol 4(12):608–612
Phillips PJ (1999) Support vector machines applied to face recognition. In: Advances in neural information processing systems, pp 803–809
Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol (TIST) 2(3):1–27
Ghimire D, Jeong S, Lee J, Park SH (2017) Facial expression recognition based on local region specific features and support vector machines. Multimed Tools Appl 76(6):7803–7821
Gao Y, Lee HJ (2019) Pose-invariant features and personalized correspondence learning for face recognition. Neural Comput Appl 31(1):607–616
Zhu Y, Xue J (2017) Face recognition based on random subspace method and tensor subspace analysis. Neural Comput Appl 28(2):233–244
Lu Y, Cui J, Fang X (2014) Enhancing sparsity via full rank decomposition for robust face recognition. Neural Comput Appl 25(5):1043–1052
Shukla AK, Kanungo S (2019) An automated face retrieval system using grasshopper optimization algorithm-based feature selection method. In: International conference on emerging current trends in computing and expert technology, pp 492–502. Springer, Cham
Sasirekha K, Thangavel K (2019) Optimization of K-nearest neighbor using particle swarm optimization for face recognition. Neural Comput Appl 31(11):7935–7944
Gupta A, Goel L (2016) Heuristic approach for face recognition using artificial bee colony optimization. In: The international symposium on intelligent systems technologies and applications, pp 209–223. Springer, Cham
Abd AL, El-Hafeez T, Zaki AM (2018) Face recognition based on Grey Wolf optimization for feature selection. International conference on advanced intelligent systems and informatics. Springer, Cham, pp 273–283
Aro T, Abikoye O, Oladipo I, Awotunde B (2019) Enhanced Gabor features based facial recognition using ant colony optimization algorithm. J Sustain Technol 10(1)
Benamara NK, Zigh E, Stambouli TB, Keche M (2019) Efficient Multispectral face recognition using random feature selection and PSO-SVM. In: Proceedings of the 2nd international conference on networking, information systems & security, pp 1–6
Eleyan A (2019) Particle swarm optimization based feature selection for face recognition. In: 2019 seventh international conference on digital information processing and communications (ICDIPC), pp 1–4. IEEE
Malhotra P, Kumar D (2019) An optimized face recognition system using cuckoo search. J Intell Syst 28(2):321–332
Král P, Vrba A, Lenc L (2019) Enhanced local binary patterns for automatic face recognition. In: International conference on artificial intelligence and soft computing, pp 27–36. Springer, Cham
Mirjalili S (2016) Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems. Neural Comput Appl 27(4):1053–1073
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47
Mirjalili SZ, Mirjalili S, Saremi S, Faris H, Aljarah I (2018) Grasshopper optimization algorithm for multi-objective optimization problems. Appl Intell 48(4):805–820
Neve AG, Kakandikar GM, Kulkarni O (2017) Application of grasshopper optimization algorithm for constrained and unconstrained test functions. Int J Swarm Intell Evol Comput 6(165):2
Virtual library of simulation experiments: test functions and datasets
Sharif M, Mohsin S, Jamal MJ, Raza M (2010) Illumination normalization preprocessing for face recognition. In: 2010 the 2nd conference on environmental science and information application technology, vol 2, pp 44–47. IEEE
Salyut J, Kurnaz C (2018) Profile face recognition using local binary patterns with artificial neural network. In: 2018 international conference on artificial intelligence and data processing (IDAP), pp 1–4. IEEE
Learned-Miller E, Huang GB, Roy Chowdhury A, Li H, Hua G (2016) Labeled faces in the wild: a survey. In: Advances in face detection and facial image analysis, pp 189–248. Springer, Cham
Singh G, Chhabra I (2018) Genetic algorithm implementation to optimize the hybridization of feature extraction and metaheuristic classifiers. In: Hybrid metaheuristics for image analysis, pp 49–86. Springer, Cham
Maheshwari R, Kumar M, Kumar S (2016) Optimization of feature selection in face recognition system using differential evolution and genetic algorithm. In: Proceedings of fifth international conference on soft computing for problem solving, pp 363–374. Springer, Singapore
Yang XS (2010) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010), pp 65–74. Springer, Heidelberg
Kiran MS (2014) Improved artificial bee colony algorithm for continuous optimization problems. J Comput Commun 2(04):108
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Yang XS (2009) Harmony search as a metaheuristic algorithm. In: Music-inspired harmony search algorithm, pp 1–14. Springer, Heidelberg
Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization inspired by imperialistic competition. In: 2007 IEEE congress on evolutionary computation, pp 4661–4667. IEEE
Pham DT, Castellani M (2015) A comparative study of the Bees Algorithm as a tool for function optimisation. Cogent Eng 2(1):1091540
Yang XS (2009) Firefly algorithms for multimodal optimization. In: International symposium on stochastic algorithms, pp 169–178. Springer, Heidelberg
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95-international conference on neural networks, vol 4, pp 1942–1948. IEEE
Storn R, Price K (1997) Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359
dos Reis Ribeiro M, de Aguiar MS (2011) Cultural Algorithms: a study of concepts and approaches. In: 2011 workshop-school on theoretical computer science, pp 145–148. IEEE
Mehrabian AR, Lucas C (2006) A novel numerical optimization algorithm inspired from weed colonization. Ecol Inform 1(4):355–366
Vinay A, Shekhar VS, Manjunath N, Murthy KB, Natarajan S (2018) Expediting automated face recognition using the novel ORB 2-IPR framework. In: Proceedings of international conference on cognition and recognition, pp 223–232. Springer, Singapore
Acknowledgements
This work is supported in part by the Ministry of Education Malaysia under the Fundamental Research Grant Scheme (FRGS), project number FRGS/1/2019/ICT05/USM/02/1 and Universiti Sains Malaysia under grant no. 8011036.
© 2021 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Ibrahim, D.R., Teh, J.S., Abdullah, R. (2021). Improved Facial Recognition Algorithms Based on Dragonfly and Grasshopper Optimization. In: Alfred, R., Iida, H., Haviluddin, H., Anthony, P. (eds) Computational Science and Technology. Lecture Notes in Electrical Engineering, vol 724. Springer, Singapore. https://doi.org/10.1007/978-981-33-4069-5_10