Abstract
The Gabor wavelet can extract highly informative and efficient texture features for many computer vision and multimedia applications. Features extracted by the Gabor wavelet carry information similar to that captured by the receptive fields of simple cells in the visual cortex of mammalian brains, which motivates researchers to use the Gabor wavelet for feature extraction. Gabor wavelet features are used in many multimedia applications such as stereo matching, face and facial expression recognition (FER), and texture representation for segmentation. This motivates us to analyze Gabor features and evaluate their effectiveness in representing an image. In this paper, three major characteristics of Gabor features are established: (i) the real coefficients of the Gabor wavelet alone are sufficient to represent an image; (ii) local Gabor wavelet features extracted from overlapping regions represent an image more accurately than global Gabor features and local features extracted from non-overlapping regions; and (iii) the real coefficients of overlapping regions are more robust to radiometric changes than the features extracted globally and from local non-overlapping regions using the real, imaginary, and magnitude information of a Gabor wavelet. The efficacy of these findings is evaluated by reconstructing the original image from the extracted features and comparing the reconstructed image with the original. Experimental results show that local Gabor wavelet features extracted from overlapping regions represent an image more efficiently than global and non-overlapping region-based features, and that the real coefficients alone represent an image more accurately than the imaginary and magnitude information.
1 Introduction
Feature extraction is an important and active research topic in computer vision. Many multimedia applications such as face recognition, texture image classification, and image indexing and retrieval employ the Gabor wavelet for feature extraction [13, 21]. Some of these applications need a disparity map, which can be obtained from stereo correspondence [1, 11, 14]. The disparity map provides additional information for these applications.
Feature extraction may be a local or a global process. Global features capture the prominent information of an image, whereas local features capture detailed information. Depending on the application, either local or global features may be employed.
Daugman extended the Gabor filter to the 2D domain to extract texture information from images [3, 4]. This is done by convolving an image with a Gabor wavelet kernel. Some applications that use the Gabor wavelet for feature extraction and subsequent pattern classification are discussed here. In general, the feature vector is the concatenation of features extracted by a Gabor filter at different orientations and scales [23]. Another way of extracting Gabor features is to apply the gray-level co-occurrence matrix to the Gabor-filtered images. In another approach, a covariance matrix is calculated over all the Gabor-filtered images [30, 31], and the non-duplicated values of the covariance matrix are used as features. The simplicity and success of the local binary pattern (LBP) in many applications has motivated researchers to apply LBP to Gabor-filtered images [20]. Another recent approach adapts the fractal signature of the magnitude coefficients for efficient texture feature extraction [35].
Zhang et al. used the histogram of Gabor phase patterns (HGPP) as a feature for face recognition [36]. Xu et al. extracted Gabor features from depth and intensity images and subsequently used them for face recognition [32]. Jahanbin et al. obtained Gabor features separately from co-registered range and portrait image pairs at fiducial points [8], and merged the two feature sets for face recognition.
Yang et al. also used the Gabor wavelet for face classification [34]. Their method handles occlusion by constructing a compact occlusion dictionary from Gabor features. In [33], a facial expression representation model is proposed using the statistical characteristics of training images; texture and shape information are used to measure the similarity between test images and the facial expression models. Zhang et al. proposed an FER system capable of handling occlusion [37]: a set of face templates is extracted from the Gabor-filtered images using a Monte Carlo algorithm, and the extracted features are robust to occlusion.
The Gabor wavelet is also used for image indexing and retrieval [17]. The input image is decomposed by the Gabor wavelet at different scales and orientations; the resulting coefficients contain some redundant information.
Edge detection using a simplified Gabor wavelet is presented in [9]. In this method, the input image is first convolved with a quantized imaginary Gabor filter using two orientations and one scale.
Shen and Jia proposed a 3D Gabor wavelet for hyperspectral image classification [24]; the 3D Gabor wavelet is convolved with the input image to obtain the feature vector. A 2D Gabor wavelet-based automatic retinal vessel segmentation method is proposed in [25]. In this method, the image is filtered by a 2D Gabor wavelet at different orientations and scales. At each scale, the maximum coefficient value over all orientations is taken for each pixel, and this procedure is repeated for all scales to form the feature vector.
It should be noted that the above-mentioned methods use the magnitude information of the Gabor wavelet, which requires both the real and imaginary coefficients. In this paper, an extensive analysis of Gabor filter properties is presented. It is established that the real coefficients of a Gabor filter alone can extract the necessary information of an image in place of the magnitude information. To validate this claim, local Gabor wavelet features are extracted for all image pixels from overlapping neighboring regions. The performance of this local Gabor wavelet feature is compared with that of global Gabor features, and additionally with local features extracted from non-overlapping regions.
Jones and Palmer mentioned in [10] that optimal performance of a 2D Gabor filter can be obtained by using the real part of the filter. Daugman later proposed a feature extraction method based on elementary 2D Gabor functions, employing a neural network for the task [5]. In these two papers, the authors merely stated that "the real part of the complex Gabor function is a good fit to the receptive field weight functions found in simple cells in a cat's striate cortex"; they did not establish this claim either experimentally or analytically. In our paper, extensive experimental evaluations are performed to validate the effectiveness of the real coefficients of the Gabor features. Our experimental analysis also shows that the real coefficients give almost the same performance as the magnitude information for applications such as stereo correspondence. Additionally, extensive experiments analyze the behavior of Gabor features under synthetic illumination changes and real radiometric variations.
Based on the above literature and observations, the major contributions of this paper can be summarized as follows:
- The real coefficients of the Gabor wavelet alone are sufficient to represent an image. This claim is validated for three cases: (i) global Gabor features, (ii) local Gabor features extracted from overlapping regions, and (iii) local Gabor features extracted from non-overlapping regions. In all these cases, the real coefficients alone represent an image more efficiently than the imaginary coefficients, and almost as well as the magnitude information.
- Three different features (global Gabor features, and local Gabor features for overlapping and non-overlapping regions) are analyzed. Local Gabor features for overlapping regions represent an image more accurately than the other two.
- The robustness of all three Gabor features to radiometric variations in a scene is analyzed. The real coefficients of local Gabor features for overlapping regions are more robust than features extracted from the imaginary part or the magnitude information, and significantly better than local Gabor features for non-overlapping regions and global features.
This paper is organized as follows: Section 2 describes the basics of the Gabor wavelet. Global and local Gabor wavelet feature extraction methods are explained in Sections 3 and 4, respectively. Section 5 presents extensive experimental results, and Section 6 draws conclusions.
2 Basics of Gabor wavelet
2D Gabor functions are Gaussian-modulated complex sinusoids, given by:
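(The displayed equation did not survive extraction in this copy; a reconstruction consistent with the description that follows, using the formulation of Lee [15], is:)

```latex
\psi(x, y) = \frac{\omega}{\sqrt{2\pi}\,\kappa}\;
e^{-\frac{\omega^{2}}{8\kappa^{2}}\left(4x^{2} + y^{2}\right)}
\left[ e^{\,i\omega x} - e^{-\frac{\kappa^{2}}{2}} \right]
\tag{1}
```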
The first term in the square bracket in equation (1) determines the oscillatory part of the Gabor kernel, and the second term compensates for the DC value [15]. Gabor wavelets are a class of self-similar functions generated by orienting and scaling the 2D Gabor function, given by:
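(The displayed equation was also lost in extraction; a reconstruction consistent with the orientation and scale parameters defined below is:)

```latex
g_{mn}(x, y) = a^{-m}\,
\psi\!\left(a^{-m}(x\cos\theta + y\sin\theta),\;
a^{-m}(-x\sin\theta + y\cos\theta)\right)
\tag{2}
```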
Here, \(\theta = \frac{n\pi}{K}\), where K is the total number of orientations, and \(a^{-m}\) is the scale factor [2]. The frequency and orientation representations of a Gabor filter are very similar to those of the human visual system, which is why Gabor features are well suited to texture representation and discrimination.
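As a minimal sketch of such a filter bank, the snippet below builds M scales and K orientations of a Lee-style kernel. The kernel size and the values of omega and kappa are illustrative choices, not values from the paper.

```python
import numpy as np

def gabor_kernel(size=15, scale=1.0, theta=0.0, omega=np.pi / 2, kappa=np.pi):
    """Complex 2D Gabor kernel at one scale and orientation.

    Follows the common Lee-style form (Gaussian envelope times a
    DC-compensated complex carrier); the paper's exact parameterization
    in equations (1)-(2) may differ in constants.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate and scale the coordinates by the orientation angle theta.
    xr = scale * (x * np.cos(theta) + y * np.sin(theta))
    yr = scale * (-x * np.sin(theta) + y * np.cos(theta))
    envelope = np.exp(-(omega ** 2) * (4 * xr ** 2 + yr ** 2) / (8 * kappa ** 2))
    carrier = np.exp(1j * omega * xr) - np.exp(-kappa ** 2 / 2)  # DC-compensated
    return scale * envelope * carrier

# A small filter bank: M scales x K orientations, theta = n*pi/K, scale = a^-m.
M, K, a = 2, 4, np.sqrt(2)
bank = [gabor_kernel(scale=a ** (-m), theta=n * np.pi / K)
        for m in range(M) for n in range(K)]
```

Each element of `bank` is a complex array whose real and imaginary parts are the even- and odd-symmetric filters discussed in the following sections.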
3 Global Gabor wavelet feature (GGWF) extraction
Global features represent the entire image, while local features represent a particular region of an image. In most cases, the dimension of the global features is smaller than that of the input image, so global feature extraction methods may be considered dimensionality reduction techniques. Global features are also fast to compute.
Let us consider an image I of size P×Q. To obtain the GGWF, the input image I is convolved with Gabor functions tuned to different scales and orientations, and the resulting coefficients constitute the global feature F. Mathematically:

\[ F_{mn} = I \ast g_{mn} \tag{3} \]

where "∗" is the convolution operator. For a Gabor kernel of size \(N_g \times N_g\) at the mth scale and nth orientation, the filtered image \(F_{mn}\) is given by:

\[ F_{mn}(x,y) = \sum_{s}\sum_{t} I(x-s,\,y-t)\, g_{mn}(s,t) \tag{4} \]

Since the Gabor function is a Gaussian-modulated complex sinusoid, the above equation can be separated into real and imaginary parts:

\[ F^{real}_{mn} = I \ast \operatorname{Re}(g_{mn}) \tag{5} \]
\[ F^{imag}_{mn} = I \ast \operatorname{Im}(g_{mn}) \tag{6} \]

The magnitude of the Gabor-filtered output is calculated as follows:

\[ F^{mag}_{mn}(x,y) = \sqrt{\left(F^{real}_{mn}(x,y)\right)^2 + \left(F^{imag}_{mn}(x,y)\right)^2} \tag{7} \]
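The real/imaginary split and the magnitude computation can be sketched as follows. The toy kernel here is an illustrative Gaussian-windowed complex sinusoid, not the paper's exact kernel; any complex kernel from a Gabor bank can be substituted.

```python
import numpy as np
from scipy.signal import convolve2d

# A toy complex Gabor kernel: Gaussian envelope times complex sinusoid.
half = 7
y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
kernel = np.exp(-(x ** 2 + y ** 2) / 18.0) * np.exp(1j * (np.pi / 2) * x)

def gabor_responses(image, g):
    """Real part, imaginary part, and magnitude of the Gabor-filtered
    image, mirroring the split in equations (5)-(7)."""
    real = convolve2d(image, g.real, mode='same', boundary='symm')
    imag = convolve2d(image, g.imag, mode='same', boundary='symm')
    return real, imag, np.sqrt(real ** 2 + imag ** 2)
```

Note that only the real response needs to be computed and stored when the real coefficients alone are used, which is the memory argument made later in the paper.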
The first row of Fig. 1 shows images represented using global Gabor wavelet features: from left to right, the input image and the images represented using the real coefficients, the imaginary coefficients, and the magnitude information for m=2, k=2 in equations (3), (5), (6) and (7).
When the family of Gabor wavelets forms a frame, it can completely represent an input image; i.e., when a Gabor wavelet family is treated as an orthonormal basis, an input image can be approximately recovered by a linear superposition of these bases weighted by the wavelet coefficients [15]. The reconstructed image \(I_{recon}\) using the GGWF vector is found as follows:
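(The equation body was lost in this copy; following the frame-inversion formula of Lee [15], which matches the surrounding description, the reconstruction may be written as:)

```latex
I_{recon}(x, y) \approx \frac{2}{A + B}
\sum_{m} \sum_{n} \left( F_{mn} \ast g_{mn} \right)(x, y)
```

with A and B the lower and upper frame bounds.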
where \(\left( A + B \right)/2\) measures the frame redundancy and B/A measures the frame tightness. When B = A, the frame is tight, and reconstruction using the inverse formula is exact.
The Teddy image reconstructed using the extracted global features is shown in the first row of Fig. 2. From Figs. 1 and 2, it is observed that the images represented/reconstructed using the real coefficients and the magnitude are visually almost identical to the input image, whereas the image represented using the imaginary coefficients differs significantly from both.
The image reconstructed using only the real coefficients is similar to the image reconstructed using the magnitude information, but the memory requirement is halved when only the real coefficients are used [22]. This is because both the real and imaginary responses must be stored to compute the magnitude, whereas only the real response needs to be stored otherwise.
4 Local Gabor wavelet feature (LGWF) extraction
Local features represent a region of interest of an image. Generally, these local features are obtained from the gray-scale and/or color information of a pixel. A good local feature should uniquely represent a particular point in an image. Local features are typically used for applications such as stereo matching, object tracking, 3D calibration, and 3D reconstruction. In this paper, local features are extracted using both overlapping and non-overlapping regions. The procedure for LGWF extraction, and the subsequent reconstruction of the original image from the extracted features, is discussed below.
Figure 3 shows the block diagram of Gabor wavelet-based local feature extraction from overlapping regions, where the squares (pink and green) represent two local image patches. The pixel for which the feature is extracted is shown as a small green circle. The neighborhoods of the two patches share a common region, hence the term feature extraction from overlapping regions. Let us consider an image I of size P×Q. To find the local feature vector for pixel I(i,j), a neighborhood \(N_{(i,j)}\) of size u×v is considered. This patch is convolved with the Gabor kernel \(g_{mn}\) (equation (2)) at different orientations and scales:

\[ F_{mn}(i,j) = \sum_{s}\sum_{t} N_{(i,j)}(s,t)\, g_{mn}(s,t) \tag{9} \]

where \(N_{(i,j)}\) denotes the u×v neighborhood of I centered at pixel (i,j).
This procedure is repeated for all pixels of the image, so each pixel has a set of Gabor coefficients. The images represented using the real, imaginary, and magnitude information of the Teddy image with m=2, k=2 are shown in the second row of Fig. 1.
Reconstruction of the original image from the real coefficients, the imaginary coefficients, and the magnitude (both real and imaginary coefficients) of the local Gabor wavelet is performed using equation (11). To compare the reconstructed image with the original, the average of the coefficients is computed for each pixel. Figure 2 shows the images reconstructed using the local features extracted from overlapping regions; the reconstructions from the real coefficients, the imaginary coefficients, and the magnitude information via equation (11) are shown in the second row of Fig. 2.
To find the LGWF from non-overlapping regions, the input image is partitioned into subregions of size u1×v1. In this case, the extracted coefficients characterize an entire region, whereas for overlapping regions they characterize a particular pixel. Equations (9)–(11), with slight modifications, can also be employed to represent and reconstruct the original image from non-overlapping regions.
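The difference between the two local schemes can be sketched as follows: the overlapping variant slides a window one pixel at a time (one coefficient per pixel), while the non-overlapping variant partitions the image into disjoint blocks (one coefficient per block). The toy kernel and window sizes are illustrative, not values from the paper.

```python
import numpy as np

def toy_gabor(u):
    """A toy complex Gabor kernel of size u x u (stand-in for g_mn)."""
    c = (u - 1) / 2.0
    y, x = np.mgrid[0:u, 0:u].astype(float)
    y, x = y - c, x - c
    return np.exp(-(x ** 2 + y ** 2) / (u / 2.0) ** 2) * np.exp(1j * (np.pi / 2) * x)

def lgwf_overlapping(image, u=9):
    """One Gabor coefficient per pixel from its u x u neighbourhood.
    Sliding the window by one pixel makes neighbouring regions overlap."""
    g = toy_gabor(u)
    half = u // 2
    pad = np.pad(image, half, mode='edge')
    coeffs = np.empty(image.shape, dtype=complex)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            coeffs[i, j] = np.sum(pad[i:i + u, j:j + u] * g)
    return coeffs

def lgwf_nonoverlapping(image, u=8):
    """One Gabor coefficient per u x u block: the image is partitioned
    into disjoint patches and each patch yields a single feature."""
    g = toy_gabor(u)
    P, Q = image.shape
    out = np.empty((P // u, Q // u), dtype=complex)
    for bi in range(P // u):
        for bj in range(Q // u):
            out[bi, bj] = np.sum(image[bi * u:(bi + 1) * u, bj * u:(bj + 1) * u] * g)
    return out
```

The overlapping output has one coefficient per pixel, the non-overlapping output one per block, which is why the two behave differently as the window size grows (Section 5.1).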
5 Experimental results
To evaluate the performance of the extracted Gabor features, the Middlebury stereo images are used [6, 26–29]. Experimental evaluations are performed for different window sizes, orientations, and scales. Additionally, the performance of these features is evaluated for synthetic illumination changes (gain, bias, gamma, and vignetting changes) and for real radiometric changes, which include changes in exposure and light source. The metrics used are the mean square error (MSE), correlation coefficient (CC), universal quality index (UQI or QI), and structural similarity index (SSI) [16].
Mathematically, these metrics can be expressed as follows:

\[ MSE = \frac{1}{PQ} \sum_{i=1}^{P} \sum_{j=1}^{Q} \left( I(i,j) - J(i,j) \right)^2 \]

where I is the input image and J is the reconstructed image.

\[ CC = \frac{\operatorname{cov}(I, J)}{\sigma_I\, \sigma_J} \]

where cov(I,J) denotes the covariance between the input and reconstructed images, and \(\sigma_I\) and \(\sigma_J\) are the standard deviations of the input and reconstructed images, respectively.

\[ QI = \frac{4\, \sigma_{IJ}\, \overline{I}\, \overline{J}}{\left( \sigma_I^2 + \sigma_J^2 \right) \left( \overline{I}^{\,2} + \overline{J}^{\,2} \right)} \]

where \(\sigma_{IJ}\) represents the covariance of I and J, and \(\overline{I}\) and \(\overline{J}\) are the means of I and J, respectively.
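The three simpler metrics can be computed directly; the sketch below uses the standard Wang–Bovik form for QI, which is the form assumed here.

```python
import numpy as np

def mse(I, J):
    """Mean square error between input I and reconstruction J."""
    return np.mean((I - J) ** 2)

def cc(I, J):
    """Correlation coefficient: cov(I, J) / (sigma_I * sigma_J)."""
    return np.corrcoef(I.ravel(), J.ravel())[0, 1]

def uqi(I, J):
    """Universal quality index (Wang & Bovik); sigma_IJ is the covariance."""
    mi, mj = I.mean(), J.mean()
    si2, sj2 = I.var(), J.var()
    sij = np.mean((I - mi) * (J - mj))
    return 4 * sij * mi * mj / ((si2 + sj2) * (mi ** 2 + mj ** 2))
```

For identical images, MSE is 0 while CC and QI are 1, their best values.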
Lower values of MSE and larger values of CC, QI, and SSI indicate better similarity between the reconstructed and original images. For evaluation, images are reconstructed using the global features and the local features extracted from both overlapping and non-overlapping regions. The reconstructed images are normalized to the range [0, 1]. Figure 4 shows the images represented using the real coefficients, the imaginary coefficients, and the magnitude information extracted both globally and from overlapping regions. In the following figures and tables, real, imag, and mag denote the real, imaginary, and magnitude information of the extracted features, while OL, NO, and G correspond to overlapping, non-overlapping, and global regions.
5.1 Different window sizes
Tables 1 and 2 show the performance of local features extracted from overlapping and non-overlapping regions for different window sizes. The performance of features from overlapping regions decreases as the window size increases, whereas the performance of features from non-overlapping regions increases with window size. For overlapping regions, a feature vector is extracted for each pixel, so a larger window means more neighboring pixels influence the feature vector of the center pixel. The extracted features characterize a pixel effectively when all neighboring pixels belong to a homogeneous region, but not when some neighbors belong to different regions; hence the reconstruction error grows with the window size. For non-overlapping regions, a feature vector is extracted for each image patch. A larger window encloses more pixels, which provide additional information to the patch feature, so the extracted feature vector represents the image patch more effectively.
5.2 Different number of orientations
The performance of GGWF and LGWF (for both overlapping and non-overlapping regions) for different numbers of orientations K = 1, 2, 3, 4, 5, 6, 7 and 8 is shown in Table 3 and Fig. 5. For local features, there is no significant change in performance as the number of orientations grows, whereas the global feature performs better with more orientations. The entire image used for global feature extraction has more pixel variation than the patches used for local feature extraction, so the global feature represents these variations more efficiently as the number of orientations increases; for local features, a few orientations suffice to capture the variation within a patch. This finding is illustrated for global features in Fig. 6, which shows the cones image reconstructed with 2, 4, 6, and 8 orientations: the image is reconstructed more accurately with more orientations.
5.3 Different number of scales
The performance of the Gabor features is also evaluated for different scales, as shown in Tables 4 and 5. With a single scale, the performance of the LGWF (overlapping regions) is comparable to that of the global features; with two scales, the LGWF (overlapping regions) performs better. A few scales suffice to represent the pixel variation within the patches used for local feature extraction, so the error remains almost constant across scales. For global features, however, high-frequency information is lost as the number of scales increases, so the image reconstructed from these low-resolution responses only approximates the original, and the error grows with the number of scales. This finding is illustrated in Fig. 7.
5.4 Synthetic illumination changes
The performance of the Gabor features is investigated under illumination changes. To incorporate synthetic illumination changes, the left image is kept unaltered while the pixel intensities of the right image are modified. Illumination change may be global or local, and global changes may further be classified as linear or non-linear. To evaluate the performance of these features under varying illumination, the pixel intensities of the right image are varied synthetically using the following formula:
where \(I_i\) is the input image and \(I_o\) is the synthetically varied version of \(I_i\). \(m_f\), \(a_f\), and \(\gamma_f\) are the multiplicative (gain), additive (bias), and gamma factors, respectively [18]. The multiplicative and additive factors represent linear global changes, whereas the gamma factor denotes a non-linear global change. A local change is represented using a vignetting function. Figure 8 shows different synthetic illumination changes applied to the Venus image. Figure 9 and Tables 6, 7 and 8 show the performance of the Gabor features for different multiplicative, additive, and gamma factors; Fig. 10 and Table 9 show the results for local illumination variations. These figures and tables show that the Gabor features are affected by illumination variations, owing to the inherent properties of the convolution operation. To establish this, an image is synthetically illuminated with \(m_f = 2\). Figure 11 compares the image reconstructed from the features of the synthetically varied image with the image reconstructed from the features of the original. Figure 11a and d show the original image and its reconstruction, while Fig. 11b and e show the synthetically illuminated image and its reconstruction; Fig. 11c shows the difference between Fig. 11a and b, and Fig. 11f the difference between Fig. 11d and e. These results show that changes in the intensity values of the original image alter the extracted Gabor features, which in turn affects the intensities of the reconstructed image. Similar explanations hold when an image is synthetically illuminated with the vignetting effect, the additive factor, or the gamma factor (Fig. 10).
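The synthetic variations above can be sketched as follows. The exact composition used in the paper's formula was not preserved in this copy, so the order gain → bias → gamma, and the specific radial vignetting falloff, are assumptions for illustration.

```python
import numpy as np

def synthetic_illumination(I, m_f=1.0, a_f=0.0, gamma_f=1.0, vignette=False):
    """Apply gain (m_f), bias (a_f), and gamma (gamma_f) changes to an
    image I in [0, 1]; optionally add a local vignetting change.
    The composition order is an assumption, not the paper's formula."""
    out = np.clip(m_f * I + a_f, 0.0, None) ** gamma_f
    if vignette:
        # Local change: a radial falloff toward the image corners.
        P, Q = I.shape
        y, x = np.mgrid[0:P, 0:Q].astype(float)
        r = np.hypot((y - P / 2) / P, (x - Q / 2) / Q)
        out = out * (1.0 - 0.8 * r ** 2)
    return out
```

Applying the right-image transformation while keeping the left image fixed reproduces the asymmetric setup described above.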
5.5 Real radiometric changes
Many multimedia applications need features that are robust to radiometric changes; finding a stereo correspondence is one such application. The images used so far for the performance evaluation of the Gabor features were captured under the same lighting conditions and camera settings. Hence, an additional dataset captured under different lighting conditions and camera exposures is also used [6, 28]; it is shown in Fig. 12. A change of camera exposure is a global transformation similar to a global brightness change, i.e., to the gain change or multiplicative factor of the synthetic illumination variations [7]. Different light sources produce many local radiometric variations in the captured images. The performance of the Gabor features under exposure and lighting changes is shown in Figs. 13 and 14, with quantitative evaluations in Tables 10 and 11. It is observed that the Gabor features under real radiometric variations behave almost the same as under synthetic illumination variations. In Fig. 13 and Table 10, "a/b" denotes that the input image taken at "Exposure a" is used to extract the features, while the reference image used to evaluate the reconstruction is taken at "Exposure b"; the reconstructed image corresponds to "Exposure a". Similarly, in Fig. 14 and Table 11, "a/b" denotes that the input image taken under "Lighting a" is used to extract the features, the reference image is taken under "Lighting b", and the reconstructed image corresponds to "Lighting a".
From all the above experimental results, it is concluded that the real part of the feature extracted from overlapping regions represents the original image more efficiently than the imaginary part, and performs almost as well as the magnitude information. The real Gabor filter extracts texture information, while the imaginary Gabor filter extracts edge information [12]; hence the real Gabor filter is sufficient to represent an image. Additionally, local features extracted from overlapping regions represent an image more efficiently than features extracted globally or from non-overlapping regions. In the overlapping-region method, the Gabor function gives more weight to the pixel for which the feature is extracted than to its neighbors during the convolution operation, which is why these local features outperform the other two Gabor features.
5.6 Performance evaluation of Gabor features for stereo correspondence
Stereo correspondence is considered as one application for investigating the performance of the three Gabor-based features. As explained earlier, the additional information obtained in the form of a disparity map from stereo correspondence may be used in many multimedia applications such as face and facial expression recognition. The efficacy of these features is evaluated from the computed disparity map [19], using the mean square error of the estimated disparity map. Table 12 gives a quantitative comparison of the three Gabor features for stereo correspondence.
6 Conclusion
Gabor wavelet-based features are used in a number of computer vision and multimedia applications. Most well-established pattern recognition algorithms use the magnitude information of the Gabor wavelet for feature extraction, which requires both the real and imaginary coefficients. In this paper, it is validated that the real part of the Gabor filter alone can represent an image efficiently. To compute the real coefficients, only the real part of the Gabor filter bank needs to be stored, and only the convolution outputs of the real part need to be kept. Computing the magnitude, by contrast, requires storing both the real and imaginary parts of the filter bank as well as both sets of convolution outputs. This is why the memory requirement is halved when only the real coefficients are used.
Earlier literature mentions that optimal performance of the 2D Gabor filter can be obtained using the real part of the filter, but offers no concrete experimental validation. In our paper, an experimental evaluation using the 2D Gabor wavelet shows that the real coefficients of the Gabor function are sufficient to represent an image efficiently compared with the imaginary kernel of the Gabor function and the magnitude information. The importance of the real coefficients is also observed when the three Gabor features (GGWF and LGWF) are employed for stereo correspondence.
In this paper, the performance of three Gabor wavelet features, namely GGWF and LGWF for both overlapping and non-overlapping regions, is evaluated. Comparisons are made for different window sizes, numbers of orientations, and scales, and the features are also analyzed under radiometric changes. The metrics used for the comparisons are MSE, CC, QI, and SSI. Experimental results show that the LGWF (overlapping regions) performs better than the other two features, and that the real coefficients of a Gabor filter represent an image more accurately than the imaginary coefficients.
References
Ali AM (2014) A 3D-based pose invariant face recognition at a distance framework. IEEE Trans Inf Forensic Secur 9(12):2158–2169
Bhagavathy S, Tesic J, Manjunath BS (2003) On the Rayleigh nature of Gabor filter outputs. In: Proceedings IEEE International Conference Image Processing, pp 745–748
Daugman J (1980) Two-dimensional spectral analysis of cortical receptive field profiles. Vis Res 20(10):847–856
Daugman JG (1985) Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J Opt Soc A Opt Image Sci Vis 2(7):1160–1169
Daugman JG (1988) Complete discrete 2-D Gabor transforms by neural networks for image analysis and compression. IEEE Trans Acoust Speech Signal Process 36 (7):1169–1179
Hirschmüller H, Scharstein D (2007) Evaluation of cost functions for stereo matching. In: Proceeding International Conference Computer Vision and Pattern Recognition, pp 1–8
Hirschmuller H, Scharstein D (2009) Evaluation of stereo matching costs on images with radiometric differences. IEEE Trans Pattern Anal Mach Intell 31(9):1582–1599
Jahanbin S, Choi H, Bovik AC (2011) Passive multimodal 2-D+3-D face recognition using Gabor features and landmark distances. IEEE Trans Inf Forensic Secur 6(4):1287–1304
Jiang W, Lam KM, Shen TZ (2009) Efficient edge detection using simplified Gabor wavelets. IEEE Trans Syst Man Cyber 39(4):1036–1047
Jones JP, Palmer LA (1987) An evaluation of the 2-D Gabor filter model of simple receptive fields in cat striate cortex. J Neurophysiol 58(6):1233–1258
Kosov S, Scherbaum K, Faber K, Thormahlen T, Seidel H-P (2009) Rapid stereo-vision enhanced face detection. In: Proceedings IEEE International Conference Image Processing, pp 1221–1224
Kumar A, Pang G (2000) Fabric defect segmentation using multichannel blob detectors. Opt Eng 39(12):3176–3190
Lin L, Luo P, Chen X, Zeng K (2012) Representing and recognizing objects with massive local image patches. Pattern Recogn 45:231–240
Liebelt J, Xiao J, Yang J (2006) Robust AAM fitting by fusion of images and disparity data. In: Proceedings IEEE International Conference Computer Vision and Pattern Recognition, pp 2483–2490
Lee TS (1996) Image representation using 2D Gabor wavelets. IEEE Trans Pattern Anal Mach Intell 18(10):959–971
Loizou CP, Pattischis CS, Istepanian RSH, Pantziaris M, Tyllis T, Nicolaides A (2004) Quality evaluation of ultrasound imaging in the carotid artery. In: Proceeding Mediterranean Electrotechnical Conference, pp 395–398
Moghaddam HA, Dehaji MN (2013) Enhanced Gabor wavelet correlogram feature for image indexing and retrieval. Pattern Anal Appl 16:163–177
Mohamed MA, Rashwan HA, Mertsching B, Garcia MA, Puig D (2014) Illumination-robust optical flow approach using local directional pattern. IEEE Trans Circ Syst Video Technol 24(9):1499–1508
Malathi T, Bhuyan MK (2015) Estimation of disparity map of stereo image pairs using spatial domain local Gabor wavelet. IET Comput Vis 9(4):595–602
Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
Peng Z, Li Y, Cai Z, Lin L (2015) Deep boosting: Joint feature selection and analysis dictionary learning in hierarchy. Neurocomputing:1–20
Panetta RL, Liu C, Yang P (2013) A pseudo-spectral time domain method for light scattering computation. In: Kokhanovsky AA (ed) Light Scattering Reviews 8: Radiative transfer and light scattering. Springer, London, p 151
Rajadell O, Garca-Sevilla P, Pla F (2009) Scale analysis of several filter banks for color texture classification. In: Proceedings International Symposium Advances in Visual Computing: Part II, pp 509–518
Shen L, Jia S (2011) Three-dimensional Gabor wavelets for pixel-based hyperspectral imagery classification. IEEE Trans Geosci Remote Sens 49(12):5039–5046
Soares JVB, Leandro JJG, Cesar Jr. RM, Jelinek HF, Cree MJ (2006) Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imag 25(9):1214–1222
Scharstein D, Szeliski R (2002) A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. Int J Comput Vis 47(1):7–42
Scharstein D, Szeliski R (2003) High-accuracy stereo depth maps using structured light. In: Proceeding International Conference Computer Vision and Pattern Recognition, pp 195–202
Scharstein D, Pal C (2007) Learning conditional random fields for stereo. In: Proceeding International Conference Computer Vision and Pattern Recognition, pp 1–8
Scharstein D, Hirschmüller H, Kitajima Y, Krathwohl G, Nesic N, Wang X, Westling P (2014) High-resolution stereo datasets with subpixel-accurate ground truth. In: Proceeding German Conference Pattern Recognition, pp 31–42
Tou JY, Tay YH, Lau PY (2007) Gabor filters and grey-level co-occurrence matrices in texture classification. In: MMU International Symposium Information and Communications Technologies, pp 197–202
Tou JY, Tay YH, Lau PY (2009) Gabor filters as feature images for covariance matrix on texture classification Problem. Advances in neuro-information processing. Lect Notes Comput Sci 5507:745–751
Xu C, Li S, Tan T, Quan L (2009) Automatic 3D face recognition from depth and intensity Gabor features. Pattern Recogn 42(9):1895–1905
Xie X, Lam KM (2009) Facial expression recognition based on shape and texture. Pattern Recogn 42(5):1003–1011
Yang M, Zhang L, Shiu SCK, Zhang D (2013) Gabor feature based robust representation and classification for face recognition with Gabor occlusion dictionary. Pattern Recogn 46:1865–1878
Zúñiga AG, Florindo JB, Bruno OM (2014) Gabor wavelets combined with volumetric fractal dimension applied to texture analysis. Pattern Recogn Lett 36:135–143
Zhang B, Shan S, Chen X, Gao W (2007) Histogram of Gabor phase patterns (HGPP): A novel object representation approach for face recognition. IEEE Trans Image Process 16(1):57–68
Zhang L, Tjondronegoro D, Chandran V (2014) Random Gabor based templates for facial expression recognition in images with facial occlusion. Neurocomputing 145:451–464
Malathi, T., Bhuyan, M.K. Performance analysis of Gabor wavelet for extracting most informative and efficient features. Multimed Tools Appl 76, 8449–8469 (2017). https://doi.org/10.1007/s11042-016-3414-2