Abstract
We propose novel multi-dimensional multi-directional mask maximum edge patterns for bio-medical image retrieval. Standard local binary patterns encode the relationship of neighboring pixels with the center pixel. Local mesh patterns encode the relationship between adjacent pixels surrounding the center pixel. The proposed approach encodes the relationship of neighboring pixels in adjacent planes of a multi-dimensional image in three stages. In the first stage, five sub-images are formed by traversing in five different directions on three planes of a multi-dimensional image. In the second stage, directional masks are applied on each sub-image to find directional edges. In the third stage, maximum edge patterns are found based on the directions of the directional edges. To analyze the performance of the proposed algorithm, we tested it on three benchmark databases, obtaining average retrieval precision of \(56.93\%\) for the top 5 images on MESSIDOR (retinal images), and 93.36 and \(62.49\%\) for the top 10 images on the VIA/I-ELCAP (CT images) and OASIS-MRI databases respectively. The comparison reflects considerable improvement in performance.
1 Introduction
1.1 Motivation
In the last few decades, there has been a rapid growth in severe and critical diseases in India and all over the world, resulting in an increasing need for expert medical services in urban as well as remote places, especially in developing countries. In areas where general clinical practices are present, we can provide a technological solution that can assist those clinics in bringing expert medical services to their patients. Bio-medical imaging has emerged as a very useful technological development in the medical diagnostic field. Biomedical imaging creates visual representations of interior body parts that are useful for medical analysis and diagnosis. There are different types of biomedical imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), fundus imaging, etc. With these techniques in hand, engineers can provide further technological solutions to the medical field. Content-based image retrieval (CBIR) is one such solution in which the engineering and medical streams can work hand in hand. CBIR works in two stages: in the first, the feature extraction stage, database image features are extracted to form a feature vector database; in the second, the similarity matching stage, the distance of the query image features from those of each image in the feature database is measured. A detailed survey on CBIR is discussed in [8, 18, 33].
Different CBIR methods are discussed in [2, 3, 5, 16, 17, 27, 30, 31, 35, 43]. Biomedical images have dominant spatial features, which led us to use local patterns for image indexing and retrieval, as local patterns encode the spatial information of an image. Different existing local pattern methods [4, 7, 9,10,11,12,13,14, 22,23,24,25,26, 36,37,42] used for image retrieval are discussed in the related work section. The proposed feature descriptor takes a different approach to encoding the image's spatial information: we traverse the image across multiple planes and in multiple directions to encode the relationship of neighboring pixels.
1.2 Related work
Mathieu et al. [17] proposed a method for CBIR that used adapted wavelets and weighted distances between signatures. These weighted distances are obtained by computing the signature distance between the query and database images. Kenneth et al. [35] proposed a method to retrieve retinal images from a retinal image database in which they estimated the posterior probability with K-NN, making use of a weighted summation of the similarity between the query vector and neighboring indexes. Quellec et al. [30] used an optimized wavelet transform to generate a signature for each image; these image signatures are later used for retrieval. Javed et al. [2] used the scale-invariant feature transform (SIFT) and bag-of-words (BoW) model together to describe and differentiate 3D images of computed tomographic colonography computer-aided detection (CTCCAD). They used the Euclidean distance metric for similarity matching of the BoW histograms to determine the similarity between the query image and an image in the database. Baby et al. [3] proposed a method for content-based image retrieval using the dual-tree complex wavelet transform (DT-CWT) for the MESSIDOR database of fundus images, along with the generalized Gaussian distribution (GGD) model and Kullback–Leibler divergence (KLD) measurement. Naguib et al. [27] proposed a method for content-based image retrieval of diabetic macular edema (DME) images, in which they divided the macula into three concentric regions and then used texture discontinuities of these regions to represent lesions in the retina. The distance measure gives higher weights to lesions closer to the fovea to reflect the severity of DME. Chi et al. [5] proposed content-based image retrieval of multiphase CT images for focal liver lesion characterization, in which they used a hybrid generative-discriminative focal liver lesion (FLL) detection method to extract multiphase density and texture features and a non-rigid B-spline registration method for localizing FLLs on multiphase CT scans. Romero et al.
[31] proposed a new method for detection of microaneurysms. They applied the bottom-hat transform to remove reddish regions, then applied the hit-or-miss transform to remove blood vessels from RoIs. Murala et al. [22] proposed directional binary wavelet patterns for bio-medical image indexing and retrieval. They used the binary wavelet transform (BWT) to extract features from multi-resolution binary images using local binary patterns. Murala et al. [24] proposed local ternary co-occurrence patterns, which encode the co-occurrence of similar ternary edges and extract features by applying the Gabor transform. Bala et al. [4] proposed local texton XOR patterns (LTxXORP), in which they first found a texton image by converting the RGB image to an HSV image. They then applied an XOR operation between the center pixel and surrounding pixels to find LTxXORPs; finally, these LTxXORPs and the HSV histogram are used to form a feature vector. Deep et al. [7] proposed directional local ternary quantized extrema patterns (DLTerQEP) for biomedical image retrieval, in which they encoded the spatial relationship between the center pixel and neighbor pixels along given directions (i.e., \({{0}^{\circ }},{{45}^{\circ }},{{90}^{\circ }}\,\mathrm{{and}}\,{{135}^{\circ }}\)). Verma et al. [36] proposed local tri-directional patterns for image retrieval, wherein they encoded the relationship of local pixel intensities based on three directions in the neighborhood. Murala et al. [23] proposed local tetra patterns, which encode the relation between a selected reference pixel and its neighbors based on directions calculated using first-order derivatives in the vertical and horizontal directions. Murala et al. [25] proposed local mesh patterns, where they select a reference pixel in an image, locate its surrounding neighbors, and later encode the relationship among the located neighbors. Murala et al.
[26] proposed spherical symmetric 3-D local ternary patterns, where they encode the relationship with the center pixel of surrounding neighbors extracted from five selected directions in 3D planes (R–G–B planes). Vipparthi et al. [37] proposed local directional mask maximum edge patterns (LDMaMEP), in which they collected maximum edge patterns (MEP) and maximum edge position patterns (MEPP) from the magnitude directional edges of an image. Vipparthi et al. [41] proposed color directional local quinary patterns (CDLQP). CDLQP extracts the channel-wise directional edge information between the reference pixel and its surrounding neighbourhoods on the individual R, G and B planes by computing grey-level differences based on quinary values. Vipparthi et al. [38] proposed dual directional multi-motif XOR patterns, in which they used one standard \(2\times 2\) grid at a distance of two and four \(1\times 3\) smart grids along dual directions for a new motif representation, which then undergoes an XOR operation to generate multi-motif XOR patterns. Vipparthi et al. [40] proposed directional local motif XOR patterns (DLMXoRPs). They calculated motifs using \(1\times 3\) grids to extract all directional information; an XOR operation is later applied on the transformed new motif images. Vipparthi et al. [39] proposed local Gabor maximum edge position octal patterns (LGMEPOP). In this method they found maximum edge positions (MEP) on Gabor responses, which gave eight edges based on the relationship between the reference pixel and its neighbors. LGMEPOP uses the first three dominant MEPs to generate octal codes, which are later encoded into octal patterns. Vipparthi et al. [42] proposed multi-joint histogram based modeling. In this approach they constructed joint correlation histograms between the motif and texton maps. Vipparthi et al. [37] proposed the local extreme complete trio pattern (LECTP), which integrates local extreme sign trio patterns (LESTP) and a magnitude local operator (MLOP).
These patterns extract complete extreme-to-minimal edge information in all possible directions using trio values. Dubey et al. [11] proposed multichannel decoded local binary patterns, in which they used adder- and decoder-based schemes to combine LBPs from different channels. Dubey et al. [10] proposed the local bit-plane decoded pattern (LBDP), where they calculated local bit-plane transformed values for each image pixel using the bit-plane binary contents of its neighboring pixels. LBDPs are then generated from the difference between the center pixel intensity and the transformed values. Dubey et al. [9] proposed the local wavelet pattern (LWP) for image retrieval. They first used neighbor pixel relations for local wavelet decomposition; the relationship of these decomposed values with the transformed center pixel value is encoded to find the LWP. Yao et al. [44] proposed LEPSEG for image segmentation and LEPINV for image retrieval; the two differ in their sensitivity to rotation and scale variations, LEPSEG being sensitive to them and LEPINV invariant. Sastry et al. [32] proposed an image retrieval algorithm using scale-invariant (SI) and rotation-invariant (RI) Gabor texture (GT) features. The individual RI and SI Gabor representations are obtained by modifying the conventional Gabor filter operation. Marko et al. [15] modified the conventional LBP to obtain the center-symmetric local binary pattern (CS-LBP), which significantly reduces the feature vector length. Moghaddam et al. [21] proposed an image indexing and retrieval method based on a combination of multiresolution image decomposition and a color correlation histogram, in which they computed wavelet coefficients of the image using the Gabor wavelet and later computed one-directional autocorrelograms of the wavelet coefficients to form an index vector. Moghaddam et al.
[20] proposed the enhanced Gabor wavelet correlogram, in which they optimized Gabor wavelet features using a quantization threshold and later computed the autocorrelogram of the quantized wavelet coefficients to store as an index vector. Heikkilä et al. [15] did local region matching using the CS-LBP, which reduced the dimension of the LBP.
Most of the above-discussed methods have used the texture information of a single plane of an image as the dominant information. The proposed approach traverses three planes of an image in five different directions to collect multi-dimensional texture information, so we obtain more detailed features for comparison.
1.3 Main contribution
The proposed multi-dimensional multi-directional mask maximum edge patterns (\(\hbox {MD}^{2}\hbox {MaMEP}\)) approach takes into consideration the fact that biomedical images have dominant spatial information. Considering texture as a dominant feature, our method encodes the texture information from neighbouring planes in five different directions. The proposed multi-dimensional multi-directional approach helps us encode more detailed texture information of the image. We have carried out three experiments on three different databases, namely MESSIDOR [6] (a diabetic retinopathy database), the OASIS-MRI [19] database, and the VIA/I-ELCAP CT [1] database. Experimental results are given in the results and discussion section.
The paper is arranged as follows: Sect. 2 introduces some existing local patterns that inspired our approach. Section 3 gives a detailed description of our methodology. Section 4 contains a discussion of our experimental results. Section 5 gives concluding remarks.
2 Local patterns
2.1 Local binary patterns
Ojala et al. [28, 29] proposed local binary patterns (LBP) for texture classification. LBP encodes the relationship of center pixel with the neighbor pixels. The relationship is calculated using (1) and (2).
where \(I_{n}\), \(I_\mathrm{{c}}\) indicate pixel intensity of neighbor pixel and center pixel respectively. P indicates number of neighbors and R indicates radius of neighborhood.
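Equations (1) and (2) themselves are not reproduced above; with the notation just defined, the standard LBP of [28, 29] takes the form:

```latex
\mathrm{LBP}_{P,R} \;=\; \sum_{n=1}^{P} 2^{\,(n-1)}\, f_{1}\!\left(I_{n}-I_{\mathrm{c}}\right),
\qquad
f_{1}(x)=
\begin{cases}
1, & x \ge 0\\
0, & x < 0
\end{cases}
```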
2.2 Local mesh patterns
Murala et al. [25] proposed new image retrieval approach using local mesh patterns (LMeP). LMeP is calculated based on the relationship of neighbors with the given center pixel in an image. The LMeP calculation is carried out using (3).
where k represents LMeP index and mod (x, y) returns the remainder for x / y operation. P indicates number of neighbors and R indicates neighborhood radius.
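Equation (3) is likewise not reproduced; a common statement of the LMeP of [25], with \(f_{1}\) as in the LBP definition above, is the following (the exact range of the mesh index k used in the paper may differ):

```latex
\mathrm{LMeP}^{k}_{P,R} \;=\; \sum_{n=1}^{P} 2^{\,(n-1)}\,
f_{1}\!\left( I_{\operatorname{mod}(n+k-1,\;P)+1} - I_{n} \right),
\qquad k = 1, 2, \ldots
```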
2.3 Local directional mask maximum edge patterns (LDMaMEP)
Vipparthi et al. [37] proposed new image retrieval approach using LDMaMEP in which they first obtain directional edges of image using directional masks, which are later used for collecting maximum edge patterns (MEP), and maximum edge position patterns (MEPP). These MEP and MEPPs are used as feature vectors that are later used for image retrieval.
3 Methodology
3.1 Gaussian filter bank
If the input image I is a color image, then we use the R–G–B planes separately to derive five directional images. But if the image I is a gray image, then we use a Gaussian filter bank with different standard deviations to derive three Gaussian images using (4) and (5):
For different values of \(\sigma \), i.e. \(\sigma 1\), \(\sigma 2\), \(\sigma 3\), we convolve \(G(x,y,\sigma )\) with I(x, y) as given in (5):
These three Gaussian images are used as three planes of the original image and using (7)–(11) five directional images are derived.
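As a minimal sketch of Eqs. (4)–(5), the three Gaussian planes for a gray image can be generated as below; the specific \(\sigma\) values and the hand-rolled separable convolution are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # Sampled, normalized 1-D Gaussian: Eq. (4) in separable form
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    # Convolution of Eq. (5): filter rows, then columns (edge-replicated border)
    r = max(1, int(3 * sigma))
    k = gaussian_kernel1d(sigma, r)
    padded = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='valid'), 0, tmp)

# Three blurred versions of a gray image act as the three planes
gray = np.random.rand(32, 32)
planes = np.stack([gaussian_blur(gray, s) for s in (0.5, 1.0, 1.5)], axis=-1)
```

The stack of three blurred images then plays the same role as the R–G–B planes of a color input.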
3.2 Proposed approach
In the proposed multi-dimensional multi-directional approach, we extract a color sub-grid \(({{D}_{3\times 3\times {p}}})\) from an input color image \(I\,(m,n,p)\) using (6).
After sub-grid extraction, we traverse \(({{D}_{3\times 3\times {p}}})\) in five symmetric directions to form five sub-grids of size \(3\times 3\) each, as shown in Fig. 1 and calculated using (7)–(11):
where the \({{I}_{1}}\) sub-grid is the G-plane of \(({{D}_{3\times 3\times {p}}})\), the \({{I}_{2}}\) sub-grid is derived by traversing the 2nd row of all three planes of the sub-grid \(({{D}_{3\times 3\times {p}}})\), the \({{I}_{3}}\) sub-grid is derived by traversing the 2nd column of all three planes, the \({{I}_{4}}\) sub-grid is derived by traversing diagonally on all three planes, and the \({{I}_{5}}\) sub-grid is derived by traversing anti-diagonally on all three planes of \(({{D}_{3\times 3\times {p}}})\).
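A sketch of the traversals of Eqs. (7)–(11), following the textual description above (the exact orientation conventions of Fig. 1, which is not reproduced here, are an assumption):

```python
import numpy as np

def directional_subgrids(D):
    """Five 3x3 sub-grids from a 3x3x3 grid D (rows x cols x planes)."""
    I1 = D[:, :, 1]                                               # middle (G) plane
    I2 = np.stack([D[1, :, p] for p in range(3)])                 # 2nd row of each plane
    I3 = np.stack([D[:, 1, p] for p in range(3)])                 # 2nd column of each plane
    I4 = np.stack([np.diag(D[:, :, p]) for p in range(3)])        # main diagonal
    I5 = np.stack([np.diag(np.fliplr(D[:, :, p])) for p in range(3)])  # anti-diagonal
    return I1, I2, I3, I4, I5

D = np.arange(27, dtype=float).reshape(3, 3, 3)
I1, I2, I3, I4, I5 = directional_subgrids(D)
```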
Directional masks are then applied to these five directional images to produce directional edges. There are eight standard directional masks, as shown in Fig. 2. These masks are convolved with the five sub-grids obtained from (7)–(11). Each of the five images produces an eight-element directional edge vector, so there are five directional edge vectors in total. These are calculated using (12),
where, \({{I}_{\alpha }}\) is \({{\alpha }\mathrm{{th}}}\) directional sub-grid and \(\mathrm{M{a}}_{\beta }\) is \({{\beta }\mathrm{{th}}}\) mask (shown in Fig. 2) applied on the sub-grid.
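The eight directional masks of Fig. 2 are not reproduced here; the sketch below substitutes the eight Kirsch compass masks (an assumption) to show how the eight-element edge vector \(\mathrm{Dir}(\alpha,\beta)\) of Eq. (12) is formed for one \(3\times 3\) sub-grid:

```python
import numpy as np

def kirsch_masks():
    # Eight compass masks: rotate the 8 border weights of the "north" mask
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [5, 5, 5, -3, -3, -3, -3, -3]
    masks = []
    for shift in range(8):
        rot = vals[8 - shift:] + vals[:8 - shift]
        m = np.zeros((3, 3), int)
        for (r, c), v in zip(ring, rot):
            m[r, c] = v
        masks.append(m)
    return masks

def directional_edges(subgrid):
    # Dir(alpha, beta): response of each of the 8 masks on one 3x3 sub-grid
    return np.array([float(np.sum(subgrid * m)) for m in kirsch_masks()])

edges = directional_edges(np.ones((3, 3)))  # flat region -> all responses are 0
```

Each mask's weights sum to zero, so a constant region yields zero response in every direction, as the flat-region example illustrates.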
The directional edges are then sorted by magnitude, with their actual values stored in descending order as shown in (13); each value is then replaced by 1 if it is positive and by 0 otherwise, as given in (16),
where \({{\underset{\beta }{\mathop {\max }}\,}^{k}}(\left| \mathrm{Dir}(\alpha ,\beta ) \right| )\) gives the \({{k}\mathrm{{th}}}\) maximum value of the vector Dir, irrespective of sign, over the range of \(\beta \), and \(\arg ({{\underset{\beta }{\mathop {\max }}\,}^{k}}(\left| \mathrm{Dir}(\alpha ,\beta ) \right| ))\) gives the index of that \({{k}\mathrm{{th}}}\) maximum value. In the DSD vector, the actual values of the directional vectors are stored in descending order, and in the POS vector [calculated using Eq. (14)] the respective indexes of the directional values are stored. Using the directional vectors in (13), binary patterns are derived using (16). The MEP is then calculated using (16) and, using the respective positions of the directional edges from (14), the MEPPs are calculated with the help of (17). A pictorial explanation of the MEP and MEPP calculation is given in Fig. 3.
For a \(3\times 3\) grid segment there will be one MEP and four MEPPs. MEP values range from 0 to 255 and MEPP values range from 0 to 63. For one directional image the feature vector length is therefore \(256+4\times 64=512\) (one MEP and four MEPPs). So the five directional images produce five such feature vectors, i.e. a total feature vector length of \(5\times 512\) for an image.
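A sketch of the MEP/MEPP encoding described above: the sign-to-bit rule follows Eq. (16), while the packing of the sorted positions into four 0–63 codes is an assumption consistent with the stated value ranges (Fig. 3 is not reproduced here):

```python
import numpy as np

def mep_mepp(dir_vec):
    order = np.argsort(-np.abs(dir_vec), kind='stable')   # positions, desc. |edge|
    dsd = dir_vec[order]                                  # DSD: actual values, Eq. (13)
    bits = (dsd > 0).astype(int)                          # Eq. (16): positive -> 1
    mep = int(sum(b << k for k, b in enumerate(bits)))    # 8 bits -> 0..255
    # Four position codes: consecutive pairs of POS entries, each in 0..63
    mepps = [int(order[2 * i] * 8 + order[2 * i + 1]) for i in range(4)]
    return mep, mepps

vec = np.array([3., -1., 2., -4., 5., -6., 7., -8.])
mep, mepps = mep_mepp(vec)
```

Histogramming the MEP (256 bins) and the four MEPPs (64 bins each) over all pixel positions of one directional image yields the 512-bin feature vector discussed above.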
3.3 Similarity measurement
In the proposed feature extraction algorithm, the feature vector for the query image (Q) is represented as \({{f}_\mathrm{{Q}}}=[{{f}_{\mathrm{{{Q}}_{1}}}},{{f}_{{\mathrm{{Q}}_{2}}}},{{f}_{{\mathrm{{Q}}_{3}}}},\ldots ,{{f}_{{\mathrm{{Q}}_\mathrm{{N}}}}}]\). Similarly, the feature vector for database images is represented as \({{f}_\mathrm{{D{B}}_{ {p}}}}=[{{f}_\mathrm{{D{{B}}_{ {p}}}_{1}}},{{f}_\mathrm{{D{{B}}_{ {p}}}_{2}}},{{f}_\mathrm{{D{{B}}_{ {p}}}_{3}}},\ldots ,{{f}_\mathrm{{D{{B}}_{ {p}}}_\mathrm{{N}}}}]\), where \(p=(1,2,\ldots , \hbox {DB})\). For similarity matching, the \({{d}_{1}}\) similarity distance metric is used, which is computed using (18):
where Q is the query image, N is the length of the feature vector, DB is the number of database images, \({{f}_\mathrm{{D{{B}_{ {p,q}}}}}}\) is the \({{q}\mathrm{{th}}}\) feature of the \({{p}\mathrm{{th}}}\) image in the database, and \({{f}_\mathrm{{{{Q}_{ {q}}}}}}\) is the \({{q}\mathrm{{th}}}\) feature of the query image. Our main aim is to choose the top n images that are most similar to the query image.
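Equation (18) is not reproduced above; the \(d_{1}\) metric commonly used with this family of descriptors (e.g. in [25, 37]) has the form sketched below, which we take as an assumption for this illustration:

```python
import numpy as np

def d1_distance(fq, fdb):
    # d1: sum of absolute differences, normalized per feature bin
    fq, fdb = np.asarray(fq, float), np.asarray(fdb, float)
    return float(np.sum(np.abs(fdb - fq) / (1.0 + fdb + fq)))

def retrieve_top_n(fq, db_feats, n):
    # Rank database feature vectors by d1 distance; return indices of the n best
    d = [d1_distance(fq, f) for f in db_feats]
    return list(np.argsort(d)[:n])
```

For non-negative histogram features the denominator never vanishes, and the per-bin normalization keeps heavily populated bins from dominating the distance.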
4 Results and discussions
We performed experiments on three different databases, namely the MESSIDOR [6] diabetic retinopathy database, the OASIS-MRI [19] database, and the VIA/I-ELCAP CT [1] database. For all three databases we worked on 3D planes.
The performance is evaluated in terms of Precision [average retrieval precision (ARP)], and Recall [average retrieval rate (ARR)] which are calculated using (19)–(22).
where \({{N}_\mathrm{{R}}}\) is the set of all relevant images in the database, \({{N}_\mathrm{{RT}}}\) is the set of all retrieved images, and \({{N}_\mathrm{{R}}}\cap \,{{N}_\mathrm{{RT}}}\) gives the total number of relevant images retrieved. \({{n}_\mathrm{{R}}}\) is the number of relevant images, \({{n}_\mathrm{{RT}}}\) is the number of retrieved images, \({{I}_{i}}\) is the \({{i}\mathrm{{th}}}\) query image, and the total number of images in the database is denoted by DB.
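The per-query quantities behind Eqs. (19)–(22) reduce to the sketch below; ARP and ARR then average these values over every database image used as a query:

```python
def precision_recall(retrieved_labels, query_label, n_relevant):
    # Precision: fraction of retrieved images that are relevant to the query.
    # Recall: fraction of all relevant images that were actually retrieved.
    hits = sum(1 for lbl in retrieved_labels if lbl == query_label)
    return hits / len(retrieved_labels), hits / n_relevant
```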
For performance analysis of the proposed method, different state-of-the-art methods are used for comparison: LBP [29], DT-CWT + GGD [11], LDMaMEP [37], GLBP [29], DBWP [22], LMeP [25], GLMeP [25], INTH [34], GLCM1 [34], GLCM2 [34], the first four central moments of a Gaussian filter bank with four scales (GFB) [34], and SS-3D-LTP [26].
4.1 Result analysis on MESSIDOR database
In this experiment we applied the proposed method to the MESSIDOR [6] database, which consists of 1200 retinal images captured from patients with diabetic retinopathy. These images are divided into four groups based on the severity of disease. The images are available in three sizes: \(1440 \times 960,\,\,2240 \times 1488\,\) and \(\,2304 \times 1536\). All the images are annotated with retinopathy grades and specifications based on the number of micro-aneurysms, hemorrhages and signs of neovascularization present in the image, as given in Table 1. Images in which the above abnormalities are absent are considered normal images.
In this experiment the top five images are retrieved for a given query image. The comparison of group-wise precision as well as average precision with other existing state-of-the-art methods is illustrated in Table 2. It is clear that the proposed method shows considerable improvement in ARP compared to existing methods. If we compare \((\hbox {MD})^{2}\hbox {MaMEP}\) with the previous method DT-CWT, we get a noticeable improvement from 53.7 to \(56.93\%\) in ARP. If we consider local directional mask maximum edge patterns (LDMaMEP), we get an improvement from 53.13 to \(56.93\%\) in ARP. Also, compared with volumetric local directional triplet patterns (VLDTP), we get an improvement from 55.73 to \(56.93\%\) in ARP.
4.2 Result analysis on OASIS-MRI database
This experiment is carried out on the OASIS-MRI [19] database, which is publicly available and consists of 421 images recorded from patients aged between 18 and 96 years. For experimental purposes these images are divided into four groups based on the shape of the ventricles in the images; the groups have 124, 102, 89 and 106 images respectively. Experimental results of the proposed feature descriptor in terms of ARP, compared with other existing methods, are depicted in Table 3. From Table 3, it is clear that there is significant improvement in ARP for the individual groups as well as in overall ARP. The retrieval results of the proposed feature descriptor considering the top n images, compared with other existing state-of-the-art feature descriptors, are given in Table 4. When the proposed method [\((\hbox {MD})^{2}\hbox {MaMEP}\)] is compared with LDMaMEP and SS-3D-LTP, we get noticeable improvements in ARP from 57.87 to \(62.49\%\) and from 53.32 to \(62.49\%\) respectively. From this experimentation, we can say that there is a significant increase in overall ARP for the numbers of top matches considered.
4.3 Result analysis on VIA/I-ELCAP CT database
Experiment 3 is performed on the VIA/I-ELCAP CT database [1], a publicly available database jointly created by the vision and image analysis group (VIA) and the international early lung cancer action program (I-ELCAP). These CT images have \(512\times 512\) resolution and are recorded in digital imaging and communications in medicine (DICOM) format. We used 1000 such CT images: 10 scans of 100 images each.
Experimental results in terms of ARP and ARR, compared with other existing methods to analyze the effectiveness of the proposed feature descriptor, are given in Tables 5 and 6. When the proposed method [\((\hbox {MD})^{2}\hbox {MaMEP}\)] is compared with LMeP and GLMeP, it achieves noticeable improvements in ARP from 52.69 to \(60.40\%\) and from 54.56 to \(60.40\%\) (\(n=100\) top matches) respectively. We get 93.36 and \(60.40\%\) ARR for the top 10 matches.
5 Conclusion
We have proposed a novel approach for bio-medical image retrieval, tested on three publicly available standard bio-medical databases. The proposed method is novel in how it encodes the relationship of neighbors: it considers multiple dimensions of an image to encode local depth information, and it accesses local information in multiple directions to find directional edges; due to this process, our method is able to retrieve images accurately. Other methods in the literature mostly consider one-dimensional image information for encoding, resulting in lower retrieval accuracy; e.g. LMeP encodes the relationship of adjacent neighbors, whereas our proposed \((\hbox {MD})^2\hbox {MaMEP}\) encodes the relationship of neighbors in adjacent planes. We carried out three experiments on three different publicly available bio-medical databases. We obtained \(56.93\%\) average precision (\(n=5\)) on the MESSIDOR retinal database, \(60.40\%\) average precision (\(n=100\)) and \(93.36\%\) average precision (\(n=10\)) on the VIA/I-ELCAP CT database, and \(62.49\%\) average precision (\(n=10\)) on the OASIS-MRI database. Our method gives considerable improvement in ARP as well as ARR compared to other existing methods on the respective databases. The proposed \((\hbox {MD})^2\hbox {MaMEP}\) method can be further applied to natural and texture databases.
References
ELCAP-CT database, available at http://www.via.cornell.edu/databases/lungdb.html. Accessed 27 Nov 2017
Aman JM, Yao J, Summers RM (2010) Content-based image retrieval on CT colonography using rotation and scale invariant features and bag-of-words model. In: 2010 IEEE International symposium on biomedical imaging: from nano to macro. IEEE, pp 1357–1360
Baby CG, Chandy DA (2013) Content-based retinal image retrieval using dual-tree complex wavelet transform. In: 2013 International conference on signal processing image processing & pattern recognition (ICSIPR), IEEE. pp 195–199
Bala A, Kaur T (2016) Local texton XOR patterns: a new feature descriptor for content-based image retrieval. Eng Sci Technol Int J 19(1):101–112
Chi Y, Zhou J, Venkatesh SK, Tian Q, Liu J (2013) Content-based image retrieval of multiphase CT images for focal liver lesion characterization. Med Phys 40(10):1–13
Decencière E, Zhang X, Cazuguel G, Lay B, Cochener B, Trone C, Gain P, Ordonez R, Massin P, Erginay A, Charton B, Klein JC (2014) Feedback on a publicly distributed database: the Messidor database. Image Anal Stereol 33(3):231–234. https://doi.org/10.5566/ias.1155
Deep G, Kaur L, Gupta S (2016) Biomedical image indexing and retrieval descriptors: a comparative study. Procedia Comput Sci 85:954–961
Dharani T, Aroquiaraj IL (2013) A survey on content based image retrieval. In: 2013 International conference on pattern recognition, informatics and mobile engineering (PRIME). IEEE, pp 485–490
Dubey SR, Singh SK, Singh RK (2015) Local wavelet pattern: a new feature descriptor for image retrieval in medical CT databases. IEEE Trans Image Process 24(12):5892–5903
Dubey SR, Singh SK, Singh RK (2016) Local bit-plane decoded pattern: a novel feature descriptor for biomedical image retrieval. IEEE J Biomed Health Inform 20(4):1139–1147
Dubey SR, Singh SK, Singh RK (2016) Multichannel decoded local binary patterns for content-based image retrieval. IEEE Trans Image Process 25(9):4018–4032
Dudhane A, Shingadkar G, Sanghavi P, Jankharia B, Talbar S (2017) Interstitial lung disease classification using feed forward neural networks. In: Proceedings of advances in intelligent systems research, pp 515–521
Dudhane AA, Talbar SN (2018) Multi-scale directional mask pattern for medical image classification and retrieval. In: Proceedings of 2nd international conference on computer vision & image processing. Springer, pp 345–357
Gonde AB, Patil PW, Galshetwar GM, Waghmare LM (2017) Volumetric local directional triplet patterns for biomedical image retrieval. In: 2017 Fourth international conference on image information processing (ICIIP). IEEE, pp 1–6
Heikkilä M, Pietikäinen M, Schmid C (2009) Description of interest regions with local binary patterns. Pattern Recognit 42(3):425–436
Jai-Andaloussi S, Lamard M, Cazuguel G, Tairi H, Meknassi M, Cochener B, Roux C (2010) Content based medical image retrieval based on bemd: optimization of a similarity metric. In: 2010 Annual international conference of the IEEE engineering in medicine and biology society (EMBC). IEEE, pp 3069–3072
Lamard M, Cazuguel G, Quellec G, Bekri L, Roux C, Cochener B (2007) Content based image retrieval based on wavelet transform coefficients distribution. In: 2007 29th Annual international conference of the IEEE engineering in medicine and biology society (EMBS). IEEE, pp 4532–4535
Liu Y, Zhang D, Lu G, Ma WY (2007) A survey of content-based image retrieval with high-level semantics. Pattern Recognit 40(1):262–282
Marcus DS, Fotenos AF, Csernansky JG, Morris JC, Buckner RL (2010) Open access series of imaging studies: longitudinal MRI data in nondemented and demented older adults. J Cogn Neurosci 22(12):2677–2684
Moghaddam HA, Dehaji MN (2013) Enhanced Gabor wavelet correlogram feature for image indexing and retrieval. Pattern Anal Appl 16(2):163–177
Moghaddam HA, Khajoie TT, Rouhi AH, Tarzjan MS (2005) Wavelet correlogram: a new approach for image indexing and retrieval. Pattern Recognit 38(12):2506–2518
Murala S, Maheshwari R, Balasubramanian R (2012) Directional binary wavelet patterns for biomedical image indexing and retrieval. J Med Syst 36(5):2865–2879
Murala S, Maheshwari R, Balasubramanian R (2012) Local tetra patterns: a new feature descriptor for content-based image retrieval. IEEE Trans Image Process 21(5):2874–2886
Murala S, Wu QJ (2013) Local ternary co-occurrence patterns: a new feature descriptor for MRI and CT image retrieval. Neurocomputing 119:399–412
Murala S, Wu QJ (2014) Local mesh patterns versus local binary patterns: biomedical image indexing and retrieval. IEEE J Biomed Health Inform 18(3):929–938
Murala S, Wu QJ (2015) Spherical symmetric 3D local ternary patterns for natural, texture and biomedical image indexing and retrieval. Neurocomputing 149:1502–1514
Naguib AM, Ghanem AM, Fahmy AS (2013) Content based image retrieval of diabetic macular edema images. In: 2013 IEEE 26th International symposium on computer-based medical systems (CBMS). IEEE, pp 560–562
Ojala T, Pietikäinen M, Harwood D (1996) A comparative study of texture measures with classification based on featured distributions. Pattern Recognit 29(1):51–59
Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987
Quellec G, Lamard M, Cazuguel G, Cochener B, Roux C (2010) Wavelet optimization for content-based image retrieval in medical databases. Med Image Anal 14(2):227–241
Rosas-Romero R, Martínez-Carballido J, Hernández-Capistrán J, Uribe-Valencia LJ (2015) A method to assist in the diagnosis of early diabetic retinopathy: image processing applied to detection of microaneurysms in fundus images. Comput Med Imaging Graph 44:41–53
Sastry CS, Ravindranath M, Pujari AK, Deekshatulu BL (2007) A modified Gabor function for content based image retrieval. Pattern Recognit Lett 28(2):293–300
Smeulders AW, Worring M, Santini S, Gupta A, Jain R (2000) Content-based image retrieval at the end of the early years. IEEE Trans Pattern Anal Mach Intell 22(12):1349–1380
Sorensen L, Shaker SB, De Bruijne M (2010) Quantitative analysis of pulmonary emphysema using local binary patterns. IEEE Trans Med Imaging 29(2):559–569
Tobin KW, Abramoff MD, Chaum E, Giancardo L, Govindasamy VP, Karnowski TP, Tennant MT, Swainson S (2008) Using a patient image archive to diagnose retinopathy. In: 2008 30th Annual international conference of the IEEE engineering in medicine and biology society (EMBS). IEEE, pp 5441–5444
Verma M, Raman B (2016) Local tri-directional patterns: a new texture feature descriptor for image retrieval. Digit Signal Process 51:62–72
Vipparthi SK, Murala S, Gonde AB, Wu QJ (2016) Local directional mask maximum edge patterns for image retrieval and face recognition. IET Comput Vis 10(3):182–192
Vipparthi SK, Murala S, Nagar SK (2015) Dual directional multi-motif XOR patterns: a new feature descriptor for image indexing and retrieval. Opt Int J Light Electron Opt 126(15):1467–1473
Vipparthi SK, Murala S, Nagar SK, Gonde AB (2015) Local Gabor maximum edge position octal patterns for image retrieval. Neurocomputing 167:336–345
Vipparthi SK, Nagar S (2014) Expert image retrieval system using directional local motif XOR patterns. Expert Syst Appl 41(17):8016–8026
Vipparthi SK, Nagar SK (2014) Color directional local quinary patterns for content based indexing and retrieval. Hum Centric Comput Inf Sci 4(1):6
Vipparthi SK, Nagar SK (2014) Multi-joint histogram based modelling for image indexing and retrieval. Comput Electr Eng 40(8):163–173
Xavier L, Mary ITB, Raj WND (2011) Content based image retrieval using textural features based on pyramid-structure wavelet transform. In: 2011 3rd International conference on electronics computer technology (ICECT), vol 4. IEEE, pp 79–83
Yao CH, Chen SY (2003) Retrieval of translated, rotated and scaled color textures. Pattern Recognit 36(4):913–929
Acknowledgements
Our sincere thanks to Mr. Prashant W. Patil and Mr. Akshay A. Dudhane (Research Scholars), Computer Vision and Pattern Recognition Laboratory, IIT Ropar, Punjab, India for their valuable technical discussions during this work. We would like to extend our gratitude towards the anonymous reviewers, because of their insights the manuscript quality improved.
Galshetwar, G.M., Waghmare, L.M., Gonde, A.B. et al. Multi-dimensional multi-directional mask maximum edge pattern for bio-medical image retrieval. Int J Multimed Info Retr 7, 231–239 (2018). https://doi.org/10.1007/s13735-018-0156-0