1 Introduction

Although content-based image retrieval has recently been a highly active research area, image complexity poses many challenges. Many studies have developed algorithms to address these issues and to retrieve and distinguish images accurately. Many of the proposed algorithms employ image-processing techniques to extract profiles and match their properties for similarity [38, 53].

Nevertheless, most algorithms operate on black-and-white images. Driven by the need to retrieve the most relevant images from large collections, extensive research has recently been devoted to image retrieval. The first ideas, proposed in the early 1970s, relied on textual annotations rather than visual characteristics: relevant images were presented to users according to their keywords, an approach known as text-based image retrieval [34]. The need to outsource such computationally intensive image-analysis tasks to cloud computing is growing; however, effectively protecting private image and multimedia data is the major obstacle impeding the application of cloud computing techniques to large image and multimedia collections [26]. To address this challenge, a scheme supporting Content-Based Image Retrieval (CBIR) over encrypted images, without revealing sensitive information to the cloud server, has been proposed [32]. Several studies have focused on feature extraction for image retrieval, employing methods such as machine learning [25, 40], ontology-based approaches [37, 38, 41], and wavelets [23, 24, 35, 39]. The fundamental criterion in CBIR is visual similarity; when the feature vectors of different semantic categories are similar, CBIR performance degrades, since images with no semantic relationship are retrieved [50].

The systems mentioned above suffer from problems such as manual margin annotation, which requires a long time, incurs high cost, and depends heavily on the designer's perception of the image. Furthermore, since different users interpret the same image concepts differently, the captions attached to images do not cover the whole content; hence, text-based queries are incomplete. CBIR is therefore required in critical areas such as medical science and industry. Image and video retrieval methods are discussed by Sundaram et al. [51], including learned lexicon-driven interactive video retrieval, a linear-algebraic technique with an application to semantic image retrieval, and logistic regression for image retrieval. In addition, Tyagi [55] introduced several strategies for bridging the semantic gap, reflecting recent CBIR advances. The LBP method, an important one, is reviewed by Brahnam et al. [6], who survey the recent literature and some of the best-performing LBP variants based on texture-analysis research. The HOG method for image analysis is surveyed by Jahne [22].

A new CBIR model based on an effective integration of color, texture, and shape features is proposed in this paper to reconstruct the corrupted portions of images. The proposed algorithm first normalizes and denoises the image and modifies the color channels using the SLIC superpixel method. Afterward, state-of-the-art techniques such as HOG, LBP, and HSV histograms are applied to the dataset images. Combining texture, color, and shape information improves the retrieval performance on reconstructed images relative to the ground truth. The main novelty of this work is adapting well-known techniques to achieve high content-retrieval rates in various applications, including clinical use. The highest precision of 98.71% on a liver CT-scan image demonstrates the proposed method's efficiency.

The other innovations of this paper can be described as follows.

  • The image is normalized, and noise reduction is performed using a median filter.

  • The color channel is changed using the SLIC superpixel.

  • HOG and LBP algorithms are used in combination for optimization.

This paper uses image-processing principles together with the median filter, HOG, SLIC superpixel, and LBP methods to determine the image pattern and perform content-based image retrieval.

2 Literature review

In the early 1990s, as the number of images in databases such as the World Wide Web grew, and to overcome the problems of text-based systems, CBIR was proposed to automatically index and retrieve images using visual concepts such as the color, texture, and shape of the images [8, 22, 30, 34, 51, 6, 55]. This era of digital technology has had a considerable impact on medical science: medical imaging modalities are rapidly growing in number owing to improvements in biomedical sensors and high-throughput image-acquisition technologies [45].

Among the most widely used image retrieval systems are the Query By Image Content (QBIC) system developed by IBM, Anaktisi [61], the VisualSEEK system established at Columbia University, and PicToSeek from the University of Amsterdam [57].

In medical applications, the rapid growth of technology and the emergence of various imaging devices (MRI, X-ray, and CT) have led to the introduction of content-based medical image retrieval (CBMIR) in response to the growing production of medical images. Although content-based systems have been designed for many applications, only a few systems, such as GoldMiner, IRMA, and ASSERT, have been developed for the medical imagery area [17].

Several CBMIR systems, covering high-resolution computed tomography (HRCT) lung images [12], mammography [27], chest CT [60], chest X-ray [44], spine X-ray [59], and dental X-ray [42], target the images of particular organs; these systems may not transfer to other modalities. Given the increasing number of medical images, retrieving them for applications such as effective disease detection, research, and medical education is becoming very significant. A sample content-based retrieval system is shown in Fig. 1.

Fig. 1
figure 1

Content-based image retrieval system sample

2.1 Retrieval-based color features

Problems of effective search and navigation through data can be addressed by information retrieval. CBIR is a technique that helps users find desired images in a huge set of image files and interprets user intentions for the desired information. Retrieval is based on image features such as color, shape, texture, and annotations.

The color histogram is the most important color representation used in image retrieval. Histogram partitioning, histogram comparison, and color-coherence histograms (used to diminish the effects of noise) are the most important techniques based on this representation. Other representations of color features, such as color moments and color sets, are also used for image retrieval [15, 18, 19].
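As an illustration, the simplest of these histogram comparisons, the intersection measure, can be sketched in a few lines. This is an explanatory NumPy sketch, not part of the proposed system:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Histogram intersection: overlap between a query histogram h1 and a
    model histogram h2, normalized by the model's total mass."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return np.minimum(h1, h2).sum() / h2.sum()
```

Identical histograms score 1.0, and disjoint histograms score 0.0, which makes the measure easy to threshold for retrieval.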

Using color alone on databases with many images significantly increases the number of incorrect returns and renders the feature ineffective [47]. Thus, the color-layout feature (the combination of color and the spatial relationship of pixels) was proposed. This method's basic goal is to extract color specifications from image regions, storing the color attributes in quadtrees [16]. Segmenting the image and using color properties per region can yield very accurate results, but it increases the method's complexity.

Other approaches address this issue by using the first three color moments of the pixels in several predefined overlapping regions; this method distinguishes pixels whose color is similar or dissimilar to that of each region [4]. Spatial layouts may also be combined with other features (such as texture).

2.2 Retrieval-based texture features

Texture comprises the visual patterns that cannot be described by color alone. There are different models for representing and using texture. Retrieval based on texture alone is not very effective; nevertheless, texture similarity helps distinguish regions of the same color (such as sky and sea, or leaves and grass). A variety of techniques exist for recognizing texture. The co-occurrence matrix is one of the first attempts in this field, with a background closely related to image retrieval [20]. The Tamura method is one of the most important strategies in this area, based on psychological studies of perceptual texture features. The most important texture-retrieval techniques compare second-order statistics between query items and database entries; through these statistics, one can measure texture properties such as contrast, coarseness, directionality, periodicity, and the randomness of patterns [21]. Other approaches to texture assessment include wavelet transforms, Gabor filters, and fractals; many papers have examined the efficiency of these methods in image-processing applications.
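To make the co-occurrence idea concrete, a minimal sketch follows: it counts how often gray level i appears next to gray level j at a fixed pixel offset, then derives one second-order statistic, contrast. This is an illustrative sketch, not the retrieval method of this paper:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized so that entries sum to 1."""
    img = np.asarray(image)
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast statistic: expected squared gray-level difference of pairs."""
    levels = m.shape[0]
    i, j = np.indices((levels, levels))
    return float(((i - j) ** 2 * m).sum())
```

A constant image gives zero contrast, while a checkerboard of adjacent gray levels gives high contrast, matching the intuition behind the statistic.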

2.3 Retrieval-based image form

Shape-based retrieval is the most intuitive form of retrieval [36]. The key property that makes shape features suitable for image-retrieval applications is their potential invariance to translation, rotation, and scaling. In these methods, shape attributes of the objects within images (independent of orientation and location) are stored for each image and used for retrieval. Based on how they are extracted, shape characteristics are divided into two categories: edge-based and region-based features [3]. Elastic templates, the finite element method, comparison of edge-orientation histograms, skeletal representations of image objects matched with graph-matching techniques, invariant moments, and wavelet descriptors are among the other methods proposed for matching shapes. In this regard, an image-content retrieval method using the HOG descriptor together with SLIC superpixels and LBP is presented (Ke [31]). It is noteworthy that a comparison with other similar papers is conducted in terms of texture, color, and shape.

3 Recent works

Several papers have addressed content-based image retrieval. In the study by Mutasem K. [1], a retrieval method for image content is presented based on evolutionary algorithms: two genetic algorithms are combined, and local searches are repeated. In [49], a fast content-based image retrieval method for big data is presented; the suggested approach employs a hybrid algorithm called the Chain Clustering Binary Search Tree (CC-BST) that uses MapReduce and Hadoop. In [56], the visual and pattern information of images is exploited through a kernel method for content-based retrieval on big data, based on linear mapping and encryption. In the study by Mutasem K. [1], a memetic algorithm combining a genetic algorithm with the great deluge algorithm is applied to content-based retrieval of big data, demonstrating the ability to restore images at the end of the process.

In the work of [13], a cellular neural network is utilized to repair the image. This feed-forward network can restore a significant part of a damaged image. This kind of network processes cells in parallel (each cell simultaneously with its neighbors, combined with the KNN, or k-nearest-neighbors, method), leading to rapid problem solving and a high learning rate. In that project, the cellular neural network is used for edge detection via averaging; the network learns how to identify the damaged edges. In [58], a pre-trained deep neural network is employed along with a denoising autoencoder to restore the image. This is a non-learning method, and the paper's approach is compared with the K-SVD method. One of its basic weaknesses is that images with Gaussian noise are restored better than those with other noise types, such as salt-and-pepper.

A model named Allen–Cahn is applied in the work by Yibao [28] to repair images. This is a local method applied only to the damaged domain, with two attributes. The first is the set of pixels in the repair domain, filled in by curvature-driven diffusion using image information from outside the restoration area; the second is the value of the pixels just outside the restoration boundary, i.e., the damaged input area that has not yet been processed. Nonlinear equations are solved, which is why the program functions optimally. In [14], a self-organizing neural network (Kohonen map) is employed to repair damaged digital images; the weak point of this neural network is the need to tune the algorithm's parameters. Two separate self-organizing networks are used to segment and repair the images. MPEG-7, released by the Moving Picture Experts Group (MPEG), proposes a complete set of multimedia descriptors for creating descriptions that applications can use for high-quality access to content; MPEG-7 supports various applications and offers reasonable storage, searching, and retrieval. Among MPEG-7-based methods, Pattanaik et al. [46] proposed using a color-structure descriptor for color and edge histograms, combining these features to increase CBIR performance; however, their method was adapted and evaluated in general computer-vision applications. In this paper, we focus on CT-scan images in medical applications and adopt state-of-the-art algorithms for our clinical domains, consisting of ultrasound images and CT scans. Our contributions can be summarized as follows: 1) we propose a pipeline (shown in Fig. 2) based on HOG, LBP, and HSV to improve retrieval performance on CT images; 2) we evaluate the proposed model in our clinical application (liver CT scans).

Fig. 2
figure 2

The block diagram of the proposed method

4 The proposed method

Feature-extraction methods may target characteristics such as color, texture, and shape. Image segmentation makes it possible to incorporate location when extracting the color and texture properties; that is, with segmentation, the properties belonging to an area in one image can be matched and compared with the characteristics of the corresponding area in another image. In this structure, as described below, segmentation is employed to compare regions and extract objects from the image. The following sections describe each of these methods.

4.1 SLIC super-pixel

Various segmentation methods have been used to investigate the semantic content of an image. Good performance and a simple structure are the main reasons for the wide application of the SLIC method in image segmentation (Y. [33]). The algorithm partitions the image into a set of regions, considering intensity and spatial features. A selection rule is used to filter out non-mass candidates [9].

That is to say, a simple method is used: superpixels, dividing each image into about 25–200 regions. This method has many desirable features:

  1. 1)

    Reducing the image's complexity from many thousands of pixels to just a few hundred superpixels

  2. 2)

    Each superpixel is perceptually consistent, i.e., all its pixels are uniform in color and texture

  3. 3)

    Since superpixels result from an over-segmentation, most structures in the image are preserved.

Most algorithms used in computer vision operate on the pixel grid as a segmentation. SLIC is a simple iterative clustering algorithm that performs clustering in a five-dimensional space defined by the L, a, b values of the CIELAB color space plus the pixel coordinates. SLIC is easy to use and can easily be employed in practice; indeed, its only parameter is the desired number of superpixels [9].
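The clustering at the heart of SLIC can be sketched as follows. This simplified version clusters a grayscale image in (intensity, y, x) space and omits the 2S × 2S windowed search and connectivity-enforcement steps of the full algorithm, so it illustrates the idea rather than reproducing the published implementation:

```python
import numpy as np

def slic_sketch(image, n_segments=16, compactness=10.0, n_iter=5):
    """SLIC-style clustering of a grayscale image: k-means in (L, y, x)
    space, with centers seeded on a regular grid of interval S."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    S = int(np.sqrt(h * w / n_segments))          # grid interval
    ys, xs = np.mgrid[S // 2:h:S, S // 2:w:S]
    centres = np.stack([img[ys, xs].ravel(),
                        ys.ravel().astype(float),
                        xs.ravel().astype(float)], axis=1)
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel(), yy.ravel().astype(float),
                      xx.ravel().astype(float)], axis=1)
    for _ in range(n_iter):
        # distance = intensity term + compactness-weighted spatial term
        dc = (feats[:, None, 0] - centres[None, :, 0]) ** 2
        ds = ((feats[:, None, 1:] - centres[None, :, 1:]) ** 2).sum(-1)
        labels = np.argmin(dc + (compactness / S) ** 2 * ds, axis=1)
        for k in range(len(centres)):
            mask = labels == k
            if mask.any():
                centres[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)
```

The compactness weight trades spatial regularity against color homogeneity, mirroring the single free parameter mentioned above.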

4.2 HOG descriptor

Dalal and Triggs proposed the histogram of oriented gradients (HOG); the interesting point of this descriptor is that it measures the distribution of gradient intensities or edge directions in local regions of an image [11]. The image is first divided into fine-grained cells, and for each cell, a histogram of the gradient or edge directions of the pixels inside the cell is collected. A local shape descriptor results from combining these histograms. In this work, the HOG descriptor is extracted from the full image before partitioning, since the HOG attribute is extracted per block, transmitting local information to the local description [54].
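The per-cell histogramming described above can be sketched as follows; this minimal version omits the block normalization of the original Dalal–Triggs descriptor and is intended only to illustrate the computation:

```python
import numpy as np

def hog_sketch(image, cell=8, bins=9):
    """Minimal HOG: per-cell histograms of unsigned gradient orientation,
    weighted by gradient magnitude (no block normalization)."""
    img = np.asarray(image, dtype=float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    ch, cw = h // cell, w // cell
    feats = np.zeros((ch, cw, bins))
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
    for i in range(ch):
        for j in range(cw):
            sl = np.s_[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            feats[i, j] = np.bincount(bin_idx[sl].ravel(),
                                      weights=mag[sl].ravel(),
                                      minlength=bins)
    return feats.ravel()
```

For a vertical step edge, the gradient is horizontal, so the mass of the combined histogram falls in the 0-degree bin.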

4.3 Color histogram

Over the past decade, color has been perhaps the most effective visual characteristic, widely investigated in image-retrieval research (Zhuohua [32]). The color histogram was proposed by Swain and Ballard [52]; the distance between two images is then measured by the histogram intersection method, a straightforward way to obtain a decent similarity function. However, its major drawback is that it is not robust to significant appearance variations, since it carries no spatial information; this motivates local color descriptors [5]. Multiple color spaces, such as RGB, HSV, and YCrCb, can represent the color of an image. The color space used here is HSV, developed to provide an intuitive representation of color and a simple way for humans to specify colors. The HSV color model is closer to human color perception, since it decouples the chromatic components from the achromatic components, allowing humans to identify pure colors [48]. Hue (H) indicates the dominant spectral component of the color, e.g., green, yellow, or red. Adding white to a pure color changes its saturation: less white yields higher saturation (S), while value (V) indicates the color's brightness. Equation (1) defines the color histogram's HSV color model.

$$ {\displaystyle \begin{array}{l}V=\max \left(R,G,B\right)\\ {}S=\left\{\begin{array}{c}\frac{V-\min \left(R,G,B\right)}{V}\kern2em if\ V\ne 0\\ {}0\kern2.5em \mathrm{Otherwise}\end{array}\right.\\ {}H=\left\{\begin{array}{c}\frac{60\left(G-B\right)}{V-\min \left(R,G,B\right)}\kern4.5em if\ V=R\\ {}120+\frac{60\left(B-R\right)}{V-\min \left(R,G,B\right)}\kern1em if\kern0.75em V=G\\ {}240+\frac{60\left(R-G\right)}{V-\min \left(R,G,B\right)}\kern1.5em if\kern0.75em V=B\end{array}\right.\end{array}} $$
(1)
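Equation (1) can be transcribed directly; in this sketch, H is wrapped into [0, 360) and set to 0 for achromatic pixels, conventions the equation itself leaves unspecified:

```python
def rgb_to_hsv_eq1(r, g, b):
    """RGB -> HSV per Eq. (1): H in degrees, S in [0, 1], V = max(R, G, B)."""
    v = max(r, g, b)
    mn = min(r, g, b)
    s = 0.0 if v == 0 else (v - mn) / v
    if v == mn:
        h = 0.0                              # achromatic: hue undefined, use 0
    elif v == r:
        h = (60.0 * (g - b) / (v - mn)) % 360.0
    elif v == g:
        h = 120.0 + 60.0 * (b - r) / (v - mn)
    else:
        h = 240.0 + 60.0 * (r - g) / (v - mn)
    return h, s, v
```

Pure red, green, and blue map to hues of 0, 120, and 240 degrees with full saturation, as expected from the piecewise definition.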

4.4 Local binary patterns (LBP)

Local Binary Patterns (LBP) were introduced by [43] in the 1990s. LBP feature extraction is efficient and can be extended with multi-scale filters, although the basic operator is not invariant to scaling and rotation. Because of its invariance to lighting conditions and robustness to image noise, LBP has exhibited remarkable discriminative power in different domains. For instance, LBP has been employed in face recognition, multi-object tracking, and scene classification [10]. The idea behind this texture operator is to assign each pixel a code computed from the grayscale values. The gray level of the center pixel (Pc) at coordinates (Xc, Yc) is compared with its neighbors (Pn) using Eq. (2).

$$ {\displaystyle \begin{array}{c} LBP\left({X}_c,{Y}_c\right)={\sum}_{n=0}^{P-1}s\left({P}_n-{P}_c\right){2}^n\\ {}s\left({P}_n-{P}_c\right)=1\kern0.5em if\ {P}_n-{P}_c\ge 0\\ {}s\left({P}_n-{P}_c\right)=0\kern0.5em if\ {P}_n-{P}_c<0\end{array}} $$
(2)

where P stands for the number of adjoining pixels; generally, a 3 × 3 neighborhood is considered, so P equals eight. Therefore, the LBP value varies between 0 and 255 for each pixel of a grayscale image. To form the LBP descriptor, a histogram of these values is computed. A uniform LBP is used for this descriptor, capturing most of the basic structure of LBP. An LBP pattern is considered uniform if it contains at most two 0-to-1 or 1-to-0 transitions. For instance, both 00001000 (2 transitions) and 10000000 (1 transition) are uniform, since at most two changes between 0 and 1 occur.

On the other hand, the pattern 01010010 is not uniform, because it has six transitions between 0 and 1. Therefore, there are nine base uniform patterns with a U value of at most 2 (00000000, 00000001, 00000011, 00000111, 00001111, 00011111, 00111111, 01111111, and 11111111). Together with their rotations, these correspond to 58 of the 256 original patterns that can occur in a 3 × 3 region. The remaining patterns accumulate in a single bin, leading to a 59-bin histogram. Using only 58 of the 256 pattern codes may seem a waste of information; however, this approximation is supported by a significant observation: the uniform patterns are the most frequent spatial patterns occurring in natural image microstructures.
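The basic 3 × 3 operator of Eq. (2) and the uniformity test described above can be sketched as follows (an illustrative sketch; the bit ordering around the neighborhood is a free choice and is fixed here clockwise from the top-left):

```python
import numpy as np

def lbp_3x3(image):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel
    and read them off as an 8-bit code (clockwise from the top-left)."""
    img = np.asarray(image, dtype=int)
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (nb >= img[1:h - 1, 1:w - 1]).astype(int) << (7 - bit)
    return codes

def is_uniform(code):
    """True if the circular 8-bit pattern has at most two 0/1 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

The examples from the text check out: 00001000 and 10000000 are uniform, while 01010010 is not.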

4.5 Data extraction

This method is proposed to exploit the discriminative power of the descriptors mentioned above while overcoming their limitations. The HOG descriptor is first extracted from the entire image, since it captures the shape's local attributes. Subsequently, each image is divided into 16 sections to reduce the density of information. Then, each section is traversed in a loop to extract and combine the HSV color histogram and the uniform LBP histogram. The similarity between vectors is computed by Euclidean distance, which is well suited to comparing histograms and vectors.
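The retrieval step described here, ranking database images by the Euclidean distance between feature vectors, can be sketched as follows; the feature vectors in the test are hypothetical:

```python
import numpy as np

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def rank_by_distance(query_vec, database):
    """Rank database entries (name -> feature vector) by Euclidean distance
    to the query vector; the closest images are retrieved first."""
    return sorted(database, key=lambda name: euclidean(query_vec, database[name]))
```

The images returned first are those whose fused descriptors lie closest to the query in feature space.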

4.6 Steps of proposed method

The local features are extracted and combined by the proposed method for a single image as follows:

  1. Step 1:

    HOG descriptor is extracted from the entire image: VHOG

  2. Step 2:

    SLIC Superpixel is employed for the image to get 16 sections.

  3. Step 3:

    A loop is created over each superpixel, and its edge is obtained.

  1. A)

    The rectangular ROI is calculated.

  2. B)

    To obtain local attributes, the sections are passed to the descriptors.

  3. C)

    Conversion of the section to HSV space and obtaining the color histogram: VHSVi

  4. D)

    Conversion of the section to gray and obtaining LBP feature: VLBPi

  5. E)

    The two vectors are combined with Eq. (3)

$$ {\displaystyle \begin{array}{c}{\mathrm{V}}_{\mathrm{cmd}1}={V}_{\mathrm{HSV}1}+{V}_{\mathrm{LBP}1}\\ {}.\\ {}\begin{array}{c}.\\ {}.\\ {}{\mathrm{V}}_{\mathrm{Cmd}16}={V}_{\mathrm{HSV}16}+{V}_{\mathrm{LBP}16}\end{array}\end{array}} $$
(3)
  1. Step 4:

    The color and texture descriptors are concatenated to obtain a local visual property by Eq. (4).

$$ {\mathrm{V}}_{\mathrm{CMD}}=\left\{{\mathrm{V}}_{\mathrm{CMD}1},{\mathrm{V}}_{\mathrm{CMD}2},{\mathrm{V}}_{\mathrm{CMD}3},\dots, {\mathrm{V}}_{\mathrm{CMD}16}\right\} $$
(4)

The vector is multiplied by a weighting factor, and the shape characteristic is added by Eq. (5).

$$ {V}_{image}={V}_{HOG}+W1\ast {V}_{CMD}\kern1.5em \left(W1=0.3\right) $$
(5)
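Eqs. (3)–(5) can be read as concatenating the per-region HSV and LBP histograms into Vcmd i, collecting all 16 regions into VCMD, and appending the weighted result to the global HOG vector. The sketch below assumes the '+' in Eqs. (3) and (5) denotes concatenation, since the descriptors have different lengths; that interpretation is an assumption, not stated explicitly in the text:

```python
import numpy as np

def fuse_features(v_hog, hsv_parts, lbp_parts, w1=0.3):
    """Sketch of Eqs. (3)-(5): per-region HSV and LBP histograms are
    concatenated into V_cmd_i, all regions into V_CMD, and the final
    vector appends the weighted V_CMD to the global HOG descriptor."""
    v_cmd = np.concatenate([np.concatenate([h, l])
                            for h, l in zip(hsv_parts, lbp_parts)])
    return np.concatenate([np.asarray(v_hog, float), w1 * v_cmd])
```

With W1 = 0.3, the color–texture part contributes less than the global shape descriptor, consistent with the weighting in Eq. (5).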

To assess the described methods, a visual-investigation system computes each dataset image's visual properties as a vector of numerical values stored in a data file. Based on the Euclidean distance, the query image's properties are compared with those in the file, and the images with the minimum distance to the query image are returned. Precision and recall are commonly employed to measure the quality of an image-retrieval system. Let Ai denote the set of all relevant images for a given query and Bi the set of images returned by the system. Precision is defined as the percentage of retrieved images belonging to the same class as the query image, as in Eq. (6).

$$ {P}_i=\frac{\left|{A}_i\cap {B}_i\right|}{\left|{B}_i\right|} $$
(6)

This system has been designed to return 16 images for a query image; for each query, the average retrieval precision (ARP) is calculated by Eq. (7).

$$ ARP=\frac{1}{N}\sum \limits_{i=1}^N{P}_i $$
(7)

where N is the number of test queries in the dataset.
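Eqs. (6) and (7) translate directly into code (a minimal sketch; query results are represented as plain Python collections):

```python
def precision(relevant, returned):
    """Eq. (6): fraction of returned images that are relevant, |A ∩ B| / |B|."""
    return len(set(relevant) & set(returned)) / len(returned)

def arp(per_query_precisions):
    """Eq. (7): average retrieval precision over the N test queries."""
    return sum(per_query_precisions) / len(per_query_precisions)
```

For example, if 2 of 4 returned images are relevant, the per-query precision is 0.5, and the ARP is the mean of these values over all queries.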

The Pseudo-code of the presented method is presented in Algorithm 1.

figure a

5 Experimental results

First, the input image needs to be inserted into the main dataset, which contains multiple images from various areas, such as animal pictures, MRI images, and CT scans. The present approach is then tested on various data to assess its reliability in terms of the proposed method's model, structure, texture, color, and shape. The proposed CBIR model is implemented in MATLAB R2018a (MathWorks Inc., Natick, MA, USA) on an Intel Core i7-4720 @ 3.4 GHz platform with 16 GB RAM, 64-bit.

For a better understanding of the proposed method, the block diagram is given in Fig. 2.

5.1 Datasets

Two image datasets are employed in this study to evaluate the proposed method. The first is IMAGENET1M, a dataset for large-scale CBIR [7]. IMAGENET1M includes 2048-dimensional real-valued features extracted by a deep neural network from a 10%-labeled ILSVRC-15 dataset. The dataset comprises four parts (base, query, training, and validation), for which both the 2048-dimensional features and the image list are provided. The second dataset, introduced by Li and Wang [29], is composed of 10,000 test images, stored in JPEG format at 384×256 or 256×384 pixels. Both datasets contain images from various fields.

5.2 Experiments

The initial input image is a CT scan of the liver with a distorted region in a specific part of the liver.

  • Step 1: The input image is represented in Fig. 3, along with the image histogram.

    Fig. 3
    figure 3

    The input image and its histogram

  • Step 2: The image is then normalized. At this stage, noise reduction is performed using the median filter, and the color channels are changed. Figure 4 shows the result of the noise-reduction operation using the median filter.

    Fig. 4
    figure 4

    Noise reduction result by median filter

In fact, low-pass filters are applied to reduce noise in the input images. The peak signal-to-noise ratio (PSNR) is computed to determine the noise level after applying the median filter. Figure 4 illustrates the output image of the noise-removal block.
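The noise-removal step and its PSNR check can be sketched as follows; this is a minimal NumPy version (3 × 3 window, image borders left unfiltered), not the MATLAB implementation used in the experiments:

```python
import numpy as np

def median_filter3(image):
    """3x3 median filter; border pixels are copied through unfiltered."""
    img = np.asarray(image, dtype=float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A single impulse ("salt") pixel in an otherwise flat region is removed completely by the median, which is exactly why the median filter suits this denoising stage better than a linear average.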

  • Step 3: The color channel is changed using the SLIC superpixel. In this case, the red, blue, and green channels cover the image; the output of this step is shown in Fig. 5.

    Fig. 5
    figure 5

    Changing color to red (left) and green channel (right)

  • Step 4: Contrast improvement and image thresholding are two tasks necessary for the distorted part to be fully identified. In fact, the image is reconstructed, the row is selected, and the HOG descriptor is applied to the image. The output of this step is shown in Fig. 6.

    Fig. 6
    figure 6

    Image contrast enhancement and HoG thresholding in the initial extraction

  • Step 5: In the second phase, HOG feature extraction is conducted, based on the two components of light intensity and image edges, to find the exact tampered segments. This is performed by the HOG descriptor, iteratively, until the exact edges of the liver object are reached. The output of this step is presented in Fig. 7.

    Fig. 7
    figure 7

    HOG feature extraction in the second phase, based on edge and light intensity

For contrast, the edge information is combined with the input image to highlight the detected edges computed by HOG. Specifically, strong points (corners) are selected to achieve high-contrast images.

  • Step 6: Subsequently, local threshold operations are performed to identify the parts requiring recovery. Figure 8 presents the output.

  • Step 7: Separating the thumbnail details of an image for retrieval is a significant step in content-based image-retrieval systems based on LBP and its vectors. The main part in need of retrieval is identified as a template, generally extracted from the LBP vectors specified in the localization stage. Figure 9 presents the output of the detail-separation step for image recovery.

    Fig. 8
    figure 8

    Local threshold operation

    Fig. 9
    figure 9

    Thumbnail detail separation for image retrieval based on LBP and its vectors

  • Step 8: For retrieval with LBP and its vectors, the thumbnails in the image are separated in three phases: identifying the pattern, the light intensity and edges, and eventually, the exact recovery of the image area. The output of this content-based retrieval step is shown in Fig. 10.

    Fig. 10
    figure 10

    From left to right: pattern recognition, light intensity, edge, and finally, precise content-based image retrieval with LBP and its vectors

  • Output: Finally, the recovered image matches its original form, and its histogram is nearly identical to the original one, with minimal variation, as in Fig. 11.

    Fig. 11
    figure 11

    Histogram of the retrieved image

Applying the method to the liver image from the CT-scan data collection and to the animal image yields good results. As described in Section 4.5, the HOG descriptor is first extracted from the entire image, and the image is divided into 16 sections to reduce the density of information.

In the following, the evaluation criteria for assessing the proposed approach are determined; the results are reported in Table 1.

Table 1 The evaluation criteria result for a liver CT scan image

As shown in Table 2, the criteria are compared against reference papers, with the different criteria measured under the same conditions.

Table 2 Comparison of the proposed method with other approaches in terms of Accuracy and Recall

Table 2 shows the accuracy and recall indices of the proposed approach compared with similar studies under the same conditions. In addition, the comparison of evaluation metrics between these papers and the proposed approach is demonstrated in Fig. 12.

Fig. 12
figure 12

Comparison of the proposed method with other approaches in evaluation terms

6 Conclusion

CBIR is a technique for finding images in large datasets that has attracted much attention in the last decade. Feature extraction may exploit various image properties, such as color structure and image texture. Nevertheless, the fundamental problem is the gap between the simple information a computer can check and the deep understanding a person picks up. In the present study, a new CBIR approach is suggested to reconstruct and improve the quality of distorted images. A combination of color, texture, and shape features is proposed, based on modifying the color channel using SLIC, enhancing the image contrast via HOG, and separating the image details using LBP. Experimental outcomes indicate that the proposed approach yields higher retrieval accuracy than other similar works. The obtained content-retrieval rate of 90.54% and accuracy of about 98.71% on a liver CT-scan image illustrate the efficiency of the proposed approach.