1 Introduction

Digital images are captured by imaging systems, transmitted through various communication channels, and processed by automated or human visual systems. At each of these stages, images are prone to contrast degradation. Contrast enhancement is an essential stage of digital image processing: it improves an image’s visual appearance so that the enhanced image is more pleasant than the original for human viewers or more useful for automated systems [1,2,3]. Although there is no universal theory for choosing the appropriate contrast enhancement technique, various approaches have been introduced in the literature.

Fuzzy set theory is utilized to resolve problems with ambiguity and uncertainty [4]. Since acquired/digitized images can include ambiguity or uncertainty in the color intensities, fuzzy set theory is well suited to improving image quality. Pal and King [5] proposed its use with the INTensification (INT) operator for grayscale image enhancement. Later, modified versions of the INT operator, the New INTensification (NINT) [6] and Global INTensification (GINT) [7] operators, were implemented to modify the intensities in color images. Other features of fuzzy set theory, such as fuzzy relations [8], fuzzy rules [9,10,11], fuzzy morphological functions [12], fuzzy filters [13, 14], and Type-II fuzzy sets [15, 16], have also been proposed in the literature for image enhancement.

Histogram Equalization (HE) is a simple and common practice to increase the contrast of images. Due to its limitation, a number of its variants were developed, including Adaptive Histogram Equalization (AHE), Bi Histogram Equalization (BHE), Brightness Preserving Dynamic Histogram Equalization (BPDHE) [17], and Contrast Limited Adaptive Histogram Equalization (CLAHE) [18]. Many researchers combined the goodness of fuzzy set theory with histogram equalization. Histogram modification techniques were applied to the fuzzy histogram which resulted in BPDFHE [19, 20], and FuzzyCIE [21].

In fuzzy image enhancement, performance also depends on the function used for fuzzification [22]. Hence, researchers have used different transformation functions and fuzzy numbers for image enhancement. The works in [23,24,25,26,27,28] employed various fuzzy numbers for membership assignment and modification in different regions. The required parameters were optimized using numerous fuzzy features and optimization algorithms. These earlier optimization procedures are time-consuming because the problem is framed as a multi-variable optimization problem, and the fitness function is evaluated repeatedly to determine the optimum parameters. In this paper, the optimization problem is divided into single-variable subproblems, which require less time to solve and can speed up the optimization process.

Most existing approaches act on the grayscale image. If these techniques are applied to all channels (R, G, and B) of a color image, artifacts may appear in the output [3]. Therefore, researchers convert RGB images into other color spaces such as L*a*b*, XYZ, and HSV, a conversion that requires many matrix calculations. The proposed method instead converts the input image into an RGB luminosity image, which is computed simply with scalar multiplication and matrix addition. This paper’s objective is to enhance low-contrast color images by applying fuzzy set theory.

The main contributions of the paper are as follows:

  • A fuzzy inference system is devised to decide whether an image needs enhancement and to determine the image type (i.e., dark, bright, or pleasant).

  • An image is enhanced only when it has low contrast and actually requires enhancement, which avoids over-enhancement.

  • The image is converted into RGB luminosity, which requires less time than other color-space conversions.

  • The RGB Luminosity calculation and color image reconstruction are performed based on the image type (dark or bright).

  • The multi-variable optimization problem is split into two single-variable subproblems, which speeds up the process of membership modification.

  • The LCFIE (Low Contrast Fuzzy Image Enhancement) method is proposed for the contrast enhancement of color images.

The rest of the paper is structured as follows: Section 2 explains the proposed method with fuzzification, defuzzification, objective and fitness function, and color image reconstruction. Section 3 consists of the experiment, results, and discussion with subjective and objective assessment. The paper is concluded in section 4.

2 Proposed method

Dynamic range expansion is performed on the input image as the first stage in the proposed method to get a more radiant color enhancement. Each image pixel is scaled based on the global minimum and maximum intensities among all color channels. Let I be an input image in RGB color space with the intensity range of [0, 255] and \(I=\{R, G, B\}\). The global minimum and maximum intensities of the image among all color channels are ascertained as follows:

$$\begin{aligned}{} & {} I_{min}=\min \{\min \{R\},\min \{G\},\min \{B\}\} \end{aligned}$$
(1)
$$\begin{aligned}{} & {} I_{max}=\max \{\max \{R\},\max \{G\},\max \{B\}\} \end{aligned}$$
(2)

Then, each image pixel is scaled in the range [0, 255] as:

$$\begin{aligned} Is_{c}(i,j)=255*\frac{I_{c}(i,j)-I_{min}}{I_{max}-I_{min}} \end{aligned}$$
(3)

where \(I_{c}(i,j)\) is the pixel of channel c at location (i, j) and \(Is_{c}(i,j)\) is its scaled version.
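The dynamic range expansion of Eqs. (1)–(3) can be sketched in a few lines of NumPy; this is a minimal illustration, not the authors' implementation:

```python
import numpy as np

def expand_dynamic_range(img):
    """Dynamic range expansion of Eqs. (1)-(3): every channel is
    scaled with the global min/max taken over all three channels."""
    img = img.astype(np.float64)
    i_min, i_max = img.min(), img.max()   # global over R, G and B
    return 255.0 * (img - i_min) / (i_max - i_min)
```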

After scaling, the image is categorized as a low-contrast dark image, a low-contrast bright image, or a pleasant image by the fuzzy rule-based system. Low-contrast dark and bright images need contrast enhancement. Color images are usually converted from the multichannel (R, G, B) representation into a single channel, such as the L* channel of the L*a*b* color space or the V channel of the HSV color space. In this paper, the RGB luminosity (luminance) is used as the one-channel image, as in [28]. This one-channel image is transformed from the spatial plane to the fuzzy plane, enhanced by the proposed process, and defuzzified back to the spatial plane. The color image is then reconstructed from the enhanced one-channel image in proportion to the changes made to it. The RGB luminosity channel is used directly for dark images; for bright images, the RGB luminosity image is first converted into a dark image by subtracting it from the maximum intensity level (i.e., 255).

2.1 Selection of low-contrast dark and bright image

A good image has an exposure value near 0.5 [7]. Based on this fact, images may be categorized as dark (exposure less than 0.5), pleasant (exposure near 0.5), or bright (exposure greater than 0.5). Since no exact exposure value separates these categories, a ‘Sugeno’-type fuzzy rule-based system is generated to decide whether an image needs enhancement and to categorize it as dark, pleasant, or bright. The system has two input variables, the normalized mean and standard deviation of the image, and two output variables, NeedEnh and Type. The membership functions used for the fuzzification of the input variables are shown in figure 1. NeedEnh has two levels, Yes and No; Type has three levels, Dark, Bright, and Pleasant. The rules are as follows:

  1. If (StdDev is Low) and (Mean is Low) then (NeedEnh is Yes) (Type is Dark)

  2. If (StdDev is Low) and (Mean is High) then (NeedEnh is Yes) (Type is Bright)

  3. If (Mean is Mid) then (NeedEnh is No) (Type is Pleasant)

  4. If (StdDev is High) then (NeedEnh is No) (Type is Pleasant)
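As a rough illustration of how these rules classify an image, the sketch below replaces the Sugeno inference with crisp thresholds; all threshold values are hypothetical stand-ins for the membership functions of figure 1, which the paper does not specify numerically:

```python
def classify_image(mean_norm, std_norm,
                   dark_th=0.4, bright_th=0.6, std_th=0.25):
    """Crisp stand-in for the four Sugeno rules above. Returns
    (needs_enhancement, image_type). The thresholds are assumed,
    not taken from the paper."""
    if std_norm >= std_th:                    # rule 4: StdDev High
        return False, 'Pleasant'
    if dark_th <= mean_norm <= bright_th:     # rule 3: Mean Mid
        return False, 'Pleasant'
    if mean_norm < dark_th:                   # rule 1: low std, low mean
        return True, 'Dark'
    return True, 'Bright'                     # rule 2: low std, high mean
```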

Figure 1
figure 1

Membership functions for the normalized mean and standard deviation of the image.

2.2 Fuzzification and defuzzification of low contrast images

The luminosity levels of an image \( L_{M\times N} \) are linked with the membership values and illustrated in the notation of fuzzy set theory as follows:

$$\begin{aligned} L= \bigcup _{m=1}^{M}\bigcup _{n=1}^{N}\frac{\mu _{mn}}{l_{mn}}=\lbrace \mu _{mn}/l_{mn} \rbrace \end{aligned}$$
(4)

A modified Gaussian function is used to transform the luminosity channel into the fuzzy plane; it is described as follows:

$$\begin{aligned} \mu (k)=\exp \bigg [ - \bigg ( \frac{L_{max}-k}{\sqrt{2}C}\bigg )^{2}\bigg ] \end{aligned}$$
(5)

where k is the luminosity level in the range [0, G − 1], and \(L_{max}\) is the maximum luminosity level. C is a fuzzifier whose value lies in the range 1–255 and is decided by minimizing the objective function discussed in section 2.3.
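A minimal NumPy sketch of the fuzzification in Eq. (5), illustrative only:

```python
import numpy as np

def fuzzify(L, C):
    """Modified Gaussian membership of Eq. (5):
    mu(k) = exp(-((L_max - k) / (sqrt(2) * C))**2), per pixel."""
    L = L.astype(np.float64)
    l_max = L.max()   # maximum luminosity level L_max
    return np.exp(-((l_max - L) / (np.sqrt(2.0) * C)) ** 2)
```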

The following Sigmoid function was used to modify the membership values associated with gray levels of the image:

$$\begin{aligned} \mu '(k)=\frac{1}{1+\exp \big (-t\,(\mu (k)-\mu _{C})\big )} \end{aligned}$$
(6)

where t is a parameter in the range 5–25, optimized by maximizing the fitness function described in section 2.5; \(\mu _{C}\) lies in the range 0–1 and is chosen such that no information is lost.
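The sigmoid modification of Eq. (6) is a one-liner; a hedged sketch:

```python
import numpy as np

def modify_membership(mu, t, mu_c):
    """Sigmoid membership modification of Eq. (6); t controls the
    steepness and mu_c is the crossover point."""
    return 1.0 / (1.0 + np.exp(-t * (mu - mu_c)))
```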

After modification of the membership values, the image must be converted back from the fuzzy plane to the spatial plane for display. The enhanced intensities are determined as follows:

$$\begin{aligned} k'=L_{max}-\sqrt{-2\log (\mu '(k)){C}^{2}} \end{aligned}$$
(7)

where \(k'\) is the enhanced luminosity level. The value of C is also modified such that there would be no loss of image content after defuzzification. So, from Eq. (5)

$$\begin{aligned} C=\frac{L_{max}-k}{\sqrt{-2 \log {\mu (k)}}} \end{aligned}$$
(8)

by putting \(k=0\) and \(\mu (k)=\) minimum of modified membership values, the value of C would be assessed as

$$\begin{aligned} C=\frac{L_{max}}{\sqrt{-2 \log {\mu }}} \end{aligned}$$
(9)

where \(\mu \) is the minimum of modified membership values.
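Combining Eqs. (7) and (9), defuzzification can be sketched as follows; re-estimating C from the minimum modified membership makes the smallest output intensity exactly zero, so no intensity goes negative (illustrative sketch):

```python
import numpy as np

def defuzzify(mu_mod, l_max):
    """Defuzzification via Eq. (7), with the fuzzifier C re-estimated
    from Eq. (9) using the minimum modified membership, so the
    smallest output intensity maps exactly to 0."""
    mu_min = mu_mod.min()   # assumed to lie in (0, 1)
    C = l_max / np.sqrt(-2.0 * np.log(mu_min))          # Eq. (9)
    return l_max - np.sqrt(-2.0 * np.log(mu_mod)) * C   # Eq. (7)
```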

2.3 Objective function

The algorithm needs global and local information to find the optimized parameters for fuzzification. Global information about an image can be captured using fuzzy entropy and the index of area coverage (IOAC), while local information is captured by fuzzy compactness. Fuzzy entropy measures the degree of fuzziness in the image. The IOAC measures the fraction of the area enveloped by the image’s length and breadth that is covered by the image, while fuzzy compactness measures the fraction of the area, relative to the squared perimeter, covered by the image [29].

These metrics can be used separately or combined as a function. Therefore, combining all these metrics, the following objective function is used to determine the optimum parameters required for the fuzzification of the image content:

$$\begin{aligned} {\text {F}}={\text {FE}}\times {\text {Compactness}} \times {\text {IOAC}} \end{aligned}$$
(10)

where,

$$\begin{aligned} FE&=\frac{1}{MN \log (2)} \sum _{m=1}^{M} \sum _{n=1}^{N} \big (-\mu _{m,n}\log (\mu _{m,n})-(1-\mu _{m,n}) \log (1-\mu _{m,n})\big ) \\ {\text {Compactness}}&={\text {Area}}/{\text {Perimeter}}^{2} \\ {\text {Area}}&=\sum _{m=1}^{M} \sum _{n=1}^{N}\mu _{m,n} \\ {\text {Perimeter}}&=\sum _{m=1}^{M-1}\sum _{n=1}^{N}|\mu _{m+1,n}-\mu _{m,n}| +\sum _{m=1}^{M}\sum _{n=1}^{N-1}|\mu _{m,n+1}-\mu _{m,n}| \\ {\text {IOAC}}&={\text {Area}}/({\text {length}}\times {\text {breadth}}) \\ {\text {length}}&=\max _{x}\Big (\sum _{y}\mu _{x,y}\Big ), \qquad {\text {breadth}}=\max _{y}\Big (\sum _{x}\mu _{x,y}\Big ) \end{aligned}$$

and \(\mu \) is the membership value.
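A possible NumPy rendering of the objective function of Eq. (10) and its components is shown below; the row/column convention for length and breadth is an assumption, and a small epsilon guards the logarithms and a zero perimeter:

```python
import numpy as np

def objective(mu, eps=1e-12):
    """Objective F of Eq. (10): fuzzy entropy x compactness x IOAC,
    computed on the membership image `mu`."""
    m, n = mu.shape
    mu_s = np.clip(mu, eps, 1.0 - eps)   # avoid log(0)
    fe = (-(mu_s * np.log(mu_s)
            + (1.0 - mu_s) * np.log(1.0 - mu_s))).sum() / (m * n * np.log(2))
    area = mu.sum()
    perimeter = (np.abs(np.diff(mu, axis=0)).sum()
                 + np.abs(np.diff(mu, axis=1)).sum())
    compactness = area / (perimeter ** 2 + eps)
    length = mu.sum(axis=1).max()    # max over rows of row sums
    breadth = mu.sum(axis=0).max()   # max over columns of column sums
    ioac = area / (length * breadth)
    return fe * compactness * ioac
```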

2.4 Optimization of \(\mu _{C}\)

For the modification of membership values, two parameters, t and \(\mu _{C}\), must be optimized. The process to select the value of t is described in section 2.5. The value of \(\mu _{C}\) is chosen such that no information is lost, i.e., the modified membership value should not be less than the original membership value. When the modified membership value is less than the initial one, the image intensity becomes negative at defuzzification; the pixel intensity is then rounded to zero, so the pixel becomes black and its content is lost.

The membership values associated with image 6(a) are shown in figure 2(a). These membership values are modified using Eq. (6) for \(\mu _{C}\) from 0.3 to 0.4 in steps of 0.01 and shown in figure 2(b). It is clear from this figure that beyond a certain value of \(\mu _{C}\), some of the modified membership values become less than the original ones. With \(\mu _{C}=0.36\) and \(t=23.99\), the membership values are modified and defuzzified to give the enhanced pixel intensities shown in figure 2(c). In this figure, some pixel intensities become negative; they would be rounded to zero when the image is displayed, so some image content would be lost. The optimized modified membership values and enhanced pixel intensities, which have no information loss, are shown in figures 2(d) and 2(e), respectively. To reduce the computation time, the value of \(\mu _{C}\) can be derived using Eq. (6) as

$$\begin{aligned} \mu '(k)=\frac{1}{1+\exp \big (-t\,(\mu (k)-\mu _{C})\big )} \end{aligned}$$

So,

$$\begin{aligned} \mu _{C}=\mu (k)+ \frac{\log \left( \frac{1-\mu '(k)}{\mu '(k)}\right) }{t} \end{aligned}$$

For no loss of image content, the minimum of the original membership values \((\mu )\) should be less than or equal to the minimum of the modified membership values \((\mu ')\). Here, \(\mu =\mu '\) is used to find the value of \(\mu _{C}\). So, putting \(\mu (k)=\mu \) and \(\mu '(k)=\mu \),

$$\begin{aligned} \mu _{C}=\mu + \frac{\log \left( \frac{1-\mu }{\mu }\right) }{t} \end{aligned}$$

Experimentally, it is found that the optimized value of \(\mu _{C}\) can be evaluated by putting the maximum value of t in the above equation.

$$\begin{aligned} \mu _{C}=\mu + \frac{\log \left( \frac{1-\mu }{\mu }\right) }{25} \end{aligned}$$
(11)

With this choice, every value of t in its range produces a modified-membership curve whose starting value is greater than or equal to the minimum of the original membership values.
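Eq. (11) and the property it guarantees can be checked numerically; in this sketch, the sigmoid of Eq. (6) with t = 25 maps the minimum original membership back onto itself:

```python
import numpy as np

def mu_c_value(mu_min, t_max=25.0):
    """Crossover point of Eq. (11): chosen so that, for the maximum
    t, the sigmoid of Eq. (6) maps the minimum original membership
    back onto itself, hence no modified value falls below it."""
    return mu_min + np.log((1.0 - mu_min) / mu_min) / t_max
```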

Figure 2
figure 2

(a) Original optimized membership function of Wall image, (b) Modified membership values for different values of \(\mu _{C}\) from 0.3 to 0.4 at the interval of 0.01 (c) Modified pixel intensities when \(t=23.99\) and \(\mu _{C}=0.36\) are used for membership value modification (d) Optimized modified membership values at \(t=23.99\) and \(\mu _{C}=0.34\) and (e) Optimized modified pixel intensities.

2.5 Fitness function

The enhanced image should have more content information and sharper edges than the original image. Therefore, to determine the optimal value of the parameter t required for membership value modification, a fitness function was employed with three measures: the sum of the intensities in the Sobel edge image (an indicator of how well edges are preserved), the number of Sobel edge pixels, and entropy (a measure of information content) [30]. The optimization process maximizes the fitness function, which is formulated in [28] as follows:

$$\begin{aligned} F(G_{E})=\log \left( \log \left( \sum _{i=1}^{M}\sum _{j=1}^{N}G_{S}(i,j)\right) \right) \times \frac{EdPix(G_{S})}{M \times N} \times E(G_{E}) \end{aligned}$$
(12)

where \(G_{E}\) is the enhanced version of the original grayscale image, and \(G_{S}\) is the Sobel edge image obtained by applying the Sobel operator to \(G_{E}\). EdPix is the number of pixels with intensity values greater than a threshold, and \(E(G_{E})\) is the entropy of \(G_{E}\).
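A self-contained sketch of the fitness function of Eq. (12) is given below; the 3×3 Sobel kernels and the zero edge threshold are standard choices assumed here, not taken from the paper:

```python
import numpy as np

def sobel_magnitude(g):
    """3x3 Sobel gradient magnitude on the valid (unpadded) region."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    m, n = g.shape
    gx = np.zeros((m - 2, n - 2))
    gy = np.zeros((m - 2, n - 2))
    for i in range(3):
        for j in range(3):
            patch = g[i:m - 2 + i, j:n - 2 + j]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def fitness(g_e, threshold=0.0):
    """Fitness of Eq. (12): log(log(sum of Sobel intensities)) times
    the edge-pixel fraction times the entropy of g_e. The zero edge
    threshold is an assumed default."""
    g_s = sobel_magnitude(g_e.astype(np.float64))
    ed_pix = int((g_s > threshold).sum())
    hist, _ = np.histogram(g_e, bins=256, range=(0, 256))
    p = hist[hist > 0] / g_e.size
    entropy = float(-(p * np.log2(p)).sum())
    m, n = g_e.shape
    return float(np.log(np.log(g_s.sum())) * ed_pix / (m * n) * entropy)
```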

2.6 Enhancement of RGB luminance

The steps to calculate and improve RGB luminance or luminosity are as follows:

  1. Label R, G, and B the scaled image’s red, green, and blue channels.

  2. Evaluate the RGB luminosity as mentioned in [28] and label it L.

  3. Convert L to the fuzzy feature plane using Eq. (5), with the value of C optimized using the objective function formulated in Eq. (10).

  4. Assess the value of \(\mu _{C}\) from Eq. (11).

  5. Determine the optimum value of parameter t using F (Eq. (12)).

  6. Compute the modified membership values using Eq. (6) with the optimum t and \(\mu _{C}\).

  7. Find the optimized value of C using Eq. (9).

  8. Defuzzify using Eq. (7) to get the improved luminosity, \(L'\).

Using this process, the enhanced luminosity in the dark image is more than the original luminosity, whereas the improved luminosity is less than the initial luminosity in the bright image.

2.7 Color image reconstruction

The resultant color image is constructed from the input image in proportion to the ratio of \(L'\) to L. For a dark image, the enhanced color image is created as described in [31] and formulated as follows:

$$\begin{aligned} I_{c,ij}^{'}=I_{c,ij}\times \bigg [\frac{L'_{ij}}{L_{ij}}\bigg ] \end{aligned}$$
(13)

where \(L_{ij}'\) and \(L_{ij}\) are the enhanced and original luminosity at location (i, j), respectively, and \(I_{c,ij}^{'}\) and \(I_{c,ij}\) are the enhanced and original pixel intensities at location (i, j) for channel c, where c ranges over the R, G, and B channels. For bright images, however, Eq. (13) can generate intensities outside the gamut and lose information on display. So for these images, the enhanced color image is constructed as mentioned in [28, 32]:

$$\begin{aligned} I_{c,ij}^{'}=I_{c,ij}\times \bigg [\frac{L'_{ij}}{L_{ij}}\bigg ]^{1-\sqrt{I_{c,ij}}} \end{aligned}$$
(14)
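The reconstruction of Eqs. (13) and (14) can be sketched as follows; normalizing the intensity to [0, 1] inside the exponent of Eq. (14) is an assumption of this sketch:

```python
import numpy as np

def reconstruct(img, L, L_new, image_type):
    """Rebuild the RGB image from the enhanced luminosity: Eq. (13)
    for dark images, Eq. (14) for bright ones. The [0, 1] scaling
    of the intensity inside the exponent is an assumption."""
    ratio = L_new / np.maximum(L, 1e-6)   # guard against L == 0
    out = np.empty_like(img, dtype=np.float64)
    for c in range(3):
        if image_type == 'Dark':
            out[..., c] = img[..., c] * ratio                 # Eq. (13)
        else:
            # Eq. (14): the exponent 1 - sqrt(I) tempers bright pixels
            expo = 1.0 - np.sqrt(img[..., c] / 255.0)
            out[..., c] = img[..., c] * ratio ** expo
    return np.clip(out, 0, 255)
```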

The step-by-step process to find the enhanced RGB image from the input color image is shown in figure 3.

Figure 3
figure 3

Flow chart for the proposed approach.

3 Experiment and results

The experiment was performed on the contrast distorted color images from the CSIQ [33] and CCID2014 [34] databases. The CCID2014 database provides dark and bright images with low contrast, whereas the CSIQ database consists of low contrast images with normal exposure. The paper shows the results of two images from the CSIQ database and four images from the CCID2014 database. The ‘child’ (figure 4(a)) and ‘1600’ (figure 5(a)) are from the CSIQ database; ‘Wall1’ (figure 6(a)), ‘Caps1’ (figure 7(a)), ‘Wall2’ (figure 8(a)) and ‘Caps2’ (figure 9(a)) are from the CCID2014 database. The proposed scheme was compared with Histogram Equalization (HE), Contrast Limited Adaptive Histogram Equalization (CLAHE) [18], Brightness Preserving Dynamic Histogram Equalization (BPDHE) [17], Pal1980 [5], Hanmandlu2003 [7], Hanmandlu2016 [27], BPFDHE [19], and FuzzyCIE [21].

Figure 4
figure 4

(a, f) Original child image and its histogram, enhanced images and their histograms produced by (b, g) Histogram Equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 5
figure 5

(a, f) Original 1600 image and its histogram, enhanced images and their histograms produced by (b, g) Histogram equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 6
figure 6

(a, f) Original wall1 image and its histogram, enhanced images and their histograms produced by (b, g) Histogram equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 7
figure 7

(a, f) Original caps1 image and its histogram, enhanced images and their histograms produced by (b, g) Histogram equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 8
figure 8

(a, f) Original Wall2 image and its histogram, enhanced images and their histograms produced by (b, g) Histogram Equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 9
figure 9

(a, f) Original Caps2 image and its histogram, enhanced images and their histograms produced by (b, g) Histogram equalization, (c, h) BPDHE, (d, i) CLAHE, (e, j) Pal1980, (k, p) Hanmandlu2003, (l, q) BPDFHE, (m, r) Hanmandlu2016, (n, s) FuzzyCIE, (o, t) proposed method.

Figure 4 demonstrates that histogram modification approaches like HE, BPDHE, BPDFHE, and FuzzyCIE cover the whole dynamic range of the histogram but do not provide color enhancement (see figures 4(b, c, l, n)); in fact, the colors fade away. The Histogram Equalization (HE) method was applied to the V channel of the HSV color space without affecting hue and saturation. Figures 4(b, c) illustrate how the HE and BPDHE approaches completely change the color of the water and grass. The color of the child’s dress also changes in figure 4(c), for example from sea green to black. The CLAHE approach preserves the shape of the histogram but does not cover the entire dynamic range, so the image remains blurry, as shown in figure 4(d).

The Pal1980 method was employed on the V channel in the same way as the histogram equalization method was and the output image is displayed in figure 4(e). Most parts of the image have turned black and the histogram shape is also changed. Figures 4(k, m) depict the results of the Hanmandlu2003 and Hanmandlu2016 methods. Since the size and shape of the histogram remain unchanged, the output images look identical to the original image. However, not only the value channel of the HSV color space is changed but also the saturation channel in these methods. The L* channel of L*a*b* color space was altered in the BPDFHE and FuzzyCIE by utilizing the histogram modification technique on the fuzzy histogram. However, as seen in figures 4(l, n), these approaches were unable to restore the image’s colors. By using the proposed method, the histogram covers the entire dynamic range and has the same shape as the given image. The colors of all parts are reflecting well, as shown in figure 4(o).

The ‘1600’ image yielded nearly identical results to the ‘child’ image. The colors of the flowers and grass have changed, and the sky is too white in figure 5(b), whereas it darkens in figures 5(c, d, l). Figure 5(e) contains black patches in several places. The histogram of figure 5(k) shifts towards the left, so the mean of the image becomes low, but the contrast is not improved. The colors of the flowers in figure 5(n) have changed. Figure 5(o) displays the pleasant image produced by the proposed method; its histogram stretches in both directions and covers the whole dynamic range.

The ‘Wall1’ image is a low-contrast dark image displayed in figure 6(a). Its histogram is skewed towards the left and concentrated in a narrow range, as shown in figure 6(f). Here, the goal is for the histogram to cover the whole dynamic range, for the enhanced image to look pleasant, and for no color cast to appear; this is achieved in figures 6(b, o). The image displayed in figure 6(o) is the best among all images shown in figure 6.

Figure 7(a) contains the ‘Caps1’ image, which is also a low-contrast dark image. As seen in figure 7(f), its histogram is likewise confined to a small range and inclined to the left. The goal is the same as for the ‘Wall1’ image. Good enhancement may be seen in figures 7(b, o). The finest of all the images is the one shown in figure 7(o), and its histogram, displayed in figure 7(t), follows the same shape as the original image’s histogram.

Figure 8 consists of the original image—‘Wall2’, enhanced images, and their histograms. The histogram of the original image is skewed towards the right and has a small range, which indicates that it is a bright image with low contrast. HE, CLAHE, and FUZZYCIE stretched the histogram towards the left, as shown in figures 8(g, i, s), but the colors of the output images are changed. The color of the doors looks dark brown (see figures 8(b, d, n)).

Pal1980 shifted the histogram slightly towards the left and tried to boost the contrast, but the output image is still dull, as shown in figures 8(e, j). BPDHE produced the clear image displayed in figure 8(c), but most of the image becomes black, and the histogram (figure 8(h)) has a large number of pixels near 0. The Hanmandlu2016 method created a more visible image (see figure 8(m)), as is also evident from its histogram in figure 8(r). The histogram of the image enhanced by the proposed method is stretched towards the left and has a shape similar to that of the original image (figure 8(t)). The output image (displayed in figure 8(o)) is also better than the other enhanced images.

The original image—‘Caps2’, its enhanced images, and their histograms are displayed in figure 9. The original image’s histogram is also inclined to the right and has a narrow range, indicating that it is a bright, low-contrast image. It shows the same results as in the case of the ‘Wall2’ image. Figure 9(o) displays the image which is the best of all images and produced by the proposed method. Its histogram is illustrated in figure 9(t).

3.1 Subjective assessment

The ITU-R recommends the Mean Opinion Score (MOS) on a five-point (1–5) or a 0–100 scale as a qualitative assessment method for evaluating an image’s visual appearance [35]. Ponomarenko et al [36] utilized a 10-point scale (0–9) to demonstrate that these methodologies differ only in the observer’s convenience and lead to the same results. For this study, twenty image processing experts and non-experts provided MOS values on a 7-point Likert scale (Much Worse −3, Worse −2, Slightly Worse −1, The Same 0, Slightly Better +1, Better +2, Much Better +3), which is recommended by the ITU-R for comparison purposes.

The observers viewed the distorted and improved images side by side. Their responses were recorded on the 7-point scale for the enhanced images and averaged to give the MOS values tabulated in table 1. Enhanced images produced by the proposed approach have an average opinion score above 2.6, demonstrating their excellent quality. The standard deviation of the opinion scores is in the range 0.44–0.61, implying that most of the quality scores are either 2 or 3.

Table 1 Mean Opinion Score (MOS) and standard deviation of opinion scores for all images by twenty observers.

3.2 Objective assessment

Researchers have provided a variety of metrics for the objective evaluation of an image. The mean, a fundamental property, was computed to determine the image’s exposure (i.e., whether it is a dark, bright, or pleasant image). The standard deviation is a primary measure of contrast in the image. Here, the experiment was performed on color images, so the colorfulness index provided in [37] was used to judge color quality. Apart from measuring individual features of an image, the overall quality can be assessed by metrics such as CQE [38], NIQMC [39], and NR-CDIQA [40]. In this paper, NR-CDIQA and CQE were employed to determine image quality. Since enhanced images should have a significant amount of content information and sharp edges, the fitness function mentioned in section 2.5 was also utilized to evaluate image quality.

The arithmetic means of the intensities of the original and enhanced color images are shown in table 2. The first two images have mean values near 127, which means they have good exposure. The methods of [5, 7, 17, 21] decreased the mean of the ‘1600’ image, whereas HE and the methods of [18, 19, 27] increased it. The mean of the ‘Child’ image is increased by all methods except [5]. The proposed method also improved the means of both images. As the mean values of the ‘Wall1’ and ‘Caps1’ images are 35.81 and 16.46 respectively, they are considered dark images and require an increase in mean to achieve good exposure. The means of the ‘Wall2’ and ‘Caps2’ images are 215.69 and 210.85 respectively, so they are bright images whose means must be decreased to achieve good exposure; this is accomplished by the proposed method.

Table 2 Mean of RGB image’s intensities.

The standard deviation is a good measure of image contrast, as it quantifies the dispersion of image intensities around the mean. It is evident from table 3 that the original images have low contrast, with standard deviations below 40. The proposed method considerably boosts the contrast of every image. Table 3 also makes clear that the highest standard deviation does not mean the image attains good contrast; this may occur because the pixel intensities are concentrated at both ends of the histogram rather than spread across the middle.

Table 3 Standard deviation of RGB image.

The original images have little content information and look dull, so their fitness values are very low, as shown in table 4. For the ‘1600’ image, HE and CLAHE yield high fitness values, but the sky is too white and too black in figures 5(b) and 5(d), respectively, so these values are not meaningful. For the ‘Wall1’ and ‘Caps1’ images, HE achieved the maximum fitness value and produced somewhat better images. For the ‘Wall2’ and ‘Caps2’ images, CLAHE provides the highest fitness value, but the images produced are lacking in color (the colors are modified). The proposed method obtained fitness values that are either the highest or close to it, and produced images with no color cast.

Table 4 Values of the fitness function.
Table 5 Colorfulness index.

The colorfulness index is an essential metric for color images, measuring how colorful an image is. As can be seen in table 5, the original images have low colorfulness index values. Due to this, the colors of the ‘1600’ and ‘Child’ images look flat or faded despite good exposure. The ‘Wall2’ and ‘Caps2’ images have high mean values but appear dreary, as do the ‘Wall1’ and ‘Caps1’ images, which have low exposure. The colors may be saturated towards black (dark) or white (bright), so saturation adjustment is required for an adequate level of enhancement. Only the method of [27] takes this stage of the process into account. The proposed method modifies all color channels in proportion to the change in luminosity, so the saturation adjusts itself without changing the hue, and the enhanced images attain a good colorfulness index.

NR-CDIQA is a no-reference quality metric for contrast-distorted images that uses a statistical model of natural images for quality prediction. The model is trained on the SUN 2012 database and validated on the CID2013, TID2013, and CSIQ databases of contrast-distorted color images. It is evident from table 6 that its value is between 2 and 3 for the original images. The proposed method increases these values for the enhanced images and achieves higher values than the other fuzzy set theory-based approaches. CQE is also considered a good metric for assessing overall image quality, so its values are tabulated in table 7. A clear improvement was found with the proposed method. The results indicate that the proposed approach achieves either the highest values or values near them.

Table 6 NR-CDIQA values.
Table 7 CQE (color quality enhancement) values.

4 Conclusion

A fuzzy rule-based system was designed to find candidate images for enhancement automatically and to categorize them into different groups: dark, bright, or pleasant. The proposed method was compared with several modern algorithms based on histogram modification and fuzzy image enhancement. All color channels were modified according to the ratio of the RGB luminosity after enhancement to that before enhancement.

The enhanced images were assessed both subjectively and objectively. The proposed method produces images with no color cast, and the histograms of the enhanced images follow the same shape as those of the original images. The subjective assessment confirmed how effectively the proposed method enhances images. The mean, standard deviation, fitness function value, colorfulness index, NR-CDIQA, and CQE demonstrate the efficiency and effectiveness of LCFIE (Low Contrast Fuzzy Image Enhancement) in improving the quality of low-contrast color images and show that the proposed approach is superior to the existing methods.