1 Introduction

Computer vision technology has advanced significantly and is widely used in fields such as mobile phones, medical applications, and underwater and space exploration. High dynamic range images are usually preferred in these fields. However, real images captured by cameras often suffer from quality problems such as low or high brightness and contrast due to environmental conditions. Low-light images typically have poor visibility, low contrast, and limited detail, which may hinder accurate interpretation and analysis [24]. These problems can degrade the performance of a computer vision system; therefore, image enhancement techniques have become increasingly important for computer vision applications [16, 22, 39]. The aim of low-light image enhancement methods is to obtain a natural and effective image that reveals details by adjusting brightness and contrast [4, 39]. Contrast and brightness enhancement can bring out hidden details and textures and assist with accurate interpretation of the image [38].

Nowadays, there are plenty of low-light image enhancement methods with different purposes and advantages. Histogram equalization (HE) methods [9, 38] aim to achieve a balanced histogram distribution by adjusting the dynamic range of image gray levels to improve contrast and brightness. Nevertheless, the enhanced images can exhibit artifacts such as local over- or under-enhancement, saturation, and color distortion. Gamma correction methods enhance the contrast of the image according to a selected parameter [9]. Because each pixel is enhanced independently, obtaining a satisfactory result with gamma correction methods is difficult. In retinex methods [18], an image is modeled as the combination of illumination and reflection components. While the reflection component represents the inner information and details of the image, the illumination component represents its brightness [40]. Retinex methods are the most common methods used for low-light images; however, they may cause adverse effects such as halos and under/over-enhancement [6]. In fusion-based methods, multiple images of the same scene are obtained with different sensors, or multiple images are obtained from the same sensor with different imaging techniques; these images are then combined with different methods to obtain a single high-quality image [23, 35]. In recent years, fusion-based methods have become widespread. However, adverse effects such as blurring and spatial distortions may occur, and deciding how to combine multiple images can be difficult.

In this paper, we propose a new fusion-based enhancement method based on the active contour model (ACM) [5] and an adaptive gamma correction (AGC) technique [13] for natural contrast and brightness adjustment that reveals details in low-light images. The image is divided into regions with the CV-LGDF method [28], which yields the segmented regions and a fusion mask. Each region is converted to a grayscale image and its exposure threshold value is calculated, providing effective parameter selection for each region. Then, each region is converted from RGB to HSI color space, and the intensity (I) component of each region is enhanced with the novel AGC technique, which reflects the nature of the image region. The enhanced regions are combined according to the fusion mask, and the result is transformed back to RGB color space. Finally, HE is performed on the input image using the histogram map of the fusion image. The main contributions of this paper are as follows:

  • We introduce a new fusion-based image enhancement method using CV-LGDF segmentation and AGC enhancement methods.

  • We apply the exposure threshold value of the regions as the parameter value for the weight function for each segmented region in the proposed AGC enhancement method.

  • We obtain effective and natural contrast and brightness enhancement by using the proposed fusion-based image enhancement method for low-light images.

The rest of this article proceeds as follows: The related literature is presented in Section 2. The proposed fusion-based enhancement and details are explained in Section 3. Section 4 presents the experimental results and comparisons of the enhancement methods. Finally, the conclusion and future work are given in Section 5.

2 Related work

HE is a commonly used method for contrast and brightness enhancement. Nonetheless, HE can produce artifacts such as local over- or under-enhancement and saturation, which may lead to a loss of detail in the image. Pizer et al. proposed the contrast-limited HE method, which separates the input image into regions of equal size and applies HE to each region [30]. Wadud et al. presented the dynamic HE method, which segments the image histogram into multiple sub-histograms using local minima and performs HE in these regions until no dominating portion remains [1]. Kuldeep et al. offered the exposure-based sub-image HE method, which divides the image histogram into sub-histograms according to an exposure threshold and applies HE to each of them [34]. Parihar et al. described the entropy-based dynamic sub-HE method, which recursively segments the image histogram into sub-histograms according to entropy and performs dynamic-range HE on each sub-histogram [29]. Simi et al. developed the parameterless fuzzy HE method [33].

Nonlinear transform function based methods are widely used for contrast and brightness enhancement. The gamma correction method adjusts contrast and brightness by expanding the dynamic range of the image. However, effective parameter selection is time-consuming and difficult. In addition, it can create artifacts, since it is applied directly to each pixel regardless of its neighborhood. Many gamma correction based methods have been developed to set the parameter value effectively and to provide effective contrast and brightness enhancement. Huang et al. presented the AGC method, which calculates a proper gamma value from statistics of the image [14]. Huang et al. described the AGCWD method, which determines the gamma value using the probability distribution of pixel values in the image [13]. Liu et al. offered a new gamma correction method in which the gamma parameter is determined using a membership function for low-light images [25]. Bhandari et al. presented a novel correction method which combines gamma-corrected reflectance and the multi-scale retinex method [2].

In retinex-based methods, it is assumed that the image can be represented as an illumination and a reflection component. These methods provide color enhancement as well as contrast and brightness adjustment in low-light images. However, they may create negative effects such as halos and over- or under-enhancement. Jobson et al. developed a multi-scale method with a color restoration function to obtain good color enhancement [17]. Wang et al. presented the naturalness-preserved enhancement method, which applies a bright-pass filter using neighbouring luminance information [37]. Guo et al. presented an illumination estimation-based method that estimates only the illumination component to reduce computational complexity [11]. Li et al. proposed an illumination and reflection component estimation method, in which the decomposition is performed on the V component of the HSV space of the input image [21].

In recent years, fusion-based methods have become popular for image enhancement. Fu et al. described a multi-scale fusion-based method in which different enhancement methods were applied to the illumination component of the image [8]; the enhanced illumination component was then combined with the reflection component to obtain the enhanced image. Wang et al. suggested a fusion-based method using retinex methods, where the single-scale retinex method was applied to the input image in RGB color space and the retinex algorithm with color restoration was applied to the I component of the input image in HSI color space [36]. Jeong and Lee presented a gamma correction-based fusion method applied to the dark and light regions of the input image in the YCbCr color space [15]. Leng et al. implemented image fusion by providing effective coefficient selection and applying a multi-orientation fusion strategy [19, 20].

In the literature, HE-based, gamma-based, retinex-based, and fusion-based methods are applied to enhance low-light images. Most HE-based methods are applied either to the whole image or to sub-regions of the image determined by entropy, median, or mean values. Most gamma-based methods use either a fixed gamma parameter or an adaptive parameter determined for the low-light image. Retinex-based methods divide the image into illumination and reflection components and enhance these components. Most fusion-based methods produce the enhanced image by fusing multiple enhanced versions of the same image; region-based, pixel-based, block-based, and weight parameter-based processes are used in the fusion.

In low-light image enhancement methods, the choice of color space is very important. In RGB color space, the enhancement process is applied to each component separately; however, the correlation between the components deteriorates and serious color deviations occur in the image [38]. Therefore, enhancement methods are applied after the image is converted to a different color space. Thus, most image enhancement methods perform enhancement on a single color component, such as the V component of HSV, the I component of HSI, or the L component of Lab.

In this paper, a new fusion-based image enhancement method is proposed by exploiting the advantages of the AGC method and the CV-LGDF segmentation method. Unlike most fusion-based enhancement methods, the image is divided into object and background regions with CV-LGDF. These regions are converted from RGB to HSI color space, and the I component of each region is enhanced with an AGC method adapted to the region's own structure.

3 Proposed method

We propose a novel fusion-based image enhancement method consisting of segmentation, enhancement, and fusion stages. For the image segmentation phase, the CV-LGDF method, which can be applied to different types of images, is used. This segmentation method not only detects the borders of objects but also captures their details [28]. In the proposed approach, the image is divided into object and background regions using the CV-LGDF segmentation method [28].

These regions are converted from RGB to HSI color space. The I component of each region is enhanced with a novel AGC method adjusted according to the content of the region. The enhanced regions are combined with the fusion mask, and the obtained image is transformed back to RGB color space. Finally, HE is performed on the input image using the histogram map of the fusion image.

3.1 Image segmentation method

Segmentation divides an image into meaningful regions based on similar characteristics such as color, texture, and structure, making the image suitable for analysis, interpretation, and classification. In this paper, the ACM-based CV-LGDF method, shown in Fig. 1, is used for image segmentation [28]. The global segmentation phase detects the border regions of the objects in the image. The local segmentation phase is then applied to detect the details as well as the borders. In the local segmentation phase, the global fusion mask, produced by the global segmentation phase, is used as the initialization contour; the final fusion mask is then obtained as the result of the local segmentation phase.

Fig. 1
figure 1

Block diagram of the CV-LGDF based segmentation method

Firstly, the input RGB image is transformed into a grayscale image. Then, the global segmentation phase of the CV-LGDF method is performed on the grayscale image. The energy minimization function of the global segmentation phase is defined as follows [28]:

$$\begin{aligned} E(c_1,c_2,C)&=\int _{\text {inside}C}(I(x)-c_1)^2 H(\phi (x))dx \nonumber \\&+\int _{\text {outside}C}(I(x)-c_2)^2 (1-H(\phi (x)))dx \end{aligned}$$
(1)

where I(x) and C denote the image and the initialization contour, respectively. \(c_1\) and \(c_2\) represent the mean intensities inside and outside the contour C, respectively, and are defined as follows [28]:

$$\begin{aligned} c_1=\frac{\int _\Omega I(x) H(\phi (x))dx}{\int _\Omega H(\phi (x))dx} \end{aligned}$$
(2)
$$\begin{aligned} c_2=\frac{\int _\Omega I(x) (1-H(\phi (x)))dx}{\int _\Omega (1-H(\phi (x)))dx} \end{aligned}$$
(3)

where \(\Omega \) and \(H(\phi (x))\) denote the image domain and the regularized Heaviside function, respectively. \(H(\phi (x))\) is defined as follows [28]:

$$\begin{aligned} H(\phi )=\frac{1}{2}\left( 1+\frac{2}{\pi } \arctan \left( \frac{\phi }{\varepsilon } \right) \right) \end{aligned}$$
(4)

where \(\varepsilon \) is a small constant and \(\phi \) is the level set function, defined as a Lipschitz function. Details of this term are described in [5].
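As an illustration, the regularized Heaviside of (4) and the region means of (2) and (3) can be sketched in NumPy; the function names and the small stabilizing constant in the denominators are our own additions:

```python
import numpy as np

def heaviside(phi, eps=1.0):
    """Regularized Heaviside H(phi) of Eq. (4)."""
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def region_means(I, phi, eps=1.0):
    """Mean intensities inside (c1) and outside (c2) the contour, Eqs. (2)-(3)."""
    H = heaviside(phi, eps)
    c1 = (I * H).sum() / (H.sum() + 1e-10)
    c2 = (I * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-10)
    return c1, c2
```

Here the sign of \(\phi \) encodes inside/outside: pixels with positive \(\phi \) contribute mostly to \(c_1\), negative ones to \(c_2\).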

The gradient descent flow function of the global segmentation method is expressed as follows [28]:

$$\begin{aligned} \frac{\partial \phi }{\partial t}&=\frac{\mu div\left( \frac{\nabla \phi }{\mid \nabla \phi \mid }\right) }{max\left( max \mid div\left( \frac{\nabla \phi }{\mid \nabla \phi \mid }\right) \mid \right) }\nonumber \\&+\frac{1}{(I(x)-c_1)^2+(I(x)-c_2)^2} \end{aligned}$$
(5)

where the \(\mu div\left( \frac{\nabla \phi }{\mid \nabla \phi \mid }\right) \) term represents the regularity of the contour, enforcing smoothness over the region boundaries [28].
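A minimal sketch of one explicit update of this flow, following Eq. (5) as printed, is given below; the time step, the \(\varepsilon = 1\) choice in the Heaviside, and the stabilizing constants are our assumptions:

```python
import numpy as np

def curvature(phi):
    """div(grad(phi)/|grad(phi)|) via central differences (axis 0 = rows)."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx**2 + gy**2) + 1e-10
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def global_step(phi, I, mu=1.0, dt=0.1):
    """One explicit update of the global-phase flow of Eq. (5)."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi))        # Eq. (4), eps = 1
    c1 = (I * H).sum() / (H.sum() + 1e-10)                  # Eq. (2)
    c2 = (I * (1.0 - H)).sum() / ((1.0 - H).sum() + 1e-10)  # Eq. (3)
    kappa = curvature(phi)
    reg = mu * kappa / (np.abs(kappa).max() + 1e-10)        # normalized curvature term
    data = 1.0 / ((I - c1)**2 + (I - c2)**2 + 1e-10)        # data term, Eq. (5) as printed
    return phi + dt * (reg + data)
```

In practice this step is iterated until the contour stabilizes, as in Algorithm 1.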

After the homogeneous regions of the image are detected quickly and successfully in the global segmentation phase, the local segmentation phase of CV-LGDF is applied to the image. The contour obtained from the global segmentation is used as the initialization contour for the local segmentation phase, so the details in each region are revealed more effectively. The energy minimization function of the local segmentation phase is defined as follows [28]:

$$\begin{aligned} E\left( u_1(x),u_2(x),\sigma ^2_1(x),\sigma ^2_2(x)\right) =&-\int _{\Omega \text {outside}}\omega (x-y) \log p_{1,x}\left( I(y)\right) dy \nonumber \\&-\int _{\Omega \text {inside}}\omega (x-y) \log p_{2,x}\left( I(y)\right) dy \end{aligned}$$
(6)

where I(y) indicates the image and \(p_{i,x}\left( I(y)\right) \), \(i=1,2\), is the local Gaussian probability of the intensity, defined as follows [28]:

$$\begin{aligned} p_{i,x}\left( I(y)\right) =\frac{1}{\sqrt{2\pi }\sigma _i(x)} \exp \left( -\frac{\left( u_i(x)-I(y) \right) ^2}{2\sigma _i(x)^2} \right) \end{aligned}$$
(7)

\(u_1(x)\) and \(u_2(x)\) are the local intensity means and are defined as follows [28]:

$$\begin{aligned} u_1(x)=\frac{\int \omega (x-y) I(y) H(\phi (x))dy}{\int \omega (x-y) H(\phi (x))dy} \end{aligned}$$
(8)
$$\begin{aligned} u_2(x)=\frac{\int \omega (x-y) I(y) \left( 1-H(\phi (x)) \right) dy}{\int \omega (x-y)\left( 1-H(\phi (x)) \right) dy} \end{aligned}$$
(9)

\(\sigma ^2_{1}(x)\) and \(\sigma ^2_{2}(x)\) are the local intensity variances and are defined as follows [28]:

$$\begin{aligned} \sigma _1(x)^2=\frac{\int \omega (x-y) \left( u_1(x)-I(y) \right) ^2 H(\phi (x))dy}{\int \omega (x-y) H(\phi (x))dy} \end{aligned}$$
(10)
$$\begin{aligned} \sigma _2(x)^2=\frac{\int \omega (x-y) \left( u_2(x)-I(y) \right) ^2 \left( 1-H(\phi (x)) \right) dy}{\int \omega (x-y)\left( 1-H(\phi (x)) \right) dy} \end{aligned}$$
(11)

The gradient descent flow function applied in the local segmentation phase is expressed as follows [28]:

$$\begin{aligned} \frac{\partial \phi }{\partial t} = -\delta (\phi ) (e_1 - e_2)+ \nu \delta (\phi ) + \mu \left( \nabla ^2 \phi \right) \end{aligned}$$
(12)

where \(e_1\) and \(e_2\) are computed from the local intensity means and variances of the image and are defined as follows [28]:

$$\begin{aligned} e_1(x)= \int _{\Omega }\omega (x-y)\left[ \log \sigma _1(x) + \left( -\frac{\left( u_1(x)-I(y) \right) ^2}{2\sigma _1(x)^2} \right) \right] dy \end{aligned}$$
(13)
$$\begin{aligned} e_2(x)= \int _{\Omega }\omega (x-y)\left[ \log \sigma _2(x) + \left( -\frac{\left( u_2(x)-I(y) \right) ^2}{2\sigma _2(x)^2} \right) \right] dy \end{aligned}$$
(14)
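Since the Gaussian window \(\omega \) enters (8)–(11) only through weighted averages, these local statistics reduce to convolutions. A NumPy-only sketch follows; the separable window implementation, the window radius, and the stabilizing constant are our assumptions:

```python
import numpy as np

def gauss_smooth(f, sigma=3.0):
    """Separable Gaussian windowing (omega * f) used in Eqs. (8)-(11)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    f = np.apply_along_axis(lambda row: np.convolve(row, k, mode='same'), 1, f)
    f = np.apply_along_axis(lambda col: np.convolve(col, k, mode='same'), 0, f)
    return f

def local_stats(I, H, sigma=3.0):
    """Local means u1, u2 (Eqs. 8-9) and variances (Eqs. 10-11)."""
    eps = 1e-10
    wH = gauss_smooth(H, sigma) + eps
    wHc = gauss_smooth(1.0 - H, sigma) + eps
    u1 = gauss_smooth(I * H, sigma) / wH
    u2 = gauss_smooth(I * (1.0 - H), sigma) / wHc
    # expanding (u_i(x) - I(y))^2 turns Eqs. (10)-(11) into E[I^2] - u_i^2
    v1 = gauss_smooth(I**2 * H, sigma) / wH - u1**2
    v2 = gauss_smooth(I**2 * (1.0 - H), sigma) / wHc - u2**2
    return u1, u2, np.maximum(v1, eps), np.maximum(v2, eps)
```

On a perfectly homogeneous region the local variances vanish, which is what drives the data terms \(e_1\) and \(e_2\) of (13)–(14).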

The pseudo code of the CV-LGDF method is given in Algorithm 1.

Algorithm 1
figure a

The CV-LGDF Segmentation Method

The fusion mask is obtained by applying the CV-LGDF method to the image. The input RGB image is then segmented into object and background regions using the fusion mask, as demonstrated in Fig. 2: white pixels of the mask are labeled as the object region and black pixels as the background region.
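The masking step can be sketched as follows (NumPy assumed; the function name is ours):

```python
import numpy as np

def split_regions(rgb, mask):
    """Split an RGB image into object/background regions with the fusion mask.

    mask is 2-D: 1 (white) marks the object region, 0 (black) the background.
    """
    m = mask.astype(rgb.dtype)[..., None]   # broadcast the mask over channels
    return rgb * m, rgb * (1.0 - m)
```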

Fig. 2
figure 2

Image segmentation method (a) input image, (b) fusion mask, (c) object region, (d) background region

3.2 Image enhancement method

Image enhancement improves the quality and information of an image by adjusting its contrast and brightness. We propose a novel image enhancement method based on the AGC method in HSI color space, which effectively enhances the contrast and brightness of the image. The block diagram of the AGC-based enhancement method is given in Fig. 3.

Fig. 3
figure 3

Block diagram of the proposed AGC-based enhancement method

After the input image is segmented into object and background regions, image enhancement is applied to each region. The regions are converted to grayscale images, and the exposure threshold values of the object and background regions are calculated. Hanmandlu et al. [12] introduced the exposure threshold value to determine whether an image is underexposed or overexposed. The parameter is normalized to the range [0, 1], and the optimal exposure value is 0.5: regions with an exposure value above 0.5 are overexposed, while regions below 0.5 are underexposed. The exposure threshold function is expressed as follows [12]:

$$\begin{aligned} exposure=\frac{1}{L}\times \frac{\sum _{k=0}^L h(k) \times k}{\sum _{k=0}^L h(k)} \end{aligned}$$
(15)

where k is the gray level, h(k) is the image histogram, and L is the total number of gray levels. The exposure threshold value is used to set the gamma correction parameter in the proposed method.
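Following Eq. (15), the exposure value can be computed as below (the function name is ours; the histogram is taken over the L bins of the region):

```python
import numpy as np

def exposure_value(gray, L=256):
    """Exposure threshold of Eq. (15) for a gray image with levels 0..L-1."""
    h = np.bincount(np.clip(gray.astype(int).ravel(), 0, L - 1), minlength=L)
    k = np.arange(L)
    return (h * k).sum() / (h.sum() * L)
```

A mid-gray region gives an exposure of 0.5, while a dark region falls below it, matching the under/overexposure interpretation above.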

Histogram stretching (HS) is an effective technique for increasing contrast by stretching the intensity values of the image [9]. In this paper, the histogram of the image is stretched by redistributing the intensity values within the range [0, 255]. HS is applied to the R, G, and B color components of both regions of the image. The HS function for the R color component is defined as follows:

$$\begin{aligned} R'(x,y)=\frac{R(x,y)-R_{min}}{R_{max}-R_{min}}\times 255 \end{aligned}$$
(16)

where R and \(R'\) denote the input image and the stretched image, respectively, (x, y) indicates the position of a pixel, and \(R_{max}\) and \(R_{min}\) are the maximum and minimum values of the image, respectively.
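A minimal sketch of Eq. (16), applied per channel (the flat-channel guard is our addition):

```python
import numpy as np

def stretch(channel):
    """Linear histogram stretch of Eq. (16) to the full [0, 255] range."""
    c = channel.astype(np.float64)
    lo, hi = c.min(), c.max()
    if hi == lo:                      # flat channel: nothing to stretch
        return np.zeros_like(c)
    return (c - lo) / (hi - lo) * 255.0
```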

After HS is performed on all color components to expand the dynamic range, the RGB color space is converted to HSI color space as expressed in (17):

$$\begin{aligned} \left[ H(x,y),S(x,y),I(x,y)\right] =T_{RGB}^{HSI}\left[ R'(x,y),G'(x,y),B'(x,y)\right] \end{aligned}$$
(17)

where \(T_{RGB}^{HSI}\) is the color space transformation. The hue (H), saturation (S), and intensity (I) components are expressed as follows:

$$\begin{aligned} \begin{aligned}&H(x,y) = {\left\{ \begin{array}{ll} \theta &{}\text { if } B' \le G' \\ 360- \theta &{}\text { if } B' > G' \end{array}\right. } \\&\theta = \cos ^{-1}\left( \frac{\frac{1}{2}[(R'-G')+(R'-B')]}{[(R'-G')^2+(R'-B')(G'-B')]^{1/2}} \right) \\ \\&S(x,y) = 1-\frac{3\times {\left[ min \left( R'(x,y),G'(x,y),B'(x,y)\right) \right] }}{R'(x,y)+G'(x,y)+B'(x,y)} \\ \\&I(x,y) = \frac{R'(x,y)+G'(x,y)+B'(x,y)}{3} \end{aligned} \end{aligned}$$
(18)

The intensity (I) component affects the contrast and brightness of the image and thus directly affects the quality of the enhancement. In this paper, the I component is enhanced with the proposed AGC method to obtain effective and natural contrast and brightness adjustment.
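The conversion of Eq. (18) can be sketched as follows (NumPy assumed; the clipping and the small stabilizing constant protect against division by zero and rounding outside the arccos domain):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """RGB -> HSI following Eq. (18); rgb is float in [0, 1], shape (h, w, 3)."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G)**2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    I = (R + G + B) / 3.0
    return H, S, I
```

For a pure red pixel this yields H near 0 degrees, full saturation, and I equal to one third, as expected from (18).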

The gamma correction is defined as follows:

$$\begin{aligned} T(l)=l_{max}\left( \frac{l}{l_{max}}\right) ^\gamma \end{aligned}$$
(19)

where \(l_{max}\) represents the maximum intensity, l denotes the intensity of each pixel of the input image, and \(\gamma \) is the gamma correction parameter. The choice of \(\gamma \) affects the quality of the image: \(\gamma >1\) decreases brightness, whereas \(\gamma <1\) increases it [10, 38]. Thus, the \(\gamma \) value must be chosen effectively for each region. In this paper, an AGC method is applied in which the \(\gamma \) parameter is computed from the weighted cumulative distribution function \((cdf_w(l))\). The AGC method is defined as follows:

$$\begin{aligned} AGC(l)=l_{max}\left( \frac{l}{l_{max}}\right) ^{cdf_w(l)} \end{aligned}$$
(20)

\(cdf_w(l)\) is expressed as follows:

$$\begin{aligned} cdf_w(l)=\sum _{k=0}^{l}\frac{pdf_w(k)}{\sum _{k=0}^{l_{max}}pdf_w(k)} \end{aligned}$$
(21)

where \(pdf_w(l)\) is the weighted probability density function, used to reduce adverse effects when the histogram of the image is modified. \(pdf_w(l)\) is defined as follows:

$$\begin{aligned} pdf_w(l)=pdf_{max}\times \left( \frac{pdf(l)-pdf_{min}}{pdf_{max}-pdf_{min}}\right) ^w \end{aligned}$$
(22)

where \(pdf_{max}\), \(pdf_{min}\), and w are the maximum of the probability density function (pdf), the minimum of the pdf, and the adjustment parameter, respectively.

The proposed method calculates the w value independently for each segmented region. Since the object region is brighter than the background region, the exposure threshold value (exposure) of the object region is chosen as its w value. Since the background region is dark, \((1-exposure)\), computed from the exposure value of the background region, is selected as its w value. Thus, an effective parameter value is provided for the weighted pdf in each region. The pdf is defined as follows:

$$\begin{aligned} pdf(l)=\frac{n_l}{M\times N} \end{aligned}$$
(23)

where \(n_l\) is the number of pixels with intensity l, and M and N are the image dimensions.
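Putting Eqs. (20)–(23) together, the per-region AGC can be sketched as follows; the stabilizing constants and the zero-level guard are our additions, and w is the exposure value for the object region and \(1-exposure\) for the background region:

```python
import numpy as np

def agc(intensity, w, l_max=255):
    """Adaptive gamma correction of Eqs. (20)-(23) on an integer I channel."""
    l = np.clip(intensity.astype(int), 0, l_max)
    pdf = np.bincount(l.ravel(), minlength=l_max + 1) / l.size      # Eq. (23)
    p_min, p_max = pdf.min(), pdf.max()
    pdf_w = p_max * ((pdf - p_min) / (p_max - p_min + 1e-10)) ** w  # Eq. (22)
    cdf_w = np.cumsum(pdf_w) / (pdf_w.sum() + 1e-10)                # Eq. (21)
    # Eq. (20); guard level 0 so 0**0 cannot map black pixels to l_max
    return np.where(l == 0, 0.0, l_max * (l / l_max) ** cdf_w[l])
```

Because \(cdf_w(l)\in [0,1]\), every pixel is mapped to a value at least as bright as its input, which is the desired behaviour for low-light regions.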

After the AGC enhancement is performed on the regions, the regions are converted back to RGB color space. The proposed AGC method achieves an enhancement that reveals details in both the object and background regions of the image. The pseudo code of the AGC method is given in Algorithm 2.

Algorithm 2
figure b

Image Enhancement Method

3.3 Fusion of enhanced regions

The enhanced regions are combined using the fusion mask to obtain the fusion image, as shown in Fig. 4. The enhanced object region is multiplied by the fusion mask, and the enhanced background region is multiplied by the negative of the fusion mask; the two products are then summed to obtain the enhanced image. The fusion of enhanced regions is described as follows:

$$\begin{aligned} I_\text {fusion}=m\times I_\text {object}+m'\times I_\text {background} \end{aligned}$$
(24)

where \(I_\text {fusion}\), \(I_\text {object}\), and \(I_\text {background}\) are the fusion image, the enhanced object region, and the enhanced background region, respectively, and m and \(m'\) are the fusion mask and its negative, respectively.

In order to provide effective contrast and brightness enhancement, HE is applied to the input image using the histogram map of the obtained fusion image; thus, the artifacts of HE are mitigated. The novel fusion-based method obtains a natural and high-quality enhanced image. The block diagram of the proposed ACM and AGC based image fusion method is shown in Fig. 5.
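Eq. (24) and the final histogram-mapping step can be sketched as below. The histogram map is realized here as standard histogram specification (mapping the input image's levels so its histogram follows that of the fusion image); the exact mapping used in the paper may differ:

```python
import numpy as np

def fuse(obj, bg, mask):
    """Region fusion of Eq. (24): m selects the object, 1 - m the background."""
    m = mask.astype(float)
    if obj.ndim == 3:
        m = m[..., None]
    return m * obj + (1.0 - m) * bg

def match_histogram(src, ref, L=256):
    """Map src's gray levels so its histogram follows ref's (specification)."""
    s_cdf = np.cumsum(np.bincount(src.ravel(), minlength=L)) / src.size
    r_cdf = np.cumsum(np.bincount(ref.ravel(), minlength=L)) / ref.size
    lut = np.searchsorted(r_cdf, s_cdf)   # nearest level in the reference CDF
    return np.clip(lut, 0, L - 1)[src]
```

Matching an image to its own histogram leaves it unchanged, which makes the mapping safe when the fusion image is already close to the input.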

Fig. 4
figure 4

Block diagram of image fusion method

Fig. 5
figure 5

Block diagram of the proposed ACM and AGC based image fusion method

Fig. 6
figure 6

Low-light images and histograms of the images used in the experimental study

4 Experimental results

This section describes the subjective and objective evaluation of the performance of the image enhancement methods on low-light images. The images and histograms shown in Fig. 6 are selected from the LIME [11], NPE [37], and MEF [26] datasets. The MSRCR [17], LIME [11], NPE [37], MF [8], camera response model (LECARM) [32], fractional-order fusion model (FFM) [7], joint intrinsic-extrinsic prior model (JIEP) [3], and joint enhancement and denoising (JED) [31] methods are compared with the proposed method. All experiments are performed using MATLAB R2020b on a personal computer.

Figures 7, 8, 9 and 10 show the visual comparison results and histograms of the Rock, Cottage, Birds, and Street images, respectively. It can be seen from the figures that the MSRCR method increases the brightness excessively. Because the MSRCR method evaluates only the reflection component of the image, it produces poor color fidelity and low contrast. The histograms of the MSRCR method show a high frequency distribution. The LIME method enhances the contrast more effectively; however, it occasionally causes brightness distortion and halos in the image. The histograms of the LIME method show the lowest frequency distribution.

Fig. 7
figure 7

Enhancement results and histograms of the Rock image (a) MSRCR [17], (b) LIME [11], (c) NPE [37], (d) MF [8], (e) LECARM [32], (f) FFM [7], (g) JIEP [3], (h) JED [31], and (i) proposed method

Fig. 8
figure 8

Enhancement results and histograms of the Cottage image (a) MSRCR [17], (b) LIME [11], (c) NPE [37], (d) MF [8], (e) LECARM [32], (f) FFM [7], (g) JIEP [3], (h) JED [31], and (i) proposed method

Fig. 9
figure 9

Enhancement results and histograms of the Birds image (a) MSRCR [17], (b) LIME [11], (c) NPE [37], (d) MF [8], (e) LECARM [32], (f) FFM [7], (g) JIEP [3], (h) JED [31], and (i) proposed method

Fig. 10
figure 10

Enhancement results and histograms of the Street image (a) MSRCR [17], (b) LIME [11], (c) NPE [37], (d) MF [8], (e) LECARM [32], (f) FFM [7], (g) JIEP [3], (h) JED [31], and (i) proposed method

The NPE, MF, and JIEP methods exhibit weaker contrast and brightness enhancement. The NPE method shows lower saturation and image quality. The MF method fails to enhance the brightness of dark details and thus creates artifacts in the image. The histograms of the NPE and MF methods form a dense frequency distribution. The JIEP method reveals the regions of the images but is less effective in contrast enhancement. The FFM method behaves similarly to the MF method; however, it fails to improve locally in images containing multiple textures. The histograms of the FFM and JIEP methods show a similar frequency distribution. The LECARM method provides balanced contrast and brightness adjustment and effectively highlights details in dark areas; its histograms form a scattered frequency distribution. The JED method obtains balanced brightness enhancement; however, it has a smoothing effect and therefore does not preserve details and edge information. The proposed method highlights the regions of the image and obtains natural contrast and brightness enhancement. The histograms of the JED and proposed methods show a similar frequency distribution, although those of the proposed method are sparser.

In order to evaluate the performance of the proposed method, we use the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [27], Lightness Order Error (LOE) [37], and Gradient Magnitude Similarity Deviation (GMSD) [8] metrics for quantitative measurement. BRISQUE is a spatial quality assessment metric for enhanced images without a reference; a smaller BRISQUE value indicates better image quality. The comparison results are given in Table 1.

According to the BRISQUE metric, the MF method performs well on the Rock, Cottage, and River images, while the NPE method performs well on the Street image. The LECARM method shows the best performance for the Tower image. The proposed method obtains the best performance on the Birds, Room, and Road images and gives the best average.

The LOE assesses the change in intensity values between the input and enhanced images by comparing the lightness order of pixel pairs. A lower LOE value indicates that the enhancement method produces less distortion and a more natural image. The LOE results are given in Table 2.

The proposed fusion-based method outperforms the others on all images except the Rock image for the LOE metric and obtains the best average among the compared methods. The JIEP method shows the best performance on the Rock image.

The GMSD measures the gradient distortions between the input and enhanced images in terms of image quality. A low GMSD value indicates that the enhanced image preserves its naturalness while improving in quality. The GMSD results are given in Table 3.

The JIEP method performs well on the Rock and Room images according to the GMSD metric. The JED method performs well on the Birds image, and the FFM method on the River image. The proposed method achieves good performance on the Cottage, Road, and Tower images.

Table 1 Results of BRISQUE for the low-light images
Table 2 Results of LOE for the low-light images
Table 3 Results of GMSD for the low-light images

5 Conclusions

In this paper, a novel fusion-based image enhancement method, which combines CV-LGDF segmentation and AGC enhancement, is proposed for low-light images. In the fusion method, the image is separated into object and background regions with the fusion mask produced by the CV-LGDF method. After HS is applied to each region, the regions are converted to HSI color space. Each region is enhanced with the proposed AGC method using its own exposure threshold, and the enhanced regions are merged with the fusion mask. This yields an enhanced image in which details are highlighted. The obtained image is transformed back into RGB color space, and HE is then performed on the input image with the histogram map of the fusion image. Because the image is divided into regions and the enhancement is applied according to the exposure threshold calculated for each region, over- or under-enhancement of the regions is greatly limited. The performance of the proposed fusion-based method is compared with state-of-the-art methods. Experimental results illustrate that the proposed method not only obtains natural contrast and brightness enhancement but also recovers details well in low-light images. As future work, the image could be segmented into more than two regions and the proposed method applied to each region; the method could also be performed in different color spaces.