1 Introduction

Images taken under low-light environment tend to exhibit a lack of contrast and poor visibility. The poor quality of such images adversely affects the efficiency of various machine vision and multimedia applications like object tracking [1] and detection [2]. As these algorithms are trained primarily on high-quality images, they yield poor results in low-light conditions. To improve the performance of these applications, and also for aesthetic purposes, various low-light image enhancement methodologies have been proposed in recent decades.

1.1 Literature Review

Among the most intuitive approaches for low-light image enhancement and contrast restoration are contrast enhancement-based techniques. Histogram equalization (HE) [3] redistributes the image intensities approximately uniformly over the range [0, 1]. However, it also amplifies any noise present in the scene and gives rise to checkerboard effects. Furthermore, it fails to consider the statistical distribution of the image and, hence, leads to visible artifacts. Adaptive histogram equalization (AHE) [4], on the other hand, is computationally expensive. Celik et al. [5] proposed a contextual and variational contrast enhancement method (CVC) that considers the inter-pixel relationships and their contextual information to smooth the target histogram. Lee et al. [6] used a layered difference representation (LDR) of the 2D histogram for contrast enhancement, amplifying the gray-level differences between neighboring pixels. Direct amplification of a low-light image is another conventional approach to restore color contrast and visibility. However, it can lead to over-enhancement and saturation in brightly lit areas, and it also gives rise to halos. To overcome this, Tang et al. [7] applied a dehazing-type algorithm to the inverted low-light image to prevent regions with strong light from being over-enhanced. Dong et al. [8] also performed dehazing on inverted low-light images, obtaining enhanced results by manipulating their illumination maps. Guo et al. introduced a restoration approach for low-light images (LIME) [9] that imposes a structure prior on the illumination map. Li et al. [10] performed super-pixel segmentation and used various statistical measures to estimate the noise level of each super-pixel, adopting an adaptive dehazing approach to avoid over-enhancement.

A benchmark model that has inspired several enhancement techniques for low-light images is the Retinex theory of Land et al. [11]. The theory is based on human perception and assumes that an image is composed of two factors, namely reflectance and illumination. Initial attempts in this direction include single-scale [12] and multi-scale [13] Retinex-based approaches; these, however, yield unnatural and over-enhanced results. Shin et al. [14] introduced an algorithm to restore naturalness (ENR) that creates a mapping curve to adjust contrast and suppress edge artifacts. Fu et al. [15] used morphological operations to separate an image into its corresponding illumination and reflectance maps and applied a fusion strategy to combine the advantages of techniques such as histogram equalization [3] and the sigmoid function. Wang et al. [16] enhanced non-uniformly illuminated images by balancing naturalness and detail.

Although significant advancements have been made in enhancing low-light images, some of these approaches are computationally costly and, hence, infeasible for real-time applications. Furthermore, although Retinex theory preserves the edge information and the naturalness of an image, existing algorithms still fail to restore an image completely without some loss of contrast or color information.

1.2 Contributions

While several methodologies have been developed to restore low-light images, they often lose color information or fail to deliver significant enhancement. This paper, in contrast, proposes a simple yet efficient approach to effectively enhance and restore a given low-light image. The dehazing-type approach used in the paper increases the overall pixel intensity of the image, and gamma correction further widens its dynamic range, thereby improving contrast and overall visual appeal. The results demonstrate the efficiency of the proposed method compared with several benchmark techniques in both qualitative and quantitative terms.

The remainder of the paper is organized as follows: Sect. 2 introduces the proposed algorithm and describes it in detail. Section 3 presents the qualitative and quantitative results and their comparison with several benchmarks. Lastly, Sect. 4 highlights some important discussions followed by the concluding remarks.

2 Proposed Methodology

This section explains the proposed methodology. Figure 1 outlines the steps of the proposed algorithm, and the following subsections discuss its basic building blocks.

Fig. 1 Block diagram of the proposed methodology

2.1 Retinex Theory

The basic assumption of the Retinex theory [11] is that images can be separated into their constituent illumination (lightness) and reflectance maps. Mathematically, an input image \(I_{\text {input}}\) can be represented by its illumination \(L_{\text {illum}}\) and reflectance \(R_{\text {ref}}\) components as

$$\begin{aligned} I_{\text {input}}^c (a, b) = R_{\text {ref}}^c (a, b) \times L_{\text {illum}} (a, b) \end{aligned}$$
(1)

where \((a, b)\) is the pixel location and c indexes the blue (B), green (G), and red (R) color channels. Note that the reflectance differs across color channels, as it represents the properties of each color, whereas the illumination map is the same for all channels, since the same amount of light reaches each of them. From (1), the reflectance is defined as

$$\begin{aligned} R_{\text {ref}}^c (a, b) = \dfrac{I_{\text {input}}^c (a, b)}{L_{\text {illum}} (a, b)}. \end{aligned}$$
(2)
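As an illustration, a minimal NumPy sketch of the decomposition in (2) is given below; the small constant `eps` is an assumption added here to avoid division by zero at completely dark pixels and is not part of (2).

```python
import numpy as np

def reflectance(image, illumination, eps=1e-6):
    """Per-channel reflectance via (2).

    image: H x W x 3 float array (B, G, R); illumination: H x W float array.
    eps is an assumed safeguard against division by zero at dark pixels.
    """
    return image / (illumination[..., np.newaxis] + eps)
```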

2.2 Illumination Map Estimation

The illumination map can be approximated as the maximum intensity of the image at each pixel location. For an input image \(I_{\text {input}}\), the initial illumination map \(L_{\text {illum}}\) is estimated as

$$\begin{aligned} L_{\text {illum}} (a, b) = \max _{c \in \{B,G,R\}}{I_{\text {input}}^c (a, b)} \end{aligned}$$
(3)

The inverted illumination \(L_{\text {inv}}\) can then be calculated as

$$\begin{aligned} L_{\text {inv}} (a, b) = 255 - L_{\text {illum}} (a, b) \end{aligned}$$
(4)

Herein, it has been assumed that the range of the input image is [0, 255]. Figures 2a–c show a low-light image, its illumination map using (3), and the inverted illumination map using (4), respectively. It can be observed that the high intensity of the inverted illumination map gives it a hazy appearance. Accordingly, suitable dehazing techniques can be applied to the inverted illumination map to enhance it.
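A minimal sketch of (3) and (4), assuming an 8-bit image stored as an H x W x 3 NumPy array:

```python
import numpy as np

def estimate_illumination(image):
    """Initial illumination map as the per-pixel channel maximum, (3)."""
    return image.astype(np.float64).max(axis=2)

def invert_illumination(l_illum):
    """Inverted illumination map, (4), assuming a [0, 255] range."""
    return 255.0 - l_illum
```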

Fig. 2 Image enhancement using proposed algorithm: a input image, b corresponding illumination map approximated by (3), c inverted illumination map calculated by (4), d recovered illumination map using (9), e illumination map further enhanced with gamma correction by (10), and f final enhanced output image by (11)

2.3 Dehazing of Inverted Illumination Map

As the inverted illumination map has a hazy appearance, a dehazing-like approach can be applied to it. A pioneering dehazing technique is based on the conventional atmospheric scattering model [17], given by

$$\begin{aligned} I^c (a, b) = t_{\text {trans}}(a, b) \times J^c (a, b) + A^c (1 - t_{\text {trans}}(a, b)) \end{aligned}$$
(5)

where \(J(a, b)\) is the original haze-free scene, \(I(a, b)\) is the observed image, \(t_{\text {trans}}(a, b)\) is the transmission map describing the fraction of light that reaches the observer, A is the global atmospheric light, a and b are the pixel coordinates, and c indexes the color channels B, G, and R. The global atmospheric light corresponds to the pixels with maximum light in an image. To calculate the atmospheric light, the brightest 0.1% of pixels in the dark channel [18] are considered. For an inverted illumination map \(L_{\text {inv}}\), the corresponding dark channel [18] \(L{_{\text {inv}}^{\text {dark}}}\) is given by

$$\begin{aligned} L{_{\text {inv}}^{\text {dark}}} (a, b) = \min _{(i, j) \in \Omega (a, b)}\left( L_{\text {inv}}(i, j) \right) \end{aligned}$$
(6)

where \(\Omega (a, b)\) represents a patch centered at pixel (a, b). The patch size considered here is 3 \(\times \) 3.

The locations of the brightest 0.1% of pixel intensities in the dark channel are mapped to the corresponding pixel locations in the inverted illumination map; the highest intensity among these locations is taken as the estimated atmospheric light.
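A sketch of (6) and this 0.1% atmospheric light rule is given below; the use of `scipy.ndimage.minimum_filter` for the 3 x 3 patch minimum and the `argpartition`-based selection are implementation choices assumed here, not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(l_inv, patch_size=3):
    """Dark channel of the single-channel inverted illumination map, (6)."""
    return minimum_filter(l_inv, size=patch_size)

def atmospheric_light(l_inv, l_dark, top_fraction=0.001):
    """Estimate A from the brightest 0.1% pixels of the dark channel."""
    n = max(1, int(top_fraction * l_dark.size))
    # Indices of the n brightest dark-channel pixels ...
    idx = np.argpartition(l_dark.ravel(), -n)[-n:]
    # ... and the highest inverted-illumination intensity among them.
    return float(l_inv.ravel()[idx].max())
```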

The transmission map can then be approximated using atmospheric light as

$$\begin{aligned} t_{\text {trans}}(a, b) = 1 - \omega \left( \dfrac{L{_{\text {inv}}^{\text {dark}}} (a, b)}{A} \right) \end{aligned}$$
(7)

where \(\omega \) is a constant parameter used to prevent an image from appearing unnatural. Based on [18], \(\omega \) is set to 0.95.

Finally, the dehazed inverted illumination map \(L_{\text {inv}}^{\text {dehazed}}\) is calculated as

$$\begin{aligned} L_{\text {inv}}^{\text {dehazed}}(a, b) = A + \dfrac{\left( L_{\text {inv}}(a, b) - A \right) }{\max (t_{\text {trans}}(a, b), t_0)}, ~~~t_0 = 0.1 \end{aligned}$$
(8)

where \(t_0\) is the minimum transmission value that is set to avoid division by 0.
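Equations (7) and (8) translate directly into code; `omega` and `t0` take the values 0.95 and 0.1 stated above.

```python
import numpy as np

def transmission_map(l_dark, A, omega=0.95):
    """Transmission estimate, (7)."""
    return 1.0 - omega * (l_dark / A)

def dehaze(l_inv, t_trans, A, t0=0.1):
    """Dehazed inverted illumination map, (8); t0 bounds the denominator."""
    return A + (l_inv - A) / np.maximum(t_trans, t0)
```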

The enhanced illumination map \(L_{\text {enh}}\) of the input low-light image is then obtained as

$$\begin{aligned} L_{\text {enh}}(a, b) = 255 - L_{\text {inv}}^{\text {dehazed}}(a, b) \end{aligned}$$
(9)

This illumination map can further be enhanced to improve the contrast by applying gamma correction as

$$\begin{aligned} L_{\text {enh}}^{\text {gamma}}(a, b) = \left( L_{\text {enh}}(a, b) \right) ^{\gamma } \end{aligned}$$
(10)

where \(\gamma \) = 0.5 is the gamma correction parameter.

Finally, the required enhanced output image \(I_{\text {enh}}\) is calculated as

$$\begin{aligned} I_{\text {enh}}^c(a, b) = R_{\text {ref}}^c(a, b) \times L_{\text {enh}}^{\text {gamma}}(a, b) \end{aligned}$$
(11)

where \(R_{\text {ref}}^c(a, b)\) is the reflectance of the image calculated using (2).
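A sketch combining (9)-(11) follows. Note that (10) is applied here to the map normalized to [0, 1] and rescaled afterwards; this normalization is an assumption made so that the output stays in the 8-bit range, as the paper does not state it explicitly.

```python
import numpy as np

def recompose(r_ref, l_inv_dehazed, gamma=0.5):
    """Re-invert (9), gamma-correct (10), and recombine with reflectance (11)."""
    l_enh = 255.0 - l_inv_dehazed
    # Assumed normalization: gamma on [0, 1], then rescale back to [0, 255].
    l_gamma = 255.0 * (np.clip(l_enh, 0.0, 255.0) / 255.0) ** gamma
    out = r_ref * l_gamma[..., np.newaxis]
    return np.clip(out, 0.0, 255.0).astype(np.uint8)
```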

3 Experimental Results

The proposed algorithm has been compared with several state-of-the-art methods, namely HE [3], AHE [4], CVC [5], LDR [6], ENR [14], and LIME [9]. The results suggest the superiority of the proposed methodology over the considered benchmarks, both quantitatively and qualitatively.

Fig. 3 Comparison of proposed methodology with various benchmarks on Waving Girl image: a input low-light image, outputs of b HE [3], c AHE [4], d CVC [5], e LDR [6], f ENR [14], g LIME [9], and h proposed method

Fig. 4 Comparison of proposed methodology with various benchmarks on Christmas Rider image: a input low-light image, outputs of b HE [3], c AHE [4], d CVC [5], e LDR [6], f ENR [14], g LIME [9], and h proposed method

Fig. 5 Comparison of proposed methodology with various benchmarks on High Chair image: a input low-light image, outputs of b HE [3], c AHE [4], d CVC [5], e LDR [6], f ENR [14], g LIME [9], and h proposed method

3.1 Qualitative Analysis

Figures 3, 4, and 5 present the outputs of the proposed methodology along with several benchmarks. It can be observed that while the result of the proposed approach retains sufficient light and color information, AHE [4], CVC [5], and LDR [6] fail to preserve such color fidelity. Although the result of HE [3] in Fig. 3b has increased overall pixel intensity, the image appears whitewashed. The outputs of CVC [5] and LDR [6] remain dark and fail to enhance the contrast of the image, as can be observed in images (d) and (e) of Figs. 3, 4, and 5. The proposed algorithm performs better than these benchmarks in terms of visual appeal while giving results comparable to ENR [14] and LIME [9].

Fig. 6 Various standard images used for quantitative comparison. From left to right and top to bottom: Images of Baby at Window, Baby on Grass, Christmas Rider, High Chair, Standing Boy, Waving Girl, Dog, and Santa’s Little Helper

Table 1 Performance comparison in terms of LOE (\(\times 10^3\)) [16]

3.2 Quantitative Analysis

The proposed method has been compared with several benchmarks on various low-light images shown in Fig. 6. The performance metric used for comparison is the lightness order error (LOE) [16], mathematically given by

$$\begin{aligned} \text {LOE} = \dfrac{1}{m} \sum _{a = 1}^{m} \sum _{b = 1}^{m} \left( U \left( M_\text {e}(a), M_\text {e}(b) \right) \oplus U \left( M_\text {r} (a), M_\text {r} (b) \right) \right) \end{aligned}$$
(12)

where m is the total number of pixels, \(U(\alpha , \beta )\) = 1 if \(\alpha \) > \(\beta \) and 0 if \(\alpha \) \(\le \) \(\beta \), \(\oplus \) represents the exclusive-OR operation, and \(M_\text {r}(i)\) and \(M_\text {e}(i)\) are the highest pixel values among the blue, green, and red color channels of the reference and enhanced images, respectively, at pixel location i. A lower LOE value indicates better preservation of the natural lightness order in the images.
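A direct evaluation of (12) is quadratic in the number of pixels, so the sketch below computes it on a subsampled grid; the `stride` parameter is an assumption for tractability and is not part of the metric's definition.

```python
import numpy as np

def loe(reference, enhanced, stride=8):
    """Lightness order error, (12), on a subsampled pixel grid."""
    # Lightness: per-pixel maximum over the B, G, R channels.
    m_r = reference.max(axis=2).astype(np.float64)[::stride, ::stride].ravel()
    m_e = enhanced.max(axis=2).astype(np.float64)[::stride, ::stride].ravel()
    m = m_r.size
    error = 0.0
    for a in range(m):
        u_r = m_r[a] > m_r  # relative lightness order in the reference image
        u_e = m_e[a] > m_e  # relative lightness order in the enhanced image
        error += np.count_nonzero(u_r ^ u_e)
    return error / m
```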

It can be inferred from Table 1 that the proposed method performs better than nearly all the considered benchmarks. Although ENR [14] outperforms the proposed algorithm on two images (Baby at Window and Santa’s Little Helper), the proposed algorithm gives superior results in most cases. For the High Chair image, the LOE value of the proposed method is less than one-tenth of that of ENR [14], and it is lower by a factor of nearly two for the Baby on Grass and Christmas Rider images. Overall, the proposed method outperforms nearly all the considered benchmarks in both qualitative and quantitative terms.

4 Conclusions

This paper proposed a restoration approach for low-light images based on enhancing their illumination maps. The results show the superiority of the proposed methodology over several well-known benchmarks. It can be concluded that the proposed method gives visually pleasing results with enhanced contrast and color fidelity, and its qualitative and quantitative results are superior to the majority of the benchmark methods considered. Future work may include estimating the enhanced color image directly from the input low-light image.