Abstract
Imperfect lighting conditions usually lower the visual quality of an image by obscuring image details and reducing image contrast. Traditional low-light enhancement models based on a single input are often limited in avoiding over-enhancement or under-enhancement. Models based on fusing multiple input sources usually perform well in relieving this issue, as they can harmonize the complementary visual appearances of the same scene provided by different sources. Nevertheless, these models still have difficulty when only one input is at hand, which happens in many practical situations. In this paper, we propose a low-light enhancement model that artificially enriches the input sources and then seamlessly fuses them. Specifically, given an input image, we first generate multiple enhanced images based on a lightness-aware camera response model. These images are then fused at mid-level based on a patch-based image decomposition model. To validate our model, we conduct qualitative and quantitative comparisons with several state-of-the-art single-source and multi-source models on a collection of real-world images. Experimental results show that our model better improves image quality in terms of visual naturalness and aesthetics.
1 Introduction
Taking and sharing photographs has become a common part of daily life in recent years. Various interesting image-based applications have emerged, such as image retrieval [1], classification [2, 3], clustering [4, 5], stylization [6], and segmentation [7]. Despite the advances in photographing devices, low-quality images can still be produced for various reasons. On the one hand, many people are not familiar with basic shooting skills, e.g., the rule of thirds, exposure settings, and so on. In this context, intelligent systems have been designed to help people choose a proper content composition [8, 9]. On the other hand, the visual appearance of a photograph can be affected by imperfect imaging conditions, such as low lightness, foggy weather, and motion blur. Therefore, enhancement models at the pixel level are also highly desired. Typical applications include detail enhancement [10], color transfer [11], low-light enhancement [12], dehazing [13], and motion deblurring [14], to name but a few.
In this paper, we focus on the issue of low lightness, which is often encountered in photograph shooting. We generally divide poor lightness into two types. The first is globally low lightness, where only a few weak light sources exist in the imaging scene, e.g., a nighttime outdoor place and a dimly lit indoor place in Fig. 1. The second is unbalanced lightness, where a good light source exists but fails to illuminate the whole scene well, e.g., backlight and sidelight in Fig. 1. To address the first type, many enhancing models have been proposed, such as histogram-based models [15,16,17,18,19], which aim at stretching the histogram to a larger range of intensity values. However, they are limited under the second type of unbalanced lightness, as they tend to over-enhance the originally well-illuminated regions. Retinex-based models [20,21,22] can relieve this issue to some extent, but they are still less effective in tackling complex lightness conditions. Commonly, all the above models try to improve the contrast by manipulating only one single input.
Advances in photographing devices benefit the task of low-light enhancement by providing more input data. Specifically, the same scene can be recorded in multiple images with different exposures, which effectively expands the intensity range, especially for the originally dark regions. The multiple sources pave the way for the development of fusion-based enhancing models [23,24,25]. However, these models depend on the enriched data sources, which are still unavailable in many situations. For example, in most cases we get access to only one single low-light image from the Internet. If we still want to use the road map of multi-source fusion, we have to artificially generate a few intermediate enhanced images as the inputs in advance.
Following this road map, with only one single low-light image, we propose a low-light enhancement model that generates and then fuses multiple sources. With a lightness-aware camera response model, multiple initial enhancements simulating differently exposed images are first produced. They are then fused at mid-level based on a patch-based image representation, in which image patches from each source are decomposed into several signals that are fused, respectively. The final enhancement is obtained by recomposing the fused signals. The highlights of our research are twofold. First, we extend the ability of the camera response function in terms of adapting to different lightness configurations. Second, the mid-level fusion shows more competitive performance than the current state-of-the-art enhancing models, including those based on a single source and on multiple sources.
The rest of this paper is organized as follows. Section 2 introduces the related works. Section 3 presents the details of the proposed model. Qualitative and quantitative comparisons are reported in Sect. 4. We finally conclude our research in Sect. 5.
2 Related works
In this section, we briefly review the related works on low-light image enhancement. We divide them into single-source models and multi-source models.
For the single-source group, a representative family of enhancing models is based on the manipulation of the image histogram [15,16,17,18,19]. Based on the observation that the histogram of a low-light image is heavily concentrated in the low-intensity region, the histogram-based models mainly target equalizing the intensity distribution across the whole intensity range [15,16,17,18] or reshaping the histogram into a desired distribution [19]. Since a histogram discards most spatial information of an image, these models are often limited in tackling local lightness variation. As a result, they tend to produce over-enhanced or under-enhanced effects.
Differently, Retinex-based models [20, 21, 26, 27] assume that an image is composed of an illumination layer and a reflectance layer. The former layer represents the illumination of the imaging scene, and the latter layer represents the inherent characteristics of object surface. A straightforward way for the Retinex-based models is to change the illumination layer, keep the reflectance layer unchanged, and recompose the two layers back again [26, 27]. The key component of Retinex-based enhancement models lies in the successful illumination–reflectance decomposition.
Since the image decomposition is ill-posed in nature, it often needs an alternating optimization process to approximate the two layers, which can be unstable and time-consuming. Enhancement models based on the simplified Retinex model are therefore proposed [22, 28]. These models still assume that an image is the combination of the two layers. The difference is that they roughly estimate the illumination layer with a simple MaxRGB technique and refine the MaxRGB image with an edge-preserving filter, which plays a vital role in these simplified Retinex models. For example, choosing a different filtering model can lead to slightly different enhancing effects, especially for image regions with complex patterns [28].
There is another interesting assumption that the inverse of a low-light image resembles a hazy image. By applying dehazing techniques, the darkness can be eliminated as haze in the reverse low-light image [29,30,31]. However, methods based on this assumption tend to generate unrealistic effect on salient object boundaries.
The single-source enhancement models are usually controlled by one or two parameters, e.g., the (simplified) Retinex-based models, or are completely parameter-free, e.g., the histogram-based models. In either case, they act as a uniform enhancing strength imposed on the whole image and are thus less spatially aware of image contents with different illumination conditions. In this context, they tend to produce improper local enhancing effects.
For the multi-source group, the above issue can be largely relieved by jointly considering multiple sources as the inputs of the enhancement model, which are potentially complementary to each other. With advanced imaging hardware, multiple source images of the same scene can be collected almost simultaneously with different exposures. The key step left for the enhancement task is the seamless fusion of these input images, which adaptively combines the different appearances of the same image region and thereby avoids over-/under-enhancement. Bertalmio and Levine [24] propose to encode the gradient and color information from a short-exposed image and a long-exposed image into an image functional and perform variational minimization to obtain the final fusion. Kou et al. [25] use a multi-resolution technique to achieve the seamless fusion. Additionally, they improve the fused result by further enhancing image details: an improved image filter extracts high-frequency details from the multiple inputs and adds them into the fused image. Ma et al. [23] propose a novel patch decomposition model that separates an image patch into three kinds of signals. The decomposed signals from each source are then linearly or nonlinearly fused, and the resultant enhancement is obtained by recomposing the signals.
For the situation when only one image is at hand, the technical road map of multi-source fusion needs an extension, and the stage of source generation becomes indispensable. In [32], along with the original input, Fu et al. generate two intermediate enhanced images by applying two intensity transform techniques. Hao et al. [33] produce an intermediate enhancement based on the simplified Retinex model and fuse it with the original image by designing a content-aware weight map. Ying et al. [34] propose a novel bio-inspired enhancement model, in which the source is generated by a simulated camera response model [35]. Different from [34], the model in this paper avoids the heuristic judgment on whether an image pixel is underexposed and is thus more flexible in generating more intermediate enhancing results.
Of note, there has been learning-based research for the low-light enhancement task [36, 37], which demonstrates very promising performance. For these methods, a collection of sufficient and reliable image pairs (normal lightness vs. low lightness) is vital to the training procedure.
3 Proposed method
3.1 Overall framework
Suppose the input \({\mathbf{I}}_{0} \in {\mathbb{R}}^{W \times H \times 3}\) is a color image represented in RGB space. The technical road map of our model is shown in Fig. 2. The model contains two main parts, i.e., lightness-aware source generation (described in Sect. 3.2) and multi-source fusion (described in Sect. 3.3).
Of note, the data flow in the first part is explained as follows. We first convert the RGB input into the HSV space and then only send the V channel into the generator. The reason is that the generator only aims at simulating different illumination conditions. After that, we replace the original V channel with the generated V channel and keep the other two channels unchanged. Then, for all the sources, we convert the HSV image back into the RGB space, which is used in the following stage of image decomposition.
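The data flow above can be sketched in NumPy. One convenient implementation shortcut (our choice, not prescribed by the paper) exploits the fact that in HSV the hue and saturation are invariant to a uniform per-pixel scaling of the RGB channels, while V = max(R, G, B); replacing the V channel therefore reduces to a pixel-wise rescaling of RGB:

```python
import numpy as np

def replace_v_channel(rgb, v_new):
    """Apply an enhanced V channel to an RGB image while keeping hue and
    saturation fixed.  In HSV, V = max(R, G, B), and scaling all three RGB
    channels of a pixel by the same factor changes V but leaves H and S
    untouched, so replacing V amounts to a per-pixel rescaling of RGB.

    rgb   : float array in [0, 1] with shape (H, W, 3)
    v_new : enhanced V channel in [0, 1] with shape (H, W)
    """
    v_old = rgb.max(axis=2)                  # original V channel
    ratio = v_new / np.maximum(v_old, 1e-6)  # guard against black pixels
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```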
3.2 Generation of fusion source
We generate the fusion sources based on the camera response model, which is jointly described by the camera response function (CRF) and the brightness transform function (BTF). The former is determined only by the camera itself, while the latter is determined by both the camera and the exposure ratio \(k\). The general form of the CRF can be represented as:

\({\mathbf{V}} = f\left( {\mathbf{E}} \right)\)    (1)
Here, \({\mathbf{V}} \in {\mathbb{R}}^{W \times H}\) is the V channel in the HSV space (observed pixel intensity of an image). \({\mathbf{E}}\) is the ideal scene irradiance. By choosing exposure ratios, different observed images can be obtained, e.g., \({\mathbf{V}}_{0} = f\left( {\mathbf{E}} \right)\) (a trivial case as \(k = 1\)), \({\mathbf{V}}_{1} = f\left( {k{\mathbf{E}}} \right)\).
On the other hand, the mapping between \({\mathbf{V}}_{0}\) and \({\mathbf{V}}_{1}\) can also be described by the brightness transform function, which represents the mapping between two observed images of the same scene with different exposures:

\({\mathbf{V}}_{1} = g\left( {{\mathbf{V}}_{0} ,k} \right)\)    (2)
Based on Eqs. 1 and 2, we have:

\(f\left( {k{\mathbf{E}}} \right) = g\left( {f\left( {\mathbf{E}} \right),k} \right)\)    (3)
According to [34, 35], we specify the BTF in a simple form: \({\mathbf{V}}_{1} = g\left( {{\mathbf{V}}_{0} ,k} \right) = \beta {\mathbf{V}}_{0}^{\gamma }\). Of note, \(\beta\) and \(\gamma\) are related to both the camera and the exposure ratio. Based on the comparametric equation [38], we have \(f\left( {k{\mathbf{E}}} \right) = \beta f\left( {\mathbf{E}} \right)^{\gamma }\). Except for the trivial case of \(\gamma = 1\), we can obtain the closed form of the CRF as:

\(f\left( {\mathbf{E}} \right) = e^{b\left( {1 - {\mathbf{E}}^{a} } \right)}\)    (4)
Here, \(a\) and \(b\) are built-in parameters of a camera. They can be empirically set as \(a = -0.3293\), \(b = 1.1258\), which are suitable for most cameras [35]. We can thus obtain the BTF parameters \(\beta\) and \(\gamma\):

\(\beta = e^{b\left( {1 - k^{a} } \right)} ,\quad \gamma = k^{a}\)    (5)
Then, the brightness transform function can be further parameterized by the exposure ratio \(k\):

\({\mathbf{V}}_{1} = g\left( {{\mathbf{V}}_{0} ,k} \right) = e^{b\left( {1 - k^{a} } \right)} {\mathbf{V}}_{0}^{k^{a} }\)    (6)
In our application, we can take \({\mathbf{V}}_{0}\) as the original image at hand and \({\mathbf{V}}_{1}\) as a generated source.
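As a concrete sketch, the parameterized BTF above can be implemented in a few lines of NumPy. The parameter values are the ones quoted from [35]; the clipping to [0, 1] is our own addition for display-range safety:

```python
import numpy as np

# Built-in camera parameters suggested in Sect. 3.2 (from [35]).
A_CAM, B_CAM = -0.3293, 1.1258

def btf(v0, k):
    """Brightness transform function g(V0, k) = beta * V0**gamma, with
    beta = exp(b * (1 - k**a)) and gamma = k**a.  It maps the V channel of
    the input image to a simulated exposure with ratio k (k > 1 brightens)."""
    beta = np.exp(B_CAM * (1.0 - k ** A_CAM))
    gamma = k ** A_CAM
    return np.clip(beta * np.power(v0, gamma), 0.0, 1.0)
```

Feeding the original V channel through `btf` with several exposure ratios yields the generated sources \({\mathbf{V}}_{1}\).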
Then, we estimate the exposure ratio \(k\) as follows. We first remove the small-scale details from \({\mathbf{V}}_{0}\) with the fast guided filter [39], yielding a smoothed channel \({\tilde{\mathbf{V}}}_{0}\). Then, we extract the low-light pixels determined by a threshold \(\eta\):

\({\mathbf{M}} = \left\{ {{\mathbf{V}}_{0} \left( x \right) \, \left| \, {\tilde{\mathbf{V}}}_{0} \left( x \right) < \eta } \right.} \right\}\)    (7)
In other words, \({\mathbf{M}}\) approximately indicates the set of low-light pixels in the original image. The exposure ratio estimation can then be formulated as an optimization problem:

\(\tilde{k} = \mathop {\arg \max }\limits_{k} {\mathcal{H}}\left( {g\left( {{\mathbf{M}},k} \right)} \right)\)    (8)
where \({\mathcal{H}}\left( \cdot \right)\) is the entropy, which can be estimated from the image histogram of the enhanced \({\mathbf{M}}\).
From the above modeling, the determination of low-light regions has a large impact on the optimal exposure ratio. As exemplified in Fig. 1, low-light images cover various specific conditions. Therefore, a single ad hoc setting of the threshold \(\eta\) (e.g., 0.5 in [34]) is too limited to describe the complex lightness conditions of an arbitrary image. We instead use a set of thresholds \(\left\{ {\eta_{1} , \eta_{2} , \ldots ,\eta_{N - 1} } \right\}\) to obtain different \(\tilde{k}\) values, which cater to the generation of multiple sources for fusion.
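A minimal sketch of this estimation, assuming a simple grid search over \(k\) and omitting the guided-filter smoothing step for brevity (both simplifications are ours):

```python
import numpy as np

A_CAM, B_CAM = -0.3293, 1.1258  # camera parameters from Sect. 3.2

def btf(v0, k):
    """Brightness transform function (see Sect. 3.2)."""
    return np.clip(np.exp(B_CAM * (1.0 - k ** A_CAM)) * v0 ** (k ** A_CAM),
                   0.0, 1.0)

def entropy(v, bins=256):
    """Shannon entropy of the intensity histogram of v."""
    hist, _ = np.histogram(v, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def estimate_k(v0, eta, k_grid=np.linspace(1.0, 16.0, 61)):
    """Grid-search the exposure ratio that maximizes the entropy of the
    enhanced low-light pixels.  The low-light set M is taken directly from
    a threshold on V0 here; the paper additionally smooths V0 with the
    fast guided filter [39] before thresholding."""
    m = v0[v0 < eta]          # low-light pixel set M
    if m.size == 0:           # nothing dark enough under this threshold
        return 1.0
    scores = [entropy(btf(m, k)) for k in k_grid]
    return float(k_grid[int(np.argmax(scores))])

# One optimal ratio per threshold in {eta_1, ..., eta_{N-1}} yields the
# generated sources; together with the original image, N sources in total.
```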
3.3 Patch-based fusion
Without loss of generality, we obtain \(N\) sources \(\left\{ {{\mathbf{I}}_{0} ,{\mathbf{I}}_{1} , \ldots ,{\mathbf{I}}_{N - 1} } \right\}\) in total for the fusion process. For an image patch \({\mathbf{P}}\) of each source, we adopt the patch-based image decomposition [23]:

\({\mathbf{P}} = \left\| {{\mathbf{P}} - \mu_{{\mathbf{P}}} } \right\| \cdot \frac{{{\mathbf{P}} - \mu_{{\mathbf{P}}} }}{{\left\| {{\mathbf{P}} - \mu_{{\mathbf{P}}} } \right\|}} + \mu_{{\mathbf{P}}} = c \cdot {\mathbf{s}} + l\)    (9)
In Eq. 9, for each \(M \times M\) square patch, we stack its RGB channels into a \(3M^{2}\)-length column vector \({\mathbf{P}}\) with mean intensity \(\mu_{{\mathbf{P}}}\). Here, \(\left\| \cdot \right\|\) is the L2 norm, \(c\) is the patch scale, \({\mathbf{s}}\) is the unit-norm patch structure, and \(l\) is the patch mean intensity. These decomposed elements can be seen as a mid-level representation of an image. In the following, the three components are fused separately.
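The decomposition in Eq. 9 is straightforward to implement; a sketch follows (the numerical guard against constant patches is our addition):

```python
import numpy as np

def decompose_patch(p):
    """Structural patch decomposition of Ma et al. [23] (Eq. 9):
    P = c * s + l, where l is the mean intensity of the stacked patch
    vector, c is the L2 norm of the mean-removed patch (the patch scale),
    and s is the unit-norm residual (the patch structure)."""
    l = p.mean()
    residual = p - l
    c = float(np.linalg.norm(residual))
    s = residual / max(c, 1e-12)   # guard: a constant patch has zero scale
    return c, s, l

def recompose_patch(c, s, l):
    """Inverse mapping of Eq. 9."""
    return c * s + l
```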
First, a nonlinear max-fusion is applied to the patch scales \(\left\{ {c_{n} } \right\}\):

\(\hat{c} = \mathop {\max }\limits_{n} c_{n}\)    (10)
Second, a weighted linear fusion is constructed for the patch structures \(\left\{ {{\mathbf{s}}_{n} } \right\}\):

\({\bar{\mathbf{s}}} = \frac{{\sum\nolimits_{n} {c_{n}^{\rho } {\mathbf{s}}_{n} } }}{{\sum\nolimits_{n} {c_{n}^{\rho } } }}\)    (11)
From Eq. 11, we observe that the fused \({\bar{\mathbf{s}}}\) is jointly determined by the structures \(\left\{ {{\mathbf{s}}_{n} } \right\}\) of the multiple sources. The fusion weights are determined by the exponential of the patch scales \(\left\{ {c_{n} } \right\}\), where \(\rho \ge 0\) is a hyper-parameter. The obtained \({\bar{\mathbf{s}}}\) is further normalized as \({\hat{\mathbf{s}}} = {\bar{\mathbf{s}}}/\left\| {{\bar{\mathbf{s}}}} \right\|\). From Eqs. 10 and 11, we observe that the fusion weighs more on strong patches while still considering the impact of weak patches. Third, we also use a weighted linear fusion for the patch means \(l_{n}\):

\(\hat{l} = \frac{{\sum\nolimits_{n} {L\left( {\mu_{n}^{0} ,l_{n} } \right)l_{n} } }}{{\sum\nolimits_{n} {L\left( {\mu_{n}^{0} ,l_{n} } \right)} }}\)    (12)
In Eq. 12, \(L\left( { \cdot , \cdot } \right)\) describes how well exposed the patch mean \(l_{n}\) and its source \({\mathbf{I}}_{n}\) are:

\(L\left( {\mu_{n}^{0} ,l_{n} } \right) = \exp \left( { - \frac{{\left( {\mu_{n}^{0} - 0.5} \right)^{2} }}{{2\sigma_{g}^{2} }} - \frac{{\left( {l_{n} - 0.5} \right)^{2} }}{{2\sigma_{l}^{2} }}} \right)\)    (13)
where \(\mu_{n}^{0}\) is the global mean of \({\mathbf{I}}_{n}\), \(\sigma_{g}\) and \(\sigma_{l}\) control the spreads of the Gaussian distribution tails.
Finally, we split the stacked \({\hat{\mathbf{s}}}\) back into the RGB channels and reconstruct each patch with the obtained \(\hat{c}\) and \(\hat{l}\) according to Eq. 9:

\({\hat{\mathbf{P}}}^{\phi } = \hat{c} \cdot {\hat{\mathbf{s}}}^{\phi } + \hat{l}\)    (14)
where \(\phi \in \left\{ {{\text{R}},{\text{G}},{\text{B}}} \right\}\) enumerates the three color channels. We use a sliding window with a stride of \(B = \lfloor M/2 \rfloor\) to traverse the image, and the pixels in overlapping regions are averaged. The reconstructed \({\mathbf{I}}_{\text{f}}\) is taken as the final result. In our research, the fusion parameters \(M,\rho ,\sigma_{g} ,\sigma_{l}\) are set as in [23].
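Putting the three fusion rules and the recomposition together, the per-patch fusion can be sketched as follows. The parameter values below are illustrative placeholders only, since the paper adopts the settings of [23]:

```python
import numpy as np

def decompose_patch(p):
    """Eq. 9: split a stacked patch vector into (scale, structure, mean)."""
    l = p.mean()
    residual = p - l
    c = float(np.linalg.norm(residual))
    return c, residual / max(c, 1e-12), l

def fuse_patches(patches, mus, rho=4.0, sigma_g=0.2, sigma_l=0.5):
    """Fuse co-located patches from the N sources in the decomposed domain.

    patches : list of N stacked patch vectors, one per source
    mus     : list of N global mean intensities (mu_n^0 of each source)
    rho, sigma_g, sigma_l : hyper-parameters (placeholder values here)
    """
    cs, ss, ls = zip(*(decompose_patch(p) for p in patches))
    cs = np.asarray(cs)
    ls = np.asarray(ls)

    c_hat = cs.max()                                    # Eq. 10: max-fusion
    w = cs ** rho                                       # Eq. 11: scale weights
    s_bar = sum(wi * si for wi, si in zip(w, ss)) / max(w.sum(), 1e-12)
    s_hat = s_bar / max(np.linalg.norm(s_bar), 1e-12)   # re-normalize

    # Eq. 12: lightness-aware weights favor well-exposed patch means.
    lw = np.exp(-(np.asarray(mus) - 0.5) ** 2 / (2 * sigma_g ** 2)
                - (ls - 0.5) ** 2 / (2 * sigma_l ** 2))
    l_hat = float((lw * ls).sum() / max(lw.sum(), 1e-12))

    return c_hat * s_hat + l_hat    # recomposition via Eq. 9
```

In the full pipeline, this routine runs over a sliding window with stride \(\lfloor M/2 \rfloor\), and overlapping reconstructions are averaged.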
3.4 Algorithm summary
The whole algorithm is summarized in Table 1. The computational complexity of the whole algorithm is jointly determined by the total number of pixels, the number of fusion sources, and the patch size. In the experiments, we empirically found a trade-off between the number of fusion sources and computational efficiency. We choose \(N = 4\) sources (including the original image) and set \(\eta\) to 0.4, 0.5, and 0.6 in our experiments.
4 Experiments
4.1 Experimental settings
In the experiments, a total of 35 images were collected from the Internet or taken by the authors. As shown in Fig. 3, the images were taken at different outdoor and indoor scenes under various lightness conditions, e.g., nighttime, backlight, sidelight, and so on. We introduce six models for comparison: the single-source models include the multi-scale Retinex-based model (MSRCR) [20], the dehazing-based model (DEHAZE) [30], and the LIME model [22], while the multi-source models include the bio-inspired model (BIMEF) [34], the multi-fusion model (MF) [32], and the lightness-aware simplified Retinex model (LA-Retinex) [33]. The codes of the first five models are publicly available from the project webpage of [34], while the code of [33] was implemented in our previous research. All models were run on a laptop with a 2.6 GHz CPU and 8 GB RAM.
4.2 Visual comparisons
We first conduct visual comparisons. In Fig. 4a, we present the enhanced results of three images with dim lightness. We have the following observations. First, all the models are able to reveal the image details hidden in the darkness, especially the single-source models. Second, all the single-source models are prone to generating over-enhanced results, such as inappropriately boosted edges and textures (DEHAZE, LIME) or unrealistic changes of the global appearance (MSRCR). In contrast, the results of the multi-source models have much more balanced lightness configurations and are more visually appealing. In Fig. 4b, we present the enhanced results of three images with partially low-light regions. We have similar observations as in Fig. 4a, namely that the multi-source models perform better than the single-source ones. Furthermore, on a closer look, our model produces fewer artifacts and brings in more vivid colors. For example, our model renders the wall and medal regions in the first row of Fig. 4b more naturally than MF and LA-Retinex, and the colors of the grass, trees, and sunset regions produced by our model are brighter than those of BIMEF in the second and third examples of Fig. 4b. The reasons are twofold. On one hand, the patch-based computation makes our method robust to artifacts to some extent. On the other hand, as the RGB channels of \(\left\{ {{\mathbf{I}}_{0} ,{\mathbf{I}}_{1} , \ldots , {\mathbf{I}}_{N - 1} } \right\}\) are jointly considered in the decomposition and fusion, our method is able to improve the color distribution.
4.3 Quantitative comparisons
We also conduct quantitative comparisons on all the models based on multi-source fusion. We use the non-reference image quality evaluator BTMQI [40] and a visual aesthetic scoring network [41] and report all the scores in Tables 2 and 3. Of note, since our task does not change the image content composition, we only use the fine-grained scores of color harmony, color vividness, and lightness produced by the network trained in [41]. In the tables, we use bold/italic font to highlight the best/second-best performance, respectively.
From both tables, our model has the best performance among all four fusion-based models, which again validates the effectiveness of our model. Specifically, we also have some additional observations. First, the MF model has the second best performance. This confirms the usefulness of the fusion road map, in which good results can be obtained even by combining several simple enhancement models as in [32]. Second, for a few cases, the input images can have higher scores than those of their enhanced results. This observation indicates that enhancement does not necessarily improve the visual quality all the time. The reason is that the models for comparison are still not fully quality-aware or aesthetics-aware, although they try to harmonize the complementary appearances across the multiple sources with different fusion techniques.
5 Conclusions
In this paper, we propose a low-light enhancement model that generates and then fuses multiple sources, which handles the situation in which only one single input image is at hand. We empirically validate our model on various low-light images. Compared with single-source models and other multi-source models, our model produces better results in terms of visual naturalness and aesthetics. As mentioned above, although our model is able to improve the visual aesthetics of an image, it is still limited in that the enhancement process itself is not aesthetics-aware. In future research, we plan to extend our model by equipping it with an aesthetics optimization process [42]. We also note that the determination of the low-light regions in this paper is still heuristic to some extent. We may consider the technique of unsupervised feature selection [43,44,45,46] to more accurately delineate the image regions with low lightness.
References
Wang M, Li H, Tao D, Lu K, Wu X (2012) Multimodal graph-based reranking for web image search. IEEE Trans Image Process 21(11):4649–4661
Sun F, Tang J, Li H, Qi G, Huang T (2014) Multi-label image categorization with sparse factor representation. IEEE Trans Image Process 23(3):1028–1037
Zhang S, Li X, Zong M, Zhu X, Wang R (2018) Efficient kNN classification with different numbers of nearest neighbors. IEEE Trans Neural Netw Learn Syst 29(5):1774–1785
Zhu X, Zhang S, Li Y, Zhang J, Yang L, Fang Y (2018) Low-rank Sparse Subspace for spectral clustering. IEEE Trans Knowl Data Eng. https://doi.org/10.1109/TKDE.2018.2858782
Hong R, Zhang L, Tao D (2016) Unified photo enhancement by discovering aesthetic communities from flickr. IEEE Trans Image Process 25(3):1124–1135
Wang M, Hong R, Yuan X, Yan S, Chua T (2012) Movie2Comics: towards a lively video content presentation. IEEE Trans Multimed 14(3–2):858–870
Guo Y, Chen T (2018) Semantic segmentation of RGBD images based on deep depth regression. Pattern Recogn Lett 109:55–64
Yin W, Mei T, Chen C, Li S (2014) Socialized mobile photography: learning to photograph with social context via mobile devices. IEEE Trans Multimed 16(1):184–200
Ni B, Xu M, Wang M, Yan S, Tian Q (2013) Learning to photograph: a compositional perspective. IEEE Trans Multimed 15(5):1138–1151
Hao S, Pan D, Guo Y, Hong R, Wang M (2016) Image detail enhancement with spatially guided filters. Sig Process 120:789–796
Nguyen R, Kim S, Brown M (2014) Illuminant aware gamut-based color transfer. Comput Graph Forum 33(7):319–328
Guo X, Li Y, Ling H (2017) LIME: low-light image enhancement via illumination map estimation. IEEE Trans Image Process 26(2):982–993
Li Y, Tan R, Brown M (2015) Nighttime haze removal with glow and multiple light colors. In: Proceedings of international conference on computer vision (ICCV)
Pan J, Hu Z, Su Z, Yang M (2017) L0-regularized intensity and gradient prior for deblurring text images and beyond. IEEE Trans Pattern Anal Mach Intell 39(2):342–355
Reza AM (2004) Realization of the contrast limited adaptive histogram equalization (CLAHE) for real-time image enhancement. J VLSI Signal Process Syst Signal Image Video Technol 38(1):35–44
Lee C, Lee C, Kim C (2013) Contrast enhancement based on layered difference representation of 2D histograms. IEEE Trans Image Process 22(12):5372–5384
Wang C, Ye Z (2005) Brightness preserving histogram equalization with maximum entropy: a variational perspective. IEEE Trans Consum Electron 51(4):1326–1334
Chen S, Ramli A (2003) Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans Consum Electron 49(4):1310–1319
Li Y, Guo F, Tan R, Brown M (2014) A contrast enhancement framework with JPEG artifacts suppression. In: Proceedings of European Conference on Computer Vision (ECCV)
Jobson J, Rahman U, Woodell A (1997) A multi-scale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans Image Process 6(7):965–976
Yue H, Yang J, Sun X, Wu F, Hou C (2017) Contrast enhancement based on intrinsic image decomposition. IEEE Trans Image Process 26(8):3981–3994
Guo X, Li Y, Ling H (2017) LIME: low-light image enhancement via illumination map estimation. IEEE Trans Image Process 26(2):982–993
Ma K, Li H, Yong H, Wang Z, Meng D, Zhang L (2017) Robust multi-exposure image fusion: a structural patch decomposition approach. IEEE Trans Image Process 26(5):2519–2532
Bertalmio M, Levine S (2013) Variational approach for the fusion of exposure bracketed pairs. IEEE Trans Image Process 22(2):712–723
Kou F, Wei Z, Chen W, Wu X, Wen C, Li Z (2018) Intelligent detail enhancement for exposure fusion. IEEE Trans Multimedia 20(2):484–495
Fu X, Zeng D, Huang Y, Zhang X, Ding X (2016) A weighted variational model for simultaneous reflectance and illumination estimation. In: Proceedings of computer vision and pattern recognition (CVPR)
Cai B, Xu X, Guo K, Jia K, Hu B, Tao D (2017) A joint intrinsic–extrinsic prior model for retinex. In: Proceedings of international conference on computer vision (ICCV)
Hao S, Feng Z, Guo Y (2018) Low-light image enhancement with a refined illumination map. Multimed Tools Appl. https://doi.org/10.1007/s11042-017-5448-5
Malm H, Oskarsson M, Warrant E, Clarberg P, Hasselgren J, Lejdfors C (2007) Adaptive enhancement and noise reduction in very low light-level video. In: Proceedings of international conference on computer vision (ICCV)
Dong X, Wang G, Pang Y (2011) Fast efficient algorithm for enhancement of low lighting video. In: Proceedings of international conference on multimedia & expo (ICME)
Song J, Zhang L, Shen P, Peng X, Zhu G (2016) Single low-light image enhancement using luminance map. In: Proceedings of Chinese conference of pattern recognition (CCPR)
Fu X, Zeng D, Huang Y, Liao Y, Ding X, Paisley J (2016) A fusion-based enhancing method for weakly illuminated images. Signal Process 129:82–96
Hao S, Guo Y, Wei Z (2018) Lightness-aware contrast enhancement for images with different illumination conditions. Multimed Tools Appl. https://doi.org/10.1007/s11042-018-6257-1
Ying Z, Li G, Gao W (2017) A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv preprint arXiv:1711.00591
Ying Z, Li G, Ren Y, Wang R, Wang W (2017) A new low-light image enhancement algorithm using camera response model. In: Proceedings of international conference on computer vision (ICCV) workshop
Lore K, Akintayo A, Sarkar S (2017) LLNet: a deep autoencoder approach to natural low-light image enhancement. Pattern Recognit 61:650–662
Chen C, Chen Q, Xu J, Koltun V (2018) Learning to see in the dark. In: Proceedings of computer vision and pattern recognition (CVPR)
Mann S (2000) Comparametric equations with practical applications in quantigraphic image processing. IEEE Trans Image Process 9(8):1389–1406
He K, Sun J (2015) Fast guided filter. arXiv preprint arXiv:1505.00996
Gu K, Wang S, Zhai G, Ma S, Yang X, Lin W, Zhang W, Gao W (2016) Blind quality assessment of tone-mapped images via analysis of information, naturalness, and structure. IEEE Trans Multimed 18(3):432–443
Kong S, Shen X, Lin Z, Mech R, Fowlkes C (2016) Photo aesthetics ranking network with attributes and content adaptation. In: Proceedings of European conference on computer vision (ECCV)
Zhang L, Wang M, Nie L, Hong R, Xia Y, Zimmermann R (2015) Biologically inspired media quality modeling, In: Proceedings of ACM multimedia (ACM MM)
Zheng W, Zhu X, Wen G, Zhu Y, Yu H, Gan J (2018) Unsupervised feature selection by self-paced learning regularization. Pattern Recogn Lett. https://doi.org/10.1016/j.patrec.2018.06.029
Zhu X, Zhang S, Hu R, Zhu Y, Song J (2018) Local and global structure preservation for robust unsupervised spectral feature selection. IEEE Trans Knowl Data Eng 30(3):517–529
Zheng W, Zhu X, Zhu Y, Hu R, Lei C (2018) Dynamic graph learning for spectral feature selection. Multimed Tools Appl. https://doi.org/10.1007/s11042-017-5272-y
Lei C, Zhu X (2018) Unsupervised feature selection via local structure learning and sparse learning. Multimed Tools Appl. https://doi.org/10.1007/s11042-017-5381-7
Acknowledgements
The research was supported by the National Nature Science Foundation of China (Grant Nos. 61772171 and 61632007).
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Zhou, Z., Feng, Z., Liu, J. et al. Single-image low-light enhancement via generating and fusing multiple sources. Neural Comput & Applic 32, 6455–6465 (2020). https://doi.org/10.1007/s00521-018-3893-3