Abstract
Image processing converts an image into a digital format and applies specific operations to it in order to extract valuable information; it is the backbone of many computer vision applications. Image acquisition, the initial step in this procedure, is the process of capturing images of the external environment. Low-light images are those captured when the light intensity of the surrounding environment is low, a condition that degrades the performance of many real-time applications. Enhancement techniques are applied to such images to improve their quality and visual effect. Low-light image enhancement is used in robot vision, surveillance, underwater image enhancement, haze removal and other applications. This paper presents a detailed survey of the different low-light image enhancement techniques.
1 Introduction
Computer vision is an area of artificial intelligence (AI) that enables computers and systems to extract useful information from digital images. Image quality depends on a number of factors, including illumination, contrast and brightness. Images captured in an environment with low illumination are categorized as low-light images. This condition arises in many real-time applications, and numerous low-light image enhancement methods have been proposed to overcome it. The main goal of this survey is to investigate these enhancement techniques. Image enhancement improves the quality of an image as measured by parameters such as color, contrast, brightness and illumination. Sufficient light intensity is needed during image acquisition; when it is low, the captured image conveys less information than the original scene. Popular low-light image enhancement methods include gamma transformation, histogram equalization, Retinex-based methods, and machine learning and deep learning methods. In recent years, the availability of various learning models has driven extensive exploration of low-light image enhancement. This survey divides the algorithms into two classes, traditional methods and learning-based methods; the learning-based algorithms are further classified into machine learning-based and deep learning-based methods. Section 2 describes a few existing low-light applications. Section 3 explains the classification of enhancement methods.
2 Low-Light Images
Medical image processing has been widely used in recent years to diagnose a variety of disorders. In many medical imaging techniques, a low-light environment can affect the accuracy of the diagnosis. Laryngeal endoscopy is one of the most important methods for identifying abnormalities of the larynx, but the anatomical structure of the human body makes it difficult to obtain well-illuminated images of this region. As a result, low-light images are obtained.
This enhancement scheme is also applicable to chest X-rays for the detailed analysis of COVID-19 cases. Figures 1 and 2 show a laryngeal endoscopy image and a chest X-ray image. Night-time traffic monitoring is a major challenge in today's world, and enhancement algorithms of this kind improve the analysis performed by monitoring systems. Other important areas where low-light conditions may exist are underwater images, foggy images, satellite images, etc. (Figs. 3, 4).
3 Methodologies
This survey paper distinguishes between traditional and learning-based low-light image enhancement techniques (Fig. 5).
The traditional methods are gamma transformation, histogram equalization and Retinex-based methods. The learning-based methods comprise machine learning (ML) and deep learning (DL) methods. Machine learning-based methods have only recently become available. Machine learning is a subset of artificial intelligence whose models are capable of learning by themselves without being explicitly programmed. The limitations of ML algorithms are that they require supervision for feature extraction and typically handle only thousands of data points. Commonly preferred ML algorithms are principal component analysis (PCA), regression, support vector machines (SVMs), etc.
Several deep learning-based image enhancement methods have also emerged since 2016. DL is a subset of ML whose algorithms can process millions of data points; as a result, a large number of features are extracted without supervision. Convolutional neural networks (CNNs) have served as the foundation of deep learning frameworks in a variety of research papers, and deep learning-based methods achieve excellent results in low-light image enhancement. Section 3.4 describes deep learning algorithms.
3.1 Gamma Transformation
The gamma function is a nonlinear transformation, and gamma correction is a widely used image enhancement technique. For a normalized input intensity r, the transformed output is

\(s = c \cdot r^{\gamma }\)

where 'γ' is the gamma correction parameter and 'c' is a scaling constant. By varying γ, several different transformation curves can be obtained. When γ < 1, the transformation broadens the dynamic range of the low-gray-value areas of the image and compresses the range of the high-gray-value areas. When γ > 1, the transformation compresses the low gray values and stretches the high gray values. When γ = 1, the output remains unchanged (Fig. 6).
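As an illustrative sketch (the function name and the example values are our own, not from the paper), gamma correction on a normalized grayscale image can be written as:

```python
import numpy as np

def gamma_correct(image, gamma, c=1.0):
    """Power-law transform s = c * r**gamma on intensities in [0, 1]."""
    return np.clip(c * np.power(image, gamma), 0.0, 1.0)

# A dark pixel (0.2) is lifted by gamma < 1 and suppressed by gamma > 1.
dark = np.array([0.2])
brightened = gamma_correct(dark, 0.5)   # 0.2**0.5 ≈ 0.447
darkened = gamma_correct(dark, 2.0)     # 0.2**2   = 0.04
```

Applying the same curve to every pixel preserves the ordering of intensities while redistributing the dynamic range.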
A pair of complementary gamma functions combined by fusion is one of the methods used for low-light image enhancement (Li et al. 2020). The pair of complementary functions is as follows,
where x is the input pixel value and y1 and y2 are the transformed output pixels.
The input red, green, blue (RGB) image is transformed into a hue, saturation, value (HSV) image. The brightness of the image is determined by the value component (V), which depends on the amount of light intensity present in the environment. The value component is enhanced by the above transformation equations. Then two enhanced ‘V’ components are combined by,
where \(c_{1} = V_{i} / \sum V_{i}\).
“I1” is the first input for the fusion process. The same value component is subjected to sharpening and histogram equalization to produce the second input for the fusion. The second input for fusion is,
The value components I1 and I2 are then combined by the image fusion process. This improves the brightness of low-light images by lifting the dark regions and compressing the bright regions. The advantage of using these gamma functions is that they generate even brightness.
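The fusion step can be sketched as follows. This is a simplified illustration: the exact complementary gamma pair of Li et al. (2020) is not reproduced here, so two generic gamma branches stand in for it, while the per-pixel weights follow \(c_{i} = V_{i} / \sum V_{i}\) as in the text.

```python
import numpy as np

def enhance_v_fusion(v, gamma=2.2):
    """Fuse two gamma-transformed copies of the V channel with weights
    c_i = V_i / sum(V_i). The two branches (v**(1/gamma) brightens,
    v**gamma darkens) are illustrative stand-ins, not the exact
    complementary pair of Li et al. (2020)."""
    v1 = np.power(v, 1.0 / gamma)     # stretches the dark regions
    v2 = np.power(v, gamma)           # compresses the bright regions
    total = v1 + v2 + 1e-12           # avoid division by zero
    c1, c2 = v1 / total, v2 / total   # per-pixel fusion weights
    return c1 * v1 + c2 * v2

rng = np.random.default_rng(1)
v = rng.uniform(0.05, 1.0, size=(4, 4))   # a toy V (value) channel
fused = enhance_v_fusion(v)
```

Because the weights sum to (at most) one per pixel, the fused output remains a valid intensity in [0, 1].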
3.2 Histogram Equalization
Histogram equalization (Narendra and Fitch 1981; Abdullah-Al-Wadud et al. 2007) is one of the traditional methods for low-light image enhancement. The pixels are the basic building blocks of an image. Each pixel holds a specific intensity value. The histogram is a plot that shows the number of pixels versus their intensity values. The histogram equalization algorithm uses the cumulative distribution function (CDF) to adjust the output gray level to have a uniform distribution (Fig. 7).
Let ‘I’ be the input image with ‘L’ gray levels and ‘N’ the total number of pixels, let ‘I(i, j)’ denote the gray value at coordinates (i, j), and let ‘nk’ denote the number of pixels at gray level k. The probability that a particular gray level ‘k’ occurs is

\(p(k) = n_{k} /N\)

The cumulative distribution function (CDF) of the gray levels of the image ‘I’ is given by

\(C(k) = \sum\nolimits_{m = 0}^{k} {p(m)}\)

The histogram equalization algorithm maps the original image to an enhanced image with a uniform gray-level distribution based on the CDF (Table 1). The enhanced output image is represented as follows:

\(O(i,j) = {\text{round}}\left( {(L - 1) \cdot C(I(i,j))} \right)\)
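This mapping can be sketched directly with a lookup table built from the CDF (a minimal sketch; names are illustrative):

```python
import numpy as np

def histogram_equalize(image, levels=256):
    """Map gray level k to round((L - 1) * C(k)) via a lookup table."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / image.size                    # p(k) = n_k / N
    cdf = np.cumsum(p)                       # C(k)
    lut = np.round((levels - 1) * cdf).astype(np.uint8)
    return lut[image]

# A dark image using only levels 0..63 gets stretched over 0..255.
dark = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
eq = histogram_equalize(dark)
```

The lookup table is monotonic, so pixel ordering is preserved while the occupied gray range is stretched toward the full dynamic range.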
3.3 Retinex Theory
Retinex theory (Land 1977) is one of the major strategies employed in low-light image enhancement. As per the Retinex theory, the observed image is represented as the product of reflectance and illumination component (Fig. 8).
As per Retinex theory,

\(S(X,Y) = R(X,Y) \cdot L(X,Y)\)

where

S(X, Y)—Observed image,

R(X, Y)—Reflectance component,

L(X, Y)—Illumination component.
A low-light image is characterized by being captured in a region of low illuminance, where illuminance measures how much incident light falls on the surface. For images taken in dim lighting, illuminance is below the standard level. As per Retinex theory, the reflectance component is considered the enhanced image, \(R = S/L\). By choosing a proper illumination map, the required enhanced image is obtained, and most research work is carried out based on this equation: the enhanced output is obtained by dividing by the illumination component. To avoid the difficulty of this division operation, an inverse term is used:

\(R(X,Y) = S(X,Y) \cdot L^{ - 1} (X,Y)\)
Using the inverse illumination map (L−1), the enhanced image (R) is obtained. Many deep learning models use Retinex theory as their basic theory, constructing the illumination map with various CNN models. Current research is also carried out in deep learning without Retinex theory. Deep learning models will play an important role in the enhancement of low-light images.
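A minimal single-scale sketch of the Retinex division is shown below. The assumptions are ours: a box-filter local mean stands in for the illumination estimate, and the function names are illustrative only.

```python
import numpy as np

def box_blur(img, k=15):
    """Crude illumination estimate L: local mean via an integral image."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))       # zero row/column for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def retinex_enhance(s, eps=1e-3):
    """R = S / L: divide the observed image by the estimated illumination."""
    l = np.maximum(box_blur(s), eps)      # avoid division by zero
    return np.clip(s / l, 0.0, 1.0)

# A uniformly dim scene (constant 0.3) is fully normalized: R ≈ 1 everywhere.
flat = np.full((32, 32), 0.3)
enhanced = retinex_enhance(flat)
```

Deep models replace the hand-crafted blur with a learned illumination (or inverse-illumination) predictor, but the final division follows the same equation.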
3.4 Deep Learning-Based Methods
Deep learning has been applied to computer vision tasks such as low-light image enhancement in recent years due to its excellent representation and generalization abilities. Many deep learning models use Retinex theory for their operation. A convolutional neural network (CNN) is a deep learning network architecture that learns directly from data. CNNs are especially useful for detecting patterns in images in order to recognize objects, classes and categories.
Figure 9 shows the basic architecture of a convolutional neural network (CNN). The convolution layer extracts meaningful information by applying a sliding window (kernel) over the input matrix. The pooling layer performs dimensionality reduction, reducing the height and width while maintaining the depth information; depending on the application, maximum, average or minimum pooling is preferred. The fully connected layer performs the classification.
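As an illustration of the pooling operation described above (a sketch, not tied to any specific network in this survey), max pooling shrinks the spatial dimensions while keeping the channel depth:

```python
import numpy as np

def max_pool2d(x, k=2):
    """k x k max pooling: reduces height and width, keeps channel depth."""
    h, w, c = x.shape
    x = x[:h - h % k, :w - w % k]                 # drop any ragged edge
    return x.reshape(h // k, k, w // k, k, c).max(axis=(1, 3))

feat = np.random.rand(8, 8, 16)    # H x W x channels feature map
pooled = max_pool2d(feat)          # -> 4 x 4 x 16
```

Average or minimum pooling follows the same reshape, replacing `max` with `mean` or `min`.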
A generative adversarial network (GAN) (Goodfellow et al. 2014) is an unsupervised deep learning model that uses unlabelled data for training. A GAN contains two competing neural networks, a generator and a discriminator, which learn to capture the variations within the dataset. The generator produces fake image samples and tries to fool the discriminator; during the training phase the two networks run in competition with each other, and the model improves with each epoch.
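The adversarial training loop can be shown with a toy example (our own illustration, not from any paper surveyed here): 1-D data, a linear generator, a logistic discriminator, and hand-derived gradients for the two objectives.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Smallest possible adversarial pair, trained on 1-D data drawn from
# N(3, 0.5): generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 1.0, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for _ in range(1500):
    x = rng.normal(3.0, 0.5, batch)      # real samples
    z = rng.normal(0.0, 1.0, batch)      # latent noise
    g = a * z + b                        # fake samples
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)

    # Discriminator ascends log D(x) + log(1 - D(G(z))) (hand-derived grads).
    w += lr * np.mean((1 - dx) * x - dg * g)
    c += lr * np.mean((1 - dx) - dg)

    # Generator ascends the non-saturating objective log D(G(z)).
    dg = sigmoid(w * g + c)              # re-evaluate after the D update
    a += lr * np.mean((1 - dg) * w * z)
    b += lr * np.mean((1 - dg) * w)
```

As training proceeds, b (the mean of the generated samples) drifts toward the real mean, mirroring the generator/discriminator competition described above; real GANs simply replace these scalar maps with deep networks.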
A Retinex-based attention network (Huang et al. 2020) uses Retinex as the basic theory for learning deep neural networks. This technique computes the enhanced image from a reflectance map. An illumination extraction block is built with an attention mechanism module, resulting in an illumination map prediction network. To obtain more precise illumination information for the input image, this attention module is inserted between the convolution layer and batch normalization. On low-light images with both uniform and uneven illumination, this model lessens the impact of noise on the enhanced result.
A Multiscale Attention Retinex Network (MARN) (Zhang and Wang 2021) is designed to predict a detailed inverse illumination map of the input image. Compared with various CNN algorithms, MARN provides better feature extraction and improves the generalization capability of the network. Instead of relying on additional image priors, an illumination attention map is used to train the model, improving image quality under various lighting conditions. The network utilizes reconstruction loss, structure similarity loss and detail loss. Once the inverse illumination map is predicted, the reflectance map is computed using Retinex theory and taken as the enhanced image.
In a simple generative adversarial network with a Retinex model (Ma et al. 2021), a decomposition network decomposes the low-light image into illumination and reflectance maps. The GAN structure is trained on unpaired datasets, which gives the model better generalization. This structure achieves reduced training complexity and training time, making the model suitable for mobile phones with small memory.
EnlightenGAN (Jiang et al. 2021) is a modified GAN structure that can be trained without image pairs. Even with unpaired datasets, it generalizes very well to various real-time images. The model introduces a global and local discriminator structure that handles spatially varying light conditions in the input image. The results of EnlightenGAN have been compared with several state-of-the-art methods, and all results show its superiority.
Various approaches have been used to improve image segmentation (Long et al. 2015). Segmentation is the process of dividing an image into its constituent parts, a basic operation in many computer vision tasks. Segmentation performs well during daytime or in bright light, but on low-light images it performs poorly because of noise, blur, etc. Segmentation can be divided into single-class and multi-class segmentation: in single-class segmentation (Wang and Ren 2018) only one object or feature is considered, whereas in multi-class segmentation (Dai and Gool 2018) multiple features are considered. In Cho et al. (2020), semantic segmentation of low-light images with a modified CycleGAN is introduced; the modified CycleGAN is trained using a paired dataset, and an L1 loss function is added to the existing CycleGAN to improve segmentation performance.
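In its common form (assumed here, not stated explicitly in the survey), such an L1 loss is the mean absolute difference between the translated image and its paired target:

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error: the usual form of an L1 reconstruction loss."""
    return np.mean(np.abs(pred - target))

pred = np.array([[0.2, 0.8], [0.5, 0.1]])
target = np.array([[0.0, 1.0], [0.5, 0.3]])
loss = l1_loss(pred, target)   # (0.2 + 0.2 + 0.0 + 0.2) / 4 = 0.15
```

Unlike the squared (L2) penalty, L1 weights all residuals linearly, which tends to produce less blurry reconstructions in image-to-image translation.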
Table 2 summarizes the low-light image enhancement techniques.
4 Conclusion
Various state-of-the-art methods for low-light image enhancement are discussed in this paper. Many deep learning structures use Retinex as their basic theory of operation, with the illumination map modified by various learning architectures; CNN, GAN and CycleGAN are a few illustrations of such deep learning models. This survey also presents some works that are suitable in noisy environments. In many real-time applications, low-light conditions occur due to the unavailability of environmental light, so low-light image enhancement plays a crucial role in each of these scenarios. It can also be extended to the enhancement of low-light video.
References
Abdullah-Al-Wadud M, Kabir MH, Akber Dewan MA, Chae O (2007) A dynamic histogram equalization for image contrast enhancement. IEEE Trans Consum Electron 53(2):593–600. https://doi.org/10.1109/TCE.2007.381734
Chen SD, Ramli AR (2003) Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation. IEEE Trans Consum Electron 49(4):1301–1309. https://doi.org/10.1109/TCE.2003.1261233
Chen S-D, Ramli AR (2003) Minimum mean brightness error bi-histogram equalization in contrast enhancement. IEEE Trans Consum Electron 49(4):1310–1319. https://doi.org/10.1109/TCE.2003.1261234
Cho SW, Baek NR, Koo JH, Arsalan M, Park KR (2020) Semantic segmentation with low light images by modified CycleGAN-based image enhancement. IEEE Access 8:93561–93585. https://doi.org/10.1109/ACCESS.2020.2994969
Cho SW, Baek NR, Koo JH, Park KR (2021) Modified perceptual cycle generative adversarial network-based image enhancement for improving accuracy of low light image segmentation. IEEE Access 9:6296–6324. https://doi.org/10.1109/ACCESS.2020.3048366
Dai D, Gool LV (2018) Dark model adaptation: semantic image segmentation from daytime to nighttime. In: Proceedings 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, pp 3819–3824
Ganesan SD, Rabbani M (2019) Contrast enhancement using completely overlapped uniformly decrementing sub-block histogram equalization for less controlled illumination variation. Int Arab J Inform Technol 16(3):389–396
Garg A, Pan X-W, Dung L-R (2022) LiCENt: low-light image enhancement using the light channel of HSL. IEEE Access 10:33547–33560. https://doi.org/10.1109/ACCESS.2022.3161527
Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative Adversarial Networks. http://arxiv.org/abs/1406.2661
Guo X, Li Y, Ling H (2017) LIME: low-light image enhancement via illumination map estimation. IEEE Trans Image Process 26(2):982–993. https://doi.org/10.1109/TIP.2016.2639450
Guo Y, Ke X, Ma J, Zhang J (2019) A pipeline neural network for low-light image enhancement. IEEE Access 7:13737–13744. https://doi.org/10.1109/ACCESS.2019.2891957
Guo Y, Lu Y, Liu RW, Yang M, Chui KT (2020) Low-light image enhancement with regularized illumination optimization and deep noise suppression. IEEE Access 8:145297–145315. https://doi.org/10.1109/ACCESS.2020.3015217
Huang W, Zhu Y, Huang R (2020) Low light image enhancement network with attention mechanism and retinex model. IEEE Access 8:74306–74314. https://doi.org/10.1109/ACCESS.2020.2988767
Jiang Y et al (2021) EnlightenGAN: deep light enhancement without paired supervision. IEEE Trans Image Process 30:2340–2349. https://doi.org/10.1109/TIP.2021.3051462
Lamba M, Rachavarapu KK, Mitra K (2021) Harnessing multi-view perspective of light fields for low-light imaging. IEEE Trans Image Process 30:1501–1513. https://doi.org/10.1109/TIP.2020.3045617
Land EH (1977) The retinex theory of color vision. Sci Amer 237(6):108–128
Li Y, Li J, Li Y, Kim H, Serikawa S (2019) Low-light underwater image enhancement for deep-sea tripod. IEEE Access 7:44080–44086. https://doi.org/10.1109/ACCESS.2019.2897691
Li C, Tang S, Yan J, Zhou T (2020) Low-light image enhancement via pair of complementary gamma functions by fusion. IEEE Access 8:169887–169896. https://doi.org/10.1109/ACCESS.2020.3023485
Liang Z, Cai J, Cao Z, Zhang L (2021) CAMERANET: a two-stage framework for effective camera ISP learning. IEEE Trans Image Process 30:2248–2262. https://doi.org/10.1109/TIP.2021.3051486
Lim S, Kim W (2021) DSLR: deep stacked laplacian restorer for low-light image enhancement. IEEE Trans Multimedia 23:4272–4284. https://doi.org/10.1109/TMM.2020.3039361
Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), Boston, MA, pp 3431–3440
Lu K, Zhang L (2021) TBEFN: a two-branch exposure-fusion network for low-light image enhancement. IEEE Trans Multimedia 23:4093–4105. https://doi.org/10.1109/TMM.2020.3037526
Lu Y, Kim D-W, Jung S-W (2020) DeepSelfie: single-shot low-light enhancement for selfies. IEEE Access 8:121424–121436. https://doi.org/10.1109/ACCESS.2020.3006525
Ma F, Chai J, Wang H (2019) Two-dimensional compact variational mode decomposition-based low-light image enhancement. IEEE Access 7:136299–136309. https://doi.org/10.1109/ACCESS.2019.2940531
Ma T et al (2021) Retinex GAN: unsupervised low-light enhancement with two-layer convolutional decomposition networks. IEEE Access 9:56539–56550. https://doi.org/10.1109/ACCESS.2021.3072331
Megha P, Swarna M, Sowmya V, Soman KP (2016) Low contrast satellite image restoration based on adaptive histogram equalization and discrete wavelet transform. In: International Conference on Communication and Signal Processing (ICCSP), pp 0402–0406. https://doi.org/10.1109/ICCSP.2016.7754166
Narendra PM, Fitch RC (1981) Real-time adaptive contrast enhancement. IEEE Trans Pattern Anal Mach Intell PAMI-3:655–661
Ooi CH, Kong NSP, Ibrahim H (2009) Bi-histogram equalization with a plateau limit for digital image enhancement. IEEE Trans Consum Electron 55(4):2072–2080. https://doi.org/10.1109/TCE.2009.5373771
Parihar AS, Verma OP (2016) Contrast enhancement using entropy-based dynamic sub-histogram equalisation. IET Image Process 10(11):799–808
Park S, Yu S, Kim M, Park K, Paik J (2018) Dual autoencoder network for retinex-based low-light image enhancement. IEEE Access 6:22084–22093. https://doi.org/10.1109/ACCESS.2018.2812809
Ravirathinam P, Goel D, Ranjani JJ (2021) C-LIENet: a multi-context low-light image enhancement network. IEEE Access 9:31053–31064. https://doi.org/10.1109/ACCESS.2021.3059498
Ren Y, Ying Z, Li TH, Li G (2019) LECARM: low-light image enhancement using the camera response model. IEEE Trans Circuits Syst Video Technol 29(4):968–981. https://doi.org/10.1109/TCSVT.2018.2828141
Sheet D, Garud H, Suveer A, Mahadevappa M, Chatterjee J (2010) Brightness preserving dynamic fuzzy histogram equalization. IEEE Trans Consum Electron 56(4):2475–2480. https://doi.org/10.1109/TCE.2010.5681130
Sim KS, Tso CP, Tan YY (2007) Recursive sub-image histogram equalization applied to gray scale images. Pattern Recogn Lett 28(10):1209–1221. https://doi.org/10.1016/j.patrec.2007.02.003
Singh N, Bhandari AK (2021) Principal component analysis-based low-light image enhancement using reflection model. IEEE Trans Instrum Meas 70:1–10, Art no 5012710. https://doi.org/10.1109/TIM.2021.3096266
Singh K, Kapoor R (2014) Image enhancement using exposure based sub image histogram equalization. Pattern Recogn Lett 36:10–14. https://doi.org/10.1016/j.patrec.2013.08.024
Singh K, Kapoor R (2014) Image enhancement via median-mean based sub-image-clipped histogram equalization. Optik 125(17):4646–4651. https://doi.org/10.1016/j.ijleo.2014.04.093
Wang Y, Ren J (2018) Low-light forest frame image segmentation based on color features. J Phys Conf Ser 1069(1):012165
Wang Y, Chen Q, Zhang B (1999) Image enhancement based on equal area dualistic sub image histogram equalization method. IEEE Trans Consum Electron 45(1):68–75. https://doi.org/10.1109/30.75441
Wang L, Liu Z, Siu W, Lun DPK (2020a) Lightening network for low-light image enhancement. IEEE Trans Image Process 29:7984–7996. https://doi.org/10.1109/TIP.2020.3008396
Wang M, Tian Z, Gui W, Zhang X, Wang W (2020b) Low-Light image enhancement based on nonsubsampled shearlet transform. IEEE Access 8:63162–63174. https://doi.org/10.1109/ACCESS.2020.2983457
Wang R, Jiang B, Yang C, Li Q, Zhang B (2022) MAGAN: unsupervised low-light image enhancement guided by mixed-attention. Big Data Mining Anal 5(2):110–119. https://doi.org/10.26599/BDMA.2021.9020020
Wang W, Wei C, Yang W, Liu J (2018) GLADNet: low-light enhancement network with global awareness. In: Proceedings—13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018, pp 751–755. https://doi.org/10.1109/FG.2018.00118
Yadav G, Maheshwari S, Agarwal A (2014) Contrast limited adaptive histogram equalization-based enhancement for real time video system. In: International conference on advances in computing, communications and informatics (ICACCI), pp 2392–2397. https://doi.org/10.1109/ICACCI.2014.6968381
Yang W, Wang W, Huang H, Wang S, Liu J (2021) Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans Image Process 30:2072–2086. https://doi.org/10.1109/TIP.2021.3050850
Zhang X, Wang X (2021) MARN: multi-scale attention Retinex network for low-light image enhancement. IEEE Access 9:50939–50948. https://doi.org/10.1109/ACCESS.2021.3068534
Zhu H, Zhao Y, Wang R, Wang R, Chen W, Gao X (2021) LLISP: low-light image signal processing net via two-stage network. IEEE Access 9:16736–16745. https://doi.org/10.1109/ACCESS.2021.3053607
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Anila, V.S., Nagarajan, G., Perarasi, T. (2024). An Exploration of State-of-Art Approaches on Low-Light Image Enhancement Techniques. In: Shetty, N.R., Prasad, N.H., Nagaraj, H.C. (eds) Advances in Communication and Applications. ERCICA 2023. Lecture Notes in Electrical Engineering, vol 1105. Springer, Singapore. https://doi.org/10.1007/978-981-99-7633-1_15
Print ISBN: 978-981-99-7632-4
Online ISBN: 978-981-99-7633-1