1 Introduction

With the rapid development of computer vision technology, digital image processing systems have been widely used in the fields of urban transportation, satellite remote sensing and video surveillance [11, 34, 40]. However, some uncontrollable factors often result in image defects during image acquisition. In particular, under poor lighting conditions (such as overcast weather, night or indoor spaces with insufficient light), images suffer from defects such as insufficient brightness, low contrast and a narrow greyscale range as a result of the weak light reflected from objects [1, 35, 38]. Therefore, research on low-light image enhancement methods is of great significance and value.

There are several kinds of methods to enhance low-illumination images, including histogram equalization, retinex decomposition, greyscale mapping, and deep learning. Owing to their simple implementation and fast speed, greyscale mapping methods have been widely used in the field of image enhancement [17]. The most classic among them are the gamma mapping function and the logarithmic mapping function [29]. In fact, most greyscale mapping methods are improvements built on these two classic mapping functions. For example, Tian et al. [36] proposed a low-light colour image enhancement algorithm based on a logarithmic processing model: the light components of the images were nonlinearly enhanced, an enhancement operator was introduced to modify the membership function, and the images were finally enhanced by inversely transforming the membership function. Cheng et al. [5] proposed an adaptive gamma correction algorithm in which the gamma correction parameters are adaptively obtained from the cumulative probability distribution histogram. Zhi et al. constructed a double gamma function by which the γ value is automatically adjusted according to the distribution characteristics of the light map, thereby increasing the greyscale of the low-light area while suppressing the greyscale of the local high-light area [36]. Yu et al. [41] mapped the chromatic components of the images to an appropriate range based on the inverse hyperbolic tangent and then proposed low-light image enhancement based on the best hyperbolic tangent contour. Although these methods have their own advantages, they are all essentially improvements of the classic mapping functions. The inherent shortcomings of the classic mapping functions (for example, it is difficult to coordinate the bright and dark areas of the image, and they tend to overenhance) make it difficult to fully exploit the advantages of these methods. It is therefore necessary to design a new mapping method to replace the classical mapping functions.

Therefore, this paper proposes a new multiparameter grey mapping method. Unlike the construction of the classic mapping function, the new mapping method divides the grey space of the image into stretched regions and compressed regions according to the grey value and establishes separate mapping rules for each. By adopting the enhancement strategy of “compressing the bright area first and then stretching the dark area”, the new mapping method fundamentally overcomes the inherent deficits of the classical mapping function, which has difficulty coordinating the grey distribution of the bright and dark areas of the image and tends to overenhance. The new mapping method can not only directly control the amount of compression of the grey space in the bright area of the image through its parameters but can also adjust the greyscale distribution of the dark area of the image without changing the greyscale values of the pixels in the bright area. In addition, this paper also designs an adaptive enhancement algorithm with the new mapping method as its core to verify its effectiveness and flexibility. Experimental results showed that the adaptive algorithm had excellent performance in colour rendering, brightness enhancement and noise suppression. It was also significantly better than similar methods from the last five years in both visual quality and quantitative testing.

Contributions:

  1) A mapping function generation method based on grey space division was proposed, providing another possibility for generating a mapping function.

  2) A compression mapping function that preferentially merges the grey levels with very few pixels was proposed, so that the grey distribution characteristics in the light area of the images can be preserved whenever possible.

  3) A grey increment function was designed, by which various stretch mapping functions can be flexibly generated to control the grey distribution characteristics in the dark area of the images.

  4) An adaptive enhancement algorithm with this mapping method as its core was designed to verify the flexibility and superiority of this mapping function.

The rest of this paper is organized as follows. Section 2 briefly introduces related work. Section 3 elaborates the method proposed in this paper. Section 4 presents our experimental results and compares them with the latest methods from the past five years. Section 5 discusses future work. Section 6 presents the conclusions.

2 Related works

At present, the enhancement methods for low-illumination images mainly include four categories: grey mapping-based, histogram equalization-based, retinex decomposition-based, and deep learning-based.

The method based on grey mapping is an image enhancement algorithm that enhances pixels point by point based on mathematical functions [27]. The most typical are the gamma mapping function and the logarithmic mapping function. Owing to their simple implementation and fast speed, these methods have been widely used in the field of image enhancement [17]. However, because they do not consider the overall grey distribution of the image, most of these algorithms have limited enhancement ability and poor adaptability [36].

The main principle of histogram equalization (HE) is to adjust the output greyscale based on the cumulative distribution function (CDF) so that the probability density function approaches a uniform distribution; thus, the details hidden in the dark area can reappear, and the visual effect of the input images can be effectively enhanced [7, 10, 28, 32]. Many algorithms have been derived from the classic HE algorithm. In [22], contrast-limited adaptive histogram equalization (CLAHE) was proposed to reduce the excessive enhancement of image details by extending histogram equalization with a clipping threshold. Banik et al. [2] introduced a contrast enhancement algorithm that enhances different types of low-light images using histogram equalization and gamma correction. Tan et al. [30] proposed an exposure-based multihistogram equalization contrast enhancement method for nonuniform illumination images (EMHE) to reduce degradation and protect image details by allocating a new output grey range through an entropy-controlled grey distribution scheme. HE-based algorithms are popular due to their simplicity and effectiveness in enhancing the quality of dimmed images. However, most enhancement algorithms based on HE theory cause overenhancement or may produce noisy images, especially when the input images are very dark [25].
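The CDF-based remapping at the heart of HE can be illustrated with a minimal sketch (our own illustration, not code from any of the cited works):

```python
def equalize(hist, levels=256):
    """Classic histogram equalization: map each grey level through the
    normalized cumulative distribution function (CDF)."""
    total = sum(hist)
    cdf, acc = [], 0
    for n in hist:
        acc += n
        cdf.append(acc / total)
    # Scale the CDF to the output grey range [0, levels - 1].
    return [round(c * (levels - 1)) for c in cdf]

# A dark image: all pixels crowded into the lowest four grey levels
# are spread over the full output range.
hist = [10, 20, 30, 40] + [0] * 252
mapping = equalize(hist)
```

The example also exhibits the failure mode noted above: with all pixels crowded into the lowest levels, the remapping is very aggressive, which is exactly what leads to overenhancement and amplified noise on very dark inputs.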

The method based on retinex theory reduces the influence of the light components on the images by separating the reflection components from the total light to enhance the image [13, 16]. Guo et al. [8] proposed low-light image enhancement via illumination map estimation (LIME), which refines the initial illumination map by imposing a structure-aware prior. In [15], structure-revealing low-light image enhancement via a robust retinex model (RRM) was proposed by considering an additional noise map. Cai et al. [3] proposed a joint intrinsic-extrinsic prior model for retinex decomposition (JIEP) by considering the properties of 3D objects. Ren et al. [24] proposed an enhancement framework combining the traditional retinex and camera response models in which an enhanced image is obtained by adjusting the exposure of a low-light image. These algorithms can not only improve image contrast and brightness but also have obvious advantages in colour image enhancement [12]. However, they use a Gaussian convolution template for illumination estimation and are poor at retaining edges [36].

In addition, there are also low-light image enhancement methods based on deep learning. Yang et al. [37] proposed enhancing low-light images by coupled dictionary learning. Lore et al. [18] used a deep autoencoder named low-light net (LLNet) to perform contrast enhancement and denoising. In [39], Yang et al. attempted semisupervised learning for low-light image enhancement. However, such methods must be supported by a large dataset, and the increase in model complexity leads to a sharp increase in the time complexity of the corresponding algorithms [25].

3 New mapping method and its adaptive algorithm

3.1 Classic grey mapping method

Classic grey mapping methods include logarithmic functions, gamma functions and other improved functions. The logarithmic transformation function establishes a logarithmic relationship between the value of each pixel in the output image and the value of the corresponding pixel in the input image [21]. The normalized expression is as follows:

$$ {g}_{out}\left(x,y\right)=\frac{\log \left(1+c\times {g}_{in}\left(x,y\right)\right)}{\log \left(1+c\right)} $$
(1)

where c is an adjustable parameter. The gamma transformation function is similar to the logarithmic transformation function and is also widely used; its expression is as follows:

$$ {g}_{out}\left(x,y\right)={g}_{in}{\left(x,y\right)}^{\gamma } $$
(2)

where γ is a correction coefficient. Figure 1 shows the curve shapes of the above two classic grey mapping methods for different parameter values.
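Formulas (1) and (2) can be sketched directly on normalized grey values in [0, 1] (a minimal illustration; the default parameter values below are arbitrary):

```python
import math

def log_map(g, c=5.0):
    """Normalized logarithmic mapping, formula (1); g in [0, 1]."""
    return math.log(1 + c * g) / math.log(1 + c)

def gamma_map(g, gamma=0.5):
    """Gamma mapping, formula (2); g in [0, 1]."""
    return g ** gamma

# With c > 0 and gamma < 1, both curves lift dark pixels more than bright ones.
```

Because each curve is governed by a single parameter, lifting the dark range inevitably compresses the bright range; this trade-off is analysed in Section 3.1.1.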

Fig. 1
figure 1

Classical grey transformation methods. a Gamma transformation. b Logarithmic transformation

3.1.1 Disadvantages of classic grey mapping method

The first shortcoming of the classic grey mapping function is that it is difficult to coordinate the enhancement effect of the bright and dark areas of the image. Since the classic mapping function has only one parameter, there is a strong correlation between the dark area stretch rule and the bright area compression rule. The above two classic mapping functions are thus unable to take into account the visual quality of the bright and dark areas of the image [5]. Taking the gamma function as an example, when the γ value is reduced to increase the contrast of the dark area of the image, the pixels in the bright area will inevitably be pushed to the high grey value area. In contrast, when the contrast of the bright area in the image is preserved to avoid excessive enhancement, the visual quality of the dark area of the image cannot be improved by adjusting the value of γ. This is the root cause of insufficient or excessive enhancement of the classical mapping function.

The second shortcoming of the classic mapping function is its unreasonable bright area compression rules. Unlike the dark area of the image, the bright area tends to have good visual quality. The more pixels there are for greyscale merging in this area, the higher the loss of image details. However, it is not the pixel distribution characteristics of the area that determines the compression rule of the classic mapping function but its single parameter. Therefore, when using the classic mapping function for enhancement, it is often encountered that the grey levels with more pixels in the bright area are preferentially merged, while the grey levels with fewer pixels or no pixels are completely preserved.

3.2 New mapping method

To construct a more reasonable grey mapping function, the dark area of the images to be stretched was separated from the light area to be compressed, and a corresponding mapping function was constructed for each. The shortcomings of the classic mapping function can be effectively avoided by compressing before allocating. First, given the size So of the dark area of the images, the grey space V was divided into a stretch area Vs and a compression area Vc:

$$ {V}_s=\left\{\ A(i)\ |\ A(i)\in V,i<{S}_o\right\}; $$
(3)
$$ {V}_c=\left\{\ A(i)\ |\ A(i)\in V,i\ge {S}_o\right\}; $$
(4)

where A(i) is the set of pixels at grey value i. Then, the mapping function based on the parameters of the compression cut-off point Se and the characteristic coefficient C was constructed:

$$ F(i)=\left\{\begin{array}{c}{F}_{low}\left(i,{S}_e,C\right),\kern1.5em i\in {V}_s;\\ {}{F}_{high}\left(i,{S}_e\right),\kern2.5em i\in {V}_c;\end{array}\right. $$
(5)

where the compression cut-off point Se determines the maximum compression amount of the image compression area; the characteristic coefficient C is used to adjust the grey distribution characteristics of the image stretch area. Figure 2 shows the basic workflow of this mapping function.

Fig. 2
figure 2

Enhanced flow chart of new mapping function

3.2.1 Compression mapping function

The compression mapping function is mainly used to compress the greyscale in the light area of the images and provide an additional grey space for the pixels in the dark area. Each greyscale merge in the compression area was set at the grey level with the least pixel content to minimize the damage to the bright area of the image during compression. To implement this process and generate the corresponding compression mapping function, the initial compression mapping function was defined as:

$$ {y}^0(x)=x; $$
(6)

where x is an integer in [So, 255]; y0 indicates that no greyscale compression has yet occurred in the compression area. Then, the grey set Pk with the fewest pixels in the current compression area was extracted, and the maximum grey value I in it was obtained by formula (7):

$$ \left\{\begin{array}{c}I=\max \left({P}^k\right);\kern8.75em \\ {}{P}^k=\left\{\ i\ \right|\ {N}_i=\min \left({N}_j\right),\kern0.5em j\in {V}_c^k\Big\};\end{array}\right. $$
(7)

where k is the merge count (k ∈ [0, Se − So]); \( {V}_c^k \) is the compression area for the kth greyscale merge, and its initial state is \( {V}_c^0={V}_c \); and Ni is the pixel amount at grey value i. After the pixels at greyscale value I were merged into the adjacent greyscale with fewer pixels (when the pixel amounts of the greyscales on both sides were the same, the pixels were preferentially merged towards the right), the compression area was updated. Its expression is as follows:

$$ {\displaystyle \begin{array}{l} If\ I=\max (i)\ or\ {N}_{I-1}<{N}_{I+1}:\\ \kern1em A\left(I-1\right)=A\left(I-1\right)\cup A(I);\\ \kern1em {V}_c^{k+1}=\left\{\ A(i)\ |\ i\ne I\ \right\};\\ else:\\ \kern1em A\left(I+1\right)=A\left(I+1\right)\cup A(I);\\ \kern1em {V}_c^{k+1}=\left\{\ A(i)\ |\ i\ne I\ \right\};\end{array}} $$
(8)

where A(I) is the set of all pixels at grey value I. Next, the compression mapping function yk + 1(x) corresponding to formula (8) is generated. The calculation process is as follows:

$$ {\displaystyle \begin{array}{l} If\ I=\max (i)\ or\ {N}_{I-1}<{N}_{I+1}:\\ \kern1em M=\max \left\{\ x\ |\ {y}^k(x)=I-1\ \right\};\\ \kern1em {y}^{k+1}(x)=\left\{\begin{array}{ll}{y}^k(x)+1, & x\le M;\\ {y}^k(x), & x>M;\end{array}\right.\\ else:\\ \kern1em M=\max \left\{\ x\ |\ {y}^k(x)=I\ \right\};\\ \kern1em {y}^{k+1}(x)=\left\{\begin{array}{ll}{y}^k(x)+1, & x\le M;\\ {y}^k(x), & x>M;\end{array}\right.\end{array}} $$
(9)

After each iteration of formulas (7), (8), and (9), the grey set A(I) with the fewest pixels in \( {V}_c^k \) is compressed to an adjacent greyscale according to the rules, and then \( {V}_c^{k+1} \) and a new compression mapping function yk + 1(x) are generated. Therefore, the final compression mapping function Fhigh(i, Se) is as follows:

$$ {F}_{high}\left(i,{S}_e\right)={y}^{S_e-{S}_o}(i) $$
(10)

Figure 3 shows the compression principle for the compression mapping function in a greyscale map.

Fig. 3
figure 3

The compression process of the contractive mapping function

As shown in Fig. 3, the grey distribution characteristics in the light area of the images processed by formula (10) are retained whenever possible. Finally, it should be noted that the compression mapping function was generated by iterating over the greyscale histogram (not the image pixel matrix), and the number of iterations is only Se − So. Therefore, little memory is occupied, and the running speed is high.
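A minimal Python sketch of the iteration in formulas (6)-(10) (our own histogram-based bookkeeping, not the paper's implementation; the left merge of formula (8) is realized, as in formula (9), by shifting every mapping value at or below I − 1 up by one):

```python
def compression_map(hist, s_o, s_e):
    """Build the compression mapping F_high (formulas (6)-(10)): repeatedly
    merge the grey level with the fewest pixels in the compression area
    [s_o, 255] into its less-populated neighbour (ties merge to the right),
    freeing s_e - s_o grey levels for the dark-area stretch."""
    y = {x: x for x in range(s_o, 256)}           # formula (6): identity map
    counts = {g: hist[g] for g in range(s_o, 256)}
    for _ in range(s_e - s_o):
        # Formula (7): largest grey value among the least-populated levels.
        least = min(counts.values())
        big_i = max(g for g, n in counts.items() if n == least)
        left, right = counts.get(big_i - 1), counts.get(big_i + 1)
        merge_left = right is None or (left is not None and left < right)
        target = big_i - 1 if merge_left else big_i
        # Formulas (8)-(9): shift everything at or below the merge point up.
        for x in y:
            if y[x] <= target:
                y[x] += 1
        counts = {}
        for x, g in y.items():
            counts[g] = counts.get(g, 0) + hist[x]
    return y

# Example: free two grey levels (s_e - s_o = 2) in a tiny histogram;
# empty grey levels are merged first, so occupied levels keep their pixels.
hist = [0] * 256
hist[2], hist[3], hist[4] = 5, 1, 7
y = compression_map(hist, s_o=2, s_e=4)
```

After the loop, the smallest output grey value is exactly s_e, so the interval [0, s_e) is left free for the stretch mapping of the dark area.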

3.2.2 Stretch mapping function

The stretch mapping function is mainly used to increase the contrast between pixels in the dark area. Since the areas with lower grey values need higher contrast when enhancing the dark area, we designed a grey increment function ∆gr with a controllable decay rate to directly control the grey increment between adjacent greyscales of the dark area. The expression of the grey increment function is as follows:

$$ \Delta {g}_r(i)=T{\left(1-\sqrt{\frac{i-1}{S_o-1}}\right)}^C $$
(11)

where C is a characteristic coefficient (C ≥ 1) that controls the decay rate of the grey increment ∆gr, and T is the maximum grey increment ([1, Se − So]). When the maximum increment in the dark area is not restricted, the stretch mapping function is constructed independently of this parameter. Figure 4a shows the ∆gr function curves generated for different C values. To elaborate the influence of the characteristic coefficient C on ∆gr(i), formula (12) was derived from formula (11):

$$ \Delta {g}_r^{\prime }(i)=-\frac{CT}{2}\sqrt{\frac{S_o-1}{i-1}}\ {\left(1-\sqrt{\frac{i-1}{S_o-1}}\right)}^{C-1} $$
(12)
Fig. 4
figure 4

Grey increment function and Stretch mapping function. a Grey increment function. b Adaptive stretch mapping function. c Stretch mapping function with limited maximum increment

As seen from formula (12), the increment decay rate \( \Delta {g}_r^{\prime }(i) \) at a specific greyscale I is related only to the characteristic coefficient C when T is a fixed value. Next, I in formula (12) is treated as a fixed value, and formula (13) is derived with respect to C:

$$ H(C)=-\frac{T}{2}\sqrt{\frac{S_o-1}{I-1}}\left[\left(1-\sqrt{\frac{I-1}{S_o-1}}\right)+C\left(C-1\right)\right]{\left(1-\sqrt{\frac{I-1}{S_o-1}}\right)}^{C-2}; $$
(13)

As seen from formula (13), when C and I fall within their value ranges, H(C) < 0. The smaller C is, the slower the decay rate of the grey increment ∆gr(i) and the more uniform the grey intervals between adjacent greyscales in the stretch area after enhancement. The adaptive incremental mapping function glow(i), through which the grey distribution characteristics in the dark area of the images can be controlled, is constructed based on formula (11) and the parameter Se; its expression is as follows:

$$ {g}_{low}(i)=\left\{\begin{array}{c}0,\kern21.5em i=0;\\ {}i+ round\left[\left({S}_e-{S}_o\right)\frac{\sum_{k=1}^i\Delta {g}_r(k)}{\sum_{j=1}^{S_o-1}\Delta {g}_r(j)}\right],\kern2em 0<i\le {S}_o-1;\end{array}\right. $$
(14)

Figure 4b shows the glow curve generated based on different C values. The adjustment effect of the characteristic parameter C on glow can be clearly seen. Figure 5 gives an application example for controlling the dark area of the images based on different C values. It was found through comparison that the dark area was adjusted based on the function glow without changing the light area of the images.

Fig. 5
figure 5

The adjustment of parameter C in the dark area of the image
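Formulas (11) and (14) translate directly into a short sketch (our own Python illustration; the paper's experiments use MATLAB):

```python
import math

def delta_g(i, s_o, C, T=1.0):
    """Grey increment function, formula (11)."""
    return T * (1 - math.sqrt((i - 1) / (s_o - 1))) ** C

def g_low(i, s_o, s_e, C):
    """Adaptive stretch mapping, formula (14): distribute the extra grey
    space (s_e - s_o) over the dark area in proportion to the cumulative
    increments."""
    if i == 0:
        return 0
    total = sum(delta_g(j, s_o, C) for j in range(1, s_o))
    part = sum(delta_g(k, s_o, C) for k in range(1, i + 1))
    return i + round((s_e - s_o) * part / total)

s_o, s_e = 64, 128
curve = [g_low(i, s_o, s_e, C=3) for i in range(s_o)]
```

A smaller C spreads the extra space evenly over the whole dark area, whereas a larger C concentrates it at the lowest grey values, which is the adjustment effect visible in Fig. 4b.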

The grey increment function has a further use: the maximum grey increment ∆g(1) of the dark area can be set explicitly by limiting the value range of the characteristic coefficient C. By requiring the increment in formula (14) to equal T, the conditional formula for C (∆g(1) = T) is obtained:

$$ \frac{1}{2}\le \frac{S_e-{S}_o}{\sum_{j=1}^{S_o-1}\Delta {g}_r\left(j,C\right)}<\frac{3}{2}; $$
(15)

The stretch mapping function at the maximum grey increment T can be obtained by substituting the C value that meets the conditions into formula (14). The existence of the C value that meets the condition of formula (15) can be proven by formula (13) and the following formula.

$$ \underset{C\to +\infty }{\lim }{\sum}_{j=1}^{S_o-1}\Delta {g}_r\left(j,C\right)=T; $$
(16)

In addition, if the maximum grey increment between pixels in the stretch area should not exceed T, the mapping function Glow(i) restricting the maximum increment can be used; its expression is as follows:

$$ {\displaystyle \begin{array}{c} If\kern0.75em {S}_e\le {\sum}_{j=1}^{S_o-1}\Delta {g}_r(j)+{S}_o\\ {}{G}_{low}(i)={g}_{low}(i);\\ {}\begin{array}{c} else\\ {}{G}_{low}(i)=\left\{\begin{array}{c}{S}_e-{S}_o-{\sum}_{k=1}^{S_o-1}\Delta {g}_r(k),\kern12.25em i=0;\\ {}{G}_{low}(0)+i+ round\left({\sum}_{k=1}^i\Delta {g}_r(k)\right),\kern2em 0<i\le {S}_o-1;\end{array}\right.\end{array}\end{array}} $$
(17)

Figure 4c shows the Glow mapping curve generated based on different T values when C is taken to be 6. It can be clearly seen from Fig. 4c that T can restrict the maximum grey increment of the pixels in the dark area.
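Formula (17) can be sketched as follows (our own illustration; when the surplus space exceeds the summed increments, it is placed below grey 0 as a constant offset, and the offset is rounded here so that the mapping stays integer-valued, a small deviation from the formula as printed):

```python
import math

def delta_g(i, s_o, C, T):
    """Grey increment function, formula (11), with maximum increment T."""
    return T * (1 - math.sqrt((i - 1) / (s_o - 1))) ** C

def G_low(i, s_o, s_e, C, T):
    """Stretch mapping with limited maximum increment, formula (17)."""
    total = sum(delta_g(j, s_o, C, T) for j in range(1, s_o))
    if s_e <= total + s_o:
        # First branch: fall back to the adaptive mapping of formula (14).
        if i == 0:
            return 0
        part = sum(delta_g(k, s_o, C, T) for k in range(1, i + 1))
        return i + round((s_e - s_o) * part / total)
    # Second branch: the surplus space becomes a constant offset at grey 0.
    offset = s_e - s_o - round(total)
    if i == 0:
        return offset
    return offset + i + round(sum(delta_g(k, s_o, C, T) for k in range(1, i + 1)))

s_o, s_e, C, T = 64, 128, 6, 2
curve = [G_low(i, s_o, s_e, C, T) for i in range(s_o)]
```

With T = 2, no pair of adjacent dark greys is stretched by more than roughly T (up to rounding), matching the limiting behaviour shown in Fig. 4c.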

3.2.3 Value of So

The parameter So is mainly used to separate the dark area of the images and determine the range of the stretch function. Since this mapping function comprises multiple parameters that can correct one another, the value of So is flexible. When processing a specific image, the dark area to be enhanced can be accurately separated by adjusting the So value to obtain the best enhancement effect. When designing an adaptive algorithm, to reduce its complexity, So only needs to be set to a fixed value, with corrections made through the other parameters. Fig. 6 shows correction examples of 3 groups of the same or similar stretch mapping curves generated with different So values. For most low-light images, the dark area to be enhanced generally lies in the first quarter of the grey space, where the grey values are lowest. Therefore, So was set to 64 when the adaptive algorithm was designed.

Fig. 6
figure 6

Three groups of map function curve correction cases

3.3 Adaptive algorithm of new mapping function

To verify the availability and flexibility of the new grey mapping function, an adaptive enhancement algorithm for low-light images with the new mapping function as the core was designed. The basic flow of the algorithm is as follows (Fig. 7):

Fig. 7
figure 7

Flow chart of the algorithm in this paper

As the three colour components (R, G, and B) in the RGB colour space are strongly correlated, serious colour deviation easily occurs during enhancement. Therefore, the image was converted from the RGB colour space to the HSV colour space, and the V component, which is unrelated to the image colour, was extracted for separate enhancement.
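This colour-preserving strategy can be sketched with Python's standard colorsys module (a minimal per-pixel illustration; the enhance argument stands for any brightness mapping, such as the one constructed in Section 3.2):

```python
import colorsys

def enhance_v_channel(rgb_pixels, enhance):
    """Convert RGB -> HSV, enhance only the colour-independent V component,
    then convert back; hue and saturation are left untouched."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        v = enhance(v)  # any brightness mapping on [0, 1]
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
    return out

# Example: gamma-brighten one dark pixel; the colour ordering is preserved.
brightened = enhance_v_channel([(40, 20, 10)], lambda v: v ** 0.5)
```

Because only V changes, the relative ordering of the R, G and B components (and hence the hue) survives the enhancement, avoiding the colour deviation described above.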

3.3.1 Adaptive process of parameter S e

In an image with good visual quality, the grey space is shared appropriately between the light and dark areas, and the pixels are evenly distributed. Therefore, the selection of the parameter is mainly based on two principles: (1) after enhancement, the grey space of the dark area occupies at least half of the total grey space; and (2) the grey space is distributed according to the pixel amounts in the light and dark areas of the images. Figure 8 shows the adaptive process of the parameter Se.

Fig. 8
figure 8

The process of choosing parameter Se

As shown in Fig. 8a, the average pixel amount of the greyscale in the stretch area and the compression area of the images is calculated, and the corresponding formula is as follows:

$$ {M}_s=\frac{\sum \limits_{i=0}^{s_o-1}{N}_i}{S_o}; $$
(18)
$$ {M}_c=\frac{\sum \limits_{i={s}_o}^{255}{N}_i}{256-{S}_o}; $$
(19)

where Ni is the pixel amount of the images at greyscale value i; Ms is the average greyscale pixel amount in the stretch area of the images; and Mc is the average greyscale pixel amount in the compression area of the images. Next, the proportional threshold Mt is calculated based on Ms and Mc:

$$ {M}_t=\mathit{\operatorname{Max}}\left(\ {M}_s,{M}_c\right); $$
(20)

The pixel amount in the dark area is corrected mainly to expand the application range of this algorithm, so that it remains effective when dealing with locally low-light images that have fewer pixels in the dark area. Then, the stretching amount m (see Fig. 8c) of the dark area is obtained by distributing the grey space according to the pixel ratio between the stretch area and the compression area in the redistributed area; its expression is as follows:

$$ m=\frac{\alpha \times {Reb}_{low}\times \left({L}_{low}+{L}_{high}\right)}{\alpha \times {Reb}_{low}+{Reb}_{high}}-{L}_{low}; $$
(22)

where Reblow and Rebhigh are the total pixel amounts in the dark and light areas of the redistribution area and Llow and Lhigh are the grey space lengths of the redistribution area. Finally, the calculation formula of Se is determined based on m, So and the above principles:

$$ {S}_e=\operatorname{Max}\left({S}_o+m,128\right); $$
(23)
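The selection of Se can be sketched as follows (our own Python illustration; the redistribution step that produces the stretching amount m depends on quantities defined around formula (22), so the simple proportional stand-in used below when m is not supplied is an assumption of this sketch, not the paper's exact rule):

```python
def adaptive_se(hist, s_o=64, m=None):
    """Adaptive choice of S_e (formulas (18)-(20) and (23))."""
    # Average greyscale pixel amounts in the stretch and compression areas.
    m_s = sum(hist[:s_o]) / s_o              # formula (18)
    m_c = sum(hist[s_o:]) / (256 - s_o)      # formula (19)
    m_t = max(m_s, m_c)                      # formula (20); feeds the
    # dark-area pixel correction step (not reproduced in this sketch).
    if m is None:
        # Stand-in for the redistribution of formula (22): share the grey
        # space in proportion to the pixel totals of the two areas.
        low, high = sum(hist[:s_o]), sum(hist[s_o:])
        m = round(low / (low + high) * 256) - s_o if low + high else 0
    return max(s_o + m, 128)                 # formula (23)

# A mostly dark image gets a large stretch area; a bright one gets the minimum.
se_dark = adaptive_se([100] * 64 + [1] * 192)
se_bright = adaptive_se([1] * 64 + [100] * 192)
```

The max(…, 128) clamp in formula (23) enforces the first selection principle: the stretched dark area always receives at least half of the grey space.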

3.3.2 Adaptive process of parameter C

The characteristic coefficient C determines the grey distribution characteristics of the pixels in the dark area of the images. When the extra grey space under the compression mapping function is larger, a smaller C value should be selected. This can increase the contrast at more greyscales in the dark area to obtain better visual quality. In contrast, when the extra grey space under the compression mapping function is small, to ensure the sharpness of the very dark areas of the image, a larger C value should be selected. Therefore, the C value decreases with increasing Se. After a large number of experiments, the basic expression between C and Se is determined as follows:

$$ C=2+4\times {\left(\frac{255-{S}_e}{255-128}\right)}^{1.5}; $$
(24)

When Se reaches its limit value of 255, the characteristic coefficient C alone determines the overall brightness of the images. Therefore, it is necessary to compensate the C value of very dark images, for which Se reaches 255, according to their grey distribution characteristics, to avoid the brightness being too low after enhancement. The calculation formula of the compensation coefficient P is as follows:

$$ P=\left\{\begin{array}{c}1,\kern5em \frac{N_{0\sim round\left({S}_o/2\right)}}{N_{0\sim {S}_o}}\ge 0.7;\kern0.5em \\ {}\frac{N_{0\sim round\left({S}_o/2\right)}}{0.7\times {N}_{0\sim {S}_o}},\kern6.5em else;\kern0.5em \end{array}\right. $$
(25)

where N0~i is the total number of pixels of the images at grey values 0~i. The final calculation formula of C can then be obtained:

$$ C=2+4\times {\left(\frac{255-{S}_e}{255-128}\right)}^{1.5}+P; $$
(26)
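Formulas (24)-(26) combine into a few lines (a minimal sketch; the histogram convention N0~i is implemented here as an inclusive prefix sum, which is an assumption of this sketch):

```python
def adaptive_c(s_e, hist, s_o=64):
    """Adaptive characteristic coefficient C, formulas (24)-(26)."""
    c = 2 + 4 * ((255 - s_e) / (255 - 128)) ** 1.5   # formula (24)
    # Compensation coefficient P, formula (25): N_{0~i} is taken as the
    # inclusive pixel count over grey values 0..i.
    half = sum(hist[: round(s_o / 2) + 1])
    full = sum(hist[: s_o + 1])
    ratio = half / full if full else 1.0
    p = 1.0 if ratio >= 0.7 else ratio / 0.7
    return c + p                                      # formula (26)
```

As Se grows, the first term shrinks towards 2, so for very dark images (Se = 255) the compensation P alone adjusts C according to the grey distribution of the dark area.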

3.3.3 Design of noise reduction module

Although the brightness of low-light images is improved by the brightness enhancement algorithm, image noise hidden in the dark area is exposed. Therefore, it is crucial to reduce the noise of the enhanced images. To improve the operating speed of the noise reduction module, the image was converted from the RGB colour space to the YCrCb colour space, and the Y component, which contains the most image information, was extracted for noise reduction. Figure 9 shows the noise reduction process of this algorithm. First, the image is divided into blocks of 8 × 8 pixels, and matrix units smaller than 8 × 8 pixels are padded with 0. Then, each 8 × 8 block is converted to the frequency domain through the discrete cosine transform, and the high-frequency components, in which image noise concentrates, are removed by filtering. Next, through the inverse discrete cosine transform, the image blocks are converted from the frequency domain back to the spatial domain after noise reduction. Finally, noise reduction of the entire image is completed by splicing all the image blocks together and converting the result back to the RGB space.

Fig. 9
figure 9

Noise reduction process

As the complexity of the direct 2D discrete cosine transform is \( O\left({N}^2\right) \) and its running speed is slow, it was replaced with separable 1D discrete cosine transforms. In addition, an optimized calculation method based on the fast Fourier transform (FFT) was used in this process. The optimized noise reduction algorithm was significantly faster, and the time complexity was reduced to \( O\left(N\log N\right) \).
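The block-wise transform, filter and invert cycle can be sketched as follows (a plain-Python illustration of one 8 × 8 block with an orthonormal DCT-II; the simple zig-zag threshold keep and the direct, non-FFT transforms are simplifications of this sketch, not the optimized implementation described above):

```python
import math

N = 8  # block size used by the noise reduction module

def dct_1d(v):
    """Orthonormal 1D DCT-II of a length-N vector."""
    out = []
    for k in range(N):
        a = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(a * sum(v[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                           for n in range(N)))
    return out

def idct_1d(c):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    return [sum((math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)) * c[k]
                * math.cos(math.pi * (2 * n + 1) * k / (2 * N)) for k in range(N))
            for n in range(N)]

def transpose(m):
    return [list(col) for col in zip(*m)]

def dct_2d(block):
    """Separable 2D DCT: 1D transforms along rows, then along columns."""
    return transpose([dct_1d(r) for r in transpose([dct_1d(r) for r in block])])

def idct_2d(coeff):
    return transpose([idct_1d(r) for r in transpose([idct_1d(r) for r in coeff])])

def denoise_block(block, keep=4):
    """Zero the high-frequency coefficients (u + v >= keep) and invert."""
    coeff = dct_2d(block)
    filt = [[coeff[u][v] if u + v < keep else 0.0 for v in range(N)]
            for u in range(N)]
    return idct_2d(filt)

# A flat block survives untouched; a +/-1 checkerboard (pure high-frequency
# noise) superimposed on it is almost completely removed.
flat = denoise_block([[10.0] * N for _ in range(N)])
noisy = denoise_block([[10 + (-1) ** (i + j) for j in range(N)] for i in range(N)])
```

Because the transform is separable, the 2D case reduces to two passes of the 1D transform, which is what enables the FFT-based speed-up mentioned above.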

4 Experiments

All comparative experiments were performed in MATLAB R2020a on a PC running Windows 10 with an Intel(R) Xeon(R) E-2176M CPU @ 2.70 GHz and 32 GB of RAM. The selected comparison algorithms were FFM [6], JED [23], JIEP [3], LECARM [24], RRM [15] and SDD [9]. The test images were all natural low-light images taken onsite without any postprocessing; the shooting tool was a Sony ILCE-6400 camera, and the storage format was an 8-bit 800 × 1200 resolution BMP file. Figure 10 shows the image dataset used in the test experiments.

Fig. 10
figure 10

Test images for experiments: Gate, Bicycle, Flower, Fresco, Car1, Car2, Bulldozer, Crane, Academy, Bench

4.1 Subjective evaluation

Figures 11, 12, 13, 14, 15, 16, 17, 18, 19 and 20 are visual comparison charts of the images enhanced by the 7 methods. As can be clearly seen from these figures, the images processed by this algorithm have good perceptual quality in both brightness and colour rendering.

Fig. 11
figure 11

Experimental results in Gate. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 12
figure 12

Experimental results in Bicycle. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 13
figure 13

Experimental results in Flower. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 14
figure 14

Experimental results in Fresco. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 15
figure 15

Experimental results in Car1. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 16
figure 16

Experimental results in Car2. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 17
figure 17

Experimental results in Bulldozer. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 18
figure 18

Experimental results in Crane. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 19
figure 19

Experimental results in Academy. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

Fig. 20
figure 20

Experimental results in Bench. a Input image. b FFM. c JED. d JIEP. e LECARM. f RRM. g SDD. h The proposed method

4.2 Objective evaluation

For more quantitative measurement, six nonreference image quality indexes were selected to comprehensively evaluate the quality of the enhanced image, namely, the Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) [19], the Perception-based Image Quality Evaluator (PIQE) [31], Information entropy (IE) [26], Contrast (C) [33], Average gradient (AG) [14] and the Naturalness Image Quality Evaluator (NIQE) [20].

  1. BRISQUE is a learning-based no-reference quality model trained on a large number of distorted natural scene images. The smaller the value, the lower the image distortion and the better the image quality.

  2. PIQE is a block-based no-reference quality model that is sensitive to image blurring. The greater the value, the blurrier the image and the higher the image distortion.

  3. The IE index describes the average information content of the image source and reflects the aggregation characteristics of the image grey-level distribution. The greater the value, the more information the image carries and the better the image quality.

  4. The index C represents the grey-level contrast of an image. The greater the value, the clearer the image and the more vivid the colours.

  5. The index AG describes the ability of an image to express contrast details. The greater the value, the better the texture details of the image are rendered.

  6. NIQE is a no-reference quality model based on natural scene statistics that is sensitive to image noise. The smaller the value, the less noise in the image and the better the image quality.
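Of these six indexes, IE, C and AG have closed-form definitions and can be computed directly from the grey-level image. The sketch below shows one common variant of each; the exact definitions of C and AG differ slightly across references [14, 33], so these formulas are illustrative rather than the precise ones used in the experiments.

```python
import numpy as np

def information_entropy(img):
    """IE: Shannon entropy of the grey-level histogram (in bits)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def contrast(img):
    """C: mean squared grey difference between 4-connected neighbours."""
    f = img.astype(np.float64)
    dh = (f[:, 1:] - f[:, :-1]) ** 2  # horizontal neighbour pairs
    dv = (f[1:, :] - f[:-1, :]) ** 2  # vertical neighbour pairs
    return (dh.sum() + dv.sum()) / (dh.size + dv.size)

def average_gradient(img):
    """AG: mean magnitude of the local grey-level gradient."""
    f = img.astype(np.float64)
    gx = f[:-1, 1:] - f[:-1, :-1]     # forward difference in x
    gy = f[1:, :-1] - f[:-1, :-1]     # forward difference in y
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

# Sanity check: a flat image carries no information or contrast,
# while a checkerboard maximizes both local measures.
flat = np.full((8, 8), 128, dtype=np.uint8)
checker = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.uint8)
print(information_entropy(checker))   # 1.0  (two equally likely grey levels)
print(contrast(flat))                 # 0.0  (no local variation)
print(average_gradient(checker))      # 255.0
```

For all three measures, higher values correspond to the "better" direction described above, which is why they are reported directly rather than inverted.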

Table 1 shows the final results of the contrast experiments, with the best results highlighted in bold. The experimental data show that the proposed algorithm achieved the best scores in most index tests, and its results in the "Average" column were the best for all six indexes. The algorithm performed outstandingly on the IE, C and AG indexes, ranking second only on index C for the image Crane, where it trailed the best score by a margin of 0.09. For the BRISQUE, PIQE and NIQE indexes, because the algorithm greatly improves the brightness of the images, the noise hidden in the dark areas is fully exposed; the ranking of the algorithm therefore fluctuated on these noise-sensitive evaluation indexes. However, compared with the LECARM algorithm, which was second only to the proposed algorithm in brightness improvement, the proposed algorithm not only came close to the best overall scores on BRISQUE, PIQE and NIQE but also achieved the best score on more than half of the test images. Although the FFM and JED algorithms performed better on the PIQE and NIQE indexes, these two algorithms improved the brightness of the dark areas only to a small extent. Overall, the quantitative data show that the proposed algorithm was significantly better than the comparison algorithms.

Table 1 Objective evaluation results in terms of BRISQUE, PIQE, IE, C, AG and NIQE

4.3 Time complexity analysis

For real-time image processing equipment, the algorithm should not only have a good processing effect but also an adequate running speed [4]. Therefore, the time complexity of the algorithms needs to be analysed. To accurately measure the running time of each comparison algorithm, a typical low-illuminance image was rescaled in height while keeping the aspect ratio unchanged, yielding 10 test images of gradually increasing size (see Table 2). Each algorithm was run 16 times on every image, and the 16 runtimes were recorded. To reduce experimental error, the 3 longest and the 3 shortest of the 16 measurements were discarded, and the average of the remaining 10 was taken as the time required for the algorithm to process an image of that size. Table 2 shows the average runtime of the proposed method for images of different sizes, and Fig. 21a compares the runtimes of all algorithms.
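The trimmed-average timing protocol above (16 runs, drop the 3 fastest and 3 slowest, average the remaining 10) can be sketched as follows; `enhance` is a hypothetical stand-in for whichever algorithm is being timed:

```python
import time

def timed_average(func, image, runs=16, trim=3):
    """Time `func(image)` `runs` times, discard the `trim` shortest and
    `trim` longest measurements, and average the rest."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()   # monotonic high-resolution clock
        func(image)
        times.append(time.perf_counter() - start)
    times.sort()
    kept = times[trim:runs - trim]    # the 10 middle measurements
    return sum(kept) / len(kept)

# Usage with a toy stand-in for the enhancement algorithm:
avg = timed_average(lambda img: sorted(img), list(range(10000)))
print(f"average runtime: {avg:.6f} s")
```

Trimming both tails before averaging makes the estimate robust against one-off delays such as cache warm-up or background system activity.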

Table 2 Runtime of the proposed method
Fig. 21
figure 21

Algorithm time complexity comparison chart. a Results of computational complexity with different methods. b The running time of different ways of using this algorithm

It can be seen from Fig. 21a that the proposed algorithm ran fast, second only to the LECARM algorithm among all comparison algorithms. In fact, the noise reduction module of this algorithm can be further optimized, but the optimization requires tool library functions containing C++ code. To ensure the fairness of the contrast experiments, the runtime of this algorithm when calling the optimized noise reduction module is given in Fig. 21b only as a reference. Finally, it should be noted that the time complexity of the adaptive algorithm was O(N log N), while that of the mapping function alone was O(N).

4.4 Comprehensive evaluation

In this section, the advantages and disadvantages of all comparison methods are summarized in Table 3, with data from Sections 4.1–4.3 used as supporting evidence for these points.

Table 3 Summary of advantages and disadvantages of all comparison methods

4.5 Example of local low-light image enhancement

The proposed algorithm was applicable not only to global low-light images but also to local low-light images with higher overall brightness. Figure 22 shows an example of local low-light image enhancement by this algorithm. As these examples show, the algorithm still performed well when enhancing locally low-light regions.

Fig. 22

The result of this method on local low-illuminance image processing. a The original image. b The processed image

5 Future work

Although our adaptive algorithm achieved good results in the experiments of Section 4, there is still much room for improvement. The first issue is the value of So. As Section 3.1.3 shows, So is set to a fixed value in the adaptive algorithm; in fact, this is a compromise made to reduce the complexity of the algorithm. The parameter So determines the range of action of the stretched mapping function: the closer its value is to the upper limit of the grey value of the true dark area, the better the enhancement effect of the new mapping function. Bringing the value of So closer to this upper limit will therefore be one focus of our future work. The second issue is the optimization of the parameters C and Se, whose values directly determine the final enhancement effect of the image. Although our adaptive algorithm performed well when enhancing with formula (26), the parameters C and Se can be further optimized. Many mature optimization algorithms exist for parameter optimization, such as the ant colony algorithm, simulated annealing and particle swarm optimization. Using multiobjective optimization algorithms to obtain the best values of C and Se will be another focus of our future work.
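As a concrete illustration of the particle swarm route mentioned above, the sketch below minimizes an objective over a two-parameter box that stands in for (C, Se). The bounds, the toy quadratic objective, and the swarm hyperparameters are all assumptions for illustration; in practice the objective would re-run the enhancement with the candidate parameters and evaluate a no-reference quality metric.

```python
import random

random.seed(0)  # fixed seed for reproducibility

def pso(objective, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over the box `bounds` with a basic particle swarm."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                       # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]      # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)  # clamp to box
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Hypothetical objective: a toy quadratic whose minimum sits at (2.0, 200.0);
# the real objective would be a negated quality score of the enhanced image.
best, score = pso(lambda p: (p[0] - 2.0) ** 2 + (p[1] - 200.0) ** 2,
                  bounds=[(0.1, 5.0), (100.0, 255.0)])
print(best, score)
```

The per-parameter bounds keep the search inside physically meaningful values, which is also how grey-level parameters such as Se would be constrained in practice.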

6 Conclusions

This paper proposed a new multiparameter grey mapping method. The method not only effectively avoids over-enhancement of the light areas of an image but also adjusts the dark areas without changing the pixel grey values of the light areas. In addition, an adaptive enhancement algorithm built around this mapping method was designed to verify the flexibility and superiority of the new mapping method. Experimental data showed that the method performed well in visual quality, quantitative testing and running speed.