
1 Introduction

Image defogging technology uses algorithmic means to reduce or even eliminate the fog in an image, and it has long been an important research topic in the field of image processing. Current mainstream image defogging algorithms fall into three categories [1]: those based on image enhancement, those based on image restoration, and those based on deep learning.

The prior-based image defogging method builds on the atmospheric scattering model and uses prior assumptions to solve for the model parameters, chiefly the atmospheric light value and the transmittance, from which the haze-free image is then derived. The dark channel prior observes that in outdoor fog-free images, outside the sky region, at least one of the R, G, and B channels almost always contains pixels with very low values; this minimum channel is called the dark channel. A coarse transmittance is estimated from the dark channel prior theory, the brightest 0.1% of pixels are selected in the dark channel image, and the maximum value at the corresponding positions in the original foggy image is taken as the atmospheric light value, from which the fog-free image is recovered. However, the dark channel prior algorithm produces severe halo effects at abrupt depth-of-field changes, and its transmittance estimate for the sky region is inaccurate.
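For concreteness, a minimal NumPy/OpenCV sketch of the two operations just described, the dark channel and the 0.1% atmospheric light estimate, is given below; the 15×15 window is an assumed typical size, not a value fixed by this paper.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over R, G, B, then a min filter over a patch x patch window."""
    min_rgb = img.min(axis=2)                       # pixel-wise channel minimum
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)               # erosion == local minimum filter

def atmospheric_light(img, dark):
    """Take the brightest 0.1% of dark-channel pixels; read A off the hazy image there."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.ravel())[-n:]             # indices of the top 0.1%
    candidates = img.reshape(-1, 3)[idx]
    # the brightest candidate in the original image serves as A
    return candidates[candidates.sum(axis=1).argmax()]
```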

Aiming at these deficiencies, this paper improves the dark channel prior algorithm. First, the atmospheric light estimate is improved: the Otsu algorithm is used to segment the image, white noise in the segmentation result is removed with morphological operations, and the atmospheric light value is then computed within the sky region. Second, the transmittance estimate is improved: a linear model is established from the relationship between the foggy and fog-free images, an adaptive parameter is introduced to constrain the rate of the linear transformation, and the initial transmittance is obtained by a logarithmic transformation of this linear mapping. The transmittance refined by guided filtering is then compensated with a depth-of-field formula, and the optimized transmittance is obtained through a weighted L1-norm context regularization algorithm. Finally, contrast enhancement is applied to the restored image with a histogram equalization algorithm.

2 Atmospheric Scattering Model

The atmospheric scattering model used in this paper consists of three parts: the incident light attenuation model, the ambient light imaging model, and the foggy image degradation model.

2.1 Incident Light Attenuation Model

The incident light attenuation model describes the attenuation process of light from the object to the imaging device.

Consider a thin slab of atmosphere with thickness \(dx\) whose material composition matches that of the surrounding atmosphere. When a beam of parallel light with intensity \(E\left( {x,\lambda } \right)\) passes through this slab, the energy change \(dE\left( {x,\lambda } \right)\) of the beam is expressed as:

$$ \frac{dE(x,\lambda )}{{E(x,\lambda )}} = - \beta (\lambda )dx $$
(1)

In formula (1), \(\lambda\) represents the wavelength of light, and \(\beta\), a function of \(\lambda\), is the atmospheric scattering coefficient. Assuming that the particles in the atmosphere are evenly distributed, integrating both sides of formula (1) from \(x = 0\) to \(x = d\) and simplifying gives:

$$ E_{d} (d,\lambda ) = E_{0} (\lambda )e^{ - \beta (\lambda )d} $$
(2)

Equation (2) is the expression of the incident light attenuation model, where \(E_{0} (\lambda )\) represents the light intensity of the object at \(x = 0\). As the depth of field \(d\) gradually increases, the energy \(E_{d} (d,\lambda )\) of the incident light gradually decays.
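For completeness, the step between (1) and (2) is a separation of variables, integrating both sides and exponentiating:

$$ \int_{0}^{d} {\frac{{dE(x,\lambda )}}{{E(x,\lambda )}}} = - \int_{0}^{d} {\beta (\lambda )\,dx} \;\Rightarrow\; \ln \frac{{E_{d} (d,\lambda )}}{{E_{0} (\lambda )}} = - \beta (\lambda )d \;\Rightarrow\; E_{d} (d,\lambda ) = E_{0} (\lambda )e^{ - \beta (\lambda )d} $$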

2.2 Ambient Light Imaging Model

The ambient light imaging model refers to the process in which ambient light, such as sunlight and light reflected from the ground, participates in object imaging. This ambient light is scattered into the line of sight and attenuated along the path; the added airlight raises the R, G, and B channel values, which ultimately causes blurring and color distortion in the acquired image.

Fig. 1. Schematic diagram of the ambient light imaging model

As shown in Fig. 1, assume the ambient light in the imaging scene is constant. Let the angle subtended at the camera by the edge of the object relative to the horizontal direction be \(d\theta\), the distance between the object and the camera be \(d\), and the volume element at distance \(x\) from the camera be \(dv\). Treating the element \(dv\) as an ambient light point source, the light intensity element \(dI\) contributed within a unit volume can be expressed as:

$$ dI(x,\lambda ) = dv \cdot \beta (\lambda ) = \beta (\lambda ) \cdot d\theta \cdot x^{2} \cdot dx $$
(3)

According to the law of incident light attenuation, the ambient light entering the propagation path is attenuated by suspended particles in the atmosphere, and the light intensity \(dE\) reaching the camera is:

$$ dE(x,\lambda ) = \frac{{dI(x,\lambda )e^{ - \beta (\lambda )x} }}{{x^{2} }} $$
(4)

Taking the definite integral of formula (4) from \(x = 0\) to \(x = d\), and accounting for the incident attenuation, the energy relationship of the ambient light imaging model under ambient light intensity \(E_{q}\) is obtained:

$$ E_{q} (d,\lambda ) = E_{\infty } (\lambda )(1 - e^{ - \beta (\lambda )d} ) $$
(5)
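Substituting (3) into (4) and integrating makes the origin of (5) explicit; under this model the \(d \to \infty\) limit is read as the horizon radiance \(E_{\infty } (\lambda )\) (a sketch of the standard argument, not a rigorous derivation):

$$ E_{q} (d,\lambda ) = \int_{0}^{d} {\beta (\lambda )\,d\theta \,e^{ - \beta (\lambda )x} \,dx} = d\theta \left( {1 - e^{ - \beta (\lambda )d} } \right),\qquad E_{\infty } (\lambda ) = \mathop {\lim }\limits_{d \to \infty } E_{q} (d,\lambda ) $$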

2.3 Image Degradation Model in Foggy Weather

Under severe weather such as smog, and subject to system-related parameters, image degradation can be understood as each pixel of a clear image being attenuated to a varying degree. The atmospheric scattering model can therefore be approximated as the superposition of the incident light attenuation model and the ambient light imaging model, with the mathematical expression:

$$ E(d,\lambda ) = E_{0} (\lambda )e^{ - \beta (\lambda )d} + E_{\infty } (\lambda )(1 - e^{ - \beta (\lambda )d} ) $$
(6)

For convenience of calculation, the gray value of a pixel is generally taken to be approximately proportional to the radiation intensity received at that point, so that:

$$ E(d,\lambda ) = I(x),E_{0} (\lambda ) = J(x),E_{\infty } (\lambda ) = A $$
(7)
$$ t(x) = e^{ - \beta d(x)} $$
(8)

where \(x = (i,j)\) is a two-dimensional vector, which represents the coordinate position of each point in the image, \(I\) represents the foggy image, \(J\) represents the image after defogging, \(t(x)(0 \le t(x) \le 1)\) is the transmittance distribution function, and \(A\) is the atmospheric light value. \(t(x)J(x)\) is the attenuation term and \((1 - t(x))A\) is the ambient light term.

Substituting (7) and (8) into formula (6) and rearranging, the radiation intensity obtained after dehazing can be expressed as:

$$ J(x) = \frac{I(x) - A}{{t(x)}} + A $$
(9)

3 Improved Dehazing Algorithm Based on Dark Channel Prior

3.1 Dark Channel Prior Transmittance and Atmospheric Light Value Estimation

Assuming that the transmittance \(t(x)\) is constant within each minimum-filtering window, denoted \(\widetilde{t}(x)\), and that the global atmospheric light \(A\) is a known quantity, we can obtain [2]:

$$ \mathop {\min }\limits_{y \in \Omega (x)} (\mathop {\min }\limits_{C} (\frac{{I^{C} (y)}}{{A^{C} }})) = \widetilde{t}(x)\mathop {\min }\limits_{y \in \Omega (x)} (\mathop {\min }\limits_{C} (\frac{{J^{C} (y)}}{{A^{C} }})) + 1 - \widetilde{t}(x) $$
(10)

Even in fog-free weather the air contains some particles, i.e., a slight haze, and retaining a certain amount of fog preserves the perception of depth. A retention factor \(\rho\) is therefore introduced; in general, \(\rho = 0.95\) can be taken. Formula (10) can then be rewritten as:

$$ \widetilde{t}(x) = 1 - \rho \cdot \mathop {\min }\limits_{y \in \Omega (x)} (\mathop {\min }\limits_{C} (\frac{{I^{C} (y)}}{{A^{C} }})) $$
(11)

For the selection of the atmospheric light value \(A\), in view of the ambient light imaging model, the region with the densest fog is generally chosen. In practice, \(A\) can also be obtained from the dark channel map: the brightest 0.1% of its pixels are selected, the corresponding point with the highest brightness is found in the original foggy image, and the pixel value at that point is taken as \(A\). Combined with formula (9), the image restoration formula is:

$$ J(x) = \frac{I(x) - A}{{\max (\widetilde{t}(x),t_{d} )}} + A $$
(12)

where \(t_{d} = 0.1\) is a lower threshold on the transmittance; it suppresses the over-brightening and severe color distortion that occur in the restored image when the transmittance is too small.
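A minimal sketch of (11) and (12), continuing the earlier sketch (same imports, with dark_channel as defined there); \(\rho = 0.95\) and \(t_{d} = 0.1\) follow the values in the text.

```python
def dehaze_dcp(img, A, rho=0.95, t_d=0.1, patch=15):
    """Classic dark-channel recovery: Eq. (11) for t, Eq. (12) for J."""
    img = img.astype(np.float64)
    norm = img / np.maximum(A.astype(np.float64), 1e-6)   # I^C(y) / A^C
    t = 1.0 - rho * dark_channel(norm, patch)             # Eq. (11)
    t = np.maximum(t, t_d)                                # lower bound from Eq. (12)
    J = (img - A) / t[..., None] + A                      # Eq. (12)
    return np.clip(J, 0, 255).astype(np.uint8), t
```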

3.2 Optimization of Atmospheric Light Value Selection Method

The atmospheric light value module is based on the Otsu algorithm [3]; its analysis process is shown below (Fig. 2):

Fig. 2. Atmospheric light value calculation module flow chart

Sky Area Judgment

If a sky region exists, the atmospheric light value is calculated within it; otherwise, the method of He et al. is used. Let \(n_{sky}\) be the number of sky-region pixels, \(MN\) the total number of pixels in the foggy image, and \(p_{sky}\) the ratio of the sky region to the entire image; then:

$$ p_{sky} = \frac{{n_{sky} }}{MN} $$
(13)

When \(p_{sky} \ge 6\%\), the image is judged to contain a sky region; when \(p_{sky} < 6\%\), it is judged to contain none.
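A sketch of this judgment, assuming that the brighter Otsu class is taken as candidate sky and that a small morphological opening (an assumed 5×5 kernel) suppresses the white noise mentioned in Sect. 1; the 6% threshold follows the text.

```python
def detect_sky(img, ratio_thresh=0.06):
    """Otsu segmentation + morphological opening; returns (has_sky, sky_mask)."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove white noise
    p_sky = (mask > 0).sum() / mask.size                    # Eq. (13)
    return p_sky >= ratio_thresh, mask
```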

Atmospheric Light Value Calculation

Once a sky region has been detected, the dark channel values corresponding to that region are collected and their average is taken as the atmospheric light value. The calculation formula is as follows:

$$ A_{1} = \frac{{\sum\limits_{(i,j) \in sky} {I_{dark} (i,j)} }}{{\left| {n_{sky} } \right|}} $$
(14)

where \(I_{dark} (i,j)\) is the dark channel information map of the foggy image.
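Eq. (14) then reduces to a masked mean over the dark channel; a short sketch continuing from detect_sky and dark_channel above:

```python
def atmospheric_light_sky(img, sky_mask, patch=15):
    """Average the dark channel over the sky region, Eq. (14)."""
    dark = dark_channel(img.astype(np.float64), patch)
    return dark[sky_mask > 0].mean()
```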

Extensive experiments show that this method effectively corrects atmospheric light values that are too large or too small: the restored picture neither appears washed out nor oversaturated, and looks more natural.

When the sky area is not detected, the method used to calculate the atmospheric light value is as follows:

(1) In the dark channel map, take the top 0.1% of pixels by brightness.

(2) Locate the pixel positions found in (1) in the original foggy image, and take the value of the corresponding point with the highest brightness as the \(A\) value.

3.3 Transmittance Method Improvements

First, by analyzing foggy and fog-free images, a linear model is established and logarithmically transformed to obtain the initial transmittance. The initial transmittance is refined with a guided filtering algorithm. A depth-of-field formula is then introduced to compensate the refined transmittance, and the result is optimized with the weighted L1-norm regularization algorithm. Figure 3 shows the transmittance optimization flow:

Fig. 3. Transmittance optimization flow chart

Transmittance Estimation Based on Logarithmic Adaptation

Transmittance is another key parameter in image restoration, and the accuracy of its estimate directly affects the quality of the defogged image. The transmittance function is:

$$ t(i,j) = \frac{A - I(i,j)}{{A - J(i,j)}} $$
(15)

where \(I(i,j)\) and \(J(i,j)\) represent the gray values of the foggy image and the haze-free image at coordinate \((i,j)\), respectively.

According to the dark channel prior theory, after the minimum filtering process on the transmittance function, we can get:

$$ t(i,j) = \frac{{\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} A^{C} - \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j)}}{{\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} A^{C} - \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} J^{C} (i,j)}} $$
(16)

Assuming the atmospheric light \(A^{C}\) is a known constant, and abbreviating \(\min_{C} I^{C} (i,j)\) and \(\min_{C} J^{C} (i,j)\) as \(I^{dark} (i,j)\) and \(J^{dark} (i,j)\), we obtain:

$$ t(i,j) = \frac{{A^{C} - I^{dark} (i,j)}}{{A^{C} - J^{dark} (i,j)}} $$
(17)

In the existing single image defogging algorithm based on linear mapping [4], the impact of ambient light on imaging is closely related to the depth of field distance, and the farther the depth of field distance is, the greater the impact will be.

Therefore, approximating this relationship piecewise with a quadratic function, the above formula can be modified as:

$$ \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} J^{C} (i,j) = \frac{{\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j) - RGB_{\min } }}{{RGB_{\max } - RGB_{\min } }} \cdot I^{dark} (i,j) $$
(18)

where \(RGB_{\max }\) and \(RGB_{\min }\) are the maximum and minimum values of \(\min_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j)\), respectively.

To enlarge the luminance differences between regions of clearly different fog density in the dark channel, a logarithmic transformation is applied to formula (18), giving:

$$ \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} J^{C} (i,j) = \log (\delta + \frac{{\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j) - RGB_{\min } }}{{RGB_{\max } - RGB_{\min } }} \cdot I^{dark} (i,j)) $$
(19)

The logarithmic transmittance function can be obtained:

$$ t_{2} (i,j) = \frac{{\left| {A^{C} - \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j)} \right|}}{{\left| {A^{C} - \log (\frac{{\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j) - RGB_{\min } }}{{RGB_{\max } - RGB_{\min } }} \cdot \mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j) + \delta )} \right|}} $$
(20)
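A direct, hedged transcription of (18)–(20): the offset \(\delta\) is not given a value in the text, so it is exposed as a tunable parameter, and inputs are normalized to [0, 1] for numerical convenience.

```python
def log_transmission(img, A, delta=1.0):
    """Logarithmic-adaptive initial transmittance, Eq. (20), on [0,1]-normalized
    values; delta is not specified in the text and is treated as tunable."""
    I = img.astype(np.float64) / 255.0
    a = float(A) / 255.0
    min_c = I.min(axis=2)                                  # min over R, G, B
    lo, hi = min_c.min(), min_c.max()                      # RGB_min, RGB_max
    mapped = (min_c - lo) / max(hi - lo, 1e-6) * min_c     # linear mapping, Eq. (18)
    j_dark = np.log(mapped + delta)                        # Eq. (19)
    t2 = np.abs(a - min_c) / np.maximum(np.abs(a - j_dark), 1e-6)
    return np.clip(t2, 0.0, 1.0)
```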

Transmittance Compensation Based on Color Attenuation Prior Algorithm

For most outdoor foggy images, within a given region the fog density is positively correlated with the difference between brightness and saturation. Assuming this relationship is linear, the specific mathematical expression is as follows:

$$ d(x) \propto c(x) \propto v(x) - s(x) $$
(21)

where \(x\) is the pixel point, \(d(x)\) is the depth of field, \(c(x)\) is the concentration of the fog, \(v(x) - s(x)\) is the difference between the brightness and saturation of the image, and \(\propto\) is a positive correlation sign.

The above law is the color attenuation prior, and its corresponding depth-of-field model is:

$$ d(x) = \theta_{0} + \theta_{1} v(x) + \theta_{2} s(x) + \varepsilon (x) $$
(22)

where \(v(x)\) is the brightness of the pixel, \(s(x)\) is its saturation, and \(\theta_{0}\), \(\theta_{1}\), \(\theta_{2}\) are linear coefficients. The function \(\varepsilon (x)\) is a random error term, modeled by a Gaussian density with mean 0 and variance \(\sigma^{2}\) \(\left( {\varepsilon \left( x \right) \sim N\left( {0,\sigma^{2} } \right)} \right)\).

According to the CAP principle, the depth of field formula can be introduced to compensate the transmittance. Its mathematical expression is:

$$ t_{b} (x,y) = \max (t_{3} (x),q_{i} ) $$
(23)
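A sketch of the CAP compensation (22)–(23). The linear coefficients are those reported by Zhu et al. for the color attenuation prior (θ0 ≈ 0.1218, θ1 ≈ 0.9597, θ2 ≈ −0.7802); the scattering coefficient beta and the lower bound q are assumed values here, since the text leaves \(t_{3}\) and \(q_{i}\) unspecified.

```python
def cap_transmission(img, beta=1.0, q=0.1,
                     theta0=0.121779, theta1=0.959710, theta2=-0.780245):
    """Depth from the color attenuation prior, Eq. (22), then t = exp(-beta*d)
    clamped from below as in Eq. (23). Coefficients follow Zhu et al."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    v, s = hsv[..., 2], hsv[..., 1]
    d = theta0 + theta1 * v + theta2 * s        # Eq. (22), noise term dropped
    t3 = np.exp(-beta * np.maximum(d, 0))
    return np.maximum(t3, q)                    # Eq. (23)
```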

Context Regularization Based on Weighted L1 Norm

Pixels within a local image patch generally have similar depth values, so a piecewise-constant transmittance can be derived from boundary constraints. This, however, produces blocky patches in areas where the depth of field changes sharply, leading to noticeable halo artifacts in the defogged results. The problem is addressed by introducing a weighting function \(W(x,y)\) into the constraint, namely:

$$ W(x,y)(t_{b} (y) - t_{b} (x)) \approx 0 $$
(24)

where x and y are the pixel points of two adjacent positions in the transmittance map.

Feature statistics over a large number of hazy images show that depth edges in an image usually coincide with edges in color or gray level. The difference between adjacent pixels can therefore be used to construct the weighting function. The first form is based on the squared difference between the color vectors of two adjacent pixels:

$$ W(x,y) = e^{{ - \frac{{\left\| {I(x) - I(y)} \right\|^{2} }}{{2\sigma^{2} }}}} $$
(25)

where \(\sigma\) is the standard deviation and \(I(x) - I(y)\) is the difference between the values of two adjacent pixels in the local window.

Another formula for brightness difference based on adjacent pixels is as follows:

$$ W(x,y) = \left( {\left| {\ell (x) - \ell (y)} \right|^{\phi } + \zeta } \right)^{ - 1} $$
(26)

where \(\ell (x)\) is the logarithmic bright channel of the hazy image, the exponent \(\phi\) controls the sensitivity to the difference between two adjacent pixels, and \(\zeta\) is a small constant (usually 0.0001) that prevents division by zero.
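Both weighting functions are a few lines each; the sketch below takes y as the horizontal neighbor of x (so the output has one fewer column), and sigma and phi are assumed values.

```python
def weight_color(img, sigma=0.5):
    """Eq. (25): Gaussian of the squared color difference of horizontal neighbors."""
    I = img.astype(np.float64) / 255.0
    diff = I[:, 1:] - I[:, :-1]                 # neighbor difference, shape (H, W-1, 3)
    return np.exp(-np.sum(diff ** 2, axis=2) / (2 * sigma ** 2))

def weight_luminance(img, phi=2.0, zeta=1e-4):
    """Eq. (26): inverse of the log-bright-channel difference of horizontal neighbors."""
    lum = np.log(img.astype(np.float64).max(axis=2) + 1.0)
    diff = np.abs(lum[:, 1:] - lum[:, :-1])
    return 1.0 / (diff ** phi + zeta)
```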

The above formulation is optimized with a variable splitting method: several auxiliary variables are introduced to decompose the problem into a series of simple sub-problems. The new transmittance objective function is:

$$ \gamma \left\| {t_{b} - t_{z} } \right\|_{2}^{2} + \sum\limits_{j \in \eta } {\left\| {W_{j} \circ u_{j} } \right\|_{1} } + \tau (\sum\limits_{j \in \eta } {\left\| {u_{j} - (D_{j} \otimes t_{b} )} \right\|_{2}^{2} } ) $$
(27)

where \(u_{j}\) (\(j \in \eta\)) are the introduced auxiliary variables, \(D_{j}\) is the \(j\)-th differential operator, \(\otimes\) denotes convolution, and \(\tau\) is a weight.

Through the 2-dimensional Fast Fourier Transform (FFT) algorithm and assuming a circular boundary condition, the optimal transmittance \(t^{*}\) can be directly calculated:

$$ t^{*} = F^{ - 1} \left( {\frac{{\frac{\gamma }{\tau }F\left( {t_{z} } \right) + \sum\limits_{j \in \eta } {\overline{{F\left( {D_{j} } \right)}} } \circ F\left( {u_{j} } \right)}}{{\frac{\gamma }{\tau } + \sum\limits_{j \in \eta } {\overline{{F\left( {D_{j} } \right)}} \circ F\left( {D_{j} } \right)} }}} \right) $$
(28)

where \(F\left( \bullet \right)\) is the Fourier transform, \(F^{ - 1} \left( \bullet \right)\) is the inverse Fourier transform, \(\overline{{F\left( \bullet \right)}}\) denotes complex conjugation, and “\(\circ\)” denotes element-wise multiplication.
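A sketch of the closed-form update (28) for one variable-splitting iteration, under the circular boundary condition stated above; psf2otf is the usual zero-pad-and-shift FFT of a small kernel. This is a single t-update, not the full iterative solver with the u_j shrinkage steps.

```python
def psf2otf(kernel, shape):
    """Zero-pad a small kernel to `shape`, shift its center to the origin,
    and take the FFT (this assumes the circular boundary condition)."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def fft_update_t(t_z, u_list, D_list, gamma, tau):
    """One closed-form t-update, Eq. (28); all products are element-wise."""
    num = (gamma / tau) * np.fft.fft2(t_z)
    den = np.full(t_z.shape, gamma / tau, dtype=complex)
    for u, D in zip(u_list, D_list):
        FD = psf2otf(D, t_z.shape)
        num += np.conj(FD) * np.fft.fft2(u)
        den += np.conj(FD) * FD
    return np.real(np.fft.ifft2(num / den))
```

In solvers of this family, D_list is typically the pair of first-order difference kernels np.array([[1., -1.]]) and np.array([[1.], [-1.]]), and each \(u_{j}\) comes from a weighted shrinkage of \(D_{j} \otimes t_{b}\).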

3.4 Image Restoration

The optimal transmittance \(t^{*}\), obtained by iteratively solving the weighted L1-norm regularization applied to the compensated transmittance (Sect. 3.3), and the atmospheric light value \(A_{1}\), obtained with the Otsu-based method (Sect. 3.2), are substituted into formula (9) to obtain the clear, fog-free restored image:

$$ J(x,y) = \frac{{I(x,y) - A_{1} }}{{t^{ * } (x,y)}} + A_{1} $$
(29)

4 Image Restoration Experiment and Result Analysis in Foggy Weather

4.1 Algorithm Process

The overall flowchart of the algorithm in this paper is shown in Fig. 4 below:

Fig. 4. The overall flow chart of the algorithm in this paper

The main steps of the algorithm are described as follows:

(1) Acquire a foggy image.

(2) Obtain the atmospheric light value and the transmittance from the atmospheric light module and the transmittance module, respectively.

(3) Substitute the foggy image and the calculated atmospheric light value and transmittance into the foggy image degradation model to obtain the restored image.

(4) Apply histogram equalization to the restored image for contrast enhancement, yielding the final dehazed image (an end-to-end sketch follows this list).
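Putting the modules together, steps (1)–(4) can be sketched end to end as below, reusing the earlier sketches; cv2.ximgproc.guidedFilter (opencv-contrib) is assumed for the refinement step, and the radius/eps values are assumptions rather than values fixed by the paper.

```python
def dehaze(img):
    """End-to-end sketch of steps (1)-(4); `img` is an 8-bit BGR image."""
    has_sky, sky_mask = detect_sky(img)
    if has_sky:
        A = atmospheric_light_sky(img, sky_mask)              # Sect. 3.2, Eq. (14)
    else:
        A = atmospheric_light(img, dark_channel(img)).mean()  # He-style fallback
    t = log_transmission(img, A)                              # Eq. (20)
    guide = (img / 255.0).astype(np.float32)
    t = cv2.ximgproc.guidedFilter(guide, t.astype(np.float32), 60, 1e-4)
    t = np.maximum(t, cap_transmission(img))                  # CAP compensation, Eq. (23)
    # the weighted L1 step (fft_update_t) would refine t further here
    J = (img.astype(np.float64) - A) / np.maximum(t[..., None], 0.1) + A
    J = np.clip(J, 0, 255).astype(np.uint8)
    ycc = cv2.cvtColor(J, cv2.COLOR_BGR2YCrCb)                # step (4): equalize luminance
    ycc[..., 0] = cv2.equalizeHist(ycc[..., 0])
    return cv2.cvtColor(ycc, cv2.COLOR_YCrCb2BGR)
```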

4.2 Experimental Results and Analysis

The He, Meng, Zhu, and Berman algorithms and the improved algorithm of this paper are selected for comparative experiments.

The first group of comparative experiments: the original image is a foggy road image, and the five algorithms are applied to it. The results are as follows (Fig. 5):

Fig. 5. Defog comparison chart of the road fog image

The second group of comparative experiments: the original image is selected as the foggy image of the house. The experimental results are shown in Fig. 6 below:

Fig. 6. Defog comparison chart of houses in foggy weather

The third group of comparative experiments: the original image used in this experiment is the ginkgo tree fog image (Fig. 7).

Fig. 7. Defog comparison chart of the ginkgo tree fog image

Table 1. Objective evaluation results

The average gradient (AG), structural similarity (SSIM), information entropy (EN), and peak signal-to-noise ratio (PSNR) are used to evaluate the defogged images objectively. Table 1 lists the index values measured on each group of experimental results.
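For reference, the four metrics can be computed as follows; PSNR and SSIM are taken from scikit-image, while AG and EN use one common definition each (the paper does not spell out its exact formulas).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def average_gradient(gray):
    """AG: mean magnitude of local gradients; higher means more detail."""
    gx, gy = np.gradient(gray.astype(np.float64))
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def entropy(gray):
    """EN: Shannon entropy of the 8-bit gray-level histogram."""
    hist = np.bincount(gray.astype(np.uint8).ravel(), minlength=256) / gray.size
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

# PSNR / SSIM against a reference image:
# peak_signal_noise_ratio(ref, out), structural_similarity(ref, out, channel_axis=2)
```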

The data indicate that the average gradient of the proposed algorithm is slightly lower than that of the Meng algorithm, while its SSIM, PSNR, and EN are all higher. Thus, although the proposed algorithm yields a slightly lower AG than the Meng algorithm, it provides the highest SSIM, PSNR, and EN. This shows that its colors are the most realistic, i.e., image distortion is lowest, and that the defogged image retains rich detail, i.e., the image is clearer.

5 Conclusion

After reviewing a large body of literature in the field of image dehazing, this paper focuses on dehazing algorithms based on the atmospheric scattering model. Building on the theory of the dark channel prior and an analysis of its shortcomings, a new image defogging algorithm is proposed. Comparative experiments verify that the proposed algorithm produces no halo or color shift in regions of abrupt depth change, and its visual effect is more real and natural. Moreover, the defogged images it produces carry rich detail, are clearer, and have an advantage in the degree of distortion; the defogging effect is good and better matches human visual perception.