Abstract
The dark channel prior dehazing algorithm can clear foggy images to varying degrees. However, it still has deficiencies: a halo phenomenon appears at image edges in areas where the depth of field changes abruptly; inaccurate transmittance estimation produces color shift; and inaccurate estimation of the atmospheric light value leaves the defogged image too dark. It is therefore necessary to improve the dark channel prior algorithm. Building on an in-depth study of the dark channel prior, this paper proposes a new dehazing algorithm based on the atmospheric scattering model. Simulation results show that the improved algorithm effectively suppresses halo and color distortion in areas of abrupt depth of field, and the defogged image has rich detail, greater clarity, and moderate brightness. The algorithm also improves objective metrics such as average gradient, structural similarity, peak signal-to-noise ratio, and information entropy.
1 Introduction
Image defogging technology uses algorithms to reduce or even eliminate the fog in an image. It has long been an important research topic in image processing. Current mainstream defogging algorithms fall into three categories [1]: those based on image enhancement, those based on image restoration, and those based on deep learning.
The prior-based image defogging method is built on the atmospheric scattering model: certain prior conditions are used to solve for the model's parameters, mainly the atmospheric light value and the transmittance, from which the haze-free image is deduced. The dark channel prior observes that in outdoor fog-free images, except for the sky area, most pixels have at least one of the R, G, and B channels with a very low value; this minimum channel is the dark channel. The coarse transmittance is estimated from the dark channel prior theory, the brightest 0.1% of pixels are taken from the dark channel image, and the maximum value at the corresponding positions in the original foggy image is used as the atmospheric light value, from which the fog-free image is recovered. However, the dark channel prior algorithm causes a serious halo effect where the depth of field changes abruptly, and its transmittance estimate for the sky area is also inaccurate.
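As a concrete illustration, the dark channel and coarse transmittance described above can be sketched in a few lines of NumPy (a minimal sketch assuming an RGB image in [0, 1]; the function names are ours, not the paper's):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel: per-pixel minimum over R, G, B, followed by a
    local minimum filter over a patch x patch window."""
    min_rgb = image.min(axis=2)                 # min over the three channels
    return minimum_filter(min_rgb, size=patch)  # min over the local patch

def coarse_transmission(image, A, patch=15, rho=0.95):
    """Coarse transmittance from the dark channel prior (He et al.):
    t~(x) = 1 - rho * dark_channel(I / A)."""
    normalized = image / A                      # divide each channel by its atmospheric light
    return 1.0 - rho * dark_channel(normalized, patch)
```

For a haze-free region the dark channel is near zero, so the estimated transmittance is close to 1; dense fog lifts the dark channel and pushes the transmittance toward 1 − ρ.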
Aiming at these deficiencies, this paper improves the dark channel prior algorithm in three ways. First, the atmospheric light value is improved: the Otsu algorithm initially segments the image, morphological operations remove white noise from the segmentation, and the atmospheric light value is then calculated within the sky area. Second, the transmittance is improved: a linear model is established from the relationship between the foggy and fog-free images, adaptive parameters are introduced to constrain the rate of the linear transformation, and the initial transmittance is obtained by a logarithmic transformation of the linear mapping. A depth-of-field formula then compensates the transmittance refined by guided filtering, and the optimized transmittance is obtained through a weighted L1 norm context regularization algorithm. Finally, contrast enhancement is applied to the restored image with a histogram equalization algorithm.
2 Atmospheric Scattering Model
The atmospheric scattering model in this paper can be divided into three parts: incident light attenuation model, ambient light imaging model and fog image degradation model.
2.1 Incident Light Attenuation Model
The incident light attenuation model describes the attenuation process of light from the object to the imaging device.
Consider a thin slab of space whose material distribution is the same as that of the atmosphere. Assuming its thickness is \(dx\), when a beam of parallel light with intensity \(E\left( {x,\lambda } \right)\) passes through it, the energy change \(dE\left( {x,\lambda } \right)\) of the beam is expressed as:
In formula (1), \(\lambda\) represents the wavelength of light; \(\beta\) is a function of \(\lambda\), also known as the atmospheric scattering coefficient. Assuming that the particles in the atmosphere are evenly distributed, taking the definite integral of both sides of formula (1) over the interval \(x = 0\) to \(x = d\) and simplifying gives:
Equation (2) is the expression of the incident light attenuation model, where \(E_{0} (\lambda )\) represents the light intensity of the object at \(x = 0\). As the depth of field \(d\) gradually increases, the energy \(E_{d} (d,\lambda )\) of the incident light gradually decays.
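The displayed equations were lost in extraction; in the standard Beer-Lambert form that the surrounding description implies, formulas (1) and (2) read:

\(dE\left( {x,\lambda } \right) = - \beta \left( \lambda \right)E\left( {x,\lambda } \right)dx\)

\(E_{d} \left( {d,\lambda } \right) = E_{0} \left( \lambda \right)e^{ - \beta \left( \lambda \right)d}\)

so the incident light decays exponentially with the depth of field \(d\).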
2.2 Ambient Light Imaging Model
The ambient light imaging model refers to the process in which ambient light, such as sunlight and ground-reflected light, participates in object imaging. This ambient light superimposes on the attenuated scene light; because of its magnitude, the R, G, and B channel values increase, which ultimately blurs the acquired image and distorts its color.
As shown in Fig. 1, assume the ambient light in the imaging scene is constant. Let the angle between the line from the camera to the edge of the object and the horizontal direction be \(d\theta\), the distance between the object and the camera be \(d\), and the volume element at distance \(x\) from the camera be \(dv\). Taking the element \(dv\) as an ambient light point source, the light-intensity element \(dI\) obtained within a unit volume can be expressed as:
According to the incident light attenuation law, the ambient light entering the propagation path loses energy to suspended particles in the atmosphere, so the light intensity \(dE\) reaching the camera is:
Taking the definite integral of formula (4) over the interval \(x = 0\) to \(x = d\) and considering the incident attenuation yields the energy relationship of the ambient light imaging model under ambient light intensity \(E_{q}\):
2.3 Image Degradation Model in Foggy Weather
Due to the influence of severe weather such as smog and system-related parameters, image degradation can be understood as the attenuation of each pixel in a clear image to varying degrees. Therefore, the atmospheric scattering model can be approximately regarded as the superposition of the incident light attenuation model and the ambient light imaging model, and its mathematical expression is:
For the convenience of calculation, it is generally considered that the pixel gray value of a certain point in the image is approximately proportional to the radiation intensity received by the point, then:
where \(x = (i,j)\) is a two-dimensional vector, which represents the coordinate position of each point in the image, \(I\) represents the foggy image, \(J\) represents the image after defogging, \(t(x)(0 \le t(x) \le 1)\) is the transmittance distribution function, and \(A\) is the atmospheric light value. \(t(x)J(x)\) is the attenuation term and \((1 - t(x))A\) is the ambient light term.
Simplifying the processing of formula (7), the radiation intensity obtained after dehazing can be expressed as:
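The displayed formulas were lost in extraction; consistent with the description above (attenuation term \(t(x)J(x)\), ambient light term \((1 - t(x))A\)), the standard forms are:

\(I\left( x \right) = J\left( x \right)t\left( x \right) + A\left( {1 - t\left( x \right)} \right)\)

\(J\left( x \right) = \frac{{I\left( x \right) - A}}{t\left( x \right)} + A\)

the first being the degradation model and the second its inversion for the dehazed radiance.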
3 Improved Dehazing Algorithm Based on Dark Channel Prior
3.1 Dark Channel Prior Transmittance and Atmospheric Light Value Estimation
Assuming that the transmittance \(t(x)\) is constant within each minimum-filtering window, denoted \(\widetilde{t}(x)\), and that the global atmospheric light \(A\) is known, we can obtain [2]:
In fog-free weather some particles remain in the air, forming a slight haze. To preserve the perception of depth, a certain amount of fog must be retained, so a retention factor \(\rho\) is introduced; in general, \(\rho = 0.95\). Formula (10) can then be rewritten as:
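In He et al.'s formulation, which the text follows, the coarse transmittance with the retention factor \(\rho\) takes the form:

\(\widetilde{t}\left( x \right) = 1 - \rho \mathop {\min }\limits_{y \in \Omega \left( x \right)} \left( {\mathop {\min }\limits_{{C \in \left\{ {R,G,B} \right\}}} \frac{{I^{C} \left( y \right)}}{{A^{C} }}} \right)\)

where \(\Omega \left( x \right)\) is the local minimum-filtering window centered at \(x\).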
For the selection of the atmospheric light value \(A\), combined with the ambient light imaging model, the region with the densest fog is generally chosen. In practice, \(A\) can also be obtained from the dark channel map: take the brightest 0.1% of its pixels, find the corresponding point with the highest brightness in the original foggy image, and use that pixel value as \(A\). Combined with formula (9), the restoration formula becomes:
where \(t_{d} = 0.1\) is a lower threshold on the transmittance; it restrains the over-brightening and severe color distortion that occur in the restored image when the transmittance is too small.
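The restoration step and the 0.1% atmospheric light selection can be sketched as follows (an illustrative implementation; the clamp \(t_{d} = 0.1\) follows the text, and the function names are ours):

```python
import numpy as np

def atmospheric_light(image, dark):
    """A from the brightest 0.1% of dark-channel pixels (He et al.):
    among those candidates, take the brightest pixel of the hazy image."""
    n = max(1, int(dark.size * 0.001))
    idx = np.argsort(dark.reshape(-1))[-n:]       # top 0.1% by dark-channel value
    candidates = image.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]

def recover(image, A, t, t_d=0.1):
    """Restore scene radiance J = (I - A) / max(t, t_d) + A; clamping t
    at t_d keeps dense-fog pixels (t near 0) from amplifying noise and
    distorting color."""
    t = np.clip(t, t_d, 1.0)[..., None]           # broadcast over channels
    return (image - A) / t + A
```

With a transmittance of 1 everywhere (no fog), `recover` returns the input image unchanged, which is a useful sanity check.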
3.2 Optimization of Atmospheric Light Value Selection Method
The atmospheric light value model uses the Otsu algorithm principle [3], and the specific analysis process is shown in the figure below (Fig. 2):
Sky Area Judgment
If a sky area exists, calculate the atmospheric light value within it; if not, use He et al.'s method. Let the number of sky-area pixels be \(n_{sky}\), the total number of pixels in the foggy image be \(MN\), and the ratio of the sky area to the entire image be \(p_{sky}\); then:
When \(p_{sky} \ge 6\%\), a sky area is judged to exist; when \(p_{sky} < 6\%\), no sky area is assumed.
Atmospheric Light Value Calculation
After judging that there is a sky area, find the values in the dark channel map corresponding to the sky area, and calculate the average of these values as the value of atmospheric light. The calculation formula is as follows:
where \(I_{dark} (i,j)\) is the dark channel information map of the foggy image.
Repeated experiments show that this method corrects atmospheric light values that are too large or too small: the picture neither appears washed out nor over-saturated, and the restored image looks more natural.
When the sky area is not detected, the method used to calculate the atmospheric light value is as follows:
(1) In the dark channel map, take the top 0.1% of the pixels by brightness.
(2) Map the pixel positions found in (1) back to the original foggy image and take the value of the brightest corresponding point as \(A\).
3.3 Transmittance Method Improvements
First, by analyzing foggy and fog-free images, a linear model is built and logarithmically transformed to obtain the initial transmittance, which is then refined by the guided filtering algorithm. Next, the depth-of-field formula is introduced to compensate the refined transmittance. Finally, the transmittance is optimized using the weighted L1 norm context regularization algorithm. Figure 3 below is the flow chart of transmittance optimization:
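The guided filtering step named above follows He et al.'s guided filter; a compact NumPy version (box filters via `scipy.ndimage`; the radius and eps defaults are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=20, eps=1e-3):
    """Guided filter (He et al.): refine a coarse transmittance map `src`
    using the grayscale hazy image `guide`, so that transmittance edges
    align with image edges instead of patch boundaries."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_I = uniform_filter(guide * guide, size)
    corr_Ip = uniform_filter(guide * src, size)
    var_I = corr_I - mean_I * mean_I            # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p          # local covariance guide/src
    a = cov_Ip / (var_I + eps)                  # local linear coefficients
    b = mean_p - a * mean_I
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * guide + mean_b
```

Because the output is locally a linear function of the guide, flat regions of the transmittance stay flat while genuine depth edges are preserved.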
Transmittance Estimation Based on Logarithmic Adaptation
Transmittance is also one of the important parameters in image restoration, and the accuracy of transmittance estimation has a direct impact on the quality of defogged images. The transmittance function is:
where \(I(i,j)\) and \(J(i,j)\) represent the gray values of the foggy image and the haze-free image in channel \(C\) at coordinate \((i,j)\), respectively.
According to the dark channel prior theory, after the minimum filtering process on the transmittance function, we can get:
Assuming the atmospheric light \(A^{C}\) is a known constant and abbreviating the dark channel of the haze-free image as \(J^{dark} (i,j)\), we obtain:
In the existing single image defogging algorithm based on linear mapping [4], the impact of ambient light on imaging is closely related to the depth of field distance, and the farther the depth of field distance is, the greater the impact will be.
Therefore, approximating it with a segmented quadratic function, the above formula can be modified as:
where \(RGB_{\max }\) and \(RGB_{\min }\) are the maximum and minimum values of \(\min_{{C \in \left\{ {R,G,B} \right\}}} I^{C} (i,j)\), respectively.
Increasing the luminance values in the dark channel where the fog concentration differs markedly, and applying a logarithmic transformation to formula (18), gives:
The logarithmic transmittance function can be obtained:
Transmittance Compensation Based on Color Attenuation Prior Algorithm
For most outdoor foggy images, the fog concentration is positively correlated with the difference between brightness and saturation within the same area. Assuming a linear relationship between the fog concentration and this brightness-saturation difference, the specific mathematical expression is as follows:
where \(x\) is the pixel point, \(d(x)\) is the depth of field, \(c(x)\) is the concentration of the fog, \(v(x) - s(x)\) is the difference between the brightness and saturation of the image, and \(\propto\) is a positive correlation sign.
The above law is the color attenuation prior; its corresponding depth-of-field model is as follows:
where \(v(x)\) is the brightness of the pixel, \(s(x)\) is the saturation of the pixel, and \(\theta_{0}\), \(\theta_{1}\), \(\theta_{2}\) are the linear coefficients. The function \(\varepsilon (x)\) represents the random error of the model, modeled as Gaussian with mean 0 and variance \(\sigma^{2}\) \(\left( {\varepsilon \left( x \right) \sim N\left( {0,\sigma^{2} } \right)} \right)\).
According to the CAP principle, the depth of field formula can be introduced to compensate the transmittance. Its mathematical expression is:
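The CAP depth model and a compensation step can be sketched as follows. The \(\theta\) coefficients are the values commonly reported for Zhu et al.'s trained model, and the elementwise-minimum blend is our illustrative assumption, not the paper's exact rule:

```python
import numpy as np

def cap_depth(image, theta=(0.121779, 0.959710, -0.780245)):
    """Color attenuation prior depth: d = theta0 + theta1*v + theta2*s,
    with v and s the HSV value and saturation of each pixel.
    Coefficients are Zhu et al.'s reported values (treat as assumptions)."""
    v = image.max(axis=2)
    s = np.where(v > 0, (v - image.min(axis=2)) / np.maximum(v, 1e-6), 0.0)
    t0, t1, t2 = theta
    return t0 + t1 * v + t2 * s

def compensate_transmission(t_refined, depth, beta=1.0):
    """Blend the guided-filtered transmittance with the CAP estimate
    t_cap = exp(-beta * d). An elementwise minimum is one plausible
    compensation rule; the paper's exact formula is not reproduced here."""
    return np.minimum(t_refined, np.exp(-beta * np.clip(depth, 0, None)))
```

Bright, low-saturation (foggy) pixels get a larger depth estimate than saturated ones, so their transmittance is pulled down, which is the intended compensation effect.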
Context Regularization Based on Weighted L1 Norm
Pixels in a local patch of an image generally have similar depth values, so a piecewise-constant transmittance can be derived from boundary constraints. However, this produces blocky patches where the depth of field changes dramatically, leading to noticeable halo artifacts in the defogged results. To solve this problem, a weighting function \(W(x,y)\) is introduced into the constraints, namely:
where x and y are the pixel points of two adjacent positions in the transmittance map.
Feature statistics over a large number of hazy images show that adjacent pixels at image depth edges have similar color gray levels, so the difference between locally adjacent pixels can be used to construct a weighting function. The following form is based on the squared difference between the color vectors of two adjacent pixels:
where \(\sigma\) is the standard deviation and \(I(x) - I(y)\) is the gray-value difference between two adjacent pixels in the local window.
Another formula for brightness difference based on adjacent pixels is as follows:
where \(\ell (x)\) is the logarithmic bright channel of the hazy image, the exponent \(\phi\) controls the sensitivity to the brightness difference between two adjacent pixels, and \(\zeta\) is a small constant (usually 0.0001) that prevents division by zero.
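Both weighting functions can be sketched directly from the formulas (a minimal sketch for 3-channel images; the \(\sigma\), \(\phi\), \(\zeta\) defaults are illustrative):

```python
import numpy as np

def weight_squared_difference(image, axis, sigma=0.5):
    """W(x,y) = exp(-||I(x) - I(y)||^2 / (2 sigma^2)) for neighbours along
    one axis of an HxWx3 image: similar neighbours weigh near 1, depth
    edges weigh near 0, relaxing smoothness exactly where depth jumps."""
    diff = np.diff(image, axis=axis)
    return np.exp(-(diff ** 2).sum(axis=-1) / (2.0 * sigma ** 2))

def weight_brightness(l, axis, phi=3.0, zeta=1e-4):
    """W(x,y) = (|l(x) - l(y)|**phi + zeta)**-1 on the log bright
    channel l; zeta keeps the weight finite for identical neighbours."""
    return 1.0 / (np.abs(np.diff(l, axis=axis)) ** phi + zeta)
```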
Based on the variable splitting method to optimize the above formula, the basic idea is to introduce several auxiliary variables, decompose them into a series of simple sub-problems, and construct a new transmittance target optimization function as:
Among them, \(u_{j}\)(\(j \in \eta\)) is the auxiliary function introduced, and \(\tau\) is the weight.
Through the 2-dimensional Fast Fourier Transform (FFT) algorithm and assuming a circular boundary condition, the optimal transmittance \(t^{*}\) can be directly calculated:
where \(F\left( \bullet \right)\) denotes the Fourier transform, \(F^{ - 1} \left( \bullet \right)\) the inverse Fourier transform, \(\overline{{\left( \bullet \right)}}\) complex conjugation, and “\(\circ\)” element-wise multiplication.
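The circular-boundary FFT closed form can be illustrated on the quadratic t-subproblem of the splitting scheme (an illustrative form without the weights; `psf2otf` mimics the usual padding-and-shift trick):

```python
import numpy as np

def psf2otf(kernel, shape):
    """Pad a small filter to `shape` and shift its center to index (0, 0)
    so FFT-domain multiplication matches circular convolution."""
    pad = np.zeros(shape)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def solve_t_quadratic(t_hat, u_list, kernels, tau):
    """Closed-form t-update of a variable-splitting scheme:
    argmin_t ||t - t_hat||^2 + tau * sum_j ||d_j * t - u_j||^2,
    solved exactly in the Fourier domain under circular boundaries."""
    num = np.fft.fft2(t_hat)
    den = np.ones(t_hat.shape, dtype=complex)
    for u, k in zip(u_list, kernels):
        D = psf2otf(k, t_hat.shape)
        num += tau * np.conj(D) * np.fft.fft2(u)   # conj(F(d_j)) o F(u_j)
        den += tau * np.abs(D) ** 2                # |F(d_j)|^2
    return np.real(np.fft.ifft2(num / den))
```

Because convolution diagonalizes under the FFT with circular boundaries, each update is a single element-wise division rather than a large linear solve.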
3.4 Image Restoration
According to the preceding sections, the optimal transmittance is obtained by iteratively solving the weighted L1 norm regularization on the compensated transmittance (Sect. 3.3), and the atmospheric light value is obtained with the Otsu-based method (Sect. 3.2). Substituting both into formula (9) yields a clear, fog-free image, which is then corrected.
4 Image Restoration Experiment and Result Analysis in Foggy Weather
4.1 Algorithm Process
The overall flowchart of the algorithm in this paper is shown in Fig. 4 below:
The main steps of the algorithm are described as follows:
(1) Obtain a foggy image.
(2) The transmittance module and the atmospheric light value module compute the transmittance and the atmospheric light value, respectively.
(3) Substitute the foggy image and the calculated atmospheric light value and transmittance into the foggy image degradation model to obtain the restored image.
(4) Apply histogram equalization to the restored image for contrast enhancement, yielding the final dehazed image.
4.2 Experimental Results and Analysis
The He, Meng, Zhu, and Berman algorithms and the improved algorithm of this paper are selected for comparative experiments.
The first group of comparative experiments: the original image is a foggy road image, and five algorithms are used to dehaze the image. The result is as follows (Fig. 5):
The second group of comparative experiments: the original image is selected as the foggy image of the house. The experimental results are shown in Fig. 6 below:
The third group of comparative experiments: the original image used in this experiment is the ginkgo tree fog image (Fig. 7).
Average gradient (AG), structural similarity (SSIM), information entropy (EN), and peak signal-to-noise ratio (PSNR) are used to evaluate the defogged images objectively. Table 1 below shows the index values measured for each group of experimental results.
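AG, EN, and PSNR can be computed as follows (minimal NumPy sketches of the usual definitions; SSIM is available as `skimage.metrics.structural_similarity`):

```python
import numpy as np

def average_gradient(gray):
    """AG: mean magnitude of the forward differences; larger values
    generally indicate sharper detail."""
    gx = np.diff(gray, axis=1)[:-1, :]
    gy = np.diff(gray, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def information_entropy(gray_u8):
    """EN: Shannon entropy of the 8-bit gray histogram, in bits."""
    hist = np.bincount(gray_u8.reshape(-1), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def psnr(a_u8, b_u8):
    """PSNR in dB between two 8-bit images."""
    mse = np.mean((a_u8.astype(np.float64) - b_u8.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```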
The data show that the average gradient of this paper's algorithm is slightly lower than that of the Meng algorithm, while its SSIM, PSNR, and EN are all higher. In other words, although the AG of this paper's algorithm is slightly below the Meng algorithm's, it achieves the highest SSIM, PSNR, and EN, indicating the most realistic color (the lowest degree of distortion) and rich detail in the defogged image, that is, a clearer result.
5 Conclusion
After studying the relevant literature on image dehazing, this paper focuses on dehazing algorithms based on the atmospheric scattering model. Building on the theory of the dark channel prior and an analysis of its shortcomings, a new image defogging algorithm is proposed. Comparative experiments verify that the algorithm produces no halo or color shift in areas of abrupt depth of field and gives a more real and natural visual effect. Moreover, the defogged image has rich detail, greater clarity, and a lower degree of distortion, better matching human visual perception.
References
Gonzalez, R.C., Woods, R.E.: Digital Image Processing, 3rd edn. Electronic Industry Press, Beijing (2011)
Bai, Z., Wang, B.: Air Particulate Matter Pollution and Prevention. Chemical Industry Press, Beijing (2011)
Wang, D., Zhang, T.: Review and analysis of image dehazing algorithms. J. Graph. 154(06), 861–870 (2020)
Tarel, J.P., Hautière, N.: Fast visibility restoration from a single color or gray level image. In: IEEE International Conference on Computer Vision. IEEE (2010)
Fattal, R.: Single image dehazing. ACM Trans. Graph. 27(3), 1–9 (2008)
Tan, R.T.: Visibility in bad weather from a single image. In: 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, Alaska, USA. IEEE (2008)
He, K., Sun, J., Tang, X.: Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011)
Harald, K.: Theorie der horizontalen Sichtweite: Kontrast und Sichtweite. Keim & Nemnich (1925)
Mccartney, E.J.: Scattering phenomena. Optics of the atmosphere: scattering by molecules and particles. Science 196, 1084–1085 (1977)
Nayar, S.K., Narasimhan, S.G.: Vision in bad weather. In: Proceedings of the Seventh IEEE International Conference on Computer Vision. IEEE (2002)
Shen, Y., Wang, Q.: Sky region detection in a single image for autonomous ground robot navigation. Int. J. Adv. Rob. Syst. 10(4), 1 (2013)
Li, X., Zhou, L.: Open-set voiceprint recognition adaptive threshold calculation method based on Otsu algorithm and deep learning. J. Jilin Univ. (Nat. Sci. Ed.) 59(04), 909–914 (2021)
Li, X., Niu, H., Zhong, H.: Image defogging method based on mean standard deviation and weighted transmittance. J. Railway Sci. Eng. 17(11), 2938–2945 (2020)
Zhao, X., Huang, F.: Image enhancement based on dual-channel prior and illumination map-guided filtering. Adv. Lasers Optoelectron. 58(8), 45–54 (2021)
Wang, P., Zhang, Y., Bao, F., et al.: Optimal dehazing method based on foggy image degradation model. Chin. J. Image Graph. 23(4), 12 (2018)
Cai, B., Xu, X., Jia, K., et al.: DehazeNet: an end-to-end system for single image haze removal. IEEE Trans. Image Process. 25(11), 5187–5198 (2016)
Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., Yang, M.-H.: Single image dehazing via multi-scale convolutional neural networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 154–169. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_10
Rashid, H., Zafar, N., Iqbal, M.J., et al.: Single image dehazing using CNN. Procedia Comput. Sci. 147, 124–130 (2019)
He, Z., Patel, V.M.: Densely connected pyramid dehazing network. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE (2018)
Acknowledgement
This work was supported by the National Nature Science Foundation of China (62076249), Key Research and Development Plan of Shandong Province (2020CXGC010701, 2020LYS11), Natural Science Foundation of Shandong Province (ZR2020MF154).
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Pan, X., Wang, H., Liu, Y., Zhao, Z., Huang, F. (2023). Research on Improved Image Dehazing Algorithm Based on Dark Channel Prior. In: Pan, L., Zhao, D., Li, L., Lin, J. (eds) Bio-Inspired Computing: Theories and Applications. BIC-TA 2022. Communications in Computer and Information Science, vol 1801. Springer, Singapore. https://doi.org/10.1007/978-981-99-1549-1_31
Print ISBN: 978-981-99-1548-4
Online ISBN: 978-981-99-1549-1