Abstract
Under foggy weather conditions, videos and images captured by visible-light imaging devices are severely degraded by the low visibility of the scene, suffering contrast reduction and color attenuation. The fog removal algorithm based on the dark channel prior yields good results, but its high time and space complexity limits its practicality. Building on the dark channel prior, we propose a fast image fog removal method based on gray image guided filtering. First, we estimate the transmission of the atmospheric scattering model from the dark channel information; then, we use the gray image of the foggy input image to guide the filtering of the transmission, which optimizes the rough transmission, i.e., preserves edges while smoothing flat regions; on this basis, the fog-free restored image is obtained. Experiments demonstrate that the proposed algorithm effectively removes fog from a foggy image while improving efficiency.
1 Introduction
In computer vision application systems, the presence of haze or fog greatly interferes with normal operation: humans, vehicles, and other targets in the images suffer edge blur, color distortion, and loss of contour information. Much research has been devoted to haze removal in computer vision applications. Fog removal algorithms have made notable progress, and techniques based on the physical model can be separated into two classes according to whether additional scene information is needed. One class removes fog using multiple images taken under different weather conditions [1–3] or with varying degrees of polarization [4–7], but these methods are impractical. The other class estimates the model parameters and restores the image from the image itself, which is practical. Fattal [8] used a local window-based operation and a graphical model; however, his approach attempts to separate uncorrelated fields.
He [9] restored a fog-degraded image using the dark channel prior and obtained a sharp result; however, this method has high computational complexity and is time-consuming. In this chapter, we propose a novel image restoration approach, namely, the gray image guided filter. We observe that the cause of fog degradation is uniform across channels, so the degradation characteristics of the gray image agree with those of its color image; accordingly, we estimate the transmission of the atmospheric scattering model using the dark channel prior and optimize it with a guided filter driven by the gray image, obtaining a sharp image in this way. Our method preserves the edges of the input image while producing a sharp result. Experiments show that the improved method obtains the same fog-free image more efficiently.
2 Defogging Based on Dark Channel Prior
2.1 Atmospheric Models
In computer graphics and computer vision, a very popular model to describe the formation of a fog-degraded image was proposed by W. E. K. Middleton [10] in 1952, with the expression shown as follows:

\[ I(x) = J(x)\,t(x) + A\,\bigl(1 - t(x)\bigr) \qquad (69.1) \]

where \(I(x)\) is the observed image, \(J(x)\) is the scene radiance, \(A\) is the global atmospheric light, and \(t(x)\) is the medium transmission describing the portion of the light that reaches the camera without being scattered. The aim of fog removal is to recover \(J\), \(A\), and \(t\) from \(I\).
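As a minimal illustration (a NumPy sketch of ours, not code from the chapter), the forward model of Eq. (69.1) can be used to synthesize a hazy image from a scene radiance, an atmospheric light, and a transmission map:

```python
import numpy as np

def apply_fog(J, A, t):
    """Synthesize a hazy image via the scattering model
    I(x) = J(x) t(x) + A (1 - t(x)).
    J: scene radiance, H x W x 3 in [0, 1]; A: scalar atmospheric light;
    t: H x W per-pixel transmission in [0, 1]."""
    t = t[..., np.newaxis]              # broadcast transmission over the color axis
    return J * t + A * (1.0 - t)

# A clear pixel (t = 1) keeps its radiance; a fully occluded one (t = 0) becomes A.
J = np.zeros((2, 2, 3))
J[0, 0] = [0.2, 0.4, 0.6]
t = np.array([[1.0, 0.0], [0.5, 0.5]])
I = apply_fog(J, A=0.8, t=t)
```

Fog removal is the inverse problem: given only \(I\), estimate \(A\) and \(t\) and solve for \(J\).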
2.2 Dark Channel Prior
He proposed a simple but effective prior in his paper: the dark channel prior. He found that in the vast majority of non-sky local patches, at least one color channel has a very low value at some pixels, which means the minimum intensity in such patches is very low, converging to zero. For an image \(J\), we define its dark channel as:

\[ J^{dark}(x) = \min_{y \in \Omega(x)} \Bigl( \min_{col \in \{r,g,b\}} J^{col}(y) \Bigr) \qquad (69.2) \]

Equation (69.2) is the definition of the dark channel prior, and we call \(J^{dark}(x)\) the dark channel of \(J\). For a fog-free image, most dark-channel pixel values are close to zero. Conversely, if many high-value pixels appear in the dark channel image, those values represent the additional impact of the fog on scene imaging, i.e., the airlight of the foggy weather. Note that we only consider input images captured on foggy days.
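Computed directly, Eq. (69.2) is a channel-wise minimum followed by a patch-wise minimum filter. A sketch using SciPy (our illustration; the 15-pixel patch size is an assumed default):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over the color channels, then a
    minimum filter over a patch x patch neighborhood Omega(x).
    img: H x W x 3 array in [0, 1]."""
    min_rgb = img.min(axis=2)                     # min over the three color channels
    return minimum_filter(min_rgb, size=patch)    # min over the local patch

# On a synthetic fog-free image where every pixel has one dark channel value,
# the dark channel is exactly zero, as the prior predicts.
rng = np.random.default_rng(0)
J = rng.uniform(0.5, 1.0, size=(32, 32, 3))
rows, cols = np.arange(32)[:, None], np.arange(32)[None, :]
J[rows, cols, rng.integers(0, 3, (32, 32))] = 0.0
print(dark_channel(J).max())   # -> 0.0
```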
2.3 Fog Image Restoration
According to the dark channel prior, for a fog image, we can estimate the atmospheric light \(A\) and the transmission \(t(x)\). In the transmission map, areas with higher values are less affected by fog, namely, they are closer to the imaging device. Because we use block areas when calculating the dark channel image, the transmission estimated through the dark channel exhibits a block effect. To better preserve image edge information, a global optimization of the initial transmission is needed. In He's paper, soft matting was used to optimize the initial transmission; however, soft matting has high computational complexity and is time-consuming. The algorithm is as follows: first, assume the atmospheric light \(A\) is given and the transmission \(\tilde{t}(x)\) in a local patch is constant; then apply the minimum operators of the dark channel prior to Eq. (69.1):

\[ \min_{y \in \Omega(x)} \Bigl( \min_{col} \frac{I^{col}(y)}{A^{col}} \Bigr) = \tilde{t}(x) \min_{y \in \Omega(x)} \Bigl( \min_{col} \frac{J^{col}(y)}{A^{col}} \Bigr) + 1 - \tilde{t}(x) \]
where \(\Omega (x)\) is the region centered at \(x\), and \(col\) represents the three color channels.
By the dark channel prior, the \(J^{dark}\) of the fog-free image approaches zero, so the transmission can be estimated as:

\[ \tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \Bigl( \min_{col} \frac{I^{col}(y)}{A^{col}} \Bigr) \qquad (69.3) \]

where the constant \(\omega\) (\(0 < \omega \le 1\)) retains a trace of fog for distant objects so the result looks natural.
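This estimate is the dark channel of the image normalized by the atmospheric light. A sketch (ours, not the chapter's code; the value \(\omega = 0.95\) follows He's paper):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A, patch=15, omega=0.95):
    """Rough transmission: 1 - omega * dark_channel(I / A).
    I: H x W x 3 foggy image in [0, 1]; A: per-channel atmospheric light.
    omega < 1 keeps a small amount of fog for distant objects."""
    norm = I / np.asarray(A, dtype=float)          # normalize each channel by A
    dark = minimum_filter(norm.min(axis=2), size=patch)
    return 1.0 - omega * dark

# A region whose color equals the atmospheric light is maximally hazy:
# its transmission falls to the floor 1 - omega.
I = np.full((16, 16, 3), 0.8)
t = estimate_transmission(I, A=[0.8, 0.8, 0.8])
```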
He's paper used soft matting to optimize the initial transmission. Figure 69.1e is the result after soft matting; as it shows, the optimized transmission preserves sharp edge discontinuities and traces the contours of objects.
The intensity of the sky region in a fog image is always very similar to the atmospheric light and can be used to estimate \(A\). Given the atmospheric light \(A\) and the optimized transmission \(t(x)\), we can recover the fog-free image \(J(x)\) from Eq. (69.1):

\[ J(x) = \frac{I(x) - A}{\max\bigl(t(x), t_0\bigr)} + A \qquad (69.4) \]

where the lower bound \(t_0\) prevents noise amplification where the transmission is very small.
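Both steps, picking \(A\) from the haziest pixels and inverting Eq. (69.1), can be sketched as follows (our NumPy illustration; the 0.1% brightest-dark-channel rule and the floor \(t_0 = 0.1\) follow He's paper and are not prescribed by this chapter):

```python
import numpy as np

def estimate_airlight(I, dark, frac=0.001):
    """Atmospheric light A: average the input image over the brightest
    `frac` fraction of dark-channel pixels (the haziest, sky-like region)."""
    n = max(1, int(frac * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]   # indices of the n brightest
    return I.reshape(-1, 3)[idx].mean(axis=0)

def recover(I, A, t, t0=0.1):
    """Invert the scattering model: J(x) = (I(x) - A) / max(t(x), t0) + A."""
    t = np.maximum(t, t0)[..., np.newaxis]
    return (I - A) / t + A

# With an exactly synthesized hazy pixel and t >= t0, recovery inverts the model.
J = np.array([[[0.2, 0.3, 0.4]]])
t = np.array([[0.6]])
I = J * t[..., None] + 0.9 * (1.0 - t[..., None])
J_rec = recover(I, A=0.9, t=t)
```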
Figure 69.1 shows the results of the fog removal algorithm on an image of size \(600\times 400\), under the following experimental conditions: Intel Pentium CPU, 2.13 GHz, 2.0 GB RAM, Windows 7. The results show that without soft matting (Fig. 69.1d), fog removal is poor. With soft matting (Fig. 69.1f), the algorithm removes fog well while preserving image information, but it is time-consuming. In this chapter, we sharply improve the processing speed of the algorithm without affecting the subjective visual effect.
3 Image Fog Removal Based on Gray Image Guided Filtering
As mentioned above, a sharp image can be obtained once the transmission is optimized. In the soft matting algorithm, the optimization mainly consists of regional smoothing while keeping edge features; therefore, in this chapter, a different algorithm is used to optimize the transmission. We use the gray image of the original image as the guide image and filter the transmission with a spatially varying filter constructed from it. This filtering method is called the guided filter [11].
3.1 Guided Filter
The guided filter assumes a locally linear model: in each local window, the filtering output is a linear transformation of the guide image. Let the guide image be \(I\), the image to be filtered \(p\), and the filtered output \(q\); we thus have:

\[ q_i = \sum_j W_{ij}(I)\, p_j \qquad (69.5) \]

\[ q_i = \alpha_k I_i + \beta_k, \quad \forall i \in \omega_k \qquad (69.6) \]
where \(\omega_k\) is a window centered at pixel \(k\), and \(W_{ij}(I)\) is the spatially varying filter kernel determined by the guide image.
To solve for the coefficients \(\alpha_k\) and \(\beta_k\), we minimize the difference between \(q\) and \(p\) within each window; by the definition above, we thus have the cost function:

\[ E(\alpha_k, \beta_k) = \sum_{i \in \omega_k} \Bigl( (\alpha_k I_i + \beta_k - p_i)^2 + \varepsilon \alpha_k^2 \Bigr) \qquad (69.7) \]
To prevent \(\alpha_k\) from becoming too large, the penalty factor \(\varepsilon\) is introduced. Solving this quadratic minimization gives:

\[ \alpha_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \qquad (69.8) \]

\[ \beta_k = \bar{p}_k - \alpha_k \mu_k \qquad (69.9) \]

where \(|\omega|\) is the number of pixels in the filter window, \(\mu_k\) and \(\sigma_k^2\) are the mean and variance of \(I\) in \(\omega_k\), and \(\bar{p}_k\) is the mean of \(p\) in \(\omega_k\). Substituting \(\alpha_k\) and \(\beta_k\) into Eq. (69.6), we obtain the filtered image \(q\).
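Every quantity in Eqs. (69.8) and (69.9) is a window mean, so the whole filter reduces to a few box filters. A compact sketch (ours; the radius and \(\varepsilon\) are assumed defaults):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=20, eps=1e-3):
    """Guided filter: per-window linear model q = a_k * I + b_k, with
      a_k = (mean(I*p) - mu_k * pbar_k) / (var_k + eps)   # Eq. (69.8)
      b_k = pbar_k - a_k * mu_k                            # Eq. (69.9)
    then a and b averaged over all windows covering each pixel.
    I: 2-D gray guide image, p: 2-D image to filter."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1)   # box filter = window mean
    mu, pbar = mean(I), mean(p)
    var = mean(I * I) - mu * mu                          # variance of I per window
    a = (mean(I * p) - mu * pbar) / (var + eps)
    b = pbar - a * mu
    return mean(a) * I + mean(b)

# Filtering a constant image by itself: a -> 0, b -> the mean, output unchanged.
p = np.full((64, 64), 0.5)
q = guided_filter(p, p)
```

In flat regions the variance is small, so \(\alpha_k \to 0\) and the filter averages; near strong edges of the guide the variance dominates \(\varepsilon\), so \(\alpha_k \to 1\) and the edge is preserved.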
3.2 Fog Removal Based on Gray Image Guided Filter
First, use the dark channel method to estimate a rough transmission \(\tilde{t}(x)\) and the atmospheric light \(A\), as described in Sect. 2. Then, use the guided filter to perform regional smoothing while keeping edge features: take the gray image \(I_{gray}(x)\) of the input \(I(x)\) as the guide image to filter \(\tilde{t}(x)\). Compute \(\alpha_k\) and \(\beta_k\) by Eqs. (69.8) and (69.9); then, putting \(\alpha_k\) and \(\beta_k\) into Eq. (69.6), we obtain the optimized transmission \(t(x)\). Finally, the sharp image \(J(x)\) is restored by Eq. (69.4).
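The stages above can be sketched end to end (our self-contained illustration, not the authors' code; the patch size, window radius, \(\varepsilon\), \(\omega\), \(t_0\), and the standard luminance weights for the gray conversion are all assumed defaults):

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter

def defog(I, patch=15, omega=0.95, r=20, eps=1e-3, t0=0.1):
    """Dark channel -> atmospheric light -> rough transmission ->
    gray-image guided filtering -> recovery.  I: H x W x 3 in [0, 1]."""
    # 1. Dark channel of the input image
    dark = minimum_filter(I.min(axis=2), size=patch)
    # 2. Atmospheric light: mean of I over the brightest 0.1% dark-channel pixels
    n = max(1, int(0.001 * dark.size))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # 3. Rough transmission from the normalized dark channel
    t = 1.0 - omega * minimum_filter((I / A).min(axis=2), size=patch)
    # 4. Guided filtering of t, with the gray image of I as the guide
    gray = I @ np.array([0.299, 0.587, 0.114])
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mu, tbar = mean(gray), mean(t)
    var = mean(gray * gray) - mu * mu
    a = (mean(gray * t) - mu * tbar) / (var + eps)
    b = tbar - a * mu
    t = mean(a) * gray + mean(b)
    # 5. Recovery with a transmission floor t0
    return (I - A) / np.maximum(t, t0)[..., None] + A

rng = np.random.default_rng(1)
I = rng.uniform(0.3, 0.9, size=(40, 40, 3))
J = defog(I)
```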
Equations (69.5) and (69.6) show that the guided filter preserves image edge information well (Fig. 69.2), realizing the desired edge-keeping behavior.
4 Experimental Results
The experimental results show that, compared with Fattal's work, our algorithm yields an obvious improvement in visual effect; compared with He's work, it has an obvious advantage in processing speed. The proposed approach obtains a sharp image and improves efficiency at the same time (Table 69.1).
5 Conclusion
In this chapter, we use the gray image guided filter to optimize the transmission; the optimized transmission realizes regional smoothing while keeping edge features. The experimental results show that the proposed approach optimizes the transmission rapidly; our fog removal therefore preserves image information, including edges and structure, achieves a good visual effect, and greatly shortens the processing time. The proposed method is general for 2-D images. The dark channel prior assumes that dark regions exist in the image; if a foggy image contains vast white areas, the dark channel prior fails even when the guided filter is adopted. Future work will test whether the method is suitable for 3-D images and explore ways to handle fog in large white areas.
References
Narasimhan SG, Nayar SK. Chromatic framework for vision in bad weather. Proceedings of IEEE conference on computer vision and pattern recognition. Washington D.C., USA: IEEE; 2000. pp. 598–605.
Nayar SK, Narasimhan SG. Vision in bad weather. Proceedings of the 7th IEEE international conference on computer vision. Kerkyra, Greece: IEEE; 1999. pp. 820–7.
Narasimhan SG, Nayar SK. Contrast restoration of weather degraded images. IEEE Trans Pattern Anal Mach Intell. 2003;25(6):713–24.
Schechner YY, Narasimhan SG, Nayar SK. Instant dehazing of images using polarization. Proceedings of the IEEE computer society conference on computer vision and pattern recognition. Washington D. C., USA: IEEE; 2001. pp. 325–32.
Namer E, Schechner YY. Advanced visibility improvement based on polarization filtered images. Proceedings of the polarization science and remote sensing II. San Diego, USA: SPIE; 2005. pp. 36–45.
Schechner YY, Narasimhan SG, Nayar SK. Polarization-based vision through haze. Appl Opt. 2003;42(3):511–25.
Shwartz S, Namer E, Schechner YY. Blind haze separation. Proceedings of the IEEE computer society conference on computer vision and pattern recognition. Washington D. C., USA: IEEE; 2006. pp. 1984–91.
Fattal R. Single image dehazing. ACM Trans Graph. 2008;27(3):1–9.
He KM, Sun J, Tang XO. Single image haze removal using dark channel prior. Proceedings of the IEEE conference on computer vision and pattern recognition. Miami, USA: IEEE; 2009. pp. 1956–63.
Middleton WEK. Vision through the atmosphere. Geophysics II. Berlin: Springer; 1957. pp. 254–87.
He K, Sun J, Tang X. Guided image filtering. Computer Vision–ECCV 2010. Berlin: Springer; 2010. pp. 1–14.
Acknowledgements
This work was supported in part by the Fundamental Research Funds for the Central Universities Grant.
© 2015 Springer International Publishing Switzerland
Wang, Z., Li, H., Teng, J., He, X. (2015). Fast Image Fog Removal Based on Gray Image Guided Filtering. In: Wang, W. (eds) Proceedings of the Second International Conference on Mechatronics and Automatic Control. Lecture Notes in Electrical Engineering, vol 334. Springer, Cham. https://doi.org/10.1007/978-3-319-13707-0_69