
1 Introduction

In computer vision application systems, the presence of haze or fog greatly interferes with normal operation. Humans, vehicles, and other targets in the images suffer from edge blur, color distortion, and loss of contour information. A lot of research has been done on haze removal in computer vision applications. Research on fog removal algorithms has made some achievements, and the techniques based on the physical model can be divided into two classes according to whether additional scene information is needed. One class obtains several images under different weather conditions [1-3] or with varying degrees of polarization [4-7] to remove the fog, but these methods are impractical. The other class estimates the model parameters from the image itself and restores the image, which is practical. Fattal [8] used a local window-based operation and a graphical model; however, this method relies on separating statistically uncorrelated fields.

He [9] restored a fog-degraded image by using the dark channel prior and obtained a sharp image; however, this method is computationally complex and time-consuming. In this chapter, we propose a novel type of image restoration, namely, the gray image guided filter. We observe that a fog-degraded image has a single cause of degradation, and that the degradation characteristics of the gray image are the same as those of its color image; therefore, we estimate the transmission of the atmospheric scattering model using the dark channel prior and refine the transmission with a guided filter whose guide is the gray image, thereby obtaining a sharp image. Our method preserves the edges of the input image while recovering a sharp result. Experiments show that the improved method obtains the same fog-free image more efficiently.

2 Defogging Based on Dark Channel Prior

2.1 Atmospheric Models

In computer graphics and computer vision, a very popular model describing the formation of a fog-degraded image was proposed by W. E. K. Middleton [10] in 1952; its expression is as follows:

$$ I(x)=J(x)t(x)+A(1-t(x)), $$
(69.1)

where \(I(x)\) is the observed image, \(J(x)\) is the scene radiance, \(A\) is the atmospheric light, and \(t(x)\) is the medium transmission describing the portion of the light that is not scattered and reaches the camera. The aim of fog removal is to recover \(J\), \(A\), and \(t\) from \(I\).
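
To make the model concrete, the following sketch (a minimal illustration, not code from this chapter) synthesizes a foggy image from a clear one with Eq. (69.1); the array shapes and the use of NumPy are assumptions made only for illustration.

```python
import numpy as np

def apply_fog(J, t, A):
    """Synthesize a foggy image I from the scene radiance J using Eq. (69.1).

    J : H x W x 3 float array in [0, 1], the fog-free scene radiance.
    t : H x W float array in (0, 1], the medium transmission.
    A : length-3 array, the global atmospheric light.
    """
    t = t[..., np.newaxis]                    # broadcast the transmission over the color channels
    return J * t + np.asarray(A) * (1.0 - t)  # I = J t + A (1 - t)
```

In practice the transmission is often modeled as \(t(x)=e^{-\beta d(x)}\), where \(d(x)\) is the scene depth and \(\beta \) the scattering coefficient, which is why distant objects appear more washed out.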

2.2 Dark Channel Prior

He proposed a simple but useful prior in his paper, the dark channel prior. He found that in the vast majority of non-sky local regions, there always exists at least one color channel having a very low value at some pixels, which means the minimum intensity in such regions is very low, approaching zero. Formally, for an image \(J(x)\), we define:

$$ {{J}^{dark}}(x)=\underset{color\in \{r,g,b\}}{\mathop{\min }}\,(\underset{y\in \Omega (x)}{\mathop{\min }}\,({{J}^{color}}(y))) $$
(69.2)

Equation (69.2) follows from the definition of the dark channel, and we call \({{J}^{dark}}(x)\) the dark channel of \(J\). For a fog-free image, most pixel values in its dark channel are close to zero. Conversely, if the dark channel contains many high-value pixels, these values reflect the additional contribution of the fog to the scene imaging, namely the airlight in foggy weather. Note that we only consider the condition that the input image is captured on foggy days.
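
As an illustration of Eq. (69.2), the sketch below computes the dark channel with a per-pixel channel minimum followed by a local minimum filter; the 15-pixel patch size is an assumption chosen for illustration, not a value specified in this chapter.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(J, patch=15):
    """Dark channel of Eq. (69.2): min over color channels, then min over the window Omega(x).

    J     : H x W x 3 float array, a color image.
    patch : side length of the square local window Omega(x).
    """
    min_rgb = J.min(axis=2)                     # minimum over the three color channels
    return minimum_filter(min_rgb, size=patch)  # minimum over the local window
```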

2.3 Fog Image Restoration

According to the dark channel prior, for a fog image we can estimate the atmospheric light \(A\) and the transmission \(t(x)\). In the transmission image, regions with higher values are less affected by the fog, namely, they are closer to the imaging device. Because the dark channel image is computed over local block regions, the transmission estimated from it exhibits a block effect. To better preserve image edge information, the initial transmission must be globally refined. In He’s paper, soft matting was used to optimize the initial transmission; nevertheless, soft matting is computationally complex and time-consuming. The estimation proceeds as follows: first, assume that the atmospheric light \(A\) is given and that the transmission \(\overset{\tilde{\ }}{\mathop{t}}\,(x)\) is constant in a local area; then apply the minimum operators of the dark channel to Eq. (69.1):

$$ \underset{col}{\mathop{\min }}\,(\underset{y\in \Omega (x)}{\mathop{\min }}\,(\frac{{{I}^{col}}(y)}{{{A}^{col}}}))=\overset{\tilde{\ }}{\mathop{t}}\,(x)\underset{col}{\mathop{\min (}}\,\underset{y\in \Omega (x)}{\mathop{\min }}\,(\frac{{{J}^{col}}(y)}{{{A}^{col}}}))+(1-\overset{\tilde{\ }}{\mathop{t}}\,(x)), $$
(69.3)

where \(\Omega (x)\) is the region centered at \(x\), and \(col\) represents the three color channels.

According to the dark channel prior, the \({{J}^{dark}}\) of a fog-free image approaches zero, so the first term on the right-hand side of Eq. (69.3) vanishes and the transmission can be estimated as \(\overset{\tilde{\ }}{\mathop{t}}\,(x)=1-\underset{col}{\mathop{\min }}\,(\underset{y\in \Omega (x)}{\mathop{\min }}\,({{I}^{col}}(y)/{{A}^{col}}))\).
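
A minimal sketch of this estimate follows; the factor omega, which retains a small amount of haze for depth perception, comes from He's paper (he suggests 0.95) and is an assumption here rather than a value fixed by this chapter.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_transmission(I, A, patch=15, omega=0.95):
    """Rough transmission from Eq. (69.3), assuming the dark channel of J is zero.

    I     : H x W x 3 float array, the foggy image.
    A     : length-3 array, the atmospheric light.
    omega : haze-retention factor (assumption following He's paper).
    """
    normalized = I / np.asarray(A)                             # I^col(y) / A^col per channel
    dark = minimum_filter(normalized.min(axis=2), size=patch)  # min over channels and window
    return 1.0 - omega * dark                                  # estimated rough transmission
```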

Figure 69.1e shows the transmission optimized by soft matting; as the figure shows, the optimized transmission preserves sharp edge discontinuities and captures the contours of the objects.

Fig. 69.1

Fog removal using the dark channel prior. (a) Fog image, (b) dark channel, (c) rough transmission, (d) recovery using the rough transmission, (e) optimized transmission, (f) result of recovery using the transmission optimized by soft matting

The intensity of the sky in a fog image is always very similar to the atmospheric light, so it can be used to estimate the atmospheric light \(A\). Given the atmospheric light \(A\) and the optimized transmission \(\overset{\tilde{\ }}{\mathop{t}}\,(x)\), we can obtain the fog-free image \(J(x)\) from Eq. (69.1):

$$ {{J}^{c}}(x)=\frac{{{I}^{c}}(x)-{{A}^{c}}}{\max (t(x),{{t}_{0}})}+{{A}^{c}}. $$
(69.4)
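
The sketch below illustrates this recovery step. Estimating \(A\) from the brightest pixels among the haziest 0.1% of the dark channel is one common choice consistent with He's method, but the exact selection rule and the lower bound \({{t}_{0}}=0.1\) are assumptions borrowed from his paper rather than values fixed by this chapter; \({{t}_{0}}\) prevents the division from amplifying noise where the transmission is very small.

```python
import numpy as np

def estimate_atmospheric_light(I, dark, top_fraction=0.001):
    """Pick A as the brightest pixel color among the top 0.1% of the dark channel."""
    flat_dark = dark.ravel()
    n = max(1, int(flat_dark.size * top_fraction))
    idx = np.argpartition(flat_dark, -n)[-n:]      # indices of the haziest pixels
    return I.reshape(-1, 3)[idx].max(axis=0)       # brightest color among those pixels

def recover(I, t, A, t0=0.1):
    """Recover the scene radiance J(x) with Eq. (69.4)."""
    t = np.maximum(t, t0)[..., np.newaxis]         # clamp the transmission from below
    return (I - np.asarray(A)) / t + np.asarray(A)
```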

Figure 69.1 shows the results of the fog removal algorithm on an image of size \(600\times 400\) under the following experimental conditions: Intel Pentium CPU, 2.13 GHz, 2.0 GB RAM, Windows 7. The result shows that without soft matting (Fig. 69.1d), fog removal is not satisfactory. Optimizing the transmission with soft matting (Fig. 69.1f) removes the fog well and preserves the image information, but it is time-consuming and inefficient; therefore, in this chapter we sharply improve the processing speed of the algorithm without affecting the subjective visual effect.

3 Image Fog Removal Based on Gray Image Guided Filtering

As mentioned above, we can obtain a sharp image if the transmission is optimized. In the soft matting algorithm, the optimization mainly consists of regional smoothing while preserving edge features; therefore, in this chapter another algorithm is used to optimize the transmission. We take the gray image of the original image as the guide image and filter the transmission with a variable-coefficient filter. This filtering method is called the guided filter [11].

3.1 Guided Filter

The guided filter assumes that the filtering model is linear in each local area of the image: within each local window, the filtering output is a linear transformation of the guide image. Let the guide image be \(I(x)\), the image to be filtered \(p(x)\), and the filtered image \(q(x)\); we then have:

$$ q(x)=\sum_{j\in {{\omega }_{\kappa }}}{{{W}_{j}}(I)p(j)}, $$
(69.5)
$$ q(x)={{\alpha }_{\kappa }}I(x)+{{\beta }_{\kappa }},\forall x\in {{\omega }_{\kappa }}, $$
(69.6)

where \({{\omega }_{\kappa }}\) is a local window centered at pixel \(\kappa \), and \({{W}_{j}}(I)\) is a translation-variant filter kernel determined by the guide image.

To determine the coefficients \({{\alpha }_{k}}\) and \({{b}_{k}}\), we minimize the difference between \(q\) and \(p\) within the window under the linear model above, which gives the cost function:

$$ E({{\alpha }_{k}},{{b}_{k}})=\sum_{i\in {{\omega }_{k}}}{((}{{\alpha }_{k}}{{I}_{i}}+{{b}_{k}}-{{p}_{i}}{{)}^{2}}+\varepsilon \alpha_{k}^{2}). $$
(69.7)

To prevent \({{\alpha }_{k}}\) from becoming too large, we introduce the penalty factor \(\varepsilon \). Solving the quadratic problem above for the optimum, \({{\alpha }_{k}}\) and \({{b}_{k}}\) can be obtained as follows:

$$ {{\alpha }_{k}}=\frac{\frac{1}{\omega }\sum\limits_{i\in {{\omega }_{k}}}{{{I}_{i}}{{p}_{i}}}-{{\mu }_{k}}{{\bar{p}}_{k}}}{\sigma _{k}^{2}+\varepsilon }, $$
(69.8)
$$ {{b}_{k}}={{\bar{p}}_{k}}-{{\alpha }_{k}}{{\mu }_{k}}, $$
(69.9)

where \(\omega \) is the number of pixels in the filter window, \({{\mu }_{k}}\) and \(\sigma _{k}^{2}\) are the mean and variance of \(I\) in \({{\omega }_{k}}\), and \({{\bar{p}}_{k}}\) is the mean of \(p(x)\) in \({{\omega }_{k}}\). Because each pixel is covered by many windows, the coefficients of all windows containing a pixel are averaged. Putting \({{\alpha }_{k}}\) and \({{b}_{k}}\) into Eq. (69.6), we obtain the filtered image \(q(x)\).
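
The sketch below implements Eqs. (69.6)-(69.9) for a gray guide image, using box filters for the window means; the window radius and epsilon are illustrative values, not parameters prescribed by this chapter.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=30, eps=1e-3):
    """Guided filter with a gray guide image.

    I      : H x W float guide image (here the gray image), values in [0, 1].
    p      : H x W float image to be filtered (here the rough transmission).
    radius : window radius; each window omega_k contains (2*radius + 1)^2 pixels.
    eps    : penalty factor epsilon of Eq. (69.7).
    """
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size)   # box filter = mean over the window

    mu = mean(I)                                    # mu_k, mean of I in the window
    p_bar = mean(p)                                 # mean of p in the window
    corr_Ip = mean(I * p)                           # (1/omega) * sum of I_i * p_i
    var = mean(I * I) - mu * mu                     # sigma_k^2, variance of I

    a = (corr_Ip - mu * p_bar) / (var + eps)        # Eq. (69.8)
    b = p_bar - a * mu                              # Eq. (69.9)

    # Each pixel belongs to many windows, so average the coefficients before
    # applying the linear model of Eq. (69.6).
    return mean(a) * I + mean(b)
```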

3.2 Fog Removal Based on Gray Image Guided Filter

First of all, use the dark channel method to estimate a rough transmission \(\overset{\tilde{\ }}{\mathop{t}}\,(x)\) and the atmospheric light \(A\), as described in Sect. 2. Then, use the guided filter to perform the regional smoothing while keeping the edge features: take the gray image \({{I}_{gray}}(x)\) of \(I(x)\) as the guide image and filter \(\overset{\tilde{\ }}{\mathop{t}}\,(x)\). The coefficients \({{\alpha }_{k}}\) and \({{b}_{k}}\) are obtained from Eq. (69.8) and Eq. (69.9); putting them into Eq. (69.6), we obtain the refined transmission \(t(x)\). Finally, by Eq. (69.4), the sharp image \(J(x)\) is restored.
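
Putting the steps together, the sketch below outlines the complete pipeline of this section; it reuses the illustrative helper functions introduced earlier (dark_channel, estimate_atmospheric_light, estimate_transmission, guided_filter, recover), which are assumptions of this sketch rather than code from the chapter.

```python
import numpy as np
import cv2  # used only to load the image and convert it to gray

def defog(path):
    """Fog removal: dark channel -> A and rough t -> gray-image guided filter -> Eq. (69.4)."""
    bgr = cv2.imread(path)
    I = bgr.astype(np.float64) / 255.0                            # color image in [0, 1]
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float64) / 255.0

    dark = dark_channel(I)                                        # Eq. (69.2)
    A = estimate_atmospheric_light(I, dark)                       # atmospheric light, Sect. 2.3
    t_rough = estimate_transmission(I, A)                         # rough transmission, Eq. (69.3)
    t = guided_filter(gray, t_rough)                              # optimized transmission, Sect. 3.1
    return np.clip(recover(I, t, A), 0.0, 1.0)                    # sharp image J(x), Eq. (69.4)
```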

Equations (69.5) and (69.6) show that the guided filter preserves the image edge information well (Fig. 69.2), achieving the desired edge-preserving smoothing.

Fig. 69.2

Fog removal using the gray image guided filter. (a) Fog image, (b) dark channel, (c) rough transmission, (d) recovery using the rough transmission, (e) optimized transmission, (f) result of recovery using the transmission optimized by the guided filter

4 Experimental Results

The experimental results show that, compared with Fattal’s work, our algorithm achieves an obvious improvement in visual effect; compared with He’s work, our approach has an obvious advantage in processing speed. The proposed approach obtains a sharp image and improves efficiency at the same time (Table 69.1).

Table 69.1 Processing time

5 Conclusion

In this chapter, we use the gray image guided filter to optimize the transmission; the optimized transmission realizes regional smoothing while keeping the edge features. The experimental results show that the approach proposed in this chapter optimizes the transmission rapidly; therefore, our fog removal approach maintains the image information, including the edges and structure of the image, and achieves a good visual effect. Objectively, the processing time is greatly shortened. The method proposed in this chapter is general for 2D images. The dark channel prior assumes that dark regions exist in the image; if there are vast white areas in the foggy image, the dark channel prior fails even when the guided filter is adopted. Future work will test whether the method is suitable for 3D images and explore ways to deal with fog in images containing large white areas.