1 Introduction

Outdoor vision systems are used for a variety of purposes such as navigation and surveillance. However, bad weather degrades the visual quality of captured images, complicates image processing, and reduces the performance of vision algorithms such as feature detection, stereo correspondence, tracking, segmentation and object recognition. Based on the type of visual effect, bad weather conditions can be divided into steady (haze, fog and mist) or dynamic (rain, snow, hail) [8]. Rain and snow are the most common conditions that reduce visibility. An image captured in falling rain or snow is covered with many bright streaks or white spots that confuse human vision.

Existing methods to remove rain or snow can be classified into two groups: those for videos and those for single images. For videos, Garg and Nayar developed a correlation model capturing the dynamics of rain and a physics-based motion blur model describing the photometry of rain [8, 9]. Zhang et al. presented a detection method combining temporal and chromatic properties of rain [17]. Barnum et al. modeled rain and snow streaks with a blurred Gaussian model, so that rain and snow can be detected and removed from statistical information in the frequency domain across different frames [1, 2]. Bossu et al. proposed rain removal via foreground separation, selection rules and detection based on a histogram of orientation of streaks (HOS) [3]. These methods remove rain or snow from videos well. However, they are not applicable to single images, since no temporal reference information is available, which makes the single-image case more challenging. Among single-image approaches, Kang et al. proposed a rain removal method based on image decomposition, in which the rain components are removed by dictionary learning and sparse coding [12]; however, the method cannot remove non-oriented snowflakes. Xu et al. proposed a method using the guided filter to remove rain or snow [14]. Zheng et al. proposed a method that removes rain or snow by using the guided filter to obtain a smoothed image [16]. Yi-Lei Chen and Chiou-Ting Hsu proposed a low-rank appearance model for removing rain streaks [5]. Duan-Yu Chen et al. proposed a visual-depth-guided color image rain streak removal method using sparse coding [6]. After removing rain streaks and snowflakes, objects in the image can be seen clearly.

On the other hand, the rain streaks or snowflakes in a single image can be regarded as image noise, so common denoising methods are also applicable to rain or snow removal. For example, the bilateral filter [13] is an edge-preserving smoothing filter that weights pixels by both their spatial distance and their intensity difference. The non-local means algorithm [4] is a popular image denoising method based on non-local averaging of all pixels in an image. The guided filter, proposed in [10] and [11], transfers the structures of a guidance image to the filtering output and also has the edge-preserving smoothing property. Although these denoising methods can be applied directly to single-image rain or snow removal, their results are poor and typically look artificially blurred. Nevertheless, they can serve as a good pre-processing step for many single-image rain removal methods, such as [6, 12, 14, 16] and the proposed method.

In this work, a guided L0 smoothing filter is designed based on L0 gradient minimization [15]. First, a coarse but nearly rain-free or snow-free guidance image is obtained with the traditional guided filter [10, 11]. Then the proposed guided L0 smoothing filter is applied to remove rain or snow. The structure of the paper is as follows: background information is introduced in Section 2; Section 3 describes how the rain-free or snow-free guidance image is obtained; Section 4 presents the design of the guided L0 smoothing filter; Section 5 details the framework of the proposed removal method; Section 6 reports the experimental results, comparisons with other methods, and a discussion of the limitations; the last section concludes the paper.

2 Background

2.1 The physical property of rain or snow

Generally, the size of a raindrop is 0.1-3.5 mm. Since raindrops fall quickly, they appear as rain streaks in the image, and each streak occupies only a few pixels; snowflakes behave similarly.

There are three observations on common rain streaks or snowflakes. First, because of their small size and fast speed, raindrops and snowflakes are imaged as short streaks or small spots under a normal camera exposure. Second, raindrops refract light and snowflakes are naturally white and bright, so rain streaks and snowflakes are brighter than the background. Third, rain streaks and snowflakes are small and sparse, while images of real-world scenes are usually piecewise smooth, so the degraded background can be restored from the values around the rain streaks or snowflakes.

2.2 The mathematical model of rain or snow image

The following model is commonly used to describe the formation of rain or snow image:

$$I_{in} = (1 - \alpha) I_{b} + \alpha I_{r}, \qquad 0 \le \alpha \le 1,$$
(1)

where $I_{in}$ is the input image, $I_{b}$ is the clean background image, $I_{r}$ is the rain or snow component and $\alpha$ is the scale parameter.

When $\alpha = 0$, $I_{in} = I_{b}$ and the observed image is unaffected by rain or snow; this case does not need to be handled in rain or snow removal. When $\alpha \neq 0$, $I_{in} = (1 - \alpha) I_{b} + \alpha I_{r}$ and, since the rain or snow component is brighter than the background, $I_{in} > I_{b}$. The main goal of rain or snow removal is to recover the background $I_{b}$ from the observed image $I_{in}$. This is an ill-posed problem, so additional prior information is needed to recover the background.
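To make the imaging model concrete, the following minimal sketch composes a rainy or snowy observation from a clean background according to (1). It is only an illustration of the model, not part of the removal method; the function name and the assumption that images are float arrays in [0, 1] are ours.

```python
import numpy as np

def compose_rain_image(I_b, I_r, alpha):
    """Synthesize an observation according to Eq. (1):
    I_in = (1 - alpha) * I_b + alpha * I_r.

    I_b   : clean background (float array in [0, 1], an assumption)
    I_r   : rain/snow layer of the same shape (bright streaks, dark elsewhere)
    alpha : scalar blending weight, 0 <= alpha <= 1
    """
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * I_b + alpha * I_r

# Where streaks fall, I_r exceeds I_b, so I_in >= I_b there; this brightness
# bias is what the pixel-wise minimum in Section 5 exploits.
```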

2.3 The edge characteristic of rain or snow

Based on the relation between the pixels of edges and their surrounding pixels, all edges can be classified into three categories (Fig. 1): step edges, ridge edges and valley edges.

Fig. 1 Intensity profiles of the three types of edges along the horizontal direction

In Fig. 1, let each point of the edge be the center of a corresponding window of size k × k, where k = p, p + 1, ..., and p is the width of the edge. We calculate and compare the means and variances in these windows in order to distinguish the three kinds of edges. For step edges, the mean and variance remain almost unchanged even when the window grows. For ridge edges and valley edges, however, the means and variances change with the window size. Specifically, as the window grows, the mean of a ridge edge decreases toward the value of its neighboring pixels and its variance shrinks toward zero, while the mean of a valley edge increases toward its neighboring pixels and its variance also shrinks toward zero.
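A minimal sketch of this window statistic, assuming grayscale float images and a uniform box filter for the local mean; the function names and window sizes are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats(img, k):
    """Local mean and variance of a grayscale image over a k x k window."""
    mean = uniform_filter(img, size=k)
    mean_sq = uniform_filter(img * img, size=k)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return mean, var

def stats_over_scales(img, sizes=(3, 5, 7, 9)):
    """Collect local means/variances for growing windows.

    Step edges: both statistics stay roughly constant as the window grows.
    Ridge edges (candidate rain streaks / snowflakes): the mean drops toward
    the surrounding background and the variance shrinks toward zero.
    Valley edges: the mean rises toward the surroundings, variance shrinks.
    """
    return {k: local_stats(img, k) for k in sizes}
```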

Compared with the surrounding pixels, rain streaks and snowflakes are small and brighter. Therefore, ridge edges of limited size are regarded as rain streaks or snowflakes, while most of the other edges in the background are step edges or valley edges. In this way we can distinguish rain streaks or snowflakes from the background and from the background texture.

3 The rain-free or snow-free guidance image

3.1 Guided filter

The guided filter is an edge-preserving smoothing filter that works well near edges [10, 11]. The guidance image can be the observed image itself or another reference image. Moreover, the guided filter is a fast, non-approximate, linear-time algorithm whose computational complexity is independent of the filtering kernel size. The guided filter formulates the output image $I_{out}$ as follows:

$$I_{out} = \frac{1}{|w|}\sum_{k \mid i \in w_{k}} (a_{k} I_{i} + b_{k}),$$
(2)

where $w_{k}$ is a window centered at pixel $k$, $|w|$ is the number of pixels in the window, $I$ is the guidance image, and $a_{k}$ and $b_{k}$ are defined as:

$$a_{k} = \frac{\frac{1}{|w|}\sum_{i \in w_{k}} I_{i} p_{i} - \mu_{k}\overline{p}_{k}}{\sigma_{k}^{2} + \varepsilon}, \qquad b_{k} = \overline{p}_{k} - a_{k}\mu_{k},$$
(3)

where $\mu_{k}$ and $\sigma_{k}^{2}$ are the mean and variance of $I$ in $w_{k}$, $p$ is the input image, $\overline{p}_{k}$ is the mean of $p$ in $w_{k}$, and $\varepsilon$ is a regularization parameter that controls the structural similarity. The larger $\varepsilon$ is, the smoother the output will be.
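For reference, a compact grayscale implementation of the guided filter of (2)-(3) can be sketched as follows, assuming float images in [0, 1] and box filtering via `scipy.ndimage.uniform_filter`; the parameter defaults are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=8, eps=1e-3):
    """Grayscale guided filter following Eqs. (2)-(3).

    I      : guidance image (float in [0, 1])
    p      : filtering input image
    radius : window half-size, so each window w_k has (2*radius + 1)^2 pixels
    eps    : regularization epsilon; larger values give a smoother output
    """
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    var_I = corr_II - mean_I * mean_I        # sigma_k^2
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)               # a_k in Eq. (3)
    b = mean_p - a * mean_I                  # b_k in Eq. (3)

    mean_a = uniform_filter(a, size)         # average the coefficients over
    mean_b = uniform_filter(b, size)         # all windows covering each pixel
    return mean_a * I + mean_b               # Eq. (2)
```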

3.2 The low frequency part

When the input image is also used as the guidance image ($p = I$), $a_{k} = \frac{\sigma_{k}^{2}}{\sigma_{k}^{2} + \varepsilon}$ and $b_{k} = (1 - a_{k})\mu_{k} = \frac{\varepsilon}{\sigma_{k}^{2} + \varepsilon}\mu_{k}$, and the output of the guided filter becomes

$$I_{out} = \frac{1}{|w|}\sum_{i \in w_{k}} \left(\frac{\sigma_{k}^{2}}{\sigma_{k}^{2} + \varepsilon} I_{i} + \frac{\varepsilon}{\sigma_{k}^{2} + \varepsilon}\mu_{k}\right).$$
(4)

After guided filtering, the low frequency parts of the three kinds of edges change differently. As shown in Fig. 2, we make the following observations:

1) The low frequency part of a step edge is still a step edge, but its range becomes smaller, i.e., the step edge becomes smoother after guided filtering (see the step edges in Fig. 2).

2) If a small ridge edge is unaffected by other edges, its variance is close to 0, so the ridge edge disappears and tends toward the background. For a large ridge edge, however, it is hard for $I_{out}$ to reach the background value, so the low frequency part of a large ridge edge remains but is smoothed (see the ridge edges in Fig. 2).

3) The low frequency value of a valley edge becomes larger than in the input (see the valley edges in Fig. 2).

Fig. 2 Intensity change of the three types of edges after guided filtering along the horizontal direction

Based on the above analysis, we conclude that the three types of edges can be distinguished in the low frequency part produced by the guided filter: step edges are retained but slightly smoothed (Fig. 3a); small ridge edges disappear (Fig. 3b), while large ridge edges are retained (Fig. 3c); and the pixel values of valley edges become higher in the low frequency part (Fig. 3d).

Fig. 3 Intensity change of the three types of edges after guided filtering along the horizontal direction (the blue line shows the pixel values of the observed image; the red line shows the corresponding low frequency part obtained by the guided filter)

However, rain or snow cannot be removed directly with this traditional guided filter. If the input image itself is used as the guidance image, only a rough result is obtained: the edges in the output are pulled toward the edges of the guidance image and no longer match the original edges of the input image. The guided filter alone therefore does not work well in this case.

Nonetheless, the guided filter is a good pre-processing step for rain removal. The low frequency part keeps only the contour information of the background and discards the edges caused by rain or snow, so it can serve as the guidance image in the proposed method. Using this guidance image together with the original observed image, rain or snow can be removed by the proposed guided L0 smoothing filter.

4 Guided L0 smoothing filter

In this section we first briefly review the theory of L0 gradient minimization [15] and its solver. Then, based on the L0 gradient minimization model, the guided L0 smoothing filter is proposed. Finally, we present its application to rain or snow removal.

4.1 L0 gradient minimization

L0 gradient minimization directly optimizes the L0 norm of the image gradients to obtain a piecewise constant output image [15]. It effectively sharpens prominent edges by increasing the steepness of transitions while eliminating low-amplitude structures. In [15], the following minimization problem is given:

$$\min_{f} \; ||f - f^{\ast}||_{2}^{2} + \lambda ||\nabla f||_{0},$$
(5)

where $f$ is the output, $\nabla f$ is the gradient of $f$, $f^{\ast}$ is the observed image, and $\lambda$ is a weight controlling the level of detail. The first term enforces fidelity to the observed image; the second constrains the sparsity of the gradient magnitude of the output.

To solve the objective function, an auxiliary variable $\delta$ is introduced for $\nabla f$, so (5) is rewritten as the following minimization:

$$\min_{f,\,\delta} \; ||f - f^{\ast}||_{2}^{2} + \beta ||\delta - \nabla f||_{2}^{2} + \lambda ||\delta||_{0},$$
(6)

where $\beta$ controls the similarity between $\delta$ and $\nabla f$, the smoothing level is controlled by $\lambda$, and $\delta$ is a vector with two components, $\delta_{x}$ and $\delta_{y}$. Equation (6) is then solved by alternately minimizing over $\delta$ and $f$ [15].

4.2 Guided L0 smoothing filter

Recently, Xu et al. proposed L0 gradient minimization, which sharpens and keeps prominent edges while smoothing low-amplitude structures in line with the observed image [15]. In the present case (rain or snow removal), however, we do not want to keep all the prominent edges, since the edges caused by rain or snow must also be smoothed, so pure L0 gradient minimization fails in this task.

Consequently, a guided L0 smoothing filter is proposed in this paper. It combines the property of the guided filter with L0 gradient minimization. Unlike the original L0 gradient minimization, the edges of the result are preserved or smoothed according to the rain-free/snow-free guidance image. Specifically, the edges of the observed image are preserved where the corresponding locations in the guidance image have large gradient magnitudes, and smoothed where the corresponding gradient magnitudes are low.

As Fig. 4 shows, the final iteration of L0 gradient minimization only yields the sparsity pattern of the gradients (i.e., $H_{last}$), not their magnitudes. The penalty $\beta$ compensates for this within the iterations of L0 gradient minimization, but the gradient magnitudes of the guidance image still cannot be obtained directly. In this paper, the gradient information of the guidance image is transferred to the output image by solving the iterative process described below, so the prominent edges of the observed image are protected. We therefore call the proposed algorithm the guided L0 smoothing filter.

Fig. 4 Comparisons of L0 gradient minimization [15], the guided filter [10, 11] and the proposed guided L0 smoothing filter: a observed image; b guidance image; c the first iteration of [15] applied to (b) (λ = 0.0005); d the final iteration of [15] applied to (b) (λ = 0.0005); e (a) smoothed with (d); f (a) smoothed by [15]; g (b) smoothed by [15]; h (a) smoothed with the guided filter [10, 11]; i (a) smoothed with the proposed filter

First, with $f^{k}$ known, we optimize $\delta^{k}$ as in (7), where $\lambda$ must be small enough to keep all gradient information:

$$\min_{\delta^{k}} \; \beta^{k}||\nabla f^{k} - \delta^{k}||_{2}^{2} + \lambda ||\delta^{k}||_{0}, \qquad k = 1, 2, 3, \ldots$$
(7)

We can solve (7) as follows [15]:

$$\delta^{k} = \left\{ \begin{array}{ll} 0 & |\nabla f^{k}| \le \lambda/\beta \\ \nabla f^{k} & \text{otherwise} \end{array} \right.$$
(8)
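A small sketch of this thresholding step, assuming, as in the original L0 smoothing formulation [15], that the test is applied to the squared gradient magnitude; the variable names are ours.

```python
import numpy as np

def shrink_gradients(gx, gy, lam, beta):
    """Hard-threshold the gradient field of f^k as in Eq. (8).

    Gradients with small magnitude are set to zero; the rest are kept.
    Following the L0 smoothing convention of [15], the test is applied to
    the squared gradient magnitude (an assumption on our part).
    Returns the thresholded components delta_x, delta_y and the binary
    mask H used in Eqs. (10) and (12).
    """
    mag2 = gx * gx + gy * gy
    keep = mag2 > lam / beta                 # H = 1 where delta^k != 0
    return np.where(keep, gx, 0.0), np.where(keep, gy, 0.0), keep
```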

Second, with $f^{k}$ and $\delta^{k}$ known, we solve for $f^{k+1}$ by:

$$\min_{f^{k+1}} \; ||f^{k+1} - f^{\ast}||_{2}^{2} + \beta^{k}||\delta^{k} - \nabla f^{k+1}||_{2}^{2}.$$
(9)

Then the expression (9) is equivalent to (10):

$$\min_{f^{k+1}} \; ||f^{k+1} - f^{\ast}||_{2}^{2} + \beta^{k}||\nabla f^{k+1} - H .* \nabla f^{k}||_{2}^{2}, \qquad H = \left\{ \begin{array}{ll} 0 & \delta^{k} = 0\\ 1 & \delta^{k} \ne 0 \end{array} \right. \qquad (k = 1, 2, 3, \ldots).$$
(10)

The objective function is quadratic and therefore convex, so it can be solved with least squares and the Fourier transform. The solution of (9), or equivalently (10), is:

$$f^{k+1} = \mathrm{ifft}\!\left(\frac{\mathrm{fft}(f^{\ast}) + \beta\left(\mathrm{fft}(\partial_{x}^{T})\,\mathrm{fft}(\delta_{x}^{k}) + \mathrm{fft}(\partial_{y}^{T})\,\mathrm{fft}(\delta_{y}^{k})\right)}{\mathrm{fft}(1) + \beta\left(\mathrm{fft}(\partial_{x}^{T})\,\mathrm{fft}(\partial_{x}) + \mathrm{fft}(\partial_{y}^{T})\,\mathrm{fft}(\partial_{y})\right)}\right),$$
(11)

where $\mathrm{fft}$ is the fast Fourier transform operator, $\mathrm{ifft}$ is its inverse, and $\partial_{x}$ and $\partial_{y}$ denote the difference operators in the horizontal and vertical directions, respectively.
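A hedged sketch of this Fourier-domain solve, assuming periodic boundary conditions and circular forward-difference operators; it follows the standard L0 smoothing solver, and the exact discretization used in the paper may differ.

```python
import numpy as np

def solve_quadratic_fft(target, dx, dy, beta):
    """Solve the quadratic subproblem of Eq. (9)/(11) (and, with masked
    gradients, Eq. (13)) in the Fourier domain.

    target : the fidelity image (f* for Eq. (11), s* for Eq. (13))
    dx, dy : target gradient fields (delta_x^k, delta_y^k or H .* grad s^k)
    beta   : current penalty weight
    """
    h, w = target.shape
    # Circular forward-difference kernels, matching np.diff with wrap-around.
    fx = np.zeros((h, w)); fx[0, 0] = -1.0; fx[0, -1] = 1.0
    fy = np.zeros((h, w)); fy[0, 0] = -1.0; fy[-1, 0] = 1.0
    Fx, Fy = np.fft.fft2(fx), np.fft.fft2(fy)

    numer = (np.fft.fft2(target)
             + beta * (np.conj(Fx) * np.fft.fft2(dx)
                       + np.conj(Fy) * np.fft.fft2(dy)))
    denom = 1.0 + beta * (np.abs(Fx) ** 2 + np.abs(Fy) ** 2)
    return np.real(np.fft.ifft2(numer / denom))
```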

Finally, with $s^{k}$ and $\delta^{k}$ known, we solve for $s^{k+1}$ from (12):

$$\min_{s^{k+1}} \; ||s^{k+1} - s^{\ast}||_{2}^{2} + \beta^{k}||\nabla s^{k+1} - H .* \nabla s^{k}||_{2}^{2}, \qquad H = \left\{ \begin{array}{ll} 0 & \delta^{k} = 0\\ 1 & \delta^{k} \ne 0 \end{array} \right. \qquad (k = 1, 2, 3, \ldots).$$
(12)

The solution of (12) is:

$$s^{k+1} = \mathrm{ifft}\!\left(\frac{\mathrm{fft}(s^{\ast}) + \beta\left(\mathrm{fft}(\partial_{x}^{T})\,\mathrm{fft}(H .* \nabla_{x} s^{k}) + \mathrm{fft}(\partial_{y}^{T})\,\mathrm{fft}(H .* \nabla_{y} s^{k})\right)}{\mathrm{fft}(1) + \beta\left(\mathrm{fft}(\partial_{x}^{T})\,\mathrm{fft}(\partial_{x}) + \mathrm{fft}(\partial_{y}^{T})\,\mathrm{fft}(\partial_{y})\right)}\right).$$
(13)

The entire procedure is summarized in Algorithm 1. In the proposed method, most of the parameter settings are the same as those of L0 gradient minimization, i.e., $\beta_{0}$, $\beta_{\max}$ and $\kappa$ are fixed as $2\lambda$, $10^{5}$ and 2, respectively, and $\lambda$ is set to 0.0005. Figure 4 compares L0 gradient minimization, the guided filter and the proposed guided L0 smoothing filter. Figure 4i is the result of the proposed filter: its prominent edges are closer to the observed image than those of the guided filter result [10, 11], while it is as smooth as the guided filter result.

Algorithm 1: Guided L0 smoothing filter

Input: observed image $s^{\ast}$, guidance image $f^{\ast}$, parameters $\lambda$, $\beta_{0}$, $\beta_{\max}$, rate $\kappa$

Initialization: $f^{1} \leftarrow f^{\ast}$, $s^{1} \leftarrow s^{\ast}$, $\beta^{1} \leftarrow \beta_{0}$, $k \leftarrow 1$

repeat:

with $f^{k}$, solve for $\delta^{k}$ in (8);

with $f^{k}$ and $\delta^{k}$, solve for $f^{k+1}$ in (11);

with $s^{k}$ and $\delta^{k}$, solve for $s^{k+1}$ in (13);

$\beta \leftarrow \kappa\beta$, $k \leftarrow k + 1$;

until $\beta \ge \beta_{\max}$

Output: the result of the guided L0 smoothing filter, $s$.
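Putting the pieces together, the following is a minimal sketch of Algorithm 1 that reuses the `shrink_gradients` and `solve_quadratic_fft` helpers sketched above; the circular-difference boundary handling and the default parameter values mirror Section 4.2 but remain our assumptions, not the authors' reference implementation.

```python
import numpy as np

def guided_L0_smoothing(s_star, f_star, lam=5e-4, beta_max=1e5, kappa=2.0):
    """Sketch of Algorithm 1, reusing shrink_gradients and solve_quadratic_fft.

    s_star : observed image, f_star : rain-free/snow-free guidance image.
    The mask H derived from the guidance image decides which gradients of
    the observed image are kept (H = 1) or smoothed away (H = 0).
    """
    beta = 2.0 * lam                                   # beta_0 = 2 * lambda
    f, s = f_star.copy(), s_star.copy()
    while beta < beta_max:
        # Circular forward differences of the current guidance estimate f^k.
        gx = np.diff(f, axis=1, append=f[:, :1])
        gy = np.diff(f, axis=0, append=f[:1, :])
        dx, dy, keep = shrink_gradients(gx, gy, lam, beta)      # Eq. (8)
        f = solve_quadratic_fft(f_star, dx, dy, beta)           # Eq. (11)

        # Gradients of the current output estimate s^k, masked by H.
        sx = np.diff(s, axis=1, append=s[:, :1])
        sy = np.diff(s, axis=0, append=s[:1, :])
        s = solve_quadratic_fft(s_star, np.where(keep, sx, 0.0),
                                np.where(keep, sy, 0.0), beta)  # Eq. (13)
        beta *= kappa
    return s
```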

5 The framework of the proposed method

Figure 5 shows the framework of the proposed method. First, the low frequency part of the observed image is obtained with the traditional guided filter, i.e., we obtain a blurred guidance image without rain or snow. Because rain streaks are small and sparse and therefore hardly cover the background, while the observed image still contains the real background information, the proposed guided L0 smoothing filter is then used to restore the rain-free/snow-free result. However, valley edges do not need to be restored, because they do not belong to rain streaks or snowflakes. Finally, a pixel-wise minimum is therefore taken between the observed image and the rain-free or snow-free result to obtain the final refined result.
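A compact end-to-end sketch of this pipeline for a grayscale image, reusing the `guided_filter` and `guided_L0_smoothing` sketches from the previous sections; the parameter values are illustrative only.

```python
import numpy as np

def remove_rain_or_snow(observed, radius=8, eps=1e-3, lam=5e-4):
    """End-to-end sketch of the framework in Fig. 5 (grayscale case).

    1. Self-guided filtering gives a blurred, nearly rain-free guidance image.
    2. The guided L0 smoothing filter restores the background from the
       observed image under that guidance.
    3. A pixel-wise minimum with the observed image keeps valley edges,
       which are not rain or snow, unchanged in the refined result.
    """
    guidance = guided_filter(observed, observed, radius=radius, eps=eps)
    restored = guided_L0_smoothing(observed, guidance, lam=lam)
    return np.minimum(observed, restored)
```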

Fig. 5 Block diagram of the proposed removal method

6 Experimental results

Rain streaks rarely appear in the horizontal direction, and their intensities are higher than those of their horizontally adjacent pixels. Hence, all rain streak removal experiments are carried out in the horizontal direction, whereas snowflake removal experiments are performed in both the horizontal and vertical directions; one plausible way to realize this directional restriction is sketched below.
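The following variant of the thresholding step in (8) illustrates one way to restrict processing to the horizontal direction for rain while using both directions for snow; the paper does not spell out the exact mechanism, so the `mode` argument and this particular rule are our assumptions.

```python
import numpy as np

def shrink_gradients_directional(gx, gy, lam, beta, mode="rain"):
    """Orientation-aware variant of the thresholding step in Eq. (8).

    mode="rain": streaks are near-vertical, so only the horizontal
    differences gx are thresholded and gy is left untouched.
    mode="snow": both directions are thresholded, as in Eq. (8).
    """
    if mode == "rain":
        keep = gx * gx > lam / beta
        return np.where(keep, gx, 0.0), gy, keep
    mag2 = gx * gx + gy * gy
    keep = mag2 > lam / beta
    return np.where(keep, gx, 0.0), np.where(keep, gy, 0.0), keep
```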

6.1 Results of the proposed rain and snow removal method

The proposed method is tested on the most commonly used data. In Fig. 6, the left column shows the observed images, the middle column shows the backgrounds restored by the proposed method, and the right column shows the estimated rain streaks or snowflakes, obtained by subtracting our result from the observed image. Even though some details are lost, such as small ridge edges or some edge pixels, rain or snow is estimated with a low error rate by the proposed method. Thus our method improves the visual quality of rain or snow images and yields an almost clean and accurate background.

Fig. 6 The results of the proposed method and the estimated rain streaks or snowflakes

6.2 Comparisons

In Fig. 7 we compare our rain removal result with the methods of Xu et al. [14] and the coauthors' previous work [16]. The proposed method yields better visual quality than the compared methods. In particular, it does not create blurred artifacts, and the restored image is closer to the background of the observed image. For example, the results in Fig. 7b and c are not clear, while the proposed result is clearer.

Fig. 7 Comparison of rain removal results with other methods

Figure 8 compares the snow removal results obtained by the previous methods [14, 16] with that of the proposed method. Clearly, the proposed method removes snowflakes much more accurately than the previous methods.

Fig. 8 Comparison of snow removal results with other methods

In Fig. 9, the removal results of common denoising methods and state-of-the-art rain removal methods are compared with those of the proposed method. As the results show, the proposed method works better than the common denoising methods (bilateral filter [13], K-SVD [7], non-local means filter [4], guided filter [10, 11]), with a clear improvement in both the removal effect and the restored background (see, for example, the zoomed-in red box). Compared with the state-of-the-art single-image rain removal methods, although the proposed method cannot remove all the rain streaks as the MCA method [12] and the method of [10, 11] do, its result is clearer than those of the comparison methods [10-12]. Moreover, although the methods of [12, 14] enhance the details of the background, the proposed method preserves naturalness better and its values are closer to the observed image (see the zoomed-in red box).

Fig. 9 Comparison of rain removal results with other methods

6.3 Limitations

The proposed method relies on the fact that the regions covered by rain or snow are small and brighter than the background, which leads to some limitations. Figure 6 (third row) shows a case in which the rain streaks are so dense that they cover most of the background; the background cannot be found in any patch, so the rain streaks cannot be completely removed. In Fig. 6 (fourth row), the rain streaks are very bright, while the window frames and branches are also small and brighter than their adjacent pixels, so the proposed method cannot remove all the rain streaks without losing background information. These problems remain challenging and are worthy of further research in this field.

7 Conclusions

In this work, an improved rain and snow removal method is proposed, built around a newly designed guided L0 smoothing filter. The method is based on the fact that small rain streaks or snowflakes are brighter than their adjacent pixels. After guided filtering, the low frequency parts of the three kinds of edges can be distinguished by their means and variances within a window [16]. The observed image and its low frequency part are taken as the inputs of the designed filter to obtain a rough result, and a pixel-wise minimum between the rough result and the observed image then gives the final refined result. The results show that the proposed approach is more effective for rain and snow removal than recent methods.