1 Introduction

As the Fourier transforms of common low-pass filters contain zeros, these filters are not invertible. Discrete Fourier transform (DFT)-based inverse filtering [15] and deconvolution methods [2, 11] are commonly used for estimating the latent image from blurred and noisy images. However, these methods do not restore the original image precisely and often introduce artifacts. Nevertheless, we show that when no noise is added after the blurring, and under specific constraints on the filter weights and the image's bit depth, the original image can still be restored from its filtered version. Bilateral transition between the original image and its smoothed version is a useful capability for many image processing applications. For example, in image editing [1], users might want to 'undo' their smoothing filter or replace it with an edge-preserving denoising method [3, 20]. In general, users may have different preferences regarding the level of sharpness and noise, and the ability to return to the original image is generally preferable to sharpening the smoothed image. In the current study, we show how a family of filters can be fully reversed to recover the original image to machine precision. We also highlight the method's constraints and limitations.

1.1 Related Work

There are several common methods for estimating the original image from its smoothed version when the filter kernel is given. Inverse filtering [15] is the simplest and fastest: it uses the DFT of the kernel and of the filtered image to estimate the original image. A threshold is generally used to replace zero or very small DFT values, to avoid infinities or extremely high values in the inverted DFT kernel elements. The main disadvantage of this method is that it performs poorly in the presence of noise. The Wiener filter [11] offers an optimal trade-off, in terms of mean square error, between inverse filtering and noise smoothing. However, when the blurring filter is singular, the Wiener filter actually amplifies the noise. The wavelet-based denoising scheme of [8] alternates between Fourier-domain inverse filtering and wavelet-domain image denoising. Although this approach improves performance, it is not applicable when the blurring function is not invertible. Furthermore, since the two steps are separate, there is no control over the overall performance of the restoration. Neelamani et al. [17] proposed a wavelet-based deconvolution technique for ill-conditioned systems that employs both Fourier-domain Wiener-like regularization and wavelet-domain regularization. Several recent methods use priors based on natural image statistics to regularize the ill-posed problem of non-blind deconvolution. A common approach that gives good results in a reasonable time is to assume Laplacian marginal distributions, which allow a number of fast L1 and related TV-norm methods to be deployed [19, 22]. Wang et al. [22] used a minimization technique known as half-quadratic splitting, originally proposed by Geman and colleagues [9, 10], and showed how it could be used with a total-variation (TV) semi-norm to deconvolve images. However, real-world images have been shown to have heavier tails than a Laplacian. Levin et al. [14] and Joshi et al. [12] applied hyper-Laplacian image priors to non-blind deconvolution problems, using iteratively re-weighted least squares (IRLS) [21] to solve for the deblurred image. Krishnan and Fergus [13] proposed an efficient scheme using a similar approach. Their algorithm uses an alternating minimization scheme in which the non-convex part of the problem is solved in one phase, followed by a quadratic phase that can be efficiently solved in the frequency domain using FFTs.

In the current study, we propose a method for exact non-blind deconvolution, assuming zero noise after the filtering. The method takes a different approach to restoring the original image: instead of inverting the kernel, we formulate the filtered image as a blending of translated replications of the original image. Using this formulation, our method iteratively eliminates these replications. To the best of our knowledge, this is the first paper to show a method, and the accompanying constraints, for exact non-blind deconvolution. For a small amount of additive Gaussian noise, when filtering with average kernels, the method also obtains superior results compared to classical and state-of-the-art non-blind deconvolution methods.

Fig. 1

One-dimensional illustration of the reversible filtering restoration process. N is the filter size. The first iteration is a one-dimensional derivative of the signal. Each subsequent iteration removes a repetition while generating a new, more distal one. When the repetition crosses the image limit, the image is fully restored

The current paper is organized as follows: Section 2 starts by describing the method of restoring the original image from an average-filtered image. Since the 2D reversible filter is separable, the restoration is performed in each direction separately, using one-dimensional filters, one direction after the other. For simplicity, we start with a one-dimensional kernel formulation of the method. Next, the method is extended to higher-dimensional kernels and to approximated Gaussian filters. Thereafter, we provide the proof for reversible kernels with two nonzero weights. Section 3 illustrates the method and compares it to inverse filtering and to other deconvolution methods. Section 4 highlights the method's limitations. A list of potential applications is suggested in Sect. 5, while Sect. 6 provides the summary and conclusions.

2 Method

We start by formulating a moving average filter in one direction as a blending of several replications of the original image, with a constant translation between successive replications. The idea of the method is to iteratively update the image by subtracting its own translated replications while generating new, more distal ones. The original image is fully restored after the last newly generated replication crosses the image limits. Figure 1 provides a one-dimensional illustration of the method. The current section provides the mathematical formulation of the problem, the proposed solution and a proof of the method's bit-exact image restoration.

2.1 Reversing a Moving Average Filter

Given the filtered image and the known kernel size as input, and using the above formulation, we now define the problem of obtaining the original image.

Problem statement

Given an image \(\hat{I}(x)\) that consists of an average of multiple equal images with a fixed relative translation

$$\begin{aligned} \hat{I}(x)=\frac{1}{N}\sum _{i=0}^{N-1}{I(x-i)} \end{aligned}$$
(1)

where \(N>1\) is the filter size.

We would like to obtain the original image I(x).

For simplicity, in this section, we omit the y index since the processing is one dimensional. In the following sections, x denotes a two-dimensional vector of coordinates.

Proposition

Denote the image size by \(\varOmega \). Applying the following operation \(k\ge \lceil \log _2 (\varOmega /N) \rceil +1\) times

$$\begin{aligned} \hat{I}_{k+1}(x)= \left\{ \begin{array}{ll} \hat{I}_k(x)-\hat{I}_k(x-1), &{}\quad k=1 \\ \hat{I}_k(x)+\hat{I}_k(x-2^{k-2}N), &{}\quad k>1 \end{array}\right. \end{aligned}$$
(2)

will result in

$$\begin{aligned} \hat{I}_{k+1}(x)=\frac{1}{N} I(x)-\frac{1}{N}I(x-2^{k-1}N) \end{aligned}$$
(3)

Since for \(k\ge \lceil \log _2(\varOmega /N) \rceil +1\) the second term \(I(x-2^{k-1}N)\), with \(2^{k-1}N\ge \varOmega \), is outside the image limits for all pixels \(0<x<\varOmega \), we get

$$\begin{aligned} I(x)=N\hat{I}_{k+1}(x). \end{aligned}$$
(4)

Proof

We prove the proposition using induction. First, we show that for \(k=1, 2, 3\) the proposition holds.

For \(k=1\), we get

$$\begin{aligned} \begin{aligned} \hat{I}_2(x)&=\hat{I}_1(x)-\hat{I}_1(x-1)\\&=\frac{1}{N}\sum _{i=0}^{N-1}{I(x-i)}-\frac{1}{N}\sum _{i=1}^{N}{I(x-i)}\\&=\frac{1}{N} I(x)-\frac{1}{N}I(x-N) \end{aligned} \end{aligned}$$
(5)

Going one step forward to \(k=2\), we get

$$\begin{aligned} \begin{aligned} \hat{I}_3(x)&= \hat{I}_2(x)+\hat{I}_2(x-2^0N)\\&=\frac{1}{N}I(x)-\frac{1}{N}I(x-N)\\&\quad +\frac{1}{N}I(x-N)-\frac{1}{N}I(x-2N)\\&=\frac{1}{N}I(x)-\frac{1}{N}I(x-2N) \end{aligned} \end{aligned}$$
(6)

For \(k=3\), we get

$$\begin{aligned} \begin{aligned} \hat{I}_4(x)&=\hat{I}_3(x)+\hat{I}_3(x-2^1N)\\&=\frac{1}{N}I(x)-\frac{1}{N}I(x-2N)\\&\quad +\frac{1}{N}I(x-2N)-\frac{1}{N}I(x-4N)\\&=\frac{1}{N}I(x)-\frac{1}{N}I(x-4N) \end{aligned} \end{aligned}$$
(7)

Assuming that for some \(k=n\)

$$\begin{aligned} \hat{I}_{n+1}(x)=\frac{1}{N} I(x)-\frac{1}{N}I(x-2^{n-1}N) \end{aligned}$$
(8)

we will show that for \(k=n+1\)

$$\begin{aligned} \hat{I}_{n+2}(x)=\frac{1}{N} I(x)-\frac{1}{N}I(x-2^nN) \end{aligned}$$
(9)

By applying Eq. (2), we get

$$\begin{aligned} \hat{I}_{n+2}(x)=\hat{I}_{n+1}(x)+\hat{I}_{n+1}(x-2^{n-1}N) \end{aligned}$$
(10)

Substituting the induction assumption into the above for each of the two terms, we get

$$\begin{aligned} \begin{aligned} \hat{I}_{n+2}(x)&=\frac{1}{N}I(x)-\frac{1}{N}I(x-2^{n-1}N)\\&\quad +\frac{1}{N}I(x-2^{n-1}N)\\&\quad -\frac{1}{N}I(x-2^{n-1}N-2^{n-1}N)\\&=\frac{1}{N}I(x)-\frac{1}{N}I(x-2^nN) \end{aligned} \end{aligned}$$
(11)

\(\square \)
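As an illustration of the recursion in Eq. (2), the following NumPy sketch (the function names and the zero-padding boundary assumption are ours, not from the paper) filters a one-dimensional signal with a causal length-N average and then restores it:

```python
import numpy as np

def forward_average(I, N):
    """Causal moving average: f[x] = (1/N) * sum_{i=0}^{N-1} I[x-i], zeros outside."""
    Ipad = np.concatenate([np.zeros(N - 1), I.astype(float)])
    return np.array([Ipad[x:x + N].mean() for x in range(len(I))])

def shift(g, s):
    """g(x - s), with zeros entering from the left."""
    out = np.zeros_like(g)
    if s < len(g):
        out[s:] = g[:len(g) - s]
    return out

def reverse_average(f, N):
    """Restore I from f via the recursion of Eq. (2)."""
    omega = len(f)
    K = int(np.ceil(np.log2(omega / N))) + 1   # iterations until the replica leaves the image
    g = f - shift(f, 1)                        # k = 1: discrete derivative
    for k in range(2, K + 1):                  # k > 1: cancel the replica, push a new one out
        g = g + shift(g, 2 ** (k - 2) * N)
    return N * g
```

With N a power of two, the averages are exactly representable in floating point, so the round trip is exact.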

Fig. 2

Convolution of one-dimensional filters to generate a two-dimensional box filter and approximated Gaussian filters

Fig. 3

Illustration of the reversible filtering restoration process. The input image (upper left) was filtered with a \([5\times 5]\) average kernel. The resulting image (lower right) is identical to the original image up to machine precision

2.2 Extension to 2D Approximated Gaussian Filters

Until now, we have described the exact restoration (reversible filtering) for a one-dimensional average kernel. In the current section, we generalize the reversible filtering to kernels with uneven weights and higher dimensions. Since the filtering is formulated as a blending of transformed images that can be reversed, we can compose the transformations (or filtrations) and reverse them. The key idea is therefore to build the final filter kernel as a convolution of one-dimensional average filters, and then restore the original image by applying the restoration of these one-dimensional average filters (described in Sect. 2.1) iteratively, one filter after the other. The order of the filters in the convolution or restoration does not matter, as the operation is linear. On the other hand, the length and direction of the one-dimensional average filters composing the final filter need to be saved.

We denote these one-dimensional filters by \(N_x^{(i)}\) and \(N_y^{(j)}\), for the x and y directions, respectively, where i and j are the indices of filters in the x and y directions, respectively.

One implication of the central limit theorem [18] is that the convolution of several independent filters approximates a Gaussian filter. In this way, the filtering and reverse filtering of approximated Gaussian filters can be performed. Figure 2 demonstrates the filter generation process. Figure 3 demonstrates the iterative process of reverse filtering for a two-dimensional average filter.
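As a small numerical illustration of this construction (ours, not from the paper), convolving three length-3 averages yields a 7-tap binomial-like profile whose outer product gives a separable 2D approximated Gaussian:

```python
import numpy as np

avg3 = np.ones(3) / 3                              # one-dimensional average, N = 3
k1 = np.convolve(np.convolve(avg3, avg3), avg3)    # three convolved averages -> bell-shaped taps
K2d = np.outer(k1, k1)                             # separable 2D approximated Gaussian kernel
# k1 equals [1, 3, 6, 7, 6, 3, 1]/27; each extra convolution gets closer to a Gaussian
```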

Figure 4 shows the method’s results for two-dimensional average kernel and for two-dimensional approximated Gaussian kernels.

Fig. 4

Illustration of the reversible filtering restoration for two-dimensional average kernel (upper row) and for approximated Gaussian kernels (mid- and lower rows)

The reverse filtering can be formulated in several ways. Here, we prefer to formulate the translation using the convention of a transformation matrix M defined in homogeneous coordinates. This enables simple index writing of x and y as two-dimensional vectors and eases the generalization to linear transformations in Sect. 2.3. For example, for translations \(ty\) and \(tx\) in the vertical and horizontal directions, respectively:

$$\begin{aligned} M_y=\left[ \begin{matrix} 1 &{}\quad 0 &{}\quad 0 \\ 0&{}\quad 1 &{}\quad ty \\ 0&{}\quad 0 &{}\quad 1 \\ \end{matrix}\right] ,\quad M_x=\left[ \begin{matrix} 1 &{}\quad 0 &{}\quad tx \\ 0&{}\quad 1 &{}\quad 0 \\ 0&{}\quad 0 &{}\quad 1 \\ \end{matrix}\right] \end{aligned}$$
(12)

Therefore, a composition of two horizontal translations \(tx_1, tx_2\) will generate a translation of \(tx_1+tx_2\). It is worth mentioning that, using homogeneous coordinates, composition is implemented simply by matrix multiplication.
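In code, this composition is a single matrix product. A minimal sketch (the `translation` helper is a hypothetical name of ours):

```python
import numpy as np

def translation(tx, ty=0.0):
    """Homogeneous 3x3 translation matrix, as in Eq. (12)."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

# composing translations by 2 and 3 pixels gives a translation by 5 pixels
M = translation(2.0) @ translation(3.0)
```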

For average reverse filtering, we use translation of one pixel in each direction, i.e., \(tx=ty=1\).

For example, convolving the image with a one-dimensional average filter \(N_x=3\) translates to \(\frac{1}{3}\left( I(x)+I(Mx)+I(M^2x)\right) \), where \(M=\left[ \begin{matrix} 1 &{}\quad 0 &{}\quad 1 \\ 0&{}\quad 1 &{}\quad 0 \\ 0&{}\quad 0 &{}\quad 1 \\ \end{matrix}\right] \). Below is a description of the algorithm stages. As we loop over the x and y directions, the matrix M refers to either \(M_x\) or \(M_y\), depending on the direction of the current loop. Figure 5 illustrates the method's flow. Below is the pseudo-code for reversing approximated Gaussian filters (denoted Algorithm A).

Fig. 5

Illustration of the method's systematic flow. The downward path (blue) illustrates the filtering process. The reversible filter parameters \(N_x^{(i)},N_y^{(j)}\) (i and j denote the indices of filters in the x and y directions, respectively) are stored in the meta-data of the image's file. The upward path (green) illustrates the restoration process (Algorithm A, Sect. 2.2) (Color figure online)

figure b
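The pseudo-code figure is not reproduced here; the following NumPy sketch reflects our reading of Algorithm A, assuming causal averages with zero padding (all function names are ours, not from the paper):

```python
import numpy as np

def shift(g, s):
    """Shift along axis 0 by s, zeros entering from the start."""
    out = np.zeros_like(g)
    if s < g.shape[0]:
        out[s:] = g[:g.shape[0] - s]
    return out

def forward_1d(I, N, axis):
    """Causal length-N average along `axis`, zeros outside (used here for filtering)."""
    I = np.moveaxis(I.astype(float), axis, 0)
    out = np.zeros_like(I)
    for i in range(N):
        out[i:] += I[:I.shape[0] - i] if i else I
    return np.moveaxis(out / N, 0, axis)

def reverse_1d(f, N, axis):
    """Undo one causal length-N average along `axis` via the recursion of Eq. (2)."""
    f = np.moveaxis(f, axis, 0)
    omega = f.shape[0]
    K = int(np.ceil(np.log2(omega / N))) + 1
    g = f - shift(f, 1)                        # k = 1: discrete derivative
    for k in range(2, K + 1):                  # k > 1: cancel the nearest replica
        g = g + shift(g, 2 ** (k - 2) * N)
    return np.moveaxis(N * g, 0, axis)

def restore(f, Nx_filters, Ny_filters):
    """Algorithm A sketch: reverse each stored 1D filter, one after the other."""
    for N in Nx_filters:                       # horizontal filters N_x^(i)
        f = reverse_1d(f, N, axis=1)
    for N in Ny_filters:                       # vertical filters N_y^(j)
        f = reverse_1d(f, N, axis=0)
    return f
```

Because all the operations are convolutions, the order in which the 1D filters are reversed does not matter.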

2.3 Reversible Filters with Two Nonzero Weights

Another type of filter that can be reversed is a filter with two nonzero weights. The current section provides the proof of bit-exact reversibility for such filters.

We formulate the problem using two replications with relative translation:

Problem statement

Given an image \(\hat{I}(x)\) that consists of a weighted sum of two equal images with a relative translation u

$$\begin{aligned} \hat{I}(x)=\alpha I(x)+\beta I(x-u) \end{aligned}$$
(13)

We would like to obtain the original image I(x). We assume that \(\alpha \) is not smaller than \(\beta \); if this is not the case, we swap the roles of \(\alpha \) and \(\beta \) and negate u.

Proposition

Denote the image size by \(\varOmega \).

Applying the following operation \(k\ge \lceil \log _2 |\varOmega /u| \rceil \) times

$$\begin{aligned} \hat{I}_{k+1}(x)= \left\{ \begin{array}{ll} \hat{I}_k(x)-\left( \frac{\beta }{\alpha }\right) ^{2^{k-1}}\hat{I}_k(x-2^{k-1}u), &{}\quad k=1 \\ \hat{I}_k(x)+\left( \frac{\beta }{\alpha }\right) ^{2^{k-1}}\hat{I}_k(x-2^{k-1}u), &{}\quad k>1 \\ \end{array}\right. \end{aligned}$$
(14)

will result in

$$\begin{aligned} \hat{I}_{k+1}(x)=\alpha I(x)-\frac{\beta ^{2^k}}{\alpha ^{2^k-1}}I(x-2^ku) \end{aligned}$$
(15)

Since for \(k \ge \lceil \log _2|\varOmega /u| \rceil \) the second term \(I(x-2^ku)\) is outside the image limits for all pixels \(0<x<\varOmega \), we get

$$\begin{aligned} I(x)=\hat{I}_{k+1}(x)/\alpha . \end{aligned}$$
(16)

Proof

We prove the proposition by induction. First, we show that the proposition holds for \(k=1,2\). Setting \(k=1\) in Eq. (14) yields

$$\begin{aligned} \hat{I}_2(x)= & {} \hat{I}_1(x)-\left( \frac{\beta }{\alpha }\right) ^1\hat{I}_1(x-u) \end{aligned}$$
(17)
$$\begin{aligned} \hat{I}_2(x)= & {} {}\alpha I(x)+\beta I(x-u)\nonumber \\&-\left( \frac{\beta }{\alpha }\right) ^1[\alpha I(x-u)+\beta I(x-2u)]\nonumber \\&=\alpha I(x) -\frac{\beta ^2}{\alpha }I(x-2u) \end{aligned}$$
(18)

Going one step forward with \(k=2\), we get

$$\begin{aligned} \hat{I}_2(x)= & {} \alpha I(x) -\frac{\beta ^2}{\alpha }I(x-2u) \end{aligned}$$
(19)
$$\begin{aligned} \hat{I}_2(x-2u)= & {} \alpha I(x-2u) -\frac{\beta ^2}{\alpha }I(x-4u) \end{aligned}$$
(20)
$$\begin{aligned} \hat{I}_3(x)= & {} \hat{I}_2(x) +\left( \frac{\beta }{\alpha }\right) ^2\hat{I}_2(x-2u)\nonumber \\= & {} \alpha I(x)-\frac{\beta ^2}{\alpha }I(x-2u)\nonumber \\&+\left( \frac{\beta }{\alpha }\right) ^2\left[ \alpha I(x-2u)-\frac{\beta ^2}{\alpha }I(x-4u)\right] \nonumber \\= & {} \alpha I(x)-\frac{\beta ^4}{\alpha ^3}I(x-4u) \end{aligned}$$
(21)

Assuming that for some \(k=n\)

$$\begin{aligned} \hat{I}_{n+1}(x)=\alpha I(x)-\frac{\beta ^{2^n}}{\alpha ^{2^n-1}}I(x-2^nu) \end{aligned}$$
(22)

we will show that

$$\begin{aligned} \hat{I}_{n+2}(x)=\alpha I(x)-\frac{\beta ^{2^{n+1}}}{\alpha ^{2^{n+1}-1}}I(x-2^{n+1}u) \end{aligned}$$
(23)

From (14), we get that

$$\begin{aligned} \hat{I}_{n+2}(x)=\hat{I}_{n+1}(x)+\left( \frac{\beta }{\alpha }\right) ^{2^n} \hat{I}_{n+1}(x-2^nu) \end{aligned}$$
(24)

Substituting the induction assumption for both \(\hat{I}_{n+1}(x)\) and \(\hat{I}_{n+1}(x-2^nu)\) into (24), we get

$$\begin{aligned} \hat{I}_{n+2}(x)= & {} \alpha I(x)-\frac{\beta ^{2^n}}{\alpha ^{2^n-1}}I(x-2^nu)+\left( \frac{\beta }{\alpha }\right) ^{2^n}\nonumber \\&\times \bigg [ \alpha I(x-2^nu)-\frac{\beta ^{2^n}}{\alpha ^{2^n-1}}I(x-2^nu-2^nu)\bigg ]\nonumber \\ \end{aligned}$$
(25)

After some simple algebra, we get

$$\begin{aligned} \hat{I}_{n+2}(x)=\alpha I(x)-\frac{\beta ^{2^{n+1}}}{\alpha ^{2^{n+1}-1}}I(x-2^{n+1}u) \end{aligned}$$
(26)

The reader may notice that, once I(x) is obtained, the translated image \(I(x-u)\) can also be restored by applying

$$\begin{aligned} I(x-u)=\frac{\hat{I}(x)-\alpha I(x)}{\beta } \end{aligned}$$
(27)

Below is the pseudo-code of reversing filters with two nonzero weights (denoted by Algorithm B). \(\square \)

figure c
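The pseudo-code figure is likewise not reproduced; a one-dimensional NumPy sketch of the recursion in Eq. (14), under our zero-padding assumption and with \(\alpha \ge \beta \) (function names are ours), could look like:

```python
import numpy as np

def shift(g, s):
    """g(x - s), with zeros entering from the left."""
    out = np.zeros_like(g)
    if s < len(g):
        out[s:] = g[:len(g) - s]
    return out

def reverse_two_taps(f, alpha, beta, u):
    """Restore I from f(x) = alpha*I(x) + beta*I(x-u), zeros outside, alpha >= beta."""
    omega = len(f)
    K = int(np.ceil(np.log2(omega / u)))
    r = beta / alpha
    g = f - r * shift(f, u)                                  # k = 1
    for k in range(2, K + 1):                                # k > 1
        g = g + r ** (2 ** (k - 1)) * shift(g, 2 ** (k - 1) * u)
    return g / alpha
```

Since \(\alpha \ge \beta \), the coefficient \((\beta /\alpha )^{2^{k-1}}\) shrinks rapidly, so the recursion is numerically stable.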

2.4 Restoring an Image from Its Linear Transformed Blended Replications

The generalization in this section and the subsequent one no longer corresponds to filtering, and the restored image is not bit exact with the original one. Nevertheless, restoring an image from a sum of its linearly transformed replications, and the constraints involved, remain an interesting problem. We denote the transformation matrix in homogeneous coordinates by M.

For example, an affine transformation matrix has the form:

$$\begin{aligned} M=\left[ \begin{matrix} a_1 &{}\quad a_2 &{}\quad tx \\ a_3&{}\quad a_4 &{}\quad ty \\ 0&{}\quad 0 &{}\quad 1 \\ \end{matrix}\right] \end{aligned}$$
(28)

Using homogeneous coordinates, the composition of two linear warps is obtained by matrix multiplication. The image \(\hat{I}(x)\) consists of a weighted sum of two replications with a relative linear transformation

$$\begin{aligned} \hat{I}(x)=\alpha I(x)+\beta I(Mx) \end{aligned}$$
(29)

Since we can isolate the translation component u from the transformation M, applying the following operation \(k\ge \lceil \log _2 |\varOmega /u|\rceil \) times

$$\begin{aligned} \hat{I}_{k+1}(x)= \left\{ \begin{array}{ll} \hat{I}_k(x)-\left( \frac{\beta }{\alpha }\right) ^{2^{k-1}}\hat{I}_{k}(Mx), &{}\quad k=1 \\ \hat{I}_k(x)+\left( \frac{\beta }{\alpha }\right) ^{2^{k-1}}\hat{I}_{k}(M_{k}x), &{}\quad k>1 \\ \end{array}\right. \end{aligned}$$
(30)

where \(M_k=M_{k-1}\cdot M_{k-1}\) with \(M_1=M\) will result in

$$\begin{aligned} \hat{I}_{k+1}(x)=\alpha I(x)-\frac{\beta ^{2^k}}{\alpha ^{2^k-1}}I\left( M^{2^k}x\right) \end{aligned}$$
(31)

The translation component of the transformation \(M^{2^k}\) is \(2^ku\). Therefore, after the required number of iterations, the second term is outside the image limits.
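The repeated-squaring construction \(M_k=M_{k-1}\cdot M_{k-1}\) can be sketched directly (a small illustration of ours):

```python
import numpy as np

def squared_warps(M, K):
    """M_1 = M and M_k = M_{k-1} @ M_{k-1}, so M_k = M^(2^(k-1))."""
    Ms = [M]
    for _ in range(K - 1):
        Ms.append(Ms[-1] @ Ms[-1])
    return Ms

# for a pure translation, the translation component doubles at every squaring
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
```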

The method can also be extended to an average of multiple linearly transformed replications using composition of linear warps.

Given an image \(\hat{I}(x)\) consisting of an average of multiple equal images with a fixed relative transformation

$$\begin{aligned} \hat{I}(x)=\frac{1}{N}\sum _{i=0}^{N-1}{I(M^ix)} \end{aligned}$$
(32)

Since we can isolate the translation component u from the transformation M, applying the below operation \(k\ge \lceil \log _2 |\varOmega /u|\rceil +1 \) times

$$\begin{aligned} \hat{I}_{k+1}(x)= \left\{ \begin{array}{ll} \hat{I}_k(x)-\hat{I}_{k}(Mx), &{}\quad k=1 \\ \hat{I}_k(x)+\hat{I}_{k}(M_{k}x), &{}\quad k>1\\ \end{array}\right. \end{aligned}$$
(33)

where \(M_2=M^{N}\) and \(M_k=M_{k-1}\cdot M_{k-1}\) for \(k>2\) will result in

$$\begin{aligned} \hat{I}_{k+1}(x)=\frac{1}{N} I(x)-\frac{1}{N}I\left( M^{2^{k-1}N}x\right) \end{aligned}$$
(34)

The translation component of the transformation \(M^{2^{k-1}N}\) is \(2^{k-1}Nu\). Therefore, after the last iteration, the second term is outside the image limits.

The restoration is not bit exact because the interpolation of the images cannot be precise at the image edges. Special care at the edges may reduce the artifacts generated in the restored image. Figures 6 and 7 illustrate the restoration of an image from its affine-transformed blended replications.

Fig. 6

The method's iterative restoration process from the blended images (left) to the restored images (right). The middle row describes exact restoration

Fig. 7

Illustrative results: the blended images (left), the kernel (middle) and the restored images (right) for the following three blended-image simulations: a weighted sum of two replications with a relative affine transform (top); bit-exact restoration for an average of multiple translated replications, i.e., filtering (middle); and an average of multiple \((N=20)\) affine-transformed replications (bottom). The artifacts are related to the interpolation at the image edges. The middle column shows the kernel, which is spatially variant for affine-transformed replications

Fig. 8

Comparison of the proposed method to inverse filtering and other deconvolution methods. The first row shows the original image (left), filtered image using the proposed scheme (middle) and the kernel. The rest of the rows show pairs of restored and diff images using the various methods. The diff image color limits are \([-\,20,20]\)

Fig. 9

Comparison of the reverse filtering to inverse filtering and to deconvolution methods. The same restoration process was performed on each color channel. The first row shows the original image (left), filtered image using the proposed scheme (middle) and the kernel. The rest of the rows show pairs of restored and diff images using the various methods. The diff image color limits are \([-\,20,20]\). RMSE: Inverse filtering (20.2), Lucy (15.6), Wiener (15.9), Krishnan–Fergus (23.6), Wang (72.3), Proposed (0.0)

3 Numerical Experiments

To test the performance of the proposed method with respect to alternative restoration algorithms, we compared the results of the current scheme to the inverse filtering [15], Lucy–Richardson [2], Wiener [11], Wang et al. [22] and Krishnan–Fergus [13] deconvolution methods. The results are demonstrated in Figs. 8 and 9.

The algorithms' parameters were as follows: default MATLAB parameters with 10 iterations for the Lucy–Richardson deconvolution, and a noise-to-signal ratio (NSR) of 0.01 for the Wiener deconvolution. A threshold of 0.0001 was used for the inverse filtering. For the Wang et al. [22] algorithm, the \(\ell _2\) variant was chosen with default values according to [6, 7]. For the Krishnan–Fergus algorithm [13], \(\lambda =2\times 10^{-3}\) and \(\alpha =1/2\) were used.

As will be further detailed in Sect. 4, the proposed method requires the preservation of the initial rows and columns after the convolution. Therefore, the last rows and columns are cropped to match the original image size. For the inverse filtering, the blurring was performed in the frequency domain as it yields better restoration results. For the rest of the deconvolution methods, the original image size was preserved by taking the central part of the convolution after filtering with zero-value boundary handling.

The root-mean-square error (RMSE) of the restored image with respect to the original image for the noiseless scenario was computed as a function of the kernel size. The results are illustrated in Fig. 10 and summarized in Table 1.

To demonstrate the advantages and limitations of the proposed method more comprehensively with respect to the other deconvolution methods, the effect of various levels of additive noise after the filtering was tested. The cameraman image with a \([9\times 9]\) kernel was used; the results are shown in Fig. 11. The comparison is also demonstrated in Fig. 12 on a typical image filtered with a \([3\times 3]\) average kernel followed by additive Gaussian noise with standard deviation \(\sigma =0.01\). Figure 12 demonstrates the pattern of the restoration error and shows that for a very small level of noise (\(\sigma =0.01\) corresponds to a noise level of 0.004%) the method provides competitive results.

Table 1 Comparison of the RMSE of the restored image with respect to the original image as a function of the kernel size
Fig. 10

Comparison of the RMSE of the restored image with respect to the original image as a function of the kernel size. The comparison was performed on the cameraman image. Average kernels were used, and noise was not added after the blurring

Fig. 11

Comparison of the RMSE of the restored image with respect to the original image as a function of the Gaussian noise standard deviation \(\sigma \). The comparison was performed on the cameraman image with a \([9\times 9]\) average filter. The proposed method achieves the lowest restoration error for \(\sigma <0.01\)

Fig. 12

Comparison of the reverse filtering to inverse filtering and to deconvolution methods with additive Gaussian noise (\(\sigma =0.01\)) after the filtering. The first row shows the original image (left), filtered image using the proposed scheme (middle) and the \([5\times 5]\) average kernel. The rest of the rows show pairs of restored and diff images (color limits \([-\,20,20]\)). RMSE: Inverse filtering (23.7), Lucy (17.5), Wiener (15.4), Krishnan–Fergus (30.0), Wang (57.8), Proposed (12.5)

4 Method Limitations

The proposed method requires the filtered image to be stored with a sufficient number of bits, such that values are not truncated after the filtering. This is possible either with floating point or with fixed point, by multiplying the image by the filter divisor beforehand. Since low-light and low-dynamic-range images are generally noisy, the final image usually undergoes some histogram adjustment. By using a linear histogram adjustment, the reversible filter can be coupled with the linear factor such that the final values are not truncated. The second limitation is that, for reversing the filtering, the first rows and columns cannot be cropped from the image file after the filtering; instead, more of the last rows and columns are cropped to match the original image size. This issue can be bypassed by displaying the filtered image without the first kernel rows and columns. Since the first rows and columns are essential for the restoration, it is important to note that when border handling other than zero value is used, the filtered padded lines must also be stored for the restoration of the original image. The third limitation is that exact restoration requires the filter kernel to be known beforehand; therefore, the method is less feasible for blind deconvolution applications [4].
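The truncation issue can be illustrated with a toy fixed-point example (ours, not from the paper): storing the undivided integer sums, i.e., N times the filtered values, keeps the restoration exact in pure integer arithmetic, whereas a truncating division would lose information.

```python
import numpy as np

I = np.array([7, 250, 3, 99, 18, 64, 5, 201], dtype=np.int64)
N = 2
f = I.copy()
f[1:] += I[:-1]                  # store N * filtered value: exact integer sums

def shift(g, s):
    out = np.zeros_like(g)
    if s < len(g):
        out[s:] = g[:len(g) - s]
    return out

g = f - shift(f, 1)              # k = 1: discrete derivative
for k in range(2, 4):            # ceil(log2(8 / 2)) + 1 = 3 iterations in total
    g = g + shift(g, 2 ** (k - 2) * N)
# g equals I exactly, in pure integer arithmetic;
# storing f // N instead would have lost the low bit of every odd sum
```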

5 Applications

The current section suggests several potential applications of the proposed method.

5.1 Image Denoising

As state-of-the-art denoising algorithms [3, 20] are computationally expensive and of limited use in many real-time applications, smoothing filters are sometimes the only available option. Using the current method, the smoothing filter can later be replaced with an edge-preserving denoiser, if needed or if such a denoiser is available. As high frequencies cannot be restored after a regular smoothing filter, the edge-preserving denoiser is expected to perform much better on the original or restored image by preserving the original resolution and edges.

In low-illumination conditions, the dark image is noisy and does not use the entire bit range. Therefore, a reversible filter can be selected to reduce the image noise, such that multiplying the maximal value by the filter's denominator does not truncate the filtered image values. Using the proposed method, the original image can be restored and a different, more sophisticated denoising method can be used to improve the final image quality. Replacing or canceling the denoising method can also be performed on image formats with an extended number of bits (e.g., wide-dynamic-range image formats) captured in any illumination condition.

5.2 Image Editing of Motion Blur in 2D Computer Graphics

In 2D computer graphics, motion blur is an artistic filter that simulates the visual effect of motion. Many graphical software products, such as Adobe Photoshop [5] or GIMP [23], offer simple motion blur filters. Using the proposed method, the pre-filtered image can be restored to cancel the blurring or to modify its direction or strength.

5.3 Controlling Web Commercial Visual Content

Several web commercial strategies give a customer the ability to evaluate a product before deciding to purchase it. Watermarking enables visualization of the image while protecting copyright and ownership. Google Books [16] enables the customer to see parts of a book. The proposed method can be used to control visual content according to the purchase status, without the need to download the content again. For example, the sharp, high-quality version becomes available only after purchasing it or after renewing the membership. This can be implemented by withholding the filter parameters, which can be unique for each page or image, until payment is received.

5.4 Restoring the Bayer/Color Image from the Luma Channel

In sensor processing chains, computer vision applications are often computed on the luma (the Y channel of the YCbCr color space) rather than on the RGB color image, to save hardware resources. Therefore, the input Bayer image is used to generate both the output RGB image and the luma image. This requires memory for storing the Bayer/RGB image during the computation of the computer vision application. The extraction of the luma image can be obtained by filtering the Bayer-pattern image with the filter \(\left[ \begin{matrix} 1 &{}\quad 2 &{}\quad 1 \\ 2&{}\quad 4 &{}\quad 2 \\ 1&{}\quad 2 &{}\quad 1 \\ \end{matrix}\right] /16\). Since this filter can be obtained from two horizontal and two vertical convolutions of the one-dimensional filter \([1, 1]/2\), the original Bayer image can be restored from the luma image using the proposed method. In order to avoid truncation of the values by the division, the original image should be multiplied by 16 before the filtering. Using this approach, the Bayer/color image can be recomputed from the luma image after the computer vision computation is finished, saving the extra memory required for storing the Bayer/RGB image. Figure 13 demonstrates the restoration of the original Bayer image from the luma channel image.
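The separability of the luma filter is easy to verify numerically (a small check of ours):

```python
import numpy as np

h = np.array([1.0, 1.0]) / 2           # the one-dimensional filter [1, 1]/2
k1 = np.convolve(h, h)                 # two passes -> [1, 2, 1]/4
K = np.outer(k1, k1)                   # two vertical x two horizontal passes
# K equals [[1, 2, 1], [2, 4, 2], [1, 2, 1]]/16, the luma extraction filter
```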

Fig. 13

Restoration of the Bayer and color images from the luma channel image

6 Summary and Conclusions

In the current paper, we propose an exact non-blind deconvolution scheme that enables bilateral transition between an image and its smoothed version. The method can reverse the filtering performed by average box filters or by approximated Gaussian filters obtained by convolution of averaging filters. The technique can be used in many image processing applications, such as image editing [1], computer graphics and web commercial services, where the filtering can be reverted without the need to keep the original image. The main contribution of the current work is the theoretical solution for recovering the exact original image from its filtered version, even when the blur kernel has zeros in its Fourier transform. The solution may serve as a platform for future applications.