
1 Introduction

Single-image super-resolution (SISR) is a classical computer-vision problem of predicting a high-resolution (HR) image from its low-resolution (LR) observation. Because this is an ill-posed problem with multiple possible solutions, obtaining a rich prior from a large number of data points is beneficial for better prediction. Deep learning is quite effective for such problems. The performance of SISR has been significantly improved by convolutional neural networks (CNNs), starting with the pioneering work of Dong et al. in 2014 [9]. Before the dominance of deep-learning-based methods [1, 9, 10, 19, 20, 23, 26, 27, 41, 42, 56, 57] in this field, example-based methods [4, 12, 13, 16, 21, 44, 45, 51, 52] were mainly used for learning priors. Among them, sparse coding, a representative example-based method, showed state-of-the-art performance [44, 45, 51]. SISR using sparse coding comprises the following steps, as illustrated in Fig. 1(a): learn an LR dictionary \(D_{\text {L}}\) from patches extracted from LR images, learn an HR dictionary \(D_{\text {H}}\) from patches extracted from HR images, represent patches densely cropped from an input image with \(D_{\text {L}}\), map the \(D_{\text {L}}\) representations to \(D_{\text {H}}\) representations, reconstruct HR patches using \(D_{\text {H}}\), and aggregate the overlapping HR patches to produce the final output.

Fig. 1. Schematic illustrations of single-image super-resolution with (a) the sparse-coding-based approach, (b) the conventional deep-learning-based approach, and (c) our approach. The numbered steps indicate each stage of the super-resolution process.

As depicted in Fig. 1(b), Dong et al. [9] replaced all the above handcrafted steps with a multilayered CNN in their proposed method, SRCNN, to take advantage of the powerful capability of deep learning. Note that, in this method, \(D_{\text {L}}\) and \(D_{\text {H}}\) are implicitly acquired through network training. Since SRCNN, various methods have been proposed to improve performance, for example, by deepening the network with residual blocks and skip connections [20, 27, 41, 57], applying attention mechanisms [8, 31, 34, 56], and using transformers [6, 26]. However, from a general perspective, most of these studies, including state-of-the-art ones, follow the same formulation as SRCNN, in which all the processes of the sparse-coding-based methods are replaced by a multilayered network.

One disadvantage of deep-learning-based methods is that their performance degrades on images created differently from the training dataset [14]. Although several approaches address this issue, such as training networks for multiple degradations [40, 46, 49, 55, 59] and making models agnostic to degradations with iterative optimizations [14, 38], it is also important to make the network structure itself more robust. We hypothesize that a \(D_{\text {H}}\) implicitly learned inside a multilayered network is fragile to subtle differences between the input images and those seen at training time. This hypothesis leads us to the method we propose.

In this study, we propose an end-to-end super-resolution network with a deep dictionary (SRDD), where \(D_{\text {H}}\) is explicitly learned through network training (Fig. 1(c)). The main network predicts the coefficients of \(D_{\text {H}}\), and the weighted sum of the elements (or atoms) of \(D_{\text {H}}\) produces an HR output. This approach is fundamentally different from the conventional deep-learning-based approach, where the network contains upsampling layers. The upsampling process of the proposed method is efficient because the pre-generated \(D_{\text {H}}\) can be used as a magnifier at inference. In addition, the main network does not need to maintain the information of the processed image at the pixel level in HR space; it can therefore concentrate solely on predicting the coefficients of \(D_{\text {H}}\). For in-domain test images, our method does not match the latest models but is close to conventional baselines (e.g., CARN). For out-of-domain test images, our method shows superior performance compared to conventional deep-learning-based methods.

2 Related Works

2.1 Sparse-Coding-Based SR

Before the dominance of deep-learning-based methods in the field of SISR, example-based methods showed state-of-the-art performance. Example-based methods exploit internal self-similarity [11, 13, 16, 50] and/or external datasets [4, 12, 21, 44, 45, 51, 52]. The use of external datasets is especially important for obtaining a rich prior. In the sparse-coding-based methods [44, 45, 51, 52], which are state-of-the-art example-based methods, high/low-resolution patch pairs are extracted from external datasets to create high/low-resolution dictionaries \(D_{\text {H}}\)/\(D_{\text {L}}\). Patches cropped from an input image are encoded with \(D_{\text {L}}\) and then projected onto \(D_{\text {H}}\) via iterative processing, producing the final output with appropriate patch aggregation.

2.2 Deep-Learning-Based SR

Deep CNN. All the handcrafted steps in the traditional sparse-coding-based approach were replaced with an end-to-end CNN in a fully feed-forward manner. Early methods, including SRCNN [9, 19, 20], adopted pre-upsampling, in which LR input images are first upsampled before the SR process. Because pre-upsampling is computationally expensive, post-upsampling is generally used in recent models [1, 26, 27, 31, 56]. In post-upsampling, a transposed convolution or pixelshuffle [37] is usually used to upsample the features for the final output. Although there are many proposals to improve network architectures [25, 54], the protocol of directly outputting SR images with post-upsampling has been followed in most of those studies. Few studies have focused on improving the upsampling strategy. Though some recent works [3, 5, 60] leveraged pre-trained latent features as a dictionary to improve output fidelity with rich textures, they used standard upsampling strategies in their proposed networks.

Convolutional Sparse Coding. Although methods following SRCNN have been common in recent years, several fundamentally different approaches were proposed before and after SRCNN. Convolutional sparse coding [15, 35, 39, 47] is one such method; it operates on the entire image, unlike traditional patch-based sparse coding, and thereby avoids the boundary effects of patch-based processing. However, it conceptually follows patch-based sparse coding in that the overall SR process is divided into handcrafted steps. Consequently, its performance lags behind that of end-to-end feed-forward CNNs.

Robust SR. The performance of deep-learning-based SR is significantly affected by the quality of the input image, especially by differences in conditions from the training dataset [14]. Several approaches have been proposed to make networks more robust to varied in-domain test images by training with multiple degradations [40, 46, 49, 55, 59]. For robustness against out-of-domain test images, some studies aim to make the network agnostic to degradations [14, 38]. In these methods, the acquired agnosticism is generally limited to specific degradations; therefore, it is important to make the network structure itself more robust.

Fig. 2. The overall pipeline of the proposed method. A high-resolution dictionary \(D_{\text {H}}\) is generated from random noise. The encoded code of \(D_{\text {H}}\) is then concatenated with the extracted feature and fed to a per-pixel predictor. The predictor output is used to reconstruct the final output based on \(D_{\text {H}}\).

3 Method

As depicted in Fig. 1(c), the proposed method comprises three components: \(D_{\text {H}}\) generation, per-pixel prediction, and reconstruction. The \(D_{\text {H}}\) generator generates an HR dictionary \(D_{\text {H}}\) from a random noise input. The per-pixel predictor predicts the coefficients of \(D_{\text {H}}\) for each pixel from an LR YCbCr input. In the reconstruction part, the weighted sum of the elements (or atoms) of \(D_{\text {H}}\) produces an HR Y-channel output as a residual to be added to the bicubically upsampled Y channel. The remaining Cb and Cr channels are upscaled with a shallow SR network; we use ESPCN [37] as the shallow SR network in this work. All of these components can be optimized simultaneously in an end-to-end manner; therefore, the same training procedure can be used as in conventional deep-learning-based SR methods. We use the \(L_{1}\) loss function to optimize the network:

$$\begin{aligned} L = \frac{1}{M} \sum _{i=1}^{M} ||I_i^{gt} - \varTheta (I_i^{lr})||_{1}, \end{aligned}$$
(1)

where \(I_i^{lr}\) and \(I_i^{gt}\) are an LR patch and its ground truth, respectively, M denotes the number of training image pairs, and \(\varTheta (\cdot )\) represents the function computed by the SRDD network. Figure 2 illustrates the proposed method in more detail. We describe the design of each component based on Fig. 2 in the following subsections.
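For concreteness, one optimization step under Eq. (1) could look like the following PyTorch sketch; `model`, `optimizer`, and the batch tensors are placeholder names and not taken from the authors' implementation.

```python
import torch.nn.functional as F

# Minimal sketch of one training step for Eq. (1); all names are placeholders.
def training_step(model, optimizer, lr_batch, gt_batch):
    optimizer.zero_grad()
    sr_batch = model(lr_batch)             # Theta(I^lr)
    loss = F.l1_loss(sr_batch, gt_batch)   # L1 loss, averaged over the batch
    loss.backward()
    optimizer.step()
    return loss.item()
```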

3.1 \(D_{\text {H}}\) Generation

From random noise \(\delta ^{s^2 \times 1 \times 1}\) (\(\in \mathbb {R}^{s^2 \times 1 \times 1}\)) drawn from a standard normal distribution, the \(D_{\text {H}}\) generator generates the HR dictionary \(D_{\text {H}}^{N \times s \times s}\), where s is the upscaling factor and N is the number of elements (atoms) in the dictionary. \(D_{\text {H}}\) is then encoded by an \(s \times s\) convolution with N groups, followed by ReLU [33] and a \(1 \times 1\) convolution. Each of the N elements of the resultant code \(C_{\text {H}}^{N \times 1 \times 1}\) represents one \(s \times s\) atom as a scalar value. Although \(D_{\text {H}}\) can be trained using a fixed noise input, we found that introducing input randomness improves the stability of training. A pre-generated fixed dictionary and its code are used in the testing phase. Note that only \(D_{\text {H}}\) is generated explicitly, since the low-resolution dictionary (encoding) can be naturally replaced by convolutional operations without an excessive increase in computation.
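A minimal sketch of this encoding step is given below; treating the atoms as channels of a single-sample tensor is our assumption for illustration and may differ from the exact implementation.

```python
import torch
import torch.nn as nn

# Sketch of the D_H encoder: a grouped s x s convolution collapses each atom
# to a scalar, followed by ReLU and a 1 x 1 convolution (shapes assumed).
class DictEncoder(nn.Module):
    def __init__(self, n_atoms=128, scale=4):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(n_atoms, n_atoms, kernel_size=scale, groups=n_atoms),  # s x s conv, N groups
            nn.ReLU(inplace=True),
            nn.Conv2d(n_atoms, n_atoms, kernel_size=1),                      # 1 x 1 conv
        )

    def forward(self, d_h):        # d_h viewed as 1 x N x s x s
        return self.encode(d_h)    # code C_H: 1 x N x 1 x 1

d_h = torch.tanh(torch.randn(1, 128, 4, 4))   # stand-in for a generated dictionary
print(DictEncoder()(d_h).shape)               # torch.Size([1, 128, 1, 1])
```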

Fig. 3. A generator architecture of a high-resolution dictionary \(D_{\text {H}}\). A tree-like network with depth d generates \(2^d\) atoms of size \(1 \times s \times s\) from a random noise input, where s is an upscaling factor.

Fig. 4. Learned atoms of \(\times 4\) SRDD with \(N = 128\). The size of each atom is \(1 \times 4 \times 4\). The data range is renormalized to [0, 1] for visualization.

As illustrated in Fig. 3, the \(D_{\text {H}}\) generator has a tree-like structure, where the nodes consist of two \(1 \times 1\) convolutional layers with ReLU activation. The final layer has a Tanh activation followed by a pixelshuffling layer; therefore, the data range of the output atoms is \([-1, 1]\). To produce N atoms, depth d of the generator is determined as

$$\begin{aligned} d = \log _2 N. \end{aligned}$$
(2)

Figure 4 shows generated atoms with \(s = 4\) and \(N = 128\). We observed that the contrast of the output atoms became stronger as training progressed, and they were almost fixed in the latter half of the training.
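The following PyTorch sketch illustrates one possible realization of this tree-like generator; the internal node width and the exact placement of activations are our assumptions, since only the overall structure is specified above.

```python
import math
import torch
import torch.nn as nn

# Sketch of a tree-like D_H generator: each node is two 1x1 convolutions with
# ReLU, every level doubles the branches, and Tanh + PixelShuffle turns each
# s^2 x 1 x 1 leaf into a 1 x s x s atom in [-1, 1]. Details are assumptions.
class DictGenerator(nn.Module):
    def __init__(self, n_atoms=128, scale=4, width=64):
        super().__init__()
        depth = int(math.log2(n_atoms))                      # Eq. (2): d = log2 N
        ch = scale * scale
        def node():
            return nn.Sequential(nn.Conv2d(ch, width, 1), nn.ReLU(inplace=True),
                                 nn.Conv2d(width, ch, 1))
        # level l holds 2^(l+1) nodes, so the last level produces N leaves
        self.levels = nn.ModuleList(
            nn.ModuleList(node() for _ in range(2 ** (l + 1))) for l in range(depth))
        self.out = nn.Sequential(nn.Tanh(), nn.PixelShuffle(scale))

    def forward(self, noise):                                # noise: 1 x s^2 x 1 x 1
        branches = [noise]
        for nodes in self.levels:                            # each level doubles the branches
            doubled = [b for b in branches for _ in range(2)]
            branches = [n(b) for n, b in zip(nodes, doubled)]
        atoms = torch.cat(branches, dim=0)                   # N x s^2 x 1 x 1
        return self.out(atoms)                               # N x 1 x s x s, values in [-1, 1]

atoms = DictGenerator(n_atoms=128, scale=4)(torch.randn(1, 16, 1, 1))
print(atoms.shape)                                           # torch.Size([128, 1, 4, 4])
```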

3.2 Per-pixel Prediction

We utilize UNet++ [61] as a deep feature extractor in Fig. 2. We slightly modify the original UNet++ architecture: the depth is reduced from four to three, and a long skip connection is added. The deep feature extractor outputs a tensor of size \(f \times h \times w\) from the input YCbCr image, where h and w are the height and width of the image, respectively. Then the extracted feature is concatenated with the expanded code of \(D_{\text {H}}\)

$$\begin{aligned} C_{\text {H}}^{N \times h \times w} = R_{1 \times h \times w}(C_{\text {H}}^{N \times 1 \times 1}), \end{aligned}$$
(3)

to be input to a per-pixel predictor, where \(R_{a \times b \times c}(\cdot )\) denotes repeating the tensor a, b, and c times along its three dimensions. The per-pixel predictor consists of ten bottleneck residual blocks followed by a softmax function and predicts N coefficients of \(D_{\text {H}}\) for each input pixel. Both the deep feature extractor and the per-pixel predictor contain batch normalization layers [18] before the ReLU activations. The resultant prediction map \(M^{N \times h \times w}\) is further convolved with a \(2 \times 2\) convolution layer to produce a complementary prediction map \(M'^{N \times (h-1) \times (w-1)}\). The complementary prediction map is used to compensate for the patch boundaries when reconstructing the final output; the details of this compensation mechanism are described in the next subsection. Although we tried replacing softmax with ReLU to express sparsity directly, ReLU made the training unstable. We also tried entmax [36], but its performance was similar to that of softmax, so we use softmax for simplicity.
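Equation (3) and the two predictor outputs can be summarized, at the level of tensor shapes, by the following sketch; a batch dimension is added, the sizes are illustrative, and the bottleneck residual blocks are replaced by a random stand-in.

```python
import torch
import torch.nn.functional as F

# Shape-level sketch of Eq. (3) and the predictor outputs (illustrative sizes).
b, f, n_atoms, h, w = 1, 64, 128, 48, 48
feat = torch.randn(b, f, h, w)                 # deep feature extractor output
c_h = torch.randn(b, n_atoms, 1, 1)            # encoded dictionary code C_H

c_h_map = c_h.expand(-1, -1, h, w)             # R_{1 x h x w}(C_H): N x h x w
pred_in = torch.cat([feat, c_h_map], dim=1)    # (f + N) x h x w predictor input

logits = torch.randn(b, n_atoms, h, w)         # stand-in for the ten bottleneck blocks
m = F.softmax(logits, dim=1)                   # prediction map M: N x h x w
m_comp = F.conv2d(m, torch.randn(n_atoms, n_atoms, 2, 2))  # 2 x 2 conv -> N x (h-1) x (w-1)
print(m.shape, m_comp.shape)
```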

Fig. 5. Visualization of sparsity of a prediction map (center) and its complementary prediction map (right). The number of predicted coefficients larger than \(1e\!-\!2\) is counted for each pixel. More atoms are assigned to the high-frequency parts and the low-frequency parts are relatively sparse.

Figure 5 visualizes the sparsity of the prediction map and its complementary prediction map. The number of coefficients larger than \(1e-2\) is counted for each pixel to visualize the sparsity. The model with \(N = 128\) is used. More atoms are assigned to the high-frequency parts of the image, and the low-frequency parts are relatively sparse. This feature is especially noticeable in the complementary prediction map. In the high-frequency region, the output image is represented by linear combinations of more than tens of atoms for both maps.
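The per-pixel count used for this visualization amounts to a simple thresholding of the softmax output, as in the short sketch below (random logits are used as a stand-in for the predictor).

```python
import torch
import torch.nn.functional as F

# Per-pixel count of "active" atoms: coefficients larger than 1e-2.
m = F.softmax(torch.randn(1, 128, 48, 48), dim=1)   # stand-in prediction map
active_atoms = (m > 1e-2).sum(dim=1)                # 1 x 48 x 48 count map
```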

3.3 Reconstruction

The prediction map \(M^{N \times h \times w}\) is upscaled to \(N \times sh \times sw\) by nearest-neighbor interpolation, and the element-wise multiplication of the upscaled prediction map \(U_s(M^{N \times h \times w})\) with the expanded dictionary \(R_{1 \times h \times w}(D_{\text {H}}^{N \times s \times s})\) produces an \(N \times sh \times sw\) tensor T consisting of weighted atoms. Here, \(U_s(\cdot )\) denotes \(\times s\) nearest-neighbor upsampling. Finally, tensor T is summed over the first dimension, producing the output x as

$$\begin{aligned} x^{1 \times sh \times sw} = \sum _{k=0}^{N-1} T^{N \times sh \times sw}[k, :, :],\end{aligned}$$
(4)
$$\begin{aligned} T^{N \times sh \times sw} = U_s(M^{N \times h \times w}) \otimes R_{1 \times h \times w}(D_{\text {H}}^{N \times s \times s}). \end{aligned}$$
(5)

The same sequence of operations is applied to the complementary prediction map to obtain the output \(x'\) as follows:

$$\begin{aligned} x'^{1 \times s(h-1) \times s(w-1)} = \sum _{k=0}^{N-1} T'^{N \times s(h-1) \times s(w-1)}[k, :, :],\end{aligned}$$
(6)
$$\begin{aligned} T'^{N \times s(h-1) \times s(w-1)} = U_s(M'^{N \times (h-1) \times (w-1)}) \otimes R_{1 \times (h-1) \times (w-1)}(D_{\text {H}}^{N \times s \times s}). \end{aligned}$$
(7)

Note that the same dictionary, \(D_{\text {H}}\), is used to obtain x and \(x'\). By centering x and \(x'\), as illustrated in Fig. 6, the imperfections at the patch boundaries can complement each other. The final output residual is obtained by concatenating the overlapping parts of the centered x and \(x'\) and applying a \(5 \times 5\) convolution. For non-overlapping parts, x is simply used as the final output.
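At the level of tensor operations, Eqs. (4) and (5) correspond to the following sketch (a batch dimension is added and illustrative sizes are assumed); the complementary output \(x'\) in Eqs. (6) and (7) follows the same recipe with the \(2 \times 2\)-convolved map.

```python
import torch
import torch.nn.functional as F

# Shape-level sketch of the reconstruction in Eqs. (4)-(5) (illustrative sizes).
n_atoms, s, h, w = 128, 4, 48, 48
m = F.softmax(torch.randn(1, n_atoms, h, w), dim=1)   # prediction map M
d_h = torch.tanh(torch.randn(n_atoms, s, s))          # dictionary atoms D_H

m_up = F.interpolate(m, scale_factor=s, mode="nearest")   # U_s(M): N x sh x sw
d_tiled = d_h.unsqueeze(0).repeat(1, 1, h, w)             # R_{1 x h x w}(D_H): N x sh x sw
t = m_up * d_tiled                                        # Eq. (5): weighted atoms
x = t.sum(dim=1, keepdim=True)                            # Eq. (4): 1 x sh x sw residual
print(x.shape)                                            # torch.Size([1, 1, 192, 192])
```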

Fig. 6. Schematic illustration of a mechanism to compensate patch boundary with a complementary prediction map, where s is a scaling factor. Left: Prediction map (blue) and its complementary prediction map (orange). Right: Upsampled prediction and complementary prediction maps with centering. (Color figure online)

4 Experiments

4.1 Implementation Details

We adopt a model with 128 atoms (SRDD-128) and a smaller model with 64 atoms (SRDD-64). The number of filters in the models is adjusted according to the number of atoms. Our network is trained on \(48 \times 48\) LR input patches with a mini-batch size of 32. Following previous studies [1, 27, 56], random flipping and rotation augmentation is applied to each training sample. We use the Adam optimizer [22] with \(\beta _1 = 0.9\), \(\beta _2 = 0.999\), and \(\epsilon = 10^{-8}\). The learning rate of the network, except for the \(D_{\text {H}}\) generator, is initialized to \(2e\!-\!4\) and halved at [200k, 300k, 350k, 375k] iterations. The total number of training iterations is 400k. The learning rate of the \(D_{\text {H}}\) generator is initialized to \(5e\!-\!3\) and halved at [50k, 100k, 200k, 300k, 350k] iterations. The parameters of the \(D_{\text {H}}\) generator are frozen at 360k iterations. In addition, to stabilize the training of the \(D_{\text {H}}\) generator, we randomly shuffle the order of the output atoms for the first 1k iterations. We implement our model in PyTorch and train it on an NVIDIA P6000 GPU. Training takes about two/three days for SRDD-64/128, respectively. More training details are provided in the supplementary material.
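The optimizer and learning-rate schedule described above could be set up as in the following sketch; the two tiny modules stand in for the main network and the \(D_{\text {H}}\) generator, and the per-iteration scheduler stepping is an assumption of this illustration.

```python
import torch
import torch.nn as nn

# Sketch of the optimizer and stepwise LR schedule (milestones from the text).
main_net = nn.Conv2d(3, 3, 3, padding=1)   # placeholder for the main network
dh_gen = nn.Conv2d(16, 16, 1)              # placeholder for the D_H generator

optimizer = torch.optim.Adam(
    [{"params": main_net.parameters(), "lr": 2e-4},
     {"params": dh_gen.parameters(), "lr": 5e-3}],
    betas=(0.9, 0.999), eps=1e-8)

def halve_at(milestones):
    return lambda it: 0.5 ** sum(it >= m for m in milestones)

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=[halve_at((200_000, 300_000, 350_000, 375_000)),            # main network
               halve_at((50_000, 100_000, 200_000, 300_000, 350_000))])   # D_H generator
# scheduler.step() is called once per training iteration; the D_H generator is
# additionally frozen (requires_grad = False) after 360k iterations.
```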

4.2 Dataset and Evaluation

Training Dataset. Following previous studies, we use the 800 HR-LR image pairs of the DIV2K [43] training dataset to train our models. LR images are created from HR images by Matlab bicubic downsampling. For validation, we use the first ten images of the DIV2K validation dataset.

Test Dataset. For testing, we evaluate the models on five standard benchmarks: Set5 [2], Set14 [53], BSD100 [29], Urban100 [16], and Manga109 [30]. In addition to the standard test images downsampled with the Matlab bicubic function, as in training, we use test images downsampled with the OpenCV bicubic, bilinear, and area functions to evaluate the robustness of the models. We also evaluate the models on ten real-world historical photographs.

Evaluation. We use the common image quality metrics peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [48], calculated on the Y channel (luminance) of the YCbCr color space. For the real-world test images, the no-reference image quality metric NIQE [32] is used, since there are no ground-truth images. Following previous studies, we ignore s pixels from the border when calculating all metrics, where s is the SR scale factor.
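As an illustration of the PSNR protocol (Y channel, s-pixel border crop), a sketch is given below; the BT.601 luminance conversion and uint8 RGB inputs are assumptions of this example, not details specified above.

```python
import numpy as np

# PSNR on the Y channel with an s-pixel border crop (assumed uint8 RGB inputs).
def psnr_y(sr_rgb, gt_rgb, scale):
    def to_y(img):
        img = img.astype(np.float64)
        # BT.601 luminance, with RGB values in [0, 255]
        return 16 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0
    sr_y = to_y(sr_rgb)[scale:-scale, scale:-scale]   # ignore s border pixels
    gt_y = to_y(gt_rgb)[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - gt_y) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```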

4.3 Ablation Study

We conduct ablation experiments to examine the impact of individual elements in SRDD. We report the results of SRDD-64 throughout this section. The results of the ablation experiments on Set14 downsampled with Matlab bicubic function are summarized in Table 1.

Table 1. Results of ablation experiments on Set14 downsampled with Matlab bicubic function.
Fig. 7. Validation curves during the training of SRDD-64 with and without batch normalization layers.

Batch Normalization. We show the validation curves of SRDD-64 with and without batch normalization layers in Fig. 7. The performance of the proposed model is substantially improved by using batch normalization. This result is in contrast to conventional deep-learning-based SR methods, where batch normalization generally degrades performance [27]. Unlike conventional methods where the network directly outputs the SR image, the prediction network in SRDD predicts the coefficients of \(D_{\text {H}}\) for each pixel, which is rather similar to the semantic segmentation task. In this sense, it is natural that batch normalization, which is effective for semantic segmentation [7, 24, 58], is also effective for the proposed model.

Bottleneck Blocks. We eliminate the bottleneck blocks and the \(D_{\text {H}}\) code injection from the per-pixel predictor. With this modification, the prediction network becomes close to a plain UNet++ structure. The performance drops, as shown in Table 1, but remains at a reasonable level.

Compensation Mechanism. As shown in Table 1, removing the compensation mechanism from SRDD-64 degrades the performance. However, the effect is marginal, which indicates that our model can produce outputs of adequate quality without boundary compensation. This result is in contrast to sparse-coding-based methods, which generally require aggregation with overlapping patch sampling to reduce imperfections at patch boundaries. Because the computational cost of our compensation mechanism is very small compared to that of the entire model, we adopt it even though its effect is modest.

4.4 Results on In-Domain Test Images

We conduct experiments on five benchmark datasets, where the LR input images are created by Matlab bicubic downsampling, the same as in the DIV2K training dataset. Because SRDD is quite shallow and fast compared to current state-of-the-art models, we compare SRDD to relatively shallow and fast models with roughly 50 layers or less. Note that recent deep SR models usually have hundreds of convolutional layers [56]. We select ten models for comparison: SRCNN [9], FSRCNN [10], VDSR [19], DRCN [20], LapSRN [23], DRRN [41], MemNet [42], CARN [1], IMDN [17], and LatticeNet [28]. We also compare our model with the representative sparse-coding-based method A+ [45]. Results for the representative very deep models RCAN [56], NLSA [31], and SwinIR [26] are also shown.

Table 2. Quantitative comparison for \(\times 4\) SR on benchmark datasets. The best and second-best results are highlighted.
Table 3. Execution time of representative models on an Nvidia P4000 GPU for \(\times 4\) SR with input size \(256\times 256\).
Fig. 8. Visual comparison for \(\times 4\) SR on Set14 and Urban100 dataset. Zoom in for a better view.

The quantitative results for \(\times 4\) SR on benchmark datasets are shown in Table 2. SRDD-64 and SRDD-128 show performance comparable to CARN/IMDN and LatticeNet, respectively. As shown in Table 3, the inference speed of SRDD-64 is also comparable to that of CARN, though slower than IMDN. These results indicate that the overall performance of our method on in-domain test images is close to that of conventional baselines (though not as good as state-of-the-art models). The running times of representative deep models are also shown for comparison; they are about 20 times slower than CARN and SRDD-64. Visual results are provided in Fig. 8.

4.5 Results on Out-of-Domain Test Images

Synthetic Test Images. We conduct experiments on Set14, where the LR input images are created differently from training time. We use bicubic, bilinear, and area downsampling with the OpenCV resize function. The difference between the Matlab and OpenCV resize functions mainly comes from the anti-aliasing option: anti-aliasing is enabled by default in Matlab and disabled in OpenCV. We mainly evaluate CARN and SRDD-64 because these models have comparable performance on in-domain test images. The state-of-the-art lightweight model IMDN [17] and the representative blind SR model IKC [14] are also evaluated for comparison. The results are shown in Table 4. SRDD-64 outperforms these models by a large margin for all three resize functions. This result implies that our method is more robust to out-of-domain images than conventional deep-learning-based methods. A visual comparison on a test image downsampled with the OpenCV bicubic function is shown in Fig. 9. CARN overly emphasizes the high-frequency components of the image, while SRDD-64 produces a more natural result.
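For reference, out-of-domain LR test images of this kind can be produced with OpenCV as in the sketch below; the file name and the \(\times 4\) factor are placeholders (OpenCV applies no anti-aliasing pre-filter for bicubic or bilinear resizing, unlike Matlab's imresize).

```python
import cv2

# Out-of-domain LR generation with OpenCV resize (file name is a placeholder).
hr = cv2.imread("baboon.png")
h, w = hr.shape[:2]
lr_bicubic = cv2.resize(hr, (w // 4, h // 4), interpolation=cv2.INTER_CUBIC)
lr_bilinear = cv2.resize(hr, (w // 4, h // 4), interpolation=cv2.INTER_LINEAR)
lr_area = cv2.resize(hr, (w // 4, h // 4), interpolation=cv2.INTER_AREA)
```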

Table 4. Quantitative results of \(\times 4\) SR on Set14 downsampled with three different OpenCV resize functions. Note that the models are trained with Matlab bicubic downsampling.
Fig. 9. Visual comparison for \(\times 4\) SR on Set14 baboon downsampled with OpenCV bicubic function.

Real-World Test Images. We conduct experiments on ten widely used historical images to examine the robustness of the models to unknown degradations. Because there are no ground-truth images, we adopt the no-reference image quality metric NIQE for evaluation. Table 5 shows the average NIQE of representative methods. As seen in the previous subsection, SRDD-64 shows performance comparable to CARN on in-domain test images. However, on this realistic dataset with the NIQE metric, SRDD-64 clearly outperforms CARN and is close to EDSR. Interestingly, unlike the results on in-domain test images, SRDD-64 performs better than SRDD-128 for realistic degradations. This is probably because representing an HR image with a small number of atoms makes the atoms more versatile. Visual results are provided in Fig. 10.

Table 5. Results of no-reference image quality metric NIQE on real-world historical images. Note that the models are trained with Matlab bicubic downsampling.
Fig. 10. Visual comparison for \(\times 4\) SR on real-world historical images. Zoom in for a better view.

4.6 Experiments on \(\times \)8 SR

To see whether our method works at other scaling factors, we also experiment with the \(\times 8\) SR case. We use the DIV2K dataset for training and validation. The test images are prepared with the same downsampling function (i.e., the Matlab bicubic function) as the training dataset. Figure 11 shows the generated atoms of SRDD with \(s = 8\) and \(N = 128\). The structure of the atoms with \(s = 8\) is finer than that with \(s = 4\), while the coarse structures in both cases are similar. The quantitative results on five benchmark datasets are shown in Table 6. SRDD performs better than the representative shallow models, although its performance does not reach that of the representative deep model EDSR.

Fig. 11. Learned atoms of \(\times 8\) SRDD with \(N = 128\). The size of each atom is \(1 \times 8 \times 8\). The data range is renormalized to [0, 1] for visualization.

Table 6. Quantitative comparison for \(\times 8\) SR on benchmark datasets. The best and second-best results are highlighted.

5 Conclusions

We propose an end-to-end super-resolution network with a deep dictionary (SRDD). An explicitly learned high-resolution dictionary (\(D_{\text {H}}\)) is used to upscale the input image, as in sparse-coding-based methods, while the entire network, including the \(D_{\text {H}}\) generator, is optimized simultaneously to take full advantage of deep learning. For in-domain test images (images created by the same procedure as the training dataset), the proposed SRDD does not match the latest models but is close to conventional baselines (e.g., CARN). For out-of-domain test images, SRDD outperforms conventional deep-learning-based methods, demonstrating the robustness of our model.

The proposed method is not limited to super-resolution but is potentially applicable to other tasks that require high-resolution output, such as high-resolution image generation. Hence, the proposed method is expected to have a broad impact on various tasks. Future work will focus on applying the proposed method to other vision tasks. In addition, we believe that our method still has considerable room for improvement compared to the conventional deep-learning-based approach.