A Summary of CVPR 2020 Super-Resolution Papers (Part 2)

All CVPR 2020 papers: https://openaccess.thecvf.com/CVPR2020

1.Unified Dynamic Convolutional Network for Super-Resolution with Variational Degradations

problem

Deep Convolutional Neural Networks (CNNs) have achieved remarkable results on Single Image Super-Resolution (SISR). While early work considered only a single degradation, recent studies also include multiple degrading effects to better reflect real-world cases.

However, most of these works assume a fixed combination of degrading effects, or even train an individual network for each combination. A more practical approach is to train a single network that handles wide-ranging and variational degradations.

solution

Exploit dynamic convolutions to better solve the non-blind SISR problem with variational degradations.

Dynamic convolution is a far more flexible operation than the standard one. A standard convolution learns kernels that minimize the error across all pixel locations at once, whereas dynamic convolution uses per-pixel kernels generated by a parameter-generating network [21]. Moreover, the kernels of a standard convolution are content-agnostic and fixed after training.

In contrast, the dynamic ones are content-adaptive and adapt to different inputs even after training. These properties make dynamic convolution a better alternative for handling variational degradations.
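The per-pixel idea above can be sketched in a few lines. The snippet below is a plain-NumPy illustration, not the paper's implementation: in the actual method the kernels would come from a parameter-generating network, but here they are simply passed in as an array.

```python
import numpy as np

def dynamic_conv2d(x, kernels):
    """Apply a different k x k kernel at every pixel (per-pixel dynamic convolution).

    x       : (H, W) input image
    kernels : (H, W, k, k) per-pixel kernels; in the paper's setting these
              would be produced by a parameter-generating network
    """
    H, W = x.shape
    k = kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")            # same-size output via edge padding
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]        # local neighborhood of pixel (i, j)
            out[i, j] = np.sum(patch * kernels[i, j])
    return out
```

Note the contrast with a standard convolution: there a single shared kernel slides over the image, while here `kernels[i, j]` can differ at every location, which is what makes the operation content-adaptive.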


2.Learning Texture Transformer Network for Image Super-Resolution

problem

Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases.

solution

Propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively.

TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis.

Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1x to 4x magnification).
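As a rough illustration of the hard/soft-attention texture transfer described above, the NumPy sketch below (the function name and shapes are my own simplification, not the paper's code) computes relevance as normalized inner products between query and key features, picks the best-matching Ref position per query (hard attention), and keeps the match confidence as a soft-attention map.

```python
import numpy as np

def texture_transfer(Q, K, V):
    """Hard/soft-attention texture transfer in the spirit of TTSR.

    Q : (Nq, C) query features from the (upsampled) LR image
    K : (Nk, C) key features from the degraded Ref image
    V : (Nk, C) value features from the original HR Ref image
    Returns transferred textures T (Nq, C) and soft-attention map S (Nq,).
    """
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=1, keepdims=True)
    R = Qn @ Kn.T                 # relevance embedding (cosine similarity)
    h = R.argmax(axis=1)          # hard attention: best-matching Ref position
    S = R.max(axis=1)             # soft attention: confidence of each match
    T = V[h]                      # transfer the corresponding HR texture
    return T, S
```

In the full model the soft-attention map would weight how strongly each transferred texture is fused into the LR features, so unreliable matches contribute less.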


3.Correction Filter for Single Image Super-Resolution: Robustifying Off-the-Shelf Deep Super-Resolvers

problem

Deep Neural Networks (DNNs) have shown superior performance over alternative methods when the acquisition process uses a fixed known downscaling kernel—typically a bicubic kernel.

However, several recent works have shown that in practical scenarios, where the test data mismatch the training data (e.g. when the downscaling kernel is not the bicubic kernel or is not available at training), the leading DNN methods suffer from a huge performance drop.

solution

Inspired by the literature on generalized sampling, in this work we propose a method for improving the performance of DNNs that have been trained with a fixed kernel on observations acquired by other kernels.

For a known kernel, we design a closed-form correction filter that modifies the low-resolution image to match one which is obtained by another kernel (e.g. bicubic), and thus improves the results of existing pre-trained DNNs.

For an unknown kernel, we extend this idea and propose an algorithm for blind estimation of the required correction filter.
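For intuition, the known-kernel case can be sketched as a regularized Fourier-domain deconvolution: find a filter h such that an LR image acquired with kernel s, once filtered by h, looks as if it had been acquired with the target kernel b (e.g. bicubic), so a bicubic-trained SR network applies. This is a simplified Wiener-style stand-in, not the paper's exact closed-form derivation, and all names here are assumptions.

```python
import numpy as np

def correction_filter(s, b, size=32, eps=1e-3):
    """Sketch of a correction filter in the Fourier domain.

    s    : actual acquisition (downscaling) kernel
    b    : target kernel the SR network was trained on (e.g. bicubic)
    size : FFT grid size; eps regularizes division where s has little energy
    Returns a spatial filter h with h * s ~ b (convolution).
    """
    S = np.fft.fft2(s, (size, size))
    B = np.fft.fft2(b, (size, size))
    H = np.conj(S) * B / (np.abs(S) ** 2 + eps)   # regularized inverse
    return np.real(np.fft.ifft2(H))
```

The regularization term `eps` is the usual Wiener-style safeguard: where the acquisition kernel suppresses a frequency almost entirely, a naive division `B / S` would blow up, so those frequencies are damped instead.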
