Deep Learning for Single Image Super-Resolution: A Brief Review

Published 9 Aug 2018 in cs.CV (arXiv:1808.03344v3)

Abstract: Single image super-resolution (SISR) is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. To solve the SISR problem, recently powerful deep learning algorithms have been employed and achieved the state-of-the-art performance. In this survey, we review representative deep learning-based SISR methods, and group them into two categories according to their major contributions to two essential aspects of SISR: the exploration of efficient neural network architectures for SISR, and the development of effective optimization objectives for deep SISR learning. For each category, a baseline is firstly established and several critical limitations of the baseline are summarized. Then representative works on overcoming these limitations are presented based on their original contents as well as our critical understandings and analyses, and relevant comparisons are conducted from a variety of perspectives. Finally we conclude this review with some vital current challenges and future trends in SISR leveraging deep learning algorithms.

Citations (833)

Summary

  • The paper reviews state-of-the-art deep learning architectures and optimization objectives, categorizing approaches into efficient network designs and perceptual loss strategies.
  • It highlights early limitations of models like SRCNN and subsequent improvements through deconvolution layers and recursive mechanisms in deeper networks.
  • It discusses trade-offs between high PSNR metrics via MSE loss and enhanced visual quality achieved using GAN-based methods and perceptual losses.

Deep Learning for Single Image Super-Resolution: An Academic Perspective

The paper "Deep Learning for Single Image Super-Resolution: A Brief Review" by Wenming Yang, Xuechen Zhang, Yapeng Tian, Wei Wang, Jing-Hao Xue, and Qingmin Liao provides an exhaustive overview of the contemporary advancements in deep learning applied to single image super-resolution (SISR). Notably, the authors categorize the existing research into two pivotal domains: the exploration of efficient neural network architectures for SISR and the development of optimization objectives appropriate for deep SISR learning.

Introduction

The review begins by identifying the intrinsic complexity of SISR, an ill-posed problem where a single low-resolution (LR) image can map to many plausible high-resolution (HR) images. The authors attribute the recent substantial progress in SISR to the capacity of deep learning (DL) algorithms to learn effective representations of high-dimensional data. The survey then focuses on two critical aspects: efficient neural network architectures designed for SISR and effective optimization objectives for deep SISR learning.

Deep Architectures for SISR

Benchmarking and Progressions

The paper takes the Super-Resolution Convolutional Neural Network (SRCNN) model as its baseline. SRCNN comprises three convolutional layers and introduces the basic idea of learning an end-to-end mapping from the LR input to HR space. However, SRCNN's approach has limitations: it first upscales the LR image with bicubic interpolation, so every convolution runs at HR resolution, which is computationally expensive, and the interpolation itself cannot recover high-frequency details.
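The three-stage SRCNN pipeline can be written compactly. In the notation of the original SRCNN paper, with Y the bicubic-upscaled input:

```latex
\begin{aligned}
F_1(\mathbf{Y}) &= \max\left(0,\; W_1 * \mathbf{Y} + B_1\right) && \text{(patch extraction and representation)}\\
F_2(\mathbf{Y}) &= \max\left(0,\; W_2 * F_1(\mathbf{Y}) + B_2\right) && \text{(non-linear mapping)}\\
F(\mathbf{Y})   &= W_3 * F_2(\mathbf{Y}) + B_3 && \text{(reconstruction)}
\end{aligned}
```

Here * denotes convolution; the max(0, ·) terms are ReLU activations, and the final layer is linear.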

Enhancements in Network Architectures

The subsequent enhancements include FSRCNN, which replaces the hand-crafted interpolation with a learned deconvolution (transposed convolution) layer, and ESPCN, which uses an efficient sub-pixel convolution (pixel-shuffle) layer; both infer HR images directly from LR inputs and keep most computation at LR resolution, overcoming the inefficiencies of bicubic preprocessing. The authors further discuss architectures such as VDSR, DRCN, and SRResNet, which demonstrate that deeper networks generally lead to better performance, with VDSR being the first model to implement a very deep network for SISR.
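A common way ESPCN-style networks produce the HR output is a periodic sub-pixel (pixel-shuffle) rearrangement: the network predicts r² channels per output channel at LR resolution, and a fixed shuffle assembles them into the HR image. A minimal pure-Python sketch (nested-list tensors and the function name are illustrative, not from the paper):

```python
def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor (nested lists) into (C, H*r, W*r).

    Channel c*r*r + i*r + j of the input supplies the HR pixels at
    offsets (i, j) within each r x r output block of channel c.
    """
    cr2, h, w = len(x), len(x[0]), len(x[0][0])
    c = cr2 // (r * r)
    out = [[[0.0] * (w * r) for _ in range(h * r)] for _ in range(c)]
    for ch in range(c):
        for i in range(r):
            for j in range(r):
                plane = x[ch * r * r + i * r + j]  # one LR-resolution plane
                for y in range(h):
                    for z in range(w):
                        out[ch][y * r + i][z * r + j] = plane[y][z]
    return out
```

For example, with r = 2 the four 1x1 input planes [1], [2], [3], [4] assemble into the single 2x2 output [[1, 2], [3, 4]].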

Notably, recursive mechanisms, as seen in DRCN and DRRN, aim to reduce parameter overhead while maintaining performance. The advancements culminate in state-of-the-art architectures like EDSR and RDN, which push the boundaries in terms of depth and the intricate design of residual and dense connections to optimize feature extraction for SISR tasks.
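The parameter economy of recursion can be seen in a toy sketch: the same weights are reused at every recursion step, so effective depth grows while the parameter count stays fixed. The scalar "layer" below is a hypothetical stand-in for the shared convolution in DRCN/DRRN:

```python
def recursive_block(x, weight, depth):
    """Apply one shared transformation `depth` times, as in DRCN/DRRN.

    `weight` is a single shared parameter: unrolling the recursion to any
    depth adds computation but no new parameters.
    """
    for _ in range(depth):
        x = [weight * v for v in x]  # stand-in for a shared conv layer
    return x
```

An unshared stack of the same depth would need `depth` separate weights; here one suffices.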

Optimization Objectives for SISR

MSE and Its Alternatives

The authors outline that the Mean Squared Error (MSE) loss, the most widely used objective, optimizes for high Peak Signal-to-Noise Ratio (PSNR) but often yields perceptually unsatisfactory, over-smoothed results. They discuss alternatives such as the Mean Absolute Error (MAE), which corresponds to a Laplacian noise assumption and provides more robust updates during training.
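For reference, these objectives and the PSNR metric they target can be sketched in a few lines of plain Python, operating element-wise on flattened images (a minimal illustration, not code from the paper):

```python
import math

def mse(sr, hr):
    """Mean squared error: the loss behind PSNR-oriented SISR training."""
    return sum((a - b) ** 2 for a, b in zip(sr, hr)) / len(sr)

def mae(sr, hr):
    """Mean absolute error (L1): corresponds to a Laplacian noise model."""
    return sum(abs(a - b) for a, b in zip(sr, hr)) / len(sr)

def psnr(sr, hr, peak=1.0):
    """PSNR in dB. It is a monotone transform of MSE, so minimizing
    MSE is exactly maximizing PSNR."""
    return 10.0 * math.log10(peak ** 2 / mse(sr, hr))
```

Because PSNR is just a rescaled log of MSE, an MSE-trained network is tuned to this metric by construction, which is why its PSNR scores are high even when textures look over-smoothed.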

Perceptual and Adversarial Losses

A key contribution is the discussion on perceptual loss, as introduced in works by Johnson et al., leveraging pretrained networks like VGG to compute loss in feature space rather than pixel space. This approach aligns more closely with human perceptual quality. Additionally, Generative Adversarial Networks (GANs) for SISR, e.g., SRGAN, are discussed for their capability to produce visually appealing HR images, albeit sometimes at the cost of increased artifacts and lower PSNR values.
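As a concrete example of combining these terms, SRGAN's objective sums a VGG feature-space content loss and a small adversarial term (notation and the 10^-3 weight as in the SRGAN paper; phi_{i,j} denotes a VGG feature map of size W_{i,j} x H_{i,j}):

```latex
l^{SR} = l^{SR}_{VGG/i,j} + 10^{-3}\, l^{SR}_{Gen},
\qquad
l^{SR}_{VGG/i,j} = \frac{1}{W_{i,j} H_{i,j}} \sum_{x=1}^{W_{i,j}} \sum_{y=1}^{H_{i,j}}
  \Big( \phi_{i,j}\!\left(I^{HR}\right)_{x,y} - \phi_{i,j}\!\big(G_{\theta_G}(I^{LR})\big)_{x,y} \Big)^{2},
\qquad
l^{SR}_{Gen} = \sum_{n=1}^{N} -\log D_{\theta_D}\!\big(G_{\theta_G}(I^{LR})\big)
```

The feature-space distance rewards perceptually relevant structure rather than pixel-exact agreement, while the adversarial term pushes outputs toward the natural-image manifold.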

Current Challenges and Future Trends

The authors identify several key trends and challenges for future research:

  1. Efficiency and Deployment: Emphasizing the need for lighter and computationally efficient models that can be deployed in real-world applications without substantial performance loss.
  2. Large-scale SISR and Unknown Corruption: Highlighting the need for more robust solutions that can handle large scaling factors and unknown degradation, which remain significant challenges in SISR.
  3. Theoretical Insights: Encouraging more theoretical work to better understand the underlying mechanisms of deep learning models applied to SISR, transforming them from black-box systems to more interpretable frameworks.
  4. Assessment Criteria: Advocating for more precise and contextual evaluation metrics that align with specific application needs beyond traditional measures like PSNR and SSIM.

Conclusion

Overall, this paper systematically reviews the multifaceted developments in deep learning for SISR, presenting both achievements and ongoing challenges. The discussed advancements in network architectures and optimization techniques contribute significantly to the field, providing a trajectory for future research to build upon.

By integrating sophisticated architectures and leveraging novel optimization strategies, the field is poised to achieve robust and efficient solutions for real-world SISR applications, making strides in both performance and practical deployment.
