Citation: Lin, R.; Xiao, N. Dual Projection Fusion for Reference-Based Image Super-Resolution. Sensors 2022, 22, 4119. https://doi.org/10.3390/s22114119

Academic Editors: M. Jamal Deen, Subhas Mukhopadhyay, Yangquan Chen, Simone Morais, Nunzio Cennamo and Junseop Lee

Received: 22 April 2022
Accepted: 25 May 2022
Published: 28 May 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article

Dual Projection Fusion for Reference-Based Image Super-Resolution

Ruirong Lin * and Nanfeng Xiao

School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; xiaonf@scut.edu.cn
* Correspondence: cslrr546786@mail.scut.edu.cn
Abstract: Reference-based image super-resolution (RefSR) methods have achieved performance superior to that of single image super-resolution (SISR) methods by transferring texture details from an additional high-resolution (HR) reference image to the low-resolution (LR) image. However, existing RefSR methods simply add or concatenate the transferred texture features with the LR features, which cannot effectively fuse the information of these two independently extracted features. Therefore, this paper proposes dual projection fusion for reference-based image super-resolution (DPFSR), which enables the network to focus on the differing information between the two feature sources through inter-residual projection operations, ensuring that detailed information is effectively filled into the LR features. Moreover, this paper also proposes a novel backbone called the deep channel attention connection network (DCACN), which is capable of extracting valuable high-frequency components from the LR space to further improve the quality of image reconstruction. Experimental results show that the proposed method achieves the best peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) performance compared with state-of-the-art (SOTA) SISR and RefSR methods. Visual results demonstrate that the method proposed in this paper recovers more natural and realistic texture details.
Keywords: reference-based super-resolution; attention mechanism; texture transformer; dual projection fusion
1. Introduction
Image super-resolution (SR) aims to reconstruct an HR image with clear texture details from a blurred LR image [1]. In recent years, deep learning-based SISR algorithms [2-6] have made significant progress and are widely used for various real-world tasks, such as medical image processing [7,8], surveillance imaging [9], and object recognition [10]. However, when the upsampling factor reaches 4x or greater, the reconstruction results of most existing methods show blurred visual effects or artifacts. Although generative adversarial network (GAN) [11]- and perceptual loss [12]-based methods have been proposed to improve the quality of the reconstructed images, they cannot guarantee the realism of the generated textures, resulting in degraded PSNR performance.
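(As a brief aside for readers less familiar with the metric, PSNR is a pixel-wise fidelity measure defined from the mean squared error between the reconstructed and ground-truth images; the short sketch below is illustrative and not part of the paper's method.)

```python
import numpy as np

def psnr(reference, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better).

    PSNR = 10 * log10(max_val^2 / MSE), where MSE is the mean squared
    pixel error. GAN-generated textures can look sharp yet deviate from
    the ground truth pixel-wise, which raises MSE and thus lowers PSNR.
    """
    mse = np.mean((reference.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform pixel error of 16 on an 8-bit image gives an MSE of 256 and a PSNR of about 24 dB.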
To address this problem, RefSR methods [13-18], which transfer fine details from an additional reference image (Ref) to the LR image, have been proposed. Compared to traditional SISR, RefSR exhibits better reconstruction performance. RefSR transforms the more complex texture generation process into a relatively simple texture search and transfer operation, thus producing more realistic and natural-looking textures. For example, Zhang et al. [16] fed the Ref and LR images into a pre-trained VGG model for feature extraction, and then performed feature matching and texture transfer in the neural feature space. Yang et al. [18] first introduced the transformer architecture to SR tasks and proposed a novel texture transformer to model the correspondence between the LR and Ref images, which helps to perform feature matching more accurately.
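The texture search step common to these methods can be illustrated schematically: each LR feature patch is compared against all Ref feature patches by normalized inner product (cosine similarity), yielding a hard-attention index (which Ref texture to transfer) and a soft confidence score. The sketch below is a simplified illustration of this general matching idea, not the exact procedure of any cited method; the function name and array shapes are our own assumptions.

```python
import numpy as np

def match_textures(lr_feat, ref_feat):
    """Toy texture search: for each LR query vector (e.g., a flattened
    feature patch), find the most similar Ref key vector.

    lr_feat:  (N, C) array of N query vectors.
    ref_feat: (M, C) array of M candidate Ref vectors.
    Returns the best-match index and cosine-similarity confidence
    for each query.
    """
    # L2-normalize so the inner product equals cosine similarity
    q = lr_feat / (np.linalg.norm(lr_feat, axis=1, keepdims=True) + 1e-8)
    k = ref_feat / (np.linalg.norm(ref_feat, axis=1, keepdims=True) + 1e-8)
    sim = q @ k.T                    # (N, M) relevance map
    idx = sim.argmax(axis=1)         # hard-attention index per LR position
    conf = sim.max(axis=1)           # soft-attention confidence
    return idx, conf
```

The transferred Ref features selected by `idx` are then fused with the LR features, which is precisely the fusion step this paper revisits.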
However, the previous methods ignore that the information in the LR space still contains valuable high-frequency components. In addition, they simply add or concatenate the transferred texture features with the LR features.