Citation: Wang, M.; Wen, T.; Liu, H. A Codec-Unified Deblurring Approach Based on U-Shaped Invertible Network with Sparse Salient Representation in Latent Space. Electronics 2022, 11, 2177. https://doi.org/10.3390/electronics11142177

Academic Editor: Dah-Jye Lee

Received: 21 June 2022
Accepted: 5 July 2022
Published: 12 July 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
A Codec-Unified Deblurring Approach Based on U-Shaped Invertible Network with Sparse Salient Representation in Latent Space
Meng Wang 1,2,*, Tao Wen 1,2 and Haipeng Liu 1,2

1 Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China; wentao997540054@163.com (T.W.); ran@kust.edu.cn (H.L.)
2 Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming 650500, China
* Correspondence: wangmeng@kmust.edu.cn
Abstract: Existing deep learning architectures usually use a separate encoder and decoder to generate the desired simulated images, which is inefficient for feature analysis and synthesis. To address the failure of existing methods to fully exploit the correlation between encoder and decoder, this paper focuses on codec-unified invertible networks that accurately guide the image deblurring process by controlling latent variables. Inspired by U-Net, a U-shaped multi-level invertible network (UML-IN) is proposed by integrating wavelet invertible networks into a supervised U-shaped architecture, establishing multi-resolution correlations between blurry and sharp image features under the guidance of a hybrid loss. Further, this paper proposes L1 regularization constraints to obtain sparse latent variables, thereby alleviating the information dispersion problem caused by high-dimensional inference in invertible networks. Finally, we fine-tune the weights of the invertible modules by calculating a similarity loss between blur-sharp variable pairs. Extensive experiments on real and synthetic blurry image sets show that the proposed approach is efficient and competitive compared with state-of-the-art methods.
Keywords: invertible networks; image deblurring; U-Net; multi-resolution correlations; L1 regularization; similarity loss
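The latent-space objective outlined in the abstract, an L1 sparsity penalty on the latent variables combined with a similarity loss between blur-sharp variable pairs, can be sketched in a minimal NumPy example. The function name, the mean-squared form of the similarity term, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def latent_loss(z_blur, z_sharp, lam=0.01):
    """Hypothetical hybrid latent loss: an L1 sparsity penalty on the
    sharp-branch latent variables plus a blur-sharp similarity term."""
    sparsity = lam * np.abs(z_sharp).sum()         # L1 regularization -> sparse latents
    similarity = np.mean((z_blur - z_sharp) ** 2)  # similarity between blur-sharp pairs
    return sparsity + similarity

# Toy latent vectors standing in for flattened invertible-network outputs.
z_b = np.array([0.5, -0.2, 0.0, 0.1])
z_s = np.array([0.4,  0.0, 0.0, 0.0])
loss = latent_loss(z_b, z_s)
```

In the paper's setting, `z_blur` and `z_sharp` would be the latent variables inferred by the invertible modules from blurry and sharp inputs; minimizing the similarity term drives each pair together, while the L1 term keeps the representation sparse and counteracts information dispersion in the high-dimensional latent space.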
1. Introduction

The purpose of image deblurring is to restore a low-quality degraded image to a high-quality image with sharp spatial details. An efficient deblurring method can not only enhance visual perception but also assist high-level vision tasks such as image classification [1] and object detection [2]. However, image deblurring is a highly ill-posed problem because there are infinitely many feasible solutions. To constrain the solution space to valid images, early deblurring methods typically used empirical observations to handcraft image priors that improve image quality [3–7]. In recent years, with the successful application of deep learning [2,8–10], deblurring methods based on convolutional neural networks (CNNs), which implicitly learn more general priors by capturing the statistical information of natural images from large-scale data, have developed rapidly [11–16].
Compared with earlier methods, CNN-based methods have significantly improved model performance, mainly owing to the diversity of generative framework designs. At present, the main solutions include module structures based on single decoding, codec separation, and codec unification. The representative model design based on single decoding is the GAN, which has mature applications in image deblurring tasks [17–21]. A GAN maps input noise (i.e., latent variables) to the generated results; the noise is usually set to obey a Gaussian or uniform distribution independent of the training data (or application scenarios). However, the research of Karras et al. [22] showed that the generated results obtained by using the noise constrained by a prior distribution were