Citation: Zhao, D.; Mo, B.; Zhu, X.; Zhao, J.; Zhang, H.; Tao, Y.; Zhao, C. Dynamic Multi-Attention Dehazing Network with Adaptive Feature Fusion. Electronics 2023, 12, 529. https://doi.org/10.3390/electronics12030529

Academic Editor: Silvia Liberata Ullo

Received: 26 December 2022; Revised: 15 January 2023; Accepted: 16 January 2023; Published: 19 January 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Article
Dynamic Multi-Attention Dehazing Network with Adaptive Feature Fusion

Donghui Zhao 1,*, Bo Mo 1, Xiang Zhu 2, Jie Zhao 1,3, Heng Zhang 4, Yimeng Tao 1 and Chunbo Zhao 1

1 Beijing Institute of Technology, Beijing 100081, China
2 Beijing Building Materials Research Institute Co., Ltd., Beijing 100041, China
3 North Navigation Control Technology Co., Ltd., Beijing 100176, China
4 Shanghai Electro-Mechanical Engineering Institute, Shanghai 201109, China
* Correspondence: zhaodonghui99@foxmail.com
Abstract: This paper proposes a Dynamic Multi-Attention Dehazing Network (DMADN) for single image dehazing. The proposed network consists of two key components: the Dynamic Feature Attention (DFA) module and the Adaptive Feature Fusion (AFF) module. The DFA module provides pixel-wise and channel-wise weights for input features, since the haze distribution in a degenerated image is always uneven and the values differ across channels. We propose an AFF module based on an adaptive mixup operation to restore the spatial information missing from high-resolution layers. Most previous works have concentrated on increasing the scale of the model to improve dehazing performance, which makes them difficult to deploy on edge devices. Instead, we introduce contrastive learning into our training process, leveraging both positive and negative samples to optimize the network. The contrastive learning strategy effectively improves output quality without increasing the model's complexity or inference time in the testing phase. Extensive experiments on synthetic and real-world hazy images demonstrate that DMADN achieves state-of-the-art dehazing performance with a competitive number of parameters.
Keywords: dehazing; CNN; feature attention; feature fusion; contrastive learning
1. Introduction
Haze is a common atmospheric phenomenon caused by floating particles in the air. Due to the turbid medium, light propagation is hindered, and images taken in haze are often subject to some degree of degradation. Images captured in hazy environments degrade the performance of high-level computer vision systems (such as object detection [1,2] and scene understanding [3,4]). However, a dependable high-level computer vision system must work well under various kinds of interference [5,6]. Developing dehazing techniques is therefore a significant step toward improving the robustness of high-level computer vision systems.
Previous works [7,8] have proposed the atmosphere scattering model to explain the process of hazy image generation. Specifically, it assumes that:

I(x) = J(x)t(x) + A(1 − t(x))  (1)

where I(x) and J(x) are the degenerated hazy and clear images, A is the atmosphere light intensity, and t(x) is the medium transmission map. We also have t(x) = e^(−βd(x)), where β and d(x) are the atmosphere scattering parameter and the scene depth, respectively.
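To make the model concrete, the following is a minimal NumPy sketch of Equation (1) used in the forward direction, i.e., synthesizing a hazy image from a clear image and a depth map. The values of β and A here are illustrative placeholders, not settings taken from the paper.

```python
import numpy as np

def synthesize_haze(J, depth, beta=1.0, A=0.8):
    """Apply the atmosphere scattering model I = J*t + A*(1 - t),
    with medium transmission t(x) = exp(-beta * d(x))."""
    t = np.exp(-beta * depth)      # transmission decays with scene depth
    t = t[..., np.newaxis]         # broadcast over the color channels
    I = J * t + A * (1.0 - t)      # Equation (1)
    return I, t

# Toy example: a uniform gray image at constant unit depth.
J = np.full((4, 4, 3), 0.5)
depth = np.ones((4, 4))
I, t = synthesize_haze(J, depth, beta=1.0, A=0.8)
```

Dehazing inverts this process: given only I(x), the network must recover J(x), which is ill-posed because t(x) and A are unknown.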
Early dehazing methods [9–19] are based on priors in natural scenes; He et al. [12] proposed the dark channel prior (DCP), which is the masterpiece of prior-based methods. However, prior-based dehazing methods are not effective in specific scenarios. In recent years, the Convolutional Neural Network (CNN) has been proven effective in dehazing [20–27]. DehazeNet [21] first reconstructs the haze-free image by estimating A
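For reference, the dark channel underlying DCP [12] can be sketched as follows: the per-pixel minimum over color channels, followed by a local minimum filter. This is a plain NumPy illustration; the patch size of 15 follows common practice for DCP and is not a setting stated in this paper.

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an RGB image in [0, 1]: per-pixel minimum
    across channels, then a minimum filter over a local patch."""
    min_rgb = image.min(axis=2)                 # channel-wise minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")  # replicate borders
    dark = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark
```

The DCP observation is that this dark channel is close to zero for haze-free outdoor images, which lets t(x) be estimated directly from I(x); its failure on sky regions and bright surfaces is one reason prior-based methods break down in specific scenarios.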