Citation: Yan, B.; Li, J.; Yang, Z.; Zhang, X.; Hao, X. AIE-YOLO: Auxiliary Information Enhanced YOLO for Small Object Detection. Sensors 2022, 22, 8221. https://doi.org/10.3390/s22218221
Academic Editor: Petros Daras
Received: 26 September 2022
Accepted: 24 October 2022
Published: 27 October 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
sensors
Article
AIE-YOLO: Auxiliary Information Enhanced YOLO for Small
Object Detection
Bingnan Yan, Jiaxin Li *, Zhaozhao Yang, Xinpeng Zhang and Xiaolong Hao
School of Electronic Engineering, Xi’an Shiyou University, Xi’an 710065, China
* Correspondence: jiaxinli_cv@163.com
Abstract: Small object detection is one of the key challenges in the current computer vision field, owing to the limited information small objects carry and the information lost during feature extraction. You
Only Look Once v5 (YOLOv5) adopts the Path Aggregation Network to alleviate the problem of
information loss, but it cannot restore the information that has been lost. To this end, an auxiliary
information-enhanced YOLO is proposed to improve the sensitivity and detection performance of
YOLOv5 to small objects. Firstly, a context enhancement module containing a receptive field size of
21
×
21 is proposed, which captures the global and local information of the image by fusing multi-scale
receptive fields, and introduces an attention branch to enhance the expressive ability of key features
and suppress background noise. To further enhance the feature expression ability of small objects, we introduce the high- and low-frequency information decomposed by wavelet transform into PANet to participate in multi-scale feature fusion, thereby addressing the problem that small-object features gradually disappear after repeated downsampling and pooling operations. Experiments on the
challenging dataset Tsinghua–Tencent 100 K show that the mean average precision of the proposed
model is 9.5% higher than that of the original YOLOv5 while maintaining the real-time speed, which
is better than the mainstream object detection models.
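The paper does not publish its code, but the 21 × 21 receptive field cited above can be illustrated with standard receptive-field arithmetic. The sketch below (an assumption for illustration, not the authors' implementation) shows how a hypothetical branch of three stacked dilated 3 × 3 convolutions with dilations 1, 3, and 6 reaches exactly that receptive field:

```python
# Hedged sketch: receptive-field arithmetic for a stack of (possibly dilated)
# convolutions. Each layer is described as (kernel_size, stride, dilation).
# This is NOT the authors' module; it only demonstrates how stacked dilated
# convolutions can reach a 21x21 receptive field.

def receptive_field(layers):
    """Return the receptive field (in input pixels) of the last layer."""
    rf, jump = 1, 1  # rf: receptive field so far; jump: stride product
    for k, s, d in layers:
        # A dilated kernel has effective extent d*(k-1)+1, so it adds
        # d*(k-1) pixels per unit jump to the receptive field.
        rf += d * (k - 1) * jump
        jump *= s
    return rf

# Hypothetical branch: three 3x3 convs, stride 1, dilations 1, 3, 6
branch = [(3, 1, 1), (3, 1, 3), (3, 1, 6)]
print(receptive_field(branch))  # -> 21
```

Any layer stack whose increments `d*(k-1)*jump` sum to 20 yields the same 21-pixel receptive field; the dilation values here are illustrative only.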
Keywords: small object detection; context enhancement; large receptive field; wavelet transform; multi-scale feature fusion
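The high/low-frequency decomposition described in the abstract can be sketched with a single-level 2D Haar transform, the simplest wavelet. The code below is a generic illustration (not the authors' implementation): it splits an image into a low-frequency approximation (LL) and three high-frequency detail sub-bands (LH, HL, HH), each at half resolution.

```python
# Hedged sketch: single-level 2D Haar wavelet decomposition.
# Generic illustration of splitting an image into low-frequency (LL)
# and high-frequency (LH, HL, HH) sub-bands; not the paper's code.
import numpy as np

def haar2d(img):
    """Decompose a 2D array (even dims) into LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2].astype(float)  # top-left pixel of each 2x2 block
    b = img[0::2, 1::2].astype(float)  # top-right
    c = img[1::2, 0::2].astype(float)  # bottom-left
    d = img[1::2, 1::2].astype(float)  # bottom-right
    ll = (a + b + c + d) / 4.0  # low frequency: local average
    lh = (a + b - c - d) / 4.0  # horizontal detail (vertical changes)
    hl = (a - b + c - d) / 4.0  # vertical detail (horizontal changes)
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# Toy image with alternating vertical stripes: fine detail that plain
# average-pooling would blur away, but the HL band preserves.
img = np.array([[10, 0, 10, 0],
                [10, 0, 10, 0],
                [10, 0, 10, 0],
                [10, 0, 10, 0]])
ll, lh, hl, hh = haar2d(img)
```

Here `ll` is a uniform 5 (the blurred average), while `hl` carries the stripe pattern, which is the kind of fine-grained, small-object detail the paper feeds back into multi-scale fusion.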
1. Introduction
In recent years, with significant advances in computer hardware and the continued development of deep learning theory, object-detection algorithms based on deep learning have achieved major breakthroughs and are widely used in autonomous driving, intelligent medical treatment, industrial inspection, remote sensing image analysis, and other fields [1–4]. However, compared with regular-sized objects, small
objects carry less information and are susceptible to background interference, which limits
the further development of object detection in the real world.
At present, object-detection algorithms based on deep learning can be divided into
two categories. One is the two-stage object detection algorithm represented by the R-CNN series [5–7]. The other is the one-stage object detection algorithm represented by the You Only Look Once (YOLO) series [8–11] and the SSD series [12,13]. The one-stage detection algorithm has the advantages of fast inference speed and high real-time performance, but its accuracy is slightly lower than that of the two-stage object detection algorithm. As one
of the YOLO series models, YOLOv5, with its simple network structure, has received extensive attention and application because it balances detection accuracy and detection speed. However, it is not sensitive to small objects and is prone to missed detections. The main reason for YOLOv5's poor detection performance on small objects is that their visual features are not salient, and their feature and position information are gradually lost, or even discarded, during feature extraction by convolutional neural networks (CNNs), making them difficult for the network to detect. Secondly, small