
Citation: Yan, B.; Li, J.; Yang, Z.; Zhang, X.; Hao, X. AIE-YOLO: Auxiliary Information Enhanced YOLO for Small Object Detection. Sensors 2022, 22, 8221. https://doi.org/10.3390/s22218221

Academic Editor: Petros Daras
Received: 26 September 2022
Accepted: 24 October 2022
Published: 27 October 2022
Article
AIE-YOLO: Auxiliary Information Enhanced YOLO for Small
Object Detection
Bingnan Yan, Jiaxin Li *, Zhaozhao Yang, Xinpeng Zhang and Xiaolong Hao
School of Electronic Engineering, Xi’an Shiyou University, Xi’an 710065, China
* Correspondence: jiaxinli_cv@163.com
Abstract: Small object detection is one of the key challenges in computer vision because small objects carry little information and lose much of it during feature extraction. You Only Look Once v5 (YOLOv5) adopts the Path Aggregation Network (PANet) to alleviate this information loss, but it cannot restore information that has already been lost. To this end, an auxiliary information-enhanced YOLO is proposed to improve the sensitivity and detection performance of YOLOv5 on small objects. First, a context enhancement module with a maximum receptive field of 21 × 21 is proposed; it captures the global and local information of the image by fusing multi-scale receptive fields and introduces an attention branch to strengthen the expression of key features and suppress background noise. To further enhance the feature representation of small objects, the high- and low-frequency components obtained by wavelet decomposition are introduced into PANet to participate in multi-scale feature fusion, counteracting the gradual disappearance of small-object features after repeated downsampling and pooling operations. Experiments on the challenging Tsinghua–Tencent 100K dataset show that the mean average precision of the proposed model is 9.5% higher than that of the original YOLOv5 while maintaining real-time speed, outperforming mainstream object detection models.
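As a concrete illustration of the first contribution, the following is a minimal PyTorch sketch of a context enhancement module of the kind the abstract describes: parallel dilated 3 × 3 convolutions whose receptive fields grow to 21 × 21, fused and re-weighted by an attention branch. The branch layout, dilation rates, and attention design shown here are assumptions for illustration; the paper's exact module is not specified in this excerpt.

# Minimal sketch of a context enhancement module: parallel dilated
# convolutions fused across scales, gated by a spatial attention branch.
# Branch count, dilation rates, and attention design are assumptions.
import torch
import torch.nn as nn

class ContextEnhancement(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # A single 3x3 convolution with dilation d spans (2d + 1) pixels;
        # dilations 1, 5, and 10 give 3x3, 11x11, and 21x21 receptive fields.
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 5, 10)
        ])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        # Attention branch: a 1x1 convolution plus sigmoid yields a spatial
        # mask that emphasizes key features and suppresses background noise.
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return ctx * self.attn(x) + x  # residual path keeps original detail

# Example: a 256-channel backbone feature map; shape is preserved.
feat = torch.randn(1, 256, 40, 40)
out = ContextEnhancement(256)(feat)  # (1, 256, 40, 40)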
Keywords: small object detection; context enhancement; large receptive field; wavelet transform; multi-scale feature fusion
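The second contribution relies on wavelet decomposition. Below is a minimal sketch of a one-level 2-D Haar transform, which splits a feature map into one low-frequency band (LL) and three high-frequency detail bands (LH, HL, HH), each at half resolution. The Haar filter and tensor layout are standard; how AIE-YOLO injects these bands into PANet is not reproduced in this excerpt, so the sketch is illustrative only.

# Minimal sketch of a one-level 2-D Haar wavelet decomposition of an
# NCHW feature map (H and W assumed even). Sign conventions vary across
# implementations; this is one standard, energy-preserving choice.
import torch

def haar_dwt(x: torch.Tensor):
    """Split x into low-frequency LL and detail bands LH, HL, HH."""
    a = x[..., 0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[..., 0::2, 1::2]  # top-right
    c = x[..., 1::2, 0::2]  # bottom-left
    d = x[..., 1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 2  # low frequency: smoothed structure
    lh = (c + d - a - b) / 2  # vertical difference (horizontal edges)
    hl = (b + d - a - c) / 2  # horizontal difference (vertical edges)
    hh = (a + d - b - c) / 2  # diagonal difference
    return ll, lh, hl, hh

x = torch.randn(1, 64, 80, 80)
ll, lh, hl, hh = haar_dwt(x)  # each band is (1, 64, 40, 40)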
1. Introduction
In recent years, with the significant improvement of computer hardware and the continuous deepening of deep learning theory, object detection algorithms based on deep learning have made major breakthroughs in the image field and are widely used in autonomous driving, intelligent medical treatment, industrial inspection, remote sensing image analysis, etc. [1–4]. However, compared with regular-sized objects, small objects carry less information and are susceptible to background interference, which limits the further development of object detection in the real world.
At present, object detection algorithms based on deep learning can be divided into two categories. One is the two-stage detection algorithm represented by the R-CNN series [5–7]. The other is the one-stage detection algorithm represented by the You Only Look Once (YOLO) series [8–11] and the SSD series [12,13]. One-stage detectors offer fast inference and strong real-time performance, but their accuracy is slightly lower than that of two-stage detectors. As one of the YOLO series models, YOLOv5, with its simple network structure, has received extensive attention and application because it balances detection accuracy and detection speed. However, it is not sensitive to small objects and is prone to missed detections. The main reason for the poor detection performance of YOLOv5 on small objects is that their visual features are not salient: their feature and position information is gradually lost or even ignored during feature extraction by convolutional neural networks (CNNs), making them difficult for the network to detect. Secondly, small