Citation: Xu, Q.; Deng, H.; Zhang, Z.; Liu, Y.; Ruan, X.; Liu, G. A ConvNext-Based and Feature Enhancement Anchor-Free Siamese Network for Visual Tracking. Electronics 2022, 11, 2381. https://doi.org/10.3390/electronics11152381

Academic Editor: Silvia Liberata Ullo

Received: 21 June 2022; Accepted: 22 July 2022; Published: 29 July 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A ConvNext-Based and Feature Enhancement Anchor-Free
Siamese Network for Visual Tracking
Qiguo Xu 1, Honggui Deng 1,*, Zeyu Zhang 1, Yang Liu 1, Xusheng Ruan 1 and Gang Liu 1,2

1 School of Physics and Electronics, Central South University, Changsha 410017, China; 202211045@csu.edu.cn (Q.X.); 192211038@csu.edu.cn (Z.Z.); 192211037@csu.edu.cn (Y.L.); 202212071@csu.edu.cn (X.R.); 162201003@csu.edu.cn (G.L.)
2 College of Information Science and Engineering, Changsha Normal University, Changsha 410199, China
* Correspondence: denghonggui@csu.edu.cn; Tel.: +86-199-7499-4794
Abstract: Existing anchor-based Siamese trackers rely on the design of anchors to predict the scale and aspect ratio of the target. However, these methods introduce many hyperparameters and lead to computational redundancy. In this paper, to achieve outstanding network efficiency, we propose a ConvNext-based anchor-free Siamese tracking network (CAFSN), which employs an anchor-free design to increase network flexibility and versatility. In CAFSN, to obtain a suitable backbone network, the state-of-the-art ConvNext network is applied to the visual tracking field for the first time by adjusting its network stride and receptive field. Moreover, a central confidence branch based on Euclidean distance is added to the classification prediction network of CAFSN to suppress low-quality prediction boxes for robust visual tracking. In particular, we observe that the Siamese network cannot establish a complete discriminative model between the tracked target and similar objects, which negatively impacts network performance. We therefore build a fusion network consisting of crop and 3D max-pooling operations to better distinguish the target from similar objects: 3D max pooling selects the highest activation values to enlarge the difference between the target and similar objects, while the crop operation unifies the dimensions of different features and reduces the amount of computation. Ablation experiments demonstrate that this module increases the success rate by 1.7% and the precision by 0.5%. We evaluate CAFSN on challenging benchmarks such as OTB100, UAV123, and GOT-10K, validating its strong performance in noise immunity and similar-target discrimination while running in real time at 58.44 FPS.
Keywords: visual tracking; ConvNext network; feature enhancement; anchor-free
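As a rough illustration of the two components described in the abstract, the following is a minimal PyTorch sketch (not the authors' released code): (a) a center-confidence target map whose values decay with the Euclidean distance from the object center, and (b) a fusion step that center-crops several feature maps to a common size and applies 3D max pooling across them so that only the strongest activation at each location is kept. All tensor shapes, the decay constant, and the crop size are assumed values for illustration only.

```python
import torch
import torch.nn.functional as F

def center_confidence_map(h, w, cx, cy, sigma=16.0):
    """Hypothetical center-confidence target: each score-map cell gets a
    confidence that decays with its Euclidean distance to the assumed
    target center (cx, cy); sigma is an assumed decay constant."""
    ys = torch.arange(h, dtype=torch.float32).view(h, 1).expand(h, w)
    xs = torch.arange(w, dtype=torch.float32).view(1, w).expand(h, w)
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    return torch.exp(-dist ** 2 / (2 * sigma ** 2))  # in (0, 1], peaks at the center

def crop_and_3d_maxpool(features, size=25):
    """Hypothetical fusion step: center-crop each feature map to a common
    spatial size, stack them along a depth axis, and take a 3D max pool
    over that axis so only the strongest activation at each location survives."""
    cropped = []
    for f in features:                      # each f: (B, C, H, W)
        h, w = f.shape[-2:]
        top, left = (h - size) // 2, (w - size) // 2
        cropped.append(f[..., top:top + size, left:left + size])
    stacked = torch.stack(cropped, dim=2)   # (B, C, D, size, size), D = number of maps
    pooled = F.max_pool3d(stacked, kernel_size=(stacked.shape[2], 1, 1))
    return pooled.squeeze(2)                # (B, C, size, size)
```

For instance, center_confidence_map(25, 25, cx=12.0, cy=12.0) yields a 25 x 25 map that peaks at the center and could serve as a training target for a centerness-style branch.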
1. Introduction
Visual tracking is a significant research problem in the field of computer vision. Given only the target state in the initial frame of a sequence, the tracker must predict the target state in every subsequent frame [1,2]. Visual tracking remains challenging in practical applications because the target appears in complex scenes involving occlusion, fast motion, illumination changes, scale changes, and background clutter [3,4].
The current popular visual tracking methods focus on the Siamese network [5]. These approaches typically consist of a Siamese backbone network for feature extraction, an interactive head, and a predictor that generates the target localization. The Siamese network formulates the visual tracking task as a target-matching problem and learns a similarity mapping between the template and search images through the interactive head. However, the tracker cannot predict the target scale effectively, since a single similarity mapping usually contains limited spatial information.
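As a concrete picture of this similarity mapping, the short sketch below shows a depthwise cross-correlation between template and search-region features, the kind of interaction head commonly used in Siamese trackers; the channel count and feature-map sizes are placeholder assumptions rather than CAFSN's actual settings.

```python
import torch
import torch.nn.functional as F

def depthwise_xcorr(search_feat, template_feat):
    """Slide the template feature over the search feature channel by channel
    (grouped convolution), producing one similarity response map per channel."""
    b, c, h, w = search_feat.shape
    kernel = template_feat.reshape(b * c, 1, *template_feat.shape[-2:])
    out = F.conv2d(search_feat.reshape(1, b * c, h, w), kernel, groups=b * c)
    return out.reshape(b, c, out.shape[-2], out.shape[-1])

# Toy shapes: a 127x127 template and a 255x255 search patch passed through a
# backbone with stride 8 would give roughly 16x16 and 32x32 feature maps.
z = torch.randn(1, 256, 16, 16)   # template features (assumed 256 channels)
x = torch.randn(1, 256, 32, 32)   # search-region features
response = depthwise_xcorr(x, z)  # (1, 256, 17, 17) similarity map
```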
CFNet [6] proposed combining correlation filter technology with deep learning; in each frame, the previous template is combined with the features of the current frame to compute a new template. This approach keeps tracking effective when scale changes are large.
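One simple way to realize such a frame-by-frame template update is a running linear interpolation of template features, sketched below; the interpolation rate is an assumed value, not a parameter reported for CFNet.

```python
import torch

def update_template(old_template, new_template, rate=0.01):
    """Blend the previous template features with features extracted from the
    newly tracked frame; a small rate keeps the template stable while still
    adapting to appearance and scale changes."""
    return (1.0 - rate) * old_template + rate * new_template
```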
SiameseFC et al. [7,8] propose matching multiple scales in the search region to determine the target