Citation: Xu, Q.; Deng, H.; Zhang, Z.; Liu, Y.; Ruan, X.; Liu, G. A ConvNext-Based and Feature Enhancement Anchor-Free Siamese Network for Visual Tracking. Electronics 2022, 11, 2381. https://doi.org/10.3390/electronics11152381

Academic Editor: Silvia Liberata Ullo

Received: 21 June 2022
Accepted: 22 July 2022
Published: 29 July 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article

A ConvNext-Based and Feature Enhancement Anchor-Free Siamese Network for Visual Tracking
Qiguo Xu 1, Honggui Deng 1,*, Zeyu Zhang 1, Yang Liu 1, Xusheng Ruan 1 and Gang Liu 1,2

1 School of Physics and Electronics, Central South University, Changsha 410017, China; 202211045@csu.edu.cn (Q.X.); 192211038@csu.edu.cn (Z.Z.); 192211037@csu.edu.cn (Y.L.); 202212071@csu.edu.cn (X.R.); 162201003@csu.edu.cn (G.L.)
2 College of Information Science and Engineering, Changsha Normal University, Changsha 410199, China
* Correspondence: denghonggui@csu.edu.cn; Tel.: +86-199-7499-4794
Abstract: Existing anchor-based Siamese trackers rely on the anchor's design to predict the scale and aspect ratio of the target. However, these methods introduce many hyperparameters, leading to computational redundancy. In this paper, to achieve high network efficiency, we propose a ConvNext-based anchor-free Siamese tracking network (CAFSN), whose anchor-free design increases network flexibility and versatility. In CAFSN, to obtain a suitable backbone, the state-of-the-art ConvNext network is applied to visual tracking for the first time by adjusting its network stride and receptive field. Moreover, a central confidence branch based on Euclidean distance is introduced into the classification prediction network of CAFSN to suppress low-quality prediction boxes for robust visual tracking. In particular, we observe that the Siamese network cannot fully discriminate the tracked target from similar objects, which degrades network performance. We therefore build a fusion network consisting of crop and 3D max-pooling operations to better distinguish the target from similar objects: 3D max-pooling selects the highest activation value to enlarge the response gap between the target and distractors, while cropping unifies the dimensions of different features and reduces computation. Ablation experiments demonstrate that this module increases the success rate by 1.7% and precision by 0.5%. We evaluate CAFSN on the challenging OTB100, UAV123, and GOT-10K benchmarks, where it shows strong robustness to noise and similar-object distractors while running in real time at 58.44 FPS.
Keywords: visual tracking; ConvNext network; feature enhancement; anchor-free
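The abstract's central confidence branch is described only as "based on Euclidean distance"; its exact formulation appears later in the paper. As an illustration of the idea, the following is a minimal NumPy sketch of one plausible center score that decays with Euclidean distance from the target center (the function name, grid convention, and normalization are assumptions, not the paper's definition):

```python
import numpy as np

def center_confidence(h, w, cx, cy):
    """Illustrative center-confidence map: scores decay linearly with
    Euclidean distance from the assumed target center (cx, cy),
    so predictions far from the center are down-weighted."""
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # Normalize by the farthest corner distance so scores lie in [0, 1].
    max_dist = np.sqrt(max(cx, w - 1 - cx) ** 2 + max(cy, h - 1 - cy) ** 2)
    return 1.0 - dist / max_dist

# The map peaks at the center and falls toward the borders, which is
# how such a branch can suppress low-quality, off-center predictions.
conf = center_confidence(25, 25, 12, 12)
```

Multiplying a classification score map by such a confidence map is one common way (e.g., in FCOS-style anchor-free detectors) to suppress boxes predicted far from the object center.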
1. Introduction
Visual tracking is a significant research problem in computer vision: given the target state in the initial frame of a sequence, the tracker must predict the target state in each subsequent frame [1,2]. Visual tracking remains challenging in practical applications because the target appears in various complex scenes involving occlusion, fast motion, illumination changes, scale changes, and background clutter [3,4].
Currently popular visual tracking methods center on the Siamese network [5]. These approaches typically consist of a Siamese backbone network for feature extraction, an interactive head, and a predictor that generates the target localization. The Siamese network casts the visual tracking task as a target-matching problem and learns the similarity mapping between the template and search images through the interactive head. However, a single similarity mapping usually contains limited spatial information, so the tracker cannot predict the target scale effectively. CFNet [6] combines correlation-filter technology with deep learning, fusing each frame with the previous template to compute a new template; this keeps tracking effective under large scale changes. SiamFC and its variants [7,8] match multiple scales in the search region to determine the target