
Article
Aerial Target Tracking Algorithm Based on Faster
R-CNN Combined with Frame Differencing
Yurong Yang *, Huajun Gong, Xinhua Wang and Peng Sun
College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, 29 Jiangjun Road,
Nanjing 210016, China; ghj301@nuaa.edu.cn (H.G.); xhwang@nuaa.edu.cn (X.W.); spa147@nuaa.edu.cn (P.S.)
* Correspondence: yyr1991@126.com; Tel.: +86-25-8489-2301 (ext. 50) or +86-157-0518-6772.
Academic Editor: Michael Wing
Received: 24 April 2017; Accepted: 12 June 2017; Published: 20 June 2017
Abstract:
We propose a robust approach to detecting and tracking moving objects for a naval
unmanned aircraft system (UAS) landing on an aircraft carrier. The frame difference algorithm
follows a simple principle and achieves real-time tracking, whereas the Faster Region-based Convolutional
Neural Network (R-CNN) achieves highly precise detection and tracking. We thus combine
Faster R-CNN with the frame difference method, and demonstrate that the combination exhibits robust and
real-time detection and tracking performance. In our UAS landing experiments, two cameras placed
on both sides of the runway are used to capture the moving UAS. When the UAS is captured,
the joint algorithm uses frame difference to detect the moving target (UAS). As soon as the Faster
R-CNN algorithm accurately detects the UAS, the detection priority is given to Faster R-CNN. In this
manner, we also perform motion segmentation and object detection in the presence of changes in the
environment, such as illumination variation or “walking persons”. By combining the two algorithms,
we can accurately detect and track objects with a tracking accuracy of up to 99% and a frame
rate of up to 40 Hz. Thus, a solid foundation is laid for subsequent landing guidance.
Keywords: deep learning; Faster R-CNN; UAS landing; object detection
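The priority-switching scheme summarized in the abstract can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the frame-difference step is a plain absolute-difference threshold over grayscale frames, and `rcnn_detect` stands in for a hypothetical Faster R-CNN detector that returns a bounding box when it finds the UAS and `None` otherwise.

```python
import numpy as np

def frame_difference(prev, curr, thresh=30):
    """Detect a moving region by absolute-differencing two grayscale frames.

    Returns a bounding box (x, y, w, h) of the changed pixels, or None
    if no pixel difference exceeds the threshold.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

def joint_track(prev, curr, rcnn_detect):
    """Joint scheme from the abstract: prefer the Faster R-CNN detection
    when it is available; otherwise fall back to frame differencing."""
    box = rcnn_detect(curr)  # hypothetical detector: box or None
    if box is not None:
        return box, "faster_rcnn"
    return frame_difference(prev, curr), "frame_diff"
```

In this sketch the detector is queried every frame and the frame-difference result is used only when the detector misses, mirroring the "detection priority is given to Faster R-CNN" rule stated above.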
1. Introduction
Unmanned aircraft systems (UAS) have become a major trend in robotics research in recent
decades. UAS have emerged in an increasing number of applications, both military and civilian.
The opportunities and challenges of this fast-growing field are summarized by Kumar et al. [1].
Flight, takeoff, and landing involve the most complex processes; in particular, autonomous landing in
unknown or Global Navigation Satellite System (GNSS)-denied environments remains an open problem.
With the development and fusion of computer vision and image processing, the application of visual
navigation in UAS automatic landing has widened.
Computer vision in UAS landing has achieved a number of accomplishments in recent years.
Generally, we divide these methods into two main categories based on the setup of the vision sensors,
namely on-board vision landing systems and on-ground vision landing systems. Much work has
been done on on-board vision landing. Shang et al. [2] proposed a method for UAS automatic landing
by recognizing an airport runway in the image. Sven et al. [3] described the design of a landing pad
and the vision-based algorithm that estimates the 3D position of the UAS relative to the landing pad.
Li et al. [4] estimated the UAS pose parameters according to the shapes and positions of three runway
lines in the image, which were extracted using the Hough transform.
Saripalli et al. [5]
presented a design
and implementation of a real-time, vision-based landing algorithm for an autonomous helicopter by
using vision for precise target detection and recognition. Yang Gui [6] proposed a method for UAS
automatic landing by recognizing four infrared lamps on the runway.
Cesetti et al. [7]
presented
a vision-based guide system for UAS navigation and landing with the use of natural landmarks.
Aerospace 2017, 4, 32; doi:10.3390/aerospace4020032 www.mdpi.com/journal/aerospace