Citation: Luo, J.; Zhu, L.; Wu, N.; Chen, M.; Liu, D.; Zhang, Z.; Liu, J. Adaptive Neural-PID Visual Servoing Tracking Control via Extreme Learning Machine. Machines 2022, 10, 782. https://doi.org/10.3390/machines10090782
Academic Editors: Shuai Li, Dechao Chen, Mohammed Aquil Mirza, Vasilios N. Katsikis, Dunhui Xiao and Predrag Stanimirović
Received: 31 July 2022
Accepted: 5 September 2022
Published: 7 September 2022
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Adaptive Neural-PID Visual Servoing Tracking Control via
Extreme Learning Machine
Junqi Luo 1,2, Liucun Zhu 1,3,*, Ning Wu 2,*, Mingyou Chen 3, Daopeng Liu 4, Zhenyu Zhang 3 and Jiyuan Liu 3
1 College of Mechanical Engineering, Guangxi University, Nanning 530004, China
2 Key Laboratory of Beibu Gulf Offshore Engineering Equipment and Technology, Beibu Gulf University, Qinzhou 535000, China
3 Advanced Science and Technology Research Institute, Beibu Gulf University, Qinzhou 535000, China
4 School of Mechanical Engineering, Jiangsu University, Zhenjiang 212000, China
* Correspondence: lczhu@bbgu.edu.cn (L.Z.); n.wu@bbgu.edu.cn (N.W.)
Abstract: Vision-guided robots are widely deployed in modern industry, but accurately tracking moving objects in real time remains a challenge. In this paper, a hybrid adaptive control scheme combining an Extreme Learning Machine (ELM) with proportional–integral–derivative (PID) control is proposed for dynamic visual tracking by a manipulator. The scheme extracts line features on the image plane from a laser–camera system and determines an optimal control input to guide the robot so that the image features are aligned with their desired positions. The observation and state–space equations are first derived by analyzing the motion of the camera and the object. The system is then represented as an autoregressive moving average with exogenous input (ARMAX) model together with a valid estimation model. The adaptive predictor estimates online the relevant 3D parameters between the camera and the object, which are subsequently used to compute the system sensitivity of the neural network. An ELM–PID controller is designed for adaptive adjustment of the control parameters, and the scheme was validated on a physical robot platform. The experimental results show that the proposed method's visual tracking control outperforms pure P and PID controllers.
Keywords: adaptive visual tracking; visual servoing; laser-camera system; ELM–PID control
1. Introduction
Robotic vision has important commercial and domestic applications, such as assembly and welding, fruit picking, and household services. Most robots, however, follow a fixed program to complete repetitive tasks. When the target or the robot deviates from expectation, such robots typically cannot adjust to the environment in time, largely because they lack adequate perception capability [1]. Visual servoing enables dexterous control of robots through continuous visual perception and has drawn consistent attention.
Since 1996, Hutchinson's three classic surveys [2–4] have provided a systematic understanding of visual servoing. According to the representation of the control signal, it can be categorized as position-based (PBVS), image-based (IBVS), or hybrid visual servoing (HVS). In particular, IBVS has attracted widespread interest for its simple structure and insensitivity to calibration accuracy. Common IBVS control methods are adaptive [5], sliding mode [6], fuzzy [7], and learning-based [8]. Saleem et al. [9] proposed an adaptive fuzzy-tuned proportional–derivative (AFT-PD) control scheme to improve the visual tracking control of a wheeled mobile robot. Yang et al. [10] used radial basis function (RBF) neural networks to estimate the dynamic parameters of the robot and compensate for the robot's torque, improving the tracking performance of the controller.
Most IBVS studies have been carried out under the assumption that the target is stationary, so visual tracking in dynamic scenes has rarely been considered. Some researchers have estimated the image Jacobian matrix of IBVS by developing adaptive algorithms.
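To make the ELM mechanism referenced above concrete, the following is a minimal illustrative sketch (not the authors' implementation) of an Extreme Learning Machine regressor of the kind used to adapt PID gains: the hidden-layer weights are random and fixed, and only the output weights are solved in closed form via the Moore–Penrose pseudo-inverse. The class name, the two-feature input (error and error rate), and the toy training targets are all hypothetical.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine regressor (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Hidden-layer weights are random and never trained: the defining ELM trait.
        self.W = rng.normal(size=(n_in, n_hidden))
        self.b = rng.normal(size=n_hidden)
        self.beta = None  # output weights, solved in closed form by fit()

    def _hidden(self, X):
        # Nonlinear random feature map.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        # Least-squares output weights: beta = H^+ Y (Moore-Penrose pseudo-inverse).
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta


# Hypothetical toy use: map (error, error rate) samples to corrections
# of the three PID gains (Kp, Ki, Kd).
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
Y = np.column_stack([0.5 * X[:, 0], 0.1 * X[:, 1], 0.05 * X.sum(axis=1)])

elm = ELM(n_in=2, n_hidden=40).fit(X, Y)
gains = elm.predict(X)  # shape (200, 3): one gain-correction triple per sample
```

Because only `beta` is re-solved when new data arrive, the closed-form fit is cheap enough to run inside a control loop, which is what makes ELM attractive for online gain adaptation.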