Citation: Ramezani Dooraki, A.; Lee, D.J. A Multi-Objective Reinforcement Learning Based Controller for Autonomous Navigation in Challenging Environments. Machines 2022, 10, 500. https://doi.org/10.3390/machines10070500

Academic Editors: Luis Payá, Oscar Reinoso García and Helder Jesus Araújo

Received: 23 April 2022; Accepted: 14 June 2022; Published: 22 June 2022
Article
A Multi-Objective Reinforcement Learning Based Controller
for Autonomous Navigation in Challenging Environments
Amir Ramezani Dooraki and Deok-Jin Lee *
School of Mechanical Design Engineering, Jeonbuk National University, Jeonju 54896, Korea;
a.ramezani.dooraki@gmail.com
* Correspondence: deokjlee@jbnu.ac.kr
Abstract: In this paper, we introduce a self-trained controller for autonomous navigation in static and dynamic (with moving walls and nets) challenging environments (including trees, nets, windows, and pipes) using deep reinforcement learning trained simultaneously with multiple rewards. We train our RL algorithm in a multi-objective way. The algorithm learns to generate continuous actions for controlling the UAV, aiming to generate waypoints that lead the UAV to a goal area (specified by an RGB image) while avoiding static and dynamic obstacles. The algorithm takes an RGB-D image as input and learns to control the UAV in 3-DoF (x, y, and z). We train our robot in environments simulated by Gazebo, using the Robot Operating System (ROS) for communication between our algorithm and the simulated environments. Finally, we visualize the trajectories generated by our trained algorithm using several methods, and our results clearly show the algorithm's capability to learn to maximize the defined multi-objective reward.
Keywords: reinforcement learning; autonomous navigation; obstacle avoidance; deep learning; multi-objective
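As a brief illustration of the multi-objective training signal summarized in the abstract, the following Python sketch scalarizes two hypothetical reward terms (progress toward the goal area and clearance from the nearest obstacle in the depth image) into a single reward via a weighted sum. The terms, weights, and thresholds are illustrative assumptions only and are not the reward design used in this paper, which is defined in the later sections.

# Minimal sketch: scalarizing multiple reward terms for multi-objective RL.
# All terms, weights, and thresholds are illustrative assumptions, not the
# paper's actual reward design.
import numpy as np

def goal_reward(position, goal, prev_distance):
    """Shaping term: positive when the UAV moves closer to the goal area."""
    distance = float(np.linalg.norm(goal - position))
    return prev_distance - distance, distance

def obstacle_penalty(min_depth, safe_distance=1.0):
    """Penalty that grows as the closest depth reading falls below safe_distance."""
    return -max(0.0, safe_distance - min_depth)

def multi_objective_reward(position, goal, prev_distance, min_depth,
                           weights=(1.0, 0.5)):
    """Combine the objectives with a weighted sum (one common scalarization)."""
    r_goal, distance = goal_reward(position, goal, prev_distance)
    r_obs = obstacle_penalty(min_depth)
    w_goal, w_obs = weights
    return w_goal * r_goal + w_obs * r_obs, distance

# Example step: UAV at (1, 2, 3) m, goal at (5, 5, 3) m, closest obstacle 0.8 m away.
reward, d = multi_objective_reward(
    position=np.array([1.0, 2.0, 3.0]),
    goal=np.array([5.0, 5.0, 3.0]),
    prev_distance=6.0,
    min_depth=0.8,
)
print(f"scalar reward: {reward:.3f}, distance to goal: {d:.3f}")

A weighted sum is only the simplest scalarization; the relative weights govern the trade-off between reaching the goal and keeping clear of obstacles.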
1. Introduction
By reflecting on nature, one can observe a variety of intelligent animals, each with some degree of intelligence, all of which learn by experience to convert their potential capabilities into skills. This fact inspired the machine learning community to devise and develop machine learning approaches inspired by nature, within the field of bio-inspired artificial intelligence [1]. Further, it motivated us to use the idea of learning by reinforcement signals in this paper. Reinforcement signals (such as positive rewards and punishments) are among the primary sources of learning in intelligent creatures. This type of learning is implemented and applied by the reinforcement learning family of algorithms in machine learning.
While learning can be considered a type of intelligence, it can only manifest itself when applied to an agent or when used to control one. An agent could be any actual or simulated machine, robot, or application with sensors and actuators. Among the several categories of robots, Unmanned Aerial Vehicles (UAVs) are some of the most critical agents to control because they can fly, which means they can reach locations and positions that are inaccessible to Unmanned Ground Vehicles (UGVs) and Unmanned Surface Vehicles (USVs). One of the essential considerations in controlling a UAV is its navigation and control algorithm. Several kinds of navigation methods are used for UAVs: some depend on an external guiding system (such as GPS), and others rely on rule-based controllers, which differ from the way a living creature navigates.
In this paper, we introduce a self-trained controller for autonomous navigation (SCAN) (Algorithm 1) in static and dynamic challenging environments using a deep reinforcement learning-based algorithm trained with depth and RGB images as input and with multiple rewards. We control the UAV using continuous actions (a sample generated trajectory by