Citation: Ramezani Dooraki, A.; Lee, D.J. A Multi-Objective Reinforcement Learning Based Controller for Autonomous Navigation in Challenging Environments. Machines 2022, 10, 500. https://doi.org/10.3390/machines10070500
Academic Editors: Luis Payá, Oscar Reinoso García and Helder Jesus Araújo
Received: 23 April 2022
Accepted: 14 June 2022
Published: 22 June 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A Multi-Objective Reinforcement Learning Based Controller
for Autonomous Navigation in Challenging Environments
Amir Ramezani Dooraki and Deok-Jin Lee *
School of Mechanical Design Engineering, Jeonbuk National University, Jeonju 54896, Korea;
a.ramezani.dooraki@gmail.com
* Correspondence: deokjlee@jbnu.ac.kr
Abstract: In this paper, we introduce a self-trained controller for autonomous navigation in static and dynamic (with moving walls and nets) challenging environments (including trees, nets, windows, and pipes) using deep reinforcement learning trained simultaneously with multiple rewards. We train our RL algorithm in a multi-objective way. The algorithm learns to generate continuous actions for controlling the UAV, producing waypoints that lead the UAV to a goal area (indicated by an RGB image) while avoiding static and dynamic obstacles. We use the RGB-D image as the input to the algorithm, which learns to control the UAV in three degrees of freedom (x, y, and z). We train our robot in environments simulated by Gazebo, using the Robot Operating System (ROS) for communication between our algorithm and the simulated environments. Finally, we visualize the trajectories generated by our trained algorithm using several methods and present results that clearly show its capability to learn to maximize the defined multi-objective reward.
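To illustrate how multiple objectives can be folded into a single scalar training signal, the sketch below combines a goal-progress term with an obstacle-proximity penalty as a weighted sum. The paper does not specify its exact reward terms or weights here; every name, weight, and threshold in this sketch is an assumption for illustration only.

```python
def multi_objective_reward(dist_to_goal, prev_dist_to_goal, min_obstacle_dist,
                           w_progress=1.0, w_collision=1.0, collision_radius=0.5):
    """Illustrative scalarization of two objectives into one reward.

    All term names and weights are hypothetical, not the paper's scheme.
    """
    # Positive reward for reducing the distance to the goal area
    progress = prev_dist_to_goal - dist_to_goal

    # Penalty that grows as the UAV approaches the nearest obstacle
    if min_obstacle_dist < collision_radius:
        collision_penalty = -(collision_radius - min_obstacle_dist) / collision_radius
    else:
        collision_penalty = 0.0

    # Weighted sum turns the multi-objective signal into a single scalar
    return w_progress * progress + w_collision * collision_penalty
```

In practice the weights trade off safety against speed of goal-reaching; tuning them shifts which behavior the trained policy favors.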
Keywords: reinforcement learning; autonomous navigation; obstacle avoidance; deep learning; multi-objective
1. Introduction
By reflecting on nature, one can see a variety of intelligent animals, each with some degree of intelligence, all of which learn by experience to convert their potential capabilities into skills. This fact inspired the machine learning community to devise and develop approaches inspired by nature, within the field of bio-inspired artificial intelligence [1]. Further, it motivated us to use the idea of learning by reinforcement signal in this paper. Reinforcement signals (such as positive rewards and punishments) are among the primary sources of learning in intelligent creatures, and this type of learning is implemented by the reinforcement learning family of algorithms in machine learning.
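As a concrete (and deliberately minimal) member of this family, the sketch below shows one tabular Q-learning update step, where a reward signal nudges a value estimate toward a temporal-difference target. This is a generic textbook update for illustration, not the deep RL algorithm used in this paper.

```python
def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step; Q maps (state, action) -> value estimate."""
    # Best estimated value achievable from the next state (0.0 if unexplored)
    best_next = max((v for (s, a), v in Q.items() if s == next_state), default=0.0)
    # Temporal-difference target: immediate reward plus discounted future value
    td_target = reward + gamma * best_next
    old = Q.get((state, action), 0.0)
    # Move the current estimate a fraction alpha toward the target
    Q[(state, action)] = old + alpha * (td_target - old)
    return Q
```

The reinforcement signal (`reward`) is the only feedback the update receives, mirroring how positive rewards and punishments drive learning in the creatures described above.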
While learning can be considered a type of intelligence, it can only manifest itself when applied to an agent or used to control one. An agent can be any actual or simulated machine, robot, or application with sensors and actuators. Among the several categories of robots, Unmanned Aerial Vehicles (UAVs) are among the most critical agents to control because they can fly, reaching locations and positions that are inaccessible to Unmanned Ground Vehicles (UGVs) and Unmanned Surface Vehicles (USVs). One of the essential considerations in controlling a UAV is its navigation and control algorithm. Several kinds of methods are used for navigation in UAVs: some depend on an external guiding system (such as GPS), and others are based on rule-based controllers, unlike living creatures.
In this paper, we introduce a self-trained controller for autonomous navigation (SCAN) (Algorithm 1) in static and dynamic challenging environments using a deep reinforcement learning-based algorithm trained with depth and RGB images as input and multiple rewards. We control the UAV using continuous actions (a sample generated trajectory by