Citation: Yin, Y.; Chen, Z.; Liu, G.; Guo, J. A Mapless Local Path Planning Approach Using Deep Reinforcement Learning Framework. Sensors 2023, 23, 2036. https://doi.org/10.3390/s23042036
Academic Editors: Luis Payá, Oscar Reinoso García and Helder Jesus Araújo
Received: 16 January 2023
Revised: 7 February 2023
Accepted: 8 February 2023
Published: 10 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A Mapless Local Path Planning Approach Using Deep
Reinforcement Learning Framework
Yan Yin , Zhiyu Chen, Gang Liu and Jianwei Guo *
School of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
* Correspondence: guojianwei@ccut.edu.cn
Abstract: Path planning and obstacle avoidance form the key module of autonomous mobile robots. Global path planning based on known maps has been effectively achieved, but local path planning in unknown dynamic environments remains very challenging due to the lack of detailed environmental information and its unpredictability. This paper proposes an end-to-end local path planner, n-step dueling double DQN with reward-based ε-greedy (RND3QN), based on a deep reinforcement learning framework, which takes environmental data from LiDAR as input and uses a neural network to fit Q-values and output the corresponding discrete actions. Bias is reduced using n-step bootstrapping on top of the deep Q-network (DQN). The ε-greedy exploration–exploitation strategy is improved by using the reward value as a measure of exploration, and an auxiliary reward function is introduced to densify the reward distribution of the sparse-reward environment. Simulation experiments are conducted in Gazebo to test the algorithm's effectiveness. The experimental data demonstrate that the average total reward of RND3QN is higher than that of algorithms such as dueling double DQN (D3QN), and its success rates are increased by 174%, 65%, and 61% over D3QN on three stages, respectively. We experimented on a TurtleBot3 Waffle Pi robot, and the strategies learned in simulation can be effectively transferred to the real robot.
Keywords: D3QN; exploration-exploitation; turtlebot3; n-step; auxiliary reward functions; path planning
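The two algorithmic ingredients named in the abstract, an n-step bootstrapped double-DQN target and a reward-driven ε schedule, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the linear form of the ε schedule, and the example Q-values are assumptions for illustration only.

```python
import numpy as np


def n_step_double_dqn_target(rewards, q_online_next, q_target_next,
                             done, gamma=0.99):
    """Sketch of an n-step double-DQN target (hypothetical form).

    rewards       : the n rewards r_t, ..., r_{t+n-1} along the trajectory
    q_online_next : online-network Q-values at state s_{t+n} (action selection)
    q_target_next : target-network Q-values at state s_{t+n} (action evaluation)
    done          : whether the episode terminated within the n steps
    """
    n = len(rewards)
    # Discounted n-step return: sum_k gamma^k * r_{t+k}
    g = sum(gamma ** k * r for k, r in enumerate(rewards))
    if not done:
        # Double DQN decoupling: select the action with the online network,
        # evaluate it with the target network to reduce overestimation bias.
        a_star = int(np.argmax(q_online_next))
        g += gamma ** n * q_target_next[a_star]
    return g


def reward_based_epsilon(avg_reward, r_min, r_max,
                         eps_min=0.05, eps_max=1.0):
    """Hypothetical reward-based ε schedule: the higher the recent average
    reward, the less the agent explores. The paper's exact rule may differ."""
    frac = (avg_reward - r_min) / max(r_max - r_min, 1e-8)
    frac = min(max(frac, 0.0), 1.0)
    return eps_max - (eps_max - eps_min) * frac
```

With n = 1 the first function reduces to the ordinary double-DQN target; larger n propagates real rewards further before bootstrapping, which is the bias-reduction effect the abstract refers to.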
1. Introduction
With the continuing development of artificial intelligence, AI-based mobile robots have provided various conveniences to society while improving productivity; among them, autonomous mobile robots have attracted wide attention for their ability to complete tasks independently in a given environment. The key capability of an autonomous mobile robot is autonomous navigation, and the basis of navigation is path planning: finding a safe path from the starting position to the target position without colliding with any obstacle. According to the degree of information available about the environment, path planning can be divided into global path planning and local path planning (also called real-time planning). This paper explores the problem of local path planning for robots in dynamic environments.
Different environments call for different path planning approaches: a completely known environment usually permits global path planning, while a partially known or completely unknown environment requires local path planning. For navigation, the robot localizes itself using simultaneous localization and mapping (SLAM) [1,2] and then plans a path to the target location by global path planning, whose accuracy depends on how accurately the environment is acquired. Global path planning can find the optimal solution, but it requires accurate information about the environment to be known in advance, and it is poorly robust to noise in the environmental model. Local path planning instead detects the robot's working environment through onboard sensors to obtain information such as the positions and geometric properties of obstacles.