Citation: Yin, Y.; Chen, Z.; Liu, G.; Guo, J. A Mapless Local Path Planning Approach Using Deep Reinforcement Learning Framework. Sensors 2023, 23, 2036. https://doi.org/10.3390/s23042036
Academic Editors: Luis Payá, Oscar Reinoso García and Helder Jesus Araújo
Received: 16 January 2023
Revised: 7 February 2023
Accepted: 8 February 2023
Published: 10 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
A Mapless Local Path Planning Approach Using Deep
Reinforcement Learning Framework
Yan Yin, Zhiyu Chen, Gang Liu and Jianwei Guo *
School of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
* Correspondence: guojianwei@ccut.edu.cn
Abstract:
Path planning and obstacle avoidance are key modules for autonomous mobile robots.
Global path planning based on known maps has been effectively achieved. Local path planning in
unknown dynamic environments is still very challenging due to the lack of detailed environmental
information and unpredictability. This paper proposes an end-to-end local path planner, n-step
dueling double DQN with reward-based ε-greedy (RND3QN), based on a deep reinforcement learning
Q-values to output the corresponding discrete actions. The bias is reduced using n-step bootstrapping
based on the deep Q-network (DQN). The ε-greedy exploration–exploitation strategy is improved by
using the reward value as a measure of exploration, and an auxiliary reward function is introduced to
enrich the reward distribution in the sparse-reward environment. Simulation experiments are
conducted in Gazebo to test the algorithm's effectiveness. The experimental data demonstrate
that the average total reward value of RND3QN is higher than that of algorithms such as dueling
double DQN (D3QN), and the success rates are increased by 174%, 65%, and 61% over D3QN on three
stages, respectively. We experimented on a TurtleBot3 Waffle Pi robot, and the strategies learned
in simulation can be effectively transferred to the real robot.
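The two headline ingredients of RND3QN, the n-step bootstrapped target and a reward-driven ε schedule, can be sketched as follows. This is a minimal illustration only: the function names and the exact form of the ε schedule are assumptions for exposition, not the authors' implementation.

```python
def n_step_target(rewards, gamma, q_next_max):
    """n-step bootstrapped TD target: the sum of discounted rewards over
    n steps plus the discounted bootstrap value n steps ahead."""
    n = len(rewards)
    g = sum(gamma**i * r for i, r in enumerate(rewards))
    return g + gamma**n * q_next_max

def reward_based_epsilon(avg_reward, r_min, r_max, eps_min=0.05, eps_max=1.0):
    """Hypothetical reward-based epsilon schedule: explore more (epsilon
    near eps_max) when recent average reward is low, and exploit more
    (epsilon near eps_min) as reward improves."""
    frac = (avg_reward - r_min) / max(r_max - r_min, 1e-8)
    frac = min(max(frac, 0.0), 1.0)  # clamp to [0, 1]
    return eps_max - frac * (eps_max - eps_min)
```

For example, with rewards [1, 1, 1], γ = 0.9, and a bootstrap value of 10, the 3-step target is 1 + 0.9 + 0.81 + 0.729 × 10 = 10.0; the schedule returns ε = 1.0 when the average reward sits at the historical minimum and ε = 0.05 at the maximum.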
Keywords: D3QN; exploration–exploitation; TurtleBot3; n-step; auxiliary reward functions; path planning
1. Introduction
With the gradual development and growth of artificial intelligence, mobile robots
based on artificial intelligence have provided various conveniences to society while improving
social productivity. Among them, autonomous mobile robots have attracted wide attention for their
ability to complete tasks independently in a given environment. The key to autonomous mobile
robots is the ability to navigate autonomously, and the basis of navigation is path planning:
finding a safe path from the starting position to the target
position without colliding with any obstacle. According to the degree of information about
the environment, path planning can be divided into global path planning and local path
planning (also called real-time planning). This paper aims to explore the problem of local
path planning for robots in dynamic environments.
Different ways of path planning are needed in different environments, for example,
a completely known environment usually uses global path planning, while a partially
known environment or a completely unknown environment requires local path planning.
Robot navigation completes its own localization based on simultaneous localization
and mapping (SLAM) [1,2], and then plans a path to the target location by global path
planning, the accuracy of which depends on the accuracy of environment acquisition.
Global path planning can find the optimal solution, but it requires accurate information
about the environment to be known in advance and is not robust to noise in the
environmental model. Local path planning senses the robot's working environment
to obtain information such as the positions and geometric properties of unknown obstacles.