Citation: Hou, L.; Li, B.; Liu, W.; Xu, Y.; Yang, S.; Rong, X. Deep Reinforcement Learning for Model Predictive Controller Based on Disturbed Single Rigid Body Model of Biped Robots. Machines 2022, 10, 975. https://doi.org/10.3390/machines10110975
Academic Editor: Manuel F. Silva
Received: 19 September 2022
Accepted: 22 October 2022
Published: 26 October 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Deep Reinforcement Learning for Model Predictive Controller
Based on Disturbed Single Rigid Body Model of Biped Robots
Landong Hou 1, Bin Li 2,*, Weilong Liu 2, Yiming Xu 1, Shuhui Yang 2 and Xuewen Rong 3
1 School of Electrical Engineering and Automation, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
2 School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
3 School of Control Science and Engineering, Shandong University, Jinan 250100, China
* Correspondence: ribbenlee@126.com; Tel.: +86-1369-862-2129
Abstract: This paper modifies the single rigid body (SRB) model by treating the swinging leg as a disturbance to the centroid acceleration and rotational acceleration of the SRB model. It proposes a deep reinforcement learning (DRL)-based model predictive control (MPC) scheme to resist the swing-leg disturbance: the DRL predicts the disturbance, and MPC then computes the optimal ground reaction forces according to the predicted disturbance. Among DRL methods, we use the proximal policy optimization (PPO) algorithm, a very stable and widely applicable on-policy algorithm based on the actor–critic framework. Simulation results show that the improved SRB model and the PPO-based MPC method can accurately predict the swing-leg disturbance to the SRB model and resist it, making the locomotion more robust.
Keywords: biped robots; single rigid body; model predictive control; deep reinforcement learning
1. Introduction
In this paper, deep reinforcement learning (DRL) is used to predict the disturbance that the swinging leg exerts on the single rigid body (SRB) model, so that the SRB-based model predictive control (MPC) method can be transferred to biped robots with a non-negligible leg mass.
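The idea of cancelling a predicted swing-leg disturbance in the SRB translational dynamics can be sketched as follows. This is a minimal illustration, not the paper's implementation: the mass value, variable names, and exact cancellation law are assumptions, and the paper's MPC optimizes forces over a horizon rather than applying a one-step formula.

```python
import numpy as np

# Disturbed SRB translational dynamics (sketch):
#   m * pddot = f + m * g + d_f
# where f is the total ground reaction force and d_f is the
# swing-leg disturbance force that the DRL policy predicts.

m = 42.0                          # assumed robot mass [kg]
g = np.array([0.0, 0.0, -9.81])   # gravity [m/s^2]

def srb_accel(f, d_f):
    """Centroid acceleration under ground reaction force f and
    swing-leg disturbance force d_f (both in the world frame)."""
    return (f + d_f) / m + g

# If the disturbance prediction were exact, it could be cancelled
# directly in the commanded force: f = m * (a_des - g) - d_f.
a_des = np.zeros(3)                        # desired centroid accel
d_f = np.array([1.5, -0.8, 2.0])           # predicted disturbance [N]
f_cmd = m * (a_des - g) - d_f
print(np.allclose(srb_accel(f_cmd, d_f), a_des))  # True
```

In the paper's formulation this cancellation is not applied pointwise; instead, the predicted disturbance enters the MPC prediction model, and the optimizer distributes the compensating forces over the stance feet subject to friction constraints.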
Compared with other types of robots, legged robots have great application value and development prospects. At present, quadruped robots and biped robots are the research hotspots in the field of legged robots. Due to the complex nonlinear dynamics and high degrees of freedom of biped robots, realizing their stable walking is a challenging task [1]. Compared with quadruped robots, it is difficult for biped robots to achieve static stability because of their mechanical structure: the rectangular feet of biped robots are very small, and some biped robots even have line feet, so the support polygon during static standing and locomotion is small or non-existent. From the point of view of stability analysis, biped robots do not satisfy the conditions for static stability, only those for dynamic stability; that is, they can only stabilize themselves during locomotion. Therefore, designing a locomotion controller for biped robots is much more difficult than for quadruped robots.
At present, there are two main control approaches for legged robots: model-based methods and model-free methods. DRL is the most prominent of the model-free methods. Currently, in the field of legged robots, proximal policy optimization (PPO) [2] and deep deterministic policy gradient (DDPG) [3] are the two most commonly used DRL methods. DRL methods have successfully realized the navigation of mobile robots [4] and the motion control of manipulators [5]. They avoid the complex modeling and parameter-tuning process; through the guidance of different reward functions, the agent can learn different target policies, making DRL a more flexible control method.
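The stability of PPO mentioned above comes from its clipped surrogate objective, which limits how far each policy update can move by clipping the new-to-old probability ratio. The sketch below illustrates only that objective; epsilon = 0.2 and the toy ratio/advantage values are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """Per-sample PPO clipped surrogate:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r = pi_new(a|s) / pi_old(a|s) and A is the advantage."""
    clipped = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return np.minimum(ratio * advantage, clipped * advantage)

ratios = np.array([0.5, 1.0, 1.5])       # new/old action probabilities
advantages = np.array([1.0, 1.0, -1.0])  # advantage estimates
obj = ppo_clip_objective(ratios, advantages)
# For positive advantages the ratio's upside is capped at 1 + eps;
# for negative advantages the min keeps the pessimistic (unclipped) term.
print(obj)
```

Maximizing the mean of this objective (plus a value-function loss and an entropy bonus) over minibatches is what makes PPO an on-policy actor–critic method that tolerates several gradient epochs per batch of experience.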