Citation: Salimibeni, M.; Mohammadi, A.; Malekzadeh, P.; Plataniotis, K.N. Multi-Agent Reinforcement Learning via Adaptive Kalman Temporal Difference and Successor Representation. Sensors 2022, 22, 1393. https://doi.org/10.3390/s22041393

Academic Editors: Panagiotis E. Pintelas, Ioannis E. Livieris and Sotiris Kotsiantis

Received: 30 December 2021
Accepted: 7 February 2022
Published: 11 February 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Multi-Agent Reinforcement Learning via Adaptive Kalman
Temporal Difference and Successor Representation
Mohammad Salimibeni 1, Arash Mohammadi 1,*, Parvin Malekzadeh 2 and Konstantinos N. Plataniotis 2

1 Concordia Institute for Information System Engineering, Concordia University, Montreal, QC H3G 1M8, Canada; m_alimib@encs.concordia.ca
2 Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S 3G8, Canada; p_malekz@encs.concordia.ca (P.M.); kostas@ece.utoronto.ca (K.N.P.)
* Correspondence: arash.mohammadi@concordia.ca; Tel.: +1-514-848-2712 (ext. 2712)
Abstract: Development of distributed Multi-Agent Reinforcement Learning (MARL) algorithms has recently attracted a surge of interest. Generally speaking, conventional Model-Based (MB) or Model-Free (MF) RL algorithms are not directly applicable to MARL problems because they rely on a fixed reward model for learning the underlying value function. While Deep Neural Network (DNN)-based solutions perform well, they are still prone to overfitting, high sensitivity to parameter selection, and sample inefficiency. In this paper, an adaptive Kalman Filter (KF)-based framework is introduced as an efficient alternative that addresses the aforementioned problems by capitalizing on unique characteristics of the KF, such as uncertainty modeling and online second-order learning. More specifically, the paper proposes the Multi-Agent Adaptive Kalman Temporal Difference (MAK-TD) framework and its Successor Representation-based variant, referred to as MAK-SR. The proposed MAK-TD/SR frameworks account for the continuous nature of the action space associated with high-dimensional multi-agent environments and exploit Kalman Temporal Difference (KTD) to address parameter uncertainty. The proposed MAK-TD/SR frameworks are evaluated via several experiments implemented through the OpenAI Gym MARL benchmarks, covering different numbers of agents in cooperative, competitive, and mixed (cooperative-competitive) scenarios. The experimental results illustrate the superior performance of the proposed MAK-TD/SR frameworks compared to their state-of-the-art counterparts.
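The KTD machinery the abstract builds on is standard in the literature; as a rough illustration only (not the paper's actual MAK-TD algorithm), the following minimal NumPy sketch implements linear Kalman Temporal Difference for policy evaluation, assuming a random-walk model on the value weights and a scalar noisy reward observation. The class name, feature interface, and noise settings are illustrative choices, not taken from the paper.

```python
import numpy as np

class LinearKTD:
    """Minimal linear Kalman Temporal Difference (KTD) value estimator.

    The value-function weights theta are treated as the hidden state of a
    Kalman filter: a random walk drives their evolution, and each observed
    reward is a noisy linear measurement through the TD feature difference
    H_t = phi(s_t) - gamma * phi(s_{t+1}).
    """

    def __init__(self, n_features, gamma=0.99, process_var=1e-3, obs_var=1.0):
        self.gamma = gamma
        self.theta = np.zeros(n_features)          # weight (state) estimate
        self.P = np.eye(n_features)                # weight uncertainty
        self.Q = process_var * np.eye(n_features)  # process-noise covariance
        self.R = obs_var                           # observation-noise variance

    def value(self, phi):
        return self.theta @ phi

    def update(self, phi_s, reward, phi_next, done):
        # Prediction step: the random-walk model leaves theta unchanged
        # and inflates the covariance by the process noise.
        P_pred = self.P + self.Q
        # TD observation model: r ~ (phi(s) - gamma*phi(s')) . theta + noise.
        H = phi_s - (0.0 if done else self.gamma) * phi_next
        innovation = reward - H @ self.theta       # the usual TD error
        S = H @ P_pred @ H + self.R                # innovation variance (scalar)
        K = P_pred @ H / S                         # Kalman gain
        # Correction step: second-order update of weights and uncertainty.
        self.theta = self.theta + K * innovation
        self.P = P_pred - np.outer(K, H @ P_pred)
        return innovation
```

A full MAK-TD agent would run one such update per agent and additionally adapt the noise covariances online, e.g., via the Multiple Model Adaptive Estimation listed in the keywords below.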
Keywords: Kalman Temporal Difference; Multiple Model Adaptive Estimation; Multi-Agent Reinforcement Learning; Successor Representation
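For readers unfamiliar with the Successor Representation (SR) named in the title and keywords, the standard textbook definition (background material, not a contribution of this paper) factors the value function into expected discounted state occupancies and the immediate reward:

\[
M^{\pi}(s, s') = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{1}\{ s_t = s' \} \,\middle|\, s_0 = s \right], \qquad V^{\pi}(s) = \sum_{s'} M^{\pi}(s, s')\, R(s'),
\]

so that learning reduces to estimating the occupancy matrix \(M^{\pi}\) and the reward function \(R\) separately.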
1. Introduction
Reinforcement Learning (RL), as a class of Machine Learning (ML) techniques, targets human-level adaptive behavior through the construction of an optimal control policy [1]. Generally speaking, the main underlying objective is learning, via trial and error, from previous interactions between an autonomous agent and its surrounding environment. The optimal control (action) policy is obtained via RL algorithms from the feedback that the environment provides to the agent after each of its actions [2–9]. Policy optimality is reached through this loop, with the goal of increasing the accumulated reward over time.
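For concreteness, the canonical single-agent instance of this feedback loop is the tabular TD(0) update (standard background, not specific to this paper):

\[
V(s_t) \leftarrow V(s_t) + \alpha \big[ r_{t+1} + \gamma V(s_{t+1}) - V(s_t) \big],
\]

where \(\alpha\) is the learning rate, \(\gamma\) the discount factor, and the bracketed temporal-difference error is the quantity that KTD-style methods reinterpret as a Kalman innovation.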
In most successful RL applications, e.g., the games of Go and poker, robotics, and autonomous driving, several autonomous agents are typically involved. This naturally falls within the context of Multi-Agent RL (MARL), a relatively long-established domain that has recently been revitalized by advances in single-agent RL. In the MARL domain, which is the focus of this manuscript, multiple decision-making agents interact (cooperate and/or compete) in a shared environment to achieve a common or conflicting goal. Research Questions: In this paper, we aim to answer the following research questions: