PEER REVIEW
20th Australian International Aerospace Congress, 27-28th February 2023, Melbourne
Normal Paper
Student Paper
Young Engineer Paper
Aircraft fleet readiness optimisation using reinforcement
learning: a proof of concept
Kilian Vos¹, Zhongxiao Peng¹ and Wenyi Wang²

¹University of New South Wales, Sydney, NSW, 2052, Australia
²Defence Science and Technology Group, Fishermans Bend, VIC, 3207, Australia
Abstract
A fleet of aircraft can be seen as a set of degrading systems that undergo variable loads as they
fly missions and require maintenance throughout their lifetime. Optimal fleet management aims
to maximise fleet availability and readiness while minimising overall maintenance costs. To
achieve this goal, individual aircraft, with variable age and degradation paths, need to operate
cooperatively to maintain high fleet availability while avoiding mechanical failure by
scheduling preventive maintenance actions. Fleet management is therefore a complex decision-making problem. In recent years, Reinforcement Learning (RL) has emerged as an effective
method to optimise sequential decision-making problems (e.g., DeepMind’s AlphaZero). In this
work, we introduce an RL framework that can be employed to optimise the operation and
maintenance of a fleet of aircraft. The operation of a fleet of aircraft is modelled in a simulated
environment and Q-learning is employed to find the optimal policy. The RL solution is then
evaluated against traditional operation/maintenance strategies and the results indicate that the
RL policy performs relatively well over the fleet’s lifetime. We conclude that RL has potential
to help optimise and support fleet management problems.
Keywords: reinforcement learning, Markov decision process, fleet operation, maintenance scheduling, readiness optimisation.
Introduction
Reinforcement Learning (RL) is a machine learning technique suited to solving sequential decision-making problems (e.g., autonomous driving, inventory management or chess-playing artificial intelligence). It differs from supervised learning in a fundamental way: whereas in supervised learning we provide our models with examples from which they can learn to match inputs to exact outputs (labels or numeric values), such explicit supervision is very difficult to provide for a sequential decision-making problem. Instead, in the RL framework we specify a reward function and let the algorithm explore the available actions and learn from experience [1].
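The reward-driven learning loop described above can be illustrated with tabular Q-learning, the algorithm used later in this work. The two-state toy problem below (a single aircraft that is either healthy or degraded, with made-up rewards and transition probabilities) is purely illustrative and is not the paper's fleet environment:

```python
import numpy as np

# Hypothetical toy MDP: states 0 = "healthy", 1 = "degraded";
# actions 0 = "fly mission", 1 = "maintain". All numbers are invented.
N_STATES, N_ACTIONS = 2, 2
rng = np.random.default_rng(0)

def step(state, action):
    if action == 1:                      # maintain: pay a cost, restore health
        return 0, -2.0
    if state == 0:                       # fly while healthy: earn reward, may degrade
        next_state = 1 if rng.random() < 0.3 else 0
        return next_state, 5.0
    return 1, -10.0                      # fly while degraded: penalised, stays degraded

# Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for _ in range(20_000):
    if rng.random() < epsilon:           # epsilon-greedy exploration
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)
print(policy)  # greedy action per state: fly when healthy, maintain when degraded
```

No explicit input-output pairs are provided: the agent discovers the "maintain before flying a degraded aircraft" rule purely from the reward signal, which is the property that makes RL attractive for maintenance scheduling.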
Previous studies have investigated the applicability of RL to optimise the operation and
maintenance of a single mechanical component [2], the maintenance schedule for a multi-
component system [3], as well as the maintenance of a fleet of aircraft [4]. In this work, we
present a proof of concept that uses RL to optimise both the operation (mission assignment) and
maintenance schedule of a fleet of aircraft.
Degradation Model
The degradation model used to simulate aircraft damage was designed to follow Paris’ Law and
replicate the degradation paths observed in the Virkler experiment [5]. The crack length
propagation is described by the following equation:
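Only the generic form of Paris' Law is reproduced here; C and m are material parameters, and the specific values fitted to the Virkler data in this work are not shown:

```latex
\frac{da}{dN} = C \, (\Delta K)^{m}
```

where $a$ is the crack length, $N$ the number of load cycles and $\Delta K$ the stress intensity factor range.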