20th Australian International Aerospace Congress, 27‑28th February 2023, Melbourne
Aircraft fleet readiness optimisation using reinforcement
learning: a proof of concept
Kilian Vos¹, Zhongxiao Peng¹ and Wenyi Wang²

¹ University of New South Wales, Sydney, NSW, 2052, Australia
² Defence Science and Technology Group, Fishermans Bend, VIC, 3207, Australia
Abstract
A fleet of aircraft can be seen as a set of degrading systems that undergo variable loads as they
fly missions and require maintenance throughout their lifetime. Optimal fleet management aims
to maximise fleet availability and readiness while minimising overall maintenance costs. To
achieve this goal, individual aircraft, with variable age and degradation paths, need to operate
cooperatively to maintain high fleet availability while avoiding mechanical failure by
scheduling preventive maintenance actions. Fleet management is therefore a complex decision-
making problem. In recent years, Reinforcement Learning (RL) has emerged as an effective
method to optimise sequential decision-making problems (e.g., DeepMind’s AlphaZero). In this
work, we introduce an RL framework that can be employed to optimise the operation and
maintenance of a fleet of aircraft. The operation of a fleet of aircraft is modelled in a simulated
environment and Q-learning is employed to find the optimal policy. The RL solution is then
evaluated against traditional operation/maintenance strategies and the results indicate that the
RL policy performs well relative to these strategies over the fleet's lifetime. We conclude that
RL has the potential to support and optimise fleet management.
Keywords: reinforcement learning, Markov decision process, fleet operation, maintenance
scheduling, readiness optimisation.
Introduction
Reinforcement Learning (RL) is a machine learning technique suited to solving sequential
decision-making problems (e.g., autonomous driving, inventory management or chess-playing
artificial intelligence). It distinguishes itself from supervised learning in several ways. In
supervised learning, we provide models with examples from which they learn to map inputs to
exact outputs (labels or numeric values); in a sequential decision-making problem, however, it
is very challenging to provide such explicit supervision. Instead, in the RL framework we
provide the algorithm with a reward function and let it explore the different actions and learn
from experience [1].
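To illustrate this learning-from-reward loop, the following minimal sketch applies tabular Q-learning (the algorithm used later in this paper) to a toy single-component degradation problem. The environment, rewards and parameters are illustrative assumptions on our part, not the fleet simulator studied in this work.

```python
import random

# Toy episodic environment (illustrative only, not the paper's fleet
# simulator): a single component whose damage grows when it is operated.
# States: damage level 0..4 (4 = failed). Actions: operate or maintain.
N_STATES, FAILED = 5, 4
OPERATE, MAINTAIN = 0, 1

def step(state, action):
    """Return (next_state, reward): operating earns a mission reward but
    degrades the component; maintenance resets damage at a small cost."""
    if action == MAINTAIN:
        return 0, -2.0
    nxt = min(state + random.choice([1, 2]), FAILED)  # stochastic degradation
    return nxt, (-50.0 if nxt == FAILED else 1.0)     # failure is heavily penalised

# Tabular Q-learning: the agent learns Q(s, a) purely from the reward
# signal gathered through interaction, with no labelled examples.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

for episode in range(5000):
    s = 0
    for _ in range(50):
        # epsilon-greedy: mostly exploit the current estimate, sometimes explore
        a = random.randrange(2) if random.random() < epsilon \
            else max((OPERATE, MAINTAIN), key=lambda act: Q[s][act])
        s_next, r = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
        if s == FAILED:
            break  # episode ends on failure

# The learned policy typically operates at low damage and maintains near failure.
print(["maintain" if Q[s][MAINTAIN] > Q[s][OPERATE] else "operate"
       for s in range(N_STATES)])
```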
Previous studies have investigated the use of RL to optimise the operation and maintenance of
a single mechanical component [2], the maintenance schedule of a multi-component system [3],
and the maintenance of a fleet of aircraft [4]. In this work, we present a proof of concept that
uses RL to optimise both the operation (mission assignment) and maintenance schedule of a
fleet of aircraft.
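To make this problem setting concrete, the sketch below shows one plausible shape for such a simulated fleet environment, in which each aircraft is assigned a mission, standby, or maintenance at every time step. All names, costs and degradation increments are illustrative assumptions; the paper's actual state and action definitions, and its Paris'-law-based degradation model, are described in the following sections.

```python
import random
from dataclasses import dataclass, field

# Per-aircraft actions available to the fleet manager at each time step.
MISSION, STANDBY, MAINTAIN = 0, 1, 2

@dataclass
class FleetEnv:
    """Minimal fleet operation/maintenance environment skeleton."""
    n_aircraft: int = 3
    failure_limit: float = 1.0  # damage level at which an aircraft fails
    damage: list = field(default_factory=list)

    def reset(self):
        self.damage = [0.0] * self.n_aircraft
        return tuple(self.damage)

    def step(self, actions):
        """actions: one of {MISSION, STANDBY, MAINTAIN} per aircraft."""
        reward = 0.0
        for i, a in enumerate(actions):
            if a == MISSION and self.damage[i] < self.failure_limit:
                self.damage[i] += random.uniform(0.01, 0.05)  # flight-load degradation
                reward += 1.0                                  # mission flown
            elif a == MAINTAIN:
                self.damage[i] = 0.0                           # preventive repair
                reward -= 0.5                                  # maintenance cost
            if self.damage[i] >= self.failure_limit:
                reward -= 20.0                                 # failure/unavailability penalty
        return tuple(self.damage), reward

# Example interaction: fly two aircraft, send the third to maintenance.
env = FleetEnv()
state = env.reset()
state, reward = env.step([MISSION, MISSION, MAINTAIN])
```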
Degradation Model
The degradation model used to simulate aircraft damage was designed to follow Paris’ Law and
replicate the degradation paths observed in the Virkler experiment [5]. The crack length
propagation is described by the following equation:
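$$\frac{da}{dN} = C \, (\Delta K)^{m}$$

where $a$ is the crack length, $N$ the number of load cycles, $\Delta K$ the stress intensity factor range, and $C$ and $m$ material constants. This is the standard form of Paris' law; in this work the constants are chosen to replicate the degradation paths observed in the Virkler experiment [5].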