Uncertainty-Aware Prediction of Remaining Useful Life in Complex Systems

Weijun Xu 1, Enrico Zio 1,2

1 Department of Energy, Politecnico di Milano, Milano, 20156, Italy
weijun.xu@polimi.it
enrico.zio@polimi.it

2 Centre de Recherche sur les Risques et les Crises (CRC), MINES-Paris, PSL University, Sophia Antipolis, 06904, France
enrico.zio@mines-paristech.fr
ABSTRACT
Accurate prediction of the remaining useful life (RUL) of industrial systems is critical to ensuring smooth operation and safety. Various prognostic methods have been developed, but significant challenges remain for field applications. While many methods achieve high accuracy, they often fall short in quantifying the uncertainty of their predictions. Without uncertainty quantification, it is difficult to assess the confidence level of the prognostic results. Therefore, it is essential to transparently present the uncertainty associated with the predicted results. This Ph.D. project aims to develop novel uncertainty-aware methods for RUL prediction of complex systems. The project will address the following situations of increasing uncertainty: (a) propose a general framework for data-driven RUL methods to quantify uncertainty and generate adaptive confidence intervals under a single fault mode and a single operating condition; (b) consider both epistemic and aleatoric uncertainties in scenarios with multiple fault modes and multiple operating conditions, and then calibrate the uncertainty estimates to enhance their accuracy; (c) explore how to predict RUL and quantify uncertainty when no run-to-failure data or RUL labels are available in practice; (d) handle uncertainty propagation from the component level to the system level. Through this research, the project will provide more reliable and comprehensive solutions for RUL prediction in complex systems.
1. PROBLEM STATEMENT
The methods for predicting Remaining Useful Life (RUL)
can generally be divided into two main categories: model-
based approaches and data-driven approaches (Gebraeel et
al., 2023). Model-based methods rely on a deep understand-
ing of degradation mechanisms and the governing principles
of the degradation process. However, practical challenges
arise due to the complexity of failure mechanisms and op-
erating environments, making it difficult to establish accurate
models, especially when there is uncertainty in system behav-
ior.
In contrast, data-driven methods are flexible and not confined to a specific model structure, relying instead on the quantity and quality of the available data. By employing machine learning algorithms and statistical techniques, these methods can learn patterns and relationships directly from the data, without explicit knowledge of the underlying degradation mechanisms, making them adaptable to various systems and able to address the limitations of traditional model-based approaches. However, most data-driven methods generate single-point RUL estimates and often lack robustness in uncertainty quantification (Zio, 2022).
Deep learning, a prominent data-driven approach, is known for its ability to handle complex nonlinear data structures and achieve high RUL prediction accuracy. However, it often struggles to quantify prediction uncertainty (Khan & Yairi, 2018), especially in real-world scenarios involving multiple fault modes and operating conditions, or lacking run-to-failure data and RUL labels, which lead to heightened uncertainty levels.
In RUL prediction, there are two primary types of uncer-
tainties: aleatoric uncertainty and epistemic uncertainty (Li,
Yang, Lee, Wang, & Rong, 2020). Quantifying these un-
certainties in various scenarios poses a significant challenge.
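To make the distinction concrete, the following minimal Python sketch (illustrative only, not part of the proposed framework) shows the commonly used law-of-total-variance decomposition of ensemble predictions into aleatoric and epistemic components, together with an empirical coverage check of the kind that motivates calibration; the data and variable names (mu, sigma2, true_rul) are hypothetical placeholders assuming each ensemble member outputs a predictive mean and variance.

# Minimal sketch: decomposing predictive uncertainty from an ensemble of
# probabilistic RUL models and checking the coverage of the resulting intervals.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble outputs: T members, N test units (placeholders for
# real model predictions, e.g. from MC dropout or deep ensembles).
T, N = 20, 100
mu = 100.0 + 5.0 * rng.standard_normal((T, N))   # predicted RUL means
sigma2 = 25.0 + rng.random((T, N))               # predicted RUL variances

# Law-of-total-variance decomposition of the predictive variance:
aleatoric = sigma2.mean(axis=0)                  # mean of predicted variances
epistemic = mu.var(axis=0)                       # variance of predicted means
total_var = aleatoric + epistemic
mean_pred = mu.mean(axis=0)

# Empirical coverage of nominal 95% intervals against (synthetic) true RULs.
true_rul = 100.0 + 8.0 * rng.standard_normal(N)  # placeholder ground truth
half_width = 1.96 * np.sqrt(total_var)
covered = np.abs(true_rul - mean_pred) <= half_width
print(f"Empirical coverage of nominal 95% intervals: {covered.mean():.2%}")

In this sketch, a large gap between the nominal 95% level and the empirical coverage is exactly the kind of miscalibration that the calibration step in objective (b) is meant to correct.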
Due to inaccuracies stemming from model misspecification and approximate inference, it is imperative to calibrate the obtained uncertainties for accurate quantification (Kuleshov, Fenner, & Ermon, 2018). For example, a 95% posterior confidence interval will typically not cover 95% of the true re-