Adversarial Attacks and Defenses in Multivariate Time-Series
Forecasting for Smart and Connected Infrastructures

Pooja Krishan¹, Rohan Mohapatra², Sanchari Das³ and Saptarshi Sengupta⁴

¹,²,⁴ Department of Computer Science, San José State University, San José, CA, 95192, USA
³ Department of Computer Science, University of Denver, Denver, CO, 80210, USA

pooja.krishan@sjsu.edu
rohan.mohapatra@sjsu.edu
sanchari.das@du.edu
saptarshi.sengupta@sjsu.edu
ABSTRACT
The emergence of deep learning models has revolutionized various industries over the last decade, leading to a surge in connected devices and infrastructures. However, these models can be tricked into making incorrect predictions with high confidence, leading to disastrous failures and security concerns. To this end, we explore the impact of adversarial attacks on multivariate time-series forecasting and investigate methods to counter them. Specifically, we employ untargeted white-box attacks, namely the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), to poison the inputs to the training process, effectively misleading the model. We also illustrate the subtle modifications to the inputs after the attack, which make detecting the attack with the naked eye quite difficult. Having demonstrated the feasibility of these attacks, we develop robust models through adversarial training and model hardening. We are among the first to showcase the transferability of these attacks and defenses by extrapolating our work from a benchmark electricity dataset to a larger, 10-year real-world dataset used for predicting the time-to-failure of hard disks. Our experimental results confirm that the attacks and defenses achieve the desired security thresholds, leading to a 72.41% and 94.81% decrease in RMSE for the electricity and hard disk datasets, respectively, after implementing the adversarial defenses.
1. INTRODUCTION
A time-series records a series of metrics over regular intervals of time as a sequence of values. Time-series forecasting refers to the task of estimating the output at a certain time step, given the previous values. It is used in a variety
of domains such as finance (Sezer, Gudelek, & Ozbayoglu, 2020), power consumption prediction (Divina, García Torres, Gómez Vela, & Vázquez Noguera, 2019), health prediction of equipment (C.-Y. Lin, Hsieh, Cheng, Huang, & Adnan, 2019), healthcare (Kaushik et al., 2020), and weather forecasting (Karevan & Suykens, 2020). The widespread use of sensors and actuators has resulted in a proliferation of data, leading to a shift from traditional time-series forecasting methods to deep learning architectures (Siami-Namini, Tavakoli, & Siami Namin, 2018), which are more capable of gleaning insights and identifying long-term trends from the data. However, this is a double-edged sword, as deep learning models can be easily compromised by attacks that cause them to produce incorrect forecasts from manipulated input data. This susceptibility of deep learning models to attacks paves the way for catastrophic failures in safety-critical applications and leads to the waste of valuable resources, time, money, and productivity (Akhtar & Mian, 2018). This opens up a new area of research: developing models resistant to these types of attacks.
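To make the forecasting setup concrete, the sketch below shows one common way to frame multivariate time-series forecasting for a deep learning model: a sliding window of the previous w observations of all features predicts a target value one or more steps ahead. This is a minimal illustration under assumed settings (window length, horizon, target column, and array names are ours), not the authors' exact pipeline.

import numpy as np

def make_windows(series, window=24, horizon=1, target_col=0):
    """Turn a multivariate series of shape (T, n_features) into
    (inputs, targets) pairs for supervised forecasting.

    Each input is the previous `window` observations of all features;
    each target is `target_col` measured `horizon` steps ahead.
    """
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t : t + window])  # shape (window, n_features)
        y.append(series[t + window + horizon - 1, target_col])
    return np.asarray(X), np.asarray(y)

# Example: 1000 hourly readings from 5 sensors (synthetic data).
rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
X, y = make_windows(data, window=24, horizon=1)
print(X.shape, y.shape)  # (976, 24, 5) (976,)

A model trained on such pairs learns to map the recent history of all variables to the next value of the target, which is the setting the attacks below perturb.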
Adversarial attacks on deep learning models are classified into white-box or black-box attacks, and targeted or untargeted attacks, depending on the attacker's ease of access and goal, respectively. In white-box attacks, the attacker knows sensitive model-specific information such as inputs, targets, and gradients (Melis et al., 2021). Conversely, in black-box attacks, the model is viewed as an oracle that outputs values given input data, and the attack is crafted based on observed model behavior (Oh, Schiele, & Fritz, 2019; Tsingenopoulos, Preuveneers, & Joosen, 2019). In targeted attacks, the adversary tries not only to delude the model but also to prompt it to produce an output from a particular distribution (Fursov et al., 2021), whereas in untargeted attacks the attacker intends to trigger the model to generate incorrect outputs.
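Since the paper employs FGSM and BIM, a brief sketch may help fix ideas. FGSM perturbs an input x by a single signed-gradient step, x' = x + ε · sign(∇x L(f(x), y)), while BIM repeats smaller steps of size α and clips the result to an ε-ball around x. The PyTorch code below is a generic rendering of these textbook definitions applied to a forecasting model; the model, loss function, ε, α, and step count are illustrative assumptions, not the authors' configuration.

import torch

def fgsm(model, loss_fn, x, y, eps=0.01):
    """Untargeted FGSM: one signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def bim(model, loss_fn, x, y, eps=0.01, alpha=0.002, steps=10):
    """Untargeted BIM: iterated FGSM, projected back to the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            # Project the perturbation back into [-eps, eps] around x.
            x_adv = x.clone() + torch.clamp(x_adv - x, -eps, eps)
        x_adv = x_adv.detach()
    return x_adv

In both cases the perturbation is bounded by ε, which is why the poisoned inputs remain visually close to the clean ones and the attack is hard to detect with the naked eye.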