TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data

Shoaib Ahmed Siddiqui 1,2,*,†, Dominique Mercier 1,2,*,†, Andreas Dengel 1,2 and Sheraz Ahmed 1
Citation: Siddiqui, S.A.; Mercier, D.; Dengel, A.; Ahmed, S. TSInsight: A Local-Global Attribution Framework for Interpretability in Time Series Data. Sensors 2021, 21, 7373. https://doi.org/10.3390/s21217373

Academic Editor: Nunzio Cennamo

Received: 15 September 2021; Accepted: 28 October 2021; Published: 5 November 2021

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany; andreas.dengel@dfki.de (A.D.); sheraz.ahmed@dfki.de (S.A.)
2 Department of Computer Science, TU Kaiserslautern, 67663 Kaiserslautern, Germany
* Correspondence: shoaib_ahmed.siddiqui@dfki.de (S.A.S.); dominique.mercier@dfki.de (D.M.)
† These authors contributed equally to this work.
Abstract: With the rise in the employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time series data has been neglected, with only a handful of methods tested, due to the poor intelligibility of time series. We approach the problem of interpretability in a novel way by proposing TSInsight, where we attach an auto-encoder to the classifier with a sparsity-inducing norm on its output and fine-tune it based on the gradients from the classifier and a reconstruction penalty. TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant, i.e., it serves as a feature attribution method that boosts interpretability. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with nine other commonly used attribution methods on eight different time series datasets to validate its efficacy. The evaluation results show that TSInsight naturally achieves output space contraction; therefore, it is an effective tool for the interpretability of deep time series models.
Keywords: interpretability; time series analysis; feature attribution; deep learning; auto-encoder; feature importance; demystification
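As a concrete illustration of the instance-based explanations mentioned in the abstract, the following is a minimal sketch, assuming the auto-encoder has already been fine-tuned as described and that suppressed (near-zero) outputs mark irrelevant time steps; the function name and readout are hypothetical and not taken from the paper.

```python
# Hedged sketch: the output of a TSInsight-style fine-tuned auto-encoder for a
# single sample is read as an instance-based attribution. Near-zero time steps
# are assumed to be irrelevant to the classifier's prediction.
import torch

@torch.no_grad()
def instance_attribution(autoencoder: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """`x` has shape (1, channels, time); returns per-time-step importance as
    the magnitude of the auto-encoder output (hypothetical readout)."""
    autoencoder.eval()
    x_hat = autoencoder(x)      # reconstruction with irrelevant parts suppressed
    return x_hat.abs()          # larger magnitude = feature preserved, hence important
```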
1. Introduction

Deep learning models have been at the forefront of technology in a range of different domains including image classification [1], object detection [2], speech recognition [3], text recognition [4] and image captioning [5]. These models are particularly effective in automatically discovering useful features. However, this automatic feature extraction comes at the cost of a lack of transparency of the system. Therefore, despite these advances, their employment in safety-critical domains, such as finance [6], self-driving cars [7] and medicine [8], is limited due to the lack of interpretability of the decisions made by the network.
Numerous efforts have been made for the interpretation of these black-box models. These efforts can mainly be classified into two separate directions. The first set of strategies focuses on making the network itself interpretable by trading off some performance. The second set of strategies focuses on explaining a pretrained model, i.e., they try to infer the reason for a particular prediction. However, all of these methods have been developed and tested primarily for visual modalities, which are directly intelligible to humans. Transferring methodologies developed for visual modalities to time series data is difficult due to the non-intuitive nature of time series. Therefore, only a handful of methods have focused on explaining time series models in the past [9,10].
We approach the attribution problem in a novel way by attaching an auto-encoder on top of the classifier. The auto-encoder is fine-tuned based on the gradients from the classifier. Rather than optimizing the auto-encoder to reconstruct the whole input, we fine-tune it with a sparsity-inducing norm on its output alongside a reconstruction penalty, so that it preserves only the features that are important for the classifier's prediction and suppresses the irrelevant ones.
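To make this fine-tuning objective concrete, the following PyTorch sketch shows one way such a setup could be trained, under assumptions not spelled out in this excerpt: the pretrained classifier is kept frozen, the reconstruction penalty is a mean-squared error, the sparsity-inducing norm is an L1 penalty on the auto-encoder output, and the loss weights (`recon_weight`, `sparsity_weight`) are illustrative placeholders rather than values from the paper.

```python
# Minimal PyTorch sketch of a TSInsight-style fine-tuning step (assumptions as
# noted above; not the authors' reference implementation).
import torch
import torch.nn.functional as F

def finetune_step(autoencoder, classifier, x, y, optimizer,
                  recon_weight=1.0, sparsity_weight=1e-3):
    """One gradient step on the auto-encoder; `recon_weight` and
    `sparsity_weight` are illustrative, not values from the paper."""
    classifier.eval()
    for p in classifier.parameters():       # classifier is assumed to stay fixed
        p.requires_grad_(False)

    x_hat = autoencoder(x)                  # reconstruction with irrelevant parts suppressed
    logits = classifier(x_hat)              # gradients from the classifier flow into the auto-encoder

    cls_loss = F.cross_entropy(logits, y)   # keep the prediction on the suppressed input intact
    recon_loss = F.mse_loss(x_hat, x)       # reconstruction penalty (assumed to be MSE)
    sparsity = x_hat.abs().mean()           # sparsity-inducing (L1-style) norm on the output

    loss = cls_loss + recon_weight * recon_loss + sparsity_weight * sparsity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                        # optimizer holds only autoencoder.parameters()
    return loss.item()
```

The point the sketch captures is that only the auto-encoder's parameters are updated, while the gradients that shape it flow back through the fixed classifier, so the reconstruction is pushed toward whatever the classifier actually relies on.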