Article
TimeREISE: Time Series Randomized Evolving Input
Sample Explanation
Dominique Mercier 1,2,*, Andreas Dengel 1,2 and Sheraz Ahmed 1

1 German Research Center for Artificial Intelligence (DFKI), 67663 Kaiserslautern, Germany; andreas.dengel@dfki.de (A.D.); sheraz.ahmed@dfki.de (S.A.)
2 Department of Computer Science, TU Kaiserslautern, 67663 Kaiserslautern, Germany
* Correspondence: dominique.mercier@dfki.de
Abstract: Deep neural networks are among the most successful classifiers across different domains. However, their use in safety-critical areas is limited due to their lack of interpretability. The research field of explainable artificial intelligence addresses this problem; however, most interpretability methods are designed for the imaging modality. This paper introduces TimeREISE, a model-agnostic attribution method that shows success in the context of time series classification. The method applies perturbations to the input and takes into account characteristics of the attribution map such as its granularity and density. Compared to existing methods, the approach demonstrates superior performance with respect to several well-established metrics: TimeREISE shows impressive results in the deletion and insertion test, Infidelity, and Sensitivity. Concerning the continuity of an explanation, it shows superior performance while preserving the correctness of the attribution map. Additional sanity checks confirm the correctness of the approach and its dependency on the model parameters. TimeREISE scales well with an increasing number of channels and timesteps, applies to any time series classification network, and does not rely on prior data knowledge. It is suited for any use case independent of dataset characteristics such as sequence length, number of channels, and number of classes.
Keywords: deep learning; time series; interpretability; explainability; attribution; convolutional neural network; artificial intelligence; classification
1. Introduction
The success of deep neural networks stems from the superior performance and scalability they offer compared to traditional machine learning methods [1]. However, during the last few decades, the need for explainable decisions has become more significant. In critical infrastructures, it is inconceivable to use approaches without any justification for the results [2]. In the medical sector, the financial domain, and other safety-critical areas, explainable computations are required by law [3]. Furthermore, ethical constraints limit the use of artificial intelligence even further [4,5]. Accordingly, a large research domain has evolved around explainable artificial intelligence (XAI). One of its main goals is to propose techniques that provide interpretable results and thereby enable the broader use of deep neural networks.
For several years, researchers have developed network modifications and model-agnostic methods to provide such results [6]. The majority of these methods originate from the imaging modality, as its concepts are easier for humans to interpret [7]. Model-agnostic methods in particular have shown great success. One prominent category of model-agnostic approaches is attribution methods [8], and the number of available methods in this category increases every year. One advantage of these methods is their loose coupling with the network. In addition, they do not limit the processing capabilities of the network, although some attribution methods impose minor restrictions on the network architecture.
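To illustrate this loose coupling, the following sketch outlines how a generic perturbation-based attribution method in the spirit of RISE, on which TimeREISE builds, can be wrapped around an arbitrary time series classifier using only its forward pass. The function name, mask-generation scheme, and the granularity and density parameters are simplified assumptions for illustration and do not reproduce the exact algorithm introduced later in this paper.

```python
# Illustrative sketch of a RISE-style, perturbation-based attribution for time
# series. Names and the masking scheme are hypothetical simplifications, not
# the authors' TimeREISE implementation.
import numpy as np

def perturbation_attribution(predict, x, target, n_masks=500, granularity=8,
                             density=0.5, rng=None):
    """Black-box attribution for a (channels, timesteps) time series.

    predict: callable mapping a batch (n, channels, timesteps) to class
             probabilities (n, n_classes); only forward passes are needed.
    target:  index of the class to explain.
    """
    rng = np.random.default_rng(rng)
    channels, timesteps = x.shape

    # Coarse binary masks control the granularity; the keep-probability
    # (density) controls how much of the signal survives each perturbation.
    coarse = rng.random((n_masks, channels, granularity)) < density

    # Upsample the coarse masks to the full sequence length by repetition,
    # yielding contiguous perturbed regions.
    repeat = int(np.ceil(timesteps / granularity))
    masks = np.repeat(coarse, repeat, axis=-1)[:, :, :timesteps].astype(float)

    # Query the classifier on the perturbed inputs (zero baseline here).
    scores = predict(masks * x[None])[:, target]           # (n_masks,)

    # Importance of each (channel, timestep): average target score of the
    # masks that kept it, normalized by how often it was kept.
    attribution = np.tensordot(scores, masks, axes=(0, 0))
    attribution /= masks.sum(axis=0) + 1e-8
    return attribution                                      # (channels, timesteps)
```

The intuition is that a point receives a high attribution value when the perturbations that preserve it tend to keep the target-class probability high, which requires no access to gradients or internals of the network.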