Citation: Wan, R.; Tian, C.; Zhang, W.; Deng, W.; Yang, F. A Multivariate Temporal Convolutional Attention Network for Time-Series Forecasting. Electronics 2022, 11, 1516. https://doi.org/10.3390/electronics11101516
Academic Editors: Phivos Mylonas, Katia Lida Kermanidis and Manolis Maragoudakis
Received: 21 April 2022
Accepted: 29 April 2022
Published: 10 May 2022
Article
A Multivariate Temporal Convolutional Attention Network for
Time-Series Forecasting
Renzhuo Wan 1,2,†, Chengde Tian 1,†, Wei Zhang 1, Wendi Deng 1 and Fan Yang 3,*
1 School of Electronic and Electrical Engineering, Wuhan Textile University, Wuhan 430200, China; wanrz@wtu.edu.cn (R.W.); 2015363074@mail.wtu.edu.cn (C.T.); wzhang@wtu.edu.cn (W.Z.); wddeng@wtu.edu.cn (W.D.)
2 Hubei Key Laboratory of Digital Textile Equipment, Wuhan Textile University, Wuhan 430200, China
3 School of Mathematical and Physical Sciences, Wuhan Textile University, Wuhan 430200, China
* Correspondence: yangfan@wtu.edu.cn
† These authors contributed equally to this work.
Abstract: Multivariate time-series forecasting is one of the crucial and persistent challenges in time-series forecasting tasks. Because such data combine correlations across variables with strong volatility, they impose highly nonlinear temporal characteristics on the forecasting model. In this paper, a new multivariate time-series forecasting model, the multivariate temporal convolutional attention network (MTCAN), based on a self-attention mechanism, is proposed. MTCAN builds on the Convolutional Neural Network (CNN), using 1D dilated convolution as the basic unit to construct asymmetric blocks; feature extraction is then performed by the self-attention mechanism to obtain the final predictions. The input and output lengths of the network can be set flexibly. The method is validated on three different multivariate time-series datasets, and the reliability and accuracy of its predictions are compared with Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Convolutional Long Short-Term Memory (ConvLSTM), and the Temporal Convolutional Network (TCN). The results show that the proposed model achieves significantly better prediction accuracy and generalization.
Keywords: multivariate time-series forecasting; self-attention mechanism; deep learning; neural network
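As a concrete illustration of the block structure summarized in the abstract, the following minimal PyTorch sketch pairs a 1D dilated convolution with a single self-attention step and a residual connection. It is an illustrative approximation under stated assumptions: the class name `DilatedAttentionBlock`, the channel sizes, and the single-head attention are our own choices for demonstration, not the exact MTCAN implementation.

```python
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    """Illustrative sketch: 1D dilated convolution followed by self-attention.

    The class name, channel sizes, and single attention head are assumptions
    made for demonstration; they do not reproduce the exact MTCAN layout.
    """
    def __init__(self, channels: int, kernel_size: int = 3, dilation: int = 1):
        super().__init__()
        # Pad so that, after trimming, the output keeps the input length.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation, padding=self.pad)
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=1,
                                          batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); Conv1d expects (batch, channels, time).
        h = self.conv(x.transpose(1, 2))[..., :x.size(1)]  # trim extra padding
        h = h.transpose(1, 2)
        out, _ = self.attn(h, h, h)   # self-attention over the time axis
        return out + x                # residual connection


# Toy usage: 8 variables, 64 time steps, batch of 2.
block = DilatedAttentionBlock(channels=8, dilation=2)
y = block(torch.randn(2, 64, 8))
print(y.shape)  # torch.Size([2, 64, 8])
```

Stacking such blocks with growing dilation factors is the usual way a dilated-convolution stack widens its receptive field while the attention step re-weights features across time.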
1. Introduction
A multivariate time series is an important data object: a series of observations of multiple variables recorded in chronological order. Multivariate time series are used in more and more fields, such as the environment [1,2], finance [3,4], transportation [5–7], healthcare [8], and energy [9,10]. In these fields, time-series prediction is used to monitor critical data and to avoid unforeseen situations that cause economic losses. For multivariate time-series prediction tasks, early solutions mainly relied on recurrent networks, but recurrent networks suffer from vanishing- and exploding-gradient problems, which leave the long-term dependence problem of RNNs [11,12] unsolved. The sequential structure, on the one hand, makes efficient parallel computation difficult (the computation of the current state depends not only on the current input but also on the previous state) and, on the other hand, makes RNN models, including variants such as LSTM [13] and GRU [14], behave more like a Markov decision process [15], so that global information is difficult to extract. In addition, CNN models [16] have started to be applied to sequence modeling. For multivariate time-series [17] problems, however, these models also have difficulty capturing the mapping relationships between multiple variables and adapting to complex data features.
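The parallelism argument above can be made concrete with a small sketch (an illustrative comparison only; the layer sizes are ours, not from the paper): an RNN-style cell must update its hidden state one step at a time, whereas a 1D convolution processes every time step of the sequence in a single call.

```python
import torch
import torch.nn as nn

T, D, H = 64, 8, 16          # time steps, input variables, hidden size
x = torch.randn(1, T, D)     # one multivariate series

# RNN-style update: each step waits for the previous hidden state,
# so this loop cannot be parallelized across time.
cell = nn.GRUCell(D, H)
h = torch.zeros(1, H)
for t in range(T):
    h = cell(x[:, t, :], h)  # h_t = f(x_t, h_{t-1})

# Convolutional alternative: one call covers all time steps at once,
# which is why CNN-based sequence models parallelize well.
conv = nn.Conv1d(D, H, kernel_size=3, padding=2, dilation=2)
y = conv(x.transpose(1, 2))[..., :T]   # (1, H, T), all steps computed jointly
```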
Traditional CNNs are generally considered less suitable for modeling time-series
problems, which is mainly due to the limitation of convolutional kernel size and thus