Citation: Tan, Z. Fixed-Time Convergent Gradient Neural Network for Solving Online Sylvester Equation. Mathematics 2022, 10, 3090. https://doi.org/10.3390/math10173090
Academic Editor: Asier Ibeas
Received: 29 July 2022
Accepted: 25 August 2022
Published: 28 August 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Fixed-Time Convergent Gradient Neural Network for Solving
Online Sylvester Equation
Zhiguo Tan
School of Information Engineering, Guangzhou Panyu Polytechnic, Guangzhou 511483, China;
tanzhiguo136@163.com or tanzg@gzpyp.edu.cn
Abstract: This paper aims at finding a fixed-time solution to the Sylvester equation by using a gradient neural network (GNN). To reach this goal, a modified sign-bi-power (msbp) function is presented and applied to a linear GNN as an activation function. Accordingly, a fixed-time convergent GNN (FTC-GNN) model is developed for solving the Sylvester equation. The upper bound of the convergence time of this FTC-GNN model can be determined in advance from the design parameters, regardless of the initial conditions. This point is corroborated by a detailed theoretical analysis, in which the convergence time is estimated using Lyapunov stability theory. Two examples are then simulated to demonstrate the validity of the theoretical analysis, as well as the superior convergence performance of the presented FTC-GNN model compared with existing GNN models.
Keywords: gradient neural network; Sylvester equation; activation function; fixed-time convergence
MSC: 15A24; 68Q32; 68T05
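
As a rough illustration of the design route summarized in the abstract, the following minimal NumPy sketch integrates a conventional GNN for the Sylvester equation A X + X B = C. It is a sketch under illustrative assumptions (the test matrices, gain gamma, step size dt, and step count are made up), not the FTC-GNN model itself: the state X follows the negative gradient of the energy ||A X + X B - C||_F^2 / 2, and the activation argument only marks where a nonlinearity such as the msbp function would be applied elementwise.

import numpy as np

def gnn_sylvester(A, B, C, gamma=10.0, dt=1e-3, steps=20000, activation=None):
    # Gradient neural network for A X + X B = C: the state X evolves as
    # dX/dt = -gamma * act(A^T E + E B^T), where E = A X + X B - C is the
    # residual and A^T E + E B^T is the gradient of ||E||_F^2 / 2.
    act = activation if activation is not None else (lambda M: M)  # linear by default
    X = np.zeros_like(C)                      # arbitrary initial state
    for _ in range(steps):                    # forward-Euler discretization
        E = A @ X + X @ B - C                 # residual E(t)
        X = X - dt * gamma * act(A.T @ E + E @ B.T)
    return X

# Illustrative test: build C from a known X_true and check the residual.
rng = np.random.default_rng(0)
A = np.diag([2.0, 3.0, 4.0])
B = np.diag([1.0, 2.0, 3.0])
X_true = rng.standard_normal((3, 3))
C = A @ X_true + X_true @ B
X = gnn_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))      # residual norm should be small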
1. Introduction
As one of the most prominent and widely encountered classes of linear matrix equations, the Sylvester equation has attracted considerable attention from researchers over the past few decades, owing to its crucial role in matrix theory and its diverse applications, such as image fusion [1], dimensionality reduction [2], linear descriptor systems [3], machine learning [4], the stabilization of PDE (partial differential equation)-ODE (ordinary differential equation) cascade systems [5], and so on. Consequently, researchers have devoted a great deal of time and energy to developing numerical algorithms that can rapidly find the solution to the Sylvester equation. Gradient-based iterative algorithms [6,7], the Bartels–Stewart algorithm [8], and its extensions [9,10] are typical examples. Although the concrete forms of these algorithms differ, they share one common characteristic: the solving process is carried out in a serial manner. Moreover, O(n³) arithmetic operations are generally required to execute most of these numerical algorithms [11,12]. It is thus predictable that vast amounts of time will be consumed when these serial computational schemes are applied to large-scale matrix-related problems (including the Sylvester equation), and they may also be inappropriate for real-time problem solving.
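
For reference, this serial route is what general-purpose libraries implement; for example, SciPy's scipy.linalg.solve_sylvester uses a Bartels–Stewart-type direct method. The short Python snippet below is an illustrative baseline with made-up data, not part of the original paper.

import numpy as np
from scipy.linalg import solve_sylvester

# Illustrative data; a unique solution exists because no eigenvalue of A
# is the negative of an eigenvalue of B.
A = np.array([[4.0, 1.0], [0.0, 3.0]])
B = np.array([[2.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])

# Direct (serial) solution of A X + X B = C via the Bartels-Stewart approach,
# which costs on the order of n^3 arithmetic operations.
X = solve_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))  # residual norm near machine precision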
To work around these issues and to improve computational efficiency, parallel processing schemes based on recurrent neural networks (RNNs) are preferred and heralded as a powerful alternative. In addition to their parallel-processing nature, RNN models can be implemented conveniently with circuit components, thanks to the rapid development of field-programmable gate array and integrated circuit technology [13–15]. As a result of these two outstanding features, a growing number of RNN models aimed at solving the Sylvester equation and related problems (e.g., matrix pseudoinversion) have been put forward and discussed in recent years [16–25]. ZNN (zeroing