Citation: Li, J.; Fan, S. GRNN: Graph-Retraining Neural Network for Semi-Supervised Node Classification. Algorithms 2023, 16, 126. https://doi.org/10.3390/a16030126

Academic Editors: Krzysztof Ejsmont, Aamer Bilal Asghar, Yong Wang and Rodolfo Haber

Received: 13 January 2023; Revised: 14 February 2023; Accepted: 16 February 2023; Published: 22 February 2023

Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

Algorithms | Article
GRNN: Graph-Retraining Neural Network for
Semi-Supervised Node Classification
Jianhe Li and Suohai Fan *
School of Information Science and Technology, Jinan University, Guangzhou 510632, China
* Correspondence: tfsh@jnu.edu.cn
Abstract: In recent years, graph neural networks (GNNs) have played an important role in graph representation learning and have achieved excellent results in semi-supervised classification. However, these GNNs often neglect global smoothing over the graph, because global smoothing can conflict with node classification: a cluster of nodes in the graph typically contains a small number of nodes from other classes. To address this issue, we propose the graph-retraining neural network (GRNN), a model that smooths over the graph by alternating between a learning procedure and an inference procedure, following the key idea of the expectation-maximization algorithm. Moreover, the global smoothing error is combined with the cross-entropy error to form the loss function of GRNN, which effectively resolves this conflict. Experiments show that GRNN achieves high accuracy on the standard citation-network datasets Cora, Citeseer, and PubMed, demonstrating its effectiveness in semi-supervised node classification.
Keywords: graph neural network; graph-retraining neural network; semi-supervised node classification
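The combined objective described in the abstract can be written schematically. The exact definitions appear in the body of the paper, so the weighting coefficient \lambda and the trace form of the smoothing term below are illustrative assumptions only, with Z the predicted node representations and L the graph Laplacian:

\[
\mathcal{L}_{\mathrm{GRNN}} = \mathcal{L}_{\mathrm{CE}} + \lambda\, \mathcal{L}_{\mathrm{GS}}, \qquad \mathcal{L}_{\mathrm{GS}} = \operatorname{tr}\!\big(Z^{\top} L Z\big),
\]

where \mathcal{L}_{\mathrm{CE}} is the cross-entropy error on labeled nodes and \mathcal{L}_{\mathrm{GS}} penalizes predictions that vary sharply across edges.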
1. Introduction
Convolutional neural networks (CNNs) achieve outstanding performance on a wide range of tasks based on Euclidean data, including computer vision [1] and recommender systems [2,3]. However, an increasing amount of application data takes the form of graph-structured, non-Euclidean data, such as literature-citation networks and knowledge graphs. In this setting, graph convolutional networks (GCNs) commonly outperform other models and have been successfully applied to social analysis [4,5], citation networks [6–8], transport forecasting [9,10], and other promising fields. For example, in a literature-citation network, articles are usually represented as nodes and citation relationships as edges between nodes. When faced with the challenge of semi-supervised node classification on such non-Euclidean graph data, classical GCNs first extract and aggregate article features using the graph Fourier transform and the convolution theorem, and then classify unlabeled articles based on the graph-convolution output features.
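For reference, the spectral view mentioned above is standard background from the GCN literature [11] rather than notation introduced in this paper: filtering a node signal x by g_\theta in the graph Fourier domain, and the first-order layer-wise propagation rule it motivates, read

\[
g_\theta \star x = U\, g_\theta(\Lambda)\, U^{\top} x, \qquad H^{(l+1)} = \sigma\!\big(\tilde{D}^{-1/2} \tilde{A}\, \tilde{D}^{-1/2} H^{(l)} W^{(l)}\big),
\]

where U and \Lambda are the eigenvectors and eigenvalues of the normalized graph Laplacian, \tilde{A} = A + I_N adds self-loops, \tilde{D} is the degree matrix of \tilde{A}, and \sigma is a nonlinearity.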
Graph neural networks (GNNs) are capable of extracting latent information from the network structure. Graph convolutional networks (GCNs), in which nodes learn their latent representations by aggregating features from neighboring nodes, have been shown to be effective and practical. For example, GCN [11] weights aggregated neighbor nodes based on the degrees of a node and its first-order neighbors; GAT [12] injects the graph structure into the mechanism by performing masked attention, so that the weights of aggregated neighbor nodes become learnable parameters; GraphSAGE [13] uniformly samples a fixed number of neighbor nodes to aggregate information rather than using all neighbors; and MPNN [14] uses message-passing functions to unify almost all variants of spatial graph neural networks.
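To make the degree-based aggregation of GCN [11] concrete, here is a minimal NumPy sketch of a single propagation step (an illustration under our own naming, not the paper's implementation):

import numpy as np

def gcn_layer(A, H, W):
    # One GCN propagation step: relu(D~^{-1/2} A~ D~^{-1/2} H W).
    A_hat = A + np.eye(A.shape[0])                 # A~: adjacency with self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # D~^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)         # ReLU activation

# Toy example: a 4-node path graph, 3-dimensional input features,
# 2 output channels; each row of the result is a node representation.
A = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
print(gcn_layer(A, H, W).shape)  # -> (4, 2)

In this framing, GAT [12] would replace the fixed normalized weights in A_norm with learned attention coefficients, and GraphSAGE [13] would compute A_norm @ H over a sampled subset of neighbors instead of all of them.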
However, a critical limitation is that node labels are predicted independently, based only on each node's own representation and those of its neighbors. Each layer of a graph convolutional network is a special form of Laplacian smoothing [6]. In other words, they attach importance to local smoothing among neighboring nodes while neglecting the global smoothing of the graph.