Citation: Zhu, X.; Liu, B.; Zhu, C.; Ding, Z.; Yao, L. Approximate Reasoning for Large-Scale ABox in OWL DL Based on Neural-Symbolic Learning. Mathematics 2023, 11, 495. https://doi.org/10.3390/math11030495
Academic Editor: Ioannis E. Livieris
Received: 22 November 2022
Revised: 3 January 2023
Accepted: 15 January 2023
Published: 17 January 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Approximate Reasoning for Large-Scale ABox in OWL DL
Based on Neural-Symbolic Learning
Xixi Zhu, Bin Liu, Cheng Zhu *, Zhaoyun Ding and Li Yao
Science and Technology on Information Systems and Engineering Laboratory, National University of Defense
Technology, Changsha 410073, China
* Correspondence: zhucheng@nudt.edu.cn
Abstract: An ontology knowledge base (KB) can be divided into two parts, the TBox and the ABox, where the former models schema-level knowledge within the domain and the latter is a set of assertions or facts about instances. ABox reasoning is the process of discovering implicit knowledge in the ABox based on the existing KB, and it is of great value in KB applications. ABox reasoning is influenced by both the complexity of the TBox and the scale of the ABox. Traditional logic-based ontology reasoning methods are usually designed to be provably sound and complete, but they suffer from long runtimes and do not scale well to ontology KBs represented in OWL DL (Description Logic). In some application scenarios, the soundness and completeness of the reasoning results are not the key constraints, and it is acceptable to sacrifice them to some extent in exchange for improved reasoning efficiency. Based on this view, we propose an approximate reasoning method for large-scale ABoxes in OWL DL KBs, named the ChunfyReasoner (CFR). The CFR introduces neural-symbolic learning into ABox reasoning and integrates the advantages of symbolic systems and neural networks (NNs). By training an NN model, the CFR approximately compiles the logical deduction process of ontology reasoning, which greatly improves the reasoning speed while maintaining high reasoning quality. In this paper, we describe the basic idea, framework, and construction process of the CFR in detail, and we conduct experiments on two open-source ontologies built on OWL DL. The experimental results verify the effectiveness of our method and show that the CFR can support large-scale ABox reasoning over OWL DL KBs.
Keywords: neural-symbolic learning; approximate reasoning; large-scale ABox reasoning; neural network; ontology reasoning; OWL DL
MSC: 68T01; 68T07; 68T20; 68T27; 68T30; 68T99
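As a concrete illustration of the TBox/ABox split and of what ABox reasoning computes, the following is a minimal Python sketch assuming a toy fragment with only atomic subclass axioms (far below the expressivity of OWL DL); all class and individual names are invented for illustration.

```python
# Minimal sketch (toy fragment, not full OWL DL): a TBox of atomic
# subclass axioms, an ABox of class assertions, and a naive
# forward-chaining loop that materializes the entailed assertions.

# TBox: schema-level axioms about concepts.
TBOX = {
    ("Professor", "FacultyMember"),   # Professor ⊑ FacultyMember
    ("FacultyMember", "Person"),      # FacultyMember ⊑ Person
}

# ABox: facts about individuals.
ABOX = {
    ("alice", "Professor"),           # Professor(alice)
    ("bob", "Person"),                # Person(bob)
}

def materialize(tbox, abox):
    """Return the ABox closed under the subclass axioms (ABox materialization)."""
    inferred = set(abox)
    changed = True
    while changed:
        changed = False
        for individual, cls in list(inferred):
            for sub, sup in tbox:
                if cls == sub and (individual, sup) not in inferred:
                    inferred.add((individual, sup))
                    changed = True
    return inferred

# Prints the implicit assertions FacultyMember(alice) and Person(alice),
# which follow from the KB but were never asserted.
print(sorted(materialize(TBOX, ABOX) - ABOX))
```

A sound and complete DL reasoner computes the same kind of closure, but over the far richer axiom forms of OWL DL, which is where the long runtimes mentioned above come from.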
1. Introduction
Ontology is an important form of modeling existing human knowledge through symbols, and it is the core of the Semantic Web technology framework [1]. OWL DL (Description Logic) is an ontology language recommended by the W3C that is widely used in practical applications [2,3] (unless otherwise specified, ontology in the following paragraphs refers to an ontology built on OWL DL). An ontology knowledge base (KB) can be divided into two parts, the TBox and the ABox: the former is the schema-level knowledge of the KB, which describes the recognized concepts and roles, while the latter is a collection of assertions or factual statements about instances in the domain. Reasoning is the core technology in KB-based systems, and the process of deriving new implicit knowledge in the ABox based on the existing KB is called ABox reasoning [4] (also known as ABox materialization), which has important value in KB applications. Recently, with the explosive growth of data and the improvement of knowledge extraction technology, ontology KBs containing large-scale ABoxes have become more common than before, and reasoning tasks for them have attracted increasing attention [5–8].
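Approximate reasoning of the kind proposed in this paper sidesteps that cost by learning to imitate the reasoner. The sketch below shows one generic way such a neural-symbolic setup can look, assuming a hypothetical PyTorch scorer over individual and class embeddings trained on (individual, class) pairs labeled by a symbolic reasoner; it is only an illustration of the general pattern, not the CFR architecture described later in the paper, and all names and dimensions are placeholders.

```python
# Generic neural-symbolic sketch (assumed, not the CFR architecture):
# a symbolic reasoner labels (individual, class) pairs as entailed or
# not, and a small neural scorer is trained to imitate those labels so
# that new pairs can be checked without running logical deduction.
import torch
import torch.nn as nn

NUM_INDIVIDUALS, NUM_CLASSES, DIM = 1000, 50, 32

class MembershipScorer(nn.Module):
    """Scores how likely it is that `individual` belongs to `cls`."""
    def __init__(self):
        super().__init__()
        self.ind_emb = nn.Embedding(NUM_INDIVIDUALS, DIM)
        self.cls_emb = nn.Embedding(NUM_CLASSES, DIM)
        self.mlp = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.ReLU(), nn.Linear(DIM, 1))

    def forward(self, individual, cls):
        x = torch.cat([self.ind_emb(individual), self.cls_emb(cls)], dim=-1)
        return self.mlp(x).squeeze(-1)  # logit of "this assertion is entailed"

# Placeholder training data: in a real setup the labels would come from
# a sound-and-complete reasoner run on a sample of the KB.
individuals = torch.randint(0, NUM_INDIVIDUALS, (256,))
classes = torch.randint(0, NUM_CLASSES, (256,))
labels = torch.randint(0, 2, (256,)).float()   # 1 = entailed, 0 = not entailed

model = MembershipScorer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(100):   # training approximately "compiles" the deduction step
    optimizer.zero_grad()
    loss = loss_fn(model(individuals, classes), labels)
    loss.backward()
    optimizer.step()

# At reasoning time, one batched forward pass replaces logical deduction.
with torch.no_grad():
    entailed = torch.sigmoid(model(individuals[:5], classes[:5])) > 0.5
```

Once trained, a single batched forward pass scores new candidate assertions, which is what makes this style of approach attractive for large-scale ABoxes, at the price of giving up guaranteed soundness and completeness.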