Citation: Sun, Z.; Li, X. Named Entity Recognition Model Based on Feature Fusion. Information 2023, 14, 133. https://doi.org/10.3390/info14020133
Academic Editors: Krzysztof Ejsmont, Aamer Bilal Asghar, Yong Wang and Rodolfo Haber
Received: 7 October 2022
Revised: 20 December 2022
Accepted: 20 December 2022
Published: 17 February 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Named Entity Recognition Model Based on Feature Fusion
Zhen Sun * and Xinfu Li
School of Cyberspace Security and Computer Science, Hebei University, Baoding 071000, China
* Correspondence: 20191366@stumail.hbu.edu.cn; Tel.: +86‑178‑0037‑6608
Abstract: Named entity recognition can deeply explore semantic features and enhance the vector representation of text data. This paper proposes a named entity recognition method based on multi-head attention to address the problem of fuzzy lexical boundaries in Chinese named entity recognition. Firstly, Word2vec is used to extract word vectors, HMM is used to extract boundary vectors, and ALBERT is used to extract character vectors; a feedforward-attention mechanism fuses the three vectors, and the fused vector representation is then fed to BiLSTM to extract features. Next, multi-head attention is used to mine the potential word information in the text features. Finally, the text label classification results are output after conditional random field screening. Verification on the WeiboNER, MSRA, and CLUENER2020 datasets shows that the proposed algorithm effectively improves the performance of named entity recognition.
Keywords: named entity recognition; ALBERT; vector fusion; multi-head attention
1. Introduction
Named Entity Recognition (NER) is an essential task in natural language processing [1]. Combining computer science and linguistics, NER studies various theories and methods for effective communication between humans and computers using natural language, aiming to extract specific entities from unstructured text [2], for example, names of people, places, and organizations [3].
In machine learning methods, NER is usually treated as a sequence annotation task [4].
The neural network model usually includes three parts: the embedding layer, the encoding layer, and the output layer [5]. The tagging model is learned from a large-scale corpus, and word information is annotated by combining word position information with the vector representation. In the embedding layer, pre-trained models are mainly used to learn distributed representations of text; BERT [6] uses MLM and NSP pre-training tasks and achieves good results.
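As a concrete illustration of this embedding step (not the exact configuration used in this paper), the following Python sketch obtains character-level contextual vectors from a pre-trained ALBERT model through the Hugging Face transformers library; the voidful/albert_chinese_base checkpoint and the example sentence are assumptions made for illustration.

# Minimal sketch: character-level vectors from a pre-trained Chinese ALBERT model.
# Assumptions: the Hugging Face "transformers" package is installed and the
# "voidful/albert_chinese_base" checkpoint is used (this checkpoint ships a
# BERT-style Chinese vocabulary, so it is loaded with BertTokenizerFast).
import torch
from transformers import AlbertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("voidful/albert_chinese_base")
model = AlbertModel.from_pretrained("voidful/albert_chinese_base")
model.eval()

sentence = "河北大学位于保定"  # "Hebei University is located in Baoding"
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per character token (plus [CLS] and [SEP]).
char_vectors = outputs.last_hidden_state  # shape: (1, seq_len, hidden_size)
print(char_vectors.shape)

In a pipeline such as the one described in the abstract, these character vectors would then be fused with Word2vec word vectors and HMM boundary vectors before encoding.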
The second part is the encoding layer, which extracts sequence features and captures the context dependencies of the input text. Encoding with CNNs offers high parallel computing efficiency but weak feature extraction on long input sequences. BiLSTM [7] was therefore proposed to perform feature encoding and learn context dependencies.
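A rough sketch of such an encoding layer is shown below: a bidirectional LSTM in PyTorch that maps a sequence of fused embedding vectors to context-aware features. The input and hidden dimensions are placeholder assumptions, not values reported in this paper.

# Minimal BiLSTM encoder sketch (PyTorch); all sizes are illustrative only.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, input_dim: int = 768, hidden_dim: int = 256):
        super().__init__()
        # bidirectional=True concatenates forward and backward hidden states,
        # so the output dimension is 2 * hidden_dim.
        self.lstm = nn.LSTM(input_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) fused character/word/boundary vectors
        out, _ = self.lstm(x)
        return out  # (batch, seq_len, 2 * hidden_dim)

encoder = BiLSTMEncoder()
fused = torch.randn(2, 20, 768)   # dummy batch of fused embeddings
features = encoder(fused)
print(features.shape)             # torch.Size([2, 20, 512])

Because the forward and backward passes see the left and right context respectively, each output position carries information from the whole sentence, which is what allows the later attention and CRF stages to exploit context dependencies.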
The third part is the output layer, which takes the encoding produced by the second part, generates the optimal tag sequence, and yields the tag recognition result of the NER task. The softmax function is widely used in multi-class classification tasks but ignores label interdependence in sequence labeling tasks. Therefore, CRF is mainly used to learn the label dependencies of named entities [8] and has become the first choice for the decoding layer of NER tasks.
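To make the decoding step concrete, the sketch below uses the third-party pytorch-crf package (an implementation assumption; the paper does not specify one) to turn encoder features into per-tag emission scores, compute the CRF training loss, and Viterbi-decode the best tag sequence.

# CRF decoding sketch using the pytorch-crf package (pip install pytorch-crf).
# The tag set, dimensions, and random inputs are illustrative assumptions.
import torch
import torch.nn as nn
from torchcrf import CRF

num_tags = 7                      # e.g. BIO tags for PER/LOC/ORG plus "O"
batch, seq_len, feat_dim = 2, 20, 512

# Linear projection from BiLSTM features to per-tag emission scores.
hidden2tag = nn.Linear(feat_dim, num_tags)
crf = CRF(num_tags, batch_first=True)

features = torch.randn(batch, seq_len, feat_dim)   # stand-in encoder output
emissions = hidden2tag(features)
tags = torch.randint(0, num_tags, (batch, seq_len))

# Training objective: negative log-likelihood of the gold tag sequence.
loss = -crf(emissions, tags)
# Inference: Viterbi decoding of the best tag sequence per sentence.
best_paths = crf.decode(emissions)                 # list of tag-id lists
print(loss.item(), best_paths[0][:5])

Unlike a per-token softmax, the CRF scores whole tag sequences, so transitions such as "I-PER cannot follow B-LOC" can be learned rather than violated independently at each position.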
Compared with English named entities, Chinese named entities have no obvious word boundaries [9], which significantly affects the accuracy of Chinese named entity recognition. Moreover, Chinese characters have different meanings in different scenarios, but previous research on Chinese named entity recognition only transferred the named entity