Citation: Wang, D.; Liu, B.; Zhou, Y.; Liu, M.; Liu, P.; Yao, R. Separate Syntax and Semantics: Part-of-Speech-Guided Transformer for Image Captioning. Appl. Sci. 2022, 12, 11875. https://doi.org/10.3390/app122311875

Academic Editor: Silvia Liberata Ullo

Received: 17 October 2022
Accepted: 16 November 2022
Published: 22 November 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Separate Syntax and Semantics: Part-of-Speech-Guided
Transformer for Image Captioning
Dong Wang 1,2, Bing Liu 1,2,*, Yong Zhou 1,2, Mingming Liu 1,2, Peng Liu 3 and Rui Yao 1,2

1 School of Computer Science and Technology, China University of Mining and Technology, Xuzhou 221116, China
2 Engineering Research Center of Mine Digitization, Ministry of Education of the People's Republic of China, Xuzhou 221116, China
3 National Joint Engineering Laboratory of Internet Applied Technology of Mines, Xuzhou 221008, China
* Correspondence: liubing@cumt.edu.cn
Abstract: Transformer-based image captioning models have recently achieved remarkable performance by using new fully attentive paradigms. However, existing models generally follow the conventional language model of predicting the next word conditioned on the visual features and the partially generated words. They treat the predictions of visual and nonvisual words equally and usually tend to produce generic captions. To address these issues, we propose a novel part-of-speech-guided transformer (PoS-Transformer) framework for image captioning. Specifically, a self-attention part-of-speech prediction network is first presented to model the part-of-speech tag sequences for the corresponding image captions. Then, different attention mechanisms are constructed for the decoder to guide caption generation using the part-of-speech information. Benefiting from the part-of-speech guiding mechanisms, the proposed framework not only adaptively adjusts the weights between visual features and language signals for word prediction, but also facilitates the generation of more fine-grained and grounded captions. Finally, a multitask learning strategy is introduced to train the whole PoS-Transformer network in an end-to-end manner. Our model was trained and tested on the MSCOCO and Flickr30k datasets, achieving CIDEr scores of 1.299 and 0.612, respectively. The qualitative experimental results indicate that the captions generated by our method conform better to grammatical rules.
Keywords: image captioning; transformer; part of speech; multitask learning
1. Introduction
Image captioning is the task of generating a grammatically correct description of an image, and it has been attracting much attention in the field of image understanding [1–8].
With the success of deep learning, image captioning models have recently made great progress. A typical deep neural network for image captioning generally follows an encoder–decoder paradigm, where a deep convolutional neural network (CNN) is introduced as the encoder to learn visual representations from the input image, while a recurrent neural network (RNN) serves as the decoder to recursively predict each word. Recently, transformer-based image captioning models have shown superior performance to the conventional CNN–RNN models by using fully attentive paradigms. Despite great advances in model architectures, existing models still have two limitations: (i) they treat the predictions of visual and nonvisual words equally at each time step, leading to ambiguous inference; (ii) they tend to generate generic, minimal sentences of the kind that are common in the training datasets. Consequently, how to organize phrases and words to accurately express the semantics of an image remains a challenging task.
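The decoding step described above can be sketched as a toy example. The snippet below is an illustrative simplification, not the paper's model: the function `predict_step` and the fixed `gate` values are our own hypothetical constructs, standing in for the learned, part-of-speech-conditioned weighting between visual features and the language signal that the PoS-Transformer acquires end-to-end.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_step(visual_feats, lang_state, pos_is_visual):
    """One toy decoding step: attend over image regions, then blend
    the attended visual context with the language-model state.
    A real model would learn this gate; here it is hand-set to show
    the idea of weighting visual vs. nonvisual word predictions."""
    # Attend over region features, using the language state as query.
    scores = visual_feats @ lang_state          # (num_regions,)
    attended = softmax(scores) @ visual_feats   # (feat_dim,)
    # Visual words (e.g., nouns) lean on the image; nonvisual words
    # (e.g., articles, prepositions) lean on the language signal.
    gate = 0.8 if pos_is_visual else 0.2
    return gate * attended + (1.0 - gate) * lang_state

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))   # 5 image regions, 8-dim features
state = rng.normal(size=8)        # current language-model state
ctx = predict_step(feats, state, pos_is_visual=True)
print(ctx.shape)  # (8,)
```

The point of the sketch is only the adaptive trade-off: the same inputs yield different contexts depending on whether the next word is predicted to be visual, which is the behavior the part-of-speech guiding mechanisms are designed to provide.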
The neuroscience research on language processing has demonstrated that the brain contains partially separate systems for processing syntax and semantics [9,10], which