Citation: Yan, F.; Silamu, W.; Li, Y. Deep Modular Bilinear Attention Network for Visual Question Answering. Sensors 2022, 22, 1045. https://doi.org/10.3390/s22031045
Academic Editor: Anastasios Doulamis
Received: 12 November 2021; Accepted: 26 January 2022; Published: 28 January 2022
Article
Deep Modular Bilinear Attention Network for Visual Question Answering
Feng Yan 1, Wushouer Silamu 1,2,* and Yanbing Li 1
1 School of Information Science and Engineering, Xinjiang University, Urumqi 830046, China; yanfeng@stu.xju.edu.cn (F.Y.); liyb@xju.edu.cn (Y.L.)
2 Laboratory of Multi-Lingual Information Technology, Xinjiang University, Urumqi 830046, China
* Correspondence: wushour@xju.edu.cn
Abstract: VQA (Visual Question Answering) is a multimodal task: given an image and a natural-language question about it, the model must determine the correct answer. The attention mechanism has become a de facto component of almost all VQA models. Most recent VQA approaches use the dot product to calculate intra-modality and inter-modality attention between visual and language features. In this paper, we use the BAN (Bilinear Attention Network) method to calculate attention. We propose a deep multimodal bilinear attention network (DMBA-NET) framework with two basic attention units, BAN-GA and BAN-SA, to construct inter-modality and intra-modality relations. The two basic attention units are the core of the whole network framework and can be cascaded in depth. In addition, we encode the question with the dynamic word vectors of BERT (Bidirectional Encoder Representations from Transformers) and then process the question features further with self-attention. We then sum these features with the features obtained from BAN-GA and BAN-SA before the final classification. Without using the Visual Genome dataset for augmentation, our model reaches an accuracy of 70.85% on the test-std split of VQA 2.0.
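To make the data flow described above concrete, the following is a minimal sketch, assuming PyTorch; the module and parameter names (DMBASketch, hidden, n_answers, a single MultiheadAttention layer, a single bilinear weight W) are illustrative assumptions and not the paper's actual BAN-GA/BAN-SA blocks. It shows a self-attention branch over BERT word features, a bilinear question-guided attention branch over image regions, and the summation of the two branches before classification.

# Minimal sketch of the pipeline described in the abstract, assuming PyTorch.
# Names such as DMBASketch, hidden, and n_answers are illustrative assumptions.
import torch
import torch.nn as nn

class DMBASketch(nn.Module):
    def __init__(self, q_dim=768, v_dim=2048, hidden=512, n_answers=3129):
        super().__init__()
        self.q_proj = nn.Linear(q_dim, hidden)            # project BERT word vectors
        self.v_proj = nn.Linear(v_dim, hidden)            # project image-region features
        # stand-in for the intra-modality branch: self-attention over question words
        self.q_self_att = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)
        # stand-in for the inter-modality branch: bilinear word-region attention scores
        self.W = nn.Parameter(torch.empty(hidden, hidden))
        nn.init.xavier_uniform_(self.W)
        self.classifier = nn.Linear(hidden, n_answers)

    def forward(self, q_feats, v_feats):
        # q_feats: (B, Nq, q_dim) BERT word vectors; v_feats: (B, Nv, v_dim) region features
        q = self.q_proj(q_feats)
        v = self.v_proj(v_feats)
        q_sa, _ = self.q_self_att(q, q, q)                       # question self-attention branch
        att = (q @ self.W @ v.transpose(1, 2)).softmax(dim=-1)   # (B, Nq, Nv) bilinear scores
        v_att = att @ v                                          # question-guided image features
        fused = (q_sa + v_att).mean(dim=1)                       # sum the branches, pool over words
        return self.classifier(fused)

logits = DMBASketch()(torch.randn(2, 14, 768), torch.randn(2, 36, 2048))
print(logits.shape)  # torch.Size([2, 3129])

In the full model, the attention units are bilinear attention blocks that can be cascaded in depth; the single-matrix score above only illustrates the overall two-branch structure.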
Keywords: attention mechanism; visual question answering; multimodal; bilinear attention network
1. Introduction
The goal of VQA (Visual Question Answering) [1] is to build a question answering system with human-like intelligence, one that can recognize the categories, spatial relationships, and other properties of objects in a given image. VQA has broad application scenarios and is of far-reaching significance for the development of artificial intelligence (see Figure 1).
Our model can be applied to an assistive robot for blind users. The surrounding images and audio can be captured by the robot's hardware sensors and used as input to our model, which can effectively help blind users perceive nearby objects.
The most challenging problem in VQA is establishing the association between each region in the image and the words in the question; a VQA model must be able to align image and text semantically. VQA models not only have to understand the content of a picture, but also have to find the answer corresponding to the question, which poses a greater challenge to the model and requires more intelligence from it.
MCB [2], MFB [3], and Mutan [4] capture high-level interactions between question and image features with fusion methods. However, their scope of use is limited, and they are not easy to apply to other VQA models.
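To illustrate the general idea behind these fusion-based methods, the following is a minimal sketch, assuming PyTorch, of low-rank bilinear pooling in the spirit of MFB; the class name LowRankBilinearFusion and the chosen dimensions (q_dim, v_dim, out_dim, k) are illustrative assumptions and do not reproduce the exact formulations of MCB, MFB, or Mutan.

# Hedged sketch of low-rank bilinear fusion between a pooled question vector and
# a pooled image vector; dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankBilinearFusion(nn.Module):
    def __init__(self, q_dim=768, v_dim=2048, out_dim=1000, k=5):
        super().__init__()
        self.k = k
        self.q_proj = nn.Linear(q_dim, out_dim * k)   # low-rank projection of the question
        self.v_proj = nn.Linear(v_dim, out_dim * k)   # low-rank projection of the image

    def forward(self, q, v):
        # q: (B, q_dim) pooled question feature; v: (B, v_dim) pooled image feature
        joint = self.q_proj(q) * self.v_proj(v)              # element-wise interaction, (B, out_dim * k)
        joint = joint.view(q.size(0), -1, self.k).sum(-1)    # sum-pool every k values -> (B, out_dim)
        # signed square root and L2 normalization, commonly used to stabilize bilinear features
        joint = torch.sign(joint) * torch.sqrt(joint.abs() + 1e-8)
        return F.normalize(joint, dim=-1)

fused = LowRankBilinearFusion()(torch.randn(4, 768), torch.randn(4, 2048))
print(fused.shape)  # torch.Size([4, 1000])

Because such fusion operates on pooled global vectors, it captures high-order interactions but is harder to reuse as a drop-in component of other VQA architectures, which motivates the attention-based designs discussed next.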
Attention mechanisms [5–8] are very important for deep learning and have been applied successfully to the VQA task. Models based on the attention mechanism focus on the key information. Anderson et al. [9] proposed a bottom-up and top-down attention mechanism and won the VQA Challenge 2017. They used a concatenation-based attention mechanism to obtain question-guided image attention. However, the model ignores