Citation: Jia, J.; Wang, Z.; Xu, L.; Dai, J.; Gu, M.; Huang, J. An Interference-Resistant and Low-Consumption Lip Recognition Method. Electronics 2022, 11, 3066. https://doi.org/10.3390/electronics11193066

Academic Editor: Silvia Liberata Ullo

Received: 25 August 2022
Accepted: 20 September 2022
Published: 26 September 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
An Interference-Resistant and Low-Consumption Lip
Recognition Method
Junwei Jia 1, Zhilu Wang 2, Lianghui Xu 1, Jiajia Dai 1, Mingyi Gu 1 and Jing Huang 1,*

1 School of Information and Electronic Engineering, Zhejiang Gongshang University (ZJSU), Hangzhou 310018, China
2 College of Mechanical and Electrical Engineering, Hohai University (HHU), Changzhou 213022, China
* Correspondence: jhuang@mail.zjgsu.edu.cn
Abstract: Lip movements contain essential linguistic information and are an important medium for studying the content of a dialogue. At present, many studies address how to improve the accuracy of lip recognition models, but few examine the robustness and generalization of these models under various disturbances. Our experiments show that current state-of-the-art lip recognition models drop significantly in accuracy when disturbed and are particularly sensitive to adversarial examples. This paper substantially alleviates this problem by using Mixup training. Taking the model subjected to adversarial attacks generated by FGSM as an example, the model in this paper achieves 85.0% and 40.2% accuracy on the English dataset LRW and the Mandarin dataset LRW-1000, respectively, improving the correct recognition rates by 9.8% and 8.3% over current advanced lip recognition models. This demonstrates the positive impact of Mixup training on the robustness and generalization of lip recognition models. In addition, the performance of a lip recognition classification model depends heavily on its number of training parameters, which increases the computational cost. The InvNet-18 network in this paper reduces GPU resource consumption and training time while improving model accuracy. Compared with the standard ResNet-18 network used in mainstream lip recognition models, the InvNet-18 network in this paper consumes less than one third of the GPU resources and has 32% fewer parameters. After detailed analysis and comparison in various aspects, it is demonstrated that the model in this paper can effectively improve anti-interference ability and reduce training resource consumption, while its accuracy remains comparable with current state-of-the-art results.
Keywords: lip recognition; visual speech recognition; data enhancement; inverse convolutional neural network
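The two techniques named in the abstract, Mixup training and the FGSM attack, can be illustrated with a minimal NumPy sketch. This is not the paper's training code: the function names, the Beta parameter `alpha=0.2`, and the perturbation budget `eps=0.03` are illustrative assumptions, and the paper applies these ideas to deep networks rather than raw arrays.

```python
import numpy as np

# Illustrative sketch only: the paper trains deep networks (ResNet-18 /
# InvNet-18); these helpers just show the two techniques in isolation.

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: blend two samples and their one-hot labels with a Beta-drawn weight."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def fgsm(x, grad_wrt_x, eps=0.03):
    """FGSM: one-step adversarial perturbation along the sign of the loss gradient."""
    return x + eps * np.sign(grad_wrt_x)
```

Note that a mixed label remains a valid probability distribution (its entries still sum to 1), which is what lets Mixup smooth decision boundaries between classes, while FGSM shifts every input dimension by exactly ±eps in the direction that increases the loss.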
1. Introduction
Lip recognition, also called visual speech recognition (VSR), analyzes the dynamic changes of the lips with the aim of recognizing the speech content in a video. The task involves natural language processing, image classification, speech processing, and pattern recognition. In recent years, lip recognition has found many real-life applications, such as liveness detection [1] and improved hearing aids [2], with broad application prospects.

A lip recognition model consists of two steps: the first is to extract the visual features of the lips, and the second is classification. The extracted visual features should contain sufficiently representative information and be robust [3]. The traditional extraction method is manual annotation. Such practices only ensure that the downstream task can be classified and recognized, without considering the effectiveness of the acquired features; the recognition accuracy is therefore low. Although there are corresponding methods [4,5] to address this problem, they rely on manual design, and the design process is complex. Because visual features obtained through manual annotation do not meet human expectations, researchers have begun to seek more effective visual features. Deep learning techniques