Citation: Li, J.; Wang, Z.; Qi, B.; Zhang, J.; Yang, H. MEMe: A Mutually Enhanced Modeling Method for Efficient and Effective Human Pose Estimation. Sensors 2022, 22, 632. https://doi.org/10.3390/s22020632

Academic Editor: Antonio Fernández-Caballero

Received: 20 December 2021
Accepted: 11 January 2022
Published: 14 January 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Sensors | Article
MEMe: A Mutually Enhanced Modeling Method for Efficient and Effective Human Pose Estimation

Jie Li 1,2,3, Zhixing Wang 1,2,3,4, Bo Qi 1,2,3,*, Jianlin Zhang 1,2,3 and Hu Yang 1,2

1 Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China; lijie163@mails.ucas.edu.cn (J.L.); 202111012048@std.uestc.edu.cn (Z.W.); jlin@ioe.ac.cn (J.Z.); yangh@ioe.ac.cn (H.Y.)
2 Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
3 University of Chinese Academy of Sciences, Beijing 100039, China
4 School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 610209, China
* Correspondence: qibo@ioe.ac.cn
Abstract: In this paper, a mutually enhanced modeling method (MEMe) is presented for human pose estimation, which focuses on improving the performance of lightweight models while keeping complexity low. To obtain higher accuracy, traditional models are greatly expanded in scale, which makes them difficult to deploy. Lightweight models, in contrast, show a large performance gap compared with these heavy models, so a way to close this gap is urgently needed. We therefore propose MEMe to reconstruct a lightweight baseline model, EffBase (transferred intuitively from EfficientDet), into the efficient and effective pose (EEffPose) net, which contains three mutually enhanced modules: the Enhanced EffNet (EEffNet) backbone, the total fusion neck (TFNeck), and the final attention head (FAHead). Extensive experiments on the COCO and MPII benchmarks show that our MEMe-based models reach state-of-the-art performance with limited parameters. Specifically, under the same conditions, our EEffPose-P0 with a 256 × 192 input uses only 8.98 M parameters to achieve 75.4 AP on the COCO val set, outperforming HRNet-W48 with only 14% of its parameters.
Keywords: human pose estimation; deep learning; mutually enhanced; efficient and effective; modeling method; extended convolutions; feature fusion; attention mechanisms; CNN
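To make the backbone-neck-head structure named in the abstract concrete, the sketch below shows how such a three-module pipeline could be composed in PyTorch. Only the module names (EEffNet, TFNeck, FAHead) and the overall data flow come from the abstract; every layer choice inside the placeholder modules is an assumption for illustration, not the authors' actual design, which is defined later in the paper.

```python
# Illustrative sketch only: the real EEffNet, TFNeck, and FAHead are defined in the
# paper; the placeholder layers below are assumptions showing the backbone -> neck
# -> head data flow that produces per-keypoint heatmaps.
import torch
import torch.nn as nn

class EEffPoseSketch(nn.Module):
    def __init__(self, num_keypoints: int = 17, channels: int = 64):
        super().__init__()
        # Backbone stand-in (EEffNet in the paper): encodes the image at 1/4 resolution.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Neck stand-in (TFNeck in the paper): fuses/refines backbone features.
        self.neck = nn.Conv2d(channels, channels, 1)
        # Head stand-in (FAHead in the paper): predicts one heatmap per keypoint.
        self.head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, num_keypoints, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)      # encode the input image
        fused = self.neck(feats)      # fuse features
        return self.head(fused)       # decode keypoint heatmaps

if __name__ == "__main__":
    model = EEffPoseSketch()
    out = model(torch.randn(1, 3, 256, 192))  # input size quoted in the abstract
    print(out.shape)  # torch.Size([1, 17, 64, 48])
```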
1. Introduction

Since 2016, deep learning-based methods [1,2] have become a prime focus of research in 2D human pose estimation, greatly promoting the development of action recognition [3] and other human-centered applications [4,5]. These deep learning models can be categorized into large models with high performance and small models with low accuracy, which leads to the performance gap shown in Figure 1. To fill this gap between complicated and lightweight models, this paper explores a general modeling method that lets lightweight models "cross the gap", i.e., perform better than the large ones.
Traditionally, to overcome the challenges of scale variance and keypoint occlusion, various classic large models have been proposed, such as stacked hourglass [6], CPN [7], SimpleBaseline [8], and HRNet [9]. Stacked hourglass consists of multiple stacked hourglass-shaped modules with intermediate supervision; it was the first multi-scale representation network architecture in human pose estimation, but it is complex and inefficient. To address this problem, CPN cascades only two pyramid nets with ResNet as the backbone, a global net followed by a refine net, to provide better auxiliary supervision. SimpleBaseline directly proposes a single-stage hourglass, in which features are downsampled by the ResNet to encode information and then upsampled by deconvolution to decode the final output. As for HRNet, it is designed to maintain high-resolution representations through multi-scale parallel branches, which makes it more efficient and effective than its predecessors, but it still has a large number of parameters.
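As a concrete illustration of the single-stage encode-decode design just described for SimpleBaseline [8], the sketch below builds a ResNet encoder that downsamples the image and a stack of deconvolution (transposed convolution) layers that upsamples the features into per-keypoint heatmaps. The use of torchvision's resnet50, the three 256-channel deconvolution stages, and the 17-keypoint output are assumptions chosen to mirror the common configuration, not an exact reproduction of the cited work.

```python
# Minimal sketch of a SimpleBaseline-style pose network: ResNet encoder followed by a
# deconvolution decoder that outputs one heatmap per keypoint. Layer sizes are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SimpleBaselineSketch(nn.Module):
    def __init__(self, num_keypoints: int = 17):
        super().__init__()
        backbone = resnet50(weights=None)
        # Keep everything up to the last residual stage (drop avgpool and fc);
        # the output stride is 32, so a 256x192 input yields 8x6 feature maps.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        # Three x2 deconvolution stages upsample back to 1/4 resolution (64x48).
        deconv_layers = []
        in_ch = 2048
        for _ in range(3):
            deconv_layers += [
                nn.ConvTranspose2d(in_ch, 256, kernel_size=4, stride=2, padding=1),
                nn.BatchNorm2d(256),
                nn.ReLU(inplace=True),
            ]
            in_ch = 256
        self.decoder = nn.Sequential(*deconv_layers)
        # 1x1 convolution maps the decoded features to per-keypoint heatmaps.
        self.final = nn.Conv2d(256, num_keypoints, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.final(self.decoder(self.encoder(x)))

if __name__ == "__main__":
    net = SimpleBaselineSketch()
    heatmaps = net(torch.randn(1, 3, 256, 192))
    print(heatmaps.shape)  # torch.Size([1, 17, 64, 48])
```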