Citation: Li, Y.; Shi, Q.; Song, J.; Yang, F. Human Pose Estimation via Dynamic Information Transfer. Electronics 2023, 12, 695. https://doi.org/10.3390/electronics12030695
Academic Editor: Silvia Liberata Ullo
Received: 10 January 2023
Revised: 27 January 2023
Accepted: 28 January 2023
Published: 30 January 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Human Pose Estimation via Dynamic Information Transfer
Yihang Li 1,2, Qingxuan Shi 1,2,*, Jingya Song 1,2 and Fang Yang 1,2
1 School of Cyber Security and Computer, Hebei University, Baoding 071002, China
2 Hebei Machine Vision Engineering Research Center, Hebei University, Baoding 071002, China
* Correspondence: qingxuanshi@hbu.edu.cn
Abstract: This paper presents a multi-task learning framework, called the dynamic information transfer network (DITN). We mainly focused on improving pose estimation by exploiting the spatial relationships of adjacent joints. To benefit from explicit structural knowledge, we constructed two branches with a shared backbone to localize the human joints and bones, respectively. Since related tasks share a high-level representation, we leveraged the bone information to refine the joint localization via dynamic information transfer. In detail, we extracted dynamic parameters from the bone branch and used them to make the network learn constraint relationships via dynamic convolution. Moreover, attention blocks were added after the information transfer to balance the information across different granularity levels and induce the network to focus on informative regions. The experimental results demonstrated the effectiveness of the DITN, which achieved 90.8% PCKh@0.5 on the MPII dataset and 75.0% AP on the COCO dataset. The qualitative results on the MPII and COCO datasets showed that the DITN achieved better performance, especially on heavily occluded or easily confused joints.
Keywords: computer vision; pose estimation; multi-task learning; dynamic information transfer
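The dynamic information transfer described in the abstract, where parameters extracted from the bone branch drive a dynamic convolution over the joint branch, can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the feature shapes, the global-average-pool descriptor, and the parameter head `w_gen` are all assumptions introduced here for clarity.

```python
import numpy as np

def dynamic_transfer(joint_feat, bone_feat, w_gen):
    """Refine joint-branch features with a 1x1 convolution whose weights
    are predicted on the fly from bone-branch features (dynamic convolution).

    joint_feat: (C, H, W) joint-branch feature map
    bone_feat:  (C, H, W) bone-branch feature map
    w_gen:      (C*C, C) hypothetical parameter head mapping a pooled
                bone descriptor to a dynamic 1x1 kernel
    """
    C, H, W = joint_feat.shape
    # Pool the bone features into a global descriptor vector.
    desc = bone_feat.mean(axis=(1, 2))        # shape (C,)
    # Predict sample-specific 1x1 convolution weights from the descriptor.
    kernel = (w_gen @ desc).reshape(C, C)     # shape (C_out, C_in)
    # Apply the dynamic 1x1 convolution: channel mixing at every pixel,
    # so bone information modulates how joint features are combined.
    return np.einsum('oc,chw->ohw', kernel, joint_feat)

rng = np.random.default_rng(0)
joint = rng.standard_normal((8, 4, 4))
bone = rng.standard_normal((8, 4, 4))
w_gen = rng.standard_normal((64, 8))
refined = dynamic_transfer(joint, bone, w_gen)
print(refined.shape)  # (8, 4, 4)
```

In a full model, `w_gen` would be learned jointly with both branches, and the refined features would feed the joint-heatmap head; the attention blocks mentioned in the abstract would follow this transfer step.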
1. Introduction
Two-dimensional human pose estimation (HPE) is the task of localizing human joints or parts from monocular images [1,2] or videos [3–5]. It has become a significant basis for human action recognition [6], human–computer interaction [7], human parsing [8], animation [9], etc. Classical methods [10–13] are mainly based on the pictorial structure (PS) framework. They usually adopt vertices indicating joints and edges encoding the connections of adjacent joints to construct skeleton graph models. The spatial relationship of joints, such as the angle and distance, is captured to predict the localization of body joints. Deep learning methods [14–21] extract spatial contextual information directly from data. These methods perform well in visual representation; however, they lack the ability to explicitly learn the spatial relationship between joints. Without utilizing a holistic skeleton structure and intrinsic prior knowledge, it is difficult for them to tackle challenges including uncommon body postures and occlusions.
Recent studies [22,23] suggest that spatial dependency can provide contextual cues to help localize body joints in crowded and occluded scenes. Tang et al. [24] proposed a hierarchical compositional framework that exploits the relationships among human joints. Nie et al. [25,26] leveraged bone information from human parsing to assist human pose estimation in a multi-task learning manner. These methods prove the effectiveness of spatial representation learning. The representation in the form of human bones provides more holistic structure information for the precise localization of human joints. For human pose estimation, it is therefore important to exploit spatial information at different granularity levels and to promote information interaction between them.
Inspired by advances in multi-task learning for computer vision tasks [27,28], we present a simple and effective framework, called the dynamic information transfer network (DITN). With implicit constraints from multi-task learning, the localization accuracy of