Citation: Cui, X.; Yu, M.; Wu, L.; Wu, S. A 6D Pose Estimation for Robotic Bin-Picking Using Point-Pair Features with Curvature (Cur-PPF). Sensors 2022, 22, 1805. https://doi.org/10.3390/s22051805
Academic Editors: Yuansong Qiao and Seamus Gordon
Received: 20 January 2022
Accepted: 22 February 2022
Published: 24 February 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
A 6D Pose Estimation for Robotic Bin-Picking Using Point-Pair
Features with Curvature (Cur-PPF)
Xining Cui, Menghui Yu, Linqigao Wu and Shiqian Wu *
Institute of Robotics and Intelligent Systems, School of Information Science and Engineering, Wuhan University
of Science and Technology, Wuhan 430081, China; cuixining@wust.edu.cn (X.C.); yumenghui_hui@163.com (M.Y.);
wulin_a@126.com (L.W.)
* Correspondence: shiqian.wu@wust.edu.cn; Tel.: +86-136-2711-4410
Abstract: Pose estimation is a particularly important step in robotic bin-picking: its purpose is to obtain the 6D pose (3D position and 3D orientation) of the target object. In real bin-picking scenarios, noise, overlap, and occlusion reduce the accuracy of pose estimation and lead to failures in robotic grasping. In this paper, a new point-pair feature (PPF) descriptor is proposed, in which the curvature information of point pairs is introduced to strengthen the feature description and improve the point cloud matching rate. The proposed method also introduces an effective point cloud preprocessing step, which extracts candidate targets in complex scenarios and thus improves the overall computational efficiency. By incorporating the curvature distribution, a weighted voting scheme is presented to further improve the accuracy of pose estimation. Experimental results on a public dataset and in real scenarios show that the proposed method is both considerably more accurate and more efficient than the existing PPF method. The proposed method can be used for robotic bin-picking in real industrial scenarios.
Keywords: pose estimation; robotic bin-picking; candidate targets; curvature information; weighted voting
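Although the full Cur-PPF construction is detailed later in the paper, the idea stated in the abstract can be sketched from the classic point-pair feature of Drost et al. (pair distance plus three angles), augmented with the curvature of each point. The following NumPy sketch is illustrative only: the function name `cur_ppf` and the way the two curvature values are appended are assumptions, not the authors' exact formulation.

```python
import numpy as np

def angle(a, b):
    """Angle in radians between two vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def cur_ppf(p1, n1, c1, p2, n2, c2):
    """Classic 4D point-pair feature (distance and three angles),
    extended with the two points' curvatures c1, c2 as an
    illustrative stand-in for the proposed Cur-PPF descriptor."""
    d = p2 - p1
    f = (np.linalg.norm(d),  # distance between the pair
         angle(n1, d),       # angle between n1 and the pair direction
         angle(n2, d),       # angle between n2 and the pair direction
         angle(n1, n2))      # angle between the two normals
    return f + (c1, c2)      # hypothetical curvature extension

# example pair: two points on a plane with identical normals
p1, n1 = np.array([0., 0., 0.]), np.array([0., 0., 1.])
p2, n2 = np.array([1., 0., 0.]), np.array([0., 0., 1.])
f = cur_ppf(p1, n1, 0.1, p2, n2, 0.2)
```

In matching, such a 6D descriptor is quantized and used as a hash key, so point pairs with similar geometry but different local curvature no longer collide, which is the intuition behind the improved matching rate claimed in the abstract.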
1. Introduction
Bin-picking is a common scenario in industry, in which a robotic arm picks up objects placed in disorder. Varying degrees of overlap and occlusion interfere with the detection and perception of the objects, causing the robotic grasping task to fail [1]. Bin-picking is therefore challenging and has attracted many scholars at home and abroad [2–4]. The key to bin-picking is to calculate the pose of the best picking point of the target object [5], namely, 6D pose estimation. Current research on pose estimation can be divided into correspondence-based, template-based, voting-based, and deep learning-based methods [6].
The method of finding the relationship between the input data and a known point cloud model is called the correspondence method. According to the type of input data, it can be divided into 2D–3D correspondence and 3D–3D correspondence [7]. The 2D–3D correspondence method is often used for objects with rich textures. The point cloud model is projected from multiple angles, and the relationship between the template image and the RGB image of the target object at a single angle is found through feature points. Then, the Perspective-n-Point (PnP) algorithm is used to recover the pose from the current perspective. For example, Hu et al. [8] introduced a segmentation-driven network framework for 6D pose estimation. This method predicts the local pose from the 2D key-point positions of objects in the scene, thereby generating a set of reliable 3D-to-2D correspondences, and then uses the PnP algorithm to calculate the accurate pose of each object. This method remains robust in the presence of overlap among objects, but it is not suitable for untextured objects. In the 3D–3D correspondence method, the acquired depth image is converted into a 3D point cloud, and then the relationship between the two point clouds is solved
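The PnP step described above recovers the camera-relative pose by minimizing the reprojection error of the 3D-to-2D correspondences. A minimal NumPy sketch of that objective follows; the intrinsic matrix `K`, the toy model points, and the helper names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def project(points_3d, R, t, K):
    """Project 3D model points into the image with pose (R, t) and intrinsics K."""
    cam = points_3d @ R.T + t        # transform into the camera frame
    uv = cam @ K.T                   # apply the pinhole intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective divide

def reprojection_error(points_3d, points_2d, R, t, K):
    """Mean L2 reprojection error -- the quantity a PnP solver minimizes."""
    return np.linalg.norm(project(points_3d, R, t, K) - points_2d, axis=1).mean()

# toy setup: synthesize 2D observations from a known ground-truth pose
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R_true = np.eye(3)
t_true = np.array([0.0, 0.0, 2.0])
model = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.0],
                  [0.0, 0.1, 0.0],
                  [0.1, 0.1, 0.05]])
obs = project(model, R_true, t_true, K)

# the true pose attains zero error; a perturbed translation does not
e_true = reprojection_error(model, obs, R_true, t_true, K)
e_bad = reprojection_error(model, obs, R_true, t_true + np.array([0.05, 0.0, 0.0]), K)
```

Production systems solve this minimization directly (e.g., OpenCV's `cv2.solvePnP`); the sketch only makes explicit what the recovered pose must satisfy.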