Citation: Li, Z.; Zhao, J.; Zhou, X.; Wei, S.; Li, P.; Shuang, F. RTSDM: A Real-Time Semantic Dense Mapping System for UAVs. Machines 2022, 10, 285. https://doi.org/10.3390/machines10040285
Academic Editor: Dario Richiedei
Received: 15 February 2022
Accepted: 11 April 2022
Published: 18 April 2022
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
RTSDM: A Real-Time Semantic Dense Mapping System
for UAVs
Zhiteng Li , Jiannan Zhao, Xiang Zhou, Shengxian Wei, Pei Li and Feng Shuang *
Guangxi Key Laboratory of Intelligent Control and Maintenance of Power Equipment, School of Electrical
Engineering, Guangxi University, Nanning 530004, China; 1912392015@st.gxu.edu.cn (Z.L.);
jzhao@gxu.edu.cn (J.Z.); 1812401013@st.gxu.edu.cn (X.Z.); 1912392031@st.gxu.edu.cn (S.W.);
1912301027@st.gxu.edu.cn (P.L.)
* Correspondence: fshuang@gxu.edu.cn
Abstract: Intelligent drones or flying robots play a significant role in serving our society in applications such as rescue, inspection, and agriculture. Understanding the surrounding scene is an essential capability for further autonomous tasks. Intuitively, knowing the UAV's self-location and creating a semantic 3D map are significant for fully autonomous tasks. However, integrating simultaneous localization, 3D reconstruction, and semantic segmentation is a huge challenge for power-limited systems such as UAVs. To address this, we propose a real-time semantic mapping system that helps a power-limited UAV system understand its location and surroundings. The proposed approach includes a modified visual SLAM with the direct method to accelerate the computationally intensive feature matching process, and a real-time semantic segmentation module at the back end. The semantic module runs a lightweight network, BiSeNetV2, and performs segmentation only at key frames from the front-end SLAM task. Considering fast navigation and on-board memory resources, we provide a real-time dense-map-building module to generate an OctoMap with the segmented semantic map. The proposed system is verified in real-time experiments on a UAV platform with a Jetson TX2 as the computation unit. A frame rate of around 12 Hz, with a semantic segmentation accuracy of around 89%, demonstrates that our proposed system is computationally efficient while providing sufficient information for fully autonomous tasks such as rescue and inspection.
Keywords: semantic mapping; visual SLAM; UAV; CNN; OctoMap
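To make the keyframe-triggered architecture in the abstract concrete, the following Python sketch mimics its data flow: a SLAM front end emits keyframes, segmentation runs only on those keyframes, and labelled points are fused into a voxel map. All names here (`SemanticMapper`, `Keyframe`, `segment_fn`, the dict-based voxel grid standing in for a real OctoMap, the simplified depth back-projection) are illustrative assumptions, not the authors' implementation or the OctoMap API.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """Hypothetical keyframe as produced by a SLAM front end."""
    frame_id: int
    pose: tuple                              # (x, y, z) camera position, illustrative only
    labels: list = field(default_factory=list)  # per-pixel class IDs, filled at segmentation time

class SemanticMapper:
    """Sketch of the back end: segment keyframes only, then fuse into a voxel map."""

    def __init__(self, segment_fn):
        self.segment = segment_fn   # stand-in for BiSeNetV2 inference on one keyframe
        self.voxels = {}            # (i, j, k) -> class ID; stand-in for an OctoMap

    def on_keyframe(self, kf, depths):
        # Segmentation runs here, not on every camera frame -- this is the
        # keyframe-only scheduling the paper describes for power-limited hardware.
        kf.labels = self.segment(kf)
        # Toy "back-projection": place each labelled measurement at a voxel
        # derived from the keyframe pose and its depth (real systems use the
        # full camera model and ray insertion into the octree).
        for label, depth in zip(kf.labels, depths):
            voxel = (round(kf.pose[0]), round(kf.pose[1]), round(kf.pose[2] + depth))
            self.voxels[voxel] = label
        return len(self.voxels)

mapper = SemanticMapper(lambda kf: [1, 2])          # dummy segmenter: two labelled pixels
kf = Keyframe(frame_id=0, pose=(0.0, 0.0, 0.0))
occupied = mapper.on_keyframe(kf, depths=[1.0, 2.0])
```

The design point this sketch illustrates is the decoupling: segmentation cost is paid per keyframe rather than per frame, which is what keeps the whole pipeline feasible on an embedded unit such as the Jetson TX2.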
1. Introduction
Fully autonomous UAVs (unmanned aerial vehicles) need to understand their environments in detail. In many cases, connecting the semantic information with the 3D position information of the surroundings is critical for high-level decision-making. For example, if a rescue drone can understand its surroundings regarding self-location and accessible fire escapes, it will make more reasonable action plans to help survivors [1]. In precision agriculture, drones also need to understand the surrounding environment in real time; therefore, real-time semantic mapping is significant and worth exploring in this type of drone application [2].
Most UAVs use GPS (global positioning system) signals to locate themselves, but GPS is often inaccessible due to signal blockage in enclosed environments such as caves and buildings. In these cases, SLAM (simultaneous localization and mapping) [3] technology is advantageous, as it provides self-location and spatial information about the environment while relying only on on-board sensors. SLAM based on LiDAR (light detection and ranging) is a traditional approach in enclosed environments [4–6]. However, the high cost and additional weight of LiDAR make it unaffordable for small drones. Thus, vision-based SLAM is a more competitive option because it provides plenty of information to understand the field of view and has a compact size and reasonable price [7]. With a simple camera