Citation: Wang, Y.; Hussain, B.; Yue, C.P. VLP Landmark and SLAM-Assisted Automatic Map Calibration for Robot Navigation with Semantic Information. Robotics 2022, 11, 84. https://doi.org/10.3390/robotics11040084

Academic Editors: Shuai Li, Dechao Chen, Mohammed Aquil Mirza, Vasilios N. Katsikis, Dunhui Xiao and Predrag Stanimirović

Received: 24 July 2022; Accepted: 19 August 2022; Published: 21 August 2022

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article

VLP Landmark and SLAM-Assisted Automatic Map Calibration for Robot Navigation with Semantic Information

Yiru Wang 1,2, Babar Hussain 1 and Chik Patrick Yue 1,2,*

1 HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Shenzhen 518055, China
2 The State Key Laboratory of Advanced Displays and Optoelectronics Technologies, ECE Department, The Hong Kong University of Science and Technology, Hong Kong SAR, China
* Correspondence: eepatrick@ust.hk
Abstract: With the rapid development of robotics and in-depth research on automatic navigation technology, mobile robots have been applied in a variety of fields. Map construction is one of the core research focuses of mobile robot development. In this paper, we propose an autonomous map calibration method using visible light positioning (VLP) landmarks and Simultaneous Localization and Mapping (SLAM). A layout map of the environment to be perceived is calibrated by a robot tracking at least two landmarks mounted in the venue. At the same time, the robot's position on the occupancy grid map generated by SLAM is recorded. The two sequences of positions are synchronized by their time stamps, and the occupancy grid map is saved as a sensor map. A map transformation is then performed to align the orientation of the two maps and to calibrate the scale of the layout map to agree with that of the sensor map. After the calibration, the semantic information on the layout map is retained and its accuracy is improved. Experiments are performed in the Robot Operating System (ROS) to verify the proposed map calibration method. We evaluate the performance on two layout maps: one with high accuracy and the other with only rough accuracy in its structures and scale. The results show that the navigation accuracy is improved by 24.6 cm on the high-accuracy map and by 22.6 cm on the rough-accuracy map.
Keywords: map calibration; visible light positioning (VLP); robot localization; Simultaneous Localization and Mapping (SLAM); map transformation
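The calibration step summarized above, aligning the layout map's orientation and scale to the sensor map from two landmark correspondences, can be sketched as a 2D similarity transform estimated from two matched points. This is a minimal illustrative sketch, not the paper's implementation; the function name and the least-assumption setup (exactly two landmarks, planar maps) are ours.

```python
import math

def similarity_from_two_points(layout_pts, sensor_pts):
    """Estimate a 2D similarity transform (scale s, rotation theta,
    translation t) mapping layout-map coordinates onto sensor-map
    coordinates from two corresponding landmark positions -- the
    minimum number of landmarks the calibration requires."""
    (ax, ay), (bx, by) = layout_pts   # landmarks on the layout map
    (cx, cy), (dx, dy) = sensor_pts   # the same landmarks on the sensor map
    # Vector between the two landmarks in each map.
    vx, vy = bx - ax, by - ay
    wx, wy = dx - cx, dy - cy
    # Scale: ratio of the inter-landmark distances.
    s = math.hypot(wx, wy) / math.hypot(vx, vy)
    # Rotation: angle between the two inter-landmark vectors.
    theta = math.atan2(wy, wx) - math.atan2(vy, vx)
    c, si = math.cos(theta), math.sin(theta)
    # Translation: chosen so the first landmark maps exactly onto its match.
    tx = cx - s * (c * ax - si * ay)
    ty = cy - s * (si * ax + c * ay)

    def apply(p):
        """Transform a layout-map point into sensor-map coordinates."""
        x, y = p
        return (s * (c * x - si * y) + tx, s * (si * x + c * y) + ty)

    return s, theta, apply
```

With more than two landmarks, the same transform would typically be fit in a least-squares sense (e.g. Umeyama alignment) rather than from a single pair of correspondences.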
1. Introduction
With the development of sensors, control systems, bionics and artificial intelligence,
robot technology has been investigated and applied in many areas to provide services
such as hospital inspection, hotel delivery and warehouse logistics. Using mobile robots
in indoor environments can effectively improve the intelligence and effectiveness of task
execution. By combining robot intelligence and human expertise, human–robot interaction is promoted in multiple scenarios, such as medical applications [1,2] and industrial applications [3,4]. Meanwhile, in these robot applications, navigation plays an increasingly
crucial role. As an essential element in the navigation process, high-precision positioning in
indoor environments is still a challenging task. Since the Global Navigation Satellite System
(GNSS) cannot provide satisfactory positioning services in indoor environments due to the
extreme signal attenuation and interruption caused by indoor structures, WiFi/Bluetooth
fingerprinting-based indoor positioning systems (IPSs) have attracted extensive attention and
achieved encouraging results. However, positioning based on WiFi/Bluetooth can only
achieve meter-level accuracy [5].
Compared with WiFi/Bluetooth fingerprinting-based positioning, positioning with
landmarks composed of visible light positioning (VLP)-enabled lights can provide an
absolute location when using an image sensor as a receiver. Scanning of the whole area is
not required, and global 3D positioning results can be achieved as long as the 3D position