sensors
Article

A Positioning Method Based on Place Cells and Head-Direction Cells for Inertial/Visual Brain-Inspired Navigation System

Yudi Chen 1, Zhi Xiong 1,*, Jianye Liu 1, Chuang Yang 1, Lijun Chao 1 and Yang Peng 2

 
Citation: Chen, Y.; Xiong, Z.; Liu, J.; Yang, C.; Chao, L.; Peng, Y. A Positioning Method Based on Place Cells and Head-Direction Cells for Inertial/Visual Brain-Inspired Navigation System. Sensors 2021, 21, 7988. https://doi.org/10.3390/s21237988

Academic Editors: George Nikolakopoulos and Maorong Ge

Received: 18 October 2021
Accepted: 23 November 2021
Published: 30 November 2021
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Copyright: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
1 Navigation Research Center, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; chenyudi@nuaa.edu.cn (Y.C.); ljyac@nuaa.edu.cn (J.L.); yangchuang@nuaa.edu.cn (C.Y.); chaolijun@nuaa.edu.cn (L.C.)
2 Shanghai Aerospace Control Technology Institute, Shanghai 201108, China; 13501798394@163.com
* Correspondence: xiongzhi@nuaa.edu.cn; Tel.: +86-138-1380-8576
Abstract: In nature, mammals rely on vision and self-motion information to distinguish directions and to navigate accurately and stably. Inspired by the way neurons in the mammalian brain represent the spatial environment, a brain-inspired positioning method based on multi-sensor input is proposed to solve the problem of accurate navigation in the absence of satellite signals. In research on the engineering application of brain-inspired models, it is still uncommon to fuse information from multiple sensors to improve positioning accuracy and to decode navigation parameters from the encoded activity of the brain-inspired model. Therefore, this paper establishes a head-direction cell model and a place cell model with application potential, based on continuous attractor neural networks (CANNs), to encode visual and inertial input information, and then decodes direction and position from the firing response of the neuron population. The experimental results confirm that the brain-inspired navigation model integrates multiple sources of information, outputs more accurate and stable navigation parameters, and generates motion paths. The proposed model promotes the effective development of brain-inspired navigation research.
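This excerpt does not include the paper's model equations, but the encode/decode pipeline the abstract describes can be illustrated with a minimal sketch: a one-dimensional ring CANN of head-direction cells whose activity bump is pinned by an external (e.g., visual) heading cue, followed by population-vector decoding of the bump. All parameters below (network size, kernel widths, gains, time constants) are illustrative assumptions, not values from the paper.

import numpy as np

N = 100                                                # head-direction cells on a 1-D ring
theta = np.linspace(-np.pi, np.pi, N, endpoint=False)  # preferred firing directions

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return np.angle(np.exp(1j * a))

# Recurrent connectivity: local Gaussian excitation minus uniform inhibition.
D = wrap(theta[:, None] - theta[None, :])
W = 1.5 * np.exp(-D**2 / (2 * 0.3**2)) - 0.5

def step(u, cue_dir, cue_gain, dt=0.01, tau=0.1):
    """One Euler step of tau * du/dt = -u + W @ r / N + I_cue."""
    r = np.maximum(u, 0.0)                             # rectified-linear firing rates
    I = cue_gain * np.exp(-wrap(theta - cue_dir)**2 / (2 * 0.3**2))
    return u + (dt / tau) * (-u + W @ r / N + I)

def decode(u):
    """Population-vector decoding: rate-weighted circular mean of preferred directions."""
    r = np.maximum(u, 0.0)
    return np.angle(np.sum(r * np.exp(1j * theta)))

u = 0.01 * np.random.rand(N)                           # small random initial activity
for _ in range(500):                                   # a visual cue at +45 deg pins the bump
    u = step(u, cue_dir=np.deg2rad(45.0), cue_gain=1.0)
print(np.rad2deg(decode(u)))                           # decoded heading, close to 45

Place cells extend the same idea to a two-dimensional attractor sheet, with position read out from the 2-D population response in the same rate-weighted fashion.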
Keywords: brain-inspired navigation; place cells; head-direction cells; continuous attractor neural networks (CANNs); population neuron decoding
1. Introduction
Unmanned mobile platforms (such as robots, unmanned vehicles, and unmanned aerial vehicles) have a wide range of applications in many industries. For mobile platforms, autonomous navigation is a key technology for automatic operation. At present, a navigation system can be equipped with inertial measurement units (IMUs), global navigation satellite systems (GNSS), vision sensors, radar sensors, etc. However, satellite signals suffer interference in satellite-jamming environments (e.g., indoor facilities, tall buildings, forests), which reduces navigation and positioning accuracy. Compared with radar sensors, vision sensors capture richer perceptual information, so visual autonomous navigation methods have developed rapidly.
In engineering applications, a vision sensor can accurately track environmental features when the mobile platform moves at low speed. Using vision to localize and build maps has achieved good results, but positioning and navigation degrade under weak light and rapid platform motion. An IMU follows changes in movement speed and accurately measures angular velocity and linear acceleration without scene restrictions, but its estimates accumulate drift during long-term operation. To exploit the complementary advantages of vision sensors and IMUs, fusing visual and inertial sensor data can provide more accurate position information [1,2]. Location estimation methods are usually based on probability models, such as the extended Kalman filter (EKF) [3], the unscented Kalman filter (UKF) [4], and the particle filter (PF) [5]. The above methods rely on establishing an
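For context on the probability-model fusion cited above, the following is a minimal loosely coupled EKF sketch, not the paper's method: an IMU-derived speed and yaw rate propagate a planar pose through a unicycle motion model, and a visual position fix corrects it. The state layout and all noise values are assumptions for illustration.

import numpy as np

def predict(x, P, v, omega, dt, Q):
    """Propagate the pose [px, py, heading] with IMU-derived speed v and yaw rate omega."""
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],    # Jacobian of the motion model
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def update(x, P, z, R):
    """Correct the pose with a visual position fix z = [px, py]."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])                    # measurement model: position only
    y = z - H @ x                                      # innovation
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), 0.1 * np.eye(3)                    # initial pose and covariance
Q = np.diag([1e-3, 1e-3, 1e-4])                        # assumed IMU process noise
R = np.diag([0.05, 0.05])                              # assumed visual fix noise
x, P = predict(x, P, v=1.0, omega=0.1, dt=0.1, Q=Q)
x, P = update(x, P, z=np.array([0.12, 0.01]), R=R)

The UKF replaces the Jacobian linearization with sigma-point propagation, and the PF replaces the Gaussian posterior with a weighted sample set; the predict/correct structure is the same.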