Article

A Positioning Method Based on Place Cells and Head-Direction Cells for Inertial/Visual Brain-Inspired Navigation System

Yudi Chen 1, Zhi Xiong 1,*, Jianye Liu 1, Chuang Yang 1, Lijun Chao 1 and Yang Peng 2
Academic Editors: George Nikolakopoulos and Maorong Ge
Received: 18 October 2021; Accepted: 23 November 2021; Published: 30 November 2021
1 Navigation Research Center, College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China; chenyudi@nuaa.edu.cn (Y.C.); ljyac@nuaa.edu.cn (J.L.); yangchuang@nuaa.edu.cn (C.Y.); chaolijun@nuaa.edu.cn (L.C.)
2 Shanghai Aerospace Control Technology Institute, Shanghai 201108, China; 13501798394@163.com
* Correspondence: xiongzhi@nuaa.edu.cn; Tel.: +86-138-1380-8576
Abstract: In nature, mammals rely on vision and self-motion cues to distinguish direction and navigate accurately and stably. Inspired by the way neurons in the mammalian brain represent the spatial environment, a brain-inspired positioning method based on multi-sensor input is proposed to solve the problem of accurate navigation in the absence of satellite signals. In research on engineering applications of brain-inspired models, it is uncommon both to fuse information from multiple sensors to improve positioning accuracy and to decode navigation parameters from the encoded representation of the brain-inspired model. Therefore, this paper establishes a head-direction cell model and a place cell model with application potential, based on continuous attractor neural networks (CANNs), to encode visual and inertial inputs, and then decodes heading and position from the firing response of the neuron population. The experimental results confirm that the brain-inspired navigation model integrates multiple sources of information, outputs more accurate and stable navigation parameters, and generates motion paths. The proposed model promotes the effective development of brain-inspired navigation research.
Keywords: brain-inspired navigation; place cells; head-direction cells; continuous attractor neural networks (CANNs); population neuron decoding
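As a concrete illustration of the population-decoding step described in the abstract, the following is a minimal Python sketch of reading a heading estimate out of a ring of head-direction cells via a population vector. The cell count, tuning width, and bump shape are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def decode_heading(firing_rates, preferred_dirs):
    """Population-vector readout of heading from head-direction cells.

    firing_rates: array of firing rates, one per cell.
    preferred_dirs: each cell's preferred direction (radians).
    Returns the decoded heading in radians.
    """
    # Weight each cell's preferred-direction unit vector by its firing
    # rate, then take the angle of the summed vector. This circular
    # average avoids the wrap-around problem at +/- pi.
    x = np.sum(firing_rates * np.cos(preferred_dirs))
    y = np.sum(firing_rates * np.sin(preferred_dirs))
    return np.arctan2(y, x)

# Example: 100 cells with evenly spaced preferred directions and a
# Gaussian-shaped activity bump centered at 30 degrees (an assumption).
n = 100
prefs = np.linspace(-np.pi, np.pi, n, endpoint=False)
true_heading = np.deg2rad(30.0)
diff = np.angle(np.exp(1j * (prefs - true_heading)))   # wrapped angle difference
rates = np.exp(-diff**2 / (2 * np.deg2rad(20.0)**2))   # bump of activity
print(np.rad2deg(decode_heading(rates, prefs)))        # ~30.0
```

The same readout generalizes to place cells, where each cell's preferred location replaces the preferred direction and the decoded quantity is a 2-D position rather than an angle.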
1. Introduction
Unmanned mobile platforms (such as robots, unmanned vehicles, and unmanned aerial vehicles) are widely used across many industries. For mobile platforms, autonomous navigation is a key enabling technology of automatic operation. At present, a navigation system can be equipped with inertial measurement units (IMUs), global navigation satellite systems (GNSS), vision sensors, radar sensors, and so on. However, satellite signals are susceptible to interference in GNSS-degraded environments (e.g., indoor facilities, among tall buildings, in forests), which reduces navigation and positioning accuracy. Compared with radar sensors, vision sensors capture richer perceptual information, so visual autonomous navigation methods have developed rapidly.
In engineering applications, a vision sensor can accurately track environmental features when the mobile platform moves at low speed. Using vision for localization and mapping has achieved good results, but positioning and navigation performance degrades under weak illumination or rapid platform motion. An IMU tracks changes in motion, accurately measuring angular velocity and linear acceleration regardless of the scene, but its estimates accumulate drift over long periods of operation. To exploit the complementary advantages of vision sensors and IMUs, fusing visual and inertial sensor data can provide more accurate position information [1,2]. Location estimation methods are usually based on probabilistic models, such as the extended Kalman filter (EKF) [3], the unscented Kalman filter (UKF) [4], and the particle filter (PF) [5]. The above methods rely on establishing an accurate mathematical model of the system.
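To make the probabilistic-fusion idea concrete, the following is a minimal Python sketch of one predict/update cycle in the Kalman-filter style (the linear special case of the EKF cited above). The state layout, noise values, and the assumption that IMU acceleration is already rotated into the navigation frame are all illustrative, not the paper's filter design:

```python
import numpy as np

dt = 0.01                                # IMU sample period (illustrative)
x = np.zeros(4)                          # state: [px, py, vx, vy] (assumed layout)
P = np.eye(4) * 0.1                      # state covariance

F = np.eye(4); F[0, 2] = F[1, 3] = dt    # constant-velocity transition
Q = np.eye(4) * 1e-4                     # process noise (assumed)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)      # vision observes position only
R = np.eye(2) * 0.05                     # measurement noise (assumed)

def predict(x, P, accel):
    """Propagate state with IMU acceleration (assumed to be expressed
    in the navigation frame for brevity)."""
    x = F @ x
    x[2:] += accel * dt                  # integrate acceleration into velocity
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the prediction with a visual position measurement z."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One fusion cycle: IMU-driven prediction, then a visual position fix.
x, P = predict(x, P, np.array([0.1, 0.0]))
x, P = update(x, P, np.array([0.001, 0.0]))
```

The UKF and PF cited above replace the linearized propagation with sigma-point or sampled approximations but follow the same predict/correct structure.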