Search Results (21)

Search Parameters:
Keywords = brain-inspired navigation

19 pages, 7467 KB  
Article
A Bionic Goal-Oriented Path Planning Method Based on an Experience Map
by Qiang Zou and Yiwei Chen
Biomimetics 2025, 10(5), 305; https://doi.org/10.3390/biomimetics10050305 - 11 May 2025
Abstract
Brain-inspired bionic navigation is a groundbreaking technological approach that emulates the biological navigation systems found in mammalian brains. This innovative method leverages experiences within cognitive space to plan global paths to targets, showcasing remarkable autonomy and adaptability to various environments. This work introduces a novel bionic, goal-oriented path planning approach for mobile robots. First, an experience map is constructed using NeuroSLAM, a bio-inspired simultaneous localization and mapping method. Based on this experience map, a successor representation model is then developed through reinforcement learning, and a goal-oriented predictive map is formulated to address long-term reward estimation challenges. By integrating goal-oriented rewards, the proposed algorithm efficiently plans optimal global paths in complex environments for mobile robots. Our experimental validation demonstrates the method’s effectiveness in experience sequence prediction and goal-oriented global path planning. The comparative results highlight its superior performance over traditional Dijkstra’s algorithm, particularly in terms of adaptability to environmental changes and computational efficiency in optimal global path generation. Full article
(This article belongs to the Special Issue Bio-Inspired Robotics and Applications 2025)
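The planning step this abstract describes (a successor representation learned over experience-map nodes, then goal-oriented values from rewards) can be sketched in a few lines. This is a toy illustration under assumed values, not the authors' NeuroSLAM-based implementation:

```python
import numpy as np

def learn_sr(transitions, n_states, gamma=0.95, alpha=0.1, sweeps=200):
    """TD-learn a successor representation M, where M[s, s'] estimates the
    discounted expected future occupancy of s' when starting from s."""
    M = np.eye(n_states)
    I = np.eye(n_states)
    for _ in range(sweeps):
        for s, s_next in transitions:
            M[s] += alpha * (I[s] + gamma * M[s_next] - M[s])
    return M

# Toy "experience map": four nodes visited in a loop 0 -> 1 -> 2 -> 3 -> 0.
transitions = [(0, 1), (1, 2), (2, 3), (3, 0)]
M = learn_sr(transitions, n_states=4)

# Goal-oriented values follow by one matrix product: reward only at node 2,
# so nodes closer to the goal (along the loop) get higher value.
reward = np.array([0.0, 0.0, 1.0, 0.0])
V = M @ reward
```

Because the long-term reward estimate is cached in `M`, changing the goal only changes `reward`, not the learned map — the property that makes this cheaper than re-running a graph search such as Dijkstra's per goal.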

20 pages, 4186 KB  
Article
Deep Learning-Emerged Grid Cells-Based Bio-Inspired Navigation in Robotics
by Arturs Simkuns, Rodions Saltanovs, Maksims Ivanovs and Roberts Kadikis
Sensors 2025, 25(5), 1576; https://doi.org/10.3390/s25051576 - 4 Mar 2025
Abstract
Grid cells in the brain’s entorhinal cortex are essential for spatial navigation and have inspired advancements in robotic navigation systems. This paper first provides an overview of recent research on grid cell-based navigation in robotics, focusing on deep learning models and algorithms capable of handling uncertainty and dynamic environments. We then present experimental results where a grid cell network was trained using trajectories from a mobile unmanned ground vehicle (UGV) robot. After training, the network’s units exhibited spatially periodic and hexagonal activation patterns characteristic of biological grid cells, as well as responses resembling border cells and head-direction cells. These findings demonstrate that grid cell networks can effectively learn spatial representations from robot trajectories, providing a foundation for developing advanced navigation algorithms for mobile robots. We conclude by discussing current challenges and future research directions in this field. Full article
(This article belongs to the Special Issue Smart Sensor Systems for Positioning and Navigation)
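The hexagonal firing pattern that trained units exhibit is commonly idealized as a sum of three cosine plane waves with wave vectors 60 degrees apart. The sketch below generates that idealized rate map (an analytical caricature, not the trained network from the paper; spacing and orientation values are assumed):

```python
import numpy as np

def grid_activity(x, y, spacing=0.5, orientation=0.0):
    """Idealized hexagonal grid-cell firing map: the sum of three cosine
    plane waves whose wave vectors are 60 degrees apart."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)   # wavenumber for the field spacing
    angles = orientation + np.radians([0.0, 60.0, 120.0])
    g = sum(np.cos(k * (x * np.cos(a) + y * np.sin(a))) for a in angles)
    return (g + 1.5) / 4.5                   # rescale from [-1.5, 3] to [0, 1]

# Firing rate over a small arena; the peaks fall on a hexagonal lattice.
xs, ys = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
rate_map = grid_activity(xs, ys)
```

Comparing a trained unit's rate map against this template (e.g., via a gridness score) is the usual way such "deep-learning-emerged" grid cells are identified.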

28 pages, 2402 KB  
Review
A Review of Neuromorphic Sound Source Localization and Echolocation-Based Navigation Systems
by Eugénie Dalmas, François Danneville, Fouzia Elbahhar, Michael Bocquet and Christophe Loyez
Electronics 2024, 13(24), 4858; https://doi.org/10.3390/electronics13244858 - 10 Dec 2024
Abstract
The development of positioning systems has been significantly advanced by a combination of technological innovations, such as improved sensors, signal processing, and computational power, alongside inspiration drawn from biological mechanisms. Although vision is the main means for positioning oneself—or elements relative to oneself—in the environment, other sensory mediums provide additional information, and may even take over when visibility is lacking, such as in the dark or in troubled waters. In particular, the auditory system in mammals greatly contributes to determining the location of sound sources, as well as navigating or identifying objects’ texture and shape, when combined with echolocation behavior. Taking further inspiration from the neuronal processing in the brain, neuromorphic computing has been studied in the context of sound source localization and echolocation-based navigation, which aim at better understanding biological processes or reaching state-of-the-art performances in energy efficiency through the use of spike encoding. This paper sets out a review of these neuromorphic sound source localization, sonar- and radar-based navigation systems, from their earliest appearance to the latest published works. Current trends and possible future directions within this scope are discussed. Full article
(This article belongs to the Special Issue Precision Positioning and Navigation Communication Systems)
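The core cue behind mammalian sound-source localization is the interaural time difference (ITD). A minimal non-spiking sketch of the underlying computation — cross-correlation between two microphones and a far-field geometric model — is below; sample rate, microphone spacing, and the sign convention are assumptions, and neuromorphic systems would replace the correlation with spike-based coincidence detection:

```python
import numpy as np

FS = 48_000          # sample rate (Hz), assumed
MIC_DIST = 0.2       # microphone spacing (m), assumed
C = 343.0            # speed of sound (m/s)

def estimate_itd(left, right, fs=FS):
    """ITD from the peak of the cross-correlation; positive when the
    sound reaches the left microphone first."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

def azimuth_from_itd(itd, d=MIC_DIST, c=C):
    """Far-field model itd = d*sin(theta)/c; positive theta points left."""
    return np.degrees(np.arcsin(np.clip(itd * c / d, -1.0, 1.0)))

# Synthetic check: white noise arriving 10 samples earlier at the left mic.
rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
delay = 10
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right)
theta = azimuth_from_itd(itd)
```

A 10-sample lead at 48 kHz corresponds to roughly 0.21 ms of ITD, i.e., an azimuth of about 21 degrees under this geometry.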

41 pages, 3369 KB  
Review
Application of Event Cameras and Neuromorphic Computing to VSLAM: A Survey
by Sangay Tenzin, Alexander Rassau and Douglas Chai
Biomimetics 2024, 9(7), 444; https://doi.org/10.3390/biomimetics9070444 - 20 Jul 2024
Abstract
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM, also commonly known as VSLAM, relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. However, recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations. Event cameras inspired by biological vision systems capture the scenes asynchronously, consuming minimal power but with higher temporal resolution. Neuromorphic processors, which are designed to mimic the parallel processing capabilities of the human brain, offer efficient computation for real-time data processing of event-based data streams. This paper provides a comprehensive overview of recent research efforts in integrating event cameras and neuromorphic processors into VSLAM systems. It discusses the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. Furthermore, an in-depth survey was conducted on state-of-the-art approaches in event-based SLAM, including feature extraction, motion estimation, and map reconstruction techniques. Additionally, the integration of event cameras with neuromorphic processors, focusing on their synergistic benefits in terms of energy efficiency, robustness, and real-time performance, was explored. The paper also discusses the challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, the potential applications and future directions for event-based SLAM systems are outlined, ranging from robotics and autonomous vehicles to augmented reality. Full article
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2024)
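An event camera emits an asynchronous stream of (x, y, timestamp, polarity) tuples rather than frames. The simplest bridge to a conventional VSLAM pipeline, mentioned in many of the surveyed approaches, is accumulating events over a window into a signed frame; the sketch below illustrates that idea only (resolution and timestamps are made up):

```python
import numpy as np

def events_to_frame(events, height, width, t0, t1):
    """Accumulate signed event polarities in the window [t0, t1) into a 2-D
    frame: +1 per ON event, -1 per OFF event at each pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, pol in events:
        if t0 <= t < t1:
            frame[y, x] += 1 if pol else -1
    return frame

# Synthetic events: a bright edge sweeping right along row 2, with one
# trailing OFF event cancelling the last ON event.
events = [(1, 2, 0.001, 1), (2, 2, 0.002, 1),
          (3, 2, 0.003, 1), (3, 2, 0.004, 0)]
frame = events_to_frame(events, height=4, width=5, t0=0.0, t1=0.005)
```

Real event-based SLAM front ends usually avoid this binning (it discards the microsecond timing that motivates the sensor), but it is the standard baseline representation.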

17 pages, 1626 KB  
Article
Modeling Autonomous Vehicle Responses to Novel Observations Using Hierarchical Cognitive Representations Inspired Active Inference
by Sheida Nozari, Ali Krayani, Pablo Marin, Lucio Marcenaro, David Martin Gomez and Carlo Regazzoni
Computers 2024, 13(7), 161; https://doi.org/10.3390/computers13070161 - 28 Jun 2024
Abstract
Equipping autonomous agents for dynamic interaction and navigation is a significant challenge in intelligent transportation systems. This study aims to address this by implementing a brain-inspired model for decision making in autonomous vehicles. We employ active inference, a Bayesian approach that models decision-making processes similar to the human brain, focusing on the agent’s preferences and the principle of free energy. This approach is combined with imitation learning to enhance the vehicle’s ability to adapt to new observations and make human-like decisions. The research involved developing a multi-modal self-awareness architecture for autonomous driving systems and testing this model in driving scenarios, including abnormal observations. The results demonstrated the model’s effectiveness in enabling the vehicle to make safe decisions, particularly in unobserved or dynamic environments. The study concludes that the integration of active inference with imitation learning significantly improves the performance of autonomous vehicles, offering a promising direction for future developments in intelligent transportation systems. Full article
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)
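The active-inference action selection this abstract builds on can be shown in a minimal discrete form: each action is scored by its expected free energy (risk of deviating from preferred observations, plus ambiguity of the predicted states). The model below is an illustrative two-state toy with assumed matrices, not the paper's multi-modal driving architecture:

```python
import numpy as np

A = np.array([[0.9, 0.1],                 # P(observation | state)
              [0.1, 0.9]])
B = [np.eye(2),                           # action 0: stay in place
     np.array([[0.0, 1.0],
               [1.0, 0.0]])]              # action 1: switch state
logC = np.log(np.array([0.05, 0.95]))     # preference: observation 1

q = np.array([0.9, 0.1])                  # current belief: mostly state 0

def expected_free_energy(a):
    qs = B[a] @ q                         # predicted state distribution
    qo = A @ qs                           # predicted observation distribution
    risk = (qo * (np.log(qo + 1e-12) - logC)).sum()    # KL[qo || C]
    H = -(A * np.log(A + 1e-12)).sum(axis=0)           # obs entropy per state
    ambiguity = (qs * H).sum()
    return risk + ambiguity

G = [expected_free_energy(a) for a in (0, 1)]
best = int(np.argmin(G))                  # the agent picks the lower-G action
```

Here switching (action 1) makes the preferred observation likely, so its expected free energy is lower and it is selected — the same preference-driven mechanism the paper combines with imitation learning.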

27 pages, 6109 KB  
Article
An Improved Dyna-Q Algorithm Inspired by the Forward Prediction Mechanism in the Rat Brain for Mobile Robot Path Planning
by Jing Huang, Ziheng Zhang and Xiaogang Ruan
Biomimetics 2024, 9(6), 315; https://doi.org/10.3390/biomimetics9060315 - 23 May 2024
Abstract
The traditional Model-Based Reinforcement Learning (MBRL) algorithm has high computational cost, poor convergence, and poor performance in robot spatial cognition and navigation tasks, and it cannot fully explain the ability of animals to quickly adapt to environmental changes and learn a variety of complex tasks. Studies have shown that vicarious trial and error (VTE) and the hippocampus forward prediction mechanism in rats and other mammals can be used as key components of action selection in MBRL to support “goal-oriented” behavior. Therefore, we propose an improved Dyna-Q algorithm inspired by the forward prediction mechanism of the hippocampus to solve the above problems and tackle the exploration–exploitation dilemma of Reinforcement Learning (RL). This algorithm alternately presents the potential path in the future for mobile robots and dynamically adjusts the sweep length according to the decision certainty, so as to determine action selection. We test the performance of the algorithm in a two-dimensional maze environment with static and dynamic obstacles, respectively. Compared with classic RL algorithms like State-Action-Reward-State-Action (SARSA) and Dyna-Q, the algorithm can speed up spatial cognition and improve the global search ability of path planning. In addition, our method reflects key features of how the brain organizes MBRL to effectively solve difficult tasks such as navigation, and it provides a new idea for spatial cognitive tasks from a biological perspective. Full article
(This article belongs to the Special Issue Bioinspired Algorithms)
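For reference, the baseline this work improves on — tabular Dyna-Q, where a learned model replays simulated transitions between real steps — looks like the sketch below. It is the classic algorithm on a made-up corridor task; the paper's contribution (VTE-style forward sweeps with certainty-dependent depth) is not reproduced here:

```python
import numpy as np

def dyna_q(env_step, n_states, n_actions, episodes=200, planning_steps=10,
           alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Dyna-Q: each real transition updates Q and a one-step model;
    the model then replays simulated transitions to speed up learning."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    model = {}                                   # (s, a) -> (r, s', done)
    for _ in range(episodes):
        s = 0
        for _ in range(50):                      # step cap per episode
            a = int(rng.integers(n_actions)) if rng.random() < eps \
                else int(np.argmax(Q[s]))
            r, s2, done = env_step(s, a)
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
            model[(s, a)] = (r, s2, done)
            items = list(model.items())
            for _ in range(planning_steps):      # simulated experience
                (ps, pa), (pr, ps2, pdone) = items[rng.integers(len(items))]
                Q[ps, pa] += alpha * (pr + gamma * np.max(Q[ps2]) * (not pdone)
                                      - Q[ps, pa])
            s = s2
            if done:
                break
    return Q

# Toy 1-D corridor: states 0..4, actions 0=left / 1=right, reward at state 4.
def corridor(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return (1.0, s2, True) if s2 == 4 else (0.0, s2, False)

Q = dyna_q(corridor, n_states=5, n_actions=2)
```

After training, the greedy policy heads right from every state — the planning replays are what let reward information propagate back along the corridor far faster than model-free SARSA or Q-learning.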

19 pages, 1688 KB  
Article
Machine Learning-Based Control of Autonomous Vehicles for Solar Panel Cleaning Systems in Agricultural Solar Farms
by Farima Hajiahmadi, Mohammad Jafari and Mahmut Reyhanoglu
AgriEngineering 2024, 6(2), 1417-1435; https://doi.org/10.3390/agriengineering6020081 - 20 May 2024
Abstract
This paper presents a machine learning (ML)-based approach for the intelligent control of Autonomous Vehicles (AVs) utilized in solar panel cleaning systems, aiming to mitigate challenges arising from uncertainties, disturbances, and dynamic environments. Solar panels, predominantly situated in dedicated lands for solar energy production (e.g., agricultural solar farms), are susceptible to dust and debris accumulation, leading to diminished energy absorption. Instead of labor-intensive manual cleaning, robotic cleaners offer a viable solution. AVs equipped to transport and precisely position these cleaning robots are indispensable for the efficient navigation among solar panel arrays. However, environmental obstacles (e.g., rough terrain), variations in solar panel installation (e.g., height disparities, different angles), and uncertainties (e.g., AV and environmental modeling) may degrade the performance of traditional controllers. In this study, a biologically inspired method based on Brain Emotional Learning (BEL) is developed to tackle the aforementioned challenges. The developed controller is implemented numerically using MATLAB-SIMULINK. The paper concludes with a comparative analysis of the AVs’ performance using both PID and developed controllers across various scenarios, highlighting the efficacy and advantages of the intelligent control approach for AVs deployed in solar panel cleaning systems within agricultural solar farms. Simulation results demonstrate the superior performance of the ML-based controller, showcasing significant improvements over the PID controller. Full article

22 pages, 13301 KB  
Article
NeoSLAM: Long-Term SLAM Using Computational Models of the Brain
by Carlos Alexandre Pontes Pizzino, Ramon Romankevicius Costa, Daniel Mitchell and Patrícia Amâncio Vargas
Sensors 2024, 24(4), 1143; https://doi.org/10.3390/s24041143 - 9 Feb 2024
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental problem in the field of robotics, enabling autonomous robots to navigate and create maps of unknown environments. Nevertheless, the SLAM methods that use cameras face problems in maintaining accurate localization over extended periods across various challenging conditions and scenarios. Following advances in neuroscience, we propose NeoSLAM, a novel long-term visual SLAM, which uses computational models of the brain to deal with this problem. Inspired by the human neocortex, NeoSLAM is based on a hierarchical temporal memory model that has the potential to identify temporal sequences of spatial patterns using sparse distributed representations. Being known to have a high representational capacity and high tolerance to noise, sparse distributed representations have several properties, enabling the development of a novel neuroscience-based loop-closure detector that allows for real-time performance, especially in resource-constrained robotic systems. The proposed method has been thoroughly evaluated in terms of environmental complexity by using a wheeled robot deployed in the field and demonstrated that the accuracy of loop-closure detection was improved compared with the traditional RatSLAM system. Full article
(This article belongs to the Special Issue Advanced Sensing and Control Technologies for Autonomous Robots)
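The sparse distributed representations (SDRs) underlying this loop-closure detector have a very cheap similarity measure: the count of shared active bits. The sketch below caricatures that matching step only (SDR dimensions and noise level are assumed; the hierarchical temporal memory that produces the SDRs is not modeled):

```python
import numpy as np

SDR_SIZE, SDR_ACTIVE = 2048, 40       # assumed HTM-style dimensions

def random_sdr(rng):
    """An SDR represented as the set of its few active bit indices."""
    return frozenset(int(b) for b in
                     rng.choice(SDR_SIZE, size=SDR_ACTIVE, replace=False))

def overlap(a, b):
    """SDR similarity is simply the number of shared active bits."""
    return len(a & b)

rng = np.random.default_rng(42)
stored = [random_sdr(rng) for _ in range(100)]   # one SDR per mapped place

# Revisiting place 7: the same code with a few active bits perturbed by noise.
kept = sorted(stored[7])[:36]
noisy = set(kept) | {int(b) for b in rng.choice(SDR_SIZE, size=4)}
match = max(range(len(stored)), key=lambda i: overlap(stored[i], noisy))
```

Because random SDRs of this sparsity share almost no bits by chance, a heavily corrupted code still overlaps its original far more than any other stored place — the noise tolerance the abstract credits for robust loop closure.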

20 pages, 8675 KB  
Article
Perceiving like a Bat: Hierarchical 3D Geometric–Semantic Scene Understanding Inspired by a Biomimetic Mechanism
by Chi Zhang, Zhong Yang, Bayang Xue, Haoze Zhuo, Luwei Liao, Xin Yang and Zekun Zhu
Biomimetics 2023, 8(5), 436; https://doi.org/10.3390/biomimetics8050436 - 19 Sep 2023
Abstract
Geometric–semantic scene understanding is a spatial intelligence capability that is essential for robots to perceive and navigate the world. However, understanding a natural scene remains challenging for robots because of restricted sensors and time-varying situations. In contrast, humans and animals are able to form a complex neuromorphic concept of the scene they move in. This neuromorphic concept captures geometric and semantic aspects of the scenario and reconstructs the scene at multiple levels of abstraction. This article seeks to reduce the gap between robot and animal perception by proposing an ingenious scene-understanding approach that seamlessly captures geometric and semantic aspects in an unexplored environment. We proposed two types of biologically inspired environment perception methods, i.e., a set of elaborate biomimetic sensors and a brain-inspired parsing algorithm related to scene understanding, that enable robots to perceive their surroundings like bats. Our evaluations show that the proposed scene-understanding system achieves competitive performance in image semantic segmentation and volumetric–semantic scene reconstruction. Moreover, to verify the practicability of our proposed scene-understanding method, we also conducted real-world geometric–semantic scene reconstruction in an indoor environment with our self-developed drone. Full article
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing)

22 pages, 7055 KB  
Article
Brain-Inspired Navigation Model Based on the Distribution of Polarized Sky-Light
by Jinshan Li, Jinkui Chu, Ran Zhang and Kun Tong
Machines 2022, 10(11), 1028; https://doi.org/10.3390/machines10111028 - 4 Nov 2022
Abstract
This paper proposes a brain-inspired navigation model based on absolute heading for the autonomous navigation of unmanned platforms. The proposed model combined the sand ant’s strategy of acquiring absolute heading from the sky environment and the brain-inspired navigation system, which is closer to the navigation mechanism of migratory animals. Firstly, a brain-inspired grid cell network model and an absolute heading-based head-direction cell network model were constructed based on the continuous attractor network (CAN). Then, an absolute heading-based environmental vision template was constructed using the line scan intensity distribution curve, and the path integration error was corrected using the environmental vision template. Finally, a topological cognitive node was constructed according to the grid cell, the head direction cell, the environmental visual template, the absolute heading information, and the position information. Numerous topological nodes formed the absolute heading-based topological map. The model is a topological navigation method not limited to strict geometric space scale, and its position and absolute heading are decoupled. The experimental results showed that the proposed model is superior to the other methods in terms of the accuracy of visual template recognition, as well as the accuracy and topology consistency of the constructed environment topology map. Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
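The benefit of an absolute heading cue over pure path integration can be illustrated with a toy head-direction-cell population: dead reckoning accumulates gyro drift, while an absolute cue re-centres the activity bump. This is a rate-coded caricature with assumed numbers, not the paper's continuous attractor dynamics or its polarized-light model:

```python
import numpy as np

N_CELLS = 60
pref = np.arange(N_CELLS) * 360.0 / N_CELLS    # preferred directions (deg)

def encode(heading_deg, kappa=4.0):
    """Bump of head-direction-cell activity centred on the heading."""
    act = np.exp(kappa * np.cos(np.radians(pref - heading_deg)))
    return act / act.max()

def decode(act):
    """Population-vector readout of the bump's centre, in degrees."""
    ang = np.radians(pref)
    return np.degrees(np.arctan2((act * np.sin(ang)).sum(),
                                 (act * np.cos(ang)).sum())) % 360.0

# Dead reckoning alone drifts: a 0.2 deg/step gyro bias over 100 steps
# turns a true 300 deg heading into a 320 deg estimate.
est = sum(3.0 + 0.2 for _ in range(100)) % 360.0
true_heading = (3.0 * 100) % 360.0

# An absolute cue (a perfect polarization-derived heading is assumed here)
# re-anchors the bump, eliminating the accumulated 20 deg of drift.
bump = encode(true_heading)
recovered = decode(bump)
```

Decoupling position from absolute heading, as the paper does, means this correction never has to be traded off against the position estimate.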

18 pages, 4591 KB  
Article
Improved Visual SLAM Using Semantic Segmentation and Layout Estimation
by Ahmed Mahmoud and Mohamed Atia
Robotics 2022, 11(5), 91; https://doi.org/10.3390/robotics11050091 - 6 Sep 2022
Abstract
The technological advances in computational systems have enabled very complex computer vision and machine learning approaches to perform efficiently and accurately. These new approaches can be considered a new set of tools to reshape the visual SLAM solutions. We present an investigation of the latest neuroscientific research that explains how the human brain can accurately navigate and map unknown environments. The accuracy suggests that human navigation is not affected by traditional visual odometry drifts resulting from tracking visual features. It utilises the geometrical structures of the surrounding objects within the navigated space. The identified objects and space geometrical shapes anchor the estimated space representation and mitigate the overall drift. Inspired by the human brain’s navigation techniques, this paper presents our efforts to incorporate two machine learning techniques into a VSLAM solution: semantic segmentation and layout estimation to imitate human abilities to map new environments. The proposed system benefits from the geometrical relations between the corner points of the cuboid environments to improve the accuracy of trajectory estimation. Moreover, the implemented SLAM solution semantically groups the map points and then tracks each group independently to limit the system drift. The implemented solution yielded higher trajectory accuracy and immunity to large pure rotations. Full article
(This article belongs to the Section Aerospace Robotics and Autonomous Systems)

33 pages, 12058 KB  
Article
A Brain-Inspired Model of Hippocampal Spatial Cognition Based on a Memory-Replay Mechanism
by Runyu Xu, Xiaogang Ruan and Jing Huang
Brain Sci. 2022, 12(9), 1176; https://doi.org/10.3390/brainsci12091176 - 1 Sep 2022
Abstract
Since the hippocampus plays an important role in memory and spatial cognition, the study of spatial computation models inspired by the hippocampus has attracted much attention. This study relies mainly on reward signals for learning environments and planning paths. As reward signals in a complex or large-scale environment attenuate sharply, the spatial cognition and path planning performance of such models will decrease clearly as a result. Aiming to solve this problem, we present a brain-inspired mechanism, a Memory-Replay Mechanism, that is inspired by the reactivation function of place cells in the hippocampus. We classify the path memory according to the reward information and find the overlapping place cells in different categories of path memory to segment and reconstruct the memory to form a “virtual path”, replaying the memory by associating the reward information. We conducted a series of navigation experiments in a simple environment called a Morris water maze (MWM) and in a complex environment, and we compared our model with a reinforcement learning model and other brain-inspired models. The experimental results show that under the same conditions, our model has a higher rate of environmental exploration and more stable signal transmission, and the average reward obtained under stable conditions was 14.12% higher than RL with random-experience replay. Our model also shows good performance in complex maze environments where signals are easily attenuated. Moreover, the performance of our model at bifurcations is consistent with neurophysiological studies. Full article

31 pages, 4280 KB  
Article
Navigation Map-Based Artificial Intelligence
by Howard Schneider
AI 2022, 3(2), 434-464; https://doi.org/10.3390/ai3020026 - 12 May 2022
Abstract
A biologically inspired cognitive architecture is described which uses navigation maps (i.e., spatial locations of objects) as its main data elements. The navigation maps are also used to represent higher-level concepts as well as to direct operations to perform on other navigation maps. Incoming sensory information is mapped to local sensory navigation maps which then are in turn matched with the closest multisensory maps, and then mapped onto a best-matched multisensory navigation map. Enhancements of the biologically inspired feedback pathways allow the intermediate results of operations performed on the best-matched multisensory navigation map to be fed back, temporarily stored, and re-processed in the next cognitive cycle. This allows the exploration and generation of cause-and-effect behavior. In the re-processing of these intermediate results, navigation maps can, by core analogical mechanisms, lead to other navigation maps which offer an improved solution to many routine problems the architecture is exposed to. Given that the architecture is brain-inspired, analogical processing may also form a key mechanism in the human brain, consistent with psychological evidence. Similarly, for conventional artificial intelligence systems, analogical processing as a core mechanism may possibly allow enhanced performance. Full article
(This article belongs to the Special Issue Feature Papers for AI)

17 pages, 7064 KB  
Article
Position Correction and Trajectory Optimization of Underwater Long-Distance Navigation Inspired by Sea Turtle Migration
by Ziyuan Li, Huapeng Yu, Ye Li, Tongsheng Shen, Chongyang Wang and Zheng Cong
J. Mar. Sci. Eng. 2022, 10(2), 163; https://doi.org/10.3390/jmse10020163 - 27 Jan 2022
Abstract
Accumulating evidence suggests that migrating animals store navigational “maps” in their brains, decoding location information from geomagnetic information based on their perception of the magnetic field. Inspired by this phenomenon, a novel geomagnetic inversion navigation framework was proposed to address the error constraint of a long-distance inertial navigation system. In the first part of the framework, the current paper proposed a geomagnetic bi-coordinate inversion localization approach which enables an autonomous underwater vehicle (AUV) to estimate its current position from geomagnetic information like migrating animals. This paper suggests that the combination of geomagnetic total intensity (F) and geomagnetic inclination (I) can determine a unique geographical location, and that there is a non-unique mapping relationship between the geomagnetic parameters and the geographical coordination (longitude and latitude). Then the cumulative error of the inertial navigation system is corrected, according to the roughly estimated position information. In the second part of the framework, a cantilever beam model is proposed to realize the optimal correction of the INS historical trajectory. Finally, the correctness of the geomagnetic bi-coordinate inversion localization model we proposed was verified by outdoor physical experiments. In addition, we also completed a geomagnetic/inertial navigation integrated long-distance semi-physical test based on the real navigation information of the AUV. The results show that the geomagnetic inversion navigation framework proposed in this paper can constrain long-distance inertial navigation errors and improve the navigation accuracy by 73.28% compared with the pure inertial navigation mode. This implies that the geomagnetic inversion localization will play a key role in long-distance AUV navigation correction. Full article
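The bi-coordinate inversion idea — recovering position from the pair (total intensity F, inclination I) — amounts to finding the location on a geomagnetic reference map whose (F, I) best matches the measurement. The sketch below uses an invented linear toy field on a small grid purely to show the lookup; a real system would query a model such as IGRF and handle the non-unique mappings the paper discusses:

```python
import numpy as np

# Hypothetical local reference grid mapping (lat, lon) -> (F in nT, I in deg).
lats = np.linspace(20.0, 30.0, 101)
lons = np.linspace(110.0, 120.0, 101)
LAT, LON = np.meshgrid(lats, lons, indexing="ij")
F = 40_000.0 + 600.0 * (LAT - 20.0)                   # toy intensity model
I = 30.0 + 1.5 * (LAT - 20.0) + 0.1 * (LON - 110.0)   # toy inclination model

def locate(f_meas, i_meas, wf=1.0 / 600.0, wi=1.0):
    """Grid cell minimising a weighted squared (F, I) mismatch."""
    cost = (wf * (F - f_meas)) ** 2 + (wi * (I - i_meas)) ** 2
    k = np.unravel_index(np.argmin(cost), cost.shape)
    return float(LAT[k]), float(LON[k])

# A measurement taken at (25.0 N, 115.0 E) under the toy model.
f_true = 40_000.0 + 600.0 * 5.0
i_true = 30.0 + 1.5 * 5.0 + 0.1 * 5.0
lat, lon = locate(f_true, i_true)
```

The rough fix this returns is then used, as in the paper's framework, only to bound and correct the drifting inertial solution rather than to replace it.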

23 pages, 5265 KB  
Article
A Positioning Method Based on Place Cells and Head-Direction Cells for Inertial/Visual Brain-Inspired Navigation System
by Yudi Chen, Zhi Xiong, Jianye Liu, Chuang Yang, Lijun Chao and Yang Peng
Sensors 2021, 21(23), 7988; https://doi.org/10.3390/s21237988 - 30 Nov 2021
Abstract
Mammals rely on vision and self-motion information in nature to distinguish directions and navigate accurately and stably. Inspired by the mammalian brain neurons to represent the spatial environment, the brain-inspired positioning method based on multi-sensors’ input is proposed to solve the problem of accurate navigation in the absence of satellite signals. In the research related to the application of brain-inspired engineering, it is not common to fuse various sensor information to improve positioning accuracy and decode navigation parameters from the encoded information of the brain-inspired model. Therefore, this paper establishes the head-direction cell model and the place cell model with application potential based on continuous attractor neural networks (CANNs) to encode visual and inertial input information, and then decodes the direction and position according to the population neuron firing response. The experimental results confirm that the brain-inspired navigation model integrates a variety of information, outputs more accurate and stable navigation parameters, and generates motion paths. The proposed model promotes the effective development of brain-inspired navigation research. Full article
(This article belongs to the Topic Autonomy for Enabling the Next Generation of UAVs)
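The decoding step this abstract highlights — reading position back out of a population of place cells — can be sketched with Gaussian-tuned cells and a firing-rate-weighted centroid. This is a toy rate-coded illustration with assumed field sizes, not the paper's CANN encoding of visual and inertial input:

```python
import numpy as np

# Hypothetical 2-D sheet of place cells with Gaussian spatial tuning over
# a 10 m x 10 m arena.
xs, ys = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
centers = np.column_stack([xs.ravel(), ys.ravel()])   # place-field centres

def fire(pos, sigma=1.0):
    """Firing rates: each cell responds by distance to its field centre."""
    d2 = ((centers - pos) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def decode(rates):
    """Position as the firing-rate-weighted centroid of the field centres."""
    w = rates / rates.sum()
    return (centers * w[:, None]).sum(axis=0)

true_pos = np.array([3.4, 7.1])
rates = fire(true_pos)
est = decode(rates)
```

Reading the navigation parameters out of the population response, rather than leaving them implicit in the neural code, is exactly the decoding problem the paper flags as uncommon in brain-inspired engineering work.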
