Biologically Inspired Vision and Image Processing 2024

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: closed (20 May 2025) | Viewed by 12071

Special Issue Editor


Guest Editor
Department of Computer Science, Sichuan University, Chengdu 610065, China
Interests: biologically inspired vision and image processing

Special Issue Information

Dear Colleagues,

The brain's visual system is a complex and efficient image-processing system, and it is an important source of theory and technological innovation for computer vision. Brain-inspired and brain-imitating approaches are key breakthroughs in the theoretical innovation and technological revolution of the new generation of artificial intelligence. On the one hand, computational simulation helps to clarify or predict some of the information-processing mechanisms of the brain's visual system; on the other hand, it provides a series of new general-purpose computing models and common key technologies for many engineering applications centered on intelligent environment perception. This Biologically Inspired Vision and Image Processing (BIVIP) Special Issue welcomes original, unpublished contributions from authors. Topics include (but are not limited to):

  • Models for the neurons of various visual levels;
  • Neural coding and decoding of visual information;
  • Neural networks for local visual circuits;
  • Visual mechanism-inspired deep neural networks;
  • Visual models for image processing;
  • Visual mechanism-inspired models for computer vision applications;
  • Hardware implementations of visual models;
  • Artificial vision-related software and hardware;
  • Visual models for temporal information processing;
  • Receptive field-based models;
  • Biologically inspired novel spiking neural networks and optimization methods;
  • Visual dynamic information processing technology based on event cameras.


Dr. Shaobing Gao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual cognitive computing
  • brain simulation
  • computational neuroscience
  • biologically inspired computer vision
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


19 pages, 2028 KiB  
Article
Biologically Inspired Spatial–Temporal Perceiving Strategies for Spiking Neural Network
by Yu Zheng, Jingfeng Xue, Jing Liu and Yanjun Zhang
Biomimetics 2025, 10(1), 48; https://doi.org/10.3390/biomimetics10010048 - 14 Jan 2025
Cited by 1 | Viewed by 974
Abstract
A future unmanned system needs the ability to perceive, decide, and control in an open dynamic environment. To fulfill this requirement, it needs a method with a universal environmental perception ability. Moreover, this perceptual process needs to be interpretable and understandable, so that future interactions between unmanned systems and humans can be unimpeded. However, current mainstream DNN (deep neural network)-based AI (artificial intelligence) is a ‘black box’: we cannot interpret or understand how these AIs make their decisions. An SNN (spiking neural network), which is more similar to a biological brain than a DNN, has the potential to implement interpretable or understandable AI. In this work, we propose a neuron group-based structural learning method for an SNN to better capture spatial and temporal information from the external environment, and a time-slicing scheme to better interpret the spatial and temporal information of the responses generated by the SNN. The results show that our method indeed helps to enhance the environment perception ability of the SNN and possesses a certain degree of robustness, strengthening the potential to build an interpretable or understandable AI in the future.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2024)
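The core building block of any SNN is a spiking neuron model. As a generic illustration only (a minimal sketch of the standard leaky integrate-and-fire neuron with Euler integration, not the neuron group-based learning method proposed in the paper above):

```python
import numpy as np

def lif_simulate(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dV/dt = (-V + I) / tau.
    Emits a spike (1.0) and resets the membrane potential when it
    crosses the threshold. All parameter values are illustrative."""
    v = 0.0
    spikes = np.zeros_like(input_current)
    for t, i_t in enumerate(input_current):
        v += dt * (-v + i_t) / tau   # Euler step of the leak/drive dynamics
        if v >= v_thresh:
            spikes[t] = 1.0
            v = v_reset
    return spikes
```

A constant supra-threshold input makes the neuron fire periodically, while an input whose steady-state potential stays below `v_thresh` never produces a spike; timing (not just rate) of such spikes is what spatial-temporal SNN methods exploit.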

13 pages, 2200 KiB  
Article
Deep Neural Networks for Accurate Depth Estimation with Latent Space Features
by Siddiqui Muhammad Yasir and Hyunsik Ahn
Biomimetics 2024, 9(12), 747; https://doi.org/10.3390/biomimetics9120747 - 9 Dec 2024
Viewed by 1670
Abstract
Depth estimation plays a pivotal role in advancing human–robot interactions, especially in indoor environments where accurate 3D scene reconstruction is essential for tasks like navigation and object handling. Monocular depth estimation, which relies on a single RGB camera, offers a more affordable solution compared to traditional methods that use stereo cameras or LiDAR. However, despite recent progress, many monocular approaches struggle with accurately defining depth boundaries, leading to less precise reconstructions. In response to these challenges, this study introduces a novel depth estimation framework that leverages latent space features within a deep convolutional neural network to enhance the precision of monocular depth maps. The proposed model features a dual encoder–decoder architecture, enabling both color-to-depth and depth-to-depth transformations. This structure allows for refined depth estimation through latent space encoding. To further improve the accuracy of depth boundaries and local features, a new loss function is introduced. This function combines latent loss with gradient loss, helping the model maintain the integrity of depth boundaries. The framework is thoroughly tested using the NYU Depth V2 dataset, where it sets a new benchmark, particularly excelling in complex indoor scenarios. The results clearly show that this approach effectively reduces depth ambiguities and blurring, making it a promising solution for applications in human–robot interaction and 3D scene reconstruction.

Review


41 pages, 3369 KiB  
Review
Application of Event Cameras and Neuromorphic Computing to VSLAM: A Survey
by Sangay Tenzin, Alexander Rassau and Douglas Chai
Biomimetics 2024, 9(7), 444; https://doi.org/10.3390/biomimetics9070444 - 20 Jul 2024
Cited by 3 | Viewed by 3737
Abstract
Simultaneous Localization and Mapping (SLAM) is a crucial function for most autonomous systems, allowing them to both navigate through and create maps of unfamiliar surroundings. Traditional Visual SLAM, commonly known as VSLAM, relies on frame-based cameras and structured processing pipelines, which face challenges in dynamic or low-light environments. However, recent advancements in event camera technology and neuromorphic processing offer promising opportunities to overcome these limitations. Event cameras, inspired by biological vision systems, capture scenes asynchronously, consuming minimal power while providing high temporal resolution. Neuromorphic processors, which are designed to mimic the parallel processing capabilities of the human brain, offer efficient computation for real-time processing of event-based data streams. This paper provides a comprehensive overview of recent research efforts to integrate event cameras and neuromorphic processors into VSLAM systems. It discusses the principles behind event cameras and neuromorphic processors, highlighting their advantages over traditional sensing and processing methods. Furthermore, it surveys state-of-the-art approaches in event-based SLAM, including feature extraction, motion estimation, and map reconstruction techniques, and explores the integration of event cameras with neuromorphic processors, focusing on their synergistic benefits in terms of energy efficiency, robustness, and real-time performance. The paper also discusses the challenges and open research questions in this emerging field, such as sensor calibration, data fusion, and algorithmic development. Finally, potential applications and future directions for event-based SLAM systems are outlined, ranging from robotics and autonomous vehicles to augmented reality.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2024)
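As background for the survey above: a common first step when bridging asynchronous event streams and frame-based pipelines is to accumulate events into a signed event frame over a time window. The following is a minimal, generic sketch (the `events_to_frame` helper and its (x, y, t, polarity) layout are illustrative conventions, not code from any surveyed system):

```python
import numpy as np

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate an asynchronous event stream into a signed event frame.
    `events` is an array of (x, y, t, polarity) rows with polarity in {-1, +1};
    events with timestamps inside [t_start, t_end) are summed per pixel."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, p in events:
        if t_start <= t < t_end:
            frame[int(y), int(x)] += int(p)  # brightness-increase/decrease events cancel
    return frame
```

Such binned frames trade away the sensor's microsecond timing for compatibility with conventional vision pipelines; fully neuromorphic pipelines instead process the events asynchronously.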

33 pages, 9250 KiB  
Review
Biological Basis and Computer Vision Applications of Image Phase Congruency: A Comprehensive Survey
by Yibin Tian, Ming Wen, Dajiang Lu, Xiaopin Zhong and Zongze Wu
Biomimetics 2024, 9(7), 422; https://doi.org/10.3390/biomimetics9070422 - 10 Jul 2024
Cited by 1 | Viewed by 2408
Abstract
The concept of Image Phase Congruency (IPC) is deeply rooted in the way the human visual system interprets and processes spatial frequency information. It plays an important role in visual perception, influencing our capacity to identify objects, recognize textures, and decipher spatial relationships in our environments. IPC is robust to changes in lighting, contrast, and other variables that might modify the amplitude of light waves yet leave their relative phase unchanged. This characteristic is vital for perceptual tasks as it ensures the consistent detection of features regardless of fluctuations in illumination or other environmental factors. It can also impact cognitive and emotional responses; cohesive phase information across elements fosters a perception of unity or harmony, while inconsistencies can engender a sense of discord or tension. In this survey, we begin by examining the evidence from biological vision studies suggesting that IPC is employed by the human perceptual system. We proceed to outline the typical mathematical representation and different computational approaches to IPC. We then summarize the extensive applications of IPC in computer vision, including denoising, image quality assessment, feature detection and description, image segmentation, image registration, image fusion, and object detection, among other uses, and illustrate its advantages with a number of examples. Finally, we discuss the current challenges associated with the practical applications of IPC and potential avenues for enhancement.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2024)