Biologically Inspired Vision and Image Processing 2025

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: closed (30 November 2025)

Special Issue Editor


Guest Editor
College of Computer Science, Sichuan University, Chengdu 610054, China
Interests: image processing; low-level vision; color constancy; illuminant estimation; biologically inspired computational vision; multimedia data modeling; recurrent neural network (RNN); long short-term memory (LSTM); tensor; convolution; deep learning

Special Issue Information

Dear Colleagues,

The brain's visual system is a complex and efficient image processing system, and it serves as an important source of inspiration for computer vision theory and technology. Brain-inspired and brain-imitating models represent important breakthroughs for theoretical and technological innovation in the new generation of artificial intelligence. On the one hand, computational simulation helps to clarify or predict some of the information processing mechanisms of the brain's visual system; on the other hand, it provides a series of new general-purpose computing models and key enabling technologies for engineering applications centered on intelligent environment perception. This Biologically Inspired Vision and Image Processing (BIVIP) Special Issue welcomes original, unpublished contributions in this field. Topics include (but are not limited to) the following:

  • Models for neurons of various visual levels;
  • Neural coding and decoding of visual information;
  • Neural networks for local visual circuits;
  • Visual mechanism-inspired deep neural networks;
  • Visual models for image processing;
  • Visual mechanism-inspired models for computer vision applications;
  • Hardware implementations of visual models;
  • Artificial vision-related software and hardware;
  • Visual models for temporal information processing;
  • Receptive field-based models;
  • Biologically inspired novel spiking neural networks and optimization methods;
  • Visual dynamic information processing technology based on event cameras.

Dr. Shaobing Gao
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • visual cognitive computing
  • brain simulation
  • computational neuroscience
  • biologically inspired computer vision
  • artificial intelligence

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

17 pages, 7561 KB  
Article
Fine-Grained Image Recognition with Bio-Inspired Gradient-Aware Attention
by Bing Ma, Junyi Li, Zhengbei Jin, Wei Zhang, Xiaohui Song and Beibei Jin
Biomimetics 2025, 10(12), 834; https://doi.org/10.3390/biomimetics10120834 - 12 Dec 2025
Abstract
Fine-grained image recognition is one of the key tasks in computer vision. However, due to subtle inter-class differences and significant intra-class variation, it still faces severe challenges. Conventional approaches often struggle with background interference and feature degradation. To address these issues, we draw inspiration from the human visual system, which adeptly focuses on discriminative regions, and propose a bio-inspired gradient-aware attention mechanism. Our method explicitly models gradient information to guide the attention, mimicking biological edge sensitivity, thereby enhancing the discrimination between global structures and local details. Experiments on the CUB-200-2011, iNaturalist 2018, NABirds, and Stanford Cars datasets demonstrate the superiority of our method, achieving Top-1 accuracy rates of 92.9%, 90.5%, 93.1%, and 95.1%, respectively.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2025)
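The authors' implementation is not reproduced on this page. Purely as a hypothetical sketch of the general idea behind gradient-aware spatial attention (the Sobel kernels, the normalization, and the residual reweighting below are all illustrative assumptions, not the paper's design), edge strength can be turned into a spatial attention map that boosts high-gradient regions:

```python
import numpy as np

def sobel_gradients(x):
    """Approximate horizontal/vertical gradients of a 2-D map with 3x3 Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    xp = np.pad(x, 1, mode="edge")            # edge-pad so output matches input size
    h, w = x.shape
    gx = np.zeros((h, w), dtype=np.float32)
    gy = np.zeros((h, w), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            patch = xp[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return gx, gy

def gradient_aware_attention(feat):
    """Reweight a 2-D feature map by its normalized gradient magnitude,
    loosely mimicking biological edge sensitivity."""
    gx, gy = sobel_gradients(feat)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    attn = mag / (mag.max() + 1e-8)           # spatial attention in [0, 1]
    return feat * (1.0 + attn)                # residual reweighting keeps smooth regions intact
```

In a real network the attention would act on learned feature maps inside a CNN or transformer block; this NumPy version only illustrates how gradient magnitude can bias attention toward discriminative, edge-rich regions.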

20 pages, 8688 KB  
Article
DE-YOLOv13-S: Research on a Biomimetic Vision-Based Model for Yield Detection of Yunnan Large-Leaf Tea Trees
by Shihao Zhang, Xiaoxue Guo, Meng Tan, Chunhua Yang, Zejun Wang, Gongming Li and Baijuan Wang
Biomimetics 2025, 10(11), 724; https://doi.org/10.3390/biomimetics10110724 - 30 Oct 2025
Abstract
To address the challenges of variable target scale, complex backgrounds, blurred images, and severe occlusion in yield detection for Yunnan large-leaf tea trees, this study proposes DE-YOLOv13-S, a deep learning network that integrates visual mechanisms of primates. DynamicConv is used to emulate the dynamic adjustment of effective receptive fields and channel gain in the primate visual system. Efficient Mixed-pooling Channel Attention is introduced to simulate the primate visual system's observation strategy of parallel global gain control and selective integration. A Scale-based Dynamic Loss simulates the primate foveation mechanism, significantly improving the localization accuracy and robustness of Yunnan large-leaf tea tree yield detection. The results show that the Box Loss, Cls Loss, and DFL Loss of the DE-YOLOv13-S network decreased by 18.75%, 3.70%, and 2.54% on the training set, and by 18.48%, 14.29%, and 7.46% on the test set, respectively. Compared with YOLOv13, its parameters and gradients increase by only 2.06 M, while computational complexity is reduced by 0.2 GFLOPs; precision, recall, and mAP increase by 3.78%, 2.04%, and 3.35%, respectively. The improved DE-YOLOv13-S network not only provides an efficient and stable yield detection solution for the intelligent management and high-quality development of tea gardens, but also provides solid technical support for the deep integration of bionic vision and agricultural remote sensing.
(This article belongs to the Special Issue Biologically Inspired Vision and Image Processing 2025)
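The paper's Efficient Mixed-pooling Channel Attention module is not specified on this page. As a rough, hypothetical illustration of the mixed-pooling idea it names (the blending coefficient `alpha`, the sigmoid gating, and the absence of learned parameters are all assumptions of this sketch, not the authors' method), average pooling can stand in for global gain control and max pooling for selective integration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mixed_pooling_channel_attention(feat, alpha=0.5):
    """Illustrative channel attention over a (C, H, W) feature map:
    average pooling approximates global gain control, max pooling
    approximates selective integration, and alpha blends the two."""
    avg = feat.mean(axis=(1, 2))                         # global gain per channel
    mx = feat.max(axis=(1, 2))                           # most salient response per channel
    weights = sigmoid(alpha * avg + (1.0 - alpha) * mx)  # channel gates in (0, 1)
    return feat * weights[:, None, None]                 # broadcast gates over H, W
```

In a trained detector the gating would involve learned weights (e.g., a small shared MLP over the pooled descriptors); this sketch only shows how the two pooling statistics are combined into per-channel gates.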
