Bio-Inspired Signal Processing on Image and Audio Data

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: 31 July 2026

Special Issue Editors

Guest Editor
Intelligent Media & Recognition Lab, Seoul National University of Science and Technology, Seoul, Republic of Korea
Interests: image and signal processing; biometrics; artificial intelligence; machine learning

Guest Editor
The School of Games, Arts, Media, and Engineering (GAME) and Electrical, Computer, and Energy Engineering (ECEE), Arizona State University, Tempe, AZ, USA
Interests: computer vision; imaging; signal processing; machine learning

Guest Editor
1. Yang Laboratory, Neurosurgery Research, Barrow Neurological Institute, Phoenix, AZ, USA
2. School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
Interests: human electrophysiology; systems/cognitive/computational neuroscience; invasive neuromodulation

Special Issue Information

Dear Colleagues,

Bio-inspired signal processing, grounded in evolutionary computation, cellular automata, and neuromorphic computing, provides powerful paradigms for solving complex and uncertain real-world signal processing problems. By simulating the collective behaviors of simple biological agents and leveraging parallel, self-organizing, and hybrid optimization mechanisms, these approaches enable adaptive, robust, and energy-efficient intelligent systems.

Inspired by biological perception and neural computation, bio-inspired models exhibit complex spatiotemporal dynamics, distributed representations of information, and event-driven processing, while supporting effective cooperation and integration across multimodal signals. Such mechanisms enhance optimization capability, interpretability, and scalability in processing images, audio, and time series data under dynamic and resource-constrained environments.

In this context, this Special Issue aims to present diverse bio-inspired approaches in signal processing. Potential topics include, but are not limited to, the following:

  1. Genetic and evolutionary algorithms for adaptive optimization, broadly applied to image, audio, and other time-series signal processing in complex and uncertain environments.
  2. Cellular automata-based models for distributed, parallel, and self-organizing representation learning, enabling formation of complex spatiotemporal patterns and robust signal analysis.
  3. Neuromorphic/neuro-inspired computing and spiking neural network models for energy-efficient, event-driven processing of sensory signals, supporting real-time, low-power intelligent systems.
  4. Bio-inspired mechanisms for integrating heterogeneous sensory signals, enabling cooperative multimodal perception and improved interpretability in complex naturalistic environments.
  5. Interpretable bio-inspired signal processing frameworks that enhance robustness, scalability, and reliability through biologically grounded structures and functional dynamics.
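To make topic 1 concrete, the following minimal sketch (our illustration, not part of the call; all parameters are arbitrary) evolves the taps of a short FIR filter with a toy genetic algorithm so that the filtered output of a noisy sine wave approaches the clean target:

```python
import math
import random

random.seed(0)

N_TAPS, POP, GENS = 5, 40, 60

# Toy data: a 2 Hz sine sampled at 100 Hz, corrupted with Gaussian noise.
t = [i / 100.0 for i in range(200)]
clean = [math.sin(2 * math.pi * 2.0 * x) for x in t]
noisy = [c + random.gauss(0, 0.3) for c in clean]

def apply_fir(taps, signal):
    # Causal convolution of the signal with the filter taps.
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(taps):
            if i - j >= 0:
                acc += w * signal[i - j]
        out.append(acc)
    return out

def fitness(taps):
    # Negative mean squared error against the clean target.
    est = apply_fir(taps, noisy)
    return -sum((e - c) ** 2 for e, c in zip(est, clean)) / len(clean)

def mutate(taps, sigma=0.05):
    return [w + random.gauss(0, sigma) for w in taps]

def crossover(a, b):
    cut = random.randrange(1, N_TAPS)
    return a[:cut] + b[cut:]

# Evolve: truncation selection with elitism, then crossover + mutation.
pop = [[random.uniform(-0.5, 0.5) for _ in range(N_TAPS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]
    children = []
    while len(elite) + len(children) < POP:
        a, b = random.sample(elite, 2)
        children.append(mutate(crossover(a, b)))
    pop = elite + children

best = max(pop, key=fitness)
noisy_mse = sum((n - c) ** 2 for n, c in zip(noisy, clean)) / len(clean)
best_mse = -fitness(best)
print(f"noisy MSE {noisy_mse:.4f} -> filtered MSE {best_mse:.4f}")
```

The same loop structure carries over to richer encodings (e.g., IIR coefficients or wavelet thresholds); only the genome and fitness function change.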

Dr. Eunsom Jeon
Dr. Pavan Turaga
Dr. Andrew Yang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial neural networks
  • genetic and evolutionary algorithms
  • cellular automata
  • neuromorphic computing
  • event-driven processing
  • information fusion
  • signal processing
  • human-centric processing
  • explainable AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found on the journal website.

Published Papers (1 paper)


Research

20 pages, 34702 KB  
Article
rePPG: Relighting Photoplethysmography Signal to Video
by Seunghyun Kim, Yeongje Park, Byeongseon An and Eui Chul Lee
Biomimetics 2026, 11(4), 230; https://doi.org/10.3390/biomimetics11040230 - 1 Apr 2026
Abstract
Remote photoplethysmography (rPPG) extracts physiological signals from facial videos by analyzing subtle skin color variations caused by blood flow. While this technology enables contactless health monitoring, it also raises privacy concerns because facial videos reveal both identity and sensitive biometric information. Existing privacy-preserving techniques, such as blurring or pixelation, degrade visual quality and are unsuitable for practical rPPG applications. This paper presents rePPG, a framework that inserts a desired rPPG signal into facial videos while preserving the original facial appearance. The proposed method disentangles facial appearance and physiological features, enabling replacement of the physiological signal without altering facial identity or visual quality. Skin segmentation restricts modifications to skin regions, and a cycle-consistency mechanism ensures that the injected rPPG signal can be reliably recovered from the generated video. Importantly, the extracted rPPG signals are evaluated against the injected target physiological signals rather than the subject’s original physiological state, ensuring that the evaluation measures signal rewriting accuracy. Experiments on the PURE and UBFC datasets show that rePPG successfully embeds target PPG signals, achieving 1.10 BPM MAE and 95.00% PTE6 on PURE while preserving visual quality (PSNR 24.61 dB, SSIM 0.638). Heart rate metrics are computed using a 5-second temporal window to ensure a consistent evaluation protocol.
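As background to the abstract, the core rPPG principle it mentions — recovering a pulse rate from subtle skin-color variations — can be sketched as follows. This is an illustrative toy, not the paper's method: it simulates a per-frame mean green-channel trace and locates the dominant frequency in the plausible heart-rate band with a naive DFT.

```python
import math

FPS = 30.0
HR_BPM = 72.0  # simulated pulse rate (1.2 Hz)

# Simulate 10 s of a mean-green trace: slow illumination drift + small pulse.
n = int(10 * FPS)
trace = [0.5 * math.sin(2 * math.pi * 0.1 * i / FPS)                 # drift
         + 0.05 * math.sin(2 * math.pi * (HR_BPM / 60) * i / FPS)    # pulse
         for i in range(n)]

def detrend(x, win=31):
    # Subtract a centered moving average to suppress slow drift.
    half = win // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(x[i] - sum(x[lo:hi]) / (hi - lo))
    return out

def dominant_bpm(x, fps, lo_hz=0.7, hi_hz=3.0):
    # Naive DFT peak search restricted to the plausible heart-rate band.
    n = len(x)
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if lo_hz <= f <= hi_hz:
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            p = re * re + im * im
            if p > best_p:
                best_f, best_p = f, p
    return best_f * 60.0

bpm = dominant_bpm(detrend(trace), FPS)
print(f"estimated heart rate: {bpm:.1f} BPM")
```

Real pipelines (including rePPG's evaluation) operate on skin-segmented video with learned or chrominance-based signal extraction, but the band-limited spectral peak search above is the common final step for heart-rate readout.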
(This article belongs to the Special Issue Bio-Inspired Signal Processing on Image and Audio Data)
