Biomimetics and Bioinspired Artificial Intelligence Applications: 2nd Edition

A special issue of Biomimetics (ISSN 2313-7673). This special issue belongs to the section "Bioinspired Sensorics, Information Processing and Control".

Deadline for manuscript submissions: 31 May 2025 | Viewed by 2514

Special Issue Editors


Prof. Dr. Chaoran Cui
Guest Editor
School of Computer Science and Technology, Shandong University of Finance and Economics, No. 7366, East Second Ring Road, Yaojia Sub-District, Jinan 250014, China
Interests: machine learning; data mining; multimedia processing

Dr. Xiaohui Han
Guest Editor
Shandong Computer Science Center (National Supercomputer Center in Jinan), Shandong Provincial Key Laboratory of Computer Networks, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
Interests: machine learning; data mining; multimedia processing

Special Issue Information

Dear Colleagues,

Biomimetics studies living systems and seeks to transfer their properties to engineering applications, and it has profoundly influenced human technology. In recent decades, the integration of biomimetics and computing methods has produced notable results across a variety of artificial intelligence applications, including medical diagnosis, robotics, optimization, and pattern recognition. This Special Issue aims to explore how to design biomimetic machinery and material models that mimic the properties and structures of organisms, and to report the latest advances in bioinspired algorithms for artificial intelligence. We welcome original research, meta-analysis, and review articles covering (but not limited to) the following topics:

  • Biomimetics of materials and structures;
  • Biomimetic design, construction, and devices;
  • Bioinspired robotics and autonomous systems;
  • Applications of bioinspired methods in computer vision and signal processing;
  • Brain-inspired computing methods, e.g., neural networks and deep learning;
  • Swarm intelligence and collective behaviour, e.g., particle swarm optimization and ant colony optimization;
  • Evolutionary algorithms and optimization, e.g., genetic algorithms;
  • Adaptive and self-learning systems.

Prof. Dr. Chaoran Cui
Dr. Xiaohui Han
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomimetics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomimetics of materials and structures
  • biomimetic design, construction, and devices
  • bioinspired robotics and autonomous systems
  • applications of bioinspired methods in computer vision and signal processing
  • brain-inspired computing methods, e.g., neural networks and deep learning
  • swarm intelligence and collective behaviour, e.g., particle swarm optimization and ant colony optimization
  • evolutionary algorithms and optimization, e.g., genetic algorithms
  • adaptive and self-learning systems

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

31 pages, 6822 KiB  
Article
MHO: A Modified Hippopotamus Optimization Algorithm for Global Optimization and Engineering Design Problems
by Tao Han, Haiyan Wang, Tingting Li, Quanzeng Liu and Yourui Huang
Biomimetics 2025, 10(2), 90; https://doi.org/10.3390/biomimetics10020090 - 5 Feb 2025
Viewed by 1143
Abstract
The hippopotamus optimization algorithm (HO) is a novel metaheuristic algorithm that solves optimization problems by simulating the behavior of hippopotamuses. However, the traditional HO algorithm may encounter performance degradation and fall into local optima when dealing with complex global optimization and engineering design problems. In order to solve these problems, this paper proposes a modified hippopotamus optimization algorithm (MHO) to enhance the convergence speed and solution accuracy of the HO algorithm by introducing a sine chaotic map to initialize the population, changing the convergence factor in the growth mechanism, and incorporating the small-hole imaging reverse learning strategy. The MHO algorithm is tested on 23 benchmark functions and successfully solves three engineering design problems. According to the experimental data, the MHO algorithm obtains optimal performance on 13 of these functions and three design problems, exits the local optimum faster, and has better ordering and stability than the other nine metaheuristics. This study proposes the MHO algorithm, which offers fresh insights into practical engineering problems and parameter optimization.
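
The chaotic initialization step mentioned in the abstract is easy to illustrate in code. The sketch below shows one common way a sine chaotic map can seed a metaheuristic population; the map constant mu, the interval mapping, and the function name are illustrative assumptions and are not taken from the MHO paper.

```python
import numpy as np

def sine_chaotic_init(pop_size, dim, lower, upper, mu=1.0, seed=None):
    """Initialize a metaheuristic population with a sine chaotic map.

    Chaotic sequence: x_{k+1} = mu * sin(pi * x_k), kept in (0, 1) and then
    mapped linearly into [lower, upper]. The constant mu = 1.0 and the
    mapping are illustrative choices, not the paper's exact settings.
    """
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    x = rng.uniform(0.05, 0.95, size=dim)   # avoid the fixed point at 0
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = mu * np.sin(np.pi * x)           # chaotic update per dimension
        pop[i] = lower + np.abs(x) * (upper - lower)
    return pop

# Example: 30 candidates for a 5-dimensional problem in [-10, 10]^5
population = sine_chaotic_init(pop_size=30, dim=5, lower=-10.0, upper=10.0, seed=42)
print(population.shape)  # (30, 5)
```

Compared with plain uniform random initialization, a chaotic sequence tends to spread candidates more evenly over the search space, which is the usual motivation for this kind of seeding.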

19 pages, 954 KiB  
Article
Memory–Non-Linearity Trade-Off in Distance-Based Delay Networks
by Stefan Iacob and Joni Dambre
Biomimetics 2024, 9(12), 755; https://doi.org/10.3390/biomimetics9120755 - 11 Dec 2024
Viewed by 1017
Abstract
The performance of echo state networks (ESNs) in temporal pattern learning tasks depends both on their memory capacity (MC) and their non-linear processing. It has been shown that linear memory capacity is maximized when ESN neurons have linear activation, and that a trade-off between non-linearity and linear memory capacity is required for temporal pattern learning tasks. The more recent distance-based delay networks (DDNs) have shown improved memory capacity over ESNs in several benchmark temporal pattern learning tasks. However, it has not thus far been studied whether this increased memory capacity comes at the cost of reduced non-linear processing. In this paper, we advance the hypothesis that DDNs in fact achieve a better trade-off between linear MC and non-linearity than ESNs, by showing that DDNs can have strong non-linearity with large memory spans. We tested this hypothesis using the NARMA-30 task and the bitwise delayed XOR task, two commonly used reservoir benchmark tasks that require a high degree of both non-linearity and memory.
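
For readers new to reservoir computing, the minimal echo state network sketch below illustrates the trade-off the abstract refers to: the input scaling sigma drives the tanh reservoir between a nearly linear regime (favouring linear memory capacity) and a strongly saturating, non-linear one. All class and parameter names are illustrative assumptions; this is a generic ESN, not the distance-based delay network studied in the paper.

```python
import numpy as np

class SimpleESN:
    """Minimal echo state network for illustration only.

    Larger input scaling `sigma` pushes activations into the saturating
    region of tanh (more non-linear processing); smaller `sigma` keeps the
    reservoir nearly linear (more linear memory capacity).
    """

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, sigma=0.5,
                 leak=1.0, ridge=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = sigma * rng.uniform(-1, 1, size=(n_res, n_in))
        W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
        W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
        self.W, self.leak, self.ridge = W, leak, ridge

    def _states(self, U):
        # State is reset to zero on each call, which is enough for a sketch.
        x = np.zeros(self.W.shape[0])
        X = []
        for u in U:                                  # U has shape (T, n_in)
            pre = self.W_in @ u + self.W @ x
            x = (1 - self.leak) * x + self.leak * np.tanh(pre)
            X.append(x.copy())
        return np.array(X)

    def fit(self, U, Y):
        # Ridge-regression readout trained on the collected reservoir states.
        X = self._states(U)
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W_out = np.linalg.solve(A, X.T @ Y)
        return self

    def predict(self, U):
        return self._states(U) @ self.W_out

# Toy target needing both memory and non-linearity: y_t = sin(3 * u_{t-5})
rng = np.random.default_rng(1)
u = rng.uniform(0, 0.5, size=(2000, 1))
y = np.zeros(2000)
y[5:] = np.sin(3 * u[:-5, 0])
esn = SimpleESN(n_in=1, sigma=1.0).fit(u[:1500], y[:1500])
mse = np.mean((esn.predict(u[1500:]) - y[1500:]) ** 2)
print(f"test MSE: {mse:.4f}")
```

Sweeping sigma in a script like this is a simple way to observe the memory versus non-linearity trade-off that the paper quantifies for DDNs.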
