Search Results (224)

Search Parameters:
Keywords = auto-tune

21 pages, 2949 KiB  
Article
Memetic Optimization of Wastewater Pumping Systems for Energy Efficiency: AI Optimization in a Simulation-Based Framework for Sustainable Operations Management
by Agostino G. Bruzzone, Marco Gotelli, Marina Massei, Xhulia Sina, Antonio Giovannetti, Filippo Ghisi and Luca Cirillo
Sustainability 2025, 17(14), 6296; https://doi.org/10.3390/su17146296 - 9 Jul 2025
Viewed by 245
Abstract
This study investigates the integration of advanced optimization algorithms within energy-intensive infrastructures and industrial plants. In fact, the authors focus on the dynamic interplay between computational intelligence and operational efficiency in wastewater treatment plants (WWTPs). In this context, energy optimization is thought of as a hybrid process that emerges at the intersection of engineered systems, environmental dynamics, and operational constraints. Despite the known energy-intensive nature of WWTPs, where pumps and blowers consume over 60% of total power, current methods lack systematic, real-time adaptability under variable conditions. To address this gap, the study proposes a computational framework that combines hydraulic simulation, manufacturer-based performance mapping, and a Memetic Algorithm (MA) capable of real-time optimization. The methodology synthesizes dynamic flow allocation, auto-tuning mutation, and step-by-step improvement search into a cohesive simulation environment, applied to a representative parallel-pump system. The MA’s dual capacity to explore global configurations and refine local adjustments reflects both static and kinetic aspects of optimization: the former grounded in physical system constraints, the latter shaped by fluctuating operational demands. Experimental results across several stochastic scenarios demonstrate consistent power savings (12.13%) over conventional control strategies. By bridging simulation modeling with optimization under uncertainty, this study contributes to sustainable operations management, offering a replicable, data-driven tool for advancing energy efficiency in infrastructure systems. Full article
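The abstract describes the core loop of a memetic optimizer: population-level exploration, local refinement of each candidate, and a mutation step that auto-tunes itself from recent success. As a rough illustration only (not the authors' simulation framework), the Python sketch below applies that pattern to a hypothetical three-pump allocation problem with an assumed quadratic power model and a fixed total-flow demand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical power model for 3 parallel pumps: P_i(q) = a_i*q^2 + b_i*q (kW).
A = np.array([0.08, 0.05, 0.10])
B = np.array([1.2, 1.5, 0.9])
DEMAND = 120.0  # total flow to deliver (L/s), assumed

def power(q):
    q = np.maximum(q, 0.0)
    return float(np.sum(A * q**2 + B * q))

def repair(q):
    # Scale flows so they meet the demand exactly (simple feasibility repair).
    q = np.maximum(q, 1e-6)
    return q * (DEMAND / q.sum())

def local_search(q, step=1.0, iters=20):
    # Greedy pairwise reallocation: shift flow between two pumps if power drops.
    best, best_p = q.copy(), power(q)
    for _ in range(iters):
        i, j = rng.choice(len(q), size=2, replace=False)
        cand = best.copy()
        cand[i] += step
        cand[j] -= step
        cand = repair(cand)
        if (p := power(cand)) < best_p:
            best, best_p = cand, p
    return best

def memetic(pop_size=30, gens=100):
    pop = [repair(rng.uniform(10, 60, size=3)) for _ in range(pop_size)]
    sigma = 5.0  # mutation step, auto-tuned from the observed success rate
    for _ in range(gens):
        pop.sort(key=power)
        elite = pop[: pop_size // 2]
        children, successes = [], 0
        for parent in elite:
            child = repair(parent + rng.normal(0.0, sigma, size=3))
            child = local_search(child)  # memetic refinement of each offspring
            if power(child) < power(parent):
                successes += 1
            children.append(child)
        # 1/5th-success-style auto-tuning of the mutation step.
        sigma *= 1.2 if successes / len(elite) > 0.2 else 0.85
        pop = elite + children
    return min(pop, key=power)

best = memetic()
print("flows:", np.round(best, 2), "power:", round(power(best), 2), "kW")
```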

25 pages, 7504 KiB  
Article
Explainable Artificial Intelligence (XAI) for Flood Susceptibility Assessment in Seoul: Leveraging Evolutionary and Bayesian AutoML Optimization
by Kounghoon Nam, Youngkyu Lee, Sungsu Lee, Sungyoon Kim and Shuai Zhang
Remote Sens. 2025, 17(13), 2244; https://doi.org/10.3390/rs17132244 - 30 Jun 2025
Viewed by 377
Abstract
This study aims to enhance the accuracy and interpretability of flood susceptibility mapping (FSM) in Seoul, South Korea, by integrating automated machine learning (AutoML) with explainable artificial intelligence (XAI) techniques. Ten topographic and environmental conditioning factors were selected as model inputs. We first employed the Tree-based Pipeline Optimization Tool (TPOT), an evolutionary AutoML algorithm, to construct baseline ensemble models using Gradient Boosting (GB), Random Forest (RF), and XGBoost (XGB). These models were further fine-tuned using Bayesian optimization via Optuna. To interpret the model outcomes, SHAP (SHapley Additive exPlanations) was applied to analyze both the global and local contributions of each factor. The SHAP analysis revealed that lower elevation, slope, and stream distance, as well as higher stream density and built-up areas, were the most influential factors contributing to flood susceptibility. Moreover, interactions between these factors, such as built-up areas located on gentle slopes near streams, further intensified flood risk. The susceptibility maps were reclassified into five categories (very low to very high), and the GB model identified that approximately 15.047% of the study area falls under very-high-flood-risk zones. Among the models, the GB classifier achieved the highest performance, followed by XGB and RF. The proposed framework, which integrates TPOT, Optuna, and SHAP within an XAI pipeline, not only improves predictive capability but also offers transparent insights into feature behavior and model logic. These findings support more robust and interpretable flood risk assessments for effective disaster management in urban areas. Full article
(This article belongs to the Special Issue Artificial Intelligence for Natural Hazards (AI4NH))
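Because the abstract names a concrete toolchain (TPOT for evolutionary pipeline search, Optuna for Bayesian fine-tuning, SHAP for interpretation), a compressed end-to-end sketch of that toolchain may be useful. The synthetic ten-feature dataset, the Gradient Boosting search ranges, and the trial counts below are illustrative assumptions, not the paper's configuration.

```python
import optuna
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, train_test_split
from tpot import TPOTClassifier

# Synthetic stand-in for the ten conditioning factors (real inputs are raster-derived).
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# 1) Evolutionary AutoML baseline: TPOT searches over candidate pipelines.
tpot = TPOTClassifier(generations=3, population_size=20, cv=5, random_state=42)
tpot.fit(X_tr, y_tr)
print("TPOT test accuracy:", tpot.score(X_te, y_te))

# 2) Bayesian fine-tuning with Optuna (here for a Gradient Boosting classifier).
def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 500),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "max_depth": trial.suggest_int("max_depth", 2, 8),
    }
    model = GradientBoostingClassifier(random_state=42, **params)
    return cross_val_score(model, X_tr, y_tr, cv=5, scoring="roc_auc").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
gb = GradientBoostingClassifier(random_state=42, **study.best_params).fit(X_tr, y_tr)

# 3) SHAP for global and local interpretation of the tuned model.
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te, show=False)
```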

64 pages, 4356 KiB  
Article
Auto-Tuning Memory-Based Adaptive Local Search Gaining–Sharing Knowledge-Based Algorithm for Solving Optimization Problems
by Nawaf Mijbel Alfadli, Eman Mostafa Oun and Ali Wagdy Mohamed
Algorithms 2025, 18(7), 398; https://doi.org/10.3390/a18070398 - 28 Jun 2025
Viewed by 275
Abstract
The Gaining–Sharing Knowledge-based (GSK) algorithm is a human-inspired metaheuristic that models how people learn and disseminate knowledge across their lifetime. It has shown promising results across a range of engineering optimization problems. However, one of its major limitations lies in the use of fixed parameters to guide the search process, which often causes the algorithm to get stuck in local optima. To address this challenge, we propose an Auto-Tuning Memory-based Adaptive Local Search (ATMALS) empowered GSK, that is, ATMALS-GSK. This enhanced version of GSK introduces two key improvements: adaptive local search and memory-driven automatic tuning of parameters. Rather than relying on fixed values, ATMALS-GSK continuously adjusts its parameters during the optimization process. This is achieved through a Gaussian distribution mechanism that iteratively updates the likelihood of selecting different parameter values based on their historical impact on the fitness function. This selection process is guided by a weighted moving average that tracks each parameter’s contribution to fitness improvement over time. To further reduce the risk of premature convergence, an adaptive local search strategy is embedded, facilitating the algorithm’s escape from local traps and guiding it toward more optimal regions within the search domain. To validate the effectiveness of the ATMALS-GSK algorithm, it is evaluated on the CEC 2011 and CEC 2017 benchmarks. The results indicate that the ATMALS-GSK algorithm outperforms the original GSK, its variants, and other metaheuristics by delivering greater robustness, quicker convergence, and superior solution quality. Full article
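The key mechanism described here is memory-driven parameter auto-tuning: candidate parameter values are drawn with probabilities that grow with their remembered contribution to fitness, tracked by a weighted moving average. The sketch below illustrates that general idea on a toy sphere function with Gaussian perturbations; it is a simplified stand-in, not the ATMALS-GSK implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate values for one algorithm parameter (assumed grid, e.g. a mutation scale).
candidates = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
memory = np.ones_like(candidates)   # credit memory, one slot per candidate value
alpha = 0.3                         # weighted-moving-average factor

def sphere(x):
    return float(np.sum(x**2))

x = rng.uniform(-5, 5, size=10)
best = sphere(x)

for step in range(200):
    # Selection probability grows with each candidate's remembered contribution.
    probs = memory / memory.sum()
    idx = rng.choice(len(candidates), p=probs)
    sigma = candidates[idx]

    # Gaussian perturbation scaled by the selected parameter value.
    trial = x + rng.normal(0.0, sigma, size=x.shape)
    improvement = max(best - sphere(trial), 0.0)
    if improvement > 0:
        x, best = trial, sphere(trial)

    # Weighted moving average: recent successes weigh more than old ones.
    memory[idx] = (1 - alpha) * memory[idx] + alpha * (1.0 + improvement)

print("best sphere value:", round(best, 4))
```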

14 pages, 1853 KiB  
Article
Effective Breast Cancer Classification Using Deep MLP, Feature-Fused Autoencoder and Weight-Tuned Decision Tree
by Nagham Rasheed Hameed Alsaedi and Mehmet Fatih Akay
Appl. Sci. 2025, 15(13), 7213; https://doi.org/10.3390/app15137213 - 26 Jun 2025
Viewed by 246
Abstract
Breast cancer remains a leading cause of death among women worldwide, underscoring the urgent need for practical diagnostic tools. This paper presents an advanced machine learning algorithm designed to improve classification accuracy in breast cancer diagnosis. The system integrates a deep multi-layer perceptron (Deep MLP) for feature extraction, a feature-fused autoencoder for efficient dimensionality reduction, and a weight-tuned decision-tree classifier optimized via cross-validation and square weight adjustment. The proposed method was rigorously tested on the Wisconsin breast cancer dataset, employing k-fold cross-validation to ensure robustness and generalizability. Key performance indicators, including accuracy, precision, recall, F1-score, and area under the curve (AUC), were used to evaluate the model's ability to distinguish between malignant and benign tumors. Our results suggest that this combination model outperforms traditional classification methods, with high accuracy and robust performance across data partitions. The main contribution of this research is the development of a new framework combining deep learning, an autoencoder, and a decision tree; the results show that this system has strong potential to improve breast cancer diagnosis, offering physicians a reliable and effective tool. Full article
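A hedged approximation of the described pipeline (feature scaling, an autoencoder-style reduction, then a cross-validated, class-weight-tuned decision tree) can be sketched with scikit-learn on the Wisconsin dataset named in the abstract. The 8-unit bottleneck, the class-weight grid, and the use of MLPRegressor as the autoencoder are assumptions for illustration, not the authors' architecture.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Wisconsin breast cancer data (the dataset named in the abstract).
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

# Autoencoder stand-in: an MLP trained to reconstruct its input; the single hidden
# layer (8 units) provides the reduced representation.
ae = MLPRegressor(hidden_layer_sizes=(8,), activation="relu", max_iter=2000, random_state=0)
ae.fit(X_tr, X_tr)

def encode(data):
    # Hidden-layer activations (ReLU) of the trained autoencoder.
    return np.maximum(data @ ae.coefs_[0] + ae.intercepts_[0], 0.0)

# Decision tree with cross-validated depth and class-weight tuning, a simple proxy
# for a weight-tuned tree.
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [3, 5, 7, None],
     "class_weight": [None, "balanced", {0: 2, 1: 1}, {0: 4, 1: 1}]},
    cv=5, scoring="f1")
grid.fit(encode(X_tr), y_tr)
print("CV F1:", round(grid.best_score_, 3), "test F1:", round(grid.score(encode(X_te), y_te), 3))
```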

29 pages, 13225 KiB  
Review
Tuneable Lenses Driven by Dielectric Elastomers: Principles, Structures, Applications, and Challenges
by Zhuoqun Hu, Meng Zhang, Zihao Gan, Jianming Lv, Zhuoyu Lin and Huajie Hong
Appl. Sci. 2025, 15(12), 6926; https://doi.org/10.3390/app15126926 - 19 Jun 2025
Viewed by 297
Abstract
As the core element of adaptive optical systems, tuneable lenses are essential in adaptive optics. Dielectric elastomer-driven tuneable lenses offer significant advantages in tuning range, response speed, and lightweight design compared to traditional mechanical zoom lenses. This paper systematically reviews the working mechanisms and research advancements of these lenses. Firstly, based on the two driving modes of deformation zoom and displacement zoom, the tuning principle of dielectric elastomer-driven tuneable lenses is analysed in depth. Secondly, the design methodology and current status of the research are systematically elaborated for four typical structures: monolithic, composite, array, and metalenses. Finally, the potential applications of this technology are discussed in the fields of auto-zoom imaging, microscopic imaging, augmented reality display, and infrared imaging, along with an analysis of the key technological challenges faced by this technology, such as material properties, modelling and control, preparation processes, and optical performance. This paper aims to provide a systematic reference for researchers in this field and to help promote the engineering application of dielectric elastomer tuneable lens technology. Full article
(This article belongs to the Section Optics and Lasers)

20 pages, 1262 KiB  
Article
NiaAutoARM: Automated Framework for Constructing and Evaluating Association Rule Mining Pipelines
by Uroš Mlakar, Iztok Fister and Iztok Fister
Mathematics 2025, 13(12), 1957; https://doi.org/10.3390/math13121957 - 13 Jun 2025
Viewed by 270
Abstract
Numerical Association Rule Mining (NARM), which simultaneously handles both numerical and categorical attributes, is a powerful approach for uncovering meaningful associations in heterogeneous datasets. However, designing effective NARM solutions is a complex task involving multiple sequential steps, such as data preprocessing, algorithm selection, hyper-parameter tuning, and the definition of rule quality metrics, which together form a complete processing pipeline. In this paper, we introduce NiaAutoARM, a novel Automated Machine Learning (AutoML) framework that leverages stochastic population-based metaheuristics to automatically construct full association rule mining pipelines. Extensive experimental evaluation on ten benchmark datasets demonstrated that NiaAutoARM consistently identifies high-quality pipelines, improving both rule accuracy and interpretability compared to baseline configurations. Furthermore, NiaAutoARM achieves superior or comparable performance to the state-of-the-art VARDE algorithm while offering greater flexibility and automation. These results highlight the framework’s practical value for automating NARM tasks, reducing the need for manual tuning, and enabling broader adoption of association rule mining in real-world applications. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)

21 pages, 1914 KiB  
Article
Robust Enhanced Auto-Tuning of PID Controllers for Optimal Quality Control of Cement Raw Mix via Neural Networks
by Dimitris Tsamatsoulis
ChemEngineering 2025, 9(3), 52; https://doi.org/10.3390/chemengineering9030052 - 20 May 2025
Viewed by 1042
Abstract
Ensuring efficient long-term quality control of the raw mix remains a priority for the cement industry, supporting initiatives to lower the CO2 footprint by incorporating significant amounts of alternative fuels and raw materials in clinker production. This study presents an effective method for creating a robust auto-tuner for proportional–integral–differential (PID) controller control of the lime saturation factor (LSF) of the raw mix using artificial neural networks (ANNs). This auto-tuner, combined with a previously studied robust PID controller, forms an integrated system that adapts to process changes and maintains low long-term variance in LSF. The ANN links each of the three PID gains to the process dynamic parameters, with the three ANNs also interconnected. We employed the Levenberg–Marquardt method to optimize the ANNs’ synaptic weights and applied the weight decay method to prevent overfitting. The industrial implementation of our control system, using the auto-tuner for 16,800 h of raw mill operation, shows an average LSF standard deviation of 2.5, with fewer than 10% of the datasets exceeding a standard deviation of 3.5. Considering that the measurement reproducibility is 1.44 and assuming a low mixing ratio of the raw meal in the silo equal to 2, the LSF standard deviation in the kiln feed approaches the analysis reproducibility, indicating that disturbances in the raw meal largely diminish in the kiln feed. In conclusion, integrating traditional, well-established tools like PID controllers with newer advanced techniques, such as ANNs, can yield innovative solutions. Full article
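The auto-tuner maps identified process dynamics to PID gains through small interconnected ANNs trained with Levenberg–Marquardt and weight decay. The sketch below shows the general pattern only: a scikit-learn MLP with an L2 penalty (standing in for weight decay) and LBFGS training (standing in for Levenberg–Marquardt) learns a SIMC-style tuning rule on synthetic (gain, time constant, dead time) triples. The parameter ranges and the rule itself are assumptions, not the paper's cement-mill model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical training set: process dynamics (gain K, time constant T, dead time L)
# sampled over plausible ranges; targets are PID gains from a SIMC-style tuning rule.
n = 2000
K = rng.uniform(0.5, 3.0, n)
T = rng.uniform(5.0, 60.0, n)
L = rng.uniform(1.0, 10.0, n)
X = np.column_stack([K, T, L])

tau_c = L                               # closed-loop time-constant choice (assumption)
Kp = T / (K * (tau_c + L))
Ti = np.minimum(T, 4.0 * (tau_c + L))
Td = np.zeros(n)                        # PI form of the rule; derivative gain left at zero
Y = np.column_stack([Kp, Ti, Td])

# L2 penalty (alpha) plays the role of weight decay; LBFGS stands in for the
# Levenberg-Marquardt training used in the paper.
ann = MLPRegressor(hidden_layer_sizes=(16, 16), alpha=1e-3,
                   solver="lbfgs", max_iter=2000, random_state=0)
ann.fit(X, Y)

# Auto-tuning step: identified dynamics in -> controller gains out.
identified = np.array([[1.8, 30.0, 4.0]])
kp, ti, td = ann.predict(identified)[0]
print(f"Kp={kp:.3f}  Ti={ti:.2f}  Td={td:.2f}")
```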

21 pages, 3304 KiB  
Article
Personalised Fractional-Order Autotuner for the Maintenance Phase of Anaesthesia Using Sine-Tests
by Marcian D. Mihai, Isabela R. Birs, Nicoleta E. Badau, Erwin T. Hegedus, Amani Ynineb and Cristina I. Muresan
Fractal Fract. 2025, 9(5), 317; https://doi.org/10.3390/fractalfract9050317 - 15 May 2025
Viewed by 317
Abstract
The research field of clinical practice has experienced a substantial increase in the integration of information technology and control engineering, which includes the management of medication administration for general anaesthesia. The invasive nature of input signals is the reason why autotuning methods are not widely used in this research field. This study proposes a non-invasive method using small-amplitude sine tests to estimate patient parameters, which allows the design of a personalised controller using an autotuning principle. The primary objective is to regulate the Bispectral Index through the administration of Propofol during the maintenance phase of anaesthesia, using a personalised fractional-order PID. This work aims to demonstrate the effectiveness of personalised control, which is facilitated by the proposed sine-based method. The closed-loop simulation results demonstrate the efficiency of the proposed approach. Full article
(This article belongs to the Special Issue Fractional Mathematical Modelling: Theory, Methods and Applications)
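The non-invasive part of the method, estimating process parameters from a small-amplitude sine test, can be illustrated with a lock-in (correlation) estimate of the frequency response at the test frequency, followed by a controller rule. The sketch below uses a first-order plant and an integer-order PI rule purely for illustration; the paper's patient models, Bispectral Index loop, and fractional-order PID design are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated small-amplitude sine test: input u = amp*sin(w*t) applied to a first-order
# plant (assumed gain 2, time constant 8 s); the steady-state response stands in for data.
K_true, tau, amp, w = 2.0, 8.0, 0.05, 0.2            # w: test frequency in rad/s
t = np.arange(0.0, 10 * 2 * np.pi / w, 0.1)          # roughly ten full test periods
gain = K_true / np.sqrt(1.0 + (w * tau) ** 2)
phase = -np.arctan(w * tau)
y = amp * gain * np.sin(w * t + phase) + 0.002 * rng.normal(size=t.size)  # "measured" output

# Lock-in (correlation) estimate of the frequency response at the test frequency.
in_phase = 2.0 / t.size * np.sum(y * np.sin(w * t))
quadrature = 2.0 / t.size * np.sum(y * np.cos(w * t))
G = (in_phase + 1j * quadrature) / amp
print("estimated |G| and phase (deg):", round(abs(G), 3), round(np.degrees(np.angle(G)), 1))

# Fit a first-order model to the single-frequency estimate, then apply a PI rule.
tau_hat = np.tan(-np.angle(G)) / w
K_hat = abs(G) * np.sqrt(1.0 + (w * tau_hat) ** 2)
tau_c = 5.0                                          # desired closed-loop time constant (assumed)
Kp = tau_hat / (K_hat * tau_c)
Ti = min(tau_hat, 4.0 * tau_c)
print(f"personalised PI gains: Kp={Kp:.3f}, Ti={Ti:.2f} s")
```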

13 pages, 510 KiB  
Article
A Comparative Analysis of Student Performance Prediction: Evaluating Optimized Deep Learning Ensembles Against Semi-Supervised Feature Selection-Based Models
by Jose Antonio Lagares Rodríguez, Norberto Díaz-Díaz and Carlos David Barranco González
Appl. Sci. 2025, 15(9), 4818; https://doi.org/10.3390/app15094818 - 26 Apr 2025
Viewed by 505
Abstract
Advancements in modern technology have significantly increased the availability of educational data, presenting researchers with new challenges in extracting meaningful insights. Educational Data Mining offers analytical methods to support the prediction of student outcomes, development of intelligent tutoring systems, and curriculum optimization. Prior studies have highlighted the potential of semi-supervised approaches that incorporate feature selection to identify factors influencing academic success, particularly for improving model interpretability and predictive performance. Many feature selection methods tend to exclude variables that may not be individually powerful predictors but can collectively provide significant information, thereby constraining a model’s capabilities in learning environments. In contrast, Deep Learning (DL) models paired with Automated Machine Learning techniques can decrease the reliance on manual feature engineering, thereby enabling automatic fine-tuning of numerous model configurations. In this study, we propose a reproducible methodology that integrates DL with AutoML to evaluate student performance. We compared the proposed DL methodology to a semi-supervised approach originally introduced by Yu et al. under the same evaluation criteria. Our results indicate that DL-based models can provide a flexible, data-driven approach for examining student outcomes, in addition to preserving the importance of feature selection for interpretability. This proposal is available for replication and additional research. Full article

13 pages, 1246 KiB  
Article
Comparing Auto-Machine Learning and Expert-Designed Models in Diagnosing Vitreomacular Interface Disorders
by Ceren Durmaz Engin, Mahmut Ozan Gokkan, Seher Koksaldi, Mustafa Kayabasi, Ufuk Besenk, Mustafa Alper Selver and Andrzej Grzybowski
J. Clin. Med. 2025, 14(8), 2774; https://doi.org/10.3390/jcm14082774 - 17 Apr 2025
Viewed by 886
Abstract
Background: The vitreomacular interface (VMI) encompasses a group of retinal disorders that significantly impact vision, requiring accurate classification for effective management. This study aims to compare the effectiveness of an expert-designed custom deep learning (DL) model and a code free Auto Machine Learning (ML) model in classifying optical coherence tomography (OCT) images of VMI disorders. Materials and Methods: A balanced dataset of OCT images across five classes—normal, epiretinal membrane (ERM), idiopathic full-thickness macular hole (FTMH), lamellar macular hole (LMH), and vitreomacular traction (VMT)—was used. The expert-designed model combined ResNet-50 and EfficientNet-B0 architectures with Monte Carlo cross-validation. The AutoML model was created on Google Vertex AI, which handled data processing, model selection, and hyperparameter tuning automatically. Performance was evaluated using average precision, precision, and recall metrics. Results: The expert-designed model achieved an overall balanced accuracy of 95.97% and a Matthews Correlation Coefficient (MCC) of 94.65%. Both models attained 100% precision and recall for normal cases. For FTMH, the expert model reached perfect precision and recall, while the AutoML model scored 97.8% average precision, and 97.4% recall. In VMT detection, the AutoML model showed 99.5% average precision with a slightly lower recall of 94.7% compared to the expert model’s 95%. For ERM, the expert model achieved 95% recall, while the AutoML model had higher precision at 93.9% but a lower recall of 79.5%. In LMH classification, the expert model exhibited 95% precision, compared to 72.3% for the AutoML model, with similar recall for both (88% and 87.2%, respectively). Conclusions: While the AutoML model demonstrated strong performance, the expert-designed model achieved superior accuracy across certain classes. AutoML platforms, although accessible to healthcare professionals, may require further advancements to match the performance of expert-designed models in clinical applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Eye Disease)

34 pages, 65802 KiB  
Article
Using Citizen Science Data as Pre-Training for Semantic Segmentation of High-Resolution UAV Images for Natural Forests Post-Disturbance Assessment
by Kamyar Nasiri, William Guimont-Martin, Damien LaRocque, Gabriel Jeanson, Hugo Bellemare-Vallières, Vincent Grondin, Philippe Bournival, Julie Lessard, Guillaume Drolet, Jean-Daniel Sylvain and Philippe Giguère
Forests 2025, 16(4), 616; https://doi.org/10.3390/f16040616 - 31 Mar 2025
Viewed by 673
Abstract
The ability to monitor forest areas after disturbances is key to ensure their regrowth. Problematic situations that are detected can then be addressed with targeted regeneration efforts. However, achieving this with automated photo interpretation is problematic, as training such systems requires large amounts of labeled data. To this effect, we leverage citizen science data (iNaturalist) to alleviate this issue. More precisely, we seek to generate pre-training data from a classifier trained on selected exemplars. This is accomplished by using a moving-window approach on carefully gathered low-altitude images with an Unmanned Aerial Vehicle (UAV), WilDReF-Q (Wild Drone Regrowth Forest—Quebec) dataset, to generate high-quality pseudo-labels. To generate accurate pseudo-labels, the predictions of our classifier for each window are integrated using a majority voting approach. Our results indicate that pre-training a semantic segmentation network on over 140,000 auto-labeled images yields an F1 score of 43.74% over 24 different classes, on a separate ground truth dataset. In comparison, using only labeled images yields a score of 32.45%, while fine-tuning the pre-trained network only yields marginal improvements (46.76%). Importantly, we demonstrate that our approach is able to benefit from more unlabeled images, opening the door for learning at scale. We also optimized the hyperparameters for pseudo-labeling, including the number of predictions assigned to each pixel in the majority voting process. Overall, this demonstrates that an auto-labeling approach can greatly reduce the development cost of plant identification in regeneration regions, based on UAV imagery. Full article
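The pseudo-labeling step, a moving-window classifier whose overlapping predictions are fused by per-pixel majority voting, is straightforward to sketch. The window size, stride, class count, and the dummy classifier below are placeholders, not the iNaturalist-trained model or the WilDReF-Q data.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, WIN, STRIDE, N_CLASSES = 256, 256, 64, 32, 5

def classify_window(patch):
    # Placeholder for the citizen-science-trained classifier: one class per window.
    # (Here: a dummy rule on mean intensity, purely to exercise the voting logic.)
    return int(patch.mean() * N_CLASSES) % N_CLASSES

image = rng.random((H, W))
votes = np.zeros((H, W, N_CLASSES), dtype=np.int32)

# Moving-window pass: each window casts a vote for its predicted class on every
# pixel it covers; overlapping windows therefore vote several times per pixel.
for r in range(0, H - WIN + 1, STRIDE):
    for c in range(0, W - WIN + 1, STRIDE):
        cls = classify_window(image[r:r + WIN, c:c + WIN])
        votes[r:r + WIN, c:c + WIN, cls] += 1

# The per-pixel majority vote becomes the pseudo-label map used for pre-training.
pseudo_labels = votes.argmax(axis=-1)
print("pseudo-label map shape:", pseudo_labels.shape)
```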

20 pages, 7370 KiB  
Article
Output Feedback Regulation via Sinusoidal Control with Application to Semi-Continuous Bio/Chemical Reactors
by Ricardo Aguilar-López, Ricardo Femat and Juan L. Mata-Machuca
Processes 2025, 13(3), 891; https://doi.org/10.3390/pr13030891 - 18 Mar 2025
Viewed by 315
Abstract
This work proposes a nonlinear control strategy: an output feedback controller based on a sinusoidal control approach for output regulation, with application to semi-continuous (fed-batch) bio/chemical processes. A key feature of the proposed control scheme is its auto-stop property, which ensures that the required set points are reached while the control input is automatically ceased. This is particularly advantageous in fed-batch reactors, where exceeding the maximum operative volume is a common concern; additionally, the proposed controller can be bounded simply by selecting the amplitude of the sine function. The closed-loop stability of the designed auto-stop control law is analyzed via the Lyapunov–Krasovskii framework, which allows us to claim that the closed-loop dynamic operation of the corresponding processes is stable. The proposed controller is applied to two typical examples of semi-continuous bio/chemical reactors for regulation purposes, with the aim of increasing the reactors' productivity. In addition, a comparison with a well-tuned internal model control proportional–integral (IMC PI) controller is performed. Numerical experiments were carried out to show the controllers' performance under different, realistic operating conditions. For the bioreactor example, the performance index does not reach a steady state, but the gap between the IMC PI controller and the proposed one is around 100, 200, and 250 units for the different set points, in favor of the proposed controller. For the chemical reactor, the gap between the steady-state values of the performance index is also in favor of the proposed control law. Full article

17 pages, 1434 KiB  
Article
Decoding Brain Signals in a Neuromorphic Framework for a Personalized Adaptive Control of Human Prosthetics
by Georgi Rusev, Svetlozar Yordanov, Simona Nedelcheva, Alexander Banderov, Fabien Sauter-Starace, Petia Koprinkova-Hristova and Nikola Kasabov
Biomimetics 2025, 10(3), 183; https://doi.org/10.3390/biomimetics10030183 - 14 Mar 2025
Cited by 1 | Viewed by 830
Abstract
Current technological solutions for Brain-machine Interfaces (BMI) achieve reasonable accuracy, but most systems are large, power-consuming, and not auto-adaptive. This work addresses the question of whether current neuromorphic technologies can resolve these problems. The paper proposes a novel neuromorphic framework for a BMI system that controls prosthetics by decoding electrocorticographic (ECoG) brain signals. It includes a three-dimensional spike timing neural network (3D-SNN) for brain-signal feature extraction and an online-trainable recurrent reservoir structure (an echo state network, ESN) for Motor Control Decoding (MCD). A software system, written in Python using the NEST Simulator SNN library, is described; it is able to adapt continuously in real time in supervised or unsupervised mode. The proposed approach was tested on several experimental data sets acquired from a tetraplegic person. The first simulation results are encouraging, while also showing the need for further improvement via tuning of multiple hyper-parameters. Its future implementation on a smaller and significantly less power-consuming neuromorphic hardware platform is also discussed. Full article
(This article belongs to the Special Issue Advances in Brain–Computer Interfaces)

21 pages, 912 KiB  
Article
A High-Accuracy Decoupling Balance Control Method for an Auto-Balancing Bridge Based on a Variable-Domain Fuzzy-PID Controller
by Li Wang, Yijiu Zhao, Yifan Wang and Haitao Zhou
Symmetry 2025, 17(3), 354; https://doi.org/10.3390/sym17030354 - 26 Feb 2025
Viewed by 716
Abstract
The auto-balancing bridge method is an impedance measurement method with higher accuracy than other traditional methods. The balance control algorithm within the auto-balancing bridge is a crucial component. Its performance in maintaining symmetry between the current flowing through the test element and the current flowing through a known reference resistor determines the impedance measurement accuracy. However, using the imaginary impedance of a practical reference resistor in the bridge diminishes the convergence accuracy of the auto-balancing bridge. In this paper, a feedforward decoupling module is first constructed to compensate for the imaginary part of the reference resistor and decouple the auto-balancing bridge into two independent balance control channels, namely, the real and imaginary channels. Then, two balance controllers based on the variable-domain fuzzy-PID algorithm are used for these two separated balance control channels in order to improve the convergence accuracy and adaptability of bridge balancing. Finally, the particle swarm optimization method is used to automatically tune the controller’s parameters to enhance the development efficiency of the auto-balancing bridge. Experimental results show that this bridge balance control algorithm can quickly stabilize the unbalanced current of the bridge. For the practical auto-balancing bridge circuit, its relative impedance measurement error remains below 0.05%. This method effectively improves measurement accuracy and provides crucial technical support for the application of auto-balancing bridges in the high-precision measurement field. Full article
(This article belongs to the Section Engineering and Materials)
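The last step named in the abstract, automatic tuning of the balance controllers' parameters with particle swarm optimization, can be illustrated in isolation. The sketch below runs a minimal global-best PSO over PID gains on a toy first-order plant with an ITAE-style cost; the fuzzy/variable-domain layer and the bridge's decoupled real and imaginary channels are not modeled, and all plant constants and gain ranges are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DT, STEPS = 0.01, 500

def itae_cost(gains):
    # Discrete PID driving a toy first-order plant's output (initially 1.0) to zero,
    # a loose stand-in for nulling an unbalanced bridge current.
    kp, ki, kd = gains
    y, integ, prev_e, cost = 1.0, 0.0, -1.0, 0.0
    for k in range(STEPS):
        e = 0.0 - y
        integ += e * DT
        deriv = (e - prev_e) / DT
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        y += DT * (u - y) / 0.2            # plant: dy/dt = (u - y)/tau, tau = 0.2 s (assumed)
        if abs(y) > 1e6:                   # unstable candidate: heavy penalty
            return 1e9
        cost += (k * DT) * abs(e)          # ITAE-style cost
    return cost

# Minimal global-best PSO over (Kp, Ki, Kd).
LOW, HIGH = np.array([0.0, 0.0, 0.0]), np.array([20.0, 50.0, 1.0])
n_particles, iters = 20, 60
pos = rng.uniform(LOW, HIGH, size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([itae_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 3)), rng.random((n_particles, 3))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LOW, HIGH)
    costs = np.array([itae_cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("tuned (Kp, Ki, Kd):", np.round(gbest, 3), "cost:", round(pbest_cost.min(), 4))
```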

42 pages, 40649 KiB  
Article
A Multi-Drone System Proof of Concept for Forestry Applications
by André G. Araújo, Carlos A. P. Pizzino, Micael S. Couceiro and Rui P. Rocha
Drones 2025, 9(2), 80; https://doi.org/10.3390/drones9020080 - 21 Jan 2025
Cited by 3 | Viewed by 3007
Abstract
This study presents a multi-drone proof of concept for efficient forest mapping and autonomous operation, framed within the context of the OPENSWARM EU Project. The approach leverages state-of-the-art open-source simultaneous localisation and mapping (SLAM) frameworks, like LiDAR (Light Detection And Ranging) Inertial Odometry via Smoothing and Mapping (LIO-SAM), and Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm (DCL-SLAM), seamlessly integrated within the MRS UAV System and Swarm Formation packages. This integration is achieved through a series of procedures compliant with Robot Operating System middleware (ROS), including an auto-tuning particle swarm optimisation method for enhanced flight control and stabilisation, which is crucial for autonomous operation in challenging environments. Field experiments conducted in a forest with multiple drones demonstrate the system’s ability to navigate complex terrains as a coordinated swarm, accurately and collaboratively mapping forest areas. Results highlight the potential of this proof of concept, contributing to the development of scalable autonomous solutions for forestry management. The findings emphasise the significance of integrating multiple open-source technologies to advance sustainable forestry practices using swarms of drones. Full article