Algorithms, Volume 18, Issue 7 (July 2025) – 56 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official version of record. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
34 pages, 924 KiB  
Systematic Review
Smart Microgrid Management and Optimization: A Systematic Review Towards the Proposal of Smart Management Models
by Paul Arévalo, Dario Benavides, Danny Ochoa-Correa, Alberto Ríos, David Torres and Carlos W. Villanueva-Machado
Algorithms 2025, 18(7), 429; https://doi.org/10.3390/a18070429 - 11 Jul 2025
Abstract
The increasing integration of renewable energy sources (RES) in power systems presents challenges related to variability, stability, and efficiency, particularly in smart microgrids. This systematic review, following the PRISMA 2020 methodology, analyzed 66 studies focused on advanced energy storage systems, intelligent control strategies, and optimization techniques. Hybrid storage solutions combining battery systems, hydrogen technologies, and pumped hydro storage were identified as effective approaches to mitigate RES intermittency and balance short- and long-term energy demands. The transition from centralized to distributed control architectures, supported by predictive analytics, digital twins, and AI-based forecasting, has improved operational planning and system monitoring. However, challenges remain regarding interoperability, data privacy, cybersecurity, and the limited availability of high-quality data for AI model training. Economic analyses show that while initial investments are high, long-term operational savings and improved resilience justify the adoption of advanced microgrid solutions when supported by appropriate policies and financial mechanisms. Future research should address the standardization of communication protocols, development of explainable AI models, and creation of sustainable business models to enhance resilience, efficiency, and scalability. These efforts are necessary to accelerate the deployment of decentralized, low-carbon energy systems capable of meeting future energy demands under increasingly complex operational conditions. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))

21 pages, 21492 KiB  
Article
SPL-YOLOv8: A Lightweight Method for Rape Flower Cluster Detection and Counting Based on YOLOv8n
by Yue Fang, Chenbo Yang, Jie Li and Jingmin Tu
Algorithms 2025, 18(7), 428; https://doi.org/10.3390/a18070428 - 11 Jul 2025
Abstract
The flowering stage is a critical phase in the growth of rapeseed crops, and non-destructive, high-throughput quantitative analysis of rape flower clusters in field environments holds significant importance for rapeseed breeding. However, detecting and counting rape flower clusters remains challenging in complex field conditions due to their small size, severe overlapping and occlusion, and the large parameter sizes of existing models. To address these challenges, this study proposes a lightweight rape flower cluster detection model, SPL-YOLOv8. First, the model introduces StarNet as a lightweight backbone network for efficient feature extraction, significantly reducing computational complexity and parameter counts. Second, a feature fusion module (C2f-Star) is integrated into the backbone to enhance the feature representation capability of the neck through expanded spatial dimensions, mitigating the impact of occluded regions on detection performance. Additionally, a lightweight Partial Group Convolution Detection Head (PGCD) is proposed, which employs Partial Convolution combined with Group Normalization to enable multi-scale feature interaction. By incorporating additional learnable parameters, the PGCD enhances the detection and localization of small targets. Finally, channel pruning based on the Layer-Adaptive Magnitude-based Pruning (LAMP) score is applied to reduce model parameters and runtime memory. Experimental results on the Rapeseed Flower-Raceme Benchmark (RFRB) demonstrate that the SPL-YOLOv8n-prune model achieves a detection accuracy of 92.2% in Average Precision (AP50), comparable to SOTA methods, while reducing the giga floating-point operations (GFLOPs) and parameters by 86.4% and 95.4%, respectively. The model size is only 0.5 MB and the real-time frame rate is 171 fps. The proposed model effectively detects rape flower clusters with minimal computational overhead, offering technical support for yield prediction and elite cultivar selection in rapeseed breeding. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
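The LAMP score the abstract mentions comes from Lee et al.'s Layer-Adaptive Magnitude-based Pruning; a minimal numpy sketch of that scoring rule (not the authors' SPL-YOLOv8 pruning code) looks like this — each weight's squared magnitude is normalized by the sum of squared magnitudes of all weights at least as large within the same layer:

```python
import numpy as np

def lamp_scores(weights):
    """LAMP score of each weight: w^2 divided by the sum of squared weights
    whose magnitude is >= |w| within the same layer."""
    flat = weights.ravel()
    order = np.argsort(np.abs(flat))       # indices, ascending magnitude
    sq = flat[order] ** 2
    suffix = np.cumsum(sq[::-1])[::-1]     # sum of sq[j] for j >= i
    scores = np.empty_like(sq)
    scores[order] = sq / suffix            # scatter back to original positions
    return scores.reshape(weights.shape)

def prune_mask(weights, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the highest LAMP scores."""
    scores = lamp_scores(weights)
    k = int(sparsity * scores.size)
    threshold = np.sort(scores.ravel())[k] if k > 0 else -np.inf
    return scores >= threshold
```

Because the score is normalized per layer, a single global threshold yields a layer-adaptive sparsity pattern, which is the point of the method.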

18 pages, 1871 KiB  
Article
Interpretable Reinforcement Learning for Sequential Strategy Prediction in Language-Based Games
by Jun Zhao, Jintian Ji, Robail Yasrab, Shuxin Wang, Liang Yu and Lingzhen Zhao
Algorithms 2025, 18(7), 427; https://doi.org/10.3390/a18070427 - 11 Jul 2025
Abstract
Accurate and interpretable prediction plays a vital role in natural language processing (NLP) tasks, particularly for enhancing user trust and model transparency. However, existing models often struggle with poor adaptability and limited interpretability when applied to dynamic language prediction tasks such as Wordle. To address these challenges, this study proposes an interpretable reinforcement learning framework based on an Enhanced Deep Deterministic Policy Gradient (Enhanced-DDPG) algorithm. By leveraging a custom simulation environment and integrating key linguistic features (word frequency, letter frequency, and repeated-letter patterns), the model dynamically predicts the number of attempts needed to solve Wordle puzzles. Experimental results demonstrate that Enhanced-DDPG outperforms traditional methods such as Random Forest Regression (RFR), XGBoost, LightGBM, METRA, and SQIRL in terms of both prediction accuracy (MSE = 0.0134, R2 = 0.8439) and robustness under noisy conditions. Furthermore, SHapley Additive exPlanations (SHAP) are employed to interpret the model’s decision process, revealing that repeated-letter patterns significantly influence low-attempt predictions, while word and letter frequencies are more relevant for higher-attempt scenarios. This research highlights the potential of combining interpretable artificial intelligence (I-AI) and reinforcement learning to develop robust, transparent, and high-performance NLP prediction systems for real-world applications. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)

25 pages, 2178 KiB  
Article
Cross-Modal Fake News Detection Method Based on Multi-Level Fusion Without Evidence
by Ping He, Hanxue Zhang, Shufu Cao and Yali Wu
Algorithms 2025, 18(7), 426; https://doi.org/10.3390/a18070426 - 10 Jul 2025
Abstract
Multimodal feature fusion in fake news detection can integrate complementary information from different modalities, but semantic inconsistency across modalities makes fusion difficult, and a single fusion pass can lose information. In addition, although detection can be improved by drawing on external evidence, such evidence lags behind events, its reliability and completeness are hard to guarantee, and it may introduce noise that interferes with the model's judgment. Therefore, a cross-modal fake news detection method (CM-MLF) based on evidence-free multilevel fusion is proposed. The method resolves semantic inconsistency through cross-modal alignment and uses an attention mechanism to fuse text and image features at multiple levels, without the assistance of external evidential features, to further enhance the expressive power of the features. Experiments show that the method achieves better detection results on multiple benchmark datasets, effectively improving the accuracy and robustness of cross-modal fake news detection. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))

25 pages, 1885 KiB  
Article
Robust Algorithm for Calculating the Alignment of Guide Rolls in Slab Continuous Casting Machines
by Robert Rosenthal, Nils Albersmann and Mohieddine Jelali
Algorithms 2025, 18(7), 425; https://doi.org/10.3390/a18070425 - 9 Jul 2025
Abstract
To ensure the product quality of a steel slab continuous casting machine, the mechanical alignment of the guide rolls must be monitored and corrected regularly. Misaligned guide rolls cause stress and strain in the partially solidified steel strand, leading to internal cracks and other quality issues. Current methods of alignment measurement are either not suited for regular maintenance or provide only indirect alignment information in the form of angle measurements. This paper presents three new algorithms that convert the available angle measurements into the absolute position of each guide roll, which is equivalent to the mechanical alignment. The algorithms are based on geometry and trigonometry or the gradient descent optimization algorithm. Under near ideal conditions, all algorithms yield very accurate position results. However, when tested and evaluated under various conditions, their susceptibility to real-world disturbances is revealed. Here, only the optimization-based algorithm reaches the desired accuracy. Under the influence of randomly distributed angle measurement errors with an amplitude of 0.01°, it is able to determine 90% of roll positions within 0.1 mm of their actual position. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
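The geometry-and-trigonometry variant the abstract describes can be illustrated with a toy reconstruction (hypothetical geometry, not the paper's actual algorithm): if each measured angle is taken as the inclination between two consecutive rolls at a known spacing, the vertical offsets follow by cumulative summation from a datum roll. The spacing value and datum are illustrative assumptions.

```python
import numpy as np

def roll_offsets_from_angles(angles_deg, spacing_mm, y0=0.0):
    """Toy trigonometric reconstruction: angles_deg[i] is read as the
    inclination between rolls i and i+1 at a fixed spacing (mm), so the
    offset of each roll is the running sum of spacing * tan(angle)."""
    steps = spacing_mm * np.tan(np.radians(angles_deg))
    return np.concatenate([[y0], y0 + np.cumsum(steps)])
```

This simple integration also shows why the paper finds pure trigonometry fragile: a small angle error (e.g. the 0.01° amplitude tested) accumulates along the chain of rolls, which motivates the optimization-based formulation.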

25 pages, 875 KiB  
Article
Filter Learning-Based Partial Least Squares Regression and Its Application in Infrared Spectral Analysis
by Yi Mou, Long Zhou, Weizhen Chen, Jianguo Liu and Teng Li
Algorithms 2025, 18(7), 424; https://doi.org/10.3390/a18070424 - 9 Jul 2025
Abstract
Partial Least Squares (PLS) regression has been widely used to model the relationship between predictors and responses. However, PLS may be limited in its capacity to handle complex spectral data contaminated with significant noise and interferences. In this paper, we propose a novel filter learning-based PLS (FPLS) model that integrates an adaptive filter into the PLS framework. The FPLS model is designed to maximize the covariance between the filtered spectral data and the response. This modification enables FPLS to dynamically adapt to the characteristics of the data, thereby enhancing its feature extraction and noise suppression capabilities. We have developed an efficient algorithm to solve the FPLS optimization problem and provided theoretical analyses regarding the convergence of the model, the prediction variance, and the relationships among the objective functions of FPLS, PLS, and the filter length. Furthermore, we have derived bounds for the Root Mean Squared Error of Prediction (RMSEP) and the Cosine Similarity (CS) to evaluate model performance. Experimental results using spectral datasets from Corn, Octane, Mango, and Soil Nitrogen show that the FPLS model outperforms PLS, OSCPLS, VCPLS, PoPLS, LoPLS, DOSC, OPLS, MSC, SNV, SGFilter, and Lasso in terms of prediction accuracy. The theoretical analyses align with the experimental results, emphasizing the effectiveness and robustness of the FPLS model in managing complex spectral data. Full article

20 pages, 1353 KiB  
Article
Dynamic Modeling and Validation of Peak Ability of Biomass Units
by Dawei Xia, Guizhou Cao, Jiayao Pan, Xinghai Wang, Kai Meng, Yuancheng Sun and Zhenlong Wu
Algorithms 2025, 18(7), 423; https://doi.org/10.3390/a18070423 - 9 Jul 2025
Abstract
Biomass units can help meet peak demand in summer and winter thanks to their environmental benefits and short-term peaking ability. To analyze the peak ability of biomass units, this paper focuses on the dynamic modeling of biomass unit peak ability. Firstly, the process from biomass feeding amount to power output is divided into a feed–heat module, a heat–main steam pressure module, and a main steam pressure–power module. A two-input, two-output dynamic model is established in which the feeding amount and turbine valve opening serve as inputs, and the main steam pressure and power serve as outputs. The effectiveness of the established model is then validated against actual operating data from a 30 MW biomass unit. This dynamic model provides a mechanistic basis for analyzing the impact of fuel calorific value on power output, and supports fuel management and scheduling strategies during peak periods of biomass units. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))

18 pages, 580 KiB  
Article
Feature Transformation-Based Few-Shot Class-Incremental Learning
by Xubo Zhang and Yang Luo
Algorithms 2025, 18(7), 422; https://doi.org/10.3390/a18070422 - 9 Jul 2025
Abstract
In the process of few-shot class-incremental learning, the limited number of samples for newly introduced classes makes it difficult to adequately adapt model parameters, resulting in poor feature representations for these classes. To address this issue, this paper proposes a feature transformation method that mitigates feature degradation in few-shot incremental learning. The transformed features better align with the ideal feature distribution required by an optimal classifier, thereby alleviating performance decline during incremental updates. Before classification, the method learns a well-conditioned linear mapping from the available base classes. After classification, both class prototypes and query samples are projected into the transformed feature space to improve the overall feature distribution. Experimental results on three benchmark datasets demonstrate that the proposed method achieves strong performance: it reduces performance degradation to 24.85 percentage points on miniImageNet, 24.45 on CIFAR100, and 24.14 on CUB, consistently outperforming traditional methods such as iCaRL (44.13–50.71 points degradation) and recent techniques like FeTrIL and PL-FSCIL. Further analysis shows that the transformed features bring class prototypes significantly closer to the theoretically optimal equiangular configuration described by neural collapse, highlighting the effectiveness of the proposed approach. Full article
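The classification step the abstract describes — projecting both class prototypes and query samples through a learned linear mapping before matching — can be sketched generically (the mapping here is a hypothetical stand-in for the paper's learned transformation, and cosine similarity is an assumed metric):

```python
import numpy as np

def classify(query, prototypes, M=None):
    """Nearest-prototype classification after an optional linear feature
    transformation M. Both the query and the class prototypes are projected
    into the transformed space, then matched by cosine similarity."""
    if M is not None:
        query = M @ query
        prototypes = prototypes @ M.T
    q = query / np.linalg.norm(query)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return int(np.argmax(P @ q))
```

The paper's contribution is how M is learned from base classes so that transformed prototypes approach the equiangular configuration predicted by neural collapse; that training objective is not reproduced here.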

17 pages, 1101 KiB  
Article
Ship Scheduling Algorithm Based on Markov-Modulated Fluid Priority Queues
by Jianzhi Deng, Shuilian Lv, Yun Li, Liping Luo, Yishan Su, Xiaolin Wang and Xinzhi Liu
Algorithms 2025, 18(7), 421; https://doi.org/10.3390/a18070421 - 8 Jul 2025
Abstract
As a key node in port logistics systems, ship anchorage is often faced with congestion caused by ship flow fluctuations, multi-priority scheduling imbalances and the poor adaptability of scheduling models to complex environments. To solve the above problems, this paper constructs a ship scheduling algorithm based on a Markov-modulated fluid priority queue, which describes the stochastic evolution of the anchorage operation state via a continuous-time Markov chain and abstracts the arrival and service processes of ships into a continuous fluid input and output mechanism modulated by the state. The algorithm introduces a multi-priority service strategy to achieve the differentiated scheduling of different types of ships and improves the computational efficiency and scalability based on a matrix analysis method. Simulation results show that the proposed model reduces the average waiting time of ships by more than 90% compared with the M/G/1/1 and RL strategies and improves the utilization of anchorage resources by about 20% through dynamic service rate adjustment, showing significant advantages over traditional scheduling methods in multi-priority scenarios. Full article
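The continuous-time Markov chain underlying such a fluid queue is characterized by its generator matrix; a minimal numpy sketch of computing the stationary distribution (a standard building block of the matrix-analytic method the abstract cites, not the paper's full scheduler) is:

```python
import numpy as np

def stationary_distribution(Q):
    """Stationary distribution pi of a CTMC with generator matrix Q:
    solve pi @ Q = 0 subject to sum(pi) = 1, via an augmented
    least-squares system."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # pi Q = 0 rows plus normalization row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

In a Markov-modulated fluid queue, each state of this chain modulates the net fluid (ship arrival minus service) rate; the stationary weights then determine long-run congestion measures per priority class.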

21 pages, 561 KiB  
Article
Comparative Analysis of BERT and GPT for Classifying Crisis News with Sudan Conflict as an Example
by Yahya Masri, Zifu Wang, Anusha Srirenganathan Malarvizhi, Samir Ahmed, Tayven Stover, David W. S. Wong, Yongyao Jiang, Yun Li, Qian Liu, Mathieu Bere, Daniel Rothbart, Dieter Pfoser and Chaowei Yang
Algorithms 2025, 18(7), 420; https://doi.org/10.3390/a18070420 - 8 Jul 2025
Abstract
To obtain actionable information for humanitarian and other emergency responses, an accurate classification of news or events is critical. Daily news and social media are hard to classify based on conveyed information, especially when multiple categories of information are embedded. This research used large language models (LLMs) and traditional transformer-based models, such as BERT, to classify news and social media events using the example of the Sudan Conflict. A systematic evaluation framework was introduced to test GPT models using Zero-Shot prompting, Retrieval-Augmented Generation (RAG), and RAG with In-Context Learning (ICL) against standard and hyperparameter-tuned bert-base and bert-large models. BERT outperformed GPT in F1-score and accuracy for multi-label classification (MLC), while GPT outperformed BERT in accuracy for Single-Label classification from Multi-Label Ground Truth (SL-MLG). The results illustrate that a larger model size improves classification accuracy for both BERT and GPT, while BERT benefits from hyperparameter tuning and GPT benefits from its enhanced contextual comprehension capabilities. By addressing challenges such as overlapping semantic categories, task-specific adaptation, and a limited dataset, this study provides a deeper understanding of LLMs’ applicability in constrained, real-world scenarios, particularly in highlighting the potential for integrating NLP with other applications such as GIS in future conflict analyses. Full article
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)
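The multi-label F1-score used to compare the models is conventionally micro-averaged; a minimal stdlib sketch of that metric (an assumption about the exact averaging the paper uses) pools true-positive, false-positive, and false-negative counts over all samples:

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 for multi-label classification: each sample's labels
    are a set; TP/FP/FN counts are pooled across all samples before computing
    precision and recall."""
    tp = fp = fn = 0
    for t, p in zip(true_sets, pred_sets):
        tp += len(t & p)   # labels predicted and correct
        fp += len(p - t)   # labels predicted but wrong
        fn += len(t - p)   # labels missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights frequent categories more heavily than macro-averaging, which matters when conflict-event categories are imbalanced.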

25 pages, 1579 KiB  
Systematic Review
Using Smartwatches in Stress Management, Mental Health, and Well-Being: A Systematic Review
by Nikoletta-Anna Kapogianni, Angeliki Sideraki and Christos-Nikolaos Anagnostopoulos
Algorithms 2025, 18(7), 419; https://doi.org/10.3390/a18070419 - 8 Jul 2025
Abstract
This systematic review explores the role of smartwatches in stress management, mental health monitoring, and overall well-being. Drawing from 61 peer-reviewed studies published between 2016 and 2025, this review synthesizes empirical findings across diverse methodologies, including biometric data collection, machine learning algorithms, and user-centered design evaluations. Smartwatches, equipped with sensors for physiological signals such as heart rate, heart rate variability, electrodermal activity, and skin temperature, have demonstrated promise in detecting and predicting stress and mood fluctuations in both clinical and everyday contexts. This review emphasizes the need for interdisciplinary collaboration to advance technological precision, ethical data handling, and user experience design. Moreover, it highlights how different algorithms—such as Support Vector Machines (SVMs), Random Forests, Deep Neural Networks, and Boosting methods—perform across various physiological signals (e.g., HRV, EDA, skin temperature). Furthermore, it identifies performance trends and challenges across lab-based vs. real-world deployments, emphasizing the trade-off between generalizability and personalization in model design. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))

30 pages, 3108 KiB  
Article
Research on the Integrated Scheduling of Imaging and Data Transmission for Earth Observation Satellites
by Guanfei Yu and Kunlun Zhang
Algorithms 2025, 18(7), 418; https://doi.org/10.3390/a18070418 - 8 Jul 2025
Abstract
This study focuses on the integrated scheduling of imaging and data transmission for Earth observation satellites, where each target must be imaged and its data transmitted within a feasible time window. The scheduling process also takes into account the constraints of satellite energy and storage capacity. In this paper, a mixed-integer linear programming (MILP) model for the integrated scheduling of imaging and data transmission is proposed. The MILP model was validated through numerical experiments based on simulation data from SuperView-1 series satellites. Additionally, several neighborhood mechanisms are designed based on the characteristics of the problem. Building on these mechanisms, a rule-based large neighborhood search algorithm (RLNS) is designed, which constructs initial solutions through various scheduling rules and iteratively optimizes them using multiple destroy and repair operators. To address the overly rigid behavior of the destroy and repair operators in large neighborhood search, a genetic algorithm (GA) is designed to tune the heuristic scheduling rules. The computational results demonstrate the effectiveness of RLNS and the GA, highlighting their advantages over CPLEX in solving large-scale problems. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

10 pages, 757 KiB  
Article
Environmental Sensitivity in AI Tree Bark Detection: Identifying Key Factors for Improving Classification Accuracy
by Charles Warner, Fanyou Wu, Rado Gazo, Bedrich Benes and Songlin Fei
Algorithms 2025, 18(7), 417; https://doi.org/10.3390/a18070417 - 8 Jul 2025
Abstract
Accurate tree species identification through bark characteristics is essential for effective forest management, but traditionally requires extensive expertise. This study leverages artificial intelligence (AI), specifically the EfficientNet-B3 convolutional neural network, to enhance AI-based tree bark identification, focusing on northern red oak (Quercus rubra), hackberry (Celtis occidentalis), and bitternut hickory (Carya cordiformis) using the CentralBark dataset. We investigated three environmental variables—time of day (lighting conditions), bark moisture content (wet or dry), and cardinal direction of observation—to identify sources of classification inaccuracies. Results revealed that bark moisture significantly reduced accuracy by 8.19% in wet conditions (89.32% dry vs. 81.13% wet). In comparison, the time of day had a significant impact on hackberry (95.56% evening) and northern red oak (80.80% afternoon), with notable chi-squared associations (p < 0.05). Cardinal direction had minimal effect (4.72% variation). Bitternut hickory detection consistently underperformed (26.76%), highlighting morphological challenges. These findings underscore the need for targeted dataset augmentation with wet and afternoon images, alongside preprocessing techniques like illumination normalization, to improve model robustness. Enhanced AI tools will streamline forest inventories, support biodiversity monitoring, and bolster conservation in dynamic forest ecosystems. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)

23 pages, 3338 KiB  
Article
European Efficiency Schemes for Domestic Gas Boilers: Estimation of Savings in Heating of Settlements
by Dejan Brkić
Algorithms 2025, 18(7), 416; https://doi.org/10.3390/a18070416 - 6 Jul 2025
Abstract
This article aims to evaluate the seasonal efficiency of natural gas boilers used in European households, highlighting the cost effectiveness, environmental benefits, and user comfort associated with higher-efficiency models, particularly those based on condensing technology. The study applies a standardized algorithm used in European energy labeling schemes to calculate the seasonal efficiency of household gas boilers. It further includes a comparative analysis of selected boiler models available on the Serbian market and outlines a step-by-step method for estimating gas savings when replacing older, less efficient boilers with modern units. Condensing boilers demonstrate significantly higher seasonal efficiency than standard models by recovering additional heat from exhaust gases. These improved boilers produce lower greenhouse gas emissions and offer annual fuel savings of approximately 10% to 30%, depending on the boiler’s age, system design, and usage patterns. The results also confirm the direct correlation between seasonal efficiency and annual fuel consumption, validating the use of efficiency-based cost comparisons. The analysis focuses on residential gas boilers available in the Serbian market, although the models examined are commonly distributed across Europe. The findings highlight the important role of energy efficiency labels—based on a standardized algorithm—in guiding boiler selection, helping consumers and policymakers make informed decisions that promote energy savings and reduce environmental impact. This article contributes to the theoretical and practical understanding of gas boiler efficiency by integrating algorithm-based evaluation with market data and user-centered considerations. It offers actionable insights for consumers, energy advisors, and policymakers in the context of Europe’s energy transition. Verifying the efficiency calculations of gas boilers requires a careful combination of theoretical methods, measured data, and adherence to standards. Full article
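The core of the replacement-savings estimate reduces to a simple energy balance: delivering the same useful heat, a boiler with seasonal efficiency eta_new burns eta_old/eta_new times the old fuel input. A minimal sketch (illustrative figures, not the paper's market data):

```python
def annual_gas_savings(consumption_old, eta_old, eta_new):
    """Estimated annual fuel savings from a boiler replacement, assuming the
    same useful heat demand: the new boiler consumes
    consumption_old * eta_old / eta_new."""
    consumption_new = consumption_old * eta_old / eta_new
    return consumption_old - consumption_new

# Hypothetical example: 20,000 kWh/yr of gas at 78% seasonal efficiency,
# replaced by a condensing boiler at 98% seasonal efficiency.
savings = annual_gas_savings(20_000.0, 0.78, 0.98)
```

With these illustrative numbers the saving is roughly 20% of the old consumption, consistent with the 10–30% range the abstract reports.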

21 pages, 5436 KiB  
Article
Engine Optimization Model for Accurate Prediction of Friction Model in Marine Dual-Fuel Engine
by Mina Tadros
Algorithms 2025, 18(7), 415; https://doi.org/10.3390/a18070415 - 4 Jul 2025
Abstract
This paper presents an innovative engine optimization model integrated with a friction fitting tool to enhance the accuracy of computed performance for a marine dual-fuel engine. The focus is on determining the terms of the Chen–Flynn correlation—an empirical engine friction model—to improve the precision of friction and performance predictions. The developed model employs WAVE, a 1D engine simulation software, coupled with a nonlinear optimizer to identify the optimal configuration of key parameters, including the turbocharger, injection system, combustion behavior, and friction model. The optimization procedure maximizes the air–fuel ratio (AFR) within the engine while adhering to various predefined constraints. The model is applied to four operational points along the propeller curve, with the optimized results subsequently integrated into a friction fitting tool. This tool predicts the terms of the Chen–Flynn correlation through an updated procedure, achieving highly accurate results with a coefficient of determination (R2) value of 99.88%, eliminating the need for experimental testing. The optimized friction model provides a reliable foundation for future studies and applications, enabling precise friction predictions across various engine types and fuel compositions. Full article
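The Chen–Flynn correlation referenced in the abstract expresses friction mean effective pressure as a polynomial in peak cylinder pressure and mean piston speed, FMEP = A + B·p_max + C·v_p + D·v_p²; fitting its terms to operating points is an ordinary least-squares problem. A minimal sketch (synthetic inputs, not the paper's WAVE-based procedure):

```python
import numpy as np

def fit_chen_flynn(p_max, v_p, fmep):
    """Least-squares fit of the Chen–Flynn friction correlation
    FMEP = A + B*p_max + C*v_p + D*v_p**2, where p_max is peak cylinder
    pressure and v_p is mean piston speed. Returns [A, B, C, D]."""
    X = np.column_stack([np.ones_like(p_max), p_max, v_p, v_p ** 2])
    coeffs, *_ = np.linalg.lstsq(X, fmep, rcond=None)
    return coeffs
```

In the paper the fmep values come from the optimized engine model at points along the propeller curve rather than from measurements, which is what lets the fit replace experimental testing.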
21 pages, 1097 KiB  
Article
An Industry Application of Secure Augmentation and Gen-AI for Transforming Engineering Design and Manufacturing
by Dulana Rupanetti, Corissa Uberecken, Adam King, Hassan Salamy, Cheol-Hong Min and Samantha Schmidgall
Algorithms 2025, 18(7), 414; https://doi.org/10.3390/a18070414 - 4 Jul 2025
Abstract
This paper explores the integration of Large Language Models (LLMs) and secure Gen-AI technologies within engineering design and manufacturing, with a focus on improving inventory management, component selection, and recommendation workflows. The system is intended for deployment and evaluation in a real-world industrial [...] Read more.
This paper explores the integration of Large Language Models (LLMs) and secure Gen-AI technologies within engineering design and manufacturing, with a focus on improving inventory management, component selection, and recommendation workflows. The system is intended for deployment and evaluation in a real-world industrial environment. It utilizes vector embeddings, vector databases, and Approximate Nearest Neighbor (ANN) search algorithms to implement Retrieval-Augmented Generation (RAG), enabling context-aware searches for inventory items and addressing the limitations of traditional text-based methods. Built on an LLM framework enhanced by RAG, the system performs similarity-based retrieval and part recommendations while preserving data privacy through selective obfuscation using the ROT13 algorithm. In collaboration with an industry sponsor, real-world testing demonstrated strong results: 88.4% for Answer Relevance, 92.1% for Faithfulness, 80.2% for Context Recall, and 83.1% for Context Precision. These results demonstrate the system’s ability to deliver accurate and relevant responses while retrieving meaningful context and minimizing irrelevant information. Overall, the approach presents a practical and privacy-aware solution for manufacturing, bridging the gap between traditional inventory tools and modern AI capabilities and enabling more intelligent workflows in design and production processes. Full article
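The selective obfuscation layer mentioned above relies on ROT13, which Python's standard library already provides; a minimal sketch with a made-up part name:

```python
import codecs

def obfuscate(text: str) -> str:
    # ROT13 is a fixed 13-letter Caesar shift: trivially reversible, so it
    # hides identifiers from intermediate components without real encryption.
    return codecs.encode(text, "rot13")

part = "BearingAssembly-6204ZZ"        # hypothetical inventory item name
masked = obfuscate(part)
print(masked)                          # -> OrnevatNffrzoyl-6204MM
assert obfuscate(masked) == part       # ROT13 is its own inverse
```

Only letters are shifted; digits and punctuation pass through, which keeps part numbers matchable while masking descriptive names.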
23 pages, 6016 KiB  
Article
Detecting SARS-CoV-2 in CT Scans Using Vision Transformer and Graph Neural Network
by Kamorudeen Amuda, Almustapha Wakili, Tomilade Amoo, Lukman Agbetu, Qianlong Wang and Jinjuan Feng
Algorithms 2025, 18(7), 413; https://doi.org/10.3390/a18070413 - 4 Jul 2025
Abstract
The COVID-19 pandemic has presented significant challenges to global healthcare, bringing out the urgent need for reliable diagnostic tools. Computed Tomography (CT) scans have proven instrumental in detecting COVID-19-induced lung abnormalities. This study introduces Convolutional Neural Network, Graph Neural Network, and Vision Transformer [...] Read more.
The COVID-19 pandemic has presented significant challenges to global healthcare, underscoring the urgent need for reliable diagnostic tools. Computed Tomography (CT) scans have proven instrumental in detecting COVID-19-induced lung abnormalities. This study introduces ViTGNN, an advanced hybrid model that enhances SARS-CoV-2 detection by combining Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs) for feature extraction with Vision Transformers (ViTs) for classification. By using CNNs and GNNs to capture complex relational structures and the capacity of ViTs to model global context, ViTGNN achieves a comprehensive representation of CT scan data. The model was evaluated on a SARS-CoV-2 CT scan dataset, demonstrating superior performance across all metrics compared to baseline models. It achieved an accuracy of 95.98%, precision of 96.07%, recall of 96.01%, F1-score of 95.98%, and AUC of 98.69%, outperforming existing approaches. These results indicate that ViTGNN is an effective diagnostic tool whose applicability extends beyond COVID-19 detection to other medical imaging tasks. Full article
17 pages, 479 KiB  
Article
Analysis of Reliability and Efficiency of Information Extraction Using AI-Based Chatbot: The More-for-Less Paradox
by Eugene Levner and Boris Kriheli
Algorithms 2025, 18(7), 412; https://doi.org/10.3390/a18070412 - 3 Jul 2025
Abstract
This paper addresses the problem of information extraction using an AI-powered chatbot. The problem concerns searching and extracting relevant information from large databases in response to a human user’s query. Expanding the traditional discrete search problem well known in operations research, this problem [...] Read more.
This paper addresses the problem of information extraction using an AI-powered chatbot. The problem concerns searching and extracting relevant information from large databases in response to a human user’s query. Expanding the traditional discrete search problem well known in operations research, this problem introduces two players: the first player—an AI chatbot such as ChatGPT—sequentially scans available datasets to find an appropriate answer to a given query, while the second—a human user—conducts a dialogue with the chatbot and evaluates its answers in each round of the dialogue. The goal of an AI-powered chatbot is to provide maximally useful and accurate information. During a natural language conversation between a human user and an AI, the human user can modify and refine queries until they are satisfied with the chatbot’s output. We analyze two key characteristics of human–AI interaction: search reliability and efficiency. Search reliability is defined as the ability of the chatbot to understand user queries and provide correct answers; it is measured by the frequency (probability) of correct answers. Search efficiency of a chatbot indicates how accurate and relevant the information returned by the chatbot is; it is measured by the satisfaction level a human user receives for a correct answer. An AI chatbot must perform a sequence of scans over the given databases and continue searching until the human user declares, in some round, that the target has been found. Assuming that the chatbot is not completely reliable, each database may have to be scanned infinitely often; in this case, the objective of the problem is to determine a search policy for finding the optimal sequence of chatbot scans that maximizes the expected user satisfaction over an infinite time horizon. 
Along with these results, we found a counterintuitive relationship between AI chatbot reliability and search performance: under sufficiently general conditions, a less reliable AI chatbot may have higher expected search efficiency; this phenomenon aligns with other well-known “more-for-less” paradoxes. Finally, we discuss the underlying mechanism of this paradox. Full article
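A toy closed-form illustration of how such a paradox can arise (our own made-up numbers and discounting scheme, not the authors' model): if the chatbot answers correctly with probability p per round and a correct answer in round t is worth s·γ^(t-1), expected satisfaction is a geometric series, and a less reliable bot with higher per-answer satisfaction can come out ahead:

```python
def expected_satisfaction(p, s, gamma=0.9):
    # E = sum_{t>=1} (1-p)^(t-1) * p * s * gamma^(t-1)  (geometric series)
    return p * s / (1.0 - gamma * (1.0 - p))

e_reliable = expected_satisfaction(p=0.9, s=1.0)    # very reliable, modest per-answer value
e_unreliable = expected_satisfaction(p=0.5, s=1.4)  # less reliable, higher per-answer value
print(e_reliable, e_unreliable)  # the less reliable bot wins with these numbers
```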
23 pages, 1678 KiB  
Article
Development of Digital Training Twins in the Aircraft Maintenance Ecosystem
by Igor Kabashkin
Algorithms 2025, 18(7), 411; https://doi.org/10.3390/a18070411 - 3 Jul 2025
Abstract
This paper presents an integrated digital training twin framework for adaptive aircraft maintenance education, combining real-time competence modeling, algorithmic orchestration, and cloud–edge deployment architectures. The proposed system dynamically evaluates learner skill gaps and assigns individualized training resources through a multi-objective optimization function that [...] Read more.
This paper presents an integrated digital training twin framework for adaptive aircraft maintenance education, combining real-time competence modeling, algorithmic orchestration, and cloud–edge deployment architectures. The proposed system dynamically evaluates learner skill gaps and assigns individualized training resources through a multi-objective optimization function that balances skill alignment, Bloom’s cognitive level, fidelity tier, and time efficiency. A modular orchestration engine incorporates reinforcement learning agents for policy refinement, federated learning for privacy-preserving skill analytics, and knowledge graph-based curriculum models for dependency management. Simulation experiments were conducted on the Pneumatic Systems training module. The system’s validation matrix provides full-cycle traceability of instructional decisions, supporting regulatory audit-readiness and institutional reporting. The digital training twin ecosystem offers a scalable, regulation-compliant, and data-driven solution for next-generation aviation maintenance training, with demonstrated operational efficiency, instructional precision, and extensibility for future expansion. Full article
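The resource-assignment step can be pictured as a weighted multi-objective score over candidate training resources; the attribute values and weights below are invented for illustration, not taken from the paper:

```python
# Candidate resources: (name, skill_alignment 0-1, Bloom level 1-6,
# fidelity tier 1-3, duration in hours) -- all values hypothetical.
resources = [
    ("VR pneumatics rig", 0.9, 5, 3, 4.0),
    ("CBT slide module", 0.6, 2, 1, 1.5),
    ("OJT with mentor", 0.8, 4, 3, 6.0),
]
W_SKILL, W_BLOOM, W_FID, W_TIME = 0.5, 0.2, 0.2, 0.1

def score(skill, bloom, fid, hours):
    # Normalise Bloom (1-6) and fidelity (1-3); penalise longer durations.
    return (W_SKILL * skill + W_BLOOM * bloom / 6 + W_FID * fid / 3
            - W_TIME * hours / 8)

best = max(resources, key=lambda r: score(*r[1:]))
print(best[0])
```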
23 pages, 787 KiB  
Article
Integrating Machine Learning Techniques and the Theory of Planned Behavior to Assess the Drivers of and Barriers to the Use of Generative Artificial Intelligence: Evidence in Spain
by Antonio Pérez-Portabella, Jorge de Andrés-Sánchez, Mario Arias-Oliva and Mar Souto-Romero
Algorithms 2025, 18(7), 410; https://doi.org/10.3390/a18070410 - 3 Jul 2025
Abstract
Generative artificial intelligence (GAI) is emerging as a disruptive force, both economically and socially, with its use spanning from the provision of goods and services to everyday activities such as healthcare and household management. This study analyzes the enabling and inhibiting factors of [...] Read more.
Generative artificial intelligence (GAI) is emerging as a disruptive force, both economically and socially, with its use spanning from the provision of goods and services to everyday activities such as healthcare and household management. This study analyzes the enabling and inhibiting factors of GAI use in Spain based on a large-scale survey conducted by the Spanish Center for Sociological Research on the use and perception of artificial intelligence. The proposed model is based on the Theory of Planned Behavior and is fitted using machine learning techniques, specifically decision trees, Random Forest extensions, and extreme gradient boosting. While decision trees allow for detailed visualization of how variables interact to explain usage, Random Forest provides an excellent model fit (R2 close to 95%) and predictive performance. The use of Shapley Additive Explanations reveals that knowledge about artificial intelligence, followed by innovation orientation, is the main explanatory variable of GAI use. Among sociodemographic variables, Generation X and Z stood out as the most relevant. It is also noteworthy that the perceived privacy risk does not show a clear inhibitory influence on usage. Factors representing the positive consequences of GAI, such as performance expectancy and social utility, exert a stronger influence than the negative impact of hindering factors such as perceived privacy or social risks. Full article
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)
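For an additive model, Shapley values have a closed form, which makes the kind of global-importance ranking reported above easy to sketch (synthetic data; feature names and coefficients are invented, with AI knowledge given the dominant weight as in the study's findings):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))      # columns: ai_knowledge, innovativeness, age_group
beta = np.array([1.2, 0.7, -0.2])  # invented linear effects; ai_knowledge dominates

def shap_linear(x):
    # Exact SHAP value for an additive/linear model: beta_i * (x_i - E[x_i])
    return beta * (x - X.mean(axis=0))

# Global importance = mean absolute SHAP value per feature
importance = np.abs(shap_linear(X)).mean(axis=0)
print(importance)  # largest for ai_knowledge
```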
22 pages, 2789 KiB  
Article
Longitudinal Tire Force Estimation Method for 4WIDEV Based on Data-Driven Modified Recursive Subspace Identification Algorithm
by Xiaoyu Wang, Te Chen and Jiankang Lu
Algorithms 2025, 18(7), 409; https://doi.org/10.3390/a18070409 - 3 Jul 2025
Abstract
For the longitudinal tire force estimation problem of four-wheel independent drive electric vehicles (4WIDEVs), traditional model-based observers have limitations such as high modeling complexity and strong parameter sensitivity, while pure data-driven methods are susceptible to noise interference and have insufficient generalization ability. Therefore, [...] Read more.
For the longitudinal tire force estimation problem of four-wheel independent drive electric vehicles (4WIDEVs), traditional model-based observers have limitations such as high modeling complexity and strong parameter sensitivity, while pure data-driven methods are susceptible to noise interference and have insufficient generalization ability. Therefore, this study proposes a joint estimation framework that integrates data-driven and modified recursive subspace identification algorithms. Firstly, based on the electromechanical coupling mechanism, an electric drive wheel dynamics model (EDWM) is constructed, and multidimensional driving data is collected through a chassis dynamometer experimental platform. Secondly, an improved proportional integral observer (PIO) is designed to decouple the longitudinal force from the system input into a state variable, and a subspace identification recursive algorithm based on a correction term with a forgetting factor (CFF-SIR) is introduced to suppress the residual influence of historical data and enhance the ability to track time-varying parameters. The simulation and experimental results show that under complex working conditions without noise and interference, with noise influence (5% white noise), and with interference (5% irregular signal), the mean error and mean square error of longitudinal force estimation under the CFF-SIR algorithm are significantly reduced compared to the correction-based subspace identification recursive (C-SIR) algorithm, and the comprehensive estimation accuracy is improved by 8.37%. The framework can provide a high-precision and highly adaptive longitudinal force estimation solution for vehicle dynamics control and intelligent driving systems. Full article
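The forgetting-factor idea at the core of CFF-SIR can be illustrated with the generic recursive least-squares recursion (this is only the textbook building block on synthetic data; the paper's algorithm adds a correction term inside a subspace identification scheme):

```python
import numpy as np

def rls_ff(phi_seq, y_seq, n, lam=0.98):
    """Recursive least squares with forgetting factor lam (0 < lam <= 1)."""
    theta = np.zeros(n)
    P = np.eye(n) * 1e4                          # large initial covariance
    for phi, y in zip(phi_seq, y_seq):
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        theta = theta + k * (y - phi @ theta)    # parameter update
        P = (P - np.outer(k, phi @ P)) / lam     # discount old data via lam
    return theta

rng = np.random.default_rng(1)
true_theta = np.array([2.0, -1.0])               # invented "tire force" parameters
phis = rng.normal(size=(400, 2))
ys = phis @ true_theta + 0.01 * rng.normal(size=400)
est = rls_ff(phis, ys, 2)
print(est)  # close to [2, -1]
```

Smaller lam tracks time-varying parameters faster at the cost of noisier estimates; lam = 1 recovers ordinary RLS.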
19 pages, 2917 KiB  
Article
An Approach to Trustworthy Article Ranking by NLP and Multi-Layered Analysis and Optimization
by Chenhao Li, Jiyin Zhang, Weilin Chen and Xiaogang Ma
Algorithms 2025, 18(7), 408; https://doi.org/10.3390/a18070408 - 3 Jul 2025
Abstract
The rapid growth of scientific publications, coupled with rising retraction rates, has intensified the challenge of identifying trustworthy academic articles. To address this issue, we propose a three-layer ranking system that integrates natural language processing and machine learning techniques for relevance and trust [...] Read more.
The rapid growth of scientific publications, coupled with rising retraction rates, has intensified the challenge of identifying trustworthy academic articles. To address this issue, we propose a three-layer ranking system that integrates natural language processing and machine learning techniques for relevance and trust assessment. First, we apply BERT-based embeddings to semantically match user queries with article content. Second, a Random Forest classifier is used to eliminate potentially problematic articles, leveraging features such as citation count, Altmetric score, and journal impact factor. Third, a custom ranking function combines relevance and trust indicators to score and sort the remaining articles. Evaluation using 16,052 articles from Retraction Watch and Web of Science datasets shows that our classifier achieves 90% accuracy and 97% recall for retracted articles. Citations emerged as the most influential trust signal (53.26%), followed by Altmetric and impact factors. This multi-layered approach offers a transparent and efficient alternative to conventional ranking algorithms, which can help researchers discover not only relevant but also reliable literature. Our system is adaptable to various domains and represents a promising tool for improving literature search and evaluation in the open science environment. Full article
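The third layer's score can be sketched as a weighted combination of relevance and normalised trust signals; the weights echo the reported importances (citations first), while the normalisation constants and example articles are invented:

```python
# (title, relevance 0-1, citations, Altmetric score, journal impact factor)
articles = [
    ("A", 0.92, 10, 5, 2.0),
    ("B", 0.85, 400, 80, 6.5),
    ("C", 0.60, 900, 150, 9.0),
]

def rank_score(rel, cites, alt, jif, max_c=1000, max_a=200, max_j=10):
    # Trust weights loosely follow the reported importances: citations ~53%,
    # then Altmetric and impact factor; caps are invented normalisers.
    trust = 0.53 * cites / max_c + 0.27 * alt / max_a + 0.20 * jif / max_j
    return 0.5 * rel + 0.5 * trust  # equal blend of relevance and trust

ranked = sorted(articles, key=lambda a: rank_score(*a[1:]), reverse=True)
print([a[0] for a in ranked])
```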
21 pages, 2212 KiB  
Article
The Whale Optimization Algorithm and Markov Chain Monte Carlo-Based Approach for Optimizing Teacher Professional Development in Creative Learning Design with Technology
by Kalliopi Rigopouli, Dimitrios Kotsifakos and Yannis Psaromiligkos
Algorithms 2025, 18(7), 407; https://doi.org/10.3390/a18070407 - 2 Jul 2025
Abstract
In this article, we present a hybrid optimization methodology using the whale optimization algorithm and Markov Chain Monte Carlo sampling technique in a teachers’ training development program regarding creativity in technology-enhanced learning design. Finding the best possible training for creativity in learning design [...] Read more.
In this article, we present a hybrid optimization methodology using the whale optimization algorithm and the Markov Chain Monte Carlo sampling technique in a teacher training development program regarding creativity in technology-enhanced learning design. Finding the best possible training for creativity in learning design with technology is a complex task, as many dynamic and multi-modal variables need to be taken into consideration. When designing the best possible training, the whale optimization algorithm helped us determine the right methods, resources, content, and assessment. A further Markov Chain Monte Carlo-based approach allowed us to verify that these training parameters were correct. In this article, we show that metaheuristic algorithms like the whale optimization algorithm, validated by a Markov chain technique like Markov Chain Monte Carlo, can help not only in areas like machine learning but also in fields without structured data, like creativity in technology-enhanced learning design. The best possible training for a teacher’s professional development in creative learning design is collaborative, hands-on, and utilizes creativity definitions for the product along with technology integration learning design models. Full article
(This article belongs to the Special Issue Artificial Intelligence Algorithms and Generative AI in Education)
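A minimal whale optimization algorithm, shown here on a toy sphere objective (in the paper the decision vector instead encodes training methods, resources, content, and assessment; this sketch follows the standard WOA description, not the authors' exact configuration):

```python
import numpy as np

def woa(f, dim=2, n_whales=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                # control parameter: 2 -> 0
        for i in range(n_whales):
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if np.linalg.norm(A) < 1:        # exploit: encircle best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                            # explore: move toward random whale
                    rand = X[rng.integers(n_whales)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                # bubble-net spiral around best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best

best = woa(lambda x: np.sum(x**2))
print(best)  # near the optimum [0, 0]
```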
26 pages, 5350 KiB  
Article
Secure Image Transmission Using Multilevel Chaotic Encryption and Video Steganography
by Suhad Naji Alrekaby, Maisa’a Abid Ali Khodher, Layth Kamil Adday and Reem Aljuaidi
Algorithms 2025, 18(7), 406; https://doi.org/10.3390/a18070406 - 1 Jul 2025
Abstract
The swift advancement of information and communication technology has made it increasingly difficult to guarantee the security of transmitted data. Traditional encryption techniques, particularly in multimedia applications, frequently fail to defend against sophisticated attacks, such as chosen-plaintext, differential, and statistical analysis attacks. More [...] Read more.
The swift advancement of information and communication technology has made it increasingly difficult to guarantee the security of transmitted data. Traditional encryption techniques, particularly in multimedia applications, frequently fail to defend against sophisticated attacks, such as chosen-plaintext, differential, and statistical analysis attacks. More often than not, traditional cryptographic methods lack proper diffusion and sufficient randomness, which is why they are vulnerable to these types of attacks. By combining multi-level chaotic maps with Least Significant Bit (LSB) steganography and Advanced Encryption Standard (AES) encryption, this study proposes an improved security approach for image transmission. A hybrid chaotic system dynamically creates the encryption keys, guaranteeing high unpredictability and resistance to brute-force attacks. The encrypted images are then embedded into video frames, making the secret data difficult to locate. The hybrid approach improves data secrecy and resistance to various cryptographic attacks. Experimental results confirm the efficiency of the suggested technique, which achieves entropy values around 7.99, number of pixels change rate (NPCR) values above 99.63%, and unified average changing intensity (UACI) values over 31.98%, demonstrating resilience to statistical attacks and ensuring the secure transmission of sensitive images while maintaining video imperceptibility. Full article
(This article belongs to the Section Parallel and Distributed Algorithms)
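The entropy, NPCR, and UACI figures quoted above are standard cipher-image metrics; a sketch of their computation on random stand-in images (a real evaluation would use the actual cipher images produced by the scheme):

```python
import numpy as np

def entropy(img):
    # Shannon entropy of the 8-bit histogram; ideal cipher image -> close to 8
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    hist = hist[hist > 0]
    return -np.sum(hist * np.log2(hist))

def npcr(c1, c2):
    # Number of Pixels Change Rate between two cipher images, in percent
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    # Unified Average Changing Intensity, in percent
    return 100.0 * np.mean(np.abs(c1.astype(int) - c2.astype(int)) / 255.0)

rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # stand-in cipher image
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)  # cipher of 1-pixel-changed plain image
print(entropy(c1), npcr(c1, c2), uaci(c1, c2))
```

For uniformly random 8-bit images these land near the ideal values (entropy about 7.99, NPCR about 99.6%, UACI about 33.5%), which is why the reported numbers indicate strong diffusion.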
24 pages, 842 KiB  
Article
Predicting the Magnitude of Earthquakes Using Grammatical Evolution
by Constantina Kopitsa, Ioannis G. Tsoulos and Vasileios Charilogis
Algorithms 2025, 18(7), 405; https://doi.org/10.3390/a18070405 - 1 Jul 2025
Abstract
Throughout history, human societies have sought to explain natural phenomena through the lens of mythology. Earthquakes, as sudden and often devastating events, have inspired a range of symbolic and mythological interpretations across different civilizations. It was not until the 18th and 19th centuries [...] Read more.
Throughout history, human societies have sought to explain natural phenomena through the lens of mythology. Earthquakes, as sudden and often devastating events, have inspired a range of symbolic and mythological interpretations across different civilizations. It was not until the 18th and 19th centuries that a more positivist and scientific approach began to emerge regarding the explanation of earthquakes, recognizing their origin as stemming from processes occurring beneath the Earth’s surface. A pivotal moment in the emergence of modern seismology was the Lisbon earthquake of 1755, which marked a significant shift towards scientific inquiry. The question of how earthquakes occur has since been resolved: thanks to advancements in scientific, geological, and geophysical research, it is now well understood that seismic events result from the collision and movement of lithospheric, or tectonic, plates. The contemporary challenge, however, lies in whether such seismic phenomena can be accurately predicted. In this paper, a systematic attempt is made to use techniques based on Grammatical Evolution to estimate the magnitude of earthquakes. These techniques use freely available data in which the history of large earthquakes is introduced before the application of the proposed techniques. The experiments make clear that these techniques allow for more effective estimation of earthquake magnitude than other machine learning techniques from the relevant literature. Full article
(This article belongs to the Special Issue Algorithms in Data Classification (3rd Edition))
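At the heart of Grammatical Evolution is the codon-modulo-rules mapping from an integer genotype to an expression; a minimal sketch with a toy grammar (the variable names are placeholders, not the paper's feature set):

```python
# Toy BNF grammar: each nonterminal maps to a list of productions.
GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<var>"], ["<const>"]],
    "<var>": [["depth"], ["lat"], ["lon"]],
    "<const>": [["1.0"], ["2.5"]],
}

def map_genotype(codons, symbol="<expr>", max_wraps=2):
    out, stack, i, wraps = [], [symbol], 0, 0
    while stack:
        sym = stack.pop(0)                 # always expand the leftmost symbol
        if sym not in GRAMMAR:
            out.append(sym)                # terminal: emit it
            continue
        if i >= len(codons):               # wrap the chromosome if codons run out
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                raise ValueError("mapping failed")
        rules = GRAMMAR[sym]
        choice = rules[codons[i] % len(rules)]  # codon mod #productions picks a rule
        i += 1
        stack = list(choice) + stack
    return " ".join(out)

print(map_genotype([0, 1, 0, 2, 1]))  # -> depth + 2.5
```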
20 pages, 1688 KiB  
Article
Unveiling the Shadows—A Framework for APT’s Defense AI and Game Theory Strategy
by Pedro Brandão and Carla Silva
Algorithms 2025, 18(7), 404; https://doi.org/10.3390/a18070404 - 1 Jul 2025
Abstract
Advanced persistent threats (APTs) pose significant risks to critical systems and infrastructures due to their stealth and persistence. While several studies have reviewed APT characteristics and defense mechanisms, this paper goes further by proposing a hybrid defense framework based on artificial intelligence and [...] Read more.
Advanced persistent threats (APTs) pose significant risks to critical systems and infrastructures due to their stealth and persistence. While several studies have reviewed APT characteristics and defense mechanisms, this paper goes further by proposing a hybrid defense framework based on artificial intelligence and game theory. First, a literature review outlines the evolution, methodologies, and known incidents of APTs. Then, a novel conceptual framework is presented, integrating unsupervised anomaly detection (isolation forest) and strategic defense modeling (Stackelberg game). Experimental results on simulated data demonstrate the robustness and scalability of the approach. In addition to reviewing current APT detection techniques, this work presents a defense model that integrates machine learning-based anomaly detection with predictive game-theoretic modeling. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
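The game-theoretic side can be pictured as a tiny Stackelberg security game: the defender commits to coverage probabilities, the attacker observes them and best-responds. All payoff numbers below are invented and the paper's model is richer:

```python
# Attacker gets a reward on an uncovered asset, a penalty on a covered one.
REWARD = {"server": 10.0, "workstation": 4.0}
PENALTY = {"server": -5.0, "workstation": -2.0}
DEF_LOSS = {"server": -10.0, "workstation": -4.0}  # defender loss if attack succeeds

def attacker_payoff(target, cover):
    c = cover[target]
    return c * PENALTY[target] + (1 - c) * REWARD[target]

def defender_value(cover):
    # Attacker best-responds to the observed coverage (Stackelberg follower).
    target = max(REWARD, key=lambda t: attacker_payoff(t, cover))
    return (1 - cover[target]) * DEF_LOSS[target]  # zero loss when attack is caught

# One unit of sensing capacity to split; grid-search the leader's commitment.
grid = ({"server": i / 100, "workstation": 1 - i / 100} for i in range(101))
best = max(grid, key=defender_value)
print(best, defender_value(best))
```

The optimum sits near the point where the attacker is indifferent between targets, the hallmark of Stackelberg security strategies.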
31 pages, 8397 KiB  
Article
Research on APF-Dijkstra Path Planning Fusion Algorithm Based on Steering Model and Volume Constraints
by Xizheng Wang, Gang Li and Zijian Bian
Algorithms 2025, 18(7), 403; https://doi.org/10.3390/a18070403 - 1 Jul 2025
Abstract
For the local oscillation phenomenon of the APF algorithm in the face of static U-shaped obstacles, the path cusp phenomenon caused by the vehicle corner and path curvature constraints is not taken into account, as well as the low path safety caused by [...] Read more.
The classical APF algorithm exhibits local oscillation in the face of static U-shaped obstacles, produces path cusps because vehicle steering-angle and path-curvature constraints are not taken into account, and yields low path safety because vehicle volume constraints are ignored. Therefore, an APF-Dijkstra path planning fusion algorithm based on a steering model and volume constraints is proposed to address these issues. First, the obstacles in the map are expanded, and the search direction of the Dijkstra algorithm and its planned global path are optimized to ensure that the distance between the path and the expanded grid is no less than 1 m; the path points serve as temporary target points for the APF algorithm. Second, a Gaussian function is introduced to optimize the potential energy function of the APF algorithm, U-shaped obstacles are approximated as ellipses, and a virtual target point provides the gravitational force. Third, the three-point arc method based on the steering model is used to determine the location of the predicted points and to smooth the paths in real time while constraining the steering angle. Finally, a 4.5 m × 2.5 m vehicle rectangle is used instead of the traditional mass point, making the algorithm volume-constrained. Meanwhile, a vehicle collision detection model is established that covers the rectangle boundary with 14 envelope circles, and the resultant force computed for the mass point is replaced by the resultant force computed for the envelope circles to further improve path safety. The algorithm is validated by simulation experiments, and the results show that the fusion algorithm avoids static U-shaped obstacles and dynamic obstacles well; the curvature change rates of the obstacle avoidance paths are 0.248, 0.162, and 0.169, and the curvature standard deviation is 0.16, verifying the smoothness of the fusion algorithm. 
Meanwhile, the distances between the obstacles and the center of the vehicle's rear axle all exceed 1.60 m, verifying the safety of the fusion algorithm. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
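The underlying APF mechanics (before the paper's Gaussian and volume modifications) reduce to gradient steps on an attractive-plus-repulsive potential; a textbook sketch with made-up gains and a single point obstacle:

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=5.0, d0=1.5, step=0.05):
    force = k_att * (goal - q)                       # attractive pull toward the goal
    for obs in obstacles:
        d = np.linalg.norm(q - obs)
        if 0.0 < d < d0:                             # repel only inside influence radius d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (q - obs) / d
    return q + step * force / np.linalg.norm(force)  # fixed-size step along net force

q, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.3])]                   # slightly off the straight line
for _ in range(1000):
    if np.linalg.norm(q - goal) < 0.1:
        break
    q = apf_step(q, goal, obstacles)
print(q)
```

With the obstacle offset from the straight line the robot skirts it and reaches the goal; a symmetric U-shaped obstacle would trap this basic scheme, which is exactly the failure mode the fusion algorithm above is built to avoid.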
33 pages, 1215 KiB  
Article
On the Extended Simple Equations Method (SEsM) for Obtaining Numerous Exact Solutions to Fractional Partial Differential Equations: A Generalized Algorithm and Several Applications
by Elena V. Nikolova
Algorithms 2025, 18(7), 402; https://doi.org/10.3390/a18070402 - 30 Jun 2025
Abstract
In this article, we present the extended simple equations method (SEsM) for finding exact solutions to systems of fractional nonlinear partial differential equations (FNPDEs). The expansions made to the original SEsM algorithm are implemented in several directions: (1) In constructing analytical solutions: exact [...] Read more.
In this article, we present the extended simple equations method (SEsM) for finding exact solutions to systems of fractional nonlinear partial differential equations (FNPDEs). The expansions made to the original SEsM algorithm are implemented in several directions: (1) In constructing analytical solutions: exact solutions to FNPDE systems are presented by simple or complex composite functions, including combinations of solutions to two or more different simple equations with distinct independent variables (corresponding to different wave velocities); (2) in selecting appropriate fractional derivatives and appropriate wave transformations: the choice of the type of fractional derivatives for each system of FNPDEs depends on the physical nature of the modeled real process. Based on this choice, the range of applicable wave transformations that are used to reduce FNPDEs to nonlinear ODEs has been expanded. It includes not only various forms of fractional traveling wave transformations but also standard traveling wave transformations. Based on these methodological enhancements, a generalized SEsM algorithm has been developed to derive exact solutions of systems of FNPDEs. This algorithm provides multiple options at each step, enabling the user to select the most appropriate variant depending on the expected wave dynamics in the modeled physical context. Two specific variants of the generalized SEsM algorithm have been applied to obtain exact solutions to two time-fractional shallow-water-like systems. For generating these exact solutions, it is assumed that each system variable in the studied models exhibits multi-wave behavior, which is expressed as a superposition of two waves propagating at different velocities. As a result, numerous novel multi-wave solutions are derived, involving combinations of hyperbolic-like, elliptic-like, and trigonometric-like functions. 
The obtained analytical solutions can provide valuable qualitative insights into complex wave dynamics in generalized spatio-temporal dynamical systems, with relevance to areas such as ocean current modeling, multiphase fluid dynamics and geophysical fluid modeling. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
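For readers unfamiliar with the reduction step, one commonly used fractional travelling-wave transformation (written here for a conformable time derivative of order α; the generalized SEsM admits this alongside other fractional and classical transformations) is:

```latex
u(x,t) = U(\xi), \qquad \xi = kx - \frac{v}{\alpha}\, t^{\alpha}, \qquad 0 < \alpha \le 1,
```

so that $D_t^{\alpha}u = -v\,U'(\xi)$ and $u_x = k\,U'(\xi)$, turning the FNPDE into a nonlinear ODE in $U(\xi)$. For the multi-wave solutions described above, each system variable is taken as a superposition $U_1(\xi_1) + U_2(\xi_2)$ with $\xi_i = k_i x - (v_i/\alpha)\,t^{\alpha}$ and distinct wave velocities $v_1 \neq v_2$.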
30 pages, 3292 KiB  
Review
Smart and Secure Healthcare with Digital Twins: A Deep Dive into Blockchain, Federated Learning, and Future Innovations
by Ezz El-Din Hemdan and Amged Sayed
Algorithms 2025, 18(7), 401; https://doi.org/10.3390/a18070401 - 30 Jun 2025
Abstract
In recent years, cutting-edge technologies, such as artificial intelligence (AI), blockchain, and digital twin (DT), have revolutionized the healthcare sector by enhancing public health and treatment quality through precise diagnosis, preventive measures, and real-time care capabilities. Despite these advancements, the massive amount of [...] Read more.
In recent years, cutting-edge technologies, such as artificial intelligence (AI), blockchain, and digital twin (DT), have revolutionized the healthcare sector by enhancing public health and treatment quality through precise diagnosis, preventive measures, and real-time care capabilities. Despite these advancements, the massive amount of generated biomedical data poses substantial challenges associated with information security, privacy, and scalability. Applying blockchain in healthcare-based digital twins ensures data integrity, immutability, consistency, and security, making it a critical component in addressing these challenges. Federated learning (FL) has also emerged as a promising AI technique to enhance privacy and enable decentralized data processing. This paper investigates the integration of digital twin concepts with blockchain and FL in the healthcare domain, focusing on their architecture and applications. It also explores platforms and solutions that leverage these technologies for secure and scalable medical implementations. A case study on federated learning for electroencephalogram (EEG) signal classification is presented, demonstrating its potential as a diagnostic tool for brain activity analysis and neurological disorder detection. Finally, we highlight the key challenges, emerging opportunities, and future directions in advancing healthcare digital twins with blockchain and federated learning, paving the way for a more intelligent, secure, and privacy-preserving medical ecosystem. Full article
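The aggregation step behind federated learning case studies like the EEG one is typically FedAvg: the server replaces the global weights by the data-weighted mean of client updates, so clients (e.g. hospitals holding EEG recordings) share only weight vectors, never raw signals. A minimal sketch with invented weights and client sizes:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    # Weighted average of client parameter vectors, weights = local dataset sizes.
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)           # shape: (n_clients, n_params)
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

w_a = np.array([1.0, 0.0])   # hypothetical update from hospital A (100 recordings)
w_b = np.array([0.0, 1.0])   # hypothetical update from hospital B (300 recordings)
print(fedavg([w_a, w_b], [100, 300]))  # -> [0.25 0.75]
```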
22 pages, 3658 KiB  
Article
The True Shortest Path of Obstacle Grid Graph Is Solved by SGP Vertex Extraction and Filtering Algorithm
by Yijie Zhang and Jizhou Chen
Algorithms 2025, 18(7), 400; https://doi.org/10.3390/a18070400 - 29 Jun 2025
Abstract
In the obstacle grid map, due to the limitations in search direction imposed by classical path algorithms and meta-heuristic algorithms, the shortest paths are not the true shortest paths (TSPs) but rather the shortest grid paths (SGPs). This paper introduces an SGP vertex [...] Read more.
In the obstacle grid map, due to the limitations in search direction imposed by classical path algorithms and meta-heuristic algorithms, the shortest paths found are not the true shortest paths (TSPs) but rather the shortest grid paths (SGPs). This paper introduces an SGP vertex extraction and filtering algorithm (SGPVEFA) that identifies key nodes within SGPs. After screening, these nodes yield TSPs under the same conditions. Experiments show that the proposed SGPVEFA recovers the true shortest path and compares favorably with recent algorithms; its advantage becomes more significant as map scale and obstacle rate increase. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)
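The gap between an SGP and a TSP can be illustrated with a simple vertex-filtering pass: keep only the grid-path vertices needed to preserve line of sight, yielding a shorter any-angle path. The sampling-based visibility check and the example map are our own simplification, not the paper's extraction rules:

```python
def line_of_sight(a, b, blocked):
    # Sample points along segment a-b and reject it if any lands on a blocked cell.
    steps = 4 * max(abs(b[0] - a[0]), abs(b[1] - a[1]), 1)
    for i in range(steps + 1):
        t = i / steps
        x = a[0] + t * (b[0] - a[0])
        y = a[1] + t * (b[1] - a[1])
        if (round(x), round(y)) in blocked:
            return False
    return True

def filter_vertices(path, blocked):
    # Greedily jump to the farthest visible vertex, dropping the ones between.
    kept, i = [path[0]], 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], blocked):
            j -= 1
        kept.append(path[j])
        i = j
    return kept

sgp = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (3, 2), (4, 2)]  # a grid path
blocked = {(1, 1)}                                               # one obstacle cell
print(filter_vertices(sgp, blocked))  # -> [(0, 0), (2, 0), (4, 2)]
```

The filtered path replaces four axis-aligned moves with one diagonal segment, shortening the route while still avoiding the obstacle.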