Algorithms, Volume 18, Issue 7 (July 2025) – 78 articles

Cover Story: We show how path planning algorithms can be enhanced with situational detail to capture the individualized factors that connect people to space. This is useful in facility design software, where a physical design plan is known, but how people, groups, and crowds might contextually inhabit that space is not. We show that well-known planning algorithms can be extended with node-based architectures that incorporate individualized perspectives when and where needed, so that a hyper-localized (human) enacted situational context can be nested within physical (design) site considerations. We demonstrate a proof of concept for use in the Unity 3D modeling platform.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 1088 KiB  
Article
The Specialist’s Paradox: Generalist AI May Better Organize Medical Knowledge
by Carlo Galli, Maria Teresa Colangelo, Marco Meleti and Elena Calciolari
Algorithms 2025, 18(7), 451; https://doi.org/10.3390/a18070451 - 21 Jul 2025
Abstract
This study investigates the ability of six pre-trained sentence transformers to organize medical knowledge by performing unsupervised clustering on 70 high-level Medical Subject Headings (MeSH) terms across seven medical specialties. We evaluated models from different pre-training paradigms: general-purpose, domain-adapted, and from-scratch domain-specific. The results reveal a clear performance hierarchy. A top tier of models, including the general-purpose MPNet and the domain-adapted BioBERT and RoBERTa, produced highly coherent, specialty-aligned clusters (Adjusted Rand Index > 0.80). Conversely, models pre-trained from scratch on specialized corpora, such as PubMedBERT and BioClinicalBERT, performed poorly (Adjusted Rand Index < 0.51), with BioClinicalBERT yielding a disorganized clustering. These findings challenge the assumption that domain-specific pre-training guarantees superior performance for all semantic tasks. We conclude that model architecture, alignment between the pre-training objective and the downstream task, and the nature of the training data are more critical determinants of success for creating semantically coherent embedding spaces for medical concepts. Full article
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)
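A minimal sketch of the evaluation pipeline described above: cluster term embeddings without supervision and score the result against specialty labels with the Adjusted Rand Index (ARI). The random Gaussian blobs below are illustrative stand-ins for real sentence-transformer embeddings of MeSH terms; the 7 × 10 layout mirrors the paper's seven specialties with ten terms each.

```python
# Sketch: unsupervised clustering of "embeddings" scored with ARI.
# A real run would embed the 70 MeSH term strings with a
# sentence-transformer model instead of sampling blobs.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
n_specialties, terms_per_specialty, dim = 7, 10, 32

# Synthetic "embeddings": one tight Gaussian blob per specialty.
centers = rng.normal(size=(n_specialties, dim))
X = np.vstack([c + 0.1 * rng.normal(size=(terms_per_specialty, dim))
               for c in centers])
labels_true = np.repeat(np.arange(n_specialties), terms_per_specialty)

labels_pred = KMeans(n_clusters=n_specialties, n_init=10,
                     random_state=0).fit_predict(X)
ari = adjusted_rand_score(labels_true, labels_pred)  # 1.0 = perfect agreement
```

An ARI above 0.80, as reported for the top-tier models, indicates clusters that closely follow the specialty boundaries.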

34 pages, 2713 KiB  
Article
EpiInfer: A Non-Markovian Method and System to Forecast Infection Rates in Epidemics
by Jovan Kascelan, Ruoxi Yang and Dennis Shasha
Algorithms 2025, 18(7), 450; https://doi.org/10.3390/a18070450 - 21 Jul 2025
Abstract
Consider an evolving epidemic in which each person is either (S) susceptible and healthy; (E) exposed, contagious but asymptomatic; (I) infected, symptomatic, and quarantined; or (R) recovered, healthy, and susceptible. The inference problem, given (i) who is showing symptoms (I) and who is not (S, E, R) and (ii) the distribution of meetings among people each day, is to predict the number of infected people (state I) in future days (e.g., 1 through 20 days into the future) for the purpose of planning resources (e.g., needles, medicine, staffing) and policy responses (e.g., masking). Each prediction horizon has different uses. For example, staffing may require forecasts of only a few days, while logistics (i.e., which supplies to order) may require a two- or three-week horizon. Our algorithm and system, EpiInfer, takes a non-Markovian approach to forecasting infection rates: it looks at infection rates over the past several days in order to make predictions about the future. In addition, it estimates (i) the distribution of the number of meetings per person and (ii) the transition probabilities between states, and uses those estimates to forecast future infection rates. On both simulated and real data, EpiInfer performs better than the standard (in epidemiology) differential equation approaches as well as general-purpose neural network approaches. Compared to ARIMA, EpiInfer is better starting with 6-day forecasts, while ARIMA is better for shorter horizons. Our operational recommendation is therefore to use ARIMA(1,1,1) for short predictions (5 days or less) and EpiInfer thereafter. Doing so would reduce the relative Root Mean Squared Error (RMSE) over any state-of-the-art method by up to a factor of 4. Predictions of this accuracy could be useful for personnel, supply, and policy planning. Full article
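The non-Markovian idea, forecasting from several past days rather than from the current state alone, can be sketched as follows. The daily counts and the linear-trend extrapolation are illustrative stand-ins, not the EpiInfer algorithm itself.

```python
# Sketch: a forecast that uses a window of past days (non-Markovian)
# instead of only today's state. Illustrative data and predictor.
import numpy as np

# Observed daily infected counts (made-up numbers).
infected = np.array([5, 7, 10, 14, 19, 26, 35, 46], dtype=float)

def forecast(history, horizon, window=4):
    """Fit a line to the last `window` days and extrapolate `horizon` days ahead."""
    y = history[-window:]
    t = np.arange(window)
    slope, intercept = np.polyfit(t, y, 1)   # uses several past days, not just today
    return intercept + slope * (window - 1 + horizon)

pred_1day = forecast(infected, horizon=1)    # 54.0 for this series
pred_7day = forecast(infected, horizon=7)    # 108.0 for this series
```

The abstract's recommendation would correspond to switching between such a short-horizon model (ARIMA) and EpiInfer at around a 5-to-6-day horizon.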

23 pages, 906 KiB  
Article
Detection Model for 5G Core PFCP DDoS Attacks Based on Sin-Cos-bIAVOA
by Zheng Ma, Rui Zhang and Lang Gao
Algorithms 2025, 18(7), 449; https://doi.org/10.3390/a18070449 - 21 Jul 2025
Abstract
The development of 5G environments has several advantages, including accelerated data transfer speeds, reduced latency, and improved energy efficiency. Nevertheless, it also increases the risk of severe cybersecurity issues, including a complex and enlarged attack surface, privacy concerns, and security threats to 5G core network functions. A 5G core network DDoS attack detection model has been proposed that utilizes a binary improved non-Bald Eagle optimization algorithm (Sin-Cos-bIAVOA), originally designed for IoT DDoS detection, to select effective features for DDoS attacks. This approach employs a novel composite transfer function (Sin-Cos) to enhance exploration. The proposed method's performance is compared with classical algorithms on the 5G Core PFCP DDoS attacks dataset. After rigorous testing across a spectrum of attack scenarios, the proposed detection model achieves higher detection accuracy than traditional DDoS detection algorithms, making it better equipped to identify and mitigate DDoS attacks against this critical infrastructure. Full article
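In binary metaheuristics for feature selection, a transfer function maps a continuous search position to a probability of keeping each feature. The composite sine/cosine form below is an illustrative guess at that mechanism, not the paper's exact Sin-Cos function.

```python
# Sketch: binarizing a continuous optimizer position via a transfer
# function, as done in wrapper feature selection. The |sin*cos| blend
# here is hypothetical; only the general mechanism matches the paper.
import numpy as np

def sin_cos_transfer(x):
    """Map a real-valued position to a selection probability in [0, 1]."""
    return np.abs(np.sin(x) * np.cos(x)) * 2.0   # equals |sin(2x)|, so in [0, 1]

rng = np.random.default_rng(1)
position = rng.normal(size=10)        # one candidate's continuous position
prob = sin_cos_transfer(position)     # per-feature keep probability
mask = rng.random(10) < prob          # True = feature kept
selected = np.flatnonzero(mask)       # indices of selected features
```

A classifier would then be trained on only the selected feature columns, and its accuracy fed back as the optimizer's fitness.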

23 pages, 3578 KiB  
Article
High-Precision Chip Detection Using YOLO-Based Methods
by Ruofei Liu and Junjiang Zhu
Algorithms 2025, 18(7), 448; https://doi.org/10.3390/a18070448 - 21 Jul 2025
Abstract
Machining chips are directly related to both the machining quality and tool condition. However, detecting chips from images in industrial settings poses challenges in terms of model accuracy and computational speed. We first present a novel framework called GM-YOLOv11-DNMS to track the chips, followed by a video-level post-processing algorithm for chip counting in videos. GM-YOLOv11-DNMS has two main improvements: (1) it replaces the CNN layers with a ghost module in YOLOv11n, significantly reducing the computational cost while maintaining the detection performance, and (2) it uses a new dynamic non-maximum suppression (DNMS) method, which dynamically adjusts the thresholds to improve the detection accuracy. The post-processing method uses a trigger signal from rising edges to improve chip counting in video streams. Experimental results show that the ghost module reduces the FLOPs from 6.48 G to 5.72 G compared to YOLOv11n, with a negligible accuracy loss, while the DNMS algorithm improves the debris detection precision across different YOLO versions. The proposed framework achieves precision, recall, and mAP@0.5 values of 97.04%, 96.38%, and 95.56%, respectively, in image-based detection tasks. In video-based experiments, the proposed video-level post-processing algorithm combined with GM-YOLOv11-DNMS achieves a crack–debris counting accuracy of 90.14%. This lightweight and efficient approach is particularly effective in detecting small-scale objects within images and accurately analyzing dynamic debris in video sequences, providing a robust solution for automated debris monitoring in machine tool processing applications. Full article
(This article belongs to the Special Issue Machine Learning Models and Algorithms for Image Processing)
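Standard non-maximum suppression (NMS) discards detections that overlap a higher-scoring box beyond a fixed IoU threshold; the dynamic variant adjusts that threshold at run time. The sketch below shows the mechanism with an illustrative density-based adjustment rule, not the paper's DNMS formula.

```python
# Sketch: NMS with a dynamically adjusted IoU threshold. The rule
# "crowded scenes get a stricter threshold" is illustrative only.
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def dynamic_nms(boxes, scores, base_thr=0.5):
    order = np.argsort(scores)[::-1]          # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([iou(boxes[i], boxes[j]) for j in rest])
        # Illustrative dynamic rule: tighten the threshold when crowded.
        thr = base_thr * (0.8 if rest.size > 5 else 1.0)
        order = rest[ious <= thr]             # drop boxes overlapping the keeper
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = dynamic_nms(boxes, scores)   # the two overlapping boxes collapse to one
```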

23 pages, 1473 KiB  
Article
Integrating Inferential Statistics and Systems Dynamics: A Study of Short-Term Happiness Evolution in Response to a Dose of Alcohol and Caffeine
by Salvador Amigó, Antonio Caselles, Joan C. Micó and Pantaleón D. Romero
Algorithms 2025, 18(7), 447; https://doi.org/10.3390/a18070447 - 21 Jul 2025
Abstract
This paper compares two methods, inferential statistics and Systems Dynamics, for studying the evolution of individual happiness after a single dose of drug consumption. In an application case, the effect of alcohol and caffeine on happiness is analyzed through a single-case experimental design, with replication, involving two participants. Both inferential statistical analysis and Systems Dynamics methods were used to analyze the results. Two scales were used to measure happiness, the Euphoria Scale (ES) and the Smiling Face Scale (SFS), in trait and state formats. A single-case experimental ABC design was used: Phase A had no treatment, while in Phases B and C both subjects received 26.51 mL of alcohol and 330 mg of caffeine, respectively. The participants filled in a form with both scales in state format every 10 min over a 3 h period during each of the three phases A, B, and C. The main conclusion of the analysis is that both methods provide similar results about the evolution of individual happiness after single-dose consumption. The article therefore shows that inferential statistics and the stimulus-response model derived from the Systems Dynamics approach can be used in a complementary and enriching way to obtain prediction results. Full article

28 pages, 2612 KiB  
Article
Optimizing Economy with Comfort in Climate Control System Scheduling for Indoor Ice Sports Venues’ Spectator Zones Considering Demand Response
by Zhuoqun Du, Yisheng Liu, Yuyan Xue and Boyang Liu
Algorithms 2025, 18(7), 446; https://doi.org/10.3390/a18070446 - 20 Jul 2025
Abstract
With the growing popularity of ice sports, indoor ice sports venues are drawing an increasing number of spectators. Maintaining comfort in spectator zones presents a significant challenge for the operational scheduling of climate control systems, which integrate ventilation, heating, and dehumidification functions. To explore the potential for cost savings while ensuring user comfort, this study proposes a demand response-integrated optimization model for climate control systems. To enhance the model's practicality and decision-making efficiency, a two-stage optimization method is proposed that combines multi-objective optimization algorithms with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS). For algorithm comparison, the performance of three typical multi-objective optimization algorithms (NSGA-II, standard MOEA/D, and Multi-Objective Brown Bear Optimization (MOBBO)) is systematically evaluated. The results show that NSGA-II demonstrates the best overall performance based on evaluation metrics including runtime, HV, and IGD. Simulations conducted in China's cold regions show that, under comparable comfort levels, schedules incorporating dynamic tariffs are significantly more economically efficient than those that do not: they reduce operating costs by 25.3%, 24.4%, and 18.7% on typical summer, transitional, and winter days, respectively. Compared to single-objective approaches that focus solely on either comfort enhancement or cost reduction, the proposed multi-objective model achieves a better balance between user comfort and economic performance. This study not only provides an efficient and sustainable solution for climate control scheduling in energy-intensive buildings such as ice sports venues but also offers a valuable methodological reference for energy management and optimization in similar settings. Full article
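The second stage of such a pipeline uses TOPSIS to pick one schedule from the Pareto front produced by the multi-objective optimizer. Below is a standard TOPSIS sketch on an illustrative front of (cost, discomfort) points, both minimized, with equal weights assumed for the two criteria.

```python
# Sketch: TOPSIS ranking of Pareto-front alternatives. The candidate
# points and equal weights are illustrative, not the paper's data.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives; benefit[j] is True if criterion j is maximized."""
    norm = matrix / np.linalg.norm(matrix, axis=0)     # vector normalization
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)                # closeness: higher is better

pareto = np.array([[100.0, 0.9],    # cheap but uncomfortable
                   [140.0, 0.5],    # middle ground
                   [200.0, 0.2]])   # comfortable but expensive
closeness = topsis(pareto, weights=np.array([0.5, 0.5]),
                   benefit=np.array([False, False]))   # both criteria minimized
best = int(np.argmax(closeness))
```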

20 pages, 817 KiB  
Systematic Review
Domain-Specific Languages for Algorithmic Graph Processing: A Systematic Literature Review
by Houda Boukham, Kawtar Younsi Dahbi and Dalila Chiadmi
Algorithms 2025, 18(7), 445; https://doi.org/10.3390/a18070445 - 19 Jul 2025
Abstract
Graph analytics has grown increasingly popular as a model for data analytics across a variety of domains. This has prompted an emergence of solutions for large-scale graph analytics, many of which integrate user-facing domain-specific languages (DSLs) to support graph processing operations. These DSLs fall into two categories: query-based DSLs for graph-pattern matching and graph algorithm DSLs. While graph query DSLs are now standardized, research on DSLs for algorithmic graph processing remains fragmented and lacks a cohesive framework. To address this gap, we conduct a systematic literature review of algorithmic graph processing DSLs aimed at large-scale graph analytics. Our findings reveal the prevalence of property graphs (with 60% of surveyed DSLs explicitly adopting this model), as well as notable similarities in syntax and features. This allows us to identify a common template that can serve as the foundation for a standardized graph algorithm model, improving portability and unifying design between different DSLs and graph analytics toolkits. We additionally find that, despite achieving remarkable performance and scalability, only 20% of surveyed DSLs see real-life adoption. Incidentally, all DSLs for which user documentation is available are developed as part of academia–industry collaborations or in fully industrial contexts. Based on these results, we provide a comprehensive overview of the current research landscape, along with a roadmap of recommendations and future directions to enhance reusability and interoperability in large-scale graph analytics across industry and academia. Full article
(This article belongs to the Special Issue Graph and Hypergraph Algorithms and Applications)

22 pages, 22865 KiB  
Article
Fractional Discrete Computer Virus System: Chaos and Complexity Algorithms
by Ma’mon Abu Hammad, Imane Zouak, Adel Ouannas and Giuseppe Grassi
Algorithms 2025, 18(7), 444; https://doi.org/10.3390/a18070444 - 19 Jul 2025
Abstract
The spread of computer viruses represents a major challenge to digital security, underscoring the need for a deeper understanding of their propagation mechanisms. This study examines the stability and chaotic dynamics of a fractional discrete Susceptible-Infected (SI) model for computer viruses, incorporating both commensurate and incommensurate fractional orders. Stability conditions are derived using the basic reproduction number R0, followed by an investigation of how varying fractional orders affect the system's behavior. To explore the system's nonlinear chaotic behavior, this study employs a suite of analytical tools, including bifurcation diagrams, phase portraits, and the maximum Lyapunov exponent (MLE). The model's complexity is confirmed through advanced complexity algorithms, including spectral entropy, approximate entropy, and the 0–1 test. These measures offer a more profound insight into the complex behavior of the system and the role of fractional order. Numerical simulations provide visual evidence of the distinct dynamics associated with commensurate and incommensurate fractional orders. These results provide insights into how fractional derivatives influence behaviors in cyberspace, which can be leveraged to design enhanced cybersecurity measures. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
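The maximum Lyapunov exponent mentioned above can be estimated for a discrete map by averaging the log-derivative along an orbit; a positive value signals chaos. The sketch below uses the logistic map as a stand-in, since the fractional SI iteration itself is more involved.

```python
# Sketch: MLE of a one-dimensional discrete map from its derivative
# along an orbit. The logistic map x -> r*x*(1-x) stands in for the
# paper's fractional SI system.
import math

def logistic_mle(r, x0=0.4, n_transient=500, n=5000):
    x = x0
    for _ in range(n_transient):               # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)))    # log |f'(x)| along the orbit
    return s / n

mle_chaotic = logistic_mle(4.0)   # positive (about ln 2): chaotic regime
mle_regular = logistic_mle(3.2)   # negative: stable period-2 orbit
```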

83 pages, 3818 KiB  
Systematic Review
Explainability and Interpretability in Concept and Data Drift: A Systematic Literature Review
by Daniele Pelosi, Diletta Cacciagrano and Marco Piangerelli
Algorithms 2025, 18(7), 443; https://doi.org/10.3390/a18070443 - 18 Jul 2025
Abstract
Explainability and interpretability have emerged as essential considerations in machine learning, particularly as models become more complex and integral to a wide range of applications. In response to increasing concerns over opaque “black-box” solutions, the literature has seen a shift toward two distinct yet often conflated paradigms: explainable AI (XAI), which refers to post hoc techniques that provide external explanations for model predictions, and interpretable AI, which emphasizes models whose internal mechanisms are understandable by design. Meanwhile, the phenomenon of concept and data drift—where models lose relevance due to evolving conditions—demands renewed attention. High-impact events, such as financial crises or natural disasters, have highlighted the need for robust interpretable or explainable models capable of adapting to changing circumstances. Against this backdrop, our systematic review aims to consolidate current research on explainability and interpretability with a focus on concept and data drift. We gather a comprehensive range of proposed models, available datasets, and other technical aspects. By synthesizing these diverse resources into a clear taxonomy, we intend to provide researchers and practitioners with actionable insights and guidance for model selection, implementation, and ongoing evaluation. Ultimately, this work aspires to serve as a practical roadmap for future studies, fostering further advancements in transparent, adaptable machine learning systems that can meet the evolving needs of real-world applications. Full article
(This article belongs to the Special Issue Machine Learning for Pattern Recognition (3rd Edition))

21 pages, 2584 KiB  
Article
Adaptive Nonlinear Proportional–Integral–Derivative Control of a Continuous Stirred Tank Reactor Process Using a Radial Basis Function Neural Network
by Joo-Yeon Lee, Gang-Gyoo Jin and Gun-Baek So
Algorithms 2025, 18(7), 442; https://doi.org/10.3390/a18070442 - 18 Jul 2025
Abstract
Temperature control in a continuous stirred tank reactor (CSTR) poses significant challenges due to the process's inherent nonlinearities and uncertain parameters. This study proposes an innovative solution by developing an adaptive nonlinear proportional–integral–derivative (NPID) controller. The nonlinear gain that dynamically scales the error fed to the integrator is enhanced for optimized performance. The network's ability to approximate nonlinear functions and its online learning capabilities are leveraged by effectively integrating an NPID control scheme with a radial basis function neural network (RBFNN). This synergistic approach provides a more robust and reliable control strategy for CSTRs. To assess the proposed method's feasibility, a set of simulations was conducted for tracking, disturbance rejection, and parameter variations. These results were compared with those of an adaptive RBFNN-based PID (APID) controller under identical conditions. The simulations indicated that the proposed method achieved reductions in maximum overshoot of 33.7% and settling time of 54.2% for upward and downward setpoint changes and 27.2% and 5.3% for downward and upward setpoint changes compared to the APID controller. For disturbance changes, the proposed method reduced the peak magnitude (Mpeak) by 4.9%, recovery time (trcy) by 23.6%, and integral absolute error (IAE) by 16.2%. Similarly, for parameter changes, the reductions were 3.0% (Mpeak), 26.4% (trcy), and 24.4% (IAE). Full article
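The core NPID idea, a gain that scales the error fed to the integrator and grows with the error magnitude, can be sketched as below. The hyperbolic-cosine gain law and its constants are a common choice in the NPID literature, assumed here for illustration; the paper's exact law may differ.

```python
# Sketch: a nonlinear gain for an NPID integrator channel. The
# cosh-based form and constants are assumptions, not the paper's law.
import math

def nonlinear_gain(error, k0=1.0, k1=0.5):
    """k(e) = k0 + k1*(cosh(e) - 1): about k0 for small e, larger for big e."""
    return k0 + k1 * (math.cosh(error) - 1.0)

def scaled_error(error):
    """The quantity fed to the integrator: k(e) * e."""
    return nonlinear_gain(error) * error

small = scaled_error(0.1)   # nearly linear for small errors
large = scaled_error(2.0)   # strongly amplified for large errors
```

The effect is fast correction of large errors without the integrator wind-up a uniformly high linear gain would cause near the setpoint.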

26 pages, 6787 KiB  
Article
Frost Resistance Prediction of Concrete Based on Dynamic Multi-Stage Optimisation Algorithm
by Xuwei Dong, Jiashuo Yuan and Jinpeng Dai
Algorithms 2025, 18(7), 441; https://doi.org/10.3390/a18070441 - 18 Jul 2025
Abstract
Concrete in cold regions is often subjected to freeze–thaw cycles, and this harsh environment seriously damages the structure of concrete and shortens its life. The frost resistance of concrete is primarily evaluated by the relative dynamic elastic modulus and the mass loss rate. To predict the frost resistance of concrete more accurately, this paper optimises four ensemble learning models, namely random forest (RF), adaptive boosting (AdaBoost), categorical boosting (CatBoost), and extreme gradient boosting (XGBoost), using a dynamic multi-stage optimisation algorithm (DMSOA). These models are trained on 7090 data records, with nine features as input variables and relative dynamic elastic modulus (RDEM) and mass loss rate (MLR) as prediction indices; six indices, the coefficient of determination (R2), mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), correlation coefficient (CC), and standard deviation ratio (SDR), are selected to evaluate the models. The results show that the DMSOA-CatBoost model exhibits the best prediction performance. The R2 values for RDEM and MLR are 0.864 and 0.885, respectively, which are 6.40% and 11.15% higher than those of the original CatBoost model. Moreover, the model performs better in error control, with significantly lower MSE, RMSE, and MAE and stronger generalization ability. Additionally, compared with two mainstream optimisation algorithms (SCA and AOA), DMSOA-CatBoost also has obvious advantages in prediction accuracy and stability. This work has significance for improving the durability and quality of concrete, helping to predict the performance of concrete in cold conditions faster and more accurately, to optimise the concrete mix ratio, and to save on engineering costs. Full article
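The regression metrics named in the abstract (R2, MSE, RMSE, MAE) can be computed directly from predictions and targets; the toy RDEM-like values below are illustrative only.

```python
# Sketch: the core evaluation metrics from the abstract on toy data.
import numpy as np

y_true = np.array([0.95, 0.90, 0.85, 0.80, 0.75])   # e.g. measured RDEM values
y_pred = np.array([0.94, 0.91, 0.83, 0.81, 0.76])   # model predictions

err = y_pred - y_true
mse = float(np.mean(err ** 2))                      # mean square error
rmse = mse ** 0.5                                   # root mean square error
mae = float(np.mean(np.abs(err)))                   # mean absolute error
ss_res = float(np.sum(err ** 2))
ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
```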

22 pages, 346 KiB  
Article
Two Extrapolation Techniques on Splitting Iterative Schemes to Accelerate the Convergence Speed for Solving Linear Systems
by Chein-Shan Liu and Botong Li
Algorithms 2025, 18(7), 440; https://doi.org/10.3390/a18070440 - 18 Jul 2025
Abstract
For splitting iterative schemes to solve systems of linear equations, an equivalent form in terms of descent and residual vectors is formulated. We propose an extrapolation technique using the new formulation, such that a new splitting iterative scheme (NSIS) can be generated from the original one simply by inserting an acceleration parameter preceding the descent vector. The spectral radius of the NSIS is proven to be smaller than that of the original scheme, so the NSIS converges faster. The orthogonality of consecutive residual vectors is built into the second NSIS, from which a stepwise varying orthogonalization factor can be derived explicitly. Multiplying the descent vector by this factor, the second NSIS is proven to be absolutely convergent. The modification is based on the maximal reduction of the residual vector norm. Two-parameter and three-parameter NSIS are investigated, wherein the optimal value of one parameter is obtained by a maximization technique. The splitting iterative schemes are unified to have the same iterative form, but are endowed with different governing equations for the descent vector. Some examples are examined to exhibit the performance of the proposed extrapolation techniques used in the NSIS. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
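The construction described above, a splitting iteration written in descent/residual form with an acceleration parameter multiplying the descent vector, can be sketched as follows. The Jacobi splitting (M = diag(A)), the small test system, and the value w = 1.2 are illustrative choices.

```python
# Sketch: x_{k+1} = x_k + w * M^{-1} r_k, where r_k = b - A x_k is the
# residual and M^{-1} r_k the descent vector. Jacobi splitting assumed.
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

def accelerated_splitting(A, b, w, iters=50):
    M_inv = 1.0 / np.diag(A)            # Jacobi: M is the diagonal of A
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x                   # residual vector
        x = x + w * (M_inv * r)         # w scales the descent vector
    return x

x = accelerated_splitting(A, b, w=1.2)  # w = 1 recovers plain Jacobi
exact = np.linalg.solve(A, b)
```

For this diagonally dominant system the accelerated iteration converges to the exact solution; a well-chosen w shrinks the spectral radius below that of the unaccelerated scheme.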
20 pages, 359 KiB  
Article
Iterative Matrix Techniques Based on Averages
by María A. Navascués
Algorithms 2025, 18(7), 439; https://doi.org/10.3390/a18070439 - 17 Jul 2025
Abstract
Matrices play an important role in modern engineering problems like artificial intelligence, biomedicine, and machine learning. The present paper proposes new algorithms to solve linear problems involving finite matrices as well as operators in infinite dimensions. It is well known that the power method for finding an eigenvalue and an eigenvector of a matrix requires the existence of a dominant eigenvalue. This article proposes an iterative method to find eigenvalues of matrices without a dominant eigenvalue. The algorithm is based on a procedure involving averages of the mapping and the independent variable. The second contribution is the computation of an eigenvector associated with a known eigenvalue of linear operators or matrices. Then, a novel numerical method for solving a linear system of equations is studied. The algorithm is especially suitable for cases where the iteration matrix has a norm equal to one, or where the standard iterative method based on fixed point approximation converges very slowly. These procedures are applied to the resolution of Fredholm integral equations of the first kind with an arbitrary kernel by means of orthogonal polynomials, and to a particular case where the kernel is separable. Regarding the latter case, this paper studies the properties of the associated Fredholm operator. Full article
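The averaging principle can be illustrated on a matrix with eigenvalues 2 and -2: the plain power method stalls because no eigenvalue dominates, but averaging the map with the identity, B = (A + I)/2, shifts the spectrum to {1.5, -0.5} and restores dominance. This sketch shows the principle only, not the paper's exact algorithm.

```python
# Sketch: averaging with the identity breaks a +2 / -2 eigenvalue tie
# so the power method converges; the shift is undone at the end.
import numpy as np

A = np.array([[0.0, 2.0], [2.0, 0.0]])       # eigenvalues +2 and -2

def power_method(M, iters=200):
    v = np.array([1.0, 0.3])                 # generic start vector
    for _ in range(iters):
        v = M @ v
        v = v / np.linalg.norm(v)
    lam = v @ (M @ v)                        # Rayleigh quotient
    return lam, v

B = (A + np.eye(2)) / 2.0                    # eigenvalue map: lam -> (lam + 1)/2
lam_B, v = power_method(B)
lam_A = 2.0 * lam_B - 1.0                    # undo the averaging shift
```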

15 pages, 3517 KiB  
Article
A High-Precision UWB-Based Indoor Positioning System Using Time-of-Arrival and Intersection Midpoint Algorithm
by Wen-Piao Lin and Yi-Shun Lu
Algorithms 2025, 18(7), 438; https://doi.org/10.3390/a18070438 - 17 Jul 2025
Abstract
This study develops a high-accuracy indoor positioning system using ultra-wideband (UWB) technology and the time-of-arrival (TOA) method. The system is built using Arduino Nano microcontrollers and DW1000 UWB chips to measure distances between anchor nodes and a mobile tag. Three positioning algorithms are tested: the triangle centroid algorithm (TCA), the inner triangle centroid algorithm (ITCA), and the proposed intersection midpoint algorithm (IMA). Experiments conducted in a 732 × 488 × 220 cm indoor environment show that TCA performs well near the center but suffers from reduced accuracy at the edges. In contrast, IMA maintains stable and accurate positioning across all test points, achieving an average error of 12.87 cm. The system offers low power consumption, fast computation, and high positioning accuracy, making it suitable for real-time indoor applications, such as patient tracking in hospitals and navigation in shopping malls, where GPS is unavailable or unreliable. Full article
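In TOA positioning, each anchor-to-tag range defines a circle around that anchor, and the tag lies where the circles meet. The sketch below uses the standard linearized least-squares TOA fix, an assumed baseline rather than the paper's intersection-midpoint variant; anchor positions echo the test room's floor dimensions.

```python
# Sketch: TOA positioning by linearizing the circle equations and
# solving in the least-squares sense. Noise-free ranges for clarity.
import numpy as np

anchors = np.array([[0.0, 0.0], [7.32, 0.0], [0.0, 4.88]])   # metres
tag = np.array([3.0, 2.0])                                   # true position
ranges = np.linalg.norm(anchors - tag, axis=1)               # ideal TOA distances

# Subtract the first circle equation from the others: 2(a_i - a_0)x = b_i.
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)                  # position estimate
```

With noisy real ranges, the same system is solved in the least-squares sense; methods such as IMA instead work from the geometry of the circle intersection points.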

29 pages, 435 KiB  
Article
Possibility and the Impossibility of Reliable Broadcast: A 1-Safe and Reliable Broadcast Algorithm in the Presence of Arbitrary Initialization
by Aisha Dabees and Mehmet Hakan Karaata
Algorithms 2025, 18(7), 437; https://doi.org/10.3390/a18070437 - 17 Jul 2025
Abstract
In this paper, we first prove that it is impossible to devise an asynchronous reliable broadcast algorithm that can start in an arbitrary initial system configuration where processes execute their actions solely based on local knowledge. We then propose the first asynchronous reliable broadcast algorithm that, starting in any arbitrary initial system configuration, ensures three properties, namely, premature feedback safety, propagation 1-safety, and propagation reliability. Premature feedback refers to the conclusion of a broadcast operation at a process even though the process has neighbors that have not yet received the most recent broadcast. Owing to the premature feedback safety property, the proposed algorithm always executes as per its specification without premature feedback and is able to implement propagation reliability with exactly-once semantics. In addition, our proposed algorithm works even if the processes that the broadcast has not yet reached are perturbed by transient faults, making it more scalable than its counterparts, which require initialization. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

23 pages, 1187 KiB  
Article
Transmit and Receive Diversity in MIMO Quantum Communication for High-Fidelity Video Transmission
by Udara Jayasinghe, Prabhath Samarathunga, Thanuj Fernando and Anil Fernando
Algorithms 2025, 18(7), 436; https://doi.org/10.3390/a18070436 - 16 Jul 2025
Viewed by 210
Abstract
Reliable transmission of high-quality video over wireless channels is challenged by fading and noise, which degrade visual quality and disrupt temporal continuity. To address these issues, this paper proposes a quantum communication framework that integrates quantum superposition with multi-input multi-output (MIMO) spatial diversity techniques to enhance robustness and efficiency in dynamic video transmission. The proposed method converts compressed videos into classical bitstreams, which are then channel-encoded and quantum-encoded into qubit superposition states. These states are transmitted over a 2×2 MIMO system employing varied diversity schemes to mitigate the effects of multipath fading and noise. At the receiver, a quantum decoder reconstructs the classical information, followed by channel decoding to retrieve the video data, and the source decoder reconstructs the final video. Simulation results demonstrate that the quantum MIMO system significantly outperforms equivalent-bandwidth classical MIMO frameworks across diverse signal-to-noise ratio (SNR) conditions, achieving a peak signal-to-noise ratio (PSNR) of up to 39.12 dB, a structural similarity index (SSIM) of up to 0.9471, and a video multi-method assessment fusion (VMAF) score of up to 92.47, with improved error resilience across various group of pictures (GOP) formats. These results highlight the potential of quantum MIMO communication for enhancing the reliability and quality of video delivery in next-generation wireless networks. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

30 pages, 893 KiB  
Review
A Comprehensive Review and Benchmarking of Fairness-Aware Variants of Machine Learning Models
by George Raftopoulos, Nikos Fazakis, Gregory Davrazos and Sotiris Kotsiantis
Algorithms 2025, 18(7), 435; https://doi.org/10.3390/a18070435 - 16 Jul 2025
Viewed by 313
Abstract
Fairness is a fundamental virtue in machine learning systems, alongside four other critical virtues: Accountability, Transparency, Ethics, and Performance (FATE + Performance). Ensuring fairness has been a central research focus, leading to the development of various mitigation strategies in the literature. These approaches can generally be categorized into three main techniques: pre-processing (modifying data before training), in-processing (incorporating fairness constraints during training), and post-processing (adjusting outputs after model training). Beyond these, an increasingly explored avenue is the direct modification of existing algorithms, aiming to embed fairness constraints into their design while preserving or even enhancing predictive performance. This paper presents a comprehensive survey of classical machine learning models that have been modified or enhanced to improve fairness concerning sensitive attributes (e.g., gender, race). We analyze these adaptations in terms of their methodological adjustments, their impact on algorithmic bias, and their ability to maintain predictive performance comparable to the original models. Full article
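As a concrete illustration of the pre-processing category, the classic reweighing scheme of Kamiran and Calders assigns each training instance the weight P(group)·P(label) / P(group, label), which makes the sensitive attribute statistically independent of the label in the reweighted data. A minimal sketch; the function and data names are ours, and this is one pre-processing method from the literature, not a specific model surveyed in the paper:

```python
from collections import Counter

def reweigh(groups, labels):
    """Kamiran-Calders reweighing: weight w(g, y) = P(g) * P(y) / P(g, y),
    estimated from empirical frequencies, one weight per training instance."""
    n = len(labels)
    pg, pl = Counter(groups), Counter(labels)
    joint = Counter(zip(groups, labels))
    return [(pg[g] / n) * (pl[y] / n) / (joint[(g, y)] / n)
            for g, y in zip(groups, labels)]
```

Under-represented (group, label) combinations receive weights above 1, over-represented ones below 1, so a downstream classifier trained with these sample weights sees a demographically balanced objective.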

32 pages, 2302 KiB  
Review
Early Detection of Alzheimer’s Disease Using Generative Models: A Review of GANs and Diffusion Models in Medical Imaging
by Md Minul Alam and Shahram Latifi
Algorithms 2025, 18(7), 434; https://doi.org/10.3390/a18070434 - 15 Jul 2025
Viewed by 547
Abstract
Alzheimer’s disease (AD) is a progressive, non-curable neurodegenerative disorder that poses persistent challenges for early diagnosis due to its gradual onset and the difficulty in distinguishing pathological changes from normal aging. Neuroimaging, particularly MRI and PET, plays a key role in detection; however, limitations in data availability and the complexity of early structural biomarkers constrain traditional diagnostic approaches. This review investigates the use of generative models, specifically Generative Adversarial Networks (GANs) and Diffusion Models, as emerging tools to address these challenges. These models are capable of generating high-fidelity synthetic brain images, augmenting datasets, and enhancing machine learning performance in classification tasks. The review synthesizes findings across multiple studies, revealing that GAN-based models achieved diagnostic accuracies up to 99.70%, with image quality metrics such as SSIM reaching 0.943 and PSNR up to 33.35 dB. Diffusion Models, though relatively new, demonstrated strong performance with up to 92.3% accuracy and FID scores as low as 11.43. Integrating generative models with convolutional neural networks (CNNs) and multimodal inputs further improved diagnostic reliability. Despite these advancements, challenges remain, including high computational demands, limited interpretability, and ethical concerns regarding synthetic data. This review offers a comprehensive perspective to inform future AI-driven research in early AD detection. Full article
(This article belongs to the Special Issue Advancements in Signal Processing and Machine Learning for Healthcare)

16 pages, 6915 KiB  
Article
A Lightweight and Efficient Plant Disease Detection Method Integrating Knowledge Distillation and Dual-Scale Weighted Convolutions
by Xiong Yang, Hao Wang, Qi Zhou, Lei Lu, Lijuan Zhang, Changming Sun and Guilu Wu
Algorithms 2025, 18(7), 433; https://doi.org/10.3390/a18070433 - 15 Jul 2025
Viewed by 257
Abstract
Plant diseases significantly undermine agricultural productivity. This study introduces an improved YOLOv10n model named WD-YOLO (Weighted and Double-scale YOLO), an advanced architecture for efficient plant disease detection. The PlantDoc dataset was initially enhanced using data augmentation techniques. Subsequently, we developed the DSConv module—a novel convolutional structure employing double-scale weighted convolutions that dynamically adjust to different scale perceptions and optimize attention allocation. This module replaces the conventional Conv module in YOLOv10. Furthermore, the WTConcat module was introduced, dynamically merging weighted concatenation with a channel attention mechanism to replace the Concat module in YOLOv10. The training of WD-YOLO incorporated knowledge distillation, using YOLOv10l as a teacher model to refine and compress the architectural learning. Empirical results reveal that WD-YOLO achieved an mAP50 of 65.4%, outperforming YOLOv10n by 9.1% without data augmentation and YOLOv10l by 2.3%, despite having far fewer parameters (roughly one-ninth of YOLOv10l's), demonstrating substantial gains in detection efficiency and model compactness. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))
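The knowledge-distillation step, in which the large YOLOv10l teacher guides the small student, boils down to a soft-target loss. A generic, framework-free sketch of that loss (Hinton-style soft targets; this is not the authors' training code, and `T`, `alpha`, and the pure-Python softmax are illustrative):

```python
import math

def distill_loss(student_logits, teacher_logits, T=2.0, alpha=0.5, hard_loss=0.0):
    """Soft-target distillation: KL divergence between temperature-softened
    teacher and student distributions, blended with the ordinary task loss."""
    def softened(z):
        m = max(z)  # subtract max for numerical stability
        e = [math.exp((v - m) / T) for v in z]
        s = sum(e)
        return [v / s for v in e]
    p_t = softened(teacher_logits)
    p_s = softened(student_logits)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s) if pt > 0)
    # the T^2 factor keeps soft-target gradients comparable to the hard loss
    return alpha * T * T * kl + (1 - alpha) * hard_loss
```

When the student matches the teacher exactly, the KL term vanishes and only the scaled hard loss remains; a mismatched student pays a strictly positive penalty.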

13 pages, 4530 KiB  
Article
Clinical Validation of a Computed Tomography Image-Based Machine Learning Model for Segmentation and Quantification of Shoulder Muscles
by Hamidreza Rajabzadeh-Oghaz, Josie Elwell, Bradley Schoch, William Aibinder, Bruno Gobbato, Daniel Wessell, Vikas Kumar and Christopher P. Roche
Algorithms 2025, 18(7), 432; https://doi.org/10.3390/a18070432 - 14 Jul 2025
Viewed by 223
Abstract
Introduction: We developed a computed tomography (CT)-based tool designed for automated segmentation of deltoid muscles, enabling quantification of radiomic features and muscle fatty infiltration. Prior to use in a clinical setting, this machine learning (ML)-based segmentation algorithm requires rigorous validation. The aim of this study is to conduct shoulder expert validation of a novel deltoid ML auto-segmentation and quantification tool. Materials and Methods: A SwinUnetR-based ML model trained on labeled CT scans was validated by three expert shoulder surgeons for 32 unique patients. The validation evaluates the quality of the auto-segmented deltoid images. Specifically, each of the three surgeons reviewed the auto-segmented masks relative to the CT images, rated the masks for clinical acceptance, and corrected the ML-generated deltoid mask if it did not completely contain the full deltoid muscle or if it included any tissue other than the deltoid. Non-inferiority of the ML model was assessed by comparing the differences between ML-generated and surgeon-corrected deltoid masks against the inter-surgeon variation in metrics such as volume and fatty infiltration. Results: The results of our expert shoulder surgeon validation demonstrate that 97% of ML-generated deltoid masks were clinically acceptable. Only two of the ML-generated deltoid masks required major corrections, and only one was deemed clinically unacceptable. These corrections had little impact on the deltoid measurements, as the median error in the volume and fatty infiltration measurements was <1% between the ML-generated and surgeon-corrected deltoid masks. The non-inferiority analysis demonstrates no significant difference between the ML-generated and surgeon-corrected masks relative to inter-surgeon variations. 
Conclusions: Shoulder expert validation of this CT image analysis tool demonstrates clinically acceptable performance for deltoid auto-segmentation, with no significant differences observed between deltoid image-based measurements derived from the ML-generated masks and those corrected by surgeons. These findings suggest that this CT image analysis tool has the potential to reliably quantify deltoid muscle size, shape, and quality. Incorporating these CT image-based measurements into the pre-operative planning process may facilitate more personalized treatment decision making and help orthopedic surgeons make more evidence-based clinical decisions. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (3rd Edition))

23 pages, 5245 KiB  
Article
Machine Learning Reconstruction of Wyrtki Jet Seasonal Variability in the Equatorial Indian Ocean
by Dandan Li, Shaojun Zheng, Chenyu Zheng, Lingling Xie and Li Yan
Algorithms 2025, 18(7), 431; https://doi.org/10.3390/a18070431 - 14 Jul 2025
Viewed by 267
Abstract
The Wyrtki Jet (WJ), a pivotal surface circulation system in the equatorial Indian Ocean, exerts significant regulatory control over regional climate dynamics through its intense eastward transport characteristics, which modulate water mass exchange, thermohaline balance, and cross-basin energy transfer. To address the scarcity of in situ observational data, this study developed a satellite remote sensing-driven multi-parameter coupled model and reconstructed the WJ’s seasonal variations using the XGBoost machine learning algorithm. The results revealed that wind stress components, sea surface temperature, and wind stress curl serve as the primary drivers of its seasonal dynamics. The XGBoost model demonstrated superior performance in reconstructing the WJ’s seasonal variations, achieving coefficients of determination (R2) exceeding 0.97 and root mean square errors (RMSE) below 0.2 m/s across all seasons. The reconstructed currents exhibited strong consistency with the Ocean Surface Current Analysis Real-time (OSCAR) dataset, showing errors below 0.05 m/s in spring and autumn and under 0.1 m/s in summer and winter. The proposed multi-feature integrated modeling framework delivers a high spatiotemporal resolution analytical tool for tropical Indian Ocean circulation dynamics research, while simultaneously establishing critical data infrastructure to decode monsoon current coupling mechanisms, advance early warning systems for extreme climatic events, and optimize regional marine resource governance. Full article
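The two skill scores the reconstruction is judged by, R2 and RMSE, are quick to compute from paired observed/predicted series. A minimal sketch (a generic helper, not tied to the paper's pipeline):

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R2) and root-mean-square error (RMSE):
    R2 = 1 - SS_res / SS_tot, RMSE = sqrt(mean squared residual)."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)
```

A perfect reconstruction gives R2 = 1 and RMSE = 0; the abstract's thresholds (R2 > 0.97, RMSE < 0.2 m/s) correspond to residuals far smaller than the seasonal variance of the current itself.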

19 pages, 2636 KiB  
Article
Electric Vehicle Sales Forecast for the UK: Integrating Machine Learning, Time Series Models, and Global Trends
by Shima Veysi, Mohammad Moshfeghi, Amir Sadrfaridpour and Peiman Emamy
Algorithms 2025, 18(7), 430; https://doi.org/10.3390/a18070430 - 14 Jul 2025
Viewed by 330
Abstract
This study presents a comprehensive forecasting approach to evaluate the future of electric vehicle (EV) adoption in the United Kingdom through 2035. Using three complementary models—SARIMAX, Prophet with regressors, and XGBoost—the analysis balances statistical robustness, policy sensitivity, and interpretability. Historical data from 2015 to 2024 was used to train the models, incorporating key drivers such as battery prices, GDP growth, public charging infrastructure, and government policy targets. XGBoost demonstrated the highest historical accuracy, making it a strong explanatory tool, particularly for assessing variable importance. However, due to its limitations in extrapolation, it was not used for long-term forecasting. Instead, Prophet and SARIMAX were employed to project EV sales under baseline, optimistic, and pessimistic policy scenarios. The results suggest that the UK could achieve between 2,964,000 and 3,188,000 EV sales by 2035 under baseline assumptions. Scenario analysis revealed high sensitivity to infrastructure growth and policy enforcement, with potential shortfalls of up to 500,000 vehicles in pessimistic scenarios. These findings highlight the importance of sustained government commitment and investment in EV infrastructure and supply chains. By combining machine learning diagnostics with transparent forecasting models, the study offers actionable insights for policymakers, investors, and stakeholders navigating the UK’s zero-emission transition. Full article
(This article belongs to the Collection Feature Papers in Evolutionary Algorithms and Machine Learning)

34 pages, 924 KiB  
Systematic Review
Smart Microgrid Management and Optimization: A Systematic Review Towards the Proposal of Smart Management Models
by Paul Arévalo, Dario Benavides, Danny Ochoa-Correa, Alberto Ríos, David Torres and Carlos W. Villanueva-Machado
Algorithms 2025, 18(7), 429; https://doi.org/10.3390/a18070429 - 11 Jul 2025
Cited by 1 | Viewed by 542
Abstract
The increasing integration of renewable energy sources (RES) in power systems presents challenges related to variability, stability, and efficiency, particularly in smart microgrids. This systematic review, following the PRISMA 2020 methodology, analyzed 66 studies focused on advanced energy storage systems, intelligent control strategies, and optimization techniques. Hybrid storage solutions combining battery systems, hydrogen technologies, and pumped hydro storage were identified as effective approaches to mitigate RES intermittency and balance short- and long-term energy demands. The transition from centralized to distributed control architectures, supported by predictive analytics, digital twins, and AI-based forecasting, has improved operational planning and system monitoring. However, challenges remain regarding interoperability, data privacy, cybersecurity, and the limited availability of high-quality data for AI model training. Economic analyses show that while initial investments are high, long-term operational savings and improved resilience justify the adoption of advanced microgrid solutions when supported by appropriate policies and financial mechanisms. Future research should address the standardization of communication protocols, development of explainable AI models, and creation of sustainable business models to enhance resilience, efficiency, and scalability. These efforts are necessary to accelerate the deployment of decentralized, low-carbon energy systems capable of meeting future energy demands under increasingly complex operational conditions. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))

21 pages, 21508 KiB  
Article
SPL-YOLOv8: A Lightweight Method for Rape Flower Cluster Detection and Counting Based on YOLOv8n
by Yue Fang, Chenbo Yang, Jie Li and Jingmin Tu
Algorithms 2025, 18(7), 428; https://doi.org/10.3390/a18070428 - 11 Jul 2025
Viewed by 350
Abstract
The flowering stage is a critical phase in the growth of rapeseed crops, and non-destructive, high-throughput quantitative analysis of rape flower clusters in field environments holds significant importance for rapeseed breeding. However, detecting and counting rape flower clusters remains challenging in complex field conditions due to their small size, severe overlapping and occlusion, and the large parameter sizes of existing models. To address these challenges, this study proposes a lightweight rape flower clusters detection model, SPL-YOLOv8. First, the model introduces StarNet as a lightweight backbone network for efficient feature extraction, significantly reducing computational complexity and parameter counts. Second, a feature fusion module (C2f-Star) is integrated into the backbone to enhance the feature representation capability of the neck through expanded spatial dimensions, mitigating the impact of occluded regions on detection performance. Additionally, a lightweight Partial Group Convolution Detection Head (PGCD) is proposed, which employs Partial Convolution combined with Group Normalization to enable multi-scale feature interaction. By incorporating additional learnable parameters, the PGCD enhances the detection and localization of small targets. Finally, channel pruning based on the Layer-Adaptive Magnitude-based Pruning (LAMP) score is applied to reduce model parameters and runtime memory. Experimental results on the Rapeseed Flower-Raceme Benchmark (RFRB) demonstrate that the SPL-YOLOv8n-prune model achieves a detection accuracy of 92.2% in Average Precision (AP50), comparable to SOTA methods, while reducing the giga floating point operations per second (GFLOPs) and parameters by 86.4% and 95.4%, respectively. The model size is only 0.5 MB and the real-time frame rate is 171 fps. 
The proposed model effectively detects rape flower clusters with minimal computational overhead, offering technical support for yield prediction and elite cultivar selection in rapeseed breeding. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
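The LAMP score used for channel pruning has a compact definition in the pruning literature (Lee et al.): each weight's squared magnitude is normalized by the cumulative squared magnitude of all weights in the same layer that are at least as large, so every layer's largest weight scores 1 and scores are comparable across layers. A sketch of that scoring rule on a flat weight list (illustrative, not the authors' implementation):

```python
def lamp_scores(weights):
    """LAMP score for each weight in one layer:
    score(u) = w_u^2 / sum of w_v^2 over all v with |w_v| >= |w_u|."""
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]), reverse=True)
    scores = [0.0] * len(weights)
    running = 0.0
    for i in order:  # descending magnitude: running sum covers all larger weights
        running += weights[i] ** 2
        scores[i] = weights[i] ** 2 / running
    return scores
```

Global pruning then removes the weights (or channels) with the smallest LAMP scores across all layers, which adapts the per-layer sparsity automatically instead of pruning every layer by the same fraction.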

18 pages, 1871 KiB  
Article
Interpretable Reinforcement Learning for Sequential Strategy Prediction in Language-Based Games
by Jun Zhao, Jintian Ji, Robail Yasrab, Shuxin Wang, Liang Yu and Lingzhen Zhao
Algorithms 2025, 18(7), 427; https://doi.org/10.3390/a18070427 - 11 Jul 2025
Viewed by 374
Abstract
Accurate and interpretable prediction plays a vital role in natural language processing (NLP) tasks, particularly for enhancing user trust and model transparency. However, existing models often struggle with poor adaptability and limited interpretability when applied to dynamic language prediction tasks such as Wordle. To address these challenges, this study proposes an interpretable reinforcement learning framework based on an Enhanced Deep Deterministic Policy Gradient (Enhanced-DDPG) algorithm. By leveraging a custom simulation environment and integrating key linguistic features (word frequency, letter frequency, and repeated letter patterns), the model dynamically predicts the number of attempts needed to solve Wordle puzzles. Experimental results demonstrate that Enhanced-DDPG outperforms traditional methods such as Random Forest Regression (RFR), XGBoost, LightGBM, METRA, and SQIRL in terms of both prediction accuracy (MSE = 0.0134, R2 = 0.8439) and robustness under noisy conditions. Furthermore, SHapley Additive exPlanations (SHAP) are employed to interpret the model’s decision process, revealing that repeated letter patterns significantly influence low-attempt predictions, while word and letter frequencies are more relevant for higher-attempt scenarios. This research highlights the potential of combining interpretable artificial intelligence (I-AI) and reinforcement learning to develop robust, transparent, and high-performance NLP prediction systems for real-world applications. Full article
(This article belongs to the Topic Applications of NLP, AI, and ML in Software Engineering)

26 pages, 2178 KiB  
Article
Cross-Modal Fake News Detection Method Based on Multi-Level Fusion Without Evidence
by Ping He, Hanxue Zhang, Shufu Cao and Yali Wu
Algorithms 2025, 18(7), 426; https://doi.org/10.3390/a18070426 - 10 Jul 2025
Viewed by 383
Abstract
Although multimodal feature fusion technology in fake news detection can integrate complementary information from different modal data, semantic inconsistency across modalities makes feature fusion difficult, and a single fusion pass suffers from information loss. In addition, although detection can be improved by drawing on external evidence, such evidence is obtained with a lag, the reliability and completeness of its sources are hard to guarantee, and it may introduce additional noise that interferes with the model's judgment. Therefore, a cross-modal fake news detection method (CM-MLF) based on evidence-free multilevel fusion is proposed. The method resolves semantic inconsistency through cross-modal alignment and uses an attention mechanism to perform multilevel fusion of text and image features, without the assistance of other evidential features, to further enhance the expressive power of the features. Experiments show that the method achieves better detection results on multiple benchmark datasets, effectively improving the accuracy and robustness of cross-modal fake news detection. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))

25 pages, 1885 KiB  
Article
Robust Algorithm for Calculating the Alignment of Guide Rolls in Slab Continuous Casting Machines
by Robert Rosenthal, Nils Albersmann and Mohieddine Jelali
Algorithms 2025, 18(7), 425; https://doi.org/10.3390/a18070425 - 9 Jul 2025
Viewed by 218
Abstract
To ensure the product quality of a steel slab continuous casting machine, the mechanical alignment of the guide rolls must be monitored and corrected regularly. Misaligned guide rolls cause stress and strain in the partially solidified steel strand, leading to internal cracks and other quality issues. Current methods of alignment measurement are either not suited for regular maintenance or provide only indirect alignment information in the form of angle measurements. This paper presents three new algorithms that convert the available angle measurements into the absolute position of each guide roll, which is equivalent to the mechanical alignment. The algorithms are based on geometry and trigonometry or the gradient descent optimization algorithm. Under near ideal conditions, all algorithms yield very accurate position results. However, when tested and evaluated under various conditions, their susceptibility to real-world disturbances is revealed. Here, only the optimization-based algorithm reaches the desired accuracy. Under the influence of randomly distributed angle measurement errors with an amplitude of 0.01°, it is able to determine 90% of roll positions within 0.1 mm of their actual position. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

25 pages, 875 KiB  
Article
Filter Learning-Based Partial Least Squares Regression and Its Application in Infrared Spectral Analysis
by Yi Mou, Long Zhou, Weizhen Chen, Jianguo Liu and Teng Li
Algorithms 2025, 18(7), 424; https://doi.org/10.3390/a18070424 - 9 Jul 2025
Viewed by 267
Abstract
Partial Least Squares (PLS) regression has been widely used to model the relationship between predictors and responses. However, PLS may be limited in its capacity to handle complex spectral data contaminated with significant noise and interferences. In this paper, we propose a novel filter learning-based PLS (FPLS) model that integrates an adaptive filter into the PLS framework. The FPLS model is designed to maximize the covariance between the filtered spectral data and the response. This modification enables FPLS to dynamically adapt to the characteristics of the data, thereby enhancing its feature extraction and noise suppression capabilities. We have developed an efficient algorithm to solve the FPLS optimization problem and provided theoretical analyses regarding the convergence of the model, the prediction variance, and the relationships among the objective functions of FPLS, PLS, and the filter length. Furthermore, we have derived bounds for the Root Mean Squared Error of Prediction (RMSEP) and the Cosine Similarity (CS) to evaluate model performance. Experimental results using spectral datasets from Corn, Octane, Mango, and Soil Nitrogen show that the FPLS model outperforms PLS, OSCPLS, VCPLS, PoPLS, LoPLS, DOSC, OPLS, MSC, SNV, SGFilter, and Lasso in terms of prediction accuracy. The theoretical analyses align with the experimental results, emphasizing the effectiveness and robustness of the FPLS model in managing complex spectral data. Full article
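For readers unfamiliar with the PLS baseline that FPLS extends, a single-component PLS1 fit on centered data can be written in pure Python: the weight vector w is proportional to X'y, which maximizes the covariance between the score t = Xw and the response. This sketches standard PLS only, not the proposed filter-learning extension, and the function name is ours:

```python
def pls1_one_component(X, y):
    """One-component PLS1: center X and y, set w proportional to X'y,
    then regress y on the single latent score t = Xc w."""
    n, m = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(m)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(m)] for row in X]
    yc = [v - ym for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(m)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(m)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    # fold the latent-score regression back into an input-space predictor
    return lambda x: ym + b * sum((xj - mj) * wj for xj, mj, wj in zip(x, xm, w))
```

Full PLS repeats this with deflation to extract further components; the FPLS idea described in the abstract instead maximizes covariance between a *filtered* version of the spectra and the response.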

20 pages, 1353 KiB  
Article
Dynamic Modeling and Validation of Peak Ability of Biomass Units
by Dawei Xia, Guizhou Cao, Jiayao Pan, Xinghai Wang, Kai Meng, Yuancheng Sun and Zhenlong Wu
Algorithms 2025, 18(7), 423; https://doi.org/10.3390/a18070423 - 9 Jul 2025
Viewed by 215
Abstract
Biomass units can play a useful role in meeting peak summer and winter demand thanks to their environmental benefits and short-term peak ability. To analyze the peak ability of biomass units, this paper focuses on dynamic modeling of biomass unit peak ability. Firstly, the process from biomass feeding amount to power output is divided into a feed–heat module, a heat–main steam pressure module, and a main steam pressure–power module. A two-input, two-output dynamic model is established in which the feeding amount and turbine valve opening serve as inputs, and the main steam pressure and power serve as outputs. The effectiveness of the established model is then validated against actual operation data from a 30 MW biomass unit. This dynamic model provides a mechanistic basis for analyzing the impact of fuel calorific value on power output, and supports fuel management and scheduling strategies during peak periods of biomass units. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))
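To make the modular structure concrete, here is a deliberately simplified discrete-time simulation of such a two-input, two-output model. The module split (feed → heat, heat → main steam pressure, pressure × valve → power) follows the abstract, but the equations and every parameter value are hypothetical placeholders, not the identified 30 MW model:

```python
import numpy as np

def simulate_biomass_unit(feed, valve, dt=1.0,
                          k_q=0.8, tau_q=120.0,  # feed -> heat gain and lag (s)
                          c_b=2500.0,            # boiler energy-storage constant
                          k_t=0.12,              # valve*pressure -> steam-flow gain
                          k_w=1.4):              # steam-flow -> power gain
    """Two inputs (fuel feeding amount, turbine valve opening) and two
    outputs (main steam pressure, electrical power). All parameter values
    are illustrative placeholders, not values identified from plant data."""
    q, p = 0.0, 10.0                             # heat-release and pressure states
    pressure = np.empty(len(feed))
    power = np.empty(len(feed))
    for i in range(len(feed)):
        q += dt / tau_q * (k_q * feed[i] - q)    # feed-heat module: first-order lag
        steam = k_t * valve[i] * p               # steam flow through the turbine valve
        p += dt / c_b * (q - steam)              # heat-pressure module: energy balance
        pressure[i] = p
        power[i] = k_w * steam                   # pressure-power module
    return pressure, power
```

In a sketch like this, a step in the feeding amount produces the slow pressure response that limits short-term peaking, while a valve step trades stored boiler energy for an immediate power change.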
18 pages, 580 KiB  
Article
Feature Transformation-Based Few-Shot Class-Incremental Learning
by Xubo Zhang and Yang Luo
Algorithms 2025, 18(7), 422; https://doi.org/10.3390/a18070422 - 9 Jul 2025
Viewed by 333
Abstract
In few-shot class-incremental learning, the limited number of samples for newly introduced classes makes it difficult to adapt model parameters adequately, resulting in poor feature representations for these classes. To address this issue, this paper proposes a feature transformation method that mitigates feature degradation in few-shot incremental learning. The transformed features better align with the ideal feature distribution required by an optimal classifier, thereby alleviating performance decline during incremental updates. The method first learns a well-conditioned linear mapping from the available base classes; at classification time, both class prototypes and query samples are projected into the transformed feature space to improve the overall feature distribution. Experimental results on three benchmark datasets demonstrate that the proposed method achieves strong performance: it limits performance degradation to 24.85 percentage points on miniImageNet, 24.45 on CIFAR100, and 24.14 on CUB, consistently outperforming traditional methods such as iCaRL (44.13–50.71 points of degradation) and recent techniques such as FeTrIL and PL-FSCIL. Further analysis shows that the transformed features bring class prototypes significantly closer to the theoretically optimal equiangular configuration described by neural collapse, highlighting the effectiveness of the proposed approach. Full article
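As a rough illustration of the recipe — learn a well-conditioned linear mapping from the base classes, then project both prototypes and queries through it before nearest-prototype classification — the sketch below substitutes a shrunken ZCA-style whitening transform for the learned mapping. The whitening choice, the `eps` shrinkage, and the cosine-similarity classifier are assumptions made for illustration, not the paper's method:

```python
import numpy as np

def learn_feature_transform(base_feats, eps=1e-3):
    """Learn a linear map from base-class features: a whitening-style
    transform that equalizes feature variances, shrunk by eps so the
    mapping stays well-conditioned."""
    mu = base_feats.mean(axis=0)
    cov = np.cov(base_feats - mu, rowvar=False)
    vals, vecs = np.linalg.eigh(cov + eps * np.eye(cov.shape[0]))
    W = vecs @ np.diag(vals ** -0.5) @ vecs.T    # ZCA-style whitening matrix
    return mu, W

def nearest_prototype(query, prototypes, mu, W):
    """Project prototypes and queries into the transformed space, then
    classify each query by cosine similarity to the nearest prototype."""
    q = (query - mu) @ W
    P = (prototypes - mu) @ W
    q = q / np.linalg.norm(q, axis=-1, keepdims=True)
    P = P / np.linalg.norm(P, axis=-1, keepdims=True)
    return np.argmax(q @ P.T, axis=-1)
```

Applying the same fixed map to prototypes and queries is the key point: the classifier geometry improves without re-training the backbone on the scarce novel-class samples.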