Algorithms, Volume 18, Issue 8 (August 2025) – 56 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
41 pages, 7109 KiB  
Article
Cross-Cultural Safety Judgments in Child Environments: A Semantic Comparison of Vision-Language Models and Humans
by Don Divin Anemeta and Rafal Rzepka
Algorithms 2025, 18(8), 507; https://doi.org/10.3390/a18080507 - 13 Aug 2025
Abstract
Despite advances in complex reasoning, Vision-Language Models (VLMs) remain inadequately benchmarked for safety-critical applications like childcare. To address this gap, we conduct a multilingual (English, French, Polish, Japanese) comparison of VLMs and human safety assessments using a dataset of original images from child environments in Japan and Poland. Our proposed methodology utilizes semantic clustering to normalize and compare hazard identification and mitigation strategies. While both models and humans identify overt dangers with high semantic agreement (e.g., 0.997 similarity for ‘scissors’), their proposed actions diverge significantly. Humans strongly favor direct physical intervention (‘remove object’: 64% for Polish vs. 55.0% for VLMs) and context-specific actions (‘move object elsewhere’: 17.8% for Japanese), strategies that models under-represent. Conversely, VLMs consistently over-recommend supervisory actions (such as ‘Supervise children closely’ or ‘Supervise use of scissors’). These quantified discrepancies highlight the critical need to integrate nuanced, human-like contextual judgment for the safe deployment of AI systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
Show Figures

Figure 1

24 pages, 2794 KiB  
Article
Algorithmic Modeling of Generation Z’s Therapeutic Toys Consumption Behavior in an Emotional Economy Context
by Xinyi Ma, Xu Qin and Li Lv
Algorithms 2025, 18(8), 506; https://doi.org/10.3390/a18080506 - 13 Aug 2025
Abstract
The quantification of emotional value and accurate prediction of purchase intention have emerged as a critical interdisciplinary challenge in the evolving emotional economy. Focusing on Generation Z (born 1995–2009), this study proposes a hybrid algorithmic framework integrating text-based sentiment computation, feature selection, and random forest modeling to forecast purchase intention for therapeutic toys and interpret its underlying drivers. First, 856 customer reviews were scraped from Jellycat’s official website and subjected to polarity classification using a fine-tuned RoBERTa-wwm-ext model (F1 = 0.92), with generated sentiment scores and high-frequency keywords mapped as interpretable features. Next, Boruta–SHAP feature selection was applied to 35 structured variables from 336 survey records, retaining 17 significant predictors. The core module employed an RF (random forest) model to estimate continuous “purchase intention” scores, achieving R2 = 0.83 and MSE = 0.14 under 10-fold cross-validation. To enhance interpretability, the RF model was also used to evaluate feature importance, quantifying each feature’s contribution to the model outputs and revealing Social Ostracism (β = 0.307) and Task Overload (β = 0.207) as the dominant predictors. Finally, k-means clustering with gap statistics segmented consumers based on emotional relevance, value rationality, and interest level, with model performance compared across clusters. Experimental results demonstrate that our integrated predictive model achieves a balance between forecasting accuracy and decision interpretability in emotional value computation, offering actionable insights for targeted product development and precision marketing in the therapeutic goods sector. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
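For readers who want to prototype the modeling stage this abstract describes, the sketch below shows a random-forest regressor scored with 10-fold cross-validation plus impurity-based feature importances, in the spirit of the paper's RF module. The data, feature count, and hyperparameters are illustrative stand-ins, not the authors' survey data or settings.

```python
# Minimal sketch of the RF regression and feature-importance step (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(336, 17))  # 17 retained predictors, 336 survey records (stand-ins)
y = 0.3 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=336)  # toy target

rf = RandomForestRegressor(n_estimators=500, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
r2 = cross_val_score(rf, X, y, cv=cv, scoring="r2").mean()
mse = -cross_val_score(rf, X, y, cv=cv, scoring="neg_mean_squared_error").mean()

rf.fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # most important first
print(f"10-fold R2={r2:.2f}, MSE={mse:.2f}; top features: {ranking[:3]}")
```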

17 pages, 1234 KiB  
Article
Avalanche Hazard Prediction in East Kazakhstan Using Ensemble Machine Learning Algorithms
by Yevgeniy Fedkin, Natalya Denissova, Gulzhan Daumova, Ruslan Chettykbayev and Saule Rakhmetullina
Algorithms 2025, 18(8), 505; https://doi.org/10.3390/a18080505 - 13 Aug 2025
Abstract
This study is devoted to the construction of an avalanche susceptibility map based on ensemble machine learning algorithms (random forest, XGBoost, LightGBM, gradient boosting machines, AdaBoost, NGBoost) for the conditions of the East Kazakhstan region. To train these models, data were collected on avalanche path profiles, meteorological conditions, and historical avalanche events. The quality of the trained machine learning models was assessed using metrics such as accuracy, precision, true positive rate (recall), and F1-score. The obtained metrics indicated that the trained machine learning models achieved reasonably accurate forecasting performance (forecast accuracy from 67% to 73.8%). ROC curves were also constructed for each model for evaluation. The resulting AUCs for these ROC curves reached acceptable levels (from 0.57 to 0.73), which also indicated that the presented models could be used to predict avalanche danger. In addition, for each machine learning model, we determined the importance of the indicators used to predict avalanche danger. This analysis showed that the most significant indicators were meteorological data, namely temperature and snow cover level in avalanche paths. Among the indicators that characterized the avalanche paths’ profiles, the most important were the minimum and maximum slope elevations. Thus, within the framework of this study, a reasonably accurate model was built using geospatial and meteorological data that allows identifying potentially dangerous slope areas. These results can support territorial planning, the design of protective infrastructure, and the development of early warning systems to mitigate avalanche risks. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
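A minimal sketch of the kind of evaluation loop this abstract describes is shown below, using the scikit-learn ensembles (XGBoost, LightGBM, and NGBoost live in separate packages with analogous fit/predict_proba APIs). The synthetic features stand in for the terrain and weather indicators; the metrics match those named in the abstract.

```python
# Hedged sketch: train several ensembles and report accuracy/P/R/F1/ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.metrics import (accuracy_score, precision_recall_fscore_support,
                             roc_auc_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=12, random_state=0)  # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]          # avalanche probability
    pred = (proba >= 0.5).astype(int)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} P={p:.3f} "
          f"R={r:.3f} F1={f1:.3f} AUC={roc_auc_score(y_te, proba):.3f}")
```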

36 pages, 1259 KiB  
Article
A Survey of Printable Encodings
by Marco Botta, Davide Cavagnino, Alessandro Druetto, Maurizio Lucenteforte and Annunziata Marra
Algorithms 2025, 18(8), 504; https://doi.org/10.3390/a18080504 - 12 Aug 2025
Abstract
The representation of binary data in a compact, printable, efficient, and often human-readable format is essential in numerous computing applications, mainly driven by the limitations of systems and communication protocols not designed to handle arbitrary 8-bit binary data. This paper provides a comprehensive survey and an extensive characterization of printable encoding schemes, tracing their evolution from historical methods to contemporary solutions for representing, storing, and transmitting binary data using restricted character sets. The review includes a foundational analysis of fundamental character encodings, proposes a layered model for the classification of printable encodings, and examines various schemes based on their numerical bases, alphabets, and functional characteristics. Algorithms, key design trade-offs, the impact of relevant standards, security implications, performance considerations, and human factors are systematically discussed, aiming to offer a detailed understanding of the current context and open challenges. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
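Several of the printable encodings in the survey's scope can be demonstrated directly with Python's standard library; the expansion ratios noted in the comments follow from the alphabet sizes.

```python
# Quick illustration of common printable encodings, standard library only.
import base64

payload = bytes(range(16))               # arbitrary 8-bit binary data
print(base64.b16encode(payload))         # Base16 (hex): 2 chars per byte
print(base64.b32encode(payload))         # Base32: ~1.6 chars per byte
print(base64.b64encode(payload))         # Base64: ~1.33 chars per byte
print(base64.b85encode(payload))         # Base85: ~1.25 chars per byte
assert base64.b64decode(base64.b64encode(payload)) == payload  # lossless round trip
```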

23 pages, 9894 KiB  
Article
The Problem of Formation Destruction in Carbon Dioxide Storage: A Microscopic Model
by Natalia Levashova, Pavel Levashov, Dmitry Erofeev and Alla Sidorova
Algorithms 2025, 18(8), 503; https://doi.org/10.3390/a18080503 - 12 Aug 2025
Abstract
In the context of the current global transition toward low-carbon energy, the issue of CO2 utilization has become increasingly important. One of the most promising natural targets for CO2 sequestration is the terrigenous sedimentary formations found in oil, gas, and coal basins. It is generally assumed that CO2 injected into such formations can be stored indefinitely in a stable form. However, the dissolution of CO2 into subsurface water leads to a reduction in pH, which may cause partial dissolution of the host formation, altering the structure of the subsurface in the injection zone. This process is relatively slow, potentially unfolding over decades or even centuries, and its long-term consequences require careful investigation through mathematical modeling. The geological formation is treated as a partially soluble porous medium, where the dissolution rate is governed by surface chemical reactions occurring at the pore boundaries. In this study, we present an applied mathematical model that captures the coupled processes of mass transport, surface chemical reactions, and the resulting microscopic changes in the pore structure of the formation. To ensure the model remains grounded in realistic geological conditions, we based it on exploration data characterizing the composition and microstructure of the pore space typical of the Cenomanian suite in northern Western Siberia. The model incorporates the dominant geochemical reactions involving calcium carbonate (calcite, CaCO3), characteristic of Cenomanian reservoir rocks. It describes the dissolution of CO2 in the pore fluid and the associated evolution of ion concentrations, specifically H+, Ca2+, and HCO3−. The input parameters are derived from experimental data. While the model focuses on calcite-based formations, the algorithm can be adapted to other mineralogies with appropriate modifications to the reaction terms. The simulation domain is defined as a cubic region with a side length of 1 μm, representing a fragment of the geological formation with a porosity of 0.33. The pore space is initially filled with a mixture of liquid CO2 and water at known saturation levels. The mathematical framework consists of a system of diffusion–reaction equations describing the dissolution of CO2 in water and the subsequent mineral dissolution, coupled with a model for surface evolution of the solid phase. This model enables calculation of surface reaction rates within the porous medium and estimates the timescales over which significant changes in pore structure may occur, depending on the relative saturations of water and liquid CO2. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

15 pages, 552 KiB  
Article
How Much Is Too Much? Facing Practical Limitations in Hyper-Heuristic Design for Packing Problems
by José Carlos Ortiz-Bayliss, Alonso Vela Morales and Ivan Amaya
Algorithms 2025, 18(8), 502; https://doi.org/10.3390/a18080502 - 12 Aug 2025
Abstract
Hyper-heuristics, or simply heuristics to choose heuristics, represent a powerful approach to tackling complex optimization problems. These methods decide which heuristic to apply at each step of the solving process, aiming to improve overall performance. While they have demonstrated significant success across various domains, their suitability for all problem instances, even within a specific domain, is not guaranteed. The literature provides many examples of successful hyper-heuristic models for packing problems. Among those models, we can mention rule-based and fixed-sequence-based hyper-heuristics. These two models have proven useful in various scenarios. This paper investigates a genetic-based approach that produces hybrid hyper-heuristics, which combine rule-based decisions with the firing of heuristic sequences. The rationale behind this hybrid approach is to combine the strengths of both models. Although we expected to improve on the individual performance of the methods, we obtained contradictory results suggesting that, at least in this work, combining the strengths of different hyper-heuristic models may not be a suitable approach. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
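To make the two baseline models concrete, here is a toy one-dimensional bin-packing sketch contrasting a fixed heuristic sequence with a simple feature-based selection rule. The two low-level heuristics and the rule threshold are illustrative only, not the paper's genetic-based hybrid.

```python
# Toy contrast of fixed-sequence vs. rule-based hyper-heuristics for 1D bin packing.
from itertools import cycle

def first_fit(item, bins, cap):
    for b in bins:
        if sum(b) + item <= cap:
            b.append(item)
            return
    bins.append([item])

def best_fit(item, bins, cap):
    fits = [b for b in bins if sum(b) + item <= cap]
    if fits:
        min(fits, key=lambda b: cap - sum(b) - item).append(item)  # tightest fit
    else:
        bins.append([item])

def solve(items, cap, choose):
    bins = []
    for i, item in enumerate(items):
        choose(i, item, bins)(item, bins, cap)  # selector picks the heuristic to fire
    return len(bins)

items, cap = [4, 8, 1, 4, 2, 1, 7, 3, 6, 2], 10
seq = cycle([first_fit, best_fit])                               # fixed-sequence model
rule = lambda i, item, bins: best_fit if item > cap / 2 else first_fit  # rule-based model
print("sequence:", solve(items, cap, lambda i, it, b: next(seq)))
print("rule:", solve(items, cap, rule))
```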

34 pages, 1448 KiB  
Article
High-Fidelity Image Transmission in Quantum Communication with Frequency Domain Multi-Qubit Techniques
by Udara Jayasinghe, Thanuj Fernando and Anil Fernando
Algorithms 2025, 18(8), 501; https://doi.org/10.3390/a18080501 - 11 Aug 2025
Abstract
This paper proposes a novel quantum image transmission framework to address the limitations of existing single-qubit time domain systems, which struggle with noise resilience and scalability. The framework integrates frequency domain processing with multi-qubit (1 to 8 qubits) encoding to enhance robustness against quantum noise. Initially, images are source-coded using JPEG and HEIF formats with rate adjustment to ensure consistent bandwidth usage. The resulting bitstreams are channel-encoded and mapped to multi-qubit quantum states. These states are transformed into the frequency domain via the quantum Fourier transform (QFT) for transmission. At the receiver, the inverse QFT recovers the time domain states, followed by multi-qubit decoding, channel decoding, and source decoding to reconstruct the image. Performance is evaluated using bit error rate (BER), peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and universal quality index (UQI). Results show that increasing the number of qubits enhances image quality and noise robustness, albeit at the cost of increased system complexity. Compared to time domain processing, the frequency domain approach achieves superior performance across all qubit configurations, with the eight-qubit system delivering up to a 4 dB maximum channel SNR gain for both JPEG and HEIF images. Although single-qubit systems benefit less from frequency domain encoding due to limited representational capacity, the overall framework demonstrates strong potential for scalable and noise-robust quantum image transmission in future quantum communication networks. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

34 pages, 4433 KiB  
Article
Estimation of Residential Vacancy Rate in Underdeveloped Areas of China Based on Baidu Street View Residential Exterior Images: A Case Study of Nanning, Guangxi
by Weijia Zeng, Binglin Liu, Yi Hu, Weijiang Liu, Yuhe Fu, Yiyue Zhang and Weiran Zhang
Algorithms 2025, 18(8), 500; https://doi.org/10.3390/a18080500 - 11 Aug 2025
Abstract
Housing vacancy rate is a key indicator for evaluating urban sustainable development. Due to rapid urbanization, population outflow and insufficient industrial support, the housing vacancy problem is particularly prominent in China’s underdeveloped regions. However, the lack of official data and the limitations of traditional survey methods restrict in-depth research. This study proposes a vacancy rate estimation method based on Baidu Street View residential exterior images and deep learning technology. Taking Nanning, Guangxi as a case study, an automatic discrimination model for residential vacancy status is constructed by identifying visual clues such as window occlusion, balcony debris accumulation, and facade maintenance status. The study first uses Baidu Street View API to collect images of residential communities in Nanning. After manual annotation and field verification, a labeled dataset is constructed. A pre-trained deep learning model (ResNet50) is applied to estimate the vacancy rate of the community after fine-tuning with labeled street view images of Nanning’s residential communities. GIS spatial analysis is combined to reveal the spatial distribution pattern and influencing factors of the vacancy rate. The results show that street view images can effectively capture vacancy characteristics that are difficult to identify with traditional remote sensing and indirect indicators, providing a refined data source and method innovation for housing vacancy research in underdeveloped regions. The study further found that the residential vacancy rate in Nanning showed significant spatial differentiation, and the vacancy driving mechanism in the old urban area and the emerging area was significantly different. This study expands the application boundaries of computer vision in urban research and fills the research gap on vacancy issues in underdeveloped areas. Its results can provide a scientific basis for the government to optimize housing planning, developers to make rational investments, and residents to make housing purchase decisions, thus helping to improve urban sustainable development and governance capabilities. Full article
(This article belongs to the Special Issue Algorithms for Smart Cities (2nd Edition))
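The fine-tuning step named in this abstract follows a standard transfer-learning recipe; a minimal PyTorch sketch is below, with a frozen ImageNet-pretrained ResNet50 backbone and a new two-class head. The random tensors stand in for labeled street-view facades; the authors' preprocessing and training schedule are not specified in this listing.

```python
# Hedged sketch of fine-tuning ResNet50 for a vacant/occupied facade classifier.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new vacant-vs-occupied head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)       # stand-in batch of street-view images
y = torch.randint(0, 2, (8,))         # stand-in vacancy labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```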

29 pages, 12751 KiB  
Review
A Research Landscape of Agentic AI and Large Language Models: Applications, Challenges and Future Directions
by Sarfraz Brohi, Qurat-ul-ain Mastoi, N. Z. Jhanjhi and Thulasyammal Ramiah Pillai
Algorithms 2025, 18(8), 499; https://doi.org/10.3390/a18080499 - 11 Aug 2025
Abstract
Agentic AI and Large Language Models (LLMs) are transforming how language is understood and generated while reshaping decision-making, automation, and research practices. LLMs provide underlying reasoning capabilities, and Agentic AI systems use them to perform tasks through interactions with external tools, services, and Application Programming Interfaces (APIs). Based on a structured scoping review and thematic analysis, this study identifies that core challenges of LLMs, relating to security, privacy and trust, misinformation, misuse and bias, energy consumption, transparency and explainability, and value alignment, can propagate into Agentic AI. Beyond these inherited concerns, Agentic AI introduces new challenges, including context management, security, privacy and trust, goal misalignment, opaque decision-making, limited human oversight, multi-agent coordination, ethical and legal accountability, and long-term safety. We analyse the applications of Agentic AI powered by LLMs across six domains: education, healthcare, cybersecurity, autonomous vehicles, e-commerce, and customer service, to reveal their real-world impact. Furthermore, we demonstrate some LLM limitations using DeepSeek-R1 and GPT-4o. To the best of our knowledge, this is the first comprehensive study to integrate the challenges and applications of LLMs and Agentic AI within a single forward-looking research landscape that promotes interdisciplinary research and responsible advancement of this emerging field. Full article
(This article belongs to the Special Issue Evolution of Algorithms in the Era of Generative AI)

14 pages, 1769 KiB  
Article
Queue Stability-Constrained Deep Reinforcement Learning Algorithms for Adaptive Transmission Control in Multi-Access Edge Computing Systems
by Longzhe Han, Tian Zeng, Jia Zhao, Xuecai Bao, Guangming Liu and Yan Liu
Algorithms 2025, 18(8), 498; https://doi.org/10.3390/a18080498 - 11 Aug 2025
Abstract
To meet the escalating demands of massive data transmission, the next generation of wireless networks will leverage the multi-access edge computing (MEC) architecture coupled with multi-access transmission technologies to enhance communication resource utilization. This paper presents queue stability-constrained reinforcement learning algorithms designed to optimize the transmission control mechanism in MEC systems to improve both throughput and reliability. We propose an analytical framework to model queue stability. To increase transmission performance while maintaining queue stability, a queueing delay model is designed to analyze the packet scheduling process using the M/M/c queueing model and estimate the expected packet queueing delay. To handle the time-varying network environment, we introduce a queue stability constraint into the reinforcement learning reward function to jointly optimize latency and queue stability. The reinforcement learning algorithm is deployed at the MEC server to reduce the workload of central cloud servers. Simulation results validate that the proposed algorithm effectively controls queueing delay and average queue length while improving packet transmission success rates in dynamic MEC environments. Full article
(This article belongs to the Special Issue AI Algorithms for 6G Mobile Edge Computing and Network Security)
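The expected packet queueing delay under an M/M/c model has a closed form via the Erlang C formula; the sketch below computes it for made-up arrival and service rates, as one plausible reading of the delay-estimation step described above.

```python
# Expected queueing delay Wq for an M/M/c queue via the Erlang C formula.
from math import factorial

def mmc_expected_wait(lam, mu, c):
    """lam: arrival rate, mu: per-server service rate, c: servers (needs lam < c*mu)."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilization
    assert rho < 1, "queue is unstable"
    erlang_c = (a**c / (factorial(c) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(c))
        + a**c / (factorial(c) * (1 - rho))
    )                                  # probability an arriving packet must wait
    return erlang_c / (c * mu - lam)   # expected time in queue

print(mmc_expected_wait(lam=90.0, mu=25.0, c=4))  # example rates in packets/s
```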

21 pages, 4852 KiB  
Article
Series Arc Fault Detection Method Based on Time Domain Imaging and Long Short-Term Memory Network for Residential Applications
by Ruobo Chu, Patrick Schweitzer and Kai Yang
Algorithms 2025, 18(8), 497; https://doi.org/10.3390/a18080497 - 11 Aug 2025
Abstract
This article presents a novel method for detecting series arc faults (SAFs) in residential applications using time-domain imaging (TDI) and Long Short-Term Memory (LSTM) networks. The proposed method transforms current signals into grayscale images by filtering out the fundamental frequency, allowing key arc fault characteristics—such as high-frequency noise and waveform distortions—to become visually apparent. The use of Ensemble Empirical Mode Decomposition (EEMD) helped isolate meaningful signal components, although it was computationally intensive. To address real-time requirements, a simpler yet effective TDI method was developed for generating 2D images from current data. These images were then used as inputs to an LSTM network, which captures temporal dependencies and classifies both arc faults and appliance types. The proposed TDI-LSTM model was trained and tested on 7000 labeled datasets across five common household appliances. The experimental results show an average detection accuracy of 98.1%, with reduced accuracy for loads using thyristors (e.g., dimmers). The method is robust across different appliance types and conditions; comparisons with prior methods indicate that the proposed TDI-LSTM approach offers superior accuracy and broader applicability. Trade-offs in sampling rates and hardware implementation were discussed to balance accuracy and system cost. Overall, the TDI-LSTM approach offers a highly accurate, efficient, and scalable solution for series arc fault detection in smart home systems. Full article
(This article belongs to the Special Issue AI and Computational Methods in Engineering and Science)

22 pages, 2811 KiB  
Article
Deep Feature Selection of Meteorological Variables for LSTM-Based PV Power Forecasting in High-Dimensional Time-Series Data
by Husein Mauladdawilah, Mohammed Balfaqih, Zain Balfagih, María del Carmen Pegalajar and Eulalia Jadraque Gago
Algorithms 2025, 18(8), 496; https://doi.org/10.3390/a18080496 - 10 Aug 2025
Abstract
Accurate photovoltaic (PV) power forecasting is essential for grid integration, particularly in maritime climates with dynamic weather patterns. This study addresses high-dimensional meteorological data challenges by systematically evaluating 32 variables across four categories (solar irradiance, temperature, atmospheric, hydrometeorological) for day-ahead PV forecasting using long short-term memory (LSTM) networks. Using six years of data from a 350 kWp solar farm in Scotland, we compare satellite-derived data and local weather station measurements. Surprisingly, downward thermal infrared flux—capturing persistent atmospheric moisture and cloud properties in maritime climates—emerged as the most influential predictor despite low correlation (1.93%). When paired with precipitation data, this two-variable combination achieved 99.81% R2, outperforming complex multi-variable models. Satellite data consistently surpassed ground measurements, with 9 of the top 10 predictors being satellite derived. Our approach reduces model complexity while improving forecasting accuracy, providing practical solutions for energy systems. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (3rd Edition))

15 pages, 3633 KiB  
Article
HSS-YOLO Lightweight Object Detection Model for Intelligent Inspection Robots in Power Distribution Rooms
by Liang Li, Yangfei He, Yingying Wei, Hucheng Pu, Xiangge He, Chunlei Li and Weiliang Zhang
Algorithms 2025, 18(8), 495; https://doi.org/10.3390/a18080495 - 8 Aug 2025
Abstract
Currently, YOLO-based object detection is widely employed in intelligent inspection robots. However, under interference factors present in dimly lit substation environments, YOLO exhibits issues such as excessively low accuracy, missed detections, and false detections for critical targets. To address these problems, this paper proposes HSS-YOLO, a lightweight object detection model based on YOLOv11. Initially, HetConv is introduced. By combining convolutional kernels of different sizes, it reduces the required number of floating-point operations (FLOPs) and enhances computational efficiency. Subsequently, the integration of Inner-SIoU strengthens the recognition capability for small targets within dim environments. Finally, ShuffleAttention is incorporated to mitigate problems like missed or false detections of small targets under low-light conditions. The experimental results demonstrate that on a custom dataset, the model achieves a precision of 90.5% for critical targets (doors and two types of handles). This represents a 4.6% improvement over YOLOv11, while also reducing parameter count by 10.7% and computational load by 9%. Furthermore, evaluations on public datasets confirm that the proposed model surpasses YOLOv11 in assessment metrics. The improved model presented in this study not only achieves lightweight design but also yields more accurate detection results for doors and handles within dimly lit substation environments. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

26 pages, 3766 KiB  
Article
Water Quality Evaluation and Analysis by Integrating Statistical and Machine Learning Approaches
by Amar Lokman, Wan Zakiah Wan Ismail and Nor Azlina Ab Aziz
Algorithms 2025, 18(8), 494; https://doi.org/10.3390/a18080494 - 8 Aug 2025
Abstract
Water quality assessment plays a vital role in environmental monitoring and resource management. This study aims to enhance the predictive modeling of the Water Quality Index (WQI) using a combination of statistical diagnostics and machine learning techniques. The methodology involves collecting water quality data from six river locations in Malaysia, followed by a series of statistical analyses including assumption testing (Shapiro–Wilk and Breusch–Pagan tests), diagnostic evaluations, feature importance analysis, and principal component analysis (PCA). Decision tree regression (DTR) and autoregressive integrated moving average (ARIMA) are employed for regression, while random forest is used for classification. Learning curve analysis is conducted to evaluate model performance and generalization. The results indicate that dissolved oxygen (DO) and ammoniacal nitrogen (AN) are the most influential parameters, with normalized importance scores of 1.000 and 0.565, respectively. The Breusch–Pagan test identifies significant heteroscedasticity (p-value = 3.138 × 10−115), while the Shapiro–Wilk test confirms non-normality (p-value = 0.0). PCA effectively reduces dimensionality while preserving 95% of dataset variance, optimizing computational efficiency. Among the regression models, ARIMA demonstrates better predictive accuracy than DTR. Meanwhile, random forest achieves high classification performance and shows strong generalization capability with increasing training data. Learning curve analysis reveals overfitting in the regression model, suggesting the need for hyperparameter tuning, while the classification model demonstrates improved generalization with additional training data. Strong correlations among key parameters indicate potential multicollinearity, emphasizing the need for careful feature selection. These findings highlight the synergy between statistical pre-processing and machine learning, offering a more accurate and efficient approach to water quality prediction for informed environmental policy and real-time monitoring systems. Full article
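The assumption-testing and PCA steps named in the abstract map onto standard scipy/statsmodels/scikit-learn calls; a self-contained sketch on synthetic, deliberately heteroscedastic data is shown below. The variable names are stand-ins for the water-quality parameters, not the study's data.

```python
# Sketch of the Shapiro-Wilk, Breusch-Pagan, and PCA steps on synthetic data.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from sklearn.decomposition import PCA
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                      # stand-ins for DO, AN, BOD, COD, pH, SS
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=np.abs(X[:, 0]) + 0.1)  # heteroscedastic

resid = sm.OLS(y, sm.add_constant(X)).fit().resid
print("Shapiro-Wilk p:", stats.shapiro(resid).pvalue)            # normality of residuals
print("Breusch-Pagan p:", het_breuschpagan(resid, sm.add_constant(X))[1])  # heteroscedasticity

pca = PCA(n_components=0.95)                       # keep 95% of the variance
print("components kept:", pca.fit(X).n_components_)
```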

28 pages, 5869 KiB  
Article
Comparison of Classical and Artificial Intelligence Algorithms to the Optimization of Photovoltaic Panels Using MPPT
by João T. Sousa and Ramiro S. Barbosa
Algorithms 2025, 18(8), 493; https://doi.org/10.3390/a18080493 - 7 Aug 2025
Abstract
This work investigates the application of artificial intelligence techniques for optimizing photovoltaic systems using maximum power point tracking (MPPT) algorithms. Simulation models were developed in MATLAB/Simulink (version 2024), incorporating conventional and intelligent control strategies such as fuzzy logic, genetic algorithms, neural networks, and deep reinforcement learning. A DC/DC buck converter was designed and tested under various irradiance and temperature profiles, including scenarios with partial shading conditions. The performance of the implemented MPPT algorithms was evaluated using metrics such as mean absolute error (MAE), integral absolute error (IAE), mean squared error (MSE), integral squared error (ISE), efficiency, and convergence time. The results highlight that AI-based methods, particularly neural networks and Deep Q-Network agents, outperform traditional approaches, especially in non-uniform operating conditions. These findings demonstrate the potential of intelligent controllers to enhance the energy harvesting capability of photovoltaic systems. Full article
(This article belongs to the Special Issue Algorithmic Approaches to Control Theory and System Modeling)
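As a point of reference for the conventional strategies being compared, the classic perturb-and-observe (P&O) MPPT loop fits in a few lines; the parabolic P-V curve below is a toy stand-in for the paper's Simulink converter model.

```python
# Perturb-and-observe MPPT on a toy P-V curve with its maximum near 30 V.
def pv_power(v):
    return max(0.0, -0.1 * (v - 30.0) ** 2 + 90.0)

v, p_prev, direction, step = 20.0, 0.0, 1.0, 0.5
for _ in range(60):
    p = pv_power(v)
    if p < p_prev:                    # power dropped: reverse the perturbation
        direction = -direction
    v, p_prev = v + direction * step, p

print(f"settled near V={v:.1f} V, P={p_prev:.1f} W")  # oscillates around the MPP
```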

14 pages, 661 KiB  
Article
Epileptic Seizure Prediction Using a Combination of Deep Learning, Time–Frequency Fusion Methods, and Discrete Wavelet Analysis
by Hadi Sadeghi Khansari, Mostafa Abbaszadeh, Gholamreza Heidary Joonaghany, Hamidreza Mohagerani and Fardin Faraji
Algorithms 2025, 18(8), 492; https://doi.org/10.3390/a18080492 - 7 Aug 2025
Abstract
Epileptic seizure prediction remains a critical challenge in neuroscience and healthcare, with profound implications for enhancing patient safety and quality of life. In this paper, we introduce a novel seizure prediction method that leverages electroencephalogram (EEG) data, combining discrete wavelet transform (DWT)-based time–frequency analysis, advanced feature extraction, and deep learning using Fourier neural networks (FNNs). The proposed approach extracts essential features from EEG signals—including entropy, power, frequency, and amplitude—to effectively capture the brain’s complex and nonstationary dynamics. We evaluate the method on the widely used CHB-MIT EEG dataset, ensuring direct comparability with prior research. Experimental results demonstrate that our DWT-FS-FNN model achieves a prediction accuracy of 98.96% with a zero false positive rate, outperforming several state-of-the-art methods. These findings underscore the potential of integrating advanced signal processing and deep learning methods for reliable, real-time seizure prediction. Future work will focus on optimizing the model for real-world clinical deployment and expanding it to incorporate multimodal physiological data, further enhancing its applicability in clinical practice. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)
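The DWT-based feature extraction stage has a compact expression with PyWavelets; the sketch below decomposes an EEG window and computes simple energy/entropy/amplitude summaries per sub-band. The wavelet family, decomposition depth, and window length are assumptions, not the paper's settings.

```python
# Hedged sketch of DWT feature extraction from one EEG window.
import numpy as np
import pywt

def dwt_features(window, wavelet="db4", level=4):
    feats = []
    for coeffs in pywt.wavedec(window, wavelet, level=level):
        energy = np.sum(coeffs**2)
        prob = coeffs**2 / (energy + 1e-12)               # normalized sub-band spectrum
        entropy = -np.sum(prob * np.log2(prob + 1e-12))   # wavelet entropy
        feats += [energy, entropy, np.max(np.abs(coeffs))]
    return np.array(feats)

eeg_window = np.random.randn(256 * 5)        # 5 s of 256 Hz EEG (synthetic)
print(dwt_features(eeg_window).shape)        # (level + 1) sub-bands x 3 features
```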

21 pages, 943 KiB  
Article
An Early Investigation of the HHL Quantum Linear Solver for Scientific Applications
by Muqing Zheng, Chenxu Liu, Samuel Stein, Xiangyu Li, Johannes Mülmenstädt, Yousu Chen and Ang Li
Algorithms 2025, 18(8), 491; https://doi.org/10.3390/a18080491 - 6 Aug 2025
Abstract
In this paper, we explore using the Harrow–Hassidim–Lloyd (HHL) algorithm to address scientific and engineering problems through quantum computing, utilizing the NWQSim simulation package on a high-performance computing platform. Focusing on domains such as power-grid management and climate projection, we demonstrate how the accuracy of quantum phase estimation, along with various properties of the coefficient matrices, affects the final solution and the quantum resource cost in iterative and non-iterative numerical methods such as the Newton–Raphson method and the finite difference method, as well as their impacts on quantum error correction costs using the Microsoft Azure Quantum resource estimator. We summarize the exponential resource cost of quantum phase estimation before and after quantum error correction and illustrate a potential way to reduce the demands on physical qubits. This work lays down a preliminary step for future investigations, urging a closer examination of quantum algorithms’ scalability and efficiency in domain applications. Full article

30 pages, 336 KiB  
Article
Enhancing Discoverability: A Metadata Framework for Empirical Research in Theses
by Giannis Vassiliou, George Tsamis, Stavroula Chatzinikolaou, Thomas Nipurakis and Nikos Papadakis
Algorithms 2025, 18(8), 490; https://doi.org/10.3390/a18080490 - 6 Aug 2025
Abstract
Despite the significant volume of empirical research found in student-authored academic theses—particularly in the social sciences—these works are often poorly documented and difficult to discover within institutional repositories. A key reason for this is the lack of appropriate metadata frameworks that balance descriptive richness with usability. General standards such as Dublin Core are too simplistic to capture critical research details, while more robust models like the Data Documentation Initiative (DDI) are too complex for non-specialist users and not designed for use with student theses. This paper presents the design and validation of a lightweight, web-based metadata framework specifically tailored to document empirical research in academic theses. We are the first to adapt existing hybrid Dublin Core–DDI approaches specifically for thesis documentation, with a novel focus on cross-methodological research and non-expert usability. The model was developed through a structured analysis of actual student theses and refined to support intuitive, structured metadata entry without requiring technical expertise. The resulting system enhances the discoverability, classification, and reuse of empirical theses within institutional repositories, offering a scalable solution to elevate the visibility of the gray literature in higher education. Full article

19 pages, 753 KiB  
Article
In-Context Learning for Low-Resource Machine Translation: A Study on Tarifit with Large Language Models
by Oussama Akallouch and Khalid Fardousse
Algorithms 2025, 18(8), 489; https://doi.org/10.3390/a18080489 - 6 Aug 2025
Abstract
This study presents the first systematic evaluation of in-context learning for Tarifit machine translation, a low-resource Amazigh language spoken by 5 million people in Morocco and Europe. We assess three large language models (GPT-4, Claude-3.5, PaLM-2) across Tarifit–Arabic, Tarifit–French, and Tarifit–English translation using 1000 sentence pairs and 5-fold cross-validation. Results show that 8-shot similarity-based demonstration selection achieves optimal performance. GPT-4 achieved 20.2 BLEU for Tarifit–Arabic, 14.8 for Tarifit–French, and 10.9 for Tarifit–English. Linguistic proximity significantly impacts translation quality, with Tarifit–Arabic substantially outperforming other language pairs by 8.4 BLEU points due to shared vocabulary and morphological patterns. Error analysis reveals systematic issues with morphological complexity (42% of errors) and cultural terminology preservation (18% of errors). This work establishes baseline benchmarks for Tarifit translation and demonstrates the viability of in-context learning for morphologically complex low-resource languages, contributing to linguistic equity in AI systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
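The few-shot prompting and BLEU scoring pipeline described above can be sketched as follows. The demonstration pairs, the similarity-based ranking, and the model call itself are placeholders, and sacreBLEU is assumed for scoring, which may differ from the authors' tooling.

```python
# Sketch of 8-shot prompt construction plus BLEU scoring (placeholder data).
import sacrebleu

def build_prompt(src, examples, k=8):
    # `examples` is assumed pre-sorted by similarity to `src`, most similar first
    shots = "\n".join(f"Tarifit: {s}\nArabic: {t}" for s, t in examples[:k])
    return f"{shots}\nTarifit: {src}\nArabic:"

examples = [("azul", "مرحبا")] * 8                  # placeholder demonstration pairs
print(build_prompt("azul fell-awen", examples))     # prompt sent to the LLM

hypotheses = ["مرحبا بكم"]                           # placeholder model outputs
references = [["مرحبا بكم جميعا"]]                   # one gold reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```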

19 pages, 1905 KiB  
Article
Fuzzy Frankot–Chellappa Algorithm for Surface Normal Integration
by Saeide Hajighasemi and Michael Breuß
Algorithms 2025, 18(8), 488; https://doi.org/10.3390/a18080488 - 6 Aug 2025
Abstract
In this paper, we propose a fuzzy formulation of the classic Frankot–Chellappa algorithm by which surfaces can be reconstructed using normal vectors. In the fuzzy formulation, the surface normal vectors may be uncertain or ambiguous, yielding a fuzzy Poisson partial differential equation that requires appropriate definitions of fuzzy derivatives. The solution of the resulting fuzzy model is approached by adopting a fuzzy variant of the discrete sine transform, which results in a fast and robust algorithm for surface reconstruction. An adaptive defuzzification strategy is also introduced to improve noise handling in highly uncertain regions. In experiments, we demonstrate that our fuzzy Frankot–Chellappa algorithm achieves accuracy on par with the classic approach for smooth surfaces and offers improved robustness in the presence of noisy normal data. We also show that it can naturally handle missing data (such as gaps) in the normal field by filling them using neighboring information. Full article
(This article belongs to the Collection Feature Papers in Algorithms for Multidisciplinary Applications)
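For context, the classic (crisp) Frankot–Chellappa integrator that the fuzzy formulation generalizes is only a few lines of NumPy: project the gradient field onto an integrable surface in the Fourier domain. The fuzzy variant replaces these transforms with fuzzy counterparts and is not shown here.

```python
# Classic Frankot-Chellappa surface integration from a gradient field (p, q).
import numpy as np

def frankot_chellappa(p, q):
    rows, cols = p.shape
    u = np.fft.fftfreq(cols) * 2 * np.pi
    v = np.fft.fftfreq(rows) * 2 * np.pi
    U, V = np.meshgrid(u, v)
    denom = U**2 + V**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * U * np.fft.fft2(p) - 1j * V * np.fft.fft2(q)) / denom
    Z[0, 0] = 0.0                              # mean height is unrecoverable; set to zero
    return np.real(np.fft.ifft2(Z))

# Round trip on a smooth synthetic surface
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
z = np.exp(-4 * (x**2 + y**2))
gy, gx = np.gradient(z)                        # dz/dy (rows), dz/dx (cols)
z_rec = frankot_chellappa(gx, gy)
print(np.max(np.abs((z_rec - z_rec.mean()) - (z - z.mean()))))  # small residual
```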

18 pages, 3562 KiB  
Article
Robust U-Nets for Fetal R-Peak Identification in Electrocardiography
by Peishan Zhou, Stephen So and Belinda Schwerin
Algorithms 2025, 18(8), 487; https://doi.org/10.3390/a18080487 - 6 Aug 2025
Abstract
Accurate fetal R-peak detection from low-SNR fetal electrocardiogram (FECG) signals remains a critical challenge, as current NI-FECG methods struggle to extract high-SNR FECG signals and conventional algorithms fail when signal quality deteriorates. We propose a U-Net-based method that enables robust R-peak detection directly from low-SNR FECG signals (0–12 dB), bypassing the need for high-SNR inputs that are clinically difficult to acquire. The method was evaluated on both real (A&D FECG) and synthetic (FECGSYN) databases, comparing against ten state-of-the-art detectors. The proposed method significantly reduces false predictions compared to commonly used detection algorithms, achieving a PPV of 99.81%, an SEN of 100.00%, and an F1-score of 99.91% on the A&D FECG database and a PPV of 99.96%, an SEN of 99.93%, and an F1-score of 99.94% on the FECGSYN database. Further investigation of robustness in low-SNR conditions (0 dB, 5 dB, and 10 dB) achieved an 87.38% F1-score at 0 dB SNR on real signals, surpassing the best-performing algorithm implemented in NeuroKit by 13.58%. In addition, the algorithm showed ≤2.65% performance variation as the tolerance window was reduced from 50 ms to 20 ms, further underscoring its detection accuracy. Overall, this work reduces the reliance on high-SNR FECG signals by reliably extracting R-peaks from suboptimal signals, with implications for the reliability of fetal heart rate variability analysis in real-world noisy environments. Full article
(This article belongs to the Special Issue Advancements in Signal Processing and Machine Learning for Healthcare)
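Tolerance-window scoring of R-peak detectors, as used in the evaluation above, can be sketched as a greedy matching of detected to reference peaks; the sampling rate and peak positions below are synthetic stand-ins.

```python
# PPV/SEN/F1 for R-peak detection under a +/- tol sample matching window.
import numpy as np

def score_peaks(detected, reference, tol):
    ref = list(reference)
    tp = 0
    for d in detected:
        hits = [r for r in ref if abs(d - r) <= tol]
        if hits:
            ref.remove(min(hits, key=lambda r: abs(d - r)))  # closest unmatched ref peak
            tp += 1
    ppv = tp / len(detected)
    sen = tp / len(reference)
    return ppv, sen, 2 * ppv * sen / (ppv + sen)

fs = 1000                                          # 1 kHz sampling (assumed)
reference = np.array([500, 1400, 2300, 3200])      # annotated R-peak samples
detected = np.array([505, 1390, 2290, 3600])       # detector output (one miss)
print(score_peaks(detected, reference, tol=int(0.05 * fs)))  # 50 ms window
```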

23 pages, 2640 KiB  
Article
DenseNet-Based Classification of EEG Abnormalities Using Spectrograms
by Lan Wei and Catherine Mooney
Algorithms 2025, 18(8), 486; https://doi.org/10.3390/a18080486 - 5 Aug 2025
Abstract
Electroencephalogram (EEG) analysis is essential for diagnosing neurological disorders but typically requires expert interpretation and significant time. Purpose: This study aims to automate the classification of normal and abnormal EEG recordings to support clinical diagnosis and reduce manual workload. Automating the initial screening of EEGs can help clinicians quickly identify potential neurological abnormalities, enabling timely intervention and guiding further diagnostic and treatment strategies. Methodology: We utilized the Temple University Hospital EEG dataset to develop a DenseNet-based deep learning model. To enable a fair comparison of different EEG representations, we used three input types: signal images, spectrograms, and scalograms. To reduce dimensionality and simplify computation, we focused on two channels: T5 and O1. For interpretability, we applied Local Interpretable Model-agnostic Explanations (LIME) and Gradient-weighted Class Activation Mapping (Grad-CAM) to visualize the EEG regions influencing the model’s predictions. Key Findings: Among the input types, spectrogram-based representations achieved the highest classification accuracy, indicating that time-frequency features are especially effective for this task. The model demonstrated strong performance overall, and the integration of LIME and Grad-CAM provided transparent explanations of its decisions, enhancing interpretability. This approach offers a practical and interpretable solution for automated EEG screening, contributing to more efficient clinical workflows and better understanding of complex neurological conditions. Full article
(This article belongs to the Special Issue AI-Assisted Medical Diagnostics)
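The spectrogram inputs that performed best can be generated with scipy; the sketch below builds a dB-scaled time-frequency image from one synthetic channel. The sampling rate and window length are assumptions, not the TUH dataset's actual parameters.

```python
# Minimal sketch of building a spectrogram image for one EEG channel (e.g., T5).
import numpy as np
from scipy.signal import spectrogram

fs = 250                                    # Hz, assumed EEG sampling rate
t = np.arange(0, 10, 1 / fs)
t5 = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # synthetic channel

f, times, Sxx = spectrogram(t5, fs=fs, nperseg=fs)   # 1 s analysis windows
img = 10 * np.log10(Sxx + 1e-12)            # dB image fed to the DenseNet
print(img.shape)                            # (frequency bins, time frames)
```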

22 pages, 3052 KiB  
Article
A Novel Dual-Strategy Approach for Constructing Knowledge Graphs in the Home Appliance Fault Domain
by Daokun Zhang, Jian Zhang, Yanhe Jia and Mengjie Liao
Algorithms 2025, 18(8), 485; https://doi.org/10.3390/a18080485 - 5 Aug 2025
Abstract
Knowledge graph technology holds significant importance for efficient fault diagnosis in household appliances. However, the scarcity of public fault diagnosis data and the lack of automated knowledge extraction pose major challenges to knowledge graph construction. To address issues such as ambiguous entity boundaries, severe entity nesting, and poor entity extraction performance in fault diagnosis texts, this paper proposes a dual-strategy progressive knowledge extraction framework. First, to tackle the high complexity of fault diagnosis texts, an entity recognition model named RoBERTa-zh-BiLSTM-MUL-CRF is designed, improving the accuracy of nested entity extraction. Second, leveraging the semantic understanding capability of large language models, a progressive prompting strategy is adopted for ontology alignment and relation extraction, achieving automated knowledge extraction. Experimental results show that the proposed named entity recognition model outperforms traditional models, with improvements of 3.87%, 5.82%, and 2.05% in F1-score, recall, and precision, respectively. Additionally, the large language model demonstrates better performance in ontology alignment compared to traditional machine learning models. The constructed knowledge graph for household appliance fault diagnosis integrates structured fault diagnosis information. It effectively processes unstructured fault texts and supports visual queries and entity tracing. This framework can assist maintenance personnel in making rapid judgments, thereby improving fault diagnosis efficiency. Full article
(This article belongs to the Section Combinatorial Optimization, Graph, and Network Algorithms)

15 pages, 833 KiB  
Article
Tolerance Proportionality and Computational Stability in Adaptive Parallel-in-Time Runge–Kutta Methods
by Imre Fekete, Ferenc Izsák, Vendel P. Kupás and Gustaf Söderlind
Algorithms 2025, 18(8), 484; https://doi.org/10.3390/a18080484 - 5 Aug 2025
Abstract
In this paper, we investigate how adaptive time-integration strategies can be effectively combined with parallel-in-time numerical methods for solving systems of ordinary differential equations. Our focus is particularly on their influence on tolerance proportionality. We examine various grid-refinement strategies within the multigrid reduction-in-time (MGRIT) framework. Our results show that a simple adjustment to the original refinement factor can substantially improve computational stability and reliability. Through numerical experiments on standard test problems using the XBraid library, we demonstrate that parallel-in-time solutions closely match their sequential counterparts. Moreover, with the use of multiple processors, computing time can be significantly reduced. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)

22 pages, 398 KiB  
Article
An Improved Convergence Analysis of a Multi-Step Method with High-Efficiency Indices
by Santhosh George, Manjusree Gopal, Samhitha Bhide and Ioannis K. Argyros
Algorithms 2025, 18(8), 483; https://doi.org/10.3390/a18080483 - 4 Aug 2025
Abstract
A multi-step method introduced by Raziyeh and Masoud for solving nonlinear systems with convergence order five is considered in this paper. The convergence of the method was previously studied using Taylor series expansion, which requires the function to be six times differentiable. Our convergence study, however, does not depend on the Taylor series: we use derivatives of F only up to order two, and the analysis is presented in a more general Banach space setting. Semi-local analysis is also discussed, which was not given in earlier studies. Unlike earlier studies (where two sets of assumptions were used), we use the same set of assumptions for the semi-local and local convergence analyses. We discuss the dynamics of the method and also give some numerical examples to illustrate the theoretical findings. Full article
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

24 pages, 3291 KiB  
Article
Machine Learning Subjective Opinions: An Application in Forensic Chemistry
by Anuradha Akmeemana and Michael E. Sigman
Algorithms 2025, 18(8), 482; https://doi.org/10.3390/a18080482 - 4 Aug 2025
Abstract
Simulated data created in silico using a previously reported method were sampled by bootstrapping to generate data sets for training multiple copies of an ensemble learner (i.e., a machine learning (ML) method). The posterior probabilities of class membership obtained by applying the ensemble of ML models to previously unseen validation data were fitted to a beta distribution. The shape parameters for the fitted distribution were used to calculate the subjective opinion of sample membership in one of two mutually exclusive classes. The subjective opinion consists of belief, disbelief and uncertainty masses. A subjective opinion for each validation sample allows identification of high-uncertainty predictions. The projected probabilities of the validation opinions were used to calculate log-likelihood ratio scores and generate receiver operating characteristic (ROC) curves from which an opinion-supported decision can be made. Three very different ML models, linear discriminant analysis (LDA), random forest (RF), and support vector machines (SVM), were applied to the two-state classification problem in the analysis of forensic fire debris samples. For each ML method, a set of 100 ML models was trained on data sets bootstrapped from 60,000 in silico samples. The impact of training data set size on opinion uncertainty and ROC area under the curve (AUC) was studied. The median uncertainty for the validation data was smallest for LDA and largest for SVM. The median uncertainty continually decreased as the size of the training data set increased for all ML methods. The AUC for ROC curves based on projected probabilities was largest for the RF model and smallest for the LDA method. The ROC AUC was statistically unchanged for LDA at training data sets exceeding 200 samples; however, the AUC increased with increasing sample size for the RF and SVM methods. The SVM method, the slowest to train, was limited to a maximum of 20,000 training samples. All three ML methods showed increasing performance when the validation data were limited to higher ignitable liquid contributions. An ensemble of 100 RF ML models, each trained on 60,000 in silico samples, performed best, with a median uncertainty of 1.39 × 10−2 and an ROC AUC of 0.849 for all validation samples. Full article
(This article belongs to the Special Issue Artificial Intelligence in Modeling and Simulation (2nd Edition))
Show Figures

Graphical abstract
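The beta-to-opinion step described above follows the standard subjective logic mapping: for a fitted Beta(alpha, beta) with prior weight W = 2 and base rate a = 1/2, the masses are b = (alpha - W*a)/(alpha + beta), d = (beta - W*(1 - a))/(alpha + beta), u = W/(alpha + beta), and the projected probability is P = b + a*u. A minimal sketch of that conversion, assuming SciPy and illustrative parameter names rather than the authors' code:

```python
import numpy as np
from scipy import stats

def opinion_from_posteriors(posteriors, base_rate=0.5, prior_weight=2.0):
    """Fit a beta distribution to an ensemble's posterior probabilities and
    convert its shape parameters to a subjective opinion (belief, disbelief,
    uncertainty) plus the projected probability."""
    # Fit only the shape parameters; pin location/scale to the unit interval.
    alpha, beta_, _, _ = stats.beta.fit(posteriors, floc=0.0, fscale=1.0)
    s = alpha + beta_                       # equals r + s + W in subjective logic
    b = (alpha - prior_weight * base_rate) / s
    d = (beta_ - prior_weight * (1.0 - base_rate)) / s
    u = prior_weight / s
    return b, d, u, b + base_rate * u       # projected probability

# Example: 100 model votes clustered near 0.8 -> high belief, low uncertainty
rng = np.random.default_rng(0)
print(opinion_from_posteriors(rng.beta(8, 2, size=100)))
```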

13 pages, 7106 KiB  
Article
Multi-Scale Universal Style-Transfer Network Based on Diffusion Model
by Na Su, Jingtao Wang and Yun Pan
Algorithms 2025, 18(8), 481; https://doi.org/10.3390/a18080481 - 4 Aug 2025
Viewed by 247
Abstract
Artistic style transfer aims to transfer the style of an artwork to a photograph while maintaining the photograph's overall content. Although current style-transfer methods achieve promising results on photorealistic images, they often struggle to preserve brushstrokes in artworks, especially in styles such as oil painting and pointillism. In such cases, the extracted style and content features tend to include redundant information, leading to blurred edges and a loss of fine detail in the transferred images. To address this problem, this paper proposes a multi-scale universal style-transfer network based on diffusion models. The proposed network consists of a coarse style-transfer module and a refined style-transfer module. First, the coarse style-transfer module performs mainstream style-transfer tasks efficiently by operating on downsampled images, enabling faster processing with satisfactory results. Next, to further enhance edge fidelity, a refined style-transfer module is introduced. This module uses a segmentation component to generate a mask of the main subject in the image and performs edge-aware refinement, enhancing the fusion between the subject's edges and the target style while preserving finer details. To improve overall image quality and better integrate the style along content boundaries, the output of the coarse module is upsampled by a factor of two and combined with the subject mask. With the assistance of ControlNet and Stable Diffusion, the model performs content-aware edge redrawing to enhance the overall visual quality of the stylized image. Compared with state-of-the-art style-transfer methods, the proposed model preserves more edge details and achieves a more natural fusion of style and content. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Show Figures

Figure 1
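The coarse-to-refined flow described above can be outlined in Python. Every helper below (coarse_transfer, segment_subject, controlnet_redraw, downsample, upsample) is a hypothetical placeholder for a module named in the abstract, not a real API:

```python
# Illustrative outline of the two-stage pipeline; all helpers are
# hypothetical stand-ins for the modules described in the abstract.
def stylize(content_img, style_img, coarse_transfer, segment_subject,
            controlnet_redraw, downsample, upsample):
    # Stage 1: coarse style transfer on a downsampled copy, for speed.
    small = downsample(content_img, factor=2)
    coarse = coarse_transfer(small, style_img)

    # Stage 2: refined transfer around the main subject's edges.
    mask = segment_subject(content_img)   # binary mask of the main subject
    up = upsample(coarse, factor=2)       # back to full resolution
    # Content-aware edge redraw guided by the mask (ControlNet + SD).
    return controlnet_redraw(up, mask=mask, style=style_img)
```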

19 pages, 455 KiB  
Article
A Quantum-Resistant FHE Framework for Privacy-Preserving Image Processing in the Cloud
by Rafik Hamza
Algorithms 2025, 18(8), 480; https://doi.org/10.3390/a18080480 - 4 Aug 2025
Viewed by 383
Abstract
The advent of quantum computing poses an existential threat to the security of cloud services that handle sensitive visual data. Simultaneously, the need for computational privacy requires the ability to process data without exposing it to the cloud provider. This paper introduces and evaluates a hybrid quantum-resistant framework that addresses both challenges by integrating NIST-standardized post-quantum cryptography with optimized fully homomorphic encryption (FHE). Our solution uses CRYSTALS-Kyber for secure channel establishment and the CKKS FHE scheme with SIMD batching to perform image processing tasks on a cloud server without ever decrypting the image. This work provides a comprehensive performance analysis of the complete, end-to-end system. Our empirical evaluation demonstrates the framework’s practicality, detailing the sub-millisecond PQC setup costs and the amortized transfer of 33.83 MB of public FHE materials. The operational performance shows remarkable scalability, with server-side computations and client-side decryption completing within low single-digit milliseconds. By providing a detailed analysis of a viable and efficient architecture, this framework establishes a practical foundation for the next generation of privacy-preserving cloud applications. Full article
Show Figures

Figure 1
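The CKKS-with-SIMD-batching pattern at the heart of the framework can be illustrated with an open-source FHE library. A minimal sketch using TenSEAL, with illustrative parameters and a simple affine brightness/contrast adjustment standing in for the paper's image-processing tasks; the CRYSTALS-Kyber channel setup is omitted and this is not the authors' implementation:

```python
import tenseal as ts

# Client: CKKS context with SIMD batching (one ciphertext packs many pixels).
ctx = ts.context(ts.SCHEME_TYPE.CKKS,
                 poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2 ** 40
ctx.generate_galois_keys()        # enables rotations if a filter needs them

pixels = [float(p) for p in range(100)]   # stand-in for image pixel data
enc = ts.ckks_vector(ctx, pixels)         # encrypt client-side

# Server: adjust contrast and brightness homomorphically, never seeing pixels.
enc_out = enc * 1.1 + 12.0

# Client: decrypt; results are approximate, as is inherent to CKKS.
out = enc_out.decrypt()
```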

17 pages, 3816 KiB  
Article
Charging Station Siting and Capacity Determination Based on a Generalized Least-Cost Model of Traffic Distribution
by Mingzhao Ma, Feng Wang, Lirong Xiong, Yuhonghao Wang and Wenxin Li
Algorithms 2025, 18(8), 479; https://doi.org/10.3390/a18080479 - 4 Aug 2025
Viewed by 240
Abstract
With the popularization of electric vehicles (EVs) and the continuous expansion of the EV market, the construction and management of charging facilities have become important issues in research and practice. In some remote areas, charging stations sit idle due to low traffic flow, wasting resources, while areas with high traffic flow may have too few charging stations, causing long queues and road congestion. This study optimizes the locations of charging stations and the number of charging piles within them based on the distribution of traffic flow, constructing a bi-level programming model. The upper-level model is a user-equilibrium flow allocation model, solved to obtain the optimal traffic flow assignment for the road network; its output serves as the input to the lower-level model. The lower-level model is a generalized minimum-cost model whose objective function comprises driving time, charging waiting time, charging time, and the cost of the electricity consumed to reach the trip destination. An empirical simulation is conducted on the road network of Hefei City, Anhui Province, using three algorithms, the genetic algorithm (GA), grey wolf optimizer (GWO), and particle swarm optimization (PSO), for optimization and sensitivity analysis. The optimized results are compared with the existing charging station deployment scheme in the road network to demonstrate the effectiveness of the proposed methodology. Full article
Show Figures

Figure 1
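The lower-level objective reduces to a weighted sum of time and energy terms. A minimal sketch under our own assumptions, namely a single value-of-time weight monetizing all three time components and a flat electricity price; the paper's exact weighting may differ:

```python
def generalized_cost(drive_h, wait_h, charge_h, energy_kwh,
                     vot=30.0, price_kwh=0.8):
    """Generalized trip cost: time terms monetized with a value-of-time
    weight (vot, currency/hour) plus the cost of electricity consumed
    en route. Both weights are illustrative assumptions."""
    return vot * (drive_h + wait_h + charge_h) + price_kwh * energy_kwh

# Example: 0.5 h drive, 0.2 h queue, 0.6 h charge, 12 kWh consumed
print(generalized_cost(0.5, 0.2, 0.6, 12.0))   # -> 48.6
```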

27 pages, 1853 KiB  
Article
Heterogeneous Graph Structure Learning for Next Point-of-Interest Recommendation
by Juan Chen and Qiao Li
Algorithms 2025, 18(8), 478; https://doi.org/10.3390/a18080478 - 3 Aug 2025
Viewed by 287
Abstract
Next Point-of-Interest (POI) recommendation aims to predict users' future visits based on their current status and historical check-in records, providing convenience to users and potential profits to businesses. The Graph Neural Network (GNN) has become a common approach for this task owing to its capability to model relations between nodes from a global perspective. However, most existing studies overlook the heterogeneous relations that are prevalent in real-world scenarios, and manually constructed graphs may suffer from inaccuracies. To address these limitations, we propose Heterogeneous Graph Structure Learning for Next POI Recommendation (HGSL-POI), a model that integrates three key components: heterogeneous graph contrastive learning, graph structure learning, and sequence modeling. The model first employs meta-path-based subgraphs and the user–POI interaction graph to obtain initial representations of users and POIs. Based on these representations, it reconstructs the subgraphs through graph structure learning. Finally, using the embeddings from the reconstructed graphs, sequence modeling that incorporates graph neural networks captures users' sequential preferences to make recommendations. Experimental results on real-world datasets demonstrate the effectiveness of the proposed model, and additional studies confirm its robustness and superior performance across diverse recommendation tasks. Full article
Show Figures

Figure 1
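Heterogeneous graph contrastive learning of this kind is typically trained with an InfoNCE objective that aligns the meta-path-based view of each node with its interaction-graph view. A minimal PyTorch sketch of such a loss, given here as a standard formulation rather than the authors' code:

```python
import torch
import torch.nn.functional as F

def infonce_loss(z_metapath, z_interact, temperature=0.2):
    """Contrast two embedding views of the same nodes: matching rows are
    positive pairs; all other rows in the batch act as negatives."""
    a = F.normalize(z_metapath, dim=1)
    b = F.normalize(z_interact, dim=1)
    logits = a @ b.t() / temperature            # cosine similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Example: embeddings of 256 POIs from the two graph views
loss = infonce_loss(torch.randn(256, 64), torch.randn(256, 64))
```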
