Search Results (845)

Search Parameters:
Keywords = incremental networks

23 pages, 1179 KiB  
Article
Model Retraining upon Concept Drift Detection in Network Traffic Big Data
by Sikha S. Bagui, Mohammad Pale Khan, Chedlyne Valmyr, Subhash C. Bagui and Dustin Mink
Future Internet 2025, 17(8), 328; https://doi.org/10.3390/fi17080328 - 24 Jul 2025
Abstract
This paper presents a comprehensive model for detecting and addressing concept drift in network security data using the Isolation Forest algorithm. The approach leverages Isolation Forest’s inherent ability to efficiently isolate anomalies in high-dimensional data, making it suitable for adapting to shifting data distributions in dynamic environments. Anomalies in network attack data may not occur in large numbers, so it is important to be able to detect them even with small batch sizes. The novelty of this work lies in successfully detecting anomalies even with small batch sizes and in identifying the point at which incremental retraining needs to be started. Triggering retraining early also keeps the model in sync with the latest data, reducing the chance that attacks succeed. Our methodology implements an end-to-end workflow that continuously monitors incoming data, detects distribution changes using Isolation Forest, and then manages model retraining using Random Forest to maintain optimal performance. We evaluate our approach using UWF-ZeekDataFall22, a newly created dataset of Zeek Connection Logs collected through the Security Onion 2 network security monitor and labeled using the MITRE ATT&CK framework. Both incremental and full retraining are analyzed using Random Forest. Incremental retraining produced a steady increase in model performance, and full model retraining also had a positive impact.
(This article belongs to the Special Issue DDoS Attack Detection for Cyber–Physical Systems)
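A rough sketch of the drift-then-retrain loop this abstract describes is given below; it is not the authors' implementation, and the batch size, contamination rate, and drift threshold are illustrative assumptions.

```python
# Hypothetical sketch of drift-triggered retraining; thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Reference (training) window and an initial classifier.
X_ref = rng.normal(0.0, 1.0, size=(2000, 10))
y_ref = (X_ref[:, 0] > 0).astype(int)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_ref, y_ref)

# Isolation Forest fitted on the reference window acts as the drift detector.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X_ref)
DRIFT_THRESHOLD = 0.15          # assumed: anomaly fraction that triggers retraining
BATCH_SIZE = 200                # assumed: small batch size, as emphasized in the abstract

def process_batch(X_batch, y_batch):
    """Score a batch; retrain when the anomaly fraction exceeds the threshold."""
    global clf, detector
    anomaly_frac = np.mean(detector.predict(X_batch) == -1)
    if anomaly_frac > DRIFT_THRESHOLD:
        # Drift detected: refit classifier and detector on the new distribution.
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_batch, y_batch)
        detector = IsolationForest(contamination=0.05, random_state=0).fit(X_batch)
    return anomaly_frac

# Simulate a shifted batch (mean moved from 0 to 3) to exercise the trigger.
X_new = rng.normal(3.0, 1.0, size=(BATCH_SIZE, 10))
y_new = (X_new[:, 0] > 3).astype(int)
print(f"anomaly fraction: {process_batch(X_new, y_new):.2f}")
```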

21 pages, 2869 KiB  
Article
State of Health Estimation for Lithium-Ion Batteries Based on TCN-RVM
by Yu Zhao, Yonghong Xu, Yidi Wei, Liang Tong, Yiyang Li, Minghui Gong, Hongguang Zhang, Baoying Peng and Yinlian Yan
Appl. Sci. 2025, 15(15), 8213; https://doi.org/10.3390/app15158213 - 23 Jul 2025
Viewed by 55
Abstract
State of Health (SOH) estimation of lithium-ion batteries is a core function of battery management systems, directly affecting the safe operation, lifetime prediction, and economic efficiency of batteries. However, existing methods still face challenges in balancing feature robustness and model generalization ability; for instance, some studies rely on features whose physical correlation with SOH lacks strict verification, or the models struggle to simultaneously capture the temporal dynamics of health factors and nonlinear mapping relationships. To address this, this paper proposes an SOH estimation method based on incremental capacity (IC) curves and a Temporal Convolutional Network–Relevance Vector Machine (TCN-RVM) model, with core innovations in two aspects. First, five health factors are extracted from IC curves, and the strong correlation between these features and SOH is verified using both Pearson and Spearman coefficients, ensuring the physical rationality and statistical significance of feature selection. Second, the TCN-RVM model is constructed to achieve complementary advantages: the dilated causal convolution of the TCN extracts temporal local features of the health factors, addressing the insufficient capture of long-range dependencies in traditional models, while the Bayesian inference framework of the RVM enhances nonlinear mapping capability and small-sample generalization, avoiding the overfitting tendency of complex models. Experimental validation is conducted using the lithium-ion battery dataset from the University of Maryland. The results show that the mean absolute error of the SOH estimation does not exceed 0.72%, significantly outperforming comparative models such as CNN-GRU, KELM, and SVM in both accuracy and reliability.
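The feature-screening step mentioned above (verifying Pearson and Spearman correlations between IC-curve health factors and SOH) can be pictured with the minimal sketch below; the arrays are synthetic placeholders, not the University of Maryland data, and the factor names are assumed.

```python
# Illustrative correlation screening for candidate health factors; data are synthetic.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(1)
n_cycles = 300

# Assumed stand-ins: SOH decays over cycles; each health factor tracks it with noise.
soh = np.linspace(1.0, 0.8, n_cycles) + rng.normal(0, 0.002, n_cycles)
health_factors = {
    "ic_peak_height": soh * 5.0 + rng.normal(0, 0.05, n_cycles),
    "ic_peak_position": 3.6 + 0.1 * soh + rng.normal(0, 0.01, n_cycles),
    "ic_peak_area": soh ** 2 + rng.normal(0, 0.01, n_cycles),
}

for name, values in health_factors.items():
    r_p, _ = pearsonr(values, soh)     # linear association
    r_s, _ = spearmanr(values, soh)    # monotonic (rank) association
    print(f"{name:18s}  Pearson={r_p:+.3f}  Spearman={r_s:+.3f}")
```

Factors that keep a high absolute correlation under both tests would be the ones retained as model inputs.
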
15 pages, 1174 KiB  
Article
A New Incremental Learning Method Based on Rainbow Memory for Fault Diagnosis of AUV
by Ying Li, Yuxing Ye, Zhiwei Zhang and Long Wen
Sensors 2025, 25(15), 4539; https://doi.org/10.3390/s25154539 - 22 Jul 2025
Viewed by 70
Abstract
Autonomous Underwater Vehicles (AUVs) are gradually becoming some of the most important equipment in deep-sea exploration. However, given the dynamic nature of the deep-sea environment, any unplanned AUV fault can cause serious accidents. Traditional fault diagnosis models are trained on static, fixed datasets, making them difficult to apply to new and unknown deep-sea environments. To address these issues, this study explores incremental learning to enable AUVs to continuously adapt to new fault scenarios while preserving previously learned diagnostic knowledge, resulting in the proposed RM-MFKAN model. First, the approach employs the Rainbow Memory (RM) framework to analyze data characteristics and temporal sequences, thereby delineating boundaries between old and new tasks. Second, the model evaluates data importance to select and store key samples encapsulating critical information from prior tasks. Third, the RM is combined with an enhanced KAN network: the stored samples are combined with new task training data and fed into a multi-branch feature fusion neural network. The proposed RM-MFKAN model was evaluated on the “Haizhe” dataset, and the experimental results demonstrate that it achieves superior fault diagnosis performance for AUVs.
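The rehearsal idea can be sketched roughly as follows; the importance score, memory budget, and data shapes are assumptions for illustration and do not reproduce the RM-MFKAN implementation.

```python
# Toy rehearsal memory: keep the most representative old-task samples and mix them
# with new-task data for the next training round. Importance here is a placeholder
# (distance from the class mean); Rainbow Memory uses uncertainty-based scoring.
import numpy as np

rng = np.random.default_rng(2)
MEMORY_PER_CLASS = 20  # assumed budget

def select_exemplars(X, y, budget=MEMORY_PER_CLASS):
    """Pick `budget` diverse samples per class as the stored memory."""
    memory_X, memory_y = [], []
    for cls in np.unique(y):
        Xc = X[y == cls]
        dists = np.linalg.norm(Xc - Xc.mean(axis=0), axis=1)
        order = np.argsort(dists)                       # spread from typical to atypical
        keep = order[np.linspace(0, len(order) - 1, min(budget, len(order))).astype(int)]
        memory_X.append(Xc[keep])
        memory_y.append(np.full(len(keep), cls))
    return np.concatenate(memory_X), np.concatenate(memory_y)

# Old task (faults 0-2) -> memory; new task (faults 3-4) -> combined training set.
X_old, y_old = rng.normal(size=(600, 16)), rng.integers(0, 3, 600)
X_new, y_new = rng.normal(size=(400, 16)), rng.integers(3, 5, 400)
mem_X, mem_y = select_exemplars(X_old, y_old)
X_train = np.concatenate([mem_X, X_new])
y_train = np.concatenate([mem_y, y_new])
print(X_train.shape, np.bincount(y_train))
```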

39 pages, 1774 KiB  
Review
FACTS Controllers’ Contribution for Load Frequency Control, Voltage Stability and Congestion Management in Deregulated Power Systems over Time: A Comprehensive Review
by Muhammad Asad, Muhammad Faizan, Pericle Zanchetta and José Ángel Sánchez-Fernández
Appl. Sci. 2025, 15(14), 8039; https://doi.org/10.3390/app15148039 - 18 Jul 2025
Viewed by 286
Abstract
Incremental energy demand, environmental constraints, restrictions on the availability of energy resources, economic conditions, and political pressures push the power sector toward deregulation. In addition to these impediments, competition over power quality, reliability, availability, and cost forces utilities to maximize utilization of the existing infrastructure by operating transmission lines close to their thermal limits. All these factors introduce problems related to power network stability, reliability, quality, congestion management, and security in restructured power systems. To overcome these problems, power-electronics-based FACTS devices are currently among the most beneficial solutions. This review paper presents the significant role of FACTS devices in restructured power networks and their technical benefits against various power system problems such as load frequency control, voltage stability, and congestion management. In addition, it provides an extensive comparison between different FACTS devices (series, shunt, and their combinations) and between the various optimization techniques (classical, analytical, hybrid, and meta-heuristic) that support FACTS devices in achieving their respective benefits. Generally, it is concluded that third-generation FACTS controllers are the most widely used to mitigate various power system problems (i.e., load frequency control, voltage stability, and congestion management). Moreover, a combination of multiple FACTS devices, with or without energy storage devices, is more beneficial than their individual usage, although it is not commonly adopted in small power systems due to high installation and maintenance costs. There is therefore a trade-off between the selection and cost of FACTS devices for minimizing power system problems. Likewise, meta-heuristic and hybrid optimization techniques are commonly adopted to optimize FACTS devices due to their fast convergence, robustness, higher accuracy, and flexibility.
(This article belongs to the Special Issue State-of-the-Art of Power Systems)

25 pages, 2878 KiB  
Article
A Multi-Faceted Approach to Air Quality: Visibility Prediction and Public Health Risk Assessment Using Machine Learning and Dust Monitoring Data
by Lara Dronjak, Sofian Kanan, Tarig Ali, Reem Assim and Fatin Samara
Sustainability 2025, 17(14), 6581; https://doi.org/10.3390/su17146581 - 18 Jul 2025
Viewed by 340
Abstract
Clean and safe air quality is essential for public health, yet particulate matter (PM) significantly degrades air quality and poses serious health risks. The Gulf Cooperation Council (GCC) countries are particularly vulnerable to frequent and intense dust storms due to their vast desert landscapes. This study presents the first assessment of the carcinogenic and non-carcinogenic health risks associated with exposure to PM2.5- and PM10-bound heavy metals and polycyclic aromatic hydrocarbons (PAHs), based on air quality data collected from 2016 to 2018 near Dubai International Airport and Abu Dhabi International Airport. The results reveal no significant carcinogenic risks for lead (Pb), cobalt (Co), nickel (Ni), and chromium (Cr), whereas the estimated incremental lifetime cancer risk (ILCR) from PAH exposure exceeded the acceptable threshold (10⁻⁶) in several samples at both locations. Additionally, AI-based regression analysis was applied to time-series dust monitoring data to enhance predictive capabilities in environmental monitoring systems. The relationship between visibility and key environmental variables, namely PM1, PM2.5, PM10, total suspended particles (TSPs), wind speed, air pressure, and air temperature, was modeled using three machine learning algorithms: linear regression, support vector machine (SVM) with a radial basis function (RBF) kernel, and artificial neural networks (ANNs). Among these, the SVM with an RBF kernel showed the highest accuracy in predicting visibility, effectively integrating meteorological and particulate matter variables. These findings highlight the potential of machine learning models for environmental monitoring and the need for continued assessment of air quality and its health implications in the region.
(This article belongs to the Special Issue Impact of AI on Business Sustainability and Efficiency)
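A hedged sketch of the visibility-regression setup described above, using scikit-learn's SVR with an RBF kernel on the listed predictor types; the data are synthetic, and the kernel parameters and train/test split are assumptions.

```python
# Illustrative visibility model: SVR (RBF kernel) on PM and meteorological inputs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 1000
# Columns: PM1, PM2.5, PM10, TSP, wind speed, pressure, temperature (synthetic ranges).
X = rng.uniform([5, 10, 20, 30, 0, 990, 15],
                [80, 150, 400, 600, 15, 1030, 48], size=(n, 7))
visibility = 12000 / (1 + 0.02 * X[:, 2]) + 100 * X[:, 4] + rng.normal(0, 200, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, visibility, test_size=0.2, random_state=3)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=10.0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
```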

55 pages, 6352 KiB  
Review
A Deep Learning Framework for Enhanced Detection of Polymorphic Ransomware
by Mazen Gazzan, Bader Alobaywi, Mohammed Almutairi and Frederick T. Sheldon
Future Internet 2025, 17(7), 311; https://doi.org/10.3390/fi17070311 - 18 Jul 2025
Viewed by 340
Abstract
Ransomware, a significant cybersecurity threat, encrypts files and causes substantial damage, making early detection crucial yet challenging. This paper introduces a novel multi-phase framework for early ransomware detection, designed to enhance accuracy and minimize false positives. The framework addresses the limitations of existing methods by integrating operational data with situational and threat intelligence, enabling it to dynamically adapt to the evolving ransomware landscape. Key innovations include (1) data augmentation using a Bi-Gradual Minimax Generative Adversarial Network (BGM-GAN) to generate synthetic ransomware attack patterns, addressing data insufficiency; (2) Incremental Mutual Information Selection (IMIS) for dynamically selecting relevant features, adapting to evolving ransomware behaviors and reducing computational overhead; and (3) a Deep Belief Network (DBN) detection architecture, trained on the augmented data and optimized with Uncertainty-Aware Dynamic Early Stopping (UA-DES) to prevent overfitting. The model demonstrates a 4% improvement in detection accuracy (from 90% to 94%) through synthetic data generation and reduces false positives from 15.4% to 14%. The IMIS technique further increases accuracy to 96% while reducing false positives. The UA-DES optimization boosts accuracy to 98.6% and lowers false positives to 10%. Overall, this framework effectively addresses the challenges posed by evolving ransomware, significantly enhancing detection accuracy and reliability.
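The feature-selection idea behind IMIS can be approximated with a plain mutual-information ranking, sketched below; the incremental/dynamic part of IMIS is not reproduced, and the dataset and top-k cutoff are assumptions.

```python
# Rank candidate ransomware-behaviour features by mutual information with the label
# and keep the top k. This is a static stand-in for the incremental selection (IMIS).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=4)
mi = mutual_info_classif(X, y, random_state=4)

top_k = 10                                    # assumed feature budget
selected = np.argsort(mi)[::-1][:top_k]
print("selected feature indices:", sorted(selected.tolist()))
X_reduced = X[:, selected]                    # fed to the downstream detector (e.g., a DBN)
print("reduced shape:", X_reduced.shape)
```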

23 pages, 10912 KiB  
Article
ET: A Metaheuristic Optimization Algorithm for Task Mapping in Network-on-Chip
by Ke Li, Jingbo Shao and Yan Song
Electronics 2025, 14(14), 2846; https://doi.org/10.3390/electronics14142846 - 16 Jul 2025
Viewed by 151
Abstract
In Network-on-Chip (NoC) research, the task mapping problem has attracted considerable attention as a core issue influencing system performance. As an NP-hard problem, it remains challenging, and existing algorithms exhibit limitations in both mapping quality and computational efficiency. To address this, a method named ET (Enhanced Coati Optimization Algorithm) is proposed, which leverages the nature-inspired Coati Optimization Algorithm (COA) for task mapping. An incremental hill-climbing strategy is integrated to improve local search capabilities, and a dynamic mechanism for adjusting the exploration–exploitation ratio is designed to better balance global and local searches. Additionally, an initial mapping strategy based on spectral clustering is introduced, which utilizes inter-task communication strength to cluster tasks, thereby improving the quality of the initial population. To evaluate the effectiveness of the proposed algorithm, the performance of the ET algorithm is compared and analyzed against various existing algorithms in terms of communication cost, energy consumption, and latency, using both real benchmark task maps and randomly generated task maps. Experimental results demonstrate that the ET algorithm consistently outperforms the compared algorithms across all performance metrics, thereby confirming its superiority in addressing the NoC task mapping problem.
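The spectral-clustering initialization mentioned in this abstract can be sketched roughly as follows; the random communication matrix, the 4x4 mesh, and the cluster-to-tile seeding are illustrative assumptions, not the ET algorithm itself.

```python
# Cluster tasks by communication strength, then seed the mapping by placing each
# cluster in a contiguous region of a 4x4 NoC mesh. Purely illustrative.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(5)
n_tasks, mesh = 16, (4, 4)

# Symmetric communication-volume matrix between tasks (synthetic).
comm = rng.integers(0, 50, size=(n_tasks, n_tasks))
comm = (comm + comm.T) // 2
np.fill_diagonal(comm, 0)

# Spectral clustering on the communication graph groups chatty tasks together.
labels = SpectralClustering(n_clusters=4, affinity="precomputed",
                            random_state=5).fit_predict(comm)

# Naive seeding: walk the mesh tiles cluster by cluster.
tiles = [(r, c) for r in range(mesh[0]) for c in range(mesh[1])]
mapping, tile_iter = {}, iter(tiles)
for cluster in range(4):
    for task in np.where(labels == cluster)[0]:
        mapping[int(task)] = next(tile_iter)

def comm_cost(mapping):
    """Sum of communication volume times Manhattan hop distance."""
    cost = 0
    for i in range(n_tasks):
        for j in range(i + 1, n_tasks):
            (r1, c1), (r2, c2) = mapping[i], mapping[j]
            cost += comm[i, j] * (abs(r1 - r2) + abs(c1 - c2))
    return cost

print("initial mapping cost:", comm_cost(mapping))
```

A hill-climbing pass would then repeatedly swap tile assignments and keep swaps that lower this cost.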

30 pages, 14631 KiB  
Article
Unsupervised Plot Morphology Classification via Graph Attention Networks: Evidence from Nanjing’s Walled City
by Ziyu Liu and Yacheng Song
Land 2025, 14(7), 1469; https://doi.org/10.3390/land14071469 - 15 Jul 2025
Viewed by 277
Abstract
Urban plots are pivotal links between individual buildings and the city fabric, yet conventional plot classification methods often overlook how buildings interact within each plot. This oversight is particularly problematic in the irregular fabrics typical of many Global South cities. This study aims to create a plot classification method that jointly captures metric and configurational characteristics. Our approach converts each cadastral plot into a graph whose nodes are building centroids and whose edges reflect Delaunay-based proximity. The model then learns unsupervised graph embeddings with a two-layer Graph Attention Network guided by a triple loss that couples building morphology with spatial topology. We then cluster the embeddings together with normalized plot metrics. Applying the model to 8973 plots in Nanjing’s historic walled city yields seven distinct plot morphology types. The framework separates plots that share identical FAR–GSI values but differ in internal organization. The baseline and ablation experiments confirm the indispensability of both configurational and metric information. Each type aligns with specific renewal strategies, from incremental upgrades of courtyard slabs to skyline management of high-rise complexes. By integrating quantitative graph learning with classical typo-morphology theory, this study not only advances urban form research but also offers planners a tool for context-sensitive urban regeneration and land-use management.
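A small sketch of the graph-construction step described above, turning building centroids within a plot into a Delaunay-based edge list; the coordinates are synthetic, and the GAT, triple loss, and clustering stages are not shown.

```python
# Build a proximity graph for one plot: nodes = building centroids,
# edges = Delaunay triangulation edges. Synthetic coordinates for illustration.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(6)
centroids = rng.uniform(0, 100, size=(12, 2))   # 12 buildings in a hypothetical plot

tri = Delaunay(centroids)
edges = set()
for simplex in tri.simplices:                    # each triangle contributes 3 edges
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        i, j = sorted((simplex[a], simplex[b]))
        edges.add((int(i), int(j)))

edge_index = np.array(sorted(edges)).T           # 2 x E array, GNN-library style
print("num nodes:", len(centroids), "num edges:", edge_index.shape[1])
```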

23 pages, 16886 KiB  
Article
SAVL: Scene-Adaptive UAV Visual Localization Using Sparse Feature Extraction and Incremental Descriptor Mapping
by Ganchao Liu, Zhengxi Li, Qiang Gao and Yuan Yuan
Remote Sens. 2025, 17(14), 2408; https://doi.org/10.3390/rs17142408 - 12 Jul 2025
Viewed by 322
Abstract
In recent years, the use of UAVs has become widespread. Long-distance UAV flight requires precise geographic coordinates. Global Navigation Satellite Systems (GNSS) are the most common positioning solution, but their signals are susceptible to interference from obstacles and complex electromagnetic environments. In such cases, vision-based techniques can serve as an alternative to ensure the self-positioning capability of UAVs. Therefore, a scene-adaptive UAV visual localization framework (SAVL) is proposed. In the proposed framework, UAV images are mapped to satellite images with geographic coordinates through pixel-level matching to locate the UAV. First, to tackle the challenge of inaccurate localization resulting from sparse terrain features, this work proposes a novel feature extraction network grounded in a general visual model, leveraging the robust zero-shot generalization of the pre-trained model to extract sparse features from UAV and satellite imagery. Second, to overcome weak generalization in unknown scenarios, a descriptor incremental mapping module is designed, which reduces multi-source image differences at the semantic level through UAV-to-satellite descriptor mapping and constructs a confidence-based incremental strategy to adapt dynamically to the scene. Finally, due to the lack of annotated public datasets, a scene-rich UAV dataset (RealUAV) was constructed to study UAV visual localization in real-world environments. To evaluate the localization performance of the proposed framework, several related methods were compared and analyzed in detail. The results indicate that the proposed method achieves excellent positioning accuracy, with an average error of only 8.71 m.
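Since the framework rests on pixel-level matching between UAV and satellite imagery, a generic sparse-matching sketch with off-the-shelf ORB features is given below; it is not SAVL's learned feature extractor or descriptor mapping, and the synthetic images only stand in for real UAV/satellite tiles.

```python
# Generic sparse matching between a "UAV" crop and a "satellite" tile using ORB.
import cv2
import numpy as np

rng = np.random.default_rng(7)
satellite = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)
satellite = cv2.GaussianBlur(satellite, (5, 5), 0)      # add some structure
uav = satellite[100:356, 150:406].copy()                # pretend UAV view of a sub-area

orb = cv2.ORB_create(nfeatures=1000)
kp_u, des_u = orb.detectAndCompute(uav, None)
kp_s, des_s = orb.detectAndCompute(satellite, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_u, des_s), key=lambda m: m.distance)

# Estimate where the UAV view sits inside the satellite tile (RANSAC homography).
src = np.float32([kp_u[m.queryIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
dst = np.float32([kp_s[m.trainIdx].pt for m in matches[:100]]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("inlier matches:", int(mask.sum()) if mask is not None else 0)
```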

14 pages, 259 KiB  
Article
Adaptive Learning Approach for Human Activity Recognition Using Data from Smartphone Sensors
by Leonidas Sakalauskas and Ingrida Vaiciulyte
Appl. Sci. 2025, 15(14), 7731; https://doi.org/10.3390/app15147731 - 10 Jul 2025
Viewed by 158
Abstract
Every day, humans interact with smartphones whose embedded sensors enable tracking of the device owner’s changing physical activities. However, several problems arise in recognizing multiple activities (such as walking, sitting, and running) on smartphones. First, most devices do not recognize some activities well, such as walking upstairs or downstairs. Second, recognition algorithms are embedded in smartphone software and remain static unless updated, in which case the algorithm must be re-trained with training data of a specific size. Thus, an adaptive (also known as online or incremental) learning algorithm is useful in this situation. In this work, an adaptive learning and classification algorithm based on hidden Markov models (HMMs) is applied to human activity recognition, and an architecture model for smartphones is proposed. To create a self-learning method, a technique for building an incremental algorithm within a maximum likelihood framework has been developed. The resulting adaptive algorithms enable fast self-learning of the model parameters without requiring the device to store sensor data or send gathered data to a server for additional processing, making them autonomous and independent of outside systems. Experiments involving the modeling of various activities as separate HMMs with different numbers of states, as well as modeling several activities with one HMM, were performed. The public Activity Recognition Dataset was used for this study. To generalize the results, different performance metrics were used in the validation of the proposed algorithm.
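One way to picture per-activity HMM modeling of this kind is the following hmmlearn sketch; the accelerometer-like sequences are synthetic, and the adaptive/online parameter updates central to the paper are not reproduced.

```python
# Fit one Gaussian HMM per activity and classify a new window by log-likelihood.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(8)

def fake_windows(mean, n_windows=20, win_len=50, dim=3):
    """Synthetic 3-axis accelerometer windows around a per-activity mean."""
    return [mean + rng.normal(0, 0.3, size=(win_len, dim)) for _ in range(n_windows)]

activities = {"walking": np.array([0.0, 1.0, 0.2]),
              "sitting": np.array([0.0, 0.0, 1.0]),
              "running": np.array([0.5, 2.0, 0.3])}

models = {}
for name, mean in activities.items():
    windows = fake_windows(mean)
    X = np.concatenate(windows)                 # stacked observations
    lengths = [len(w) for w in windows]         # per-sequence lengths for hmmlearn
    models[name] = GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=30, random_state=8).fit(X, lengths)

# Classify an unseen "running" window by the highest log-likelihood model.
test = activities["running"] + rng.normal(0, 0.3, size=(50, 3))
scores = {name: m.score(test) for name, m in models.items()}
print(max(scores, key=scores.get))
```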

31 pages, 1216 KiB  
Article
EL-GNN: A Continual-Learning-Based Graph Neural Network for Task-Incremental Intrusion Detection Systems
by Thanh-Tung Nguyen and Minho Park
Electronics 2025, 14(14), 2756; https://doi.org/10.3390/electronics14142756 - 9 Jul 2025
Viewed by 293
Abstract
Modern network infrastructures have significantly improved global connectivity while simultaneously escalating network security challenges as sophisticated cyberattacks increasingly target vital systems. Intrusion Detection Systems (IDSs) play a crucial role in identifying and mitigating these threats, and recent advances in machine-learning-based IDSs have shown promise in detecting evolving attack patterns. Notably, IDSs employing Graph Neural Networks (GNNs) have proven effective at modeling the dynamics of network traffic and internal interactions. However, these systems suffer from Catastrophic Forgetting (CF), where the incorporation of new attack patterns leads to the loss of previously acquired knowledge, limiting their adaptability and effectiveness in evolving network environments. In this study, we introduce the Elastic Graph Neural Network for Intrusion Detection Systems (EL-GNN), a novel approach designed to enhance the continual learning (CL) capabilities of GNN-based IDSs. The approach improves a GNN-based IDS’s ability to preserve previously learned knowledge of past cyber threats while adapting effectively to newly emerging attack patterns in dynamic network environments. Experimental evaluations on trusted datasets across multiple task scenarios demonstrate that our method outperforms existing approaches in terms of accuracy and F1-score, effectively addressing CF and enhancing adaptability in detecting new network attacks.
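The abstract does not detail the "elastic" mechanism, so the sketch below only illustrates the general continual-learning idea of anchoring parameters learned on earlier tasks while training on a new one; the plain MLP, the quadratic penalty, and its strength are assumptions, not the EL-GNN architecture.

```python
# Task-incremental training with a simple parameter-anchoring penalty: after each
# task, a copy of the weights is stored, and later tasks are penalized for drifting
# away from it. Illustrative only; no graph layers are used here.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
LAMBDA = 10.0                                   # assumed strength of the anchoring term
anchor = None                                   # weights remembered from previous tasks

def train_task(X, y, epochs=50):
    global anchor
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        if anchor is not None:                  # penalize forgetting of earlier tasks
            loss = loss + LAMBDA * sum(((p - a) ** 2).sum()
                                       for p, a in zip(model.parameters(), anchor))
        loss.backward()
        opt.step()
    anchor = [p.detach().clone() for p in model.parameters()]

# Two synthetic "attack detection" tasks seen one after the other.
X1, y1 = torch.randn(512, 20), torch.randint(0, 2, (512,))
X2, y2 = torch.randn(512, 20) + 1.0, torch.randint(0, 2, (512,))
train_task(X1, y1)
train_task(X2, y2)
print("trained on both tasks with anchoring penalty")
```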

33 pages, 5572 KiB  
Article
Machine Learning-Based Methods for the Seismic Damage Classification of RC Buildings
by Sung Hei Luk
Buildings 2025, 15(14), 2395; https://doi.org/10.3390/buildings15142395 - 8 Jul 2025
Viewed by 247
Abstract
This paper aims to investigate the feasibility of machine learning methods for the vulnerability assessment of buildings and structures. Traditionally, the seismic performance of buildings and structures is determined through a non-linear time–history analysis, which is an accurate but time-consuming process. As an alternative, structural responses of buildings under earthquakes can be obtained using well-trained machine learning models. In the current study, machine learning models for the damage classification of RC buildings are developed using the datasets generated from numerous incremental dynamic analyses. A variety of earthquake and structural parameters are considered as input parameters, while damage levels based on the maximum inter-story drift ratio are selected as the output. The performance and effectiveness of several machine learning algorithms, including ensemble methods and artificial neural networks, are investigated. The importance of different input parameters is studied. The results reveal that well-prepared machine learning models are also capable of predicting damage levels with an adequate level of accuracy and minimal computational effort. In this study, the XGBoost method generally outperforms the other algorithms, with the highest accuracy and generalizability. Simplified prediction models are also developed for preliminary estimation using the selected input parameters for practical usage.
(This article belongs to the Section Building Structures)
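A hedged sketch of the classification setup described above (structural and ground-motion parameters in, drift-based damage class out, fitted with XGBoost); the features, drift model, and damage thresholds are synthetic stand-ins for the IDA-generated dataset.

```python
# Map synthetic earthquake/structural parameters to drift-based damage classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(9)
n = 3000
# Assumed inputs: PGA, spectral acceleration, building height, period, strength ratio.
X = rng.uniform([0.05, 0.1, 10, 0.3, 0.5], [1.2, 2.5, 60, 2.0, 2.0], size=(n, 5))
# Toy drift response plus noise; real values would come from incremental dynamic analyses.
drift = 0.0005 * X[:, 0] * X[:, 1] * X[:, 2] / X[:, 4] + rng.normal(0, 0.002, n)
y = np.digitize(drift, [0.005, 0.01, 0.02])      # 0 = slight ... 3 = severe (assumed bins)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=9, stratify=y)
clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```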

20 pages, 11079 KiB  
Article
A Bayesian Ensemble Learning-Based Scheme for Real-Time Error Correction of Flood Forecasting
by Liyao Peng, Jiemin Fu, Yanbin Yuan, Xiang Wang, Yangyong Zhao and Jian Tong
Water 2025, 17(14), 2048; https://doi.org/10.3390/w17142048 - 8 Jul 2025
Viewed by 259
Abstract
To address the critical demand for high-precision forecasts in flood management, real-time error correction techniques are increasingly implemented to improve the accuracy and operational reliability of hydrological prediction frameworks. However, developing a robust error correction scheme remains a significant challenge due to the compounded errors inherent in hydrological modeling frameworks. In this study, a Bayesian ensemble learning-based correction (BELC) scheme is proposed, which integrates hydrological modeling with multiple machine learning methods to enhance real-time error correction for flood forecasting. The Xin’anjiang (XAJ) model is selected as the hydrological model, given its proven effectiveness in flood forecasting across humid and semi-humid regions and its combination of structural simplicity with demonstrated predictive accuracy. The BELC scheme directly post-processes the output of the XAJ model within a Bayesian ensemble learning framework. Four machine learning methods are implemented as base learners: long short-term memory (LSTM) networks, a light gradient-boosting machine (LGBM), temporal convolutional networks (TCNs), and random forest (RF). Optimal weights for all base learners are determined by the K-means clustering technique and Bayesian optimization. Four baseline schemes constructed from the base learners and three ensemble learning-based schemes are also built for comparison. The performance of the BELC scheme is systematically evaluated in the Hengshan Reservoir watershed (Fenghua City, China). The results indicate the following: (1) The BELC scheme achieves better accuracy and robustness than the four baseline schemes and the three ensemble learning-based schemes, with average performance metrics for 1–3 h lead times of 0.95 (NSE), 0.92 (KGE), 24.25 m³/s (RMSE), and 8.71% (RPE), and a PTE consistently below 1 h. (2) The K-means clustering technique proves particularly effective within the ensemble learning framework for high flow ranges, where the correction performance improves by 62%, 100%, and 100% for 1 h, 2 h, and 3 h lead times, respectively. Overall, the BELC scheme demonstrates the potential of a Bayesian ensemble learning framework for improving real-time error correction in flood forecasting systems.
(This article belongs to the Special Issue Innovations in Hydrology: Streamflow and Flood Prediction)
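The weighting idea can be illustrated with a reduced sketch: two stand-in base learners correct the hydrological model's residual, and a single mixing weight is tuned on a validation split; this replaces the paper's four learners, K-means clustering, and Bayesian optimization with much simpler placeholders.

```python
# Reduced error-correction ensemble: base learners predict the forecast residual,
# and one mixing weight is tuned on validation data. Stand-in for the BELC idea.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(10)
n = 2000
lagged_obs = rng.uniform(10, 300, size=(n, 3))                   # assumed inputs: recent flows
model_forecast = lagged_obs[:, 0] * 0.9 + rng.normal(0, 15, n)   # fake "XAJ" output
observed = lagged_obs[:, 0] + 0.1 * lagged_obs[:, 1]
residual = observed - model_forecast                             # what the correctors learn

X = np.column_stack([lagged_obs, model_forecast])
tr, va = slice(0, 1400), slice(1400, None)
base = [RandomForestRegressor(n_estimators=200, random_state=10),
        GradientBoostingRegressor(random_state=10)]
preds = [m.fit(X[tr], residual[tr]).predict(X[va]) for m in base]

# Grid-search a convex weight between the two learners on the validation split.
weights = np.linspace(0, 1, 21)
rmse = [np.sqrt(mean_squared_error(residual[va], w * preds[0] + (1 - w) * preds[1]))
        for w in weights]
best_w = weights[int(np.argmin(rmse))]
corrected = model_forecast[va] + best_w * preds[0] + (1 - best_w) * preds[1]
print("best weight:", best_w, "corrected RMSE:",
      round(np.sqrt(mean_squared_error(observed[va], corrected)), 2))
```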

22 pages, 989 KiB  
Article
A Second-Classroom Personalized Learning Path Recommendation System Based on Large Language Model Technology
by Qiankun Yang and Changyong Liang
Appl. Sci. 2025, 15(14), 7655; https://doi.org/10.3390/app15147655 - 8 Jul 2025
Viewed by 421
Abstract
To address the limitations of existing learning path recommendation methods—such as poor adaptability, weak personalization, and difficulties in processing long sequences of student behavior and interest data—this paper proposes a personalized learning path recommendation system for the second classroom based on large language model (LLM) technology, with a focus on integrating the pre-trained model GPT-4. The goal is to improve recommendation accuracy and personalization by leveraging GPT-4’s strong long-sequence modeling capability. The system fuses students’ multimodal data (e.g., physiological signals, facial expressions, activity levels, and emotional states), extracts deep features using GPT-4, and generates tailored learning paths based on individual feature vectors. It also incorporates incremental learning and self-attention mechanisms to enable real-time feedback and dynamic adjustments. A generative adversarial network (GAN) is introduced to enhance diversity and innovation in recommendations. The experimental results show that the system achieves a personalized recommendation accuracy of over 92%, with coverage and recall rates exceeding 91% and 93%, respectively. Feedback adjustment time remains within 1.5 s, outperforming mainstream models. This study provides a novel and effective technical framework for personalized learning in the second classroom, promoting both efficient resource utilization and student development.
(This article belongs to the Special Issue Advanced Models and Algorithms for Recommender Systems)

24 pages, 1645 KiB  
Article
Dual-Stage Clean-Sample Selection for Incremental Noisy Label Learning
by Jianyang Li, Xin Ma and Yonghong Shi
Bioengineering 2025, 12(7), 743; https://doi.org/10.3390/bioengineering12070743 - 8 Jul 2025
Viewed by 371
Abstract
Class-incremental learning (CIL) in deep neural networks is affected by catastrophic forgetting (CF), where acquiring knowledge of new classes leads to the significant degradation of previously learned representations. This challenge is particularly severe in medical image analysis, where costly, expertise-dependent annotations frequently contain pervasive and hard-to-detect noisy labels that substantially compromise model performance. While existing approaches have predominantly addressed CF and noisy labels as separate problems, their combined effects remain largely unexplored. To address this critical gap, this paper presents a dual-stage clean-sample selection method for Incremental Noisy Label Learning (DSCNL). Our approach comprises two key components: (1) a dual-stage clean-sample selection module that identifies and leverages high-confidence samples to guide the learning of reliable representations while mitigating noise propagation during training, and (2) an experience soft-replay strategy for memory rehearsal to improve the model’s robustness and generalization in the presence of historical noisy labels. This integrated framework effectively suppresses the adverse influence of noisy labels while simultaneously alleviating catastrophic forgetting. Extensive evaluations on public medical image datasets demonstrate that DSCNL consistently outperforms state-of-the-art CIL methods across diverse classification tasks. The proposed method boosts the average accuracy by 55% and 31% compared with baseline methods on datasets with different noise levels, and achieves an average noise reduction rate of 73% under original noise conditions, highlighting its effectiveness and applicability in real-world medical imaging scenarios.
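The clean-sample idea can be pictured with a small-loss selection sketch, a common proxy for confidence-based selection under label noise; the dual-stage design, soft-replay strategy, and medical datasets of DSCNL are not reproduced.

```python
# Select the lowest-loss (most confidently fitted) samples as the "clean" subset,
# a common proxy for confidence-based clean-sample selection under label noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(11)
X, y_true = make_classification(n_samples=2000, n_features=20, random_state=11)

# Inject 20% symmetric label noise (assumed noise model).
y_noisy = y_true.copy()
flip = rng.random(len(y_noisy)) < 0.2
y_noisy[flip] = 1 - y_noisy[flip]

clf = LogisticRegression(max_iter=1000).fit(X, y_noisy)
proba = clf.predict_proba(X)
per_sample_loss = -np.log(proba[np.arange(len(y_noisy)), y_noisy] + 1e-12)

# Keep the 60% of samples with the smallest loss as the presumed-clean set.
keep = np.argsort(per_sample_loss)[: int(0.6 * len(y_noisy))]
precision = np.mean(y_noisy[keep] == y_true[keep])
print(f"fraction of kept samples whose labels are actually correct: {precision:.2f}")
```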
