Search Results (370)

Search Parameters:
Keywords = historical training information

25 pages, 1169 KiB  
Article
DPAO-PFL: Dynamic Parameter-Aware Optimization via Continual Learning for Personalized Federated Learning
by Jialu Tang, Yali Gao, Xiaoyong Li and Jia Jia
Electronics 2025, 14(15), 2945; https://doi.org/10.3390/electronics14152945 - 23 Jul 2025
Viewed by 232
Abstract
Federated learning (FL) enables multiple participants to collaboratively train models while efficiently mitigating the issue of data silos. However, large-scale heterogeneous data distributions result in inconsistent client objectives and catastrophic forgetting, leading to model bias and slow convergence. To address the challenges under non-independent and identically distributed (non-IID) data, we propose DPAO-PFL, a Dynamic Parameter-Aware Optimization framework that leverages continual learning principles to improve Personalized Federated Learning under non-IID conditions. We decomposed the parameters into two components: local personalized parameters tailored to client characteristics, and global shared parameters that capture the accumulated marginal effects of parameter updates over historical rounds. Specifically, we leverage the Fisher information matrix to estimate parameter importance online, integrate the path sensitivity scores within a time-series sliding window to construct a dynamic regularization term, and adaptively adjust the constraint strength to mitigate the conflict overall tasks. We evaluate the effectiveness of DPAO-PFL through extensive experiments on several benchmarks under IID and non-IID data distributions. Comprehensive experimental results indicate that DPAO-PFL outperforms baselines with improvements from 5.41% to 30.42% in average classification accuracy. By decoupling model parameters and incorporating an adaptive regularization mechanism, DPAO-PFL effectively balances generalization and personalization. Furthermore, DPAO-PFL exhibits superior performance in convergence and collaborative optimization compared to state-of-the-art FL methods. Full article
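The Fisher-information-weighted regularizer described above is closely related to elastic weight consolidation. The following is a minimal illustrative sketch of that general idea — the function names, the diagonal-Fisher estimate, and the penalty form are assumptions for illustration, not the authors' DPAO-PFL implementation:

```python
def fisher_diag(sample_grads):
    """Diagonal Fisher estimate: mean squared first-order gradient per parameter."""
    n = len(sample_grads)
    dim = len(sample_grads[0])
    return [sum(g[i] ** 2 for g in sample_grads) / n for i in range(dim)]

def importance_penalty(params, anchor, fisher, lam=1.0):
    """Quadratic penalty pulling important (high-Fisher) parameters back
    toward previously learned values, added to the local training loss."""
    return 0.5 * lam * sum(f * (p - a) ** 2
                           for p, a, f in zip(params, anchor, fisher))
```

In DPAO-PFL the constraint strength (here the fixed `lam`) is adapted over a sliding window of rounds; the sketch shows only the static core of such a penalty.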

12 pages, 1120 KiB  
Article
A Temporal Comparison of 50 Years of Australian Scuba Diving Fatalities
by John M. Lippmann
Int. J. Environ. Res. Public Health 2025, 22(7), 1148; https://doi.org/10.3390/ijerph22071148 - 19 Jul 2025
Viewed by 262
Abstract
Australian scuba fatalities over 50 years were examined to determine temporal changes over two consecutive periods, 1972–1999 and 2000–2021. The Australasian Diving Safety Foundation database and National Coronial Information System were searched to identify scuba deaths from 1972 to 2021. Historical data, police and witness reports, and autopsies were recorded and comparisons made between the two periods. Of 430 total deaths, 236 occurred during 1972–1999 and 194 during 2000–2021, with average annual fatalities of 8.4 and 8.8, respectively. The proportion of males reduced (83% to 76%) and median ages rose (33 to 47 years) with a large rise in the percentage of casualties among people aged 45 years or older (24% to 57%). There were increases in certified divers (64% to 81%) and in the proportion of divers who were with a buddy at the time of their incident (17% to 27%), as well as a decrease in out-of-gas incidents (30% to 25%). A reduction in primary drowning (47% to 36%) was accompanied by more than a doubling of cardiac-related disabling conditions (12% to 26%). The substantial increase in casualties’ ages and of the proportions of casualties aged 45 or more and of females between the periods indicate the inclusion of a broader cohort of participants and ageing of longtime divers. The reduction in primary drowning was likely due to increased training and improvements in equipment, particularly BCDs and pressure gauges. The rise in cardiac-related deaths was due to an older and more obese cohort. Improved health education and surveillance and improved dive planning are essential to reduce such deaths. Full article

33 pages, 11613 KiB  
Article
Assessing and Mapping Forest Fire Vulnerability in Romania Using Maximum Entropy and eXtreme Gradient Boosting
by Adrian Lorenț, Marius Petrila, Bogdan Apostol, Florin Capalb, Șerban Chivulescu, Cătălin Șamșodan, Cristiana Marcu and Ovidiu Badea
Forests 2025, 16(7), 1156; https://doi.org/10.3390/f16071156 - 13 Jul 2025
Viewed by 606
Abstract
Understanding and mapping forest fire vulnerability is essential for informed landscape management and disaster risk reduction, especially in the context of increasing anthropogenic and climatic pressures. This study aims to model and spatially predict forest fire vulnerability across Romania using two machine learning algorithms: MaxEnt and XGBoost. We integrated forest fire occurrence data from 2006 to 2024 with a suite of climatic, topographic, ecological, and anthropogenic predictors at a 250 m spatial resolution. MaxEnt, based on presence-only data, achieved moderate predictive performance (AUC = 0.758), while XGBoost, trained on presence–absence data, delivered higher classification accuracy (AUC = 0.988). Both models revealed that the impact of environmental variables on forest fire occurrence is complex and heterogeneous, with the most influential predictors being the Fire Weather Index, forest fuel type, elevation, and distance to human proximity features. The resulting vulnerability and uncertainty maps revealed hotspots in Sub-Carpathian and lowland regions, especially in Mehedinți, Gorj, Dolj, and Olt counties. These patterns reflect historical fire data and highlight the role of transitional agro-forested landscapes. This study delivers a replicable, data-driven approach to wildfire risk modelling, supporting proactive management and emphasising the importance of integrating vulnerability assessments into planning and climate adaptation strategies. Full article
(This article belongs to the Section Natural Hazards and Risk Management)
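The AUC values quoted above (0.758 for MaxEnt, 0.988 for XGBoost) have a simple rank interpretation: the probability that a randomly chosen fire location receives a higher predicted score than a randomly chosen non-fire location. A self-contained sketch of that statistic (not the authors' evaluation code):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the probability that a random positive example outscores a
    random negative one; tied scores count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

The pairwise loop is O(n·m) and meant only to make the definition concrete; rank-based formulas compute the same quantity in O(n log n).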

25 pages, 7406 KiB  
Article
Landslide Susceptibility Level Mapping in Kozhikode, Kerala, Using Machine Learning-Based Random Forest, Remote Sensing, and GIS Techniques
by Pradeep Kumar Badapalli, Anusha Boya Nakkala, Raghu Babu Kottala, Sakram Gugulothu, Fahdah Falah Ben Hasher, Varun Narayan Mishra and Mohamed Zhran
Land 2025, 14(7), 1453; https://doi.org/10.3390/land14071453 - 12 Jul 2025
Viewed by 1160
Abstract
Landslides are among the most destructive natural hazards in the Western Ghats region of Kerala, driven by complex interactions between geological, hydrological, and anthropogenic factors. This study aims to generate a high-resolution Landslide Susceptibility Level Map (LSLM) using a machine learning (ML)-based Random Forest (RF) model integrated with Geographic Information Systems (GIS). A total of 231 historical landslide locations obtained from the Bhukosh portal were used as reference data. Eight predictive factors—Stream Order, Drainage Density, Slope, Aspect, Geology, Land Use/Land Cover (LULC), Normalized Difference Vegetation Index (NDVI), and Moisture Stress Index (MSI)—were derived from remote sensing and ancillary datasets, preprocessed, and reclassified for model input. The RF model was trained and validated using a 50:50 split of landslide and non-landslide points, with variable importance values derived to weight each predictive factor of the raster layer in ArcGIS. The resulting Landslide Susceptibility Index (LSI) was reclassified into five susceptibility zones: Very Low, Low, Moderate, High, and Very High. Results indicate that approximately 17.82% of the study area falls under high to very high susceptibility, predominantly in the steep, weathered, and high rainfall zones of the Western Ghats. Validation using Area Under the Curve–Receiver Operating Characteristic (AUC-ROC) analysis yielded an accuracy of 0.890, demonstrating excellent model performance. The output LSM provides valuable spatial insights for planners, disaster managers, and policymakers, enabling targeted mitigation strategies and sustainable land-use planning in landslide-prone regions. Full article
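The weighted-overlay step described above — combining reclassified factor rasters using RF variable importances, then binning the index into five zones — can be sketched per pixel as follows. The break points and factor scores here are hypothetical placeholders, not values from the study:

```python
def susceptibility_index(factor_scores, weights):
    """Weighted linear combination of reclassified factor scores for one pixel."""
    return sum(w * f for w, f in zip(weights, factor_scores))

def classify(lsi, breaks=(0.2, 0.4, 0.6, 0.8)):
    """Map a susceptibility index to one of five zones (illustrative breaks)."""
    zones = ["Very Low", "Low", "Moderate", "High", "Very High"]
    for b, zone in zip(breaks, zones):
        if lsi < b:
            return zone
    return zones[-1]
```

In the actual workflow the weights come from the trained RF model and the raster algebra runs in ArcGIS; the sketch only shows the arithmetic applied at each cell.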

18 pages, 721 KiB  
Article
An Adaptive Holt–Winters Model for Seasonal Forecasting of Internet of Things (IoT) Data Streams
by Samer Sawalha and Ghazi Al-Naymat
IoT 2025, 6(3), 39; https://doi.org/10.3390/iot6030039 - 10 Jul 2025
Viewed by 298
Abstract
In various applications, IoT temporal data play a crucial role in accurately predicting future trends. Traditional models, including Rolling Window, SVR-RBF, and ARIMA, suffer from a potential accuracy decrease because they generally use all available data or the most recent data window during training, which can result in the inclusion of noisy data. To address this critical issue, this paper proposes a new forecasting technique called Adaptive Holt–Winters (AHW). The AHW approach utilizes two models grounded in an exponential smoothing methodology. The first model is trained on the most current data window, whereas the second extracts information from a historical data segment exhibiting patterns most analogous to the present. The outputs of the two models are then combined, demonstrating enhanced prediction precision since the focus is on the relevant data patterns. The effectiveness of the AHW model is evaluated against well-known models (Rolling Window, SVR-RBF, ARIMA, LSTM, CNN, RNN, and Holt–Winters), utilizing various metrics, such as RMSE, MAE, p-value, and time performance. A comprehensive evaluation covers various real-world datasets at different granularities (daily and monthly), including temperature from the National Climatic Data Center (NCDC), humidity and soil moisture measurements from the Basel City environmental system, and global intensity and global reactive power from the Individual Household Electric Power Consumption (IHEPC) dataset. The evaluation results demonstrate that AHW constantly attains higher forecasting accuracy across the tested datasets compared to other models. This indicates the efficacy of AHW in leveraging pertinent data patterns for enhanced predictive precision, offering a robust solution for temporal IoT data forecasting. Full article
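AHW builds on classical additive Holt–Winters exponential smoothing. Below is a minimal sketch of the classical recursion only — not the authors' adaptive variant, which runs two such models (one on the recent window, one on the most similar historical segment) and combines their outputs. Smoothing parameters and the seasonal initialization are illustrative simplifications:

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.1, gamma=0.2):
    """Additive Holt-Winters with period m; returns the one-step-ahead
    forecast after consuming the whole series. Requires len(y) >= 2*m."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]  # crude seasonal init
    for t in range(m, len(y)):
        s = season[t % m]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    return level + trend + season[len(y) % m]
```

On a purely periodic series the recursion reproduces the pattern exactly; AHW's contribution is choosing *which* history feeds a second copy of this recursion.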

18 pages, 3657 KiB  
Article
Vehicle Trajectory Data Augmentation Using Data Features and Road Map
by Jianfeng Hou, Wei Song, Yu Zhang and Shengmou Yang
Electronics 2025, 14(14), 2755; https://doi.org/10.3390/electronics14142755 - 9 Jul 2025
Viewed by 345
Abstract
With the advancement of intelligent transportation systems, vehicle trajectory data have become a key component in areas like traffic flow prediction, route planning, and traffic management. However, high-quality, publicly available trajectory datasets are scarce due to concerns over privacy, copyright, and data collection costs. The lack of data creates challenges for training machine learning models and optimizing algorithms. To address this, we propose a new method for generating synthetic vehicle trajectory data, leveraging traffic flow characteristics and road maps. The approach begins by estimating hourly traffic volumes, then it uses the Poisson distribution modeling to assign departure times to synthetic trajectories. Origin and destination (OD) distributions are determined by analyzing historical data, allowing for the assignment of OD pairs to each synthetic trajectory. Path planning is then applied using a road map to generate a travel route. Finally, trajectory points, including positions and timestamps, are calculated based on road segment lengths and recommended speeds, with noise added to enhance realism. This method offers flexibility to incorporate additional information based on specific application needs, providing valuable opportunities for machine learning in intelligent transportation systems. Full article
(This article belongs to the Special Issue Big Data and AI Applications)
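The departure-time step described above — assigning synthetic trajectories start times consistent with estimated hourly volumes — can be modeled as an inhomogeneous Poisson process with a piecewise-constant rate. A seeded sketch under that assumption (the paper assigns counts via the Poisson distribution; sampling exponential inter-arrival gaps is an equivalent process-level view):

```python
import random

def departure_times(hourly_rates, seed=0):
    """Sample departure timestamps (in hours) from a Poisson process whose
    rate is constant within each hour; rates must be positive."""
    rng = random.Random(seed)
    times = []
    for hour, lam in enumerate(hourly_rates):
        t = 0.0
        while True:
            t += rng.expovariate(lam)  # exponential inter-arrival gap
            if t >= 1.0:               # past the end of this hour
                break
            times.append(hour + t)
    return times
```

Each sampled timestamp would then receive an OD pair from the historical distribution and a road-map route, as the abstract describes.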

18 pages, 1756 KiB  
Technical Note
Detection of Banana Diseases Based on Landsat-8 Data and Machine Learning
by Renata Retkute, Kathleen S. Crew, John E. Thomas and Christopher A. Gilligan
Remote Sens. 2025, 17(13), 2308; https://doi.org/10.3390/rs17132308 - 5 Jul 2025
Viewed by 590
Abstract
Banana is an important cash and food crop worldwide. Recent outbreaks of banana diseases are threatening the global banana industry and smallholder livelihoods. Remote sensing data offer the potential to detect the presence of disease, but formal analysis is needed to compare inferred disease data with observed disease data. In this study, we present a novel remote-sensing-based framework that combines Landsat-8 imagery with meteorology-informed phenological models and machine learning to identify anomalies in banana crop health. Unlike prior studies, our approach integrates domain-specific crop phenology to enhance the specificity of anomaly detection. We used a pixel-level random forest (RF) model to predict 11 key vegetation indices (VIs) as a function of historical meteorological conditions, specifically daytime and nighttime temperature from MODIS and precipitation from NASA GES DISC. By training on periods of healthy crop growth, the RF model establishes expected VI values under disease-free conditions. Disease presence is then detected by quantifying the deviations between observed VIs from Landsat-8 imagery and these predicted healthy VI values. The model demonstrated robust predictive reliability in accounting for seasonal variations, with forecasting errors for all VIs remaining within 10% when applied to a disease-free control plantation. Applied to two documented outbreak cases, the results show strong spatial alignment between flagged anomalies and historical reports of banana bunchy top disease (BBTD) and Fusarium wilt Tropical Race 4 (TR4). Specifically, for BBTD in Australia, a strong correlation of 0.73 was observed between infection counts and the discrepancy between predicted and observed NDVI values at the pixel with the highest number of infections. Notably, VI declines preceded reported infection rises by approximately two months. For TR4 in Mozambique, the approach successfully tracked disease progression, revealing clear spatial spread patterns and correlations as high as 0.98 between VI anomalies and disease cases in some pixels. These findings support the potential of our method as a scalable early warning system for banana disease detection. Full article
(This article belongs to the Special Issue Plant Disease Detection and Recognition Using Remotely Sensed Data)
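The core detection step above — flagging pixels where the observed vegetation index falls short of the model-predicted healthy value — reduces to a per-pixel residual test. A minimal sketch under assumed names, using the abstract's 10% error band as an illustrative threshold (the authors' actual decision rule may differ):

```python
def vi_anomalies(observed, predicted, tol=0.10):
    """Flag time steps where the observed VI falls below the predicted
    healthy VI by more than `tol` relative deviation."""
    flags = []
    for obs, pred in zip(observed, predicted):
        rel = (pred - obs) / abs(pred) if pred else 0.0
        flags.append(rel > tol)
    return flags
```

In the paper, `predicted` comes from a random forest driven by MODIS temperature and GES DISC precipitation; here it is just an input list.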

7 pages, 626 KiB  
Proceeding Paper
Optimized CO2 Emission Forecasting for Thailand’s Electricity Sector Using a Multivariate Gray Model
by Kamrai Janprom, Tungngern Phetkamhang, Sittadach Morkmechai and Supachai Prainetr
Eng. Proc. 2025, 86(1), 5; https://doi.org/10.3390/engproc2025086005 - 4 Jul 2025
Viewed by 229
Abstract
This paper proposes an advanced forecasting model for predicting carbon dioxide (CO2) emissions in Thailand’s electricity generation sector. The model integrates a multivariate gray model with the fminsearch optimization algorithm in MATLAB (R2025a) to address the critical challenge of accurate emission forecasting, a key driver of climate change. Historical data on CO2 emissions, gross domestic product (GDP), peak electricity demand, and electricity user numbers are utilized to enhance predictive accuracy. Comparative analysis demonstrates that the optimized model significantly outperforms the conventional multivariate gray model, achieving mean absolute percentage error (MAPE) values of 7.74% for the training set and 1.75% for the testing set. The results highlight the effectiveness of the proposed approach as a robust tool for policymakers and stakeholders in Thailand’s energy sector, offering actionable insights to support informed decision-making in managing and reducing CO2 emissions. Full article
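The gray-model family referenced above is built around the univariate GM(1,1) recursion: accumulate the series, fit a first-order gray differential equation by least squares, and forecast from its exponential solution. A self-contained sketch of plain GM(1,1) plus the MAPE metric used in the paper — the multivariate extension and the fminsearch tuning are not reproduced here:

```python
import math

def gm11(x0, horizon=0):
    """GM(1,1): fit on series x0, return fitted values plus `horizon` forecasts."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    # Least squares for x0[k] = -a*z[k] + b via 2x2 normal equations.
    szz = sum(v * v for v in z); sz = sum(z); sy = sum(y)
    szy = sum(v * w for v, w in zip(z, y)); m = len(z)
    det = szz * m - sz * sz
    a = -(szy * m - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + horizon)]

def mape(actual, forecast):
    """Mean absolute percentage error, the accuracy metric quoted above."""
    return 100.0 * sum(abs((av - f) / av)
                       for av, f in zip(actual, forecast)) / len(actual)
```

On near-exponential data (the regime gray models target) the fit is tight; the paper's contribution is adding exogenous drivers (GDP, peak demand, user counts) and optimizing the model's parameters.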

25 pages, 2892 KiB  
Article
Focal Correlation and Event-Based Focal Visual Content Text Attention for Past Event Search
by Pranita P. Deshmukh and S. Poonkuntran
Computers 2025, 14(7), 255; https://doi.org/10.3390/computers14070255 - 28 Jun 2025
Viewed by 314
Abstract
Every minute, vast amounts of video and image data are uploaded worldwide to the internet and social media platforms, creating a rich visual archive of human experiences—from weddings and family gatherings to significant historical events such as war crimes and humanitarian crises. When properly analyzed, this multimodal data holds immense potential for reconstructing important events and verifying information. However, challenges arise when images and videos lack complete annotations, making manual examination inefficient and time-consuming. To address this, we propose a novel event-based focal visual content text attention (EFVCTA) framework for automated past event retrieval using visual question answering (VQA) techniques. Our approach integrates a Long Short-Term Memory (LSTM) model with convolutional non-linearity and an adaptive attention mechanism to efficiently identify and retrieve relevant visual evidence alongside precise answers. The model is designed with robust weight initialization, regularization, and optimization strategies and is evaluated on the Common Objects in Context (COCO) dataset. The results demonstrate that EFVCTA achieves the highest performance across all metrics (88.7% accuracy, 86.5% F1-score, 84.9% mAP), outperforming state-of-the-art baselines. The EFVCTA framework demonstrates promising results for retrieving information about past events captured in images and videos and can be effectively applied to scenarios such as documenting training programs, workshops, conferences, and social gatherings in academic institutions. Full article
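The attention mechanism at the heart of frameworks like the one above weights visual features by their relevance to a query via a softmax over similarity scores. A generic dot-product attention sketch (illustrative only — EFVCTA's gating and convolutional components are not reproduced):

```python
import math

def attention(query, keys, values):
    """Dot-product attention: softmax-weighted average of value vectors."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    mx = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    out = [sum(w * v[j] for w, v in zip(weights, values))
           for j in range(len(values[0]))]
    return out, weights
```

In a VQA setting the query would encode the question, the keys/values the image regions, so the output is dominated by the most question-relevant region.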

35 pages, 1412 KiB  
Article
AI Chatbots in Philology: A User Experience Case Study of Conversational Interfaces for Content Creation and Instruction
by Nikolaos Pellas
Multimodal Technol. Interact. 2025, 9(7), 65; https://doi.org/10.3390/mti9070065 - 27 Jun 2025
Viewed by 585
Abstract
A persistent challenge in training future philology educators is engaging students in deep textual analysis across historical periods—especially in large classes where limited resources, feedback, and assessment tools hinder the teaching of complex linguistic and contextual features. These constraints often lead to superficial learning, decreased motivation, and inequitable outcomes, particularly when traditional methods lack interactive and scalable support. As digital technologies evolve, there is increasing interest in how Artificial Intelligence (AI) can address such instructional gaps. This study explores the potential of conversational AI chatbots to provide scalable, pedagogically grounded support in philology education. Using a mixed-methods case study, twenty-six (n = 26) undergraduate students completed structured tasks using one of three AI chatbots (ChatGPT, Gemini, or DeepSeek). Quantitative and qualitative data were collected via usability scales, AI literacy surveys, and semi-structured interviews. The results showed strong usability across all platforms, with DeepSeek rated highest in intuitiveness. Students reported confidence in using AI for efficiency and decision-making but desired greater support in evaluating multiple AI-generated outputs. The AI-enhanced environment promoted motivation, autonomy, and conceptual understanding, despite some onboarding and clarity challenges. Implications include reducing instructor workload, enhancing student-centered learning, and informing curriculum development in philology, particularly for instructional designers and educational technologists. Full article

26 pages, 3938 KiB  
Article
Multifractal Carbon Market Price Forecasting with Memory-Guided Adversarial Network
by Na Li, Mingzhu Tang, Jingwen Deng, Liran Wei and Xinpeng Zhou
Fractal Fract. 2025, 9(7), 403; https://doi.org/10.3390/fractalfract9070403 - 23 Jun 2025
Viewed by 430
Abstract
Carbon market price prediction is critical for stabilizing markets and advancing low-carbon transitions, where capturing multifractal dynamics is essential. Traditional models often neglect the inherent long-term memory and nonlinear dependencies of carbon price series. To tackle the issues of nonlinear dynamics, non-stationary characteristics, and inadequate suppression of modal aliasing in existing models, this study proposes an integrated prediction framework based on the coupling of gradient-sensitive time-series adversarial training and dynamic residual correction. A novel gradient significance-driven local adversarial training strategy enhances immunity to volatility through time step-specific perturbations while preserving structural integrity. The GSLAN-BiLSTM architecture dynamically recalibrates historical–current information fusion via memory-guided attention gating, mitigating prediction lag during abrupt price shifts. A “decomposition–prediction–correction” residual compensation system further decomposes base model errors via wavelet packet decomposition (WPD), with ARIMA-driven dynamic weighting enabling bias correction. Empirical validation using China’s carbon market high-frequency data demonstrates superior performance across key metrics. This framework extends beyond advancing carbon price forecasting by successfully generalizing its “multiscale decomposition, adversarial robustness enhancement, and residual dynamic compensation” paradigm to complex financial time-series prediction. Full article
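The "decomposition–prediction–correction" idea above — model the base forecaster's residuals and add a correction back — can be sketched in toy form. Here a recent-residual mean stands in for the paper's WPD-plus-ARIMA residual model, purely to make the control flow concrete:

```python
def residual_corrected(base_pred, actual_hist, base_hist, window=3):
    """Correct a base forecast with the mean of recent residuals — a toy
    stand-in for the ARIMA-driven dynamic residual compensation."""
    residuals = [a - b for a, b in zip(actual_hist, base_hist)]
    bias = sum(residuals[-window:]) / min(window, len(residuals))
    return [p + bias for p in base_pred]
```

The paper replaces both pieces with stronger machinery (wavelet packet decomposition of the residual series, dynamically weighted ARIMA terms), but the correction is applied additively in the same way.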

24 pages, 4899 KiB  
Article
A Coordination Optimization Framework for Multi-Agent Reinforcement Learning Based on Reward Redistribution and Experience Reutilization
by Bo Yang, Linghang Gao, Fangzheng Zhou, Hongge Yao, Yanfang Fu, Zelong Sun, Feng Tian and Haipeng Ren
Electronics 2025, 14(12), 2361; https://doi.org/10.3390/electronics14122361 - 9 Jun 2025
Viewed by 684
Abstract
Cooperative multi-agent reinforcement learning (MARL) has emerged as a powerful paradigm for addressing complex real-world challenges, including autonomous robot control, strategic decision-making, and decentralized coordination in unmanned swarm systems. However, it still faces challenges in learning proper coordination among multiple agents. The lack of effective knowledge sharing and experience interaction mechanisms among agents has led to substantial performance decline, especially in terms of low sampling efficiency and slow convergence rates, ultimately constraining the practical applicability of MARL. To address these challenges, this paper proposes a novel framework termed Reward redistribution and Experience reutilization based Coordination Optimization (RECO). This innovative approach employs a hierarchical experience pool mechanism that enhances exploration through strategic reward redistribution and experience reutilization. The RECO framework incorporates a sophisticated evaluation mechanism that assesses the quality of historical sampling data from individual agents and optimizes reward distribution by maximizing mutual information across hierarchical experience trajectories. Extensive comparative analyses of computational efficiency and performance metrics across diverse environments reveal that the proposed method not only enhances training efficiency in multi-agent gaming scenarios but also significantly strengthens algorithmic robustness and stability in dynamic environments. Full article
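Reward redistribution in cooperative MARL, in its simplest form, splits a shared team reward among agents in proportion to some per-agent contribution score. A deliberately simplified sketch — RECO's actual scheme scores historical trajectories and maximizes mutual information across a hierarchical experience pool, none of which is modeled here:

```python
def redistribute(rewards, contribution_scores):
    """Split the total team reward in proportion to per-agent contribution
    scores (hypothetical proportional scheme, not RECO's)."""
    total = sum(rewards)
    s = sum(contribution_scores)
    return [total * c / s for c in contribution_scores]
```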

31 pages, 1581 KiB  
Article
Dynamic Portfolio Return Classification Using Price-Aware Logistic Regression
by Yakubu Suleiman Baguda, Hani Moaiteq AlJahdali and Altyeb Altaher Taha
Mathematics 2025, 13(11), 1885; https://doi.org/10.3390/math13111885 - 4 Jun 2025
Viewed by 823
Abstract
The dynamic and uncertain nature of financial markets presents significant challenges in accurately predicting portfolio returns due to inherent volatility and instability. This study investigates the potential of logistic regression to enhance the accuracy and robustness of return classification models, addressing challenges in dynamic portfolio optimization. We propose a price-aware logistic regression (PALR) framework to classify dynamic portfolio returns. This approach integrates price movements as key features alongside traditional portfolio optimization techniques, enabling the identification and analysis of patterns and relationships within historical financial data. Unlike conventional methods, PALR dynamically adapts to market trends by incorporating historical price data and derived indicators, leading to more accurate classification of portfolio returns. Historical market data from the Dow Jones Industrial Average (DJIA) and Hang Seng Index (HSI) were used to train and test the model. The proposed scheme achieves an accuracy of 99.88%, a mean squared error (MSE) of 0.0006, and an AUC of 99.94% on the DJIA dataset. When evaluated on the HSI dataset, it attains a classification accuracy of 99.89%, an AUC of 99.89%, and an MSE of 0.011. The results demonstrate that PALR significantly improves classification accuracy and AUC while reducing MSE compared to conventional techniques. The proposed PALR model serves as a valuable tool for return classification and optimization, enabling investors, assets, and portfolio managers to make more informed and effective decisions. Full article
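At its core, PALR is logistic regression over price-derived features. A minimal from-scratch fit via gradient descent on a toy one-feature dataset — the feature engineering (price movements, derived indicators) and scale of the paper's experiments are not reproduced:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent for binary logistic regression.
    X: list of feature vectors (e.g. price-derived features); y: 0/1 labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi                              # gradient of log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b
```

On linearly separable data the learned boundary lands between the classes, so the model classifies the training points correctly — the paper's near-perfect DJIA/HSI accuracy suggests its engineered features make the up/down classes similarly separable.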
(This article belongs to the Special Issue Machine Learning and Finance)

29 pages, 7118 KiB  
Article
Quarter-Hourly Power Load Forecasting Based on a Hybrid CNN-BiLSTM-Attention Model with CEEMDAN, K-Means, and VMD
by Xiaoyu Liu, Jiangfeng Song, Hai Tao, Peng Wang, Haihua Mo and Wenjie Du
Energies 2025, 18(11), 2675; https://doi.org/10.3390/en18112675 - 22 May 2025
Cited by 1 | Viewed by 577
Abstract
Accurate long-term power load forecasting in the grid is crucial for supply–demand balance analysis in new power systems. It helps to identify potential power market risks and uncertainties in advance, thereby enhancing the stability and efficiency of power systems. Given the temporal and nonlinear features of power load, this paper proposes a hybrid load-forecasting model using attention mechanisms, CNN, and BiLSTM. Historical load data are processed via CEEMDAN, K-means clustering, and VMD for significant regularity and uncertainty feature extraction. The CNN layer extracts features from climate and date inputs, while BiLSTM captures short- and long-term dependencies from both forward and backward directions. Attention mechanisms enhance key information. This approach is applied for seasonal load forecasting. Several comparative experiments show the proposed model’s high accuracy, with MAPE values of 1.41%, 1.25%, 1.08% and 1.67% for the four seasons. It outperforms other methods, with improvements of 0.25–2.53 GWh2 in MSE, 0.15–0.1 GWh in RMSE, 0.1–0.74 GWh in MAE and 0.22–1.40% in MAPE. Furthermore, the effectiveness of the data processing method and the impact of training data volume on forecasting accuracy are analyzed. The results indicate that decomposing and clustering historical load data, along with large-scale data training, can both boost forecasting accuracy. Full article
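Of the preprocessing stages above, the K-means step groups decomposed load components by similarity. A tiny one-dimensional K-means illustrates the clustering primitive (initialization and data are illustrative; the paper clusters CEEMDAN modes, not raw scalars):

```python
def kmeans_1d(xs, k=2, iters=50):
    """Minimal 1-D k-means; returns sorted centroids."""
    cents = [min(xs), max(xs)] if k == 2 else sorted(xs)[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - cents[j]))
            groups[i].append(x)                      # assign to nearest centroid
        cents = [sum(g) / len(g) if g else cents[i]  # recompute means
                 for i, g in enumerate(groups)]
    return sorted(cents)
```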

22 pages, 1587 KiB  
Article
Robust 12-Lead ECG Classification with Lightweight ResNet: An Adaptive Second-Order Learning Rate Optimization Approach
by Guolin Yang, Shiyun Zou, Hua Qin, Yuyi Cao, Zihan Zhang and Xiangyuan Deng
Electronics 2025, 14(10), 1941; https://doi.org/10.3390/electronics14101941 - 9 May 2025
Cited by 1 | Viewed by 593
Abstract
To enhance the classification accuracy of the ResNet model for 12-lead ECG signals, a novel approach that focuses on optimizing the learning rate within the model training algorithm is proposed. Firstly, a Taylor expansion of the training formula for model weights is performed to derive a learning rate that incorporates the second-order gradient information. Subsequently, to circumvent the direct computation of the complex second-order gradient in the learning rate, an approximation method utilizing the historical first-order gradient is introduced. Additionally, truncation techniques are employed to ensure that the second-order learning rate remains neither excessively large nor too small. Ultimately, the 1D-ResNet-AdaSOM model is constructed based on this adaptive second-order momentum (AdaSOM) method and applied for 12-lead ECG classification. The proposed algorithm and model were validated on the CPSC2018 dataset. The evolving trend of the loss function throughout the training process demonstrated that the proposed algorithm exhibited commendable convergence and stability, and these results aligned with the conclusions derived from the theoretical analysis of the algorithm’s convergence. On the test set, the model attained an impressive average F1 score of 0.862, demonstrating that 1D-ResNet-AdaSOM surpassed several state-of-the-art deep-learning models in performance while exhibiting strong robustness. The experimental findings further substantiate our hypothesis that adjusting the learning rate in the ResNet training algorithm can effectively enhance classification accuracy for 12-lead ECGs. Full article
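The learning-rate idea above — approximate second-order (curvature) information from successive first-order gradients, then truncate the result — can be sketched for a single parameter. The finite-difference curvature proxy and the clamp bounds are illustrative assumptions, not the AdaSOM formulas:

```python
def adasom_style_lr(base_lr, grad_hist, eps=1e-8, lo=0.1, hi=10.0):
    """Scale a base learning rate by a curvature proxy built from the two
    most recent first-order gradients, then truncate to [lo, hi]*base_lr."""
    g_prev, g_curr = grad_hist[-2], grad_hist[-1]
    curvature = abs(g_curr - g_prev) + eps   # finite-difference Hessian proxy
    scale = abs(g_curr) / curvature          # large curvature -> smaller step
    return base_lr * max(lo, min(hi, scale))
```

The truncation mirrors the paper's safeguard that the second-order rate stays "neither excessively large nor too small"; when consecutive gradients are nearly equal (flat curvature), the upper clamp prevents the step from exploding.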
