Search Results (115)

Search Parameters:
Keywords = historical consistent neural networks

11 pages, 684 KB  
Article
Replacing Batch Normalization with Memory-Based Affine Transformation for Test-Time Adaptation
by Jih Pin Yeh, Joe-Mei Feng, Hwei Jen Lin and Yoshimasa Tokuyama
Electronics 2025, 14(21), 4251; https://doi.org/10.3390/electronics14214251 - 30 Oct 2025
Viewed by 324
Abstract
Batch normalization (BN) has become a foundational component in modern deep neural networks. However, one of its disadvantages is its reliance on batch statistics that may be unreliable or unavailable during inference, particularly under test-time domain shifts. While batch-statistics-free affine transformation methods alleviate this by learning per-sample scale and shift parameters, most treat samples independently, overlooking temporal or sequential correlations in streaming or episodic test-time settings. We propose LSTM-Affine, a memory-based normalization module that replaces BN with a recurrent parameter generator. By leveraging an LSTM, the module produces channel-wise affine parameters conditioned on both the current input and its historical context, enabling gradual adaptation to evolving feature distributions. Unlike conventional batch-statistics-free designs, LSTM-Affine captures dependencies across consecutive samples, improving stability and convergence in scenarios with gradual distribution shifts. Extensive experiments on few-shot learning and source-free domain adaptation benchmarks demonstrate that LSTM-Affine consistently outperforms BN and prior batch-statistics-free baselines, particularly when adaptation data are scarce or non-stationary. Full article
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)
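To make the mechanism concrete, below is a minimal PyTorch sketch of the idea: an LSTM carries a hidden state across the test stream and emits channel-wise scale and shift parameters in place of batch statistics. The pooling choice, layer sizes, and near-identity initialization are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class LSTMAffine(nn.Module):
    def __init__(self, num_channels: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTMCell(num_channels, hidden_size)
        self.to_gamma = nn.Linear(hidden_size, num_channels)
        self.to_beta = nn.Linear(hidden_size, num_channels)
        self.state = None  # (h, c), carried across the test-time stream

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W); summarize each sample by its channel means
        desc = x.mean(dim=(2, 3))                  # (B, C)
        h, c = self.lstm(desc, self.state)
        self.state = (h.detach(), c.detach())      # truncate backprop through time
        gamma = 1.0 + self.to_gamma(h)             # start near the identity map
        beta = self.to_beta(h)
        return x * gamma[:, :, None, None] + beta[:, :, None, None]
```

Because the hidden state persists between forward calls, consecutive test samples influence the affine parameters, which is the "historical context" the abstract refers to.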

28 pages, 2041 KB  
Article
Self-Adaptable Computation Offloading Strategy for UAV-Assisted Edge Computing
by Yanting Wang, Yuhang Zhang, Zhuo Qian, Yubo Zhao and Han Zhang
Drones 2025, 9(11), 748; https://doi.org/10.3390/drones9110748 - 28 Oct 2025
Viewed by 370
Abstract
Unmanned Aerial Vehicle-assisted Edge Computing (UAV-EC) leverages UAVs as aerial edge servers to provide computation resources to user equipment (UE) in dynamically changing environments. A critical challenge in UAV-EC lies in making real-time adaptive offloading decisions that determine whether and how UE should offload tasks to UAVs. This problem is typically formulated as Mixed-Integer Nonlinear Programming (MINLP). However, most existing offloading methods sacrifice strategy timeliness, leading to significant performance degradation in UAV-EC systems, especially under varying wireless channel quality and unpredictable UAV mobility. In this paper, we propose a novel framework that enhances offloading strategy timeliness in such dynamic settings. Specifically, we jointly optimize offloading decisions, the transmit power of UEs, and computation resource allocation to maximize a system utility encompassing both latency reduction and energy conservation. To tackle this combinatorial optimization problem and obtain a real-time strategy, we design a Quality of Experience (QoE)-aware Online Offloading (QO2) algorithm that optimally adapts offloading decisions and resource allocations to time-varying wireless channel conditions. Instead of directly solving the MINLP via traditional methods, the QO2 algorithm utilizes a deep neural network to learn binary offloading decisions from experience, greatly improving strategy timeliness. This learning-based operation inherently enhances the robustness of the QO2 algorithm. To further strengthen robustness, we design a Priority-Based Proportional Sampling (PPS) strategy that leverages historical optimization patterns. Extensive simulation results demonstrate that QO2 outperforms state-of-the-art baselines in solution quality, consistently achieving near-optimal solutions. More importantly, it exhibits strong adaptability to dynamic network conditions. These characteristics make QO2 a promising solution for dynamic UAV-EC systems. Full article
(This article belongs to the Section Drone Communications)
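As context for the learning-based step, the sketch below loosely follows the DROO-style pattern the abstract describes: a small DNN maps channel conditions to relaxed offloading scores, a few binary candidates are generated by order-preserving quantization, and the best candidate under a placeholder utility is kept. The network shape, candidate count, and `system_utility` stand-in are assumptions, not the paper's exact QO2 design.

```python
import numpy as np
import torch
import torch.nn as nn

def system_utility(decision: np.ndarray, gains: np.ndarray) -> float:
    # Placeholder: the real QO2 utility trades off latency and energy.
    return float((decision * gains).sum() - 0.1 * decision.sum())

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 10), nn.Sigmoid())

def decide(gains: np.ndarray, k: int = 5) -> np.ndarray:
    scores = net(torch.tensor(gains, dtype=torch.float32)).detach().numpy()
    order = np.argsort(-scores)
    # Order-preserving quantization: candidates keep the top-m highest-scoring UEs
    candidates = [(scores > 0.5).astype(int)] + [
        np.isin(np.arange(len(gains)), order[:m]).astype(int) for m in range(1, k)
    ]
    return max(candidates, key=lambda d: system_utility(d, gains))

decision = decide(np.random.rand(10))  # 0 = compute locally, 1 = offload to UAV
```

In the DROO pattern, the best (decision, gains) pairs would then be stored in a replay memory to retrain the network online, which is where a prioritized sampling rule like PPS would plug in.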

17 pages, 2877 KB  
Article
Prediction/Assessment of CO2 EOR and Storage Efficiency in Residual Oil Zones Using Machine Learning Techniques
by Abdulrahman Abdulwarith, Mohamed Ammar and Birol Dindoruk
Energies 2025, 18(20), 5498; https://doi.org/10.3390/en18205498 - 18 Oct 2025
Viewed by 416
Abstract
Residual oil zones (ROZs) arise under the oil–water contact of main pay zones due to diverse geological conditions. Historically, these zones were considered economically unviable for development with conventional recovery methods because of the immobile nature of the oil. However, they represent a substantial subsurface volume with strong potential for CO2 sequestration and storage. Despite this potential, effective techniques for assessing CO2-EOR performance coupled with CCUS in ROZs remain limited. To address this gap, this study introduces a machine learning framework that employs artificial neural network (ANN) models trained on data generated from a large number of reservoir simulations (300 cases produced using Latin Hypercube Sampling across nine geological and operational parameters). The dataset was divided into training and testing subsets to ensure generalization, with key input variables including reservoir properties (thickness, permeability, porosity, Sorg, salinity) and operational parameters (producer BHP and CO2 injection rate). The objective was to forecast CO2 storage capacity and oil recovery potential, thereby reducing reliance on time-consuming and costly reservoir simulations. The developed ANN models achieved high predictive accuracy, with R2 values ranging from 0.90 to 0.98 and mean absolute percentage error (MAPE) consistently below 10%. Validation against real ROZ field data demonstrated strong agreement, confirming model reliability. Beyond prediction, the workflow also provided insights for reservoir management: optimization results indicated that maintaining a producer BHP of approximately 1250 psi and a CO2 injection rate of 14–16 MMSCF/D offered the best balance between enhanced oil recovery and stable storage efficiency. In summary, the integrated combination of reservoir simulation and machine learning provides a fast, technically robust, and cost-effective tool for evaluating CO2-EOR and CCUS performance in ROZs. The demonstrated accuracy, scalability, and optimization capability make the proposed ANN workflow well-suited for both rapid screening and field-scale applications. Full article
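The surrogate-modeling workflow can be sketched in a few lines: Latin Hypercube Sampling over the input space, an ANN regressor trained on simulator outputs, and R2/MAPE checks. The nine parameter ranges and the `run_simulation` stand-in below are illustrative assumptions, not the study's simulator.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_percentage_error

# 300 LHS design points over nine (normalized) geological/operational parameters
sampler = qmc.LatinHypercube(d=9, seed=0)
X = qmc.scale(sampler.random(n=300), l_bounds=[0] * 9, u_bounds=[1] * 9)

def run_simulation(x):            # placeholder for the reservoir simulator
    return x @ np.linspace(1, 9, 9) + 0.1 * np.sin(x).sum(axis=1)

y = run_simulation(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
pred = ann.predict(X_te)
print(f"R2={r2_score(y_te, pred):.3f}  "
      f"MAPE={mean_absolute_percentage_error(y_te, pred):.3%}")
```

Once trained, the ANN replaces hours of simulation per case with millisecond predictions, which is what makes the screening and BHP/injection-rate optimization described above practical.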

26 pages, 2931 KB  
Review
Prospects of AI-Powered Bowel Sound Analytics for Diagnosis, Characterization, and Treatment Management of Inflammatory Bowel Disease
by Divyanshi Sood, Zenab Muhammad Riaz, Jahnavi Mikkilineni, Narendra Nath Ravi, Vineeta Chidipothu, Gayathri Yerrapragada, Poonguzhali Elangovan, Mohammed Naveed Shariff, Thangeswaran Natarajan, Jayarajasekaran Janarthanan, Naghmeh Asadimanesh, Shiva Sankari Karuppiah, Keerthy Gopalakrishnan and Shivaram P. Arunachalam
Med. Sci. 2025, 13(4), 230; https://doi.org/10.3390/medsci13040230 - 13 Oct 2025
Viewed by 916
Abstract
Background: This narrative review examines the role of artificial intelligence (AI) in bowel sound analysis for the diagnosis and management of inflammatory bowel disease (IBD). Inflammatory bowel disease (IBD), encompassing Crohn’s disease and ulcerative colitis, presents a significant clinical burden due to its unpredictable course, variable symptomatology, and reliance on invasive procedures for diagnosis and disease monitoring. Despite advances in imaging and biomarkers, tools such as colonoscopy and fecal calprotectin remain costly, uncomfortable, and impractical for frequent or real-time assessment. Meanwhile, bowel sounds—an overlooked physiologic signal—reflect underlying gastrointestinal motility and inflammation but have historically lacked objective quantification. With recent advances in artificial intelligence (AI) and acoustic signal processing, there is growing interest in leveraging bowel sound analysis as a novel, non-invasive biomarker for detecting IBD, monitoring disease activity, and predicting disease flares. This approach holds the promise of continuous, low-cost, and patient-friendly monitoring, which could transform IBD management. Objectives: This narrative review assesses the clinical utility, methodological rigor, and potential future integration of artificial intelligence (AI)-driven bowel sound analysis in inflammatory bowel disease (IBD), with a focus on its potential as a non-invasive biomarker for disease activity, flare prediction, and differential diagnosis. Methods: This manuscript reviews the potential of AI-powered bowel sound analysis as a non-invasive tool for diagnosing, monitoring, and managing inflammatory bowel disease (IBD), including Crohn’s disease and ulcerative colitis. Traditional diagnostic methods, such as colonoscopy and biomarkers, are often invasive, costly, and impractical for real-time monitoring. The manuscript explores bowel sounds, which reflect gastrointestinal motility and inflammation, as an alternative biomarker by utilizing AI techniques like convolutional neural networks (CNNs), transformers, and gradient boosting. We analyze data on acoustic signal acquisition (e.g., smart T-shirts, smartphones), signal processing methodologies (e.g., MFCCs, spectrograms, empirical mode decomposition), and validation metrics (e.g., accuracy, F1 scores, AUC). Studies were assessed for clinical relevance, methodological rigor, and translational potential. Results: Across studies enrolling 16–100 participants, AI models achieved diagnostic accuracies of 88–96%, with AUCs ≥ 0.83 and F1 scores ranging from 0.71 to 0.85 for differentiating IBD from healthy controls and IBS. Transformer-based approaches (e.g., HuBERT, Wav2Vec 2.0) consistently outperformed CNNs and tabular models, yielding F1 scores of 80–85%, while gradient boosting on wearable multi-microphone recordings demonstrated robustness to background noise. Distinct acoustic signatures were identified, including prolonged sound-to-sound intervals in Crohn’s disease (mean 1232 ms vs. 511 ms in IBS) and high-pitched tinkling in stricturing phenotypes. Despite promising performance, current models remain below established biomarkers such as fecal calprotectin (~90% sensitivity for active disease), and generalizability is limited by small, heterogeneous cohorts and the absence of prospective validation. Conclusions: AI-powered bowel sound analysis represents a promising, non-invasive tool for IBD monitoring. 
However, widespread clinical integration requires standardized data acquisition protocols, large multi-center datasets with clinical correlates, explainable AI frameworks, and ethical data governance. Future directions include wearable-enabled remote monitoring platforms and multi-modal decision support systems that integrate bowel sounds with biomarker and symptom data, aiming to transform IBD care into a more personalized and proactive model. Full article
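For the acoustic front end the review describes, a minimal sketch is shown below: loading a bowel-sound recording and computing the MFCC features that a CNN or transformer classifier would consume. The file name, sample rate, and frame settings are illustrative assumptions.

```python
import librosa
import numpy as np

# Hypothetical recording; bowel sounds are low-frequency, so a modest
# sample rate and short hop are reasonable starting points.
y, sr = librosa.load("bowel_sounds.wav", sr=8000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=512, hop_length=128)   # shape: (13, frames)
# Simple clip-level feature vector for a tabular or boosting model
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```

A CNN would instead take the full `mfcc` (or a spectrogram) as a 2D input, while the transformer models cited above (HuBERT, Wav2Vec 2.0) operate directly on the raw waveform.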

16 pages, 6578 KB  
Article
Adaptive Trigger Compensation Neural Network for PID Tuning in Virtual Autopilot Heading Control
by Yutong Zhou and Shan Fu
Machines 2025, 13(10), 933; https://doi.org/10.3390/machines13100933 - 10 Oct 2025
Viewed by 478
Abstract
Virtual commands are significant for modeling human–computer interactions in autopilot flight missions. However, the large system hysteresis makes it difficult for proportional–integral–derivative (PID) algorithms to generate commands that promise better flight convergence. An adaptive trigger compensation neural network method is proposed to dynamically tune the PID parameters, simulating the process by which virtual pilots decide virtual heading commands and perform heading adjustments. The method consists of trigger filtering, dynamic updating, and compensation synthesis. First, the necessary historical errors are adaptively selected by the threshold trigger filter for better error utilization. Second, error-based initialization is introduced in the neural network PID update process to improve adaptiveness in the initial settings of the PID parameters. Third, the parameters are synthesized via error compensation to compute virtual heading commands that yield more convergent flight trajectories. The adaptive filter, error-based initialization, and compensation are important for improving the backpropagation neural network in tuning PID parameters. The results demonstrate the method's advantages in simulating heading adjustment behaviors and in reducing flight trajectory deviation and fluctuation. The adaptive trigger compensation neural network can enhance the convergence performance of the PID algorithm during autopilot flight scenarios. Full article
(This article belongs to the Special Issue Control and Mechanical System Engineering, 2nd Edition)
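A minimal sketch of the overall idea follows: a small backpropagation network maps a window of trigger-filtered heading errors to PID gains, and the PID law then produces the heading command. The window length, threshold, and network shape are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

# Maps a 5-error window to positive (Kp, Ki, Kd) gains
gain_net = nn.Sequential(nn.Linear(5, 16), nn.Tanh(),
                         nn.Linear(16, 3), nn.Softplus())

def pid_command(errors: list, threshold: float = 0.01) -> float:
    # Trigger filter: keep only errors large enough to be informative
    filtered = [e for e in errors if abs(e) > threshold][-5:]
    filtered = [0.0] * (5 - len(filtered)) + filtered       # pad the window
    kp, ki, kd = gain_net(torch.tensor(filtered)).tolist()
    e, e_prev = filtered[-1], filtered[-2]
    integral = sum(filtered)
    return kp * e + ki * integral + kd * (e - e_prev)

cmd = pid_command([0.4, 0.35, 0.3, 0.2, 0.12, 0.05])  # heading errors (rad)
```

In the paper, `gain_net` would additionally be updated online from the compensation error, which is the "dynamic updating" stage of the method.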

21 pages, 538 KB  
Article
Finite-Time Synchronization and Mittag–Leffler Synchronization for Uncertain Fractional-Order Delayed Cellular Neural Networks with Fuzzy Operators via Nonlinear Adaptive Control
by Hongguang Fan, Kaibo Shi, Zizhao Guo, Anran Zhou and Jiayi Cai
Fractal Fract. 2025, 9(10), 634; https://doi.org/10.3390/fractalfract9100634 - 29 Sep 2025
Viewed by 418
Abstract
This paper investigates a class of uncertain fractional-order delayed cellular neural networks (UFODCNNs) with fuzzy operators and nonlinear activations. Both fuzzy AND and fuzzy OR operators are considered, which helps to improve the robustness of the model when dealing with various uncertain problems. To achieve the finite-time (FT) synchronization and Mittag–Leffler synchronization of the concerned neural networks (NNs), a nonlinear adaptive controller consisting of three information feedback modules is devised, and each submodule performs its function based on current or delayed historical information. Based on the fractional-order comparison theorem, the Lyapunov function, and the adaptive control scheme, new FT synchronization and Mittag–Leffler synchronization criteria for the UFODCNNs are derived. Unlike previous feedback controllers, the control strategy proposed in this article can adaptively adjust the strength of the information feedback, and partial parameters only need to satisfy inequality constraints within a local time interval, which shows that our control mechanism is significantly less conservative. The experimental results show that our mean synchronization time and variance are 11.397% and 12.5% lower, respectively, than those of the second-ranked controller. Full article
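For readers unfamiliar with the model class, the dynamics of such networks are commonly written with a Caputo fractional derivative of order 0 < q < 1 and fuzzy MIN/MAX templates; the generic form below illustrates the class and is not necessarily the paper's exact system.

```latex
% Generic uncertain fractional-order fuzzy cellular neural network (Caputo):
\begin{aligned}
{}^{C}\!D^{q} x_i(t) = {}& -(c_i+\Delta c_i)\,x_i(t)
   + \sum_{j=1}^{n} (a_{ij}+\Delta a_{ij})\, f_j\bigl(x_j(t)\bigr) + I_i \\
 & + \bigwedge_{j=1}^{n} \alpha_{ij}\, g_j\bigl(x_j(t-\tau_j)\bigr)
   + \bigvee_{j=1}^{n} \beta_{ij}\, g_j\bigl(x_j(t-\tau_j)\bigr),
\end{aligned}
```

Here the big wedge and vee denote the fuzzy AND and fuzzy OR operators, the Δ-terms carry the parameter uncertainty, and τ_j are the transmission delays; the controller's three feedback modules act on the error system built from two copies of these dynamics.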

92 pages, 3238 KB  
Review
Machine Learning-Based Electric Vehicle Charging Demand Forecasting: A Systematized Literature Review
by Maher Alaraj, Mohammed Radi, Elaf Alsisi, Munir Majdalawieh and Mohamed Darwish
Energies 2025, 18(17), 4779; https://doi.org/10.3390/en18174779 - 8 Sep 2025
Viewed by 1812
Abstract
The transport sector significantly contributes to global greenhouse gas emissions, making electromobility crucial in the race toward the United Nations Sustainable Development Goals. In recent years, the increasing competition among manufacturers, the development of cheaper batteries, the ongoing policy support, and people’s greater environmental awareness have consistently increased electric vehicle (EV) adoption. Nevertheless, EV charging needs—highly influenced by EV drivers’ behavior uncertainty—challenge EVs’ integration into the power grid on a massive scale, leading to potential issues such as overloading and grid instability. Smart charging strategies can mitigate these adverse effects by using information and communication technologies to optimize EV charging schedules in terms of power systems’ constraints, electricity prices, and users’ preferences, benefiting stakeholders by minimizing network losses, maximizing aggregators’ profit, and reducing users’ driving range anxiety. To this end, accurately forecasting EV charging demand is paramount. Traditionally used forecasting methods, such as model-driven and statistical ones, often rely on complex mathematical models, simulated data, or simplifying assumptions, failing to accurately represent current real-world EV charging profiles. Machine learning (ML) methods, which leverage real-life historical data to model complex, nonlinear, high-dimensional problems, have demonstrated superiority in this domain, becoming a hot research topic. In a scenario where EV technologies, charging infrastructure, data acquisition, and ML techniques constantly evolve, this paper conducts a systematized literature review (SLR) to understand the current landscape of ML-based EV charging demand forecasting, its emerging trends, and its future perspectives. The proposed SLR provides a well-structured synthesis of a large body of literature, categorizing studies not only by their ML approach but also by the EV charging application. In addition, we focus on the most recent technological advances, exploring deep-learning architectures, spatial-temporal challenges, and cross-domain learning strategies. This offers an integrative perspective. On the one hand, it maps the state of the art, identifying a notable shift toward deep-learning approaches and an increasing interest in public EV charging stations. On the other hand, it uncovers underexplored methodological intersections that can be further exploited and research gaps that remain underaddressed, such as real-time data integration, long-term forecasting, and the development of models adaptable to different charging behaviors and locations. In this line, emerging trends combining recurrent and convolutional neural networks, and using relatively new ML techniques, especially transformers, and ML paradigms, such as transfer-, federated-, and meta-learning, have shown promising results for addressing spatial-temporality, time-scalability, and geographical-generalizability issues, paving the path for future research directions. Full article
(This article belongs to the Topic Electric Vehicles Energy Management, 2nd Volume)

17 pages, 3027 KB  
Article
Time Series Prediction of Water Quality Based on NGO-CNN-GRU Model—A Case Study of Xijiang River, China
by Xiaofeng Ding, Yiling Chen, Haipeng Zeng and Yu Du
Water 2025, 17(16), 2413; https://doi.org/10.3390/w17162413 - 15 Aug 2025
Viewed by 929
Abstract
Water quality deterioration poses a critical threat to ecological security and sustainable development, particularly in rapidly urbanizing regions. To enable proactive environmental management, this study develops a novel hybrid deep learning model, the NGO-CNN-GRU, for high-precision time-series water quality prediction in the Xijiang River Basin, China. The model integrates a Convolutional Neural Network (CNN) for spatial feature extraction and a Gated Recurrent Unit (GRU) for temporal dependency modeling, with hyperparameters optimized via the Northern Goshawk Optimization (NGO) algorithm. Using historical water quality (pH, DO, CODMn, NH3-N, TP, TN) and meteorological data (precipitation, temperature, humidity) from 11 monitoring stations, the model achieved exceptional performance: test set R2 > 0.986, MAE < 0.015, and RMSE < 0.018 for total nitrogen prediction (Xiaodong Station case study). Across all stations and indicators, it consistently outperformed baseline models (GRU, CNN-GRU), with average R2 improvements of 12.3% and RMSE reductions up to 90% for NH3-N predictions. Spatiotemporal analysis further revealed significant pollution gradients correlated with anthropogenic activities in the Pearl River Delta. This work provides a robust tool for real-time water quality early warning systems and supports evidence-based river basin management. Full article
(This article belongs to the Special Issue Monitoring and Modelling of Contaminants in Water Environment)
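The CNN-GRU backbone described above can be sketched compactly in PyTorch: a 1D convolution extracts local patterns from a window of water-quality and meteorological series, and a GRU models the temporal dependencies. The window length, channel counts, and layer sizes below are illustrative; in the paper these hyperparameters are the ones tuned by the NGO algorithm.

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    def __init__(self, n_features: int = 9, hidden: int = 64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                               # x: (B, T, F)
        z = torch.relu(self.conv(x.transpose(1, 2)))    # (B, 32, T)
        out, _ = self.gru(z.transpose(1, 2))            # (B, T, hidden)
        return self.head(out[:, -1])                    # next-step value

# 8 windows of 24 steps over 9 series (6 water-quality + 3 meteorological)
pred = CNNGRU()(torch.randn(8, 24, 9))
```

The NGO step would wrap a training loop around this module and search over, e.g., kernel size, hidden width, and learning rate to minimize validation RMSE.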

34 pages, 2523 KB  
Technical Note
A Technical Note on AI-Driven Archaeological Object Detection in Airborne LiDAR Derivative Data, with CNN as the Leading Technique
by Reyhaneh Zeynali, Emanuele Mandanici and Gabriele Bitelli
Remote Sens. 2025, 17(15), 2733; https://doi.org/10.3390/rs17152733 - 7 Aug 2025
Viewed by 2775
Abstract
Archaeological research fundamentally relies on detecting features to uncover hidden historical information. Airborne (aerial) LiDAR technology has significantly advanced this field by providing high-resolution 3D terrain maps that enable the identification of ancient structures and landscapes with improved accuracy and efficiency. This technical note comprehensively reviews 45 recent studies to critically examine the integration of Machine Learning (ML) and Deep Learning (DL) techniques, particularly Convolutional Neural Networks (CNNs), with airborne LiDAR derivatives for automated archaeological feature detection. The review highlights the transformative potential of these approaches, revealing their capability to automate feature detection and classification, thus enhancing efficiency and accuracy in archaeological research. CNN-based methods, employed in 32 of the reviewed studies, consistently demonstrate high accuracy across diverse archaeological features. For example, ancient city walls were delineated with 94.12% precision using U-Net, Maya settlements with 95% accuracy using VGG-19, and with an IoU of around 80% using YOLOv8, and shipwrecks with a 92% F1-score using YOLOv3 aided by transfer learning. Furthermore, traditional ML techniques like random forest proved effective in tasks such as identifying burial mounds with 96% accuracy and ancient canals. Despite these significant advancements, the application of ML/DL in archaeology faces critical challenges, including the scarcity of large, labeled archaeological datasets, the prevalence of false positives due to morphological similarities with natural or modern features, and the lack of standardized evaluation metrics across studies. This note underscores the transformative potential of LiDAR and ML/DL integration and emphasizes the crucial need for continued interdisciplinary collaboration to address these limitations and advance the preservation of cultural heritage. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Cultural Heritage Research II)

18 pages, 2108 KB  
Article
Machine Learning Forecasting of Commercial Buildings’ Energy Consumption Using Euclidian Distance Matrices
by Connor Scott and Alhussein Albarbar
Energies 2025, 18(15), 4160; https://doi.org/10.3390/en18154160 - 5 Aug 2025
Viewed by 811
Abstract
Governments worldwide have set ambitious targets for decarbonising energy grids, driving the need for increased renewable energy generation and improved energy efficiency. One key strategy for achieving this involves enhanced energy management in buildings, often using machine learning-based forecasting methods. However, such methods typically rely on extensive historical data collected via costly sensor installations—resources that many buildings lack. This study introduces a novel forecasting approach that eliminates the need for large-scale historical datasets or expensive sensors. By integrating custom-built models with existing energy data, the method applies calculated weighting through a distance matrix and accuracy coefficients to generate reliable forecasts. It uses readily available building attributes—such as floor area and functional type—to position a new building within the matrix of existing data. A Euclidian distance matrix, akin to a K-nearest neighbour algorithm, determines the appropriate neural network(s) to utilise. These findings are benchmarked against a consolidated, more sophisticated neural network and a long short-term memory neural network. The dataset has hourly granularity over a 24 h horizon. The model consists of five bespoke neural networks and demonstrates advantages over the benchmark models: a 610 s training duration, 500 kB of storage, an R2 of 0.9, and an average forecasting accuracy of 85.12% in predicting the energy consumption of the five buildings studied. This approach not only contributes to the specific goal of a fully decarbonised energy grid by 2050 but also establishes a robust and efficient methodology for maintaining standards with existing benchmarks while providing more control over the method. Full article
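The selection step can be sketched as follows: position a new building among known ones by Euclidean distance over simple attributes, then blend the nearest buildings' pre-trained forecasters with inverse-distance weights. The attribute values and the `forecasters` stand-ins are illustrative assumptions, not the study's trained networks.

```python
import numpy as np

# [scaled floor area, functional type code] for five known buildings
known = np.array([[0.8, 1], [0.3, 2], [0.5, 1], [0.9, 3], [0.2, 2]], float)
forecasters = {i: (lambda h, i=i: 100 + 10 * i + 5 * np.sin(h / 24 * 2 * np.pi))
               for i in range(5)}        # stand-ins for the bespoke neural nets

def forecast(new_building: np.ndarray, hour: int, k: int = 2) -> float:
    d = np.linalg.norm(known - new_building, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                        # KNN-style selection
    w = 1.0 / (d[nearest] + 1e-9)                      # inverse-distance weights
    preds = np.array([forecasters[i](hour) for i in nearest])
    return float((w * preds).sum() / w.sum())          # weighted blend

print(forecast(np.array([0.6, 1.0]), hour=14))
```

An accuracy coefficient per forecaster, as the abstract mentions, would simply multiply into `w` so that historically better networks count for more.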

21 pages, 5017 KB  
Article
Vessel Trajectory Prediction with Deep Learning: Temporal Modeling and Operational Implications
by Nicos Evmides, Michalis P. Michaelides and Herodotos Herodotou
J. Mar. Sci. Eng. 2025, 13(8), 1439; https://doi.org/10.3390/jmse13081439 - 28 Jul 2025
Viewed by 1395
Abstract
Vessel trajectory prediction is fundamental to maritime navigation, safety, and operational efficiency, particularly as the industry increasingly relies on digital solutions and real-time data analytics. This study addresses the challenge of forecasting vessel movements using historical Automatic Identification System (AIS) data, with a focus on understanding the temporal behavior of deep learning models under different input and prediction horizons. To this end, a robust data pre-processing pipeline was developed to ensure temporal consistency, filter anomalous records, and segment continuous vessel trajectories. Using a curated dataset from the eastern Mediterranean, three deep recurrent neural network architectures, namely LSTM, Bi-LSTM, and Bi-GRU, were evaluated for short- and long-term trajectory prediction. Empirical results demonstrate that Bi-LSTM consistently achieves higher accuracy across both horizons, with performance gradually degrading under extended forecast windows. The analysis also reveals key insights into the trade-offs between model complexity, horizon-specific robustness, and predictive stability. This work contributes to maritime informatics by offering a comparative evaluation of recurrent architectures and providing a structured and empirical foundation for selecting and deploying trajectory forecasting models in operational contexts. Full article
(This article belongs to the Special Issue Maritime Transport and Port Management)
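A minimal PyTorch sketch of the Bi-LSTM configuration evaluated above: a window of past AIS states is encoded by a bidirectional LSTM and mapped to the next position. The feature set (lat, lon, speed and course over ground), window length, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMPredictor(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)   # next (lat, lon)

    def forward(self, x):                      # x: (B, T, 4)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # last step, both directions

# 16 trajectories, each a 12-step AIS window
next_pos = BiLSTMPredictor()(torch.randn(16, 12, 4))
```

Longer horizons would be produced by feeding predictions back in autoregressively, which is where the accuracy degradation the study reports tends to appear.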

10 pages, 6510 KB  
Proceeding Paper
Energy Consumption Forecasting for Renewable Energy Communities: A Case Study of Loureiro, Portugal
by Muhammad Akram, Chiara Martone, Ilenia Perugini and Emmanuele Maria Petruzziello
Eng. Proc. 2025, 101(1), 7; https://doi.org/10.3390/engproc2025101007 - 25 Jul 2025
Viewed by 1780
Abstract
Intensive energy consumption in the building sector remains one of the primary contributors to climate change and global warming. Within Renewable Energy Communities (RECs), improving energy management is essential for promoting sustainability and reducing environmental impact. Accurate forecasting of energy consumption at the community level is a key tool in this effort. Traditionally, engineering-based methods grounded in thermodynamic principles have been employed, offering high accuracy under controlled conditions. However, their reliance on exhaustive building-level data and high computational costs limits their scalability in dynamic REC settings. In contrast, Artificial Intelligence (AI)-driven methods provide flexible and scalable alternatives by learning patterns from historical consumption and environmental data. This study investigates three Machine Learning (ML) models, Decision Tree (DT), Random Forest (RF), and CatBoost, and one Deep Learning (DL) model, Convolutional Neural Network (CNN), to forecast community electricity consumption using real smart meter data and local meteorological variables. The study focuses on a REC in Loureiro, Portugal, consisting of 172 residential users from whom 16 months of 15 min interval electricity consumption data were collected. Temporal features (hour of the day, day of the week, month) were combined with lag-based usage patterns, including features representing energy consumption at the corresponding time in the previous hour and on the previous day, to enhance model accuracy by leveraging short-term dependencies and daily repetition in usage behavior. Models were evaluated using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and the Coefficient of Determination R2. Among all models, CatBoost achieved the best performance, with an MSE of 0.1262, MAPE of 4.77%, and an R2 of 0.9018. These results highlight the potential of ensemble learning approaches for improving energy demand forecasting in RECs, supporting smarter energy management and contributing to energy and environmental performance. Full article
(This article belongs to the Proceedings of The 11th International Conference on Time Series and Forecasting)
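The winning setup can be sketched briefly: calendar features plus the two lag features the study describes (consumption at the same time one hour earlier and one day earlier), fed to CatBoost. The column names and synthetic data frame are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from catboost import CatBoostRegressor
from sklearn.metrics import r2_score

# Synthetic 15-min consumption series standing in for the smart-meter data
idx = pd.date_range("2024-01-01", periods=8 * 96, freq="15min")
df = pd.DataFrame({"kwh": np.random.rand(len(idx))}, index=idx)
df["hour"], df["dow"], df["month"] = idx.hour, idx.dayofweek, idx.month
df["lag_1h"] = df["kwh"].shift(4)     # 4 x 15 min = 1 hour earlier
df["lag_1d"] = df["kwh"].shift(96)    # 96 x 15 min = 1 day earlier
df = df.dropna()

X, y = df[["hour", "dow", "month", "lag_1h", "lag_1d"]], df["kwh"]
model = CatBoostRegressor(iterations=300, verbose=False).fit(X, y)
print("R2:", r2_score(y, model.predict(X)))
```

The lag features exploit exactly the short-term dependencies and daily repetition the abstract highlights, which is largely why the tree ensembles compete so well with the CNN here.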

19 pages, 910 KB  
Article
Robust Gas Demand Prediction Using Deep Neural Networks: A Data-Driven Approach to Forecasting Under Regulatory Constraints
by Kostiantyn Pavlov, Olena Pavlova, Tomasz Wołowiec, Svitlana Slobodian, Andriy Tymchyshak and Tetiana Vlasenko
Energies 2025, 18(14), 3690; https://doi.org/10.3390/en18143690 - 12 Jul 2025
Viewed by 864
Abstract
Accurate gas consumption forecasting is critical for modern energy systems due to complex consumer behavior and regulatory requirements. Deep neural networks (DNNs), such as Seq2Seq with attention, TiDE, and Temporal Fusion Transformers, are promising for modeling complex temporal relationships and non-linear dependencies. This study compares state-of-the-art architectures using real-world data from over 100,000 consumers to determine their practical viability for forecasting gas consumption under operational and regulatory conditions. Particular attention is paid to the impact of data quality, feature attribution, and model reliability on performance. The main use cases for natural gas consumption forecasting are tariff setting by regulators and system balancing for suppliers and operators. The study used monthly natural gas consumption data from 105,527 households in the Volyn region of Ukraine from January 2019 to April 2023, together with meteorological data on average monthly air temperature. Missing values were replaced with zeros or imputed using seasonal imputation or K-nearest neighbors. The results showed that previous consumption is the dominant feature for all models, confirming their autoregressive nature and the high importance of historical data. Temperature and category were identified as supporting features. Imputed data consistently improved the performance of all models. Seq2SeqPlus showed high accuracy, TiDE was the most stable, and TFT offered flexibility and interpretability. Implementing these models requires careful integration with data management, regulatory frameworks, and operational workflows. Full article
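The imputation step the study relies on can be sketched with scikit-learn's KNNImputer, which fills a missing monthly reading from the most similar consumption histories. The toy matrix (households by months) and neighbor count are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Rows: households; columns: monthly consumption (m3); NaN = missing reading
usage = np.array([[120.0, 95.0, np.nan, 30.0],
                  [118.0, 90.0, 60.0, 28.0],
                  [np.nan, 88.0, 58.0, 25.0]])
filled = KNNImputer(n_neighbors=2).fit_transform(usage)
```

Seasonal imputation, the study's other option, would instead fill a gap from the same household's value in the same month of adjacent years, preserving the strong annual cycle of heating demand.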

29 pages, 2947 KB  
Article
Predicting Olympic Medal Performance for 2028: Machine Learning Models and the Impact of Host and Coaching Effects
by Zhenkai Zhang, Tengfei Ma, Yunpeng Yao, Ningjia Xu, Yujie Gao and Wanwan Xia
Appl. Sci. 2025, 15(14), 7793; https://doi.org/10.3390/app15147793 - 11 Jul 2025
Viewed by 2659
Abstract
This study develops two machine learning models to predict the medal performance of countries at the 2028 Olympic Games while systematically analyzing and quantifying the impacts of the host effect and exceptional coaching on medal gains. The dataset encompasses records of total medals by country, event categories, and athletes’ participation from the Olympic Games held between 1896 and 2024. We use K-means clustering to analyze medal trends, categorizing 234 nations into four groups (α1, α2, α3, α4). Among these, α1, α2, and α3 represent medal-winning countries, while α4 consists of non-medal-winning nations. For the α1, α2, and α3 groups, 2–3 representative countries from each are selected for trend analysis, with the United States serving as a case study. This study extracts ten factors that may influence medal wins from the dataset, including participant data, the number of events, and medal growth rates. Factor analysis condenses these ten influencing factors into three principal components: the event scale factor (F1), the medal trend factor (F2), and the gender and athletic ability factor (F3). An ARIMA model predicts the factor coefficients for 2028 as 0.9539, 0.7999, and 0.2937, respectively. Four models (random forest, BP Neural Network, XGBoost, and SVM) are employed to predict medal outcomes, using historical data split into training and testing sets to compare their predictive performance. The research results show that XGBoost is the optimal medal prediction model, with the United States projected to win 57 gold medals and a total of 135 medals in 2028. For non-medal-winning countries (α4), a three-layer fully connected neural network (FCNN) is constructed, achieving an accuracy of 85.5% during testing. Additionally, a formula to calculate the host effect and a Bayesian linear regression model to assess the impact of exceptional coaching on athletes’ medal performance are proposed. The overall trend of countries in the α1 group is stable, but they are significantly affected by the host effect; the α2 group shows an upward trend; the trend in the α3 group depends on the athletes’ conditions and whether the events they excel in are included in that year’s Olympics. In the α4 group, the probabilities of the United Arab Republic (UAR) and Mali (MLI) winning medals in the 2028 Olympic Games are 77.47% and 58.47%, respectively, and another four countries have probabilities exceeding 30%. For the eight most recent Olympic Games, the gain rate of the host effect is 74%. Great coaches can bring an average increase of 0.2 to 0.5 medals for each athlete. The proposed models, through an innovative integration of clustering, dimensionality reduction, and predictive algorithms, provide reliable forecasts and data-driven insights for optimizing national sports strategies. These contributions not only address the gap in predicting first-time medal wins for non-medal-winning nations but also offer guidance for policymakers and sports organizations, though they are constrained by assumptions of stable historical trends, minimal external disruptions, and the exclusion of unknown athletes. Full article
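Two stages of this pipeline are easy to sketch: K-means grouping of countries by their medal-count histories, then an XGBoost regressor predicting a Games total from the three factor scores. The synthetic arrays below stand in for the 1896–2024 records; only the 2028 factor coefficients (0.9539, 0.7999, 0.2937) come from the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans
from xgboost import XGBRegressor

# Stage 1: cluster 234 countries by synthetic medal histories over 10 Games
medal_history = np.random.poisson(5, size=(234, 10))
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(medal_history)

# Stage 2: regress totals on the three factor scores (F1, F2, F3)
factors = np.random.rand(200, 3)       # F1 event scale, F2 trend, F3 gender/ability
totals = factors @ np.array([60.0, 40.0, 10.0]) + np.random.randn(200)
model = XGBRegressor(n_estimators=200).fit(factors, totals)
pred_2028 = model.predict(np.array([[0.9539, 0.7999, 0.2937]]))
```

In the paper, the factor scores come from factor analysis of the ten raw indicators and the 2028 inputs from the ARIMA extrapolation, with the host-effect formula applied as a post-hoc adjustment.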

23 pages, 6067 KB  
Article
Daily-Scale Fire Risk Assessment for Eastern Mongolian Grasslands by Integrating Multi-Source Remote Sensing and Machine Learning
by Risu Na, Byambakhuu Gantumur, Wala Du, Sainbuyan Bayarsaikhan, Yu Shan, Qier Mu, Yuhai Bao, Nyamaa Tegshjargal and Battsengel Vandansambuu
Fire 2025, 8(7), 273; https://doi.org/10.3390/fire8070273 - 11 Jul 2025
Viewed by 1398
Abstract
Frequent wildfires in the eastern grasslands of Mongolia pose significant threats to the ecological environment and pastoral livelihoods, creating an urgent need for high-temporal-resolution and high-precision fire prediction. To address this, this study established a daily-scale grassland fire risk assessment framework integrating multi-source remote sensing data to enhance predictive capabilities in eastern Mongolia. Utilizing fire point data from eastern Mongolia (2012–2022), we fused multiple feature variables and developed and optimized three models: random forest (RF), XGBoost, and deep neural network (DNN). Model performance was enhanced using Bayesian hyperparameter optimization via Optuna. Results indicate that the Bayesian-optimized XGBoost model achieved the best generalization performance, with an overall accuracy of 92.3%. Shapley additive explanations (SHAP) interpretability analysis revealed that daily-scale meteorological factors—daily average relative humidity, daily average wind speed, daily maximum temperature—and the normalized difference vegetation index (NDVI) were consistently among the top four contributing variables across all three models, identifying them as key drivers of fire occurrence. Spatiotemporal validation using historical fire data from 2023 demonstrated that fire points recorded on 8 April and 1 May 2023 fell within areas predicted to have “extremely high” fire risk probability on those respective days. Moreover, points A (117.36° E, 46.70° N) and B (116.34° E, 49.57° N) exhibited the highest number of days classified as “high” or “extremely high” risk during the April/May and September/October periods, consistent with actual fire occurrences. In summary, the integration of multi-source data fusion and Bayesian-optimized machine learning has enabled the first high-precision daily-scale wildfire risk prediction for the eastern Mongolian grasslands, thus providing a scientific foundation and decision-making support for wildfire prevention and control in the region. Full article
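The modeling loop described above can be sketched as Optuna's Bayesian (TPE) search over XGBoost hyperparameters followed by SHAP to rank the drivers. The synthetic features stand in for the daily meteorological variables and NDVI, and the search space is an illustrative assumption.

```python
import numpy as np
import optuna
import shap
import xgboost as xgb
from sklearn.model_selection import cross_val_score

X = np.random.rand(500, 6)               # e.g. RH, wind, Tmax, NDVI, ...
y = (X[:, 0] < 0.3).astype(int)          # toy fire / no-fire label

def objective(trial):
    params = {
        "max_depth": trial.suggest_int("max_depth", 3, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
    }
    clf = xgb.XGBClassifier(**params, eval_metric="logloss")
    return cross_val_score(clf, X, y, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")   # TPE sampler by default
study.optimize(objective, n_trials=30)
best = xgb.XGBClassifier(**study.best_params).fit(X, y)
shap_values = shap.TreeExplainer(best).shap_values(X)   # per-feature drivers
```

Averaging the absolute SHAP values per column reproduces the kind of ranking the study reports, with humidity, wind, maximum temperature, and NDVI at the top.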
