Search Results (7,483)

Search Parameters:
Keywords = ensembled model

22 pages, 1588 KB  
Article
A Hybrid HOG-LBP-CNN Model with Self-Attention for Multiclass Lung Disease Diagnosis from CT Scan Images
by Aram Hewa, Jafar Razmara and Jaber Karimpour
Computers 2026, 15(2), 93; https://doi.org/10.3390/computers15020093 (registering DOI) - 1 Feb 2026
Abstract
Resource-limited settings continue to face challenges in distinguishing COVID-19, bacterial pneumonia, viral pneumonia, and normal lung conditions because of overlapping CT appearances and inter-observer variability. We propose a hybrid deep learning architecture that combines hand-designed descriptors (Histogram of Oriented Gradients, Local Binary Patterns) with a 20-layer Convolutional Neural Network equipped with dual self-attention. Handcrafted features were classified with Support Vector Machines, and ensemble averaging integrated these results with the CNN. A confidence threshold of 0.7 was used to flag suspicious cases for manual review. The model was trained on a balanced dataset of 14,000 chest CT scans (3500 per class) with patient-wise five-fold cross-validation. It achieved 97.43% test accuracy and a macro F1-score of 0.97, a statistically significant improvement over a standalone CNN (92.0%), ResNet-50 (90.0%), a multiscale CNN (94.5%), and an ensemble CNN (96.0%). The self-attention module, which focuses on diagnostically salient lung regions, contributed a further 2–3% improvement. Only 5% of predictions fell below the confidence threshold, indicating reliability and clinical usefulness. The framework provides an interpretable and scalable method for multiclass lung disease diagnosis, particularly suited to deployment in resource-limited healthcare settings. Future work will include multi-center validation, model optimization, and improved interpretability for real-world use. Full article
(This article belongs to the Special Issue AI in Bioinformatics)
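The decision rule described above (ensemble averaging of CNN and SVM outputs, with a 0.7 confidence threshold routing cases to manual review) can be sketched as follows; the probability vectors and equal weights are illustrative assumptions, not the authors' configuration:

```python
def fuse_and_flag(p_cnn, p_svm, threshold=0.7, weights=(0.5, 0.5)):
    """Average per-class probabilities from two models and flag low-confidence cases.

    Returns (predicted_class, confidence, needs_manual_review).
    """
    w_cnn, w_svm = weights
    fused = [w_cnn * a + w_svm * b for a, b in zip(p_cnn, p_svm)]
    top = max(range(len(fused)), key=fused.__getitem__)
    return top, fused[top], fused[top] < threshold

# Four classes: COVID-19, bacterial pneumonia, viral pneumonia, normal (toy values)
cls, conf, review = fuse_and_flag([0.90, 0.05, 0.03, 0.02],
                                  [0.70, 0.20, 0.05, 0.05])
```

A case whose fused top probability falls below 0.7 would be routed to a radiologist instead of being auto-labeled.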
15 pages, 1766 KB  
Article
Metaheuristic Optimizer-Based Segregated Load Scheduling Approach for Household Energy Consumption Management
by Shahzeb Ahmad Khan, Attique Ur Rehman, Ammar Arshad, Farhan Hameed Malik and Walid Ayadi
Eng 2026, 7(2), 65; https://doi.org/10.3390/eng7020065 (registering DOI) - 1 Feb 2026
Abstract
In the face of escalating energy demand, this research proposes a demand-side management (DSM) strategy that focuses on appliance-level load shifting in residential environments. The proposed approach utilizes detailed energy consumption forecasts that are generated by ensemble machine learning models, which predict usage at both whole-household and individual appliance levels. This granular forecasting enables the development of customized load-shifting schedules for controllable devices. These schedules are optimized using a metaheuristic genetic algorithm (GA) with the objectives of minimizing consumer energy costs and reducing peak demand. The iterative nature of GA allows for continuous fine-tuning, thereby adapting to dynamic energy market conditions. The implemented DSM technique yields significant results, successfully reducing the daily energy consumption cost for shiftable appliances. Overall, the proposed system decreases the per-day consumer electricity cost from 237 cents (without DSM) to 208 cents (with DSM), achieving a 12.23% cost saving. Furthermore, it effectively mitigates peak demand, reducing it from 3.4 kW to 1.2 kW, which represents a substantial 64.7% reduction. These promising outcomes demonstrate the potential for substantial consumer savings while concurrently enhancing the overall efficiency and reliability of the power grid. Full article
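The scheduling objective can be made concrete with a toy fitness function; the tariff, the appliances, and the random search used in place of the paper's genetic algorithm are all assumptions for illustration:

```python
import random

# Hypothetical time-of-use tariff (cents/kWh) and shiftable appliances (kW, run hours)
PRICES = [10] * 8 + [25] * 8 + [15] * 8
APPLIANCES = {"washer": (1.0, 2), "dishwasher": (0.8, 1)}

def evaluate(schedule):
    """Cost (cents) and peak load (kW) of a start-hour schedule."""
    load = [0.0] * 24
    for name, start in schedule.items():
        kw, hours = APPLIANCES[name]
        for h in range(start, start + hours):
            load[h % 24] += kw
    return sum(l * p for l, p in zip(load, PRICES)), max(load)

def optimize(iters=2000, seed=0):
    """Random-search stand-in for the GA: sample schedules, keep the cheapest."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        cand = {name: rng.randrange(24) for name in APPLIANCES}
        cost, peak = evaluate(cand)
        if best is None or cost < best[0]:
            best = (cost, peak, cand)
    return best
```

A real GA would evolve a population with crossover and mutation, but the fitness being minimized is the same cost-plus-peak trade-off described above.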

32 pages, 2526 KB  
Article
HSE-GNN-CP: Spatiotemporal Teleconnection Modeling and Conformalized Uncertainty Quantification for Global Crop Yield Forecasting
by Salman Mahmood, Raza Hasan and Shakeel Ahmad
Information 2026, 17(2), 141; https://doi.org/10.3390/info17020141 (registering DOI) - 1 Feb 2026
Abstract
Global food security faces escalating threats from climate variability and resource constraints. Accurate crop yield forecasting is essential; however, existing methods frequently overlook complex spatial dependencies driven by climate teleconnections, such as ENSO, and lack rigorous uncertainty quantification. This paper presents HSE-GNN-CP, a novel framework integrating heterogeneous stacked ensembles, graph neural networks (GNNs), and conformal prediction (CP). Domain-specific features, including growing degree days and climate suitability scores, are engineered, and spatial patterns are explicitly modeled via rainfall correlation graphs. The ensemble combines random forest and gradient boosting learners with bootstrap aggregation, while GNNs encode inter-regional climate dependencies. Conformalized quantile regression ensures statistically valid prediction intervals. Evaluated on a global dataset spanning 15 countries and six major crops from 1990 to 2023, the framework achieves an R2 of 0.9594 and an RMSE of 4882 hg/ha. Crucially, it delivers calibrated 80% prediction intervals with 80.72% empirical coverage, significantly outperforming uncalibrated baselines at 40.03%. SHAP analysis identifies crop type and rainfall as dominant predictors, while the integrated drought classifier achieves perfect accuracy. These contributions advance agricultural AI by merging robust ensemble learning with explicit teleconnection modeling and trustworthy uncertainty quantification. Full article
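Split conformal prediction, the mechanism behind the calibrated intervals reported above, can be sketched generically (this is the standard recipe, not the HSE-GNN-CP code):

```python
import math

def conformal_radius(cal_true, cal_pred, alpha=0.2):
    """Residual quantile on a held-out calibration split, targeting 1 - alpha coverage."""
    scores = sorted(abs(y - p) for y, p in zip(cal_true, cal_pred))
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))     # conformal quantile index
    return scores[min(k, n) - 1]

def predict_interval(point_pred, radius):
    """Attach the calibrated radius to any point forecast."""
    return point_pred - radius, point_pred + radius
```

With alpha = 0.2 this targets the 80% intervals the paper evaluates; on exchangeable data the coverage guarantee holds regardless of the underlying model.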

28 pages, 9410 KB  
Article
Integrated AI Framework for Sustainable Environmental Management: Multivariate Air Pollution Interpretation and Prediction Using Ensemble and Deep Learning Models
by Youness El Mghouchi and Mihaela Tinca Udristioiu
Sustainability 2026, 18(3), 1457; https://doi.org/10.3390/su18031457 (registering DOI) - 1 Feb 2026
Abstract
Accurate prediction, forecasting, and interpretability of air pollutant concentrations are important for sustainable environmental management and protecting public health. An integrated artificial intelligence (AI) framework is proposed to predict, forecast, and analyse six major air pollutants, namely particulate matter (PM2.5 and PM10), ground-level ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulphur dioxide (SO2), using a combination of ensemble and deep learning models. Five years of hourly air quality and meteorological data are analysed through correlation and Granger causality tests to uncover pollutant interdependencies and driving factors. The Pearson correlation analysis reveals strong positive associations among primary pollutants (PM2.5–PM10; CO–nitrogen oxides (NOx) and VOCs) and inverse correlations between O3 and NOx (NO and NO2), confirming typical photochemical behaviour. Granger causality analysis further identifies NO2 and NO as key causal drivers influencing other pollutants, particularly O3 formation. Among the 23 AI models tested for prediction, XGBoost, Random Forest, and Convolutional Neural Networks (CNNs) achieve the best performance for different pollutants. NO2 prediction using CNNs displays the highest accuracy in testing (R2 = 0.999, RMSE = 0.66 µg/m3), followed by PM2.5 and PM10 with XGBoost (R2 = 0.90 and 0.79 during testing, respectively). The Air Quality Index (AQI) analysis shows that SO2 and PM10 are the dominant contributors to poor air quality episodes, while ozone peaks occur during warm, high-radiation periods. The interpretability analysis based on Shapley Additive exPlanations (SHAP) highlights the key influence of relative humidity, temperature, solar brightness, and NOx species on pollutant concentrations, confirming their meteorological and chemical relevance. Finally, a deep-NARMAX model was applied to forecast the next horizons for the six air pollutants studied. Six formulas were derived using input data at times (t, t − 1, t − 2, …, t − n) to forecast a horizon of (t + 1) hours for single-step forecasting. For multi-step forecasting, the forecast is extended iteratively to (t + 2) hours and beyond: a recursive strategy is adopted whereby the forecast at (t + 1) is fed back as an input to generate the forecast at (t + 2), and so forth. Overall, this integrated framework combines predictive accuracy with physical interpretability, offering a powerful data-driven tool for air quality assessment and policy support. This approach can be extended to real-time applications for sustainable environmental monitoring and decision-making systems. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)
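The recursive multi-step strategy described in the abstract (feeding the (t + 1) forecast back as an input for (t + 2), and so on) can be sketched model-agnostically; the toy mean model below is a stand-in for the trained deep-NARMAX model:

```python
def recursive_forecast(history, one_step, horizon, n_lags):
    """Iterate a one-step model, feeding each forecast back as an input."""
    window = list(history[-n_lags:])
    out = []
    for _ in range(horizon):
        nxt = one_step(window)            # forecast from (t, t-1, ..., t-n+1)
        out.append(nxt)
        window = window[1:] + [nxt]       # slide the window over the forecast
    return out

toy_model = lambda w: sum(w) / len(w)     # stand-in for the fitted model
preds = recursive_forecast([1.0, 2.0, 3.0, 4.0], toy_model, horizon=3, n_lags=2)
```

Forecast errors compound under this scheme, which is why single-step accuracy matters so much for the longer horizons.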

39 pages, 3699 KB  
Article
Enhancing Decision Intelligence Using Hybrid Machine Learning Framework with Linear Programming for Enterprise Project Selection and Portfolio Optimization
by Abdullah, Nida Hafeez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz, Rolando Quintero Téllez, Eponon Anvi Alex, Grigori Sidorov and Alexander Gelbukh
AI 2026, 7(2), 52; https://doi.org/10.3390/ai7020052 (registering DOI) - 1 Feb 2026
Abstract
This study presents a hybrid analytical framework that enhances project selection by achieving reasonable predictive accuracy through the integration of expert judgment and modern artificial intelligence (AI) techniques. Using an enterprise-level dataset of 10,000 completed software projects with verified real-world statistical characteristics, we develop a three-step architecture for intelligent decision support. First, we introduce an extended Analytic Hierarchy Process (AHP) that incorporates organizational learning patterns to compute expert-validated criteria weights with a consistent level of reliability (CR = 0.04), and Linear Programming is used for portfolio optimization. Second, we propose a machine learning architecture that integrates expert knowledge derived from AHP into models such as Transformers, TabNet, and Neural Oblivious Decision Ensembles through mechanisms including attention modulation, split criterion weighting, and differentiable tree regularization. Third, the hybrid AHP-Stacking classifier generates a meta-ensemble that adaptively balances expert-derived information with data-driven patterns. The analysis shows that the model achieves 97.5% accuracy, a 96.9% F1-score, and a 0.989 AUC-ROC, representing a 25% improvement compared to baseline methods. The framework also indicates a projected 68.2% improvement in portfolio value (estimated incremental value of USD 83.5 M) based on post factum financial results from the enterprise’s ventures. This study is evaluated retrospectively using data from a single enterprise, and while the results demonstrate strong robustness, generalizability to other organizational contexts requires further validation. This research contributes a structured approach to hybrid intelligent systems and demonstrates that combining expert knowledge with machine learning can provide reliable, transparent, and high-performing decision-support capabilities for project portfolio management. Full article
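The AHP weighting step can be illustrated with the standard column-normalization approximation and consistency ratio; the 3×3 pairwise matrix below is a toy example, not the paper's criteria:

```python
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random-index table

def ahp(M):
    """Priority weights by normalized column averages, plus the consistency ratio."""
    n = len(M)
    col = [sum(row[j] for row in M) for j in range(n)]
    w = [sum(M[i][j] / col[j] for j in range(n)) / n for i in range(n)]
    # lambda_max estimated as the mean of (M w)_i / w_i
    lam = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    cr = 0.0 if n <= 2 else ((lam - n) / (n - 1)) / RANDOM_INDEX[n]
    return w, cr

# Perfectly consistent 4:2:1 preferences -> CR = 0; CR <= 0.1 is the usual acceptance bar
weights, cr = ahp([[1, 2, 4], [0.5, 1, 2], [0.25, 0.5, 1]])
```

The paper's CR of 0.04 sits comfortably under the conventional 0.1 threshold, which is what "consistent level of reliability" refers to.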

24 pages, 3870 KB  
Article
Hybrid Ensemble Learning for TWSA Prediction in Water-Stressed Regions: A Case Study from Casablanca–Settat Region, Morocco
by Youssef Laalaoui, Naïma El Assaoui, Oumaima Ouahine, Thanh Thi Nguyen and Ahmed M. Saqr
Hydrology 2026, 13(2), 53; https://doi.org/10.3390/hydrology13020053 (registering DOI) - 1 Feb 2026
Abstract
A hybrid machine learning framework has been developed in this study to estimate Terrestrial Water Storage Anomalies (TWSA) in Morocco’s Casablanca–Settat region, which faces serious groundwater stress due to rapid urbanization, intensive agriculture, and climate variability. In this study, TWSA is used as an integrated proxy for groundwater-related storage changes, while acknowledging that it also includes contributions from soil moisture and surface water. The approach combines satellite-based observations from the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) with key environmental indicators such as rainfall, evapotranspiration, and land use data to track changes in groundwater availability with improved spatial detail. After preprocessing the data through feature selection, normalization, and outlier handling, the model applies six base learners, i.e., Huber regressor, automatic relevance determination regression, kernel ridge, long short-term memory, k-nearest neighbors, and gradient boosting. Their predictions are aggregated using a random forest meta-learner to improve accuracy and stability. The ensemble achieved strong results, with a root mean square error of 0.13, a mean absolute error of 0.108, and a determination coefficient of 0.97—far better than single-model baselines—based on a temporally independent train-test split. Spatial analysis highlighted clear patterns of groundwater depletion linked to land cover and usage. These results can guide targeted aquifer recharge efforts, drought response planning, and smarter irrigation management. The model also aligns with national goals under Morocco’s water sustainability initiatives and can be adapted for use in other regions with similar environmental challenges. Full article
(This article belongs to the Topic Advances in Hydrological Remote Sensing)
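The meta-learner idea (base predictions aggregated by a second-stage model) can be shown in miniature; the paper's random-forest meta-learner is replaced here by a least-squares blender for brevity, and the data are toy values:

```python
def fit_blend(a, b, y):
    """Least-squares weights for y ~ wa*a + wb*b via the 2x2 normal equations."""
    saa = sum(x * x for x in a); sbb = sum(x * x for x in b)
    sab = sum(p * q for p, q in zip(a, b))
    say = sum(p * t for p, t in zip(a, y)); sby = sum(q * t for q, t in zip(b, y))
    det = saa * sbb - sab * sab
    return (say * sbb - sby * sab) / det, (sby * saa - say * sab) / det

# Base-model predictions on a calibration split (toy): target equals a + b exactly
a, b, y = [1.0, 2.0, 3.0], [1.0, 1.0, 1.0], [2.0, 3.0, 4.0]
wa, wb = fit_blend(a, b, y)
blended = [wa * p + wb * q for p, q in zip(a, b)]
```

As in the paper, the meta-learner is fit on held-out base-model predictions, not on the raw features, which is what stabilizes the ensemble.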

23 pages, 8188 KB  
Article
Enhanced Pix2pixGAN with Spatial-Channel Attention for Underground Medium Inversion from GPR
by Sicheng Yang, Liangshuai Guo, Yahan Yang and Hongxia Ye
Remote Sens. 2026, 18(3), 448; https://doi.org/10.3390/rs18030448 (registering DOI) - 1 Feb 2026
Abstract
Ground penetrating radar (GPR) data inversion, especially in parallel-layered homogeneous media with multiple subsurface targets, still faces challenges in accurately reconstructing geometric structures due to weak reflections and complex target–medium interactions. To address these limitations, this paper proposes a novel multi-scale inversion framework named GPRGAN-SCSE (Ground Penetrating Radar Generative Adversarial Network with Spatial-Channel Squeeze and Excitation). Built upon the Pix2Pix Generative Adversarial Network (Pix2PixGAN), the proposed model incorporates a Spatial-Channel Squeeze and Excitation (SCSE) module into a residual U-Net generator to adaptively enhance target features embedded in layered media. Furthermore, a tri-scale discriminator ensemble is designed to enforce structural consistency and suppress layer-induced artifacts. The network is optimized using a composite loss integrating adversarial loss, L1 loss, and gradient difference loss to jointly improve structural continuity and boundary sharpness. Experiments conducted on a simulation dataset of parallel-layered homogeneous media with multiple targets demonstrate that GPRGAN-SCSE substantially outperforms existing inversion networks. The proposed method reduces the MAE by 63.8% and achieves a Structural Similarity Index (SSIM) of 99.96%, effectively improving the clarity of subsurface edges and the fidelity of geometric contours. These results confirm that the proposed framework provides a robust and high-precision solution for non-destructive subsurface imaging under layered media conditions. Full article
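The gradient-difference term of the composite loss penalizes mismatched edge strength between the reconstructed and true subsurface maps; a minimal 2-D sketch (not the authors' implementation) is:

```python
def gradient_difference_loss(pred, true):
    """Mean absolute difference between the gradient magnitudes of two 2-D grids."""
    def grads(img):
        gx = [abs(img[i][j + 1] - img[i][j])
              for i in range(len(img)) for j in range(len(img[0]) - 1)]
        gy = [abs(img[i + 1][j] - img[i][j])
              for i in range(len(img) - 1) for j in range(len(img[0]))]
        return gx + gy
    gp, gt = grads(pred), grads(true)
    return sum(abs(p - t) for p, t in zip(gp, gt)) / len(gp)
```

In training, a term like this is weighted alongside the adversarial and L1 losses to sharpen layer boundaries and target contours.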

14 pages, 6484 KB  
Article
Short-Term Electricity Price Forecasting via a Reinforcement Learning-Based Dynamic Soft Ensemble Strategy
by Yan Wang, Yongxi Zhao, Kun Liang and Hong Fan
Energies 2026, 19(3), 761; https://doi.org/10.3390/en19030761 (registering DOI) - 1 Feb 2026
Abstract
To address the high volatility of spot market prices and the feature extraction limitations of single models, a short-term electricity price forecasting method based on a reinforcement learning dynamic soft ensemble strategy is proposed. First, a complementary dual-branch architecture is constructed: the CNN-LSTM-Attention branch mines local temporal features, while the Transformer branch captures long-range global dependencies. Second, the Q-learning algorithm is introduced to model weight optimization as a Markov Decision Process. An intelligent agent perceives fluctuation states to adaptively allocate weights, overcoming the rigidity of traditional ensembles. Case studies on PJM market data demonstrate that the proposed model outperforms advanced benchmarks in MAE and RMSE metrics. Notably, prediction accuracy is significantly improved during price spikes and negative price periods. The results verify that the strategy effectively copes with market concept drift, supporting reliable bidding and risk mitigation. Full article
(This article belongs to the Special Issue Energy, Electrical and Power Engineering: 5th Edition)
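The Q-learning weight allocation can be sketched as a tiny contextual problem: states are volatility regimes, actions are discrete weight splits between the two branches, and the reward table below is entirely hypothetical. The full method bootstraps over next states; this sketch uses a one-step (bandit-style) update for brevity:

```python
import random

STATES  = ["calm", "spike"]
ACTIONS = [0.2, 0.5, 0.8]                   # weight on the CNN-LSTM-Attention branch
REWARD  = {("calm", 0.2): 0.9, ("calm", 0.5): 0.7, ("calm", 0.8): 0.5,
           ("spike", 0.2): 0.4, ("spike", 0.5): 0.6, ("spike", 0.8): 0.9}

def train(episodes=3000, lr=0.5, seed=1):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s, a = rng.choice(STATES), rng.choice(ACTIONS)   # pure exploration
        Q[(s, a)] += lr * (REWARD[(s, a)] - Q[(s, a)])   # one-step Q update
    return Q

Q = train()
best = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

The learned policy shifts weight toward whichever branch the (hypothetical) rewards favor in each regime, which is the adaptivity the paper contrasts with fixed-weight ensembles.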

28 pages, 3165 KB  
Article
Assessment of the Reliability of AI Models in Predicting Urban Energy Consumption Under Conditions of Small or Incomplete Data
by Giuseppe Piras, Francesco Muzi and Zahra Ziran
Appl. Sci. 2026, 16(3), 1457; https://doi.org/10.3390/app16031457 (registering DOI) - 31 Jan 2026
Abstract
The use of artificial intelligence (AI) and machine learning (ML) models to forecast urban energy consumption is becoming more widespread, but these models often rely on large, clean and well-distributed datasets. In reality, particularly at a local level, the available data are often limited, inconsistent or incomplete. This study systematically examines the robustness and reliability of AI predictive models in urban small data conditions using a real energy dataset from a neighborhood monitored over 24 months. The analysis compares several ML models trained on progressively shorter historical windows (6, 12, 18 and 24 months) and assesses performance degradation through controlled data quality stress tests, including missing values and noise. Results show that ensemble-based models achieve high accuracy when at least 18–24 months of data are available (normalized R2 up to 0.87), while performance declines markedly below 12 months. Gradient boosting demonstrates the highest robustness under severe data constraints, maintaining normalized R2 values above 0.70 with 12 months of data. Regularized linear models perform competitively in longer, well-structured time series but degrade under extreme data scarcity. An ultra-conservative data augmentation strategy yields limited but consistent improvements (≈1–2%) in short-horizon scenarios. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
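The data-quality stress tests (missing values and noise) can be sketched as a small corruption harness; the forward-fill imputation and all parameters are illustrative choices, not the study's protocol:

```python
import random

def corrupt(series, missing_frac, noise_sd, seed=0):
    """Randomly drop points (None) and add Gaussian noise to the rest."""
    rng = random.Random(seed)
    return [None if rng.random() < missing_frac else x + rng.gauss(0.0, noise_sd)
            for x in series]

def forward_fill(series, initial=0.0):
    """Simple imputation: carry the last observed value forward."""
    out, last = [], initial
    for v in series:
        last = last if v is None else v
        out.append(last)
    return out
```

Sweeping `missing_frac` and `noise_sd` and re-scoring each model on the corrupted, imputed series produces the kind of degradation curves the study reports.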

18 pages, 3738 KB  
Article
Overcoming the Curse of Dimensionality with Synolitic AI
by Alexey Zaikin, Ivan Sviridov, Artem Sosedka, Anastasia Linich, Ruslan Nasyrov, Evgeny M. Mirkes and Tatiana Tyukina
Technologies 2026, 14(2), 84; https://doi.org/10.3390/technologies14020084 (registering DOI) - 31 Jan 2026
Abstract
High-dimensional tabular data are common in biomedical and clinical research, yet conventional machine learning methods often struggle in such settings due to data scarcity, feature redundancy, and limited generalization. In this study, we systematically evaluate Synolitic Graph Neural Networks (SGNNs), a framework that transforms high-dimensional samples into sample-specific graphs by training ensembles of low-dimensional pairwise classifiers and analyzing the resulting graph structure with Graph Neural Networks. We benchmark convolution-based (GCN) and attention-based (GATv2) models across 15 UCI datasets under two training regimes: a foundation setting that concatenates all datasets and a dataset-specific setting with macro-averaged evaluation. We further assess cross-dataset transfer, robustness to limited training data, feature redundancy, and computational efficiency, and extend the analysis to a real-world ovarian cancer proteomics dataset. The results show that topology-aware node feature augmentation provides the dominant performance gains across all regimes. In the foundation setting, GATv2 achieves an ROC-AUC of up to 92.22 (GCN: 91.22), substantially outperforming XGBoost (86.05), α=0.001. In the dataset-specific regime, GATv2, combined with minimum-connectivity filtering, achieves a macro ROC-AUC of 83.12, compared to 80.28 for XGBoost. Leave-one-dataset-out evaluation confirms cross-domain transfer, with an ROC-AUC of up to 81.99. SGNNs maintain ROC-AUC around 85% with as little as 10% of the training data and consistently outperform XGBoost in more extreme low-data regimes, α=0.001. On ovarian cancer proteomics data, foundation training improves both predictive performance and stability. Efficiency analysis shows that graph filtering substantially reduces training time, inference latency, and memory usage without compromising accuracy. 
Overall, these findings suggest that SGNNs provide a robust and scalable approach for learning from high-dimensional, heterogeneous tabular data, particularly in biomedical settings with limited sample sizes. Full article

18 pages, 10981 KB  
Article
Ensemble Entropy with Adaptive Deep Fusion for Short-Term Power Load Forecasting
by Yiling Wang, Yan Niu, Xuejun Li, Xianglong Dai, Xiaopeng Wang, Yong Jiang, Chenghu He and Li Zhou
Entropy 2026, 28(2), 158; https://doi.org/10.3390/e28020158 (registering DOI) - 31 Jan 2026
Abstract
Accurate power load forecasting is crucial for ensuring the safety and economic operation of power systems. However, the complex, non-stationary, and heterogeneous nature of power load data presents significant challenges for traditional prediction methods, particularly in capturing instantaneous dynamics and effectively fusing multi-feature information. This paper proposes a novel framework—Ensemble Entropy with Adaptive Deep Fusion (EEADF)—for short-term multi-feature power load forecasting. The framework introduces an ensemble instantaneous entropy extraction module to compute and fuse multiple entropy types (approximate, sample, and permutation entropies) in real-time within sliding windows, creating a sensitive representation of system states. A task-adaptive hierarchical fusion mechanism is employed to balance computational efficiency and model expressivity. For time-series forecasting tasks with relatively structured patterns, feature concatenation fusion is used that directly combines LSTM sequence features with multimodal entropy features. For complex multimodal understanding tasks requiring nuanced cross-modal interactions, multi-head self-attention fusion is implemented that dynamically weights feature importance based on contextual relevance. A dual-branch deep learning model is constructed that processes both raw sequences (via LSTM) and extracted entropy features (via MLP) in parallel. Extensive experiments on a carefully designed simulated multimodal dataset demonstrate the framework’s robustness in recognizing diverse dynamic patterns, achieving MSE of 0.0125, MAE of 0.0794, and R² of 0.9932. Validation on the real-world ETDataset for power load forecasting confirms that the proposed method significantly outperforms baseline models (LSTM, TCN, transformer, and informer) and traditional entropy methods across standard evaluation metrics (MSE, MAE, RMSE, MAPE, and R²). 
Ablation studies further verify the critical roles of both the entropy features and the fusion mechanism. Full article
(This article belongs to the Section Multidisciplinary Applications)
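One of the entropy features named above, permutation entropy, has a compact form (order-3 ordinal patterns are a common default; this generic sketch would be applied per sliding window, and is not the EEADF module itself):

```python
import math
from itertools import permutations

def permutation_entropy(x, order=3):
    """Normalized Shannon entropy of ordinal patterns in a series (0 = fully ordered)."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        w = x[i:i + order]
        pattern = tuple(sorted(range(order), key=w.__getitem__))  # rank order of the window
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))   # normalize to [0, 1]
```

A monotone series yields entropy 0, while an irregular series spreads mass over many ordinal patterns, which is what makes the measure sensitive to regime changes in the load signal.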
24 pages, 2031 KB  
Article
A Unified Approach for Ensemble Function and Threshold Optimization in Anomaly-Based Failure Forecasting
by Nikolaos Kolokas, Vasileios Tatsis, Angeliki Zacharaki, Dimosthenis Ioannidis and Dimitrios Tzovaras
Appl. Sci. 2026, 16(3), 1452; https://doi.org/10.3390/app16031452 (registering DOI) - 31 Jan 2026
Abstract
This paper introduces a novel approach to anomaly-based failure forecasting that jointly optimizes both the ensemble function and the anomaly threshold used for decision making. Unlike conventional methods that apply fixed or classifier-defined thresholds, the proposed framework simultaneously tunes the threshold of the failure probability or anomaly score and the parameters of an ensemble function that integrates multiple machine learning models—specifically, Random Forest and Isolation Forest classifiers trained under diverse preprocessing configurations. The distinctive contribution of this work lies in introducing a weighted mean ensemble function, whose coefficients are co-optimized with the anomaly threshold using a global optimization algorithm, enabling adaptive, data-driven decision boundaries. The method is designed for predictive maintenance applications and validated using sensor data from three industrial domains: aluminum anode production, plastic injection molding, and automotive manufacturing. The experimental results demonstrate that the proposed combined optimization significantly enhances forecasting reliability, improving the Matthews Correlation Coefficient by up to 6.5 percentage units compared to previous approaches. Beyond its empirical gains, this work establishes a scalable and computationally efficient framework for integrating threshold and ensemble optimization in real-world, cross-industry predictive maintenance systems. Full article
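The joint tuning of ensemble weights and the decision threshold can be illustrated with a coarse grid search scored by the Matthews Correlation Coefficient; the paper uses a global optimizer over many models and real sensor data, both replaced here by toy stand-ins:

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels/predictions."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    tn = sum((not t) and (not p) for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) or 1.0
    return (tp * tn - fp * fn) / den

def joint_optimize(scores_a, scores_b, y, steps=21):
    """Co-optimize one ensemble weight and the anomaly threshold by grid search."""
    best = (-2.0, None, None)
    for i in range(steps):
        w = i / (steps - 1)
        fused = [w * a + (1 - w) * b for a, b in zip(scores_a, scores_b)]
        for j in range(steps):
            thr = j / (steps - 1)
            pred = [s >= thr for s in fused]
            m = mcc(y, pred)
            if m > best[0]:
                best = (m, w, thr)
    return best   # (mcc, weight, threshold)
```

Optimizing the weight and threshold jointly, rather than fixing one and tuning the other, is the point the abstract emphasizes: the best threshold generally depends on how the model scores are combined.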

39 pages, 7869 KB  
Article
Research on an Ultra-Short-Term Wind Power Forecasting Model Based on Multi-Scale Decomposition and Fusion Framework
by Daixuan Zhou, Yan Jia, Guangchen Liu, Junlin Li, Kaile Xi, Zhichao Wang and Xu Wang
Symmetry 2026, 18(2), 253; https://doi.org/10.3390/sym18020253 - 30 Jan 2026
Abstract
Accurate wind power prediction is of great significance for the dispatch, security, and stable operation of energy systems. It helps enhance the symmetry and coordination between the highly stochastic and volatile nature of the power generation supply side and the stringent requirements for stability and power quality on the grid demand side. To further enhance the accuracy of ultra-short-term wind power forecasting, this paper proposes a novel prediction framework based on multi-layer data decomposition, reconstruction, and a combined prediction model. A multi-stage decomposition and reconstruction technique is first employed to significantly reduce noise interference: the Sparrow Search Algorithm (SSA) is utilized to optimize the parameters for an initial Variational Mode Decomposition (VMD), followed by a secondary decomposition of the high-frequency components using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN). The resulting components are then reconstructed based on Sample Entropy (SE), effectively improving the quality of the input data. Subsequently, a hybrid prediction model named IMGWO-BiTCN-BiGRU is constructed to extract spatiotemporal bidirectional features from the input sequences. Finally, simulation experiments are conducted using actual measurement data from the Sotavento wind farm in Spain. The results demonstrate that the proposed hybrid model outperforms benchmark models across all evaluation metrics, validating its effectiveness in improving forecasting accuracy and stability. Full article
16 pages, 5297 KB  
Article
Human Activities and Climate Change Accelerate the Spread Risk of Hyphantria cunea in China
by Mu Duan, Jing Ning, Gejiao Wang, Zhaocheng Xu, Shengming Li, Zhen Zhang, Longwa Zhang and Lilin Zhao
Insects 2026, 17(2), 154; https://doi.org/10.3390/insects17020154 - 30 Jan 2026
Abstract
Anthropogenic activities and climate change have accelerated biological invasions, leading to profound ecological, economic, social, and health impacts. The invasive species fall webworm (Hyphantria cunea) has been reported to have outbreaks in areas with climate anomalies and human settlements in recent years, highlighting the necessity to explore the species’ suitable habitat and associated future changes. We built an ensemble species distribution model using Random Forest, MaxEnt, and Support Vector Machine, achieving excellent predictive performance (AUC = 0.996). Our results identify human settlement density as the dominant driving factor, with a contribution > 50%, far exceeding climatic and forest structure variables. Therefore, densely urbanized regions such as Beijing–Tianjin–Hebei, the Liaodong Peninsula, and the North China Plain comprise the current highly suitable areas. Future climate projections suggest a continued expansion of the suitable habitat for H. cunea, with the most pronounced growth expected under the high-emission pathway (SSP5-8.5), where human activity is greatest. Such a correlation indicates that highly urbanized regions should be given priority for corresponding monitoring and control measures. As climate warming continues, northeastern China will face escalating invasion risks. Conversely, some regions within the Yangtze River Delta may become less suitable for the habitation of H. cunea. These findings provide insightful guidance for region-specific surveillance, quarantine measures, and the precision management of H. cunea in China.
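The ensemble species distribution model described above can be approximated with a soft-voting classifier that averages member probabilities. This is a sketch under stated assumptions: MaxEnt has no scikit-learn implementation, so only the Random Forest and SVM members appear, and the synthetic presence/absence data stands in for real occurrence records with climate, forest-structure, and settlement-density predictors.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic presence/absence data standing in for occurrence records
# (real features would be climate, forest, and settlement layers).
X, y = make_classification(n_samples=400, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages per-class probabilities across members, the
# simplest form of the ensembling the abstract describes.
ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)

# AUC on held-out cells, the same metric the study reports.
auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
```

The averaged probability surface, evaluated over gridded predictors, would then be thresholded into suitability classes; the study's actual weighting scheme and MaxEnt member are not reproduced here.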
(This article belongs to the Special Issue Invasive Pest Management and Climate Change—2nd Edition)
21 pages, 6669 KB  
Article
Adaptive Time-Lagged Ensemble for Short-Range Streamflow Prediction Using WRF-Hydro and LDAPS
by Yaewon Lee, Bomi Kim, Hong Tae Kim and Seong Jin Noh
Water 2026, 18(3), 356; https://doi.org/10.3390/w18030356 - 30 Jan 2026
Abstract
This study evaluates a time-lagged ensemble averaging strategy to improve the accuracy and robustness of short-range streamflow point forecasts when hydrological simulations are driven by deterministic numerical weather prediction (NWP) forcing. We implemented WRF-Hydro in standalone mode for the Geumho River basin, South Korea, using Local Data Assimilation and Prediction System (LDAPS) forecasts initialized every 6 h with lead times up to 48 h. Time-lagged ensembles were constructed by averaging overlapping WRF-Hydro predictions from successive LDAPS initializations. Across two contrasting flood-producing storms, ensemble-mean forecasts consistently reduced lead-time-dependent skill degradation relative to single-initialization forecasts; the event-wise median Nash–Sutcliffe efficiency at the downstream gauge improved from 0.39 to 0.81 at 48 h (Event 2020) and from 0.48 to 0.85 at 24 h (Event 2022), while RMSE decreased by up to 48%. The most effective ensemble window varied with storm evolution and forecast horizon, indicating additional gains from adaptive time-lag selection. Overall, time-lagged ensemble averaging provides a practical, low-cost post-processing approach to enhance operational short-range streamflow prediction with NWP forcings.
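The core post-processing idea, averaging overlapping forecasts from successive initializations at each valid time, can be sketched as below. The data layout (a dict of initialization time to {valid time: forecast value}) and the `window` parameter for selecting the most recent initializations are illustrative assumptions, not the study's actual WRF-Hydro/LDAPS interface.

```python
import numpy as np
from collections import defaultdict

def time_lagged_ensemble(runs, window):
    """Average overlapping forecasts that are valid at the same time.

    runs:   dict mapping initialization time (h) to a dict of
            {valid_time: forecast_value}, e.g. runs initialized every
            6 h with lead times up to 48 h.
    window: number of most recent initializations to include.
    """
    recent = sorted(runs)[-window:]
    pooled = defaultdict(list)
    for init in recent:
        for valid, value in runs[init].items():
            pooled[valid].append(value)
    # Ensemble mean at each valid time; members = however many of the
    # selected runs cover that time.
    return {valid: float(np.mean(vals))
            for valid, vals in sorted(pooled.items())}
```

For example, two runs initialized at 0 h and 6 h both cover the 12 h valid time, so with `window=2` the forecast there is the mean of the two members, while times covered by only one run fall back to that single forecast. The study's adaptive variant would choose `window` per storm and lead time rather than fixing it.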
(This article belongs to the Special Issue Innovations in Hydrology: Streamflow and Flood Prediction)