Search Results (203)

Search Parameters:
Keywords = reliable ensemble averaging

25 pages, 727 KB  
Article
Migraine and Epilepsy Discrimination Using DTCWT and Random Subspace Ensemble Classifier
by Tuba Nur Subasi and Abdulhamit Subasi
Mach. Learn. Knowl. Extr. 2026, 8(2), 35; https://doi.org/10.3390/make8020035 - 4 Feb 2026
Viewed by 46
Abstract
Migraine and epilepsy are common neurological disorders that share overlapping symptoms, such as visual disturbances and altered consciousness, making accurate diagnosis challenging. Although their underlying mechanisms differ, both conditions involve recurrent irregular brain activity, and traditional EEG-based diagnosis relies heavily on clinical interpretation, which may be subjective and insufficient for clear differentiation. To address this challenge, this study introduces an automated EEG classification framework combining Dual Tree Complex Wavelet Transform (DTCWT) for feature extraction with a Random Subspace Ensemble Classifier for multi-class discrimination. EEG data recorded under photic and nonphotic stimulation were analyzed to capture both temporal and frequency characteristics. DTCWT proved effective in modeling the non-stationary nature of EEG signals and extracting condition-specific features, while the ensemble classifier improved generalization by training multiple models on diverse feature subsets. The proposed system achieved an average accuracy of 99.50%, along with strong F-measure, AUC, and Kappa scores. Notably, although previous studies suggest heightened EEG activity in migraine patients during flash stimulation, findings here indicate that flash stimulation alone does not reliably distinguish migraine from epilepsy. Overall, this research highlights the promise of advanced signal processing and machine learning techniques in enhancing diagnostic precision for complex neurological disorders. Full article
(This article belongs to the Section Learning)
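The random subspace idea used in this paper (each base learner trained on a different random feature subset, no row resampling) can be sketched with scikit-learn. The synthetic features below are only a stand-in for the paper's DTCWT coefficients, which are not public; this is an illustrative sketch, not the authors' implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy 3-class problem standing in for DTCWT feature vectors.
X, y = make_classification(n_samples=600, n_features=40, n_informative=12,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random subspace ensemble: bootstrap=False keeps all rows, while
# max_features=0.5 gives each tree a random half of the features.
clf = BaggingClassifier(DecisionTreeClassifier(random_state=0),
                        n_estimators=50, max_features=0.5,
                        bootstrap=False, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Disabling the bootstrap is what distinguishes a random subspace ensemble from ordinary bagging: diversity comes entirely from the feature subsets.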
22 pages, 2833 KB  
Article
A Hybrid HOG-LBP-CNN Model with Self-Attention for Multiclass Lung Disease Diagnosis from CT Scan Images
by Aram Hewa, Jafar Razmara and Jaber Karimpour
Computers 2026, 15(2), 93; https://doi.org/10.3390/computers15020093 - 1 Feb 2026
Viewed by 111
Abstract
Resource-limited settings continue to face challenges in identifying COVID-19, bacterial pneumonia, viral pneumonia, and normal lung conditions because of overlapping CT appearances and inter-observer variability. We propose a hybrid deep learning architecture that combines hand-crafted descriptors (Histogram of Oriented Gradients, Local Binary Patterns) with a 20-layer Convolutional Neural Network equipped with dual self-attention. The handcrafted features were classified with Support Vector Machines, and ensemble averaging was used to integrate their outputs with the CNN's. A confidence threshold of 0.7 was used to flag suspicious cases for manual review. The model was trained on a balanced dataset of 14,000 chest CT scans (3500 per class) and cross-validated five-fold on a patient-wise basis. It achieved 97.43% test accuracy and a macro F1-score of 0.97, a statistically significant improvement over a standalone CNN (92.0%), ResNet-50 (90.0%), a multiscale CNN (94.5%), and an ensemble CNN (96.0%). The self-attention module, which targets diagnostically salient lung regions, contributed a further 2–3% improvement. Only 5% of predictions fell below the confidence threshold, indicating reliability and clinical usefulness. The framework provides an interpretable and scalable method for diagnosing multiclass lung disease, particularly suited to deployment in healthcare settings with limited resources. Future work will involve multi-center validation, model optimization, and greater interpretability for real-world use. Full article
(This article belongs to the Special Issue AI in Bioinformatics)
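The ensemble-averaging and confidence-gating step described here is easy to sketch: average the two models' class-probability matrices and flag rows whose top probability falls below the paper's 0.7 threshold. The probability values below are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

def ensemble_average(p_cnn, p_svm, threshold=0.7):
    """Average two models' class-probability matrices; flag low-confidence
    rows (max averaged probability below threshold) for manual review."""
    p = (np.asarray(p_cnn) + np.asarray(p_svm)) / 2.0
    pred = p.argmax(axis=1)
    review = p.max(axis=1) < threshold
    return pred, review

# Hypothetical probabilities for 3 scans over 4 classes.
p_cnn = np.array([[0.90, 0.05, 0.03, 0.02],
                  [0.40, 0.35, 0.15, 0.10],
                  [0.05, 0.05, 0.10, 0.80]])
p_svm = np.array([[0.80, 0.10, 0.05, 0.05],
                  [0.30, 0.40, 0.20, 0.10],
                  [0.10, 0.10, 0.10, 0.70]])
pred, review = ensemble_average(p_cnn, p_svm)
# Row 1's averaged maximum (0.375) is below 0.7, so it is flagged.
```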

21 pages, 3489 KB  
Article
A Novel Reservoir Ensemble Forecasting Method Based on Constrained Multi-Model Weight Optimization
by Yinuo Gao, Xu Yang and Shuai Zhou
Water 2026, 18(3), 327; https://doi.org/10.3390/w18030327 - 28 Jan 2026
Viewed by 129
Abstract
Accurate runoff forecasting is vital yet challenged by the increasing non-stationarity of hydrological systems, which often exceeds the capacity of traditional single models. Ensemble forecasting, as an effective approach, integrates information from multiple models to enhance forecasting performance and assess uncertainty. However, existing methods such as Bayesian Model Averaging (BMA) still have limitations in dealing with complex hydrological scenarios, particularly in the construction and optimization of forecast intervals. This paper proposes a novel hydrological ensemble interval forecasting method based on Constrained Multi-Model Weight Optimization (CMWO). CMWO utilizes a set of heterogeneous deterministic models to generate members, assigns dynamic optimization weight intervals to enhance flexibility, and employs a multi-objective framework to minimize interval width and errors subject to a ≥95% coverage constraint. Taking the Huangjinxia Reservoir in the upper reaches of the Hanjiang River as a case study, the CMWO method was systematically applied and evaluated for decadal-scale runoff forecasting and comprehensively compared with widely used BMA methods and individual models. The results show that CMWO significantly outperforms these baselines in point forecast accuracy (measured by RMSE, KGE, etc.) and interval forecast quality (evaluated by PICP, PIAW, CRPS, etc.), especially in generating narrower, more informative prediction intervals while ensuring high reliability. The CMWO method proposed in this study provides a competitive new tool for the effective management of forecasting uncertainty in complex hydrological systems. Full article

24 pages, 6667 KB  
Article
Data-Driven Forecasting of Electricity Prices in Chile Using Machine Learning
by Ricardo León, Guillermo Ramírez, Camilo Cifuentes, Samuel Vergara, Roberto Aedo-García, Francisco Ramis Lanyon and Rodrigo J. Villalobos San Martin
Appl. Sci. 2026, 16(3), 1318; https://doi.org/10.3390/app16031318 - 28 Jan 2026
Viewed by 98
Abstract
This study proposes and evaluates a data-driven framework for short-term System Marginal Price (SMP) forecasting in the Chilean National Electric System (NES), a power system characterized by high penetration of variable renewable generation and persistent transmission congestion. Using publicly available hourly operational data for 2024, multiple machine learning regressors, including Linear Regression (base case), Bayesian Ridge, Automatic Relevance Determination, Decision Trees, Random Forests, and Support Vector Regression, are implemented under a node-specific modeling strategy. Two alternative approaches for predictor selection are compared: a system-wide methodology (M1) that exploits lagged SMP information from all network nodes, and a spatially filtered methodology (M2) that restricts SMP inputs to correlated subsystems identified through nodal correlation analysis. Model robustness is explicitly assessed by reserving January and July as out-of-sample test periods, capturing contrasting summer and winter operating conditions. Forecasting performance is analyzed for representative nodes located in the northern, central, and southern zones of the NES, which exhibit markedly different congestion levels and generation mixes. Results indicate that non-linear and ensemble models, particularly Random Forest and Support Vector Regression, provide the most accurate forecasts in well-connected areas, achieving mean absolute errors close to 10 USD/MWh. In contrast, forecast errors increase substantially in highly congested southern zones, reflecting the structural influence of transmission constraints on price formation. While average performance differences between M1 and M2 are modest, a paired Wilcoxon signed-rank test reveals statistically significant improvements with M2 in highly congested zones, where M2 yields lower absolute errors for most models, despite relying on fewer inputs.
These findings highlight the importance of congestion-aware feature selection for reliable price forecasting in renewable-intensive systems. Full article
(This article belongs to the Special Issue New Trends in Renewable Energy and Power Systems)
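The paired Wilcoxon signed-rank test used to compare the two predictor-selection methodologies is available directly in SciPy. The paired error samples below are synthetic, constructed so that the second methodology is slightly better on most hours, purely to show the test's mechanics.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Hypothetical paired absolute errors (USD/MWh) at one congested node;
# the M2-style errors are shifted slightly below the M1-style errors.
err_m1 = np.abs(rng.normal(12.0, 3.0, 500))
err_m2 = err_m1 - rng.normal(0.8, 0.5, 500)

# One-sided test: are the first methodology's paired errors larger?
stat, p = wilcoxon(err_m1, err_m2, alternative="greater")
```

A small p-value here indicates the paired differences are systematically positive, i.e. the second methodology's errors are significantly lower.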

16 pages, 3327 KB  
Article
EEMD-TiDE-Based Passenger Flow Prediction for Urban Rail Transit
by Dongcai Cheng, Yuheng Zhang and Haijun Li
Electronics 2026, 15(3), 529; https://doi.org/10.3390/electronics15030529 - 26 Jan 2026
Viewed by 164
Abstract
Urban rail transit networks in developing countries are rapidly expanding, entering a networked operational phase where accurate passenger flow forecasting is crucial for optimizing vehicle scheduling, resource allocation, and transportation efficiency. In the short term, accurate real-time forecasting enables the dynamic adjustment of train headways and crew deployment, reducing average passenger waiting times during peak hours and alleviating platform overcrowding; in the long term, reliable trend predictions support strategic planning, including capacity expansion, station retrofitting, and energy management. This paper proposes a novel hybrid forecasting model, EEMD-TiDE, that combines improved Ensemble Empirical Mode Decomposition (EEMD) with a Time Series Dense Encoder (TiDE) to enhance prediction accuracy. The EEMD algorithm effectively overcomes mode mixing issues in traditional EMD by incorporating white noise perturbations, decomposing raw passenger flow data into physically meaningful Intrinsic Mode Functions (IMFs). At the same time, the TiDE model, a linear encoder–decoder architecture, efficiently handles multi-scale features and covariates without the computational overhead of self-attention mechanisms. Experimental results using Xi’an Metro passenger flow data (2017–2019) demonstrate that EEMD-TiDE significantly outperforms baseline models. This study provides a robust solution for urban rail transit passenger flow forecasting, supporting sustainable urban development. Full article
(This article belongs to the Section Computer Science & Engineering)
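EEMD's key trick, averaging decompositions of many white-noise-perturbed copies of the signal so that the injected noise cancels, can be illustrated without a full EMD implementation. The sketch below substitutes a moving-average trend/residual split for EMD sifting; this is an assumption made for brevity, and a real EEMD would extract full Intrinsic Mode Functions instead.

```python
import numpy as np

def noise_assisted_split(x, n_trials=100, noise_std=0.2, win=24, seed=0):
    """Illustrative stand-in for EEMD: add white-noise realizations, split each
    noisy copy into trend + residual with a moving average, then average the
    trends over the ensemble so the injected noise cancels."""
    rng = np.random.default_rng(seed)
    kernel = np.ones(win) / win
    trends = []
    for _ in range(n_trials):
        noisy = x + rng.normal(0.0, noise_std * x.std(), x.size)
        trends.append(np.convolve(noisy, kernel, mode="same"))
    trend = np.mean(trends, axis=0)
    return trend, x - trend

# Two weeks of hypothetical hourly passenger counts with a daily cycle.
t = np.arange(24 * 14)
x = 100 + 40 * np.sin(2 * np.pi * t / 24) \
    + np.random.default_rng(1).normal(0, 5, t.size)
trend, resid = noise_assisted_split(x)
```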

23 pages, 3441 KB  
Article
Integrating Large Language Models with Deep Learning for Breast Cancer Treatment Decision Support
by Heeseung Park, Serin Ok, Taewoo Kang and Meeyoung Park
Diagnostics 2026, 16(3), 394; https://doi.org/10.3390/diagnostics16030394 - 26 Jan 2026
Viewed by 289
Abstract
Background/Objectives: Breast cancer is one of the most common malignancies, but its heterogeneous molecular subtypes make treatment decision-making complex and patient-specific. Both pathology reports and the electronic medical record (EMR) play a critical role in appropriate treatment decisions. This study aimed to develop an integrated clinical decision support system (CDSS) that combines large language model (LLM)-based pathology analysis with deep learning-based treatment prediction to support standardized and reliable decision-making. Methods: Real-world data (RWD) obtained from a cohort of 5015 patients diagnosed with breast cancer were analyzed. Meta-Llama-3-8B-Instruct automatically extracted the TNM stage and tumor size from the pathology reports, which were then integrated with EMR variables. A multi-label classification of 16 treatment combinations was performed using six models: Decision Tree, Random Forest, GBM, XGBoost, DNN, and Transformer. Performance was evaluated using accuracy, macro/micro-averaged precision, recall, F1 score, and AUC. Results: Using combined LLM-extracted pathology and EMR features, GBM and XGBoost achieved the highest and most stable predictive performance across all feature subset configurations (macro-F1 ≈ 0.88–0.89; AUC = 0.867–0.868). Both models demonstrated strong discrimination ability and consistent recall and precision, highlighting their robustness for multi-label classification in real-world settings. Decision Tree and Random Forest showed moderate but reliable performance (macro-F1 = 0.84–0.86; AUC = 0.849–0.821), indicating their applicability despite lower predictive capability. By contrast, the DNN and Transformer models produced comparatively lower scores (macro-F1 = 0.74–0.82; AUC = 0.780–0.757), especially when using the full feature set, suggesting limited suitability for structured clinical data without strong contextual dependencies.
These findings indicate that gradient-boosting ensemble approaches are better optimized for tabular medical data and generate more clinically reliable treatment recommendations. Conclusions: The proposed artificial intelligence-based CDSS improves accuracy and consistency in breast cancer treatment decision support by integrating automated pathology interpretation with deep learning, demonstrating its potential utility in real-world cancer care. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
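Multi-label treatment prediction with a gradient-boosting model, as evaluated here, is commonly set up by fitting one boosted model per label. The sketch below uses scikit-learn's `MultiOutputClassifier` on synthetic data standing in for the EMR plus LLM-extracted pathology features; it illustrates the setup only, not the authors' pipeline.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Toy multi-label problem: 5 binary "treatment" labels per patient.
X, Y = make_multilabel_classification(n_samples=400, n_features=20,
                                      n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# One gradient-boosting classifier per treatment label.
clf = MultiOutputClassifier(GradientBoostingClassifier(random_state=0))
clf.fit(X_tr, Y_tr)
macro_f1 = f1_score(Y_te, clf.predict(X_te), average="macro")
```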

16 pages, 6135 KB  
Article
Interlayer Identification Method Based on SMOTE and Ensemble Learning
by Shengqiang Luo, Bing Yu, Tianrui Zhang, Junqing Rong, Qing Zeng, Tingting Feng and Jianpeng Zhao
Processes 2026, 14(2), 351; https://doi.org/10.3390/pr14020351 - 19 Jan 2026
Viewed by 168
Abstract
The interlayer is a key geological factor that regulates reservoir heterogeneity and remaining oil distribution, and its accurate identification directly affects reservoir development. To address the strong subjectivity of traditional identification methods and the insufficient recognition accuracy of single machine learning models under imbalanced sample distributions, this study focuses on three types of interlayers (argillaceous, calcareous, and petrophysical) in the W Oilfield and proposes an accurate identification method integrating the Synthetic Minority Over-Sampling Technique (SMOTE) with heterogeneous ensemble learning. First, a data set pairing interlayer types with logging responses is established. After normalization to remove dimensional effects, the sensitive logging curves are selected using the crossplot method, mutual information, and effect analysis. SMOTE is used to balance the sample distribution, mitigating identification bias against minority interlayer classes. Then, a heterogeneous ensemble model composed of the k-nearest neighbor algorithm (KNN), decision tree (DT), and support vector machine (SVM) is constructed, and the final recognition result is produced by a voting strategy. Experiments show that SMOTE improves the average accuracy of a single model by 3.9% and effectively reduces the model bias caused by sample imbalance. The heterogeneous ensemble model raises the overall recognition accuracy to 92.6%, significantly enhances the ability to distinguish argillaceous and petrophysical interlayers, and improves the F1-Score simultaneously. The method offers high accuracy and reliable performance, providing robust support for interlayer identification in reservoir geological modeling and for tapping remaining oil potential, and demonstrating practical application value. Full article
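SMOTE's core operation, synthesizing minority-class samples by interpolating between a minority sample and one of its k nearest minority neighbours, is simple enough to sketch in NumPy. The minority "logging response" data below is random and purely illustrative; production code would normally use imbalanced-learn's `SMOTE` instead.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: each synthetic point lies on the segment between
    a random minority sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # never pick a point as its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per sample
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# 20 hypothetical minority-class samples with 4 log features.
X_minority = np.random.default_rng(1).normal(0, 1, (20, 4))
synthetic = smote(X_minority, n_new=30)
```

Because every synthetic point is a convex combination of two minority samples, the new points stay inside the minority class's feature envelope.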

24 pages, 3303 KB  
Article
Deep Learning-Based Human Activity Recognition Using Binary Ambient Sensors
by Qixuan Zhao, Alireza Ghasemi, Ahmed Saif and Lila Bossard
Electronics 2026, 15(2), 428; https://doi.org/10.3390/electronics15020428 - 19 Jan 2026
Viewed by 253
Abstract
Human Activity Recognition (HAR) has become crucial across various domains, including healthcare, smart homes, and security systems, owing to the proliferation of Internet of Things (IoT) devices. Several Machine Learning (ML) techniques, including Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM), have been proposed for HAR. However, they are still deficient in addressing the challenges of noisy features and insufficient data. This paper introduces a novel approach to tackle these two challenges, employing a Deep Learning (DL) Ensemble-Based Stacking Neural Network (SNN) combined with Generative Adversarial Networks (GANs) for HAR based on ambient sensors. Our proposed deep learning ensemble-based approach outperforms traditional ML techniques and enables robust and reliable recognition of activities in real-world scenarios. Comprehensive experiments conducted on six benchmark datasets from the CASAS smart home project demonstrate that the proposed stacking framework achieves superior accuracy on five out of six datasets when compared to literature-reported state-of-the-art baselines, with improvements ranging from 3.36 to 39.21 percentage points and an average gain of 13.28 percentage points. Although the baseline marginally outperforms the proposed models on one dataset (Aruba) in terms of accuracy, this exception does not alter the overall trend of consistent performance gains across diverse environments. Statistical significance of these improvements is further confirmed using the Wilcoxon signed-rank test. Moreover, the ASGAN-augmented models consistently improve macro-F1 performance over the corresponding baselines on five out of six datasets, while achieving comparable performance on the Milan dataset. The proposed GAN-based method further improves the activity recognition accuracy by a maximum of 4.77 percentage points, and an average of 1.28 percentage points compared to baseline models. 
By combining ensemble-based DL with GAN-generated synthetic data, a more robust and effective solution for ambient HAR addressing both accuracy and data imbalance challenges in real-world smart home settings is achieved. Full article
(This article belongs to the Section Computer Science & Engineering)
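The stacking pattern described here (base learners whose out-of-fold predictions feed a meta-learner) maps directly onto scikit-learn's `StackingClassifier`. The base learners and synthetic data below are assumptions for illustration; the paper's SNN stacks deep models over CASAS sensor windows.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for windowed binary ambient-sensor features.
X, y = make_classification(n_samples=500, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners' cross-validated predictions train the logistic meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```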

32 pages, 10741 KB  
Article
A Robust Deep Learning Ensemble Framework for Waterbody Detection Using High-Resolution X-Band SAR Under Data-Constrained Conditions
by Soyeon Choi, Seung Hee Kim, Son V. Nghiem, Menas Kafatos, Minha Choi, Jinsoo Kim and Yangwon Lee
Remote Sens. 2026, 18(2), 301; https://doi.org/10.3390/rs18020301 - 16 Jan 2026
Viewed by 235
Abstract
Accurate delineation of inland waterbodies is critical for applications such as hydrological monitoring, disaster preparedness and response, and environmental management. While optical satellite imagery is hindered by cloud cover or low-light conditions, Synthetic Aperture Radar (SAR) provides consistent surface observations regardless of weather or illumination. This study introduces a deep learning-based ensemble framework for precise inland waterbody detection using high-resolution X-band Capella SAR imagery. To improve the discrimination of water from spectrally similar non-water surfaces (e.g., roads and urban structures), an 8-channel input configuration was developed by incorporating auxiliary geospatial features such as height above nearest drainage (HAND), slope, and land cover classification. Four advanced deep learning segmentation models—Proportional–Integral–Derivative Network (PIDNet), Mask2Former, Swin Transformer, and Kernel Network (K-Net)—were systematically evaluated via cross-validation. Their outputs were combined using a weighted average ensemble strategy. The proposed ensemble model achieved an Intersection over Union (IoU) of 0.9422 and an F1-score of 0.9703 in blind testing, indicating high accuracy. While the ensemble gains over the best single model (IoU: 0.9371) were moderate, the enhanced operational reliability through balanced Precision–Recall performance provides significant practical value for flood and water resource monitoring with high-resolution SAR imagery, particularly under data-constrained commercial satellite platforms. Full article
(This article belongs to the Section AI Remote Sensing)
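A weighted average ensemble for segmentation reduces to fusing per-model probability maps with normalized weights and thresholding the result. The tiny random maps and IoU-style weights below are hypothetical, showing only the fusion arithmetic, not the paper's models.

```python
import numpy as np

def weighted_ensemble(prob_maps, weights, threshold=0.5):
    """Weighted average of per-model water-probability maps -> binary mask."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                               # normalize to sum to 1
    fused = np.tensordot(w, np.asarray(prob_maps), axes=1)
    return fused >= threshold

# Hypothetical probability maps from four models on a 4x4 tile,
# weighted by (made-up) per-model validation scores.
maps = np.random.default_rng(0).random((4, 4, 4))
mask = weighted_ensemble(maps, weights=[0.94, 0.93, 0.92, 0.90])
```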

27 pages, 11028 KB  
Article
Integration of Satellite-Derived Meteorological Inputs into SWAT, XGBoost, WGAN, and Hybrid Modelling Frameworks for Climate Change-Driven Streamflow Simulation in a Data-Scarce Region
by Sefa Nur Yeşilyurt and Gülay Onuşluel Gül
Water 2026, 18(2), 239; https://doi.org/10.3390/w18020239 - 16 Jan 2026
Viewed by 293
Abstract
The pressure of climate change on water resources has made the development of reliable hydrological models increasingly important, especially for data-scarce regions. However, the limited availability of ground-based observations considerably affects the accuracy of models developed using these inputs and limits the ability to investigate future hydrological behavior. Satellite-based data sources have emerged as an alternative to address this challenge and have received significant attention. However, the transferability of these datasets across different model classes has not been widely explored. This paper evaluates the transferability of satellite-derived inputs across eleven models, including a process-based model (SWAT), data-driven methods (XGBoost and WGAN), and hybrid model structures that feed SWAT outputs into AI models. SHAP has been applied to overcome the black-box limitations of AI models and gain insights into fundamental hydrometeorological processes. In addition, uncertainty analysis was performed for all models, enabling a more comprehensive evaluation of performance. The results indicate that hybrid models combining SWAT with WGAN can achieve better predictive accuracy than the SWAT model based on ground observations. While the baseline SWAT model achieved satisfactory performance during the validation period (NSE ≈ 0.86, KGE ≈ 0.80), the hybrid SWAT + WGAN framework improved simulation skill, reaching NSE ≈ 0.90 and KGE ≈ 0.89 during validation. Models forced with satellite-derived meteorological inputs also performed as well as those forced with station-based observations, validating the feasibility of satellite products as alternative data sources. The future hydrological status of the basin was assessed based on the best-performing hybrid model and CMIP6 climate projections, showing a clear drought signal in the flows and long-term reductions in average flows reaching up to 58%.
Overall, the findings indicate that the proposed framework provides a consistent approach for data-scarce basins. Future applications may benefit from integrating spatio-temporal learning frameworks and ensemble-based uncertainty quantification to enhance robustness under changing climate conditions. Full article
(This article belongs to the Special Issue Application of Hydrological Modelling to Water Resources Management)
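The NSE and KGE skill scores quoted in this abstract have standard closed forms; a minimal NumPy sketch is below. Note the KGE shown is the 2012 variant, which uses the ratio of coefficients of variation for the variability term; the paper may use the original 2009 formulation instead.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; 0 matches the mean benchmark."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency (2012 variant): combines correlation r,
    bias ratio beta, and variability ratio gamma (CV_sim / CV_obs)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()
    gamma = (sim.std() / sim.mean()) / (obs.std() / obs.mean())
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)

obs = np.array([10.0, 14.0, 20.0, 35.0, 18.0, 12.0])  # hypothetical flows
```

A perfect simulation (sim equal to obs) scores 1.0 on both metrics.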

19 pages, 3746 KB  
Article
Fault Diagnosis and Classification of Rolling Bearings Using ICEEMDAN–CNN–BiLSTM and Acoustic Emission
by Jinliang Li, Haoran Sheng, Bin Liu and Xuewei Liu
Sensors 2026, 26(2), 507; https://doi.org/10.3390/s26020507 - 12 Jan 2026
Viewed by 311
Abstract
Reliable operation of rolling bearings is essential for mechanical systems. Acoustic emission (AE) offers a promising approach for bearing fault detection because of its high-frequency response and strong noise-suppression capability. This study proposes an intelligent diagnostic method that combines an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and a convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture. The method first applies wavelet denoising to AE signals, then uses ICEEMDAN decomposition followed by kurtosis-based screening to extract key fault components and construct feature vectors. Subsequently, a CNN automatically learns deep time–frequency features, and a BiLSTM captures temporal dependencies among these features, enabling end-to-end fault identification. Experiments were conducted on a bearing acoustic emission dataset comprising 15 operating conditions, five fault types, and three rotational speeds; comparative model tests were also performed. Results indicate that ICEEMDAN effectively suppresses mode mixing (average mixing rate 6.08%), and the proposed model attained an average test-set recognition accuracy of 98.00%, significantly outperforming comparative models. Moreover, the model maintained 96.67% accuracy on an independent validation set, demonstrating strong generalization and practical application potential. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
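The kurtosis-based screening step, keeping the decomposition components with the most impulsive (spiky) content, can be sketched with SciPy. The three synthetic components below stand in for ICEEMDAN IMFs: a sparse impulsive train (fault-like), a smooth sine, and white noise.

```python
import numpy as np
from scipy.stats import kurtosis

def select_imfs(imfs, top_n=3):
    """Rank components by excess kurtosis (impulsive fault signatures raise
    kurtosis) and return the indices of the top_n most informative ones."""
    scores = np.array([kurtosis(c) for c in imfs])
    keep = np.argsort(scores)[::-1][:top_n]
    return keep, scores

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
impulsive = np.sin(2 * np.pi * 50 * t) * (rng.random(t.size) < 0.01)  # sparse spikes
smooth = np.sin(2 * np.pi * 5 * t)                                    # kurtosis ~ -1.5
noise = rng.normal(0, 1, t.size)                                      # kurtosis ~ 0
keep, scores = select_imfs([impulsive, smooth, noise], top_n=1)
```

The mostly-zero impulsive component has by far the highest kurtosis, so it is the one retained.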

26 pages, 60486 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Viewed by 244
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain its engineering applications. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralized areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor.
Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones, rather than statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones, even under extreme thresholds. It suggests that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction. Full article
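The quantile-based mapping step, using the ensemble median as the central predictor and the inter-quantile spread as an uncertainty layer, is straightforward in NumPy. The prediction stack below is random and hypothetical, standing in for the ten best models' deformation-rate grids; the −18 mm/yr susceptibility cutoff is likewise an invented value for illustration.

```python
import numpy as np

# Hypothetical deformation-rate predictions (mm/yr) from 10 models on a 5x5 grid.
preds = np.random.default_rng(0).normal(-20.0, 4.0, size=(10, 5, 5))

central = np.median(preds, axis=0)                 # ensemble median per cell
lo, hi = np.quantile(preds, [0.05, 0.95], axis=0)  # 5th/95th percentile maps
spread = hi - lo                                   # ensemble-spread uncertainty
susceptible = central < -18.0                      # crude thresholded map
```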

21 pages, 1360 KB  
Article
A Real-Time Consensus-Free Accident Detection Framework for Internet of Vehicles Using Vision Transformer and EfficientNet
by Zineb Seghir, Lyamine Guezouli, Kamel Barka, Djallel Eddine Boubiche, Homero Toral-Cruz and Rafael Martínez-Peláez
AI 2026, 7(1), 4; https://doi.org/10.3390/ai7010004 - 22 Dec 2025
Viewed by 718
Abstract
Objectives: Traffic accidents cause severe social and economic impacts, demanding fast and reliable detection to minimize secondary collisions and improve emergency response. However, existing cloud-dependent detection systems often suffer from high latency and limited scalability, motivating the need for an edge-centric and consensus-free accident detection framework in IoV environments. Methods: This study presents a real-time accident detection framework tailored for Internet of Vehicles (IoV) environments. The proposed system forms an integrated IoV architecture combining on-vehicle inference, RSU-based validation, and asynchronous cloud reporting. The system integrates a lightweight ensemble of Vision Transformer (ViT) and EfficientNet models deployed on vehicle nodes to classify video frames. Accident alerts are generated only when both models agree (vehicle-level ensemble consensus), ensuring high precision. These alerts are transmitted to nearby Road Side Units (RSUs), which validate the events and broadcast safety messages without requiring inter-vehicle or inter-RSU consensus. Structured reports are also forwarded asynchronously to the cloud for long-term model retraining and risk analysis. Results: Evaluated on the CarCrash and CADP datasets, the framework achieves an F1-score of 0.96 with average decision latency below 60 ms, corresponding to an overall accuracy of 98.65% and demonstrating measurable improvement over single-model baselines. Conclusions: By combining on-vehicle inference, edge-based validation, and optional cloud integration, the proposed architecture offers both immediate responsiveness and adaptability, contrasting with traditional cloud-dependent approaches. Full article
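The vehicle-level ensemble consensus described above — an alert raised only when both models agree — reduces at its core to a logical AND over two thresholded outputs. A minimal sketch with hypothetical per-frame accident probabilities (in the real system these would come from ViT and EfficientNet inference on video frames):

```python
import numpy as np

def consensus_alert(p_vit, p_eff, threshold=0.5):
    """Raise an accident alert for a frame only when BOTH models classify
    it as an accident (probability at or above the threshold)."""
    p_vit = np.asarray(p_vit, dtype=float)
    p_eff = np.asarray(p_eff, dtype=float)
    return (p_vit >= threshold) & (p_eff >= threshold)

# hypothetical probabilities for four consecutive frames
p_vit = [0.9, 0.8, 0.3, 0.6]
p_eff = [0.95, 0.4, 0.2, 0.7]
print(consensus_alert(p_vit, p_eff))  # alert only where both models exceed 0.5
```

Requiring agreement trades recall for precision: a single model's false positive is suppressed unless the other model confirms it, which matches the abstract's emphasis on high-precision alerts before RSU validation.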

30 pages, 3641 KB  
Article
Modified EfficientNet-B0 Architecture Optimized with Quantum-Behaved Algorithm for Skin Cancer Lesion Assessment
by Abdul Rehman Altaf, Abdullah Altaf and Faizan Ur Rehman
Diagnostics 2025, 15(24), 3245; https://doi.org/10.3390/diagnostics15243245 - 18 Dec 2025
Viewed by 499
Abstract
Background/Objectives: Skin cancer is one of the most common diseases in the world; early and accurate detection yields a survival rate of more than 90%, whereas the mortality rate reaches almost 80% in cases of late diagnosis. Methods: A modified EfficientNet-B0 is developed based on mobile inverted bottleneck convolution with a squeeze-and-excitation approach. A 3 × 3 convolutional layer is used to capture low-level visual features, while the core features are extracted using a sequence of Mobile Inverted Bottleneck Convolution blocks with both 3 × 3 and 5 × 5 kernels. These blocks not only balance fine-grained extraction with broader contextual representation but also increase the network’s learning capacity while keeping the computational cost comparable. The proposed architecture’s hyperparameters and the extracted feature vectors of standard benchmark datasets (HAM10000, ISIC 2019 and MSLD v2.0) of dermoscopic images are optimized with the quantum-behaved particle swarm optimization algorithm (QBPSO). The merit function is formulated from the training loss, given as standard classification cross-entropy with label smoothing, together with the mean fitness value (mfval), average accuracy (mAcc), mean computational time (mCT) and other standard performance indicators. Results: Comprehensive scenario-based simulations were performed using the proposed framework on publicly available datasets, yielding an mAcc of 99.62% and 92.5%, an mfval of 2.912 × 10−10 and 1.7921 × 10−8, and an mCT of 501.431 s and 752.421 s for the HAM10000 and ISIC 2019 datasets, respectively. The results are compared with state-of-the-art pre-trained models such as EfficientNet-B4, RegNetY-320, ResNeXt-101, EfficientNetV2-M, VGG-16 and DeepLab V3, as well as reported techniques based on Mask R-CNN, Deep Belief Net, Ensemble CNN, SCDNet and FixMatch-LS, with accuracies ranging from 85% to 94.8%.
The reliability of the proposed architecture and the stability of QBPSO are examined through Monte Carlo simulation of 100 independent runs and their statistical analysis. Conclusions: The proposed framework reduces diagnostic errors and assists dermatologists in clinical decisions for improved patient outcomes, despite challenges such as data imbalance and limited interpretability. Full article
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)
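The training loss named in the abstract, cross-entropy with label smoothing, can be sketched as follows. This is a generic formulation (the smoothing mass eps is spread uniformly over the non-target classes), not the article's exact implementation:

```python
import numpy as np

def smoothed_cross_entropy(logits, labels, eps=0.1):
    """Cross-entropy with label smoothing: the target distribution puts
    (1 - eps) on the true class and eps/(K-1) on each of the other classes.
    (Other conventions, e.g. eps/K on all classes, also appear in practice.)"""
    logits = np.asarray(logits, dtype=float)
    n, k = logits.shape
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # smoothed one-hot targets
    targets = np.full((n, k), eps / (k - 1))
    targets[np.arange(n), labels] = 1.0 - eps
    return float(-(targets * log_probs).sum(axis=1).mean())
```

With eps = 0 this reduces to standard cross-entropy; a positive eps penalizes overconfident logits, which tends to regularize classifiers trained on imbalanced dermoscopic datasets.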

26 pages, 4154 KB  
Article
Establishment and Evaluation of an Ensemble Bias Correction Framework for the Short-Term Numerical Forecasting on Lower Atmospheric Ducts
by Huan Guo, Bo Wang, Jing Zou, Xiaofeng Zhao, Bin Wang, Zhijin Qiu, Hang Wang, Lu Liu, Xiaolei Liu and Hanyue Wang
J. Mar. Sci. Eng. 2025, 13(12), 2397; https://doi.org/10.3390/jmse13122397 - 17 Dec 2025
Viewed by 297
Abstract
Based on the COAWST (Coupled Ocean–Atmosphere–Wave–Sediment Transport) model, this study developed an atmospheric refractivity forecasting model incorporating ensemble bias correction by combining five bias correction algorithms with the Bayesian Model Averaging (BMA) method. Hindcast tests conducted over the Yellow Sea and Bohai Sea regions demonstrated that the ensemble bias correction enhanced both forecasting accuracy and adaptability. On the one hand, the corrected forecasting outperformed the original COAWST model in terms of mean error (ME), root mean square error (RMSE), and correlation coefficient (CC), with the RMSE reduced by approximately 20% below 3000 m altitude. On the other hand, the corrected forecasting reduced the uncertainty associated with the performance of different algorithms. In particular, during typhoon events, the corrected forecasting maintained stable bias characteristics across different height layers through dynamic weight adjustment. Throughout the hindcast period, the ME of the corrected forecasting was lower than that of any single bias correction algorithm. Moreover, compared with other ensemble methods, the corrected forecasting developed in this study achieved more flexible weight allocation through Bayesian optimization, resulting in lower ME. In addition, the corrected forecasting maintained an improvement of approximately 28% in bias reduction even at a 72 h forecasting lead time, demonstrating its robustness and reliability under complex weather conditions. Full article
(This article belongs to the Special Issue Artificial Intelligence and Its Application in Ocean Engineering)
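As a rough illustration of the BMA idea — weighting ensemble members by how well they fit observations, then combining them — here is a toy sketch. The full method estimates weights and variances jointly (typically by EM), and the member values below are hypothetical, not data from the study:

```python
import numpy as np

def bma_weights(members, obs, sigma=1.0):
    """Toy BMA-style weights: each member k receives weight proportional to
    its Gaussian likelihood of the observations over a training window.
    (The full BMA procedure fits sigma and the weights jointly, e.g. by EM.)"""
    members = np.asarray(members, dtype=float)  # shape (K, T): K members, T times
    obs = np.asarray(obs, dtype=float)          # shape (T,)
    # mean Gaussian log-likelihood per member, then softmax -> normalized weights
    ll = -0.5 * ((members - obs) ** 2 / sigma ** 2).mean(axis=1)
    w = np.exp(ll - ll.max())                   # subtract max for numerical stability
    return w / w.sum()

def bma_forecast(members, weights):
    """Weighted ensemble mean at each time step."""
    return np.asarray(weights) @ np.asarray(members, dtype=float)

# hypothetical refractivity corrections from three bias-correction algorithms
members = [[10.1, 10.3, 9.9], [12.0, 12.2, 11.8], [10.0, 10.2, 10.1]]
obs = [10.0, 10.2, 10.0]
w = bma_weights(members, obs)
print(w, bma_forecast(members, w))
```

The weights adapt to recent member performance — a strongly biased member (the second one here) is down-weighted automatically, which mirrors the dynamic weight adjustment the abstract credits for stable behavior during typhoon events.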
