Search Results (383)

Search Parameters:
Keywords = ensemble threshold

18 pages, 2702 KB  
Article
A Dual-Branch Ensemble Learning Method for Industrial Anomaly Detection: Fusion and Optimization of Scattering and PCA Features
by Jing Cai, Zhuo Wu, Runan Hua, Shaohua Mao, Yulun Zhang, Ran Guo and Ke Lin
Appl. Sci. 2026, 16(3), 1597; https://doi.org/10.3390/app16031597 - 5 Feb 2026
Abstract
Industrial visual anomaly detection remains challenging because practical inspection systems must achieve high detection accuracy while operating under highly imbalanced data, diverse defect patterns, limited computational resources, and increasing demands for interpretability. This work aims to develop a lightweight yet effective and explainable anomaly detection framework for industrial images in settings where a limited number of labeled anomalous samples are available. We propose a dual-branch feature-based supervised ensemble method that integrates complementary representations: a PCA branch to capture linear global structure and a scattering branch to model multi-scale textures. A heterogeneous pool of classical learners (SVM, RF, ET, XGBoost, and LightGBM) is trained on each feature branch, and stable probability outputs are obtained via stratified K-fold out-of-fold training, probability calibration, and a quantile-based threshold search. Decision-level fusion is then performed by stacking, where logistic regression, XGBoost, and LightGBM serve as meta-learners over the out-of-fold probabilities of the selected top-K base learners. Experiments on two public benchmarks (MVTec AD and BTAD) show that the proposed method substantially improves the best PCA-based single model, achieving relative F1_score gains of approximately 31% (MVTec AD) and 26% (BTAD), with maximum AUC values of about 0.91 and 0.96, respectively, under comparable inference complexity. Overall, the results demonstrate that combining high-quality handcrafted features with supervised ensemble fusion provides a practical and interpretable alternative/complement to heavier deep models for resource-constrained industrial anomaly detection, and future work will explore more category-adaptive decision strategies to further enhance robustness on challenging classes. Full article
(This article belongs to the Special Issue AI and Data-Driven Methods for Fault Detection and Diagnosis)
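For readers unfamiliar with the out-of-fold stacking and quantile-based threshold search described above, the following is a minimal scikit-learn sketch, not the authors' code: the feature branch, the two base learners shown (the paper's pool also includes SVM, XGBoost, and LightGBM), and the quantile grid are illustrative assumptions.

```python
# Minimal sketch: out-of-fold stacking with a quantile-based threshold search.
# X (features from one branch) and y (0 = normal, 1 = anomalous) are assumed inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import f1_score

def oof_stack(X, y, base_learners, n_splits=5):
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    # Out-of-fold probabilities of each base learner become the meta-features.
    meta_X = np.column_stack([
        cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
        for clf in base_learners
    ])
    meta = LogisticRegression(max_iter=1000).fit(meta_X, y)
    scores = meta.predict_proba(meta_X)[:, 1]
    # Quantile-based threshold search: candidate cut-offs taken from score quantiles.
    candidates = np.quantile(scores, np.linspace(0.5, 0.99, 50))
    best_t = max(candidates, key=lambda t: f1_score(y, scores >= t))
    return meta, best_t

base = [RandomForestClassifier(n_estimators=200, random_state=0),
        ExtraTreesClassifier(n_estimators=200, random_state=0)]
# meta_model, threshold = oof_stack(X, y, base)
```

In practice the threshold would be chosen on held-out folds rather than on the same out-of-fold scores, but the search pattern is the same.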

23 pages, 3997 KB  
Article
Assimilation of ICON/MIGHTI Wind Profiles into a Coupled Thermosphere/Ionosphere Model Using Ensemble Square Root Filter
by Meng Zhang, Xiong Hu, Yanan Zhang, Zhaoai Yan, Hongyu Liang, Junfeng Yang, Cunying Xiao and Cui Tu
Remote Sens. 2026, 18(3), 500; https://doi.org/10.3390/rs18030500 - 4 Feb 2026
Abstract
Precise characterization of the thermospheric neutral wind is essential for comprehending the dynamic interactions within the ionosphere-thermosphere system, as evidenced by the development of models like HWM and the need for localized data. However, numerical models often suffer from biases due to uncertainties in external forcing and the scarcity of direct wind observations. This study examines the influence of incorporating actual neutral wind profiles from the Michelson Interferometer for Global High-resolution Thermospheric Imaging (MIGHTI) on the Ionospheric Connection Explorer (ICON) satellite into the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM) via an ensemble-based data assimilation framework. To address the challenges of assimilating real observational data, a robust background check Quality Control (QC) scheme with dynamic thresholds based on ensemble spread was implemented. The assimilation performance was evaluated by comparing the analysis results against independent, unassimilated observations and a free-running model Control Run. The findings demonstrate a substantial improvement in the precision of the thermospheric wind field. This enhancement is reflected in a 45–50% reduction in Root Mean Square Error (RMSE) for both zonal and meridional components. For zonal winds, the system demonstrated effective bias removal and sustained forecast skill, indicating a strong model memory of the large-scale mean flow. In contrast, while the assimilation exceptionally corrected the meridional circulation by refining the spatial structures and reshaping cross-equatorial flows, the forecast skill for this component dissipated rapidly. This characteristic of “short memory” underscores the highly dynamic nature of thermospheric winds and emphasizes the need for high-frequency assimilation cycles. The system required a spin-up period of approximately 8 h to achieve statistical stability. These findings demonstrate that the assimilation of data from ICON/MIGHTI satellites not only diminishes numerical inaccuracies but also improves the representation of instantaneous thermospheric wind distributions. Providing a high-fidelity dataset is crucial for advancing the modeling and understanding of the complex interactions within the Earth’s ionosphere-thermosphere system. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
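The "background check QC with dynamic thresholds based on ensemble spread" admits a compact illustration. The NumPy sketch below rejects an observation whose departure from the ensemble-mean background exceeds k standard deviations of the combined background-plus-observation error; the variable names, the value of k, and the observation-error figure are assumptions, not the study's configuration.

```python
# Sketch of a background-check quality control with a spread-dependent threshold.
# ens_background: (n_members, n_obs) model-equivalent winds; obs: (n_obs,) MIGHTI winds.
import numpy as np

def background_check(ens_background, obs, obs_err_std, k=3.0):
    mean = ens_background.mean(axis=0)
    spread = ens_background.std(axis=0, ddof=1)       # ensemble spread per observation
    total_std = np.sqrt(spread**2 + obs_err_std**2)   # background + observation error
    innovation = obs - mean
    accept = np.abs(innovation) <= k * total_std      # dynamic, spread-aware threshold
    return accept, innovation

# accept_mask, departures = background_check(ens_bg, mighti_winds, obs_err_std=10.0)
```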

22 pages, 1588 KB  
Article
A Hybrid HOG-LBP-CNN Model with Self-Attention for Multiclass Lung Disease Diagnosis from CT Scan Images
by Aram Hewa, Jafar Razmara and Jaber Karimpour
Computers 2026, 15(2), 93; https://doi.org/10.3390/computers15020093 - 1 Feb 2026
Abstract
Resource-limited settings continue to face challenges in the identification of COVID-19, bacterial pneumonia, viral pneumonia, and normal lung conditions because of the overlap of CT appearance and inter-observer variability. We justify a hybrid architecture of deep learning which combines hand-designed descriptors (Histogram of Oriented Gradients, Local Binary Patterns) and a 20-layer Convolutional Neural Network with dual self-attention. Handcrafted features were then trained with Support Vector Machines, and ensemble averaging was used to integrate the results with the CNN. The confidence level of 0.7 was used to mark suspicious cases to be reviewed manually. On a balanced dataset of 14,000 chest CT scans (3500 per class), the model was trained and cross-validated five-fold on a patient-wise basis. It had 97.43% test accuracy and a macro F1-score of 0.97, which was statistically significant compared to standalone CNN (92.0%), ResNet-50 (90.0%), multiscale CNN (94.5%), and ensemble CNN (96.0%). A further 2–3% enhancement was added by the self-attention module that targets the diagnostically salient lung regions. The predictions that were below the confidence limit amounted to only 5 percent, which indicated reliability and clinical usefulness. The framework provides an interpretable and scalable method of diagnosing multiclass lung disease, especially applicable to be deployed in healthcare settings with limited resources. The further development of the work will involve the multi-center validation, optimization of the model, and greater interpretability to be used in the real world. Full article
(This article belongs to the Special Issue AI in Bioinformatics)
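As an illustration of the handcrafted branch and the decision-level fusion with a 0.7 confidence cut-off, here is a hedged sketch using scikit-image and scikit-learn; the HOG/LBP parameters and the equal-weight averaging are assumptions rather than the published configuration.

```python
# Sketch: handcrafted HOG/LBP features with an SVM, fused with CNN probabilities
# by simple averaging; low-confidence cases are flagged for manual review.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def handcrafted_features(gray_image):
    h = hog(gray_image, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_image, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

svm = SVC(probability=True)            # to be trained on handcrafted feature vectors
# svm.fit(F_train, y_train)

def fuse_and_flag(svm_proba, cnn_proba, confidence=0.7):
    proba = 0.5 * (svm_proba + cnn_proba)      # ensemble averaging of class probabilities
    pred = proba.argmax(axis=1)
    flagged = proba.max(axis=1) < confidence   # below 0.7 -> route to manual review
    return pred, flagged
```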
24 pages, 2031 KB  
Article
A Unified Approach for Ensemble Function and Threshold Optimization in Anomaly-Based Failure Forecasting
by Nikolaos Kolokas, Vasileios Tatsis, Angeliki Zacharaki, Dimosthenis Ioannidis and Dimitrios Tzovaras
Appl. Sci. 2026, 16(3), 1452; https://doi.org/10.3390/app16031452 - 31 Jan 2026
Abstract
This paper introduces a novel approach to anomaly-based failure forecasting that jointly optimizes both the ensemble function and the anomaly threshold used for decision making. Unlike conventional methods that apply fixed or classifier-defined thresholds, the proposed framework simultaneously tunes the threshold of the failure probability or anomaly score and the parameters of an ensemble function that integrates multiple machine learning models—specifically, Random Forest and Isolation Forest classifiers trained under diverse preprocessing configurations. The distinctive contribution of this work lies in introducing a weighted mean ensemble function, whose coefficients are co-optimized with the anomaly threshold using a global optimization algorithm, enabling adaptive, data-driven decision boundaries. The method is designed for predictive maintenance applications and validated using sensor data from three industrial domains: aluminum anode production, plastic injection molding, and automotive manufacturing. The experimental results demonstrate that the proposed combined optimization significantly enhances forecasting reliability, improving the Matthews Correlation Coefficient by up to 6.5 percentage units compared to previous approaches. Beyond its empirical gains, this work establishes a scalable and computationally efficient framework for integrating threshold and ensemble optimization in real-world, cross-industry predictive maintenance systems. Full article
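The jointly optimized weighted-mean ensemble and threshold can be sketched with a generic global optimizer. The snippet below uses SciPy's differential evolution to maximize the Matthews Correlation Coefficient over the weight vector and the cut-off; the optimizer choice, bounds, and score scaling are illustrative assumptions rather than the paper's exact routine.

```python
# Sketch: jointly optimizing weighted-mean ensemble coefficients and the anomaly
# threshold by maximizing the Matthews Correlation Coefficient with a global optimizer.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.metrics import matthews_corrcoef

def fit_weights_and_threshold(model_scores, y):
    """model_scores: (n_samples, n_models) anomaly scores in [0, 1]; y: binary labels."""
    n_models = model_scores.shape[1]

    def neg_mcc(params):
        w, threshold = params[:n_models], params[-1]
        w = w / (w.sum() + 1e-12)                 # normalised ensemble weights
        fused = model_scores @ w                  # weighted-mean ensemble score
        return -matthews_corrcoef(y, fused >= threshold)

    bounds = [(0.0, 1.0)] * n_models + [(0.0, 1.0)]   # weights and threshold
    result = differential_evolution(neg_mcc, bounds, seed=0, tol=1e-6)
    weights = result.x[:n_models] / (result.x[:n_models].sum() + 1e-12)
    return weights, result.x[-1]
```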

21 pages, 2013 KB  
Article
Machine Learning Models for Reliable Gait Phase Detection Using Lower-Limb Wearable Sensor Data
by Muhammad Fiaz, Rosita Guido and Domenico Conforti
Appl. Sci. 2026, 16(3), 1397; https://doi.org/10.3390/app16031397 - 29 Jan 2026
Abstract
Accurate gait-phase detection is essential for rehabilitation monitoring, prosthetic control, and human–robot interaction. Artificial intelligence supports continuous, personalized mobility assessment by extracting clinically meaningful patterns from wearable sensors. A richer view of gait dynamics can be achieved by integrating additional signals, including inertial, plantar flex, footswitch, and EMG data, leading to more accurate and informative gait analysis. Motivated by these needs, this study investigates discrete gait-phase recognition for the right leg using a multi-subject IMU dataset collected from lower-limb sensors. IMU recordings were segmented into 128-sample windows across 23 channels, and each window was flattened into a 2944-dimensional feature vector. To ensure reliable ground-truth labels, we developed an automatic relabeling pipeline incorporating heel-strike and toe-off detection, adaptive threshold tuning, and sensor fusion across sensor modalities. These windowed vectors were then used to train a comprehensive suite of machine learning models, including Random Forests, Extra Trees, k-Nearest Neighbors, XGBoost, and LightGBM. All models underwent systematic hyperparameter tuning, and their performance was assessed through k-fold cross-validation. The results demonstrate that tree-based ensemble models provide accurate and stable gait-phase classification with accuracy exceeding 97% across both test sets, underscoring their potential for future real-time gait analysis and lower-limb assistive technologies. Full article
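A minimal sketch of the windowing and tree-ensemble evaluation described above: the 128-sample window and 23 channels follow the abstract, while the step size, labels, and model settings are assumptions.

```python
# Sketch: turning multi-channel IMU recordings into flattened window vectors and
# evaluating a tree ensemble with k-fold cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signals, window=128, step=64):
    """signals: (n_samples, 23) synchronized IMU channels -> (n_windows, 128 * 23)."""
    starts = range(0, signals.shape[0] - window + 1, step)
    return np.stack([signals[s:s + window].reshape(-1) for s in starts])

# X = window_features(imu_signals)   # (n_windows, 2944) flattened feature vectors
# y = window_labels                  # gait phase per window (from the relabeling pipeline)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
# print(cross_val_score(clf, X, y, cv=5).mean())
```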

20 pages, 1578 KB  
Article
Climate Warming at European Airports: Human Factors and Infrastructure Planning
by Jonny Williams, Paul D. Williams and Marco Venturini
Aerospace 2026, 13(2), 127; https://doi.org/10.3390/aerospace13020127 - 28 Jan 2026
Abstract
Temperature and related thermal comfort metrics at a representative 9-member ensemble of airports in Europe are presented using a combination of historical (1985–2014) and future projection (2035–2064) timescales under a variety of forcing scenarios. Data are shown for summer (June–July–August) and the nine sites are further grouped into 'oceanic', 'continentally influenced', and 'Mediterranean coastal' climate types, which ameliorates visualisation and provides more generalised policy-relevant results. Using the Humidex metric, it is shown that some airports in southern Europe may enter a 'dangerous' (>45 °C) regime of human discomfort. This would be accompanied by economic impacts related to longer mandated rest periods for ground workers, as well as increased water intake and changes to health and safety training. The coincidence of the 38 °C flash point of kerosene jet fuel with perturbed daily maximum temperature occurrence thresholds at some sites will likely also have knock-on effects on safety practices since some sites may experience 70% of future summer days with temperatures exceeding this value. Using an 18 °C threshold for defining cooling and heating 'degree days', increases in cooling requirements are projected to be larger than reductions in heating for continental and Mediterranean sites, and heatwave occurrence (3 or more days at or above the 95th historical percentile) may increase by a factor of 10. From a building and infrastructure services perspective, increased temperature variability around larger average values has the potential to reduce safe runway lifetimes and increase structural fatigue in large-span steel terminal buildings. Full article
(This article belongs to the Section Air Traffic and Transportation)
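The 18 °C degree-day and 3-day heatwave definitions used above reduce to a few lines of arithmetic; the sketch below assumes daily mean and maximum temperature series as inputs and is illustrative only.

```python
# Sketch: cooling/heating degree days with an 18 °C base and a simple heatwave-day count
# (days belonging to runs of 3+ days at or above the historical 95th percentile).
import numpy as np

def degree_days(daily_mean_temp, base=18.0):
    cooling = np.clip(daily_mean_temp - base, 0, None).sum()
    heating = np.clip(base - daily_mean_temp, 0, None).sum()
    return cooling, heating

def heatwave_days(daily_max_temp, historical_p95, min_run=3):
    hot = daily_max_temp >= historical_p95
    count, run = 0, 0
    for flag in hot:                  # count days in runs of at least min_run hot days
        run = run + 1 if flag else 0
        if run == min_run:
            count += min_run
        elif run > min_run:
            count += 1
    return count
```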
29 pages, 2666 KB  
Article
Explainable Ensemble Learning for Predicting Stock Market Crises: Calibration, Threshold Optimization, and Robustness Analysis
by Eddy Suprihadi, Nevi Danila, Zaiton Ali and Gede Pramudya Ananta
Information 2026, 17(2), 114; https://doi.org/10.3390/info17020114 - 25 Jan 2026
Abstract
Forecasting stock market crashes is difficult because such events are rare, highly nonlinear, and shaped by latent structural and behavioral forces. This study introduces a calibrated and interpretable Random Forest framework for detecting pre-crash conditions through structural feature engineering, early-warning calibration, and model explainability. Using daily data on global equity indices and major large-cap stocks from the U.S., Europe, and Asia, we construct a feature set that captures volatility expansion, moving-average deterioration, Bollinger Band width, and short-horizon return dynamics. Probability-threshold optimization significantly improves sensitivity to rare events and yields an operating point at a crash-probability threshold of 0.33. Compared with econometric and machine learning benchmarks, the calibrated model attains higher precision while maintaining competitive F1 and MCC scores, and it delivers meaningful early-warning signals with an average lead-time of around 60 days. SHAP analysis indicates that predictions are anchored in theoretically consistent indicators, particularly volatility clustering and weakening trends, while robustness checks show resilience to noise, structural perturbations, and simulated flash crashes. Taken together, these results provide a transparent and reproducible blueprint for building operational early-warning systems in financial markets. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
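A hedged sketch of the calibration and probability-threshold optimization step with scikit-learn: the isotonic calibration, the F1 objective, and the validation split are assumptions, and a sweep of this kind is what would surface an operating point such as the reported 0.33.

```python
# Sketch: probability calibration of a Random Forest plus a threshold sweep for rare events.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def calibrated_crash_model(X_train, y_train, X_val, y_val):
    rf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
    model = CalibratedClassifierCV(rf, method="isotonic", cv=3).fit(X_train, y_train)
    proba = model.predict_proba(X_val)[:, 1]
    thresholds = np.linspace(0.05, 0.95, 91)
    best_t = max(thresholds, key=lambda t: f1_score(y_val, proba >= t))
    return model, best_t        # best_t plays the role of the crash-probability cut-off
```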

30 pages, 25744 KB  
Article
Long-Term Dynamics and Transitions of Surface Water Extent in the Dryland Wetlands of Central Asia Using a Hybrid Ensemble–Occurrence Approach
by Kanchan Mishra, Hervé Piégay, Kathryn E. Fitzsimmons and Philip Weber
Remote Sens. 2026, 18(3), 383; https://doi.org/10.3390/rs18030383 - 23 Jan 2026
Abstract
Wetlands in dryland regions are rapidly degrading under the combined effects of climate change and human regulation, yet long-term, seasonally resolved assessments of surface water extent (SWE) and its dynamics remain scarce. Here, we map and analyze seasonal surface water extent (SWE) over the period 2000–2024 in the Ile River Delta (IRD), south-eastern Kazakhstan, using Landsat TM/ETM+/OLI data within the Google Earth Engine (GEE) framework. We integrate multiple indices using the modified Normalized Difference Water Index (mNDWI), Automated Water Extraction Index (AWEI) variants, Water Index 2015 (WI2015), and Multi-Band Water Index (MBWI) with dynamic Otsu thresholding. The resulting index-wise binary water maps are merged via ensemble agreement (intersection, majority, union) to delineate three SWE regimes: stable (persists most of the time), periodic (appears regularly but not in every season), and ephemeral (appears only occasionally). Validation against Sentinel-2 imagery showed high accuracy F1-Score/Overall accuracy (F1/OA ≈ 0.85/85%), confirming our workflow to be robust. Hydroclimatic drivers were evaluated through modified Mann–Kendall (MMK) and Spearman’s (r) correlations between SWE, discharge (D), water level (WL), precipitation (P), and air temperature (AT), while a hybrid ensemble–occurrence framework was applied to identify degradation and transition patterns. Trend analysis revealed significant long–term declines, most pronounced during summer and fall. Discharge is predominantly controlled by stable spring SWE, while discharge and temperature jointly influence periodic SWE in summer–fall, with warming reducing the delta surface water. Ephemeral SWE responds episodically to flow pulses, whereas precipitation played a limited role in this semi–arid region. Spatially, area(s) of interest (AOI)-II/III (the main distributary system) support the most extensive yet dynamic wetlands. In contrast, AOI-I and AOI-IV host smaller, more constrained wetland mosaics. AOI-I shows persistence under steady low flows, while AOI-IV reflects a stressed system with sporadic high-water levels. Overall, the results highlight the dominant influence of flow regulation and distributary allocation on IRD hydrology and the need for ecologically timed releases, targeted restoration, and transboundary cooperation to sustain delta resilience. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
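Outside Google Earth Engine, the index-plus-Otsu-plus-agreement logic can be illustrated with NumPy and scikit-image; only mNDWI is shown, the band inputs are assumed reflectance arrays, and the study additionally combines AWEI variants, WI2015, and MBWI.

```python
# Sketch: one water index (mNDWI), dynamic Otsu thresholding, and ensemble agreement
# across several index-wise binary water masks.
import numpy as np
from skimage.filters import threshold_otsu

def mndwi(green, swir):
    return (green - swir) / (green + swir + 1e-9)

def otsu_water_mask(index_image):
    return index_image > threshold_otsu(index_image)   # dynamic, scene-specific threshold

def ensemble_agreement(masks, mode="majority"):
    stack = np.stack(masks).astype(int)
    if mode == "intersection":
        return stack.min(axis=0).astype(bool)
    if mode == "union":
        return stack.max(axis=0).astype(bool)
    return stack.sum(axis=0) >= (len(masks) // 2 + 1)   # majority vote
```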

24 pages, 7427 KB  
Article
A Two-Stage Feature Reduction (FIRRE) Framework for Improving Artificial Neural Network Predictions in Civil Engineering Applications
by Yaohui Guo, Ling Xu, Xianyu Chen and Zifeng Zhao
Infrastructures 2026, 11(1), 29; https://doi.org/10.3390/infrastructures11010029 - 16 Jan 2026
Abstract
Artificial neural networks (ANNs) are widely used in engineering prediction, but excessive input dimensionality can reduce both accuracy and efficiency. This study proposes a two-stage feature-reduction framework, Feature Importance Ranking and Redundancy Elimination (FIRRE), to optimize ANN inputs by removing weakly informative and redundant variables. In Stage 1, four complementary ranking methods, namely Pearson correlation, recursive feature elimination, random forest importance, and F-test scoring, are combined into an ensemble importance score. In Stage 2, highly collinear features (ρ > 0.95) are pruned while retaining the more informative variable in each pair. FIRRE is evaluated on 32 civil engineering datasets spanning materials, structural, and environmental applications, and benchmarked against Principal Component Analysis, variance-threshold filtering, random feature selection, and K-means clustering. Across the benchmark suite, FIRRE consistently achieves competitive or improved predictive performance while reducing input dimensionality by 40% on average and decreasing computation time by 10–60%. A dynamic modulus case study further demonstrates its practical value, improving R2 from 0.926 to 0.966 while reducing inputs from 25 to 7. Overall, FIRRE provides a practical, robust framework for simplifying ANN inputs and improving efficiency in civil engineering prediction tasks. Full article
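A compact sketch of the two FIRRE stages under stated assumptions: the ranker subset here (Pearson correlation, F-test, random-forest importance) omits recursive feature elimination, and equal weighting of the ranks is assumed.

```python
# Sketch: (1) ensemble importance score from several rankers, (2) pruning of highly
# collinear features (|rho| > 0.95), keeping the more important member of each pair.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import f_regression

def ensemble_importance(X: pd.DataFrame, y):
    pearson = X.corrwith(pd.Series(y, index=X.index)).abs()
    f_scores = pd.Series(f_regression(X, y)[0], index=X.columns)
    rf_imp = pd.Series(RandomForestRegressor(n_estimators=200, random_state=0)
                       .fit(X, y).feature_importances_, index=X.columns)
    ranks = pd.concat([s.rank() for s in (pearson, f_scores, rf_imp)], axis=1)
    return ranks.mean(axis=1).sort_values(ascending=False)   # ensemble importance score

def prune_collinear(X: pd.DataFrame, importance, rho=0.95):
    corr = X.corr().abs()
    selected = []
    for col in importance.index:                  # most to least important
        if all(corr.loc[col, s] <= rho for s in selected):
            selected.append(col)                  # drop features collinear with a better one
    return selected
```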

19 pages, 1973 KB  
Article
Continuous Smartphone Authentication via Multimodal Biometrics and Optimized Ensemble Learning
by Chia-Sheng Cheng, Ko-Chien Chang, Hsing-Chung Chen and Chao-Lung Chou
Mathematics 2026, 14(2), 311; https://doi.org/10.3390/math14020311 - 15 Jan 2026
Abstract
The ubiquity of smartphones has transformed them into primary repositories of sensitive data; however, traditional one-time authentication mechanisms create a critical trust gap by failing to verify identity post-unlock. Our aim is to mitigate these vulnerabilities and align with the Zero Trust Architecture (ZTA) framework and philosophy of “never trust, always verify,” as formally defined by the National Institute of Standards and Technology (NIST) in Special Publication 800-207. This study introduces a robust continuous authentication (CA) framework leveraging multimodal behavioral biometrics. A dedicated application was developed to synchronously capture touch, sliding, and inertial sensor telemetry. For feature modeling, a heterogeneous deep learning pipeline was employed to capture modality-specific characteristics, utilizing Convolutional Neural Networks (CNNs) for sensor data, Long Short-Term Memory (LSTM) networks for curvilinear sliding, and Gated Recurrent Units (GRUs) for discrete touch. To resolve performance degradation caused by class imbalance in Zero Trust environments, a Grid Search Optimization (GSO) strategy was applied to optimize a weighted voting ensemble, identifying the global optimum for decision thresholds and modality weights. Empirical validation on a dataset of 35,519 samples from 15 subjects demonstrates that the optimized ensemble achieves a peak accuracy of 99.23%. Sensor kinematics emerged as the primary biometric signature, followed by touch and sliding features. This framework enables high-precision, non-intrusive continuous verification, bridging the critical security gap in contemporary mobile architectures. Full article
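The grid-search optimization over modality weights and the decision threshold can be illustrated directly; the grid step and the accuracy objective below are assumptions, not the study's exact settings.

```python
# Sketch: grid search over modality weights and a decision threshold for a weighted
# voting ensemble of per-modality genuine-user probabilities.
import itertools
import numpy as np
from sklearn.metrics import accuracy_score

def grid_search_weights(probas, y, step=0.1):
    """probas: list of (n_samples,) genuine-user probabilities, one per modality."""
    grid = np.arange(0.0, 1.0 + 1e-9, step)
    best = (None, None, -1.0)
    for w in itertools.product(grid, repeat=len(probas)):
        if abs(sum(w) - 1.0) > 1e-9:
            continue                               # keep only weight vectors summing to 1
        fused = sum(wi * p for wi, p in zip(w, probas))
        for t in grid:
            acc = accuracy_score(y, fused >= t)
            if acc > best[2]:
                best = (w, t, acc)
    return best           # (weights, threshold, accuracy) at the grid optimum
```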

41 pages, 5624 KB  
Article
Tackling Imbalanced Data in Chronic Obstructive Pulmonary Disease Diagnosis: An Ensemble Learning Approach with Synthetic Data Generation
by Yi-Hsin Ko, Chuan-Sheng Hung, Chun-Hung Richard Lin, Da-Wei Wu, Chung-Hsuan Huang, Chang-Ting Lin and Jui-Hsiu Tsai
Bioengineering 2026, 13(1), 105; https://doi.org/10.3390/bioengineering13010105 - 15 Jan 2026
Abstract
Chronic obstructive pulmonary disease (COPD) is a major health burden worldwide and in Taiwan, ranking as the third leading cause of death globally, and its prevalence in Taiwan continues to rise. Readmission within 14 days is a key indicator of disease instability and care efficiency, driven jointly by patient-level physiological vulnerability (such as reduced lung function and multiple comorbidities) and healthcare system-level deficiencies in transitional care. To mitigate the growing burden and improve quality of care, it is urgently necessary to develop an AI-based prediction model for 14-day readmission. Such a model could enable early identification of high-risk patients and trigger multidisciplinary interventions, such as pulmonary rehabilitation and remote monitoring, to effectively reduce avoidable early readmissions. However, medical data are commonly characterized by severe class imbalance, which limits the ability of conventional machine learning methods to identify minority-class cases. In this study, we used real-world clinical data from multiple hospitals in Kaohsiung City to construct a prediction framework that integrates data generation and ensemble learning to forecast readmission risk among patients with chronic obstructive pulmonary disease (COPD). CTGAN and kernel density estimation (KDE) were employed to augment the minority class, and the impact of these two generation approaches on model performance was compared across different augmentation ratios. We adopted a stacking architecture composed of six base models as the core framework and conducted systematic comparisons against the baseline models XGBoost, AdaBoost, Random Forest, and LightGBM across multiple recall thresholds, different feature configurations, and alternative data generation strategies. Overall, the results show that, under high-recall targets, KDE combined with stacking achieves the most stable and superior overall performance relative to the baseline models. We further performed ablation experiments by sequentially removing each base model to evaluate and analyze its contribution. The results indicate that removing KNN yields the greatest negative impact on the stacking classifier, particularly under high-recall settings where the declines in precision and F1-score are most pronounced, suggesting that KNN is most sensitive to the distributional changes introduced by KDE-generated data. This configuration simultaneously improves precision, F1-score, and specificity, and is therefore adopted as the final recommended model setting in this study. Full article
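A hedged sketch of KDE-based minority augmentation feeding a stacking classifier: the bandwidth, the augmentation size, and the two base learners shown are placeholders (the study stacks six base models and also evaluates CTGAN-generated data).

```python
# Sketch: kernel density estimation fitted on the minority (readmitted) class to draw
# synthetic samples, followed by a stacking ensemble.
import numpy as np
from sklearn.neighbors import KernelDensity, KNeighborsClassifier
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def kde_augment(X_minority, n_new, bandwidth=0.5):
    kde = KernelDensity(bandwidth=bandwidth).fit(X_minority)
    return kde.sample(n_new, random_state=0)

def build_stacker():
    base = [("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("knn", KNeighborsClassifier(n_neighbors=5))]
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression(max_iter=1000), cv=5)

# X_aug = np.vstack([X_train, kde_augment(X_train[y_train == 1], n_new=2000)])
# y_aug = np.concatenate([y_train, np.ones(2000)])
# clf = build_stacker().fit(X_aug, y_aug)
```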

21 pages, 2947 KB  
Article
HFSOF: A Hierarchical Feature Selection and Optimization Framework for Ultrasound-Based Diagnosis of Endometrial Lesions
by Yongjun Liu, Zihao Zhang, Tongyu Chai and Haitong Zhao
Biomimetics 2026, 11(1), 74; https://doi.org/10.3390/biomimetics11010074 - 15 Jan 2026
Abstract
Endometrial lesions are common in gynecology, exhibiting considerable clinical heterogeneity across different subtypes. Although ultrasound imaging is the preferred diagnostic modality due to its noninvasive, accessible, and cost-effective nature, its diagnostic performance remains highly operator-dependent, leading to subjectivity and inconsistent results. To address these limitations, this study proposes a hierarchical feature selection and optimization framework for endometrial lesions, aiming to enhance the objectivity and robustness of ultrasound-based diagnosis. Firstly, Kernel Principal Component Analysis (KPCA) is employed for nonlinear dimensionality reduction, retaining the top 1000 principal components. Secondly, an ensemble of three filter-based methods—information gain, chi-square test, and symmetrical uncertainty—is integrated to rank and fuse features, followed by thresholding with Maximum Scatter Difference Linear Discriminant Analysis (MSDLDA) for preliminary feature selection. Finally, the Whale Migration Algorithm (WMA) is applied to population-based feature optimization and classifier training under the constraints of a Support Vector Machine (SVM) and a macro-averaged F1 score. Experimental results demonstrate that the proposed closed-loop pipeline of “kernel reduction—filter fusion—threshold pruning—intelligent optimization—robust classification” effectively balances nonlinear structure preservation, feature redundancy control, and model generalization, providing an interpretable, reproducible, and efficient solution for intelligent diagnosis in small- to medium-scale medical imaging datasets. Full article
(This article belongs to the Special Issue Bio-Inspired AI: When Generative AI and Biomimicry Overlap)
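A reduced sketch of the kernel-reduction and filter-fusion stages with scikit-learn: mutual information stands in for information gain, symmetrical uncertainty and the Whale Migration Algorithm stage are omitted, and the component and feature counts are illustrative.

```python
# Sketch: kernel PCA for nonlinear reduction, a fused filter ranking, and an SVM
# evaluated with macro-averaged F1.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import mutual_info_classif, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def kpca_filter_select(X, y, n_components=100, n_keep=30):
    Z = KernelPCA(n_components=n_components, kernel="rbf").fit_transform(X)
    Z = MinMaxScaler().fit_transform(Z)              # chi2 requires non-negative inputs
    rank_mi = np.argsort(np.argsort(-mutual_info_classif(Z, y, random_state=0)))
    rank_chi = np.argsort(np.argsort(-chi2(Z, y)[0]))
    fused_rank = rank_mi + rank_chi                  # lower fused rank = more informative
    keep = np.argsort(fused_rank)[:n_keep]
    return Z[:, keep]

# Z_sel = kpca_filter_select(X, y)
# print(cross_val_score(SVC(), Z_sel, y, cv=5, scoring="f1_macro").mean())
```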

20 pages, 3743 KB  
Article
Unsupervised Learning-Based Anomaly Detection for Bridge Structural Health Monitoring: Identifying Deviations from Normal Structural Behaviour
by Jabez Nesackon Abraham, Minh Q. Tran, Jerusha Samuel Jayaraj, Jose C. Matos, Maria Rosa Valluzzi and Son N. Dang
Sensors 2026, 26(2), 561; https://doi.org/10.3390/s26020561 - 14 Jan 2026
Abstract
Structural Health Monitoring (SHM) of large-scale civil infrastructure is essential to ensure safety, minimise maintenance costs, and support informed decision-making. Unsupervised anomaly detection has emerged as a powerful tool for identifying deviations in structural behaviour without requiring labelled damage data. The study initially reproduces and implements a state-of-the-art methodology that combines local density estimation through the Cumulative Distance Participation Factor (CDPF) with Semi-parametric Extreme Value Theory (SEVT) for thresholding, which serves as an essential baseline reference for establishing normal structural behaviour and for benchmarking the performance of the proposed anomaly detection framework. Using modal frequencies extracted via Stochastic Subspace Identification from the Z24 bridge dataset, the baseline method effectively identifies structural anomalies caused by progressive damage scenarios. However, its performance is constrained when dealing with subtle or non-linear deviations. To address this limitation, we introduce an innovative ensemble anomaly detection framework that integrates two complementary unsupervised methods: Principal Component Analysis (PCA) and Autoencoder (AE) are dimensionality reduction methods used for anomaly detection. PCA captures linear patterns using variance, while AE learns non-linear representations through data reconstruction. By leveraging the strengths of these techniques, the ensemble achieves improved sensitivity, reliability, and interpretability in anomaly detection. A comprehensive comparison with the baseline approach demonstrates that the proposed ensemble not only captures anomalies more reliably but also provides improved stability to environmental and operational variability. These findings highlight the potential of ensemble-based unsupervised methods for advancing SHM practices. Full article
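The PCA/autoencoder ensemble scoring can be sketched with scikit-learn alone, using an MLP trained to reconstruct its input as a stand-in autoencoder; the bottleneck sizes, the equal-weight score fusion, and the percentile threshold are assumptions rather than the paper's SEVT-based thresholding.

```python
# Sketch: PCA reconstruction error and a small autoencoder combined into one
# ensemble anomaly score over modal-frequency vectors.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def ensemble_anomaly_scores(X_train, X_test, n_components=2):
    scaler = StandardScaler().fit(X_train)
    Xtr, Xte = scaler.transform(X_train), scaler.transform(X_test)

    pca = PCA(n_components=n_components).fit(Xtr)
    pca_ref = np.linalg.norm(Xtr - pca.inverse_transform(pca.transform(Xtr)), axis=1).mean()
    pca_err = np.linalg.norm(Xte - pca.inverse_transform(pca.transform(Xte)), axis=1)

    ae = MLPRegressor(hidden_layer_sizes=(8, n_components, 8), max_iter=2000,
                      random_state=0).fit(Xtr, Xtr)   # autoencoder: learn X -> X
    ae_ref = np.linalg.norm(Xtr - ae.predict(Xtr), axis=1).mean()
    ae_err = np.linalg.norm(Xte - ae.predict(Xte), axis=1)

    # Normalise each error by its baseline (training-condition) scale, then average.
    return 0.5 * (pca_err / pca_ref + ae_err / ae_ref)

# threshold = np.quantile(ensemble_anomaly_scores(X_healthy, X_healthy), 0.99)
```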

26 pages, 60486 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain its engineering applications. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralize areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor. Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones, rather than statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones, even under extreme thresholds. It suggests that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction. Full article
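The statistical comparison described above (pairwise Welch t-tests with Benjamini-Hochberg control of the false discovery rate at 0.05) is straightforward to reproduce; the sketch below assumes per-architecture arrays of error scores as hypothetical inputs.

```python
# Sketch: Welch t-tests between TPE-optimized and manually tuned runs of each
# architecture, with Benjamini-Hochberg correction at FDR = 0.05.
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def compare_tuning(results_tpe, results_manual, alpha=0.05):
    """Both arguments: dict mapping architecture name -> array of error scores."""
    names = sorted(results_tpe)
    pvals = [ttest_ind(results_tpe[n], results_manual[n], equal_var=False).pvalue
             for n in names]                      # Welch t-test (unequal variances)
    reject, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return {n: (p, r) for n, p, r in zip(names, p_adj, reject)}
```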

24 pages, 3204 KB  
Article
Web-Based Explainable AI System Integrating Color-Rule and Deep Models for Smart Durian Orchard Management
by Wichit Sookkhathon and Chawanrat Srinounpan
AgriEngineering 2026, 8(1), 23; https://doi.org/10.3390/agriengineering8010023 - 9 Jan 2026
Abstract
This study presents a field-oriented AI web system for durian orchard management that recognizes leaf health from on-orchard images under variable illumination. Two complementary pipelines are employed: (1) a rule-based module operating in HSV and CIE Lab color spaces that suppresses sun-induced specular highlights via V/L* thresholds and applies interpretable hue–chromaticity rules with spatial constraints; and (2) a Deep Feature (PCA–SVM) pipeline that extracts features from pretrained ResNet50 and DenseNet201 models, performs dimensionality reduction using Principal Component Analysis, and classifies samples into three agronomic classes: healthy, leaf-spot, and leaf-blight. This hybrid architecture enhances transparency for growers while remaining robust to illumination variations and background clutter typical of on-farm imaging. Preliminary on-farm experiments under real-world field conditions achieved approximately 80% classification accuracy, whereas controlled evaluations using curated test sets showed substantially higher performance for the Deep Features and Ensemble model, with accuracy reaching 0.97–0.99. The web interface supports near-real-time image uploads, annotated visual overlays, and Thai-language outputs. Usability testing with thirty participants indicated very high satisfaction (mean 4.83, SD 0.34). The proposed system serves as both an instructional demonstrator for explainable AI-based image analysis and a practical decision-support tool for digital horticultural monitoring. Full article
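A hedged sketch of the rule-based branch with OpenCV: a V-channel threshold suppresses specular highlights and simple hue ranges separate green tissue from lesion-like pixels; all threshold values and hue ranges here are illustrative assumptions, not the published rules.

```python
# Sketch: specular-highlight suppression via a V-channel threshold in HSV, followed
# by interpretable hue rules for healthy vs. lesion-like leaf pixels.
import cv2
import numpy as np

def rule_based_masks(bgr_image, v_max=235, s_min=40):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    valid = (v < v_max) & (s > s_min)        # drop specular highlights / washed-out pixels
    healthy = valid & (h >= 35) & (h <= 85)  # green hues -> healthy leaf tissue
    lesion = valid & ((h < 35) | (h > 85))   # brown/yellow hues -> candidate spots or blight
    return healthy.astype(np.uint8), lesion.astype(np.uint8)

# healthy_mask, lesion_mask = rule_based_masks(cv2.imread("leaf.jpg"))
```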
