Search Results (681)

Search Parameters:
Keywords = Bayesian classifications

32 pages, 999 KB  
Article
A Robust Hybrid Metaheuristic Framework for Training Support Vector Machines
by Khalid Nejjar, Khalid Jebari and Siham Rekiek
Algorithms 2026, 19(1), 70; https://doi.org/10.3390/a19010070 - 13 Jan 2026
Abstract
Support Vector Machines (SVMs) are widely used in critical decision-making applications, such as precision agriculture, due to their strong theoretical foundations and their ability to construct an optimal separating hyperplane in high-dimensional spaces. However, the effectiveness of SVMs is highly dependent on the efficiency of the optimization algorithm used to solve their underlying dual problem, which is often complex and constrained. Classical solvers, such as Sequential Minimal Optimization (SMO) and Stochastic Gradient Descent (SGD), present inherent limitations: SMO ensures numerical stability but lacks scalability and is sensitive to heuristics, while SGD scales well but suffers from unstable convergence and limited suitability for nonlinear kernels. To address these challenges, this study proposes a novel hybrid optimization framework based on Open Competency Optimization and Particle Swarm Optimization (OCO–PSO) to enhance the training of SVMs. The proposed approach combines the global exploration capability of PSO with the adaptive competency-based learning mechanism of OCO, enabling efficient exploration of the solution space, avoidance of local minima, and strict enforcement of dual constraints on the Lagrange multipliers. Across multiple datasets spanning medical (diabetes), agricultural yield, signal processing (sonar and ionosphere), and imbalanced synthetic data, the proposed OCO-PSO–SVM consistently outperforms classical SVM solvers (SMO and SGD) as well as widely used classifiers, including decision trees and random forests, in terms of accuracy, macro-F1-score, Matthews correlation coefficient (MCC), and ROC-AUC. On the Ionosphere dataset, OCO-PSO achieves an accuracy of 95.71%, an F1-score of 0.954, and an MCC of 0.908, matching the accuracy of random forest while offering superior interpretability through its kernel-based structure. 
In addition, the proposed method yields a sparser model with only 66 support vectors compared to 71 for standard SVC (a reduction of approximately 7%), while strictly satisfying the dual constraints with a near-zero violation of 1.3×10⁻³. Notably, the optimal hyperparameters identified by OCO-PSO (C = 2, γ ≈ 0.062) differ substantially from those obtained via Bayesian optimization for SVC (C = 10, γ ≈ 0.012), indicating that the proposed approach explores alternative yet equally effective regions of the hypothesis space. The statistical significance and robustness of these improvements are confirmed through extensive validation using 1000 bootstrap replications, paired Student’s t-tests, Wilcoxon signed-rank tests, and Holm–Bonferroni correction. These results demonstrate that the proposed metaheuristic hybrid optimization framework constitutes a reliable, interpretable, and scalable alternative for training SVMs in complex and high-dimensional classification tasks. Full article
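The dual-constrained training idea in the abstract above can be sketched in a few lines. The snippet below is plain PSO maximizing a toy SVM dual objective under the box constraint 0 ≤ α ≤ C; the OCO competency mechanism and the equality constraint Σαᵢyᵢ = 0 from the paper are omitted, and the data, kernel, and swarm parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data (hypothetical stand-in for the paper's benchmarks).
X = rng.normal(size=(20, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
K = X @ X.T                      # linear kernel matrix
C = 2.0                          # box constraint on the multipliers

def dual_objective(alpha):
    """SVM dual: sum(alpha) - 0.5 * alpha^T (yy^T * K) alpha (to maximize)."""
    return alpha.sum() - 0.5 * alpha @ ((np.outer(y, y) * K) @ alpha)

# Minimal PSO over the dual variables; OCO's competency-based learning
# is not reproduced here -- this is plain PSO with box projection only.
n_particles, dim, iters = 30, len(y), 200
pos = rng.uniform(0, C, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([dual_objective(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, C)   # project back into 0 <= alpha <= C
    vals = np.array([dual_objective(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best dual objective found:", dual_objective(gbest))
```

Because the dual objective is concave over the box, even this unsophisticated swarm reliably climbs to a positive dual value on such a small problem.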

26 pages, 60486 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Viewed by 115
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain their engineering application. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralized areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor. 
Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones than to statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones even under extreme thresholds, suggesting that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction. Full article
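The multiple-comparison step mentioned in the abstract above (Benjamini–Hochberg control of the FDR at 0.05) is simple to reproduce. A minimal NumPy sketch, with hypothetical p-values standing in for the paper's pairwise Welch t-test results:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of p-values rejected at FDR level q (step-up)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * (np.arange(1, m + 1) / m)   # BH thresholds q*i/m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()       # largest i with p_(i) <= q*i/m
        reject[order[: k + 1]] = True        # reject that and all smaller p
    return reject

# Hypothetical p-values from pairwise architecture comparisons.
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.61]
print(benjamini_hochberg(pvals, q=0.05))
```

Note the step-up logic: a p-value can be rejected even if it exceeds its own threshold, as long as some larger p-value in the sorted list passes.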

20 pages, 4911 KB  
Article
Autonomous Real-Time Regional Risk Monitoring for Unmanned Swarm Systems
by Tianruo Cao, Yuxizi Zheng, Lijun Liu and Yongqi Pan
Mathematics 2026, 14(2), 259; https://doi.org/10.3390/math14020259 - 9 Jan 2026
Viewed by 117
Abstract
Existing State-of-the-Art (SOTA) methods for situational awareness typically rely on high-bandwidth transmission of raw data or computationally intensive models, which are often impractical for resource-constrained edge devices in unstable communication environments. To address these limitations, this paper introduces a comprehensive framework for Regional Risk Monitoring utilizing unmanned swarm systems. We propose an innovative knowledge distillation approach (SIKD) that leverages both soft label dark knowledge and inter-layer relationships, enabling compressed models to run in real time on edge nodes while maintaining high accuracy. Furthermore, recognition results are fused using Bayesian inference to dynamically update the regional risk level. Experimental results demonstrate the feasibility of the proposed framework. Quantitatively, the proposed SIKD algorithm reduces the model parameters by 52.34% and computational complexity to 44.21% of the original model, achieving a 3× inference speedup on edge CPUs. Furthermore, it outperforms state-of-the-art baseline methods (e.g., DKD and IRG) in terms of convergence speed and classification accuracy, ensuring robust real-time risk monitoring. Full article
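The Bayesian fusion step described above, in which recognition results dynamically update the regional risk level, can be illustrated with a discrete posterior update. The risk levels, prior, and likelihood tables below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Prior over regional risk levels (hypothetical labels and values).
levels = ["low", "medium", "high"]
prior = np.array([0.6, 0.3, 0.1])

# Likelihood P(observation | risk level) for each edge node's recognition
# result; the rows are illustrative, not from the paper.
likelihoods = [
    np.array([0.2, 0.5, 0.9]),   # node 1 reports an anomaly
    np.array([0.3, 0.6, 0.8]),   # node 2 reports an anomaly
]

posterior = prior.copy()
for lik in likelihoods:           # sequential Bayesian update, one node at a time
    posterior = posterior * lik
    posterior /= posterior.sum()  # renormalize to a probability vector

print(dict(zip(levels, posterior.round(3))))
```

Each node's evidence shifts probability mass from "low" toward the higher risk levels, which is exactly the dynamic risk-level update the framework performs after fusing recognition results.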

25 pages, 1075 KB  
Article
Prompt-Based Few-Shot Text Classification with Multi-Granularity Label Augmentation and Adaptive Verbalizer
by Deling Huang, Zanxiong Li, Jian Yu and Yulong Zhou
Information 2026, 17(1), 58; https://doi.org/10.3390/info17010058 - 8 Jan 2026
Viewed by 176
Abstract
Few-Shot Text Classification (FSTC) aims to classify text accurately into predefined categories using minimal training samples. Recently, prompt-tuning-based methods have achieved promising results by constructing verbalizers that map input data to the label space, thereby maximizing the utilization of pre-trained model features. However, existing verbalizer construction methods often rely on external knowledge bases, which require complex noise filtering and manual refinement, making the process time-consuming and labor-intensive, while approaches based on pre-trained language models (PLMs) frequently overlook inherent prediction biases. Furthermore, conventional data augmentation methods focus on modifying input instances while overlooking the integral role of label semantics in prompt tuning. This disconnection often leads to a trade-off where increased sample diversity comes at the cost of semantic consistency, resulting in marginal improvements. To address these limitations, this paper first proposes a novel Bayesian Mutual Information-based method that optimizes label mapping to retain general PLM features while reducing reliance on irrelevant or unfair attributes to mitigate latent biases. Based on this method, we propose two synergistic generators that synthesize semantically consistent samples by integrating label word information from the verbalizer to effectively enrich data distribution and alleviate sparsity. To guarantee the reliability of the augmented set, we propose a Low-Entropy Selector that serves as a semantic filter, retaining only high-confidence samples to safeguard the model against ambiguous supervision signals. Furthermore, we propose a Difficulty-Aware Adversarial Training framework that fosters generalized feature learning, enabling the model to withstand subtle input perturbations. 
Extensive experiments demonstrate that our approach outperforms state-of-the-art methods on most few-shot and full-data splits, with F1 score improvements of up to +2.8% on the standard AG’s News benchmark and +1.0% on the challenging DBPedia benchmark. Full article
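The Low-Entropy Selector described in the abstract above amounts to thresholding predictive entropy on the augmented samples. A minimal sketch with hypothetical class posteriors (the threshold value is illustrative):

```python
import numpy as np

def low_entropy_select(probs, tau=0.5):
    """Keep samples whose predictive entropy (in nats) is below tau."""
    probs = np.asarray(probs, dtype=float)
    ent = -(probs * np.log(np.clip(probs, 1e-12, 1.0))).sum(axis=1)
    return np.nonzero(ent < tau)[0], ent

# Hypothetical class posteriors for four synthesized samples.
probs = [[0.97, 0.02, 0.01],   # confident
         [0.70, 0.20, 0.10],   # borderline
         [0.40, 0.35, 0.25],   # ambiguous
         [0.90, 0.05, 0.05]]   # confident
keep, ent = low_entropy_select(probs, tau=0.5)
print("kept sample indices:", keep, "entropies:", ent.round(2))
```

Only the two confident samples survive at this threshold, which is the "semantic filter" behavior: ambiguous supervision signals never reach the model.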

22 pages, 2885 KB  
Article
Classifying National Pathways of Sustainable Development Through Bayesian Probabilistic Modelling
by Oksana Liashenko, Kostiantyn Pavlov, Olena Pavlova, Robert Chmura, Aneta Czechowska-Kosacka, Tetiana Vlasenko and Anna Sabat
Sustainability 2026, 18(2), 601; https://doi.org/10.3390/su18020601 - 7 Jan 2026
Viewed by 160
Abstract
As global efforts to achieve the Sustainable Development Goals (SDGs) enter a critical phase, there is a growing need for analytical tools that reflect the complexity and heterogeneity of development pathways. This study introduces a probabilistic classification framework designed to uncover latent typologies of national performance across the seventeen Sustainable Development Goals. Unlike traditional ranking systems or composite indices, the proposed method uses raw, standardised goal-level indicators and accounts for both structural variation and classification uncertainty. The model integrates a Bayesian decision tree with penalised spline regressions and includes regional covariates to capture context-sensitive dynamics. Based on publicly available global datasets covering more than 150 countries, the analysis identifies three distinct development profiles: structurally vulnerable systems, transitional configurations, and consolidated performers. Posterior probabilities enable soft classification, highlighting ambiguous or hybrid country profiles that do not fit neatly into a single category. Results reveal both monotonic and non-monotonic indicator behaviours, including saturation effects in infrastructure-related goals and paradoxical patterns in climate performance. This typology-sensitive approach provides a transparent and interpretable alternative to aggregated indices, supporting more differentiated and evidence-based sustainability assessments. The findings provide a practical basis for tailoring national strategies to structural conditions and the multidimensional nature of sustainable development. Full article
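Posterior-probability soft classification of the kind used above can be illustrated with a toy mixture model. The three one-dimensional Gaussian components below are stand-ins for the study's three development profiles, with invented parameters:

```python
import numpy as np

# Three development profiles as 1-D Gaussian components over a composite
# goal-level score (means/sds/priors are illustrative, not from the study).
means  = np.array([-1.5, 0.0, 1.5])    # vulnerable, transitional, consolidated
sds    = np.array([0.5, 0.6, 0.5])
priors = np.array([0.3, 0.4, 0.3])

def soft_classify(x):
    """Posterior P(profile | score): soft rather than hard assignment."""
    dens = np.exp(-0.5 * ((x - means) / sds) ** 2) / (sds * np.sqrt(2 * np.pi))
    post = priors * dens
    return post / post.sum()

print(soft_classify(-1.4).round(3))   # clearly in the vulnerable profile
print(soft_classify(0.8).round(3))    # ambiguous / hybrid country profile
```

The second score yields substantial posterior mass on two profiles at once, which is how soft classification surfaces the "hybrid" countries that hard rankings force into a single category.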

33 pages, 2607 KB  
Article
Efficient Blended Models for Analysis and Detection of Neuropathic Pain from EEG Signals Using Machine Learning
by Sunil Kumar Prabhakar, Keun-Tae Kim and Dong-Ok Won
Bioengineering 2026, 13(1), 67; https://doi.org/10.3390/bioengineering13010067 - 7 Jan 2026
Viewed by 203
Abstract
Neuropathic pain arises from damage to the nervous system and greatly affects the patient’s quality of life. Therefore, clinical evaluations are required to assess diagnostic outcomes precisely. Electroencephalography (EEG) signals provide a wealth of information about brain activity, and neuropathic pain can be assessed and classified with the aid of EEG and machine learning. In this work, two efficient blended models are proposed for the classification of neuropathic pain from EEG signals. In the first blended model, once the features are extracted using Discrete Wavelet Transform (DWT), statistical features, and Fuzzy C-Means (FCM) clustering techniques, the features are selected using Grey Wolf Optimization (GWO), Feature Correlation Clustering Technique (FCCT), F-test, and Bayesian Optimization Algorithm (BOA), and classification is performed with three hybrid models: a Spider Monkey Optimization-based Gradient Boosting Machine (SMO-GBM) classifier, a hybrid deep kernel learning with Support Vector Machine (DKL-SVM) classifier, and a CatBoost classifier. In the second blended model, once the features are extracted, the features are selected using Hybrid Feature Selection–Majority Voting System (HFS-MVS), Hybrid Salp Swarm Optimization–Particle Swarm Optimization (SSO-PSO), Pearson Correlation Coefficient (PCC), and Mutual Information (MI), and classification is performed with three hybrid models: Partial Least Squares (PLS) variant classification models combined with kernel-based SVM, an ensemble classification model with a soft voting strategy, and an Extreme Gradient Boosting (XGBoost) classifier. 
The proposed blended models are evaluated on a publicly available dataset and the best results are shown when the FCM features are selected with SSO-PSO feature selection technique and classified with Polynomial Kernel-based PLS-SVM Classifier, reporting a high classification accuracy of 92.68% in this work. Full article
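Of the feature selectors listed in the abstract above, Pearson Correlation Coefficient (PCC) ranking is the simplest to sketch. The data below are synthetic, with one feature deliberately made informative:

```python
import numpy as np

def pcc_rank(X, y, k=2):
    """Rank features by |Pearson correlation| with the label; keep top k."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    return np.argsort(-np.abs(r))[:k], r

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=100).astype(float)   # binary class labels
X = rng.normal(size=(100, 4))
X[:, 2] += 2.0 * y                               # feature 2 carries the signal
keep, r = pcc_rank(X, y, k=2)
print("selected features:", keep, "correlations:", r.round(2))
```

The informative feature dominates the ranking; in the paper this filter is one of several selectors whose outputs feed the downstream hybrid classifiers.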
(This article belongs to the Section Biosignal Processing)

39 pages, 3706 KB  
Article
Performance Assessment of DL for Network Intrusion Detection on a Constrained IoT Device
by Armin Mazinani, Daniele Antonucci, Luca Davoli and Gianluigi Ferrari
Future Internet 2026, 18(1), 34; https://doi.org/10.3390/fi18010034 - 7 Jan 2026
Viewed by 107
Abstract
This work investigates the deployment of Deep Learning (DL) models for network intrusion detection on resource-constrained IoT devices, using the public CICIoT2023 dataset. In particular, we consider the following DL models: Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), Temporal Convolutional Network (TCN), Multi-Layer Perceptron (MLP). Bayesian optimization is employed to fine-tune the models’ hyperparameters and ensure reliable performance evaluation across both binary (2-class) and multi-class (8-class, 34-class) intrusion detection. Then, the computational complexity of each DL model is analyzed—in terms of the number of Multiply–ACCumulate operations (MACCs), RAM usage, and inference time—through the STMicroelectronics Cube.AI Analyzer tool, with models being deployed on an STM32H7S78-DK board. To assess the practical deployability of the considered DL models, a trade-off score (balancing classification accuracy and computational efficiency) is introduced: according to this score, our experimental results indicate that MLP and TCN outperform the other models. Furthermore, Post-Training Quantization (PTQ) to 8-bit integer precision is applied, allowing the model size to be reduced by more than 90% with negligible performance degradation. This demonstrates the effectiveness of quantization in optimizing DL models for real-world deployment on resource-constrained IoT devices. Full article
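The Post-Training Quantization step can be illustrated with symmetric per-tensor int8 quantization. This is a generic sketch, not STMicroelectronics' Cube.AI pipeline, and the paper's >90% size reduction depends on its full toolchain rather than on this per-tensor scheme alone (float32 to int8 is a 75% cut per tensor):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w is approximated by scale * q."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(64, 64)).astype(np.float32)  # fake layer weights
q, scale = quantize_int8(w)

# Reconstruction error is bounded by half a quantization step (scale / 2).
err = np.abs(w - dequantize(q, scale)).max()
print(f"max abs error: {err:.5f}  (scale = {scale:.5f})")
```

The "negligible performance degradation" reported above is consistent with this picture: the worst-case weight perturbation is half a quantization step, tiny relative to typical weight magnitudes.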

15 pages, 1843 KB  
Article
Comparing Methods for Uncertainty Estimation of Paraganglioma Growth Predictions
by Evi M. C. Sijben, Vanessa Volz, Tanja Alderliesten, Peter A. N. Bosman, Berit M. Verbist, Erik F. Hensen and Jeroen C. Jansen
J. Otorhinolaryngol. Hear. Balance Med. 2026, 7(1), 3; https://doi.org/10.3390/ohbm7010003 - 6 Jan 2026
Viewed by 151
Abstract
Background: Paragangliomas of the head and neck are rare, benign and indolent to slow-growing tumors. Not all tumors require immediate active intervention, and surveillance is a viable management strategy in a large proportion of cases. Treatment decisions are based on several tumor- and patient-related factors, with the tumor progression rate being a predominant determinant. Accurate prediction of tumor progression has the potential to significantly improve treatment decisions by helping to identify patients who are likely to require active treatment in the future. It furthermore enables better-informed timing for follow-up, allowing early intervention for those who will ultimately need it, and optimization of the use of resources (such as MRI scans). Crucial to this is having reliable estimates of the uncertainty associated with a future growth forecast, so that this can be taken into account in the decision-making process. Methods: For various tumor growth prediction models, two methods for uncertainty estimation were compared: a historical-based one and a Bayesian one. We also investigated how incorporating either tumor-specific or general estimates of auto-segmentation uncertainty impacts the results of growth prediction. The performance of the uncertainty estimates was examined both from a technical and a practical perspective. Study design: Method comparison study. Results: Data of 208 patients were used, comprising 311 paragangliomas and 1501 volume measurements, resulting in 2547 tumor growth predictions (a median of 10 predictions per tumor). As expected, the uncertainty increased with the length of the prediction horizon and decreased with the inclusion of more tumor measurement data in the prediction model. The historical method resulted in estimated confidence intervals where the actual value fell within the estimated 95% confidence interval 94% of the time. 
However, this method resulted in confidence intervals that were too wide to be clinically useful (often over 200% of the predicted volume), and showed poor ability to differentiate growing and stable tumors. The estimated confidence intervals of the Bayesian method were much narrower. However, the 95% credible intervals were too narrow, with the true tumor volume falling within them only 78% of the time, indicating underestimation of uncertainty and insufficient calibration. Despite this, the Bayesian method showed markedly better ability to distinguish between growing and stable tumors, which arguably has the most practical value. When combining all growth models, the Bayesian method using tumor-specific auto-segmentation uncertainties resulted in an 86% correct classification of growing and non-growing tumors. Conclusions: Of the methods evaluated for predicting paraganglioma progression, the Bayesian method is the most useful in the considered context, because it shows the best ability to discriminate between growing and non-growing tumors. To determine how these methods could be used and what their value is for patients, they should be further evaluated in a clinical setting. Full article
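The coverage comparison above (94% vs. 78% of true values inside nominal 95% intervals) corresponds to a simple empirical check. A sketch with synthetic data and fixed interval bounds:

```python
import numpy as np

def empirical_coverage(y_true, lower, upper):
    """Fraction of true values falling inside their predicted intervals."""
    y_true, lower, upper = map(np.asarray, (y_true, lower, upper))
    return float(np.mean((y_true >= lower) & (y_true <= upper)))

rng = np.random.default_rng(0)
truth = rng.normal(size=2000)

# Well-calibrated 95% intervals vs. over-confident (too-narrow) ones,
# mimicking the historical vs. Bayesian behaviour reported above.
wide   = empirical_coverage(truth, -1.96, 1.96)   # ~0.95 for N(0,1)
narrow = empirical_coverage(truth, -1.20, 1.20)   # well below the nominal 0.95
print(f"wide: {wide:.3f}  narrow: {narrow:.3f}")
```

A method whose empirical coverage falls well below its nominal level, like the narrow intervals here, is under-reporting its uncertainty, which is exactly the calibration failure the Bayesian method exhibits.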
(This article belongs to the Section Head and Neck Surgery)

30 pages, 4494 KB  
Article
An Uncertainty-Aware Bayesian Deep Learning Method for Automatic Identification and Capacitance Estimation of Compensation Capacitors
by Tongdian Wang and Pan Wang
Sensors 2026, 26(1), 279; https://doi.org/10.3390/s26010279 - 2 Jan 2026
Viewed by 365
Abstract
This paper addresses the challenges of misclassification and reliability assessment in compensation capacitor detection under strong noise in high-speed railway track circuits. A hierarchical Bayesian deep learning framework is proposed, integrating multi-domain signal enhancement in the time, frequency, and time–frequency (TF) domains with bidirectional long short-term memory (BiLSTM) sequence modeling for robust feature extraction. Bayesian classification and regression based on Monte Carlo (MC) Dropout and stochastic weight averaging Gaussian (SWAG) enable posterior inference, confidence interval estimation, and uncertainty-aware prediction, while a rejection mechanism filters low-confidence outputs. Experiments on 8782 real-world segments from five railway lines show that the proposed method achieves 97.8% state-recognition accuracy, a mean absolute error of 0.084 μF, and an R² of 0.96. It further outperforms threshold-based, convolutional neural network (CNN), and standard BiLSTM models in negative log-likelihood (NLL), expected calibration error (ECE), and overall calibration quality, approaching the theoretical 95% interval coverage. The framework substantially improves robustness, accuracy, and reliability, providing a viable solution for intelligent monitoring and safety assurance of compensation capacitors in track circuits. Full article
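MC Dropout, one of the two posterior-inference tools above, can be sketched with a stochastic stand-in model. The weights, dropout rate, and rejection threshold below are illustrative, not the paper's; a real network would re-sample dropout masks inside every layer at test time:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_logits(x, T=100, p_drop=0.5):
    """Stand-in for T stochastic forward passes of a dropout network.

    A single fixed linear 'layer' with Bernoulli-masked, inverted-dropout
    scaled weights illustrates the idea; the weights are fake.
    """
    W = np.array([[2.0, -1.0], [-1.5, 1.8]])
    outs = []
    for _ in range(T):
        mask = rng.random(W.shape) > p_drop          # dropout mask per pass
        outs.append((W * mask / (1 - p_drop)) @ x)
    return np.array(outs)                            # shape (T, n_classes)

def predict_with_rejection(x, tau=0.7):
    logits = mc_dropout_logits(x)
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    mean = probs.mean(axis=0)                        # MC-averaged posterior
    conf = mean.max()
    return (int(mean.argmax()), conf) if conf >= tau else ("reject", conf)

print(predict_with_rejection(np.array([1.0, 0.0])))    # likely accepted
print(predict_with_rejection(np.array([0.5, 0.45])))   # likely rejected
```

Averaging the softmax over stochastic passes lowers the confidence on ambiguous inputs, so the rejection mechanism filters exactly the predictions whose posterior is spread across classes.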

43 pages, 6158 KB  
Article
A Multi-Fish Tracking and Behavior Modeling Framework for High-Density Cage Aquaculture
by Xinyao Xiao, Tao Liu, Shuangyan He, Peiliang Li, Yanzhen Gu, Pixue Li and Jiang Dong
Sensors 2026, 26(1), 256; https://doi.org/10.3390/s26010256 - 31 Dec 2025
Viewed by 284
Abstract
Multi-fish tracking and behavior analysis in deep-sea cages face two critical challenges: first, the homogeneity of fish appearance and low image quality render appearance-based association unreliable; second, standard linear motion models fail to capture the complex, nonlinear swimming patterns (e.g., turning) of fish, leading to frequent identity switches and fragmented trajectories. To address these challenges, we propose SOD-SORT, which integrates a Constant Turn-Rate and Velocity (CTRV) motion model within an Extended Kalman Filter (EKF) framework into DeepOCSORT, a recent observation-centric tracker. Through systematic Bayesian optimization of the EKF process noise (Q), observation noise (R), and ReID weighting parameters, we achieve harmonious integration of advanced motion modeling with appearance features. Evaluations on the DeepBlueI validation set show that SOD-SORT attains IDF1 = 0.829 and reduces identity switches by 13% (93 vs. 107) compared to the DeepOCSORT baseline, while maintaining comparable MOTA (0.737). Controlled ablation studies reveal that naive integration of CTRV-EKF with default parameters degrades performance substantially (IDs: 172 vs. 107 baseline), but careful parameter optimization resolves this motion-appearance conflict. Furthermore, we introduce a statistical quantization method that converts variable-length trajectories into fixed-length feature vectors, enabling effective unsupervised classification of normal and abnormal swimming behaviors in both the Fish4Knowledge coral reef dataset and real-world Deep Blue I cage videos. The proposed approach demonstrates that principled integration of advanced motion models with appearance cues, combined with high-quality continuous trajectories, can support reliable behavior modeling for aquaculture monitoring applications. Full article
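The CTRV motion model at the core of SOD-SORT propagates position along a circular arc when the turn rate is non-zero. A minimal deterministic prediction step (the EKF Jacobians and process noise Q from the paper are omitted; the example trajectory is illustrative):

```python
import numpy as np

def ctrv_predict(state, dt):
    """One CTRV prediction step; state = [x, y, v, yaw, yaw_rate]."""
    x, y, v, psi, w = state
    if abs(w) > 1e-6:                      # turning: move along a circular arc
        x += v / w * (np.sin(psi + w * dt) - np.sin(psi))
        y += v / w * (-np.cos(psi + w * dt) + np.cos(psi))
    else:                                  # near-zero turn rate: straight line
        x += v * np.cos(psi) * dt
        y += v * np.sin(psi) * dt
    return np.array([x, y, v, psi + w * dt, w])

# A fish swimming at 1 m/s while turning at 0.5 rad/s traces a semicircle
# of radius v/w = 2 after pi/w seconds, ending up facing the opposite way.
s0 = np.array([0.0, 0.0, 1.0, 0.0, 0.5])
s1 = ctrv_predict(s0, dt=np.pi / 0.5)
print(s1.round(3))
```

A constant-velocity model would predict a straight line here, which is precisely the mismatch on turning fish that causes the identity switches the paper sets out to fix.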

28 pages, 1477 KB  
Review
Solar-Assisted Thermochemical Valorization of Agro-Waste to Biofuels: Performance Assessment and Artificial Intelligence Application Review
by Balakrishnan Varun Kumar, Sassi Rekik, Delmaria Richards and Helmut Yabar
Waste 2026, 4(1), 2; https://doi.org/10.3390/waste4010002 - 31 Dec 2025
Viewed by 238
Abstract
The rapid growth and seasonal availability of agricultural materials, such as straws, stalks, husks, shells, and processing wastes, present both a disposal challenge and an opportunity for renewable fuel production. Solar-assisted thermochemical conversion, such as solar-driven pyrolysis, gasification, and hydrothermal routes, provides a pathway to produce bio-oils, syngas, and upgraded chars with substantially reduced fossil energy inputs compared to conventional thermal systems. Recent experimental research and plant-level techno-economic studies suggest that integrating concentrating solar power (CSP) collectors, falling particle receivers, or solar microwave hybrid heating with thermochemical reactors can reduce fossil auxiliary energy demand and enhance life-cycle greenhouse gas (GHG) performance. The primary challenges are operational intermittency and the capital costs of solar collectors. In parallel, machine learning (ML) and AI tools (surrogate models, Bayesian optimization, physics-informed neural networks) are accelerating feedstock screening, process control, and multi-objective optimization, significantly reducing experimental burden and improving the predictability of yields and emissions. This review synthesizes recent experimental, modeling, and techno-economic literature to propose a unified classification of feedstocks, solar-integration modes, and AI roles. It identifies the most urgent research needs: standardized AI-ready datasets, long-term field demonstrations with thermal storage (e.g., integrating PCM), hybrid physics-ML models for interpretability, and region-specific TEA/LCA frameworks. Recommended reporting metrics and a reproducible dataset template are provided to accelerate translation from laboratory research to farm-level deployment. Full article

23 pages, 3769 KB  
Article
Partial Discharge Pattern Recognition of GIS with Time–Frequency Energy Grayscale Maps and an Improved Variational Bayesian Autoencoder
by Yuhang He, Yuan Fang, Zongxi Zhang, Dianbo Zhou, Shaoqing Chen and Shi Jing
Energies 2026, 19(1), 127; https://doi.org/10.3390/en19010127 - 25 Dec 2025
Viewed by 321
Abstract
Partial discharge pattern recognition is a crucial task for assessing the insulation condition of Gas-Insulated Switchgear (GIS). However, the on-site environment presents challenges such as strong electromagnetic interference, leading to acquired signals with a low signal-to-noise ratio (SNR). Furthermore, traditional pattern recognition methods based on statistical parameters suffer from redundant and inefficient features that compromise classification accuracy, while existing artificial-intelligence-based classification methods lack the ability to quantify the uncertainty in defect classification. To address these issues, this paper proposes a novel GIS partial discharge pattern recognition method based on time–frequency energy grayscale maps and an improved variational Bayesian autoencoder. Firstly, a denoising-based approximate message passing algorithm is employed to sample and denoise the discharge signals, which enhances the SNR while simultaneously reducing the number of sampling points. Subsequently, a two-dimensional time–instantaneous frequency energy grayscale map of the discharge signal is constructed based on the Hilbert–Huang Transform and energy grayscale mapping, effectively extracting key time–frequency features. Finally, an improved variational Bayesian autoencoder is utilized for the unsupervised learning of the image features, establishing a GIS defect classification method with an associated confidence level by integrating probabilistic features. Validation based on measured data demonstrates the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Operation, Control, and Planning of New Power Systems)
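The time–frequency energy grayscale map described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it uses a plain FFT-based Hilbert transform to obtain instantaneous amplitude and frequency, skipping the empirical mode decomposition step of the full Hilbert–Huang Transform, and all bin counts and the toy signal are assumptions.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same result as scipy.signal.hilbert)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def energy_grayscale_map(signal, fs, n_time=32, n_freq=32):
    """Bin instantaneous energy over (time, instantaneous frequency)
    and normalize the grid to an 8-bit grayscale image."""
    z = analytic_signal(signal)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz, length N-1
    energy = amplitude[:-1] ** 2
    t = np.arange(len(inst_freq)) / fs
    grid, _, _ = np.histogram2d(
        t, inst_freq, bins=(n_time, n_freq),
        range=[[0.0, t[-1]], [0.0, fs / 2]], weights=energy)
    grid /= grid.max() + 1e-12
    return (grid * 255).astype(np.uint8)

# toy "discharge": damped 1 kHz oscillation sampled at 10 kHz
fs = 10_000
t = np.arange(0, 0.05, 1 / fs)
pulse = np.exp(-200 * t) * np.sin(2 * np.pi * 1_000 * t)
img = energy_grayscale_map(pulse, fs)
print(img.shape)  # (32, 32)
```

The resulting image concentrates its bright pixels near the pulse's instantaneous frequency, which is the kind of compact feature the paper's autoencoder would then learn from.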

35 pages, 3818 KB  
Article
Machine Learning-Based QSAR Screening of Colombian Medicinal Flora for Potential Antiviral Compounds Against Dengue Virus: An In Silico Drug Discovery Approach
by Sergio Andrés Montenegro-Herrera, Anibal Sosa, Isabella Echeverri-Jiménez, Rafael Santiago Castaño-Valencia and Alejandra María Jerez-Valderrama
Pharmaceuticals 2025, 18(12), 1906; https://doi.org/10.3390/ph18121906 - 18 Dec 2025
Abstract
Background/Objectives: Colombia harbors exceptional plant diversity, comprising over 31,000 formally identified species, of which approximately 6000 are classified as useful plants. Among these, 2567 species possess documented food and medicinal applications, with several traditionally utilized for managing febrile illnesses. Despite the global burden of dengue virus infection affecting millions annually, no specific antiviral therapy has been established. This study aimed to identify potential anti-dengue compounds from Colombian medicinal flora through machine learning-based quantitative structure–activity relationship (QSAR) modeling. Methods: An optimized XGBoost algorithm was developed through Bayesian hyperparameter optimization (Optuna, 50 trials) and trained on 2034 ChEMBL-derived activity records with experimentally validated anti-dengue activity (IC50/EC50). The model incorporated 887 molecular features comprising 43 physicochemical descriptors and 844 ECFP4 fingerprint bits selected via variance-based filtering. IC50 and EC50 endpoints were modeled independently based on their pharmacological distinction and negligible correlation (r = −0.04, p = 0.77). Through a systematic literature review, 2567 Colombian plant species from the Humboldt Institute’s official checklist were evaluated (2501 after removing duplicates and infraspecific taxa), identifying 358 with documented antiviral properties. Phytochemical analysis of 184 characterized species yielded 3267 unique compounds for virtual screening. A dual-endpoint classification strategy categorized compounds into nine activity classes based on combined potency thresholds (Low: pActivity ≤ 5.0, Medium: 5.0 < pActivity ≤ 6.0, High: pActivity > 6.0). Results: The optimized model achieved robust performance (Matthews correlation coefficient: 0.583; ROC-AUC: 0.896), validated through hold-out testing (MCC: 0.576) and Y-randomization (p < 0.01). 
Virtual screening identified 276 compounds (8.4%) with high predicted potency for both endpoints (“High-High”). Structural novelty analysis revealed that all 276 compounds exhibited Tanimoto similarity < 0.5 to the training set (median: 0.214), representing 145 unique Murcko scaffolds of which 144 (99.3%) were absent from the training data. Application of drug-likeness filtering (QED ≥ 0.5) and applicability domain assessment identified 15 priority candidates. In silico ADMET profiling revealed favorable pharmaceutical properties, with Incartine (pIC50: 6.84, pEC50: 6.13, QED: 0.83), Bilobalide (pIC50: 6.78, pEC50: 6.07, QED: 0.56), and Indican (pIC50: 6.73, pEC50: 6.11, QED: 0.51) exhibiting the highest predicted potencies. Conclusions: This systematic computational screening of Colombian medicinal flora demonstrates the untapped potential of regional biodiversity for anti-dengue drug discovery. The identified candidates, representing structurally novel chemotypes, are prioritized for experimental validation. Full article
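The dual-endpoint, nine-class labeling scheme above can be sketched directly from the stated thresholds (Low: pActivity ≤ 5.0, Medium: 5.0 < pActivity ≤ 6.0, High: pActivity > 6.0). The helper names and the molar-to-pActivity conversion are illustrative assumptions, not the authors' code.

```python
import math

def p_activity(conc_molar):
    """pActivity = -log10(concentration in mol/L); e.g. 1 uM -> 6.0."""
    return -math.log10(conc_molar)

def potency_band(p):
    """Thresholds from the abstract: Low <= 5.0 < Medium <= 6.0 < High."""
    if p <= 5.0:
        return "Low"
    if p <= 6.0:
        return "Medium"
    return "High"

def dual_label(pic50, pec50):
    """One of the 3 x 3 = 9 combined activity classes."""
    return f"{potency_band(pic50)}-{potency_band(pec50)}"

print(dual_label(6.84, 6.13))             # High-High (Incartine's reported values)
print(dual_label(p_activity(1e-6), 5.4))  # Medium-Medium
```

Compounds falling in the "High-High" cell are the 276 candidates the study carries forward to drug-likeness and applicability-domain filtering.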

29 pages, 12360 KB  
Article
Vision-Guided Dynamic Risk Assessment for Long-Span PC Continuous Rigid-Frame Bridge Construction Through DEMATEL–ISM–DBN Modelling
by Linlin Zhao, Qingfei Gao, Yidian Dong, Yajun Hou, Liangbo Sun and Wei Wang
Buildings 2025, 15(24), 4543; https://doi.org/10.3390/buildings15244543 - 16 Dec 2025
Abstract
In response to the challenges posed by the complex evolution of risks and the static nature of traditional assessment methods during the construction of long-span prestressed concrete (PC) continuous rigid-frame bridges, this study proposes a risk assessment framework that integrates visual perception with dynamic probabilistic reasoning. By combining an improved YOLOv8 model with the Decision-Making Trial and Evaluation Laboratory–Interpretive Structural Modeling (DEMATEL–ISM) algorithm, the framework achieves intelligent identification of risk elements and causal structure modelling. On this basis, a dynamic Bayesian network (DBN) is constructed, incorporating a sliding window and forgetting factor mechanism to enable adaptive updating of conditional probability tables. Using the Tongshun River Bridge as a case study, at the identification layer, we refine onsite targets into 14 risk elements (F1–F14). For visualization, these are aggregated into four categories—“Bridge, Person, Machine, Environment”—to enhance readability. In the methodology layer, leveraging a priori causal information provided by DEMATEL–ISM, risk elements are mapped to scenario probabilities, enabling scenario-level risk assessment and grading. This establishes a traceable closed-loop process from “elements” to “scenarios.” The results demonstrate that the proposed approach effectively identifies key risk chains within the “human–machine–environment–bridge” system, revealing phase-specific peaks in human-related risks and cumulative increases in structural and environmental risks. The particle filter and Monte Carlo prediction outputs generate short-term risk evolution curves with confidence intervals, facilitating the quantitative classification of risk levels. 
Overall, this vision-guided dynamic risk assessment method significantly enhances the real-time responsiveness, interpretability, and foresight of bridge construction safety management and provides a promising pathway for proactive risk control in complex engineering environments. Full article
(This article belongs to the Special Issue Big Data and Machine/Deep Learning in Construction)
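The "forgetting factor" update of conditional probability tables mentioned in the abstract can be illustrated with a simple exponential-forgetting count scheme: old evidence is down-weighted by a factor λ before each new observation is added, so recent construction-phase data dominate the CPT. The function names, λ value, and Dirichlet smoothing are assumptions for illustration, not the paper's exact mechanism.

```python
import numpy as np

def update_counts(counts, parent_state, child_state, lam=0.95):
    """Exponentially forget old evidence, then add the new observation.
    counts[i, j] is the weighted count of child state j given parent state i."""
    counts = lam * counts
    counts[parent_state, child_state] += 1.0
    return counts

def cpt_from_counts(counts, prior=1.0):
    """Dirichlet-smoothed conditional probability table P(child | parent)."""
    smoothed = counts + prior
    return smoothed / smoothed.sum(axis=1, keepdims=True)

counts = np.zeros((2, 3))  # e.g. 2 parent states, 3 child risk levels
for parent, child in [(0, 0), (0, 0), (0, 2), (1, 1)]:
    counts = update_counts(counts, parent, child, lam=0.9)
cpt = cpt_from_counts(counts)
print(np.allclose(cpt.sum(axis=1), 1.0))  # True
```

A sliding window achieves a similar effect by dropping observations older than the window outright; exponential forgetting is the smooth variant of the same idea.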

15 pages, 1377 KB  
Article
Fault Detection and Classification of Power Lines Based on Bayes–LSTM–Attention
by Chen Yang, Hao Li, Wenhui Zeng, Jiayuan Fan and Zhichao Ren
Energies 2025, 18(24), 6483; https://doi.org/10.3390/en18246483 - 11 Dec 2025
Abstract
As a critical component of the power system, transmission lines play a significant role in ensuring the safe and stable operation of the power grid. To address the challenge of accurately characterizing complex and diverse fault types, this paper proposes a fault detection and classification method for power lines that integrates Bayesian Reasoning (BR), Long Short-Term Memory (LSTM) networks, and the Attention mechanism. This approach effectively improves the accuracy of fault classification. Bayesian Reasoning is used to adjust the hyperparameters of the LSTM, while the LSTM network processes sequential data efficiently through its gating mechanism. The self-Attention mechanism adaptively assigns weights by focusing on the relationships between information at different positions in the sequence, capturing global dependencies. Test results demonstrate that the proposed Bayes–LSTM–Attention model achieves a fault classification accuracy of 94.5% for transmission lines, a significant improvement compared to the average accuracy of 80% achieved by traditional SVM multi-class classifiers. This indicates that the model has high precision in classifying transmission line faults. Additionally, the evaluation of classification results using the polygon area metric shows that the model exhibits balanced and robust performance in fault classification. Full article
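The self-attention step described above—adaptively weighting relationships between positions in the sequence—can be sketched in isolation as scaled dot-product attention over LSTM hidden states. This NumPy sketch omits the LSTM, the Bayesian hyperparameter tuning, and any learned projection matrices; the sequence length and hidden size are arbitrary assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h):
    """Scaled dot-product self-attention over a sequence of hidden states.
    h: (seq_len, d) -> context: (seq_len, d), weights: (seq_len, seq_len)."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)       # pairwise similarity of time steps
    weights = softmax(scores, axis=-1)  # each row is a distribution over steps
    return weights @ h, weights

rng = np.random.default_rng(0)
hidden = rng.normal(size=(6, 8))        # e.g. 6 time steps, 8 hidden units
context, attn = self_attention(hidden)
print(context.shape, attn.shape)        # (6, 8) (6, 6)
```

Because each row of the attention matrix sums to 1, every output step is a convex combination of all hidden states, which is how the mechanism captures the global dependencies the abstract refers to.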
