Search Results (451)

Search Parameters:
Keywords = Bayesian transformation

26 pages, 1858 KB  
Review
Artificial Intelligence in Lubricant Research—Advances in Monitoring and Predictive Maintenance
by Raj Shah, Kate Marussich, Vikram Mittal and Andreas Rosenkranz
Lubricants 2026, 14(2), 72; https://doi.org/10.3390/lubricants14020072 - 3 Feb 2026
Abstract
Artificial intelligence transforms lubricant research by linking molecular modeling, diagnostics, and industrial operations into predictive systems. In this regard, machine learning methods such as Bayesian optimization and neural-based Quantitative Structure–Property/Tribological Relationship (QSPR/QSTR) modeling help to accelerate additive design and formulation development. Moreover, deep learning and hybrid physics–AI frameworks are now capable of predicting key lubricant properties such as viscosity, oxidation stability, and wear resistance directly from molecular or spectral data, reducing the need for long-duration field trials like fleet or engine endurance tests. With respect to condition monitoring, convolutional neural networks automate wear debris classification, multimodal sensor fusion enables real-time oil health tracking, and digital twins provide predictive maintenance by forecasting lubricant degradation and optimizing drain intervals. AI-assisted blending and process control platforms extend these advantages into manufacturing, reducing waste and improving reproducibility. This article sheds light on recent progress in AI-driven formulation, monitoring, and maintenance, and identifies major barriers to adoption such as fragmented datasets, limited model transferability, and low explainability. Moreover, it discusses how standardized data infrastructures, physics-informed learning, and secure federated approaches can advance the industry toward adaptive, sustainable lubricant development under the principles of Industry 5.0. Full article
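
As a hedged illustration of the Bayesian-optimization workflow this review surveys, the sketch below tunes two hypothetical additive concentrations against a simulated friction response using scikit-optimize; the objective function, variable ranges, and `simulated_friction_test` are placeholders for illustration, not the setup of any study cited here.

```python
# Minimal Bayesian-optimization sketch for additive formulation (illustrative only).
# Assumes scikit-optimize is installed; the objective stands in for a bench test.
from skopt import gp_minimize
from skopt.space import Real

def simulated_friction_test(params):
    """Placeholder for a friction/wear measurement at a given formulation."""
    zddp_wt, modtc_wt = params
    # Toy response surface with an interior optimum; a real study would query a rig or model.
    return (zddp_wt - 0.8) ** 2 + (modtc_wt - 0.5) ** 2 + 0.05 * zddp_wt * modtc_wt

search_space = [
    Real(0.1, 1.5, name="zddp_wt_percent"),   # anti-wear additive concentration
    Real(0.1, 1.0, name="modtc_wt_percent"),  # friction-modifier concentration
]

result = gp_minimize(
    simulated_friction_test,  # expensive black-box objective
    search_space,
    n_calls=25,               # evaluation budget
    random_state=0,
)
print("best formulation:", result.x, "predicted friction metric:", result.fun)
```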

21 pages, 615 KB  
Article
A New Hybrid Weibull–Exponentiated Rayleigh Distribution: Theory, Asymmetry Properties, and Applications
by Tolulope Olubunmi Adeniji and Akinwumi Sunday Odeyemi
Symmetry 2026, 18(2), 264; https://doi.org/10.3390/sym18020264 - 31 Jan 2026
Viewed by 80
Abstract
The choice of probability distribution is strongly data-dependent, as observed in several studies. Given the central role of statistical distributions in predictive analytics, researchers have continued to develop new models that accurately capture underlying data behaviours. This study proposes the Hybrid Weibull–Exponentiated Rayleigh distribution, developed by compounding the Weibull and Exponentiated Rayleigh distributions via the T-X transformation framework. The new three-parameter distribution is formulated to provide a flexible modelling framework capable of handling data exhibiting non-monotone failure rates. The properties of the proposed distribution, such as the cumulative distribution function, probability density function, survival function, hazard function, linear representation, moments, and entropy, are studied. We estimate the parameters of the distribution using the Maximum Likelihood Estimation technique. Furthermore, the impact of the proposed distribution parameters on the distribution’s shape is studied, particularly its symmetry properties. The shape of the distribution varies with its parameter values, thereby enabling it to model diverse data patterns. This flexibility makes it especially useful for describing the presence or absence of symmetry in real-world failure processes. Simulation studies are conducted to assess the behaviour of the estimators under different parameter settings. The proposed distribution is applied to real-world data to demonstrate its performance. Comparative analysis is performed against other well-established models. The results indicate that, based on the Akaike Information Criterion and Bayesian Information Criterion, the proposed distribution outperforms other models in terms of goodness-of-fit, demonstrating its potential as a superior alternative for modelling lifetime data and reliability analysis. Full article
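
For readers unfamiliar with the T-X framework invoked above, one common form of the construction (using the usual W(F) = -log(1-F) link; the authors' exact parameterization may differ, e.g., fixing the Weibull scale to obtain three parameters) is sketched below.

```latex
% T-X construction: generator T with cdf R on [0,\infty), baseline cdf F,
% link W(F) = -\log(1-F).
\[
  G(x) \;=\; \int_{0}^{-\log\{1-F(x)\}} r(t)\,\mathrm{d}t
        \;=\; R\!\bigl(-\log\{1-F(x)\}\bigr).
\]
% With a Weibull generator R(t) = 1 - e^{-(t/\lambda)^{c}} and an
% exponentiated Rayleigh baseline F(x) = (1 - e^{-\theta x^{2}})^{\alpha}:
\[
  G(x) \;=\; 1 - \exp\!\left[-\left(\frac{-\log\bigl\{1-(1-e^{-\theta x^{2}})^{\alpha}\bigr\}}{\lambda}\right)^{\!c}\,\right],
\]
% which reduces to a three-parameter family when, for instance, \lambda = 1.
```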

27 pages, 1881 KB  
Article
From Latent Manifolds to Targeted Molecular Probes: An Interpretable, Kinome-Scale Generative Machine Learning Framework for Family-Based Kinase Ligand Design
by Gennady Verkhivker, Ryan Kassab and Keerthi Krishnan
Biomolecules 2026, 16(2), 209; https://doi.org/10.3390/biom16020209 - 29 Jan 2026
Viewed by 308
Abstract
Scaffold-aware artificial intelligence (AI) models enable systematic exploration of chemical space conditioned on protein-interacting ligands, yet the representational principles governing their behavior remain poorly understood. The computational representation of structurally complex kinase small molecules remains a formidable challenge due to the high conservation of ATP active site architecture across the kinome and the topological complexity of structural scaffolds in current generative AI frameworks. In this study, we present a diagnostic, modular, and chemistry-first generative framework for the design of targeted SRC kinase ligands by integrating ChemVAE-based latent space modeling, a chemically interpretable structural similarity metric (Kinase Likelihood Score), Bayesian optimization, and cluster-guided local neighborhood sampling. Using a comprehensive dataset of protein kinase ligands, we examine scaffold topology, latent-space geometry, and model-driven generative trajectories. We show that chemically distinct scaffolds can converge toward overlapping latent representations, revealing intrinsic degeneracy in scaffold encoding, while specific topological motifs function as organizing anchors that constrain generative diversification. The results demonstrate that kinase scaffolds spanning 37 protein kinase families spontaneously organize into a coherent, low-dimensional manifold in latent space, with SRC-like scaffolds acting as a structural “hub” that enables rational scaffold transformation. Our local sampling approach successfully converts scaffolds from other kinase families (notably LCK) into novel SRC-like chemotypes, with LCK-derived molecules accounting for ~40% of high-similarity outputs. However, both generative strategies reveal a critical limitation: SMILES-based representations systematically fail to recover multi-ring aromatic systems—a topological hallmark of kinase chemotypes—despite ring count being a top feature in our structural similarity metric. This “representation gap” demonstrates that no amount of scoring refinement can compensate for a generative engine that cannot access topologically constrained regions. By diagnosing these constraints within a transparent pipeline and reframing scaffold-aware ligand design as a problem of molecular representation, our work provides a conceptual framework for interpreting generative model behavior and for guiding the incorporation of structural priors into future molecular AI architectures. Full article
(This article belongs to the Special Issue Cancer Biology: Machine Learning and Bioinformatics)

18 pages, 2401 KB  
Article
An Efficient Pedestrian Gender Recognition Method Based on Key Area Feature Extraction and Information Fusion
by Ye Zhang, Weidong Yan, Guoqi Liu, Ning Jin and Lu Han
Appl. Sci. 2026, 16(3), 1298; https://doi.org/10.3390/app16031298 - 27 Jan 2026
Viewed by 119
Abstract
Aiming to address the problems of scale uncertainty, feature extraction difficulty, model training difficulty, poor real-time performance, and sample imbalance in low-resolution images for gender recognition, this study proposes an efficient pedestrian gender recognition model based on key area feature extraction and fusion. First, a discrete cosine transform (DCT)-based local super-resolution preprocessing algorithm is developed for facial image gender recognition. Then, a key area feature extraction and information fusion model is designed, using additional appearance features to assist in gender recognition and improve accuracy. The proposed model preprocesses images using the DCT image fusion and super-resolution methods, dividing pedestrian images into three regions: face, hair, and lower body (legs). Features are separately extracted from each of the three image regions. Finally, a multi-region local gender recognition classifier is designed and trained, employing decision-level information fusion. The results of the three local classifiers are fused using a Bayesian computation-based fusion strategy to obtain the final recognition result of a pedestrian’s gender. This study uses surveillance video data to create a dataset for experimental comparison. Experimental results demonstrate the superiority of the proposed approach. The facial model (DCT-PFSR-CNN) achieved the best accuracy of 89% and an F1-Score of 0.88. Furthermore, the complete pedestrian model (MPGRM) attained an mAP of 0.85 and an AUC of 0.86, surpassing the strongest baseline (HDFL) by 2.4% in mAP and 2.3% in AUC. These results confirm the high application potential of the proposed method for gender recognition in real-world surveillance scenarios. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
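
A minimal sketch of Bayesian decision-level fusion in the spirit described above: per-region posteriors from face, hair, and lower-body classifiers are combined under a conditional-independence assumption. The function name, prior, and example posteriors are illustrative, not the paper's implementation.

```python
import numpy as np

def bayes_fuse(region_posteriors, prior=np.array([0.5, 0.5])):
    """Fuse per-region class posteriors P(gender | region_i) into one decision.

    Assumes the regions are conditionally independent given the class, so
    P(gender | all regions) is proportional to prior * prod_i P(gender | region_i) / prior.
    """
    region_posteriors = np.asarray(region_posteriors, dtype=float)  # shape (n_regions, 2)
    log_post = np.log(prior) + np.sum(
        np.log(region_posteriors) - np.log(prior), axis=0
    )
    post = np.exp(log_post - log_post.max())   # stabilize before normalizing
    return post / post.sum()

# Example: face, hair, and lower-body classifiers each emit [P(female), P(male)].
fused = bayes_fuse([[0.80, 0.20],   # face classifier (DCT super-resolved crop)
                    [0.55, 0.45],   # hair classifier
                    [0.65, 0.35]])  # lower-body classifier
print("fused posterior:", fused, "-> predicted class:", int(fused.argmax()))
```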

22 pages, 31480 KB  
Article
Bayesian Inference of Primordial Magnetic Field Parameters from CMB with Spherical Graph Neural Networks
by Juan Alejandro PintoCastro, Héctor J. Hortúa, Jorge Enrique García-Farieta and Roger Anderson Hurtado
Universe 2026, 12(2), 34; https://doi.org/10.3390/universe12020034 - 26 Jan 2026
Viewed by 157
Abstract
Deep learning has emerged as a transformative methodology in modern cosmology, providing powerful tools to extract meaningful physical information from complex astronomical data. This paper implements a novel Bayesian graph deep learning framework for estimating key cosmological parameters in a primordial magnetic field (PMF) cosmology from simulated Cosmic Microwave Background (CMB) maps. Our methodology utilizes DeepSphere, a spherical convolutional neural network architecture specifically designed to respect the spherical geometry of CMB data through HEALPix pixelization. To advance beyond deterministic point estimates and enable robust uncertainty quantification, we integrate Bayesian Neural Networks (BNNs) into the framework, capturing aleatoric and epistemic uncertainties that reflect the model's confidence in its predictions. The proposed approach demonstrates exceptional performance, achieving R2 scores exceeding 89% for the magnetic parameter estimation. We further obtain well-calibrated uncertainty estimates through post hoc training techniques including Variance Scaling and GPNormal. This integrated DeepSphere-BNNs framework delivers accurate parameter estimation from CMB maps with PMF contributions while providing reliable uncertainty quantification, enabling robust cosmological inference in the era of precision cosmology. Full article
(This article belongs to the Section Astroinformatics and Astrostatistics)
25 pages, 2206 KB  
Article
Adaptive Bayesian System Identification for Long-Term Forecasting of Industrial Load and Renewables Generation
by Lina Sheng, Zhixian Wang, Xiaowen Wang and Linglong Zhu
Electronics 2026, 15(3), 530; https://doi.org/10.3390/electronics15030530 - 26 Jan 2026
Viewed by 116
Abstract
The expansion of renewables in modern power systems and the coordinated development of upstream and downstream industrial chains are promoting a shift on the utility side from traditional settlement by energy toward operation driven by data and models. Industrial electricity consumption data exhibit pronounced multi-scale temporal structures and sectoral heterogeneity, which makes it challenging to perform unified long-term load and generation forecasting while maintaining accuracy, interpretability, and scalability. From a modern system identification perspective, this paper proposes a System Identification in Adaptive Bayesian Framework (SIABF) for medium- and long-term industrial load forecasting based on daily freeze electricity time series. By combining daily aggregation of high-frequency data, frequency domain analysis, sparse identification, and long-term extrapolation, we first construct daily freeze series from 15 min measurements, and then we apply discrete Fourier transforms and a spectral complexity index to extract dominant periodic components and build an interpretable sinusoidal basis library. A sparse regression formulation with ℓ1 regularization is employed to select a compact set of key basis functions, yielding concise representations of sector and enterprise load profiles and naturally supporting multivariate and joint multi-sector modeling. Building on this structure, we implement a state-space-implicit physics-informed Bayesian forecasting model and evaluate it on real data from three representative sectors, namely, steel, photovoltaics, and chemical, using one year of 15 min measurements. Under a one-month-ahead evaluation, the proposed framework attains a Mean Absolute Percentage Error (MAPE) of 4.5% for a representative PV-related customer case and achieves low single-digit MAPE for high-inertia sectors, often outperforming classical statistical models, sparse learning baselines, and deep learning architectures. These results should be interpreted as indicative given the limited time span and sample size, and broader multi-year, population-level validation is warranted. Full article
(This article belongs to the Section Systems & Control Engineering)
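
To make the sparse-identification step concrete, here is a hedged sketch that extracts dominant periods from a daily load series by DFT, builds a sinusoidal basis, and selects a compact subset with ℓ1-regularized regression. The basis library, thresholds, and the Bayesian state-space layer of SIABF are not reproduced; all names and settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_sinusoid_model(y, n_candidate_freqs=10, alpha=0.05):
    """Select a compact sinusoidal basis for a daily load series via L1 regression."""
    t = np.arange(len(y))
    spectrum = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(len(y), d=1.0)                 # cycles per day
    top = np.argsort(np.abs(spectrum))[::-1][:n_candidate_freqs]
    top = top[freqs[top] > 0]                              # drop the DC bin

    # Candidate basis: a sin/cos pair for each dominant frequency, plus a trend term.
    basis = [t]  # linear trend
    for k in top:
        basis.append(np.sin(2 * np.pi * freqs[k] * t))
        basis.append(np.cos(2 * np.pi * freqs[k] * t))
    X = np.column_stack(basis)

    model = Lasso(alpha=alpha, max_iter=50_000).fit(X, y)  # L1 shrinks weak terms to zero
    return model, X, freqs[top]

# Usage: y_daily is a 1-D array of daily-frozen energy values for one sector.
# model, X, dominant_freqs = fit_sparse_sinusoid_model(y_daily)
# To extrapolate, rebuild the same basis on future time indices and call model.predict.
```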
40 pages, 9833 KB  
Article
Decision-Level Fusion of PS-InSAR and Optical Data for Landslide Susceptibility Mapping Using Wavelet Transform and MAMBA
by Hongyi Guo, Antonio M. Martínez-Graña, Leticia Merchán, Agustina Fernández and Manuel Casado
Land 2026, 15(2), 211; https://doi.org/10.3390/land15020211 - 26 Jan 2026
Viewed by 144
Abstract
Landslides remain a critical geohazard in mountainous regions, where intensified extreme rainfall and rapid land-use changes exacerbate slope instability, challenging the reliability of traditional single-sensor susceptibility assessments. To overcome the limitations of data heterogeneity and noise, this study presents a decision-level fusion strategy integrating Permanent Scatterer InSAR (PS-InSAR) deformation dynamics with multi-source optical remote sensing indicators via a Wavelet Transform (WT) enhanced Multi-source Additive Model Based on Bayesian Analysis (MAMBA). San Martín del Castañar (Spain), a region characterized by rugged terrain and active deformation, served as the study area. We utilized Sentinel-1A C-band datasets (January 2020–February 2025) as the primary source for continuous monitoring, complemented by L-band ALOS-2 observations to ensure coherence in vegetated zones, yielding 24,102 high-quality persistent scatterers. The WT-based multi-scale enhancement improved the signal-to-noise ratio by 23.5% and increased deformation anomaly detection by 18.7% across 24,102 validated persistent scatterers. Bayesian fusion within MAMBA produced high-resolution susceptibility maps, indicating that very-high and high susceptibility zones occupy 24.0% of the study area while capturing 84.5% of the inventoried landslides. Quantitative validation against 1247 landslide events (2020–2025) achieved an AUC of 0.912, an overall accuracy of 87.3%, and a recall of 84.5%, outperforming Random Forest, Logistic Regression, and Frequency Ratio models by 6.8%, 10.8%, and 14.3%, respectively (p < 0.001). Statistical analysis further demonstrates a strong geo-ecological coupling, with landslide susceptibility significantly correlated with ecological vulnerability (r = 0.72, p < 0.01), while SHapley Additive exPlanations identify land-use type, rainfall, and slope as the dominant controlling factors. Full article
(This article belongs to the Special Issue Ground Deformation Monitoring via Remote Sensing Time Series Data)
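
A hedged sketch of the kind of wavelet-based denoising used to raise PS-InSAR signal-to-noise before fusion, applying PyWavelets soft thresholding to a single deformation time series. The paper's multi-scale enhancement and the MAMBA fusion itself are not reproduced; wavelet choice, level, and threshold rule below are assumptions.

```python
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=3):
    """Soft-threshold detail coefficients of a persistent-scatterer deformation series."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise scale estimated from the finest-scale details, universal threshold rule.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, wavelet)
    return denoised[: len(series)]  # waverec may pad odd-length inputs by one sample

# Usage: los_displacement is the line-of-sight displacement history of one scatterer.
# clean = wavelet_denoise(los_displacement)
```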

21 pages, 6173 KB  
Article
Adaptive Digital Twin Framework for PMSM Thermal Safety Monitoring: Integrating Bayesian Self-Calibration with Hierarchical Physics-Aware Network
by Jinqiu Gao, Junze Luo, Shicai Yin, Chao Gong, Saibo Wang and Gerui Zhang
Machines 2026, 14(2), 138; https://doi.org/10.3390/machines14020138 - 24 Jan 2026
Viewed by 225
Abstract
To address the limitations of parameter drift in physical models and poor generalization in data-driven methods, this paper proposes a self-evolving digital twin framework for PMSM thermal safety. The framework integrates a dynamic-batch Bayesian calibration (DBBC) algorithm and a hierarchical physics-aware network (HPA-Net). First, the DBBC eliminates plant–model mismatch by robustly identifying stochastic parameters from operational data. Subsequently, the HPA-Net adopts a “physics-augmented” strategy, utilizing the calibrated physical model as a dynamic prior to directly infer high-fidelity temperature via a hierarchical training scheme. Furthermore, a real-time demagnetization safety margin (DSM) monitoring strategy is integrated to eliminate “false safe” zones. Experimental validation on a PMSM test bench confirms the superior performance of the proposed framework, which achieves a Root Mean Square Error (RMSE) of 0.919 °C for the stator winding and 1.603 °C for the permanent magnets. The proposed digital twin ensures robust thermal safety even under unseen operating conditions, transforming the monitoring system into a proactive safety guardian. Full article

21 pages, 1463 KB  
Article
A Mathematical Framework for E-Commerce Sales Prediction Using Attention-Enhanced BiLSTM and Bayesian Optimization
by Hao Hu, Jinshun Cai and Chenke Xu
Math. Comput. Appl. 2026, 31(1), 17; https://doi.org/10.3390/mca31010017 - 22 Jan 2026
Viewed by 84
Abstract
Accurate sales prediction is crucial for inventory and marketing in e-commerce. Cross-border sales involve complex patterns that traditional models cannot capture. To address this, we propose an improved Bidirectional Long Short-Term Memory (BiLSTM) model, enhanced with an attention mechanism and Bayesian hyperparameter optimization. The attention mechanism focuses on key temporal features, improving trend identification. The BiLSTM captures both forward and backward dependencies, offering deeper insights into sales patterns. Bayesian optimization fine-tunes hyperparameters such as learning rate, hidden-layer size, and dropout rate to achieve optimal performance. These innovations together improve forecasting accuracy, making the model more adaptable and efficient for cross-border e-commerce sales. Experimental results show that the model achieves a Root Mean Square Error (RMSE) of 13.2, a Mean Absolute Error (MAE) of 10.2, a Mean Absolute Percentage Error (MAPE) of 8.7 percent, and a Coefficient of Determination (R2) of 0.92. It outperforms baseline models, including BiLSTM (RMSE 16.5, MAPE 10.9 percent), BiLSTM with Attention (RMSE 15.2, MAPE 10.1 percent), Temporal Convolutional Network (RMSE 15.0, MAPE 9.8 percent), and Transformer for Time Series (RMSE 14.8, MAPE 9.5 percent). These results highlight the model’s superior performance in forecasting cross-border e-commerce sales, making it a valuable tool for inventory management and demand planning. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
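
The sketch below shows one standard way to place an additive attention layer over BiLSTM outputs for one-step sales forecasting. Layer sizes, the pooling choice, and the class name are illustrative assumptions rather than the paper's exact architecture, and the Bayesian hyperparameter search is omitted.

```python
import torch
import torch.nn as nn

class AttnBiLSTMForecaster(nn.Module):
    """BiLSTM encoder with attention pooling over time for one-step forecasting."""

    def __init__(self, n_features, hidden_size=64, dropout=0.2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True,
                            bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_size, 1)   # scores each time step
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(2 * hidden_size, 1)         # next-step sales

    def forward(self, x):                                  # x: (batch, time, features)
        h, _ = self.lstm(x)                                # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn_score(h), dim=1) # attention over time steps
        context = (weights * h).sum(dim=1)                 # weighted temporal pooling
        return self.head(self.dropout(context)).squeeze(-1)

# Usage with a dummy batch of 32 series, 28 time steps, 6 input features:
model = AttnBiLSTMForecaster(n_features=6)
y_hat = model(torch.randn(32, 28, 6))   # shape: (32,)
```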

22 pages, 3607 KB  
Article
A Feature Engineering and XGBoost Framework for Prediction of TOC from Conventional Logs in the Dongying Depression, Bohai Bay Basin
by Zexi Zhao, Guoyun Zhong, Fan Diao, Peng Ding and Jianfeng He
Geosciences 2026, 16(1), 44; https://doi.org/10.3390/geosciences16010044 - 19 Jan 2026
Viewed by 279
Abstract
Total organic carbon (TOC) is a critical parameter for evaluating shale source rock quality and hydrocarbon generation potential. However, accurate TOC estimation from conventional well logs remains challenging, especially in data-limited geological settings. This study proposes an optimized XGBoost model for TOC prediction using conventional logging data from the Shahejie Formation in the Dongying Depression, Bohai Bay Basin, China. We systematically transform four standard logs—resistivity, acoustic transit time, density, and neutron porosity—into 165 candidate features through multi-scale smoothing, statistical derivation, interaction term creation, and spectral transformation. A two-stage feature selection process, which combines univariate filtering with recursive feature elimination and is further refined by principal component analysis, identifies ten optimal predictors. The model hyperparameters are optimized via Bayesian search within the Optuna framework to minimize cross-validation error. The optimized model achieves an R2 of 0.9395, with a Mean Absolute Error (MAE) of 0.3392, a Root Mean Squared Error (RMSE) of 0.4259, and a Normalized Root Mean Squared Error (NRMSE) of 0.0604 on the test set, demonstrating excellent predictive accuracy and generalization capability. This study provides a reliable and interpretable methodology for TOC characterization, offering a valuable reference for source rock evaluation in analogous shale formations and sedimentary basins. Full article
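
A compact, hedged sketch of Bayesian hyperparameter search for an XGBoost regressor inside Optuna, in the spirit of the tuning step described above. The search ranges, fold count, and the synthetic feature matrix standing in for the ten selected log-derived predictors are placeholders, not the study's configuration.

```python
import optuna
import xgboost as xgb
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for ten log-derived features and TOC labels.
X, y = make_regression(n_samples=500, n_features=10, noise=0.3, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 800),
        "max_depth": trial.suggest_int("max_depth", 2, 8),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = xgb.XGBRegressor(**params, random_state=0)
    # Minimize cross-validated RMSE (scorer is negated, hence the leading minus sign).
    rmse = -cross_val_score(model, X, y,
                            scoring="neg_root_mean_squared_error", cv=5).mean()
    return rmse

study = optuna.create_study(direction="minimize")   # TPE sampler by default
study.optimize(objective, n_trials=50)
print("best params:", study.best_params, "best CV RMSE:", study.best_value)
```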

28 pages, 2028 KB  
Article
Dynamic Resource Games in the Wood Flooring Industry: A Bayesian Learning and Lyapunov Control Framework
by Yuli Wang and Athanasios V. Vasilakos
Algorithms 2026, 19(1), 78; https://doi.org/10.3390/a19010078 - 16 Jan 2026
Viewed by 186
Abstract
Wood flooring manufacturers face complex challenges in dynamically allocating resources across multi-channel markets, characterized by channel conflicts, demand uncertainty, and long-term cumulative effects of decisions. Traditional static optimization or myopic approaches struggle to address these intertwined factors, particularly when critical market states like brand reputation and customer base cannot be precisely observed. This paper establishes a systematic and theoretically grounded online decision framework to tackle this problem. We first model the problem as a Partially Observable Stochastic Dynamic Game. The core innovation lies in introducing an unobservable market position vector as the central system state, whose evolution is jointly influenced by firm investments, inter-channel competition, and macroeconomic randomness. The model further captures production lead times, physical inventory dynamics, and saturation/cross-channel effects of marketing investments, constructing a high-fidelity dynamic system. To solve this complex model, we propose a hierarchical online learning and control algorithm named L-BAP (Lyapunov-based Bayesian Approximate Planning), which innovatively integrates three core modules. It employs particle filters for Bayesian inference to nonparametrically estimate latent market states online. Simultaneously, the algorithm constructs a Lyapunov optimization framework that transforms long-term discounted reward objectives into tractable single-period optimization problems through virtual debt queues, while ensuring stability of physical systems like inventory. Finally, the algorithm embeds a game-theoretic module to predict and respond to rational strategic reactions from each channel. We provide theoretical performance analysis, rigorously proving the mean-square boundedness of system queues and deriving the performance gap between long-term rewards and optimal policies under complete information. This bound clearly quantifies the trade-off between estimation accuracy (determined by particle count) and optimization parameters. Extensive simulations demonstrate that our L-BAP algorithm significantly outperforms several strong baselines—including myopic learning and decentralized reinforcement learning methods—across multiple dimensions: long-term profitability, inventory risk control, and customer service levels. Full article
(This article belongs to the Section Analysis of Algorithms and Complexity Theory)
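
To illustrate the particle-filter component of L-BAP, here is a minimal bootstrap filter tracking a one-dimensional latent "market position" from noisy observations. The transition and observation models are toy assumptions, not the paper's game-theoretic dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(observations, n_particles=1000,
                              process_std=0.05, obs_std=0.2):
    """Track a scalar latent market-position state from noisy observations."""
    particles = rng.normal(0.0, 1.0, n_particles)        # initial belief
    estimates = []
    for z in observations:
        # Propagate: latent state drifts with demand/marketing noise (toy dynamics).
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight by the observation likelihood (toy linear-Gaussian sensor model).
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        estimates.append(particles.mean())                # posterior mean estimate
    return np.array(estimates)

# Usage: noisy_signal could be a de-trended channel sales series observed each period.
noisy_signal = np.cumsum(rng.normal(0, 0.05, 100)) + rng.normal(0, 0.2, 100)
latent_track = bootstrap_particle_filter(noisy_signal)
```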

21 pages, 16271 KB  
Article
Soybean Leaf Disease Recognition Methods Based on Hyperparameter Transfer and Progressive Fine-Tuning of Large Models
by Xiaoming Li, Wenxue Bian, Boyu Yang, Yongguang Li, Shiqi Wang, Ning Qin, Shanglong Ye, Zunyang Bao and Hongmin Sun
Agronomy 2026, 16(2), 218; https://doi.org/10.3390/agronomy16020218 - 16 Jan 2026
Viewed by 249
Abstract
Early recognition of crop diseases is essential for ensuring agricultural security and improving yield. However, traditional CNN-based methods often suffer from limited generalization when training data are scarce or when applied to transfer scenarios. To address these challenges, this study adopts the multimodal large model Qwen2.5-VL as the core and targets three major soybean leaf diseases along with healthy samples. We propose a parameter-efficient adaptation framework that integrates cross-architecture hyperparameter transfer and progressive fine-tuning. The framework utilizes a Vision Transformer (ViT) as an auxiliary model, where Bayesian optimization is applied to obtain optimal hyperparameters that are subsequently transferred to Qwen2.5-VL. Combined with existing low-rank adaptation (LoRA) and a multi-stage training strategy, the framework achieves efficient convergence and robust generalization with limited data. To systematically evaluate the model’s multi-scale visual adaptability, experiments were conducted using low-resolution, medium-resolution, and high-resolution inputs. The results demonstrate that Qwen2.5-VL achieves an average zero-shot accuracy of 71.72%. With the proposed cross-architecture hyperparameter transfer and parameter-efficient tuning strategy, accuracy improves to 88.72%, and further increases to 93.82% when progressive fine-tuning is applied. The model also maintains an accuracy of 91.0% under cross-resolution evaluation. Overall, the proposed method exhibits strong performance in recognition accuracy, feature discriminability, and multi-scale robustness, providing an effective reference for adapting multimodal large language models to plant disease identification tasks. Full article
(This article belongs to the Special Issue Digital Twins in Precision Agriculture)

9 pages, 661 KB  
Article
Extracting Weight of Evidence from p-Value via Bayesian Approach to Activation Likelihood Estimation Meta-Analysis
by Tommaso Costa, Jordi Manuello, Franco Cauda, Annachiara Crocetta and Donato Liloia
Brain Sci. 2026, 16(1), 87; https://doi.org/10.3390/brainsci16010087 - 12 Jan 2026
Viewed by 214
Abstract
Background: p-values are ubiquitous in scientific research, yet they fundamentally fail to quantify the strength of evidence for or against competing hypotheses. This limitation is particularly problematic in neuroimaging meta-analyses, where researchers need to assess how strongly the available data support specific and spatially consistent patterns of brain activation across studies. Methods: In this work, we present a practical approach that transforms p-values into their corresponding upper bounds on the Bayes factor, which quantify the maximum plausible evidence in favor of the alternative hypothesis given the observed data. The method is illustrated within the framework of Activation Likelihood Estimation, the most widely used coordinate-based meta-analytic technique in neuroimaging and applied to a reference dataset comprising 73 finger-tapping experiments. Results: The results show that effects traditionally classified as statistically significant using the canonical Activation Likelihood Estimation framework actually span a wide range of evidential strengths, with Bayes factor bounds varying approximately from 46 to 410. This finding reveals substantial heterogeneity in weight of evidence that is concealed by conventional threshold-based inference. Conclusion: By enabling the construction of voxel-wise maps of evidential strength while remaining fully compatible with existing analysis pipelines, the proposed approach helps to avoid common misinterpretations of p-values and improves the interpretability and reliability of neuroimaging meta-analytic conclusions. It therefore provides a conservative, Bayesian-inspired complement to standard significance maps. Full article
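
For context on how a p-value maps to an upper bound on the Bayes factor, the widely used Sellke–Bayarri–Berger calibration (which may or may not be the exact bound adopted in this paper) is shown below.

```latex
% Upper bound on the Bayes factor in favour of H1, valid for p < 1/e:
\[
  \overline{\mathrm{BF}}_{10}(p) \;=\; \frac{1}{-\,e\,p\,\ln p}, \qquad p < e^{-1}.
\]
% Worked example: p = 0.001 gives
\[
  \overline{\mathrm{BF}}_{10}(0.001) \;=\; \frac{1}{e \times 0.001 \times \ln(1000)} \;\approx\; 53,
\]
% i.e., even a "highly significant" voxel supports H1 by at most a factor of roughly 53.
```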

22 pages, 3447 KB  
Article
Leveraging Machine Learning Flood Forecasting: A Multi-Dimensional Approach to Hydrological Predictive Modeling
by Ghazi Al-Rawas, Mohammad Reza Nikoo, Nasim Sadra and Malik Al-Wardy
Water 2026, 18(2), 192; https://doi.org/10.3390/w18020192 - 12 Jan 2026
Viewed by 264
Abstract
Flash flood events are some of the most life-threatening natural disasters, so it is important to predict extreme rainfall events effectively. This study introduces an LSTM model with a customized loss function designed for extreme rainfall prediction. The proposed model incorporates dynamic environmental variables, such as rainfall, LST, and NDVI, together with additional static variables such as soil type and proximity to infrastructure. Wavelet transformation decomposes the time series into low- and high-frequency components to isolate long-term trends and short-term events. Model performance was compared against Random Forest (RF), Support Vector Machines (SVMs), Artificial Neural Networks (ANNs), and an LSTM-RF ensemble. The custom loss LSTM achieved the best performance (MAE = 0.022 mm/day, RMSE = 0.110 mm/day, R2 = 0.807, SMAPE = 7.62%), with statistical validation via a Kruskal–Wallis ANOVA confirming that the improvement is significant. Model uncertainty is quantified using a Bayesian MCMC framework, yielding posterior estimates and credible intervals that explicitly characterize predictive uncertainty under extreme rainfall conditions. The sensitivity analysis highlights rainfall and LST as the most influential predictors, while wavelet decomposition provides multi-scale insights into environmental dynamics. The study concludes that customized loss functions can be highly effective in extreme rainfall event prediction and thus useful in managing flash flood events. Full article
(This article belongs to the Section Hydrology)
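
One plausible reading of the "customized loss function" is an error term that up-weights high-rainfall targets. The sketch below implements such a weighted MSE in PyTorch purely as an assumption for illustration, since the abstract does not give the exact loss; `extreme_weighted_mse` and its parameters are hypothetical names.

```python
import torch

def extreme_weighted_mse(pred, target, threshold, extreme_weight=5.0):
    """MSE that penalizes errors on extreme-rainfall samples more heavily.

    `threshold` is a rainfall level (e.g., a high quantile of the training data)
    above which a sample is treated as an extreme event.
    """
    weights = torch.where(target > threshold,
                          torch.full_like(target, extreme_weight),
                          torch.ones_like(target))
    return (weights * (pred - target) ** 2).mean()

# Usage inside a training loop (threshold fixed from the training distribution):
# loss = extreme_weighted_mse(model(x_batch), y_batch, threshold=train_q95)
# loss.backward()
```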

26 pages, 60486 KB  
Article
Spatiotemporal Prediction of Ground Surface Deformation Using TPE-Optimized Deep Learning
by Maoqi Liu, Sichun Long, Tao Li, Wandi Wang and Jianan Li
Remote Sens. 2026, 18(2), 234; https://doi.org/10.3390/rs18020234 - 11 Jan 2026
Viewed by 222
Abstract
Surface deformation induced by the extraction of natural resources constitutes a non-stationary spatiotemporal process. Modeling surface deformation time series obtained through Interferometric Synthetic Aperture Radar (InSAR) technology using deep learning methods is crucial for disaster prevention and mitigation. However, the complexity of model hyperparameter configuration and the lack of interpretability in the resulting predictions constrain its engineering applications. To enhance the reliability of model outputs and their decision-making value for engineering applications, this study presents a workflow that combines a Tree-structured Parzen Estimator (TPE)-based Bayesian optimization approach with ensemble inference. Using the Rhineland coalfield in Germany as a case study, we systematically evaluated six deep learning architectures in conjunction with various spatiotemporal coding strategies. Pairwise comparisons were conducted using a Welch t-test to evaluate the performance differences across each architecture under two parameter-tuning approaches. The Benjamini–Hochberg method was applied to control the false discovery rate (FDR) at 0.05 for multiple comparisons. The results indicate that TPE-optimized models demonstrate significantly improved performance compared to their manually tuned counterparts, with the ResNet+Transformer architecture yielding the most favorable outcomes. A comprehensive analysis of the spatial residuals further revealed that TPE optimization not only enhances average accuracy, but also mitigates the model’s prediction bias in fault zones and mineralized areas by improving the spatial distribution structure of errors. Based on this optimal architecture, we combined the ten highest-performing models from the optimization stage to generate a quantile-based susceptibility map, using the ensemble median as the central predictor. Uncertainty was quantified from three complementary perspectives: ensemble spread, class ambiguity, and classification confidence. Our analysis revealed spatial collinearity between physical uncertainty and absolute residuals. This suggests that uncertainty is more closely related to the physical complexity of geological discontinuities and human-disturbed zones, rather than statistical noise. In the analysis of super-threshold probability, the threshold sensitivity exhibited by the mining area reflects the widespread yet moderate impact of mining activities. By contrast, the fault zone continues to exhibit distinct high-probability zones, even under extreme thresholds. This indicates that fault-controlled deformation is more physically intense and poses a greater risk of disaster than mining activities. Finally, we propose an engineering decision strategy that combines uncertainty and residual spatial patterns. This approach transforms statistical diagnostics into actionable, tiered control measures, thereby increasing the practical value of susceptibility mapping in the planning of natural resource extraction. Full article
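
A hedged numpy sketch of the ensemble step described above: stacking per-pixel susceptibility scores from the ten retained models, taking the median as the central map, an inter-quantile spread as one uncertainty layer, and a super-threshold probability. Variable names, quantile levels, and the 0.5 threshold are illustrative assumptions.

```python
import numpy as np

def ensemble_susceptibility(member_maps, low_q=0.1, high_q=0.9):
    """Combine per-model susceptibility maps into a central map plus uncertainty layers.

    member_maps: array of shape (n_models, height, width) with scores in [0, 1].
    """
    stack = np.asarray(member_maps)
    central = np.median(stack, axis=0)                     # ensemble median map
    spread = np.quantile(stack, high_q, axis=0) - np.quantile(stack, low_q, axis=0)
    exceed_p = (stack > 0.5).mean(axis=0)                  # super-threshold probability
    return central, spread, exceed_p

# Usage: maps from the ten best TPE-optimized models, each of shape (H, W).
# central, spread, exceed_p = ensemble_susceptibility(ten_model_maps)
```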