Search Results (2,769)

Search Parameters:
Keywords = Bayesian estimate

22 pages, 4837 KiB  
Article
Leveraging Historical Process Data for Recombinant P. pastoris Fermentation Hybrid Deep Modeling and Model Predictive Control Development
by Emils Bolmanis, Vytautas Galvanauskas, Oskars Grigs, Juris Vanags and Andris Kazaks
Fermentation 2025, 11(7), 411; https://doi.org/10.3390/fermentation11070411 (registering DOI) - 17 Jul 2025
Abstract
Hybrid modeling techniques are increasingly important for improving predictive accuracy and control in biomanufacturing, particularly in data-limited conditions. This study develops and experimentally validates a hybrid deep learning model predictive control (MPC) framework for recombinant P. pastoris fed-batch fermentations. Bayesian optimization and grid search techniques were employed to identify the best-performing hybrid model architecture: an LSTM layer with 2 hidden units followed by a fully connected layer with 8 nodes and ReLU activation. This design balanced accuracy (NRMSE 4.93%) and computational efficiency (AICc 998). This architecture was adapted to a new, smaller dataset of bacteriophage Qβ coat protein production using transfer learning, yielding strong predictive performance with low validation (3.53%) and test (5.61%) losses. Finally, the hybrid model was integrated into a novel MPC system and experimentally validated, demonstrating robust real-time substrate feed control that maintains specific target growth rates. The system achieved predictive accuracies of 6.51% for biomass and 14.65% for product estimation, with an average tracking error of 10.64%. In summary, this work establishes a robust, adaptable, and efficient hybrid modeling framework for MPC in P. pastoris bioprocesses. By integrating automated architecture search, transfer learning, and MPC, the approach offers a practical and generalizable solution for real-time control and supports scalable digital twin deployment in industrial biotechnology. Full article
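The MPC idea in this first article can be sketched independently of the paper's LSTM: at each control step, choose the substrate feed rate whose predicted specific growth rate best tracks a target. The surrogate model and all numbers below are hypothetical stand-ins, not the authors' fitted model.

```python
def surrogate_mu(feed_rate, biomass):
    """Toy one-step predictor: specific growth rate vs. feed (hypothetical form)."""
    return 0.25 * feed_rate / (feed_rate + 0.1 * biomass)

def mpc_step(biomass, mu_target, candidate_feeds):
    """Pick the candidate feed rate minimizing squared tracking error."""
    return min(candidate_feeds,
               key=lambda f: (surrogate_mu(f, biomass) - mu_target) ** 2)

feeds = [0.01 * k for k in range(1, 101)]   # candidate feed rates, L/h
best = mpc_step(biomass=10.0, mu_target=0.1, candidate_feeds=feeds)
```

In the paper, the grid search over candidates would be replaced by optimization against the hybrid LSTM model at every sampling instant.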

36 pages, 1465 KiB  
Article
USV-Affine Models Without Derivatives: A Bayesian Time-Series Approach
by Malefane Molibeli and Gary van Vuuren
J. Risk Financial Manag. 2025, 18(7), 395; https://doi.org/10.3390/jrfm18070395 (registering DOI) - 17 Jul 2025
Abstract
We investigate affine term structure models (ATSMs) with unspanned stochastic volatility (USV). Our aim is to test their ability to generate accurate cross-sectional behavior and time-series dynamics of bond yields. Comparing the restricted models and those with USV, we test whether they produce both reasonable short-rate variance estimates and a good cross-sectional fit. In principle, risk-neutral dynamics in ATSMs should be estimated jointly from time-series and options data. Due to the scarcity of derivative data in emerging markets, we estimate the model using only the time series of bond yields. A Bayesian estimation approach combining Markov Chain Monte Carlo (MCMC) and the Kalman filter is employed to recover the model parameters and filter out latent state variables. We further incorporate macroeconomic indicators and GARCH-based volatility as external validation of the filtered latent volatility process. The A1(4)USV performs better both in and out of sample, even though the tension between time series and cross-section remains unresolved. Our findings suggest that even without derivative instruments, it is possible to identify and interpret risk-neutral dynamics and volatility risk using observable time-series data. Full article
(This article belongs to the Section Financial Markets)
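The estimation strategy in this article pairs MCMC over parameters with a Kalman filter over latent states. The filtering half can be sketched with a scalar recursion; the AR(1) state, observation model, and noise values below are illustrative assumptions, not the paper's specification.

```python
def kalman_step(m, P, y, phi, q, r):
    """One predict/update step for x_t = phi*x_{t-1} + w_t, y_t = x_t + v_t."""
    m_pred = phi * m                     # predicted state mean
    P_pred = phi * phi * P + q           # predicted state variance
    K = P_pred / (P_pred + r)            # Kalman gain
    m_new = m_pred + K * (y - m_pred)    # correct with the innovation
    P_new = (1.0 - K) * P_pred
    return m_new, P_new

m, P = 0.0, 1.0                          # diffuse-ish initial state
for y in [0.8, 1.1, 0.9, 1.0]:           # toy yield observations
    m, P = kalman_step(m, P, y, phi=0.95, q=0.01, r=0.05)
```

In the full scheme, each MCMC draw of (phi, q, r) would rerun this filter to evaluate the likelihood of the yield series.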

26 pages, 1772 KiB  
Article
Navigating Structural Shocks: Bayesian Dynamic Stochastic General Equilibrium Approaches to Forecasting Macroeconomic Stability
by Dongxue Wang and Yugang He
Mathematics 2025, 13(14), 2288; https://doi.org/10.3390/math13142288 - 16 Jul 2025
Abstract
This study employs a dynamic stochastic general equilibrium model with Bayesian estimation to rigorously evaluate China’s macroeconomic responses to cost-push, monetary policy, and foreign income shocks. This analysis leverages quarterly data from 2000 to 2024, focusing on critical variables such as the output gap, inflation, interest rates, exchange rates, consumption, investment, and employment. The results demonstrate significant social welfare losses primarily arising from persistent inflation and output volatility due to domestic structural rigidities and global market dependencies. Monetary policy interventions effectively moderate short-term volatility but induce welfare costs if overly restrictive. The findings underscore the necessity of targeted structural reforms to enhance economic flexibility, balanced monetary policy to mitigate aggressive interventions, and diversified economic strategies to reduce external vulnerability. These insights contribute novel policy perspectives for enhancing China’s macroeconomic stability and resilience. Full article
(This article belongs to the Special Issue Time Series Forecasting for Economic and Financial Phenomena)

20 pages, 774 KiB  
Article
Robust Variable Selection via Bayesian LASSO-Composite Quantile Regression with Empirical Likelihood: A Hybrid Sampling Approach
by Ruisi Nan, Jingwei Wang, Hanfang Li and Youxi Luo
Mathematics 2025, 13(14), 2287; https://doi.org/10.3390/math13142287 - 16 Jul 2025
Abstract
Since the advent of composite quantile regression (CQR), its inherent robustness has established it as a pivotal methodology for high-dimensional data analysis. High-dimensional outlier contamination refers to data scenarios where the number of observed dimensions (p) is much greater than the sample size (n) and there are extreme outliers in the response variables or covariates (e.g., p/n > 0.1). Traditional penalized regression techniques, however, exhibit notable vulnerability to data outliers during high-dimensional variable selection, often leading to biased parameter estimates and compromised resilience. To address this critical limitation, we propose a novel empirical likelihood (EL)-based variable selection framework that integrates a Bayesian LASSO penalty within the composite quantile regression framework. By constructing a hybrid sampling mechanism that incorporates the Expectation–Maximization (EM) algorithm and Metropolis–Hastings (M-H) algorithm within the Gibbs sampling scheme, this approach effectively tackles variable selection in high-dimensional settings with outlier contamination. This innovative design enables simultaneous optimization of regression coefficients and penalty parameters, circumventing the need for ad hoc selection of optimal penalty parameters—a long-standing challenge in conventional LASSO estimation. Moreover, the proposed method imposes no restrictive assumptions on the distribution of random errors in the model. Through Monte Carlo simulations under outlier interference and empirical analysis of two U.S. house price datasets, we demonstrate that the new approach significantly enhances variable selection accuracy, reduces estimation bias for key regression coefficients, and exhibits robust resistance to data outlier contamination. Full article
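The robustness of CQR comes from averaging the quantile check (pinball) loss over several quantile levels, each with its own intercept. A minimal sketch of that composite loss, with the Bayesian LASSO penalty and the EM/M-H Gibbs sampler omitted:

```python
def check_loss(u, tau):
    """Quantile (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def cqr_loss(residuals, taus, intercepts):
    """Composite quantile loss: sum the check loss over quantile levels,
    each with its own intercept b_k, and over observations."""
    return sum(check_loss(r - b, tau)
               for tau, b in zip(taus, intercepts)
               for r in residuals)

# symmetric residuals at three quantile levels (illustrative values)
total = cqr_loss([1.0, -1.0], taus=[0.25, 0.5, 0.75], intercepts=[0.0, 0.0, 0.0])
```

Because each large residual enters linearly rather than quadratically, a single outlier cannot dominate the fit the way it does under squared error.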

22 pages, 5236 KiB  
Article
Research on Slope Stability Based on Bayesian Gaussian Mixture Model and Random Reduction Method
by Jingrong He, Tao Deng, Shouxing Peng, Xing Pang, Daochun Wan, Shaojun Zhang and Xiaoqiang Zhang
Appl. Sci. 2025, 15(14), 7926; https://doi.org/10.3390/app15147926 - 16 Jul 2025
Abstract
Slope stability analysis is conventionally performed using the strength reduction method with the proportional reduction in shear strength parameters. However, during actual slope failure processes, the attenuation characteristics of rock mass cohesion (c) and internal friction angle (φ) are often inconsistent, and their reduction paths exhibit clear nonlinearity. Relying solely on proportional reduction paths to calculate safety factors may therefore lack scientific rigor and fail to reflect true slope behavior. To address this limitation, this study proposes a novel approach that considers the non-proportional reduction of c and φ, without dependence on predefined reduction paths. The method begins with an analysis of slope stability states based on energy dissipation theory. A Bayesian Gaussian Mixture Model (BGMM) is employed for intelligent interpretation of the dissipated energy data, and, combined with energy mutation theory, is used to identify instability states under various reduction parameter combinations. To compute the safety factor, the concept of a “reference slope” is introduced. This reference slope represents the state at which the slope reaches limit equilibrium under strength reduction. The safety factor is then defined as the ratio of the shear strength of the target analyzed slope to that of the reference slope, providing a physically meaningful and interpretable safety index. Compared with traditional proportional reduction methods, the proposed approach offers more accurate estimation of safety factors, demonstrates superior sensitivity in identifying critical slopes, and significantly improves the reliability and precision of slope stability assessments. These advantages contribute to enhanced safety management and risk control in slope engineering practice. Full article
(This article belongs to the Special Issue Slope Stability and Earth Retaining Structures—2nd Edition)

18 pages, 1438 KiB  
Article
Maximum Entropy Estimates of Hubble Constant from Planck Measurements
by David P. Knobles and Mark F. Westling
Entropy 2025, 27(7), 760; https://doi.org/10.3390/e27070760 - 16 Jul 2025
Abstract
A maximum entropy (ME) methodology was used to infer the Hubble constant from the temperature anisotropies in cosmic microwave background (CMB) measurements, as measured by the Planck satellite. A simple cosmological model provided physical insight and afforded robust statistical sampling of a parameter space. The parameter space included the spectral tilt and amplitude of adiabatic density fluctuations of the early universe and the present-day ratios of dark energy, matter, and baryonic matter density. A statistical temperature was estimated by applying the equipartition theorem, which uniquely specifies a posterior probability distribution. The ME analysis inferred the mean value of the Hubble constant to be about 67 km/s/Mpc with a conservative standard deviation of approximately 4.4 km/s/Mpc. Unlike standard Bayesian analyses that incorporate specific noise models, the ME approach treats the model error generically, thereby producing broader, but less assumption-dependent, uncertainty bounds. The inferred ME value lies within 1σ of both early-universe estimates (Planck, Dark Energy Spectroscopic Instrument (DESI)) and late-universe measurements (e.g., the Chicago Carnegie Hubble Program (CCHP)) using redshift data collected from the James Webb Space Telescope (JWST). Thus, the ME analysis does not appear to support the existence of the Hubble tension. Full article
(This article belongs to the Special Issue Insight into Entropy)
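The ME construction weights each parameter sample by exp(-E/T), where E is the data misfit and the statistical temperature T is fixed by equipartition. A toy one-dimensional version over a grid of Hubble-like values; the misfit function and numbers are illustrative, not the Planck analysis:

```python
import math

def me_posterior(misfits, T):
    """Normalized maximum-entropy posterior exp(-E/T) over a sample grid."""
    w = [math.exp(-E / T) for E in misfits]
    Z = sum(w)                       # partition function
    return [wi / Z for wi in w]

grid = [60.0 + 0.5 * k for k in range(41)]        # 60 .. 80 km/s/Mpc
misfit = [(h - 67.0) ** 2 / 8.0 for h in grid]    # toy quadratic misfit
post = me_posterior(misfit, T=1.0)
mean = sum(h * p for h, p in zip(grid, post))
```

Raising T broadens the posterior without changing its peak, which is how generic model error widens the uncertainty bounds relative to a specific noise model.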

15 pages, 3145 KiB  
Article
Probabilistic Prediction of Spudcan Bearing Capacity in Stiff-over-Soft Clay Based on Bayes’ Theorem
by Zhaoyu Sun, Pan Gao, Yanling Gao, Jianze Bi and Qiang Gao
J. Mar. Sci. Eng. 2025, 13(7), 1344; https://doi.org/10.3390/jmse13071344 - 14 Jul 2025
Abstract
During offshore operations of jack-up platforms, the spudcan may experience sudden punch-through failure when penetrating from an overlying stiff clay layer into the underlying soft clay, posing significant risks to platform safety. Conventional punch-through prediction methods, which rely on predetermined soil parameters, exhibit limited accuracy as they fail to account for uncertainties in seabed stratigraphy and soil properties. To address this limitation, based on a database of centrifuge model tests, a probabilistic prediction framework for the peak resistance and corresponding depth is developed by integrating empirical prediction formulas based on Bayes' theorem. The proposed Bayesian methodology effectively refines prediction accuracy by quantifying uncertainties in soil parameters, spudcan geometry, and computational models. Specifically, it establishes prior probability distributions of peak resistance and depth through Monte Carlo simulations, then updates these distributions in real time using field monitoring data during spudcan penetration. The results demonstrate that both the recommended method specified in ISO 19905-1 and an existing deterministic model tend to yield conservative estimates. This approach can significantly improve the prediction accuracy of the peak resistance compared with deterministic methods. Additionally, it shows that the most probable failure zone converges toward the actual punch-through point as more monitoring data is incorporated. The enhanced prediction capability provides critical decision support for mitigating punch-through potential during offshore jack-up operations, thereby advancing the safety and reliability of marine engineering practices. Full article
(This article belongs to the Section Ocean Engineering)
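The core Bayesian step, reweighting a Monte Carlo prior for peak resistance with monitoring data, can be sketched in one dimension. The Gaussian measurement likelihood and all numbers below are assumptions for illustration, not the paper's centrifuge-calibrated model:

```python
import math
import random

random.seed(1)
# Monte Carlo prior for peak resistance (illustrative units and moments)
prior = [random.gauss(10.0, 2.0) for _ in range(20000)]

def update(samples, obs, sigma):
    """Weight prior samples by a Gaussian likelihood N(obs | s, sigma)
    and return the posterior mean."""
    w = [math.exp(-0.5 * ((obs - s) / sigma) ** 2) for s in samples]
    return sum(s * wi for s, wi in zip(samples, w)) / sum(w)

post_mean = update(prior, obs=12.0, sigma=1.0)   # one monitoring reading
```

Each new penetration reading repeats the reweighting, which is why the predicted failure zone tightens around the true punch-through point as data accumulate.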

27 pages, 9829 KiB  
Article
An Advanced Ensemble Machine Learning Framework for Estimating Long-Term Average Discharge at Hydrological Stations Using Global Metadata
by Alexandr Neftissov, Andrii Biloshchytskyi, Ilyas Kazambayev, Serhii Dolhopolov and Tetyana Honcharenko
Water 2025, 17(14), 2097; https://doi.org/10.3390/w17142097 - 14 Jul 2025
Abstract
Accurate estimation of long-term average (LTA) discharge is fundamental for water resource assessment, infrastructure planning, and hydrological modeling, yet it remains a significant challenge, particularly in data-scarce or ungauged basins. This study introduces an advanced machine learning framework to estimate long-term average discharge using globally available hydrological station metadata from the Global Runoff Data Centre (GRDC). The methodology involved comprehensive data preprocessing, extensive feature engineering, log-transformation of the target variable, and the development of multiple predictive models, including a custom deep neural network with specialized pathways and gradient boosting machines (XGBoost, LightGBM, CatBoost). Hyperparameters were optimized using Bayesian techniques, and a weighted Meta Ensemble model, which combines predictions from the best individual models, was implemented. Performance was rigorously evaluated using R2, RMSE, and MAE on an independent test set. The Meta Ensemble model demonstrated superior performance, achieving a Coefficient of Determination (R2) of 0.954 on the test data, significantly surpassing baseline and individual advanced models. Model interpretability analysis using SHAP (Shapley Additive explanations) confirmed that catchment area and geographical attributes are the most dominant predictors. The resulting model provides a robust, accurate, and scalable data-driven solution for estimating long-term average discharge, enhancing water resource assessment capabilities and offering a powerful tool for large-scale hydrological analysis. Full article
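A weighted meta-ensemble over a log-transformed target can be sketched as blending base-model predictions in log space and mapping back to discharge units. The weights and predictions below are illustrative, not the paper's fitted values:

```python
import math

def meta_ensemble(preds, weights):
    """Weighted average of log-discharge predictions, mapped back to m^3/s."""
    log_blend = sum(w * math.log(p) for p, w in zip(preds, weights))
    return math.exp(log_blend)

# hypothetical base-model predictions of LTA discharge for one station,
# e.g. from XGBoost, LightGBM, and CatBoost respectively
base_preds = [120.0, 150.0, 135.0]
q_hat = meta_ensemble(base_preds, weights=[0.5, 0.3, 0.2])
```

Blending in log space keeps the ensemble consistent with a target whose error was minimized after log-transformation, and it guarantees a positive discharge prediction.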

30 pages, 8543 KiB  
Article
Multi-Channel Coupled Variational Bayesian Framework with Structured Sparse Priors for High-Resolution Imaging of Complex Maneuvering Targets
by Xin Wang, Jing Yang and Yong Luo
Remote Sens. 2025, 17(14), 2430; https://doi.org/10.3390/rs17142430 - 13 Jul 2025
Abstract
High-resolution ISAR (Inverse Synthetic Aperture Radar) imaging plays a crucial role in dynamic target monitoring for aerospace, maritime, and ground surveillance. Among various remote sensing techniques, ISAR is distinguished by its ability to produce high-resolution images of non-cooperative maneuvering targets. To meet the increasing demands for resolution and robustness, modern ISAR systems are evolving toward wideband and multi-channel architectures. In particular, multi-channel configurations based on large-scale receiving arrays have gained significant attention. In such systems, each receiving element functions as an independent spatial channel, acquiring observations from distinct perspectives. These multi-angle measurements enrich the available echo information and enhance the robustness of target imaging. However, this setup also brings significant challenges, including inter-channel coupling, high-dimensional joint signal modeling, and non-Gaussian, mixed-mode interference, which often degrade image quality and hinder reconstruction performance. To address these issues, this paper proposes a Hybrid Variational Bayesian Multi-Interference (HVB-MI) imaging algorithm based on a hierarchical Bayesian framework. The method jointly models temporal correlations and inter-channel structure, introducing a coupled processing strategy to reduce dimensionality and computational complexity. To handle complex noise environments, a Gaussian mixture model (GMM) is used to represent nonstationary mixed noise. A variational Bayesian inference (VBI) approach is developed for efficient parameter estimation and robust image recovery. Experimental results on both simulated and real-measured data demonstrate that the proposed method achieves significantly improved image resolution and noise robustness compared with existing approaches, particularly under conditions of sparse sampling or strong interference. 
Quantitative evaluation further shows that under the continuous sparse mode with a 75% sampling rate, the proposed method achieves a significantly higher Laplacian Variance (LV), outperforming PCSBL and CPESBL by 61.7% and 28.9%, respectively, thereby demonstrating its superior ability to preserve fine image details. Full article

22 pages, 852 KiB  
Article
Structural Equation Modeling and Genome-Wide Selection for Multiple Traits to Enhance Arabica Coffee Breeding Programs
by Matheus Massariol Suela, Camila Ferreira Azevedo, Ana Carolina Campana Nascimento, Eveline Teixeira Caixeta Moura, Antônio Carlos Baião de Oliveira, Gota Morota and Moysés Nascimento
Agronomy 2025, 15(7), 1686; https://doi.org/10.3390/agronomy15071686 - 12 Jul 2025
Abstract
Recognizing the interrelationship among variables becomes critical in genetic breeding programs, where the goal is often to optimize selection for multiple traits. Conventional multi-trait models face challenges such as convergence issues, and they fail to account for cause-and-effect relationships. To address these challenges, we conducted a comprehensive analysis involving confirmatory factor analysis (CFA), Bayesian networks (BN), structural equation modeling (SEM), and genome-wide selection (GWS) using data from 195 arabica coffee plants. These plants were genotyped with 21,211 single nucleotide polymorphism markers as part of the Coffea arabica breeding program at UFV/EPAMIG/EMBRAPA. Traits included vegetative vigor (VV), canopy diameter (CD), number of vegetative nodes (NVN), number of reproductive nodes (NRN), leaf length (LL), and yield (Y). CFA established the following latent variables: vigor latent (VL) explaining VV and CD; nodes latent (NL) explaining NVN and NRN; leaf length latent (LLL) explaining LL; and yield latent (YL) explaining Y. These were integrated into the BN model, revealing the following key interrelationships: LLL → VL, LLL → NL, LLL → YL, VL → NL, and NL → YL. SEM estimated structural coefficients, highlighting the biological importance of VL → NL and NL → YL connections. Genomic predictions based on observed and latent variables showed that using VL to predict NVN and NRN traits resulted in similar gains to using NL. Predicting gains in Y using NL increased selection gains by 66.35% compared to YL. The SEM-GWS approach provided insights into selection strategies for traits linked with vegetative vigor, nodes, leaf length, and coffee yield, offering valuable guidance for advancing Arabica coffee breeding programs. Full article
(This article belongs to the Section Crop Breeding and Genetics)

22 pages, 3791 KiB  
Article
Voxel Interpolation of Geotechnical Properties and Soil Classification Based on Empirical Bayesian Kriging and Best-Fit Convergence Function
by Yelbek Utepov, Aliya Aldungarova, Assel Mukhamejanova, Talal Awwad, Sabit Karaulov and Indira Makasheva
Buildings 2025, 15(14), 2452; https://doi.org/10.3390/buildings15142452 - 12 Jul 2025
Abstract
To support bearing capacity estimates, this study develops and tests a geoprocessing workflow for predicting soil properties using Empirical Bayesian Kriging 3D and a classification function. The model covers a 183 m × 185 m × 24 m site in Astana (Kazakhstan), based on 16 boreholes (15–24 m deep) and 77 samples. Eight geotechnical properties were mapped in 3D voxel models (812,520 voxels at 1 m × 1 m × 1 m resolution): cohesion (c), friction angle (φ), deformation modulus (E), plasticity index (PI), liquidity index (LI), porosity (e), particle size (PS), and particle size distribution (PSD). Stratification patterns were revealed with ~35% variability. Maximum φ (34.9°), E (36.6 MPa), and PS (1.29 mm) occurred at 8–16 m; c (33.1 kPa) and PSD peaked below 16 m, while PI and e were elevated in the upper and lower strata. Strong correlations emerged in pairs φ-E-PS (0.91) and PI-e (0.95). Classification identified 10 soil types, including one absent in borehole data, indicating the workflow’s capacity to detect hidden lithologies. Predicted fractions of loams (51.99%), sandy loams (22.24%), and sands (25.77%) matched borehole data (52%, 26%, 22%). Adjacency analysis of 2,394,873 voxel pairs showed homogeneous zones in gravel–sandy soils (28%) and stiff loams (21.75%). The workflow accounts for lateral and vertical heterogeneity, reduces subjectivity, and is recommended for digital subsurface 3D mapping and construction design optimization. Full article
(This article belongs to the Section Building Structures)

26 pages, 7164 KiB  
Article
Evapotranspiration Partitioning in Selected Subtropical Fruit Tree Orchards Based on Sentinel 2 Data Using a Light Gradient-Boosting Machine (LightGBM) Learning Model in Malelane, South Africa
by Prince Dangare, Zama E. Mashimbye, Paul J. R. Cronje, Joseph N. Masanganise, Shaeden Gokool, Zanele Ntshidi, Vivek Naiken, Tendai Sawunyama and Sebinasi Dzikiti
Hydrology 2025, 12(7), 189; https://doi.org/10.3390/hydrology12070189 - 11 Jul 2025
Abstract
The accurate estimation of evapotranspiration (ET) and its components is vital for water resource management and irrigation planning. This study models tree transpiration (T) and ET for grapefruit, litchi, and mango orchards using a light gradient-boosting machine (LightGBM) optimized with Bayesian hyperparameter optimization. Ground-based T and ET for these crops were measured using the heat ratio method of monitoring sap flow and the eddy covariance technique for quantifying ET. Sentinel 2 satellite data were used to compute the field leaf area index (LAI). The modelled data were used to partition the orchard ET into beneficial (T) and non-beneficial water uses (orchard floor evaporation—Es). We adopted 10-fold cross-validation to test the model robustness and an independent validation to test performance on unseen data. The 10-fold cross-validation and independent validation of the ET and T models produced high accuracy, with a coefficient of determination (R2) of 0.88, Kling–Gupta efficiency (KGE) of 0.91, root mean square error (RMSE) of 0.04 mm/h, and mean absolute error (MAE) of 0.03 mm/h for all the crops. The study demonstrates that LightGBM can accurately model transpiration and evapotranspiration for subtropical tree crops using Sentinel 2 data. The study found that Es, which combines soil evaporation and understorey vegetation transpiration, contributed 35%, 32%, and 31% to the grapefruit, litchi, and mango orchard evapotranspiration, respectively. We conclude that improved orchard floor management practices can minimize non-beneficial water losses while promoting productive water use (T). Full article
(This article belongs to the Special Issue GIS Modelling of Evapotranspiration with Remote Sensing)
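Once ET and T are modelled, the partitioning itself is arithmetic: Es = ET - T, and the non-beneficial fraction is Es/ET. The hourly values below are illustrative, chosen to reproduce a 35% Es share like that reported for grapefruit:

```python
def partition_et(et_mm, t_mm):
    """Split hourly ET (mm) into transpiration and floor-evaporation fractions."""
    es_mm = et_mm - t_mm               # non-beneficial component
    return t_mm / et_mm, es_mm / et_mm

t_frac, es_frac = partition_et(et_mm=0.40, t_mm=0.26)
```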

17 pages, 2060 KiB  
Article
Limit Reference Points and Equilibrium Stock Dynamics in the Presence of Recruitment Depensation
by Timothy J. Barrett and Quang C. Huynh
Fishes 2025, 10(7), 342; https://doi.org/10.3390/fishes10070342 - 11 Jul 2025
Abstract
Depensation (or an Allee effect) has recently been detected in stock–recruitment relationships (SRRs) in four Atlantic herring stocks and one Atlantic cod stock using a Bayesian statistical approach. In the present study, we define the Allee effect threshold (BAET) for these five stocks and propose BAET as a candidate limit reference point (LRP). We compare BAET to traditional LRPs based on proportions of equilibrium unfished biomass (B0) and biomass at maximum sustainable yield (BMSY) assuming a Beverton–Holt or Ricker SRR with and without depensation, and to the change point from a hockey stick SRR (BCP). The BAET for the case studies exceeded 0.2 B0 and 0.4 BMSY for three of the case study stocks and exceedances of 0.2 B0 were more common when the Ricker form of the SRR was assumed. The BAET estimates for all case studies were less than BCP. When there is depensation in the SRR, multiple equilibrium states can exist when fishing at a fixed fishing mortality rate (F) because the equilibrium recruits-per-spawner line at a given F can intersect the SRR more than once. The equilibrium biomass is determined by whether there is excess recruitment at the initial projected stock biomass. Estimates of equilibrium FMSY in the case studies were generally higher for SRRs that included the depensation parameter; however, the long-term F that would lead the stock to crash (Fcrash) in the depensation SRRs was often about half the Fcrash for SRRs without depensation. When warranted, this study recommends exploration of candidate LRPs from depensatory SRRs, especially if Allee effect thresholds exceed commonly used limits, and simulation testing of management strategies for robustness to depensatory effects. Full article
(This article belongs to the Special Issue Fisheries Monitoring and Management)
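The multiple-equilibria claim can be checked numerically: with a depensatory (sigmoid) stock-recruitment curve, the recruits-per-spawner replacement line at a fixed F can intersect the curve more than once. The Beverton-Holt-type form and parameter values below are illustrative assumptions, not the fitted herring or cod SRRs:

```python
def recruitment(s, alpha=1.0, beta=0.1, d=2.0):
    """Depensatory Beverton-Holt: R = alpha * S^d / (1 + beta * S^d), d > 1."""
    return alpha * s ** d / (1.0 + beta * s ** d)

def count_equilibria(phi=1.0, s_max=20.0, n=20000):
    """Count sign changes of R(S) - S/phi on a grid of S > 0, i.e. the
    number of crossings between the SRR and the replacement line."""
    crossings = 0
    prev = None
    for k in range(1, n + 1):
        s = s_max * k / n
        above = recruitment(s) - s / phi > 0
        if prev is not None and above != prev:
            crossings += 1
        prev = above
    return crossings
```

With d = 2 and these parameters the replacement line crosses the curve twice, so the realized equilibrium depends on whether the projected biomass starts above or below the lower (Allee-type) crossing.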

21 pages, 1847 KiB  
Article
Fusion of Recurrence Plots and Gramian Angular Fields with Bayesian Optimization for Enhanced Time-Series Classification
by Maria Mariani, Prince Appiah and Osei Tweneboah
Axioms 2025, 14(7), 528; https://doi.org/10.3390/axioms14070528 - 10 Jul 2025
Abstract
Time-series classification remains a critical task across various domains, demanding models that effectively capture both local recurrence structures and global temporal dependencies. We introduce a novel framework that transforms time series into image representations by fusing recurrence plots (RPs) with both Gramian Angular Summation Fields (GASFs) and Gramian Angular Difference Fields (GADFs). This fusion enriches the structural encoding of temporal dynamics. To ensure optimal performance, Bayesian Optimization is employed to automatically select the ideal image resolution, eliminating the need for manual tuning. Unlike prior methods that rely on individual transformations, our approach concatenates RP, GASF, and GADF into a unified representation and generalizes to multivariate data by stacking transformation channels across sensor dimensions. Experiments on seven univariate datasets show that our method significantly outperforms traditional classifiers such as one-nearest neighbor with Dynamic Time Warping, Shapelet Transform, and RP-based convolutional neural networks. For multivariate tasks, the proposed fusion model achieves macro F1 scores of 91.55% on the UCI Human Activity Recognition dataset and 98.95% on the UCI Room Occupancy Estimation dataset, outperforming standard deep learning baselines. These results demonstrate the robustness and generalizability of our framework, establishing a new benchmark for image-based time-series classification through principled fusion and adaptive optimization. Full article
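The fusion step can be sketched in a few lines of numpy: each channel is an n × n image derived from the same rescaled series, and the three channels are stacked. This is a minimal sketch of the standard RP, GASF, and GADF transformations named above; `eps` is an illustrative recurrence threshold, and the paper's normalization details and Bayesian-optimized resolution selection are not reproduced here.

```python
import numpy as np

def to_image_channels(x, eps=0.1):
    """Fuse a recurrence plot with GASF and GADF into one 3-channel image."""
    # Rescale to [-1, 1] so the polar-angle encoding phi = arccos(x) is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)
    gasf = np.cos(phi[:, None] + phi[None, :])  # Gramian Angular Summation Field
    gadf = np.sin(phi[:, None] - phi[None, :])  # Gramian Angular Difference Field
    rp = (np.abs(x[:, None] - x[None, :]) <= eps).astype(float)  # recurrence plot
    return np.stack([rp, gasf, gadf], axis=0)   # shape (3, n, n)

img = to_image_channels(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

For multivariate input, the same per-series channels would be stacked across sensor dimensions, as the abstract describes.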
26 pages, 2582 KiB  
Article
An Off-Grid DOA Estimation Method via Fast Variational Sparse Bayesian Learning
by Xin Tong, Yuzhuo Chen, Zhongliang Deng and Enwen Hu
Electronics 2025, 14(14), 2781; https://doi.org/10.3390/electronics14142781 - 10 Jul 2025
Abstract
In practical array signal processing applications, direction-of-arrival (DOA) estimation often suffers from degraded accuracy under low signal-to-noise ratio (SNR) and limited snapshot conditions. To address these challenges, we propose an off-grid DOA estimation method based on Fast Variational Bayesian Inference (OGFVBI). Within the variational Bayesian framework, we design a fixed-point criterion rooted in root-finding theory to accelerate the convergence of hyperparameter learning. We further introduce a grid fission and adaptive refinement strategy to dynamically adjust the sparse representation, effectively alleviating grid mismatch issues in traditional off-grid approaches. To address frequency dispersion in wideband signals, we develop an improved subspace focusing technique that transforms multi-frequency data into an equivalent narrowband model, enhancing compatibility with subspace DOA estimators. We demonstrate through simulations that OGFVBI achieves high estimation accuracy and resolution while significantly reducing computational time. Specifically, our method achieves more than 37.6% reduction in RMSE and at least 28.5% runtime improvement compared to other methods under low SNR and limited snapshot scenarios, indicating strong potential for real-time and resource-constrained applications. Full article
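The grid-mismatch problem that the fission-and-refinement strategy targets can be illustrated with a generic coarse-to-fine search on a uniform linear array (half-wavelength spacing). This sketch is not the OGFVBI algorithm (no variational inference, a single noise-free source, and a simple matched-filter spectrum); it only shows how re-gridding around a peak shrinks the gap between the true direction and the nearest grid point.

```python
import numpy as np

def steering(theta_deg, m=8):
    """ULA steering vectors (half-wavelength spacing) for angles in degrees;
    returns an array of shape (m, len(theta))."""
    theta = np.deg2rad(np.atleast_1d(theta_deg))
    return np.exp(1j * np.pi * np.arange(m)[:, None] * np.sin(theta)[None, :])

def refine_peak(y, grid, levels=3, m=8):
    """Coarse-to-fine search: evaluate |a(theta)^H y| on the grid, then
    re-grid around the best angle to reduce grid mismatch."""
    best = grid[0]
    for _ in range(levels):
        spec = np.abs(steering(grid, m).conj().T @ y)
        best = grid[int(np.argmax(spec))]
        step = grid[1] - grid[0]
        grid = np.linspace(best - step, best + step, 21)
    return best

true_doa = 17.3
y = steering(true_doa).ravel()                   # noise-free single-source snapshot
est = refine_peak(y, np.linspace(-90, 90, 37))   # start from a 5-degree coarse grid
```

A fixed 5-degree grid alone cannot do better than the nearest grid point, while two refinement passes recover the direction to well under a tenth of a degree, which is the motivation for adaptive refinement in off-grid methods.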
(This article belongs to the Special Issue Integrated Sensing and Communications for 6G)