Search Results (348)

Search Parameters:
Keywords = Gaussian likelihoods

32 pages, 3764 KB  
Article
Assessment of Compound Hydrological–Thermal Extremes over Indian River Systems
by Jaya Bharat Reddy Buchupalle, Satish Kumar Mummidivarapu, Shaik Rehana, Shahid Latif and Taha B. M. J. Ouarda
Water 2026, 18(8), 896; https://doi.org/10.3390/w18080896 - 9 Apr 2026
Abstract
River water quality assessment has traditionally been conducted using univariate or threshold-based approaches; however, few studies have explored extremes assessment under bivariate water quality variables. Understanding the compound extremes of low river discharge (Q) and elevated river water temperatures (RWTs) resulting from climatic variability is essential for effective water quality management and river protection. This study investigates the joint behaviour of RWTs and Q in six Indian rivers: Kaveri, Mahi, Sabarmati, Vardha, Bhadra, and Yamuna. The Weibull-3P and Generalised Extreme Value (GEV-3P) distributions provided the best fits for Q and RWTs, respectively. The adequacy of eighteen different parametric copula classes was evaluated. The Gaussian copula provided the best fit for the Vardha River, the Frank copula for Bhadra, and the BB8 copula for the Yamuna River. The evaluation of joint return periods (RPs) and conditional distributions identified notable spatial variability in compound hydrological and thermal extreme hazards. The semi-arid Vardha River showed the shortest RPs for simultaneous low Q and high RWTs, indicating a greater likelihood of combined extremes. Conversely, the monsoon-fed Bhadra River displayed moderate hazard levels, while the Himalayan-fed Yamuna River had the longest joint RPs and the lowest conditional probabilities. This suggests that simultaneous extreme drought and heat events are less likely in the Yamuna basin, although significant risks remain for less severe thresholds. Full article
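The joint return periods evaluated here can be sketched with a Gaussian copula. The marginal probabilities and correlation below are illustrative assumptions, not the study's fitted values, and the study fits eighteen copula families per river:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u, v, rho):
    # C(u, v) = Phi_2(Phi^{-1}(u), Phi^{-1}(v); rho)
    cov = [[1.0, rho], [rho, 1.0]]
    point = [norm.ppf(u), norm.ppf(v)]
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(point))

# Hazard of interest: low discharge (U <= u) together with high temperature (V >= v).
# P(U <= u, V >= v) = u - C(u, v); for annual data the joint return period is 1/p.
u, v, rho = 0.1, 0.9, -0.4      # illustrative marginal probabilities and dependence
p_joint = u - gaussian_copula_cdf(u, v, rho)
joint_rp = 1.0 / p_joint        # years
```

Negative dependence (rho < 0) pushes C(u, v) below the independence value uv, which lengthens the joint return period; the study's river-specific copulas encode exactly this kind of tail behaviour.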

23 pages, 2866 KB  
Article
A Cloud–Robot–Wearable System for Bilateral Reaching Rehabilitation: Affected-Side Identification and Quality Quantification
by Chia-Hau Chen, Li-Hsien Tang, Chang-Hsin Yeh, Eric Hsiao-Kuang Wu and Shih-Ching Yeh
Electronics 2026, 15(7), 1459; https://doi.org/10.3390/electronics15071459 - 1 Apr 2026
Abstract
Therapist shortages make home-based rehabilitation an essential component of post-stroke care, yet patients often exhibit reduced adherence when functional gains are difficult to quantify and interpret. This study presents a cloud-enabled assessment framework centered on a dynamic reaching task for upper-limb rehabilitation in individuals with mild stroke. The proposed system combines wearable sensing and Internet of Things (IoT) connectivity to stream kinematic data to the cloud for near real-time analysis, and integrates a force-feedback rehabilitation robot to deliver motion guidance during training. The pipeline proceeds in three stages. First, smoothness-related kinematic descriptors are extracted and fed into a deep multi-class classifier to discriminate the affected side (left, right, or healthy). Second, movement quality is modeled using a Gaussian Mixture Model (GMM) trained on IoT-acquired trajectories to quantify performance via probabilistic similarity. Third, a calibrated scoring function transforms GMM log-likelihood into a normalized 0–1 quality index, producing visual reports that support interpretable feedback for patients and therapists. The framework is validated using motion data collected from stroke patients at Taipei Veterans General Hospital. Experimental results demonstrate that the neural network multi-classifier achieved an F1-score of 0.95. Incorporating robot-derived interaction signals further improved classification performance by approximately 5%. For movement quality assessment, the derived scores showed a significant positive correlation (Pearson correlation = 0.632, p = 0.02) with therapist-defined gold reference standards for right-affected patients. Additionally, integrating robot force-feedback signals and AIoT-enabled dynamic streams improved score accuracy by 8% and score responsiveness by 10%. 
These quantitative outcomes substantiate the efficacy of combining IoT-driven sensing and robot-assisted training for objective, interpretable, and remotely deployable motor assessment. Full article
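The quality-index step (GMM log-likelihood mapped to a normalized 0-1 score) can be sketched with scikit-learn. The features, training data, and calibration bounds below are invented for illustration and are not the paper's pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical "healthy" reaching trajectories: (speed, smoothness) features
healthy = rng.normal(loc=[1.0, 0.8], scale=[0.1, 0.05], size=(200, 2))

gmm = GaussianMixture(n_components=2, random_state=0).fit(healthy)

def quality_score(x, lo=-10.0, hi=5.0):
    """Map GMM log-likelihood to a 0-1 quality index (illustrative bounds lo/hi)."""
    ll = gmm.score_samples(np.atleast_2d(x))[0]
    return float(np.clip((ll - lo) / (hi - lo), 0.0, 1.0))

good = quality_score([1.0, 0.8])   # movement resembling the reference population
poor = quality_score([2.5, 0.2])   # movement far from the learned distribution
```

The calibration from log-likelihood to score is the part the paper tunes against therapist ratings; the linear clip above is only a placeholder for that step.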
(This article belongs to the Section Computer Science & Engineering)

20 pages, 1060 KB  
Article
Closed-Form Approximations of Range Mutual Information for Integrated Sensing and Communication Systems
by Zhuoyun Lai, Hao Luo, Yinlu Wang, Yue Zhang and Biao Jin
Sensors 2026, 26(7), 2113; https://doi.org/10.3390/s26072113 - 28 Mar 2026
Abstract
Sensing mutual information (SMI) is widely adopted as a performance metric for integrated sensing and communication (ISAC) to enhance both sensing and communication capabilities. However, conventional approaches derive SMI from amplitude and phase, whereas an explicit evaluation of range mutual information (RMI) remains absent. In this paper, we investigate a novel closed-form approximation of RMI for ISAC. We first derive an explicit expression for the posterior probability density function (PDF) of the target range, which is formulated as a function of the signal’s autocorrelation and cross-correlation. Furthermore, we show that under high signal-to-noise ratio (SNR), the estimated range PDF approximates a Gaussian distribution in the sensing-unconstrained scenario and a truncated Gaussian distribution in the sensing-constrained scenario. Finally, we derive closed-form approximations of the RMI in both scenarios under high SNR. In the sensing-unconstrained scenario, the RMI is proportional to the delay interval, root-mean-square bandwidth, and SNR. In the constrained scenario, we obtain a closed-form RMI approximation by introducing an entropy correction term that quantifies the impact of boundary constraints. Additionally, we employ a maximum likelihood estimation (MLE) method to assess range estimation performance. Simulation results validate the accuracy of the theoretical results and the effectiveness of the proposed approximations. Full article
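The high-SNR logic (uniform prior over the delay interval, approximately Gaussian posterior) can be sketched numerically. The posterior variance below uses an assumed Cramer-Rao-style form in the rms bandwidth and SNR, not the paper's exact expression:

```python
import numpy as np

def rmi_unconstrained(delta, beta_rms, snr):
    """Illustrative high-SNR range mutual information: entropy of a uniform prior
    over the delay interval minus the differential entropy of an approximately
    Gaussian posterior whose variance follows an assumed CRB-style form."""
    var_tau = 1.0 / (8.0 * np.pi**2 * beta_rms**2 * snr)   # assumed posterior variance
    h_prior = np.log(delta)                                 # uniform prior entropy
    h_post = 0.5 * np.log(2.0 * np.pi * np.e * var_tau)     # Gaussian posterior entropy
    return max(h_prior - h_post, 0.0)

rmi_high = rmi_unconstrained(delta=1e-6, beta_rms=1e6, snr=100.0)
rmi_low = rmi_unconstrained(delta=1e-6, beta_rms=1e6, snr=10.0)
```

This reproduces the qualitative claim that RMI grows with the delay interval, rms bandwidth, and SNR; the constrained scenario would subtract an entropy correction for the truncated-Gaussian boundaries.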
(This article belongs to the Section Communications)

25 pages, 3347 KB  
Article
Variational Bayesian-Based Reliability Evaluation of Nonlinear Structures by Active Learning Gaussian Process Modeling
by Wei-Chao Hou, Yu Xin, Ding-Tang Wang, Zuo-Cai Wang and Zong-Zu Liu
Infrastructures 2026, 11(4), 118; https://doi.org/10.3390/infrastructures11040118 - 27 Mar 2026
Abstract
In this study, variational Bayesian inference (VBI) with Gaussian mixture models is applied to update models of nonlinear structures, and then, the calibrated model is employed to estimate the failure probability of structures using a subset simulation (SS) algorithm. To improve the computational efficiency of probabilistic nonlinear model updating, a Gaussian Process (GP) model is used to construct a surrogate likelihood function in Bayesian inference using an active learning algorithm, and then, Gaussian mixture models (GMMs) are employed to approximate the unknown posterior probability density functions (PDFs) of model parameters. The optimized hyperparameters of GMMs can be obtained by maximizing the evidence lower bound (ELBO), and the stochastic gradient search method is used to solve this optimization problem. Based on the optimized hyperparameters, the posterior distributions of model parameters can be approximated using a combination of multiple Gaussian components. Subsequently, the SS algorithm is used to calculate the earthquake-induced failure probability of structures based on the calibrated nonlinear model. To verify the feasibility and effectiveness of the proposed method, a numerical simulation of a two-span bridge structure subjected to seismic excitations was developed. Moreover, the proposed strategy is further applied to estimate the failure probability of a scaled monolithic column structure subjected to bi-directional earthquake excitations. Both numerical and experimental results indicate that the proposed method is feasible and effective for probabilistic nonlinear model updating, and the updated model can significantly enhance the accuracy of structural failure probability predictions. Full article
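The surrogate-likelihood idea can be sketched with a GP and uncertainty-driven active learning. The toy log-likelihood, kernel, and query rule below are illustrative stand-ins for the paper's setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def log_like(theta):
    # stand-in for an expensive nonlinear-model likelihood evaluation
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 4.0, size=(5, 1))            # initial design points
y = np.array([log_like(t) for t in X.ravel()])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              alpha=1e-6, normalize_y=True)
grid = np.linspace(0.0, 4.0, 200).reshape(-1, 1)

for _ in range(10):                                # simple active-learning loop
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    x_next = grid[np.argmax(sd)]                   # query where the surrogate is least certain
    X = np.vstack([X, x_next])
    y = np.append(y, log_like(x_next[0]))

gp.fit(X, y)
mu, _ = gp.predict(grid, return_std=True)
theta_best = float(grid[np.argmax(mu), 0])         # surrogate's likelihood peak
```

In the paper the surrogate feeds a VBI/GMM posterior approximation and then subset simulation; the sketch stops at the surrogate, which is the step that saves expensive nonlinear-model evaluations.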
(This article belongs to the Section Infrastructures and Structural Engineering)

20 pages, 2863 KB  
Article
Particle Filtering-Based In-Flight Icing Detection for Unmanned Aerial Vehicles
by Toufik Souanef, Mohamed Tadjine, Nadjim Horri, Ilyes Chaabeni and Bilel Boulassel
Sensors 2026, 26(6), 1993; https://doi.org/10.3390/s26061993 - 23 Mar 2026
Abstract
Ice accretion poses a threat to fixed-wing aerial vehicles as it alters the wings’ shape and thus degrades the aerodynamic performance. In manned aircraft, the icing detection system assists the pilot and utilises dedicated sensors. However, in unmanned aerial vehicles (UAVs), onboard icing detection can generally only be achieved using standard sensors in conjunction with dynamical models, because dedicated sensors are rarely available. In this paper, we propose two approaches based on the particle filter for both icing detection and accurate state and aerodynamic parameter estimation in the presence of icing, with different levels of severity. The first approach uses the observation likelihood for icing hypothesis testing with a complement of the Gaussian kernel to compute icing probability. The second approach uses a discrete jump approach based on a Bernoulli process and a subset of particles to test the icing hypothesis for faster icing detection by estimating changes in icing-related aerodynamic parameters. Using both approaches, the simulation results demonstrate improved estimation accuracy compared to an extended Kalman filter (EKF), under both moderate and severe icing conditions. With adequate tuning, the proposed approaches show potential for indirect icing detection in UAVs. They also enable the computation of icing severity and provide a more accurate and reliable estimate of the icing probability compared to the EKF. Full article
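The first approach's likelihood-and-kernel construction can be sketched minimally. The random-walk dynamics, noise levels, and the exact Gaussian-kernel form of the icing probability are all illustrative assumptions rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal bootstrap particle filter over a scalar aerodynamic coefficient.
# Icing shifts the observed coefficient; the icing probability is taken as the
# complement of a Gaussian kernel of the filter innovation (assumed form).
N, SIGMA_Y = 500, 0.1

def pf_step(particles, weights, y_obs):
    particles = particles + rng.normal(0.0, 0.01, particles.size)   # random-walk dynamics
    weights = weights * np.exp(-0.5 * ((y_obs - particles) / SIGMA_Y) ** 2)
    weights = weights / weights.sum()
    innovation = y_obs - np.sum(weights * particles)                # observation residual
    p_icing = 1.0 - np.exp(-0.5 * (innovation / SIGMA_Y) ** 2)
    return particles, weights, p_icing

particles = rng.normal(1.0, 0.05, N)          # nominal (ice-free) coefficient
weights = np.full(N, 1.0 / N)
particles, weights, p_clean = pf_step(particles, weights, 1.02)  # consistent observation
particles, weights, p_iced = pf_step(particles, weights, 1.60)   # abrupt icing-like shift
```

A consistent observation keeps the innovation small and the icing probability near zero, while an observation the particle cloud cannot explain drives it toward one; the paper's second approach instead jumps a particle subset via a Bernoulli process for faster detection.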

16 pages, 1565 KB  
Article
Shrimp Market Under Innovation Schemes: Hidden Markov Modeling
by Johnny Javier Triviño-Sanchez, Alexander Fernando Haro-Sarango, Julián Coronel-Reyes, Carlos Alfredo De Loor-Platón and Dayanna Soria-Encalada
J. Risk Financial Manag. 2026, 19(3), 214; https://doi.org/10.3390/jrfm19030214 - 12 Mar 2026
Abstract
This article models the Ecuadorian shrimp market as a nonlinear system with recurring latent regimes that affect margins and planning decisions. A multivariate Hidden Markov Model (HMM) with Gaussian emissions in log space is estimated via the Baum–Welch algorithm to segment the joint dynamics of pounds produced, dollars invoiced, and average price. The analysis uses monthly data from January 2017 to May 2025 (T = 101). The selected four-state specification shows strong fit and outperforms linear alternatives (log likelihood = 480.9; AIC = 859.8; BIC = 729.5). The dominant regime (State 2) concentrates high prices (~USD 2.97/lb) with intermediate production and acts as an attractor (stationary probability ≈ 1), while States 0 and 1 capture orderly expansion and oversupply conditions, and State 3 reflects episodic demand rallies. Adverse regimes (States 0–1) exhibit expected durations of 6–8 months, suggesting natural reversion toward the profitable regime. These estimates enable probabilistic regime forecasting and Monte Carlo scenario simulation to support hedging, inventory management, and financial stress testing. Overall, the proposed HMM framework provides an operational decision tool for producers, traders, and policymakers seeking to anticipate regime shifts, mitigate oversupply cycles, and stabilize margins. Full article
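The regime-segmentation machinery rests on the forward pass of a Gaussian-emission HMM, the quantity Baum-Welch maximizes. A minimal numpy sketch with two illustrative regimes (parameters loosely echoing the abstract's price levels, not the fitted four-state model):

```python
import numpy as np
from scipy.stats import norm

def hmm_loglik(x, pi, A, mu, sd):
    """Log-likelihood of a scalar Gaussian-emission HMM via the scaled forward pass."""
    B = norm.pdf(x[:, None], loc=mu, scale=sd)     # (T, K) emission densities
    alpha = pi * B[0]
    ll = 0.0
    for t in range(len(x)):
        if t > 0:
            alpha = (alpha @ A) * B[t]             # propagate through transitions
        c = alpha.sum()                            # scaling constant
        ll += np.log(c)
        alpha = alpha / c
    return ll

# two illustrative regimes: "oversupply" (low price) and "high price"
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.05, 0.95]])                       # the high-price state is sticky
mu = np.array([2.20, 2.97])                        # USD/lb, loosely echoing the abstract
sd = np.array([0.15, 0.15])

x = np.array([2.2, 2.3, 2.9, 3.0, 2.95, 3.05])
ll = hmm_loglik(x, pi, A, mu, sd)
```

A sticky transition row like [0.05, 0.95] is what makes a regime act as an attractor with a dominant stationary probability, as the abstract reports for State 2.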
(This article belongs to the Section Mathematics and Finance)

33 pages, 662 KB  
Article
The Asymmetric Bimodal Normal Distribution: A Tractable Mixture Model for Skewed and Bimodal Data
by Hassan S. Bakouch, Hugo S. Salinas, Çağatay Çetinkaya, Shaykhah Aldossari, Amira F. Daghestani and John L. Santibáñez
Mathematics 2026, 14(5), 901; https://doi.org/10.3390/math14050901 - 6 Mar 2026
Abstract
We study a parsimonious constrained two-component Gaussian mixture with symmetric locations ±λ and unequal weights controlled by α ∈ [-1, 1]; we refer to this family as the asymmetric bimodal normal. The constraint eliminates label switching and yields an identifiable parametrization for λ > 0, while noting the boundary degeneracy at λ = 0 where α is not identifiable. We derive closed-form analytical expressions for the density and distribution functions, an equivalent constructive representation (useful for simulation and interpretation), explicit moment formulas, and conditions distinguishing unimodality from bimodality. For inference, we develop maximum likelihood estimation with observed information standard errors and provide numerically stable fits via a block-coordinate quasi-Newton routine using method of moments initial values. A Monte Carlo simulation study across representative parameter settings evaluates bias and root mean squared error, and examines the behavior of Hessian-based standard error estimates, highlighting regimes where the observed information becomes ill-conditioned under weak separation. Empirical analyses (chemical calibration deviations from the National Institute of Standards and Technology and a regression example with asymmetric errors) show competitive or superior fit and interpretability relative to skewed normal alternatives, asymmetric Laplace models, and unconstrained Gaussian mixtures, with consistent advantages under model comparison using the Akaike information criterion and the Bayesian information criterion. Full article
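One plausible parametrization of such a constrained mixture (weights (1 - α)/2 and (1 + α)/2 at -λ and +λ; the paper's exact weight form may differ) can be checked numerically:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def abn_pdf(x, lam, alpha, sigma=1.0):
    """Asymmetric bimodal normal sketch: Gaussian components at -lam and +lam with
    weights (1 - alpha)/2 and (1 + alpha)/2 (an assumed weight parametrization)."""
    w_plus = (1.0 + alpha) / 2.0
    return (1.0 - w_plus) * norm.pdf(x, -lam, sigma) + w_plus * norm.pdf(x, lam, sigma)

# sanity checks: the density integrates to one, and alpha > 0 skews mass
# toward the +lam mode
total, _ = quad(lambda x: abn_pdf(x, lam=2.0, alpha=0.4), -np.inf, np.inf)
right_mass, _ = quad(lambda x: abn_pdf(x, lam=2.0, alpha=0.4), 0.0, np.inf)
```

Fixing the locations at ±λ and letting a single α steer the weights is what removes label switching: the two components can no longer swap roles under relabeling.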
(This article belongs to the Special Issue Computational Statistics and Data Analysis, 3rd Edition)

15 pages, 593 KB  
Article
Using Subspace Algorithms for the Estimation of Linear State Space Models for Over-Differenced Processes
by Dietmar Bauer
Econometrics 2026, 14(1), 12; https://doi.org/10.3390/econometrics14010012 - 28 Feb 2026
Abstract
Subspace algorithms like canonical variate analysis (CVA) are regression-based methods for the estimation of linear dynamic state space models. They have been shown to deliver accurate (consistent and asymptotically equivalent to quasi-maximum likelihood estimation using the Gaussian likelihood) estimators for stably invertible stationary autoregressive moving average (ARMA) processes. These results use the assumption that there are no zeros of the spectral density on the unit circle corresponding to the state space system. In this technical study, we consider vector processes made stationary by applying differencing to all variables, ignoring potential co-integrating relations. This leads to spectral zeros violating the above mentioned assumptions. We show consistency for the CVA estimators, closing a gap in the literature. However, a simulation exercise shows that over-differencing (while leading to consistent estimation of the transfer function) also complicates inference for CVA estimators, not just maximum likelihood-based estimators. This is also demonstrated in a real-world data example. The result also applies to seasonal differencing. The present paper hence suggests working with original data, not working in differences. Full article
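The spectral zero introduced by over-differencing is easy to exhibit: differencing an already-stationary series creates an MA unit root, whose lag-one autocorrelation is -0.5 (a zero of the spectral density at frequency zero):

```python
import numpy as np

rng = np.random.default_rng(6)
e = rng.normal(size=20000)          # an already-stationary series (white noise)
d = np.diff(e)                      # over-differencing: d_t = e_t - e_{t-1}

def acf1(x):
    """Sample lag-one autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rho1 = acf1(d)                      # theory for the MA unit root: -0.5
```

This non-invertible MA(1) structure is exactly the violation of the no-spectral-zeros assumption that the paper shows still permits consistent CVA estimation while complicating inference.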

29 pages, 1017 KB  
Article
Bayesian Elastic Net Cox Models for Time-to-Event Prediction: Application to a Breast Cancer Cohort
by Ersin Yılmaz, Syed Ejaz Ahmed and Dursun Aydın
Entropy 2026, 28(3), 264; https://doi.org/10.3390/e28030264 - 27 Feb 2026
Abstract
High-dimensional survival analyses require calibrated risk and measurable uncertainty, but standard elastic net Cox models provide only point estimates. We develop a Bayesian elastic net Cox (BEN–Cox) model for high-dimensional proportional hazards regression that places a hierarchical global–local shrinkage prior on coefficients and performs full Bayesian inference via Hamiltonian Monte Carlo. We represent the elastic net penalty as a global–local Gaussian scale mixture with hyperpriors that learn the ℓ1/ℓ2 trade-off, enabling adaptive sparsity that preserves correlated gene groups; using HMC with the Cox partial likelihood, we obtain full posterior distributions for hazard ratios and patient-level survival curves. Methodologically, we formalize a Bayesian analogue of the elastic net grouping effect at the posterior mode and establish posterior contraction under sparsity for the Cox partial likelihood, supporting the stability of the resulting risk scores. On the METABRIC breast cancer cohort (n=1903; p=440 gene-level features after preprocessing, derived from an Illumina HT-12 array with ≈24,000 probes at the raw feature level), BEN–Cox achieves slightly lower prediction error, higher discrimination, and better global calibration than tuned ridge, lasso, and elastic net Cox baselines on a held-out test set. Posterior summaries provide credible intervals for hazard ratios and identify a compact gene panel that remains biologically plausible. BEN–Cox provides an uncertainty-aware alternative to tuned penalized Cox models with theoretical support, offering modest improvements in calibration and providing an interpretable sparse signature in highly correlated survival data. Full article

14 pages, 413 KB  
Article
Likelihood-Based CFAR Detectors for FDA-MIMO Radar Under Signal Mismatch
by Yi Cheng and Yiyang Li
Appl. Sci. 2026, 16(5), 2217; https://doi.org/10.3390/app16052217 - 25 Feb 2026
Abstract
This paper investigates the degradation of detection performance in FDA–MIMO radar systems caused by signal mismatch under constant-velocity target motion and develops a robust detection strategy to mitigate this effect. Under the effective hypothesis, a stochastic term is introduced into the received radar signal to account for mismatch uncertainty. This term is modeled as a Gaussian random variable whose covariance structure is identical to that of the noise while being scaled by an unknown robustness parameter. Based on the resulting statistical model, three robust detectors are derived using the One-Step Generalized Likelihood Ratio Test (OGLRT), the Two-Step GLRT (TGLRT), and the Gradient test. Simulation results demonstrate that all proposed detectors preserve the Constant False Alarm Rate (CFAR) property under the null hypothesis. Further performance evaluations reveal that, in the absence of signal mismatch, the OGLRT and Gradient detectors provide superior detection performance, whereas under mismatched conditions, all three detectors exhibit improved robustness. These findings provide both theoretical insight and practical guidance for the design and implementation of FDA–MIMO radar systems, contributing to the enhancement and optimization of detection performance in realistic operating environments. Full article

36 pages, 3000 KB  
Article
Bivariate Generalized Split-BREAK Process with Application in Modeling Crime Dynamics
by Snežana Stojičić, Vladica S. Stojanović, Mihailo Jovanović, Dušan Joksimović and Radovan Radovanović
Mathematics 2026, 14(5), 754; https://doi.org/10.3390/math14050754 - 24 Feb 2026
Abstract
The manuscript proposes a new non-linear and non-stationary bivariate stochastic model, termed the two-dimensional Gaussian (generalized) Split-BREAK (2D-GSB) process, as a multivariate extension of the univariate GSB framework. The generalization consists of introducing a common threshold mechanism based on the norm of a bivariate innovation vector and a single synchronized Bernoulli indicator which jointly governs regime activation in both components. This structure induces cross-dependent regime shifts and yields a binomial–Gaussian mixture representation of the joint distribution, explicitly linking contemporaneous dependence with a common latent regime mechanism. The fundamental properties of the proposed model are established, with particular emphasis on its asymptotic behavior. A parameter estimation procedure is developed using both the method of moments (MoM) and the empirical characteristic function (ECF) approach, and their performance is evaluated through Monte Carlo simulations. An empirical application to daily crime data illustrates how the proposed framework captures synchronized structural shocks and heavy-tailed features in related crime categories. In comparison with a standard VAR(1) benchmark, the 2D-GSB specification provides a parsimonious yet substantially improved likelihood-based fit, thus offering a theoretically sound framework for analyzing multivariate time series characterized by synchronized regime shifts and heavy-tailed behavior. Full article

17 pages, 2000 KB  
Article
Probabilistic Bird Trajectory Forecasting with Heavy-Tailed Uncertainty Modeling for Low-Altitude Airspace Monitoring
by Feiyang Song, Zhonghe Liu, Yuyang Zhao and Jingguo Zhu
Sensors 2026, 26(4), 1270; https://doi.org/10.3390/s26041270 - 15 Feb 2026
Abstract
Low-altitude airspace is increasingly shared by bird flocks and unmanned aerial vehicles (UAVs), posing safety risks that necessitate accurate trajectory forecasting. However, existing vision-based methods often treat trajectory prediction and UAV detection as separate tasks, assume light-tailed Gaussian noise, and rely on heavy backbones. These shortcomings limit uncertainty calibration and embedded deployment for bird trajectory forecasting in ground-based monocular surveillance. In this work, we propose a unified framework for low-altitude monitoring. Its core, Mini-BirdFormer, combines a lightweight Transformer encoder with a Student-t mixture density head to model heavy-tailed flight dynamics and produce calibrated uncertainty. Experiments on a real-world dataset show the model achieves strong long-horizon performance with only 1.05 million parameters, attaining a minADE of 0.785 m and reducing negative log-likelihood from 1.25 to −2.01 (lower is better) compared with a Gaussian Long Short-Term Memory (LSTM) baseline. Crucially, it enables low-latency inference on resource-constrained platforms at 616 FPS. Additionally, a system-level extension supports zero-shot UAV detection via open-vocabulary learning, attaining 92% recall without false alarms. Results demonstrate that combining heavy-tailed probabilistic modeling with a compact backbone provides a practical, deployable approach for monitoring shared airspace. Full article
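The Gaussian-versus-Student-t comparison driving the NLL numbers can be reproduced in miniature. The error distribution and its parameters below are synthetic assumptions, not the paper's data:

```python
import numpy as np
from scipy.stats import norm, t as student_t

# heavy-tailed prediction errors, standing in for erratic flight dynamics
errors = student_t.rvs(df=3, scale=0.5, size=2000, random_state=3)

def mean_nll(dist, data, **params):
    """Average negative log-likelihood of the data under a fitted distribution."""
    return float(-np.mean(dist.logpdf(data, **params)))

# Gaussian with a matched standard deviation vs. the heavy-tailed model
nll_gauss = mean_nll(norm, errors, loc=0.0, scale=errors.std())
nll_t = mean_nll(student_t, errors, df=3, loc=0.0, scale=0.5)
```

On heavy-tailed data the Student-t likelihood pays far less for outliers than a variance-matched Gaussian, which is the mechanism behind the abstract's NLL improvement from 1.25 to -2.01.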
(This article belongs to the Section Intelligent Sensors)

42 pages, 10041 KB  
Article
Probabilistic Prediction of Concrete Compressive Strength Using Copula Functions: A Novel Framework for Uncertainty Quantification
by Cheng Zhang, Senhao Cheng, Shanshan Tao, Shuai Du and Zhengjun Wang
Buildings 2026, 16(4), 754; https://doi.org/10.3390/buildings16040754 - 12 Feb 2026
Abstract
Traditional machine learning models for concrete compressive strength prediction provide only single-value estimates without quantifying the probability of meeting design requirements, leaving engineers unable to make risk-informed decisions. This study addresses this critical limitation by developing a novel probabilistic prediction framework that integrates explainable machine learning with Copula-based joint distribution modeling. Using a dataset of 1030 concrete samples with curing ages ranging from 1 to 365 days, we first established an XGBoost 2.1.4 prediction model achieving R2 = 0.9211 (RMSE = 4.51 MPa) on the test set. SHAP 0.49.1 (SHapley Additive exPlanations) analysis identified curing age (33.3%) and water–cement ratio (28.8%) as the dominant features, together accounting for 62.1% of predictive importance. These two controllable engineering parameters were then selected as core variables for probabilistic modeling. The key innovation lies in integrating Copula-based dependence modeling with explainable machine learning (XGBoost–SHAP) to quantify the compliance probability of concrete strength under specific mix designs and curing conditions, thereby supporting risk-informed quality control decisions. Through systematic comparison of five Copula families (Gaussian, Student t, Clayton, Gumbel, and Frank), we identified optimal dependence structures: Gaussian Copula (ρ = −0.54) for the water–cement ratio–strength relationship and Clayton Copula for the age–strength relationship, revealing asymmetric tail dependence patterns invisible to conventional correlation analysis. The three-dimensional Copula model enables engineers to estimate compliance probability—the likelihood of concrete achieving target strength under specific mix designs and curing conditions. 
We propose an illustrative three-tier decision rule for construction quality management based on the compliance probability P: P ≥ 0.95 (high-confidence approval), 0.80 ≤ P < 0.95 (warning zone requiring enhanced monitoring), and P < 0.80 (high risk suggesting corrective actions such as mix adjustment or extended curing), noting that these thresholds can be recalibrated to project-specific risk tolerance and local specifications. This framework supports a paradigm shift from reactive “mix-then-test” quality control to proactive “predict-then-decide” construction management, providing quantitative risk assessment tools previously unavailable in deterministic prediction approaches. Full article
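Under a Gaussian copula the conditional compliance probability has a closed form. The sketch uses the abstract's ρ = -0.54 for the water-cement ratio and strength; the percentile inputs are illustrative, and the paper's full model is three-dimensional with a Clayton component for age:

```python
import numpy as np
from scipy.stats import norm

def compliance_prob(u_wc, v_target, rho=-0.54):
    """P(strength percentile >= v_target | water-cement percentile = u_wc)
    under a bivariate Gaussian copula with correlation rho."""
    z_u, z_v = norm.ppf(u_wc), norm.ppf(v_target)
    # conditional copula: V | U = u is Gaussian in the latent scale
    cond_cdf = norm.cdf((z_v - rho * z_u) / np.sqrt(1.0 - rho**2))
    return float(1.0 - cond_cdf)

p_low_wc = compliance_prob(0.2, 0.5)    # low water-cement ratio
p_high_wc = compliance_prob(0.8, 0.5)   # high water-cement ratio
```

The negative ρ makes a low water-cement percentile raise the compliance probability, which is what the proposed three-tier decision rule would then threshold at 0.95 and 0.80.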

27 pages, 536 KB  
Article
Efficient EM Estimation for the Pogit Model via Polya-Gamma Augmentation
by Iván Gutiérrez, Sandra Ramírez and Leonardo Jofré
Entropy 2026, 28(2), 207; https://doi.org/10.3390/e28020207 - 11 Feb 2026
Abstract
The Poisson-logistic (pogit) model is widely used for count data with latent intensities, with applications including under-reporting correction and share-of-wallet estimation, yet existing estimation methods do not scale well to large datasets. We propose a new expectation-maximization (EM) algorithm for the standard pogit model based on Polya-Gamma data augmentation, which yields a conditionally Gaussian complete-data likelihood with closed-form EM updates. The resulting EM algorithm has low per-iteration cost and naturally accommodates computational enhancements, including quasi-Newton acceleration and mini-batch implementations. These features enable efficient inference on datasets with millions of observations. Simulation studies and real-data applications demonstrate substantial computational improvements without loss of statistical accuracy, and comparisons with direct maximum-likelihood optimization routines show that the proposed method provides a scalable and competitive alternative for large-scale pogit estimation. Full article
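The augmentation trick can be illustrated on the logistic block alone: for ω ~ PG(1, c) one has E[ω] = tanh(c/2) / (2c), and the EM M-step reduces to weighted least squares. This is a sketch of the standard logistic-regression case, not the paper's full pogit model with its Poisson intensity:

```python
import numpy as np

def pg_mean(c):
    """E[omega] for omega ~ PG(1, c): tanh(c/2) / (2c), with the c -> 0 limit 1/4."""
    c = np.asarray(c, dtype=float)
    out = np.full_like(c, 0.25)
    nz = np.abs(c) > 1e-8
    out[nz] = np.tanh(c[nz] / 2.0) / (2.0 * c[nz])
    return out

def logistic_em(X, y, iters=100):
    """EM for logistic regression via Polya-Gamma augmentation: the complete-data
    likelihood is conditionally Gaussian, so each M-step is weighted least squares."""
    beta = np.zeros(X.shape[1])
    kappa = y - 0.5
    for _ in range(iters):
        omega = pg_mean(X @ beta)                  # E-step: expected PG weights
        H = X.T @ (omega[:, None] * X)             # M-step: solve H beta = X^T kappa
        beta = np.linalg.solve(H, X.T @ kappa)
    return beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
true_beta = np.array([-0.5, 1.2])
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)
beta_hat = logistic_em(X, y)
```

Each iteration costs one weighted least-squares solve, which is what makes quasi-Newton acceleration and mini-batching natural extensions at scale.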
(This article belongs to the Special Issue Statistical Inference: Theory and Methods)

19 pages, 8143 KB  
Article
300-GHz Photonics-Aided Wireless 2 × 2 MIMO Transmission over 200 m Using GMM-Enhanced Duobinary Unsupervised Adaptive CNN
by Luhan Jiang, Jianjun Yu, Qiutong Zhang, Wen Zhou and Min Zhu
Sensors 2026, 26(3), 842; https://doi.org/10.3390/s26030842 - 27 Jan 2026
Abstract
Terahertz wireless communication offers ultra-high bandwidth, enabling extremely high data rates for next-generation networks. However, it faces challenges including severe propagation loss and atmospheric absorption, which limit the transmission rate and distance. To address these challenges, polarization division multiplexing (PDM) and antenna diversity techniques are utilized in this work to increase system capacity without changing the bandwidth of transmitted signals. Meanwhile, duobinary shaping is used to mitigate the bandwidth limitations of components in the system, and the final duobinary signals are recovered by maximum likelihood sequence detection (MLSD). A Gaussian mixture model (GMM)-enhanced duobinary unsupervised adaptive convolutional neural network (DB-UACNN) is proposed to further deal with channel noise. Based on these technologies, a 2 × 2 multiple-input multiple-output (MIMO) photonics-aided terahertz wireless transmission system at 300 GHz is demonstrated. Experimental results show that the signal-to-noise ratio (SNR) gain of duobinary shaping reaches up to 1.87 dB and 1.70 dB in the X- and Y-polarizations, respectively. The proposed GMM-enhanced DB-UACNN provides a further SNR gain of up to 2.59 dB and 2.63 dB in the X- and Y-polarizations, compared to the conventional duobinary filter. A transmission rate of 100 Gbit/s over a distance of 200 m is ultimately realized under a 7% hard-decision forward error correction (HD-FEC) threshold. Full article
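The GMM side of such a receiver can be sketched on synthetic three-level duobinary samples. The levels, noise, and priors are illustrative, and the DB-UACNN and MLSD stages of the paper are omitted:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# Duobinary shaping maps binary data onto three levels; model the received soft
# values as a 3-component Gaussian mixture and decide symbols from the posterior.
levels = np.array([-2.0, 0.0, 2.0])
tx = rng.choice(levels, size=3000, p=[0.25, 0.5, 0.25])   # duobinary level priors
rx = tx + rng.normal(0.0, 0.3, size=tx.shape)             # AWGN-impaired samples

gmm = GaussianMixture(n_components=3, random_state=0).fit(rx.reshape(-1, 1))
order = np.argsort(gmm.means_.ravel())        # sort mixture components by mean
rank = np.empty(3, dtype=int)
rank[order] = np.arange(3)                    # component index -> level index
decided = levels[rank[gmm.predict(rx.reshape(-1, 1))]]
ser = float(np.mean(decided != tx))           # symbol error rate
```

Because the mixture is fit without labels, this decision rule is unsupervised in the same spirit as the paper's GMM-enhanced adaptation, here reduced to its simplest clustering form.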
