Search Results (364)

Search Parameters:
Keywords = sparsity estimation

22 pages, 11612 KB  
Article
A Novel Method for Reducing Uncertainty in Subglacial Topography: Implications for Greenland Ice Sheet Volume and Stability
by Oliver T. Bartlett and Steven J. Palmer
Remote Sens. 2026, 18(1), 16; https://doi.org/10.3390/rs18010016 - 20 Dec 2025
Viewed by 214
Abstract
Subglacial topography is a critical boundary condition for ice sheet models projecting past and future ice sheet–climate interactions. Contemporary ice-sheet-wide bed topography datasets are partially derived using mass conservation, but approximately 75% of the most widely used Greenland Ice Sheet (GrIS) dataset is based on simple interpolation of airborne radio-echo sounding (RES) measurements, such as kriging or streamline diffusion. Because independent validation data are limited, the errors and biases in this approach are poorly understood, leaving the uncertainties in subglacial topography largely unknown. Here, we interpolated synthetic RES observations of bed topography over ice-free areas with known topography at a 5 m spatial resolution and quantified the discrepancies. We found that the absolute error in kriged bed topography increases with distance from the input data, though at a lower rate than previously estimated. The difference between an interpolated elevation estimate and the local mean elevation is a strong predictor of real bed errors (R2 = 0.72), improving further as input observation sparsity increases (R2 > 0.82). We propose a method to quantify and reduce uncertainty in kriged bed topography in sparsely surveyed regions, reducing uncertainty for at least 56% of the kriged interior at a 99% confidence interval. Our results suggest that the subglacial bed is on average 5 m deeper than previous estimates, though individual areas may be shallower or deeper (σ = 41 m). Consequently, the area grounded below sea level is likely underestimated by 2%, increasing to 29% for regions deeper than 200 m. These findings have potential implications for the future stability of the GrIS under climate change.
(This article belongs to the Special Issue Remote Sensing of the Cryosphere (Third Edition))
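The kriging interpolation this result examines can be illustrated with a minimal ordinary-kriging sketch in numpy (the exponential variogram and its parameters are illustrative assumptions, not the paper's fitted model); the kriging variance grows with distance from the observations, mirroring the error behaviour the authors quantify.

```python
import numpy as np

def ordinary_kriging(x_obs, z_obs, x_new, sill=1.0, rng=30.0):
    """Estimate z at x_new from scattered 1-D observations via ordinary kriging
    with an exponential variogram (parameter values are illustrative)."""
    gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / rng))
    n = len(x_obs)
    # Kriging system: variogram matrix augmented with a Lagrange-multiplier
    # row/column enforcing that the weights sum to one.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(x_obs[:, None] - x_obs[None, :])
    A[-1, -1] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(x_new - x_obs)
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z_obs
    variance = w @ b  # kriging variance grows with distance from the data
    return estimate, variance
```

At an observation point the estimator is exact and the variance collapses to zero; far from the data the variance approaches the sill, which is the distance-dependent error growth described above.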

20 pages, 4309 KB  
Article
Targetless Radar–Camera Calibration via Trajectory Alignment
by Ozan Durmaz and Hakan Cevikalp
Sensors 2025, 25(24), 7574; https://doi.org/10.3390/s25247574 - 13 Dec 2025
Viewed by 454
Abstract
Accurate extrinsic calibration between radar and camera sensors is essential for reliable multi-modal perception in robotics and autonomous navigation. Traditional calibration methods often rely on artificial targets such as checkerboards or corner reflectors, which can be impractical in dynamic or large-scale environments. This study presents a fully targetless calibration framework that estimates the rigid spatial transformation between radar and camera coordinate frames by aligning their observed trajectories of a moving object. The proposed method integrates You Only Look Once version 5 (YOLOv5)-based 3D object localization for the camera stream with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Sample Consensus (RANSAC) filtering for sparse and noisy radar measurements. A passive temporal synchronization technique, based on Root Mean Square Error (RMSE) minimization, corrects timestamp offsets without requiring hardware triggers. Rigid transformation parameters are computed using Kabsch and Umeyama algorithms, ensuring robust alignment even under millimeter-wave (mmWave) radar sparsity and measurement bias. The framework is experimentally validated in an indoor OptiTrack-equipped laboratory using a Skydio 2 drone as the dynamic target. Results demonstrate sub-degree rotational accuracy and decimeter-level translational error (approximately 0.12–0.27 m depending on the metric), with successful generalization to unseen motion trajectories. The findings highlight the method’s applicability for real-world autonomous systems requiring practical, markerless multi-sensor calibration. Full article
(This article belongs to the Section Radar Sensors)
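The rigid alignment step the paper computes with the Kabsch algorithm can be sketched as follows (a generic SVD-based implementation, not the authors' code; paired, temporally synchronized points from the two trajectories are assumed given):

```python
import numpy as np

def kabsch(P, Q):
    """Rigid rotation R and translation t minimising ||R @ p_i + t - q_i||
    over paired 3-D points (rows of P and Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In the paper's setting P would be drone positions in the radar frame and Q the same positions in the camera frame, after DBSCAN/RANSAC filtering and RMSE-based time offset correction.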

32 pages, 7489 KB  
Article
Identification of Non-Stationary Communication Channels with a Sparseness Property
by Marcin Ciołek
Appl. Sci. 2025, 15(24), 13043; https://doi.org/10.3390/app152413043 - 11 Dec 2025
Viewed by 209
Abstract
The problem of identifying non-stationary communication channels with a sparseness property using the local basis function approach is considered. This sparseness refers to scenarios where only a few impulse response coefficients differ significantly from zero. Sparsity-aware estimation algorithms are usually obtained using ℓ1 regularization. Unfortunately, the resulting minimization problem sometimes lacks a closed-form solution, and one must then rely on numerical search, which is a serious drawback. We propose the fast regularized local basis functions (fRLBF) algorithm based on appropriately reweighted ℓ2 regularizers, which can be regarded as a first-order approximation of the ℓ1 approach. The proposed solution incorporates two regularizers, enhancing sparseness in both the time/lag and frequency domains. The choice of regularization gains is an important part of regularized estimation. To address this, three approaches are proposed and compared: empirical Bayes, decentralized, and cross-validation. The performance of the proposed algorithm is demonstrated in a numerical experiment simulating underwater acoustic communication scenarios. It is shown that the new approach can outperform the classical one and is computationally attractive.
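The reweighted ℓ2 idea behind fRLBF, replacing the ℓ1 penalty with quadratic penalties whose weights are refreshed from the current estimate, can be sketched for a generic linear model (illustrative regularization gain and iteration count; the actual algorithm operates on local basis function coefficients):

```python
import numpy as np

def reweighted_l2(A, y, lam=0.1, iters=20, eps=1e-6):
    """Sparse estimate of x in y = A x via iteratively reweighted l2
    regularisation, a first-order surrogate for the l1 penalty."""
    n = A.shape[1]
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        # Weighted ridge step: min ||A x - y||^2 + lam * sum_i w_i x_i^2
        # has a closed-form solution, unlike the l1 problem.
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        w = 1.0 / (np.abs(x) + eps)  # small coefficients get penalised harder
    return x
```

Since w_i x_i^2 ≈ |x_i| when w_i = 1/|x_i|, each iteration approximates the ℓ1 objective while keeping the closed-form ridge update, which is the computational attraction noted in the abstract.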

13 pages, 2355 KB  
Article
Structural Damage Identification with Machine Learning Based Bayesian Model Selection for High-Dimensional Systems
by Kunyang Wang and Yukihide Kajita
Buildings 2025, 15(24), 4456; https://doi.org/10.3390/buildings15244456 - 10 Dec 2025
Viewed by 218
Abstract
Identifying structural damage in high-dimensional systems remains a major challenge due to the curse of dimensionality and the inherent sparsity of real-world damage scenarios. Traditional Bayesian or optimization-based approaches often become computationally intractable when applied to structures with a large number of uncertain parameters, where only a few members are actually damaged. To address this problem, this study proposes a Machine Learning (ML) and Widely Applicable Information Criterion (WAIC) based Bayesian framework for efficient and accurate damage identification in high-dimensional systems. In the proposed approach, an ML model is first trained on simulated modal responses under randomly generated damage patterns. The ML model predicts the most likely damaged members from measured responses, effectively reducing the high-dimensional search space to a small subset of candidates. Subsequently, WAIC is employed to evaluate the models formed from these candidates and automatically select the optimal damage model. By combining the localization capability of ML with the uncertainty quantification of Bayesian inference, the proposed method achieves high identification accuracy while significantly reducing the computational cost of model selection. Numerical experiments on a high-dimensional truss system demonstrate that the method can accurately locate and quantify multiple instances of damage even under noise contamination. The results confirm that the hybrid framework effectively mitigates the curse of dimensionality and provides a robust solution for structural damage identification in large-scale structural systems.
(This article belongs to the Section Building Structures)
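The WAIC used for model selection has a standard sample-based formula; a minimal sketch, assuming an S × N matrix of pointwise log-likelihoods drawn from the posterior (naive exponentiation is fine for illustration, though a log-sum-exp would be used in practice for numerical stability):

```python
import numpy as np

def waic(loglik):
    """WAIC from an (S samples x N data points) matrix of pointwise
    log-likelihoods: -2 * (lppd - p_waic). Lower is better."""
    # log pointwise predictive density: log of the posterior-mean likelihood
    lppd = np.sum(np.log(np.mean(np.exp(loglik), axis=0)))
    # effective number of parameters: pointwise posterior variance of log-lik
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)
```

Candidate damage models (subsets of ML-flagged members) would each be scored this way, and the model with the lowest WAIC selected.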

18 pages, 3502 KB  
Article
A Machine Learning Approach for Estimating Person Counts Using Anonymous WiFi Data in a University Library
by Lucio Hernando-Cánovas, Alejandro S. Martínez-Sala, Juan C. Sánchez-Aarnoutse and Juan J. Alcaraz
Sensors 2025, 25(22), 7065; https://doi.org/10.3390/s25227065 - 19 Nov 2025
Viewed by 625
Abstract
Accurately estimating indoor occupancy is essential for managing building spaces and infrastructure, with applications ranging from ensuring safe distancing and adequate ventilation during health crises to optimizing energy consumption and resource allocation. However, no existing technology simultaneously achieves accuracy, low cost, and privacy preservation in indoor occupancy measurement. This study investigates the use of existing WiFi infrastructure as a non-intrusive sensing system, where access points operate as soft sensors that passively collect anonymized connection metadata serving as proxies for human presence. The proposed approach was validated in a university library over eight months, training supervised machine learning regression models on WiFi data and comparing predictions against computer-vision ground truth. The best-performing models (SVR, Ridge, and MLP) consistently achieved R2 ≈ 0.95, with mean absolute errors of about 8 persons and relative errors (SMAPE) below 10% at medium-to-high occupancies. Tree-based ensemble models, particularly XGBoost, exhibited weaker generalization at extreme capacity ranges, likely due to data sparsity and sensitivity to hyperparameters. Importantly, no temporal degradation was observed across the eight-month horizon, confirming the long-term stability of the method. Overall, the results demonstrate that WiFi-based occupancy estimation offers a robust, cost-effective, and privacy-preserving solution for real-world deployments.
(This article belongs to the Special Issue Indoor Wi-Fi Positioning: Techniques and Systems—2nd Edition)
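The SMAPE figure quoted above is easy to state precisely; this is one common variant of the formula (the paper may use a slightly different normalisation):

```python
import numpy as np

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error (%), a scale-free metric
    suited to comparing occupancy-count predictions across rooms."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / denom)
```

Unlike plain MAPE, the symmetric denominator keeps the metric bounded when true counts are small, which matters at low occupancies.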

19 pages, 5931 KB  
Article
Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework
by Krzysztof Malczewski
Appl. Sci. 2025, 15(22), 12186; https://doi.org/10.3390/app152212186 - 17 Nov 2025
Viewed by 334
Abstract
Rapid and reliable identification of large vessel occlusions and critical stenoses is essential for guiding treatment in acute ischemic stroke. Conventional MR angiography (MRA) and PET protocols are constrained by trade-offs among acquisition time, spatial resolution, and motion tolerance. A multimodal MR–PET angiography reconstruction framework is introduced that integrates joint Hankel-structured sparsity with topology-preserving multitask learning to overcome these limitations. High-resolution time-of-flight MRA and perfusion-sensitive PET volumes are reconstructed from undersampled data using a cross-modal low-rank Hankel prior coupled to a super-resolution generator optimized with adversarial, perceptual, and pixel-wise losses. Vesselness filtering and centerline continuity terms enforce preservation of fine arterial topology, while learned k-space and sinogram sampling concentrate measurements within vascular territories. Motion correction, blind deblurring, and modality-specific denoising are embedded to improve robustness under clinical conditions. A multitask output head estimates occlusion probability, stenosis localization, and collateral flow, with hypoperfusion mapping generated for dynamic PET. Evaluation on clinical and synthetically undersampled MR–PET studies demonstrated consistent improvements over MR-only, PET-only, and conventional fusion methods. The framework achieved higher image quality (MRA PSNR gains up to 3.7 dB and SSIM improvements of 0.042), reduced vascular topology breaks by over 20%, and improved large vessel occlusion detection by nearly 10% in AUROC, while maintaining at least a 40% reduction in sampling. These findings demonstrate that embedding vascular-aware priors within a joint Hankel–sparse MR–PET framework enables accelerated acquisition with clinically relevant benefits for early stroke assessment. Full article
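The low-rank Hankel prior rests on a classical fact: a signal composed of a few complex tones yields a Hankel matrix whose rank equals the number of tones. A small numpy demonstration on a synthetic signal (not MR–PET data):

```python
import numpy as np

# Two complex exponentials -> the Hankel matrix built from the samples
# has numerical rank 2, the structure the Hankel-sparsity prior exploits.
n = 64
t = np.arange(n)
signal = np.exp(2j * np.pi * 0.10 * t) + 0.5 * np.exp(2j * np.pi * 0.23 * t)

# Hankel matrix H[i, j] = signal[i + j], sized 32 x 33
H = signal[np.arange(n // 2)[:, None] + np.arange(n // 2 + 1)[None, :]]
s = np.linalg.svd(H, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))  # numerical rank = number of tones
```

Undersampled acquisitions can then be completed by seeking the data-consistent signal whose Hankel matrix has minimal rank, which is the role of the cross-modal low-rank Hankel prior in the framework above.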

19 pages, 7441 KB  
Article
All for One or One for All? A Comparative Study of Grouped Data in Mixed-Effects Additive Bayesian Networks
by Magali Champion, Matteo Delucchi and Reinhard Furrer
Mathematics 2025, 13(22), 3649; https://doi.org/10.3390/math13223649 - 14 Nov 2025
Viewed by 468
Abstract
Additive Bayesian networks (ABNs) provide a flexible framework for modeling complex multivariate dependencies among variables of different distributions, including Gaussian, Poisson, binomial, and multinomial. This versatility makes ABNs particularly attractive in clinical research, where heterogeneous data are frequently collected across distinct groups. However, standard applications either pool all data together, ignoring group-specific variability, or estimate separate models for each group, which may suffer from limited sample sizes. In this work, we extend ABNs to a mixed-effects framework that accounts for group structure through partial pooling, and we evaluate its performance in a large-scale simulation study. We compare three strategies—partial pooling, complete pooling, and no pooling—across a wide range of network sizes, sparsity levels, group configurations, and sample sizes. Performance is assessed in terms of structural accuracy, parameter estimation accuracy, and predictive performance. Our results demonstrate that partial pooling consistently yields superior structural and parametric accuracy while maintaining robust predictive performance across all evaluated settings for grouped data structures. These findings highlight the potential of mixed-effects ABNs as a versatile approach for learning probabilistic graphical models from grouped data with diverse distributions in real-world applications.
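The three pooling strategies compared above differ in how far each group's estimate is shrunk toward the grand mean; for a simple normal model with known variances this compromise has a closed form (a textbook sketch, not the ABN-specific estimator):

```python
import numpy as np

def partial_pool(group_means, group_sizes, sigma2, tau2):
    """Shrink each group mean toward the size-weighted grand mean.
    sigma2 = within-group variance, tau2 = between-group variance
    (both assumed known for this illustration)."""
    group_means = np.asarray(group_means, dtype=float)
    n = np.asarray(group_sizes, dtype=float)
    grand = np.average(group_means, weights=n)
    # Precision-weighted shrinkage: large groups keep their own mean,
    # small groups are pulled toward the grand mean.
    shrink = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
    return shrink * group_means + (1.0 - shrink) * grand
```

The two extremes recover the other strategies: tau2 → ∞ gives no pooling (each group keeps its own mean) and tau2 → 0 gives complete pooling (every group collapses to the grand mean).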

27 pages, 13622 KB  
Article
Deep Learning Improves Planting Year Estimation of Macadamia Orchards in Australia
by Andrew Clark, James Brinkhoff, Andrew Robson and Craig Shephard
Agriculture 2025, 15(22), 2346; https://doi.org/10.3390/agriculture15222346 - 11 Nov 2025
Viewed by 551
Abstract
Deep learning reduced macadamia planting year error at a national scale, achieving a pixel-level Mean Absolute Error (MAE) of 1.2 years and outperforming a vegetation index threshold baseline (MAE 1.6 years) and tree-based models—Random Forest (RF; MAE 3.02 years) and Gradient Boosted Trees (GBT; MAE 2.9 years). Using Digital Earth Australia Landsat annual geomedians (1988–2023) and block-level, industry-supplied planting year data, models were trained and evaluated at the pixel level under a strict Leave-One-Region-Out cross-validation (LOROCV) protocol; a secondary block-level random split (80/10/10) is reported only to illustrate the more optimistic setting, where shared regional conditions yield lower errors (0.6–0.7 years). Predictions reconstruct planting year retrospectively from the full historical record rather than providing real-time forecasts. The final model was then applied to all Australian Tree Crop Map (ATCM) macadamia orchard polygons to produce wall-to-wall planting year estimates. The approach enables fine-grained mapping of planting patterns to support yield forecasting, resource allocation, and industry planning. Results indicate that sequence-based deep models capture informative temporal dynamics beyond thresholding and conventional machine learning baselines, while remaining constrained by regional and temporal data sparsity. The framework is scalable and transferable, offering a pathway to planting year mapping for other perennial crops and to more resilient, data-driven agricultural decision-making. Full article
(This article belongs to the Special Issue Remote Sensing in Crop Protection)
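The Leave-One-Region-Out cross-validation protocol used above can be sketched as a generator of train/test index splits (per-sample region labels are assumed given):

```python
import numpy as np

def leave_one_region_out(regions):
    """Yield (train_idx, test_idx) pairs holding out one region at a time,
    so a model is always evaluated on a region it never saw in training."""
    regions = np.asarray(regions)
    for r in np.unique(regions):
        test = np.where(regions == r)[0]
        train = np.where(regions != r)[0]
        yield train, test
```

Grouped splits like this avoid the optimistic bias of random splits noted in the abstract, where shared regional conditions leak between train and test sets.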

28 pages, 19566 KB  
Article
CResDAE: A Deep Autoencoder with Attention Mechanism for Hyperspectral Unmixing
by Chong Zhao, Jinlin Wang, Qingqing Qiao, Kefa Zhou, Jiantao Bi, Qing Zhang, Wei Wang, Dong Li, Tao Liao, Chao Li, Heshun Qiu and Guangjun Qu
Remote Sens. 2025, 17(21), 3622; https://doi.org/10.3390/rs17213622 - 31 Oct 2025
Viewed by 581
Abstract
Hyperspectral unmixing aims to extract pure spectral signatures (endmembers) and estimate their corresponding abundance fractions from mixed pixels, enabling quantitative analysis of surface material composition. However, in geological mineral exploration, existing unmixing methods often fail to explicitly identify informative spectral bands, lack inter-layer information transfer mechanisms, and overlook the physical constraints intrinsic to the unmixing process. These issues result in limited directionality, sparsity, and interpretability. To address these limitations, this paper proposes a novel model, CResDAE, based on a deep autoencoder architecture. The encoder integrates a channel attention mechanism and deep residual modules to enhance its ability to assign adaptive weights to spectral bands in geological hyperspectral unmixing tasks. The model is evaluated by comparing its performance with traditional and deep learning-based unmixing methods on synthetic datasets, and through a comparative analysis with a nonlinear autoencoder on the Urban hyperspectral scene. Experimental results show that CResDAE consistently outperforms both conventional and deep learning counterparts. Finally, CResDAE is applied to GF-5 hyperspectral imagery from Yunnan Province, China, where it effectively distinguishes surface materials such as Forest, Grassland, Silicate, Carbonate, and Sulfate, offering reliable data support for geological surveys and mineral exploration in covered regions. Full article
(This article belongs to the Special Issue AI-Driven Hyperspectral Remote Sensing of Atmosphere and Land)

36 pages, 738 KB  
Article
Activity Detection and Channel Estimation Based on Correlated Hybrid Message Passing for Grant-Free Massive Random Access
by Xiaofeng Liu, Xinrui Gong and Xiao Fu
Entropy 2025, 27(11), 1111; https://doi.org/10.3390/e27111111 - 28 Oct 2025
Viewed by 544
Abstract
Massive machine-type communications (mMTC) in future 6G networks will involve a vast number of devices with sporadic traffic. Grant-free access has emerged as an effective strategy to reduce the access latency and processing overhead by allowing devices to transmit without prior permission, making accurate active user detection and channel estimation (AUDCE) crucial. In this paper, we investigate the joint AUDCE problem in wideband massive access systems. We develop an innovative channel prior model that captures the dual correlation structure of the channel using three state variables: active indication, channel supports, and channel values. By integrating Markov chains with coupled Gaussian distributions, the model effectively describes both the structural and numerical dependencies within the channel. We propose the correlated hybrid message passing (CHMP) algorithm based on Bethe free energy (BFE) minimization, which adaptively updates model parameters without requiring prior knowledge of user sparsity or channel priors. Simulation results show that the CHMP algorithm accurately detects active users and achieves precise channel estimation. Full article
(This article belongs to the Topic Advances in Sixth Generation and Beyond (6G&B))

23 pages, 746 KB  
Article
Modeling Viewing Engagement in Long-Form Video Through the Lens of Expectation-Confirmation Theory
by Yingjie Chen and Jin Zhang
Appl. Sci. 2025, 15(20), 11252; https://doi.org/10.3390/app152011252 - 21 Oct 2025
Viewed by 750
Abstract
Existing long-form video recommendation systems primarily rely on rating prediction or click-through rate estimation. However, the former is constrained by data sparsity, while the latter fails to capture actual viewing experiences. The accumulation of mid-playback abandonment behaviors undermines platform stickiness and commercial value. To address this issue, this paper seeks to improve viewing engagement. Grounded in Expectation-Confirmation Theory, this paper proposes the Long-Form Video Viewing Engagement Prediction (LVVEP) method. Specifically, LVVEP estimates user expectations from storyline semantics encoded by a pre-trained BERT model and refined via contrastive learning, weighted by historical engagement levels. Perceived experience is dynamically constructed using a GRU-based encoder enhanced with cross-attention and a neural tensor kernel, enabling the model to capture evolving preferences and fine-grained semantic interactions. The model parameters are optimized by jointly combining prediction loss with contrastive loss, achieving more accurate user viewing engagement predictions. Experiments conducted on real-world long-form video viewing records demonstrate that LVVEP outperforms baseline models, providing novel methodological contributions and empirical evidence to research on long-form video recommendation. The findings provide practical implications for optimizing platform management, improving operational efficiency, and enhancing the quality of information services in long-form video platforms. Full article

19 pages, 1396 KB  
Article
Sparse Keyword Data Analysis Using Bayesian Pattern Mining
by Sunghae Jun
Computers 2025, 14(10), 436; https://doi.org/10.3390/computers14100436 - 14 Oct 2025
Viewed by 474
Abstract
Keyword data analysis aims to extract and interpret meaningful relationships from large collections of text documents. A major challenge in this process arises from the extreme sparsity of document–keyword matrices, where the majority of elements are zeros due to zero inflation. To address this issue, this study proposes a probabilistic framework called Bayesian Pattern Mining (BPM), which integrates Bayesian inference into association rule mining (ARM). The proposed method estimates both the expected values and credible intervals of interestingness measures such as confidence and lift, providing a probabilistic evaluation of keyword associations. Experiments conducted on 9436 quantum computing patent documents, from which 175 representative keywords were extracted, demonstrate that BPM yields more stable and interpretable associations than conventional ARM. By incorporating credible intervals, BPM reduces the risk of biased decisions under sparsity and enhances the reliability of keyword-based technology analysis, offering a rigorous approach for knowledge discovery in zero-inflated text data. Full article
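The credible intervals BPM attaches to interestingness measures can be illustrated with a conjugate Beta model for rule confidence (a sketch of the Bayesian treatment under an assumed Beta prior, not necessarily the paper's exact model): the confidence of X ⇒ Y is P(Y | X), estimated from n_x documents containing X, n_xy of which also contain Y.

```python
from scipy.stats import beta

def confidence_interval(n_xy, n_x, level=0.95, a0=1.0, b0=1.0):
    """Posterior mean and credible interval for rule confidence P(Y | X),
    with a Beta(a0, b0) prior on the conditional probability."""
    post = beta(a0 + n_xy, b0 + n_x - n_xy)
    lo, hi = post.interval(level)
    return post.mean(), (lo, hi)
```

With sparse data the interval is wide, flagging rules whose apparent confidence is unreliable, which is the protection against biased decisions the abstract describes.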

24 pages, 7771 KB  
Article
Cross-Domain OTFS Detection via Delay–Doppler Decoupling: Reduced-Complexity Design and Performance Analysis
by Mengmeng Liu, Shuangyang Li, Baoming Bai and Giuseppe Caire
Entropy 2025, 27(10), 1062; https://doi.org/10.3390/e27101062 - 13 Oct 2025
Viewed by 694
Abstract
In this paper, a reduced-complexity cross-domain iterative detection scheme for orthogonal time frequency space (OTFS) modulation is proposed that exploits channel properties in both the time and delay–Doppler domains. Specifically, we first show that in the time-domain effective channel, the path delay only introduces interference among samples in adjacent time slots, while the Doppler becomes a phase term that does not affect the channel sparsity. This investigation indicates that the effects of delay and Doppler can be decoupled and treated separately. This “band-limited” matrix structure further motivates us to apply a reduced-size linear minimum mean square error (LMMSE) filter to eliminate the effect of delay in the time domain, while exploiting the cross-domain iteration to minimize the effect of Doppler, noting that time and Doppler form a Fourier dual pair. Furthermore, we apply eigenvalue decomposition to the reduced-size LMMSE estimator, which makes the computational complexity independent of the number of cross-domain iterations, thus significantly reducing the computational complexity. The bias evolution and variance evolution are derived to evaluate the average MSE performance of the proposed scheme, showing that the proposed estimators suffer from only negligible estimation bias in both the time and delay–Doppler domains. In particular, the state (MSE) evolution is compared with bounds to verify the effectiveness of the proposed scheme. Simulation results demonstrate that the proposed scheme achieves almost the same error performance as optimal detection while requiring significantly lower complexity.
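The LMMSE filter at the heart of the scheme follows the standard regularised least-squares form (a generic sketch; the paper additionally exploits the band-limited channel structure and an eigenvalue decomposition so the filter can be reused cheaply across cross-domain iterations):

```python
import numpy as np

def lmmse(H, y, noise_var):
    """LMMSE estimate of x from y = H x + n, assuming unit-power symbols
    and white noise: x_hat = (H^H H + noise_var * I)^{-1} H^H y."""
    n = H.shape[1]
    # In the paper, an eigendecomposition of H^H H would be precomputed so
    # that changing noise statistics does not require re-inverting.
    G = np.linalg.solve(H.conj().T @ H + noise_var * np.eye(n), H.conj().T)
    return G @ y
```

As the noise variance tends to zero the estimator reduces to the least-squares solution, recovering the transmitted symbols exactly in the noiseless case.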

17 pages, 1106 KB  
Article
Calibrated Global Logit Fusion (CGLF) for Fetal Health Classification Using Cardiotocographic Data
by Mehret Ephrem Abraha and Juntae Kim
Electronics 2025, 14(20), 4013; https://doi.org/10.3390/electronics14204013 - 13 Oct 2025
Viewed by 488
Abstract
Accurate detection of fetal distress from cardiotocography (CTG) is clinically critical but remains subjective and error-prone. In this research, we present a leakage-safe Calibrated Global Logit Fusion (CGLF) framework that couples TabNet’s sparse, attention-based feature selection with XGBoost’s gradient-boosted rules and fuses their class probabilities through global logit blending followed by per-class vector temperature calibration. Class imbalance is addressed with SMOTE–Tomek for TabNet and one XGBoost stream (XGB–A), and class-weighted training for a second stream (XGB–B). To prevent information leakage, all preprocessing, resampling, and weighting are fitted only on the training split within each outer fold. Out-of-fold (OOF) predictions from the outer-train split are then used to optimize blend weights and fit calibration parameters, which are subsequently applied once to the corresponding held-out outer-test fold. CGLF matches top-tier discrimination on the public Fetal Health dataset while producing more reliable probability estimates than strong standalone baselines. Under nested cross-validation, CGLF delivers AUROC and overall accuracy comparable to the best tree-based model, with visibly improved calibration and slightly lower balanced accuracy in some splits. We also provide interpretability and overfitting checks via TabNet sparsity, feature stability analysis, and sufficiency (k95) curves. Finally, threshold tuning under a balanced-accuracy floor preserves sensitivity to pathological cases, aligning operating points with risk-aware obstetric decision support. Overall, CGLF is a calibration-centric, leakage-controlled CTG pipeline that is interpretable and suited to threshold-based clinical deployment.
(This article belongs to the Special Issue Advances in Algorithm Optimization and Computational Intelligence)
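The fusion-and-calibration step can be sketched generically: blend the two models' logits with a global weight, then apply per-class temperatures before the softmax (the weight and temperatures here are placeholders; in the paper they are fitted on out-of-fold predictions):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fuse(logits_a, logits_b, w, temps):
    """Global logit blend of two models, then per-class vector temperature
    scaling -- a minimal sketch of calibrated logit fusion."""
    blended = w * logits_a + (1.0 - w) * logits_b
    return softmax(blended / np.asarray(temps))
```

A temperature above one flattens that class's logits (less overconfident probabilities); one below one sharpens them, so the fitted vector corrects per-class miscalibration without changing the blend's ranking much.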

27 pages, 3840 KB  
Article
Adaptive Lag Binning and Physics-Weighted Variograms: A LOOCV-Optimised Universal Kriging Framework with Trend Decomposition for High-Fidelity 3D Cryogenic Temperature Field Reconstruction
by Jiecheng Tang, Yisha Chen, Baolin Liu, Jie Cao and Jianxin Wang
Processes 2025, 13(10), 3160; https://doi.org/10.3390/pr13103160 - 3 Oct 2025
Viewed by 571
Abstract
Biobanks rely on ultra-low-temperature (ULT) storage for irreplaceable specimens, where precise 3D temperature field reconstruction is critical to preserve integrity. This is the first study to apply geostatistical methods to ULT field reconstruction in cryogenic biobanking systems. We address critical gaps in sparse-sensor environments where conventional interpolation fails due to vertical thermal stratification and non-stationary trends. Our physics-informed universal kriging framework introduces (1) the first domain-specific adaptation of universal kriging for 3D cryogenic temperature field reconstruction; (2) eight novel lag-binning methods explicitly designed for sparse, anisotropic sensor networks; and (3) a leave-one-out cross-validation-driven framework that automatically selects the optimal combination of trend model, binning strategy, logistic weighting, and variogram model fitting. Validated on real data collected from a 3000 L operating cryogenic chest freezer, the method achieves sub-degree accuracy by isolating physics-guided vertical trends (quadratic detrending dominant) and stabilising variogram estimation under sparsity. Unlike static approaches, our framework dynamically adapts to thermal regimes without manual tuning, enabling centimetre-scale virtual sensing. This work establishes geostatistics as a foundational tool for cryogenic thermal monitoring, with direct engineering applications in biobank quality control and predictive analytics. Full article
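The empirical semivariogram that the lag-binning strategies feed can be sketched with equal-width bins, the simplest scheme (the paper proposes eight adaptive alternatives tuned for sparse, anisotropic sensor layouts):

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=10):
    """Empirical semivariogram with equal-width lag bins: mean of
    0.5 * (z_i - z_j)^2 over point pairs grouped by separation distance."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    h, gamma = d[iu], g[iu]
    edges = np.linspace(0.0, h.max(), n_bins + 1)
    idx = np.clip(np.digitize(h, edges) - 1, 0, n_bins - 1)
    lag = np.array([h[idx == b].mean() if np.any(idx == b) else np.nan
                    for b in range(n_bins)])
    sv = np.array([gamma[idx == b].mean() if np.any(idx == b) else np.nan
                   for b in range(n_bins)])
    return lag, sv
```

With few sensors, many bins end up empty or thinly populated (the NaNs above), which is exactly the instability that motivates the adaptive binning and LOOCV-driven selection described in the abstract.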
