Search Results (3,462)

Search Parameters:
Keywords = ground-truthing

46 pages, 2951 KB  
Article
Topology-Based Machine Learning and Regime Identification in Stochastic, Heavy-Tailed Financial Time Series
by Prosper Lamothe-Fernández, Eduardo Rojas and Andriy Bayuk
Mathematics 2026, 14(7), 1098; https://doi.org/10.3390/math14071098 (registering DOI) - 24 Mar 2026
Abstract
Classic machine learning and regime identification methods applied to financial time series lack theoretical guarantees and exhibit systematic failure modes: heavy tails invalidate moment-based geometry, rendering distances and centroids unstable or dominated by extremes; jumps violate smoothness, destabilizing local regressions, kernel methods, and gradient-based learning; and non-stationarity disrupts neighborhood relations, so distances in classical feature spaces no longer reflect meaningful proximity. To address these challenges, we propose a topology-based machine-learning framework grounded in probabilistic reconstruction of state-space geometry, which replaces moment- and smoothness-dependent representations with deformation-stable summaries that preserve neighborhoods, adjacency, and topology. The finite-sample validity of homeomorphic state-space reconstruction, required for topology-based machine learning, is assessed through numerical studies on synthetic data with heavy tails, jumps, and known ground-truth regimes. Further diagnostics of local invertibility and bounded geometric distortion quantify when embedding windows are consistent with local diffeomorphic behavior, enabling metric-sensitive, geometry-aware learning. Clustering of Hilbert-space summaries accurately recovers underlying market tail-risk regimes, with robust results across the selected filtrations. Temporal, feature-space, and cluster-label null tests confirm that topology-based clustering captures genuine topological structure rather than noise or artifacts, and encodes temporal dependencies at local, mesoscopic, and network levels associated with market regimes.
(This article belongs to the Section E: Applied Mathematics)
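
The state-space reconstruction the abstract relies on is, at its core, a delay embedding. As a minimal sketch of that classical ingredient (not the authors' probabilistic variant), the snippet below embeds a heavy-tailed return series into R^m; the window length `m` and lag `tau` are illustrative assumptions:

```python
import numpy as np

def delay_embed(x, m=3, tau=1):
    """Takens-style delay embedding: map a scalar series x[t] to points
    (x[t], x[t+tau], ..., x[t+(m-1)*tau]) in R^m."""
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for chosen m and tau")
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Toy heavy-tailed return series (Student-t, df=3) with a volatility regime shift.
rng = np.random.default_rng(0)
returns = np.concatenate([rng.standard_t(3, 500) * 0.5,
                          rng.standard_t(3, 500) * 2.0])
cloud = delay_embed(returns, m=3, tau=2)
print(cloud.shape)  # (996, 3): point cloud a persistence/clustering stage would consume
```

The deformation-stable summaries (persistence-based features in a Hilbert space) would then be computed from `cloud`; that stage is specific to the paper and is not reproduced here.
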
14 pages, 851 KB  
Article
Fully Automated AI-Based Lymph Node Measurements in Chest CT: Accuracy and Reproducibility Compared with Multi-Reader Assessment
by Andra-Iza Iuga, Heike Carolus, Liliana Lourenco Caldeira, Jonathan Kottlors, Miriam Rinneburger, Mathilda Weisthoff, Philipp Fervers, Philip Rauen, Florian Fichter, Lukas Goertz, Pia Niederau, Florian Siedek, Carola Heneweer, Carsten Gietzen, Lenhard Pennig, Anja Dobrostal, Fabian Laqua, Piotr Woznicki, David Maintz, Bettina Baessler and Thorsten Persigehl
Diagnostics 2026, 16(7), 967; https://doi.org/10.3390/diagnostics16070967 (registering DOI) - 24 Mar 2026
Abstract
Background/Objectives: Accurate and reproducible lymph node (LN) measurement is essential for oncologic staging and therapy monitoring but is subject to inter-reader variability. This study evaluated the accuracy and reproducibility of a fully automated artificial intelligence (AI)-based LN measurement workflow in contrast-enhanced chest CT, using multi-reader manual measurements as the reference. Methods: Sixty thoracic LNs from seven patients were independently measured by 13 radiologists in two reading rounds. The median of all measurements served as the ground truth (GT). Automated short-axis (SAD) and long-axis (LAD) diameters were derived from fully automated 3D CNN-based segmentations. Agreement between AI and manual measurements was assessed using Friedman testing, intraclass correlation coefficients (ICCs), and concordance correlation coefficients (CCCs). Measurement stability was evaluated across repeated runs on different hardware systems. Results: A total of 2280 manual measurements were analyzed. Manual assessment showed significant inter-reader variability (p < 0.01), while intra-reader agreement was high. No significant differences were observed between AI-based measurements and the GT (all p > 0.01). Agreement was good, with CCC values of 0.86 (SAD) and 0.79 (LAD). AI-based measurements were numerically stable across hardware configurations. Conclusions: Fully automated AI-based LN measurements in chest CT scans provide strong agreement with multi-reader consensus and high numerical stability. Automated measurement may support more standardized and reproducible oncologic imaging assessment.
(This article belongs to the Special Issue AI for Medical Diagnosis: From Algorithms to Clinical Integration)
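
Lin's concordance correlation coefficient, the agreement statistic reported above (CCC 0.86 for SAD, 0.79 for LAD), has a simple closed form. A minimal sketch, with hypothetical paired diameters standing in for the study data:

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical short-axis diameters (mm): AI output vs. multi-reader median.
ai = [8.1, 11.4, 6.9, 15.2, 9.8]
gt = [8.0, 11.0, 7.2, 14.8, 10.1]
print(round(lin_ccc(ai, gt), 3))
```

Unlike Pearson's r, the CCC penalizes both location and scale shifts, which is why it is preferred for method-agreement studies like this one.
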
19 pages, 689 KB  
Article
From Social Media Content to Value Co-Creation: Role of Environmental Attitude, Environmental Knowledge, and Green Truth
by Gabriel Usiña-Báscones, Nelson Carrión-Bósquez, Mayra Samaniego-Arias, Rubén Marchena-Chanduvi, Santiago Medina-Miranda, Wilson Zambrano-Vélez, Wilfredo Ruiz-García, Mary Llamo-Burga and Oscar Ortiz-Regalado
Foods 2026, 15(7), 1120; https://doi.org/10.3390/foods15071120 - 24 Mar 2026
Abstract
This study examined how social media content influences value co-creation among organic product consumers through the mediating roles of environmental awareness, green trust, and environmental attitude. Grounded in the Stimulus-Organism-Response (SOR) framework, social media content is conceptualized as a stimulus; environmental awareness, green trust, and environmental attitude as internal organism states; and value co-creation as the behavioral response. A cross-sectional quantitative design was applied using a 20-item questionnaire administered to 739 organic-product consumers. Data were analyzed using partial least-squares structural equation modeling (PLS-SEM). The results indicate that social media content does not directly affect value co-creation but significantly influences environmental awareness, green trust, and environmental attitude. Environmental awareness and green trust positively affect both environmental attitude and value co-creation, and environmental attitude emerges as the strongest direct predictor of value co-creation. These findings confirm the mediating role of cognitive and attitudinal mechanisms in transforming digital sustainability content into collaborative consumer behavior. This study contributes to the literature on sustainable consumption by integrating communication, cognitive, and attitudinal variables in a single explanatory model. Practically, the findings suggest that sustainability communication strategies in digital environments should prioritize credibility and environmental knowledge to foster consumer participation in value co-creation.
(This article belongs to the Section Sensory and Consumer Sciences)
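
The mediation claims above rest on indirect effects estimated within PLS-SEM. As a rough, model-agnostic illustration of testing one indirect path (not the authors' PLS-SEM pipeline), a percentile bootstrap of an a*b effect on simulated data could look like this:

```python
import numpy as np

def boot_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for a simple indirect effect a*b:
    x -> m (slope a), then m -> y controlling for x (slope b)."""
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))
    n, est = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)
        a = np.polyfit(x[i], m[i], 1)[0]
        X = np.column_stack([np.ones(n), m[i], x[i]])
        b = np.linalg.lstsq(X, y[i], rcond=None)[0][1]
        est.append(a * b)
    return np.percentile(est, [2.5, 97.5])

# Simulated stand-ins for the constructs (all synthetic, illustrative only).
rng = np.random.default_rng(1)
content = rng.normal(size=300)                    # social media content
trust = 0.5 * content + rng.normal(size=300)      # green trust (mediator)
cocreate = 0.6 * trust + rng.normal(size=300)     # value co-creation
print(boot_indirect(content, trust, cocreate))    # CI excluding 0 => mediation
```
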
16 pages, 7174 KB  
Article
Aberration-Conditioned Attention-Driven Centroid Localization: From Simulation Mechanism to Double-Spot Experiment
by Zhonghao Zhao, Jia Hou, Yuanting Liu, Anwei Liu and Zhiping He
Photonics 2026, 13(3), 304; https://doi.org/10.3390/photonics13030304 - 20 Mar 2026
Viewed by 13
Abstract
In size, weight, and power (SWaP)-constrained optical systems, such as spaceborne LiDAR, high-precision centroid localization often relies on focal-plane measurements without dedicated wavefront sensors. Under such conditions, the nonlinear coupling between optical aberrations and sensor noise introduces systematic bias that is difficult to mitigate using conventional centroiding methods. To address this issue, we propose a physics-conditioned feature correction framework based on an aberration-conditioned attention mechanism. A hybrid CNN–Transformer architecture is employed to predict and compensate for systematic centroid bias. Specifically, convolutional layers encode the degraded spot morphology, while a multi-head attention mechanism leverages Seidel aberration coefficients to adaptively modulate spatial features for precise regression. Given the unavailability of absolute ground-truth coordinates in empirical scenarios, a physics-consistent simulation framework based on scalar diffraction theory is constructed to generate synthetic data for supervised learning. Simulation results indicate that the proposed method effectively reduces anisotropic systematic bias, achieving a localization root-mean-square error (RMSE) of 0.011 to 0.021 pixels, and maintains stable sub-pixel accuracy even under a 10% empirical prior perturbation. To evaluate generalization performance and engineering reliability, a wedge-based double-spot platform is developed to verify physical consistency via geometric invariance. Experimental results demonstrate a measured spacing standard deviation (SD) of 0.015 to 0.039 pixels. This validates the framework's transferability from theoretical simulation to controlled physical measurements, providing an algorithmic foundation for precision optical metrology in hardware-constrained environments.
(This article belongs to the Special Issue Advancements in Optics and Laser Measurement)
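
The systematic bias the network learns to correct is the residual left by conventional intensity-weighted centroiding. A minimal baseline sketch on a synthetic spot with a known center (spot size, counts, and noise model are illustrative assumptions):

```python
import numpy as np

def weighted_centroid(img):
    """Conventional intensity-weighted centroid of a spot image (pixels)."""
    img = np.asarray(img, float)
    ys, xs = np.indices(img.shape)
    w = img.sum()
    return (xs * img).sum() / w, (ys * img).sum() / w

# Synthetic Gaussian spot with known (ground-truth) center and shot noise.
rng = np.random.default_rng(0)
yy, xx = np.indices((32, 32))
cx, cy = 15.3, 16.7
spot = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 2.0 ** 2))
spot = rng.poisson(spot * 200).astype(float)
ex, ey = weighted_centroid(spot)
print(f"bias: ({ex - cx:+.3f}, {ey - cy:+.3f}) px")  # residual a learned correction targets
```

Under aberrations the spot morphology becomes asymmetric, so this bias becomes systematic and anisotropic, which is what the aberration-conditioned attention model is trained to remove.
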
21 pages, 6097 KB  
Article
HySIMU: An Open-Source Toolkit for Hyperspectral Remote Sensing Forward Modelling
by Fadhli Atarita and Alexander Braun
Remote Sens. 2026, 18(6), 943; https://doi.org/10.3390/rs18060943 - 20 Mar 2026
Viewed by 23
Abstract
Hyperspectral remote sensing (HRS) is gaining widespread adoption within the geoscience and Earth observation communities. It fosters diverse applications, including precision agriculture, soil science, mineral exploration, and carbon detection, to name a few. Recent technological advancements have facilitated a growing number of satellite missions as well as an increase in the availability of commercial sensors and platforms, such as drones. A significant challenge in deploying these varied platforms and sensors is the design and optimization of hyperspectral surveys. Forward modelling simulators are valuable for optimizing mission parameters and estimating imaging performance, but the limited availability of open-source simulators is an obstacle for users who would benefit from such tools. To bridge this gap, HySIMU (Hyperspectral SIMUlator) was developed and is described herein. It is an open-source forward modelling toolkit that integrates a primary processing pipeline with various open-source packages into a transparent and modular workflow, offering a cost-effective approach to evaluating the performance of hyperspectral surveys. HySIMU is designed to simulate hyperspectral imagery based on user-defined targets, platforms, and sensor parameters. Features include (i) a ground truth data cube builder for customizable input parameters, (ii) a terrain-based solar and view geometry calculator for illumination modelling, (iii) integrated open-source radiative transfer models for incorporating atmospheric effects, and (iv) spatial resampling filters. In this manuscript, the initial framework for HySIMU is presented with example applications, including two validation studies with real hyperspectral images. As remote sensing technologies advance, forward modelling toolkits such as HySIMU play a crucial role in refining mission designs and assessing survey feasibility. The scalability to arbitrary hyperspectral sensors, platforms, and spectral libraries ensures broad applicability. Of particular importance is support for parameter optimization for both scientific and commercial HRS campaigns.
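
HySIMU's own code is the authoritative reference for its pipeline; the toy sketch below only illustrates the forward-modelling idea behind features (i) and (iv): building a labeled ground-truth reflectance cube from a spectral library and resampling it through assumed Gaussian band responses. All spectra, band centers, and the FWHM are made up for illustration:

```python
import numpy as np

# Hypothetical 2-class spectral library (reflectance at 100 bands, 400-2400 nm).
wl = np.linspace(400, 2400, 100)
library = {0: 0.2 + 0.1 * np.sin(wl / 300),   # stand-in "soil" spectrum
           1: 0.4 + 0.2 * np.cos(wl / 500)}   # stand-in "vegetation" spectrum

# Ground-truth cube builder: class map -> (rows, cols, bands) reflectance cube.
class_map = np.zeros((20, 20), int)
class_map[:, 10:] = 1
cube = np.stack([library[c] for c in class_map.ravel()]).reshape(20, 20, -1)

# Simple sensor model: Gaussian spectral responses resample to 20 coarse bands.
centers = np.linspace(450, 2350, 20)
sigma = 100.0 / 2.3548                         # assumed 100 nm FWHM
srf = np.exp(-0.5 * ((wl[None, :] - centers[:, None]) / sigma) ** 2)
srf /= srf.sum(axis=1, keepdims=True)
sensed = cube @ srf.T
print(sensed.shape)  # (20, 20, 20): simulated at-sensor band set
```

A full simulator additionally injects illumination geometry, atmospheric radiative transfer, spatial PSFs, and sensor noise between the ground-truth cube and the sensed cube.
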
15 pages, 5710 KB  
Article
Prediction of Cataract Severity Using Slit Lamp Images from a Portable Smartphone Device: A Pilot Study
by David Z. Chen, Changshuo Liu, Junran Wu, Lei Zhu and Beng Chin Ooi
Sensors 2026, 26(6), 1954; https://doi.org/10.3390/s26061954 - 20 Mar 2026
Viewed by 81
Abstract
Cataract diagnosis requires a comprehensive dilated examination by an ophthalmologist using a slit lamp; there is currently no effective means to objectively screen for cataracts in the community using portable devices without dilation. We hypothesized that it would be possible to predict cataract severity using deep learning on images taken with a portable smartphone-based slit lamp prototype, with and without dilation. In this prospective cross-sectional pilot study, slit lamp images were captured from eligible patients with cataracts in a tertiary clinic using a portable slit lamp prototype attached to a smartphone. The Pentacam nuclear staging score (PNS, Pentacam®, Oculus, Inc., Arlington, WA, USA), obtained through dilated pupils, served as the ground truth. A prototypical network with a Swin transformer backbone was trained on the images to assign the class label with the highest predicted probability. Heat maps were generated from attribution masks to identify the anatomical areas of concern. A total of 1900 images from 198 eyes of 99 patients were captured. The average age was 65.3 ± 10.4 years (range, 41.0 to 88.0 years) and the average PNS score was 1.57 ± 0.81 (range, 0 to 4). The model achieved an average accuracy of 81.25% for undilated and 74.38% for dilated eyes. Heat map visualization using the integrated gradient method successfully identified the anatomical area of interest in certain images. This study suggests the possibility of estimating cataract density using a portable smartphone slit lamp device without dilation. Further work is under way to validate this technique in a larger and more diverse group of eyes with cataracts.
(This article belongs to the Special Issue Smartphone Sensors and Their Applications)
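
The prototypical-network step is simple once embeddings exist: each class prototype is the mean embedding of its support images, and a query receives the label whose prototype is nearest (equivalently, the highest softmax probability over negative squared distances). A sketch with random vectors standing in for Swin features; the backbone itself is not reproduced:

```python
import numpy as np

def proto_predict(support, support_labels, query):
    """Prototypical-network step: class prototypes are mean embeddings;
    queries get the label with highest softmax(-squared distance)."""
    classes = np.unique(support_labels)
    protos = np.stack([support[support_labels == c].mean(axis=0) for c in classes])
    d2 = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    return classes[p.argmax(1)], p

# Toy 8-D embeddings standing in for features of slit lamp images (3 PNS grades).
rng = np.random.default_rng(0)
emb = np.concatenate([rng.normal(2 * i, 1.0, (10, 8)) for i in range(3)])
lab = np.repeat([0, 1, 2], 10)
pred, prob = proto_predict(emb, lab, emb[:5])
print(pred)  # queries drawn from grade 0 map back to prototype 0
```

This episodic formulation is well suited to pilot studies like this one, where per-grade sample counts are small.
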
23 pages, 28834 KB  
Article
Patient-Specific Computational Hemodynamic Modeling of the Right Pulmonary Artery Using CardioMEMS Data: Validation, Simplification, and Sensitivity Analysis
by Angélica Casero, Laura G. Sánchez, Felicia Alfano, Pedro Navas, Juan F. Oteo, Carlos Arellano-Serrano and Manuel Gómez-Bueno
Fluids 2026, 11(3), 83; https://doi.org/10.3390/fluids11030083 - 19 Mar 2026
Viewed by 36
Abstract
This study investigates the application of computational hemodynamic modeling, involving both fluid–structure interaction (FSI) and computational fluid dynamics (CFD) models, using SimVascular to simulate blood flow in the right pulmonary artery for patient-specific cardiovascular assessment. The artery's three-dimensional geometry was reconstructed from a computed tomography (CT) image, and pressure measurements from a CardioMEMS™ device were used as the clinical ground truth for validation. To represent the arterial hemodynamics, we initially formulated an FSI approach to capture wall mechanics. However, given the high computational cost of fully patient-specific FSI simulations for routine clinical decision-making, we evaluated the validity of key simplifications by assuming rigid vessel walls coupled with a three-element Windkessel (3WK) model and applying a half-sine inflow waveform derived from the patient's cardiac output. These simplifications yielded results with minimal error: the rigid-wall assumption introduced a 1.1% deviation, while the idealized waveform resulted in a 0.56 mmHg offset. Crucially, while wall rigidity was acceptable, we found that arterial compliance in the boundary conditions is non-negotiable; reducing the model to a pure resistance approach resulted in non-physiological pressures (130 mmHg). A subsequent parametric analysis examined how varying resistance (R) and compliance (C) distinctively alter the pressure waveform morphology. The results underscore the potential of combining remote monitoring data with validated computational simulations to deepen the understanding of cardiovascular dynamics and enhance diagnostic and therapeutic approaches for cardiovascular diseases.
(This article belongs to the Special Issue Advances in Hemodynamics and Related Biological Flows, 2nd Edition)
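
The three-element Windkessel (3WK) outlet model used above can be integrated directly from its governing ODE with a half-sine inflow. The sketch below uses illustrative, not patient-specific, parameter values:

```python
import numpy as np

# Three-element Windkessel (RCR): proximal resistance Rp, compliance C,
# distal resistance Rd.  Governing ODE:
#   C dP/dt = (1 + Rp/Rd) Q + C Rp dQ/dt - P/Rd
# Illustrative pulmonary-scale parameters (mmHg, s, mL), NOT patient-specific.
Rp, Rd, C = 0.03, 0.25, 3.0
T, systole = 0.8, 0.3                    # cardiac period and ejection time (s)
dt = 1e-4
t = np.arange(0, 10 * T, dt)

def q_in(ti):
    """Half-sine inflow during systole, zero in diastole (mL/s)."""
    tc = ti % T
    return 300.0 * np.sin(np.pi * tc / systole) if tc < systole else 0.0

Q = np.array([q_in(ti) for ti in t])
dQ = np.gradient(Q, dt)
P = np.empty_like(t)
P[0] = 15.0                              # initial pressure (mmHg)
for k in range(len(t) - 1):              # forward Euler integration
    dP = ((1 + Rp / Rd) * Q[k] - P[k] / Rd) / C + Rp * dQ[k]
    P[k + 1] = P[k] + dt * dP
print(f"steady-cycle pressure: {P[-8000:].min():.1f}-{P[-8000:].max():.1f} mmHg")
```

Dropping the compliance term (C toward zero, leaving pure resistance) removes the pressure buffering between systole and diastole, which is the non-physiological failure mode the abstract flags.
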
11 pages, 908 KB  
Article
Accuracy of AI-Based Nutrient Estimation from Standardized Hospital Meal Images: A Comparison with Registered Dietitians
by Tomomi Isobe, Lim Wan Zhang, Hana Murakami, Miyu Kadono, Megumi Aso, Atsuko Kayashita and Jun Kayashita
Nutrients 2026, 18(6), 966; https://doi.org/10.3390/nu18060966 - 18 Mar 2026
Viewed by 161
Abstract
Background: Accurate dietary assessment is vital for preventing malnutrition in aging populations, particularly in home-care settings. Although Large Multimodal Models (LMMs) for nutrient estimation are evolving, their nutrient-specific accuracy requires rigorous validation. Methods: Fifteen standardized hospital meals were photographed under controlled conditions (90-degree angle, 500 lux). Ground truth values were determined by direct weighing. Estimates for energy and macronutrients were performed by 10 registered dietitians (RDs) and 10 AI models (including ChatGPT-4o and Gemini 1.5 Pro). Accuracy was assessed using Pearson's correlation, Mean Absolute Error (MAE), and Bland–Altman analysis to quantify systematic bias. Results: For energy and carbohydrates, RDs and top-performing AI models (notably ChatGPT-4o and Gemini 1.5 Pro) demonstrated practical accuracy (r > 0.8, frequently within ±10% range). However, accuracy for protein and lipids was significantly lower across all AI models. Specifically, all AI models exhibited a substantial systematic overestimation of lipids (Mean Bias > +20%, p < 0.01), highlighting a critical “invisible nutrient” bias. Conclusions: Current AI tools show potential for caloric and carbohydrate monitoring but struggle with lipid and protein density. These findings emphasize the need for human–AI collaboration (“human-in-the-loop”) and the integration of cooking metadata to improve clinical utility in geriatric nutrition.
(This article belongs to the Special Issue A Path Towards Personalized Smart Nutrition)
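
Bland–Altman analysis, used above to quantify systematic bias, reduces to the mean and spread of paired differences. A minimal sketch with hypothetical lipid values chosen to mimic the reported overestimation:

```python
import numpy as np

def bland_altman(estimate, truth):
    """Mean bias and 95% limits of agreement for paired estimates vs. truth."""
    d = np.asarray(estimate, float) - np.asarray(truth, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical lipid estimates (g) vs. weighed ground truth for 8 meals.
truth = [10, 14, 8, 20, 12, 16, 9, 18]
ai    = [13, 17, 10, 25, 15, 19, 12, 22]   # systematic overestimation
bias, loa = bland_altman(ai, truth)
print(f"bias {bias:+.1f} g, LoA {loa[0]:.1f} to {loa[1]:.1f} g")
```

A bias term well above zero with narrow limits of agreement, as here, signals a consistent offset that calibration or cooking metadata could correct, rather than random error.
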
26 pages, 3122 KB  
Article
A 94 GHz Millimeter-Wave Radar System for Remote Vehicle Height Measurement to Prevent Bridge Collisions
by Natan Steinmetz, Eyal Magori, Yael Balal, Yonatan B. Sudai and Nezah Balal
Sensors 2026, 26(6), 1921; https://doi.org/10.3390/s26061921 - 18 Mar 2026
Viewed by 93
Abstract
Collisions between over-height vehicles and low-clearance bridges cause infrastructure damage and pose safety risks. Existing detection systems rely primarily on optical sensors, which suffer from performance degradation in adverse weather conditions. This paper presents an alternative approach based on a 94 GHz millimeter-wave radar that achieves velocity-independent height measurement. The proposed technique exploits the ratio of Doppler shifts from two scattering centers on a vehicle, specifically the roof and the wheel–road interface. This ratio depends only on the measurement geometry, as the unknown vehicle velocity cancels algebraically, enabling direct height computation without speed measurement. The paper provides a closed-form height estimation model, analyzes the trade-off between frequency resolution and geometric constancy during integration, and presents experimental validation using a scaled laboratory testbed. An optical tracking system is used solely for ground-truth validation in the laboratory and is not required for operational deployment. Results across six test cases with heights ranging from 20 cm to 46 cm demonstrate an average absolute error of 0.60 cm and relative errors below 3.3%. A scaling analysis for representative full-scale geometries indicates that at highway speeds of 80 km/h, integration times in the millisecond range (approximately 3–18 ms for representative 20–50 m measurement standoffs) are feasible; the warning distance can be extended independently by upstream radar placement. The expected advantage in fog, rain, and dust is based on established W-band propagation characteristics; dedicated adverse-weather and full field validation (including multipath, clutter, and multi-vehicle scenarios) remains future work.
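
The velocity-cancellation idea can be reproduced in a few lines. The toy geometry below (radar height H, horizontal standoff x, roof height h, all assumed values) is a simplification, not the paper's exact closed-form model: the radial speed of each scatterer scales with the same unknown vehicle speed, so the ratio of the two Doppler shifts depends on geometry alone and can be inverted for height:

```python
import numpy as np

def doppler(v, x, H, h):
    """Doppler-proportional radial speed of a scatterer at height h on a
    vehicle moving at speed v, seen by a radar at height H, standoff x."""
    return v * x / np.hypot(x, H - h)

def height_from_ratio(r, x, H):
    """Invert the Doppler ratio r = f_roof / f_wheel for roof height h.
    The unknown speed v cancels in the ratio."""
    return H - np.sqrt((x**2 + H**2) / r**2 - x**2)

H, x = 6.0, 30.0          # assumed radar height and horizontal standoff (m)
v, h_true = 22.2, 4.3     # 80 km/h over-height vehicle, roof at 4.3 m
r = doppler(v, x, H, h_true) / doppler(v, x, H, 0.0)
print(f"recovered height: {height_from_ratio(r, x, H):.3f} m")  # ~4.300
```

Changing `v` leaves the recovered height untouched, which is the velocity independence the abstract claims.
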
16 pages, 4086 KB  
Article
A Behavioral Ground Truth for Exteroceptive Sensors: Geometric Constraints and Stochastic Duration in Parking Maneuvers
by Salvatore Leonardi and Natalia Distefano
Sensors 2026, 26(6), 1911; https://doi.org/10.3390/s26061911 - 18 Mar 2026
Viewed by 63
Abstract
The deterministic simplification of parking maneuvers in traditional traffic models presents a critical challenge for the safe integration of Autonomous Vehicles (AVs). This study establishes a stochastic human baseline to provide a naturalistic ground truth dataset essential for calibrating perception and prediction sensors in mixed traffic scenarios. Through the analysis of 1038 maneuvers observed in a university shared space in Catania, Generalized Linear Models and Kaplan–Meier estimators were applied to quantify the impact of geometric constraints in 0°, 45°, and 90° configurations. The results identify 45° angled parking as the Pareto-optimal solution in terms of stability and speed, with an average maneuver time of 7.54 s. Furthermore, a vertical parking paradox emerges: in the presence of narrow aisles, entry times for 90° configurations increase drastically, generating bottlenecks with an 85th percentile exceeding 50 s. Finally, a structural functional asymmetry reveals that exit maneuvers require approximately 54% of the time needed for entry. These findings provide empirical metrics essential for validating human behavior models and fine-tuning decision-making and timeout logic in autonomous driving systems.
(This article belongs to the Special Issue Smart Traffic Control Based on Sensor Technology)
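
The Kaplan–Meier estimator applied above handles right-censored maneuvers naturally (e.g., a maneuver still in progress when recording stops). A minimal sketch with hypothetical 90° entry times:

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate S(t) for maneuver durations;
    observed=False marks right-censored maneuvers."""
    durations = np.asarray(durations, float)
    observed = np.asarray(observed, bool)
    times = np.unique(durations[observed])
    S, surv = 1.0, []
    for t in times:
        at_risk = (durations >= t).sum()
        events = ((durations == t) & observed).sum()
        S *= 1 - events / at_risk
        surv.append((t, S))
    return surv

# Hypothetical 90-degree entry times (s); one maneuver censored.
dur = [12.1, 18.4, 25.0, 33.2, 50.5, 55.0]
obs = [True, True, True, True, True, False]
for t, s in kaplan_meier(dur, obs):
    print(f"t={t:5.1f} s  S(t)={s:.2f}")
```

Percentiles such as the 85th quoted in the abstract are then read off the fitted survival curve.
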
19 pages, 3195 KB  
Article
UMLoc: Uncertainty-Aware Map-Constrained Inertial Localization with Quantified Bounds
by Mohammed S. Alharbi and Shinkyu Park
Sensors 2026, 26(6), 1904; https://doi.org/10.3390/s26061904 - 18 Mar 2026
Viewed by 67
Abstract
Inertial localization is particularly valuable in GPS-denied environments such as indoors. However, localization using only Inertial Measurement Units (IMUs) suffers from drift caused by motion-process noise and sensor biases. This paper introduces Uncertainty-aware Map-constrained Inertial Localization (UMLoc), an end-to-end framework that jointly models IMU uncertainty and map constraints to achieve drift-resilient positioning. UMLoc integrates two coupled modules: (1) a Long Short-Term Memory (LSTM) quantile regressor, which estimates the specific quantiles needed to define 68%, 90%, and 95% prediction intervals serving as a measure of localization uncertainty, and (2) a Conditioned Generative Adversarial Network (CGAN) with cross-attention that fuses dynamic IMU data with distance-based floor-plan maps to generate geometrically feasible trajectories. The modules are trained jointly, allowing uncertainty estimates to propagate through the CGAN during trajectory generation. UMLoc was evaluated on three datasets, including a newly collected two-hour indoor benchmark with time-aligned IMU data, ground-truth poses, and floor-plan maps. Results show that the method achieves a mean drift ratio of 5.9% over a 70 m travel distance and an average Absolute Trajectory Error (ATE) of 1.36 m, while maintaining calibrated prediction bounds.
(This article belongs to the Section Navigation and Positioning)
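
Quantile regressors of the kind used in module (1) are conventionally trained with the pinball loss, which makes the prediction converge to the requested conditional quantile; whether UMLoc uses exactly this loss is an assumption here. A minimal sketch:

```python
import numpy as np

def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss: minimizing it makes y_hat estimate the
    q-th conditional quantile, e.g. q=0.05 and q=0.95 for a 90% interval."""
    e = np.asarray(y, float) - np.asarray(y_hat, float)
    return np.mean(np.maximum(q * e, (q - 1) * e))

# Hypothetical position errors (m) and two candidate 0.95-quantile predictions.
err = np.array([0.4, 1.1, 0.7, 2.3, 0.9, 1.6])
print(pinball_loss(err, np.full(6, 2.0), 0.95))  # near the upper tail -> small loss
print(pinball_loss(err, np.full(6, 0.5), 0.95))  # too low -> much larger loss
```

Training one head per quantile (e.g., 0.025/0.16/0.84/0.975) yields the 68%, 90%, and 95% intervals reported above.
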
29 pages, 11319 KB  
Article
Confidence-Aware Topology Identification in Low-Voltage Distribution Networks: A Multi-Source Fusion Method Based on Weakly Supervised Learning
by Siliang Liu, Can Deng, Zenan Zheng, Ying Zhu, Hongxin Lu and Wenze Liu
Energies 2026, 19(6), 1503; https://doi.org/10.3390/en19061503 - 18 Mar 2026
Viewed by 144
Abstract
The topology identification (TI) of low-voltage distribution networks (LVDNs) is the foundation for their intelligent operation and lean management. However, existing identification methods may produce inconsistent results under measurement noise, missing data, and heterogeneous load behaviors, and without principled multi-method fusion and meter-level confidence quantification, the reliability of the identification results is questionable in the absence of a ground-truth topology. To address these challenges, a confidence-aware TI (Ca-TI) method for LVDNs based on weakly supervised learning (WSL) and Dempster–Shafer (D-S) evidence theory is proposed, aiming to infer each meter's latent topology connectivity label and quantify meter-level confidence without ground truth by fusing different identification methods. Specifically, within the data programming (DP) framework of WSL, different TI methods were modeled as labeling functions (LFs), and a weakly supervised label model (WSLM) was adopted to learn each method's error pattern and each meter's posterior responsibility. Within the framework of D-S evidence theory, an uncertainty-aware basic probability assignment (BPA) was constructed from each meter's posterior responsibility, with posterior uncertainty allocated to ignorance and further discounted according to the missing data rate. Subsequently, a consensus-calibrated conflict-gated (CCCG)-enhanced D-S fusion rule was proposed to aggregate the TI results of multiple methods, producing final TI decisions with meter-level confidence. Tests were carried out in both simulated and actual low-voltage distribution transformer areas (LVDTAs), and the robustness of the proposed method under various levels of measurement noise and missing data was evaluated. The results indicate that the proposed method effectively integrates the performances of various TI methods, is not adversely affected by extreme bias from any single method, and provides meter-level confidence for targeted on-site verification. Finally, an engineering deployment scheme with cloud–edge collaboration is discussed to support scalable implementation in utility environments.
(This article belongs to the Special Issue Application of Artificial Intelligence in Electrical Power Systems)
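
The D-S fusion step builds on Dempster's classical combination rule; the abstract's CCCG enhancement is not reproduced here. A minimal sketch over a two-hypothesis frame with an explicit ignorance mass, using made-up BPAs for two TI methods voting on one meter:

```python
def dempster_combine(m1, m2, frame=("connected", "not_connected", "unknown")):
    """Dempster's rule for two basic probability assignments over a frame
    {connected, not_connected}, with 'unknown' carrying the ignorance mass."""
    def meets(a, b):
        if a == "unknown":
            return b
        if b == "unknown":
            return a
        return a if a == b else None   # None = conflicting evidence
    combined = {h: 0.0 for h in frame}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            h = meets(a, b)
            if h is None:
                conflict += wa * wb
            else:
                combined[h] += wa * wb
    k = 1.0 - conflict
    return {h: w / k for h, w in combined.items()}, conflict

# Illustrative BPAs from two TI methods; part of each mass sits on ignorance,
# mirroring the uncertainty-aware BPA construction in the abstract.
m_corr = {"connected": 0.7, "not_connected": 0.1, "unknown": 0.2}
m_volt = {"connected": 0.6, "not_connected": 0.2, "unknown": 0.2}
fused, k = dempster_combine(m_corr, m_volt)
print(fused, f"conflict={k:.2f}")
```

The residual mass on "unknown" after fusion is exactly the meter-level confidence signal that flags candidates for targeted on-site verification.
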
23 pages, 2876 KB  
Article
Denoising and Baseline Correction of Low-Scan FTIR Spectra: A Benchmark of Deep Learning Models Against Traditional Signal Processing
by Azadeh Mokari, Shravan Raghunathan, Artem Shydliukh, Oleg Ryabchykov, Christoph Krafft and Thomas Bocklitz
Bioengineering 2026, 13(3), 347; https://doi.org/10.3390/bioengineering13030347 - 17 Mar 2026
Viewed by 158
Abstract
High-quality Fourier Transform Infrared (FTIR) imaging usually needs extensive signal averaging to reduce noise and drift, which severely limits clinical speed. Deep learning can accelerate imaging by reconstructing spectra from rapid, single-scan inputs. However, separating noise and baseline drift simultaneously without ground truth is an ill-posed inverse problem, and standard black-box architectures often rely on statistical approximations that introduce spectral hallucinations or fail to generalize to unstable atmospheric conditions. To solve these issues, we propose a physics-informed cascade U-Net that separates the denoising and baseline correction tasks using a new, deterministic Physics Bridge. This architecture forces the network to separate random noise from chemical signals, using an embedded SNIP layer to enforce spectroscopic constraints instead of learning statistical approximations. We benchmarked this approach against a standard single U-Net and a traditional workflow of Savitzky–Golay smoothing followed by SNIP baseline correction, using a dataset of human hypopharyngeal carcinoma cells (FaDu). The cascade model outperformed all other methods, achieving a 51.3% reduction in RMSE compared to raw single-scan inputs, surpassing both the single U-Net (40.2%) and the traditional workflow (33.7%). Peak-aware metrics show that the cascade architecture eliminates the spectral hallucinations found in standard deep learning and preserves peak intensity with much higher fidelity than traditional smoothing. These results show that the cascade U-Net is a robust solution for diagnostic-grade FTIR imaging, enabling imaging speeds 32 times faster than current methods.
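
The traditional baseline the cascade U-Net is benchmarked against is easy to reproduce in outline: Savitzky–Golay smoothing followed by SNIP baseline estimation. The sketch below uses a classical SNIP variant on a synthetic spectrum; window sizes and the iteration count are illustrative assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

def snip_baseline(spectrum, iterations=40):
    """SNIP baseline estimate: iteratively clip each point to the mean of
    its neighbours at increasing half-widths (classical variant)."""
    v = np.asarray(spectrum, float).copy()
    n = len(v)
    for p in range(1, iterations + 1):
        clipped = v.copy()
        clipped[p:n - p] = np.minimum(v[p:n - p],
                                      0.5 * (v[:n - 2 * p] + v[2 * p:]))
        v = clipped
    return v

# Synthetic low-scan FTIR-like spectrum: two peaks + drifting baseline + noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 800)
raw = (np.exp(-((x - 0.3) / 0.01) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.02) ** 2)
       + 0.5 * x + rng.normal(0, 0.03, 800))
smoothed = savgol_filter(raw, window_length=15, polyorder=3)
corrected = smoothed - snip_baseline(smoothed)
print(f"recovered peak height: {corrected.max():.2f}")
```

The cascade architecture embeds this SNIP step as a deterministic layer between the two networks, so the learned stages cannot trade baseline error for hallucinated peaks.
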
22 pages, 8428 KB  
Article
Fire Detection Misalignments Between GOES ABI and VIIRS and Their Impact on GOES FDC Evaluation
by Asaf Vanunu, Rodney Fonseca, Meirav Galun, Boaz Nadler and Arnon Karnieli
Remote Sens. 2026, 18(6), 906; https://doi.org/10.3390/rs18060906 - 16 Mar 2026
Viewed by 189
Abstract
Wildfires cause major damage, and their accurate detection is crucial. A common approach to near-real-time detection uses Geostationary (GEO) satellite algorithms. A standard scheme for evaluating the accuracy of a GEO-based algorithm is to compare its detections with higher-resolution Low Earth Orbit (LEO) images, considering the latter as ground truth. The primary objective of this study is to quantify the prevalence of GOES ABI/VIIRS fire detection misalignments and assess their impact on the accuracy evaluation of the GOES Fire Detection and Characterization (FDC) product; the key question is how this evaluation should be performed. To this end, a large dataset of matching FDC/VIIRS fire detections across the Western U.S., Amazonas, and Patagonia was constructed. We find that for nearly 12% of fire events there are spatial misalignments between FDC and VIIRS detections. Next, we show that using VIIRS as ground truth without accounting for these misalignments yields highly biased estimates, which affects the evaluation of the FDC product's detection capabilities. Finally, we demonstrate that using a GOES FDC/VIIRS buffer window substantially mitigates the effect of misalignments. For example, the estimated false alarm rate ranges between 26% and 36% without a window, whereas using a 3×3 window yields values between 7% and 15%.
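
The buffer-window evaluation reduces to neighborhood matching on a detection grid. A toy sketch (not the paper's matching pipeline) showing how a one-pixel misalignment inflates the apparent false alarm rate under strict matching but not under a 3×3 buffer:

```python
import numpy as np

def false_alarm_rate(geo, leo, window=1):
    """Fraction of GEO fire pixels with no LEO (reference) detection within
    a (2*window+1)^2 neighbourhood; window=0 is strict pixel matching."""
    geo_idx = np.argwhere(geo)
    misses = 0
    for r, c in geo_idx:
        r0, r1 = max(r - window, 0), r + window + 1
        c0, c1 = max(c - window, 0), c + window + 1
        if not leo[r0:r1, c0:c1].any():
            misses += 1
    return misses / max(len(geo_idx), 1)

# Toy grids: the same fire registered one pixel apart in GEO vs. LEO.
geo = np.zeros((10, 10), bool); geo[4, 4] = True
leo = np.zeros((10, 10), bool); leo[4, 5] = True
print(false_alarm_rate(geo, leo, window=0))  # 1.0 -> spurious "false alarm"
print(false_alarm_rate(geo, leo, window=1))  # 0.0 with a 3x3 buffer
```

The trade-off is that a larger buffer can also mask genuine false alarms, which is why the study quantifies both the misalignment prevalence and the window's effect.
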
30 pages, 2796 KB  
Article
Information Recovery Under Partial Observation: A Methodological Analysis of Multi-Informant Questionnaire Data
by Nawaphol Thepnarin and Adisorn Leelasantitham
Information 2026, 17(3), 290; https://doi.org/10.3390/info17030290 - 15 Mar 2026
Viewed by 169
Abstract
This study examines information recovery under structured partial observation in multi-informant questionnaire systems. Rather than predicting an external ground truth, we evaluate the recoverability of an operational full-information decision rule when only partial informant views are available. In the empirical SNAP-IV calibration study, this reference is intentionally defined as a deterministic function of the combined informant views, so the combined-view result is treated only as an oracle-style ceiling and the substantive analysis concerns how single-view recovery degrades when one informant is withheld. To examine whether a similar qualitative pattern extends beyond this calibration setting, we additionally evaluate a latent-state simulation in which the reference decision is generated from an unobserved latent state and informant views are noisy observations. Across both settings, single-view recoverability declines as inter-rater disagreement increases, whereas combined-view representations remain more stable. In the empirical study, combined-view models achieved near-ceiling recovery performance (e.g., 90.9% for Logistic Regression and 91.3% for MLP), while Teacher-only recovery dropped from approximately 78% to 63% under higher disagreement (p = 0.0005, Cohen's d = 1.9). Additional non-learned single-rater score-threshold baselines exhibited the same qualitative degradation pattern, indicating that the effect is not specific to fitted machine learning models. Importantly, this work is methodological: it does not propose new learning algorithms or clinical prediction models, but instead presents a conceptual–methodological framework, together with model-agnostic recoverability quantities, for quantifying missing-view information loss under incomplete, heterogeneous observations.
(This article belongs to the Section Information Theory and Methodology)
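
The latent-state simulation described above can be paraphrased in a few lines: a reference decision is thresholded from an unobserved latent state, informant views are noisy observations of it, and single-view recovery is compared with combined-view recovery as disagreement (view noise) grows. The threshold and noise levels below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)                    # unobserved severity state
reference = latent > 0.5                       # full-information decision rule

for noise in (0.3, 0.8, 1.5):                  # rising inter-rater disagreement
    parent = latent + rng.normal(0, noise, n)  # two noisy informant views
    teacher = latent + rng.normal(0, noise, n)
    combined = (parent + teacher) / 2          # averaging halves the view variance
    acc_single = ((teacher > 0.5) == reference).mean()
    acc_both = ((combined > 0.5) == reference).mean()
    print(f"noise={noise}: single-view {acc_single:.2f}, combined {acc_both:.2f}")
```

As noise rises, both accuracies fall, but the single-view rule degrades faster, reproducing the qualitative pattern the abstract reports.
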