Search Results (868)

Search Parameters:
Keywords = interpretation uncertainty

26 pages, 8709 KiB  
Article
Minding Spatial Allocation Entropy: Sentinel-2 Dense Time Series Spectral Features Outperform Vegetation Indices to Map Desert Plant Assemblages
by Frederick N. Numbisi
Remote Sens. 2025, 17(15), 2553; https://doi.org/10.3390/rs17152553 - 23 Jul 2025
Abstract
The spatial distribution of ephemeral and perennial dryland plant species is increasingly modified and restricted by ever-changing climates and development expansion. At the interface of biodiversity conservation and developmental planning in desert landscapes is the growing need for adaptable tools in identifying and monitoring these ecologically fragile plant assemblages, habitats, and, often, heritage sites. This study evaluates the use of Sentinel-2 time series composite imagery to discriminate vegetation assemblages in a hyper-arid landscape. Spatial predictor spaces were compared to classify different vegetation communities: spectral components (PCs), vegetation indices (VIs), and their combination. Further, the uncertainty in discriminating field-verified vegetation assemblages is assessed using Shannon entropy and intensity analysis. Lastly, the intensity analysis helped to decipher and quantify class transitions between maps from different spatial predictors. We mapped plant assemblages in 2022 from combined PCs and VIs at an overall accuracy of 82.71% (95% CI: 81.08, 84.28). A high overall accuracy did not directly translate to high class prediction probabilities. Prediction by spectral components, with comparably lower accuracy (80.32%, 95% CI: 78.60, 81.96), showed lower class uncertainty. Class disagreement or transition between classification models arose mainly from class exchange (a component of spatial allocation) and less so from quantity disagreement. Different artefacts of vegetation classes are associated with the predictor space—spectral components versus vegetation indices. This study contributes insights into using feature extraction (VIs) versus feature selection (PCs) for pixel-based classification of plant assemblages. Emphasising the ecologically sensitive vegetation in desert landscapes, the study contributes uncertainty considerations in translating optical satellite imagery to vegetation maps of arid landscapes.
These insights are intended to inform and support vegetation map creation and interpretation for the operational management and conservation of plant biodiversity and habitats in such landscapes.
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
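The class-uncertainty measure used in this abstract, Shannon entropy over per-pixel class membership probabilities, can be sketched as follows; the class count and probability values are illustrative, not taken from the paper:

```python
import numpy as np

def shannon_entropy(probs, axis=-1, eps=1e-12):
    """Shannon entropy (in nats) of a class-probability vector; higher
    values mean the classifier is less certain about the pixel's class."""
    p = np.clip(probs, eps, 1.0)
    return -np.sum(p * np.log(p), axis=axis)

# Two pixels classified over four hypothetical vegetation assemblages:
# the first is confidently assigned, the second is maximally ambiguous.
probs = np.array([[0.97, 0.01, 0.01, 0.01],
                  [0.25, 0.25, 0.25, 0.25]])
entropy = shannon_entropy(probs)  # the ambiguous pixel approaches log(4)
```

A uniform probability vector over k classes attains the maximum entropy log(k), which is why entropy maps highlight pixels where the model hedges between assemblages even when overall map accuracy is high.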

30 pages, 6810 KiB  
Article
Interpretable Machine Learning Framework for Non-Destructive Concrete Strength Prediction with Physics-Consistent Feature Analysis
by Teerapun Saeheaw
Buildings 2025, 15(15), 2601; https://doi.org/10.3390/buildings15152601 - 23 Jul 2025
Abstract
Non-destructive concrete strength prediction faces limitations in validation scope, methodological comparison, and interpretability that constrain deployment in safety-critical construction applications. This study presents a machine learning framework integrating polynomial feature engineering, AdaBoost ensemble regression, and Bayesian optimization to achieve both predictive accuracy and physics-consistent interpretability. Eight state-of-the-art methods were evaluated across 4420 concrete samples, including statistical significance testing, scenario-based assessment, and robustness analysis under measurement uncertainty. The proposed PolyBayes-ABR methodology achieves R2 = 0.9957 (RMSE = 0.643 MPa), showing statistical equivalence to leading ensemble methods, including XGBoost (p = 0.734) and Random Forest (p = 0.888), while outperforming traditional approaches (p < 0.001). Scenario-based validation across four engineering applications confirms robust performance (R2 > 0.93 in all cases). SHAP analysis reveals that polynomial features capture physics-consistent interactions, with the Curing_age × Er interaction achieving dominant importance (SHAP value: 4.2337), aligning with established hydration–microstructure relationships. When accuracy differences fall within measurement uncertainty ranges, the framework provides practical advantages through enhanced uncertainty quantification (±1.260 MPa vs. ±1.338 MPa baseline) and actionable engineering insights for quality control and mix design optimization. This approach addresses the interpretability challenge in concrete engineering applications where both predictive performance and scientific understanding are essential for safe deployment.
(This article belongs to the Section Building Materials, and Repair & Renovation)
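The core of the pipeline described above, polynomial feature expansion feeding an AdaBoost ensemble regressor, can be approximated in scikit-learn. The synthetic data, feature stand-ins, and hyperparameters below are placeholders, and the paper's Bayesian optimization step is omitted:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 3))  # stand-ins for e.g. curing age, rebound, UPV
y = 20.0 + 15.0 * X[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=300)

# A degree-2 expansion exposes pairwise interactions (such as the
# Curing_age x Er term) as explicit features the boosted trees can split on.
model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),
    AdaBoostRegressor(n_estimators=100, random_state=0),
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model.fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # held-out R^2
```

Because the interaction term appears as its own column after expansion, even shallow base learners can model it directly, which is the mechanism the SHAP analysis then attributes importance to.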

29 pages, 17922 KiB  
Article
Wheat Soil-Borne Mosaic Virus Disease Detection: A Perspective of Agricultural Decision-Making via Spectral Clustering and Multi-Indicator Feedback
by Xue Hou, Chao Zhang, Yunsheng Song, Turki Alghamdi, Majed Aborokbah, Hui Zhang, Haoyue La and Yizhen Wang
Plants 2025, 14(15), 2260; https://doi.org/10.3390/plants14152260 - 22 Jul 2025
Abstract
The rapid advancement of artificial intelligence is transforming agriculture by enabling data-driven plant disease monitoring and decision support. Soil-borne mosaic wheat virus (SBWMV) is a soil-transmitted virus disease that poses a serious threat to wheat production across multiple ecological zones. Due to the regional variability in environmental conditions and symptom expressions, accurately evaluating the severity of wheat soil-borne mosaic (WSBM) infections remains a persistent challenge. To address this, the problem is formulated as a large-scale group decision-making (LSGDM) process, where each planting plot is treated as an independent virtual decision maker, providing its own severity assessments. This modeling approach reflects the spatial heterogeneity of the disease and enables a structured mechanism to reconcile divergent evaluations. First, for each site, field observations of infection symptoms are recorded and represented using intuitionistic fuzzy numbers (IFNs) to capture uncertainty in detection. Second, a Bayesian graph convolutional network (Bayesian-GCN) model is used to construct a spatial trust propagation mechanism, inferring missing trust values and preserving regional dependencies. Third, an enhanced spectral clustering method is employed to group plots with similar symptoms and assessment behaviors. Fourth, a feedback mechanism is introduced to iteratively adjust plot-level evaluations based on a set of defined agricultural decision indicator sets using a multi-granulation rough set (ADISs-MGRS). Once consensus is reached, final rankings of candidate plots are generated from these indicators, providing an interpretable and evidence-based foundation for targeted prevention strategies.
By using the WSBM dataset collected in 2017–2018 from the Walla Walla Valley on the Oregon–Washington border, USA, and performing data augmentation for validation, along with comparative experiments and sensitivity analysis, this study demonstrates that the AI-driven LSGDM model integrating enhanced spectral clustering and the ADISs-MGRS feedback mechanism outperforms traditional models in terms of consensus efficiency and decision robustness. This provides valuable support for multi-party decision making in complex agricultural contexts.
(This article belongs to the Special Issue Advances in Artificial Intelligence for Plant Research)
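The intuitionistic fuzzy numbers (IFNs) used above to encode uncertain field observations pair a membership degree with a non-membership degree, leaving an explicit hesitancy margin. A minimal sketch follows; the score function and example values are common IFN conventions, not necessarily the paper's exact operators:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IFN:
    """Intuitionistic fuzzy number: membership mu and non-membership nu,
    with hesitancy pi = 1 - mu - nu required to be non-negative."""
    mu: float
    nu: float

    def __post_init__(self):
        if not (0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0):
            raise ValueError("mu and nu must be non-negative with mu + nu <= 1")

    @property
    def pi(self) -> float:
        """Hesitancy: the evidence the observer could not commit either way."""
        return 1.0 - self.mu - self.nu

    def score(self) -> float:
        """Common score function used to rank IFNs."""
        return self.mu - self.nu

# A plot judged 60% infected, 20% healthy, with 20% hesitancy
obs = IFN(mu=0.6, nu=0.2)
```

The hesitancy term is what distinguishes IFNs from ordinary fuzzy memberships: an uncertain field assessment can leave part of the judgment uncommitted rather than forcing mu + nu = 1.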

11 pages, 428 KiB  
Article
False Troponin Elevation in Pediatric Patients: A Long-Term Biochemical Conundrum Without Cardiac Effects
by Ceren Yapar Gümüş, Taner Kasar, Meltem Boz and Erkut Ozturk
Diagnostics 2025, 15(15), 1847; https://doi.org/10.3390/diagnostics15151847 - 22 Jul 2025
Abstract
Background/Objectives: Elevated troponin levels are widely recognized as key biomarkers of myocardial injury and are frequently used in clinical decision making. However, not all instances of troponin elevation indicate true cardiac damage. In some cases, biochemical or immunological interferences may lead to false-positive results. These situations may lead to unnecessary diagnostic interventions and clinical uncertainty, ultimately impacting patient management negatively. This study aims to investigate the underlying mechanisms of false-positive troponin elevation in pediatric patients, focusing on factors such as macrotroponin formation, autoantibodies, and heterophile antibody interference. Methods: This retrospective study analyzed data from 13 pediatric patients who presented with elevated cardiac troponin levels between 2017 and 2024. Clinical evaluations included transthoracic echocardiography (TTE), electrocardiography (ECG), coronary computed tomography angiography (CTA), cardiac magnetic resonance imaging (MRI), and rheumatologic testing. Laboratory findings included measurements of cardiac troponins (cTnI and hs-cTnT) and pro-BNP levels. Results: Among 70 patients evaluated for elevated troponin levels, 13 (18.6%) were determined to have no identifiable cardiac etiology. The median age of these 13 patients was 13.0 years (range: 9–16), with 53.8% being female. The most common presenting complaints were chest pain (53.8%) and palpitations (30.8%). TTE findings were normal in 61.5% of the patients, and all patients had normal coronary CTA and cardiac MRI findings. Initial troponin I levels were elevated in all cases, and persistent positivity was observed for up to 12 months. Median cTnI levels were 1.00 ng/mL (range: 0.33–7.19) at week 1 and 0.731 ng/mL (range: 0.175–4.56) at month 12. PEG precipitation testing identified macrotroponin in three patients (23.1%).
No etiological explanation could be identified in 10 cases (76.9%), which were considered idiopathic. All patients had negative results for heterophile antibody and rheumatologic tests. Conclusions: When interpreting elevated troponin levels in children, biochemical interferences—especially macrotroponin—should not be overlooked. This study emphasizes the diagnostic uncertainty associated with non-cardiac troponin elevation. To better guide clinical practice and clarify false positivity rates, larger, multicenter prospective studies are needed.
(This article belongs to the Section Clinical Laboratory Medicine)

21 pages, 2049 KiB  
Article
Tracking Lava Flow Cooling from Space: Implications for Erupted Volume Estimation and Cooling Mechanisms
by Simone Aveni, Gaetana Ganci, Andrew J. L. Harris and Diego Coppola
Remote Sens. 2025, 17(15), 2543; https://doi.org/10.3390/rs17152543 - 22 Jul 2025
Abstract
Accurate estimation of erupted lava volumes is essential for understanding volcanic processes, interpreting eruptive cycles, and assessing volcanic hazards. Traditional methods based on Mid-Infrared (MIR) satellite imagery require clear-sky conditions during eruptions and are prone to sensor saturation, limiting data availability. Here, we present an alternative approach based on the post-eruptive Thermal InfraRed (TIR) signal, using the recently proposed VRPTIR method to quantify radiative energy loss during lava flow cooling. We identify thermally anomalous pixels in VIIRS I5 scenes (11.45 µm, 375 m resolution) using the TIRVolcH algorithm, which allows the detection of subtle thermal anomalies throughout the cooling phase, and retrieve lava flow area by fitting theoretical cooling curves to observed VRPTIR time series. Collating a dataset of 191 mafic eruptions that occurred between 2010 and 2025 at (i) Etna and Stromboli (Italy); (ii) Piton de la Fournaise (France); (iii) Bárðarbunga, Fagradalsfjall, and Sundhnúkagígar (Iceland); (iv) Kīlauea and Mauna Loa (United States); (v) Wolf, Fernandina, and Sierra Negra (Ecuador); (vi) Nyamuragira and Nyiragongo (DRC); (vii) Fogo (Cape Verde); and (viii) La Palma (Spain), we derive a new power-law equation describing mafic lava flow thickening as a function of time across five orders of magnitude (from 0.02 Mm³ to 5.5 km³). Finally, from knowledge of areas and episode durations, we estimate erupted volumes. The method is validated against 68 eruptions with known volumes, yielding high agreement (R2 = 0.947; ρ = 0.96; MAPE = 28.60%), a negligible bias (MPE = −0.85%), and uncertainties within ±50%. Application to the February–March 2025 Etna eruption further corroborates the robustness of our workflow, from which we estimate a bulk erupted volume of 4.23 ± 2.12 × 10⁶ m³, in close agreement with preliminary estimates from independent data.
Beyond volume estimation, we show that VRPTIR cooling curves follow a consistent decay pattern that aligns with established theoretical thermal models, indicating a stable conductive regime during the cooling stage. This scale-invariant pattern suggests that crustal insulation and heat transfer across a solidifying boundary govern the thermal evolution of cooling basaltic flows.
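Fitting a power law of the form h = a·t^b, as in the thickening relation described above, reduces to a linear fit in log-log space. This sketch uses synthetic noise-free data, not the paper's 191-eruption dataset:

```python
import numpy as np

def fit_power_law(t, h):
    """Least-squares fit of h = a * t**b via linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(t), np.log(h), 1)
    return np.exp(log_a), b

t = np.array([1.0, 5.0, 10.0, 50.0, 100.0])  # time since onset (arbitrary units)
h = 0.8 * t ** 0.4                            # synthetic thickening data
a, b = fit_power_law(t, h)                    # recovers a = 0.8, b = 0.4
```

Taking logarithms turns the multiplicative relation into log h = log a + b·log t, so ordinary least squares recovers both the prefactor and the exponent, which is the standard way such scaling laws are calibrated across several orders of magnitude.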

20 pages, 35728 KiB  
Article
Prestack Depth Migration Imaging of Permafrost Zone with Low Seismic Signal–Noise Ratio Based on Common-Reflection-Surface (CRS) Stack
by Ruiqi Liu, Zhiwei Liu, Xiaogang Wen and Zhen Zhao
Geosciences 2025, 15(8), 276; https://doi.org/10.3390/geosciences15080276 - 22 Jul 2025
Abstract
The Qiangtang Basin (Tibetan Plateau) poses significant geophysical challenges for seismic exploration due to widespread near-surface permafrost and steeply dipping Mesozoic strata induced by the Cenozoic Indo-Eurasian collision. These seismic geological conditions considerably contribute to lower signal-to-noise ratios (SNRs) with complex wavefields, to some extent reducing the reliability of conventional seismic imaging and structural interpretation. To address this, the common-reflection-surface (CRS) stack method, derived from optical paraxial ray theory, is implemented to transcend horizontal layer model constraints, offering substantial improvements in high-SNR prestack gather generation and prestack depth migration (PSDM) imaging, notably for permafrost zones. Using 2D seismic data from the basin, we compare the CRS stack in detail with conventional SNR enhancement techniques—common midpoint (CMP) FlexBinning, prestack random noise attenuation (PreRNA), and dip moveout (DMO)—evaluating both theoretical foundations and practical performance. The results reveal that CRS-processed prestack gathers yield superior SNR optimization and signal preservation, enabling more robust PSDM velocity model building, while comparative imaging demonstrates enhanced diffraction energy—particularly at medium (20–40%) and long (40–60%) offsets—critical for resolving faults and stratigraphic discontinuities in PSDM. This integrated validation establishes CRS stacking as an effective preprocessing foundation for the depth-domain imaging of complex permafrost geology, providing critical improvements in seismic structural resolution and reduced interpretation uncertainty for hydrocarbon exploration in permafrost-bearing basins.
(This article belongs to the Section Geophysics)

35 pages, 9965 KiB  
Review
Advances in Dissolved Organic Carbon Remote Sensing Inversion in Inland Waters: Methodologies, Challenges, and Future Directions
by Dandan Xu, Rui Xue, Mengyuan Luo, Wenhuan Wang, Wei Zhang and Yinghui Wang
Sustainability 2025, 17(14), 6652; https://doi.org/10.3390/su17146652 - 21 Jul 2025
Abstract
Inland waters, serving as crucial carbon sinks and pivotal conduits within the global carbon cycle, are essential targets for carbon assessment under global warming and carbon neutrality initiatives. However, the extensive spatial distribution and inherent sampling challenges pose fundamental difficulties for monitoring dissolved organic carbon (DOC) in these systems. Since 2010, remote sensing has catalyzed a technological revolution in inland water DOC monitoring, leveraging its advantages for rapid, cost-effective long-term observation. In this critical review, we systematically evaluate research progress over the past two decades to assess the performance of remote sensing products and existing methodologies in DOC retrieval. We provide a detailed examination of diverse remote sensing data sources, outlining their application characteristics and limitations. By tracing uncertainties in retrieval outcomes, we identify atmospheric correction, spatial heterogeneity, and model and data deficiencies as primary sources of uncertainty. Current retrieval approaches—direct, indirect, and machine learning (ML) methods—are thoroughly scrutinized for their features, effectiveness, and application contexts. While ML offers novel solutions, its application remains nascent, constrained by limited waterbody-specific samples and model constraints. Furthermore, we discuss current challenges and future directions, focusing on data optimization, feature engineering, and model refinement. We propose that future research should (1) employ integrated satellite–air–ground observations and develop tailored atmospheric correction for inland waters to reduce data noise; (2) develop deep learning architectures with branch networks to extract DOC’s intrinsic shortwave absorption and longwave anti-interference features; and (3) incorporate dynamic biogeochemical processes within study regions to refine retrieval frameworks using biogeochemical indicators. 
We also advocate for multi-algorithm collaborative prediction to overcome the spectral paradox and unphysical solutions arising from the single data-driven paradigm of traditional ML, thereby enhancing retrieval reliability and interpretability.

34 pages, 3704 KiB  
Article
Uncertainty-Aware Deep Learning for Robust and Interpretable MI EEG Using Channel Dropout and LayerCAM Integration
by Óscar Wladimir Gómez-Morales, Sofia Escalante-Escobar, Diego Fabian Collazos-Huertas, Andrés Marino Álvarez-Meza and German Castellanos-Dominguez
Appl. Sci. 2025, 15(14), 8036; https://doi.org/10.3390/app15148036 - 18 Jul 2025
Abstract
Motor Imagery (MI) classification plays a crucial role in enhancing the performance of brain–computer interface (BCI) systems, thereby enabling advanced neurorehabilitation and the development of intuitive brain-controlled technologies. However, MI classification using electroencephalography (EEG) is hindered by spatiotemporal variability and the limited interpretability of deep learning (DL) models. To mitigate these challenges, dropout techniques are employed as regularization strategies. Nevertheless, the removal of critical EEG channels, particularly those from the sensorimotor cortex, can result in substantial spatial information loss, especially under limited training data conditions. This issue, compounded by high EEG variability in subjects with poor performance, hinders generalization and reduces the interpretability and clinical trust in MI-based BCI systems. This study proposes a novel framework integrating channel dropout—a variant of Monte Carlo dropout (MCD)—with class activation maps (CAMs) to enhance robustness and interpretability in MI classification. This integration represents a significant step forward by offering, for the first time, a dedicated solution to concurrently mitigate spatiotemporal uncertainty and provide fine-grained neurophysiologically relevant interpretability in motor imagery classification, particularly demonstrating refined spatial attention in challenging low-performing subjects. We evaluate three DL architectures (ShallowConvNet, EEGNet, TCNet Fusion) on a 52-subject MI-EEG dataset, applying channel dropout to simulate structural variability and LayerCAM to visualize spatiotemporal patterns. Results demonstrate that among the three evaluated deep learning models for MI-EEG classification, TCNet Fusion achieved the highest peak accuracy of 74.4% using 32 EEG channels. At the same time, ShallowConvNet recorded the lowest peak at 72.7%, indicating TCNet Fusion’s robustness in moderate-density montages. 
Incorporating MCD notably improved model consistency and classification accuracy, especially in low-performing subjects where baseline accuracies were below 70%; EEGNet and TCNet Fusion showed accuracy improvements of up to 10% compared to their non-MCD versions. Furthermore, LayerCAM visualizations enhanced with MCD transformed diffuse spatial activation patterns into more focused and interpretable topographies, aligning more closely with known motor-related brain regions and thereby boosting both interpretability and classification reliability across varying subject performance levels. Our approach offers a unified solution for uncertainty-aware and interpretable MI classification.
(This article belongs to the Special Issue EEG Horizons: Exploring Neural Dynamics and Neurocognitive Processes)
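Channel dropout as a Monte Carlo inference scheme, randomly silencing whole EEG channels and aggregating repeated predictions into a mean and a spread-based uncertainty estimate, can be sketched as follows. The toy "model" and array shapes are illustrative, not the paper's networks:

```python
import numpy as np

def mc_channel_dropout(predict_fn, x, p=0.2, n_samples=50, seed=None):
    """Run predict_fn repeatedly with random whole channels zeroed out;
    the spread of the predictions serves as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_samples):
        keep = (rng.random(x.shape[0]) >= p).astype(x.dtype)  # one flag per channel
        preds.append(predict_fn(x * keep[:, None]))
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy stand-in for a trained classifier: global mean amplitude of the epoch
x = np.ones((8, 100))  # 8 channels x 100 time samples
mean, std = mc_channel_dropout(lambda epoch: epoch.mean(), x, p=0.25, seed=0)
```

Zeroing channels (rather than individual samples) mimics structural variability such as a missing or noisy electrode, which is why the resulting spread is a useful proxy for spatial robustness.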

20 pages, 9135 KiB  
Article
Kolmogorov–Arnold Networks for Interpretable Crop Yield Prediction Across the U.S. Corn Belt
by Mustafa Serkan Isik, Ozan Ozturk and Mehmet Furkan Celik
Remote Sens. 2025, 17(14), 2500; https://doi.org/10.3390/rs17142500 - 18 Jul 2025
Abstract
Accurate crop yield prediction is essential for stabilizing food supply chains and reducing the uncertainties in financial risks related to agricultural production. Yet, it is even more essential to understand how crop yield models make predictions depending on their relationship to Earth Observation (EO) indicators. This study presents a state-of-the-art explainable artificial intelligence (XAI) method for corn yield prediction over the Corn Belt in the continental United States (CONUS). We utilize the recently introduced Kolmogorov–Arnold Network (KAN) architecture, which offers an interpretable alternative to the traditional Multi-Layer Perceptron (MLP) approach by using learnable spline-based activation functions instead of fixed ones. By including a KAN in our crop yield prediction framework, we are able to achieve high prediction accuracy and identify the temporal drivers behind crop yield variability. We create a multi-source dataset that includes biophysical parameters along the crop phenology, as well as meteorological, topographic, and soil parameters to perform end-of-season and in-season predictions of county-level corn yields from 2016 to 2023. The performance of the KAN model is compared with the commonly used traditional machine learning (ML) models and its architecture-wise equivalent MLP. The KAN-based crop yield model outperforms the other models, achieving an R2 of 0.85, an RMSE of 0.84 t/ha, and an MAE of 0.62 t/ha (compared to MLP: R2 = 0.81, RMSE = 0.95 t/ha, and MAE = 0.71 t/ha). In addition to end-of-season predictions, the KAN model also proves effective for in-season yield forecasting. Notably, even three months prior to harvest, the KAN model demonstrates strong performance in in-season yield forecasting, achieving an R2 of 0.82, an MAE of 0.74 t/ha, and an RMSE of 0.98 t/ha. These results indicate that the model maintains a high level of explanatory power relative to its final performance.
Overall, these findings highlight the potential of the KAN model as a reliable tool for early yield estimation, offering valuable insights for agricultural planning and decision-making.
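A KAN edge replaces a fixed activation with a learnable univariate spline. Fitting one such function by least squares on a simple degree-1 B-spline ("hat") basis illustrates the idea; the grid size and target function below are arbitrary, and real KANs use higher-order splines trained end to end:

```python
import numpy as np

def hat_basis(x, grid):
    """Design matrix of degree-1 B-splines (hat functions) on a uniform grid."""
    h = grid[1] - grid[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - grid[None, :]) / h)

# Learn phi(x) ~ sin(x) as a linear combination of hat functions
x = np.linspace(0.0, np.pi, 200)
grid = np.linspace(0.0, np.pi, 12)
B = hat_basis(x, grid)
coef, *_ = np.linalg.lstsq(B, np.sin(x), rcond=None)
max_err = np.max(np.abs(B @ coef - np.sin(x)))  # small approximation error
```

Because each edge function is an explicit, plottable univariate curve, inspecting the learned splines is what gives the architecture its interpretability relative to an MLP's entangled weights.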

23 pages, 963 KiB  
Article
A Methodology for Turbine-Level Possible Power Prediction and Uncertainty Estimations Using Farm-Wide Autoregressive Information on High-Frequency Data
by Francisco Javier Jara Ávila, Timothy Verstraeten, Pieter Jan Daems, Ann Nowé and Jan Helsen
Energies 2025, 18(14), 3764; https://doi.org/10.3390/en18143764 - 16 Jul 2025
Abstract
Wind farm performance monitoring has traditionally relied on deterministic models, such as power curves or machine learning approaches, which often fail to account for farm-wide behavior and the uncertainty quantification necessary for the reliable detection of underperformance. To overcome these limitations, we propose a probabilistic methodology for turbine-level active power prediction and uncertainty estimation using high-frequency SCADA data and farm-wide autoregressive information. The method leverages a Stochastic Variational Gaussian Process with a Linear Model of Coregionalization, incorporating physical models like manufacturer power curves as mean functions and enabling flexible modeling of active power and its associated variance. The approach was validated on a wind farm in the Belgian North Sea comprising over 40 turbines, using only 15 days of data for training. The results demonstrate that the proposed method improves predictive accuracy over the manufacturer's power curve, achieving a reduction in error measurements of around 1%. Improvements of around 5% were seen in dominant wind directions (200°–300°) using 2 and 3 Latent GPs, with similar improvements observed on the test set. The model also successfully reconstructs wake effects, with Energy Ratio estimates closely matching SCADA-derived values, and provides meaningful uncertainty estimates and posterior turbine correlations. These results demonstrate that the methodology enables interpretable, data-efficient, and uncertainty-aware turbine-level power predictions, suitable for advanced wind farm monitoring and control applications, enabling more sensitive underperformance detection.

42 pages, 2145 KiB  
Article
Uncertainty-Aware Predictive Process Monitoring in Healthcare: Explainable Insights into Probability Calibration for Conformal Prediction
by Maxim Majlatow, Fahim Ahmed Shakil, Andreas Emrich and Nijat Mehdiyev
Appl. Sci. 2025, 15(14), 7925; https://doi.org/10.3390/app15147925 - 16 Jul 2025
Abstract
In high-stakes decision-making environments, predictive models must deliver not only high accuracy but also reliable uncertainty estimations and transparent explanations. This study explores the integration of probability calibration techniques with Conformal Prediction (CP) within a predictive process monitoring (PPM) framework tailored to healthcare analytics. CP is renowned for its distribution-free prediction regions and formal coverage guarantees under minimal assumptions; however, its practical utility critically depends on well-calibrated probability estimates. We compare a range of post-hoc calibration methods—including parametric approaches like Platt scaling and Beta calibration, as well as non-parametric techniques such as Isotonic Regression and Spline calibration—to assess their impact on aligning raw model outputs with observed outcomes. By incorporating these calibrated probabilities into the CP framework, our multilayer analysis evaluates improvements in prediction region validity, including tighter coverage gaps and reduced minority error contributions. Furthermore, we employ SHAP-based explainability to show how calibration influences feature attribution for both high-confidence and ambiguous predictions. Experimental results on process-driven healthcare data indicate that the integration of calibration with CP not only enhances the statistical robustness of uncertainty estimates but also improves the interpretability of predictions, thereby supporting safer and more robust clinical decision-making.
(This article belongs to the Special Issue Digital Innovations in Healthcare)
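The abstract above combines post-hoc calibration with split conformal prediction. A minimal pure-Python sketch of the conformal half (not the authors' implementation): calibrated probabilities feed a nonconformity score of 1 − p(true class), whose finite-sample quantile defines the prediction set. The Platt parameters, calibration scores, and class names here are illustrative assumptions.

```python
import math

def platt_scale(score, a=-1.0, b=0.0):
    # Platt scaling: map a raw classifier score to a probability via a
    # sigmoid. a and b would normally be fit on held-out data; the fixed
    # values here are purely illustrative.
    return 1.0 / (1.0 + math.exp(a * score + b))

def conformal_quantile(cal_scores, alpha):
    # Finite-sample-corrected (1 - alpha) quantile of the calibration
    # nonconformity scores, as used in split conformal prediction.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(probs, qhat):
    # Include every class whose nonconformity (1 - p) is within qhat.
    return {c for c, p in probs.items() if 1.0 - p <= qhat}

# Calibration set: nonconformity = 1 - calibrated probability of true class
cal_probs_true = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.9, 0.75, 0.88, 0.92]
cal_scores = [1.0 - p for p in cal_probs_true]
qhat = conformal_quantile(cal_scores, alpha=0.1)

# Test case: calibrated class probabilities -> conformal prediction set
test_probs = {"deviant": 0.72, "normal": 0.28}
test_set = prediction_set(test_probs, qhat)
```

Better-calibrated probabilities shrink the spread of the nonconformity scores, which is what tightens the coverage gap the abstract reports.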
29 pages, 2885 KiB  
Article
Embedding Security Awareness in IoT Systems: A Framework for Providing Change Impact Insights
by Masrufa Bayesh and Sharmin Jahan
Appl. Sci. 2025, 15(14), 7871; https://doi.org/10.3390/app15147871 - 14 Jul 2025
Abstract
The Internet of Things (IoT) is rapidly advancing toward increased autonomy; however, the inherent dynamism, environmental uncertainty, device heterogeneity, and diverse data modalities pose serious challenges to its reliability and security. This paper proposes a novel framework for embedding security awareness into IoT systems—where security awareness refers to the system’s ability to detect uncertain changes and understand their impact on its security posture. While machine learning and deep learning (ML/DL) models integrated with explainable AI (XAI) methods offer capabilities for threat detection, they often lack contextual interpretation linked to system security. To bridge this gap, our framework maps XAI-generated explanations to a system’s structured security profile, enabling the identification of components affected by detected anomalies or threats. Additionally, we introduce a procedural method to compute an Importance Factor (IF) for each component, reflecting its operational criticality. This framework generates actionable insights by highlighting contextual changes, impacted components, and their respective IFs. We validate the framework using a smart irrigation IoT testbed, demonstrating its capability to enhance security awareness by tracking evolving conditions and providing real-time insights into potential Distributed Denial of Service (DDoS) attacks. Full article
(This article belongs to the Special Issue Trends and Prospects for Wireless Sensor Networks and IoT)
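The paper's procedural method for computing each component's Importance Factor (IF) is not spelled out in the abstract; the sketch below is a hypothetical reading in which per-component criticality signals are summed and normalised, and anomalous components above a threshold are flagged. All component names, signal names, and weights are invented for illustration.

```python
def importance_factor(components):
    # Hypothetical IF scoring: sum each component's criticality signals
    # (e.g. dependency fan-out, network exposure, data sensitivity) and
    # normalise so that all IFs sum to 1.
    raw = {name: sum(signals.values()) for name, signals in components.items()}
    total = sum(raw.values())
    return {name: v / total for name, v in raw.items()}

# Example smart-irrigation components with illustrative 0-1 signals
components = {
    "soil_sensor":    {"fanout": 0.2, "exposure": 0.6, "sensitivity": 0.3},
    "gateway":        {"fanout": 0.9, "exposure": 0.8, "sensitivity": 0.7},
    "valve_actuator": {"fanout": 0.4, "exposure": 0.3, "sensitivity": 0.9},
}
ifs = importance_factor(components)

# Flag components whose IF exceeds a review threshold when a detected
# anomaly (e.g. suspected DDoS traffic) maps to them
flagged = [c for c, w in ifs.items() if w > 0.35]
```

The point of the IF is triage: when an XAI explanation maps an anomaly to several components, their IFs rank which impacted components to inspect first.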
20 pages, 275 KiB  
Article
“My Future”: A Qualitative Examination of Hope in the Lives of Black Emerging Adults
by William Terrell Danley, Benson Cooke and Nathalie Mizelle
Soc. Sci. 2025, 14(7), 428; https://doi.org/10.3390/socsci14070428 - 11 Jul 2025
Abstract
The presence of hope significantly influences how youth interpret possibilities and commit to future-oriented action. This qualitative study investigates how fifteen Black emerging adults, ages eighteen to twenty-five, living in a major United States urban city on the East Coast, describe their aspirations, goal-setting strategies, and responses to personal and structural challenges. Participants were categorized as connected or disconnected based on their engagement in school, work, or training programs. Using Reflexive Thematic Analysis of interviews, the research identified key differences in agency, emotional orientation, and access to guidance between the two groups. Connected participants often described clear, structured goals supported by networks of mentorship and opportunity. Disconnected participants expressed meaningful hope, yet described fewer supports and greater uncertainty in achieving their goals. These findings highlight how consistent exposure to guidance and structured environments strengthens future orientation and internal motivation. These results deepen our understanding of how young people experience hope across diverse contexts and show that mentorship, intentional goal setting, and greater access to opportunity play a vital role in sustaining hopeful thinking during the transition to adulthood. Full article
24 pages, 3524 KiB  
Article
Transient Stability Assessment of Power Systems Based on Temporal Feature Selection and LSTM-Transformer Variational Fusion
by Zirui Huang, Zhaobin Du, Jiawei Gao and Guoduan Zhong
Electronics 2025, 14(14), 2780; https://doi.org/10.3390/electronics14142780 - 10 Jul 2025
Abstract
To address the challenges brought by the high penetration of renewable energy in power systems, such as multi-scale dynamic interactions, high feature dimensionality, and limited model generalization, this paper proposes a transient stability assessment (TSA) method that combines temporal feature selection with deep learning-based modeling. First, a two-stage feature selection strategy is designed using the inter-class Mahalanobis distance and Spearman rank correlation. This helps extract highly discriminative and low-redundancy features from wide-area measurement system (WAMS) time-series data. Then, a parallel LSTM-Transformer architecture is constructed to capture both short-term local fluctuations and long-term global dependencies. A variational inference mechanism based on a Gaussian mixture model (GMM) is introduced to enable dynamic representation fusion and uncertainty modeling. A composite loss function combining improved focal loss and Kullback–Leibler (KL) divergence regularization is designed to enhance model robustness and training stability under complex disturbances. The proposed method is validated on a modified IEEE 39-bus system. Results show that it outperforms existing models in accuracy, robustness, and interpretability. This provides an effective solution for TSA in power systems with high renewable energy integration. Full article
(This article belongs to the Special Issue Advanced Energy Systems and Technologies for Urban Sustainability)
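The two-stage selection strategy (inter-class Mahalanobis distance for discriminability, then Spearman rank correlation for redundancy) can be sketched in pure Python. This is not the paper's implementation: for a single feature the Mahalanobis distance between class means reduces to |μ₁ − μ₀| divided by a pooled standard deviation, ranks are untied for brevity, and the feature names and data are invented.

```python
from statistics import mean, pstdev

def separability(x, y):
    # Stage-1 score: one-dimensional inter-class Mahalanobis distance,
    # i.e. |mu1 - mu0| / pooled standard deviation.
    a = [v for v, label in zip(x, y) if label == 0]
    b = [v for v, label in zip(x, y) if label == 1]
    pooled = (pstdev(a) + pstdev(b)) / 2 or 1e-12
    return abs(mean(b) - mean(a)) / pooled

def spearman(x, y):
    # Spearman rank correlation = Pearson correlation of the ranks
    # (ties broken by index for brevity).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0

def select_features(features, labels, top_k=2, redundancy=0.9):
    # Stage 1: rank features by inter-class separability.
    ranked = sorted(features, key=lambda f: separability(features[f], labels),
                    reverse=True)
    # Stage 2: greedily keep features not rank-correlated with any kept one.
    kept = []
    for f in ranked:
        if all(abs(spearman(features[f], features[g])) < redundancy
               for g in kept):
            kept.append(f)
        if len(kept) == top_k:
            break
    return kept

# Toy WAMS-style data: stable (0) vs unstable (1) samples
labels = [0, 0, 0, 1, 1, 1]
features = {
    "rotor_angle": [1, 2, 1, 5, 6, 5],   # separates the classes well
    "bus_voltage": [1, 2, 1, 5, 6, 5],   # redundant copy of rotor_angle
    "noise":       [3, 1, 2, 2, 1, 3],   # uninformative but non-redundant
}
selected = select_features(features, labels)
```

The greedy redundancy filter is what keeps the selected set both discriminative and low-redundancy: `bus_voltage` is as separable as `rotor_angle` but is dropped because their rank correlation is 1.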
34 pages, 338 KiB  
Article
Systemic Gaps in Circular Plastics: A Role-Specific Assessment of Quality and Traceability Barriers in Australia
by Benjamin Gazeau, Atiq Zaman, Roberto Minunno and Faiz Shaikh
Sustainability 2025, 17(14), 6323; https://doi.org/10.3390/su17146323 - 10 Jul 2025
Abstract
The effective adoption of quality assurance and traceability systems is increasingly recognised as a critical enabler of circular economy (CE) outcomes in the plastics sector. This study examines the factors that influence the implementation of such systems within Australia’s recycled plastics industry, with a focus on how these factors vary by company size, supply chain role, and adoption of CE strategy. Recycled plastics are defined here as post-consumer or post-industrial polymers that have been reprocessed for reintegration into manufacturing applications. A mixed-methods survey was conducted with 65 stakeholders across the Australian plastics value chain, comprising recyclers, compounders, converters, and end-users. Respondents assessed a structured set of regulatory, technical, economic, and systemic factors, identifying whether each currently operates as an enabler or barrier in their organisational context. The analysis employed a comparative framework adapted from a 2022 European study, enabling a cross-regional interpretation of patterns and a comparison between CE-aligned and non-CE firms. The results show that firms with CE strategies report greater alignment with innovation-oriented enablers such as digital traceability, standardisation, and closed-loop models. However, these firms also express heightened sensitivity to systemic weaknesses, particularly in areas such as infrastructure limitations, inconsistent material quality, and data fragmentation. Small- and medium-sized enterprises (SMEs) highlighted compliance costs and operational uncertainty as primary barriers, while larger firms frequently cited frustration with regulatory inconsistency and infrastructure underperformance. These findings underscore the need for differentiated policy mechanisms that account for sectoral and organisational disparities in capacity, scale, and readiness for traceability. The study also cautions against the direct transfer of European circular economy models into the Australian context without consideration of local structural, regulatory, and geographic complexities. Full article