Search Results (1,770)

Search Parameters:
Keywords = real-world measurement data

17 pages, 2513 KB  
Article
Modeling Multivariate Distributions of Lipid Panel Biomarkers for Reference Interval Estimation and Comorbidity Analysis
by Julian Velev, Luis Velázquez-Sosa, Jack Lebien, Heeralal Janwa and Abiel Roche-Lima
Healthcare 2025, 13(19), 2499; https://doi.org/10.3390/healthcare13192499 - 1 Oct 2025
Abstract
Background/Objectives: Laboratory tests are a cornerstone of modern medicine, and their interpretation depends on reference intervals (RIs) that define expected values in healthy populations. Standard RIs are obtained in cohort studies that are costly and time-consuming and typically do not account for demographic factors such as age, sex, and ethnicity that strongly influence biomarker distributions. This study establishes a data-driven approach for deriving RIs directly from routinely collected laboratory results. Methods: Multidimensional joint distributions of lipid biomarkers were estimated from large-scale real-world laboratory data from the Puerto Rican population using a Gaussian Mixture Model (GMM). GMM and additional statistical analyses were used to enable separation of healthy and pathological subpopulations and exclude the influence of comorbidities, all without the use of diagnostic codes. Selective mortality patterns were examined to explain counterintuitive age trends in lipid values, while comorbidity implication networks were constructed to characterize interdependencies between conditions. Results: The approach yielded sex- and age-stratified RIs for lipid panel biomarkers estimated from the inferred distributions (total cholesterol, LDL, HDL, triglycerides). Apparent improvements in biomarker profiles after midlife were explained by selective survival. Comorbidities exerted pronounced effects on the 95% ranges, with their broader influence captured through network analysis. Beyond fixed limits, the method yields full distributions, allowing each individual result to be mapped to a percentile and interpreted as a continuous measure of risk. Conclusions: Population-specific and sex- and age-segmented RIs can be derived from real-world laboratory data without recruiting healthy cohorts. Incorporating selective mortality effects and comorbidity networks provides additional insight into population health dynamics. Full article
(This article belongs to the Special Issue Data Driven Insights in Healthcare)
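As a rough illustration of the idea described in this abstract, the sketch below fits a two-component Gaussian Mixture Model to synthetic lipid-like values, treats the lower-mean component as the assumed "healthy" subpopulation, and derives a 95% reference interval plus a percentile for an individual result. All numbers and the component-selection rule are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: GMM-based reference interval from mixed healthy/pathological data.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Simulated total cholesterol (mg/dL): a "healthy" and a "pathological" mode.
healthy = rng.normal(185, 25, size=8000)
pathological = rng.normal(250, 35, size=2000)
values = np.concatenate([healthy, pathological]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(values)
healthy_idx = int(np.argmin(gmm.means_.ravel()))        # assume the lower-mean component is "healthy"
mu = gmm.means_.ravel()[healthy_idx]
sigma = np.sqrt(gmm.covariances_.ravel()[healthy_idx])

ri_low, ri_high = mu - 1.96 * sigma, mu + 1.96 * sigma   # 2.5th / 97.5th percentiles
print(f"Estimated reference interval: {ri_low:.0f}-{ri_high:.0f} mg/dL")

# Map an individual result to a percentile of the healthy component (continuous risk measure).
print(f"Percentile of 230 mg/dL: {norm.cdf(230, mu, sigma):.2%}")
```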

17 pages, 1102 KB  
Article
A Hybrid Artificial Intelligence for Fault Detection and Diagnosis of Photovoltaic Systems Using Autoencoders and Random Forests Classifiers
by Katlego Ratsheola, Ditiro Setlhaolo, Akhtar Rasool, Ahmed Ali and Nkateko Eshias Mabunda
Eng 2025, 6(10), 254; https://doi.org/10.3390/eng6100254 - 1 Oct 2025
Abstract
The increasing sophistication of grid-connected photovoltaic (GCPV) systems necessitates advanced fault detection and diagnosis (FDD) methods to ensure operational efficiency and security. In this paper, a novel two-stage hybrid AI architecture is analyzed that couples a Long Short-Term Memory (LSTM) autoencoder for unsupervised anomaly detection with a Random Forest (RF) classifier for focused fault diagnosis. The architecture is critically compared with a standalone RF baseline on a synthetic dataset. The results of this two-stage hybrid AI show a strong overall accuracy of 83.1%. The hybrid model’s first stage trains only on unlabeled healthy data, reducing the reliance on extensive and often unavailable labeled fault datasets. This design has the safety-critical advantage of marking unfamiliar faults as anomalies instead of committing to a misclassification. By integrating anomaly detection with classification, the architecture enables early-stage screening of faults and targeted categorization, even in data-scarce scenarios. This offers a scalable, interpretable solution suitable for deployment in real-world GCPV systems where robustness and early detection are critical. While the method exhibits reduced sensitivity to subtle or recurring faults, it demonstrates strong reliability in confidently detecting distinct and significant anomalies. Additionally, the approach improves interpretability, facilitating clearer identification of performance constraints such as the autoencoder’s moderate fault sensitivity (AUC = 0.61). This study confirms the hybrid approach as a very promising FDD solution, in which the architectural advantages of safety and maintainability offer a more worthwhile proposition for real-world systems than incremental improvements in a single accuracy measure. Full article
(This article belongs to the Section Electrical and Electronic Engineering)
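A minimal sketch of the two-stage idea in this abstract: a reconstruction-error threshold learned from healthy data flags anomalies, and a Random Forest diagnoses only the flagged samples. A dense MLP autoencoder stands in for the paper's LSTM autoencoder, and all data, thresholds, and class labels are synthetic assumptions.

```python
# Hypothetical sketch: anomaly screening on healthy-only data, then focused RF diagnosis.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(2000, 8))           # healthy PV sensor vectors (synthetic)
faults = rng.normal(2.5, 1.0, size=(300, 8))              # labeled fault examples (synthetic)
fault_labels = rng.integers(0, 3, size=300)                # three hypothetical fault classes

# Stage 1: autoencoder trained on healthy data only; reconstruction error -> anomaly score.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=1)
ae.fit(healthy, healthy)

def anomaly_score(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

threshold = np.percentile(anomaly_score(healthy), 99)      # e.g., 99th percentile of healthy error

# Stage 2: classify only the samples flagged as anomalous.
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(faults, fault_labels)

new = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(2.5, 1, (5, 8))])
flagged = anomaly_score(new) > threshold
print("flagged:", flagged)
print("diagnosis of flagged samples:", clf.predict(new[flagged]))
```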

25 pages, 7878 KB  
Article
JOTGLNet: A Guided Learning Network with Joint Offset Tracking for Multiscale Deformation Monitoring
by Jun Ni, Siyuan Bao, Xichao Liu, Sen Du, Dapeng Tao and Yibing Zhan
Remote Sens. 2025, 17(19), 3340; https://doi.org/10.3390/rs17193340 - 30 Sep 2025
Abstract
Ground deformation monitoring in mining areas is essential for hazard prevention and environmental protection. Although interferometric synthetic aperture radar (InSAR) provides detailed phase information for accurate deformation measurement, its performance is often compromised in regions experiencing rapid subsidence and strong noise, where phase aliasing and coherence loss lead to significant inaccuracies. To overcome these limitations, this paper proposes JOTGLNet, a guided learning network with joint offset tracking, for multiscale deformation monitoring. This method integrates pixel offset tracking (OT), which robustly captures large-gradient displacements, with interferometric phase data that offers high sensitivity in coherent regions. A dual-path deep learning architecture was designed where the interferometric phase serves as the primary branch and OT features act as complementary information, enhancing the network’s ability to handle varying deformation rates and coherence conditions. Additionally, a novel shape perception loss combining morphological similarity measurement and error learning was introduced to improve geometric fidelity and reduce unbalanced errors across deformation regions. The model was trained on 4000 simulated samples reflecting diverse real-world scenarios and validated on 1100 test samples with a maximum deformation up to 12.6 m, achieving an average prediction error of less than 0.15 m—outperforming state-of-the-art methods whose errors exceeded 0.19 m. Additionally, experiments on five real monitoring datasets further confirmed the superiority and consistency of the proposed approach. Full article
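The sketch below shows, in toy form, the dual-path idea the abstract describes: an interferometric-phase branch as the primary input and an offset-tracking branch as complementary information, fused before a deformation-regression head. Layer sizes, channel counts, and the fusion rule are illustrative assumptions, not the published JOTGLNet architecture or its shape perception loss.

```python
# Hypothetical sketch of a dual-path deformation network (phase primary, offset tracking auxiliary).
import torch
import torch.nn as nn

class DualPathNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.phase_branch = nn.Sequential(       # primary branch: interferometric phase
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.ot_branch = nn.Sequential(          # complementary branch: offset-tracking features
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(24, 1, 1)          # fused features -> deformation map

    def forward(self, phase, offset):
        fused = torch.cat([self.phase_branch(phase), self.ot_branch(offset)], dim=1)
        return self.head(fused)

net = DualPathNet()
phase = torch.randn(2, 1, 64, 64)                # batch of interferogram patches
offset = torch.randn(2, 1, 64, 64)               # matching offset-tracking displacement patches
print(net(phase, offset).shape)                   # torch.Size([2, 1, 64, 64])
```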

42 pages, 4392 KB  
Article
Holism of Thermal Energy Storage: A Data-Driven Strategy for Industrial Decarbonization
by Abdulmajeed S. Al-Ghamdi and Salman Z. Alharthi
Sustainability 2025, 17(19), 8745; https://doi.org/10.3390/su17198745 - 29 Sep 2025
Abstract
This study presents a holistic framework for adaptive thermal energy storage (A-TES) in solar-assisted systems. This framework aims to support a reliable industrial energy supply, particularly during periods of limited sunlight, while also facilitating industrial decarbonization. In previous studies, the focus was not on addressing the framework of the entire problem, but rather on specific parts of it. Therefore, the innovation in this study lies in bringing these aspects together within a unified framework through a data-driven approach that combines the analysis of efficiency, technology, environmental impact, sectoral applications, operational challenges, and policy into a comprehensive system. Sensible thermal energy storage with an adaptive approach can be utilized in numerous industries, particularly concentrated solar power plants, to optimize power dispatch, enhance energy efficiency, and reduce gas emissions. Simulation results indicate that stable regulations and flexible incentives have led to a 60% increase in solar installations, highlighting their significance in investment expansion within the renewable energy sector. Integrated measures among sectors have increased energy availability by 50% in rural regions, illustrating the need for partnerships in renewable energy projects. The full implementation of novel advanced energy management systems (AEMSs) in industrial heat processes has resulted in a 20% decrease in energy consumption and a 15% improvement in efficiency. Making the switch to open-source software has reduced software expenditure by 50% and increased productivity by 20%, demonstrating the strategic advantages of open-source solutions. The findings provide a foundation for future research by offering a framework to analyze a specific real-world industrial case. Full article

26 pages, 9223 KB  
Article
A CAT Bond Pricing Model Based on the Distortion of Aggregate Loss Distributions
by Ning Ma
Mathematics 2025, 13(19), 3113; https://doi.org/10.3390/math13193113 - 29 Sep 2025
Abstract
Pricing catastrophe (CAT) bonds in incomplete markets poses persistent challenges, particularly in converting risk from the real-world measure to the pricing measure. The commonly used Wang transform focuses on distorting the loss severity distribution, which may underestimate catastrophe risk. This paper proposes a new distortion operator based on the Esscher transform that distorts the aggregate loss distribution rather than focusing solely on the severity or frequency components. The proposed approach provides more comprehensive risk adjustment, making it well-suited for the distributional characteristics of catastrophic loss indicators. Its applicability is demonstrated via an application to Chinese earthquake data. Monte Carlo simulation was used to compare pricing results via the distortion of different components. By reformulating the proposed distortion method into the form of a distortion operator and comparing it with the Wang transform, this paper demonstrates that the proposed approach offers significantly enhanced analytical tractability for complex distributions. It enables a more transparent analysis of the transformed distribution and its implications for bond pricing mechanisms. Full article
(This article belongs to the Section E5: Financial Mathematics)
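As a rough Monte Carlo illustration of distorting the aggregate loss (rather than severity alone), the sketch below simulates compound-Poisson annual losses, applies an Esscher-type exponential tilting to the simulated aggregate losses, and prices a simple indemnity-trigger CAT bond under both the real-world and tilted measures. The Poisson rate, lognormal severity, tilt parameter, and trigger are illustrative assumptions, not calibrated to the paper's earthquake data or its exact distortion operator.

```python
# Hypothetical sketch: Esscher-style tilting of the aggregate loss distribution for CAT bond pricing.
import numpy as np

rng = np.random.default_rng(2)
n_sims, lam, r, T = 100_000, 2.0, 0.03, 1.0

# Aggregate annual loss S: compound Poisson with lognormal severities.
counts = rng.poisson(lam, n_sims)
S = np.array([rng.lognormal(mean=1.0, sigma=1.2, size=k).sum() for k in counts])

h = 0.05                                       # tilt applied to the aggregate loss
w = np.exp(h * S)
w /= w.sum()                                   # normalized tilting weights (heavier tail)

trigger, face = 20.0, 100.0
payoff = np.where(S > trigger, 0.0, face)      # principal wiped out if losses exceed the trigger

price_real_world = np.exp(-r * T) * payoff.mean()       # no distortion
price_tilted = np.exp(-r * T) * np.sum(w * payoff)      # aggregate-loss distortion
print(f"P(S > trigger) under tilt: {np.sum(w * (S > trigger)):.3f}")
print(f"price (real-world): {price_real_world:.2f}, price (tilted): {price_tilted:.2f}")
```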

36 pages, 3101 KB  
Article
A Potential Outlier Detection Model for Structural Crack Variation Using Big Data-Based Periodic Analysis
by Jaemin Kim, Seongwoong Shin, Seulki Lee and Jungho Yu
Buildings 2025, 15(19), 3492; https://doi.org/10.3390/buildings15193492 - 27 Sep 2025
Abstract
Cracks in concrete structures, caused by aging, adjacent construction, and seismic activity, pose critical risks to structural integrity, durability, and serviceability. Traditional monitoring methods based solely on absolute thresholds are inadequate for detecting progressive crack growth at early stages. This study proposes a big data-driven anomaly detection model that combines absolute threshold evaluation with periodic trend analysis to enable both real-time monitoring and early anomaly identification. By incorporating relative comparisons, the model captures subtle variations within allowable limits, thereby enhancing sensitivity to incipient defects. Validation was conducted using approximately 2700 simulated datasets with an increase–hold–increase pattern and 470,000 real-world crack measurements. The model successfully detected four major anomalies, including abrupt shifts and cumulative deviations, and time series visualizations identified the exact onset of abnormal behavior. Through periodic fluctuation analysis and the Isolation Forest algorithm, the model effectively classified risk trends and supported proactive crack management. Rather than defining fixed labels or thresholds for the detected results, this study focused on verifying whether the analysis of detected crack data accurately reflected actual trends. To support interpretability and potential applicability, the detection outcomes were presented using quantitative descriptors such as anomaly count, anomaly score, and persistence. The proposed framework addresses the limitations of conventional digital monitoring by enabling early intervention below predefined thresholds. This data-driven approach contributes to structural health management by facilitating timely detection of potential risks and strengthening preventive maintenance strategies. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
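The sketch below illustrates the combination the abstract describes: an absolute-threshold check on crack-width readings plus an Isolation Forest on simple relative-variation features, so that anomalous growth patterns can be flagged while still below the allowable limit. The increase-hold-increase series, the 0.30 mm limit, and the rolling-window features are invented for illustration.

```python
# Hypothetical sketch: absolute-threshold check + IsolationForest on crack-variation features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
days = 300
crack = np.concatenate([                       # increase - hold - increase pattern (mm)
    np.linspace(0.10, 0.20, 100),
    np.full(100, 0.20),
    np.linspace(0.20, 0.45, 100)]) + rng.normal(0, 0.005, days)

allowable = 0.30                                # absolute threshold (mm), illustrative
exceeds = crack > allowable

# Relative/periodic features: daily change and its 7-day rolling mean.
change = np.diff(crack, prepend=crack[0])
roll_change = np.convolve(change, np.ones(7) / 7, mode="same")
features = np.column_stack([change, roll_change])

iso = IsolationForest(contamination=0.05, random_state=3).fit(features)
anomaly = iso.predict(features) == -1           # -1 marks outlying variation patterns

print("first day over absolute threshold:", int(np.argmax(exceeds)))
print("anomalous-variation days flagged below threshold:", int(np.sum(anomaly & ~exceeds)))
```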

18 pages, 3524 KB  
Article
Transformer-Embedded Task-Adaptive-Regularized Prototypical Network for Few-Shot Fault Diagnosis
by Mingkai Xu, Huichao Pan, Siyuan Wang and Shiying Sun
Electronics 2025, 14(19), 3838; https://doi.org/10.3390/electronics14193838 - 27 Sep 2025
Abstract
Few-shot fault diagnosis (FSFD) seeks to build accurate models from scarce labeled data, a frequent challenge in industrial settings with noisy measurements and varying operating conditions. Conventional metric-based meta-learning (MBML) often assumes task-invariant, class-separable feature spaces, which rarely hold in heterogeneous environments. To address this, we propose a Transformer-embedded Task-Adaptive-Regularized Prototypical Network (TETARPN). A tailored Transformer-based Temporal Encoder Module is integrated into MBML to capture long-range dependencies and global temporal correlations in industrial time series. In parallel, a task-adaptive prototype regularization dynamically adjusts constraints according to task difficulty, enhancing intra-class compactness and inter-class separability. This combination improves both adaptability and robustness in FSFD. Experiments on bearing benchmark datasets show that TETARPN consistently outperforms state-of-the-art methods under diverse fault types and operating conditions, demonstrating its effectiveness and potential for real-world deployment. Full article
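A minimal sketch of the metric-learning core underlying prototypical networks: class prototypes are the mean embeddings of the support set, and queries are classified by distance to the prototypes. The encoder here is a plain MLP placeholder, not the paper's Transformer-based temporal encoder, and the task-adaptive prototype regularization is not reproduced.

```python
# Hypothetical sketch: one few-shot episode of a prototypical network.
import torch
import torch.nn as nn

n_way, k_shot, n_query, dim = 4, 5, 10, 64
encoder = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, dim))

support = torch.randn(n_way, k_shot, 256)           # [fault class, shot, raw signal features]
queries = torch.randn(n_way * n_query, 256)

z_support = encoder(support.view(-1, 256)).view(n_way, k_shot, dim)
prototypes = z_support.mean(dim=1)                   # one prototype per fault class
z_query = encoder(queries)

dists = torch.cdist(z_query, prototypes)             # Euclidean distance to each prototype
log_p = (-dists).log_softmax(dim=1)                  # softmax over negative distances
pred = log_p.argmax(dim=1)
print(pred.shape)                                     # torch.Size([40])
```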

14 pages, 1011 KB  
Article
Measuring What Matters in Trial Operations: Development and Validation of the Clinical Trial Site Performance Measure
by Mattia Bozzetti, Alessio Lo Cascio, Daniele Napolitano, Nicoletta Orgiana, Vincenzina Mora, Stefania Fiorini, Giorgia Petrucci, Francesca Resente, Irene Baroni, Rosario Caruso and Monica Guberti
J. Clin. Med. 2025, 14(19), 6839; https://doi.org/10.3390/jcm14196839 - 26 Sep 2025
Abstract
Background/Objectives: The execution of clinical trials is increasingly constrained by operational complexity, regulatory requirements, and variability in site performance. These challenges have direct implications for the reliability of trial outcomes. However, standardized methods to evaluate site-level performance remain underdeveloped. This study introduces the Clinical Trial Site Performance Measure (CT-SPM), a novel framework designed to systematically capture site-level operational quality and to provide a scalable short form for routine monitoring. Methods: We conducted a multicenter study across six Italian academic hospitals (January–June 2025). Candidate performance indicators were identified through a systematic review and expert consultation, followed by validation and reduction using advanced statistical approaches, including factor modeling, ROC curve analysis, and nonparametric scaling methods. The CT-SPM was assessed for structural validity, discriminative capacity, and feasibility for use in real-world settings. Results: From 126 potential indicators, 18 were retained and organized into four domains: Participant Retention and Consent, Data Completeness and Timeliness, Adverse Event Reporting, and Protocol Compliance. A bifactor model revealed two higher-order dimensions (participant-facing and data-facing performance), highlighting the multidimensional nature of site operations. A short form comprising four items demonstrated good scalability and sufficient accuracy to identify underperforming sites. Conclusions: The CT-SPM represents an innovative, evidence-based instrument for monitoring trial execution at the site level. By linking methodological rigor with real-world applicability, it offers a practical solution for benchmarking, resource allocation, and regulatory compliance. This approach contributes to advancing clinical research by providing a standardized, data-driven method to evaluate and improve performance across networks. Full article
(This article belongs to the Special Issue New Advances in Clinical Epidemiological Research Methods)
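As a toy illustration of checking a short form's discriminative capacity, the sketch below scores sites on four simulated items and evaluates how well the mean score flags underperforming sites with a ROC curve and a Youden-style cut-off. The items, labels, and cut-off rule are placeholders, not the published CT-SPM items or its validation analysis.

```python
# Hypothetical sketch: ROC check of a 4-item short-form score for flagging underperforming sites.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
n_sites = 120
underperforming = rng.random(n_sites) < 0.2                  # ~20% of sites, simulated labels
items = rng.normal(loc=np.where(underperforming[:, None], 2.5, 4.0),
                   scale=0.8, size=(n_sites, 4))             # four short-form items (1-5 scale)
short_form = items.mean(axis=1)

auc = roc_auc_score(underperforming, -short_form)            # lower score -> higher risk
fpr, tpr, thresholds = roc_curve(underperforming, -short_form)
best = np.argmax(tpr - fpr)                                   # Youden's J cut-off
print(f"AUC: {auc:.2f}, flag sites scoring below {-thresholds[best]:.2f}")
```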

18 pages, 537 KB  
Article
Structural and Functional Outcomes in Rheumatoid Arthritis After 10-Year Therapy with Disease-Modifying Antirheumatic Drugs Under Tight Control: Evidence from Real-World Cohort Data
by Shunsuke Mori, Akitomo Okada, Toshimasa Shimizu, Ayuko Takatani and Tomohiro Koga
J. Clin. Med. 2025, 14(19), 6832; https://doi.org/10.3390/jcm14196832 - 26 Sep 2025
Abstract
Objectives: To examine long-term outcomes and predictors of structural and functional remission in rheumatoid arthritis (RA) after 10-year disease-modifying antirheumatic drug (DMARD) therapy under tight control. Methods: We used real-world cohort data from RA patients who completed 10-year DMARD therapy toward remission or low disease activity based on every-3-month measurements between April 2001 and July 2024. Baseline characteristics, disease control during follow-up, and outcomes after 10 years were examined. Results: Among 204 patients, 76% received biological and/or non-biological targeted DMARDs. Clinical remission, structural remission defined as an increase in modified total Sharp score (mTSS) ≤ 5 per 10 years, and functional remission defined as health assessment questionnaire-disability index (HAQ-DI) ≤ 0.5 were achieved by 68.1%, 73.0%, and 81.4% of patients, respectively. The mean increase (∆) in mTSS over 10 years was 5.4 (∆erosion score, 1.2; ∆joint space narrowing [JSN] score, 4.2), and 28.9% of patients had no structural progression (51% for erosion score and 34.8% for JSN score). Mean HAQ-DI was 0.26. During a 10-year follow-up, 8.8% of patients experienced high or moderate disease activity lasting for ≥12 months, and they had a low structural remission rate (11.1%) and functional remission rate (16.6%). According to multivariable logistic regression analysis, baseline mTSS and JSN score (but not erosion score) were strong predictors for structural and functional remission after 10 years. Conclusions: Structural damage progression and functional loss are limited during 10-year tightly controlled DMARD therapy. Compared with bone erosion, JSN appears to be of much higher relevance to structural and functional outcomes. Full article
(This article belongs to the Special Issue Rheumatoid Arthritis: Clinical Updates on Diagnosis and Treatment)
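To show the shape of the multivariable analysis mentioned in the abstract, the sketch below fits a logistic regression of a binary remission outcome on baseline erosion, JSN, and age. All data are simulated and the coefficients are invented; this is a template for the kind of model described, not the study's analysis.

```python
# Hypothetical sketch: multivariable logistic regression of 10-year remission on baseline predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 204
erosion = rng.gamma(2.0, 3.0, n)                    # baseline erosion score (simulated)
jsn = rng.gamma(2.0, 4.0, n)                        # baseline joint space narrowing score (simulated)
age = rng.normal(58, 12, n)

logit = 2.0 - 0.12 * jsn - 0.02 * erosion - 0.01 * (age - 58)   # invented data-generating rule
remission = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([erosion, jsn, age]))
model = sm.Logit(remission.astype(float), X).fit(disp=0)
print(model.summary(xname=["const", "erosion", "JSN", "age"]))
```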

34 pages, 17998 KB  
Article
Bayesian Stochastic Inference and Statistical Reliability Modeling of Maxwell–Boltzmann Model Under Improved Progressive Censoring for Multidisciplinary Applications
by Heba S. Mohammed, Osama E. Abo-Kasem and Ahmed Elshahhat
Axioms 2025, 14(9), 712; https://doi.org/10.3390/axioms14090712 - 21 Sep 2025
Viewed by 174
Abstract
The Maxwell–Boltzmann (MB) distribution is important because it provides the statistical foundation for connecting microscopic particle motion to macroscopic gas properties by statistically describing molecular speeds and energies, making it essential for understanding and predicting the behavior of classical ideal gases. This study advances the statistical modeling of lifetime distributions by developing a comprehensive reliability analysis of the MB distribution under an improved adaptive progressive censoring framework. The proposed scheme strategically enhances experimental flexibility by dynamically adjusting censoring protocols, thereby preserving more information from test samples compared to conventional designs. Maximum likelihood estimation, interval estimation, and Bayesian inference are rigorously derived for the MB parameters, with asymptotic properties established to ensure methodological soundness. To address computational challenges, Markov chain Monte Carlo algorithms are employed for efficient Bayesian implementation. A detailed exploration of reliability measures, including hazard rate, mean residual life, and stress–strength models, demonstrates the MB distribution’s suitability for complex reliability settings. Extensive Monte Carlo simulations validate the efficiency and precision of the proposed inferential procedures, highlighting significant gains over traditional censoring approaches. Finally, the utility of the methodology is showcased through real-world applications to physics and engineering datasets, where the MB distribution coupled with such censoring yields superior predictive performance. This real-data examination uses two datasets (the failure times of aircraft windshields, capturing degradation under extreme environmental and operational stress, and mechanical component failure times) that represent recurrent challenges in industrial systems. This work contributes a unified statistical framework that broadens the applicability of the Maxwell–Boltzmann model in reliability contexts and provides practitioners with a powerful tool for decision making in censored-data environments. Full article
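A minimal sketch of censored maximum-likelihood estimation for the Maxwell–Boltzmann scale parameter, using SciPy's `maxwell` distribution. A simple Type-I (time) censoring scheme and simulated lifetimes stand in for the paper's improved adaptive progressive censoring; only the likelihood structure (density terms for failures, survival terms for censored units) is illustrated.

```python
# Hypothetical sketch: MLE of the Maxwell-Boltzmann scale under right censoring.
import numpy as np
from scipy.stats import maxwell
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
true_scale, n, censor_time = 2.0, 200, 3.5
lifetimes = maxwell.rvs(scale=true_scale, size=n, random_state=rng)
observed = np.minimum(lifetimes, censor_time)
is_event = lifetimes <= censor_time                       # False -> right-censored at censor_time

def neg_log_lik(scale):
    ll = maxwell.logpdf(observed[is_event], scale=scale).sum()         # exact failure times
    ll += maxwell.logsf(censor_time, scale=scale) * (~is_event).sum()  # censored units
    return -ll

res = minimize_scalar(neg_log_lik, bounds=(0.1, 10.0), method="bounded")
print(f"MLE of scale: {res.x:.3f} (true {true_scale}), censored units: {int((~is_event).sum())}")
```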

29 pages, 7359 KB  
Article
Adaptive Optimization of Traffic Sensor Locations Under Uncertainty Using Flow-Constrained Inference
by Mahmoud Owais and Amira A. Allam
Appl. Sci. 2025, 15(18), 10257; https://doi.org/10.3390/app151810257 - 20 Sep 2025
Viewed by 212
Abstract
Monitoring traffic flow across large-scale transportation networks is essential for effective traffic management, yet comprehensive sensor deployment is often infeasible due to financial and practical constraints. The traffic sensor location problem (TSLP) aims to determine the minimal set of sensor placements needed to achieve full link flow observability. Existing solutions primarily rely on algebraic or optimization-based approaches, but often neglect the impact of sensor measurement errors and struggle with scalability in large, complex networks. This study proposes a new scalable and robust methodology for solving the TSLP under uncertainty, incorporating a formulation that explicitly models the propagation of measurement errors in sensor data. Two nonlinear integer optimization models, Min-Max and Min-Sum, are developed to minimize the inference error across the network. To solve these models efficiently, we introduce the BBA algorithm as an adaptive metaheuristic optimizer, not as a subject of comparative study, but as an enabler of scalability within the proposed framework. The methodology integrates LU decomposition for efficient matrix inversion and employs a node-based flow inference technique that ensures observability without requiring full path enumeration. Tested on benchmark and real-world networks (e.g., fishbone, Sioux Falls, Barcelona), the proposed framework demonstrates strong performance in minimizing error and maintaining scalability, highlighting its practical applicability for resilient traffic monitoring system design. Full article
(This article belongs to the Section Transportation and Future Mobility)
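The sketch below illustrates the node-based inference step on a toy 5-link network: with sensors on a subset of links, the remaining link flows are recovered from node flow-conservation equations solved via LU factorization, and measurement noise on the sensors propagates into the inferred flows. The network, sensor set, and noise level are illustrative assumptions; the Min-Max/Min-Sum location optimization itself is not shown.

```python
# Hypothetical sketch: inferring unmeasured link flows from conservation equations via LU.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Links: 0:(1->2) 1:(1->3) 2:(2->3) 3:(2->4) 4:(3->4).
# Conservation rows for internal nodes 2 and 3 (inflow positive, outflow negative).
A = np.array([[1, 0, -1, -1, 0],       # node 2: x0 - x2 - x3 = 0
              [0, 1,  1,  0, -1]],     # node 3: x1 + x2 - x4 = 0
             dtype=float)

true_flows = np.array([600.0, 400.0, 250.0, 350.0, 650.0])   # conservation-consistent ground truth
measured = [0, 2, 4]                    # sensor-equipped links
unmeasured = [1, 3]

noisy = true_flows[measured] + np.random.default_rng(7).normal(0, 5, len(measured))
rhs = -A[:, measured] @ noisy           # move measured terms to the right-hand side
lu, piv = lu_factor(A[:, unmeasured])
inferred = lu_solve((lu, piv), rhs)
print("inferred flows on links 1 and 3:", np.round(inferred, 1), "true:", true_flows[unmeasured])
```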

17 pages, 1731 KB  
Article
Comparative Performance Analysis of Lightweight Cryptographic Algorithms on Resource-Constrained IoT Platforms
by Tiberius-George Sorescu, Vlad-Mihai Chiriac, Mario-Alexandru Stoica, Ciprian-Romeo Comsa, Iustin-Gabriel Soroaga and Alexandru Contac
Sensors 2025, 25(18), 5887; https://doi.org/10.3390/s25185887 - 20 Sep 2025
Viewed by 230
Abstract
The increase in Internet of Things (IoT) devices has introduced significant security challenges, primarily due to their inherent constraints in computational power, memory, and energy. This study provides a comparative performance analysis of selected modern cryptographic algorithms on a resource-constrained IoT platform, the Nordic Thingy:53. We evaluated a set of ciphers including the NIST lightweight standard ASCON, eSTREAM finalists Salsa20, Rabbit, Sosemanuk, HC-256, and the extended-nonce variant XChaCha20. Using a dual test-bench methodology, we measured energy consumption and performance under two distinct scenarios: a low-data-rate Bluetooth mesh network and a high-throughput bulk data transfer. The results reveal significant performance variations among the algorithms. In high-throughput tests, ciphers like XChaCha20, Salsa20, and ASCON32 demonstrated superior speed, while HC-256 proved impractically slow for large payloads. The Bluetooth mesh experiments quantified the direct relationship between network activity and power draw, underscoring the critical impact of cryptographic choice on battery life. These findings offer an empirical basis for selecting appropriate cryptographic solutions that balance security, energy efficiency, and performance requirements for real-world IoT applications. Full article
(This article belongs to the Section Internet of Things)
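The snippet below sketches only the bulk-throughput side of such a comparison, run on a desktop with the `cryptography` package and ChaCha20-Poly1305 as a stand-in cipher; XChaCha20, ASCON, the eSTREAM ciphers, and the on-device energy measurements on the Thingy:53 are outside what this template reproduces.

```python
# Hypothetical sketch: measuring AEAD encryption throughput for a 1 MB bulk payload.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)
payload = os.urandom(1_000_000)                  # 1 MB bulk-transfer payload

n_rounds = 50
start = time.perf_counter()
for _ in range(n_rounds):
    nonce = os.urandom(12)                       # fresh 96-bit nonce per message
    aead.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

mb_per_s = n_rounds * len(payload) / 1e6 / elapsed
print(f"ChaCha20-Poly1305 encryption throughput: {mb_per_s:.1f} MB/s")
```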

14 pages, 3214 KB  
Article
On the Feasibility of Localizing Transformer Winding Deformations Using Optical Sensing and Machine Learning
by Najmeh Seifaddini, Meysam Beheshti Asl, Sekongo Bekibenan, Simplice Akre, Issouf Fofana, Mohand Ouhrouche and Abdellah Chehri
Photonics 2025, 12(9), 939; https://doi.org/10.3390/photonics12090939 - 19 Sep 2025
Viewed by 188
Abstract
Mechanical vibrations induced by electromagnetic forces during transformer operation can lead to winding deformation or failure, an issue responsible for over 12% of all transformer faults. While previous studies have predominantly relied on accelerometers for vibration monitoring, this study explores the use of an optical sensor for real-time vibration measurement in a dry-type transformer. Experiments were conducted using a custom-designed single-phase transformer model specifically developed for laboratory testing. This experimental setup offers a unique advantage: it allows for the interchangeable simulation of healthy and deformed winding sections without causing permanent damage, enabling controlled and repeatable testing scenarios. The transformer’s secondary winding was short-circuited, and three levels of current (low, intermediate, and high) were applied to simulate varying stress conditions. Vibration displacement data were collected under load to assess mechanical responses. The primary goal was to classify this vibration data to localize potential winding deformation faults. Five supervised learning algorithms were evaluated: Random Forest, Support Vector Machine, K-Nearest Neighbors, Logistic Regression, and Decision Tree classifiers. Hyperparameter tuning was applied, and a comparative analysis among the top four models yielded average prediction accuracies of approximately 60%. These results, achieved under controlled laboratory conditions, highlight the promise of this approach for further development and future real-world applications. Overall, the combination of optical sensing and machine learning classification offers a promising pathway for proactive monitoring and localization of winding deformations, supporting early fault detection and enhanced reliability in power transformers. Full article
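A minimal template for the classifier comparison the abstract describes: cross-validated evaluation of the five model families on placeholder vibration-displacement features. The data here are random stand-ins, so the printed accuracies are meaningless except as a scaffold; hyperparameter tuning is omitted.

```python
# Hypothetical sketch: cross-validated comparison of five classifiers for fault localization.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 12))                   # vibration features per measurement (placeholder)
y = rng.integers(0, 4, size=600)                 # four hypothetical deformation locations

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "LogReg": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```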

14 pages, 1188 KB  
Article
Kinetics of High-Sensitive Cardiac Troponin I in Patients with ST-Segment Elevation Myocardial Infarction and Non-ST Segment Elevation Myocardial Infarction
by Adi Haizler, Ranel Loutati, Louay Taha, Mohammad Karmi, Dana Deeb, Mohammed Manassra, Noam Fink, Pierre Sabouret, Jamal S. Rana, Mamas A. Mamas, Ofir Rabi, Akiva Brin, Amro Moatz, Maayan Shrem, Abed Qadan, Nir Levi, Michael Glikson, Elad Asher and on behalf of the Jerusalem Platelets Thrombosis and Intervention in Cardiology (JUPITER-26) Study Group
Diagnostics 2025, 15(18), 2390; https://doi.org/10.3390/diagnostics15182390 - 19 Sep 2025
Viewed by 310
Abstract
Background/Objectives: Existing data regarding the kinetics of cardiac troponin I (cTnI) are limited. The aim of the current study was to evaluate the kinetics of highly sensitive (hs) cTnI following acute myocardial infarction (MI) in a large-scale, real-world cohort. Methods: A prospective observational cohort study included all consecutive patients admitted to the intensive cardiovascular care unit (ICCU) with ST-segment elevation MI (STEMI) and non-ST-segment elevation MI (NSTEMI) who underwent percutaneous coronary intervention (PCI) between January 2020 and April 2024. Hs-cTnI concentrations were measured at the time of presentation and daily thereafter. Results: A total of 1174 STEMI patients [191 females (16.3%)] with a mean age of 63 years and 767 NSTEMI patients [137 females (17.9%)] with a mean age of 66.7 years were enrolled. The average hs-cTnI peak levels were 77,937.99 ng/L and 24,804.73 ng/L for STEMI and NSTEMI patients, respectively. A single peak of hs-cTnI was observed in 83% and 78% of STEMI and NSTEMI patients, respectively, while two peaks were observed in 11% and 19% and three or more peaks were observed in 6% and 3% of STEMI and NSTEMI patients, respectively. A higher number of peaks was associated with a lower ejection fraction and more in-hospital complications. Additionally, a higher number of peaks correlated with a higher in-hospital mortality rate among NSTEMI patients. Conclusions: Most STEMI and NSTEMI patients displayed a monophasic kinetic pattern of hs-cTnI peak levels. However, a greater number of hs-cTnI peaks was linked to a higher incidence of clinical complications, lower ejection fraction, and increased mortality. Full article
(This article belongs to the Special Issue Diagnosis and Management in Cardiac Intensive Care Medicine)
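To make the peak-counting idea concrete, the sketch below applies SciPy's `find_peaks` to an invented series of serial hs-cTnI measurements and reports how many peaks it contains. The values, sampling times, and prominence threshold are illustrative only and do not reflect the study's definition of a peak.

```python
# Hypothetical sketch: counting hs-cTnI peaks in one patient's serial measurements.
import numpy as np
from scipy.signal import find_peaks

hours = np.array([0, 12, 24, 36, 48, 60, 72, 84, 96])
hs_ctni = np.array([900, 45000, 78000, 52000, 30000, 41000, 25000, 12000, 6000])  # ng/L (invented)

peaks, props = find_peaks(hs_ctni, prominence=5000)      # require a clearly separated local maximum
print(f"number of peaks: {len(peaks)} at hours {hours[peaks].tolist()}")
print(f"peak values (ng/L): {hs_ctni[peaks].tolist()}")
```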

22 pages, 3553 KB  
Article
An Extended Epistemic Framework Beyond Probability for Quantum Information Processing with Applications in Security, Artificial Intelligence, and Financial Computing
by Gerardo Iovane
Entropy 2025, 27(9), 977; https://doi.org/10.3390/e27090977 - 18 Sep 2025
Viewed by 199
Abstract
In this work, we propose a novel quantum-informed epistemic framework that extends the classical notion of probability by integrating plausibility, credibility, and possibility as distinct yet complementary measures of uncertainty. This enriched quadruple (P, Pl, Cr, Ps) enables a deeper characterization of quantum systems and decision-making processes under partial, noisy, or ambiguous information. Our formalism generalizes the Born rule within a multi-valued logic structure, linking Positive Operator-Valued Measures (POVMs) with data-driven plausibility estimators, agent-based credibility priors, and fuzzy-theoretic possibility functions. We develop a hybrid classical–quantum inference engine that computes a vectorial aggregation of the quadruples, enhancing robustness and semantic expressivity in contexts where classical probability fails to capture non-Kolmogorovian phenomena such as entanglement, contextuality, or decoherence. The approach is validated through three real-world application domains (quantum cybersecurity, quantum AI, and financial computing), where the proposed model outperforms standard probabilistic reasoning in terms of accuracy, resilience to noise, interpretability, and decision stability. Comparative analysis against QBism, Dempster–Shafer, and fuzzy quantum logic further demonstrates the uniqueness of the architecture in both operational semantics and practical outcomes. This contribution lays the groundwork for a new theory of epistemic quantum computing capable of modelling and acting under uncertainty beyond traditional paradigms. Full article
(This article belongs to the Special Issue Probability Theory and Quantum Information)
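Purely as a toy illustration of handling the (P, Pl, Cr, Ps) quadruple as a vector-valued uncertainty object, the sketch below defines a quadruple type and a weighted component-wise aggregation over several evidence sources. The aggregation rule is an invented placeholder; the paper's hybrid classical–quantum inference engine and its actual combination rule are not reproduced here.

```python
# Illustrative toy: representing and aggregating (P, Pl, Cr, Ps) quadruples.
from dataclasses import dataclass

@dataclass
class Quadruple:
    probability: float   # P: Born-rule / frequency-based
    plausibility: float  # Pl: data-driven plausibility estimate
    credibility: float   # Cr: agent-based prior belief
    possibility: float   # Ps: fuzzy-theoretic feasibility

def aggregate(quads, weights):
    """Weighted component-wise mean of (P, Pl, Cr, Ps) quadruples (placeholder rule)."""
    total = sum(weights)
    return Quadruple(
        sum(w * q.probability for q, w in zip(quads, weights)) / total,
        sum(w * q.plausibility for q, w in zip(quads, weights)) / total,
        sum(w * q.credibility for q, w in zip(quads, weights)) / total,
        sum(w * q.possibility for q, w in zip(quads, weights)) / total,
    )

sources = [Quadruple(0.6, 0.7, 0.5, 0.8), Quadruple(0.4, 0.6, 0.7, 0.9)]
print(aggregate(sources, weights=[0.7, 0.3]))
```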
